Compare commits

..

192 Commits

Author SHA1 Message Date
Mikayla Maki
5b74ddf020 WIP: Remove provider name 2025-11-18 23:56:05 -08:00
Mikayla Maki
54f20ae5d5 Remove provider name 2025-11-18 22:13:04 -08:00
Mikayla Maki
811efa45d0 Disambiguate similar completion events 2025-11-18 21:04:35 -08:00
Mikayla Maki
74501e0936 Simplify LanguageModelCompletionEvent enum, remove
`StatusUpdate::Failed` state by turning it into an error early, and then
flatten `StatusUpdate` into LanguageModelCompletionEvent
2025-11-18 20:56:11 -08:00
Michael Benfield
2ad8bd00ce Simplifying errors
Co-authored-by: Mikayla <mikayla@zed.dev>
2025-11-18 20:45:22 -08:00
Martin Bergo
7c0663b825 google_ai: Add gemini-3-pro-preview model (#43015)
Release Notes:

- Added the newly released Gemini 3 Pro Preview Model


https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/3-pro
2025-11-18 23:51:32 +00:00
Lukas Wirth
94a43dc73a extension_host: Fix IS_WASM_THREAD being set for wrong threads (#43005)
https://github.com/zed-industries/zed/pull/40883 implemented this
incorrectly. It was marking a random background thread as a wasm thread
(whatever thread picked up the wasm epoch timer background task),
instead of marking the threads that actually run the wasm extension.

This has two implications:
1. It didn't prevent extension panics from tearing down the process as planned
2. Worse, it actually made us hide legit panics in Sentry for one of our
background workers.

Now 2 still technically applies for all tokio threads after this, but we
basically only use these for wasm extensions in the main zed binary.

Release Notes:

- Fixed extension panics crashing Zed on Linux
2025-11-18 23:49:22 +00:00
Ben Kunkle
e8e0707256 zeta2: Improve queries parsing (#43012)

Release Notes:

- N/A

---------

Co-authored-by: Agus <agus@zed.dev>
Co-authored-by: Max <max@zed.dev>
2025-11-18 23:46:29 +00:00
Tom Zaspel
d7c340c739 docs: Add documentation for OpenTofu support (#42448)

Release Notes:

- N/A

Signed-off-by: Tom Zaspel <40226087+tzabbi@users.noreply.github.com>
2025-11-18 18:40:09 -05:00
Julia Ryan
16b24e892e Increase error verbosity (#43013)
Closes #42288

This will actually print the parsing error that prevented the vscode
settings file from being loaded which should make it easier for users to
self help when they have an invalid config.

Release Notes:

- N/A
2025-11-18 23:25:12 +00:00
Barani S
917148c5ce gpui: Use DWM API for backdrop effects and add Mica/Mica Alt support (#41842)
This PR updates window background rendering to use the **official DWM
backdrop API** (`DwmSetWindowAttribute`) instead of the legacy
`SetWindowCompositionAttribute`.
It also adds **Mica** and **Mica Alt** options to
`WindowBackgroundAppearance` for native Windows 11 effects.

### Motivation

Enables modern, stable, and GPU-accelerated backdrops consistent with
Windows 11’s Fluent Design.
Removes reliance on undocumented APIs while maintaining backward
compatibility with older Windows versions.

### Changes

* Added `MicaBackdrop` and `MicaAltBackdrop` variants.
* Switched to DWM API for applying backdrop effects.
* Verified fallback behavior on Windows 10.
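
The mapping from `WindowBackgroundAppearance` to DWM backdrop types can be sketched as follows (a minimal illustration, not Zed's actual code; the numeric values mirror the Win32 `DWM_SYSTEMBACKDROP_TYPE` enum passed to `DwmSetWindowAttribute` via `DWMWA_SYSTEMBACKDROP_TYPE`):

```rust
// Illustrative sketch: which DWM_SYSTEMBACKDROP_TYPE value each
// appearance would request. The enum variants follow this PR; the
// surrounding plumbing (HWND, DwmSetWindowAttribute call) is omitted.
#[derive(Clone, Copy)]
enum WindowBackgroundAppearance {
    Opaque,
    Blurred,
    MicaBackdrop,
    MicaAltBackdrop,
}

fn backdrop_type(appearance: WindowBackgroundAppearance) -> u32 {
    match appearance {
        WindowBackgroundAppearance::Opaque => 1, // DWMSBT_NONE
        WindowBackgroundAppearance::Blurred => 3, // DWMSBT_TRANSIENTWINDOW (Acrylic)
        WindowBackgroundAppearance::MicaBackdrop => 2, // DWMSBT_MAINWINDOW (Mica)
        WindowBackgroundAppearance::MicaAltBackdrop => 4, // DWMSBT_TABBEDWINDOW (Mica Alt)
    }
}
```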

### Release Notes:

- Added `WindowBackgroundAppearance::MicaBackdrop` and
`WindowBackgroundAppearance::MicaAltBackdrop` for Windows 11 Mica and
Mica Alt window backdrops.

### Screenshots

- `WindowBackgroundAppearance::Blurred`
<img width="553" height="354" alt="image"
src="https://github.com/user-attachments/assets/57c9c25d-9412-4141-94b5-00000cc0b1ec"
/>

- `WindowBackgroundAppearance::MicaBackdrop`
<img width="553" height="354" alt="image"
src="https://github.com/user-attachments/assets/019f541c-3335-4c9e-b026-71f5a1786534"
/>

- `WindowBackgroundAppearance::MicaAltBackdrop`
<img width="553" height="354" alt="image"
src="https://github.com/user-attachments/assets/5128d600-c94d-4c89-b81a-8b842fe1337a"
/>

---------

Co-authored-by: John Tur <john-tur@outlook.com>
2025-11-18 18:20:32 -05:00
Piotr Osiewicz
951132fc13 chore: Fix build graph - again (#42999)
11.3s -> 10.0s for silly stuff like extracting actions from crates.
The project panel still depends on git_ui, though.

Release Notes:

- N/A
2025-11-18 19:20:34 +01:00
Ben Kunkle
bf0dd4057c zeta2: Make new_text/old_text parsing more robust (#42997)

The model often uses the wrong closing tag, or puts spaces around the
closing tag name. This PR makes it so that opening tags are treated as
authoritative and any closing tag with the name `new_text`, `old_text`, or
`edits` is accepted based on depth. This has the additional benefit that
the parsing is more robust with contents that contain `new_text`,
`old_text`, or `edits`. I.e. the following test passes

```rust
    #[test]
    fn test_extract_xml_edits_with_conflicting_content() {
        let input = indoc! {r#"
            <edits path="component.tsx">
            <old_text>
            <new_text></new_text>
            </old_text>
            <new_text>
            <old_text></old_text>
            </new_text>
            </edits>
        "#};

        let result = extract_xml_replacements(input).unwrap();
        assert_eq!(result.file_path, "component.tsx");
        assert_eq!(result.replacements.len(), 1);
        assert_eq!(result.replacements[0].0, "<new_text></new_text>");
        assert_eq!(result.replacements[0].1, "<old_text></old_text>");
    }
```

Release Notes:

- N/A
2025-11-18 12:36:37 -05:00
Conrad Irwin
3c4ca3f372 Remove settings::Maybe (#42933)
It's unclear how this would ever be useful

cc @probably-neb

Release Notes:

- N/A
2025-11-18 10:23:16 -07:00
Artur Shirokov
03132921c7 Add HTTP transport support for MCP servers (#39021)
### What this solves

This PR adds support for HTTP and SSE (Server-Sent Events) transports to
Zed's context server implementation, enabling communication with remote
MCP servers. Currently, Zed only supports local MCP servers via stdio
transport. This limitation prevents users from:

- Connecting to cloud-hosted MCP servers
- Using MCP servers running in containers or on remote machines
- Leveraging MCP servers that are designed to work over HTTP/SSE

### Why it's important

The MCP (Model Context Protocol) specification includes HTTP/SSE as
standard transport options, and many MCP server implementations are
being built with these transports in mind. Without this support, Zed
users are limited to a subset of the MCP ecosystem. This is particularly
important for:

- Enterprise users who need to connect to centralized MCP services
- Developers working with MCP servers that require network isolation
- Users wanting to leverage cloud-based context providers (e.g.,
knowledge bases, API integrations)

### Implementation approach

The implementation follows Zed's existing architectural patterns:

- **Transports**: Added `HttpTransport` and `SseTransport` to the
`context_server` crate, built on top of the existing `http_client` crate
- **Async handling**: Uses `gpui::spawn` for network operations instead
of introducing a new Tokio runtime
- **Settings**: Extended `ContextServerSettings` enum with a `Remote`
variant to support URL-based configuration
- **UI**: Updated the agent configuration UI with an "Add Remote Server"
option and dedicated modal for remote server management

### Changes included

- [x] HTTP transport implementation with request/response handling
- [x] SSE transport for server-sent events streaming
- [x] `build_transport` function to construct appropriate transport
based on URL scheme
- [x] Settings system updates to support remote server configuration
- [x] UI updates for adding/editing remote servers
- [x] Unit tests using `FakeHttpClient` for both transports
- [x] Integration tests (WIP)
- [x] Documentation updates (WIP)
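
The scheme-based dispatch in `build_transport` might look roughly like this (an illustrative sketch; the real function returns concrete transport types built on the `http_client` crate, and the SSE-detection rule here is a hypothetical stand-in):

```rust
// Sketch of choosing a transport from the configured URL, as described
// above. `TransportKind` and the `/sse` path rule are illustrative.
#[derive(Debug, PartialEq)]
enum TransportKind {
    Stdio,
    Http,
    Sse,
}

fn transport_kind(url: Option<&str>) -> Result<TransportKind, String> {
    match url {
        // No URL configured: fall back to the existing stdio transport.
        None => Ok(TransportKind::Stdio),
        Some(u) if u.starts_with("http://") || u.starts_with("https://") => {
            // Hypothetical rule: treat endpoints ending in /sse as SSE.
            if u.ends_with("/sse") {
                Ok(TransportKind::Sse)
            } else {
                Ok(TransportKind::Http)
            }
        }
        Some(u) => Err(format!("unsupported URL scheme: {u}")),
    }
}
```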

### Testing

- Unit tests for both `HttpTransport` and `SseTransport` using mocked
HTTP client
- Manual testing with example MCP servers over HTTP/SSE
- Settings validation and UI interaction testing

### Screenshots/Recordings

[TODO: Add screenshots of the new "Add Remote Server" UI and
configuration modal]

### Example configuration

Users can now configure remote MCP servers in their `settings.json`:

```json
{
  "context_servers": {
    "my-remote-server": {
      "enabled": true,
      "url": "http://localhost:3000/mcp"
    }
  }
}
```

### AI assistance disclosure

I used AI to help with:

- Understanding the MCP protocol specification and how HTTP/SSE
transports should work
- Reviewing Zed's existing patterns for async operations and suggesting
consistent approaches
- Generating boilerplate for test cases
- Debugging SSE streaming issues

All code has been manually reviewed, tested, and adapted to fit Zed's
architecture. The core logic, architectural decisions, and integration
with Zed's systems were done with human understanding of the codebase.
AI was primarily used as a reference tool and for getting unstuck on
specific technical issues.

Release notes:
* You can now configure MCP Servers that connect over HTTP in your
settings file. These are not yet available in the extensions API.
  ```
  {
    "context_servers": {
      "my-remote-server": {
        "enabled": true,
        "url": "http://localhost:3000/mcp"
      }
    }
  }
  ```

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-11-18 16:39:08 +00:00
Richard Feldman
c0fadae881 Thought signatures (#42915)
Implement Gemini API's [thought
signatures](https://ai.google.dev/gemini-api/docs/thinking#signatures)

Release Notes:

- Added thought signatures for Gemini tool calls
2025-11-18 10:41:19 -05:00
Agus Zubiaga
1c66c3991d Enable sweep flag for staff (#42987)
Release Notes:

- N/A
2025-11-18 15:39:27 +00:00
Agus Zubiaga
7e591a7e9a Fix sweep icon spacing (#42986)
Release Notes:

- N/A
2025-11-18 15:33:03 +00:00
Danilo Leal
c44d93745a agent_ui: Improve the modal to add LLM providers (#42983)
Closes https://github.com/zed-industries/zed/issues/42807

This PR makes the modal to add LLM providers a bit better to interact
with:

1. Added a scrollbar
2. Made the inputs navigable with tab
3. Added some responsiveness to ensure it resizes on shorter windows


https://github.com/user-attachments/assets/758ea5f0-6bcc-4a2b-87ea-114982f37caf

Release Notes:

- agent: Improved the modal to add LLM providers by making it responsive
and keyboard navigable.
2025-11-18 12:28:14 -03:00
Lukas Wirth
b4e4e0d3ac remote: Fix up incorrect logs (#42979)
Release Notes:

- N/A
2025-11-18 15:14:52 +00:00
Lukas Wirth
097024d46f util: Use process spawn helpers in more places (#42976)
Release Notes:

- N/A
2025-11-18 14:31:39 +00:00
Ben Brandt
f1c2afdee0 Update codex docs to include configuration for third-party providers (#42973)
Release Notes:

- N/A
2025-11-18 13:50:59 +00:00
Jakub Konka
ea120dfe18 Revert "git: Remove JobStatus from PendingOp in favour of in-flight p… (#42970)
…runing (#42955)"

This reverts commit 696fdd8fed.

Release Notes:

- N/A
2025-11-18 13:30:40 +00:00
Lukas Wirth
d2988ffc77 vim: Fix snapshot out of bounds indexing (#42969)
Fixes ZED-38X

Release Notes:

- N/A
2025-11-18 13:02:40 +00:00
Engin Açıkgöz
f17d2c92b6 terminal_view: Fix terminal opening in root directory when editing single file worktree (#42953)
Fixes #42945

## Problem
When opening a single file via command line (e.g., `zed
~/Downloads/file.txt`), the terminal panel was opening in the root
directory (/) instead of the file's directory.

## Root Cause
The code only checked for active project directory, which returns None
when a single file is opened. Additionally, file worktrees weren't
handling parent directory lookup.

## Solution
Added fallback logic to use the first project directory when there's no
active entry, and made file worktrees return their parent directory
instead of None.
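
The fallback order described above can be sketched like so (illustrative names such as `terminal_cwd`, not Zed's actual API):

```rust
use std::path::{Path, PathBuf};

// Sketch of the fallback: prefer the active entry's directory; otherwise
// use the first worktree root, and for single-file worktrees use the
// file's parent directory instead of `/`.
fn terminal_cwd(active_dir: Option<&Path>, worktree_roots: &[PathBuf]) -> Option<PathBuf> {
    if let Some(dir) = active_dir {
        return Some(dir.to_path_buf());
    }
    worktree_roots.first().map(|root| {
        if root.is_file() {
            // Single-file worktree: fall back to the parent directory.
            root.parent().unwrap_or(Path::new("/")).to_path_buf()
        } else {
            root.clone()
        }
    })
}
```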

## Testing
- All existing tests pass
- Added test coverage for file worktree scenarios
- Manually tested with `zed ~/Downloads/file.txt` - terminal now opens
in correct directory

This improves the user experience for users who frequently open single
files from the command line.

## Release Notes

- Fixed terminal opening in root directory when editing single files
from the command line
2025-11-18 13:37:48 +01:00
Antonio Scandurra
c1d9dc369c Try reducing flakiness of fs-event tests by bumping timeout to 4s on CI (#42960)
Release Notes:

- N/A
2025-11-18 11:00:02 +00:00
Jakub Konka
696fdd8fed git: Remove JobStatus from PendingOp in favour of in-flight pruning (#42955)
The idea is that we only store running (`!self.finished`) or finished
(`self.finished`) pending ops, while everything else (skipped, errored)
jobs are pruned out immediately. We don't really need them in the grand
scheme of things anyway.
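
In sketch form (illustrative types, not the actual `PendingOp` definition):

```rust
// Illustrative sketch of the pruning described above: keep only running
// or finished ops; skipped and errored ops are dropped immediately.
#[derive(Clone, Debug, PartialEq)]
enum OpState {
    Running,
    Finished,
    Skipped,
    Errored,
}

fn prune(ops: &mut Vec<OpState>) {
    ops.retain(|op| matches!(op, OpState::Running | OpState::Finished));
}
```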

Release Notes:

- N/A
2025-11-18 10:22:34 +00:00
Lena
980f8bff2a Add a github issue label to shoo the stalebot away (#42950)
Labeling an issue with "never stale" will keep the stalebot away; the
bot can get annoying in some situations otherwise.

Release Notes:

- N/A
2025-11-18 10:15:09 +01:00
Kirill Bulatov
2a3bcbfe0f Properly check chunk version on lsp store update (#42951)
Release Notes:

- N/A

Co-authored-by: Lukas Wirth <lukas@zed.dev>
2025-11-18 09:13:32 +00:00
aleanon
5225a84aff For and await highlighting rust (#42924)
Closes #42922

Release Notes:

- Fixed: the 'for' keyword in Rust is now highlighted as keyword.control
only in for loops.
- Fixed: the 'await' keyword in Rust is now highlighted as keyword.control
2025-11-18 09:11:36 +01:00
Max Brunsfeld
5c70f8391f Fix panic when using sweep AI without token env var (#42940)
Release Notes:

- N/A
2025-11-17 22:23:14 -08:00
Danilo Leal
10efbd5eb4 agent_ui: Show the "new thread" keybinding for the currently active agent (#42939)
This PR's goal is to improve discoverability of how Zed "remembers" the
currently selected agent when hitting `cmd-n` (or `ctrl-n`). Hitting
that binding starts a new thread with whatever agent is currently
selected.

In the example below, I am in a Claude Code thread and if I hit `cmd-n`,
a new, fresh CC thread will be started:

<img width="500" height="822" alt="Screenshot 2025-11-18 at 1  13@2x"
src="https://github.com/user-attachments/assets/d3acd1aa-459d-4078-9b62-bbac3b8c1600"
/>


Release Notes:

- agent: Improved discoverability of the `cmd-n` keybinding to create a
new thread with the currently selected agent.
2025-11-18 01:53:37 -03:00
Agus Zubiaga
0386f240a9 Add experimental Sweep edit prediction provider (#42927)
Only for staff

Release Notes:

- N/A

---------

Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-11-18 01:00:26 +00:00
Conrad Irwin
a39ba03bcc Use metrics-id for sentry user id when we have it (#42931)
This should make it easier to correlate Sentry reports with user reports
and
github issues (for users who have diagnostics enabled)

Release Notes:

- N/A
2025-11-17 23:44:59 +00:00
Lukas Wirth
2c7bcfcb7b multi_buffer: Work around another panic bug in path_key (#42920)
Fixes ZED-346 for now until I find the time to dig into this bug
properly

Release Notes:

- Fixed a panic in the diagnostics pane
2025-11-17 22:38:48 +00:00
Lukas Wirth
6bea23e990 text: Temporarily remove assert_char_boundary panics (#42919)
As discussed in the first responders meeting. We have collected a lot of
backtraces from these, but it's not quite clear yet what causes this.
Removing these should ideally make things a bit more stable, even if we
may run into panics later on when the faulty anchor is still used.

Release Notes:

- N/A
2025-11-17 22:20:45 +00:00
Julia Ryan
98da1ea169 Fix remote extension syncing (#42918)
Closes #40906
Closes #39729

SFTP uploads weren't quoting the install directory, which was causing
extension syncing to fail. We were also only running `install_extension`
once per remote connection instead of once per project (thx @feeiyu for
pointing this out), so extensions weren't being loaded in subsequently
opened remote projects.

Release Notes:

- N/A

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-11-17 16:13:58 -06:00
Danilo Leal
98a83b47e6 agent_ui: Make input fields in Bedrock settings keyboard navigable (#42916)
Closes https://github.com/zed-industries/zed/issues/36587

This PR enables jumping from one input to the other, in the Bedrock
settings section, with tab.

Release Notes:

- N/A
2025-11-17 19:13:15 -03:00
Danilo Leal
5f356d04ff agent_ui: Fix model name label truncation (#42921)
Closes https://github.com/zed-industries/zed/issues/32739

Release Notes:

- agent: Fixed an issue where the label for model names wouldn't use all
the available space in the model picker.
2025-11-17 19:12:26 -03:00
Marshall Bowers
73d3f9611e collab: Add external_id column to billing_customers table (#42923)
This PR adds an `external_id` column to the `billing_customers` table.

Release Notes:

- N/A
2025-11-17 22:12:00 +00:00
Finn Evers
d9cfc2c883 Fix formatting in various files (#42917)
This fixes various issues where rustfmt failed to format code due to
overly long strings, most of which I stumbled across over the last week,
plus some additional ones I searched for whilst fixing the others.

Release Notes:

- N/A
2025-11-17 21:48:09 +00:00
Dino
ee420d530e vim: Change approach to fixing vim's temporary mode bug (#42894)
The `Vim.exit_temporary_normal` method had been updated
(https://github.com/zed-industries/zed/pull/42742) to expect an
`Option<&Motion>` that would then be used to determine whether to move
the cursor right in case the motion was `Some(EndOfLine { .. })`.
Unfortunately this meant that all callers now had to provide this
argument, even if just `None`.

After merging those changes I remembered that we could probably play
around with `clip_at_line_ends`, so this commit removes those initial
changes in favor of updating the `vim::normal::Vim.move_cursor` method
so that, if vim is in temporary mode and `EndOfLine` is used, it
disables clipping at line ends so that the newline character can be
selected.

Closes [#42278](https://github.com/zed-industries/zed/issues/42278)

Release Notes:

- N/A
2025-11-17 21:34:37 +00:00
Miguel Raz Guzmán Macedo
d801d0950e Add @miguelraz to reviewers and support sections (#42904)
Release Notes:

- N/A
2025-11-17 15:22:34 -06:00
Lukas Wirth
3f25d36b3c agent_ui: Fix text pasting no longer working (#42914)
Regressed in https://github.com/zed-industries/zed/pull/42908
Release Notes:

- N/A
2025-11-17 21:04:50 +00:00
Joseph T. Lyons
f015368586 Update top-ranking issues script (#42911)
- Added Windows category
- Removed unused import
- Fixed a type error reported by `ty`

Release Notes:

- N/A
2025-11-17 15:20:22 -05:00
Ben Kunkle
4bf3b9d62e zeta2: Output bucketed_analysis.md (#42890)

Makes it so that a file named `bucketed_analysis.md` is written to the
runs directory after an eval is run with more than one repetition. This
file buckets the predictions made by the model by comparing the edits,
making it much easier to see how often different failure modes were
encountered.

Release Notes:

- N/A
2025-11-17 15:17:39 -05:00
Lukas Wirth
599a217ea5 workspace: Fix logging of errors in prompt_err (#42908)
Release Notes:

- N/A
2025-11-17 20:39:50 +01:00
ozzy
b0a7defd09 Fix tracking of file renames in the git panel (#42352)
Closes #30549

Release Notes:

- Fixed: Git renames now properly show as renamed files in the git panel
instead of appearing as deleted + untracked files
<img width="351" height="132" alt="Screenshot 2025-11-10 at 17 39 44"
src="https://github.com/user-attachments/assets/80e9c286-1abd-4498-a7d5-bd21633e6597"
/>
<img width="500" height="95" alt="Screenshot 2025-11-10 at 17 39 55"
src="https://github.com/user-attachments/assets/e4c59796-df3a-4d12-96f4-e6706b13a32f"
/>
2025-11-17 13:25:51 -06:00
Davis Vaughan
57e3bcfcf8 Revise R documentation - about Air in particular (#42755)
Returning the favor from @rgbkrk in
https://github.com/posit-dev/air/pull/445

I noticed the R docs around Air are a bit incorrect / out of date. I'll
make a few more comments inline. Feel free to take over for any other
edits.

Release Notes:

- Improved R language support documentation
2025-11-17 11:53:42 -07:00
Oleksiy Syvokon
b2f561165f zeta2: Support qwen3-minimal prompt format (#42902)
This prompt is for a fine-tuned model. It has the following changes,
compared to `minimal`:
- No instructions at all, except for one sentence at the beginning of
the prompt.
- Output is a simplified unified diff -- hunk headers have no line
counts (e.g., `@@ -20 +20 @@`)
- Qwen's FIM tokens are used where possible (`<|file_sep|>`,
`<|fim_prefix|>`, `<|fim_suffix|>`, etc.)

To evaluate this model:
```
ZED_ZETA2_MODEL=zeta2-exp [usual zeta-cli eval params ...]  --prompt-format minimal-qwen
```

This will point to the most recent Baseten deployment of zeta2-exp
(which may change in the future, so the prompt-format may get out of
sync).

Release Notes:

- N/A
2025-11-17 20:36:05 +02:00
localcc
fd1494c31a Fix remote server completions not being queried from all LSP servers (#42723)
Closes #41294

Release Notes:

- Fixed remote LSPs not being queried
2025-11-17 18:07:49 +00:00
Danilo Leal
faa1136651 agent_ui: Don't create a new terminal when hitting the new thread binding from the terminal (#42898)
Closes https://github.com/zed-industries/zed/issues/32701

Release Notes:

- agent: Fixed a bug where hitting the `NewThread` keybinding when
focused inside a terminal within the agent panel would create a new
terminal tab instead of a new thread.
2025-11-17 15:05:38 -03:00
Conrad Irwin
6bf5e92a25 Revert "Keep selection in SwitchToHelixNormalMode (#41583)" (#42892)

Release Notes:

- Fixed vim "go to definition" making a selection
2025-11-17 11:01:34 -07:00
Lukas Wirth
46ad6c0bbb ci: Remove remaining nextest compiles (#42630)
Follow up to https://github.com/zed-industries/zed/pull/42556

Release Notes:

- N/A
2025-11-17 17:59:48 +00:00
Lukas Wirth
671500de1b agent_ui: Fix images copied from win explorer not being pastable (#42858)
Closes https://github.com/zed-industries/zed/issues/41505

A bit ad hoc, but it gets the job done for now

Release Notes:

- Fixed images copied from windows explorer not being pastable in the
agent panel
2025-11-17 17:58:22 +00:00
Kirill Bulatov
0519c645fb Deduplicate inlays when getting those from multiple language servers (#42899)
Part of https://github.com/zed-industries/zed/issues/42671

Release Notes:

- Deduplicate inlay hints from different language servers
2025-11-17 17:53:05 +00:00
Richard Feldman
23872b0523 Fix stale edits (#42895)
Closes #34069

<img width="532" height="880" alt="Screenshot 2025-11-17 at 11 14 19 AM"
src="https://github.com/user-attachments/assets/abc50c32-d54d-4310-a6e6-83008db7ed81"
/>

<img width="525" height="863" alt="Screenshot 2025-11-17 at 12 22 50 PM"
src="https://github.com/user-attachments/assets/15a69792-c2c7-4727-add9-c1f9baa5e665"
/>

Release Notes:

- Agent file edits now error if the file has changed since last read
(allowing the agent to read changes and avoid overwriting changes made
outside Zed)
2025-11-17 12:23:18 -05:00
Richard Feldman
4b050b651a Support Agent Servers on remoting (#42683)
<img width="348" height="359" alt="Screenshot 2025-11-13 at 6 53 39 PM"
src="https://github.com/user-attachments/assets/6fe75796-8ceb-4f98-9d35-005c90417fd4"
/>

Also added support for per-target env vars to Agent Server Extensions

Closes https://github.com/zed-industries/zed/issues/42291

Release Notes:

- Per-target env vars are now supported on Agent Server Extensions
- Agent Server Extensions are now available when doing SSH remoting

---------

Co-authored-by: Lukas Wirth <me@lukaswirth.dev>
Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
2025-11-17 10:48:14 -05:00
Danilo Leal
bb46bc167a settings_ui: Add "Edit in settings.json" button to subpage header (#42886)
Closes https://github.com/zed-industries/zed/issues/42094

This will make it consistent with the regular/main page. Also ended up
fixing a bug along the way where this button wouldn't work for subpage
items.

Release Notes:

- settings ui: Fixed a bug where the "Edit in settings.json" button
wouldn't work for subpages like the Language pages.
2025-11-17 11:58:43 -03:00
Oleksiy Syvokon
b274f80dd9 zeta2: Print average length of prompts and outputs (#42885)
Release Notes:

- N/A
2025-11-17 16:56:58 +02:00
Danilo Leal
d77ab99ab1 keymap_editor: Make "toggle exact match mode" the default for binding search (#42883)
I think having the "exact mode" turned on by default is usually what
users will expect when searching for a specific keybinding. When it's
turned off, it's very odd to search for a super common binding like
"command-enter" and get no results. That happens because without that
mode we also try to match subsequent keystrokes, which I'm betting is an
edge case. Hopefully, this change will make the keymap editor feel like
it works better.

I'm also adding the toggle icon button inside the keystroke input for
consistency with the project search input.

Making this change very inspired by [Sam Rose's
feedback](https://bsky.app/profile/samwho.dev/post/3m5juszqyd22w).

Release Notes:

- keymap editor: Made the "toggle exact match mode" the default
keystroke search mode so that whatever you search for matches exactly to
results.
2025-11-17 11:43:15 -03:00
Finn Evers
97792f7fb9 Prefer loading extension.toml before extension.json (#42884)
Closes #42406

The issue for the fish extension is that an `extension.json` is still
present next to an `extension.toml`, although the former is deprecated.

We should prefer the `extension.toml` if it is present and only fall
back to the `extension.json` if needed. This PR tackles this.
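
The preference order amounts to something like this (a hypothetical helper, not the actual extension-host code):

```rust
use std::path::{Path, PathBuf};

// Hypothetical helper illustrating the preference described above:
// `extension.toml` wins whenever it exists; the deprecated
// `extension.json` is only a fallback.
fn manifest_path(dir: &Path) -> Option<PathBuf> {
    let toml = dir.join("extension.toml");
    if toml.exists() {
        return Some(toml);
    }
    let json = dir.join("extension.json");
    json.exists().then_some(json)
}
```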

Release Notes:

- N/A
2025-11-17 14:29:50 +00:00
tidely
9bebf314e0 http_client: Remove unused HttpClient::type_name method (#42803)

Remove unused method `HttpClient::type_name`. Looking at the PR from a
year ago when it was added, it was never actually used for anything and
seems like a prototyping artifact.

Other misc changes for the `http_client` crate include:

- Use `derive_more::Deref` for `HttpClientWithUrl` (already used for
`HttpClientWithProxy`)
- Move `http_client::proxy()` higher up in the trait definition. (It was
in between methods that have default implementations)

Release Notes:

- N/A
2025-11-17 15:28:22 +01:00
Danilo Leal
4092e81ada keymap_editor: Adjust some items of the UI (#42876)
- Only showing the "Create" menu item in the right-click context menu
for actions that _do not_ contain a binding already assigned to them
- Only show the "Clear Input" icon button in the keystroke modal when
the input is focused/in recording mode
- Add a subtle hover style to the table rows just to make it easier to
navigate

Release Notes:

- N/A
2025-11-17 10:59:46 -03:00
Kirill Bulatov
e0b64773d9 Properly sanitize out inlay hints from remote hosts (#42878)
Part of https://github.com/zed-industries/zed/issues/42671

Release Notes:

- Fixed remote hosts causing duplicate hints to be displayed
2025-11-17 15:53:18 +02:00
Piotr Osiewicz
f1bebd79d1 zeta2: Add skip-prediction flag to eval CLI (#42872)
Release Notes:

- N/A
2025-11-17 13:37:51 +00:00
Lukas Wirth
a66a539a09 Reduce macro burden for rust-analyzer (#42871)
This enables optimizations for our own proc-macros as well as some heavy
hitters. Additionally, this gates `derive_inspector_reflection` to be
skipped for rust-analyzer, as it currently slows rust-analyzer down way
too much.

Release Notes:

- N/A
2025-11-17 12:31:00 +00:00
Lucas Parry
a2d3e3baf9 project_panel: Add sort mode (#40160)
Closes #4533 (partly at least)

Release Notes:

- Added `project_panel.sort_mode` option to control explorer file sort
(directories first, mixed, files first)

 ## Summary

Adds three sorting modes for the project panel to give users more
control over how files and directories are displayed:

- **`directories_first`** (default): Current behaviour - directories
grouped before files
- **`mixed`**: Files and directories sorted together alphabetically
- **`files_first`**: Files grouped before directories

 ## Motivation

Users coming from different editors and file managers have different
expectations for file sorting. Some prefer directories grouped at the
top (traditional), while others prefer the macOS Finder-style mixed
sorting where "Apple1/", "apple2.tsx" and "Apple3/" appear
alphabetically mixed together.
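
A comparator for the three modes could be sketched as follows (illustrative, not Zed's actual sort code; entries are `(name, is_dir)` pairs, compared case-insensitively so names like "Apple1/" and "apple2.tsx" interleave in `Mixed` mode):

```rust
use std::cmp::Ordering;

#[derive(Clone, Copy)]
enum SortMode {
    DirectoriesFirst,
    Mixed,
    FilesFirst,
}

// Sort key: first a group number from the mode, then the lowercased name.
fn compare(mode: SortMode, a: (&str, bool), b: (&str, bool)) -> Ordering {
    let group = |is_dir: bool| match mode {
        SortMode::DirectoriesFirst => !is_dir as u8, // dirs -> 0, files -> 1
        SortMode::FilesFirst => is_dir as u8,        // files -> 0, dirs -> 1
        SortMode::Mixed => 0,                        // no grouping
    };
    group(a.1)
        .cmp(&group(b.1))
        .then_with(|| a.0.to_lowercase().cmp(&b.0.to_lowercase()))
}
```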


 ### Screenshots

New sort options in settings:
<img width="515" height="160" alt="image"
src="https://github.com/user-attachments/assets/8f4e6668-6989-4881-a9bd-ed1f4f0beb40"
/>


Directories first | Mixed | Files first
-------------|-----|-----
<img width="328" height="888" alt="image"
src="https://github.com/user-attachments/assets/308e5c7a-6e6a-46ba-813d-6e268222925c"
/> | <img width="327" height="891" alt="image"
src="https://github.com/user-attachments/assets/8274d8ca-b60f-456e-be36-e35a3259483c"
/> | <img width="328" height="890" alt="image"
src="https://github.com/user-attachments/assets/3c3b1332-cf08-4eaf-9bed-527c00b41529"
/>


### Agent usage

Copilot-cli/claude-code/codex-cli helped out a lot. I'm not from a Rust
background, but really wanted this solved, and it gave me a chance to
play with some of the coding agents I'm not permitted to use for work
stuff.

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-17 17:52:46 +05:30
Serophots
175162af4f project_panel: Fix preview tabs disabling focusing files after just one click in project panel (#42836)
Closes #41484

With preview tabs disabled, when you click once on a file in the project
panel, rather than focusing that file, Zed incorrectly focuses the text
editor panel. This means that if you click a file to focus it and then
follow up with a keybinding like backspace to delete it, the file isn't
deleted because the backspace goes through to the text editor instead.

Incorrect behaviour seen here:


https://github.com/user-attachments/assets/8c2dea90-bd90-4507-8ba6-344be348f151



Release Notes:

- Fixed improper UI focus behaviour in the project panel when preview
tabs are disabled
2025-11-17 16:53:12 +05:30
Dino
cdcc068906 vim: Fix temporary mode exit on end of line (#42742)
When using the end of line motion ($) while in temporary mode, the
cursor would be placed in insert mode just before the last character
instead of after, just like in NeoVim.

This happens because `EndOfLine` kind of assumes that we're in `Normal`
mode and simply places the cursor in the last character instead of the
newline character.

This commit moves the cursor one position to the right when exiting
temporary mode and the motion used was `Motion::EndOfLine`

- Update `vim::normal::Vim.exit_temporary_normal` to accept an
`Option<&Motion>` argument, in case callers want this new logic to
be applied

Closes #42278 

Release Notes:

- Fixed temporary mode exit when using `$` to move to the end of the
line
2025-11-17 11:14:49 +00:00
Mayank Verma
86484aaded languages: Clean up invalid init calls after recent API changes (#42866)
Related to https://github.com/zed-industries/zed/pull/41670

Release Notes:

- Cleaned up invalid init calls after recent API changes in
https://github.com/zed-industries/zed/pull/42238
2025-11-17 10:29:29 +00:00
Mayank Verma
d32934a893 languages: Fix indentation for if/else statements in C/C++ without braces (#41670)
Closes #41179

Release Notes:

- Fixed indentation for if/else statements in C/C++ without braces
2025-11-17 10:05:54 +01:00
warrenjokinen
b463266fa1 Remove mention of Fireside Hacks (#42853)
Fireside Hack events are no longer being held.


Release Notes:

- N/A
2025-11-16 23:26:38 -05:00
Agus Zubiaga
b0525a26a6 Report automatically discarded zeta predictions (#42761)
We weren't reporting predictions that were generated but never made it
out of the provider, such as predictions that failed to interpolate, and
those that are cancelled because another request completes before it.

Release Notes:

- N/A

---------

Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-11-16 11:51:13 -08:00
Smit Barmase
1683052e6c editor: Fix MoveToEnclosingBracket and unmatched forward/backward Vim motions in Markdown code blocks (#42813)
We now correctly use bracket ranges from the deepest syntax layer when
finding enclosing brackets.

Release Notes:

- Fixed an issue where `MoveToEnclosingBracket` didn’t work correctly
inside Markdown code blocks.
- Fixed an issue where unmatched forward/backward Vim motions didn’t
work correctly inside Markdown code blocks.

---------

Co-authored-by: MuskanPaliwal <muskan10112002@gmail.com>
2025-11-15 23:57:49 +05:30
Alvaro Parker
07cc87b288 Fix wild install script (#42747)
Use
[`command`](https://www.gnu.org/software/bash/manual/bash.html#index-command)
instead of `which` to check if `wild` is installed.

Using `which` will result in an error being printed to stdout: 

```bash
./script/install-wild
which: invalid option -- 's'
/usr/local/bin/wild
Warning: existing wild 0.6.0 found at /usr/local/bin/wild. Skipping installation.
```
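The fixed pattern can be sketched as follows (a minimal illustration, with `sh` standing in for `wild` so the snippet runs anywhere):

```shell
# Portable existence check: `command -v` is a POSIX shell builtin, so it
# never invokes an external `which` binary with incompatible options.
# `sh` is used here instead of `wild` purely so the snippet is runnable.
if tool_path="$(command -v sh)"; then
  echo "found at $tool_path"
else
  echo "not installed"
fi
```

Because `command -v` is specified by POSIX, its behavior is consistent across shells, unlike the various external `which` implementations.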

Release Notes:

- N/A
2025-11-15 15:15:37 +01:00
Danilo Leal
1277f328c4 docs: Improve custom keybinding for external agent example (#42776)
Follow up to https://github.com/zed-industries/zed/pull/42772 adding
some comments to improve clarity.

Release Notes:

- N/A
2025-11-14 23:08:46 +00:00
Danilo Leal
b3097cfc8a docs: Add section about keybinding for external agent threads (#42772)
Release Notes:

- N/A
2025-11-14 19:54:52 -03:00
Ivan Pasquariello
305206fd48 Make drag and double click enabled on the whole title bar on macOS (#41839)
Closes #4947

Took inspiration from @tasuren's implementation, with the addition of
double-click being enabled on the whole title bar to maximize/restore
the window.

I was not able to test the application on Linux; there is no need to
test on Windows since the feature is provided by the OS.

Release Notes:

- Fixed title bar not fully draggable on macOS
- Fixed not being able to maximize/restore the window by double-clicking
the whole title bar on macOS
2025-11-14 22:46:35 +00:00
Ben Kunkle
c387203ac8 zeta2: Prediction prompt engineering (#42758)
Release Notes:

- N/A

---------

Co-authored-by: Agus Zubiaga <agus@zed.dev>
Co-authored-by: Michael Sloan <mgsloan@gmail.com>
2025-11-14 16:50:55 -05:00
Danilo Leal
a260ba6428 agent_ui: Simplify labels in new thread menu (#42746)
Drop the "new", it's simpler! 😆 

| Before | After |
|--------|--------|
| <img width="800" height="932" alt="Screenshot 2025-11-14 at 2  48@2x"
src="https://github.com/user-attachments/assets/efa67d57-9b5c-4eef-8dc7-f36c8e6a4a90"
/> | <img width="800" height="772" alt="Screenshot 2025-11-14 at 2 
47@2x"
src="https://github.com/user-attachments/assets/042d2a0b-24b4-4ad5-8411-82e0eafb993f"
/> |




Release Notes:

- N/A
2025-11-14 18:02:09 +00:00
Lukas Wirth
a8e0de37ac gpui: Fix crashes when losing devices while resizing on windows (#42740)
Fixes ZED-1HC

Release Notes:

- Fixed Zed panicking when moving Zed windows across screens attached
to different GPU devices on Windows
Danilo Leal
a1a599dac5 collab_ui: Fix search matching in the panel (#42743)
Release Notes:

- collab: Fixed a regression where search matches wouldn't expand the
parent channel if that happened to be collapsed.
2025-11-14 14:45:58 -03:00
Smit Barmase
524b97d729 project_panel: Fix autoscroll and filename editor focus race condition (#42739)
Closes https://github.com/zed-industries/zed/issues/40867

Since the recent changes in
[https://github.com/zed-industries/zed/pull/38881](https://github.com/zed-industries/zed/pull/38881),
the filename editor is sometimes not focused after duplicating a file or
creating a new one, and similarly, autoscroll sometimes didn’t work. It
turns out that multiple calls to `update_visible_entries_task` cancel
the existing task, which might contain information about whether we need
to focus the filename editor and autoscroll after the task ends. To fix
this, we now carry that information forward to the next task that
overwrites it, so that when the latest task ends, we can use that
information to do the right thing.

Release Notes:

- Fixed an issue in the Project Panel where duplicating or creating an
entry sometimes didn’t focus the rename editing field.
2025-11-14 21:56:48 +05:30
Ben Kunkle
8772727034 zeta2: Improve zeta old text matching (#42580)
This PR improves Zeta2's matching of `old_text`/`new_text` pairs, using
similar code to what we use in the edit agent. For right now, we've
duplicated the code, as opposed to trying to generalize it.

Release Notes:

- N/A

---------

Co-authored-by: Max <max@zed.dev>
Co-authored-by: Michael <michael@zed.dev>
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Agus <agus@zed.dev>
2025-11-14 11:18:16 -05:00
Dino
aaa116d129 languages: Fix command used for Go subtests (#42734)
The command used to run go subtests was breaking if the test contained
square brackets, for example:

```
go test . -v -run ^TestInventoryCheckout$/^\[test\]_test_checkout$
```

After a bit of testing it appears that the best way to actually resolve
this in a way supported by `go test` is to wrap this command in quotes.
As such, this commit updates the command to, considering the example
above:

```
go test . -v -run '^TestInventoryCheckout$/^\[test\]_test_checkout$'
```

We also tested escaping the square brackets, using `\\\[` instead of
`\[`, but that would have led to a more complex change, so we opted for
the simpler solution of wrapping the command in quotes.

Closes #42347 

Release Notes:

- Fixed command used to run Go subtests to ensure that escaped
characters don't lead to a failure in finding tests to run
2025-11-14 16:04:54 +00:00
Danilo Leal
c1096d8b63 agent_ui: Render error descriptions as markdown in thread view callouts (#42732)
This PR makes the description in the callout that displays general errors
in the agent panel be rendered as markdown. This allows us to pass URLs
in these error strings that will be clickable, improving the overall
interaction with them. Here's an example:

<img width="500" height="396" alt="Screenshot 2025-11-14 at 11  43@2x"
src="https://github.com/user-attachments/assets/f4fc629a-6314-4da1-8c19-b60e1a09653b"
/>

Release Notes:

- agent: Improved the interaction with errors by allowing links to be
clickable.
2025-11-14 12:12:47 -03:00
Josh Piasecki
092071a2f0 git_ui: Allow opening a file with the diff hunks expanded (#40616)
So i just discovered `editor::ExpandAllDiffHunks`

I have been really missing the ability to look at changes NOT in a multi
buffer so i was very pleased to finally figure out that this is already
possible in Zed.

i have seen a lot of discussion/issues requesting this feature so i think
it is safe to say i'm not the only one that is not aware it exists.

i think the wording in the docs could better communicate what this
feature actually is, however, i think an even better way to show users
that this feature exists would be to just put it in front of them.

In the `GitPanel`:
- `menu::Confirm` opens the project diff
- `menu::SecondaryConfirm` opens the selected file in a new editor.

I think it would be REALLY nice if opening a file with
`SecondaryConfirm` opened the file with the diff hunks already expanded
and scrolled the editor to the first hunk.

ideally i see this being toggle-able in settings something like
`GitPanel - Open File with Diffs Expanded` or something. so the user
could turn this off if they preferred.

I tried creating a new keybinding using the new `actions::Sequence`
it was something like:
```json
{
  "context": "GitPanel && ChangesList",
  "bindings": {
    "cmd-enter" : [ "actions::Sequence", ["menu::SecondaryConfirm", "editor::ToggleFocus", "editor::ExpandAllDiffHunks", "editor::GoToHunk"]]
  }
}
```
but the action sequence does not work. i think because opening the file
is an async task.

i have a first attempt here, of just trying to get the diff hunks to
expand after opening the file.
i tried to copy and paste the logic/structure as best i could from the
confirm method in file_finder.rs:1432

it compiles, but it does not work, and i do not have enough experience
in rust or in this project to figure out anything further.

if anyone was interested in working on this with me i would enjoy
learning more and i think this would be a nice way to showcase this
tool!
2025-11-14 08:47:46 -05:00
Oleksiy Syvokon
723f9b1371 zeta2: Add minimal prompt for fine-tuned models (#42691)
1. Add `--prompt-format=minimal` that matches single-sentence
instructions used in fine-tuned models (specifically, in `1028-*` and
`1029-*` models)

2. Use separate configs for agentic context search model and edit
prediction model. This is useful when running a fine-tuned EP model, but
we still want to run vanilla model for context retrieval.

3. `zeta2-exp` is a symlink to the same-named Baseten deployment. This
model can be redeployed and updated without having to update the
deployment id.

4. Print scores as a compact table

Release Notes:

- N/A

---------

Co-authored-by: Piotr Osiewicz <piotr@zed.dev>
2025-11-14 13:08:54 +00:00
Jakub Konka
37523b0007 git_panel: Fix buffer header checkbox not showing partially staged files (#42718)
Release Notes:

- Fixed buffer header controls (staging checkbox) not showing partially
staged files
2025-11-14 12:55:04 +00:00
Jakub Konka
b4167caaf1 git_panel: Fix StageAll/UnstageAll not working when panel not in focus (#42708)
Release Notes:

- Fixed "Stage All"/"Unstage All" buttons from not working when git
panel is not in focus
2025-11-14 10:42:32 +01:00
Smit Barmase
020f518231 project_panel: Add tests for cross worktree drag-and-drop (#42704)
Add missing tests for cross worktree drag-and-drop:

- file -> directory
- file -> file (drops into parent directory)
- whole directory move
- multi selection move

Release Notes:

- N/A
2025-11-14 13:31:44 +05:30
morgankrey
ead4f26b52 Update docs for Gemini ZDR (#42697)
Release Notes:

- N/A
2025-11-14 00:22:20 -06:00
Josh Piasecki
3de3a369f5 editor: Add diffs_expanded to key context when diff hunks are expanded (#40617)
including a new identifier on the Editor key context will allow for some
more flexibility when creating keybindings.

for example i would like to be able to set the following:
```json
{
  "context": "Editor",
  "bindings": {
    "pageup": ["editor::MovePageUp", { "center_cursor": true }],
    "pagedown": ["editor::MovePageDown", { "center_cursor": true }],
  }
},
{
  "context": "Editor && diffs_expanded",
  "bindings": {
    "pageup": "editor::GoToPrevHunk",
    "pagedown": "editor::GoToHunk",
  }
},
```

<img width="1392" height="1167" alt="Screenshot 2025-10-18 at 23 51 46"
src="https://github.com/user-attachments/assets/cf4e262e-97e7-4dd9-bbda-cd272770f1ac"
/>


very open to suggestions for the name. that's the best i could come up
with.

the action *IS* called `editor::ExpandAllDiffHunks` so this seems
fitting.

the identifier is included if *any* diff hunk is visible, even if some
of them have been closed using `editor::ToggleSelectedDiffHunk`


Release Notes:

- The Editor key context now includes 'diffs_expanded' when diff changes
are visible
2025-11-14 03:33:53 +00:00
Xipeng Jin
28a0b82618 git_panel: Fix FocusChanges does nothing with no entries (#42553)
Closes #31155

Release Notes:

- Ensure `git_panel::FocusChanges` bypasses the panel’s `Focusable`
logic and directly focuses the `ChangesList` handle so the command works
even when the repository has no entries.
- Keep the `Focusable` behavior from the commit 45b126a (which routes
empty panels to the commit editor) by handling this special-case action
rather than regressing the default focus experience.
2025-11-13 21:59:39 -05:00
Mayank Verma
e2c95a8d84 git: Continue parsing other branches when refs have missing fields (#42523)
Closes #34684

Release Notes:

- (Let's Git Together) Fixed Git panel not showing any branches when
repository contains refs with missing fields
2025-11-13 21:16:38 -05:00
Anthony Eid
3da4d3aac3 settings_ui: Make open project settings action open settings UI (#42669)
This PR makes the `OpenProjectSettings` action open the settings UI in
project settings mode for the first visible worktree, instead of opening
the file. It also adds a `OpenProjectSettingsFile` action that maintains
the old behavior.

Finally, this PR partially fixes a bug where the settings UI won't load
project settings when the settings window is loaded before opening a
project/workspace. This happens because the global `app_state` isn't
correct in the `Subscription` that refreshes the available setting files
to open. The bug is still present in some cases, but it's out of scope
for this PR.

Release Notes:

- settings ui: Project Settings action now opens settings UI instead of
a file
2025-11-13 20:06:09 -05:00
Conrad Irwin
6f99eeffa8 Don't try and delete ./target/. (#42680)
Release Notes:

- N/A
2025-11-13 16:39:35 -07:00
Julia Ryan
15ab96af6b Add windows nightly update banner (#42576)
Hopefully this will nudge some of the beta users who were on nightly to
get on the official stable builds now that they're out.

Release Notes:

- N/A
2025-11-13 15:33:33 -08:00
Marshall Bowers
e80b490ac0 client: Clear plan and usage information when signing out (#42678)
This PR makes it so we clear the user's plan and usage information when
they sign out.

Release Notes:

- Signing out will now clear the local cache containing the plan and
usage information.
2025-11-13 23:13:27 +00:00
Jakub Konka
3c577ba019 git_panel: Fix Stage All/Unstage All ignoring partially staged files (#42677)
Release Notes:

- Fix "Stage All"/"Unstage All" not affecting partially staged files
2025-11-13 23:57:05 +01:00
Danilo Leal
e1d295a6b4 markdown: Improve table display (#42674)
Closes https://github.com/zed-industries/zed/issues/36330
Closes https://github.com/zed-industries/zed/issues/35460

This PR improves how we display markdown tables by relying on grids
rather than flexbox. Given this makes text inside each cell wrap, I
ended up removing the `table_overflow_x_scroll` method, as it was 1)
used only in the agent panel, and 2) arguably not the best approach as a
whole, because as soon as you need to scroll a table, you probably need
more elements to make it really great.

One thing I'm slightly unsatisfied with, though, is the border
situation. I added a half pixel border to the cell so they all sum up to
1px, but there are cases where there's a tiny space between rows and I
don't quite know where that's coming from and how it happens. But I
think it's a reasonable improvement overall.

<img width="500" height="1248" alt="Screenshot 2025-11-13 at 7  05@2x"
src="https://github.com/user-attachments/assets/182b2235-efeb-4a61-ada2-98262967355d"
/>

Release Notes:

- agent: Improved table rendering in the agent panel, ensuring cell text
wraps, not going off-screen.
2025-11-13 19:36:16 -03:00
AidanV
84f24e4b62 vim: Add :<range>w <filename> command (#41256)
Release Notes:

- Adds support for `:[range]w {file}`
  - This writes the lines in the range to the specified file
- Adds support for `:[range]w`
  - This replaces the current file with the selected lines
2025-11-13 13:27:08 -07:00
Abul Hossain Khan
03fad4b951 workspace: Fix pinned tab causing resize loop on adjacent tab (#41884)
Closes #41467 

My first PR in Zed, any guidance or tips are appreciated.

This fixes the flickering/resize loop that occurred on the tab
immediately to the right of a pinned tab.

Removed the conditional border on the pinned tabs container. The border
was a visual indicator to show when unpinned tabs were scrolled, but it
wasn't essential and was causing the layout thrashing.

Release Notes:

- Fixed the flickering/resize loop on the tab immediately to the right
of a pinned tab

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-14 01:52:57 +05:30
Kevin Rubio
c626e770a0 outline_panel: Remove toggle expanded behavior from OpenSelectedEntry (#42214)
Fixed outline panel space key behavior by removing duplicate toggle call

The `open_selected_entry` function in `outline_panel.rs` was incorrectly
calling `self.toggle_expanded(&selected_entry, window, cx)` in addition
to its primary logic, causing the space key to both open/close entries
AND toggle their expanded state. Removed the redundant `toggle_expanded`
call to achieve the intended behavior.

Closes #41711

Release Notes:

- Fixed issue with the outline panel where pressing space would cause an
open selected entry to collapse and cause a closed selected entry to
open.

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-14 01:07:22 +05:30
Lionel Henry
fa0c7500c1 Update runtimed to fix compatibility issue with the Ark kernel (#40889)
Closes #40888

This updates runtimed to the latest version, which handles the
"starting" variant of `execution_state`. It actually handles a bunch of
other variants that are not documented in the protocol (see
https://jupyter-client.readthedocs.io/en/stable/messaging.html#kernel-status),
like "starting", "terminating", etc. I added implementations for these
variants as well.

Release Notes:

- Fixed issue that prevented the Ark kernel from working in Zed
(#40888).

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-11-13 19:35:45 +00:00
Richard Feldman
e91be9e98e Fix ACP CLI login via remote (#42647)
Release Notes:

- Fixed logging into Gemini CLI and Claude Code when remoting and
authenticating via CLI

Co-authored-by: Lukas Wirth <me@lukaswirth.dev>
2025-11-13 19:13:09 +01:00
Rafael Lüder
46eb9e5223 Update scale factor and drawable size when macOS window changes screen (#38269)
Summary

Fixes UI scaling issue that occurs when starting Zed after disconnecting
an external monitor on macOS. The window's scale factor and drawable
size are now properly updated when the window changes screens.

Problem Description

When an external monitor is disconnected and Zed is started with only
the built-in screen active, the UI scale becomes incorrect. This happens
because:

1. macOS triggers the `window_did_change_screen` callback when a window
moves between displays (including when displays are disconnected)
2. The existing implementation only restarted the display link but
didn't update the window's scale factor or drawable size
3. This left the window with stale scaling information from the previous
display configuration

Root Cause

The `window_did_change_screen` callback in
`crates/gpui/src/platform/mac/window.rs` was missing the logic to update
the window's scale factor and drawable size when moving between screens.
This logic was only present in the `view_did_change_backing_properties
callback`, which isn't triggered when external monitors are
disconnected.

Solution

- Extracted common logic: Created a new `update_window_scale_factor()`
function that encapsulates the scale factor and drawable size update
logic
- Added scale factor update to screen change: Modified
`window_did_change_screen` to call this function after restarting the
display link
- Refactored existing code: Updated `view_did_change_backing_properties`
to use the new shared function, reducing code duplication

The fix ensures that whenever a window changes screens (due to monitor
disconnect, reconnect, or manual movement), the scale factor, drawable
size, and renderer state are properly synchronized.

Testing

-  Verified that UI scaling remains correct after disconnecting
external monitor
-  Confirmed that reconnecting external monitor works properly
-  Tested that manual window movement between displays updates scaling
correctly
-  No regressions observed in normal window operations

To verify that my fix worked, I had to copy my preview workspace over my
dev workspace; once I had done this, I could reproduce the issue on main
consistently. After switching to the branch with this fix, the issue was
resolved.

The fix is similar to what was done on
https://github.com/zed-industries/zed/pull/35686 (Windows)

Closes #37245 #38229

Release Notes:

- Fixed: Update scale factor and drawable size when macOS window changes
screen

---------

Co-authored-by: Kate <work@localcc.cc>
2025-11-13 16:51:13 +00:00
Conrad Irwin
cb7bd5fe19 Include source PR number in cherry-picks (#42642)
Release Notes:

- N/A
2025-11-13 16:06:26 +00:00
Ben Kunkle
b900ac2ac7 ci: Fix script/clear-target-dir-if-larger-than post #41652 (#42640)
The namespace runners mount the `target` directory to the cache drive,
so `rm -rf target` would fail with `Device Busy`. Instead we now do `rm
-rf target/* target/.*` to remove all files (including hidden files)
from the `target` directory, without removing the target directory
itself

Release Notes:

- N/A
2025-11-13 10:58:59 -05:00
Dino
b709996ec6 editor: Fix pane's tab buttons flicker on right-click (#42549)
Whenever right-click was used on the editor, the pane's tab buttons
would flicker, which was confirmed to happen because of the following
check:

```
self.focus_handle.contains_focused(window, cx)
    || self
        .active_item()
        .is_some_and(|item| {
            item.item_focus_handle(cx).contains_focused(window, cx)
        })
```

This check was returning `false` immediately after right-clicking, but
`true` shortly afterwards. When digging into it a little bit more,
this appears to be happening because the editor's `MouseContextMenu`
relies on `ContextMenu` which is rendered in a deferred fashion but
`MouseContextMenu` updates the window's focus to it instantaneously.

Since the `ContextMenu` is rendered in a deferred fashion, its focus
handle is not yet a descendant of the editor (pane's active item) focus
handle, so the `contains_focused(window, cx)` call would return `false`,
with it returning `true` after the menu was rendered.

This commit updates the `MouseContextMenu::new` function to leverage
`cx.on_next_frame` and ensure that the focus is only moved to the
`ContextMenu` 2 frames later, ensuring that by the time the focus is
moved, the `ContextMenu`'s focus handle is a descendant of the editor's.

Closes #41771 

Release Notes:

- Fixed pane's tab buttons flickering when using right-click on the
editor
2025-11-13 15:57:26 +00:00
Smit Barmase
b6972d70a5 editor: Fix panic when calculating jump data for buffer header (#42639)
Just on nightly.

Release Notes:

- N/A

Co-authored-by: Lukas Wirth <me@lukaswirth.dev>
2025-11-13 15:48:05 +00:00
kitt
ec1664f61a zed: Enable line wrapping for cli help (#42496)
This enables clap's [wrap-help] feature and sets max_term_width to wrap
after 100 columns (the value clap is planning to default to in clap-v5).

This commit also adds blank lines which cause clap to split longer doc
comments into separate help (displayed for `-h`) and long_help
(displayed for `--help`) messages, as per [doc-processing].

[wrap-help]:
https://docs.rs/clap/4.5.49/clap/_features/index.html#optional-features
[doc-processing]:
https://docs.rs/clap/4.5.49/clap/_derive/index.html#pre-processing

![before: some lines of help text stretch across the whole screen.
after: all lines are wrapped at 100 columns, and some manual linebreaks
are preserved where it makes sense (in particular, when listing the
user-data-dir locations on each
platform)](https://github.com/user-attachments/assets/359067b4-5ffb-4fe3-80bd-5e1062986417)


Release Notes:

- N/A
2025-11-13 10:46:51 -05:00
Agus Zubiaga
c2c5fceb5b zeta eval: Allow no headings under "Expected Context" (#42638)
Release Notes:

- N/A
2025-11-13 15:43:22 +00:00
Richard Feldman
eadc2301e0 Fetch the unit eval commit before checking it out (#42636)
Release Notes:

- N/A
2025-11-13 15:21:53 +00:00
Richard Feldman
b500470391 Disabled agent commands (#42579)
Closes #31346

Release Notes:

- Agent commands no longer show up in the command palette when `agent`
is disabled. Same for edit predictions.
2025-11-13 10:10:02 -05:00
Oleksiy Syvokon
55e4258147 agent: Workaround for Sonnet inserting </parameter> tag (#42634)
Release Notes:

- N/A
2025-11-13 15:09:16 +00:00
Agus Zubiaga
8467a1b08b zeta eval: Improve output (#42629)
Hides the aggregated scores if only one example/repetition ran. It also
fixes an issue with the expected context scoring.

Release Notes:

- N/A
2025-11-13 14:47:48 +00:00
Tim McLean
fb90b12073 Add retry support for OpenAI-compatible LLM providers (#37891)
Automatically retry the agent's LLM completion requests when the
provider returns 429 Too Many Requests. Uses the Retry-After header to
determine the retry delay if it is available.

Many providers are frequently overloaded or have low rate limits. These
providers are essentially unusable without automatic retries.

Tested with Cerebras configured via openai_compatible.
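The retry behavior can be sketched as shell pseudocode (the real implementation is Rust inside Zed; the status and header values below are hard-coded stand-ins for a real HTTP request):

```shell
# Illustrative retry loop: keep retrying while the provider answers
# 429 Too Many Requests, sleeping for the Retry-After value each time.
attempt=0
max_attempts=3
while [ "$attempt" -lt "$max_attempts" ]; do
  status=429      # stand-in for the response status of a real request
  retry_after=0   # stand-in for the Retry-After header (in seconds)
  if [ "$status" -ne 429 ]; then
    break         # success or a non-retryable error: stop retrying
  fi
  attempt=$((attempt + 1))
  sleep "$retry_after"
done
echo "stopped after $attempt attempts"
```

When the provider omits the `Retry-After` header, an implementation would fall back to a default or exponential backoff delay.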

Related: #31531 

Release Notes:

- Added automatic retries for OpenAI-compatible LLM providers

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-11-13 14:15:46 +00:00
Mayank Verma
92e64f9cf0 settings: Add tilde expansion support for LSP binary path (#41715)
Closes #38227

Release Notes:

- Added tilde expansion support for LSP binary path in `settings.json`
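The expansion can be illustrated with a small POSIX-shell sketch (the `~/bin/my-lsp` path is a made-up example, not a real setting value):

```shell
# Expand a leading tilde to $HOME, mirroring what the settings loader
# now does for the configured LSP binary path.
configured_path='~/bin/my-lsp'   # hypothetical value from settings.json
case "$configured_path" in
  '~'*) expanded_path="$HOME${configured_path#\~}" ;;
  *)    expanded_path="$configured_path" ;;
esac
echo "$expanded_path"
```

Tilde expansion matters here because `settings.json` values are passed to the LSP launcher verbatim; no shell ever sees them, so `~` would otherwise be treated as a literal directory name.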
2025-11-13 09:14:18 -05:00
Remco Smits
f318bb5fd7 markdown: Add support for HTML href elements (#42265)
This PR adds support for `HTML` href elements. It also refactors the
way we store the regions; this was done because otherwise I would have
had to add two extra arguments to each `HTML` parser method. It's now
also more in line with how we have done it for the highlights.

**Small note**: the markdown parser only supports HTML href tags inside
a paragraph tag. So adding them as a root node will result in just
showing the inner text. This is a limitation of the markdown parser we
use itself.

**Before**
<img width="935" height="174" alt="Screenshot 2025-11-08 at 15 40 28"
src="https://github.com/user-attachments/assets/42172222-ed49-4a4b-8957-a46330e54c69"
/>

**After**
<img width="1026" height="180" alt="Screenshot 2025-11-08 at 15 29 55"
src="https://github.com/user-attachments/assets/9e139c2d-d43a-4952-8d1f-15eb92966241"
/>

**Example code**
```markdown
<p>asd <a href="https://example.com">Link Text</a> more text</p>
<p><a href="https://example.com">Link Text</a></p>

[Duck Duck Go](https://duckduckgo.com)
```

**TODO**:
- [x] Add tests

cc @bennetbo

Release Notes:

- Markdown Preview: Add support for `HTML` href elements.

---------

Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
2025-11-13 15:12:17 +01:00
Piotr Osiewicz
430b55405a search: New recent old search implementation (#40835)
This is an in-progress work on changing how task scheduler affects
performance of project search. Instead of relying on tasks being
executed at a discretion of the task scheduler, we want to experiment
with having a set of "agents" that prioritize driving in-progress
project search matches to completion over pushing the whole thing to
completion. This should hopefully significantly improve throughput &
latency of project search.

This PR has been reverted previously in #40831.

Release Notes:
- Improved project search performance in local projects.

---------

Co-authored-by: Smit Barmase <smit@zed.dev>
Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-13 14:56:40 +01:00
Lukas Wirth
27f700e2b2 askpass: Quote paths in generated askpass script (#42622)
Closes https://github.com/zed-industries/zed/issues/42618

Release Notes:

- N/A
2025-11-13 14:37:47 +01:00
Smit Barmase
b5633f5bc7 editor: Improve multi-buffer header filename click to jump to the latest selection from that buffer - take 2 (#42613)
Relands https://github.com/zed-industries/zed/pull/42480

Release Notes:

- Clicking the multi-buffer header file name or the "Open file" button
now jumps to the most recent selection in that buffer, if one exists.

---------

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
2025-11-13 17:14:33 +05:30
R.Amogh
b9ce52dc95 agent_ui: Fix scrolling in context server configuration modal (#42502)
## Summary

Fixes #42342

When installing a dev extension with long installation instructions, the
configuration modal would overflow and users couldn't scroll to see the
full content or interact with buttons at the bottom.

## Solution

This PR adds a `ScrollHandle` to the `ConfigureContextServerModal` and
passes it to the `Modal` component, enabling the built-in modal
scrolling capability. This ensures all content remains accessible
regardless of length.

## Changes

- Added `ScrollHandle` import to the ui imports
- Added `scroll_handle: ScrollHandle` field to
`ConfigureContextServerModal` struct
- Initialize `scroll_handle` with `ScrollHandle::new()` when creating
the modal
- Pass the scroll handle to `Modal::new()` instead of `None`

## Testing

- Built the changes locally
- Tested with extensions that have long installation instructions
- Verified scrolling works and all content is accessible
- Confirmed no regression for extensions with short descriptions

Release Notes:

- Fixed scrolling issue in extension configuration modal when
installation instructions overflow the viewport

---------

Co-authored-by: Finn Evers <finn.evers@outlook.de>
2025-11-13 12:41:38 +01:00
mikeHag
34a7cfb2e5 Update cargo.rs to allow debugging of integration test annotated with the ignore attribute (#42574)
Address #40429

If an integration test is annotated with the ignore attribute, allow the
"debug: Test" option of the debug scenario or Code Action to run with
"--include-ignored"

Closes #40429

Release Notes:

- N/A
2025-11-13 12:31:23 +01:00
Kirill Bulatov
99016e3a85 Update outdated dependencies (#42611)
New rustc starts to output a few warnings, fix them by updating the
corresponding packages.

<details>
  <summary>Incompatibility notes</summary>
    
  ```
The following warnings were discovered during the build. These warnings
are an
indication that the packages contain code that will become an error in a
future release of Rust. These warnings typically cover changes to close
soundness problems, unintended or undocumented behavior, or critical
problems
that cannot be fixed in a backwards-compatible fashion, and are not
expected
to be in wide use.

Each warning should contain a link for more information on what the
warning
means and how to resolve it.


To solve this problem, you can try the following approaches:


- Some affected dependencies have newer versions available.
You may want to consider updating them to a newer version to see if the
issue has been fixed.

num-bigint-dig v0.8.4 has the following newer versions available: 0.8.5,
0.9.0, 0.9.1

- If the issue is not solved by updating the dependencies, a fix has to
be
implemented by those dependencies. You can help with that by notifying
the
maintainers of this problem (e.g. by creating a bug report) or by
proposing a
fix to the maintainers (e.g. by creating a pull request):

  - num-bigint-dig@0.8.4
  - Repository: https://github.com/dignifiedquire/num-bigint
- Detailed warning command: `cargo report future-incompatibilities --id
1 --package num-bigint-dig@0.8.4`

- If waiting for an upstream fix is not an option, you can use the
`[patch]`
section in `Cargo.toml` to use your own version of the dependency. For
more
information, see:

https://doc.rust-lang.org/cargo/reference/overriding-dependencies.html#the-patch-section

The package `num-bigint-dig v0.8.4` currently triggers the following
future incompatibility lints:
> warning: macro `vec` is private
> -->
/Users/someonetoignore/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/num-bigint-dig-0.8.4/src/biguint.rs:490:22
>     |
> 490 |         BigUint::new(vec![1])
>     |                      ^^^
>     |
> = warning: this was previously accepted by the compiler but is being
phased out; it will become a hard error in a future release!
> = note: for more information, see issue #120192
<https://github.com/rust-lang/rust/issues/120192>
> 
> warning: macro `vec` is private
> -->
/Users/someonetoignore/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/num-bigint-dig-0.8.4/src/biguint.rs:2005:9
>      |
> 2005 |         vec![0]
>      |         ^^^
>      |
> = warning: this was previously accepted by the compiler but is being
phased out; it will become a hard error in a future release!
> = note: for more information, see issue #120192
<https://github.com/rust-lang/rust/issues/120192>
> 
> warning: macro `vec` is private
> -->
/Users/someonetoignore/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/num-bigint-dig-0.8.4/src/biguint.rs:2027:16
>      |
> 2027 |         return vec![b'0'];
>      |                ^^^
>      |
> = warning: this was previously accepted by the compiler but is being
phased out; it will become a hard error in a future release!
> = note: for more information, see issue #120192
<https://github.com/rust-lang/rust/issues/120192>
> 
> warning: macro `vec` is private
> -->
/Users/someonetoignore/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/num-bigint-dig-0.8.4/src/biguint.rs:2313:13
>      |
> 2313 |             vec![0]
>      |             ^^^
>      |
> = warning: this was previously accepted by the compiler but is being
phased out; it will become a hard error in a future release!
> = note: for more information, see issue #120192
<https://github.com/rust-lang/rust/issues/120192>
> 
> warning: macro `vec` is private
> -->
/Users/someonetoignore/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/num-bigint-dig-0.8.4/src/prime.rs:138:22
>     |
> 138 |     let mut moduli = vec![BigUint::zero(); prime_limit];
>     |                      ^^^
>     |
> = warning: this was previously accepted by the compiler but is being
phased out; it will become a hard error in a future release!
> = note: for more information, see issue #120192
<https://github.com/rust-lang/rust/issues/120192>
> 
> warning: macro `vec` is private
> -->
/Users/someonetoignore/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/num-bigint-dig-0.8.4/src/bigrand.rs:319:25
>     |
> 319 |         let mut bytes = vec![0u8; bytes_len];
>     |                         ^^^
>     |
> = warning: this was previously accepted by the compiler but is being
phased out; it will become a hard error in a future release!
> = note: for more information, see issue #120192
<https://github.com/rust-lang/rust/issues/120192>
> 

  ```
  
</details>

Release Notes:

- N/A
2025-11-13 10:35:16 +00:00
Lukas Wirth
dea3c8c949 remote: More nushell fixes (#42608)
Closes https://github.com/zed-industries/zed/issues/42594

Release Notes:

- Fixed remote server installation failing with nushell
2025-11-13 09:53:31 +00:00
Lukas Wirth
7eac6d242c diagnostics: Workaround weird panic in update_path_excerpts (#42602)
Fixes ZED-36P

Patching this over for now until I can figure out the cause of this

Release Notes:

- Fixed panic in diagnostics pane
2025-11-13 09:13:54 +00:00
Lukas Wirth
b92b28314f Replace {floor/ceil}_char_boundary polyfills with std (#42599)
Release Notes:

- N/A
2025-11-13 08:11:18 +00:00
AidanV
1fc0642de1 vim: Make each vim repeat its own transaction (#41735)
Release Notes:

- Pressing `u` after multiple `.` in rapid succession will now only undo
the latest repeat instead of all repeats.

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-11-13 06:46:14 +00:00
Conrad Irwin
045ac6d1b6 Release failure visibility (#42572)
Release Notes:

- N/A
2025-11-12 23:11:09 -07:00
Sean Hagstrom
1936f16c62 editor: Use a single newline between each copied line from a multi-cursor selection (#41204)
Closes #40923

Release Notes:

- Fixed the number of newlines between copied lines from a multi-cursor
selection of multiple full-line copies.
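
A rough sketch of the intended joining behavior (hypothetical helper; the actual fix operates on editor selections, not strings):

```rust
// Sketch: when every cursor copied a full line, join the copies with exactly
// one newline instead of keeping each copy's own trailing newline, which
// previously produced blank lines between them. (Hypothetical helper.)
fn join_full_line_copies(copies: &[&str]) -> String {
    copies
        .iter()
        .map(|line| line.trim_end_matches('\n'))
        .collect::<Vec<_>>()
        .join("\n")
}

fn main() {
    let copies = ["let a = 1;\n", "let b = 2;\n", "let c = 3;\n"];
    // One newline between lines, not two.
    assert_eq!(
        join_full_line_copies(&copies),
        "let a = 1;\nlet b = 2;\nlet c = 3;"
    );
}
```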

---


https://github.com/user-attachments/assets/ab7474d6-0e49-4c29-9700-7692cd019cef
2025-11-12 22:58:13 -07:00
Conrad Irwin
b32559f07d Avoid re-creating releases when re-running workflows (#42573)
Release Notes:

- N/A
2025-11-12 21:50:15 -07:00
Julia Ryan
28adedf1fa Disable env clearing for npm subcommands (#42587)
Fixes #39448

Several node version managers such as [volta](https://volta.sh) use thin
wrappers that locate the "real" node/npm binary with an env var that
points at their install root. When the wrapper finds this variable, it
prepends the correct directory to PATH; otherwise it checks a hardcoded
default location and prepends that to PATH if it exists.

We were clearing env for npm subcommands, which meant that volta and co.
failed to locate the install root, and because they were installed via
scoop they don't use the default install path either so it simply
doesn't prepend anything to PATH (winget on the other hand installs
volta to the right place, which is why it worked when using that instead
of scoop to install volta @IllusionaryX).

So volta's npm wrapper executes a subcommand `npm`, but when that
doesn't prepend a different directory to PATH the first `npm` found in
PATH is that same wrapper itself, which horrifyingly causes itself to
re-exec continuously. I think they might have some logic to try to
prevent this using, you'll never guess, another env var that they set
whenever a volta wrapper execs something. Of course since we clear the
env that var also fails to propagate.

Removing env clearing (but keeping the prepending of npm path from your
settings) fixes these issues.

Release Notes:

- Fixed issues with scoop installations of mise/volta

Co-authored-by: John Tur <john-tur@outlook.com>
2025-11-12 22:03:59 -06:00
Max Brunsfeld
c9e231043a Report discarded zeta predictions and indicate whether they were shown (#42403)
Release Notes:

- N/A

---------

Co-authored-by: Michael Sloan <mgsloan@gmail.com>
Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Agus Zubiaga <agus@zed.dev>
2025-11-12 16:41:04 -08:00
Richard Feldman
ede3b1dae6 Allow running concurrent unit evals (#42578)
Right now only one unit eval GitHub Action can be run at a time. This
permits them to run concurrently.

Release Notes:

- N/A
2025-11-12 22:04:38 +00:00
Agus Zubiaga
b0700a4625 zeta eval: --repeat flag (#42569)
Adds a `--repeat` flag to the zeta eval that runs each example as many
times as specified. Also makes the output nicer in a few ways.

Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Michael <michael@zed.dev>
2025-11-12 16:58:22 -05:00
Michael Sloan
f2a1eb9963 Make check-licenses script check that AGPL crates are not included in release binaries (#42571)
See discussion in #24657. Recalled that I had a stashed change for this,
so polished it up

Release Notes:

- N/A
2025-11-12 21:58:12 +00:00
Andrew Farkas
0c1ca2a45a Improve pane: reopen closed item to not reopen closed tabs (#42568)
Closes #42134

Release Notes:

- Improved `pane: reopen closed item` to not reopen closed tabs.
2025-11-12 21:08:41 +00:00
Conrad Irwin
8fd8b989a6 Use powershell for winget job steps (#42565)
Co-Authored-By: Claude

Release Notes:

- N/A
2025-11-12 13:41:20 -07:00
Lucas Parry
fd837b348f project_panel: Make natural sort ordering consistent with other apps (#41080)
The existing sorting approach when faced with `Dir1`, `dir2`, `Dir3`,
would only get as far as comparing the stems without numbers (`dir` and
`Dir`), and then the lowercase-first tie breaker in that function would
determine that `dir2` should come first, resulting in an undesirable
order of `dir2`, `Dir1`, `Dir3`.

This patch defers tie-breaking until it's determined that there's no
other difference in the strings outside of case to order on, at which
point we tie-break to provide a stable sort.

Natural number sorting is still preserved, and mixing different cases
alphabetically (as opposed to all lowercase alpha, followed by all
uppercase alpha) is preserved.
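
The deferred tie-breaking described above can be sketched roughly like this (a simplified illustration, not the actual `project_panel` code; `natural_cmp` and `take_digits` are hypothetical helpers):

```rust
use std::cmp::Ordering;

// Compare two names "naturally": digit runs compare numerically, letters
// compare case-insensitively; case is only consulted as a final tie-breaker
// so the sort stays stable without splitting mixed-case names apart.
fn natural_cmp(a: &str, b: &str) -> Ordering {
    let mut ai = a.chars().peekable();
    let mut bi = b.chars().peekable();
    loop {
        match (ai.peek().copied(), bi.peek().copied()) {
            (None, None) => break,
            (None, Some(_)) => return Ordering::Less,
            (Some(_), None) => return Ordering::Greater,
            (Some(x), Some(y)) if x.is_ascii_digit() && y.is_ascii_digit() => {
                // Consume both digit runs and compare them as numbers.
                let (na, nb) = (take_digits(&mut ai), take_digits(&mut bi));
                match na.cmp(&nb) {
                    Ordering::Equal => {}
                    other => return other,
                }
            }
            (Some(x), Some(y)) => {
                match x.to_ascii_lowercase().cmp(&y.to_ascii_lowercase()) {
                    Ordering::Equal => {
                        ai.next();
                        bi.next();
                    }
                    other => return other,
                }
            }
        }
    }
    // Only now, when the names are equal ignoring case, break the tie by case.
    a.cmp(b)
}

fn take_digits(it: &mut std::iter::Peekable<std::str::Chars<'_>>) -> u64 {
    let mut n = 0u64;
    while let Some(c) = it.peek().copied().filter(|c| c.is_ascii_digit()) {
        n = n * 10 + (c as u64 - '0' as u64);
        it.next();
    }
    n
}

fn main() {
    let mut names = vec!["dir2", "Dir1", "Dir3", "docs"];
    names.sort_by(|a, b| natural_cmp(a, b));
    assert_eq!(names, vec!["Dir1", "dir2", "Dir3", "docs"]);
}
```

Because case only matters once the names are otherwise identical, `Dir1`, `dir2`, `Dir3` stay in natural numeric order.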

Closes #41080


Release Notes:

- Fixed: ProjectPanel sorting bug

Screenshots:

Before | After
----|---
<img width="237" height="325" alt="image"
src="https://github.com/user-attachments/assets/6e92e8c0-2172-4a8f-a058-484749da047b"
/> | <img width="239" height="325" alt="image"
src="https://github.com/user-attachments/assets/874ad29f-7238-4bfc-b89b-fd64f9b8889a"
/>

I'm having trouble reasoning through what was previously going wrong
with `docs` in the before screenshot, but it also seems to now appear
alphabetically where you'd expect it with this patch.

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-13 02:04:40 +05:30
Piotr Osiewicz
6b239c3a9a Bump Rust to 1.91.1 (#42561)
Release Notes:

- N/A

---------

Co-authored-by: Julia Ryan <juliaryan3.14@gmail.com>
2025-11-12 20:27:04 +00:00
Piotr Osiewicz
73e5df6445 ci: Install pre-built cargo nextest instead of rolling our own (#42556)
Release Notes:

- N/A
2025-11-12 20:05:40 +00:00
KyleBarton
b403c199df Add additional comment for context in TypeScript highlights (#42564)
This adds additional comments that were left out of #42494 by
accident. Namely, they describe why we have additional custom
highlighting in `highlights.scm` for the TypeScript grammar.

Release Notes:

- N/A
2025-11-12 19:59:10 +00:00
Konstantinos Lyrakis
cb4067723b Fix typo (#42559)
Fixed a typo in the docs

Release Notes:

- N/A
2025-11-12 21:07:34 +02:00
Ben Kunkle
1c625f8783 Fix JSON Schema documentation for code_actions_on_format (#42128)
Release Notes:

- N/A
2025-11-12 18:33:02 +00:00
KyleBarton
4adec27a3d Implement pretty TypeScript errors (#42494)
Closes #7844

This change uses tree-sitter highlights as a method of showing
typescript errors prettily, keeping regex as simple as possible:

<img width="832" height="446" alt="Screenshot 2025-11-11 at 3 40 24 PM"
src="https://github.com/user-attachments/assets/0b3b6cf1-4d4d-4398-b89b-ef5ec0df87ec"
/>

It covers three main areas:

1. Diagnostics

Diagnostics are now rendered with language-aware TypeScript highlighting,
by providing the project's language registry.

2. Vtsls

The LSP provider for typescript now implements the
`diagnostic_message_to_markdown` function in the `LspAdapter` trait, so
as to provide Diagnostics with \`\`\`typescript...\`\`\`-style code
blocks for any selection of typescript longer than one word. In the
single-word case, it simply wraps with \`\`

3. Typescript's `highlights.scm`

`vtsls` doesn't provide strictly valid typescript in much of its
messaging. Rather, it returns a message with snippets of typescript
values which are invalid. Tree-sitter was not properly highlighting
these snippets because it was expecting key-value formats. For instance:
```
type foo = { foo: string; bar: string; baz: number[] }
```
is valid, whereas simply
```
{ foo: string; bar: string; baz: number[] }
```
is not.

Therefore, highlights.scm needed to be adjusted in order to
pattern-match on literal values that might be returned from the vtsls
diagnostics messages. This was done by a) identifying arrow functions on
their own, and b) augmenting the `statment_block` pattern matching in
order to match on values which were clearly object literals.

This approach may not be exhaustive - I'm happy to work on any
additional cases we might identify from `vtsls` here - but hopefully
demonstrates an extensible approach to making these messages look nice,
without taking on the technical burden of extensive regex.
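
The single-word versus multi-word wrapping rule from point 2 can be sketched as (a hypothetical helper, not the actual `LspAdapter` implementation):

```rust
// Sketch of the wrapping rule: multi-word TypeScript snippets become fenced
// ```typescript blocks so tree-sitter can highlight them; single words
// become inline code. (Hypothetical helper.)
fn snippet_to_markdown(snippet: &str) -> String {
    if snippet.split_whitespace().count() > 1 {
        format!("```typescript\n{}\n```", snippet)
    } else {
        format!("`{}`", snippet)
    }
}

fn main() {
    assert_eq!(snippet_to_markdown("string"), "`string`");
    assert_eq!(
        snippet_to_markdown("{ foo: string }"),
        "```typescript\n{ foo: string }\n```"
    );
}
```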

Release Notes:

- Show pretty TypeScript errors with language-aware Markdown.
2025-11-12 10:32:46 -08:00
Remco Smits
e8daab15ab debugger: Fix prevent creating breakpoints inside breakpoint editor (#42475)
Closes #38057

This PR ensures that you can no longer create breakpoints inside the
breakpoint editor (called `BreakpointPromptEditor` in the code). As the
after video shows, no nested breakpoint editor is created anymore.

**Before**


https://github.com/user-attachments/assets/c4e02684-ac40-4176-bd19-f8f08e831dde

**After**


https://github.com/user-attachments/assets/f5b1176f-9545-4629-be12-05c64697a3de

Release Notes:

- Debugger: Prevent breakpoints from being created inside the breakpoint
editor
2025-11-12 18:18:10 +00:00
Ben Kunkle
6501b0c311 zeta eval: Improve determinism and debugging ergonomics (#42478)
- Improves the determinism of the search step for better cache
reusability
- Adds a `--cache force` mode that refuses to make any requests or
searches that aren't cached
- The structure of the `zeta-*` directories under `target` has been
rethought for convenience

Release Notes:

- N/A

---------

Co-authored-by: Agus <agus@zed.dev>
2025-11-12 18:16:13 +00:00
Ben Kunkle
6c0069ca98 zeta2: Improve error reporting and eval purity (#42470)
Improves error reporting for various failure modes of zeta2, including
failing to parse the `<old_text>`/`<new_text>` pattern, and the contents
of `<old_text>` failing to match.

Additionally, makes it so that evals are checked out into a worktree
with the _repo_ name instead of the _example_ name, in order to make
sure that the eval name has no influence on the model's prediction. The
repo name worktrees are still namespaced by the example name like
`{example_name}/{repo_name}` to ensure evals pointing to the same repo
do not conflict.

Release Notes:

- N/A

---------

Co-authored-by: Agus <agus@zed.dev>
2025-11-12 12:52:11 -05:00
Conrad Irwin
c8930e07a3 Allow multiple parked threads in tests (#42551)
Release Notes:

- N/A

Co-Authored-By: Piotr <piotr@zed.dev>
2025-11-12 10:29:31 -07:00
Richard Feldman
ab352f669e Gracefully handle @mention-ing large files with no outlines (#42543)
Closes #32098

Release Notes:

- In the Agent panel, when `@mention`-ing large files with no outline,
their first 1KB is now added to context
2025-11-12 16:55:25 +00:00
Finn Evers
e79188261b fs: Fix wrong watcher trace log on Linux (#42544)
Follow-up to #40200

Release Notes:

- N/A
2025-11-12 16:26:53 +00:00
Marshall Bowers
ab62739605 collab: Remove unused methods from User model (#42536)
This PR removes some unused methods from the `User` model.

Release Notes:

- N/A
2025-11-12 15:38:16 +00:00
Marco Mihai Condrache
cfbde91833 terminal: Add setting for scroll multiplier (#39463)
Closes #5130

Release Notes:

- Added setting option for scroll multiplier of the terminal

---------

Signed-off-by: Marco Mihai Condrache <52580954+marcocondrache@users.noreply.github.com>
Co-authored-by: MrSubidubi <finn@zed.dev>
2025-11-12 16:38:06 +01:00
Vasyl Protsiv
80b32ddaad gpui: Add 'Nearest' scrolling strategy to 'UniformList' (#41844)
This PR introduces the `Nearest` scrolling strategy to `UniformList`. It
is now used in the completions menu and the picker to choose the appropriate
scrolling strategy depending on movement direction. Previously,
selecting the next element after the last visible item caused the menu
to scroll with `ScrollStrategy::Top`, which scrolled the whole page and
placed the next element at the top. This behavior is inconsistent,
because using `ScrollStrategy::Top` when moving up only scrolls one
element, not the whole page.


https://github.com/user-attachments/assets/ccfb238f-8f76-4a18-a18d-bbcb63340c5a

The solution is to introduce the `Nearest` scrolling strategy which will
internally choose the scrolling strategy depending on whether the new
selected item is below or above currently visible items. This ensures a
single-item scroll regardless of movement direction.


https://github.com/user-attachments/assets/8502efb8-e2c0-4ab1-bd8d-93103841a9c4


I also noticed that some functions in the file have different logic
depending on `y_flipped`. This appears related to reversing the order of
elements in the list when the completion menu appears above the cursor.
This was a feature suggested in #11200 and implemented in #23446. It
looks like this feature was reverted in #27765, and there currently seems
to be no way for `y_flipped` to be set to `true`.

My understanding is that the opposite scroll strategy should be used if
`y_flipped`, but since there is no way to enable this feature to test it
and I don't know if the feature is ever going to be reintroduced I
decided not to include it in this PR.
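
The strategy selection described above can be sketched as (simplified; not the actual gpui `UniformList` code):

```rust
// Sketch of a "nearest" strategy: use Top when the target is above the
// visible range and Bottom when it is below, so the selection scrolls by a
// single item regardless of movement direction. (Illustrative only.)
#[derive(Debug, PartialEq)]
enum Strategy {
    Top,
    Bottom,
    None,
}

fn nearest(target: usize, first_visible: usize, last_visible: usize) -> Strategy {
    if target < first_visible {
        Strategy::Top
    } else if target > last_visible {
        Strategy::Bottom
    } else {
        Strategy::None
    }
}

fn main() {
    // Items 3..=7 visible; selecting item 8 scrolls one line at the bottom.
    assert_eq!(nearest(8, 3, 7), Strategy::Bottom);
    assert_eq!(nearest(2, 3, 7), Strategy::Top);
    assert_eq!(nearest(5, 3, 7), Strategy::None);
}
```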


Release Notes:

- gpui: Add 'Nearest' scrolling strategy to 'UniformList'
2025-11-12 16:37:14 +01:00
Joseph T. Lyons
53652cdb3f Bump Zed to v0.214 (#42539)
Release Notes:

- N/A
2025-11-12 15:36:28 +00:00
Smit Barmase
1d75a9c4b2 Reverts "add OpenExcerptsSplit and dispatches on click" (#42538)
Partially reverts https://github.com/zed-industries/zed/pull/42283 to
restore the old behavior of excerpt clicking.

Release Notes:

- N/A
2025-11-12 20:47:29 +05:30
Richard Feldman
c5ab1d4679 Stop thread on Restore Checkpoint (#42537)
Closes #35142

In addition to cleaning up the terminals, also stops the conversation.

Release Notes:

- Restoring a checkpoint now stops the agent conversation.
2025-11-12 15:13:40 +00:00
Smit Barmase
1fdd95a9b3 Revert "editor: Improve multi-buffer header filename click to jump to the latest selection from that buffer" (#42534)
Reverts zed-industries/zed#42480

This panics on Nightly in cases where the anchor might not be valid for that
snapshot. Taking it back before the cutoff.

Release Notes:

- N/A
2025-11-12 20:31:43 +05:30
localcc
49634f6041 Miniprofiler (#42385)
Release Notes:

- Added hang detection and a built-in performance profiler
2025-11-12 15:31:20 +01:00
Jakub Konka
2119ac42d7 git_panel: Fix partially staged changes not showing up (#42530)
Release Notes:

- N/A
2025-11-12 15:13:29 +01:00
Hans
e833d1af8d vim: Fix change surround adding unwanted spaces with quotes (#42431)
Update `Vim.change_surround` to ensure that there are no
overlapping edits by keeping track of where the open string range ends
and ensuring that the closing string range start does not go lower than
the open string range end.
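
The clamping described above can be sketched with plain offsets (illustrative only; the real code operates on editor ranges):

```rust
// Sketch of the fix: when computing the closing edit range, clamp its start
// so it never begins before the end of the opening edit, preventing
// overlapping edits. (Offsets stand in for editor anchors.)
fn clamp_close_range(open_end: usize, close_start: usize, close_end: usize) -> (usize, usize) {
    (close_start.max(open_end), close_end)
}

fn main() {
    // Opening edit covers 0..3; a closing range starting at 2 would overlap.
    assert_eq!(clamp_close_range(3, 2, 5), (3, 5));
    // Non-overlapping ranges are left untouched.
    assert_eq!(clamp_close_range(3, 4, 6), (4, 6));
}
```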

Closes #42316 

Release Notes:

- Fixed vim's change surround (`cs`) inserting unwanted spaces with quotes
by preventing overlapping edits

---------

Co-authored-by: dino <dinojoaocosta@gmail.com>
2025-11-12 13:04:24 +00:00
Ben Kunkle
7be76c74d6 Use set -x in script/clear-target-dir-if-larger-than (#42525)
Release Notes:

- N/A
2025-11-12 12:52:19 +00:00
Piotr Osiewicz
c2980cba18 remote_server: Bump fork to 0.4.0 (#42520)
Release Notes:

- N/A
2025-11-12 11:57:53 +00:00
Lena
a0be53a190 Wake up stalebot with an updated config (#42516)
- switch the bot from looking at the `bug/crash` labels which we don't
  use anymore to the Bug/Crash issue types which we do use
- shorten the period of time after which a bug is suspected to be stale
  (with our pace they can indeed be outdated in 60 days)
- extend the grace period for someone to come around and say nope, this
  problem still exists (people might be away for a couple of weeks).


Release Notes:

- N/A
2025-11-12 12:40:26 +01:00
Lena
70feff3c7a Add a one-off cleanup script for GH issue types (#42515)
Mainly for historical purposes, and in case we want to do something similar in the future.

Release Notes:

- N/A
2025-11-12 11:40:31 +01:00
Finn Evers
f46990bac8 extensions_ui: Add XML extension suggestion for XML files (#42514)
Closes #41798

Release Notes:

- N/A
2025-11-12 10:12:02 +00:00
Lukas Wirth
78f466559a vim: Fix empty selections panic in insert_at_previous (#42504)
Fixes ZED-15C

Release Notes:

- N/A
2025-11-12 09:54:22 +00:00
CnsMaple
4f158c1983 docs: Update basedpyright settings examples (#42497)
The
[example](https://docs.basedpyright.com/latest/configuration/language-server-settings/#zed)
on basedpyright's official website is correct; this updates our docs to match it.

Release Notes:

- Update basedpyright settings examples
2025-11-12 10:05:17 +01:00
Kirill Bulatov
ddf762e368 Revert "gpui: Unify the index_for_x methods (#42162)" (#42505)
This reverts commit 082b80ec89.

This broke clicking, e.g. in snippets like

```rs
let x = vec![
    1, 2, //
    3,
];
```

clicking between `2` and `,` is quite off now.

Release Notes:

- N/A
2025-11-12 08:24:06 +00:00
Lukas Wirth
f2cadad49a gpui: Fix RefCell already borrowed in WindowsPlatform::run (#42506)
Relands #42440 

Fixes ZED-1VX

Release Notes:

- N/A
2025-11-12 08:19:32 +00:00
Lukas Wirth
231d1b1d58 diagnostics: Close diagnosticsless buffers on refresh (#42503)
Release Notes:

- N/A
2025-11-12 08:11:50 +00:00
Andrew Farkas
2bcfc12951 Absolutize LSP and DAP paths more conservatively (#42482)
Fixes a regression caused by #42135 where LSP and DAP binaries weren't
being used from `PATH` env var

Now we absolutize the path if (path is relative AND (path has multiple
components OR path exists in worktree)).

- Relative paths with multiple components might not exist in the
worktree because they are ignored. Paths with a single component will at
least have an entry saying that they exist and are ignored.
- Relative paths with multiple components will never use the `PATH` env
var, so they can be safely absolutized
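
The rule can be sketched as (illustrative; `exists_in_worktree` stands in for the real worktree lookup):

```rust
use std::path::{Path, PathBuf};

// Sketch of the rule: absolutize a binary path only when it is relative AND
// (has multiple components OR exists in the worktree). Single-component
// paths that aren't in the worktree are left alone so they resolve via the
// PATH env var.
fn resolve_binary(path: &Path, worktree_root: &Path, exists_in_worktree: bool) -> PathBuf {
    let multiple_components = path.components().count() > 1;
    if path.is_relative() && (multiple_components || exists_in_worktree) {
        worktree_root.join(path)
    } else {
        path.to_path_buf()
    }
}

fn main() {
    let root = Path::new("/project");
    // `rust-analyzer` alone, not in the worktree: left as-is, found via PATH.
    assert_eq!(
        resolve_binary(Path::new("rust-analyzer"), root, false),
        PathBuf::from("rust-analyzer")
    );
    // `bin/tool` has multiple components: always absolutized.
    assert_eq!(
        resolve_binary(Path::new("bin/tool"), root, false),
        PathBuf::from("/project/bin/tool")
    );
}
```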

Release Notes:

- N/A
2025-11-12 01:36:22 +00:00
Richard Feldman
cf6ae01d07 Show recommended models under normal category too (#42489)
<img width="395" height="444" alt="Screenshot 2025-11-11 at 4 04 57 PM"
src="https://github.com/user-attachments/assets/8da68721-6e33-4d01-810d-4aa1e2f3402d"
/>

Discussed with @danilo-leal and we're going with the "it's checked in
both places" design!

Closes #40910

Release Notes:

- Recommended AI models now still appear in their normal category in
addition to "Recommended:"
2025-11-11 22:10:46 +00:00
Miguel Cárdenas
2ad7ecbcf0 project_panel: Add auto_open settings (#40435)
- Based on #40234, and improvement of #40331

Release Notes:

- Added granular settings to control when files auto-open in the project
panel (project_panel.auto_open.on_create, on_paste, on_drop)
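
Based on the setting names in the release note above, the new options would be configured roughly like this (the keys come from the release note; the `true` values shown are an assumed default, not confirmed):

```json
{
  "project_panel": {
    "auto_open": {
      "on_create": true,
      "on_paste": true,
      "on_drop": true
    }
  }
}
```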

<img width="662" height="367" alt="Screenshot_2025-10-16_17-28-31"
src="https://github.com/user-attachments/assets/930a0a50-fc89-4c5d-8d05-b1fa2279de8b"
/>

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-12 03:23:40 +05:30
Lukas Wirth
854c6873c7 Revert "gpui: Fix RefCell already borrowed in WindowsPlatform::run" (#42481)
Reverts zed-industries/zed#42440

There are invalid temporaries in here keeping the borrows alive for
longer
2025-11-11 21:42:59 +00:00
Andrew Farkas
da94f898e6 Add support for multi-word snippet prefixes (#42398)
Supercedes #41126

Closes #39559, #35397, and #41426

Release Notes:

- Added support for multi-word snippet prefixes

---------

Co-authored-by: Agus Zubiaga <hi@aguz.me>
Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
Co-authored-by: Cole Miller <cole@zed.dev>
2025-11-11 16:34:25 -05:00
Richard Feldman
f62bfe1dfa Use enterprise_uri for settings when provided (#42485)
Closes #34945

Release Notes:

- Fixed `enterprise_uri` not being used for GitHub settings URL when
provided
2025-11-11 21:31:42 +00:00
Richard Feldman
a56693d9e8 Fix panic when opening an invalid URL (#42483)
Now instead of a panic we see this:

<img width="511" height="132" alt="Screenshot 2025-11-11 at 3 47 25 PM"
src="https://github.com/user-attachments/assets/48ba2f41-c5c0-4030-9331-0d3acfbf9461"
/>


Release Notes:

- Trying to open invalid URLs in a browser now shows an error instead of
panicking
2025-11-11 21:24:37 +00:00
Smit Barmase
b4b7a23c39 editor: Improve multi-buffer header filename click to jump to the latest selection from that buffer (#42480)
Closes https://github.com/zed-industries/zed/pull/42099

Regressed in https://github.com/zed-industries/zed/pull/42283

Release Notes:

- Clicking the multi-buffer header file name or the "Open file" button
now jumps to the most recent selection in that buffer, if one exists.
2025-11-12 02:04:37 +05:30
Richard Feldman
0d56ed7d91 Only send unit eval failures to Slack for cron job (#42479)
Release Notes:

- N/A
2025-11-11 20:19:34 +00:00
Lay Sheth
e01e0b83c4 Avoid panics in LSP store path handling (#42117)
Release Notes:

- Fixed incorrect journal paths handling
2025-11-11 20:51:57 +02:00
Richard Feldman
908ef03502 Split out cron and non-cron unit evals (#42472)
Release Notes:

- N/A

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-11-11 13:45:48 -05:00
feeiyu
5f4d0dbaab Fix circular reference issue around PopoverMenu (#42461)
Follow up to https://github.com/zed-industries/zed/pull/42351

Release Notes:

- N/A
2025-11-11 19:20:38 +02:00
brequet
c50f821613 docs: Fix typo in configuring-zed.md (#42454)
Fix a minor typo in the setting key: `auto_install_extension` should be
`auto_install_extensions`.

Release Notes:

- N/A
2025-11-11 17:58:18 +01:00
Marshall Bowers
7e491ac500 collab: Drop embeddings table (#42466)
This PR drops the `embeddings` table, as it is no longer used.

Release Notes:

- N/A
2025-11-11 11:44:04 -05:00
Richard Feldman
9e1e732db8 Use longer timeout on evals (#42465)
The GPT-5 ones in particular can take a long time!

Release Notes:

- N/A

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-11-11 16:37:20 +00:00
Lukas Wirth
83351283e4 settings: Skip terminal env vars with substitutions in vscode import (#42464)
Closes https://github.com/zed-industries/zed/issues/40547

Release Notes:

- Fixed vscode import creating faulty terminal env vars in terminal
settings
2025-11-11 16:15:12 +00:00
Marshall Bowers
03acbb7de3 collab: Remove unused embeddings queries and model (#42463)
This PR removes the queries and database model for embeddings, as
they're no longer used.

Release Notes:

- N/A
2025-11-11 16:13:59 +00:00
Richard Feldman
0268b17096 Add more secrets to eval workflows (#42459)
Release Notes:

- N/A
2025-11-11 16:07:57 +00:00
361 changed files with 16933 additions and 4822 deletions

View File

@@ -4,10 +4,8 @@ description: "Runs the tests"
runs:
using: "composite"
steps:
- name: Install Rust
shell: bash -euxo pipefail {0}
run: |
cargo install cargo-nextest --locked
- name: Install nextest
uses: taiki-e/install-action@nextest
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4

View File

@@ -11,9 +11,8 @@ runs:
using: "composite"
steps:
- name: Install test runner
shell: powershell
working-directory: ${{ inputs.working-directory }}
run: cargo install cargo-nextest --locked
uses: taiki-e/install-action@nextest
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4

View File

@@ -56,14 +56,14 @@ jobs:
- id: set-package-name
name: after_release::publish_winget::set_package_name
run: |
if [ "${{ github.event.release.prerelease }}" == "true" ]; then
PACKAGE_NAME=ZedIndustries.Zed.Preview
else
PACKAGE_NAME=ZedIndustries.Zed
fi
if ("${{ github.event.release.prerelease }}" -eq "true") {
$PACKAGE_NAME = "ZedIndustries.Zed.Preview"
} else {
$PACKAGE_NAME = "ZedIndustries.Zed"
}
echo "PACKAGE_NAME=$PACKAGE_NAME" >> "$GITHUB_OUTPUT"
shell: bash -euxo pipefail {0}
echo "PACKAGE_NAME=$PACKAGE_NAME" >> $env:GITHUB_OUTPUT
shell: pwsh
- name: after_release::publish_winget::winget_releaser
uses: vedantmgoyal9/winget-releaser@19e706d4c9121098010096f9c495a70a7518b30f
with:
@@ -86,3 +86,19 @@ jobs:
SENTRY_ORG: zed-dev
SENTRY_PROJECT: zed
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
notify_on_failure:
needs:
- rebuild_releases_page
- post_to_discord
- publish_winget
- create_sentry_release
if: failure()
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: release::notify_on_failure::notify_slack
run: |-
curl -X POST -H 'Content-type: application/json'\
--data '{"text":"${{ github.workflow }} failed: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"}' "$SLACK_WEBHOOK"
shell: bash -euxo pipefail {0}
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_WORKFLOW_FAILURES }}

View File

@@ -42,7 +42,7 @@ jobs:
exit 1
;;
esac
which cargo-set-version > /dev/null || cargo install cargo-edit
which cargo-set-version > /dev/null || cargo install cargo-edit -f --no-default-features --features "set-version"
output="$(cargo set-version -p zed --bump patch 2>&1 | sed 's/.* //')"
export GIT_COMMITTER_NAME="Zed Bot"
export GIT_COMMITTER_EMAIL="hi@zed.dev"

View File

@@ -1,6 +1,7 @@
# Generated from xtask::workflows::cherry_pick
# Rebuild with `cargo xtask workflows`.
name: cherry_pick
run-name: 'cherry_pick to ${{ inputs.channel }} #${{ inputs.pr_number }}'
on:
workflow_dispatch:
inputs:
@@ -16,6 +17,10 @@ on:
description: channel
required: true
type: string
pr_number:
description: pr_number
required: true
type: string
jobs:
run_cherry_pick:
runs-on: namespace-profile-2x4-ubuntu-2404

View File

@@ -15,14 +15,15 @@ jobs:
stale-issue-message: >
Hi there! 👋
We're working to clean up our issue tracker by closing older issues that might not be relevant anymore. If you are able to reproduce this issue in the latest version of Zed, please let us know by commenting on this issue, and we will keep it open. If you can't reproduce it, feel free to close the issue yourself. Otherwise, we'll close it in 7 days.
We're working to clean up our issue tracker by closing older bugs that might not be relevant anymore. If you are able to reproduce this issue in the latest version of Zed, please let us know by commenting on this issue, and it will be kept open. If you can't reproduce it, feel free to close the issue yourself. Otherwise, it will close automatically in 14 days.
Thanks for your help!
close-issue-message: "This issue was closed due to inactivity. If you're still experiencing this problem, please open a new issue with a link to this issue."
days-before-stale: 120
days-before-close: 7
any-of-issue-labels: "bug,panic / crash"
days-before-stale: 60
days-before-close: 14
only-issue-types: "Bug,Crash"
operations-per-run: 1000
ascending: true
enable-statistics: true
stale-issue-label: "stale"
exempt-issue-labels: "never stale"

View File

@@ -39,8 +39,7 @@ jobs:
run: ./script/download-wasi-sdk
shell: bash -euxo pipefail {0}
- name: compare_perf::run_perf::install_hyperfine
run: cargo install hyperfine
shell: bash -euxo pipefail {0}
uses: taiki-e/install-action@hyperfine
- name: steps::git_checkout
run: git fetch origin ${{ inputs.base }} && git checkout ${{ inputs.base }}
shell: bash -euxo pipefail {0}

View File

@@ -43,9 +43,7 @@ jobs:
fetch-depth: 0
- name: Install cargo nextest
shell: bash -euxo pipefail {0}
run: |
cargo install cargo-nextest --locked
uses: taiki-e/install-action@nextest
- name: Limit target directory size
shell: bash -euxo pipefail {0}

View File

@@ -29,9 +29,6 @@ jobs:
- name: steps::clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}
@@ -78,8 +75,7 @@ jobs:
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
uses: taiki-e/install-action@nextest
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 250
shell: bash -euxo pipefail {0}
@@ -112,9 +108,6 @@ jobs:
- name: steps::clippy
run: ./script/clippy.ps1
shell: pwsh
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: pwsh
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than.ps1 250
shell: pwsh
@@ -484,6 +477,20 @@ jobs:
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
notify_on_failure:
needs:
- upload_release_assets
- auto_release_preview
if: failure()
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: release::notify_on_failure::notify_slack
run: |-
curl -X POST -H 'Content-type: application/json'\
--data '{"text":"${{ github.workflow }} failed: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"}' "$SLACK_WEBHOOK"
shell: bash -euxo pipefail {0}
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_WORKFLOW_FAILURES }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true


@@ -47,9 +47,6 @@ jobs:
- name: steps::clippy
run: ./script/clippy.ps1
shell: pwsh
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: pwsh
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than.ps1 250
shell: pwsh
@@ -493,3 +490,21 @@ jobs:
SENTRY_PROJECT: zed
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
timeout-minutes: 60
notify_on_failure:
needs:
- bundle_linux_aarch64
- bundle_linux_x86_64
- bundle_mac_aarch64
- bundle_mac_x86_64
- bundle_windows_aarch64
- bundle_windows_x86_64
if: failure()
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: release::notify_on_failure::notify_slack
run: |-
curl -X POST -H 'Content-type: application/json'\
--data '{"text":"${{ github.workflow }} failed: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"}' "$SLACK_WEBHOOK"
shell: bash -euxo pipefail {0}
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_WORKFLOW_FAILURES }}


@@ -6,6 +6,9 @@ env:
CARGO_INCREMENTAL: '0'
RUST_BACKTRACE: '1'
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
GOOGLE_CLOUD_PROJECT: ${{ secrets.GOOGLE_CLOUD_PROJECT }}
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_EVAL_TELEMETRY: '1'
MODEL_NAME: ${{ inputs.model_name }}
@@ -48,6 +51,11 @@ jobs:
- name: run_agent_evals::agent_evals::run_eval
run: cargo run --package=eval -- --repetitions=8 --concurrency=1 --model "${MODEL_NAME}"
shell: bash -euxo pipefail {0}
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
GOOGLE_CLOUD_PROJECT: ${{ secrets.GOOGLE_CLOUD_PROJECT }}
- name: steps::cleanup_cargo_config
if: always()
run: |


@@ -0,0 +1,68 @@
# Generated from xtask::workflows::run_cron_unit_evals
# Rebuild with `cargo xtask workflows`.
name: run_cron_unit_evals
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: '0'
RUST_BACKTRACE: '1'
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
on:
schedule:
- cron: 47 1 * * 2
workflow_dispatch: {}
jobs:
cron_unit_evals:
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::cache_rust_dependencies_namespace
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: steps::download_wasi_sdk
run: ./script/download-wasi-sdk
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
uses: taiki-e/install-action@nextest
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 250
shell: bash -euxo pipefail {0}
- name: ./script/run-unit-evals
run: ./script/run-unit-evals
shell: bash -euxo pipefail {0}
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
GOOGLE_CLOUD_PROJECT: ${{ secrets.GOOGLE_CLOUD_PROJECT }}
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
- name: run_agent_evals::cron_unit_evals::send_failure_to_slack
if: ${{ failure() }}
uses: slackapi/slack-github-action@b0fa283ad8fea605de13dc3f449259339835fc52
with:
method: chat.postMessage
token: ${{ secrets.SLACK_APP_ZED_UNIT_EVALS_BOT_TOKEN }}
payload: |
channel: C04UDRNNJFQ
text: "Unit Evals Failed: https://github.com/zed-industries/zed/actions/runs/${{ github.run_id }}"
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true


@@ -113,9 +113,6 @@ jobs:
- name: steps::clippy
run: ./script/clippy.ps1
shell: pwsh
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: pwsh
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than.ps1 250
shell: pwsh
@@ -164,8 +161,7 @@ jobs:
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
uses: taiki-e/install-action@nextest
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 250
shell: bash -euxo pipefail {0}
@@ -200,9 +196,6 @@ jobs:
- name: steps::clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}


@@ -1,17 +1,26 @@
# Generated from xtask::workflows::run_agent_evals
# Generated from xtask::workflows::run_unit_evals
# Rebuild with `cargo xtask workflows`.
name: run_agent_evals
name: run_unit_evals
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: '0'
RUST_BACKTRACE: '1'
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_EVAL_TELEMETRY: '1'
MODEL_NAME: ${{ inputs.model_name }}
on:
schedule:
- cron: 47 1 * * 2
workflow_dispatch: {}
workflow_dispatch:
inputs:
model_name:
description: model_name
required: true
type: string
commit_sha:
description: commit_sha
required: true
type: string
jobs:
unit_evals:
run_unit_evals:
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
@@ -37,8 +46,7 @@ jobs:
run: ./script/download-wasi-sdk
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
uses: taiki-e/install-action@nextest
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 250
shell: bash -euxo pipefail {0}
@@ -47,20 +55,15 @@ jobs:
shell: bash -euxo pipefail {0}
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
- name: run_agent_evals::unit_evals::send_failure_to_slack
if: ${{ failure() }}
uses: slackapi/slack-github-action@b0fa283ad8fea605de13dc3f449259339835fc52
with:
method: chat.postMessage
token: ${{ secrets.SLACK_APP_ZED_UNIT_EVALS_BOT_TOKEN }}
payload: |
channel: C04UDRNNJFQ
text: "Unit Evals Failed: https://github.com/zed-industries/zed/actions/runs/${{ github.run_id }}"
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
GOOGLE_CLOUD_PROJECT: ${{ secrets.GOOGLE_CLOUD_PROJECT }}
UNIT_EVAL_COMMIT: ${{ inputs.commit_sha }}
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.run_id }}
cancel-in-progress: true

Cargo.lock generated

@@ -322,6 +322,7 @@ dependencies = [
"assistant_slash_command",
"assistant_slash_commands",
"assistant_text_thread",
"async-fs",
"audio",
"buffer_diff",
"chrono",
@@ -343,6 +344,7 @@ dependencies = [
"gpui",
"html_to_markdown",
"http_client",
"image",
"indoc",
"itertools 0.14.0",
"jsonschema",
@@ -1461,6 +1463,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "879b6c89592deb404ba4dc0ae6b58ffd1795c78991cbb5b8bc441c48a070440d"
dependencies = [
"aws-lc-sys",
"untrusted 0.7.1",
"zeroize",
]
@@ -3208,6 +3211,7 @@ dependencies = [
"rustc-hash 2.1.1",
"schemars 1.0.4",
"serde",
"serde_json",
"strum 0.27.2",
]
@@ -3687,6 +3691,7 @@ dependencies = [
"collections",
"futures 0.3.31",
"gpui",
"http_client",
"log",
"net",
"parking_lot",
@@ -5311,6 +5316,7 @@ dependencies = [
"serde_json",
"settings",
"supermaven",
"sweep_ai",
"telemetry",
"theme",
"ui",
@@ -5860,6 +5866,7 @@ dependencies = [
"lsp",
"parking_lot",
"pretty_assertions",
"proto",
"semantic_version",
"serde",
"serde_json",
@@ -6248,7 +6255,7 @@ dependencies = [
"futures-core",
"futures-sink",
"nanorand",
"spin",
"spin 0.9.8",
]
[[package]]
@@ -6359,9 +6366,9 @@ checksum = "aa9a19cbb55df58761df49b23516a86d432839add4af60fc256da840f66ed35b"
[[package]]
name = "fork"
version = "0.2.0"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "05dc8b302e04a1c27f4fe694439ef0f29779ca4edc205b7b58f00db04e29656d"
checksum = "30268f1eefccc9d72f43692e8b89e659aeb52e84016c3b32b6e7e9f1c8f38f94"
dependencies = [
"libc",
]
@@ -7287,6 +7294,7 @@ dependencies = [
"calloop",
"calloop-wayland-source",
"cbindgen",
"circular-buffer",
"cocoa 0.26.0",
"cocoa-foundation 0.2.0",
"collections",
@@ -7342,6 +7350,7 @@ dependencies = [
"slotmap",
"smallvec",
"smol",
"spin 0.10.0",
"stacksafe",
"strum 0.27.2",
"sum_tree",
@@ -8656,23 +8665,25 @@ dependencies = [
[[package]]
name = "jupyter-protocol"
version = "0.6.0"
source = "git+https://github.com/ConradIrwin/runtimed?rev=7130c804216b6914355d15d0b91ea91f6babd734#7130c804216b6914355d15d0b91ea91f6babd734"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d9c047f6b5e551563af2ddb13dafed833f0ec5a5b0f9621d5ad740a9ff1e1095"
dependencies = [
"anyhow",
"async-trait",
"bytes 1.10.1",
"chrono",
"futures 0.3.31",
"serde",
"serde_json",
"thiserror 2.0.17",
"uuid",
]
[[package]]
name = "jupyter-websocket-client"
version = "0.9.0"
source = "git+https://github.com/ConradIrwin/runtimed?rev=7130c804216b6914355d15d0b91ea91f6babd734#7130c804216b6914355d15d0b91ea91f6babd734"
version = "0.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4197fa926a6b0bddfed7377d9fed3d00a0dec44a1501e020097bd26604699cae"
dependencies = [
"anyhow",
"async-trait",
@@ -8681,6 +8692,7 @@ dependencies = [
"jupyter-protocol",
"serde",
"serde_json",
"tokio",
"url",
"uuid",
]
@@ -8718,7 +8730,6 @@ dependencies = [
"ui",
"ui_input",
"util",
"vim",
"workspace",
"zed_actions",
]
@@ -8870,6 +8881,7 @@ dependencies = [
"icons",
"image",
"log",
"open_ai",
"open_router",
"parking_lot",
"proto",
@@ -9072,7 +9084,7 @@ version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe"
dependencies = [
"spin",
"spin 0.9.8",
]
[[package]]
@@ -10014,6 +10026,18 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "68354c5c6bd36d73ff3feceb05efa59b6acb7626617f4962be322a825e61f79a"
[[package]]
name = "miniprofiler_ui"
version = "0.1.0"
dependencies = [
"gpui",
"serde_json",
"smol",
"util",
"workspace",
"zed_actions",
]
[[package]]
name = "miniz_oxide"
version = "0.8.9"
@@ -10218,8 +10242,9 @@ dependencies = [
[[package]]
name = "nbformat"
version = "0.10.0"
source = "git+https://github.com/ConradIrwin/runtimed?rev=7130c804216b6914355d15d0b91ea91f6babd734#7130c804216b6914355d15d0b91ea91f6babd734"
version = "0.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89c7229d604d847227002715e1235cd84e81919285d904ccb290a42ecc409348"
dependencies = [
"anyhow",
"chrono",
@@ -10505,11 +10530,10 @@ dependencies = [
[[package]]
name = "num-bigint-dig"
version = "0.8.4"
version = "0.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dc84195820f291c7697304f3cbdadd1cb7199c0efc917ff5eafd71225c136151"
checksum = "e661dda6640fad38e827a6d4a310ff4763082116fe217f279885c97f511bb0b7"
dependencies = [
"byteorder",
"lazy_static",
"libm",
"num-integer",
@@ -11012,6 +11036,7 @@ dependencies = [
"serde_json",
"settings",
"strum 0.27.2",
"thiserror 2.0.17",
]
[[package]]
@@ -13051,6 +13076,23 @@ dependencies = [
"zlog",
]
[[package]]
name = "project_benchmarks"
version = "0.1.0"
dependencies = [
"anyhow",
"clap",
"client",
"futures 0.3.31",
"gpui",
"http_client",
"language",
"node_runtime",
"project",
"settings",
"watch",
]
[[package]]
name = "project_panel"
version = "0.1.0"
@@ -13078,6 +13120,7 @@ dependencies = [
"settings",
"smallvec",
"telemetry",
"tempfile",
"theme",
"ui",
"util",
@@ -13996,6 +14039,7 @@ dependencies = [
"paths",
"pretty_assertions",
"project",
"prompt_store",
"proto",
"rayon",
"release_channel",
@@ -14241,7 +14285,7 @@ dependencies = [
"cfg-if",
"getrandom 0.2.16",
"libc",
"untrusted",
"untrusted 0.9.0",
"windows-sys 0.52.0",
]
@@ -14370,9 +14414,9 @@ dependencies = [
[[package]]
name = "rsa"
version = "0.9.8"
version = "0.9.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78928ac1ed176a5ca1d17e578a1825f3d81ca54cf41053a592584b020cfd691b"
checksum = "40a0376c50d0358279d9d643e4bf7b7be212f1f4ff1da9070a7b54d22ef75c88"
dependencies = [
"const-oid",
"digest",
@@ -14422,25 +14466,26 @@ dependencies = [
[[package]]
name = "runtimelib"
version = "0.25.0"
source = "git+https://github.com/ConradIrwin/runtimed?rev=7130c804216b6914355d15d0b91ea91f6babd734#7130c804216b6914355d15d0b91ea91f6babd734"
version = "0.30.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "481b48894073a0096f28cbe9860af01fc1b861e55b3bc96afafc645ee3de62dc"
dependencies = [
"anyhow",
"async-dispatcher",
"async-std",
"aws-lc-rs",
"base64 0.22.1",
"bytes 1.10.1",
"chrono",
"data-encoding",
"dirs 5.0.1",
"dirs 6.0.0",
"futures 0.3.31",
"glob",
"jupyter-protocol",
"ring",
"serde",
"serde_json",
"shellexpand 3.1.1",
"smol",
"thiserror 2.0.17",
"uuid",
"zeromq",
]
@@ -14708,7 +14753,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b6275d1ee7a1cd780b64aca7726599a1dbc893b1e64144529e55c3c2f745765"
dependencies = [
"ring",
"untrusted",
"untrusted 0.9.0",
]
[[package]]
@@ -14720,7 +14765,7 @@ dependencies = [
"aws-lc-rs",
"ring",
"rustls-pki-types",
"untrusted",
"untrusted 0.9.0",
]
[[package]]
@@ -14950,7 +14995,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "da046153aa2352493d6cb7da4b6e5c0c057d8a1d0a9aa8560baffdd945acd414"
dependencies = [
"ring",
"untrusted",
"untrusted 0.9.0",
]
[[package]]
@@ -15853,6 +15898,15 @@ dependencies = [
"lock_api",
]
[[package]]
name = "spin"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d5fe4ccb98d9c292d56fec89a5e07da7fc4cf0dc11e156b41793132775d3e591"
dependencies = [
"lock_api",
]
[[package]]
name = "spirv"
version = "0.3.0+sdk-1.3.268.0"
@@ -16538,6 +16592,33 @@ dependencies = [
"zeno",
]
[[package]]
name = "sweep_ai"
version = "0.1.0"
dependencies = [
"anyhow",
"arrayvec",
"brotli",
"client",
"collections",
"edit_prediction",
"feature_flags",
"futures 0.3.31",
"gpui",
"http_client",
"indoc",
"language",
"project",
"release_channel",
"reqwest_client",
"serde",
"serde_json",
"tree-sitter-rust",
"util",
"workspace",
"zlog",
]
[[package]]
name = "symphonia"
version = "0.5.5"
@@ -18534,6 +18615,12 @@ version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "673aac59facbab8a9007c7f6108d11f63b603f7cabff99fabf650fea5c32b861"
[[package]]
name = "untrusted"
version = "0.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a156c684c91ea7d62626509bce3cb4e1d9ed5c4d978f7b4352658f96a4c26b4a"
[[package]]
name = "untrusted"
version = "0.9.0"
@@ -21146,7 +21233,7 @@ dependencies = [
[[package]]
name = "zed"
version = "0.213.0"
version = "0.214.0"
dependencies = [
"acp_tools",
"activity_indicator",
@@ -21159,11 +21246,11 @@ dependencies = [
"audio",
"auto_update",
"auto_update_ui",
"backtrace",
"bincode 1.3.3",
"breadcrumbs",
"call",
"channel",
"chrono",
"clap",
"cli",
"client",
@@ -21221,8 +21308,8 @@ dependencies = [
"menu",
"migrator",
"mimalloc",
"miniprofiler_ui",
"nc",
"nix 0.29.0",
"node_runtime",
"notifications",
"onboarding",
@@ -21258,13 +21345,13 @@ dependencies = [
"snippets_ui",
"supermaven",
"svg_preview",
"sweep_ai",
"sysinfo 0.37.2",
"system_specs",
"tab_switcher",
"task",
"tasks_ui",
"telemetry",
"telemetry_events",
"terminal_view",
"theme",
"theme_extension",
@@ -21693,6 +21780,7 @@ dependencies = [
"serde_json",
"settings",
"smol",
"strsim",
"thiserror 2.0.17",
"util",
"uuid",


@@ -110,6 +110,7 @@ members = [
"crates/menu",
"crates/migrator",
"crates/mistral",
"crates/miniprofiler_ui",
"crates/multi_buffer",
"crates/nc",
"crates/net",
@@ -126,6 +127,7 @@ members = [
"crates/picker",
"crates/prettier",
"crates/project",
"crates/project_benchmarks",
"crates/project_panel",
"crates/project_symbols",
"crates/prompt_store",
@@ -163,6 +165,7 @@ members = [
"crates/sum_tree",
"crates/supermaven",
"crates/supermaven_api",
"crates/sweep_ai",
"crates/codestral",
"crates/svg_preview",
"crates/system_specs",
@@ -341,6 +344,7 @@ menu = { path = "crates/menu" }
migrator = { path = "crates/migrator" }
mistral = { path = "crates/mistral" }
multi_buffer = { path = "crates/multi_buffer" }
miniprofiler_ui = { path = "crates/miniprofiler_ui" }
nc = { path = "crates/nc" }
net = { path = "crates/net" }
node_runtime = { path = "crates/node_runtime" }
@@ -395,6 +399,7 @@ streaming_diff = { path = "crates/streaming_diff" }
sum_tree = { path = "crates/sum_tree" }
supermaven = { path = "crates/supermaven" }
supermaven_api = { path = "crates/supermaven_api" }
sweep_ai = { path = "crates/sweep_ai" }
codestral = { path = "crates/codestral" }
system_specs = { path = "crates/system_specs" }
tab_switcher = { path = "crates/tab_switcher" }
@@ -475,6 +480,7 @@ bitflags = "2.6.0"
blade-graphics = { version = "0.7.0" }
blade-macros = { version = "0.3.0" }
blade-util = { version = "0.3.0" }
brotli = "8.0.2"
bytes = "1.0"
cargo_metadata = "0.19"
cargo_toml = "0.21"
@@ -482,7 +488,7 @@ cfg-if = "1.0.3"
chrono = { version = "0.4", features = ["serde"] }
ciborium = "0.2"
circular-buffer = "1.0"
clap = { version = "4.4", features = ["derive"] }
clap = { version = "4.4", features = ["derive", "wrap_help"] }
cocoa = "=0.26.0"
cocoa-foundation = "=0.2.0"
convert_case = "0.8.0"
@@ -504,7 +510,7 @@ emojis = "0.6.1"
env_logger = "0.11"
exec = "0.3.1"
fancy-regex = "0.14.0"
fork = "0.2.0"
fork = "0.4.0"
futures = "0.3"
futures-batch = "0.6.1"
futures-lite = "1.13"
@@ -531,8 +537,8 @@ itertools = "0.14.0"
json_dotpath = "1.1"
jsonschema = "0.30.0"
jsonwebtoken = "9.3"
jupyter-protocol = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
jupyter-websocket-client = { git = "https://github.com/ConradIrwin/runtimed" ,rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
jupyter-protocol = "0.10.0"
jupyter-websocket-client = "0.15.0"
libc = "0.2"
libsqlite3-sys = { version = "0.30.1", features = ["bundled"] }
linkify = "0.10.0"
@@ -545,7 +551,7 @@ minidumper = "0.8"
moka = { version = "0.12.10", features = ["sync"] }
naga = { version = "25.0", features = ["wgsl-in"] }
nanoid = "0.4"
nbformat = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
nbformat = "0.15.0"
nix = "0.29"
num-format = "0.4.4"
num-traits = "0.2"
@@ -616,8 +622,8 @@ reqwest = { git = "https://github.com/zed-industries/reqwest.git", rev = "c15662
"stream",
], package = "zed-reqwest", version = "0.12.15-zed" }
rsa = "0.9.6"
runtimelib = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734", default-features = false, features = [
"async-dispatcher-runtime",
runtimelib = { version = "0.30.0", default-features = false, features = [
"async-dispatcher-runtime", "aws-lc-rs"
] }
rust-embed = { version = "8.4", features = ["include-exclude"] }
rustc-hash = "2.1.0"
@@ -628,6 +634,7 @@ scap = { git = "https://github.com/zed-industries/scap", rev = "4afea48c3b002197
schemars = { version = "1.0", features = ["indexmap2"] }
semver = "1.0"
serde = { version = "1.0.221", features = ["derive", "rc"] }
serde_derive = "1.0.221"
serde_json = { version = "1.0.144", features = ["preserve_order", "raw_value"] }
serde_json_lenient = { version = "0.2", features = [
"preserve_order",
@@ -721,6 +728,7 @@ yawc = "0.2.5"
zeroize = "1.8"
zstd = "0.11"
[workspace.dependencies.windows]
version = "0.61"
features = [
@@ -789,6 +797,19 @@ codegen-units = 16
codegen-units = 16
[profile.dev.package]
# proc-macros start
gpui_macros = { opt-level = 3 }
derive_refineable = { opt-level = 3 }
settings_macros = { opt-level = 3 }
sqlez_macros = { opt-level = 3, codegen-units = 1 }
ui_macros = { opt-level = 3 }
util_macros = { opt-level = 3 }
serde_derive = { opt-level = 3 }
quote = { opt-level = 3 }
syn = { opt-level = 3 }
proc-macro2 = { opt-level = 3 }
# proc-macros end
taffy = { opt-level = 3 }
cranelift-codegen = { opt-level = 3 }
cranelift-codegen-meta = { opt-level = 3 }
@@ -830,7 +851,6 @@ semantic_version = { codegen-units = 1 }
session = { codegen-units = 1 }
snippet = { codegen-units = 1 }
snippets_ui = { codegen-units = 1 }
sqlez_macros = { codegen-units = 1 }
story = { codegen-units = 1 }
supermaven_api = { codegen-units = 1 }
telemetry_events = { codegen-units = 1 }


@@ -1,6 +1,6 @@
# syntax = docker/dockerfile:1.2
FROM rust:1.90-bookworm as builder
FROM rust:1.91.1-bookworm as builder
WORKDIR app
COPY . .


@@ -44,6 +44,7 @@ design
docs
= @probably-neb
= @miguelraz
extension
= @kubkon
@@ -98,6 +99,9 @@ settings_ui
= @danilo-leal
= @probably-neb
support
= @miguelraz
tasks
= @SomeoneToIgnore
= @Veykril

assets/icons/sweep_ai.svg Normal file

@@ -0,0 +1,32 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_3348_16)">
<path fill-rule="evenodd" clip-rule="evenodd" d="M7.97419 6.27207C8.44653 6.29114 8.86622 6.27046 9.23628 6.22425C9.08884 7.48378 8.7346 8.72903 8.16697 9.90688C8.04459 9.83861 7.92582 9.76008 7.81193 9.67108C7.64539 9.54099 7.49799 9.39549 7.37015 9.23818C7.5282 9.54496 7.64901 9.86752 7.73175 10.1986C7.35693 10.6656 6.90663 11.0373 6.412 11.3101C5.01165 10.8075 4.03638 9.63089 4.03638 7.93001C4.03638 6.96185 4.35234 6.07053 4.88281 5.36157C5.34001 5.69449 6.30374 6.20455 7.97419 6.27207ZM8.27511 11.5815C10.3762 11.5349 11.8115 10.7826 12.8347 7.93001C11.6992 7.93001 11.4246 7.10731 11.1188 6.19149C11.0669 6.03596 11.0141 5.87771 10.956 5.72037C10.6733 5.86733 10.2753 6.02782 9.74834 6.13895C9.59658 7.49345 9.20592 8.83238 8.56821 10.0897C8.89933 10.2093 9.24674 10.262 9.5908 10.2502C9.08928 10.4803 8.62468 10.8066 8.22655 11.2255C8.2457 11.3438 8.26186 11.4625 8.27511 11.5815ZM6.62702 7.75422C6.62702 7.50604 6.82821 7.30485 7.07639 7.30485C7.32457 7.30485 7.52576 7.50604 7.52576 7.75422V8.23616C7.52576 8.48435 7.32457 8.68554 7.07639 8.68554C6.82821 8.68554 6.62702 8.48435 6.62702 8.23616V7.75422ZM5.27746 7.30485C5.05086 7.30485 4.86716 7.48854 4.86716 7.71513V8.27525C4.86716 8.50185 5.05086 8.68554 5.27746 8.68554C5.50406 8.68554 5.68776 8.50185 5.68776 8.27525V7.71513C5.68776 7.48854 5.50406 7.30485 5.27746 7.30485Z" fill="white"/>
<mask id="mask0_3348_16" style="mask-type:luminance" maskUnits="userSpaceOnUse" x="4" y="5" width="9" height="7">
<path fill-rule="evenodd" clip-rule="evenodd" d="M7.97419 6.27207C8.44653 6.29114 8.86622 6.27046 9.23628 6.22425C9.08884 7.48378 8.7346 8.72903 8.16697 9.90688C8.04459 9.83861 7.92582 9.76008 7.81193 9.67108C7.64539 9.54099 7.49799 9.39549 7.37015 9.23818C7.5282 9.54496 7.64901 9.86752 7.73175 10.1986C7.35693 10.6656 6.90663 11.0373 6.412 11.3101C5.01165 10.8075 4.03638 9.63089 4.03638 7.93001C4.03638 6.96185 4.35234 6.07053 4.88281 5.36157C5.34001 5.69449 6.30374 6.20455 7.97419 6.27207ZM8.27511 11.5815C10.3762 11.5349 11.8115 10.7826 12.8347 7.93001C11.6992 7.93001 11.4246 7.10731 11.1188 6.19149C11.0669 6.03596 11.0141 5.87771 10.956 5.72037C10.6733 5.86733 10.2753 6.02782 9.74834 6.13895C9.59658 7.49345 9.20592 8.83238 8.56821 10.0897C8.89933 10.2093 9.24674 10.262 9.5908 10.2502C9.08928 10.4803 8.62468 10.8066 8.22655 11.2255C8.2457 11.3438 8.26186 11.4625 8.27511 11.5815ZM6.62702 7.75422C6.62702 7.50604 6.82821 7.30485 7.07639 7.30485C7.32457 7.30485 7.52576 7.50604 7.52576 7.75422V8.23616C7.52576 8.48435 7.32457 8.68554 7.07639 8.68554C6.82821 8.68554 6.62702 8.48435 6.62702 8.23616V7.75422ZM5.27746 7.30485C5.05086 7.30485 4.86716 7.48854 4.86716 7.71513V8.27525C4.86716 8.50185 5.05086 8.68554 5.27746 8.68554C5.50406 8.68554 5.68776 8.50185 5.68776 8.27525V7.71513C5.68776 7.48854 5.50406 7.30485 5.27746 7.30485Z" fill="white"/>
</mask>
<g mask="url(#mask0_3348_16)">
<path d="M9.23617 6.22425L9.39588 6.24293L9.41971 6.0393L9.21624 6.06471L9.23617 6.22425ZM8.16687 9.90688L8.08857 10.0473L8.23765 10.1305L8.31174 9.97669L8.16687 9.90688ZM7.37005 9.23819L7.49487 9.13676L7.22714 9.3118L7.37005 9.23819ZM7.73165 10.1986L7.85702 10.2993L7.90696 10.2371L7.88761 10.1597L7.73165 10.1986ZM6.41189 11.3101L6.35758 11.4615L6.42594 11.486L6.48954 11.4509L6.41189 11.3101ZM4.88271 5.36157L4.97736 5.23159L4.84905 5.13817L4.75397 5.26525L4.88271 5.36157ZM8.27501 11.5815L8.11523 11.5993L8.13151 11.7456L8.27859 11.7423L8.27501 11.5815ZM12.8346 7.93001L12.986 7.98428L13.0631 7.76921H12.8346V7.93001ZM10.9559 5.72037L11.1067 5.66469L11.0436 5.49354L10.8817 5.5777L10.9559 5.72037ZM9.74824 6.13896L9.71508 5.98161L9.60139 6.0056L9.58846 6.12102L9.74824 6.13896ZM8.56811 10.0897L8.42469 10.017L8.34242 10.1792L8.51348 10.241L8.56811 10.0897ZM9.5907 10.2502L9.65775 10.3964L9.58519 10.0896L9.5907 10.2502ZM8.22644 11.2255L8.10992 11.1147L8.05502 11.1725L8.06773 11.2512L8.22644 11.2255ZM9.21624 6.06471C8.85519 6.10978 8.44439 6.13015 7.98058 6.11139L7.96756 6.43272C8.44852 6.45215 8.87701 6.43111 9.25607 6.3838L9.21624 6.06471ZM8.31174 9.97669C8.88724 8.78244 9.2464 7.51988 9.39588 6.24293L9.07647 6.20557C8.93108 7.44772 8.58175 8.67563 8.02203 9.83708L8.31174 9.97669ZM8.2452 9.76645C8.12998 9.70219 8.01817 9.62826 7.91082 9.54438L7.71285 9.79779C7.8333 9.8919 7.95895 9.97503 8.08857 10.0473L8.2452 9.76645ZM7.91082 9.54438C7.75387 9.4218 7.61512 9.28479 7.49487 9.13676L7.24526 9.33957C7.38066 9.50619 7.53671 9.66023 7.71285 9.79779L7.91082 9.54438ZM7.22714 9.3118C7.37944 9.60746 7.49589 9.91837 7.57564 10.2376L7.88761 10.1597C7.80196 9.81663 7.67679 9.48248 7.513 9.16453L7.22714 9.3118ZM7.60624 10.098C7.24483 10.5482 6.81083 10.9065 6.33425 11.1693L6.48954 11.4509C7.00223 11.1682 7.46887 10.7829 7.85702 10.2993L7.60624 10.098ZM3.87549 7.93001C3.87548 9.7042 4.89861 10.9378 6.35758 11.4615L6.46622 11.1588C5.12449 10.6772 4.19707 9.55763 4.19707 
7.93001H3.87549ZM4.75397 5.26525C4.20309 6.00147 3.87549 6.92646 3.87549 7.93001H4.19707C4.19707 6.99724 4.50139 6.13959 5.01145 5.45791L4.75397 5.26525ZM7.98058 6.11139C6.34236 6.04516 5.40922 5.54604 4.97736 5.23159L4.78806 5.49157C5.27058 5.84291 6.26491 6.3639 7.96756 6.43272L7.98058 6.11139ZM8.27859 11.7423C9.34696 11.7185 10.2682 11.515 11.0542 10.9376C11.8388 10.3612 12.4683 9.4273 12.986 7.98428L12.6833 7.8757C12.1776 9.28534 11.5779 10.1539 10.8638 10.6784C10.1511 11.202 9.30417 11.3978 8.27143 11.4208L8.27859 11.7423ZM12.8346 7.76921C12.3148 7.76921 12.0098 7.58516 11.7925 7.30552C11.5639 7.0114 11.4266 6.60587 11.2712 6.14061L10.9662 6.24242C11.1166 6.69294 11.2695 7.15667 11.5385 7.50285C11.8188 7.86347 12.2189 8.09078 12.8346 8.09078V7.76921ZM11.2712 6.14061C11.2195 5.98543 11.1658 5.82478 11.1067 5.66469L10.805 5.77606C10.8621 5.93065 10.9142 6.0865 10.9662 6.24242L11.2712 6.14061ZM10.8817 5.5777C10.6115 5.71821 10.2273 5.87362 9.71508 5.98161L9.78143 6.29626C10.3232 6.18206 10.735 6.0165 11.0301 5.86301L10.8817 5.5777ZM9.58846 6.12102C9.43882 7.45684 9.05355 8.77717 8.42469 10.017L8.71149 10.1625C9.35809 8.88764 9.75417 7.53011 9.90806 6.15685L9.58846 6.12102ZM9.58519 10.0896C9.26119 10.1006 8.93423 10.051 8.62269 9.93854L8.51348 10.241C8.86427 10.3677 9.23205 10.4234 9.5962 10.4109L9.58519 10.0896ZM8.34301 11.3363C8.72675 10.9325 9.17443 10.6181 9.65775 10.3964L9.52365 10.1041C9.00392 10.3425 8.52241 10.6807 8.10992 11.1147L8.34301 11.3363ZM8.43483 11.5638C8.4213 11.4421 8.40475 11.3207 8.3852 11.1998L8.06773 11.2512C8.08644 11.3668 8.10225 11.4829 8.11523 11.5993L8.43483 11.5638ZM7.07629 7.14405C6.73931 7.14405 6.46613 7.41724 6.46613 7.75423H6.7877C6.7877 7.59484 6.91691 7.46561 7.07629 7.46561V7.14405ZM7.68646 7.75423C7.68646 7.41724 7.41326 7.14405 7.07629 7.14405V7.46561C7.23567 7.46561 7.36489 7.59484 7.36489 7.75423H7.68646ZM7.68646 8.23616V7.75423H7.36489V8.23616H7.68646ZM7.07629 8.84634C7.41326 8.84634 7.68646 8.57315 7.68646 
8.23616H7.36489C7.36489 8.39555 7.23567 8.52474 7.07629 8.52474V8.84634ZM6.46613 8.23616C6.46613 8.57315 6.73931 8.84634 7.07629 8.84634V8.52474C6.91691 8.52474 6.7877 8.39555 6.7877 8.23616H6.46613ZM6.46613 7.75423V8.23616H6.7877V7.75423H6.46613ZM5.02785 7.71514C5.02785 7.57734 5.13956 7.46561 5.27736 7.46561V7.14405C4.96196 7.14405 4.70627 7.39974 4.70627 7.71514H5.02785ZM5.02785 8.27525V7.71514H4.70627V8.27525H5.02785ZM5.27736 8.52474C5.13956 8.52474 5.02785 8.41305 5.02785 8.27525H4.70627C4.70627 8.59065 4.96196 8.84634 5.27736 8.84634V8.52474ZM5.52687 8.27525C5.52687 8.41305 5.41516 8.52474 5.27736 8.52474V8.84634C5.59277 8.84634 5.84845 8.59065 5.84845 8.27525H5.52687ZM5.52687 7.71514V8.27525H5.84845V7.71514H5.52687ZM5.27736 7.46561C5.41516 7.46561 5.52687 7.57734 5.52687 7.71514H5.84845C5.84845 7.39974 5.59277 7.14405 5.27736 7.14405V7.46561Z" fill="white"/>
</g>
<path fill-rule="evenodd" clip-rule="evenodd" d="M7.12635 14.5901C7.22369 14.3749 7.3069 14.1501 7.37454 13.9167C7.54132 13.3412 7.5998 12.7599 7.56197 12.1948C7.53665 12.5349 7.47589 12.8775 7.37718 13.2181C7.23926 13.694 7.03667 14.1336 6.78174 14.5301C6.89605 14.5547 7.01101 14.5747 7.12635 14.5901Z" fill="white"/>
<path d="M9.71984 7.74796C9.50296 7.74796 9.29496 7.83412 9.14159 7.98745C8.98822 8.14082 8.9021 8.34882 8.9021 8.5657C8.9021 8.78258 8.98822 8.99057 9.14159 9.14394C9.29496 9.29728 9.50296 9.38344 9.71984 9.38344V8.5657V7.74796Z" fill="white"/>
<mask id="mask1_3348_16" style="mask-type:luminance" maskUnits="userSpaceOnUse" x="5" y="2" width="8" height="9">
<path d="M12.3783 2.9985H5.36792V10.3954H12.3783V2.9985Z" fill="white"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M9.75733 3.61999C9.98577 5.80374 9.60089 8.05373 8.56819 10.0898C8.43122 10.0403 8.29704 9.9794 8.16699 9.90688C9.15325 7.86033 9.49538 5.61026 9.22757 3.43526C9.39923 3.51584 9.57682 3.57729 9.75733 3.61999Z" fill="black"/>
</mask>
<g mask="url(#mask1_3348_16)">
<path d="M8.56815 10.0898L8.67689 10.1449L8.62812 10.241L8.52678 10.2044L8.56815 10.0898ZM9.75728 3.61998L9.78536 3.50136L9.86952 3.52127L9.87853 3.6073L9.75728 3.61998ZM8.16695 9.90687L8.1076 10.0133L8.00732 9.9574L8.05715 9.85398L8.16695 9.90687ZM9.22753 3.43524L9.10656 3.45014L9.07958 3.23116L9.27932 3.32491L9.22753 3.43524ZM8.45945 10.0346C9.48122 8.02009 9.86217 5.79374 9.63608 3.63266L9.87853 3.6073C10.1093 5.81372 9.72048 8.0873 8.67689 10.1449L8.45945 10.0346ZM8.22633 9.80041C8.35056 9.86971 8.47876 9.92791 8.60956 9.97514L8.52678 10.2044C8.38363 10.1527 8.24344 10.0891 8.1076 10.0133L8.22633 9.80041ZM9.34849 3.42035C9.61905 5.61792 9.27346 7.89158 8.27675 9.9598L8.05715 9.85398C9.03298 7.82905 9.37158 5.60258 9.10656 3.45014L9.34849 3.42035ZM9.72925 3.7386C9.54064 3.69399 9.3551 3.62977 9.17573 3.54558L9.27932 3.32491C9.44327 3.40188 9.61288 3.46058 9.78536 3.50136L9.72925 3.7386Z" fill="white"/>
</g>
<path fill-rule="evenodd" clip-rule="evenodd" d="M11.4118 3.46925L11.2416 3.39926L11.1904 3.57611L11.349 3.62202C11.1904 3.57611 11.1904 3.57615 11.1904 3.5762L11.1903 3.57631L11.1902 3.57658L11.19 3.57741L11.1893 3.58009C11.1886 3.58233 11.1878 3.58548 11.1867 3.58949C11.1845 3.5975 11.1814 3.60897 11.1777 3.62359C11.1703 3.6528 11.1603 3.69464 11.1493 3.74656C11.1275 3.85017 11.102 3.99505 11.0869 4.16045C11.0573 4.4847 11.0653 4.91594 11.2489 5.26595C11.2613 5.28944 11.2643 5.31174 11.2625 5.32629C11.261 5.33849 11.2572 5.34226 11.2536 5.3449C11.0412 5.50026 10.5639 5.78997 9.76653 5.96607C9.76095 6.02373 9.75493 6.08134 9.74848 6.13895C10.601 5.95915 11.1161 5.65017 11.3511 5.4782C11.4413 5.41219 11.4471 5.28823 11.3952 5.18922C11.1546 4.73063 11.2477 4.08248 11.3103 3.78401C11.3314 3.68298 11.349 3.62202 11.349 3.62202C11.3745 3.6325 11.4002 3.63983 11.4259 3.64425C11.9083 3.72709 12.4185 2.78249 12.6294 2.33939C12.6852 2.22212 12.6234 2.08843 12.497 2.05837C11.2595 1.76399 5.46936 0.631807 4.57214 4.96989C4.55907 5.03307 4.57607 5.10106 4.62251 5.14584C4.87914 5.39322 5.86138 6.18665 7.9743 6.27207C8.44664 6.29114 8.86633 6.27046 9.23638 6.22425C9.24295 6.16797 9.24912 6.1117 9.25491 6.05534C8.88438 6.10391 8.46092 6.12641 7.98094 6.10702C5.91152 6.02337 4.96693 5.24843 4.73714 5.02692C4.73701 5.02679 4.73545 5.02525 4.73422 5.0208C4.73292 5.01611 4.73254 5.00987 4.73388 5.00334C4.94996 3.95861 5.4573 3.25195 6.11188 2.77714C6.77039 2.29947 7.58745 2.04983 8.42824 1.94075C10.1122 1.72228 11.8454 2.07312 12.4588 2.21906C12.4722 2.22225 12.4787 2.22927 12.4819 2.2362C12.4853 2.24342 12.4869 2.25443 12.4803 2.2684C12.3706 2.49879 12.183 2.85746 11.9656 3.13057C11.8564 3.26783 11.7479 3.37295 11.6469 3.43216C11.5491 3.48956 11.4752 3.49529 11.4118 3.46925Z" fill="white"/>
<mask id="mask2_3348_16" style="mask-type:luminance" maskUnits="userSpaceOnUse" x="3" y="9" width="7" height="6">
<path fill-rule="evenodd" clip-rule="evenodd" d="M8.22654 11.2255C8.62463 10.8066 9.08923 10.4803 9.59075 10.2502C8.97039 10.2715 8.33933 10.0831 7.81189 9.67109C7.64534 9.541 7.49795 9.39549 7.37014 9.23819C7.52815 9.54497 7.64896 9.86752 7.7317 10.1986C6.70151 11.4821 5.1007 12.0466 3.57739 11.8125C3.85909 12.527 4.32941 13.178 4.97849 13.6851C5.8625 14.3756 6.92544 14.6799 7.96392 14.6227C8.32513 13.5174 8.4085 12.351 8.22654 11.2255Z" fill="white"/>
</mask>
<g mask="url(#mask2_3348_16)">
<path d="M9.59085 10.2502L9.58389 10.0472L9.67556 10.4349L9.59085 10.2502ZM8.22663 11.2255L8.02607 11.258L8.00999 11.1585L8.07936 11.0856L8.22663 11.2255ZM7.37024 9.23819L7.18961 9.33119L7.52789 9.11006L7.37024 9.23819ZM7.7318 10.1986L7.92886 10.1494L7.95328 10.2472L7.8902 10.3258L7.7318 10.1986ZM3.57749 11.8125L3.3885 11.887L3.25879 11.5579L3.60835 11.6117L3.57749 11.8125ZM7.96402 14.6227L8.15711 14.6858L8.11397 14.8179L7.97519 14.8255L7.96402 14.6227ZM9.67556 10.4349C9.19708 10.6544 8.7538 10.9657 8.37387 11.3655L8.07936 11.0856C8.49566 10.6475 8.98161 10.3062 9.50614 10.0656L9.67556 10.4349ZM7.93704 9.51099C8.42551 9.89261 9.00942 10.0669 9.58389 10.0472L9.59781 10.4533C8.93151 10.4761 8.25334 10.2737 7.68693 9.83118L7.93704 9.51099ZM7.52789 9.11006C7.64615 9.25565 7.78261 9.39038 7.93704 9.51099L7.68693 9.83118C7.50827 9.69161 7.34994 9.53537 7.21254 9.36627L7.52789 9.11006ZM7.5347 10.2479C7.45573 9.93178 7.34043 9.62393 7.18961 9.33119L7.55082 9.14514C7.71611 9.466 7.84242 9.80326 7.92886 10.1494L7.5347 10.2479ZM3.60835 11.6117C5.06278 11.8352 6.59038 11.2962 7.57335 10.0715L7.8902 10.3258C6.81284 11.6681 5.1388 12.258 3.54663 12.0133L3.60835 11.6117ZM4.85352 13.8452C4.17512 13.3152 3.68312 12.6343 3.3885 11.887L3.76648 11.738C4.03524 12.4197 4.4839 13.0409 5.10364 13.525L4.85352 13.8452ZM7.97519 14.8255C6.8895 14.8853 5.77774 14.5672 4.85352 13.8452L5.10364 13.525C5.94745 14.1842 6.96157 14.4744 7.95285 14.4198L7.97519 14.8255ZM8.42716 11.1931C8.61419 12.3499 8.52858 13.5491 8.15711 14.6858L7.77093 14.5596C8.12191 13.4857 8.20296 12.352 8.02607 11.258L8.42716 11.1931Z" fill="white"/>
</g>
</g>
<defs>
<clipPath id="clip0_3348_16">
<rect width="9.63483" height="14" fill="white" transform="translate(3.19995 1.5)"/>
</clipPath>
</defs>
</svg>

After

Size: 14 KiB

View File

@@ -313,7 +313,7 @@
"use_key_equivalents": true,
"bindings": {
"cmd-n": "agent::NewTextThread",
"cmd-alt-t": "agent::NewThread"
"cmd-alt-n": "agent::NewExternalAgentThread"
}
},
{

View File

@@ -421,12 +421,6 @@
"ctrl-[": "editor::Cancel"
}
},
{
"context": "vim_mode == helix_select && !menu",
"bindings": {
"escape": "vim::SwitchToHelixNormalMode"
}
},
{
"context": "(vim_mode == helix_normal || vim_mode == helix_select) && !menu",
"bindings": {

View File

@@ -742,14 +742,31 @@
// "never"
"show": "always"
},
// Sort order for entries in the project panel.
// This setting can take three values:
//
// 1. Show directories first, then files:
// "directories_first"
// 2. Mix directories and files together:
// "mixed"
// 3. Show files first, then directories:
// "files_first"
"sort_mode": "directories_first",
// Whether to enable drag-and-drop operations in the project panel.
"drag_and_drop": true,
// Whether to hide the root entry when only one folder is open in the window.
"hide_root": false,
// Whether to hide the hidden entries in the project panel.
"hide_hidden": false,
// Whether to automatically open files when pasting them in the project panel.
"open_file_on_paste": true
// Settings for automatically opening files.
"auto_open": {
// Whether to automatically open newly created files in the editor.
"on_create": true,
// Whether to automatically open files after pasting or duplicating them.
"on_paste": true,
// Whether to automatically open files dropped from external sources.
"on_drop": true
}
},
"outline_panel": {
// Whether to show the outline panel button in the status bar
@@ -1543,6 +1560,8 @@
// Default: 10_000, maximum: 100_000 (all bigger values set will be treated as 100_000), 0 disables the scrolling.
// Existing terminals will not pick up this change until they are recreated.
"max_scroll_history_lines": 10000,
// The multiplier for scrolling speed in the terminal.
"scroll_multiplier": 1.0,
// The minimum APCA perceptual contrast between foreground and background colors.
// APCA (Accessible Perceptual Contrast Algorithm) is more accurate than WCAG 2.x,
// especially for dark mode. Values range from 0 to 106.

View File

@@ -1866,10 +1866,14 @@ impl AcpThread {
.checkpoint
.as_ref()
.map(|c| c.git_checkpoint.clone());
// Cancel any in-progress generation before restoring
let cancel_task = self.cancel(cx);
let rewind = self.rewind(id.clone(), cx);
let git_store = self.project.read(cx).git_store().clone();
cx.spawn(async move |_, cx| {
cancel_task.await;
rewind.await?;
if let Some(checkpoint) = checkpoint {
git_store
@@ -1894,9 +1898,25 @@ impl AcpThread {
cx.update(|cx| truncate.run(id.clone(), cx))?.await?;
this.update(cx, |this, cx| {
if let Some((ix, _)) = this.user_message_mut(&id) {
// Collect all terminals from entries that will be removed
let terminals_to_remove: Vec<acp::TerminalId> = this.entries[ix..]
.iter()
.flat_map(|entry| entry.terminals())
.filter_map(|terminal| terminal.read(cx).id().clone().into())
.collect();
let range = ix..this.entries.len();
this.entries.truncate(ix);
cx.emit(AcpThreadEvent::EntriesRemoved(range));
// Kill and remove the terminals
for terminal_id in terminals_to_remove {
if let Some(terminal) = this.terminals.remove(&terminal_id) {
terminal.update(cx, |terminal, cx| {
terminal.kill(cx);
});
}
}
}
this.action_log().update(cx, |action_log, cx| {
action_log.reject_all_edits(Some(telemetry), cx)
@@ -3803,4 +3823,314 @@ mod tests {
}
});
}
/// Tests that restoring a checkpoint properly cleans up terminals that were
/// created after that checkpoint, and cancels any in-progress generation.
///
/// Reproduces issue #35142: When a checkpoint is restored, any terminal processes
/// that were started after that checkpoint should be terminated, and any in-progress
/// AI generation should be canceled.
#[gpui::test]
async fn test_restore_checkpoint_kills_terminal(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
let project = Project::test(fs, [], cx).await;
let connection = Rc::new(FakeAgentConnection::new());
let thread = cx
.update(|cx| connection.new_thread(project, Path::new(path!("/test")), cx))
.await
.unwrap();
// Send first user message to create a checkpoint
cx.update(|cx| {
thread.update(cx, |thread, cx| {
thread.send(vec!["first message".into()], cx)
})
})
.await
.unwrap();
// Send second message (creates another checkpoint) - we'll restore to this one
cx.update(|cx| {
thread.update(cx, |thread, cx| {
thread.send(vec!["second message".into()], cx)
})
})
.await
.unwrap();
// Create 2 terminals BEFORE the checkpoint that have completed running
let terminal_id_1 = acp::TerminalId(uuid::Uuid::new_v4().to_string().into());
let mock_terminal_1 = cx.new(|cx| {
let builder = ::terminal::TerminalBuilder::new_display_only(
::terminal::terminal_settings::CursorShape::default(),
::terminal::terminal_settings::AlternateScroll::On,
None,
0,
)
.unwrap();
builder.subscribe(cx)
});
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Created {
terminal_id: terminal_id_1.clone(),
label: "echo 'first'".to_string(),
cwd: Some(PathBuf::from("/test")),
output_byte_limit: None,
terminal: mock_terminal_1.clone(),
},
cx,
);
});
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Output {
terminal_id: terminal_id_1.clone(),
data: b"first\n".to_vec(),
},
cx,
);
});
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Exit {
terminal_id: terminal_id_1.clone(),
status: acp::TerminalExitStatus {
exit_code: Some(0),
signal: None,
meta: None,
},
},
cx,
);
});
let terminal_id_2 = acp::TerminalId(uuid::Uuid::new_v4().to_string().into());
let mock_terminal_2 = cx.new(|cx| {
let builder = ::terminal::TerminalBuilder::new_display_only(
::terminal::terminal_settings::CursorShape::default(),
::terminal::terminal_settings::AlternateScroll::On,
None,
0,
)
.unwrap();
builder.subscribe(cx)
});
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Created {
terminal_id: terminal_id_2.clone(),
label: "echo 'second'".to_string(),
cwd: Some(PathBuf::from("/test")),
output_byte_limit: None,
terminal: mock_terminal_2.clone(),
},
cx,
);
});
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Output {
terminal_id: terminal_id_2.clone(),
data: b"second\n".to_vec(),
},
cx,
);
});
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Exit {
terminal_id: terminal_id_2.clone(),
status: acp::TerminalExitStatus {
exit_code: Some(0),
signal: None,
meta: None,
},
},
cx,
);
});
// Get the second message ID to restore to
let second_message_id = thread.read_with(cx, |thread, _| {
// At this point we have:
// - Index 0: First user message (with checkpoint)
// - Index 1: Second user message (with checkpoint)
// No assistant responses because FakeAgentConnection just returns EndTurn
let AgentThreadEntry::UserMessage(message) = &thread.entries[1] else {
panic!("expected user message at index 1");
};
message.id.clone().unwrap()
});
// Create a terminal AFTER the checkpoint we'll restore to.
// This simulates the AI agent starting a long-running terminal command.
let terminal_id = acp::TerminalId(uuid::Uuid::new_v4().to_string().into());
let mock_terminal = cx.new(|cx| {
let builder = ::terminal::TerminalBuilder::new_display_only(
::terminal::terminal_settings::CursorShape::default(),
::terminal::terminal_settings::AlternateScroll::On,
None,
0,
)
.unwrap();
builder.subscribe(cx)
});
// Register the terminal as created
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Created {
terminal_id: terminal_id.clone(),
label: "sleep 1000".to_string(),
cwd: Some(PathBuf::from("/test")),
output_byte_limit: None,
terminal: mock_terminal.clone(),
},
cx,
);
});
// Simulate the terminal producing output (still running)
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Output {
terminal_id: terminal_id.clone(),
data: b"terminal is running...\n".to_vec(),
},
cx,
);
});
// Create a tool call entry that references this terminal
// This represents the agent requesting a terminal command
thread.update(cx, |thread, cx| {
thread
.handle_session_update(
acp::SessionUpdate::ToolCall(acp::ToolCall {
id: acp::ToolCallId("terminal-tool-1".into()),
title: "Running command".into(),
kind: acp::ToolKind::Execute,
status: acp::ToolCallStatus::InProgress,
content: vec![acp::ToolCallContent::Terminal {
terminal_id: terminal_id.clone(),
}],
locations: vec![],
raw_input: Some(
serde_json::json!({"command": "sleep 1000", "cd": "/test"}),
),
raw_output: None,
meta: None,
}),
cx,
)
.unwrap();
});
// Verify terminal exists and is in the thread
let terminal_exists_before =
thread.read_with(cx, |thread, _| thread.terminals.contains_key(&terminal_id));
assert!(
terminal_exists_before,
"Terminal should exist before checkpoint restore"
);
// Verify the terminal's underlying task is still running (not completed)
let terminal_running_before = thread.read_with(cx, |thread, _cx| {
let terminal_entity = thread.terminals.get(&terminal_id).unwrap();
terminal_entity.read_with(cx, |term, _cx| {
term.output().is_none() // output() is None while the command is still running
})
});
assert!(
terminal_running_before,
"Terminal should be running before checkpoint restore"
);
// Verify we have the expected entries before restore
let entry_count_before = thread.read_with(cx, |thread, _| thread.entries.len());
assert!(
entry_count_before > 1,
"Should have multiple entries before restore"
);
// Restore the checkpoint to the second message.
// This should:
// 1. Cancel any in-progress generation (via the cancel() call)
// 2. Remove the terminal that was created after that point
thread
.update(cx, |thread, cx| {
thread.restore_checkpoint(second_message_id, cx)
})
.await
.unwrap();
// Verify that no send_task is in progress after restore
// (cancel() clears the send_task)
let has_send_task_after = thread.read_with(cx, |thread, _| thread.send_task.is_some());
assert!(
!has_send_task_after,
"Should not have a send_task after restore (cancel should have cleared it)"
);
// Verify the entries were truncated (restoring to index 1 truncates at 1, keeping only index 0)
let entry_count = thread.read_with(cx, |thread, _| thread.entries.len());
assert_eq!(
entry_count, 1,
"Should have 1 entry after restore (only the first user message)"
);
// Verify the 2 completed terminals from before the checkpoint still exist
let terminal_1_exists = thread.read_with(cx, |thread, _| {
thread.terminals.contains_key(&terminal_id_1)
});
assert!(
terminal_1_exists,
"Terminal 1 (from before checkpoint) should still exist"
);
let terminal_2_exists = thread.read_with(cx, |thread, _| {
thread.terminals.contains_key(&terminal_id_2)
});
assert!(
terminal_2_exists,
"Terminal 2 (from before checkpoint) should still exist"
);
// Verify they're still in completed state
let terminal_1_completed = thread.read_with(cx, |thread, _cx| {
let terminal_entity = thread.terminals.get(&terminal_id_1).unwrap();
terminal_entity.read_with(cx, |term, _cx| term.output().is_some())
});
assert!(terminal_1_completed, "Terminal 1 should still be completed");
let terminal_2_completed = thread.read_with(cx, |thread, _cx| {
let terminal_entity = thread.terminals.get(&terminal_id_2).unwrap();
terminal_entity.read_with(cx, |term, _cx| term.output().is_some())
});
assert!(terminal_2_completed, "Terminal 2 should still be completed");
// Verify the running terminal (created after checkpoint) was removed
let terminal_3_exists =
thread.read_with(cx, |thread, _| thread.terminals.contains_key(&terminal_id));
assert!(
!terminal_3_exists,
"Terminal 3 (created after checkpoint) should have been removed"
);
// Verify total count is 2 (the two from before the checkpoint)
let terminal_count = thread.read_with(cx, |thread, _| thread.terminals.len());
assert_eq!(
terminal_count, 2,
"Should have exactly 2 terminals (the completed ones from before checkpoint)"
);
}
}

View File

@@ -133,9 +133,7 @@ impl LanguageModels {
for model in provider.provided_models(cx) {
let model_info = Self::map_language_model_to_info(&model, &provider);
let model_id = model_info.id.clone();
if !recommended_models.contains(&(model.provider_id(), model.id())) {
provider_models.push(model_info);
}
provider_models.push(model_info);
models.insert(model_id, model);
}
if !provider_models.is_empty() {

View File

@@ -150,6 +150,7 @@ impl DbThread {
.unwrap_or_default(),
input: tool_use.input,
is_input_complete: true,
thought_signature: None,
},
));
}

View File

@@ -15,12 +15,14 @@ const SEPARATOR_MARKER: &str = "=======";
const REPLACE_MARKER: &str = ">>>>>>> REPLACE";
const SONNET_PARAMETER_INVOKE_1: &str = "</parameter>\n</invoke>";
const SONNET_PARAMETER_INVOKE_2: &str = "</parameter></invoke>";
const END_TAGS: [&str; 5] = [
const SONNET_PARAMETER_INVOKE_3: &str = "</parameter>";
const END_TAGS: [&str; 6] = [
OLD_TEXT_END_TAG,
NEW_TEXT_END_TAG,
EDITS_END_TAG,
SONNET_PARAMETER_INVOKE_1, // Remove this after switching to streaming tool call
SONNET_PARAMETER_INVOKE_1, // Remove these after switching to streaming tool call
SONNET_PARAMETER_INVOKE_2,
SONNET_PARAMETER_INVOKE_3,
];
#[derive(Debug)]
@@ -567,21 +569,29 @@ mod tests {
parse_random_chunks(
indoc! {"
<old_text>some text</old_text><new_text>updated text</parameter></invoke>
<old_text>more text</old_text><new_text>upd</parameter></new_text>
"},
&mut parser,
&mut rng
),
vec![Edit {
old_text: "some text".to_string(),
new_text: "updated text".to_string(),
line_hint: None,
},]
vec![
Edit {
old_text: "some text".to_string(),
new_text: "updated text".to_string(),
line_hint: None,
},
Edit {
old_text: "more text".to_string(),
new_text: "upd".to_string(),
line_hint: None,
},
]
);
assert_eq!(
parser.finish(),
EditParserMetrics {
tags: 2,
mismatched_tags: 1
tags: 4,
mismatched_tags: 2
}
);
}

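The `END_TAGS` change above exists because a streamed model response can end a chunk in the middle of a closing tag (e.g. `</parameter>` split across chunks). A parser handling this typically checks whether the buffer ends with a *prefix* of any known end tag and, if so, waits for more input instead of reporting a mismatched tag. A minimal sketch of that check (tag list and function name are illustrative, not the parser's actual API):

```rust
// Subset of end tags, for illustration only.
const END_TAGS: [&str; 3] = ["</old_text>", "</new_text>", "</parameter>"];

// True if `buf` ends with a proper prefix of any end tag, meaning the
// streaming parser should buffer and wait for the next chunk.
fn ends_with_partial_tag(buf: &str) -> bool {
    END_TAGS
        .iter()
        .any(|tag| (1..tag.len()).any(|n| buf.ends_with(&tag[..n])))
}

fn main() {
    // "</old_" is a prefix of "</old_text>": hold off and wait for more input.
    assert!(ends_with_partial_tag("some text</old_"));
    // Plain text matches no tag prefix, so it can be emitted immediately.
    assert!(!ends_with_partial_tag("some text"));
}
```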
View File

@@ -1108,6 +1108,7 @@ fn tool_use(
raw_input: serde_json::to_string_pretty(&input).unwrap(),
input: serde_json::to_value(input).unwrap(),
is_input_complete: true,
thought_signature: None,
})
}

View File

@@ -44,6 +44,25 @@ pub async fn get_buffer_content_or_outline(
.collect::<Vec<_>>()
})?;
// If no outline exists, fall back to first 1KB so the agent has some context
if outline_items.is_empty() {
let text = buffer.read_with(cx, |buffer, _| {
let snapshot = buffer.snapshot();
let len = snapshot.len().min(1024);
let content = snapshot.text_for_range(0..len).collect::<String>();
if let Some(path) = path {
format!("# First 1KB of {path} (file too large to show full content, and no outline available)\n\n{content}")
} else {
format!("# First 1KB of file (file too large to show full content, and no outline available)\n\n{content}")
}
})?;
return Ok(BufferContent {
text,
is_outline: false,
});
}
let outline_text = render_outline(outline_items, None, 0, usize::MAX).await?;
let text = if let Some(path) = path {
@@ -140,3 +159,62 @@ fn render_entries(
entries_rendered
}
#[cfg(test)]
mod tests {
use super::*;
use fs::FakeFs;
use gpui::TestAppContext;
use project::Project;
use settings::SettingsStore;
#[gpui::test]
async fn test_large_file_fallback_to_subset(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings = SettingsStore::test(cx);
cx.set_global(settings);
});
let fs = FakeFs::new(cx.executor());
let project = Project::test(fs, [], cx).await;
let content = "A".repeat(100 * 1024); // 100KB
let content_len = content.len();
let buffer = project
.update(cx, |project, cx| project.create_buffer(true, cx))
.await
.expect("failed to create buffer");
buffer.update(cx, |buffer, cx| buffer.set_text(content, cx));
let result = cx
.spawn(|cx| async move { get_buffer_content_or_outline(buffer, None, &cx).await })
.await
.unwrap();
// Should contain some of the actual file content
assert!(
result.text.contains("AAAAAAAAAA"),
"Result did not contain content subset"
);
// Should be marked as not an outline (it's truncated content)
assert!(
!result.is_outline,
"Large file without outline should not be marked as outline"
);
// Should be reasonably sized (much smaller than original)
assert!(
result.text.len() < 50 * 1024,
"Result size {} should be smaller than 50KB",
result.text.len()
);
// Should be significantly smaller than the original content
assert!(
result.text.len() < content_len / 10,
"Result should be much smaller than original content"
);
}
}

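The fallback in the hunk above slices the first 1KB of a buffer when no outline is available. One subtlety when doing this on raw text is that a byte limit can land inside a multi-byte UTF-8 character, so the cut point must back up to a character boundary. A self-contained sketch of that truncation (function name hypothetical; Zed's buffer API collects ranges rather than slicing a `&str` directly):

```rust
// Return at most `limit` bytes of `text`, backing up to a UTF-8 character
// boundary so the slice is always valid.
fn first_bytes(text: &str, limit: usize) -> &str {
    if text.len() <= limit {
        return text;
    }
    let mut end = limit;
    while !text.is_char_boundary(end) {
        end -= 1;
    }
    &text[..end]
}

fn main() {
    // Mirrors the 100KB-file test above: the result is capped at 1KB.
    let big = "A".repeat(100 * 1024);
    assert_eq!(first_bytes(&big, 1024).len(), 1024);
    // Multi-byte chars are never split: 'é' is 2 bytes, so a 1-byte limit yields "".
    assert_eq!(first_bytes("é", 1), "");
}
```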
View File

@@ -21,9 +21,9 @@ use gpui::{
use indoc::indoc;
use language_model::{
LanguageModel, LanguageModelCompletionError, LanguageModelCompletionEvent, LanguageModelId,
LanguageModelProviderName, LanguageModelRegistry, LanguageModelRequest,
LanguageModelRequestMessage, LanguageModelToolResult, LanguageModelToolSchemaFormat,
LanguageModelToolUse, MessageContent, Role, StopReason, fake_provider::FakeLanguageModel,
LanguageModelRegistry, LanguageModelRequest, LanguageModelRequestMessage,
LanguageModelToolResult, LanguageModelToolSchemaFormat, LanguageModelToolUse, MessageContent,
Role, StopReason, fake_provider::FakeLanguageModel,
};
use pretty_assertions::assert_eq;
use project::{
@@ -274,6 +274,7 @@ async fn test_prompt_caching(cx: &mut TestAppContext) {
raw_input: json!({"text": "test"}).to_string(),
input: json!({"text": "test"}),
is_input_complete: true,
thought_signature: None,
};
fake_model
.send_last_completion_stream_event(LanguageModelCompletionEvent::ToolUse(tool_use.clone()));
@@ -461,6 +462,7 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
raw_input: "{}".into(),
input: json!({}),
is_input_complete: true,
thought_signature: None,
},
));
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::ToolUse(
@@ -470,6 +472,7 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
raw_input: "{}".into(),
input: json!({}),
is_input_complete: true,
thought_signature: None,
},
));
fake_model.end_last_completion_stream();
@@ -520,6 +523,7 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
raw_input: "{}".into(),
input: json!({}),
is_input_complete: true,
thought_signature: None,
},
));
fake_model.end_last_completion_stream();
@@ -554,6 +558,7 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
raw_input: "{}".into(),
input: json!({}),
is_input_complete: true,
thought_signature: None,
},
));
fake_model.end_last_completion_stream();
@@ -592,6 +597,7 @@ async fn test_tool_hallucination(cx: &mut TestAppContext) {
raw_input: "{}".into(),
input: json!({}),
is_input_complete: true,
thought_signature: None,
},
));
fake_model.end_last_completion_stream();
@@ -621,6 +627,7 @@ async fn test_resume_after_tool_use_limit(cx: &mut TestAppContext) {
raw_input: "{}".into(),
input: serde_json::to_value(&EchoToolInput { text: "def".into() }).unwrap(),
is_input_complete: true,
thought_signature: None,
};
fake_model
.send_last_completion_stream_event(LanguageModelCompletionEvent::ToolUse(tool_use.clone()));
@@ -657,9 +664,7 @@ async fn test_resume_after_tool_use_limit(cx: &mut TestAppContext) {
);
// Simulate reaching tool use limit.
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::StatusUpdate(
cloud_llm_client::CompletionRequestStatus::ToolUseLimitReached,
));
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::ToolUseLimitReached);
fake_model.end_last_completion_stream();
let last_event = events.collect::<Vec<_>>().await.pop().unwrap();
assert!(
@@ -731,6 +736,7 @@ async fn test_send_after_tool_use_limit(cx: &mut TestAppContext) {
raw_input: "{}".into(),
input: serde_json::to_value(&EchoToolInput { text: "def".into() }).unwrap(),
is_input_complete: true,
thought_signature: None,
};
let tool_result = LanguageModelToolResult {
tool_use_id: "tool_id_1".into(),
@@ -741,9 +747,7 @@ async fn test_send_after_tool_use_limit(cx: &mut TestAppContext) {
};
fake_model
.send_last_completion_stream_event(LanguageModelCompletionEvent::ToolUse(tool_use.clone()));
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::StatusUpdate(
cloud_llm_client::CompletionRequestStatus::ToolUseLimitReached,
));
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::ToolUseLimitReached);
fake_model.end_last_completion_stream();
let last_event = events.collect::<Vec<_>>().await.pop().unwrap();
assert!(
@@ -1037,6 +1041,7 @@ async fn test_mcp_tools(cx: &mut TestAppContext) {
raw_input: json!({"text": "test"}).to_string(),
input: json!({"text": "test"}),
is_input_complete: true,
thought_signature: None,
},
));
fake_model.end_last_completion_stream();
@@ -1080,6 +1085,7 @@ async fn test_mcp_tools(cx: &mut TestAppContext) {
raw_input: json!({"text": "mcp"}).to_string(),
input: json!({"text": "mcp"}),
is_input_complete: true,
thought_signature: None,
},
));
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::ToolUse(
@@ -1089,6 +1095,7 @@ async fn test_mcp_tools(cx: &mut TestAppContext) {
raw_input: json!({"text": "native"}).to_string(),
input: json!({"text": "native"}),
is_input_complete: true,
thought_signature: None,
},
));
fake_model.end_last_completion_stream();
@@ -1522,7 +1529,7 @@ async fn test_truncate_first_message(cx: &mut TestAppContext) {
});
fake_model.send_last_completion_stream_text_chunk("Hey!");
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::UsageUpdate(
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::TokenUsage(
language_model::TokenUsage {
input_tokens: 32_000,
output_tokens: 16_000,
@@ -1580,7 +1587,7 @@ async fn test_truncate_first_message(cx: &mut TestAppContext) {
});
cx.run_until_parked();
fake_model.send_last_completion_stream_text_chunk("Ahoy!");
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::UsageUpdate(
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::TokenUsage(
language_model::TokenUsage {
input_tokens: 40_000,
output_tokens: 20_000,
@@ -1625,7 +1632,7 @@ async fn test_truncate_second_message(cx: &mut TestAppContext) {
.unwrap();
cx.run_until_parked();
fake_model.send_last_completion_stream_text_chunk("Message 1 response");
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::UsageUpdate(
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::TokenUsage(
language_model::TokenUsage {
input_tokens: 32_000,
output_tokens: 16_000,
@@ -1672,7 +1679,7 @@ async fn test_truncate_second_message(cx: &mut TestAppContext) {
cx.run_until_parked();
fake_model.send_last_completion_stream_text_chunk("Message 2 response");
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::UsageUpdate(
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::TokenUsage(
language_model::TokenUsage {
input_tokens: 40_000,
output_tokens: 20_000,
@@ -1788,6 +1795,7 @@ async fn test_building_request_with_pending_tools(cx: &mut TestAppContext) {
raw_input: "{}".into(),
input: json!({}),
is_input_complete: true,
thought_signature: None,
};
let echo_tool_use = LanguageModelToolUse {
id: "tool_id_2".into(),
@@ -1795,6 +1803,7 @@ async fn test_building_request_with_pending_tools(cx: &mut TestAppContext) {
raw_input: json!({"text": "test"}).to_string(),
input: json!({"text": "test"}),
is_input_complete: true,
thought_signature: None,
};
fake_model.send_last_completion_stream_text_chunk("Hi!");
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::ToolUse(
@@ -2000,6 +2009,7 @@ async fn test_tool_updates_to_completion(cx: &mut TestAppContext) {
raw_input: input.to_string(),
input,
is_input_complete: false,
thought_signature: None,
},
));
@@ -2012,6 +2022,7 @@ async fn test_tool_updates_to_completion(cx: &mut TestAppContext) {
raw_input: input.to_string(),
input,
is_input_complete: true,
thought_signature: None,
},
));
fake_model.end_last_completion_stream();
@@ -2144,7 +2155,6 @@ async fn test_send_retry_on_error(cx: &mut TestAppContext) {
fake_model.send_last_completion_stream_text_chunk("Hey,");
fake_model.send_last_completion_stream_error(LanguageModelCompletionError::ServerOverloaded {
provider: LanguageModelProviderName::new("Anthropic"),
retry_after: Some(Duration::from_secs(3)),
});
fake_model.end_last_completion_stream();
@@ -2214,12 +2224,12 @@ async fn test_send_retry_finishes_tool_calls_on_error(cx: &mut TestAppContext) {
raw_input: json!({"text": "test"}).to_string(),
input: json!({"text": "test"}),
is_input_complete: true,
thought_signature: None,
};
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::ToolUse(
tool_use_1.clone(),
));
fake_model.send_last_completion_stream_error(LanguageModelCompletionError::ServerOverloaded {
provider: LanguageModelProviderName::new("Anthropic"),
retry_after: Some(Duration::from_secs(3)),
});
fake_model.end_last_completion_stream();
@@ -2286,7 +2296,6 @@ async fn test_send_max_retries_exceeded(cx: &mut TestAppContext) {
for _ in 0..crate::thread::MAX_RETRY_ATTEMPTS + 1 {
fake_model.send_last_completion_stream_error(
LanguageModelCompletionError::ServerOverloaded {
provider: LanguageModelProviderName::new("Anthropic"),
retry_after: Some(Duration::from_secs(3)),
},
);

View File

@@ -15,7 +15,7 @@ use agent_settings::{
use anyhow::{Context as _, Result, anyhow};
use chrono::{DateTime, Utc};
use client::{ModelRequestUsage, RequestUsage, UserStore};
use cloud_llm_client::{CompletionIntent, CompletionRequestStatus, Plan, UsageLimit};
use cloud_llm_client::{CompletionIntent, Plan, UsageLimit};
use collections::{HashMap, HashSet, IndexMap};
use fs::Fs;
use futures::stream;
@@ -30,11 +30,11 @@ use gpui::{
};
use language_model::{
LanguageModel, LanguageModelCompletionError, LanguageModelCompletionEvent, LanguageModelExt,
LanguageModelId, LanguageModelImage, LanguageModelProviderId, LanguageModelRegistry,
LanguageModelRequest, LanguageModelRequestMessage, LanguageModelRequestTool,
LanguageModelToolResult, LanguageModelToolResultContent, LanguageModelToolSchemaFormat,
LanguageModelToolUse, LanguageModelToolUseId, Role, SelectedModel, StopReason, TokenUsage,
ZED_CLOUD_PROVIDER_ID,
LanguageModelId, LanguageModelImage, LanguageModelProviderId, LanguageModelProviderName,
LanguageModelRegistry, LanguageModelRequest, LanguageModelRequestMessage,
LanguageModelRequestTool, LanguageModelToolResult, LanguageModelToolResultContent,
LanguageModelToolSchemaFormat, LanguageModelToolUse, LanguageModelToolUseId, Role,
SelectedModel, StopReason, TokenUsage, ZED_CLOUD_PROVIDER_ID,
};
use project::Project;
use prompt_store::ProjectContext;
@@ -607,6 +607,8 @@ pub struct Thread {
pub(crate) prompt_capabilities_rx: watch::Receiver<acp::PromptCapabilities>,
pub(crate) project: Entity<Project>,
pub(crate) action_log: Entity<ActionLog>,
/// Tracks the last time files were read by the agent, to detect external modifications
pub(crate) file_read_times: HashMap<PathBuf, fs::MTime>,
}
impl Thread {
@@ -665,6 +667,7 @@ impl Thread {
prompt_capabilities_rx,
project,
action_log,
file_read_times: HashMap::default(),
}
}
@@ -860,6 +863,7 @@ impl Thread {
updated_at: db_thread.updated_at,
prompt_capabilities_tx,
prompt_capabilities_rx,
file_read_times: HashMap::default(),
}
}
@@ -999,6 +1003,7 @@ impl Thread {
self.add_tool(NowTool);
self.add_tool(OpenTool::new(self.project.clone()));
self.add_tool(ReadFileTool::new(
cx.weak_entity(),
self.project.clone(),
self.action_log.clone(),
));
@@ -1290,9 +1295,10 @@ impl Thread {
if let Some(error) = error {
attempt += 1;
let provider = model.upstream_provider_name();
let retry = this.update(cx, |this, cx| {
let user_store = this.user_store.read(cx);
this.handle_completion_error(error, attempt, user_store.plan())
this.handle_completion_error(provider, error, attempt, user_store.plan())
})??;
let timer = cx.background_executor().timer(retry.duration);
event_stream.send_retry(retry);
@@ -1318,6 +1324,7 @@ impl Thread {
fn handle_completion_error(
&mut self,
provider: LanguageModelProviderName,
error: LanguageModelCompletionError,
attempt: u8,
plan: Option<Plan>,
@@ -1384,7 +1391,7 @@ impl Thread {
use LanguageModelCompletionEvent::*;
match event {
StartMessage { .. } => {
LanguageModelCompletionEvent::StartMessage { .. } => {
self.flush_pending_message(cx);
self.pending_message = Some(AgentMessage::default());
}
@@ -1411,7 +1418,7 @@ impl Thread {
),
)));
}
UsageUpdate(usage) => {
TokenUsage(usage) => {
telemetry::event!(
"Agent Thread Completion Usage Updated",
thread_id = self.id.to_string(),
@@ -1425,20 +1432,16 @@ impl Thread {
);
self.update_token_usage(usage, cx);
}
StatusUpdate(CompletionRequestStatus::UsageUpdated { amount, limit }) => {
RequestUsage { amount, limit } => {
self.update_model_request_usage(amount, limit, cx);
}
StatusUpdate(
CompletionRequestStatus::Started
| CompletionRequestStatus::Queued { .. }
| CompletionRequestStatus::Failed { .. },
) => {}
StatusUpdate(CompletionRequestStatus::ToolUseLimitReached) => {
ToolUseLimitReached => {
self.tool_use_limit_reached = true;
}
Stop(StopReason::Refusal) => return Err(CompletionError::Refusal.into()),
Stop(StopReason::MaxTokens) => return Err(CompletionError::MaxTokens.into()),
Stop(StopReason::ToolUse | StopReason::EndTurn) => {}
Started | Queued { .. } => {}
}
Ok(None)
@@ -1682,9 +1685,7 @@ impl Thread {
let event = event.log_err()?;
let text = match event {
LanguageModelCompletionEvent::Text(text) => text,
LanguageModelCompletionEvent::StatusUpdate(
CompletionRequestStatus::UsageUpdated { amount, limit },
) => {
LanguageModelCompletionEvent::RequestUsage { amount, limit } => {
this.update(cx, |thread, cx| {
thread.update_model_request_usage(amount, limit, cx);
})
@@ -1748,9 +1749,7 @@ impl Thread {
let event = event?;
let text = match event {
LanguageModelCompletionEvent::Text(text) => text,
LanguageModelCompletionEvent::StatusUpdate(
CompletionRequestStatus::UsageUpdated { amount, limit },
) => {
LanguageModelCompletionEvent::RequestUsage { amount, limit } => {
this.update(cx, |thread, cx| {
thread.update_model_request_usage(amount, limit, cx);
})?;

View File

@@ -309,6 +309,40 @@ impl AgentTool for EditFileTool {
})?
.await?;
// Check if the file has been modified since the agent last read it
if let Some(abs_path) = abs_path.as_ref() {
let (last_read_mtime, current_mtime, is_dirty) = self.thread.update(cx, |thread, cx| {
let last_read = thread.file_read_times.get(abs_path).copied();
let current = buffer.read(cx).file().and_then(|file| file.disk_state().mtime());
let dirty = buffer.read(cx).is_dirty();
(last_read, current, dirty)
})?;
// Check for unsaved changes first - these indicate modifications we don't know about
if is_dirty {
anyhow::bail!(
"This file cannot be written to because it has unsaved changes. \
Please end the current conversation immediately by telling the user you want to write to this file (mention its path explicitly) but you can't write to it because it has unsaved changes. \
Ask the user to save that buffer's changes and to inform you when it's ok to proceed."
);
}
// Check if the file was modified on disk since we last read it
if let (Some(last_read), Some(current)) = (last_read_mtime, current_mtime) {
// MTime can be unreliable for comparisons, so our newtype intentionally
// doesn't support comparing them. If the mtime is at all different
// (which could be due to a modification or because e.g. the system clock changed),
// we pessimistically assume the file was modified.
if current != last_read {
anyhow::bail!(
"The file {} has been modified since you last read it. \
Please read the file again to get the current state before editing it.",
input.path.display()
);
}
}
}
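The pessimistic mtime check above can be sketched in isolation. This is a minimal stand-alone sketch, not Zed's actual types: `std::time::SystemTime` stands in for Zed's `MTime` newtype, and a plain `HashMap` stands in for the thread's `file_read_times` map. The key point it demonstrates is the use of `!=` rather than an ordering comparison, so any mtime difference, later or earlier, blocks the edit:

```rust
use std::collections::HashMap;
use std::path::PathBuf;
use std::time::{Duration, SystemTime};

// Stand-in for the check in the diff: equality only, no ordering, because
// mtimes are unreliable for "newer than" comparisons.
fn file_was_modified(
    read_times: &HashMap<PathBuf, SystemTime>,
    path: &PathBuf,
    current_mtime: SystemTime,
) -> bool {
    match read_times.get(path) {
        // Never read: nothing to compare against, so the edit may proceed.
        None => false,
        // Pessimistic: any difference counts as a modification.
        Some(last_read) => *last_read != current_mtime,
    }
}

fn main() {
    let path = PathBuf::from("/root/test.txt");
    let t0 = SystemTime::UNIX_EPOCH + Duration::from_secs(100);
    let mut read_times = HashMap::new();
    read_times.insert(path.clone(), t0);

    // Same mtime as recorded: edit may proceed.
    assert!(!file_was_modified(&read_times, &path, t0));
    // A later *or* earlier mtime is treated as a modification.
    assert!(file_was_modified(&read_times, &path, t0 + Duration::from_secs(2)));
    assert!(file_was_modified(&read_times, &path, t0 - Duration::from_secs(2)));
    println!("ok");
}
```

After a successful edit the diff re-records the buffer's new mtime into the map, which is why consecutive edits by the agent keep passing this check.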
let diff = cx.new(|cx| Diff::new(buffer.clone(), cx))?;
event_stream.update_diff(diff.clone());
let _finalize_diff = util::defer({
@@ -421,6 +455,17 @@ impl AgentTool for EditFileTool {
log.buffer_edited(buffer.clone(), cx);
})?;
// Update the recorded read time after a successful edit so consecutive edits work
if let Some(abs_path) = abs_path.as_ref() {
if let Some(new_mtime) = buffer.read_with(cx, |buffer, _| {
buffer.file().and_then(|file| file.disk_state().mtime())
})? {
self.thread.update(cx, |thread, _| {
thread.file_read_times.insert(abs_path.to_path_buf(), new_mtime);
})?;
}
}
let new_snapshot = buffer.read_with(cx, |buffer, _cx| buffer.snapshot())?;
let (new_text, unified_diff) = cx
.background_spawn({
@@ -1748,10 +1793,426 @@ mod tests {
}
}
#[gpui::test]
async fn test_file_read_times_tracking(cx: &mut TestAppContext) {
init_test(cx);
let fs = project::FakeFs::new(cx.executor());
fs.insert_tree(
"/root",
json!({
"test.txt": "original content"
}),
)
.await;
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(project.read(cx).context_server_store(), cx));
let model = Arc::new(FakeLanguageModel::default());
let thread = cx.new(|cx| {
Thread::new(
project.clone(),
cx.new(|_cx| ProjectContext::default()),
context_server_registry,
Templates::new(),
Some(model.clone()),
cx,
)
});
let action_log = thread.read_with(cx, |thread, _| thread.action_log().clone());
// Initially, file_read_times should be empty
let is_empty = thread.read_with(cx, |thread, _| thread.file_read_times.is_empty());
assert!(is_empty, "file_read_times should start empty");
// Create read tool
let read_tool = Arc::new(crate::ReadFileTool::new(
thread.downgrade(),
project.clone(),
action_log,
));
// Read the file to record the read time
cx.update(|cx| {
read_tool.clone().run(
crate::ReadFileToolInput {
path: "root/test.txt".to_string(),
start_line: None,
end_line: None,
},
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
// Verify that file_read_times now contains an entry for the file
let has_entry = thread.read_with(cx, |thread, _| {
thread.file_read_times.len() == 1
&& thread
.file_read_times
.keys()
.any(|path| path.ends_with("test.txt"))
});
assert!(
has_entry,
"file_read_times should contain an entry after reading the file"
);
// Read the file again - should update the entry
cx.update(|cx| {
read_tool.clone().run(
crate::ReadFileToolInput {
path: "root/test.txt".to_string(),
start_line: None,
end_line: None,
},
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
// Should still have exactly one entry
let has_one_entry = thread.read_with(cx, |thread, _| thread.file_read_times.len() == 1);
assert!(
has_one_entry,
"file_read_times should still have one entry after re-reading"
);
}
fn init_test(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
});
}
#[gpui::test]
async fn test_consecutive_edits_work(cx: &mut TestAppContext) {
init_test(cx);
let fs = project::FakeFs::new(cx.executor());
fs.insert_tree(
"/root",
json!({
"test.txt": "original content"
}),
)
.await;
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(project.read(cx).context_server_store(), cx));
let model = Arc::new(FakeLanguageModel::default());
let thread = cx.new(|cx| {
Thread::new(
project.clone(),
cx.new(|_cx| ProjectContext::default()),
context_server_registry,
Templates::new(),
Some(model.clone()),
cx,
)
});
let languages = project.read_with(cx, |project, _| project.languages().clone());
let action_log = thread.read_with(cx, |thread, _| thread.action_log().clone());
let read_tool = Arc::new(crate::ReadFileTool::new(
thread.downgrade(),
project.clone(),
action_log,
));
let edit_tool = Arc::new(EditFileTool::new(
project.clone(),
thread.downgrade(),
languages,
Templates::new(),
));
// Read the file first
cx.update(|cx| {
read_tool.clone().run(
crate::ReadFileToolInput {
path: "root/test.txt".to_string(),
start_line: None,
end_line: None,
},
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
// First edit should work
let edit_result = {
let edit_task = cx.update(|cx| {
edit_tool.clone().run(
EditFileToolInput {
display_description: "First edit".into(),
path: "root/test.txt".into(),
mode: EditFileMode::Edit,
},
ToolCallEventStream::test().0,
cx,
)
});
cx.executor().run_until_parked();
model.send_last_completion_stream_text_chunk(
"<old_text>original content</old_text><new_text>modified content</new_text>"
.to_string(),
);
model.end_last_completion_stream();
edit_task.await
};
assert!(
edit_result.is_ok(),
"First edit should succeed, got error: {:?}",
edit_result.as_ref().err()
);
// Second edit should also work because the edit updated the recorded read time
let edit_result = {
let edit_task = cx.update(|cx| {
edit_tool.clone().run(
EditFileToolInput {
display_description: "Second edit".into(),
path: "root/test.txt".into(),
mode: EditFileMode::Edit,
},
ToolCallEventStream::test().0,
cx,
)
});
cx.executor().run_until_parked();
model.send_last_completion_stream_text_chunk(
"<old_text>modified content</old_text><new_text>further modified content</new_text>".to_string(),
);
model.end_last_completion_stream();
edit_task.await
};
assert!(
edit_result.is_ok(),
"Second consecutive edit should succeed, got error: {:?}",
edit_result.as_ref().err()
);
}
#[gpui::test]
async fn test_external_modification_detected(cx: &mut TestAppContext) {
init_test(cx);
let fs = project::FakeFs::new(cx.executor());
fs.insert_tree(
"/root",
json!({
"test.txt": "original content"
}),
)
.await;
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(project.read(cx).context_server_store(), cx));
let model = Arc::new(FakeLanguageModel::default());
let thread = cx.new(|cx| {
Thread::new(
project.clone(),
cx.new(|_cx| ProjectContext::default()),
context_server_registry,
Templates::new(),
Some(model.clone()),
cx,
)
});
let languages = project.read_with(cx, |project, _| project.languages().clone());
let action_log = thread.read_with(cx, |thread, _| thread.action_log().clone());
let read_tool = Arc::new(crate::ReadFileTool::new(
thread.downgrade(),
project.clone(),
action_log,
));
let edit_tool = Arc::new(EditFileTool::new(
project.clone(),
thread.downgrade(),
languages,
Templates::new(),
));
// Read the file first
cx.update(|cx| {
read_tool.clone().run(
crate::ReadFileToolInput {
path: "root/test.txt".to_string(),
start_line: None,
end_line: None,
},
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
// Simulate external modification - advance time and save file
cx.background_executor
.advance_clock(std::time::Duration::from_secs(2));
fs.save(
path!("/root/test.txt").as_ref(),
&"externally modified content".into(),
language::LineEnding::Unix,
)
.await
.unwrap();
// Reload the buffer to pick up the new mtime
let project_path = project
.read_with(cx, |project, cx| {
project.find_project_path("root/test.txt", cx)
})
.expect("Should find project path");
let buffer = project
.update(cx, |project, cx| project.open_buffer(project_path, cx))
.await
.unwrap();
buffer
.update(cx, |buffer, cx| buffer.reload(cx))
.await
.unwrap();
cx.executor().run_until_parked();
// Try to edit - should fail because file was modified externally
let result = cx
.update(|cx| {
edit_tool.clone().run(
EditFileToolInput {
display_description: "Edit after external change".into(),
path: "root/test.txt".into(),
mode: EditFileMode::Edit,
},
ToolCallEventStream::test().0,
cx,
)
})
.await;
assert!(
result.is_err(),
"Edit should fail after external modification"
);
let error_msg = result.unwrap_err().to_string();
assert!(
error_msg.contains("has been modified since you last read it"),
"Error should mention file modification, got: {}",
error_msg
);
}
#[gpui::test]
async fn test_dirty_buffer_detected(cx: &mut TestAppContext) {
init_test(cx);
let fs = project::FakeFs::new(cx.executor());
fs.insert_tree(
"/root",
json!({
"test.txt": "original content"
}),
)
.await;
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(project.read(cx).context_server_store(), cx));
let model = Arc::new(FakeLanguageModel::default());
let thread = cx.new(|cx| {
Thread::new(
project.clone(),
cx.new(|_cx| ProjectContext::default()),
context_server_registry,
Templates::new(),
Some(model.clone()),
cx,
)
});
let languages = project.read_with(cx, |project, _| project.languages().clone());
let action_log = thread.read_with(cx, |thread, _| thread.action_log().clone());
let read_tool = Arc::new(crate::ReadFileTool::new(
thread.downgrade(),
project.clone(),
action_log,
));
let edit_tool = Arc::new(EditFileTool::new(
project.clone(),
thread.downgrade(),
languages,
Templates::new(),
));
// Read the file first
cx.update(|cx| {
read_tool.clone().run(
crate::ReadFileToolInput {
path: "root/test.txt".to_string(),
start_line: None,
end_line: None,
},
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
// Open the buffer and make it dirty by editing without saving
let project_path = project
.read_with(cx, |project, cx| {
project.find_project_path("root/test.txt", cx)
})
.expect("Should find project path");
let buffer = project
.update(cx, |project, cx| project.open_buffer(project_path, cx))
.await
.unwrap();
// Make an in-memory edit to the buffer (making it dirty)
buffer.update(cx, |buffer, cx| {
let end_point = buffer.max_point();
buffer.edit([(end_point..end_point, " added text")], None, cx);
});
// Verify buffer is dirty
let is_dirty = buffer.read_with(cx, |buffer, _| buffer.is_dirty());
assert!(is_dirty, "Buffer should be dirty after in-memory edit");
// Try to edit - should fail because buffer has unsaved changes
let result = cx
.update(|cx| {
edit_tool.clone().run(
EditFileToolInput {
display_description: "Edit with dirty buffer".into(),
path: "root/test.txt".into(),
mode: EditFileMode::Edit,
},
ToolCallEventStream::test().0,
cx,
)
})
.await;
assert!(result.is_err(), "Edit should fail when buffer is dirty");
let error_msg = result.unwrap_err().to_string();
assert!(
error_msg.contains("cannot be written to because it has unsaved changes"),
"Error should mention unsaved changes, got: {}",
error_msg
);
}
}

View File

@@ -1,7 +1,7 @@
use action_log::ActionLog;
use agent_client_protocol::{self as acp, ToolCallUpdateFields};
use anyhow::{Context as _, Result, anyhow};
use gpui::{App, Entity, SharedString, Task};
use gpui::{App, Entity, SharedString, Task, WeakEntity};
use indoc::formatdoc;
use language::Point;
use language_model::{LanguageModelImage, LanguageModelToolResultContent};
@@ -12,7 +12,7 @@ use settings::Settings;
use std::sync::Arc;
use util::markdown::MarkdownCodeBlock;
use crate::{AgentTool, ToolCallEventStream, outline};
use crate::{AgentTool, Thread, ToolCallEventStream, outline};
/// Reads the content of the given file in the project.
///
@@ -42,13 +42,19 @@ pub struct ReadFileToolInput {
}
pub struct ReadFileTool {
thread: WeakEntity<Thread>,
project: Entity<Project>,
action_log: Entity<ActionLog>,
}
impl ReadFileTool {
pub fn new(project: Entity<Project>, action_log: Entity<ActionLog>) -> Self {
pub fn new(
thread: WeakEntity<Thread>,
project: Entity<Project>,
action_log: Entity<ActionLog>,
) -> Self {
Self {
thread,
project,
action_log,
}
@@ -195,6 +201,17 @@ impl AgentTool for ReadFileTool {
anyhow::bail!("{file_path} not found");
}
// Record the file read time and mtime
if let Some(mtime) = buffer.read_with(cx, |buffer, _| {
buffer.file().and_then(|file| file.disk_state().mtime())
})? {
self.thread
.update(cx, |thread, _| {
thread.file_read_times.insert(abs_path.to_path_buf(), mtime);
})
.ok();
}
let mut anchor = None;
// Check if specific line ranges are provided
@@ -285,11 +302,15 @@ impl AgentTool for ReadFileTool {
#[cfg(test)]
mod test {
use super::*;
use crate::{ContextServerRegistry, Templates, Thread};
use gpui::{AppContext, TestAppContext, UpdateGlobal as _};
use language::{Language, LanguageConfig, LanguageMatcher, tree_sitter_rust};
use language_model::fake_provider::FakeLanguageModel;
use project::{FakeFs, Project};
use prompt_store::ProjectContext;
use serde_json::json;
use settings::SettingsStore;
use std::sync::Arc;
use util::path;
#[gpui::test]
@@ -300,7 +321,20 @@ mod test {
fs.insert_tree(path!("/root"), json!({})).await;
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let tool = Arc::new(ReadFileTool::new(project, action_log));
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(project.read(cx).context_server_store(), cx));
let model = Arc::new(FakeLanguageModel::default());
let thread = cx.new(|cx| {
Thread::new(
project.clone(),
cx.new(|_cx| ProjectContext::default()),
context_server_registry,
Templates::new(),
Some(model),
cx,
)
});
let tool = Arc::new(ReadFileTool::new(thread.downgrade(), project, action_log));
let (event_stream, _) = ToolCallEventStream::test();
let result = cx
@@ -333,7 +367,20 @@ mod test {
.await;
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let tool = Arc::new(ReadFileTool::new(project, action_log));
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(project.read(cx).context_server_store(), cx));
let model = Arc::new(FakeLanguageModel::default());
let thread = cx.new(|cx| {
Thread::new(
project.clone(),
cx.new(|_cx| ProjectContext::default()),
context_server_registry,
Templates::new(),
Some(model),
cx,
)
});
let tool = Arc::new(ReadFileTool::new(thread.downgrade(), project, action_log));
let result = cx
.update(|cx| {
let input = ReadFileToolInput {
@@ -363,7 +410,20 @@ mod test {
let language_registry = project.read_with(cx, |project, _| project.languages().clone());
language_registry.add(Arc::new(rust_lang()));
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let tool = Arc::new(ReadFileTool::new(project, action_log));
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(project.read(cx).context_server_store(), cx));
let model = Arc::new(FakeLanguageModel::default());
let thread = cx.new(|cx| {
Thread::new(
project.clone(),
cx.new(|_cx| ProjectContext::default()),
context_server_registry,
Templates::new(),
Some(model),
cx,
)
});
let tool = Arc::new(ReadFileTool::new(thread.downgrade(), project, action_log));
let result = cx
.update(|cx| {
let input = ReadFileToolInput {
@@ -435,7 +495,20 @@ mod test {
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let tool = Arc::new(ReadFileTool::new(project, action_log));
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(project.read(cx).context_server_store(), cx));
let model = Arc::new(FakeLanguageModel::default());
let thread = cx.new(|cx| {
Thread::new(
project.clone(),
cx.new(|_cx| ProjectContext::default()),
context_server_registry,
Templates::new(),
Some(model),
cx,
)
});
let tool = Arc::new(ReadFileTool::new(thread.downgrade(), project, action_log));
let result = cx
.update(|cx| {
let input = ReadFileToolInput {
@@ -463,7 +536,20 @@ mod test {
.await;
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let tool = Arc::new(ReadFileTool::new(project, action_log));
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(project.read(cx).context_server_store(), cx));
let model = Arc::new(FakeLanguageModel::default());
let thread = cx.new(|cx| {
Thread::new(
project.clone(),
cx.new(|_cx| ProjectContext::default()),
context_server_registry,
Templates::new(),
Some(model),
cx,
)
});
let tool = Arc::new(ReadFileTool::new(thread.downgrade(), project, action_log));
// start_line of 0 should be treated as 1
let result = cx
@@ -607,7 +693,20 @@ mod test {
let project = Project::test(fs.clone(), [path!("/project_root").as_ref()], cx).await;
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let tool = Arc::new(ReadFileTool::new(project, action_log));
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(project.read(cx).context_server_store(), cx));
let model = Arc::new(FakeLanguageModel::default());
let thread = cx.new(|cx| {
Thread::new(
project.clone(),
cx.new(|_cx| ProjectContext::default()),
context_server_registry,
Templates::new(),
Some(model),
cx,
)
});
let tool = Arc::new(ReadFileTool::new(thread.downgrade(), project, action_log));
// Reading a file outside the project worktree should fail
let result = cx
@@ -821,7 +920,24 @@ mod test {
.await;
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let tool = Arc::new(ReadFileTool::new(project.clone(), action_log.clone()));
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(project.read(cx).context_server_store(), cx));
let model = Arc::new(FakeLanguageModel::default());
let thread = cx.new(|cx| {
Thread::new(
project.clone(),
cx.new(|_cx| ProjectContext::default()),
context_server_registry,
Templates::new(),
Some(model),
cx,
)
});
let tool = Arc::new(ReadFileTool::new(
thread.downgrade(),
project.clone(),
action_log.clone(),
));
// Test reading allowed files in worktree1
let result = cx

View File

@@ -247,37 +247,58 @@ impl AgentConnection for AcpConnection {
let default_mode = self.default_mode.clone();
let cwd = cwd.to_path_buf();
let context_server_store = project.read(cx).context_server_store().read(cx);
let mcp_servers = if project.read(cx).is_local() {
context_server_store
.configured_server_ids()
.iter()
.filter_map(|id| {
let configuration = context_server_store.configuration_for_server(id)?;
let command = configuration.command();
Some(acp::McpServer::Stdio {
name: id.0.to_string(),
command: command.path.clone(),
args: command.args.clone(),
env: if let Some(env) = command.env.as_ref() {
env.iter()
.map(|(name, value)| acp::EnvVariable {
name: name.clone(),
value: value.clone(),
meta: None,
})
.collect()
} else {
vec![]
},
let mcp_servers =
if project.read(cx).is_local() {
context_server_store
.configured_server_ids()
.iter()
.filter_map(|id| {
let configuration = context_server_store.configuration_for_server(id)?;
match &*configuration {
project::context_server_store::ContextServerConfiguration::Custom {
command,
..
}
| project::context_server_store::ContextServerConfiguration::Extension {
command,
..
} => Some(acp::McpServer::Stdio {
name: id.0.to_string(),
command: command.path.clone(),
args: command.args.clone(),
env: if let Some(env) = command.env.as_ref() {
env.iter()
.map(|(name, value)| acp::EnvVariable {
name: name.clone(),
value: value.clone(),
meta: None,
})
.collect()
} else {
vec![]
},
}),
project::context_server_store::ContextServerConfiguration::Http {
url,
headers,
} => Some(acp::McpServer::Http {
name: id.0.to_string(),
url: url.to_string(),
headers: headers.iter().map(|(name, value)| acp::HttpHeader {
name: name.clone(),
value: value.clone(),
meta: None,
}).collect(),
}),
}
})
})
.collect()
} else {
// In SSH projects, the external agent is running on the remote
// machine, and currently we only run MCP servers on the local
// machine. So don't pass any MCP servers to the agent in that case.
Vec::new()
};
.collect()
} else {
// In SSH projects, the external agent is running on the remote
// machine, and currently we only run MCP servers on the local
// machine. So don't pass any MCP servers to the agent in that case.
Vec::new()
};
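The match above fans three `ContextServerConfiguration` variants out onto two `acp::McpServer` transports: `Custom` and `Extension` both carry a launchable command and share one arm mapping to stdio, while `Http` maps to the HTTP transport. A simplified sketch of that dispatch, using stand-in types rather than the real ones from `project::context_server_store` and `agent_client_protocol`:

```rust
// Stand-ins for the real configuration and ACP types.
enum ServerConfig {
    Custom { path: String, args: Vec<String> },
    Extension { path: String, args: Vec<String> },
    Http { url: String },
}

#[derive(Debug, PartialEq)]
enum McpServer {
    Stdio { command: String, args: Vec<String> },
    Http { url: String },
}

fn to_mcp_server(config: &ServerConfig) -> McpServer {
    match config {
        // Custom and Extension both describe a command to spawn, so an
        // or-pattern lets one arm map both onto the stdio transport.
        ServerConfig::Custom { path, args } | ServerConfig::Extension { path, args } => {
            McpServer::Stdio { command: path.clone(), args: args.clone() }
        }
        // HTTP configurations map directly onto the HTTP transport.
        ServerConfig::Http { url } => McpServer::Http { url: url.clone() },
    }
}

fn main() {
    let stdio = to_mcp_server(&ServerConfig::Custom {
        path: "/usr/bin/server".into(),
        args: vec!["--stdio".into()],
    });
    assert!(matches!(stdio, McpServer::Stdio { .. }));

    let http = to_mcp_server(&ServerConfig::Http { url: "http://localhost:8080".into() });
    assert_eq!(http, McpServer::Http { url: "http://localhost:8080".into() });
}
```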
cx.spawn(async move |cx| {
let response = conn

View File

@@ -98,6 +98,8 @@ util.workspace = true
watch.workspace = true
workspace.workspace = true
zed_actions.workspace = true
image.workspace = true
async-fs.workspace = true
[dev-dependencies]
acp_thread = { workspace = true, features = ["test-support"] }

View File

@@ -109,6 +109,8 @@ impl ContextPickerCompletionProvider {
icon_path: Some(mode.icon().path().into()),
documentation: None,
source: project::CompletionSource::Custom,
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
// This ensures that when a user accepts this completion, the
// completion menu will still be shown after "@category " is
@@ -146,6 +148,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
insert_text_mode: None,
source: project::CompletionSource::Custom,
match_start: None,
snippet_deduplication_key: None,
icon_path: Some(icon_for_completion),
confirm: Some(confirm_completion_callback(
thread_entry.title().clone(),
@@ -177,6 +181,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
insert_text_mode: None,
source: project::CompletionSource::Custom,
match_start: None,
snippet_deduplication_key: None,
icon_path: Some(icon_path),
confirm: Some(confirm_completion_callback(
rule.title,
@@ -233,6 +239,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(completion_icon_path),
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
file_name,
@@ -284,6 +292,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(icon_path),
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
symbol.name.into(),
@@ -316,6 +326,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(icon_path),
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
url_to_fetch.to_string().into(),
@@ -384,6 +396,8 @@ impl ContextPickerCompletionProvider {
icon_path: Some(action.icon().path().into()),
documentation: None,
source: project::CompletionSource::Custom,
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
// This ensures that when a user accepts this completion, the
// completion menu will still be shown after "@category " is
@@ -774,6 +788,8 @@ impl CompletionProvider for ContextPickerCompletionProvider {
)),
source: project::CompletionSource::Custom,
icon_path: None,
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm: Some(Arc::new({
let editor = editor.clone();

View File

@@ -28,6 +28,7 @@ use gpui::{
EventEmitter, FocusHandle, Focusable, Image, ImageFormat, Img, KeyContext, SharedString,
Subscription, Task, TextStyle, WeakEntity, pulsating_between,
};
use itertools::Either;
use language::{Buffer, Language, language_settings::InlayHintKind};
use language_model::LanguageModelImage;
use postage::stream::Stream as _;
@@ -912,74 +913,114 @@ impl MessageEditor {
if !self.prompt_capabilities.borrow().image {
return;
}
let images = cx
.read_from_clipboard()
.map(|item| {
item.into_entries()
.filter_map(|entry| {
if let ClipboardEntry::Image(image) = entry {
Some(image)
} else {
None
}
})
.collect::<Vec<_>>()
})
.unwrap_or_default();
if images.is_empty() {
let Some(clipboard) = cx.read_from_clipboard() else {
return;
}
cx.stop_propagation();
};
cx.spawn_in(window, async move |this, cx| {
use itertools::Itertools;
let (mut images, paths) = clipboard
.into_entries()
.filter_map(|entry| match entry {
ClipboardEntry::Image(image) => Some(Either::Left(image)),
ClipboardEntry::ExternalPaths(paths) => Some(Either::Right(paths)),
_ => None,
})
.partition_map::<Vec<_>, Vec<_>, _, _, _>(std::convert::identity);
let replacement_text = MentionUri::PastedImage.as_link().to_string();
for image in images {
let (excerpt_id, text_anchor, multibuffer_anchor) =
self.editor.update(cx, |message_editor, cx| {
let snapshot = message_editor.snapshot(window, cx);
let (excerpt_id, _, buffer_snapshot) =
snapshot.buffer_snapshot().as_singleton().unwrap();
if !paths.is_empty() {
images.extend(
cx.background_spawn(async move {
let mut images = vec![];
for path in paths.into_iter().flat_map(|paths| paths.paths().to_owned()) {
let Ok(content) = async_fs::read(path).await else {
continue;
};
let Ok(format) = image::guess_format(&content) else {
continue;
};
images.push(gpui::Image::from_bytes(
match format {
image::ImageFormat::Png => gpui::ImageFormat::Png,
image::ImageFormat::Jpeg => gpui::ImageFormat::Jpeg,
image::ImageFormat::WebP => gpui::ImageFormat::Webp,
image::ImageFormat::Gif => gpui::ImageFormat::Gif,
image::ImageFormat::Bmp => gpui::ImageFormat::Bmp,
image::ImageFormat::Tiff => gpui::ImageFormat::Tiff,
image::ImageFormat::Ico => gpui::ImageFormat::Ico,
_ => continue,
},
content,
));
}
images
})
.await,
);
}
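The path fallback above reads each clipboard file, asks `image::guess_format` for its format, and skips anything the UI layer can't display (the `_ => continue` arm). The same shape can be sketched with stdlib-only magic-byte sniffing; the byte table here is a simplified stand-in for the `image` crate's detection, not its actual implementation:

```rust
#[derive(Debug, PartialEq)]
enum SniffedFormat {
    Png,
    Jpeg,
    Gif,
}

// Sniff a format from leading magic bytes, returning None for anything
// unrecognized, mirroring the skip-unknown-formats behavior in the diff.
fn sniff_format(bytes: &[u8]) -> Option<SniffedFormat> {
    if bytes.starts_with(&[0x89, b'P', b'N', b'G']) {
        Some(SniffedFormat::Png)
    } else if bytes.starts_with(&[0xFF, 0xD8, 0xFF]) {
        Some(SniffedFormat::Jpeg)
    } else if bytes.starts_with(b"GIF8") {
        Some(SniffedFormat::Gif)
    } else {
        None
    }
}

fn main() {
    assert_eq!(sniff_format(&[0x89, b'P', b'N', b'G', 0x0D]), Some(SniffedFormat::Png));
    assert_eq!(sniff_format(&[0xFF, 0xD8, 0xFF, 0xE0]), Some(SniffedFormat::Jpeg));
    assert_eq!(sniff_format(b"plain text"), None);
    println!("ok");
}
```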
let text_anchor = buffer_snapshot.anchor_before(buffer_snapshot.len());
let multibuffer_anchor = snapshot
.buffer_snapshot()
.anchor_in_excerpt(*excerpt_id, text_anchor);
message_editor.edit(
[(
multi_buffer::Anchor::max()..multi_buffer::Anchor::max(),
format!("{replacement_text} "),
)],
if images.is_empty() {
return;
}
let replacement_text = MentionUri::PastedImage.as_link().to_string();
let Ok(editor) = this.update(cx, |this, cx| {
cx.stop_propagation();
this.editor.clone()
}) else {
return;
};
for image in images {
let Ok((excerpt_id, text_anchor, multibuffer_anchor)) =
editor.update_in(cx, |message_editor, window, cx| {
let snapshot = message_editor.snapshot(window, cx);
let (excerpt_id, _, buffer_snapshot) =
snapshot.buffer_snapshot().as_singleton().unwrap();
let text_anchor = buffer_snapshot.anchor_before(buffer_snapshot.len());
let multibuffer_anchor = snapshot
.buffer_snapshot()
.anchor_in_excerpt(*excerpt_id, text_anchor);
message_editor.edit(
[(
multi_buffer::Anchor::max()..multi_buffer::Anchor::max(),
format!("{replacement_text} "),
)],
cx,
);
(*excerpt_id, text_anchor, multibuffer_anchor)
})
else {
break;
};
let content_len = replacement_text.len();
let Some(start_anchor) = multibuffer_anchor else {
continue;
};
let Ok(end_anchor) = editor.update(cx, |editor, cx| {
let snapshot = editor.buffer().read(cx).snapshot(cx);
snapshot.anchor_before(start_anchor.to_offset(&snapshot) + content_len)
}) else {
continue;
};
let image = Arc::new(image);
let Ok(Some((crease_id, tx))) = cx.update(|window, cx| {
insert_crease_for_mention(
excerpt_id,
text_anchor,
content_len,
MentionUri::PastedImage.name().into(),
IconName::Image.path().into(),
Some(Task::ready(Ok(image.clone())).shared()),
editor.clone(),
window,
cx,
);
(*excerpt_id, text_anchor, multibuffer_anchor)
});
let content_len = replacement_text.len();
let Some(start_anchor) = multibuffer_anchor else {
continue;
};
let end_anchor = self.editor.update(cx, |editor, cx| {
let snapshot = editor.buffer().read(cx).snapshot(cx);
snapshot.anchor_before(start_anchor.to_offset(&snapshot) + content_len)
});
let image = Arc::new(image);
let Some((crease_id, tx)) = insert_crease_for_mention(
excerpt_id,
text_anchor,
content_len,
MentionUri::PastedImage.name().into(),
IconName::Image.path().into(),
Some(Task::ready(Ok(image.clone())).shared()),
self.editor.clone(),
window,
cx,
) else {
continue;
};
let task = cx
.spawn_in(window, {
async move |_, cx| {
)
}) else {
continue;
};
let task = cx
.spawn(async move |cx| {
let format = image.format;
let image = cx
.update(|_, cx| LanguageModelImage::from_image(image, cx))
@@ -994,15 +1035,16 @@ impl MessageEditor {
} else {
Err("Failed to convert image".into())
}
}
})
.shared();
this.update(cx, |this, _| {
this.mention_set
.mentions
.insert(crease_id, (MentionUri::PastedImage, task.clone()))
})
.shared();
.ok();
self.mention_set
.mentions
.insert(crease_id, (MentionUri::PastedImage, task.clone()));
cx.spawn_in(window, async move |this, cx| {
if task.await.notify_async_err(cx).is_none() {
this.update(cx, |this, cx| {
this.editor.update(cx, |editor, cx| {
@@ -1012,9 +1054,9 @@ impl MessageEditor {
})
.ok();
}
})
.detach();
}
}
})
.detach();
}
pub fn insert_dragged_files(
@@ -2671,13 +2713,14 @@ mod tests {
}
#[gpui::test]
async fn test_large_file_mention_uses_outline(cx: &mut TestAppContext) {
async fn test_large_file_mention_fallback(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
// Create a large file that exceeds AUTO_OUTLINE_SIZE
const LINE: &str = "fn example_function() { /* some code */ }\n";
// Using plain text without a configured language, so no outline is available
const LINE: &str = "This is a line of text in the file\n";
let large_content = LINE.repeat(2 * (outline::AUTO_OUTLINE_SIZE / LINE.len()));
assert!(large_content.len() > outline::AUTO_OUTLINE_SIZE);
@@ -2688,8 +2731,8 @@ mod tests {
fs.insert_tree(
"/project",
json!({
"large_file.rs": large_content.clone(),
"small_file.rs": small_content,
"large_file.txt": large_content.clone(),
"small_file.txt": small_content,
}),
)
.await;
@@ -2735,7 +2778,7 @@ mod tests {
let large_file_abs_path = project.read_with(cx, |project, cx| {
let worktree = project.worktrees(cx).next().unwrap();
let worktree_root = worktree.read(cx).abs_path();
worktree_root.join("large_file.rs")
worktree_root.join("large_file.txt")
});
let large_file_task = message_editor.update(cx, |editor, cx| {
editor.confirm_mention_for_file(large_file_abs_path, cx)
@@ -2744,11 +2787,20 @@ mod tests {
let large_file_mention = large_file_task.await.unwrap();
match large_file_mention {
Mention::Text { content, .. } => {
// Should contain outline header for large files
assert!(content.contains("File outline for"));
assert!(content.contains("file too large to show full content"));
// Should not contain the full repeated content
assert!(!content.contains(&LINE.repeat(100)));
// Should contain some of the content but not all of it
assert!(
content.contains(LINE),
"Should contain some of the file content"
);
assert!(
!content.contains(&LINE.repeat(100)),
"Should not contain the full file"
);
// Should be much smaller than original
assert!(
content.len() < large_content.len() / 10,
"Should be significantly truncated"
);
}
_ => panic!("Expected Text mention for large file"),
}
@@ -2758,7 +2810,7 @@ mod tests {
let small_file_abs_path = project.read_with(cx, |project, cx| {
let worktree = project.worktrees(cx).next().unwrap();
let worktree_root = worktree.read(cx).abs_path();
worktree_root.join("small_file.rs")
worktree_root.join("small_file.txt")
});
let small_file_task = message_editor.update(cx, |editor, cx| {
editor.confirm_mention_for_file(small_file_abs_path, cx)
@@ -2767,10 +2819,8 @@ mod tests {
let small_file_mention = small_file_task.await.unwrap();
match small_file_mention {
Mention::Text { content, .. } => {
// Should contain the actual content
// Should contain the full actual content
assert_eq!(content, small_content);
// Should not contain outline header
assert!(!content.contains("File outline for"));
}
_ => panic!("Expected Text mention for small file"),
}


@@ -251,17 +251,17 @@ impl PickerDelegate for AcpModelPickerDelegate {
.inset(true)
.spacing(ListItemSpacing::Sparse)
.toggle_state(selected)
.start_slot::<Icon>(model_info.icon.map(|icon| {
Icon::new(icon)
.color(model_icon_color)
.size(IconSize::Small)
}))
.child(
h_flex()
.w_full()
.pl_0p5()
.gap_1p5()
.w(px(240.))
.when_some(model_info.icon, |this, icon| {
this.child(
Icon::new(icon)
.color(model_icon_color)
.size(IconSize::Small)
)
})
.child(Label::new(model_info.name.clone()).truncate()),
)
.end_slot(div().pr_3().when(is_selected, |this| {


@@ -51,7 +51,7 @@ use ui::{
PopoverMenuHandle, SpinnerLabel, TintColor, Tooltip, WithScrollbar, prelude::*,
};
use util::{ResultExt, size::format_file_size, time::duration_alt_display};
use workspace::{CollaboratorId, Workspace};
use workspace::{CollaboratorId, NewTerminal, Workspace};
use zed_actions::agent::{Chat, ToggleModelSelector};
use zed_actions::assistant::OpenRulesLibrary;
@@ -69,8 +69,8 @@ use crate::ui::{
};
use crate::{
AgentDiffPane, AgentPanel, AllowAlways, AllowOnce, ContinueThread, ContinueWithBurnMode,
CycleModeSelector, ExpandMessageEditor, Follow, KeepAll, OpenAgentDiff, OpenHistory, RejectAll,
RejectOnce, ToggleBurnMode, ToggleProfileSelector,
CycleModeSelector, ExpandMessageEditor, Follow, KeepAll, NewThread, OpenAgentDiff, OpenHistory,
RejectAll, RejectOnce, ToggleBurnMode, ToggleProfileSelector,
};
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
@@ -278,6 +278,7 @@ pub struct AcpThreadView {
notification_subscriptions: HashMap<WindowHandle<AgentNotification>, Vec<Subscription>>,
thread_retry_status: Option<RetryStatus>,
thread_error: Option<ThreadError>,
thread_error_markdown: Option<Entity<Markdown>>,
thread_feedback: ThreadFeedbackState,
list_state: ListState,
auth_task: Option<Task<()>>,
@@ -415,6 +416,7 @@ impl AcpThreadView {
list_state: list_state,
thread_retry_status: None,
thread_error: None,
thread_error_markdown: None,
thread_feedback: Default::default(),
auth_task: None,
expanded_tool_calls: HashSet::default(),
@@ -798,6 +800,7 @@ impl AcpThreadView {
if should_retry {
self.thread_error = None;
self.thread_error_markdown = None;
self.reset(window, cx);
}
}
@@ -1327,6 +1330,7 @@ impl AcpThreadView {
fn clear_thread_error(&mut self, cx: &mut Context<Self>) {
self.thread_error = None;
self.thread_error_markdown = None;
cx.notify();
}
@@ -3140,7 +3144,7 @@ impl AcpThreadView {
.text_ui_sm(cx)
.h_full()
.children(terminal_view.map(|terminal_view| {
if terminal_view
let element = if terminal_view
.read(cx)
.content_mode(window, cx)
.is_scrollable()
@@ -3148,7 +3152,15 @@ impl AcpThreadView {
div().h_72().child(terminal_view).into_any_element()
} else {
terminal_view.into_any_element()
}
};
div()
.on_action(cx.listener(|_this, _: &NewTerminal, window, cx| {
window.dispatch_action(NewThread.boxed_clone(), cx);
cx.stop_propagation();
}))
.child(element)
.into_any_element()
})),
)
})
@@ -5344,9 +5356,9 @@ impl AcpThreadView {
}
}
fn render_thread_error(&self, cx: &mut Context<Self>) -> Option<Div> {
fn render_thread_error(&mut self, window: &mut Window, cx: &mut Context<Self>) -> Option<Div> {
let content = match self.thread_error.as_ref()? {
ThreadError::Other(error) => self.render_any_thread_error(error.clone(), cx),
ThreadError::Other(error) => self.render_any_thread_error(error.clone(), window, cx),
ThreadError::Refusal => self.render_refusal_error(cx),
ThreadError::AuthenticationRequired(error) => {
self.render_authentication_required_error(error.clone(), cx)
@@ -5431,7 +5443,12 @@ impl AcpThreadView {
.dismiss_action(self.dismiss_error_button(cx))
}
fn render_any_thread_error(&self, error: SharedString, cx: &mut Context<'_, Self>) -> Callout {
fn render_any_thread_error(
&mut self,
error: SharedString,
window: &mut Window,
cx: &mut Context<'_, Self>,
) -> Callout {
let can_resume = self
.thread()
.map_or(false, |thread| thread.read(cx).can_resume(cx));
@@ -5444,11 +5461,24 @@ impl AcpThreadView {
supports_burn_mode && thread.completion_mode() == CompletionMode::Normal
});
let markdown = if let Some(markdown) = &self.thread_error_markdown {
markdown.clone()
} else {
let markdown = cx.new(|cx| Markdown::new(error.clone(), None, None, cx));
self.thread_error_markdown = Some(markdown.clone());
markdown
};
let markdown_style = default_markdown_style(false, true, window, cx);
let description = self
.render_markdown(markdown, markdown_style)
.into_any_element();
Callout::new()
.severity(Severity::Error)
.title("Error")
.icon(IconName::XCircle)
.description(error.clone())
.title("An Error Occurred")
.description_slot(description)
.actions_slot(
h_flex()
.gap_0p5()
@@ -5467,11 +5497,9 @@ impl AcpThreadView {
})
.when(can_resume, |this| {
this.child(
Button::new("retry", "Retry")
.icon(IconName::RotateCw)
.icon_position(IconPosition::Start)
IconButton::new("retry", IconName::RotateCw)
.icon_size(IconSize::Small)
.label_size(LabelSize::Small)
.tooltip(Tooltip::text("Retry Generation"))
.on_click(cx.listener(|this, _, _window, cx| {
this.resume_chat(cx);
})),
@@ -5613,7 +5641,6 @@ impl AcpThreadView {
IconButton::new("copy", IconName::Copy)
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.tooltip(Tooltip::text("Copy Error Message"))
.on_click(move |_, _, cx| {
cx.write_to_clipboard(ClipboardItem::new_string(message.clone()))
@@ -5623,7 +5650,6 @@ impl AcpThreadView {
fn dismiss_error_button(&self, cx: &mut Context<Self>) -> impl IntoElement {
IconButton::new("dismiss", IconName::Close)
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.tooltip(Tooltip::text("Dismiss Error"))
.on_click(cx.listener({
move |this, _, _, cx| {
@@ -5841,7 +5867,7 @@ impl Render for AcpThreadView {
None
}
})
.children(self.render_thread_error(cx))
.children(self.render_thread_error(window, cx))
.when_some(
self.new_server_version_available.as_ref().filter(|_| {
!has_messages || !matches!(self.thread_state, ThreadState::Ready { .. })
@@ -5907,7 +5933,6 @@ fn default_markdown_style(
syntax: cx.theme().syntax().clone(),
selection_background_color: colors.element_selection_background,
code_block_overflow_x_scroll: true,
table_overflow_x_scroll: true,
heading_level_styles: Some(HeadingLevelStyles {
h1: Some(TextStyleRefinement {
font_size: Some(rems(1.15).into()),
@@ -5975,6 +6000,7 @@ fn default_markdown_style(
},
link: TextStyleRefinement {
background_color: Some(colors.editor_foreground.opacity(0.025)),
color: Some(colors.text_accent),
underline: Some(UnderlineStyle {
color: Some(colors.text_accent.opacity(0.5)),
thickness: px(1.),


@@ -1,5 +1,5 @@
mod add_llm_provider_modal;
mod configure_context_server_modal;
pub mod configure_context_server_modal;
mod configure_context_server_tools_modal;
mod manage_profiles_modal;
mod tool_picker;
@@ -46,9 +46,8 @@ pub(crate) use configure_context_server_modal::ConfigureContextServerModal;
pub(crate) use configure_context_server_tools_modal::ConfigureContextServerToolsModal;
pub(crate) use manage_profiles_modal::ManageProfilesModal;
use crate::{
AddContextServer,
agent_configuration::add_llm_provider_modal::{AddLlmProviderModal, LlmCompatibleProvider},
use crate::agent_configuration::add_llm_provider_modal::{
AddLlmProviderModal, LlmCompatibleProvider,
};
pub struct AgentConfiguration {
@@ -553,7 +552,9 @@ impl AgentConfiguration {
move |window, cx| {
Some(ContextMenu::build(window, cx, |menu, _window, _cx| {
menu.entry("Add Custom Server", None, {
|window, cx| window.dispatch_action(AddContextServer.boxed_clone(), cx)
|window, cx| {
window.dispatch_action(crate::AddContextServer.boxed_clone(), cx)
}
})
.entry("Install from Extensions", None, {
|window, cx| {
@@ -651,7 +652,7 @@ impl AgentConfiguration {
let is_running = matches!(server_status, ContextServerStatus::Running);
let item_id = SharedString::from(context_server_id.0.clone());
// Servers without a configuration can only be provided by extensions.
let provided_by_extension = server_configuration.is_none_or(|config| {
let provided_by_extension = server_configuration.as_ref().is_none_or(|config| {
matches!(
config.as_ref(),
ContextServerConfiguration::Extension { .. }
@@ -707,7 +708,10 @@ impl AgentConfiguration {
"Server is stopped.",
),
};
let is_remote = server_configuration
.as_ref()
.map(|config| matches!(config.as_ref(), ContextServerConfiguration::Http { .. }))
.unwrap_or(false);
let context_server_configuration_menu = PopoverMenu::new("context-server-config-menu")
.trigger_with_tooltip(
IconButton::new("context-server-config-menu", IconName::Settings)
@@ -730,14 +734,25 @@ impl AgentConfiguration {
let language_registry = language_registry.clone();
let workspace = workspace.clone();
move |window, cx| {
ConfigureContextServerModal::show_modal_for_existing_server(
context_server_id.clone(),
language_registry.clone(),
workspace.clone(),
window,
cx,
)
.detach_and_log_err(cx);
if is_remote {
crate::agent_configuration::configure_context_server_modal::ConfigureContextServerModal::show_modal_for_existing_server(
context_server_id.clone(),
language_registry.clone(),
workspace.clone(),
window,
cx,
)
.detach();
} else {
ConfigureContextServerModal::show_modal_for_existing_server(
context_server_id.clone(),
language_registry.clone(),
workspace.clone(),
window,
cx,
)
.detach();
}
}
}).when(tool_count > 0, |this| this.entry("View Tools", None, {
let context_server_id = context_server_id.clone();


@@ -3,16 +3,42 @@ use std::sync::Arc;
use anyhow::Result;
use collections::HashSet;
use fs::Fs;
use gpui::{DismissEvent, Entity, EventEmitter, FocusHandle, Focusable, Render, Task};
use gpui::{
DismissEvent, Entity, EventEmitter, FocusHandle, Focusable, Render, ScrollHandle, Task,
};
use language_model::LanguageModelRegistry;
use language_models::provider::open_ai_compatible::{AvailableModel, ModelCapabilities};
use settings::{OpenAiCompatibleSettingsContent, update_settings_file};
use ui::{
Banner, Checkbox, KeyBinding, Modal, ModalFooter, ModalHeader, Section, ToggleState, prelude::*,
Banner, Checkbox, KeyBinding, Modal, ModalFooter, ModalHeader, Section, ToggleState,
WithScrollbar, prelude::*,
};
use ui_input::InputField;
use workspace::{ModalView, Workspace};
fn single_line_input(
label: impl Into<SharedString>,
placeholder: impl Into<SharedString>,
text: Option<&str>,
tab_index: isize,
window: &mut Window,
cx: &mut App,
) -> Entity<InputField> {
cx.new(|cx| {
let input = InputField::new(window, cx, placeholder)
.label(label)
.tab_index(tab_index)
.tab_stop(true);
if let Some(text) = text {
input
.editor()
.update(cx, |editor, cx| editor.set_text(text, window, cx));
}
input
})
}
#[derive(Clone, Copy)]
pub enum LlmCompatibleProvider {
OpenAi,
@@ -41,12 +67,14 @@ struct AddLlmProviderInput {
impl AddLlmProviderInput {
fn new(provider: LlmCompatibleProvider, window: &mut Window, cx: &mut App) -> Self {
let provider_name = single_line_input("Provider Name", provider.name(), None, window, cx);
let api_url = single_line_input("API URL", provider.api_url(), None, window, cx);
let provider_name =
single_line_input("Provider Name", provider.name(), None, 1, window, cx);
let api_url = single_line_input("API URL", provider.api_url(), None, 2, window, cx);
let api_key = single_line_input(
"API Key",
"000000000000000000000000000000000000000000000000",
None,
3,
window,
cx,
);
@@ -55,12 +83,13 @@ impl AddLlmProviderInput {
provider_name,
api_url,
api_key,
models: vec![ModelInput::new(window, cx)],
models: vec![ModelInput::new(0, window, cx)],
}
}
fn add_model(&mut self, window: &mut Window, cx: &mut App) {
self.models.push(ModelInput::new(window, cx));
let model_index = self.models.len();
self.models.push(ModelInput::new(model_index, window, cx));
}
fn remove_model(&mut self, index: usize) {
@@ -84,11 +113,14 @@ struct ModelInput {
}
impl ModelInput {
fn new(window: &mut Window, cx: &mut App) -> Self {
fn new(model_index: usize, window: &mut Window, cx: &mut App) -> Self {
let base_tab_index = (3 + (model_index * 4)) as isize;
let model_name = single_line_input(
"Model Name",
"e.g. gpt-4o, claude-opus-4, gemini-2.5-pro",
None,
base_tab_index + 1,
window,
cx,
);
@@ -96,6 +128,7 @@ impl ModelInput {
"Max Completion Tokens",
"200000",
Some("200000"),
base_tab_index + 2,
window,
cx,
);
@@ -103,16 +136,26 @@ impl ModelInput {
"Max Output Tokens",
"Max Output Tokens",
Some("32000"),
base_tab_index + 3,
window,
cx,
);
let max_tokens = single_line_input("Max Tokens", "Max Tokens", Some("200000"), window, cx);
let max_tokens = single_line_input(
"Max Tokens",
"Max Tokens",
Some("200000"),
base_tab_index + 4,
window,
cx,
);
let ModelCapabilities {
tools,
images,
parallel_tool_calls,
prompt_cache_key,
} = ModelCapabilities::default();
Self {
name: model_name,
max_completion_tokens,
@@ -165,24 +208,6 @@ impl ModelInput {
}
}
fn single_line_input(
label: impl Into<SharedString>,
placeholder: impl Into<SharedString>,
text: Option<&str>,
window: &mut Window,
cx: &mut App,
) -> Entity<InputField> {
cx.new(|cx| {
let input = InputField::new(window, cx, placeholder).label(label);
if let Some(text) = text {
input
.editor()
.update(cx, |editor, cx| editor.set_text(text, window, cx));
}
input
})
}
fn save_provider_to_settings(
input: &AddLlmProviderInput,
cx: &mut App,
@@ -258,6 +283,7 @@ fn save_provider_to_settings(
pub struct AddLlmProviderModal {
provider: LlmCompatibleProvider,
input: AddLlmProviderInput,
scroll_handle: ScrollHandle,
focus_handle: FocusHandle,
last_error: Option<SharedString>,
}
@@ -278,6 +304,7 @@ impl AddLlmProviderModal {
provider,
last_error: None,
focus_handle: cx.focus_handle(),
scroll_handle: ScrollHandle::new(),
}
}
@@ -418,6 +445,19 @@ impl AddLlmProviderModal {
)
})
}
fn on_tab(&mut self, _: &menu::SelectNext, window: &mut Window, _: &mut Context<Self>) {
window.focus_next();
}
fn on_tab_prev(
&mut self,
_: &menu::SelectPrevious,
window: &mut Window,
_: &mut Context<Self>,
) {
window.focus_prev();
}
}
impl EventEmitter<DismissEvent> for AddLlmProviderModal {}
@@ -431,15 +471,27 @@ impl Focusable for AddLlmProviderModal {
impl ModalView for AddLlmProviderModal {}
impl Render for AddLlmProviderModal {
fn render(&mut self, _window: &mut ui::Window, cx: &mut ui::Context<Self>) -> impl IntoElement {
fn render(&mut self, window: &mut ui::Window, cx: &mut ui::Context<Self>) -> impl IntoElement {
let focus_handle = self.focus_handle(cx);
div()
let window_size = window.viewport_size();
let rem_size = window.rem_size();
let is_large_window = window_size.height / rem_size > rems_from_px(600.).0;
let modal_max_height = if is_large_window {
rems_from_px(450.)
} else {
rems_from_px(200.)
};
v_flex()
.id("add-llm-provider-modal")
.key_context("AddLlmProviderModal")
.w(rems(34.))
.elevation_3(cx)
.on_action(cx.listener(Self::cancel))
.on_action(cx.listener(Self::on_tab))
.on_action(cx.listener(Self::on_tab_prev))
.capture_any_mouse_down(cx.listener(|this, _, window, cx| {
this.focus_handle(cx).focus(window);
}))
@@ -462,17 +514,25 @@ impl Render for AddLlmProviderModal {
)
})
.child(
v_flex()
.id("modal_content")
div()
.size_full()
.max_h_128()
.overflow_y_scroll()
.px(DynamicSpacing::Base12.rems(cx))
.gap(DynamicSpacing::Base04.rems(cx))
.child(self.input.provider_name.clone())
.child(self.input.api_url.clone())
.child(self.input.api_key.clone())
.child(self.render_model_section(cx)),
.vertical_scrollbar_for(self.scroll_handle.clone(), window, cx)
.child(
v_flex()
.id("modal_content")
.size_full()
.tab_group()
.max_h(modal_max_height)
.pl_3()
.pr_4()
.gap_2()
.overflow_y_scroll()
.track_scroll(&self.scroll_handle)
.child(self.input.provider_name.clone())
.child(self.input.api_url.clone())
.child(self.input.api_key.clone())
.child(self.render_model_section(cx)),
),
)
.footer(
ModalFooter::new().end_slot(
@@ -642,7 +702,7 @@ mod tests {
let cx = setup_test(cx).await;
cx.update(|window, cx| {
let model_input = ModelInput::new(window, cx);
let model_input = ModelInput::new(0, window, cx);
model_input.name.update(cx, |input, cx| {
input.editor().update(cx, |editor, cx| {
editor.set_text("somemodel", window, cx);
@@ -678,7 +738,7 @@ mod tests {
let cx = setup_test(cx).await;
cx.update(|window, cx| {
let mut model_input = ModelInput::new(window, cx);
let mut model_input = ModelInput::new(0, window, cx);
model_input.name.update(cx, |input, cx| {
input.editor().update(cx, |editor, cx| {
editor.set_text("somemodel", window, cx);
@@ -703,7 +763,7 @@ mod tests {
let cx = setup_test(cx).await;
cx.update(|window, cx| {
let mut model_input = ModelInput::new(window, cx);
let mut model_input = ModelInput::new(0, window, cx);
model_input.name.update(cx, |input, cx| {
input.editor().update(cx, |editor, cx| {
editor.set_text("somemodel", window, cx);
@@ -767,7 +827,7 @@ mod tests {
models.iter().enumerate()
{
if i >= input.models.len() {
input.models.push(ModelInput::new(window, cx));
input.models.push(ModelInput::new(i, window, cx));
}
let model = &mut input.models[i];
set_text(&model.name, name, window, cx);


@@ -4,11 +4,12 @@ use std::{
};
use anyhow::{Context as _, Result};
use collections::HashMap;
use context_server::{ContextServerCommand, ContextServerId};
use editor::{Editor, EditorElement, EditorStyle};
use gpui::{
AsyncWindowContext, DismissEvent, Entity, EventEmitter, FocusHandle, Focusable, Task,
TextStyle, TextStyleRefinement, UnderlineStyle, WeakEntity, prelude::*,
AsyncWindowContext, DismissEvent, Entity, EventEmitter, FocusHandle, Focusable, ScrollHandle,
Task, TextStyle, TextStyleRefinement, UnderlineStyle, WeakEntity, prelude::*,
};
use language::{Language, LanguageRegistry};
use markdown::{Markdown, MarkdownElement, MarkdownStyle};
@@ -20,10 +21,12 @@ use project::{
project_settings::{ContextServerSettings, ProjectSettings},
worktree_store::WorktreeStore,
};
use serde::Deserialize;
use settings::{Settings as _, update_settings_file};
use theme::ThemeSettings;
use ui::{
CommonAnimationExt, KeyBinding, Modal, ModalFooter, ModalHeader, Section, Tooltip, prelude::*,
CommonAnimationExt, KeyBinding, Modal, ModalFooter, ModalHeader, Section, Tooltip,
WithScrollbar, prelude::*,
};
use util::ResultExt as _;
use workspace::{ModalView, Workspace};
@@ -36,6 +39,11 @@ enum ConfigurationTarget {
id: ContextServerId,
command: ContextServerCommand,
},
ExistingHttp {
id: ContextServerId,
url: String,
headers: HashMap<String, String>,
},
Extension {
id: ContextServerId,
repository_url: Option<SharedString>,
@@ -46,9 +54,11 @@ enum ConfigurationTarget {
enum ConfigurationSource {
New {
editor: Entity<Editor>,
is_http: bool,
},
Existing {
editor: Entity<Editor>,
is_http: bool,
},
Extension {
id: ContextServerId,
@@ -96,6 +106,7 @@ impl ConfigurationSource {
match target {
ConfigurationTarget::New => ConfigurationSource::New {
editor: create_editor(context_server_input(None), jsonc_language, window, cx),
is_http: false,
},
ConfigurationTarget::Existing { id, command } => ConfigurationSource::Existing {
editor: create_editor(
@@ -104,6 +115,20 @@ impl ConfigurationSource {
window,
cx,
),
is_http: false,
},
ConfigurationTarget::ExistingHttp {
id,
url,
headers: auth,
} => ConfigurationSource::Existing {
editor: create_editor(
context_server_http_input(Some((id, url, auth))),
jsonc_language,
window,
cx,
),
is_http: true,
},
ConfigurationTarget::Extension {
id,
@@ -140,16 +165,30 @@ impl ConfigurationSource {
fn output(&self, cx: &mut App) -> Result<(ContextServerId, ContextServerSettings)> {
match self {
ConfigurationSource::New { editor } | ConfigurationSource::Existing { editor } => {
parse_input(&editor.read(cx).text(cx)).map(|(id, command)| {
(
id,
ContextServerSettings::Custom {
enabled: true,
command,
},
)
})
ConfigurationSource::New { editor, is_http }
| ConfigurationSource::Existing { editor, is_http } => {
if *is_http {
parse_http_input(&editor.read(cx).text(cx)).map(|(id, url, auth)| {
(
id,
ContextServerSettings::Http {
enabled: true,
url,
headers: auth,
},
)
})
} else {
parse_input(&editor.read(cx).text(cx)).map(|(id, command)| {
(
id,
ContextServerSettings::Custom {
enabled: true,
command,
},
)
})
}
}
ConfigurationSource::Extension {
id,
@@ -211,6 +250,66 @@ fn context_server_input(existing: Option<(ContextServerId, ContextServerCommand)
)
}
fn context_server_http_input(
existing: Option<(ContextServerId, String, HashMap<String, String>)>,
) -> String {
let (name, url, headers) = match existing {
Some((id, url, headers)) => {
let header = if headers.is_empty() {
r#"// "Authorization": "Bearer <token>""#.to_string()
} else {
let json = serde_json::to_string_pretty(&headers).unwrap();
let mut lines = json.split("\n").collect::<Vec<_>>();
if lines.len() > 1 {
lines.remove(0);
lines.pop();
}
lines
.into_iter()
.map(|line| format!(" {}", line))
.collect::<Vec<_>>()
.join("\n")
};
(id.0.to_string(), url, header)
}
None => (
"some-remote-server".to_string(),
"https://example.com/mcp".to_string(),
r#"// "Authorization": "Bearer <token>""#.to_string(),
),
};
format!(
r#"{{
/// The name of your remote MCP server
"{name}": {{
/// The URL of the remote MCP server
"url": "{url}",
"headers": {{
/// Any headers to send along
{headers}
}}
}}
}}"#
)
}
fn parse_http_input(text: &str) -> Result<(ContextServerId, String, HashMap<String, String>)> {
#[derive(Deserialize)]
struct Temp {
url: String,
#[serde(default)]
headers: HashMap<String, String>,
}
let value: HashMap<String, Temp> = serde_json_lenient::from_str(text)?;
if value.len() != 1 {
anyhow::bail!("Expected exactly one context server configuration");
}
let (key, value) = value.into_iter().next().unwrap();
Ok((ContextServerId(key.into()), value.url, value.headers))
}
fn resolve_context_server_extension(
id: ContextServerId,
worktree_store: Entity<WorktreeStore>,
@@ -252,6 +351,7 @@ pub struct ConfigureContextServerModal {
source: ConfigurationSource,
state: State,
original_server_id: Option<ContextServerId>,
scroll_handle: ScrollHandle,
}
impl ConfigureContextServerModal {
@@ -310,6 +410,15 @@ impl ConfigureContextServerModal {
id: server_id,
command,
}),
ContextServerSettings::Http {
enabled: _,
url,
headers,
} => Some(ConfigurationTarget::ExistingHttp {
id: server_id,
url,
headers,
}),
ContextServerSettings::Extension { .. } => {
match workspace
.update(cx, |workspace, cx| {
@@ -351,6 +460,7 @@ impl ConfigureContextServerModal {
state: State::Idle,
original_server_id: match &target {
ConfigurationTarget::Existing { id, .. } => Some(id.clone()),
ConfigurationTarget::ExistingHttp { id, .. } => Some(id.clone()),
ConfigurationTarget::Extension { id, .. } => Some(id.clone()),
ConfigurationTarget::New => None,
},
@@ -361,6 +471,7 @@ impl ConfigureContextServerModal {
window,
cx,
),
scroll_handle: ScrollHandle::new(),
})
})
})
@@ -478,7 +589,7 @@ impl ModalView for ConfigureContextServerModal {}
impl Focusable for ConfigureContextServerModal {
fn focus_handle(&self, cx: &App) -> FocusHandle {
match &self.source {
ConfigurationSource::New { editor } => editor.focus_handle(cx),
ConfigurationSource::New { editor, .. } => editor.focus_handle(cx),
ConfigurationSource::Existing { editor, .. } => editor.focus_handle(cx),
ConfigurationSource::Extension { editor, .. } => editor
.as_ref()
@@ -524,9 +635,10 @@ impl ConfigureContextServerModal {
}
fn render_modal_content(&self, cx: &App) -> AnyElement {
// All variants now use a single-editor approach.
let editor = match &self.source {
ConfigurationSource::New { editor } => editor,
ConfigurationSource::Existing { editor } => editor,
ConfigurationSource::New { editor, .. } => editor,
ConfigurationSource::Existing { editor, .. } => editor,
ConfigurationSource::Extension { editor, .. } => {
let Some(editor) = editor else {
return div().into_any_element();
@@ -598,6 +710,36 @@ impl ConfigureContextServerModal {
move |_, _, cx| cx.open_url(&repository_url)
}),
)
} else if let ConfigurationSource::New { is_http, .. } = &self.source {
let label = if *is_http {
"Run command"
} else {
"Connect via HTTP"
};
let tooltip = if *is_http {
"Configure an MCP server that runs on stdin/stdout."
} else {
"Configure an MCP server that you connect to over HTTP."
};
Some(
Button::new("toggle-kind", label)
.tooltip(Tooltip::text(tooltip))
.on_click(cx.listener(|this, _, window, cx| match &mut this.source {
ConfigurationSource::New { editor, is_http } => {
*is_http = !*is_http;
let new_text = if *is_http {
context_server_http_input(None)
} else {
context_server_input(None)
};
editor.update(cx, |editor, cx| {
editor.set_text(new_text, window, cx);
})
}
_ => {}
})),
)
} else {
None
},
@@ -680,6 +822,7 @@ impl ConfigureContextServerModal {
impl Render for ConfigureContextServerModal {
fn render(&mut self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
let scroll_handle = self.scroll_handle.clone();
div()
.elevation_3(cx)
.w(rems(34.))
@@ -699,14 +842,29 @@ impl Render for ConfigureContextServerModal {
Modal::new("configure-context-server", None)
.header(self.render_modal_header())
.section(
Section::new()
.child(self.render_modal_description(window, cx))
.child(self.render_modal_content(cx))
.child(match &self.state {
State::Idle => div(),
State::Waiting => Self::render_waiting_for_context_server(),
State::Error(error) => Self::render_modal_error(error.clone()),
}),
Section::new().child(
div()
.size_full()
.child(
div()
.id("modal-content")
.max_h(vh(0.7, window))
.overflow_y_scroll()
.track_scroll(&scroll_handle)
.child(self.render_modal_description(window, cx))
.child(self.render_modal_content(cx))
.child(match &self.state {
State::Idle => div(),
State::Waiting => {
Self::render_waiting_for_context_server()
}
State::Error(error) => {
Self::render_modal_error(error.clone())
}
}),
)
.vertical_scrollbar_for(scroll_handle, window, cx),
),
)
.footer(self.render_modal_footer(cx)),
)


@@ -1892,6 +1892,9 @@ impl AgentPanel {
.anchor(Corner::TopRight)
.with_handle(self.new_thread_menu_handle.clone())
.menu({
let selected_agent = self.selected_agent.clone();
let is_agent_selected = move |agent_type: AgentType| selected_agent == agent_type;
let workspace = self.workspace.clone();
let is_via_collab = workspace
.update(cx, |workspace, cx| {
@@ -1905,7 +1908,6 @@ impl AgentPanel {
let active_thread = active_thread.clone();
Some(ContextMenu::build(window, cx, |menu, _window, cx| {
menu.context(focus_handle.clone())
.header("Zed Agent")
.when_some(active_thread, |this, active_thread| {
let thread = active_thread.read(cx);
@@ -1929,9 +1931,11 @@ impl AgentPanel {
}
})
.item(
ContextMenuEntry::new("New Thread")
.action(NewThread.boxed_clone())
.icon(IconName::Thread)
ContextMenuEntry::new("Zed Agent")
.when(is_agent_selected(AgentType::NativeAgent) || is_agent_selected(AgentType::TextThread), |this| {
this.action(Box::new(NewExternalAgentThread { agent: None }))
})
.icon(IconName::ZedAgent)
.icon_color(Color::Muted)
.handler({
let workspace = workspace.clone();
@@ -1955,10 +1959,10 @@ impl AgentPanel {
}),
)
.item(
ContextMenuEntry::new("New Text Thread")
ContextMenuEntry::new("Text Thread")
.action(NewTextThread.boxed_clone())
.icon(IconName::TextThread)
.icon_color(Color::Muted)
.action(NewTextThread.boxed_clone())
.handler({
let workspace = workspace.clone();
move |window, cx| {
@@ -1983,7 +1987,10 @@ impl AgentPanel {
.separator()
.header("External Agents")
.item(
ContextMenuEntry::new("New Claude Code")
ContextMenuEntry::new("Claude Code")
.when(is_agent_selected(AgentType::ClaudeCode), |this| {
this.action(Box::new(NewExternalAgentThread { agent: None }))
})
.icon(IconName::AiClaude)
.disabled(is_via_collab)
.icon_color(Color::Muted)
@@ -2009,7 +2016,10 @@ impl AgentPanel {
}),
)
.item(
ContextMenuEntry::new("New Codex CLI")
ContextMenuEntry::new("Codex CLI")
.when(is_agent_selected(AgentType::Codex), |this| {
this.action(Box::new(NewExternalAgentThread { agent: None }))
})
.icon(IconName::AiOpenAi)
.disabled(is_via_collab)
.icon_color(Color::Muted)
@@ -2035,7 +2045,10 @@ impl AgentPanel {
}),
)
.item(
ContextMenuEntry::new("New Gemini CLI")
ContextMenuEntry::new("Gemini CLI")
.when(is_agent_selected(AgentType::Gemini), |this| {
this.action(Box::new(NewExternalAgentThread { agent: None }))
})
.icon(IconName::AiGemini)
.icon_color(Color::Muted)
.disabled(is_via_collab)
@@ -2061,8 +2074,8 @@ impl AgentPanel {
}),
)
.map(|mut menu| {
let agent_server_store_read = agent_server_store.read(cx);
let agent_names = agent_server_store_read
let agent_server_store = agent_server_store.read(cx);
let agent_names = agent_server_store
.external_agents()
.filter(|name| {
name.0 != GEMINI_NAME
@@ -2071,21 +2084,38 @@ impl AgentPanel {
})
.cloned()
.collect::<Vec<_>>();
let custom_settings = cx
.global::<SettingsStore>()
.get::<AllAgentServersSettings>(None)
.custom
.clone();
for agent_name in agent_names {
let icon_path = agent_server_store_read.agent_icon(&agent_name);
let mut entry =
ContextMenuEntry::new(format!("New {}", agent_name));
let icon_path = agent_server_store.agent_icon(&agent_name);
let mut entry = ContextMenuEntry::new(agent_name.clone());
let command = custom_settings
.get(&agent_name.0)
.map(|settings| settings.command.clone())
.unwrap_or(placeholder_command());
if let Some(icon_path) = icon_path {
entry = entry.custom_icon_svg(icon_path);
} else {
entry = entry.icon(IconName::Terminal);
}
entry = entry
.when(
is_agent_selected(AgentType::Custom {
name: agent_name.0.clone(),
command: command.clone(),
}),
|this| {
this.action(Box::new(NewExternalAgentThread { agent: None }))
},
)
.icon_color(Color::Muted)
.disabled(is_via_collab)
.handler({
@@ -2125,6 +2155,7 @@ impl AgentPanel {
}
}
});
menu = menu.item(entry);
}
@@ -2157,7 +2188,7 @@ impl AgentPanel {
.id("selected_agent_icon")
.when_some(selected_agent_custom_icon, |this, icon_path| {
let label = selected_agent_label.clone();
this.px(DynamicSpacing::Base02.rems(cx))
this.px_1()
.child(Icon::from_external_svg(icon_path).color(Color::Muted))
.tooltip(move |_window, cx| {
Tooltip::with_meta(label.clone(), None, "Selected Agent", cx)
@@ -2166,7 +2197,7 @@ impl AgentPanel {
.when(!has_custom_icon, |this| {
this.when_some(self.selected_agent.icon(), |this, icon| {
let label = selected_agent_label.clone();
this.px(DynamicSpacing::Base02.rems(cx))
this.px_1()
.child(Icon::new(icon).color(Color::Muted))
.tooltip(move |_window, cx| {
Tooltip::with_meta(label.clone(), None, "Selected Agent", cx)


@@ -30,7 +30,10 @@ use command_palette_hooks::CommandPaletteFilter;
use feature_flags::FeatureFlagAppExt as _;
use fs::Fs;
use gpui::{Action, App, Entity, SharedString, actions};
use language::LanguageRegistry;
use language::{
LanguageRegistry,
language_settings::{AllLanguageSettings, EditPredictionProvider},
};
use language_model::{
ConfiguredModel, LanguageModel, LanguageModelId, LanguageModelProviderId, LanguageModelRegistry,
};
@@ -286,7 +289,25 @@ pub fn init(
fn update_command_palette_filter(cx: &mut App) {
let disable_ai = DisableAiSettings::get_global(cx).disable_ai;
let agent_enabled = AgentSettings::get_global(cx).enabled;
let edit_prediction_provider = AllLanguageSettings::get_global(cx)
.edit_predictions
.provider;
CommandPaletteFilter::update_global(cx, |filter, _| {
use editor::actions::{
AcceptEditPrediction, AcceptPartialEditPrediction, NextEditPrediction,
PreviousEditPrediction, ShowEditPrediction, ToggleEditPrediction,
};
let edit_prediction_actions = [
TypeId::of::<AcceptEditPrediction>(),
TypeId::of::<AcceptPartialEditPrediction>(),
TypeId::of::<ShowEditPrediction>(),
TypeId::of::<NextEditPrediction>(),
TypeId::of::<PreviousEditPrediction>(),
TypeId::of::<ToggleEditPrediction>(),
];
if disable_ai {
filter.hide_namespace("agent");
filter.hide_namespace("assistant");
@@ -295,42 +316,47 @@ fn update_command_palette_filter(cx: &mut App) {
filter.hide_namespace("zed_predict_onboarding");
filter.hide_namespace("edit_prediction");
use editor::actions::{
AcceptEditPrediction, AcceptPartialEditPrediction, NextEditPrediction,
PreviousEditPrediction, ShowEditPrediction, ToggleEditPrediction,
};
let edit_prediction_actions = [
TypeId::of::<AcceptEditPrediction>(),
TypeId::of::<AcceptPartialEditPrediction>(),
TypeId::of::<ShowEditPrediction>(),
TypeId::of::<NextEditPrediction>(),
TypeId::of::<PreviousEditPrediction>(),
TypeId::of::<ToggleEditPrediction>(),
];
filter.hide_action_types(&edit_prediction_actions);
filter.hide_action_types(&[TypeId::of::<zed_actions::OpenZedPredictOnboarding>()]);
} else {
filter.show_namespace("agent");
if agent_enabled {
filter.show_namespace("agent");
} else {
filter.hide_namespace("agent");
}
filter.show_namespace("assistant");
filter.show_namespace("copilot");
match edit_prediction_provider {
EditPredictionProvider::None => {
filter.hide_namespace("edit_prediction");
filter.hide_namespace("copilot");
filter.hide_namespace("supermaven");
filter.hide_action_types(&edit_prediction_actions);
}
EditPredictionProvider::Copilot => {
filter.show_namespace("edit_prediction");
filter.show_namespace("copilot");
filter.hide_namespace("supermaven");
filter.show_action_types(edit_prediction_actions.iter());
}
EditPredictionProvider::Supermaven => {
filter.show_namespace("edit_prediction");
filter.hide_namespace("copilot");
filter.show_namespace("supermaven");
filter.show_action_types(edit_prediction_actions.iter());
}
EditPredictionProvider::Zed
| EditPredictionProvider::Codestral
| EditPredictionProvider::Experimental(_) => {
filter.show_namespace("edit_prediction");
filter.hide_namespace("copilot");
filter.hide_namespace("supermaven");
filter.show_action_types(edit_prediction_actions.iter());
}
}
filter.show_namespace("zed_predict_onboarding");
filter.show_namespace("edit_prediction");
use editor::actions::{
AcceptEditPrediction, AcceptPartialEditPrediction, NextEditPrediction,
PreviousEditPrediction, ShowEditPrediction, ToggleEditPrediction,
};
let edit_prediction_actions = [
TypeId::of::<AcceptEditPrediction>(),
TypeId::of::<AcceptPartialEditPrediction>(),
TypeId::of::<ShowEditPrediction>(),
TypeId::of::<NextEditPrediction>(),
TypeId::of::<PreviousEditPrediction>(),
TypeId::of::<ToggleEditPrediction>(),
];
filter.show_action_types(edit_prediction_actions.iter());
filter.show_action_types(&[TypeId::of::<zed_actions::OpenZedPredictOnboarding>()]);
}
});
@@ -420,3 +446,137 @@ fn register_slash_commands(cx: &mut App) {
})
.detach();
}
#[cfg(test)]
mod tests {
use super::*;
use agent_settings::{AgentProfileId, AgentSettings, CompletionMode};
use command_palette_hooks::CommandPaletteFilter;
use editor::actions::AcceptEditPrediction;
use gpui::{BorrowAppContext, TestAppContext, px};
use project::DisableAiSettings;
use settings::{
DefaultAgentView, DockPosition, NotifyWhenAgentWaiting, Settings, SettingsStore,
};
#[gpui::test]
fn test_agent_command_palette_visibility(cx: &mut TestAppContext) {
// Init settings
cx.update(|cx| {
let store = SettingsStore::test(cx);
cx.set_global(store);
command_palette_hooks::init(cx);
AgentSettings::register(cx);
DisableAiSettings::register(cx);
AllLanguageSettings::register(cx);
});
let agent_settings = AgentSettings {
enabled: true,
button: true,
dock: DockPosition::Right,
default_width: px(300.),
default_height: px(600.),
default_model: None,
inline_assistant_model: None,
commit_message_model: None,
thread_summary_model: None,
inline_alternatives: vec![],
default_profile: AgentProfileId::default(),
default_view: DefaultAgentView::Thread,
profiles: Default::default(),
always_allow_tool_actions: false,
notify_when_agent_waiting: NotifyWhenAgentWaiting::default(),
play_sound_when_agent_done: false,
single_file_review: false,
model_parameters: vec![],
preferred_completion_mode: CompletionMode::Normal,
enable_feedback: false,
expand_edit_card: true,
expand_terminal_card: true,
use_modifier_to_send: true,
message_editor_min_lines: 1,
};
cx.update(|cx| {
AgentSettings::override_global(agent_settings.clone(), cx);
DisableAiSettings::override_global(DisableAiSettings { disable_ai: false }, cx);
// Initial update
update_command_palette_filter(cx);
});
// Assert visible
cx.update(|cx| {
let filter = CommandPaletteFilter::try_global(cx).unwrap();
assert!(
!filter.is_hidden(&NewThread),
"NewThread should be visible by default"
);
});
// Disable agent
cx.update(|cx| {
let mut new_settings = agent_settings.clone();
new_settings.enabled = false;
AgentSettings::override_global(new_settings, cx);
// Trigger update
update_command_palette_filter(cx);
});
// Assert hidden
cx.update(|cx| {
let filter = CommandPaletteFilter::try_global(cx).unwrap();
assert!(
filter.is_hidden(&NewThread),
"NewThread should be hidden when agent is disabled"
);
});
// Test EditPredictionProvider
// Enable EditPredictionProvider::Copilot
cx.update(|cx| {
cx.update_global::<SettingsStore, _>(|store, cx| {
store.update_user_settings(cx, |s| {
s.project
.all_languages
.features
.get_or_insert(Default::default())
.edit_prediction_provider = Some(EditPredictionProvider::Copilot);
});
});
update_command_palette_filter(cx);
});
cx.update(|cx| {
let filter = CommandPaletteFilter::try_global(cx).unwrap();
assert!(
!filter.is_hidden(&AcceptEditPrediction),
"EditPrediction should be visible when provider is Copilot"
);
});
// Disable EditPredictionProvider (None)
cx.update(|cx| {
cx.update_global::<SettingsStore, _>(|store, cx| {
store.update_user_settings(cx, |s| {
s.project
.all_languages
.features
.get_or_insert(Default::default())
.edit_prediction_provider = Some(EditPredictionProvider::None);
});
});
update_command_palette_filter(cx);
});
cx.update(|cx| {
let filter = CommandPaletteFilter::try_global(cx).unwrap();
assert!(
filter.is_hidden(&AcceptEditPrediction),
"EditPrediction should be hidden when provider is None"
);
});
}
}
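The diff above replaces a blanket show/hide of edit-prediction actions with a `match` on the configured provider, toggling a fixed set of action `TypeId`s. A minimal std-only sketch of that pattern (type and variant names here are illustrative stand-ins, not Zed's real `CommandPaletteFilter` API):

```rust
use std::any::TypeId;
use std::collections::HashSet;

// Stand-ins for editor actions; unit structs are enough to get a TypeId.
struct AcceptEditPrediction;
struct ToggleEditPrediction;

#[derive(Clone, Copy, PartialEq)]
enum EditPredictionProvider {
    None,
    Copilot,
    Supermaven,
}

#[derive(Default)]
struct CommandPaletteFilter {
    hidden: HashSet<TypeId>,
}

impl CommandPaletteFilter {
    fn hide(&mut self, ids: &[TypeId]) {
        self.hidden.extend(ids.iter().copied());
    }
    fn show(&mut self, ids: &[TypeId]) {
        for id in ids {
            self.hidden.remove(id);
        }
    }
    fn is_hidden(&self, id: TypeId) -> bool {
        self.hidden.contains(&id)
    }
}

fn update_filter(filter: &mut CommandPaletteFilter, provider: EditPredictionProvider) {
    let actions = [
        TypeId::of::<AcceptEditPrediction>(),
        TypeId::of::<ToggleEditPrediction>(),
    ];
    match provider {
        // No provider configured: the shared prediction actions are hidden.
        EditPredictionProvider::None => filter.hide(&actions),
        // Any configured provider exposes the shared prediction actions.
        _ => filter.show(&actions),
    }
}
```

Re-running `update_filter` after a settings change is all that is needed to keep the palette in sync, which is exactly what the new test exercises.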

View File

@@ -1089,7 +1089,7 @@ mod tests {
}
#[gpui::test]
async fn test_large_file_uses_outline(cx: &mut TestAppContext) {
async fn test_large_file_uses_fallback(cx: &mut TestAppContext) {
init_test_settings(cx);
// Create a large file that exceeds AUTO_OUTLINE_SIZE
@@ -1101,16 +1101,16 @@ mod tests {
let file_context = load_context_for("file.txt", large_content, cx).await;
// Should contain some of the actual file content
assert!(
file_context
.text
.contains(&format!("# File outline for {}", path!("test/file.txt"))),
"Large files should not get an outline"
file_context.text.contains(LINE),
"Should contain some of the file content"
);
// Should be much smaller than original
assert!(
file_context.text.len() < content_len,
"Outline should be smaller than original content"
file_context.text.len() < content_len / 10,
"Should be significantly smaller than original content"
);
}

View File

@@ -278,6 +278,8 @@ impl ContextPickerCompletionProvider {
icon_path: Some(mode.icon().path().into()),
documentation: None,
source: project::CompletionSource::Custom,
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
// This ensures that when a user accepts this completion, the
// completion menu will still be shown after "@category " is
@@ -386,6 +388,8 @@ impl ContextPickerCompletionProvider {
icon_path: Some(action.icon().path().into()),
documentation: None,
source: project::CompletionSource::Custom,
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
// This ensures that when a user accepts this completion, the
// completion menu will still be shown after "@category " is
@@ -417,6 +421,8 @@ impl ContextPickerCompletionProvider {
replace_range: source_range.clone(),
new_text,
label: CodeLabel::plain(thread_entry.title().to_string(), None),
match_start: None,
snippet_deduplication_key: None,
documentation: None,
insert_text_mode: None,
source: project::CompletionSource::Custom,
@@ -484,6 +490,8 @@ impl ContextPickerCompletionProvider {
replace_range: source_range.clone(),
new_text,
label: CodeLabel::plain(rules.title.to_string(), None),
match_start: None,
snippet_deduplication_key: None,
documentation: None,
insert_text_mode: None,
source: project::CompletionSource::Custom,
@@ -524,6 +532,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(IconName::ToolWeb.path().into()),
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
IconName::ToolWeb.path().into(),
@@ -612,6 +622,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(completion_icon_path),
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
crease_icon_path,
@@ -689,6 +701,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(IconName::Code.path().into()),
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
IconName::Code.path().into(),

View File

@@ -1,6 +1,6 @@
use std::{cmp::Reverse, sync::Arc};
use collections::{HashSet, IndexMap};
use collections::IndexMap;
use fuzzy::{StringMatch, StringMatchCandidate, match_strings};
use gpui::{Action, AnyElement, App, BackgroundExecutor, DismissEvent, Subscription, Task};
use language_model::{
@@ -57,7 +57,7 @@ fn all_models(cx: &App) -> GroupedModels {
})
.collect();
let other = providers
let all = providers
.iter()
.flat_map(|provider| {
provider
@@ -70,7 +70,7 @@ fn all_models(cx: &App) -> GroupedModels {
})
.collect();
GroupedModels::new(other, recommended)
GroupedModels::new(all, recommended)
}
#[derive(Clone)]
@@ -210,33 +210,24 @@ impl LanguageModelPickerDelegate {
struct GroupedModels {
recommended: Vec<ModelInfo>,
other: IndexMap<LanguageModelProviderId, Vec<ModelInfo>>,
all: IndexMap<LanguageModelProviderId, Vec<ModelInfo>>,
}
impl GroupedModels {
pub fn new(other: Vec<ModelInfo>, recommended: Vec<ModelInfo>) -> Self {
let recommended_ids = recommended
.iter()
.map(|info| (info.model.provider_id(), info.model.id()))
.collect::<HashSet<_>>();
let mut other_by_provider: IndexMap<_, Vec<ModelInfo>> = IndexMap::default();
for model in other {
if recommended_ids.contains(&(model.model.provider_id(), model.model.id())) {
continue;
}
pub fn new(all: Vec<ModelInfo>, recommended: Vec<ModelInfo>) -> Self {
let mut all_by_provider: IndexMap<_, Vec<ModelInfo>> = IndexMap::default();
for model in all {
let provider = model.model.provider_id();
if let Some(models) = other_by_provider.get_mut(&provider) {
if let Some(models) = all_by_provider.get_mut(&provider) {
models.push(model);
} else {
other_by_provider.insert(provider, vec![model]);
all_by_provider.insert(provider, vec![model]);
}
}
Self {
recommended,
other: other_by_provider,
all: all_by_provider,
}
}
@@ -252,7 +243,7 @@ impl GroupedModels {
);
}
for models in self.other.values() {
for models in self.all.values() {
if models.is_empty() {
continue;
}
@@ -267,20 +258,6 @@ impl GroupedModels {
}
entries
}
fn model_infos(&self) -> Vec<ModelInfo> {
let other = self
.other
.values()
.flat_map(|model| model.iter())
.cloned()
.collect::<Vec<_>>();
self.recommended
.iter()
.chain(&other)
.cloned()
.collect::<Vec<_>>()
}
}
enum LanguageModelPickerEntry {
@@ -425,8 +402,9 @@ impl PickerDelegate for LanguageModelPickerDelegate {
.collect::<Vec<_>>();
let available_models = all_models
.model_infos()
.iter()
.all
.values()
.flat_map(|models| models.iter())
.filter(|m| configured_provider_ids.contains(&m.model.provider_id()))
.cloned()
.collect::<Vec<_>>();
@@ -514,17 +492,15 @@ impl PickerDelegate for LanguageModelPickerDelegate {
.inset(true)
.spacing(ListItemSpacing::Sparse)
.toggle_state(selected)
.start_slot(
Icon::new(model_info.icon)
.color(model_icon_color)
.size(IconSize::Small),
)
.child(
h_flex()
.w_full()
.pl_0p5()
.gap_1p5()
.w(px(240.))
.child(
Icon::new(model_info.icon)
.color(model_icon_color)
.size(IconSize::Small),
)
.child(Label::new(model_info.model.name().0).truncate()),
)
.end_slot(div().pr_3().when(is_selected, |this| {
@@ -764,46 +740,52 @@ mod tests {
}
#[gpui::test]
fn test_exclude_recommended_models(_cx: &mut TestAppContext) {
fn test_recommended_models_also_appear_in_other(_cx: &mut TestAppContext) {
let recommended_models = create_models(vec![("zed", "claude")]);
let all_models = create_models(vec![
("zed", "claude"), // Should be filtered out from "other"
("zed", "claude"), // Should also appear in "other"
("zed", "gemini"),
("copilot", "o3"),
]);
let grouped_models = GroupedModels::new(all_models, recommended_models);
let actual_other_models = grouped_models
.other
let actual_all_models = grouped_models
.all
.values()
.flatten()
.cloned()
.collect::<Vec<_>>();
// Recommended models should not appear in "other"
assert_models_eq(actual_other_models, vec!["zed/gemini", "copilot/o3"]);
// Recommended models should also appear in "all"
assert_models_eq(
actual_all_models,
vec!["zed/claude", "zed/gemini", "copilot/o3"],
);
}
#[gpui::test]
fn test_dont_exclude_models_from_other_providers(_cx: &mut TestAppContext) {
fn test_models_from_different_providers(_cx: &mut TestAppContext) {
let recommended_models = create_models(vec![("zed", "claude")]);
let all_models = create_models(vec![
("zed", "claude"), // Should be filtered out from "other"
("zed", "claude"), // Should also appear in "other"
("zed", "gemini"),
("copilot", "claude"), // Should not be filtered out from "other"
("copilot", "claude"), // Different provider, should appear in "other"
]);
let grouped_models = GroupedModels::new(all_models, recommended_models);
let actual_other_models = grouped_models
.other
let actual_all_models = grouped_models
.all
.values()
.flatten()
.cloned()
.collect::<Vec<_>>();
// Recommended models should not appear in "other"
assert_models_eq(actual_other_models, vec!["zed/gemini", "copilot/claude"]);
// All models should appear in "all" regardless of recommended status
assert_models_eq(
actual_all_models,
vec!["zed/claude", "zed/gemini", "copilot/claude"],
);
}
}
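The `GroupedModels` change stops filtering recommended models out of the per-provider map: every model now lands under its provider, in insertion order. A sketch of that grouping using a `Vec` of pairs in place of `IndexMap` (to stay std-only while preserving provider order):

```rust
// Group (provider, model) pairs by provider, keeping first-seen provider
// order and keeping every model — recommended ones are no longer excluded.
fn group_by_provider(
    all: Vec<(&'static str, &'static str)>,
) -> Vec<(&'static str, Vec<&'static str>)> {
    let mut grouped: Vec<(&'static str, Vec<&'static str>)> = Vec::new();
    for (provider, model) in all {
        if let Some((_, models)) = grouped.iter_mut().find(|(p, _)| *p == provider) {
            models.push(model);
        } else {
            grouped.push((provider, vec![model]));
        }
    }
    grouped
}
```

With this shape, the "recommended" section can simply be rendered first and duplicates in the full list are expected, matching the renamed tests above.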

View File

@@ -127,6 +127,8 @@ impl SlashCommandCompletionProvider {
new_text,
label: command.label(cx),
icon_path: None,
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm,
source: CompletionSource::Custom,
@@ -232,6 +234,8 @@ impl SlashCommandCompletionProvider {
icon_path: None,
new_text,
documentation: None,
match_start: None,
snippet_deduplication_key: None,
confirm,
insert_text_mode: None,
source: CompletionSource::Custom,

View File

@@ -1679,7 +1679,7 @@ impl TextThreadEditor {
) {
cx.stop_propagation();
let images = if let Some(item) = cx.read_from_clipboard() {
let mut images = if let Some(item) = cx.read_from_clipboard() {
item.into_entries()
.filter_map(|entry| {
if let ClipboardEntry::Image(image) = entry {
@@ -1693,6 +1693,40 @@ impl TextThreadEditor {
Vec::new()
};
if let Some(paths) = cx.read_from_clipboard() {
for path in paths
.into_entries()
.filter_map(|entry| {
if let ClipboardEntry::ExternalPaths(paths) = entry {
Some(paths.paths().to_owned())
} else {
None
}
})
.flatten()
{
let Ok(content) = std::fs::read(path) else {
continue;
};
let Ok(format) = image::guess_format(&content) else {
continue;
};
images.push(gpui::Image::from_bytes(
match format {
image::ImageFormat::Png => gpui::ImageFormat::Png,
image::ImageFormat::Jpeg => gpui::ImageFormat::Jpeg,
image::ImageFormat::WebP => gpui::ImageFormat::Webp,
image::ImageFormat::Gif => gpui::ImageFormat::Gif,
image::ImageFormat::Bmp => gpui::ImageFormat::Bmp,
image::ImageFormat::Tiff => gpui::ImageFormat::Tiff,
image::ImageFormat::Ico => gpui::ImageFormat::Ico,
_ => continue,
},
content,
));
}
}
let metadata = if let Some(item) = cx.read_from_clipboard() {
item.entries().first().and_then(|entry| {
if let ClipboardEntry::String(text) = entry {
@@ -2592,12 +2626,11 @@ impl SearchableItem for TextThreadEditor {
&mut self,
index: usize,
matches: &[Self::Match],
collapse: bool,
window: &mut Window,
cx: &mut Context<Self>,
) {
self.editor.update(cx, |editor, cx| {
editor.activate_match(index, matches, collapse, window, cx);
editor.activate_match(index, matches, window, cx);
});
}
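The paste handler above reads each external path, asks `image::guess_format` for its format, and skips anything unsupported (`_ => continue`). A minimal std-only stand-in for that sniffing step, checking well-known magic bytes (the real code delegates to the `image` crate):

```rust
#[derive(Debug, PartialEq)]
enum SniffedFormat {
    Png,
    Jpeg,
    Gif,
    Unknown,
}

// Identify a handful of image formats from their leading signature bytes.
fn sniff_format(bytes: &[u8]) -> SniffedFormat {
    if bytes.starts_with(&[0x89, b'P', b'N', b'G', 0x0D, 0x0A, 0x1A, 0x0A]) {
        SniffedFormat::Png
    } else if bytes.starts_with(&[0xFF, 0xD8, 0xFF]) {
        SniffedFormat::Jpeg
    } else if bytes.starts_with(b"GIF87a") || bytes.starts_with(b"GIF89a") {
        SniffedFormat::Gif
    } else {
        // Mirrors the `_ => continue` arm: unrecognized data is skipped.
        SniffedFormat::Unknown
    }
}
```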

View File

@@ -205,13 +205,9 @@ impl PasswordProxy {
} else {
ShellKind::Posix
};
let askpass_program = ASKPASS_PROGRAM
.get_or_init(|| current_exec)
.try_shell_safe(shell_kind)
.context("Failed to shell-escape Askpass program path.")?
.to_string();
let askpass_program = ASKPASS_PROGRAM.get_or_init(|| current_exec);
// Create an askpass script that communicates back to this process.
let askpass_script = generate_askpass_script(&askpass_program, &askpass_socket);
let askpass_script = generate_askpass_script(shell_kind, askpass_program, &askpass_socket)?;
let _task = executor.spawn(async move {
maybe!(async move {
let listener =
@@ -334,23 +330,51 @@ pub fn set_askpass_program(path: std::path::PathBuf) {
#[inline]
#[cfg(not(target_os = "windows"))]
fn generate_askpass_script(askpass_program: &str, askpass_socket: &std::path::Path) -> String {
format!(
fn generate_askpass_script(
shell_kind: ShellKind,
askpass_program: &std::path::Path,
askpass_socket: &std::path::Path,
) -> Result<String> {
let askpass_program = shell_kind.prepend_command_prefix(
askpass_program
.to_str()
.context("Askpass program is on a non-utf8 path")?,
);
let askpass_program = shell_kind
.try_quote_prefix_aware(&askpass_program)
.context("Failed to shell-escape Askpass program path")?;
let askpass_socket = askpass_socket
.try_shell_safe(shell_kind)
.context("Failed to shell-escape Askpass socket path")?;
let print_args = "printf '%s\\0' \"$@\"";
let shebang = "#!/bin/sh";
Ok(format!(
"{shebang}\n{print_args} | {askpass_program} --askpass={askpass_socket} 2> /dev/null \n",
askpass_socket = askpass_socket.display(),
print_args = "printf '%s\\0' \"$@\"",
shebang = "#!/bin/sh",
)
))
}
#[inline]
#[cfg(target_os = "windows")]
fn generate_askpass_script(askpass_program: &str, askpass_socket: &std::path::Path) -> String {
format!(
fn generate_askpass_script(
shell_kind: ShellKind,
askpass_program: &std::path::Path,
askpass_socket: &std::path::Path,
) -> Result<String> {
let askpass_program = shell_kind.prepend_command_prefix(
askpass_program
.to_str()
.context("Askpass program is on a non-utf8 path")?,
);
let askpass_program = shell_kind
.try_quote_prefix_aware(&askpass_program)
.context("Failed to shell-escape Askpass program path")?;
let askpass_socket = askpass_socket
.try_shell_safe(shell_kind)
.context("Failed to shell-escape Askpass socket path")?;
Ok(format!(
r#"
$ErrorActionPreference = 'Stop';
($args -join [char]0) | & {askpass_program} --askpass={askpass_socket} 2> $null
"#,
askpass_socket = askpass_socket.display(),
)
))
}
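The askpass rework shell-escapes both the program path and the socket path before splicing them into the generated script, dispatching on `ShellKind`. A sketch of the POSIX half of that idea, with a minimal single-quote escaper (the real code uses `try_quote_prefix_aware`/`try_shell_safe`, which also handle other shells):

```rust
// Quote a string for a POSIX shell: wrap in single quotes, and turn each
// embedded quote into '\'' (close quote, escaped quote, reopen quote).
fn posix_quote(s: &str) -> String {
    let mut out = String::from("'");
    for ch in s.chars() {
        if ch == '\'' {
            out.push_str("'\\''");
        } else {
            out.push(ch);
        }
    }
    out.push('\'');
    out
}

// Assemble the askpass script with both paths safely quoted.
fn generate_askpass_script(program: &str, socket: &str) -> String {
    format!(
        "#!/bin/sh\nprintf '%s\\0' \"$@\" | {} --askpass={} 2> /dev/null\n",
        posix_quote(program),
        posix_quote(socket),
    )
}
```

Quoting both paths up front is what lets the script survive spaces or quotes in the install location, which the previous single-call escaping missed for the program path.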

View File

@@ -7,9 +7,10 @@ use assistant_slash_command::{
use assistant_slash_commands::FileCommandMetadata;
use client::{self, ModelRequestUsage, RequestUsage, proto, telemetry::Telemetry};
use clock::ReplicaId;
use cloud_llm_client::{CompletionIntent, CompletionRequestStatus, UsageLimit};
use cloud_llm_client::{CompletionIntent, UsageLimit};
use collections::{HashMap, HashSet};
use fs::{Fs, RenameOptions};
use futures::{FutureExt, StreamExt, future::Shared};
use gpui::{
App, AppContext as _, Context, Entity, EventEmitter, RenderImage, SharedString, Subscription,
@@ -2073,14 +2074,15 @@ impl TextThread {
});
match event {
LanguageModelCompletionEvent::StatusUpdate(status_update) => {
if let CompletionRequestStatus::UsageUpdated { amount, limit } = status_update {
this.update_model_request_usage(
amount as u32,
limit,
cx,
);
}
LanguageModelCompletionEvent::Started |
LanguageModelCompletionEvent::Queued {..} |
LanguageModelCompletionEvent::ToolUseLimitReached { .. } => {}
LanguageModelCompletionEvent::RequestUsage { amount, limit } => {
this.update_model_request_usage(
amount as u32,
limit,
cx,
);
}
LanguageModelCompletionEvent::StartMessage { .. } => {}
LanguageModelCompletionEvent::Stop(reason) => {
@@ -2142,7 +2144,7 @@ impl TextThread {
}
LanguageModelCompletionEvent::ToolUse(_) |
LanguageModelCompletionEvent::ToolUseJsonParseError { .. } |
LanguageModelCompletionEvent::UsageUpdate(_) => {}
LanguageModelCompletionEvent::TokenUsage(_) => {}
}
});
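This hunk reflects the flattening of `StatusUpdate` into `LanguageModelCompletionEvent` itself: usage now arrives as a top-level `RequestUsage` variant instead of a nested status enum. A small sketch of consuming such a flattened event stream (variant names follow the diff; fields are simplified):

```rust
// A trimmed-down version of the flattened completion event enum.
enum CompletionEvent {
    Started,
    Queued { position: usize },
    RequestUsage { amount: u64, limit: u64 },
    Stop,
}

// Apply one event to accumulated usage; returns true when the stream is done.
fn apply(event: &CompletionEvent, usage: &mut Option<(u64, u64)>) -> bool {
    match event {
        CompletionEvent::Started | CompletionEvent::Queued { .. } => false,
        CompletionEvent::RequestUsage { amount, limit } => {
            *usage = Some((*amount, *limit));
            false
        }
        CompletionEvent::Stop => true,
    }
}
```

With the flat enum, callers match once instead of matching on the event and then again on an inner status, which is what the simplified arms above show.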

View File

@@ -10,8 +10,8 @@ use paths::remote_servers_dir;
use release_channel::{AppCommitSha, ReleaseChannel};
use serde::{Deserialize, Serialize};
use settings::{RegisterSetting, Settings, SettingsStore};
use smol::fs::File;
use smol::{fs, io::AsyncReadExt};
use smol::{fs::File, process::Command};
use std::mem;
use std::{
env::{
@@ -23,6 +23,7 @@ use std::{
sync::Arc,
time::Duration,
};
use util::command::new_smol_command;
use workspace::Workspace;
const SHOULD_SHOW_UPDATE_NOTIFICATION_KEY: &str = "auto-updater-should-show-updated-notification";
@@ -121,7 +122,7 @@ impl Drop for MacOsUnmounter<'_> {
let mount_path = mem::take(&mut self.mount_path);
self.background_executor
.spawn(async move {
let unmount_output = Command::new("hdiutil")
let unmount_output = new_smol_command("hdiutil")
.args(["detach", "-force"])
.arg(&mount_path)
.output()
@@ -350,8 +351,7 @@ impl AutoUpdater {
pub fn start_polling(&self, cx: &mut Context<Self>) -> Task<Result<()>> {
cx.spawn(async move |this, cx| {
#[cfg(target_os = "windows")]
{
if cfg!(target_os = "windows") {
use util::ResultExt;
cleanup_windows()
@@ -800,7 +800,7 @@ async fn install_release_linux(
.await
.context("failed to create directory into which to extract update")?;
let output = Command::new("tar")
let output = new_smol_command("tar")
.arg("-xzf")
.arg(&downloaded_tar_gz)
.arg("-C")
@@ -835,7 +835,7 @@ async fn install_release_linux(
to = PathBuf::from(prefix);
}
let output = Command::new("rsync")
let output = new_smol_command("rsync")
.args(["-av", "--delete"])
.arg(&from)
.arg(&to)
@@ -867,7 +867,7 @@ async fn install_release_macos(
let mut mounted_app_path: OsString = mount_path.join(running_app_filename).into();
mounted_app_path.push("/");
let output = Command::new("hdiutil")
let output = new_smol_command("hdiutil")
.args(["attach", "-nobrowse"])
.arg(&downloaded_dmg)
.arg("-mountroot")
@@ -887,7 +887,7 @@ async fn install_release_macos(
background_executor: cx.background_executor(),
};
let output = Command::new("rsync")
let output = new_smol_command("rsync")
.args(["-av", "--delete"])
.arg(&mounted_app_path)
.arg(&running_app_path)
@@ -903,7 +903,6 @@ async fn install_release_macos(
Ok(None)
}
#[cfg(target_os = "windows")]
async fn cleanup_windows() -> Result<()> {
let parent = std::env::current_exe()?
.parent()
@@ -919,7 +918,7 @@ async fn cleanup_windows() -> Result<()> {
}
async fn install_release_windows(downloaded_installer: PathBuf) -> Result<Option<PathBuf>> {
let output = Command::new(downloaded_installer)
let output = new_smol_command(downloaded_installer)
.arg("/verysilent")
.arg("/update=true")
.arg("!desktopicon")
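Every `Command::new` call in the updater is swapped for `new_smol_command`, funneling subprocess construction through one helper so platform-specific setup happens in a single place. A std-only stand-in for that pattern (the helper name and the comment about platform tweaks are illustrative, not Zed's exact implementation):

```rust
use std::process::Command;

// Shared constructor for subprocesses: one choke point for flags that
// should apply everywhere (e.g. hiding console windows on Windows).
fn new_command(program: &str) -> Command {
    let cmd = Command::new(program);
    // Platform-specific configuration would be applied here, once,
    // instead of being repeated at every call site.
    cmd
}
```

Call sites then read the same as before (`new_command("tar").arg("-xzf")…`), but gain any future hardening for free.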

View File

@@ -1683,7 +1683,9 @@ impl LiveKitRoom {
}
}
#[derive(Default)]
enum LocalTrack<Stream: ?Sized> {
#[default]
None,
Pending {
publish_id: usize,
@@ -1694,12 +1696,6 @@ enum LocalTrack<Stream: ?Sized> {
},
}
impl<T: ?Sized> Default for LocalTrack<T> {
fn default() -> Self {
Self::None
}
}
#[derive(Copy, Clone, PartialEq, Eq)]
pub enum RoomStatus {
Online,
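The `LocalTrack` change drops a hand-written `impl Default` in favor of `#[derive(Default)]` with a `#[default]` attribute on the variant, which has been stable for enums since Rust 1.62. A minimal sketch:

```rust
// Deriving Default on an enum: the #[default] attribute marks which
// (unit) variant Default::default() should produce, replacing the
// boilerplate manual impl that the diff removes.
#[derive(Debug, Default, PartialEq)]
enum LocalTrack {
    #[default]
    None,
    Pending { publish_id: usize },
}
```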

View File

@@ -293,10 +293,11 @@ impl Telemetry {
}
pub fn metrics_enabled(self: &Arc<Self>) -> bool {
let state = self.state.lock();
let enabled = state.settings.metrics;
drop(state);
enabled
self.state.lock().settings.metrics
}
pub fn diagnostics_enabled(self: &Arc<Self>) -> bool {
self.state.lock().settings.diagnostics
}
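The telemetry simplification works because the lock guard in `self.state.lock().settings.metrics` is a temporary: it is dropped at the end of the statement, so the explicit bind-then-`drop` dance was never needed. A self-contained illustration (struct shapes here are simplified stand-ins):

```rust
use std::sync::Mutex;

struct Settings {
    metrics: bool,
}

struct Telemetry {
    state: Mutex<Settings>,
}

impl Telemetry {
    fn metrics_enabled(&self) -> bool {
        // The guard lives only for this expression; the lock is
        // released before the function returns.
        self.state.lock().unwrap().metrics
    }
}
```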
pub fn set_authenticated_user_info(

View File

@@ -267,6 +267,7 @@ impl UserStore {
Status::SignedOut => {
current_user_tx.send(None).await.ok();
this.update(cx, |this, cx| {
this.clear_plan_and_usage();
cx.emit(Event::PrivateUserInfoUpdated);
cx.notify();
this.clear_contacts()
@@ -779,6 +780,12 @@ impl UserStore {
cx.notify();
}
pub fn clear_plan_and_usage(&mut self) {
self.plan_info = None;
self.model_request_usage = None;
self.edit_prediction_usage = None;
}
fn update_authenticated_user(
&mut self,
response: GetAuthenticatedUserResponse,

View File

@@ -58,6 +58,9 @@ pub const SERVER_SUPPORTS_STATUS_MESSAGES_HEADER_NAME: &str =
/// The name of the header used by the client to indicate that it supports receiving xAI models.
pub const CLIENT_SUPPORTS_X_AI_HEADER_NAME: &str = "x-zed-client-supports-x-ai";
/// The maximum number of edit predictions that can be rejected per request.
pub const MAX_EDIT_PREDICTION_REJECTIONS_PER_REQUEST: usize = 100;
#[derive(Debug, PartialEq, Clone, Copy, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum UsageLimit {
@@ -192,6 +195,17 @@ pub struct AcceptEditPredictionBody {
pub request_id: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RejectEditPredictionsBody {
pub rejections: Vec<EditPredictionRejection>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EditPredictionRejection {
pub request_id: String,
pub was_shown: bool,
}
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Clone, Copy, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum CompletionMode {

View File

@@ -76,6 +76,10 @@ pub enum PromptFormat {
OldTextNewText,
/// Prompt format intended for use via zeta_cli
OnlySnippets,
/// One-sentence instructions used in fine-tuned models
Minimal,
/// One-sentence instructions + FIM-like template
MinimalQwen,
}
impl PromptFormat {
@@ -102,6 +106,8 @@ impl std::fmt::Display for PromptFormat {
PromptFormat::OnlySnippets => write!(f, "Only Snippets"),
PromptFormat::NumLinesUniDiff => write!(f, "Numbered Lines / Unified Diff"),
PromptFormat::OldTextNewText => write!(f, "Old Text / New Text"),
PromptFormat::Minimal => write!(f, "Minimal"),
PromptFormat::MinimalQwen => write!(f, "Minimal + Qwen FIM"),
}
}
}
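Each new `PromptFormat` variant gets a matching arm in the `Display` impl, so the format can be rendered in logs and UI. A trimmed sketch of the same pattern with only the variants this diff touches:

```rust
use std::fmt;

// New prompt formats paired with their human-readable Display labels.
enum PromptFormat {
    OnlySnippets,
    Minimal,
    MinimalQwen,
}

impl fmt::Display for PromptFormat {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            PromptFormat::OnlySnippets => write!(f, "Only Snippets"),
            PromptFormat::Minimal => write!(f, "Minimal"),
            PromptFormat::MinimalQwen => write!(f, "Minimal + Qwen FIM"),
        }
    }
}
```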

View File

@@ -19,4 +19,5 @@ ordered-float.workspace = true
rustc-hash.workspace = true
schemars.workspace = true
serde.workspace = true
serde_json.workspace = true
strum.workspace = true

View File

@@ -3,7 +3,8 @@ pub mod retrieval_prompt;
use anyhow::{Context as _, Result, anyhow};
use cloud_llm_client::predict_edits_v3::{
self, DiffPathFmt, Excerpt, Line, Point, PromptFormat, ReferencedDeclaration,
self, DiffPathFmt, Event, Excerpt, IncludedFile, Line, Point, PromptFormat,
ReferencedDeclaration,
};
use indoc::indoc;
use ordered_float::OrderedFloat;
@@ -31,7 +32,7 @@ const MARKED_EXCERPT_INSTRUCTIONS: &str = indoc! {"
Other code is provided for context, and `…` indicates when code has been skipped.
# Edit History:
## Edit History
"};
@@ -49,7 +50,7 @@ const LABELED_SECTIONS_INSTRUCTIONS: &str = indoc! {r#"
println!("{i}");
}
# Edit History:
## Edit History
"#};
@@ -86,6 +87,13 @@ const NUMBERED_LINES_INSTRUCTIONS: &str = indoc! {r#"
"#};
const STUDENT_MODEL_INSTRUCTIONS: &str = indoc! {r#"
You are a code completion assistant that analyzes edit history to identify and systematically complete incomplete refactorings or patterns across the entire codebase.
## Edit History
"#};
const UNIFIED_DIFF_REMINDER: &str = indoc! {"
---
@@ -100,18 +108,27 @@ const UNIFIED_DIFF_REMINDER: &str = indoc! {"
to uniquely identify it amongst all excerpts of code provided.
"};
const MINIMAL_PROMPT_REMINDER: &str = indoc! {"
---
Please analyze the edit history and the files, then provide the unified diff for your predicted edits.
Do not include the cursor marker in your output.
If you're editing multiple files, be sure to reflect the filename in the hunk's header.
"};
const XML_TAGS_INSTRUCTIONS: &str = indoc! {r#"
# Instructions
You are an edit prediction agent in a code editor.
Your job is to predict the next edit that the user will make,
based on their last few edits and their current cursor location.
# Output Format
Analyze the history of edits made by the user in order to infer what they are currently trying to accomplish.
Then complete the remainder of the current change if it is incomplete, or predict the next edit the user intends to make.
Always continue along the user's current trajectory, rather than changing course.
You must briefly explain your understanding of the user's goal, in one
or two sentences, and then specify their next edit, using the following
XML format:
## Output Format
You should briefly explain your understanding of the user's overall goal in one sentence, then explain what the next change
along the user's current trajectory will be in another, and finally specify the next edit using the following XML-like format:
<edits path="my-project/src/myapp/cli.py">
<old_text>
@@ -137,20 +154,34 @@ const XML_TAGS_INSTRUCTIONS: &str = indoc! {r#"
- Always close all tags properly
- Don't include the <|user_cursor|> marker in your output.
# Edit History:
## Edit History
"#};
const OLD_TEXT_NEW_TEXT_REMINDER: &str = indoc! {r#"
---
Remember that the edits in the edit history have already been deployed.
The files are currently as shown in the Code Excerpts section.
Remember that the edits in the edit history have already been applied.
"#};
pub fn build_prompt(
request: &predict_edits_v3::PredictEditsRequest,
) -> Result<(String, SectionLabels)> {
let mut section_labels = Default::default();
match request.prompt_format {
PromptFormat::MinimalQwen => {
let prompt = MinimalQwenPrompt {
events: request.events.clone(),
cursor_point: request.cursor_point,
cursor_path: request.excerpt_path.clone(),
included_files: request.included_files.clone(),
};
return Ok((prompt.render(), section_labels));
}
_ => (),
};
let mut insertions = match request.prompt_format {
PromptFormat::MarkedExcerpt => vec![
(
@@ -171,10 +202,12 @@ pub fn build_prompt(
],
PromptFormat::LabeledSections
| PromptFormat::NumLinesUniDiff
| PromptFormat::Minimal
| PromptFormat::OldTextNewText => {
vec![(request.cursor_point, CURSOR_MARKER)]
}
PromptFormat::OnlySnippets => vec![],
PromptFormat::MinimalQwen => unreachable!(),
};
let mut prompt = match request.prompt_format {
@@ -183,32 +216,59 @@ pub fn build_prompt(
PromptFormat::NumLinesUniDiff => NUMBERED_LINES_INSTRUCTIONS.to_string(),
PromptFormat::OldTextNewText => XML_TAGS_INSTRUCTIONS.to_string(),
PromptFormat::OnlySnippets => String::new(),
PromptFormat::Minimal => STUDENT_MODEL_INSTRUCTIONS.to_string(),
PromptFormat::MinimalQwen => unreachable!(),
};
if request.events.is_empty() {
prompt.push_str("(No edit history)\n\n");
} else {
prompt.push_str("Here are the latest edits made by the user, from earlier to later.\n\n");
let edit_preamble = if request.prompt_format == PromptFormat::Minimal {
"The following are the latest edits made by the user, from earlier to later.\n\n"
} else {
"Here are the latest edits made by the user, from earlier to later.\n\n"
};
prompt.push_str(edit_preamble);
push_events(&mut prompt, &request.events);
}
prompt.push_str(indoc! {"
# Code Excerpts
let excerpts_preamble = match request.prompt_format {
PromptFormat::Minimal => indoc! {"
## Part of the file under the cursor
The cursor marker <|user_cursor|> indicates the current user cursor position.
The file is in current state, edits from edit history have been applied.
"});
(The cursor marker <|user_cursor|> indicates the current user cursor position.
The file is in current state, edits from edit history has been applied.
We only show part of the file around the cursor.
You can only edit exactly this part of the file.
We prepend line numbers (e.g., `123|<actual line>`); they are not part of the file.)
"},
PromptFormat::NumLinesUniDiff | PromptFormat::OldTextNewText => indoc! {"
## Code Excerpts
if request.prompt_format == PromptFormat::NumLinesUniDiff {
prompt.push_str(indoc! {"
We prepend line numbers (e.g., `123|<actual line>`); they are not part of the file.
"});
}
Here are some excerpts of code that you should take into account to predict the next edit.
The cursor position is marked by `<|user_cursor|>` as it stands after the last edit in the history.
In addition other excerpts are included to better understand what the edit will be, including the declaration
or references of symbols around the cursor, or other similar code snippets that may need to be updated
following patterns that appear in the edit history.
Consider each of them carefully in relation to the edit history, and that the user may not have navigated
to the next place they want to edit yet.
Lines starting with `…` indicate omitted line ranges. These may appear inside multi-line code constructs.
"},
_ => indoc! {"
## Code Excerpts
The cursor marker <|user_cursor|> indicates the current user cursor position.
The file is in current state, edits from edit history have been applied.
"},
};
prompt.push_str(excerpts_preamble);
prompt.push('\n');
let mut section_labels = Default::default();
if !request.referenced_declarations.is_empty() || !request.signatures.is_empty() {
let syntax_based_prompt = SyntaxBasedPrompt::populate(request)?;
section_labels = syntax_based_prompt.write(&mut insertions, &mut prompt)?;
@@ -217,19 +277,38 @@ pub fn build_prompt(
anyhow::bail!("PromptFormat::LabeledSections cannot be used with ContextMode::Llm");
}
let include_line_numbers = matches!(
request.prompt_format,
PromptFormat::NumLinesUniDiff | PromptFormat::Minimal
);
for related_file in &request.included_files {
write_codeblock(
&related_file.path,
&related_file.excerpts,
if related_file.path == request.excerpt_path {
&insertions
} else {
&[]
},
related_file.max_row,
request.prompt_format == PromptFormat::NumLinesUniDiff,
&mut prompt,
);
if request.prompt_format == PromptFormat::Minimal {
write_codeblock_with_filename(
&related_file.path,
&related_file.excerpts,
if related_file.path == request.excerpt_path {
&insertions
} else {
&[]
},
related_file.max_row,
include_line_numbers,
&mut prompt,
);
} else {
write_codeblock(
&related_file.path,
&related_file.excerpts,
if related_file.path == request.excerpt_path {
&insertions
} else {
&[]
},
related_file.max_row,
include_line_numbers,
&mut prompt,
);
}
}
}
@@ -240,6 +319,9 @@ pub fn build_prompt(
PromptFormat::OldTextNewText => {
prompt.push_str(OLD_TEXT_NEW_TEXT_REMINDER);
}
PromptFormat::Minimal => {
prompt.push_str(MINIMAL_PROMPT_REMINDER);
}
_ => {}
}
@@ -255,6 +337,27 @@ pub fn write_codeblock<'a>(
output: &'a mut String,
) {
writeln!(output, "`````{}", DiffPathFmt(path)).unwrap();
write_excerpts(
excerpts,
sorted_insertions,
file_line_count,
include_line_numbers,
output,
);
write!(output, "`````\n\n").unwrap();
}
fn write_codeblock_with_filename<'a>(
path: &Path,
excerpts: impl IntoIterator<Item = &'a Excerpt>,
sorted_insertions: &[(Point, &str)],
file_line_count: Line,
include_line_numbers: bool,
output: &'a mut String,
) {
writeln!(output, "`````filename={}", DiffPathFmt(path)).unwrap();
write_excerpts(
excerpts,
sorted_insertions,
@@ -666,6 +769,7 @@ impl<'a> SyntaxBasedPrompt<'a> {
PromptFormat::MarkedExcerpt
| PromptFormat::OnlySnippets
| PromptFormat::OldTextNewText
| PromptFormat::Minimal
| PromptFormat::NumLinesUniDiff => {
if range.start.0 > 0 && !skipped_last_snippet {
output.push_str("\n");
@@ -681,6 +785,7 @@ impl<'a> SyntaxBasedPrompt<'a> {
writeln!(output, "<|section_{}|>", section_index).ok();
}
}
PromptFormat::MinimalQwen => unreachable!(),
}
let push_full_snippet = |output: &mut String| {
@@ -790,3 +895,69 @@ fn declaration_size(declaration: &ReferencedDeclaration, style: DeclarationStyle
DeclarationStyle::Declaration => declaration.text.len(),
}
}
struct MinimalQwenPrompt {
events: Vec<Event>,
cursor_point: Point,
cursor_path: Arc<Path>, // TODO: make a common struct with cursor_point
included_files: Vec<IncludedFile>,
}
impl MinimalQwenPrompt {
const INSTRUCTIONS: &str = "You are a code completion assistant that analyzes edit history to identify and systematically complete incomplete refactorings or patterns across the entire codebase.\n";
fn render(&self) -> String {
let edit_history = self.fmt_edit_history();
let context = self.fmt_context();
format!(
"{instructions}\n\n{edit_history}\n\n{context}",
instructions = MinimalQwenPrompt::INSTRUCTIONS,
edit_history = edit_history,
context = context
)
}
fn fmt_edit_history(&self) -> String {
if self.events.is_empty() {
"(No edit history)\n\n".to_string()
} else {
let mut events_str = String::new();
push_events(&mut events_str, &self.events);
format!(
"The following are the latest edits made by the user, from earlier to later.\n\n{}",
events_str
)
}
}
fn fmt_context(&self) -> String {
let mut context = String::new();
let include_line_numbers = true;
for related_file in &self.included_files {
writeln!(context, "<|file_sep|>{}", DiffPathFmt(&related_file.path)).unwrap();
if related_file.path == self.cursor_path {
write!(context, "<|fim_prefix|>").unwrap();
write_excerpts(
&related_file.excerpts,
&[(self.cursor_point, "<|fim_suffix|>")],
related_file.max_row,
include_line_numbers,
&mut context,
);
writeln!(context, "<|fim_middle|>").unwrap();
} else {
write_excerpts(
&related_file.excerpts,
&[],
related_file.max_row,
include_line_numbers,
&mut context,
);
}
}
context
}
}

View File

@@ -11,11 +11,11 @@ pub fn build_prompt(request: predict_edits_v3::PlanContextRetrievalRequest) -> R
let mut prompt = SEARCH_INSTRUCTIONS.to_string();
if !request.events.is_empty() {
writeln!(&mut prompt, "## User Edits\n")?;
writeln!(&mut prompt, "\n## User Edits\n\n")?;
push_events(&mut prompt, &request.events);
}
writeln!(&mut prompt, "## Cursor context")?;
writeln!(&mut prompt, "## Cursor context\n")?;
write_codeblock(
&request.excerpt_path,
&[Excerpt {
@@ -40,11 +40,49 @@ pub fn build_prompt(request: predict_edits_v3::PlanContextRetrievalRequest) -> R
pub struct SearchToolInput {
/// An array of queries to run for gathering context relevant to the next prediction
#[schemars(length(max = 3))]
#[serde(deserialize_with = "deserialize_queries")]
pub queries: Box<[SearchToolQuery]>,
}
fn deserialize_queries<'de, D>(deserializer: D) -> Result<Box<[SearchToolQuery]>, D::Error>
where
D: serde::Deserializer<'de>,
{
use serde::de::Error;
#[derive(Deserialize)]
#[serde(untagged)]
enum QueryCollection {
Array(Box<[SearchToolQuery]>),
DoubleArray(Box<[Box<[SearchToolQuery]>]>),
Single(SearchToolQuery),
}
#[derive(Deserialize)]
#[serde(untagged)]
enum MaybeDoubleEncoded {
SingleEncoded(QueryCollection),
DoubleEncoded(String),
}
let result = MaybeDoubleEncoded::deserialize(deserializer)?;
let normalized = match result {
MaybeDoubleEncoded::SingleEncoded(value) => value,
MaybeDoubleEncoded::DoubleEncoded(value) => {
serde_json::from_str(&value).map_err(D::Error::custom)?
}
};
Ok(match normalized {
QueryCollection::Array(items) => items,
QueryCollection::Single(search_tool_query) => Box::new([search_tool_query]),
QueryCollection::DoubleArray(double_array) => double_array.into_iter().flatten().collect(),
})
}
/// Search for relevant code by path, syntax hierarchy, and content.
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, Hash)]
pub struct SearchToolQuery {
/// 1. A glob pattern to match file paths in the codebase to search in.
pub glob: String,
@@ -92,3 +130,115 @@ const TOOL_USE_REMINDER: &str = indoc! {"
--
Analyze the user's intent in one to two sentences, then call the `search` tool.
"};
#[cfg(test)]
mod tests {
use serde_json::json;
use super::*;
#[test]
fn test_deserialize_queries() {
let single_query_json = indoc! {r#"{
"queries": {
"glob": "**/*.rs",
"syntax_node": ["fn test"],
"content": "assert"
}
}"#};
let flat_input: SearchToolInput = serde_json::from_str(single_query_json).unwrap();
assert_eq!(flat_input.queries.len(), 1);
assert_eq!(flat_input.queries[0].glob, "**/*.rs");
assert_eq!(flat_input.queries[0].syntax_node, vec!["fn test"]);
assert_eq!(flat_input.queries[0].content, Some("assert".to_string()));
let flat_json = indoc! {r#"{
"queries": [
{
"glob": "**/*.rs",
"syntax_node": ["fn test"],
"content": "assert"
},
{
"glob": "**/*.ts",
"syntax_node": [],
"content": null
}
]
}"#};
let flat_input: SearchToolInput = serde_json::from_str(flat_json).unwrap();
assert_eq!(flat_input.queries.len(), 2);
assert_eq!(flat_input.queries[0].glob, "**/*.rs");
assert_eq!(flat_input.queries[0].syntax_node, vec!["fn test"]);
assert_eq!(flat_input.queries[0].content, Some("assert".to_string()));
assert_eq!(flat_input.queries[1].glob, "**/*.ts");
assert_eq!(flat_input.queries[1].syntax_node.len(), 0);
assert_eq!(flat_input.queries[1].content, None);
let nested_json = indoc! {r#"{
"queries": [
[
{
"glob": "**/*.rs",
"syntax_node": ["fn test"],
"content": "assert"
}
],
[
{
"glob": "**/*.ts",
"syntax_node": [],
"content": null
}
]
]
}"#};
let nested_input: SearchToolInput = serde_json::from_str(nested_json).unwrap();
assert_eq!(nested_input.queries.len(), 2);
assert_eq!(nested_input.queries[0].glob, "**/*.rs");
assert_eq!(nested_input.queries[0].syntax_node, vec!["fn test"]);
assert_eq!(nested_input.queries[0].content, Some("assert".to_string()));
assert_eq!(nested_input.queries[1].glob, "**/*.ts");
assert_eq!(nested_input.queries[1].syntax_node.len(), 0);
assert_eq!(nested_input.queries[1].content, None);
let double_encoded_queries = serde_json::to_string(&json!({
"queries": serde_json::to_string(&json!([
{
"glob": "**/*.rs",
"syntax_node": ["fn test"],
"content": "assert"
},
{
"glob": "**/*.ts",
"syntax_node": [],
"content": null
}
])).unwrap()
}))
.unwrap();
let double_encoded_input: SearchToolInput =
serde_json::from_str(&double_encoded_queries).unwrap();
assert_eq!(double_encoded_input.queries.len(), 2);
assert_eq!(double_encoded_input.queries[0].glob, "**/*.rs");
assert_eq!(double_encoded_input.queries[0].syntax_node, vec!["fn test"]);
assert_eq!(
double_encoded_input.queries[0].content,
Some("assert".to_string())
);
assert_eq!(double_encoded_input.queries[1].glob, "**/*.ts");
assert_eq!(double_encoded_input.queries[1].syntax_node.len(), 0);
assert_eq!(double_encoded_input.queries[1].content, None);
// ### ERROR Switching from var declarations to lexical declarations [RUN 073]
// invalid search json {"queries": ["express/lib/response.js", "var\\s+[a-zA-Z_][a-zA-Z0-9_]*\\s*=.*;", "function.*\\(.*\\).*\\{.*\\}"]}
}
}
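The untagged-enum deserializer above accepts a single query, a flat array, a nested array, or a double-encoded JSON string containing any of those. The normalization step can be sketched on its own with hypothetical stand-in types (not the serde-derived enums in the diff):

```rust
// Sketch of the shape normalization performed by `deserialize_queries`,
// using hypothetical stand-in types instead of the serde untagged enums.
#[derive(Debug, PartialEq, Clone)]
struct Query(&'static str);

enum QueryCollection {
    Array(Vec<Query>),
    DoubleArray(Vec<Vec<Query>>),
    Single(Query),
}

// A single query becomes a one-element list; a nested array is flattened.
fn normalize(collection: QueryCollection) -> Vec<Query> {
    match collection {
        QueryCollection::Array(items) => items,
        QueryCollection::Single(query) => vec![query],
        QueryCollection::DoubleArray(nested) => nested.into_iter().flatten().collect(),
    }
}
```

Note that the bare string array from the commented failure (`"queries": ["express/lib/response.js", …]`) matches none of these shapes and is still rejected.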

View File

@@ -0,0 +1 @@
drop table embeddings;

View File

@@ -0,0 +1,4 @@
alter table billing_customers
add column external_id text;
create unique index uix_billing_customers_on_external_id on billing_customers (external_id);

View File

@@ -5,7 +5,6 @@ pub mod buffers;
pub mod channels;
pub mod contacts;
pub mod contributors;
pub mod embeddings;
pub mod extensions;
pub mod notifications;
pub mod projects;

View File

@@ -1,94 +0,0 @@
use super::*;
use time::Duration;
use time::OffsetDateTime;
impl Database {
pub async fn get_embeddings(
&self,
model: &str,
digests: &[Vec<u8>],
) -> Result<HashMap<Vec<u8>, Vec<f32>>> {
self.transaction(|tx| async move {
let embeddings = {
let mut db_embeddings = embedding::Entity::find()
.filter(
embedding::Column::Model.eq(model).and(
embedding::Column::Digest
.is_in(digests.iter().map(|digest| digest.as_slice())),
),
)
.stream(&*tx)
.await?;
let mut embeddings = HashMap::default();
while let Some(db_embedding) = db_embeddings.next().await {
let db_embedding = db_embedding?;
embeddings.insert(db_embedding.digest, db_embedding.dimensions);
}
embeddings
};
if !embeddings.is_empty() {
let now = OffsetDateTime::now_utc();
let retrieved_at = PrimitiveDateTime::new(now.date(), now.time());
embedding::Entity::update_many()
.filter(
embedding::Column::Digest
.is_in(embeddings.keys().map(|digest| digest.as_slice())),
)
.col_expr(embedding::Column::RetrievedAt, Expr::value(retrieved_at))
.exec(&*tx)
.await?;
}
Ok(embeddings)
})
.await
}
pub async fn save_embeddings(
&self,
model: &str,
embeddings: &HashMap<Vec<u8>, Vec<f32>>,
) -> Result<()> {
self.transaction(|tx| async move {
embedding::Entity::insert_many(embeddings.iter().map(|(digest, dimensions)| {
let now_offset_datetime = OffsetDateTime::now_utc();
let retrieved_at =
PrimitiveDateTime::new(now_offset_datetime.date(), now_offset_datetime.time());
embedding::ActiveModel {
model: ActiveValue::set(model.to_string()),
digest: ActiveValue::set(digest.clone()),
dimensions: ActiveValue::set(dimensions.clone()),
retrieved_at: ActiveValue::set(retrieved_at),
}
}))
.on_conflict(
OnConflict::columns([embedding::Column::Model, embedding::Column::Digest])
.do_nothing()
.to_owned(),
)
.exec_without_returning(&*tx)
.await?;
Ok(())
})
.await
}
pub async fn purge_old_embeddings(&self) -> Result<()> {
self.transaction(|tx| async move {
embedding::Entity::delete_many()
.filter(
embedding::Column::RetrievedAt
.lte(OffsetDateTime::now_utc() - Duration::days(60)),
)
.exec(&*tx)
.await?;
Ok(())
})
.await
}
}

View File

@@ -1005,6 +1005,7 @@ impl Database {
is_last_update: true,
merge_message: db_repository_entry.merge_message,
stash_entries: Vec::new(),
renamed_paths: Default::default(),
});
}
}

View File

@@ -796,6 +796,7 @@ impl Database {
is_last_update: true,
merge_message: db_repository.merge_message,
stash_entries: Vec::new(),
renamed_paths: Default::default(),
});
}
}

View File

@@ -8,7 +8,6 @@ pub mod channel_chat_participant;
pub mod channel_member;
pub mod contact;
pub mod contributor;
pub mod embedding;
pub mod extension;
pub mod extension_version;
pub mod follower;

View File

@@ -1,18 +0,0 @@
use sea_orm::entity::prelude::*;
use time::PrimitiveDateTime;
#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "embeddings")]
pub struct Model {
#[sea_orm(primary_key)]
pub model: String,
#[sea_orm(primary_key)]
pub digest: Vec<u8>,
pub dimensions: Vec<f32>,
pub retrieved_at: PrimitiveDateTime,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}
impl ActiveModelBehavior for ActiveModel {}

View File

@@ -39,25 +39,6 @@ pub enum Relation {
Contributor,
}
impl Model {
/// Returns the timestamp of when the user's account was created.
///
/// This will be the earlier of the `created_at` and `github_user_created_at` timestamps.
pub fn account_created_at(&self) -> NaiveDateTime {
let mut account_created_at = self.created_at;
if let Some(github_created_at) = self.github_user_created_at {
account_created_at = account_created_at.min(github_created_at);
}
account_created_at
}
/// Returns the age of the user's account.
pub fn account_age(&self) -> chrono::Duration {
chrono::Utc::now().naive_utc() - self.account_created_at()
}
}
impl Related<super::access_token::Entity> for Entity {
fn to() -> RelationDef {
Relation::AccessToken.def()

View File

@@ -2,9 +2,6 @@ mod buffer_tests;
mod channel_tests;
mod contributor_tests;
mod db_tests;
// we only run postgres tests on macos right now
#[cfg(target_os = "macos")]
mod embedding_tests;
mod extension_tests;
use crate::migrations::run_database_migrations;

View File

@@ -1,87 +0,0 @@
use super::TestDb;
use crate::db::embedding;
use collections::HashMap;
use sea_orm::{ColumnTrait, EntityTrait, QueryFilter, sea_query::Expr};
use std::ops::Sub;
use time::{Duration, OffsetDateTime, PrimitiveDateTime};
// SQLite does not support array arguments, so we only test this against a real postgres instance
#[gpui::test]
async fn test_get_embeddings_postgres(cx: &mut gpui::TestAppContext) {
let test_db = TestDb::postgres(cx.executor());
let db = test_db.db();
let provider = "test_model";
let digest1 = vec![1, 2, 3];
let digest2 = vec![4, 5, 6];
let embeddings = HashMap::from_iter([
(digest1.clone(), vec![0.1, 0.2, 0.3]),
(digest2.clone(), vec![0.4, 0.5, 0.6]),
]);
// Save embeddings
db.save_embeddings(provider, &embeddings).await.unwrap();
// Retrieve embeddings
let retrieved_embeddings = db
.get_embeddings(provider, &[digest1.clone(), digest2.clone()])
.await
.unwrap();
assert_eq!(retrieved_embeddings.len(), 2);
assert!(retrieved_embeddings.contains_key(&digest1));
assert!(retrieved_embeddings.contains_key(&digest2));
// Check if the retrieved embeddings are correct
assert_eq!(retrieved_embeddings[&digest1], vec![0.1, 0.2, 0.3]);
assert_eq!(retrieved_embeddings[&digest2], vec![0.4, 0.5, 0.6]);
}
#[gpui::test]
async fn test_purge_old_embeddings(cx: &mut gpui::TestAppContext) {
let test_db = TestDb::postgres(cx.executor());
let db = test_db.db();
let model = "test_model";
let digest = vec![7, 8, 9];
let embeddings = HashMap::from_iter([(digest.clone(), vec![0.7, 0.8, 0.9])]);
// Save old embeddings
db.save_embeddings(model, &embeddings).await.unwrap();
// Reach into the DB and change the retrieved at to be > 60 days
db.transaction(|tx| {
let digest = digest.clone();
async move {
let sixty_days_ago = OffsetDateTime::now_utc().sub(Duration::days(61));
let retrieved_at = PrimitiveDateTime::new(sixty_days_ago.date(), sixty_days_ago.time());
embedding::Entity::update_many()
.filter(
embedding::Column::Model
.eq(model)
.and(embedding::Column::Digest.eq(digest)),
)
.col_expr(embedding::Column::RetrievedAt, Expr::value(retrieved_at))
.exec(&*tx)
.await
.unwrap();
Ok(())
}
})
.await
.unwrap();
// Purge old embeddings
db.purge_old_embeddings().await.unwrap();
// Try to retrieve the purged embeddings
let retrieved_embeddings = db
.get_embeddings(model, std::slice::from_ref(&digest))
.await
.unwrap();
assert!(
retrieved_embeddings.is_empty(),
"Old embeddings should have been purged"
);
}

View File

@@ -13,7 +13,7 @@ use collab::llm::db::LlmDatabase;
use collab::migrations::run_database_migrations;
use collab::{
AppState, Config, Result, api::fetch_extensions_from_blob_store_periodically, db, env,
executor::Executor, rpc::ResultExt,
executor::Executor,
};
use db::Database;
use std::{
@@ -95,8 +95,6 @@ async fn main() -> Result<()> {
let state = AppState::new(config, Executor::Production).await?;
if mode.is_collab() {
state.db.purge_old_embeddings().await.trace_err();
let epoch = state
.db
.create_server(&state.config.zed_environment)

View File

@@ -288,7 +288,7 @@ async fn test_newline_above_or_below_does_not_move_guest_cursor(
"});
}
#[gpui::test(iterations = 10)]
#[gpui::test]
async fn test_collaborating_with_completion(cx_a: &mut TestAppContext, cx_b: &mut TestAppContext) {
let mut server = TestServer::start(cx_a.executor()).await;
let client_a = server.create_client(cx_a, "user_a").await;
@@ -307,17 +307,35 @@ async fn test_collaborating_with_completion(cx_a: &mut TestAppContext, cx_b: &mu
..lsp::ServerCapabilities::default()
};
client_a.language_registry().add(rust_lang());
let mut fake_language_servers = client_a.language_registry().register_fake_lsp(
let mut fake_language_servers = [
client_a.language_registry().register_fake_lsp(
"Rust",
FakeLspAdapter {
capabilities: capabilities.clone(),
..FakeLspAdapter::default()
},
),
client_a.language_registry().register_fake_lsp(
"Rust",
FakeLspAdapter {
name: "fake-analyzer",
capabilities: capabilities.clone(),
..FakeLspAdapter::default()
},
),
];
client_b.language_registry().add(rust_lang());
client_b.language_registry().register_fake_lsp_adapter(
"Rust",
FakeLspAdapter {
capabilities: capabilities.clone(),
..FakeLspAdapter::default()
},
);
client_b.language_registry().add(rust_lang());
client_b.language_registry().register_fake_lsp_adapter(
"Rust",
FakeLspAdapter {
name: "fake-analyzer",
capabilities,
..FakeLspAdapter::default()
},
@@ -352,7 +370,8 @@ async fn test_collaborating_with_completion(cx_a: &mut TestAppContext, cx_b: &mu
Editor::for_buffer(buffer_b.clone(), Some(project_b.clone()), window, cx)
});
let fake_language_server = fake_language_servers.next().await.unwrap();
let fake_language_server = fake_language_servers[0].next().await.unwrap();
let second_fake_language_server = fake_language_servers[1].next().await.unwrap();
cx_a.background_executor.run_until_parked();
buffer_b.read_with(cx_b, |buffer, _| {
@@ -414,6 +433,11 @@ async fn test_collaborating_with_completion(cx_a: &mut TestAppContext, cx_b: &mu
.next()
.await
.unwrap();
second_fake_language_server
.set_request_handler::<lsp::request::Completion, _, _>(|_, _| async move { Ok(None) })
.next()
.await
.unwrap();
cx_a.executor().finish_waiting();
// Open the buffer on the host.
@@ -522,6 +546,10 @@ async fn test_collaborating_with_completion(cx_a: &mut TestAppContext, cx_b: &mu
])))
});
// Second language server also needs to handle the request (returns None)
let mut second_completion_response = second_fake_language_server
.set_request_handler::<lsp::request::Completion, _, _>(|_, _| async move { Ok(None) });
// The completion now gets a new `text_edit.new_text` when resolving the completion item
let mut resolve_completion_response = fake_language_server
.set_request_handler::<lsp::request::ResolveCompletionItem, _, _>(|params, _| async move {
@@ -545,6 +573,7 @@ async fn test_collaborating_with_completion(cx_a: &mut TestAppContext, cx_b: &mu
cx_b.executor().run_until_parked();
completion_response.next().await.unwrap();
second_completion_response.next().await.unwrap();
editor_b.update_in(cx_b, |editor, window, cx| {
assert!(editor.context_menu_visible());
@@ -563,6 +592,77 @@ async fn test_collaborating_with_completion(cx_a: &mut TestAppContext, cx_b: &mu
"use d::SomeTrait;\nfn main() { a.first_method(); a.third_method(, , ) }"
);
});
// Ensure buffer is synced before proceeding with the next test
cx_a.executor().run_until_parked();
cx_b.executor().run_until_parked();
// Test completions from the second fake language server
// Add another completion trigger to test the second language server
editor_b.update_in(cx_b, |editor, window, cx| {
editor.change_selections(SelectionEffects::no_scroll(), window, cx, |s| {
s.select_ranges([68..68])
});
editor.handle_input("; b", window, cx);
editor.handle_input(".", window, cx);
});
buffer_b.read_with(cx_b, |buffer, _| {
assert_eq!(
buffer.text(),
"use d::SomeTrait;\nfn main() { a.first_method(); a.third_method(, , ); b. }"
);
});
// Set up completion handlers for both language servers
let mut first_lsp_completion = fake_language_server
.set_request_handler::<lsp::request::Completion, _, _>(|_, _| async move { Ok(None) });
let mut second_lsp_completion = second_fake_language_server
.set_request_handler::<lsp::request::Completion, _, _>(|params, _| async move {
assert_eq!(
params.text_document_position.text_document.uri,
lsp::Uri::from_file_path(path!("/a/main.rs")).unwrap(),
);
assert_eq!(
params.text_document_position.position,
lsp::Position::new(1, 54),
);
Ok(Some(lsp::CompletionResponse::Array(vec![
lsp::CompletionItem {
label: "analyzer_method(…)".into(),
detail: Some("fn(&self) -> Result<T>".into()),
text_edit: Some(lsp::CompletionTextEdit::Edit(lsp::TextEdit {
new_text: "analyzer_method()".to_string(),
range: lsp::Range::new(
lsp::Position::new(1, 54),
lsp::Position::new(1, 54),
),
})),
insert_text_format: Some(lsp::InsertTextFormat::SNIPPET),
..Default::default()
},
])))
});
cx_b.executor().run_until_parked();
// Await both language server responses
first_lsp_completion.next().await.unwrap();
second_lsp_completion.next().await.unwrap();
cx_b.executor().run_until_parked();
// Confirm the completion from the second language server works
editor_b.update_in(cx_b, |editor, window, cx| {
assert!(editor.context_menu_visible());
editor.confirm_completion(&ConfirmCompletion { item_ix: Some(0) }, window, cx);
assert_eq!(
editor.text(cx),
"use d::SomeTrait;\nfn main() { a.first_method(); a.third_method(, , ); b.analyzer_method() }"
);
});
}
#[gpui::test(iterations = 10)]
@@ -2169,16 +2269,28 @@ async fn test_inlay_hint_refresh_is_forwarded(
} else {
"initial hint"
};
Ok(Some(vec![lsp::InlayHint {
position: lsp::Position::new(0, character),
label: lsp::InlayHintLabel::String(label.to_string()),
kind: None,
text_edits: None,
tooltip: None,
padding_left: None,
padding_right: None,
data: None,
}]))
Ok(Some(vec![
lsp::InlayHint {
position: lsp::Position::new(0, character),
label: lsp::InlayHintLabel::String(label.to_string()),
kind: None,
text_edits: None,
tooltip: None,
padding_left: None,
padding_right: None,
data: None,
},
lsp::InlayHint {
position: lsp::Position::new(1090, 1090),
label: lsp::InlayHintLabel::String("out-of-bounds hint".to_string()),
kind: None,
text_edits: None,
tooltip: None,
padding_left: None,
padding_right: None,
data: None,
},
]))
}
})
.next()

View File

@@ -672,20 +672,25 @@ impl CollabPanel {
{
self.entries.push(ListEntry::ChannelEditor { depth: 0 });
}
let should_respect_collapse = query.is_empty();
let mut collapse_depth = None;
for (idx, channel) in channels.into_iter().enumerate() {
let depth = channel.parent_path.len();
if collapse_depth.is_none() && self.is_channel_collapsed(channel.id) {
collapse_depth = Some(depth);
} else if let Some(collapsed_depth) = collapse_depth {
if depth > collapsed_depth {
continue;
}
if self.is_channel_collapsed(channel.id) {
if should_respect_collapse {
if collapse_depth.is_none() && self.is_channel_collapsed(channel.id) {
collapse_depth = Some(depth);
} else {
collapse_depth = None;
} else if let Some(collapsed_depth) = collapse_depth {
if depth > collapsed_depth {
continue;
}
if self.is_channel_collapsed(channel.id) {
collapse_depth = Some(depth);
} else {
collapse_depth = None;
}
}
}

View File

@@ -12,7 +12,7 @@ workspace = true
path = "src/context_server.rs"
[features]
test-support = []
test-support = ["gpui/test-support"]
[dependencies]
anyhow.workspace = true
@@ -20,6 +20,7 @@ async-trait.workspace = true
collections.workspace = true
futures.workspace = true
gpui.workspace = true
http_client = { workspace = true, features = ["test-support"] }
log.workspace = true
net.workspace = true
parking_lot.workspace = true
@@ -32,3 +33,6 @@ smol.workspace = true
tempfile.workspace = true
url = { workspace = true, features = ["serde"] }
util.workspace = true
[dev-dependencies]
gpui = { workspace = true, features = ["test-support"] }

View File

@@ -6,6 +6,8 @@ pub mod test;
pub mod transport;
pub mod types;
use collections::HashMap;
use http_client::HttpClient;
use std::path::Path;
use std::sync::Arc;
use std::{fmt::Display, path::PathBuf};
@@ -15,6 +17,9 @@ use client::Client;
use gpui::AsyncApp;
use parking_lot::RwLock;
pub use settings::ContextServerCommand;
use url::Url;
use crate::transport::HttpTransport;
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct ContextServerId(pub Arc<str>);
@@ -52,6 +57,25 @@ impl ContextServer {
}
}
pub fn http(
id: ContextServerId,
endpoint: &Url,
headers: HashMap<String, String>,
http_client: Arc<dyn HttpClient>,
executor: gpui::BackgroundExecutor,
) -> Result<Self> {
let transport = match endpoint.scheme() {
"http" | "https" => {
log::info!("Using HTTP transport for {}", endpoint);
let transport =
HttpTransport::new(http_client, endpoint.to_string(), headers, executor);
Arc::new(transport) as _
}
_ => anyhow::bail!("unsupported MCP url scheme {}", endpoint.scheme()),
};
Ok(Self::new(id, transport))
}
pub fn new(id: ContextServerId, transport: Arc<dyn crate::transport::Transport>) -> Self {
Self {
id,

View File

@@ -1,11 +1,12 @@
pub mod http;
mod stdio_transport;
use std::pin::Pin;
use anyhow::Result;
use async_trait::async_trait;
use futures::Stream;
use std::pin::Pin;
pub use http::*;
pub use stdio_transport::*;
#[async_trait]

View File

@@ -0,0 +1,259 @@
use anyhow::{Result, anyhow};
use async_trait::async_trait;
use collections::HashMap;
use futures::{Stream, StreamExt};
use gpui::BackgroundExecutor;
use http_client::{AsyncBody, HttpClient, Request, Response, http::Method};
use parking_lot::Mutex as SyncMutex;
use smol::channel;
use std::{pin::Pin, sync::Arc};
use crate::transport::Transport;
// Constants from MCP spec
const HEADER_SESSION_ID: &str = "Mcp-Session-Id";
const EVENT_STREAM_MIME_TYPE: &str = "text/event-stream";
const JSON_MIME_TYPE: &str = "application/json";
/// HTTP Transport with session management and SSE support
pub struct HttpTransport {
http_client: Arc<dyn HttpClient>,
endpoint: String,
session_id: Arc<SyncMutex<Option<String>>>,
executor: BackgroundExecutor,
response_tx: channel::Sender<String>,
response_rx: channel::Receiver<String>,
error_tx: channel::Sender<String>,
error_rx: channel::Receiver<String>,
// Authentication headers to include in requests
headers: HashMap<String, String>,
}
impl HttpTransport {
pub fn new(
http_client: Arc<dyn HttpClient>,
endpoint: String,
headers: HashMap<String, String>,
executor: BackgroundExecutor,
) -> Self {
let (response_tx, response_rx) = channel::unbounded();
let (error_tx, error_rx) = channel::unbounded();
Self {
http_client,
executor,
endpoint,
session_id: Arc::new(SyncMutex::new(None)),
response_tx,
response_rx,
error_tx,
error_rx,
headers,
}
}
/// Send a message and handle the response based on content type
async fn send_message(&self, message: String) -> Result<()> {
let is_notification =
!message.contains("\"id\":") || message.contains("notifications/initialized");
let mut request_builder = Request::builder()
.method(Method::POST)
.uri(&self.endpoint)
.header("Content-Type", JSON_MIME_TYPE)
.header(
"Accept",
format!("{}, {}", JSON_MIME_TYPE, EVENT_STREAM_MIME_TYPE),
);
for (key, value) in &self.headers {
request_builder = request_builder.header(key.as_str(), value.as_str());
}
// Add session ID if we have one (except for initialize)
if let Some(ref session_id) = *self.session_id.lock() {
request_builder = request_builder.header(HEADER_SESSION_ID, session_id.as_str());
}
let request = request_builder.body(AsyncBody::from(message.into_bytes()))?;
let mut response = self.http_client.send(request).await?;
// Handle different response types based on status and content-type
match response.status() {
status if status.is_success() => {
// Check content type
let content_type = response
.headers()
.get("content-type")
.and_then(|v| v.to_str().ok());
// Extract session ID from response headers if present
if let Some(session_id) = response
.headers()
.get(HEADER_SESSION_ID)
.and_then(|v| v.to_str().ok())
{
*self.session_id.lock() = Some(session_id.to_string());
log::debug!("Session ID set: {}", session_id);
}
match content_type {
Some(ct) if ct.starts_with(JSON_MIME_TYPE) => {
// JSON response - read and forward immediately
let mut body = String::new();
futures::AsyncReadExt::read_to_string(response.body_mut(), &mut body)
.await?;
// Only send non-empty responses
if !body.is_empty() {
self.response_tx
.send(body)
.await
.map_err(|_| anyhow!("Failed to send JSON response"))?;
}
}
Some(ct) if ct.starts_with(EVENT_STREAM_MIME_TYPE) => {
// SSE stream - set up streaming
self.setup_sse_stream(response).await?;
}
_ => {
// For notifications, 202 Accepted with no content type is ok
if is_notification && status.as_u16() == 202 {
log::debug!("Notification accepted");
} else {
return Err(anyhow!("Unexpected content type: {:?}", content_type));
}
}
}
}
status if status.as_u16() == 202 => {
// Accepted - notification acknowledged, no response needed
log::debug!("Notification accepted");
}
_ => {
let mut error_body = String::new();
futures::AsyncReadExt::read_to_string(response.body_mut(), &mut error_body).await?;
self.error_tx
.send(format!("HTTP {}: {}", response.status(), error_body))
.await
.map_err(|_| anyhow!("Failed to send error"))?;
}
}
Ok(())
}
/// Set up SSE streaming from the response
async fn setup_sse_stream(&self, mut response: Response<AsyncBody>) -> Result<()> {
let response_tx = self.response_tx.clone();
let error_tx = self.error_tx.clone();
// Spawn a task to handle the SSE stream
smol::spawn(async move {
let reader = futures::io::BufReader::new(response.body_mut());
let mut lines = futures::AsyncBufReadExt::lines(reader);
let mut data_buffer = Vec::new();
let mut in_message = false;
while let Some(line_result) = lines.next().await {
match line_result {
Ok(line) => {
if line.is_empty() {
// Empty line signals end of event
if !data_buffer.is_empty() {
let message = data_buffer.join("\n");
// Filter out ping messages and empty data
if !message.trim().is_empty() && message != "ping" {
if let Err(e) = response_tx.send(message).await {
log::error!("Failed to send SSE message: {}", e);
break;
}
}
data_buffer.clear();
}
in_message = false;
} else if let Some(data) = line.strip_prefix("data: ") {
// Handle data lines
let data = data.trim();
if !data.is_empty() {
// Check if this is a ping message
if data == "ping" {
log::trace!("Received SSE ping");
continue;
}
data_buffer.push(data.to_string());
in_message = true;
}
} else if line.starts_with("event:")
|| line.starts_with("id:")
|| line.starts_with("retry:")
{
// Ignore other SSE fields
continue;
} else if in_message {
// Continuation of data
data_buffer.push(line);
}
}
Err(e) => {
let _ = error_tx.send(format!("SSE stream error: {}", e)).await;
break;
}
}
}
})
.detach();
Ok(())
}
}
#[async_trait]
impl Transport for HttpTransport {
async fn send(&self, message: String) -> Result<()> {
self.send_message(message).await
}
fn receive(&self) -> Pin<Box<dyn Stream<Item = String> + Send>> {
Box::pin(self.response_rx.clone())
}
fn receive_err(&self) -> Pin<Box<dyn Stream<Item = String> + Send>> {
Box::pin(self.error_rx.clone())
}
}
impl Drop for HttpTransport {
fn drop(&mut self) {
// Try to clean up the session on drop
let http_client = self.http_client.clone();
let endpoint = self.endpoint.clone();
let session_id = self.session_id.lock().clone();
let headers = self.headers.clone();
if let Some(session_id) = session_id {
self.executor
.spawn(async move {
let mut request_builder = Request::builder()
.method(Method::DELETE)
.uri(&endpoint)
.header(HEADER_SESSION_ID, &session_id);
// Add authentication headers if present
for (key, value) in headers {
request_builder = request_builder.header(key.as_str(), value.as_str());
}
let request = request_builder.body(AsyncBody::empty());
if let Ok(request) = request {
let _ = http_client.send(request).await;
}
})
.detach();
}
}
}

View File
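The SSE handling in `setup_sse_stream` above follows the standard event-stream line protocol: `data:` lines accumulate, a blank line flushes one event, `ping` payloads and `event:`/`id:`/`retry:` fields are skipped. A simplified, stdlib-only sketch of that loop (the `parse_sse` name and slice input are illustrative; the real code reads lines from an async buffered reader and omits the `in_message` continuation case shown in the transport):

```rust
// Standalone sketch of the SSE line protocol handled by setup_sse_stream:
// "data: " lines accumulate into a buffer, a blank line flushes one event,
// and "ping" payloads are dropped, as in the transport above.
fn parse_sse(lines: &[&str]) -> Vec<String> {
    let mut events = Vec::new();
    let mut data_buffer: Vec<String> = Vec::new();
    for line in lines {
        if line.is_empty() {
            // Empty line signals end of event
            if !data_buffer.is_empty() {
                let message = data_buffer.join("\n");
                if !message.trim().is_empty() && message != "ping" {
                    events.push(message);
                }
                data_buffer.clear();
            }
        } else if let Some(data) = line.strip_prefix("data: ") {
            let data = data.trim();
            if !data.is_empty() && data != "ping" {
                data_buffer.push(data.to_string());
            }
        }
        // event:/id:/retry: fields are ignored, matching the transport
    }
    events
}

fn main() {
    let lines = ["data: {\"id\":1}", "", "data: ping", "", "data: a", "data: b", ""];
    assert_eq!(
        parse_sse(&lines),
        vec!["{\"id\":1}".to_string(), "a\nb".to_string()]
    );
}
```

Multi-line events join their `data:` payloads with `\n`, which is why the two-line event above yields a single `"a\nb"` message.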

@@ -289,15 +289,11 @@ impl minidumper::ServerHandler for CrashServer {
pub fn panic_hook(info: &PanicHookInfo) {
// Don't handle a panic on threads that are not relevant to the main execution.
if extension_host::wasm_host::IS_WASM_THREAD.with(|v| v.load(Ordering::Acquire)) {
log::error!("wasm thread panicked!");
return;
}
let message = info
.payload()
.downcast_ref::<&str>()
.map(|s| s.to_string())
.or_else(|| info.payload().downcast_ref::<String>().cloned())
.unwrap_or_else(|| "Box<Any>".to_string());
let message = info.payload_as_str().unwrap_or("Box<Any>").to_owned();
let span = info
.location()

View File

@@ -1029,13 +1029,11 @@ impl SearchableItem for DapLogView {
&mut self,
index: usize,
matches: &[Self::Match],
collapse: bool,
window: &mut Window,
cx: &mut Context<Self>,
) {
self.editor.update(cx, |e, cx| {
e.activate_match(index, matches, collapse, window, cx)
})
self.editor
.update(cx, |e, cx| e.activate_match(index, matches, window, cx))
}
fn select_matches(

View File

@@ -677,6 +677,8 @@ impl ConsoleQueryBarCompletionProvider {
),
new_text: string_match.string.clone(),
label: CodeLabel::plain(string_match.string.clone(), None),
match_start: None,
snippet_deduplication_key: None,
icon_path: None,
documentation: Some(CompletionDocumentation::MultiLineMarkdown(
variable_value.into(),
@@ -790,6 +792,8 @@ impl ConsoleQueryBarCompletionProvider {
documentation: completion.detail.map(|detail| {
CompletionDocumentation::MultiLineMarkdown(detail.into())
}),
match_start: None,
snippet_deduplication_key: None,
confirm: None,
source: project::CompletionSource::Dap { sort_text },
insert_text_mode: None,

View File

@@ -370,11 +370,16 @@ impl BufferDiagnosticsEditor {
continue;
}
let languages = buffer_diagnostics_editor
.read_with(cx, |b, cx| b.project.read(cx).languages().clone())
.ok();
let diagnostic_blocks = cx.update(|_window, cx| {
DiagnosticRenderer::diagnostic_blocks_for_group(
group,
buffer_snapshot.remote_id(),
Some(Arc::new(buffer_diagnostics_editor.clone())),
languages,
cx,
)
})?;

View File

@@ -6,7 +6,7 @@ use editor::{
hover_popover::diagnostics_markdown_style,
};
use gpui::{AppContext, Entity, Focusable, WeakEntity};
use language::{BufferId, Diagnostic, DiagnosticEntryRef};
use language::{BufferId, Diagnostic, DiagnosticEntryRef, LanguageRegistry};
use lsp::DiagnosticSeverity;
use markdown::{Markdown, MarkdownElement};
use settings::Settings;
@@ -27,6 +27,7 @@ impl DiagnosticRenderer {
diagnostic_group: Vec<DiagnosticEntryRef<'_, Point>>,
buffer_id: BufferId,
diagnostics_editor: Option<Arc<dyn DiagnosticsToolbarEditor>>,
language_registry: Option<Arc<LanguageRegistry>>,
cx: &mut App,
) -> Vec<DiagnosticBlock> {
let Some(primary_ix) = diagnostic_group
@@ -75,11 +76,14 @@ impl DiagnosticRenderer {
))
}
}
results.push(DiagnosticBlock {
initial_range: primary.range.clone(),
severity: primary.diagnostic.severity,
diagnostics_editor: diagnostics_editor.clone(),
markdown: cx.new(|cx| Markdown::new(markdown.into(), None, None, cx)),
markdown: cx.new(|cx| {
Markdown::new(markdown.into(), language_registry.clone(), None, cx)
}),
});
} else {
if entry.range.start.row.abs_diff(primary.range.start.row) >= 5 {
@@ -91,7 +95,9 @@ impl DiagnosticRenderer {
initial_range: entry.range.clone(),
severity: entry.diagnostic.severity,
diagnostics_editor: diagnostics_editor.clone(),
markdown: cx.new(|cx| Markdown::new(markdown.into(), None, None, cx)),
markdown: cx.new(|cx| {
Markdown::new(markdown.into(), language_registry.clone(), None, cx)
}),
});
}
}
@@ -118,9 +124,16 @@ impl editor::DiagnosticRenderer for DiagnosticRenderer {
buffer_id: BufferId,
snapshot: EditorSnapshot,
editor: WeakEntity<Editor>,
language_registry: Option<Arc<LanguageRegistry>>,
cx: &mut App,
) -> Vec<BlockProperties<Anchor>> {
let blocks = Self::diagnostic_blocks_for_group(diagnostic_group, buffer_id, None, cx);
let blocks = Self::diagnostic_blocks_for_group(
diagnostic_group,
buffer_id,
None,
language_registry,
cx,
);
blocks
.into_iter()
@@ -146,9 +159,16 @@ impl editor::DiagnosticRenderer for DiagnosticRenderer {
diagnostic_group: Vec<DiagnosticEntryRef<'_, Point>>,
range: Range<Point>,
buffer_id: BufferId,
language_registry: Option<Arc<LanguageRegistry>>,
cx: &mut App,
) -> Option<Entity<Markdown>> {
let blocks = Self::diagnostic_blocks_for_group(diagnostic_group, buffer_id, None, cx);
let blocks = Self::diagnostic_blocks_for_group(
diagnostic_group,
buffer_id,
None,
language_registry,
cx,
);
blocks
.into_iter()
.find_map(|block| (block.initial_range == range).then(|| block.markdown))
@@ -206,6 +226,11 @@ impl DiagnosticBlock {
self.markdown.clone(),
diagnostics_markdown_style(bcx.window, cx),
)
.code_block_renderer(markdown::CodeBlockRenderer::Default {
copy_button: false,
copy_button_on_hover: false,
border: false,
})
.on_url_click({
move |link, window, cx| {
editor

View File

@@ -73,7 +73,7 @@ pub fn init(cx: &mut App) {
}
pub(crate) struct ProjectDiagnosticsEditor {
project: Entity<Project>,
pub project: Entity<Project>,
workspace: WeakEntity<Workspace>,
focus_handle: FocusHandle,
editor: Entity<Editor>,
@@ -182,7 +182,6 @@ impl ProjectDiagnosticsEditor {
project::Event::DiskBasedDiagnosticsFinished { language_server_id } => {
log::debug!("disk based diagnostics finished for server {language_server_id}");
this.close_diagnosticless_buffers(
window,
cx,
this.editor.focus_handle(cx).contains_focused(window, cx)
|| this.focus_handle.contains_focused(window, cx),
@@ -247,10 +246,10 @@ impl ProjectDiagnosticsEditor {
window.focus(&this.focus_handle);
}
}
EditorEvent::Blurred => this.close_diagnosticless_buffers(window, cx, false),
EditorEvent::Saved => this.close_diagnosticless_buffers(window, cx, true),
EditorEvent::Blurred => this.close_diagnosticless_buffers(cx, false),
EditorEvent::Saved => this.close_diagnosticless_buffers(cx, true),
EditorEvent::SelectionsChanged { .. } => {
this.close_diagnosticless_buffers(window, cx, true)
this.close_diagnosticless_buffers(cx, true)
}
_ => {}
}
@@ -298,12 +297,7 @@ impl ProjectDiagnosticsEditor {
/// - have no diagnostics anymore
/// - are saved (not dirty)
/// - and, if `retain_selections` is true, do not have selections within them
fn close_diagnosticless_buffers(
&mut self,
_window: &mut Window,
cx: &mut Context<Self>,
retain_selections: bool,
) {
fn close_diagnosticless_buffers(&mut self, cx: &mut Context<Self>, retain_selections: bool) {
let snapshot = self
.editor
.update(cx, |editor, cx| editor.display_snapshot(cx));
@@ -447,7 +441,7 @@ impl ProjectDiagnosticsEditor {
fn focus_out(&mut self, _: FocusOutEvent, window: &mut Window, cx: &mut Context<Self>) {
if !self.focus_handle.is_focused(window) && !self.editor.focus_handle(cx).is_focused(window)
{
self.close_diagnosticless_buffers(window, cx, false);
self.close_diagnosticless_buffers(cx, false);
}
}
@@ -461,8 +455,7 @@ impl ProjectDiagnosticsEditor {
});
}
});
self.multibuffer
.update(cx, |multibuffer, cx| multibuffer.clear(cx));
self.close_diagnosticless_buffers(cx, false);
self.project.update(cx, |project, cx| {
self.paths_to_update = project
.diagnostic_summaries(false, cx)
@@ -498,7 +491,7 @@ impl ProjectDiagnosticsEditor {
cx: &mut Context<Self>,
) -> Task<Result<()>> {
let was_empty = self.multibuffer.read(cx).is_empty();
let mut buffer_snapshot = buffer.read(cx).snapshot();
let buffer_snapshot = buffer.read(cx).snapshot();
let buffer_id = buffer_snapshot.remote_id();
let max_severity = if self.include_warnings {
@@ -552,11 +545,15 @@ impl ProjectDiagnosticsEditor {
if group_severity.is_none_or(|s| s > max_severity) {
continue;
}
let languages = this
.read_with(cx, |t, cx| t.project.read(cx).languages().clone())
.ok();
let more = cx.update(|_, cx| {
crate::diagnostic_renderer::DiagnosticRenderer::diagnostic_blocks_for_group(
group,
buffer_snapshot.remote_id(),
Some(diagnostics_toolbar_editor.clone()),
languages,
cx,
)
})?;
@@ -605,7 +602,6 @@ impl ProjectDiagnosticsEditor {
cx,
)
.await;
buffer_snapshot = cx.update(|_, cx| buffer.read(cx).snapshot())?;
let initial_range = buffer_snapshot.anchor_after(b.initial_range.start)
..buffer_snapshot.anchor_before(b.initial_range.end);
let excerpt_range = ExcerptRange {
@@ -1013,11 +1009,14 @@ async fn heuristic_syntactic_expand(
snapshot: BufferSnapshot,
cx: &mut AsyncApp,
) -> Option<RangeInclusive<BufferRow>> {
let start = snapshot.clip_point(input_range.start, Bias::Right);
let end = snapshot.clip_point(input_range.end, Bias::Left);
let input_row_count = input_range.end.row - input_range.start.row;
if input_row_count > max_row_count {
return None;
}
let input_range = start..end;
// If the outline node contains the diagnostic and is small enough, just use that.
let outline_range = snapshot.outline_range_containing(input_range.clone());
if let Some(outline_range) = outline_range.clone() {

View File

@@ -104,6 +104,7 @@ pub trait EditPredictionProvider: 'static + Sized {
);
fn accept(&mut self, cx: &mut Context<Self>);
fn discard(&mut self, cx: &mut Context<Self>);
fn did_show(&mut self, _cx: &mut Context<Self>) {}
fn suggest(
&mut self,
buffer: &Entity<Buffer>,
@@ -142,6 +143,7 @@ pub trait EditPredictionProviderHandle {
direction: Direction,
cx: &mut App,
);
fn did_show(&self, cx: &mut App);
fn accept(&self, cx: &mut App);
fn discard(&self, cx: &mut App);
fn suggest(
@@ -233,6 +235,10 @@ where
self.update(cx, |this, cx| this.discard(cx))
}
fn did_show(&self, cx: &mut App) {
self.update(cx, |this, cx| this.did_show(cx))
}
fn suggest(
&self,
buffer: &Entity<Buffer>,

View File
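The `did_show` addition above follows a common pattern: the provider trait gains a defaulted no-op hook, and the object-safe handle trait forwards it through an update call. A toy version of that shape, with `RefCell` standing in for the `Entity::update` plumbing (all names here are illustrative, not Zed's real types):

```rust
use std::cell::RefCell;

// Provider trait with a defaulted hook, so existing providers opt in
// without changes, mirroring EditPredictionProvider::did_show.
trait Provider {
    fn did_show(&mut self) {}
    fn shown(&self) -> usize {
        0
    }
}

struct CountingProvider {
    shows: usize,
}

impl Provider for CountingProvider {
    fn did_show(&mut self) {
        self.shows += 1;
    }
    fn shown(&self) -> usize {
        self.shows
    }
}

// Object-safe handle that forwards to the provider, standing in for
// the EditPredictionProviderHandle impl that calls this.update(cx, ...).
struct Handle(RefCell<Box<dyn Provider>>);

impl Handle {
    fn did_show(&self) {
        self.0.borrow_mut().did_show();
    }
}

fn main() {
    let handle = Handle(RefCell::new(Box::new(CountingProvider { shows: 0 })));
    handle.did_show();
    assert_eq!(handle.0.borrow().shown(), 1);
}
```

The default body means a provider that does not care about show events (like most of the existing ones in the diff) implements nothing.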

@@ -18,18 +18,19 @@ client.workspace = true
cloud_llm_client.workspace = true
codestral.workspace = true
copilot.workspace = true
edit_prediction.workspace = true
editor.workspace = true
feature_flags.workspace = true
fs.workspace = true
gpui.workspace = true
indoc.workspace = true
edit_prediction.workspace = true
language.workspace = true
paths.workspace = true
project.workspace = true
regex.workspace = true
settings.workspace = true
supermaven.workspace = true
sweep_ai.workspace = true
telemetry.workspace = true
ui.workspace = true
workspace.workspace = true

View File

@@ -18,12 +18,15 @@ use language::{
};
use project::DisableAiSettings;
use regex::Regex;
use settings::{Settings, SettingsStore, update_settings_file};
use settings::{
EXPERIMENTAL_SWEEP_EDIT_PREDICTION_PROVIDER_NAME, Settings, SettingsStore, update_settings_file,
};
use std::{
sync::{Arc, LazyLock},
time::Duration,
};
use supermaven::{AccountStatus, Supermaven};
use sweep_ai::SweepFeatureFlag;
use ui::{
Clickable, ContextMenu, ContextMenuEntry, DocumentationEdge, DocumentationSide, IconButton,
IconButtonShape, Indicator, PopoverMenu, PopoverMenuHandle, ProgressBar, Tooltip, prelude::*,
@@ -43,7 +46,8 @@ actions!(
]
);
const COPILOT_SETTINGS_URL: &str = "https://github.com/settings/copilot";
const COPILOT_SETTINGS_PATH: &str = "/settings/copilot";
const COPILOT_SETTINGS_URL: &str = concat!("https://github.com", "/settings/copilot");
const PRIVACY_DOCS: &str = "https://zed.dev/docs/ai/privacy-and-security";
struct CopilotErrorToast;
@@ -77,7 +81,7 @@ impl Render for EditPredictionButton {
let all_language_settings = all_language_settings(None, cx);
match all_language_settings.edit_predictions.provider {
match &all_language_settings.edit_predictions.provider {
EditPredictionProvider::None => div().hidden(),
EditPredictionProvider::Copilot => {
@@ -128,20 +132,21 @@ impl Render for EditPredictionButton {
}),
);
}
let this = cx.entity();
let this = cx.weak_entity();
div().child(
PopoverMenu::new("copilot")
.menu(move |window, cx| {
let current_status = Copilot::global(cx)?.read(cx).status();
Some(match current_status {
match current_status {
Status::Authorized => this.update(cx, |this, cx| {
this.build_copilot_context_menu(window, cx)
}),
_ => this.update(cx, |this, cx| {
this.build_copilot_start_menu(window, cx)
}),
})
}
.ok()
})
.anchor(Corner::BottomRight)
.trigger_with_tooltip(
@@ -182,7 +187,7 @@ impl Render for EditPredictionButton {
let icon = status.to_icon();
let tooltip_text = status.to_tooltip();
let has_menu = status.has_menu();
let this = cx.entity();
let this = cx.weak_entity();
let fs = self.fs.clone();
div().child(
@@ -209,9 +214,11 @@ impl Render for EditPredictionButton {
)
}))
}
SupermavenButtonStatus::Ready => Some(this.update(cx, |this, cx| {
this.build_supermaven_context_menu(window, cx)
})),
SupermavenButtonStatus::Ready => this
.update(cx, |this, cx| {
this.build_supermaven_context_menu(window, cx)
})
.ok(),
_ => None,
})
.anchor(Corner::BottomRight)
@@ -233,15 +240,16 @@ impl Render for EditPredictionButton {
let enabled = self.editor_enabled.unwrap_or(true);
let has_api_key = CodestralCompletionProvider::has_api_key(cx);
let fs = self.fs.clone();
let this = cx.entity();
let this = cx.weak_entity();
div().child(
PopoverMenu::new("codestral")
.menu(move |window, cx| {
if has_api_key {
Some(this.update(cx, |this, cx| {
this.update(cx, |this, cx| {
this.build_codestral_context_menu(window, cx)
}))
})
.ok()
} else {
Some(ContextMenu::build(window, cx, |menu, _, _| {
let fs = fs.clone();
@@ -292,6 +300,15 @@ impl Render for EditPredictionButton {
.with_handle(self.popover_menu_handle.clone()),
)
}
EditPredictionProvider::Experimental(provider_name) => {
if *provider_name == EXPERIMENTAL_SWEEP_EDIT_PREDICTION_PROVIDER_NAME
&& cx.has_flag::<SweepFeatureFlag>()
{
div().child(Icon::new(IconName::SweepAi))
} else {
div()
}
}
EditPredictionProvider::Zed => {
let enabled = self.editor_enabled.unwrap_or(true);
@@ -520,7 +537,7 @@ impl EditPredictionButton {
set_completion_provider(fs.clone(), cx, provider);
})
}
EditPredictionProvider::None => continue,
EditPredictionProvider::None | EditPredictionProvider::Experimental(_) => continue,
};
}
}
@@ -832,6 +849,16 @@ impl EditPredictionButton {
window: &mut Window,
cx: &mut Context<Self>,
) -> Entity<ContextMenu> {
let all_language_settings = all_language_settings(None, cx);
let copilot_config = copilot::copilot_chat::CopilotChatConfiguration {
enterprise_uri: all_language_settings
.edit_predictions
.copilot
.enterprise_uri
.clone(),
};
let settings_url = copilot_settings_url(copilot_config.enterprise_uri.as_deref());
ContextMenu::build(window, cx, |menu, window, cx| {
let menu = self.build_language_settings_menu(menu, window, cx);
let menu =
@@ -840,10 +867,7 @@ impl EditPredictionButton {
menu.separator()
.link(
"Go to Copilot Settings",
OpenBrowser {
url: COPILOT_SETTINGS_URL.to_string(),
}
.boxed_clone(),
OpenBrowser { url: settings_url }.boxed_clone(),
)
.action("Sign Out", copilot::SignOut.boxed_clone())
})
@@ -1172,3 +1196,99 @@ fn toggle_edit_prediction_mode(fs: Arc<dyn Fs>, mode: EditPredictionsMode, cx: &
});
}
}
fn copilot_settings_url(enterprise_uri: Option<&str>) -> String {
match enterprise_uri {
Some(uri) => {
format!("{}{}", uri.trim_end_matches('/'), COPILOT_SETTINGS_PATH)
}
None => COPILOT_SETTINGS_URL.to_string(),
}
}
#[cfg(test)]
mod tests {
use super::*;
use gpui::TestAppContext;
#[gpui::test]
async fn test_copilot_settings_url_with_enterprise_uri(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
});
cx.update_global(|settings_store: &mut SettingsStore, cx| {
settings_store
.set_user_settings(
r#"{"edit_predictions":{"copilot":{"enterprise_uri":"https://my-company.ghe.com"}}}"#,
cx,
)
.unwrap();
});
let url = cx.update(|cx| {
let all_language_settings = all_language_settings(None, cx);
copilot_settings_url(
all_language_settings
.edit_predictions
.copilot
.enterprise_uri
.as_deref(),
)
});
assert_eq!(url, "https://my-company.ghe.com/settings/copilot");
}
#[gpui::test]
async fn test_copilot_settings_url_with_enterprise_uri_trailing_slash(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
});
cx.update_global(|settings_store: &mut SettingsStore, cx| {
settings_store
.set_user_settings(
r#"{"edit_predictions":{"copilot":{"enterprise_uri":"https://my-company.ghe.com/"}}}"#,
cx,
)
.unwrap();
});
let url = cx.update(|cx| {
let all_language_settings = all_language_settings(None, cx);
copilot_settings_url(
all_language_settings
.edit_predictions
.copilot
.enterprise_uri
.as_deref(),
)
});
assert_eq!(url, "https://my-company.ghe.com/settings/copilot");
}
#[gpui::test]
async fn test_copilot_settings_url_without_enterprise_uri(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
});
let url = cx.update(|cx| {
let all_language_settings = all_language_settings(None, cx);
copilot_settings_url(
all_language_settings
.edit_predictions
.copilot
.enterprise_uri
.as_deref(),
)
});
assert_eq!(url, "https://github.com/settings/copilot");
}
}

View File

@@ -305,6 +305,8 @@ impl CompletionBuilder {
icon_path: None,
insert_text_mode: None,
confirm: None,
match_start: None,
snippet_deduplication_key: None,
}
}
}

View File

@@ -17,7 +17,6 @@ use project::{CompletionDisplayOptions, CompletionSource};
use task::DebugScenario;
use task::TaskContext;
use std::collections::VecDeque;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::{
@@ -36,12 +35,13 @@ use util::ResultExt;
use crate::hover_popover::{hover_markdown_style, open_markdown_url};
use crate::{
CodeActionProvider, CompletionId, CompletionItemKind, CompletionProvider, DisplayRow, Editor,
EditorStyle, ResolvedTasks,
CodeActionProvider, CompletionId, CompletionProvider, DisplayRow, Editor, EditorStyle,
ResolvedTasks,
actions::{ConfirmCodeAction, ConfirmCompletion},
split_words, styled_runs_for_code_label,
};
use crate::{CodeActionSource, EditorSettings};
use collections::{HashSet, VecDeque};
use settings::{Settings, SnippetSortOrder};
pub const MENU_GAP: Pixels = px(4.);
@@ -220,7 +220,9 @@ pub struct CompletionsMenu {
pub is_incomplete: bool,
pub buffer: Entity<Buffer>,
pub completions: Rc<RefCell<Box<[Completion]>>>,
match_candidates: Arc<[StringMatchCandidate]>,
/// String match candidate for each completion, grouped by `match_start`.
match_candidates: Arc<[(Option<text::Anchor>, Vec<StringMatchCandidate>)]>,
/// Entries displayed in the menu, which is a filtered and sorted subset of `match_candidates`.
pub entries: Rc<RefCell<Box<[StringMatch]>>>,
pub selected_item: usize,
filter_task: Task<()>,
@@ -308,6 +310,8 @@ impl CompletionsMenu {
.iter()
.enumerate()
.map(|(id, completion)| StringMatchCandidate::new(id, completion.label.filter_text()))
.into_group_map_by(|candidate| completions[candidate.id].match_start)
.into_iter()
.collect();
let completions_menu = Self {
@@ -355,6 +359,8 @@ impl CompletionsMenu {
replace_range: selection.start.text_anchor..selection.end.text_anchor,
new_text: choice.to_string(),
label: CodeLabel::plain(choice.to_string(), None),
match_start: None,
snippet_deduplication_key: None,
icon_path: None,
documentation: None,
confirm: None,
@@ -363,11 +369,14 @@ impl CompletionsMenu {
})
.collect();
let match_candidates = choices
.iter()
.enumerate()
.map(|(id, completion)| StringMatchCandidate::new(id, completion))
.collect();
let match_candidates = Arc::new([(
None,
choices
.iter()
.enumerate()
.map(|(id, completion)| StringMatchCandidate::new(id, completion))
.collect(),
)]);
let entries = choices
.iter()
.enumerate()
@@ -497,7 +506,7 @@ impl CompletionsMenu {
cx: &mut Context<Editor>,
) {
self.scroll_handle
.scroll_to_item(self.selected_item, ScrollStrategy::Top);
.scroll_to_item(self.selected_item, ScrollStrategy::Nearest);
if let Some(provider) = provider {
let entries = self.entries.borrow();
let entry = if self.selected_item < entries.len() {
@@ -948,7 +957,7 @@ impl CompletionsMenu {
}
let mat = &self.entries.borrow()[self.selected_item];
let completions = self.completions.borrow_mut();
let completions = self.completions.borrow();
let multiline_docs = match completions[mat.candidate_id].documentation.as_ref() {
Some(CompletionDocumentation::MultiLinePlainText(text)) => div().child(text.clone()),
Some(CompletionDocumentation::SingleLineAndMultiLinePlainText {
@@ -1026,57 +1035,74 @@ impl CompletionsMenu {
pub fn filter(
&mut self,
query: Option<Arc<String>>,
query: Arc<String>,
query_end: text::Anchor,
buffer: &Entity<Buffer>,
provider: Option<Rc<dyn CompletionProvider>>,
window: &mut Window,
cx: &mut Context<Editor>,
) {
self.cancel_filter.store(true, Ordering::Relaxed);
if let Some(query) = query {
self.cancel_filter = Arc::new(AtomicBool::new(false));
let matches = self.do_async_filtering(query, cx);
let id = self.id;
self.filter_task = cx.spawn_in(window, async move |editor, cx| {
let matches = matches.await;
editor
.update_in(cx, |editor, window, cx| {
editor.with_completions_menu_matching_id(id, |this| {
if let Some(this) = this {
this.set_filter_results(matches, provider, window, cx);
}
});
})
.ok();
});
} else {
self.filter_task = Task::ready(());
let matches = self.unfiltered_matches();
self.set_filter_results(matches, provider, window, cx);
}
self.cancel_filter = Arc::new(AtomicBool::new(false));
let matches = self.do_async_filtering(query, query_end, buffer, cx);
let id = self.id;
self.filter_task = cx.spawn_in(window, async move |editor, cx| {
let matches = matches.await;
editor
.update_in(cx, |editor, window, cx| {
editor.with_completions_menu_matching_id(id, |this| {
if let Some(this) = this {
this.set_filter_results(matches, provider, window, cx);
}
});
})
.ok();
});
}
pub fn do_async_filtering(
&self,
query: Arc<String>,
query_end: text::Anchor,
buffer: &Entity<Buffer>,
cx: &Context<Editor>,
) -> Task<Vec<StringMatch>> {
let matches_task = cx.background_spawn({
let query = query.clone();
let match_candidates = self.match_candidates.clone();
let cancel_filter = self.cancel_filter.clone();
let background_executor = cx.background_executor().clone();
async move {
fuzzy::match_strings(
&match_candidates,
&query,
query.chars().any(|c| c.is_uppercase()),
false,
1000,
&cancel_filter,
background_executor,
)
.await
let buffer_snapshot = buffer.read(cx).snapshot();
let background_executor = cx.background_executor().clone();
let match_candidates = self.match_candidates.clone();
let cancel_filter = self.cancel_filter.clone();
let default_query = query.clone();
let matches_task = cx.background_spawn(async move {
let queries_and_candidates = match_candidates
.iter()
.map(|(query_start, candidates)| {
let query_for_batch = match query_start {
Some(start) => {
Arc::new(buffer_snapshot.text_for_range(*start..query_end).collect())
}
None => default_query.clone(),
};
(query_for_batch, candidates)
})
.collect_vec();
let mut results = vec![];
for (query, match_candidates) in queries_and_candidates {
results.extend(
fuzzy::match_strings(
&match_candidates,
&query,
query.chars().any(|c| c.is_uppercase()),
false,
1000,
&cancel_filter,
background_executor.clone(),
)
.await,
);
}
results
});
let completions = self.completions.clone();
@@ -1085,45 +1111,31 @@ impl CompletionsMenu {
cx.foreground_executor().spawn(async move {
let mut matches = matches_task.await;
let completions_ref = completions.borrow();
if sort_completions {
matches = Self::sort_string_matches(
matches,
Some(&query),
Some(&query), // used for non-snippets only
snippet_sort_order,
completions.borrow().as_ref(),
&completions_ref,
);
}
// Remove duplicate snippet prefixes (e.g., "cool code" will match
// the text "c c" in two places; we should only show the longer one)
let mut snippets_seen = HashSet::<(usize, usize)>::default();
matches.retain(|result| {
match completions_ref[result.candidate_id].snippet_deduplication_key {
Some(key) => snippets_seen.insert(key),
None => true,
}
});
matches
})
}
/// Like `do_async_filtering` but there is no filter query, so no need to spawn tasks.
pub fn unfiltered_matches(&self) -> Vec<StringMatch> {
let mut matches = self
.match_candidates
.iter()
.enumerate()
.map(|(candidate_id, candidate)| StringMatch {
candidate_id,
score: Default::default(),
positions: Default::default(),
string: candidate.string.clone(),
})
.collect();
if self.sort_completions {
matches = Self::sort_string_matches(
matches,
None,
self.snippet_sort_order,
self.completions.borrow().as_ref(),
);
}
matches
}
pub fn set_filter_results(
&mut self,
matches: Vec<StringMatch>,
@@ -1166,28 +1178,13 @@ impl CompletionsMenu {
.and_then(|c| c.to_lowercase().next());
if snippet_sort_order == SnippetSortOrder::None {
matches.retain(|string_match| {
let completion = &completions[string_match.candidate_id];
let is_snippet = matches!(
&completion.source,
CompletionSource::Lsp { lsp_completion, .. }
if lsp_completion.kind == Some(CompletionItemKind::SNIPPET)
);
!is_snippet
});
matches
.retain(|string_match| !completions[string_match.candidate_id].is_snippet_kind());
}
matches.sort_unstable_by_key(|string_match| {
let completion = &completions[string_match.candidate_id];
let is_snippet = matches!(
&completion.source,
CompletionSource::Lsp { lsp_completion, .. }
if lsp_completion.kind == Some(CompletionItemKind::SNIPPET)
);
let sort_text = match &completion.source {
CompletionSource::Lsp { lsp_completion, .. } => lsp_completion.sort_text.as_deref(),
CompletionSource::Dap { sort_text } => Some(sort_text.as_str()),
@@ -1199,14 +1196,17 @@ impl CompletionsMenu {
let score = string_match.score;
let sort_score = Reverse(OrderedFloat(score));
let query_start_doesnt_match_split_words = query_start_lower
.map(|query_char| {
!split_words(&string_match.string).any(|word| {
word.chars().next().and_then(|c| c.to_lowercase().next())
== Some(query_char)
// Snippets do their own first-letter matching logic elsewhere.
let is_snippet = completion.is_snippet_kind();
let query_start_doesnt_match_split_words = !is_snippet
&& query_start_lower
.map(|query_char| {
!split_words(&string_match.string).any(|word| {
word.chars().next().and_then(|c| c.to_lowercase().next())
== Some(query_char)
})
})
})
.unwrap_or(false);
.unwrap_or(false);
if query_start_doesnt_match_split_words {
MatchTier::OtherMatch { sort_score }
@@ -1218,6 +1218,7 @@ impl CompletionsMenu {
SnippetSortOrder::None => Reverse(0),
};
let sort_positions = string_match.positions.clone();
// This exact matching won't work for multi-word snippets, but it's fine
let sort_exact = Reverse(if Some(completion.label.filter_text()) == query {
1
} else {

View File
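The duplicate-snippet filter introduced above keeps only the first match per `snippet_deduplication_key`, relying on `HashSet::insert` returning `false` for repeats inside a `retain`/`filter` pass. A self-contained sketch of that pattern (the `dedup_by_key` name and tuple input are illustrative):

```rust
use std::collections::HashSet;

// Keep only the first item per deduplication key; items without a key
// always survive, mirroring the snippets_seen retain pass in the diff.
fn dedup_by_key(items: Vec<(Option<(usize, usize)>, &str)>) -> Vec<&str> {
    let mut seen = HashSet::new();
    items
        .into_iter()
        .filter(|(key, _)| match key {
            // insert returns false if the key was already present
            Some(key) => seen.insert(*key),
            None => true,
        })
        .map(|(_, text)| text)
        .collect()
}

fn main() {
    // "cool code" matches the query "c c" twice; only the first survives.
    let items = vec![
        (Some((1, 2)), "cool code"),
        (Some((1, 2)), "c c"),
        (None, "plain completion"),
    ];
    assert_eq!(dedup_by_key(items), vec!["cool code", "plain completion"]);
}
```

Because matches are sorted before this pass runs, "first" means "best-ranked", so the longer or higher-scoring variant of a duplicated snippet is the one that is kept.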

@@ -1097,7 +1097,7 @@ impl DisplaySnapshot {
details: &TextLayoutDetails,
) -> u32 {
let layout_line = self.layout_row(display_row, details);
layout_line.index_for_x(x) as u32
layout_line.closest_index_for_x(x) as u32
}
pub fn grapheme_at(&self, mut point: DisplayPoint) -> Option<SharedString> {

View File

@@ -248,10 +248,8 @@ impl<'a> Iterator for InlayChunks<'a> {
// Determine split index handling edge cases
let split_index = if desired_bytes >= chunk.text.len() {
chunk.text.len()
} else if chunk.text.is_char_boundary(desired_bytes) {
desired_bytes
} else {
find_next_utf8_boundary(chunk.text, desired_bytes)
chunk.text.ceil_char_boundary(desired_bytes)
};
let (prefix, suffix) = chunk.text.split_at(split_index);
@@ -373,10 +371,8 @@ impl<'a> Iterator for InlayChunks<'a> {
.next()
.map(|c| c.len_utf8())
.unwrap_or(1)
} else if inlay_chunk.is_char_boundary(next_inlay_highlight_endpoint) {
next_inlay_highlight_endpoint
} else {
find_next_utf8_boundary(inlay_chunk, next_inlay_highlight_endpoint)
inlay_chunk.ceil_char_boundary(next_inlay_highlight_endpoint)
};
let (chunk, remainder) = inlay_chunk.split_at(split_index);
@@ -1146,31 +1142,6 @@ fn push_isomorphic(sum_tree: &mut SumTree<Transform>, summary: TextSummary) {
}
}
/// Given a byte index that is NOT a UTF-8 boundary, find the next one.
/// Assumes: 0 < byte_index < text.len() and !text.is_char_boundary(byte_index)
#[inline(always)]
fn find_next_utf8_boundary(text: &str, byte_index: usize) -> usize {
let bytes = text.as_bytes();
let mut idx = byte_index + 1;
// Scan forward until we find a boundary
while idx < text.len() {
if is_utf8_char_boundary(bytes[idx]) {
return idx;
}
idx += 1;
}
// Hit the end, return the full length
text.len()
}
// Private helper function taken from Rust's core::num module (which is both Apache2 and MIT licensed)
const fn is_utf8_char_boundary(byte: u8) -> bool {
// This is bit magic equivalent to: b < 128 || b >= 192
(byte as i8) >= -0x40
}
#[cfg(test)]
mod tests {
use super::*;

View File
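The hunk above deletes the hand-rolled `find_next_utf8_boundary` scan in favor of `str::ceil_char_boundary`. The snapping behavior can be reproduced on stable with `is_char_boundary` (this `ceil_boundary` helper is an illustrative reimplementation; note the deleted helper started scanning at `byte_index + 1` because it assumed the input was not already a boundary, whereas this version, like `ceil_char_boundary`, returns an index that is already valid):

```rust
// Snap a byte index forward to the nearest UTF-8 character boundary,
// clamping to the string length, like str::ceil_char_boundary.
fn ceil_boundary(text: &str, mut index: usize) -> usize {
    while index < text.len() && !text.is_char_boundary(index) {
        index += 1;
    }
    index.min(text.len())
}

fn main() {
    let text = "a€b"; // '€' occupies bytes 1..4
    assert_eq!(ceil_boundary(text, 2), 4); // inside '€': snap forward
    assert_eq!(ceil_boundary(text, 1), 1); // already a boundary: unchanged
    assert_eq!(ceil_boundary(text, 9), 5); // past the end: clamp to len
}
```

Splitting at the returned index is always safe, which is what the chunking code needs before calling `split_at`.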

@@ -117,8 +117,9 @@ use language::{
AutoindentMode, BlockCommentConfig, BracketMatch, BracketPair, Buffer, BufferRow,
BufferSnapshot, Capability, CharClassifier, CharKind, CharScopeContext, CodeLabel, CursorShape,
DiagnosticEntryRef, DiffOptions, EditPredictionsMode, EditPreview, HighlightedText, IndentKind,
IndentSize, Language, OffsetRangeExt, OutlineItem, Point, Runnable, RunnableRange, Selection,
SelectionGoal, TextObject, TransactionId, TreeSitterOptions, WordsQuery,
IndentSize, Language, LanguageRegistry, OffsetRangeExt, OutlineItem, Point, Runnable,
RunnableRange, Selection, SelectionGoal, TextObject, TransactionId, TreeSitterOptions,
WordsQuery,
language_settings::{
self, LspInsertMode, RewrapBehavior, WordsCompletionMode, all_language_settings,
language_settings,
@@ -371,6 +372,7 @@ pub trait DiagnosticRenderer {
buffer_id: BufferId,
snapshot: EditorSnapshot,
editor: WeakEntity<Editor>,
language_registry: Option<Arc<LanguageRegistry>>,
cx: &mut App,
) -> Vec<BlockProperties<Anchor>>;
@@ -379,6 +381,7 @@ pub trait DiagnosticRenderer {
diagnostic_group: Vec<DiagnosticEntryRef<'_, Point>>,
range: Range<Point>,
buffer_id: BufferId,
language_registry: Option<Arc<LanguageRegistry>>,
cx: &mut App,
) -> Option<Entity<markdown::Markdown>>;
@@ -1096,6 +1099,7 @@ pub struct Editor {
searchable: bool,
cursor_shape: CursorShape,
current_line_highlight: Option<CurrentLineHighlight>,
collapse_matches: bool,
autoindent_mode: Option<AutoindentMode>,
workspace: Option<(WeakEntity<Workspace>, Option<WorkspaceId>)>,
input_enabled: bool,
@@ -1297,8 +1301,9 @@ struct SelectionHistoryEntry {
add_selections_state: Option<AddSelectionsState>,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
#[derive(Copy, Clone, Default, Debug, PartialEq, Eq)]
enum SelectionHistoryMode {
#[default]
Normal,
Undoing,
Redoing,
@@ -1311,12 +1316,6 @@ struct HoveredCursor {
selection_id: usize,
}
impl Default for SelectionHistoryMode {
fn default() -> Self {
Self::Normal
}
}
#[derive(Debug)]
/// SelectionEffects controls the side-effects of updating the selection.
///
@@ -2213,7 +2212,7 @@ impl Editor {
.unwrap_or_default(),
current_line_highlight: None,
autoindent_mode: Some(AutoindentMode::EachLine),
collapse_matches: false,
workspace: None,
input_enabled: !is_minimap,
use_modal_editing: full_mode,
@@ -2386,7 +2385,10 @@ impl Editor {
}
}
EditorEvent::Edited { .. } => {
if vim_flavor(cx).is_none() {
let vim_mode = vim_mode_setting::VimModeSetting::try_get(cx)
.map(|vim_mode| vim_mode.0)
.unwrap_or(false);
if !vim_mode {
let display_map = editor.display_snapshot(cx);
let selections = editor.selections.all_adjusted_display(&display_map);
let pop_state = editor
@@ -2633,6 +2635,10 @@ impl Editor {
key_context.add("end_of_input");
}
if self.has_any_expanded_diff_hunks(cx) {
key_context.add("diffs_expanded");
}
key_context
}
@@ -3011,17 +3017,21 @@ impl Editor {
self.current_line_highlight = current_line_highlight;
}
pub fn range_for_match<T: std::marker::Copy>(
&self,
range: &Range<T>,
collapse: bool,
) -> Range<T> {
if collapse {
pub fn set_collapse_matches(&mut self, collapse_matches: bool) {
self.collapse_matches = collapse_matches;
}
pub fn range_for_match<T: std::marker::Copy>(&self, range: &Range<T>) -> Range<T> {
if self.collapse_matches {
return range.start..range.start;
}
range.clone()
}
pub fn clip_at_line_ends(&mut self, cx: &mut Context<Self>) -> bool {
self.display_map.read(cx).clip_at_line_ends
}
pub fn set_clip_at_line_ends(&mut self, clip: bool, cx: &mut Context<Self>) {
if self.display_map.read(cx).clip_at_line_ends != clip {
self.display_map
@@ -5517,7 +5527,14 @@ impl Editor {
if let Some(CodeContextMenu::Completions(menu)) = self.context_menu.borrow_mut().as_mut() {
if filter_completions {
menu.filter(query.clone(), provider.clone(), window, cx);
menu.filter(
query.clone().unwrap_or_default(),
buffer_position.text_anchor,
&buffer,
provider.clone(),
window,
cx,
);
}
// When `is_incomplete` is false, no need to re-query completions when the current query
// is a suffix of the initial query.
@@ -5526,7 +5543,7 @@ impl Editor {
// If the new query is a suffix of the old query (typing more characters) and
// the previous result was complete, the existing completions can be filtered.
//
// Note that this is always true for snippet completions.
// Note that snippet completions are always complete.
let query_matches = match (&menu.initial_query, &query) {
(Some(initial_query), Some(query)) => query.starts_with(initial_query.as_ref()),
(None, _) => true,
@@ -5656,12 +5673,15 @@ impl Editor {
};
let mut words = if load_word_completions {
cx.background_spawn(async move {
buffer_snapshot.words_in_range(WordsQuery {
fuzzy_contents: None,
range: word_search_range,
skip_digits,
})
cx.background_spawn({
let buffer_snapshot = buffer_snapshot.clone();
async move {
buffer_snapshot.words_in_range(WordsQuery {
fuzzy_contents: None,
range: word_search_range,
skip_digits,
})
}
})
} else {
Task::ready(BTreeMap::default())
@@ -5671,8 +5691,11 @@ impl Editor {
&& provider.show_snippets()
&& let Some(project) = self.project()
{
let char_classifier = buffer_snapshot
.char_classifier_at(buffer_position)
.scope_context(Some(CharScopeContext::Completion));
project.update(cx, |project, cx| {
snippet_completions(project, &buffer, buffer_position, cx)
snippet_completions(project, &buffer, buffer_position, char_classifier, cx)
})
} else {
Task::ready(Ok(CompletionResponse {
@@ -5727,6 +5750,8 @@ impl Editor {
replace_range: word_replace_range.clone(),
new_text: word.clone(),
label: CodeLabel::plain(word, None),
match_start: None,
snippet_deduplication_key: None,
icon_path: None,
documentation: None,
source: CompletionSource::BufferWord {
@@ -5775,11 +5800,12 @@ impl Editor {
);
let query = if filter_completions { query } else { None };
let matches_task = if let Some(query) = query {
menu.do_async_filtering(query, cx)
} else {
Task::ready(menu.unfiltered_matches())
};
let matches_task = menu.do_async_filtering(
query.unwrap_or_default(),
buffer_position,
&buffer,
cx,
);
(menu, matches_task)
}) else {
return;
@@ -5796,7 +5822,7 @@ impl Editor {
return;
};
// Only valid to take prev_menu because it the new menu is immediately set
// Only valid to take prev_menu because either the new menu is immediately set
// below, or the menu is hidden.
if let Some(CodeContextMenu::Completions(prev_menu)) =
editor.context_menu.borrow_mut().take()
@@ -7851,6 +7877,10 @@ impl Editor {
self.edit_prediction_preview,
EditPredictionPreview::Inactive { .. }
) {
if let Some(provider) = self.edit_prediction_provider.as_ref() {
provider.provider.did_show(cx)
}
self.edit_prediction_preview = EditPredictionPreview::Active {
previous_scroll_position: None,
since: Instant::now(),
@@ -8030,6 +8060,9 @@ impl Editor {
&& !self.edit_predictions_hidden_for_vim_mode;
if show_completions_in_buffer {
if let Some(provider) = &self.edit_prediction_provider {
provider.provider.did_show(cx);
}
if edits
.iter()
.all(|(range, _)| range.to_offset(&multibuffer).is_empty())
@@ -11044,6 +11077,10 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Self>,
) {
if self.breakpoint_store.is_none() {
return;
}
for (anchor, breakpoint) in self.breakpoints_at_cursors(window, cx) {
let breakpoint = breakpoint.unwrap_or_else(|| Breakpoint {
message: None,
@@ -11103,6 +11140,10 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Self>,
) {
if self.breakpoint_store.is_none() {
return;
}
for (anchor, breakpoint) in self.breakpoints_at_cursors(window, cx) {
let Some(breakpoint) = breakpoint.filter(|breakpoint| breakpoint.is_disabled()) else {
continue;
@@ -11122,6 +11163,10 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Self>,
) {
if self.breakpoint_store.is_none() {
return;
}
for (anchor, breakpoint) in self.breakpoints_at_cursors(window, cx) {
let Some(breakpoint) = breakpoint.filter(|breakpoint| breakpoint.is_enabled()) else {
continue;
@@ -11141,6 +11186,10 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Self>,
) {
if self.breakpoint_store.is_none() {
return;
}
for (anchor, breakpoint) in self.breakpoints_at_cursors(window, cx) {
if let Some(breakpoint) = breakpoint {
self.edit_breakpoint_at_anchor(
@@ -12548,6 +12597,7 @@ impl Editor {
{
let max_point = buffer.max_point();
let mut is_first = true;
let mut prev_selection_was_entire_line = false;
for selection in &mut selections {
let is_entire_line =
(selection.is_empty() && cut_no_selection_line) || self.selections.line_mode();
@@ -12562,9 +12612,10 @@ impl Editor {
}
if is_first {
is_first = false;
} else {
} else if !prev_selection_was_entire_line {
text += "\n";
}
prev_selection_was_entire_line = is_entire_line;
let mut len = 0;
for chunk in buffer.text_for_range(selection.start..selection.end) {
text.push_str(chunk);
@@ -12647,6 +12698,7 @@ impl Editor {
{
let max_point = buffer.max_point();
let mut is_first = true;
let mut prev_selection_was_entire_line = false;
for selection in &selections {
let mut start = selection.start;
let mut end = selection.end;
@@ -12705,9 +12757,10 @@ impl Editor {
for trimmed_range in trimmed_selections {
if is_first {
is_first = false;
} else {
} else if !prev_selection_was_entire_line {
text += "\n";
}
prev_selection_was_entire_line = is_entire_line;
let mut len = 0;
for chunk in buffer.text_for_range(trimmed_range.start..trimmed_range.end) {
text.push_str(chunk);
@@ -12781,7 +12834,11 @@ impl Editor {
let end_offset = start_offset + clipboard_selection.len;
to_insert = &clipboard_text[start_offset..end_offset];
entire_line = clipboard_selection.is_entire_line;
start_offset = end_offset + 1;
start_offset = if entire_line {
end_offset
} else {
end_offset + 1
};
original_indent_column = Some(clipboard_selection.first_line_indent);
} else {
to_insert = &*clipboard_text;
@@ -16872,7 +16929,7 @@ impl Editor {
editor.update_in(cx, |editor, window, cx| {
let range = target_range.to_point(target_buffer.read(cx));
let range = editor.range_for_match(&range, false);
let range = editor.range_for_match(&range);
let range = collapse_multiline_range(range);
if !split
@@ -17915,8 +17972,18 @@ impl Editor {
.diagnostic_group(buffer_id, diagnostic.diagnostic.group_id)
.collect::<Vec<_>>();
let blocks =
renderer.render_group(diagnostic_group, buffer_id, snapshot, cx.weak_entity(), cx);
let language_registry = self
.project()
.map(|project| project.read(cx).languages().clone());
let blocks = renderer.render_group(
diagnostic_group,
buffer_id,
snapshot,
cx.weak_entity(),
language_registry,
cx,
);
let blocks = self.display_map.update(cx, |display_map, cx| {
display_map.insert_blocks(blocks, cx).into_iter().collect()
@@ -19281,6 +19348,16 @@ impl Editor {
})
}
fn has_any_expanded_diff_hunks(&self, cx: &App) -> bool {
if self.buffer.read(cx).all_diff_hunks_expanded() {
return true;
}
let ranges = vec![Anchor::min()..Anchor::max()];
self.buffer
.read(cx)
.has_expanded_diff_hunks_in_ranges(&ranges, cx)
}
fn toggle_diff_hunks_in_ranges(
&mut self,
ranges: Vec<Range<Anchor>>,
@@ -21692,7 +21769,9 @@ impl Editor {
.and_then(|e| e.to_str())
.map(|a| a.to_string()));
let vim_mode = vim_flavor(cx).is_some();
let vim_mode = vim_mode_setting::VimModeSetting::try_get(cx)
.map(|vim_mode| vim_mode.0)
.unwrap_or(false);
let edit_predictions_provider = all_language_settings(file, cx).edit_predictions.provider;
let copilot_enabled = edit_predictions_provider
@@ -22327,28 +22406,6 @@ fn edit_for_markdown_paste<'a>(
(range, new_text)
}
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
pub enum VimFlavor {
Vim,
Helix,
}
pub fn vim_flavor(cx: &App) -> Option<VimFlavor> {
if vim_mode_setting::HelixModeSetting::try_get(cx)
.map(|helix_mode| helix_mode.0)
.unwrap_or(false)
{
Some(VimFlavor::Helix)
} else if vim_mode_setting::VimModeSetting::try_get(cx)
.map(|vim_mode| vim_mode.0)
.unwrap_or(false)
{
Some(VimFlavor::Vim)
} else {
None // neither vim nor helix mode
}
}
fn process_completion_for_edit(
completion: &Completion,
intent: CompletionIntent,
@@ -23201,10 +23258,11 @@ impl CodeActionProvider for Entity<Project> {
fn snippet_completions(
project: &Project,
buffer: &Entity<Buffer>,
buffer_position: text::Anchor,
buffer_anchor: text::Anchor,
classifier: CharClassifier,
cx: &mut App,
) -> Task<Result<CompletionResponse>> {
let languages = buffer.read(cx).languages_at(buffer_position);
let languages = buffer.read(cx).languages_at(buffer_anchor);
let snippet_store = project.snippets().read(cx);
let scopes: Vec<_> = languages
@@ -23233,97 +23291,146 @@ fn snippet_completions(
let executor = cx.background_executor().clone();
cx.background_spawn(async move {
let is_word_char = |c| classifier.is_word(c);
let mut is_incomplete = false;
let mut completions: Vec<Completion> = Vec::new();
for (scope, snippets) in scopes.into_iter() {
let classifier =
CharClassifier::new(Some(scope)).scope_context(Some(CharScopeContext::Completion));
const MAX_WORD_PREFIX_LEN: usize = 128;
let last_word: String = snapshot
.reversed_chars_for_range(text::Anchor::MIN..buffer_position)
.take(MAX_WORD_PREFIX_LEN)
.take_while(|c| classifier.is_word(*c))
.collect::<String>()
.chars()
.rev()
.collect();
const MAX_PREFIX_LEN: usize = 128;
let buffer_offset = text::ToOffset::to_offset(&buffer_anchor, &snapshot);
let window_start = buffer_offset.saturating_sub(MAX_PREFIX_LEN);
let window_start = snapshot.clip_offset(window_start, Bias::Left);
if last_word.is_empty() {
return Ok(CompletionResponse {
completions: vec![],
display_options: CompletionDisplayOptions::default(),
is_incomplete: true,
});
let max_buffer_window: String = snapshot
.text_for_range(window_start..buffer_offset)
.collect();
if max_buffer_window.is_empty() {
return Ok(CompletionResponse {
completions: vec![],
display_options: CompletionDisplayOptions::default(),
is_incomplete: true,
});
}
for (_scope, snippets) in scopes.into_iter() {
// Sort snippets by word count to match longer snippet prefixes first.
let mut sorted_snippet_candidates = snippets
.iter()
.enumerate()
.flat_map(|(snippet_ix, snippet)| {
snippet
.prefix
.iter()
.enumerate()
.map(move |(prefix_ix, prefix)| {
let word_count =
snippet_candidate_suffixes(prefix, is_word_char).count();
((snippet_ix, prefix_ix), prefix, word_count)
})
})
.collect_vec();
sorted_snippet_candidates
.sort_unstable_by_key(|(_, _, word_count)| Reverse(*word_count));
// Each prefix may be matched multiple times; the completion menu must filter out duplicates.
let buffer_windows = snippet_candidate_suffixes(&max_buffer_window, is_word_char)
.take(
sorted_snippet_candidates
.first()
.map(|(_, _, word_count)| *word_count)
.unwrap_or_default(),
)
.collect_vec();
const MAX_RESULTS: usize = 100;
// Each match also remembers how many characters from the buffer it consumed
let mut matches: Vec<(StringMatch, usize)> = vec![];
let mut snippet_list_cutoff_index = 0;
for (buffer_index, buffer_window) in buffer_windows.iter().enumerate().rev() {
let word_count = buffer_index + 1;
// Increase `snippet_list_cutoff_index` until we have all of the
// snippets with sufficiently many words.
while sorted_snippet_candidates
.get(snippet_list_cutoff_index)
.is_some_and(|(_ix, _prefix, snippet_word_count)| {
*snippet_word_count >= word_count
})
{
snippet_list_cutoff_index += 1;
}
// Take only the candidates with at least `word_count` many words
let snippet_candidates_at_word_len =
&sorted_snippet_candidates[..snippet_list_cutoff_index];
let candidates = snippet_candidates_at_word_len
.iter()
.map(|(_snippet_ix, prefix, _snippet_word_count)| prefix)
.enumerate() // index in `sorted_snippet_candidates`
// First char must match
.filter(|(_ix, prefix)| {
itertools::equal(
prefix
.chars()
.next()
.into_iter()
.flat_map(|c| c.to_lowercase()),
buffer_window
.chars()
.next()
.into_iter()
.flat_map(|c| c.to_lowercase()),
)
})
.map(|(ix, prefix)| StringMatchCandidate::new(ix, prefix))
.collect::<Vec<StringMatchCandidate>>();
matches.extend(
fuzzy::match_strings(
&candidates,
&buffer_window,
buffer_window.chars().any(|c| c.is_uppercase()),
true,
MAX_RESULTS - matches.len(), // always prioritize longer snippets
&Default::default(),
executor.clone(),
)
.await
.into_iter()
.map(|string_match| (string_match, buffer_window.len())),
);
if matches.len() >= MAX_RESULTS {
break;
}
}
let as_offset = text::ToOffset::to_offset(&buffer_position, &snapshot);
let to_lsp = |point: &text::Anchor| {
let end = text::ToPointUtf16::to_point_utf16(point, &snapshot);
point_to_lsp(end)
};
let lsp_end = to_lsp(&buffer_position);
let candidates = snippets
.iter()
.enumerate()
.flat_map(|(ix, snippet)| {
snippet
.prefix
.iter()
.map(move |prefix| StringMatchCandidate::new(ix, prefix))
})
.collect::<Vec<StringMatchCandidate>>();
const MAX_RESULTS: usize = 100;
let mut matches = fuzzy::match_strings(
&candidates,
&last_word,
last_word.chars().any(|c| c.is_uppercase()),
true,
MAX_RESULTS,
&Default::default(),
executor.clone(),
)
.await;
let lsp_end = to_lsp(&buffer_anchor);
if matches.len() >= MAX_RESULTS {
is_incomplete = true;
}
// Remove all candidates where the query's start does not match the start of any word in the candidate
if let Some(query_start) = last_word.chars().next() {
matches.retain(|string_match| {
split_words(&string_match.string).any(|word| {
// Check that the first codepoint of the word as lowercase matches the first
// codepoint of the query as lowercase
word.chars()
.flat_map(|codepoint| codepoint.to_lowercase())
.zip(query_start.to_lowercase())
.all(|(word_cp, query_cp)| word_cp == query_cp)
})
});
}
let matched_strings = matches
.into_iter()
.map(|m| m.string)
.collect::<HashSet<_>>();
completions.extend(snippets.iter().filter_map(|snippet| {
let matching_prefix = snippet
.prefix
.iter()
.find(|prefix| matched_strings.contains(*prefix))?;
let start = as_offset - last_word.len();
completions.extend(matches.iter().map(|(string_match, buffer_window_len)| {
let ((snippet_index, prefix_index), matching_prefix, _snippet_word_count) =
sorted_snippet_candidates[string_match.candidate_id];
let snippet = &snippets[snippet_index];
let start = buffer_offset - buffer_window_len;
let start = snapshot.anchor_before(start);
let range = start..buffer_position;
let range = start..buffer_anchor;
let lsp_start = to_lsp(&start);
let lsp_range = lsp::Range {
start: lsp_start,
end: lsp_end,
};
Some(Completion {
Completion {
replace_range: range,
new_text: snippet.body.clone(),
source: CompletionSource::Lsp {
@@ -23353,7 +23460,11 @@ fn snippet_completions(
}),
lsp_defaults: None,
},
label: CodeLabel::plain(matching_prefix.clone(), None),
label: CodeLabel {
text: matching_prefix.clone(),
runs: Vec::new(),
filter_range: 0..matching_prefix.len(),
},
icon_path: None,
documentation: Some(CompletionDocumentation::SingleLineAndMultiLinePlainText {
single_line: snippet.name.clone().into(),
@@ -23364,8 +23475,10 @@ fn snippet_completions(
}),
insert_text_mode: None,
confirm: None,
})
}))
match_start: Some(start),
snippet_deduplication_key: Some((snippet_index, prefix_index)),
}
}));
}
Ok(CompletionResponse {
@@ -24611,6 +24724,33 @@ pub(crate) fn split_words(text: &str) -> impl std::iter::Iterator<Item = &str> +
})
}
/// Given the text immediately before the cursor, iterates over the possible
/// strings a snippet prefix could match against. More precisely: returns an
/// iterator over suffixes of `text` created by splitting at word boundaries
/// (before & after every non-word character).
///
/// Shorter suffixes are returned first.
pub(crate) fn snippet_candidate_suffixes(
text: &str,
is_word_char: impl Fn(char) -> bool,
) -> impl std::iter::Iterator<Item = &str> {
let mut prev_index = text.len();
let mut prev_codepoint = None;
text.char_indices()
.rev()
.chain([(0, '\0')])
.filter_map(move |(index, codepoint)| {
let prev_index = std::mem::replace(&mut prev_index, index);
let prev_codepoint = prev_codepoint.replace(codepoint)?;
if is_word_char(prev_codepoint) && is_word_char(codepoint) {
None
} else {
let chunk = &text[prev_index..]; // go to end of string
Some(chunk)
}
})
}
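The doc comment above describes the suffix-splitting strategy. A minimal standalone sketch of the same idea (a hypothetical re-implementation for illustration, not the crate's `snippet_candidate_suffixes` itself):

```rust
// Hypothetical standalone re-implementation of the suffix splitting described
// above: walk the text backwards and emit a suffix at every word boundary.
// A trailing sentinel '\0' forces the full string to be emitted last.
fn candidate_suffixes<'a>(text: &'a str, is_word: impl Fn(char) -> bool) -> Vec<&'a str> {
    let mut suffixes = Vec::new();
    let mut prev: Option<(usize, char)> = None;
    for (index, codepoint) in text.char_indices().rev().chain([(0, '\0')]) {
        if let Some((prev_index, prev_codepoint)) = prev {
            // A boundary sits anywhere except between two word characters.
            if !(is_word(prev_codepoint) && is_word(codepoint)) {
                suffixes.push(&text[prev_index..]);
            }
        }
        prev = Some((index, codepoint));
    }
    suffixes
}

fn main() {
    let is_word = |c: char| c.is_alphanumeric() || c == '_';
    // Shorter suffixes come first, mirroring the iterator above.
    assert_eq!(candidate_suffixes("a.s", is_word), vec!["s", ".s", "a.s"]);
    assert_eq!(candidate_suffixes("hello_world", is_word), vec!["hello_world"]);
    println!("{:?}", candidate_suffixes("a.s", is_word));
}
```

This matches the expectations exercised by `test_split_words_for_snippet_prefix`: a run of word characters yields a single suffix, while every non-word character introduces an additional, longer suffix.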
pub trait RangeToAnchorExt: Sized {
fn to_anchors(self, snapshot: &MultiBufferSnapshot) -> Range<Anchor>;


@@ -32,6 +32,7 @@ use language::{
tree_sitter_python,
};
use language_settings::Formatter;
use languages::markdown_lang;
use languages::rust_lang;
use lsp::CompletionParams;
use multi_buffer::{IndentGuide, PathKey};
@@ -11414,6 +11415,53 @@ async fn test_snippet_indentation(cx: &mut TestAppContext) {
ˇ"});
}
#[gpui::test]
async fn test_snippet_with_multi_word_prefix(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
cx.update_editor(|editor, _, cx| {
editor.project().unwrap().update(cx, |project, cx| {
project.snippets().update(cx, |snippets, _cx| {
let snippet = project::snippet_provider::Snippet {
prefix: vec!["multi word".to_string()],
body: "this is many words".to_string(),
description: Some("description".to_string()),
name: "multi-word snippet test".to_string(),
};
snippets.add_snippet_for_test(
None,
PathBuf::from("test_snippets.json"),
vec![Arc::new(snippet)],
);
});
})
});
for (input_to_simulate, should_match_snippet) in [
("m", true),
("m ", true),
("m w", true),
("aa m w", true),
("aa m g", false),
] {
cx.set_state("ˇ");
cx.simulate_input(input_to_simulate); // fails correctly
cx.update_editor(|editor, _, _| {
let Some(CodeContextMenu::Completions(context_menu)) = &*editor.context_menu.borrow()
else {
assert!(!should_match_snippet); // no completions! don't even show the menu
return;
};
assert!(context_menu.visible());
let completions = context_menu.completions.borrow();
assert_eq!(!completions.is_empty(), should_match_snippet);
});
}
}
#[gpui::test]
async fn test_document_format_during_save(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -17369,6 +17417,41 @@ fn test_split_words() {
assert_eq!(split(":do_the_thing"), &[":", "do_", "the_", "thing"]);
}
#[test]
fn test_split_words_for_snippet_prefix() {
fn split(text: &str) -> Vec<&str> {
snippet_candidate_suffixes(text, |c| c.is_alphanumeric() || c == '_').collect()
}
assert_eq!(split("HelloWorld"), &["HelloWorld"]);
assert_eq!(split("hello_world"), &["hello_world"]);
assert_eq!(split("_hello_world_"), &["_hello_world_"]);
assert_eq!(split("Hello_World"), &["Hello_World"]);
assert_eq!(split("helloWOrld"), &["helloWOrld"]);
assert_eq!(split("helloworld"), &["helloworld"]);
assert_eq!(
split("this@is!@#$^many . symbols"),
&[
"symbols",
" symbols",
". symbols",
" . symbols",
" . symbols",
" . symbols",
"many . symbols",
"^many . symbols",
"$^many . symbols",
"#$^many . symbols",
"@#$^many . symbols",
"!@#$^many . symbols",
"is!@#$^many . symbols",
"@is!@#$^many . symbols",
"this@is!@#$^many . symbols",
],
);
assert_eq!(split("a.s"), &["s", ".s", "a.s"]);
}
#[gpui::test]
async fn test_move_to_enclosing_bracket(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -17421,6 +17504,89 @@ async fn test_move_to_enclosing_bracket(cx: &mut TestAppContext) {
);
}
#[gpui::test]
async fn test_move_to_enclosing_bracket_in_markdown_code_block(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let language_registry = Arc::new(language::LanguageRegistry::test(cx.executor()));
language_registry.add(markdown_lang());
language_registry.add(rust_lang());
let buffer = cx.new(|cx| {
let mut buffer = language::Buffer::local(
indoc! {"
```rs
impl Worktree {
pub async fn open_buffers(&self, path: &Path) -> impl Iterator<&Buffer> {
}
}
```
"},
cx,
);
buffer.set_language_registry(language_registry.clone());
buffer.set_language(Some(markdown_lang()), cx);
buffer
});
let buffer = cx.new(|cx| MultiBuffer::singleton(buffer, cx));
let editor = cx.add_window(|window, cx| build_editor(buffer.clone(), window, cx));
cx.executor().run_until_parked();
_ = editor.update(cx, |editor, window, cx| {
// Case 1: Test outer enclosing brackets
select_ranges(
editor,
&indoc! {"
```rs
impl Worktree {
pub async fn open_buffers(&self, path: &Path) -> impl Iterator<&Buffer> {
}
```
"},
window,
cx,
);
editor.move_to_enclosing_bracket(&MoveToEnclosingBracket, window, cx);
assert_text_with_selections(
editor,
&indoc! {"
```rs
impl Worktree ˇ{
pub async fn open_buffers(&self, path: &Path) -> impl Iterator<&Buffer> {
}
}
```
"},
cx,
);
// Case 2: Test inner enclosing brackets
select_ranges(
editor,
&indoc! {"
```rs
impl Worktree {
pub async fn open_buffers(&self, path: &Path) -> impl Iterator<&Buffer> {
}
```
"},
window,
cx,
);
editor.move_to_enclosing_bracket(&MoveToEnclosingBracket, window, cx);
assert_text_with_selections(
editor,
&indoc! {"
```rs
impl Worktree {
pub async fn open_buffers(&self, path: &Path) -> impl Iterator<&Buffer> ˇ{
}
}
```
"},
cx,
);
});
}
#[gpui::test]
async fn test_on_type_formatting_not_triggered(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -25620,6 +25786,195 @@ pub fn check_displayed_completions(expected: Vec<&'static str>, cx: &mut EditorL
});
}
#[gpui::test]
async fn test_mixed_completions_with_multi_word_snippet(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorLspTestContext::new_rust(
lsp::ServerCapabilities {
completion_provider: Some(lsp::CompletionOptions {
..Default::default()
}),
..Default::default()
},
cx,
)
.await;
cx.lsp
.set_request_handler::<lsp::request::Completion, _, _>(move |_, _| async move {
Ok(Some(lsp::CompletionResponse::Array(vec![
lsp::CompletionItem {
label: "unsafe".into(),
text_edit: Some(lsp::CompletionTextEdit::Edit(lsp::TextEdit {
range: lsp::Range {
start: lsp::Position {
line: 0,
character: 9,
},
end: lsp::Position {
line: 0,
character: 11,
},
},
new_text: "unsafe".to_string(),
})),
insert_text_mode: Some(lsp::InsertTextMode::AS_IS),
..Default::default()
},
])))
});
cx.update_editor(|editor, _, cx| {
editor.project().unwrap().update(cx, |project, cx| {
project.snippets().update(cx, |snippets, _cx| {
snippets.add_snippet_for_test(
None,
PathBuf::from("test_snippets.json"),
vec![
Arc::new(project::snippet_provider::Snippet {
prefix: vec![
"unlimited word count".to_string(),
"unlimit word count".to_string(),
"unlimited unknown".to_string(),
],
body: "this is many words".to_string(),
description: Some("description".to_string()),
name: "multi-word snippet test".to_string(),
}),
Arc::new(project::snippet_provider::Snippet {
prefix: vec!["unsnip".to_string(), "@few".to_string()],
body: "fewer words".to_string(),
description: Some("alt description".to_string()),
name: "other name".to_string(),
}),
Arc::new(project::snippet_provider::Snippet {
prefix: vec!["ab aa".to_string()],
body: "abcd".to_string(),
description: None,
name: "alphabet".to_string(),
}),
],
);
});
})
});
let get_completions = |cx: &mut EditorLspTestContext| {
cx.update_editor(|editor, _, _| match &*editor.context_menu.borrow() {
Some(CodeContextMenu::Completions(context_menu)) => {
let entries = context_menu.entries.borrow();
entries
.iter()
.map(|entry| entry.string.clone())
.collect_vec()
}
_ => vec![],
})
};
// snippets:
// @foo
// foo bar
//
// when typing:
// - if I type a symbol "open the completions with snippets only"
// - if I type a word character "open the completions menu" (if it had been open snippets only, clear it out)
//
// stuff we need:
// - filtering logic change?
// - remember how far back the completion started.
let test_cases: &[(&str, &[&str])] = &[
(
"un",
&[
"unsafe",
"unlimit word count",
"unlimited unknown",
"unlimited word count",
"unsnip",
],
),
(
"u ",
&[
"unlimit word count",
"unlimited unknown",
"unlimited word count",
],
),
("u a", &["ab aa", "unsafe"]), // unsAfe
(
"u u",
&[
"unsafe",
"unlimit word count",
"unlimited unknown", // ranked highest among snippets
"unlimited word count",
"unsnip",
],
),
("uw c", &["unlimit word count", "unlimited word count"]),
(
"u w",
&[
"unlimit word count",
"unlimited word count",
"unlimited unknown",
],
),
("u w ", &["unlimit word count", "unlimited word count"]),
(
"u ",
&[
"unlimit word count",
"unlimited unknown",
"unlimited word count",
],
),
("wor", &[]),
("uf", &["unsafe"]),
("af", &["unsafe"]),
("afu", &[]),
(
"ue",
&["unsafe", "unlimited unknown", "unlimited word count"],
),
("@", &["@few"]),
("@few", &["@few"]),
("@ ", &[]),
("a@", &["@few"]),
("a@f", &["@few", "unsafe"]),
("a@fw", &["@few"]),
("a", &["ab aa", "unsafe"]),
("aa", &["ab aa"]),
("aaa", &["ab aa"]),
("ab", &["ab aa"]),
("ab ", &["ab aa"]),
("ab a", &["ab aa", "unsafe"]),
("ab ab", &["ab aa"]),
("ab ab aa", &["ab aa"]),
];
for &(input_to_simulate, expected_completions) in test_cases {
cx.set_state("fn a() { ˇ }\n");
for c in input_to_simulate.split("") {
cx.simulate_input(c);
cx.run_until_parked();
}
let expected_completions = expected_completions
.iter()
.map(|s| s.to_string())
.collect_vec();
assert_eq!(
get_completions(&mut cx),
expected_completions,
"< actual / expected >, input = {input_to_simulate:?}",
);
}
}
/// Handles a completion request given a marked string: the '|' character
/// specifies where the completion should be triggered from, and '<' and '>'
/// delimit the range that should be replaced by the returned completions.
@@ -27104,6 +27459,60 @@ async fn test_copy_line_without_trailing_newline(cx: &mut TestAppContext) {
cx.assert_editor_state("line1\nline2\nˇ");
}
#[gpui::test]
async fn test_multi_selection_copy_with_newline_between_copied_lines(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
cx.set_state("ˇline1\nˇline2\nˇline3\n");
cx.update_editor(|e, window, cx| e.copy(&Copy, window, cx));
let clipboard_text = cx
.read_from_clipboard()
.and_then(|item| item.text().as_deref().map(str::to_string));
assert_eq!(
clipboard_text,
Some("line1\nline2\nline3\n".to_string()),
"Copying multiple lines should include a single newline between lines"
);
cx.set_state("lineA\nˇ");
cx.update_editor(|e, window, cx| e.paste(&Paste, window, cx));
cx.assert_editor_state("lineA\nline1\nline2\nline3\nˇ");
}
#[gpui::test]
async fn test_multi_selection_cut_with_newline_between_copied_lines(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
cx.set_state("ˇline1\nˇline2\nˇline3\n");
cx.update_editor(|e, window, cx| e.cut(&Cut, window, cx));
let clipboard_text = cx
.read_from_clipboard()
.and_then(|item| item.text().as_deref().map(str::to_string));
assert_eq!(
clipboard_text,
Some("line1\nline2\nline3\n".to_string()),
"Copying multiple lines should include a single newline between lines"
);
cx.set_state("lineA\nˇ");
cx.update_editor(|e, window, cx| e.paste(&Paste, window, cx));
cx.assert_editor_state("lineA\nline1\nline2\nline3\nˇ");
}
#[gpui::test]
async fn test_end_of_editor_context(cx: &mut TestAppContext) {
init_test(cx, |_| {});


@@ -6,10 +6,10 @@ use crate::{
EditDisplayMode, EditPrediction, Editor, EditorMode, EditorSettings, EditorSnapshot,
EditorStyle, FILE_HEADER_HEIGHT, FocusedBlock, GutterDimensions, HalfPageDown, HalfPageUp,
HandleInput, HoveredCursor, InlayHintRefreshReason, JumpData, LineDown, LineHighlight, LineUp,
MAX_LINE_LEN, MINIMAP_FONT_SIZE, MULTI_BUFFER_EXCERPT_HEADER_HEIGHT, OpenExcerpts,
OpenExcerptsSplit, PageDown, PageUp, PhantomBreakpointIndicator, Point, RowExt, RowRangeExt,
SelectPhase, SelectedTextHighlight, Selection, SelectionDragState, SelectionEffects,
SizingBehavior, SoftWrap, StickyHeaderExcerpt, ToPoint, ToggleFold, ToggleFoldAll,
MAX_LINE_LEN, MINIMAP_FONT_SIZE, MULTI_BUFFER_EXCERPT_HEADER_HEIGHT, OpenExcerpts, PageDown,
PageUp, PhantomBreakpointIndicator, Point, RowExt, RowRangeExt, SelectPhase,
SelectedTextHighlight, Selection, SelectionDragState, SelectionEffects, SizingBehavior,
SoftWrap, StickyHeaderExcerpt, ToPoint, ToggleFold, ToggleFoldAll,
code_context_menus::{CodeActionsMenu, MENU_ASIDE_MAX_WIDTH, MENU_ASIDE_MIN_WIDTH, MENU_GAP},
display_map::{
Block, BlockContext, BlockStyle, ChunkRendererId, DisplaySnapshot, EditorMargins,
@@ -3664,6 +3664,7 @@ impl EditorElement {
row_block_types: &mut HashMap<DisplayRow, bool>,
selections: &[Selection<Point>],
selected_buffer_ids: &Vec<BufferId>,
latest_selection_anchors: &HashMap<BufferId, Anchor>,
is_row_soft_wrapped: impl Copy + Fn(usize) -> bool,
sticky_header_excerpt_id: Option<ExcerptId>,
window: &mut Window,
@@ -3739,7 +3740,13 @@ impl EditorElement {
let selected = selected_buffer_ids.contains(&first_excerpt.buffer_id);
let result = v_flex().id(block_id).w_full().pr(editor_margins.right);
let jump_data = header_jump_data(snapshot, block_row_start, *height, first_excerpt);
let jump_data = header_jump_data(
snapshot,
block_row_start,
*height,
first_excerpt,
latest_selection_anchors,
);
result
.child(self.render_buffer_header(
first_excerpt,
@@ -3774,7 +3781,13 @@ impl EditorElement {
Block::BufferHeader { excerpt, height } => {
let mut result = v_flex().id(block_id).w_full();
let jump_data = header_jump_data(snapshot, block_row_start, *height, excerpt);
let jump_data = header_jump_data(
snapshot,
block_row_start,
*height,
excerpt,
latest_selection_anchors,
);
if sticky_header_excerpt_id != Some(excerpt.id) {
let selected = selected_buffer_ids.contains(&excerpt.buffer_id);
@@ -4042,24 +4055,17 @@ impl EditorElement {
)
.group_hover("", |div| div.underline()),
)
.on_click({
let focus_handle = focus_handle.clone();
move |event, window, cx| {
if event.modifiers().secondary() {
focus_handle.dispatch_action(
&OpenExcerptsSplit,
window,
cx,
);
} else {
focus_handle.dispatch_action(
&OpenExcerpts,
window,
cx,
);
}
.on_click(window.listener_for(&self.editor, {
let jump_data = jump_data.clone();
move |editor, e: &ClickEvent, window, cx| {
editor.open_excerpts_common(
Some(jump_data.clone()),
e.modifiers().secondary(),
window,
cx,
);
}
}),
})),
)
.when_some(parent_path, |then, path| {
then.child(div().child(path).text_color(
@@ -4087,24 +4093,17 @@ impl EditorElement {
cx,
)),
)
.on_click({
let focus_handle = focus_handle.clone();
move |event, window, cx| {
if event.modifiers().secondary() {
focus_handle.dispatch_action(
&OpenExcerptsSplit,
window,
cx,
);
} else {
focus_handle.dispatch_action(
&OpenExcerpts,
window,
cx,
);
}
.on_click(window.listener_for(&self.editor, {
let jump_data = jump_data.clone();
move |editor, e: &ClickEvent, window, cx| {
editor.open_excerpts_common(
Some(jump_data.clone()),
e.modifiers().secondary(),
window,
cx,
);
}
}),
})),
)
},
)
@@ -4250,6 +4249,7 @@ impl EditorElement {
line_layouts: &mut [LineWithInvisibles],
selections: &[Selection<Point>],
selected_buffer_ids: &Vec<BufferId>,
latest_selection_anchors: &HashMap<BufferId, Anchor>,
is_row_soft_wrapped: impl Copy + Fn(usize) -> bool,
sticky_header_excerpt_id: Option<ExcerptId>,
window: &mut Window,
@@ -4293,6 +4293,7 @@ impl EditorElement {
&mut row_block_types,
selections,
selected_buffer_ids,
latest_selection_anchors,
is_row_soft_wrapped,
sticky_header_excerpt_id,
window,
@@ -4350,6 +4351,7 @@ impl EditorElement {
&mut row_block_types,
selections,
selected_buffer_ids,
latest_selection_anchors,
is_row_soft_wrapped,
sticky_header_excerpt_id,
window,
@@ -4405,6 +4407,7 @@ impl EditorElement {
&mut row_block_types,
selections,
selected_buffer_ids,
latest_selection_anchors,
is_row_soft_wrapped,
sticky_header_excerpt_id,
window,
@@ -4487,6 +4490,7 @@ impl EditorElement {
hitbox: &Hitbox,
selected_buffer_ids: &Vec<BufferId>,
blocks: &[BlockLayout],
latest_selection_anchors: &HashMap<BufferId, Anchor>,
window: &mut Window,
cx: &mut App,
) -> AnyElement {
@@ -4495,6 +4499,7 @@ impl EditorElement {
DisplayRow(scroll_position.y as u32),
FILE_HEADER_HEIGHT + MULTI_BUFFER_EXCERPT_HEADER_HEIGHT,
excerpt,
latest_selection_anchors,
);
let editor_bg_color = cx.theme().colors().editor_background;
@@ -7794,18 +7799,52 @@ fn file_status_label_color(file_status: Option<FileStatus>) -> Color {
}
fn header_jump_data(
editor_snapshot: &EditorSnapshot,
block_row_start: DisplayRow,
height: u32,
first_excerpt: &ExcerptInfo,
latest_selection_anchors: &HashMap<BufferId, Anchor>,
) -> JumpData {
let jump_target = if let Some(anchor) = latest_selection_anchors.get(&first_excerpt.buffer_id)
&& let Some(range) = editor_snapshot.context_range_for_excerpt(anchor.excerpt_id)
&& let Some(buffer) = editor_snapshot
.buffer_snapshot()
.buffer_for_excerpt(anchor.excerpt_id)
{
JumpTargetInExcerptInput {
id: anchor.excerpt_id,
buffer,
excerpt_start_anchor: range.start,
jump_anchor: anchor.text_anchor,
}
} else {
JumpTargetInExcerptInput {
id: first_excerpt.id,
buffer: &first_excerpt.buffer,
excerpt_start_anchor: first_excerpt.range.context.start,
jump_anchor: first_excerpt.range.primary.start,
}
};
header_jump_data_inner(editor_snapshot, block_row_start, height, &jump_target)
}
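The new fallback in `header_jump_data` reduces to a per-buffer lookup with a default: jump to the buffer's latest selection anchor when one exists, otherwise to the excerpt's own start. A minimal sketch, using integers as stand-ins for Zed's `BufferId` and anchor types:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for the jump-target choice above: prefer the
// buffer's latest selection anchor, fall back to the excerpt's primary
// start when the buffer has no recorded selection.
fn choose_jump_anchor(
    latest_anchors: &HashMap<u32, i64>,
    buffer_id: u32,
    excerpt_primary_start: i64,
) -> i64 {
    latest_anchors
        .get(&buffer_id)
        .copied()
        .unwrap_or(excerpt_primary_start)
}
```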
struct JumpTargetInExcerptInput<'a> {
id: ExcerptId,
buffer: &'a language::BufferSnapshot,
excerpt_start_anchor: text::Anchor,
jump_anchor: text::Anchor,
}
fn header_jump_data_inner(
snapshot: &EditorSnapshot,
block_row_start: DisplayRow,
height: u32,
for_excerpt: &ExcerptInfo,
for_excerpt: &JumpTargetInExcerptInput,
) -> JumpData {
let range = &for_excerpt.range;
let buffer = &for_excerpt.buffer;
let jump_anchor = range.primary.start;
let excerpt_start = range.context.start;
let jump_position = language::ToPoint::to_point(&jump_anchor, buffer);
let rows_from_excerpt_start = if jump_anchor == excerpt_start {
let jump_position = language::ToPoint::to_point(&for_excerpt.jump_anchor, buffer);
let excerpt_start = for_excerpt.excerpt_start_anchor;
let rows_from_excerpt_start = if for_excerpt.jump_anchor == excerpt_start {
0
} else {
let excerpt_start_point = language::ToPoint::to_point(&excerpt_start, buffer);
@@ -7822,7 +7861,7 @@ fn header_jump_data(
JumpData::MultiBufferPoint {
excerpt_id: for_excerpt.id,
anchor: jump_anchor,
anchor: for_excerpt.jump_anchor,
position: jump_position,
line_offset_from_top,
}
@@ -8620,7 +8659,7 @@ impl LineWithInvisibles {
let fragment_end_x = fragment_start_x + shaped_line.width;
if x < fragment_end_x {
return Some(
fragment_start_index + shaped_line.index_for_x(x - fragment_start_x),
fragment_start_index + shaped_line.index_for_x(x - fragment_start_x)?,
);
}
fragment_start_x = fragment_end_x;
@@ -9139,15 +9178,18 @@ impl Element for EditorElement {
cx,
);
let (local_selections, selected_buffer_ids): (
let (local_selections, selected_buffer_ids, latest_selection_anchors): (
Vec<Selection<Point>>,
Vec<BufferId>,
HashMap<BufferId, Anchor>,
) = self
.editor_with_selections(cx)
.map(|editor| {
editor.update(cx, |editor, cx| {
let all_selections =
editor.selections.all::<Point>(&snapshot.display_snapshot);
let all_anchor_selections =
editor.selections.all_anchors(&snapshot.display_snapshot);
let selected_buffer_ids =
if editor.buffer_kind(cx) == ItemBufferKind::Singleton {
Vec::new()
@@ -9176,10 +9218,31 @@ impl Element for EditorElement {
selections
.extend(editor.selections.pending(&snapshot.display_snapshot));
(selections, selected_buffer_ids)
let mut anchors_by_buffer: HashMap<BufferId, (usize, Anchor)> =
HashMap::default();
for selection in all_anchor_selections.iter() {
let head = selection.head();
if let Some(buffer_id) = head.buffer_id {
anchors_by_buffer
.entry(buffer_id)
.and_modify(|(latest_id, latest_anchor)| {
if selection.id > *latest_id {
*latest_id = selection.id;
*latest_anchor = head;
}
})
.or_insert((selection.id, head));
}
}
let latest_selection_anchors = anchors_by_buffer
.into_iter()
.map(|(buffer_id, (_, anchor))| (buffer_id, anchor))
.collect();
(selections, selected_buffer_ids, latest_selection_anchors)
})
})
.unwrap_or_default();
.unwrap_or_else(|| (Vec::new(), Vec::new(), HashMap::default()));
let (selections, mut active_rows, newest_selection_head) = self
.layout_selections(
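The `anchors_by_buffer` reduction in the hunk above can be sketched independently of Zed's types. Plain integers stand in for `BufferId`, selection ids, and `Anchor`s; for each buffer we keep the anchor of the selection with the highest id, i.e. the newest one:

```rust
use std::collections::HashMap;

// Sketch of the reduction: group selections by buffer and retain only the
// anchor whose selection id is largest for that buffer.
fn latest_anchor_per_buffer(selections: &[(u32, usize, i64)]) -> HashMap<u32, i64> {
    let mut by_buffer: HashMap<u32, (usize, i64)> = HashMap::new();
    for &(buffer_id, selection_id, anchor) in selections {
        by_buffer
            .entry(buffer_id)
            .and_modify(|(latest_id, latest_anchor)| {
                if selection_id > *latest_id {
                    *latest_id = selection_id;
                    *latest_anchor = anchor;
                }
            })
            .or_insert((selection_id, anchor));
    }
    by_buffer
        .into_iter()
        .map(|(buffer_id, (_, anchor))| (buffer_id, anchor))
        .collect()
}
```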
@@ -9410,6 +9473,7 @@ impl Element for EditorElement {
&mut line_layouts,
&local_selections,
&selected_buffer_ids,
&latest_selection_anchors,
is_row_soft_wrapped,
sticky_header_excerpt_id,
window,
@@ -9443,6 +9507,7 @@ impl Element for EditorElement {
&hitbox,
&selected_buffer_ids,
&blocks,
&latest_selection_anchors,
window,
cx,
)


@@ -341,7 +341,13 @@ fn show_hover(
renderer
.as_ref()
.and_then(|renderer| {
renderer.render_hover(group, point_range, buffer_id, cx)
renderer.render_hover(
group,
point_range,
buffer_id,
language_registry.clone(),
cx,
)
})
.context("no rendered diagnostic")
})??;
@@ -986,6 +992,11 @@ impl DiagnosticPopover {
self.markdown.clone(),
diagnostics_markdown_style(window, cx),
)
.code_block_renderer(markdown::CodeBlockRenderer::Default {
copy_button: false,
copy_button_on_hover: false,
border: false,
})
.on_url_click(
move |link, window, cx| {
if let Some(renderer) = GlobalDiagnosticRenderer::global(cx)


@@ -1,4 +1,5 @@
use std::{
collections::hash_map,
ops::{ControlFlow, Range},
time::Duration,
};
@@ -778,6 +779,7 @@ impl Editor {
}
let excerpts = self.buffer.read(cx).excerpt_ids();
let mut inserted_hint_text = HashMap::default();
let hints_to_insert = new_hints
.into_iter()
.filter_map(|(chunk_range, hints_result)| {
@@ -804,8 +806,35 @@ impl Editor {
}
}
})
.flat_map(|hints| hints.into_values())
.flatten()
.flat_map(|new_hints| {
let mut hints_deduplicated = Vec::new();
if new_hints.len() > 1 {
for (server_id, new_hints) in new_hints {
for (new_id, new_hint) in new_hints {
let hints_text_for_position = inserted_hint_text
.entry(new_hint.position)
.or_insert_with(HashMap::default);
let insert =
match hints_text_for_position.entry(new_hint.text().to_string()) {
hash_map::Entry::Occupied(o) => o.get() == &server_id,
hash_map::Entry::Vacant(v) => {
v.insert(server_id);
true
}
};
if insert {
hints_deduplicated.push((new_id, new_hint));
}
}
}
} else {
hints_deduplicated.extend(new_hints.into_values().flatten());
}
hints_deduplicated
})
.filter_map(|(hint_id, lsp_hint)| {
if inlay_hints.allowed_hint_kinds.contains(&lsp_hint.kind)
&& inlay_hints
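The deduplication in the hunk above keys on (position, label text) and lets the first server to claim a pair keep inserting it, so one server's own repeats survive while a second server's identical hint at the same position is dropped. A sketch with stand-in types, not Zed's:

```rust
use std::collections::{HashMap, hash_map};

// (server_id, position, label) triples in; a hint is kept only if its
// (position, label) pair is new, or was first claimed by the same server.
fn dedup_hints(hints: Vec<(u32, u32, String)>) -> Vec<(u32, u32, String)> {
    let mut first_server_for: HashMap<u32, HashMap<String, u32>> = HashMap::new();
    let mut deduplicated = Vec::new();
    for (server_id, position, text) in hints {
        let per_position = first_server_for.entry(position).or_default();
        let keep = match per_position.entry(text.clone()) {
            hash_map::Entry::Occupied(occupied) => *occupied.get() == server_id,
            hash_map::Entry::Vacant(vacant) => {
                vacant.insert(server_id);
                true
            }
        };
        if keep {
            deduplicated.push((server_id, position, text));
        }
    }
    deduplicated
}
```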
@@ -3732,6 +3761,7 @@ let c = 3;"#
let mut fake_servers = language_registry.register_fake_lsp(
"Rust",
FakeLspAdapter {
name: "rust-analyzer",
capabilities: lsp::ServerCapabilities {
inlay_hint_provider: Some(lsp::OneOf::Left(true)),
..lsp::ServerCapabilities::default()
@@ -3804,6 +3834,78 @@ let c = 3;"#
},
);
// Add another server that does send the same, duplicate hints back
let mut fake_servers_2 = language_registry.register_fake_lsp(
"Rust",
FakeLspAdapter {
name: "CrabLang-ls",
capabilities: lsp::ServerCapabilities {
inlay_hint_provider: Some(lsp::OneOf::Left(true)),
..lsp::ServerCapabilities::default()
},
initializer: Some(Box::new(move |fake_server| {
fake_server.set_request_handler::<lsp::request::InlayHintRequest, _, _>(
move |params, _| async move {
if params.text_document.uri
== lsp::Uri::from_file_path(path!("/a/main.rs")).unwrap()
{
Ok(Some(vec![
lsp::InlayHint {
position: lsp::Position::new(1, 9),
label: lsp::InlayHintLabel::String(": i32".to_owned()),
kind: Some(lsp::InlayHintKind::TYPE),
text_edits: None,
tooltip: None,
padding_left: None,
padding_right: None,
data: None,
},
lsp::InlayHint {
position: lsp::Position::new(19, 9),
label: lsp::InlayHintLabel::String(": i33".to_owned()),
kind: Some(lsp::InlayHintKind::TYPE),
text_edits: None,
tooltip: None,
padding_left: None,
padding_right: None,
data: None,
},
]))
} else if params.text_document.uri
== lsp::Uri::from_file_path(path!("/a/lib.rs")).unwrap()
{
Ok(Some(vec![
lsp::InlayHint {
position: lsp::Position::new(1, 10),
label: lsp::InlayHintLabel::String(": i34".to_owned()),
kind: Some(lsp::InlayHintKind::TYPE),
text_edits: None,
tooltip: None,
padding_left: None,
padding_right: None,
data: None,
},
lsp::InlayHint {
position: lsp::Position::new(29, 10),
label: lsp::InlayHintLabel::String(": i35".to_owned()),
kind: Some(lsp::InlayHintKind::TYPE),
text_edits: None,
tooltip: None,
padding_left: None,
padding_right: None,
data: None,
},
]))
} else {
panic!("Unexpected file path {:?}", params.text_document.uri);
}
},
);
})),
..FakeLspAdapter::default()
},
);
let (buffer_1, _handle_1) = project
.update(cx, |project, cx| {
project.open_local_buffer_with_lsp(path!("/a/main.rs"), cx)
@@ -3847,6 +3949,7 @@ let c = 3;"#
});
let fake_server = fake_servers.next().await.unwrap();
let _fake_server_2 = fake_servers_2.next().await.unwrap();
cx.executor().advance_clock(Duration::from_millis(100));
cx.executor().run_until_parked();
@@ -3855,11 +3958,16 @@ let c = 3;"#
assert_eq!(
vec![
": i32".to_string(),
": i32".to_string(),
": i33".to_string(),
": i33".to_string(),
": i34".to_string(),
": i34".to_string(),
": i35".to_string(),
": i35".to_string(),
],
sorted_cached_hint_labels(editor, cx),
"We receive duplicate hints from 2 servers and cache them all"
);
assert_eq!(
vec![
@@ -3869,7 +3977,7 @@ let c = 3;"#
": i33".to_string(),
],
visible_hint_labels(editor, cx),
"lib.rs is added before main.rs , so its excerpts should be visible first"
"lib.rs is added before main.rs , so its excerpts should be visible first; hints should be deduplicated per label"
);
})
.unwrap();
@@ -3919,8 +4027,12 @@ let c = 3;"#
assert_eq!(
vec![
": i32".to_string(),
": i32".to_string(),
": i33".to_string(),
": i33".to_string(),
": i34".to_string(),
": i34".to_string(),
": i35".to_string(),
": i35".to_string(),
],
sorted_cached_hint_labels(editor, cx),


@@ -1586,12 +1586,11 @@ impl SearchableItem for Editor {
&mut self,
index: usize,
matches: &[Range<Anchor>],
collapse: bool,
window: &mut Window,
cx: &mut Context<Self>,
) {
self.unfold_ranges(&[matches[index].clone()], false, true, cx);
let range = self.range_for_match(&matches[index], collapse);
let range = self.range_for_match(&matches[index]);
let autoscroll = if EditorSettings::get_global(cx).search.center_on_match {
Autoscroll::center()
} else {


@@ -81,7 +81,19 @@ impl MouseContextMenu {
cx: &mut Context<Editor>,
) -> Self {
let context_menu_focus = context_menu.focus_handle(cx);
window.focus(&context_menu_focus);
// Since `ContextMenu` is rendered in a deferred fashion, its focus
// handle is not linked to the Editor's until after the deferred draw
// callback runs.
// We need to wait for that to happen before focusing it, so that
// calling `contains_focused` on the editor's focus handle returns
// `true` when the `ContextMenu` is focused.
let focus_handle = context_menu_focus.clone();
cx.on_next_frame(window, move |_, window, cx| {
cx.on_next_frame(window, move |_, window, _cx| {
window.focus(&focus_handle);
});
});
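The nested `on_next_frame` calls above wait two frames before focusing. A toy model of that deferral (not GPUI's API): callbacks registered via `on_next_frame` run on the following frame, so nesting one inside another delays the effect by exactly two frames, long enough for the deferred draw to link the menu's focus handle to the editor's.

```rust
type Callback = Box<dyn FnOnce(&mut App)>;

// Minimal frame loop: `next_frame` holds work for the upcoming frame.
struct App {
    next_frame: Vec<Callback>,
    focused: bool,
}

impl App {
    fn on_next_frame(&mut self, callback: impl FnOnce(&mut App) + 'static) {
        self.next_frame.push(Box::new(callback));
    }

    // Runs everything scheduled for this frame; callbacks may schedule
    // more work for the next frame by calling `on_next_frame` again.
    fn run_frame(&mut self) {
        let callbacks = std::mem::take(&mut self.next_frame);
        for callback in callbacks {
            callback(self);
        }
    }
}
```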
let _dismiss_subscription = cx.subscribe_in(&context_menu, window, {
let context_menu_focus = context_menu_focus.clone();
@@ -329,8 +341,18 @@ mod tests {
}
"});
cx.editor(|editor, _window, _app| assert!(editor.mouse_context_menu.is_none()));
cx.update_editor(|editor, window, cx| {
deploy_context_menu(editor, Some(Default::default()), point, window, cx)
deploy_context_menu(editor, Some(Default::default()), point, window, cx);
// Assert that, even after deploying the editor's mouse context
// menu, the editor's focus handle still contains the focused
// element. The pane's tab bar relies on this to determine whether
// to show the tab bar buttons and there was a small flicker when
// deploying the mouse context menu that would cause this to not be
// true, making it so that the buttons would disappear for a couple
// of frames.
assert!(editor.focus_handle.contains_focused(window, cx));
});
cx.assert_editor_state(indoc! {"


@@ -372,7 +372,7 @@ impl SelectionsCollection {
let is_empty = positions.start == positions.end;
let line_len = display_map.line_len(row);
let line = display_map.layout_row(row, text_layout_details);
let start_col = line.index_for_x(positions.start) as u32;
let start_col = line.closest_index_for_x(positions.start) as u32;
let (start, end) = if is_empty {
let point = DisplayPoint::new(row, std::cmp::min(start_col, line_len));
@@ -382,7 +382,7 @@ impl SelectionsCollection {
return None;
}
let start = DisplayPoint::new(row, start_col);
let end_col = line.index_for_x(positions.end) as u32;
let end_col = line.closest_index_for_x(positions.end) as u32;
let end = DisplayPoint::new(row, end_col);
(start, end)
};
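The switch from `index_for_x` to `closest_index_for_x` matters when the x position falls past the end of the shaped line. Hypothetical stand-ins for the two lookups (Zed's real methods live on its shaped-line type): the fallible one has no answer beyond the last glyph, while the clamping one snaps to the nearest valid column.

```rust
// Walk glyph widths left to right; `Some(i)` if `x` lands inside glyph i.
fn index_for_x(glyph_widths: &[f32], x: f32) -> Option<usize> {
    let mut glyph_start = 0.0;
    for (index, width) in glyph_widths.iter().enumerate() {
        if x < glyph_start + width {
            return Some(index);
        }
        glyph_start += width;
    }
    None
}

// Same lookup, but clamp to the end of the line instead of failing.
fn closest_index_for_x(glyph_widths: &[f32], x: f32) -> usize {
    index_for_x(glyph_widths, x).unwrap_or(glyph_widths.len())
}
```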


@@ -6,7 +6,7 @@ use std::{
};
use anyhow::Result;
use language::rust_lang;
use language::{markdown_lang, rust_lang};
use serde_json::json;
use crate::{Editor, ToPoint};
@@ -313,6 +313,22 @@ impl EditorLspTestContext {
Self::new(language, Default::default(), cx).await
}
pub async fn new_markdown_with_rust(cx: &mut gpui::TestAppContext) -> Self {
let context = Self::new(
Arc::into_inner(markdown_lang()).unwrap(),
Default::default(),
cx,
)
.await;
let language_registry = context.workspace.read_with(cx, |workspace, cx| {
workspace.project().read(cx).languages().clone()
});
language_registry.add(rust_lang());
context
}
/// Constructs lsp range using a marked string with '[', ']' range delimiters
#[track_caller]
pub fn lsp_range(&mut self, marked_text: &str) -> lsp::Range {


@@ -59,6 +59,17 @@ impl EditorTestContext {
})
.await
.unwrap();
let language = project
.read_with(cx, |project, _cx| {
project.languages().language_for_name("Plain Text")
})
.await
.unwrap();
buffer.update(cx, |buffer, cx| {
buffer.set_language(Some(language), cx);
});
let editor = cx.add_window(|window, cx| {
let editor = build_editor_with_project(
project,
