Compare commits


86 Commits

Author SHA1 Message Date
David
d4caf96c26 fix merge 2025-11-25 15:54:18 +01:00
David
ed38711bb4 prettier 2025-11-25 15:53:23 +01:00
David
65a65a740a add performance doc from channel to docs 2025-11-25 15:53:04 +01:00
Lukas Wirth
4a9ab5dcf1 multi_buffer: Fix up some anchor checks (#43454)
Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-25 15:52:23 +01:00
Piotr Osiewicz
7c44e25ff9 ci: Do not show output of failed tests at the end too (#43449)
This reverts #39643, effectively

For the record, @SomeoneToIgnore found it quite cumbersome to scroll
through logs just to see which tests have failed. I kinda see the
argument. At the same time, I wish nextest could do both: it could
aggregate logs of failed tests and then print out the summary.

Release Notes:

- N/A
2025-11-25 15:52:08 +01:00
AidanV
395cd249af helix: Fix Vim::NextWordEnd off-by-one in HelixSelect (#43234)
Closes #43209
Closes #38121

Starting on the first character.
Running `v e` before changes: 
<img width="410" height="162" alt="image"
src="https://github.com/user-attachments/assets/ee13fa29-826c-45c0-9ea0-a598cc8e781a"
/>

Running `v e` after changes:
<img width="483" height="166" alt="image"
src="https://github.com/user-attachments/assets/24791a07-97df-47cd-9ef2-171522adb796"
/>

Change Notes:

- Added helix selection sanitation code that directly mirrors the code in the Vim
[`visual_motion`](b6728c080c/crates/vim/src/visual.rs (L237))
method. I kept the comments from the Vim section that explain its purpose.
- The above change converted the problem from fixing `v e` to fixing `v w`. Since `w` is treated differently in Helix than in Vim (`w` in Vim goes to the first character of a word, while `w` in Helix goes to the character before a word; commented
[here](b6728c080c/crates/vim/src/helix.rs (L132))),
the code treats `w` in `HelixSelect` as a motion that differs from the Vim motion in the same way that the function
[`helix_move_cursor`](b6728c080c/crates/vim/src/helix.rs (L353))
separates these behaviors.
- Added a regression test

Release Notes:

- Fixed a bug where `Vim::NextWordEnd` in `HelixSelect` would not select the
whole word.
2025-11-25 15:52:08 +01:00
Piotr Osiewicz
c14efd2e77 lsp: Fix potential double didClose notification when renaming a file (#43448)
Closes #42709

Release Notes:

- N/A
2025-11-25 15:52:08 +01:00
Kirill Bulatov
ef866162b9 Only show ssh logs when toggled (#43445)
Same as in collab projects.

Release Notes:

- N/A
2025-11-25 15:52:08 +01:00
Kirill Bulatov
49e156cb6b Fix first window open not focusing the modals (#43180)
Closes https://github.com/zed-industries/zed/issues/4357
Closes https://github.com/zed-industries/zed/issues/41278

Release Notes:

- Fixed modals not getting focus on window reopen

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-11-25 15:52:08 +01:00
Ole Jørgen Brønner
58fd39ddcb multi_buffer: Fix editor::ExpandExcerpts failing when cursor is at excerpt start (#42324)
The bug is easily verified by:

1. open any multi-buffer
2. place the cursor at the beginning of an excerpt
3. run the editor::ExpandExcerpts / editor: expand excerpts action
4. The excerpt is not expanded

Since the `buffer_ids_for_range` function basically did the same thing and had
even been changed the same way earlier, I DRYed these functions as well.

Note: I'm a rust novice, so keep an extra eye on rust technicalities
when reviewing :)

---

Release Notes:

- Fix editor: expand excerpts failing when cursor is at excerpt start

---------

Co-authored-by: Lukas Wirth <me@lukaswirth.dev>
2025-11-25 15:52:08 +01:00
Max Brunsfeld
624dab2027 Combine zeta and zeta2 edit prediction providers (#43284)
We've realized that a lot of the logic within an
`EditPredictionProvider` is not specific to a particular edit prediction
model / service. Rather, it is just the generic state management
required to perform edit predictions at all in Zed. We want to move to a
setup where there's one "built-in" edit prediction provider in Zed,
which can be pointed at different edit prediction models. The only logic
that is different for different models is how we construct the prompt,
send the request, and parse the output.
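
To make that split concrete, here is a minimal sketch of how the model-specific surface could look behind a trait; the names here (`EditPredictionModel`, `PredictionRequest`) are assumptions for illustration, not Zed's actual API:

```rust
// Hypothetical sketch: the built-in provider owns the generic state
// management, and only these model-specific steps vary per backend
// (zeta1, zeta2, sweep, ...). Names and signatures are illustrative.
struct PredictionRequest {
    prompt: String,
}

struct PredictedEdits {
    unified_diff: String,
}

trait EditPredictionModel {
    /// Construct the prompt from the buffer excerpt and recent edit events.
    fn build_request(&self, excerpt: &str, events: &[String]) -> PredictionRequest;
    /// Send the request to the model's service (simplified to a blocking call).
    fn send(&self, request: PredictionRequest) -> Result<String, String>;
    /// Parse the raw model output into edits the generic provider can apply.
    fn parse_output(&self, raw: &str) -> Result<PredictedEdits, String>;
}
```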

This PR also changes the behavior of the staff-only `zeta2` feature flag
so that it only gates your *ability* to use Zeta2, but you can still use
your local settings file to choose between different edit prediction
models/services: zeta1, zeta2, and sweep.

This PR also makes zeta1's outcome reporting and prediction-rating
features work with all prediction models, not just zeta1.

To do:
* [x] remove duplicated logic around sending cloud requests between
zeta1 and zeta2
* [x] port the outcome reporting logic from zeta to zeta2.
* [x] get the "rate completions" modal working with all EP models
   * [x] display edit prediction diff
   * [x] show edit history events
* [x] remove the original `zeta` crate.

Release Notes:

- N/A

---------

Co-authored-by: Agus Zubiaga <agus@zed.dev>
Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-11-25 15:52:08 +01:00
Kirill Bulatov
f9b25f07cb Redact environment variables in server info view (#43436)
Follow-up of https://github.com/zed-industries/zed/pull/42831

Release Notes:

- N/A
2025-11-25 15:52:08 +01:00
Julia Ryan
8514799277 Fix remote project snippet duplication (#43429)
Closes #43311

Release Notes:

- N/A

---------

Co-authored-by: John Tur <john-tur@outlook.com>
2025-11-25 15:52:08 +01:00
Kirill Bulatov
5af3a1554a Keep single default PHP language server (#43432)
9a119b18ee/extension.toml
provides 3 language servers for `php`, so `...` will always include all
3 if they are not explicitly excluded or included.

Change the configs and docs so that only one PHP language server is
used.

Release Notes:

- N/A
2025-11-25 15:52:08 +01:00
Julia Ryan
945d0fb48f Fix zed cli in NixOS WSL instances (#43433)
This fixes running `zed <path>` inside NixOS WSL instances. We're
copying the approach used elsewhere: try `--exec` first, and if that
fails, fall back to an actual shell. That covers the NixOS case, because
NixOS only puts binaries on your PATH inside the `/etc/profile` script,
which is sourced on shell startup.
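
A rough sketch of that fallback using `std::process::Command`; the exact `wsl.exe` invocation shown here is an assumption for illustration, not a copy of Zed's CLI code:

```rust
use std::process::Command;

// Try running the binary directly with `--exec`; if that fails, go through a
// login shell so /etc/profile runs and PATH gets populated (the NixOS case).
fn run_in_wsl(distro: &str, program: &str, args: &[&str]) -> std::io::Result<()> {
    let direct = Command::new("wsl.exe")
        .args(["--distribution", distro, "--exec", program])
        .args(args)
        .status();
    if matches!(direct, Ok(status) if status.success()) {
        return Ok(());
    }
    // Naive quoting; a real implementation would escape the arguments.
    let command_line = std::iter::once(program)
        .chain(args.iter().copied())
        .collect::<Vec<_>>()
        .join(" ");
    Command::new("wsl.exe")
        .args(["--distribution", distro, "--", "sh", "-lc", command_line.as_str()])
        .status()
        .map(|_| ())
}
```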

Release Notes:

- N/A

---------

Co-authored-by: John Tur <john-tur@outlook.com>
2025-11-25 15:52:08 +01:00
Mayank Verma
478e2a5c34 editor: Fix copy file actions not working in remote environments (#43362)
Closes #42500

Release Notes:

- Fixed all three editor actions not working in remote environments
  - `editor: copy file name`
  - `editor: copy file location`
  - `editor: copy file name without extension`

Here's the before/after:




https://github.com/user-attachments/assets/bfb03e99-2e1a-47a2-bd26-280180154fe3
2025-11-25 15:52:08 +01:00
Lennart
9b0d618f4a vim: Fix cursor shape after deactivation (#42834)
Update the `Vim.deactivate` method to ensure that the cursor shape is
reset to the one available in the user's settings, in the `cursor_shape`
setting, instead of simply defaulting to `CursorShape::Bar`.

In order to test this behavior, the `Editor.cursor_shape` method was
also introduced.

Release Notes:

- Fixed the cursor shape reset in vim mode deactivation, ensuring that
the user's `cursor_shape` setting is used

---------

Co-authored-by: dino <dinojoaocosta@gmail.com>
2025-11-25 15:52:08 +01:00
Mayank Verma
4c0abdd245 project: Send LSP metadata to remote ServerInfo (#42831)
Closes #39582

Release Notes:

- Added LSP metadata to remote ServerInfo

Here's the before/after:


https://github.com/user-attachments/assets/1057faa5-82af-4975-abad-5e10e139fac1

---------

Co-authored-by: Kirill Bulatov <mail4score@gmail.com>
2025-11-25 15:52:08 +01:00
Mikayla Maki
fc85692e39 Add support for Opus 4.5 (#43425)
Adds support for Opus 4.5
- [x] BYOK
- [x] Amazon Bedrock

Release Notes:

- Added support for Opus 4.5

Co-authored-by: Richard Feldman <oss@rtfeldman.com>
2025-11-25 15:52:08 +01:00
localcc
44a64e78d4 Improve Windows path canonicalization (#43423)
Path canonicalization on Windows now favors keeping the drive letter
intact. This helps some LSPs with mapped network drive compatibility.

Closes #41336 

Release Notes:

- N/A
2025-11-25 15:52:08 +01:00
morgankrey
c9daa565e2 Opus 4.5 and Gemini 3 to docs (#43424)
Add Opus 4.5 and Gemini 3 to docs

Release Notes:

- N/A
2025-11-25 15:52:08 +01:00
Kunall Banerjee
e9260f0e99 Rework and consolidate issue templates (#43403)
We’re reworking our triage process, and as part of that, our issue
templates are worth revisiting. We have multiple issue templates for
arbitrary categories and not enough enforcement. The plan is to
consolidate the issue templates (maybe all into one) and drop the
others.

Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Yeoh Joer
aa27bd27fc Passthrough env to npm subcommands when using the system node runtime (#43102)
Closes #39448
Closes #37866

This PR expands the env-clearing fix from #42587 to include the
SystemNodeRuntime, which covers Node.js installations managed by Mise.
When running under the system runtime, npm subcommands were still
launched with a cleared environment, preventing variables such as
MISE_DATA_DIR from reaching the shim or the mise binary itself. As a
result, Mise finds the npm binary in the default MISE_DATA_DIR,
consistent with the behavior described in
https://github.com/zed-industries/zed/issues/39448#issuecomment-3433644569.

This change ensures that environment variables are passed through for
npm subcommands when using the system Node runtime, restoring expected
behavior for Mise-managed Node installations. This also fixes cases
where envs are used by npm itself.
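
As a rough illustration of the difference (using `std::process::Command` directly; the actual change goes through Zed's node runtime abstraction):

```rust
use std::process::Command;

// Previous behavior for the system runtime: a cleared environment, so shims
// like mise never see MISE_DATA_DIR and friends.
fn npm_with_cleared_env(npm: &str) -> Command {
    let mut cmd = Command::new(npm);
    cmd.arg("install").env_clear();
    if let Some(path) = std::env::var_os("PATH") {
        cmd.env("PATH", path);
    }
    cmd
}

// New behavior: inherit the parent environment so variables such as
// MISE_DATA_DIR reach the npm shim and the mise binary itself.
fn npm_with_inherited_env(npm: &str) -> Command {
    let mut cmd = Command::new(npm);
    cmd.arg("install");
    cmd
}
```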

Release Notes:

- Enable environment passthrough for npm subcommands
2025-11-25 15:52:07 +01:00
Danilo Leal
8f59769c0f ui: Update ThreadItem component design (#43421)
Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Richard Feldman
4b056288df Fix Gemini 3 on OpenRouter (#43416)
Release Notes:

- Gemini 3 now works on OpenRouter in the Agent Panel
2025-11-25 15:52:07 +01:00
Agus Zubiaga
28b027f927 Add each panel to the workspace as soon as it's ready (#43414)
We'll now add panels to the workspace as soon as they're ready rather
than waiting for all the rest to complete. We should strive to make all
panels fast, but given that their load tasks are fallible and do IO,
this approach seems more resilient.

Additionally, we'll now start loading the agent panel at the same time
as the rest.
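
A small sketch of the "add as soon as ready" idea with plain futures; Zed's real code uses gpui tasks, so the `Panel` type and loader signatures here are placeholders:

```rust
use futures::stream::{FuturesUnordered, StreamExt};

struct Panel {
    name: &'static str,
}

fn add_panel_to_workspace(panel: Panel) {
    println!("added panel: {}", panel.name);
}

// Instead of joining every load task and adding all panels at the end, poll
// the set and add each panel the moment its own (fallible, IO-bound) load
// task finishes.
async fn load_panels(
    loaders: Vec<std::pin::Pin<Box<dyn std::future::Future<Output = Result<Panel, String>>>>>,
) {
    let mut pending: FuturesUnordered<_> = loaders.into_iter().collect();
    while let Some(result) = pending.next().await {
        match result {
            Ok(panel) => add_panel_to_workspace(panel),
            Err(error) => eprintln!("panel failed to load: {error}"),
        }
    }
}
```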

Release Notes:

- workspace: Add panels as soon as they are ready
2025-11-25 15:52:07 +01:00
AidanV
45c2476a23 vim: Fix bug where d . . freezes the editor (#42145)
This bug seems to be caused by pushing an operator (e.g. `d`) followed
by a repeat (`.`), so the recording includes both the operator push and
the repeat. When that recording is itself repeated with `.`, it causes an infinite loop.

This change fixes this bug by pushing a ClearOperator action if there is
an ongoing recording when repeat is called.
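
A hedged sketch of that guard with made-up types (not the actual vim crate internals):

```rust
#[derive(Clone, Debug)]
enum RecordedAction {
    PushOperator(char), // e.g. 'd'
    Repeat,
    ClearOperator,
}

struct Vim {
    // `Some` while a recording is in progress.
    recording: Option<Vec<RecordedAction>>,
}

impl Vim {
    fn repeat(&mut self) {
        // If we're currently recording (e.g. `d` was pushed and then `.` was
        // pressed), record a ClearOperator. Otherwise replaying the recording
        // would push the operator and immediately call `repeat` again, which
        // replays the same recording: an infinite loop.
        if let Some(actions) = self.recording.as_mut() {
            actions.push(RecordedAction::ClearOperator);
        }
        // ...replay the previously completed recording here...
    }
}
```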

Release Notes:

- Fixed bug where pressing `d . .` in Vim mode would freeze the editor.

---------

Co-authored-by: dino <dinojoaocosta@gmail.com>
2025-11-25 15:52:07 +01:00
HuaGu-Dragon
7ae28d854c Attempt to fix go to the end of the line when using helix mode (#41575)
Closes #41550

Release Notes:

- Fixed `<g-l>` behavior in helix mode, which will now correctly go to the last character of the line.
- Fixed not switching to helix normal mode when in default vim context and pressing escape.

---------

Co-authored-by: Jakub Konka <kubkon@jakubkonka.com>
2025-11-25 15:52:07 +01:00
Oleksiy Syvokon
4118b71010 zeta2: Support experimental 1120-seedcoder model (#43411)
1. Introduce a common `PromptFormatter` trait
2. Let models define their generation params.
3. Add support for the experimental 1120-seedcoder prompt format


Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Kirill Bulatov
f00fe516ac Use a proper name for highlights.scm (#43412)
Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Piotr Osiewicz
55f5b477b1 auto_updater: Fix upload-nightly.ps1 and auto-update check (#43404)
Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Danilo Leal
6c5757c74c debugger_ui: Add button to close the panel when docked to bottom (#43409)
This PR adds a button to close the panel when it is docked to the
bottom. Effectively, the button triggers the same `ToggleBottomDock`
action as the button that opened the panel, but I think having it there
makes it extra obvious how to close the panel, which is beneficial.

As a bonus, this also fixes the panel controls container height when it
is docked to the sides, so it perfectly aligns with the panel tab bar
height.

| Perfectly Aligned Header | Close Button |
|--------|--------|
| <img width="2620" height="2010" alt="Screenshot 2025-11-24 at 12  01
2@2x"
src="https://github.com/user-attachments/assets/08a50858-1b50-4ebd-af7a-c5dae32cf4f6"
/> | <img width="2620" height="2010" alt="Screenshot 2025-11-24 at 12 
01@2x"
src="https://github.com/user-attachments/assets/17a6eee0-9934-4949-8741-fffd5b106e95"
/> |

Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Danilo Leal
8c2b507a10 agent_ui: Refine "reject"/"keep" behavior when regenerating previous prompts (#43347)
Closes https://github.com/zed-industries/zed/issues/42753

Consider the following flow: you submit prompt A. Prompt A generates
some edits. You don't click on either "reject" or "keep"; they stay in a
pending state. You then submit prompt B, but before the agent outputs
any response, you click to edit prompt B, thus submitting a
regeneration.

Before this PR, the above flow would cause the edits originating from
prompt A to be auto-rejected. This feels very incorrect and can surprise
users when they see that edits that were pending got rejected. It
feels more correct to only auto-reject changes if you're regenerating
the prompt that directly generated those edits in the first place. Then,
it also feels more correct to assume that if there was a follow-up
prompt after some edits were made, those edits were passively
"accepted".

So, this is what this PR is doing. Consider the following flow to get a
picture of the behavior change:
- You submit prompt A. 
- Prompt A generates some edits. 
- You don't click on either "reject" or "keep"; they're pending. 
- You then submit prompt B, but before the agent outputs anything, you
click to edit prompt B, submitting a regeneration.
- Now, edits from prompt A will be auto-kept.

Release Notes:

- agent: Improved the "reject"/"keep" behavior when regenerating older
prompts by auto-keeping pending edits that don't originate from the
prompt being regenerated.
2025-11-25 15:52:07 +01:00
Vasyl Protsiv
278df0f1c2 sum_tree: Make SumTree::append run in logarithmic time (#43349)
The `SumTree::append` method is slow when appending large trees to small
trees. The reason is this code here:

f57f4cd360/crates/sum_tree/src/sum_tree.rs (L628-L630)

`append` is called recursively until `self` and `other` have the same
height, effectively making this code `O(log^2 n)` in the number of
leaves of the `other` tree in the worst case.

There are no algorithmic reasons why appending large trees must be this
much slower.

This PR proves it by providing an implementation of `append` that works
in logarithmic time regardless of whether `self` is smaller or larger than `other`.

The helper method `append_large` mirrors the logic of
`push_tree_recursive`, but moves the (unlikely) case of merging an
underflowing node into a separate helper function to reduce stack usage. I
am a bit unsure about some implementation choices made in
`push_tree_recursive` and would like to discuss those later, but for now
I didn't change anything there and tried to follow the
same logic in `append_large`.
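
As a toy cost model (not the SumTree code itself), the difference between the two strategies can be sketched like this, counting node visits as a function of the height difference:

```rust
// Old strategy: recurse on the taller tree one level at a time, redoing an
// O(height) push at each level, which is roughly quadratic in the height
// difference (O(log^2 n) in the number of leaves in the worst case).
fn old_append_cost(small_height: u32, large_height: u32) -> u32 {
    let diff = large_height.saturating_sub(small_height);
    (1..=diff).sum()
}

// New strategy (`append_large`): descend the spine of the larger tree once,
// down to the matching height, and splice there: O(log n) total.
fn new_append_cost(small_height: u32, large_height: u32) -> u32 {
    large_height.saturating_sub(small_height)
}

fn main() {
    for large in [4u32, 8, 16, 32] {
        println!(
            "height diff {:>2}: old ~ {:>3} node visits, new = {:>2}",
            large - 2,
            old_append_cost(2, large),
            new_append_cost(2, large)
        );
    }
}
```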

We might also consider adding `push_front`/`prepend` methods to
`SumTree`.

I did not find a good benchmark that covers this case, so I added a new
one to the rope benchmarks.

<details>
<summary>cargo bench (compared to current main)</summary>

```
     Running benches\rope_benchmark.rs (D:\zed\target\release\deps\rope_benchmark-59c669d2895cd2c4.exe)
Gnuplot not found, using plotters backend
push/4096               time:   [195.67 µs 195.75 µs 195.86 µs]
                        thrpt:  [19.944 MiB/s 19.955 MiB/s 19.964 MiB/s]
                 change:
                        time:   [+0.2162% +0.3040% +0.4057%] (p = 0.00 < 0.05)
                        thrpt:  [-0.4040% -0.3030% -0.2157%]
                        Change within noise threshold.
Found 14 outliers among 100 measurements (14.00%)
  2 (2.00%) low mild
  6 (6.00%) high mild
  6 (6.00%) high severe
Benchmarking push/65536: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 7.8s, enable flat sampling, or reduce sample count to 50.
push/65536              time:   [1.4431 ms 1.4485 ms 1.4546 ms]
                        thrpt:  [42.966 MiB/s 43.147 MiB/s 43.310 MiB/s]
                 change:
                        time:   [-3.2257% -1.2013% +0.6431%] (p = 0.27 > 0.05)
                        thrpt:  [-0.6390% +1.2159% +3.3332%]
                        No change in performance detected.
Found 11 outliers among 100 measurements (11.00%)
  1 (1.00%) low mild
  5 (5.00%) high mild
  5 (5.00%) high severe

append/4096             time:   [15.107 µs 15.128 µs 15.149 µs]
                        thrpt:  [257.86 MiB/s 258.22 MiB/s 258.58 MiB/s]
                 change:
                        time:   [+0.9650% +1.5256% +1.9057%] (p = 0.00 < 0.05)
                        thrpt:  [-1.8701% -1.5026% -0.9557%]
                        Change within noise threshold.
Found 2 outliers among 100 measurements (2.00%)
  1 (1.00%) low mild
  1 (1.00%) high severe
append/65536            time:   [1.2870 µs 1.4496 µs 1.6484 µs]
                        thrpt:  [37.028 GiB/s 42.106 GiB/s 47.425 GiB/s]
                 change:
                        time:   [-28.699% -16.073% -0.3133%] (p = 0.04 < 0.05)
                        thrpt:  [+0.3142% +19.151% +40.250%]
                        Change within noise threshold.
Found 17 outliers among 100 measurements (17.00%)
  1 (1.00%) high mild
  16 (16.00%) high severe

slice/4096              time:   [30.580 µs 30.611 µs 30.639 µs]
                        thrpt:  [127.49 MiB/s 127.61 MiB/s 127.74 MiB/s]
                 change:
                        time:   [-2.2958% -0.9674% -0.1835%] (p = 0.08 > 0.05)
                        thrpt:  [+0.1838% +0.9769% +2.3498%]
                        No change in performance detected.
slice/65536             time:   [614.86 µs 795.04 µs 1.0293 ms]
                        thrpt:  [60.723 MiB/s 78.613 MiB/s 101.65 MiB/s]
                 change:
                        time:   [-12.714% +7.2092% +30.676%] (p = 0.52 > 0.05)
                        thrpt:  [-23.475% -6.7244% +14.566%]
                        No change in performance detected.
Found 14 outliers among 100 measurements (14.00%)
  14 (14.00%) high severe

bytes_in_range/4096     time:   [3.3298 µs 3.3416 µs 3.3563 µs]
                        thrpt:  [1.1366 GiB/s 1.1416 GiB/s 1.1456 GiB/s]
                 change:
                        time:   [+2.0652% +3.0667% +4.3765%] (p = 0.00 < 0.05)
                        thrpt:  [-4.1930% -2.9754% -2.0234%]
                        Performance has regressed.
Found 2 outliers among 100 measurements (2.00%)
  2 (2.00%) high severe
bytes_in_range/65536    time:   [80.640 µs 80.825 µs 81.024 µs]
                        thrpt:  [771.38 MiB/s 773.28 MiB/s 775.05 MiB/s]
                 change:
                        time:   [-0.6566% +1.0994% +2.9691%] (p = 0.27 > 0.05)
                        thrpt:  [-2.8835% -1.0875% +0.6609%]
                        No change in performance detected.
Found 10 outliers among 100 measurements (10.00%)
  2 (2.00%) high mild
  8 (8.00%) high severe

chars/4096              time:   [763.17 ns 763.68 ns 764.36 ns]
                        thrpt:  [4.9907 GiB/s 4.9952 GiB/s 4.9985 GiB/s]
                 change:
                        time:   [-2.1138% -0.7973% +0.1096%] (p = 0.18 > 0.05)
                        thrpt:  [-0.1095% +0.8037% +2.1595%]
                        No change in performance detected.
Found 10 outliers among 100 measurements (10.00%)
  1 (1.00%) low severe
  6 (6.00%) low mild
  3 (3.00%) high severe
chars/65536             time:   [12.479 µs 12.503 µs 12.529 µs]
                        thrpt:  [4.8714 GiB/s 4.8817 GiB/s 4.8910 GiB/s]
                 change:
                        time:   [-2.4451% -1.0638% +0.6633%] (p = 0.16 > 0.05)
                        thrpt:  [-0.6589% +1.0753% +2.5063%]
                        No change in performance detected.
Found 11 outliers among 100 measurements (11.00%)
  4 (4.00%) high mild
  7 (7.00%) high severe

clip_point/4096         time:   [63.148 µs 63.182 µs 63.229 µs]
                        thrpt:  [61.779 MiB/s 61.825 MiB/s 61.859 MiB/s]
                 change:
                        time:   [+1.0107% +2.1329% +4.2849%] (p = 0.02 < 0.05)
                        thrpt:  [-4.1088% -2.0883% -1.0006%]
                        Performance has regressed.
Found 5 outliers among 100 measurements (5.00%)
  4 (4.00%) high mild
  1 (1.00%) high severe
Benchmarking clip_point/65536: Warming up for 3.0000 s
Warning: Unable to complete 100 samples in 5.0s. You may wish to increase target time to 7.8s, enable flat sampling, or reduce sample count to 50.
clip_point/65536        time:   [1.2578 ms 1.2593 ms 1.2608 ms]
                        thrpt:  [49.573 MiB/s 49.631 MiB/s 49.690 MiB/s]
                 change:
                        time:   [+0.4881% +0.8942% +1.3488%] (p = 0.00 < 0.05)
                        thrpt:  [-1.3308% -0.8863% -0.4857%]
                        Change within noise threshold.
Found 15 outliers among 100 measurements (15.00%)
  1 (1.00%) high mild
  14 (14.00%) high severe

point_to_offset/4096    time:   [16.211 µs 16.235 µs 16.257 µs]
                        thrpt:  [240.28 MiB/s 240.61 MiB/s 240.97 MiB/s]
                 change:
                        time:   [-1.4913% +0.1685% +2.2662%] (p = 0.89 > 0.05)
                        thrpt:  [-2.2159% -0.1682% +1.5139%]
                        No change in performance detected.
Found 2 outliers among 100 measurements (2.00%)
  1 (1.00%) high mild
  1 (1.00%) high severe
point_to_offset/65536   time:   [360.06 µs 360.58 µs 361.16 µs]
                        thrpt:  [173.05 MiB/s 173.33 MiB/s 173.58 MiB/s]
                 change:
                        time:   [+0.0939% +0.8792% +1.8751%] (p = 0.06 > 0.05)
                        thrpt:  [-1.8406% -0.8715% -0.0938%]
                        No change in performance detected.
Found 10 outliers among 100 measurements (10.00%)
  3 (3.00%) high mild
  7 (7.00%) high severe

cursor/4096             time:   [19.266 µs 19.282 µs 19.302 µs]
                        thrpt:  [202.38 MiB/s 202.58 MiB/s 202.75 MiB/s]
                 change:
                        time:   [+1.2457% +2.2477% +2.8702%] (p = 0.00 < 0.05)
                        thrpt:  [-2.7901% -2.1983% -1.2304%]
                        Performance has regressed.
Found 4 outliers among 100 measurements (4.00%)
  2 (2.00%) high mild
  2 (2.00%) high severe
cursor/65536            time:   [467.63 µs 468.36 µs 469.14 µs]
                        thrpt:  [133.22 MiB/s 133.44 MiB/s 133.65 MiB/s]
                 change:
                        time:   [-0.2019% +1.3419% +2.8915%] (p = 0.10 > 0.05)
                        thrpt:  [-2.8103% -1.3241% +0.2023%]
                        No change in performance detected.
Found 12 outliers among 100 measurements (12.00%)
  3 (3.00%) high mild
  9 (9.00%) high severe

append many/small to large
                        time:   [37.419 ms 37.656 ms 37.929 ms]
                        thrpt:  [321.84 MiB/s 324.17 MiB/s 326.22 MiB/s]
                 change:
                        time:   [+0.8113% +1.7361% +2.6538%] (p = 0.00 < 0.05)
                        thrpt:  [-2.5852% -1.7065% -0.8047%]
                        Change within noise threshold.
Found 9 outliers among 100 measurements (9.00%)
  9 (9.00%) high severe
append many/large to small
                        time:   [51.289 ms 51.437 ms 51.614 ms]
                        thrpt:  [236.50 MiB/s 237.32 MiB/s 238.00 MiB/s]
                 change:
                        time:   [-87.518% -87.479% -87.438%] (p = 0.00 < 0.05)
                        thrpt:  [+696.08% +698.66% +701.13%]
                        Performance has improved.
Found 13 outliers among 100 measurements (13.00%)
  4 (4.00%) high mild
  9 (9.00%) high severe
```
</details>

Release Notes:

- sum_tree: Make SumTree::append run in logarithmic time
2025-11-25 15:52:07 +01:00
Lukas Wirth
f8bbe37e82 gpui: Do not panic when GetMonitorInfoW fails (#43397)
Fixes ZED-29R

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-25 15:52:07 +01:00
Piotr Osiewicz
298dbd881c releases: Add build number to Nightly builds (#42990)
- **Remove semantic_version crate and use semver instead**
- **Update upload-nightly**


Release Notes:

- N/A

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-11-25 15:52:07 +01:00
Lukas Wirth
4ed5fd1ecd proto: Fix cloned errors losing all context (#43393)
Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-25 15:52:07 +01:00
Kunall Banerjee
287eed624b docs: Better wording for terminal.working_directory setting (#43388)
Initially this was just going to be a minor docs fix, but then I
wondered if we could improve the copy in the editor as well.

Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Lukas Wirth
d6d433be5e util: Fix invalid powershell redirection syntax used in unix shell env capture (#43390)
Closes  https://github.com/zed-industries/zed/issues/42869

Release Notes:

- Fixed shell env sourcing not working with powershell on unix systems
2025-11-25 15:52:07 +01:00
Binlogo
fcf46cddc2 http_client: Support GITHUB_TOKEN env to auth GitHub requests (#42623)
Closes #33903

Release Notes:

- Ensured Zed reuses `GITHUB_TOKEN` env variable when querying GitHub

---

Before fixing:

-  The `crates-lsp` extension request captured:
```
curl 'https://api.github.com/repos/MathiasPius/crates-lsp/releases' \
-H 'accept: */*' \
-H 'user-agent: Zed/0.212.3 (macos; aarch64)' \
-H 'host: api.github.com' \
```

-  `crates-lsp` extension error: 
```
Language server crates-lsp:

from extension "Crates LSP" version 0.2.0: status error 403, response: "{\"message\":\"API rate limit exceeded for x.x.x.x. (But here's the good news: Authenticated requests get a higher rate limit. Check out the documentation for more details.)\",\"documentation_url\":\"https://docs.github.com/rest/overview/resources-in-the-rest-api#rate-limiting\"}\n"
```

After fixing:

```
export GITHUB_TOKEN=$(gh auth token)
cargo run
```

-  The `crates-lsp` extension request captured:
```
curl 'https://api.github.com/repos/MathiasPius/crates-lsp/releases' \
-H 'authorization: Bearer gho_Nt*****************2KXLw2' \
-H 'accept: */*' \
-H 'user-agent: Zed/0.214.0 (macos; aarch64)' \
-H 'host: api.github.com' \
```

The API rate limitation is resolved.

---

This isn't a perfect solution, but it enables users to avoid the noise.
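
The mechanism boils down to something like this standalone sketch (not the actual `http_client` change):

```rust
use std::env;

// When GITHUB_TOKEN is set, attach it as a bearer token to requests aimed at
// api.github.com so they count against the authenticated rate limit.
fn github_auth_header() -> Option<(&'static str, String)> {
    let token = env::var("GITHUB_TOKEN").ok()?;
    if token.trim().is_empty() {
        return None;
    }
    Some(("authorization", format!("Bearer {token}")))
}

fn main() {
    match github_auth_header() {
        Some((name, value)) => println!("{name}: {value}"),
        None => println!("no GITHUB_TOKEN set; requests stay unauthenticated"),
    }
}
```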
2025-11-25 15:52:07 +01:00
Oscar Villavicencio
3f59934489 docs: Document git_hosting_providers for self-hosted Git instances (#43278)
Closes #38433

Document how to register self-hosted GitHub/GitLab/Bitbucket instances
via git_hosting_providers setting so permalinks and issue links resolve.

Release Notes:

- Added documentation on how to register self-hosted
GitHub/GitLab/Bitbucket instances via the `git_hosting_providers`
setting. This ensures permalinks and issue links can be resolved for
these instances.
2025-11-25 15:52:07 +01:00
shaik-zeeshan
abcffeffa0 Fix gutter hover breakpoint not updating when switching the tabs (#43163)
Closes #42073

Fixes the hover breakpoint not disappearing from a tab when tabs are
switched.


https://github.com/user-attachments/assets/43096d2a-cc5b-46c4-b903-5bc8c33305c5


Release Notes:

- N/A

---------

Co-authored-by: Finn Evers <finn.evers@outlook.de>
2025-11-25 15:52:07 +01:00
mg
4d2bf71c06 Add Windows path for extensions (#42645)
### Description

The `installing-extensions.md` guide was missing the directory path for
the Windows platform; it previously only listed the paths for macOS and
Linux. This PR adds the correct path for Windows users
(`%LOCALAPPDATA%\zed\extensions`).

Release Notes:

- N/A

---------

Co-authored-by: Kirill Bulatov <mail4score@gmail.com>
2025-11-25 15:52:07 +01:00
Lukas Wirth
c072236774 agent: Fix utf8 panic in outline (#43141)
Fixes ZED-3F3

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-25 15:52:07 +01:00
Benjamin Jurk
a47332c91f Treat .h++ files as C++ (#42802)
Release Notes:

- `.h++` files are now treated as C++.
2025-11-25 15:52:07 +01:00
Lukas Wirth
be6ca2f53d miniprofiler_ui: Copy path to clipboard on click (#43280)
Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-25 15:52:07 +01:00
Ulysse Buonomo
873fe01158 cli: Allow opening non-existent paths (#43250)
Changes are made to `parse_path_with_position`:
we try to get the canonical, existing parts of
a path, then append the non-existing parts.
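
A minimal sketch of that approach, as a hypothetical helper rather than the actual `parse_path_with_position` code:

```rust
use std::path::{Path, PathBuf};

// Canonicalize the longest existing ancestor of the path, then re-append the
// components that don't exist on disk yet.
fn canonicalize_lenient(path: &Path) -> PathBuf {
    let mut existing = path;
    let mut missing = Vec::new();
    // Walk up until we find a prefix that exists.
    loop {
        if existing.exists() {
            break;
        }
        match (existing.parent(), existing.file_name()) {
            (Some(parent), Some(name)) => {
                missing.push(name.to_os_string());
                existing = parent;
            }
            _ => return path.to_path_buf(), // nothing exists; give up
        }
    }
    let mut result = existing
        .canonicalize()
        .unwrap_or_else(|_| existing.to_path_buf());
    // Re-attach the non-existing tail in original order.
    for name in missing.into_iter().rev() {
        result.push(name);
    }
    result
}

fn main() {
    let input = Path::new("path/to/non/existing/file.txt");
    println!("{}", canonicalize_lenient(input).display());
}
```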

Closes #4441

Release Notes:

- Added the possibility to open a non-existing path using `zed` CLI
  ```
  zed path/to/non/existing/file.txt
  ```

Co-authored-by: Syed Sadiq Ali <sadiqonemail@gmail.com>
2025-11-25 15:52:07 +01:00
Danilo Leal
d2b40350a5 agent_ui: Make thread markdown editable (#43377)
This PR makes the thread markdown editable. This refers to the "open
thread as markdown" feature, where you previously could only read. One
benefit of this move is that it makes it a bit more obvious that you can
`cmd-s` to save the markdown, allowing you to store the content of a
given thread. You could already do this before, but now that it is
editable, the tab shows a dirty indicator, which communicates
that better.

Release Notes:

- agent: Made the thread markdown editable.
2025-11-25 15:52:07 +01:00
Danilo Leal
5d8cfdda7c docs: Improve edit prediction page (#43379)
This PR improves the edit prediction page, particularly by adding
information about pricing and plans, which wasn't mentioned here at all
before, _and_ by including a section with a keybinding example
demonstrating how to use just `tab` to accept edit
predictions.

Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Danilo Leal
1649cc2655 Add mouse-based affordance to open a recent project in new window (#43373)
Closes https://github.com/zed-industries/zed/issues/31796

<img width="500" height="1034" alt="Screenshot 2025-11-23 at 7  39 2@2x"
src="https://github.com/user-attachments/assets/bd516359-328f-44aa-9130-33f9567df805"
/>

Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Ben Kunkle
4a82746089 settings_ui: Don't show sh as default shell on windows (#43276)
Closes #ISSUE

Release Notes:

- Fixed an issue in the settings UI where changing the terminal shell
would set the default shell to `sh` on Windows
2025-11-25 15:52:07 +01:00
Bennet Bo Fenner
9318ca049b Fix inline assist panic (#43364)
Fixes a panic that was introduced in #42633. Repro steps:
1. Open the inline assistant and mention a file in the prompt
2. Run the inline assistant
3. Remove the mention and insert a different one
4. 💥

This would happen because the mention set still had a reference to the
old editor, because we create a new one in `PromptEditor::unlink`.

Also removes the unused
`crates/agent_ui/src/context_picker/completion_provider.rs` file, which
was mistakenly not removed in the previous PR.

Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Mayank Verma
3651206f66 editor: Fix tab tooltips not showing file path for remote files (#43359)
Closes #42344

Release Notes:

- Fixed editor tab tooltips not showing file path for remote files

Here's the before/after, tested both local and remote:


https://github.com/user-attachments/assets/2768a0f8-e35b-4eff-aa95-d0decb51ec78
2025-11-25 15:52:07 +01:00
Lukas Wirth
18df6158ee terminal_view: Reuse editor's blink manager (#43351)
Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-25 15:52:07 +01:00
John Tur
6922ab8eab Fix labels for GitHub issue templates (#43348)
Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Danilo Leal
6973c81c88 agent_ui: Display footer for model selector when in Zed agent (#43294)
This PR adds back the footer with the "Configure" button in the model
selector, but only when seeing it from the Zed agent (or inline
assistant/text threads). I had removed it a while back because seeing
the "Configure" button, which takes you to the agent panel settings
view, when clicking from an external agent didn't make much sense, given
there's nothing model-wise you can configure from Zed (at least yet) for
an external agent.

This also makes the button in the footer a bit nicer by making it full
screen and displaying a keybinding, so that you can easily do the whole
"trigger model selector → go to settings view" all with the keyboard.

<img width="400" height="870" alt="Screenshot 2025-11-21 at 10  38@2x"
src="https://github.com/user-attachments/assets/c14f2acf-b793-4bc1-ac53-8a8a53b219e6"
/>

Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Danilo Leal
3f57872b2d debugger_ui: Improve some elements of the UI (#43344)
- In the launch tab of the new session modal, I've switched it to use the
`InputField` component instead, given that it had all we needed
already. This allows removing a good chunk of editor-related code
- Also in the launch tab, added support for keyboard navigation between
all of the elements there (dropdown, inputs, and switch component)
- Added a simple empty-state treatment for the breakpoint column
when there are no breakpoints set


https://github.com/user-attachments/assets/a441aa8a-360b-4e38-839f-786315a8a235

Release Notes:

- debugger: Made the input elements within the launch tab in the new
session modal keyboard navigable.
2025-11-25 15:52:07 +01:00
Danilo Leal
dd6a64017d ui: Remove CheckboxWithLabel and improve Switch and Checkbox (#43343)
This PR finally removes the `CheckboxWithLabel` component, which is no
longer needed given that `Checkbox` can take a `label` method. Then, I took
advantage of the opportunity to add more methods with regards to label
customization (position, size, and color) in both the `Checkbox` and
`Switch` components.

Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
warrenjokinen
de4a97dfef docs: Fix minor typo in docker.md (#43334)
Updated wording (added a missing word) for reporting issues in
Dockerfile extension documentation.

Closes #ISSUE N/A

Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Liffindra Angga Zaaldian
865c46130a Update JavaScript default language server (#43316)
As stated in [TypeScript Language Server
documentation](https://zed.dev/docs/languages/typescript#language-servers),
JavaScript uses `vtsls` as the default language server.

Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Marco Mihai Condrache
399c10ba66 gpui: Fix documentation of window methods (#43315)
Closes #43313 

Release Notes:

- N/A

Signed-off-by: Marco Mihai Condrache <52580954+marcocondrache@users.noreply.github.com>
2025-11-25 15:52:07 +01:00
Julia Ryan
b84bd6dcde Fix wsl path parsing (#43295)
Closes #40286

Release Notes:

- N/A

---------

Co-authored-by: John Tur <john-tur@outlook.com>
2025-11-25 15:52:07 +01:00
Cole Miller
afff1738a8 Disable flaky test_git_status_postprocessing test (#43293)
Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Lukas Wirth
ecdcdc4bee askpass: Fix double command ampersand in powershell script (#43289)
Fixes https://github.com/zed-industries/zed/issues/42618 /
https://github.com/zed-industries/zed/issues/43109

Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Mikayla Maki
1213cbf0a9 Fix a bug where Anthropic completions would not work on nightly (#43287)
Follow up to: https://github.com/zed-industries/zed/pull/43185/files

Release Notes:

- N/A

Co-authored-by: Michael <mbenfield@zed.dev>
2025-11-25 15:52:07 +01:00
Jakub Konka
f2ac1a1810 git: Handle git pre-commit hooks separately (#43285)
We now run git pre-commit hooks before we commit. This ensures we don't
run into timeout issues with the askpass delegate and report an invalid
error to the user.
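
A hedged sketch of the idea using plain git commands (`git hook run` exists since Git 2.36); the real implementation lives in Zed's git crate and may differ:

```rust
use std::path::Path;
use std::process::Command;

fn commit_with_separate_hooks(repo: &Path, message: &str) -> std::io::Result<()> {
    // Run the pre-commit hook as its own step, with no askpass timeout
    // attached to it. `git hook run` executes a hook exactly as git itself
    // would; --ignore-missing makes it a no-op if no hook is configured.
    let hook = Command::new("git")
        .current_dir(repo)
        .args(["hook", "run", "--ignore-missing", "pre-commit"])
        .status()?;
    if !hook.success() {
        return Err(std::io::Error::other("pre-commit hook failed"));
    }
    // The hook already ran, so skip hooks here to avoid running it twice.
    let commit = Command::new("git")
        .current_dir(repo)
        .args(["commit", "--no-verify", "-m", message])
        .status()?;
    if !commit.success() {
        return Err(std::io::Error::other("git commit failed"));
    }
    Ok(())
}
```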

Closes #43157

Release Notes:

- Fixed long-running pre-commit hooks causing commits from Zed to fail.

Co-authored-by: Cole Miller <cole@zed.dev>
2025-11-25 15:52:07 +01:00
Conrad Irwin
6a25efc856 Retry sentry uploads (#43267)
We occasionally see internal server errors, and it's very annoying to
have to re-run the entire step.

Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Be
df4a134b33 Add setting for enabling server-side decorations (#39250)
Previously, this was controllable via the undocumented
ZED_WINDOW_DECORATIONS environment variable (added in #13866). Using an
environment variable for this is inconvenient because it requires users
to set that environment variable somehow before starting Zed, such as in
the .desktop file or persistently in their shell. Controlling this via a
Zed setting is more convenient.

This does not modify the design of the titlebar in any way. It only
moves the existing option from an environment variable to a Zed setting.

Fixes #14165

Client-side decorations (default):
<img width="3840" height="2160" alt="image"
src="https://github.com/user-attachments/assets/525feb92-2f60-47d3-b0ca-47c98770fa8c"
/>


Server-side decorations in KDE Plasma:
<img width="3840" height="2160" alt="image"
src="https://github.com/user-attachments/assets/7379c7c8-e5e3-47ba-a3ea-4191fec9434d"
/>

Release Notes:

- Changed option for Wayland server-side decorations from an environment
variable to a settings.json field

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-11-25 15:52:07 +01:00
Be
c0a2eb784b gpui: Fall back to client-side decorations on Wayland if SSD not supported (#39313)
It is optional for Wayland servers to support server-side decorations.
In particular, GNOME chooses to not implement SSD
(https://gitlab.gnome.org/GNOME/mutter/-/issues/217). So, even if the
application requests SSD, it must draw client-side decorations unless
the application receives a response from the server confirming the
request for SSD.

Before, when the user requested SSD for Zed, but the Wayland server did
not support it, there were no server-side decorations (window titlebar)
drawn, but Zed did not draw the window minimize, maximize, and close
buttons either. This fixes Zed so it always draws the window control
buttons if the Wayland server does not support SSD.

Before on GNOME Wayland with SSD requested:
<img width="3840" height="2160" alt="image"
src="https://github.com/user-attachments/assets/68a6d853-623d-401f-8e7f-21d4dea00543"
/>

After on GNOME Wayland with SSD requested:
<img width="3840" height="2160" alt="image"
src="https://github.com/user-attachments/assets/b258ae8b-fe0e-4ba2-a541-ef6f2c38f788"
/>


Release Notes:

- Fixed window control buttons not showing in GNOME Wayland when SSD
requested
2025-11-25 15:52:07 +01:00
Dave Waggoner
d8ebfa948a terminal: New settings for path hyperlink regexes (#40305)
Closes:
- #12338
- #40202 

1. Adds two new settings which allow customizing the set of regexes used
to identify path hyperlinks in terminal
1. Fixes path hyperlinks for paths containing unicode emoji and
punctuation, for example, `mojo.🔥`
1. Fixes path hyperlinks for Windows verbatim paths, for example,
`\\?\C:\Over\here.rs`.
1. Improves path hyperlink performance, especially for terminals with a
lot of content
1. Replaces existing custom hard-coded default path hyperlink parsing
logic with a set of customizable default regexes

## New settings

(from default.json)

### terminal.path_hyperlink_regexes

Regexes used to identify paths for hyperlink navigation. Supports
optional named capture
groups `path`, `line`, `column`, and `link`. If none of these are
present, the entire match
is the hyperlink target. If `path` is present, it is the hyperlink
target, along with `line`
and `column` if present. `link` may be used to customize what text in
terminal is part of the
hyperlink. If `link` is not present, the text of the entire match is
used. If `line` and
`column` are not present, the default built-in line and column suffix
processing is used
which parses `line:column` and `(line,column)` variants. The default
value handles Python
diagnostics and common path, line, column syntaxes. This can be extended
or replaced to
handle specific scenarios. For example, to enable support for
hyperlinking paths which
contain spaces in rust output,
```
[
  "\\s+(-->|:::|at) (?<link>(?<path>.+?))(:$|$)",
  "\\s+(Compiling|Checking|Documenting) [^(]+\\((?<link>(?<path>.+))\\)"
],
```
could be used. Processing stops at the first regex with a match, even if
no link is produced, which is the case when the cursor is not over the
hyperlinked text. For best
performance it is recommended to order regexes from most common to least
common. For
readability and documentation, each regex may be an array of strings
which are collected
into one multi-line regex string for use in terminal path hyperlink
detection.
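
For reference, the named capture groups described above could be consumed roughly like this on the Rust side (using the `regex` crate with a simplified pattern; this is not Zed's terminal code):

```rust
use regex::Regex;

fn main() {
    // Simplified: path:line:column, e.g. "src/main.rs:42:7".
    let re = Regex::new(r"(?<path>[^\s:]+):(?<line>\d+):(?<column>\d+)").unwrap();
    let text = "error in src/main.rs:42:7";
    if let Some(caps) = re.captures(text) {
        // If `path` were missing, the whole match would be the target.
        let path = caps.name("path").map_or(&caps[0], |m| m.as_str());
        let line: Option<u32> = caps.name("line").and_then(|m| m.as_str().parse().ok());
        let column: Option<u32> = caps.name("column").and_then(|m| m.as_str().parse().ok());
        println!("open {path} at {line:?}:{column:?}");
    }
}
```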

### terminal.path_hyperlink_timeout_ms
Timeout for hover and Cmd-click path hyperlink discovery in
milliseconds. Specifying a
timeout of `0` will disable path hyperlinking in terminal.

## Performance

This PR changes the terminal to only search the hovered line for hyperlinks
and adds a benchmark. Before this fix, hyperlink detection grew
linearly with terminal content; with this fix, it is proportional only to
the hovered line. The gains come from replacing
`visible_regex_match_iter`, which searched all visible lines, with code
that only searches the line hovered on (including if the line is
wrapped).

Local benchmark timings (terminal with 500 lines of content):

||main|this PR|Δ|
|-|-|-:|-|
| cargo_hyperlink_benchmark | 1.4 ms | 13 µs | -99.0% |
| rust_hyperlink_benchmark | 1.2 ms | 11 µs | -99.1% |
| ls_hyperlink_benchmark | 1.3 ms | 7 µs |  -99.5% |

Release Notes:

- terminal: New settings to allow customizing the set of regexes used to
identify path hyperlinks in terminal
- terminal: Fixed terminal path hyperlinks for paths containing unicode
punctuation and emoji, e.g. mojo.🔥
- terminal: Fixed path hyperlinks for Windows verbatim paths, for
example, `\\?\C:\Over\here.rs`
- terminal: Improved terminal hyperlink performance, especially for
terminals with a lot of content visible
2025-11-25 15:52:07 +01:00
Joseph T. Lyons
55e2255062 Make community champions public (#43271)
Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Agus Zubiaga
91cda35cbd zeta2: Improve jump outside UI (#43262)
Still a prototype UI but a bit more noticeable :) 

Release Notes:

- N/A
2025-11-25 15:52:07 +01:00
Bennet Bo Fenner
e3f610c2f9 agent_ui: Remove context strip from inline assistant (#42633)
TODO
- [x] Implement PromptEditor::paste
- [x] Fix creases on unlink
- [x] PromptCompletionProviderDelegate::supports_images
- [ ] Fix highlighting in completion menu

Release Notes:

- N/A

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-11-25 15:52:07 +01:00
Luke Naylor
e6cf656f0f markdown: Add LaTeX syntax highlighting injection (#41110)
Closes [#30264](https://github.com/zed-industries/zed/issues/30264)

Small addition based on
[nvim-treesitter](https://github.com/nvim-treesitter/nvim-treesitter/blob/main/runtime/queries/markdown_inline/injections.scm)

<img width="1122" height="356" alt="Screenshot From 2025-10-24 15-47-58"
src="https://github.com/user-attachments/assets/33e7387d-a299-4921-9db8-622d2657bec1"
/>

This does require the LaTeX extension to be installed.

Release Notes:

- Added LaTeX highlighting for inline and display equations in Markdown when the LaTeX extension is installed

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-11-25 15:52:07 +01:00
Agus Zubiaga
c92023cb72 zeta2: Predict at next diagnostic location (#43257)
When no predictions are available for the current buffer, we will now
attempt to predict at the closest diagnostic from the cursor location
that wasn't included in the last prediction request. This enables a
commonly desired kind of far-away jump without requiring explicit model
support.
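
A rough sketch of that selection step with simplified types (plain offsets instead of anchors; not the actual zeta2 code):

```rust
use std::collections::HashSet;

// Pick the diagnostic closest to the cursor that wasn't already covered by
// the last prediction request; `None` means there is nothing left to try.
fn next_prediction_offset(
    cursor: usize,
    diagnostic_offsets: &[usize],
    covered_by_last_request: &HashSet<usize>,
) -> Option<usize> {
    diagnostic_offsets
        .iter()
        .copied()
        .filter(|offset| !covered_by_last_request.contains(offset))
        .min_by_key(|offset| offset.abs_diff(cursor))
}
```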

Release Notes:

- N/A
2025-11-25 15:52:06 +01:00
Lukas Wirth
ddc18f2588 crashes: Print panic message to logs (#43159)
Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-25 15:52:06 +01:00
Conrad Irwin
ccad41b741 Fix install linux (#43205)
Closes: #42726

Release Notes:

- Fix ./script/install-linux for installing a development version of Zed
on Linux
2025-11-25 15:52:06 +01:00
Smit Barmase
793f6ff5a2 Revert "gpui: Convert macOS clipboard file URLs to paths for paste" (#43254)
Reverts zed-industries/zed#36848

Turns out this broke copying a screenshot from apps like CleanShot X and
then pasting it into Zed. We should land this again after taking a look at
those cases. Pasting screenshots from the native macOS screenshot
functionality works though.

cc @seantimm 

Release Notes:

- Fixed issue where copying a screenshot from apps like CleanShot X into
Agent Panel didn't work as expected.
2025-11-25 15:52:06 +01:00
Conrad Irwin
2a7a259d2b Fallible Settings (#42938)
Also tidies up error notifications so that in the case of syntax errors
we don't see noise about the migration failing as well.

Release Notes:

- Invalid values in settings files will no longer prevent the rest of
the file from being parsed.
2025-11-25 15:52:06 +01:00
Lukas Wirth
be3c16c721 Revert "util: Check whether discovered powershell is actually executable" (#43247)
Reverts zed-industries/zed#43044
Closes https://github.com/zed-industries/zed/issues/43224

This slows down startup on Windows significantly.

Release Notes:

- Fixed slow startup on Windows
2025-11-25 15:52:06 +01:00
Smit Barmase
ae4a422652 agent_ui: Fix sent agent prompt getting lost after authentication (#43245)
Closes #42379

Release Notes:

- Fixed issue where a sent agent message is not restored after
successful authentication.
2025-11-25 15:52:06 +01:00
Danilo Leal
d323039524 Allow onboarding pages to be zoomed in/out (#43244)
We were just missing keybindings for these.

Release Notes:

- onboarding: The onboarding pages can now be zoomed in/out with the
same keybindings you'd use to zoom in/out a regular buffer.
2025-11-25 15:52:06 +01:00
David
8a0f8cc744 mention samply in readme, add profiling profile 2025-11-25 15:49:06 +01:00
David
ab290acd20 spell 2025-11-25 13:46:11 +01:00
David
08b740c1c7 prettier 2025-11-25 13:41:54 +01:00
David
07e8d81639 add performance doc from channel to docs 2025-11-21 18:25:10 +01:00
472 changed files with 10585 additions and 25302 deletions

.github/ISSUE_TEMPLATE/1.bug-report.yml (new file, 70 lines)

@@ -0,0 +1,70 @@
name: Report an issue
description: Report an issue with Zed.
type: Bug
body:
- type: markdown
attributes:
value: |
Feature requests should be opened in [discussions](https://github.com/zed-industries/zed/discussions/new/choose).
Before opening a new issue, please do a [search](https://github.com/zed-industries/zed/issues) of existing issues and :+1: upvote the existing issue instead. This will help us maintain a proper signal-to-noise ratio.
If you need help with your own project, you can ask a question in our [Discord Support Forums](https://discord.com/invite/zedindustries).
- type: textarea
attributes:
label: Reproduction steps
description: A step-by-step description of how to reproduce the issue from a **clean Zed install**. Any code must be sufficient to reproduce (make sure to include context!). Include code as text, not just as a screenshot. **Issues with insufficient detail may be summarily closed**.
placeholder: |
1. Start Zed
2. Click X
3. Y will happen
validations:
required: true
- type: textarea
attributes:
label: Current vs. Expected behavior
description: |
A clear and concise description of what is the current behavior (screenshots, videos), vs. what you expected the behavior to be.
**Skipping this/failure to provide complete information will result in the issue being closed.**
placeholder: "Based on my reproduction steps above, when I click X, I expect this to happen, but instead Y happens."
validations:
required: true
- type: textarea
attributes:
label: If applicable, attach your Zed log file to this issue.
description: |
Open the command palette in Zed, then type `zed: open log` to see the last 1000 lines. Or type `zed: reveal log in file manager` in the command palette to reveal the log file itself.
value: |
<details><summary>Zed.log</summary>
<!-- Paste your log inside the code block. -->
```log
```
</details>
validations:
required: false
- type: textarea
attributes:
label: If applicable, provide details about your model provider
placeholder: |
- Provider: (Anthropic via ZedPro, Anthropic via API key, Copilot Chat, Mistral, OpenAI, etc.)
- Model Name: (Claude Sonnet 4.5, Gemini 3 Pro, GPT-5)
- Mode: (Agent Panel, Inline Assistant, Terminal Assistant or Text Threads)
- Other details (ACPs, MCPs, other settings, etc.):
validations:
required: false
- type: textarea
attributes:
label: Zed version and system specs
description: |
Open the command palette in Zed, then type “zed: copy system specs into clipboard”. **Skipping this/failure to provide complete information will result in the issue being closed**.
placeholder: |
Zed: v0.215.0 (Zed Nightly bfe141ea79aa4984028934067ba75c48d99136ae)
OS: macOS 15.1
Memory: 36 GiB
Architecture: aarch64
validations:
required: true


@@ -1,99 +0,0 @@
name: Report a bug
description: Report a problem with Zed.
type: Bug
labels: "state:needs triage"
body:
- type: markdown
attributes:
value: |
Is this bug already reported? Upvote to get it noticed faster. [Here's the search](https://github.com/zed-industries/zed/issues). Upvote means giving it a :+1: reaction.
Feature request? Please open in [discussions](https://github.com/zed-industries/zed/discussions/new/choose) instead.
Just have a question or need support? Welcome to [Discord Support Forums](https://discord.com/invite/zedindustries).
- type: textarea
attributes:
label: Reproduction steps
description: A step-by-step description of how to reproduce the bug from a **clean Zed install**. The more context you provide, the easier it is to find and fix the problem fast.
placeholder: |
1. Start Zed
2. Click X
validations:
required: true
- type: textarea
attributes:
label: Current vs. Expected behavior
description: |
Current behavior (screenshots, videos, etc. are appreciated), vs. what you expected the behavior to be.
placeholder: |
Current behavior: <screenshot with an arrow> The icon is blue. Expected behavior: The icon should be red because this is what the setting is documented to do.
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed version and system specs
description: |
Open the command palette in Zed, then type “zed: copy system specs into clipboard”.
placeholder: |
Zed: v0.215.0 (Zed Nightly bfe141ea79aa4984028934067ba75c48d99136ae)
OS: macOS 15.1
Memory: 36 GiB
Architecture: aarch64
validations:
required: true
- type: textarea
attributes:
label: Attach Zed log file
description: |
Open the command palette in Zed, then type `zed: open log` to see the last 1000 lines. Or type `zed: reveal log in file manager` in the command palette to reveal the log file itself.
value: |
<details><summary>Zed.log</summary>
<!-- Paste your log inside the code block. -->
```log
```
</details>
validations:
required: false
- type: textarea
attributes:
label: Relevant Zed settings
description: |
Open the command palette in Zed, then type “zed: open settings file” and copy/paste any relevant (e.g., LSP-specific) settings.
value: |
<details><summary>settings.json</summary>
<!-- Paste your settings inside the code block. -->
```json
```
</details>
validations:
required: false
- type: textarea
attributes:
label: (for AI issues) Model provider details
placeholder: |
- Provider: (Anthropic via ZedPro, Anthropic via API key, Copilot Chat, Mistral, OpenAI, etc.)
- Model Name: (Claude Sonnet 4.5, Gemini 3 Pro, GPT-5)
- Mode: (Agent Panel, Inline Assistant, Terminal Assistant or Text Threads)
- Other details (ACPs, MCPs, other settings, etc.):
validations:
required: false
- type: dropdown
attributes:
label: If you are using WSL on Windows, what flavor of Linux are you using?
multiple: false
options:
- Arch Linux
- Ubuntu
- Fedora
- Mint
- Pop!_OS
- NixOS
- Other


@@ -1,23 +1,32 @@
name: Report a crash
description: Zed is crashing or freezing or hanging.
type: Crash
labels: "state:needs triage"
body:
- type: textarea
attributes:
label: Reproduction steps
description: A step-by-step description of how to reproduce the crash from a **clean Zed install**. The more context you provide, the easier it is to find and fix the problem fast.
description: A step-by-step description of how to reproduce the crash from a **clean Zed install**. **Be verbose**. **Issues with insufficient detail may be summarily closed**.
placeholder: |
1. Start Zed
2. Perform an action
3. Zed crashes
validations:
required: true
- type: textarea
attributes:
label: Current vs. Expected behavior
description: |
Go into depth about what actions you're performing in Zed to trigger the crash. If Zed crashes before it loads any windows, make sure to mention that. Again, **be verbose**.
**Skipping this/failure to provide complete information will result in the issue being closed.**
placeholder: "Based on my reproduction steps above, when I perform said action, I expect this to happen, but instead Zed crashes."
validations:
required: true
- type: textarea
attributes:
label: Zed version and system specs
description: |
Open the command palette in Zed, then type “zed: copy system specs into clipboard”.
Open the command palette in Zed, then type “zed: copy system specs into clipboard”. **Skipping this/failure to provide complete information will result in the issue being closed**.
placeholder: |
Zed: v0.215.0 (Zed Nightly bfe141ea79aa4984028934067ba75c48d99136ae)
OS: macOS 15.1
@@ -27,7 +36,7 @@ body:
required: true
- type: textarea
attributes:
label: Attach Zed log file
label: If applicable, attach your Zed log file to this issue
description: |
Open the command palette in Zed, then type `zed: open log` to see the last 1000 lines. Or type `zed: reveal log in file manager` in the command palette to reveal the log file itself.
value: |

.github/ISSUE_TEMPLATE/99_other.yml (new file, 19 lines)

@@ -0,0 +1,19 @@
name: Other [Staff Only]
description: Zed Staff Only
body:
- type: textarea
attributes:
label: Summary
value: |
<!-- Please insert a one line summary of the issue below -->
SUMMARY_SENTENCE_HERE
### Description
IF YOU DO NOT WORK FOR ZED INDUSTRIES DO NOT CREATE ISSUES WITH THIS TEMPLATE.
THEY WILL BE AUTO-CLOSED AND MAY RESULT IN YOU BEING BANNED FROM THE ZED ISSUE TRACKER.
FEATURE REQUESTS / SUPPORT REQUESTS SHOULD BE OPENED AS DISCUSSIONS:
https://github.com/zed-industries/zed/discussions/new/choose
validations:
required: true


@@ -23,13 +23,11 @@ jobs:
AidanV
akbxr
AlvaroParker
amtoaer
artemevsevev
bajrangCoder
bcomnes
Be-ing
blopker
bnjjj
bobbymannino
CharlesChen0823
chbk
@@ -37,10 +35,8 @@ jobs:
davewa
ddoemonn
djsauble
errmayank
fantacell
findrakecil
FloppyDisco
gko
huacnlee
imumesh18
@@ -55,7 +51,6 @@ jobs:
marius851000
mikebronner
ognevny
playdohface
RemcoSmitsDev
romaninsh
Simek
@@ -63,14 +58,12 @@ jobs:
sourcefrog
suxiaoshao
Takk8IS
thedadams
tidely
timvermeulen
valentinegb
versecafe
vitallium
warrenjokinen
WhySoBad
ya7010
Zertsov
with:


@@ -1,147 +0,0 @@
# Generated from xtask::workflows::extension_bump
# Rebuild with `cargo xtask workflows`.
name: extension_bump
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: '1'
CARGO_INCREMENTAL: '0'
ZED_EXTENSION_CLI_SHA: 7cfce605704d41ca247e3f84804bf323f6c6caaf
on:
workflow_call:
inputs:
bump-type:
description: bump-type
type: string
default: patch
force-bump:
description: force-bump
required: true
type: boolean
secrets:
app-id:
description: The app ID used to create the PR
required: true
app-secret:
description: The app secret for the corresponding app ID
required: true
jobs:
check_bump_needed:
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions')
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
fetch-depth: 0
- id: compare-versions-check
name: extension_bump::compare_versions
run: |
CURRENT_VERSION="$(sed -n 's/version = \"\(.*\)\"/\1/p' < extension.toml)"
PR_PARENT_SHA="${{ github.event.pull_request.head.sha }}"
if [[ -n "$PR_PARENT_SHA" ]]; then
git checkout "$PR_PARENT_SHA"
elif BRANCH_PARENT_SHA="$(git merge-base origin/main origin/zed-zippy-autobump)"; then
git checkout "$BRANCH_PARENT_SHA"
else
git checkout "$(git log -1 --format=%H)"~1
fi
PARENT_COMMIT_VERSION="$(sed -n 's/version = \"\(.*\)\"/\1/p' < extension.toml)"
[[ "$CURRENT_VERSION" == "$PARENT_COMMIT_VERSION" ]] && \
echo "needs_bump=true" >> "$GITHUB_OUTPUT" || \
echo "needs_bump=false" >> "$GITHUB_OUTPUT"
echo "current_version=${CURRENT_VERSION}" >> "$GITHUB_OUTPUT"
shell: bash -euxo pipefail {0}
outputs:
needs_bump: ${{ steps.compare-versions-check.outputs.needs_bump }}
current_version: ${{ steps.compare-versions-check.outputs.current_version }}
timeout-minutes: 1
bump_extension_version:
needs:
- check_bump_needed
if: |-
(github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions') &&
(inputs.force-bump == 'true' || needs.check_bump_needed.outputs.needs_bump == 'true')
runs-on: namespace-profile-8x16-ubuntu-2204
steps:
- id: generate-token
name: extension_bump::generate_token
uses: actions/create-github-app-token@v2
with:
app-id: ${{ secrets.app-id }}
private-key: ${{ secrets.app-secret }}
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: extension_bump::install_bump_2_version
run: pip install bump2version
shell: bash -euxo pipefail {0}
- id: bump-version
name: extension_bump::bump_version
run: |
OLD_VERSION="${{ needs.check_bump_needed.outputs.current_version }}"
BUMP_FILES=("extension.toml")
if [[ -f "Cargo.toml" ]]; then
BUMP_FILES+=("Cargo.toml")
fi
bump2version --verbose --current-version "$OLD_VERSION" --no-configured-files ${{ inputs.bump-type }} "${BUMP_FILES[@]}"
if [[ -f "Cargo.toml" ]]; then
cargo update --workspace
fi
NEW_VERSION="$(sed -n 's/version = \"\(.*\)\"/\1/p' < extension.toml)"
echo "new_version=${NEW_VERSION}" >> "$GITHUB_OUTPUT"
shell: bash -euxo pipefail {0}
- name: extension_bump::create_pull_request
uses: peter-evans/create-pull-request@v7
with:
title: Bump version to ${{ steps.bump-version.outputs.new_version }}
body: This PR bumps the version of this extension to v${{ steps.bump-version.outputs.new_version }}
commit-message: Bump version to v${{ steps.bump-version.outputs.new_version }}
branch: zed-zippy-autobump
committer: zed-zippy[bot] <234243425+zed-zippy[bot]@users.noreply.github.com>
base: main
delete-branch: true
token: ${{ steps.generate-token.outputs.token }}
sign-commits: true
timeout-minutes: 1
create_version_label:
needs:
- check_bump_needed
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions') && github.event_name == 'push' && github.ref == 'refs/heads/main' && needs.check_bump_needed.outputs.needs_bump == 'false'
runs-on: namespace-profile-8x16-ubuntu-2204
steps:
- id: generate-token
name: extension_bump::generate_token
uses: actions/create-github-app-token@v2
with:
app-id: ${{ secrets.app-id }}
private-key: ${{ secrets.app-secret }}
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: extension_bump::create_version_tag
uses: actions/github-script@v7
with:
script: |-
github.rest.git.createRef({
owner: context.repo.owner,
repo: context.repo.repo,
ref: 'refs/tags/v${{ needs.check_bump_needed.outputs.current_version }}',
sha: context.sha
})
github-token: ${{ steps.generate-token.outputs.token }}
timeout-minutes: 1
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true

View File

@@ -1,43 +0,0 @@
# Generated from xtask::workflows::extension_release
# Rebuild with `cargo xtask workflows`.
name: extension_release
on:
workflow_call:
secrets:
app-id:
description: The app ID used to create the PR
required: true
app-secret:
description: The app secret for the corresponding app ID
required: true
jobs:
create_release:
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions')
runs-on: namespace-profile-8x16-ubuntu-2204
steps:
- id: generate-token
name: extension_bump::generate_token
uses: actions/create-github-app-token@v2
with:
app-id: ${{ secrets.app-id }}
private-key: ${{ secrets.app-secret }}
owner: zed-industries
repositories: extensions
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- id: get-extension-id
name: extension_release::get_extension_id
run: |
EXTENSION_ID="$(sed -n 's/id = \"\(.*\)\"/\1/p' < extension.toml)"
echo "extension_id=${EXTENSION_ID}" >> "$GITHUB_OUTPUT"
shell: bash -euxo pipefail {0}
- name: extension_release::release_action
uses: huacnlee/zed-extension-action@v2
with:
extension-name: ${{ steps.get-extension-id.outputs.extension_id }}
push-to: zed-industries/extensions
env:
COMMITTER_TOKEN: ${{ steps.generate-token.outputs.token }}

View File

@@ -7,7 +7,12 @@ env:
CARGO_INCREMENTAL: '0'
ZED_EXTENSION_CLI_SHA: 7cfce605704d41ca247e3f84804bf323f6c6caaf
on:
workflow_call: {}
workflow_call:
inputs:
run_tests:
description: Whether the workflow should run rust tests
required: true
type: boolean
jobs:
orchestrate:
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions')
@@ -68,12 +73,12 @@ jobs:
run: cargo clippy --release --all-targets --all-features -- --deny warnings
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
if: inputs.run_tests
uses: taiki-e/install-action@nextest
- name: steps::cargo_nextest
if: inputs.run_tests
run: cargo nextest run --workspace --no-fail-fast
shell: bash -euxo pipefail {0}
env:
NEXTEST_NO_TESTS: warn
timeout-minutes: 3
check_extension:
needs:
@@ -103,7 +108,7 @@ jobs:
mkdir -p /tmp/ext-output
./zed-extension --source-dir . --scratch-dir /tmp/ext-scratch --output-dir /tmp/ext-output
shell: bash -euxo pipefail {0}
timeout-minutes: 2
timeout-minutes: 1
tests_pass:
needs:
- orchestrate

View File

@@ -13,7 +13,7 @@ jobs:
bundle_linux_aarch64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: namespace-profile-8x32-ubuntu-2004-arm-m4
env:
CARGO_INCREMENTAL: 0
@@ -56,7 +56,7 @@ jobs:
bundle_linux_x86_64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: namespace-profile-32x64-ubuntu-2004
env:
CARGO_INCREMENTAL: 0
@@ -99,7 +99,7 @@ jobs:
bundle_mac_aarch64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: self-mini-macos
env:
CARGO_INCREMENTAL: 0
@@ -145,7 +145,7 @@ jobs:
bundle_mac_x86_64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: self-mini-macos
env:
CARGO_INCREMENTAL: 0
@@ -191,7 +191,7 @@ jobs:
bundle_windows_aarch64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: self-32vcpu-windows-2022
env:
CARGO_INCREMENTAL: 0
@@ -229,7 +229,7 @@ jobs:
bundle_windows_x86_64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: self-32vcpu-windows-2022
env:
CARGO_INCREMENTAL: 0

View File

@@ -13,14 +13,6 @@ on:
jobs:
cron_unit_evals:
runs-on: namespace-profile-16x32-ubuntu-2204
strategy:
matrix:
model:
- anthropic/claude-sonnet-4-5-latest
- anthropic/claude-opus-4-5-latest
- google/gemini-3-pro
- openai/gpt-5
fail-fast: false
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -57,7 +49,6 @@ jobs:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
GOOGLE_CLOUD_PROJECT: ${{ secrets.GOOGLE_CLOUD_PROJECT }}
ZED_AGENT_MODEL: ${{ matrix.model }}
- name: steps::cleanup_cargo_config
if: always()
run: |

View File

@@ -84,7 +84,7 @@ jobs:
run: ./script/check-keymaps
shell: bash -euxo pipefail {0}
- name: run_tests::check_style::check_for_typos
uses: crate-ci/typos@2d0ce569feab1f8752f1dde43cc2f2aa53236e06
uses: crate-ci/typos@80c8a4945eec0f6d464eaf9e65ed98ef085283d1
with:
config: ./typos.toml
- name: steps::cargo_fmt
@@ -520,7 +520,6 @@ jobs:
uses: bufbuild/buf-setup-action@v1
with:
version: v1.29.0
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: run_tests::check_postgres_and_protobuf_migrations::bufbuild_breaking_action
uses: bufbuild/buf-breaking-action@v1
with:

224
Cargo.lock generated
View File

@@ -159,7 +159,6 @@ dependencies = [
"derive_more 0.99.20",
"editor",
"env_logger 0.11.8",
"eval_utils",
"fs",
"futures 0.3.31",
"git",
@@ -216,9 +215,9 @@ dependencies = [
[[package]]
name = "agent-client-protocol"
version = "0.8.0"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3e639d6b544ad39f5b4e05802db5eb04e1518284eb05fda1839931003e0244c8"
checksum = "525705e39c11cd73f7bc784e3681a9386aa30c8d0630808d3dc2237eb4f9cb1b"
dependencies = [
"agent-client-protocol-schema",
"anyhow",
@@ -227,15 +226,16 @@ dependencies = [
"derive_more 2.0.1",
"futures 0.3.31",
"log",
"parking_lot",
"serde",
"serde_json",
]
[[package]]
name = "agent-client-protocol-schema"
version = "0.9.1"
version = "0.6.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f182f5e14bef8232b239719bd99166bb11e986c08fc211f28e392f880d3093ba"
checksum = "ecf16c18fea41282d6bbadd1549a06be6836bddb1893f44a6235f340fa24e2af"
dependencies = [
"anyhow",
"derive_more 2.0.1",
@@ -328,7 +328,6 @@ dependencies = [
"buffer_diff",
"chrono",
"client",
"clock",
"cloud_llm_client",
"collections",
"command_palette_hooks",
@@ -336,7 +335,6 @@ dependencies = [
"context_server",
"db",
"editor",
"eval_utils",
"extension",
"extension_host",
"feature_flags",
@@ -345,7 +343,6 @@ dependencies = [
"futures 0.3.31",
"fuzzy",
"gpui",
"gpui_tokio",
"html_to_markdown",
"http_client",
"image",
@@ -373,7 +370,6 @@ dependencies = [
"proto",
"rand 0.9.2",
"release_channel",
"reqwest_client",
"rope",
"rules_library",
"schemars",
@@ -1384,7 +1380,6 @@ dependencies = [
"http_client",
"markdown_preview",
"release_channel",
"semver",
"serde",
"serde_json",
"smol",
@@ -2130,15 +2125,30 @@ dependencies = [
"syn 2.0.106",
]
[[package]]
name = "bit-set"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0700ddab506f33b20a03b13996eccd309a48e5ff77d0d95926aa0210fb4e95f1"
dependencies = [
"bit-vec 0.6.3",
]
[[package]]
name = "bit-set"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "08807e080ed7f9d5433fa9b275196cfc35414f66a0c79d864dc51a0d825231a3"
dependencies = [
"bit-vec",
"bit-vec 0.8.0",
]
[[package]]
name = "bit-vec"
version = "0.6.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "349f9b6a179ed607305526ca489b34ad0a41aed5f7980fa90eb03160b69598fb"
[[package]]
name = "bit-vec"
version = "0.8.0"
@@ -2317,9 +2327,9 @@ dependencies = [
[[package]]
name = "borrow-or-share"
version = "0.2.4"
version = "0.2.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dc0b364ead1874514c8c2855ab558056ebfeb775653e7ae45ff72f28f8f3166c"
checksum = "3eeab4423108c5d7c744f4d234de88d18d636100093ae04caf4825134b9c3a32"
[[package]]
name = "borsh"
@@ -2412,7 +2422,6 @@ dependencies = [
"rand 0.9.2",
"rope",
"serde_json",
"settings",
"sum_tree",
"text",
"unindent",
@@ -4173,7 +4182,6 @@ dependencies = [
"serde_json",
"smol",
"system_specs",
"windows 0.61.3",
"zstd 0.11.2+zstd.1.5.2",
]
@@ -5290,7 +5298,6 @@ dependencies = [
"indoc",
"language",
"lsp",
"menu",
"paths",
"project",
"regex",
@@ -5300,8 +5307,6 @@ dependencies = [
"telemetry",
"theme",
"ui",
"ui_input",
"util",
"workspace",
"zed_actions",
"zeta",
@@ -5360,7 +5365,6 @@ dependencies = [
"db",
"edit_prediction",
"emojis",
"feature_flags",
"file_icons",
"fs",
"futures 0.3.31",
@@ -5765,15 +5769,6 @@ dependencies = [
"watch",
]
[[package]]
name = "eval_utils"
version = "0.1.0"
dependencies = [
"gpui",
"serde",
"smol",
]
[[package]]
name = "event-listener"
version = "2.5.3"
@@ -5993,11 +5988,22 @@ checksum = "7360491ce676a36bf9bb3c56c1aa791658183a54d2744120f27285738d90465a"
[[package]]
name = "fancy-regex"
version = "0.16.2"
version = "0.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "998b056554fbe42e03ae0e152895cd1a7e1002aec800fdc6635d20270260c46f"
checksum = "531e46835a22af56d1e3b66f04844bed63158bc094a628bec1d321d9b4c44bf2"
dependencies = [
"bit-set",
"bit-set 0.5.3",
"regex-automata",
"regex-syntax",
]
[[package]]
name = "fancy-regex"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6e24cb5a94bcae1e5408b0effca5cd7172ea3c5755049c5f3af4cd283a165298"
dependencies = [
"bit-set 0.8.0",
"regex-automata",
"regex-syntax",
]
@@ -6219,9 +6225,9 @@ checksum = "8bf7cc16383c4b8d58b9905a8509f02926ce3058053c056376248d958c9df1e8"
[[package]]
name = "fluent-uri"
version = "0.4.1"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc74ac4d8359ae70623506d512209619e5cf8f347124910440dbc221714b328e"
checksum = "1918b65d96df47d3591bed19c5cca17e3fa5d0707318e4b5ef2eae01764df7e5"
dependencies = [
"borrow-or-share",
"ref-cast",
@@ -6400,7 +6406,6 @@ dependencies = [
"git",
"gpui",
"ignore",
"is_executable",
"libc",
"log",
"notify 8.2.0",
@@ -6960,7 +6965,7 @@ dependencies = [
[[package]]
name = "gh-workflow"
version = "0.8.0"
source = "git+https://github.com/zed-industries/gh-workflow?rev=09acfdf2bd5c1d6254abefd609c808ff73547b2c#09acfdf2bd5c1d6254abefd609c808ff73547b2c"
source = "git+https://github.com/zed-industries/gh-workflow?rev=3eaa84abca0778eb54272f45a312cb24f9a0b435#3eaa84abca0778eb54272f45a312cb24f9a0b435"
dependencies = [
"async-trait",
"derive_more 2.0.1",
@@ -6977,7 +6982,7 @@ dependencies = [
[[package]]
name = "gh-workflow-macros"
version = "0.8.0"
source = "git+https://github.com/zed-industries/gh-workflow?rev=09acfdf2bd5c1d6254abefd609c808ff73547b2c#09acfdf2bd5c1d6254abefd609c808ff73547b2c"
source = "git+https://github.com/zed-industries/gh-workflow?rev=3eaa84abca0778eb54272f45a312cb24f9a0b435#3eaa84abca0778eb54272f45a312cb24f9a0b435"
dependencies = [
"heck 0.5.0",
"quote",
@@ -7100,7 +7105,6 @@ dependencies = [
"futures 0.3.31",
"fuzzy",
"git",
"git_hosting_providers",
"gpui",
"indoc",
"itertools 0.14.0",
@@ -7122,7 +7126,6 @@ dependencies = [
"serde",
"serde_json",
"settings",
"smol",
"strum 0.27.2",
"telemetry",
"theme",
@@ -7517,17 +7520,6 @@ dependencies = [
"serde",
]
[[package]]
name = "hashbrown"
version = "0.16.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "841d1cc9bed7f9236f321df977030373f4a4163ae1a7dbfe1a51a2c1a51d9100"
dependencies = [
"allocator-api2",
"equivalent",
"foldhash 0.2.0",
]
[[package]]
name = "hashlink"
version = "0.8.4"
@@ -8444,15 +8436,6 @@ dependencies = [
"once_cell",
]
[[package]]
name = "is_executable"
version = "1.0.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baabb8b4867b26294d818bf3f651a454b6901431711abb96e296245888d6e8c4"
dependencies = [
"windows-sys 0.60.2",
]
[[package]]
name = "is_terminal_polyfill"
version = "1.70.1"
@@ -8617,21 +8600,21 @@ dependencies = [
[[package]]
name = "jsonschema"
version = "0.37.4"
version = "0.30.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "73c9ffb2b5c56d58030e1b532d8e8389da94590515f118cf35b5cb68e4764a7e"
checksum = "f1b46a0365a611fbf1d2143104dcf910aada96fafd295bab16c60b802bf6fa1d"
dependencies = [
"ahash 0.8.12",
"base64 0.22.1",
"bytecount",
"data-encoding",
"email_address",
"fancy-regex",
"fancy-regex 0.14.0",
"fraction",
"getrandom 0.3.4",
"idna",
"itoa",
"num-cmp",
"num-traits",
"once_cell",
"percent-encoding",
"referencing",
"regex",
@@ -8639,7 +8622,6 @@ dependencies = [
"reqwest 0.12.24",
"serde",
"serde_json",
"unicode-general-category",
"uuid-simd",
]
@@ -9020,7 +9002,6 @@ dependencies = [
"chrono",
"collections",
"futures 0.3.31",
"globset",
"gpui",
"http_client",
"itertools 0.14.0",
@@ -9046,9 +9027,7 @@ dependencies = [
"serde_json",
"serde_json_lenient",
"settings",
"smallvec",
"smol",
"snippet",
"task",
"terminal",
"text",
@@ -10188,7 +10167,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2b977c445f26e49757f9aca3631c3b8b836942cb278d69a92e7b80d3b24da632"
dependencies = [
"arrayvec",
"bit-set",
"bit-set 0.8.0",
"bitflags 2.9.4",
"cfg_aliases 0.2.1",
"codespan-reporting 0.12.0",
@@ -11551,7 +11530,7 @@ dependencies = [
[[package]]
name = "pet"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"clap",
"env_logger 0.10.2",
@@ -11576,7 +11555,6 @@ dependencies = [
"pet-python-utils",
"pet-reporter",
"pet-telemetry",
"pet-uv",
"pet-venv",
"pet-virtualenv",
"pet-virtualenvwrapper",
@@ -11589,7 +11567,7 @@ dependencies = [
[[package]]
name = "pet-conda"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"env_logger 0.10.2",
"lazy_static",
@@ -11608,7 +11586,7 @@ dependencies = [
[[package]]
name = "pet-core"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"clap",
"lazy_static",
@@ -11623,7 +11601,7 @@ dependencies = [
[[package]]
name = "pet-env-var-path"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"lazy_static",
"log",
@@ -11639,7 +11617,7 @@ dependencies = [
[[package]]
name = "pet-fs"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -11648,7 +11626,7 @@ dependencies = [
[[package]]
name = "pet-global-virtualenvs"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -11661,7 +11639,7 @@ dependencies = [
[[package]]
name = "pet-homebrew"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"lazy_static",
"log",
@@ -11679,7 +11657,7 @@ dependencies = [
[[package]]
name = "pet-jsonrpc"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"env_logger 0.10.2",
"log",
@@ -11692,7 +11670,7 @@ dependencies = [
[[package]]
name = "pet-linux-global-python"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -11705,7 +11683,7 @@ dependencies = [
[[package]]
name = "pet-mac-commandlinetools"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -11718,7 +11696,7 @@ dependencies = [
[[package]]
name = "pet-mac-python-org"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -11731,7 +11709,7 @@ dependencies = [
[[package]]
name = "pet-mac-xcode"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -11744,7 +11722,7 @@ dependencies = [
[[package]]
name = "pet-pipenv"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -11757,7 +11735,7 @@ dependencies = [
[[package]]
name = "pet-pixi"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -11769,7 +11747,7 @@ dependencies = [
[[package]]
name = "pet-poetry"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"base64 0.22.1",
"lazy_static",
@@ -11790,7 +11768,7 @@ dependencies = [
[[package]]
name = "pet-pyenv"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"lazy_static",
"log",
@@ -11808,7 +11786,7 @@ dependencies = [
[[package]]
name = "pet-python-utils"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"env_logger 0.10.2",
"lazy_static",
@@ -11825,7 +11803,7 @@ dependencies = [
[[package]]
name = "pet-reporter"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"env_logger 0.10.2",
"log",
@@ -11839,7 +11817,7 @@ dependencies = [
[[package]]
name = "pet-telemetry"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"env_logger 0.10.2",
"lazy_static",
@@ -11851,22 +11829,10 @@ dependencies = [
"regex",
]
[[package]]
name = "pet-uv"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
dependencies = [
"log",
"pet-core",
"pet-python-utils",
"serde",
"toml 0.9.8",
]
[[package]]
name = "pet-venv"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -11878,7 +11844,7 @@ dependencies = [
[[package]]
name = "pet-virtualenv"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -11890,7 +11856,7 @@ dependencies = [
[[package]]
name = "pet-virtualenvwrapper"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -11903,7 +11869,7 @@ dependencies = [
[[package]]
name = "pet-windows-registry"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"lazy_static",
"log",
@@ -11921,7 +11887,7 @@ dependencies = [
[[package]]
name = "pet-windows-store"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da#1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=e97b9508befa0062929da65a01054d25c4be861c#e97b9508befa0062929da65a01054d25c4be861c"
dependencies = [
"lazy_static",
"log",
@@ -13044,7 +13010,7 @@ dependencies = [
"dap",
"dap_adapters",
"extension",
"fancy-regex",
"fancy-regex 0.14.0",
"fs",
"futures 0.3.31",
"fuzzy",
@@ -13076,7 +13042,6 @@ dependencies = [
"semver",
"serde",
"serde_json",
"session",
"settings",
"sha2",
"shellexpand 2.1.2",
@@ -13916,14 +13881,13 @@ dependencies = [
[[package]]
name = "referencing"
version = "0.37.4"
version = "0.30.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4283168a506f0dcbdce31c9f9cce3129c924da4c6bca46e46707fcb746d2d70c"
checksum = "c8eff4fa778b5c2a57e85c5f2fe3a709c52f0e60d23146e2151cbef5893f420e"
dependencies = [
"ahash 0.8.12",
"fluent-uri",
"getrandom 0.3.4",
"hashbrown 0.16.1",
"once_cell",
"parking_lot",
"percent-encoding",
"serde_json",
@@ -15124,7 +15088,6 @@ dependencies = [
"editor",
"futures 0.3.31",
"gpui",
"itertools 0.14.0",
"language",
"lsp",
"menu",
@@ -15384,7 +15347,6 @@ dependencies = [
name = "session"
version = "0.1.0"
dependencies = [
"collections",
"db",
"gpui",
"serde_json",
@@ -17118,7 +17080,7 @@ dependencies = [
"alacritty_terminal",
"anyhow",
"collections",
"fancy-regex",
"fancy-regex 0.14.0",
"futures 0.3.31",
"gpui",
"itertools 0.14.0",
@@ -17351,13 +17313,13 @@ dependencies = [
[[package]]
name = "tiktoken-rs"
version = "0.9.1"
source = "git+https://github.com/zed-industries/tiktoken-rs?rev=2570c4387a8505fb8f1d3f3557454b474f1e8271#2570c4387a8505fb8f1d3f3557454b474f1e8271"
version = "0.8.0"
source = "git+https://github.com/zed-industries/tiktoken-rs?rev=30c32a4522751699adeda0d5840c71c3b75ae73d#30c32a4522751699adeda0d5840c71c3b75ae73d"
dependencies = [
"anyhow",
"base64 0.22.1",
"bstr",
"fancy-regex",
"fancy-regex 0.13.0",
"lazy_static",
"regex",
"rustc-hash 1.1.0",
@@ -17511,7 +17473,6 @@ dependencies = [
"rpc",
"schemars",
"serde",
"session",
"settings",
"smallvec",
"story",
@@ -18010,9 +17971,9 @@ dependencies = [
[[package]]
name = "tree-sitter-bash"
version = "0.25.1"
version = "0.25.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9e5ec769279cc91b561d3df0d8a5deb26b0ad40d183127f409494d6d8fc53062"
checksum = "871b0606e667e98a1237ebdc1b0d7056e0aebfdc3141d12b399865d4cb6ed8a6"
dependencies = [
"cc",
"tree-sitter-language",
@@ -18490,12 +18451,6 @@ version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ce61d488bcdc9bc8b5d1772c404828b17fc481c0a582b5581e95fb233aef503e"
[[package]]
name = "unicode-general-category"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0b993bddc193ae5bd0d623b49ec06ac3e9312875fdae725a975c51db1cc1677f"
[[package]]
name = "unicode-ident"
version = "1.0.19"
@@ -18730,6 +18685,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "23b082222b4f6619906941c17eb2297fff4c2fb96cb60164170522942a200bd8"
dependencies = [
"outref",
"uuid",
"vsimd",
]
@@ -21029,7 +20985,6 @@ dependencies = [
"indexmap",
"indoc",
"serde",
"serde_json",
"toml 0.8.23",
"toml_edit 0.22.27",
]
@@ -21214,7 +21169,7 @@ dependencies = [
[[package]]
name = "zed"
version = "0.217.0"
version = "0.215.0"
dependencies = [
"acp_tools",
"activity_indicator",
@@ -21504,8 +21459,6 @@ dependencies = [
[[package]]
name = "zed_extension_api"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0729d50b4ca0a7e28e590bbe32e3ca0194d97ef654961451a424c661a366fca0"
dependencies = [
"serde",
"serde_json",
@@ -21514,7 +21467,9 @@ dependencies = [
[[package]]
name = "zed_extension_api"
version = "0.8.0"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0729d50b4ca0a7e28e590bbe32e3ca0194d97ef654961451a424c661a366fca0"
dependencies = [
"serde",
"serde_json",
@@ -21532,12 +21487,12 @@ dependencies = [
name = "zed_html"
version = "0.2.3"
dependencies = [
"zed_extension_api 0.7.0",
"zed_extension_api 0.7.0 (registry+https://github.com/rust-lang/crates.io-index)",
]
[[package]]
name = "zed_proto"
version = "0.2.3"
version = "0.2.2"
dependencies = [
"zed_extension_api 0.1.0",
]
@@ -21546,7 +21501,7 @@ dependencies = [
name = "zed_test_extension"
version = "0.1.0"
dependencies = [
"zed_extension_api 0.8.0",
"zed_extension_api 0.7.0",
]
[[package]]
@@ -21690,7 +21645,6 @@ dependencies = [
"collections",
"command_palette_hooks",
"copilot",
"credentials_provider",
"ctor",
"db",
"edit_prediction",

View File

@@ -59,7 +59,6 @@ members = [
"crates/zeta2_tools",
"crates/editor",
"crates/eval",
"crates/eval_utils",
"crates/explorer_command_injector",
"crates/extension",
"crates/extension_api",
@@ -289,7 +288,6 @@ deepseek = { path = "crates/deepseek" }
derive_refineable = { path = "crates/refineable/derive_refineable" }
diagnostics = { path = "crates/diagnostics" }
editor = { path = "crates/editor" }
eval_utils = { path = "crates/eval_utils" }
extension = { path = "crates/extension" }
extension_host = { path = "crates/extension_host" }
extensions_ui = { path = "crates/extensions_ui" }
@@ -441,7 +439,7 @@ zlog_settings = { path = "crates/zlog_settings" }
# External crates
#
agent-client-protocol = { version = "=0.8.0", features = ["unstable"] }
agent-client-protocol = { version = "0.7.0", features = ["unstable"] }
aho-corasick = "1.1"
alacritty_terminal = "0.25.1-rc1"
any_vec = "0.14"
@@ -505,12 +503,12 @@ ec4rs = "1.1"
emojis = "0.6.1"
env_logger = "0.11"
exec = "0.3.1"
fancy-regex = "0.16.0"
fancy-regex = "0.14.0"
fork = "0.4.0"
futures = "0.3"
futures-batch = "0.6.1"
futures-lite = "1.13"
gh-workflow = { git = "https://github.com/zed-industries/gh-workflow", rev = "09acfdf2bd5c1d6254abefd609c808ff73547b2c" }
gh-workflow = { git = "https://github.com/zed-industries/gh-workflow", rev = "3eaa84abca0778eb54272f45a312cb24f9a0b435" }
git2 = { version = "0.20.1", default-features = false }
globset = "0.4"
handlebars = "4.3"
@@ -531,7 +529,7 @@ indoc = "2"
inventory = "0.3.19"
itertools = "0.14.0"
json_dotpath = "1.1"
jsonschema = "0.37.0"
jsonschema = "0.30.0"
jsonwebtoken = "9.3"
jupyter-protocol = "0.10.0"
jupyter-websocket-client = "0.15.0"
@@ -585,14 +583,14 @@ partial-json-fixer = "0.5.3"
parse_int = "0.9"
pciid-parser = "0.8.0"
pathdiff = "0.2"
pet = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-conda = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-core = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-fs = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-pixi = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-poetry = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-reporter = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-virtualenv = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "e97b9508befa0062929da65a01054d25c4be861c" }
pet-conda = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "e97b9508befa0062929da65a01054d25c4be861c" }
pet-core = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "e97b9508befa0062929da65a01054d25c4be861c" }
pet-fs = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "e97b9508befa0062929da65a01054d25c4be861c" }
pet-pixi = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "e97b9508befa0062929da65a01054d25c4be861c" }
pet-poetry = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "e97b9508befa0062929da65a01054d25c4be861c" }
pet-reporter = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "e97b9508befa0062929da65a01054d25c4be861c" }
pet-virtualenv = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "e97b9508befa0062929da65a01054d25c4be861c" }
portable-pty = "0.9.0"
postage = { version = "0.5", features = ["futures-traits"] }
pretty_assertions = { version = "1.3.0", features = ["unstable"] }
@@ -641,7 +639,6 @@ serde_urlencoded = "0.7"
sha2 = "0.10"
shellexpand = "2.1.0"
shlex = "1.3.0"
similar = "2.6"
simplelog = "0.12.2"
slotmap = "1.0.6"
smallvec = { version = "1.6", features = ["union"] }
@@ -658,7 +655,7 @@ sysinfo = "0.37.0"
take-until = "0.2.0"
tempfile = "3.20.0"
thiserror = "2.0.12"
tiktoken-rs = { git = "https://github.com/zed-industries/tiktoken-rs", rev = "2570c4387a8505fb8f1d3f3557454b474f1e8271" }
tiktoken-rs = { git = "https://github.com/zed-industries/tiktoken-rs", rev = "30c32a4522751699adeda0d5840c71c3b75ae73d" }
time = { version = "0.3", features = [
"macros",
"parsing",
@@ -674,7 +671,7 @@ toml = "0.8"
toml_edit = { version = "0.22", default-features = false, features = ["display", "parse", "serde"] }
tower-http = "0.4.4"
tree-sitter = { version = "0.25.10", features = ["wasm"] }
tree-sitter-bash = "0.25.1"
tree-sitter-bash = "0.25.0"
tree-sitter-c = "0.23"
tree-sitter-cpp = { git = "https://github.com/tree-sitter/tree-sitter-cpp", rev = "5cb9b693cfd7bfacab1d9ff4acac1a4150700609" }
tree-sitter-css = "0.23"
@@ -869,6 +866,10 @@ debug = "full"
lto = false
codegen-units = 16
[profile.profiling]
inherits = "release"
debug = "full"
[workspace.lints.rust]
unexpected_cfgs = { level = "allow" }

View File

@@ -43,9 +43,8 @@ design
= @danilo-leal
docs
= @miguelraz
= @probably-neb
= @yeskunall
= @miguelraz
extension
= @kubkon
@@ -53,10 +52,6 @@ extension
git
= @cole-miller
= @danilo-leal
= @dvdsk
= @kubkon
= @Anthony-Eid
= @cameron1024
gpui
= @Anthony-Eid
@@ -104,9 +99,6 @@ settings_ui
= @danilo-leal
= @probably-neb
sum_tree
= @Veykril
support
= @miguelraz
@@ -118,9 +110,6 @@ terminal
= @kubkon
= @Veykril
text
= @Veykril
vim
= @ConradIrwin
= @dinocosta
@@ -130,4 +119,3 @@ vim
windows
= @localcc
= @reflectronic
= @Veykril

View File

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="none"><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M14 11.333A6 6 0 0 0 4 6.867l-1 .9"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.333" d="M2 4.667v4h4"/><path fill="#000" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M8 12a.667.667 0 1 0 0-1.333A.667.667 0 0 0 8 12Z"/></svg>

After: 467 B

View File

@@ -1,5 +1 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M2 13H5" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M11 13H14" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M11.5 8.5L8 12M8 12L4.5 8.5M8 12L8 3" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="none"><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M3.333 10 8 14.667 12.667 10M8 5.333v9.334"/><path fill="#000" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M8 2.667a.667.667 0 1 0 0-1.334.667.667 0 0 0 0 1.334Z"/></svg>

Before: 443 B → After: 374 B

View File

@@ -1,5 +1 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M4.5 6.5L8 3M8 3L11.5 6.5M8 3V12" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M2 13H5" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M11 13H14" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="none"><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M3.333 6 8 1.333 12.667 6M8 10.667V1.333"/><path fill="#000" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M8 13.333a.667.667 0 1 1 0 1.334.667.667 0 0 1 0-1.334Z"/></svg>

Before: 439 B → After: 373 B

View File

@@ -1,5 +1 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M2 11.333C2.00118 10.1752 2.33729 9.04258 2.96777 8.07159C3.59826 7.10059 4.49621 6.33274 5.55331 5.86064C6.61041 5.38853 7.78152 5.23235 8.9254 5.41091C10.0693 5.58947 11.1371 6.09516 12 6.86698L13 7.76698" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M14 4.66699V8.66699H10" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M7 13H10" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="none"><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M2 11.333a6 6 0 0 1 10-4.466l1 .9"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.333" d="M14 4.667v4h-4"/><path fill="#000" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M8 12a.667.667 0 1 1 0-1.333A.667.667 0 0 1 8 12Z"/></svg>

Before: 627 B → After: 468 B

View File

@@ -1,10 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_1_2)">
<path fill-rule="evenodd" clip-rule="evenodd" d="M4.58747 12.9359C4.35741 12.778 4.17558 12.625 4.17558 12.625L10.092 2.37749C10.092 2.37749 10.3355 2.46782 10.5367 2.56426C10.7903 2.6858 11.0003 2.80429 11.0003 2.80429C13.8681 4.46005 14.8523 8.13267 13.1965 11.0005C11.5407 13.8684 7.8681 14.8525 5.00023 13.1967C5.00023 13.1967 4.79936 13.0812 4.58747 12.9359ZM10.5003 3.67032L5.50023 12.3307C7.89013 13.7105 10.9506 12.8904 12.3305 10.5006C13.7102 8.1106 12.8902 5.05015 10.5003 3.67032ZM3.07664 11.4314C2.87558 11.1403 2.804 11.0006 2.804 11.0006C1.77036 9.20524 1.69456 6.92215 2.80404 5.00046C3.91353 3.07877 5.92859 2.00291 8.0003 2.00036C8.0003 2.00036 8.28 1.99964 8.51289 2.02194C8.86375 2.05556 9.09702 2.10083 9.09702 2.10083L3.43905 11.9007C3.43905 11.9007 3.30482 11.7618 3.07664 11.4314ZM7.40178 3.03702C5.89399 3.22027 4.48727 4.08506 3.67008 5.50052C2.85288 6.9159 2.80733 8.56653 3.40252 9.96401L7.40178 3.03702Z" fill="black" stroke="black" stroke-width="0.1"/>
</g>
<defs>
<clipPath id="clip0_1_2">
<rect width="16" height="16" fill="white"/>
</clipPath>
</defs>
</svg>

Before: 1.2 KiB

View File

@@ -1279,7 +1279,6 @@
"escape": "workspace::CloseWindow",
"ctrl-m": "settings_editor::Minimize",
"ctrl-f": "search::FocusSearch",
"ctrl-,": "settings_editor::OpenCurrentFile",
"left": "settings_editor::ToggleFocusNav",
"ctrl-shift-e": "settings_editor::ToggleFocusNav",
// todo(settings_ui): cut this down based on the max files and overflow UI
@@ -1335,12 +1334,5 @@
"alt-left": "dev::Zeta2ContextGoBack",
"alt-right": "dev::Zeta2ContextGoForward"
}
},
{
"context": "GitBranchSelector || (GitBranchSelector > Picker > Editor)",
"use_key_equivalents": true,
"bindings": {
"ctrl-shift-backspace": "branch_picker::DeleteBranch"
}
}
]

View File

@@ -49,8 +49,7 @@
"ctrl-cmd-f": "zed::ToggleFullScreen",
"ctrl-cmd-z": "edit_prediction::RateCompletions",
"ctrl-cmd-i": "edit_prediction::ToggleMenu",
"ctrl-cmd-l": "lsp_tool::ToggleMenu",
"ctrl-cmd-c": "editor::DisplayCursorNames"
"ctrl-cmd-l": "lsp_tool::ToggleMenu"
}
},
{
@@ -590,7 +589,8 @@
"cmd-.": "editor::ToggleCodeActions",
"cmd-k r": "editor::RevealInFileManager",
"cmd-k p": "editor::CopyPath",
"cmd-\\": "pane::SplitRight"
"cmd-\\": "pane::SplitRight",
"ctrl-cmd-c": "editor::DisplayCursorNames"
}
},
{
@@ -730,8 +730,7 @@
"context": "Workspace && debugger_running",
"use_key_equivalents": true,
"bindings": {
"f5": "zed::NoAction",
"f11": "debugger::StepInto"
"f5": "zed::NoAction"
}
},
{
@@ -1383,7 +1382,6 @@
"escape": "workspace::CloseWindow",
"cmd-m": "settings_editor::Minimize",
"cmd-f": "search::FocusSearch",
"cmd-,": "settings_editor::OpenCurrentFile",
"left": "settings_editor::ToggleFocusNav",
"cmd-shift-e": "settings_editor::ToggleFocusNav",
// todo(settings_ui): cut this down based on the max files and overflow UI
@@ -1440,12 +1438,5 @@
"alt-left": "dev::Zeta2ContextGoBack",
"alt-right": "dev::Zeta2ContextGoForward"
}
},
{
"context": "GitBranchSelector || (GitBranchSelector > Picker > Editor)",
"use_key_equivalents": true,
"bindings": {
"cmd-shift-backspace": "branch_picker::DeleteBranch"
}
}
]

View File

@@ -36,12 +36,12 @@
"shift-f5": "debugger::Stop",
"ctrl-shift-f5": "debugger::RerunSession",
"f6": "debugger::Pause",
"f10": "debugger::StepOver",
"f7": "debugger::StepOver",
"ctrl-f11": "debugger::StepInto",
"shift-f11": "debugger::StepOut",
"f11": "zed::ToggleFullScreen",
"ctrl-shift-i": "edit_prediction::ToggleMenu",
"shift-alt-l": "lsp_tool::ToggleMenu",
"ctrl-shift-alt-c": "editor::DisplayCursorNames"
"shift-alt-l": "lsp_tool::ToggleMenu"
}
},
{
@@ -117,7 +117,7 @@
"alt-g m": "git::OpenModifiedFiles",
"menu": "editor::OpenContextMenu",
"shift-f10": "editor::OpenContextMenu",
"ctrl-alt-e": "editor::ToggleEditPrediction",
"ctrl-shift-e": "editor::ToggleEditPrediction",
"f9": "editor::ToggleBreakpoint",
"shift-f9": "editor::EditLogBreakpoint"
}
@@ -215,7 +215,7 @@
"context": "ContextEditor > Editor",
"use_key_equivalents": true,
"bindings": {
"ctrl-i": "assistant::Assist",
"ctrl-enter": "assistant::Assist",
"ctrl-s": "workspace::Save",
"ctrl-shift-,": "assistant::InsertIntoEditor",
"shift-enter": "assistant::Split",
@@ -240,18 +240,18 @@
"shift-alt-p": "agent::ManageProfiles",
"ctrl-i": "agent::ToggleProfileSelector",
"shift-alt-/": "agent::ToggleModelSelector",
"shift-alt-j": "agent::ToggleNavigationMenu",
"shift-alt-i": "agent::ToggleOptionsMenu",
"ctrl-shift-alt-n": "agent::ToggleNewThreadMenu",
"ctrl-shift-j": "agent::ToggleNavigationMenu",
"ctrl-alt-i": "agent::ToggleOptionsMenu",
// "ctrl-shift-alt-n": "agent::ToggleNewThreadMenu",
"shift-alt-escape": "agent::ExpandMessageEditor",
"ctrl-shift-.": "agent::AddSelectionToThread",
"ctrl-shift-e": "project_panel::ToggleFocus",
"ctrl-shift-enter": "agent::ContinueThread",
"super-ctrl-b": "agent::ToggleBurnMode",
"alt-enter": "agent::ContinueWithBurnMode",
"shift-alt-a": "agent::AllowOnce",
"ctrl-y": "agent::AllowOnce",
"ctrl-alt-y": "agent::AllowAlways",
"shift-alt-z": "agent::RejectOnce"
"ctrl-alt-z": "agent::RejectOnce"
}
},
{
@@ -500,7 +500,10 @@
"ctrl-shift-l": "editor::SelectAllMatches", // Select all occurrences of current selection
"ctrl-f2": "editor::SelectAllMatches", // Select all occurrences of current word
"ctrl-d": ["editor::SelectNext", { "replace_newest": false }], // editor.action.addSelectionToNextFindMatch / find_under_expand
"ctrl-shift-down": ["editor::SelectNext", { "replace_newest": false }], // editor.action.addSelectionToNextFindMatch
"ctrl-shift-up": ["editor::SelectPrevious", { "replace_newest": false }], // editor.action.addSelectionToPreviousFindMatch
"ctrl-k ctrl-d": ["editor::SelectNext", { "replace_newest": true }], // editor.action.moveSelectionToNextFindMatch / find_under_expand_skip
"ctrl-k ctrl-shift-d": ["editor::SelectPrevious", { "replace_newest": true }], // editor.action.moveSelectionToPreviousFindMatch
"ctrl-k ctrl-i": "editor::Hover",
"ctrl-k ctrl-b": "editor::BlameHover",
"ctrl-/": ["editor::ToggleComments", { "advance_downwards": false }],
@@ -509,8 +512,12 @@
"f2": "editor::Rename",
"f12": "editor::GoToDefinition",
"alt-f12": "editor::GoToDefinitionSplit",
"ctrl-shift-f10": "editor::GoToDefinitionSplit",
"ctrl-f12": "editor::GoToImplementation",
"shift-f12": "editor::GoToTypeDefinition",
"ctrl-alt-f12": "editor::GoToTypeDefinitionSplit",
"shift-alt-f12": "editor::FindAllReferences",
"ctrl-m": "editor::MoveToEnclosingBracket", // from jetbrains
"ctrl-shift-\\": "editor::MoveToEnclosingBracket",
"ctrl-shift-[": "editor::Fold",
"ctrl-shift-]": "editor::UnfoldLines",
@@ -534,6 +541,7 @@
"ctrl-k r": "editor::RevealInFileManager",
"ctrl-k p": "editor::CopyPath",
"ctrl-\\": "pane::SplitRight",
"ctrl-shift-alt-c": "editor::DisplayCursorNames",
"alt-.": "editor::GoToHunk",
"alt-,": "editor::GoToPreviousHunk"
}
@@ -1116,7 +1124,7 @@
"shift-insert": "terminal::Paste",
"ctrl-v": "terminal::Paste",
"ctrl-shift-v": "terminal::Paste",
"ctrl-i": "assistant::InlineAssist",
"ctrl-enter": "assistant::InlineAssist",
"alt-b": ["terminal::SendText", "\u001bb"],
"alt-f": ["terminal::SendText", "\u001bf"],
"alt-.": ["terminal::SendText", "\u001b."],
@@ -1297,7 +1305,6 @@
"escape": "workspace::CloseWindow",
"ctrl-m": "settings_editor::Minimize",
"ctrl-f": "search::FocusSearch",
"ctrl-,": "settings_editor::OpenCurrentFile",
"left": "settings_editor::ToggleFocusNav",
"ctrl-shift-e": "settings_editor::ToggleFocusNav",
// todo(settings_ui): cut this down based on the max files and overflow UI
@@ -1354,12 +1361,5 @@
"alt-left": "dev::Zeta2ContextGoBack",
"alt-right": "dev::Zeta2ContextGoForward"
}
},
{
"context": "GitBranchSelector || (GitBranchSelector > Picker > Editor)",
"use_key_equivalents": true,
"bindings": {
"ctrl-shift-backspace": "branch_picker::DeleteBranch"
}
}
]

View File

@@ -857,8 +857,6 @@
"ctrl-w shift-right": "workspace::SwapPaneRight",
"ctrl-w shift-up": "workspace::SwapPaneUp",
"ctrl-w shift-down": "workspace::SwapPaneDown",
"ctrl-w x": "workspace::SwapPaneAdjacent",
"ctrl-w ctrl-x": "workspace::SwapPaneAdjacent",
"ctrl-w shift-h": "workspace::MovePaneLeft",
"ctrl-w shift-l": "workspace::MovePaneRight",
"ctrl-w shift-k": "workspace::MovePaneUp",

View File

@@ -1100,22 +1100,13 @@
"preview_tabs": {
// Whether preview tabs should be enabled.
// Preview tabs allow you to open files in preview mode, where they close automatically
// when you open another preview tab.
// when you switch to another file unless you explicitly pin them.
// This is useful for quickly viewing files without cluttering your workspace.
"enabled": true,
// Whether to open tabs in preview mode when opened from the project panel with a single click.
"enable_preview_from_project_panel": true,
// Whether to open tabs in preview mode when selected from the file finder.
"enable_preview_from_file_finder": false,
// Whether to open tabs in preview mode when opened from a multibuffer.
"enable_preview_from_multibuffer": true,
// Whether to open tabs in preview mode when code navigation is used to open a multibuffer.
"enable_preview_multibuffer_from_code_navigation": false,
// Whether to open tabs in preview mode when code navigation is used to open a single file.
"enable_preview_file_from_code_navigation": true,
// Whether to keep tabs in preview mode when code navigation is used to navigate away from them.
// If `enable_preview_file_from_code_navigation` or `enable_preview_multibuffer_from_code_navigation` is also true, the new tab may replace the existing one.
"enable_keep_preview_on_code_navigation": false
// Whether a preview tab gets replaced when code navigation is used to navigate away from the tab.
"enable_preview_from_code_navigation": false
},
// Settings related to the file finder.
"file_finder": {
@@ -1218,13 +1209,6 @@
"tab_size": 4,
// What debuggers are preferred by default for all languages.
"debuggers": [],
// Whether to enable word diff highlighting in the editor.
//
// When enabled, changed words within modified lines are highlighted
// to show exactly what changed.
//
// Default: true
"word_diff_enabled": true,
// Control what info is collected by Zed.
"telemetry": {
// Send debug info like crash reports.
@@ -1366,8 +1350,6 @@
// "load_direnv": "direct"
// 2. Load direnv configuration through the shell hook, works for POSIX shells and fish.
// "load_direnv": "shell_hook"
// 3. Don't load direnv configuration at all.
// "load_direnv": "disabled"
"load_direnv": "direct",
"edit_predictions": {
// A list of globs representing files that edit predictions should be disabled for.
@@ -2040,11 +2022,7 @@
// dirty files when closing the application.
//
// Default: true
"restore_unsaved_buffers": true,
// Whether or not to skip project trust checks and synchronize project settings from any worktree automatically.
//
// Default: false
"trust_all_worktrees": false
"restore_unsaved_buffers": true
},
// Zed's Prettier integration settings.
// Allows to enable/disable formatting with Prettier

View File

@@ -45,7 +45,6 @@
"tab.inactive_background": "#1f2127ff",
"tab.active_background": "#0d1016ff",
"search.match_background": "#5ac2fe66",
"search.active_match_background": "#ea570166",
"panel.background": "#1f2127ff",
"panel.focused_border": "#5ac1feff",
"pane.focused_border": null,
@@ -437,7 +436,6 @@
"tab.inactive_background": "#ececedff",
"tab.active_background": "#fcfcfcff",
"search.match_background": "#3b9ee566",
"search.active_match_background": "#f88b3666",
"panel.background": "#ececedff",
"panel.focused_border": "#3b9ee5ff",
"pane.focused_border": null,
@@ -829,7 +827,6 @@
"tab.inactive_background": "#353944ff",
"tab.active_background": "#242835ff",
"search.match_background": "#73cffe66",
"search.active_match_background": "#fd722b66",
"panel.background": "#353944ff",
"panel.focused_border": null,
"pane.focused_border": null,

View File

@@ -46,7 +46,6 @@
"tab.inactive_background": "#3a3735ff",
"tab.active_background": "#282828ff",
"search.match_background": "#83a59866",
"search.active_match_background": "#c09f3f66",
"panel.background": "#3a3735ff",
"panel.focused_border": "#83a598ff",
"pane.focused_border": null,
@@ -453,7 +452,6 @@
"tab.inactive_background": "#393634ff",
"tab.active_background": "#1d2021ff",
"search.match_background": "#83a59866",
"search.active_match_background": "#c9653666",
"panel.background": "#393634ff",
"panel.focused_border": "#83a598ff",
"pane.focused_border": null,
@@ -860,7 +858,6 @@
"tab.inactive_background": "#3b3735ff",
"tab.active_background": "#32302fff",
"search.match_background": "#83a59866",
"search.active_match_background": "#aea85166",
"panel.background": "#3b3735ff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -1267,7 +1264,6 @@
"tab.inactive_background": "#ecddb4ff",
"tab.active_background": "#fbf1c7ff",
"search.match_background": "#0b667866",
"search.active_match_background": "#ba2d1166",
"panel.background": "#ecddb4ff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -1674,7 +1670,6 @@
"tab.inactive_background": "#ecddb5ff",
"tab.active_background": "#f9f5d7ff",
"search.match_background": "#0b667866",
"search.active_match_background": "#dc351466",
"panel.background": "#ecddb5ff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -2081,7 +2076,6 @@
"tab.inactive_background": "#ecdcb3ff",
"tab.active_background": "#f2e5bcff",
"search.match_background": "#0b667866",
"search.active_match_background": "#d7331466",
"panel.background": "#ecdcb3ff",
"panel.focused_border": null,
"pane.focused_border": null,

View File

@@ -45,7 +45,6 @@
"tab.inactive_background": "#2f343eff",
"tab.active_background": "#282c33ff",
"search.match_background": "#74ade866",
"search.active_match_background": "#e8af7466",
"panel.background": "#2f343eff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -99,8 +98,6 @@
"link_text.hover": "#74ade8ff",
"version_control.added": "#27a657ff",
"version_control.modified": "#d3b020ff",
"version_control.word_added": "#2EA04859",
"version_control.word_deleted": "#78081BCC",
"version_control.deleted": "#e06c76ff",
"version_control.conflict_marker.ours": "#a1c1811a",
"version_control.conflict_marker.theirs": "#74ade81a",
@@ -449,7 +446,6 @@
"tab.inactive_background": "#ebebecff",
"tab.active_background": "#fafafaff",
"search.match_background": "#5c79e266",
"search.active_match_background": "#d0a92366",
"panel.background": "#ebebecff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -503,8 +499,6 @@
"link_text.hover": "#5c78e2ff",
"version_control.added": "#27a657ff",
"version_control.modified": "#d3b020ff",
"version_control.word_added": "#2EA04859",
"version_control.word_deleted": "#F85149CC",
"version_control.deleted": "#e06c76ff",
"conflict": "#a48819ff",
"conflict.background": "#faf2e6ff",


@@ -201,19 +201,17 @@ impl ToolCall {
};
let mut content = Vec::with_capacity(tool_call.content.len());
for item in tool_call.content {
if let Some(item) = ToolCallContent::from_acp(
content.push(ToolCallContent::from_acp(
item,
language_registry.clone(),
path_style,
terminals,
cx,
)? {
content.push(item);
}
)?);
}
let result = Self {
id: tool_call.tool_call_id,
id: tool_call.id,
label: cx
.new(|cx| Markdown::new(title.into(), Some(language_registry.clone()), None, cx)),
kind: tool_call.kind,
@@ -243,7 +241,6 @@ impl ToolCall {
locations,
raw_input,
raw_output,
..
} = fields;
if let Some(kind) = kind {
@@ -265,29 +262,21 @@ impl ToolCall {
}
if let Some(content) = content {
let mut new_content_len = content.len();
let new_content_len = content.len();
let mut content = content.into_iter();
// Reuse existing content if we can
for (old, new) in self.content.iter_mut().zip(content.by_ref()) {
let valid_content =
old.update_from_acp(new, language_registry.clone(), path_style, terminals, cx)?;
if !valid_content {
new_content_len -= 1;
}
old.update_from_acp(new, language_registry.clone(), path_style, terminals, cx)?;
}
for new in content {
if let Some(new) = ToolCallContent::from_acp(
self.content.push(ToolCallContent::from_acp(
new,
language_registry.clone(),
path_style,
terminals,
cx,
)? {
self.content.push(new);
} else {
new_content_len -= 1;
}
)?)
}
self.content.truncate(new_content_len);
}
@@ -358,13 +347,13 @@ impl ToolCall {
let buffer = buffer.await.log_err()?;
let position = buffer
.update(cx, |buffer, _| {
let snapshot = buffer.snapshot();
if let Some(row) = location.line {
let snapshot = buffer.snapshot();
let column = snapshot.indent_size_for_line(row).len;
let point = snapshot.clip_point(Point::new(row, column), Bias::Left);
snapshot.anchor_before(point)
} else {
Anchor::min_for_buffer(snapshot.remote_id())
Anchor::MIN
}
})
.ok()?;
@@ -436,7 +425,6 @@ impl From<acp::ToolCallStatus> for ToolCallStatus {
acp::ToolCallStatus::InProgress => Self::InProgress,
acp::ToolCallStatus::Completed => Self::Completed,
acp::ToolCallStatus::Failed => Self::Failed,
_ => Self::Pending,
}
}
}
@@ -549,7 +537,7 @@ impl ContentBlock {
..
}) => Self::resource_link_md(&uri, path_style),
acp::ContentBlock::Image(image) => Self::image_md(&image),
_ => String::new(),
acp::ContentBlock::Audio(_) | acp::ContentBlock::Resource(_) => String::new(),
}
}
@@ -603,17 +591,15 @@ impl ToolCallContent {
path_style: PathStyle,
terminals: &HashMap<acp::TerminalId, Entity<Terminal>>,
cx: &mut App,
) -> Result<Option<Self>> {
) -> Result<Self> {
match content {
acp::ToolCallContent::Content(acp::Content { content, .. }) => {
Ok(Some(Self::ContentBlock(ContentBlock::new(
content,
&language_registry,
path_style,
cx,
))))
}
acp::ToolCallContent::Diff(diff) => Ok(Some(Self::Diff(cx.new(|cx| {
acp::ToolCallContent::Content { content } => Ok(Self::ContentBlock(ContentBlock::new(
content,
&language_registry,
path_style,
cx,
))),
acp::ToolCallContent::Diff { diff } => Ok(Self::Diff(cx.new(|cx| {
Diff::finalized(
diff.path.to_string_lossy().into_owned(),
diff.old_text,
@@ -621,13 +607,12 @@ impl ToolCallContent {
language_registry,
cx,
)
})))),
acp::ToolCallContent::Terminal(acp::Terminal { terminal_id, .. }) => terminals
}))),
acp::ToolCallContent::Terminal { terminal_id } => terminals
.get(&terminal_id)
.cloned()
.map(|terminal| Some(Self::Terminal(terminal)))
.map(Self::Terminal)
.ok_or_else(|| anyhow::anyhow!("Terminal with id `{}` not found", terminal_id)),
_ => Ok(None),
}
}
@@ -638,9 +623,9 @@ impl ToolCallContent {
path_style: PathStyle,
terminals: &HashMap<acp::TerminalId, Entity<Terminal>>,
cx: &mut App,
) -> Result<bool> {
) -> Result<()> {
let needs_update = match (&self, &new) {
(Self::Diff(old_diff), acp::ToolCallContent::Diff(new_diff)) => {
(Self::Diff(old_diff), acp::ToolCallContent::Diff { diff: new_diff }) => {
old_diff.read(cx).needs_update(
new_diff.old_text.as_deref().unwrap_or(""),
&new_diff.new_text,
@@ -650,14 +635,10 @@ impl ToolCallContent {
_ => true,
};
if let Some(update) = Self::from_acp(new, language_registry, path_style, terminals, cx)? {
if needs_update {
*self = update;
}
Ok(true)
} else {
Ok(false)
if needs_update {
*self = Self::from_acp(new, language_registry, path_style, terminals, cx)?;
}
Ok(())
}
pub fn to_markdown(&self, cx: &App) -> String {
@@ -679,7 +660,7 @@ pub enum ToolCallUpdate {
impl ToolCallUpdate {
fn id(&self) -> &acp::ToolCallId {
match self {
Self::UpdateFields(update) => &update.tool_call_id,
Self::UpdateFields(update) => &update.id,
Self::UpdateDiff(diff) => &diff.id,
Self::UpdateTerminal(terminal) => &terminal.id,
}
@@ -751,7 +732,6 @@ impl Plan {
acp::PlanEntryStatus::Completed => {
stats.completed += 1;
}
_ => {}
}
}
@@ -1174,7 +1154,6 @@ impl AcpThread {
current_mode_id,
..
}) => cx.emit(AcpThreadEvent::ModeUpdated(current_mode_id)),
_ => {}
}
Ok(())
}
@@ -1308,7 +1287,11 @@ impl AcpThread {
label: cx.new(|cx| Markdown::new("Tool call not found".into(), None, None, cx)),
kind: acp::ToolKind::Fetch,
content: vec![ToolCallContent::ContentBlock(ContentBlock::new(
"Tool call not found".into(),
acp::ContentBlock::Text(acp::TextContent {
text: "Tool call not found".to_string(),
annotations: None,
meta: None,
}),
&languages,
path_style,
cx,
@@ -1332,7 +1315,7 @@ impl AcpThread {
let location_updated = update.fields.locations.is_some();
call.update_fields(update.fields, languages, path_style, &self.terminals, cx)?;
if location_updated {
self.resolve_locations(update.tool_call_id, cx);
self.resolve_locations(update.id, cx);
}
}
ToolCallUpdate::UpdateDiff(update) => {
@@ -1370,7 +1353,7 @@ impl AcpThread {
) -> Result<(), acp::Error> {
let language_registry = self.project.read(cx).languages().clone();
let path_style = self.project.read(cx).path_style(cx);
let id = update.tool_call_id.clone();
let id = update.id.clone();
let agent = self.connection().telemetry_id();
let session = self.session_id();
@@ -1535,16 +1518,16 @@ impl AcpThread {
// some tools would (incorrectly) continue to auto-accept.
if let Some(allow_once_option) = options.iter().find_map(|option| {
if matches!(option.kind, acp::PermissionOptionKind::AllowOnce) {
Some(option.option_id.clone())
Some(option.id.clone())
} else {
None
}
}) {
self.upsert_tool_call_inner(tool_call, ToolCallStatus::Pending, cx)?;
return Ok(async {
acp::RequestPermissionOutcome::Selected(acp::SelectedPermissionOutcome::new(
allow_once_option,
))
acp::RequestPermissionOutcome::Selected {
option_id: allow_once_option,
}
}
.boxed());
}
@@ -1560,9 +1543,7 @@ impl AcpThread {
let fut = async {
match rx.await {
Ok(option) => acp::RequestPermissionOutcome::Selected(
acp::SelectedPermissionOutcome::new(option),
),
Ok(option) => acp::RequestPermissionOutcome::Selected { option_id: option },
Err(oneshot::Canceled) => acp::RequestPermissionOutcome::Cancelled,
}
}
@@ -1589,7 +1570,6 @@ impl AcpThread {
acp::PermissionOptionKind::AllowOnce | acp::PermissionOptionKind::AllowAlways => {
ToolCallStatus::InProgress
}
_ => ToolCallStatus::InProgress,
};
let curr_status = mem::replace(&mut call.status, new_status);
@@ -1668,7 +1648,14 @@ impl AcpThread {
message: &str,
cx: &mut Context<Self>,
) -> BoxFuture<'static, Result<()>> {
self.send(vec![message.into()], cx)
self.send(
vec![acp::ContentBlock::Text(acp::TextContent {
text: message.to_string(),
annotations: None,
meta: None,
})],
cx,
)
}
pub fn send(
@@ -1682,7 +1669,11 @@ impl AcpThread {
self.project.read(cx).path_style(cx),
cx,
);
let request = acp::PromptRequest::new(self.session_id.clone(), message.clone());
let request = acp::PromptRequest {
prompt: message.clone(),
session_id: self.session_id.clone(),
meta: None,
};
let git_store = self.project.read(cx).git_store().clone();
let message_id = if self.connection.truncate(&self.session_id, cx).is_some() {
@@ -1774,7 +1765,7 @@ impl AcpThread {
result,
Ok(Ok(acp::PromptResponse {
stop_reason: acp::StopReason::Cancelled,
..
meta: None,
}))
);
@@ -1790,7 +1781,7 @@ impl AcpThread {
// Handle refusal - distinguish between user prompt and tool call refusals
if let Ok(Ok(acp::PromptResponse {
stop_reason: acp::StopReason::Refusal,
..
meta: _,
})) = result
{
if let Some((user_msg_ix, _)) = this.last_user_message() {
@@ -2026,7 +2017,7 @@ impl AcpThread {
})?;
Ok(project.open_buffer(path, cx))
})
.map_err(|e| acp::Error::internal_error().data(e.to_string()))
.map_err(|e| acp::Error::internal_error().with_data(e.to_string()))
.flatten()?;
let buffer = load.await?;
@@ -2059,7 +2050,7 @@ impl AcpThread {
let start_position = Point::new(line, 0);
if start_position > max_point {
return Err(acp::Error::invalid_params().data(format!(
return Err(acp::Error::invalid_params().with_data(format!(
"Attempting to read beyond the end of the file, line {}:{}",
max_point.row + 1,
max_point.column
@@ -2129,7 +2120,7 @@ impl AcpThread {
position: edits
.last()
.map(|(range, _)| range.end)
.unwrap_or(Anchor::min_for_buffer(buffer.read(cx).remote_id())),
.unwrap_or(Anchor::MIN),
}),
cx,
);
@@ -2211,7 +2202,7 @@ impl AcpThread {
let language_registry = project.read(cx).languages().clone();
let is_windows = project.read(cx).path_style(cx).is_windows();
let terminal_id = acp::TerminalId::new(Uuid::new_v4().to_string());
let terminal_id = acp::TerminalId(Uuid::new_v4().to_string().into());
let terminal_task = cx.spawn({
let terminal_id = terminal_id.clone();
async move |_this, cx| {
@@ -2421,7 +2412,7 @@ mod tests {
.await
.unwrap();
let terminal_id = acp::TerminalId::new(uuid::Uuid::new_v4().to_string());
let terminal_id = acp::TerminalId(uuid::Uuid::new_v4().to_string().into());
// Send Output BEFORE Created - should be buffered by acp_thread
thread.update(cx, |thread, cx| {
@@ -2483,7 +2474,7 @@ mod tests {
.await
.unwrap();
let terminal_id = acp::TerminalId::new(uuid::Uuid::new_v4().to_string());
let terminal_id = acp::TerminalId(uuid::Uuid::new_v4().to_string().into());
// Send Output BEFORE Created
thread.update(cx, |thread, cx| {
@@ -2501,7 +2492,11 @@ mod tests {
thread.on_terminal_provider_event(
TerminalProviderEvent::Exit {
terminal_id: terminal_id.clone(),
status: acp::TerminalExitStatus::new().exit_code(0),
status: acp::TerminalExitStatus {
exit_code: Some(0),
signal: None,
meta: None,
},
},
cx,
);
@@ -2558,7 +2553,15 @@ mod tests {
// Test creating a new user message
thread.update(cx, |thread, cx| {
thread.push_user_content_block(None, "Hello, ".into(), cx);
thread.push_user_content_block(
None,
acp::ContentBlock::Text(acp::TextContent {
annotations: None,
text: "Hello, ".to_string(),
meta: None,
}),
cx,
);
});
thread.update(cx, |thread, cx| {
@@ -2574,7 +2577,15 @@ mod tests {
// Test appending to existing user message
let message_1_id = UserMessageId::new();
thread.update(cx, |thread, cx| {
thread.push_user_content_block(Some(message_1_id.clone()), "world!".into(), cx);
thread.push_user_content_block(
Some(message_1_id.clone()),
acp::ContentBlock::Text(acp::TextContent {
annotations: None,
text: "world!".to_string(),
meta: None,
}),
cx,
);
});
thread.update(cx, |thread, cx| {
@@ -2589,14 +2600,26 @@ mod tests {
// Test creating new user message after assistant message
thread.update(cx, |thread, cx| {
thread.push_assistant_content_block("Assistant response".into(), false, cx);
thread.push_assistant_content_block(
acp::ContentBlock::Text(acp::TextContent {
annotations: None,
text: "Assistant response".to_string(),
meta: None,
}),
false,
cx,
);
});
let message_2_id = UserMessageId::new();
thread.update(cx, |thread, cx| {
thread.push_user_content_block(
Some(message_2_id.clone()),
"New user message".into(),
acp::ContentBlock::Text(acp::TextContent {
annotations: None,
text: "New user message".to_string(),
meta: None,
}),
cx,
);
});
@@ -2624,22 +2647,27 @@ mod tests {
thread.update(&mut cx, |thread, cx| {
thread
.handle_session_update(
acp::SessionUpdate::AgentThoughtChunk(acp::ContentChunk::new(
"Thinking ".into(),
)),
acp::SessionUpdate::AgentThoughtChunk(acp::ContentChunk {
content: "Thinking ".into(),
meta: None,
}),
cx,
)
.unwrap();
thread
.handle_session_update(
acp::SessionUpdate::AgentThoughtChunk(acp::ContentChunk::new(
"hard!".into(),
)),
acp::SessionUpdate::AgentThoughtChunk(acp::ContentChunk {
content: "hard!".into(),
meta: None,
}),
cx,
)
.unwrap();
})?;
Ok(acp::PromptResponse::new(acp::StopReason::EndTurn))
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
})
}
.boxed_local()
},
@@ -2707,7 +2735,10 @@ mod tests {
.unwrap()
.await
.unwrap();
Ok(acp::PromptResponse::new(acp::StopReason::EndTurn))
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
})
}
.boxed_local()
},
@@ -2938,7 +2969,7 @@ mod tests {
let fs = FakeFs::new(cx.executor());
let project = Project::test(fs, [], cx).await;
let id = acp::ToolCallId::new("test");
let id = acp::ToolCallId("test".into());
let connection = Rc::new(FakeAgentConnection::new().on_user_message({
let id = id.clone();
@@ -2948,17 +2979,26 @@ mod tests {
thread
.update(&mut cx, |thread, cx| {
thread.handle_session_update(
acp::SessionUpdate::ToolCall(
acp::ToolCall::new(id.clone(), "Label")
.kind(acp::ToolKind::Fetch)
.status(acp::ToolCallStatus::InProgress),
),
acp::SessionUpdate::ToolCall(acp::ToolCall {
id: id.clone(),
title: "Label".into(),
kind: acp::ToolKind::Fetch,
status: acp::ToolCallStatus::InProgress,
content: vec![],
locations: vec![],
raw_input: None,
raw_output: None,
meta: None,
}),
cx,
)
})
.unwrap()
.unwrap();
Ok(acp::PromptResponse::new(acp::StopReason::EndTurn))
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
})
}
.boxed_local()
}
@@ -3000,10 +3040,14 @@ mod tests {
thread
.update(cx, |thread, cx| {
thread.handle_session_update(
acp::SessionUpdate::ToolCallUpdate(acp::ToolCallUpdate::new(
acp::SessionUpdate::ToolCallUpdate(acp::ToolCallUpdate {
id,
acp::ToolCallUpdateFields::new().status(acp::ToolCallStatus::Completed),
)),
fields: acp::ToolCallUpdateFields {
status: Some(acp::ToolCallStatus::Completed),
..Default::default()
},
meta: None,
}),
cx,
)
})
@@ -3035,21 +3079,33 @@ mod tests {
thread
.update(&mut cx, |thread, cx| {
thread.handle_session_update(
acp::SessionUpdate::ToolCall(
acp::ToolCall::new("test", "Label")
.kind(acp::ToolKind::Edit)
.status(acp::ToolCallStatus::Completed)
.content(vec![acp::ToolCallContent::Diff(acp::Diff::new(
"/test/test.txt",
"foo",
))]),
),
acp::SessionUpdate::ToolCall(acp::ToolCall {
id: acp::ToolCallId("test".into()),
title: "Label".into(),
kind: acp::ToolKind::Edit,
status: acp::ToolCallStatus::Completed,
content: vec![acp::ToolCallContent::Diff {
diff: acp::Diff {
path: "/test/test.txt".into(),
old_text: None,
new_text: "foo".into(),
meta: None,
},
}],
locations: vec![],
raw_input: None,
raw_output: None,
meta: None,
}),
cx,
)
})
.unwrap()
.unwrap();
Ok(acp::PromptResponse::new(acp::StopReason::EndTurn))
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
})
}
.boxed_local()
}
@@ -3102,14 +3158,18 @@ mod tests {
thread.update(&mut cx, |thread, cx| {
thread
.handle_session_update(
acp::SessionUpdate::AgentMessageChunk(acp::ContentChunk::new(
content.text.to_uppercase().into(),
)),
acp::SessionUpdate::AgentMessageChunk(acp::ContentChunk {
content: content.text.to_uppercase().into(),
meta: None,
}),
cx,
)
.unwrap();
})?;
Ok(acp::PromptResponse::new(acp::StopReason::EndTurn))
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
})
}
.boxed_local()
}
@@ -3265,22 +3325,34 @@ mod tests {
thread.update(&mut cx, |thread, cx| {
thread
.handle_session_update(
acp::SessionUpdate::ToolCall(
acp::ToolCall::new("tool1", "Test Tool")
.kind(acp::ToolKind::Fetch)
.status(acp::ToolCallStatus::Completed)
.raw_input(serde_json::json!({"query": "test"}))
.raw_output(serde_json::json!({"result": "inappropriate content"})),
),
acp::SessionUpdate::ToolCall(acp::ToolCall {
id: acp::ToolCallId("tool1".into()),
title: "Test Tool".into(),
kind: acp::ToolKind::Fetch,
status: acp::ToolCallStatus::Completed,
content: vec![],
locations: vec![],
raw_input: Some(serde_json::json!({"query": "test"})),
raw_output: Some(
serde_json::json!({"result": "inappropriate content"}),
),
meta: None,
}),
cx,
)
.unwrap();
})?;
// Now return refusal because of the tool result
Ok(acp::PromptResponse::new(acp::StopReason::Refusal))
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::Refusal,
meta: None,
})
} else {
Ok(acp::PromptResponse::new(acp::StopReason::EndTurn))
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
})
}
}
.boxed_local()
@@ -3308,7 +3380,16 @@ mod tests {
});
// Send a user message - this will trigger tool call and then refusal
let send_task = thread.update(cx, |thread, cx| thread.send(vec!["Hello".into()], cx));
let send_task = thread.update(cx, |thread, cx| {
thread.send(
vec![acp::ContentBlock::Text(acp::TextContent {
text: "Hello".into(),
annotations: None,
meta: None,
})],
cx,
)
});
cx.background_executor.spawn(send_task).detach();
cx.run_until_parked();
@@ -3354,11 +3435,21 @@ mod tests {
let refuse_next = refuse_next.clone();
move |_request, _thread, _cx| {
if refuse_next.load(SeqCst) {
async move { Ok(acp::PromptResponse::new(acp::StopReason::Refusal)) }
.boxed_local()
async move {
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::Refusal,
meta: None,
})
}
.boxed_local()
} else {
async move { Ok(acp::PromptResponse::new(acp::StopReason::EndTurn)) }
.boxed_local()
async move {
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
})
}
.boxed_local()
}
}
}));
@@ -3415,7 +3506,10 @@ mod tests {
let refuse_next = refuse_next.clone();
async move {
if refuse_next.load(SeqCst) {
return Ok(acp::PromptResponse::new(acp::StopReason::Refusal));
return Ok(acp::PromptResponse {
stop_reason: acp::StopReason::Refusal,
meta: None,
});
}
let acp::ContentBlock::Text(content) = &request.prompt[0] else {
@@ -3424,14 +3518,18 @@ mod tests {
thread.update(&mut cx, |thread, cx| {
thread
.handle_session_update(
acp::SessionUpdate::AgentMessageChunk(acp::ContentChunk::new(
content.text.to_uppercase().into(),
)),
acp::SessionUpdate::AgentMessageChunk(acp::ContentChunk {
content: content.text.to_uppercase().into(),
meta: None,
}),
cx,
)
.unwrap();
})?;
Ok(acp::PromptResponse::new(acp::StopReason::EndTurn))
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
})
}
.boxed_local()
}
@@ -3570,12 +3668,13 @@ mod tests {
_cwd: &Path,
cx: &mut App,
) -> Task<gpui::Result<Entity<AcpThread>>> {
let session_id = acp::SessionId::new(
let session_id = acp::SessionId(
rand::rng()
.sample_iter(&distr::Alphanumeric)
.take(7)
.map(char::from)
.collect::<String>(),
.collect::<String>()
.into(),
);
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let thread = cx.new(|cx| {
@@ -3585,12 +3684,12 @@ mod tests {
project,
action_log,
session_id.clone(),
watch::Receiver::constant(
acp::PromptCapabilities::new()
.image(true)
.audio(true)
.embedded_context(true),
),
watch::Receiver::constant(acp::PromptCapabilities {
image: true,
audio: true,
embedded_context: true,
meta: None,
}),
cx,
)
});
@@ -3619,7 +3718,10 @@ mod tests {
let thread = thread.clone();
cx.spawn(async move |cx| handler(params, thread, cx.clone()).await)
} else {
Task::ready(Ok(acp::PromptResponse::new(acp::StopReason::EndTurn)))
Task::ready(Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
}))
}
}
@@ -3674,13 +3776,17 @@ mod tests {
.unwrap();
// Try to update a tool call that doesn't exist
let nonexistent_id = acp::ToolCallId::new("nonexistent-tool-call");
let nonexistent_id = acp::ToolCallId("nonexistent-tool-call".into());
thread.update(cx, |thread, cx| {
let result = thread.handle_session_update(
acp::SessionUpdate::ToolCallUpdate(acp::ToolCallUpdate::new(
nonexistent_id.clone(),
acp::ToolCallUpdateFields::new().status(acp::ToolCallStatus::Completed),
)),
acp::SessionUpdate::ToolCallUpdate(acp::ToolCallUpdate {
id: nonexistent_id.clone(),
fields: acp::ToolCallUpdateFields {
status: Some(acp::ToolCallStatus::Completed),
..Default::default()
},
meta: None,
}),
cx,
);
@@ -3755,7 +3861,7 @@ mod tests {
.unwrap();
// Create 2 terminals BEFORE the checkpoint that have completed running
let terminal_id_1 = acp::TerminalId::new(uuid::Uuid::new_v4().to_string());
let terminal_id_1 = acp::TerminalId(uuid::Uuid::new_v4().to_string().into());
let mock_terminal_1 = cx.new(|cx| {
let builder = ::terminal::TerminalBuilder::new_display_only(
::terminal::terminal_settings::CursorShape::default(),
@@ -3794,13 +3900,17 @@ mod tests {
thread.on_terminal_provider_event(
TerminalProviderEvent::Exit {
terminal_id: terminal_id_1.clone(),
status: acp::TerminalExitStatus::new().exit_code(0),
status: acp::TerminalExitStatus {
exit_code: Some(0),
signal: None,
meta: None,
},
},
cx,
);
});
let terminal_id_2 = acp::TerminalId::new(uuid::Uuid::new_v4().to_string());
let terminal_id_2 = acp::TerminalId(uuid::Uuid::new_v4().to_string().into());
let mock_terminal_2 = cx.new(|cx| {
let builder = ::terminal::TerminalBuilder::new_display_only(
::terminal::terminal_settings::CursorShape::default(),
@@ -3839,7 +3949,11 @@ mod tests {
thread.on_terminal_provider_event(
TerminalProviderEvent::Exit {
terminal_id: terminal_id_2.clone(),
status: acp::TerminalExitStatus::new().exit_code(0),
status: acp::TerminalExitStatus {
exit_code: Some(0),
signal: None,
meta: None,
},
},
cx,
);
@@ -3859,7 +3973,7 @@ mod tests {
// Create a terminal AFTER the checkpoint we'll restore to.
// This simulates the AI agent starting a long-running terminal command.
let terminal_id = acp::TerminalId::new(uuid::Uuid::new_v4().to_string());
let terminal_id = acp::TerminalId(uuid::Uuid::new_v4().to_string().into());
let mock_terminal = cx.new(|cx| {
let builder = ::terminal::TerminalBuilder::new_display_only(
::terminal::terminal_settings::CursorShape::default(),
@@ -3901,15 +4015,21 @@ mod tests {
thread.update(cx, |thread, cx| {
thread
.handle_session_update(
acp::SessionUpdate::ToolCall(
acp::ToolCall::new("terminal-tool-1", "Running command")
.kind(acp::ToolKind::Execute)
.status(acp::ToolCallStatus::InProgress)
.content(vec![acp::ToolCallContent::Terminal(acp::Terminal::new(
terminal_id.clone(),
))])
.raw_input(serde_json::json!({"command": "sleep 1000", "cd": "/test"})),
),
acp::SessionUpdate::ToolCall(acp::ToolCall {
id: acp::ToolCallId("terminal-tool-1".into()),
title: "Running command".into(),
kind: acp::ToolKind::Execute,
status: acp::ToolCallStatus::InProgress,
content: vec![acp::ToolCallContent::Terminal {
terminal_id: terminal_id.clone(),
}],
locations: vec![],
raw_input: Some(
serde_json::json!({"command": "sleep 1000", "cd": "/test"}),
),
raw_output: None,
meta: None,
}),
cx,
)
.unwrap();


@@ -336,7 +336,7 @@ mod test_support {
_cwd: &Path,
cx: &mut gpui::App,
) -> Task<gpui::Result<Entity<AcpThread>>> {
let session_id = acp::SessionId::new(self.sessions.lock().len().to_string());
let session_id = acp::SessionId(self.sessions.lock().len().to_string().into());
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let thread = cx.new(|cx| {
AcpThread::new(
@@ -345,12 +345,12 @@ mod test_support {
project,
action_log,
session_id.clone(),
watch::Receiver::constant(
acp::PromptCapabilities::new()
.image(true)
.audio(true)
.embedded_context(true),
),
watch::Receiver::constant(acp::PromptCapabilities {
image: true,
audio: true,
embedded_context: true,
meta: None,
}),
cx,
)
});
@@ -389,7 +389,10 @@ mod test_support {
response_tx.replace(tx);
cx.spawn(async move |_| {
let stop_reason = rx.await?;
Ok(acp::PromptResponse::new(stop_reason))
Ok(acp::PromptResponse {
stop_reason,
meta: None,
})
})
} else {
for update in self.next_prompt_updates.lock().drain(..) {
@@ -397,7 +400,7 @@ mod test_support {
let update = update.clone();
let permission_request = if let acp::SessionUpdate::ToolCall(tool_call) =
&update
&& let Some(options) = self.permission_requests.get(&tool_call.tool_call_id)
&& let Some(options) = self.permission_requests.get(&tool_call.id)
{
Some((tool_call.clone(), options.clone()))
} else {
@@ -426,7 +429,10 @@ mod test_support {
cx.spawn(async move |_| {
try_join_all(tasks).await?;
Ok(acp::PromptResponse::new(acp::StopReason::EndTurn))
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
})
})
}
}


@@ -50,14 +50,9 @@ impl Diff {
let hunk_ranges = {
let buffer = buffer.read(cx);
let diff = diff.read(cx);
diff.hunks_intersecting_range(
Anchor::min_for_buffer(buffer.remote_id())
..Anchor::max_for_buffer(buffer.remote_id()),
buffer,
cx,
)
.map(|diff_hunk| diff_hunk.buffer_range.to_point(buffer))
.collect::<Vec<_>>()
diff.hunks_intersecting_range(Anchor::MIN..Anchor::MAX, buffer, cx)
.map(|diff_hunk| diff_hunk.buffer_range.to_point(buffer))
.collect::<Vec<_>>()
};
multibuffer.set_excerpts_for_path(
@@ -321,12 +316,7 @@ impl PendingDiff {
let buffer = self.new_buffer.read(cx);
let diff = self.diff.read(cx);
let mut ranges = diff
.hunks_intersecting_range(
Anchor::min_for_buffer(buffer.remote_id())
..Anchor::max_for_buffer(buffer.remote_id()),
buffer,
cx,
)
.hunks_intersecting_range(Anchor::MIN..Anchor::MAX, buffer, cx)
.map(|diff_hunk| diff_hunk.buffer_range.to_point(buffer))
.collect::<Vec<_>>();
ranges.extend(


@@ -108,7 +108,7 @@ impl MentionUri {
if let Some(thread_id) = path.strip_prefix("/agent/thread/") {
let name = single_query_param(&url, "name")?.context("Missing thread name")?;
Ok(Self::Thread {
id: acp::SessionId::new(thread_id),
id: acp::SessionId(thread_id.into()),
name,
})
} else if let Some(path) = path.strip_prefix("/agent/text-thread/") {


@@ -75,15 +75,11 @@ impl Terminal {
let exit_status = exit_status.map(portable_pty::ExitStatus::from);
let mut status = acp::TerminalExitStatus::new();
if let Some(exit_status) = exit_status.as_ref() {
status = status.exit_code(exit_status.exit_code());
if let Some(signal) = exit_status.signal() {
status = status.signal(signal);
}
acp::TerminalExitStatus {
exit_code: exit_status.as_ref().map(|e| e.exit_code()),
signal: exit_status.and_then(|e| e.signal().map(Into::into)),
meta: None,
}
status
})
.shared(),
}
@@ -105,23 +101,27 @@ impl Terminal {
pub fn current_output(&self, cx: &App) -> acp::TerminalOutputResponse {
if let Some(output) = self.output.as_ref() {
let mut exit_status = acp::TerminalExitStatus::new();
if let Some(status) = output.exit_status.map(portable_pty::ExitStatus::from) {
exit_status = exit_status.exit_code(status.exit_code());
if let Some(signal) = status.signal() {
exit_status = exit_status.signal(signal);
}
}
let exit_status = output.exit_status.map(portable_pty::ExitStatus::from);
acp::TerminalOutputResponse::new(
output.content.clone(),
output.original_content_len > output.content.len(),
)
.exit_status(exit_status)
acp::TerminalOutputResponse {
output: output.content.clone(),
truncated: output.original_content_len > output.content.len(),
exit_status: Some(acp::TerminalExitStatus {
exit_code: exit_status.as_ref().map(|e| e.exit_code()),
signal: exit_status.and_then(|e| e.signal().map(Into::into)),
meta: None,
}),
meta: None,
}
} else {
let (current_content, original_len) = self.truncated_output(cx);
let truncated = current_content.len() < original_len;
acp::TerminalOutputResponse::new(current_content, truncated)
acp::TerminalOutputResponse {
truncated: current_content.len() < original_len,
output: current_content,
exit_status: None,
meta: None,
}
}
}


@@ -528,7 +528,7 @@ impl Render for AcpTools {
.with_sizing_behavior(gpui::ListSizingBehavior::Auto)
.size_full(),
)
.vertical_scrollbar_for(&connection.list_state, window, cx)
.vertical_scrollbar_for(connection.list_state.clone(), window, cx)
.into_any()
}
}


@@ -409,11 +409,9 @@ impl ActionLog {
let new_diff_base = new_diff_base.clone();
async move {
let mut unreviewed_edits = Patch::default();
for hunk in diff_snapshot.hunks_intersecting_range(
Anchor::min_for_buffer(buffer_snapshot.remote_id())
..Anchor::max_for_buffer(buffer_snapshot.remote_id()),
&buffer_snapshot,
) {
for hunk in diff_snapshot
.hunks_intersecting_range(Anchor::MIN..Anchor::MAX, &buffer_snapshot)
{
let old_range = new_diff_base
.offset_to_point(hunk.diff_base_byte_range.start)
..new_diff_base.offset_to_point(hunk.diff_base_byte_range.end);
@@ -734,10 +732,12 @@ impl ActionLog {
cx: &mut Context<Self>,
) -> Task<()> {
let futures = self.changed_buffers(cx).into_keys().map(|buffer| {
let buffer_ranges = vec![Anchor::min_max_range_for_buffer(
buffer.read(cx).remote_id(),
)];
let reject = self.reject_edits_in_ranges(buffer, buffer_ranges, telemetry.clone(), cx);
let reject = self.reject_edits_in_ranges(
buffer,
vec![Anchor::MIN..Anchor::MAX],
telemetry.clone(),
cx,
);
async move {
reject.await.log_err();
@@ -2010,8 +2010,7 @@ mod tests {
// User accepts the single hunk
action_log.update(cx, |log, cx| {
let buffer_range = Anchor::min_max_range_for_buffer(buffer.read(cx).remote_id());
log.keep_edits_in_range(buffer.clone(), buffer_range, None, cx)
log.keep_edits_in_range(buffer.clone(), Anchor::MIN..Anchor::MAX, None, cx)
});
cx.run_until_parked();
assert_eq!(unreviewed_hunks(&action_log, cx), vec![]);
@@ -2032,14 +2031,7 @@ mod tests {
// User rejects the hunk
action_log
.update(cx, |log, cx| {
log.reject_edits_in_ranges(
buffer.clone(),
vec![Anchor::min_max_range_for_buffer(
buffer.read(cx).remote_id(),
)],
None,
cx,
)
log.reject_edits_in_ranges(buffer.clone(), vec![Anchor::MIN..Anchor::MAX], None, cx)
})
.await
.unwrap();


@@ -83,7 +83,6 @@ ctor.workspace = true
db = { workspace = true, "features" = ["test-support"] }
editor = { workspace = true, "features" = ["test-support"] }
env_logger.workspace = true
eval_utils.workspace = true
fs = { workspace = true, "features" = ["test-support"] }
git = { workspace = true, "features" = ["test-support"] }
gpui = { workspace = true, "features" = ["test-support"] }


@@ -170,7 +170,7 @@ impl LanguageModels {
}
fn model_id(model: &Arc<dyn LanguageModel>) -> acp::ModelId {
acp::ModelId::new(format!("{}/{}", model.provider_id().0, model.id().0))
acp::ModelId(format!("{}/{}", model.provider_id().0, model.id().0).into())
}
fn authenticate_all_language_model_providers(cx: &mut App) -> Task<()> {
@@ -789,12 +789,28 @@ impl NativeAgentConnection {
}
ThreadEvent::AgentText(text) => {
acp_thread.update(cx, |thread, cx| {
thread.push_assistant_content_block(text.into(), false, cx)
thread.push_assistant_content_block(
acp::ContentBlock::Text(acp::TextContent {
text,
annotations: None,
meta: None,
}),
false,
cx,
)
})?;
}
ThreadEvent::AgentThinking(text) => {
acp_thread.update(cx, |thread, cx| {
thread.push_assistant_content_block(text.into(), true, cx)
thread.push_assistant_content_block(
acp::ContentBlock::Text(acp::TextContent {
text,
annotations: None,
meta: None,
}),
true,
cx,
)
})?;
}
ThreadEvent::ToolCallAuthorization(ToolCallAuthorization {
@@ -808,9 +824,8 @@ impl NativeAgentConnection {
)
})??;
cx.background_spawn(async move {
if let acp::RequestPermissionOutcome::Selected(
acp::SelectedPermissionOutcome { option_id, .. },
) = outcome_task.await
if let acp::RequestPermissionOutcome::Selected { option_id } =
outcome_task.await
{
response
.send(option_id)
@@ -837,7 +852,10 @@ impl NativeAgentConnection {
}
ThreadEvent::Stop(stop_reason) => {
log::debug!("Assistant message complete: {:?}", stop_reason);
return Ok(acp::PromptResponse::new(stop_reason));
return Ok(acp::PromptResponse {
stop_reason,
meta: None,
});
}
}
}
@@ -849,7 +867,10 @@ impl NativeAgentConnection {
}
log::debug!("Response stream completed");
anyhow::Ok(acp::PromptResponse::new(acp::StopReason::EndTurn))
anyhow::Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
})
})
}
}
@@ -1353,7 +1374,7 @@ mod internal_tests {
IndexMap::from_iter([(
AgentModelGroupName("Fake".into()),
vec![AgentModelInfo {
id: acp::ModelId::new("fake/fake"),
id: acp::ModelId("fake/fake".into()),
name: "Fake".into(),
description: None,
icon: Some(ui::IconName::ZedAssistant),
@@ -1414,7 +1435,7 @@ mod internal_tests {
// Select a model
let selector = connection.model_selector(&session_id).unwrap();
let model_id = acp::ModelId::new("fake/fake");
let model_id = acp::ModelId("fake/fake".into());
cx.update(|cx| selector.select_model(model_id.clone(), cx))
.await
.unwrap();
@@ -1500,14 +1521,20 @@ mod internal_tests {
thread.send(
vec![
"What does ".into(),
acp::ContentBlock::ResourceLink(acp::ResourceLink::new(
"b.md",
MentionUri::File {
acp::ContentBlock::ResourceLink(acp::ResourceLink {
name: "b.md".into(),
uri: MentionUri::File {
abs_path: path!("/a/b.md").into(),
}
.to_uri()
.to_string(),
)),
annotations: None,
description: None,
mime_type: None,
size: None,
title: None,
meta: None,
}),
" mean?".into(),
],
cx,


@@ -366,7 +366,7 @@ impl ThreadsDatabase {
for (id, summary, updated_at) in rows {
threads.push(DbThreadMetadata {
id: acp::SessionId::new(id),
id: acp::SessionId(id),
title: summary.into(),
updated_at: DateTime::parse_from_rfc3339(&updated_at)?.with_timezone(&Utc),
});
@@ -424,20 +424,4 @@ impl ThreadsDatabase {
Ok(())
})
}
pub fn delete_threads(&self) -> Task<Result<()>> {
let connection = self.connection.clone();
self.executor.spawn(async move {
let connection = connection.lock();
let mut delete = connection.exec_bound::<()>(indoc! {"
DELETE FROM threads
"})?;
delete(())?;
Ok(())
})
}
}


@@ -172,14 +172,14 @@ impl EditAgent {
project.set_agent_location(
Some(AgentLocation {
buffer: buffer.downgrade(),
position: language::Anchor::max_for_buffer(buffer.read(cx).remote_id()),
position: language::Anchor::MAX,
}),
cx,
)
});
output_events_tx
.unbounded_send(EditAgentOutputEvent::Edited(
Anchor::min_max_range_for_buffer(buffer.read(cx).remote_id()),
language::Anchor::MIN..language::Anchor::MAX,
))
.ok();
})?;
@@ -187,7 +187,7 @@ impl EditAgent {
while let Some(event) = parse_rx.next().await {
match event? {
CreateFileParserEvent::NewTextChunk { chunk } => {
let buffer_id = cx.update(|cx| {
cx.update(|cx| {
buffer.update(cx, |buffer, cx| buffer.append(chunk, cx));
self.action_log
.update(cx, |log, cx| log.buffer_edited(buffer.clone(), cx));
@@ -195,18 +195,15 @@ impl EditAgent {
project.set_agent_location(
Some(AgentLocation {
buffer: buffer.downgrade(),
position: language::Anchor::max_for_buffer(
buffer.read(cx).remote_id(),
),
position: language::Anchor::MAX,
}),
cx,
)
});
buffer.read(cx).remote_id()
})?;
output_events_tx
.unbounded_send(EditAgentOutputEvent::Edited(
Anchor::min_max_range_for_buffer(buffer_id),
language::Anchor::MIN..language::Anchor::MAX,
))
.ok();
}
@@ -1203,9 +1200,7 @@ mod tests {
project.read_with(cx, |project, _| project.agent_location()),
Some(AgentLocation {
buffer: buffer.downgrade(),
position: language::Anchor::max_for_buffer(
cx.update(|cx| buffer.read(cx).remote_id())
),
position: language::Anchor::MAX
})
);
@@ -1223,9 +1218,7 @@ mod tests {
project.read_with(cx, |project, _| project.agent_location()),
Some(AgentLocation {
buffer: buffer.downgrade(),
position: language::Anchor::max_for_buffer(
cx.update(|cx| buffer.read(cx).remote_id())
),
position: language::Anchor::MAX
})
);
@@ -1243,9 +1236,7 @@ mod tests {
project.read_with(cx, |project, _| project.agent_location()),
Some(AgentLocation {
buffer: buffer.downgrade(),
position: language::Anchor::max_for_buffer(
cx.update(|cx| buffer.read(cx).remote_id())
),
position: language::Anchor::MAX
})
);
@@ -1263,9 +1254,7 @@ mod tests {
project.read_with(cx, |project, _| project.agent_location()),
Some(AgentLocation {
buffer: buffer.downgrade(),
position: language::Anchor::max_for_buffer(
cx.update(|cx| buffer.read(cx).remote_id())
),
position: language::Anchor::MAX
})
);
@@ -1280,9 +1269,7 @@ mod tests {
project.read_with(cx, |project, _| project.agent_location()),
Some(AgentLocation {
buffer: buffer.downgrade(),
position: language::Anchor::max_for_buffer(
cx.update(|cx| buffer.read(cx).remote_id())
),
position: language::Anchor::MAX
})
);
}


@@ -4,7 +4,7 @@ use crate::{
};
use Role::*;
use client::{Client, UserStore};
use eval_utils::{EvalOutput, EvalOutputProcessor, OutcomeKind};
use collections::HashMap;
use fs::FakeFs;
use futures::{FutureExt, future::LocalBoxFuture};
use gpui::{AppContext, TestAppContext, Timer};
@@ -20,62 +20,16 @@ use rand::prelude::*;
use reqwest_client::ReqwestClient;
use serde_json::json;
use std::{
cmp::Reverse,
fmt::{self, Display},
io::Write as _,
path::Path,
str::FromStr,
sync::mpsc,
time::Duration,
};
use util::path;
#[derive(Default, Clone, Debug)]
struct EditAgentOutputProcessor {
mismatched_tag_threshold: f32,
cumulative_tags: usize,
cumulative_mismatched_tags: usize,
eval_outputs: Vec<EvalOutput<EditEvalMetadata>>,
}
fn mismatched_tag_threshold(mismatched_tag_threshold: f32) -> EditAgentOutputProcessor {
EditAgentOutputProcessor {
mismatched_tag_threshold,
cumulative_tags: 0,
cumulative_mismatched_tags: 0,
eval_outputs: Vec::new(),
}
}
#[derive(Clone, Debug)]
struct EditEvalMetadata {
tags: usize,
mismatched_tags: usize,
}
impl EvalOutputProcessor for EditAgentOutputProcessor {
type Metadata = EditEvalMetadata;
fn process(&mut self, output: &EvalOutput<Self::Metadata>) {
if matches!(output.outcome, OutcomeKind::Passed | OutcomeKind::Failed) {
self.cumulative_mismatched_tags += output.metadata.mismatched_tags;
self.cumulative_tags += output.metadata.tags;
self.eval_outputs.push(output.clone());
}
}
fn assert(&mut self) {
let mismatched_tag_ratio =
self.cumulative_mismatched_tags as f32 / self.cumulative_tags as f32;
if mismatched_tag_ratio > self.mismatched_tag_threshold {
for eval_output in &self.eval_outputs {
println!("{}", eval_output.data);
}
panic!(
"Too many mismatched tags: {:?}",
self.cumulative_mismatched_tags
);
}
}
}
#[test]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_extract_handle_command_output() {
@@ -101,19 +55,22 @@ fn eval_extract_handle_command_output() {
include_str!("evals/fixtures/extract_handle_command_output/possible-07.diff"),
];
let edit_description = "Extract `handle_command_output` method from `run_git_blame`.";
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.95,
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the `{input_file_path}` file and extract a method in
the final stanza of `run_git_blame` to deal with command failures,
call it `handle_command_output` and take the std::process::Output as the only parameter.
Do not document the method and do not add any comments.
Read the `{input_file_path}` file and extract a method in
the final stanza of `run_git_blame` to deal with command failures,
call it `handle_command_output` and take the std::process::Output as the only parameter.
Do not document the method and do not add any comments.
Add it right next to `run_git_blame` and copy it verbatim from `run_git_blame`.
"})],
Add it right next to `run_git_blame` and copy it verbatim from `run_git_blame`.
"})],
),
message(
Assistant,
@@ -145,9 +102,9 @@ fn eval_extract_handle_command_output() {
),
],
Some(input_file_content.into()),
EvalAssertion::assert_diff_any(possible_diffs.clone()),
))
});
EvalAssertion::assert_diff_any(possible_diffs),
),
);
}
#[test]
@@ -165,16 +122,18 @@ fn eval_delete_run_git_blame() {
let input_file_content = include_str!("evals/fixtures/delete_run_git_blame/before.rs");
let output_file_content = include_str!("evals/fixtures/delete_run_git_blame/after.rs");
let edit_description = "Delete the `run_git_blame` function.";
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.95,
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the `{input_file_path}` file and delete `run_git_blame`. Just that
one function, not its usages.
"})],
Read the `{input_file_path}` file and delete `run_git_blame`. Just that
one function, not its usages.
"})],
),
message(
Assistant,
@@ -207,8 +166,8 @@ fn eval_delete_run_git_blame() {
],
Some(input_file_content.into()),
EvalAssertion::assert_eq(output_file_content),
))
});
),
);
}
#[test]
@@ -226,16 +185,18 @@ fn eval_translate_doc_comments() {
let input_file_path = "root/canvas.rs";
let input_file_content = include_str!("evals/fixtures/translate_doc_comments/before.rs");
let edit_description = "Translate all doc comments to Italian";
eval_utils::eval(200, 1., mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
200,
1.,
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the {input_file_path} file and edit it (without overwriting it),
translating all the doc comments to italian.
"})],
Read the {input_file_path} file and edit it (without overwriting it),
translating all the doc comments to italian.
"})],
),
message(
Assistant,
@@ -268,8 +229,8 @@ fn eval_translate_doc_comments() {
],
Some(input_file_content.into()),
EvalAssertion::judge_diff("Doc comments were translated to Italian"),
))
});
),
);
}
#[test]
@@ -288,31 +249,33 @@ fn eval_use_wasi_sdk_in_compile_parser_to_wasm() {
let input_file_content =
include_str!("evals/fixtures/use_wasi_sdk_in_compile_parser_to_wasm/before.rs");
let edit_description = "Update compile_parser_to_wasm to use wasi-sdk instead of emscripten";
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.95,
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the `{input_file_path}` file and change `compile_parser_to_wasm` to use `wasi-sdk` instead of emscripten.
Use `ureq` to download the SDK for the current platform and architecture.
Extract the archive into a sibling of `lib` inside the `tree-sitter` directory in the cache_dir.
Compile the parser to wasm using the `bin/clang` executable (or `bin/clang.exe` on windows)
that's inside of the archive.
Don't re-download the SDK if that executable already exists.
Read the `{input_file_path}` file and change `compile_parser_to_wasm` to use `wasi-sdk` instead of emscripten.
Use `ureq` to download the SDK for the current platform and architecture.
Extract the archive into a sibling of `lib` inside the `tree-sitter` directory in the cache_dir.
Compile the parser to wasm using the `bin/clang` executable (or `bin/clang.exe` on windows)
that's inside of the archive.
Don't re-download the SDK if that executable already exists.
Use these clang flags: -fPIC -shared -Os -Wl,--export=tree_sitter_{{language_name}}
Use these clang flags: -fPIC -shared -Os -Wl,--export=tree_sitter_{{language_name}}
Here are the available wasi-sdk assets:
- wasi-sdk-25.0-x86_64-macos.tar.gz
- wasi-sdk-25.0-arm64-macos.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-windows.tar.gz
"})],
Here are the available wasi-sdk assets:
- wasi-sdk-25.0-x86_64-macos.tar.gz
- wasi-sdk-25.0-arm64-macos.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-windows.tar.gz
"})],
),
message(
Assistant,
@@ -389,11 +352,11 @@ fn eval_use_wasi_sdk_in_compile_parser_to_wasm() {
],
Some(input_file_content.into()),
EvalAssertion::judge_diff(indoc! {"
- The compile_parser_to_wasm method has been changed to use wasi-sdk
- ureq is used to download the SDK for current platform and architecture
"}),
))
});
- The compile_parser_to_wasm method has been changed to use wasi-sdk
- ureq is used to download the SDK for current platform and architecture
"}),
),
);
}
#[test]
@@ -417,8 +380,11 @@ fn eval_disable_cursor_blinking() {
include_str!("evals/fixtures/disable_cursor_blinking/possible-03.diff"),
include_str!("evals/fixtures/disable_cursor_blinking/possible-04.diff"),
];
eval_utils::eval(100, 0.51, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.51,
0.05,
EvalInput::from_conversation(
vec![
message(User, [text("Let's research how to cursor blinking works.")]),
message(
@@ -455,10 +421,10 @@ fn eval_disable_cursor_blinking() {
message(
User,
[text(indoc! {"
Comment out the lines that interact with the BlinkManager.
Keep the outer `update` blocks, but comments everything that's inside (including if statements).
Don't add additional comments.
"})],
Comment out the lines that interact with the BlinkManager.
Keep the outer `update` blocks, but comments everything that's inside (including if statements).
Don't add additional comments.
"})],
),
message(
Assistant,
@@ -474,9 +440,9 @@ fn eval_disable_cursor_blinking() {
),
],
Some(input_file_content.into()),
EvalAssertion::assert_diff_any(possible_diffs.clone()),
))
});
EvalAssertion::assert_diff_any(possible_diffs),
),
);
}
#[test]
@@ -501,16 +467,20 @@ fn eval_from_pixels_constructor() {
let input_file_path = "root/canvas.rs";
let input_file_content = include_str!("evals/fixtures/from_pixels_constructor/before.rs");
let edit_description = "Implement from_pixels constructor and add tests.";
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.25), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.95,
// For whatever reason, this eval produces more mismatched tags.
// Increasing for now, let's see if we can bring this down.
0.25,
EvalInput::from_conversation(
vec![
message(
User,
[text(indoc! {"
Introduce a new `from_pixels` constructor in Canvas and
also add tests for it in the same file.
"})],
Introduce a new `from_pixels` constructor in Canvas and
also add tests for it in the same file.
"})],
),
message(
Assistant,
@@ -575,92 +545,92 @@ fn eval_from_pixels_constructor() {
"tool_4",
"grep",
indoc! {"
Found 6 matches:
Found 6 matches:
## Matches in font-kit/src/loaders/core_text.rs
## Matches in font-kit/src/loaders/core_text.rs
### mod test L926-936
```
mod test {
use super::Font;
use crate::properties::{Stretch, Weight};
### mod test L926-936
```
mod test {
use super::Font;
use crate::properties::{Stretch, Weight};
#[cfg(feature = \"source\")]
use crate::source::SystemSource;
#[cfg(feature = \"source\")]
use crate::source::SystemSource;
static TEST_FONT_POSTSCRIPT_NAME: &'static str = \"ArialMT\";
static TEST_FONT_POSTSCRIPT_NAME: &'static str = \"ArialMT\";
#[cfg(feature = \"source\")]
#[test]
```
#[cfg(feature = \"source\")]
#[test]
```
55 lines remaining in ancestor node. Read the file to see all.
55 lines remaining in ancestor node. Read the file to see all.
### mod test L947-951
```
}
### mod test L947-951
```
}
#[test]
fn test_core_text_to_css_font_weight() {
// Exact matches
```
#[test]
fn test_core_text_to_css_font_weight() {
// Exact matches
```
### mod test L959-963
```
}
### mod test L959-963
```
}
#[test]
fn test_core_text_to_css_font_stretch() {
// Exact matches
```
#[test]
fn test_core_text_to_css_font_stretch() {
// Exact matches
```
## Matches in font-kit/src/loaders/freetype.rs
## Matches in font-kit/src/loaders/freetype.rs
### mod test L1238-1248
```
mod test {
use crate::loaders::freetype::Font;
### mod test L1238-1248
```
mod test {
use crate::loaders::freetype::Font;
static PCF_FONT_PATH: &str = \"resources/tests/times-roman-pcf/timR12.pcf\";
static PCF_FONT_POSTSCRIPT_NAME: &str = \"Times-Roman\";
static PCF_FONT_PATH: &str = \"resources/tests/times-roman-pcf/timR12.pcf\";
static PCF_FONT_POSTSCRIPT_NAME: &str = \"Times-Roman\";
#[test]
fn get_pcf_postscript_name() {
let font = Font::from_path(PCF_FONT_PATH, 0).unwrap();
assert_eq!(font.postscript_name().unwrap(), PCF_FONT_POSTSCRIPT_NAME);
}
```
#[test]
fn get_pcf_postscript_name() {
let font = Font::from_path(PCF_FONT_PATH, 0).unwrap();
assert_eq!(font.postscript_name().unwrap(), PCF_FONT_POSTSCRIPT_NAME);
}
```
1 lines remaining in ancestor node. Read the file to see all.
1 lines remaining in ancestor node. Read the file to see all.
## Matches in font-kit/src/sources/core_text.rs
## Matches in font-kit/src/sources/core_text.rs
### mod test L265-275
```
mod test {
use crate::properties::{Stretch, Weight};
### mod test L265-275
```
mod test {
use crate::properties::{Stretch, Weight};
#[test]
fn test_css_to_core_text_font_weight() {
// Exact matches
assert_eq!(super::css_to_core_text_font_weight(Weight(100.0)), -0.7);
assert_eq!(super::css_to_core_text_font_weight(Weight(400.0)), 0.0);
assert_eq!(super::css_to_core_text_font_weight(Weight(700.0)), 0.4);
assert_eq!(super::css_to_core_text_font_weight(Weight(900.0)), 0.8);
#[test]
fn test_css_to_core_text_font_weight() {
// Exact matches
assert_eq!(super::css_to_core_text_font_weight(Weight(100.0)), -0.7);
assert_eq!(super::css_to_core_text_font_weight(Weight(400.0)), 0.0);
assert_eq!(super::css_to_core_text_font_weight(Weight(700.0)), 0.4);
assert_eq!(super::css_to_core_text_font_weight(Weight(900.0)), 0.8);
```
```
27 lines remaining in ancestor node. Read the file to see all.
27 lines remaining in ancestor node. Read the file to see all.
### mod test L278-282
```
}
### mod test L278-282
```
}
#[test]
fn test_css_to_core_text_font_stretch() {
// Exact matches
```
"},
#[test]
fn test_css_to_core_text_font_stretch() {
// Exact matches
```
"},
)],
),
message(
@@ -678,11 +648,11 @@ fn eval_from_pixels_constructor() {
],
Some(input_file_content.into()),
EvalAssertion::judge_diff(indoc! {"
- The diff contains a new `from_pixels` constructor
- The diff contains new tests for the `from_pixels` constructor
"}),
))
});
- The diff contains a new `from_pixels` constructor
- The diff contains new tests for the `from_pixels` constructor
"}),
),
);
}
#[test]
@@ -700,9 +670,11 @@ fn eval_zode() {
let input_file_path = "root/zode.py";
let input_content = None;
let edit_description = "Create the main Zode CLI script";
eval_utils::eval(50, 1., mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
50,
1.,
0.05,
EvalInput::from_conversation(
vec![
message(User, [text(include_str!("evals/fixtures/zode/prompt.md"))]),
message(
@@ -761,7 +733,7 @@ fn eval_zode() {
],
),
],
input_content.clone(),
input_content,
EvalAssertion::new(async move |sample, _, _cx| {
let invalid_starts = [' ', '`', '\n'];
let mut message = String::new();
@@ -786,8 +758,8 @@ fn eval_zode() {
})
}
}),
))
});
),
);
}
#[test]
@@ -805,17 +777,19 @@ fn eval_add_overwrite_test() {
let input_file_path = "root/action_log.rs";
let input_file_content = include_str!("evals/fixtures/add_overwrite_test/before.rs");
let edit_description = "Add a new test for overwriting a file in action_log.rs";
eval_utils::eval(200, 0.5, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
200,
0.5, // TODO: make this eval better
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(indoc! {"
Introduce a new test in `action_log.rs` to test overwriting a file.
That is, a file already exists, but we call `buffer_created` as if the file were new.
Take inspiration from all the other tests in the file.
"})],
Introduce a new test in `action_log.rs` to test overwriting a file.
That is, a file already exists, but we call `buffer_created` as if the file were new.
Take inspiration from all the other tests in the file.
"})],
),
message(
Assistant,
@@ -835,81 +809,81 @@ fn eval_add_overwrite_test() {
"tool_1",
"read_file",
indoc! {"
pub struct ActionLog [L13-20]
tracked_buffers [L15]
edited_since_project_diagnostics_check [L17]
project [L19]
impl ActionLog [L22-498]
pub fn new [L24-30]
pub fn project [L32-34]
pub fn checked_project_diagnostics [L37-39]
pub fn has_edited_files_since_project_diagnostics_check [L42-44]
fn track_buffer_internal [L46-101]
fn handle_buffer_event [L103-116]
fn handle_buffer_edited [L118-123]
fn handle_buffer_file_changed [L125-158]
async fn maintain_diff [L160-264]
pub fn buffer_read [L267-269]
pub fn buffer_created [L272-276]
pub fn buffer_edited [L279-287]
pub fn will_delete_buffer [L289-304]
pub fn keep_edits_in_range [L306-364]
pub fn reject_edits_in_ranges [L366-459]
pub fn keep_all_edits [L461-473]
pub fn changed_buffers [L476-482]
pub fn stale_buffers [L485-497]
fn apply_non_conflicting_edits [L500-561]
fn diff_snapshots [L563-585]
fn point_to_row_edit [L587-614]
enum ChangeAuthor [L617-620]
User [L618]
Agent [L619]
enum TrackedBufferStatus [L623-627]
Created [L624]
Modified [L625]
Deleted [L626]
struct TrackedBuffer [L629-641]
buffer [L630]
base_text [L631]
unreviewed_changes [L632]
status [L633]
version [L634]
diff [L635]
snapshot [L636]
diff_update [L637]
_open_lsp_handle [L638]
_maintain_diff [L639]
_subscription [L640]
impl TrackedBuffer [L643-657]
fn has_changes [L644-650]
fn schedule_diff_update [L652-656]
pub struct ChangedBuffer [L659-661]
pub diff [L660]
mod tests [L664-1574]
fn init_logger [L678-682]
fn init_test [L684-691]
async fn test_keep_edits [L694-769]
async fn test_deletions [L772-854]
async fn test_overlapping_user_edits [L857-951]
async fn test_creating_files [L954-1010]
async fn test_deleting_files [L1013-1120]
async fn test_reject_edits [L1123-1255]
async fn test_reject_multiple_edits [L1258-1331]
async fn test_reject_deleted_file [L1334-1388]
async fn test_reject_created_file [L1391-1443]
async fn test_random_diffs [L1446-1535]
fn quiesce [L1510-1534]
struct HunkStatus [L1538-1542]
range [L1539]
diff_status [L1540]
old_text [L1541]
fn unreviewed_hunks [L1544-1573]
pub struct ActionLog [L13-20]
tracked_buffers [L15]
edited_since_project_diagnostics_check [L17]
project [L19]
impl ActionLog [L22-498]
pub fn new [L24-30]
pub fn project [L32-34]
pub fn checked_project_diagnostics [L37-39]
pub fn has_edited_files_since_project_diagnostics_check [L42-44]
fn track_buffer_internal [L46-101]
fn handle_buffer_event [L103-116]
fn handle_buffer_edited [L118-123]
fn handle_buffer_file_changed [L125-158]
async fn maintain_diff [L160-264]
pub fn buffer_read [L267-269]
pub fn buffer_created [L272-276]
pub fn buffer_edited [L279-287]
pub fn will_delete_buffer [L289-304]
pub fn keep_edits_in_range [L306-364]
pub fn reject_edits_in_ranges [L366-459]
pub fn keep_all_edits [L461-473]
pub fn changed_buffers [L476-482]
pub fn stale_buffers [L485-497]
fn apply_non_conflicting_edits [L500-561]
fn diff_snapshots [L563-585]
fn point_to_row_edit [L587-614]
enum ChangeAuthor [L617-620]
User [L618]
Agent [L619]
enum TrackedBufferStatus [L623-627]
Created [L624]
Modified [L625]
Deleted [L626]
struct TrackedBuffer [L629-641]
buffer [L630]
base_text [L631]
unreviewed_changes [L632]
status [L633]
version [L634]
diff [L635]
snapshot [L636]
diff_update [L637]
_open_lsp_handle [L638]
_maintain_diff [L639]
_subscription [L640]
impl TrackedBuffer [L643-657]
fn has_changes [L644-650]
fn schedule_diff_update [L652-656]
pub struct ChangedBuffer [L659-661]
pub diff [L660]
mod tests [L664-1574]
fn init_logger [L678-682]
fn init_test [L684-691]
async fn test_keep_edits [L694-769]
async fn test_deletions [L772-854]
async fn test_overlapping_user_edits [L857-951]
async fn test_creating_files [L954-1010]
async fn test_deleting_files [L1013-1120]
async fn test_reject_edits [L1123-1255]
async fn test_reject_multiple_edits [L1258-1331]
async fn test_reject_deleted_file [L1334-1388]
async fn test_reject_created_file [L1391-1443]
async fn test_random_diffs [L1446-1535]
fn quiesce [L1510-1534]
struct HunkStatus [L1538-1542]
range [L1539]
diff_status [L1540]
old_text [L1541]
fn unreviewed_hunks [L1544-1573]
Showing symbols 1-69 (total symbols: 69)
Showing symbols 1-69 (total symbols: 69)
Using the line numbers in this outline, you can call this tool again while specifying
the start_line and end_line fields to see the implementations of symbols in the outline.
"},
Using the line numbers in this outline, you can call this tool again while specifying
the start_line and end_line fields to see the implementations of symbols in the outline.
"},
)],
),
message(
@@ -989,17 +963,17 @@ fn eval_add_overwrite_test() {
Assistant,
[
text(indoc! {"
Based on my understanding of the existing tests and code, I'll now create a new test for overwriting a file. In this test, we'll:
Based on my understanding of the existing tests and code, I'll now create a new test for overwriting a file. In this test, we'll:
1. Create a file
2. Open a buffer for it
3. Track it in the action log
4. Modify the buffer
5. Call `buffer_created` again as if the file were new
6. Check that the changes are properly tracked
Let's write the test:
"}),
tool_use(
"tool_5",
"edit_file",
@@ -1016,8 +990,8 @@ fn eval_add_overwrite_test() {
EvalAssertion::judge_diff(
"A new test for overwritten files was created, without changing any previous test",
),
))
});
),
);
}
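The numbered steps in the fixture above describe the overwrite scenario this eval judges. As a rough, self-contained illustration of that flow, the FakeActionLog below is a made-up stand-in; the real ActionLog from the outline lives behind gpui entities and buffers and is not reproduced here. It only sketches the intent that re-creating an already-edited buffer should leave its full contents tracked as unreviewed changes.

// Hypothetical stand-in for ActionLog: just enough state to show that "create, edit,
// then create again as if the file were new" ends with unreviewed changes tracked.
#[derive(Default)]
struct FakeActionLog {
    base_text: String,    // the text the user has already reviewed
    current_text: String, // the text currently in the buffer
}

impl FakeActionLog {
    fn buffer_created(&mut self, text: &str) {
        // A created (or overwritten) file has no reviewed base, so everything
        // currently in the buffer counts as an unreviewed agent change.
        self.base_text = String::new();
        self.current_text = text.to_string();
    }

    fn buffer_edited(&mut self, new_text: &str) {
        self.current_text = new_text.to_string();
    }

    fn has_unreviewed_changes(&self) -> bool {
        self.base_text != self.current_text
    }
}

fn main() {
    let mut log = FakeActionLog::default();
    log.buffer_created("fn main() {}\n");                    // steps 1-3: create and track
    log.buffer_edited("fn main() { println!(\"hi\"); }\n");  // step 4: modify the buffer
    log.buffer_created("fn main() { println!(\"hi\"); }\n"); // step 5: created again as if new
    assert!(log.has_unreviewed_changes());                   // step 6: changes are tracked
}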
#[test]
@@ -1042,18 +1016,20 @@ fn eval_create_empty_file() {
let input_file_content = None;
let expected_output_content = String::new();
eval_utils::eval(100, 0.99, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.99,
0.05,
EvalInput::from_conversation(
vec![
message(User, [text("Create a second empty todo file ")]),
message(
Assistant,
[
text(formatdoc! {"
I'll help you create a second empty todo file.
First, let me examine the project structure to see if there's already a todo file, which will help me determine the appropriate name and location for the second one.
"}),
tool_use(
"toolu_01GAF8TtsgpjKxCr8fgQLDgR",
"list_directory",
@@ -1075,8 +1051,8 @@ fn eval_create_empty_file() {
Assistant,
[
text(formatdoc! {"
I can see there's already a `TODO` file in the `root` directory. Let me create a second empty todo file called `TODO3` in the same directory:
"}),
tool_use(
"toolu_01Tb3iQ9griqSYMmVuykQPWU",
"edit_file",
@@ -1089,12 +1065,12 @@ fn eval_create_empty_file() {
],
),
],
input_file_content.clone(),
input_file_content,
// Bad behavior is to write something like
// "I'll create an empty TODO3 file as requested."
EvalAssertion::assert_eq(expected_output_content.clone()),
))
});
EvalAssertion::assert_eq(expected_output_content),
),
);
}
fn message(
@@ -1336,44 +1312,115 @@ impl EvalAssertion {
}
}
fn run_eval(eval: EvalInput) -> eval_utils::EvalOutput<EditEvalMetadata> {
let dispatcher = gpui::TestDispatcher::new(StdRng::from_os_rng());
let mut cx = TestAppContext::build(dispatcher, None);
let result = cx.executor().block_test(async {
let test = EditAgentTest::new(&mut cx).await;
test.eval(eval, &mut cx).await
});
match result {
Ok(output) => eval_utils::EvalOutput {
data: output.to_string(),
outcome: if output.assertion.score < 80 {
eval_utils::OutcomeKind::Failed
} else {
eval_utils::OutcomeKind::Passed
},
metadata: EditEvalMetadata {
tags: output.sample.edit_output.parser_metrics.tags,
mismatched_tags: output.sample.edit_output.parser_metrics.mismatched_tags,
},
},
Err(e) => eval_utils::EvalOutput {
data: format!("{e:?}"),
outcome: eval_utils::OutcomeKind::Error,
metadata: EditEvalMetadata {
tags: 0,
mismatched_tags: 0,
},
},
fn eval(
iterations: usize,
expected_pass_ratio: f32,
mismatched_tag_threshold: f32,
mut eval: EvalInput,
) {
let mut evaluated_count = 0;
let mut failed_count = 0;
report_progress(evaluated_count, failed_count, iterations);
let (tx, rx) = mpsc::channel();
// Cache the last message in the conversation, and run one instance of the eval so that
// all the next ones are cached.
eval.conversation.last_mut().unwrap().cache = true;
run_eval(eval.clone(), tx.clone());
let executor = gpui::background_executor();
let semaphore = Arc::new(smol::lock::Semaphore::new(32));
for _ in 1..iterations {
let eval = eval.clone();
let tx = tx.clone();
let semaphore = semaphore.clone();
executor
.spawn(async move {
let _guard = semaphore.acquire().await;
run_eval(eval, tx)
})
.detach();
}
drop(tx);
let mut failed_evals = HashMap::default();
let mut errored_evals = HashMap::default();
let mut eval_outputs = Vec::new();
let mut cumulative_parser_metrics = EditParserMetrics::default();
while let Ok(output) = rx.recv() {
match output {
Ok(output) => {
cumulative_parser_metrics += output.sample.edit_output.parser_metrics.clone();
eval_outputs.push(output.clone());
if output.assertion.score < 80 {
failed_count += 1;
failed_evals
.entry(output.sample.text_after.clone())
.or_insert(Vec::new())
.push(output);
}
}
Err(error) => {
failed_count += 1;
*errored_evals.entry(format!("{:?}", error)).or_insert(0) += 1;
}
}
evaluated_count += 1;
report_progress(evaluated_count, failed_count, iterations);
}
let actual_pass_ratio = (iterations - failed_count) as f32 / iterations as f32;
println!("Actual pass ratio: {}\n", actual_pass_ratio);
if actual_pass_ratio < expected_pass_ratio {
let mut errored_evals = errored_evals.into_iter().collect::<Vec<_>>();
errored_evals.sort_by_key(|(_, count)| Reverse(*count));
for (error, count) in errored_evals {
println!("Eval errored {} times. Error: {}", count, error);
}
let mut failed_evals = failed_evals.into_iter().collect::<Vec<_>>();
failed_evals.sort_by_key(|(_, evals)| Reverse(evals.len()));
for (_buffer_output, failed_evals) in failed_evals {
let eval_output = failed_evals.first().unwrap();
println!("Eval failed {} times", failed_evals.len());
println!("{}", eval_output);
}
panic!(
"Actual pass ratio: {}\nExpected pass ratio: {}",
actual_pass_ratio, expected_pass_ratio
);
}
let mismatched_tag_ratio =
cumulative_parser_metrics.mismatched_tags as f32 / cumulative_parser_metrics.tags as f32;
if mismatched_tag_ratio > mismatched_tag_threshold {
for eval_output in eval_outputs {
println!("{}", eval_output);
}
panic!("Too many mismatched tags: {:?}", cumulative_parser_metrics);
}
}
fn run_eval(eval: EvalInput, tx: mpsc::Sender<Result<EvalOutput>>) {
let dispatcher = gpui::TestDispatcher::new(StdRng::from_os_rng());
let mut cx = TestAppContext::build(dispatcher, None);
let output = cx.executor().block_test(async {
let test = EditAgentTest::new(&mut cx).await;
test.eval(eval, &mut cx).await
});
tx.send(output).unwrap();
}
#[derive(Clone)]
struct EditEvalOutput {
struct EvalOutput {
sample: EvalSample,
assertion: EvalAssertionOutcome,
}
impl Display for EditEvalOutput {
impl Display for EvalOutput {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
writeln!(f, "Score: {:?}", self.assertion.score)?;
if let Some(message) = self.assertion.message.as_ref() {
@@ -1392,6 +1439,22 @@ impl Display for EditEvalOutput {
}
}
fn report_progress(evaluated_count: usize, failed_count: usize, iterations: usize) {
let passed_count = evaluated_count - failed_count;
let passed_ratio = if evaluated_count == 0 {
0.0
} else {
passed_count as f64 / evaluated_count as f64
};
print!(
"\r\x1b[KEvaluated {}/{} ({:.2}% passed)",
evaluated_count,
iterations,
passed_ratio * 100.0
);
std::io::stdout().flush().unwrap();
}
struct EditAgentTest {
agent: EditAgent,
project: Entity<Project>,
@@ -1487,10 +1550,7 @@ impl EditAgentTest {
})
}
async fn eval(&self, mut eval: EvalInput, cx: &mut TestAppContext) -> Result<EditEvalOutput> {
// Make sure the last message in the conversation is cached.
eval.conversation.last_mut().unwrap().cache = true;
async fn eval(&self, eval: EvalInput, cx: &mut TestAppContext) -> Result<EvalOutput> {
let path = self
.project
.read_with(cx, |project, cx| {
@@ -1596,7 +1656,7 @@ impl EditAgentTest {
.run(&sample, self.judge_model.clone(), cx)
.await?;
Ok(EditEvalOutput { assertion, sample })
Ok(EvalOutput { assertion, sample })
}
}
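The eval harness above fans iterations out on gpui's background executor, caps in-flight work with a smol semaphore (32 permits), and drains results over an mpsc channel. The sketch below is a rough, self-contained illustration of that same bounded fan-out/aggregate shape using only std threads and channels; it is not the harness's actual implementation.

use std::sync::{Arc, Mutex, mpsc};
use std::thread;

// Run `jobs` with at most `limit` of them executing concurrently and collect the
// results over a channel, mirroring the "spawn everything, bound with a semaphore,
// drain the receiver" shape of the eval loop above.
fn run_bounded<T, F>(jobs: Vec<F>, limit: usize) -> Vec<T>
where
    T: Send + 'static,
    F: FnOnce() -> T + Send + 'static,
{
    assert!(limit > 0);
    let (job_tx, job_rx) = mpsc::channel::<F>();
    for job in jobs {
        job_tx.send(job).unwrap();
    }
    drop(job_tx); // close the queue so workers stop once it is drained

    let job_rx = Arc::new(Mutex::new(job_rx));
    let (result_tx, result_rx) = mpsc::channel();
    for _ in 0..limit {
        let job_rx = Arc::clone(&job_rx);
        let result_tx = result_tx.clone();
        thread::spawn(move || {
            loop {
                // Take the next job while holding the lock only briefly; recv() never
                // blocks here because all jobs were queued before the workers started
                // and the sender has already been dropped.
                let job = {
                    let rx = job_rx.lock().unwrap();
                    rx.recv()
                };
                match job {
                    Ok(job) => result_tx.send(job()).unwrap(),
                    Err(_) => break, // queue drained
                }
            }
        });
    }
    drop(result_tx); // the iterator below ends once every worker has finished

    result_rx.into_iter().collect()
}

fn main() {
    let jobs: Vec<_> = (0..100).map(|i| move || i * i).collect();
    let mut results = run_bounded(jobs, 32);
    results.sort();
    assert_eq!(results.len(), 100);
    assert_eq!(results[99], 99 * 99);
}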

View File

@@ -188,15 +188,6 @@ impl HistoryStore {
})
}
pub fn delete_threads(&mut self, cx: &mut Context<Self>) -> Task<Result<()>> {
let database_future = ThreadsDatabase::connect(cx);
cx.spawn(async move |this, cx| {
let database = database_future.await.map_err(|err| anyhow!(err))?;
database.delete_threads().await?;
this.update(cx, |this, cx| this.reload(cx))
})
}
pub fn delete_text_thread(
&mut self,
path: Arc<Path>,
@@ -354,9 +345,9 @@ impl HistoryStore {
.into_iter()
.take(MAX_RECENTLY_OPENED_ENTRIES)
.flat_map(|entry| match entry {
SerializedRecentOpen::AcpThread(id) => {
Some(HistoryEntryId::AcpThread(acp::SessionId::new(id.as_str())))
}
SerializedRecentOpen::AcpThread(id) => Some(HistoryEntryId::AcpThread(
acp::SessionId(id.as_str().into()),
)),
SerializedRecentOpen::TextThread(file_name) => Some(
HistoryEntryId::TextThread(text_threads_dir().join(file_name).into()),
),

View File

@@ -66,9 +66,11 @@ pub async fn get_buffer_content_or_outline(
let outline_text = render_outline(outline_items, None, 0, usize::MAX).await?;
let text = if let Some(path) = path {
format!("# File outline for {path}\n\n{outline_text}",)
format!(
"# File outline for {path} (file too large to show full content)\n\n{outline_text}",
)
} else {
format!("# File outline\n\n{outline_text}",)
format!("# File outline (file too large to show full content)\n\n{outline_text}",)
};
Ok(BufferContent {
text,

View File

@@ -493,14 +493,14 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
// Approve the first
tool_call_auth_1
.response
.send(tool_call_auth_1.options[1].option_id.clone())
.send(tool_call_auth_1.options[1].id.clone())
.unwrap();
cx.run_until_parked();
// Reject the second
tool_call_auth_2
.response
.send(tool_call_auth_1.options[2].option_id.clone())
.send(tool_call_auth_1.options[2].id.clone())
.unwrap();
cx.run_until_parked();
@@ -510,14 +510,14 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
message.content,
vec![
language_model::MessageContent::ToolResult(LanguageModelToolResult {
tool_use_id: tool_call_auth_1.tool_call.tool_call_id.0.to_string().into(),
tool_use_id: tool_call_auth_1.tool_call.id.0.to_string().into(),
tool_name: ToolRequiringPermission::name().into(),
is_error: false,
content: "Allowed".into(),
output: Some("Allowed".into())
}),
language_model::MessageContent::ToolResult(LanguageModelToolResult {
tool_use_id: tool_call_auth_2.tool_call.tool_call_id.0.to_string().into(),
tool_use_id: tool_call_auth_2.tool_call.id.0.to_string().into(),
tool_name: ToolRequiringPermission::name().into(),
is_error: true,
content: "Permission to run tool denied by user".into(),
@@ -543,7 +543,7 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
let tool_call_auth_3 = next_tool_call_authorization(&mut events).await;
tool_call_auth_3
.response
.send(tool_call_auth_3.options[0].option_id.clone())
.send(tool_call_auth_3.options[0].id.clone())
.unwrap();
cx.run_until_parked();
let completion = fake_model.pending_completions().pop().unwrap();
@@ -552,7 +552,7 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
message.content,
vec![language_model::MessageContent::ToolResult(
LanguageModelToolResult {
tool_use_id: tool_call_auth_3.tool_call.tool_call_id.0.to_string().into(),
tool_use_id: tool_call_auth_3.tool_call.id.0.to_string().into(),
tool_name: ToolRequiringPermission::name().into(),
is_error: false,
content: "Allowed".into(),
@@ -1353,20 +1353,20 @@ async fn test_cancellation(cx: &mut TestAppContext) {
ThreadEvent::ToolCall(tool_call) => {
assert_eq!(tool_call.title, expected_tools.remove(0));
if tool_call.title == "Echo" {
echo_id = Some(tool_call.tool_call_id);
echo_id = Some(tool_call.id);
}
}
ThreadEvent::ToolCallUpdate(acp_thread::ToolCallUpdate::UpdateFields(
acp::ToolCallUpdate {
tool_call_id,
id,
fields:
acp::ToolCallUpdateFields {
status: Some(acp::ToolCallStatus::Completed),
..
},
..
meta: None,
},
)) if Some(&tool_call_id) == echo_id.as_ref() => {
)) if Some(&id) == echo_id.as_ref() => {
echo_completed = true;
}
_ => {}
@@ -1995,7 +1995,11 @@ async fn test_agent_connection(cx: &mut TestAppContext) {
.update(|cx| {
connection.prompt(
Some(acp_thread::UserMessageId::new()),
acp::PromptRequest::new(session_id.clone(), vec!["ghi".into()]),
acp::PromptRequest {
session_id: session_id.clone(),
prompt: vec!["ghi".into()],
meta: None,
},
cx,
)
})
@@ -2052,50 +2056,68 @@ async fn test_tool_updates_to_completion(cx: &mut TestAppContext) {
let tool_call = expect_tool_call(&mut events).await;
assert_eq!(
tool_call,
acp::ToolCall::new("1", "Thinking")
.kind(acp::ToolKind::Think)
.raw_input(json!({}))
.meta(acp::Meta::from_iter([(
"tool_name".into(),
"thinking".into()
)]))
acp::ToolCall {
id: acp::ToolCallId("1".into()),
title: "Thinking".into(),
kind: acp::ToolKind::Think,
status: acp::ToolCallStatus::Pending,
content: vec![],
locations: vec![],
raw_input: Some(json!({})),
raw_output: None,
meta: Some(json!({ "tool_name": "thinking" })),
}
);
let update = expect_tool_call_update_fields(&mut events).await;
assert_eq!(
update,
acp::ToolCallUpdate::new(
"1",
acp::ToolCallUpdateFields::new()
.title("Thinking")
.kind(acp::ToolKind::Think)
.raw_input(json!({ "content": "Thinking hard!"}))
)
acp::ToolCallUpdate {
id: acp::ToolCallId("1".into()),
fields: acp::ToolCallUpdateFields {
title: Some("Thinking".into()),
kind: Some(acp::ToolKind::Think),
raw_input: Some(json!({ "content": "Thinking hard!" })),
..Default::default()
},
meta: None,
}
);
let update = expect_tool_call_update_fields(&mut events).await;
assert_eq!(
update,
acp::ToolCallUpdate::new(
"1",
acp::ToolCallUpdateFields::new().status(acp::ToolCallStatus::InProgress)
)
acp::ToolCallUpdate {
id: acp::ToolCallId("1".into()),
fields: acp::ToolCallUpdateFields {
status: Some(acp::ToolCallStatus::InProgress),
..Default::default()
},
meta: None,
}
);
let update = expect_tool_call_update_fields(&mut events).await;
assert_eq!(
update,
acp::ToolCallUpdate::new(
"1",
acp::ToolCallUpdateFields::new().content(vec!["Thinking hard!".into()])
)
acp::ToolCallUpdate {
id: acp::ToolCallId("1".into()),
fields: acp::ToolCallUpdateFields {
content: Some(vec!["Thinking hard!".into()]),
..Default::default()
},
meta: None,
}
);
let update = expect_tool_call_update_fields(&mut events).await;
assert_eq!(
update,
acp::ToolCallUpdate::new(
"1",
acp::ToolCallUpdateFields::new()
.status(acp::ToolCallStatus::Completed)
.raw_output("Finished thinking.".into())
)
acp::ToolCallUpdate {
id: acp::ToolCallId("1".into()),
fields: acp::ToolCallUpdateFields {
status: Some(acp::ToolCallStatus::Completed),
raw_output: Some("Finished thinking.".into()),
..Default::default()
},
meta: None,
}
);
}
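The assertions above, like most of the hunks that follow, replace agent-client-protocol's builder-style constructors (`acp::ToolCall::new(...).kind(...).raw_input(...)`) with struct literals that spell out every field, padding the rest with `meta: None` or `..Default::default()`. A tiny self-contained illustration of the two equivalent styles on a hypothetical Update type (not the real acp::ToolCallUpdate):

// Hypothetical type standing in for an ACP update: optional fields plus Default.
#[derive(Debug, Default, PartialEq)]
struct Update {
    id: String,
    title: Option<String>,
    status: Option<&'static str>,
    meta: Option<String>,
}

impl Update {
    // Builder style: start from an id, then chain setters for the fields you care about.
    fn new(id: impl Into<String>) -> Self {
        Update { id: id.into(), ..Default::default() }
    }
    fn title(mut self, title: impl Into<String>) -> Self {
        self.title = Some(title.into());
        self
    }
    fn status(mut self, status: &'static str) -> Self {
        self.status = Some(status);
        self
    }
}

fn main() {
    let built = Update::new("1").title("Thinking").status("in_progress");

    // Struct-literal style: every field is written out (or filled from Default),
    // which is the form the diff above switches the tests and call sites to.
    let literal = Update {
        id: "1".into(),
        title: Some("Thinking".into()),
        status: Some("in_progress"),
        ..Default::default()
    };

    assert_eq!(built, literal);
}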
@@ -2531,7 +2553,7 @@ fn setup_context_server(
let mut settings = ProjectSettings::get_global(cx).clone();
settings.context_servers.insert(
name.into(),
project::project_settings::ContextServerSettings::Stdio {
project::project_settings::ContextServerSettings::Custom {
enabled: true,
command: ContextServerCommand {
path: "somebinary".into(),

View File

@@ -619,9 +619,12 @@ pub struct Thread {
impl Thread {
fn prompt_capabilities(model: Option<&dyn LanguageModel>) -> acp::PromptCapabilities {
let image = model.map_or(true, |model| model.supports_images());
acp::PromptCapabilities::new()
.image(image)
.embedded_context(true)
acp::PromptCapabilities {
meta: None,
image,
audio: false,
embedded_context: true,
}
}
pub fn new(
@@ -637,7 +640,7 @@ impl Thread {
let (prompt_capabilities_tx, prompt_capabilities_rx) =
watch::channel(Self::prompt_capabilities(model.as_deref()));
Self {
id: acp::SessionId::new(uuid::Uuid::new_v4().to_string()),
id: acp::SessionId(uuid::Uuid::new_v4().to_string().into()),
prompt_id: PromptId::new(),
updated_at: Utc::now(),
title: None,
@@ -734,11 +737,17 @@ impl Thread {
let Some(tool) = tool else {
stream
.0
.unbounded_send(Ok(ThreadEvent::ToolCall(
acp::ToolCall::new(tool_use.id.to_string(), tool_use.name.to_string())
.status(acp::ToolCallStatus::Failed)
.raw_input(tool_use.input.clone()),
)))
.unbounded_send(Ok(ThreadEvent::ToolCall(acp::ToolCall {
meta: None,
id: acp::ToolCallId(tool_use.id.to_string().into()),
title: tool_use.name.to_string(),
kind: acp::ToolKind::Other,
status: acp::ToolCallStatus::Failed,
content: Vec::new(),
locations: Vec::new(),
raw_input: Some(tool_use.input.clone()),
raw_output: None,
})))
.ok();
return;
};
@@ -766,20 +775,24 @@ impl Thread {
.log_err();
}
let mut fields = acp::ToolCallUpdateFields::new().status(tool_result.as_ref().map_or(
acp::ToolCallStatus::Failed,
|result| {
if result.is_error {
acp::ToolCallStatus::Failed
} else {
acp::ToolCallStatus::Completed
}
stream.update_tool_call_fields(
&tool_use.id,
acp::ToolCallUpdateFields {
status: Some(
tool_result
.as_ref()
.map_or(acp::ToolCallStatus::Failed, |result| {
if result.is_error {
acp::ToolCallStatus::Failed
} else {
acp::ToolCallStatus::Completed
}
}),
),
raw_output: output,
..Default::default()
},
));
if let Some(output) = output {
fields = fields.raw_output(output);
}
stream.update_tool_call_fields(&tool_use.id, fields);
);
}
pub fn from_db(
@@ -1259,15 +1272,18 @@ impl Thread {
while let Some(tool_result) = tool_results.next().await {
log::debug!("Tool finished {:?}", tool_result);
let mut fields = acp::ToolCallUpdateFields::new().status(if tool_result.is_error {
acp::ToolCallStatus::Failed
} else {
acp::ToolCallStatus::Completed
});
if let Some(output) = &tool_result.output {
fields = fields.raw_output(output.clone());
}
event_stream.update_tool_call_fields(&tool_result.tool_use_id, fields);
event_stream.update_tool_call_fields(
&tool_result.tool_use_id,
acp::ToolCallUpdateFields {
status: Some(if tool_result.is_error {
acp::ToolCallStatus::Failed
} else {
acp::ToolCallStatus::Completed
}),
raw_output: tool_result.output.clone(),
..Default::default()
},
);
this.update(cx, |this, _cx| {
this.pending_message()
.tool_results
@@ -1544,10 +1560,12 @@ impl Thread {
} else {
event_stream.update_tool_call_fields(
&tool_use.id,
acp::ToolCallUpdateFields::new()
.title(title)
.kind(kind)
.raw_input(tool_use.input.clone()),
acp::ToolCallUpdateFields {
title: Some(title.into()),
kind: Some(kind),
raw_input: Some(tool_use.input.clone()),
..Default::default()
},
);
}
@@ -1569,9 +1587,10 @@ impl Thread {
let fs = self.project.read(cx).fs().clone();
let tool_event_stream =
ToolCallEventStream::new(tool_use.id.clone(), event_stream.clone(), Some(fs));
tool_event_stream.update_fields(
acp::ToolCallUpdateFields::new().status(acp::ToolCallStatus::InProgress),
);
tool_event_stream.update_fields(acp::ToolCallUpdateFields {
status: Some(acp::ToolCallStatus::InProgress),
..Default::default()
});
let supports_images = self.model().is_some_and(|model| model.supports_images());
let tool_result = tool.run(tool_use.input, tool_event_stream, cx);
log::debug!("Running tool {}", tool_use.name);
@@ -2362,13 +2381,19 @@ impl ThreadEventStream {
kind: acp::ToolKind,
input: serde_json::Value,
) -> acp::ToolCall {
acp::ToolCall::new(id.to_string(), title)
.kind(kind)
.raw_input(input)
.meta(acp::Meta::from_iter([(
"tool_name".into(),
tool_name.into(),
)]))
acp::ToolCall {
meta: Some(serde_json::json!({
"tool_name": tool_name
})),
id: acp::ToolCallId(id.to_string().into()),
title,
kind,
status: acp::ToolCallStatus::Pending,
content: vec![],
locations: vec![],
raw_input: Some(input),
raw_output: None,
}
}
fn update_tool_call_fields(
@@ -2378,7 +2403,12 @@ impl ThreadEventStream {
) {
self.0
.unbounded_send(Ok(ThreadEvent::ToolCallUpdate(
acp::ToolCallUpdate::new(tool_use_id.to_string(), fields).into(),
acp::ToolCallUpdate {
meta: None,
id: acp::ToolCallId(tool_use_id.to_string().into()),
fields,
}
.into(),
)))
.ok();
}
@@ -2441,7 +2471,7 @@ impl ToolCallEventStream {
.0
.unbounded_send(Ok(ThreadEvent::ToolCallUpdate(
acp_thread::ToolCallUpdateDiff {
id: acp::ToolCallId::new(self.tool_use_id.to_string()),
id: acp::ToolCallId(self.tool_use_id.to_string().into()),
diff,
}
.into(),
@@ -2459,26 +2489,33 @@ impl ToolCallEventStream {
.0
.unbounded_send(Ok(ThreadEvent::ToolCallAuthorization(
ToolCallAuthorization {
tool_call: acp::ToolCallUpdate::new(
self.tool_use_id.to_string(),
acp::ToolCallUpdateFields::new().title(title),
),
tool_call: acp::ToolCallUpdate {
meta: None,
id: acp::ToolCallId(self.tool_use_id.to_string().into()),
fields: acp::ToolCallUpdateFields {
title: Some(title.into()),
..Default::default()
},
},
options: vec![
acp::PermissionOption::new(
acp::PermissionOptionId::new("always_allow"),
"Always Allow",
acp::PermissionOptionKind::AllowAlways,
),
acp::PermissionOption::new(
acp::PermissionOptionId::new("allow"),
"Allow",
acp::PermissionOptionKind::AllowOnce,
),
acp::PermissionOption::new(
acp::PermissionOptionId::new("deny"),
"Deny",
acp::PermissionOptionKind::RejectOnce,
),
acp::PermissionOption {
id: acp::PermissionOptionId("always_allow".into()),
name: "Always Allow".into(),
kind: acp::PermissionOptionKind::AllowAlways,
meta: None,
},
acp::PermissionOption {
id: acp::PermissionOptionId("allow".into()),
name: "Allow".into(),
kind: acp::PermissionOptionKind::AllowOnce,
meta: None,
},
acp::PermissionOption {
id: acp::PermissionOptionId("deny".into()),
name: "Deny".into(),
kind: acp::PermissionOptionKind::RejectOnce,
meta: None,
},
],
response: response_tx,
},
@@ -2623,15 +2660,7 @@ impl UserMessageContent {
// TODO
Self::Text("[blob]".to_string())
}
other => {
log::warn!("Unexpected content type: {:?}", other);
Self::Text("[unknown]".to_string())
}
},
other => {
log::warn!("Unexpected content type: {:?}", other);
Self::Text("[unknown]".to_string())
}
}
}
}
@@ -2639,15 +2668,32 @@ impl UserMessageContent {
impl From<UserMessageContent> for acp::ContentBlock {
fn from(content: UserMessageContent) -> Self {
match content {
UserMessageContent::Text(text) => text.into(),
UserMessageContent::Image(image) => {
acp::ContentBlock::Image(acp::ImageContent::new(image.source, "image/png"))
UserMessageContent::Text(text) => acp::ContentBlock::Text(acp::TextContent {
text,
annotations: None,
meta: None,
}),
UserMessageContent::Image(image) => acp::ContentBlock::Image(acp::ImageContent {
data: image.source.to_string(),
mime_type: "image/png".to_string(),
meta: None,
annotations: None,
uri: None,
}),
UserMessageContent::Mention { uri, content } => {
acp::ContentBlock::Resource(acp::EmbeddedResource {
meta: None,
resource: acp::EmbeddedResourceResource::TextResourceContents(
acp::TextResourceContents {
meta: None,
mime_type: None,
text: content,
uri: uri.to_uri().to_string(),
},
),
annotations: None,
})
}
UserMessageContent::Mention { uri, content } => acp::ContentBlock::Resource(
acp::EmbeddedResource::new(acp::EmbeddedResourceResource::TextResourceContents(
acp::TextResourceContents::new(content, uri.to_uri().to_string()),
)),
),
}
}
}

View File

@@ -273,9 +273,14 @@ impl AgentTool for EditFileTool {
};
let abs_path = project.read(cx).absolute_path(&project_path, cx);
if let Some(abs_path) = abs_path.clone() {
event_stream.update_fields(
ToolCallUpdateFields::new().locations(vec![acp::ToolCallLocation::new(abs_path)]),
);
event_stream.update_fields(ToolCallUpdateFields {
locations: Some(vec![acp::ToolCallLocation {
path: abs_path,
line: None,
meta: None,
}]),
..Default::default()
});
}
let authorize = self.authorize(&input, &event_stream, cx);
@@ -384,11 +389,10 @@ impl AgentTool for EditFileTool {
range.start.to_point(&buffer.snapshot()).row
}).ok();
if let Some(abs_path) = abs_path.clone() {
let mut location = ToolCallLocation::new(abs_path);
if let Some(line) = line {
location = location.line(line);
}
event_stream.update_fields(ToolCallUpdateFields::new().locations(vec![location]));
event_stream.update_fields(ToolCallUpdateFields {
locations: Some(vec![ToolCallLocation { path: abs_path, line, meta: None }]),
..Default::default()
});
}
emitted_location = true;
}

View File

@@ -118,29 +118,33 @@ impl AgentTool for FindPathTool {
let paginated_matches: &[PathBuf] = &matches[cmp::min(input.offset, matches.len())
..cmp::min(input.offset + RESULTS_PER_PAGE, matches.len())];
event_stream.update_fields(
acp::ToolCallUpdateFields::new()
.title(if paginated_matches.is_empty() {
"No matches".into()
} else if paginated_matches.len() == 1 {
"1 match".into()
} else {
format!("{} matches", paginated_matches.len())
})
.content(
paginated_matches
.iter()
.map(|path| {
acp::ToolCallContent::Content(acp::Content::new(
acp::ContentBlock::ResourceLink(acp::ResourceLink::new(
path.to_string_lossy(),
format!("file://{}", path.display()),
)),
))
})
.collect(),
),
);
event_stream.update_fields(acp::ToolCallUpdateFields {
title: Some(if paginated_matches.is_empty() {
"No matches".into()
} else if paginated_matches.len() == 1 {
"1 match".into()
} else {
format!("{} matches", paginated_matches.len())
}),
content: Some(
paginated_matches
.iter()
.map(|path| acp::ToolCallContent::Content {
content: acp::ContentBlock::ResourceLink(acp::ResourceLink {
uri: format!("file://{}", path.display()),
name: path.to_string_lossy().into(),
annotations: None,
description: None,
mime_type: None,
size: None,
title: None,
meta: None,
}),
})
.collect(),
),
..Default::default()
});
Ok(FindPathToolOutput {
offset: input.offset,
@@ -173,7 +177,7 @@ fn search_paths(glob: &str, project: Entity<Project>, cx: &mut App) -> Task<Resu
let mut results = Vec::new();
for snapshot in snapshots {
for entry in snapshot.entries(false, 0) {
if path_matcher.is_match(&snapshot.root_name().join(&entry.path)) {
if path_matcher.is_match(snapshot.root_name().join(&entry.path).as_std_path()) {
results.push(snapshot.absolutize(&entry.path));
}
}

View File

@@ -32,21 +32,8 @@ pub struct GrepToolInput {
/// Do NOT specify a path here! This will only be matched against the code **content**.
pub regex: String,
/// A glob pattern for the paths of files to include in the search.
/// Supports standard glob patterns like "**/*.rs" or "frontend/src/**/*.ts".
/// Supports standard glob patterns like "**/*.rs" or "src/**/*.ts".
/// If omitted, all files in the project will be searched.
///
/// The glob pattern is matched against the full path including the project root directory.
///
/// <example>
/// If the project has the following root directories:
///
/// - /a/b/backend
/// - /c/d/frontend
///
/// Use "backend/**/*.rs" to search only Rust files in the backend root directory.
/// Use "frontend/src/**/*.ts" to search TypeScript files only in the frontend root directory (sub-directory "src").
/// Use "**/*.rs" to search Rust files across all root directories.
/// </example>
pub include_pattern: Option<String>,
/// Optional starting position for paginated results (0-based).
/// When not provided, starts from the beginning.
@@ -145,7 +132,8 @@ impl AgentTool for GrepTool {
let exclude_patterns = global_settings
.file_scan_exclusions
.sources()
.chain(global_settings.private_files.sources());
.iter()
.chain(global_settings.private_files.sources().iter());
match PathMatcher::new(exclude_patterns, path_style) {
Ok(matcher) => matcher,

View File

@@ -17,9 +17,6 @@ use crate::{AgentTool, Thread, ToolCallEventStream, outline};
/// Reads the content of the given file in the project.
///
/// - Never attempt to read a path that hasn't been previously mentioned.
/// - For large files, this tool returns a file outline with symbol names and line numbers instead of the full content.
/// This outline IS a successful response - use the line numbers to read specific sections with start_line/end_line.
/// Do NOT retry reading the same file without line numbers if you receive an outline.
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct ReadFileToolInput {
/// The relative path of the file to read.
@@ -152,12 +149,15 @@ impl AgentTool for ReadFileTool {
}
let file_path = input.path.clone();
let mut location = acp::ToolCallLocation::new(&abs_path);
if let Some(line) = input.start_line {
location = location.line(line.saturating_sub(1));
}
event_stream.update_fields(ToolCallUpdateFields::new().locations(vec![location]));
event_stream.update_fields(ToolCallUpdateFields {
locations: Some(vec![acp::ToolCallLocation {
path: abs_path.clone(),
line: input.start_line.map(|line| line.saturating_sub(1)),
meta: None,
}]),
..Default::default()
});
if image_store::is_image_file(&self.project, &project_path, cx) {
return cx.spawn(async move |cx| {
@@ -254,15 +254,16 @@ impl AgentTool for ReadFileTool {
if buffer_content.is_outline {
Ok(formatdoc! {"
SUCCESS: File outline retrieved. This file is too large to read all at once, so the outline below shows the file's structure with line numbers.
IMPORTANT: Do NOT retry this call without line numbers - you will get the same outline.
Instead, use the line numbers below to read specific sections by calling this tool again with start_line and end_line parameters.
This file was too big to read all at once.
{}
NEXT STEPS: To read a specific symbol's implementation, call read_file with the same path plus start_line and end_line from the outline above.
For example, to read a function shown as [L100-150], use start_line: 100 and end_line: 150.", buffer_content.text
Using the line numbers in this outline, you can call this tool again
while specifying the start_line and end_line fields to see the
implementations of symbols in the outline.
Alternatively, you can fall back to the `grep` tool (if available)
to search the file for specific content.", buffer_content.text
}
.into())
} else {
@@ -274,9 +275,7 @@ impl AgentTool for ReadFileTool {
project.set_agent_location(
Some(AgentLocation {
buffer: buffer.downgrade(),
position: anchor.unwrap_or_else(|| {
text::Anchor::min_for_buffer(buffer.read(cx).remote_id())
}),
position: anchor.unwrap_or(text::Anchor::MIN),
}),
cx,
);
@@ -286,9 +285,12 @@ impl AgentTool for ReadFileTool {
text,
}
.to_string();
event_stream.update_fields(ToolCallUpdateFields::new().content(vec![
acp::ToolCallContent::Content(acp::Content::new(markdown)),
]));
event_stream.update_fields(ToolCallUpdateFields {
content: Some(vec![acp::ToolCallContent::Content {
content: markdown.into(),
}]),
..Default::default()
})
}
})?;
@@ -436,7 +438,7 @@ mod test {
let content = result.to_str().unwrap();
assert_eq!(
content.lines().skip(7).take(6).collect::<Vec<_>>(),
content.lines().skip(4).take(6).collect::<Vec<_>>(),
vec![
"struct Test0 [L1-4]",
" a [L2]",
@@ -471,7 +473,7 @@ mod test {
pretty_assertions::assert_eq!(
content
.lines()
.skip(7)
.skip(4)
.take(expected_content.len())
.collect::<Vec<_>>(),
expected_content
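The outline message earlier in this file's diff tells the model to re-issue the call with start_line/end_line taken from the `[Lstart-Lend]` markers in the outline. The sketch below shows that mapping with a simplified input shape rather than the real ReadFileToolInput; only `path` and `start_line` are visible in the diff, so `end_line` and the field types here are assumptions, and the path is purely illustrative.

// Simplified stand-in for the read_file input: 1-based, inclusive line bounds.
#[derive(Debug, PartialEq)]
struct ReadFileRequest {
    path: String,
    start_line: Option<u32>,
    end_line: Option<u32>,
}

// Given a symbol reported in the outline as e.g. "pub fn keep_edits_in_range [L306-364]",
// build the follow-up request that reads just that symbol's body.
fn follow_up(path: &str, outline_range: (u32, u32)) -> ReadFileRequest {
    ReadFileRequest {
        path: path.to_string(),
        start_line: Some(outline_range.0),
        end_line: Some(outline_range.1),
    }
}

fn main() {
    // Illustrative path; any project-relative file path works the same way.
    let request = follow_up("src/action_log.rs", (306, 364));
    assert_eq!(request.start_line, Some(306));
    assert_eq!(request.end_line, Some(364));
}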

View File

@@ -112,9 +112,10 @@ impl AgentTool for TerminalTool {
.await?;
let terminal_id = terminal.id(cx)?;
event_stream.update_fields(acp::ToolCallUpdateFields::new().content(vec![
acp::ToolCallContent::Terminal(acp::Terminal::new(terminal_id)),
]));
event_stream.update_fields(acp::ToolCallUpdateFields {
content: Some(vec![acp::ToolCallContent::Terminal { terminal_id }]),
..Default::default()
});
let exit_status = terminal.wait_for_exit(cx)?.await;
let output = terminal.current_output(cx)?;

View File

@@ -43,8 +43,10 @@ impl AgentTool for ThinkingTool {
event_stream: ToolCallEventStream,
_cx: &mut App,
) -> Task<Result<String>> {
event_stream
.update_fields(acp::ToolCallUpdateFields::new().content(vec![input.content.into()]));
event_stream.update_fields(acp::ToolCallUpdateFields {
content: Some(vec![input.content.into()]),
..Default::default()
});
Task::ready(Ok("Finished thinking.".to_string()))
}
}

View File

@@ -76,8 +76,10 @@ impl AgentTool for WebSearchTool {
let response = match search_task.await {
Ok(response) => response,
Err(err) => {
event_stream
.update_fields(acp::ToolCallUpdateFields::new().title("Web Search Failed"));
event_stream.update_fields(acp::ToolCallUpdateFields {
title: Some("Web Search Failed".to_string()),
..Default::default()
});
return Err(err);
}
};
@@ -105,23 +107,26 @@ fn emit_update(response: &WebSearchResponse, event_stream: &ToolCallEventStream)
} else {
format!("{} results", response.results.len())
};
event_stream.update_fields(
acp::ToolCallUpdateFields::new()
.title(format!("Searched the web: {result_text}"))
.content(
response
.results
.iter()
.map(|result| {
acp::ToolCallContent::Content(acp::Content::new(
acp::ContentBlock::ResourceLink(
acp::ResourceLink::new(result.title.clone(), result.url.clone())
.title(result.title.clone())
.description(result.text.clone()),
),
))
})
.collect(),
),
);
event_stream.update_fields(acp::ToolCallUpdateFields {
title: Some(format!("Searched the web: {result_text}")),
content: Some(
response
.results
.iter()
.map(|result| acp::ToolCallContent::Content {
content: acp::ContentBlock::ResourceLink(acp::ResourceLink {
name: result.title.clone(),
uri: result.url.clone(),
title: Some(result.title.clone()),
description: Some(result.text.clone()),
mime_type: None,
annotations: None,
size: None,
meta: None,
}),
})
.collect(),
),
..Default::default()
});
}

View File

@@ -76,7 +76,7 @@ pub async fn connect(
Ok(Rc::new(conn) as _)
}
const MINIMUM_SUPPORTED_VERSION: acp::ProtocolVersion = acp::ProtocolVersion::V1;
const MINIMUM_SUPPORTED_VERSION: acp::ProtocolVersion = acp::V1;
impl AcpConnection {
pub async fn stdio(
@@ -173,27 +173,29 @@ impl AcpConnection {
});
})?;
let mut client_info = acp::Implementation::new("zed", version);
if let Some(release_channel) = release_channel {
client_info = client_info.title(release_channel);
}
let response = connection
.initialize(
acp::InitializeRequest::new(acp::ProtocolVersion::V1)
.client_capabilities(
acp::ClientCapabilities::new()
.fs(acp::FileSystemCapability::new()
.read_text_file(true)
.write_text_file(true))
.terminal(true)
// Experimental: Allow for rendering terminal output from the agents
.meta(acp::Meta::from_iter([
("terminal_output".into(), true.into()),
("terminal-auth".into(), true.into()),
])),
)
.client_info(client_info),
)
.initialize(acp::InitializeRequest {
protocol_version: acp::VERSION,
client_capabilities: acp::ClientCapabilities {
fs: acp::FileSystemCapability {
read_text_file: true,
write_text_file: true,
meta: None,
},
terminal: true,
meta: Some(serde_json::json!({
// Experimental: Allow for rendering terminal output from the agents
"terminal_output": true,
"terminal-auth": true,
})),
},
client_info: Some(acp::Implementation {
name: "zed".to_owned(),
title: release_channel.map(|c| c.to_owned()),
version,
}),
meta: None,
})
.await?;
if response.protocol_version < MINIMUM_SUPPORTED_VERSION {
@@ -251,13 +253,14 @@ impl AgentConnection for AcpConnection {
let default_model = self.default_model.clone();
let cwd = cwd.to_path_buf();
let context_server_store = project.read(cx).context_server_store().read(cx);
let mcp_servers = if project.read(cx).is_local() {
context_server_store
.configured_server_ids()
.iter()
.filter_map(|id| {
let configuration = context_server_store.configuration_for_server(id)?;
match &*configuration {
let mcp_servers =
if project.read(cx).is_local() {
context_server_store
.configured_server_ids()
.iter()
.filter_map(|id| {
let configuration = context_server_store.configuration_for_server(id)?;
match &*configuration {
project::context_server_store::ContextServerConfiguration::Custom {
command,
..
@@ -265,41 +268,47 @@ impl AgentConnection for AcpConnection {
| project::context_server_store::ContextServerConfiguration::Extension {
command,
..
} => Some(acp::McpServer::Stdio(
acp::McpServerStdio::new(id.0.to_string(), &command.path)
.args(command.args.clone())
.env(if let Some(env) = command.env.as_ref() {
env.iter()
.map(|(name, value)| acp::EnvVariable::new(name, value))
.collect()
} else {
vec![]
}),
)),
} => Some(acp::McpServer::Stdio {
name: id.0.to_string(),
command: command.path.clone(),
args: command.args.clone(),
env: if let Some(env) = command.env.as_ref() {
env.iter()
.map(|(name, value)| acp::EnvVariable {
name: name.clone(),
value: value.clone(),
meta: None,
})
.collect()
} else {
vec![]
},
}),
project::context_server_store::ContextServerConfiguration::Http {
url,
headers,
} => Some(acp::McpServer::Http(
acp::McpServerHttp::new(id.0.to_string(), url.to_string()).headers(
headers
.iter()
.map(|(name, value)| acp::HttpHeader::new(name, value))
.collect(),
),
)),
} => Some(acp::McpServer::Http {
name: id.0.to_string(),
url: url.to_string(),
headers: headers.iter().map(|(name, value)| acp::HttpHeader {
name: name.clone(),
value: value.clone(),
meta: None,
}).collect(),
}),
}
})
.collect()
} else {
// In SSH projects, the external agent is running on the remote
// machine, and currently we only run MCP servers on the local
// machine. So don't pass any MCP servers to the agent in that case.
Vec::new()
};
cx.spawn(async move |cx| {
let response = conn
.new_session(acp::NewSessionRequest::new(cwd).mcp_servers(mcp_servers))
.new_session(acp::NewSessionRequest { mcp_servers, cwd, meta: None })
.await
.map_err(|err| {
if err.code == acp::ErrorCode::AUTH_REQUIRED.code {
@@ -332,7 +341,11 @@ impl AgentConnection for AcpConnection {
let modes = modes.clone();
let conn = conn.clone();
async move |_| {
let result = conn.set_session_mode(acp::SetSessionModeRequest::new(session_id, default_mode))
let result = conn.set_session_mode(acp::SetSessionModeRequest {
session_id,
mode_id: default_mode,
meta: None,
})
.await.log_err();
if result.is_none() {
@@ -375,7 +388,11 @@ impl AgentConnection for AcpConnection {
let models = models.clone();
let conn = conn.clone();
async move |_| {
let result = conn.set_session_model(acp::SetSessionModelRequest::new(session_id, default_model))
let result = conn.set_session_model(acp::SetSessionModelRequest {
session_id,
model_id: default_model,
meta: None,
})
.await.log_err();
if result.is_none() {
@@ -439,8 +456,12 @@ impl AgentConnection for AcpConnection {
fn authenticate(&self, method_id: acp::AuthMethodId, cx: &mut App) -> Task<Result<()>> {
let conn = self.connection.clone();
cx.foreground_executor().spawn(async move {
conn.authenticate(acp::AuthenticateRequest::new(method_id))
.await?;
conn.authenticate(acp::AuthenticateRequest {
method_id: method_id.clone(),
meta: None,
})
.await?;
Ok(())
})
}
@@ -494,7 +515,10 @@ impl AgentConnection for AcpConnection {
&& (details.contains("This operation was aborted")
|| details.contains("The user aborted a request"))
{
Ok(acp::PromptResponse::new(acp::StopReason::Cancelled))
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::Cancelled,
meta: None,
})
} else {
Err(anyhow!(details))
}
@@ -511,7 +535,10 @@ impl AgentConnection for AcpConnection {
session.suppress_abort_err = true;
}
let conn = self.connection.clone();
let params = acp::CancelNotification::new(session_id.clone());
let params = acp::CancelNotification {
session_id: session_id.clone(),
meta: None,
};
cx.foreground_executor()
.spawn(async move { conn.cancel(params).await })
.detach();
@@ -592,7 +619,11 @@ impl acp_thread::AgentSessionModes for AcpSessionModes {
let state = self.state.clone();
cx.foreground_executor().spawn(async move {
let result = connection
.set_session_mode(acp::SetSessionModeRequest::new(session_id, mode_id))
.set_session_mode(acp::SetSessionModeRequest {
session_id,
mode_id,
meta: None,
})
.await;
if result.is_err() {
@@ -651,7 +682,11 @@ impl acp_thread::AgentModelSelector for AcpModelSelector {
let state = self.state.clone();
cx.foreground_executor().spawn(async move {
let result = connection
.set_session_model(acp::SetSessionModelRequest::new(session_id, model_id))
.set_session_model(acp::SetSessionModelRequest {
session_id,
model_id,
meta: None,
})
.await;
if result.is_err() {
@@ -713,7 +748,10 @@ impl acp::Client for ClientDelegate {
let outcome = task.await;
Ok(acp::RequestPermissionResponse::new(outcome))
Ok(acp::RequestPermissionResponse {
outcome,
meta: None,
})
}
async fn write_text_file(
@@ -745,7 +783,10 @@ impl acp::Client for ClientDelegate {
let content = task.await?;
Ok(acp::ReadTextFileResponse::new(content))
Ok(acp::ReadTextFileResponse {
content,
meta: None,
})
}
async fn session_notification(
@@ -780,7 +821,7 @@ impl acp::Client for ClientDelegate {
if let Some(terminal_info) = meta.get("terminal_info") {
if let Some(id_str) = terminal_info.get("terminal_id").and_then(|v| v.as_str())
{
let terminal_id = acp::TerminalId::new(id_str);
let terminal_id = acp::TerminalId(id_str.into());
let cwd = terminal_info
.get("cwd")
.and_then(|v| v.as_str().map(PathBuf::from));
@@ -796,7 +837,7 @@ impl acp::Client for ClientDelegate {
let lower = cx.new(|cx| builder.subscribe(cx));
thread.on_terminal_provider_event(
TerminalProviderEvent::Created {
terminal_id,
terminal_id: terminal_id.clone(),
label: tc.title.clone(),
cwd,
output_byte_limit: None,
@@ -821,12 +862,15 @@ impl acp::Client for ClientDelegate {
if let Some(meta) = &tcu.meta {
if let Some(term_out) = meta.get("terminal_output") {
if let Some(id_str) = term_out.get("terminal_id").and_then(|v| v.as_str()) {
let terminal_id = acp::TerminalId::new(id_str);
let terminal_id = acp::TerminalId(id_str.into());
if let Some(s) = term_out.get("data").and_then(|v| v.as_str()) {
let data = s.as_bytes().to_vec();
let _ = session.thread.update(&mut self.cx.clone(), |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Output { terminal_id, data },
TerminalProviderEvent::Output {
terminal_id: terminal_id.clone(),
data,
},
cx,
);
});
@@ -837,19 +881,21 @@ impl acp::Client for ClientDelegate {
// terminal_exit
if let Some(term_exit) = meta.get("terminal_exit") {
if let Some(id_str) = term_exit.get("terminal_id").and_then(|v| v.as_str()) {
let terminal_id = acp::TerminalId::new(id_str);
let mut status = acp::TerminalExitStatus::new();
if let Some(code) = term_exit.get("exit_code").and_then(|v| v.as_u64()) {
status = status.exit_code(code as u32)
}
if let Some(signal) = term_exit.get("signal").and_then(|v| v.as_str()) {
status = status.signal(signal);
}
let terminal_id = acp::TerminalId(id_str.into());
let status = acp::TerminalExitStatus {
exit_code: term_exit
.get("exit_code")
.and_then(|v| v.as_u64())
.map(|i| i as u32),
signal: term_exit
.get("signal")
.and_then(|v| v.as_str().map(|s| s.to_string())),
meta: None,
};
let _ = session.thread.update(&mut self.cx.clone(), |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Exit {
terminal_id,
terminal_id: terminal_id.clone(),
status,
},
cx,
@@ -886,7 +932,7 @@ impl acp::Client for ClientDelegate {
// Register with renderer
let terminal_entity = thread.update(&mut self.cx.clone(), |thread, cx| {
thread.register_terminal_created(
acp::TerminalId::new(uuid::Uuid::new_v4().to_string()),
acp::TerminalId(uuid::Uuid::new_v4().to_string().into()),
format!("{} {}", args.command, args.args.join(" ")),
args.cwd.clone(),
args.output_byte_limit,
@@ -896,7 +942,10 @@ impl acp::Client for ClientDelegate {
})?;
let terminal_id =
terminal_entity.read_with(&self.cx, |terminal, _| terminal.id().clone())?;
Ok(acp::CreateTerminalResponse::new(terminal_id))
Ok(acp::CreateTerminalResponse {
terminal_id,
meta: None,
})
}
async fn kill_terminal_command(
@@ -957,7 +1006,10 @@ impl acp::Client for ClientDelegate {
})??
.await;
Ok(acp::WaitForTerminalExitResponse::new(exit_status))
Ok(acp::WaitForTerminalExitResponse {
exit_status,
meta: None,
})
}
}
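The terminal_info/terminal_output/terminal_exit branches above pull everything out of the update's experimental `meta` blob with chained `get`/`and_then` lookups. Below is a minimal runnable sketch of that extraction using serde_json directly; the payload is made up, and only the terminal_exit, terminal_id, exit_code, and signal keys mirror the handler above.

use serde_json::{Value, json};

// Extract the experimental terminal_exit payload from a meta blob. Every piece is
// optional, so missing fields simply come back as None instead of failing.
fn terminal_exit_from_meta(meta: &Value) -> Option<(String, Option<u32>, Option<String>)> {
    let exit = meta.get("terminal_exit")?;
    let terminal_id = exit.get("terminal_id")?.as_str()?.to_string();
    let exit_code = exit
        .get("exit_code")
        .and_then(Value::as_u64)
        .map(|code| code as u32);
    let signal = exit
        .get("signal")
        .and_then(Value::as_str)
        .map(str::to_string);
    Some((terminal_id, exit_code, signal))
}

fn main() {
    let meta = json!({
        "terminal_exit": { "terminal_id": "term-1", "exit_code": 0 }
    });
    assert_eq!(
        terminal_exit_from_meta(&meta),
        Some(("term-1".to_string(), Some(0), None))
    );
}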

View File

@@ -41,7 +41,7 @@ impl AgentServer for ClaudeCode {
settings
.as_ref()
.and_then(|s| s.default_mode.clone().map(acp::SessionModeId::new))
.and_then(|s| s.default_mode.clone().map(|m| acp::SessionModeId(m.into())))
}
fn set_default_mode(&self, mode_id: Option<acp::SessionModeId>, fs: Arc<dyn Fs>, cx: &mut App) {
@@ -62,7 +62,7 @@ impl AgentServer for ClaudeCode {
settings
.as_ref()
.and_then(|s| s.default_model.clone().map(acp::ModelId::new))
.and_then(|s| s.default_model.clone().map(|m| acp::ModelId(m.into())))
}
fn set_default_model(&self, model_id: Option<acp::ModelId>, fs: Arc<dyn Fs>, cx: &mut App) {

View File

@@ -42,7 +42,7 @@ impl AgentServer for Codex {
settings
.as_ref()
.and_then(|s| s.default_mode.clone().map(acp::SessionModeId::new))
.and_then(|s| s.default_mode.clone().map(|m| acp::SessionModeId(m.into())))
}
fn set_default_mode(&self, mode_id: Option<acp::SessionModeId>, fs: Arc<dyn Fs>, cx: &mut App) {
@@ -63,7 +63,7 @@ impl AgentServer for Codex {
settings
.as_ref()
.and_then(|s| s.default_model.clone().map(acp::ModelId::new))
.and_then(|s| s.default_model.clone().map(|m| acp::ModelId(m.into())))
}
fn set_default_model(&self, model_id: Option<acp::ModelId>, fs: Arc<dyn Fs>, cx: &mut App) {

View File

@@ -44,7 +44,7 @@ impl crate::AgentServer for CustomAgentServer {
settings
.as_ref()
.and_then(|s| s.default_mode().map(acp::SessionModeId::new))
.and_then(|s| s.default_mode().map(|m| acp::SessionModeId(m.into())))
}
fn set_default_mode(&self, mode_id: Option<acp::SessionModeId>, fs: Arc<dyn Fs>, cx: &mut App) {
@@ -80,7 +80,7 @@ impl crate::AgentServer for CustomAgentServer {
settings
.as_ref()
.and_then(|s| s.default_model().map(acp::ModelId::new))
.and_then(|s| s.default_model().map(|m| acp::ModelId(m.into())))
}
fn set_default_model(&self, model_id: Option<acp::ModelId>, fs: Arc<dyn Fs>, cx: &mut App) {

View File

@@ -82,9 +82,26 @@ where
.update(cx, |thread, cx| {
thread.send(
vec![
"Read the file ".into(),
acp::ContentBlock::ResourceLink(acp::ResourceLink::new("foo.rs", "foo.rs")),
" and tell me what the content of the println! is".into(),
acp::ContentBlock::Text(acp::TextContent {
text: "Read the file ".into(),
annotations: None,
meta: None,
}),
acp::ContentBlock::ResourceLink(acp::ResourceLink {
uri: "foo.rs".into(),
name: "foo.rs".into(),
annotations: None,
description: None,
mime_type: None,
size: None,
title: None,
meta: None,
}),
acp::ContentBlock::Text(acp::TextContent {
text: " and tell me what the content of the println! is".into(),
annotations: None,
meta: None,
}),
],
cx,
)
@@ -412,7 +429,7 @@ macro_rules! common_e2e_tests {
async fn tool_call_with_permission(cx: &mut ::gpui::TestAppContext) {
$crate::e2e_tests::test_tool_call_with_permission(
$server,
::agent_client_protocol::PermissionOptionId::new($allow_option_id),
::agent_client_protocol::PermissionOptionId($allow_option_id.into()),
cx,
)
.await;

View File

@@ -13,8 +13,7 @@ path = "src/agent_ui.rs"
doctest = false
[features]
test-support = ["gpui/test-support", "language/test-support", "reqwest_client"]
unit-eval = []
test-support = ["gpui/test-support", "language/test-support"]
[dependencies]
acp_thread.workspace = true
@@ -48,7 +47,6 @@ fs.workspace = true
futures.workspace = true
fuzzy.workspace = true
gpui.workspace = true
gpui_tokio.workspace = true
html_to_markdown.workspace = true
http_client.workspace = true
indoc.workspace = true
@@ -100,17 +98,14 @@ workspace.workspace = true
zed_actions.workspace = true
image.workspace = true
async-fs.workspace = true
reqwest_client = { workspace = true, optional = true }
[dev-dependencies]
acp_thread = { workspace = true, features = ["test-support"] }
agent = { workspace = true, features = ["test-support"] }
assistant_text_thread = { workspace = true, features = ["test-support"] }
buffer_diff = { workspace = true, features = ["test-support"] }
clock.workspace = true
db = { workspace = true, features = ["test-support"] }
editor = { workspace = true, features = ["test-support"] }
eval_utils.workspace = true
gpui = { workspace = true, "features" = ["test-support"] }
indoc.workspace = true
language = { workspace = true, "features" = ["test-support"] }
@@ -120,6 +115,5 @@ pretty_assertions.workspace = true
project = { workspace = true, features = ["test-support"] }
semver.workspace = true
rand.workspace = true
reqwest_client.workspace = true
tree-sitter-md.workspace = true
unindent.workspace = true

View File

@@ -432,11 +432,24 @@ mod tests {
let (workspace, cx) =
cx.add_window_view(|window, cx| Workspace::test_new(project.clone(), window, cx));
let tool_call = acp::ToolCall::new("tool", "Tool call")
.status(acp::ToolCallStatus::InProgress)
.content(vec![acp::ToolCallContent::Diff(
acp::Diff::new("/project/hello.txt", "hello world").old_text("hi world"),
)]);
let tool_call = acp::ToolCall {
id: acp::ToolCallId("tool".into()),
title: "Tool call".into(),
kind: acp::ToolKind::Other,
status: acp::ToolCallStatus::InProgress,
content: vec![acp::ToolCallContent::Diff {
diff: acp::Diff {
path: "/project/hello.txt".into(),
old_text: Some("hi world".into()),
new_text: "hello world".into(),
meta: None,
},
}],
locations: vec![],
raw_input: None,
raw_output: None,
meta: None,
};
let connection = Rc::new(StubAgentConnection::new());
let thread = cx
.update(|_, cx| {

View File

@@ -225,13 +225,8 @@ impl MessageEditor {
.iter()
.find(|command| command.name == command_name)?;
let acp::AvailableCommandInput::Unstructured(acp::UnstructuredCommandInput {
mut hint,
..
}) = available_command.input.clone()?
else {
return None;
};
let acp::AvailableCommandInput::Unstructured { mut hint } =
available_command.input.clone()?;
let mut hint_pos = MultiBufferOffset(parsed_command.source_range.end) + 1usize;
if hint_pos > snapshot.len() {
@@ -408,28 +403,34 @@ impl MessageEditor {
} => {
all_tracked_buffers.extend(tracked_buffers.iter().cloned());
if supports_embedded_context {
acp::ContentBlock::Resource(acp::EmbeddedResource::new(
acp::EmbeddedResourceResource::TextResourceContents(
acp::TextResourceContents::new(
content.clone(),
uri.to_uri().to_string(),
acp::ContentBlock::Resource(acp::EmbeddedResource {
annotations: None,
resource:
acp::EmbeddedResourceResource::TextResourceContents(
acp::TextResourceContents {
mime_type: None,
text: content.clone(),
uri: uri.to_uri().to_string(),
meta: None,
},
),
),
))
meta: None,
})
} else {
acp::ContentBlock::ResourceLink(acp::ResourceLink::new(
uri.name(),
uri.to_uri().to_string(),
))
acp::ContentBlock::ResourceLink(acp::ResourceLink {
name: uri.name(),
uri: uri.to_uri().to_string(),
annotations: None,
description: None,
mime_type: None,
size: None,
title: None,
meta: None,
})
}
}
Mention::Image(mention_image) => {
let mut image = acp::ImageContent::new(
mention_image.data.clone(),
mention_image.format.mime_type(),
);
if let Some(uri) = match uri {
let uri = match uri {
MentionUri::File { .. } => Some(uri.to_uri().to_string()),
MentionUri::PastedImage => None,
other => {
@@ -439,14 +440,25 @@ impl MessageEditor {
);
None
}
} {
image = image.uri(uri)
};
acp::ContentBlock::Image(image)
acp::ContentBlock::Image(acp::ImageContent {
annotations: None,
data: mention_image.data.to_string(),
mime_type: mention_image.format.mime_type().into(),
uri,
meta: None,
})
}
Mention::Link => acp::ContentBlock::ResourceLink(
acp::ResourceLink::new(uri.name(), uri.to_uri().to_string()),
),
Mention::Link => acp::ContentBlock::ResourceLink(acp::ResourceLink {
name: uri.name(),
uri: uri.to_uri().to_string(),
annotations: None,
description: None,
mime_type: None,
size: None,
title: None,
meta: None,
}),
};
chunks.push(chunk);
ix = crease_range.end.0;
@@ -734,7 +746,8 @@ impl MessageEditor {
uri,
data,
mime_type,
..
annotations: _,
meta: _,
}) => {
let mention_uri = if let Some(uri) = uri {
MentionUri::parse(&uri, path_style)
@@ -760,7 +773,7 @@ impl MessageEditor {
}),
));
}
_ => {}
acp::ContentBlock::Audio(_) | acp::ContentBlock::Resource(_) => {}
}
}
@@ -1079,7 +1092,12 @@ mod tests {
assert!(error_message.contains("Available commands: none"));
// Now simulate Claude providing its list of available commands (which doesn't include file)
available_commands.replace(vec![acp::AvailableCommand::new("help", "Get help")]);
available_commands.replace(vec![acp::AvailableCommand {
name: "help".to_string(),
description: "Get help".to_string(),
input: None,
meta: None,
}]);
// Test that unsupported slash commands trigger an error when we have a list of available commands
editor.update_in(cx, |editor, window, cx| {
@@ -1193,12 +1211,20 @@ mod tests {
let history_store = cx.new(|cx| HistoryStore::new(text_thread_store, cx));
let prompt_capabilities = Rc::new(RefCell::new(acp::PromptCapabilities::default()));
let available_commands = Rc::new(RefCell::new(vec![
acp::AvailableCommand::new("quick-math", "2 + 2 = 4 - 1 = 3"),
acp::AvailableCommand::new("say-hello", "Say hello to whoever you want").input(
acp::AvailableCommandInput::Unstructured(acp::UnstructuredCommandInput::new(
"<name>",
)),
),
acp::AvailableCommand {
name: "quick-math".to_string(),
description: "2 + 2 = 4 - 1 = 3".to_string(),
input: None,
meta: None,
},
acp::AvailableCommand {
name: "say-hello".to_string(),
description: "Say hello to whoever you want".to_string(),
input: Some(acp::AvailableCommandInput::Unstructured {
hint: "<name>".to_string(),
}),
meta: None,
},
]));
let editor = workspace.update_in(&mut cx, |workspace, window, cx| {
@@ -1397,7 +1423,7 @@ mod tests {
rel_path("b/eight.txt"),
];
let slash = PathStyle::local().primary_separator();
let slash = PathStyle::local().separator();
let mut opened_editors = Vec::new();
for path in paths {
@@ -1478,12 +1504,12 @@ mod tests {
editor.set_text("", window, cx);
});
prompt_capabilities.replace(
acp::PromptCapabilities::new()
.image(true)
.audio(true)
.embedded_context(true),
);
prompt_capabilities.replace(acp::PromptCapabilities {
image: true,
audio: true,
embedded_context: true,
meta: None,
});
cx.simulate_input("Lorem ");
@@ -1934,9 +1960,11 @@ mod tests {
cx,
);
// Enable embedded context so files are actually included
editor
.prompt_capabilities
.replace(acp::PromptCapabilities::new().embedded_context(true));
editor.prompt_capabilities.replace(acp::PromptCapabilities {
embedded_context: true,
meta: None,
..Default::default()
});
editor
})
});
@@ -2015,7 +2043,7 @@ mod tests {
// Create a thread metadata to insert as summary
let thread_metadata = agent::DbThreadMetadata {
id: acp::SessionId::new("thread-123"),
id: acp::SessionId("thread-123".into()),
title: "Previous Conversation".into(),
updated_at: chrono::Utc::now(),
};
@@ -2122,7 +2150,14 @@ mod tests {
.await
.unwrap();
assert_eq!(content, vec!["してhello world".into()]);
assert_eq!(
content,
vec![acp::ContentBlock::Text(acp::TextContent {
text: "してhello world".into(),
annotations: None,
meta: None
})]
);
}
#[gpui::test]
@@ -2201,24 +2236,38 @@ mod tests {
.0;
let main_rs_uri = if cfg!(windows) {
"file:///C:/project/src/main.rs"
"file:///C:/project/src/main.rs".to_string()
} else {
"file:///project/src/main.rs"
"file:///project/src/main.rs".to_string()
};
// When embedded context is `false` we should get a resource link
pretty_assertions::assert_eq!(
content,
vec![
"What is in ".into(),
acp::ContentBlock::ResourceLink(acp::ResourceLink::new("main.rs", main_rs_uri))
acp::ContentBlock::Text(acp::TextContent {
text: "What is in ".to_string(),
annotations: None,
meta: None
}),
acp::ContentBlock::ResourceLink(acp::ResourceLink {
uri: main_rs_uri.clone(),
name: "main.rs".to_string(),
annotations: None,
meta: None,
description: None,
mime_type: None,
size: None,
title: None,
})
]
);
message_editor.update(cx, |editor, _cx| {
editor
.prompt_capabilities
.replace(acp::PromptCapabilities::new().embedded_context(true))
editor.prompt_capabilities.replace(acp::PromptCapabilities {
embedded_context: true,
..Default::default()
})
});
let content = message_editor
@@ -2231,12 +2280,23 @@ mod tests {
pretty_assertions::assert_eq!(
content,
vec![
"What is in ".into(),
acp::ContentBlock::Resource(acp::EmbeddedResource::new(
acp::EmbeddedResourceResource::TextResourceContents(
acp::TextResourceContents::new(file_content, main_rs_uri)
)
))
acp::ContentBlock::Text(acp::TextContent {
text: "What is in ".to_string(),
annotations: None,
meta: None
}),
acp::ContentBlock::Resource(acp::EmbeddedResource {
resource: acp::EmbeddedResourceResource::TextResourceContents(
acp::TextResourceContents {
text: file_content.to_string(),
uri: main_rs_uri,
mime_type: None,
meta: None
}
),
annotations: None,
meta: None
})
]
);
}
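The test above exercises the capability gate on mentions: with `embedded_context` off, the file reference is sent as a resource link; with it on, the file's text is embedded inline. A hypothetical, self-contained sketch of that gate (the Block enum below is a stand-in, not the real acp::ContentBlock):

// Stand-in for the two content-block shapes the test above asserts on.
#[derive(Debug, PartialEq)]
enum Block {
    ResourceLink { uri: String },
    Resource { uri: String, text: String },
}

fn mention_block(embedded_context: bool, uri: &str, contents: &str) -> Block {
    if embedded_context {
        // The agent advertised embedded context, so ship the file's text inline.
        Block::Resource { uri: uri.to_string(), text: contents.to_string() }
    } else {
        // Otherwise only send a link and let the agent fetch the file itself.
        Block::ResourceLink { uri: uri.to_string() }
    }
}

fn main() {
    let uri = "file:///project/src/main.rs";
    assert_eq!(
        mention_block(false, uri, "fn main() {}"),
        Block::ResourceLink { uri: uri.to_string() }
    );
    assert!(matches!(
        mention_block(true, uri, "fn main() {}"),
        Block::Resource { .. }
    ));
}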

View File

@@ -161,7 +161,7 @@ impl Render for ModeSelector {
.map(|mode| mode.name.clone())
.unwrap_or_else(|| "Unknown".into());
let this = cx.weak_entity();
let this = cx.entity();
let icon = if self.menu_handle.is_deployed() {
IconName::ChevronUp
@@ -222,8 +222,7 @@ impl Render for ModeSelector {
y: px(-2.0),
})
.menu(move |window, cx| {
this.update(cx, |this, cx| this.build_context_menu(window, cx))
.ok()
Some(this.update(cx, |this, cx| this.build_context_menu(window, cx)))
})
}
}

View File

@@ -464,7 +464,7 @@ mod tests {
models
.into_iter()
.map(|model| acp_thread::AgentModelInfo {
id: acp::ModelId::new(model.to_string()),
id: acp::ModelId(model.to_string().into()),
name: model.to_string().into(),
description: None,
icon: None,

View File

@@ -1,5 +1,5 @@
use crate::acp::AcpThreadView;
use crate::{AgentPanel, RemoveHistory, RemoveSelectedThread};
use crate::{AgentPanel, RemoveSelectedThread};
use agent::{HistoryEntry, HistoryStore};
use chrono::{Datelike as _, Local, NaiveDate, TimeDelta};
use editor::{Editor, EditorEvent};
@@ -12,7 +12,7 @@ use std::{fmt::Display, ops::Range};
use text::Bias;
use time::{OffsetDateTime, UtcOffset};
use ui::{
HighlightedLabel, IconButtonShape, ListItem, ListItemSpacing, Tab, Tooltip, WithScrollbar,
HighlightedLabel, IconButtonShape, ListItem, ListItemSpacing, Tooltip, WithScrollbar,
prelude::*,
};
@@ -25,7 +25,6 @@ pub struct AcpThreadHistory {
search_query: SharedString,
visible_items: Vec<ListItemType>,
local_timezone: UtcOffset,
confirming_delete_history: bool,
_update_task: Task<()>,
_subscriptions: Vec<gpui::Subscription>,
}
@@ -99,7 +98,6 @@ impl AcpThreadHistory {
)
.unwrap(),
search_query: SharedString::default(),
confirming_delete_history: false,
_subscriptions: vec![search_editor_subscription, history_store_subscription],
_update_task: Task::ready(()),
};
@@ -333,24 +331,6 @@ impl AcpThreadHistory {
task.detach_and_log_err(cx);
}
fn remove_history(&mut self, _window: &mut Window, cx: &mut Context<Self>) {
self.history_store.update(cx, |store, cx| {
store.delete_threads(cx).detach_and_log_err(cx)
});
self.confirming_delete_history = false;
cx.notify();
}
fn prompt_delete_history(&mut self, _window: &mut Window, cx: &mut Context<Self>) {
self.confirming_delete_history = true;
cx.notify();
}
fn cancel_delete_history(&mut self, _window: &mut Window, cx: &mut Context<Self>) {
self.confirming_delete_history = false;
cx.notify();
}
fn render_list_items(
&mut self,
range: Range<usize>,
@@ -446,10 +426,9 @@ impl AcpThreadHistory {
.tooltip(move |_window, cx| {
Tooltip::for_action("Delete", &RemoveSelectedThread, cx)
})
.on_click(cx.listener(move |this, _, _, cx| {
this.remove_thread(ix, cx);
cx.stop_propagation()
})),
.on_click(
cx.listener(move |this, _, _, cx| this.remove_thread(ix, cx)),
),
)
} else {
None
@@ -468,8 +447,6 @@ impl Focusable for AcpThreadHistory {
impl Render for AcpThreadHistory {
fn render(&mut self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
let has_no_history = self.history_store.read(cx).is_empty(cx);
v_flex()
.key_context("ThreadHistory")
.size_full()
@@ -480,12 +457,9 @@ impl Render for AcpThreadHistory {
.on_action(cx.listener(Self::select_last))
.on_action(cx.listener(Self::confirm))
.on_action(cx.listener(Self::remove_selected_thread))
.on_action(cx.listener(|this, _: &RemoveHistory, window, cx| {
this.remove_history(window, cx);
}))
.child(
h_flex()
.h(Tab::container_height(cx))
.h(px(41.)) // Match the toolbar perfectly
.w_full()
.py_1()
.px_2()
@@ -507,7 +481,7 @@ impl Render for AcpThreadHistory {
.overflow_hidden()
.flex_grow();
if has_no_history {
if self.history_store.read(cx).is_empty(cx) {
view.justify_center().items_center().child(
Label::new("You don't have any past threads yet.")
.size(LabelSize::Small)
@@ -528,74 +502,16 @@ impl Render for AcpThreadHistory {
)
.p_1()
.pr_4()
.track_scroll(&self.scroll_handle)
.track_scroll(self.scroll_handle.clone())
.flex_grow(),
)
.vertical_scrollbar_for(&self.scroll_handle, window, cx)
.vertical_scrollbar_for(
self.scroll_handle.clone(),
window,
cx,
)
}
})
.when(!has_no_history, |this| {
this.child(
h_flex()
.p_2()
.border_t_1()
.border_color(cx.theme().colors().border_variant)
.when(!self.confirming_delete_history, |this| {
this.child(
Button::new("delete_history", "Delete All History")
.full_width()
.style(ButtonStyle::Outlined)
.label_size(LabelSize::Small)
.on_click(cx.listener(|this, _, window, cx| {
this.prompt_delete_history(window, cx);
})),
)
})
.when(self.confirming_delete_history, |this| {
this.w_full()
.gap_2()
.flex_wrap()
.justify_between()
.child(
h_flex()
.flex_wrap()
.gap_1()
.child(
Label::new("Delete all threads?")
.size(LabelSize::Small),
)
.child(
Label::new("You won't be able to recover them later.")
.size(LabelSize::Small)
.color(Color::Muted),
),
)
.child(
h_flex()
.gap_1()
.child(
Button::new("cancel_delete", "Cancel")
.label_size(LabelSize::Small)
.on_click(cx.listener(|this, _, window, cx| {
this.cancel_delete_history(window, cx);
})),
)
.child(
Button::new("confirm_delete", "Delete")
.style(ButtonStyle::Tinted(ui::TintColor::Error))
.color(Color::Error)
.label_size(LabelSize::Small)
.on_click(cx.listener(|_, _, window, cx| {
window.dispatch_action(
Box::new(RemoveHistory),
cx,
);
})),
),
)
}),
)
})
}
}

View File

@@ -665,8 +665,6 @@ impl AcpThreadView {
})
});
this.message_editor.focus_handle(cx).focus(window);
cx.notify();
}
Err(err) => {
@@ -1001,10 +999,6 @@ impl AcpThreadView {
}
}
pub fn is_loading(&self) -> bool {
matches!(self.thread_state, ThreadState::Loading { .. })
}
fn resume_chat(&mut self, cx: &mut Context<Self>) {
self.thread_error.take();
let Some(thread) = self.thread() else {
@@ -1476,8 +1470,18 @@ impl AcpThreadView {
.iter()
.any(|method| method.id.0.as_ref() == "claude-login")
{
available_commands.push(acp::AvailableCommand::new("login", "Authenticate"));
available_commands.push(acp::AvailableCommand::new("logout", "Authenticate"));
available_commands.push(acp::AvailableCommand {
name: "login".to_owned(),
description: "Authenticate".to_owned(),
input: None,
meta: None,
});
available_commands.push(acp::AvailableCommand {
name: "logout".to_owned(),
description: "Authenticate".to_owned(),
input: None,
meta: None,
});
}
let has_commands = !available_commands.is_empty();
@@ -2552,7 +2556,7 @@ impl AcpThreadView {
acp::ToolKind::Think => IconName::ToolThink,
acp::ToolKind::Fetch => IconName::ToolWeb,
acp::ToolKind::SwitchMode => IconName::ArrowRightLeft,
acp::ToolKind::Other | _ => IconName::ToolHammer,
acp::ToolKind::Other => IconName::ToolHammer,
})
}
.size(IconSize::Small)
@@ -2804,7 +2808,7 @@ impl AcpThreadView {
})
.gap_0p5()
.children(options.iter().map(move |option| {
let option_id = SharedString::from(option.option_id.0.clone());
let option_id = SharedString::from(option.id.0.clone());
Button::new((option_id, entry_ix), option.name.clone())
.map(|this| {
let (this, action) = match option.kind {
@@ -2820,7 +2824,7 @@ impl AcpThreadView {
this.icon(IconName::Close).icon_color(Color::Error),
Some(&RejectOnce as &dyn Action),
),
acp::PermissionOptionKind::RejectAlways | _ => {
acp::PermissionOptionKind::RejectAlways => {
(this.icon(IconName::Close).icon_color(Color::Error), None)
}
};
@@ -2845,7 +2849,7 @@ impl AcpThreadView {
.label_size(LabelSize::Small)
.on_click(cx.listener({
let tool_call_id = tool_call_id.clone();
let option_id = option.option_id.clone();
let option_id = option.id.clone();
let option_kind = option.kind;
move |this, _, window, cx| {
this.authorize_tool_call(
@@ -3533,7 +3537,7 @@ impl AcpThreadView {
);
this.authenticate(
acp::AuthMethodId::new(method_id.clone()),
acp::AuthMethodId(method_id.clone()),
window,
cx,
)
@@ -3798,64 +3802,48 @@ impl AcpThreadView {
}))
}
fn render_plan_entries(
&self,
plan: &Plan,
window: &mut Window,
cx: &Context<Self>,
) -> impl IntoElement {
v_flex()
.id("plan_items_list")
.max_h_40()
.overflow_y_scroll()
.children(plan.entries.iter().enumerate().flat_map(|(index, entry)| {
let element = h_flex()
.py_1()
.px_2()
.gap_2()
.justify_between()
.bg(cx.theme().colors().editor_background)
.when(index < plan.entries.len() - 1, |parent| {
parent.border_color(cx.theme().colors().border).border_b_1()
})
.child(
h_flex()
.id(("plan_entry", index))
.gap_1p5()
.max_w_full()
.overflow_x_scroll()
.text_xs()
.text_color(cx.theme().colors().text_muted)
.child(match entry.status {
acp::PlanEntryStatus::InProgress => {
Icon::new(IconName::TodoProgress)
.size(IconSize::Small)
.color(Color::Accent)
.with_rotate_animation(2)
.into_any_element()
}
acp::PlanEntryStatus::Completed => {
Icon::new(IconName::TodoComplete)
.size(IconSize::Small)
.color(Color::Success)
.into_any_element()
}
acp::PlanEntryStatus::Pending | _ => {
Icon::new(IconName::TodoPending)
.size(IconSize::Small)
.color(Color::Muted)
.into_any_element()
}
})
.child(MarkdownElement::new(
entry.content.clone(),
plan_label_markdown_style(&entry.status, window, cx),
)),
);
fn render_plan_entries(&self, plan: &Plan, window: &mut Window, cx: &Context<Self>) -> Div {
v_flex().children(plan.entries.iter().enumerate().flat_map(|(index, entry)| {
let element = h_flex()
.py_1()
.px_2()
.gap_2()
.justify_between()
.bg(cx.theme().colors().editor_background)
.when(index < plan.entries.len() - 1, |parent| {
parent.border_color(cx.theme().colors().border).border_b_1()
})
.child(
h_flex()
.id(("plan_entry", index))
.gap_1p5()
.max_w_full()
.overflow_x_scroll()
.text_xs()
.text_color(cx.theme().colors().text_muted)
.child(match entry.status {
acp::PlanEntryStatus::Pending => Icon::new(IconName::TodoPending)
.size(IconSize::Small)
.color(Color::Muted)
.into_any_element(),
acp::PlanEntryStatus::InProgress => Icon::new(IconName::TodoProgress)
.size(IconSize::Small)
.color(Color::Accent)
.with_rotate_animation(2)
.into_any_element(),
acp::PlanEntryStatus::Completed => Icon::new(IconName::TodoComplete)
.size(IconSize::Small)
.color(Color::Success)
.into_any_element(),
})
.child(MarkdownElement::new(
entry.content.clone(),
plan_label_markdown_style(&entry.status, window, cx),
)),
);
Some(element)
}))
.into_any_element()
Some(element)
}))
}
fn render_edits_summary(
@@ -3993,177 +3981,162 @@ impl AcpThreadView {
changed_buffers: &BTreeMap<Entity<Buffer>, Entity<BufferDiff>>,
pending_edits: bool,
cx: &Context<Self>,
) -> impl IntoElement {
) -> Div {
let editor_bg_color = cx.theme().colors().editor_background;
v_flex()
.id("edited_files_list")
.max_h_40()
.overflow_y_scroll()
.children(
changed_buffers
.iter()
.enumerate()
.flat_map(|(index, (buffer, _diff))| {
let file = buffer.read(cx).file()?;
let path = file.path();
let path_style = file.path_style(cx);
let separator = file.path_style(cx).primary_separator();
v_flex().children(changed_buffers.iter().enumerate().flat_map(
|(index, (buffer, _diff))| {
let file = buffer.read(cx).file()?;
let path = file.path();
let path_style = file.path_style(cx);
let separator = file.path_style(cx).separator();
let file_path = path.parent().and_then(|parent| {
if parent.is_empty() {
None
} else {
Some(
Label::new(format!(
"{}{separator}",
parent.display(path_style)
))
.color(Color::Muted)
.size(LabelSize::XSmall)
.buffer_font(cx),
)
}
});
let file_name = path.file_name().map(|name| {
Label::new(name.to_string())
let file_path = path.parent().and_then(|parent| {
if parent.is_empty() {
None
} else {
Some(
Label::new(format!("{}{separator}", parent.display(path_style)))
.color(Color::Muted)
.size(LabelSize::XSmall)
.buffer_font(cx)
.ml_1p5()
});
.buffer_font(cx),
)
}
});
let file_icon = FileIcons::get_icon(path.as_std_path(), cx)
.map(Icon::from_path)
.map(|icon| icon.color(Color::Muted).size(IconSize::Small))
.unwrap_or_else(|| {
Icon::new(IconName::File)
.color(Color::Muted)
.size(IconSize::Small)
});
let file_name = path.file_name().map(|name| {
Label::new(name.to_string())
.size(LabelSize::XSmall)
.buffer_font(cx)
.ml_1p5()
});
let overlay_gradient = linear_gradient(
90.,
linear_color_stop(editor_bg_color, 1.),
linear_color_stop(editor_bg_color.opacity(0.2), 0.),
);
let file_icon = FileIcons::get_icon(path.as_std_path(), cx)
.map(Icon::from_path)
.map(|icon| icon.color(Color::Muted).size(IconSize::Small))
.unwrap_or_else(|| {
Icon::new(IconName::File)
.color(Color::Muted)
.size(IconSize::Small)
});
let element = h_flex()
.group("edited-code")
.id(("file-container", index))
.py_1()
.pl_2()
.pr_1()
.gap_2()
.justify_between()
.bg(editor_bg_color)
.when(index < changed_buffers.len() - 1, |parent| {
parent.border_color(cx.theme().colors().border).border_b_1()
})
let overlay_gradient = linear_gradient(
90.,
linear_color_stop(editor_bg_color, 1.),
linear_color_stop(editor_bg_color.opacity(0.2), 0.),
);
let element = h_flex()
.group("edited-code")
.id(("file-container", index))
.py_1()
.pl_2()
.pr_1()
.gap_2()
.justify_between()
.bg(editor_bg_color)
.when(index < changed_buffers.len() - 1, |parent| {
parent.border_color(cx.theme().colors().border).border_b_1()
})
.child(
h_flex()
.id(("file-name-row", index))
.relative()
.pr_8()
.w_full()
.overflow_x_scroll()
.child(
h_flex()
.id(("file-name-row", index))
.relative()
.pr_8()
.w_full()
.overflow_x_scroll()
.child(
h_flex()
.id(("file-name-path", index))
.cursor_pointer()
.pr_0p5()
.gap_0p5()
.hover(|s| s.bg(cx.theme().colors().element_hover))
.rounded_xs()
.child(file_icon)
.children(file_name)
.children(file_path)
.tooltip(Tooltip::text("Go to File"))
.on_click({
let buffer = buffer.clone();
cx.listener(move |this, _, window, cx| {
this.open_edited_buffer(&buffer, window, cx);
})
}),
)
.child(
div()
.absolute()
.h_full()
.w_12()
.top_0()
.bottom_0()
.right_0()
.bg(overlay_gradient),
),
.id(("file-name-path", index))
.cursor_pointer()
.pr_0p5()
.gap_0p5()
.hover(|s| s.bg(cx.theme().colors().element_hover))
.rounded_xs()
.child(file_icon)
.children(file_name)
.children(file_path)
.tooltip(Tooltip::text("Go to File"))
.on_click({
let buffer = buffer.clone();
cx.listener(move |this, _, window, cx| {
this.open_edited_buffer(&buffer, window, cx);
})
}),
)
.child(
h_flex()
.gap_1()
.visible_on_hover("edited-code")
.child(
Button::new("review", "Review")
.label_size(LabelSize::Small)
.on_click({
let buffer = buffer.clone();
cx.listener(move |this, _, window, cx| {
this.open_edited_buffer(&buffer, window, cx);
})
}),
)
.child(Divider::vertical().color(DividerColor::BorderVariant))
.child(
Button::new("reject-file", "Reject")
.label_size(LabelSize::Small)
.disabled(pending_edits)
.on_click({
let buffer = buffer.clone();
let action_log = action_log.clone();
let telemetry = telemetry.clone();
move |_, _, cx| {
action_log.update(cx, |action_log, cx| {
action_log
div()
.absolute()
.h_full()
.w_12()
.top_0()
.bottom_0()
.right_0()
.bg(overlay_gradient),
),
)
.child(
h_flex()
.gap_1()
.visible_on_hover("edited-code")
.child(
Button::new("review", "Review")
.label_size(LabelSize::Small)
.on_click({
let buffer = buffer.clone();
cx.listener(move |this, _, window, cx| {
this.open_edited_buffer(&buffer, window, cx);
})
}),
)
.child(Divider::vertical().color(DividerColor::BorderVariant))
.child(
Button::new("reject-file", "Reject")
.label_size(LabelSize::Small)
.disabled(pending_edits)
.on_click({
let buffer = buffer.clone();
let action_log = action_log.clone();
let telemetry = telemetry.clone();
move |_, _, cx| {
action_log.update(cx, |action_log, cx| {
action_log
.reject_edits_in_ranges(
buffer.clone(),
vec![Anchor::min_max_range_for_buffer(
buffer.read(cx).remote_id(),
)],
vec![Anchor::MIN..Anchor::MAX],
Some(telemetry.clone()),
cx,
)
.detach_and_log_err(cx);
})
}
}),
)
.child(
Button::new("keep-file", "Keep")
.label_size(LabelSize::Small)
.disabled(pending_edits)
.on_click({
let buffer = buffer.clone();
let action_log = action_log.clone();
let telemetry = telemetry.clone();
move |_, _, cx| {
action_log.update(cx, |action_log, cx| {
action_log.keep_edits_in_range(
buffer.clone(),
Anchor::min_max_range_for_buffer(
buffer.read(cx).remote_id(),
),
Some(telemetry.clone()),
cx,
);
})
}
}),
),
);
})
}
}),
)
.child(
Button::new("keep-file", "Keep")
.label_size(LabelSize::Small)
.disabled(pending_edits)
.on_click({
let buffer = buffer.clone();
let action_log = action_log.clone();
let telemetry = telemetry.clone();
move |_, _, cx| {
action_log.update(cx, |action_log, cx| {
action_log.keep_edits_in_range(
buffer.clone(),
Anchor::MIN..Anchor::MAX,
Some(telemetry.clone()),
cx,
);
})
}
}),
),
);
Some(element)
}),
)
.into_any_element()
Some(element)
},
))
}
fn render_message_editor(&mut self, window: &mut Window, cx: &mut Context<Self>) -> AnyElement {
@@ -4184,10 +4157,8 @@ impl AcpThreadView {
.block_mouse_except_scroll();
let enable_editor = match self.thread_state {
ThreadState::Ready { .. } => true,
ThreadState::Loading { .. }
| ThreadState::Unauthenticated { .. }
| ThreadState::LoadError(..) => false,
ThreadState::Loading { .. } | ThreadState::Ready { .. } => true,
ThreadState::Unauthenticated { .. } | ThreadState::LoadError(..) => false,
};
v_flex()
@@ -4419,7 +4390,7 @@ impl AcpThreadView {
self.authorize_tool_call(
tool_call.id.clone(),
option.option_id.clone(),
option.id.clone(),
option.kind,
window,
cx,
@@ -4772,8 +4743,11 @@ impl AcpThreadView {
let buffer = multibuffer.as_singleton();
if agent_location.buffer.upgrade() == buffer {
let excerpt_id = multibuffer.excerpt_ids().first().cloned();
let anchor =
editor::Anchor::in_buffer(excerpt_id.unwrap(), agent_location.position);
let anchor = editor::Anchor::in_buffer(
excerpt_id.unwrap(),
buffer.unwrap().read(cx).remote_id(),
agent_location.position,
);
editor.change_selections(Default::default(), window, cx, |selections| {
selections.select_anchor_ranges([anchor..anchor]);
})
@@ -5848,10 +5822,12 @@ fn placeholder_text(agent_name: &str, has_commands: bool) -> String {
impl Focusable for AcpThreadView {
fn focus_handle(&self, cx: &App) -> FocusHandle {
match self.thread_state {
ThreadState::Ready { .. } => self.active_editor(cx).focus_handle(cx),
ThreadState::Loading { .. }
| ThreadState::LoadError(_)
| ThreadState::Unauthenticated { .. } => self.focus_handle.clone(),
ThreadState::Loading { .. } | ThreadState::Ready { .. } => {
self.active_editor(cx).focus_handle(cx)
}
ThreadState::LoadError(_) | ThreadState::Unauthenticated { .. } => {
self.focus_handle.clone()
}
}
}
}
@@ -5919,7 +5895,7 @@ impl Render for AcpThreadView {
.flex_grow()
.into_any(),
)
.vertical_scrollbar_for(&self.list_state, window, cx)
.vertical_scrollbar_for(self.list_state.clone(), window, cx)
.into_any()
} else {
this.child(self.render_recent_history(cx)).into_any()
@@ -6235,18 +6211,27 @@ pub(crate) mod tests {
async fn test_notification_for_tool_authorization(cx: &mut TestAppContext) {
init_test(cx);
let tool_call_id = acp::ToolCallId::new("1");
let tool_call = acp::ToolCall::new(tool_call_id.clone(), "Label")
.kind(acp::ToolKind::Edit)
.content(vec!["hi".into()]);
let tool_call_id = acp::ToolCallId("1".into());
let tool_call = acp::ToolCall {
id: tool_call_id.clone(),
title: "Label".into(),
kind: acp::ToolKind::Edit,
status: acp::ToolCallStatus::Pending,
content: vec!["hi".into()],
locations: vec![],
raw_input: None,
raw_output: None,
meta: None,
};
let connection =
StubAgentConnection::new().with_permission_requests(HashMap::from_iter([(
tool_call_id,
vec![acp::PermissionOption::new(
"1".into(),
"Allow",
acp::PermissionOptionKind::AllowOnce,
)],
vec![acp::PermissionOption {
id: acp::PermissionOptionId("1".into()),
name: "Allow".into(),
kind: acp::PermissionOptionKind::AllowOnce,
meta: None,
}],
)]));
connection.set_next_prompt_updates(vec![acp::SessionUpdate::ToolCall(tool_call)]);
@@ -6465,7 +6450,10 @@ pub(crate) mod tests {
fn default_response() -> Self {
let conn = StubAgentConnection::new();
conn.set_next_prompt_updates(vec![acp::SessionUpdate::AgentMessageChunk(
acp::ContentChunk::new("Default response".into()),
acp::ContentChunk {
content: "Default response".into(),
meta: None,
},
)]);
Self::new(conn)
}
@@ -6522,13 +6510,13 @@ pub(crate) mod tests {
self,
project,
action_log,
SessionId::new("test"),
watch::Receiver::constant(
acp::PromptCapabilities::new()
.image(true)
.audio(true)
.embedded_context(true),
),
SessionId("test".into()),
watch::Receiver::constant(acp::PromptCapabilities {
image: true,
audio: true,
embedded_context: true,
meta: None,
}),
cx,
)
})))
@@ -6586,13 +6574,13 @@ pub(crate) mod tests {
self,
project,
action_log,
SessionId::new("test"),
watch::Receiver::constant(
acp::PromptCapabilities::new()
.image(true)
.audio(true)
.embedded_context(true),
),
SessionId("test".into()),
watch::Receiver::constant(acp::PromptCapabilities {
image: true,
audio: true,
embedded_context: true,
meta: None,
}),
cx,
)
})))
@@ -6616,7 +6604,10 @@ pub(crate) mod tests {
_params: acp::PromptRequest,
_cx: &mut App,
) -> Task<gpui::Result<acp::PromptResponse>> {
Task::ready(Ok(acp::PromptResponse::new(acp::StopReason::Refusal)))
Task::ready(Ok(acp::PromptResponse {
stop_reason: acp::StopReason::Refusal,
meta: None,
}))
}
fn cancel(&self, _session_id: &acp::SessionId, _cx: &mut App) {
@@ -6684,14 +6675,24 @@ pub(crate) mod tests {
.unwrap();
// First user message
connection.set_next_prompt_updates(vec![acp::SessionUpdate::ToolCall(
acp::ToolCall::new("tool1", "Edit file 1")
.kind(acp::ToolKind::Edit)
.status(acp::ToolCallStatus::Completed)
.content(vec![acp::ToolCallContent::Diff(
acp::Diff::new("/project/test1.txt", "new content 1").old_text("old content 1"),
)]),
)]);
connection.set_next_prompt_updates(vec![acp::SessionUpdate::ToolCall(acp::ToolCall {
id: acp::ToolCallId("tool1".into()),
title: "Edit file 1".into(),
kind: acp::ToolKind::Edit,
status: acp::ToolCallStatus::Completed,
content: vec![acp::ToolCallContent::Diff {
diff: acp::Diff {
path: "/project/test1.txt".into(),
old_text: Some("old content 1".into()),
new_text: "new content 1".into(),
meta: None,
},
}],
locations: vec![],
raw_input: None,
raw_output: None,
meta: None,
})]);
thread
.update(cx, |thread, cx| thread.send_raw("Give me a diff", cx))
@@ -6717,14 +6718,24 @@ pub(crate) mod tests {
});
// Second user message
connection.set_next_prompt_updates(vec![acp::SessionUpdate::ToolCall(
acp::ToolCall::new("tool2", "Edit file 2")
.kind(acp::ToolKind::Edit)
.status(acp::ToolCallStatus::Completed)
.content(vec![acp::ToolCallContent::Diff(
acp::Diff::new("/project/test2.txt", "new content 2").old_text("old content 2"),
)]),
)]);
connection.set_next_prompt_updates(vec![acp::SessionUpdate::ToolCall(acp::ToolCall {
id: acp::ToolCallId("tool2".into()),
title: "Edit file 2".into(),
kind: acp::ToolKind::Edit,
status: acp::ToolCallStatus::Completed,
content: vec![acp::ToolCallContent::Diff {
diff: acp::Diff {
path: "/project/test2.txt".into(),
old_text: Some("old content 2".into()),
new_text: "new content 2".into(),
meta: None,
},
}],
locations: vec![],
raw_input: None,
raw_output: None,
meta: None,
})]);
thread
.update(cx, |thread, cx| thread.send_raw("Another one", cx))
@@ -6798,7 +6809,14 @@ pub(crate) mod tests {
let connection = StubAgentConnection::new();
connection.set_next_prompt_updates(vec![acp::SessionUpdate::AgentMessageChunk(
acp::ContentChunk::new("Response".into()),
acp::ContentChunk {
content: acp::ContentBlock::Text(acp::TextContent {
text: "Response".into(),
annotations: None,
meta: None,
}),
meta: None,
},
)]);
let (thread_view, cx) = setup_thread_view(StubAgentServer::new(connection), cx).await;
@@ -6884,7 +6902,14 @@ pub(crate) mod tests {
let connection = StubAgentConnection::new();
connection.set_next_prompt_updates(vec![acp::SessionUpdate::AgentMessageChunk(
acp::ContentChunk::new("Response".into()),
acp::ContentChunk {
content: acp::ContentBlock::Text(acp::TextContent {
text: "Response".into(),
annotations: None,
meta: None,
}),
meta: None,
},
)]);
let (thread_view, cx) =
@@ -6924,7 +6949,14 @@ pub(crate) mod tests {
// Send
connection.set_next_prompt_updates(vec![acp::SessionUpdate::AgentMessageChunk(
acp::ContentChunk::new("New Response".into()),
acp::ContentChunk {
content: acp::ContentBlock::Text(acp::TextContent {
text: "New Response".into(),
annotations: None,
meta: None,
}),
meta: None,
},
)]);
user_message_editor.update_in(cx, |_editor, window, cx| {
@@ -7012,7 +7044,14 @@ pub(crate) mod tests {
cx.update(|_, cx| {
connection.send_update(
session_id.clone(),
acp::SessionUpdate::AgentMessageChunk(acp::ContentChunk::new("Response".into())),
acp::SessionUpdate::AgentMessageChunk(acp::ContentChunk {
content: acp::ContentBlock::Text(acp::TextContent {
text: "Response".into(),
annotations: None,
meta: None,
}),
meta: None,
}),
cx,
);
connection.end_turn(session_id, acp::StopReason::EndTurn);
@@ -7064,9 +7103,10 @@ pub(crate) mod tests {
cx.update(|_, cx| {
connection.send_update(
session_id.clone(),
acp::SessionUpdate::AgentMessageChunk(acp::ContentChunk::new(
"Message 1 resp".into(),
)),
acp::SessionUpdate::AgentMessageChunk(acp::ContentChunk {
content: "Message 1 resp".into(),
meta: None,
}),
cx,
);
});
@@ -7100,7 +7140,10 @@ pub(crate) mod tests {
// Simulate a response sent after beginning to cancel
connection.send_update(
session_id.clone(),
acp::SessionUpdate::AgentMessageChunk(acp::ContentChunk::new("onse".into())),
acp::SessionUpdate::AgentMessageChunk(acp::ContentChunk {
content: "onse".into(),
meta: None,
}),
cx,
);
});
@@ -7131,9 +7174,10 @@ pub(crate) mod tests {
cx.update(|_, cx| {
connection.send_update(
session_id.clone(),
acp::SessionUpdate::AgentMessageChunk(acp::ContentChunk::new(
"Message 2 response".into(),
)),
acp::SessionUpdate::AgentMessageChunk(acp::ContentChunk {
content: "Message 2 response".into(),
meta: None,
}),
cx,
);
connection.end_turn(session_id.clone(), acp::StopReason::EndTurn);
@@ -7172,7 +7216,14 @@ pub(crate) mod tests {
let connection = StubAgentConnection::new();
connection.set_next_prompt_updates(vec![acp::SessionUpdate::AgentMessageChunk(
acp::ContentChunk::new("Response".into()),
acp::ContentChunk {
content: acp::ContentBlock::Text(acp::TextContent {
text: "Response".into(),
annotations: None,
meta: None,
}),
meta: None,
},
)]);
let (thread_view, cx) = setup_thread_view(StubAgentServer::new(connection), cx).await;
@@ -7251,7 +7302,14 @@ pub(crate) mod tests {
let connection = StubAgentConnection::new();
connection.set_next_prompt_updates(vec![acp::SessionUpdate::AgentMessageChunk(
acp::ContentChunk::new("Response".into()),
acp::ContentChunk {
content: acp::ContentBlock::Text(acp::TextContent {
text: "Response".into(),
annotations: None,
meta: None,
}),
meta: None,
},
)]);
let (thread_view, cx) = setup_thread_view(StubAgentServer::new(connection), cx).await;

View File

@@ -1209,7 +1209,7 @@ impl Render for AgentConfiguration {
.child(self.render_context_servers_section(window, cx))
.child(self.render_provider_configuration_section(cx)),
)
.vertical_scrollbar_for(&self.scroll_handle, window, cx),
.vertical_scrollbar_for(self.scroll_handle.clone(), window, cx),
)
}
}

View File

@@ -516,7 +516,7 @@ impl Render for AddLlmProviderModal {
.child(
div()
.size_full()
.vertical_scrollbar_for(&self.scroll_handle, window, cx)
.vertical_scrollbar_for(self.scroll_handle.clone(), window, cx)
.child(
v_flex()
.id("modal_content")

View File

@@ -1,4 +1,7 @@
use std::sync::{Arc, Mutex};
use std::{
path::PathBuf,
sync::{Arc, Mutex},
};
use anyhow::{Context as _, Result};
use collections::HashMap;
@@ -179,7 +182,7 @@ impl ConfigurationSource {
parse_input(&editor.read(cx).text(cx)).map(|(id, command)| {
(
id,
ContextServerSettings::Stdio {
ContextServerSettings::Custom {
enabled: true,
command,
},
@@ -221,12 +224,11 @@ fn context_server_input(existing: Option<(ContextServerId, ContextServerCommand)
Some((id, cmd)) => {
let args = serde_json::to_string(&cmd.args).unwrap();
let env = serde_json::to_string(&cmd.env.unwrap_or_default()).unwrap();
let cmd_path = serde_json::to_string(&cmd.path).unwrap();
(id.0.to_string(), cmd_path, args, env)
(id.0.to_string(), cmd.path, args, env)
}
None => (
"some-mcp-server".to_string(),
"".to_string(),
PathBuf::new(),
"[]".to_string(),
"{}".to_string(),
),
@@ -237,13 +239,14 @@ fn context_server_input(existing: Option<(ContextServerId, ContextServerCommand)
/// The name of your MCP server
"{name}": {{
/// The command which runs the MCP server
"command": {command},
"command": "{}",
/// The arguments to pass to the MCP server
"args": {args},
/// The environment variables to set
"env": {env}
}}
}}"#
}}"#,
command.display()
)
}
@@ -400,7 +403,7 @@ impl ConfigureContextServerModal {
window.spawn(cx, async move |cx| {
let target = match settings {
ContextServerSettings::Stdio {
ContextServerSettings::Custom {
enabled: _,
command,
} => Some(ConfigurationTarget::Existing {
@@ -632,6 +635,7 @@ impl ConfigureContextServerModal {
}
fn render_modal_content(&self, cx: &App) -> AnyElement {
// All variants now use a single editor approach
let editor = match &self.source {
ConfigurationSource::New { editor, .. } => editor,
ConfigurationSource::Existing { editor, .. } => editor,
@@ -708,12 +712,12 @@ impl ConfigureContextServerModal {
)
} else if let ConfigurationSource::New { is_http, .. } = &self.source {
let label = if *is_http {
"Configure Local"
"Run command"
} else {
"Configure Remote"
"Connect via HTTP"
};
let tooltip = if *is_http {
"Configure an MCP server that runs on stdin/stdout."
"Configure an MCP serevr that runs on stdin/stdout."
} else {
"Configure an MCP server that you connect to over HTTP"
};
@@ -818,6 +822,7 @@ impl ConfigureContextServerModal {
impl Render for ConfigureContextServerModal {
fn render(&mut self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
let scroll_handle = self.scroll_handle.clone();
div()
.elevation_3(cx)
.w(rems(34.))
@@ -845,7 +850,7 @@ impl Render for ConfigureContextServerModal {
.id("modal-content")
.max_h(vh(0.7, window))
.overflow_y_scroll()
.track_scroll(&self.scroll_handle)
.track_scroll(&scroll_handle)
.child(self.render_modal_description(window, cx))
.child(self.render_modal_content(cx))
.child(match &self.state {
@@ -858,7 +863,7 @@ impl Render for ConfigureContextServerModal {
}
}),
)
.vertical_scrollbar_for(&self.scroll_handle, window, cx),
.vertical_scrollbar_for(scroll_handle, window, cx),
),
)
.footer(self.render_modal_footer(cx)),

View File

@@ -138,7 +138,7 @@ impl ConfigureContextServerToolsModal {
items
})),
)
.vertical_scrollbar_for(&self.scroll_handle, window, cx)
.vertical_scrollbar_for(self.scroll_handle.clone(), window, cx)
.into_any_element()
}
}

View File

@@ -145,7 +145,7 @@ impl AgentDiffPane {
let diff_hunk_ranges = diff
.hunks_intersecting_range(
language::Anchor::min_max_range_for_buffer(snapshot.remote_id()),
language::Anchor::MIN..language::Anchor::MAX,
&snapshot,
cx,
)
@@ -493,7 +493,7 @@ impl Item for AgentDiffPane {
Some("Assistant Diff Opened")
}
fn as_searchable(&self, _: &Entity<Self>, _: &App) -> Option<Box<dyn SearchableItemHandle>> {
fn as_searchable(&self, _: &Entity<Self>) -> Option<Box<dyn SearchableItemHandle>> {
Some(Box::new(self.editor.clone()))
}

View File

@@ -1,4 +1,7 @@
use std::{ops::Range, path::Path, rc::Rc, sync::Arc, time::Duration};
use std::ops::Range;
use std::path::Path;
use std::rc::Rc;
use std::sync::Arc;
use acp_thread::AcpThread;
use agent::{ContextServerRegistry, DbThreadMetadata, HistoryEntry, HistoryStore};
@@ -17,9 +20,10 @@ use zed_actions::agent::{OpenClaudeCodeOnboardingModal, ReauthenticateAgent};
use crate::ManageProfiles;
use crate::ui::{AcpOnboardingModal, ClaudeCodeOnboardingModal};
use crate::{
AddContextServer, AgentDiffPane, Follow, InlineAssistant, NewTextThread, NewThread,
OpenActiveThreadAsMarkdown, OpenAgentDiff, OpenHistory, ResetTrialEndUpsell, ResetTrialUpsell,
ToggleNavigationMenu, ToggleNewThreadMenu, ToggleOptionsMenu,
AddContextServer, AgentDiffPane, DeleteRecentlyOpenThread, Follow, InlineAssistant,
NewTextThread, NewThread, OpenActiveThreadAsMarkdown, OpenAgentDiff, OpenHistory,
ResetTrialEndUpsell, ResetTrialUpsell, ToggleNavigationMenu, ToggleNewThreadMenu,
ToggleOptionsMenu,
acp::AcpThreadView,
agent_configuration::{AgentConfiguration, AssistantConfigurationEvent},
slash_command::SlashCommandCompletionProvider,
@@ -43,9 +47,9 @@ use extension::ExtensionEvents;
use extension_host::ExtensionStore;
use fs::Fs;
use gpui::{
Action, Animation, AnimationExt, AnyElement, App, AsyncWindowContext, Corner, DismissEvent,
Entity, EventEmitter, ExternalPaths, FocusHandle, Focusable, KeyContext, Pixels, Subscription,
Task, UpdateGlobal, WeakEntity, prelude::*, pulsating_between,
Action, AnyElement, App, AsyncWindowContext, Corner, DismissEvent, Entity, EventEmitter,
ExternalPaths, FocusHandle, Focusable, KeyContext, Pixels, Subscription, Task, UpdateGlobal,
WeakEntity, prelude::*,
};
use language::LanguageRegistry;
use language_model::{ConfigurationError, LanguageModelRegistry};
@@ -55,9 +59,10 @@ use rules_library::{RulesLibrary, open_rules_library};
use search::{BufferSearchBar, buffer_search};
use settings::{Settings, update_settings_file};
use theme::ThemeSettings;
use ui::utils::WithRemSize;
use ui::{
Callout, ContextMenu, ContextMenuEntry, KeyBinding, PopoverMenu, PopoverMenuHandle,
ProgressBar, Tab, Tooltip, prelude::*, utils::WithRemSize,
ProgressBar, Tab, Tooltip, prelude::*,
};
use util::ResultExt as _;
use workspace::{
@@ -609,14 +614,11 @@ impl AgentPanel {
if let Some(panel) = panel.upgrade() {
menu = Self::populate_recently_opened_menu_section(menu, panel, cx);
}
menu = menu
.action("View All", Box::new(OpenHistory))
menu.action("View All", Box::new(OpenHistory))
.end_slot_action(DeleteRecentlyOpenThread.boxed_clone())
.fixed_width(px(320.).into())
.keep_open_on_confirm(false)
.key_context("NavigationMenu");
menu
.key_context("NavigationMenu")
});
weak_panel
.update(cx, |panel, cx| {
@@ -2155,41 +2157,28 @@ impl AgentPanel {
let selected_agent_label = self.selected_agent.label();
let is_thread_loading = self
.active_thread_view()
.map(|thread| thread.read(cx).is_loading())
.unwrap_or(false);
let has_custom_icon = selected_agent_custom_icon.is_some();
let selected_agent = div()
.id("selected_agent_icon")
.when_some(selected_agent_custom_icon, |this, icon_path| {
let label = selected_agent_label.clone();
this.px_1()
.child(Icon::from_external_svg(icon_path).color(Color::Muted))
.tooltip(move |_window, cx| {
Tooltip::with_meta(label.clone(), None, "Selected Agent", cx)
})
})
.when(!has_custom_icon, |this| {
this.when_some(self.selected_agent.icon(), |this, icon| {
this.px_1().child(Icon::new(icon).color(Color::Muted))
let label = selected_agent_label.clone();
this.px_1()
.child(Icon::new(icon).color(Color::Muted))
.tooltip(move |_window, cx| {
Tooltip::with_meta(label.clone(), None, "Selected Agent", cx)
})
})
})
.tooltip(move |_, cx| {
Tooltip::with_meta(selected_agent_label.clone(), None, "Selected Agent", cx)
});
let selected_agent = if is_thread_loading {
selected_agent
.with_animation(
"pulsating-icon",
Animation::new(Duration::from_secs(1))
.repeat()
.with_easing(pulsating_between(0.2, 0.6)),
|icon, delta| icon.opacity(delta),
)
.into_any_element()
} else {
selected_agent.into_any_element()
};
.into_any_element();
h_flex()
.id("agent-panel-toolbar")
@@ -2685,17 +2674,16 @@ impl rules_library::InlineAssistDelegate for PromptLibraryInlineAssist {
return;
};
let project = workspace.read(cx).project().downgrade();
let thread_store = panel.read(cx).thread_store().clone();
assistant.assist(
prompt_editor,
self.workspace.clone(),
project,
thread_store,
panel.read(cx).thread_store().clone(),
None,
initial_prompt,
window,
cx,
);
)
})
}

View File

@@ -7,8 +7,6 @@ mod buffer_codegen;
mod completion_provider;
mod context;
mod context_server_configuration;
#[cfg(test)]
mod evals;
mod inline_assistant;
mod inline_prompt_editor;
mod language_model_selector;
@@ -71,8 +69,6 @@ actions!(
CycleModeSelector,
/// Expands the message editor to full size.
ExpandMessageEditor,
/// Removes all thread history.
RemoveHistory,
/// Opens the conversation history view.
OpenHistory,
/// Adds a context server to the configuration.

View File

@@ -271,7 +271,7 @@ impl CodegenAlternative {
let mut buffer = Buffer::local_normalized(text, line_ending, cx);
buffer.set_language(language, cx);
if let Some(language_registry) = language_registry {
buffer.set_language_registry(language_registry);
buffer.set_language_registry(language_registry)
}
buffer
});
@@ -719,7 +719,6 @@ impl CodegenAlternative {
output_tokens = usage.output_tokens,
)
}
cx.emit(CodegenEvent::Finished);
cx.notify();
})

View File

@@ -7,9 +7,7 @@ use std::sync::atomic::AtomicBool;
use acp_thread::MentionUri;
use agent::{HistoryEntry, HistoryStore};
use anyhow::Result;
use editor::{
CompletionProvider, Editor, ExcerptId, code_context_menus::COMPLETION_MENU_MAX_WIDTH,
};
use editor::{CompletionProvider, Editor, ExcerptId};
use fuzzy::{PathMatch, StringMatch, StringMatchCandidate};
use gpui::{App, Entity, Task, WeakEntity};
use language::{Buffer, CodeLabel, CodeLabelBuilder, HighlightId};
@@ -27,7 +25,6 @@ use ui::prelude::*;
use util::ResultExt as _;
use util::paths::PathStyle;
use util::rel_path::RelPath;
use util::truncate_and_remove_front;
use workspace::Workspace;
use crate::AgentPanel;
@@ -339,20 +336,14 @@ impl<T: PromptCompletionProviderDelegate> PromptCompletionProvider<T> {
mention_set: WeakEntity<MentionSet>,
workspace: Entity<Workspace>,
project: Entity<Project>,
label_max_chars: usize,
cx: &mut App,
) -> Option<Completion> {
let path_style = project.read(cx).path_style(cx);
let (file_name, directory) =
extract_file_name_and_directory(&project_path.path, path_prefix, path_style);
let label = build_code_label_for_path(
&file_name,
directory.as_ref().map(|s| s.as_ref()),
None,
label_max_chars,
cx,
);
let label =
build_code_label_for_path(&file_name, directory.as_ref().map(|s| s.as_ref()), None, cx);
let abs_path = project.read(cx).absolute_path(&project_path, cx)?;
@@ -401,7 +392,6 @@ impl<T: PromptCompletionProviderDelegate> PromptCompletionProvider<T> {
editor: WeakEntity<Editor>,
mention_set: WeakEntity<MentionSet>,
workspace: Entity<Workspace>,
label_max_chars: usize,
cx: &mut App,
) -> Option<Completion> {
let project = workspace.read(cx).project().clone();
@@ -424,7 +414,6 @@ impl<T: PromptCompletionProviderDelegate> PromptCompletionProvider<T> {
&symbol.name,
Some(&file_name),
Some(symbol.range.start.0.row + 1),
label_max_chars,
cx,
);
@@ -863,7 +852,7 @@ impl<T: PromptCompletionProviderDelegate> CompletionProvider for PromptCompletio
buffer: &Entity<Buffer>,
buffer_position: Anchor,
_trigger: CompletionContext,
window: &mut Window,
_window: &mut Window,
cx: &mut Context<Editor>,
) -> Task<Result<Vec<CompletionResponse>>> {
let state = buffer.update(cx, |buffer, cx| {
@@ -872,7 +861,7 @@ impl<T: PromptCompletionProviderDelegate> CompletionProvider for PromptCompletio
let offset_to_line = buffer.point_to_offset(line_start);
let mut lines = buffer.text_for_range(line_start..position).lines();
let line = lines.next()?;
PromptCompletion::try_parse(line, offset_to_line, &self.source.supported_modes(cx))
ContextCompletion::try_parse(line, offset_to_line, &self.source.supported_modes(cx))
});
let Some(state) = state else {
return Task::ready(Ok(Vec::new()));
@@ -891,7 +880,7 @@ impl<T: PromptCompletionProviderDelegate> CompletionProvider for PromptCompletio
let editor = self.editor.clone();
let mention_set = self.mention_set.downgrade();
match state {
PromptCompletion::SlashCommand(SlashCommandCompletion {
ContextCompletion::SlashCommand(SlashCommandCompletion {
command, argument, ..
}) => {
let search_task = self.search_slash_commands(command.unwrap_or_default(), cx);
@@ -954,36 +943,11 @@ impl<T: PromptCompletionProviderDelegate> CompletionProvider for PromptCompletio
}])
})
}
PromptCompletion::Mention(MentionCompletion { mode, argument, .. }) => {
ContextCompletion::Mention(MentionCompletion { mode, argument, .. }) => {
let query = argument.unwrap_or_default();
let search_task =
self.search_mentions(mode, query, Arc::<AtomicBool>::default(), cx);
// Calculate maximum characters available for the full label (file_name + space + directory)
// based on maximum menu width after accounting for padding, spacing, and icon width
let label_max_chars = {
// Base06 left padding + Base06 gap + Base06 right padding + icon width
let used_pixels = DynamicSpacing::Base06.px(cx) * 3.0
+ IconSize::XSmall.rems() * window.rem_size();
let style = window.text_style();
let font_id = window.text_system().resolve_font(&style.font());
let font_size = TextSize::Small.rems(cx).to_pixels(window.rem_size());
// Fallback em_width of 10px matches file_finder.rs fallback for TextSize::Small
let em_width = cx
.text_system()
.em_width(font_id, font_size)
.unwrap_or(px(10.0));
// Calculate available pixels for text (file_name + directory)
// Using max width since dynamic_width allows the menu to expand up to this
let available_pixels = COMPLETION_MENU_MAX_WIDTH - used_pixels;
// Convert to character count (total available for file_name + directory)
(f32::from(available_pixels) / f32::from(em_width)) as usize
};
cx.spawn(async move |_, cx| {
let matches = search_task.await;
@@ -1020,7 +984,6 @@ impl<T: PromptCompletionProviderDelegate> CompletionProvider for PromptCompletio
mention_set.clone(),
workspace.clone(),
project.clone(),
label_max_chars,
cx,
)
}
@@ -1033,7 +996,6 @@ impl<T: PromptCompletionProviderDelegate> CompletionProvider for PromptCompletio
editor.clone(),
mention_set.clone(),
workspace.clone(),
label_max_chars,
cx,
)
}
@@ -1114,6 +1076,7 @@ impl<T: PromptCompletionProviderDelegate> CompletionProvider for PromptCompletio
position: language::Anchor,
_text: &str,
_trigger_in_words: bool,
_menu_is_open: bool,
cx: &mut Context<Editor>,
) -> bool {
let buffer = buffer.read(cx);
@@ -1122,12 +1085,12 @@ impl<T: PromptCompletionProviderDelegate> CompletionProvider for PromptCompletio
let offset_to_line = buffer.point_to_offset(line_start);
let mut lines = buffer.text_for_range(line_start..position).lines();
if let Some(line) = lines.next() {
PromptCompletion::try_parse(line, offset_to_line, &self.source.supported_modes(cx))
ContextCompletion::try_parse(line, offset_to_line, &self.source.supported_modes(cx))
.filter(|completion| {
// Right now we don't support completing arguments of slash commands
let is_slash_command_with_argument = matches!(
completion,
PromptCompletion::SlashCommand(SlashCommandCompletion {
ContextCompletion::SlashCommand(SlashCommandCompletion {
argument: Some(_),
..
})
@@ -1197,13 +1160,12 @@ fn confirm_completion_callback<T: PromptCompletionProviderDelegate>(
})
}
#[derive(Debug, PartialEq)]
enum PromptCompletion {
enum ContextCompletion {
SlashCommand(SlashCommandCompletion),
Mention(MentionCompletion),
}
impl PromptCompletion {
impl ContextCompletion {
fn source_range(&self) -> Range<usize> {
match self {
Self::SlashCommand(completion) => completion.source_range.clone(),
@@ -1216,14 +1178,15 @@ impl PromptCompletion {
offset_to_line: usize,
supported_modes: &[PromptContextType],
) -> Option<Self> {
if line.contains('@') {
if let Some(mention) =
MentionCompletion::try_parse(line, offset_to_line, supported_modes)
{
return Some(Self::Mention(mention));
}
if let Some(command) = SlashCommandCompletion::try_parse(line, offset_to_line) {
Some(Self::SlashCommand(command))
} else if let Some(mention) =
MentionCompletion::try_parse(line, offset_to_line, supported_modes)
{
Some(Self::Mention(mention))
} else {
None
}
SlashCommandCompletion::try_parse(line, offset_to_line).map(Self::SlashCommand)
}
}
@@ -1632,7 +1595,6 @@ fn build_code_label_for_path(
file: &str,
directory: Option<&str>,
line_number: Option<u32>,
label_max_chars: usize,
cx: &App,
) -> CodeLabel {
let variable_highlight_id = cx
@@ -1646,13 +1608,7 @@ fn build_code_label_for_path(
label.push_str(" ", None);
if let Some(directory) = directory {
let file_name_chars = file.chars().count();
// Account for: file_name + space (ellipsis is handled by truncate_and_remove_front)
let directory_max_chars = label_max_chars
.saturating_sub(file_name_chars)
.saturating_sub(1);
let truncated_directory = truncate_and_remove_front(directory, directory_max_chars.max(5));
label.push_str(&truncated_directory, variable_highlight_id);
label.push_str(directory, variable_highlight_id);
}
if let Some(line_number) = line_number {
label.push_str(&format!(" L{}", line_number), variable_highlight_id);
@@ -1697,38 +1653,6 @@ fn selection_ranges(
mod tests {
use super::*;
#[test]
fn test_prompt_completion_parse() {
let supported_modes = vec![PromptContextType::File, PromptContextType::Symbol];
assert_eq!(
PromptCompletion::try_parse("/", 0, &supported_modes),
Some(PromptCompletion::SlashCommand(SlashCommandCompletion {
source_range: 0..1,
command: None,
argument: None,
}))
);
assert_eq!(
PromptCompletion::try_parse("@", 0, &supported_modes),
Some(PromptCompletion::Mention(MentionCompletion {
source_range: 0..1,
mode: None,
argument: None,
}))
);
assert_eq!(
PromptCompletion::try_parse("/test @file", 0, &supported_modes),
Some(PromptCompletion::Mention(MentionCompletion {
source_range: 6..11,
mode: Some(PromptContextType::File),
argument: None,
}))
);
}
#[test]
fn test_slash_command_completion_parse() {
assert_eq!(

View File

@@ -1,89 +0,0 @@
use std::str::FromStr;
use crate::inline_assistant::test::run_inline_assistant_test;
use eval_utils::{EvalOutput, NoProcessor};
use gpui::TestAppContext;
use language_model::{LanguageModelRegistry, SelectedModel};
use rand::{SeedableRng as _, rngs::StdRng};
#[test]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_single_cursor_edit() {
eval_utils::eval(20, 1.0, NoProcessor, move || {
run_eval(
&EvalInput {
prompt: "Rename this variable to buffer_text".to_string(),
buffer: indoc::indoc! {"
struct EvalExampleStruct {
text: Strˇing,
prompt: String,
}
"}
.to_string(),
},
&|_, output| {
let expected = indoc::indoc! {"
struct EvalExampleStruct {
buffer_text: String,
prompt: String,
}
"};
if output == expected {
EvalOutput {
outcome: eval_utils::OutcomeKind::Passed,
data: "Passed!".to_string(),
metadata: (),
}
} else {
EvalOutput {
outcome: eval_utils::OutcomeKind::Failed,
data: format!("Failed to rename variable, output: {}", output),
metadata: (),
}
}
},
)
});
}
struct EvalInput {
buffer: String,
prompt: String,
}
fn run_eval(
input: &EvalInput,
judge: &dyn Fn(&EvalInput, &str) -> eval_utils::EvalOutput<()>,
) -> eval_utils::EvalOutput<()> {
let dispatcher = gpui::TestDispatcher::new(StdRng::from_os_rng());
let mut cx = TestAppContext::build(dispatcher, None);
cx.skip_drawing();
let buffer_text = run_inline_assistant_test(
input.buffer.clone(),
input.prompt.clone(),
|cx| {
// Reconfigure to use a real model instead of the fake one
let model_name = std::env::var("ZED_AGENT_MODEL")
.unwrap_or("anthropic/claude-sonnet-4-latest".into());
let selected_model = SelectedModel::from_str(&model_name)
.expect("Invalid model format. Use 'provider/model-id'");
log::info!("Selected model: {selected_model:?}");
cx.update(|_, cx| {
LanguageModelRegistry::global(cx).update(cx, |registry, cx| {
registry.select_inline_assistant_model(Some(&selected_model), cx);
});
});
},
|_cx| {
log::info!("Waiting for actual response from the LLM...");
},
&mut cx,
);
judge(input, &buffer_text)
}

View File

@@ -32,7 +32,7 @@ use editor::{
},
};
use fs::Fs;
use futures::{FutureExt, channel::mpsc};
use futures::FutureExt;
use gpui::{
App, Context, Entity, Focusable, Global, HighlightStyle, Subscription, Task, UpdateGlobal,
WeakEntity, Window, point,
@@ -102,7 +102,6 @@ pub struct InlineAssistant {
prompt_builder: Arc<PromptBuilder>,
telemetry: Arc<Telemetry>,
fs: Arc<dyn Fs>,
_inline_assistant_completions: Option<mpsc::UnboundedSender<anyhow::Result<InlineAssistId>>>,
}
impl Global for InlineAssistant {}
@@ -124,18 +123,9 @@ impl InlineAssistant {
prompt_builder,
telemetry,
fs,
_inline_assistant_completions: None,
}
}
#[cfg(any(test, feature = "test-support"))]
pub fn set_completion_receiver(
&mut self,
sender: mpsc::UnboundedSender<anyhow::Result<InlineAssistId>>,
) {
self._inline_assistant_completions = Some(sender);
}
pub fn register_workspace(
&mut self,
workspace: &Entity<Workspace>,
@@ -297,7 +287,7 @@ impl InlineAssistant {
action.prompt.clone(),
window,
cx,
);
)
})
}
InlineAssistTarget::Terminal(active_terminal) => {
@@ -311,8 +301,8 @@ impl InlineAssistant {
action.prompt.clone(),
window,
cx,
);
});
)
})
}
};
@@ -450,6 +440,7 @@ impl InlineAssistant {
{
let anchor_range = Anchor::range_in_buffer(
excerpt_id,
buffer.remote_id(),
buffer.anchor_before(buffer_range.start)..buffer.anchor_after(buffer_range.end),
);
@@ -608,13 +599,13 @@ impl InlineAssistant {
initial_prompt: Option<String>,
window: &mut Window,
cx: &mut App,
) -> Option<InlineAssistId> {
) {
let snapshot = editor.update(cx, |editor, cx| editor.snapshot(window, cx));
let Some((codegen_ranges, newest_selection)) =
self.codegen_ranges(editor, &snapshot, window, cx)
else {
return None;
return;
};
let assist_to_focus = self.batch_assist(
@@ -634,8 +625,6 @@ impl InlineAssistant {
if let Some(assist_id) = assist_to_focus {
self.focus_assist(assist_id, window, cx);
}
assist_to_focus
}
pub fn suggest_assist(
@@ -1468,7 +1457,6 @@ impl InlineAssistant {
editor.set_soft_wrap_mode(language::language_settings::SoftWrap::None, cx);
editor.set_show_wrap_guides(false, cx);
editor.set_show_gutter(false, cx);
editor.set_offset_content(false, cx);
editor.scroll_manager.set_forbid_vertical_scroll(true);
editor.set_read_only(true);
editor.set_show_edit_predictions(Some(false), window, cx);
@@ -1752,16 +1740,6 @@ impl InlineAssist {
&& assist.decorations.is_none()
&& let Some(workspace) = assist.workspace.upgrade()
{
#[cfg(any(test, feature = "test-support"))]
if let Some(sender) = &mut this._inline_assistant_completions {
sender
.unbounded_send(Err(anyhow::anyhow!(
"Inline assistant error: {}",
error
)))
.ok();
}
let error = format!("Inline assistant error: {}", error);
workspace.update(cx, |workspace, cx| {
struct InlineAssistantError;
@@ -1772,11 +1750,6 @@ impl InlineAssist {
workspace.show_toast(Toast::new(id, error), cx);
})
} else {
#[cfg(any(test, feature = "test-support"))]
if let Some(sender) = &mut this._inline_assistant_completions {
sender.unbounded_send(Ok(assist_id)).ok();
}
}
if assist.decorations.is_none() {
@@ -1970,160 +1943,3 @@ fn merge_ranges(ranges: &mut Vec<Range<Anchor>>, buffer: &MultiBufferSnapshot) {
}
}
}
#[cfg(any(test, feature = "test-support"))]
pub mod test {
use std::sync::Arc;
use agent::HistoryStore;
use assistant_text_thread::TextThreadStore;
use client::{Client, UserStore};
use editor::{Editor, MultiBuffer, MultiBufferOffset};
use fs::FakeFs;
use futures::channel::mpsc;
use gpui::{AppContext, TestAppContext, UpdateGlobal as _};
use language::Buffer;
use language_model::LanguageModelRegistry;
use project::Project;
use prompt_store::PromptBuilder;
use smol::stream::StreamExt as _;
use util::test::marked_text_ranges;
use workspace::Workspace;
use crate::InlineAssistant;
pub fn run_inline_assistant_test<SetupF, TestF>(
base_buffer: String,
prompt: String,
setup: SetupF,
test: TestF,
cx: &mut TestAppContext,
) -> String
where
SetupF: FnOnce(&mut gpui::VisualTestContext),
TestF: FnOnce(&mut gpui::VisualTestContext),
{
let fs = FakeFs::new(cx.executor());
let app_state = cx.update(|cx| workspace::AppState::test(cx));
let prompt_builder = Arc::new(PromptBuilder::new(None).unwrap());
let http = Arc::new(reqwest_client::ReqwestClient::user_agent("agent tests").unwrap());
let client = cx.update(|cx| {
cx.set_http_client(http);
Client::production(cx)
});
let mut inline_assistant =
InlineAssistant::new(fs.clone(), prompt_builder, client.telemetry().clone());
let (tx, mut completion_rx) = mpsc::unbounded();
inline_assistant.set_completion_receiver(tx);
// Initialize settings and client
cx.update(|cx| {
gpui_tokio::init(cx);
settings::init(cx);
client::init(&client, cx);
workspace::init(app_state.clone(), cx);
let user_store = cx.new(|cx| UserStore::new(client.clone(), cx));
language_model::init(client.clone(), cx);
language_models::init(user_store, client.clone(), cx);
cx.set_global(inline_assistant);
});
let project = cx
.executor()
.block_test(async { Project::test(fs.clone(), [], cx).await });
// Create workspace with window
let (workspace, cx) = cx.add_window_view(|window, cx| {
window.activate_window();
Workspace::new(None, project.clone(), app_state.clone(), window, cx)
});
setup(cx);
let (_editor, buffer) = cx.update(|window, cx| {
let buffer = cx.new(|cx| Buffer::local("", cx));
let multibuffer = cx.new(|cx| MultiBuffer::singleton(buffer.clone(), cx));
let editor = cx.new(|cx| Editor::for_multibuffer(multibuffer, None, window, cx));
editor.update(cx, |editor, cx| {
let (unmarked_text, selection_ranges) = marked_text_ranges(&base_buffer, true);
editor.set_text(unmarked_text, window, cx);
editor.change_selections(Default::default(), window, cx, |s| {
s.select_ranges(
selection_ranges.into_iter().map(|range| {
MultiBufferOffset(range.start)..MultiBufferOffset(range.end)
}),
)
})
});
let text_thread_store = cx.new(|cx| TextThreadStore::fake(project.clone(), cx));
let history_store = cx.new(|cx| HistoryStore::new(text_thread_store, cx));
// Add editor to workspace
workspace.update(cx, |workspace, cx| {
workspace.add_item_to_active_pane(Box::new(editor.clone()), None, true, window, cx);
});
// Call assist method
InlineAssistant::update_global(cx, |inline_assistant, cx| {
let assist_id = inline_assistant
.assist(
&editor,
workspace.downgrade(),
project.downgrade(),
history_store, // thread_store
None, // prompt_store
Some(prompt),
window,
cx,
)
.unwrap();
inline_assistant.start_assist(assist_id, window, cx);
});
(editor, buffer)
});
cx.run_until_parked();
test(cx);
cx.executor()
.block_test(async { completion_rx.next().await });
buffer.read_with(cx, |buffer, _| buffer.text())
}
#[allow(unused)]
pub fn test_inline_assistant(
base_buffer: &'static str,
llm_output: &'static str,
cx: &mut TestAppContext,
) -> String {
run_inline_assistant_test(
base_buffer.to_string(),
"Prompt doesn't matter because we're using a fake model".to_string(),
|cx| {
cx.update(|_, cx| LanguageModelRegistry::test(cx));
},
|cx| {
let fake_model = cx.update(|_, cx| {
LanguageModelRegistry::global(cx)
.update(cx, |registry, _| registry.fake_model())
});
let fake = fake_model.as_fake();
// let fake = fake_model;
fake.send_last_completion_stream_text_chunk(llm_output.to_string());
fake.end_last_completion_stream();
// Run again to process the model's response
cx.run_until_parked();
},
cx,
)
}
}

View File

@@ -341,6 +341,7 @@ impl CompletionProvider for SlashCommandCompletionProvider {
position: language::Anchor,
_text: &str,
_trigger_in_words: bool,
_menu_is_open: bool,
cx: &mut Context<Editor>,
) -> bool {
let buffer = buffer.read(cx);

View File

@@ -2556,11 +2556,7 @@ impl Item for TextThreadEditor {
Some(self.title(cx).to_string().into())
}
fn as_searchable(
&self,
handle: &Entity<Self>,
_: &App,
) -> Option<Box<dyn SearchableItemHandle>> {
fn as_searchable(&self, handle: &Entity<Self>) -> Option<Box<dyn SearchableItemHandle>> {
Some(Box::new(handle.clone()))
}
@@ -2622,13 +2618,11 @@ impl SearchableItem for TextThreadEditor {
fn update_matches(
&mut self,
matches: &[Self::Match],
active_match_index: Option<usize>,
window: &mut Window,
cx: &mut Context<Self>,
) {
self.editor.update(cx, |editor, cx| {
editor.update_matches(matches, active_match_index, window, cx)
});
self.editor
.update(cx, |editor, cx| editor.update_matches(matches, window, cx));
}
fn query_suggestion(&mut self, window: &mut Window, cx: &mut Context<Self>) -> String {

View File

@@ -249,15 +249,10 @@ impl PasswordProxy {
fs::write(&askpass_script_path, askpass_script)
.await
.with_context(|| format!("creating askpass script at {askpass_script_path:?}"))?;
make_file_executable(&askpass_script_path)
.await
.with_context(|| {
format!("marking askpass script executable at {askpass_script_path:?}")
})?;
// todo(shell): There might be no powershell on the system
make_file_executable(&askpass_script_path).await?;
#[cfg(target_os = "windows")]
let askpass_helper = format!(
"powershell.exe -ExecutionPolicy Bypass -File \"{}\"",
"powershell.exe -ExecutionPolicy Bypass -File {}",
askpass_script_path.display()
);

View File

@@ -233,11 +233,18 @@ fn collect_diagnostics(
options: Options,
cx: &mut App,
) -> Task<Result<Option<SlashCommandOutput>>> {
let error_source = if let Some(path_matcher) = &options.path_matcher {
debug_assert_eq!(path_matcher.sources().len(), 1);
Some(path_matcher.sources().first().cloned().unwrap_or_default())
} else {
None
};
let path_style = project.read(cx).path_style(cx);
let glob_is_exact_file_match = if let Some(path) = options
.path_matcher
.as_ref()
.and_then(|pm| pm.sources().next())
.and_then(|pm| pm.sources().first())
{
project
.read(cx)
@@ -259,13 +266,6 @@ fn collect_diagnostics(
.collect();
cx.spawn(async move |cx| {
let error_source = if let Some(path_matcher) = &options.path_matcher {
debug_assert_eq!(path_matcher.sources().count(), 1);
Some(path_matcher.sources().next().unwrap_or_default())
} else {
None
};
let mut output = SlashCommandOutput::default();
if let Some(error_source) = error_source.as_ref() {
@@ -277,7 +277,7 @@ fn collect_diagnostics(
let mut project_summary = DiagnosticSummary::default();
for (project_path, path, summary) in diagnostic_summaries {
if let Some(path_matcher) = &options.path_matcher
&& !path_matcher.is_match(&path)
&& !path_matcher.is_match(&path.as_std_path())
{
continue;
}

View File

@@ -797,7 +797,7 @@ impl TextThread {
});
let message = MessageAnchor {
id: first_message_id,
start: language::Anchor::min_for_buffer(this.buffer.read(cx).remote_id()),
start: language::Anchor::MIN,
};
this.messages_metadata.insert(
first_message_id,
@@ -1147,10 +1147,12 @@ impl TextThread {
cx: &App,
) -> bool {
let version = &self.buffer.read(cx).version;
let observed_start =
range.start.is_min() || range.start.is_max() || version.observed(range.start.timestamp);
let observed_end =
range.end.is_min() || range.end.is_max() || version.observed(range.end.timestamp);
let observed_start = range.start == language::Anchor::MIN
|| range.start == language::Anchor::MAX
|| version.observed(range.start.timestamp);
let observed_end = range.end == language::Anchor::MIN
|| range.end == language::Anchor::MAX
|| version.observed(range.end.timestamp);
observed_start && observed_end
}
@@ -2856,8 +2858,7 @@ impl TextThread {
messages.next();
}
}
let message_end_anchor =
message_end.unwrap_or(language::Anchor::max_for_buffer(buffer.remote_id()));
let message_end_anchor = message_end.unwrap_or(language::Anchor::MAX);
let message_end = message_end_anchor.to_offset(buffer);
return Some(Message {
@@ -2933,7 +2934,6 @@ impl TextThread {
RenameOptions {
overwrite: true,
ignore_if_exists: true,
create_parents: false,
},
)
.await?;

View File

@@ -510,9 +510,7 @@ impl AutoUpdater {
(None, None, None)
};
let version = if let Some(mut version) = version {
version.pre = semver::Prerelease::EMPTY;
version.build = semver::BuildMetadata::EMPTY;
let version = if let Some(version) = version {
version.to_string()
} else {
"latest".to_string()
@@ -639,11 +637,10 @@ impl AutoUpdater {
if let AutoUpdateStatus::Updated { version, .. } = status {
match version {
VersionCheckType::Sha(cached_version) => {
let should_download =
parsed_fetched_version.as_ref().ok().is_none_or(|version| {
version.build.as_str().rsplit('.').next()
!= Some(&cached_version.full())
});
let should_download = parsed_fetched_version
.as_ref()
.ok()
.is_none_or(|version| version.build.as_str() != cached_version.full());
let newer_version = should_download
.then(|| VersionCheckType::Sha(AppCommitSha::new(fetched_version)));
return Ok(newer_version);
@@ -663,9 +660,10 @@ impl AutoUpdater {
.ok()
.flatten()
.map(|sha| {
parsed_fetched_version.as_ref().ok().is_none_or(|version| {
version.build.as_str().rsplit('.').next() != Some(&sha)
})
parsed_fetched_version
.as_ref()
.ok()
.is_none_or(|version| version.build.as_str() != sha)
})
.unwrap_or(true);
let newer_version = should_download
@@ -719,12 +717,9 @@ impl AutoUpdater {
}
fn check_if_fetched_version_is_newer_non_nightly(
mut installed_version: Version,
installed_version: Version,
fetched_version: Version,
) -> Result<Option<VersionCheckType>> {
// For non-nightly releases, ignore build and pre-release fields as they're not provided by our endpoints right now.
installed_version.build = semver::BuildMetadata::EMPTY;
installed_version.pre = semver::Prerelease::EMPTY;
let should_download = fetched_version > installed_version;
let newer_version = should_download.then(|| VersionCheckType::Semantic(fetched_version));
Ok(newer_version)
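The hunks above remove the step that cleared semver pre-release and build metadata before comparing the installed and fetched versions. For reference, a minimal standalone sketch of what that normalization does, using the `semver` crate (the version strings below are made up for illustration, and none of the names are Zed APIs):

```rust
// Minimal sketch: why clearing pre-release/build fields changes version comparison.
use semver::{BuildMetadata, Prerelease, Version};

fn is_newer_ignoring_pre_and_build(installed: &Version, fetched: &Version) -> bool {
    // Copy the installed version and drop pre-release/build so that
    // e.g. "0.165.0-pre+build.1" compares equal to a fetched "0.165.0".
    let mut normalized = installed.clone();
    normalized.pre = Prerelease::EMPTY;
    normalized.build = BuildMetadata::EMPTY;
    *fetched > normalized
}

fn main() {
    let installed = Version::parse("0.165.0-pre+build.1").unwrap();
    let fetched = Version::parse("0.165.0").unwrap();
    // With normalization, the fetched stable build is not considered newer.
    assert!(!is_newer_ignoring_pre_and_build(&installed, &fetched));
    // Without it, "0.165.0" has higher precedence than "0.165.0-pre",
    // so an update would be offered.
    assert!(fetched > installed);
}
```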

View File

@@ -20,7 +20,6 @@ gpui.workspace = true
http_client.workspace = true
markdown_preview.workspace = true
release_channel.workspace = true
semver.workspace = true
serde.workspace = true
serde_json.workspace = true
smol.workspace = true

View File

@@ -148,9 +148,7 @@ pub fn notify_if_app_was_updated(cx: &mut App) {
let should_show_notification = should_show_notification.await?;
if should_show_notification {
cx.update(|cx| {
let mut version = updater.read(cx).current_version();
version.build = semver::BuildMetadata::EMPTY;
version.pre = semver::Prerelease::EMPTY;
let version = updater.read(cx).current_version();
let app_name = ReleaseChannel::global(cx).display_name();
show_app_notification(
NotificationId::unique::<UpdateNotification>(),

View File

@@ -584,100 +584,41 @@ impl Model {
}
}
pub fn cross_region_inference_id(
&self,
region: &str,
allow_global: bool,
) -> anyhow::Result<String> {
// List derived from here:
// https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html#inference-profiles-support-system
let model_id = self.request_id();
let supports_global = matches!(
self,
Model::ClaudeOpus4_5
| Model::ClaudeOpus4_5Thinking
| Model::ClaudeHaiku4_5
| Model::ClaudeSonnet4
| Model::ClaudeSonnet4Thinking
| Model::ClaudeSonnet4_5
| Model::ClaudeSonnet4_5Thinking
);
pub fn cross_region_inference_id(&self, region: &str) -> anyhow::Result<String> {
let region_group = if region.starts_with("us-gov-") {
"us-gov"
} else if region.starts_with("us-")
|| region.starts_with("ca-")
|| region.starts_with("sa-")
{
if allow_global && supports_global {
"global"
} else {
"us"
}
} else if region.starts_with("us-") {
"us"
} else if region.starts_with("eu-") {
if allow_global && supports_global {
"global"
} else {
"eu"
}
"eu"
} else if region.starts_with("ap-") || region == "me-central-1" || region == "me-south-1" {
if allow_global && supports_global {
"global"
} else {
"apac"
}
"apac"
} else if region.starts_with("ca-") || region.starts_with("sa-") {
// Canada and South America regions - default to US profiles
"us"
} else {
anyhow::bail!("Unsupported Region {region}");
};
match (self, region_group, region) {
(Model::Custom { .. }, _, _) => Ok(self.request_id().into()),
let model_id = self.request_id();
(
Model::ClaudeOpus4_5
| Model::ClaudeOpus4_5Thinking
| Model::ClaudeHaiku4_5
| Model::ClaudeSonnet4
| Model::ClaudeSonnet4Thinking
| Model::ClaudeSonnet4_5
| Model::ClaudeSonnet4_5Thinking,
"global",
_,
) => Ok(format!("{}.{}", region_group, model_id)),
match (self, region_group) {
// Custom models can't have CRI IDs
(Model::Custom { .. }, _) => Ok(self.request_id().into()),
(
Model::Claude3Haiku
| Model::Claude3_5Sonnet
| Model::Claude3_7Sonnet
| Model::Claude3_7SonnetThinking
| Model::ClaudeSonnet4_5
| Model::ClaudeSonnet4_5Thinking,
"us-gov",
_,
) => Ok(format!("{}.{}", region_group, model_id)),
(
Model::ClaudeHaiku4_5 | Model::ClaudeSonnet4_5 | Model::ClaudeSonnet4_5Thinking,
"apac",
"ap-southeast-2" | "ap-southeast-4",
) => Ok(format!("au.{}", model_id)),
(
Model::ClaudeHaiku4_5 | Model::ClaudeSonnet4_5 | Model::ClaudeSonnet4_5Thinking,
"apac",
"ap-northeast-1" | "ap-northeast-3",
) => Ok(format!("jp.{}", model_id)),
(Model::AmazonNovaLite, "us", r) if r.starts_with("ca-") => {
Ok(format!("ca.{}", model_id))
// Models with US Gov only
(Model::Claude3_5Sonnet, "us-gov") | (Model::Claude3Haiku, "us-gov") => {
Ok(format!("{}.{}", region_group, model_id))
}
// Available everywhere
(Model::AmazonNovaLite | Model::AmazonNovaMicro | Model::AmazonNovaPro, _) => {
Ok(format!("{}.{}", region_group, model_id))
}
// Models in US
(
Model::AmazonNovaPremier
| Model::AmazonNovaLite
| Model::AmazonNovaMicro
| Model::AmazonNovaPro
| Model::Claude3_5Haiku
| Model::ClaudeHaiku4_5
| Model::Claude3_5Sonnet
@@ -714,18 +655,16 @@ impl Model {
| Model::PalmyraWriterX4
| Model::PalmyraWriterX5,
"us",
_,
) => Ok(format!("{}.{}", region_group, model_id)),
// Models available in EU
(
Model::AmazonNovaLite
| Model::AmazonNovaMicro
| Model::AmazonNovaPro
| Model::Claude3_5Sonnet
Model::Claude3_5Sonnet
| Model::ClaudeHaiku4_5
| Model::Claude3_7Sonnet
| Model::Claude3_7SonnetThinking
| Model::ClaudeSonnet4
| Model::ClaudeSonnet4Thinking
| Model::ClaudeSonnet4_5
| Model::ClaudeSonnet4_5Thinking
| Model::Claude3Haiku
@@ -734,26 +673,26 @@ impl Model {
| Model::MetaLlama323BInstructV1
| Model::MistralPixtralLarge2502V1,
"eu",
_,
) => Ok(format!("{}.{}", region_group, model_id)),
// Models available in APAC
(
Model::AmazonNovaLite
| Model::AmazonNovaMicro
| Model::AmazonNovaPro
| Model::Claude3_5Sonnet
Model::Claude3_5Sonnet
| Model::Claude3_5SonnetV2
| Model::ClaudeHaiku4_5
| Model::Claude3Haiku
| Model::Claude3Sonnet
| Model::Claude3_7Sonnet
| Model::Claude3_7SonnetThinking
| Model::ClaudeSonnet4
| Model::Claude3Haiku
| Model::Claude3Sonnet,
| Model::ClaudeSonnet4Thinking
| Model::ClaudeSonnet4_5
| Model::ClaudeSonnet4_5Thinking,
"apac",
_,
) => Ok(format!("{}.{}", region_group, model_id)),
_ => Ok(model_id.into()),
// Any other combination is not supported
_ => Ok(self.request_id().into()),
}
}
}
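For reference, a simplified, standalone sketch of the region-group mapping that `cross_region_inference_id` applies above: a region prefix selects one of the us-gov, us, eu, or apac profile groups, with Canada and South America defaulting to US profiles, and the group is prepended to the model ID. The region list is illustrative rather than exhaustive, and none of the names below are Zed or AWS SDK APIs.

```rust
// Simplified sketch of mapping an AWS region to a Bedrock cross-region
// inference profile prefix, mirroring the grouping in the hunk above.
fn region_group(region: &str) -> Option<&'static str> {
    if region.starts_with("us-gov-") {
        Some("us-gov")
    } else if region.starts_with("us-") || region.starts_with("ca-") || region.starts_with("sa-") {
        // Canada and South America regions default to the US profiles.
        Some("us")
    } else if region.starts_with("eu-") {
        Some("eu")
    } else if region.starts_with("ap-") || region == "me-central-1" || region == "me-south-1" {
        Some("apac")
    } else {
        None
    }
}

fn main() {
    let model_id = "anthropic.claude-3-5-sonnet-20241022-v2:0";
    if let Some(group) = region_group("us-east-1") {
        // Prints "us.anthropic.claude-3-5-sonnet-20241022-v2:0".
        println!("{group}.{model_id}");
    }
}
```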
@@ -766,15 +705,15 @@ mod tests {
fn test_us_region_inference_ids() -> anyhow::Result<()> {
// Test US regions
assert_eq!(
Model::Claude3_5SonnetV2.cross_region_inference_id("us-east-1", false)?,
Model::Claude3_5SonnetV2.cross_region_inference_id("us-east-1")?,
"us.anthropic.claude-3-5-sonnet-20241022-v2:0"
);
assert_eq!(
Model::Claude3_5SonnetV2.cross_region_inference_id("us-west-2", false)?,
Model::Claude3_5SonnetV2.cross_region_inference_id("us-west-2")?,
"us.anthropic.claude-3-5-sonnet-20241022-v2:0"
);
assert_eq!(
Model::AmazonNovaPro.cross_region_inference_id("us-east-2", false)?,
Model::AmazonNovaPro.cross_region_inference_id("us-east-2")?,
"us.amazon.nova-pro-v1:0"
);
Ok(())
@@ -784,19 +723,19 @@ mod tests {
fn test_eu_region_inference_ids() -> anyhow::Result<()> {
// Test European regions
assert_eq!(
Model::ClaudeSonnet4.cross_region_inference_id("eu-west-1", false)?,
Model::ClaudeSonnet4.cross_region_inference_id("eu-west-1")?,
"eu.anthropic.claude-sonnet-4-20250514-v1:0"
);
assert_eq!(
Model::ClaudeSonnet4_5.cross_region_inference_id("eu-west-1", false)?,
Model::ClaudeSonnet4_5.cross_region_inference_id("eu-west-1")?,
"eu.anthropic.claude-sonnet-4-5-20250929-v1:0"
);
assert_eq!(
Model::Claude3Sonnet.cross_region_inference_id("eu-west-1", false)?,
Model::Claude3Sonnet.cross_region_inference_id("eu-west-1")?,
"eu.anthropic.claude-3-sonnet-20240229-v1:0"
);
assert_eq!(
Model::AmazonNovaMicro.cross_region_inference_id("eu-north-1", false)?,
Model::AmazonNovaMicro.cross_region_inference_id("eu-north-1")?,
"eu.amazon.nova-micro-v1:0"
);
Ok(())
@@ -806,15 +745,15 @@ mod tests {
fn test_apac_region_inference_ids() -> anyhow::Result<()> {
// Test Asia-Pacific regions
assert_eq!(
Model::Claude3_5SonnetV2.cross_region_inference_id("ap-northeast-1", false)?,
Model::Claude3_5SonnetV2.cross_region_inference_id("ap-northeast-1")?,
"apac.anthropic.claude-3-5-sonnet-20241022-v2:0"
);
assert_eq!(
Model::Claude3_5SonnetV2.cross_region_inference_id("ap-southeast-2", false)?,
Model::Claude3_5SonnetV2.cross_region_inference_id("ap-southeast-2")?,
"apac.anthropic.claude-3-5-sonnet-20241022-v2:0"
);
assert_eq!(
Model::AmazonNovaLite.cross_region_inference_id("ap-south-1", false)?,
Model::AmazonNovaLite.cross_region_inference_id("ap-south-1")?,
"apac.amazon.nova-lite-v1:0"
);
Ok(())
@@ -824,11 +763,11 @@ mod tests {
fn test_gov_region_inference_ids() -> anyhow::Result<()> {
// Test Government regions
assert_eq!(
Model::Claude3_5Sonnet.cross_region_inference_id("us-gov-east-1", false)?,
Model::Claude3_5Sonnet.cross_region_inference_id("us-gov-east-1")?,
"us-gov.anthropic.claude-3-5-sonnet-20240620-v1:0"
);
assert_eq!(
Model::Claude3Haiku.cross_region_inference_id("us-gov-west-1", false)?,
Model::Claude3Haiku.cross_region_inference_id("us-gov-west-1")?,
"us-gov.anthropic.claude-3-haiku-20240307-v1:0"
);
Ok(())
@@ -838,15 +777,15 @@ mod tests {
fn test_meta_models_inference_ids() -> anyhow::Result<()> {
// Test Meta models
assert_eq!(
Model::MetaLlama370BInstructV1.cross_region_inference_id("us-east-1", false)?,
Model::MetaLlama370BInstructV1.cross_region_inference_id("us-east-1")?,
"meta.llama3-70b-instruct-v1:0"
);
assert_eq!(
Model::MetaLlama3170BInstructV1.cross_region_inference_id("us-east-1", false)?,
Model::MetaLlama3170BInstructV1.cross_region_inference_id("us-east-1")?,
"us.meta.llama3-1-70b-instruct-v1:0"
);
assert_eq!(
Model::MetaLlama321BInstructV1.cross_region_inference_id("eu-west-1", false)?,
Model::MetaLlama321BInstructV1.cross_region_inference_id("eu-west-1")?,
"eu.meta.llama3-2-1b-instruct-v1:0"
);
Ok(())
@@ -857,11 +796,11 @@ mod tests {
// Mistral models don't follow the regional prefix pattern,
// so they should return their original IDs
assert_eq!(
Model::MistralMistralLarge2402V1.cross_region_inference_id("us-east-1", false)?,
Model::MistralMistralLarge2402V1.cross_region_inference_id("us-east-1")?,
"mistral.mistral-large-2402-v1:0"
);
assert_eq!(
Model::MistralMixtral8x7BInstructV0.cross_region_inference_id("eu-west-1", false)?,
Model::MistralMixtral8x7BInstructV0.cross_region_inference_id("eu-west-1")?,
"mistral.mixtral-8x7b-instruct-v0:1"
);
Ok(())
@@ -872,11 +811,11 @@ mod tests {
// AI21 models don't follow the regional prefix pattern,
// so they should return their original IDs
assert_eq!(
Model::AI21J2UltraV1.cross_region_inference_id("us-east-1", false)?,
Model::AI21J2UltraV1.cross_region_inference_id("us-east-1")?,
"ai21.j2-ultra-v1"
);
assert_eq!(
Model::AI21JambaInstructV1.cross_region_inference_id("eu-west-1", false)?,
Model::AI21JambaInstructV1.cross_region_inference_id("eu-west-1")?,
"ai21.jamba-instruct-v1:0"
);
Ok(())
@@ -887,11 +826,11 @@ mod tests {
// Cohere models don't follow the regional prefix pattern,
// so they should return their original IDs
assert_eq!(
Model::CohereCommandRV1.cross_region_inference_id("us-east-1", false)?,
Model::CohereCommandRV1.cross_region_inference_id("us-east-1")?,
"cohere.command-r-v1:0"
);
assert_eq!(
Model::CohereCommandTextV14_4k.cross_region_inference_id("ap-southeast-1", false)?,
Model::CohereCommandTextV14_4k.cross_region_inference_id("ap-southeast-1")?,
"cohere.command-text-v14:7:4k"
);
Ok(())
@@ -911,17 +850,10 @@ mod tests {
// Custom model should return its name unchanged
assert_eq!(
custom_model.cross_region_inference_id("us-east-1", false)?,
custom_model.cross_region_inference_id("us-east-1")?,
"custom.my-model-v1:0"
);
// Test that models without global support fall back to regional when allow_global is true
assert_eq!(
Model::AmazonNovaPro.cross_region_inference_id("us-east-1", true)?,
"us.amazon.nova-pro-v1:0",
"Nova Pro should fall back to regional profile even when allow_global is true"
);
Ok(())
}
@@ -960,28 +892,3 @@ mod tests {
);
}
}
#[test]
fn test_global_inference_ids() -> anyhow::Result<()> {
// Test global inference for models that support it when allow_global is true
assert_eq!(
Model::ClaudeSonnet4.cross_region_inference_id("us-east-1", true)?,
"global.anthropic.claude-sonnet-4-20250514-v1:0"
);
assert_eq!(
Model::ClaudeSonnet4_5.cross_region_inference_id("eu-west-1", true)?,
"global.anthropic.claude-sonnet-4-5-20250929-v1:0"
);
assert_eq!(
Model::ClaudeHaiku4_5.cross_region_inference_id("ap-south-1", true)?,
"global.anthropic.claude-haiku-4-5-20251001-v1:0"
);
// Test that regional prefix is used when allow_global is false
assert_eq!(
Model::ClaudeSonnet4.cross_region_inference_id("us-east-1", false)?,
"us.anthropic.claude-sonnet-4-20250514-v1:0"
);
Ok(())
}

View File

@@ -12,7 +12,7 @@ workspace = true
path = "src/buffer_diff.rs"
[features]
test-support = ["settings"]
test-support = []
[dependencies]
anyhow.workspace = true
@@ -24,7 +24,6 @@ language.workspace = true
log.workspace = true
pretty_assertions.workspace = true
rope.workspace = true
settings = { workspace = true, optional = true }
sum_tree.workspace = true
text.workspace = true
util.workspace = true
@@ -34,7 +33,6 @@ ctor.workspace = true
gpui = { workspace = true, features = ["test-support"] }
rand.workspace = true
serde_json.workspace = true
settings.workspace = true
text = { workspace = true, features = ["test-support"] }
unindent.workspace = true
zlog.workspace = true

View File

@@ -1,10 +1,7 @@
use futures::channel::oneshot;
use git2::{DiffLineType as GitDiffLineType, DiffOptions as GitOptions, Patch as GitPatch};
use gpui::{App, AppContext as _, AsyncApp, Context, Entity, EventEmitter, Task, TaskLabel};
use language::{
BufferRow, DiffOptions, File, Language, LanguageName, LanguageRegistry,
language_settings::language_settings, word_diff_ranges,
};
use language::{Language, LanguageRegistry};
use rope::Rope;
use std::{
cmp::Ordering,
@@ -14,16 +11,14 @@ use std::{
sync::{Arc, LazyLock},
};
use sum_tree::SumTree;
use text::{Anchor, Bias, BufferId, OffsetRangeExt, Point, ToOffset as _, ToPoint as _};
use text::{Anchor, Bias, BufferId, OffsetRangeExt, Point, ToOffset as _};
use util::ResultExt;
pub static CALCULATE_DIFF_TASK: LazyLock<TaskLabel> = LazyLock::new(TaskLabel::new);
pub const MAX_WORD_DIFF_LINE_COUNT: usize = 5;
pub struct BufferDiff {
pub buffer_id: BufferId,
inner: BufferDiffInner,
// diff of the index vs head
secondary_diff: Option<Entity<BufferDiff>>,
}
@@ -36,7 +31,6 @@ pub struct BufferDiffSnapshot {
#[derive(Clone)]
struct BufferDiffInner {
hunks: SumTree<InternalDiffHunk>,
// Used for making staging mo
pending_hunks: SumTree<PendingHunk>,
base_text: language::BufferSnapshot,
base_text_exists: bool,
@@ -56,18 +50,11 @@ pub enum DiffHunkStatusKind {
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
/// Diff of Working Copy vs Index
/// aka 'is this hunk staged or not'
pub enum DiffHunkSecondaryStatus {
/// Unstaged
HasSecondaryHunk,
/// Partially staged
OverlapsWithSecondaryHunk,
/// Staged
NoSecondaryHunk,
/// We are unstaging
SecondaryHunkAdditionPending,
/// We are staging
SecondaryHunkRemovalPending,
}
@@ -81,10 +68,6 @@ pub struct DiffHunk {
/// The range in the buffer's diff base text to which this hunk corresponds.
pub diff_base_byte_range: Range<usize>,
pub secondary_status: DiffHunkSecondaryStatus,
// Anchors representing the word diff locations in the active buffer
pub buffer_word_diffs: Vec<Range<Anchor>>,
// Offsets relative to the start of the deleted diff that represent word diff locations
pub base_word_diffs: Vec<Range<usize>>,
}
/// We store [`InternalDiffHunk`]s internally so we don't need to store the additional row range.
@@ -92,8 +75,6 @@ pub struct DiffHunk {
struct InternalDiffHunk {
buffer_range: Range<Anchor>,
diff_base_byte_range: Range<usize>,
base_word_diffs: Vec<Range<usize>>,
buffer_word_diffs: Vec<Range<Anchor>>,
}
#[derive(Debug, Clone, PartialEq, Eq)]
@@ -107,7 +88,6 @@ struct PendingHunk {
#[derive(Debug, Clone)]
pub struct DiffHunkSummary {
buffer_range: Range<Anchor>,
diff_base_byte_range: Range<usize>,
}
impl sum_tree::Item for InternalDiffHunk {
@@ -116,7 +96,6 @@ impl sum_tree::Item for InternalDiffHunk {
fn summary(&self, _cx: &text::BufferSnapshot) -> Self::Summary {
DiffHunkSummary {
buffer_range: self.buffer_range.clone(),
diff_base_byte_range: self.diff_base_byte_range.clone(),
}
}
}
@@ -127,7 +106,6 @@ impl sum_tree::Item for PendingHunk {
fn summary(&self, _cx: &text::BufferSnapshot) -> Self::Summary {
DiffHunkSummary {
buffer_range: self.buffer_range.clone(),
diff_base_byte_range: self.diff_base_byte_range.clone(),
}
}
}
@@ -138,7 +116,6 @@ impl sum_tree::Summary for DiffHunkSummary {
fn zero(_cx: Self::Context<'_>) -> Self {
DiffHunkSummary {
buffer_range: Anchor::MIN..Anchor::MIN,
diff_base_byte_range: 0..0,
}
}
@@ -148,15 +125,6 @@ impl sum_tree::Summary for DiffHunkSummary {
.start
.min(&other.buffer_range.start, buffer);
self.buffer_range.end = *self.buffer_range.end.max(&other.buffer_range.end, buffer);
self.diff_base_byte_range.start = self
.diff_base_byte_range
.start
.min(other.diff_base_byte_range.start);
self.diff_base_byte_range.end = self
.diff_base_byte_range
.end
.max(other.diff_base_byte_range.end);
}
}
@@ -185,10 +153,6 @@ impl std::fmt::Debug for BufferDiffInner {
}
impl BufferDiffSnapshot {
pub fn buffer_diff_id(&self) -> BufferId {
self.inner.base_text.remote_id()
}
fn empty(buffer: &text::BufferSnapshot, cx: &mut App) -> BufferDiffSnapshot {
BufferDiffSnapshot {
inner: BufferDiffInner {
@@ -227,13 +191,6 @@ impl BufferDiffSnapshot {
let base_text_pair;
let base_text_exists;
let base_text_snapshot;
let diff_options = build_diff_options(
None,
language.as_ref().map(|l| l.name()),
language.as_ref().map(|l| l.default_scope()),
cx,
);
if let Some(text) = &base_text {
let base_text_rope = Rope::from(text.as_str());
base_text_pair = Some((text.clone(), base_text_rope.clone()));
@@ -251,7 +208,7 @@ impl BufferDiffSnapshot {
.background_executor()
.spawn_labeled(*CALCULATE_DIFF_TASK, {
let buffer = buffer.clone();
async move { compute_hunks(base_text_pair, buffer, diff_options) }
async move { compute_hunks(base_text_pair, buffer) }
});
async move {
@@ -274,12 +231,6 @@ impl BufferDiffSnapshot {
base_text_snapshot: language::BufferSnapshot,
cx: &App,
) -> impl Future<Output = Self> + use<> {
let diff_options = build_diff_options(
base_text_snapshot.file(),
base_text_snapshot.language().map(|l| l.name()),
base_text_snapshot.language().map(|l| l.default_scope()),
cx,
);
let base_text_exists = base_text.is_some();
let base_text_pair = base_text.map(|text| {
debug_assert_eq!(&*text, &base_text_snapshot.text());
@@ -291,7 +242,7 @@ impl BufferDiffSnapshot {
inner: BufferDiffInner {
base_text: base_text_snapshot,
pending_hunks: SumTree::new(&buffer),
hunks: compute_hunks(base_text_pair, buffer, diff_options),
hunks: compute_hunks(base_text_pair, buffer),
base_text_exists,
},
secondary_diff: None,
@@ -350,54 +301,6 @@ impl BufferDiffSnapshot {
let (new_id, new_empty) = (right.remote_id(), right.is_empty());
new_id == old_id || (new_empty && old_empty)
}
pub fn row_to_base_text_row(&self, row: BufferRow, buffer: &text::BufferSnapshot) -> u32 {
// TODO(split-diff) expose a parameter to reuse a cursor to avoid repeatedly seeking from the start
// Find the last hunk that starts before this position.
let mut cursor = self.inner.hunks.cursor::<DiffHunkSummary>(buffer);
let position = buffer.anchor_before(Point::new(row, 0));
cursor.seek(&position, Bias::Left);
if cursor
.item()
.is_none_or(|hunk| hunk.buffer_range.start.cmp(&position, buffer).is_gt())
{
cursor.prev();
}
let unclipped_point = if let Some(hunk) = cursor.item()
&& hunk.buffer_range.start.cmp(&position, buffer).is_le()
{
let mut unclipped_point = cursor
.end()
.diff_base_byte_range
.end
.to_point(self.base_text());
if position.cmp(&cursor.end().buffer_range.end, buffer).is_ge() {
unclipped_point +=
Point::new(row, 0) - cursor.end().buffer_range.end.to_point(buffer);
}
// Move the cursor so that at the next step we can clip with the start of the next hunk.
cursor.next();
unclipped_point
} else {
// Position is before the added region for the first hunk.
debug_assert!(self.inner.hunks.first().is_none_or(|first_hunk| {
position.cmp(&first_hunk.buffer_range.start, buffer).is_le()
}));
Point::new(row, 0)
};
let max_point = if let Some(next_hunk) = cursor.item() {
next_hunk
.diff_base_byte_range
.start
.to_point(self.base_text())
} else {
self.base_text().max_point()
};
unclipped_point.min(max_point).row
}
}
impl BufferDiffInner {
@@ -437,7 +340,7 @@ impl BufferDiffInner {
};
let hunk = PendingHunk {
buffer_range: Anchor::min_max_range_for_buffer(buffer.remote_id()),
buffer_range: Anchor::MIN..Anchor::MAX,
diff_base_byte_range: 0..index_text.map_or(0, |rope| rope.len()),
buffer_version: buffer.version().clone(),
new_status,
@@ -634,15 +537,11 @@ impl BufferDiffInner {
[
(
&hunk.buffer_range.start,
(
hunk.buffer_range.start,
hunk.diff_base_byte_range.start,
hunk,
),
(hunk.buffer_range.start, hunk.diff_base_byte_range.start),
),
(
&hunk.buffer_range.end,
(hunk.buffer_range.end, hunk.diff_base_byte_range.end, hunk),
(hunk.buffer_range.end, hunk.diff_base_byte_range.end),
),
]
});
@@ -661,11 +560,8 @@ impl BufferDiffInner {
let mut summaries = buffer.summaries_for_anchors_with_payload::<Point, _, _>(anchor_iter);
iter::from_fn(move || {
loop {
let (start_point, (start_anchor, start_base, hunk)) = summaries.next()?;
let (mut end_point, (mut end_anchor, end_base, _)) = summaries.next()?;
let base_word_diffs = hunk.base_word_diffs.clone();
let buffer_word_diffs = hunk.buffer_word_diffs.clone();
let (start_point, (start_anchor, start_base)) = summaries.next()?;
let (mut end_point, (mut end_anchor, end_base)) = summaries.next()?;
if !start_anchor.is_valid(buffer) {
continue;
@@ -735,8 +631,6 @@ impl BufferDiffInner {
range: start_point..end_point,
diff_base_byte_range: start_base..end_base,
buffer_range: start_anchor..end_anchor,
base_word_diffs,
buffer_word_diffs,
secondary_status,
});
}
@@ -768,8 +662,6 @@ impl BufferDiffInner {
buffer_range: hunk.buffer_range.clone(),
// The secondary status is not used by callers of this method.
secondary_status: DiffHunkSecondaryStatus::NoSecondaryHunk,
base_word_diffs: hunk.base_word_diffs.clone(),
buffer_word_diffs: hunk.buffer_word_diffs.clone(),
})
})
}
@@ -838,36 +730,9 @@ impl BufferDiffInner {
}
}
fn build_diff_options(
file: Option<&Arc<dyn File>>,
language: Option<LanguageName>,
language_scope: Option<language::LanguageScope>,
cx: &App,
) -> Option<DiffOptions> {
#[cfg(any(test, feature = "test-support"))]
{
if !cx.has_global::<settings::SettingsStore>() {
return Some(DiffOptions {
language_scope,
max_word_diff_line_count: MAX_WORD_DIFF_LINE_COUNT,
..Default::default()
});
}
}
language_settings(language, file, cx)
.word_diff_enabled
.then_some(DiffOptions {
language_scope,
max_word_diff_line_count: MAX_WORD_DIFF_LINE_COUNT,
..Default::default()
})
}
fn compute_hunks(
diff_base: Option<(Arc<String>, Rope)>,
buffer: text::BufferSnapshot,
diff_options: Option<DiffOptions>,
) -> SumTree<InternalDiffHunk> {
let mut tree = SumTree::new(&buffer);
@@ -893,8 +758,6 @@ fn compute_hunks(
InternalDiffHunk {
buffer_range: buffer.anchor_before(0)..buffer.anchor_before(0),
diff_base_byte_range: 0..diff_base.len() - 1,
base_word_diffs: Vec::default(),
buffer_word_diffs: Vec::default(),
},
&buffer,
);
@@ -910,7 +773,6 @@ fn compute_hunks(
&diff_base_rope,
&buffer,
&mut divergence,
diff_options.as_ref(),
);
tree.push(hunk, &buffer);
}
@@ -918,10 +780,8 @@ fn compute_hunks(
} else {
tree.push(
InternalDiffHunk {
buffer_range: Anchor::min_max_range_for_buffer(buffer.remote_id()),
buffer_range: Anchor::MIN..Anchor::MAX,
diff_base_byte_range: 0..0,
base_word_diffs: Vec::default(),
buffer_word_diffs: Vec::default(),
},
&buffer,
);
@@ -936,7 +796,6 @@ fn process_patch_hunk(
diff_base: &Rope,
buffer: &text::BufferSnapshot,
buffer_row_divergence: &mut i64,
diff_options: Option<&DiffOptions>,
) -> InternalDiffHunk {
let line_item_count = patch.num_lines_in_hunk(hunk_index).unwrap();
assert!(line_item_count > 0);
@@ -1001,49 +860,9 @@ fn process_patch_hunk(
let start = Point::new(buffer_row_range.start, 0);
let end = Point::new(buffer_row_range.end, 0);
let buffer_range = buffer.anchor_before(start)..buffer.anchor_before(end);
let base_line_count = line_item_count.saturating_sub(buffer_row_range.len());
let (base_word_diffs, buffer_word_diffs) = if let Some(diff_options) = diff_options
&& !buffer_row_range.is_empty()
&& base_line_count == buffer_row_range.len()
&& diff_options.max_word_diff_line_count >= base_line_count
{
let base_text: String = diff_base
.chunks_in_range(diff_base_byte_range.clone())
.collect();
let buffer_text: String = buffer.text_for_range(buffer_range.clone()).collect();
let (base_word_diffs, buffer_word_diffs_relative) = word_diff_ranges(
&base_text,
&buffer_text,
DiffOptions {
language_scope: diff_options.language_scope.clone(),
..*diff_options
},
);
let buffer_start_offset = buffer_range.start.to_offset(buffer);
let buffer_word_diffs = buffer_word_diffs_relative
.into_iter()
.map(|range| {
let start = buffer.anchor_after(buffer_start_offset + range.start);
let end = buffer.anchor_after(buffer_start_offset + range.end);
start..end
})
.collect();
(base_word_diffs, buffer_word_diffs)
} else {
(Vec::default(), Vec::default())
};
InternalDiffHunk {
buffer_range,
diff_base_byte_range,
base_word_diffs,
buffer_word_diffs,
}
}
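`compute_hunks` and `process_patch_hunk` above are built on git2's buffer diffing. As a point of reference, a minimal self-contained sketch (not Zed's code) of producing line hunks between a base text and a buffer with `git2::Patch::from_buffers`; the helper name and sample strings are illustrative:

```rust
// Sketch: diff two in-memory strings with git2 and print the resulting hunks.
use git2::{DiffOptions, Patch};

fn print_hunks(base: &str, current: &str) -> Result<(), git2::Error> {
    let mut opts = DiffOptions::new();
    opts.context_lines(0); // hunks only, no surrounding context lines
    let mut patch = Patch::from_buffers(
        base.as_bytes(),
        None,
        current.as_bytes(),
        None,
        Some(&mut opts),
    )?;
    for hunk_index in 0..patch.num_hunks() {
        let (hunk, _line_count) = patch.hunk(hunk_index)?;
        println!(
            "base lines {}..{} -> buffer lines {}..{}",
            hunk.old_start(),
            hunk.old_start() + hunk.old_lines(),
            hunk.new_start(),
            hunk.new_start() + hunk.new_lines(),
        );
    }
    Ok(())
}

fn main() -> Result<(), git2::Error> {
    print_hunks("one\ntwo\nthree\n", "one\nTWO\nthree\nfour\n")
}
```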
@@ -1122,11 +941,10 @@ impl BufferDiff {
pub fn clear_pending_hunks(&mut self, cx: &mut Context<Self>) {
if self.secondary_diff.is_some() {
self.inner.pending_hunks = SumTree::from_summary(DiffHunkSummary {
buffer_range: Anchor::min_min_range_for_buffer(self.buffer_id),
diff_base_byte_range: 0..0,
buffer_range: Anchor::MIN..Anchor::MIN,
});
cx.emit(BufferDiffEvent::DiffChanged {
changed_range: Some(Anchor::min_max_range_for_buffer(self.buffer_id)),
changed_range: Some(Anchor::MIN..Anchor::MAX),
});
}
}
@@ -1247,10 +1065,7 @@ impl BufferDiff {
{
(false, new_state.compare(state, buffer))
}
_ => (
true,
Some(text::Anchor::min_max_range_for_buffer(self.buffer_id)),
),
_ => (true, Some(text::Anchor::MIN..text::Anchor::MAX)),
};
if let Some(secondary_changed_range) = secondary_diff_change
@@ -1311,11 +1126,7 @@ impl BufferDiff {
buffer_snapshot: &'a text::BufferSnapshot,
cx: &'a App,
) -> impl 'a + Iterator<Item = DiffHunk> {
self.hunks_intersecting_range(
Anchor::min_max_range_for_buffer(buffer_snapshot.remote_id()),
buffer_snapshot,
cx,
)
self.hunks_intersecting_range(Anchor::MIN..Anchor::MAX, buffer_snapshot, cx)
}
pub fn hunks_intersecting_range<'a>(
@@ -1411,9 +1222,7 @@ impl BufferDiff {
impl DiffHunk {
pub fn is_created_file(&self) -> bool {
self.diff_base_byte_range == (0..0)
&& self.buffer_range.start.is_min()
&& self.buffer_range.end.is_max()
self.diff_base_byte_range == (0..0) && self.buffer_range == (Anchor::MIN..Anchor::MAX)
}
pub fn status(&self) -> DiffHunkStatus {
@@ -1580,10 +1389,7 @@ mod tests {
let mut buffer = Buffer::new(ReplicaId::LOCAL, BufferId::new(1).unwrap(), buffer_text);
let mut diff = BufferDiffSnapshot::new_sync(buffer.clone(), diff_base.clone(), cx);
assert_hunks(
diff.hunks_intersecting_range(
Anchor::min_max_range_for_buffer(buffer.remote_id()),
&buffer,
),
diff.hunks_intersecting_range(Anchor::MIN..Anchor::MAX, &buffer),
&buffer,
&diff_base,
&[(1..2, "two\n", "HELLO\n", DiffHunkStatus::modified_none())],
@@ -1592,10 +1398,7 @@ mod tests {
buffer.edit([(0..0, "point five\n")]);
diff = BufferDiffSnapshot::new_sync(buffer.clone(), diff_base.clone(), cx);
assert_hunks(
diff.hunks_intersecting_range(
Anchor::min_max_range_for_buffer(buffer.remote_id()),
&buffer,
),
diff.hunks_intersecting_range(Anchor::MIN..Anchor::MAX, &buffer),
&buffer,
&diff_base,
&[
@@ -1606,10 +1409,7 @@ mod tests {
diff = cx.update(|cx| BufferDiffSnapshot::empty(&buffer, cx));
assert_hunks::<&str, _>(
diff.hunks_intersecting_range(
Anchor::min_max_range_for_buffer(buffer.remote_id()),
&buffer,
),
diff.hunks_intersecting_range(Anchor::MIN..Anchor::MAX, &buffer),
&buffer,
&diff_base,
&[],
@@ -1683,10 +1483,7 @@ mod tests {
];
assert_hunks(
uncommitted_diff.hunks_intersecting_range(
Anchor::min_max_range_for_buffer(buffer.remote_id()),
&buffer,
),
uncommitted_diff.hunks_intersecting_range(Anchor::MIN..Anchor::MAX, &buffer),
&buffer,
&head_text,
&expected_hunks,
@@ -1745,11 +1542,8 @@ mod tests {
})
.await;
assert_eq!(
diff.hunks_intersecting_range(
Anchor::min_max_range_for_buffer(buffer.remote_id()),
&buffer
)
.count(),
diff.hunks_intersecting_range(Anchor::MIN..Anchor::MAX, &buffer)
.count(),
8
);
@@ -2361,12 +2155,8 @@ mod tests {
let mut diff = uncommitted_diff(&working_copy, &index_text, head_text.clone(), cx);
let mut hunks = diff.update(cx, |diff, cx| {
diff.hunks_intersecting_range(
Anchor::min_max_range_for_buffer(diff.buffer_id),
&working_copy,
cx,
)
.collect::<Vec<_>>()
diff.hunks_intersecting_range(Anchor::MIN..Anchor::MAX, &working_copy, cx)
.collect::<Vec<_>>()
});
if hunks.is_empty() {
return;
@@ -2395,12 +2185,8 @@ mod tests {
diff = uncommitted_diff(&working_copy, &index_text, head_text.clone(), cx);
let found_hunks = diff.update(cx, |diff, cx| {
diff.hunks_intersecting_range(
Anchor::min_max_range_for_buffer(diff.buffer_id),
&working_copy,
cx,
)
.collect::<Vec<_>>()
diff.hunks_intersecting_range(Anchor::MIN..Anchor::MAX, &working_copy, cx)
.collect::<Vec<_>>()
});
assert_eq!(hunks.len(), found_hunks.len());
@@ -2418,62 +2204,4 @@ mod tests {
hunks = found_hunks;
}
}
#[gpui::test]
async fn test_row_to_base_text_row(cx: &mut TestAppContext) {
let base_text = "
zero
one
two
three
four
five
six
seven
eight
"
.unindent();
let buffer_text = "
zero
ONE
two
NINE
five
seven
"
.unindent();
// zero
// - one
// + ONE
// two
// - three
// - four
// + NINE
// five
// - six
// seven
// - eight
let buffer = Buffer::new(ReplicaId::LOCAL, BufferId::new(1).unwrap(), buffer_text);
let buffer_snapshot = buffer.snapshot();
let diff = BufferDiffSnapshot::new_sync(buffer_snapshot.clone(), base_text, cx);
let expected_results = [
// don't format me
(0, 0),
(1, 2),
(2, 2),
(3, 5),
(4, 5),
(5, 7),
(6, 9),
];
for (buffer_row, expected) in expected_results {
assert_eq!(
diff.row_to_base_text_row(buffer_row, &buffer_snapshot),
expected,
"{buffer_row}"
);
}
}
}

View File

@@ -524,16 +524,6 @@ impl Room {
self.id
}
pub fn room_id(&self) -> impl Future<Output = Option<String>> + 'static {
let room = self.live_kit.as_ref().map(|lk| lk.room.clone());
async move {
let room = room?;
let sid = room.sid().await;
let name = room.name();
Some(format!("{} (sid: {sid})", name))
}
}
pub fn status(&self) -> RoomStatus {
self.status
}

View File

@@ -1723,10 +1723,6 @@ impl ProtoClient for Client {
fn is_via_collab(&self) -> bool {
true
}
fn has_wsl_interop(&self) -> bool {
false
}
}
/// prefix for the zed:// url scheme

View File

@@ -59,11 +59,3 @@ pub fn agent_server_docs(cx: &App) -> String {
server_url = server_url(cx)
)
}
/// Returns the URL to Zed's edit prediction documentation.
pub fn edit_prediction_docs(cx: &App) -> String {
format!(
"{server_url}/docs/ai/edit-prediction",
server_url = server_url(cx)
)
}

View File

@@ -169,17 +169,6 @@ pub struct PredictEditsBody {
/// Info about the git repository state, only present when can_collect_data is true.
#[serde(skip_serializing_if = "Option::is_none", default)]
pub git_info: Option<PredictEditsGitInfo>,
/// The trigger for this request.
#[serde(default)]
pub trigger: PredictEditsRequestTrigger,
}
#[derive(Default, Debug, Clone, Copy, Serialize, Deserialize)]
pub enum PredictEditsRequestTrigger {
Diagnostics,
Cli,
#[default]
Other,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -206,41 +195,17 @@ pub struct AcceptEditPredictionBody {
pub request_id: String,
}
#[derive(Debug, Clone, Deserialize)]
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RejectEditPredictionsBody {
pub rejections: Vec<EditPredictionRejection>,
}
#[derive(Debug, Clone, Serialize)]
pub struct RejectEditPredictionsBodyRef<'a> {
pub rejections: &'a [EditPredictionRejection],
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EditPredictionRejection {
pub request_id: String,
#[serde(default)]
pub reason: EditPredictionRejectReason,
pub was_shown: bool,
}
#[derive(Default, Debug, Clone, Copy, Serialize, Deserialize, PartialEq)]
pub enum EditPredictionRejectReason {
/// New requests were triggered before this one completed
Canceled,
/// No edits returned
Empty,
/// Edits returned, but none remained after interpolation
InterpolatedEmpty,
/// The new prediction was preferred over the current one
Replaced,
/// The current prediction was preferred over the new one
CurrentPreferred,
/// The current prediction was discarded
#[default]
Discarded,
}
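The `#[serde(default)]` field paired with a `#[default]` variant in the removed `EditPredictionRejectReason` above is what lets payloads that omit the field still deserialize. A small standalone sketch of that pattern; the type names here are illustrative, not the zeta protocol types:

```rust
// Sketch: a defaulted enum field tolerates older payloads without that field.
use serde::{Deserialize, Serialize};

#[derive(Default, Debug, Clone, Copy, Serialize, Deserialize, PartialEq)]
enum RejectReason {
    Canceled,
    #[default]
    Discarded,
}

#[derive(Debug, Serialize, Deserialize)]
struct Rejection {
    request_id: String,
    #[serde(default)]
    reason: RejectReason,
}

fn main() -> serde_json::Result<()> {
    // The "reason" field is missing entirely: it falls back to RejectReason::Discarded.
    let old: Rejection = serde_json::from_str(r#"{ "request_id": "abc" }"#)?;
    assert_eq!(old.reason, RejectReason::Discarded);
    Ok(())
}
```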
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Clone, Copy, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum CompletionMode {

View File

@@ -9,7 +9,7 @@ use std::{
use strum::EnumIter;
use uuid::Uuid;
use crate::{PredictEditsGitInfo, PredictEditsRequestTrigger};
use crate::PredictEditsGitInfo;
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PlanContextRetrievalRequest {
@@ -53,8 +53,6 @@ pub struct PredictEditsRequest {
pub prompt_max_bytes: Option<usize>,
#[serde(default)]
pub prompt_format: PromptFormat,
#[serde(default)]
pub trigger: PredictEditsRequestTrigger,
}
#[derive(Debug, Clone, Serialize, Deserialize)]

View File

@@ -121,8 +121,6 @@ CREATE TABLE "project_repositories" (
"merge_message" VARCHAR,
"branch_summary" VARCHAR,
"head_commit_details" VARCHAR,
"remote_upstream_url" VARCHAR,
"remote_origin_url" VARCHAR,
PRIMARY KEY (project_id, id)
);

View File

@@ -1,2 +0,0 @@
ALTER TABLE "project_repositories" ADD COLUMN "remote_upstream_url" VARCHAR;
ALTER TABLE "project_repositories" ADD COLUMN "remote_origin_url" VARCHAR;

View File

@@ -64,16 +64,6 @@ async fn check_is_contributor(
}));
}
if ZedZippyBot::is_zed_zippy_bot(&params) {
return Ok(Json(CheckIsContributorResponse {
signed_at: Some(
ZedZippyBot::created_at()
.and_utc()
.to_rfc3339_opts(SecondsFormat::Millis, true),
),
}));
}
Ok(Json(CheckIsContributorResponse {
signed_at: app
.db
@@ -113,36 +103,6 @@ impl RenovateBot {
}
}
/// The Zed Zippy bot GitHub user (`zed-zippy[bot]`).
///
/// https://api.github.com/users/zed-zippy[bot]
struct ZedZippyBot;
impl ZedZippyBot {
const LOGIN: &'static str = "zed-zippy[bot]";
const USER_ID: i32 = 234243425;
/// Returns the `created_at` timestamp for the Zed Zippy bot user.
fn created_at() -> &'static NaiveDateTime {
static CREATED_AT: OnceLock<NaiveDateTime> = OnceLock::new();
CREATED_AT.get_or_init(|| {
chrono::DateTime::parse_from_rfc3339("2025-09-24T17:00:11Z")
.expect("failed to parse 'created_at' for 'zed-zippy[bot]'")
.naive_utc()
})
}
/// Returns whether the given contributor selector corresponds to the Zed Zippy bot user.
fn is_zed_zippy_bot(contributor: &ContributorSelector) -> bool {
match contributor {
ContributorSelector::GitHubLogin { github_login } => github_login == Self::LOGIN,
ContributorSelector::GitHubUserId { github_user_id } => {
github_user_id == &Self::USER_ID
}
}
}
}
#[derive(Debug, Deserialize)]
struct AddContributorBody {
github_user_id: i32,

View File

@@ -362,8 +362,6 @@ impl Database {
entry_ids: ActiveValue::set("[]".into()),
head_commit_details: ActiveValue::set(None),
merge_message: ActiveValue::set(None),
remote_upstream_url: ActiveValue::set(None),
remote_origin_url: ActiveValue::set(None),
}
}),
)
@@ -513,8 +511,6 @@ impl Database {
serde_json::to_string(&update.current_merge_conflicts).unwrap(),
)),
merge_message: ActiveValue::set(update.merge_message.clone()),
remote_upstream_url: ActiveValue::set(update.remote_upstream_url.clone()),
remote_origin_url: ActiveValue::set(update.remote_origin_url.clone()),
})
.on_conflict(
OnConflict::columns([
@@ -1009,8 +1005,6 @@ impl Database {
is_last_update: true,
merge_message: db_repository_entry.merge_message,
stash_entries: Vec::new(),
remote_upstream_url: db_repository_entry.remote_upstream_url.clone(),
remote_origin_url: db_repository_entry.remote_origin_url.clone(),
});
}
}

View File

@@ -796,8 +796,6 @@ impl Database {
is_last_update: true,
merge_message: db_repository.merge_message,
stash_entries: Vec::new(),
remote_upstream_url: db_repository.remote_upstream_url.clone(),
remote_origin_url: db_repository.remote_origin_url.clone(),
});
}
}

View File

@@ -22,8 +22,6 @@ pub struct Model {
pub branch_summary: Option<String>,
// A JSON object representing the current Head commit values
pub head_commit_details: Option<String>,
pub remote_upstream_url: Option<String>,
pub remote_origin_url: Option<String>,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]

View File

@@ -469,8 +469,6 @@ impl Server {
.add_request_handler(forward_mutating_project_request::<proto::GetBlobContent>)
.add_request_handler(forward_mutating_project_request::<proto::GitCreateBranch>)
.add_request_handler(forward_mutating_project_request::<proto::GitChangeBranch>)
.add_request_handler(forward_mutating_project_request::<proto::GitCreateRemote>)
.add_request_handler(forward_mutating_project_request::<proto::GitRemoveRemote>)
.add_request_handler(forward_mutating_project_request::<proto::CheckForPushedCommits>)
.add_message_handler(broadcast_project_message_from_host::<proto::AdvertiseContexts>)
.add_message_handler(update_context)

View File

@@ -25,7 +25,6 @@ use gpui::{
use indoc::indoc;
use language::FakeLspAdapter;
use lsp::LSP_REQUEST_TIMEOUT;
use pretty_assertions::assert_eq;
use project::{
ProgressToken, ProjectPath, SERVER_PROGRESS_THROTTLE_TIMEOUT,
lsp_store::lsp_ext_command::{ExpandedMacro, LspExtExpandMacro},
@@ -1582,10 +1581,7 @@ async fn test_share_project(
buffer_a.read_with(cx_a, |buffer, _| {
buffer
.snapshot()
.selections_in_range(
text::Anchor::min_max_range_for_buffer(buffer.remote_id()),
false,
)
.selections_in_range(text::Anchor::MIN..text::Anchor::MAX, false)
.count()
== 1
});
@@ -1626,10 +1622,7 @@ async fn test_share_project(
buffer_a.read_with(cx_a, |buffer, _| {
buffer
.snapshot()
.selections_in_range(
text::Anchor::min_max_range_for_buffer(buffer.remote_id()),
false,
)
.selections_in_range(text::Anchor::MIN..text::Anchor::MAX, false)
.count()
== 0
});
@@ -3193,12 +3186,13 @@ async fn test_lsp_pull_diagnostics(
.collect::<Vec<_>>();
let expected_messages = [
expected_pull_diagnostic_lib_message,
expected_push_diagnostic_lib_message,
// TODO bug: the pushed diagnostics are not being sent to the client when they open the corresponding buffer.
// expected_push_diagnostic_lib_message,
];
assert_eq!(
all_diagnostics.len(),
2,
"Expected pull and push diagnostics, but got: {all_diagnostics:?}"
1,
"Expected pull diagnostics, but got: {all_diagnostics:?}"
);
for diagnostic in all_diagnostics {
assert!(
@@ -3258,15 +3252,14 @@ async fn test_lsp_pull_diagnostics(
.diagnostics_in_range(MultiBufferOffset(0)..snapshot.len())
.collect::<Vec<_>>();
let expected_messages = [
// Despite workspace diagnostics provided,
// the currently open file's diagnostics should be preferred, as LSP suggests.
expected_pull_diagnostic_lib_message,
expected_push_diagnostic_lib_message,
expected_workspace_pull_diagnostics_lib_message,
// TODO bug: the pushed diagnostics are not being sent to the client when they open the corresponding buffer.
// expected_push_diagnostic_lib_message,
];
assert_eq!(
all_diagnostics.len(),
2,
"Expected pull and push diagnostics, but got: {all_diagnostics:?}"
1,
"Expected pull diagnostics, but got: {all_diagnostics:?}"
);
for diagnostic in all_diagnostics {
assert!(
@@ -3379,9 +3372,8 @@ async fn test_lsp_pull_diagnostics(
"Another workspace diagnostics pull should happen after the diagnostics refresh server request"
);
{
assert_eq!(
diagnostics_pulls_result_ids.lock().await.len(),
diagnostic_pulls_result_ids,
assert!(
diagnostics_pulls_result_ids.lock().await.len() == diagnostic_pulls_result_ids,
"Pulls should not happen hence no extra ids should appear"
);
assert!(
@@ -3399,7 +3391,7 @@ async fn test_lsp_pull_diagnostics(
expected_pull_diagnostic_lib_message,
expected_push_diagnostic_lib_message,
];
assert_eq!(all_diagnostics.len(), 2);
assert_eq!(all_diagnostics.len(), 1);
for diagnostic in &all_diagnostics {
assert!(
expected_messages.contains(&diagnostic.diagnostic.message.as_str()),
@@ -3518,6 +3510,7 @@ async fn test_git_blame_is_forwarded(cx_a: &mut TestAppContext, cx_b: &mut TestA
.into_iter()
.map(|(sha, message)| (sha.parse().unwrap(), message.into()))
.collect(),
remote_url: Some("git@github.com:zed-industries/zed.git".to_string()),
};
client_a.fs().set_blame_for_repo(
Path::new(path!("/my-repo/.git")),
@@ -3602,6 +3595,10 @@ async fn test_git_blame_is_forwarded(cx_a: &mut TestAppContext, cx_b: &mut TestA
for (idx, (buffer, entry)) in entries.iter().flatten().enumerate() {
let details = blame.details_for_entry(*buffer, entry).unwrap();
assert_eq!(details.message, format!("message for idx-{}", idx));
assert_eq!(
details.permalink.unwrap().to_string(),
format!("https://github.com/zed-industries/zed/commit/{}", entry.sha)
);
}
});
});

View File

@@ -541,7 +541,7 @@ impl Item for ChannelView {
})
}
fn as_searchable(&self, _: &Entity<Self>, _: &App) -> Option<Box<dyn SearchableItemHandle>> {
fn as_searchable(&self, _: &Entity<Self>) -> Option<Box<dyn SearchableItemHandle>> {
Some(Box::new(self.editor.clone()))
}

View File

@@ -37,7 +37,7 @@ use ui::{
};
use util::{ResultExt, TryFutureExt, maybe};
use workspace::{
CopyRoomId, Deafen, LeaveCall, Mute, OpenChannelNotes, ScreenShare, ShareProject, Workspace,
Deafen, LeaveCall, Mute, OpenChannelNotes, ScreenShare, ShareProject, Workspace,
dock::{DockPosition, Panel, PanelEvent},
notifications::{DetachAndPromptErr, NotifyResultExt},
};
@@ -128,32 +128,6 @@ pub fn init(cx: &mut App) {
workspace.register_action(|_, _: &LeaveCall, window, cx| {
CollabPanel::leave_call(window, cx);
});
workspace.register_action(|workspace, _: &CopyRoomId, window, cx| {
use workspace::notifications::{NotificationId, NotifyTaskExt as _};
struct RoomIdCopiedToast;
if let Some(room) = ActiveCall::global(cx).read(cx).room() {
let romo_id_fut = room.read(cx).room_id();
cx.spawn(async move |workspace, cx| {
let room_id = romo_id_fut.await.context("Failed to get livekit room")?;
workspace.update(cx, |workspace, cx| {
cx.write_to_clipboard(ClipboardItem::new_string(room_id));
workspace.show_toast(
workspace::Toast::new(
NotificationId::unique::<RoomIdCopiedToast>(),
"Room ID copied to clipboard",
)
.autohide(),
cx,
);
})
})
.detach_and_notify_err(window, cx);
} else {
workspace.show_error(&"There's no active call; join one first.", cx);
}
});
workspace.register_action(|workspace, _: &ShareProject, window, cx| {
let project = workspace.project().clone();
println!("{project:?}");

View File

@@ -8,9 +8,6 @@ license = "GPL-3.0-or-later"
[lints]
workspace = true
[features]
test-support = ["db/test-support"]
[lib]
path = "src/command_palette.rs"
doctest = false

Some files were not shown because too many files have changed in this diff.