Compare commits

..

166 Commits

Author SHA1 Message Date
Cole Miller
ee53b805f0 fix 2025-12-10 18:00:48 -05:00
Xipeng Jin
9e628505f3 git: Add tree view support to Git Panel (#44089)
Closes #35803

This PR adds tree view support to the git panel UI as an additional
setting and moves git entry checkboxes to the right. Tree view only
supports sorting by path, since sorting by status can become noisy due
to having to duplicate directories that contain entries with different
statuses.

### Tree vs Flat View
<img width="358" height="250" alt="image"
src="https://github.com/user-attachments/assets/c6b95d57-12fc-4c5e-8537-ee129963e50c"
/>
<img width="362" height="152" alt="image"
src="https://github.com/user-attachments/assets/0a69e00f-3878-4807-ae45-65e2d54174fc"
/>


#### Architecture changes

Before this PR, `GitPanel::entries` represented both all entries and all
visible entries, because the two sets were equal. This equality no
longer holds for tree view, because entries can be collapsed. To fix
this, a `TreeState` field holding logical indices was added and is used
to filter out non-visible entries. A benefit of this field is that it
could be reused in the future to implement searching in the GitPanel.

Another significant change in this PR is the new HashMap field
`entries_by_indices` on `GitPanel`. We added it because `entry_by_path`
used binary search, which becomes overly complicated to implement for a
tree view. The performance of this function matters because it's a hot
code path, so a linear search wasn't ideal either. Using a hash map
improves the lookup from O(log n) to O(1), where n is the number of
entries.
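
To make the lookup shape concrete, here is a minimal sketch of the idea
under assumed types and field names (`entries_by_indices` is the real
field name; everything else here is illustrative, not the actual
`GitPanel` code):

```rust
use std::collections::HashMap;
use std::path::PathBuf;

// Hypothetical, simplified stand-ins for the real GitPanel types.
struct GitStatusEntry {
    repo_path: PathBuf,
}

struct GitPanelSketch {
    /// Every status entry, in display order.
    entries: Vec<GitStatusEntry>,
    /// Logical indices of the entries that are currently visible
    /// (children of collapsed directories are filtered out).
    visible_entries: Vec<usize>,
    /// O(1) path -> entry index lookup, replacing the old binary search.
    entries_by_path: HashMap<PathBuf, usize>,
}

impl GitPanelSketch {
    fn entry_by_path(&self, path: &PathBuf) -> Option<&GitStatusEntry> {
        let ix = *self.entries_by_path.get(path)?;
        self.entries.get(ix)
    }
}
```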

#### Follow-ups
In the future, we could use `ui::ListItem` to render entries in the tree
view to improve UI consistency.
 
Release Notes:

- Added tree view for the Git panel. Users can switch between Flat and
Tree view in the Git panel.

---------

Co-authored-by: Anthony Eid <anthony@zed.dev>
Co-authored-by: Remco Smits <djsmits12@gmail.com>
2025-12-10 15:11:36 -05:00
KyleBarton
3a84ec38ac Introduce MVP Dev Containers support (#44442)
Partially addresses #11473 

MVP of dev containers with the following capabilities:

- If in a project with `.devcontainer/devcontainer.json`, a pop-up
notification will ask if you want to open the project in a dev
container. This can be dismissed:
<img width="1478" height="1191" alt="Screenshot 2025-12-08 at 3 15
23 PM"
src="https://github.com/user-attachments/assets/ec2e20d6-28ec-4495-8f23-4c1d48a9ce78"
/>
- Similarly, if a `devcontainer.json` file is in the project, you can
open a dev container (or go to the `devcontainer.json` file for further
editing) via the `open remote` modal:


https://github.com/user-attachments/assets/61f2fdaa-2808-4efc-994c-7b444a92c0b1

*Limitations*

This is a first release, and comes with some limitations:
- Zed extensions are not managed in `devcontainer.json` yet. They will
need to be installed either on the host or in the container. The host
and the container sync their extensions, so there is currently no
concept of what is installed in the container vs. what is installed on
the host: they come from the same list of manifests.
- This implementation uses the [devcontainer
CLI](https://github.com/devcontainers/cli) for its control plane. Hence,
it does not yet support the `forwardPorts` directive. A single port can
be opened with `appPort`. See reference in docs
[here](https://github.com/devcontainers/cli/tree/main/example-usage#how-the-tool-examples-work)
- Editing `devcontainer.json` does not automatically cause the dev
container to be rebuilt, so if you add features, change images, etc.,
you will need to `docker kill` the existing dev container before
proceeding.
- Currently takes a hard dependency on `docker` being available in the
user's `PATH`.


Release Notes:

- Added the ability to open a project in a Dev Container, provided a
`.devcontainer/devcontainer.json` is present

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Danilo Leal <67129314+danilo-leal@users.noreply.github.com>
2025-12-10 12:10:43 -08:00
Danilo Leal
a61bf33fb0 Fix label copy for file history menu items (#44569)
Buttons and menu items should preferably always start with an infinitive
verb that describes what will happen when you trigger them. Instead of
just "File History", we should say "_View_ File History".

Release Notes:

- N/A
2025-12-10 18:00:11 +00:00
John Tur
d83201256d Use shell to launch MCP and ACP servers (#42382)
`npx`, and any `npm install`-ed programs, exist as batch
scripts/PowerShell scripts on the PATH. We have to use a shell to launch
these programs.
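
As a rough illustration of the approach (not the actual Zed spawning
code), going through `cmd.exe` on Windows lets the shell resolve the
`.cmd`/`.ps1` shims that `npm install` puts on `PATH`:

```rust
use std::io;
use std::process::{Child, Command};

// Hypothetical helper: launch a possibly npm-installed program such as `npx`.
fn spawn_server(program: &str, args: &[&str]) -> io::Result<Child> {
    if cfg!(windows) {
        // On Windows, `npx` and friends are batch/PowerShell shims, so they
        // must go through a shell rather than being executed directly.
        Command::new("cmd").arg("/C").arg(program).args(args).spawn()
    } else {
        Command::new(program).args(args).spawn()
    }
}
```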

Fixes https://github.com/zed-industries/zed/issues/41435
Closes https://github.com/zed-industries/zed/pull/42651


Release Notes:

- windows: Custom MCP and ACP servers installed through `npm` now launch
correctly.

---------

Co-authored-by: Lukas Wirth <me@lukaswirth.dev>
2025-12-10 12:08:37 -05:00
Ben Kunkle
8ee85eab3c vim: Remove ctrl-6 keybinding alias for pane::AlternateFile (#44560)
Closes #ISSUE

It seems that `ctrl-6` is used exclusively as an alias, as can be seen
in the [linked section of the vim
docs](https://vimhelp.org/editing.txt.html#CTRL-%5E) from the initial PR
that added it. This, however, conflicts with the `ctrl-{n}` bindings for
`pane::ActivateItem` on macOS, leading to confusing file selection when
`ctrl-6` is pressed.

Release Notes:

- vim(BREAKING): Removed a keybinding conflict between the default macOS
bindings for `pane::ActivateItem` and the `ctrl-6` alias
for `pane::AlternateFile`, which is primarily bound to `ctrl-^`. `ctrl-6`
is no longer treated as an alias for `ctrl-^` in vim mode. If you'd like
to restore `ctrl-6` as a binding for `pane::AlternateFile`, paste the
following into your `keymap.json` file:
```
  {
    "context": "VimControl && !menu",
    "bindings": {
      "ctrl-6": "pane::AlternateFile"
    }
  }
```
2025-12-10 16:55:50 +00:00
Ben Brandt
5b309ef986 acp: Better telemetry IDs for ACP agents (#44544)
We were defining these in multiple places and also weren't leveraging
the ids the agents were already providing.

This should make sure we use them consistently and avoid issues in the
future.

Release Notes:

- N/A
2025-12-10 16:48:08 +00:00
Mayank Verma
326ebb5230 git: Fix failing commits when hook command is not available (#43993) 2025-12-10 16:34:49 +00:00
Bennet Bo Fenner
f5babf96e1 agent_ui: Fix project path not found error when pasting code from other project (#44555)
The problem with inserting absolute paths is that the agent will try to
read them; however, we don't allow the agent to read files outside the
current project. For now, we only insert the crease if the pasted code
comes from the same project.

Release Notes:

- Fixed an issue where pasting code into the agent panel from another
window would show an error
2025-12-10 16:30:10 +00:00
Joseph T. Lyons
f48aa252f8 Bump Zed to v0.218 (#44551)
Release Notes:

- N/A
2025-12-10 15:28:39 +00:00
Finn Evers
4106c8a188 Disable OmniSharp by default for C# files (#44427)
In preparation for https://github.com/zed-extensions/csharp/pull/11. Do
not merge before that PR is published.

Release Notes:

- Added support for Roslyn in C# files. Roslyn will now be the default
language server for C#
2025-12-10 10:12:41 -05:00
Agus Zubiaga
21f7e6a9e6 commit view: Fix layout shift while loading commit (#44548)
Fixes a few cases where the commit view's layout would shift as the diff
loaded. This was caused by:
- Adding the commit message buffer after all the diff files
- Using the gutter dimensions from the last frame for the avatar spacing

Release Notes:

- commit view: Fix layout shift while loading commit

---------

Co-authored-by: MrSubidubi <dev@bahn.sh>
2025-12-10 15:01:49 +00:00
Finn Evers
dd431631b4 editor: Ensure completion menu scrollbar does not become stale (#44536)
Only by reusing the previous scroll handle can we ensure that the
scrollbar both remains usable and does not flicker. Previously, the
scrollbar would hold a reference to an outdated handle.

I tried invalidating the handle the scrollbar uses, but that leads to
flickering, which is worse. Hence, let's just reuse the scroll handle
here.

Release Notes:

- Fixed an issue where the scrollbar would become stale in the code
completions menu after the items were updated.
2025-12-10 15:28:19 +01:00
Lukas Wirth
511e51c80e text: Replace some more release panics with graceful fallbacks (#44542)
Fixes ZED-3P7

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-12-10 13:01:31 +00:00
Agus Zubiaga
0a816cbc87 edit prediction: Exclude whole-module definitions from context (#44414)
For qualified identifiers we end up requesting both the definition of
the module and the item within it, but we only want the latter. At the
moment, we can't skip the request altogether, because we can't tell them
apart from the highlights query. However, we can tell from the target
range length, because it should be small for individual definitions as
it only covers their name, not the whole body.
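
A minimal sketch of that heuristic, with an assumed threshold and
byte-offset ranges (the real types and cutoff differ):

```rust
use std::ops::Range;

// Assumed cutoff: an identifier-sized target. Whole-module definitions cover
// the module body and are far longer than this.
const MAX_IDENTIFIER_TARGET_LEN: usize = 256;

fn keep_definition_request(target_range: &Range<usize>) -> bool {
    target_range.len() <= MAX_IDENTIFIER_TARGET_LEN
}
```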

Release Notes:

- N/A
2025-12-10 09:48:10 -03:00
Lukas Wirth
b1333b53ad editor: Improve performance of create_highlight_endpoints (#44521)
We reallocate quite a bit in this code path even though we don't need
to: we already roughly know how many elements we are working with, so we
can reduce the required allocations to some degree. This also reduces
the number of anchor comparisons required.
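
A minimal sketch of the pattern (not the actual editor code): reserving
the roughly known capacity up front avoids repeated reallocation as the
endpoint list grows:

```rust
// Each highlight range contributes a start and an end endpoint, so the final
// length is known before the loop runs.
fn collect_highlight_endpoints(ranges: &[(usize, usize)]) -> Vec<usize> {
    let mut endpoints = Vec::with_capacity(ranges.len() * 2);
    for &(start, end) in ranges {
        endpoints.push(start);
        endpoints.push(end);
    }
    endpoints
}
```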

Came up in profiling for
https://github.com/zed-industries/zed/issues/44503

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-12-10 10:35:29 +00:00
Lukas Wirth
30597a0cba project_panel: Fix create entry with trailing dot duplicating on windows (#44524)
Release Notes:

- Fixed an issue where creating a file through the project panel with a
trailing dot in its name would duplicate the entries with and without
the dot

Co-authored-by: Smit Barmase <smit@zed.dev>
2025-12-10 10:33:49 +00:00
Richard Feldman
a8e2dc2f25 Use agent name from extension (#44496)
Previously this rendered `mistral-vibe` and not `Mistral Vibe`:

<img width="242" height="199" alt="Screenshot 2025-12-09 at 2 52 48 PM"
src="https://github.com/user-attachments/assets/f85cbf20-91d1-4c05-8b3a-fa5b544acb1c"
/>

Release Notes:

- Render agent display names from extension in menu
2025-12-10 10:19:00 +00:00
Mikayla Maki
fd2094fa19 Add inline prompt rating (#44230)
TODO:

- [x] Add inline prompt rating buttons
- [ ] Hook this into our other systems

Release Notes:

- N/A
2025-12-10 07:46:04 +00:00
Conrad Irwin
22f1655f8f Add history to the command palette (#44517)
Co-Authored-By: Claude <ai+claude@zed.dev>

Closes #ISSUE

Release Notes:

- Added history to the command palette (`up` will now show recently
executed commands). This is particularly helpful in vim mode when you
may mistype a complicated command and want to re-run a slightly
different version thereof.

---------

Co-authored-by: Claude <ai+claude@zed.dev>
2025-12-10 07:07:48 +00:00
Mayank Verma
7cbe25fda5 vim: Fix editor paste not using clipboard in visual mode (#44347)
Closes #44178

Release Notes:

- Fixed editor paste not using clipboard when in Vim visual mode
2025-12-09 21:35:28 -07:00
Mayank Verma
728f09f3f4 vim: Fix buffer navigation with non-Editor items (#44350)
Closes #44348

Release Notes:

- Fixed buffer navigation in Vim mode with non-Editor items
2025-12-09 21:34:24 -07:00
Julia Ryan
4353b8ecd5 Fix --user-data-dir (#44235)
Closes #40067

Release Notes:

- The `--user-data-dir` flag now works on Windows and Linux, as well as
macOS if you pass `--foreground`.

---------

Co-authored-by: Lukas Wirth <me@lukaswirth.dev>
2025-12-10 00:42:19 +00:00
David Kleingeld
736a712387 Handle response error for ashpd fixing login edgecases (#44502)
Release Notes:

- Fixed login fallbacks on Linux

Co-authored-by: Julia Ryan <juliaryan3.14@gmail.com>
2025-12-09 23:30:36 +00:00
Piotr Osiewicz
3180f44477 lsp: Do not drop lsp buffer handle from editor when a language change leads to buffer having a legit language (#44469)
Fixes a bug that led to us unnecessarily restarting a language server
when we were looking at a single file of a given language.

Release Notes:

- Fixed a bug that led to Zed sometimes starting an excessive number of
language servers
2025-12-09 21:37:39 +01:00
Peter König
5dd8561b06 Fix DeepSeek Reasoner tool-call handling and add reasoning_content support (#44301)
Closes #43887

### Problem
DeepSeek's reasoning mode API requires `reasoning_content` to be
included in assistant messages that precede tool calls. Without it, the
API returns a 400 error:

```
Missing `reasoning_content` field in the assistant message at message index 2
```
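
For illustration, a minimal sketch of the message shape the API expects,
using simplified stand-in types rather than the actual `crates/deepseek`
definitions:

```rust
use serde::Serialize;

// Hypothetical, simplified request message; the real RequestMessage enum in
// crates/deepseek/src/deepseek.rs has more roles and fields.
#[derive(Serialize)]
#[serde(tag = "role", rename_all = "lowercase")]
enum RequestMessage {
    Assistant {
        content: Option<String>,
        /// Reasoning emitted before a tool call; it must be echoed back on
        /// the next request, otherwise the API rejects it with a 400.
        #[serde(skip_serializing_if = "Option::is_none")]
        reasoning_content: Option<String>,
    },
}
```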

### Changes
- Add `reasoning_content` field to `RequestMessage::Assistant` in
`crates/deepseek/src/deepseek.rs`
- Accumulate thinking content from `MessageContent::Thinking` and attach
it to the next assistant/tool-call message
- Wire reasoning content through the language model provider in
`crates/language_models/src/provider/deepseek.rs`

### Testing
- Verified with DeepSeek Reasoner model using tool calls
- Confirmed reasoning content is properly included in API requests

Release Notes:

- Fixed tool-call errors when using DeepSeek's reasoning mode.

---------

Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
2025-12-09 20:54:16 +01:00
Bennet Bo Fenner
bfab0b71e0 agent_ui: Fix panic in message editor (#44493)
Release Notes:

- N/A
2025-12-09 18:55:29 +00:00
Julia Ryan
04d920016f Remove reqwest dependency from gpui (#44424)
This was pulling in tokio which is pretty unfortunate. The solution is
to do the `reqwest::Form` to `http::Reqwest` conversion in the
reliability crate instead of our http client wrapper.

Release Notes:

- N/A
2025-12-09 09:29:40 -08:00
David Kleingeld
20fa9983ad Revert "gpui: Update link to Ownership and data flow section" (#44492)
While this fixes the link in the README, it breaks the one in the docs,
which is the more important one (we should probably just duplicate the
README and not include it in gpui.rs, but that is annoying).
2025-12-09 17:22:16 +00:00
Gaauwe Rombouts
dd57d97bb6 Revert "Improve TS/TSX/JS syntax highlighting for parameters, types, and punctuation" (#44490)
Reverts zed-industries/zed#43437

Internally we noticed some regressions related to the removed query for
PascalCase identifiers. Reverting now to prevent this from going to
preview; we still plan to land this with the necessary fixes later.
2025-12-09 17:50:23 +01:00
Pablo Aguiar
d5a437d22f editor: Add rotation commands for selections and lines (#41236)
Introduces RotateSelectionsForward and RotateSelectionsBackward actions
that rotate content in a circular fashion across multiple cursors.

Behavior based on context:
- With selections: rotates the selected text at each cursor position
(e.g., x=1, y=2, z=3 becomes x=3, y=1, z=2)
- With just cursors: rotates entire lines at cursor positions (e.g.,
three lines cycle to line3, line1, line2)

Selections are preserved after rotation, allowing repeated cycling.
Useful for quickly rearranging values, lines, or arguments.
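
As a standalone illustration of the rotation semantics described above
(not the editor's implementation), rotating the selected texts forward
moves each value one cursor to the right, wrapping around:

```rust
// Rotating ["1", "2", "3"] forward yields ["3", "1", "2"], matching the
// x=1, y=2, z=3 -> x=3, y=1, z=2 example.
fn rotate_forward(selection_texts: &mut [String]) {
    if selection_texts.len() > 1 {
        selection_texts.rotate_right(1);
    }
}

fn rotate_backward(selection_texts: &mut [String]) {
    if selection_texts.len() > 1 {
        selection_texts.rotate_left(1);
    }
}
```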

For more examples and use cases, please refer to #5315.

I'm eager to read your thoughts and make any adjustments or improvements
to any aspect of this change.

Closes #5315

Release Notes:

- Added `RotateSelectionsForward` and `RotateSelectionsBackward` actions
that rotate content in a circular fashion across multiple cursors
2025-12-09 11:15:14 -05:00
Nia
a524071dd9 gpui: Try to notify when GPU init fails (#44487)
Hopefully addresses #43575. cc @cole-miller 

Release Notes:

- GPU initialization errors are more reliably reported

---------

Co-authored-by: Cole Miller <cole@zed.dev>
2025-12-09 17:00:13 +01:00
Piotr Osiewicz
1471105643 edit_prediction: Remove duplicate definition of interpolate_edits (#44485)
Release Notes:

- N/A
2025-12-09 15:13:52 +00:00
Aaron Feickert
f05ee8a24d Fix menu capitalization (#44450)
This PR fixes the capitalization of two menu items for consistency with
elsewhere in the application.

Release Notes:

- N/A
2025-12-09 10:55:01 -03:00
Xiaobo Liu
4d0cada8f4 git_ui: Hide breakpoints in commit views (#44484)
Release Notes:

- Improved commit view to not show breakpoints on hover

Signed-off-by: Xiaobo Liu <cppcoffee@gmail.com>
2025-12-09 13:47:45 +00:00
Mustaque Ahmed
abf90cc274 language: Add auto-surround for Plain Text, JSON, and JSONC (#42631)
**Summary**
When users selected text and pressed opening brackets (`(`, `[`, `{`),
the text was deleted instead of being wrapped.

- Added bracket pairs: `()`, `[]`, `{}`, `""`, `''` with `surround =
true`
- Added `surround = true` to existing bracket pairs
- Added `()` bracket pair

**Production Build Fix** (`crates/languages/src/lib.rs`)
- Fixed bug where `brackets` config was stripped in non-`load-grammars`
builds
- Preserved `brackets: config.brackets` in production mode

Closes #41186

**Screen recording**

https://github.com/user-attachments/assets/22067fe7-d5c4-4a72-a93d-8dbaae640168

Release Notes:

- N/A *or* Added/Fixed/Improved ...

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-12-09 18:27:23 +05:30
Lukas Wirth
b79d92d1c6 language_extension: Handle prefixed WASI windows paths in extension spawning (#44477)
Closes https://github.com/zed-industries/zed/issues/12013

Release Notes:

- Fixed some wasm language extensions failing to spawn on windows
2025-12-09 13:22:57 +01:00
Finn Evers
660234fed2 docs: Improve documentation for updating an extension (#44475)
Release Notes:

- N/A
2025-12-09 11:49:33 +00:00
Lena
2b02b60317 Fix a search filter in top-ranking issues script (#44468)
Release Notes:

- N/A
2025-12-09 09:44:30 +00:00
Lena
9d49c1ffda Switch from labels to types in Top-Ranking issues (#44383)
Since we've run a script to replace labels with types on the open issues
(e.g. label 'bug' → type 'Bug'), and those labels are deprecated, the
script is updated to deal with issue types only.

Other changes:
- only get top-100 search results for each section since we only look at
top-50 anyway: this way we don't need to deal with rate limiting, and
the entire script runs way faster when it doesn't need to fetch 1000+
bugs
- subtract the "-1" reactions from the "+1" reactions on a given issue
to give a slightly more accurate picture in the overall ranking (this
can further be improved by adding the distinct heart reactions but we'll
leave that for another day)
- only output the issues with a score > 0
- use Typer's built-in error handling for a missing argument
- since we're only dealing with types and not labels now, remove the
handling of potentially duplicate issues in the search results per
section
- make `Tracking` its own section since this issue type exists now
- remove the `unlabeled` section with issues of no type since all the
open issues have a type now and we intend to keep it that way for the
sake of these and other stats (and also because GitHub's REST API has
caught up with types but not with `no:type`)
- replace pygithub and custom classes with requests directly to the
GitHub API and plain data structures for a lighter footprint
- spell out the date of the update in the resulting text to avoid the
ambiguity (10/6 → October 06).

The way the script is invoked has not been changed.

Example run:

```
*Updated on December 08, 2025 06:57 AM (EST)*

## Features

1. https://github.com/zed-industries/zed/issues/11473 (679 👍)
2. https://github.com/zed-industries/zed/issues/4642 (674 👍)
3. https://github.com/zed-industries/zed/issues/10910 (638 👍)
4. https://github.com/zed-industries/zed/issues/8279 (592 👍)
5. https://github.com/zed-industries/zed/issues/5242 (581 👍)
6. https://github.com/zed-industries/zed/issues/4355 (552 👍)
7. https://github.com/zed-industries/zed/issues/15968 (453 👍)
8. https://github.com/zed-industries/zed/issues/4930 (357 👍)
9. https://github.com/zed-industries/zed/issues/5066 (345 👍)
10. https://github.com/zed-industries/zed/issues/5120 (312 👍)
11. https://github.com/zed-industries/zed/issues/7450 (310 👍)
12. https://github.com/zed-industries/zed/issues/14801 (291 👍)
13. https://github.com/zed-industries/zed/issues/10696 (276 👍)
14. https://github.com/zed-industries/zed/issues/16965 (258 👍)
15. https://github.com/zed-industries/zed/issues/4688 (231 👍)
16. https://github.com/zed-industries/zed/issues/4943 (228 👍)
17. https://github.com/zed-industries/zed/issues/9459 (223 👍)
18. https://github.com/zed-industries/zed/issues/21538 (223 👍)
19. https://github.com/zed-industries/zed/issues/11889 (194 👍)
20. https://github.com/zed-industries/zed/issues/9721 (180 👍)
21. https://github.com/zed-industries/zed/issues/5039 (172 👍)
22. https://github.com/zed-industries/zed/issues/9662 (162 👍)
23. https://github.com/zed-industries/zed/issues/4888 (160 👍)
24. https://github.com/zed-industries/zed/issues/26823 (158 👍)
25. https://github.com/zed-industries/zed/issues/21208 (151 👍)
26. https://github.com/zed-industries/zed/issues/4991 (149 👍)
27. https://github.com/zed-industries/zed/issues/6722 (144 👍)
28. https://github.com/zed-industries/zed/issues/18490 (139 👍)
29. https://github.com/zed-industries/zed/issues/10647 (138 👍)
30. https://github.com/zed-industries/zed/issues/35803 (121 👍)
31. https://github.com/zed-industries/zed/issues/4808 (118 👍)
32. https://github.com/zed-industries/zed/issues/12406 (118 👍)
33. https://github.com/zed-industries/zed/issues/37074 (118 👍)
34. https://github.com/zed-industries/zed/issues/7121 (117 👍)
35. https://github.com/zed-industries/zed/issues/15098 (112 👍)
36. https://github.com/zed-industries/zed/issues/4867 (111 👍)
37. https://github.com/zed-industries/zed/issues/4751 (108 👍)
38. https://github.com/zed-industries/zed/issues/14473 (98 👍)
39. https://github.com/zed-industries/zed/issues/6754 (97 👍)
40. https://github.com/zed-industries/zed/issues/11138 (97 👍)
41. https://github.com/zed-industries/zed/issues/17455 (90 👍)
42. https://github.com/zed-industries/zed/issues/9922 (89 👍)
43. https://github.com/zed-industries/zed/issues/4504 (87 👍)
44. https://github.com/zed-industries/zed/issues/17353 (85 👍)
45. https://github.com/zed-industries/zed/issues/4663 (82 👍)
46. https://github.com/zed-industries/zed/issues/12039 (79 👍)
47. https://github.com/zed-industries/zed/issues/11107 (75 👍)
48. https://github.com/zed-industries/zed/issues/11565 (73 👍)
49. https://github.com/zed-industries/zed/issues/22373 (72 👍)
50. https://github.com/zed-industries/zed/issues/11023 (71 👍)

## Bugs

1. https://github.com/zed-industries/zed/issues/7992 (457 👍)
2. https://github.com/zed-industries/zed/issues/12589 (113 👍)
3. https://github.com/zed-industries/zed/issues/12176 (105 👍)
4. https://github.com/zed-industries/zed/issues/14053 (96 👍)
5. https://github.com/zed-industries/zed/issues/18698 (90 👍)
6. https://github.com/zed-industries/zed/issues/8043 (73 👍)
7. https://github.com/zed-industries/zed/issues/7465 (65 👍)
8. https://github.com/zed-industries/zed/issues/9403 (56 👍)
9. https://github.com/zed-industries/zed/issues/9789 (55 👍)
10. https://github.com/zed-industries/zed/issues/30313 (52 👍)
11. https://github.com/zed-industries/zed/issues/13564 (47 👍)
12. https://github.com/zed-industries/zed/issues/18673 (47 👍)
13. https://github.com/zed-industries/zed/issues/43025 (44 👍)
14. https://github.com/zed-industries/zed/issues/15166 (43 👍)
15. https://github.com/zed-industries/zed/issues/14074 (41 👍)
16. https://github.com/zed-industries/zed/issues/38109 (39 👍)
17. https://github.com/zed-industries/zed/issues/21076 (38 👍)
18. https://github.com/zed-industries/zed/issues/32792 (38 👍)
19. https://github.com/zed-industries/zed/issues/26875 (36 👍)
20. https://github.com/zed-industries/zed/issues/21146 (35 👍)
21. https://github.com/zed-industries/zed/issues/39163 (35 👍)
22. https://github.com/zed-industries/zed/issues/13838 (32 👍)
23. https://github.com/zed-industries/zed/issues/16727 (32 👍)
24. https://github.com/zed-industries/zed/issues/9057 (31 👍)
25. https://github.com/zed-industries/zed/issues/38151 (31 👍)
26. https://github.com/zed-industries/zed/issues/38750 (30 👍)
27. https://github.com/zed-industries/zed/issues/8352 (29 👍)
28. https://github.com/zed-industries/zed/issues/11744 (29 👍)
29. https://github.com/zed-industries/zed/issues/20559 (29 👍)
30. https://github.com/zed-industries/zed/issues/23640 (29 👍)
31. https://github.com/zed-industries/zed/issues/11104 (27 👍)
32. https://github.com/zed-industries/zed/issues/13461 (27 👍)
33. https://github.com/zed-industries/zed/issues/13286 (25 👍)
34. https://github.com/zed-industries/zed/issues/29962 (25 👍)
35. https://github.com/zed-industries/zed/issues/14833 (23 👍)
36. https://github.com/zed-industries/zed/issues/15409 (23 👍)
37. https://github.com/zed-industries/zed/issues/11127 (22 👍)
38. https://github.com/zed-industries/zed/issues/12835 (22 👍)
39. https://github.com/zed-industries/zed/issues/31351 (22 👍)
40. https://github.com/zed-industries/zed/issues/33942 (22 👍)
41. https://github.com/zed-industries/zed/issues/7086 (21 👍)
42. https://github.com/zed-industries/zed/issues/13176 (20 👍)
43. https://github.com/zed-industries/zed/issues/14222 (20 👍)
44. https://github.com/zed-industries/zed/issues/29757 (20 👍)
45. https://github.com/zed-industries/zed/issues/35122 (20 👍)
46. https://github.com/zed-industries/zed/issues/29807 (19 👍)
47. https://github.com/zed-industries/zed/issues/4701 (18 👍)
48. https://github.com/zed-industries/zed/issues/35770 (18 👍)
49. https://github.com/zed-industries/zed/issues/37734 (18 👍)
50. https://github.com/zed-industries/zed/issues/4434 (17 👍)

## Tracking issues

1. https://github.com/zed-industries/zed/issues/7808 (298 👍)
2. https://github.com/zed-industries/zed/issues/24878 (101 👍)
3. https://github.com/zed-industries/zed/issues/7371 (60 👍)
4. https://github.com/zed-industries/zed/issues/26916 (51 👍)
5. https://github.com/zed-industries/zed/issues/31102 (41 👍)
6. https://github.com/zed-industries/zed/issues/25469 (30 👍)
7. https://github.com/zed-industries/zed/issues/10906 (18 👍)
8. https://github.com/zed-industries/zed/issues/9778 (11 👍)
9. https://github.com/zed-industries/zed/issues/23930 (10 👍)
10. https://github.com/zed-industries/zed/issues/23914 (8 👍)
11. https://github.com/zed-industries/zed/issues/18078 (7 👍)
12. https://github.com/zed-industries/zed/issues/25560 (6 👍)

## Crashes

1. https://github.com/zed-industries/zed/issues/13190 (33 👍)
2. https://github.com/zed-industries/zed/issues/32318 (15 👍)
3. https://github.com/zed-industries/zed/issues/39097 (14 👍)
4. https://github.com/zed-industries/zed/issues/31149 (11 👍)
5. https://github.com/zed-industries/zed/issues/36139 (10 👍)
6. https://github.com/zed-industries/zed/issues/39890 (10 👍)
7. https://github.com/zed-industries/zed/issues/16120 (9 👍)
8. https://github.com/zed-industries/zed/issues/20970 (5 👍)
9. https://github.com/zed-industries/zed/issues/28385 (5 👍)
10. https://github.com/zed-industries/zed/issues/27270 (4 👍)
11. https://github.com/zed-industries/zed/issues/30466 (4 👍)
12. https://github.com/zed-industries/zed/issues/37593 (4 👍)
13. https://github.com/zed-industries/zed/issues/27751 (3 👍)
14. https://github.com/zed-industries/zed/issues/29467 (3 👍)
15. https://github.com/zed-industries/zed/issues/39806 (3 👍)
16. https://github.com/zed-industries/zed/issues/40998 (3 👍)
17. https://github.com/zed-industries/zed/issues/10992 (2 👍)
18. https://github.com/zed-industries/zed/issues/31461 (2 👍)
19. https://github.com/zed-industries/zed/issues/37291 (2 👍)
20. https://github.com/zed-industries/zed/issues/38275 (2 👍)
21. https://github.com/zed-industries/zed/issues/43547 (2 👍)
22. https://github.com/zed-industries/zed/issues/20014 (1 👍)
23. https://github.com/zed-industries/zed/issues/30993 (1 👍)
24. https://github.com/zed-industries/zed/issues/31498 (1 👍)
25. https://github.com/zed-industries/zed/issues/31829 (1 👍)
26. https://github.com/zed-industries/zed/issues/32280 (1 👍)
27. https://github.com/zed-industries/zed/issues/36036 (1 👍)
28. https://github.com/zed-industries/zed/issues/37918 (1 👍)
29. https://github.com/zed-industries/zed/issues/39269 (1 👍)
30. https://github.com/zed-industries/zed/issues/42825 (1 👍)
31. https://github.com/zed-industries/zed/issues/43522 (1 👍)
32. https://github.com/zed-industries/zed/issues/43774 (1 👍)

## Windows

1. https://github.com/zed-industries/zed/issues/12288 (36 👍)
2. https://github.com/zed-industries/zed/issues/20559 (29 👍)
3. https://github.com/zed-industries/zed/issues/12013 (15 👍)
4. https://github.com/zed-industries/zed/issues/38682 (8 👍)
5. https://github.com/zed-industries/zed/issues/36241 (7 👍)
6. https://github.com/zed-industries/zed/issues/28497 (3 👍)
7. https://github.com/zed-industries/zed/issues/33748 (3 👍)
8. https://github.com/zed-industries/zed/issues/38348 (3 👍)
9. https://github.com/zed-industries/zed/issues/41649 (3 👍)
10. https://github.com/zed-industries/zed/issues/41734 (3 👍)
11. https://github.com/zed-industries/zed/issues/42873 (3 👍)
12. https://github.com/zed-industries/zed/issues/36318 (2 👍)
13. https://github.com/zed-industries/zed/issues/38886 (2 👍)
14. https://github.com/zed-industries/zed/issues/39038 (2 👍)
15. https://github.com/zed-industries/zed/issues/39056 (2 👍)
16. https://github.com/zed-industries/zed/issues/39189 (2 👍)
17. https://github.com/zed-industries/zed/issues/39473 (2 👍)
18. https://github.com/zed-industries/zed/issues/39764 (2 👍)
19. https://github.com/zed-industries/zed/issues/40430 (2 👍)
20. https://github.com/zed-industries/zed/issues/43051 (2 👍)
21. https://github.com/zed-industries/zed/issues/18765 (1 👍)
22. https://github.com/zed-industries/zed/issues/35174 (1 👍)
23. https://github.com/zed-industries/zed/issues/35958 (1 👍)
24. https://github.com/zed-industries/zed/issues/36193 (1 👍)
25. https://github.com/zed-industries/zed/issues/36849 (1 👍)
26. https://github.com/zed-industries/zed/issues/38760 (1 👍)
27. https://github.com/zed-industries/zed/issues/39346 (1 👍)
28. https://github.com/zed-industries/zed/issues/39435 (1 👍)
29. https://github.com/zed-industries/zed/issues/39453 (1 👍)
30. https://github.com/zed-industries/zed/issues/39927 (1 👍)
31. https://github.com/zed-industries/zed/issues/40209 (1 👍)
32. https://github.com/zed-industries/zed/issues/40277 (1 👍)
33. https://github.com/zed-industries/zed/issues/40370 (1 👍)
34. https://github.com/zed-industries/zed/issues/40392 (1 👍)
35. https://github.com/zed-industries/zed/issues/40475 (1 👍)
36. https://github.com/zed-industries/zed/issues/40585 (1 👍)
37. https://github.com/zed-industries/zed/issues/40647 (1 👍)
38. https://github.com/zed-industries/zed/issues/40954 (1 👍)
39. https://github.com/zed-industries/zed/issues/42050 (1 👍)
40. https://github.com/zed-industries/zed/issues/42366 (1 👍)
41. https://github.com/zed-industries/zed/issues/42731 (1 👍)
42. https://github.com/zed-industries/zed/issues/42861 (1 👍)
43. https://github.com/zed-industries/zed/issues/43522 (1 👍)

## Meta issues

1. https://github.com/zed-industries/zed/issues/24804 (10 👍)
2. https://github.com/zed-industries/zed/issues/36730 (3 👍)
```

Release Notes:

- N/A

---------

Co-authored-by: Joseph T. Lyons <JosephTLyons@gmail.com>
2025-12-09 10:24:06 +01:00
Lukas Wirth
6253b1d220 worktree: Print canonicalization error details (#44459)
cc https://github.com/zed-industries/zed/issues/24714

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-12-09 08:30:36 +00:00
Jason Lee
4e75f0f3ab gpui: Implement From<String> for ElementId (#44447)
Release Notes:

- N/A

## Before

```rs
div()
    .id(SharedString::from(format!("process-entry-{ix}-command")))
```

## After

```rs
div()
    .id(format!("process-entry-{ix}-command"))
```
2025-12-09 09:08:59 +01:00
Kirill Bulatov
0b4f72e549 Tidy up single-file worktrees' opening errors (#44455)
Part of https://github.com/zed-industries/zed/issues/44370

Also log when we fail to open the project item.

Release Notes:

- N/A
2025-12-09 07:50:10 +00:00
Mikayla Maki
dc5f54eaf9 Backout inline assistant changes (#44454)
Release Notes:

- N/A
2025-12-09 07:15:50 +00:00
Afief Abdurrahman
ba807a3c46 languages: Initialize Tailwind's options with includeLanguages (#43978)
Since [this
PR](https://github.com/tailwindlabs/tailwindcss-intellisense/pull/1014),
the `tailwindCSS.userLanguages` option has been deprecated, and it is
recommended to use `tailwindCSS.includeLanguages` instead. Using
`tailwindCSS.userLanguages` triggers the warning shown below in the
`tailwindcss-language-server` logs.

<img width="634" height="259" alt="tailwindcss-language-server (kron)
Server Logs v"
src="https://github.com/user-attachments/assets/763551ad-f41a-4756-9d7d-dfb7df45cc5c"
/>

Release Notes:

- Fixed a warning indicating the deprecation of
`tailwindCSS.userLanguages` by initializing the options with
`tailwindCSS.includeLanguages`.

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-12-09 11:08:29 +05:30
Max Brunsfeld
45829b3380 Avoid the cost of creating an anyhow error in RelPath::strip_prefix (#44444)
Release Notes:

- Fixed a performance bottleneck that could delay Zed's processing of FS
events for a long time in some cases.
2025-12-09 00:01:46 +00:00
Marshall Bowers
631e3dd272 collab: Remove unused Signup model (#44438)
This PR removes the `Signup` database model, as it was not being used.

Release Notes:

- N/A
2025-12-08 23:15:38 +00:00
Marshall Bowers
8d44bcd4f9 collab: Remove database migrations (#44436)
This PR removes the database schema migrations from the repo, as these
are now managed by Cloud.

There's a new `20251208000000_test_schema.sql` "migration" that we use
to create the database schema for the tests, similar to what we use for
SQLite.

Release Notes:

- N/A
2025-12-08 17:53:14 -05:00
Andrew Farkas
1888106664 Fix telemetry for collab::ToggleMute and remove unregistered actions (#44432)
This PR removes the actions `collab::ToggleScreenSharing`,
`collab::ToggleMute`, and `collab::ToggleDeafen`. They weren't actually
registered to any behavior, so while it was possible to create a keybind
bound to them, they never actually triggered. I spent ~30 minutes trying
to figure out why I was getting this result for my `"f13":
"collab::ToggleMute"` keybind in the keybind context menu:

<img width="485" height="174" alt="image"
src="https://github.com/user-attachments/assets/23064c8f-fe8d-42e5-b94f-bd4b8a0cb3b5"
/>

(This really threw me for a loop because I was trying to use this as a
known good case to compare against a _different_ action that wasn't
working because I forgot to register it.)

As a side benefit, this enables telemetry for toggling mic mute via
keybind.

Release Notes:

- Fixed telemetry for `collab::Mute`
- Removed unregistered actions `collab::ToggleMute`,
`collab::ToggleDeafen`, and `collab::ToggleScreenshare`
- The correctly-functioning actions `collab::Mute`, `collab::Deafen`,
and `collab::ScreenShare` are recommended instead
2025-12-08 22:03:51 +00:00
Marshall Bowers
c005adb09c collab: Don't run migrations on startup (#44430)
This PR removes the step that applies migrations when Collab starts up,
as migrations are now done as part of Cloud deployments.

Release Notes:

- N/A
2025-12-08 21:46:52 +00:00
Andrew Farkas
6b2d1f153d Add editor::InsertSnippet action (#44428)
Closes #20036

This introduces a new action, `editor: insert snippet`. It supports
three modes:

```
["editor::InsertSnippet", {"name": "snippet_name"}]
["editor::InsertSnippet", {"language": "language_name", "name": "snippet_name"}]
["editor::InsertSnippet", {"snippet": "snippet with $1 tab stops"}]
```

## Example usage

### `keymap.json`

```json
  {
    "context": "Editor",
    "bindings": {
      // named global snippet
      "cmd-k cmd-r": ["editor::InsertSnippet", {"name": "all rights reserved"}],
      // named language-specific snippet
      "cmd-k cmd-p": ["editor::InsertSnippet", {"language": "rust", "name": "debug-print a value"}],
      // inline snippet
      "cmd-k cmd-e": ["editor::InsertSnippet", {"snippet": "println!(\"This snippet has multiple lines.\")\nprintln!(\"It belongs to $1 and is very $2.\")"}],
    },
  },
```

### `~/.config/zed/snippets/rust.json`

```json
{
  "debug-print a value": {
    "body": "println!(\"$1 = {:?}\", $1)",
  },
}
```

### `~/.config/zed/snippets/snippets.json`

```json
{
  "all rights reserved": {
    "body": "Copyright © ${1:2025} ${2:your name}. All rights reserved.",
  },
}
```

## Future extensions

- Support multiline inline snippets using an array of strings using
something similar to `ListOrDirect` in
`snippet_provider::format::VsCodeSnippet`
- When called with no arguments, open a modal to select a snippet to
insert

Release Notes:

- Added `editor::InsertSnippet` action
2025-12-08 21:38:24 +00:00
tidely
22e1bcccad languages: Check whether to update typescript-language-server (#44343)
Closes #43155

Adds a missing check to also update packages when the
`typescript-language-server` package is outdated.

I created a new `SERVER_PACKAGE_NAME` constant so that the package name
isn't coupled to the language server name inside Zed.

Release Notes:

- Fixed the typescript language server falling out of date
2025-12-08 21:19:10 +00:00
Finn Evers
bb591f1e65 extension_cli: Properly populate manifest with snippet location (#44425)
This fixes an issue where the snippet file location would not be the
proper one for compiled extensions, because it was populated with an
absolute path instead of one relative to the extension output directory.
This caused the downstream copy operation to do nothing, because it
copied the file to the location it was already at (which was not the
output directory for that extension).

Also adds some tests and pulls in the `Fs` so that hopefully we do not
have such issues with snippets a third time.

Release Notes:

- N/A
2025-12-08 21:49:26 +01:00
Piotr Osiewicz
3d6cc3dc79 terminal: Fix performance issues with hyperlink regex matching (#44407)
Problem statement: When given a line that contained a lot of matches of
your hyperlink regex of choice (thanks to #40305), we would look for
matches
that intersected with the currently hovered point. This is *hella*
expensive, because we would re-walk the whole alacritty grid for each
match. With the repro that Joseph shared, we had to go through 4000 such
matches on each frame render.

Problem solution: We now convert the hovered point into a range within
the line (byte-wise) in order to throw away matches that do not
intersect the
hovered range. This lets us avoid performing the unnecessary conversion
when we know it's never going to yield a match range that intersects the
hovered point.
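
A minimal sketch of the filtering idea with simplified types (a plain
string line instead of the alacritty grid): convert the hovered cell to
a byte offset once, then discard matches that cannot contain it before
doing any expensive work:

```rust
use regex::Regex;
use std::ops::Range;

fn hovered_hyperlink(line: &str, hovered_byte: usize, url_regex: &Regex) -> Option<Range<usize>> {
    url_regex
        .find_iter(line)
        .map(|m| m.range())
        // Cheap byte-range check; only a match containing the hovered offset
        // is worth any further processing.
        .find(|range| range.contains(&hovered_byte))
}
```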

Release Notes:

- terminal: Fixed performance regression when handling long lines.

---------

Co-authored-by: Dave Waggoner <waggoner.dave@gmail.com>
2025-12-08 21:03:50 +01:00
Anthony Eid
464d4f72eb git: Use branch names for resolve conflict buttons (#44421)
This makes merge conflict resolution clearer because we're now parsing
the branch names from the conflict region instead of hardcoding HEAD and
ORIGIN.

### Before
<img width="1157" height="1308" alt="image"
src="https://github.com/user-attachments/assets/1fd72823-4650-48dd-b26a-77c66d21614d"
/>

### After
<img width="1440" height="1249" alt="Screenshot 2025-12-08 at 2 17
12 PM"
src="https://github.com/user-attachments/assets/d23c219a-6128-4e2d-a8bc-3f128aa55272"
/>

Release Notes:

- git: Use branch names for git conflict buttons instead of HEAD and
ORIGIN
2025-12-08 14:49:06 -05:00
Bennet Bo Fenner
f4892559f0 codex: Fallback to locally installed version if update fails (#44419)
Closes #43900

Release Notes:

- Fall back to the locally installed codex version if an update fails
2025-12-08 19:59:25 +01:00
tidely
387059c6b2 language: Add LanguageName::new_static to reduce allocations (#44380)
Implements a specialized constructor `LanguageName::new_static` for
`&'static str` which reduces allocations.

`LanguageName::new` always backs the underlying `SharedString` with an
owned `Arc<str>` even when a `&'static str` is passed. This makes us
allocate each time we create a new `LanguageName` no matter what.
Creating a specialized constructor for `&'static str` allows us to
essentially construct them for free.
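
A minimal sketch of the specialized constructor, assuming the
`SharedString::new_static` API shown in the snippet below:

```rust
use gpui::SharedString;

pub struct LanguageName(SharedString);

impl LanguageName {
    /// Builds a LanguageName from a static string without allocating.
    pub fn new_static(name: &'static str) -> Self {
        Self(SharedString::new_static(name))
    }
}
```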

Additional change:
Encourages using explicit constructors to avoid needless allocations.
Currently there were no instances of this trait being called where the
lifetime was not `'static`, saving another 48 allocation sites.

```rust
impl<'a> From<&'a str> for LanguageName {
    fn from(str: &'a str) -> Self {
        Self(SharedString::new(str))
    }
}

// to 

impl From<&'static str> for LanguageName {
    fn from(str: &'static str) -> Self {
        Self(SharedString::new_static(str))
    }
}

```

Release Notes:

- N/A
2025-12-08 19:57:02 +01:00
Nereuxofficial
4a382b2797 fuzzy: Use lowercase representations for matrix size calculation (#44338)
Closes #44324

Release Notes:

- Uses the lowercase representation of the query for the matrix length
calculation to match the bounds size expected in `recursive_score_match`
2025-12-08 19:50:20 +01:00
ᴀᴍᴛᴏᴀᴇʀ
b948d8b9e7 git: Improve self-hosted provider support and Bitbucket integration (#42343)
This PR includes several minor modifications and improvements related to
Git hosting providers, covering the following areas:

1. Bitbucket Owner Parsing Fix: Remove the common `scm` prefix from the
remote URL of self-hosted Bitbucket instances to prevent incorrect owner
parsing.
[Reference](a6e3c6fbb2/src/git/remotes/bitbucket-server.ts (L72-L74))
2. Bitbucket Avatars in Blame: Add support for displaying Bitbucket
avatars in the Git blame view.
<img width="2750" height="1994" alt="CleanShot 2025-11-10 at 20 34
40@2x"
src="https://github.com/user-attachments/assets/9e26abdf-7880-4085-b636-a1f99ebeeb97"
/>
3. Self-hosted SourceHut Support: Add support for self-hosted SourceHut
instances.
4. Configuration: Add recently introduced self-hosted Git providers
(Gitea, Forgejo, and SourceHut) to the `git_hosting_providers` setting
option.
<img width="2750" height="1994" alt="CleanShot 2025-11-10 at 20 33
48@2x"
src="https://github.com/user-attachments/assets/44ffc799-182d-4145-9b89-e509bbc08843"
/>


Closes #11043

Release Notes:

- Improved self-hosted git provider support and Bitbucket integration
2025-12-08 13:32:14 -05:00
Floyd Wang
bc17491527 gpui: Revert grid template columns default behavior to align with Tailwind (#44368)
When using the latest version of `GPUI`, I found some grid layout
issues. I discovered #43555 modified the default behavior of grid
template columns. I checked the implementation at
https://tailwindcss.com/docs/grid-template-columns, and it seems our
previous implementation was correct.

If a grid layout is placed inside a flexbox, the layout becomes
unpredictable.

```rust
impl Render for HelloWorld {
    fn render(&mut self, _window: &mut Window, _cx: &mut Context<Self>) -> impl IntoElement {
        div()
            .flex()
            .size(px(500.0))
            .bg(rgb(0x505050))
            .text_xl()
            .text_color(rgb(0xffffff))
            .child(
                div()
                    .size_full()
                    .gap_1()
                    .grid()
                    .grid_cols(2)
                    .border_1()
                    .border_color(gpui::red())
                    .children((0..10).map(|ix| {
                        div()
                            .w_full()
                            .border_1()
                            .border_color(gpui::green())
                            .child(ix.to_string())
                    })),
            )
    }
}
```

| Before | After |
| - | - |
| <img width="612" height="644" alt="After1"
src="https://github.com/user-attachments/assets/64eaf949-0f38-4f0b-aae7-6637f8f40038"
/> | <img width="612" height="644" alt="Before1"
src="https://github.com/user-attachments/assets/561a508d-29ea-4fd2-bd1e-909ad14b9ee3"
/> |

I also placed the grid layout example inside a flexbox.

| Before | After |
| - | - |
| <img width="612" height="644" alt="After"
src="https://github.com/user-attachments/assets/fa6f4a2d-21d8-413e-8b66-7bd073e05f87"
/> | <img width="612" height="644" alt="Before"
src="https://github.com/user-attachments/assets/9e0783d1-18e9-470d-b913-0dbe4ba88835"
/> |

I tested the changes from the previous PR, and it seems that setting the
table's parent to `v_flex` is sufficient to achieve a non-full table
width without modifying the grid layout. This was already done in the
previous PR.

I reverted the grid changes; the blue border represents the table width.
cc @RemcoSmitsDev

<img width="1107" height="1000" alt="table"
src="https://github.com/user-attachments/assets/4b7ba2a2-a66a-444d-ad42-d80bc9057cce"
/>

So, I believe we should revert to this implementation to align with
tailwindcss behavior and avoid potential future problems, especially
since the cause of this issue is difficult to pinpoint.

Release Notes:

- N/A
2025-12-08 17:38:10 +01:00
ozzy
f6a6630171 agent_ui: Auto-capture file context on paste (#42982)
Closes #42972


https://github.com/user-attachments/assets/98f2d3dc-5682-4670-b636-fa8ea2495c69

Release Notes:

- Added automatic file context detection when pasting code into the AI
agent panel. Pasted code now displays as collapsible badges showing the
file path and line numbers (e.g., "app/layout.tsx (18-25)").

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-12-08 17:32:04 +01:00
Cameron Mcloughlin
18421845eb nix: Fix nix build failure due to missing protoc (#44412) 2025-12-08 16:08:36 +00:00
Martin Bergo
21439426a0 Add support for Grok 4.1 Fast models in xAI provider (#43419)
Release Notes:

- Added support for Grok 4.1 Fast (reasoning and non-reasoning) models
in the xAI provider, with 2M token context windows and full vision
capabilities.
- Extended 2M token context to existing Grok 4 Fast variants (from 128K)
for consistency with xAI updates.
- Enabled image/vision support for all Grok 4 family models.

Doc:
https://docs.x.ai/docs/models/grok-4-1-fast-reasoning
https://docs.x.ai/docs/models/grok-4-1-fast-non-reasoning
2025-12-08 16:38:34 +01:00
Smit Barmase
044f7b5583 editor: Fix buffer fold focuses first buffer in multi-buffer instead of the toggled one (#44394)
Closes #43870

Regressed in https://github.com/zed-industries/zed/pull/37953

Release Notes:

- Fixed an issue where toggling a buffer fold would focus the first
buffer in a multi-buffer instead of the toggled one.
2025-12-08 21:00:00 +05:30
Danilo Leal
12dba5edbe git_ui: Improve the branch picker when in the panel (#44408)
This PR adapts the design for the panel version of the branch picker.


https://github.com/user-attachments/assets/d04b612b-72e8-4bc9-9a19-9d466b9fe696

Release Notes:

- N/A
2025-12-08 15:20:58 +00:00
Lukas Wirth
216a3a60f5 keymap: Fix windows keys for open and open files actions (#44406)
Closes https://github.com/zed-industries/zed/issues/44040
Closes https://github.com/zed-industries/zed/issues/44044
 
Release Notes:

- Fixed incorrect keybindings for `open folder` and `open files` actions
on windows' default keymap
2025-12-08 15:08:44 +00:00
Cameron Mcloughlin
66789607c9 editor: Goto references skip multibuffer if single match (#43026)
Co-authored-by: Agus <agus@zed.dev>
2025-12-08 14:38:19 +00:00
Lee Nussbaum
62c312b35f Update Recent Projects picker to show SSH host (#44349)
**Problem addressed:**

Cannot distinguish an identical path on multiple remote hosts in the
"Recent Projects" picker.

**Related work:**

- Issue #40358 (already closed) identified the issue in the "Expected
Behavior" section for both WSL and SSH remotes.
- PR #40375 implemented WSL distro labels.

**Screenshots:**

Before:
<img width="485" height="98" alt="screenshot-sample-project-before"
src="https://github.com/user-attachments/assets/182d907a-6f11-44aa-8ca5-8114f5551c93"
/>

After:
<img width="481" height="94" alt="screenshot-sample-project-after"
src="https://github.com/user-attachments/assets/5f87ff12-6577-404e-9319-717f84b7d2e7"
/>

**Implementation notes:**

RemoteConnectionOptions::display_name() will be subject to
exhaustiveness checking on RemoteConnectionOption variants.

Keeps the same UI approach as the WSL distro variants.

Release Notes:

- Improved Recent Projects picker: now displays SSH hostname with
remotes.
2025-12-08 11:07:03 -03:00
Ben Brandt
066dd5c9d5 acp: Fix download path for Codex on ARM Windows (#44395)
Both Windows paths use `.zip`, not `.tar.gz`.

Closes #44378

Release Notes:

- acp: Fix codex-acp download path for ARM Windows targets
2025-12-08 14:06:06 +00:00
Lukas Wirth
bdba6fd069 remote(wsl): Make shell and platform discovery more resilient to shell scripts (#44363)
Companion PR to https://github.com/zed-industries/zed/pull/44165

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-12-08 15:05:50 +01:00
Remco Smits
2260b87ea8 agent_ui: Fix show markdown list checked state (#43567)
Closes #37527

This PR adds support for showing the checked state of a list item inside
the agent UI.

**Before**
<img width="643" height="505" alt="Screenshot 2025-11-26 at 16 21 31"
src="https://github.com/user-attachments/assets/30c78022-4096-4fe4-a6cc-db208d03900f"
/>

**After**
<img width="640" height="503" alt="Screenshot 2025-11-26 at 16 41 32"
src="https://github.com/user-attachments/assets/ece14172-79a5-4d5e-a577-4b87db04280f"
/>

Release Notes:
- Agent UI now shows the checked state of a list item

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-12-08 14:04:49 +00:00
Kian Kasad
e6214b2b71 extensions: Don't recompile Tree-sitter parsers when up-to-date (#43442)
When installing local "dev" extensions that provide tree-sitter
grammars, compiling the parser can be quite intensive[^anecdote]. This
PR changes the logic to only compile the parser if the WASM object
doesn't exist or the source files are newer than the object (just like
`make(1)` would do).

[^anecdote]: The tree-sitter parser for LLVM IR takes >10 minutes to
compile and uses 20 GB of memory on my laptop.
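
A minimal sketch of the `make(1)`-style staleness check under assumed
paths (the real extension builder tracks its own source and output
locations):

```rust
use std::fs;
use std::io;
use std::path::Path;

fn parser_needs_recompile(sources: &[&Path], wasm_object: &Path) -> io::Result<bool> {
    let Ok(object_meta) = fs::metadata(wasm_object) else {
        return Ok(true); // no compiled parser yet
    };
    let object_mtime = object_meta.modified()?;
    for source in sources {
        // Recompile if any grammar source is newer than the wasm object.
        if fs::metadata(source)?.modified()? > object_mtime {
            return Ok(true);
        }
    }
    Ok(false)
}
```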

Release Notes:

- N/A

---------

Co-authored-by: Finn Evers <finn.evers@outlook.de>
2025-12-08 13:48:04 +00:00
Ben Brandt
7ef45914e8 editor: Fix link navigation within editors that don't have a workspace (#44389)
This mostly affects Channel Notes, but due to a change in
https://github.com/zed-industries/zed/pull/43921 we were
short-circuiting before opening links.

I moved the workspace checks back to right before we need them so that
we still follow the same control flow as usual for these editors.

Closes #44207

Release Notes:

- N/A
2025-12-08 13:36:39 +00:00
Mustaque Ahmed
00e6cbc4fc git_ui: Fix tooltip overlaying context menu in git blame (#42764)
Closes #26949


## Summary

1. Split editor references to avoid borrow conflicts in event handlers
2. Check `has_mouse_context_menu()` state directly in the tooltip
conditional instead of caching a stale value
3. Restructured context menu deployment to ensure proper sequencing:
hide popover → build menu → deploy menu → notify for re-render

**Screen recording**



https://github.com/user-attachments/assets/8a00f882-1c54-47b0-9211-4f28f8deb867



Release Notes:

- Fixed an issue where the context menu in the Git Blame view would be
frequently overlapped by the commit information tooltip.

---------

Co-authored-by: Finn Evers <finn.evers@outlook.de>
2025-12-08 14:32:06 +01:00
Finn Evers
9e0a4c2a9c terminal_view: Fix casing of popover menu entry (#44377)
This ensures that the casing of this entry aligns with other entries in
the app popover Menus.

Release Notes:

- N/A

---------

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
2025-12-08 13:19:54 +00:00
Danilo Leal
a8b61c5ffa ui: Remove support for unnecessary Switch colors (#44388)
This PR removes the default, error, warning, and success color variants
from the `SwitchColor` enum. In the most YAGNI spirit, I think we'll
probably never want to use these colors for the switch, so there's no
reason to support them. And if we ever want to do it, we can re-add
them.

I also took the opportunity to change the default color to be "accent",
which is _already_ what we use for all instances of this component, so
there's no need to have to define it every time. This effectively makes
the enum support only "accent" and "custom", which I think is okay for
now, in case we ever need an escape hatch before committing to
supporting new
values.

Release Notes:

- N/A
2025-12-08 13:16:19 +00:00
Oleksiy Syvokon
d312d59ace Add zeta distill command (#44369)
This PR partially implements a knowledge distillation data pipeline.

`zeta distill` gets a dataset of chronologically ordered commits and
generates synthetic predictions with a teacher model (one-shot Claude
Sonnet).

`zeta distill --batches cache.db` will enable the Message Batches API.
On the first run, this command will collect all LLM requests and upload
a batch of them to Anthropic. On subsequent runs, it will check the
batch status; if ready, it will download the results and put them into
the local cache.


Release Notes:

- N/A

---------

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-12-08 15:13:22 +02:00
Thomas Wood
b4083ec47b Fix typo in prompt.md (#44326)
"too calls" → "tool calls"

Release Notes:

- N/A
2025-12-08 10:08:15 -03:00
Smit Barmase
e75a7aecd6 languages: Fix Markdown list items are automatically indented erroneously (#44381)
Closes #44223

Regressed in https://github.com/zed-industries/zed/pull/40794 in an
attempt to fix https://github.com/zed-industries/zed/issues/40757. This
PR handles both cases and adds more tests around it.

The bug is only in Nightly.

Release Notes:

- N/A
2025-12-08 17:49:17 +05:30
Ben Brandt
03cf7ddb53 acp: Update to agent-client-protocol rust sdk v0.9.0 (#44373)
Release Notes:

- N/A
2025-12-08 11:11:05 +00:00
Clay Tercek
9364d39487 Improve TS/TSX/JS syntax highlighting for parameters, types, and punctuation (#43437)
This pull request enhances syntax highlighting for JavaScript,
TypeScript, TSX, and JSDoc by adding more precise rules for parameters,
types, and punctuation.

- Added queries for highlighting parameters (`@variable.parameter`)
- Expanded highlighting for type identifiers, type aliases, interfaces,
classes
- Extended/implemented types to improve distinction between different
type constructs (`@type`, `@type.class`)
- Added highlighting for punctuation in type parameters, unions,
intersections, annotations, index signatures, optional fields, and
predicates (`@punctuation.special`, `@punctuation.bracket`)
- Added highlighting for identifiers in JSDoc comments
(`@variable.jsdoc`)

Release Notes:

- Refined syntax highlighting in JavaScript and TypeScript for better
visual distinction of
  types, parameters, and JSDoc elements
2025-12-08 12:00:38 +01:00
Lukas Wirth
f16913400a title_bar: Fix clicking collaborators on windows not starting a follow (#44364)
Release Notes:

- Fixed left click not allowing you to follow a collaborator from the
collab title bar on Windows
2025-12-08 09:36:55 +00:00
Rémi Kalbe
5bfc0baa4c macos: Reset exception ports for shell-spawned processes (#44193)
## Summary

Follow-up to #40716. This applies the same `reset_exception_ports()` fix
to `set_pre_exec_to_start_new_session()`, which is used by shell
environment capture, terminal spawning, and DAP transport.

### Root Cause

After more debugging, I finally figured out what was causing the issue
on my machine. Here's what was happening:

1. Zed spawns a login shell (zsh) to capture environment variables
2. A pipe is created: reader in Zed, writer mapped to fd 0 in zsh
3. zsh sources `.zshrc` → loads oh-my-zsh → runs poetry plugin
4. Poetry plugin runs `poetry completions zsh &|` in background
5. Poetry inherits fd 0 (the pipe's write end) from zsh
6. zsh finishes `zed --printenv` and exits
7. Poetry still holds fd 0 open
8. Zed's `reader.read_to_end()` blocks waiting for all writers to close
9. Poetry hangs (likely due to inherited crash handler exception ports
interfering with its normal operation)
10. Pipe stays open → Zed stuck → no more processes spawn (including
LSPs)

I confirmed this by killing the hanging `poetry` process, which
immediately unblocked Zed and allowed LSPs to start. However, this
workaround was needed every time I started Zed.

While poetry was the culprit in my case, this can affect any shell
configuration that spawns background processes during initialization
(oh-my-zsh plugins, direnv, asdf, nvm, etc.).

Fixes #36754

## Test plan

- [x] Build with `ZED_GENERATE_MINIDUMPS=true` to force crash handler
initialization
- [x] Verify crash handler logs appear ("spawning crash handler
process", "crash handler registered")
- [x] Confirm LSPs start correctly with shell plugins that spawn
background processes

Release Notes:

- Fixed an issue on macOS where LSPs could fail to start when shell
plugins spawn background processes during environment capture.
2025-12-08 09:26:35 +01:00
Jake Go
d7b99a5b12 gpui: Fix new window cascade positioning (#44358)
Closes [#44354](https://github.com/zed-industries/zed/discussions/44354)

Release Notes:
- Fixed new windows to properly cascade from the active window instead
of opening at the exact same position
2025-12-08 09:11:58 +01:00
Cole Miller
7691cf341c Fix selections when opening excerpt with an existing buffer that has expanded diff hunks (#44360)
Release Notes:

- N/A
2025-12-08 06:12:10 +00:00
Anthony Eid
63cc90cd2c debugger: Fix stack frame filter not persisting between sessions (#44352)
This bug was caused by using two separate keys for writing/reading from
the KVP database. The bug didn't show up in my debug build because there
was an old entry with a valid key.

I added an integration test for this feature to prevent future
regressions as well.

Release Notes:

- debugger: Fix a bug where the stack frame filter state wouldn't
persist between sessions
2025-12-08 01:43:30 +00:00
Remco Smits
d1e45e27de debugger: Fix UI would not update when you select the Current State option (#44340)
This PR fixes the `Current State` option inside the history dropdown
not updating the UI. This happened because we didn't send the
`SessionEvent::HistoricSnapshotSelected` event in the reset case; it was
simply an oversight.

**After**


https://github.com/user-attachments/assets/6df5f990-fd66-4c6b-9633-f85b422fb95a

cc @Anthony-Eid

Release Notes:

- N/A
2025-12-07 16:59:38 -05:00
Kunall Banerjee
9da0d40694 docs: Point to the right URL for Regal LSP (#44318)
Release Notes:

- N/A
2025-12-07 05:29:58 +00:00
Kunall Banerjee
9f344f093e docs: Point to the right URL for Astro LSP (#44314)
The original URL points to a deprecated repo.

Release Notes:

- N/A
2025-12-06 19:14:13 -05:00
Remco Smits
ef76f07b1e debugger: Make historic snapshot button a dropdown menu (#44307)
This allows users to select any snapshot in the debugger history feature
and go back to the active session snapshot.

We also rename variables to use "historic snapshot" instead of
"history" and move the snapshot icon to the end of the debugger's top
control strip.


https://github.com/user-attachments/assets/805de8d0-30c1-4719-8af7-2d47e1df1da4

Release Notes:

- N/A

Co-authored-by: Anthony Eid <hello@anthonyeid.me>
2025-12-06 21:08:33 +00:00
Remco Smits
4577e1bf8f debugger: Get stack frame list working with historic snapshot feature (#44303)
This PR fixes an issue where the stack frame list would not update when
viewing a historic snapshot.
We now also show the right active debug line based on the currently
selected history.


https://github.com/user-attachments/assets/baccd078-23ed-4db3-9959-f83dc2be8309

Release Notes:

- N/A

---------

Co-authored-by: Anthony Eid <hello@anthonyeid.me>
2025-12-06 15:34:19 -05:00
Remco Smits
a574ae8779 debugger: Start work on adding session snapshot feature (#44298)
This PR adds the basic logic for a feature that lets you revisit any
earlier stopped state of a debug session. We will follow up with PRs to
improve this and add proper UI for it so the UX is better.


https://github.com/user-attachments/assets/42d8a5b3-8ab8-471a-bdd0-f579662eadd6


Edit (Anthony):

We feature-flagged this so external users won't be able to access it
until the feature is polished.

Release Notes:

- N/A

---------

Co-authored-by: Anthony Eid <hello@anthonyeid.me>
2025-12-06 19:31:08 +00:00
Kirill Bulatov
16666f5357 Use single languages::{rust_lang, markdown_lang} in tests across the codebase (#44282)
This allows referencing proper queries and keeping the tests up-to-date.

Release Notes:

- N/A
2025-12-06 18:49:21 +00:00
Ben Kunkle
b2e35b5f99 zlog: Fix dynamic mod path filtering (#44296)
Closes #ISSUE

Release Notes:

- Linux: cleaned up noisy logs from `zbus`
2025-12-06 17:56:49 +00:00
John Tur
9e33243015 Fix unregistration logic for pull diagnostics (#44294)
Even if `workspace_diagnostics_refresh_tasks` is empty, registrations
which didn't advertise support for workspace diagnostics may still
exist.

Release Notes:

- N/A
2025-12-06 11:31:05 -05:00
Danilo Leal
a0848daab4 agent ui: Fix clicks on the notification sometimes not being triggered (#44280)
Closes https://github.com/zed-industries/zed/issues/43292

We were seeing clicks on the "View Panel" and "Dismiss" buttons
sometimes not being triggered. I believe this was happening because the
overall parent also had an `on_click` handler which, since this is a
popup window, conflicted with the buttons' own click handlers. This
should hopefully fix that issue.

Release Notes:

- agent: Fixed an issue where clicking on the agent notification buttons
would sometimes not trigger their actions.
2025-12-06 12:43:37 +00:00
David Kleingeld
d72746773f Put tracy dependency behind feature tracy (#44277)
It broke CI; now it no longer does 🎉 A proper fix follows after the
weekend.

Release Notes:

- N/A
2025-12-06 14:08:01 +02:00
Danilo Leal
0565992d7a project picker: Improve tooltip on secondary actions (#44264)
This PR adds the keybinding for the "open in project window" button on
the project picker as well as makes the tooltip for the content bit on
the active list item only show up for the content container.


https://github.com/user-attachments/assets/42944cf7-e4e7-4bf8-8695-48df8b3a35eb


Release Notes:

- N/A
2025-12-06 09:06:51 -03:00
Danilo Leal
e1d8c1a6a1 Improve visual alignment on the inline assistant (#44265)
Just making all of the elements in the inline assistant more vertically
centered.

<img width="500" height="1938" alt="Screenshot 2025-12-06 at 12  02@2x"
src="https://github.com/user-attachments/assets/7f9627ac-4f2d-4f93-9a7e-31c5a01c32d1"
/>

Release Notes:

- N/A
2025-12-06 09:06:43 -03:00
Agus Zubiaga
f08fd732a7 Add experimental mercury edit prediction provider (#44256)
Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
2025-12-06 10:08:44 +00:00
Haojian Wu
51b7d06a27 Fix a typo: to -> two (#44272)
Release Notes:

- N/A
2025-12-06 09:35:18 +02:00
Cole Miller
66c7bdf037 git: For conflicted files, set project diff excerpts using conflicts only (#44263)
It's just distracting having excerpts for all the successfully merged
hunks.

Release Notes:

- git: The project diff now focuses on merge conflicts for files that
have them.
2025-12-06 02:20:14 +00:00
Cole Miller
363fbbf0d4 git: Fix excerpt ranges in the commit view (#44261)
Release Notes:

- N/A
2025-12-06 02:05:34 +00:00
Serophots
9860884217 gpui: Make length helpers into const functions (#44259)
Make gpui's `rems()`, `phi()`, `auto()` length related helpers into
const functions.

I can't see why these functions aren't already const, other than it
having been overlooked when they were written.

In my project I needed rems() to be const, and I thought I'd do phi()
and auto() whilst I was in the neighbourhood.
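
A minimal sketch of what making such a helper `const` buys you (the `Rems` type here is a simplified stand-in for gpui's real type):

```rust
// Simplified stand-in for gpui's length type.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Rems(pub f32);

// Marking the helper `const` lets it be used in constant contexts.
pub const fn rems(value: f32) -> Rems {
    Rems(value)
}

// Now usable when defining constants, e.g. shared layout metrics:
pub const LIST_ITEM_HEIGHT: Rems = rems(1.75);
```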

Release Notes:

- N/A
2025-12-06 01:08:43 +00:00
Conrad Irwin
4cef8eb47b Fix persistence for single-file worktrees (#44257)
We were just deleting them before

Co-Authored-By: Matthew Chisolm <mchisolm0@gmail.com>

Closes #ISSUE

Release Notes:

- Fixed restoring window location for single-file worktrees

Co-authored-by: Matthew Chisolm <mchisolm0@gmail.com>
2025-12-05 23:55:05 +00:00
Oleksii (Alexey) Orlenko
e5f87735d3 markdown_preview: Remove unnecessary vec allocation (#44238)
Instead of allocating a one-element vec on the heap, we can just use an
array here (since `Editor::edit` accepts anything that implements
`IntoIterator`).

I haven't checked if there are more instances that can be simplified,
just accidentally stumbled upon this when working on something else in
the markdown preview crate.
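
For illustration only (not the real editor API), this shows why the array works anywhere the `Vec` did, since the bound is just `IntoIterator`:

```rust
use std::ops::Range;

// Any IntoIterator of edits is accepted, mirroring the bound on Editor::edit.
fn apply_edits<I>(edits: I)
where
    I: IntoIterator<Item = (Range<usize>, String)>,
{
    for (range, text) in edits {
        println!("replace {:?} with {:?}", range, text);
    }
}

fn main() {
    let edit = (0..5, "hello".to_string());
    // Both compile; the array version avoids allocating a one-element Vec.
    apply_edits([edit.clone()]);
    apply_edits(vec![edit]);
}
```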

Release Notes:

- N/A
2025-12-05 23:27:21 +01:00
Mayank Verma
f4b8b0f471 settings: Fix inconsistent terminal font weight step size (#44243)
Closes #44242

Release Notes:

- Fixed inconsistent terminal font weight step size in settings
2025-12-05 19:24:59 -03:00
Michael Benfield
5cd30e5106 inline assistant: Use tools and remove insertion mode (#44248)
Co-authored by: Mikayla Maki <mikayla.c.maki@gmail.com>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>

Release Notes:

- N/A
2025-12-05 13:28:29 -08:00
Kirill Bulatov
a350438a21 Specify a schema to use when dealing with JSONC files (#44250)
Follow-up of https://github.com/zed-industries/zed/pull/43854
Closes https://github.com/zed-industries/zed/issues/40970

It seems the JSON language server does not distinguish between JSONC
and JSON files at runtime, but there is a static schema that accepts
globs in its `fileMatch` fields.

Use all glob overrides and file suffixes for JSONC inside those match
fields, and provide a grammar for such matches that accepts trailing
commas.

Release Notes:

- Improved JSONC trailing comma handling
2025-12-05 20:26:42 +00:00
Danilo Leal
bd6ca841ad git_ui: Improve the branch picker UI (#44217)
Follow up to https://github.com/zed-industries/zed/pull/42819 and
https://github.com/zed-industries/zed/pull/44206.

- Make this picker feel more consistent with other similar pickers
(namely, the project picker)
- Move actions to the footer and toggle them conditionally
- Only show the "Create" and "Create New From: {default}" items when we're
selecting the "Create" list item _or_ when that item is the only one
visible. This means I'm changing the state transition to only switch to
`NewBranch/NewRemote` if those are the only items available.
- Reuse more UI code and use components when available (e.g.,
`ListHeader`)
- Remove secondary actions from the list item

The next step (in another PR) will be to refine the same picker in the
smaller, panel version.


https://github.com/user-attachments/assets/fe72ac06-c1df-4829-a8a4-df8a9222672f

Release Notes:

- N/A
2025-12-05 17:17:50 -03:00
Bennet Bo Fenner
f9cea5af29 Fix project not getting dropped after closing window (#44237) 2025-12-05 19:53:53 +01:00
Danilo Leal
3bb6c2546a git_ui: Fix history view label truncation (#44218)
There's still a weird problem where the labels (and the label on the
tab, too, for what it's worth) flicker as the file history view gets
smaller. I suspect that problem is related to something else
(potentially the truncation algorithm or focus management), so I'm not
solving it here.

<img width="500" height="1948" alt="Screenshot 2025-12-05 at 11  24@2x"
src="https://github.com/user-attachments/assets/25715725-e2cb-475a-bdab-f506bb75475f"
/>

Release Notes:

- N/A
2025-12-05 15:46:28 -03:00
Lukas Wirth
37b0cdf94b multi_buffer: Remap excerpt ids to latest excerpt in excerpt fetching (#44229)
Closes #ISSUE

Release Notes:

- N/A

Co-authored by: Cole Miller <cole@zed.dev>
2025-12-05 18:20:29 +00:00
Dino
d76dd86272 tab_switcher: Add documentation for tab switcher (#44189)
Release Notes:

- Added documentation for Tab Switcher
2025-12-05 18:18:51 +00:00
David Kleingeld
b558be7ec6 adds tracing for instrumenting non-async functions (#44147)
Tracing code is not included in normal release builds.
Documents how to use it in our performance docs.
Only the maps and cursors are instrumented at the moment.

# Compile times:

- current main, fresh release build (`cargo clean` then `build --release`): 377.34 secs
- current main, fresh debug build (`cargo clean` then `build`): 89.31 secs
- tracing tracy, fresh release build (`cargo clean` then `build --release`): 374.84 secs
- tracing tracy, fresh debug build (`cargo clean` then `build`): 88.95 secs
- tracing tracy, fresh release build with timings (`cargo clean` then `build --release --features tracing`): 375.77 secs
- tracing tracy, fresh debug build with timings (`cargo clean` then `build --features tracing`): 90.03 secs


Release Notes:

- N/A

---------

Co-authored-by: localcc <work@localcc.cc>
2025-12-05 17:23:06 +00:00
Agus Zubiaga
07fe8e9bb1 remoting: Proxy configuration docs (#44225)
Adds an explicit section about how to configure proxies when remoting.

Release Notes:

- N/A
2025-12-05 13:47:29 -03:00
Ben Brandt
b776178b52 agent_ui: Fix mention and slash command menu not appearing with show_completions_on_input set to false (#44222)
Addresses a regression introduced by
https://github.com/zed-industries/zed/pull/44021 that caused @mentions
and slash commands to stop working if you set
`show_completions_on_input: false` in your settings.

In this case, we should always show these menus, otherwise the features
won't work at all.

Release Notes:

- N/A
2025-12-05 15:50:32 +00:00
Dino
1d0aef6b22 Ensure font features are applied to styled text (#44219)
- Replace `gpui::styled::Styled.font_family()` calls with
`gpui::styled::Styled.font()` when laying out inline diagnostics and
inline blame, to ensure that the font's features are also used, and
not just the font family.
- Update both `editor::hover_popover::hover_markdown_style` and
`editor::hover_popover::diagnostics_markdown_style` to ensure that
both the UI and Buffer font features are used in both markdown and
diagnostics popover.

Closes #44209 

Release Notes:

- Fixed font feature application for inline git blame, inline
diagnostics, markdown popovers and diagnostics popovers
2025-12-05 15:24:07 +00:00
Agus Zubiaga
c7ef3025e4 remoting: Server download connect timeout (#44216)
Sometimes machines are configured to drop outbound packets (rather than
reject connections). In these cases, curl/wget just hang, causing our
download step to never complete. This PR adds a timeout of 10s for the
connection (not the whole download), so that in situations like this we
can eventually fall back to our client-side download.
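
A rough sketch of the idea (not the actual remoting code; `--connect-timeout` is curl's real flag, everything else here is illustrative):

```rust
use std::io;
use std::process::Command;

// Fail fast if the TCP connection can't be established within 10 seconds,
// so a server that silently drops packets doesn't stall the download step.
fn download_with_curl(url: &str, dest: &str) -> io::Result<bool> {
    let status = Command::new("curl")
        .args(["--connect-timeout", "10", "-fL", "-o", dest, url])
        .status()?;
    Ok(status.success())
}
```

If this returns `Ok(false)` or an error, the caller can fall back to the client-side download path.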

Related to but doesn't fully fix:
https://github.com/zed-industries/zed/issues/43694 and
https://github.com/zed-industries/zed/issues/43718

Release Notes:

- remote: Add 10s connect timeout for server download
2025-12-05 16:16:46 +01:00
Agus Zubiaga
822fc7ef16 remote: Use last line of uname and shell output (#44165)
We have seen cases (see
https://github.com/zed-industries/zed/issues/43694) where the user's
shell initialization script includes text that ends up in the output of
the commands we use to detect the platform and shell of the remote. This
solution isn't perfect, but it should address the issue in most
situations since both commands should only output one line.
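
A minimal sketch of the approach (not the exact remoting code):

```rust
// Take only the last non-empty line of the command output, so anything a
// shell init script prints beforehand is ignored.
fn last_line(output: &str) -> Option<&str> {
    output.lines().rev().find(|line| !line.trim().is_empty())
}

fn main() {
    let uname_output = "Welcome message printed by .bashrc\nLinux x86_64";
    assert_eq!(last_line(uname_output), Some("Linux x86_64"));
}
```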

Release Notes:

- remote: Improve resiliency when initialization scripts output text
2025-12-05 14:04:01 +01:00
Anthony Eid
126d708fa1 git: Fix branch picker creating new branches with refs/head/ prefixed on the branch name (#44206)
The bug was introduced in this recent PR:
https://github.com/zed-industries/zed/pull/42819. Since it's still in
nightly, there is no need for release notes.

I also polished the feature a bit by:
- Ensuring branch names are always a single line so the branch picker's
uniform list uses the correct element height.
- Adding tooltip text for the filter remotes button.
- Fixing the create branch from default icon showing up for non-new
branch entries.

Release Notes:

- N/A
2025-12-05 12:59:13 +00:00
Lukas Wirth
a5ab5c7d5d gpui: Document the leak detector (#44208)
Release Notes:

- N/A
2025-12-05 12:35:05 +00:00
Piotr Osiewicz
35da6d000a debugger: Fix evaluate selection running two evaluations & failing for Python and go (#44205)
Evaluate selection now acts as if the text was typed verbatim into the
console.

Closes #33526

Release Notes:

- debugger: Fixed "evaluate selection" not behaving as if the
highlighted text had been typed verbatim into the console.
2025-12-05 11:08:04 +00:00
Max Brunsfeld
d6241b17d3 Fix infinite loop in assemble_excerpts (#44195)
Also, expand the number of identifiers fetched.

Release Notes:

- N/A
2025-12-05 06:51:26 +00:00
Max Brunsfeld
42583c1141 Reorganize edit prediction code and remove old experiments (#44187)
Release Notes:

- N/A

---------

Co-authored-by: Agus Zubiaga <agus@zed.dev>
Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-12-04 15:56:57 -08:00
Max Brunsfeld
76167109db Add experimental LSP-based context retrieval system for edit prediction (#44036)
To do

* [x] Default to no context retrieval. Allow opting in to LSP-based
retrieval via a setting (for users in `zeta2` feature flag)
* [x] Feed this context to models when enabled
* [x] Make the zeta2 context view work well with LSP retrieval
* [x] Add a UI for the setting (for feature-flagged users)
* [x] Ensure Zeta CLI `context` command is usable

---

* [ ] Filter out LSP definitions that are too large / entire files (e.g.
modules)
* [ ] Introduce timeouts
* [ ] Test with other LSPs
* [ ] Figure out hangs

Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Agus Zubiaga <agus@zed.dev>
2025-12-04 12:48:39 -08:00
Ian Chamberlain
cd8679e81a Allow trailing commas in builtin JSONC schemas (#43854)
The JSON language server looks for a top-level `allowTrailingCommas`
flag to decide whether it should warn for trailing commas. Since the
JSONC parser for these builtin files can handle trailing commas, adding
this flag to the schema also prevents a warning for those commas.

I don't think there's an issue filed for this specific problem, but it
relates to *many* existing / older issues:
- #18509
- #17487
- #40970
- #21303

Release Notes:

- Suppress warning for trailing commas in builtin JSON files
(`settings.json`, `keymap.json`, etc.)
2025-12-04 15:37:32 -05:00
Danilo Leal
43f977c6b9 terminal view: Use tooltip element for the tab tooltip (#44169)
Just recently realized we don't need this custom component for it given
we now have `Tooltip::element`. UI result is exactly the same; nothing
changes.

Release Notes:

- N/A
2025-12-04 16:48:03 -03:00
Danilo Leal
bdb8caa42e git_ui: Fix indent guides not showing for file buffers in the commit view (#44166)
Follow up to https://github.com/zed-industries/zed/pull/44162 where my
strategy for not displaying the indent guides only in the commit message
was wrong given I ended up... disabling indent guides for all the
buffers. This PR adds a new method to the editor where we can disable it
for a specific buffer ID following the pattern of
`disable_header_for_buffer`.

Release Notes:

- N/A
2025-12-04 16:47:27 -03:00
vipex
9ae77ec3c9 markdown: Don't adjust indentation when inserting with multiple cursors (#40794)
Closes #40757

## Summary

This PR addresses an issue where Zed incorrectly adjusts the indentation
of Markdown lists when inserting text using multiple cursors. Currently:

- Editing individual lines with a single cursor behaves correctly (no
unwanted indentation changes).
- Using multiple cursors, Zed automatically adjusts the indentation,
unlike VS Code, which preserves the existing formatting.

## Tasks
- [x] Implement a new test to verify correct Markdown indentation
behavior with multiple cursors.
- [x] Apply the fix to prevent Zed from adjusting indentation when
inserting text on multiple cursors.

------------------------

Release Notes:

- Fixed an issue where inserting text with multiple cursors inside a
nested Markdown list would cause it to lose its indentation.

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-12-05 00:18:06 +05:30
Cole Miller
d5ed9d3e3a git: Don't call git2::Repository::find_remote for every blamed buffer (#44107)
We already store the remote URLs for `origin` and `upstream` in the
`RepositorySnapshot`, so just use that data. Follow-up to #44092.
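
Illustrative sketch only; field and method names are simplified stand-ins for the real `RepositorySnapshot`, and which remote takes precedence is an assumption here:

```rust
// Prefer the remote URLs we already cached in the snapshot over calling
// git2::Repository::find_remote for every blamed buffer.
struct RepositorySnapshot {
    origin_url: Option<String>,
    upstream_url: Option<String>,
}

impl RepositorySnapshot {
    fn remote_url(&self) -> Option<&str> {
        // Assumption for the sketch: fall back from upstream to origin.
        self.upstream_url
            .as_deref()
            .or(self.origin_url.as_deref())
    }
}
```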

Release Notes:

- N/A
2025-12-04 13:25:30 -05:00
Liffindra Angga Zaaldian
74a1b5d14d Update PHP language server docs (#44001)
Reformat document structure like other language docs, improve
information flow, add missing requirements, and fix typos.

Release Notes:

- N/A

---------

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
2025-12-04 19:04:06 +01:00
Lukas Wirth
07af011eb4 worktree: Fix git ignored directories dropping their contents when they are refreshed (#44143)
Closes https://github.com/zed-industries/zed/issues/38653

Release Notes:

- Fixed git ignored directories appearing as empty when their content
changes on Windows

Co-authored by: Smit Barmase <smit@zed.dev>
2025-12-04 18:14:10 +01:00
Danilo Leal
c357dc25fc git_ui: Clean up the commit view UI (#44162) 2025-12-04 13:44:48 -03:00
Lukas Wirth
93bc6616c6 editor: Improve performance of update_visible_edit_prediction (#44161)
One half of https://github.com/zed-industries/zed/issues/42861

This basically reduces the main thread work for large enough JSON (and
other) files from multiple milliseconds (15ms was observed in that test
case) down to microseconds (100µs here).

Release Notes:

- Improved cursor movement performance when edit predictions are enabled
2025-12-04 15:41:48 +00:00
Lukas Wirth
a33e881906 remote: Recognize WSL interop to open browser for codex web login (#44136)
Closes #41521

Release Notes:

- Fixed codex web login not working on wsl remotes if no browser is
installed

Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
2025-12-04 14:42:26 +00:00
Agus Zubiaga
c978db8626 Fix background scanner deadlock (#44109)
Fixes a deadlock in the background scanner that occurs on single-core
Linux devices. This happens because the background scanner would `block`
on a background thread waiting for a future, but on single-core Linux
devices there would be no other thread to pick it up. This mostly
affects SSH remoting use cases where it's common for servers to have 1
vCPU.

Closes #43884 
Closes #43809

Release Notes:

- Fix SSH remoting hang when connecting to 1 vCPU servers
2025-12-04 11:30:16 -03:00
Rawand Ahmed Shaswar
2dad46c5c0 gpui: Fix division by zero when chars/sec = 0 on Wayland (#44151)
Closes #44148

The existing rate == 0 check inside the timer callback already handles
disabling repeat - it just drops the timer immediately. So the fix
prevents the crash while preserving correct behavior.
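
A hedged sketch of the guard (names are illustrative, not gpui's actual Wayland code):

```rust
use std::time::Duration;

// Skip scheduling key repeat entirely when the configured rate is zero,
// instead of dividing by it.
fn repeat_interval(chars_per_second: u32) -> Option<Duration> {
    if chars_per_second == 0 {
        // Repeat disabled; never compute 1 / 0 below.
        None
    } else {
        Some(Duration::from_secs_f64(1.0 / chars_per_second as f64))
    }
}
```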

Release Notes:

- Linux (Wayland): Fixed a crash that could occur when
`characters_per_second` was zero
2025-12-04 11:26:17 -03:00
Coenen Benjamin
4c51fffbb5 Add support for git remotes (#42819)
Follow up of #42486 
Closes #26559



https://github.com/user-attachments/assets/e2f54dda-a78b-4d9b-a910-16d51f98a111



Release Notes:

- Added support for git remotes

---------

Signed-off-by: Benjamin <5719034+bnjjj@users.noreply.github.com>
2025-12-04 14:23:36 +01:00
Piotr Osiewicz
0d80b452fb python: Improve sorting order of toolchains to give higher precedence to project-local virtual environments that are within current subproject (#44141)
Closes #44090

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>

Release Notes:

- python: Improved sorting order of toolchains in monorepos with
multiple local virtual environments.
- python: Fixed toolchain selector not having an active toolchain
selected on open.

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
Co-authored-by: Smit <smit@zed.dev>
2025-12-04 12:33:13 +00:00
John Gibb
bad6bde03a Use buffer language when formatting with Prettier (#43368)
Set `prettier_parser` explicitly if the file extension for the buffer
does not match a known one for the current language

Release Notes:

- N/A

---------

Co-authored-by: Kirill Bulatov <kirill@zed.dev>
2025-12-04 12:07:40 +00:00
Piotr Osiewicz
4ec2d04ad9 search: Fix sort order not being maintained in presence of open buffers (#44135)
In project search UI code we were seeing an issue where "Go to next
match" would act up and behave weirdly. It would not wrap at times.
Stuff would be weird, yo. It turned out that match ranges reported by
core project search were sometimes out of sync with the state of the
multi-buffer. As in, the sort order of
`search::ProjectSearch::match_ranges` would not match up with
multi-buffer's sort order. This is ~because multi-buffers maintain their
own sort order.

What happened within project search is that we were skipping straight
from stage 1 (filtering paths) to stage 3 via an internal channel, and
in the process we dropped the channel used to maintain result sorting.
This made it so that, given 2 files to scan:
- project/file1.rs <- not open, has to go through stage 2 (FS scan)
- project/file2.rs <- open, goes straight from stage 1 (path filtering)
  to stage 3 (finding all matches)

We would report matches for project/file2.rs first, because we would
notice that there's an existing language::Buffer for it. However, we
should wait for project/file1.rs's status to be reported before we kick
off project/file2.rs.

The fix is to use the sorting channel instead of an internal one, as
that keeps the sorting worker "in the loop" about the state of the
world.

Closes #43672

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>

Release Notes:

- Fixed "Select next match" in project search results misbehaving when
some of the buffers within the search results were open before search
was ran.
- Fixed project search results being scrolled to the last file active
prior to running the search.

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
Co-authored-by: Smit <smit@zed.dev>
2025-12-04 12:21:02 +01:00
Shardul Vaidya
0f0017dc8e bedrock: Support global endpoints and new regional endpoints (#44103)
Closes #43598

Release Notes:

- bedrock: Added opt-in `allow_global` which enables global endpoints
- bedrock: Updated cross-region-inference endpoint and model list
- bedrock: Fixed Opus 4.5 access on Bedrock, now only accessible through the `allow_global` setting
2025-12-04 12:14:31 +01:00
Agus Zubiaga
9db0d66251 linux: Spawn at least two background threads (#44110)
Related to https://github.com/zed-industries/zed/pull/44109,
https://github.com/zed-industries/zed/issues/43884,
https://github.com/zed-industries/zed/issues/43809.

In the Linux dispatcher, we create one background thread per CPU, but
when a single core is available, having a single background thread
significantly hinders the perceived performance of Zed. This is
particularly helpful when SSH remoting to low-resource servers.

We may want to bump this to more than two threads actually, but I wanted
to be conservative, and this seems to make a big difference already.
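
A sketch of the change (illustrative, not the actual dispatcher code):

```rust
use std::thread;

// Always spawn at least two background threads, even on 1-vCPU hosts, so a
// task that blocks waiting on a future can still be driven by another thread.
fn background_thread_count() -> usize {
    thread::available_parallelism()
        .map(|n| n.get())
        .unwrap_or(1)
        .max(2)
}
```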

Release Notes:

- N/A
2025-12-04 10:40:51 +00:00
Aero
b07389d9f3 macos: Add missing file access entitlements (#43609)
Adds `com.apple.security.files.user-selected.read-write` and
`com.apple.security.files.downloads.read-write` to zed.entitlements.

This resolves an issue where the integrated terminal could not access
external drives or user-selected files on macOS, even when "Full Disk
Access" was granted. These entitlements are required for the application
to properly inherit file access permissions.

Release Notes:

- Resolves an issue where the integrated terminal could not access
external drives or user-selected files on macOS.
2025-12-04 12:38:10 +02:00
Kirill Bulatov
db2e26f67b Re-colorize the brackets when the theme changes (#44130)
Closes https://github.com/zed-industries/zed/issues/44127

Release Notes:

- Fixed brackets not re-colorizing on theme change
2025-12-04 10:21:37 +00:00
John Tur
391c92b07a Reduce priority of Windows thread pool work items (#44121)
`WorkItemPriority::High` will enqueue the work items to threads with
higher-than-normal priority. If the work items are very intensive, this
can cause the system to become unresponsive. It's not clear what this
gets us, so let's avoid the responsiveness issue by deleting this.

Release Notes:

- N/A
2025-12-04 07:45:36 +00:00
Conrad Irwin
1e4d80a21f Update fancy-regex (#44120)
Fancy regex has a max backtracking limit which defaults to 1,000,000
backtracks. This avoids spinning the CPU forever in the case that a
match is taking a long time (though does mean that some matches may be
missed).

Unfortunately the version we depended on causes an infinite loop when
the backtracking limit is hit
(https://github.com/fancy-regex/fancy-regex/issues/137), so we got the
worst of both worlds: matches were missed *and* we spun the CPU forever.

Updating fixes this.
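
A hedged illustration of the behavior described above, using the fancy-regex crate directly (not Zed's search code):

```rust
fn main() {
    // A classic catastrophic-backtracking pattern inside a look-ahead.
    let pattern = fancy_regex::Regex::new(r"(?=(a+)+$)").expect("valid pattern");
    let haystack = "a".repeat(40) + "b";
    match pattern.is_match(&haystack) {
        Ok(matched) => println!("matched: {matched}"),
        // With the fix, hitting the backtrack limit surfaces as an error
        // instead of spinning the CPU forever.
        Err(err) => println!("aborted: {err}"),
    }
}
```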

Excitingly regex may gain support for lookarounds
(https://github.com/rust-lang/regex/pull/1315), which will make
fancy-regex much less load bearing.

Closes #43821

Release Notes:

- Fix a bug where search regexes with look-around or backreferences
could hang
  the CPU. They will now abort after a certain number of match attempts.
2025-12-04 07:29:31 +00:00
Joseph T. Lyons
f90d9d26a5 Prefer to disable options over hiding (git panel entry context menu) (#44102)
When adding the File History option here, I used the pattern of hiding
the option, since that's what another option was already doing, but I
see other menus in the git panel (`...`) that use disabling over hiding,
which I think is a nicer experience (it lets you discover actions, the
full range of actions is always visible, and you don't have to worry
about how multiple hidden items might interact in various
configurations, etc.).

<img width="336" height="241" alt="SCR-20251203-pnpy"
src="https://github.com/user-attachments/assets/0da90b9a-c230-4ce3-87b9-553ffb83604f"
/>

<img width="332" height="265" alt="SCR-20251203-pobg"
src="https://github.com/user-attachments/assets/5da95c7d-faa9-4f0f-a069-f1d099f952b9"
/>


In general, I think it would be good to be more consistent about
disabling over hiding - there are other places in the app that hide
options; some might be valid, but others might just be choices made on a
whim.

Release Notes:

- N/A
2025-12-03 22:56:51 +00:00
Andrew Farkas
40a611bf34 tab_switcher: Subscribe to workspace events instead of pane events (#44101)
Closes #43171

Previously the tab switcher only subscribed to events from a single pane
so closing tabs in other panes wouldn't cause the tab switcher to
update. This PR changes that so the tab switcher subscribes to the whole
workspace and thus updates when tabs in other panes are closed.

It also modifies the work in #44006 to sync selected index across the
whole workspace instead of just the original pane in the case of the
all-panes tab switcher.

Release Notes:

- Fixed all-panes tab switcher not updating in response to changes in
other panes
2025-12-03 22:49:44 +00:00
Smit Barmase
8ad3a150c8 editor: Add active match highlight for buffer and project search (#44098)
Closes #28617

<img width="400" alt="image"
src="https://github.com/user-attachments/assets/b1c2880c-5744-4bed-a687-5c5e7aa7fef5"
/>

Release Notes:

- Improved visibility of the currently active match when browsing
results in buffer or project search.

---------

Co-authored-by: DarkMatter-999 <darkmatter999official@gmail.com>
2025-12-04 03:55:04 +05:30
Andrew Farkas
87976e91cf Add more preview tab settings and fix janky behavior (#43921)
Closes #41495

Known issues:
- File path links always open as non-preview tabs. Fixing this is not
technically too difficult but requires more invasive changes and so
should be done in a future PR.

Release Notes:

- Fixed strange behavior when reopening closed preview tabs
- Overhauled preview tabs settings:
- Added setting `preview_tabs.enable_preview_from_project_panel`
(default `true`)
- Kept setting `preview_tabs.enable_preview_from_file_finder` (default
`false`)
- Added setting `preview_tabs.enable_preview_from_multibuffer` (default
`true`)
- Added setting
`preview_tabs.enable_preview_multibuffer_from_code_navigation` (default
`false`)
- Added setting `preview_tabs.enable_preview_file_from_code_navigation`
(default `true`)
- Renamed setting `preview_tabs.enable_preview_from_code_navigation` to
`preview_tabs.enable_keep_preview_on_code_navigation` (default `false`)

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
Co-authored-by: Cole Miller <cole@zed.dev>
2025-12-03 21:56:39 +00:00
Michael Benfield
290a1550aa ai: Add an eval for the inline assistant (#43291)
Release Notes:

- N/A

---------

Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
2025-12-03 20:32:25 +00:00
feeiyu
92dcfdef76 Fix circular reference issue around PopoverMenu again (#44084)
Follow up to https://github.com/zed-industries/zed/pull/42351

Release Notes:

- N/A
2025-12-03 21:34:01 +02:00
Cole Miller
4ef8433396 Run git2::Repository::find_remote in the background (#44092)
We were seeing this hog the main thread.

Release Notes:

- N/A

---------

Co-authored-by: cameron <cameron.studdstreet@gmail.com>
2025-12-03 19:33:40 +00:00
John Tur
a51e975b81 Improve support for multiple registrations of textDocument/diagnostic (#43703)
Closes https://github.com/zed-industries/zed/issues/41935

The registration ID responsible for generating each diagnostic is now
tracked. This allows us to replace only the diagnostics from the same
registration ID when a pull diagnostics report is applied.

Additionally, various deficiencies in our support for pull diagnostics
have been fixed:
- Document pulls are issued for all open buffers, not just the edited
one. A shorter debounce is used for the edited buffer. Workspace
diagnostics are also now ignored for open buffers.
- Tracking of `lastResultId` is improved.
- Stored pull diagnostics are discarded when the corresponding buffer is
closed.

Release Notes:

- Improved compatibility with language servers that use the "pull
diagnostics" feature of Language Server Protocol.

---------

Co-authored-by: Kirill Bulatov <mail4score@gmail.com>
Co-authored-by: Kirill Bulatov <kirill@zed.dev>
2025-12-03 20:47:43 +02:00
Conrad Irwin
493cfadb42 Revert "http_client: Add integrity checks for GitHub binaries using digest checks (#43737)" (#44086)
This reverts commit 05764e8af7.

Internally we've seen a much higher incidence of macOS code-signing
failing on the downloaded rust-analyzer than we did before this change.

It's unclear why this would be a problem, but we want to try reverting
to see if that fixes it.

Release Notes:

- Reverted a change that seemed to cause problems with code-signing on
rust-analyzer
2025-12-03 11:40:47 -07:00
Mayank Verma
0818cedded editor: Fix blame hover not working when inline git blame is disabled (#42992)
Closes #42936

Release Notes:

- Fixed editor blame hover not working when inline git blame is disabled

Here's the before/after:


https://github.com/user-attachments/assets/a3875011-4a27-45b3-b638-3e146c06f1fe
2025-12-03 12:25:31 -05:00
Dino
6b46a71dd0 tab_switcher: Fix bug where selected index after closing tab did not match pane's active item (#44006)
Whenever an item is removed using the Tab Switcher, the list of matches
is automatically updated, which can lead to the order of the elements
being updated and changing in comparison to what the user was previously
seeing. Unfortunately this can lead to a situation where the selected
index, since it wasn't being updated, would end up in a different item
than the one that was actually active in the pane.

This Pull Request updates the handling of the `PaneEvent::RemovedItem`
event so that the `TabSwitcherDelegate.selected_index` field is
automatically updated to match the pane's new active item.

Seeing as this is being updated, the
`test_close_preserves_selected_position` test is also removed, as it no
longer makes sense with the current implementation. I believe a better
user experience would be to not update the order of the matches at all,
simply removing the ones that no longer exist and keeping the selected
index position, but I will tackle that in a different Pull Request.

Closes #44005 

Release Notes:

- Fixed a bug with the tab switcher where, after closing a tab, the
selected entry would not match the pane's active item
2025-12-03 17:12:04 +00:00
Ramon
575ea49aad Fix yank around paragraph missing newline (#43583)
Use `MotionKind::LineWise` in both
`vim::normal::change::Vim.change_object` and
`vim::normal::yank::Vim.yank_object` when dealing with objects that
target `Mode::VisualLine`, for example, paragraphs. This fixes an issue
where yanking and changing paragraphs would not include the trailing
newline character.

Closes #28804

Release Notes:

- Fixed linewise text object operations (`yap`, `cap`, etc.) omitting
trailing blank line in vim mode

---------

Co-authored-by: dino <dinojoaocosta@gmail.com>
2025-12-03 17:11:53 +00:00
Xipeng Jin
85ccd7c98b Fix not able to navigate to files in git commit multibuffer (#42558)
Closes #40851

Release Notes:

- Fixed: Commit diff multibuffers now open real project files whenever
possible, restoring navigation and annotations inside those excerpts.

---------

Co-authored-by: Anthony Eid <anthony@zed.dev>
2025-12-03 11:59:56 -05:00
Vitaly Slobodin
b168679c18 language: Remove old unused HTML/ERB language ID (#44081)
The `HTML/ERB` language was renamed to `HTML+ERB` in
https://github.com/zed-industries/zed/pull/40000. We can now safely
remove the old name.

Release Notes:

- N/A
2025-12-03 17:34:49 +01:00
Jeff Brennan
621ac16e35 go: Fix language injections (#43775)
Closes #43730


## Summary
This modifies the existing injections.scm file for Go by adding more
specific prefix queries and *_content nodes to the existing
`raw_string_literal` and `interpreted_string_literal` sections.

<details><summary>This PR</summary>

<img width="567" height="784" alt="image"
src="https://github.com/user-attachments/assets/bfe8c64e-1dc2-470c-9f85-2c664a6c5a15"
/>
<img width="383" height="909" alt="image"
src="https://github.com/user-attachments/assets/9349af8c-22d3-4c9b-a435-a73719f17ba3"
/>
<img width="572" height="800" alt="image"
src="https://github.com/user-attachments/assets/939ada3b-1440-443b-8492-0eb61a7ee90f"
/>

</details>

<details><summary>Current Release (0.214.7)</summary>

<img width="569" height="777" alt="image"
src="https://github.com/user-attachments/assets/e6fffe77-e4c6-48e3-9c6d-a140298225c5"
/>
<img width="381" height="896" alt="image"
src="https://github.com/user-attachments/assets/bf107950-c33a-4603-90d3-2304bef0a4af"
/>
<img width="574" height="798" alt="image"
src="https://github.com/user-attachments/assets/2c5e43e5-f101-4722-8f58-6b176ba950ca"
/>

</details>

<details><summary>Code</summary>

```go
func test_sql() {
	// const assignment
	const _ = /* sql */ "SELECT * FROM users"
	const _ = /* sql */ `SELECT id, name FROM products`

	// var assignment
	var _ = /* sql */ `SELECT id, name FROM products`
	var _ = /* sql */ "SELECT id, name FROM products"

	// := assignment
	test := /* sql */ "SELECT * FROM users"
	test2 := /* sql */ `SELECT * FROM users`
	println(test)
	println(test2)

	// = assignment
	_ = /* sql */ "SELECT * FROM users WHERE id = 1"
	_ = /* sql */ `SELECT * FROM users WHERE id = 1`

	// literal elements
	_ = testStruct{Field: /* sql */ "SELECT * FROM users"}
	_ = testStruct{Field: /* sql */ `SELECT * FROM users`}

	testFunc(/* sql */ "SELECT * FROM users")
	testFunc(/* sql */ `SELECT * FROM users`)

	const backtickString = /* sql */ `SELECT * FROM users;`
	const quotedString = /* sql */ "SELECT * FROM users;"

	const backtickStringNoHighlight = `SELECT * FROM users;`
	const quotedStringNoHighlight = "SELECT * FROM users;"
}

func test_yaml() {
	// const assignment
	const _ = /* yaml */ `
settings:
  enabled: true
  port: 8080
`

	// := assignment
	test := /* yaml */ `
settings:
  enabled: true
  port: 8080
`

	println(test)

	// = assignment
	_ = /* yaml */ `
settings:
  enabled: true
  port: 8080
`

	// literal elements in a struct
	_ = testStruct{Field: /* yaml */ `
settings:
  test: 1234
  port: 8080
`}

	// function argument
	testFunc(/* yaml */ `
settings:
  enabled: true
  port: 8080
`)
}

func test_css() {
	// const assignment
	const _ = /* css */ "body { margin: 0; }"
	const _ = /* css */ `body { margin: 0; }`

	const cssCodes = /* css */ `
h1 {
  color: #333;
}
`

	// := assignment
	test := /* css */ "body { margin: 0; }"
	println(test)

	// = assignment
	_ = /* css */ "body { margin: 0; }"
	_ = /* css */ `body { margin: 0; }`

	// literal elements
	_ = testStruct{Field: /* css */ "body { margin: 0; }"}
	_ = testStruct{Field: /* css */ `body { margin: 0; }`}

	testFunc(/* css */ "body { margin: 0; }")
	testFunc(/* css */ `body { margin: 0; }`)

	const backtickString = /* css */ `body { margin: 0; }`
	const quotedString = /* css */ "body { margin: 0; }"

	const backtickStringNoHighlight = `body { margin: 0; }`
	const quotedStringNoHighlight = "body { margin: 0; }"
}
```

</details>

Release Notes:

- Greatly improved the quality of comment-directed language injections
in Go
2025-12-03 10:32:51 -06:00
Arthur Schurhaus
c248a956e0 markdown: Fix rendering of inline HTML <code> tags (#43513)
Added support for rendering HTML `<code>` tags inside Markdown content.
Previously, these tags were ignored by the renderer and displayed as raw
text (inside LSP hover documentation).

Closes: #43166 

Release Notes:
- Fixed styling of `<code>` HTML tags in Markdown popovers.

Before:
<img width="445" height="145" alt="image"
src="https://github.com/user-attachments/assets/67c4f864-1fa7-46a9-bb25-8b07a335355d"
/>
After:
<img width="699" height="257" alt="image"
src="https://github.com/user-attachments/assets/8d784a75-28be-43cd-80b4-3aad8babb65b"
/>
2025-12-03 16:58:51 +01:00
Remco Smits
7e177c496c markdown_preview: Fix markdown tables taking up the full width of the parent element (#43555)
Closes #39152

This PR fixes an issue where we would render Markdown tables at full
width based on their container size. We now render tables based on their
content's minimum size, and you can still make a table render as before
by making its columns `w_full`.

I had to change the `div()` to `v_flex().items_start()` because the old
layout introduced a weird display behavior of the outer table border:
the grid container was not shrinking and always took up the full width
of its container.

**Before**
<img width="1273" height="800" alt="Screenshot 2025-11-26 at 14 37 19"
src="https://github.com/user-attachments/assets/2e152021-8679-48c2-b7bd-1c02768c0253"
/>

**After**
<img width="1273" height="797" alt="Screenshot 2025-11-26 at 14 56 12"
src="https://github.com/user-attachments/assets/4459d20e-8c3b-487b-a215-c95ee5c1fc8e"
/>

**Code example**

```markdown
|   Name   |   Age   |   Occupation   |
|:--------:|:-------:|:--------------:|
| Alice    |  28     | Engineer       |
| Bob      |  34     | Designer       |
| Carol    |  25     | Developer      |


| Syntax      | Description |
| ----------- | ----------- |
| Header      | Title       |
| Paragraph   | Text        |


| City           | Population (approx.) | Known For                          |
|----------------|----------------------|------------------------------------|
| New York       | 8,500,000            | Statue of Liberty, Wall Street     |
| Los Angeles    | 4,000,000            | Hollywood, film industry           |
| Chicago        | 2,700,000            | Architecture, deep-dish pizza      |
| Houston        | 2,300,000            | NASA, energy industry              |
| Miami          | 470,000              | Beaches, Latin culture             |
| San Francisco  | 800,000              | Golden Gate Bridge, Silicon Valley |
| Las Vegas      | 650,000              | Casinos, nightlife                 |


<table>
    <caption>Table Caption</caption>
  <thead>
    <tr>
    <th>ID asjkfjaslkf jalksjflksajflka jlksdla k</th>
    <th>Name</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>1</td>
      <td>Chris</td>
    </tr>
    <tr>
      <td>2</td>
      <td>Dennis</td>
    </tr>
    <tr>
      <td>3</td>
      <td>Sarah</td>
    </tr>
    <tr>
      <td>4</td>
      <td>Karen</td>
    </tr>
  </tbody>
</table>
```

cc @bennetbo

Release Notes:

- Markdown Preview: Markdown tables now scale based on their content
size
2025-12-03 16:49:40 +01:00
Joseph T. Lyons
e39dd2af67 Bump Zed to v0.217 (#44080)
Release Notes:

- N/A
2025-12-03 15:45:45 +00:00
626 changed files with 26298 additions and 23258 deletions

3
.gitignore vendored

@@ -39,3 +39,6 @@ xcuserdata/
# Don't commit any secrets to the repo.
.env
.env.secret.toml
# `nix build` output
/result

1538
Cargo.lock generated

File diff suppressed because it is too large


@@ -54,11 +54,12 @@ members = [
"crates/diagnostics",
"crates/docs_preprocessor",
"crates/edit_prediction",
"crates/edit_prediction_button",
"crates/edit_prediction_types",
"crates/edit_prediction_ui",
"crates/edit_prediction_context",
"crates/zeta2_tools",
"crates/editor",
"crates/eval",
"crates/eval_utils",
"crates/explorer_command_injector",
"crates/extension",
"crates/extension_api",
@@ -200,10 +201,11 @@ members = [
"crates/zed",
"crates/zed_actions",
"crates/zed_env_vars",
"crates/zeta",
"crates/zeta_cli",
"crates/edit_prediction_cli",
"crates/zlog",
"crates/zlog_settings",
"crates/ztracing",
"crates/ztracing_macro",
#
# Extensions
@@ -242,7 +244,6 @@ activity_indicator = { path = "crates/activity_indicator" }
agent_ui = { path = "crates/agent_ui" }
agent_settings = { path = "crates/agent_settings" }
agent_servers = { path = "crates/agent_servers" }
ai = { path = "crates/ai" }
ai_onboarding = { path = "crates/ai_onboarding" }
anthropic = { path = "crates/anthropic" }
askpass = { path = "crates/askpass" }
@@ -252,7 +253,6 @@ assistant_slash_command = { path = "crates/assistant_slash_command" }
assistant_slash_commands = { path = "crates/assistant_slash_commands" }
audio = { path = "crates/audio" }
auto_update = { path = "crates/auto_update" }
auto_update_helper = { path = "crates/auto_update_helper" }
auto_update_ui = { path = "crates/auto_update_ui" }
aws_http_client = { path = "crates/aws_http_client" }
bedrock = { path = "crates/bedrock" }
@@ -267,7 +267,6 @@ cloud_api_client = { path = "crates/cloud_api_client" }
cloud_api_types = { path = "crates/cloud_api_types" }
cloud_llm_client = { path = "crates/cloud_llm_client" }
cloud_zeta2_prompt = { path = "crates/cloud_zeta2_prompt" }
collab = { path = "crates/collab" }
collab_ui = { path = "crates/collab_ui" }
collections = { path = "crates/collections", version = "0.1.0" }
command_palette = { path = "crates/command_palette" }
@@ -288,6 +287,7 @@ deepseek = { path = "crates/deepseek" }
derive_refineable = { path = "crates/refineable/derive_refineable" }
diagnostics = { path = "crates/diagnostics" }
editor = { path = "crates/editor" }
eval_utils = { path = "crates/eval_utils" }
extension = { path = "crates/extension" }
extension_host = { path = "crates/extension_host" }
extensions_ui = { path = "crates/extensions_ui" }
@@ -311,10 +311,9 @@ http_client = { path = "crates/http_client" }
http_client_tls = { path = "crates/http_client_tls" }
icons = { path = "crates/icons" }
image_viewer = { path = "crates/image_viewer" }
edit_prediction = { path = "crates/edit_prediction" }
edit_prediction_button = { path = "crates/edit_prediction_button" }
edit_prediction_types = { path = "crates/edit_prediction_types" }
edit_prediction_ui = { path = "crates/edit_prediction_ui" }
edit_prediction_context = { path = "crates/edit_prediction_context" }
zeta2_tools = { path = "crates/zeta2_tools" }
inspector_ui = { path = "crates/inspector_ui" }
install_cli = { path = "crates/install_cli" }
journal = { path = "crates/journal" }
@@ -356,8 +355,6 @@ panel = { path = "crates/panel" }
paths = { path = "crates/paths" }
perf = { path = "tooling/perf" }
picker = { path = "crates/picker" }
plugin = { path = "crates/plugin" }
plugin_macros = { path = "crates/plugin_macros" }
prettier = { path = "crates/prettier" }
settings_profile_selector = { path = "crates/settings_profile_selector" }
project = { path = "crates/project" }
@@ -368,12 +365,10 @@ proto = { path = "crates/proto" }
recent_projects = { path = "crates/recent_projects" }
refineable = { path = "crates/refineable" }
release_channel = { path = "crates/release_channel" }
scheduler = { path = "crates/scheduler" }
remote = { path = "crates/remote" }
remote_server = { path = "crates/remote_server" }
repl = { path = "crates/repl" }
reqwest_client = { path = "crates/reqwest_client" }
rich_text = { path = "crates/rich_text" }
rodio = { git = "https://github.com/RustAudio/rodio", rev ="e2074c6c2acf07b57cf717e076bdda7a9ac6e70b", features = ["wav", "playback", "wav_output", "recording"] }
rope = { path = "crates/rope" }
rpc = { path = "crates/rpc" }
@@ -390,7 +385,6 @@ snippets_ui = { path = "crates/snippets_ui" }
sqlez = { path = "crates/sqlez" }
sqlez_macros = { path = "crates/sqlez_macros" }
story = { path = "crates/story" }
storybook = { path = "crates/storybook" }
streaming_diff = { path = "crates/streaming_diff" }
sum_tree = { path = "crates/sum_tree" }
supermaven = { path = "crates/supermaven" }
@@ -407,7 +401,6 @@ terminal_view = { path = "crates/terminal_view" }
text = { path = "crates/text" }
theme = { path = "crates/theme" }
theme_extension = { path = "crates/theme_extension" }
theme_importer = { path = "crates/theme_importer" }
theme_selector = { path = "crates/theme_selector" }
time_format = { path = "crates/time_format" }
title_bar = { path = "crates/title_bar" }
@@ -431,15 +424,17 @@ x_ai = { path = "crates/x_ai" }
zed = { path = "crates/zed" }
zed_actions = { path = "crates/zed_actions" }
zed_env_vars = { path = "crates/zed_env_vars" }
zeta = { path = "crates/zeta" }
edit_prediction = { path = "crates/edit_prediction" }
zlog = { path = "crates/zlog" }
zlog_settings = { path = "crates/zlog_settings" }
ztracing = { path = "crates/ztracing" }
ztracing_macro = { path = "crates/ztracing_macro" }
#
# External crates
#
agent-client-protocol = { version = "=0.8.0", features = ["unstable"] }
agent-client-protocol = { version = "=0.9.0", features = ["unstable"] }
aho-corasick = "1.1"
alacritty_terminal = "0.25.1-rc1"
any_vec = "0.14"
@@ -503,16 +498,14 @@ ec4rs = "1.1"
emojis = "0.6.1"
env_logger = "0.11"
exec = "0.3.1"
fancy-regex = "0.14.0"
fancy-regex = "0.16.0"
fork = "0.4.0"
futures = "0.3"
futures-batch = "0.6.1"
futures-lite = "1.13"
gh-workflow = { git = "https://github.com/zed-industries/gh-workflow", rev = "09acfdf2bd5c1d6254abefd609c808ff73547b2c" }
git2 = { version = "0.20.1", default-features = false }
globset = "0.4"
handlebars = "4.3"
hashbrown = "0.15.3"
heck = "0.5"
heed = { version = "0.21.0", features = ["read-txn-no-tls"] }
hex = "0.4.3"
@@ -529,7 +522,7 @@ indoc = "2"
inventory = "0.3.19"
itertools = "0.14.0"
json_dotpath = "1.1"
jsonschema = "0.30.0"
jsonschema = "0.37.0"
jsonwebtoken = "9.3"
jupyter-protocol = "0.10.0"
jupyter-websocket-client = "0.15.0"
@@ -548,7 +541,6 @@ nanoid = "0.4"
nbformat = "0.15.0"
nix = "0.29"
num-format = "0.4.4"
num-traits = "0.2"
objc = "0.2"
objc2-foundation = { version = "=0.3.1", default-features = false, features = [
"NSArray",
@@ -587,7 +579,6 @@ pet = { git = "https://github.com/microsoft/python-environment-tools.git", rev =
pet-conda = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-core = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-fs = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-pixi = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-poetry = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-reporter = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-virtualenv = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
@@ -627,7 +618,6 @@ scap = { git = "https://github.com/zed-industries/scap", rev = "4afea48c3b002197
schemars = { version = "1.0", features = ["indexmap2"] }
semver = { version = "1.0", features = ["serde"] }
serde = { version = "1.0.221", features = ["derive", "rc"] }
serde_derive = "1.0.221"
serde_json = { version = "1.0.144", features = ["preserve_order", "raw_value"] }
serde_json_lenient = { version = "0.2", features = [
"preserve_order",
@@ -639,7 +629,6 @@ serde_urlencoded = "0.7"
sha2 = "0.10"
shellexpand = "2.1.0"
shlex = "1.3.0"
similar = "2.6"
simplelog = "0.12.2"
slotmap = "1.0.6"
smallvec = { version = "1.6", features = ["union"] }
@@ -656,7 +645,7 @@ sysinfo = "0.37.0"
take-until = "0.2.0"
tempfile = "3.20.0"
thiserror = "2.0.12"
tiktoken-rs = { git = "https://github.com/zed-industries/tiktoken-rs", rev = "7249f999c5fdf9bf3cc5c288c964454e4dac0c00" }
tiktoken-rs = { git = "https://github.com/zed-industries/tiktoken-rs", rev = "2570c4387a8505fb8f1d3f3557454b474f1e8271" }
time = { version = "0.3", features = [
"macros",
"parsing",
@@ -694,6 +683,7 @@ tree-sitter-ruby = "0.23"
tree-sitter-rust = "0.24"
tree-sitter-typescript = { git = "https://github.com/zed-industries/tree-sitter-typescript", rev = "e2c53597d6a5d9cf7bbe8dccde576fe1e46c5899" } # https://github.com/tree-sitter/tree-sitter-typescript/pull/347
tree-sitter-yaml = { git = "https://github.com/zed-industries/tree-sitter-yaml", rev = "baff0b51c64ef6a1fb1f8390f3ad6015b83ec13a" }
tracing = "0.1.40"
unicase = "2.6"
unicode-script = "0.5.7"
unicode-segmentation = "1.10"
@@ -717,7 +707,6 @@ wasmtime-wasi = "29"
wax = "0.6"
which = "6.0.0"
windows-core = "0.61"
wit-component = "0.221"
yawc = "0.2.5"
zeroize = "1.8"
zstd = "0.11"
@@ -799,20 +788,13 @@ settings_macros = { opt-level = 3 }
sqlez_macros = { opt-level = 3, codegen-units = 1 }
ui_macros = { opt-level = 3 }
util_macros = { opt-level = 3 }
serde_derive = { opt-level = 3 }
quote = { opt-level = 3 }
syn = { opt-level = 3 }
proc-macro2 = { opt-level = 3 }
# proc-macros end
taffy = { opt-level = 3 }
cranelift-codegen = { opt-level = 3 }
cranelift-codegen-meta = { opt-level = 3 }
cranelift-codegen-shared = { opt-level = 3 }
resvg = { opt-level = 3 }
rustybuzz = { opt-level = 3 }
ttf-parser = { opt-level = 3 }
wasmtime-cranelift = { opt-level = 3 }
wasmtime = { opt-level = 3 }
# Build single-source-file crates with cg=1 as it helps make `cargo build` of a whole workspace a bit faster
activity_indicator = { codegen-units = 1 }
@@ -821,12 +803,11 @@ breadcrumbs = { codegen-units = 1 }
collections = { codegen-units = 1 }
command_palette = { codegen-units = 1 }
command_palette_hooks = { codegen-units = 1 }
extension_cli = { codegen-units = 1 }
feature_flags = { codegen-units = 1 }
file_icons = { codegen-units = 1 }
fsevent = { codegen-units = 1 }
image_viewer = { codegen-units = 1 }
edit_prediction_button = { codegen-units = 1 }
edit_prediction_ui = { codegen-units = 1 }
install_cli = { codegen-units = 1 }
journal = { codegen-units = 1 }
json_schema_store = { codegen-units = 1 }
@@ -841,7 +822,6 @@ project_symbols = { codegen-units = 1 }
refineable = { codegen-units = 1 }
release_channel = { codegen-units = 1 }
reqwest_client = { codegen-units = 1 }
rich_text = { codegen-units = 1 }
session = { codegen-units = 1 }
snippet = { codegen-units = 1 }
snippets_ui = { codegen-units = 1 }


@@ -34,8 +34,4 @@ RUN apt-get update; \
linux-perf binutils
WORKDIR app
COPY --from=builder /app/collab /app/collab
COPY --from=builder /app/crates/collab/migrations /app/migrations
COPY --from=builder /app/crates/collab/migrations_llm /app/migrations_llm
ENV MIGRATIONS_PATH=/app/migrations
ENV LLM_DATABASE_MIGRATIONS_PATH=/app/migrations_llm
ENTRYPOINT ["/app/collab"]

5
assets/icons/box.svg Normal file

@@ -0,0 +1,5 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M13.3996 5.59852C13.3994 5.3881 13.3439 5.18144 13.2386 4.99926C13.1333 4.81709 12.9819 4.66581 12.7997 4.56059L8.59996 2.16076C8.41755 2.05544 8.21063 2 8 2C7.78937 2 7.58246 2.05544 7.40004 2.16076L3.20033 4.56059C3.0181 4.66581 2.86674 4.81709 2.76144 4.99926C2.65613 5.18144 2.60059 5.3881 2.60037 5.59852V10.3982C2.60059 10.6086 2.65613 10.8153 2.76144 10.9975C2.86674 11.1796 3.0181 11.3309 3.20033 11.4361L7.40004 13.836C7.58246 13.9413 7.78937 13.9967 8 13.9967C8.21063 13.9967 8.41755 13.9413 8.59996 13.836L12.7997 11.4361C12.9819 11.3309 13.1333 11.1796 13.2386 10.9975C13.3439 10.8153 13.3994 10.6086 13.3996 10.3982V5.59852Z" stroke="white" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M2.78033 4.99857L7.99998 7.99836L13.2196 4.99857" stroke="white" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M8 13.9979V7.99829" stroke="white" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>



@@ -0,0 +1,8 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M4 2V10" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M12 6C12.5304 6 13.0391 5.78929 13.4142 5.41421C13.7893 5.03914 14 4.53043 14 4C14 3.46957 13.7893 2.96086 13.4142 2.58579C13.0391 2.21071 12.5304 2 12 2C11.4696 2 10.9609 2.21071 10.5858 2.58579C10.2107 2.96086 10 3.46957 10 4C10 4.53043 10.2107 5.03914 10.5858 5.41421C10.9609 5.78929 11.4696 6 12 6Z" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M4 14C4.53043 14 5.03914 13.7893 5.41421 13.4142C5.78929 13.0391 6 12.5304 6 12C6 11.4696 5.78929 10.9609 5.41421 10.5858C5.03914 10.2107 4.53043 10 4 10C3.46957 10 2.96086 10.2107 2.58579 10.5858C2.21071 10.9609 2 11.4696 2 12C2 12.5304 2.21071 13.0391 2.58579 13.4142C2.96086 13.7893 3.46957 14 4 14Z" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M10 4C8.4087 4 6.88258 4.63214 5.75736 5.75736C4.63214 6.88258 4 8.4087 4 10" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M12 10V14" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M14 12H10" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>


View File

@@ -0,0 +1,11 @@
<svg width="28" height="28" viewBox="0 0 28 28" fill="none" id="svg1378540956_510">
<g clip-path="url(#svg1378540956_510_clip0_1_1506)" transform="translate(4, 4) scale(0.857)">
<path d="M17.0547 0.372066H8.52652L-0.00165176 8.90024V17.4284H8.52652V8.90024H17.0547V0.372066Z" fill="#1A1C20"></path>
<path d="M10.1992 27.6279H18.7274L27.2556 19.0998V10.5716H18.7274V19.0998H10.1992V27.6279Z" fill="#1A1C20"></path>
</g>
<defs>
<clipPath id="svg1378540956_510_clip0_1_1506">
<rect width="27.2559" height="27.2559" fill="white" transform="translate(0 0.37207)"></rect>
</clipPath>
</defs>
</svg>


View File

@@ -41,7 +41,7 @@
"ctrl-f11": "debugger::StepInto",
"shift-f11": "debugger::StepOut",
"f11": "zed::ToggleFullScreen",
"ctrl-alt-z": "edit_prediction::RateCompletions",
"ctrl-alt-z": "edit_prediction::RatePredictions",
"ctrl-alt-shift-i": "edit_prediction::ToggleMenu",
"ctrl-alt-l": "lsp_tool::ToggleMenu"
}
@@ -616,8 +616,8 @@
"ctrl-alt-super-p": "settings_profile_selector::Toggle",
"ctrl-t": "project_symbols::Toggle",
"ctrl-p": "file_finder::Toggle",
"ctrl-tab": "tab_switcher::Toggle",
"ctrl-shift-tab": ["tab_switcher::Toggle", { "select_last": true }],
"ctrl-tab": "tab_switcher::Toggle",
"ctrl-e": "file_finder::Toggle",
"f1": "command_palette::Toggle",
"ctrl-shift-p": "command_palette::Toggle",
@@ -811,7 +811,10 @@
"context": "PromptEditor",
"bindings": {
"ctrl-[": "agent::CyclePreviousInlineAssist",
"ctrl-]": "agent::CycleNextInlineAssist"
"ctrl-]": "agent::CycleNextInlineAssist",
"ctrl-shift-enter": "inline_assistant::ThumbsUpResult",
"ctrl-shift-backspace": "inline_assistant::ThumbsDownResult"
}
},
{
@@ -1322,25 +1325,18 @@
}
},
{
"context": "Zeta2Feedback > Editor",
"context": "EditPredictionContext > Editor",
"bindings": {
"enter": "editor::Newline",
"ctrl-enter up": "dev::Zeta2RatePredictionPositive",
"ctrl-enter down": "dev::Zeta2RatePredictionNegative"
}
},
{
"context": "Zeta2Context > Editor",
"bindings": {
"alt-left": "dev::Zeta2ContextGoBack",
"alt-right": "dev::Zeta2ContextGoForward"
"alt-left": "dev::EditPredictionContextGoBack",
"alt-right": "dev::EditPredictionContextGoForward"
}
},
{
"context": "GitBranchSelector || (GitBranchSelector > Picker > Editor)",
"use_key_equivalents": true,
"bindings": {
"ctrl-shift-backspace": "branch_picker::DeleteBranch"
"ctrl-shift-backspace": "branch_picker::DeleteBranch",
"ctrl-shift-i": "branch_picker::FilterRemotes"
}
}
]

View File

@@ -47,7 +47,7 @@
"cmd-m": "zed::Minimize",
"fn-f": "zed::ToggleFullScreen",
"ctrl-cmd-f": "zed::ToggleFullScreen",
"ctrl-cmd-z": "edit_prediction::RateCompletions",
"ctrl-cmd-z": "edit_prediction::RatePredictions",
"ctrl-cmd-i": "edit_prediction::ToggleMenu",
"ctrl-cmd-l": "lsp_tool::ToggleMenu",
"ctrl-cmd-c": "editor::DisplayCursorNames"
@@ -684,8 +684,8 @@
"ctrl-alt-cmd-p": "settings_profile_selector::Toggle",
"cmd-t": "project_symbols::Toggle",
"cmd-p": "file_finder::Toggle",
"ctrl-tab": "tab_switcher::Toggle",
"ctrl-shift-tab": ["tab_switcher::Toggle", { "select_last": true }],
"ctrl-tab": "tab_switcher::Toggle",
"cmd-shift-p": "command_palette::Toggle",
"cmd-shift-m": "diagnostics::Deploy",
"cmd-shift-e": "project_panel::ToggleFocus",
@@ -878,7 +878,9 @@
"bindings": {
"cmd-alt-/": "agent::ToggleModelSelector",
"ctrl-[": "agent::CyclePreviousInlineAssist",
"ctrl-]": "agent::CycleNextInlineAssist"
"ctrl-]": "agent::CycleNextInlineAssist",
"cmd-shift-enter": "inline_assistant::ThumbsUpResult",
"cmd-shift-backspace": "inline_assistant::ThumbsDownResult"
}
},
{
@@ -1427,25 +1429,18 @@
}
},
{
"context": "Zeta2Feedback > Editor",
"context": "EditPredictionContext > Editor",
"bindings": {
"enter": "editor::Newline",
"cmd-enter up": "dev::Zeta2RatePredictionPositive",
"cmd-enter down": "dev::Zeta2RatePredictionNegative"
}
},
{
"context": "Zeta2Context > Editor",
"bindings": {
"alt-left": "dev::Zeta2ContextGoBack",
"alt-right": "dev::Zeta2ContextGoForward"
"alt-left": "dev::EditPredictionContextGoBack",
"alt-right": "dev::EditPredictionContextGoForward"
}
},
{
"context": "GitBranchSelector || (GitBranchSelector > Picker > Editor)",
"use_key_equivalents": true,
"bindings": {
"cmd-shift-backspace": "branch_picker::DeleteBranch"
"cmd-shift-backspace": "branch_picker::DeleteBranch",
"cmd-shift-i": "branch_picker::FilterRemotes"
}
}
]

View File

@@ -609,8 +609,8 @@
"ctrl-alt-super-p": "settings_profile_selector::Toggle",
"ctrl-t": "project_symbols::Toggle",
"ctrl-p": "file_finder::Toggle",
"ctrl-tab": "tab_switcher::Toggle",
"ctrl-shift-tab": ["tab_switcher::Toggle", { "select_last": true }],
"ctrl-tab": "tab_switcher::Toggle",
"ctrl-e": "file_finder::Toggle",
"f1": "command_palette::Toggle",
"ctrl-shift-p": "command_palette::Toggle",
@@ -816,7 +816,9 @@
"use_key_equivalents": true,
"bindings": {
"ctrl-[": "agent::CyclePreviousInlineAssist",
"ctrl-]": "agent::CycleNextInlineAssist"
"ctrl-]": "agent::CycleNextInlineAssist",
"ctrl-shift-enter": "inline_assistant::ThumbsUpResult",
"ctrl-shift-delete": "inline_assistant::ThumbsDownResult"
}
},
{
@@ -1344,25 +1346,18 @@
}
},
{
"context": "Zeta2Feedback > Editor",
"context": "EditPredictionContext > Editor",
"bindings": {
"enter": "editor::Newline",
"ctrl-enter up": "dev::Zeta2RatePredictionPositive",
"ctrl-enter down": "dev::Zeta2RatePredictionNegative"
}
},
{
"context": "Zeta2Context > Editor",
"bindings": {
"alt-left": "dev::Zeta2ContextGoBack",
"alt-right": "dev::Zeta2ContextGoForward"
"alt-left": "dev::EditPredictionContextGoBack",
"alt-right": "dev::EditPredictionContextGoForward"
}
},
{
"context": "GitBranchSelector || (GitBranchSelector > Picker > Editor)",
"use_key_equivalents": true,
"bindings": {
"ctrl-shift-backspace": "branch_picker::DeleteBranch"
"ctrl-shift-backspace": "branch_picker::DeleteBranch",
"ctrl-shift-i": "branch_picker::FilterRemotes"
}
}
]

View File

@@ -180,7 +180,6 @@
"ctrl-w g shift-d": "editor::GoToTypeDefinitionSplit",
"ctrl-w space": "editor::OpenExcerptsSplit",
"ctrl-w g space": "editor::OpenExcerptsSplit",
"ctrl-6": "pane::AlternateFile",
"ctrl-^": "pane::AlternateFile",
".": "vim::Repeat"
}
@@ -902,7 +901,11 @@
"context": "!Editor && !Terminal",
"bindings": {
":": "command_palette::Toggle",
"g /": "pane::DeploySearch"
"g /": "pane::DeploySearch",
"] b": "pane::ActivateNextItem",
"[ b": "pane::ActivatePreviousItem",
"] shift-b": "pane::ActivateLastItem",
"[ shift-b": ["pane::ActivateItem", 0]
}
},
{

View File

@@ -0,0 +1,44 @@
{{#if language_name}}
Here's a file of {{language_name}} that the user is going to ask you to make an edit to.
{{else}}
Here's a file of text that the user is going to ask you to make an edit to.
{{/if}}
The section you'll need to rewrite is marked with <rewrite_this></rewrite_this> tags.
<document>
{{{document_content}}}
</document>
{{#if is_truncated}}
The context around the relevant section has been truncated (possibly in the middle of a line) for brevity.
{{/if}}
{{#if rewrite_section}}
And here's the section to rewrite based on that prompt again for reference:
<rewrite_this>
{{{rewrite_section}}}
</rewrite_this>
{{#if diagnostic_errors}}
Below are the diagnostic errors visible to the user. If the user requests problems to be fixed, use this information, but do not try to fix these errors if the user hasn't asked you to.
{{#each diagnostic_errors}}
<diagnostic_error>
<line_number>{{line_number}}</line_number>
<error_message>{{error_message}}</error_message>
<code_content>{{code_content}}</code_content>
</diagnostic_error>
{{/each}}
{{/if}}
{{/if}}
Only make changes that are necessary to fulfill the prompt, leave everything else as-is. All surrounding {{content_type}} will be preserved.
Start at the indentation level in the original file in the rewritten {{content_type}}.
You must use one of the provided tools to make the rewrite or to provide an explanation as to why the user's request cannot be fulfilled. It is an error if
you simply send back unstructured text. If you need to make a statement or ask a question you must use one of the tools to do so.
It is an error if you try to make a change that cannot be made simply by editing the rewrite_section.

View File

@@ -870,6 +870,10 @@
//
// Default: false
"collapse_untracked_diff": false,
/// Whether to show entries with tree or flat view in the panel
///
/// Default: false
"tree_view": false,
"scrollbar": {
// When to show the scrollbar in the git panel.
//
@@ -1100,13 +1104,22 @@
"preview_tabs": {
// Whether preview tabs should be enabled.
// Preview tabs allow you to open files in preview mode, where they close automatically
// when you switch to another file unless you explicitly pin them.
// when you open another preview tab.
// This is useful for quickly viewing files without cluttering your workspace.
"enabled": true,
// Whether to open tabs in preview mode when opened from the project panel with a single click.
"enable_preview_from_project_panel": true,
// Whether to open tabs in preview mode when selected from the file finder.
"enable_preview_from_file_finder": false,
// Whether a preview tab gets replaced when code navigation is used to navigate away from the tab.
"enable_preview_from_code_navigation": false
// Whether to open tabs in preview mode when opened from a multibuffer.
"enable_preview_from_multibuffer": true,
// Whether to open tabs in preview mode when code navigation is used to open a multibuffer.
"enable_preview_multibuffer_from_code_navigation": false,
// Whether to open tabs in preview mode when code navigation is used to open a single file.
"enable_preview_file_from_code_navigation": true,
// Whether to keep tabs in preview mode when code navigation is used to navigate away from them.
// If `enable_preview_file_from_code_navigation` or `enable_preview_multibuffer_from_code_navigation` is also true, the new tab may replace the existing one.
"enable_keep_preview_on_code_navigation": false
},
// Settings related to the file finder.
"file_finder": {
@@ -1802,7 +1815,7 @@
}
},
"CSharp": {
"language_servers": ["omnisharp", "!roslyn", "..."]
"language_servers": ["roslyn", "!omnisharp", "..."]
},
"CSS": {
"prettier": {

View File

@@ -45,6 +45,7 @@
"tab.inactive_background": "#1f2127ff",
"tab.active_background": "#0d1016ff",
"search.match_background": "#5ac2fe66",
"search.active_match_background": "#ea570166",
"panel.background": "#1f2127ff",
"panel.focused_border": "#5ac1feff",
"pane.focused_border": null,
@@ -436,6 +437,7 @@
"tab.inactive_background": "#ececedff",
"tab.active_background": "#fcfcfcff",
"search.match_background": "#3b9ee566",
"search.active_match_background": "#f88b3666",
"panel.background": "#ececedff",
"panel.focused_border": "#3b9ee5ff",
"pane.focused_border": null,
@@ -827,6 +829,7 @@
"tab.inactive_background": "#353944ff",
"tab.active_background": "#242835ff",
"search.match_background": "#73cffe66",
"search.active_match_background": "#fd722b66",
"panel.background": "#353944ff",
"panel.focused_border": null,
"pane.focused_border": null,

View File

@@ -46,6 +46,7 @@
"tab.inactive_background": "#3a3735ff",
"tab.active_background": "#282828ff",
"search.match_background": "#83a59866",
"search.active_match_background": "#c09f3f66",
"panel.background": "#3a3735ff",
"panel.focused_border": "#83a598ff",
"pane.focused_border": null,
@@ -452,6 +453,7 @@
"tab.inactive_background": "#393634ff",
"tab.active_background": "#1d2021ff",
"search.match_background": "#83a59866",
"search.active_match_background": "#c9653666",
"panel.background": "#393634ff",
"panel.focused_border": "#83a598ff",
"pane.focused_border": null,
@@ -858,6 +860,7 @@
"tab.inactive_background": "#3b3735ff",
"tab.active_background": "#32302fff",
"search.match_background": "#83a59866",
"search.active_match_background": "#aea85166",
"panel.background": "#3b3735ff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -1264,6 +1267,7 @@
"tab.inactive_background": "#ecddb4ff",
"tab.active_background": "#fbf1c7ff",
"search.match_background": "#0b667866",
"search.active_match_background": "#ba2d1166",
"panel.background": "#ecddb4ff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -1670,6 +1674,7 @@
"tab.inactive_background": "#ecddb5ff",
"tab.active_background": "#f9f5d7ff",
"search.match_background": "#0b667866",
"search.active_match_background": "#dc351466",
"panel.background": "#ecddb5ff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -2076,6 +2081,7 @@
"tab.inactive_background": "#ecdcb3ff",
"tab.active_background": "#f2e5bcff",
"search.match_background": "#0b667866",
"search.active_match_background": "#d7331466",
"panel.background": "#ecdcb3ff",
"panel.focused_border": null,
"pane.focused_border": null,

View File

@@ -45,6 +45,7 @@
"tab.inactive_background": "#2f343eff",
"tab.active_background": "#282c33ff",
"search.match_background": "#74ade866",
"search.active_match_background": "#e8af7466",
"panel.background": "#2f343eff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -448,6 +449,7 @@
"tab.inactive_background": "#ebebecff",
"tab.active_background": "#fafafaff",
"search.match_background": "#5c79e266",
"search.active_match_background": "#d0a92366",
"panel.background": "#ebebecff",
"panel.focused_border": null,
"pane.focused_border": null,

View File

@@ -1372,7 +1372,7 @@ impl AcpThread {
let path_style = self.project.read(cx).path_style(cx);
let id = update.tool_call_id.clone();
let agent = self.connection().telemetry_id();
let agent_telemetry_id = self.connection().telemetry_id();
let session = self.session_id();
if let ToolCallStatus::Completed | ToolCallStatus::Failed = status {
let status = if matches!(status, ToolCallStatus::Completed) {
@@ -1380,7 +1380,12 @@ impl AcpThread {
} else {
"failed"
};
telemetry::event!("Agent Tool Call Completed", agent, session, status);
telemetry::event!(
"Agent Tool Call Completed",
agent_telemetry_id,
session,
status
);
}
if let Some(ix) = self.index_for_tool_call(&id) {
@@ -2929,7 +2934,7 @@ mod tests {
.await
.unwrap_err();
assert_eq!(err.code, acp::ErrorCode::RESOURCE_NOT_FOUND.code);
assert_eq!(err.code, acp::ErrorCode::ResourceNotFound);
}
#[gpui::test]
@@ -3556,8 +3561,8 @@ mod tests {
}
impl AgentConnection for FakeAgentConnection {
fn telemetry_id(&self) -> &'static str {
"fake"
fn telemetry_id(&self) -> SharedString {
"fake".into()
}
fn auth_methods(&self) -> &[acp::AuthMethod] {

View File

@@ -20,7 +20,7 @@ impl UserMessageId {
}
pub trait AgentConnection {
fn telemetry_id(&self) -> &'static str;
fn telemetry_id(&self) -> SharedString;
fn new_thread(
self: Rc<Self>,
@@ -322,8 +322,8 @@ mod test_support {
}
impl AgentConnection for StubAgentConnection {
fn telemetry_id(&self) -> &'static str {
"stub"
fn telemetry_id(&self) -> SharedString {
"stub".into()
}
fn auth_methods(&self) -> &[acp::AuthMethod] {
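
The change from `&'static str` to `SharedString` above means a connection's telemetry id can now be computed at runtime (for example from an agent's self-reported name) rather than baked in at compile time. A minimal sketch of what an implementor can now do; the type and field names here are hypothetical and not part of this diff:

use gpui::SharedString;

struct ExternalAgentConnection {
    // Filled in at runtime, e.g. from the agent's reported info.
    agent_name: SharedString,
}

impl ExternalAgentConnection {
    // Mirrors the new trait signature; cloning a SharedString is cheap
    // because it is reference-counted.
    fn telemetry_id(&self) -> SharedString {
        self.agent_name.clone()
    }
}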

View File

@@ -75,15 +75,9 @@ impl Terminal {
let exit_status = exit_status.map(portable_pty::ExitStatus::from);
let mut status = acp::TerminalExitStatus::new();
if let Some(exit_status) = exit_status.as_ref() {
status = status.exit_code(exit_status.exit_code());
if let Some(signal) = exit_status.signal() {
status = status.signal(signal);
}
}
status
acp::TerminalExitStatus::new()
.exit_code(exit_status.as_ref().map(|e| e.exit_code()))
.signal(exit_status.and_then(|e| e.signal().map(ToOwned::to_owned)))
})
.shared(),
}
@@ -105,19 +99,17 @@ impl Terminal {
pub fn current_output(&self, cx: &App) -> acp::TerminalOutputResponse {
if let Some(output) = self.output.as_ref() {
let mut exit_status = acp::TerminalExitStatus::new();
if let Some(status) = output.exit_status.map(portable_pty::ExitStatus::from) {
exit_status = exit_status.exit_code(status.exit_code());
if let Some(signal) = status.signal() {
exit_status = exit_status.signal(signal);
}
}
let exit_status = output.exit_status.map(portable_pty::ExitStatus::from);
acp::TerminalOutputResponse::new(
output.content.clone(),
output.original_content_len > output.content.len(),
)
.exit_status(exit_status)
.exit_status(
acp::TerminalExitStatus::new()
.exit_code(exit_status.as_ref().map(|e| e.exit_code()))
.signal(exit_status.and_then(|e| e.signal().map(ToOwned::to_owned))),
)
} else {
let (current_content, original_len) = self.truncated_output(cx);
let truncated = current_content.len() < original_len;
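
The hunks above replace `if let` chains of conditional setter calls with builder setters that accept `Option`s directly. A minimal sketch of that pattern with a stand-in type (not the real `acp::TerminalExitStatus` API):

#[derive(Default, Debug)]
struct ExitStatusSketch {
    exit_code: Option<u32>,
    signal: Option<String>,
}

impl ExitStatusSketch {
    fn new() -> Self {
        Self::default()
    }
    // Setters take Option, so callers can pass `map`/`and_then` results
    // straight through instead of branching first.
    fn exit_code(mut self, code: Option<u32>) -> Self {
        self.exit_code = code;
        self
    }
    fn signal(mut self, signal: Option<String>) -> Self {
        self.signal = signal;
        self
    }
}

// Call sites can then thread Options through in one expression, e.g.
// (with a hypothetical `exit: Option<...>`):
// ExitStatusSketch::new()
//     .exit_code(exit.as_ref().map(|e| e.code))
//     .signal(exit.and_then(|e| e.signal))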

View File

@@ -777,7 +777,7 @@ impl ActionLog {
#[derive(Clone)]
pub struct ActionLogTelemetry {
pub agent_telemetry_id: &'static str,
pub agent_telemetry_id: SharedString,
pub session_id: Arc<str>,
}

View File

@@ -83,6 +83,7 @@ ctor.workspace = true
db = { workspace = true, "features" = ["test-support"] }
editor = { workspace = true, "features" = ["test-support"] }
env_logger.workspace = true
eval_utils.workspace = true
fs = { workspace = true, "features" = ["test-support"] }
git = { workspace = true, "features" = ["test-support"] }
gpui = { workspace = true, "features" = ["test-support"] }

View File

@@ -947,8 +947,8 @@ impl acp_thread::AgentModelSelector for NativeAgentModelSelector {
}
impl acp_thread::AgentConnection for NativeAgentConnection {
fn telemetry_id(&self) -> &'static str {
"zed"
fn telemetry_id(&self) -> SharedString {
"zed".into()
}
fn new_thread(

View File

@@ -4,7 +4,7 @@ use crate::{
};
use Role::*;
use client::{Client, UserStore};
use collections::HashMap;
use eval_utils::{EvalOutput, EvalOutputProcessor, OutcomeKind};
use fs::FakeFs;
use futures::{FutureExt, future::LocalBoxFuture};
use gpui::{AppContext, TestAppContext, Timer};
@@ -20,16 +20,62 @@ use rand::prelude::*;
use reqwest_client::ReqwestClient;
use serde_json::json;
use std::{
cmp::Reverse,
fmt::{self, Display},
io::Write as _,
path::Path,
str::FromStr,
sync::mpsc,
time::Duration,
};
use util::path;
#[derive(Default, Clone, Debug)]
struct EditAgentOutputProcessor {
mismatched_tag_threshold: f32,
cumulative_tags: usize,
cumulative_mismatched_tags: usize,
eval_outputs: Vec<EvalOutput<EditEvalMetadata>>,
}
fn mismatched_tag_threshold(mismatched_tag_threshold: f32) -> EditAgentOutputProcessor {
EditAgentOutputProcessor {
mismatched_tag_threshold,
cumulative_tags: 0,
cumulative_mismatched_tags: 0,
eval_outputs: Vec::new(),
}
}
#[derive(Clone, Debug)]
struct EditEvalMetadata {
tags: usize,
mismatched_tags: usize,
}
impl EvalOutputProcessor for EditAgentOutputProcessor {
type Metadata = EditEvalMetadata;
fn process(&mut self, output: &EvalOutput<Self::Metadata>) {
if matches!(output.outcome, OutcomeKind::Passed | OutcomeKind::Failed) {
self.cumulative_mismatched_tags += output.metadata.mismatched_tags;
self.cumulative_tags += output.metadata.tags;
self.eval_outputs.push(output.clone());
}
}
fn assert(&mut self) {
let mismatched_tag_ratio =
self.cumulative_mismatched_tags as f32 / self.cumulative_tags as f32;
if mismatched_tag_ratio > self.mismatched_tag_threshold {
for eval_output in &self.eval_outputs {
println!("{}", eval_output.data);
}
panic!(
"Too many mismatched tags: {:?}",
self.cumulative_mismatched_tags
);
}
}
}
#[test]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_extract_handle_command_output() {
@@ -55,22 +101,19 @@ fn eval_extract_handle_command_output() {
include_str!("evals/fixtures/extract_handle_command_output/possible-07.diff"),
];
let edit_description = "Extract `handle_command_output` method from `run_git_blame`.";
eval(
100,
0.95,
0.05,
EvalInput::from_conversation(
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the `{input_file_path}` file and extract a method in
the final stanza of `run_git_blame` to deal with command failures,
call it `handle_command_output` and take the std::process::Output as the only parameter.
Do not document the method and do not add any comments.
Read the `{input_file_path}` file and extract a method in
the final stanza of `run_git_blame` to deal with command failures,
call it `handle_command_output` and take the std::process::Output as the only parameter.
Do not document the method and do not add any comments.
Add it right next to `run_git_blame` and copy it verbatim from `run_git_blame`.
"})],
Add it right next to `run_git_blame` and copy it verbatim from `run_git_blame`.
"})],
),
message(
Assistant,
@@ -102,9 +145,9 @@ fn eval_extract_handle_command_output() {
),
],
Some(input_file_content.into()),
EvalAssertion::assert_diff_any(possible_diffs),
),
);
EvalAssertion::assert_diff_any(possible_diffs.clone()),
))
});
}
#[test]
@@ -122,18 +165,16 @@ fn eval_delete_run_git_blame() {
let input_file_content = include_str!("evals/fixtures/delete_run_git_blame/before.rs");
let output_file_content = include_str!("evals/fixtures/delete_run_git_blame/after.rs");
let edit_description = "Delete the `run_git_blame` function.";
eval(
100,
0.95,
0.05,
EvalInput::from_conversation(
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the `{input_file_path}` file and delete `run_git_blame`. Just that
one function, not its usages.
"})],
Read the `{input_file_path}` file and delete `run_git_blame`. Just that
one function, not its usages.
"})],
),
message(
Assistant,
@@ -166,8 +207,8 @@ fn eval_delete_run_git_blame() {
],
Some(input_file_content.into()),
EvalAssertion::assert_eq(output_file_content),
),
);
))
});
}
#[test]
@@ -185,18 +226,16 @@ fn eval_translate_doc_comments() {
let input_file_path = "root/canvas.rs";
let input_file_content = include_str!("evals/fixtures/translate_doc_comments/before.rs");
let edit_description = "Translate all doc comments to Italian";
eval(
200,
1.,
0.05,
EvalInput::from_conversation(
eval_utils::eval(200, 1., mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the {input_file_path} file and edit it (without overwriting it),
translating all the doc comments to italian.
"})],
Read the {input_file_path} file and edit it (without overwriting it),
translating all the doc comments to italian.
"})],
),
message(
Assistant,
@@ -229,8 +268,8 @@ fn eval_translate_doc_comments() {
],
Some(input_file_content.into()),
EvalAssertion::judge_diff("Doc comments were translated to Italian"),
),
);
))
});
}
#[test]
@@ -249,33 +288,31 @@ fn eval_use_wasi_sdk_in_compile_parser_to_wasm() {
let input_file_content =
include_str!("evals/fixtures/use_wasi_sdk_in_compile_parser_to_wasm/before.rs");
let edit_description = "Update compile_parser_to_wasm to use wasi-sdk instead of emscripten";
eval(
100,
0.95,
0.05,
EvalInput::from_conversation(
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the `{input_file_path}` file and change `compile_parser_to_wasm` to use `wasi-sdk` instead of emscripten.
Use `ureq` to download the SDK for the current platform and architecture.
Extract the archive into a sibling of `lib` inside the `tree-sitter` directory in the cache_dir.
Compile the parser to wasm using the `bin/clang` executable (or `bin/clang.exe` on windows)
that's inside of the archive.
Don't re-download the SDK if that executable already exists.
Read the `{input_file_path}` file and change `compile_parser_to_wasm` to use `wasi-sdk` instead of emscripten.
Use `ureq` to download the SDK for the current platform and architecture.
Extract the archive into a sibling of `lib` inside the `tree-sitter` directory in the cache_dir.
Compile the parser to wasm using the `bin/clang` executable (or `bin/clang.exe` on windows)
that's inside of the archive.
Don't re-download the SDK if that executable already exists.
Use these clang flags: -fPIC -shared -Os -Wl,--export=tree_sitter_{{language_name}}
Use these clang flags: -fPIC -shared -Os -Wl,--export=tree_sitter_{{language_name}}
Here are the available wasi-sdk assets:
- wasi-sdk-25.0-x86_64-macos.tar.gz
- wasi-sdk-25.0-arm64-macos.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-windows.tar.gz
"})],
Here are the available wasi-sdk assets:
- wasi-sdk-25.0-x86_64-macos.tar.gz
- wasi-sdk-25.0-arm64-macos.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-windows.tar.gz
"})],
),
message(
Assistant,
@@ -352,11 +389,11 @@ fn eval_use_wasi_sdk_in_compile_parser_to_wasm() {
],
Some(input_file_content.into()),
EvalAssertion::judge_diff(indoc! {"
- The compile_parser_to_wasm method has been changed to use wasi-sdk
- ureq is used to download the SDK for current platform and architecture
"}),
),
);
- The compile_parser_to_wasm method has been changed to use wasi-sdk
- ureq is used to download the SDK for current platform and architecture
"}),
))
});
}
#[test]
@@ -380,11 +417,8 @@ fn eval_disable_cursor_blinking() {
include_str!("evals/fixtures/disable_cursor_blinking/possible-03.diff"),
include_str!("evals/fixtures/disable_cursor_blinking/possible-04.diff"),
];
eval(
100,
0.51,
0.05,
EvalInput::from_conversation(
eval_utils::eval(100, 0.51, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
vec![
message(User, [text("Let's research how to cursor blinking works.")]),
message(
@@ -421,10 +455,10 @@ fn eval_disable_cursor_blinking() {
message(
User,
[text(indoc! {"
Comment out the lines that interact with the BlinkManager.
Keep the outer `update` blocks, but comments everything that's inside (including if statements).
Don't add additional comments.
"})],
Comment out the lines that interact with the BlinkManager.
Keep the outer `update` blocks, but comments everything that's inside (including if statements).
Don't add additional comments.
"})],
),
message(
Assistant,
@@ -440,9 +474,9 @@ fn eval_disable_cursor_blinking() {
),
],
Some(input_file_content.into()),
EvalAssertion::assert_diff_any(possible_diffs),
),
);
EvalAssertion::assert_diff_any(possible_diffs.clone()),
))
});
}
#[test]
@@ -467,20 +501,16 @@ fn eval_from_pixels_constructor() {
let input_file_path = "root/canvas.rs";
let input_file_content = include_str!("evals/fixtures/from_pixels_constructor/before.rs");
let edit_description = "Implement from_pixels constructor and add tests.";
eval(
100,
0.95,
// For whatever reason, this eval produces more mismatched tags.
// Increasing for now, let's see if we can bring this down.
0.25,
EvalInput::from_conversation(
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.25), move || {
run_eval(EvalInput::from_conversation(
vec![
message(
User,
[text(indoc! {"
Introduce a new `from_pixels` constructor in Canvas and
also add tests for it in the same file.
"})],
Introduce a new `from_pixels` constructor in Canvas and
also add tests for it in the same file.
"})],
),
message(
Assistant,
@@ -545,92 +575,92 @@ fn eval_from_pixels_constructor() {
"tool_4",
"grep",
indoc! {"
Found 6 matches:
Found 6 matches:
## Matches in font-kit/src/loaders/core_text.rs
## Matches in font-kit/src/loaders/core_text.rs
### mod test L926-936
```
mod test {
use super::Font;
use crate::properties::{Stretch, Weight};
### mod test L926-936
```
mod test {
use super::Font;
use crate::properties::{Stretch, Weight};
#[cfg(feature = \"source\")]
use crate::source::SystemSource;
#[cfg(feature = \"source\")]
use crate::source::SystemSource;
static TEST_FONT_POSTSCRIPT_NAME: &'static str = \"ArialMT\";
static TEST_FONT_POSTSCRIPT_NAME: &'static str = \"ArialMT\";
#[cfg(feature = \"source\")]
#[test]
```
#[cfg(feature = \"source\")]
#[test]
```
55 lines remaining in ancestor node. Read the file to see all.
55 lines remaining in ancestor node. Read the file to see all.
### mod test L947-951
```
}
### mod test L947-951
```
}
#[test]
fn test_core_text_to_css_font_weight() {
// Exact matches
```
#[test]
fn test_core_text_to_css_font_weight() {
// Exact matches
```
### mod test L959-963
```
}
### mod test L959-963
```
}
#[test]
fn test_core_text_to_css_font_stretch() {
// Exact matches
```
#[test]
fn test_core_text_to_css_font_stretch() {
// Exact matches
```
## Matches in font-kit/src/loaders/freetype.rs
## Matches in font-kit/src/loaders/freetype.rs
### mod test L1238-1248
```
mod test {
use crate::loaders::freetype::Font;
### mod test L1238-1248
```
mod test {
use crate::loaders::freetype::Font;
static PCF_FONT_PATH: &str = \"resources/tests/times-roman-pcf/timR12.pcf\";
static PCF_FONT_POSTSCRIPT_NAME: &str = \"Times-Roman\";
static PCF_FONT_PATH: &str = \"resources/tests/times-roman-pcf/timR12.pcf\";
static PCF_FONT_POSTSCRIPT_NAME: &str = \"Times-Roman\";
#[test]
fn get_pcf_postscript_name() {
let font = Font::from_path(PCF_FONT_PATH, 0).unwrap();
assert_eq!(font.postscript_name().unwrap(), PCF_FONT_POSTSCRIPT_NAME);
}
```
#[test]
fn get_pcf_postscript_name() {
let font = Font::from_path(PCF_FONT_PATH, 0).unwrap();
assert_eq!(font.postscript_name().unwrap(), PCF_FONT_POSTSCRIPT_NAME);
}
```
1 lines remaining in ancestor node. Read the file to see all.
1 lines remaining in ancestor node. Read the file to see all.
## Matches in font-kit/src/sources/core_text.rs
## Matches in font-kit/src/sources/core_text.rs
### mod test L265-275
```
mod test {
use crate::properties::{Stretch, Weight};
### mod test L265-275
```
mod test {
use crate::properties::{Stretch, Weight};
#[test]
fn test_css_to_core_text_font_weight() {
// Exact matches
assert_eq!(super::css_to_core_text_font_weight(Weight(100.0)), -0.7);
assert_eq!(super::css_to_core_text_font_weight(Weight(400.0)), 0.0);
assert_eq!(super::css_to_core_text_font_weight(Weight(700.0)), 0.4);
assert_eq!(super::css_to_core_text_font_weight(Weight(900.0)), 0.8);
#[test]
fn test_css_to_core_text_font_weight() {
// Exact matches
assert_eq!(super::css_to_core_text_font_weight(Weight(100.0)), -0.7);
assert_eq!(super::css_to_core_text_font_weight(Weight(400.0)), 0.0);
assert_eq!(super::css_to_core_text_font_weight(Weight(700.0)), 0.4);
assert_eq!(super::css_to_core_text_font_weight(Weight(900.0)), 0.8);
```
```
27 lines remaining in ancestor node. Read the file to see all.
27 lines remaining in ancestor node. Read the file to see all.
### mod test L278-282
```
}
### mod test L278-282
```
}
#[test]
fn test_css_to_core_text_font_stretch() {
// Exact matches
```
"},
#[test]
fn test_css_to_core_text_font_stretch() {
// Exact matches
```
"},
)],
),
message(
@@ -648,11 +678,11 @@ fn eval_from_pixels_constructor() {
],
Some(input_file_content.into()),
EvalAssertion::judge_diff(indoc! {"
- The diff contains a new `from_pixels` constructor
- The diff contains new tests for the `from_pixels` constructor
"}),
),
);
- The diff contains a new `from_pixels` constructor
- The diff contains new tests for the `from_pixels` constructor
"}),
))
});
}
#[test]
@@ -670,11 +700,9 @@ fn eval_zode() {
let input_file_path = "root/zode.py";
let input_content = None;
let edit_description = "Create the main Zode CLI script";
eval(
50,
1.,
0.05,
EvalInput::from_conversation(
eval_utils::eval(50, 1., mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
vec![
message(User, [text(include_str!("evals/fixtures/zode/prompt.md"))]),
message(
@@ -733,7 +761,7 @@ fn eval_zode() {
],
),
],
input_content,
input_content.clone(),
EvalAssertion::new(async move |sample, _, _cx| {
let invalid_starts = [' ', '`', '\n'];
let mut message = String::new();
@@ -758,8 +786,8 @@ fn eval_zode() {
})
}
}),
),
);
))
});
}
#[test]
@@ -777,19 +805,17 @@ fn eval_add_overwrite_test() {
let input_file_path = "root/action_log.rs";
let input_file_content = include_str!("evals/fixtures/add_overwrite_test/before.rs");
let edit_description = "Add a new test for overwriting a file in action_log.rs";
eval(
200,
0.5, // TODO: make this eval better
0.05,
EvalInput::from_conversation(
eval_utils::eval(200, 0.5, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
vec![
message(
User,
[text(indoc! {"
Introduce a new test in `action_log.rs` to test overwriting a file.
That is, a file already exists, but we call `buffer_created` as if the file were new.
Take inspiration from all the other tests in the file.
"})],
Introduce a new test in `action_log.rs` to test overwriting a file.
That is, a file already exists, but we call `buffer_created` as if the file were new.
Take inspiration from all the other tests in the file.
"})],
),
message(
Assistant,
@@ -809,81 +835,81 @@ fn eval_add_overwrite_test() {
"tool_1",
"read_file",
indoc! {"
pub struct ActionLog [L13-20]
tracked_buffers [L15]
edited_since_project_diagnostics_check [L17]
project [L19]
impl ActionLog [L22-498]
pub fn new [L24-30]
pub fn project [L32-34]
pub fn checked_project_diagnostics [L37-39]
pub fn has_edited_files_since_project_diagnostics_check [L42-44]
fn track_buffer_internal [L46-101]
fn handle_buffer_event [L103-116]
fn handle_buffer_edited [L118-123]
fn handle_buffer_file_changed [L125-158]
async fn maintain_diff [L160-264]
pub fn buffer_read [L267-269]
pub fn buffer_created [L272-276]
pub fn buffer_edited [L279-287]
pub fn will_delete_buffer [L289-304]
pub fn keep_edits_in_range [L306-364]
pub fn reject_edits_in_ranges [L366-459]
pub fn keep_all_edits [L461-473]
pub fn changed_buffers [L476-482]
pub fn stale_buffers [L485-497]
fn apply_non_conflicting_edits [L500-561]
fn diff_snapshots [L563-585]
fn point_to_row_edit [L587-614]
enum ChangeAuthor [L617-620]
User [L618]
Agent [L619]
enum TrackedBufferStatus [L623-627]
Created [L624]
Modified [L625]
Deleted [L626]
struct TrackedBuffer [L629-641]
buffer [L630]
base_text [L631]
unreviewed_changes [L632]
status [L633]
version [L634]
diff [L635]
snapshot [L636]
diff_update [L637]
_open_lsp_handle [L638]
_maintain_diff [L639]
_subscription [L640]
impl TrackedBuffer [L643-657]
fn has_changes [L644-650]
fn schedule_diff_update [L652-656]
pub struct ChangedBuffer [L659-661]
pub diff [L660]
mod tests [L664-1574]
fn init_logger [L678-682]
fn init_test [L684-691]
async fn test_keep_edits [L694-769]
async fn test_deletions [L772-854]
async fn test_overlapping_user_edits [L857-951]
async fn test_creating_files [L954-1010]
async fn test_deleting_files [L1013-1120]
async fn test_reject_edits [L1123-1255]
async fn test_reject_multiple_edits [L1258-1331]
async fn test_reject_deleted_file [L1334-1388]
async fn test_reject_created_file [L1391-1443]
async fn test_random_diffs [L1446-1535]
fn quiesce [L1510-1534]
struct HunkStatus [L1538-1542]
range [L1539]
diff_status [L1540]
old_text [L1541]
fn unreviewed_hunks [L1544-1573]
pub struct ActionLog [L13-20]
tracked_buffers [L15]
edited_since_project_diagnostics_check [L17]
project [L19]
impl ActionLog [L22-498]
pub fn new [L24-30]
pub fn project [L32-34]
pub fn checked_project_diagnostics [L37-39]
pub fn has_edited_files_since_project_diagnostics_check [L42-44]
fn track_buffer_internal [L46-101]
fn handle_buffer_event [L103-116]
fn handle_buffer_edited [L118-123]
fn handle_buffer_file_changed [L125-158]
async fn maintain_diff [L160-264]
pub fn buffer_read [L267-269]
pub fn buffer_created [L272-276]
pub fn buffer_edited [L279-287]
pub fn will_delete_buffer [L289-304]
pub fn keep_edits_in_range [L306-364]
pub fn reject_edits_in_ranges [L366-459]
pub fn keep_all_edits [L461-473]
pub fn changed_buffers [L476-482]
pub fn stale_buffers [L485-497]
fn apply_non_conflicting_edits [L500-561]
fn diff_snapshots [L563-585]
fn point_to_row_edit [L587-614]
enum ChangeAuthor [L617-620]
User [L618]
Agent [L619]
enum TrackedBufferStatus [L623-627]
Created [L624]
Modified [L625]
Deleted [L626]
struct TrackedBuffer [L629-641]
buffer [L630]
base_text [L631]
unreviewed_changes [L632]
status [L633]
version [L634]
diff [L635]
snapshot [L636]
diff_update [L637]
_open_lsp_handle [L638]
_maintain_diff [L639]
_subscription [L640]
impl TrackedBuffer [L643-657]
fn has_changes [L644-650]
fn schedule_diff_update [L652-656]
pub struct ChangedBuffer [L659-661]
pub diff [L660]
mod tests [L664-1574]
fn init_logger [L678-682]
fn init_test [L684-691]
async fn test_keep_edits [L694-769]
async fn test_deletions [L772-854]
async fn test_overlapping_user_edits [L857-951]
async fn test_creating_files [L954-1010]
async fn test_deleting_files [L1013-1120]
async fn test_reject_edits [L1123-1255]
async fn test_reject_multiple_edits [L1258-1331]
async fn test_reject_deleted_file [L1334-1388]
async fn test_reject_created_file [L1391-1443]
async fn test_random_diffs [L1446-1535]
fn quiesce [L1510-1534]
struct HunkStatus [L1538-1542]
range [L1539]
diff_status [L1540]
old_text [L1541]
fn unreviewed_hunks [L1544-1573]
Showing symbols 1-69 (total symbols: 69)
Showing symbols 1-69 (total symbols: 69)
Using the line numbers in this outline, you can call this tool again while specifying
the start_line and end_line fields to see the implementations of symbols in the outline.
"},
Using the line numbers in this outline, you can call this tool again while specifying
the start_line and end_line fields to see the implementations of symbols in the outline.
"},
)],
),
message(
@@ -963,17 +989,17 @@ fn eval_add_overwrite_test() {
Assistant,
[
text(indoc! {"
Based on my understanding of the existing tests and code, I'll now create a new test for overwriting a file. In this test, we'll:
Based on my understanding of the existing tests and code, I'll now create a new test for overwriting a file. In this test, we'll:
1. Create a file
2. Open a buffer for it
3. Track it in the action log
4. Modify the buffer
5. Call `buffer_created` again as if the file were new
6. Check that the changes are properly tracked
1. Create a file
2. Open a buffer for it
3. Track it in the action log
4. Modify the buffer
5. Call `buffer_created` again as if the file were new
6. Check that the changes are properly tracked
Let's write the test:
"}),
Let's write the test:
"}),
tool_use(
"tool_5",
"edit_file",
@@ -990,8 +1016,8 @@ fn eval_add_overwrite_test() {
EvalAssertion::judge_diff(
"A new test for overwritten files was created, without changing any previous test",
),
),
);
))
});
}
#[test]
@@ -1016,20 +1042,18 @@ fn eval_create_empty_file() {
let input_file_content = None;
let expected_output_content = String::new();
eval(
100,
0.99,
0.05,
EvalInput::from_conversation(
eval_utils::eval(100, 0.99, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
vec![
message(User, [text("Create a second empty todo file ")]),
message(
Assistant,
[
text(formatdoc! {"
I'll help you create a second empty todo file.
First, let me examine the project structure to see if there's already a todo file, which will help me determine the appropriate name and location for the second one.
"}),
I'll help you create a second empty todo file.
First, let me examine the project structure to see if there's already a todo file, which will help me determine the appropriate name and location for the second one.
"}),
tool_use(
"toolu_01GAF8TtsgpjKxCr8fgQLDgR",
"list_directory",
@@ -1051,8 +1075,8 @@ fn eval_create_empty_file() {
Assistant,
[
text(formatdoc! {"
I can see there's already a `TODO` file in the `root` directory. Let me create a second empty todo file called `TODO3` in the same directory:
"}),
I can see there's already a `TODO` file in the `root` directory. Let me create a second empty todo file called `TODO3` in the same directory:
"}),
tool_use(
"toolu_01Tb3iQ9griqSYMmVuykQPWU",
"edit_file",
@@ -1065,12 +1089,12 @@ fn eval_create_empty_file() {
],
),
],
input_file_content,
input_file_content.clone(),
// Bad behavior is to write something like
// "I'll create an empty TODO3 file as requested."
EvalAssertion::assert_eq(expected_output_content),
),
);
EvalAssertion::assert_eq(expected_output_content.clone()),
))
});
}
fn message(
@@ -1312,115 +1336,44 @@ impl EvalAssertion {
}
}
fn eval(
iterations: usize,
expected_pass_ratio: f32,
mismatched_tag_threshold: f32,
mut eval: EvalInput,
) {
let mut evaluated_count = 0;
let mut failed_count = 0;
report_progress(evaluated_count, failed_count, iterations);
let (tx, rx) = mpsc::channel();
// Cache the last message in the conversation, and run one instance of the eval so that
// all the next ones are cached.
eval.conversation.last_mut().unwrap().cache = true;
run_eval(eval.clone(), tx.clone());
let executor = gpui::background_executor();
let semaphore = Arc::new(smol::lock::Semaphore::new(32));
for _ in 1..iterations {
let eval = eval.clone();
let tx = tx.clone();
let semaphore = semaphore.clone();
executor
.spawn(async move {
let _guard = semaphore.acquire().await;
run_eval(eval, tx)
})
.detach();
}
drop(tx);
let mut failed_evals = HashMap::default();
let mut errored_evals = HashMap::default();
let mut eval_outputs = Vec::new();
let mut cumulative_parser_metrics = EditParserMetrics::default();
while let Ok(output) = rx.recv() {
match output {
Ok(output) => {
cumulative_parser_metrics += output.sample.edit_output.parser_metrics.clone();
eval_outputs.push(output.clone());
if output.assertion.score < 80 {
failed_count += 1;
failed_evals
.entry(output.sample.text_after.clone())
.or_insert(Vec::new())
.push(output);
}
}
Err(error) => {
failed_count += 1;
*errored_evals.entry(format!("{:?}", error)).or_insert(0) += 1;
}
}
evaluated_count += 1;
report_progress(evaluated_count, failed_count, iterations);
}
let actual_pass_ratio = (iterations - failed_count) as f32 / iterations as f32;
println!("Actual pass ratio: {}\n", actual_pass_ratio);
if actual_pass_ratio < expected_pass_ratio {
let mut errored_evals = errored_evals.into_iter().collect::<Vec<_>>();
errored_evals.sort_by_key(|(_, count)| Reverse(*count));
for (error, count) in errored_evals {
println!("Eval errored {} times. Error: {}", count, error);
}
let mut failed_evals = failed_evals.into_iter().collect::<Vec<_>>();
failed_evals.sort_by_key(|(_, evals)| Reverse(evals.len()));
for (_buffer_output, failed_evals) in failed_evals {
let eval_output = failed_evals.first().unwrap();
println!("Eval failed {} times", failed_evals.len());
println!("{}", eval_output);
}
panic!(
"Actual pass ratio: {}\nExpected pass ratio: {}",
actual_pass_ratio, expected_pass_ratio
);
}
let mismatched_tag_ratio =
cumulative_parser_metrics.mismatched_tags as f32 / cumulative_parser_metrics.tags as f32;
if mismatched_tag_ratio > mismatched_tag_threshold {
for eval_output in eval_outputs {
println!("{}", eval_output);
}
panic!("Too many mismatched tags: {:?}", cumulative_parser_metrics);
}
}
fn run_eval(eval: EvalInput, tx: mpsc::Sender<Result<EvalOutput>>) {
fn run_eval(eval: EvalInput) -> eval_utils::EvalOutput<EditEvalMetadata> {
let dispatcher = gpui::TestDispatcher::new(StdRng::from_os_rng());
let mut cx = TestAppContext::build(dispatcher, None);
let output = cx.executor().block_test(async {
let result = cx.executor().block_test(async {
let test = EditAgentTest::new(&mut cx).await;
test.eval(eval, &mut cx).await
});
tx.send(output).unwrap();
match result {
Ok(output) => eval_utils::EvalOutput {
data: output.to_string(),
outcome: if output.assertion.score < 80 {
eval_utils::OutcomeKind::Failed
} else {
eval_utils::OutcomeKind::Passed
},
metadata: EditEvalMetadata {
tags: output.sample.edit_output.parser_metrics.tags,
mismatched_tags: output.sample.edit_output.parser_metrics.mismatched_tags,
},
},
Err(e) => eval_utils::EvalOutput {
data: format!("{e:?}"),
outcome: eval_utils::OutcomeKind::Error,
metadata: EditEvalMetadata {
tags: 0,
mismatched_tags: 0,
},
},
}
}
#[derive(Clone)]
struct EvalOutput {
struct EditEvalOutput {
sample: EvalSample,
assertion: EvalAssertionOutcome,
}
impl Display for EvalOutput {
impl Display for EditEvalOutput {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
writeln!(f, "Score: {:?}", self.assertion.score)?;
if let Some(message) = self.assertion.message.as_ref() {
@@ -1439,22 +1392,6 @@ impl Display for EvalOutput {
}
}
fn report_progress(evaluated_count: usize, failed_count: usize, iterations: usize) {
let passed_count = evaluated_count - failed_count;
let passed_ratio = if evaluated_count == 0 {
0.0
} else {
passed_count as f64 / evaluated_count as f64
};
print!(
"\r\x1b[KEvaluated {}/{} ({:.2}% passed)",
evaluated_count,
iterations,
passed_ratio * 100.0
);
std::io::stdout().flush().unwrap();
}
struct EditAgentTest {
agent: EditAgent,
project: Entity<Project>,
@@ -1550,7 +1487,10 @@ impl EditAgentTest {
})
}
async fn eval(&self, eval: EvalInput, cx: &mut TestAppContext) -> Result<EvalOutput> {
async fn eval(&self, mut eval: EvalInput, cx: &mut TestAppContext) -> Result<EditEvalOutput> {
// Make sure the last message in the conversation is cached.
eval.conversation.last_mut().unwrap().cache = true;
let path = self
.project
.read_with(cx, |project, cx| {
@@ -1656,7 +1596,7 @@ impl EditAgentTest {
.run(&sample, self.judge_model.clone(), cx)
.await?;
Ok(EvalOutput { assertion, sample })
Ok(EditEvalOutput { assertion, sample })
}
}
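
For context on the harness change in this file: the hand-rolled `eval(...)` loop is replaced by `eval_utils::eval(iterations, pass_ratio, processor, run)` plus an `EvalOutputProcessor` that accumulates per-run metadata and asserts at the end. A sketch of another, hypothetical processor using only the trait surface visible in this diff (anything beyond that is assumed):

#[derive(Default)]
struct ErrorCounter {
    errors: usize,
}

impl eval_utils::EvalOutputProcessor for ErrorCounter {
    type Metadata = ();

    fn process(&mut self, output: &eval_utils::EvalOutput<Self::Metadata>) {
        // Count runs that errored out entirely, as opposed to merely failing.
        if matches!(output.outcome, eval_utils::OutcomeKind::Error) {
            self.errors += 1;
        }
    }

    fn assert(&mut self) {
        assert_eq!(self.errors, 0, "some eval runs errored");
    }
}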

View File

@@ -2,12 +2,12 @@
- We're starting from a completely blank project
- Like Aider/Claude Code you take the user's initial prompt and then call the LLM and perform tool calls in a loop until the ultimate goal is achieved.
- Unlike Aider or Claude code, it's not intended to be interactive. Once the initial prompt is passed in, there will be no further input from the user.
- The system you will build must reach the stated goal just by performing too calls and calling the LLM
- The system you will build must reach the stated goal just by performing tool calls and calling the LLM
- I want you to build this in python. Use the anthropic python sdk and the model context protocol sdk. Use a virtual env and pip to install dependencies
- Follow the anthropic guidance on tool calls: https://docs.anthropic.com/en/docs/build-with-claude/tool-use/overview
- Use this Anthropic model: `claude-3-7-sonnet-20250219`
- Use this Anthropic API Key: `sk-ant-api03-qweeryiofdjsncmxquywefidopsugus`
- One of the most important pieces to this is having good too calls. We will be using the tools provided by the Claude MCP server. You can start this server using `claude mcp serve` and then you will need to write code that acts as an MCP **client** to connect to this mcp server via MCP. Likely you want to start this using a subprocess. The JSON schema showing the tools available via this sdk are available below. Via this MCP server you have access to all the tools that zode needs: Bash, GlobTool, GrepTool, LS, View, Edit, Replace, WebFetchTool
- One of the most important pieces to this is having good tool calls. We will be using the tools provided by the Claude MCP server. You can start this server using `claude mcp serve` and then you will need to write code that acts as an MCP **client** to connect to this mcp server via MCP. Likely you want to start this using a subprocess. The JSON schema showing the tools available via this sdk are available below. Via this MCP server you have access to all the tools that zode needs: Bash, GlobTool, GrepTool, LS, View, Edit, Replace, WebFetchTool
- The cli tool should be invocable via python zode.py file.md where file.md is any possible file that contains the users prompt. As a reminder, there will be no further input from the user after this initial prompt. Zode must take it from there and call the LLM and tools until the user goal is accomplished
- Try and keep all code in zode.py and make heavy use of the asks I mentioned
- Once youve implemented this, you must run python zode.py eval/instructions.md to see how well our new agent tool does!

View File

@@ -21,10 +21,6 @@ impl NativeAgentServer {
}
impl AgentServer for NativeAgentServer {
fn telemetry_id(&self) -> &'static str {
"zed"
}
fn name(&self) -> SharedString {
"Zed Agent".into()
}

View File

@@ -2094,7 +2094,7 @@ async fn test_tool_updates_to_completion(cx: &mut TestAppContext) {
"1",
acp::ToolCallUpdateFields::new()
.status(acp::ToolCallStatus::Completed)
.raw_output("Finished thinking.".into())
.raw_output("Finished thinking.")
)
);
}

View File

@@ -766,20 +766,22 @@ impl Thread {
.log_err();
}
let mut fields = acp::ToolCallUpdateFields::new().status(tool_result.as_ref().map_or(
acp::ToolCallStatus::Failed,
|result| {
if result.is_error {
acp::ToolCallStatus::Failed
} else {
acp::ToolCallStatus::Completed
}
},
));
if let Some(output) = output {
fields = fields.raw_output(output);
}
stream.update_tool_call_fields(&tool_use.id, fields);
stream.update_tool_call_fields(
&tool_use.id,
acp::ToolCallUpdateFields::new()
.status(
tool_result
.as_ref()
.map_or(acp::ToolCallStatus::Failed, |result| {
if result.is_error {
acp::ToolCallStatus::Failed
} else {
acp::ToolCallStatus::Completed
}
}),
)
.raw_output(output),
);
}
pub fn from_db(
@@ -1259,15 +1261,16 @@ impl Thread {
while let Some(tool_result) = tool_results.next().await {
log::debug!("Tool finished {:?}", tool_result);
let mut fields = acp::ToolCallUpdateFields::new().status(if tool_result.is_error {
acp::ToolCallStatus::Failed
} else {
acp::ToolCallStatus::Completed
});
if let Some(output) = &tool_result.output {
fields = fields.raw_output(output.clone());
}
event_stream.update_tool_call_fields(&tool_result.tool_use_id, fields);
event_stream.update_tool_call_fields(
&tool_result.tool_use_id,
acp::ToolCallUpdateFields::new()
.status(if tool_result.is_error {
acp::ToolCallStatus::Failed
} else {
acp::ToolCallStatus::Completed
})
.raw_output(tool_result.output.clone()),
);
this.update(cx, |this, _cx| {
this.pending_message()
.tool_results
@@ -1545,7 +1548,7 @@ impl Thread {
event_stream.update_tool_call_fields(
&tool_use.id,
acp::ToolCallUpdateFields::new()
.title(title)
.title(title.as_str())
.kind(kind)
.raw_input(tool_use.input.clone()),
);
@@ -2461,7 +2464,7 @@ impl ToolCallEventStream {
ToolCallAuthorization {
tool_call: acp::ToolCallUpdate::new(
self.tool_use_id.to_string(),
acp::ToolCallUpdateFields::new().title(title),
acp::ToolCallUpdateFields::new().title(title.into()),
),
options: vec![
acp::PermissionOption::new(

View File

@@ -4,6 +4,7 @@ mod create_directory_tool;
mod delete_path_tool;
mod diagnostics_tool;
mod edit_file_tool;
mod fetch_tool;
mod find_path_tool;
mod grep_tool;
@@ -12,6 +13,7 @@ mod move_path_tool;
mod now_tool;
mod open_tool;
mod read_file_tool;
mod terminal_tool;
mod thinking_tool;
mod web_search_tool;
@@ -25,6 +27,7 @@ pub use create_directory_tool::*;
pub use delete_path_tool::*;
pub use diagnostics_tool::*;
pub use edit_file_tool::*;
pub use fetch_tool::*;
pub use find_path_tool::*;
pub use grep_tool::*;
@@ -33,6 +36,7 @@ pub use move_path_tool::*;
pub use now_tool::*;
pub use open_tool::*;
pub use read_file_tool::*;
pub use terminal_tool::*;
pub use thinking_tool::*;
pub use web_search_tool::*;

View File

@@ -384,11 +384,7 @@ impl AgentTool for EditFileTool {
range.start.to_point(&buffer.snapshot()).row
}).ok();
if let Some(abs_path) = abs_path.clone() {
let mut location = ToolCallLocation::new(abs_path);
if let Some(line) = line {
location = location.line(line);
}
event_stream.update_fields(ToolCallUpdateFields::new().locations(vec![location]));
event_stream.update_fields(ToolCallUpdateFields::new().locations(vec![ToolCallLocation::new(abs_path).line(line)]));
}
emitted_location = true;
}

View File

@@ -138,7 +138,7 @@ impl AgentTool for FindPathTool {
)),
))
})
.collect(),
.collect::<Vec<_>>(),
),
);

View File

@@ -322,7 +322,6 @@ mod tests {
use super::*;
use gpui::{TestAppContext, UpdateGlobal};
use language::{Language, LanguageConfig, LanguageMatcher};
use project::{FakeFs, Project};
use serde_json::json;
use settings::SettingsStore;
@@ -564,7 +563,7 @@ mod tests {
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
project.update(cx, |project, _cx| {
project.languages().add(rust_lang().into())
project.languages().add(language::rust_lang())
});
project
@@ -793,22 +792,6 @@ mod tests {
});
}
fn rust_lang() -> Language {
Language::new(
LanguageConfig {
name: "Rust".into(),
matcher: LanguageMatcher {
path_suffixes: vec!["rs".to_string()],
..Default::default()
},
..Default::default()
},
Some(tree_sitter_rust::LANGUAGE.into()),
)
.with_outline_query(include_str!("../../../languages/src/rust/outline.scm"))
.unwrap()
}
#[gpui::test]
async fn test_grep_security_boundaries(cx: &mut TestAppContext) {
init_test(cx);

View File

@@ -152,12 +152,11 @@ impl AgentTool for ReadFileTool {
}
let file_path = input.path.clone();
let mut location = acp::ToolCallLocation::new(&abs_path);
if let Some(line) = input.start_line {
location = location.line(line.saturating_sub(1));
}
event_stream.update_fields(ToolCallUpdateFields::new().locations(vec![location]));
event_stream.update_fields(ToolCallUpdateFields::new().locations(vec![
acp::ToolCallLocation::new(&abs_path)
.line(input.start_line.map(|line| line.saturating_sub(1))),
]));
if image_store::is_image_file(&self.project, &project_path, cx) {
return cx.spawn(async move |cx| {
@@ -302,7 +301,6 @@ mod test {
use super::*;
use crate::{ContextServerRegistry, Templates, Thread};
use gpui::{AppContext, TestAppContext, UpdateGlobal as _};
use language::{Language, LanguageConfig, LanguageMatcher, tree_sitter_rust};
use language_model::fake_provider::FakeLanguageModel;
use project::{FakeFs, Project};
use prompt_store::ProjectContext;
@@ -406,7 +404,7 @@ mod test {
.await;
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let language_registry = project.read_with(cx, |project, _| project.languages().clone());
language_registry.add(Arc::new(rust_lang()));
language_registry.add(language::rust_lang());
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(project.read(cx).context_server_store(), cx));
@@ -596,49 +594,6 @@ mod test {
});
}
fn rust_lang() -> Language {
Language::new(
LanguageConfig {
name: "Rust".into(),
matcher: LanguageMatcher {
path_suffixes: vec!["rs".to_string()],
..Default::default()
},
..Default::default()
},
Some(tree_sitter_rust::LANGUAGE.into()),
)
.with_outline_query(
r#"
(line_comment) @annotation
(struct_item
"struct" @context
name: (_) @name) @item
(enum_item
"enum" @context
name: (_) @name) @item
(enum_variant
name: (_) @name) @item
(field_declaration
name: (_) @name) @item
(impl_item
"impl" @context
trait: (_)? @name
"for"? @context
type: (_) @name
body: (_ "{" (_)* "}")) @item
(function_item
"fn" @context
name: (_) @name) @item
(mod_item
"mod" @context
name: (_) @name) @item
"#,
)
.unwrap()
}
#[gpui::test]
async fn test_read_file_security(cx: &mut TestAppContext) {
init_test(cx);

View File

@@ -121,7 +121,7 @@ fn emit_update(response: &WebSearchResponse, event_stream: &ToolCallEventStream)
),
))
})
.collect(),
.collect::<Vec<_>>(),
),
);
}

View File

@@ -9,6 +9,10 @@ use futures::io::BufReader;
use project::Project;
use project::agent_server_store::AgentServerCommand;
use serde::Deserialize;
use settings::Settings as _;
use task::ShellBuilder;
#[cfg(windows)]
use task::ShellKind;
use util::ResultExt as _;
use std::path::PathBuf;
@@ -21,7 +25,7 @@ use gpui::{App, AppContext as _, AsyncApp, Entity, SharedString, Task, WeakEntit
use acp_thread::{AcpThread, AuthRequired, LoadError, TerminalProviderEvent};
use terminal::TerminalBuilder;
use terminal::terminal_settings::{AlternateScroll, CursorShape};
use terminal::terminal_settings::{AlternateScroll, CursorShape, TerminalSettings};
#[derive(Debug, Error)]
#[error("Unsupported version")]
@@ -29,7 +33,7 @@ pub struct UnsupportedVersion;
pub struct AcpConnection {
server_name: SharedString,
telemetry_id: &'static str,
telemetry_id: SharedString,
connection: Rc<acp::ClientSideConnection>,
sessions: Rc<RefCell<HashMap<acp::SessionId, AcpSession>>>,
auth_methods: Vec<acp::AuthMethod>,
@@ -54,7 +58,6 @@ pub struct AcpSession {
pub async fn connect(
server_name: SharedString,
telemetry_id: &'static str,
command: AgentServerCommand,
root_dir: &Path,
default_mode: Option<acp::SessionModeId>,
@@ -64,7 +67,6 @@ pub async fn connect(
) -> Result<Rc<dyn AgentConnection>> {
let conn = AcpConnection::stdio(
server_name,
telemetry_id,
command.clone(),
root_dir,
default_mode,
@@ -81,7 +83,6 @@ const MINIMUM_SUPPORTED_VERSION: acp::ProtocolVersion = acp::ProtocolVersion::V1
impl AcpConnection {
pub async fn stdio(
server_name: SharedString,
telemetry_id: &'static str,
command: AgentServerCommand,
root_dir: &Path,
default_mode: Option<acp::SessionModeId>,
@@ -89,9 +90,26 @@ impl AcpConnection {
is_remote: bool,
cx: &mut AsyncApp,
) -> Result<Self> {
let mut child = util::command::new_smol_command(&command.path);
let shell = cx.update(|cx| TerminalSettings::get(None, cx).shell.clone())?;
let builder = ShellBuilder::new(&shell, cfg!(windows));
#[cfg(windows)]
let kind = builder.kind();
let (cmd, args) = builder.build(Some(command.path.display().to_string()), &command.args);
let mut child = util::command::new_smol_command(cmd);
#[cfg(windows)]
if kind == ShellKind::Cmd {
use smol::process::windows::CommandExt;
for arg in args {
child.raw_arg(arg);
}
} else {
child.args(args);
}
#[cfg(not(windows))]
child.args(args);
child
.args(command.args.iter().map(|arg| arg.as_str()))
.envs(command.env.iter().flatten())
.stdin(std::process::Stdio::piped())
.stdout(std::process::Stdio::piped())
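
The hunk above routes the agent command through a shell and, when that shell is `cmd.exe` on Windows, appends arguments with `raw_arg` so cmd's own quoting rules apply instead of the standard library's re-quoting. A simplified standalone sketch of that branch using only `std`; it skips the shell-kind check and the names are illustrative, not Zed's helpers:

```rust
use std::process::Command;

// Build a command whose arguments bypass std's re-quoting on Windows, mirroring
// the cmd.exe branch above. On other platforms the arguments are added normally.
fn build_agent_command(program: &str, args: &[String]) -> Command {
    let mut command = Command::new(program);

    #[cfg(windows)]
    {
        use std::os::windows::process::CommandExt;
        for arg in args {
            // Append the argument verbatim; cmd.exe does its own parsing.
            command.raw_arg(arg);
        }
    }

    #[cfg(not(windows))]
    command.args(args);

    command
}

fn main() {
    let command = build_agent_command("my-agent", &["--stdio".to_string()]);
    // In real code you would pipe stdio and call `command.spawn()` next.
    println!("prepared {:?}", command.get_program());
}
```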
@@ -173,10 +191,6 @@ impl AcpConnection {
});
})?;
let mut client_info = acp::Implementation::new("zed", version);
if let Some(release_channel) = release_channel {
client_info = client_info.title(release_channel);
}
let response = connection
.initialize(
acp::InitializeRequest::new(acp::ProtocolVersion::V1)
@@ -192,7 +206,10 @@ impl AcpConnection {
("terminal-auth".into(), true.into()),
])),
)
.client_info(client_info),
.client_info(
acp::Implementation::new("zed", version)
.title(release_channel.map(ToOwned::to_owned)),
),
)
.await?;
@@ -200,6 +217,13 @@ impl AcpConnection {
return Err(UnsupportedVersion.into());
}
let telemetry_id = response
.agent_info
// Use the one the agent provides if we have one
.map(|info| info.name.into())
// Otherwise, just use the name
.unwrap_or_else(|| server_name.clone());
Ok(Self {
auth_methods: response.auth_methods,
root_dir: root_dir.to_owned(),
@@ -234,8 +258,8 @@ impl Drop for AcpConnection {
}
impl AgentConnection for AcpConnection {
fn telemetry_id(&self) -> &'static str {
self.telemetry_id
fn telemetry_id(&self) -> SharedString {
self.telemetry_id.clone()
}
fn new_thread(
@@ -302,10 +326,10 @@ impl AgentConnection for AcpConnection {
.new_session(acp::NewSessionRequest::new(cwd).mcp_servers(mcp_servers))
.await
.map_err(|err| {
if err.code == acp::ErrorCode::AUTH_REQUIRED.code {
if err.code == acp::ErrorCode::AuthRequired {
let mut error = AuthRequired::new();
if err.message != acp::ErrorCode::AUTH_REQUIRED.message {
if err.message != acp::ErrorCode::AuthRequired.to_string() {
error = error.with_description(err.message);
}
@@ -467,11 +491,11 @@ impl AgentConnection for AcpConnection {
match result {
Ok(response) => Ok(response),
Err(err) => {
if err.code == acp::ErrorCode::AUTH_REQUIRED.code {
if err.code == acp::ErrorCode::AuthRequired {
return Err(anyhow!(acp::Error::auth_required()));
}
if err.code != ErrorCode::INTERNAL_ERROR.code {
if err.code != ErrorCode::InternalError {
anyhow::bail!(err)
}
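
This hunk (and the matching ones elsewhere in the file) moves from comparing raw numeric codes such as `acp::ErrorCode::AUTH_REQUIRED.code` to comparing an error-code enum directly, so equality is expressed on the variant rather than its underlying integer. A standalone sketch of the pattern with illustrative stand-in types, not the real agent-client-protocol API:

```rust
#[derive(Debug, PartialEq, Eq)]
enum ErrorCode {
    AuthRequired,
    InternalError,
}

struct RpcError {
    code: ErrorCode,
    message: String,
}

fn classify(err: &RpcError) -> String {
    if err.code == ErrorCode::AuthRequired {
        "authentication required".to_string()
    } else if err.code != ErrorCode::InternalError {
        // Anything that is not an internal error is surfaced to the caller.
        format!("agent error: {}", err.message)
    } else {
        "internal error".to_string()
    }
}

fn main() {
    let err = RpcError {
        code: ErrorCode::AuthRequired,
        message: "please sign in".to_string(),
    };
    assert_eq!(classify(&err), "authentication required");
}
```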
@@ -838,13 +862,18 @@ impl acp::Client for ClientDelegate {
if let Some(term_exit) = meta.get("terminal_exit") {
if let Some(id_str) = term_exit.get("terminal_id").and_then(|v| v.as_str()) {
let terminal_id = acp::TerminalId::new(id_str);
let mut status = acp::TerminalExitStatus::new();
if let Some(code) = term_exit.get("exit_code").and_then(|v| v.as_u64()) {
status = status.exit_code(code as u32)
}
if let Some(signal) = term_exit.get("signal").and_then(|v| v.as_str()) {
status = status.signal(signal);
}
let status = acp::TerminalExitStatus::new()
.exit_code(
term_exit
.get("exit_code")
.and_then(|v| v.as_u64())
.map(|i| i as u32),
)
.signal(
term_exit
.get("signal")
.and_then(|v| v.as_str().map(|s| s.to_string())),
);
let _ = session.thread.update(&mut self.cx.clone(), |thread, cx| {
thread.on_terminal_provider_event(
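
The replacement above adopts setters that accept `Option` values, so the exit status is built in one chained expression instead of conditionally mutating a local. A minimal standalone sketch of that builder style; `ExitStatus` here is an illustrative type, not the real `acp::TerminalExitStatus`:

```rust
#[derive(Debug, Default)]
struct ExitStatus {
    exit_code: Option<u32>,
    signal: Option<String>,
}

impl ExitStatus {
    fn new() -> Self {
        Self::default()
    }

    // Setters take Options so callers can chain them unconditionally;
    // a missing value simply leaves the field as None.
    fn exit_code(mut self, code: Option<u32>) -> Self {
        self.exit_code = code;
        self
    }

    fn signal(mut self, signal: Option<String>) -> Self {
        self.signal = signal;
        self
    }
}

fn main() {
    let status = ExitStatus::new()
        .exit_code(Some(0))
        .signal(None);

    assert_eq!(status.exit_code, Some(0));
    assert!(status.signal.is_none());
}
```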

View File

@@ -56,7 +56,6 @@ impl AgentServerDelegate {
pub trait AgentServer: Send {
fn logo(&self) -> ui::IconName;
fn name(&self) -> SharedString;
fn telemetry_id(&self) -> &'static str;
fn default_mode(&self, _cx: &mut App) -> Option<agent_client_protocol::SessionModeId> {
None
}
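
Removing `telemetry_id` from the `AgentServer` trait goes together with the connection-level change from `&'static str` to `SharedString`: the id can now come from runtime data, such as the name the agent reports at initialization, rather than a compile-time constant. A standalone sketch of why an owned, cheaply clonable string is needed; `Arc<str>` stands in for gpui's `SharedString` and the types are illustrative:

```rust
use std::sync::Arc;

trait Connection {
    // A borrowed &'static str would only allow compile-time constants here.
    fn telemetry_id(&self) -> Arc<str>;
}

struct AgentConnectionSketch {
    telemetry_id: Arc<str>,
}

impl Connection for AgentConnectionSketch {
    fn telemetry_id(&self) -> Arc<str> {
        // Cloning an Arc<str> is a cheap reference-count bump.
        self.telemetry_id.clone()
    }
}

fn main() {
    // e.g. the name returned by the agent in its initialize response.
    let reported_name = String::from("my-custom-agent");
    let conn = AgentConnectionSketch {
        telemetry_id: reported_name.into(),
    };
    assert_eq!(&*conn.telemetry_id(), "my-custom-agent");
}
```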

View File

@@ -22,10 +22,6 @@ pub struct AgentServerLoginCommand {
}
impl AgentServer for ClaudeCode {
fn telemetry_id(&self) -> &'static str {
"claude-code"
}
fn name(&self) -> SharedString {
"Claude Code".into()
}
@@ -83,7 +79,6 @@ impl AgentServer for ClaudeCode {
cx: &mut App,
) -> Task<Result<(Rc<dyn AgentConnection>, Option<task::SpawnInTerminal>)>> {
let name = self.name();
let telemetry_id = self.telemetry_id();
let root_dir = root_dir.map(|root_dir| root_dir.to_string_lossy().into_owned());
let is_remote = delegate.project.read(cx).is_via_remote_server();
let store = delegate.store.downgrade();
@@ -108,7 +103,6 @@ impl AgentServer for ClaudeCode {
.await?;
let connection = crate::acp::connect(
name,
telemetry_id,
command,
root_dir.as_ref(),
default_mode,

View File

@@ -23,10 +23,6 @@ pub(crate) mod tests {
}
impl AgentServer for Codex {
fn telemetry_id(&self) -> &'static str {
"codex"
}
fn name(&self) -> SharedString {
"Codex".into()
}
@@ -84,7 +80,6 @@ impl AgentServer for Codex {
cx: &mut App,
) -> Task<Result<(Rc<dyn AgentConnection>, Option<task::SpawnInTerminal>)>> {
let name = self.name();
let telemetry_id = self.telemetry_id();
let root_dir = root_dir.map(|root_dir| root_dir.to_string_lossy().into_owned());
let is_remote = delegate.project.read(cx).is_via_remote_server();
let store = delegate.store.downgrade();
@@ -110,7 +105,6 @@ impl AgentServer for Codex {
let connection = crate::acp::connect(
name,
telemetry_id,
command,
root_dir.as_ref(),
default_mode,

View File

@@ -1,4 +1,4 @@
use crate::{AgentServerDelegate, load_proxy_env};
use crate::{AgentServer, AgentServerDelegate, load_proxy_env};
use acp_thread::AgentConnection;
use agent_client_protocol as acp;
use anyhow::{Context as _, Result};
@@ -20,11 +20,7 @@ impl CustomAgentServer {
}
}
impl crate::AgentServer for CustomAgentServer {
fn telemetry_id(&self) -> &'static str {
"custom"
}
impl AgentServer for CustomAgentServer {
fn name(&self) -> SharedString {
self.name.clone()
}
@@ -112,14 +108,12 @@ impl crate::AgentServer for CustomAgentServer {
cx: &mut App,
) -> Task<Result<(Rc<dyn AgentConnection>, Option<task::SpawnInTerminal>)>> {
let name = self.name();
let telemetry_id = self.telemetry_id();
let root_dir = root_dir.map(|root_dir| root_dir.to_string_lossy().into_owned());
let is_remote = delegate.project.read(cx).is_via_remote_server();
let default_mode = self.default_mode(cx);
let default_model = self.default_model(cx);
let store = delegate.store.downgrade();
let extra_env = load_proxy_env(cx);
cx.spawn(async move |cx| {
let (command, root_dir, login) = store
.update(cx, |store, cx| {
@@ -139,7 +133,6 @@ impl crate::AgentServer for CustomAgentServer {
.await?;
let connection = crate::acp::connect(
name,
telemetry_id,
command,
root_dir.as_ref(),
default_mode,

View File

@@ -12,10 +12,6 @@ use project::agent_server_store::GEMINI_NAME;
pub struct Gemini;
impl AgentServer for Gemini {
fn telemetry_id(&self) -> &'static str {
"gemini-cli"
}
fn name(&self) -> SharedString {
"Gemini CLI".into()
}
@@ -31,7 +27,6 @@ impl AgentServer for Gemini {
cx: &mut App,
) -> Task<Result<(Rc<dyn AgentConnection>, Option<task::SpawnInTerminal>)>> {
let name = self.name();
let telemetry_id = self.telemetry_id();
let root_dir = root_dir.map(|root_dir| root_dir.to_string_lossy().into_owned());
let is_remote = delegate.project.read(cx).is_via_remote_server();
let store = delegate.store.downgrade();
@@ -66,7 +61,6 @@ impl AgentServer for Gemini {
let connection = crate::acp::connect(
name,
telemetry_id,
command,
root_dir.as_ref(),
default_mode,

View File

@@ -13,7 +13,8 @@ path = "src/agent_ui.rs"
doctest = false
[features]
test-support = ["gpui/test-support", "language/test-support"]
test-support = ["gpui/test-support", "language/test-support", "reqwest_client"]
unit-eval = []
[dependencies]
acp_thread.workspace = true
@@ -47,6 +48,7 @@ fs.workspace = true
futures.workspace = true
fuzzy.workspace = true
gpui.workspace = true
gpui_tokio.workspace = true
html_to_markdown.workspace = true
http_client.workspace = true
indoc.workspace = true
@@ -93,19 +95,23 @@ ui.workspace = true
ui_input.workspace = true
url.workspace = true
util.workspace = true
uuid.workspace = true
watch.workspace = true
workspace.workspace = true
zed_actions.workspace = true
image.workspace = true
async-fs.workspace = true
reqwest_client = { workspace = true, optional = true }
[dev-dependencies]
acp_thread = { workspace = true, features = ["test-support"] }
agent = { workspace = true, features = ["test-support"] }
assistant_text_thread = { workspace = true, features = ["test-support"] }
buffer_diff = { workspace = true, features = ["test-support"] }
clock.workspace = true
db = { workspace = true, features = ["test-support"] }
editor = { workspace = true, features = ["test-support"] }
eval_utils.workspace = true
gpui = { workspace = true, "features" = ["test-support"] }
indoc.workspace = true
language = { workspace = true, "features" = ["test-support"] }
@@ -115,5 +121,6 @@ pretty_assertions.workspace = true
project = { workspace = true, features = ["test-support"] }
semver.workspace = true
rand.workspace = true
reqwest_client.workspace = true
tree-sitter-md.workspace = true
unindent.workspace = true

View File

@@ -21,8 +21,8 @@ use editor::{
};
use futures::{FutureExt as _, future::join_all};
use gpui::{
AppContext, Context, Entity, EventEmitter, FocusHandle, Focusable, ImageFormat, KeyContext,
SharedString, Subscription, Task, TextStyle, WeakEntity,
AppContext, ClipboardEntry, Context, Entity, EventEmitter, FocusHandle, Focusable, ImageFormat,
KeyContext, SharedString, Subscription, Task, TextStyle, WeakEntity,
};
use language::{Buffer, Language, language_settings::InlayHintKind};
use project::{CompletionIntent, InlayHint, InlayHintLabel, InlayId, Project, Worktree};
@@ -417,13 +417,12 @@ impl MessageEditor {
))
}
}
Mention::Image(mention_image) => {
let mut image = acp::ImageContent::new(
Mention::Image(mention_image) => acp::ContentBlock::Image(
acp::ImageContent::new(
mention_image.data.clone(),
mention_image.format.mime_type(),
);
if let Some(uri) = match uri {
)
.uri(match uri {
MentionUri::File { .. } => Some(uri.to_uri().to_string()),
MentionUri::PastedImage => None,
other => {
@@ -433,11 +432,8 @@ impl MessageEditor {
);
None
}
} {
image = image.uri(uri)
};
acp::ContentBlock::Image(image)
}
}),
),
Mention::Link => acp::ContentBlock::ResourceLink(
acp::ResourceLink::new(uri.name(), uri.to_uri().to_string()),
),
@@ -547,6 +543,145 @@ impl MessageEditor {
}
fn paste(&mut self, _: &Paste, window: &mut Window, cx: &mut Context<Self>) {
let editor_clipboard_selections = cx
.read_from_clipboard()
.and_then(|item| item.entries().first().cloned())
.and_then(|entry| match entry {
ClipboardEntry::String(text) => {
text.metadata_json::<Vec<editor::ClipboardSelection>>()
}
_ => None,
});
let has_file_context = editor_clipboard_selections
.as_ref()
.is_some_and(|selections| {
selections
.iter()
.any(|sel| sel.file_path.is_some() && sel.line_range.is_some())
});
if has_file_context {
if let Some((workspace, selections)) =
self.workspace.upgrade().zip(editor_clipboard_selections)
{
let Some(first_selection) = selections.first() else {
return;
};
if let Some(file_path) = &first_selection.file_path {
// In case someone pastes selections from another window
// with a different project, we don't want to insert the
// crease (containing the absolute path) since the agent
// cannot access files outside the project.
let is_in_project = workspace
.read(cx)
.project()
.read(cx)
.project_path_for_absolute_path(file_path, cx)
.is_some();
if !is_in_project {
return;
}
}
cx.stop_propagation();
let insertion_target = self
.editor
.read(cx)
.selections
.newest_anchor()
.start
.text_anchor;
let project = workspace.read(cx).project().clone();
for selection in selections {
if let (Some(file_path), Some(line_range)) =
(selection.file_path, selection.line_range)
{
let crease_text =
acp_thread::selection_name(Some(file_path.as_ref()), &line_range);
let mention_uri = MentionUri::Selection {
abs_path: Some(file_path.clone()),
line_range: line_range.clone(),
};
let mention_text = mention_uri.as_link().to_string();
let (excerpt_id, text_anchor, content_len) =
self.editor.update(cx, |editor, cx| {
let buffer = editor.buffer().read(cx);
let snapshot = buffer.snapshot(cx);
let (excerpt_id, _, buffer_snapshot) =
snapshot.as_singleton().unwrap();
let text_anchor = insertion_target.bias_left(&buffer_snapshot);
editor.insert(&mention_text, window, cx);
editor.insert(" ", window, cx);
(*excerpt_id, text_anchor, mention_text.len())
});
let Some((crease_id, tx)) = insert_crease_for_mention(
excerpt_id,
text_anchor,
content_len,
crease_text.into(),
mention_uri.icon_path(cx),
None,
self.editor.clone(),
window,
cx,
) else {
continue;
};
drop(tx);
let mention_task = cx
.spawn({
let project = project.clone();
async move |_, cx| {
let project_path = project
.update(cx, |project, cx| {
project.project_path_for_absolute_path(&file_path, cx)
})
.map_err(|e| e.to_string())?
.ok_or_else(|| "project path not found".to_string())?;
let buffer = project
.update(cx, |project, cx| {
project.open_buffer(project_path, cx)
})
.map_err(|e| e.to_string())?
.await
.map_err(|e| e.to_string())?;
buffer
.update(cx, |buffer, cx| {
let start = Point::new(*line_range.start(), 0)
.min(buffer.max_point());
let end = Point::new(*line_range.end() + 1, 0)
.min(buffer.max_point());
let content =
buffer.text_for_range(start..end).collect();
Mention::Text {
content,
tracked_buffers: vec![cx.entity()],
}
})
.map_err(|e| e.to_string())
}
})
.shared();
self.mention_set.update(cx, |mention_set, _cx| {
mention_set.insert_mention(crease_id, mention_uri.clone(), mention_task)
});
}
}
return;
}
}
if self.prompt_capabilities.borrow().image
&& let Some(task) =
paste_images_as_context(self.editor.clone(), self.mention_set.clone(), window, cx)

View File

@@ -161,7 +161,7 @@ impl Render for ModeSelector {
.map(|mode| mode.name.clone())
.unwrap_or_else(|| "Unknown".into());
let this = cx.entity();
let this = cx.weak_entity();
let icon = if self.menu_handle.is_deployed() {
IconName::ChevronUp
@@ -222,7 +222,8 @@ impl Render for ModeSelector {
y: px(-2.0),
})
.menu(move |window, cx| {
Some(this.update(cx, |this, cx| this.build_context_menu(window, cx)))
this.update(cx, |this, cx| this.build_context_menu(window, cx))
.ok()
})
}
}
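
Switching the captured handle from `cx.entity()` to `cx.weak_entity()` means the long-lived menu closure no longer keeps the selector alive on its own; it upgrades on demand and returns `None` once the entity is gone, hence the added `.ok()`. A standalone sketch of the same ownership pattern using `Rc`/`Weak` in place of gpui's `Entity`/`WeakEntity`:

```rust
use std::rc::{Rc, Weak};

struct ModeSelector {
    label: String,
}

// The menu closure holds only a weak handle and upgrades when invoked.
fn build_menu(this: Weak<ModeSelector>) -> impl Fn() -> Option<String> {
    move || this.upgrade().map(|selector| selector.label.clone())
}

fn main() {
    let selector = Rc::new(ModeSelector { label: "Ask".into() });
    let menu = build_menu(Rc::downgrade(&selector));

    assert_eq!(menu(), Some("Ask".into()));

    drop(selector);
    // The closure no longer extends the entity's lifetime; it just yields None.
    assert_eq!(menu(), None);
}
```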

View File

@@ -100,7 +100,7 @@ impl ThreadError {
{
Self::ModelRequestLimitReached(error.plan)
} else if let Some(acp_error) = error.downcast_ref::<acp::Error>()
&& acp_error.code == acp::ErrorCode::AUTH_REQUIRED.code
&& acp_error.code == acp::ErrorCode::AuthRequired
{
Self::AuthenticationRequired(acp_error.message.clone().into())
} else {
@@ -170,7 +170,7 @@ impl ThreadFeedbackState {
}
}
let session_id = thread.read(cx).session_id().clone();
let agent = thread.read(cx).connection().telemetry_id();
let agent_telemetry_id = thread.read(cx).connection().telemetry_id();
let task = telemetry.thread_data(&session_id, cx);
let rating = match feedback {
ThreadFeedback::Positive => "positive",
@@ -180,7 +180,7 @@ impl ThreadFeedbackState {
let thread = task.await?;
telemetry::event!(
"Agent Thread Rated",
agent = agent,
agent = agent_telemetry_id,
session_id = session_id,
rating = rating,
thread = thread
@@ -207,13 +207,13 @@ impl ThreadFeedbackState {
self.comments_editor.take();
let session_id = thread.read(cx).session_id().clone();
let agent = thread.read(cx).connection().telemetry_id();
let agent_telemetry_id = thread.read(cx).connection().telemetry_id();
let task = telemetry.thread_data(&session_id, cx);
cx.background_spawn(async move {
let thread = task.await?;
telemetry::event!(
"Agent Thread Feedback Comments",
agent = agent,
agent = agent_telemetry_id,
session_id = session_id,
comments = comments,
thread = thread
@@ -333,6 +333,7 @@ impl AcpThreadView {
project: Entity<Project>,
history_store: Entity<HistoryStore>,
prompt_store: Option<Entity<PromptStore>>,
track_load_event: bool,
window: &mut Window,
cx: &mut Context<Self>,
) -> Self {
@@ -391,8 +392,9 @@ impl AcpThreadView {
),
];
let show_codex_windows_warning = crate::ExternalAgent::parse_built_in(agent.as_ref())
== Some(crate::ExternalAgent::Codex);
let show_codex_windows_warning = cfg!(windows)
&& project.read(cx).is_local()
&& agent.clone().downcast::<agent_servers::Codex>().is_some();
Self {
agent: agent.clone(),
@@ -404,6 +406,7 @@ impl AcpThreadView {
resume_thread.clone(),
workspace.clone(),
project.clone(),
track_load_event,
window,
cx,
),
@@ -448,6 +451,7 @@ impl AcpThreadView {
self.resume_thread_metadata.clone(),
self.workspace.clone(),
self.project.clone(),
true,
window,
cx,
);
@@ -461,6 +465,7 @@ impl AcpThreadView {
resume_thread: Option<DbThreadMetadata>,
workspace: WeakEntity<Workspace>,
project: Entity<Project>,
track_load_event: bool,
window: &mut Window,
cx: &mut Context<Self>,
) -> ThreadState {
@@ -519,6 +524,10 @@ impl AcpThreadView {
}
};
if track_load_event {
telemetry::event!("Agent Thread Started", agent = connection.telemetry_id());
}
let result = if let Some(native_agent) = connection
.clone()
.downcast::<agent::NativeAgentConnection>()
@@ -1133,8 +1142,8 @@ impl AcpThreadView {
let Some(thread) = self.thread() else {
return;
};
let agent_telemetry_id = self.agent.telemetry_id();
let session_id = thread.read(cx).session_id().clone();
let agent_telemetry_id = thread.read(cx).connection().telemetry_id();
let thread = thread.downgrade();
if self.should_be_following {
self.workspace
@@ -1512,6 +1521,7 @@ impl AcpThreadView {
else {
return;
};
let agent_telemetry_id = connection.telemetry_id();
// Check for the experimental "terminal-auth" _meta field
let auth_method = connection.auth_methods().iter().find(|m| m.id == method);
@@ -1579,19 +1589,18 @@ impl AcpThreadView {
);
cx.notify();
self.auth_task = Some(cx.spawn_in(window, {
let agent = self.agent.clone();
async move |this, cx| {
let result = authenticate.await;
match &result {
Ok(_) => telemetry::event!(
"Authenticate Agent Succeeded",
agent = agent.telemetry_id()
agent = agent_telemetry_id
),
Err(_) => {
telemetry::event!(
"Authenticate Agent Failed",
agent = agent.telemetry_id(),
agent = agent_telemetry_id,
)
}
}
@@ -1675,6 +1684,7 @@ impl AcpThreadView {
None,
this.workspace.clone(),
this.project.clone(),
true,
window,
cx,
)
@@ -1730,43 +1740,38 @@ impl AcpThreadView {
connection.authenticate(method, cx)
};
cx.notify();
self.auth_task =
Some(cx.spawn_in(window, {
let agent = self.agent.clone();
async move |this, cx| {
let result = authenticate.await;
self.auth_task = Some(cx.spawn_in(window, {
async move |this, cx| {
let result = authenticate.await;
match &result {
Ok(_) => telemetry::event!(
"Authenticate Agent Succeeded",
agent = agent.telemetry_id()
),
Err(_) => {
telemetry::event!(
"Authenticate Agent Failed",
agent = agent.telemetry_id(),
)
}
match &result {
Ok(_) => telemetry::event!(
"Authenticate Agent Succeeded",
agent = agent_telemetry_id
),
Err(_) => {
telemetry::event!("Authenticate Agent Failed", agent = agent_telemetry_id,)
}
this.update_in(cx, |this, window, cx| {
if let Err(err) = result {
if let ThreadState::Unauthenticated {
pending_auth_method,
..
} = &mut this.thread_state
{
pending_auth_method.take();
}
this.handle_thread_error(err, cx);
} else {
this.reset(window, cx);
}
this.auth_task.take()
})
.ok();
}
}));
this.update_in(cx, |this, window, cx| {
if let Err(err) = result {
if let ThreadState::Unauthenticated {
pending_auth_method,
..
} = &mut this.thread_state
{
pending_auth_method.take();
}
this.handle_thread_error(err, cx);
} else {
this.reset(window, cx);
}
this.auth_task.take()
})
.ok();
}
}));
}
fn spawn_external_agent_login(
@@ -1896,10 +1901,11 @@ impl AcpThreadView {
let Some(thread) = self.thread() else {
return;
};
let agent_telemetry_id = thread.read(cx).connection().telemetry_id();
telemetry::event!(
"Agent Tool Call Authorized",
agent = self.agent.telemetry_id(),
agent = agent_telemetry_id,
session = thread.read(cx).session_id(),
option = option_kind
);
@@ -3509,7 +3515,9 @@ impl AcpThreadView {
(method.id.0.clone(), method.name.clone())
};
Button::new(SharedString::from(method_id.clone()), name)
let agent_telemetry_id = connection.telemetry_id();
Button::new(method_id.clone(), name)
.label_size(LabelSize::Small)
.map(|this| {
if ix == 0 {
@@ -3528,7 +3536,7 @@ impl AcpThreadView {
cx.listener(move |this, _, window, cx| {
telemetry::event!(
"Authenticate Agent Started",
agent = this.agent.telemetry_id(),
agent = agent_telemetry_id,
method = method_id
);
@@ -5376,47 +5384,39 @@ impl AcpThreadView {
)
}
fn render_codex_windows_warning(&self, cx: &mut Context<Self>) -> Option<Callout> {
if self.show_codex_windows_warning {
Some(
Callout::new()
.icon(IconName::Warning)
.severity(Severity::Warning)
.title("Codex on Windows")
.description(
"For best performance, run Codex in Windows Subsystem for Linux (WSL2)",
)
.actions_slot(
Button::new("open-wsl-modal", "Open in WSL")
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.on_click(cx.listener({
move |_, _, _window, cx| {
#[cfg(windows)]
_window.dispatch_action(
zed_actions::wsl_actions::OpenWsl::default().boxed_clone(),
cx,
);
cx.notify();
}
})),
)
.dismiss_action(
IconButton::new("dismiss", IconName::Close)
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.tooltip(Tooltip::text("Dismiss Warning"))
.on_click(cx.listener({
move |this, _, _, cx| {
this.show_codex_windows_warning = false;
cx.notify();
}
})),
),
fn render_codex_windows_warning(&self, cx: &mut Context<Self>) -> Callout {
Callout::new()
.icon(IconName::Warning)
.severity(Severity::Warning)
.title("Codex on Windows")
.description("For best performance, run Codex in Windows Subsystem for Linux (WSL2)")
.actions_slot(
Button::new("open-wsl-modal", "Open in WSL")
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.on_click(cx.listener({
move |_, _, _window, cx| {
#[cfg(windows)]
_window.dispatch_action(
zed_actions::wsl_actions::OpenWsl::default().boxed_clone(),
cx,
);
cx.notify();
}
})),
)
.dismiss_action(
IconButton::new("dismiss", IconName::Close)
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.tooltip(Tooltip::text("Dismiss Warning"))
.on_click(cx.listener({
move |this, _, _, cx| {
this.show_codex_windows_warning = false;
cx.notify();
}
})),
)
} else {
None
}
}
fn render_thread_error(&mut self, window: &mut Window, cx: &mut Context<Self>) -> Option<Div> {
@@ -5936,12 +5936,8 @@ impl Render for AcpThreadView {
_ => this,
})
.children(self.render_thread_retry_status_callout(window, cx))
.children({
if cfg!(windows) && self.project.read(cx).is_local() {
self.render_codex_windows_warning(cx)
} else {
None
}
.when(self.show_codex_windows_warning, |this| {
this.child(self.render_codex_windows_warning(cx))
})
.children(self.render_thread_error(window, cx))
.when_some(
@@ -6243,7 +6239,7 @@ pub(crate) mod tests {
StubAgentConnection::new().with_permission_requests(HashMap::from_iter([(
tool_call_id,
vec![acp::PermissionOption::new(
"1".into(),
"1",
"Allow",
acp::PermissionOptionKind::AllowOnce,
)],
@@ -6398,6 +6394,7 @@ pub(crate) mod tests {
project,
history_store,
None,
false,
window,
cx,
)
@@ -6475,10 +6472,6 @@ pub(crate) mod tests {
where
C: 'static + AgentConnection + Send + Clone,
{
fn telemetry_id(&self) -> &'static str {
"test"
}
fn logo(&self) -> ui::IconName {
ui::IconName::Ai
}
@@ -6505,8 +6498,8 @@ pub(crate) mod tests {
struct SaboteurAgentConnection;
impl AgentConnection for SaboteurAgentConnection {
fn telemetry_id(&self) -> &'static str {
"saboteur"
fn telemetry_id(&self) -> SharedString {
"saboteur".into()
}
fn new_thread(
@@ -6569,8 +6562,8 @@ pub(crate) mod tests {
struct RefusalAgentConnection;
impl AgentConnection for RefusalAgentConnection {
fn telemetry_id(&self) -> &'static str {
"refusal"
fn telemetry_id(&self) -> SharedString {
"refusal".into()
}
fn new_thread(
@@ -6671,6 +6664,7 @@ pub(crate) mod tests {
project.clone(),
history_store.clone(),
None,
false,
window,
cx,
)
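
The render change above replaces an `Option`-returning warning callout, and the `if … { Some(...) } else { None }` dance at the call site, with an unconditional builder plus a `when` guard. A rough standalone re-implementation of that fluent `when` combinator, purely illustrative and not gpui's actual element API:

```rust
struct Row {
    children: Vec<String>,
}

impl Row {
    fn new() -> Self {
        Self { children: Vec::new() }
    }

    fn child(mut self, child: impl Into<String>) -> Self {
        self.children.push(child.into());
        self
    }

    // Apply the closure only when the condition holds, keeping the chain flat.
    fn when(self, condition: bool, then: impl FnOnce(Self) -> Self) -> Self {
        if condition { then(self) } else { self }
    }
}

fn main() {
    let show_warning = true;
    let row = Row::new()
        .child("retry-status")
        .when(show_warning, |row| row.child("codex-windows-warning"))
        .child("thread-error");

    assert_eq!(row.children.len(), 3);
}
```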

View File

@@ -36,7 +36,7 @@ use settings::{Settings, SettingsStore, update_settings_file};
use ui::{
Button, ButtonStyle, Chip, CommonAnimationExt, ContextMenu, ContextMenuEntry, Disclosure,
Divider, DividerColor, ElevationIndex, IconName, IconPosition, IconSize, Indicator, LabelSize,
PopoverMenu, Switch, SwitchColor, Tooltip, WithScrollbar, prelude::*,
PopoverMenu, Switch, Tooltip, WithScrollbar, prelude::*,
};
use util::ResultExt as _;
use workspace::{Workspace, create_and_open_local_file};
@@ -838,7 +838,7 @@ impl AgentConfiguration {
.min_w_0()
.child(
h_flex()
.id(SharedString::from(format!("tooltip-{}", item_id)))
.id(format!("tooltip-{}", item_id))
.h_full()
.w_3()
.mr_2()
@@ -879,7 +879,6 @@ impl AgentConfiguration {
.child(context_server_configuration_menu)
.child(
Switch::new("context-server-switch", is_running.into())
.color(SwitchColor::Accent)
.on_click({
let context_server_manager = self.context_server_store.clone();
let fs = self.fs.clone();
@@ -978,7 +977,10 @@ impl AgentConfiguration {
} else {
AgentIcon::Name(IconName::Ai)
};
(name, icon)
let display_name = agent_server_store
.agent_display_name(&name)
.unwrap_or_else(|| name.0.clone());
(name, icon, display_name)
})
.collect();
@@ -1085,6 +1087,7 @@ impl AgentConfiguration {
.child(self.render_agent_server(
AgentIcon::Name(IconName::AiClaude),
"Claude Code",
"Claude Code",
false,
cx,
))
@@ -1092,6 +1095,7 @@ impl AgentConfiguration {
.child(self.render_agent_server(
AgentIcon::Name(IconName::AiOpenAi),
"Codex CLI",
"Codex CLI",
false,
cx,
))
@@ -1099,16 +1103,23 @@ impl AgentConfiguration {
.child(self.render_agent_server(
AgentIcon::Name(IconName::AiGemini),
"Gemini CLI",
"Gemini CLI",
false,
cx,
))
.map(|mut parent| {
for (name, icon) in user_defined_agents {
for (name, icon, display_name) in user_defined_agents {
parent = parent
.child(
Divider::horizontal().color(DividerColor::BorderFaded),
)
.child(self.render_agent_server(icon, name, true, cx));
.child(self.render_agent_server(
icon,
name,
display_name,
true,
cx,
));
}
parent
}),
@@ -1119,11 +1130,13 @@ impl AgentConfiguration {
fn render_agent_server(
&self,
icon: AgentIcon,
name: impl Into<SharedString>,
id: impl Into<SharedString>,
display_name: impl Into<SharedString>,
external: bool,
cx: &mut Context<Self>,
) -> impl IntoElement {
let name = name.into();
let id = id.into();
let display_name = display_name.into();
let icon = match icon {
AgentIcon::Name(icon_name) => Icon::new(icon_name)
.size(IconSize::Small)
@@ -1133,12 +1146,15 @@ impl AgentConfiguration {
.color(Color::Muted),
};
let tooltip_id = SharedString::new(format!("agent-source-{}", name));
let tooltip_message = format!("The {} agent was installed from an extension.", name);
let tooltip_id = SharedString::new(format!("agent-source-{}", id));
let tooltip_message = format!(
"The {} agent was installed from an extension.",
display_name
);
let agent_server_name = ExternalAgentServerName(name.clone());
let agent_server_name = ExternalAgentServerName(id.clone());
let uninstall_btn_id = SharedString::from(format!("uninstall-{}", name));
let uninstall_btn_id = SharedString::from(format!("uninstall-{}", id));
let uninstall_button = IconButton::new(uninstall_btn_id, IconName::Trash)
.icon_color(Color::Muted)
.icon_size(IconSize::Small)
@@ -1162,7 +1178,7 @@ impl AgentConfiguration {
h_flex()
.gap_1p5()
.child(icon)
.child(Label::new(name))
.child(Label::new(display_name))
.when(external, |this| {
this.child(
div()

View File

@@ -87,7 +87,7 @@ impl ConfigureContextServerToolsModal {
v_flex()
.child(
h_flex()
.id(SharedString::from(format!("tool-header-{}", index)))
.id(format!("tool-header-{}", index))
.py_1()
.pl_1()
.pr_2()

View File

@@ -422,7 +422,7 @@ impl ManageProfilesModal {
let is_focused = profile.navigation.focus_handle.contains_focused(window, cx);
div()
.id(SharedString::from(format!("profile-{}", profile.id)))
.id(format!("profile-{}", profile.id))
.track_focus(&profile.navigation.focus_handle)
.on_action({
let profile_id = profile.id.clone();
@@ -431,7 +431,7 @@ impl ManageProfilesModal {
})
})
.child(
ListItem::new(SharedString::from(format!("profile-{}", profile.id)))
ListItem::new(format!("profile-{}", profile.id))
.toggle_state(is_focused)
.inset(true)
.spacing(ListItemSpacing::Sparse)

View File

@@ -63,6 +63,10 @@ impl AgentModelSelector {
pub fn toggle(&self, window: &mut Window, cx: &mut Context<Self>) {
self.menu_handle.toggle(window, cx);
}
pub fn active_model(&self, cx: &App) -> Option<language_model::ConfiguredModel> {
self.selector.read(cx).delegate.active_model(cx)
}
}
impl Render for AgentModelSelector {
@@ -98,7 +102,7 @@ impl Render for AgentModelSelector {
.child(
Icon::new(IconName::ChevronDown)
.color(color)
.size(IconSize::XSmall),
.size(IconSize::Small),
),
move |_window, cx| {
Tooltip::for_action_in("Change Model", &ToggleModelSelector, &focus_handle, cx)

View File

@@ -305,6 +305,7 @@ impl ActiveView {
project,
history_store,
prompt_store,
false,
window,
cx,
)
@@ -885,10 +886,6 @@ impl AgentPanel {
let server = ext_agent.server(fs, history);
if !loading {
telemetry::event!("Agent Thread Started", agent = server.telemetry_id());
}
this.update_in(cx, |this, window, cx| {
let selected_agent = ext_agent.into();
if this.selected_agent != selected_agent {
@@ -905,6 +902,7 @@ impl AgentPanel {
project,
this.history_store.clone(),
this.prompt_store.clone(),
!loading,
window,
cx,
)
@@ -2083,8 +2081,11 @@ impl AgentPanel {
for agent_name in agent_names {
let icon_path = agent_server_store.agent_icon(&agent_name);
let display_name = agent_server_store
.agent_display_name(&agent_name)
.unwrap_or_else(|| agent_name.0.clone());
let mut entry = ContextMenuEntry::new(agent_name.clone());
let mut entry = ContextMenuEntry::new(display_name);
if let Some(icon_path) = icon_path {
entry = entry.custom_icon_svg(icon_path);
@@ -2685,16 +2686,17 @@ impl rules_library::InlineAssistDelegate for PromptLibraryInlineAssist {
return;
};
let project = workspace.read(cx).project().downgrade();
let thread_store = panel.read(cx).thread_store().clone();
assistant.assist(
prompt_editor,
self.workspace.clone(),
project,
panel.read(cx).thread_store().clone(),
thread_store,
None,
initial_prompt,
window,
cx,
)
);
})
}

View File

@@ -7,6 +7,8 @@ mod buffer_codegen;
mod completion_provider;
mod context;
mod context_server_configuration;
#[cfg(test)]
mod evals;
mod inline_assistant;
mod inline_prompt_editor;
mod language_model_selector;
@@ -158,16 +160,6 @@ pub enum ExternalAgent {
}
impl ExternalAgent {
pub fn parse_built_in(server: &dyn agent_servers::AgentServer) -> Option<Self> {
match server.telemetry_id() {
"gemini-cli" => Some(Self::Gemini),
"claude-code" => Some(Self::ClaudeCode),
"codex" => Some(Self::Codex),
"zed" => Some(Self::NativeAgent),
_ => None,
}
}
pub fn server(
&self,
fs: Arc<dyn fs::Fs>,

View File

@@ -5,22 +5,26 @@ use client::telemetry::Telemetry;
use cloud_llm_client::CompletionIntent;
use collections::HashSet;
use editor::{Anchor, AnchorRangeExt, MultiBuffer, MultiBufferSnapshot, ToOffset as _, ToPoint};
use feature_flags::{FeatureFlagAppExt as _, InlineAssistantV2FeatureFlag};
use futures::{
SinkExt, Stream, StreamExt, TryStreamExt as _,
channel::mpsc,
future::{LocalBoxFuture, Shared},
join,
};
use gpui::{App, AppContext as _, Context, Entity, EventEmitter, Subscription, Task};
use gpui::{App, AppContext as _, AsyncApp, Context, Entity, EventEmitter, Subscription, Task};
use language::{Buffer, IndentKind, Point, TransactionId, line_diff};
use language_model::{
LanguageModel, LanguageModelRegistry, LanguageModelRequest, LanguageModelRequestMessage,
LanguageModelTextStream, Role, report_assistant_event,
LanguageModel, LanguageModelCompletionError, LanguageModelRegistry, LanguageModelRequest,
LanguageModelRequestMessage, LanguageModelRequestTool, LanguageModelTextStream, Role,
report_assistant_event,
};
use multi_buffer::MultiBufferRow;
use parking_lot::Mutex;
use prompt_store::PromptBuilder;
use rope::Rope;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use smol::future::FutureExt;
use std::{
cmp,
@@ -34,6 +38,29 @@ use std::{
};
use streaming_diff::{CharOperation, LineDiff, LineOperation, StreamingDiff};
use telemetry_events::{AssistantEventData, AssistantKind, AssistantPhase};
use ui::SharedString;
/// Use this tool to provide a message to the user when you're unable to complete a task.
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct FailureMessageInput {
/// A brief message to the user explaining why you're unable to fulfill the request or to ask a question about the request.
///
/// The message may use markdown formatting if you wish.
pub message: String,
}
/// Replaces text in <rewrite_this></rewrite_this> tags with your replacement_text.
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct RewriteSectionInput {
/// A brief description of the edit you have made.
///
/// The description may use markdown formatting if you wish.
/// This is optional - if the edit is simple or obvious, you should leave it empty.
pub description: String,
/// The text to replace the section with.
pub replacement_text: String,
}
pub struct BufferCodegen {
alternatives: Vec<Entity<CodegenAlternative>>,
@@ -92,6 +119,10 @@ impl BufferCodegen {
.push(cx.subscribe(&codegen, |_, _, event, cx| cx.emit(*event)));
}
pub fn active_completion(&self, cx: &App) -> Option<String> {
self.active_alternative().read(cx).current_completion()
}
pub fn active_alternative(&self) -> &Entity<CodegenAlternative> {
&self.alternatives[self.active_alternative]
}
@@ -214,6 +245,10 @@ impl BufferCodegen {
pub fn last_equal_ranges<'a>(&self, cx: &'a App) -> &'a [Range<Anchor>] {
self.active_alternative().read(cx).last_equal_ranges()
}
pub fn selected_text<'a>(&self, cx: &'a App) -> Option<&'a str> {
self.active_alternative().read(cx).selected_text()
}
}
impl EventEmitter<CodegenEvent> for BufferCodegen {}
@@ -237,7 +272,9 @@ pub struct CodegenAlternative {
line_operations: Vec<LineOperation>,
elapsed_time: Option<f64>,
completion: Option<String>,
selected_text: Option<String>,
pub message_id: Option<String>,
pub model_explanation: Option<SharedString>,
}
impl EventEmitter<CodegenEvent> for CodegenAlternative {}
@@ -288,14 +325,16 @@ impl CodegenAlternative {
generation: Task::ready(()),
diff: Diff::default(),
telemetry,
_subscription: cx.subscribe(&buffer, Self::handle_buffer_event),
builder,
active,
active: active,
edits: Vec::new(),
line_operations: Vec::new(),
range,
elapsed_time: None,
completion: None,
selected_text: None,
model_explanation: None,
_subscription: cx.subscribe(&buffer, Self::handle_buffer_event),
}
}
@@ -358,20 +397,126 @@ impl CodegenAlternative {
let api_key = model.api_key(cx);
let telemetry_id = model.telemetry_id();
let provider_id = model.provider_id();
let stream: LocalBoxFuture<Result<LanguageModelTextStream>> =
if user_prompt.trim().to_lowercase() == "delete" {
async { Ok(LanguageModelTextStream::default()) }.boxed_local()
} else {
let request = self.build_request(&model, user_prompt, context_task, cx)?;
cx.spawn(async move |_, cx| {
Ok(model.stream_completion_text(request.await, cx).await?)
})
.boxed_local()
};
self.handle_stream(telemetry_id, provider_id.to_string(), api_key, stream, cx);
if cx.has_flag::<InlineAssistantV2FeatureFlag>() {
let request = self.build_request(&model, user_prompt, context_task, cx)?;
let tool_use =
cx.spawn(async move |_, cx| model.stream_completion_tool(request.await, cx).await);
self.handle_tool_use(telemetry_id, provider_id.to_string(), api_key, tool_use, cx);
} else {
let stream: LocalBoxFuture<Result<LanguageModelTextStream>> =
if user_prompt.trim().to_lowercase() == "delete" {
async { Ok(LanguageModelTextStream::default()) }.boxed_local()
} else {
let request = self.build_request(&model, user_prompt, context_task, cx)?;
cx.spawn(async move |_, cx| {
Ok(model.stream_completion_text(request.await, cx).await?)
})
.boxed_local()
};
self.handle_stream(telemetry_id, provider_id.to_string(), api_key, stream, cx);
}
Ok(())
}
fn build_request_v2(
&self,
model: &Arc<dyn LanguageModel>,
user_prompt: String,
context_task: Shared<Task<Option<LoadedContext>>>,
cx: &mut App,
) -> Result<Task<LanguageModelRequest>> {
let buffer = self.buffer.read(cx).snapshot(cx);
let language = buffer.language_at(self.range.start);
let language_name = if let Some(language) = language.as_ref() {
if Arc::ptr_eq(language, &language::PLAIN_TEXT) {
None
} else {
Some(language.name())
}
} else {
None
};
let language_name = language_name.as_ref();
let start = buffer.point_to_buffer_offset(self.range.start);
let end = buffer.point_to_buffer_offset(self.range.end);
let (buffer, range) = if let Some((start, end)) = start.zip(end) {
let (start_buffer, start_buffer_offset) = start;
let (end_buffer, end_buffer_offset) = end;
if start_buffer.remote_id() == end_buffer.remote_id() {
(start_buffer.clone(), start_buffer_offset..end_buffer_offset)
} else {
anyhow::bail!("invalid transformation range");
}
} else {
anyhow::bail!("invalid transformation range");
};
let system_prompt = self
.builder
.generate_inline_transformation_prompt_v2(
language_name,
buffer,
range.start.0..range.end.0,
)
.context("generating content prompt")?;
let temperature = AgentSettings::temperature_for_model(model, cx);
let tool_input_format = model.tool_input_format();
Ok(cx.spawn(async move |_cx| {
let mut messages = vec![LanguageModelRequestMessage {
role: Role::System,
content: vec![system_prompt.into()],
cache: false,
reasoning_details: None,
}];
let mut user_message = LanguageModelRequestMessage {
role: Role::User,
content: Vec::new(),
cache: false,
reasoning_details: None,
};
if let Some(context) = context_task.await {
context.add_to_request_message(&mut user_message);
}
user_message.content.push(user_prompt.into());
messages.push(user_message);
let tools = vec![
LanguageModelRequestTool {
name: "rewrite_section".to_string(),
description: "Replaces text in <rewrite_this></rewrite_this> tags with your replacement_text.".to_string(),
input_schema: language_model::tool_schema::root_schema_for::<RewriteSectionInput>(tool_input_format).to_value(),
},
LanguageModelRequestTool {
name: "failure_message".to_string(),
description: "Use this tool to provide a message to the user when you're unable to complete a task.".to_string(),
input_schema: language_model::tool_schema::root_schema_for::<FailureMessageInput>(tool_input_format).to_value(),
},
];
LanguageModelRequest {
thread_id: None,
prompt_id: None,
intent: Some(CompletionIntent::InlineAssist),
mode: None,
tools,
tool_choice: None,
stop: Vec::new(),
temperature,
messages,
thinking_allowed: false,
}
}))
}
fn build_request(
&self,
model: &Arc<dyn LanguageModel>,
@@ -379,6 +524,10 @@ impl CodegenAlternative {
context_task: Shared<Task<Option<LoadedContext>>>,
cx: &mut App,
) -> Result<Task<LanguageModelRequest>> {
if cx.has_flag::<InlineAssistantV2FeatureFlag>() {
return self.build_request_v2(model, user_prompt, context_task, cx);
}
let buffer = self.buffer.read(cx).snapshot(cx);
let language = buffer.language_at(self.range.start);
let language_name = if let Some(language) = language.as_ref() {
@@ -469,6 +618,8 @@ impl CodegenAlternative {
.text_for_range(self.range.start..self.range.end)
.collect::<Rope>();
self.selected_text = Some(selected_text.to_string());
let selection_start = self.range.start.to_point(&snapshot);
// Start with the indentation of the first line in the selection
@@ -510,6 +661,7 @@ impl CodegenAlternative {
self.generation = cx.spawn(async move |codegen, cx| {
let stream = stream.await;
let token_usage = stream
.as_ref()
.ok()
@@ -719,6 +871,7 @@ impl CodegenAlternative {
output_tokens = usage.output_tokens,
)
}
cx.emit(CodegenEvent::Finished);
cx.notify();
})
@@ -727,6 +880,14 @@ impl CodegenAlternative {
cx.notify();
}
pub fn current_completion(&self) -> Option<String> {
self.completion.clone()
}
pub fn selected_text(&self) -> Option<&str> {
self.selected_text.as_deref()
}
pub fn stop(&mut self, cx: &mut Context<Self>) {
self.last_equal_ranges.clear();
if self.diff.is_empty() {
@@ -898,6 +1059,101 @@ impl CodegenAlternative {
.ok();
})
}
fn handle_tool_use(
&mut self,
_telemetry_id: String,
_provider_id: String,
_api_key: Option<String>,
tool_use: impl 'static
+ Future<
Output = Result<language_model::LanguageModelToolUse, LanguageModelCompletionError>,
>,
cx: &mut Context<Self>,
) {
self.diff = Diff::default();
self.status = CodegenStatus::Pending;
self.generation = cx.spawn(async move |codegen, cx| {
let finish_with_status = |status: CodegenStatus, cx: &mut AsyncApp| {
let _ = codegen.update(cx, |this, cx| {
this.status = status;
cx.emit(CodegenEvent::Finished);
cx.notify();
});
};
let tool_use = tool_use.await;
match tool_use {
Ok(tool_use) if tool_use.name.as_ref() == "rewrite_section" => {
// Parse the input JSON into RewriteSectionInput
match serde_json::from_value::<RewriteSectionInput>(tool_use.input) {
Ok(input) => {
// Store the description if non-empty
let description = if !input.description.trim().is_empty() {
Some(input.description.clone())
} else {
None
};
// Apply the replacement text to the buffer and compute diff
let batch_diff_task = codegen
.update(cx, |this, cx| {
this.model_explanation = description.map(Into::into);
let range = this.range.clone();
this.apply_edits(
std::iter::once((range, input.replacement_text)),
cx,
);
this.reapply_batch_diff(cx)
})
.ok();
// Wait for the diff computation to complete
if let Some(diff_task) = batch_diff_task {
diff_task.await;
}
finish_with_status(CodegenStatus::Done, cx);
return;
}
Err(e) => {
finish_with_status(CodegenStatus::Error(e.into()), cx);
return;
}
}
}
Ok(tool_use) if tool_use.name.as_ref() == "failure_message" => {
// Handle failure message tool use
match serde_json::from_value::<FailureMessageInput>(tool_use.input) {
Ok(input) => {
let _ = codegen.update(cx, |this, _cx| {
// Store the failure message as the tool description
this.model_explanation = Some(input.message.into());
});
finish_with_status(CodegenStatus::Done, cx);
return;
}
Err(e) => {
finish_with_status(CodegenStatus::Error(e.into()), cx);
return;
}
}
}
Ok(_tool_use) => {
// Unexpected tool.
finish_with_status(CodegenStatus::Done, cx);
return;
}
Err(e) => {
finish_with_status(CodegenStatus::Error(e.into()), cx);
return;
}
}
});
cx.notify();
}
}
#[derive(Copy, Clone, Debug)]
@@ -1059,8 +1315,9 @@ mod tests {
};
use gpui::TestAppContext;
use indoc::indoc;
use language::{Buffer, Language, LanguageConfig, LanguageMatcher, Point, tree_sitter_rust};
use language::{Buffer, Point};
use language_model::{LanguageModelRegistry, TokenUsage};
use languages::rust_lang;
use rand::prelude::*;
use settings::SettingsStore;
use std::{future, sync::Arc};
@@ -1077,7 +1334,7 @@ mod tests {
}
}
"};
let buffer = cx.new(|cx| Buffer::local(text, cx).with_language(Arc::new(rust_lang()), cx));
let buffer = cx.new(|cx| Buffer::local(text, cx).with_language(rust_lang(), cx));
let buffer = cx.new(|cx| MultiBuffer::singleton(buffer, cx));
let range = buffer.read_with(cx, |buffer, cx| {
let snapshot = buffer.snapshot(cx);
@@ -1139,7 +1396,7 @@ mod tests {
le
}
"};
let buffer = cx.new(|cx| Buffer::local(text, cx).with_language(Arc::new(rust_lang()), cx));
let buffer = cx.new(|cx| Buffer::local(text, cx).with_language(rust_lang(), cx));
let buffer = cx.new(|cx| MultiBuffer::singleton(buffer, cx));
let range = buffer.read_with(cx, |buffer, cx| {
let snapshot = buffer.snapshot(cx);
@@ -1203,7 +1460,7 @@ mod tests {
" \n",
"}\n" //
);
let buffer = cx.new(|cx| Buffer::local(text, cx).with_language(Arc::new(rust_lang()), cx));
let buffer = cx.new(|cx| Buffer::local(text, cx).with_language(rust_lang(), cx));
let buffer = cx.new(|cx| MultiBuffer::singleton(buffer, cx));
let range = buffer.read_with(cx, |buffer, cx| {
let snapshot = buffer.snapshot(cx);
@@ -1319,7 +1576,7 @@ mod tests {
let x = 0;
}
"};
let buffer = cx.new(|cx| Buffer::local(text, cx).with_language(Arc::new(rust_lang()), cx));
let buffer = cx.new(|cx| Buffer::local(text, cx).with_language(rust_lang(), cx));
let buffer = cx.new(|cx| MultiBuffer::singleton(buffer, cx));
let range = buffer.read_with(cx, |buffer, cx| {
let snapshot = buffer.snapshot(cx);
@@ -1436,27 +1693,4 @@ mod tests {
});
chunks_tx
}
fn rust_lang() -> Language {
Language::new(
LanguageConfig {
name: "Rust".into(),
matcher: LanguageMatcher {
path_suffixes: vec!["rs".to_string()],
..Default::default()
},
..Default::default()
},
Some(tree_sitter_rust::LANGUAGE.into()),
)
.with_indents_query(
r#"
(call_expression) @indent
(field_expression) @indent
(_ "(" ")" @end) @indent
(_ "{" "}" @end) @indent
"#,
)
.unwrap()
}
}
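
The new inline-assist path in this file advertises `rewrite_section` and `failure_message` as tools, then parses the model's tool-call arguments back into the typed inputs with `serde_json::from_value`. A standalone sketch of that round trip, assuming the `serde`, `serde_json`, and `schemars` (with its derive feature) crates; the dispatch is simplified and the types are local stand-ins, not Zed's request plumbing:

```rust
use schemars::{JsonSchema, schema_for};
use serde::Deserialize;
use serde_json::json;

#[derive(Debug, Deserialize, JsonSchema)]
struct RewriteSectionInput {
    /// Optional note shown to the user about the edit.
    description: String,
    /// The text that replaces the selected section.
    replacement_text: String,
}

fn main() {
    // The derived schema is what gets sent to the model as the tool definition.
    let schema = schema_for!(RewriteSectionInput);
    println!("{}", serde_json::to_string_pretty(&schema).unwrap());

    // The model's tool call arrives as JSON and is parsed into the typed input.
    let tool_name = "rewrite_section";
    let tool_input = json!({
        "description": "Renamed the variable.",
        "replacement_text": "let buffer_text = load();"
    });

    match tool_name {
        "rewrite_section" => {
            let input: RewriteSectionInput =
                serde_json::from_value(tool_input).expect("input should match the schema");
            // Apply `input.replacement_text` to the selected range here.
            assert_eq!(input.description, "Renamed the variable.");
        }
        _ => eprintln!("unexpected tool: {tool_name}"),
    }
}
```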

View File

@@ -0,0 +1,89 @@
use std::str::FromStr;
use crate::inline_assistant::test::run_inline_assistant_test;
use eval_utils::{EvalOutput, NoProcessor};
use gpui::TestAppContext;
use language_model::{LanguageModelRegistry, SelectedModel};
use rand::{SeedableRng as _, rngs::StdRng};
#[test]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_single_cursor_edit() {
eval_utils::eval(20, 1.0, NoProcessor, move || {
run_eval(
&EvalInput {
prompt: "Rename this variable to buffer_text".to_string(),
buffer: indoc::indoc! {"
struct EvalExampleStruct {
text: Strˇing,
prompt: String,
}
"}
.to_string(),
},
&|_, output| {
let expected = indoc::indoc! {"
struct EvalExampleStruct {
buffer_text: String,
prompt: String,
}
"};
if output == expected {
EvalOutput {
outcome: eval_utils::OutcomeKind::Passed,
data: "Passed!".to_string(),
metadata: (),
}
} else {
EvalOutput {
outcome: eval_utils::OutcomeKind::Failed,
data: format!("Failed to rename variable, output: {}", output),
metadata: (),
}
}
},
)
});
}
struct EvalInput {
buffer: String,
prompt: String,
}
fn run_eval(
input: &EvalInput,
judge: &dyn Fn(&EvalInput, &str) -> eval_utils::EvalOutput<()>,
) -> eval_utils::EvalOutput<()> {
let dispatcher = gpui::TestDispatcher::new(StdRng::from_os_rng());
let mut cx = TestAppContext::build(dispatcher, None);
cx.skip_drawing();
let buffer_text = run_inline_assistant_test(
input.buffer.clone(),
input.prompt.clone(),
|cx| {
// Reconfigure to use a real model instead of the fake one
let model_name = std::env::var("ZED_AGENT_MODEL")
.unwrap_or("anthropic/claude-sonnet-4-latest".into());
let selected_model = SelectedModel::from_str(&model_name)
.expect("Invalid model format. Use 'provider/model-id'");
log::info!("Selected model: {selected_model:?}");
cx.update(|_, cx| {
LanguageModelRegistry::global(cx).update(cx, |registry, cx| {
registry.select_inline_assistant_model(Some(&selected_model), cx);
});
});
},
|_cx| {
log::info!("Waiting for actual response from the LLM...");
},
&mut cx,
);
judge(input, &buffer_text)
}

View File

@@ -32,7 +32,7 @@ use editor::{
},
};
use fs::Fs;
use futures::FutureExt;
use futures::{FutureExt, channel::mpsc};
use gpui::{
App, Context, Entity, Focusable, Global, HighlightStyle, Subscription, Task, UpdateGlobal,
WeakEntity, Window, point,
@@ -102,6 +102,7 @@ pub struct InlineAssistant {
prompt_builder: Arc<PromptBuilder>,
telemetry: Arc<Telemetry>,
fs: Arc<dyn Fs>,
_inline_assistant_completions: Option<mpsc::UnboundedSender<anyhow::Result<InlineAssistId>>>,
}
impl Global for InlineAssistant {}
@@ -123,9 +124,18 @@ impl InlineAssistant {
prompt_builder,
telemetry,
fs,
_inline_assistant_completions: None,
}
}
#[cfg(any(test, feature = "test-support"))]
pub fn set_completion_receiver(
&mut self,
sender: mpsc::UnboundedSender<anyhow::Result<InlineAssistId>>,
) {
self._inline_assistant_completions = Some(sender);
}
pub fn register_workspace(
&mut self,
workspace: &Entity<Workspace>,
@@ -287,7 +297,7 @@ impl InlineAssistant {
action.prompt.clone(),
window,
cx,
)
);
})
}
InlineAssistTarget::Terminal(active_terminal) => {
@@ -301,8 +311,8 @@ impl InlineAssistant {
action.prompt.clone(),
window,
cx,
)
})
);
});
}
};
@@ -377,17 +387,9 @@ impl InlineAssistant {
let mut selections = Vec::<Selection<Point>>::new();
let mut newest_selection = None;
for mut selection in initial_selections {
if selection.end > selection.start {
selection.start.column = 0;
// If the selection ends at the start of the line, we don't want to include it.
if selection.end.column == 0 {
selection.end.row -= 1;
}
selection.end.column = snapshot
.buffer_snapshot()
.line_len(MultiBufferRow(selection.end.row));
} else if let Some(fold) =
snapshot.crease_for_buffer_row(MultiBufferRow(selection.end.row))
if selection.end == selection.start
&& let Some(fold) =
snapshot.crease_for_buffer_row(MultiBufferRow(selection.end.row))
{
selection.start = fold.range().start;
selection.end = fold.range().end;
@@ -414,6 +416,15 @@ impl InlineAssistant {
}
}
}
} else {
selection.start.column = 0;
// If the selection ends at the start of the line, we don't want to include it.
if selection.end.column == 0 && selection.start.row != selection.end.row {
selection.end.row -= 1;
}
selection.end.column = snapshot
.buffer_snapshot()
.line_len(MultiBufferRow(selection.end.row));
}
if let Some(prev_selection) = selections.last_mut()
@@ -534,14 +545,15 @@ impl InlineAssistant {
}
}
let [prompt_block_id, end_block_id] =
self.insert_assist_blocks(editor, &range, &prompt_editor, cx);
let [prompt_block_id, tool_description_block_id, end_block_id] =
self.insert_assist_blocks(&editor, &range, &prompt_editor, cx);
assists.push((
assist_id,
range.clone(),
prompt_editor,
prompt_block_id,
tool_description_block_id,
end_block_id,
));
}
@@ -560,7 +572,15 @@ impl InlineAssistant {
};
let mut assist_group = InlineAssistGroup::new();
for (assist_id, range, prompt_editor, prompt_block_id, end_block_id) in assists {
for (
assist_id,
range,
prompt_editor,
prompt_block_id,
tool_description_block_id,
end_block_id,
) in assists
{
let codegen = prompt_editor.read(cx).codegen().clone();
self.assists.insert(
@@ -571,6 +591,7 @@ impl InlineAssistant {
editor,
&prompt_editor,
prompt_block_id,
tool_description_block_id,
end_block_id,
range,
codegen,
@@ -598,13 +619,13 @@ impl InlineAssistant {
initial_prompt: Option<String>,
window: &mut Window,
cx: &mut App,
) {
) -> Option<InlineAssistId> {
let snapshot = editor.update(cx, |editor, cx| editor.snapshot(window, cx));
let Some((codegen_ranges, newest_selection)) =
self.codegen_ranges(editor, &snapshot, window, cx)
else {
return;
return None;
};
let assist_to_focus = self.batch_assist(
@@ -624,6 +645,8 @@ impl InlineAssistant {
if let Some(assist_id) = assist_to_focus {
self.focus_assist(assist_id, window, cx);
}
assist_to_focus
}
pub fn suggest_assist(
@@ -677,7 +700,7 @@ impl InlineAssistant {
range: &Range<Anchor>,
prompt_editor: &Entity<PromptEditor<BufferCodegen>>,
cx: &mut App,
) -> [CustomBlockId; 2] {
) -> [CustomBlockId; 3] {
let prompt_editor_height = prompt_editor.update(cx, |prompt_editor, cx| {
prompt_editor
.editor
@@ -691,6 +714,14 @@ impl InlineAssistant {
render: build_assist_editor_renderer(prompt_editor),
priority: 0,
},
// Placeholder for tool description - will be updated dynamically
BlockProperties {
style: BlockStyle::Flex,
placement: BlockPlacement::Below(range.end),
height: Some(0),
render: Arc::new(|_cx| div().into_any_element()),
priority: 0,
},
BlockProperties {
style: BlockStyle::Sticky,
placement: BlockPlacement::Below(range.end),
@@ -709,7 +740,7 @@ impl InlineAssistant {
editor.update(cx, |editor, cx| {
let block_ids = editor.insert_blocks(assist_blocks, None, cx);
[block_ids[0], block_ids[1]]
[block_ids[0], block_ids[1], block_ids[2]]
})
}
@@ -1101,6 +1132,9 @@ impl InlineAssistant {
let mut to_remove = decorations.removed_line_block_ids;
to_remove.insert(decorations.prompt_block_id);
to_remove.insert(decorations.end_block_id);
if let Some(tool_description_block_id) = decorations.model_explanation {
to_remove.insert(tool_description_block_id);
}
editor.remove_blocks(to_remove, None, cx);
});
@@ -1421,8 +1455,60 @@ impl InlineAssistant {
let old_snapshot = codegen.snapshot(cx);
let old_buffer = codegen.old_buffer(cx);
let deleted_row_ranges = codegen.diff(cx).deleted_row_ranges.clone();
// let model_explanation = codegen.model_explanation(cx);
editor.update(cx, |editor, cx| {
// Update tool description block
// if let Some(description) = model_explanation {
// if let Some(block_id) = decorations.model_explanation {
// editor.remove_blocks(HashSet::from_iter([block_id]), None, cx);
// let new_block_id = editor.insert_blocks(
// [BlockProperties {
// style: BlockStyle::Flex,
// placement: BlockPlacement::Below(assist.range.end),
// height: Some(1),
// render: Arc::new({
// let description = description.clone();
// move |cx| {
// div()
// .w_full()
// .py_1()
// .px_2()
// .bg(cx.theme().colors().editor_background)
// .border_y_1()
// .border_color(cx.theme().status().info_border)
// .child(
// Label::new(description.clone())
// .color(Color::Muted)
// .size(LabelSize::Small),
// )
// .into_any_element()
// }
// }),
// priority: 0,
// }],
// None,
// cx,
// );
// decorations.model_explanation = new_block_id.into_iter().next();
// }
// } else if let Some(block_id) = decorations.model_explanation {
// // Hide the block if there's no description
// editor.remove_blocks(HashSet::from_iter([block_id]), None, cx);
// let new_block_id = editor.insert_blocks(
// [BlockProperties {
// style: BlockStyle::Flex,
// placement: BlockPlacement::Below(assist.range.end),
// height: Some(0),
// render: Arc::new(|_cx| div().into_any_element()),
// priority: 0,
// }],
// None,
// cx,
// );
// decorations.model_explanation = new_block_id.into_iter().next();
// }
let old_blocks = mem::take(&mut decorations.removed_line_block_ids);
editor.remove_blocks(old_blocks, None, cx);
@@ -1674,6 +1760,7 @@ impl InlineAssist {
editor: &Entity<Editor>,
prompt_editor: &Entity<PromptEditor<BufferCodegen>>,
prompt_block_id: CustomBlockId,
tool_description_block_id: CustomBlockId,
end_block_id: CustomBlockId,
range: Range<Anchor>,
codegen: Entity<BufferCodegen>,
@@ -1688,7 +1775,8 @@ impl InlineAssist {
decorations: Some(InlineAssistDecorations {
prompt_block_id,
prompt_editor: prompt_editor.clone(),
removed_line_block_ids: HashSet::default(),
removed_line_block_ids: Default::default(),
model_explanation: Some(tool_description_block_id),
end_block_id,
}),
range,
@@ -1740,6 +1828,16 @@ impl InlineAssist {
&& assist.decorations.is_none()
&& let Some(workspace) = assist.workspace.upgrade()
{
#[cfg(any(test, feature = "test-support"))]
if let Some(sender) = &mut this._inline_assistant_completions {
sender
.unbounded_send(Err(anyhow::anyhow!(
"Inline assistant error: {}",
error
)))
.ok();
}
let error = format!("Inline assistant error: {}", error);
workspace.update(cx, |workspace, cx| {
struct InlineAssistantError;
@@ -1750,6 +1848,11 @@ impl InlineAssist {
workspace.show_toast(Toast::new(id, error), cx);
})
} else {
#[cfg(any(test, feature = "test-support"))]
if let Some(sender) = &mut this._inline_assistant_completions {
sender.unbounded_send(Ok(assist_id)).ok();
}
}
if assist.decorations.is_none() {
@@ -1777,6 +1880,7 @@ struct InlineAssistDecorations {
prompt_block_id: CustomBlockId,
prompt_editor: Entity<PromptEditor<BufferCodegen>>,
removed_line_block_ids: HashSet<CustomBlockId>,
model_explanation: Option<CustomBlockId>,
end_block_id: CustomBlockId,
}
@@ -1943,3 +2047,160 @@ fn merge_ranges(ranges: &mut Vec<Range<Anchor>>, buffer: &MultiBufferSnapshot) {
}
}
}
#[cfg(any(test, feature = "test-support"))]
pub mod test {
use std::sync::Arc;
use agent::HistoryStore;
use assistant_text_thread::TextThreadStore;
use client::{Client, UserStore};
use editor::{Editor, MultiBuffer, MultiBufferOffset};
use fs::FakeFs;
use futures::channel::mpsc;
use gpui::{AppContext, TestAppContext, UpdateGlobal as _};
use language::Buffer;
use language_model::LanguageModelRegistry;
use project::Project;
use prompt_store::PromptBuilder;
use smol::stream::StreamExt as _;
use util::test::marked_text_ranges;
use workspace::Workspace;
use crate::InlineAssistant;
pub fn run_inline_assistant_test<SetupF, TestF>(
base_buffer: String,
prompt: String,
setup: SetupF,
test: TestF,
cx: &mut TestAppContext,
) -> String
where
SetupF: FnOnce(&mut gpui::VisualTestContext),
TestF: FnOnce(&mut gpui::VisualTestContext),
{
let fs = FakeFs::new(cx.executor());
let app_state = cx.update(|cx| workspace::AppState::test(cx));
let prompt_builder = Arc::new(PromptBuilder::new(None).unwrap());
let http = Arc::new(reqwest_client::ReqwestClient::user_agent("agent tests").unwrap());
let client = cx.update(|cx| {
cx.set_http_client(http);
Client::production(cx)
});
let mut inline_assistant =
InlineAssistant::new(fs.clone(), prompt_builder, client.telemetry().clone());
let (tx, mut completion_rx) = mpsc::unbounded();
inline_assistant.set_completion_receiver(tx);
// Initialize settings and client
cx.update(|cx| {
gpui_tokio::init(cx);
settings::init(cx);
client::init(&client, cx);
workspace::init(app_state.clone(), cx);
let user_store = cx.new(|cx| UserStore::new(client.clone(), cx));
language_model::init(client.clone(), cx);
language_models::init(user_store, client.clone(), cx);
cx.set_global(inline_assistant);
});
let project = cx
.executor()
.block_test(async { Project::test(fs.clone(), [], cx).await });
// Create workspace with window
let (workspace, cx) = cx.add_window_view(|window, cx| {
window.activate_window();
Workspace::new(None, project.clone(), app_state.clone(), window, cx)
});
setup(cx);
let (_editor, buffer) = cx.update(|window, cx| {
let buffer = cx.new(|cx| Buffer::local("", cx));
let multibuffer = cx.new(|cx| MultiBuffer::singleton(buffer.clone(), cx));
let editor = cx.new(|cx| Editor::for_multibuffer(multibuffer, None, window, cx));
editor.update(cx, |editor, cx| {
let (unmarked_text, selection_ranges) = marked_text_ranges(&base_buffer, true);
editor.set_text(unmarked_text, window, cx);
editor.change_selections(Default::default(), window, cx, |s| {
s.select_ranges(
selection_ranges.into_iter().map(|range| {
MultiBufferOffset(range.start)..MultiBufferOffset(range.end)
}),
)
})
});
let text_thread_store = cx.new(|cx| TextThreadStore::fake(project.clone(), cx));
let history_store = cx.new(|cx| HistoryStore::new(text_thread_store, cx));
// Add editor to workspace
workspace.update(cx, |workspace, cx| {
workspace.add_item_to_active_pane(Box::new(editor.clone()), None, true, window, cx);
});
// Call assist method
InlineAssistant::update_global(cx, |inline_assistant, cx| {
let assist_id = inline_assistant
.assist(
&editor,
workspace.downgrade(),
project.downgrade(),
history_store, // thread_store
None, // prompt_store
Some(prompt),
window,
cx,
)
.unwrap();
inline_assistant.start_assist(assist_id, window, cx);
});
(editor, buffer)
});
cx.run_until_parked();
test(cx);
cx.executor()
.block_test(async { completion_rx.next().await });
buffer.read_with(cx, |buffer, _| buffer.text())
}
#[allow(unused)]
pub fn test_inline_assistant(
base_buffer: &'static str,
llm_output: &'static str,
cx: &mut TestAppContext,
) -> String {
run_inline_assistant_test(
base_buffer.to_string(),
"Prompt doesn't matter because we're using a fake model".to_string(),
|cx| {
cx.update(|_, cx| LanguageModelRegistry::test(cx));
},
|cx| {
let fake_model = cx.update(|_, cx| {
LanguageModelRegistry::global(cx)
.update(cx, |registry, _| registry.fake_model())
});
let fake = fake_model.as_fake();
// let fake = fake_model;
fake.send_last_completion_stream_text_chunk(llm_output.to_string());
fake.end_last_completion_stream();
// Run again to process the model's response
cx.run_until_parked();
},
cx,
)
}
}

View File

@@ -8,24 +8,28 @@ use editor::{
ContextMenuOptions, Editor, EditorElement, EditorEvent, EditorMode, EditorStyle, MultiBuffer,
actions::{MoveDown, MoveUp},
};
use feature_flags::{FeatureFlag, FeatureFlagAppExt};
use fs::Fs;
use gpui::{
AnyElement, App, Context, CursorStyle, Entity, EventEmitter, FocusHandle, Focusable,
Subscription, TextStyle, WeakEntity, Window,
AnyElement, App, ClipboardItem, Context, Entity, EventEmitter, FocusHandle, Focusable,
Subscription, TextStyle, TextStyleRefinement, WeakEntity, Window, actions,
};
use language_model::{LanguageModel, LanguageModelRegistry};
use markdown::{HeadingLevelStyles, Markdown, MarkdownElement, MarkdownStyle};
use parking_lot::Mutex;
use project::Project;
use prompt_store::PromptStore;
use settings::Settings;
use std::cmp;
use std::ops::Range;
use std::rc::Rc;
use std::sync::Arc;
use std::{cmp, mem};
use theme::ThemeSettings;
use ui::utils::WithRemSize;
use ui::{IconButtonShape, KeyBinding, PopoverMenuHandle, Tooltip, prelude::*};
use workspace::Workspace;
use uuid::Uuid;
use workspace::notifications::NotificationId;
use workspace::{Toast, Workspace};
use zed_actions::agent::ToggleModelSelector;
use crate::agent_model_selector::AgentModelSelector;
@@ -38,6 +42,58 @@ use crate::mention_set::{MentionSet, crease_for_mention};
use crate::terminal_codegen::TerminalCodegen;
use crate::{CycleNextInlineAssist, CyclePreviousInlineAssist, ModelUsageContext};
actions!(inline_assistant, [ThumbsUpResult, ThumbsDownResult]);
pub struct InlineAssistRatingFeatureFlag;
impl FeatureFlag for InlineAssistRatingFeatureFlag {
const NAME: &'static str = "inline-assist-rating";
fn enabled_for_staff() -> bool {
false
}
}
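/// Tracks the rating lifecycle for the current completion so telemetry is only emitted once per generation.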
enum RatingState {
Pending,
GeneratedCompletion(Option<String>),
Rated(Uuid),
}
impl RatingState {
fn is_pending(&self) -> bool {
matches!(self, RatingState::Pending)
}
fn rating_id(&self) -> Option<Uuid> {
match self {
RatingState::Pending => None,
RatingState::GeneratedCompletion(_) => None,
RatingState::Rated(id) => Some(*id),
}
}
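/// Marks the completion as rated, returning a fresh rating ID plus the generated completion (if any) for telemetry.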
fn rate(&mut self) -> (Uuid, Option<String>) {
let id = Uuid::new_v4();
let old_state = mem::replace(self, RatingState::Rated(id));
let completion = match old_state {
RatingState::Pending => None,
RatingState::GeneratedCompletion(completion) => completion,
RatingState::Rated(_) => None,
};
(id, completion)
}
fn reset(&mut self) {
*self = RatingState::Pending;
}
fn generated_completion(&mut self, generated_completion: Option<String>) {
*self = RatingState::GeneratedCompletion(generated_completion);
}
}
pub struct PromptEditor<T> {
pub editor: Entity<Editor>,
mode: PromptEditorMode,
@@ -53,6 +109,7 @@ pub struct PromptEditor<T> {
_codegen_subscription: Subscription,
editor_subscriptions: Vec<Subscription>,
show_rate_limit_notice: bool,
rated: RatingState,
_phantom: std::marker::PhantomData<T>,
}
@@ -65,7 +122,7 @@ impl<T: 'static> Render for PromptEditor<T> {
const RIGHT_PADDING: Pixels = px(9.);
let (left_gutter_width, right_padding) = match &self.mode {
let (left_gutter_width, right_padding, explanation) = match &self.mode {
PromptEditorMode::Buffer {
id: _,
codegen,
@@ -83,17 +140,23 @@ impl<T: 'static> Render for PromptEditor<T> {
let left_gutter_width = gutter.full_width() + (gutter.margin / 2.0);
let right_padding = editor_margins.right + RIGHT_PADDING;
(left_gutter_width, right_padding)
let explanation = codegen
.active_alternative()
.read(cx)
.model_explanation
.clone();
(left_gutter_width, right_padding, explanation)
}
PromptEditorMode::Terminal { .. } => {
// Use a left padding equivalent to the padding we're using on the right
(Pixels::from(40.0), Pixels::from(24.))
(Pixels::from(40.0), Pixels::from(24.), None)
}
};
let bottom_padding = match &self.mode {
PromptEditorMode::Buffer { .. } => rems_from_px(2.0),
PromptEditorMode::Terminal { .. } => rems_from_px(8.0),
PromptEditorMode::Terminal { .. } => rems_from_px(4.0),
};
buttons.extend(self.render_buttons(window, cx));
@@ -111,22 +174,33 @@ impl<T: 'static> Render for PromptEditor<T> {
this.trigger_completion_menu(window, cx);
}));
let markdown = window.use_state(cx, |_, cx| Markdown::new("".into(), None, None, cx));
if let Some(explanation) = &explanation {
markdown.update(cx, |markdown, cx| {
markdown.reset(explanation.clone(), cx);
});
}
let explanation_label = self
.render_markdown(markdown, markdown_style(window, cx))
.into_any_element();
v_flex()
.key_context("PromptEditor")
.capture_action(cx.listener(Self::paste))
.bg(cx.theme().colors().editor_background)
.block_mouse_except_scroll()
.gap_0p5()
.border_y_1()
.border_color(cx.theme().status().info_border)
.size_full()
.pt_0p5()
.pb(bottom_padding)
.pr(right_padding)
.gap_0p5()
.justify_center()
.border_y_1()
.border_color(cx.theme().colors().border)
.bg(cx.theme().colors().editor_background)
.child(
h_flex()
.items_start()
.cursor(CursorStyle::Arrow)
.on_action(cx.listener(|this, _: &ToggleModelSelector, window, cx| {
this.model_selector
.update(cx, |model_selector, cx| model_selector.toggle(window, cx));
@@ -135,18 +209,20 @@ impl<T: 'static> Render for PromptEditor<T> {
.on_action(cx.listener(Self::cancel))
.on_action(cx.listener(Self::move_up))
.on_action(cx.listener(Self::move_down))
.on_action(cx.listener(Self::thumbs_up))
.on_action(cx.listener(Self::thumbs_down))
.capture_action(cx.listener(Self::cycle_prev))
.capture_action(cx.listener(Self::cycle_next))
.child(
WithRemSize::new(ui_font_size)
.h_full()
.w(left_gutter_width)
.flex()
.flex_row()
.flex_shrink_0()
.items_center()
.h_full()
.w(left_gutter_width)
.justify_center()
.gap_2()
.gap_1()
.child(self.render_close_button(cx))
.map(|el| {
let CodegenStatus::Error(error) = self.codegen_status(cx) else {
@@ -177,26 +253,83 @@ impl<T: 'static> Render for PromptEditor<T> {
.flex_row()
.items_center()
.gap_1()
.child(add_context_button)
.child(self.model_selector.clone())
.children(buttons),
),
),
)
.child(
WithRemSize::new(ui_font_size)
.flex()
.flex_row()
.items_center()
.child(h_flex().flex_shrink_0().w(left_gutter_width))
.child(
h_flex()
.w_full()
.pl_1()
.items_start()
.justify_between()
.child(add_context_button)
.child(self.model_selector.clone()),
),
)
.when_some(explanation, |this, _| {
this.child(
h_flex()
.size_full()
.justify_center()
.child(div().w(left_gutter_width + px(6.)))
.child(
div()
.size_full()
.min_w_0()
.pt(rems_from_px(3.))
.pl_0p5()
.flex_1()
.border_t_1()
.border_color(cx.theme().colors().border_variant)
.child(explanation_label),
),
)
})
}
}
fn markdown_style(window: &Window, cx: &App) -> MarkdownStyle {
let theme_settings = ThemeSettings::get_global(cx);
let colors = cx.theme().colors();
let mut text_style = window.text_style();
text_style.refine(&TextStyleRefinement {
font_family: Some(theme_settings.ui_font.family.clone()),
color: Some(colors.text),
..Default::default()
});
MarkdownStyle {
base_text_style: text_style.clone(),
syntax: cx.theme().syntax().clone(),
selection_background_color: colors.element_selection_background,
heading_level_styles: Some(HeadingLevelStyles {
h1: Some(TextStyleRefinement {
font_size: Some(rems(1.15).into()),
..Default::default()
}),
h2: Some(TextStyleRefinement {
font_size: Some(rems(1.1).into()),
..Default::default()
}),
h3: Some(TextStyleRefinement {
font_size: Some(rems(1.05).into()),
..Default::default()
}),
h4: Some(TextStyleRefinement {
font_size: Some(rems(1.).into()),
..Default::default()
}),
h5: Some(TextStyleRefinement {
font_size: Some(rems(0.95).into()),
..Default::default()
}),
h6: Some(TextStyleRefinement {
font_size: Some(rems(0.875).into()),
..Default::default()
}),
}),
inline_code: TextStyleRefinement {
font_family: Some(theme_settings.buffer_font.family.clone()),
font_fallbacks: theme_settings.buffer_font.fallbacks.clone(),
font_features: Some(theme_settings.buffer_font.features.clone()),
background_color: Some(colors.editor_foreground.opacity(0.08)),
..Default::default()
},
..Default::default()
}
}
@@ -354,6 +487,7 @@ impl<T: 'static> PromptEditor<T> {
}
self.edited_since_done = true;
self.rated.reset();
cx.notify();
}
EditorEvent::Blurred => {
@@ -441,6 +575,121 @@ impl<T: 'static> PromptEditor<T> {
}
}
fn thumbs_up(&mut self, _: &ThumbsUpResult, _window: &mut Window, cx: &mut Context<Self>) {
if self.rated.is_pending() {
self.toast("Still generating...", None, cx);
return;
}
if let Some(rating_id) = self.rated.rating_id() {
self.toast("Already rated this completion", Some(rating_id), cx);
return;
}
let (rating_id, completion) = self.rated.rate();
let selected_text = match &self.mode {
PromptEditorMode::Buffer { codegen, .. } => {
codegen.read(cx).selected_text(cx).map(|s| s.to_string())
}
PromptEditorMode::Terminal { .. } => None,
};
let model_info = self.model_selector.read(cx).active_model(cx);
let model_id = {
let Some(configured_model) = model_info else {
self.toast("No configured model", None, cx);
return;
};
configured_model.model.telemetry_id()
};
let prompt = self.editor.read(cx).text(cx);
telemetry::event!(
"Inline Assistant Rated",
rating = "positive",
model = model_id,
prompt = prompt,
completion = completion,
selected_text = selected_text,
rating_id = rating_id.to_string()
);
cx.notify();
}
fn thumbs_down(&mut self, _: &ThumbsDownResult, _window: &mut Window, cx: &mut Context<Self>) {
if self.rated.is_pending() {
self.toast("Still generating...", None, cx);
return;
}
if let Some(rating_id) = self.rated.rating_id() {
self.toast("Already rated this completion", Some(rating_id), cx);
return;
}
let (rating_id, completion) = self.rated.rate();
let selected_text = match &self.mode {
PromptEditorMode::Buffer { codegen, .. } => {
codegen.read(cx).selected_text(cx).map(|s| s.to_string())
}
PromptEditorMode::Terminal { .. } => None,
};
let model_info = self.model_selector.read(cx).active_model(cx);
let model_telemetry_id = {
let Some(configured_model) = model_info else {
self.toast("No configured model", None, cx);
return;
};
configured_model.model.telemetry_id()
};
let prompt = self.editor.read(cx).text(cx);
telemetry::event!(
"Inline Assistant Rated",
rating = "negative",
model = model_telemetry_id,
prompt = prompt,
completion = completion,
selected_text = selected_text,
rating_id = rating_id.to_string()
);
cx.notify();
}
fn toast(&mut self, msg: &str, uuid: Option<Uuid>, cx: &mut Context<'_, PromptEditor<T>>) {
self.workspace
.update(cx, |workspace, cx| {
enum InlinePromptRating {}
workspace.show_toast(
{
let mut toast = Toast::new(
NotificationId::unique::<InlinePromptRating>(),
msg.to_string(),
)
.autohide();
if let Some(uuid) = uuid {
toast = toast.on_click("Click to copy rating ID", move |_, cx| {
cx.write_to_clipboard(ClipboardItem::new_string(uuid.to_string()));
});
};
toast
},
cx,
);
})
.ok();
}
fn move_up(&mut self, _: &MoveUp, window: &mut Window, cx: &mut Context<Self>) {
if let Some(ix) = self.prompt_history_ix {
if ix > 0 {
@@ -546,6 +795,9 @@ impl<T: 'static> PromptEditor<T> {
.into_any_element(),
]
} else {
let show_rating_buttons = cx.has_flag::<InlineAssistRatingFeatureFlag>();
let rated = self.rated.rating_id().is_some();
let accept = IconButton::new("accept", IconName::Check)
.icon_color(Color::Info)
.shape(IconButtonShape::Square)
@@ -557,25 +809,59 @@ impl<T: 'static> PromptEditor<T> {
}))
.into_any_element();
match &self.mode {
PromptEditorMode::Terminal { .. } => vec![
accept,
IconButton::new("confirm", IconName::PlayFilled)
.icon_color(Color::Info)
let mut buttons = Vec::new();
if show_rating_buttons {
buttons.push(
IconButton::new("thumbs-down", IconName::ThumbsDown)
.icon_color(if rated { Color::Muted } else { Color::Default })
.shape(IconButtonShape::Square)
.tooltip(|_window, cx| {
Tooltip::for_action(
"Execute Generated Command",
&menu::SecondaryConfirm,
cx,
)
})
.on_click(cx.listener(|_, _, _, cx| {
cx.emit(PromptEditorEvent::ConfirmRequested { execute: true });
.disabled(rated)
.tooltip(Tooltip::text("Bad result"))
.on_click(cx.listener(|this, _, window, cx| {
this.thumbs_down(&ThumbsDownResult, window, cx);
}))
.into_any_element(),
],
PromptEditorMode::Buffer { .. } => vec![accept],
);
buttons.push(
IconButton::new("thumbs-up", IconName::ThumbsUp)
.icon_color(if rated { Color::Muted } else { Color::Default })
.shape(IconButtonShape::Square)
.disabled(rated)
.tooltip(Tooltip::text("Good result"))
.on_click(cx.listener(|this, _, window, cx| {
this.thumbs_up(&ThumbsUpResult, window, cx);
}))
.into_any_element(),
);
}
buttons.push(accept);
match &self.mode {
PromptEditorMode::Terminal { .. } => {
buttons.push(
IconButton::new("confirm", IconName::PlayFilled)
.icon_color(Color::Info)
.shape(IconButtonShape::Square)
.tooltip(|_window, cx| {
Tooltip::for_action(
"Execute Generated Command",
&menu::SecondaryConfirm,
cx,
)
})
.on_click(cx.listener(|_, _, _, cx| {
cx.emit(PromptEditorEvent::ConfirmRequested {
execute: true,
});
}))
.into_any_element(),
);
buttons
}
PromptEditorMode::Buffer { .. } => buttons,
}
}
}
@@ -759,6 +1045,10 @@ impl<T: 'static> PromptEditor<T> {
})
.into_any_element()
}
fn render_markdown(&self, markdown: Entity<Markdown>, style: MarkdownStyle) -> MarkdownElement {
MarkdownElement::new(markdown, style)
}
}
pub enum PromptEditorMode {
@@ -900,6 +1190,7 @@ impl PromptEditor<BufferCodegen> {
editor_subscriptions: Vec::new(),
show_rate_limit_notice: false,
mode,
rated: RatingState::Pending,
_phantom: Default::default(),
};
@@ -910,7 +1201,7 @@ impl PromptEditor<BufferCodegen> {
fn handle_codegen_changed(
&mut self,
_: Entity<BufferCodegen>,
codegen: Entity<BufferCodegen>,
cx: &mut Context<PromptEditor<BufferCodegen>>,
) {
match self.codegen_status(cx) {
@@ -919,10 +1210,13 @@ impl PromptEditor<BufferCodegen> {
.update(cx, |editor, _| editor.set_read_only(false));
}
CodegenStatus::Pending => {
self.rated.reset();
self.editor
.update(cx, |editor, _| editor.set_read_only(true));
}
CodegenStatus::Done => {
let completion = codegen.read(cx).active_completion(cx);
self.rated.generated_completion(completion);
self.edited_since_done = false;
self.editor
.update(cx, |editor, _| editor.set_read_only(false));
@@ -1043,6 +1337,7 @@ impl PromptEditor<TerminalCodegen> {
editor_subscriptions: Vec::new(),
mode,
show_rate_limit_notice: false,
rated: RatingState::Pending,
_phantom: Default::default(),
};
this.count_lines(cx);
@@ -1075,17 +1370,20 @@ impl PromptEditor<TerminalCodegen> {
}
}
fn handle_codegen_changed(&mut self, _: Entity<TerminalCodegen>, cx: &mut Context<Self>) {
fn handle_codegen_changed(&mut self, codegen: Entity<TerminalCodegen>, cx: &mut Context<Self>) {
match &self.codegen().read(cx).status {
CodegenStatus::Idle => {
self.editor
.update(cx, |editor, _| editor.set_read_only(false));
}
CodegenStatus::Pending => {
self.rated = RatingState::Pending;
self.editor
.update(cx, |editor, _| editor.set_read_only(true));
}
CodegenStatus::Done | CodegenStatus::Error(_) => {
self.rated
.generated_completion(codegen.read(cx).completion());
self.edited_since_done = false;
self.editor
.update(cx, |editor, _| editor.set_read_only(false));

View File

@@ -542,7 +542,7 @@ impl PickerDelegate for ProfilePickerDelegate {
let is_active = active_id == candidate.id;
Some(
ListItem::new(SharedString::from(candidate.id.0.clone()))
ListItem::new(candidate.id.0.clone())
.inset(true)
.spacing(ListItemSpacing::Sparse)
.toggle_state(selected)

View File

@@ -135,6 +135,12 @@ impl TerminalCodegen {
cx.notify();
}
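/// Returns the full text streamed into the terminal so far, if a transaction is in progress.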
pub fn completion(&self) -> Option<String> {
self.transaction
.as_ref()
.map(|transaction| transaction.completion.clone())
}
pub fn stop(&mut self, cx: &mut Context<Self>) {
self.status = CodegenStatus::Done;
self.generation = Task::ready(());
@@ -167,27 +173,32 @@ pub const CLEAR_INPUT: &str = "\x03";
const CARRIAGE_RETURN: &str = "\x0d";
struct TerminalTransaction {
completion: String,
terminal: Entity<Terminal>,
}
impl TerminalTransaction {
pub fn start(terminal: Entity<Terminal>) -> Self {
Self { terminal }
Self {
completion: String::new(),
terminal,
}
}
pub fn push(&mut self, hunk: String, cx: &mut App) {
// Ensure that the assistant cannot accidentally execute commands that are streamed into the terminal
let input = Self::sanitize_input(hunk);
self.completion.push_str(&input);
self.terminal
.update(cx, |terminal, _| terminal.input(input.into_bytes()));
}
pub fn undo(&self, cx: &mut App) {
pub fn undo(self, cx: &mut App) {
self.terminal
.update(cx, |terminal, _| terminal.input(CLEAR_INPUT.as_bytes()));
}
pub fn complete(&self, cx: &mut App) {
pub fn complete(self, cx: &mut App) {
self.terminal
.update(cx, |terminal, _| terminal.input(CARRIAGE_RETURN.as_bytes()));
}

View File

@@ -1682,6 +1682,98 @@ impl TextThreadEditor {
window: &mut Window,
cx: &mut Context<Self>,
) {
let editor_clipboard_selections = cx
.read_from_clipboard()
.and_then(|item| item.entries().first().cloned())
.and_then(|entry| match entry {
ClipboardEntry::String(text) => {
text.metadata_json::<Vec<editor::ClipboardSelection>>()
}
_ => None,
});
let has_file_context = editor_clipboard_selections
.as_ref()
.is_some_and(|selections| {
selections
.iter()
.any(|sel| sel.file_path.is_some() && sel.line_range.is_some())
});
if has_file_context {
if let Some(clipboard_item) = cx.read_from_clipboard() {
if let Some(ClipboardEntry::String(clipboard_text)) =
clipboard_item.entries().first()
{
if let Some(selections) = editor_clipboard_selections {
cx.stop_propagation();
let text = clipboard_text.text();
self.editor.update(cx, |editor, cx| {
let mut current_offset = 0;
let weak_editor = cx.entity().downgrade();
for selection in selections {
if let (Some(file_path), Some(line_range)) =
(selection.file_path, selection.line_range)
{
let selected_text =
&text[current_offset..current_offset + selection.len];
let fence = assistant_slash_commands::codeblock_fence_for_path(
file_path.to_str(),
Some(line_range.clone()),
);
let formatted_text = format!("{fence}{selected_text}\n```");
let insert_point = editor
.selections
.newest::<Point>(&editor.display_snapshot(cx))
.head();
let start_row = MultiBufferRow(insert_point.row);
editor.insert(&formatted_text, window, cx);
let snapshot = editor.buffer().read(cx).snapshot(cx);
let anchor_before = snapshot.anchor_after(insert_point);
let anchor_after = editor
.selections
.newest_anchor()
.head()
.bias_left(&snapshot);
editor.insert("\n", window, cx);
let crease_text = acp_thread::selection_name(
Some(file_path.as_ref()),
&line_range,
);
let fold_placeholder = quote_selection_fold_placeholder(
crease_text,
weak_editor.clone(),
);
let crease = Crease::inline(
anchor_before..anchor_after,
fold_placeholder,
render_quote_selection_output_toggle,
|_, _, _, _| Empty.into_any(),
);
editor.insert_creases(vec![crease], cx);
editor.fold_at(start_row, window, cx);
current_offset += selection.len;
if !selection.is_entire_line && current_offset < text.len() {
current_offset += 1;
}
}
}
});
return;
}
}
}
}
cx.stop_propagation();
let mut images = if let Some(item) = cx.read_from_clipboard() {
@@ -2622,11 +2714,13 @@ impl SearchableItem for TextThreadEditor {
fn update_matches(
&mut self,
matches: &[Self::Match],
active_match_index: Option<usize>,
window: &mut Window,
cx: &mut Context<Self>,
) {
self.editor
.update(cx, |editor, cx| editor.update_matches(matches, window, cx));
self.editor.update(cx, |editor, cx| {
editor.update_matches(matches, active_match_index, window, cx)
});
}
fn query_suggestion(&mut self, window: &mut Window, cx: &mut Context<Self>) -> String {

View File

@@ -106,9 +106,6 @@ impl Render for AgentNotification {
.font(ui_font)
.border_color(cx.theme().colors().border)
.rounded_xl()
.on_click(cx.listener(|_, _, _, cx| {
cx.emit(AgentNotificationEvent::Accepted);
}))
.child(
h_flex()
.items_start()

View File

@@ -12,6 +12,8 @@ pub use settings::{AnthropicAvailableModel as AvailableModel, ModelMode};
use strum::{EnumIter, EnumString};
use thiserror::Error;
pub mod batches;
pub const ANTHROPIC_API_URL: &str = "https://api.anthropic.com";
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
@@ -465,6 +467,7 @@ impl Model {
}
}
/// Generate completion with streaming.
pub async fn stream_completion(
client: &dyn HttpClient,
api_url: &str,
@@ -477,6 +480,101 @@ pub async fn stream_completion(
.map(|output| output.0)
}
/// Generate completion without streaming.
pub async fn non_streaming_completion(
client: &dyn HttpClient,
api_url: &str,
api_key: &str,
request: Request,
beta_headers: Option<String>,
) -> Result<Response, AnthropicError> {
let (mut response, rate_limits) =
send_request(client, api_url, api_key, &request, beta_headers).await?;
if response.status().is_success() {
let mut body = String::new();
response
.body_mut()
.read_to_string(&mut body)
.await
.map_err(AnthropicError::ReadResponse)?;
serde_json::from_str(&body).map_err(AnthropicError::DeserializeResponse)
} else {
Err(handle_error_response(response, rate_limits).await)
}
}
async fn send_request(
client: &dyn HttpClient,
api_url: &str,
api_key: &str,
request: impl Serialize,
beta_headers: Option<String>,
) -> Result<(http::Response<AsyncBody>, RateLimitInfo), AnthropicError> {
let uri = format!("{api_url}/v1/messages");
let mut request_builder = HttpRequest::builder()
.method(Method::POST)
.uri(uri)
.header("Anthropic-Version", "2023-06-01")
.header("X-Api-Key", api_key.trim())
.header("Content-Type", "application/json");
if let Some(beta_headers) = beta_headers {
request_builder = request_builder.header("Anthropic-Beta", beta_headers);
}
let serialized_request =
serde_json::to_string(&request).map_err(AnthropicError::SerializeRequest)?;
let request = request_builder
.body(AsyncBody::from(serialized_request))
.map_err(AnthropicError::BuildRequestBody)?;
let response = client
.send(request)
.await
.map_err(AnthropicError::HttpSend)?;
let rate_limits = RateLimitInfo::from_headers(response.headers());
Ok((response, rate_limits))
}
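/// Maps a non-success response to the most specific error: server overload (529), then rate limiting, then whatever error payload the body contains.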
async fn handle_error_response(
mut response: http::Response<AsyncBody>,
rate_limits: RateLimitInfo,
) -> AnthropicError {
if response.status().as_u16() == 529 {
return AnthropicError::ServerOverloaded {
retry_after: rate_limits.retry_after,
};
}
if let Some(retry_after) = rate_limits.retry_after {
return AnthropicError::RateLimit { retry_after };
}
let mut body = String::new();
let read_result = response
.body_mut()
.read_to_string(&mut body)
.await
.map_err(AnthropicError::ReadResponse);
if let Err(err) = read_result {
return err;
}
match serde_json::from_str::<Event>(&body) {
Ok(Event::Error { error }) => AnthropicError::ApiError(error),
Ok(_) | Err(_) => AnthropicError::HttpResponseError {
status_code: response.status(),
message: body,
},
}
}
/// An individual rate limit.
#[derive(Debug)]
pub struct RateLimit {
@@ -580,30 +678,10 @@ pub async fn stream_completion_with_rate_limit_info(
base: request,
stream: true,
};
let uri = format!("{api_url}/v1/messages");
let mut request_builder = HttpRequest::builder()
.method(Method::POST)
.uri(uri)
.header("Anthropic-Version", "2023-06-01")
.header("X-Api-Key", api_key.trim())
.header("Content-Type", "application/json");
let (response, rate_limits) =
send_request(client, api_url, api_key, &request, beta_headers).await?;
if let Some(beta_headers) = beta_headers {
request_builder = request_builder.header("Anthropic-Beta", beta_headers);
}
let serialized_request =
serde_json::to_string(&request).map_err(AnthropicError::SerializeRequest)?;
let request = request_builder
.body(AsyncBody::from(serialized_request))
.map_err(AnthropicError::BuildRequestBody)?;
let mut response = client
.send(request)
.await
.map_err(AnthropicError::HttpSend)?;
let rate_limits = RateLimitInfo::from_headers(response.headers());
if response.status().is_success() {
let reader = BufReader::new(response.into_body());
let stream = reader
@@ -622,27 +700,8 @@ pub async fn stream_completion_with_rate_limit_info(
})
.boxed();
Ok((stream, Some(rate_limits)))
} else if response.status().as_u16() == 529 {
Err(AnthropicError::ServerOverloaded {
retry_after: rate_limits.retry_after,
})
} else if let Some(retry_after) = rate_limits.retry_after {
Err(AnthropicError::RateLimit { retry_after })
} else {
let mut body = String::new();
response
.body_mut()
.read_to_string(&mut body)
.await
.map_err(AnthropicError::ReadResponse)?;
match serde_json::from_str::<Event>(&body) {
Ok(Event::Error { error }) => Err(AnthropicError::ApiError(error)),
Ok(_) | Err(_) => Err(AnthropicError::HttpResponseError {
status_code: response.status(),
message: body,
}),
}
Err(handle_error_response(response, rate_limits).await)
}
}

View File

@@ -0,0 +1,190 @@
use anyhow::Result;
use futures::AsyncReadExt;
use http_client::{AsyncBody, HttpClient, Method, Request as HttpRequest};
use serde::{Deserialize, Serialize};
use crate::{AnthropicError, ApiError, RateLimitInfo, Request, Response};
#[derive(Debug, Serialize, Deserialize)]
pub struct BatchRequest {
pub custom_id: String,
pub params: Request,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct CreateBatchRequest {
pub requests: Vec<BatchRequest>,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct MessageBatchRequestCounts {
pub processing: u64,
pub succeeded: u64,
pub errored: u64,
pub canceled: u64,
pub expired: u64,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct MessageBatch {
pub id: String,
#[serde(rename = "type")]
pub batch_type: String,
pub processing_status: String,
pub request_counts: MessageBatchRequestCounts,
pub ended_at: Option<String>,
pub created_at: String,
pub expires_at: String,
pub archived_at: Option<String>,
pub cancel_initiated_at: Option<String>,
pub results_url: Option<String>,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(tag = "type")]
pub enum BatchResult {
#[serde(rename = "succeeded")]
Succeeded { message: Response },
#[serde(rename = "errored")]
Errored { error: ApiError },
#[serde(rename = "canceled")]
Canceled,
#[serde(rename = "expired")]
Expired,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct BatchIndividualResponse {
pub custom_id: String,
pub result: BatchResult,
}
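/// Submits a batch of message requests to the `/v1/messages/batches` endpoint.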
pub async fn create_batch(
client: &dyn HttpClient,
api_url: &str,
api_key: &str,
request: CreateBatchRequest,
) -> Result<MessageBatch, AnthropicError> {
let uri = format!("{api_url}/v1/messages/batches");
let request_builder = HttpRequest::builder()
.method(Method::POST)
.uri(uri)
.header("Anthropic-Version", "2023-06-01")
.header("X-Api-Key", api_key.trim())
.header("Content-Type", "application/json");
let serialized_request =
serde_json::to_string(&request).map_err(AnthropicError::SerializeRequest)?;
let http_request = request_builder
.body(AsyncBody::from(serialized_request))
.map_err(AnthropicError::BuildRequestBody)?;
let mut response = client
.send(http_request)
.await
.map_err(AnthropicError::HttpSend)?;
let rate_limits = RateLimitInfo::from_headers(response.headers());
if response.status().is_success() {
let mut body = String::new();
response
.body_mut()
.read_to_string(&mut body)
.await
.map_err(AnthropicError::ReadResponse)?;
serde_json::from_str(&body).map_err(AnthropicError::DeserializeResponse)
} else {
Err(crate::handle_error_response(response, rate_limits).await)
}
}
pub async fn retrieve_batch(
client: &dyn HttpClient,
api_url: &str,
api_key: &str,
message_batch_id: &str,
) -> Result<MessageBatch, AnthropicError> {
let uri = format!("{api_url}/v1/messages/batches/{message_batch_id}");
let request_builder = HttpRequest::builder()
.method(Method::GET)
.uri(uri)
.header("Anthropic-Version", "2023-06-01")
.header("X-Api-Key", api_key.trim());
let http_request = request_builder
.body(AsyncBody::default())
.map_err(AnthropicError::BuildRequestBody)?;
let mut response = client
.send(http_request)
.await
.map_err(AnthropicError::HttpSend)?;
let rate_limits = RateLimitInfo::from_headers(response.headers());
if response.status().is_success() {
let mut body = String::new();
response
.body_mut()
.read_to_string(&mut body)
.await
.map_err(AnthropicError::ReadResponse)?;
serde_json::from_str(&body).map_err(AnthropicError::DeserializeResponse)
} else {
Err(crate::handle_error_response(response, rate_limits).await)
}
}
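/// Fetches batch results, which are returned as newline-delimited JSON (one response per line).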
pub async fn retrieve_batch_results(
client: &dyn HttpClient,
api_url: &str,
api_key: &str,
message_batch_id: &str,
) -> Result<Vec<BatchIndividualResponse>, AnthropicError> {
let uri = format!("{api_url}/v1/messages/batches/{message_batch_id}/results");
let request_builder = HttpRequest::builder()
.method(Method::GET)
.uri(uri)
.header("Anthropic-Version", "2023-06-01")
.header("X-Api-Key", api_key.trim());
let http_request = request_builder
.body(AsyncBody::default())
.map_err(AnthropicError::BuildRequestBody)?;
let mut response = client
.send(http_request)
.await
.map_err(AnthropicError::HttpSend)?;
let rate_limits = RateLimitInfo::from_headers(response.headers());
if response.status().is_success() {
let mut body = String::new();
response
.body_mut()
.read_to_string(&mut body)
.await
.map_err(AnthropicError::ReadResponse)?;
let mut results = Vec::new();
for line in body.lines() {
if line.trim().is_empty() {
continue;
}
let result: BatchIndividualResponse =
serde_json::from_str(line).map_err(AnthropicError::DeserializeResponse)?;
results.push(result);
}
Ok(results)
} else {
Err(crate::handle_error_response(response, rate_limits).await)
}
}

View File

@@ -584,41 +584,100 @@ impl Model {
}
}
pub fn cross_region_inference_id(&self, region: &str) -> anyhow::Result<String> {
pub fn cross_region_inference_id(
&self,
region: &str,
allow_global: bool,
) -> anyhow::Result<String> {
// List derived from here:
// https://docs.aws.amazon.com/bedrock/latest/userguide/inference-profiles-support.html#inference-profiles-support-system
let model_id = self.request_id();
let supports_global = matches!(
self,
Model::ClaudeOpus4_5
| Model::ClaudeOpus4_5Thinking
| Model::ClaudeHaiku4_5
| Model::ClaudeSonnet4
| Model::ClaudeSonnet4Thinking
| Model::ClaudeSonnet4_5
| Model::ClaudeSonnet4_5Thinking
);
let region_group = if region.starts_with("us-gov-") {
"us-gov"
} else if region.starts_with("us-") {
"us"
} else if region.starts_with("us-")
|| region.starts_with("ca-")
|| region.starts_with("sa-")
{
if allow_global && supports_global {
"global"
} else {
"us"
}
} else if region.starts_with("eu-") {
"eu"
if allow_global && supports_global {
"global"
} else {
"eu"
}
} else if region.starts_with("ap-") || region == "me-central-1" || region == "me-south-1" {
"apac"
} else if region.starts_with("ca-") || region.starts_with("sa-") {
// Canada and South America regions - default to US profiles
"us"
if allow_global && supports_global {
"global"
} else {
"apac"
}
} else {
anyhow::bail!("Unsupported Region {region}");
};
let model_id = self.request_id();
match (self, region_group, region) {
(Model::Custom { .. }, _, _) => Ok(self.request_id().into()),
match (self, region_group) {
// Custom models can't have CRI IDs
(Model::Custom { .. }, _) => Ok(self.request_id().into()),
(
Model::ClaudeOpus4_5
| Model::ClaudeOpus4_5Thinking
| Model::ClaudeHaiku4_5
| Model::ClaudeSonnet4
| Model::ClaudeSonnet4Thinking
| Model::ClaudeSonnet4_5
| Model::ClaudeSonnet4_5Thinking,
"global",
_,
) => Ok(format!("{}.{}", region_group, model_id)),
// Models with US Gov only
(Model::Claude3_5Sonnet, "us-gov") | (Model::Claude3Haiku, "us-gov") => {
Ok(format!("{}.{}", region_group, model_id))
(
Model::Claude3Haiku
| Model::Claude3_5Sonnet
| Model::Claude3_7Sonnet
| Model::Claude3_7SonnetThinking
| Model::ClaudeSonnet4_5
| Model::ClaudeSonnet4_5Thinking,
"us-gov",
_,
) => Ok(format!("{}.{}", region_group, model_id)),
(
Model::ClaudeHaiku4_5 | Model::ClaudeSonnet4_5 | Model::ClaudeSonnet4_5Thinking,
"apac",
"ap-southeast-2" | "ap-southeast-4",
) => Ok(format!("au.{}", model_id)),
(
Model::ClaudeHaiku4_5 | Model::ClaudeSonnet4_5 | Model::ClaudeSonnet4_5Thinking,
"apac",
"ap-northeast-1" | "ap-northeast-3",
) => Ok(format!("jp.{}", model_id)),
(Model::AmazonNovaLite, "us", r) if r.starts_with("ca-") => {
Ok(format!("ca.{}", model_id))
}
// Available everywhere
(Model::AmazonNovaLite | Model::AmazonNovaMicro | Model::AmazonNovaPro, _) => {
Ok(format!("{}.{}", region_group, model_id))
}
// Models in US
(
Model::AmazonNovaPremier
| Model::AmazonNovaLite
| Model::AmazonNovaMicro
| Model::AmazonNovaPro
| Model::Claude3_5Haiku
| Model::ClaudeHaiku4_5
| Model::Claude3_5Sonnet
@@ -655,16 +714,18 @@ impl Model {
| Model::PalmyraWriterX4
| Model::PalmyraWriterX5,
"us",
_,
) => Ok(format!("{}.{}", region_group, model_id)),
// Models available in EU
(
Model::Claude3_5Sonnet
Model::AmazonNovaLite
| Model::AmazonNovaMicro
| Model::AmazonNovaPro
| Model::Claude3_5Sonnet
| Model::ClaudeHaiku4_5
| Model::Claude3_7Sonnet
| Model::Claude3_7SonnetThinking
| Model::ClaudeSonnet4
| Model::ClaudeSonnet4Thinking
| Model::ClaudeSonnet4_5
| Model::ClaudeSonnet4_5Thinking
| Model::Claude3Haiku
@@ -673,26 +734,26 @@ impl Model {
| Model::MetaLlama323BInstructV1
| Model::MistralPixtralLarge2502V1,
"eu",
_,
) => Ok(format!("{}.{}", region_group, model_id)),
// Models available in APAC
(
Model::Claude3_5Sonnet
Model::AmazonNovaLite
| Model::AmazonNovaMicro
| Model::AmazonNovaPro
| Model::Claude3_5Sonnet
| Model::Claude3_5SonnetV2
| Model::ClaudeHaiku4_5
| Model::Claude3Haiku
| Model::Claude3Sonnet
| Model::Claude3_7Sonnet
| Model::Claude3_7SonnetThinking
| Model::ClaudeSonnet4
| Model::ClaudeSonnet4Thinking
| Model::ClaudeSonnet4_5
| Model::ClaudeSonnet4_5Thinking,
| Model::Claude3Haiku
| Model::Claude3Sonnet,
"apac",
_,
) => Ok(format!("{}.{}", region_group, model_id)),
// Any other combination is not supported
_ => Ok(self.request_id().into()),
_ => Ok(model_id.into()),
}
}
}
@@ -705,15 +766,15 @@ mod tests {
fn test_us_region_inference_ids() -> anyhow::Result<()> {
// Test US regions
assert_eq!(
Model::Claude3_5SonnetV2.cross_region_inference_id("us-east-1")?,
Model::Claude3_5SonnetV2.cross_region_inference_id("us-east-1", false)?,
"us.anthropic.claude-3-5-sonnet-20241022-v2:0"
);
assert_eq!(
Model::Claude3_5SonnetV2.cross_region_inference_id("us-west-2")?,
Model::Claude3_5SonnetV2.cross_region_inference_id("us-west-2", false)?,
"us.anthropic.claude-3-5-sonnet-20241022-v2:0"
);
assert_eq!(
Model::AmazonNovaPro.cross_region_inference_id("us-east-2")?,
Model::AmazonNovaPro.cross_region_inference_id("us-east-2", false)?,
"us.amazon.nova-pro-v1:0"
);
Ok(())
@@ -723,19 +784,19 @@ mod tests {
fn test_eu_region_inference_ids() -> anyhow::Result<()> {
// Test European regions
assert_eq!(
Model::ClaudeSonnet4.cross_region_inference_id("eu-west-1")?,
Model::ClaudeSonnet4.cross_region_inference_id("eu-west-1", false)?,
"eu.anthropic.claude-sonnet-4-20250514-v1:0"
);
assert_eq!(
Model::ClaudeSonnet4_5.cross_region_inference_id("eu-west-1")?,
Model::ClaudeSonnet4_5.cross_region_inference_id("eu-west-1", false)?,
"eu.anthropic.claude-sonnet-4-5-20250929-v1:0"
);
assert_eq!(
Model::Claude3Sonnet.cross_region_inference_id("eu-west-1")?,
Model::Claude3Sonnet.cross_region_inference_id("eu-west-1", false)?,
"eu.anthropic.claude-3-sonnet-20240229-v1:0"
);
assert_eq!(
Model::AmazonNovaMicro.cross_region_inference_id("eu-north-1")?,
Model::AmazonNovaMicro.cross_region_inference_id("eu-north-1", false)?,
"eu.amazon.nova-micro-v1:0"
);
Ok(())
@@ -745,15 +806,15 @@ mod tests {
fn test_apac_region_inference_ids() -> anyhow::Result<()> {
// Test Asia-Pacific regions
assert_eq!(
Model::Claude3_5SonnetV2.cross_region_inference_id("ap-northeast-1")?,
Model::Claude3_5SonnetV2.cross_region_inference_id("ap-northeast-1", false)?,
"apac.anthropic.claude-3-5-sonnet-20241022-v2:0"
);
assert_eq!(
Model::Claude3_5SonnetV2.cross_region_inference_id("ap-southeast-2")?,
Model::Claude3_5SonnetV2.cross_region_inference_id("ap-southeast-2", false)?,
"apac.anthropic.claude-3-5-sonnet-20241022-v2:0"
);
assert_eq!(
Model::AmazonNovaLite.cross_region_inference_id("ap-south-1")?,
Model::AmazonNovaLite.cross_region_inference_id("ap-south-1", false)?,
"apac.amazon.nova-lite-v1:0"
);
Ok(())
@@ -763,11 +824,11 @@ mod tests {
fn test_gov_region_inference_ids() -> anyhow::Result<()> {
// Test Government regions
assert_eq!(
Model::Claude3_5Sonnet.cross_region_inference_id("us-gov-east-1")?,
Model::Claude3_5Sonnet.cross_region_inference_id("us-gov-east-1", false)?,
"us-gov.anthropic.claude-3-5-sonnet-20240620-v1:0"
);
assert_eq!(
Model::Claude3Haiku.cross_region_inference_id("us-gov-west-1")?,
Model::Claude3Haiku.cross_region_inference_id("us-gov-west-1", false)?,
"us-gov.anthropic.claude-3-haiku-20240307-v1:0"
);
Ok(())
@@ -777,15 +838,15 @@ mod tests {
fn test_meta_models_inference_ids() -> anyhow::Result<()> {
// Test Meta models
assert_eq!(
Model::MetaLlama370BInstructV1.cross_region_inference_id("us-east-1")?,
Model::MetaLlama370BInstructV1.cross_region_inference_id("us-east-1", false)?,
"meta.llama3-70b-instruct-v1:0"
);
assert_eq!(
Model::MetaLlama3170BInstructV1.cross_region_inference_id("us-east-1")?,
Model::MetaLlama3170BInstructV1.cross_region_inference_id("us-east-1", false)?,
"us.meta.llama3-1-70b-instruct-v1:0"
);
assert_eq!(
Model::MetaLlama321BInstructV1.cross_region_inference_id("eu-west-1")?,
Model::MetaLlama321BInstructV1.cross_region_inference_id("eu-west-1", false)?,
"eu.meta.llama3-2-1b-instruct-v1:0"
);
Ok(())
@@ -796,11 +857,11 @@ mod tests {
// Mistral models don't follow the regional prefix pattern,
// so they should return their original IDs
assert_eq!(
Model::MistralMistralLarge2402V1.cross_region_inference_id("us-east-1")?,
Model::MistralMistralLarge2402V1.cross_region_inference_id("us-east-1", false)?,
"mistral.mistral-large-2402-v1:0"
);
assert_eq!(
Model::MistralMixtral8x7BInstructV0.cross_region_inference_id("eu-west-1")?,
Model::MistralMixtral8x7BInstructV0.cross_region_inference_id("eu-west-1", false)?,
"mistral.mixtral-8x7b-instruct-v0:1"
);
Ok(())
@@ -811,11 +872,11 @@ mod tests {
// AI21 models don't follow the regional prefix pattern,
// so they should return their original IDs
assert_eq!(
Model::AI21J2UltraV1.cross_region_inference_id("us-east-1")?,
Model::AI21J2UltraV1.cross_region_inference_id("us-east-1", false)?,
"ai21.j2-ultra-v1"
);
assert_eq!(
Model::AI21JambaInstructV1.cross_region_inference_id("eu-west-1")?,
Model::AI21JambaInstructV1.cross_region_inference_id("eu-west-1", false)?,
"ai21.jamba-instruct-v1:0"
);
Ok(())
@@ -826,11 +887,11 @@ mod tests {
// Cohere models don't follow the regional prefix pattern,
// so they should return their original IDs
assert_eq!(
Model::CohereCommandRV1.cross_region_inference_id("us-east-1")?,
Model::CohereCommandRV1.cross_region_inference_id("us-east-1", false)?,
"cohere.command-r-v1:0"
);
assert_eq!(
Model::CohereCommandTextV14_4k.cross_region_inference_id("ap-southeast-1")?,
Model::CohereCommandTextV14_4k.cross_region_inference_id("ap-southeast-1", false)?,
"cohere.command-text-v14:7:4k"
);
Ok(())
@@ -850,10 +911,17 @@ mod tests {
// Custom model should return its name unchanged
assert_eq!(
custom_model.cross_region_inference_id("us-east-1")?,
custom_model.cross_region_inference_id("us-east-1", false)?,
"custom.my-model-v1:0"
);
// Test that models without global support fall back to regional when allow_global is true
assert_eq!(
Model::AmazonNovaPro.cross_region_inference_id("us-east-1", true)?,
"us.amazon.nova-pro-v1:0",
"Nova Pro should fall back to regional profile even when allow_global is true"
);
Ok(())
}
@@ -892,3 +960,28 @@ mod tests {
);
}
}
#[test]
fn test_global_inference_ids() -> anyhow::Result<()> {
// Test global inference for models that support it when allow_global is true
assert_eq!(
Model::ClaudeSonnet4.cross_region_inference_id("us-east-1", true)?,
"global.anthropic.claude-sonnet-4-20250514-v1:0"
);
assert_eq!(
Model::ClaudeSonnet4_5.cross_region_inference_id("eu-west-1", true)?,
"global.anthropic.claude-sonnet-4-5-20250929-v1:0"
);
assert_eq!(
Model::ClaudeHaiku4_5.cross_region_inference_id("ap-south-1", true)?,
"global.anthropic.claude-haiku-4-5-20251001-v1:0"
);
// Test that regional prefix is used when allow_global is false
assert_eq!(
Model::ClaudeSonnet4.cross_region_inference_id("us-east-1", false)?,
"us.anthropic.claude-sonnet-4-20250514-v1:0"
);
Ok(())
}

View File

@@ -32,7 +32,7 @@ struct Detect;
trait InstalledApp {
fn zed_version_string(&self) -> String;
fn launch(&self, ipc_url: String) -> anyhow::Result<()>;
fn launch(&self, ipc_url: String, user_data_dir: Option<&str>) -> anyhow::Result<()>;
fn run_foreground(
&self,
ipc_url: String,
@@ -588,7 +588,7 @@ fn main() -> Result<()> {
if args.foreground {
app.run_foreground(url, user_data_dir.as_deref())?;
} else {
app.launch(url)?;
app.launch(url, user_data_dir.as_deref())?;
sender.join().unwrap()?;
if let Some(handle) = stdin_pipe_handle {
handle.join().unwrap()?;
@@ -709,14 +709,18 @@ mod linux {
)
}
fn launch(&self, ipc_url: String) -> anyhow::Result<()> {
let sock_path = paths::data_dir().join(format!(
fn launch(&self, ipc_url: String, user_data_dir: Option<&str>) -> anyhow::Result<()> {
let data_dir = user_data_dir
.map(PathBuf::from)
.unwrap_or_else(|| paths::data_dir().clone());
let sock_path = data_dir.join(format!(
"zed-{}.sock",
*release_channel::RELEASE_CHANNEL_NAME
));
let sock = UnixDatagram::unbound()?;
if sock.connect(&sock_path).is_err() {
self.boot_background(ipc_url)?;
self.boot_background(ipc_url, user_data_dir)?;
} else {
sock.send(ipc_url.as_bytes())?;
}
@@ -742,7 +746,11 @@ mod linux {
}
impl App {
fn boot_background(&self, ipc_url: String) -> anyhow::Result<()> {
fn boot_background(
&self,
ipc_url: String,
user_data_dir: Option<&str>,
) -> anyhow::Result<()> {
let path = &self.0;
match fork::fork() {
@@ -756,8 +764,13 @@ mod linux {
if fork::close_fd().is_err() {
eprintln!("failed to close_fd: {}", std::io::Error::last_os_error());
}
let error =
exec::execvp(path.clone(), &[path.as_os_str(), &OsString::from(ipc_url)]);
let mut args: Vec<OsString> =
vec![path.as_os_str().to_owned(), OsString::from(ipc_url)];
if let Some(dir) = user_data_dir {
args.push(OsString::from("--user-data-dir"));
args.push(OsString::from(dir));
}
let error = exec::execvp(path.clone(), &args);
// if exec succeeded, we never get here.
eprintln!("failed to exec {:?}: {}", path, error);
process::exit(1)
@@ -943,11 +956,14 @@ mod windows {
)
}
fn launch(&self, ipc_url: String) -> anyhow::Result<()> {
fn launch(&self, ipc_url: String, user_data_dir: Option<&str>) -> anyhow::Result<()> {
if check_single_instance() {
std::process::Command::new(self.0.clone())
.arg(ipc_url)
.spawn()?;
let mut cmd = std::process::Command::new(self.0.clone());
cmd.arg(ipc_url);
if let Some(dir) = user_data_dir {
cmd.arg("--user-data-dir").arg(dir);
}
cmd.spawn()?;
} else {
unsafe {
let pipe = CreateFileW(
@@ -1096,7 +1112,7 @@ mod mac_os {
format!("Zed {} {}", self.version(), self.path().display(),)
}
fn launch(&self, url: String) -> anyhow::Result<()> {
fn launch(&self, url: String, user_data_dir: Option<&str>) -> anyhow::Result<()> {
match self {
Self::App { app_bundle, .. } => {
let app_path = app_bundle;
@@ -1146,8 +1162,11 @@ mod mac_os {
format!("Cloning descriptor for file {subprocess_stdout_file:?}")
})?;
let mut command = std::process::Command::new(executable);
let command = command
.env(FORCE_CLI_MODE_ENV_VAR_NAME, "")
command.env(FORCE_CLI_MODE_ENV_VAR_NAME, "");
if let Some(dir) = user_data_dir {
command.arg("--user-data-dir").arg(dir);
}
command
.stderr(subprocess_stdout_file)
.stdout(subprocess_stdin_file)
.arg(url);

View File

@@ -31,18 +31,10 @@ pub struct PredictEditsRequest {
/// Within `signatures`
pub excerpt_parent: Option<usize>,
#[serde(skip_serializing_if = "Vec::is_empty", default)]
pub included_files: Vec<IncludedFile>,
#[serde(skip_serializing_if = "Vec::is_empty", default)]
pub signatures: Vec<Signature>,
#[serde(skip_serializing_if = "Vec::is_empty", default)]
pub referenced_declarations: Vec<ReferencedDeclaration>,
pub related_files: Vec<RelatedFile>,
pub events: Vec<Arc<Event>>,
#[serde(default)]
pub can_collect_data: bool,
#[serde(skip_serializing_if = "Vec::is_empty", default)]
pub diagnostic_groups: Vec<DiagnosticGroup>,
#[serde(skip_serializing_if = "is_default", default)]
pub diagnostic_groups_truncated: bool,
/// Info about the git repository state, only present when can_collect_data is true.
#[serde(skip_serializing_if = "Option::is_none", default)]
pub git_info: Option<PredictEditsGitInfo>,
@@ -58,7 +50,7 @@ pub struct PredictEditsRequest {
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct IncludedFile {
pub struct RelatedFile {
pub path: Arc<Path>,
pub max_row: Line,
pub excerpts: Vec<Excerpt>,
@@ -72,11 +64,9 @@ pub struct Excerpt {
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, EnumIter)]
pub enum PromptFormat {
MarkedExcerpt,
LabeledSections,
NumLinesUniDiff,
/// XML old_text/new_text
OldTextNewText,
/// Prompt format intended for use via zeta_cli
/// Prompt format intended for use via edit_prediction_cli
OnlySnippets,
/// One-sentence instructions used in fine-tuned models
Minimal,
@@ -87,7 +77,7 @@ pub enum PromptFormat {
}
impl PromptFormat {
pub const DEFAULT: PromptFormat = PromptFormat::NumLinesUniDiff;
pub const DEFAULT: PromptFormat = PromptFormat::Minimal;
}
impl Default for PromptFormat {
@@ -105,10 +95,7 @@ impl PromptFormat {
impl std::fmt::Display for PromptFormat {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
PromptFormat::MarkedExcerpt => write!(f, "Marked Excerpt"),
PromptFormat::LabeledSections => write!(f, "Labeled Sections"),
PromptFormat::OnlySnippets => write!(f, "Only Snippets"),
PromptFormat::NumLinesUniDiff => write!(f, "Numbered Lines / Unified Diff"),
PromptFormat::OldTextNewText => write!(f, "Old Text / New Text"),
PromptFormat::Minimal => write!(f, "Minimal"),
PromptFormat::MinimalQwen => write!(f, "Minimal + Qwen FIM"),
@@ -178,67 +165,6 @@ impl<'a> std::fmt::Display for DiffPathFmt<'a> {
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Signature {
pub text: String,
pub text_is_truncated: bool,
#[serde(skip_serializing_if = "Option::is_none", default)]
pub parent_index: Option<usize>,
/// Range of `text` within the file, possibly truncated according to `text_is_truncated`. The
/// file is implicitly the file that contains the descendant declaration or excerpt.
pub range: Range<Line>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ReferencedDeclaration {
pub path: Arc<Path>,
pub text: String,
pub text_is_truncated: bool,
/// Range of `text` within file, possibly truncated according to `text_is_truncated`
pub range: Range<Line>,
/// Range within `text`
pub signature_range: Range<usize>,
/// Index within `signatures`.
#[serde(skip_serializing_if = "Option::is_none", default)]
pub parent_index: Option<usize>,
pub score_components: DeclarationScoreComponents,
pub signature_score: f32,
pub declaration_score: f32,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DeclarationScoreComponents {
pub is_same_file: bool,
pub is_referenced_nearby: bool,
pub is_referenced_in_breadcrumb: bool,
pub reference_count: usize,
pub same_file_declaration_count: usize,
pub declaration_count: usize,
pub reference_line_distance: u32,
pub declaration_line_distance: u32,
pub excerpt_vs_item_jaccard: f32,
pub excerpt_vs_signature_jaccard: f32,
pub adjacent_vs_item_jaccard: f32,
pub adjacent_vs_signature_jaccard: f32,
pub excerpt_vs_item_weighted_overlap: f32,
pub excerpt_vs_signature_weighted_overlap: f32,
pub adjacent_vs_item_weighted_overlap: f32,
pub adjacent_vs_signature_weighted_overlap: f32,
pub path_import_match_count: usize,
pub wildcard_path_import_match_count: usize,
pub import_similarity: f32,
pub max_import_similarity: f32,
pub normalized_import_similarity: f32,
pub wildcard_import_similarity: f32,
pub normalized_wildcard_import_similarity: f32,
pub included_by_others: usize,
pub includes_others: usize,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(transparent)]
pub struct DiagnosticGroup(pub Box<serde_json::value::RawValue>);
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PredictEditsResponse {
pub request_id: Uuid,
@@ -262,10 +188,6 @@ pub struct Edit {
pub content: String,
}
fn is_default<T: Default + PartialEq>(value: &T) -> bool {
*value == T::default()
}
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, PartialOrd, Eq, Ord)]
pub struct Point {
pub line: Line,

View File

@@ -15,9 +15,4 @@ path = "src/cloud_zeta2_prompt.rs"
anyhow.workspace = true
cloud_llm_client.workspace = true
indoc.workspace = true
ordered-float.workspace = true
rustc-hash.workspace = true
schemars.workspace = true
serde.workspace = true
serde_json.workspace = true
strum.workspace = true

View File

@@ -1,20 +1,12 @@
//! Zeta2 prompt planning and generation code shared with cloud.
pub mod retrieval_prompt;
use anyhow::{Context as _, Result, anyhow};
use anyhow::Result;
use cloud_llm_client::predict_edits_v3::{
self, DiffPathFmt, Event, Excerpt, IncludedFile, Line, Point, PromptFormat,
ReferencedDeclaration,
self, DiffPathFmt, Event, Excerpt, Line, Point, PromptFormat, RelatedFile,
};
use indoc::indoc;
use ordered_float::OrderedFloat;
use rustc_hash::{FxHashMap, FxHashSet};
use serde::Serialize;
use std::cmp;
use std::fmt::Write;
use std::path::Path;
use std::sync::Arc;
use std::{cmp::Reverse, collections::BinaryHeap, ops::Range, path::Path};
use strum::{EnumIter, IntoEnumIterator};
pub const DEFAULT_MAX_PROMPT_BYTES: usize = 10 * 1024;
@@ -24,69 +16,6 @@ pub const EDITABLE_REGION_START_MARKER_WITH_NEWLINE: &str = "<|editable_region_s
/// NOTE: Differs from zed version of constant - includes a newline
pub const EDITABLE_REGION_END_MARKER_WITH_NEWLINE: &str = "<|editable_region_end|>\n";
// TODO: use constants for markers?
const MARKED_EXCERPT_INSTRUCTIONS: &str = indoc! {"
You are a code completion assistant and your task is to analyze user edits and then rewrite an excerpt that the user provides, suggesting the appropriate edits within the excerpt, taking into account the cursor location.
The excerpt to edit will be wrapped in markers <|editable_region_start|> and <|editable_region_end|>. The cursor position is marked with <|user_cursor|>. Please respond with edited code for that region.
Other code is provided for context, and `…` indicates when code has been skipped.
## Edit History
"};
const LABELED_SECTIONS_INSTRUCTIONS: &str = indoc! {r#"
You are a code completion assistant and your task is to analyze user edits, and suggest an edit to one of the provided sections of code.
Sections of code are grouped by file and then labeled by `<|section_N|>` (e.g. `<|section_8|>`).
The cursor position is marked with `<|user_cursor|>` and it will appear within a special section labeled `<|current_section|>`. Prefer editing the current section until no more changes are needed within it.
Respond ONLY with the name of the section to edit on a single line, followed by all of the code that should replace that section. For example:
<|current_section|>
for i in 0..16 {
println!("{i}");
}
## Edit History
"#};
const NUMBERED_LINES_INSTRUCTIONS: &str = indoc! {r#"
# Instructions
You are an edit prediction agent in a code editor.
Your job is to predict the next edit that the user will make,
based on their last few edits and their current cursor location.
## Output Format
You must briefly explain your understanding of the user's goal, in one
or two sentences, and then specify their next edit in the form of a
unified diff, like this:
```
--- a/src/myapp/cli.py
+++ b/src/myapp/cli.py
@@ ... @@
import os
import time
import sys
+from constants import LOG_LEVEL_WARNING
@@ ... @@
config.headless()
config.set_interactive(false)
-config.set_log_level(LOG_L)
+config.set_log_level(LOG_LEVEL_WARNING)
config.set_use_color(True)
```
## Edit History
"#};
const STUDENT_MODEL_INSTRUCTIONS: &str = indoc! {r#"
You are a code completion assistant that analyzes edit history to identify and systematically complete incomplete refactorings or patterns across the entire codebase.
@@ -94,20 +23,6 @@ const STUDENT_MODEL_INSTRUCTIONS: &str = indoc! {r#"
"#};
const UNIFIED_DIFF_REMINDER: &str = indoc! {"
---
Analyze the edit history and the files, then provide the unified diff for your predicted edits.
Do not include the cursor marker in your output.
Your diff should include edited file paths in its file headers (lines beginning with `---` and `+++`).
Do not include line numbers in the hunk headers, use `@@ ... @@`.
Removed lines begin with `-`.
Added lines begin with `+`.
Context lines begin with an extra space.
Context and removed lines are used to match the target edit location, so make sure to include enough of them
to uniquely identify it amongst all excerpts of code provided.
"};
const MINIMAL_PROMPT_REMINDER: &str = indoc! {"
---
@@ -164,49 +79,25 @@ const OLD_TEXT_NEW_TEXT_REMINDER: &str = indoc! {r#"
Remember that the edits in the edit history have already been applied.
"#};
pub fn build_prompt(
request: &predict_edits_v3::PredictEditsRequest,
) -> Result<(String, SectionLabels)> {
let mut section_labels = Default::default();
pub fn build_prompt(request: &predict_edits_v3::PredictEditsRequest) -> Result<String> {
let prompt_data = PromptData {
events: request.events.clone(),
cursor_point: request.cursor_point,
cursor_path: request.excerpt_path.clone(),
included_files: request.included_files.clone(),
included_files: request.related_files.clone(),
};
match request.prompt_format {
PromptFormat::MinimalQwen => {
return Ok((MinimalQwenPrompt.render(&prompt_data), section_labels));
return Ok(MinimalQwenPrompt.render(&prompt_data));
}
PromptFormat::SeedCoder1120 => {
return Ok((SeedCoder1120Prompt.render(&prompt_data), section_labels));
return Ok(SeedCoder1120Prompt.render(&prompt_data));
}
_ => (),
};
let mut insertions = match request.prompt_format {
PromptFormat::MarkedExcerpt => vec![
(
Point {
line: request.excerpt_line_range.start,
column: 0,
},
EDITABLE_REGION_START_MARKER_WITH_NEWLINE,
),
(request.cursor_point, CURSOR_MARKER),
(
Point {
line: request.excerpt_line_range.end,
column: 0,
},
EDITABLE_REGION_END_MARKER_WITH_NEWLINE,
),
],
PromptFormat::LabeledSections
| PromptFormat::NumLinesUniDiff
| PromptFormat::Minimal
| PromptFormat::OldTextNewText => {
let insertions = match request.prompt_format {
PromptFormat::Minimal | PromptFormat::OldTextNewText => {
vec![(request.cursor_point, CURSOR_MARKER)]
}
PromptFormat::OnlySnippets => vec![],
@@ -215,9 +106,6 @@ pub fn build_prompt(
};
let mut prompt = match request.prompt_format {
PromptFormat::MarkedExcerpt => MARKED_EXCERPT_INSTRUCTIONS.to_string(),
PromptFormat::LabeledSections => LABELED_SECTIONS_INSTRUCTIONS.to_string(),
PromptFormat::NumLinesUniDiff => NUMBERED_LINES_INSTRUCTIONS.to_string(),
PromptFormat::OldTextNewText => XML_TAGS_INSTRUCTIONS.to_string(),
PromptFormat::OnlySnippets => String::new(),
PromptFormat::Minimal => STUDENT_MODEL_INSTRUCTIONS.to_string(),
@@ -247,7 +135,7 @@ pub fn build_prompt(
You can only edit exactly this part of the file.
We prepend line numbers (e.g., `123|<actual line>`); they are not part of the file.
"},
PromptFormat::NumLinesUniDiff | PromptFormat::OldTextNewText => indoc! {"
PromptFormat::OldTextNewText => indoc! {"
## Code Excerpts
Here are some excerpts of code that you should take into account to predict the next edit.
@@ -263,64 +151,51 @@ pub fn build_prompt(
Lines starting with `…` indicate omitted line ranges. These may appear inside multi-line code constructs.
"},
_ => indoc! {"
PromptFormat::OnlySnippets | PromptFormat::MinimalQwen | PromptFormat::SeedCoder1120 => {
indoc! {"
## Code Excerpts
The cursor marker <|user_cursor|> indicates the current user cursor position.
The file is in its current state; edits from the edit history have already been applied.
"},
"}
}
};
prompt.push_str(excerpts_preamble);
prompt.push('\n');
if !request.referenced_declarations.is_empty() || !request.signatures.is_empty() {
let syntax_based_prompt = SyntaxBasedPrompt::populate(request)?;
section_labels = syntax_based_prompt.write(&mut insertions, &mut prompt)?;
} else {
if request.prompt_format == PromptFormat::LabeledSections {
anyhow::bail!("PromptFormat::LabeledSections cannot be used with ContextMode::Llm");
}
let include_line_numbers = matches!(
request.prompt_format,
PromptFormat::NumLinesUniDiff | PromptFormat::Minimal
);
for related_file in &request.included_files {
if request.prompt_format == PromptFormat::Minimal {
write_codeblock_with_filename(
&related_file.path,
&related_file.excerpts,
if related_file.path == request.excerpt_path {
&insertions
} else {
&[]
},
related_file.max_row,
include_line_numbers,
&mut prompt,
);
} else {
write_codeblock(
&related_file.path,
&related_file.excerpts,
if related_file.path == request.excerpt_path {
&insertions
} else {
&[]
},
related_file.max_row,
include_line_numbers,
&mut prompt,
);
}
let include_line_numbers = matches!(request.prompt_format, PromptFormat::Minimal);
for related_file in &request.related_files {
if request.prompt_format == PromptFormat::Minimal {
write_codeblock_with_filename(
&related_file.path,
&related_file.excerpts,
if related_file.path == request.excerpt_path {
&insertions
} else {
&[]
},
related_file.max_row,
include_line_numbers,
&mut prompt,
);
} else {
write_codeblock(
&related_file.path,
&related_file.excerpts,
if related_file.path == request.excerpt_path {
&insertions
} else {
&[]
},
related_file.max_row,
include_line_numbers,
&mut prompt,
);
}
}
match request.prompt_format {
PromptFormat::NumLinesUniDiff => {
prompt.push_str(UNIFIED_DIFF_REMINDER);
}
PromptFormat::OldTextNewText => {
prompt.push_str(OLD_TEXT_NEW_TEXT_REMINDER);
}
@@ -330,7 +205,7 @@ pub fn build_prompt(
_ => {}
}
Ok((prompt, section_labels))
Ok(prompt)
}
pub fn generation_params(prompt_format: PromptFormat) -> GenerationParams {
@@ -444,476 +319,11 @@ pub fn push_events(output: &mut String, events: &[Arc<predict_edits_v3::Event>])
writeln!(output, "`````\n").unwrap();
}
pub struct SyntaxBasedPrompt<'a> {
request: &'a predict_edits_v3::PredictEditsRequest,
/// Snippets to include in the prompt. These may overlap - they are merged / deduplicated in
/// `to_prompt_string`.
snippets: Vec<PlannedSnippet<'a>>,
budget_used: usize,
}
#[derive(Clone, Debug)]
pub struct PlannedSnippet<'a> {
path: Arc<Path>,
range: Range<Line>,
text: &'a str,
// TODO: Indicate this in the output
#[allow(dead_code)]
text_is_truncated: bool,
}
#[derive(EnumIter, Clone, Copy, PartialEq, Eq, Hash, Debug, PartialOrd, Ord)]
pub enum DeclarationStyle {
Signature,
Declaration,
}
#[derive(Default, Clone, Debug, Serialize)]
pub struct SectionLabels {
pub excerpt_index: usize,
pub section_ranges: Vec<(Arc<Path>, Range<Line>)>,
}
impl<'a> SyntaxBasedPrompt<'a> {
/// Greedy one-pass knapsack algorithm to populate the prompt plan. Does the following:
///
/// Initializes a priority queue by populating it with each snippet, finding the
/// DeclarationStyle that minimizes `score_density = score / snippet.range(style).len()`. When a
/// "signature" snippet is popped, insert an entry for the "declaration" variant that reflects
/// the cost of upgrade.
///
/// TODO: Implement an early halting condition. One option might be to have another priority
/// queue where the score is the size, and update it accordingly. Another option might be to
/// have some simpler heuristic like bailing after N failed insertions, or based on how much
/// budget is left.
///
/// TODO: Currently known sources of imprecision:
///
/// * Does not consider snippet overlap when ranking. For example, it might add a field to the
/// plan even though the containing struct is already included.
///
/// * Does not consider cost of signatures when ranking snippets - this is tricky since
/// signatures may be shared by multiple snippets.
///
/// * Does not include file paths / other text when considering max_bytes.
pub fn populate(request: &'a predict_edits_v3::PredictEditsRequest) -> Result<Self> {
let mut this = Self {
request,
snippets: Vec::new(),
budget_used: request.excerpt.len(),
};
let mut included_parents = FxHashSet::default();
let additional_parents = this.additional_parent_signatures(
&request.excerpt_path,
request.excerpt_parent,
&included_parents,
)?;
this.add_parents(&mut included_parents, additional_parents);
let max_bytes = request.prompt_max_bytes.unwrap_or(DEFAULT_MAX_PROMPT_BYTES);
if this.budget_used > max_bytes {
return Err(anyhow!(
"Excerpt + signatures size of {} already exceeds budget of {}",
this.budget_used,
max_bytes
));
}
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord)]
struct QueueEntry {
score_density: OrderedFloat<f32>,
declaration_index: usize,
style: DeclarationStyle,
}
// Initialize priority queue with the best score for each snippet.
let mut queue: BinaryHeap<QueueEntry> = BinaryHeap::new();
for (declaration_index, declaration) in request.referenced_declarations.iter().enumerate() {
let (style, score_density) = DeclarationStyle::iter()
.map(|style| {
(
style,
OrderedFloat(declaration_score_density(&declaration, style)),
)
})
.max_by_key(|(_, score_density)| *score_density)
.unwrap();
queue.push(QueueEntry {
score_density,
declaration_index,
style,
});
}
// Knapsack selection loop
while let Some(queue_entry) = queue.pop() {
let Some(declaration) = request
.referenced_declarations
.get(queue_entry.declaration_index)
else {
return Err(anyhow!(
"Invalid declaration index {}",
queue_entry.declaration_index
));
};
let mut additional_bytes = declaration_size(declaration, queue_entry.style);
if this.budget_used + additional_bytes > max_bytes {
continue;
}
let additional_parents = this.additional_parent_signatures(
&declaration.path,
declaration.parent_index,
&mut included_parents,
)?;
additional_bytes += additional_parents
.iter()
.map(|(_, snippet)| snippet.text.len())
.sum::<usize>();
if this.budget_used + additional_bytes > max_bytes {
continue;
}
this.budget_used += additional_bytes;
this.add_parents(&mut included_parents, additional_parents);
let planned_snippet = match queue_entry.style {
DeclarationStyle::Signature => {
let Some(text) = declaration.text.get(declaration.signature_range.clone())
else {
return Err(anyhow!(
"Invalid declaration signature_range {:?} with text.len() = {}",
declaration.signature_range,
declaration.text.len()
));
};
let signature_start_line = declaration.range.start
+ Line(
declaration.text[..declaration.signature_range.start]
.lines()
.count() as u32,
);
let signature_end_line = signature_start_line
+ Line(
declaration.text
[declaration.signature_range.start..declaration.signature_range.end]
.lines()
.count() as u32,
);
let range = signature_start_line..signature_end_line;
PlannedSnippet {
path: declaration.path.clone(),
range,
text,
text_is_truncated: declaration.text_is_truncated,
}
}
DeclarationStyle::Declaration => PlannedSnippet {
path: declaration.path.clone(),
range: declaration.range.clone(),
text: &declaration.text,
text_is_truncated: declaration.text_is_truncated,
},
};
this.snippets.push(planned_snippet);
// When a Signature is consumed, insert an entry for Definition style.
if queue_entry.style == DeclarationStyle::Signature {
let signature_size = declaration_size(&declaration, DeclarationStyle::Signature);
let declaration_size =
declaration_size(&declaration, DeclarationStyle::Declaration);
let signature_score = declaration_score(&declaration, DeclarationStyle::Signature);
let declaration_score =
declaration_score(&declaration, DeclarationStyle::Declaration);
let score_diff = declaration_score - signature_score;
let size_diff = declaration_size.saturating_sub(signature_size);
if score_diff > 0.0001 && size_diff > 0 {
queue.push(QueueEntry {
declaration_index: queue_entry.declaration_index,
score_density: OrderedFloat(score_diff / (size_diff as f32)),
style: DeclarationStyle::Declaration,
});
}
}
}
anyhow::Ok(this)
}
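The doc comment above describes `populate` as a greedy one-pass knapsack keyed on score density. As a rough illustration of that idea only, here is a minimal standalone sketch with hypothetical simplified types (`Candidate`, `Entry`, and `select` are invented for this example and are not part of the crate). It keeps the essential moves: order candidates by score-per-byte, skip entries that would exceed the budget rather than halting, and keep popping until the queue is empty.

```rust
use std::cmp::Ordering;
use std::collections::BinaryHeap;

// Hypothetical simplified candidate: `score` is the benefit of including it in
// the prompt, `size` is the number of bytes it would add.
#[derive(Clone, Debug)]
struct Candidate {
    score: f32,
    size: usize,
}

// Max-heap entry ordered by score density (score / size), so the "densest"
// candidate is popped first.
struct Entry {
    density: f32,
    index: usize,
}

impl PartialEq for Entry {
    fn eq(&self, other: &Self) -> bool {
        self.density == other.density
    }
}
impl Eq for Entry {}
impl PartialOrd for Entry {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}
impl Ord for Entry {
    fn cmp(&self, other: &Self) -> Ordering {
        self.density
            .partial_cmp(&other.density)
            .unwrap_or(Ordering::Equal)
    }
}

/// Greedy one-pass knapsack: returns indices of the selected candidates.
fn select(candidates: &[Candidate], max_bytes: usize) -> Vec<usize> {
    let mut queue: BinaryHeap<Entry> = candidates
        .iter()
        .enumerate()
        .map(|(index, c)| Entry {
            density: c.score / c.size as f32,
            index,
        })
        .collect();

    let mut budget_used = 0;
    let mut selected = Vec::new();
    while let Some(entry) = queue.pop() {
        let candidate = &candidates[entry.index];
        // Skip (rather than stop) when a candidate doesn't fit: a smaller,
        // lower-density candidate later in the queue may still fit.
        if budget_used + candidate.size > max_bytes {
            continue;
        }
        budget_used += candidate.size;
        selected.push(entry.index);
    }
    selected
}

fn main() {
    let candidates = vec![
        Candidate { score: 4.0, size: 100 },
        Candidate { score: 9.0, size: 400 },
        Candidate { score: 1.0, size: 50 },
    ];
    // Densities: 0.04, 0.0225, 0.02. With a 450-byte budget the second
    // candidate is skipped and the other two are selected: [0, 2].
    println!("{:?}", select(&candidates, 450));
}
```

The real implementation additionally re-queues an upgraded `Declaration` entry priced at its marginal cost and score over the already-included signature; the sketch omits that step.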
fn add_parents(
&mut self,
included_parents: &mut FxHashSet<usize>,
snippets: Vec<(usize, PlannedSnippet<'a>)>,
) {
for (parent_index, snippet) in snippets {
included_parents.insert(parent_index);
self.budget_used += snippet.text.len();
self.snippets.push(snippet);
}
}
fn additional_parent_signatures(
&self,
path: &Arc<Path>,
parent_index: Option<usize>,
included_parents: &FxHashSet<usize>,
) -> Result<Vec<(usize, PlannedSnippet<'a>)>> {
let mut results = Vec::new();
self.additional_parent_signatures_impl(path, parent_index, included_parents, &mut results)?;
Ok(results)
}
fn additional_parent_signatures_impl(
&self,
path: &Arc<Path>,
parent_index: Option<usize>,
included_parents: &FxHashSet<usize>,
results: &mut Vec<(usize, PlannedSnippet<'a>)>,
) -> Result<()> {
let Some(parent_index) = parent_index else {
return Ok(());
};
if included_parents.contains(&parent_index) {
return Ok(());
}
let Some(parent_signature) = self.request.signatures.get(parent_index) else {
return Err(anyhow!("Invalid parent index {}", parent_index));
};
results.push((
parent_index,
PlannedSnippet {
path: path.clone(),
range: parent_signature.range.clone(),
text: &parent_signature.text,
text_is_truncated: parent_signature.text_is_truncated,
},
));
self.additional_parent_signatures_impl(
path,
parent_signature.parent_index,
included_parents,
results,
)
}
/// Renders the planned context. Each file starts with "```FILE_PATH\n` and ends with triple
/// backticks, with a newline after each file. Outputs a line with "..." between nonconsecutive
/// chunks.
pub fn write(
&'a self,
excerpt_file_insertions: &mut Vec<(Point, &'static str)>,
prompt: &mut String,
) -> Result<SectionLabels> {
let mut file_to_snippets: FxHashMap<&'a std::path::Path, Vec<&PlannedSnippet<'a>>> =
FxHashMap::default();
for snippet in &self.snippets {
file_to_snippets
.entry(&snippet.path)
.or_default()
.push(snippet);
}
// Reorder so that file with cursor comes last
let mut file_snippets = Vec::new();
let mut excerpt_file_snippets = Vec::new();
for (file_path, snippets) in file_to_snippets {
if file_path == self.request.excerpt_path.as_ref() {
excerpt_file_snippets = snippets;
} else {
file_snippets.push((file_path, snippets, false));
}
}
let excerpt_snippet = PlannedSnippet {
path: self.request.excerpt_path.clone(),
range: self.request.excerpt_line_range.clone(),
text: &self.request.excerpt,
text_is_truncated: false,
};
excerpt_file_snippets.push(&excerpt_snippet);
file_snippets.push((&self.request.excerpt_path, excerpt_file_snippets, true));
let section_labels =
self.push_file_snippets(prompt, excerpt_file_insertions, file_snippets)?;
Ok(section_labels)
}
fn push_file_snippets(
&self,
output: &mut String,
excerpt_file_insertions: &mut Vec<(Point, &'static str)>,
file_snippets: Vec<(&'a Path, Vec<&'a PlannedSnippet>, bool)>,
) -> Result<SectionLabels> {
let mut section_ranges = Vec::new();
let mut excerpt_index = None;
for (file_path, mut snippets, is_excerpt_file) in file_snippets {
snippets.sort_by_key(|s| (s.range.start, Reverse(s.range.end)));
// TODO: What if the snippets get expanded too large to be editable?
let mut current_snippet: Option<(&PlannedSnippet, Range<Line>)> = None;
let mut disjoint_snippets: Vec<(&PlannedSnippet, Range<Line>)> = Vec::new();
for snippet in snippets {
if let Some((_, current_snippet_range)) = current_snippet.as_mut()
&& snippet.range.start <= current_snippet_range.end
{
current_snippet_range.end = current_snippet_range.end.max(snippet.range.end);
continue;
}
if let Some(current_snippet) = current_snippet.take() {
disjoint_snippets.push(current_snippet);
}
current_snippet = Some((snippet, snippet.range.clone()));
}
if let Some(current_snippet) = current_snippet.take() {
disjoint_snippets.push(current_snippet);
}
writeln!(output, "`````path={}", file_path.display()).ok();
let mut skipped_last_snippet = false;
for (snippet, range) in disjoint_snippets {
let section_index = section_ranges.len();
match self.request.prompt_format {
PromptFormat::MarkedExcerpt
| PromptFormat::OnlySnippets
| PromptFormat::OldTextNewText
| PromptFormat::Minimal
| PromptFormat::NumLinesUniDiff => {
if range.start.0 > 0 && !skipped_last_snippet {
output.push_str("\n");
}
}
PromptFormat::LabeledSections => {
if is_excerpt_file
&& range.start <= self.request.excerpt_line_range.start
&& range.end >= self.request.excerpt_line_range.end
{
writeln!(output, "<|current_section|>").ok();
} else {
writeln!(output, "<|section_{}|>", section_index).ok();
}
}
PromptFormat::MinimalQwen => unreachable!(),
PromptFormat::SeedCoder1120 => unreachable!(),
}
let push_full_snippet = |output: &mut String| {
if self.request.prompt_format == PromptFormat::NumLinesUniDiff {
for (i, line) in snippet.text.lines().enumerate() {
writeln!(output, "{}|{}", i as u32 + range.start.0 + 1, line)?;
}
} else {
output.push_str(&snippet.text);
}
anyhow::Ok(())
};
if is_excerpt_file {
if self.request.prompt_format == PromptFormat::OnlySnippets {
if range.start >= self.request.excerpt_line_range.start
&& range.end <= self.request.excerpt_line_range.end
{
skipped_last_snippet = true;
} else {
skipped_last_snippet = false;
output.push_str(snippet.text);
}
} else if !excerpt_file_insertions.is_empty() {
let lines = snippet.text.lines().collect::<Vec<_>>();
let push_line = |output: &mut String, line_ix: usize| {
if self.request.prompt_format == PromptFormat::NumLinesUniDiff {
write!(output, "{}|", line_ix as u32 + range.start.0 + 1)?;
}
anyhow::Ok(writeln!(output, "{}", lines[line_ix])?)
};
let mut last_line_ix = 0;
let mut insertion_ix = 0;
while insertion_ix < excerpt_file_insertions.len() {
let (point, insertion) = &excerpt_file_insertions[insertion_ix];
let found = point.line >= range.start && point.line <= range.end;
if found {
excerpt_index = Some(section_index);
let insertion_line_ix = (point.line.0 - range.start.0) as usize;
for line_ix in last_line_ix..insertion_line_ix {
push_line(output, line_ix)?;
}
if let Some(next_line) = lines.get(insertion_line_ix) {
if self.request.prompt_format == PromptFormat::NumLinesUniDiff {
write!(
output,
"{}|",
insertion_line_ix as u32 + range.start.0 + 1
)?
}
output.push_str(&next_line[..point.column as usize]);
output.push_str(insertion);
writeln!(output, "{}", &next_line[point.column as usize..])?;
} else {
writeln!(output, "{}", insertion)?;
}
last_line_ix = insertion_line_ix + 1;
excerpt_file_insertions.remove(insertion_ix);
continue;
}
insertion_ix += 1;
}
skipped_last_snippet = false;
for line_ix in last_line_ix..lines.len() {
push_line(output, line_ix)?;
}
} else {
skipped_last_snippet = false;
push_full_snippet(output)?;
}
} else {
skipped_last_snippet = false;
push_full_snippet(output)?;
}
section_ranges.push((snippet.path.clone(), range));
}
output.push_str("`````\n\n");
}
Ok(SectionLabels {
// TODO: Clean this up
excerpt_index: match self.request.prompt_format {
PromptFormat::OnlySnippets => 0,
_ => excerpt_index.context("bug: no snippet found for excerpt")?,
},
section_ranges,
})
}
}
fn declaration_score_density(declaration: &ReferencedDeclaration, style: DeclarationStyle) -> f32 {
declaration_score(declaration, style) / declaration_size(declaration, style) as f32
}
fn declaration_score(declaration: &ReferencedDeclaration, style: DeclarationStyle) -> f32 {
match style {
DeclarationStyle::Signature => declaration.signature_score,
DeclarationStyle::Declaration => declaration.declaration_score,
}
}
fn declaration_size(declaration: &ReferencedDeclaration, style: DeclarationStyle) -> usize {
match style {
DeclarationStyle::Signature => declaration.signature_range.len(),
DeclarationStyle::Declaration => declaration.text.len(),
}
}
struct PromptData {
events: Vec<Arc<Event>>,
cursor_point: Point,
cursor_path: Arc<Path>, // TODO: make a common struct with cursor_point
included_files: Vec<IncludedFile>,
included_files: Vec<RelatedFile>,
}
#[derive(Default)]
@@ -1051,7 +461,7 @@ impl SeedCoder1120Prompt {
context
}
fn fmt_fim(&self, file: &IncludedFile, cursor_point: Point) -> String {
fn fmt_fim(&self, file: &RelatedFile, cursor_point: Point) -> String {
let mut buf = String::new();
const FIM_SUFFIX: &str = "<[fim-suffix]>";
const FIM_PREFIX: &str = "<[fim-prefix]>";

View File

@@ -1,244 +0,0 @@
use anyhow::Result;
use cloud_llm_client::predict_edits_v3::{self, Excerpt};
use indoc::indoc;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::fmt::Write;
use crate::{push_events, write_codeblock};
pub fn build_prompt(request: predict_edits_v3::PlanContextRetrievalRequest) -> Result<String> {
let mut prompt = SEARCH_INSTRUCTIONS.to_string();
if !request.events.is_empty() {
writeln!(&mut prompt, "\n## User Edits\n\n")?;
push_events(&mut prompt, &request.events);
}
writeln!(&mut prompt, "## Cursor context\n")?;
write_codeblock(
&request.excerpt_path,
&[Excerpt {
start_line: request.excerpt_line_range.start,
text: request.excerpt.into(),
}],
&[],
request.cursor_file_max_row,
true,
&mut prompt,
);
writeln!(&mut prompt, "{TOOL_USE_REMINDER}")?;
Ok(prompt)
}
/// Search for relevant code
///
/// For the best results, run multiple queries at once with a single invocation of this tool.
#[derive(Clone, Deserialize, Serialize, JsonSchema)]
pub struct SearchToolInput {
/// An array of queries to run for gathering context relevant to the next prediction
#[schemars(length(max = 3))]
#[serde(deserialize_with = "deserialize_queries")]
pub queries: Box<[SearchToolQuery]>,
}
fn deserialize_queries<'de, D>(deserializer: D) -> Result<Box<[SearchToolQuery]>, D::Error>
where
D: serde::Deserializer<'de>,
{
use serde::de::Error;
#[derive(Deserialize)]
#[serde(untagged)]
enum QueryCollection {
Array(Box<[SearchToolQuery]>),
DoubleArray(Box<[Box<[SearchToolQuery]>]>),
Single(SearchToolQuery),
}
#[derive(Deserialize)]
#[serde(untagged)]
enum MaybeDoubleEncoded {
SingleEncoded(QueryCollection),
DoubleEncoded(String),
}
let result = MaybeDoubleEncoded::deserialize(deserializer)?;
let normalized = match result {
MaybeDoubleEncoded::SingleEncoded(value) => value,
MaybeDoubleEncoded::DoubleEncoded(value) => {
serde_json::from_str(&value).map_err(D::Error::custom)?
}
};
Ok(match normalized {
QueryCollection::Array(items) => items,
QueryCollection::Single(search_tool_query) => Box::new([search_tool_query]),
QueryCollection::DoubleArray(double_array) => double_array.into_iter().flatten().collect(),
})
}
/// Search for relevant code by path, syntax hierarchy, and content.
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, Hash)]
pub struct SearchToolQuery {
/// 1. A glob pattern to match file paths in the codebase to search in.
pub glob: String,
/// 2. Regular expressions to match syntax nodes **by their first line** and hierarchy.
///
/// Subsequent regexes match nodes within the full content of the nodes matched by the previous regexes.
///
/// Example: Searching for a `User` class
/// ["class\s+User"]
///
/// Example: Searching for a `get_full_name` method under a `User` class
/// ["class\s+User", "def\sget_full_name"]
///
/// Skip this field to match on content alone.
#[schemars(length(max = 3))]
#[serde(default)]
pub syntax_node: Vec<String>,
/// 3. An optional regular expression to match the final content that should appear in the results.
///
/// - Content will be matched within all lines of the matched syntax nodes.
/// - If syntax node regexes are provided, this field can be skipped to include as much of the node itself as possible.
/// - If no syntax node regexes are provided, the content will be matched within the entire file.
pub content: Option<String>,
}
pub const TOOL_NAME: &str = "search";
const SEARCH_INSTRUCTIONS: &str = indoc! {r#"
You are part of an edit prediction system in a code editor.
Your role is to search for code that will serve as context for predicting the next edit.
- Analyze the user's recent edits and current cursor context
- Use the `search` tool to find code that is relevant for predicting the next edit
- Focus on finding:
- Code patterns that might need similar changes based on the recent edits
- Functions, variables, types, and constants referenced in the current cursor context
- Related implementations, usages, or dependencies that may require consistent updates
- How items defined in the cursor excerpt are used or altered
- You will not be able to filter results or perform subsequent queries, so keep searches as targeted as possible
- Use `syntax_node` parameter whenever you're looking for a particular type, class, or function
- Avoid using wildcard globs if you already know the file path of the content you're looking for
"#};
const TOOL_USE_REMINDER: &str = indoc! {"
--
Analyze the user's intent in one to two sentences, then call the `search` tool.
"};
#[cfg(test)]
mod tests {
use serde_json::json;
use super::*;
#[test]
fn test_deserialize_queries() {
let single_query_json = indoc! {r#"{
"queries": {
"glob": "**/*.rs",
"syntax_node": ["fn test"],
"content": "assert"
}
}"#};
let flat_input: SearchToolInput = serde_json::from_str(single_query_json).unwrap();
assert_eq!(flat_input.queries.len(), 1);
assert_eq!(flat_input.queries[0].glob, "**/*.rs");
assert_eq!(flat_input.queries[0].syntax_node, vec!["fn test"]);
assert_eq!(flat_input.queries[0].content, Some("assert".to_string()));
let flat_json = indoc! {r#"{
"queries": [
{
"glob": "**/*.rs",
"syntax_node": ["fn test"],
"content": "assert"
},
{
"glob": "**/*.ts",
"syntax_node": [],
"content": null
}
]
}"#};
let flat_input: SearchToolInput = serde_json::from_str(flat_json).unwrap();
assert_eq!(flat_input.queries.len(), 2);
assert_eq!(flat_input.queries[0].glob, "**/*.rs");
assert_eq!(flat_input.queries[0].syntax_node, vec!["fn test"]);
assert_eq!(flat_input.queries[0].content, Some("assert".to_string()));
assert_eq!(flat_input.queries[1].glob, "**/*.ts");
assert_eq!(flat_input.queries[1].syntax_node.len(), 0);
assert_eq!(flat_input.queries[1].content, None);
let nested_json = indoc! {r#"{
"queries": [
[
{
"glob": "**/*.rs",
"syntax_node": ["fn test"],
"content": "assert"
}
],
[
{
"glob": "**/*.ts",
"syntax_node": [],
"content": null
}
]
]
}"#};
let nested_input: SearchToolInput = serde_json::from_str(nested_json).unwrap();
assert_eq!(nested_input.queries.len(), 2);
assert_eq!(nested_input.queries[0].glob, "**/*.rs");
assert_eq!(nested_input.queries[0].syntax_node, vec!["fn test"]);
assert_eq!(nested_input.queries[0].content, Some("assert".to_string()));
assert_eq!(nested_input.queries[1].glob, "**/*.ts");
assert_eq!(nested_input.queries[1].syntax_node.len(), 0);
assert_eq!(nested_input.queries[1].content, None);
let double_encoded_queries = serde_json::to_string(&json!({
"queries": serde_json::to_string(&json!([
{
"glob": "**/*.rs",
"syntax_node": ["fn test"],
"content": "assert"
},
{
"glob": "**/*.ts",
"syntax_node": [],
"content": null
}
])).unwrap()
}))
.unwrap();
let double_encoded_input: SearchToolInput =
serde_json::from_str(&double_encoded_queries).unwrap();
assert_eq!(double_encoded_input.queries.len(), 2);
assert_eq!(double_encoded_input.queries[0].glob, "**/*.rs");
assert_eq!(double_encoded_input.queries[0].syntax_node, vec!["fn test"]);
assert_eq!(
double_encoded_input.queries[0].content,
Some("assert".to_string())
);
assert_eq!(double_encoded_input.queries[1].glob, "**/*.ts");
assert_eq!(double_encoded_input.queries[1].syntax_node.len(), 0);
assert_eq!(double_encoded_input.queries[1].content, None);
// ### ERROR Switching from var declarations to lexical declarations [RUN 073]
// invalid search json {"queries": ["express/lib/response.js", "var\\s+[a-zA-Z_][a-zA-Z0-9_]*\\s*=.*;", "function.*\\(.*\\).*\\{.*\\}"]}
}
}

View File

@@ -10,7 +10,7 @@ path = "src/codestral.rs"
[dependencies]
anyhow.workspace = true
edit_prediction.workspace = true
edit_prediction_types.workspace = true
edit_prediction_context.workspace = true
futures.workspace = true
gpui.workspace = true

View File

@@ -1,6 +1,6 @@
use anyhow::{Context as _, Result};
use edit_prediction::{Direction, EditPrediction, EditPredictionProvider};
use edit_prediction_context::{EditPredictionExcerpt, EditPredictionExcerptOptions};
use edit_prediction_types::{Direction, EditPrediction, EditPredictionDelegate};
use futures::AsyncReadExt;
use gpui::{App, Context, Entity, Task};
use http_client::HttpClient;
@@ -43,17 +43,17 @@ impl CurrentCompletion {
/// Attempts to adjust the edits based on changes made to the buffer since the completion was generated.
/// Returns None if the user's edits conflict with the predicted edits.
fn interpolate(&self, new_snapshot: &BufferSnapshot) -> Option<Vec<(Range<Anchor>, Arc<str>)>> {
edit_prediction::interpolate_edits(&self.snapshot, new_snapshot, &self.edits)
edit_prediction_types::interpolate_edits(&self.snapshot, new_snapshot, &self.edits)
}
}
pub struct CodestralCompletionProvider {
pub struct CodestralEditPredictionDelegate {
http_client: Arc<dyn HttpClient>,
pending_request: Option<Task<Result<()>>>,
current_completion: Option<CurrentCompletion>,
}
impl CodestralCompletionProvider {
impl CodestralEditPredictionDelegate {
pub fn new(http_client: Arc<dyn HttpClient>) -> Self {
Self {
http_client,
@@ -165,7 +165,7 @@ impl CodestralCompletionProvider {
}
}
impl EditPredictionProvider for CodestralCompletionProvider {
impl EditPredictionDelegate for CodestralEditPredictionDelegate {
fn name() -> &'static str {
"codestral"
}
@@ -174,7 +174,7 @@ impl EditPredictionProvider for CodestralCompletionProvider {
"Codestral"
}
fn show_completions_in_menu() -> bool {
fn show_predictions_in_menu() -> bool {
true
}
@@ -239,7 +239,6 @@ impl EditPredictionProvider for CodestralCompletionProvider {
cursor_point,
&snapshot,
&EXCERPT_OPTIONS,
None,
)
.context("Line containing cursor doesn't fit in excerpt max bytes")?;

View File

@@ -65,7 +65,7 @@ tokio = { workspace = true, features = ["full"] }
toml.workspace = true
tower = "0.4"
tower-http = { workspace = true, features = ["trace"] }
tracing = "0.1.40"
tracing.workspace = true
tracing-subscriber = { version = "0.3.18", features = ["env-filter", "json", "registry", "tracing-log"] } # workaround for https://github.com/tokio-rs/tracing/issues/2927
util.workspace = true
uuid.workspace = true

View File

@@ -63,15 +63,3 @@ Deployment is triggered by pushing to the `collab-staging` (or `collab-productio
- `./script/deploy-collab production`
You can tell what is currently deployed with `./script/what-is-deployed`.
# Database Migrations
To create a new migration:
```sh
./script/create-migration <name>
```
Migrations are run automatically on service start, so run `foreman start` again. The service will crash if the migrations fail.
When you create a new migration, you also need to update the [SQLite schema](./migrations.sqlite/20221109000000_test_schema.sql) that is used for testing.

View File

@@ -1,21 +0,0 @@
apiVersion: batch/v1
kind: Job
metadata:
namespace: ${ZED_KUBE_NAMESPACE}
name: ${ZED_MIGRATE_JOB_NAME}
spec:
template:
spec:
restartPolicy: Never
containers:
- name: migrator
imagePullPolicy: Always
image: ${ZED_IMAGE_ID}
args:
- migrate
env:
- name: DATABASE_URL
valueFrom:
secretKeyRef:
name: database
key: url

View File

@@ -121,6 +121,8 @@ CREATE TABLE "project_repositories" (
"merge_message" VARCHAR,
"branch_summary" VARCHAR,
"head_commit_details" VARCHAR,
"remote_upstream_url" VARCHAR,
"remote_origin_url" VARCHAR,
PRIMARY KEY (project_id, id)
);

View File

@@ -1,20 +0,0 @@
CREATE TABLE IF NOT EXISTS "sessions" (
"id" VARCHAR NOT NULL PRIMARY KEY,
"expires" TIMESTAMP WITH TIME ZONE NULL,
"session" TEXT NOT NULL
);
CREATE TABLE IF NOT EXISTS "users" (
"id" SERIAL PRIMARY KEY,
"github_login" VARCHAR,
"admin" BOOLEAN
);
CREATE UNIQUE INDEX "index_users_github_login" ON "users" ("github_login");
CREATE TABLE IF NOT EXISTS "signups" (
"id" SERIAL PRIMARY KEY,
"github_login" VARCHAR,
"email_address" VARCHAR,
"about" TEXT
);

View File

@@ -1,7 +0,0 @@
CREATE TABLE IF NOT EXISTS "access_tokens" (
"id" SERIAL PRIMARY KEY,
"user_id" INTEGER REFERENCES users (id),
"hash" VARCHAR(128)
);
CREATE INDEX "index_access_tokens_user_id" ON "access_tokens" ("user_id");

View File

@@ -1,46 +0,0 @@
CREATE TABLE IF NOT EXISTS "orgs" (
"id" SERIAL PRIMARY KEY,
"name" VARCHAR NOT NULL,
"slug" VARCHAR NOT NULL
);
CREATE UNIQUE INDEX "index_orgs_slug" ON "orgs" ("slug");
CREATE TABLE IF NOT EXISTS "org_memberships" (
"id" SERIAL PRIMARY KEY,
"org_id" INTEGER REFERENCES orgs (id) NOT NULL,
"user_id" INTEGER REFERENCES users (id) NOT NULL,
"admin" BOOLEAN NOT NULL
);
CREATE INDEX "index_org_memberships_user_id" ON "org_memberships" ("user_id");
CREATE UNIQUE INDEX "index_org_memberships_org_id_and_user_id" ON "org_memberships" ("org_id", "user_id");
CREATE TABLE IF NOT EXISTS "channels" (
"id" SERIAL PRIMARY KEY,
"owner_id" INTEGER NOT NULL,
"owner_is_user" BOOLEAN NOT NULL,
"name" VARCHAR NOT NULL
);
CREATE UNIQUE INDEX "index_channels_owner_and_name" ON "channels" ("owner_is_user", "owner_id", "name");
CREATE TABLE IF NOT EXISTS "channel_memberships" (
"id" SERIAL PRIMARY KEY,
"channel_id" INTEGER REFERENCES channels (id) NOT NULL,
"user_id" INTEGER REFERENCES users (id) NOT NULL,
"admin" BOOLEAN NOT NULL
);
CREATE INDEX "index_channel_memberships_user_id" ON "channel_memberships" ("user_id");
CREATE UNIQUE INDEX "index_channel_memberships_channel_id_and_user_id" ON "channel_memberships" ("channel_id", "user_id");
CREATE TABLE IF NOT EXISTS "channel_messages" (
"id" SERIAL PRIMARY KEY,
"channel_id" INTEGER REFERENCES channels (id) NOT NULL,
"sender_id" INTEGER REFERENCES users (id) NOT NULL,
"body" TEXT NOT NULL,
"sent_at" TIMESTAMP
);
CREATE INDEX "index_channel_messages_channel_id" ON "channel_messages" ("channel_id");

View File

@@ -1,4 +0,0 @@
ALTER TABLE "channel_messages"
ADD "nonce" UUID NOT NULL DEFAULT gen_random_uuid();
CREATE UNIQUE INDEX "index_channel_messages_nonce" ON "channel_messages" ("nonce");

View File

@@ -1,4 +0,0 @@
ALTER TABLE "signups"
ADD "wants_releases" BOOLEAN,
ADD "wants_updates" BOOLEAN,
ADD "wants_community" BOOLEAN;

View File

@@ -1 +0,0 @@
DROP TABLE IF EXISTS "signups";

View File

@@ -1,2 +0,0 @@
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX trigram_index_users_on_github_login ON users USING GIN(github_login gin_trgm_ops);

View File

@@ -1,11 +0,0 @@
CREATE TABLE IF NOT EXISTS "contacts" (
"id" SERIAL PRIMARY KEY,
"user_id_a" INTEGER REFERENCES users (id) NOT NULL,
"user_id_b" INTEGER REFERENCES users (id) NOT NULL,
"a_to_b" BOOLEAN NOT NULL,
"should_notify" BOOLEAN NOT NULL,
"accepted" BOOLEAN NOT NULL
);
CREATE UNIQUE INDEX "index_contacts_user_ids" ON "contacts" ("user_id_a", "user_id_b");
CREATE INDEX "index_contacts_user_id_b" ON "contacts" ("user_id_b");

View File

@@ -1,9 +0,0 @@
ALTER TABLE users
ADD email_address VARCHAR(255) DEFAULT NULL,
ADD invite_code VARCHAR(64),
ADD invite_count INTEGER NOT NULL DEFAULT 0,
ADD inviter_id INTEGER REFERENCES users (id),
ADD connected_once BOOLEAN NOT NULL DEFAULT false,
ADD created_at TIMESTAMP NOT NULL DEFAULT NOW();
CREATE UNIQUE INDEX "index_invite_code_users" ON "users" ("invite_code");

View File

@@ -1,6 +0,0 @@
ALTER TABLE contacts DROP CONSTRAINT contacts_user_id_a_fkey;
ALTER TABLE contacts DROP CONSTRAINT contacts_user_id_b_fkey;
ALTER TABLE contacts ADD CONSTRAINT contacts_user_id_a_fkey FOREIGN KEY (user_id_a) REFERENCES users(id) ON DELETE CASCADE;
ALTER TABLE contacts ADD CONSTRAINT contacts_user_id_b_fkey FOREIGN KEY (user_id_b) REFERENCES users(id) ON DELETE CASCADE;
ALTER TABLE users DROP CONSTRAINT users_inviter_id_fkey;
ALTER TABLE users ADD CONSTRAINT users_inviter_id_fkey FOREIGN KEY (inviter_id) REFERENCES users(id) ON DELETE SET NULL;

View File

@@ -1,24 +0,0 @@
CREATE TABLE IF NOT EXISTS "projects" (
"id" SERIAL PRIMARY KEY,
"host_user_id" INTEGER REFERENCES users (id) NOT NULL,
"unregistered" BOOLEAN NOT NULL DEFAULT false
);
CREATE TABLE IF NOT EXISTS "worktree_extensions" (
"id" SERIAL PRIMARY KEY,
"project_id" INTEGER REFERENCES projects (id) NOT NULL,
"worktree_id" INTEGER NOT NULL,
"extension" VARCHAR(255),
"count" INTEGER NOT NULL
);
CREATE TABLE IF NOT EXISTS "project_activity_periods" (
"id" SERIAL PRIMARY KEY,
"duration_millis" INTEGER NOT NULL,
"ended_at" TIMESTAMP NOT NULL,
"user_id" INTEGER REFERENCES users (id) NOT NULL,
"project_id" INTEGER REFERENCES projects (id) NOT NULL
);
CREATE INDEX "index_project_activity_periods_on_ended_at" ON "project_activity_periods" ("ended_at");
CREATE UNIQUE INDEX "index_worktree_extensions_on_project_id_and_worktree_id_and_extension" ON "worktree_extensions" ("project_id", "worktree_id", "extension");

View File

@@ -1,27 +0,0 @@
CREATE TABLE IF NOT EXISTS "signups" (
"id" SERIAL PRIMARY KEY,
"email_address" VARCHAR NOT NULL,
"email_confirmation_code" VARCHAR(64) NOT NULL,
"email_confirmation_sent" BOOLEAN NOT NULL,
"created_at" TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
"device_id" VARCHAR,
"user_id" INTEGER REFERENCES users (id) ON DELETE CASCADE,
"inviting_user_id" INTEGER REFERENCES users (id) ON DELETE SET NULL,
"platform_mac" BOOLEAN NOT NULL,
"platform_linux" BOOLEAN NOT NULL,
"platform_windows" BOOLEAN NOT NULL,
"platform_unknown" BOOLEAN NOT NULL,
"editor_features" VARCHAR[],
"programming_languages" VARCHAR[]
);
CREATE UNIQUE INDEX "index_signups_on_email_address" ON "signups" ("email_address");
CREATE INDEX "index_signups_on_email_confirmation_sent" ON "signups" ("email_confirmation_sent");
ALTER TABLE "users"
ADD "github_user_id" INTEGER;
CREATE INDEX "index_users_on_email_address" ON "users" ("email_address");
CREATE INDEX "index_users_on_github_user_id" ON "users" ("github_user_id");

View File

@@ -1,2 +0,0 @@
ALTER TABLE "users"
ADD "metrics_id" uuid NOT NULL DEFAULT gen_random_uuid();

View File

@@ -1,90 +0,0 @@
CREATE TABLE IF NOT EXISTS "rooms" (
"id" SERIAL PRIMARY KEY,
"live_kit_room" VARCHAR NOT NULL
);
ALTER TABLE "projects"
ADD "room_id" INTEGER REFERENCES rooms (id),
ADD "host_connection_id" INTEGER,
ADD "host_connection_epoch" UUID;
CREATE INDEX "index_projects_on_host_connection_epoch" ON "projects" ("host_connection_epoch");
CREATE TABLE "worktrees" (
"project_id" INTEGER NOT NULL REFERENCES projects (id) ON DELETE CASCADE,
"id" INT8 NOT NULL,
"root_name" VARCHAR NOT NULL,
"abs_path" VARCHAR NOT NULL,
"visible" BOOL NOT NULL,
"scan_id" INT8 NOT NULL,
"is_complete" BOOL NOT NULL,
PRIMARY KEY(project_id, id)
);
CREATE INDEX "index_worktrees_on_project_id" ON "worktrees" ("project_id");
CREATE TABLE "worktree_entries" (
"project_id" INTEGER NOT NULL,
"worktree_id" INT8 NOT NULL,
"id" INT8 NOT NULL,
"is_dir" BOOL NOT NULL,
"path" VARCHAR NOT NULL,
"inode" INT8 NOT NULL,
"mtime_seconds" INT8 NOT NULL,
"mtime_nanos" INTEGER NOT NULL,
"is_symlink" BOOL NOT NULL,
"is_ignored" BOOL NOT NULL,
PRIMARY KEY(project_id, worktree_id, id),
FOREIGN KEY(project_id, worktree_id) REFERENCES worktrees (project_id, id) ON DELETE CASCADE
);
CREATE INDEX "index_worktree_entries_on_project_id" ON "worktree_entries" ("project_id");
CREATE INDEX "index_worktree_entries_on_project_id_and_worktree_id" ON "worktree_entries" ("project_id", "worktree_id");
CREATE TABLE "worktree_diagnostic_summaries" (
"project_id" INTEGER NOT NULL,
"worktree_id" INT8 NOT NULL,
"path" VARCHAR NOT NULL,
"language_server_id" INT8 NOT NULL,
"error_count" INTEGER NOT NULL,
"warning_count" INTEGER NOT NULL,
PRIMARY KEY(project_id, worktree_id, path),
FOREIGN KEY(project_id, worktree_id) REFERENCES worktrees (project_id, id) ON DELETE CASCADE
);
CREATE INDEX "index_worktree_diagnostic_summaries_on_project_id" ON "worktree_diagnostic_summaries" ("project_id");
CREATE INDEX "index_worktree_diagnostic_summaries_on_project_id_and_worktree_id" ON "worktree_diagnostic_summaries" ("project_id", "worktree_id");
CREATE TABLE "language_servers" (
"project_id" INTEGER NOT NULL REFERENCES projects (id) ON DELETE CASCADE,
"id" INT8 NOT NULL,
"name" VARCHAR NOT NULL,
PRIMARY KEY(project_id, id)
);
CREATE INDEX "index_language_servers_on_project_id" ON "language_servers" ("project_id");
CREATE TABLE "project_collaborators" (
"id" SERIAL PRIMARY KEY,
"project_id" INTEGER NOT NULL REFERENCES projects (id) ON DELETE CASCADE,
"connection_id" INTEGER NOT NULL,
"connection_epoch" UUID NOT NULL,
"user_id" INTEGER NOT NULL,
"replica_id" INTEGER NOT NULL,
"is_host" BOOLEAN NOT NULL
);
CREATE INDEX "index_project_collaborators_on_project_id" ON "project_collaborators" ("project_id");
CREATE UNIQUE INDEX "index_project_collaborators_on_project_id_and_replica_id" ON "project_collaborators" ("project_id", "replica_id");
CREATE INDEX "index_project_collaborators_on_connection_epoch" ON "project_collaborators" ("connection_epoch");
CREATE TABLE "room_participants" (
"id" SERIAL PRIMARY KEY,
"room_id" INTEGER NOT NULL REFERENCES rooms (id),
"user_id" INTEGER NOT NULL REFERENCES users (id),
"answering_connection_id" INTEGER,
"answering_connection_epoch" UUID,
"location_kind" INTEGER,
"location_project_id" INTEGER,
"initial_project_id" INTEGER,
"calling_user_id" INTEGER NOT NULL REFERENCES users (id),
"calling_connection_id" INTEGER NOT NULL,
"calling_connection_epoch" UUID NOT NULL
);
CREATE UNIQUE INDEX "index_room_participants_on_user_id" ON "room_participants" ("user_id");
CREATE INDEX "index_room_participants_on_answering_connection_epoch" ON "room_participants" ("answering_connection_epoch");
CREATE INDEX "index_room_participants_on_calling_connection_epoch" ON "room_participants" ("calling_connection_epoch");

View File

@@ -1,2 +0,0 @@
ALTER TABLE "signups"
ADD "added_to_mailing_list" BOOLEAN NOT NULL DEFAULT FALSE;

View File

@@ -1,7 +0,0 @@
ALTER TABLE "room_participants"
ADD "answering_connection_lost" BOOLEAN NOT NULL DEFAULT FALSE;
CREATE INDEX "index_project_collaborators_on_connection_id" ON "project_collaborators" ("connection_id");
CREATE UNIQUE INDEX "index_project_collaborators_on_project_id_connection_id_and_epoch" ON "project_collaborators" ("project_id", "connection_id", "connection_epoch");
CREATE INDEX "index_room_participants_on_answering_connection_id" ON "room_participants" ("answering_connection_id");
CREATE UNIQUE INDEX "index_room_participants_on_answering_connection_id_and_answering_connection_epoch" ON "room_participants" ("answering_connection_id", "answering_connection_epoch");

View File

@@ -1 +0,0 @@
CREATE INDEX "index_room_participants_on_room_id" ON "room_participants" ("room_id");

View File

@@ -1,30 +0,0 @@
CREATE TABLE servers (
id SERIAL PRIMARY KEY,
environment VARCHAR NOT NULL
);
DROP TABLE worktree_extensions;
DROP TABLE project_activity_periods;
DELETE from projects;
ALTER TABLE projects
DROP COLUMN host_connection_epoch,
ADD COLUMN host_connection_server_id INTEGER REFERENCES servers (id) ON DELETE CASCADE;
CREATE INDEX "index_projects_on_host_connection_server_id" ON "projects" ("host_connection_server_id");
CREATE INDEX "index_projects_on_host_connection_id_and_host_connection_server_id" ON "projects" ("host_connection_id", "host_connection_server_id");
DELETE FROM project_collaborators;
ALTER TABLE project_collaborators
DROP COLUMN connection_epoch,
ADD COLUMN connection_server_id INTEGER NOT NULL REFERENCES servers (id) ON DELETE CASCADE;
CREATE INDEX "index_project_collaborators_on_connection_server_id" ON "project_collaborators" ("connection_server_id");
CREATE UNIQUE INDEX "index_project_collaborators_on_project_id_connection_id_and_server_id" ON "project_collaborators" ("project_id", "connection_id", "connection_server_id");
DELETE FROM room_participants;
ALTER TABLE room_participants
DROP COLUMN answering_connection_epoch,
DROP COLUMN calling_connection_epoch,
ADD COLUMN answering_connection_server_id INTEGER REFERENCES servers (id) ON DELETE CASCADE,
ADD COLUMN calling_connection_server_id INTEGER REFERENCES servers (id) ON DELETE SET NULL;
CREATE INDEX "index_room_participants_on_answering_connection_server_id" ON "room_participants" ("answering_connection_server_id");
CREATE INDEX "index_room_participants_on_calling_connection_server_id" ON "room_participants" ("calling_connection_server_id");
CREATE UNIQUE INDEX "index_room_participants_on_answering_connection_id_and_answering_connection_server_id" ON "room_participants" ("answering_connection_id", "answering_connection_server_id");

View File

@@ -1,3 +0,0 @@
ALTER TABLE "worktree_entries"
ADD COLUMN "scan_id" INT8,
ADD COLUMN "is_deleted" BOOL;

View File

@@ -1,3 +0,0 @@
ALTER TABLE worktrees
ALTER COLUMN is_complete SET DEFAULT FALSE,
ADD COLUMN completed_scan_id INT8;

View File

@@ -1,15 +0,0 @@
CREATE TABLE IF NOT EXISTS "followers" (
"id" SERIAL PRIMARY KEY,
"room_id" INTEGER NOT NULL REFERENCES rooms (id) ON DELETE CASCADE,
"project_id" INTEGER NOT NULL REFERENCES projects (id) ON DELETE CASCADE,
"leader_connection_server_id" INTEGER NOT NULL REFERENCES servers (id) ON DELETE CASCADE,
"leader_connection_id" INTEGER NOT NULL,
"follower_connection_server_id" INTEGER NOT NULL REFERENCES servers (id) ON DELETE CASCADE,
"follower_connection_id" INTEGER NOT NULL
);
CREATE UNIQUE INDEX
"index_followers_on_project_id_and_leader_connection_server_id_and_leader_connection_id_and_follower_connection_server_id_and_follower_connection_id"
ON "followers" ("project_id", "leader_connection_server_id", "leader_connection_id", "follower_connection_server_id", "follower_connection_id");
CREATE INDEX "index_followers_on_room_id" ON "followers" ("room_id");

View File

@@ -1,13 +0,0 @@
CREATE TABLE "worktree_repositories" (
"project_id" INTEGER NOT NULL,
"worktree_id" INT8 NOT NULL,
"work_directory_id" INT8 NOT NULL,
"scan_id" INT8 NOT NULL,
"branch" VARCHAR,
"is_deleted" BOOL NOT NULL,
PRIMARY KEY(project_id, worktree_id, work_directory_id),
FOREIGN KEY(project_id, worktree_id) REFERENCES worktrees (project_id, id) ON DELETE CASCADE,
FOREIGN KEY(project_id, worktree_id, work_directory_id) REFERENCES worktree_entries (project_id, worktree_id, id) ON DELETE CASCADE
);
CREATE INDEX "index_worktree_repositories_on_project_id" ON "worktree_repositories" ("project_id");
CREATE INDEX "index_worktree_repositories_on_project_id_and_worktree_id" ON "worktree_repositories" ("project_id", "worktree_id");

View File

@@ -1,15 +0,0 @@
CREATE TABLE "worktree_repository_statuses" (
"project_id" INTEGER NOT NULL,
"worktree_id" INT8 NOT NULL,
"work_directory_id" INT8 NOT NULL,
"repo_path" VARCHAR NOT NULL,
"status" INT8 NOT NULL,
"scan_id" INT8 NOT NULL,
"is_deleted" BOOL NOT NULL,
PRIMARY KEY(project_id, worktree_id, work_directory_id, repo_path),
FOREIGN KEY(project_id, worktree_id) REFERENCES worktrees (project_id, id) ON DELETE CASCADE,
FOREIGN KEY(project_id, worktree_id, work_directory_id) REFERENCES worktree_entries (project_id, worktree_id, id) ON DELETE CASCADE
);
CREATE INDEX "index_wt_repos_statuses_on_project_id" ON "worktree_repository_statuses" ("project_id");
CREATE INDEX "index_wt_repos_statuses_on_project_id_and_wt_id" ON "worktree_repository_statuses" ("project_id", "worktree_id");
CREATE INDEX "index_wt_repos_statuses_on_project_id_and_wt_id_and_wd_id" ON "worktree_repository_statuses" ("project_id", "worktree_id", "work_directory_id");

View File

@@ -1,10 +0,0 @@
CREATE TABLE "worktree_settings_files" (
"project_id" INTEGER NOT NULL,
"worktree_id" INT8 NOT NULL,
"path" VARCHAR NOT NULL,
"content" TEXT NOT NULL,
PRIMARY KEY(project_id, worktree_id, path),
FOREIGN KEY(project_id, worktree_id) REFERENCES worktrees (project_id, id) ON DELETE CASCADE
);
CREATE INDEX "index_settings_files_on_project_id" ON "worktree_settings_files" ("project_id");
CREATE INDEX "index_settings_files_on_project_id_and_wt_id" ON "worktree_settings_files" ("project_id", "worktree_id");

View File

@@ -1,2 +0,0 @@
ALTER TABLE "worktree_entries"
ADD "git_status" INT8;

View File

@@ -1,2 +0,0 @@
ALTER TABLE "worktree_entries"
ADD "is_external" BOOL NOT NULL DEFAULT FALSE;

View File

@@ -1,30 +0,0 @@
DROP TABLE "channel_messages";
DROP TABLE "channel_memberships";
DROP TABLE "org_memberships";
DROP TABLE "orgs";
DROP TABLE "channels";
CREATE TABLE "channels" (
"id" SERIAL PRIMARY KEY,
"name" VARCHAR NOT NULL,
"created_at" TIMESTAMP NOT NULL DEFAULT now()
);
CREATE TABLE "channel_paths" (
"id_path" VARCHAR NOT NULL PRIMARY KEY,
"channel_id" INTEGER NOT NULL REFERENCES channels (id) ON DELETE CASCADE
);
CREATE INDEX "index_channel_paths_on_channel_id" ON "channel_paths" ("channel_id");
CREATE TABLE "channel_members" (
"id" SERIAL PRIMARY KEY,
"channel_id" INTEGER NOT NULL REFERENCES channels (id) ON DELETE CASCADE,
"user_id" INTEGER NOT NULL REFERENCES users (id) ON DELETE CASCADE,
"admin" BOOLEAN NOT NULL DEFAULT false,
"accepted" BOOLEAN NOT NULL DEFAULT false,
"updated_at" TIMESTAMP NOT NULL DEFAULT now()
);
CREATE UNIQUE INDEX "index_channel_members_on_channel_id_and_user_id" ON "channel_members" ("channel_id", "user_id");
ALTER TABLE rooms ADD COLUMN "channel_id" INTEGER REFERENCES channels (id) ON DELETE CASCADE;

View File

@@ -1,40 +0,0 @@
CREATE TABLE "buffers" (
"id" SERIAL PRIMARY KEY,
"channel_id" INTEGER NOT NULL REFERENCES channels (id) ON DELETE CASCADE,
"epoch" INTEGER NOT NULL DEFAULT 0
);
CREATE INDEX "index_buffers_on_channel_id" ON "buffers" ("channel_id");
CREATE TABLE "buffer_operations" (
"buffer_id" INTEGER NOT NULL REFERENCES buffers (id) ON DELETE CASCADE,
"epoch" INTEGER NOT NULL,
"replica_id" INTEGER NOT NULL,
"lamport_timestamp" INTEGER NOT NULL,
"value" BYTEA NOT NULL,
PRIMARY KEY(buffer_id, epoch, lamport_timestamp, replica_id)
);
CREATE TABLE "buffer_snapshots" (
"buffer_id" INTEGER NOT NULL REFERENCES buffers (id) ON DELETE CASCADE,
"epoch" INTEGER NOT NULL,
"text" TEXT NOT NULL,
"operation_serialization_version" INTEGER NOT NULL,
PRIMARY KEY(buffer_id, epoch)
);
CREATE TABLE "channel_buffer_collaborators" (
"id" SERIAL PRIMARY KEY,
"channel_id" INTEGER NOT NULL REFERENCES channels (id) ON DELETE CASCADE,
"connection_id" INTEGER NOT NULL,
"connection_server_id" INTEGER NOT NULL REFERENCES servers (id) ON DELETE CASCADE,
"connection_lost" BOOLEAN NOT NULL DEFAULT FALSE,
"user_id" INTEGER NOT NULL REFERENCES users (id) ON DELETE CASCADE,
"replica_id" INTEGER NOT NULL
);
CREATE INDEX "index_channel_buffer_collaborators_on_channel_id" ON "channel_buffer_collaborators" ("channel_id");
CREATE UNIQUE INDEX "index_channel_buffer_collaborators_on_channel_id_and_replica_id" ON "channel_buffer_collaborators" ("channel_id", "replica_id");
CREATE INDEX "index_channel_buffer_collaborators_on_connection_server_id" ON "channel_buffer_collaborators" ("connection_server_id");
CREATE INDEX "index_channel_buffer_collaborators_on_connection_id" ON "channel_buffer_collaborators" ("connection_id");
CREATE UNIQUE INDEX "index_channel_buffer_collaborators_on_channel_id_connection_id_and_server_id" ON "channel_buffer_collaborators" ("channel_id", "connection_id", "connection_server_id");

View File

@@ -1,16 +0,0 @@
CREATE TABLE "feature_flags" (
"id" SERIAL PRIMARY KEY,
"flag" VARCHAR(255) NOT NULL UNIQUE
);
CREATE UNIQUE INDEX "index_feature_flags" ON "feature_flags" ("id");
CREATE TABLE "user_features" (
"user_id" INTEGER NOT NULL REFERENCES users(id) ON DELETE CASCADE,
"feature_id" INTEGER NOT NULL REFERENCES feature_flags(id) ON DELETE CASCADE,
PRIMARY KEY (user_id, feature_id)
);
CREATE UNIQUE INDEX "index_user_features_user_id_and_feature_id" ON "user_features" ("user_id", "feature_id");
CREATE INDEX "index_user_features_on_user_id" ON "user_features" ("user_id");
CREATE INDEX "index_user_features_on_feature_id" ON "user_features" ("feature_id");

Some files were not shown because too many files have changed in this diff.