Compare commits

..

145 Commits

Author SHA1 Message Date
Jakub Konka
3a301afbc6 Enable all debug info for easier profiling 2025-11-10 22:38:40 +01:00
Jakub Konka
78add792c7 project_diff: Load buffers in the background 2025-11-10 22:01:38 +01:00
Jakub Konka
359160c8b1 git: Add askpass delegate to git-commit handlers (#42239)
In my local setup, I always enforce git-commit signing with GPG/SSH,
which effectively forces `git commit -S` when committing. This
changeset now shows a modal so the user can enter the passphrase (if
any) to unlock their private key for signing when committing in Zed.

<img width="1086" height="948" alt="Screenshot 2025-11-07 at 11 09
09 PM"
src="https://github.com/user-attachments/assets/ac34b427-c833-41c7-b634-8781493f8a5e"
/>


Release Notes:

- Handle automatic git-commit signing by presenting the user with an
askpass modal
2025-11-10 07:57:50 +00:00
Max Brunsfeld
b8081ad7a6 Make it easy to point zeta2 at ollama (#42329)
I wanted to be able to work offline, so I made it a little bit more
convenient to point zeta2 at ollama.

* For zeta2, don't require that request ids be UUIDs
* Add an env var `ZED_ZETA2_OLLAMA` that sets the edit prediction URL
and model id to work w/ ollama.

Release Notes:

- N/A
2025-11-09 21:10:36 -08:00
ᴀᴍᴛᴏᴀᴇʀ
35c58151eb git: Fix support for self-hosted Bitbucket (#42002)
Closes #41995

Release Notes:

- Fixed support for self-hosted Bitbucket
2025-11-09 21:37:22 -05:00
Ayush Chandekar
e025ee6a11 git: Add base branch support to create_branch (#42151)
Closes [#41674](https://github.com/zed-industries/zed/issues/41674)

Description:
Creating a branch from a base previously required switching to the base
branch first, then creating the new branch and checking it out, which
takes multiple operations.

Add a `base_branch` parameter to `create_branch` to allow creating a new
branch from a base branch in one operation, equivalent to the command
`git switch -c <new-branch> <base-branch>`.

Below is the video after solving the issue: 

(`master` branch is the default branch here, and I create a branch
`new-branch-2` based off the `master` branch. I also show the error
which used to appear before the fix.)

[Screencast from 2025-11-07
05-14-32.webm](https://github.com/user-attachments/assets/d37d1b58-af5f-44e8-b867-2aa5d4ef3d90)

Release Notes:

- Fixed the branch-picking error by replacing multiple sequential switch
operations with just one switch operation.

Signed-off-by: ayu-ch <ayu.chandekar@gmail.com>
2025-11-09 21:35:29 -05:00
Mayank Verma
c60d31a726 git: Track worktree references to resolve stale repository state (#41592)
Closes #35997
Closes #38018
Closes #41516

Release Notes:
- Fixes stale git repositories persisting after removal
2025-11-09 21:24:20 -05:00
Bennet Bo Fenner
0bcf607a28 agent_ui: Always allow to include symbols (#42261)
We can always include symbols, since we either include a ResourceLink to
the symbol (when `PromptCapabilities::embedded_context = false`) or a
Resource (when `PromptCapabilities::embedded_context = true`).

Release Notes:

- Fixed an issue where symbols could not be included when using specific
ACP agents
2025-11-09 20:45:00 +01:00
Bennet Bo Fenner
431a195c32 acp: Fix issue with mentions when embedded_context is set to false (#42260)
Release Notes:

- acp: Fixed an issue where Zed would not respect
`PromptCapabilities::embedded_context`
2025-11-09 20:44:07 +01:00
Danilo Leal
6db6251484 agent_ui: Fix external agent icons in configuration view (#42313)
This PR makes the icons for external agents in the configuration view
use `from_external_svg` instead of `from_path`.

Release Notes:

- N/A
2025-11-09 14:32:19 -03:00
Danilo Leal
2fb3d593bc agent_ui: Add component to standardize the configured LLM card (#42314)
This PR adds a new component to the `language_models` crate called
`ConfiguredApiCard`:

<img width="500" height="420" alt="Screenshot 2025-11-09 at 2  07@2x"
src="https://github.com/user-attachments/assets/655ea941-2df8-4489-a4da-bba34acf33a9"
/>

We were previously recreating this component from scratch with regular
divs in every LLM provider's render function, which was redundant as they
all essentially looked the same and didn't have any major variations
aside from labels. We can clean up a bunch of similar code with this
change, which is cool!

Release Notes:

- N/A
2025-11-09 14:32:05 -03:00
chenmi
cc1d66b530 agent_ui: Improve API key configuration UI display (#42306)
Improve the layout and text display of API key configuration in multiple
language model providers to ensure proper text wrapping and ellipsis
handling when API URLs are long.

Before:

<img width="320" alt="image"
src="https://github.com/user-attachments/assets/2f89182c-34a0-4f95-a43a-c2be98d34873"
/>

After:

<img width="320" alt="image"
src="https://github.com/user-attachments/assets/09bf5cc3-07f0-47bc-b21a-d84b8b1caa67"
/>

Changes include:
- Add proper flex layout with overflow handling
- Replace truncate_and_trailoff with CSS text ellipsis
- Ensure consistent UI behavior across all providers

Release Notes:

- Improved API key configuration display in language model settings
2025-11-09 13:00:31 -03:00
Lukas Wirth
5d08c1b35f Suppress more rust-analyzer error logs (#42299)
Release Notes:

- N/A
2025-11-09 11:27:16 +00:00
Lukas Wirth
b7d4d1791a diagnostics: Keep diagnostic excerpt ranges properly ordered (#42298)
Fixes ZED-2CQ

We were doing the binary search by buffer points, but due to await
points within this function we could end up mixing points of differing
buffer versions.
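A simplified illustration of the invariant being restored here, with made-up types rather than Zed's buffer snapshots: the rows fed into the binary search and the rows already stored must come from the same snapshot, or the resulting order is meaningless.

```rust
// Hypothetical, simplified stand-ins for a buffer snapshot and its excerpts.
#[derive(Clone)]
struct Snapshot {
    version: u32,
    // Start rows of the existing diagnostic excerpts, kept sorted.
    excerpt_rows: Vec<u32>,
}

// The binary search is only valid if `new_row` was resolved against the same
// `snapshot` that produced `excerpt_rows`; resolving it against a newer
// buffer version (e.g. after an await point) can break the ordering.
fn insertion_index(snapshot: &Snapshot, new_row: u32) -> usize {
    snapshot.excerpt_rows.partition_point(|&row| row < new_row)
}

fn main() {
    let snapshot = Snapshot { version: 3, excerpt_rows: vec![2, 10, 42] };
    let ix = insertion_index(&snapshot, 17);
    assert_eq!(ix, 2);
    println!("insert at index {ix} against buffer v{}", snapshot.version);
}
```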

Release Notes:

- N/A
2025-11-09 10:55:56 +00:00
John Tur
81d38d9872 Additional Windows keyboard input fixes (#42294)
- Enable Alt+Numpad input
- For this to be effective, the default keybindings for Alt+{Number}
will need to be unbound. This won't be needed once we gain the ability
to differentiate numpad digit keys from alphanumeric digit keys.
  - Fixes https://github.com/zed-industries/zed/issues/40699
- Fix a number of edge cases with dead keys

Release Notes:

- N/A
2025-11-09 03:51:39 -05:00
Matt Miller
21f73d9c02 Use ButtonLike and add OpenExcerptsSplit and dispatches on click (#42283)
Closes #42099 

Release Notes:

- N/A
2025-11-08 17:47:01 -06:00
Danilo Leal
12857a7207 agent: Improve AddSelectionToThread action display (#42280)
Closes https://github.com/zed-industries/zed/issues/42276

This fixes the `AddSelectionToThread` action being visible when
`disable_ai` was true, and also improves its display by making it either
disabled or hidden when there are no selections in the editor. I also
ended up removing it from the app menu, simply because making it observe
the `disable_ai` setting would be a bit more complex than I'd like at
the moment; given that I'm also now adding it to the toolbar selection
menu, we can do without it over there.

Release Notes:

- Fixed the `AddSelectionToThread` action showing up when `disable_ai`
is true
- Improved the `AddSelectionToThread` action display by only making it
available when there are selections in the editor
2025-11-08 14:41:08 -03:00
Danilo Leal
f6be16da3b docs: Update text threads page (#42273)
Adding a section clarifying the difference between regular threads and
text threads.

Release Notes:

- N/A
2025-11-08 12:55:31 -03:00
Danilo Leal
94aa643484 docs: Update agent tools page (#42271)
Release Notes:

- N/A
2025-11-08 12:54:42 -03:00
Jakub Konka
28a85158c7 shell_env: Wrap error context in format! where missing (#42267)
Release Notes:

- N/A
2025-11-08 15:46:01 +00:00
boris.zalman
a2c2c617b5 Add helix keymap to delete without yanking (#41988)
Added the previously missing keymap for Helix delete-without-yank. The
action "editor::Delete" is mapped to "alt-d" as in
https://docs.helix-editor.com/master/keymap.html#changes

Release Notes:

- N/A
2025-11-08 15:53:34 +01:00
Xipeng Jin
77667f4844 Remove Markdown CodeBlock metadata and Custom rendering (#42211)
Follow up #40736

Clean up `CodeBlockRenderer::Custom` related rendering per the previous
PR
[comment](https://github.com/zed-industries/zed/pull/40736#issuecomment-3503074893).
Additional notes:
1. The `Custom` variant of the `CodeBlockRenderer` enum becomes unused
once all code related to the custom rendering logic is cleaned up.
2. The usage of the code block `metadata` field in the
`MarkdownTag::CodeBlock` enum needs further review.

I would like the team to further review the notes above so we can make
sure it is safe to clean this up and will not affect any potential
future features built on top of it. Thank you!

Release Notes:

- N/A
2025-11-08 14:00:53 +01:00
Hyeondong Lee
b01a6fbdea Fix missing highlight for macro_invocation bang (#41572)
(Not sure if this was left out on purpose, but this makes things feel a
bit more consistent since [VS Code parses bang mark as part of the macro
name](https://github.com/microsoft/vscode/blob/main/extensions/rust/syntaxes/rust.tmLanguage.json#L889-L905))


Release Notes:
- Added the missing highlight for the bang mark in macro invocations.


| **Before** | **After** |
| :---: | :---: |
| <img width="684" height="222" alt="before"
src="https://github.com/user-attachments/assets/ae71eda3-76b5-4547-b2df-4e437a07abf5"
/> | <img width="646" height="236" alt="fixed"
src="https://github.com/user-attachments/assets/500deda5-d6d8-439c-8824-65c2fb0a5daa"
/> |
2025-11-08 08:42:33 +00:00
Roland Rodriguez
44d91c1709 docs: Explain what scrollbar marks represent (#42130)
## Summary

Adds explanations for what each type of scrollbar indicator visually
represents in the editor.

## Description

This PR addresses the issue where users didn't understand what the
colored marks on the scrollbar mean. The existing documentation
explained how to toggle each type of mark on/off, but didn't explain
what they actually represent.

This adds a brief, clear explanation after each scrollbar indicator
setting describing what that indicator shows (e.g., "Git diff indicators
appear as colored marks showing lines that have been added, modified, or
deleted compared to the git HEAD").

## Fixes

Closes #31794

## Test Plan

- Documentation follows the existing style and format of
`docs/src/configuring-zed.md`
- Each explanation is concise and immediately follows the setting
description
- Language is clear and user-friendly

Release Notes:

- N/A
2025-11-08 10:40:18 +02:00
Donnie Adams
d187cbb188 Add comment injection support to remaining languages (#41710)
Release Notes:

- Added support for comment language injections for remaining built-in
languages and multi-line support for Rust
2025-11-08 10:34:53 +02:00
Agus Zubiaga
c241eadbc3 zeta2: Targeted retrieval search (#42240)
Since we removed the filtering step during context gathering, we want
the model to perform more targeted searches. This PR tweaks the search
tool schema, allowing the model to search within syntax nodes such as
`impl` blocks or methods.

This is what the query schema looks like now:

```rust
/// Search for relevant code by path, syntax hierarchy, and content.
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct SearchToolQuery {
    /// 1. A glob pattern to match file paths in the codebase to search in.
    pub glob: String,
    /// 2. Regular expressions to match syntax nodes **by their first line** and hierarchy.
    ///
    /// Subsequent regexes match nodes within the full content of the nodes matched by the previous regexes.
    ///
    /// Example: Searching for a `User` class
    ///     ["class\s+User"]
    ///
    /// Example: Searching for a `get_full_name` method under a `User` class
    ///     ["class\s+User", "def\sget_full_name"]
    ///
    /// Skip this field to match on content alone.
    #[schemars(length(max = 3))]
    #[serde(default)]
    pub syntax_node: Vec<String>,
    /// 3. An optional regular expression to match the final content that should appear in the results.
    ///
    /// - Content will be matched within all lines of the matched syntax nodes.
    /// - If syntax node regexes are provided, this field can be skipped to include as much of the node itself as possible.
    /// - If no syntax node regexes are provided, the content will be matched within the entire file.
    pub content: Option<String>,
}
```

We'll need to keep refining this, but the core implementation is ready.

Release Notes:

- N/A

---------

Co-authored-by: Ben <ben@zed.dev>
Co-authored-by: Max <max@zed.dev>
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
2025-11-08 01:06:12 +00:00
Mikayla Maki
5f8226457e Automate settings registration (#42238)
Release Notes:

- N/A

---------

Co-authored-by: Nia <nia@zed.dev>
2025-11-07 22:27:14 +00:00
Anthony Eid
309947aa53 editor: Allow clicking on excerpts with alt key to open file path (#42235)
#42021 Made clicking on an excerpt title toggle it. This PR brings back
the old behavior if a user is pressing the Alt key when clicking on an
excerpt title.

Release Notes:

- N/A

---------

Co-authored-by: Remco Smits <djsmits12@gmail.com>
2025-11-07 16:23:55 -05:00
Lukas Wirth
0881e548de terminal: Spawn terminal process on main thread on unix (#42234)
Otherwise the terminal will not process the signals correctly 

Release Notes:

- Fixed ctrl+c and friends not working in the terminal on macOS and
linux
2025-11-07 20:56:43 +00:00
claytonrcarter
4511d11a11 bundle: Skip sentry upload for local install on macOS (#42231)
This is a follow up to #41482. When running `script/bundle-mac`, it will
upload debug symbols to Sentry if you have a `$SENTRY_AUTH_TOKEN` set. I
happen to have one set, so this script was trying to generate and upload
those. Whoops! This change skips the upload entirely if you're running a
local install.

Release Notes:

- N/A
2025-11-07 12:38:01 -08:00
Conrad Irwin
19d2fdb6c6 Refresh releases page post deploy (#42218)
Release Notes:

- N/A
2025-11-07 13:34:47 -07:00
Conrad Irwin
20953ecb9d Make nightly bucket objects public (#42229)
Makes it easier to port updates to Cloudflare.

Release Notes:

- N/A
2025-11-07 19:43:08 +00:00
Anthony Eid
7475bdaf20 debugger: Truncate scope names to avoid text overlapping in variable list (#42230)
Closes #41969

This was caused by scope names not being truncated, unlike the other
types of variable list entries.

Release Notes:

- debugger: Fix bug where minimizing the width of the variable list
would cause scope names to overlap

Co-authored-by: Remco Smits <djsmits12@gmail.com>
2025-11-07 19:42:14 +00:00
Dima
8e4c807c6a Fix duplicated 'the' typo (#42225)
Just found this while reading the docs.

Release Notes:

- N/A
2025-11-07 21:37:10 +02:00
John Tur
a66dac7b3a Fix crash during drag-and-drop on Windows (#42227)
The HGLOBAL is itself the HDROP. Do not dereference it.

Release Notes:

- windows: Fixed crashes during drag-and-drop operations
2025-11-07 19:18:47 +00:00
morgankrey
8a903f9c10 2025 11 07 update privacy docs (#42226)
Docs update

Release Notes:

- N/A
2025-11-07 13:16:13 -06:00
Lukas Wirth
9f9575d100 Silence rust-analyzer startup errors (#42222)
When rust-analyzer is still loading the cargo project, it tends to error
out on most LSP requests with `content modified`. This pollutes our
logs.

Release Notes:

- N/A
2025-11-07 18:43:43 +00:00
Marshall Bowers
00898d46c0 docs: Add docs on extension capabilities (#42223)
This PR adds some initial docs on extension capabilities.

Release Notes:

- N/A
2025-11-07 18:38:20 +00:00
Joseph T. Lyons
bcc3307a7e Add optional Zed log field to all bug report templates (#42221)
Release Notes:

- N/A
2025-11-07 18:28:35 +00:00
Lukas Wirth
8ba33ad270 gpui: Do not unwrap in window_procedure (#42216)
Technically these should not be possible to hit, but Sentry says
otherwise. Turning these into errors should give us more information
than the abort due to unwinding across FFI boundaries.

Fixes ZED-321

Release Notes:

- N/A
2025-11-07 18:10:30 +00:00
Marshall Bowers
1e6344899d docs: Update extension features language (#42215)
This PR updates the language around what features extensions can
provide.

Release Notes:

- N/A
2025-11-07 17:43:22 +00:00
Jakub Konka
93f9cff876 Remove invalid assertion in editor (#42210)
Release Notes:

- N/A
2025-11-07 17:36:10 +00:00
Lukas Wirth
6cafe4a9c5 gpui: Do not panic when unable to find the selected fonts (#42212)
Fixes ZED-329

Release Notes:

- N/A
2025-11-07 17:08:37 +00:00
Lukas Wirth
585c440e6e util: Support shell env fetching for git bash (#42208)
Release Notes:

- N/A
2025-11-07 16:28:41 +00:00
Lukas Wirth
083bd147ef util: Fall back to cmd if we can't find powershell on the system (#42204)
Closes https://github.com/zed-industries/zed/issues/42165

Release Notes:

- Fixed trying to use PowerShell for commands when it's not installed on
the system
2025-11-07 17:13:52 +01:00
Remco Smits
9591790d8d markdown: Add support for HTML styling attributes (#42143)
Second take on https://github.com/zed-industries/zed/pull/37765.

This PR adds support for styling elements (**b**, **strong**, **em**,
**i**, **ins**, **del**) and also allows showing the styled text inline
with the surrounding text.
This is done by appending all of the following text into one text chunk
and merging the highlights from both into the already existing chunk. If
no text chunk exists yet, we create one and use it to store the
information on the next iteration.

**Before**
<img width="483" height="692" alt="Screenshot 2025-11-06 at 22 08 09"
src="https://github.com/user-attachments/assets/6158fd3b-066c-4abe-9f8e-bcafae85392e"
/>

**After**
<img width="868" height="300" alt="Screenshot 2025-11-06 at 22 08 21"
src="https://github.com/user-attachments/assets/4d5a7a33-d31c-4514-91c8-2b2a2ff43e0e"
/>

**Code example**
```html
<p>some text <b>bold text</b></p>
<p>some text <strong>strong text</strong></p>
<p>some text <i>italic text</i></p>
<p>some text <em>emphasized text</em></p>
<p>some text <del>delete text</del></p>
<p>some text <ins>insert text</ins></p>

<p>Some text <strong>strong text</strong> more text <b>bold text</b> more text <i>italic text</i> more text <em>emphasized text</em> more text <del>deleted text</del> more text <ins>inserted text</ins></p>

<p><a href="https://example.com">Link Text</a></p>

<p style="text-decoration: underline;">text styled from style attribute</p>
```

cc @bennetbo 

**TODO**
- [x] add tests for styling nested text that should result in one merge

Release Notes:

- Markdown Preview: Added support for `HTML` styling elements

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-11-07 16:12:27 +00:00
Lukas Wirth
74bf1a170d recent_projects: Do not try to watch /etc/ssh/ssh_config on windows (#42200)
Release Notes:

- N/A
2025-11-07 15:46:37 +00:00
Bennet Bo Fenner
e72c3bf20d agent_ui: Remove message_editor module (#42195)
Artefact from agent1 removal

Release Notes:

- N/A
2025-11-07 14:36:58 +00:00
Maokaman1
160bf915aa language_models: Filter out whitespace-only text content parts for OpenAI and OpenAI compatible providers (#40316)
Closes #40097

When multiple files are added sequentially to the agent panel, the
request JSON incorrectly includes "text" elements containing only
spaces. These empty elements cause the Zhipu AI API to return a "text
cannot be empty" error.
The fix filters out any "text" elements that are empty or contain only
whitespace.
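A minimal sketch of that filtering, with simplified content-part types rather than the actual provider request structs:

```rust
// Simplified stand-in for the content parts of a user message.
#[allow(dead_code)]
#[derive(Debug)]
enum ContentPart {
    Text(String),
    Image(Vec<u8>),
}

// Drop "text" parts that are empty or contain only whitespace before the
// request is serialized and sent to the provider.
fn filter_empty_text(parts: Vec<ContentPart>) -> Vec<ContentPart> {
    parts
        .into_iter()
        .filter(|part| match part {
            ContentPart::Text(text) => !text.trim().is_empty(),
            _ => true,
        })
        .collect()
}

fn main() {
    let parts = vec![
        ContentPart::Text("[@1.txt](zed:///agent/file?path=...)".into()),
        ContentPart::Text(" ".into()),
        ContentPart::Text(" describe".into()),
    ];
    // The lone " " part is removed; the others are kept.
    println!("{:?}", filter_empty_text(parts));
}
```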

UI state when the error occurs:
<img width="300" alt="Image"
src="https://github.com/user-attachments/assets/c55e5272-3f03-42c0-b412-fa24be2b0043"
/>

Request JSON (causing the error):
```
{
  "model": "glm-4.6",
  "messages": [
    {
      "role": "system",
      "content": "<<CUT>>"
    },
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "[@1.txt](zed:///agent/file?path=C%3A%5CTemp%5CTest%5C1.txt)"
        },
        { "type": "text", "text": " " },
        {
          "type": "text",
          "text": "[@2.txt](zed:///agent/file?path=C%3A%5CTemp%5CTest%5C2.txt)"
        },
        { "type": "text", "text": " describe" },
```

Release Notes:

- Fixed an issue when an OpenAI request contained whitespace-only text content

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-11-07 14:43:07 +01:00
Xipeng Jin
aff4c25a47 markdown: Restore horizontal scrollbars for codeblocks (#40736)
### Summary

Restore the agent pane’s code-block horizontal scrollbar for easier
scrolling without trackpad and preserve individual scroll state across
multiple code blocks.

### Motivation

Addresses https://github.com/zed-industries/zed/issues/34224, where
agent responses with wide code snippets couldn't be scrolled
horizontally in the panel. Previously there was no visible scrollbar to
let the user move the code snippet, and it was not obvious that you
could use the trackpad or hold down `shift` while scrolling. This PR
ensures the user can drag the horizontal scrollbar with just the mouse
to reveal the complete line when the code overflows the width of the
code block.

### Changes

- Support auto-hide horizontal scrollbar for rendering code block in
agent panel by adding scrollbar support in markdown.rs
- Add `code_block_scroll_handles` cache in
_crates/markdown/src/markdown.rs_ to give each code block a persistent
`ScrollHandle`.
- Wrap rendered code blocks with custom horizontal scrollbars that match
the vertical scrollbar styling and track hover visibility.
- Retain or clear scroll handles based on whether horizontal overflow is
enabled, preventing leaks when the markdown re-renders.

### How to Test

1. Open the agent panel, request code generation, and ensure wide
snippets show a horizontal scrollbar on hover.
2. Scroll horizontally, navigate away (e.g., change tabs or trigger a
re-render), and confirm the scroll position sticks when returning.
3. Toggle horizontal overflow styling off/on (if applicable) and verify
scrollbars appear or disappear appropriately.

### Screenshots / Demos (if UI change)


https://github.com/user-attachments/assets/e23f94d9-8fe3-42f5-8f77-81b1005a14c8

### Notes for Reviewers

- This is my first contribution to `zed`, so sorry for any code pattern
inconsistency. Please let me know if you have any comments or
suggestions to make the code pattern consistent and easy to maintain.
- For now, the horizontal scrollbar is not configurable from the setting
and the style is fixed with the same design as the vertical one. I am
happy to readjust this setting to fit the needs.
- Please let me know if you think any behaviors or designs need to be
changed for the scrollbar.
- All changes live inside _crates/markdown/src/markdown.rs_; no API
surface changes.

Closes #34224 

### Release Notes:

- AI: Show horizontal scroll-bars in wide markdown elements
2025-11-07 13:35:59 +00:00
aohanhongzhi
146e754f73 URL-encode the image paths in Markdown so that images with filenames (#41788)
Closes https://github.com/zed-industries/zed/issues/41786

Release Notes:

- markdown preview: Fixed an issue where path urls would not be parsed
correctly when containing URL-encoded characters

<img width="1680" height="1126"
alt="569415cb-b3e8-4ad6-b31c-a1898ec32085"
src="https://github.com/user-attachments/assets/7de8a892-ff01-4e00-a28c-1c5e9206ce3a"
/>
2025-11-07 14:02:40 +01:00
Kirill Bulatov
278fe91a9a Skip buffer registration if lsp data should be ignored (#42190)
Release Notes:

- N/A
2025-11-07 12:57:54 +00:00
Lukas Wirth
39fb89e031 workspace: Do not panic when the database is corrupted (#42186)
Fixes ZED-1NK

Release Notes:

- Fixed zed not starting when the database cannot be loaded
2025-11-07 12:36:36 +00:00
Casper van Elteren
9d52b6c538 terminal: Allow configuring conda manager (#40577)
Closes #40576
This PR makes Conda activation configurable and transparent by adding a
`terminal.detect_venv.on.conda_manager` setting (`"auto" | "conda" |
"mamba" | "micromamba"`, default `"auto"`). Python environment
activation is updated to honor this preference (or the detected manager
executable) and to fall back to `conda` when necessary.

The preference is passed via `ZED_CONDA_MANAGER` from the terminal
settings, and the activation command is built accordingly (with proper
quoting for paths). Changes span
`zed/crates/terminal/src/terminal_settings.rs` (new `CondaManager` and
setting), `zed/crates/project/src/terminals.rs` (inject env var),
`zed/crates/languages/src/python.rs` (activation logic), and
`zed/assets/settings/default.json` (document the setting). Default
behavior remains unchanged for most users while enabling explicit
selection of `mamba` or `micromamba`.
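As a rough sketch of how the preference could flow from `ZED_CONDA_MANAGER` into an activation command (illustrative only; the quoting and fallback order here are simplified, not Zed's exact logic):

```rust
use std::env;

// Pick the conda-style manager based on the ZED_CONDA_MANAGER preference and
// build the activation command, quoting the environment path.
fn activation_command(env_path: &str) -> String {
    let manager = match env::var("ZED_CONDA_MANAGER").as_deref() {
        Ok("conda") => "conda",
        Ok("mamba") => "mamba",
        Ok("micromamba") => "micromamba",
        // "auto" or unset: the real logic prefers the detected manager
        // executable; this sketch just falls back to conda.
        _ => "conda",
    };
    format!("{manager} activate \"{env_path}\"")
}

fn main() {
    println!("{}", activation_command("/home/me/miniconda3/envs/my env"));
}
```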

Release Notes:
- Added: terminal.detect_venv.on.conda_manager setting to choose the
Conda manager (auto, conda, mamba, micromamba). Default: auto.
- Changed: Python Conda environment activation now respects the
configured manager, otherwise uses the detected environment manager
executable, and falls back to conda.
- Reliability: Activation commands quote manager paths to handle spaces
across platforms.
- Compatibility: No breaking changes; non-Conda environments are
unaffected; remote terminals are supported.

---------

Co-authored-by: Lukas Wirth <me@lukaswirth.dev>
Co-authored-by: Lukas Wirth <lukas@zed.dev>
2025-11-07 12:35:06 +00:00
Mikayla Maki
082b80ec89 gpui: Unify the index_for_x methods (#42162)
Supersedes https://github.com/zed-industries/zed/pull/39910

At some point, these two methods (`index_for_x` and
`closest_index_for_x`) were separated out, and some code paths used one
while other code paths took the other. That said, their behavior is
almost identical:

- `index_for_x` computes the index behind the pixel offset, and returns
`None` if there's an overshoot
- `closest_index_for_x` computes the nearest index to the pixel offset,
taking into account whether the offset is over halfway through or not.
If there's an overshoot, it returns the length of the line.

Given these two behaviors, `closest_index_for_x` seems to be a more
useful API than `index_for_x`, and indeed the display map and other core
editor features use it extensively. So this PR is an experiment in
simply replacing one behavior with the other.
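A simplified sketch of the two behaviors over a line of glyphs with known pixel widths (not gpui's actual shaped-line API):

```rust
// Returns the glyph index under pixel offset `x`, or None on overshoot.
fn index_for_x(widths: &[f32], x: f32) -> Option<usize> {
    let mut cursor = 0.0;
    for (ix, w) in widths.iter().enumerate() {
        if x < cursor + w {
            return Some(ix);
        }
        cursor += w;
    }
    None // past the end of the line
}

// Returns the nearest index, rounding at each glyph's midpoint and clamping
// overshoots to the length of the line.
fn closest_index_for_x(widths: &[f32], x: f32) -> usize {
    let mut cursor = 0.0;
    for (ix, w) in widths.iter().enumerate() {
        if x < cursor + w / 2.0 {
            return ix; // left of the glyph's midpoint
        } else if x < cursor + w {
            return ix + 1; // past the midpoint, snap to the next index
        }
        cursor += w;
    }
    widths.len()
}

fn main() {
    let widths = [8.0, 8.0, 8.0];
    assert_eq!(index_for_x(&widths, 100.0), None);
    assert_eq!(closest_index_for_x(&widths, 100.0), 3);
    assert_eq!(closest_index_for_x(&widths, 13.0), 2);
    println!("ok");
}
```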

Release Notes:

- Improved the accuracy of mouse selections in Markdown
2025-11-07 14:23:43 +02:00
Bennet Bo Fenner
483e31e42a Fix telemetry (#42184)
Follow up to #41991 🤦🏻 

Release Notes:

- N/A
2025-11-07 11:36:05 +00:00
Bennet Bo Fenner
61c263fcf0 agent_ui: Allow opening thread as markdown in remote projects (#42182)
Release Notes:

- Added support for opening thread as markdown in remote projects
2025-11-07 11:32:16 +00:00
Lukas Wirth
3c19174f7b diagnostics: Fix diagnostics view no clearing blocks correctly (#42179)
Release Notes:

- N/A
2025-11-07 11:16:25 +00:00
Bennet Bo Fenner
7e93c171b5 action_log: Remove unused code (#42177)
Release Notes:

- N/A
2025-11-07 10:33:51 +00:00
Lukas Wirth
88a8e53696 editor: Remove buffer and display map fields from SelectionsCollection (#42175)
Release Notes:

- N/A
2025-11-07 11:21:14 +01:00
Bennet Bo Fenner
c2416d6bab Add agent metrics (#41991)
Release Notes:

- N/A
2025-11-07 10:07:57 +00:00
Jason Lee
29cc3d0e18 gpui: Add support for check state to MenuItem (#39876)
Release Notes:

- N/A

---


https://github.com/user-attachments/assets/d46b77ae-88ba-43da-93ad-3656a7fecaf9

The system menu is only supported on macOS, so this only modifies the
macOS platform-specific code.

Windows and Linux use `ApplicationMenu`; I have already added a
`checked` option to Zed's ContextMenu.

Then later, when this PR is merged, we can improve the "View" menu to
show the check state for panels (Project Panel, Outline Panel, ...).
2025-11-07 09:42:55 +00:00
Remco Smits
760747f127 markdown: Add support for HTML table captions (#41192)
Thanks to @Angelk90 for pointing out that we were missing this feature.
This PR implements caption support for HTML tables in the markdown
preview.

**Code example**
```html
<table>
    <caption>Revenue by Region</caption>
    <tr>
        <th rowspan="2">Region</th>
        <th colspan="2">Revenue</th>
        <th rowspan="2">Growth</th>
    </tr>
    <tr>
        <th>Q2 2024</th>
        <th>Q3 2024</th>
    </tr>
    <tr>
        <td>North America</td>
        <td>$2.8M</td>
        <td>$2.4B</td>
        <td>+85,614%</td>
    </tr>
    <tr>
        <td>Europe</td>
        <td>$1.2M</td>
        <td>$1.9B</td>
        <td>+158,233%</td>
    </tr>
    <tr>
        <td>Asia-Pacific</td>
        <td>$0.5M</td>
        <td>$1.4B</td>
        <td>+279,900%</td>
    </tr>
</table>
```

**Result**:
<img width="1201" height="774" alt="Screenshot 2025-10-25 at 21 18 01"
src="https://github.com/user-attachments/assets/c2a8c1c2-f861-40df-b5c9-549932818f6e"
/>

Release Notes:

- Markdown preview: Added support for `HTML` table captions
2025-11-07 09:47:12 +01:00
Lukas Wirth
d4ec55b183 editor: Correctly handle blocks spanning more than 128 rows (#42172)
Release Notes:

- Fixed block rendering for blocks spanning more than 128 rows
2025-11-07 08:46:57 +00:00
Max Brunsfeld
f89bb2f0d2 Small zeta cli fixes (#42170)
* Fix a panic that happened because we lost the
`ContextRetrievalStarted` debug message, so we didn't assign `t0`.
* Write the edit prediction response log file as a markdown file
containing the text, not a JSON file. We almost always want the text
content.

Release Notes:

- N/A
2025-11-07 07:34:05 +00:00
Conrad Irwin
e43c436cb6 Create sentry releases in after_release (#42169)
This had been moved to auto-release preview, and was not running for
stable.

Release Notes:

- N/A
2025-11-07 07:30:10 +00:00
Lukas Wirth
de1bf64f41 project: Remove unnecessary panic (#42167)
If we are in a remote session with the remote dropped, this path is very
much reachable if the call to this function got queued up in a task.

Fixes ZED-124

Release Notes:

- N/A
2025-11-07 06:56:19 +00:00
Jakub Konka
00eafe63d9 git: Make long-running git staging snappy in git panel (#42149)
Previously, staging a large file in the git panel would block the UI
until that operation finished. This is due to the fact that staging is a
git op that is locked globally by git (per repo), meaning only one op
that modifies the git index can run at any one time. In order to make
the UI snappy while letting any pending git staging jobs finish in the
background, we track their progress via `PendingOps` indexed by git
entry path. We already had a concept of pending operations; however,
they existed at the UI layer in the `GitPanel` abstraction. This PR
moves and augments `PendingOps` into the `Repository` model in
`git_store`, which seems like a more natural place for tracking running
git jobs/operations. Thanks to this, pending ops are now stored in a
`SumTree`, indexed by git entry path, as part of the `Repository`
snapshot, which makes for efficient access from the UI.
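A heavily simplified sketch of path-indexed pending operations; Zed stores these in a `SumTree` on the `Repository` snapshot, but a `BTreeMap` stands in here to show the idea:

```rust
use std::collections::BTreeMap;

#[derive(Debug, Clone, Copy, PartialEq)]
enum PendingOp {
    Stage,
    Unstage,
}

#[derive(Default)]
struct PendingOps {
    by_path: BTreeMap<String, PendingOp>,
}

impl PendingOps {
    fn start(&mut self, path: &str, op: PendingOp) {
        self.by_path.insert(path.to_string(), op);
    }
    fn finish(&mut self, path: &str) {
        self.by_path.remove(path);
    }
    // The UI can render an entry optimistically while an op is in flight.
    fn pending(&self, path: &str) -> Option<PendingOp> {
        self.by_path.get(path).copied()
    }
}

fn main() {
    let mut ops = PendingOps::default();
    ops.start("src/large_file.rs", PendingOp::Stage);
    assert_eq!(ops.pending("src/large_file.rs"), Some(PendingOp::Stage));
    ops.finish("src/large_file.rs");
    assert_eq!(ops.pending("src/large_file.rs"), None);
    println!("ok");
}
```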

Release Notes:

- Improved UI responsiveness when staging/unstaging large files in the
git panel
2025-11-07 07:34:06 +01:00
Max Brunsfeld
5044e6ac1d zeta2: Make eval example file format more expressive (#42156)
* Allow expressing alternative possible context fetches in `Expected
Context` section
* Allow marking a subset of lines as "required" in `Expected Context`.

We still need to improve how we display the results. I've removed the
context pass/fail pretty printing for now, because it would need to be
rethought to work with the new structure, but for now I think we should
focus on getting basic predictions to run. But this is progress toward a
better structure for eval examples.

Release Notes:

- N/A

---------

Co-authored-by: Oleksiy Syvokon <oleksiy.syvokon@gmail.com>
Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Agus Zubiaga <agus@zed.dev>
2025-11-06 18:05:18 -08:00
Max Brunsfeld
784fdcaee3 zeta2: Build edit prediction prompt and process model output in client (#41870)
Release Notes:

- N/A

---------

Co-authored-by: Agus Zubiaga <agus@zed.dev>
Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
2025-11-06 18:36:58 -05:00
Ben Kunkle
fb87972f44 settings_ui: Use any open workspace window when opening settings links (#42106)

Release Notes:

- N/A
2025-11-06 18:36:23 -05:00
Julia Ryan
8cccb5d4f5 Use windows runner for publishing winget package (#42144)
Our new Linux runners don't have PowerShell installed, which causes the
`release-winget` job to fail. This simply runs that step on Windows
instead.

Release Notes:

- N/A
2025-11-06 14:13:03 -08:00
Andrew Farkas
2895d31d83 Fix tab switcher close item using wrong pane (#42138)
Closes #40646

Release Notes:

- Fixed `tab_switcher::CloseSelectedItem` doing nothing on tab in
inactive pane

Co-authored-by: Cole Miller <cole@zed.dev>
2025-11-06 20:38:52 +00:00
Cave Bats Of Ware
a112153a2e Enable image support in remote projects (#39158)
Adds support for opening and displaying images in remote projects. The
server streams image data to the client in chunks, where the client then
reconstructs the image and displays it. This change includes:

- Adding `image` crate as a dependency for remote_server
- Implementing `ImageStore` for remote access
- Creating proto definitions for image-related messages
- Adding handlers for creating images for peers
- Computing image metadata from bytes instead of reading from disk for
remote images

Closes #20430
Closes #39104
Closes #40445

Release Notes:

- Added support for image preview in remote sessions.
- Fixed #39104

<img width="982" height="551" alt="image"
src="https://github.com/user-attachments/assets/575428a3-9144-4c1f-b76f-952019ea14cc"
/>
<img width="978" height="547" alt="image"
src="https://github.com/user-attachments/assets/fb58243a-4856-4e73-bb30-8d5e188b3ac9"
/>

---------

Co-authored-by: Julia Ryan <juliaryan3.14@gmail.com>
2025-11-06 11:31:32 -08:00
Richard Feldman
d21184b1d3 Fix ACP extension root dir (#42131)
Agents running in extensions need to have a root directory of the
extension's dir for installation and authentication, but *not* for the
conversation itself - otherwise the agent is running things like
terminal commands in the wrong dir.

Release Notes:

- Fixed Agent Server extensions having the current working directory of
the extension rather than the project
2025-11-06 19:19:16 +00:00
Anthony Eid
ba136abf6c editor: Add action to move between snippet tabstop positions v2 (#42127)
Closes https://github.com/zed-industries/zed/issues/41407

This PR fixes the issues that caused #41407 to be reverted in #42008.
Namely that the action context didn't take into account if a snippet
could move backwards or forwards, and the action shared the same key
mapping as `editor::MoveToPreviousWordStart` and
`editor::MoveToNextWordEnd`.

I changed the default key mapping for the move to snippet tabstop to tab
and shift-tab to match the default behavior of other editors.

Release Notes:

- Editor: Add actions to move between snippet tabstop positions
2025-11-06 18:49:39 +00:00
Lukas Wirth
2d45c23fb0 remote: Flush to stdin when writing to sftp 2 (#42126)
https://github.com/zed-industries/zed/pull/42103#issuecomment-3498137130

Release Notes:

- N/A
2025-11-06 18:18:19 +00:00
Bo Lorentsen
b78f19982f Allow for using config specific gdb binaries in gdb adapter (#37193)
I really need this for embedded development in IDF and Zephyr, where the
chip vendors sometimes provide their own specialized versions of the
tools, and I need to direct Zed to use these.

The current GDB adapter only supports the gdb it finds in the normal
search path, and it also seems like we were not able to pass
gdb-specific startup arguments (only `-i=dap` is set).

In order to fix this, I expanded the GDB adapter (with some help from
GPT-4.1) with two new config options:

* **gdb_path** holds a full path to the gdb executable; for now only a
full path or a path relative to cwd
* **gdb_args** an array holding additional arguments passed to gdb on
startup

It also seemed like the `env` config was not transferred to gdb, so this
is addressed too.

I have tested this locally, and it seems to actually work, not just
compile :-)

Release Notes:

- debugger: Adds gdb_path and gdb_args to gdb debug adapter options
- debugger: Fix bug where gdb debug sessions wouldn't inherit the shell
environment from Zed

---------

Co-authored-by: Anthony <anthony@zed.dev>
2025-11-06 12:53:59 -05:00
Lukas Wirth
a7fac65d62 gpui: Remove unneeded weak Rc cycle in WindowsWindowInner (#42119)
Release Notes:

- N/A
2025-11-06 17:33:09 +00:00
Lukas Wirth
bf8864d106 gpui: Remove all (unsound) ManuallyDrop usages, panic on device loss (#42114)
Given that when we lose our devices unrecoverably we will panic anyways,
might as well do so eagerly which makes it clearer.

Additionally this PR replaces all uses of `ManuallyDrop` with `Option`,
as otherwise we need to do manual bookkeeping of what is and isn't
initialized when we try to recover devices as we can bail out halfway
while recovering. In other words, the code prior to this was fairly
unsound due to freely using `ManuallyDrop::drop`.
 
Fixes ZED-1SS
 
Release Notes:

- N/A
2025-11-06 17:29:15 +00:00
Nia
08ee4f7966 notify: Bump to rebased 8.2.0 fork (#42113)
Hopefully makes progress towards #38109, #39266

Release Notes:

- N/A
2025-11-06 16:30:17 +00:00
Lukas Wirth
6f6f652cf2 zlog: Add env var to enable line number logging (#41905)
Release Notes:

- N/A
2025-11-06 15:31:26 +00:00
Lukas Wirth
9c8e37a156 clangd: Fix switch source header action on windows (#42105)
Fixes https://github.com/zed-industries/zed/issues/40935

Release Notes:

- Fixed clangd's switch source header action not working on windows
2025-11-06 15:12:34 +00:00
Kirill Bulatov
d54c64f35a Refresh outline panel on file renames (#42104)
Closes https://github.com/zed-industries/zed/issues/41877

Release Notes:

- Fixed outline panel not updating file headers on rename
2025-11-06 14:46:46 +00:00
Lukas Wirth
0b53da18d5 remote: Flush to stdin when writing to sftp (#42103)
https://github.com/zed-industries/zed/issues/42027#issuecomment-3497210172

Release Notes:

- Fixed ssh remoting potentially failing due to not flushing stdin to
sftp
2025-11-06 14:27:26 +00:00
Libon
5ced3ef0fd editor: Improve multibuffer header spacing (#42071)
BEFORE:
<img width="1751" height="629" alt="image"
src="https://github.com/user-attachments/assets/f88464d3-5daa-4d53-b394-f92db8b0fd8c"
/>

AFTER:
<img width="1714" height="493" alt="image"
src="https://github.com/user-attachments/assets/022c883b-b219-40a3-aa5f-0c16d23e8abf"
/>

Release Notes:

- N/A

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-11-06 14:09:06 +00:00
ʟᴜɴᴇx
7a37dd9433 Do not serialize buffers containing bundled files (#38102)
Fix bundled file persistence by introducing SerializationMode enum

Closes: #38094

What's the issue?

Opening bundled files like Default Key Bindings
(zed://settings/keymap-default.json) was causing SQLite foreign key
constraint errors. The editor was trying to save state for these
read-only assets just like regular files, but since bundled files don't
have database entries, the foreign key constraint would fail.

The fix

Replaced the boolean serialize_dirty_buffers flag with a type-safe
SerializationMode enum:

```rust
pub enum SerializationMode {
    Enabled,   // Regular files persist across sessions
    Disabled,  // Bundled files don't persist
}
```

This prevents serialization at the source: workspace_id() returns None
for disabled editors, serialize() bails early, and should_serialize()
returns false. When opening bundled files, we set the mode to Disabled
from the start, so they're treated as transient views that never
interact with the persistence layer.
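A small sketch of how those guards could hang together (simplified names and signatures, not the editor crate's actual ones):

```rust
#[derive(Clone, Copy, PartialEq)]
pub enum SerializationMode {
    Enabled,
    Disabled,
}

struct Editor {
    serialization_mode: SerializationMode,
    workspace_id: Option<u64>,
}

impl Editor {
    fn workspace_id(&self) -> Option<u64> {
        // Disabled editors report no workspace, so nothing is ever written.
        if self.serialization_mode == SerializationMode::Disabled {
            return None;
        }
        self.workspace_id
    }

    fn should_serialize(&self) -> bool {
        self.serialization_mode == SerializationMode::Enabled
            && self.workspace_id.is_some()
    }
}

fn main() {
    // A bundled (read-only) file is opened with serialization disabled.
    let bundled = Editor {
        serialization_mode: SerializationMode::Disabled,
        workspace_id: Some(7),
    };
    assert_eq!(bundled.workspace_id(), None);
    assert!(!bundled.should_serialize());
    println!("ok");
}
```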

Changes

- editor.rs: Added SerializationMode enum and updated serialization
methods to respect it
- items.rs: Guarded should_serialize() to prevent disabled editors from
being serialized
- zed.rs: Set SerializationMode::Disabled in open_bundled_file()

Result

Bundled files open cleanly without SQLite errors and don't persist
across workspace reloads (expected behavior). Regular file persistence
remains unaffected.

Release Notes:

- Fixed SQLite foreign key constraint errors when opening bundled files
like Default Key Bindings.

---------

Co-authored-by: MrSubidubi <finn@zed.dev>
2025-11-06 13:30:03 +00:00
Danilo Leal
a7163623e7 agent_ui: Make "add more agents" menu item take to extensions (#42098)
Now that agent servers are a thing, this is the primary and easiest way
to quickly add more agents to Zed, without touching any settings JSON
file. :)

Release Notes:

- N/A
2025-11-06 09:55:06 -03:00
Lukas Wirth
f08068680d agent_ui: Do not show Codex wsl warning on wsl take 2 (#42096)
https://github.com/zed-industries/zed/pull/42079#discussion_r2498472887

Release Notes:

- N/A
2025-11-06 12:15:17 +00:00
Lukas Wirth
a951e414d8 util: Fix shell environment fetching with cmd (#42093)
Release Notes:

- Fixed shell environment fetching failing when having `cmd` configured
as terminal shell
2025-11-06 12:07:34 +00:00
Lukas Wirth
e75c6b1aa5 remote: Fix detect_can_exec detection (#42087)
Closes https://github.com/zed-industries/zed/issues/42036

Release Notes:

- Fixed an issue with WSL exec detection eagerly failing, breaking
remote connections
2025-11-06 12:06:32 +00:00
Kirill Bulatov
149eedb73d Fix scroll position restoration (#42088)
Follow-up of https://github.com/zed-industries/zed/pull/42035
Scroll position needs to be stored immediately; otherwise closing the
editor may not register it.

Release Notes:

- N/A
2025-11-06 11:37:27 +00:00
Lukas Wirth
fb46bae3ed remote: Add missing quotation in extract_server_binary (#42085)
Also respect the shell env for various commands again.
Should close https://github.com/zed-industries/zed/issues/42027

Release Notes:

- Fixed remote server installation failing on some setups
2025-11-06 11:19:24 +00:00
Lukas Wirth
28d7c37b0d recent_projects: Improve user facing error messages on connection failure (#42083)
cc https://github.com/zed-industries/zed/issues/42004

Release Notes:

- N/A
2025-11-06 11:00:41 +00:00
Lukas Wirth
f6da987d4c agent_ui: Do not show Codex wsl warning on wsl (#42079)
Release Notes:

- Fixed the codex wsl warning being shown on wsl itself
2025-11-06 10:34:14 +00:00
Kirill Bulatov
efc71f35a5 Tone down extension errors (#42080)
Before:
<img width="2032" height="1161" alt="before"
src="https://github.com/user-attachments/assets/5c497b47-87e8-4167-bc28-93e34556ea4d"
/>

After:
<img width="2032" height="1161" alt="after"
src="https://github.com/user-attachments/assets/4a87803f-67df-4bf8-ade0-306f3c9ca81e"
/>

Release Notes:

- N/A
2025-11-06 10:12:04 +00:00
Lukas Wirth
3b7ee58cfa zed: Attach console to parent process before processing --printenv (#42075)
Otherwise the `--printenv` flag will simply not work on windows

Release Notes:

- N/A
2025-11-06 09:47:54 +00:00
Lukas Wirth
32047bef93 text: Improve panic messages with more information (#42072)
Release Notes:

- N/A
2025-11-06 09:16:45 +00:00
Lukas Wirth
4003287cc3 vim: Downgrade user config error from panic to log (#42070)
Fixes ZED-2W3

Release Notes:

- Fixed panic due to invalid vim keycap
2025-11-06 08:37:04 +00:00
Lukas Wirth
001a47c8b7 gpui: Inline some hot recursive scope functions (#42069)
This reduces stack usage in prepainting slightly as we stack less
references to window/app.

Release Notes:

- N/A
2025-11-06 08:31:39 +00:00
Lukas Wirth
113f0780b3 agent_ui: Fix string slicing panic in message editor (#42068)
Fixes ZED-302

Release Notes:

- Fixed a panic in agent message editor when using multibyte whitespace
characters
2025-11-06 07:58:20 +00:00
Lexi Mattick
273321608f gpui: Add debug assertion to Window::on_action and make docs consistent (#41202)
Improves formatting consistency across various docs, fixes some typos,
and adds a missing `debug_assert_paint` to `Window::on_action` and
`Window::on_action_when`.

Release Notes:

- N/A
2025-11-06 07:55:20 +00:00
Smit Barmase
92cfce568b language: Fix completion menu no longer prioritizes relevant items for Typescript and Python (#42065)
Closes #41672

Regressed in https://github.com/zed-industries/zed/pull/40242

Release Notes:

- Fixed issue where completion menu no longer prioritizes relevant items
for TypeScript and Python.
2025-11-06 13:19:46 +05:30
Coenen Benjamin
2b6cf31ace file_finder: Display duplicated file in file finder history (#41917)
Closes #41850

When digging into this, I figured out what was going on: the file finder
history doesn't update the name of a duplicated file. When you duplicate
a file, it's automatically named `filename copy`, and that name was
added to the history but never updated. So once you wanted to go back to
this file, it was no longer part of the displayed file finder history,
because that file doesn't exist anymore even though the entity id
remains the same. I was also able to reproduce this bug by just renaming
a file.

Release Notes:

- Fixed: Display duplicated file in file finder history

---------

Signed-off-by: Benjamin <5719034+bnjjj@users.noreply.github.com>
Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
Co-authored-by: Lukas Wirth <me@lukaswirth.dev>
2025-11-06 07:06:45 +00:00
Lemon
58cec41932 Adjust terminal decorative character ranges to include missing Powerline characters (#42043)
Closes #41975.

This change adjusts some of the ranges which were incorrectly labeled or
excluded characters. The new ranges include three codepoints which are
not assigned in Nerd Fonts. However, one of these codepoints was
already included prior to this change. These codepoints are:

- U+E0C9, between the two ice separators
- U+E0D3, between the two trapezoid separators. The ranges prior to this
PR already included this one.
- U+E0D5, between the trapezoid separators and the inverted triangle
separators

I included these so as to not overcomplicate the ranges by
cherry-picking the defined codepoints. That being said, if we're okay
with this and one additional unassigned codepoint (U+E0CB, between the
ice separators and the honeycomb separators) being included, then a
simple range from 0xE0B0 to 0xE0D7 nicely includes all of the Powerline
characters.
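For reference, a minimal sketch of that simpler alternative, treating every codepoint from U+E0B0 through U+E0D7 as a Powerline decorative character (not the exact ranges this PR ships):

```rust
// Decorative Powerline codepoints should skip the terminal's contrast
// adjustment so the separators keep blending into adjacent cells.
fn is_powerline_decorative(c: char) -> bool {
    ('\u{E0B0}'..='\u{E0D7}').contains(&c)
}

fn main() {
    assert!(is_powerline_decorative('\u{E0B0}')); // solid right-arrow separator
    assert!(is_powerline_decorative('\u{E0D7}')); // last codepoint in the range
    assert!(!is_powerline_decorative('A'));
    println!("ok");
}
```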

I wasn't sure how to write tests for this so I just added two characters
to the existing tests which were previously not covered lol. All of the
Powerline characters can be seen
[here](https://www.nerdfonts.com/cheat-sheet) by searching `nf-pl`.

Release Notes:

- Fixed certain Powerline characters incorrectly having terminal
contrast adjustment applied.
2025-11-06 06:38:34 +00:00
Conrad Irwin
2ec5ca0e05 Fix generate release notes script on first stable (#42061)
Don't crash in generate-release-notes on the first stable
commit on a branch.

Release Notes:

- N/A
2025-11-05 23:25:30 -07:00
Conrad Irwin
f8da550867 Refresh zed.dev releases page after releases (#42060)
Release Notes:

- N/A
2025-11-05 23:25:13 -07:00
Mayank Verma
0b1d3d78a4 git: Fix pull failing when tracking remote with different branch name (#41768)
Closes #31430

Release Notes:

- Fixed git pull failing when tracking remote with different branch name

Here's a before/after comparison when `dev` branch has upstream set to
`origin/main`:


https://github.com/user-attachments/assets/3a47e736-c7b7-4634-8cd1-aca7300c3a73
2025-11-06 05:18:08 +00:00
Cole Miller
930b489d90 ci: Don't require protobuf and postgres checks for tests_pass for now (#42057)
For now, there are cases where we want to merge PRs (advisedly) even
though these checks fail.

Release Notes:

- N/A
2025-11-06 00:03:50 -05:00
Delvin
121cee8045 git: Add cursor pointer on last commit to check changes (#41960)
Release Notes:
- Improved visual cue in the git panel UI for checking previous commit changes

Before: 
<img width="1470" height="956" alt="Screenshot 2025-11-05 at 2 06 49 pm"
src="https://github.com/user-attachments/assets/b8c54bb6-c8b8-4d36-a14f-71d725ed68f2"
/>

After:
<img width="1470" height="956" alt="Screenshot 2025-11-05 at 2 06 24 pm"
src="https://github.com/user-attachments/assets/d8d96f9e-ceed-4c02-9f93-de9fd3dfcbf1"
/>
2025-11-05 23:27:38 -05:00
Viraj Bhartiya
5360dc1504 Refactor timestamp formatting in Git UI components to use chrono for local time calculations (#41005)
- Updated `blame_ui.rs`, `branch_picker.rs`, `commit_tooltip.rs`, and
`commit_view.rs` to replace the previous timestamp formatting with
`chrono` for better accuracy in local time representation.
- Introduced `chrono::Local::now().offset().local_minus_utc()` to obtain
the local offset for timestamp formatting (see the sketch below).
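A minimal sketch of that offset-based formatting, using a hypothetical Unix timestamp; the `chrono` calls match the ones named above:

```rust
use chrono::{DateTime, FixedOffset, Local, TimeZone, Utc};

fn main() {
    // Hypothetical commit timestamp, in seconds since the Unix epoch.
    let commit_unix_ts: i64 = 1_730_000_000;

    // Offset of the local timezone from UTC, in seconds.
    let offset_secs = Local::now().offset().local_minus_utc();
    let offset = FixedOffset::east_opt(offset_secs).expect("valid local offset");

    // Convert the UTC timestamp into the user's local timezone for display.
    let utc: DateTime<Utc> = Utc.timestamp_opt(commit_unix_ts, 0).unwrap();
    let local = utc.with_timezone(&offset);
    println!("{}", local.format("%Y-%m-%d %H:%M %:z"));
}
```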

Closes #40878

Release Notes:
- Improved timestamp handling in various Git UI components for enhanced
user experience.
2025-11-05 23:23:22 -05:00
Danilo Leal
69862790cb docs: Improve content in /ai/agent-panel and /ai/rules (#42055)
- Clarify about first-party supported features in external agents
- Create new section for selection as context for higher visibility
- Add keybindings for agent profiles
- Fix outdated setting using `assistant` instead of `agent`
- Add keybinding for accessing the rules library

Release Notes:

- N/A
2025-11-06 01:18:55 -03:00
ᴀᴍᴛᴏᴀᴇʀ
284d8f790a Support Forgejo and Gitea avatars in git blame (#41813)
Part of #11043.

Codeberg is a public instance of Forgejo, as confirmed by the API
documentation at https://codeberg.org/api/swagger. Therefore, I renamed
the related component from codeberg to forgejo and added codeberg.org as
a public instance.

Furthermore, to optimize request speed for the commit API, I set
`stat=false&verification=false&files=false`.

<img width="1650" height="1268" alt="CleanShot 2025-11-03 at 19 57
06@2x"
src="https://github.com/user-attachments/assets/c1b4129e-f324-41c2-86dc-5e4f7403c046"
/>

<br/>
<br/>

Regarding Gitea Support: 

Forgejo is a fork of Gitea, and their APIs are currently identical
(e.g., for getting avatars). However, to future-proof against potential
API divergence, I decided to treat them as separate entities. The
current gitea implementation is essentially a copy of the forgejo file
with the relevant type names and the public instance URL updated.

Release Notes:

- Added Support for Forgejo and Gitea avatars in git blame
2025-11-05 22:53:28 -05:00
Danilo Leal
f824e93eeb docs: Remove non-existing keybinding in /visual-customization (#42053)
Release Notes:

- N/A
2025-11-06 00:38:39 -03:00
Danilo Leal
e71bc4821c docs: Add section about agent servers in /external-agents (#42052)
Release Notes:

- N/A
2025-11-06 00:38:33 -03:00
Jason Lee
64c8c19e1b gpui: Impl Default for TextRun (#41084)
Release Notes:

- N/A

When I was implementing Input, I often used `TextRun`, but `background`,
`underline` and `strikethrough` were often not used.

So this change simplifies that.
2025-11-06 01:50:23 +00:00
Richard Feldman
622d626a29 Add agent-servers.md (#41609)
Release Notes:

- N/A

---------

Co-authored-by: Danilo Leal <67129314+danilo-leal@users.noreply.github.com>
2025-11-06 01:35:56 +00:00
Petros Amoiridis
714481073d Fix macOS new window stacking (#38683)
Closes #36206 

Disclaimer: I did use AI for help to end up with this proposed solution.
😅

## Observed behavior of native apps on macOS (like Safari)

I first did some quick research on how Safari behaves on macOS, and
here's what I found:

1. Safari seems to position new windows with an offset based on the
currently active window
2. It keeps opening new windows with an offset until the new window
cannot fit the display bounds horizontally, vertically or both.
3. When it cannot fit horizontally, the new window opens at x=0
(y=active window's y)
4. When it cannot fit vertically, the new window opens at y=0 (x=active
window's x)
5. When it cannot fit both horizontally and vertically, the new window
opens at x=0 and y=0 (top left).
6. At any moment if I activate a different Safari window, the next new
window is offset off of that
7. If I resize the active window and open a new window, the new window
has the same size as the active window

So, I implemented the changes based on those observations.
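A sketch of that rule as a pure function, using plain floats rather than gpui geometry types; the exact offset and bounds handling in the PR may differ:

```rust
struct Rect {
    x: f32,
    y: f32,
    w: f32,
    h: f32,
}

// Compute the origin of a new window cascaded off the active one, following
// the observed Safari behavior: offset while it fits, otherwise snap the
// overflowing axis (or both) back to 0.
fn next_window_origin(active: &Rect, display: &Rect, offset: f32) -> (f32, f32) {
    let fits_horizontally = active.x + offset + active.w <= display.x + display.w;
    let fits_vertically = active.y + offset + active.h <= display.y + display.h;
    match (fits_horizontally, fits_vertically) {
        (true, true) => (active.x + offset, active.y + offset),
        (false, true) => (0.0, active.y),
        (true, false) => (active.x, 0.0),
        (false, false) => (0.0, 0.0),
    }
}

fn main() {
    let display = Rect { x: 0.0, y: 0.0, w: 1920.0, h: 1080.0 };
    let active = Rect { x: 1400.0, y: 100.0, w: 600.0, h: 400.0 };
    // The new window would overflow horizontally, so it opens at x = 0.
    assert_eq!(next_window_origin(&active, &display, 24.0), (0.0, 100.0));
    println!("ok");
}
```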

I am not sure if touching `gpui/src/window.rs` is the way to go. I am
open to feedback and direction here.

I am also not sure if making my changes platform-specific (macOS) is
the right thing to do. I reckoned that Linux and Windows have different
default behaviors, and the original issue mentioned macOS. But,
likewise, I am open to taking a different approach.

## Tests

I haven't included tests for this change, as it seems a bit difficult to
properly test other than by doing a manual integration test. But if you
want them for such a change, I'm happy to try including them.

## Alternative approach

I also did some research on macOS native APIs that we could use instead
of trying to make the calculations ourselves, and I found
`NSWindow.cascadeTopLeftFromPoint` which seems to be doing exactly what
we want, and more. It probably takes more things into consideration and
thus it is more robust. We could go down that road, and add it to
`gpui/src/platform/mac/window.rs` and then use it for new window
creation. Again, if that's what you would do yourselves, let me know and
I can either change the implementation here, or open a new pull request
and let you decide which one you would like to pursue.

## Video showing the behavior


https://github.com/user-attachments/assets/f802a864-7504-47ee-8c6b-8d9b55474899

🙇‍♂️

Release Notes:

- Improved macOS new window stacking
2025-11-06 01:22:49 +00:00
Sean Timm
eccdfed32b gpui: Convert macOS clipboard file URLs to paths for paste (#36848)
- On macOS, pasting now inserts the actual file path when the clipboard
contains a file URL (public.file-url/public.url); see the sketch after
this list
- Terminal paste remains text-only; no temp files or data URLs are
created. If only raw image bytes exist on the clipboard, paste is a
no-op.
- Scope: macOS only; no dependency changes.
- Added a test (test_file_url_converts_to_path) that verifies URL→path
conversion using a unique pasteboard.
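A minimal sketch of the URL-to-path conversion using the `url` crate (not gpui's pasteboard code):

```rust
use std::path::PathBuf;
use url::Url;

// Convert a file:// URL from the clipboard into a filesystem path; returns
// None for non-file URLs so plain text paste is left untouched.
fn file_url_to_path(clipboard_text: &str) -> Option<PathBuf> {
    let url = Url::parse(clipboard_text).ok()?;
    if url.scheme() == "file" {
        // to_file_path() also percent-decodes path components.
        url.to_file_path().ok()
    } else {
        None
    }
}

fn main() {
    let pasted = "file:///Users/me/Pictures/screenshot%201.png";
    // Prints: Some("/Users/me/Pictures/screenshot 1.png")
    println!("{:?}", file_url_to_path(pasted));
}
```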

Release Notes:

- Improved pasting on macOS: now inserts the actual file path when the
clipboard contains a file URL (enables image paste support for Claude
Code)
2025-11-06 00:35:52 +00:00
Cyandev
2664596a34 gpui: Fix incorrect handling of Function key modifier on macOS (#38518)
On macOS, the Function key is reserved for system use and should not be
used in application code.

This commit updates keystroke matching and key event handling to ignore
the Function key modifier while users are typing or pressing
keybindings.

For some keyboards with a compact layout (like my 65% keyboard), there
is no separate backtick key; it shares the same physical key as Esc. To
input a backtick, users may press `Fn-Esc`. However, macOS will still
deliver events with the Fn key modifier to applications. The Cocoa
framework handles this correctly, typically ignoring the Fn modifier
directly. GPUI should follow the same rule; otherwise, the backtick key
on those keyboards won't work.

Release Notes:

- Fixed a bug where typing fn-\` on macOS would not insert a `.
2025-11-05 23:19:32 +00:00
Richard Feldman
23f2fb6089 Run ACP login from same cwd as agent server (#42038)
This makes it possible to do login via things like `cmd: "node", args:
["my-node-file.js", "login"]`

Also, that command will now use Zed's managed `node` instance.

Release Notes:

- ACP extensions can now run terminal login commands using relative
paths
2025-11-05 18:17:50 -05:00
Julia Ryan
fb2c2c55dc Fix windows crash handler (#42039)
Closes #41471

We were killing the crash handler when it received a second copy of any
of the messages, but this GPU specs one is sent on each new window
rather than once at startup. We could gate the sending to only happen
once, but it's simpler to just allow multiple gpu specs messages.

Release Notes:

- N/A
2025-11-05 22:34:05 +00:00
Rémi Kalbe
8315fde1ff Fix LSP spawning by resetting exception ports in child processes (#40716)
## Summary

Fixes #36754

This PR fixes an issue where LSPs fail to spawn after the crash handler
is initialized.

## Problem

After PR #35263 added minidump crash reporting, some users experienced
LSP spawn failures. The issue manifests as:
- LSPs fail to spawn with no clear error messages
- The problem only occurs after crash handler initialization
- LSPs work when a debugger is attached, revealing a timing issue

### Root Cause

The crash handler installs Mach exception ports for minidump generation.
Due to a timing issue, child processes inherit these exception ports
before they're fully stabilized, which can block child process spawning.

## Solution

Reset exception ports in child processes using the `pre_exec()` hook,
which runs after `fork()` but before `exec()`. This prevents children
from inheriting the parent's crash handler exception ports.

### Implementation

- Adds macOS-specific implementation of `new_smol_command()` that resets
exception ports before exec
- Calls `task_set_exception_ports` to reset all exception ports to
`MACH_PORT_NULL`
- Graceful error handling: logs warnings but doesn't fail process
spawning if port reset fails
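
A rough sketch of what such a `pre_exec()` hook could look like using the `mach2` crate; the helper name, exact constant paths, and error handling are illustrative assumptions rather than the PR's exact code:

```rs
use std::os::unix::process::CommandExt as _;
use std::process::Command;

// Hypothetical helper: make the child reset its Mach exception ports right
// after fork() and before exec(), so it doesn't inherit the parent's crash
// handler ports.
#[cfg(target_os = "macos")]
fn reset_exception_ports_on_spawn(command: &mut Command) {
    unsafe {
        command.pre_exec(|| {
            let kr = mach2::task::task_set_exception_ports(
                mach2::traps::mach_task_self(),
                mach2::exception_types::EXC_MASK_ALL,
                mach2::port::MACH_PORT_NULL,
                mach2::exception_types::EXCEPTION_DEFAULT as _,
                mach2::thread_status::THREAD_STATE_NONE,
            );
            if kr != mach2::kern_return::KERN_SUCCESS {
                // Log-and-continue: a failed reset shouldn't block spawning.
                eprintln!("warning: task_set_exception_ports failed: {kr}");
            }
            Ok(())
        });
    }
}
```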

Release Notes:

- Fixed LSPs failing to spawn on some macOS systems

---------

Co-authored-by: Julia Ryan <juliaryan3.14@gmail.com>
2025-11-05 22:05:34 +00:00
John Tur
fc87440682 Update Windows docs (#41423)
- Document Arm64 support
- Document minimum Windows version requirements

Release Notes:

- N/A
2025-11-05 21:23:48 +00:00
Kirill Bulatov
c996eadaf5 Update editor data only after real scroll reports (#42035)
Release Notes:

- N/A
2025-11-05 21:22:29 +00:00
Danilo Leal
e8c6c1ba04 agent_ui: Fix how icons from external agents are displayed (#42034)
Release Notes:

- N/A
2025-11-05 21:14:16 +00:00
versecafe
b8364d7c33 node: Move managed runtime to v24 LTS (#41956)
Release Notes:

- Moved managed Node runtime to v24 LTS
2025-11-05 14:10:42 -07:00
John Tur
7c23ef89ec Fix corrupted characters being inserted when Alt is pressed (#42033)
The Alt+Numpad buffer that's maintained by the input stack is getting
corrupted, leading to garbage characters being inserted on keystrokes
like Alt+Up. This disables the automatic handling of Alt+Numpad for now,
until the cause of the corruption is understood. Alt+Numpad input did
not work anyway, so this does not regress anything.

Release Notes:

- windows: Fixed corrupted characters being inserted when Alt is pressed
(preview only)
2025-11-05 21:03:36 +00:00
Matt Miller
2f463370cc Refactor buffer headers to collapse on click (#42021)
Release Notes:
- Updated how clicking on multi-buffer headers works to provide better
control and prevent unexpected navigation:
  - Clicking the header now collapses/expands the file section instead of
    opening the file.
  - Opening files can be done by clicking the filename or the "Open file"
    button on the right side of the header.
  - Existing shortcuts continue to work: use the left chevron to collapse,
    or use your keyboard shortcut to jump to the file.

**Demo:**

https://github.com/user-attachments/assets/dca9ccc5-bd98-416c-97af-43b4e4b2f903
2025-11-05 14:59:50 -06:00
Danilo Leal
feed34cafe gpui: Add support for rendering SVG from external files (#42024)
Release Notes:

- N/A

---------

Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
2025-11-05 16:12:30 -03:00
Joseph T. Lyons
4724aa5cb8 Bump Zed to v0.213 (#42018)
Release Notes:

- N/A
2025-11-05 17:31:18 +00:00
Cameron Mcloughlin
366a5db2c0 collab_ui: Show parents when searching channels (#42005)
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-11-05 16:51:10 +00:00
Danilo Leal
81e87c4cd6 settings ui: Fix divider in items that doesn't have subfields (#42016)
Release Notes:

- N/A

Co-authored-by: cameron <cameron.studdstreet@gmail.com>
2025-11-05 16:49:19 +00:00
Andrew Farkas
b8ba663c20 Revert "Refactor completions" (#42014)
Reverts zed-industries/zed#41939
2025-11-05 16:48:28 +00:00
Anthony Eid
27fb1098fa Revert "editor: Add action to move between snippet tabstop positions" (#42008)
Reverts zed-industries/zed#41466

The reverted PR added the "in_snippet" context even when no completion
menu was visible, causing some actions to not be triggered.

Release Notes:

- N/A
2025-11-05 11:16:39 -05:00
Danilo Leal
0f5a63a9b0 agent_ui: Make "waiting confirmation" state more apparent (#41998)
This PR changes the loading/generating indicator in the "waiting for
tool call confirmation" state so that it's a bit more visible and
discernible as needing your attention, as opposed to the regular
generating state.

<img width="400" alt="Screenshot 2025-11-05 at 10  46@2x"
src="https://github.com/user-attachments/assets/88adbf97-20fb-49c4-9c77-b0a3a22aa14e"
/>

Release Notes:

- agent: Improved the visibility of the "waiting for confirmation" state
so that you know more quickly that the agent is waiting for you to act.
2025-11-05 11:45:44 -03:00
Danilo Leal
c8ada5b1ae agent_ui: Reduce label repetitiveness on new thread menu (#42001)
Mostly just removing "thread" from all external agent menu items; I
think we can do without it, and the menu already reads much cleaner.

Release Notes:

- N/A
2025-11-05 11:45:31 -03:00
Techy
27a18843d4 open_ai: Make the deltas optional (#39142)
I am using an Azure OpenAI instance since that is what is provided at
work, and with how they have it set up, not all responses contain a
delta, which led to errors and truncated responses. This is related to
how they filter potentially offensive requests and responses. I don't
believe this filter was made in-house; instead, I believe it is provided
by Microsoft/Azure, so I suspect this fix may help other users.
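
A minimal sketch of the shape of this fix, with assumed struct and field names (not the actual `open_ai` crate types): making the delta optional lets deserialization succeed for streaming chunks that omit it.

```rs
use serde::Deserialize;

// Assumed, simplified types for illustration only.
#[derive(Deserialize)]
struct StreamChoice {
    // Previously a required field; some Azure responses omit it entirely.
    #[serde(default)]
    delta: Option<ChoiceDelta>,
    finish_reason: Option<String>,
}

#[derive(Deserialize)]
struct ChoiceDelta {
    content: Option<String>,
}
```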

Release Notes:

- N/A

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-11-05 13:47:14 +01:00
Smit Barmase
2bc1d60c52 remote: Fix open terminal fails when $SHELL is not set (#41990)
Closes #41644

Release Notes:

- Fixed an issue where spawning a terminal failed on systems such as
Alpine where `$SHELL` is not set.
2025-11-05 18:15:15 +05:30
Karl-Erik Enkelmann
17933f1222 Update documentation on Java support (#41758)
This brings the documentation on Java in line with the much-changed
reality of the Java extension.
Note that the correctness of this is contingent on
https://github.com/zed-industries/extensions/pull/3745 being merged.

Release Notes:

- N/A
2025-11-05 12:24:29 +01:00
Vitaly Slobodin
cd87307289 language: Fix language detection for injected syntax layers (#41111)
Closes #40632

**TL;DR:** The `wrap selections in tag` action was unavailable in ERB
files, even when the cursor was positioned in HTML content (outside of
Ruby code blocks). This happened because `syntax_layer_at()` incorrectly
returned the Ruby language for positions that were actually in HTML.
**NOTE:** I am not familiar with that part of Zed so it could be that
the fix here is completely incorrect.

Previously, `syntax_layer_at` incorrectly reported injected languages
(e.g., Ruby in ERB files) even when the cursor was in the base language
content (HTML). This broke actions like `wrap selections in tag` that
depend on language-specific configuration.

The issue had two parts:
1. Missing start boundary check: The filter only checked if a layer's
end was after the cursor (`end_byte() > offset`), not if it started
before, causing layers outside the cursor position to be included. See
the `BEFORE` video: when I click on the HTML part it reports `Ruby`
language instead of `HTML`.
2. Wrong boundary reference for injections: For injected layers with
`included_sub_ranges` (like Ruby code blocks in ERB), checking the root
node boundaries returned the entire file range instead of the actual
injection ranges.

This fix:
- Adds the containment check using half-open range semantics [start,
end) for root node boundaries. That ensures proper reporting of the
detected language when a cursor (`|`) is located right after the
injection:

   ```
   <body>
    <%= yield %>|
  </body>
   ```

- Checks `included_sub_ranges` for injected layers to determine if the
cursor is actually within an injection
- Falls back to root node boundaries for base layers without sub-ranges.
This is the original behavior.
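
A small sketch of the containment rule described above, with simplified types (the real check operates on syntax layers in the `language` crate):

```rs
use std::ops::Range;

// Simplified illustration: a layer matches the cursor offset only if the offset
// falls inside one of its injection sub-ranges (when present), or inside its
// root range otherwise, using half-open [start, end) semantics.
fn layer_contains_offset(root: &Range<usize>, sub_ranges: &[Range<usize>], offset: usize) -> bool {
    if sub_ranges.is_empty() {
        root.start <= offset && offset < root.end
    } else {
        sub_ranges
            .iter()
            .any(|range| range.start <= offset && offset < range.end)
    }
}
```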

Fixes ERB language support so that actions are available based on the
cursor's actual language context. I think this also applies to some
other template languages like HEEX (Phoenix) and `*.pug`. In the short
videos below, you can see me navigate through the ERB template while the
terminal on the right prints the detected language, using the following
patch:

```diff
diff --git i/crates/editor/src/editor.rs w/crates/editor/src/editor.rs
index 15af61f5d2..54a8e0ae37 100644
--- i/crates/editor/src/editor.rs
+++ w/crates/editor/src/editor.rs
@@ -10671,6 +10671,7 @@ impl Editor {
         for selection in self.selections.disjoint_anchors_arc().iter() {
             if snapshot
                 .language_at(selection.start)
+                .inspect(|language| println!("Detected language: {:?}", language))
                 .and_then(|lang| lang.config().wrap_characters.as_ref())
                 .is_some()
             {
```

**Before:**


https://github.com/user-attachments/assets/3f8358f4-d343-462e-b6b1-3f1f2e8c533d


**After:**



https://github.com/user-attachments/assets/c1b9f065-1b44-45a2-8a24-76b7d812130d



Here is the ERB template:

```
<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <style>
      /* Email styles need to be inline */
    </style>
  </head>
  <body>
    <%= yield %>
  </body>
</html>
```

Release Notes:

- N/A
2025-11-05 12:16:28 +01:00
Vitaly Slobodin
11b29d693f ruby: Add note about enabling Ruby LSP for ERB files (#41851)
Hi, this is a follow-up change for
https://github.com/zed-industries/zed/pull/41754. I think it's important
to keep existing setups working, so this adds notes to the Ruby
extension doc about enabling Ruby LSP for ERB files as well. Thanks!

Release Notes:

- N/A
2025-11-05 11:00:12 +01:00
Lukas Wirth
c061698229 project: Fetch latest lsp data in deduplicate_range_based_lsp_requests (#41971)
Fixes ZED-2MK

Release Notes:

- Fixed a panic in inlay hints
2025-11-05 08:30:22 +00:00
John Tur
b4f7af066e Work around codegen bug with GetKeyboardState (#41970)
Works around an issue, which can be reproduced in the following program:
```rs
use windows::Win32::UI::Input::KeyboardAndMouse::{GetKeyboardState, VK_CONTROL};

fn main() {
    let mut keyboard_state = [0u8; 256];
    unsafe {
        GetKeyboardState(&mut keyboard_state).unwrap();
    }

    let ctrl_down = (keyboard_state[VK_CONTROL.0 as usize] & 0x80) != 0;
    println!("Is Ctrl down: {ctrl_down}");
}
```

In debug mode, this program prints the correct answer. In release mode,
it always prints false. The optimizer appears to think that
`keyboard_state` isn't mutated and remains zeroed, and folds the
`modifier_down` comparisons to `false`.
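
One possible workaround shape (not necessarily what this change does) is to stop the optimizer from assuming the buffer stays zeroed, for example by passing it through `std::hint::black_box` after the FFI call:

```rs
use windows::Win32::UI::Input::KeyboardAndMouse::{GetKeyboardState, VK_CONTROL};

fn main() {
    let mut keyboard_state = [0u8; 256];
    unsafe {
        GetKeyboardState(&mut keyboard_state).unwrap();
    }

    // black_box acts as an optimization barrier: the compiler can no longer
    // assume the array was left untouched, so the read below isn't folded away.
    let keyboard_state = std::hint::black_box(keyboard_state);

    let ctrl_down = (keyboard_state[VK_CONTROL.0 as usize] & 0x80) != 0;
    println!("Is Ctrl down: {ctrl_down}");
}
```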

Release Notes:

- N/A
2025-11-05 08:06:14 +00:00
Smit Barmase
c83621fa1f editor: Fix setting multi_cursor_modifier opens implementation in new pane instead of new tab (#41963)
Closes #41014

Release Notes:

- Fixed an issue where `multi_cursor_modifier` set to `cmd_or_ctrl`
opened the implementation in a new pane instead of a new tab.
2025-11-05 11:31:56 +05:30
387 changed files with 11677 additions and 7454 deletions

View File

@@ -39,3 +39,21 @@ body:
Output of "zed: copy system specs into clipboard"
validations:
required: true
- type: textarea
attributes:
label: If applicable, attach your `Zed.log` file to this issue.
description: |
From the command palette, run `zed: open log` to see the last 1000 lines.
Or run `zed: reveal log in file manager` to reveal the log file itself.
value: |
<details><summary>Zed.log</summary>
<!-- Paste your log inside the code block. -->
```log
```
</details>
validations:
required: false

View File

@@ -33,3 +33,21 @@ body:
Output of "zed: copy system specs into clipboard"
validations:
required: true
- type: textarea
attributes:
label: If applicable, attach your `Zed.log` file to this issue.
description: |
From the command palette, run `zed: open log` to see the last 1000 lines.
Or run `zed: reveal log in file manager` to reveal the log file itself.
value: |
<details><summary>Zed.log</summary>
<!-- Paste your log inside the code block. -->
```log
```
</details>
validations:
required: false

View File

@@ -33,3 +33,21 @@ body:
Output of "zed: copy system specs into clipboard"
validations:
required: true
- type: textarea
attributes:
label: If applicable, attach your `Zed.log` file to this issue.
description: |
From the command palette, run `zed: open log` to see the last 1000 lines.
Or run `zed: reveal log in file manager` to reveal the log file itself.
value: |
<details><summary>Zed.log</summary>
<!-- Paste your log inside the code block. -->
```log
```
</details>
validations:
required: false

View File

@@ -33,3 +33,21 @@ body:
Output of "zed: copy system specs into clipboard"
validations:
required: true
- type: textarea
attributes:
label: If applicable, attach your `Zed.log` file to this issue.
description: |
From the command palette, run `zed: open log` to see the last 1000 lines.
Or run `zed: reveal log in file manager` to reveal the log file itself.
value: |
<details><summary>Zed.log</summary>
<!-- Paste your log inside the code block. -->
```log
```
</details>
validations:
required: false

View File

@@ -56,3 +56,20 @@ body:
Output of "zed: copy system specs into clipboard"
validations:
required: true
- type: textarea
attributes:
label: If applicable, attach your `Zed.log` file to this issue.
description: |
From the command palette, run `zed: open log` to see the last 1000 lines.
Or run `zed: reveal log in file manager` to reveal the log file itself.
value: |
<details><summary>Zed.log</summary>
<!-- Paste your log inside the code block. -->
```log
```
</details>
validations:
required: false

.github/workflows/after_release.yml vendored Normal file
View File

@@ -0,0 +1,88 @@
# Generated from xtask::workflows::after_release
# Rebuild with `cargo xtask workflows`.
name: after_release
on:
release:
types:
- published
jobs:
rebuild_releases_page:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: after_release::rebuild_releases_page::refresh_cloud_releases
run: curl -fX POST https://cloud.zed.dev/releases/refresh?expect_tag=${{ github.event.release.tag_name }}
shell: bash -euxo pipefail {0}
- name: after_release::rebuild_releases_page::redeploy_zed_dev
run: npm exec --yes -- vercel@37 --token="$VERCEL_TOKEN" --scope zed-industries redeploy https://zed.dev
shell: bash -euxo pipefail {0}
env:
VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
post_to_discord:
needs:
- rebuild_releases_page
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- id: get-release-url
name: after_release::post_to_discord::get_release_url
run: |
if [ "${{ github.event.release.prerelease }}" == "true" ]; then
URL="https://zed.dev/releases/preview"
else
URL="https://zed.dev/releases/stable"
fi
echo "URL=$URL" >> "$GITHUB_OUTPUT"
shell: bash -euxo pipefail {0}
- id: get-content
name: after_release::post_to_discord::get_content
uses: 2428392/gh-truncate-string-action@b3ff790d21cf42af3ca7579146eedb93c8fb0757
with:
stringToTruncate: |
📣 Zed [${{ github.event.release.tag_name }}](<${{ steps.get-release-url.outputs.URL }}>) was just released!
${{ github.event.release.body }}
maxLength: 2000
truncationSymbol: '...'
- name: after_release::post_to_discord::discord_webhook_action
uses: tsickert/discord-webhook@c840d45a03a323fbc3f7507ac7769dbd91bfb164
with:
webhook-url: ${{ secrets.DISCORD_WEBHOOK_RELEASE_NOTES }}
content: ${{ steps.get-content.outputs.string }}
publish_winget:
runs-on: self-32vcpu-windows-2022
steps:
- id: set-package-name
name: after_release::publish_winget::set_package_name
run: |
if [ "${{ github.event.release.prerelease }}" == "true" ]; then
PACKAGE_NAME=ZedIndustries.Zed.Preview
else
PACKAGE_NAME=ZedIndustries.Zed
fi
echo "PACKAGE_NAME=$PACKAGE_NAME" >> "$GITHUB_OUTPUT"
shell: bash -euxo pipefail {0}
- name: after_release::publish_winget::winget_releaser
uses: vedantmgoyal9/winget-releaser@19e706d4c9121098010096f9c495a70a7518b30f
with:
identifier: ${{ steps.set-package-name.outputs.PACKAGE_NAME }}
max-versions-to-keep: 5
token: ${{ secrets.WINGET_TOKEN }}
create_sentry_release:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: release::create_sentry_release
uses: getsentry/action-release@526942b68292201ac6bbb99b9a0747d4abee354c
with:
environment: production
env:
SENTRY_ORG: zed-dev
SENTRY_PROJECT: zed
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}

View File

@@ -1,93 +0,0 @@
# IF YOU UPDATE THE NAME OF ANY GITHUB SECRET, YOU MUST CHERRY PICK THE COMMIT
# TO BOTH STABLE AND PREVIEW CHANNELS
name: Release Actions
on:
release:
types: [published]
jobs:
discord_release:
if: github.repository_owner == 'zed-industries'
runs-on: ubuntu-latest
steps:
- name: Get release URL
id: get-release-url
run: |
if [ "${{ github.event.release.prerelease }}" == "true" ]; then
URL="https://zed.dev/releases/preview"
else
URL="https://zed.dev/releases/stable"
fi
echo "URL=$URL" >> "$GITHUB_OUTPUT"
- name: Get content
uses: 2428392/gh-truncate-string-action@b3ff790d21cf42af3ca7579146eedb93c8fb0757 # v1.4.1
id: get-content
with:
stringToTruncate: |
📣 Zed [${{ github.event.release.tag_name }}](<${{ steps.get-release-url.outputs.URL }}>) was just released!
${{ github.event.release.body }}
maxLength: 2000
truncationSymbol: "..."
- name: Discord Webhook Action
uses: tsickert/discord-webhook@c840d45a03a323fbc3f7507ac7769dbd91bfb164 # v5.3.0
with:
webhook-url: ${{ secrets.DISCORD_WEBHOOK_RELEASE_NOTES }}
content: ${{ steps.get-content.outputs.string }}
publish-winget:
runs-on:
- ubuntu-latest
steps:
- name: Set Package Name
id: set-package-name
run: |
if [ "${{ github.event.release.prerelease }}" == "true" ]; then
PACKAGE_NAME=ZedIndustries.Zed.Preview
else
PACKAGE_NAME=ZedIndustries.Zed
fi
echo "PACKAGE_NAME=$PACKAGE_NAME" >> "$GITHUB_OUTPUT"
- uses: vedantmgoyal9/winget-releaser@19e706d4c9121098010096f9c495a70a7518b30f # v2
with:
identifier: ${{ steps.set-package-name.outputs.PACKAGE_NAME }}
max-versions-to-keep: 5
token: ${{ secrets.WINGET_TOKEN }}
send_release_notes_email:
if: false && github.repository_owner == 'zed-industries' && !github.event.release.prerelease
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
fetch-depth: 0
- name: Check if release was promoted from preview
id: check-promotion-from-preview
run: |
VERSION="${{ github.event.release.tag_name }}"
PREVIEW_TAG="${VERSION}-pre"
if git rev-parse "$PREVIEW_TAG" > /dev/null 2>&1; then
echo "was_promoted_from_preview=true" >> "$GITHUB_OUTPUT"
else
echo "was_promoted_from_preview=false" >> "$GITHUB_OUTPUT"
fi
- name: Send release notes email
if: steps.check-promotion-from-preview.outputs.was_promoted_from_preview == 'true'
run: |
TAG="${{ github.event.release.tag_name }}"
cat << 'EOF' > release_body.txt
${{ github.event.release.body }}
EOF
jq -n --arg tag "$TAG" --rawfile body release_body.txt '{version: $tag, markdown_body: $body}' \
> release_data.json
curl -X POST "https://zed.dev/api/send_release_notes_email" \
-H "Authorization: Bearer ${{ secrets.RELEASE_NOTES_API_TOKEN }}" \
-H "Content-Type: application/json" \
-d @release_data.json

View File

@@ -475,14 +475,6 @@ jobs:
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: release::create_sentry_release
uses: getsentry/action-release@526942b68292201ac6bbb99b9a0747d4abee354c
with:
environment: production
env:
SENTRY_ORG: zed-dev
SENTRY_PROJECT: zed
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true

View File

@@ -285,40 +285,6 @@ jobs:
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
timeout-minutes: 60
check_postgres_and_protobuf_migrations:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_tests == 'true'
runs-on: self-mini-macos
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- name: run_tests::check_postgres_and_protobuf_migrations::remove_untracked_files
run: git clean -df
shell: bash -euxo pipefail {0}
- name: run_tests::check_postgres_and_protobuf_migrations::ensure_fresh_merge
run: |
if [ -z "$GITHUB_BASE_REF" ];
then
echo "BUF_BASE_BRANCH=$(git merge-base origin/main HEAD)" >> "$GITHUB_ENV"
else
git checkout -B temp
git merge -q "origin/$GITHUB_BASE_REF" -m "merge main into temp"
echo "BUF_BASE_BRANCH=$GITHUB_BASE_REF" >> "$GITHUB_ENV"
fi
shell: bash -euxo pipefail {0}
- name: run_tests::check_postgres_and_protobuf_migrations::bufbuild_setup_action
uses: bufbuild/buf-setup-action@v1
with:
version: v1.29.0
- name: run_tests::check_postgres_and_protobuf_migrations::bufbuild_breaking_action
uses: bufbuild/buf-breaking-action@v1
with:
input: crates/proto/proto/
against: https://github.com/${GITHUB_REPOSITORY}.git#branch=${BUF_BASE_BRANCH},subdir=crates/proto/proto/
timeout-minutes: 60
check_dependencies:
needs:
- orchestrate
@@ -518,6 +484,40 @@ jobs:
shell: bash -euxo pipefail {0}
timeout-minutes: 60
continue-on-error: true
check_postgres_and_protobuf_migrations:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_tests == 'true'
runs-on: self-mini-macos
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- name: run_tests::check_postgres_and_protobuf_migrations::remove_untracked_files
run: git clean -df
shell: bash -euxo pipefail {0}
- name: run_tests::check_postgres_and_protobuf_migrations::ensure_fresh_merge
run: |
if [ -z "$GITHUB_BASE_REF" ];
then
echo "BUF_BASE_BRANCH=$(git merge-base origin/main HEAD)" >> "$GITHUB_ENV"
else
git checkout -B temp
git merge -q "origin/$GITHUB_BASE_REF" -m "merge main into temp"
echo "BUF_BASE_BRANCH=$GITHUB_BASE_REF" >> "$GITHUB_ENV"
fi
shell: bash -euxo pipefail {0}
- name: run_tests::check_postgres_and_protobuf_migrations::bufbuild_setup_action
uses: bufbuild/buf-setup-action@v1
with:
version: v1.29.0
- name: run_tests::check_postgres_and_protobuf_migrations::bufbuild_breaking_action
uses: bufbuild/buf-breaking-action@v1
with:
input: crates/proto/proto/
against: https://github.com/${GITHUB_REPOSITORY}.git#branch=${BUF_BASE_BRANCH},subdir=crates/proto/proto/
timeout-minutes: 60
tests_pass:
needs:
- orchestrate
@@ -527,7 +527,6 @@ jobs:
- run_tests_mac
- doctests
- check_workspace_binaries
- check_postgres_and_protobuf_migrations
- check_dependencies
- check_docs
- check_licenses
@@ -554,7 +553,6 @@ jobs:
check_result "run_tests_mac" "${{ needs.run_tests_mac.result }}"
check_result "doctests" "${{ needs.doctests.result }}"
check_result "check_workspace_binaries" "${{ needs.check_workspace_binaries.result }}"
check_result "check_postgres_and_protobuf_migrations" "${{ needs.check_postgres_and_protobuf_migrations.result }}"
check_result "check_dependencies" "${{ needs.check_dependencies.result }}"
check_result "check_docs" "${{ needs.check_docs.result }}"
check_result "check_licenses" "${{ needs.check_licenses.result }}"

Cargo.lock generated
View File

@@ -32,6 +32,7 @@ dependencies = [
"settings",
"smol",
"task",
"telemetry",
"tempfile",
"terminal",
"ui",
@@ -39,6 +40,7 @@ dependencies = [
"util",
"uuid",
"watch",
"zlog",
]
[[package]]
@@ -79,6 +81,7 @@ dependencies = [
"rand 0.9.2",
"serde_json",
"settings",
"telemetry",
"text",
"util",
"watch",
@@ -247,7 +250,6 @@ dependencies = [
"acp_tools",
"action_log",
"agent-client-protocol",
"agent_settings",
"anyhow",
"async-trait",
"client",
@@ -3198,6 +3200,7 @@ dependencies = [
"indoc",
"ordered-float 2.10.1",
"rustc-hash 2.1.1",
"schemars 1.0.4",
"serde",
"strum 0.27.2",
]
@@ -6404,7 +6407,7 @@ dependencies = [
"ignore",
"libc",
"log",
"notify 8.0.0",
"notify 8.2.0",
"objc",
"parking_lot",
"paths",
@@ -7091,7 +7094,6 @@ dependencies = [
"askpass",
"buffer_diff",
"call",
"chrono",
"cloud_llm_client",
"collections",
"command_palette_hooks",
@@ -8864,6 +8866,7 @@ dependencies = [
"open_router",
"parking_lot",
"proto",
"schemars 1.0.4",
"serde",
"serde_json",
"settings",
@@ -9028,6 +9031,7 @@ dependencies = [
"settings",
"smol",
"task",
"terminal",
"text",
"theme",
"toml 0.8.23",
@@ -9675,6 +9679,7 @@ dependencies = [
"settings",
"theme",
"ui",
"urlencoding",
"util",
"workspace",
]
@@ -10409,11 +10414,10 @@ dependencies = [
[[package]]
name = "notify"
version = "8.0.0"
source = "git+https://github.com/zed-industries/notify.git?rev=bbb9ea5ae52b253e095737847e367c30653a2e96#bbb9ea5ae52b253e095737847e367c30653a2e96"
version = "8.2.0"
source = "git+https://github.com/zed-industries/notify.git?rev=b4588b2e5aee68f4c0e100f140e808cbce7b1419#b4588b2e5aee68f4c0e100f140e808cbce7b1419"
dependencies = [
"bitflags 2.9.4",
"filetime",
"fsevent-sys 4.1.0",
"inotify 0.11.0",
"kqueue",
@@ -10422,7 +10426,7 @@ dependencies = [
"mio 1.1.0",
"notify-types",
"walkdir",
"windows-sys 0.59.0",
"windows-sys 0.60.2",
]
[[package]]
@@ -10439,7 +10443,7 @@ dependencies = [
[[package]]
name = "notify-types"
version = "2.0.0"
source = "git+https://github.com/zed-industries/notify.git?rev=bbb9ea5ae52b253e095737847e367c30653a2e96#bbb9ea5ae52b253e095737847e367c30653a2e96"
source = "git+https://github.com/zed-industries/notify.git?rev=b4588b2e5aee68f4c0e100f140e808cbce7b1419#b4588b2e5aee68f4c0e100f140e808cbce7b1419"
[[package]]
name = "now"
@@ -13971,6 +13975,7 @@ dependencies = [
"gpui",
"gpui_tokio",
"http_client",
"image",
"json_schema_store",
"language",
"language_extension",
@@ -16212,7 +16217,6 @@ dependencies = [
"log",
"menu",
"picker",
"project",
"reqwest_client",
"rust-embed",
"settings",
@@ -16222,7 +16226,6 @@ dependencies = [
"theme",
"title_bar",
"ui",
"workspace",
]
[[package]]
@@ -18618,6 +18621,7 @@ dependencies = [
"itertools 0.14.0",
"libc",
"log",
"mach2 0.5.0",
"nix 0.29.0",
"pretty_assertions",
"rand 0.9.2",
@@ -18807,7 +18811,6 @@ dependencies = [
name = "vim_mode_setting"
version = "0.1.0"
dependencies = [
"gpui",
"settings",
]
@@ -20951,6 +20954,7 @@ dependencies = [
"gh-workflow",
"indexmap 2.11.4",
"indoc",
"serde",
"toml 0.8.23",
"toml_edit 0.22.27",
]
@@ -21135,7 +21139,7 @@ dependencies = [
[[package]]
name = "zed"
version = "0.212.0"
version = "0.213.0"
dependencies = [
"acp_tools",
"activity_indicator",
@@ -21674,18 +21678,20 @@ dependencies = [
"language_model",
"log",
"lsp",
"open_ai",
"pretty_assertions",
"project",
"release_channel",
"schemars 1.0.4",
"serde",
"serde_json",
"settings",
"smol",
"thiserror 2.0.17",
"util",
"uuid",
"workspace",
"worktree",
"zlog",
]
[[package]]
@@ -21697,6 +21703,7 @@ dependencies = [
"clap",
"client",
"cloud_llm_client",
"cloud_zeta2_prompt",
"collections",
"edit_prediction_context",
"editor",
@@ -21710,7 +21717,6 @@ dependencies = [
"ordered-float 2.10.1",
"pretty_assertions",
"project",
"regex-syntax",
"serde",
"serde_json",
"settings",

View File

@@ -663,6 +663,7 @@ time = { version = "0.3", features = [
"serde",
"serde-well-known",
"formatting",
"local-offset",
] }
tiny_http = "0.8"
tokio = { version = "1" }
@@ -772,8 +773,8 @@ features = [
]
[patch.crates-io]
notify = { git = "https://github.com/zed-industries/notify.git", rev = "bbb9ea5ae52b253e095737847e367c30653a2e96" }
notify-types = { git = "https://github.com/zed-industries/notify.git", rev = "bbb9ea5ae52b253e095737847e367c30653a2e96" }
notify = { git = "https://github.com/zed-industries/notify.git", rev = "b4588b2e5aee68f4c0e100f140e808cbce7b1419" }
notify-types = { git = "https://github.com/zed-industries/notify.git", rev = "b4588b2e5aee68f4c0e100f140e808cbce7b1419" }
windows-capture = { git = "https://github.com/zed-industries/windows-capture.git", rev = "f0d6c1b6691db75461b732f6d5ff56eed002eeb9" }
[profile.dev]
@@ -839,7 +840,7 @@ ui_input = { codegen-units = 1 }
zed_actions = { codegen-units = 1 }
[profile.release]
debug = "limited"
debug = "full"
lto = "thin"
codegen-units = 1

View File

@@ -736,11 +736,17 @@
}
},
{
"context": "Editor && in_snippet",
"context": "Editor && in_snippet && has_next_tabstop && !showing_completions",
"use_key_equivalents": true,
"bindings": {
"alt-right": "editor::NextSnippetTabstop",
"alt-left": "editor::PreviousSnippetTabstop"
"tab": "editor::NextSnippetTabstop"
}
},
{
"context": "Editor && in_snippet && has_previous_tabstop && !showing_completions",
"use_key_equivalents": true,
"bindings": {
"shift-tab": "editor::PreviousSnippetTabstop"
}
},
// Bindings for accepting edit predictions

View File

@@ -806,11 +806,17 @@
}
},
{
"context": "Editor && in_snippet",
"context": "Editor && in_snippet && has_next_tabstop && !showing_completions",
"use_key_equivalents": true,
"bindings": {
"alt-right": "editor::NextSnippetTabstop",
"alt-left": "editor::PreviousSnippetTabstop"
"tab": "editor::NextSnippetTabstop"
}
},
{
"context": "Editor && in_snippet && has_previous_tabstop && !showing_completions",
"use_key_equivalents": true,
"bindings": {
"shift-tab": "editor::PreviousSnippetTabstop"
}
},
{

View File

@@ -740,11 +740,17 @@
}
},
{
"context": "Editor && in_snippet",
"context": "Editor && in_snippet && has_next_tabstop && !showing_completions",
"use_key_equivalents": true,
"bindings": {
"alt-right": "editor::NextSnippetTabstop",
"alt-left": "editor::PreviousSnippetTabstop"
"tab": "editor::NextSnippetTabstop"
}
},
{
"context": "Editor && in_snippet && has_previous_tabstop && !showing_completions",
"use_key_equivalents": true,
"bindings": {
"shift-tab": "editor::PreviousSnippetTabstop"
}
},
// Bindings for accepting edit predictions

View File

@@ -455,6 +455,7 @@
"<": "vim::Outdent",
"=": "vim::AutoIndent",
"d": "vim::HelixDelete",
"alt-d": "editor::Delete", // Delete selection, without yanking
"c": "vim::HelixSubstitute",
"alt-c": "vim::HelixSubstituteNoYank",

View File

@@ -1487,7 +1487,11 @@
// in your project's settings, rather than globally.
"directories": [".env", "env", ".venv", "venv"],
// Can also be `csh`, `fish`, `nushell` and `power_shell`
"activate_script": "default"
"activate_script": "default",
// Preferred Conda manager to use when activating Conda environments.
// Values: "auto", "conda", "mamba", "micromamba"
// Default: "auto"
"conda_manager": "auto"
}
},
"toolbar": {

View File

@@ -39,6 +39,7 @@ serde_json.workspace = true
settings.workspace = true
smol.workspace = true
task.workspace = true
telemetry.workspace = true
terminal.workspace = true
ui.workspace = true
url.workspace = true
@@ -56,3 +57,4 @@ rand.workspace = true
tempfile.workspace = true
util.workspace = true
settings.workspace = true
zlog.workspace = true

View File

@@ -15,7 +15,7 @@ use settings::Settings as _;
use task::{Shell, ShellBuilder};
pub use terminal::*;
use action_log::ActionLog;
use action_log::{ActionLog, ActionLogTelemetry};
use agent_client_protocol::{self as acp};
use anyhow::{Context as _, Result, anyhow};
use editor::Bias;
@@ -820,6 +820,15 @@ pub struct AcpThread {
pending_terminal_exit: HashMap<acp::TerminalId, acp::TerminalExitStatus>,
}
impl From<&AcpThread> for ActionLogTelemetry {
fn from(value: &AcpThread) -> Self {
Self {
agent_telemetry_id: value.connection().telemetry_id(),
session_id: value.session_id.0.clone(),
}
}
}
#[derive(Debug)]
pub enum AcpThreadEvent {
NewEntry,
@@ -1346,6 +1355,17 @@ impl AcpThread {
let path_style = self.project.read(cx).path_style(cx);
let id = update.id.clone();
let agent = self.connection().telemetry_id();
let session = self.session_id();
if let ToolCallStatus::Completed | ToolCallStatus::Failed = status {
let status = if matches!(status, ToolCallStatus::Completed) {
"completed"
} else {
"failed"
};
telemetry::event!("Agent Tool Call Completed", agent, session, status);
}
if let Some(ix) = self.index_for_tool_call(&id) {
let AgentThreadEntry::ToolCall(call) = &mut self.entries[ix] else {
unreachable!()
@@ -1869,6 +1889,7 @@ impl AcpThread {
return Task::ready(Err(anyhow!("not supported")));
};
let telemetry = ActionLogTelemetry::from(&*self);
cx.spawn(async move |this, cx| {
cx.update(|cx| truncate.run(id.clone(), cx))?.await?;
this.update(cx, |this, cx| {
@@ -1877,8 +1898,9 @@ impl AcpThread {
this.entries.truncate(ix);
cx.emit(AcpThreadEvent::EntriesRemoved(range));
}
this.action_log()
.update(cx, |action_log, cx| action_log.reject_all_edits(cx))
this.action_log().update(cx, |action_log, cx| {
action_log.reject_all_edits(Some(telemetry), cx)
})
})?
.await;
Ok(())
@@ -2355,8 +2377,6 @@ mod tests {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
Project::init_settings(cx);
language::init(cx);
});
}
@@ -3614,6 +3634,10 @@ mod tests {
}
impl AgentConnection for FakeAgentConnection {
fn telemetry_id(&self) -> &'static str {
"fake"
}
fn auth_methods(&self) -> &[acp::AuthMethod] {
&self.auth_methods
}

View File

@@ -20,6 +20,8 @@ impl UserMessageId {
}
pub trait AgentConnection {
fn telemetry_id(&self) -> &'static str;
fn new_thread(
self: Rc<Self>,
project: Entity<Project>,
@@ -106,9 +108,6 @@ pub trait AgentSessionSetTitle {
}
pub trait AgentTelemetry {
/// The name of the agent used for telemetry.
fn agent_name(&self) -> String;
/// A representation of the current thread state that can be serialized for
/// storage with telemetry events.
fn thread_data(
@@ -318,6 +317,10 @@ mod test_support {
}
impl AgentConnection for StubAgentConnection {
fn telemetry_id(&self) -> &'static str {
"stub"
}
fn auth_methods(&self) -> &[acp::AuthMethod] {
&[]
}

View File

@@ -20,6 +20,7 @@ futures.workspace = true
gpui.workspace = true
language.workspace = true
project.workspace = true
telemetry.workspace = true
text.workspace = true
util.workspace = true
watch.workspace = true

View File

@@ -3,7 +3,9 @@ use buffer_diff::BufferDiff;
use clock;
use collections::BTreeMap;
use futures::{FutureExt, StreamExt, channel::mpsc};
use gpui::{App, AppContext, AsyncApp, Context, Entity, Subscription, Task, WeakEntity};
use gpui::{
App, AppContext, AsyncApp, Context, Entity, SharedString, Subscription, Task, WeakEntity,
};
use language::{Anchor, Buffer, BufferEvent, DiskState, Point, ToPoint};
use project::{Project, ProjectItem, lsp_store::OpenLspBufferHandle};
use std::{cmp, ops::Range, sync::Arc};
@@ -31,71 +33,6 @@ impl ActionLog {
&self.project
}
pub fn latest_snapshot(&self, buffer: &Entity<Buffer>) -> Option<text::BufferSnapshot> {
Some(self.tracked_buffers.get(buffer)?.snapshot.clone())
}
/// Return a unified diff patch with user edits made since last read or notification
pub fn unnotified_user_edits(&self, cx: &Context<Self>) -> Option<String> {
let diffs = self
.tracked_buffers
.values()
.filter_map(|tracked| {
if !tracked.may_have_unnotified_user_edits {
return None;
}
let text_with_latest_user_edits = tracked.diff_base.to_string();
let text_with_last_seen_user_edits = tracked.last_seen_base.to_string();
if text_with_latest_user_edits == text_with_last_seen_user_edits {
return None;
}
let patch = language::unified_diff(
&text_with_last_seen_user_edits,
&text_with_latest_user_edits,
);
let buffer = tracked.buffer.clone();
let file_path = buffer
.read(cx)
.file()
.map(|file| {
let mut path = file.full_path(cx).to_string_lossy().into_owned();
if file.path_style(cx).is_windows() {
path = path.replace('\\', "/");
}
path
})
.unwrap_or_else(|| format!("buffer_{}", buffer.entity_id()));
let mut result = String::new();
result.push_str(&format!("--- a/{}\n", file_path));
result.push_str(&format!("+++ b/{}\n", file_path));
result.push_str(&patch);
Some(result)
})
.collect::<Vec<_>>();
if diffs.is_empty() {
return None;
}
let unified_diff = diffs.join("\n\n");
Some(unified_diff)
}
/// Return a unified diff patch with user edits made since last read/notification
/// and mark them as notified
pub fn flush_unnotified_user_edits(&mut self, cx: &Context<Self>) -> Option<String> {
let patch = self.unnotified_user_edits(cx);
self.tracked_buffers.values_mut().for_each(|tracked| {
tracked.may_have_unnotified_user_edits = false;
tracked.last_seen_base = tracked.diff_base.clone();
});
patch
}
fn track_buffer_internal(
&mut self,
buffer: Entity<Buffer>,
@@ -145,31 +82,26 @@ impl ActionLog {
let diff = cx.new(|cx| BufferDiff::new(&text_snapshot, cx));
let (diff_update_tx, diff_update_rx) = mpsc::unbounded();
let diff_base;
let last_seen_base;
let unreviewed_edits;
if is_created {
diff_base = Rope::default();
last_seen_base = Rope::default();
unreviewed_edits = Patch::new(vec![Edit {
old: 0..1,
new: 0..text_snapshot.max_point().row + 1,
}])
} else {
diff_base = buffer.read(cx).as_rope().clone();
last_seen_base = diff_base.clone();
unreviewed_edits = Patch::default();
}
TrackedBuffer {
buffer: buffer.clone(),
diff_base,
last_seen_base,
unreviewed_edits,
snapshot: text_snapshot,
status,
version: buffer.read(cx).version(),
diff,
diff_update: diff_update_tx,
may_have_unnotified_user_edits: false,
_open_lsp_handle: open_lsp_handle,
_maintain_diff: cx.spawn({
let buffer = buffer.clone();
@@ -320,10 +252,9 @@ impl ActionLog {
let new_snapshot = buffer_snapshot.clone();
let unreviewed_edits = tracked_buffer.unreviewed_edits.clone();
let edits = diff_snapshots(&old_snapshot, &new_snapshot);
let mut has_user_changes = false;
async move {
if let ChangeAuthor::User = author {
has_user_changes = apply_non_conflicting_edits(
apply_non_conflicting_edits(
&unreviewed_edits,
edits,
&mut base_text,
@@ -331,22 +262,13 @@ impl ActionLog {
);
}
(Arc::new(base_text.to_string()), base_text, has_user_changes)
(Arc::new(base_text.to_string()), base_text)
}
});
anyhow::Ok(rebase)
})??;
let (new_base_text, new_diff_base, has_user_changes) = rebase.await;
this.update(cx, |this, _| {
let tracked_buffer = this
.tracked_buffers
.get_mut(buffer)
.context("buffer not tracked")
.unwrap();
tracked_buffer.may_have_unnotified_user_edits |= has_user_changes;
})?;
let (new_base_text, new_diff_base) = rebase.await;
Self::update_diff(
this,
@@ -565,14 +487,17 @@ impl ActionLog {
&mut self,
buffer: Entity<Buffer>,
buffer_range: Range<impl language::ToPoint>,
telemetry: Option<ActionLogTelemetry>,
cx: &mut Context<Self>,
) {
let Some(tracked_buffer) = self.tracked_buffers.get_mut(&buffer) else {
return;
};
let mut metrics = ActionLogMetrics::for_buffer(buffer.read(cx));
match tracked_buffer.status {
TrackedBufferStatus::Deleted => {
metrics.add_edits(tracked_buffer.unreviewed_edits.edits());
self.tracked_buffers.remove(&buffer);
cx.notify();
}
@@ -581,7 +506,6 @@ impl ActionLog {
let buffer_range =
buffer_range.start.to_point(buffer)..buffer_range.end.to_point(buffer);
let mut delta = 0i32;
tracked_buffer.unreviewed_edits.retain_mut(|edit| {
edit.old.start = (edit.old.start as i32 + delta) as u32;
edit.old.end = (edit.old.end as i32 + delta) as u32;
@@ -613,6 +537,7 @@ impl ActionLog {
.collect::<String>(),
);
delta += edit.new_len() as i32 - edit.old_len() as i32;
metrics.add_edit(edit);
false
}
});
@@ -624,19 +549,24 @@ impl ActionLog {
tracked_buffer.schedule_diff_update(ChangeAuthor::User, cx);
}
}
if let Some(telemetry) = telemetry {
telemetry_report_accepted_edits(&telemetry, metrics);
}
}
pub fn reject_edits_in_ranges(
&mut self,
buffer: Entity<Buffer>,
buffer_ranges: Vec<Range<impl language::ToPoint>>,
telemetry: Option<ActionLogTelemetry>,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
let Some(tracked_buffer) = self.tracked_buffers.get_mut(&buffer) else {
return Task::ready(Ok(()));
};
match &tracked_buffer.status {
let mut metrics = ActionLogMetrics::for_buffer(buffer.read(cx));
let task = match &tracked_buffer.status {
TrackedBufferStatus::Created {
existing_file_content,
} => {
@@ -686,6 +616,7 @@ impl ActionLog {
}
};
metrics.add_edits(tracked_buffer.unreviewed_edits.edits());
self.tracked_buffers.remove(&buffer);
cx.notify();
task
@@ -699,6 +630,7 @@ impl ActionLog {
.update(cx, |project, cx| project.save_buffer(buffer.clone(), cx));
// Clear all tracked edits for this buffer and start over as if we just read it.
metrics.add_edits(tracked_buffer.unreviewed_edits.edits());
self.tracked_buffers.remove(&buffer);
self.buffer_read(buffer.clone(), cx);
cx.notify();
@@ -738,6 +670,7 @@ impl ActionLog {
}
if revert {
metrics.add_edit(edit);
let old_range = tracked_buffer
.diff_base
.point_to_offset(Point::new(edit.old.start, 0))
@@ -758,12 +691,25 @@ impl ActionLog {
self.project
.update(cx, |project, cx| project.save_buffer(buffer, cx))
}
};
if let Some(telemetry) = telemetry {
telemetry_report_rejected_edits(&telemetry, metrics);
}
task
}
pub fn keep_all_edits(&mut self, cx: &mut Context<Self>) {
self.tracked_buffers
.retain(|_buffer, tracked_buffer| match tracked_buffer.status {
pub fn keep_all_edits(
&mut self,
telemetry: Option<ActionLogTelemetry>,
cx: &mut Context<Self>,
) {
self.tracked_buffers.retain(|buffer, tracked_buffer| {
let mut metrics = ActionLogMetrics::for_buffer(buffer.read(cx));
metrics.add_edits(tracked_buffer.unreviewed_edits.edits());
if let Some(telemetry) = telemetry.as_ref() {
telemetry_report_accepted_edits(telemetry, metrics);
}
match tracked_buffer.status {
TrackedBufferStatus::Deleted => false,
_ => {
if let TrackedBufferStatus::Created { .. } = &mut tracked_buffer.status {
@@ -774,13 +720,24 @@ impl ActionLog {
tracked_buffer.schedule_diff_update(ChangeAuthor::User, cx);
true
}
});
}
});
cx.notify();
}
pub fn reject_all_edits(&mut self, cx: &mut Context<Self>) -> Task<()> {
pub fn reject_all_edits(
&mut self,
telemetry: Option<ActionLogTelemetry>,
cx: &mut Context<Self>,
) -> Task<()> {
let futures = self.changed_buffers(cx).into_keys().map(|buffer| {
let reject = self.reject_edits_in_ranges(buffer, vec![Anchor::MIN..Anchor::MAX], cx);
let reject = self.reject_edits_in_ranges(
buffer,
vec![Anchor::MIN..Anchor::MAX],
telemetry.clone(),
cx,
);
async move {
reject.await.log_err();
@@ -788,8 +745,7 @@ impl ActionLog {
});
let task = futures::future::join_all(futures);
cx.spawn(async move |_, _| {
cx.background_spawn(async move {
task.await;
})
}
@@ -819,6 +775,61 @@ impl ActionLog {
}
}
#[derive(Clone)]
pub struct ActionLogTelemetry {
pub agent_telemetry_id: &'static str,
pub session_id: Arc<str>,
}
struct ActionLogMetrics {
lines_removed: u32,
lines_added: u32,
language: Option<SharedString>,
}
impl ActionLogMetrics {
fn for_buffer(buffer: &Buffer) -> Self {
Self {
language: buffer.language().map(|l| l.name().0),
lines_removed: 0,
lines_added: 0,
}
}
fn add_edits(&mut self, edits: &[Edit<u32>]) {
for edit in edits {
self.add_edit(edit);
}
}
fn add_edit(&mut self, edit: &Edit<u32>) {
self.lines_added += edit.new_len();
self.lines_removed += edit.old_len();
}
}
fn telemetry_report_accepted_edits(telemetry: &ActionLogTelemetry, metrics: ActionLogMetrics) {
telemetry::event!(
"Agent Edits Accepted",
agent = telemetry.agent_telemetry_id,
session = telemetry.session_id,
language = metrics.language,
lines_added = metrics.lines_added,
lines_removed = metrics.lines_removed
);
}
fn telemetry_report_rejected_edits(telemetry: &ActionLogTelemetry, metrics: ActionLogMetrics) {
telemetry::event!(
"Agent Edits Rejected",
agent = telemetry.agent_telemetry_id,
session = telemetry.session_id,
language = metrics.language,
lines_added = metrics.lines_added,
lines_removed = metrics.lines_removed
);
}
fn apply_non_conflicting_edits(
patch: &Patch<u32>,
edits: Vec<Edit<u32>>,
@@ -949,14 +960,12 @@ enum TrackedBufferStatus {
struct TrackedBuffer {
buffer: Entity<Buffer>,
diff_base: Rope,
last_seen_base: Rope,
unreviewed_edits: Patch<u32>,
status: TrackedBufferStatus,
version: clock::Global,
diff: Entity<BufferDiff>,
snapshot: text::BufferSnapshot,
diff_update: mpsc::UnboundedSender<(ChangeAuthor, text::BufferSnapshot)>,
may_have_unnotified_user_edits: bool,
_open_lsp_handle: OpenLspBufferHandle,
_maintain_diff: Task<()>,
_subscription: Subscription,
@@ -987,7 +996,6 @@ mod tests {
use super::*;
use buffer_diff::DiffHunkStatusKind;
use gpui::TestAppContext;
use indoc::indoc;
use language::Point;
use project::{FakeFs, Fs, Project, RemoveOptions};
use rand::prelude::*;
@@ -1005,8 +1013,6 @@ mod tests {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
language::init(cx);
Project::init_settings(cx);
});
}
@@ -1066,7 +1072,7 @@ mod tests {
);
action_log.update(cx, |log, cx| {
log.keep_edits_in_range(buffer.clone(), Point::new(3, 0)..Point::new(4, 3), cx)
log.keep_edits_in_range(buffer.clone(), Point::new(3, 0)..Point::new(4, 3), None, cx)
});
cx.run_until_parked();
assert_eq!(
@@ -1082,7 +1088,7 @@ mod tests {
);
action_log.update(cx, |log, cx| {
log.keep_edits_in_range(buffer.clone(), Point::new(0, 0)..Point::new(4, 3), cx)
log.keep_edits_in_range(buffer.clone(), Point::new(0, 0)..Point::new(4, 3), None, cx)
});
cx.run_until_parked();
assert_eq!(unreviewed_hunks(&action_log, cx), vec![]);
@@ -1167,7 +1173,7 @@ mod tests {
);
action_log.update(cx, |log, cx| {
log.keep_edits_in_range(buffer.clone(), Point::new(1, 0)..Point::new(1, 0), cx)
log.keep_edits_in_range(buffer.clone(), Point::new(1, 0)..Point::new(1, 0), None, cx)
});
cx.run_until_parked();
assert_eq!(unreviewed_hunks(&action_log, cx), vec![]);
@@ -1264,111 +1270,7 @@ mod tests {
);
action_log.update(cx, |log, cx| {
log.keep_edits_in_range(buffer.clone(), Point::new(0, 0)..Point::new(1, 0), cx)
});
cx.run_until_parked();
assert_eq!(unreviewed_hunks(&action_log, cx), vec![]);
}
#[gpui::test(iterations = 10)]
async fn test_user_edits_notifications(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(
path!("/dir"),
json!({"file": indoc! {"
abc
def
ghi
jkl
mno"}}),
)
.await;
let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let file_path = project
.read_with(cx, |project, cx| project.find_project_path("dir/file", cx))
.unwrap();
let buffer = project
.update(cx, |project, cx| project.open_buffer(file_path, cx))
.await
.unwrap();
// Agent edits
cx.update(|cx| {
action_log.update(cx, |log, cx| log.buffer_read(buffer.clone(), cx));
buffer.update(cx, |buffer, cx| {
buffer
.edit([(Point::new(1, 2)..Point::new(2, 3), "F\nGHI")], None, cx)
.unwrap()
});
action_log.update(cx, |log, cx| log.buffer_edited(buffer.clone(), cx));
});
cx.run_until_parked();
assert_eq!(
buffer.read_with(cx, |buffer, _| buffer.text()),
indoc! {"
abc
deF
GHI
jkl
mno"}
);
assert_eq!(
unreviewed_hunks(&action_log, cx),
vec![(
buffer.clone(),
vec![HunkStatus {
range: Point::new(1, 0)..Point::new(3, 0),
diff_status: DiffHunkStatusKind::Modified,
old_text: "def\nghi\n".into(),
}],
)]
);
// User edits
buffer.update(cx, |buffer, cx| {
buffer.edit(
[
(Point::new(0, 2)..Point::new(0, 2), "X"),
(Point::new(3, 0)..Point::new(3, 0), "Y"),
],
None,
cx,
)
});
cx.run_until_parked();
assert_eq!(
buffer.read_with(cx, |buffer, _| buffer.text()),
indoc! {"
abXc
deF
GHI
Yjkl
mno"}
);
// User edits should be stored separately from agent's
let user_edits = action_log.update(cx, |log, cx| log.unnotified_user_edits(cx));
assert_eq!(
user_edits.expect("should have some user edits"),
indoc! {"
--- a/dir/file
+++ b/dir/file
@@ -1,5 +1,5 @@
-abc
+abXc
def
ghi
-jkl
+Yjkl
mno
"}
);
action_log.update(cx, |log, cx| {
log.keep_edits_in_range(buffer.clone(), Point::new(0, 0)..Point::new(1, 0), cx)
log.keep_edits_in_range(buffer.clone(), Point::new(0, 0)..Point::new(1, 0), None, cx)
});
cx.run_until_parked();
assert_eq!(unreviewed_hunks(&action_log, cx), vec![]);
@@ -1427,7 +1329,7 @@ mod tests {
);
action_log.update(cx, |log, cx| {
log.keep_edits_in_range(buffer.clone(), 0..5, cx)
log.keep_edits_in_range(buffer.clone(), 0..5, None, cx)
});
cx.run_until_parked();
assert_eq!(unreviewed_hunks(&action_log, cx), vec![]);
@@ -1479,7 +1381,7 @@ mod tests {
action_log
.update(cx, |log, cx| {
log.reject_edits_in_ranges(buffer.clone(), vec![2..5], cx)
log.reject_edits_in_ranges(buffer.clone(), vec![2..5], None, cx)
})
.await
.unwrap();
@@ -1559,7 +1461,7 @@ mod tests {
action_log
.update(cx, |log, cx| {
log.reject_edits_in_ranges(buffer.clone(), vec![2..5], cx)
log.reject_edits_in_ranges(buffer.clone(), vec![2..5], None, cx)
})
.await
.unwrap();
@@ -1742,6 +1644,7 @@ mod tests {
log.reject_edits_in_ranges(
buffer.clone(),
vec![Point::new(4, 0)..Point::new(4, 0)],
None,
cx,
)
})
@@ -1776,6 +1679,7 @@ mod tests {
log.reject_edits_in_ranges(
buffer.clone(),
vec![Point::new(0, 0)..Point::new(1, 0)],
None,
cx,
)
})
@@ -1803,6 +1707,7 @@ mod tests {
log.reject_edits_in_ranges(
buffer.clone(),
vec![Point::new(4, 0)..Point::new(4, 0)],
None,
cx,
)
})
@@ -1877,7 +1782,7 @@ mod tests {
let range_2 = buffer.read(cx).anchor_before(Point::new(5, 0))
..buffer.read(cx).anchor_before(Point::new(5, 3));
log.reject_edits_in_ranges(buffer.clone(), vec![range_1, range_2], cx)
log.reject_edits_in_ranges(buffer.clone(), vec![range_1, range_2], None, cx)
.detach();
assert_eq!(
buffer.read_with(cx, |buffer, _| buffer.text()),
@@ -1938,6 +1843,7 @@ mod tests {
log.reject_edits_in_ranges(
buffer.clone(),
vec![Point::new(0, 0)..Point::new(0, 0)],
None,
cx,
)
})
@@ -1993,6 +1899,7 @@ mod tests {
log.reject_edits_in_ranges(
buffer.clone(),
vec![Point::new(0, 0)..Point::new(0, 11)],
None,
cx,
)
})
@@ -2055,6 +1962,7 @@ mod tests {
log.reject_edits_in_ranges(
buffer.clone(),
vec![Point::new(0, 0)..Point::new(100, 0)],
None,
cx,
)
})
@@ -2102,7 +2010,7 @@ mod tests {
// User accepts the single hunk
action_log.update(cx, |log, cx| {
log.keep_edits_in_range(buffer.clone(), Anchor::MIN..Anchor::MAX, cx)
log.keep_edits_in_range(buffer.clone(), Anchor::MIN..Anchor::MAX, None, cx)
});
cx.run_until_parked();
assert_eq!(unreviewed_hunks(&action_log, cx), vec![]);
@@ -2123,7 +2031,7 @@ mod tests {
// User rejects the hunk
action_log
.update(cx, |log, cx| {
log.reject_edits_in_ranges(buffer.clone(), vec![Anchor::MIN..Anchor::MAX], cx)
log.reject_edits_in_ranges(buffer.clone(), vec![Anchor::MIN..Anchor::MAX], None, cx)
})
.await
.unwrap();
@@ -2167,7 +2075,7 @@ mod tests {
cx.run_until_parked();
// User clicks "Accept All"
action_log.update(cx, |log, cx| log.keep_all_edits(cx));
action_log.update(cx, |log, cx| log.keep_all_edits(None, cx));
cx.run_until_parked();
assert!(fs.is_file(path!("/dir/new_file").as_ref()).await);
assert_eq!(unreviewed_hunks(&action_log, cx), vec![]); // Hunks are cleared
@@ -2186,7 +2094,7 @@ mod tests {
// User clicks "Reject All"
action_log
.update(cx, |log, cx| log.reject_all_edits(cx))
.update(cx, |log, cx| log.reject_all_edits(None, cx))
.await;
cx.run_until_parked();
assert!(fs.is_file(path!("/dir/new_file").as_ref()).await);
@@ -2226,7 +2134,7 @@ mod tests {
action_log.update(cx, |log, cx| {
let range = buffer.read(cx).random_byte_range(0, &mut rng);
log::info!("keeping edits in range {:?}", range);
log.keep_edits_in_range(buffer.clone(), range, cx)
log.keep_edits_in_range(buffer.clone(), range, None, cx)
});
}
25..50 => {
@@ -2234,7 +2142,7 @@ mod tests {
.update(cx, |log, cx| {
let range = buffer.read(cx).random_byte_range(0, &mut rng);
log::info!("rejecting edits in range {:?}", range);
log.reject_edits_in_ranges(buffer.clone(), vec![range], cx)
log.reject_edits_in_ranges(buffer.clone(), vec![range], None, cx)
})
.await
.unwrap();
@@ -2488,61 +2396,4 @@ mod tests {
.collect()
})
}
#[gpui::test]
async fn test_format_patch(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(
path!("/dir"),
json!({"test.txt": "line 1\nline 2\nline 3\n"}),
)
.await;
let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let file_path = project
.read_with(cx, |project, cx| {
project.find_project_path("dir/test.txt", cx)
})
.unwrap();
let buffer = project
.update(cx, |project, cx| project.open_buffer(file_path, cx))
.await
.unwrap();
cx.update(|cx| {
// Track the buffer and mark it as read first
action_log.update(cx, |log, cx| {
log.buffer_read(buffer.clone(), cx);
});
// Make some edits to create a patch
buffer.update(cx, |buffer, cx| {
buffer
.edit([(Point::new(1, 0)..Point::new(1, 6), "CHANGED")], None, cx)
.unwrap(); // Replace "line2" with "CHANGED"
});
});
cx.run_until_parked();
// Get the patch
let patch = action_log.update(cx, |log, cx| log.unnotified_user_edits(cx));
// Verify the patch format contains expected unified diff elements
assert_eq!(
patch.unwrap(),
indoc! {"
--- a/dir/test.txt
+++ b/dir/test.txt
@@ -1,3 +1,3 @@
line 1
-line 2
+CHANGED
line 3
"}
);
}
}

View File

@@ -63,7 +63,6 @@ streaming_diff.workspace = true
strsim.workspace = true
task.workspace = true
telemetry.workspace = true
terminal.workspace = true
text.workspace = true
thiserror.workspace = true
ui.workspace = true

View File

@@ -6,7 +6,6 @@ mod native_agent_server;
pub mod outline;
mod templates;
mod thread;
mod tool_schema;
mod tools;
#[cfg(test)]
@@ -218,7 +217,7 @@ impl LanguageModels {
}
_ => {
log::error!(
"Failed to authenticate provider: {}: {err}",
"Failed to authenticate provider: {}: {err:#}",
provider_name.0
);
}
@@ -967,6 +966,10 @@ impl acp_thread::AgentModelSelector for NativeAgentModelSelector {
}
impl acp_thread::AgentConnection for NativeAgentConnection {
fn telemetry_id(&self) -> &'static str {
"zed"
}
fn new_thread(
self: Rc<Self>,
project: Entity<Project>,
@@ -1107,10 +1110,6 @@ impl acp_thread::AgentConnection for NativeAgentConnection {
}
impl acp_thread::AgentTelemetry for NativeAgentConnection {
fn agent_name(&self) -> String {
"Zed".into()
}
fn thread_data(
&self,
session_id: &acp::SessionId,
@@ -1627,9 +1626,7 @@ mod internal_tests {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
Project::init_settings(cx);
agent_settings::init(cx);
language::init(cx);
LanguageModelRegistry::test(cx);
});
}

View File

@@ -1394,7 +1394,7 @@ mod tests {
async fn init_test(cx: &mut TestAppContext) -> EditAgent {
cx.update(settings::init);
cx.update(Project::init_settings);
let project = Project::test(FakeFs::new(cx.executor()), [], cx).await;
let model = Arc::new(FakeLanguageModel::default());
let action_log = cx.new(|_| ActionLog::new(project.clone()));

View File

@@ -1468,14 +1468,9 @@ impl EditAgentTest {
gpui_tokio::init(cx);
let http_client = Arc::new(ReqwestClient::user_agent("agent tests").unwrap());
cx.set_http_client(http_client);
client::init_settings(cx);
let client = Client::production(cx);
let user_store = cx.new(|cx| UserStore::new(client.clone(), cx));
settings::init(cx);
Project::init_settings(cx);
language::init(cx);
language_model::init(client.clone(), cx);
language_models::init(user_store, client.clone(), cx);
});

View File

@@ -88,8 +88,6 @@ mod tests {
async |fs, project, cx| {
let auth = cx.update(|cx| {
prompt_store::init(cx);
terminal::init(cx);
let registry = language_model::LanguageModelRegistry::read_global(cx);
let auth = registry
.provider(&language_model::ANTHROPIC_PROVIDER_ID)

View File

@@ -1,6 +1,6 @@
use anyhow::Result;
use gpui::{AsyncApp, Entity};
use language::{Buffer, OutlineItem, ParseStatus};
use language::{Buffer, OutlineItem};
use regex::Regex;
use std::fmt::Write;
use text::Point;
@@ -30,10 +30,9 @@ pub async fn get_buffer_content_or_outline(
if file_size > AUTO_OUTLINE_SIZE {
// For large files, use outline instead of full content
// Wait until the buffer has been fully parsed, so we can read its outline
let mut parse_status = buffer.read_with(cx, |buffer, _| buffer.parse_status())?;
while *parse_status.borrow() != ParseStatus::Idle {
parse_status.changed().await?;
}
buffer
.read_with(cx, |buffer, _| buffer.parsing_idle())?
.await;
let outline_items = buffer.read_with(cx, |buffer, _| {
let snapshot = buffer.snapshot();

View File

@@ -1851,7 +1851,6 @@ async fn test_agent_connection(cx: &mut TestAppContext) {
// Initialize language model system with test provider
cx.update(|cx| {
gpui_tokio::init(cx);
client::init_settings(cx);
let http_client = FakeHttpClient::with_404_response();
let clock = Arc::new(clock::FakeSystemClock::new());
@@ -1859,9 +1858,7 @@ async fn test_agent_connection(cx: &mut TestAppContext) {
let user_store = cx.new(|cx| UserStore::new(client.clone(), cx));
language_model::init(client.clone(), cx);
language_models::init(user_store, client.clone(), cx);
Project::init_settings(cx);
LanguageModelRegistry::test(cx);
agent_settings::init(cx);
});
cx.executor().forbid_parking();
@@ -2395,8 +2392,6 @@ async fn setup(cx: &mut TestAppContext, model: TestModel) -> ThreadTest {
cx.update(|cx| {
settings::init(cx);
Project::init_settings(cx);
agent_settings::init(cx);
match model {
TestModel::Fake => {}
@@ -2404,7 +2399,6 @@ async fn setup(cx: &mut TestAppContext, model: TestModel) -> ThreadTest {
gpui_tokio::init(cx);
let http_client = ReqwestClient::user_agent("agent tests").unwrap();
cx.set_http_client(Arc::new(http_client));
client::init_settings(cx);
let client = Client::production(cx);
let user_store = cx.new(|cx| UserStore::new(client.clone(), cx));
language_model::init(client.clone(), cx);

View File

@@ -2139,7 +2139,7 @@ where
/// Returns the JSON schema that describes the tool's input.
fn input_schema(format: LanguageModelToolSchemaFormat) -> Schema {
crate::tool_schema::root_schema_for::<Self::Input>(format)
language_model::tool_schema::root_schema_for::<Self::Input>(format)
}
/// Some tools rely on a provider for the underlying billing or other reasons.
@@ -2226,7 +2226,7 @@ where
fn input_schema(&self, format: LanguageModelToolSchemaFormat) -> Result<serde_json::Value> {
let mut json = serde_json::to_value(T::input_schema(format))?;
crate::tool_schema::adapt_schema_to_format(&mut json, format)?;
language_model::tool_schema::adapt_schema_to_format(&mut json, format)?;
Ok(json)
}
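
For context on the calls above (which only move from a crate-local `tool_schema` module to `language_model::tool_schema`): `root_schema_for::<Self::Input>(format)` derives a JSON schema for a tool's input struct, presumably via `schemars` given the `Schema` return type. A rough self-contained sketch of that kind of derivation, assuming the `schemars` and `serde_json` crates; `SearchInput` and its fields are invented for illustration, and the real helper additionally adapts the result to the requested `LanguageModelToolSchemaFormat`:

use schemars::{JsonSchema, schema_for};

/// Invented example input; real tools define their own structs.
#[derive(JsonSchema)]
struct SearchInput {
    /// The query to search for.
    query: String,
    /// Optional cap on the number of results.
    max_results: Option<u32>,
}

fn main() {
    // Derive the JSON schema for the input type and print it; doc comments
    // become field descriptions in the generated schema.
    let schema = schema_for!(SearchInput);
    println!("{}", serde_json::to_string_pretty(&schema).unwrap());
}
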

View File

@@ -165,7 +165,7 @@ impl AnyAgentTool for ContextServerTool {
format: language_model::LanguageModelToolSchemaFormat,
) -> Result<serde_json::Value> {
let mut schema = self.tool.input_schema.clone();
crate::tool_schema::adapt_schema_to_format(&mut schema, format)?;
language_model::tool_schema::adapt_schema_to_format(&mut schema, format)?;
Ok(match schema {
serde_json::Value::Null => {
serde_json::json!({ "type": "object", "properties": [] })

View File

@@ -562,7 +562,6 @@ fn resolve_path(
mod tests {
use super::*;
use crate::{ContextServerRegistry, Templates};
use client::TelemetrySettings;
use fs::Fs;
use gpui::{TestAppContext, UpdateGlobal};
use language_model::fake_provider::FakeLanguageModel;
@@ -1753,10 +1752,6 @@ mod tests {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
language::init(cx);
TelemetrySettings::register(cx);
agent_settings::AgentSettings::register(cx);
Project::init_settings(cx);
});
}
}

View File

@@ -246,8 +246,6 @@ mod test {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
language::init(cx);
Project::init_settings(cx);
});
}
}

View File

@@ -778,8 +778,6 @@ mod tests {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
language::init(cx);
Project::init_settings(cx);
});
}

View File

@@ -223,8 +223,6 @@ mod tests {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
language::init(cx);
Project::init_settings(cx);
});
}

View File

@@ -163,8 +163,6 @@ mod tests {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
language::init(cx);
Project::init_settings(cx);
});
}
}

View File

@@ -509,8 +509,6 @@ mod test {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
language::init(cx);
Project::init_settings(cx);
});
}

View File

@@ -21,7 +21,6 @@ acp_tools.workspace = true
acp_thread.workspace = true
action_log.workspace = true
agent-client-protocol.workspace = true
agent_settings.workspace = true
anyhow.workspace = true
async-trait.workspace = true
client.workspace = true
@@ -33,7 +32,6 @@ gpui.workspace = true
gpui_tokio = { workspace = true, optional = true }
http_client.workspace = true
indoc.workspace = true
language.workspace = true
language_model.workspace = true
language_models.workspace = true
log.workspace = true

View File

@@ -29,6 +29,7 @@ pub struct UnsupportedVersion;
pub struct AcpConnection {
server_name: SharedString,
telemetry_id: &'static str,
connection: Rc<acp::ClientSideConnection>,
sessions: Rc<RefCell<HashMap<acp::SessionId, AcpSession>>>,
auth_methods: Vec<acp::AuthMethod>,
@@ -52,6 +53,7 @@ pub struct AcpSession {
pub async fn connect(
server_name: SharedString,
telemetry_id: &'static str,
command: AgentServerCommand,
root_dir: &Path,
default_mode: Option<acp::SessionModeId>,
@@ -60,6 +62,7 @@ pub async fn connect(
) -> Result<Rc<dyn AgentConnection>> {
let conn = AcpConnection::stdio(
server_name,
telemetry_id,
command.clone(),
root_dir,
default_mode,
@@ -75,6 +78,7 @@ const MINIMUM_SUPPORTED_VERSION: acp::ProtocolVersion = acp::V1;
impl AcpConnection {
pub async fn stdio(
server_name: SharedString,
telemetry_id: &'static str,
command: AgentServerCommand,
root_dir: &Path,
default_mode: Option<acp::SessionModeId>,
@@ -199,6 +203,7 @@ impl AcpConnection {
root_dir: root_dir.to_owned(),
connection,
server_name,
telemetry_id,
sessions,
agent_capabilities: response.agent_capabilities,
default_mode,
@@ -226,6 +231,10 @@ impl Drop for AcpConnection {
}
impl AgentConnection for AcpConnection {
fn telemetry_id(&self) -> &'static str {
self.telemetry_id
}
fn new_thread(
self: Rc<Self>,
project: Entity<Project>,

View File

@@ -62,6 +62,7 @@ impl AgentServer for ClaudeCode {
cx: &mut App,
) -> Task<Result<(Rc<dyn AgentConnection>, Option<task::SpawnInTerminal>)>> {
let name = self.name();
let telemetry_id = self.telemetry_id();
let root_dir = root_dir.map(|root_dir| root_dir.to_string_lossy().into_owned());
let is_remote = delegate.project.read(cx).is_via_remote_server();
let store = delegate.store.downgrade();
@@ -85,6 +86,7 @@ impl AgentServer for ClaudeCode {
.await?;
let connection = crate::acp::connect(
name,
telemetry_id,
command,
root_dir.as_ref(),
default_mode,

View File

@@ -63,6 +63,7 @@ impl AgentServer for Codex {
cx: &mut App,
) -> Task<Result<(Rc<dyn AgentConnection>, Option<task::SpawnInTerminal>)>> {
let name = self.name();
let telemetry_id = self.telemetry_id();
let root_dir = root_dir.map(|root_dir| root_dir.to_string_lossy().into_owned());
let is_remote = delegate.project.read(cx).is_via_remote_server();
let store = delegate.store.downgrade();
@@ -87,6 +88,7 @@ impl AgentServer for Codex {
let connection = crate::acp::connect(
name,
telemetry_id,
command,
root_dir.as_ref(),
default_mode,

View File

@@ -67,6 +67,7 @@ impl crate::AgentServer for CustomAgentServer {
cx: &mut App,
) -> Task<Result<(Rc<dyn AgentConnection>, Option<task::SpawnInTerminal>)>> {
let name = self.name();
let telemetry_id = self.telemetry_id();
let root_dir = root_dir.map(|root_dir| root_dir.to_string_lossy().into_owned());
let is_remote = delegate.project.read(cx).is_via_remote_server();
let default_mode = self.default_mode(cx);
@@ -92,6 +93,7 @@ impl crate::AgentServer for CustomAgentServer {
.await?;
let connection = crate::acp::connect(
name,
telemetry_id,
command,
root_dir.as_ref(),
default_mode,

View File

@@ -6,7 +6,9 @@ use gpui::{AppContext, Entity, TestAppContext};
use indoc::indoc;
#[cfg(test)]
use project::agent_server_store::BuiltinAgentServerSettings;
use project::{FakeFs, Project, agent_server_store::AllAgentServersSettings};
use project::{FakeFs, Project};
#[cfg(test)]
use settings::Settings;
use std::{
path::{Path, PathBuf},
sync::Arc,
@@ -452,29 +454,22 @@ pub use common_e2e_tests;
// Helpers
pub async fn init_test(cx: &mut TestAppContext) -> Arc<FakeFs> {
use settings::Settings;
env_logger::try_init().ok();
cx.update(|cx| {
let settings_store = settings::SettingsStore::test(cx);
cx.set_global(settings_store);
Project::init_settings(cx);
language::init(cx);
gpui_tokio::init(cx);
let http_client = reqwest_client::ReqwestClient::user_agent("agent tests").unwrap();
cx.set_http_client(Arc::new(http_client));
client::init_settings(cx);
let client = client::Client::production(cx);
let user_store = cx.new(|cx| client::UserStore::new(client.clone(), cx));
language_model::init(client.clone(), cx);
language_models::init(user_store, client, cx);
agent_settings::init(cx);
AllAgentServersSettings::register(cx);
#[cfg(test)]
AllAgentServersSettings::override_global(
AllAgentServersSettings {
project::agent_server_store::AllAgentServersSettings::override_global(
project::agent_server_store::AllAgentServersSettings {
claude: Some(BuiltinAgentServerSettings {
path: Some("claude-code-acp".into()),
args: None,

View File

@@ -31,6 +31,7 @@ impl AgentServer for Gemini {
cx: &mut App,
) -> Task<Result<(Rc<dyn AgentConnection>, Option<task::SpawnInTerminal>)>> {
let name = self.name();
let telemetry_id = self.telemetry_id();
let root_dir = root_dir.map(|root_dir| root_dir.to_string_lossy().into_owned());
let is_remote = delegate.project.read(cx).is_via_remote_server();
let store = delegate.store.downgrade();
@@ -64,6 +65,7 @@ impl AgentServer for Gemini {
let connection = crate::acp::connect(
name,
telemetry_id,
command,
root_dir.as_ref(),
default_mode,

View File

@@ -10,7 +10,7 @@ use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use settings::{
DefaultAgentView, DockPosition, LanguageModelParameters, LanguageModelSelection,
NotifyWhenAgentWaiting, Settings,
NotifyWhenAgentWaiting, RegisterSetting, Settings,
};
pub use crate::agent_profile::*;
@@ -19,11 +19,7 @@ pub const SUMMARIZE_THREAD_PROMPT: &str = include_str!("prompts/summarize_thread
pub const SUMMARIZE_THREAD_DETAILED_PROMPT: &str =
include_str!("prompts/summarize_thread_detailed_prompt.txt");
pub fn init(cx: &mut App) {
AgentSettings::register(cx);
}
#[derive(Clone, Debug)]
#[derive(Clone, Debug, RegisterSetting)]
pub struct AgentSettings {
pub enabled: bool,
pub button: bool,

View File

@@ -646,16 +646,14 @@ impl ContextPickerCompletionProvider {
cx: &mut App,
) -> Vec<ContextPickerEntry> {
let embedded_context = self.prompt_capabilities.borrow().embedded_context;
let mut entries = if embedded_context {
vec![
ContextPickerEntry::Mode(ContextPickerMode::File),
ContextPickerEntry::Mode(ContextPickerMode::Symbol),
ContextPickerEntry::Mode(ContextPickerMode::Thread),
]
} else {
// File is always available, but we don't need a mode entry
vec![]
};
let mut entries = vec![
ContextPickerEntry::Mode(ContextPickerMode::File),
ContextPickerEntry::Mode(ContextPickerMode::Symbol),
];
if embedded_context {
entries.push(ContextPickerEntry::Mode(ContextPickerMode::Thread));
}
let has_selection = workspace
.read(cx)
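
The rewritten entry list above reads as: the File and Symbol pickers are always offered, and the Thread picker is appended only when the agent advertises embedded-context support. A tiny self-contained restatement of that logic; the enum and function names are illustrative:

#[derive(Debug)]
enum ContextPickerMode {
    File,
    Symbol,
    Thread,
}

fn entries(embedded_context: bool) -> Vec<ContextPickerMode> {
    // File and Symbol are always available; Thread needs embedded context.
    let mut entries = vec![ContextPickerMode::File, ContextPickerMode::Symbol];
    if embedded_context {
        entries.push(ContextPickerMode::Thread);
    }
    entries
}

fn main() {
    println!("{:?}", entries(false)); // [File, Symbol]
    println!("{:?}", entries(true)); // [File, Symbol, Thread]
}
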

View File

@@ -401,10 +401,9 @@ mod tests {
use acp_thread::{AgentConnection, StubAgentConnection};
use agent::HistoryStore;
use agent_client_protocol as acp;
use agent_settings::AgentSettings;
use assistant_text_thread::TextThreadStore;
use buffer_diff::{DiffHunkStatus, DiffHunkStatusKind};
use editor::{EditorSettings, RowInfo};
use editor::RowInfo;
use fs::FakeFs;
use gpui::{AppContext as _, SemanticVersion, TestAppContext};
@@ -413,7 +412,7 @@ mod tests {
use pretty_assertions::assert_matches;
use project::Project;
use serde_json::json;
use settings::{Settings as _, SettingsStore};
use settings::SettingsStore;
use util::path;
use workspace::Workspace;
@@ -539,13 +538,8 @@ mod tests {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
language::init(cx);
Project::init_settings(cx);
AgentSettings::register(cx);
workspace::init_settings(cx);
theme::init(theme::LoadThemes::JustBase, cx);
release_channel::init(SemanticVersion::default(), cx);
EditorSettings::register(cx);
});
}
}

View File

@@ -356,7 +356,7 @@ impl MessageEditor {
let task = match mention_uri.clone() {
MentionUri::Fetch { url } => self.confirm_mention_for_fetch(url, cx),
MentionUri::Directory { .. } => Task::ready(Ok(Mention::UriOnly)),
MentionUri::Directory { .. } => Task::ready(Ok(Mention::Link)),
MentionUri::Thread { id, .. } => self.confirm_mention_for_thread(id, cx),
MentionUri::TextThread { path, .. } => self.confirm_mention_for_text_thread(path, cx),
MentionUri::File { abs_path } => self.confirm_mention_for_file(abs_path, cx),
@@ -373,7 +373,6 @@ impl MessageEditor {
)))
}
MentionUri::Selection { .. } => {
// Handled elsewhere
debug_panic!("unexpected selection URI");
Task::ready(Err(anyhow!("unexpected selection URI")))
}
@@ -704,20 +703,21 @@ impl MessageEditor {
return Task::ready(Err(err));
}
let contents = self.mention_set.contents(
&self.prompt_capabilities.borrow(),
full_mention_content,
self.project.clone(),
cx,
);
let contents = self
.mention_set
.contents(full_mention_content, self.project.clone(), cx);
let editor = self.editor.clone();
let supports_embedded_context = self.prompt_capabilities.borrow().embedded_context;
cx.spawn(async move |_, cx| {
let contents = contents.await?;
let mut all_tracked_buffers = Vec::new();
let result = editor.update(cx, |editor, cx| {
let mut ix = text.chars().position(|c| !c.is_whitespace()).unwrap_or(0);
let (mut ix, _) = text
.char_indices()
.find(|(_, c)| !c.is_whitespace())
.unwrap_or((0, '\0'));
let mut chunks: Vec<acp::ContentBlock> = Vec::new();
let text = editor.text(cx);
editor.display_map.update(cx, |map, cx| {
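
The change above fixes a unit mismatch: `chars().position(...)` counts characters, while the surrounding code needs a byte offset into the UTF-8 string, which is what `char_indices()` yields. The two diverge as soon as multi-byte whitespace appears, which is exactly what the `\u{A0}` test later in this diff exercises. A standalone illustration:

fn main() {
    let text = " \u{A0}してhello world ";

    // Character count of the first non-whitespace char: 2 (space, then NBSP).
    let char_ix = text.chars().position(|c| !c.is_whitespace()).unwrap_or(0);

    // Byte offset of the first non-whitespace char: 3 (space is 1 byte, NBSP is 2).
    let (byte_ix, _) = text
        .char_indices()
        .find(|(_, c)| !c.is_whitespace())
        .unwrap_or((0, '\0'));

    assert_eq!(char_ix, 2);
    assert_eq!(byte_ix, 3);
    // Slicing with the byte offset is valid; slicing with the char count
    // would land in the middle of the no-break space and panic.
    assert!(text[byte_ix..].starts_with('し'));
    println!("char index: {char_ix}, byte index: {byte_ix}");
}
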
@@ -738,18 +738,32 @@ impl MessageEditor {
tracked_buffers,
} => {
all_tracked_buffers.extend(tracked_buffers.iter().cloned());
acp::ContentBlock::Resource(acp::EmbeddedResource {
annotations: None,
resource: acp::EmbeddedResourceResource::TextResourceContents(
acp::TextResourceContents {
mime_type: None,
text: content.clone(),
uri: uri.to_uri().to_string(),
meta: None,
},
),
meta: None,
})
if supports_embedded_context {
acp::ContentBlock::Resource(acp::EmbeddedResource {
annotations: None,
resource:
acp::EmbeddedResourceResource::TextResourceContents(
acp::TextResourceContents {
mime_type: None,
text: content.clone(),
uri: uri.to_uri().to_string(),
meta: None,
},
),
meta: None,
})
} else {
acp::ContentBlock::ResourceLink(acp::ResourceLink {
name: uri.name(),
uri: uri.to_uri().to_string(),
annotations: None,
description: None,
mime_type: None,
size: None,
title: None,
meta: None,
})
}
}
Mention::Image(mention_image) => {
let uri = match uri {
@@ -771,18 +785,16 @@ impl MessageEditor {
meta: None,
})
}
Mention::UriOnly => {
acp::ContentBlock::ResourceLink(acp::ResourceLink {
name: uri.name(),
uri: uri.to_uri().to_string(),
annotations: None,
description: None,
mime_type: None,
size: None,
title: None,
meta: None,
})
}
Mention::Link => acp::ContentBlock::ResourceLink(acp::ResourceLink {
name: uri.name(),
uri: uri.to_uri().to_string(),
annotations: None,
description: None,
mime_type: None,
size: None,
title: None,
meta: None,
}),
};
chunks.push(chunk);
ix = crease_range.end;
@@ -1111,7 +1123,7 @@ impl MessageEditor {
let start = text.len();
write!(&mut text, "{}", mention_uri.as_link()).ok();
let end = text.len();
mentions.push((start..end, mention_uri, Mention::UriOnly));
mentions.push((start..end, mention_uri, Mention::Link));
}
}
acp::ContentBlock::Image(acp::ImageContent {
@@ -1517,7 +1529,7 @@ pub enum Mention {
tracked_buffers: Vec<Entity<Buffer>>,
},
Image(MentionImage),
UriOnly,
Link,
}
#[derive(Clone, Debug, Eq, PartialEq)]
@@ -1534,21 +1546,10 @@ pub struct MentionSet {
impl MentionSet {
fn contents(
&self,
prompt_capabilities: &acp::PromptCapabilities,
full_mention_content: bool,
project: Entity<Project>,
cx: &mut App,
) -> Task<Result<HashMap<CreaseId, (MentionUri, Mention)>>> {
if !prompt_capabilities.embedded_context {
let mentions = self
.mentions
.iter()
.map(|(crease_id, (uri, _))| (*crease_id, (uri.clone(), Mention::UriOnly)))
.collect();
return Task::ready(Ok(mentions));
}
let mentions = self.mentions.clone();
cx.spawn(async move |cx| {
let mut contents = HashMap::default();
@@ -1898,10 +1899,8 @@ mod tests {
let app_state = cx.update(AppState::test);
cx.update(|cx| {
language::init(cx);
editor::init(cx);
workspace::init(app_state.clone(), cx);
Project::init_settings(cx);
});
let project = Project::test(app_state.fs.clone(), [path!("/dir").as_ref()], cx).await;
@@ -2074,10 +2073,8 @@ mod tests {
let app_state = cx.update(AppState::test);
cx.update(|cx| {
language::init(cx);
editor::init(cx);
workspace::init(app_state.clone(), cx);
Project::init_settings(cx);
});
app_state
@@ -2202,6 +2199,8 @@ mod tests {
format!("seven.txt b{slash}"),
format!("six.txt b{slash}"),
format!("five.txt b{slash}"),
"Files & Directories".into(),
"Symbols".into()
]
);
editor.set_text("", window, cx);
@@ -2286,21 +2285,11 @@ mod tests {
assert_eq!(fold_ranges(editor, cx).len(), 1);
});
let all_prompt_capabilities = acp::PromptCapabilities {
image: true,
audio: true,
embedded_context: true,
meta: None,
};
let contents = message_editor
.update(&mut cx, |message_editor, cx| {
message_editor.mention_set().contents(
&all_prompt_capabilities,
false,
project.clone(),
cx,
)
message_editor
.mention_set()
.contents(false, project.clone(), cx)
})
.await
.unwrap()
@@ -2318,30 +2307,6 @@ mod tests {
);
}
let contents = message_editor
.update(&mut cx, |message_editor, cx| {
message_editor.mention_set().contents(
&acp::PromptCapabilities::default(),
false,
project.clone(),
cx,
)
})
.await
.unwrap()
.into_values()
.collect::<Vec<_>>();
{
let [(uri, Mention::UriOnly)] = contents.as_slice() else {
panic!("Unexpected mentions");
};
pretty_assertions::assert_eq!(
uri,
&MentionUri::parse(&url_one, PathStyle::local()).unwrap()
);
}
cx.simulate_input(" ");
editor.update(&mut cx, |editor, cx| {
@@ -2377,12 +2342,9 @@ mod tests {
let contents = message_editor
.update(&mut cx, |message_editor, cx| {
message_editor.mention_set().contents(
&all_prompt_capabilities,
false,
project.clone(),
cx,
)
message_editor
.mention_set()
.contents(false, project.clone(), cx)
})
.await
.unwrap()
@@ -2503,12 +2465,9 @@ mod tests {
let contents = message_editor
.update(&mut cx, |message_editor, cx| {
message_editor.mention_set().contents(
&all_prompt_capabilities,
false,
project.clone(),
cx,
)
message_editor
.mention_set()
.contents(false, project.clone(), cx)
})
.await
.unwrap()
@@ -2554,12 +2513,9 @@ mod tests {
// Getting the message contents fails
message_editor
.update(&mut cx, |message_editor, cx| {
message_editor.mention_set().contents(
&all_prompt_capabilities,
false,
project.clone(),
cx,
)
message_editor
.mention_set()
.contents(false, project.clone(), cx)
})
.await
.expect_err("Should fail to load x.png");
@@ -2610,12 +2566,9 @@ mod tests {
// Now getting the contents succeeds, because the invalid mention was removed
let contents = message_editor
.update(&mut cx, |message_editor, cx| {
message_editor.mention_set().contents(
&all_prompt_capabilities,
false,
project.clone(),
cx,
)
message_editor
.mention_set()
.contents(false, project.clone(), cx)
})
.await
.unwrap();
@@ -2879,7 +2832,7 @@ mod tests {
cx.run_until_parked();
editor.update_in(cx, |editor, window, cx| {
editor.set_text(" hello world ", window, cx);
editor.set_text(" \u{A0}してhello world ", window, cx);
});
let (content, _) = message_editor
@@ -2890,13 +2843,154 @@ mod tests {
assert_eq!(
content,
vec![acp::ContentBlock::Text(acp::TextContent {
text: "hello world".into(),
text: "してhello world".into(),
annotations: None,
meta: None
})]
);
}
#[gpui::test]
async fn test_editor_respects_embedded_context_capability(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
let file_content = "fn main() { println!(\"Hello, world!\"); }\n";
fs.insert_tree(
"/project",
json!({
"src": {
"main.rs": file_content,
}
}),
)
.await;
let project = Project::test(fs, [Path::new(path!("/project"))], cx).await;
let (workspace, cx) =
cx.add_window_view(|window, cx| Workspace::test_new(project.clone(), window, cx));
let text_thread_store = cx.new(|cx| TextThreadStore::fake(project.clone(), cx));
let history_store = cx.new(|cx| HistoryStore::new(text_thread_store, cx));
let (message_editor, editor) = workspace.update_in(cx, |workspace, window, cx| {
let workspace_handle = cx.weak_entity();
let message_editor = cx.new(|cx| {
MessageEditor::new(
workspace_handle,
project.clone(),
history_store.clone(),
None,
Default::default(),
Default::default(),
"Test Agent".into(),
"Test",
EditorMode::AutoHeight {
max_lines: None,
min_lines: 1,
},
window,
cx,
)
});
workspace.active_pane().update(cx, |pane, cx| {
pane.add_item(
Box::new(cx.new(|_| MessageEditorItem(message_editor.clone()))),
true,
true,
None,
window,
cx,
);
});
message_editor.read(cx).focus_handle(cx).focus(window);
let editor = message_editor.read(cx).editor().clone();
(message_editor, editor)
});
cx.simulate_input("What is in @file main");
editor.update_in(cx, |editor, window, cx| {
assert!(editor.has_visible_completions_menu());
assert_eq!(editor.text(cx), "What is in @file main");
editor.confirm_completion(&editor::actions::ConfirmCompletion::default(), window, cx);
});
let content = message_editor
.update(cx, |editor, cx| editor.contents(false, cx))
.await
.unwrap()
.0;
let main_rs_uri = if cfg!(windows) {
"file:///C:/project/src/main.rs".to_string()
} else {
"file:///project/src/main.rs".to_string()
};
// When embedded context is `false` we should get a resource link
pretty_assertions::assert_eq!(
content,
vec![
acp::ContentBlock::Text(acp::TextContent {
text: "What is in ".to_string(),
annotations: None,
meta: None
}),
acp::ContentBlock::ResourceLink(acp::ResourceLink {
uri: main_rs_uri.clone(),
name: "main.rs".to_string(),
annotations: None,
meta: None,
description: None,
mime_type: None,
size: None,
title: None,
})
]
);
message_editor.update(cx, |editor, _cx| {
editor.prompt_capabilities.replace(acp::PromptCapabilities {
embedded_context: true,
..Default::default()
})
});
let content = message_editor
.update(cx, |editor, cx| editor.contents(false, cx))
.await
.unwrap()
.0;
// When embedded context is `true` we should get a resource
pretty_assertions::assert_eq!(
content,
vec![
acp::ContentBlock::Text(acp::TextContent {
text: "What is in ".to_string(),
annotations: None,
meta: None
}),
acp::ContentBlock::Resource(acp::EmbeddedResource {
resource: acp::EmbeddedResourceResource::TextResourceContents(
acp::TextResourceContents {
text: file_content.to_string(),
uri: main_rs_uri,
mime_type: None,
meta: None
}
),
annotations: None,
meta: None
})
]
);
}
#[gpui::test]
async fn test_autoscroll_after_insert_selections(cx: &mut TestAppContext) {
init_test(cx);
@@ -2904,10 +2998,8 @@ mod tests {
let app_state = cx.update(AppState::test);
cx.update(|cx| {
language::init(cx);
editor::init(cx);
workspace::init(app_state.clone(), cx);
Project::init_settings(cx);
});
app_state

View File

@@ -1,6 +1,6 @@
use std::rc::Rc;
use acp_thread::AgentModelSelector;
use acp_thread::{AgentModelInfo, AgentModelSelector};
use gpui::{Entity, FocusHandle};
use picker::popover_menu::PickerPopoverMenu;
use ui::{
@@ -36,12 +36,8 @@ impl AcpModelSelectorPopover {
self.menu_handle.toggle(window, cx);
}
pub fn active_model_name(&self, cx: &App) -> Option<SharedString> {
self.selector
.read(cx)
.delegate
.active_model()
.map(|model| model.name.clone())
pub fn active_model<'a>(&self, cx: &'a App) -> Option<&'a AgentModelInfo> {
self.selector.read(cx).delegate.active_model()
}
}

View File

@@ -4,12 +4,12 @@ use acp_thread::{
ToolCallStatus, UserMessageId,
};
use acp_thread::{AgentConnection, Plan};
use action_log::ActionLog;
use action_log::{ActionLog, ActionLogTelemetry};
use agent::{DbThreadMetadata, HistoryEntry, HistoryEntryId, HistoryStore, NativeAgentServer};
use agent_client_protocol::{self as acp, PromptCapabilities};
use agent_servers::{AgentServer, AgentServerDelegate};
use agent_settings::{AgentProfileId, AgentSettings, CompletionMode};
use anyhow::{Result, anyhow, bail};
use anyhow::{Result, anyhow};
use arrayvec::ArrayVec;
use audio::{Audio, Sound};
use buffer_diff::BufferDiff;
@@ -169,7 +169,7 @@ impl ThreadFeedbackState {
}
}
let session_id = thread.read(cx).session_id().clone();
let agent_name = telemetry.agent_name();
let agent = thread.read(cx).connection().telemetry_id();
let task = telemetry.thread_data(&session_id, cx);
let rating = match feedback {
ThreadFeedback::Positive => "positive",
@@ -179,9 +179,9 @@ impl ThreadFeedbackState {
let thread = task.await?;
telemetry::event!(
"Agent Thread Rated",
agent = agent,
session_id = session_id,
rating = rating,
agent = agent_name,
thread = thread
);
anyhow::Ok(())
@@ -206,15 +206,15 @@ impl ThreadFeedbackState {
self.comments_editor.take();
let session_id = thread.read(cx).session_id().clone();
let agent_name = telemetry.agent_name();
let agent = thread.read(cx).connection().telemetry_id();
let task = telemetry.thread_data(&session_id, cx);
cx.background_spawn(async move {
let thread = task.await?;
telemetry::event!(
"Agent Thread Feedback Comments",
agent = agent,
session_id = session_id,
comments = comments,
agent = agent_name,
thread = thread
);
anyhow::Ok(())
@@ -294,7 +294,6 @@ pub struct AcpThreadView {
resume_thread_metadata: Option<DbThreadMetadata>,
_cancel_task: Option<Task<()>>,
_subscriptions: [Subscription; 5],
#[cfg(target_os = "windows")]
show_codex_windows_warning: bool,
}
@@ -401,7 +400,6 @@ impl AcpThreadView {
),
];
#[cfg(target_os = "windows")]
let show_codex_windows_warning = crate::ExternalAgent::parse_built_in(agent.as_ref())
== Some(crate::ExternalAgent::Codex);
@@ -447,7 +445,6 @@ impl AcpThreadView {
focus_handle: cx.focus_handle(),
new_server_version_available: None,
resume_thread_metadata: resume_thread,
#[cfg(target_os = "windows")]
show_codex_windows_warning,
}
}
@@ -541,14 +538,7 @@ impl AcpThreadView {
})
.log_err()
} else {
let root_dir = if let Some(acp_agent) = connection
.clone()
.downcast::<agent_servers::AcpConnection>()
{
acp_agent.root_dir().into()
} else {
root_dir.unwrap_or(paths::home_dir().as_path().into())
};
let root_dir = root_dir.unwrap_or(paths::home_dir().as_path().into());
cx.update(|_, cx| {
connection
.clone()
@@ -1133,8 +1123,6 @@ impl AcpThreadView {
message_editor.contents(full_mention_content, cx)
});
let agent_telemetry_id = self.agent.telemetry_id();
self.thread_error.take();
self.editing_message.take();
self.thread_feedback.clear();
@@ -1142,6 +1130,8 @@ impl AcpThreadView {
let Some(thread) = self.thread() else {
return;
};
let agent_telemetry_id = self.agent.telemetry_id();
let session_id = thread.read(cx).session_id().clone();
let thread = thread.downgrade();
if self.should_be_following {
self.workspace
@@ -1152,6 +1142,7 @@ impl AcpThreadView {
}
self.is_loading_contents = true;
let model_id = self.current_model_id(cx);
let guard = cx.new(|_| ());
cx.observe_release(&guard, |this, _guard, cx| {
this.is_loading_contents = false;
@@ -1173,6 +1164,7 @@ impl AcpThreadView {
message_editor.clear(window, cx);
});
})?;
let turn_start_time = Instant::now();
let send = thread.update(cx, |thread, cx| {
thread.action_log().update(cx, |action_log, cx| {
for buffer in tracked_buffers {
@@ -1181,11 +1173,27 @@ impl AcpThreadView {
});
drop(guard);
telemetry::event!("Agent Message Sent", agent = agent_telemetry_id);
telemetry::event!(
"Agent Message Sent",
agent = agent_telemetry_id,
session = session_id,
model = model_id
);
thread.send(contents, cx)
})?;
send.await
let res = send.await;
let turn_time_ms = turn_start_time.elapsed().as_millis();
let status = if res.is_ok() { "success" } else { "failure" };
telemetry::event!(
"Agent Turn Completed",
agent = agent_telemetry_id,
session = session_id,
model = model_id,
status,
turn_time_ms,
);
res
});
cx.spawn(async move |this, cx| {
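
The send path above now brackets each turn with a timer and reports a success or failure status along with the elapsed time. A rough standalone sketch of that wrapper shape; `record_event` is an invented stand-in for the `telemetry::event!` macro:

use std::time::Instant;

// Invented stand-in for a telemetry sink.
fn record_event(name: &str, status: &str, turn_time_ms: u128) {
    println!("{name}: status={status} turn_time_ms={turn_time_ms}");
}

async fn timed_turn<F>(send: F) -> Result<(), String>
where
    F: std::future::Future<Output = Result<(), String>>,
{
    let turn_start_time = Instant::now();
    let res = send.await;
    let turn_time_ms = turn_start_time.elapsed().as_millis();
    let status = if res.is_ok() { "success" } else { "failure" };
    record_event("Agent Turn Completed", status, turn_time_ms);
    res
}

fn main() {
    // Trivial driver so the sketch runs without a full async runtime.
    let fut = timed_turn(async { Ok::<(), String>(()) });
    futures::executor::block_on(fut).unwrap();
}
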
@@ -1387,7 +1395,7 @@ impl AcpThreadView {
AcpThreadEvent::Refusal => {
self.thread_retry_status.take();
self.thread_error = Some(ThreadError::Refusal);
let model_or_agent_name = self.get_current_model_name(cx);
let model_or_agent_name = self.current_model_name(cx);
let notification_message =
format!("{} refused to respond to this request", model_or_agent_name);
self.notify_with_sound(&notification_message, IconName::Warning, window, cx);
@@ -1506,6 +1514,12 @@ impl AcpThreadView {
})
.unwrap_or_default();
// Run SpawnInTerminal in the same dir as the ACP server
let cwd = connection
.clone()
.downcast::<agent_servers::AcpConnection>()
.map(|acp_conn| acp_conn.root_dir().to_path_buf());
// Build SpawnInTerminal from _meta
let login = task::SpawnInTerminal {
id: task::TaskId(format!("external-agent-{}-login", label)),
@@ -1514,6 +1528,7 @@ impl AcpThreadView {
command: Some(command.to_string()),
args,
command_label: label.to_string(),
cwd,
env,
use_new_terminal: true,
allow_concurrent_runs: true,
@@ -1526,8 +1541,9 @@ impl AcpThreadView {
pending_auth_method.replace(method.clone());
if let Some(workspace) = self.workspace.upgrade() {
let project = self.project.clone();
let authenticate = Self::spawn_external_agent_login(
login, workspace, false, window, cx,
login, workspace, project, false, true, window, cx,
);
cx.notify();
self.auth_task = Some(cx.spawn_in(window, {
@@ -1671,7 +1687,10 @@ impl AcpThreadView {
&& let Some(login) = self.login.clone()
{
if let Some(workspace) = self.workspace.upgrade() {
Self::spawn_external_agent_login(login, workspace, false, window, cx)
let project = self.project.clone();
Self::spawn_external_agent_login(
login, workspace, project, false, false, window, cx,
)
} else {
Task::ready(Ok(()))
}
@@ -1721,17 +1740,40 @@ impl AcpThreadView {
fn spawn_external_agent_login(
login: task::SpawnInTerminal,
workspace: Entity<Workspace>,
project: Entity<Project>,
previous_attempt: bool,
check_exit_code: bool,
window: &mut Window,
cx: &mut App,
) -> Task<Result<()>> {
let Some(terminal_panel) = workspace.read(cx).panel::<TerminalPanel>(cx) else {
return Task::ready(Ok(()));
};
let project = workspace.read(cx).project().clone();
window.spawn(cx, async move |cx| {
let mut task = login.clone();
if let Some(cmd) = &task.command {
// Have "node" command use Zed's managed Node runtime by default
if cmd == "node" {
let resolved_node_runtime = project
.update(cx, |project, cx| {
let agent_server_store = project.agent_server_store().clone();
agent_server_store.update(cx, |store, cx| {
store.node_runtime().map(|node_runtime| {
cx.background_spawn(async move {
node_runtime.binary_path().await
})
})
})
});
if let Ok(Some(resolve_task)) = resolved_node_runtime {
if let Ok(node_path) = resolve_task.await {
task.command = Some(node_path.to_string_lossy().to_string());
}
}
}
}
task.shell = task::Shell::WithArguments {
program: task.command.take().expect("login command should be set"),
args: std::mem::take(&mut task.args),
@@ -1749,44 +1791,65 @@ impl AcpThreadView {
})?;
let terminal = terminal.await?;
let mut exit_status = terminal
.read_with(cx, |terminal, cx| terminal.wait_for_completed_task(cx))?
.fuse();
let logged_in = cx
.spawn({
let terminal = terminal.clone();
async move |cx| {
loop {
cx.background_executor().timer(Duration::from_secs(1)).await;
let content =
terminal.update(cx, |terminal, _cx| terminal.get_content())?;
if content.contains("Login successful")
|| content.contains("Type your message")
{
return anyhow::Ok(());
if check_exit_code {
// For extension-based auth, wait for the process to exit and check exit code
let exit_status = terminal
.read_with(cx, |terminal, cx| terminal.wait_for_completed_task(cx))?
.await;
match exit_status {
Some(status) if status.success() => {
Ok(())
}
Some(status) => {
Err(anyhow!("Login command failed with exit code: {:?}", status.code()))
}
None => {
Err(anyhow!("Login command terminated without exit status"))
}
}
} else {
// For hardcoded agents (claude-login, gemini-cli): look for specific output
let mut exit_status = terminal
.read_with(cx, |terminal, cx| terminal.wait_for_completed_task(cx))?
.fuse();
let logged_in = cx
.spawn({
let terminal = terminal.clone();
async move |cx| {
loop {
cx.background_executor().timer(Duration::from_secs(1)).await;
let content =
terminal.update(cx, |terminal, _cx| terminal.get_content())?;
if content.contains("Login successful")
|| content.contains("Type your message")
{
return anyhow::Ok(());
}
}
}
})
.fuse();
futures::pin_mut!(logged_in);
futures::select_biased! {
result = logged_in => {
if let Err(e) = result {
log::error!("{e}");
return Err(anyhow!("exited before logging in"));
}
}
})
.fuse();
futures::pin_mut!(logged_in);
futures::select_biased! {
result = logged_in => {
if let Err(e) = result {
log::error!("{e}");
_ = exit_status => {
if !previous_attempt && project.read_with(cx, |project, _| project.is_via_remote_server())? && login.label.contains("gemini") {
return cx.update(|window, cx| Self::spawn_external_agent_login(login, workspace, project.clone(), true, false, window, cx))?.await
}
return Err(anyhow!("exited before logging in"));
}
}
_ = exit_status => {
if !previous_attempt && project.read_with(cx, |project, _| project.is_via_remote_server())? && login.label.contains("gemini") {
return cx.update(|window, cx| Self::spawn_external_agent_login(login, workspace, true, window, cx))?.await
}
return Err(anyhow!("exited before logging in"));
}
terminal.update(cx, |terminal, _| terminal.kill_active_task())?;
Ok(())
}
terminal.update(cx, |terminal, _| terminal.kill_active_task())?;
Ok(())
})
}
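
The rewritten login flow above has two completion strategies: when `check_exit_code` is set it awaits the process and trusts the exit status, otherwise it keeps polling terminal output and races that watcher against process exit with `select_biased!`. A simplified self-contained sketch of the racing half, assuming the `futures` and `anyhow` crates, with plain futures standing in for the terminal watcher and the process:

use anyhow::{Result, anyhow};
use futures::{FutureExt, pin_mut, select_biased};

// `logged_in` resolves when the expected output is seen; `exited` resolves
// when the login process terminates.
async fn wait_for_login(
    logged_in: impl std::future::Future<Output = Result<()>>,
    exited: impl std::future::Future<Output = Option<i32>>,
) -> Result<()> {
    let logged_in = logged_in.fuse();
    let exited = exited.fuse();
    pin_mut!(logged_in, exited);
    select_biased! {
        result = logged_in => result,
        _ = exited => Err(anyhow!("exited before logging in")),
    }
}

fn main() -> Result<()> {
    // Happy path: the output watcher wins the race against a process that
    // never exits.
    futures::executor::block_on(wait_for_login(
        async { Ok(()) },
        async { futures::future::pending::<Option<i32>>().await },
    ))
}
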
@@ -1801,6 +1864,14 @@ impl AcpThreadView {
let Some(thread) = self.thread() else {
return;
};
telemetry::event!(
"Agent Tool Call Authorized",
agent = self.agent.telemetry_id(),
session = thread.read(cx).session_id(),
option = option_kind
);
thread.update(cx, |thread, cx| {
thread.authorize_tool_call(tool_call_id, option_id, option_kind, cx);
});
@@ -2051,6 +2122,15 @@ impl AcpThreadView {
.into_any(),
};
let needs_confirmation = if let AgentThreadEntry::ToolCall(tool_call) = entry {
matches!(
tool_call.status,
ToolCallStatus::WaitingForConfirmation { .. }
)
} else {
false
};
let Some(thread) = self.thread() else {
return primary;
};
@@ -2059,7 +2139,13 @@ impl AcpThreadView {
v_flex()
.w_full()
.child(primary)
.child(self.render_thread_controls(&thread, cx))
.map(|this| {
if needs_confirmation {
this.child(self.render_generating(true))
} else {
this.child(self.render_thread_controls(&thread, cx))
}
})
.when_some(
self.thread_feedback.comments_editor.clone(),
|this, editor| this.child(Self::render_feedback_feedback_editor(editor, cx)),
@@ -3518,6 +3604,7 @@ impl AcpThreadView {
) -> Option<AnyElement> {
let thread = thread_entity.read(cx);
let action_log = thread.action_log();
let telemetry = ActionLogTelemetry::from(thread);
let changed_buffers = action_log.read(cx).changed_buffers(cx);
let plan = thread.plan();
@@ -3565,6 +3652,7 @@ impl AcpThreadView {
.when(self.edits_expanded, |parent| {
parent.child(self.render_edited_files(
action_log,
telemetry,
&changed_buffers,
pending_edits,
cx,
@@ -3845,6 +3933,7 @@ impl AcpThreadView {
fn render_edited_files(
&self,
action_log: &Entity<ActionLog>,
telemetry: ActionLogTelemetry,
changed_buffers: &BTreeMap<Entity<Buffer>, Entity<BufferDiff>>,
pending_edits: bool,
cx: &Context<Self>,
@@ -3964,12 +4053,14 @@ impl AcpThreadView {
.on_click({
let buffer = buffer.clone();
let action_log = action_log.clone();
let telemetry = telemetry.clone();
move |_, _, cx| {
action_log.update(cx, |action_log, cx| {
action_log
.reject_edits_in_ranges(
buffer.clone(),
vec![Anchor::MIN..Anchor::MAX],
Some(telemetry.clone()),
cx,
)
.detach_and_log_err(cx);
@@ -3984,11 +4075,13 @@ impl AcpThreadView {
.on_click({
let buffer = buffer.clone();
let action_log = action_log.clone();
let telemetry = telemetry.clone();
move |_, _, cx| {
action_log.update(cx, |action_log, cx| {
action_log.keep_edits_in_range(
buffer.clone(),
Anchor::MIN..Anchor::MAX,
Some(telemetry.clone()),
cx,
);
})
@@ -4204,17 +4297,23 @@ impl AcpThreadView {
let Some(thread) = self.thread() else {
return;
};
let telemetry = ActionLogTelemetry::from(thread.read(cx));
let action_log = thread.read(cx).action_log().clone();
action_log.update(cx, |action_log, cx| action_log.keep_all_edits(cx));
action_log.update(cx, |action_log, cx| {
action_log.keep_all_edits(Some(telemetry), cx)
});
}
fn reject_all(&mut self, _: &RejectAll, _window: &mut Window, cx: &mut Context<Self>) {
let Some(thread) = self.thread() else {
return;
};
let telemetry = ActionLogTelemetry::from(thread.read(cx));
let action_log = thread.read(cx).action_log().clone();
action_log
.update(cx, |action_log, cx| action_log.reject_all_edits(cx))
.update(cx, |action_log, cx| {
action_log.reject_all_edits(Some(telemetry), cx)
})
.detach();
}
@@ -4610,35 +4709,36 @@ impl AcpThreadView {
.languages
.language_for_name("Markdown");
let (thread_summary, markdown) = if let Some(thread) = self.thread() {
let (thread_title, markdown) = if let Some(thread) = self.thread() {
let thread = thread.read(cx);
(thread.title().to_string(), thread.to_markdown(cx))
} else {
return Task::ready(Ok(()));
};
let project = workspace.read(cx).project().clone();
window.spawn(cx, async move |cx| {
let markdown_language = markdown_language_task.await?;
let buffer = project
.update(cx, |project, cx| project.create_buffer(false, cx))?
.await?;
buffer.update(cx, |buffer, cx| {
buffer.set_text(markdown, cx);
buffer.set_language(Some(markdown_language), cx);
buffer.set_capability(language::Capability::ReadOnly, cx);
})?;
workspace.update_in(cx, |workspace, window, cx| {
let project = workspace.project().clone();
if !project.read(cx).is_local() {
bail!("failed to open active thread as markdown in remote project");
}
let buffer = project.update(cx, |project, cx| {
project.create_local_buffer(&markdown, Some(markdown_language), true, cx)
});
let buffer = cx.new(|cx| {
MultiBuffer::singleton(buffer, cx).with_title(thread_summary.clone())
});
let buffer = cx
.new(|cx| MultiBuffer::singleton(buffer, cx).with_title(thread_title.clone()));
workspace.add_item_to_active_pane(
Box::new(cx.new(|cx| {
let mut editor =
Editor::for_multibuffer(buffer, Some(project.clone()), window, cx);
editor.set_breadcrumb_header(thread_summary);
editor.set_breadcrumb_header(thread_title);
editor
})),
None,
@@ -4646,9 +4746,7 @@ impl AcpThreadView {
window,
cx,
);
anyhow::Ok(())
})??;
})?;
anyhow::Ok(())
})
}
@@ -4829,6 +4927,31 @@ impl AcpThreadView {
}
}
fn render_generating(&self, confirmation: bool) -> impl IntoElement {
h_flex()
.id("generating-spinner")
.py_2()
.px(rems_from_px(22.))
.map(|this| {
if confirmation {
this.gap_2()
.child(
h_flex()
.w_2()
.child(SpinnerLabel::sand().size(LabelSize::Small)),
)
.child(
LoadingLabel::new("Waiting Confirmation")
.size(LabelSize::Small)
.color(Color::Muted),
)
} else {
this.child(SpinnerLabel::new().size(LabelSize::Small))
}
})
.into_any_element()
}
fn render_thread_controls(
&self,
thread: &Entity<AcpThread>,
@@ -4836,12 +4959,7 @@ impl AcpThreadView {
) -> impl IntoElement {
let is_generating = matches!(thread.read(cx).status(), ThreadStatus::Generating);
if is_generating {
return h_flex().id("thread-controls-container").child(
div()
.py_2()
.px(rems_from_px(22.))
.child(SpinnerLabel::new().size(LabelSize::Small)),
);
return self.render_generating(false).into_any_element();
}
let open_as_markdown = IconButton::new("open-as-markdown", IconName::FileMarkdown)
@@ -4929,7 +5047,10 @@ impl AcpThreadView {
);
}
container.child(open_as_markdown).child(scroll_to_top)
container
.child(open_as_markdown)
.child(scroll_to_top)
.into_any_element()
}
fn render_feedback_feedback_editor(editor: Entity<Editor>, cx: &Context<Self>) -> Div {
@@ -5159,7 +5280,6 @@ impl AcpThreadView {
)
}
#[cfg(target_os = "windows")]
fn render_codex_windows_warning(&self, cx: &mut Context<Self>) -> Option<Callout> {
if self.show_codex_windows_warning {
Some(
@@ -5175,8 +5295,9 @@ impl AcpThreadView {
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.on_click(cx.listener({
move |_, _, window, cx| {
window.dispatch_action(
move |_, _, _window, cx| {
#[cfg(windows)]
_window.dispatch_action(
zed_actions::wsl_actions::OpenWsl::default().boxed_clone(),
cx,
);
@@ -5251,20 +5372,21 @@ impl AcpThreadView {
)
}
fn get_current_model_name(&self, cx: &App) -> SharedString {
fn current_model_id(&self, cx: &App) -> Option<String> {
self.model_selector
.as_ref()
.and_then(|selector| selector.read(cx).active_model(cx).map(|m| m.id.to_string()))
}
fn current_model_name(&self, cx: &App) -> SharedString {
// For native agent (Zed Agent), use the specific model name (e.g., "Claude 3.5 Sonnet")
// For ACP agents, use the agent name (e.g., "Claude Code", "Gemini CLI")
// This provides better clarity about what refused the request
if self
.agent
.clone()
.downcast::<agent::NativeAgentServer>()
.is_some()
{
// Native agent - use the model name
if self.as_native_connection(cx).is_some() {
self.model_selector
.as_ref()
.and_then(|selector| selector.read(cx).active_model_name(cx))
.and_then(|selector| selector.read(cx).active_model(cx))
.map(|model| model.name.clone())
.unwrap_or_else(|| SharedString::from("The model"))
} else {
// ACP agent - use the agent name (e.g., "Claude Code", "Gemini CLI")
@@ -5273,7 +5395,7 @@ impl AcpThreadView {
}
fn render_refusal_error(&self, cx: &mut Context<'_, Self>) -> Callout {
let model_or_agent_name = self.get_current_model_name(cx);
let model_or_agent_name = self.current_model_name(cx);
let refusal_message = format!(
"{} refused to respond to this prompt. This can happen when a model believes the prompt violates its content policy or safety guidelines, so rephrasing it can sometimes address the issue.",
model_or_agent_name
@@ -5679,13 +5801,10 @@ impl Render for AcpThreadView {
})
.children(self.render_thread_retry_status_callout(window, cx))
.children({
#[cfg(target_os = "windows")]
{
if cfg!(windows) && self.project.read(cx).is_local() {
self.render_codex_windows_warning(cx)
}
#[cfg(not(target_os = "windows"))]
{
Vec::<Empty>::new()
} else {
None
}
})
.children(self.render_thread_error(cx))
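
The warning above moves from compile-time `#[cfg(target_os = "windows")]` gating to a runtime `cfg!(windows)` check combined with `is_local()`, so the same code path compiles on every platform and simply yields `None` elsewhere. A small sketch of the difference; the message string is a placeholder, not Zed's actual copy:

// cfg!() always compiles the guarded code and evaluates to a bool at runtime,
// whereas #[cfg] removes the item entirely on other platforms.
fn codex_windows_warning(is_local: bool) -> Option<&'static str> {
    if cfg!(windows) && is_local {
        Some("Codex runs best under WSL on Windows") // placeholder text
    } else {
        None
    }
}

fn main() {
    println!("{:?}", codex_windows_warning(true));
}
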
@@ -5874,7 +5993,6 @@ pub(crate) mod tests {
use acp_thread::StubAgentConnection;
use agent_client_protocol::SessionId;
use assistant_text_thread::TextThreadStore;
use editor::EditorSettings;
use fs::FakeFs;
use gpui::{EventEmitter, SemanticVersion, TestAppContext, VisualTestContext};
use project::Project;
@@ -6262,6 +6380,10 @@ pub(crate) mod tests {
struct SaboteurAgentConnection;
impl AgentConnection for SaboteurAgentConnection {
fn telemetry_id(&self) -> &'static str {
"saboteur"
}
fn new_thread(
self: Rc<Self>,
project: Entity<Project>,
@@ -6322,6 +6444,10 @@ pub(crate) mod tests {
struct RefusalAgentConnection;
impl AgentConnection for RefusalAgentConnection {
fn telemetry_id(&self) -> &'static str {
"refusal"
}
fn new_thread(
self: Rc<Self>,
project: Entity<Project>,
@@ -6384,13 +6510,8 @@ pub(crate) mod tests {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
language::init(cx);
Project::init_settings(cx);
AgentSettings::register(cx);
workspace::init_settings(cx);
theme::init(theme::LoadThemes::JustBase, cx);
release_channel::init(SemanticVersion::default(), cx);
EditorSettings::register(cx);
prompt_store::init(cx)
});
}

View File

@@ -1013,7 +1013,7 @@ impl AgentConfiguration {
.child(Divider::horizontal().color(DividerColor::BorderFaded))
.child(self.render_agent_server(
AgentIcon::Name(IconName::AiOpenAi),
"Codex",
"Codex CLI",
false,
))
.child(Divider::horizontal().color(DividerColor::BorderFaded))
@@ -1047,7 +1047,7 @@ impl AgentConfiguration {
AgentIcon::Name(icon_name) => Icon::new(icon_name)
.size(IconSize::Small)
.color(Color::Muted),
AgentIcon::Path(icon_path) => Icon::from_path(icon_path)
AgentIcon::Path(icon_path) => Icon::from_external_svg(icon_path)
.size(IconSize::Small)
.color(Color::Muted),
};

View File

@@ -515,16 +515,14 @@ impl Render for AddLlmProviderModal {
#[cfg(test)]
mod tests {
use super::*;
use editor::EditorSettings;
use fs::FakeFs;
use gpui::{TestAppContext, VisualTestContext};
use language::language_settings;
use language_model::{
LanguageModelProviderId, LanguageModelProviderName,
fake_provider::FakeLanguageModelProvider,
};
use project::Project;
use settings::{Settings as _, SettingsStore};
use settings::SettingsStore;
use util::path;
#[gpui::test]
@@ -730,13 +728,9 @@ mod tests {
cx.update(|cx| {
let store = SettingsStore::test(cx);
cx.set_global(store);
workspace::init_settings(cx);
Project::init_settings(cx);
theme::init(theme::LoadThemes::JustBase, cx);
language_settings::init(cx);
EditorSettings::register(cx);
language_model::init_settings(cx);
language_models::init_settings(cx);
});
let fs = FakeFs::new(cx.executor());

View File

@@ -1,6 +1,6 @@
use crate::{Keep, KeepAll, OpenAgentDiff, Reject, RejectAll};
use acp_thread::{AcpThread, AcpThreadEvent};
use action_log::ActionLog;
use action_log::ActionLogTelemetry;
use agent_settings::AgentSettings;
use anyhow::Result;
use buffer_diff::DiffHunkStatus;
@@ -40,79 +40,16 @@ use zed_actions::assistant::ToggleFocus;
pub struct AgentDiffPane {
multibuffer: Entity<MultiBuffer>,
editor: Entity<Editor>,
thread: AgentDiffThread,
thread: Entity<AcpThread>,
focus_handle: FocusHandle,
workspace: WeakEntity<Workspace>,
title: SharedString,
_subscriptions: Vec<Subscription>,
}
#[derive(PartialEq, Eq, Clone)]
pub enum AgentDiffThread {
AcpThread(Entity<AcpThread>),
}
impl AgentDiffThread {
fn project(&self, cx: &App) -> Entity<Project> {
match self {
AgentDiffThread::AcpThread(thread) => thread.read(cx).project().clone(),
}
}
fn action_log(&self, cx: &App) -> Entity<ActionLog> {
match self {
AgentDiffThread::AcpThread(thread) => thread.read(cx).action_log().clone(),
}
}
fn title(&self, cx: &App) -> SharedString {
match self {
AgentDiffThread::AcpThread(thread) => thread.read(cx).title(),
}
}
fn has_pending_edit_tool_uses(&self, cx: &App) -> bool {
match self {
AgentDiffThread::AcpThread(thread) => thread.read(cx).has_pending_edit_tool_calls(),
}
}
fn downgrade(&self) -> WeakAgentDiffThread {
match self {
AgentDiffThread::AcpThread(thread) => {
WeakAgentDiffThread::AcpThread(thread.downgrade())
}
}
}
}
impl From<Entity<AcpThread>> for AgentDiffThread {
fn from(entity: Entity<AcpThread>) -> Self {
AgentDiffThread::AcpThread(entity)
}
}
#[derive(PartialEq, Eq, Clone)]
pub enum WeakAgentDiffThread {
AcpThread(WeakEntity<AcpThread>),
}
impl WeakAgentDiffThread {
pub fn upgrade(&self) -> Option<AgentDiffThread> {
match self {
WeakAgentDiffThread::AcpThread(weak) => weak.upgrade().map(AgentDiffThread::AcpThread),
}
}
}
impl From<WeakEntity<AcpThread>> for WeakAgentDiffThread {
fn from(entity: WeakEntity<AcpThread>) -> Self {
WeakAgentDiffThread::AcpThread(entity)
}
}
impl AgentDiffPane {
pub fn deploy(
thread: impl Into<AgentDiffThread>,
thread: Entity<AcpThread>,
workspace: WeakEntity<Workspace>,
window: &mut Window,
cx: &mut App,
@@ -123,12 +60,11 @@ impl AgentDiffPane {
}
pub fn deploy_in_workspace(
thread: impl Into<AgentDiffThread>,
thread: Entity<AcpThread>,
workspace: &mut Workspace,
window: &mut Window,
cx: &mut Context<Workspace>,
) -> Entity<Self> {
let thread = thread.into();
let existing_diff = workspace
.items_of_type::<AgentDiffPane>(cx)
.find(|diff| diff.read(cx).thread == thread);
@@ -145,7 +81,7 @@ impl AgentDiffPane {
}
pub fn new(
thread: AgentDiffThread,
thread: Entity<AcpThread>,
workspace: WeakEntity<Workspace>,
window: &mut Window,
cx: &mut Context<Self>,
@@ -153,7 +89,7 @@ impl AgentDiffPane {
let focus_handle = cx.focus_handle();
let multibuffer = cx.new(|_| MultiBuffer::new(Capability::ReadWrite));
let project = thread.project(cx);
let project = thread.read(cx).project().clone();
let editor = cx.new(|cx| {
let mut editor =
Editor::for_multibuffer(multibuffer.clone(), Some(project.clone()), window, cx);
@@ -164,19 +100,16 @@ impl AgentDiffPane {
editor
});
let action_log = thread.action_log(cx);
let action_log = thread.read(cx).action_log().clone();
let mut this = Self {
_subscriptions: vec![
cx.observe_in(&action_log, window, |this, _action_log, window, cx| {
this.update_excerpts(window, cx)
}),
match &thread {
AgentDiffThread::AcpThread(thread) => cx
.subscribe(thread, |this, _thread, event, cx| {
this.handle_acp_thread_event(event, cx)
}),
},
cx.subscribe(&thread, |this, _thread, event, cx| {
this.handle_acp_thread_event(event, cx)
}),
],
title: SharedString::default(),
multibuffer,
@@ -191,7 +124,12 @@ impl AgentDiffPane {
}
fn update_excerpts(&mut self, window: &mut Window, cx: &mut Context<Self>) {
let changed_buffers = self.thread.action_log(cx).read(cx).changed_buffers(cx);
let changed_buffers = self
.thread
.read(cx)
.action_log()
.read(cx)
.changed_buffers(cx);
let mut paths_to_delete = self.multibuffer.read(cx).paths().collect::<HashSet<_>>();
for (buffer, diff_handle) in changed_buffers {
@@ -278,7 +216,7 @@ impl AgentDiffPane {
}
fn update_title(&mut self, cx: &mut Context<Self>) {
let new_title = self.thread.title(cx);
let new_title = self.thread.read(cx).title();
if new_title != self.title {
self.title = new_title;
cx.emit(EditorEvent::TitleChanged);
@@ -340,16 +278,18 @@ impl AgentDiffPane {
}
fn keep_all(&mut self, _: &KeepAll, _window: &mut Window, cx: &mut Context<Self>) {
self.thread
.action_log(cx)
.update(cx, |action_log, cx| action_log.keep_all_edits(cx))
let telemetry = ActionLogTelemetry::from(self.thread.read(cx));
let action_log = self.thread.read(cx).action_log().clone();
action_log.update(cx, |action_log, cx| {
action_log.keep_all_edits(Some(telemetry), cx)
});
}
}
fn keep_edits_in_selection(
editor: &mut Editor,
buffer_snapshot: &MultiBufferSnapshot,
thread: &AgentDiffThread,
thread: &Entity<AcpThread>,
window: &mut Window,
cx: &mut Context<Editor>,
) {
@@ -364,7 +304,7 @@ fn keep_edits_in_selection(
fn reject_edits_in_selection(
editor: &mut Editor,
buffer_snapshot: &MultiBufferSnapshot,
thread: &AgentDiffThread,
thread: &Entity<AcpThread>,
window: &mut Window,
cx: &mut Context<Editor>,
) {
@@ -378,7 +318,7 @@ fn reject_edits_in_selection(
fn keep_edits_in_ranges(
editor: &mut Editor,
buffer_snapshot: &MultiBufferSnapshot,
thread: &AgentDiffThread,
thread: &Entity<AcpThread>,
ranges: Vec<Range<editor::Anchor>>,
window: &mut Window,
cx: &mut Context<Editor>,
@@ -393,8 +333,15 @@ fn keep_edits_in_ranges(
for hunk in &diff_hunks_in_ranges {
let buffer = multibuffer.read(cx).buffer(hunk.buffer_id);
if let Some(buffer) = buffer {
thread.action_log(cx).update(cx, |action_log, cx| {
action_log.keep_edits_in_range(buffer, hunk.buffer_range.clone(), cx)
let action_log = thread.read(cx).action_log().clone();
let telemetry = ActionLogTelemetry::from(thread.read(cx));
action_log.update(cx, |action_log, cx| {
action_log.keep_edits_in_range(
buffer,
hunk.buffer_range.clone(),
Some(telemetry),
cx,
)
});
}
}
@@ -403,7 +350,7 @@ fn keep_edits_in_ranges(
fn reject_edits_in_ranges(
editor: &mut Editor,
buffer_snapshot: &MultiBufferSnapshot,
thread: &AgentDiffThread,
thread: &Entity<AcpThread>,
ranges: Vec<Range<editor::Anchor>>,
window: &mut Window,
cx: &mut Context<Editor>,
@@ -427,11 +374,12 @@ fn reject_edits_in_ranges(
}
}
let action_log = thread.read(cx).action_log().clone();
let telemetry = ActionLogTelemetry::from(thread.read(cx));
for (buffer, ranges) in ranges_by_buffer {
thread
.action_log(cx)
action_log
.update(cx, |action_log, cx| {
action_log.reject_edits_in_ranges(buffer, ranges, cx)
action_log.reject_edits_in_ranges(buffer, ranges, Some(telemetry.clone()), cx)
})
.detach_and_log_err(cx);
}
@@ -531,7 +479,7 @@ impl Item for AgentDiffPane {
}
fn tab_content(&self, params: TabContentParams, _window: &Window, cx: &App) -> AnyElement {
let title = self.thread.title(cx);
let title = self.thread.read(cx).title();
Label::new(format!("Review: {}", title))
.color(if params.selected {
Color::Default
@@ -712,7 +660,7 @@ impl Render for AgentDiffPane {
}
}
fn diff_hunk_controls(thread: &AgentDiffThread) -> editor::RenderDiffHunkControlsFn {
fn diff_hunk_controls(thread: &Entity<AcpThread>) -> editor::RenderDiffHunkControlsFn {
let thread = thread.clone();
Arc::new(
@@ -739,7 +687,7 @@ fn render_diff_hunk_controls(
hunk_range: Range<editor::Anchor>,
is_created_file: bool,
line_height: Pixels,
thread: &AgentDiffThread,
thread: &Entity<AcpThread>,
editor: &Entity<Editor>,
cx: &mut App,
) -> AnyElement {
@@ -1153,8 +1101,11 @@ impl Render for AgentDiffToolbar {
return Empty.into_any();
};
let has_pending_edit_tool_use =
agent_diff.read(cx).thread.has_pending_edit_tool_uses(cx);
let has_pending_edit_tool_use = agent_diff
.read(cx)
.thread
.read(cx)
.has_pending_edit_tool_calls();
if has_pending_edit_tool_use {
return div().px_2().child(spinner_icon).into_any();
@@ -1214,7 +1165,7 @@ pub enum EditorState {
}
struct WorkspaceThread {
thread: WeakAgentDiffThread,
thread: WeakEntity<AcpThread>,
_thread_subscriptions: (Subscription, Subscription),
singleton_editors: HashMap<WeakEntity<Buffer>, HashMap<WeakEntity<Editor>, Subscription>>,
_settings_subscription: Subscription,
@@ -1239,23 +1190,23 @@ impl AgentDiff {
pub fn set_active_thread(
workspace: &WeakEntity<Workspace>,
thread: impl Into<AgentDiffThread>,
thread: Entity<AcpThread>,
window: &mut Window,
cx: &mut App,
) {
Self::global(cx).update(cx, |this, cx| {
this.register_active_thread_impl(workspace, thread.into(), window, cx);
this.register_active_thread_impl(workspace, thread, window, cx);
});
}
fn register_active_thread_impl(
&mut self,
workspace: &WeakEntity<Workspace>,
thread: AgentDiffThread,
thread: Entity<AcpThread>,
window: &mut Window,
cx: &mut Context<Self>,
) {
let action_log = thread.action_log(cx);
let action_log = thread.read(cx).action_log().clone();
let action_log_subscription = cx.observe_in(&action_log, window, {
let workspace = workspace.clone();
@@ -1264,14 +1215,12 @@ impl AgentDiff {
}
});
let thread_subscription = match &thread {
AgentDiffThread::AcpThread(thread) => cx.subscribe_in(thread, window, {
let workspace = workspace.clone();
move |this, thread, event, window, cx| {
this.handle_acp_thread_event(&workspace, thread, event, window, cx)
}
}),
};
let thread_subscription = cx.subscribe_in(&thread, window, {
let workspace = workspace.clone();
move |this, thread, event, window, cx| {
this.handle_acp_thread_event(&workspace, thread, event, window, cx)
}
});
if let Some(workspace_thread) = self.workspace_threads.get_mut(workspace) {
// replace thread and action log subscription, but keep editors
@@ -1348,7 +1297,7 @@ impl AgentDiff {
fn register_review_action<T: Action>(
workspace: &mut Workspace,
review: impl Fn(&Entity<Editor>, &AgentDiffThread, &mut Window, &mut App) -> PostReviewState
review: impl Fn(&Entity<Editor>, &Entity<AcpThread>, &mut Window, &mut App) -> PostReviewState
+ 'static,
this: &Entity<AgentDiff>,
) {
@@ -1508,7 +1457,7 @@ impl AgentDiff {
return;
};
let action_log = thread.action_log(cx);
let action_log = thread.read(cx).action_log();
let changed_buffers = action_log.read(cx).changed_buffers(cx);
let mut unaffected = self.reviewing_editors.clone();
@@ -1627,7 +1576,7 @@ impl AgentDiff {
fn keep_all(
editor: &Entity<Editor>,
thread: &AgentDiffThread,
thread: &Entity<AcpThread>,
window: &mut Window,
cx: &mut App,
) -> PostReviewState {
@@ -1647,7 +1596,7 @@ impl AgentDiff {
fn reject_all(
editor: &Entity<Editor>,
thread: &AgentDiffThread,
thread: &Entity<AcpThread>,
window: &mut Window,
cx: &mut App,
) -> PostReviewState {
@@ -1667,7 +1616,7 @@ impl AgentDiff {
fn keep(
editor: &Entity<Editor>,
thread: &AgentDiffThread,
thread: &Entity<AcpThread>,
window: &mut Window,
cx: &mut App,
) -> PostReviewState {
@@ -1680,7 +1629,7 @@ impl AgentDiff {
fn reject(
editor: &Entity<Editor>,
thread: &AgentDiffThread,
thread: &Entity<AcpThread>,
window: &mut Window,
cx: &mut App,
) -> PostReviewState {
@@ -1703,7 +1652,7 @@ impl AgentDiff {
fn review_in_active_editor(
&mut self,
workspace: &mut Workspace,
review: impl Fn(&Entity<Editor>, &AgentDiffThread, &mut Window, &mut App) -> PostReviewState,
review: impl Fn(&Entity<Editor>, &Entity<AcpThread>, &mut Window, &mut App) -> PostReviewState,
window: &mut Window,
cx: &mut Context<Self>,
) -> Option<Task<Result<()>>> {
@@ -1725,7 +1674,7 @@ impl AgentDiff {
if let PostReviewState::AllReviewed = review(&editor, &thread, window, cx)
&& let Some(curr_buffer) = editor.read(cx).buffer().read(cx).as_singleton()
{
let changed_buffers = thread.action_log(cx).read(cx).changed_buffers(cx);
let changed_buffers = thread.read(cx).action_log().read(cx).changed_buffers(cx);
let mut keys = changed_buffers.keys().cycle();
keys.find(|k| *k == &curr_buffer);
@@ -1768,12 +1717,11 @@ mod tests {
use super::*;
use crate::Keep;
use acp_thread::AgentConnection as _;
use agent_settings::AgentSettings;
use editor::EditorSettings;
use gpui::{TestAppContext, UpdateGlobal, VisualTestContext};
use project::{FakeFs, Project};
use serde_json::json;
use settings::{Settings, SettingsStore};
use settings::SettingsStore;
use std::{path::Path, rc::Rc};
use util::path;
@@ -1782,13 +1730,8 @@ mod tests {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
language::init(cx);
Project::init_settings(cx);
AgentSettings::register(cx);
prompt_store::init(cx);
workspace::init_settings(cx);
theme::init(theme::LoadThemes::JustBase, cx);
EditorSettings::register(cx);
language_model::init_settings(cx);
});
@@ -1815,8 +1758,7 @@ mod tests {
.await
.unwrap();
let thread = AgentDiffThread::AcpThread(thread);
let action_log = cx.read(|cx| thread.action_log(cx));
let action_log = cx.read(|cx| thread.read(cx).action_log().clone());
let (workspace, cx) =
cx.add_window_view(|window, cx| Workspace::test_new(project.clone(), window, cx));
@@ -1942,13 +1884,8 @@ mod tests {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
language::init(cx);
Project::init_settings(cx);
AgentSettings::register(cx);
prompt_store::init(cx);
workspace::init_settings(cx);
theme::init(theme::LoadThemes::JustBase, cx);
EditorSettings::register(cx);
language_model::init_settings(cx);
workspace::register_project_item::<Editor>(cx);
});
@@ -2004,7 +1941,6 @@ mod tests {
let action_log = thread.read_with(cx, |thread, _| thread.action_log().clone());
// Set the active thread
let thread = AgentDiffThread::AcpThread(thread);
cx.update(|window, cx| {
AgentDiff::set_active_thread(&workspace.downgrade(), thread.clone(), window, cx)
});

View File

@@ -16,7 +16,7 @@ use serde::{Deserialize, Serialize};
use settings::{
DefaultAgentView as DefaultView, LanguageModelProviderSetting, LanguageModelSelection,
};
use zed_actions::OpenBrowser;
use zed_actions::agent::{OpenClaudeCodeOnboardingModal, ReauthenticateAgent};
use crate::ui::{AcpOnboardingModal, ClaudeCodeOnboardingModal};
@@ -1880,7 +1880,12 @@ impl AgentPanel {
{
let focus_handle = focus_handle.clone();
move |_window, cx| {
Tooltip::for_action_in("New…", &ToggleNewThreadMenu, &focus_handle, cx)
Tooltip::for_action_in(
"New Thread…",
&ToggleNewThreadMenu,
&focus_handle,
cx,
)
}
},
)
@@ -1978,7 +1983,7 @@ impl AgentPanel {
.separator()
.header("External Agents")
.item(
ContextMenuEntry::new("New Claude Code Thread")
ContextMenuEntry::new("New Claude Code")
.icon(IconName::AiClaude)
.disabled(is_via_collab)
.icon_color(Color::Muted)
@@ -2004,7 +2009,7 @@ impl AgentPanel {
}),
)
.item(
ContextMenuEntry::new("New Codex Thread")
ContextMenuEntry::new("New Codex CLI")
.icon(IconName::AiOpenAi)
.disabled(is_via_collab)
.icon_color(Color::Muted)
@@ -2030,7 +2035,7 @@ impl AgentPanel {
}),
)
.item(
ContextMenuEntry::new("New Gemini CLI Thread")
ContextMenuEntry::new("New Gemini CLI")
.icon(IconName::AiGemini)
.icon_color(Color::Muted)
.disabled(is_via_collab)
@@ -2074,9 +2079,9 @@ impl AgentPanel {
for agent_name in agent_names {
let icon_path = agent_server_store_read.agent_icon(&agent_name);
let mut entry =
ContextMenuEntry::new(format!("New {} Thread", agent_name));
ContextMenuEntry::new(format!("New {}", agent_name));
if let Some(icon_path) = icon_path {
entry = entry.custom_icon_path(icon_path);
entry = entry.custom_icon_svg(icon_path);
} else {
entry = entry.icon(IconName::Terminal);
}
@@ -2126,12 +2131,20 @@ impl AgentPanel {
menu
})
.separator()
.link(
"Add Other Agents",
OpenBrowser {
url: zed_urls::external_agents_docs(cx),
}
.boxed_clone(),
.item(
ContextMenuEntry::new("Add More Agents")
.icon(IconName::Plus)
.icon_color(Color::Muted)
.handler({
move |window, cx| {
window.dispatch_action(Box::new(zed_actions::Extensions {
category_filter: Some(
zed_actions::ExtensionCategoryFilter::AgentServers,
),
id: None,
}), cx)
}
}),
)
}))
}
@@ -2145,7 +2158,7 @@ impl AgentPanel {
.when_some(selected_agent_custom_icon, |this, icon_path| {
let label = selected_agent_label.clone();
this.px(DynamicSpacing::Base02.rems(cx))
.child(Icon::from_path(icon_path).color(Color::Muted))
.child(Icon::from_external_svg(icon_path).color(Color::Muted))
.tooltip(move |_window, cx| {
Tooltip::with_meta(label.clone(), None, "Selected Agent", cx)
})

View File

@@ -12,7 +12,6 @@ mod context_strip;
mod inline_assistant;
mod inline_prompt_editor;
mod language_model_selector;
mod message_editor;
mod profile_selector;
mod slash_command;
mod slash_command_picker;
@@ -248,8 +247,6 @@ pub fn init(
is_eval: bool,
cx: &mut App,
) {
AgentSettings::register(cx);
assistant_text_thread::init(client.clone(), cx);
rules_library::init(cx);
if !is_eval {

View File

@@ -1082,10 +1082,7 @@ mod tests {
};
use gpui::TestAppContext;
use indoc::indoc;
use language::{
Buffer, Language, LanguageConfig, LanguageMatcher, Point, language_settings,
tree_sitter_rust,
};
use language::{Buffer, Language, LanguageConfig, LanguageMatcher, Point, tree_sitter_rust};
use language_model::{LanguageModelRegistry, TokenUsage};
use rand::prelude::*;
use settings::SettingsStore;
@@ -1465,8 +1462,6 @@ mod tests {
fn init_test(cx: &mut TestAppContext) {
cx.update(LanguageModelRegistry::test);
cx.set_global(cx.update(SettingsStore::test));
cx.update(Project::init_settings);
cx.update(language_settings::init);
}
fn simulate_response_stream(

View File

@@ -1075,8 +1075,6 @@ mod tests {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
language::init(cx);
Project::init_settings(cx);
});
}

View File

@@ -42,7 +42,7 @@ use super::{
ContextPickerAction, ContextPickerEntry, ContextPickerMode, MentionLink, RecentEntry,
available_context_picker_entries, recent_context_picker_entries_with_store, selection_ranges,
};
use crate::message_editor::ContextCreasesAddon;
use crate::inline_prompt_editor::ContextCreasesAddon;
pub(crate) enum Match {
File(FileMatch),
@@ -1182,10 +1182,8 @@ mod tests {
let app_state = cx.update(AppState::test);
cx.update(|cx| {
language::init(cx);
editor::init(cx);
workspace::init(app_state.clone(), cx);
Project::init_settings(cx);
});
app_state
@@ -1486,10 +1484,8 @@ mod tests {
let app_state = cx.update(AppState::test);
cx.update(|cx| {
language::init(cx);
editor::init(cx);
workspace::init(app_state.clone(), cx);
Project::init_settings(cx);
});
app_state
@@ -1686,11 +1682,6 @@ mod tests {
let store = SettingsStore::test(cx);
cx.set_global(store);
theme::init(theme::LoadThemes::JustBase, cx);
client::init_settings(cx);
language::init(cx);
Project::init_settings(cx);
workspace::init_settings(cx);
editor::init_settings(cx);
});
}
}

View File

@@ -1,8 +1,8 @@
use crate::context_store::ContextStore;
use agent::HistoryStore;
use collections::VecDeque;
use collections::{HashMap, VecDeque};
use editor::actions::Paste;
use editor::display_map::EditorMargins;
use editor::display_map::{CreaseId, EditorMargins};
use editor::{Addon, AnchorRangeExt as _};
use editor::{
ContextMenuOptions, Editor, EditorElement, EditorEvent, EditorMode, EditorStyle, MultiBuffer,
actions::{MoveDown, MoveUp},
@@ -17,6 +17,7 @@ use parking_lot::Mutex;
use prompt_store::PromptStore;
use settings::Settings;
use std::cmp;
use std::ops::Range;
use std::rc::Rc;
use std::sync::Arc;
use theme::ThemeSettings;
@@ -27,12 +28,15 @@ use zed_actions::agent::ToggleModelSelector;
use crate::agent_model_selector::AgentModelSelector;
use crate::buffer_codegen::BufferCodegen;
use crate::context_picker::{ContextPicker, ContextPickerCompletionProvider};
use crate::context::{AgentContextHandle, AgentContextKey};
use crate::context_picker::{ContextPicker, ContextPickerCompletionProvider, crease_for_mention};
use crate::context_store::{ContextStore, ContextStoreEvent};
use crate::context_strip::{ContextStrip, ContextStripEvent, SuggestContextKind};
use crate::message_editor::{ContextCreasesAddon, extract_message_creases, insert_message_creases};
use crate::terminal_codegen::TerminalCodegen;
use crate::{CycleNextInlineAssist, CyclePreviousInlineAssist, ModelUsageContext};
use crate::{RemoveAllContext, ToggleContextPicker};
use crate::{
CycleNextInlineAssist, CyclePreviousInlineAssist, ModelUsageContext, RemoveAllContext,
ToggleContextPicker,
};
pub struct PromptEditor<T> {
pub editor: Entity<Editor>,
@@ -1157,3 +1161,156 @@ impl GenerationMode {
}
}
}
/// Stored information that can be used to resurrect a context crease when creating an editor for a past message.
#[derive(Clone, Debug)]
pub struct MessageCrease {
pub range: Range<usize>,
pub icon_path: SharedString,
pub label: SharedString,
/// None for a deserialized message, Some otherwise.
pub context: Option<AgentContextHandle>,
}
#[derive(Default)]
pub struct ContextCreasesAddon {
creases: HashMap<AgentContextKey, Vec<(CreaseId, SharedString)>>,
_subscription: Option<Subscription>,
}
impl Addon for ContextCreasesAddon {
fn to_any(&self) -> &dyn std::any::Any {
self
}
fn to_any_mut(&mut self) -> Option<&mut dyn std::any::Any> {
Some(self)
}
}
impl ContextCreasesAddon {
pub fn new() -> Self {
Self {
creases: HashMap::default(),
_subscription: None,
}
}
pub fn add_creases(
&mut self,
context_store: &Entity<ContextStore>,
key: AgentContextKey,
creases: impl IntoIterator<Item = (CreaseId, SharedString)>,
cx: &mut Context<Editor>,
) {
self.creases.entry(key).or_default().extend(creases);
self._subscription = Some(
cx.subscribe(context_store, |editor, _, event, cx| match event {
ContextStoreEvent::ContextRemoved(key) => {
let Some(this) = editor.addon_mut::<Self>() else {
return;
};
let (crease_ids, replacement_texts): (Vec<_>, Vec<_>) = this
.creases
.remove(key)
.unwrap_or_default()
.into_iter()
.unzip();
let ranges = editor
.remove_creases(crease_ids, cx)
.into_iter()
.map(|(_, range)| range)
.collect::<Vec<_>>();
editor.unfold_ranges(&ranges, false, false, cx);
editor.edit(ranges.into_iter().zip(replacement_texts), cx);
cx.notify();
}
}),
)
}
pub fn into_inner(self) -> HashMap<AgentContextKey, Vec<(CreaseId, SharedString)>> {
self.creases
}
}
pub fn extract_message_creases(
editor: &mut Editor,
cx: &mut Context<'_, Editor>,
) -> Vec<MessageCrease> {
let buffer_snapshot = editor.buffer().read(cx).snapshot(cx);
let mut contexts_by_crease_id = editor
.addon_mut::<ContextCreasesAddon>()
.map(std::mem::take)
.unwrap_or_default()
.into_inner()
.into_iter()
.flat_map(|(key, creases)| {
let context = key.0;
creases
.into_iter()
.map(move |(id, _)| (id, context.clone()))
})
.collect::<HashMap<_, _>>();
// Filter the addon's list of creases based on what the editor reports,
// since the addon might have removed creases in it.
editor.display_map.update(cx, |display_map, cx| {
display_map
.snapshot(cx)
.crease_snapshot
.creases()
.filter_map(|(id, crease)| {
Some((
id,
(
crease.range().to_offset(&buffer_snapshot),
crease.metadata()?.clone(),
),
))
})
.map(|(id, (range, metadata))| {
let context = contexts_by_crease_id.remove(&id);
MessageCrease {
range,
context,
label: metadata.label,
icon_path: metadata.icon_path,
}
})
.collect()
})
}
pub fn insert_message_creases(
editor: &mut Editor,
message_creases: &[MessageCrease],
context_store: &Entity<ContextStore>,
window: &mut Window,
cx: &mut Context<'_, Editor>,
) {
let buffer_snapshot = editor.buffer().read(cx).snapshot(cx);
let creases = message_creases
.iter()
.map(|crease| {
let start = buffer_snapshot.anchor_after(crease.range.start);
let end = buffer_snapshot.anchor_before(crease.range.end);
crease_for_mention(
crease.label.clone(),
crease.icon_path.clone(),
start..end,
cx.weak_entity(),
)
})
.collect::<Vec<_>>();
let ids = editor.insert_creases(creases.clone(), cx);
editor.fold_creases(creases, false, window, cx);
if let Some(addon) = editor.addon_mut::<ContextCreasesAddon>() {
for (crease, id) in message_creases.iter().zip(ids) {
if let Some(context) = crease.context.as_ref() {
let key = AgentContextKey(context.clone());
addon.add_creases(context_store, key, vec![(id, crease.label.clone())], cx);
}
}
}
}

View File

@@ -177,7 +177,7 @@ impl LanguageModelPickerDelegate {
}
_ => {
log::error!(
"Failed to authenticate provider: {}: {err}",
"Failed to authenticate provider: {}: {err:#}",
provider_name.0
);
}

View File

@@ -1,166 +0,0 @@
use std::ops::Range;
use collections::HashMap;
use editor::display_map::CreaseId;
use editor::{Addon, AnchorRangeExt, Editor};
use gpui::{Entity, Subscription};
use ui::prelude::*;
use crate::{
context::{AgentContextHandle, AgentContextKey},
context_picker::crease_for_mention,
context_store::{ContextStore, ContextStoreEvent},
};
/// Stored information that can be used to resurrect a context crease when creating an editor for a past message.
#[derive(Clone, Debug)]
pub struct MessageCrease {
pub range: Range<usize>,
pub icon_path: SharedString,
pub label: SharedString,
/// None for a deserialized message, Some otherwise.
pub context: Option<AgentContextHandle>,
}
#[derive(Default)]
pub struct ContextCreasesAddon {
creases: HashMap<AgentContextKey, Vec<(CreaseId, SharedString)>>,
_subscription: Option<Subscription>,
}
impl Addon for ContextCreasesAddon {
fn to_any(&self) -> &dyn std::any::Any {
self
}
fn to_any_mut(&mut self) -> Option<&mut dyn std::any::Any> {
Some(self)
}
}
impl ContextCreasesAddon {
pub fn new() -> Self {
Self {
creases: HashMap::default(),
_subscription: None,
}
}
pub fn add_creases(
&mut self,
context_store: &Entity<ContextStore>,
key: AgentContextKey,
creases: impl IntoIterator<Item = (CreaseId, SharedString)>,
cx: &mut Context<Editor>,
) {
self.creases.entry(key).or_default().extend(creases);
self._subscription = Some(
cx.subscribe(context_store, |editor, _, event, cx| match event {
ContextStoreEvent::ContextRemoved(key) => {
let Some(this) = editor.addon_mut::<Self>() else {
return;
};
let (crease_ids, replacement_texts): (Vec<_>, Vec<_>) = this
.creases
.remove(key)
.unwrap_or_default()
.into_iter()
.unzip();
let ranges = editor
.remove_creases(crease_ids, cx)
.into_iter()
.map(|(_, range)| range)
.collect::<Vec<_>>();
editor.unfold_ranges(&ranges, false, false, cx);
editor.edit(ranges.into_iter().zip(replacement_texts), cx);
cx.notify();
}
}),
)
}
pub fn into_inner(self) -> HashMap<AgentContextKey, Vec<(CreaseId, SharedString)>> {
self.creases
}
}
pub fn extract_message_creases(
editor: &mut Editor,
cx: &mut Context<'_, Editor>,
) -> Vec<MessageCrease> {
let buffer_snapshot = editor.buffer().read(cx).snapshot(cx);
let mut contexts_by_crease_id = editor
.addon_mut::<ContextCreasesAddon>()
.map(std::mem::take)
.unwrap_or_default()
.into_inner()
.into_iter()
.flat_map(|(key, creases)| {
let context = key.0;
creases
.into_iter()
.map(move |(id, _)| (id, context.clone()))
})
.collect::<HashMap<_, _>>();
// Filter the addon's list of creases based on what the editor reports,
// since the addon might have removed creases in it.
editor.display_map.update(cx, |display_map, cx| {
display_map
.snapshot(cx)
.crease_snapshot
.creases()
.filter_map(|(id, crease)| {
Some((
id,
(
crease.range().to_offset(&buffer_snapshot),
crease.metadata()?.clone(),
),
))
})
.map(|(id, (range, metadata))| {
let context = contexts_by_crease_id.remove(&id);
MessageCrease {
range,
context,
label: metadata.label,
icon_path: metadata.icon_path,
}
})
.collect()
})
}
pub fn insert_message_creases(
editor: &mut Editor,
message_creases: &[MessageCrease],
context_store: &Entity<ContextStore>,
window: &mut Window,
cx: &mut Context<'_, Editor>,
) {
let buffer_snapshot = editor.buffer().read(cx).snapshot(cx);
let creases = message_creases
.iter()
.map(|crease| {
let start = buffer_snapshot.anchor_after(crease.range.start);
let end = buffer_snapshot.anchor_before(crease.range.end);
crease_for_mention(
crease.label.clone(),
crease.icon_path.clone(),
start..end,
cx.weak_entity(),
)
})
.collect::<Vec<_>>();
let ids = editor.insert_creases(creases.clone(), cx);
editor.fold_creases(creases, false, window, cx);
if let Some(addon) = editor.addon_mut::<ContextCreasesAddon>() {
for (crease, id) in message_creases.iter().zip(ids) {
if let Some(context) = crease.context.as_ref() {
let key = AgentContextKey(context.clone());
addon.add_creases(context_store, key, vec![(id, crease.label.clone())], cx);
}
}
}
}

View File

@@ -477,7 +477,7 @@ impl TextThreadEditor {
editor.insert(&format!("/{name}"), window, cx);
if command.accepts_arguments() {
editor.insert(" ", window, cx);
editor.show_completions(&ShowCompletions, window, cx);
editor.show_completions(&ShowCompletions::default(), window, cx);
}
});
});
@@ -3223,11 +3223,7 @@ mod tests {
prompt_store::init(cx);
LanguageModelRegistry::test(cx);
cx.set_global(settings_store);
language::init(cx);
agent_settings::init(cx);
Project::init_settings(cx);
theme::init(theme::LoadThemes::JustBase, cx);
workspace::init_settings(cx);
editor::init_settings(cx);
}
}

View File

@@ -577,8 +577,6 @@ mod test {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
// release_channel::init(SemanticVersion::default(), cx);
language::init(cx);
Project::init_settings(cx);
});
}

View File

@@ -22,7 +22,6 @@ use language_model::{
};
use parking_lot::Mutex;
use pretty_assertions::assert_eq;
use project::Project;
use prompt_store::PromptBuilder;
use rand::prelude::*;
use serde_json::json;
@@ -1411,9 +1410,6 @@ fn init_test(cx: &mut App) {
prompt_store::init(cx);
LanguageModelRegistry::test(cx);
cx.set_global(settings_store);
language::init(cx);
agent_settings::init(cx);
Project::init_settings(cx);
}
#[derive(Clone)]

View File

@@ -48,7 +48,6 @@ pub const LEGACY_CHANNEL_COUNT: NonZero<u16> = nz!(2);
pub const REPLAY_DURATION: Duration = Duration::from_secs(30);
pub fn init(cx: &mut App) {
AudioSettings::register(cx);
LIVE_SETTINGS.initialize(cx);
}

View File

@@ -1,9 +1,9 @@
use std::sync::atomic::{AtomicBool, Ordering};
use gpui::App;
use settings::{Settings, SettingsStore};
use settings::{RegisterSetting, Settings, SettingsStore};
#[derive(Clone, Debug)]
#[derive(Clone, Debug, RegisterSetting)]
pub struct AudioSettings {
/// Opt into the new audio system.
///

View File

@@ -10,7 +10,7 @@ use http_client::{AsyncBody, HttpClient, HttpClientWithUrl};
use paths::remote_servers_dir;
use release_channel::{AppCommitSha, ReleaseChannel};
use serde::{Deserialize, Serialize};
use settings::{Settings, SettingsStore};
use settings::{RegisterSetting, Settings, SettingsStore};
use smol::{fs, io::AsyncReadExt};
use smol::{fs::File, process::Command};
use std::mem;
@@ -120,7 +120,7 @@ impl Drop for MacOsUnmounter<'_> {
}
}
#[derive(Clone, Copy, Debug)]
#[derive(Clone, Copy, Debug, RegisterSetting)]
struct AutoUpdateSetting(bool);
/// Whether or not to automatically check for updates.
@@ -138,8 +138,6 @@ struct GlobalAutoUpdate(Option<Entity<AutoUpdater>>);
impl Global for GlobalAutoUpdate {}
pub fn init(http_client: Arc<HttpClientWithUrl>, cx: &mut App) {
AutoUpdateSetting::register(cx);
cx.observe_new(|workspace: &mut Workspace, _window, _cx| {
workspace.register_action(|_, action, window, cx| check(action, window, cx));
@@ -406,6 +404,7 @@ impl AutoUpdater {
arch: &str,
release_channel: ReleaseChannel,
version: Option<SemanticVersion>,
set_status: impl Fn(&str, &mut AsyncApp) + Send + 'static,
cx: &mut AsyncApp,
) -> Result<PathBuf> {
let this = cx.update(|cx| {
@@ -415,6 +414,7 @@ impl AutoUpdater {
.context("auto-update not initialized")
})??;
set_status("Fetching remote server release", cx);
let release = Self::get_release(
&this,
"zed-remote-server",
@@ -439,6 +439,7 @@ impl AutoUpdater {
"downloading zed-remote-server {os} {arch} version {}",
release.version
);
set_status("Downloading remote server", cx);
download_remote_server_binary(&version_path, release, client, cx).await?;
}
@@ -1025,7 +1026,6 @@ mod tests {
.set_user_settings("{}", cx)
.expect("Unable to set user settings");
cx.set_global(store);
AutoUpdateSetting::register(cx);
assert!(AutoUpdateSetting::get_global(cx).0);
});
}

View File

@@ -1,7 +1,6 @@
pub mod participant;
pub mod room;
use crate::call_settings::CallSettings;
use anyhow::{Context as _, Result, anyhow};
use audio::Audio;
use client::{ChannelId, Client, TypedEnvelope, User, UserStore, ZED_ALWAYS_ACTIVE, proto};
@@ -14,7 +13,6 @@ use gpui::{
use postage::watch;
use project::Project;
use room::Event;
use settings::Settings;
use std::sync::Arc;
pub use livekit_client::{RemoteVideoTrack, RemoteVideoTrackView, RemoteVideoTrackViewEvent};
@@ -26,8 +24,6 @@ struct GlobalActiveCall(Entity<ActiveCall>);
impl Global for GlobalActiveCall {}
pub fn init(client: Arc<Client>, user_store: Entity<UserStore>, cx: &mut App) {
CallSettings::register(cx);
let active_call = cx.new(|cx| ActiveCall::new(client, user_store, cx));
cx.set_global(GlobalActiveCall(active_call));
}

View File

@@ -1,6 +1,6 @@
use settings::Settings;
use settings::{RegisterSetting, Settings};
#[derive(Debug)]
#[derive(Debug, RegisterSetting)]
pub struct CallSettings {
pub mute_on_join: bool,
pub share_on_join: bool,

View File

@@ -237,7 +237,6 @@ fn init_test(cx: &mut App) -> Entity<ChannelStore> {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
release_channel::init(SemanticVersion::default(), cx);
client::init_settings(cx);
let clock = Arc::new(FakeSystemClock::new());
let http = FakeHttpClient::with_404_response();

View File

@@ -30,7 +30,7 @@ use rand::prelude::*;
use release_channel::{AppVersion, ReleaseChannel};
use rpc::proto::{AnyTypedEnvelope, EnvelopedMessage, PeerId, RequestMessage};
use serde::{Deserialize, Serialize};
use settings::{Settings, SettingsContent};
use settings::{RegisterSetting, Settings, SettingsContent};
use std::{
any::TypeId,
convert::TryFrom,
@@ -95,7 +95,7 @@ actions!(
]
);
#[derive(Deserialize)]
#[derive(Deserialize, RegisterSetting)]
pub struct ClientSettings {
pub server_url: String,
}
@@ -113,7 +113,7 @@ impl Settings for ClientSettings {
}
}
#[derive(Deserialize, Default)]
#[derive(Deserialize, Default, RegisterSetting)]
pub struct ProxySettings {
pub proxy: Option<String>,
}
@@ -140,12 +140,6 @@ impl Settings for ProxySettings {
}
}
pub fn init_settings(cx: &mut App) {
TelemetrySettings::register(cx);
ClientSettings::register(cx);
ProxySettings::register(cx);
}
pub fn init(client: &Arc<Client>, cx: &mut App) {
let client = Arc::downgrade(client);
cx.on_action({
@@ -508,7 +502,7 @@ impl<T: 'static> Drop for PendingEntitySubscription<T> {
}
}
#[derive(Copy, Clone, Deserialize, Debug)]
#[derive(Copy, Clone, Deserialize, Debug, RegisterSetting)]
pub struct TelemetrySettings {
pub diagnostics: bool,
pub metrics: bool,
@@ -2177,7 +2171,6 @@ mod tests {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
init_settings(cx);
});
}
}

View File

@@ -179,8 +179,6 @@ impl Telemetry {
let release_channel =
ReleaseChannel::try_global(cx).map(|release_channel| release_channel.display_name());
TelemetrySettings::register(cx);
let state = Arc::new(Mutex::new(TelemetryState {
settings: *TelemetrySettings::get_global(cx),
architecture: env::consts::ARCH,

View File

@@ -260,7 +260,7 @@ impl fmt::Debug for Lamport {
impl fmt::Debug for Global {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "Global {{")?;
for timestamp in self.iter() {
for timestamp in self.iter().filter(|t| t.value > 0) {
if timestamp.replica_id.0 > 0 {
write!(f, ", ")?;
}

View File

@@ -1,5 +1,4 @@
pub mod predict_edits_v3;
pub mod udiff;
use std::str::FromStr;
use std::sync::Arc;
@@ -184,13 +183,13 @@ pub struct PredictEditsGitInfo {
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PredictEditsResponse {
pub request_id: Uuid,
pub request_id: String,
pub output_excerpt: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AcceptEditPredictionBody {
pub request_id: Uuid,
pub request_id: String,
}
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Clone, Copy, Serialize, Deserialize)]

View File

@@ -1,7 +1,7 @@
use chrono::Duration;
use serde::{Deserialize, Serialize};
use std::{
fmt::Display,
fmt::{Display, Write as _},
ops::{Add, Range, Sub},
path::{Path, PathBuf},
sync::Arc,
@@ -11,7 +11,14 @@ use uuid::Uuid;
use crate::PredictEditsGitInfo;
// TODO: snippet ordering within file / relative to excerpt
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PlanContextRetrievalRequest {
pub excerpt: String,
pub excerpt_path: Arc<Path>,
pub excerpt_line_range: Range<Line>,
pub cursor_file_max_row: Line,
pub events: Vec<Event>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct PredictEditsRequest {
@@ -125,15 +132,15 @@ impl Display for Event {
write!(
f,
"// User accepted prediction:\n--- a/{}\n+++ b/{}\n{diff}",
old_path.display(),
new_path.display()
DiffPathFmt(old_path),
DiffPathFmt(new_path)
)
} else {
write!(
f,
"--- a/{}\n+++ b/{}\n{diff}",
old_path.display(),
new_path.display()
DiffPathFmt(old_path),
DiffPathFmt(new_path)
)
}
}
@@ -141,6 +148,24 @@ impl Display for Event {
}
}
/// Always format the path as a Unix-style path, with `/` as the separator in diffs.
pub struct DiffPathFmt<'a>(pub &'a Path);
impl<'a> std::fmt::Display for DiffPathFmt<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut is_first = true;
for component in self.0.components() {
if !is_first {
f.write_char('/')?;
} else {
is_first = false;
}
write!(f, "{}", component.as_os_str().display())?;
}
Ok(())
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Signature {
pub text: String,
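A minimal usage sketch of the `DiffPathFmt` wrapper added above, assuming the `cloud_llm_client` crate is available as in the imports elsewhere in this changeset. The path is built with `collect`, so the assertion holds whether the host separator is `/` or `\`:

```rust
use cloud_llm_client::predict_edits_v3::DiffPathFmt;
use std::path::PathBuf;

fn main() {
    // Components are re-joined with '/', so the rendered path is unix-style
    // even when the PathBuf was built with '\' on Windows.
    let path: PathBuf = ["crates", "zeta", "src", "lib.rs"].iter().collect();
    assert_eq!(DiffPathFmt(&path).to_string(), "crates/zeta/src/lib.rs");
}
```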

View File

@@ -1,294 +0,0 @@
use std::{borrow::Cow, fmt::Display};
#[derive(Debug, PartialEq)]
pub enum DiffLine<'a> {
OldPath { path: Cow<'a, str> },
NewPath { path: Cow<'a, str> },
HunkHeader(Option<HunkLocation>),
Context(&'a str),
Deletion(&'a str),
Addition(&'a str),
Garbage(&'a str),
}
#[derive(Debug, PartialEq)]
pub struct HunkLocation {
start_line_old: u32,
count_old: u32,
start_line_new: u32,
count_new: u32,
}
impl<'a> DiffLine<'a> {
pub fn parse(line: &'a str) -> Self {
Self::try_parse(line).unwrap_or(Self::Garbage(line))
}
fn try_parse(line: &'a str) -> Option<Self> {
if let Some(header) = line.strip_prefix("---").and_then(eat_required_whitespace) {
let path = parse_header_path("a/", header);
Some(Self::OldPath { path })
} else if let Some(header) = line.strip_prefix("+++").and_then(eat_required_whitespace) {
Some(Self::NewPath {
path: parse_header_path("b/", header),
})
} else if let Some(header) = line.strip_prefix("@@").and_then(eat_required_whitespace) {
if header.starts_with("...") {
return Some(Self::HunkHeader(None));
}
let (start_line_old, header) = header.strip_prefix('-')?.split_once(',')?;
let mut parts = header.split_ascii_whitespace();
let count_old = parts.next()?;
let (start_line_new, count_new) = parts.next()?.strip_prefix('+')?.split_once(',')?;
Some(Self::HunkHeader(Some(HunkLocation {
start_line_old: start_line_old.parse::<u32>().ok()?.saturating_sub(1),
count_old: count_old.parse().ok()?,
start_line_new: start_line_new.parse::<u32>().ok()?.saturating_sub(1),
count_new: count_new.parse().ok()?,
})))
} else if let Some(deleted_header) = line.strip_prefix("-") {
Some(Self::Deletion(deleted_header))
} else if line.is_empty() {
Some(Self::Context(""))
} else if let Some(context) = line.strip_prefix(" ") {
Some(Self::Context(context))
} else {
Some(Self::Addition(line.strip_prefix("+")?))
}
}
}
impl<'a> Display for DiffLine<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
DiffLine::OldPath { path } => write!(f, "--- {path}"),
DiffLine::NewPath { path } => write!(f, "+++ {path}"),
DiffLine::HunkHeader(Some(hunk_location)) => {
write!(
f,
"@@ -{},{} +{},{} @@",
hunk_location.start_line_old + 1,
hunk_location.count_old,
hunk_location.start_line_new + 1,
hunk_location.count_new
)
}
DiffLine::HunkHeader(None) => write!(f, "@@ ... @@"),
DiffLine::Context(content) => write!(f, " {content}"),
DiffLine::Deletion(content) => write!(f, "-{content}"),
DiffLine::Addition(content) => write!(f, "+{content}"),
DiffLine::Garbage(line) => write!(f, "{line}"),
}
}
}
fn parse_header_path<'a>(strip_prefix: &'static str, header: &'a str) -> Cow<'a, str> {
if !header.contains(['"', '\\']) {
let path = header.split_ascii_whitespace().next().unwrap_or(header);
return Cow::Borrowed(path.strip_prefix(strip_prefix).unwrap_or(path));
}
let mut path = String::with_capacity(header.len());
let mut in_quote = false;
let mut chars = header.chars().peekable();
let mut strip_prefix = Some(strip_prefix);
while let Some(char) = chars.next() {
if char == '"' {
in_quote = !in_quote;
} else if char == '\\' {
let Some(&next_char) = chars.peek() else {
break;
};
chars.next();
path.push(next_char);
} else if char.is_ascii_whitespace() && !in_quote {
break;
} else {
path.push(char);
}
if let Some(prefix) = strip_prefix
&& path == prefix
{
strip_prefix.take();
path.clear();
}
}
Cow::Owned(path)
}
fn eat_required_whitespace(header: &str) -> Option<&str> {
let trimmed = header.trim_ascii_start();
if trimmed.len() == header.len() {
None
} else {
Some(trimmed)
}
}
#[cfg(test)]
mod tests {
use super::*;
use indoc::indoc;
#[test]
fn parse_lines_simple() {
let input = indoc! {"
diff --git a/text.txt b/text.txt
index 86c770d..a1fd855 100644
--- a/file.txt
+++ b/file.txt
@@ -1,2 +1,3 @@
context
-deleted
+inserted
garbage
--- b/file.txt
+++ a/file.txt
"};
let lines = input.lines().map(DiffLine::parse).collect::<Vec<_>>();
pretty_assertions::assert_eq!(
lines,
&[
DiffLine::Garbage("diff --git a/text.txt b/text.txt"),
DiffLine::Garbage("index 86c770d..a1fd855 100644"),
DiffLine::OldPath {
path: "file.txt".into()
},
DiffLine::NewPath {
path: "file.txt".into()
},
DiffLine::HunkHeader(Some(HunkLocation {
start_line_old: 0,
count_old: 2,
start_line_new: 0,
count_new: 3
})),
DiffLine::Context("context"),
DiffLine::Deletion("deleted"),
DiffLine::Addition("inserted"),
DiffLine::Garbage("garbage"),
DiffLine::Context(""),
DiffLine::OldPath {
path: "b/file.txt".into()
},
DiffLine::NewPath {
path: "a/file.txt".into()
},
]
);
}
#[test]
fn file_header_extra_space() {
let options = ["--- file", "--- file", "---\tfile"];
for option in options {
pretty_assertions::assert_eq!(
DiffLine::parse(option),
DiffLine::OldPath {
path: "file".into()
},
"{option}",
);
}
}
#[test]
fn hunk_header_extra_space() {
let options = [
"@@ -1,2 +1,3 @@",
"@@ -1,2 +1,3 @@",
"@@\t-1,2\t+1,3\t@@",
"@@ -1,2 +1,3 @@",
"@@ -1,2 +1,3 @@",
"@@ -1,2 +1,3 @@",
"@@ -1,2 +1,3 @@ garbage",
];
for option in options {
pretty_assertions::assert_eq!(
DiffLine::parse(option),
DiffLine::HunkHeader(Some(HunkLocation {
start_line_old: 0,
count_old: 2,
start_line_new: 0,
count_new: 3
})),
"{option}",
);
}
}
#[test]
fn hunk_header_without_location() {
pretty_assertions::assert_eq!(DiffLine::parse("@@ ... @@"), DiffLine::HunkHeader(None));
}
#[test]
fn test_parse_path() {
assert_eq!(parse_header_path("a/", "foo.txt"), "foo.txt");
assert_eq!(
parse_header_path("a/", "foo/bar/baz.txt"),
"foo/bar/baz.txt"
);
assert_eq!(parse_header_path("a/", "a/foo.txt"), "foo.txt");
assert_eq!(
parse_header_path("a/", "a/foo/bar/baz.txt"),
"foo/bar/baz.txt"
);
// Extra
assert_eq!(
parse_header_path("a/", "a/foo/bar/baz.txt 2025"),
"foo/bar/baz.txt"
);
assert_eq!(
parse_header_path("a/", "a/foo/bar/baz.txt\t2025"),
"foo/bar/baz.txt"
);
assert_eq!(
parse_header_path("a/", "a/foo/bar/baz.txt \""),
"foo/bar/baz.txt"
);
// Quoted
assert_eq!(
parse_header_path("a/", "a/foo/bar/\"baz quox.txt\""),
"foo/bar/baz quox.txt"
);
assert_eq!(
parse_header_path("a/", "\"a/foo/bar/baz quox.txt\""),
"foo/bar/baz quox.txt"
);
assert_eq!(
parse_header_path("a/", "\"foo/bar/baz quox.txt\""),
"foo/bar/baz quox.txt"
);
assert_eq!(parse_header_path("a/", "\"whatever 🤷\""), "whatever 🤷");
assert_eq!(
parse_header_path("a/", "\"foo/bar/baz quox.txt\" 2025"),
"foo/bar/baz quox.txt"
);
// unescaped quotes are dropped
assert_eq!(parse_header_path("a/", "foo/\"bar\""), "foo/bar");
// Escaped
assert_eq!(
parse_header_path("a/", "\"foo/\\\"bar\\\"/baz.txt\""),
"foo/\"bar\"/baz.txt"
);
assert_eq!(
parse_header_path("a/", "\"C:\\\\Projects\\\\My App\\\\old file.txt\""),
"C:\\Projects\\My App\\old file.txt"
);
}
}

View File

@@ -17,5 +17,6 @@ cloud_llm_client.workspace = true
indoc.workspace = true
ordered-float.workspace = true
rustc-hash.workspace = true
schemars.workspace = true
serde.workspace = true
strum.workspace = true

View File

@@ -1,8 +1,9 @@
//! Zeta2 prompt planning and generation code shared with cloud.
pub mod retrieval_prompt;
use anyhow::{Context as _, Result, anyhow};
use cloud_llm_client::predict_edits_v3::{
self, Excerpt, Line, Point, PromptFormat, ReferencedDeclaration,
self, DiffPathFmt, Excerpt, Line, Point, PromptFormat, ReferencedDeclaration,
};
use indoc::indoc;
use ordered_float::OrderedFloat;
@@ -212,7 +213,7 @@ pub fn write_codeblock<'a>(
include_line_numbers: bool,
output: &'a mut String,
) {
writeln!(output, "`````{}", path.display()).unwrap();
writeln!(output, "`````{}", DiffPathFmt(path)).unwrap();
write_excerpts(
excerpts,
sorted_insertions,
@@ -275,7 +276,7 @@ pub fn write_excerpts<'a>(
}
}
fn push_events(output: &mut String, events: &[predict_edits_v3::Event]) {
pub fn push_events(output: &mut String, events: &[predict_edits_v3::Event]) {
if events.is_empty() {
return;
};

View File

@@ -0,0 +1,94 @@
use anyhow::Result;
use cloud_llm_client::predict_edits_v3::{self, Excerpt};
use indoc::indoc;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::fmt::Write;
use crate::{push_events, write_codeblock};
pub fn build_prompt(request: predict_edits_v3::PlanContextRetrievalRequest) -> Result<String> {
let mut prompt = SEARCH_INSTRUCTIONS.to_string();
if !request.events.is_empty() {
writeln!(&mut prompt, "## User Edits\n")?;
push_events(&mut prompt, &request.events);
}
writeln!(&mut prompt, "## Cursor context")?;
write_codeblock(
&request.excerpt_path,
&[Excerpt {
start_line: request.excerpt_line_range.start,
text: request.excerpt.into(),
}],
&[],
request.cursor_file_max_row,
true,
&mut prompt,
);
writeln!(&mut prompt, "{TOOL_USE_REMINDER}")?;
Ok(prompt)
}
/// Search for relevant code
///
/// For the best results, run multiple queries at once with a single invocation of this tool.
#[derive(Clone, Deserialize, Serialize, JsonSchema)]
pub struct SearchToolInput {
/// An array of queries to run for gathering context relevant to the next prediction
#[schemars(length(max = 3))]
pub queries: Box<[SearchToolQuery]>,
}
/// Search for relevant code by path, syntax hierarchy, and content.
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
pub struct SearchToolQuery {
/// 1. A glob pattern to match file paths in the codebase to search in.
pub glob: String,
/// 2. Regular expressions to match syntax nodes **by their first line** and hierarchy.
///
/// Subsequent regexes match nodes within the full content of the nodes matched by the previous regexes.
///
/// Example: Searching for a `User` class
/// ["class\s+User"]
///
/// Example: Searching for a `get_full_name` method under a `User` class
/// ["class\s+User", "def\sget_full_name"]
///
/// Skip this field to match on content alone.
#[schemars(length(max = 3))]
#[serde(default)]
pub syntax_node: Vec<String>,
/// 3. An optional regular expression to match the final content that should appear in the results.
///
/// - Content will be matched within all lines of the matched syntax nodes.
/// - If syntax node regexes are provided, this field can be skipped to include as much of the node itself as possible.
/// - If no syntax node regexes are provided, the content will be matched within the entire file.
pub content: Option<String>,
}
pub const TOOL_NAME: &str = "search";
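To make the shape of the tool input concrete, here is an invented query that deserializes against the schema above; only the field names are taken from `SearchToolInput`/`SearchToolQuery`, the glob and regexes are made up, and the types are assumed to be in scope:

```rust
// Illustrative only: an input the model might emit for the `search` tool.
let input: SearchToolInput = serde_json::from_value(serde_json::json!({
    "queries": [{
        "glob": "src/**/*.rs",
        "syntax_node": ["struct\\s+User", "fn\\s+full_name"],
        "content": "display_name"
    }]
}))
.expect("valid SearchToolInput");
assert_eq!(input.queries.len(), 1);
```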
const SEARCH_INSTRUCTIONS: &str = indoc! {r#"
You are part of an edit prediction system in a code editor.
Your role is to search for code that will serve as context for predicting the next edit.
- Analyze the user's recent edits and current cursor context
- Use the `search` tool to find code that is relevant for predicting the next edit
- Focus on finding:
- Code patterns that might need similar changes based on the recent edits
- Functions, variables, types, and constants referenced in the current cursor context
- Related implementations, usages, or dependencies that may require consistent updates
- How items defined in the cursor excerpt are used or altered
- You will not be able to filter results or perform subsequent queries, so keep searches as targeted as possible
- Use `syntax_node` parameter whenever you're looking for a particular type, class, or function
- Avoid using wildcard globs if you already know the file path of the content you're looking for
"#};
const TOOL_USE_REMINDER: &str = indoc! {"
--
Analyze the user's intent in one to two sentences, then call the `search` tool.
"};

View File

@@ -34,7 +34,7 @@ struct CurrentCompletion {
snapshot: BufferSnapshot,
/// The edits that should be applied to transform the original text into the predicted text.
/// Each edit is a range in the buffer and the text to replace it with.
edits: Arc<[(Range<Anchor>, String)]>,
edits: Arc<[(Range<Anchor>, Arc<str>)]>,
/// Preview of how the buffer will look after applying the edits.
edit_preview: EditPreview,
}
@@ -42,7 +42,7 @@ struct CurrentCompletion {
impl CurrentCompletion {
/// Attempts to adjust the edits based on changes made to the buffer since the completion was generated.
/// Returns None if the user's edits conflict with the predicted edits.
fn interpolate(&self, new_snapshot: &BufferSnapshot) -> Option<Vec<(Range<Anchor>, String)>> {
fn interpolate(&self, new_snapshot: &BufferSnapshot) -> Option<Vec<(Range<Anchor>, Arc<str>)>> {
edit_prediction::interpolate_edits(&self.snapshot, new_snapshot, &self.edits)
}
}
@@ -281,8 +281,8 @@ impl EditPredictionProvider for CodestralCompletionProvider {
return Ok(());
}
let edits: Arc<[(Range<Anchor>, String)]> =
vec![(cursor_position..cursor_position, completion_text)].into();
let edits: Arc<[(Range<Anchor>, Arc<str>)]> =
vec![(cursor_position..cursor_position, completion_text.into())].into();
let edit_preview = buffer
.read_with(cx, |buffer, cx| buffer.preview_edits(edits.clone(), cx))?
.await;

View File

@@ -346,6 +346,7 @@ impl Server {
.add_request_handler(forward_read_only_project_request::<proto::ResolveInlayHint>)
.add_request_handler(forward_read_only_project_request::<proto::GetColorPresentation>)
.add_request_handler(forward_read_only_project_request::<proto::OpenBufferByPath>)
.add_request_handler(forward_read_only_project_request::<proto::OpenImageByPath>)
.add_request_handler(forward_read_only_project_request::<proto::GitGetBranches>)
.add_request_handler(forward_read_only_project_request::<proto::GetDefaultBranch>)
.add_request_handler(forward_read_only_project_request::<proto::OpenUnstagedDiff>)
@@ -395,6 +396,7 @@ impl Server {
.add_request_handler(forward_mutating_project_request::<proto::StopLanguageServers>)
.add_request_handler(forward_mutating_project_request::<proto::LinkedEditingRange>)
.add_message_handler(create_buffer_for_peer)
.add_message_handler(create_image_for_peer)
.add_request_handler(update_buffer)
.add_message_handler(broadcast_project_message_from_host::<proto::RefreshInlayHints>)
.add_message_handler(broadcast_project_message_from_host::<proto::RefreshCodeLens>)
@@ -2389,6 +2391,26 @@ async fn create_buffer_for_peer(
Ok(())
}
/// Notify other participants that a new image has been created
async fn create_image_for_peer(
request: proto::CreateImageForPeer,
session: MessageContext,
) -> Result<()> {
session
.db()
.await
.check_user_is_project_host(
ProjectId::from_proto(request.project_id),
session.connection_id,
)
.await?;
let peer_id = request.peer_id.context("invalid peer id")?;
session
.peer
.forward_send(session.connection_id, peer_id.into(), request)?;
Ok(())
}
/// Notify other participants that a buffer has been updated. This is
/// allowed for guests as long as the update is limited to selections.
async fn update_buffer(

View File

@@ -23,9 +23,6 @@ pub fn init_test(cx: &mut gpui::TestAppContext) {
cx.update(|cx| {
theme::init(theme::LoadThemes::JustBase, cx);
command_palette_hooks::init(cx);
language::init(cx);
workspace::init_settings(cx);
project::Project::init_settings(cx);
debugger_ui::init(cx);
editor::init(cx);
});

View File

@@ -7065,7 +7065,7 @@ async fn test_remote_git_branches(
// Also try creating a new branch
cx_b.update(|cx| {
repo_b.update(cx, |repository, _cx| {
repository.create_branch("totally-new-branch".to_string())
repository.create_branch("totally-new-branch".to_string(), None)
})
})
.await

View File

@@ -84,7 +84,6 @@ async fn test_sharing_an_ssh_remote_project(
let node = NodeRuntime::unavailable();
let languages = Arc::new(LanguageRegistry::new(server_cx.executor()));
let _headless_project = server_cx.new(|cx| {
client::init_settings(cx);
HeadlessProject::new(
HeadlessAppState {
session: server_ssh,
@@ -245,7 +244,6 @@ async fn test_ssh_collaboration_git_branches(
let node = NodeRuntime::unavailable();
let languages = Arc::new(LanguageRegistry::new(server_cx.executor()));
let headless_project = server_cx.new(|cx| {
client::init_settings(cx);
HeadlessProject::new(
HeadlessAppState {
session: server_ssh,
@@ -328,7 +326,7 @@ async fn test_ssh_collaboration_git_branches(
// Also try creating a new branch
cx_b.update(|cx| {
repo_b.update(cx, |repo_b, _cx| {
repo_b.create_branch("totally-new-branch".to_string())
repo_b.create_branch("totally-new-branch".to_string(), None)
})
})
.await
@@ -450,7 +448,6 @@ async fn test_ssh_collaboration_formatting_with_prettier(
server_cx.update(HeadlessProject::init);
let remote_http_client = Arc::new(BlockedHttpClient);
let _headless_project = server_cx.new(|cx| {
client::init_settings(cx);
HeadlessProject::new(
HeadlessAppState {
session: server_ssh,
@@ -612,7 +609,6 @@ async fn test_remote_server_debugger(
let node = NodeRuntime::unavailable();
let languages = Arc::new(LanguageRegistry::new(server_cx.executor()));
let _headless_project = server_cx.new(|cx| {
client::init_settings(cx);
HeadlessProject::new(
HeadlessAppState {
session: server_ssh,
@@ -721,7 +717,6 @@ async fn test_slow_adapter_startup_retries(
let node = NodeRuntime::unavailable();
let languages = Arc::new(LanguageRegistry::new(server_cx.executor()));
let _headless_project = server_cx.new(|cx| {
client::init_settings(cx);
HeadlessProject::new(
HeadlessAppState {
session: server_ssh,

View File

@@ -174,7 +174,6 @@ impl TestServer {
cx.set_global(settings);
theme::init(theme::LoadThemes::JustBase, cx);
release_channel::init(SemanticVersion::default(), cx);
client::init_settings(cx);
});
let clock = Arc::new(FakeSystemClock::new());
@@ -345,7 +344,6 @@ impl TestServer {
theme::init(theme::LoadThemes::JustBase, cx);
Project::init(&client, cx);
client::init(&client, cx);
language::init(cx);
editor::init(cx);
workspace::init(app_state.clone(), cx);
call::init(client.clone(), user_store.clone(), cx);
@@ -359,7 +357,6 @@ impl TestServer {
);
language_model::LanguageModelRegistry::test(cx);
assistant_text_thread::init(client.clone(), cx);
agent_settings::init(cx);
});
client

View File

@@ -7,10 +7,11 @@ use anyhow::Context as _;
use call::ActiveCall;
use channel::{Channel, ChannelEvent, ChannelStore};
use client::{ChannelId, Client, Contact, User, UserStore};
use collections::{HashMap, HashSet};
use contact_finder::ContactFinder;
use db::kvp::KEY_VALUE_STORE;
use editor::{Editor, EditorElement, EditorStyle};
use fuzzy::{StringMatchCandidate, match_strings};
use fuzzy::{StringMatch, StringMatchCandidate, match_strings};
use gpui::{
AnyElement, App, AsyncWindowContext, Bounds, ClickEvent, ClipboardItem, Context, DismissEvent,
Div, Entity, EventEmitter, FocusHandle, Focusable, FontStyle, InteractiveElement, IntoElement,
@@ -30,9 +31,9 @@ use smallvec::SmallVec;
use std::{mem, sync::Arc};
use theme::{ActiveTheme, ThemeSettings};
use ui::{
Avatar, AvatarAvailabilityIndicator, Button, Color, ContextMenu, Facepile, Icon, IconButton,
IconName, IconSize, Indicator, Label, ListHeader, ListItem, Tooltip, prelude::*,
tooltip_container,
Avatar, AvatarAvailabilityIndicator, Button, Color, ContextMenu, Facepile, HighlightedLabel,
Icon, IconButton, IconName, IconSize, Indicator, Label, ListHeader, ListItem, Tooltip,
prelude::*, tooltip_container,
};
use util::{ResultExt, TryFutureExt, maybe};
use workspace::{
@@ -261,6 +262,8 @@ enum ListEntry {
channel: Arc<Channel>,
depth: usize,
has_children: bool,
// `None` when the channel is a parent of a matched channel.
string_match: Option<StringMatch>,
},
ChannelNotes {
channel_id: ChannelId,
@@ -630,6 +633,10 @@ impl CollabPanel {
.enumerate()
.map(|(ix, (_, channel))| StringMatchCandidate::new(ix, &channel.name)),
);
let mut channels = channel_store
.ordered_channels()
.map(|(_, chan)| chan)
.collect::<Vec<_>>();
let matches = executor.block(match_strings(
&self.match_candidates,
&query,
@@ -639,14 +646,34 @@ impl CollabPanel {
&Default::default(),
executor.clone(),
));
let matches_by_id: HashMap<_, _> = matches
.iter()
.map(|mat| (channels[mat.candidate_id].id, mat.clone()))
.collect();
let channel_ids_of_matches_or_parents: HashSet<_> = matches
.iter()
.flat_map(|mat| {
let match_channel = channels[mat.candidate_id];
match_channel
.parent_path
.iter()
.copied()
.chain(Some(match_channel.id))
})
.collect();
channels.retain(|chan| channel_ids_of_matches_or_parents.contains(&chan.id));
if let Some(state) = &self.channel_editing_state
&& matches!(state, ChannelEditingState::Create { location: None, .. })
{
self.entries.push(ListEntry::ChannelEditor { depth: 0 });
}
let mut collapse_depth = None;
for mat in matches {
let channel = channel_store.channel_at_index(mat.candidate_id).unwrap();
for (idx, channel) in channels.into_iter().enumerate() {
let depth = channel.parent_path.len();
if collapse_depth.is_none() && self.is_channel_collapsed(channel.id) {
@@ -663,7 +690,7 @@ impl CollabPanel {
}
let has_children = channel_store
.channel_at_index(mat.candidate_id + 1)
.channel_at_index(idx + 1)
.is_some_and(|next_channel| next_channel.parent_path.ends_with(&[channel.id]));
match &self.channel_editing_state {
@@ -675,6 +702,7 @@ impl CollabPanel {
channel: channel.clone(),
depth,
has_children: false,
string_match: matches_by_id.get(&channel.id).map(|mat| (*mat).clone()),
});
self.entries
.push(ListEntry::ChannelEditor { depth: depth + 1 });
@@ -690,6 +718,7 @@ impl CollabPanel {
channel: channel.clone(),
depth,
has_children,
string_match: matches_by_id.get(&channel.id).map(|mat| (*mat).clone()),
});
}
}
@@ -2321,8 +2350,17 @@ impl CollabPanel {
channel,
depth,
has_children,
string_match,
} => self
.render_channel(channel, *depth, *has_children, is_selected, ix, cx)
.render_channel(
channel,
*depth,
*has_children,
is_selected,
ix,
string_match.as_ref(),
cx,
)
.into_any_element(),
ListEntry::ChannelEditor { depth } => self
.render_channel_editor(*depth, window, cx)
@@ -2719,6 +2757,7 @@ impl CollabPanel {
has_children: bool,
is_selected: bool,
ix: usize,
string_match: Option<&StringMatch>,
cx: &mut Context<Self>,
) -> impl IntoElement {
let channel_id = channel.id;
@@ -2855,7 +2894,14 @@ impl CollabPanel {
.child(
h_flex()
.id(channel_id.0 as usize)
.child(Label::new(channel.name.clone()))
.child(match string_match {
None => Label::new(channel.name.clone()).into_any_element(),
Some(string_match) => HighlightedLabel::new(
channel.name.clone(),
string_match.positions.clone(),
)
.into_any_element(),
})
.children(face_pile.map(|face_pile| face_pile.p_1())),
),
)
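The filtering above keeps every channel that matched the query plus every ancestor on a match's `parent_path`, so the tree remains navigable; matched entries later carry a `string_match` for highlighting while their ancestors carry `None`. A standalone sketch of that retention idea, using simplified hypothetical types rather than the real `Channel`/`ChannelStore`:

```rust
use std::collections::HashSet;

// Hypothetical stand-in for a channel: an id plus the ids of its ancestors.
struct Chan {
    id: u64,
    parent_path: Vec<u64>,
}

/// Keep channels that matched, plus every ancestor of a match.
fn retain_matches_and_parents(channels: &mut Vec<Chan>, matched: &HashSet<u64>) {
    let keep: HashSet<u64> = channels
        .iter()
        .filter(|c| matched.contains(&c.id))
        .flat_map(|c| c.parent_path.iter().copied().chain(Some(c.id)))
        .collect();
    channels.retain(|c| keep.contains(&c.id));
}
```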

View File

@@ -13,14 +13,10 @@ use gpui::{
};
pub use panel_settings::{CollaborationPanelSettings, NotificationPanelSettings};
use release_channel::ReleaseChannel;
use settings::Settings;
use ui::px;
use workspace::AppState;
pub fn init(app_state: &Arc<AppState>, cx: &mut App) {
CollaborationPanelSettings::register(cx);
NotificationPanelSettings::register(cx);
channel_view::init(cx);
collab_panel::init(cx);
notification_panel::init(cx);

View File

@@ -1,16 +1,16 @@
use gpui::Pixels;
use settings::Settings;
use settings::{RegisterSetting, Settings};
use ui::px;
use workspace::dock::DockPosition;
#[derive(Debug)]
#[derive(Debug, RegisterSetting)]
pub struct CollaborationPanelSettings {
pub button: bool,
pub dock: DockPosition,
pub default_width: Pixels,
}
#[derive(Debug)]
#[derive(Debug, RegisterSetting)]
pub struct NotificationPanelSettings {
pub button: bool,
pub dock: DockPosition,

View File

@@ -28,7 +28,6 @@ use workspace::{ModalView, Workspace, WorkspaceSettings};
use zed_actions::{OpenZedUrl, command_palette::Toggle};
pub fn init(cx: &mut App) {
client::init_settings(cx);
command_palette_hooks::init(cx);
cx.observe_new(CommandPalette::register).detach();
}
@@ -789,13 +788,11 @@ mod tests {
cx.update(|cx| {
let app_state = AppState::test(cx);
theme::init(theme::LoadThemes::JustBase, cx);
language::init(cx);
editor::init(cx);
menu::init();
go_to_line::init(cx);
workspace::init(app_state.clone(), cx);
init(cx);
Project::init_settings(cx);
cx.bind_keys(KeymapFile::load_panic_on_failure(
r#"[
{

View File

@@ -26,7 +26,6 @@ test-support = [
[dependencies]
anyhow.workspace = true
chrono.workspace = true
client.workspace = true
collections.workspace = true
command_palette_hooks.workspace = true
dirs.workspace = true

View File

@@ -1115,11 +1115,6 @@ mod tests {
let store = SettingsStore::test(cx);
cx.set_global(store);
theme::init(theme::LoadThemes::JustBase, cx);
client::init_settings(cx);
language::init(cx);
editor::init_settings(cx);
Project::init_settings(cx);
workspace::init_settings(cx);
SettingsStore::update_global(cx, |store: &mut SettingsStore, cx| {
store.update_user_settings(cx, |settings| f(&mut settings.project.all_languages));
});

View File

@@ -265,9 +265,10 @@ impl minidumper::ServerHandler for CrashServer {
3 => {
let gpu_specs: system_specs::GpuSpecs =
bincode::deserialize(&buffer).expect("gpu specs");
self.active_gpu
.set(gpu_specs)
.expect("already set active gpu");
// We ignore the case where the GPU was already set because this message is sent
// for each new window; in theory all Zed windows use the same GPU, so this is fine.
self.active_gpu.set(gpu_specs).ok();
}
_ => {
panic!("invalid message kind");

View File

@@ -256,7 +256,7 @@ impl DebugAdapterClient {
#[cfg(test)]
mod tests {
use super::*;
use crate::{client::DebugAdapterClient, debugger_settings::DebuggerSettings};
use crate::client::DebugAdapterClient;
use dap_types::{
Capabilities, InitializeRequestArguments, InitializeRequestArgumentsPathFormat,
RunInTerminalRequestArguments, StartDebuggingRequestArguments,
@@ -265,7 +265,7 @@ mod tests {
};
use gpui::TestAppContext;
use serde_json::json;
use settings::{Settings, SettingsStore};
use settings::SettingsStore;
use std::sync::{
Arc,
atomic::{AtomicBool, Ordering},
@@ -277,7 +277,6 @@ mod tests {
cx.update(|cx| {
let settings = SettingsStore::test(cx);
cx.set_global(settings);
DebuggerSettings::register(cx);
});
}

View File

@@ -1,6 +1,7 @@
use dap_types::SteppingGranularity;
use settings::{Settings, SettingsContent};
use settings::{RegisterSetting, Settings, SettingsContent};
#[derive(Debug, RegisterSetting)]
pub struct DebuggerSettings {
/// Determines the stepping granularity.
///

View File

@@ -1,10 +1,9 @@
use std::ffi::OsStr;
use anyhow::{Context as _, Result, bail};
use async_trait::async_trait;
use collections::HashMap;
use dap::{StartDebuggingRequestArguments, adapters::DebugTaskDefinition};
use gpui::AsyncApp;
use std::ffi::OsStr;
use task::{DebugScenario, ZedDebugConfig};
use crate::*;
@@ -16,6 +15,14 @@ impl GdbDebugAdapter {
const ADAPTER_NAME: &'static str = "GDB";
}
/// Ensures that "-i=dap" is present in the GDB argument list.
fn ensure_dap_interface(mut gdb_args: Vec<String>) -> Vec<String> {
if !gdb_args.iter().any(|arg| arg.trim() == "-i=dap") {
gdb_args.insert(0, "-i=dap".to_string());
}
gdb_args
}
#[async_trait(?Send)]
impl DebugAdapter for GdbDebugAdapter {
fn name(&self) -> DebugAdapterName {
@@ -99,6 +106,18 @@ impl DebugAdapter for GdbDebugAdapter {
"type": "string",
"description": "Working directory for the debugged program. GDB will change its working directory to this directory."
},
"gdb_path": {
"type": "string",
"description": "Alternative path to the GDB executable, if the one in standard path is not desirable"
},
"gdb_args": {
"type": "array",
"items": {
"type": "string"
},
"description": "Additional arguments passed to GDB itself at startup (not to the program being debugged)",
"default": []
},
"env": {
"type": "object",
"description": "Environment variables for the debugged program. Each key is the name of an environment variable; each value is the value of that variable."
@@ -164,21 +183,49 @@ impl DebugAdapter for GdbDebugAdapter {
user_env: Option<HashMap<String, String>>,
_: &mut AsyncApp,
) -> Result<DebugAdapterBinary> {
let user_setting_path = user_installed_path
.filter(|p| p.exists())
.and_then(|p| p.to_str().map(|s| s.to_string()));
// Try to get gdb_path from config
let gdb_path_from_config = config
.config
.get("gdb_path")
.and_then(|v| v.as_str())
.map(|s| s.to_string());
let gdb_path = delegate
.which(OsStr::new("gdb"))
.await
.and_then(|p| p.to_str().map(|s| s.to_string()))
.context("Could not find gdb in path");
let gdb_path = if let Some(path) = gdb_path_from_config {
path
} else {
// Original logic: use user_installed_path or search in system path
let user_setting_path = user_installed_path
.filter(|p| p.exists())
.and_then(|p| p.to_str().map(|s| s.to_string()));
if gdb_path.is_err() && user_setting_path.is_none() {
bail!("Could not find gdb path or it's not installed");
}
let gdb_path_result = delegate
.which(OsStr::new("gdb"))
.await
.and_then(|p| p.to_str().map(|s| s.to_string()))
.context("Could not find gdb in path");
let gdb_path = user_setting_path.unwrap_or(gdb_path?);
if gdb_path_result.is_err() && user_setting_path.is_none() {
bail!("Could not find gdb path or it's not installed");
}
user_setting_path.unwrap_or_else(|| gdb_path_result.unwrap())
};
// Arguments: use gdb_args from config if present, else user_args, else default
let gdb_args = {
let args = config
.config
.get("gdb_args")
.and_then(|v| v.as_array())
.map(|arr| {
arr.iter()
.filter_map(|v| v.as_str().map(|s| s.to_string()))
.collect::<Vec<_>>()
})
.or(user_args.clone())
.unwrap_or_else(|| vec!["-i=dap".into()]);
ensure_dap_interface(args)
};
let mut configuration = config.config.clone();
if let Some(configuration) = configuration.as_object_mut() {
@@ -187,10 +234,26 @@ impl DebugAdapter for GdbDebugAdapter {
.or_insert_with(|| delegate.worktree_root_path().to_string_lossy().into());
}
let mut base_env = delegate.shell_env().await;
base_env.extend(user_env.unwrap_or_default());
let config_env: HashMap<String, String> = config
.config
.get("env")
.and_then(|v| v.as_object())
.map(|obj| {
obj.iter()
.filter_map(|(k, v)| v.as_str().map(|s| (k.clone(), s.to_string())))
.collect::<HashMap<String, String>>()
})
.unwrap_or_else(HashMap::default);
base_env.extend(config_env);
Ok(DebugAdapterBinary {
command: Some(gdb_path),
arguments: user_args.unwrap_or_else(|| vec!["-i=dap".into()]),
envs: user_env.unwrap_or_default(),
arguments: gdb_args,
envs: base_env,
cwd: Some(delegate.worktree_root_path().to_path_buf()),
connection: None,
request_args: StartDebuggingRequestArguments {
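For illustration, a scenario configuration exercising the new fields might look like the sketch below, built with `serde_json::json!`; every path and value is invented, and only the key names come from the schema above. `ensure_dap_interface` prepends `-i=dap` when `gdb_args` omits it, and the `env` map is merged over the shell environment:

```rust
// Hypothetical debug scenario config; key names follow the schema above,
// all values are invented for illustration.
let config = serde_json::json!({
    "cwd": "/home/me/projects/my_app",
    "gdb_path": "/opt/gdb-15/bin/gdb",
    "gdb_args": ["-quiet"], // "-i=dap" is prepended by ensure_dap_interface if missing
    "env": { "RUST_BACKTRACE": "1" }
});
```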

View File

@@ -1,6 +1,5 @@
use std::any::TypeId;
use dap::debugger_settings::DebuggerSettings;
use debugger_panel::DebugPanel;
use editor::Editor;
use gpui::{Action, App, DispatchPhase, EntityInputHandler, actions};
@@ -10,7 +9,6 @@ use project::debugger::{self, breakpoint_store::SourceBreakpoint, session::Threa
use schemars::JsonSchema;
use serde::Deserialize;
use session::DebugSession;
use settings::Settings;
use stack_trace_view::StackTraceView;
use tasks_ui::{Spawn, TaskOverrides};
use ui::{FluentBuilder, InteractiveElement};
@@ -115,7 +113,6 @@ actions!(
);
pub fn init(cx: &mut App) {
DebuggerSettings::register(cx);
workspace::FollowableViewRegistry::register::<DebugSession>(cx);
cx.observe_new(|workspace: &mut Workspace, _, _| {

View File

@@ -1404,6 +1404,7 @@ impl VariableList {
div()
.text_ui(cx)
.w_full()
.truncate()
.when(self.disabled, |this| {
this.text_color(Color::Disabled.color(cx))
})

View File

@@ -43,9 +43,6 @@ pub fn init_test(cx: &mut gpui::TestAppContext) {
terminal_view::init(cx);
theme::init(theme::LoadThemes::JustBase, cx);
command_palette_hooks::init(cx);
language::init(cx);
workspace::init_settings(cx);
Project::init_settings(cx);
editor::init(cx);
crate::init(cx);
dap_adapters::init(cx);

View File

@@ -23,7 +23,7 @@ use project::{
use settings::Settings;
use std::{
any::{Any, TypeId},
cmp::Ordering,
cmp::{self, Ordering},
sync::Arc,
};
use text::{Anchor, BufferSnapshot, OffsetRangeExt};
@@ -410,7 +410,7 @@ impl BufferDiagnosticsEditor {
// in the editor.
// This is done by iterating over the list of diagnostic blocks and
// determine what range does the diagnostic block span.
let mut excerpt_ranges: Vec<ExcerptRange<Point>> = Vec::new();
let mut excerpt_ranges: Vec<ExcerptRange<_>> = Vec::new();
for diagnostic_block in blocks.iter() {
let excerpt_range = context_range_for_entry(
@@ -420,30 +420,43 @@ impl BufferDiagnosticsEditor {
&mut cx,
)
.await;
let initial_range = buffer_snapshot
.anchor_after(diagnostic_block.initial_range.start)
..buffer_snapshot.anchor_before(diagnostic_block.initial_range.end);
let index = excerpt_ranges
.binary_search_by(|probe| {
let bin_search = |probe: &ExcerptRange<text::Anchor>| {
let context_start = || {
probe
.context
.start
.cmp(&excerpt_range.start)
.then(probe.context.end.cmp(&excerpt_range.end))
.then(
probe
.primary
.start
.cmp(&diagnostic_block.initial_range.start),
)
.then(probe.primary.end.cmp(&diagnostic_block.initial_range.end))
.then(Ordering::Greater)
})
.unwrap_or_else(|index| index);
.cmp(&excerpt_range.start, &buffer_snapshot)
};
let context_end =
|| probe.context.end.cmp(&excerpt_range.end, &buffer_snapshot);
let primary_start = || {
probe
.primary
.start
.cmp(&initial_range.start, &buffer_snapshot)
};
let primary_end =
|| probe.primary.end.cmp(&initial_range.end, &buffer_snapshot);
context_start()
.then_with(context_end)
.then_with(primary_start)
.then_with(primary_end)
.then(cmp::Ordering::Greater)
};
let index = excerpt_ranges
.binary_search_by(bin_search)
.unwrap_or_else(|i| i);
excerpt_ranges.insert(
index,
ExcerptRange {
context: excerpt_range,
primary: diagnostic_block.initial_range.clone(),
primary: initial_range,
},
)
}
@@ -466,6 +479,13 @@ impl BufferDiagnosticsEditor {
buffer_diagnostics_editor
.multibuffer
.update(cx, |multibuffer, cx| {
let excerpt_ranges = excerpt_ranges
.into_iter()
.map(|range| ExcerptRange {
context: range.context.to_point(&buffer_snapshot),
primary: range.primary.to_point(&buffer_snapshot),
})
.collect();
multibuffer.set_excerpt_ranges_for_path(
PathKey::for_buffer(&buffer, cx),
buffer.clone(),
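The insertion above relies on a common pattern: the `binary_search_by` comparator ends with `.then(Ordering::Greater)`, so it never reports `Equal`, the search always returns `Err(i)`, and `unwrap_or_else(|i| i)` yields a valid insertion index just before any elements that compare equal. A minimal, self-contained sketch of the same pattern:

```rust
use std::cmp::Ordering;

// Insert while keeping `v` sorted. The Greater bias means an element equal to
// `item` is treated as greater, so the new item lands before existing equals.
fn insert_sorted(v: &mut Vec<u32>, item: u32) {
    let ix = v
        .binary_search_by(|probe| probe.cmp(&item).then(Ordering::Greater))
        .unwrap_or_else(|i| i);
    v.insert(ix, item);
}
```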

View File

@@ -39,8 +39,8 @@ impl DiagnosticRenderer {
let group_id = primary.diagnostic.group_id;
let mut results = vec![];
for entry in diagnostic_group.iter() {
let mut markdown = Self::markdown(&entry.diagnostic);
if entry.diagnostic.is_primary {
let mut markdown = Self::markdown(&entry.diagnostic);
let diagnostic = &primary.diagnostic;
if diagnostic.source.is_some() || diagnostic.code.is_some() {
markdown.push_str(" (");
@@ -81,21 +81,12 @@ impl DiagnosticRenderer {
diagnostics_editor: diagnostics_editor.clone(),
markdown: cx.new(|cx| Markdown::new(markdown.into(), None, None, cx)),
});
} else if entry.range.start.row.abs_diff(primary.range.start.row) < 5 {
let markdown = Self::markdown(&entry.diagnostic);
results.push(DiagnosticBlock {
initial_range: entry.range.clone(),
severity: entry.diagnostic.severity,
diagnostics_editor: diagnostics_editor.clone(),
markdown: cx.new(|cx| Markdown::new(markdown.into(), None, None, cx)),
});
} else {
let mut markdown = Self::markdown(&entry.diagnostic);
markdown.push_str(&format!(
" ([back](file://#diagnostic-{buffer_id}-{group_id}-{primary_ix}))"
));
if entry.range.start.row.abs_diff(primary.range.start.row) >= 5 {
markdown.push_str(&format!(
" ([back](file://#diagnostic-{buffer_id}-{group_id}-{primary_ix}))"
));
}
results.push(DiagnosticBlock {
initial_range: entry.range.clone(),
severity: entry.diagnostic.severity,

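The restructured branches above reduce to one rule: a related entry only needs a "back" link when it sits at least five rows away from the primary diagnostic. A standalone sketch of that rule (helper name hypothetical), assuming the markdown body has already been rendered:

// Proximity rule: nearby entries keep their plain markdown, while entries
// five or more rows away get a "back" link to the primary appended.
fn append_back_link_if_far(
    mut markdown: String,
    entry_row: u32,
    primary_row: u32,
    buffer_id: u64,
    group_id: usize,
    primary_ix: usize,
) -> String {
    if entry_row.abs_diff(primary_row) >= 5 {
        markdown.push_str(&format!(
            " ([back](file://#diagnostic-{buffer_id}-{group_id}-{primary_ix}))"
        ));
    }
    markdown
}

fn main() {
    let near = append_back_link_if_far("unused variable".into(), 12, 10, 1, 0, 0);
    let far = append_back_link_if_far("unused variable".into(), 12, 3, 1, 0, 0);
    assert!(!near.contains("[back]"));
    assert!(far.ends_with("(file://#diagnostic-1-0-0))"));
}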
View File

@@ -152,8 +152,8 @@ impl Render for ProjectDiagnosticsEditor {
#[derive(PartialEq, Eq, Copy, Clone, Debug)]
enum RetainExcerpts {
Yes,
No,
All,
Dirty,
}
impl ProjectDiagnosticsEditor {
@@ -207,17 +207,7 @@ impl ProjectDiagnosticsEditor {
"diagnostics updated for server {language_server_id}, \
paths {paths:?}. updating excerpts"
);
let focused = this.editor.focus_handle(cx).contains_focused(window, cx)
|| this.focus_handle.contains_focused(window, cx);
this.update_stale_excerpts(
if focused {
RetainExcerpts::Yes
} else {
RetainExcerpts::No
},
window,
cx,
);
this.update_stale_excerpts(window, cx);
}
_ => {}
},
@@ -280,8 +270,7 @@ impl ProjectDiagnosticsEditor {
cx,
)
});
this.diagnostics.clear();
this.update_all_excerpts(window, cx);
this.refresh(window, cx);
})
.detach();
@@ -301,25 +290,29 @@ impl ProjectDiagnosticsEditor {
diagnostic_summary_update: Task::ready(()),
_subscription: project_event_subscription,
};
this.update_all_excerpts(window, cx);
this.refresh(window, cx);
this
}
/// Closes all excerpts of buffers that:
/// - have no diagnostics anymore
/// - are saved (not dirty)
/// - and, if `reatin_selections` is true, do not have selections within them
/// - and, if `retain_selections` is true, do not have selections within them
fn close_diagnosticless_buffers(
&mut self,
_window: &mut Window,
cx: &mut Context<Self>,
retain_selections: bool,
) {
let buffer_ids = self.multibuffer.read(cx).all_buffer_ids();
let selected_buffers = self.editor.update(cx, |editor, cx| {
let snapshot = self
.editor
.update(cx, |editor, cx| editor.display_snapshot(cx));
let buffer = self.multibuffer.read(cx);
let buffer_ids = buffer.all_buffer_ids();
let selected_buffers = self.editor.update(cx, |editor, _| {
editor
.selections
.all_anchors(cx)
.all_anchors(&snapshot)
.iter()
.filter_map(|anchor| anchor.start.buffer_id)
.collect::<HashSet<_>>()
@@ -328,19 +321,19 @@ impl ProjectDiagnosticsEditor {
if retain_selections && selected_buffers.contains(&buffer_id) {
continue;
}
let has_blocks = self
let has_no_blocks = self
.blocks
.get(&buffer_id)
.is_none_or(|blocks| blocks.is_empty());
if !has_blocks {
if !has_no_blocks {
continue;
}
let is_dirty = self
.multibuffer
.read(cx)
.buffer(buffer_id)
.is_some_and(|buffer| buffer.read(cx).is_dirty());
if !is_dirty {
.is_none_or(|buffer| buffer.read(cx).is_dirty());
if is_dirty {
continue;
}
self.multibuffer.update(cx, |b, cx| {
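The flipped checks in the hunk above boil down to one rule: close a buffer's excerpts only when it has no remaining diagnostic blocks and it is known to be clean, with missing state handled by Option::is_none_or. A standalone sketch under simplified, hypothetical types:

use std::collections::HashMap;

// Option::is_none_or (Rust 1.82+) treats a missing blocks entry as "empty"
// and a missing buffer as "dirty": absent state leans toward removal for the
// blocks check and toward keeping for the dirty check.
fn should_close(
    buffer_id: u64,
    blocks: &HashMap<u64, Vec<&'static str>>,
    dirty_buffers: &HashMap<u64, bool>,
) -> bool {
    let has_no_blocks = blocks.get(&buffer_id).is_none_or(|b| b.is_empty());
    let is_dirty = dirty_buffers.get(&buffer_id).copied().is_none_or(|dirty| dirty);
    has_no_blocks && !is_dirty
}

fn main() {
    let blocks = HashMap::from([(1u64, vec![]), (2, vec!["warning"])]);
    let dirty = HashMap::from([(1u64, false), (2, false)]);
    assert!(should_close(1, &blocks, &dirty)); // no blocks, clean: close
    assert!(!should_close(2, &blocks, &dirty)); // still has a block: keep
    assert!(!should_close(3, &blocks, &dirty)); // unknown buffer counts as dirty: keep
}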
@@ -349,18 +342,10 @@ impl ProjectDiagnosticsEditor {
}
}
fn update_stale_excerpts(
&mut self,
mut retain_excerpts: RetainExcerpts,
window: &mut Window,
cx: &mut Context<Self>,
) {
fn update_stale_excerpts(&mut self, window: &mut Window, cx: &mut Context<Self>) {
if self.update_excerpts_task.is_some() {
return;
}
if self.multibuffer.read(cx).is_dirty(cx) {
retain_excerpts = RetainExcerpts::Yes;
}
let project_handle = self.project.clone();
self.update_excerpts_task = Some(cx.spawn_in(window, async move |this, cx| {
@@ -386,6 +371,13 @@ impl ProjectDiagnosticsEditor {
.log_err()
{
this.update_in(cx, |this, window, cx| {
let focused = this.editor.focus_handle(cx).contains_focused(window, cx)
|| this.focus_handle.contains_focused(window, cx);
let retain_excerpts = if focused {
RetainExcerpts::All
} else {
RetainExcerpts::Dirty
};
this.update_excerpts(buffer, retain_excerpts, window, cx)
})?
.await?;
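The hunk above moves the focus check into the background task and maps it onto the new RetainExcerpts variants: a focused view keeps every existing excerpt, while an unfocused view only keeps excerpts for dirty buffers. A standalone sketch of that policy (helper names hypothetical; the enum variants mirror the diff):

#[derive(PartialEq, Eq, Copy, Clone, Debug)]
enum RetainExcerpts {
    All,
    Dirty,
}

// When the diagnostics view has focus, never drop excerpts out from under
// the user; otherwise only dirty buffers are worth preserving.
fn retention_policy(view_is_focused: bool) -> RetainExcerpts {
    if view_is_focused {
        RetainExcerpts::All
    } else {
        RetainExcerpts::Dirty
    }
}

fn should_retain(policy: RetainExcerpts, buffer_is_dirty: bool) -> bool {
    match policy {
        RetainExcerpts::All => true,
        RetainExcerpts::Dirty => buffer_is_dirty,
    }
}

fn main() {
    assert!(should_retain(retention_policy(true), false));
    assert!(!should_retain(retention_policy(false), false));
    assert!(should_retain(retention_policy(false), true));
}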
@@ -441,7 +433,7 @@ impl ProjectDiagnosticsEditor {
if self.update_excerpts_task.is_some() {
self.update_excerpts_task = None;
} else {
self.update_all_excerpts(window, cx);
self.refresh(window, cx);
}
cx.notify();
}
@@ -459,31 +451,26 @@ impl ProjectDiagnosticsEditor {
}
}
/// Enqueue an update of all excerpts. Updates all paths that either
/// currently have diagnostics or are currently present in this view.
fn update_all_excerpts(&mut self, window: &mut Window, cx: &mut Context<Self>) {
/// Clears all diagnostics in this view, and refetches them from the project.
fn refresh(&mut self, window: &mut Window, cx: &mut Context<Self>) {
self.diagnostics.clear();
self.editor.update(cx, |editor, cx| {
for (_, block_ids) in self.blocks.drain() {
editor.display_map.update(cx, |display_map, cx| {
display_map.remove_blocks(block_ids.into_iter().collect(), cx)
});
}
});
self.multibuffer
.update(cx, |multibuffer, cx| multibuffer.clear(cx));
self.project.update(cx, |project, cx| {
let mut project_paths = project
self.paths_to_update = project
.diagnostic_summaries(false, cx)
.map(|(project_path, _, _)| project_path)
.collect::<BTreeSet<_>>();
self.multibuffer.update(cx, |multibuffer, cx| {
for buffer in multibuffer.all_buffers() {
if let Some(file) = buffer.read(cx).file() {
project_paths.insert(ProjectPath {
path: file.path().clone(),
worktree_id: file.worktree_id(cx),
});
}
}
multibuffer.clear(cx);
});
self.paths_to_update = project_paths;
});
self.update_stale_excerpts(RetainExcerpts::No, window, cx);
self.update_stale_excerpts(window, cx);
}
fn diagnostics_are_unchanged(
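The new refresh() above repopulates paths_to_update from the project's current diagnostic summaries, and a BTreeSet keeps that queue deduplicated and ordered. A standalone sketch with simplified, hypothetical types standing in for ProjectPath and the summary iterator:

use std::collections::BTreeSet;

// Every path that currently reports diagnostics is queued exactly once,
// regardless of how many language servers report against it.
fn paths_to_update(
    summaries: impl IntoIterator<Item = (String, u64, usize)>,
) -> BTreeSet<String> {
    summaries
        .into_iter()
        .map(|(project_path, _server_id, _summary)| project_path)
        .collect()
}

fn main() {
    let queued = paths_to_update([
        ("src/lib.rs".to_string(), 1, 3),
        ("src/main.rs".to_string(), 1, 1),
        ("src/lib.rs".to_string(), 2, 2),
    ]);
    assert_eq!(queued.len(), 2);
}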
@@ -511,7 +498,7 @@ impl ProjectDiagnosticsEditor {
cx: &mut Context<Self>,
) -> Task<Result<()>> {
let was_empty = self.multibuffer.read(cx).is_empty();
let buffer_snapshot = buffer.read(cx).snapshot();
let mut buffer_snapshot = buffer.read(cx).snapshot();
let buffer_id = buffer_snapshot.remote_id();
let max_severity = if self.include_warnings {
@@ -559,6 +546,7 @@ impl ProjectDiagnosticsEditor {
}
let mut blocks: Vec<DiagnosticBlock> = Vec::new();
let diagnostics_toolbar_editor = Arc::new(this.clone());
for (_, group) in grouped {
let group_severity = group.iter().map(|d| d.diagnostic.severity).min();
if group_severity.is_none_or(|s| s > max_severity) {
@@ -568,7 +556,7 @@ impl ProjectDiagnosticsEditor {
crate::diagnostic_renderer::DiagnosticRenderer::diagnostic_blocks_for_group(
group,
buffer_snapshot.remote_id(),
Some(Arc::new(this.clone())),
Some(diagnostics_toolbar_editor.clone()),
cx,
)
})?;
@@ -576,21 +564,21 @@ impl ProjectDiagnosticsEditor {
blocks.extend(more);
}
let mut excerpt_ranges: Vec<ExcerptRange<Point>> = match retain_excerpts {
RetainExcerpts::No => Vec::new(),
RetainExcerpts::Yes => this.update(cx, |this, cx| {
this.multibuffer.update(cx, |multi_buffer, cx| {
multi_buffer
let mut excerpt_ranges: Vec<ExcerptRange<_>> = this.update(cx, |this, cx| {
this.multibuffer.update(cx, |multi_buffer, cx| {
let is_dirty = multi_buffer
.buffer(buffer_id)
.is_none_or(|buffer| buffer.read(cx).is_dirty());
match retain_excerpts {
RetainExcerpts::Dirty if !is_dirty => Vec::new(),
RetainExcerpts::All | RetainExcerpts::Dirty => multi_buffer
.excerpts_for_buffer(buffer_id, cx)
.into_iter()
.map(|(_, range)| ExcerptRange {
context: range.context.to_point(&buffer_snapshot),
primary: range.primary.to_point(&buffer_snapshot),
})
.collect()
})
})?,
};
.map(|(_, range)| range)
.collect(),
}
})
})?;
let mut result_blocks = vec![None; excerpt_ranges.len()];
let context_lines = cx.update(|_, cx| multibuffer_context_lines(cx))?;
for b in blocks {
@@ -601,24 +589,41 @@ impl ProjectDiagnosticsEditor {
cx,
)
.await;
buffer_snapshot = cx.update(|_, cx| buffer.read(cx).snapshot())?;
let initial_range = buffer_snapshot.anchor_after(b.initial_range.start)
..buffer_snapshot.anchor_before(b.initial_range.end);
let i = excerpt_ranges
.binary_search_by(|probe| {
let bin_search = |probe: &ExcerptRange<text::Anchor>| {
let context_start = || {
probe
.context
.start
.cmp(&excerpt_range.start)
.then(probe.context.end.cmp(&excerpt_range.end))
.then(probe.primary.start.cmp(&b.initial_range.start))
.then(probe.primary.end.cmp(&b.initial_range.end))
.then(cmp::Ordering::Greater)
})
.cmp(&excerpt_range.start, &buffer_snapshot)
};
let context_end =
|| probe.context.end.cmp(&excerpt_range.end, &buffer_snapshot);
let primary_start = || {
probe
.primary
.start
.cmp(&initial_range.start, &buffer_snapshot)
};
let primary_end =
|| probe.primary.end.cmp(&initial_range.end, &buffer_snapshot);
context_start()
.then_with(context_end)
.then_with(primary_start)
.then_with(primary_end)
.then(cmp::Ordering::Greater)
};
let i = excerpt_ranges
.binary_search_by(bin_search)
.unwrap_or_else(|i| i);
excerpt_ranges.insert(
i,
ExcerptRange {
context: excerpt_range,
primary: b.initial_range.clone(),
primary: initial_range,
},
);
result_blocks.insert(i, Some(b));
@@ -633,6 +638,13 @@ impl ProjectDiagnosticsEditor {
})
}
let (anchor_ranges, _) = this.multibuffer.update(cx, |multi_buffer, cx| {
let excerpt_ranges = excerpt_ranges
.into_iter()
.map(|range| ExcerptRange {
context: range.context.to_point(&buffer_snapshot),
primary: range.primary.to_point(&buffer_snapshot),
})
.collect();
multi_buffer.set_excerpt_ranges_for_path(
PathKey::for_buffer(&buffer, cx),
buffer.clone(),
@@ -946,7 +958,7 @@ impl DiagnosticsToolbarEditor for WeakEntity<ProjectDiagnosticsEditor> {
fn refresh_diagnostics(&self, window: &mut Window, cx: &mut App) {
let _ = self.update(cx, |project_diagnostics_editor, cx| {
project_diagnostics_editor.update_all_excerpts(window, cx);
project_diagnostics_editor.refresh(window, cx);
});
}
@@ -978,8 +990,8 @@ async fn context_range_for_entry(
context: u32,
snapshot: BufferSnapshot,
cx: &mut AsyncApp,
) -> Range<Point> {
if let Some(rows) = heuristic_syntactic_expand(
) -> Range<text::Anchor> {
let range = if let Some(rows) = heuristic_syntactic_expand(
range.clone(),
DIAGNOSTIC_EXPANSION_ROW_LIMIT,
snapshot.clone(),
@@ -987,15 +999,17 @@ async fn context_range_for_entry(
)
.await
{
return Range {
Range {
start: Point::new(*rows.start(), 0),
end: snapshot.clip_point(Point::new(*rows.end(), u32::MAX), Bias::Left),
};
}
Range {
start: Point::new(range.start.row.saturating_sub(context), 0),
end: snapshot.clip_point(Point::new(range.end.row + context, u32::MAX), Bias::Left),
}
}
} else {
Range {
start: Point::new(range.start.row.saturating_sub(context), 0),
end: snapshot.clip_point(Point::new(range.end.row + context, u32::MAX), Bias::Left),
}
};
snapshot.anchor_after(range.start)..snapshot.anchor_before(range.end)
}
/// Expands the input range using syntax information from TreeSitter. This expansion will be limited

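When heuristic_syntactic_expand yields nothing, the fallback in the hunk above widens the diagnostic's row range by a fixed context, and the resulting range is then pinned as anchors (anchor_after at the start, anchor_before at the end) so it keeps tracking the buffer through later edits. A simplified, standalone sketch of just the fallback expansion over plain row numbers (clip_point is approximated here by a max_row cap; names hypothetical):

use std::ops::Range;

// Widen a diagnostic's row range by `context` rows, never underflowing
// row 0 and never running past the last row of the buffer.
fn expand_rows(range: Range<u32>, context: u32, max_row: u32) -> Range<u32> {
    let start = range.start.saturating_sub(context);
    let end = (range.end + context).min(max_row);
    start..end
}

fn main() {
    assert_eq!(expand_rows(1..2, 3, 100), 0..5);
    assert_eq!(expand_rows(98..99, 3, 100), 95..100);
}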
Some files were not shown because too many files have changed in this diff.