Compare commits

..

126 Commits

Author SHA1 Message Date
Anthony
a01de2d984 clippy 2025-06-03 21:28:44 -04:00
Anthony
e4067b0f51 Use origin main's cargo lock file 2025-06-03 21:25:15 -04:00
Anthony
7ddafcdbe4 Merge remote-tracking branch 'origin/main' into fix-task-filtering 2025-06-03 21:24:26 -04:00
Smit Barmase
10df7b5eb9 gpui: Add API for read and write active drag cursor style (#32028)
Prep for https://github.com/zed-industries/zed/pull/32040

Currently, there’s no way to modify the cursor style of the active drag
state after dragging begins. However, there are scenarios where we might
want to change the cursor style, such as pressing a modifier key while
dragging. This PR introduces an API to update and read the current
active drag state cursor.

Release Notes:

- N/A
2025-06-04 06:53:03 +05:30
Haru Kim
55120c4231 Properly load environment variables from the login shell (#31799)
Fixes #11647
Fixes #13888
Fixes #18771
Fixes #19779
Fixes #22437
Fixes #23649
Fixes #24200
Fixes #27601

Zed’s current method of loading environment variables from the login
shell has two issues:
1. Some shells (fish in particular) write specific escape characters to
`stdout` right before they exit. When this happens, the tail end of the
last environment variable printed by `/usr/bin/env` becomes corrupted.
2. If a multi-line value contains an equals sign, that line is
mis-parsed as a separate name-value pair.

This PR addresses those problems by:
1. Redirecting the shell command's `stdout` directly to a temporary
file, eliminating any side effects caused by the shell itself.
2. Replacing `/usr/bin/env` with `sh -c 'export -p'`, which removes
ambiguity when handling multi-line values.
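
A rough sketch of this approach (assuming a POSIX-style login shell; the
function and temp-file name are hypothetical, not Zed's actual code):

```rust
use std::process::Command;

// Minimal sketch: the login shell runs `sh -c 'export -p'` and its stdout goes
// straight into a temp file, so any escape codes the shell prints on exit
// never corrupt the captured environment.
fn capture_login_shell_env(shell: &str) -> std::io::Result<String> {
    let tmp = std::env::temp_dir().join("zed-captured-env"); // hypothetical name
    let command = format!("sh -c 'export -p' > '{}'", tmp.display());
    Command::new(shell).args(["-l", "-c", command.as_str()]).status()?;
    std::fs::read_to_string(&tmp)
}
```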

Additional changes:
- Correctly set the arguments used to launch a login shell under `csh`
or `tcsh`.
- Deduplicate code by sharing the implementation that loads environment
variables on first run with the logic that reloads them for a project.



Release Notes:

- N/A
2025-06-03 19:16:26 -06:00
Piotr Osiewicz
8227c45a11 docs: Document debugger.dock setting (#32038)
Closes #ISSUE

Release Notes:

- N/A
2025-06-03 23:38:41 +00:00
Piotr Osiewicz
d23359e19a debugger: Fix issues with running Zed-installed debugpy + hangs when downloading (#32034)
Closes #32018

Release Notes:

- Fixed issues with launching Python debug adapter downloaded by Zed.
You might need to clear the old install of Debugpy from
`$HOME/.local/share/zed/debug_adapters/Debugpy` (Linux) or
`$HOME/Library/Application Support/Zed/debug_adapters/Debugpy` (Mac).
2025-06-04 01:37:25 +02:00
Kirill Bulatov
936ad0bf10 Use better fallback for empty rerun action (#32031)
When invoking the task rerun action before any task has been run, it
falls back to the task selection modal.
This adjusts that fallback to use the debugger view, if available:

Before:

![before](https://github.com/user-attachments/assets/737d2dc1-15a4-4eea-a5f9-4aff6c7600cc)


After:

![after](https://github.com/user-attachments/assets/43381b85-5167-44e7-a8b0-865a64eaa6ea)


Release Notes:

- N/A
2025-06-03 22:45:37 +00:00
Kirill Bulatov
faa0bb51c9 Better log canonicalization errors (#32030)
Based on
https://github.com/zed-industries/zed/issues/18673#issuecomment-2933025951

Adds an anyhow error context with the path used for canonicalization
(and also explicitly mentions the path at the place referenced in the comment).
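
For illustration, the kind of context this adds looks roughly like the
following (a sketch, not the exact Zed code):

```rust
use anyhow::Context as _;
use std::path::{Path, PathBuf};

// Attach the offending path to the canonicalization error so the log line
// points at a concrete path instead of a bare io::Error.
fn canonicalize_logged(path: &Path) -> anyhow::Result<PathBuf> {
    std::fs::canonicalize(path)
        .with_context(|| format!("canonicalizing path {path:?}"))
}
```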

Release Notes:

- N/A
2025-06-03 22:30:59 +00:00
Joseph T. Lyons
2db2271e3c Do not activate inactive tabs when pinning or unpinning
Closes https://github.com/zed-industries/zed/issues/32024

Release Notes:

- Fixed a bug where inactive tabs would be activated when pinning or
unpinning.
2025-06-03 17:43:06 -04:00
Anthony
326697922c Merge remote-tracking branch 'origin/main' into fix-task-filtering 2025-06-03 17:42:02 -04:00
Anthony
85086550f9 Merge branch 'main' into fix-task-filtering 2025-06-03 17:41:48 -04:00
Conrad Irwin
79b1dd7db8 Improve collab cleanup (#32000)
Co-authored-by: Max <max@zed.dev>
Co-authored-by: Marshall <marshall@zed.dev>
Co-authored-by: Mikayla <mikayla@zed.dev>

Release Notes:

- N/A
2025-06-03 15:27:28 -06:00
Anthony Eid
81f8e2ed4a Limit BufferSnapshot::surrounding_word search to 256 characters (#32016)
This is the first step to closing #16120. Part of the problem was that
`surrounding_word` would search the whole line for matches with no
limit.
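
As a rough illustration of the idea (this is not Zed's `BufferSnapshot` API,
just a sketch over a plain `&str`):

```rust
const MAX_SEARCH_CHARS: usize = 256;

// Walk outward from the cursor, but never look at more than 256 characters in
// either direction, so pathologically long lines stay cheap to scan.
fn surrounding_word(line: &str, cursor: usize) -> (usize, usize) {
    let is_word = |c: char| c.is_alphanumeric() || c == '_';
    let start = line[..cursor]
        .char_indices()
        .rev()
        .take(MAX_SEARCH_CHARS)
        .take_while(|&(_, c)| is_word(c))
        .last()
        .map(|(i, _)| i)
        .unwrap_or(cursor);
    let end = line[cursor..]
        .char_indices()
        .take(MAX_SEARCH_CHARS)
        .take_while(|&(_, c)| is_word(c))
        .last()
        .map(|(i, c)| cursor + i + c.len_utf8())
        .unwrap_or(cursor);
    (start, end)
}
```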

Co-authored-by: Conrad Irwin \<conrad@zed.dev\>
Co-authored-by: Ben Kunkle \<ben@zed.dev\>
Co-authored-by: Cole Miller \<cole@zed.dev\>

Release Notes:

- N/A
2025-06-03 21:08:59 +00:00
Gilles Peiffer
b9256dd469 editor: Apply common_prefix_len refactor suggestion (#31957)
This adds João's nice suggestion from
https://github.com/zed-industries/zed/pull/31818#discussion_r2118582616.
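
For context, a `common_prefix_len` helper of this kind typically looks
something like this (a sketch; the actual refactor lives in the linked review
thread):

```rust
// Length, in bytes, of the longest prefix shared by `a` and `b`,
// comparing whole characters so multi-byte UTF-8 is never split.
fn common_prefix_len(a: &str, b: &str) -> usize {
    a.chars()
        .zip(b.chars())
        .take_while(|(x, y)| x == y)
        .map(|(x, _)| x.len_utf8())
        .sum()
}
```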

Release Notes:

- N/A

---------

Co-authored-by: João Marcos <marcospb19@hotmail.com>
2025-06-03 15:07:14 -06:00
Bennet Bo Fenner
27d3da678c editor: Fix panic when full width crease is wrapped (#31960)
Closes #31919

Release Notes:

- Fixed a panic that could sometimes occur when the agent panel was too
narrow and contained context included via `@`.

---------

Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Antonio Scandurra <me@as-cii.com>
2025-06-03 22:59:27 +02:00
Conrad Irwin
03357f3f7b Fix panic when re-editing old message with creases (#32017)
Co-authored-by: Cole Miller <m@cole-miller.net>

Release Notes:

- agent: Fixed a panic when re-editing old messages

---------

Co-authored-by: Cole Miller <m@cole-miller.net>
Co-authored-by: Cole Miller <cole@zed.dev>
2025-06-03 20:56:18 +00:00
Kirill Bulatov
4aabba6cf6 Improve Zed prompts for file path selection (#32014)
Part of https://github.com/zed-industries/zed/discussions/31653
`"use_system_path_prompts": false` is needed in settings for these to
appear as modals for new file save and file open.

Fixed a very subpar experience in the "save new file" Zed modal,
compared to the similar "open file path" Zed modal, by unifying their code.

Before:


https://github.com/user-attachments/assets/c4082b70-6cdc-4598-a416-d491011c8ac4


After:



https://github.com/user-attachments/assets/21ca672a-ae40-426c-b68f-9efee4f93c8c


Also 

* alters both prompts to start in the current worktree directory, with
a fallback to the home directory.
* adjusts the code to handle Windows paths better

Release Notes:

- Improved Zed prompts for file path selection

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-06-03 20:35:25 +00:00
Michael Sloan
8c46a4f594 Make completions menu stay open after it's manually requested (#32015)
Also includes a clarity refactoring to remove
`ignore_completion_provider`.

Closes #15549

Release Notes:

- Fixed completions menu closing on typing after being requested while
`show_completions_on_input: false`.
2025-06-03 20:33:52 +00:00
Anthony
eb4a63a9f9 Limit BufferSnapshot::surrounding_word search to 256 characters
Co-authored-by: Conrad Irwin <conrad@zed.dev>
Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Cole Miller <cole@zed.dev>
2025-06-03 16:23:26 -04:00
Luke Naylor
522abe8e59 Change default formatter settings for LaTeX (#28727)
Closes: https://github.com/rzukic/zed-latex/issues/77 

## Default formatter: `latexindent`
Before, formatting was delegated to the language server, which just ran
a `latexindent` executable. There was no benefit to running it through
the language server over running it as an "external" formatter in Zed.
In fact, this was an issue because there was no way to provide an
explicit path for the executable (causing the extension issue above). Having
the default settings configure the formatter directly gives more control
to the user and reduces the number of indirections, making it clearer how to
tweak things like the executable path, extra CLI args, etc.

## Alternative: `prettier`
Default settings have also been added to allow prettier as the formatter
(by just setting `"formatter": "prettier"` in the "LaTeX" language
settings). This was not possible before because an extra line needed to
be added to the `prettier` crate (similarly to what was done for
https://github.com/zed-industries/zed/issues/19024) to find the plugin
correctly.
> [!NOTE]
> The `prettier-plugin-latex` node module also contained a
`dist/standalone.js` but using that instead of
`dist/prettier-plugin-latex.js` gave an error, and indeed the latter
worked as intended (along with its questionable choices for formatting).

Release Notes:

- LaTeX: added default `latexindent` formatter settings without relying
on `texlab`, as well as allowing `prettier` to be chosen for formatting

---------

Co-authored-by: Peter Tripp <peter@zed.dev>
2025-06-03 19:51:30 +00:00
Michael Sloan
5ae8c4cf09 Fix word completions clobbering the text after the cursor (#32010)
Release Notes:

- N/A
2025-06-03 19:37:26 +00:00
Smit Barmase
d8195a8fd7 project_panel: Highlight containing folder which would be the target of the drop operation (#31976)
Part of https://github.com/zed-industries/zed/issues/14496

This PR adds highlighting on the containing folder which would be the
target of the drop operation. It only highlights those directories where
an actual drop is possible, i.e., the directory where the drag started is not
highlighted.

- [x] Tests


https://github.com/user-attachments/assets/46528467-e07a-4574-a8d5-beab25e70162

Release Notes:

- Improved project panel to show a highlight on the containing folder
which would be the target of the drop operation.
2025-06-04 00:34:37 +05:30
Danilo Leal
2645591cd5 agent: Allow to accept and reject all via the panel (#31971)
This PR introduces the "Reject All" and "Accept All" buttons in the
panel's edit bar, which appears as soon as the agent starts editing a
file. I'm also adding here a new method to the thread called
`has_pending_edit_tool_uses`, which is a more specific way of knowing,
in comparison to the `is_generating` method, whether or not the
reject/accept all actions can be triggered.

Previously, without this new method, you'd be waiting for the whole
generation to end (e.g., the agent would be generating markdown with
things like the change summary) to be able to click those buttons, when the
edit was already there, ready for you. It always felt like waiting for
the whole thing was unnecessary when you really wanted to just wait for
the _edits_ to be done, so as to avoid any potential conflicting state.

<img
src="https://github.com/user-attachments/assets/0927f3a6-c9ee-46ae-8f7b-97157d39a7b5"
width="500"/>

---

Release Notes:

- agent: Added ability to reject and accept all changes from the agent
panel.

---------

Co-authored-by: Agus Zubiaga <hi@aguz.me>
2025-06-03 15:20:25 -03:00
Danilo Leal
526a7c0702 agent: Support AGENT.md and AGENTS.md as rules file names (#31998)
These started to be used more recently, so we should also support them.

Release Notes:

- agent: Added support for `AGENT.md` and `AGENTS.md` as rules file
names.
2025-06-03 15:19:39 -03:00
Danilo Leal
e793740168 agent: Refine rules library window design (#31994)
Just polishing up a bit the Rules Library design. I think the most
confusing part here was the icon that was being used to tag a rule as
default; I've heard feedback more than once saying that was confusing,
so I'm now switching to a rather standard star icon, which I'd assume is
well-understood as a "favoriting" affordance.

Release Notes:

- N/A
2025-06-03 14:59:17 -03:00
Danilo Leal
dea0a58727 docs: Update mentions to GitHub to use correct capitalization (#31996)
That type of thing... 😅 "Github" is the incorrect formatting; "GitHub"
is the correct one.

Release Notes:

- N/A
2025-06-03 14:55:24 -03:00
Agus Zubiaga
b7abc9d493 agent: Display full terminal output without scrolling (#31922)
The terminal tool card used a fixed height and scrolling, but this meant
that it was too tall for commands that only outputted a few lines, and
the nested scrolling was undesirable.

This PR makes the card as tall as needed to fit the entire output (no
scrolling), and allows the user to collapse it to fewer lines when
applicable, making it work the same way as the edit tool card. In fact,
both tools now use a shared UI component.


https://github.com/user-attachments/assets/1127e21d-1d41-4a4b-a99f-7cd70fccbb56


Release Notes:

- Agent: Display full terminal output
- Agent: Allow collapsing terminal output
2025-06-03 10:54:25 -07:00
Peter Tripp
01a77bb231 Add sql language docs (#32003)
Closes: https://github.com/zed-industries/zed/issues/9537

Pairs with removing `prettier-plugin-sql` from the sql extension:
- https://github.com/zed-extensions/sql/pull/19

Release Notes:

- N/A
2025-06-03 13:52:42 -04:00
Daniel Zhu
de225fd242 file_finder: Add option to create new file (#31567)
https://github.com/user-attachments/assets/7c8a05a1-8d59-4371-a1d6-a8cb82aa13b9

While implementing this, I noticed that currently when the search panel
displays only one result, the box oscillates a bit up and down like so:


https://github.com/user-attachments/assets/dd1520e2-fa0b-4307-b27a-984e69b0a644

Not sure how to fix this at the moment, maybe that could be another PR?

Release Notes:

- Add option to create new file in project search panel.
2025-06-03 10:44:57 -07:00
Oleksiy Syvokon
1bc052d76b docs: Gemini thinking budget configuration (#32002)
Release Notes:

- N/A
2025-06-03 20:41:42 +03:00
Ben Kunkle
29cb95a3ca Remove support for changing magnification of active pane (#31981)
Closes #4265
Closes #24600

This setting causes many visual defects and introduces unnecessary
(maintenance) complexity, as seen in #4265 and #24600.


CC: @iamnbutler - How do you feel about this? I recommend looking at
https://github.com/zed-industries/zed/pull/24150#issuecomment-2866706506
for more context

Release Notes:

- Removed support for changing the magnification of the active pane.

---------

Co-authored-by: Peter <peter@zed.dev>
2025-06-03 13:32:32 -04:00
Cole Miller
1307b81721 Allow configuring custom git hosting providers in project settings (#31929)
Closes #29229

Release Notes:

- Extended the support for configuring custom git hosting providers to
cover project settings in addition to global settings.

---------

Co-authored-by: Anthony Eid <hello@anthonyeid.me>
2025-06-03 12:23:01 -04:00
Piotr Osiewicz
203754d0db docs: Demote rdbg support in docs (#31993)
Closes #ISSUE

Release Notes:

- N/A
2025-06-03 16:12:21 +00:00
Umesh Yadav
c9c603b1d1 Add support for OpenRouter as a language model provider (#29496)
This pull request adds full integration with OpenRouter, allowing users
to access a wide variety of language models through a single API key.

**Implementation Details:**

* **Provider Registration:** Registers OpenRouter as a new language
model provider within the application's model registry. This includes UI
for API key authentication, token counting, streaming completions, and
tool-call handling.
* **Dedicated Crate:** Adds a new `open_router` crate to manage
interactions with the OpenRouter HTTP API, including model discovery and
streaming helpers.
* **UI & Configuration:** Extends workspace manifests, the settings
schema, icons, and default configurations to surface the OpenRouter
provider and its settings within the UI.
* **Readability:** Reformats JSON arrays within the settings files for
improved readability.

**Design Decisions & Discussion Points:**

* **Code Reuse:** I leveraged much of the existing logic from the
`openai` provider integration due to the significant similarities
between the OpenAI and OpenRouter API specifications.
* **Default Model:** I set the default model to `openrouter/auto`. This
model automatically routes user prompts to the most suitable underlying
model on OpenRouter, providing a convenient starting point.
* **Model Population Strategy:**
* <strike>I've implemented dynamic population of available models by
querying the OpenRouter API upon initialization.
* Currently, this involves three separate API calls: one for all models,
one for tool-use models, and one for models good at programming.
* The data from the tool-use API call sets a `tool_use` flag for
relevant models.
* The data from the programming models API call is used to sort the
list, prioritizing coding-focused models in the dropdown.</strike>
* <strike>**Feedback Welcome:** I acknowledge this multi-call approach
is API-intensive. I am open to feedback and alternative implementation
suggestions if the team believes this can be optimized.</strike>
    * **Update: Now this has been simplified to one API call.**
* **UI/UX Considerations:**
* <strike>Authentication Method: Currently, I've implemented the
standard API key input in settings, similar to other providers like
OpenAI/Anthropic. However, OpenRouter also supports OAuth 2.0 with PKCE.
This could offer a potentially smoother, more integrated setup
experience for users (e.g., clicking a button to authorize instead of
copy-pasting a key). Should we prioritize implementing OAuth PKCE now,
or perhaps add it as an alternative option later?</strike> (PKCE is not
straightforward and is complicated, so I'm skipping this for now; we can
add the support and work on it later.)
* <strike>To visually distinguish models better suited for programming,
I've considered adding a marker (e.g., `</>` or `🧠`) next to their
names. Thoughts on this proposal?</strike> (This will require changes
and discussion across model providers. This doesn't fall under the scope
of the current PR.)
* OpenRouter offers 300+ models. The current implementation loads all of
them. **Feedback Needed:** Should we refine this list or implement more
sophisticated filtering/categorization for better usability?

**Motivation:**

This integration directly addresses one of the most highly upvoted
feature requests/discussions within the Zed community. Adding OpenRouter
support significantly expands the range of AI models accessible to
users.

I welcome feedback from the Zed team on this implementation and the
design choices made. I am eager to refine this feature and make it
available to users.

ISSUES: https://github.com/zed-industries/zed/discussions/16576

Release Notes:

- Added support for OpenRouter as a language model provider.

---------

Signed-off-by: Umesh Yadav <umesh4257@gmail.com>
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-06-03 15:59:46 +00:00
Shardul Vaidya
e13b494c9e bedrock: Fix cross-region inference (#30659)
Closes #30535

Release Notes:

- AWS Bedrock: Add support for Meta Llama 4 Scout and Maverick models.
- AWS Bedrock: Fixed cross-region inference for all regions.
- AWS Bedrock: Updated all models available through Cross Region
inference.

---------

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-06-03 15:46:35 +00:00
little-dude
c0397727e0 language_models: Sort Ollama models by name (#31620)
Hello,

This is my first contribution so apologies if I'm not following the
proper process (I haven't seen anything special in
https://github.com/zed-industries/zed/blob/main/CONTRIBUTING.md). Also,
I have tested my changes manually, but I could not figure out an easy way
to instantiate a `LanguageModelSelector` in the unit tests, so I didn't
write a test. If you can provide some guidance I'd be happy to write a
test.

---

If the user configured the models with custom names via `display_name`,
we want the Ollama models to be sorted based on the name that is
actually displayed.
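
A minimal sketch of that sorting (with a hypothetical `Model` type; Zed's
actual types differ):

```rust
struct Model {
    name: String,
    display_name: Option<String>,
}

// Sort by the name that is actually shown in the model picker, falling back
// to the model's raw name when no custom display name is configured.
fn sort_by_displayed_name(models: &mut [Model]) {
    models.sort_by(|a, b| {
        let a_name = a.display_name.as_deref().unwrap_or(&a.name);
        let b_name = b.display_name.as_deref().unwrap_or(&b.name);
        a_name.cmp(b_name)
    });
}
```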

~~The original issue is only about ollama but this change will also
affect the other providers.~~

Closes #30854

Release Notes:

- Ollama: Changed models to be sorted by name.
2025-06-03 15:37:08 +00:00
Marshall Bowers
9c2b90fb8f collab: Return subscription period from GET /billing/subscriptions (#31987)
This PR updates the `GET /billing/subscriptions` endpoint to return the
subscription period on them.

Release Notes:

- N/A
2025-06-03 15:29:08 +00:00
Marshall Bowers
d108e5f53c collab: Fix deserialization of create meter event response (#31982)
This PR fixes the deserialization of the create meter event response
from the Stripe API.

Release Notes:

- N/A
2025-06-03 15:23:38 +00:00
Marshall Bowers
2551bde1d3 collab: Increase number of returned extensions to 1,000 (#31983)
This PR increases the number of returned extensions from the extension
API to 1,000 (up from 500).

We'll need a better solution at some point, but for now we can keep
bumping this number.

Closes https://github.com/zed-industries/zed/issues/31067.

Release Notes:

- N/A
2025-06-03 15:03:32 +00:00
Peter Tripp
e7de80c6ae ci: Improve Danger and ci.yml explicitness (#31979)
Allow colons after issue links and allow them to be in an unordered list.
Change CI references from `[self-hosted, test]` to the more explicit
`[self-hosted, macOS]`.

Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <ben.kunkle@gmail.com>
Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-06-03 10:54:04 -04:00
Peter Tripp
ae210eced8 Fix aggressive indent in shell scripts (#31973)
Closes: https://github.com/zed-industries/zed/issues/31774

Release Notes:

- N/A

Co-authored-by: Ben Kunkle <ben.kunkle@gmail.com>
2025-06-03 10:50:58 -04:00
Piotr Osiewicz
a9d99d8347 docs: Improve docs for debugger (around breakpoints and doc structure) (#31962)
Closes #ISSUE

Release Notes:

- N/A
2025-06-03 16:35:35 +02:00
Thiago Pacheco
3e6435eddc Fix Python virtual environment detection (#31934)
# Fix Python Virtual Environment Detection in Zed

## Problem

Zed was not properly detecting Python virtual environments when a
project didn't contain a `pyrightconfig.json` file. This caused Pyright
(the Python language server) to report `reportMissingImports` errors for
packages installed in virtual environments, even though the virtual
environment was correctly set up and worked fine in other editors.

The issue was that while Zed's `PythonToolchainProvider` correctly
detected virtual environments, this information wasn't being
communicated to Pyright in a format it could understand.

## Root Cause

The main issue was in how Zed communicated virtual environment
configuration to Pyright through the Language Server Protocol (LSP).
When Pyright requests workspace configuration, it expects virtual
environment settings (`venvPath` and `venv`) at the root level of the
configuration object - the same format used in `pyrightconfig.json`
files. However, Zed was attempting to place these settings in various
nested locations that Pyright wasn't checking.

## Solution

The fix involves several coordinated changes to ensure Pyright receives
virtual environment configuration in all the ways it might expect:

### 1. Enhanced Workspace Configuration (`workspace_configuration`
method)
- When a virtual environment is detected, Zed now sets `venvPath` and
`venv` at the root level of the configuration object, matching the exact
format of a `pyrightconfig.json` file
- Uses relative path `"."` when the virtual environment is located in
the workspace root
- Also sets `python.pythonPath` and `python.defaultInterpreterPath` for
compatibility with different Pyright versions

### 2. Environment Variables for All Language Server Binaries
- Updated `check_if_user_installed`, `fetch_server_binary`,
`check_if_version_installed`, and `cached_server_binary` methods to
include shell environment variables
- This ensures environment variables like `VIRTUAL_ENV` are available to
Pyright, helping with automatic virtual environment detection

### 3. Initialization Options
- Added minimal initialization options to enable Pyright's automatic
path searching and import completion features
- Sets `autoSearchPaths: true` and `useLibraryCodeForTypes: true` to
improve Pyright's ability to find packages

## Key Changes

The workspace configuration now properly formats virtual environment
configuration:
- Root level: `venvPath` and `venv` (matches pyrightconfig.json format)
- Python section: `pythonPath` and `defaultInterpreterPath` for
interpreter paths
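
As a sketch, the configuration object described above would have roughly this
shape (using `serde_json` for illustration; the surrounding Zed plumbing is
omitted):

```rust
use serde_json::json;

// `venv_name` is e.g. ".venv"; `interpreter` is the detected interpreter path.
// `venvPath: "."` points at the workspace root, matching pyrightconfig.json.
fn pyright_workspace_configuration(venv_name: &str, interpreter: &str) -> serde_json::Value {
    json!({
        // Root level, the same format as a pyrightconfig.json file:
        "venvPath": ".",
        "venv": venv_name,
        // Python section, for compatibility with different Pyright versions:
        "python": {
            "pythonPath": interpreter,
            "defaultInterpreterPath": interpreter
        }
    })
}
```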

## Impact

- Users no longer need to create a `pyrightconfig.json` file for virtual
environment detection
- Python projects with virtual environments in standard locations
(`.venv`, `venv`, etc.) will work out of the box
- Import resolution for packages installed in virtual environments now
works correctly
- Maintains compatibility with manual `pyrightconfig.json` configuration
for complex setups

## Testing

The changes were tested with Python projects using virtual environments
without `pyrightconfig.json` files. Pyright now correctly resolves
imports from packages installed in the virtual environment, eliminating
the `reportMissingImports` errors.

## Release Notes

- Fixed Python virtual environment detection when no
`pyrightconfig.json` is present
- Pyright now correctly resolves imports from packages installed in
virtual environments (`.venv`, `venv`, etc.)
- Python projects with virtual environments no longer show false
`reportMissingImports` errors
- Improved Python development experience with automatic virtual
environment configuration

---------

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
2025-06-03 16:35:13 +02:00
Danilo Leal
9e75871d48 agent: Make the sound notification play only if Zed is in the background (#31975)
Users were giving feedback about the sound notification being
annoying/unnecessary if Zed is in the foreground, which I agree with! So,
this PR changes it so that it only plays if that is not the case.

Release Notes:

- agent: Improved sound notification behavior by making it play only if
Zed is in the background.
2025-06-03 11:14:26 -03:00
Ben Brandt
707a4c7f20 Remove unused editor_model configuration option (#31492)
It seems that this configuration option is no longer used and can be
removed.


Release Notes:

- Removed unused `agent.editor_model` setting
2025-06-03 13:50:33 +00:00
Oleksiy Syvokon
854076f96d agent: Lower "no thread found" logging level to debug (#31972)
This code path is not really an error, as it can happen due to normal,
albeit uncommon, actions. Like, for example, this scenario:

1. Create a thread X in Zed instance A
2. Open Zed instance B
3. Delete the thread X in instance A
4. Close instance B. This will write non-existing thread id X to
`agent-navigation-history.json`
5. Open Zed instance C. It won't be able to load the thread X.

Another way to get into this state is by running Zed with LMDB and
SQLite thread storages side-by-side.

In any case, this is not severe enough for an error.

Closes #ISSUE

Release Notes:

- N/A
2025-06-03 13:27:58 +00:00
90aca
cf931247d0 Add thinking budget for Gemini custom models (#31251)
Closes #31243

As described in my issue, the [thinking
budget](https://ai.google.dev/gemini-api/docs/thinking) gets
automatically chosen by Gemini unless it is specifically set to
something. In order to have fast responses (inline assistant) I prefer
to set it to 0.

Release Notes:

- ai: Added `thinking` mode for custom Google models with configurable
token budget

---------

Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
2025-06-03 13:40:20 +02:00
Ben Brandt
b74477d12e Option to auto-close deleted files with no unsaved edits (#31920)
Closes #27982

Release Notes:

- Added `close_on_file_delete` setting (off by default) to allow closing
open files after they have been deleted on disk

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-06-03 13:18:29 +02:00
Fernando Freire
3077abf9cf google_ai: Parse thought parts in Gemini responses (#31925)
Fixes thinking Gemini models.

Closes #31902

Release Notes:

- Updated Google Gemini client to match the latest API
2025-06-03 10:37:06 +00:00
Kiran_Peraka
07dab4e94a multi_buffer: Merge adjacent matches into a single excerpt when separated by only one line (#31708)
Closes #31252

Release Notes:

- Improved displaying of project search matches or diagnostics when the
excerpts are adjacent.

---------

Co-authored-by: Antonio Scandurra <me@as-cii.com>
2025-06-03 10:07:59 +00:00
Umesh Yadav
59686f1f44 language_models: Add images support for Ollama vision models (#31883)
Ollama supports vision to process input images. This PR adds support for
the same. I have tested this with gemma3:4b and have attached a screenshot
of it working.

<img width="435" alt="image"
src="https://github.com/user-attachments/assets/5f17d742-0a37-4e6c-b4d8-05b750a0a158"
/>


Release Notes:

- Add image support for [Ollama vision models](https://ollama.com/search?c=vision)
2025-06-03 11:12:59 +02:00
Piotr Osiewicz
a60bea8a3d collab: Reconnect to channel notes (#31950)
Closes #31758

Release Notes:

- Fixed channel notes not getting re-connected when a connection to Zed
servers is restored.

---------

Co-authored-by: Kirill Bulatov <mail4score@gmail.com>
2025-06-03 11:12:45 +02:00
THELOSTSOUL
b820aa1fcd Add tool support for DeepSeek (#30223)
The [DeepSeek function call
API](https://api-docs.deepseek.com/guides/function_calling)
has been released and it is the same as OpenAI's.

Release Notes:

- Added tool calling support for Deepseek Models

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-06-03 10:59:36 +02:00
Piotr Osiewicz
55d91bce53 debugger: Add tooltips to the new process modal (#31953)
Closes #ISSUE

Release Notes:

- N/A
2025-06-03 08:49:56 +00:00
clauses3
b798392050 Expand tilde paths in edit prediction settings (#31235)
Release Notes:

- edit_prediction: Handle `~` in paths in `disabled_globs` setting
2025-06-03 10:32:23 +02:00
Finn Evers
657c8b1084 project_panel: Improve behavior for cut-pasting entries (#31931)
Previously, we would move entries each time they were pasted. Thus, if
you were to cut some files and paste them in folder `a` and then `b`,
they would only occur in folder `b` and not in folder `a`. This is
unintuitive - e.g. the same does not apply to text and does not happen
in other editors.

This PR improves this behavior - after the first paste of a cut
clipboard, we change the clipboard to a copy clipboard, ensuring that
for all following pastes, the entries are not moved again. In the above
example, the files would then also be found in folder `a`. This is also
reflected in the added test.

Release Notes:

- Ensured that cut project panel entries are cut-pasted only on the
first use, and copy-pasted on all subsequent pastes.
2025-06-03 03:51:42 -04:00
Piotr Osiewicz
2bb8aa2f73 go_to_line: Show position relative to current excerpt in a multi-buffer (#31947)
Closes #31515

This PR explicitly leaves the behavior of go to line unspecified with
multi-buffer.

Release Notes:

- Fixed wrong line number being shown in the status bar when in
multi-buffer.
2025-06-03 09:41:45 +02:00
Michael Sloan
beeb42da29 snippets: Show completions on first range in tabstop instead of last (#31939)
Release Notes:

- N/A
2025-06-02 23:56:45 -06:00
thebasilisk
6d66ff1d95 Add Helix implementation for Motion::FindForward and Motion::FindBackward (#31547)
Closes #30462 

Release Notes:

- Added text selection for "vim::PushFindForward" and
"vim::PushFindBackward" keybinds in helix mode
2025-06-02 22:15:21 -06:00
Arseny Kapoulkine
e0b818af62 Fix duplicate prefixes when repeating completions in Vim mode (#31818)
When text is completed, new_text contains the entire new completion
which replaces the old_text. In Vim mode, pressing . repeats the
completion; if InputHandled records the full text and no range to
replace, the entire completion gets appended; this happens after the
completion prefix typing repeats, and we get a duplicate prefix.

Using range to replace has some downsides when the completion is
repeated as a standalone action; in a common case, it should be
sufficient to record the new suffix. This is actually what used to
happen before #28586, which removed this code in a larger attempt to fix
completions at multiple cursors:

```rust
let text = &new_text[common_prefix_len..];
let utf16_range_to_replace = ...

cx.emit(EditorEvent::InputHandled {
    utf16_range_to_replace,
    text: text.into(),
});
```

Fixes #30758
Fixes #31759
Fixes #31779

Release Notes:

- Vim: Fix duplicate prefixes when repeating completions via `.`
2025-06-02 21:34:46 -06:00
Fernando Carletti
58a400b1ee keymap: Fix subword navigation and selection on Sublime Text keymap (#31840)
On Linux, the correct modifier key for this action is `alt`, not `ctrl`.
I mistakenly set it to `ctrl` on #30268.

From Sublime's keymap: 
```json
{ "keys": ["ctrl+left"], "command": "move", "args": {"by": "words", "forward": false} },
{ "keys": ["ctrl+right"], "command": "move", "args": {"by": "word_ends", "forward": true} },
{ "keys": ["ctrl+shift+left"], "command": "move", "args": {"by": "words", "forward": false, "extend": true} },
{ "keys": ["ctrl+shift+right"], "command": "move", "args": {"by": "word_ends", "forward": true, "extend": true} },

{ "keys": ["alt+left"], "command": "move", "args": {"by": "subwords", "forward": false} },
{ "keys": ["alt+right"], "command": "move", "args": {"by": "subword_ends", "forward": true} },
{ "keys": ["alt+shift+left"], "command": "move", "args": {"by": "subwords", "forward": false, "extend": true} },
{ "keys": ["alt+shift+right"], "command": "move", "args": {"by": "subword_ends", "forward": true, "extend": true} },
```

Release Notes:

- N/A
2025-06-02 21:22:27 -06:00
tidely
8ab7d44d51 terminal: Match trait bounds with terminal input (#31441)
The core change here is the following:

```rust
fn write_to_pty(&self, input: impl Into<Vec<u8>>);

// into
fn write_to_pty(&self, input: impl Into<Cow<'static, [u8]>>);
```

This matches the trait bounds used by the Alacritty crate. We are
now allowed to effectively pass `&'static str` instead of always needing
a `String`.

The main benefit comes from making the `to_esc_str` function return a
`Cow<'static, str>` instead of `String`. We save an allocation in the
following instances:

- When the user presses any special key that isn't alphanumerical (in
the terminal)
- When the user presses any key while a modifier is active (in the
terminal)
- When focusing/un-focusing the terminal
- When completing or undoing a terminal transaction
- When starting a terminal assist

This basically saves us an allocation on **every key** press in the
terminal.
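
A tiny sketch of the pattern (the actual key names and escape sequences in
Zed's `to_esc_str` differ):

```rust
use std::borrow::Cow;

// Fixed sequences are returned as borrowed &'static str (no allocation);
// only dynamically-built sequences fall back to an owned String.
fn to_esc_str(key: &str, app_cursor: bool) -> Option<Cow<'static, str>> {
    match (key, app_cursor) {
        ("up", false) => Some(Cow::Borrowed("\x1b[A")),
        ("up", true) => Some(Cow::Borrowed("\x1bOA")),
        ("down", false) => Some(Cow::Borrowed("\x1b[B")),
        ("down", true) => Some(Cow::Borrowed("\x1bOB")),
        // Alt+<char> sends ESC followed by the character; built at runtime,
        // so this arm still allocates.
        (other, _) if other.starts_with("alt-") => {
            Some(Cow::Owned(format!("\x1b{}", &other[4..])))
        }
        _ => None,
    }
}
```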

NOTE: This same optimization can be done for **nearly all** keypresses
in the entirety of Zed by changing the signature of the `Keystroke`
struct in gpui. If the Zed team is interested in a PR for it, let me
know.

Release Notes:

- N/A
2025-06-02 21:12:28 -06:00
Michael Sloan
56d4c0af9f snippets: Preserve leading whitespace (#31933)
Closes #18481

Release Notes:

- Snippet insertions now preserve leading whitespace instead of using
language-specific auto-indentation.
2025-06-03 02:37:06 +00:00
Michael Sloan
feeda7fa37 Add newlines between messages in LSP RPC logs for more navigability (#31863)
Release Notes:

- N/A
2025-06-03 02:12:58 +00:00
Cole Miller
4a5c55a8f2 debugger: Use new icons for quick debug/spawn button (#31932)
This PR wires up the new icons that were added in #31784.

Release Notes:

- N/A
2025-06-03 01:26:41 +00:00
Cole Miller
7c1ae9bcc3 debugger: Go back to loading task contexts asynchronously for new session modal (#31908)
Release Notes:

- N/A
2025-06-02 21:14:30 -04:00
Cole Miller
6f97da3435 debugger: Align zoom behavior with other panels (#31901)
Release Notes:

- Debugger Beta: `shift-escape` (`workspace::ToggleZoom`) now zooms the
entire debug panel; `alt-shift-escape` (`debugger::ToggleExpandItem`)
triggers the old behavior of zooming a specific item.
2025-06-03 00:59:36 +00:00
Danilo Leal
63c1033448 agent: Generate a notification when reaching tool use limit (#31894)
When reaching the consecutive tool call limit, the agent gets blocked,
and without a notification, you wouldn't know that. This PR adds the
ability to be notified when that happens, and you can use both sound
_and_ toast, or just one of them.

Release Notes:

- agent: Added support for getting notified (via toast and/or sound)
when reaching the consecutive tool call limit.
2025-06-02 21:57:42 -03:00
Cole Miller
b16911e756 debugger: Extend f5 binding to contextually rerun the last session (#31753)
Release Notes:

- Debugger Beta: if there is no stopped or running session, `f5` now
reruns the last session, or opens the new session modal if there is no
previously-run session.
2025-06-03 00:35:52 +00:00
Cole Miller
b14401f817 Remove agent_diff key context when agent review ends for an editor (#31930)
Release Notes:

- Fixed an issue that prevented `git::Restore` keybindings from working
in editors for buffers that had previously been modified by the agent.

Co-authored-by: Anthony Eid <hello@anthonyeid.me>
2025-06-02 19:46:04 -04:00
Michael Sloan
17cf865d1e Avoid re-querying language server completions when possible (#31872)
Also adds reuse of the markdown documentation cache even when
completions are re-queried, so that markdown documentation doesn't
flicker when `is_incomplete: true` (completions provided by
rust-analyzer always set this).

Release Notes:

- Added support for filtering language server completions instead of
re-querying.
2025-06-02 22:19:09 +00:00
Mikayla Maki
b7ec437b13 Simplify debug launcher UI (#31928)
This PR updates the name of the `NewSessionModal` to `NewProcessModal`
(to reflect its new purpose), changes the tabs in the modal to read
`Run | Debug | Attach | Launch` and changes the associated types in code
to match the tabs. In addition, this PR adds a few labels to the text
fields in the `Launch` tab, and adds a link to open the associated
settings file. In both debug.json files, added links to the zed.dev
debugger docs.

Release Notes:

- Debugger Beta: Improve the new process modal
2025-06-02 21:24:08 +00:00
Piotr Osiewicz
f1aab1120d terminal: Persist pinned tabs in terminal (#31921)
Closes #31098

Release Notes:

- Fixed terminal pinned tab state not persisting across restarts.
2025-06-02 22:36:57 +02:00
Joe Polny
3f90bc81bd gpui: Filter out NoAction bindings from pending input (#30260)
This prevents the 1-second delay happening on input when all of the
pending bindings are NoAction.

Closes #30259

Release Notes:

- Fixed unnecessary delay when typing a multi-stroke binding that
doesn't match any non-null bindings

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-06-02 13:22:36 -06:00
AidanV
9d5fb3c3f3 Add :delm[arks] {marks} command to delete vim marks (#31140)
Release Notes:

- Implements `:delm[arks] {marks}` specified
[here](https://vimhelp.org/motion.txt.html#%3Adelmarks)
- Adds `ArgumentRequired` action for vim commands that require arguments

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-06-02 13:18:28 -06:00
Oleksiy Syvokon
864767ad35 agent: Support vim-mode in the agent panel's editor (#31915)
Closes #30081

Release Notes:

- Added vim-mode support in the agent panel's editor

---------

Co-authored-by: Ben Kunkle <ben.kunkle@gmail.com>
Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-06-02 22:10:31 +03:00
Bennet Bo Fenner
ec69b68e72 indent guides: Fix issue with entirely-whitespace lines (#31916)
Closes #26957

Release Notes:

- Fix an edge case where indent guides would be rendered incorrectly if
lines consisted of entirely whitespace

Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
2025-06-02 17:35:00 +00:00
Piotr Osiewicz
9dd18e5ee1 python: Re-land usage of source file path in toolchain picker (#31893)
This reverts commit 1e55e88c18.

Closes #ISSUE

Release Notes:

- Python toolchain selector now uses path to the closest pyproject.toml
as a basis for picking a toolchain. All files under the same
pyproject.toml (in filesystem hierarchy) will share a single virtual
environment. It is possible to have multiple Python virtual environments
selected for disjoint parts of the same project.
2025-06-02 16:29:06 +00:00
Smit Barmase
2ebe16a52f workspace: Fix empty pane becomes unresponsive to keybindings after commit via terminal (#31905)
Closes #27579

This PR fixes an issue where keybindings wouldn’t work in a pane after
focusing it from the dock using the `ActivatePaneInDirection` action in
certain cases.


https://github.com/user-attachments/assets/9ceca580-a63f-4807-acff-29b61819f424

Release Notes:

- Fixed an issue where keybindings wouldn’t work in a pane after
focusing it from the dock using the `ActivatePaneInDirection` action in
certain cases.
2025-06-02 21:52:35 +05:30
Anthony
089db88432 use default if env doesn't exist 2025-06-02 11:59:28 -04:00
Joseph T. Lyons
1ed4647203 Add test for pane: toggle pin tab (#31906)
Also adds the optimization to not move a tab being pinned when its
destination index is the same as its index.

Release Notes:

- N/A
2025-06-02 15:58:10 +00:00
Dino
ebed567adb vim: Handle paste in visual line mode when cursor is at newline (#30791)
This Pull Request fixes the current paste behavior in vim mode, when in
visual mode, and the cursor is at a newline character. Currently this
joins the pasted contents with the line right below it, but in vim this
does not happen, so these changes make it so that Zed's vim mode behaves
the same as vim for this specific case.

Closes #29270 

Release Notes:

- Fixed pasting in vim's visual line mode when cursor is on a newline
character
2025-06-02 09:50:13 -06:00
5brian
a6544c70c5 vim: Fix add empty line (#30987)
Fixes: 

`] space` does not consume counts, and it gets applied to the next
action.

`] space` on an empty line causes cursor to move to the next line.

Release Notes:

- N/A
2025-06-02 09:49:31 -06:00
AidanV
b363e1a482 vim: Add support for :e[dit] {file} command to open files (#31227)
Closes #17786

Release Notes:

- Adds `:e[dit] {file}` command to open files
2025-06-02 09:47:40 -06:00
Umesh Yadav
65e3e84cbc language_models: Add thinking support for ollama (#31665)
This PR updates how we handle Ollama responses, leveraging the new
[v0.9.0](https://github.com/ollama/ollama/releases/tag/v0.9.0) release.
Previously, thinking text was embedded within the model's main content,
leading to it appearing directly in the agent's response. Now, thinking
content is provided as a separate parameter, allowing us to display it
correctly within the agent panel, similar to other providers. I have
tested this with qwen3:8b and it works nicely. ~~We can release this once
the Ollama release is stable.~~ It's released now as stable.

<img width="433" alt="image"
src="https://github.com/user-attachments/assets/2983ef06-6679-4033-82c2-231ea9cd6434"
/>


Release Notes:

- Add thinking support for ollama

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-06-02 15:12:41 +00:00
morgankrey
1e1d4430c2 Fixing 404 in AI Configuration Docs (#31899)
Closes #ISSUE

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-06-02 09:09:00 -05:00
Oleksiy Syvokon
c874f1fa9d agent: Migrate thread storage to SQLite with zstd compression (#31741)
Previously, LMDB was used for storing threads, but it consumed excessive
disk space and was capped at 1GB.

This change migrates thread storage to an SQLite database. Thread JSON
objects are now compressed using zstd.

I considered training a custom zstd dictionary and storing it in a
separate table. However, the additional complexity outweighed the modest
space savings (up to 20%). I ended up using the default dictionary
stored with data.

Threads can be exported relatively easily from outside the application:

```
$ sqlite3 threads.db "SELECT hex(data) FROM threads LIMIT 5;" |
    xxd -r -p |
    zstd -d |
    fx
```

Benchmarks:
- Original heed database: 200MB
- SQLite uncompressed: 51MB
- SQLite compressed (this PR): 4.0MB
- SQLite compressed with a trained dictionary: 3.8MB
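
A hedged sketch of the write side, using `rusqlite` and the `zstd` crate
purely for illustration (Zed's own database layer and schema differ):

```rust
use rusqlite::{params, Connection};

// Compress the serialized thread with zstd before writing it into a `threads`
// table; reading back is the reverse (SELECT, then zstd::decode_all).
fn save_thread(conn: &Connection, id: &str, thread_json: &str) -> anyhow::Result<()> {
    conn.execute(
        "CREATE TABLE IF NOT EXISTS threads (id TEXT PRIMARY KEY, data BLOB NOT NULL)",
        [],
    )?;
    // 3 is zstd's default compression level.
    let compressed = zstd::encode_all(thread_json.as_bytes(), 3)?;
    conn.execute(
        "INSERT OR REPLACE INTO threads (id, data) VALUES (?1, ?2)",
        params![id, compressed],
    )?;
    Ok(())
}
```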


Release Notes:

- Migrated thread storage to SQLite with compression
2025-06-02 17:01:34 +03:00
Alisina Bahadori
9a9e96ed5a Increase terminal inline assistant block height (#31807)
Closes #31806
Closes #28969

Not sure if a static value is best. Maybe it is better to somehow use
the `count_lines` function here too.

### Before
<img width="871" alt="449463234-ab1a33a0-2331-4605-aaee-cae60ddd0f9d"
src="https://github.com/user-attachments/assets/1e3bec86-4cad-426c-9f59-5ad3d14fc9d7"
/>


### After
<img width="861" alt="Screenshot 2025-05-31 at 1 12 33 AM"
src="https://github.com/user-attachments/assets/0c8219a9-0812-45af-8125-1f4294fe2142"
/>

Release Notes:

- Fixed terminal inline assistant clipping when cursor is at bottom of
terminal.
2025-06-02 10:55:40 -03:00
Danilo Leal
8c46e290df docs: Add more details to the agent checkpoint section (#31898)
Figured this was worth highlighting as part of the "Restore Checkpoint"
feature behavior.

Release Notes:

- N/A
2025-06-02 10:55:03 -03:00
Thiago Pacheco
aacbb9c2f4 python: Respect picked toolchain (when it's not at the root) when running tests (#31150)
# Fix Python venv Detection for Test Runner
## Problem
Zed’s Python test runner was not reliably detecting and activating the
project’s Python virtual environment (.venv or venv), causing it to
default to the system Python. This led to issues such as missing
dependencies (e.g., pytest) when running tests.
## Solution
- **Project Root Awareness:** The Python context provider now receives the
project root path, ensuring venv detection always starts from the
project root rather than the test file’s directory.
- **Robust venv Activation:** The test runner now correctly detects and
activates the Python interpreter from `.venv` or `venv` in the project root,
setting `VIRTUAL_ENV` and updating `PATH` as needed.
- **Minimal Impact:** The change is limited in scope, affecting only the
necessary code paths for Python test runner venv detection. No broad
architectural changes were made.
## Additional Improvements
- Updated trait and function signatures to thread the project root path
where needed.
- Cleaned up linter warnings and unused code.
## Result
Python tests now reliably run using the project’s virtual environment,
matching the behavior of other IDEs and ensuring all dependencies are
available.

Release Notes:

- Fixed Python tasks always running with a toolchain selected for the
root of a workspace.

---------

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
2025-06-02 15:29:34 +02:00
Ben Kunkle
f90333f92e zlog: Check crate name against filters if scope empty (#31892)
Fixes
https://github.com/zed-industries/zed/discussions/29541#discussioncomment-13243073

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-06-02 13:22:32 +00:00
Ben Kunkle
b24f614ca3 docs: Improve documentation around Vulkan/GPU issues on Linux (#31895)
Closes #ISSUE

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-06-02 13:16:49 +00:00
Danilo Leal
cefa0cbed8 Improve the file finder picker footer design (#31777)
The current filter icon button in the file finder picker footer confused
me because it is really a toggleable button that adds a specific
filter. From the icon used, I was expecting more configuration options
rather than just one. Also, I was really wanting a way to trigger it
with the keyboard, even if I need my mouse to initially learn about the
keybinding.

So, this PR transforms that icon button into an actual popover trigger,
in which (for now) there's only one filter option. However, being a menu
is cool because it allows accommodating more items like, for example,
"Include Git Submodule Files" and others, in the future. Also, there's
now a keybinding that you can hit to open that popover, as well as an
indicator that pops in to communicate that a certain item inside it has
been toggled.

Lastly, also added a keybinding to the "Split" menu in the spirit of
making everything more keyboard accessible!

| Before | After |
|--------|--------|
| ![CleanShot 2025-05-30 at 4  29
57@2x](https://github.com/user-attachments/assets/88a30588-289d-4d76-bb50-0a4e7f72ef84)
| ![CleanShot 2025-05-30 at 4  24
31@2x](https://github.com/user-attachments/assets/30b8f3eb-4d5c-43e1-abad-59d32ed7c89f)
|

Release Notes:

- Improved the keyboard navigability of the file finder filtering
options.
2025-06-02 09:54:15 -03:00
Smit Barmase
3fb1023667 editor: Fix columnar selection incorrectly uses cursor to start selection instead of mouse position (#31888)
Closes #13905

This PR fixes columnar selection to originate from the mouse position
instead of the current cursor position. Now columnar selection behaves the
same as in Sublime Text.

1. Columnar selection from click-and-drag on text (New):


https://github.com/user-attachments/assets/f2e721f4-109f-4d81-a25b-8534065bfb37

2. Columnar selection from click-and-drag on empty space (New): 


https://github.com/user-attachments/assets/c2bb02e9-c006-4193-8d76-097233a47a3c

3. Multi-cursors at end of line when no intersecting text is found (New):


https://github.com/user-attachments/assets/e47d5ab3-0b5f-4e55-81b3-dfe450f149b5

4. Converting normal selection to columnar selection (Existing):


https://github.com/user-attachments/assets/e5715679-ebae-4f5a-ad17-d29864e14e1e


Release Notes:

- Fixed the issue where the columnar selection (`opt+shift`) incorrectly
used the cursor to start the selection instead of the mouse position.
2025-06-02 16:37:36 +05:30
Umesh Yadav
9c715b470e agent: Show actual file name and icon in context pill (#31813)
Previously, if we added images to the agent context, the context pill showed
a generic Image tag. This PR makes sure that if a path is available for an
image context, the filename is shown, which is in line with other context
pills.


Before | After
--- | ---
![Screenshot 2025-05-31 at 3 14 07
PM](https://github.com/user-attachments/assets/b342f046-2c1c-4c18-bb26-2926933d5d34)
| ![Screenshot 2025-05-31 at 3 14 07
PM](https://github.com/user-attachments/assets/90ad4062-cdc6-4274-b9cd-834b76e8e11b)





Release Notes:

- N/A

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-06-02 10:35:22 +00:00
Oleksiy Syvokon
ae219e9e99 agent: Fix bug with double-counting tokens in Gemini (#31885)
We report the total number of input tokens by summing the numbers of
1. Prompt tokens
2. Cached tokens

But the Google API returns prompt tokens (1) that already include cached
tokens (2), so we were double counting tokens in some cases.
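
Roughly speaking (field names follow Gemini's usage metadata; this is only a
sketch of the accounting, not the Zed code):

```rust
struct UsageMetadata {
    prompt_token_count: u64,
    cached_content_token_count: u64,
}

// Before: prompt_token_count + cached_content_token_count double-counted the
// cached portion, because the prompt count already includes it.
fn input_tokens(usage: &UsageMetadata) -> u64 {
    usage.prompt_token_count
}

// Cached tokens can still be reported separately, e.g. for cache statistics.
fn cached_tokens(usage: &UsageMetadata) -> u64 {
    usage.cached_content_token_count
}
```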

Release Notes:

- Fixed bug with double-counting tokens in Gemini
2025-06-02 10:18:44 +00:00
Bennet Bo Fenner
6d99c12796 assistant_context_editor: Fix copy paste regression (#31882)
Closes #31166

Release Notes:

- Fixed an issue where copying and pasting an assistant response in text
threads would result in duplicate text
2025-06-02 11:52:47 +02:00
Kirill Bulatov
8fb7fa941a Suppress blade_graphics-related logs by default (#31881)
Release Notes:

- N/A
2025-06-02 08:59:57 +00:00
Joseph T. Lyons
22d75b798e Notify when pinning a tab even if the tab isn't moved (#31880)
The optimization to not move a tab being pinned (when the destination
index is the same as its index) in
https://github.com/zed-industries/zed/pull/31871 caused a regression, as
we were no longer calling `cx.notify()` indirectly through `move_item`.
Thanks for catching this, @smitbarmase.

Release Notes:

- N/A
2025-06-02 06:58:57 +00:00
Smit Barmase
06a199da4d editor: Fix completion accept for optional chaining in Typescript (#31878)
Closes #31662

Currently, we assume `insert_range` will always end at the cursor and
`replace_range` will always end after the cursor when calculating the
range to replace. This is a particular case for rust-analyzer, but
not widely true for other language servers.

This PR fixes this assumption; now `insert_range` and
`replace_range` can both end before the cursor.

In this particular case:
```ts
let x: string | undefined;

x.tostˇ // here insert as well as replace range is just "." while new_text is "?.toString()"
```

This change makes it such that if the final range to replace ends before
the cursor, we extend it to the cursor.
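
As a sketch (plain offsets here; the real code works on buffer anchors):

```rust
use std::ops::Range;

// If the range a completion wants to replace ends before the cursor, extend it
// up to the cursor so accepting the completion also consumes what was typed.
fn range_to_replace(range: Range<usize>, cursor: usize) -> Range<usize> {
    if range.end < cursor {
        range.start..cursor
    } else {
        range
    }
}
```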

Bonus:
- Improves suffix and subsequence matching to use `label` over
`new_text`, as `new_text` can contain end characters like `()` or `$`
which are not visible when accepting the completion.
- Makes the suffix and subsequence checks case-insensitive.
- Fixes broken subsequence matching, which was not considering the order
of characters while matching the subsequence.

Release Notes:

- Fixed an issue where autocompleting optional chaining methods in
TypeScript, such as `x.tostr`, would result in `x?.toString()tostr`
instead of `x?.toString()`.
2025-06-02 10:45:40 +05:30
Joseph T. Lyons
ab6125ddde Fix bugs around pinned tabs (#31871)
Closes https://github.com/zed-industries/zed/issues/31870

Release Notes:

- Allowed opening 1 more item if `n` tabs are pinned, where `n` equals
`max_tabs` count.
- Fixed a bug where pinned tabs would eventually be closed out when
exceeding the `max_tabs` count.
- Fixed a bug where a tab could be lost when pinning a tab while at the
`max_tabs` count.
- Fixed a bug where pinning a tab when already at the `max_tabs` limit
could cause other tabs to be incorrectly closed.
2025-06-01 19:50:06 -04:00
Joseph T. Lyons
d3bc561f26 Disable close clean menu item when all are dirty (#31859)
This PR disables the "Close Clean" tab context menu action if all items
are dirty.

<img width="595" alt="SCR-20250601-kaev"
src="https://github.com/user-attachments/assets/add30762-b483-4701-9053-141d2dfe9b05"
/>

<img width="573" alt="SCR-20250601-kahl"
src="https://github.com/user-attachments/assets/24f260e4-01d6-48d6-a6f4-a13ae59c246e"
/>

Also did a bit more general refactoring.

Release Notes:

- N/A
2025-06-01 15:15:33 +00:00
Joseph T. Lyons
f13f2dfb70 Ensure item-closing actions do not panic when no items are present (#31845)
This PR adds a comprehensive test that ensures that no item-closing
action will panic when no items are present. A test already existed
(`test_remove_active_empty`) that ensured `CloseActiveItem` didn't
panic, but the new test covers:

- `CloseActiveItem`
- `CloseInactiveItems`
- `CloseAllItems`
- `CloseCleanItems`
- `CloseItemsToTheRight`
- `CloseItemsToTheLeft`

I plan to do a bit more clean up in `pane.rs` and this feels like a good
thing to add before that.

Release Notes:

- N/A
2025-06-01 03:31:38 +00:00
Joseph T. Lyons
24e4446cd3 Refactor item-closing actions (#31838)
While working on 

- https://github.com/zed-industries/zed/pull/31783
- https://github.com/zed-industries/zed/pull/31786

... I noticed some areas that could be improved through refactoring. The
bug in https://github.com/zed-industries/zed/pull/31783 came from having
duplicate code. The fix had been applied to one version, but not the
duplicated code.

This PR attempts to do some initial clean up, through some refactoring.

Release Notes:

- N/A
2025-05-31 19:38:32 -04:00
Aleksei Gusev
cc536655a1 Fix slowness in Terminal when vi-mode is enabled (#31824)
It seems Alacritty handles vi-mode motions in a special way, and it is up
to the client to decide when a redraw is necessary. With this change,
`TerminalView` notifies the context if a keystroke is processed and vi
mode is enabled.

Fixes #31447

Before:


https://github.com/user-attachments/assets/a78d4ba0-23a3-4660-a834-2f92948f586c

After:


https://github.com/user-attachments/assets/cabbb0f4-a1f9-4f1c-87d8-a56a10e35cc8

Release Notes:

- Fixed sluggish cursor motions in Terminal when Vi Mode is enabled
[#31447]
2025-05-31 20:02:56 +03:00
Kartik Vashistha
2a9e73c65d docs: Update Ansible docs (#31817)
Release Notes:

- N/A
2025-05-31 10:30:29 -04:00
Kirill Bulatov
4f1728e5ee Do not unwrap when updating inline diagnostics (#31814)
Also do not hold the strong editor reference while debounced.

Release Notes:

- N/A
2025-05-31 10:10:15 +00:00
Christian Bergschneider
40c91d5df0 gpui: Implement window_handle and display_handle for wayland platform (#28152)
This PR implements the previously unimplemented window_handle and
display_handle methods in the wayland platform. It also exposes the
display_handle method through the Window struct.

Release Notes:

- N/A
2025-05-30 15:45:03 -07:00
Kirill Bulatov
fe1b36671d Do not error on debugger server connection close (#31795)
Start to show islands of logs in a successful debugger runs for Rust:

<img width="1728" alt="1"
src="https://github.com/user-attachments/assets/7400201b-b900-4b20-8adf-21850ae59671"
/>

<img width="1728" alt="2"
src="https://github.com/user-attachments/assets/e75cc5f1-1f74-41d6-b7aa-697a4b2f055b"
/>

Release Notes:

- N/A
2025-05-30 22:37:40 +00:00
Agus Zubiaga
bb9e2b0403 Fix existing CompletionMode deserialization (#31790)
https://github.com/zed-industries/zed/pull/31668 renamed
`CompletionMode::Max` to `CompletionMode::Burn` which is a good change,
but this broke the deserialization for threads whose completion mode was
stored in LMDB. This adds a deserialization alias so that both values
work.

We could make a full new `SerializedThread` version which migrates this
value, but that seems overkill for this single change; we can batch that
with more changes later. Also, people in nightly already have some v1
threads with `burn` stored, so it wouldn't quite work for everybody.

Release Notes:

- N/A
2025-05-30 19:06:08 -03:00
Yaroslav Pietukhov
4f8d7f0a6b Disallow running Zed with root privileges (#31331)
This will fix a lot of weird problems that stem from file access
issues.

As discussed in
https://github.com/zed-industries/zed/pull/31219#issuecomment-2905371710,
for now it's better to just prevent running Zed with root privileges.
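
A minimal sketch of such a guard on Unix, assuming the `libc` crate is available (the real check and its error message may differ):

```rust
#[cfg(unix)]
fn ensure_not_root() {
    // SAFETY: geteuid has no preconditions and never fails.
    let euid = unsafe { libc::geteuid() };
    if euid == 0 {
        eprintln!("Zed should not be run with root privileges; start it as a regular user.");
        std::process::exit(1);
    }
}

#[cfg(not(unix))]
fn ensure_not_root() {}

fn main() {
    ensure_not_root();
    // ...continue normal startup...
}
```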

Release Notes:

- Explicitly disallow running Zed with root privileges

---------

Co-authored-by: Peter Tripp <peter@zed.dev>
2025-05-30 21:22:52 +00:00
Danilo Leal
caf3d30bf6 Harmonize quick action icons (#31784)
Just ensuring they're more harmonized in size, stroke width, and overall
dimensions. Also adding two new "play" icons that will be used in the
context of the debugger.

<img
src="https://github.com/user-attachments/assets/79bcf0b0-e995-4c8e-9c78-0aba32f42f2d"
width="400" />

Release Notes:

- N/A
2025-05-30 18:18:23 -03:00
Peter Tripp
df0cf22347 Add powershell language docs (#31787)
Release Notes:

- N/A
2025-05-30 17:09:55 -04:00
Piotr Osiewicz
a305eda8d1 debugger: Relax implementation of validate_config to not run validation (#31785)
When we moved to schema-based debug configs, we added `validate_config`,
a trait method that is supposed to both validate the configuration and
determine whether it is a launch or an attach configuration.

The validation part turned out to be problematic: we received reports on
Discord about scenarios not starting up properly, and it turned out that
JavaScript's implementation was overly strict. So I removed the code that
tries to validate the config and let the debug adapter itself decide
whether it can digest the configuration. `validate_config` is now left
unimplemented for most `DebugAdapter` implementations (except for PHP),
because all adapters use `request`: 'launch'/'attach' to distinguish the
two cases. The trait method stays in place, though, since nothing
guarantees this holds for every adapter.
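
A sketch of the relaxed shape described above (trait and type names are illustrative, not the exact Zed definitions): the default implementation no longer validates anything beyond reading `request` to classify the scenario, and individual adapters such as PHP can still override it.

```rust
use serde_json::Value;

enum DebugRequestKind {
    Launch,
    Attach,
}

trait DebugAdapter {
    // Default: classify only, never reject. The adapter process itself is the
    // one that decides whether it can digest the rest of the configuration.
    fn validate_config(&self, config: &Value) -> anyhow::Result<DebugRequestKind> {
        match config.get("request").and_then(Value::as_str) {
            Some("attach") => Ok(DebugRequestKind::Attach),
            // "launch", a missing field, or anything unexpected is treated as
            // a launch request rather than being rejected up front.
            _ => Ok(DebugRequestKind::Launch),
        }
    }
}
```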

cc @Anthony-Eid

Release Notes:

- debugger: Improved error messages when the debug scenario is not
valid.
- debugger: Fixed cases where valid configs were rejected.
2025-05-30 23:08:41 +02:00
Joseph T. Lyons
ba7b1db054 Mark items as pinned via ! in tests (#31786)
Release Notes:

- N/A
2025-05-30 21:03:31 +00:00
Joseph T. Lyons
019c8ded77 Fix bug where pinned tabs were closed when closing others (#31783)
Closes https://github.com/zed-industries/zed/issues/28166

Release Notes:

- Fixed a bug where pinned tabs were closed when running `Close Others`
from the tab context menu
2025-05-30 20:48:33 +00:00
Fernando Carletti
1704dbea7e keymap: Add subword navigation and selection to Sublime Text keymap (#30268)
For reference, this is what is set in Sublime Text's default-keymap
files for both macOS and Linux:

```json
{ "keys": ["ctrl+left"], "command": "move", "args": {"by": "words", "forward": false} },
{ "keys": ["ctrl+right"], "command": "move", "args": {"by": "word_ends", "forward": true} },
{ "keys": ["ctrl+shift+left"], "command": "move", "args": {"by": "words", "forward": false, "extend": true} },
{ "keys": ["ctrl+shift+right"], "command": "move", "args": {"by": "word_ends", "forward": true, "extend": true} },
```

Release Notes:

- Add subword navigation and selection to Sublime keymap

Co-authored-by: Peter Tripp <peter@zed.dev>
2025-05-30 20:38:21 +00:00
Hendrik Sollich
eefa6c4882 Icon theme selector: Don't select last list item when fuzzy searching (#29560)
Adds manual icon-theme selection persistence

Stores the manually selected icon theme to maintain the selection when the
query changes. This allows the theme selector to remember the user's choice
rather than resetting the selection when filtering results.

mentioned in #28081 and #28278

Release Notes:

- Improved persistence when selecting themes and icon themes.

---------

Co-authored-by: Peter Tripp <peter@zed.dev>
2025-05-30 20:37:38 +00:00
Alex
1f17df7fb0 debugger: Add support for go tests (#31772)
In https://github.com/zed-industries/zed/pull/31559 I did not
introduce the ability to debug test invocations.
Adding it here. For example:
![Kapture 2025-05-30 at 19 59
13](https://github.com/user-attachments/assets/1111d4a5-8b0a-42e6-aa98-2d797f61ffe3)

Release Notes:
- Added support for debugging single tests written in Go
2025-05-30 22:18:32 +02:00
tidely
6d687a2c2c ollama: Change default context size to 4096 (#31682)
Ollama increased its default context size from 2048 to 4096 tokens in
version v0.6.7, which was released over a month ago.

https://github.com/ollama/ollama/releases/tag/v0.6.7

Release Notes:

- ollama: Update default model context to 4096 (matching upstream)
2025-05-30 16:12:39 -04:00
Umesh Yadav
32214abb64 Improve TypeScript shebang detection (#31437)
Closes #13981

Release Notes:

- Improved TypeScript shebang detection
2025-05-30 16:11:13 -04:00
Peter Tripp
a78563b80b ci: Prevent "Tests Pass" job from running on forks (#31778)
Example of a fork [actions run](https://github.com/alphaArgon/zed/actions/runs/15349715488/job/43194591563)
that would be suppressed in the future.

(Sorry @alphaArgon)

Release Notes:

- N/A
2025-05-30 16:09:22 -04:00
Kirill Bulatov
f881cacd8a Use both language and LSP icons for LSP tasks (#31773)
Make it more explicit which language's LSP tasks are being used.

Before:

![image](https://github.com/user-attachments/assets/27f93c5f-942e-47a0-9b74-2c6d4d6248de)

After:

![image
(1)](https://github.com/user-attachments/assets/5a29fb0a-2e16-4c35-9dda-ae7925eaa034)


![image](https://github.com/user-attachments/assets/d1bf518e-63d1-4ebf-af3d-3c9d464c6532)


Release Notes:

- N/A
2025-05-30 19:28:56 +00:00
Umesh Yadav
a539a38f13 Revert "copilot: Fix vision request detection for follow-up messages" (#31776)
Reverts zed-industries/zed#31760

see this comment for context:
https://github.com/zed-industries/zed/pull/31760#issuecomment-2923158611.

Release Notes:

- N/A
2025-05-30 21:28:31 +02:00
Anthony
613be942cd Improve task resolution args filtering 2025-05-30 16:49:38 +03:00
247 changed files with 11627 additions and 7003 deletions

View File

@@ -73,7 +73,7 @@ jobs:
timeout-minutes: 60
runs-on:
- self-hosted
- test
- macOS
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
@@ -200,7 +200,7 @@ jobs:
needs.job_spec.outputs.run_tests == 'true'
runs-on:
- self-hosted
- test
- macOS
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
@@ -482,7 +482,9 @@ jobs:
- macos_tests
- windows_clippy
- windows_tests
if: always()
if: |
github.repository_owner == 'zed-industries' &&
always()
steps:
- name: Check all tests passed
run: |

View File

@@ -15,7 +15,7 @@ jobs:
if: github.repository_owner == 'zed-industries'
runs-on:
- self-hosted
- test
- macOS
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
@@ -33,7 +33,7 @@ jobs:
name: Run tests
runs-on:
- self-hosted
- test
- macOS
needs: style
steps:
- name: Checkout repo

View File

@@ -20,7 +20,7 @@ jobs:
if: github.repository_owner == 'zed-industries'
runs-on:
- self-hosted
- test
- macOS
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
@@ -40,7 +40,7 @@ jobs:
if: github.repository_owner == 'zed-industries'
runs-on:
- self-hosted
- test
- macOS
needs: style
steps:
- name: Checkout repo

6
.rules
View File

@@ -5,12 +5,6 @@
* Prefer implementing functionality in existing files unless it is a new logical component. Avoid creating many small files.
* Avoid using functions that panic like `unwrap()`, instead use mechanisms like `?` to propagate errors.
* Be careful with operations like indexing which may panic if the indexes are out of bounds.
* Never silently discard errors with `let _ =` on fallible operations. Always handle errors appropriately:
- Propagate errors with `?` when the calling function should handle them
- Use `.log_err()` or similar when you need to ignore errors but want visibility
- Use explicit error handling with `match` or `if let Err(...)` when you need custom logic
- Example: avoid `let _ = client.request(...).await?;` - use `client.request(...).await?;` instead
* When implementing async operations that may fail, ensure errors propagate to the UI layer so users get meaningful feedback.
* Never create files with `mod.rs` paths - prefer `src/some_module.rs` instead of `src/some_module/mod.rs`.
# GPUI

20
Cargo.lock generated
View File

@@ -114,6 +114,7 @@ dependencies = [
"serde_json_lenient",
"settings",
"smol",
"sqlez",
"streaming_diff",
"telemetry",
"telemetry_events",
@@ -133,6 +134,7 @@ dependencies = [
"workspace-hack",
"zed_actions",
"zed_llm_client",
"zstd",
]
[[package]]
@@ -525,6 +527,7 @@ dependencies = [
"fuzzy",
"gpui",
"indexed_docs",
"indoc",
"language",
"language_model",
"languages",
@@ -7070,6 +7073,7 @@ dependencies = [
"image",
"inventory",
"itertools 0.14.0",
"libc",
"log",
"lyon",
"media",
@@ -8758,6 +8762,7 @@ dependencies = [
"serde",
"serde_json",
"settings",
"shellexpand 2.1.2",
"smallvec",
"smol",
"streaming-iterator",
@@ -8859,6 +8864,7 @@ dependencies = [
"mistral",
"ollama",
"open_ai",
"open_router",
"partial-json-fixer",
"project",
"proto",
@@ -10703,6 +10709,19 @@ dependencies = [
"workspace-hack",
]
[[package]]
name = "open_router"
version = "0.1.0"
dependencies = [
"anyhow",
"futures 0.3.31",
"http_client",
"schemars",
"serde",
"serde_json",
"workspace-hack",
]
[[package]]
name = "opener"
version = "0.7.2"
@@ -17110,6 +17129,7 @@ dependencies = [
"futures-lite 1.13.0",
"git2",
"globset",
"indoc",
"itertools 0.14.0",
"libc",
"log",

View File

@@ -100,6 +100,7 @@ members = [
"crates/notifications",
"crates/ollama",
"crates/open_ai",
"crates/open_router",
"crates/outline",
"crates/outline_panel",
"crates/panel",
@@ -307,6 +308,7 @@ node_runtime = { path = "crates/node_runtime" }
notifications = { path = "crates/notifications" }
ollama = { path = "crates/ollama" }
open_ai = { path = "crates/open_ai" }
open_router = { path = "crates/open_router", features = ["schemars"] }
outline = { path = "crates/outline" }
outline_panel = { path = "crates/outline_panel" }
panel = { path = "crates/panel" }

View File

@@ -0,0 +1,8 @@
<svg width="16" height="16" viewBox="0 0 16 16" xmlns="http://www.w3.org/2000/svg" fill="currentColor" stroke="currentColor">
<g clip-path="url(#clip0_205_3)">
<path d="M0.094 7.78c0.469 0 2.281 -0.405 3.219 -0.936s0.938 -0.531 2.875 -1.906c2.453 -1.741 4.188 -1.158 7.031 -1.158" stroke-width="2.8125" />
<path d="m15.969 3.797 -4.805 2.774V1.023z" />
<path d="M0 7.781c0.469 0 2.281 0.405 3.219 0.936s0.938 0.531 2.875 1.906C8.547 12.364 10.281 11.781 13.125 11.781" stroke-width="2.8125" />
<path d="m15.875 11.764 -4.805 -2.774v5.548z" />
</g>
</svg>

After: 575 B

View File

@@ -1,5 +1,4 @@
<svg width="24" height="24" viewBox="0 0 24 24" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M17 20H16C14.9391 20 13.9217 19.6629 13.1716 19.0627C12.4214 18.4626 12 17.6487 12 16.8V7.2C12 6.35131 12.4214 5.53737 13.1716 4.93726C13.9217 4.33714 14.9391 4 16 4H17" stroke="black" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M7 20H8C9.06087 20 10.0783 19.5786 10.8284 18.8284C11.5786 18.0783 12 17.0609 12 16V15" stroke="black" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M7 4H8C9.06087 4 10.0783 4.42143 10.8284 5.17157C11.5786 5.92172 12 6.93913 12 8V9" stroke="black" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"/>
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M11 13H10.4C9.76346 13 9.15302 12.7893 8.70296 12.4142C8.25284 12.0391 8 11.5304 8 11V5C8 4.46957 8.25284 3.96086 8.70296 3.58579C9.15302 3.21071 9.76346 3 10.4 3H11" stroke="black" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M5 13H5.6C6.23654 13 6.84698 12.7893 7.29704 12.4142C7.74716 12.0391 8 11.5304 8 11V5C8 4.46957 7.74716 3.96086 7.29704 3.58579C6.84698 3.21071 6.23654 3 5.6 3H5" stroke="black" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

Before: 715 B
After: 617 B

View File

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-list-todo-icon lucide-list-todo"><rect x="3" y="5" width="6" height="6" rx="1"/><path d="m3 17 2 2 4-4"/><path d="M13 6h8"/><path d="M13 12h8"/><path d="M13 18h8"/></svg>

After: 373 B

View File

@@ -0,0 +1,3 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M4 3L13 8L4 13V3Z" stroke="black" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

After: 214 B

View File

@@ -0,0 +1,8 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M4 12C2.35977 11.85 1 10.575 1 9" stroke="black" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M1.00875 15.2C1.00875 13.625 0.683456 12.275 4.00001 12.2" stroke="black" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M7 9C7 10.575 5.62857 11.85 4 12" stroke="black" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M4 12.2C6.98117 12.2 7 13.625 7 15.2" stroke="black" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
<rect x="2.5" y="9" width="3" height="6" rx="1.5" fill="black"/>
<path d="M9 10L13 8L4 3V7.5" stroke="black" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

After: 813 B

View File

@@ -1,3 +1,8 @@
<svg width="17" height="17" viewBox="0 0 17 17" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M6.36667 3.79167C5.53364 3.79167 4.85833 4.46697 4.85833 5.3C4.85833 6.13303 5.53364 6.80833 6.36667 6.80833C7.1997 6.80833 7.875 6.13303 7.875 5.3C7.875 4.46697 7.1997 3.79167 6.36667 3.79167ZM2.1 5.925H3.67944C3.9626 7.14732 5.05824 8.05833 6.36667 8.05833C7.67509 8.05833 8.77073 7.14732 9.05389 5.925H14.9C15.2452 5.925 15.525 5.64518 15.525 5.3C15.525 4.95482 15.2452 4.675 14.9 4.675H9.05389C8.77073 3.45268 7.67509 2.54167 6.36667 2.54167C5.05824 2.54167 3.9626 3.45268 3.67944 4.675H2.1C1.75482 4.675 1.475 4.95482 1.475 5.3C1.475 5.64518 1.75482 5.925 2.1 5.925ZM13.3206 12.325C13.0374 13.5473 11.9418 14.4583 10.6333 14.4583C9.32491 14.4583 8.22927 13.5473 7.94611 12.325H2.1C1.75482 12.325 1.475 12.0452 1.475 11.7C1.475 11.3548 1.75482 11.075 2.1 11.075H7.94611C8.22927 9.85268 9.32491 8.94167 10.6333 8.94167C11.9418 8.94167 13.0374 9.85268 13.3206 11.075H14.9C15.2452 11.075 15.525 11.3548 15.525 11.7C15.525 12.0452 15.2452 12.325 14.9 12.325H13.3206ZM9.125 11.7C9.125 10.867 9.8003 10.1917 10.6333 10.1917C11.4664 10.1917 12.1417 10.867 12.1417 11.7C12.1417 12.533 11.4664 13.2083 10.6333 13.2083C9.8003 13.2083 9.125 12.533 9.125 11.7Z" fill="black"/>
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M2 5H4" stroke="black" stroke-width="1.5" stroke-linecap="round"/>
<path d="M8 5L14 5" stroke="black" stroke-width="1.5" stroke-linecap="round"/>
<path d="M12 11L14 11" stroke="black" stroke-width="1.5" stroke-linecap="round"/>
<path d="M2 11H8" stroke="black" stroke-width="1.5" stroke-linecap="round"/>
<circle cx="6" cy="5" r="2" fill="black" fill-opacity="0.1" stroke="black" stroke-width="1.5" stroke-linecap="round"/>
<circle cx="10" cy="11" r="2" fill="black" fill-opacity="0.1" stroke="black" stroke-width="1.5" stroke-linecap="round"/>
</svg>

Before: 1.3 KiB
After: 657 B

View File

@@ -1 +1,3 @@
<svg width="15" height="15" viewBox="0 0 15 15" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M6.97942 1.25171L6.9585 1.30199L5.58662 4.60039C5.54342 4.70426 5.44573 4.77523 5.3336 4.78422L1.7727 5.0697L1.71841 5.07405L1.38687 5.10063L1.08608 5.12475C0.820085 5.14607 0.712228 5.47802 0.914889 5.65162L1.14406 5.84793L1.39666 6.06431L1.43802 6.09974L4.15105 8.42374C4.23648 8.49692 4.2738 8.61176 4.24769 8.72118L3.41882 12.196L3.40618 12.249L3.32901 12.5725L3.25899 12.866C3.19708 13.1256 3.47945 13.3308 3.70718 13.1917L3.9647 13.0344L4.24854 12.861L4.29502 12.8326L7.34365 10.9705C7.43965 10.9119 7.5604 10.9119 7.6564 10.9705L10.705 12.8326L10.7515 12.861L11.0354 13.0344L11.2929 13.1917C11.5206 13.3308 11.803 13.1256 11.7411 12.866L11.671 12.5725L11.5939 12.249L11.5812 12.196L10.7524 8.72118C10.7263 8.61176 10.7636 8.49692 10.849 8.42374L13.562 6.09974L13.6034 6.06431L13.856 5.84793L14.0852 5.65162C14.2878 5.47802 14.18 5.14607 13.914 5.12475L13.6132 5.10063L13.2816 5.07405L13.2274 5.0697L9.66645 4.78422C9.55432 4.77523 9.45663 4.70426 9.41343 4.60039L8.04155 1.30199L8.02064 1.25171L7.89291 0.944609L7.77702 0.665992C7.67454 0.419604 7.32551 0.419604 7.22303 0.665992L7.10715 0.944609L6.97942 1.25171ZM7.50003 2.60397L6.50994 4.98442C6.32273 5.43453 5.89944 5.74207 5.41351 5.78103L2.84361 5.98705L4.8016 7.66428C5.17183 7.98142 5.33351 8.47903 5.2204 8.95321L4.62221 11.461L6.8224 10.1171C7.23842 9.86302 7.76164 9.86302 8.17766 10.1171L10.3778 11.461L9.77965 8.95321C9.66654 8.47903 9.82822 7.98142 10.1984 7.66428L12.1564 5.98705L9.58654 5.78103C9.10061 5.74207 8.67732 5.43453 8.49011 4.98442L7.50003 2.60397Z" fill="currentColor" fill-rule="evenodd" clip-rule="evenodd"></path></svg>
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M7.68323 1.53C7.71245 1.47097 7.75758 1.42129 7.81353 1.38655C7.86949 1.35181 7.93404 1.3334 7.9999 1.3334C8.06576 1.3334 8.13031 1.35181 8.18626 1.38655C8.24222 1.42129 8.28735 1.47097 8.31656 1.53L9.85656 4.64933C9.95802 4.85465 10.1078 5.03227 10.293 5.16697C10.4782 5.30167 10.6933 5.38941 10.9199 5.42267L14.3639 5.92667C14.4292 5.93612 14.4905 5.96365 14.5409 6.00613C14.5913 6.04862 14.6289 6.10437 14.6492 6.16707C14.6696 6.22978 14.6721 6.29694 14.6563 6.36096C14.6405 6.42498 14.6071 6.4833 14.5599 6.52933L12.0692 8.95467C11.905 9.11473 11.7821 9.31232 11.7111 9.53042C11.6402 9.74852 11.6233 9.98059 11.6619 10.2067L12.2499 13.6333C12.2614 13.6986 12.2544 13.7657 12.2296 13.8271C12.2048 13.8885 12.1632 13.9417 12.1096 13.9807C12.056 14.0196 11.9926 14.0427 11.9265 14.0473C11.8604 14.0519 11.7944 14.0378 11.7359 14.0067L8.65723 12.388C8.45438 12.2815 8.22868 12.2258 7.99956 12.2258C7.77044 12.2258 7.54475 12.2815 7.3419 12.388L4.2639 14.0067C4.20545 14.0376 4.1395 14.0515 4.07353 14.0468C4.00757 14.0421 3.94424 14.019 3.89076 13.9801C3.83728 13.9413 3.79579 13.8881 3.771 13.8268C3.74622 13.7655 3.73914 13.6985 3.75056 13.6333L4.3379 10.2073C4.3767 9.98116 4.35989 9.74893 4.28892 9.5307C4.21796 9.31246 4.09497 9.11477 3.93056 8.95467L1.4399 6.53C1.39229 6.48402 1.35856 6.4256 1.34254 6.36138C1.32652 6.29717 1.32886 6.22975 1.34928 6.16679C1.36971 6.10384 1.40741 6.04789 1.45808 6.00532C1.50876 5.96275 1.57037 5.93527 1.6359 5.926L5.07923 5.42267C5.30607 5.38967 5.52149 5.30204 5.70695 5.16733C5.89242 5.03261 6.04237 4.85485 6.1439 4.64933L7.68323 1.53Z" stroke="black" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

Before: 1.7 KiB
After: 1.7 KiB

View File

@@ -1 +1,3 @@
<svg width="15" height="15" viewBox="0 0 15 15" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M7.22303 0.665992C7.32551 0.419604 7.67454 0.419604 7.77702 0.665992L9.41343 4.60039C9.45663 4.70426 9.55432 4.77523 9.66645 4.78422L13.914 5.12475C14.18 5.14607 14.2878 5.47802 14.0852 5.65162L10.849 8.42374C10.7636 8.49692 10.7263 8.61176 10.7524 8.72118L11.7411 12.866C11.803 13.1256 11.5206 13.3308 11.2929 13.1917L7.6564 10.9705C7.5604 10.9119 7.43965 10.9119 7.34365 10.9705L3.70718 13.1917C3.47945 13.3308 3.19708 13.1256 3.25899 12.866L4.24769 8.72118C4.2738 8.61176 4.23648 8.49692 4.15105 8.42374L0.914889 5.65162C0.712228 5.47802 0.820086 5.14607 1.08608 5.12475L5.3336 4.78422C5.44573 4.77523 5.54342 4.70426 5.58662 4.60039L7.22303 0.665992Z" fill="currentColor"></path></svg>
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M7.68323 1.53C7.71245 1.47097 7.75758 1.42129 7.81353 1.38655C7.86949 1.35181 7.93404 1.3334 7.9999 1.3334C8.06576 1.3334 8.13031 1.35181 8.18626 1.38655C8.24222 1.42129 8.28735 1.47097 8.31656 1.53L9.85656 4.64933C9.95802 4.85465 10.1078 5.03227 10.293 5.16697C10.4782 5.30167 10.6933 5.38941 10.9199 5.42267L14.3639 5.92667C14.4292 5.93612 14.4905 5.96365 14.5409 6.00613C14.5913 6.04862 14.6289 6.10437 14.6492 6.16707C14.6696 6.22978 14.6721 6.29694 14.6563 6.36096C14.6405 6.42498 14.6071 6.4833 14.5599 6.52933L12.0692 8.95467C11.905 9.11473 11.7821 9.31232 11.7111 9.53042C11.6402 9.74852 11.6233 9.98059 11.6619 10.2067L12.2499 13.6333C12.2614 13.6986 12.2544 13.7657 12.2296 13.8271C12.2048 13.8885 12.1632 13.9417 12.1096 13.9807C12.056 14.0196 11.9926 14.0427 11.9265 14.0473C11.8604 14.0519 11.7944 14.0378 11.7359 14.0067L8.65723 12.388C8.45438 12.2815 8.22868 12.2258 7.99956 12.2258C7.77044 12.2258 7.54475 12.2815 7.3419 12.388L4.2639 14.0067C4.20545 14.0376 4.1395 14.0515 4.07353 14.0468C4.00757 14.0421 3.94424 14.019 3.89076 13.9801C3.83728 13.9413 3.79579 13.8881 3.771 13.8268C3.74622 13.7655 3.73914 13.6985 3.75056 13.6333L4.3379 10.2073C4.3767 9.98116 4.35989 9.74893 4.28892 9.5307C4.21796 9.31246 4.09497 9.11477 3.93056 8.95467L1.4399 6.53C1.39229 6.48402 1.35856 6.4256 1.34254 6.36138C1.32652 6.29717 1.32886 6.22975 1.34928 6.16679C1.36971 6.10384 1.40741 6.04789 1.45808 6.00532C1.50876 5.96275 1.57037 5.93527 1.6359 5.926L5.07923 5.42267C5.30607 5.38967 5.52149 5.30204 5.70695 5.16733C5.89242 5.03261 6.04237 4.85485 6.1439 4.64933L7.68323 1.53Z" fill="black" stroke="black" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

Before: 794 B
After: 1.7 KiB

View File

@@ -1,5 +1,5 @@
<svg width="14" height="14" viewBox="0 0 14 14" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M7 1.75L5.88467 5.14092C5.82759 5.31446 5.73055 5.47218 5.60136 5.60136C5.47218 5.73055 5.31446 5.82759 5.14092 5.88467L1.75 7L5.14092 8.11533C5.31446 8.17241 5.47218 8.26945 5.60136 8.39864C5.73055 8.52782 5.82759 8.68554 5.88467 8.85908L7 12.25L8.11533 8.85908C8.17241 8.68554 8.26945 8.52782 8.39864 8.39864C8.52782 8.26945 8.68554 8.17241 8.85908 8.11533L12.25 7L8.85908 5.88467C8.68554 5.82759 8.52782 5.73055 8.39864 5.60136C8.26945 5.47218 8.17241 5.31446 8.11533 5.14092L7 1.75Z" fill="black" fill-opacity="0.15" stroke="black" stroke-width="1.25" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M2.91667 1.75V4.08333M1.75 2.91667H4.08333" stroke="black" stroke-opacity="0.75" stroke-width="1.25" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M11.0833 9.91667V12.25M9.91667 11.0833H12.25" stroke="black" stroke-opacity="0.75" stroke-width="1.25" stroke-linecap="round" stroke-linejoin="round"/>
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M8 2L6.72534 5.87534C6.6601 6.07367 6.5492 6.25392 6.40155 6.40155C6.25392 6.5492 6.07367 6.6601 5.87534 6.72534L2 8L5.87534 9.27466C6.07367 9.3399 6.25392 9.4508 6.40155 9.59845C6.5492 9.74608 6.6601 9.92633 6.72534 10.1247L8 14L9.27466 10.1247C9.3399 9.92633 9.4508 9.74608 9.59845 9.59845C9.74608 9.4508 9.92633 9.3399 10.1247 9.27466L14 8L10.1247 6.72534C9.92633 6.6601 9.74608 6.5492 9.59845 6.40155C9.4508 6.25392 9.3399 6.07367 9.27466 5.87534L8 2Z" fill="black" fill-opacity="0.15" stroke="black" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M3.33334 2V4.66666M2 3.33334H4.66666" stroke="black" stroke-opacity="0.75" stroke-width="1.25" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M12.6665 11.3333V14M11.3333 12.6666H13.9999" stroke="black" stroke-opacity="0.75" stroke-width="1.25" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

Before: 1.0 KiB
After: 998 B

View File

@@ -31,8 +31,6 @@
"ctrl-,": "zed::OpenSettings",
"ctrl-q": "zed::Quit",
"f4": "debugger::Start",
"alt-f4": "debugger::RerunLastSession",
"f5": "debugger::Continue",
"shift-f5": "debugger::Stop",
"ctrl-shift-f5": "debugger::Restart",
"f6": "debugger::Pause",
@@ -280,7 +278,9 @@
"enter": "agent::Chat",
"ctrl-enter": "agent::ChatWithFollow",
"ctrl-i": "agent::ToggleProfileSelector",
"shift-ctrl-r": "agent::OpenAgentDiff"
"shift-ctrl-r": "agent::OpenAgentDiff",
"ctrl-shift-y": "agent::KeepAll",
"ctrl-shift-n": "agent::RejectAll"
}
},
{
@@ -583,11 +583,24 @@
"ctrl-alt-r": "task::Rerun",
"alt-t": "task::Rerun",
"alt-shift-t": "task::Spawn",
"alt-shift-r": ["task::Spawn", { "reveal_target": "center" }]
"alt-shift-r": ["task::Spawn", { "reveal_target": "center" }],
// also possible to spawn tasks by name:
// "foo-bar": ["task::Spawn", { "task_name": "MyTask", "reveal_target": "dock" }]
// or by tag:
// "foo-bar": ["task::Spawn", { "task_tag": "MyTag" }],
"f5": "debugger::RerunLastSession"
}
},
{
"context": "Workspace && debugger_running",
"bindings": {
"f5": "zed::NoAction"
}
},
{
"context": "Workspace && debugger_stopped",
"bindings": {
"f5": "debugger::Continue"
}
},
{
@@ -873,7 +886,8 @@
"context": "DebugPanel",
"bindings": {
"ctrl-t": "debugger::ToggleThreadPicker",
"ctrl-i": "debugger::ToggleSessionPicker"
"ctrl-i": "debugger::ToggleSessionPicker",
"shift-alt-escape": "debugger::ToggleExpandItem"
}
},
{
@@ -897,9 +911,7 @@
"context": "CollabPanel && not_editing",
"bindings": {
"ctrl-backspace": "collab_panel::Remove",
"space": "menu::Confirm",
"ctrl-up": "collab_panel::MoveChannelUp",
"ctrl-down": "collab_panel::MoveChannelDown"
"space": "menu::Confirm"
}
},
{
@@ -930,6 +942,13 @@
"tab": "channel_modal::ToggleMode"
}
},
{
"context": "FileFinder",
"bindings": {
"ctrl-shift-a": "file_finder::ToggleSplitMenu",
"ctrl-shift-i": "file_finder::ToggleFilterMenu"
}
},
{
"context": "FileFinder || (FileFinder > Picker > Editor) || (FileFinder > Picker > menu)",
"bindings": {

View File

@@ -4,8 +4,6 @@
"use_key_equivalents": true,
"bindings": {
"f4": "debugger::Start",
"alt-f4": "debugger::RerunLastSession",
"f5": "debugger::Continue",
"shift-f5": "debugger::Stop",
"shift-cmd-f5": "debugger::Restart",
"f6": "debugger::Pause",
@@ -317,7 +315,9 @@
"enter": "agent::Chat",
"cmd-enter": "agent::ChatWithFollow",
"cmd-i": "agent::ToggleProfileSelector",
"shift-ctrl-r": "agent::OpenAgentDiff"
"shift-ctrl-r": "agent::OpenAgentDiff",
"cmd-shift-y": "agent::KeepAll",
"cmd-shift-n": "agent::RejectAll"
}
},
{
@@ -635,7 +635,8 @@
"cmd-k shift-right": "workspace::SwapPaneRight",
"cmd-k shift-up": "workspace::SwapPaneUp",
"cmd-k shift-down": "workspace::SwapPaneDown",
"cmd-shift-x": "zed::Extensions"
"cmd-shift-x": "zed::Extensions",
"f5": "debugger::RerunLastSession"
}
},
{
@@ -652,6 +653,20 @@
// "foo-bar": ["task::Spawn", { "task_tag": "MyTag" }],
}
},
{
"context": "Workspace && debugger_running",
"use_key_equivalents": true,
"bindings": {
"f5": "zed::NoAction"
}
},
{
"context": "Workspace && debugger_stopped",
"use_key_equivalents": true,
"bindings": {
"f5": "debugger::Continue"
}
},
// Bindings from Sublime Text
{
"context": "Editor",
@@ -936,7 +951,8 @@
"context": "DebugPanel",
"bindings": {
"cmd-t": "debugger::ToggleThreadPicker",
"cmd-i": "debugger::ToggleSessionPicker"
"cmd-i": "debugger::ToggleSessionPicker",
"shift-alt-escape": "debugger::ToggleExpandItem"
}
},
{
@@ -951,9 +967,7 @@
"use_key_equivalents": true,
"bindings": {
"ctrl-backspace": "collab_panel::Remove",
"space": "menu::Confirm",
"cmd-up": "collab_panel::MoveChannelUp",
"cmd-down": "collab_panel::MoveChannelDown"
"space": "menu::Confirm"
}
},
{
@@ -989,6 +1003,14 @@
"tab": "channel_modal::ToggleMode"
}
},
{
"context": "FileFinder",
"use_key_equivalents": true,
"bindings": {
"cmd-shift-a": "file_finder::ToggleSplitMenu",
"cmd-shift-i": "file_finder::ToggleFilterMenu"
}
},
{
"context": "FileFinder || (FileFinder > Picker > Editor) || (FileFinder > Picker > menu)",
"use_key_equivalents": true,

View File

@@ -51,7 +51,11 @@
"ctrl-k ctrl-l": "editor::ConvertToLowerCase",
"shift-alt-m": "markdown::OpenPreviewToTheSide",
"ctrl-backspace": "editor::DeleteToPreviousWordStart",
"ctrl-delete": "editor::DeleteToNextWordEnd"
"ctrl-delete": "editor::DeleteToNextWordEnd",
"alt-right": "editor::MoveToNextSubwordEnd",
"alt-left": "editor::MoveToPreviousSubwordStart",
"alt-shift-right": "editor::SelectToNextSubwordEnd",
"alt-shift-left": "editor::SelectToPreviousSubwordStart"
}
},
{

View File

@@ -53,7 +53,11 @@
"cmd-shift-j": "editor::JoinLines",
"shift-alt-m": "markdown::OpenPreviewToTheSide",
"ctrl-backspace": "editor::DeleteToPreviousWordStart",
"ctrl-delete": "editor::DeleteToNextWordEnd"
"ctrl-delete": "editor::DeleteToNextWordEnd",
"ctrl-right": "editor::MoveToNextSubwordEnd",
"ctrl-left": "editor::MoveToPreviousSubwordStart",
"ctrl-shift-right": "editor::SelectToNextSubwordEnd",
"ctrl-shift-left": "editor::SelectToPreviousSubwordStart"
}
},
{

View File

@@ -838,6 +838,19 @@
"tab": "editor::AcceptEditPrediction"
}
},
{
"context": "MessageEditor > Editor && VimControl",
"bindings": {
"enter": "agent::Chat",
// TODO: Implement search
"/": null,
"?": null,
"#": null,
"*": null,
"n": null,
"shift-n": null
}
},
{
"context": "os != macos && Editor && edit_prediction_conflict",
"bindings": {

View File

@@ -73,9 +73,6 @@
"unnecessary_code_fade": 0.3,
// Active pane styling settings.
"active_pane_modifiers": {
// The factor to grow the active pane by. Defaults to 1.0
// which gives the same size as all other panes.
"magnification": 1.0,
// Inset border size of the active pane, in pixels.
"border_size": 0.0,
// Opacity of the inactive panes. 0 means transparent, 1 means opaque.
@@ -128,6 +125,8 @@
//
// Default: true
"restore_on_file_reopen": true,
// Whether to automatically close files that have been deleted on disk.
"close_on_file_delete": false,
// Size of the drop target in the editor.
"drop_target_size": 0.2,
// Whether the window should be closed when using 'close active item' on a window with no tabs.
@@ -731,13 +730,6 @@
// The model to use.
"model": "claude-sonnet-4"
},
// The model to use when applying edits from the agent.
"editor_model": {
// The provider to use.
"provider": "zed.dev",
// The model to use.
"model": "claude-sonnet-4"
},
// Additional parameters for language model requests. When making a request to a model, parameters will be taken
// from the last entry in this list that matches the model's provider and name. In each entry, both provider
// and model are optional, so that you can specify parameters for either one.
@@ -1505,11 +1497,11 @@
}
},
"LaTeX": {
"format_on_save": "on",
"formatter": "language_server",
"language_servers": ["texlab", "..."],
"prettier": {
"allowed": false
"allowed": true,
"plugins": ["prettier-plugin-latex"]
}
},
"Markdown": {
@@ -1610,6 +1602,9 @@
"version": "1",
"api_url": "https://api.openai.com/v1"
},
"open_router": {
"api_url": "https://openrouter.ai/api/v1"
},
"lmstudio": {
"api_url": "http://localhost:1234/api/v0"
},

View File

@@ -1,3 +1,7 @@
// Some example tasks for common languages.
//
// For more documentation on how to configure debug tasks,
// see: https://zed.dev/docs/debugger
[
{
"label": "Debug active PHP file",

View File

@@ -0,0 +1,5 @@
// Project-local debug tasks
//
// For more documentation on how to configure debug tasks,
// see: https://zed.dev/docs/debugger
[]

View File

@@ -46,6 +46,7 @@ git.workspace = true
gpui.workspace = true
heed.workspace = true
html_to_markdown.workspace = true
indoc.workspace = true
http_client.workspace = true
indexed_docs.workspace = true
inventory.workspace = true
@@ -78,6 +79,7 @@ serde_json.workspace = true
serde_json_lenient.workspace = true
settings.workspace = true
smol.workspace = true
sqlez.workspace = true
streaming_diff.workspace = true
telemetry.workspace = true
telemetry_events.workspace = true
@@ -97,6 +99,7 @@ workspace-hack.workspace = true
workspace.workspace = true
zed_actions.workspace = true
zed_llm_client.workspace = true
zstd.workspace = true
[dev-dependencies]
buffer_diff = { workspace = true, features = ["test-support"] }

View File

@@ -3,7 +3,7 @@ use crate::context::{AgentContextHandle, RULES_ICON};
use crate::context_picker::{ContextPicker, MentionLink};
use crate::context_store::ContextStore;
use crate::context_strip::{ContextStrip, ContextStripEvent, SuggestContextKind};
use crate::message_editor::insert_message_creases;
use crate::message_editor::{extract_message_creases, insert_message_creases};
use crate::thread::{
LastRestoreCheckpoint, MessageCrease, MessageId, MessageSegment, Thread, ThreadError,
ThreadEvent, ThreadFeedback, ThreadSummary,
@@ -999,7 +999,7 @@ impl ActiveThread {
ThreadEvent::Stopped(reason) => match reason {
Ok(StopReason::EndTurn | StopReason::MaxTokens) => {
let used_tools = self.thread.read(cx).used_tools_since_last_user_message();
self.play_notification_sound(cx);
self.play_notification_sound(window, cx);
self.show_notification(
if used_tools {
"Finished running tools"
@@ -1014,9 +1014,18 @@ impl ActiveThread {
_ => {}
},
ThreadEvent::ToolConfirmationNeeded => {
self.play_notification_sound(cx);
self.play_notification_sound(window, cx);
self.show_notification("Waiting for tool confirmation", IconName::Info, window, cx);
}
ThreadEvent::ToolUseLimitReached => {
self.play_notification_sound(window, cx);
self.show_notification(
"Consecutive tool use limit reached.",
IconName::Warning,
window,
cx,
);
}
ThreadEvent::StreamedAssistantText(message_id, text) => {
if let Some(rendered_message) = self.rendered_messages_by_id.get_mut(&message_id) {
rendered_message.append_text(text, cx);
@@ -1151,9 +1160,9 @@ impl ActiveThread {
cx.notify();
}
fn play_notification_sound(&self, cx: &mut App) {
fn play_notification_sound(&self, window: &Window, cx: &mut App) {
let settings = AgentSettings::get_global(cx);
if settings.play_sound_when_agent_done {
if settings.play_sound_when_agent_done && !window.is_window_active() {
Audio::play_sound(Sound::AgentDone, cx);
}
}
@@ -1577,6 +1586,8 @@ impl ActiveThread {
let edited_text = state.editor.read(cx).text(cx);
let creases = state.editor.update(cx, extract_message_creases);
let new_context = self
.context_store
.read(cx)
@@ -1601,6 +1612,7 @@ impl ActiveThread {
message_id,
Role::User,
vec![MessageSegment::Text(edited_text)],
creases,
Some(context.loaded_context),
checkpoint.ok(),
cx,
@@ -3668,10 +3680,13 @@ fn open_editor_at_position(
#[cfg(test)]
mod tests {
use assistant_tool::{ToolRegistry, ToolWorkingSet};
use editor::EditorSettings;
use editor::{EditorSettings, display_map::CreaseMetadata};
use fs::FakeFs;
use gpui::{AppContext, TestAppContext, VisualTestContext};
use language_model::{LanguageModel, fake_provider::FakeLanguageModel};
use language_model::{
ConfiguredModel, LanguageModel, LanguageModelRegistry,
fake_provider::{FakeLanguageModel, FakeLanguageModelProvider},
};
use project::Project;
use prompt_store::PromptBuilder;
use serde_json::json;
@@ -3732,6 +3747,87 @@ mod tests {
assert!(!cx.read(|cx| workspace.read(cx).is_being_followed(CollaboratorId::Agent)));
}
#[gpui::test]
async fn test_reinserting_creases_for_edited_message(cx: &mut TestAppContext) {
init_test_settings(cx);
let project = create_test_project(cx, json!({})).await;
let (cx, active_thread, _, thread, model) =
setup_test_environment(cx, project.clone()).await;
cx.update(|_, cx| {
LanguageModelRegistry::global(cx).update(cx, |registry, cx| {
registry.set_default_model(
Some(ConfiguredModel {
provider: Arc::new(FakeLanguageModelProvider),
model,
}),
cx,
);
});
});
let creases = vec![MessageCrease {
range: 14..22,
metadata: CreaseMetadata {
icon_path: "icon".into(),
label: "foo.txt".into(),
},
context: None,
}];
let message = thread.update(cx, |thread, cx| {
let message_id = thread.insert_user_message(
"Tell me about @foo.txt",
ContextLoadResult::default(),
None,
creases,
cx,
);
thread.message(message_id).cloned().unwrap()
});
active_thread.update_in(cx, |active_thread, window, cx| {
active_thread.start_editing_message(
message.id,
message.segments.as_slice(),
message.creases.as_slice(),
window,
cx,
);
let editor = active_thread
.editing_message
.as_ref()
.unwrap()
.1
.editor
.clone();
editor.update(cx, |editor, cx| editor.edit([(0..13, "modified")], cx));
active_thread.confirm_editing_message(&Default::default(), window, cx);
});
cx.run_until_parked();
let message = thread.update(cx, |thread, _| thread.message(message.id).cloned().unwrap());
active_thread.update_in(cx, |active_thread, window, cx| {
active_thread.start_editing_message(
message.id,
message.segments.as_slice(),
message.creases.as_slice(),
window,
cx,
);
let editor = active_thread
.editing_message
.as_ref()
.unwrap()
.1
.editor
.clone();
let text = editor.update(cx, |editor, cx| editor.text(cx));
assert_eq!(text, "modified @foo.txt");
});
}
fn init_test_settings(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);

View File

@@ -1372,6 +1372,7 @@ impl AgentDiff {
| ThreadEvent::ToolFinished { .. }
| ThreadEvent::CheckpointChanged
| ThreadEvent::ToolConfirmationNeeded
| ThreadEvent::ToolUseLimitReached
| ThreadEvent::CancelEditing => {}
}
}
@@ -1464,7 +1465,10 @@ impl AgentDiff {
if !AgentSettings::get_global(cx).single_file_review {
for (editor, _) in self.reviewing_editors.drain() {
editor
.update(cx, |editor, cx| editor.end_temporary_diff_override(cx))
.update(cx, |editor, cx| {
editor.end_temporary_diff_override(cx);
editor.unregister_addon::<EditorAgentDiffAddon>();
})
.ok();
}
return;
@@ -1560,7 +1564,10 @@ impl AgentDiff {
if in_workspace {
editor
.update(cx, |editor, cx| editor.end_temporary_diff_override(cx))
.update(cx, |editor, cx| {
editor.end_temporary_diff_override(cx);
editor.unregister_addon::<EditorAgentDiffAddon>();
})
.ok();
self.reviewing_editors.remove(&editor);
}

View File

@@ -734,6 +734,7 @@ impl Display for RulesContext {
#[derive(Debug, Clone)]
pub struct ImageContext {
pub project_path: Option<ProjectPath>,
pub full_path: Option<Arc<Path>>,
pub original_image: Arc<gpui::Image>,
// TODO: handle this elsewhere and remove `ignore-interior-mutability` opt-out in clippy.toml
// needed due to a false positive of `clippy::mutable_key_type`.

View File

@@ -14,7 +14,7 @@ use http_client::HttpClientWithUrl;
use itertools::Itertools;
use language::{Buffer, CodeLabel, HighlightId};
use lsp::CompletionContext;
use project::{Completion, CompletionIntent, ProjectPath, Symbol, WorktreeId};
use project::{Completion, CompletionIntent, CompletionResponse, ProjectPath, Symbol, WorktreeId};
use prompt_store::PromptStore;
use rope::Point;
use text::{Anchor, OffsetRangeExt, ToPoint};
@@ -746,7 +746,7 @@ impl CompletionProvider for ContextPickerCompletionProvider {
_trigger: CompletionContext,
_window: &mut Window,
cx: &mut Context<Editor>,
) -> Task<Result<Option<Vec<Completion>>>> {
) -> Task<Result<Vec<CompletionResponse>>> {
let state = buffer.update(cx, |buffer, _cx| {
let position = buffer_position.to_point(buffer);
let line_start = Point::new(position.row, 0);
@@ -756,13 +756,13 @@ impl CompletionProvider for ContextPickerCompletionProvider {
MentionCompletion::try_parse(line, offset_to_line)
});
let Some(state) = state else {
return Task::ready(Ok(None));
return Task::ready(Ok(Vec::new()));
};
let Some((workspace, context_store)) =
self.workspace.upgrade().zip(self.context_store.upgrade())
else {
return Task::ready(Ok(None));
return Task::ready(Ok(Vec::new()));
};
let snapshot = buffer.read(cx).snapshot();
@@ -815,10 +815,10 @@ impl CompletionProvider for ContextPickerCompletionProvider {
cx.spawn(async move |_, cx| {
let matches = search_task.await;
let Some(editor) = editor.upgrade() else {
return Ok(None);
return Ok(Vec::new());
};
Ok(Some(cx.update(|cx| {
let completions = cx.update(|cx| {
matches
.into_iter()
.filter_map(|mat| match mat {
@@ -901,7 +901,14 @@ impl CompletionProvider for ContextPickerCompletionProvider {
),
})
.collect()
})?))
})?;
Ok(vec![CompletionResponse {
completions,
// Since this does its own filtering (see `filter_completions()` returns false),
// there is no benefit to computing whether this set of completions is incomplete.
is_incomplete: true,
}])
})
}
@@ -919,8 +926,9 @@ impl CompletionProvider for ContextPickerCompletionProvider {
&self,
buffer: &Entity<language::Buffer>,
position: language::Anchor,
_: &str,
_: bool,
_text: &str,
_trigger_in_words: bool,
_menu_is_open: bool,
cx: &mut Context<Editor>,
) -> bool {
let buffer = buffer.read(cx);

View File

@@ -51,6 +51,10 @@ impl Tool for ContextServerTool {
true
}
fn may_perform_edits(&self) -> bool {
true
}
fn input_schema(&self, format: LanguageModelToolSchemaFormat) -> Result<serde_json::Value> {
let mut schema = self.tool.input_schema.clone();
assistant_tool::adapt_schema_to_format(&mut schema, format)?;

View File

@@ -7,7 +7,7 @@ use assistant_context_editor::AssistantContext;
use collections::{HashSet, IndexSet};
use futures::{self, FutureExt};
use gpui::{App, Context, Entity, EventEmitter, Image, SharedString, Task, WeakEntity};
use language::Buffer;
use language::{Buffer, File as _};
use language_model::LanguageModelImage;
use project::image_store::is_image_file;
use project::{Project, ProjectItem, ProjectPath, Symbol};
@@ -304,11 +304,13 @@ impl ContextStore {
project.open_image(project_path.clone(), cx)
})?;
let image_item = open_image_task.await?;
let image = image_item.read_with(cx, |image_item, _| image_item.image.clone())?;
this.update(cx, |this, cx| {
let item = image_item.read(cx);
this.insert_image(
Some(image_item.read(cx).project_path(cx)),
image,
Some(item.project_path(cx)),
Some(item.file.full_path(cx).into()),
item.image.clone(),
remove_if_exists,
cx,
)
@@ -317,12 +319,13 @@ impl ContextStore {
}
pub fn add_image_instance(&mut self, image: Arc<Image>, cx: &mut Context<ContextStore>) {
self.insert_image(None, image, false, cx);
self.insert_image(None, None, image, false, cx);
}
fn insert_image(
&mut self,
project_path: Option<ProjectPath>,
full_path: Option<Arc<Path>>,
image: Arc<Image>,
remove_if_exists: bool,
cx: &mut Context<ContextStore>,
@@ -330,6 +333,7 @@ impl ContextStore {
let image_task = LanguageModelImage::from_image(image.clone(), cx).shared();
let context = AgentContextHandle::Image(ImageContext {
project_path,
full_path,
original_image: image,
image_task,
context_id: self.next_context_id.post_inc(),

View File

@@ -152,7 +152,7 @@ impl HistoryStore {
let entries = join_all(entries)
.await
.into_iter()
.filter_map(|result| result.log_err())
.filter_map(|result| result.log_with_level(log::Level::Debug))
.collect::<VecDeque<_>>();
this.update(cx, |this, _| {

View File

@@ -6,7 +6,7 @@ use crate::agent_model_selector::{AgentModelSelector, ModelType};
use crate::context::{AgentContextKey, ContextCreasesAddon, ContextLoadResult, load_context};
use crate::tool_compatibility::{IncompatibleToolsState, IncompatibleToolsTooltip};
use crate::ui::{
AnimatedLabel, MaxModeTooltip,
MaxModeTooltip,
preview::{AgentPreview, UsageCallout},
};
use agent_settings::{AgentSettings, CompletionMode};
@@ -27,7 +27,7 @@ use gpui::{
Animation, AnimationExt, App, ClipboardEntry, Entity, EventEmitter, Focusable, Subscription,
Task, TextStyle, WeakEntity, linear_color_stop, linear_gradient, point, pulsating_between,
};
use language::{Buffer, Language};
use language::{Buffer, Language, Point};
use language_model::{
ConfiguredModel, LanguageModelRequestMessage, MessageContent, RequestUsage,
ZED_CLOUD_PROVIDER_ID,
@@ -51,9 +51,9 @@ use crate::profile_selector::ProfileSelector;
use crate::thread::{MessageCrease, Thread, TokenUsageRatio};
use crate::thread_store::{TextThreadStore, ThreadStore};
use crate::{
ActiveThread, AgentDiffPane, Chat, ChatWithFollow, ExpandMessageEditor, Follow, NewThread,
OpenAgentDiff, RemoveAllContext, ToggleBurnMode, ToggleContextPicker, ToggleProfileSelector,
register_agent_preview,
ActiveThread, AgentDiffPane, Chat, ChatWithFollow, ExpandMessageEditor, Follow, KeepAll,
NewThread, OpenAgentDiff, RejectAll, RemoveAllContext, ToggleBurnMode, ToggleContextPicker,
ToggleProfileSelector, register_agent_preview,
};
#[derive(RegisterComponent)]
@@ -112,6 +112,7 @@ pub(crate) fn create_editor(
editor.set_placeholder_text("Message the agent @ to include context", cx);
editor.set_show_indent_guides(false, cx);
editor.set_soft_wrap();
editor.set_use_modal_editing(true);
editor.set_context_menu_options(ContextMenuOptions {
min_entries_visible: 12,
max_entries_visible: 12,
@@ -458,11 +459,20 @@ impl MessageEditor {
}
fn handle_review_click(&mut self, window: &mut Window, cx: &mut Context<Self>) {
if self.thread.read(cx).has_pending_edit_tool_uses() {
return;
}
self.edits_expanded = true;
AgentDiffPane::deploy(self.thread.clone(), self.workspace.clone(), window, cx).log_err();
cx.notify();
}
fn handle_edit_bar_expand(&mut self, cx: &mut Context<Self>) {
self.edits_expanded = !self.edits_expanded;
cx.notify();
}
fn handle_file_click(
&self,
buffer: Entity<Buffer>,
@@ -493,6 +503,40 @@ impl MessageEditor {
});
}
fn handle_accept_all(&mut self, _window: &mut Window, cx: &mut Context<Self>) {
if self.thread.read(cx).has_pending_edit_tool_uses() {
return;
}
self.thread.update(cx, |thread, cx| {
thread.keep_all_edits(cx);
});
cx.notify();
}
fn handle_reject_all(&mut self, _window: &mut Window, cx: &mut Context<Self>) {
if self.thread.read(cx).has_pending_edit_tool_uses() {
return;
}
// Since there's no reject_all_edits method in the thread API,
// we need to iterate through all buffers and reject their edits
let action_log = self.thread.read(cx).action_log().clone();
let changed_buffers = action_log.read(cx).changed_buffers(cx);
for (buffer, _) in changed_buffers {
self.thread.update(cx, |thread, cx| {
let buffer_snapshot = buffer.read(cx);
let start = buffer_snapshot.anchor_before(Point::new(0, 0));
let end = buffer_snapshot.anchor_after(buffer_snapshot.max_point());
thread
.reject_edits_in_ranges(buffer, vec![start..end], cx)
.detach();
});
}
cx.notify();
}
fn render_max_mode_toggle(&self, cx: &mut Context<Self>) -> Option<AnyElement> {
let thread = self.thread.read(cx);
let model = thread.configured_model();
@@ -614,6 +658,12 @@ impl MessageEditor {
.on_action(cx.listener(Self::move_up))
.on_action(cx.listener(Self::expand_message_editor))
.on_action(cx.listener(Self::toggle_burn_mode))
.on_action(
cx.listener(|this, _: &KeepAll, window, cx| this.handle_accept_all(window, cx)),
)
.on_action(
cx.listener(|this, _: &RejectAll, window, cx| this.handle_reject_all(window, cx)),
)
.capture_action(cx.listener(Self::paste))
.gap_2()
.p_2()
@@ -869,7 +919,10 @@ impl MessageEditor {
let bg_edit_files_disclosure = editor_bg_color.blend(active_color.opacity(0.3));
let is_edit_changes_expanded = self.edits_expanded;
let is_generating = self.thread.read(cx).is_generating();
let thread = self.thread.read(cx);
let pending_edits = thread.has_pending_edit_tool_uses();
const EDIT_NOT_READY_TOOLTIP_LABEL: &str = "Wait until file edits are complete.";
v_flex()
.mt_1()
@@ -887,31 +940,28 @@ impl MessageEditor {
}])
.child(
h_flex()
.id("edits-container")
.cursor_pointer()
.p_1p5()
.p_1()
.justify_between()
.when(is_edit_changes_expanded, |this| {
this.border_b_1().border_color(border_color)
})
.on_click(
cx.listener(|this, _, window, cx| this.handle_review_click(window, cx)),
)
.child(
h_flex()
.id("edits-container")
.cursor_pointer()
.w_full()
.gap_1()
.child(
Disclosure::new("edits-disclosure", is_edit_changes_expanded)
.on_click(cx.listener(|this, _ev, _window, cx| {
this.edits_expanded = !this.edits_expanded;
cx.notify();
.on_click(cx.listener(|this, _, _, cx| {
this.handle_edit_bar_expand(cx)
})),
)
.map(|this| {
if is_generating {
if pending_edits {
this.child(
AnimatedLabel::new(format!(
"Editing {} {}",
Label::new(format!(
"Editing {} {}",
changed_buffers.len(),
if changed_buffers.len() == 1 {
"file"
@@ -919,7 +969,15 @@ impl MessageEditor {
"files"
}
))
.size(LabelSize::Small),
.color(Color::Muted)
.size(LabelSize::Small)
.with_animation(
"edit-label",
Animation::new(Duration::from_secs(2))
.repeat()
.with_easing(pulsating_between(0.3, 0.7)),
|label, delta| label.alpha(delta),
),
)
} else {
this.child(
@@ -944,23 +1002,74 @@ impl MessageEditor {
.color(Color::Muted),
)
}
}),
})
.on_click(
cx.listener(|this, _, _, cx| this.handle_edit_bar_expand(cx)),
),
)
.child(
Button::new("review", "Review Changes")
.label_size(LabelSize::Small)
.key_binding(
KeyBinding::for_action_in(
&OpenAgentDiff,
&focus_handle,
window,
cx,
)
.map(|kb| kb.size(rems_from_px(12.))),
h_flex()
.gap_1()
.child(
IconButton::new("review-changes", IconName::ListTodo)
.icon_size(IconSize::Small)
.tooltip({
let focus_handle = focus_handle.clone();
move |window, cx| {
Tooltip::for_action_in(
"Review Changes",
&OpenAgentDiff,
&focus_handle,
window,
cx,
)
}
})
.on_click(cx.listener(|this, _, window, cx| {
this.handle_review_click(window, cx)
})),
)
.on_click(cx.listener(|this, _, window, cx| {
this.handle_review_click(window, cx)
})),
.child(ui::Divider::vertical().color(ui::DividerColor::Border))
.child(
Button::new("reject-all-changes", "Reject All")
.label_size(LabelSize::Small)
.disabled(pending_edits)
.when(pending_edits, |this| {
this.tooltip(Tooltip::text(EDIT_NOT_READY_TOOLTIP_LABEL))
})
.key_binding(
KeyBinding::for_action_in(
&RejectAll,
&focus_handle.clone(),
window,
cx,
)
.map(|kb| kb.size(rems_from_px(10.))),
)
.on_click(cx.listener(|this, _, window, cx| {
this.handle_reject_all(window, cx)
})),
)
.child(
Button::new("accept-all-changes", "Accept All")
.label_size(LabelSize::Small)
.disabled(pending_edits)
.when(pending_edits, |this| {
this.tooltip(Tooltip::text(EDIT_NOT_READY_TOOLTIP_LABEL))
})
.key_binding(
KeyBinding::for_action_in(
&KeepAll,
&focus_handle,
window,
cx,
)
.map(|kb| kb.size(rems_from_px(10.))),
)
.on_click(cx.listener(|this, _, window, cx| {
this.handle_accept_all(window, cx)
})),
),
),
)
.when(is_edit_changes_expanded, |parent| {

View File

@@ -179,18 +179,17 @@ impl TerminalTransaction {
// Ensure that the assistant cannot accidentally execute commands that are streamed into the terminal
let input = Self::sanitize_input(hunk);
self.terminal
.update(cx, |terminal, _| terminal.input(input));
.update(cx, |terminal, _| terminal.input(input.into_bytes()));
}
pub fn undo(&self, cx: &mut App) {
self.terminal
.update(cx, |terminal, _| terminal.input(CLEAR_INPUT.to_string()));
.update(cx, |terminal, _| terminal.input(CLEAR_INPUT.as_bytes()));
}
pub fn complete(&self, cx: &mut App) {
self.terminal.update(cx, |terminal, _| {
terminal.input(CARRIAGE_RETURN.to_string())
});
self.terminal
.update(cx, |terminal, _| terminal.input(CARRIAGE_RETURN.as_bytes()));
}
fn sanitize_input(mut input: String) -> String {

View File

@@ -106,7 +106,7 @@ impl TerminalInlineAssistant {
});
let prompt_editor_render = prompt_editor.clone();
let block = terminal_view::BlockProperties {
height: 2,
height: 4,
render: Box::new(move |_| prompt_editor_render.clone().into_any_element()),
};
terminal_view.update(cx, |terminal_view, cx| {
@@ -202,7 +202,7 @@ impl TerminalInlineAssistant {
.update(cx, |terminal, cx| {
terminal
.terminal()
.update(cx, |terminal, _| terminal.input(CLEAR_INPUT.to_string()));
.update(cx, |terminal, _| terminal.input(CLEAR_INPUT.as_bytes()));
})
.log_err();

View File

@@ -871,7 +871,16 @@ impl Thread {
self.tool_use
.pending_tool_uses()
.iter()
.all(|tool_use| tool_use.status.is_error())
.all(|pending_tool_use| pending_tool_use.status.is_error())
}
/// Returns whether any pending tool uses may perform edits
pub fn has_pending_edit_tool_uses(&self) -> bool {
self.tool_use
.pending_tool_uses()
.iter()
.filter(|pending_tool_use| !pending_tool_use.status.is_error())
.any(|pending_tool_use| pending_tool_use.may_perform_edits)
}
pub fn tool_uses_for_message(&self, id: MessageId, cx: &App) -> Vec<ToolUse> {
@@ -1023,6 +1032,7 @@ impl Thread {
id: MessageId,
new_role: Role,
new_segments: Vec<MessageSegment>,
creases: Vec<MessageCrease>,
loaded_context: Option<LoadedContext>,
checkpoint: Option<GitStoreCheckpoint>,
cx: &mut Context<Self>,
@@ -1032,6 +1042,7 @@ impl Thread {
};
message.role = new_role;
message.segments = new_segments;
message.creases = creases;
if let Some(context) = loaded_context {
message.loaded_context = context;
}
@@ -1673,6 +1684,7 @@ impl Thread {
}
CompletionRequestStatus::ToolUseLimitReached => {
thread.tool_use_limit_reached = true;
cx.emit(ThreadEvent::ToolUseLimitReached);
}
}
}
@@ -2843,6 +2855,7 @@ pub enum ThreadEvent {
},
CheckpointChanged,
ToolConfirmationNeeded,
ToolUseLimitReached,
CancelEditing,
CompletionCanceled,
}

View File

@@ -1,8 +1,7 @@
use std::borrow::Cow;
use std::cell::{Ref, RefCell};
use std::path::{Path, PathBuf};
use std::rc::Rc;
use std::sync::Arc;
use std::sync::{Arc, Mutex};
use agent_settings::{AgentProfile, AgentProfileId, AgentSettings, CompletionMode};
use anyhow::{Context as _, Result, anyhow};
@@ -17,8 +16,7 @@ use gpui::{
App, BackgroundExecutor, Context, Entity, EventEmitter, Global, ReadGlobal, SharedString,
Subscription, Task, prelude::*,
};
use heed::Database;
use heed::types::SerdeBincode;
use language_model::{LanguageModelToolResultContent, LanguageModelToolUseId, Role, TokenUsage};
use project::context_server_store::{ContextServerStatus, ContextServerStore};
use project::{Project, ProjectItem, ProjectPath, Worktree};
@@ -35,14 +33,52 @@ use crate::context_server_tool::ContextServerTool;
use crate::thread::{
DetailedSummaryState, ExceededWindowError, MessageId, ProjectSnapshot, Thread, ThreadId,
};
use indoc::indoc;
use sqlez::{
bindable::{Bind, Column},
connection::Connection,
statement::Statement,
};
const RULES_FILE_NAMES: [&'static str; 6] = [
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
pub enum DataType {
#[serde(rename = "json")]
Json,
#[serde(rename = "zstd")]
Zstd,
}
impl Bind for DataType {
fn bind(&self, statement: &Statement, start_index: i32) -> Result<i32> {
let value = match self {
DataType::Json => "json",
DataType::Zstd => "zstd",
};
value.bind(statement, start_index)
}
}
impl Column for DataType {
fn column(statement: &mut Statement, start_index: i32) -> Result<(Self, i32)> {
let (value, next_index) = String::column(statement, start_index)?;
let data_type = match value.as_str() {
"json" => DataType::Json,
"zstd" => DataType::Zstd,
_ => anyhow::bail!("Unknown data type: {}", value),
};
Ok((data_type, next_index))
}
}
const RULES_FILE_NAMES: [&'static str; 8] = [
".rules",
".cursorrules",
".windsurfrules",
".clinerules",
".github/copilot-instructions.md",
"CLAUDE.md",
"AGENT.md",
"AGENTS.md",
];
pub fn init(cx: &mut App) {
@@ -866,25 +902,27 @@ impl Global for GlobalThreadsDatabase {}
pub(crate) struct ThreadsDatabase {
executor: BackgroundExecutor,
env: heed::Env,
threads: Database<SerdeBincode<ThreadId>, SerializedThread>,
connection: Arc<Mutex<Connection>>,
}
impl heed::BytesEncode<'_> for SerializedThread {
type EItem = SerializedThread;
impl ThreadsDatabase {
fn connection(&self) -> Arc<Mutex<Connection>> {
self.connection.clone()
}
fn bytes_encode(item: &Self::EItem) -> Result<Cow<[u8]>, heed::BoxedError> {
serde_json::to_vec(item).map(Cow::Owned).map_err(Into::into)
const COMPRESSION_LEVEL: i32 = 3;
}
impl Bind for ThreadId {
fn bind(&self, statement: &Statement, start_index: i32) -> Result<i32> {
self.to_string().bind(statement, start_index)
}
}
impl<'a> heed::BytesDecode<'a> for SerializedThread {
type DItem = SerializedThread;
fn bytes_decode(bytes: &'a [u8]) -> Result<Self::DItem, heed::BoxedError> {
// We implement this type manually because we want to call `SerializedThread::from_json`,
// instead of the Deserialize trait implementation for `SerializedThread`.
SerializedThread::from_json(bytes).map_err(Into::into)
impl Column for ThreadId {
fn column(statement: &mut Statement, start_index: i32) -> Result<(Self, i32)> {
let (id_str, next_index) = String::column(statement, start_index)?;
Ok((ThreadId::from(id_str.as_str()), next_index))
}
}
@@ -900,8 +938,8 @@ impl ThreadsDatabase {
let database_future = executor
.spawn({
let executor = executor.clone();
let database_path = paths::data_dir().join("threads/threads-db.1.mdb");
async move { ThreadsDatabase::new(database_path, executor) }
let threads_dir = paths::data_dir().join("threads");
async move { ThreadsDatabase::new(threads_dir, executor) }
})
.then(|result| future::ready(result.map(Arc::new).map_err(Arc::new)))
.boxed()
@@ -910,41 +948,144 @@ impl ThreadsDatabase {
cx.set_global(GlobalThreadsDatabase(database_future));
}
pub fn new(path: PathBuf, executor: BackgroundExecutor) -> Result<Self> {
std::fs::create_dir_all(&path)?;
pub fn new(threads_dir: PathBuf, executor: BackgroundExecutor) -> Result<Self> {
std::fs::create_dir_all(&threads_dir)?;
let sqlite_path = threads_dir.join("threads.db");
let mdb_path = threads_dir.join("threads-db.1.mdb");
let needs_migration_from_heed = mdb_path.exists();
let connection = Connection::open_file(&sqlite_path.to_string_lossy());
connection.exec(indoc! {"
CREATE TABLE IF NOT EXISTS threads (
id TEXT PRIMARY KEY,
summary TEXT NOT NULL,
updated_at TEXT NOT NULL,
data_type TEXT NOT NULL,
data BLOB NOT NULL
)
"})?()
.map_err(|e| anyhow!("Failed to create threads table: {}", e))?;
let db = Self {
executor: executor.clone(),
connection: Arc::new(Mutex::new(connection)),
};
if needs_migration_from_heed {
let db_connection = db.connection();
let executor_clone = executor.clone();
executor
.spawn(async move {
log::info!("Starting threads.db migration");
Self::migrate_from_heed(&mdb_path, db_connection, executor_clone)?;
std::fs::remove_dir_all(mdb_path)?;
log::info!("threads.db migrated to sqlite");
Ok::<(), anyhow::Error>(())
})
.detach();
}
Ok(db)
}
// Remove this migration after 2025-09-01
fn migrate_from_heed(
mdb_path: &Path,
connection: Arc<Mutex<Connection>>,
_executor: BackgroundExecutor,
) -> Result<()> {
use heed::types::SerdeBincode;
struct SerializedThreadHeed(SerializedThread);
impl heed::BytesEncode<'_> for SerializedThreadHeed {
type EItem = SerializedThreadHeed;
fn bytes_encode(
item: &Self::EItem,
) -> Result<std::borrow::Cow<[u8]>, heed::BoxedError> {
serde_json::to_vec(&item.0)
.map(std::borrow::Cow::Owned)
.map_err(Into::into)
}
}
impl<'a> heed::BytesDecode<'a> for SerializedThreadHeed {
type DItem = SerializedThreadHeed;
fn bytes_decode(bytes: &'a [u8]) -> Result<Self::DItem, heed::BoxedError> {
SerializedThread::from_json(bytes)
.map(SerializedThreadHeed)
.map_err(Into::into)
}
}
const ONE_GB_IN_BYTES: usize = 1024 * 1024 * 1024;
let env = unsafe {
heed::EnvOpenOptions::new()
.map_size(ONE_GB_IN_BYTES)
.max_dbs(1)
.open(path)?
.open(mdb_path)?
};
let mut txn = env.write_txn()?;
let threads = env.create_database(&mut txn, Some("threads"))?;
txn.commit()?;
let txn = env.write_txn()?;
let threads: heed::Database<SerdeBincode<ThreadId>, SerializedThreadHeed> = env
.open_database(&txn, Some("threads"))?
.ok_or_else(|| anyhow!("threads database not found"))?;
Ok(Self {
executor,
env,
threads,
})
for result in threads.iter(&txn)? {
let (thread_id, thread_heed) = result?;
Self::save_thread_sync(&connection, thread_id, thread_heed.0)?;
}
Ok(())
}
fn save_thread_sync(
connection: &Arc<Mutex<Connection>>,
id: ThreadId,
thread: SerializedThread,
) -> Result<()> {
let json_data = serde_json::to_string(&thread)?;
let summary = thread.summary.to_string();
let updated_at = thread.updated_at.to_rfc3339();
let connection = connection.lock().unwrap();
let compressed = zstd::encode_all(json_data.as_bytes(), Self::COMPRESSION_LEVEL)?;
let data_type = DataType::Zstd;
let data = compressed;
let mut insert = connection.exec_bound::<(ThreadId, String, String, DataType, Vec<u8>)>(indoc! {"
INSERT OR REPLACE INTO threads (id, summary, updated_at, data_type, data) VALUES (?, ?, ?, ?, ?)
"})?;
insert((id, summary, updated_at, data_type, data))?;
Ok(())
}
pub fn list_threads(&self) -> Task<Result<Vec<SerializedThreadMetadata>>> {
let env = self.env.clone();
let threads = self.threads;
let connection = self.connection.clone();
self.executor.spawn(async move {
let txn = env.read_txn()?;
let mut iter = threads.iter(&txn)?;
let connection = connection.lock().unwrap();
let mut select =
connection.select_bound::<(), (ThreadId, String, String)>(indoc! {"
SELECT id, summary, updated_at FROM threads ORDER BY updated_at DESC
"})?;
let rows = select(())?;
let mut threads = Vec::new();
while let Some((key, value)) = iter.next().transpose()? {
for (id, summary, updated_at) in rows {
threads.push(SerializedThreadMetadata {
id: key,
summary: value.summary,
updated_at: value.updated_at,
id,
summary: summary.into(),
updated_at: DateTime::parse_from_rfc3339(&updated_at)?.with_timezone(&Utc),
});
}
@@ -953,36 +1094,51 @@ impl ThreadsDatabase {
}
pub fn try_find_thread(&self, id: ThreadId) -> Task<Result<Option<SerializedThread>>> {
let env = self.env.clone();
let threads = self.threads;
let connection = self.connection.clone();
self.executor.spawn(async move {
let txn = env.read_txn()?;
let thread = threads.get(&txn, &id)?;
Ok(thread)
let connection = connection.lock().unwrap();
let mut select = connection.select_bound::<ThreadId, (DataType, Vec<u8>)>(indoc! {"
SELECT data_type, data FROM threads WHERE id = ? LIMIT 1
"})?;
let rows = select(id)?;
if let Some((data_type, data)) = rows.into_iter().next() {
let json_data = match data_type {
DataType::Zstd => {
let decompressed = zstd::decode_all(&data[..])?;
String::from_utf8(decompressed)?
}
DataType::Json => String::from_utf8(data)?,
};
let thread = SerializedThread::from_json(json_data.as_bytes())?;
Ok(Some(thread))
} else {
Ok(None)
}
})
}
pub fn save_thread(&self, id: ThreadId, thread: SerializedThread) -> Task<Result<()>> {
let env = self.env.clone();
let threads = self.threads;
let connection = self.connection.clone();
self.executor.spawn(async move {
let mut txn = env.write_txn()?;
threads.put(&mut txn, &id, &thread)?;
txn.commit()?;
Ok(())
})
self.executor
.spawn(async move { Self::save_thread_sync(&connection, id, thread) })
}
pub fn delete_thread(&self, id: ThreadId) -> Task<Result<()>> {
let env = self.env.clone();
let threads = self.threads;
let connection = self.connection.clone();
self.executor.spawn(async move {
let mut txn = env.write_txn()?;
threads.delete(&mut txn, &id)?;
txn.commit()?;
let connection = connection.lock().unwrap();
let mut delete = connection.exec_bound::<ThreadId>(indoc! {"
DELETE FROM threads WHERE id = ?
"})?;
delete(id)?;
Ok(())
})
}
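A minimal, hypothetical sketch of the encode/decode round trip behind the new save_thread_sync and try_find_thread above: rows carry a data_type tag so plain-JSON rows written before compression stay readable. The helper names below are illustrative and not part of this diff.

// Illustrative helpers only; the real code inlines this logic in
// save_thread_sync and try_find_thread.
fn encode_row(json: &str, level: i32) -> anyhow::Result<(DataType, Vec<u8>)> {
    // Compress the serialized thread and tag the row so readers know how to decode it.
    Ok((DataType::Zstd, zstd::encode_all(json.as_bytes(), level)?))
}

fn decode_row(data_type: DataType, data: Vec<u8>) -> anyhow::Result<String> {
    match data_type {
        DataType::Zstd => Ok(String::from_utf8(zstd::decode_all(&data[..])?)?),
        // Rows written before compression was introduced are stored as plain JSON.
        DataType::Json => Ok(String::from_utf8(data)?),
    }
}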

View File

@@ -337,6 +337,12 @@ impl ToolUseState {
)
.into();
let may_perform_edits = self
.tools
.read(cx)
.tool(&tool_use.name, cx)
.is_some_and(|tool| tool.may_perform_edits());
self.pending_tool_uses_by_id.insert(
tool_use.id.clone(),
PendingToolUse {
@@ -345,6 +351,7 @@ impl ToolUseState {
name: tool_use.name.clone(),
ui_text: ui_text.clone(),
input: tool_use.input,
may_perform_edits,
status,
},
);
@@ -518,6 +525,7 @@ pub struct PendingToolUse {
pub ui_text: Arc<str>,
pub input: serde_json::Value,
pub status: PendingToolUseStatus,
pub may_perform_edits: bool,
}
#[derive(Debug, Clone)]

View File

@@ -304,7 +304,7 @@ impl AddedContext {
AgentContextHandle::Thread(handle) => Some(Self::pending_thread(handle, cx)),
AgentContextHandle::TextThread(handle) => Some(Self::pending_text_thread(handle, cx)),
AgentContextHandle::Rules(handle) => Self::pending_rules(handle, prompt_store, cx),
AgentContextHandle::Image(handle) => Some(Self::image(handle)),
AgentContextHandle::Image(handle) => Some(Self::image(handle, cx)),
}
}
@@ -318,7 +318,7 @@ impl AddedContext {
AgentContext::Thread(context) => Self::attached_thread(context),
AgentContext::TextThread(context) => Self::attached_text_thread(context),
AgentContext::Rules(context) => Self::attached_rules(context),
AgentContext::Image(context) => Self::image(context.clone()),
AgentContext::Image(context) => Self::image(context.clone(), cx),
}
}
@@ -333,14 +333,8 @@ impl AddedContext {
fn file(handle: FileContextHandle, full_path: &Path, cx: &App) -> AddedContext {
let full_path_string: SharedString = full_path.to_string_lossy().into_owned().into();
let name = full_path
.file_name()
.map(|n| n.to_string_lossy().into_owned().into())
.unwrap_or_else(|| full_path_string.clone());
let parent = full_path
.parent()
.and_then(|p| p.file_name())
.map(|n| n.to_string_lossy().into_owned().into());
let (name, parent) =
extract_file_name_and_directory_from_full_path(full_path, &full_path_string);
AddedContext {
kind: ContextKind::File,
name,
@@ -370,14 +364,8 @@ impl AddedContext {
fn directory(handle: DirectoryContextHandle, full_path: &Path) -> AddedContext {
let full_path_string: SharedString = full_path.to_string_lossy().into_owned().into();
let name = full_path
.file_name()
.map(|n| n.to_string_lossy().into_owned().into())
.unwrap_or_else(|| full_path_string.clone());
let parent = full_path
.parent()
.and_then(|p| p.file_name())
.map(|n| n.to_string_lossy().into_owned().into());
let (name, parent) =
extract_file_name_and_directory_from_full_path(full_path, &full_path_string);
AddedContext {
kind: ContextKind::Directory,
name,
@@ -605,13 +593,23 @@ impl AddedContext {
}
}
fn image(context: ImageContext) -> AddedContext {
fn image(context: ImageContext, cx: &App) -> AddedContext {
let (name, parent, icon_path) = if let Some(full_path) = context.full_path.as_ref() {
let full_path_string: SharedString = full_path.to_string_lossy().into_owned().into();
let (name, parent) =
extract_file_name_and_directory_from_full_path(full_path, &full_path_string);
let icon_path = FileIcons::get_icon(&full_path, cx);
(name, parent, icon_path)
} else {
("Image".into(), None, None)
};
AddedContext {
kind: ContextKind::Image,
name: "Image".into(),
parent: None,
name,
parent,
tooltip: None,
icon_path: None,
icon_path,
status: match context.status() {
ImageStatus::Loading => ContextStatus::Loading {
message: "Loading…".into(),
@@ -639,6 +637,22 @@ impl AddedContext {
}
}
fn extract_file_name_and_directory_from_full_path(
path: &Path,
name_fallback: &SharedString,
) -> (SharedString, Option<SharedString>) {
let name = path
.file_name()
.map(|n| n.to_string_lossy().into_owned().into())
.unwrap_or_else(|| name_fallback.clone());
let parent = path
.parent()
.and_then(|p| p.file_name())
.map(|n| n.to_string_lossy().into_owned().into());
(name, parent)
}
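A hedged sketch of the helper's behavior (the path is hypothetical, and this is not a test from the diff): for a file deep in a tree it yields the file name plus the name of its immediate parent directory, falling back to the full-path string when there is no file name.

// Illustrative only.
let full_path = std::path::Path::new("/home/user/project/src/main.rs");
let fallback: SharedString = full_path.to_string_lossy().into_owned().into();
let (name, parent) = extract_file_name_and_directory_from_full_path(full_path, &fallback);
assert_eq!(&*name, "main.rs");
assert_eq!(parent.as_deref(), Some("src"));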
#[derive(Debug, Clone)]
struct ContextFileExcerpt {
pub file_name_and_range: SharedString,
@@ -765,37 +779,49 @@ impl Component for AddedContext {
let mut next_context_id = ContextId::zero();
let image_ready = (
"Ready",
AddedContext::image(ImageContext {
context_id: next_context_id.post_inc(),
project_path: None,
original_image: Arc::new(Image::empty()),
image_task: Task::ready(Some(LanguageModelImage::empty())).shared(),
}),
AddedContext::image(
ImageContext {
context_id: next_context_id.post_inc(),
project_path: None,
full_path: None,
original_image: Arc::new(Image::empty()),
image_task: Task::ready(Some(LanguageModelImage::empty())).shared(),
},
cx,
),
);
let image_loading = (
"Loading",
AddedContext::image(ImageContext {
context_id: next_context_id.post_inc(),
project_path: None,
original_image: Arc::new(Image::empty()),
image_task: cx
.background_spawn(async move {
smol::Timer::after(Duration::from_secs(60 * 5)).await;
Some(LanguageModelImage::empty())
})
.shared(),
}),
AddedContext::image(
ImageContext {
context_id: next_context_id.post_inc(),
project_path: None,
full_path: None,
original_image: Arc::new(Image::empty()),
image_task: cx
.background_spawn(async move {
smol::Timer::after(Duration::from_secs(60 * 5)).await;
Some(LanguageModelImage::empty())
})
.shared(),
},
cx,
),
);
let image_error = (
"Error",
AddedContext::image(ImageContext {
context_id: next_context_id.post_inc(),
project_path: None,
original_image: Arc::new(Image::empty()),
image_task: Task::ready(None).shared(),
}),
AddedContext::image(
ImageContext {
context_id: next_context_id.post_inc(),
project_path: None,
full_path: None,
original_image: Arc::new(Image::empty()),
image_task: Task::ready(None).shared(),
},
cx,
),
);
Some(

View File

@@ -372,6 +372,8 @@ impl AgentSettingsContent {
None,
None,
Some(language_model.supports_tools()),
Some(language_model.supports_images()),
None,
)),
api_url,
});
@@ -689,6 +691,7 @@ pub struct AgentSettingsContentV2 {
pub enum CompletionMode {
#[default]
Normal,
#[serde(alias = "max")]
Burn,
}
@@ -727,6 +730,7 @@ impl JsonSchema for LanguageModelProviderSetting {
"zed.dev".into(),
"copilot_chat".into(),
"deepseek".into(),
"openrouter".into(),
"mistral".into(),
]),
..Default::default()

View File

@@ -60,6 +60,7 @@ zed_actions.workspace = true
zed_llm_client.workspace = true
[dev-dependencies]
indoc.workspace = true
language_model = { workspace = true, features = ["test-support"] }
languages = { workspace = true, features = ["test-support"] }
pretty_assertions.workspace = true

View File

@@ -1646,34 +1646,35 @@ impl ContextEditor {
let context = self.context.read(cx);
let mut text = String::new();
for message in context.messages(cx) {
if message.offset_range.start >= selection.range().end {
break;
} else if message.offset_range.end >= selection.range().start {
let range = cmp::max(message.offset_range.start, selection.range().start)
..cmp::min(message.offset_range.end, selection.range().end);
if range.is_empty() {
let snapshot = context.buffer().read(cx).snapshot();
let point = snapshot.offset_to_point(range.start);
selection.start = snapshot.point_to_offset(Point::new(point.row, 0));
selection.end = snapshot.point_to_offset(cmp::min(
Point::new(point.row + 1, 0),
snapshot.max_point(),
));
for chunk in context.buffer().read(cx).text_for_range(selection.range()) {
text.push_str(chunk);
}
} else {
for chunk in context.buffer().read(cx).text_for_range(range) {
text.push_str(chunk);
}
if message.offset_range.end < selection.range().end {
text.push('\n');
// If selection is empty, we want to copy the entire line
if selection.range().is_empty() {
let snapshot = context.buffer().read(cx).snapshot();
let point = snapshot.offset_to_point(selection.range().start);
selection.start = snapshot.point_to_offset(Point::new(point.row, 0));
selection.end = snapshot
.point_to_offset(cmp::min(Point::new(point.row + 1, 0), snapshot.max_point()));
for chunk in context.buffer().read(cx).text_for_range(selection.range()) {
text.push_str(chunk);
}
} else {
for message in context.messages(cx) {
if message.offset_range.start >= selection.range().end {
break;
} else if message.offset_range.end >= selection.range().start {
let range = cmp::max(message.offset_range.start, selection.range().start)
..cmp::min(message.offset_range.end, selection.range().end);
if !range.is_empty() {
for chunk in context.buffer().read(cx).text_for_range(range) {
text.push_str(chunk);
}
if message.offset_range.end < selection.range().end {
text.push('\n');
}
}
}
}
}
(text, CopyMetadata { creases }, vec![selection])
}
@@ -3264,74 +3265,92 @@ mod tests {
use super::*;
use fs::FakeFs;
use gpui::{App, TestAppContext, VisualTestContext};
use indoc::indoc;
use language::{Buffer, LanguageRegistry};
use pretty_assertions::assert_eq;
use prompt_store::PromptBuilder;
use text::OffsetRangeExt;
use unindent::Unindent;
use util::path;
#[gpui::test]
async fn test_copy_paste_whole_message(cx: &mut TestAppContext) {
let (context, context_editor, mut cx) = setup_context_editor_text(vec![
(Role::User, "What is the Zed editor?"),
(
Role::Assistant,
"Zed is a modern, high-performance code editor designed from the ground up for speed and collaboration.",
),
(Role::User, ""),
],cx).await;
// Select & Copy whole user message
assert_copy_paste_context_editor(
&context_editor,
message_range(&context, 0, &mut cx),
indoc! {"
What is the Zed editor?
Zed is a modern, high-performance code editor designed from the ground up for speed and collaboration.
What is the Zed editor?
"},
&mut cx,
);
// Select & Copy whole assistant message
assert_copy_paste_context_editor(
&context_editor,
message_range(&context, 1, &mut cx),
indoc! {"
What is the Zed editor?
Zed is a modern, high-performance code editor designed from the ground up for speed and collaboration.
What is the Zed editor?
Zed is a modern, high-performance code editor designed from the ground up for speed and collaboration.
"},
&mut cx,
);
}
#[gpui::test]
async fn test_copy_paste_no_selection(cx: &mut TestAppContext) {
cx.update(init_test);
let (context, context_editor, mut cx) = setup_context_editor_text(
vec![
(Role::User, "user1"),
(Role::Assistant, "assistant1"),
(Role::Assistant, "assistant2"),
(Role::User, ""),
],
cx,
)
.await;
let fs = FakeFs::new(cx.executor());
let registry = Arc::new(LanguageRegistry::test(cx.executor()));
let prompt_builder = Arc::new(PromptBuilder::new(None).unwrap());
let context = cx.new(|cx| {
AssistantContext::local(
registry,
None,
None,
prompt_builder.clone(),
Arc::new(SlashCommandWorkingSet::default()),
cx,
)
});
let project = Project::test(fs.clone(), [path!("/test").as_ref()], cx).await;
let window = cx.add_window(|window, cx| Workspace::test_new(project.clone(), window, cx));
let workspace = window.root(cx).unwrap();
let cx = &mut VisualTestContext::from_window(*window, cx);
// Copy and paste first assistant message
let message_2_range = message_range(&context, 1, &mut cx);
assert_copy_paste_context_editor(
&context_editor,
message_2_range.start..message_2_range.start,
indoc! {"
user1
assistant1
assistant2
assistant1
"},
&mut cx,
);
let context_editor = window
.update(cx, |_, window, cx| {
cx.new(|cx| {
ContextEditor::for_context(
context,
fs,
workspace.downgrade(),
project,
None,
window,
cx,
)
})
})
.unwrap();
context_editor.update_in(cx, |context_editor, window, cx| {
context_editor.editor.update(cx, |editor, cx| {
editor.set_text("abc\ndef\nghi", window, cx);
editor.move_to_beginning(&Default::default(), window, cx);
})
});
context_editor.update_in(cx, |context_editor, window, cx| {
context_editor.editor.update(cx, |editor, cx| {
editor.copy(&Default::default(), window, cx);
editor.paste(&Default::default(), window, cx);
assert_eq!(editor.text(cx), "abc\nabc\ndef\nghi");
})
});
context_editor.update_in(cx, |context_editor, window, cx| {
context_editor.editor.update(cx, |editor, cx| {
editor.cut(&Default::default(), window, cx);
assert_eq!(editor.text(cx), "abc\ndef\nghi");
editor.paste(&Default::default(), window, cx);
assert_eq!(editor.text(cx), "abc\nabc\ndef\nghi");
})
});
// Copy and cut second assistant message
let message_3_range = message_range(&context, 2, &mut cx);
assert_copy_paste_context_editor(
&context_editor,
message_3_range.start..message_3_range.start,
indoc! {"
user1
assistant1
assistant2
assistant1
assistant2
"},
&mut cx,
);
}
#[gpui::test]
@@ -3408,6 +3427,129 @@ mod tests {
}
}
async fn setup_context_editor_text(
messages: Vec<(Role, &str)>,
cx: &mut TestAppContext,
) -> (
Entity<AssistantContext>,
Entity<ContextEditor>,
VisualTestContext,
) {
cx.update(init_test);
let fs = FakeFs::new(cx.executor());
let context = create_context_with_messages(messages, cx);
let project = Project::test(fs.clone(), [path!("/test").as_ref()], cx).await;
let window = cx.add_window(|window, cx| Workspace::test_new(project.clone(), window, cx));
let workspace = window.root(cx).unwrap();
let mut cx = VisualTestContext::from_window(*window, cx);
let context_editor = window
.update(&mut cx, |_, window, cx| {
cx.new(|cx| {
let editor = ContextEditor::for_context(
context.clone(),
fs,
workspace.downgrade(),
project,
None,
window,
cx,
);
editor
})
})
.unwrap();
(context, context_editor, cx)
}
fn message_range(
context: &Entity<AssistantContext>,
message_ix: usize,
cx: &mut TestAppContext,
) -> Range<usize> {
context.update(cx, |context, cx| {
context
.messages(cx)
.nth(message_ix)
.unwrap()
.anchor_range
.to_offset(&context.buffer().read(cx).snapshot())
})
}
fn assert_copy_paste_context_editor<T: editor::ToOffset>(
context_editor: &Entity<ContextEditor>,
range: Range<T>,
expected_text: &str,
cx: &mut VisualTestContext,
) {
context_editor.update_in(cx, |context_editor, window, cx| {
context_editor.editor.update(cx, |editor, cx| {
editor.change_selections(None, window, cx, |s| s.select_ranges([range]));
});
context_editor.copy(&Default::default(), window, cx);
context_editor.editor.update(cx, |editor, cx| {
editor.move_to_end(&Default::default(), window, cx);
});
context_editor.paste(&Default::default(), window, cx);
context_editor.editor.update(cx, |editor, cx| {
assert_eq!(editor.text(cx), expected_text);
});
});
}
fn create_context_with_messages(
mut messages: Vec<(Role, &str)>,
cx: &mut TestAppContext,
) -> Entity<AssistantContext> {
let registry = Arc::new(LanguageRegistry::test(cx.executor()));
let prompt_builder = Arc::new(PromptBuilder::new(None).unwrap());
cx.new(|cx| {
let mut context = AssistantContext::local(
registry,
None,
None,
prompt_builder.clone(),
Arc::new(SlashCommandWorkingSet::default()),
cx,
);
let mut message_1 = context.messages(cx).next().unwrap();
let (role, text) = messages.remove(0);
loop {
if role == message_1.role {
context.buffer().update(cx, |buffer, cx| {
buffer.edit([(message_1.offset_range, text)], None, cx);
});
break;
}
let mut ids = HashSet::default();
ids.insert(message_1.id);
context.cycle_message_roles(ids, cx);
message_1 = context.messages(cx).next().unwrap();
}
let mut last_message_id = message_1.id;
for (role, text) in messages {
context.insert_message_after(last_message_id, role, MessageStatus::Done, cx);
let message = context.messages(cx).last().unwrap();
last_message_id = message.id;
context.buffer().update(cx, |buffer, cx| {
buffer.edit([(message.offset_range, text)], None, cx);
})
}
context
})
}
fn init_test(cx: &mut App) {
let settings_store = SettingsStore::test(cx);
prompt_store::init(cx);

View File

@@ -48,7 +48,7 @@ impl SlashCommandCompletionProvider {
name_range: Range<Anchor>,
window: &mut Window,
cx: &mut App,
) -> Task<Result<Option<Vec<project::Completion>>>> {
) -> Task<Result<Vec<project::CompletionResponse>>> {
let slash_commands = self.slash_commands.clone();
let candidates = slash_commands
.command_names(cx)
@@ -71,28 +71,27 @@ impl SlashCommandCompletionProvider {
.await;
cx.update(|_, cx| {
Some(
matches
.into_iter()
.filter_map(|mat| {
let command = slash_commands.command(&mat.string, cx)?;
let mut new_text = mat.string.clone();
let requires_argument = command.requires_argument();
let accepts_arguments = command.accepts_arguments();
if requires_argument || accepts_arguments {
new_text.push(' ');
}
let completions = matches
.into_iter()
.filter_map(|mat| {
let command = slash_commands.command(&mat.string, cx)?;
let mut new_text = mat.string.clone();
let requires_argument = command.requires_argument();
let accepts_arguments = command.accepts_arguments();
if requires_argument || accepts_arguments {
new_text.push(' ');
}
let confirm =
editor
.clone()
.zip(workspace.clone())
.map(|(editor, workspace)| {
let command_name = mat.string.clone();
let command_range = command_range.clone();
let editor = editor.clone();
let workspace = workspace.clone();
Arc::new(
let confirm =
editor
.clone()
.zip(workspace.clone())
.map(|(editor, workspace)| {
let command_name = mat.string.clone();
let command_range = command_range.clone();
let editor = editor.clone();
let workspace = workspace.clone();
Arc::new(
move |intent: CompletionIntent,
window: &mut Window,
cx: &mut App| {
@@ -118,22 +117,27 @@ impl SlashCommandCompletionProvider {
}
},
) as Arc<_>
});
Some(project::Completion {
replace_range: name_range.clone(),
documentation: Some(CompletionDocumentation::SingleLine(
command.description().into(),
)),
new_text,
label: command.label(cx),
icon_path: None,
insert_text_mode: None,
confirm,
source: CompletionSource::Custom,
})
});
Some(project::Completion {
replace_range: name_range.clone(),
documentation: Some(CompletionDocumentation::SingleLine(
command.description().into(),
)),
new_text,
label: command.label(cx),
icon_path: None,
insert_text_mode: None,
confirm,
source: CompletionSource::Custom,
})
.collect(),
)
})
.collect();
vec![project::CompletionResponse {
completions,
is_incomplete: false,
}]
})
})
}
@@ -147,7 +151,7 @@ impl SlashCommandCompletionProvider {
last_argument_range: Range<Anchor>,
window: &mut Window,
cx: &mut App,
) -> Task<Result<Option<Vec<project::Completion>>>> {
) -> Task<Result<Vec<project::CompletionResponse>>> {
let new_cancel_flag = Arc::new(AtomicBool::new(false));
let mut flag = self.cancel_flag.lock();
flag.store(true, SeqCst);
@@ -165,28 +169,27 @@ impl SlashCommandCompletionProvider {
let workspace = self.workspace.clone();
let arguments = arguments.to_vec();
cx.background_spawn(async move {
Ok(Some(
completions
.await?
.into_iter()
.map(|new_argument| {
let confirm =
editor
.clone()
.zip(workspace.clone())
.map(|(editor, workspace)| {
Arc::new({
let mut completed_arguments = arguments.clone();
if new_argument.replace_previous_arguments {
completed_arguments.clear();
} else {
completed_arguments.pop();
}
completed_arguments.push(new_argument.new_text.clone());
let completions = completions
.await?
.into_iter()
.map(|new_argument| {
let confirm =
editor
.clone()
.zip(workspace.clone())
.map(|(editor, workspace)| {
Arc::new({
let mut completed_arguments = arguments.clone();
if new_argument.replace_previous_arguments {
completed_arguments.clear();
} else {
completed_arguments.pop();
}
completed_arguments.push(new_argument.new_text.clone());
let command_range = command_range.clone();
let command_name = command_name.clone();
move |intent: CompletionIntent,
let command_range = command_range.clone();
let command_name = command_name.clone();
move |intent: CompletionIntent,
window: &mut Window,
cx: &mut App| {
if new_argument.after_completion.run()
@@ -210,34 +213,41 @@ impl SlashCommandCompletionProvider {
!new_argument.after_completion.run()
}
}
}) as Arc<_>
});
}) as Arc<_>
});
let mut new_text = new_argument.new_text.clone();
if new_argument.after_completion == AfterCompletion::Continue {
new_text.push(' ');
}
let mut new_text = new_argument.new_text.clone();
if new_argument.after_completion == AfterCompletion::Continue {
new_text.push(' ');
}
project::Completion {
replace_range: if new_argument.replace_previous_arguments {
argument_range.clone()
} else {
last_argument_range.clone()
},
label: new_argument.label,
icon_path: None,
new_text,
documentation: None,
confirm,
insert_text_mode: None,
source: CompletionSource::Custom,
}
})
.collect(),
))
project::Completion {
replace_range: if new_argument.replace_previous_arguments {
argument_range.clone()
} else {
last_argument_range.clone()
},
label: new_argument.label,
icon_path: None,
new_text,
documentation: None,
confirm,
insert_text_mode: None,
source: CompletionSource::Custom,
}
})
.collect();
Ok(vec![project::CompletionResponse {
completions,
is_incomplete: false,
}])
})
} else {
Task::ready(Ok(Some(Vec::new())))
Task::ready(Ok(vec![project::CompletionResponse {
completions: Vec::new(),
is_incomplete: false,
}]))
}
}
}
@@ -251,7 +261,7 @@ impl CompletionProvider for SlashCommandCompletionProvider {
_: editor::CompletionContext,
window: &mut Window,
cx: &mut Context<Editor>,
) -> Task<Result<Option<Vec<project::Completion>>>> {
) -> Task<Result<Vec<project::CompletionResponse>>> {
let Some((name, arguments, command_range, last_argument_range)) =
buffer.update(cx, |buffer, _cx| {
let position = buffer_position.to_point(buffer);
@@ -295,7 +305,10 @@ impl CompletionProvider for SlashCommandCompletionProvider {
Some((name, arguments, command_range, last_argument_range))
})
else {
return Task::ready(Ok(Some(Vec::new())));
return Task::ready(Ok(vec![project::CompletionResponse {
completions: Vec::new(),
is_incomplete: false,
}]));
};
if let Some((arguments, argument_range)) = arguments {
@@ -329,6 +342,7 @@ impl CompletionProvider for SlashCommandCompletionProvider {
position: language::Anchor,
_text: &str,
_trigger_in_words: bool,
_menu_is_open: bool,
cx: &mut Context<Editor>,
) -> bool {
let buffer = buffer.read(cx);
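For orientation, the provider now returns Vec<project::CompletionResponse> rather than Option<Vec<project::Completion>>. A hypothetical caller that only wants a flat list could fold the batches as sketched below (not an API from this diff):

// Hypothetical helper: flatten response batches and remember whether any of
// them was marked incomplete.
fn flatten_responses(
    responses: Vec<project::CompletionResponse>,
) -> (Vec<project::Completion>, bool) {
    let mut completions = Vec::new();
    let mut is_incomplete = false;
    for response in responses {
        is_incomplete |= response.is_incomplete;
        completions.extend(response.completions);
    }
    (completions, is_incomplete)
}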

View File

@@ -218,6 +218,9 @@ pub trait Tool: 'static + Send + Sync {
/// before having permission to run.
fn needs_confirmation(&self, input: &serde_json::Value, cx: &App) -> bool;
/// Returns true if the tool may perform edits.
fn may_perform_edits(&self) -> bool;
/// Returns the JSON schema that describes the tool's input.
fn input_schema(&self, _: LanguageModelToolSchemaFormat) -> Result<serde_json::Value> {
Ok(serde_json::Value::Object(serde_json::Map::default()))
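A hypothetical consumer of the new hook, to illustrate the intent: the flag surfaces on PendingToolUse (added earlier in this diff), so callers can gate edit-specific policies on it instead of matching on tool names. Sketch only, under that assumption.

// Hypothetical policy helper, not part of the diff: tools that may edit the
// project always require review; read-only tools can be auto-approved.
fn requires_review(pending: &PendingToolUse, auto_approve_read_only: bool) -> bool {
    if pending.may_perform_edits {
        true
    } else {
        !auto_approve_read_only
    }
}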

View File

@@ -48,6 +48,10 @@ impl Tool for CopyPathTool {
false
}
fn may_perform_edits(&self) -> bool {
true
}
fn description(&self) -> String {
include_str!("./copy_path_tool/description.md").into()
}

View File

@@ -33,12 +33,16 @@ impl Tool for CreateDirectoryTool {
"create_directory".into()
}
fn description(&self) -> String {
include_str!("./create_directory_tool/description.md").into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
false
}
fn description(&self) -> String {
include_str!("./create_directory_tool/description.md").into()
fn may_perform_edits(&self) -> bool {
false
}
fn icon(&self) -> IconName {

View File

@@ -37,6 +37,10 @@ impl Tool for DeletePathTool {
false
}
fn may_perform_edits(&self) -> bool {
true
}
fn description(&self) -> String {
include_str!("./delete_path_tool/description.md").into()
}

View File

@@ -50,6 +50,10 @@ impl Tool for DiagnosticsTool {
false
}
fn may_perform_edits(&self) -> bool {
false
}
fn description(&self) -> String {
include_str!("./diagnostics_tool/description.md").into()
}

View File

@@ -2,6 +2,7 @@ use crate::{
Templates,
edit_agent::{EditAgent, EditAgentOutput, EditAgentOutputEvent},
schema::json_schema_for,
ui::{COLLAPSED_LINES, ToolOutputPreview},
};
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{
@@ -13,7 +14,7 @@ use editor::{Editor, EditorMode, MinimapVisibility, MultiBuffer, PathKey};
use futures::StreamExt;
use gpui::{
Animation, AnimationExt, AnyWindowHandle, App, AppContext, AsyncApp, Entity, Task,
TextStyleRefinement, WeakEntity, pulsating_between,
TextStyleRefinement, WeakEntity, pulsating_between, px,
};
use indoc::formatdoc;
use language::{
@@ -128,6 +129,10 @@ impl Tool for EditFileTool {
false
}
fn may_perform_edits(&self) -> bool {
true
}
fn description(&self) -> String {
include_str!("edit_file_tool/description.md").to_string()
}
@@ -884,30 +889,8 @@ impl ToolCard for EditFileToolCard {
(element.into_any_element(), line_height)
});
let (full_height_icon, full_height_tooltip_label) = if self.full_height_expanded {
(IconName::ChevronUp, "Collapse Code Block")
} else {
(IconName::ChevronDown, "Expand Code Block")
};
let gradient_overlay =
div()
.absolute()
.bottom_0()
.left_0()
.w_full()
.h_2_5()
.bg(gpui::linear_gradient(
0.,
gpui::linear_color_stop(cx.theme().colors().editor_background, 0.),
gpui::linear_color_stop(cx.theme().colors().editor_background.opacity(0.), 1.),
));
let border_color = cx.theme().colors().border.opacity(0.6);
const DEFAULT_COLLAPSED_LINES: u32 = 10;
let is_collapsible = self.total_lines.unwrap_or(0) > DEFAULT_COLLAPSED_LINES;
let waiting_for_diff = {
let styles = [
("w_4_5", (0.1, 0.85), 2000),
@@ -992,48 +975,34 @@ impl ToolCard for EditFileToolCard {
card.child(waiting_for_diff)
})
.when(self.preview_expanded && !self.is_loading(), |card| {
let editor_view = v_flex()
.relative()
.h_full()
.when(!self.full_height_expanded, |editor_container| {
editor_container.max_h(px(COLLAPSED_LINES as f32 * editor_line_height.0))
})
.overflow_hidden()
.border_t_1()
.border_color(border_color)
.bg(cx.theme().colors().editor_background)
.child(editor);
card.child(
v_flex()
.relative()
.h_full()
.when(!self.full_height_expanded, |editor_container| {
editor_container
.max_h(DEFAULT_COLLAPSED_LINES as f32 * editor_line_height)
})
.overflow_hidden()
.border_t_1()
.border_color(border_color)
.bg(cx.theme().colors().editor_background)
.child(editor)
.when(
!self.full_height_expanded && is_collapsible,
|editor_container| editor_container.child(gradient_overlay),
),
ToolOutputPreview::new(editor_view.into_any_element(), self.editor.entity_id())
.with_total_lines(self.total_lines.unwrap_or(0) as usize)
.toggle_state(self.full_height_expanded)
.with_collapsed_fade()
.on_toggle({
let this = cx.entity().downgrade();
move |is_expanded, _window, cx| {
if let Some(this) = this.upgrade() {
this.update(cx, |this, _cx| {
this.full_height_expanded = is_expanded;
});
}
}
}),
)
.when(is_collapsible, |card| {
card.child(
h_flex()
.id(("expand-button", self.editor.entity_id()))
.flex_none()
.cursor_pointer()
.h_5()
.justify_center()
.border_t_1()
.rounded_b_md()
.border_color(border_color)
.bg(cx.theme().colors().editor_background)
.hover(|style| style.bg(cx.theme().colors().element_hover.opacity(0.1)))
.child(
Icon::new(full_height_icon)
.size(IconSize::Small)
.color(Color::Muted),
)
.tooltip(Tooltip::text(full_height_tooltip_label))
.on_click(cx.listener(move |this, _event, _window, _cx| {
this.full_height_expanded = !this.full_height_expanded;
})),
)
})
})
}
}

View File

@@ -118,7 +118,11 @@ impl Tool for FetchTool {
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
true
false
}
fn may_perform_edits(&self) -> bool {
false
}
fn description(&self) -> String {

View File

@@ -59,6 +59,10 @@ impl Tool for FindPathTool {
false
}
fn may_perform_edits(&self) -> bool {
false
}
fn description(&self) -> String {
include_str!("./find_path_tool/description.md").into()
}

View File

@@ -60,6 +60,10 @@ impl Tool for GrepTool {
false
}
fn may_perform_edits(&self) -> bool {
false
}
fn description(&self) -> String {
include_str!("./grep_tool/description.md").into()
}

View File

@@ -48,6 +48,10 @@ impl Tool for ListDirectoryTool {
false
}
fn may_perform_edits(&self) -> bool {
false
}
fn description(&self) -> String {
include_str!("./list_directory_tool/description.md").into()
}

View File

@@ -46,6 +46,10 @@ impl Tool for MovePathTool {
false
}
fn may_perform_edits(&self) -> bool {
true
}
fn description(&self) -> String {
include_str!("./move_path_tool/description.md").into()
}

View File

@@ -37,6 +37,10 @@ impl Tool for NowTool {
false
}
fn may_perform_edits(&self) -> bool {
false
}
fn description(&self) -> String {
"Returns the current datetime in RFC 3339 format. Only use this tool when the user specifically asks for it or the current task would benefit from knowing the current datetime.".into()
}

View File

@@ -26,7 +26,9 @@ impl Tool for OpenTool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
true
}
fn may_perform_edits(&self) -> bool {
false
}
fn description(&self) -> String {
include_str!("./open_tool/description.md").to_string()
}

View File

@@ -58,6 +58,10 @@ impl Tool for ReadFileTool {
false
}
fn may_perform_edits(&self) -> bool {
false
}
fn description(&self) -> String {
include_str!("./read_file_tool/description.md").into()
}

View File

@@ -1,4 +1,7 @@
use crate::schema::json_schema_for;
use crate::{
schema::json_schema_for,
ui::{COLLAPSED_LINES, ToolOutputPreview},
};
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, Tool, ToolCard, ToolResult, ToolUseStatus};
use futures::{FutureExt as _, future::Shared};
@@ -25,7 +28,7 @@ use terminal_view::TerminalView;
use theme::ThemeSettings;
use ui::{Disclosure, Tooltip, prelude::*};
use util::{
get_system_shell, markdown::MarkdownInlineCode, size::format_file_size,
ResultExt, get_system_shell, markdown::MarkdownInlineCode, size::format_file_size,
time::duration_alt_display,
};
use workspace::Workspace;
@@ -77,6 +80,10 @@ impl Tool for TerminalTool {
true
}
fn may_perform_edits(&self) -> bool {
false
}
fn description(&self) -> String {
include_str!("./terminal_tool/description.md").to_string()
}
@@ -254,22 +261,24 @@ impl Tool for TerminalTool {
let terminal_view = window.update(cx, |_, window, cx| {
cx.new(|cx| {
TerminalView::new(
let mut view = TerminalView::new(
terminal.clone(),
workspace.downgrade(),
None,
project.downgrade(),
true,
window,
cx,
)
);
view.set_embedded_mode(None, cx);
view
})
})?;
let _ = card.update(cx, |card, _| {
card.update(cx, |card, _| {
card.terminal = Some(terminal_view.clone());
card.start_instant = Instant::now();
});
})
.log_err();
let exit_status = terminal
.update(cx, |terminal, cx| terminal.wait_for_completed_task(cx))?
@@ -285,7 +294,7 @@ impl Tool for TerminalTool {
exit_status.map(portable_pty::ExitStatus::from),
);
let _ = card.update(cx, |card, _| {
card.update(cx, |card, _| {
card.command_finished = true;
card.exit_status = exit_status;
card.was_content_truncated = processed_content.len() < previous_len;
@@ -293,7 +302,8 @@ impl Tool for TerminalTool {
card.content_line_count = content_line_count;
card.finished_with_empty_output = finished_with_empty_output;
card.elapsed_time = Some(card.start_instant.elapsed());
});
})
.log_err();
Ok(processed_content.into())
}
@@ -473,7 +483,6 @@ impl ToolCard for TerminalToolCard {
let time_elapsed = self
.elapsed_time
.unwrap_or_else(|| self.start_instant.elapsed());
let should_hide_terminal = tool_failed || self.finished_with_empty_output;
let header_bg = cx
.theme()
@@ -574,7 +583,7 @@ impl ToolCard for TerminalToolCard {
),
)
})
.when(!should_hide_terminal, |header| {
.when(!self.finished_with_empty_output, |header| {
header.child(
Disclosure::new(
("terminal-tool-disclosure", self.entity_id),
@@ -618,19 +627,43 @@ impl ToolCard for TerminalToolCard {
),
),
)
.when(self.preview_expanded && !should_hide_terminal, |this| {
this.child(
div()
.pt_2()
.min_h_72()
.border_t_1()
.border_color(border_color)
.bg(cx.theme().colors().editor_background)
.rounded_b_md()
.text_ui_sm(cx)
.child(terminal.clone()),
)
})
.when(
self.preview_expanded && !self.finished_with_empty_output,
|this| {
this.child(
div()
.pt_2()
.border_t_1()
.border_color(border_color)
.bg(cx.theme().colors().editor_background)
.rounded_b_md()
.text_ui_sm(cx)
.child(
ToolOutputPreview::new(
terminal.clone().into_any_element(),
terminal.entity_id(),
)
.with_total_lines(self.content_line_count)
.toggle_state(!terminal.read(cx).is_content_limited(window))
.on_toggle({
let terminal = terminal.clone();
move |is_expanded, _, cx| {
terminal.update(cx, |terminal, cx| {
terminal.set_embedded_mode(
if is_expanded {
None
} else {
Some(COLLAPSED_LINES)
},
cx,
);
});
}
}),
),
)
},
)
.into_any()
}
}

View File

@@ -28,6 +28,10 @@ impl Tool for ThinkingTool {
false
}
fn may_perform_edits(&self) -> bool {
false
}
fn description(&self) -> String {
include_str!("./thinking_tool/description.md").to_string()
}

View File

@@ -1,3 +1,5 @@
mod tool_call_card_header;
mod tool_output_preview;
pub use tool_call_card_header::*;
pub use tool_output_preview::*;

View File

@@ -0,0 +1,115 @@
use gpui::{AnyElement, EntityId, prelude::*};
use ui::{Tooltip, prelude::*};
#[derive(IntoElement)]
pub struct ToolOutputPreview<F>
where
F: Fn(bool, &mut Window, &mut App) + 'static,
{
content: AnyElement,
entity_id: EntityId,
full_height: bool,
total_lines: usize,
collapsed_fade: bool,
on_toggle: Option<F>,
}
pub const COLLAPSED_LINES: usize = 10;
impl<F> ToolOutputPreview<F>
where
F: Fn(bool, &mut Window, &mut App) + 'static,
{
pub fn new(content: AnyElement, entity_id: EntityId) -> Self {
Self {
content,
entity_id,
full_height: true,
total_lines: 0,
collapsed_fade: false,
on_toggle: None,
}
}
pub fn with_total_lines(mut self, total_lines: usize) -> Self {
self.total_lines = total_lines;
self
}
pub fn toggle_state(mut self, full_height: bool) -> Self {
self.full_height = full_height;
self
}
pub fn with_collapsed_fade(mut self) -> Self {
self.collapsed_fade = true;
self
}
pub fn on_toggle(mut self, listener: F) -> Self {
self.on_toggle = Some(listener);
self
}
}
impl<F> RenderOnce for ToolOutputPreview<F>
where
F: Fn(bool, &mut Window, &mut App) + 'static,
{
fn render(self, _window: &mut Window, cx: &mut App) -> impl IntoElement {
if self.total_lines <= COLLAPSED_LINES {
return self.content;
}
let border_color = cx.theme().colors().border.opacity(0.6);
let (icon, tooltip_label) = if self.full_height {
(IconName::ChevronUp, "Collapse")
} else {
(IconName::ChevronDown, "Expand")
};
let gradient_overlay =
if self.collapsed_fade && !self.full_height {
Some(div().absolute().bottom_5().left_0().w_full().h_2_5().bg(
gpui::linear_gradient(
0.,
gpui::linear_color_stop(cx.theme().colors().editor_background, 0.),
gpui::linear_color_stop(
cx.theme().colors().editor_background.opacity(0.),
1.,
),
),
))
} else {
None
};
v_flex()
.relative()
.child(self.content)
.children(gradient_overlay)
.child(
h_flex()
.id(("expand-button", self.entity_id))
.flex_none()
.cursor_pointer()
.h_5()
.justify_center()
.border_t_1()
.rounded_b_md()
.border_color(border_color)
.bg(cx.theme().colors().editor_background)
.hover(|style| style.bg(cx.theme().colors().element_hover.opacity(0.1)))
.child(Icon::new(icon).size(IconSize::Small).color(Color::Muted))
.tooltip(Tooltip::text(tooltip_label))
.when_some(self.on_toggle, |this, on_toggle| {
this.on_click({
move |_, window, cx| {
on_toggle(!self.full_height, window, cx);
}
})
}),
)
.into_any()
}
}
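A hedged call-site sketch for ToolOutputPreview, mirroring how the edit-file and terminal cards above wire it up; output_view, total_lines, and full_height_expanded are placeholders.

// Placeholder names; the pattern matches the cards updated in this diff.
ToolOutputPreview::new(output_view.clone().into_any_element(), output_view.entity_id())
    .with_total_lines(total_lines)
    .toggle_state(self.full_height_expanded)
    .with_collapsed_fade()
    .on_toggle({
        let this = cx.entity().downgrade();
        move |is_expanded, _window, cx| {
            if let Some(this) = this.upgrade() {
                this.update(cx, |this, _cx| this.full_height_expanded = is_expanded);
            }
        }
    })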

View File

@@ -36,6 +36,10 @@ impl Tool for WebSearchTool {
false
}
fn may_perform_edits(&self) -> bool {
false
}
fn description(&self) -> String {
"Search the web for information using your query. Use this when you need real-time information, facts, or data that might not be in your training. Results will include snippets and links from relevant web pages.".into()
}

View File

@@ -71,16 +71,20 @@ pub enum Model {
// DeepSeek
DeepSeekR1,
// Meta models
MetaLlama38BInstructV1,
MetaLlama370BInstructV1,
MetaLlama318BInstructV1_128k,
MetaLlama318BInstructV1,
MetaLlama3170BInstructV1_128k,
MetaLlama3170BInstructV1,
MetaLlama3211BInstructV1,
MetaLlama3290BInstructV1,
MetaLlama321BInstructV1,
MetaLlama323BInstructV1,
MetaLlama3_8BInstruct,
MetaLlama3_70BInstruct,
MetaLlama31_8BInstruct,
MetaLlama31_70BInstruct,
MetaLlama31_405BInstruct,
MetaLlama32_1BInstruct,
MetaLlama32_3BInstruct,
MetaLlama32_11BMultiModal,
MetaLlama32_90BMultiModal,
MetaLlama33_70BInstruct,
#[allow(non_camel_case_types)]
MetaLlama4Scout_17BInstruct,
#[allow(non_camel_case_types)]
MetaLlama4Maverick_17BInstruct,
// Mistral models
MistralMistral7BInstructV0,
MistralMixtral8x7BInstructV0,
@@ -145,7 +149,7 @@ impl Model {
Model::AmazonNovaMicro => "amazon.nova-micro-v1:0",
Model::AmazonNovaPro => "amazon.nova-pro-v1:0",
Model::AmazonNovaPremier => "amazon.nova-premier-v1:0",
Model::DeepSeekR1 => "us.deepseek.r1-v1:0",
Model::DeepSeekR1 => "deepseek.r1-v1:0",
Model::AI21J2GrandeInstruct => "ai21.j2-grande-instruct",
Model::AI21J2JumboInstruct => "ai21.j2-jumbo-instruct",
Model::AI21J2Mid => "ai21.j2-mid",
@@ -160,16 +164,18 @@ impl Model {
Model::CohereCommandRV1 => "cohere.command-r-v1:0",
Model::CohereCommandRPlusV1 => "cohere.command-r-plus-v1:0",
Model::CohereCommandLightTextV14_4k => "cohere.command-light-text-v14:7:4k",
Model::MetaLlama38BInstructV1 => "meta.llama3-8b-instruct-v1:0",
Model::MetaLlama370BInstructV1 => "meta.llama3-70b-instruct-v1:0",
Model::MetaLlama318BInstructV1_128k => "meta.llama3-1-8b-instruct-v1:0:128k",
Model::MetaLlama318BInstructV1 => "meta.llama3-1-8b-instruct-v1:0",
Model::MetaLlama3170BInstructV1_128k => "meta.llama3-1-70b-instruct-v1:0:128k",
Model::MetaLlama3170BInstructV1 => "meta.llama3-1-70b-instruct-v1:0",
Model::MetaLlama3211BInstructV1 => "meta.llama3-2-11b-instruct-v1:0",
Model::MetaLlama3290BInstructV1 => "meta.llama3-2-90b-instruct-v1:0",
Model::MetaLlama321BInstructV1 => "meta.llama3-2-1b-instruct-v1:0",
Model::MetaLlama323BInstructV1 => "meta.llama3-2-3b-instruct-v1:0",
Model::MetaLlama3_8BInstruct => "meta.llama3-8b-instruct-v1:0",
Model::MetaLlama3_70BInstruct => "meta.llama3-70b-instruct-v1:0",
Model::MetaLlama31_8BInstruct => "meta.llama3-1-8b-instruct-v1:0",
Model::MetaLlama31_70BInstruct => "meta.llama3-1-70b-instruct-v1:0",
Model::MetaLlama31_405BInstruct => "meta.llama3-1-405b-instruct-v1:0",
Model::MetaLlama32_11BMultiModal => "meta.llama3-2-11b-instruct-v1:0",
Model::MetaLlama32_90BMultiModal => "meta.llama3-2-90b-instruct-v1:0",
Model::MetaLlama32_1BInstruct => "meta.llama3-2-1b-instruct-v1:0",
Model::MetaLlama32_3BInstruct => "meta.llama3-2-3b-instruct-v1:0",
Model::MetaLlama33_70BInstruct => "meta.llama3-3-70b-instruct-v1:0",
Model::MetaLlama4Scout_17BInstruct => "meta.llama4-scout-17b-instruct-v1:0",
Model::MetaLlama4Maverick_17BInstruct => "meta.llama4-maverick-17b-instruct-v1:0",
Model::MistralMistral7BInstructV0 => "mistral.mistral-7b-instruct-v0:2",
Model::MistralMixtral8x7BInstructV0 => "mistral.mixtral-8x7b-instruct-v0:1",
Model::MistralMistralLarge2402V1 => "mistral.mistral-large-2402-v1:0",
@@ -214,16 +220,18 @@ impl Model {
Self::CohereCommandRV1 => "Cohere Command R V1",
Self::CohereCommandRPlusV1 => "Cohere Command R Plus V1",
Self::CohereCommandLightTextV14_4k => "Cohere Command Light Text V14 4K",
Self::MetaLlama38BInstructV1 => "Meta Llama 3 8B Instruct V1",
Self::MetaLlama370BInstructV1 => "Meta Llama 3 70B Instruct V1",
Self::MetaLlama318BInstructV1_128k => "Meta Llama 3 1.8B Instruct V1 128K",
Self::MetaLlama318BInstructV1 => "Meta Llama 3 1.8B Instruct V1",
Self::MetaLlama3170BInstructV1_128k => "Meta Llama 3 1 70B Instruct V1 128K",
Self::MetaLlama3170BInstructV1 => "Meta Llama 3 1 70B Instruct V1",
Self::MetaLlama3211BInstructV1 => "Meta Llama 3 2 11B Instruct V1",
Self::MetaLlama3290BInstructV1 => "Meta Llama 3 2 90B Instruct V1",
Self::MetaLlama321BInstructV1 => "Meta Llama 3 2 1B Instruct V1",
Self::MetaLlama323BInstructV1 => "Meta Llama 3 2 3B Instruct V1",
Self::MetaLlama3_8BInstruct => "Meta Llama 3 8B Instruct",
Self::MetaLlama3_70BInstruct => "Meta Llama 3 70B Instruct",
Self::MetaLlama31_8BInstruct => "Meta Llama 3.1 8B Instruct",
Self::MetaLlama31_70BInstruct => "Meta Llama 3.1 70B Instruct",
Self::MetaLlama31_405BInstruct => "Meta Llama 3.1 405B Instruct",
Self::MetaLlama32_11BMultiModal => "Meta Llama 3.2 11B Vision Instruct",
Self::MetaLlama32_90BMultiModal => "Meta Llama 3.2 90B Vision Instruct",
Self::MetaLlama32_1BInstruct => "Meta Llama 3.2 1B Instruct",
Self::MetaLlama32_3BInstruct => "Meta Llama 3.2 3B Instruct",
Self::MetaLlama33_70BInstruct => "Meta Llama 3.3 70B Instruct",
Self::MetaLlama4Scout_17BInstruct => "Meta Llama 4 Scout 17B Instruct",
Self::MetaLlama4Maverick_17BInstruct => "Meta Llama 4 Maverick 17B Instruct",
Self::MistralMistral7BInstructV0 => "Mistral 7B Instruct V0",
Self::MistralMixtral8x7BInstructV0 => "Mistral Mixtral 8x7B Instruct V0",
Self::MistralMistralLarge2402V1 => "Mistral Large 2402 V1",
@@ -365,55 +373,60 @@ impl Model {
Ok(format!("{}.{}", region_group, model_id))
}
// Models available only in US
(Model::Claude3Opus, "us")
| (Model::Claude3_5Haiku, "us")
| (Model::Claude3_7Sonnet, "us")
| (Model::ClaudeSonnet4, "us")
| (Model::ClaudeOpus4, "us")
| (Model::ClaudeSonnet4Thinking, "us")
| (Model::ClaudeOpus4Thinking, "us")
| (Model::Claude3_7SonnetThinking, "us")
| (Model::AmazonNovaPremier, "us")
| (Model::MistralPixtralLarge2502V1, "us") => {
// Available everywhere
(Model::AmazonNovaLite | Model::AmazonNovaMicro | Model::AmazonNovaPro, _) => {
Ok(format!("{}.{}", region_group, model_id))
}
// Models available in US, EU, and APAC
(Model::Claude3_5SonnetV2, "us")
| (Model::Claude3_5SonnetV2, "apac")
| (Model::Claude3_5Sonnet, _)
| (Model::Claude3Haiku, _)
| (Model::Claude3Sonnet, _)
| (Model::AmazonNovaLite, _)
| (Model::AmazonNovaMicro, _)
| (Model::AmazonNovaPro, _) => Ok(format!("{}.{}", region_group, model_id)),
// Models in US
(
Model::AmazonNovaPremier
| Model::Claude3_5Haiku
| Model::Claude3_5Sonnet
| Model::Claude3_5SonnetV2
| Model::Claude3_7Sonnet
| Model::Claude3_7SonnetThinking
| Model::Claude3Haiku
| Model::Claude3Opus
| Model::Claude3Sonnet
| Model::DeepSeekR1
| Model::MetaLlama31_405BInstruct
| Model::MetaLlama31_70BInstruct
| Model::MetaLlama31_8BInstruct
| Model::MetaLlama32_11BMultiModal
| Model::MetaLlama32_1BInstruct
| Model::MetaLlama32_3BInstruct
| Model::MetaLlama32_90BMultiModal
| Model::MetaLlama33_70BInstruct
| Model::MetaLlama4Maverick_17BInstruct
| Model::MetaLlama4Scout_17BInstruct
| Model::MistralPixtralLarge2502V1
| Model::PalmyraWriterX4
| Model::PalmyraWriterX5,
"us",
) => Ok(format!("{}.{}", region_group, model_id)),
// Models with limited EU availability
(Model::MetaLlama321BInstructV1, "us")
| (Model::MetaLlama321BInstructV1, "eu")
| (Model::MetaLlama323BInstructV1, "us")
| (Model::MetaLlama323BInstructV1, "eu") => {
Ok(format!("{}.{}", region_group, model_id))
}
// Models available in EU
(
Model::Claude3_5Sonnet
| Model::Claude3_7Sonnet
| Model::Claude3_7SonnetThinking
| Model::Claude3Haiku
| Model::Claude3Sonnet
| Model::MetaLlama32_1BInstruct
| Model::MetaLlama32_3BInstruct
| Model::MistralPixtralLarge2502V1,
"eu",
) => Ok(format!("{}.{}", region_group, model_id)),
// US-only models (all remaining Meta models)
(Model::MetaLlama38BInstructV1, "us")
| (Model::MetaLlama370BInstructV1, "us")
| (Model::MetaLlama318BInstructV1, "us")
| (Model::MetaLlama318BInstructV1_128k, "us")
| (Model::MetaLlama3170BInstructV1, "us")
| (Model::MetaLlama3170BInstructV1_128k, "us")
| (Model::MetaLlama3211BInstructV1, "us")
| (Model::MetaLlama3290BInstructV1, "us") => {
Ok(format!("{}.{}", region_group, model_id))
}
// Writer models only available in the US
(Model::PalmyraWriterX4, "us") | (Model::PalmyraWriterX5, "us") => {
// They have some goofiness
Ok(format!("{}.{}", region_group, model_id))
}
// Models available in APAC
(
Model::Claude3_5Sonnet
| Model::Claude3_5SonnetV2
| Model::Claude3Haiku
| Model::Claude3Sonnet,
"apac",
) => Ok(format!("{}.{}", region_group, model_id)),
// Any other combination is not supported
_ => Ok(self.id().into()),
@@ -464,6 +477,10 @@ mod tests {
Model::Claude3_5SonnetV2.cross_region_inference_id("ap-northeast-1")?,
"apac.anthropic.claude-3-5-sonnet-20241022-v2:0"
);
assert_eq!(
Model::Claude3_5SonnetV2.cross_region_inference_id("ap-southeast-2")?,
"apac.anthropic.claude-3-5-sonnet-20241022-v2:0"
);
assert_eq!(
Model::AmazonNovaLite.cross_region_inference_id("ap-south-1")?,
"apac.amazon.nova-lite-v1:0"
@@ -489,11 +506,15 @@ mod tests {
fn test_meta_models_inference_ids() -> anyhow::Result<()> {
// Test Meta models
assert_eq!(
Model::MetaLlama370BInstructV1.cross_region_inference_id("us-east-1")?,
"us.meta.llama3-70b-instruct-v1:0"
Model::MetaLlama3_70BInstruct.cross_region_inference_id("us-east-1")?,
"meta.llama3-70b-instruct-v1:0"
);
assert_eq!(
Model::MetaLlama321BInstructV1.cross_region_inference_id("eu-west-1")?,
Model::MetaLlama31_70BInstruct.cross_region_inference_id("us-east-1")?,
"us.meta.llama3-1-70b-instruct-v1:0"
);
assert_eq!(
Model::MetaLlama32_1BInstruct.cross_region_inference_id("eu-west-1")?,
"eu.meta.llama3-2-1b-instruct-v1:0"
);
Ok(())
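Two further illustrative assertions in the same style as the test above, consistent with the updated match arms (not assertions from the PR): a US-only Llama 4 model gains the us. prefix, and a small Llama 3.2 model keeps its eu. prefix.

assert_eq!(
    Model::MetaLlama4Scout_17BInstruct.cross_region_inference_id("us-east-1")?,
    "us.meta.llama4-scout-17b-instruct-v1:0"
);
assert_eq!(
    Model::MetaLlama32_3BInstruct.cross_region_inference_id("eu-west-1")?,
    "eu.meta.llama3-2-3b-instruct-v1:0"
);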

View File

@@ -35,6 +35,7 @@ pub struct ChannelBuffer {
pub enum ChannelBufferEvent {
CollaboratorsChanged,
Disconnected,
Connected,
BufferEdited,
ChannelChanged,
}
@@ -103,6 +104,17 @@ impl ChannelBuffer {
}
}
pub fn connected(&mut self, cx: &mut Context<Self>) {
self.connected = true;
if self.subscription.is_none() {
let Ok(subscription) = self.client.subscribe_to_entity(self.channel_id.0) else {
return;
};
self.subscription = Some(subscription.set_entity(&cx.entity(), &mut cx.to_async()));
cx.emit(ChannelBufferEvent::Connected);
}
}
pub fn remote_id(&self, cx: &App) -> BufferId {
self.buffer.read(cx).remote_id()
}

View File

@@ -56,7 +56,6 @@ pub struct Channel {
pub name: SharedString,
pub visibility: proto::ChannelVisibility,
pub parent_path: Vec<ChannelId>,
pub channel_order: i32,
}
#[derive(Default, Debug)]
@@ -615,24 +614,7 @@ impl ChannelStore {
to: to.0,
})
.await?;
Ok(())
})
}
pub fn reorder_channel(
&mut self,
channel_id: ChannelId,
direction: proto::reorder_channel::Direction,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
let client = self.client.clone();
cx.spawn(async move |_, _| {
client
.request(proto::ReorderChannel {
channel_id: channel_id.0,
direction: direction.into(),
})
.await?;
Ok(())
})
}
@@ -990,6 +972,7 @@ impl ChannelStore {
.log_err();
if let Some(operations) = operations {
channel_buffer.connected(cx);
let client = this.client.clone();
cx.background_spawn(async move {
let operations = operations.await;
@@ -1030,8 +1013,8 @@ impl ChannelStore {
if let Some(this) = this.upgrade() {
this.update(cx, |this, cx| {
for (_, buffer) in this.opened_buffers.drain() {
if let OpenEntityHandle::Open(buffer) = buffer {
for (_, buffer) in &this.opened_buffers {
if let OpenEntityHandle::Open(buffer) = &buffer {
if let Some(buffer) = buffer.upgrade() {
buffer.update(cx, |buffer, cx| buffer.disconnect(cx));
}
@@ -1044,18 +1027,6 @@ impl ChannelStore {
});
}
#[cfg(any(test, feature = "test-support"))]
pub fn reset(&mut self) {
self.channel_invitations.clear();
self.channel_index.clear();
self.channel_participants.clear();
self.outgoing_invites.clear();
self.opened_buffers.clear();
self.opened_chats.clear();
self.disconnect_channel_buffers_task = None;
self.channel_states.clear();
}
pub(crate) fn update_channels(
&mut self,
payload: proto::UpdateChannels,
@@ -1080,7 +1051,6 @@ impl ChannelStore {
visibility: channel.visibility(),
name: channel.name.into(),
parent_path: channel.parent_path.into_iter().map(ChannelId).collect(),
channel_order: channel.channel_order,
}),
),
}

View File

@@ -61,13 +61,11 @@ impl ChannelPathsInsertGuard<'_> {
ret = existing_channel.visibility != channel_proto.visibility()
|| existing_channel.name != channel_proto.name
|| existing_channel.parent_path != parent_path
|| existing_channel.channel_order != channel_proto.channel_order;
|| existing_channel.parent_path != parent_path;
existing_channel.visibility = channel_proto.visibility();
existing_channel.name = channel_proto.name.into();
existing_channel.parent_path = parent_path;
existing_channel.channel_order = channel_proto.channel_order;
} else {
self.channels_by_id.insert(
ChannelId(channel_proto.id),
@@ -76,7 +74,6 @@ impl ChannelPathsInsertGuard<'_> {
visibility: channel_proto.visibility(),
name: channel_proto.name.into(),
parent_path,
channel_order: channel_proto.channel_order,
}),
);
self.insert_root(ChannelId(channel_proto.id));
@@ -103,18 +100,17 @@ impl Drop for ChannelPathsInsertGuard<'_> {
fn channel_path_sorting_key(
id: ChannelId,
channels_by_id: &BTreeMap<ChannelId, Arc<Channel>>,
) -> impl Iterator<Item = (i32, ChannelId)> {
let (parent_path, order_and_id) =
channels_by_id
.get(&id)
.map_or((&[] as &[_], None), |channel| {
(
channel.parent_path.as_slice(),
Some((channel.channel_order, channel.id)),
)
});
) -> impl Iterator<Item = (&str, ChannelId)> {
let (parent_path, name) = channels_by_id
.get(&id)
.map_or((&[] as &[_], None), |channel| {
(
channel.parent_path.as_slice(),
Some((channel.name.as_ref(), channel.id)),
)
});
parent_path
.iter()
.filter_map(|id| Some((channels_by_id.get(id)?.channel_order, *id)))
.chain(order_and_id)
.filter_map(|id| Some((channels_by_id.get(id)?.name.as_ref(), *id)))
.chain(name)
}
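With channel_order removed, ordering falls out of comparing these keys lexicographically: ancestors' names first, then the channel's own name, so siblings sort alphabetically and a parent sorts ahead of its children. A small illustrative comparison with hypothetical names and ids:

// Illustrative only: a parent's key is a strict prefix of its children's keys,
// and siblings differ in the final (name, id) entry.
let zed = vec![("zed", ChannelId(1))];
let design = vec![("zed", ChannelId(1)), ("design", ChannelId(3))];
let engineering = vec![("zed", ChannelId(1)), ("engineering", ChannelId(2))];
assert!(zed < design && design < engineering);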

View File

@@ -21,14 +21,12 @@ fn test_update_channels(cx: &mut App) {
name: "b".to_string(),
visibility: proto::ChannelVisibility::Members as i32,
parent_path: Vec::new(),
channel_order: 1,
},
proto::Channel {
id: 2,
name: "a".to_string(),
visibility: proto::ChannelVisibility::Members as i32,
parent_path: Vec::new(),
channel_order: 2,
},
],
..Default::default()
@@ -39,8 +37,8 @@ fn test_update_channels(cx: &mut App) {
&channel_store,
&[
//
(0, "b".to_string()),
(0, "a".to_string()),
(0, "b".to_string()),
],
cx,
);
@@ -54,14 +52,12 @@ fn test_update_channels(cx: &mut App) {
name: "x".to_string(),
visibility: proto::ChannelVisibility::Members as i32,
parent_path: vec![1],
channel_order: 1,
},
proto::Channel {
id: 4,
name: "y".to_string(),
visibility: proto::ChannelVisibility::Members as i32,
parent_path: vec![2],
channel_order: 1,
},
],
..Default::default()
@@ -71,111 +67,15 @@ fn test_update_channels(cx: &mut App) {
assert_channels(
&channel_store,
&[
(0, "b".to_string()),
(1, "x".to_string()),
(0, "a".to_string()),
(1, "y".to_string()),
(0, "b".to_string()),
(1, "x".to_string()),
],
cx,
);
}
#[gpui::test]
fn test_update_channels_order_independent(cx: &mut App) {
/// Based on: https://stackoverflow.com/a/59939809
fn unique_permutations<T: Clone>(items: Vec<T>) -> Vec<Vec<T>> {
if items.len() == 1 {
vec![items]
} else {
let mut output: Vec<Vec<T>> = vec![];
for (ix, first) in items.iter().enumerate() {
let mut remaining_elements = items.clone();
remaining_elements.remove(ix);
for mut permutation in unique_permutations(remaining_elements) {
permutation.insert(0, first.clone());
output.push(permutation);
}
}
output
}
}
let test_data = vec![
proto::Channel {
id: 6,
name: "β".to_string(),
visibility: proto::ChannelVisibility::Members as i32,
parent_path: vec![1, 3],
channel_order: 1,
},
proto::Channel {
id: 5,
name: "α".to_string(),
visibility: proto::ChannelVisibility::Members as i32,
parent_path: vec![1],
channel_order: 2,
},
proto::Channel {
id: 3,
name: "x".to_string(),
visibility: proto::ChannelVisibility::Members as i32,
parent_path: vec![1],
channel_order: 1,
},
proto::Channel {
id: 4,
name: "y".to_string(),
visibility: proto::ChannelVisibility::Members as i32,
parent_path: vec![2],
channel_order: 1,
},
proto::Channel {
id: 1,
name: "b".to_string(),
visibility: proto::ChannelVisibility::Members as i32,
parent_path: Vec::new(),
channel_order: 1,
},
proto::Channel {
id: 2,
name: "a".to_string(),
visibility: proto::ChannelVisibility::Members as i32,
parent_path: Vec::new(),
channel_order: 2,
},
];
let channel_store = init_test(cx);
let permutations = unique_permutations(test_data);
for test_instance in permutations {
channel_store.update(cx, |channel_store, _| channel_store.reset());
update_channels(
&channel_store,
proto::UpdateChannels {
channels: test_instance,
..Default::default()
},
cx,
);
assert_channels(
&channel_store,
&[
(0, "b".to_string()),
(1, "x".to_string()),
(2, "β".to_string()),
(1, "α".to_string()),
(0, "a".to_string()),
(1, "y".to_string()),
],
cx,
);
}
}
#[gpui::test]
fn test_dangling_channel_paths(cx: &mut App) {
let channel_store = init_test(cx);
@@ -189,21 +89,18 @@ fn test_dangling_channel_paths(cx: &mut App) {
name: "a".to_string(),
visibility: proto::ChannelVisibility::Members as i32,
parent_path: vec![],
channel_order: 1,
},
proto::Channel {
id: 1,
name: "b".to_string(),
visibility: proto::ChannelVisibility::Members as i32,
parent_path: vec![0],
channel_order: 1,
},
proto::Channel {
id: 2,
name: "c".to_string(),
visibility: proto::ChannelVisibility::Members as i32,
parent_path: vec![0, 1],
channel_order: 1,
},
],
..Default::default()
@@ -250,7 +147,6 @@ async fn test_channel_messages(cx: &mut TestAppContext) {
name: "the-channel".to_string(),
visibility: proto::ChannelVisibility::Members as i32,
parent_path: vec![],
channel_order: 1,
}],
..Default::default()
});

View File

@@ -57,7 +57,7 @@ We run two instances of collab:
Both of these run on the Kubernetes cluster hosted in Digital Ocean.
Deployment is triggered by pushing to the `collab-staging` (or `collab-production`) tag in Github. The best way to do this is:
Deployment is triggered by pushing to the `collab-staging` (or `collab-production`) tag in GitHub. The best way to do this is:
- `./script/deploy-collab staging`
- `./script/deploy-collab production`

View File

@@ -266,14 +266,11 @@ CREATE TABLE "channels" (
"created_at" TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
"visibility" VARCHAR NOT NULL,
"parent_path" TEXT NOT NULL,
"requires_zed_cla" BOOLEAN NOT NULL DEFAULT FALSE,
"channel_order" INTEGER NOT NULL DEFAULT 1
"requires_zed_cla" BOOLEAN NOT NULL DEFAULT FALSE
);
CREATE INDEX "index_channels_on_parent_path" ON "channels" ("parent_path");
CREATE INDEX "index_channels_on_parent_path_and_order" ON "channels" ("parent_path", "channel_order");
CREATE TABLE IF NOT EXISTS "channel_chat_participants" (
"id" INTEGER PRIMARY KEY AUTOINCREMENT,
"user_id" INTEGER NOT NULL REFERENCES users (id),

View File

@@ -1,16 +0,0 @@
-- Add channel_order column to channels table with default value
ALTER TABLE channels ADD COLUMN channel_order INTEGER NOT NULL DEFAULT 1;
-- Update channel_order for existing channels using ROW_NUMBER for deterministic ordering
UPDATE channels
SET channel_order = (
SELECT ROW_NUMBER() OVER (
PARTITION BY parent_path
ORDER BY name, id
)
FROM channels c2
WHERE c2.id = channels.id
);
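-- Note: as written, the correlated subquery restricts `c2` to the single matching row,
-- so the window function only ever sees that one row and the backfill assigns 1 to
-- every existing channel; ranking siblings distinctly would require windowing over all
-- rows that share the same parent_path instead.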
-- Create index for efficient ordering queries
CREATE INDEX "index_channels_on_parent_path_and_order" ON "channels" ("parent_path", "channel_order");

View File

@@ -219,12 +219,19 @@ struct BillingSubscriptionJson {
id: BillingSubscriptionId,
name: String,
status: StripeSubscriptionStatus,
period: Option<BillingSubscriptionPeriodJson>,
trial_end_at: Option<String>,
cancel_at: Option<String>,
/// Whether this subscription can be canceled.
is_cancelable: bool,
}
#[derive(Debug, Serialize)]
struct BillingSubscriptionPeriodJson {
start_at: String,
end_at: String,
}
#[derive(Debug, Serialize)]
struct ListBillingSubscriptionsResponse {
subscriptions: Vec<BillingSubscriptionJson>,
@@ -254,6 +261,15 @@ async fn list_billing_subscriptions(
None => "Zed LLM Usage".to_string(),
},
status: subscription.stripe_subscription_status,
period: maybe!({
let start_at = subscription.current_period_start_at()?;
let end_at = subscription.current_period_end_at()?;
Some(BillingSubscriptionPeriodJson {
start_at: start_at.to_rfc3339_opts(SecondsFormat::Millis, true),
end_at: end_at.to_rfc3339_opts(SecondsFormat::Millis, true),
})
}),
trial_end_at: if subscription.kind == Some(SubscriptionKind::ZedProTrial) {
maybe!({
let end_at = subscription.stripe_current_period_end?;

View File

@@ -66,7 +66,7 @@ async fn get_extensions(
params.filter.as_deref(),
provides_filter.as_ref(),
params.max_schema_version,
500,
1_000,
)
.await?;

View File

@@ -582,7 +582,6 @@ pub struct Channel {
pub visibility: ChannelVisibility,
/// parent_path is the channel ids from the root to this one (not including this one)
pub parent_path: Vec<ChannelId>,
pub channel_order: i32,
}
impl Channel {
@@ -592,7 +591,6 @@ impl Channel {
visibility: value.visibility,
name: value.clone().name,
parent_path: value.ancestors().collect(),
channel_order: value.channel_order,
}
}
@@ -602,13 +600,8 @@ impl Channel {
name: self.name.clone(),
visibility: self.visibility.into(),
parent_path: self.parent_path.iter().map(|c| c.to_proto()).collect(),
channel_order: self.channel_order,
}
}
pub fn root_id(&self) -> ChannelId {
self.parent_path.first().copied().unwrap_or(self.id)
}
}
#[derive(Debug, PartialEq, Eq, Hash)]

View File

@@ -4,7 +4,7 @@ use rpc::{
ErrorCode, ErrorCodeExt,
proto::{ChannelBufferVersion, VectorClockEntry, channel_member::Kind},
};
use sea_orm::{ActiveValue, DbBackend, TryGetableMany};
use sea_orm::{DbBackend, TryGetableMany};
impl Database {
#[cfg(test)]
@@ -59,28 +59,16 @@ impl Database {
parent = Some(parent_channel);
}
let parent_path = parent
.as_ref()
.map_or(String::new(), |parent| parent.path());
// Find the maximum channel_order among siblings to set the new channel at the end
let max_order = max_order(&parent_path, &tx).await?;
log::info!(
"Creating channel '{}' with parent_path='{}', max_order={}, new_order={}",
name,
parent_path,
max_order,
max_order + 1
);
let channel = channel::ActiveModel {
id: ActiveValue::NotSet,
name: ActiveValue::Set(name.to_string()),
visibility: ActiveValue::Set(ChannelVisibility::Members),
parent_path: ActiveValue::Set(parent_path),
parent_path: ActiveValue::Set(
parent
.as_ref()
.map_or(String::new(), |parent| parent.path()),
),
requires_zed_cla: ActiveValue::NotSet,
channel_order: ActiveValue::Set(max_order + 1),
}
.insert(&*tx)
.await?;
@@ -543,7 +531,11 @@ impl Database {
.get_channel_descendants_excluding_self(channels.iter(), tx)
.await?;
descendants.extend(channels);
for channel in channels {
if let Err(ix) = descendants.binary_search_by_key(&channel.path(), |c| c.path()) {
descendants.insert(ix, channel);
}
}
let roles_by_channel_id = channel_memberships
.iter()
@@ -960,14 +952,11 @@ impl Database {
}
let root_id = channel.root_id();
let new_parent_path = new_parent.path();
let old_path = format!("{}{}/", channel.parent_path, channel.id);
let new_path = format!("{}{}/", &new_parent_path, channel.id);
let new_order = max_order(&new_parent_path, &tx).await? + 1;
let new_path = format!("{}{}/", new_parent.path(), channel.id);
let mut model = channel.into_active_model();
model.parent_path = ActiveValue::Set(new_parent.path());
model.channel_order = ActiveValue::Set(new_order);
let channel = model.update(&*tx).await?;
let descendent_ids =
@@ -997,132 +986,6 @@ impl Database {
})
.await
}
pub async fn reorder_channel(
&self,
channel_id: ChannelId,
direction: proto::reorder_channel::Direction,
user_id: UserId,
) -> Result<Vec<Channel>> {
self.transaction(|tx| async move {
let mut channel = self.get_channel_internal(channel_id, &tx).await?;
log::info!(
"Reordering channel {} (parent_path: '{}', order: {})",
channel.id,
channel.parent_path,
channel.channel_order
);
// Check if user is admin of the channel
self.check_user_is_channel_admin(&channel, user_id, &tx)
.await?;
// Find the sibling channel to swap with
let sibling_channel = match direction {
proto::reorder_channel::Direction::Up => {
log::info!(
"Looking for sibling with parent_path='{}' and order < {}",
channel.parent_path,
channel.channel_order
);
// Find channel with highest order less than current
channel::Entity::find()
.filter(
channel::Column::ParentPath
.eq(&channel.parent_path)
.and(channel::Column::ChannelOrder.lt(channel.channel_order)),
)
.order_by_desc(channel::Column::ChannelOrder)
.one(&*tx)
.await?
}
proto::reorder_channel::Direction::Down => {
log::info!(
"Looking for sibling with parent_path='{}' and order > {}",
channel.parent_path,
channel.channel_order
);
// Find channel with lowest order greater than current
channel::Entity::find()
.filter(
channel::Column::ParentPath
.eq(&channel.parent_path)
.and(channel::Column::ChannelOrder.gt(channel.channel_order)),
)
.order_by_asc(channel::Column::ChannelOrder)
.one(&*tx)
.await?
}
};
let mut sibling_channel = match sibling_channel {
Some(sibling) => {
log::info!(
"Found sibling {} (parent_path: '{}', order: {})",
sibling.id,
sibling.parent_path,
sibling.channel_order
);
sibling
}
None => {
log::warn!("No sibling found to swap with");
// No sibling to swap with
return Ok(vec![]);
}
};
let current_order = channel.channel_order;
let sibling_order = sibling_channel.channel_order;
channel::ActiveModel {
id: ActiveValue::Unchanged(sibling_channel.id),
channel_order: ActiveValue::Set(current_order),
..Default::default()
}
.update(&*tx)
.await?;
sibling_channel.channel_order = current_order;
channel::ActiveModel {
id: ActiveValue::Unchanged(channel.id),
channel_order: ActiveValue::Set(sibling_order),
..Default::default()
}
.update(&*tx)
.await?;
channel.channel_order = sibling_order;
log::info!(
"Reorder complete. Swapped channels {} and {}",
channel.id,
sibling_channel.id
);
let swapped_channels = vec![
Channel::from_model(channel),
Channel::from_model(sibling_channel),
];
Ok(swapped_channels)
})
.await
}
}
async fn max_order(parent_path: &str, tx: &TransactionHandle) -> Result<i32> {
let max_order = channel::Entity::find()
.filter(channel::Column::ParentPath.eq(parent_path))
.select_only()
.column_as(channel::Column::ChannelOrder.max(), "max_order")
.into_tuple::<Option<i32>>()
.one(&**tx)
.await?
.flatten()
.unwrap_or(0);
Ok(max_order)
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveColumn)]
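
Editor's note: the removed `reorder_channel` swaps `channel_order` values with the adjacent sibling (the one just above for `Up`, just below for `Down`), issuing one update per row inside the transaction and treating a missing sibling as a no-op. A minimal in-memory sketch of that swap, with hypothetical types standing in for the database rows:

/// Hypothetical helper, not the collab API: `orders` holds a sibling group's
/// `channel_order` values sorted ascending; `ix` is the channel being moved.
fn swap_with_sibling(orders: &mut [i32], ix: usize, up: bool) -> bool {
    let sibling = if up {
        // Already first: nothing to swap with, mirroring the DB no-op.
        if ix == 0 { return false; }
        ix - 1
    } else {
        // Already last: nothing to swap with.
        if ix + 1 == orders.len() { return false; }
        ix + 1
    };
    orders.swap(ix, sibling);
    true
}

fn main() {
    let mut orders = vec![1, 2, 3];
    assert!(swap_with_sibling(&mut orders, 1, true)); // move the 2nd channel up
    assert_eq!(orders, vec![2, 1, 3]);
    assert!(!swap_with_sibling(&mut orders, 0, true)); // first channel: no-op
}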

View File

@@ -66,6 +66,87 @@ impl Database {
.await
}
/// Delete all channel chat participants from previous servers
pub async fn delete_stale_channel_chat_participants(
&self,
environment: &str,
new_server_id: ServerId,
) -> Result<()> {
self.transaction(|tx| async move {
let stale_server_epochs = self
.stale_server_ids(environment, new_server_id, &tx)
.await?;
channel_chat_participant::Entity::delete_many()
.filter(
channel_chat_participant::Column::ConnectionServerId
.is_in(stale_server_epochs.iter().copied()),
)
.exec(&*tx)
.await?;
Ok(())
})
.await
}
pub async fn clear_old_worktree_entries(&self, server_id: ServerId) -> Result<()> {
self.transaction(|tx| async move {
use sea_orm::Statement;
use sea_orm::sea_query::{Expr, Query};
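// Delete stale worktree entries in batches: each pass removes up to 10_000 rows
// whose owning project is hosted by a connection from another (stale) server, and
// the loop exits once a pass deletes nothing, keeping each statement bounded
// instead of issuing one unbounded DELETE.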
loop {
let delete_query = Query::delete()
.from_table(worktree_entry::Entity)
.and_where(
Expr::tuple([
Expr::col((worktree_entry::Entity, worktree_entry::Column::ProjectId))
.into(),
Expr::col((worktree_entry::Entity, worktree_entry::Column::WorktreeId))
.into(),
Expr::col((worktree_entry::Entity, worktree_entry::Column::Id)).into(),
])
.in_subquery(
Query::select()
.columns([
(worktree_entry::Entity, worktree_entry::Column::ProjectId),
(worktree_entry::Entity, worktree_entry::Column::WorktreeId),
(worktree_entry::Entity, worktree_entry::Column::Id),
])
.from(worktree_entry::Entity)
.inner_join(
project::Entity,
Expr::col((project::Entity, project::Column::Id)).equals((
worktree_entry::Entity,
worktree_entry::Column::ProjectId,
)),
)
.and_where(project::Column::HostConnectionServerId.ne(server_id))
.limit(10000)
.to_owned(),
),
)
.to_owned();
let statement = Statement::from_sql_and_values(
tx.get_database_backend(),
delete_query
.to_string(sea_orm::sea_query::PostgresQueryBuilder)
.as_str(),
vec![],
);
let result = tx.execute(statement).await?;
if result.rows_affected() == 0 {
break;
}
}
Ok(())
})
.await
}
/// Deletes any stale servers in the environment that don't match the `new_server_id`.
pub async fn delete_stale_servers(
&self,
@@ -86,7 +167,7 @@ impl Database {
.await
}
async fn stale_server_ids(
pub async fn stale_server_ids(
&self,
environment: &str,
new_server_id: ServerId,

View File

@@ -10,9 +10,6 @@ pub struct Model {
pub visibility: ChannelVisibility,
pub parent_path: String,
pub requires_zed_cla: bool,
/// The order of this channel relative to its siblings within the same parent.
/// Lower values appear first. Channels are sorted by parent_path first, then by channel_order.
pub channel_order: i32,
}
impl Model {

View File

@@ -172,35 +172,16 @@ impl Drop for TestDb {
}
}
fn assert_channel_tree_matches(actual: Vec<Channel>, expected: Vec<Channel>) {
let expected_channels = expected.into_iter().collect::<HashSet<_>>();
let actual_channels = actual.into_iter().collect::<HashSet<_>>();
pretty_assertions::assert_eq!(expected_channels, actual_channels);
}
fn channel_tree(channels: &[(ChannelId, &[ChannelId], &'static str)]) -> Vec<Channel> {
use std::collections::HashMap;
let mut result = Vec::new();
let mut order_by_parent: HashMap<Vec<ChannelId>, i32> = HashMap::new();
for (id, parent_path, name) in channels {
let parent_key = parent_path.to_vec();
let order = *order_by_parent
.entry(parent_key.clone())
.and_modify(|e| *e += 1)
.or_insert(1);
result.push(Channel {
channels
.iter()
.map(|(id, parent_path, name)| Channel {
id: *id,
name: name.to_string(),
visibility: ChannelVisibility::Members,
parent_path: parent_key,
channel_order: order,
});
}
result
parent_path: parent_path.to_vec(),
})
.collect()
}
static GITHUB_USER_ID: AtomicI32 = AtomicI32::new(5);

View File

@@ -1,15 +1,15 @@
use crate::{
db::{
Channel, ChannelId, ChannelRole, Database, NewUserParams, RoomId, UserId,
tests::{assert_channel_tree_matches, channel_tree, new_test_connection, new_test_user},
tests::{channel_tree, new_test_connection, new_test_user},
},
test_both_dbs,
};
use rpc::{
ConnectionId,
proto::{self, reorder_channel},
proto::{self},
};
use std::{collections::HashSet, sync::Arc};
use std::sync::Arc;
test_both_dbs!(test_channels, test_channels_postgres, test_channels_sqlite);
@@ -59,28 +59,28 @@ async fn test_channels(db: &Arc<Database>) {
.unwrap();
let result = db.get_channels_for_user(a_id).await.unwrap();
assert_channel_tree_matches(
assert_eq!(
result.channels,
channel_tree(&[
(zed_id, &[], "zed"),
(crdb_id, &[zed_id], "crdb"),
(livestreaming_id, &[zed_id], "livestreaming"),
(livestreaming_id, &[zed_id], "livestreaming",),
(replace_id, &[zed_id], "replace"),
(rust_id, &[], "rust"),
(cargo_id, &[rust_id], "cargo"),
(cargo_ra_id, &[rust_id, cargo_id], "cargo-ra"),
]),
(cargo_ra_id, &[rust_id, cargo_id], "cargo-ra",)
],)
);
let result = db.get_channels_for_user(b_id).await.unwrap();
assert_channel_tree_matches(
assert_eq!(
result.channels,
channel_tree(&[
(zed_id, &[], "zed"),
(crdb_id, &[zed_id], "crdb"),
(livestreaming_id, &[zed_id], "livestreaming"),
(replace_id, &[zed_id], "replace"),
]),
(livestreaming_id, &[zed_id], "livestreaming",),
(replace_id, &[zed_id], "replace")
],)
);
// Update member permissions
@@ -94,14 +94,14 @@ async fn test_channels(db: &Arc<Database>) {
assert!(set_channel_admin.is_ok());
let result = db.get_channels_for_user(b_id).await.unwrap();
assert_channel_tree_matches(
assert_eq!(
result.channels,
channel_tree(&[
(zed_id, &[], "zed"),
(crdb_id, &[zed_id], "crdb"),
(livestreaming_id, &[zed_id], "livestreaming"),
(replace_id, &[zed_id], "replace"),
]),
(livestreaming_id, &[zed_id], "livestreaming",),
(replace_id, &[zed_id], "replace")
],)
);
// Remove a single channel
@@ -313,8 +313,8 @@ async fn test_channel_renames(db: &Arc<Database>) {
test_both_dbs!(
test_db_channel_moving,
test_db_channel_moving_postgres,
test_db_channel_moving_sqlite
test_channels_moving_postgres,
test_channels_moving_sqlite
);
async fn test_db_channel_moving(db: &Arc<Database>) {
@@ -343,14 +343,16 @@ async fn test_db_channel_moving(db: &Arc<Database>) {
.await
.unwrap();
let livestreaming_sub_id = db
.create_sub_channel("livestreaming_sub", livestreaming_id, a_id)
let livestreaming_dag_id = db
.create_sub_channel("livestreaming_dag", livestreaming_id, a_id)
.await
.unwrap();
// ========================================================================
// sanity check
// Initial DAG:
// /- gpui2
// zed -- crdb - livestreaming - livestreaming_sub
// zed -- crdb - livestreaming - livestreaming_dag
let result = db.get_channels_for_user(a_id).await.unwrap();
assert_channel_tree(
result.channels,
@@ -358,242 +360,10 @@ async fn test_db_channel_moving(db: &Arc<Database>) {
(zed_id, &[]),
(crdb_id, &[zed_id]),
(livestreaming_id, &[zed_id, crdb_id]),
(livestreaming_sub_id, &[zed_id, crdb_id, livestreaming_id]),
(livestreaming_dag_id, &[zed_id, crdb_id, livestreaming_id]),
(gpui2_id, &[zed_id]),
],
);
// Check that we can do a simple leaf -> leaf move
db.move_channel(livestreaming_sub_id, crdb_id, a_id)
.await
.unwrap();
// /- gpui2
// zed -- crdb -- livestreaming
// \- livestreaming_sub
let result = db.get_channels_for_user(a_id).await.unwrap();
assert_channel_tree(
result.channels,
&[
(zed_id, &[]),
(crdb_id, &[zed_id]),
(livestreaming_id, &[zed_id, crdb_id]),
(livestreaming_sub_id, &[zed_id, crdb_id]),
(gpui2_id, &[zed_id]),
],
);
// Check that we can move a whole subtree at once
db.move_channel(crdb_id, gpui2_id, a_id).await.unwrap();
// zed -- gpui2 -- crdb -- livestreaming
// \- livestreaming_sub
let result = db.get_channels_for_user(a_id).await.unwrap();
assert_channel_tree(
result.channels,
&[
(zed_id, &[]),
(gpui2_id, &[zed_id]),
(crdb_id, &[zed_id, gpui2_id]),
(livestreaming_id, &[zed_id, gpui2_id, crdb_id]),
(livestreaming_sub_id, &[zed_id, gpui2_id, crdb_id]),
],
);
}
test_both_dbs!(
test_channel_reordering,
test_channel_reordering_postgres,
test_channel_reordering_sqlite
);
async fn test_channel_reordering(db: &Arc<Database>) {
let admin_id = db
.create_user(
"admin@example.com",
None,
false,
NewUserParams {
github_login: "admin".into(),
github_user_id: 1,
},
)
.await
.unwrap()
.user_id;
let user_id = db
.create_user(
"user@example.com",
None,
false,
NewUserParams {
github_login: "user".into(),
github_user_id: 2,
},
)
.await
.unwrap()
.user_id;
// Create a root channel with some sub-channels
let root_id = db.create_root_channel("root", admin_id).await.unwrap();
// Invite user to root channel so they can see the sub-channels
db.invite_channel_member(root_id, user_id, admin_id, ChannelRole::Member)
.await
.unwrap();
db.respond_to_channel_invite(root_id, user_id, true)
.await
.unwrap();
let alpha_id = db
.create_sub_channel("alpha", root_id, admin_id)
.await
.unwrap();
let beta_id = db
.create_sub_channel("beta", root_id, admin_id)
.await
.unwrap();
let gamma_id = db
.create_sub_channel("gamma", root_id, admin_id)
.await
.unwrap();
// Initial order should be: root, alpha (order=1), beta (order=2), gamma (order=3)
let result = db.get_channels_for_user(admin_id).await.unwrap();
assert_channel_tree_order(
result.channels,
&[
(root_id, &[], 1),
(alpha_id, &[root_id], 1),
(beta_id, &[root_id], 2),
(gamma_id, &[root_id], 3),
],
);
// Test moving beta up (should swap with alpha)
let updated_channels = db
.reorder_channel(beta_id, reorder_channel::Direction::Up, admin_id)
.await
.unwrap();
// Verify that beta and alpha were returned as updated
assert_eq!(updated_channels.len(), 2);
let updated_ids: std::collections::HashSet<_> = updated_channels.iter().map(|c| c.id).collect();
assert!(updated_ids.contains(&alpha_id));
assert!(updated_ids.contains(&beta_id));
// Now order should be: root, beta (order=1), alpha (order=2), gamma (order=3)
let result = db.get_channels_for_user(admin_id).await.unwrap();
assert_channel_tree_order(
result.channels,
&[
(root_id, &[], 1),
(beta_id, &[root_id], 1),
(alpha_id, &[root_id], 2),
(gamma_id, &[root_id], 3),
],
);
// Test moving gamma down (should be no-op since it's already last)
let updated_channels = db
.reorder_channel(gamma_id, reorder_channel::Direction::Down, admin_id)
.await
.unwrap();
// Should return nothing
assert_eq!(updated_channels.len(), 0);
// Test moving alpha down (should swap with gamma)
let updated_channels = db
.reorder_channel(alpha_id, reorder_channel::Direction::Down, admin_id)
.await
.unwrap();
// Verify that alpha and gamma were returned as updated
assert_eq!(updated_channels.len(), 2);
let updated_ids: std::collections::HashSet<_> = updated_channels.iter().map(|c| c.id).collect();
assert!(updated_ids.contains(&alpha_id));
assert!(updated_ids.contains(&gamma_id));
// Now order should be: root, beta (order=1), gamma (order=2), alpha (order=3)
let result = db.get_channels_for_user(admin_id).await.unwrap();
assert_channel_tree_order(
result.channels,
&[
(root_id, &[], 1),
(beta_id, &[root_id], 1),
(gamma_id, &[root_id], 2),
(alpha_id, &[root_id], 3),
],
);
// Test that non-admin cannot reorder
let reorder_result = db
.reorder_channel(beta_id, reorder_channel::Direction::Up, user_id)
.await;
assert!(reorder_result.is_err());
// Test moving beta up (should be no-op since it's already first)
let updated_channels = db
.reorder_channel(beta_id, reorder_channel::Direction::Up, admin_id)
.await
.unwrap();
// Should return nothing
assert_eq!(updated_channels.len(), 0);
// Adding a channel to an existing ordering should add it to the end
let delta_id = db
.create_sub_channel("delta", root_id, admin_id)
.await
.unwrap();
let result = db.get_channels_for_user(admin_id).await.unwrap();
assert_channel_tree_order(
result.channels,
&[
(root_id, &[], 1),
(beta_id, &[root_id], 1),
(gamma_id, &[root_id], 2),
(alpha_id, &[root_id], 3),
(delta_id, &[root_id], 4),
],
);
// And moving a channel into an existing ordering should add it to the end
let eta_id = db
.create_sub_channel("eta", delta_id, admin_id)
.await
.unwrap();
let result = db.get_channels_for_user(admin_id).await.unwrap();
assert_channel_tree_order(
result.channels,
&[
(root_id, &[], 1),
(beta_id, &[root_id], 1),
(gamma_id, &[root_id], 2),
(alpha_id, &[root_id], 3),
(delta_id, &[root_id], 4),
(eta_id, &[root_id, delta_id], 1),
],
);
db.move_channel(eta_id, root_id, admin_id).await.unwrap();
let result = db.get_channels_for_user(admin_id).await.unwrap();
assert_channel_tree_order(
result.channels,
&[
(root_id, &[], 1),
(beta_id, &[root_id], 1),
(gamma_id, &[root_id], 2),
(alpha_id, &[root_id], 3),
(delta_id, &[root_id], 4),
(eta_id, &[root_id], 5),
],
);
}
test_both_dbs!(
@@ -652,20 +422,6 @@ async fn test_db_channel_moving_bugs(db: &Arc<Database>) {
(livestreaming_id, &[zed_id, projects_id]),
],
);
// Can't un-root a root channel
db.move_channel(zed_id, livestreaming_id, user_id)
.await
.unwrap_err();
let result = db.get_channels_for_user(user_id).await.unwrap();
assert_channel_tree(
result.channels,
&[
(zed_id, &[]),
(projects_id, &[zed_id]),
(livestreaming_id, &[zed_id, projects_id]),
],
);
}
test_both_dbs!(
@@ -989,29 +745,10 @@ fn assert_channel_tree(actual: Vec<Channel>, expected: &[(ChannelId, &[ChannelId
let actual = actual
.iter()
.map(|channel| (channel.id, channel.parent_path.as_slice()))
.collect::<HashSet<_>>();
let expected = expected
.iter()
.map(|(id, parents)| (*id, *parents))
.collect::<HashSet<_>>();
pretty_assertions::assert_eq!(actual, expected, "wrong channel ids and parent paths");
}
#[track_caller]
fn assert_channel_tree_order(actual: Vec<Channel>, expected: &[(ChannelId, &[ChannelId], i32)]) {
let actual = actual
.iter()
.map(|channel| {
(
channel.id,
channel.parent_path.as_slice(),
channel.channel_order,
)
})
.collect::<HashSet<_>>();
let expected = expected
.iter()
.map(|(id, parents, order)| (*id, *parents, *order))
.collect::<HashSet<_>>();
pretty_assertions::assert_eq!(actual, expected, "wrong channel ids and parent paths");
.collect::<Vec<_>>();
pretty_assertions::assert_eq!(
actual,
expected.to_vec(),
"wrong channel ids and parent paths"
);
}

View File

@@ -384,7 +384,6 @@ impl Server {
.add_request_handler(get_notifications)
.add_request_handler(mark_notification_as_read)
.add_request_handler(move_channel)
.add_request_handler(reorder_channel)
.add_request_handler(follow)
.add_message_handler(unfollow)
.add_message_handler(update_followers)
@@ -434,6 +433,16 @@ impl Server {
tracing::info!("waiting for cleanup timeout");
timeout.await;
tracing::info!("cleanup timeout expired, retrieving stale rooms");
app_state
.db
.delete_stale_channel_chat_participants(
&app_state.config.zed_environment,
server_id,
)
.await
.trace_err();
if let Some((room_ids, channel_ids)) = app_state
.db
.stale_server_resource_ids(&app_state.config.zed_environment, server_id)
@@ -555,6 +564,21 @@ impl Server {
}
}
app_state
.db
.delete_stale_channel_chat_participants(
&app_state.config.zed_environment,
server_id,
)
.await
.trace_err();
app_state
.db
.clear_old_worktree_entries(server_id)
.await
.trace_err();
app_state
.db
.delete_stale_servers(&app_state.config.zed_environment, server_id)
@@ -3196,51 +3220,6 @@ async fn move_channel(
Ok(())
}
async fn reorder_channel(
request: proto::ReorderChannel,
response: Response<proto::ReorderChannel>,
session: Session,
) -> Result<()> {
let channel_id = ChannelId::from_proto(request.channel_id);
let direction = request.direction();
let updated_channels = session
.db()
.await
.reorder_channel(channel_id, direction, session.user_id())
.await?;
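// Fan the swapped pair out to every connection that can see the root channel,
// dropping any channel the recipient's role is not permitted to see.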
if let Some(root_id) = updated_channels.first().map(|channel| channel.root_id()) {
let connection_pool = session.connection_pool().await;
for (connection_id, role) in connection_pool.channel_connection_ids(root_id) {
let channels = updated_channels
.iter()
.filter_map(|channel| {
if role.can_see_channel(channel.visibility) {
Some(channel.to_proto())
} else {
None
}
})
.collect::<Vec<_>>();
if channels.is_empty() {
continue;
}
let update = proto::UpdateChannels {
channels,
..Default::default()
};
session.peer.send(connection_id, update.clone())?;
}
}
response.send(Ack {})?;
Ok(())
}
/// Get the list of channel members
async fn get_channel_members(
request: proto::GetChannelMembers,

View File

@@ -3,7 +3,7 @@ use std::sync::Arc;
use anyhow::{Context as _, Result, anyhow};
use async_trait::async_trait;
use serde::Serialize;
use serde::{Deserialize, Serialize};
use stripe::{
CancellationDetails, CancellationDetailsReason, CheckoutSession, CheckoutSessionMode,
CheckoutSessionPaymentMethodCollection, CreateCheckoutSession, CreateCheckoutSessionLineItems,
@@ -213,9 +213,18 @@ impl StripeClient for RealStripeClient {
}
async fn create_meter_event(&self, params: StripeCreateMeterEventParams<'_>) -> Result<()> {
#[derive(Deserialize)]
struct StripeMeterEvent {
pub identifier: String,
}
let identifier = params.identifier;
match self.client.post_form("/billing/meter_events", params).await {
Ok(event) => Ok(event),
match self
.client
.post_form::<StripeMeterEvent, _>("/billing/meter_events", params)
.await
{
Ok(_event) => Ok(()),
Err(stripe::StripeError::Stripe(error)) => {
if error.http_status == 400
&& error
@@ -228,7 +237,7 @@ impl StripeClient for RealStripeClient {
Err(anyhow!(stripe::StripeError::Stripe(error)))
}
}
Err(error) => Err(anyhow!(error)),
Err(error) => Err(anyhow!("failed to create meter event: {error:?}")),
}
}

View File

@@ -1010,7 +1010,6 @@ async fn test_peers_following_each_other(cx_a: &mut TestAppContext, cx_b: &mut T
workspace_b.update_in(cx_b, |workspace, window, cx| {
workspace.active_pane().update(cx, |pane, cx| {
pane.close_inactive_items(&Default::default(), window, cx)
.unwrap()
.detach();
});
});

View File

@@ -354,6 +354,10 @@ impl ChannelView {
editor.set_read_only(true);
cx.notify();
}),
ChannelBufferEvent::Connected => self.editor.update(cx, |editor, cx| {
editor.set_read_only(false);
cx.notify();
}),
ChannelBufferEvent::ChannelChanged => {
self.editor.update(cx, |_, cx| {
cx.emit(editor::EditorEvent::TitleChanged);

View File

@@ -1,455 +0,0 @@
//! Link folding support for channel notes.
//!
//! This module provides functionality to automatically fold markdown links in channel notes,
//! displaying only the link text with an underline decoration. When the cursor is positioned
//! inside a link, the crease is temporarily removed to show the full markdown syntax.
//!
//! Example:
//! - When rendered: [Example](https://example.com) becomes "Example" with underline
//! - When cursor is inside: Full markdown syntax is shown
//!
//! The main components are:
//! - `parse_markdown_links`: Extracts markdown links from text
//! - `create_link_creases`: Creates visual creases for links
//! - `LinkFoldingManager`: Manages dynamic showing/hiding of link creases based on cursor position
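//!
//! A sketch of the intended flow (hypothetical call sites, mirroring
//! `LinkFoldingManager::refresh_links` below):
//!     let links = parse_markdown_links(&buffer_text);
//!     let creases = create_link_creases(&links, &buffer_snapshot);
//!     editor.insert_creases(creases.clone(), cx);
//!     editor.fold_creases(creases, true, window, cx);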
use editor::{
Anchor, Editor, FoldPlaceholder,
display_map::{Crease, CreaseId},
};
use gpui::{App, Entity, Window};
use std::{collections::HashMap, ops::Range, sync::Arc};
use ui::prelude::*;
#[derive(Debug, Clone, PartialEq)]
pub struct MarkdownLink {
pub text: String,
pub url: String,
pub range: Range<usize>,
}
pub fn parse_markdown_links(text: &str) -> Vec<MarkdownLink> {
let mut links = Vec::new();
let mut chars = text.char_indices();
while let Some((start, ch)) = chars.next() {
if ch == '[' {
// Look for the closing bracket
let mut bracket_depth = 1;
let text_start = start + 1;
let mut text_end = None;
for (i, ch) in chars.by_ref() {
match ch {
'[' => bracket_depth += 1,
']' => {
bracket_depth -= 1;
if bracket_depth == 0 {
text_end = Some(i);
break;
}
}
_ => {}
}
}
if let Some(text_end) = text_end {
// Check if the next character is '('
if let Some((_, '(')) = chars.next() {
// Look for the closing parenthesis
let url_start = text_end + 2;
let mut url_end = None;
for (i, ch) in chars.by_ref() {
if ch == ')' {
url_end = Some(i);
break;
}
}
if let Some(url_end) = url_end {
links.push(MarkdownLink {
text: text[text_start..text_end].to_string(),
url: text[url_start..url_end].to_string(),
range: start..url_end + 1,
});
}
}
}
}
}
links
}
pub fn create_link_creases(
links: &[MarkdownLink],
buffer_snapshot: &editor::MultiBufferSnapshot,
) -> Vec<Crease<Anchor>> {
links
.iter()
.map(|link| {
// Convert byte offsets to Points first
let start_point = buffer_snapshot.offset_to_point(link.range.start);
let end_point = buffer_snapshot.offset_to_point(link.range.end);
// Create anchors from points
let start = buffer_snapshot.anchor_before(start_point);
let end = buffer_snapshot.anchor_after(end_point);
let link_text = link.text.clone();
Crease::simple(
start..end,
FoldPlaceholder {
render: Arc::new(move |_fold_id, _range, cx| {
div()
.child(link_text.clone())
.text_decoration_1()
.text_decoration_solid()
.text_color(cx.theme().colors().link_text_hover)
.into_any_element()
}),
constrain_width: false,
merge_adjacent: false,
type_tag: None,
},
)
})
.collect()
}
pub struct LinkFoldingManager {
editor: Entity<Editor>,
folded_links: Vec<MarkdownLink>,
link_creases: HashMap<CreaseId, Range<usize>>,
}
impl LinkFoldingManager {
pub fn new(editor: Entity<Editor>, window: &mut Window, cx: &mut App) -> Self {
let mut manager = Self {
editor: editor.clone(),
folded_links: Vec::new(),
link_creases: HashMap::default(),
};
manager.refresh_links(window, cx);
manager
}
pub fn handle_selections_changed(&mut self, window: &mut Window, cx: &mut App) {
self.update_link_visibility(window, cx);
}
pub fn handle_edited(&mut self, window: &mut Window, cx: &mut App) {
// Re-parse links and update folds
// Note: In a production implementation, this could be debounced
// to avoid updating on every keystroke
self.refresh_links(window, cx);
}
fn refresh_links(&mut self, window: &mut Window, cx: &mut App) {
// Remove existing creases
if !self.link_creases.is_empty() {
self.editor.update(cx, |editor, cx| {
let crease_ids: Vec<_> = self.link_creases.keys().copied().collect();
editor.remove_creases(crease_ids, cx);
});
self.link_creases.clear();
}
// Parse new links
let buffer_text = self.editor.read(cx).buffer().read(cx).snapshot(cx).text();
let links = parse_markdown_links(&buffer_text);
self.folded_links = links;
// Insert creases for all links
if !self.folded_links.is_empty() {
self.editor.update(cx, |editor, cx| {
let buffer = editor.buffer().read(cx).snapshot(cx);
let creases = create_link_creases(&self.folded_links, &buffer);
let crease_ids = editor.insert_creases(creases.clone(), cx);
// Store the mapping of crease IDs to link ranges
for (crease_id, link) in crease_ids.into_iter().zip(self.folded_links.iter()) {
self.link_creases.insert(crease_id, link.range.clone());
}
// Fold the creases to activate the custom placeholder
editor.fold_creases(creases, true, window, cx);
});
}
// Update visibility based on cursor position
self.update_link_visibility(window, cx);
}
fn update_link_visibility(&mut self, window: &mut Window, cx: &mut App) {
if self.folded_links.is_empty() {
return;
}
self.editor.update(cx, |editor, cx| {
let buffer = editor.buffer().read(cx).snapshot(cx);
let selections = editor.selections.all::<usize>(cx);
// Find which links should be visible (cursor inside or adjacent)
// Remove creases where cursor is inside or adjacent to the link
let mut creases_to_remove = Vec::new();
for (crease_id, link_range) in &self.link_creases {
let cursor_near_link = selections.iter().any(|selection| {
let cursor_offset = selection.head();
// Check if cursor is inside the link
if link_range.contains(&cursor_offset) {
return true;
}
// Check if cursor is adjacent (immediately before or after)
cursor_offset == link_range.start || cursor_offset == link_range.end
});
if cursor_near_link {
creases_to_remove.push(*crease_id);
}
}
if !creases_to_remove.is_empty() {
editor.remove_creases(creases_to_remove.clone(), cx);
for crease_id in creases_to_remove {
self.link_creases.remove(&crease_id);
}
}
// Re-add creases for links where cursor is not present or adjacent
let links_to_recreate: Vec<_> = self
.folded_links
.iter()
.filter(|link| {
!selections.iter().any(|selection| {
let cursor_offset = selection.head();
// Check if cursor is inside or adjacent to the link
link.range.contains(&cursor_offset)
|| cursor_offset == link.range.start
|| cursor_offset == link.range.end
})
})
.filter(|link| {
// Only recreate if not already present
!self.link_creases.values().any(|range| range == &link.range)
})
.cloned()
.collect();
if !links_to_recreate.is_empty() {
let creases = create_link_creases(&links_to_recreate, &buffer);
let crease_ids = editor.insert_creases(creases.clone(), cx);
for (crease_id, link) in crease_ids.into_iter().zip(links_to_recreate.iter()) {
self.link_creases.insert(crease_id, link.range.clone());
}
// Fold the creases to activate the custom placeholder
editor.fold_creases(creases, true, window, cx);
}
});
}
/// Checks if a position (in buffer coordinates) is within a folded link
/// and returns the link if found
pub fn link_at_position(&self, position: usize) -> Option<&MarkdownLink> {
// Check if the position falls within any folded link that has an active crease
self.folded_links.iter().find(|link| {
link.range.contains(&position)
&& self.link_creases.values().any(|range| range == &link.range)
})
}
/// Opens a link URL in the default browser
pub fn open_link(&self, url: &str, cx: &mut App) {
cx.open_url(url);
}
}
#[cfg(test)]
mod tests {
use super::*;
#[gpui::test]
fn test_parse_markdown_links() {
let text = "This is a [link](https://example.com) and another [test](https://test.com).";
let links = parse_markdown_links(text);
assert_eq!(links.len(), 2);
assert_eq!(links[0].text, "link");
assert_eq!(links[0].url, "https://example.com");
assert_eq!(links[0].range, 10..37);
assert_eq!(links[1].text, "test");
assert_eq!(links[1].url, "https://test.com");
assert_eq!(links[1].range, 50..74);
}
#[gpui::test]
fn test_parse_nested_brackets() {
let text = "A [[nested] link](https://example.com) here.";
let links = parse_markdown_links(text);
assert_eq!(links.len(), 1);
assert_eq!(links[0].text, "[nested] link");
assert_eq!(links[0].url, "https://example.com");
}
#[gpui::test]
fn test_parse_no_links() {
let text = "This text has no links.";
let links = parse_markdown_links(text);
assert_eq!(links.len(), 0);
}
#[gpui::test]
fn test_parse_incomplete_links() {
let text = "This [link has no url] and [this](incomplete";
let links = parse_markdown_links(text);
assert_eq!(links.len(), 0);
}
#[gpui::test]
fn test_link_parsing_and_folding() {
// Test comprehensive link parsing
let text = "Check out [Zed](https://zed.dev) and [GitHub](https://github.com)!";
let links = parse_markdown_links(text);
assert_eq!(links.len(), 2);
assert_eq!(links[0].text, "Zed");
assert_eq!(links[0].url, "https://zed.dev");
assert_eq!(links[0].range, 10..32);
assert_eq!(links[1].text, "GitHub");
assert_eq!(links[1].url, "https://github.com");
assert_eq!(links[1].range, 37..65);
}
#[gpui::test]
fn test_link_detection_when_typing() {
// Test that links are detected as they're typed
let text1 = "Check out ";
let links1 = parse_markdown_links(text1);
assert_eq!(links1.len(), 0, "No links in plain text");
let text2 = "Check out [";
let links2 = parse_markdown_links(text2);
assert_eq!(links2.len(), 0, "Incomplete link not detected");
let text3 = "Check out [Zed]";
let links3 = parse_markdown_links(text3);
assert_eq!(links3.len(), 0, "Link without URL not detected");
let text4 = "Check out [Zed](";
let links4 = parse_markdown_links(text4);
assert_eq!(links4.len(), 0, "Link with incomplete URL not detected");
let text5 = "Check out [Zed](https://zed.dev)";
let links5 = parse_markdown_links(text5);
assert_eq!(links5.len(), 1, "Complete link should be detected");
assert_eq!(links5[0].text, "Zed");
assert_eq!(links5[0].url, "https://zed.dev");
// Test link detection in middle of text
let text6 = "Check out [Zed](https://zed.dev) for coding!";
let links6 = parse_markdown_links(text6);
assert_eq!(links6.len(), 1, "Link in middle of text should be detected");
assert_eq!(links6[0].range, 10..32);
}
#[gpui::test]
fn test_link_position_detection() {
// Test the logic for determining if a position is within a link
let links = vec![
MarkdownLink {
text: "Zed".to_string(),
url: "https://zed.dev".to_string(),
range: 10..32,
},
MarkdownLink {
text: "GitHub".to_string(),
url: "https://github.com".to_string(),
range: 50..78,
},
];
// Test positions inside the first link
assert!(links[0].range.contains(&10), "Start of first link");
assert!(links[0].range.contains(&20), "Middle of first link");
assert!(links[0].range.contains(&31), "Near end of first link");
// Test positions inside the second link
assert!(links[1].range.contains(&50), "Start of second link");
assert!(links[1].range.contains(&65), "Middle of second link");
assert!(links[1].range.contains(&77), "Near end of second link");
// Test positions outside any link
assert!(!links[0].range.contains(&9), "Before first link");
assert!(!links[0].range.contains(&32), "After first link");
assert!(!links[1].range.contains(&49), "Before second link");
assert!(!links[1].range.contains(&78), "After second link");
// Test finding a link at a specific position
let link_at_20 = links.iter().find(|link| link.range.contains(&20));
assert!(link_at_20.is_some());
assert_eq!(link_at_20.unwrap().text, "Zed");
assert_eq!(link_at_20.unwrap().url, "https://zed.dev");
let link_at_65 = links.iter().find(|link| link.range.contains(&65));
assert!(link_at_65.is_some());
assert_eq!(link_at_65.unwrap().text, "GitHub");
assert_eq!(link_at_65.unwrap().url, "https://github.com");
}
#[gpui::test]
fn test_cursor_adjacent_link_expansion() {
// Test the logic for determining if cursor is inside or adjacent to a link
let link = MarkdownLink {
text: "Example".to_string(),
url: "https://example.com".to_string(),
range: 10..37,
};
// Helper function to check if cursor should expand the link
let should_expand_link = |cursor_pos: usize, link: &MarkdownLink| -> bool {
// Link should be expanded if cursor is inside or adjacent
link.range.contains(&cursor_pos)
|| cursor_pos == link.range.start
|| cursor_pos == link.range.end
};
// Test cursor positions
assert!(
should_expand_link(10, &link),
"Cursor at position 10 (link start) should expand link"
);
assert!(
should_expand_link(37, &link),
"Cursor at position 37 (link end) should expand link"
);
assert!(
should_expand_link(20, &link),
"Cursor at position 20 (inside link) should expand link"
);
assert!(
!should_expand_link(9, &link),
"Cursor at position 9 (before link) should not expand link"
);
assert!(
!should_expand_link(38, &link),
"Cursor at position 38 (after link) should not expand link"
);
// Test the edge cases
assert_eq!(link.range.start, 10, "Link starts at position 10");
assert_eq!(link.range.end, 37, "Link ends at position 37");
assert!(link.range.contains(&10), "Range includes start position");
assert!(!link.range.contains(&37), "Range excludes end position");
}
}

View File

@@ -12,7 +12,7 @@ use language::{
Anchor, Buffer, BufferSnapshot, CodeLabel, LanguageRegistry, ToOffset,
language_settings::SoftWrap,
};
use project::{Completion, CompletionSource, search::SearchQuery};
use project::{Completion, CompletionResponse, CompletionSource, search::SearchQuery};
use settings::Settings;
use std::{
cell::RefCell,
@@ -64,9 +64,9 @@ impl CompletionProvider for MessageEditorCompletionProvider {
_: editor::CompletionContext,
_window: &mut Window,
cx: &mut Context<Editor>,
) -> Task<Result<Option<Vec<Completion>>>> {
) -> Task<Result<Vec<CompletionResponse>>> {
let Some(handle) = self.0.upgrade() else {
return Task::ready(Ok(None));
return Task::ready(Ok(Vec::new()));
};
handle.update(cx, |message_editor, cx| {
message_editor.completions(buffer, buffer_position, cx)
@@ -89,6 +89,7 @@ impl CompletionProvider for MessageEditorCompletionProvider {
_position: language::Anchor,
text: &str,
_trigger_in_words: bool,
_menu_is_open: bool,
_cx: &mut Context<Editor>,
) -> bool {
text == "@"
@@ -248,22 +249,21 @@ impl MessageEditor {
buffer: &Entity<Buffer>,
end_anchor: Anchor,
cx: &mut Context<Self>,
) -> Task<Result<Option<Vec<Completion>>>> {
) -> Task<Result<Vec<CompletionResponse>>> {
if let Some((start_anchor, query, candidates)) =
self.collect_mention_candidates(buffer, end_anchor, cx)
{
if !candidates.is_empty() {
return cx.spawn(async move |_, cx| {
Ok(Some(
Self::resolve_completions_for_candidates(
&cx,
query.as_str(),
&candidates,
start_anchor..end_anchor,
Self::completion_for_mention,
)
.await,
))
let completion_response = Self::resolve_completions_for_candidates(
&cx,
query.as_str(),
&candidates,
start_anchor..end_anchor,
Self::completion_for_mention,
)
.await;
Ok(vec![completion_response])
});
}
}
@@ -273,21 +273,23 @@ impl MessageEditor {
{
if !candidates.is_empty() {
return cx.spawn(async move |_, cx| {
Ok(Some(
Self::resolve_completions_for_candidates(
&cx,
query.as_str(),
candidates,
start_anchor..end_anchor,
Self::completion_for_emoji,
)
.await,
))
let completion_response = Self::resolve_completions_for_candidates(
&cx,
query.as_str(),
candidates,
start_anchor..end_anchor,
Self::completion_for_emoji,
)
.await;
Ok(vec![completion_response])
});
}
}
Task::ready(Ok(Some(Vec::new())))
Task::ready(Ok(vec![CompletionResponse {
completions: Vec::new(),
is_incomplete: false,
}]))
}
async fn resolve_completions_for_candidates(
@@ -296,18 +298,19 @@ impl MessageEditor {
candidates: &[StringMatchCandidate],
range: Range<Anchor>,
completion_fn: impl Fn(&StringMatch) -> (String, CodeLabel),
) -> Vec<Completion> {
) -> CompletionResponse {
const LIMIT: usize = 10;
let matches = fuzzy::match_strings(
candidates,
query,
true,
10,
LIMIT,
&Default::default(),
cx.background_executor().clone(),
)
.await;
matches
let completions = matches
.into_iter()
.map(|mat| {
let (new_text, label) = completion_fn(&mat);
@@ -322,7 +325,12 @@ impl MessageEditor {
source: CompletionSource::Custom,
}
})
.collect()
.collect::<Vec<_>>();
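// Mark the response incomplete whenever the match list was truncated at LIMIT, so the
// editor knows to re-query (rather than merely re-filter) as the user keeps typing.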
CompletionResponse {
is_incomplete: completions.len() >= LIMIT,
completions,
}
}
fn completion_for_mention(mat: &StringMatch) -> (String, CodeLabel) {

View File

@@ -14,9 +14,9 @@ use fuzzy::{StringMatchCandidate, match_strings};
use gpui::{
AnyElement, App, AsyncWindowContext, Bounds, ClickEvent, ClipboardItem, Context, DismissEvent,
Div, Entity, EventEmitter, FocusHandle, Focusable, FontStyle, InteractiveElement, IntoElement,
KeyContext, ListOffset, ListState, MouseDownEvent, ParentElement, Pixels, Point, PromptLevel,
Render, SharedString, Styled, Subscription, Task, TextStyle, WeakEntity, Window, actions,
anchored, canvas, deferred, div, fill, list, point, prelude::*, px,
ListOffset, ListState, MouseDownEvent, ParentElement, Pixels, Point, PromptLevel, Render,
SharedString, Styled, Subscription, Task, TextStyle, WeakEntity, Window, actions, anchored,
canvas, deferred, div, fill, list, point, prelude::*, px,
};
use menu::{Cancel, Confirm, SecondaryConfirm, SelectNext, SelectPrevious};
use project::{Fs, Project};
@@ -52,8 +52,6 @@ actions!(
StartMoveChannel,
MoveSelected,
InsertSpace,
MoveChannelUp,
MoveChannelDown,
]
);
@@ -1963,33 +1961,6 @@ impl CollabPanel {
})
}
fn move_channel_up(&mut self, _: &MoveChannelUp, window: &mut Window, cx: &mut Context<Self>) {
if let Some(channel) = self.selected_channel() {
self.channel_store.update(cx, |store, cx| {
store
.reorder_channel(channel.id, proto::reorder_channel::Direction::Up, cx)
.detach_and_prompt_err("Failed to move channel up", window, cx, |_, _, _| None)
});
}
}
fn move_channel_down(
&mut self,
_: &MoveChannelDown,
window: &mut Window,
cx: &mut Context<Self>,
) {
if let Some(channel) = self.selected_channel() {
self.channel_store.update(cx, |store, cx| {
store
.reorder_channel(channel.id, proto::reorder_channel::Direction::Down, cx)
.detach_and_prompt_err("Failed to move channel down", window, cx, |_, _, _| {
None
})
});
}
}
fn open_channel_notes(
&mut self,
channel_id: ChannelId,
@@ -2003,7 +1974,7 @@ impl CollabPanel {
fn show_inline_context_menu(
&mut self,
_: &Secondary,
_: &menu::SecondaryConfirm,
window: &mut Window,
cx: &mut Context<Self>,
) {
@@ -2032,21 +2003,6 @@ impl CollabPanel {
}
}
fn dispatch_context(&self, window: &Window, cx: &Context<Self>) -> KeyContext {
let mut dispatch_context = KeyContext::new_with_defaults();
dispatch_context.add("CollabPanel");
dispatch_context.add("menu");
let identifier = if self.channel_name_editor.focus_handle(cx).is_focused(window) {
"editing"
} else {
"not_editing"
};
dispatch_context.add(identifier);
dispatch_context
}
fn selected_channel(&self) -> Option<&Arc<Channel>> {
self.selection
.and_then(|ix| self.entries.get(ix))
@@ -3009,7 +2965,7 @@ fn render_tree_branch(
impl Render for CollabPanel {
fn render(&mut self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
v_flex()
.key_context(self.dispatch_context(window, cx))
.key_context("CollabPanel")
.on_action(cx.listener(CollabPanel::cancel))
.on_action(cx.listener(CollabPanel::select_next))
.on_action(cx.listener(CollabPanel::select_previous))
@@ -3021,8 +2977,6 @@ impl Render for CollabPanel {
.on_action(cx.listener(CollabPanel::collapse_selected_channel))
.on_action(cx.listener(CollabPanel::expand_selected_channel))
.on_action(cx.listener(CollabPanel::start_move_selected_channel))
.on_action(cx.listener(CollabPanel::move_channel_up))
.on_action(cx.listener(CollabPanel::move_channel_down))
.track_focus(&self.focus_handle(cx))
.size_full()
.child(if self.user_store.read(cx).current_user().is_none() {

View File

@@ -581,7 +581,7 @@ async fn stream_completion(
api_key: String,
request: Request,
) -> Result<BoxStream<'static, Result<ResponseEvent>>> {
let is_vision_request = request.messages.iter().any(|message| match message {
let is_vision_request = request.messages.last().map_or(false, |message| match message {
ChatMessage::User { content }
| ChatMessage::Assistant { content, .. }
| ChatMessage::Tool { content, .. } => {
@@ -736,116 +736,4 @@ mod tests {
assert_eq!(schema.data[0].id, "gpt-4");
assert_eq!(schema.data[1].id, "claude-3.7-sonnet");
}
#[test]
fn test_vision_request_detection() {
fn message_contains_image(message: &ChatMessage) -> bool {
match message {
ChatMessage::User { content }
| ChatMessage::Assistant { content, .. }
| ChatMessage::Tool { content, .. } => {
matches!(content, ChatMessageContent::Multipart(parts) if
parts.iter().any(|part| matches!(part, ChatMessagePart::Image { .. })))
}
_ => false,
}
}
// Helper function to detect if a request is a vision request
fn is_vision_request(request: &Request) -> bool {
request.messages.iter().any(message_contains_image)
}
let request_with_image_in_last = Request {
intent: true,
n: 1,
stream: true,
temperature: 0.1,
model: "claude-3.7-sonnet".to_string(),
messages: vec![
ChatMessage::User {
content: ChatMessageContent::Plain("Hello".to_string()),
},
ChatMessage::Assistant {
content: ChatMessageContent::Plain("How can I help?".to_string()),
tool_calls: vec![],
},
ChatMessage::User {
content: ChatMessageContent::Multipart(vec![
ChatMessagePart::Text {
text: "What's in this image?".to_string(),
},
ChatMessagePart::Image {
image_url: ImageUrl {
url: "data:image/png;base64,abc123".to_string(),
},
},
]),
},
],
tools: vec![],
tool_choice: None,
};
let request_with_image_in_earlier = Request {
intent: true,
n: 1,
stream: true,
temperature: 0.1,
model: "claude-3.7-sonnet".to_string(),
messages: vec![
ChatMessage::User {
content: ChatMessageContent::Plain("Hello".to_string()),
},
ChatMessage::User {
content: ChatMessageContent::Multipart(vec![
ChatMessagePart::Text {
text: "What's in this image?".to_string(),
},
ChatMessagePart::Image {
image_url: ImageUrl {
url: "data:image/png;base64,abc123".to_string(),
},
},
]),
},
ChatMessage::Assistant {
content: ChatMessageContent::Plain("I see a cat in the image.".to_string()),
tool_calls: vec![],
},
ChatMessage::User {
content: ChatMessageContent::Plain("What color is it?".to_string()),
},
],
tools: vec![],
tool_choice: None,
};
let request_with_no_images = Request {
intent: true,
n: 1,
stream: true,
temperature: 0.1,
model: "claude-3.7-sonnet".to_string(),
messages: vec![
ChatMessage::User {
content: ChatMessageContent::Plain("Hello".to_string()),
},
ChatMessage::Assistant {
content: ChatMessageContent::Plain("How can I help?".to_string()),
tool_calls: vec![],
},
ChatMessage::User {
content: ChatMessageContent::Plain("Tell me about Rust.".to_string()),
},
],
tools: vec![],
tool_choice: None,
};
assert!(is_vision_request(&request_with_image_in_last));
assert!(is_vision_request(&request_with_image_in_earlier));
assert!(!is_vision_request(&request_with_no_images));
}
}

View File

@@ -333,24 +333,6 @@ pub async fn download_adapter_from_github(
Ok(version_path)
}
pub async fn fetch_latest_adapter_version_from_github(
github_repo: GithubRepo,
delegate: &dyn DapDelegate,
) -> Result<AdapterVersion> {
let release = latest_github_release(
&format!("{}/{}", github_repo.repo_owner, github_repo.repo_name),
false,
false,
delegate.http_client(),
)
.await?;
Ok(AdapterVersion {
tag_name: release.tag_name,
url: release.zipball_url,
})
}
#[async_trait(?Send)]
pub trait DebugAdapter: 'static + Send + Sync {
fn name(&self) -> DebugAdapterName;
@@ -370,21 +352,19 @@ pub trait DebugAdapter: 'static + Send + Sync {
None
}
fn validate_config(
/// Extracts the kind (attach/launch) of debug configuration from the given JSON config.
/// This method should only return error when the kind cannot be determined for a given configuration;
/// in particular, it *should not* validate whether the request as a whole is valid, because that's best left to the debug adapter itself to decide.
fn request_kind(
&self,
config: &serde_json::Value,
) -> Result<StartDebuggingRequestArgumentsRequest> {
let map = config.as_object().context("Config isn't an object")?;
let request_variant = map
.get("request")
.and_then(|val| val.as_str())
.context("request argument is not found or invalid")?;
match request_variant {
"launch" => Ok(StartDebuggingRequestArgumentsRequest::Launch),
"attach" => Ok(StartDebuggingRequestArgumentsRequest::Attach),
_ => Err(anyhow!("request must be either 'launch' or 'attach'")),
match config.get("request") {
Some(val) if val == "launch" => Ok(StartDebuggingRequestArgumentsRequest::Launch),
Some(val) if val == "attach" => Ok(StartDebuggingRequestArgumentsRequest::Attach),
_ => Err(anyhow!(
"missing or invalid `request` field in config. Expected 'launch' or 'attach'"
)),
}
}
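// For example, a config of `{"request": "launch", "program": "main.py"}` (hypothetical)
// yields `StartDebuggingRequestArgumentsRequest::Launch`; a missing `request` field or
// any other value falls through to the error above.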
@@ -414,7 +394,7 @@ impl DebugAdapter for FakeAdapter {
serde_json::Value::Null
}
fn validate_config(
fn request_kind(
&self,
config: &serde_json::Value,
) -> Result<StartDebuggingRequestArgumentsRequest> {
@@ -459,7 +439,7 @@ impl DebugAdapter for FakeAdapter {
envs: HashMap::default(),
cwd: None,
request_args: StartDebuggingRequestArguments {
request: self.validate_config(&task_definition.config)?,
request: self.request_kind(&task_definition.config)?,
configuration: task_definition.config.clone(),
},
})

View File

@@ -52,7 +52,7 @@ pub fn send_telemetry(scenario: &DebugScenario, location: TelemetrySpawnLocation
return;
};
let kind = adapter
.validate_config(&scenario.config)
.request_kind(&scenario.config)
.ok()
.map(serde_json::to_value)
.and_then(Result::ok);

View File

@@ -4,7 +4,7 @@ use dap_types::{
messages::{Message, Response},
};
use futures::{AsyncRead, AsyncReadExt as _, AsyncWrite, FutureExt as _, channel::oneshot, select};
use gpui::AsyncApp;
use gpui::{AppContext as _, AsyncApp, Task};
use settings::Settings as _;
use smallvec::SmallVec;
use smol::{
@@ -22,7 +22,7 @@ use std::{
time::Duration,
};
use task::TcpArgumentsTemplate;
use util::{ResultExt as _, TryFutureExt};
use util::{ConnectionResult, ResultExt as _};
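// `ConnectionResult` (imported from `util`) distinguishes a clean end-of-stream
// (`ConnectionReset`) and a timeout from an ordinary `Result`, letting the reader
// tasks below log closed connections as info rather than as errors.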
use crate::{adapters::DebugAdapterBinary, debugger_settings::DebuggerSettings};
@@ -126,7 +126,7 @@ pub(crate) struct TransportDelegate {
pending_requests: Requests,
transport: Transport,
server_tx: Arc<Mutex<Option<Sender<Message>>>>,
_tasks: Vec<gpui::Task<Option<()>>>,
_tasks: Vec<Task<()>>,
}
impl TransportDelegate {
@@ -141,7 +141,7 @@ impl TransportDelegate {
log_handlers: Default::default(),
current_requests: Default::default(),
pending_requests: Default::default(),
_tasks: Default::default(),
_tasks: Vec::new(),
};
let messages = this.start_handlers(transport_pipes, cx).await?;
Ok((messages, this))
@@ -166,45 +166,76 @@ impl TransportDelegate {
None
};
let adapter_log_handler = log_handler.clone();
cx.update(|cx| {
if let Some(stdout) = params.stdout.take() {
self._tasks.push(
cx.background_executor()
.spawn(Self::handle_adapter_log(stdout, log_handler.clone()).log_err()),
);
self._tasks.push(cx.background_spawn(async move {
match Self::handle_adapter_log(stdout, adapter_log_handler).await {
ConnectionResult::Timeout => {
log::error!("Timed out when handling debugger log");
}
ConnectionResult::ConnectionReset => {
log::info!("Debugger logs connection closed");
}
ConnectionResult::Result(Ok(())) => {}
ConnectionResult::Result(Err(e)) => {
log::error!("Error handling debugger log: {e}");
}
}
}));
}
self._tasks.push(
cx.background_executor().spawn(
Self::handle_output(
params.output,
client_tx,
self.pending_requests.clone(),
log_handler.clone(),
)
.log_err(),
),
);
let pending_requests = self.pending_requests.clone();
let output_log_handler = log_handler.clone();
self._tasks.push(cx.background_spawn(async move {
match Self::handle_output(
params.output,
client_tx,
pending_requests,
output_log_handler,
)
.await
{
Ok(()) => {}
Err(e) => log::error!("Error handling debugger output: {e}"),
}
}));
if let Some(stderr) = params.stderr.take() {
self._tasks.push(
cx.background_executor()
.spawn(Self::handle_error(stderr, self.log_handlers.clone()).log_err()),
);
let log_handlers = self.log_handlers.clone();
self._tasks.push(cx.background_spawn(async move {
match Self::handle_error(stderr, log_handlers).await {
ConnectionResult::Timeout => {
log::error!("Timed out reading debugger error stream")
}
ConnectionResult::ConnectionReset => {
log::info!("Debugger closed its error stream")
}
ConnectionResult::Result(Ok(())) => {}
ConnectionResult::Result(Err(e)) => {
log::error!("Error handling debugger error: {e}")
}
}
}));
}
self._tasks.push(
cx.background_executor().spawn(
Self::handle_input(
params.input,
client_rx,
self.current_requests.clone(),
self.pending_requests.clone(),
log_handler.clone(),
)
.log_err(),
),
);
let current_requests = self.current_requests.clone();
let pending_requests = self.pending_requests.clone();
let log_handler = log_handler.clone();
self._tasks.push(cx.background_spawn(async move {
match Self::handle_input(
params.input,
client_rx,
current_requests,
pending_requests,
log_handler,
)
.await
{
Ok(()) => {}
Err(e) => log::error!("Error handling debugger input: {e}"),
}
}));
})?;
{
@@ -235,7 +266,7 @@ impl TransportDelegate {
async fn handle_adapter_log<Stdout>(
stdout: Stdout,
log_handlers: Option<LogHandlers>,
) -> Result<()>
) -> ConnectionResult<()>
where
Stdout: AsyncRead + Unpin + Send + 'static,
{
@@ -245,13 +276,14 @@ impl TransportDelegate {
let result = loop {
line.truncate(0);
let bytes_read = match reader.read_line(&mut line).await {
Ok(bytes_read) => bytes_read,
Err(e) => break Err(e.into()),
};
if bytes_read == 0 {
anyhow::bail!("Debugger log stream closed");
match reader
.read_line(&mut line)
.await
.context("reading adapter log line")
{
Ok(0) => break ConnectionResult::ConnectionReset,
Ok(_) => {}
Err(e) => break ConnectionResult::Result(Err(e)),
}
if let Some(log_handlers) = log_handlers.as_ref() {
@@ -337,35 +369,35 @@ impl TransportDelegate {
let mut reader = BufReader::new(server_stdout);
let result = loop {
let message =
Self::receive_server_message(&mut reader, &mut recv_buffer, log_handlers.as_ref())
.await;
match message {
Ok(Message::Response(res)) => {
match Self::receive_server_message(&mut reader, &mut recv_buffer, log_handlers.as_ref())
.await
{
ConnectionResult::Timeout => anyhow::bail!("Timed out when connecting to debugger"),
ConnectionResult::ConnectionReset => {
log::info!("Debugger closed the connection");
return Ok(());
}
ConnectionResult::Result(Ok(Message::Response(res))) => {
if let Some(tx) = pending_requests.lock().await.remove(&res.request_seq) {
if let Err(e) = tx.send(Self::process_response(res)) {
log::trace!("Did not send response `{:?}` for a cancelled", e);
}
} else {
client_tx.send(Message::Response(res)).await?;
};
}
}
Ok(message) => {
client_tx.send(message).await?;
}
Err(e) => break Err(e),
ConnectionResult::Result(Ok(message)) => client_tx.send(message).await?,
ConnectionResult::Result(Err(e)) => break Err(e),
}
};
drop(client_tx);
log::debug!("Handle adapter output dropped");
result
}
async fn handle_error<Stderr>(stderr: Stderr, log_handlers: LogHandlers) -> Result<()>
async fn handle_error<Stderr>(stderr: Stderr, log_handlers: LogHandlers) -> ConnectionResult<()>
where
Stderr: AsyncRead + Unpin + Send + 'static,
{
@@ -375,8 +407,12 @@ impl TransportDelegate {
let mut reader = BufReader::new(stderr);
let result = loop {
match reader.read_line(&mut buffer).await {
Ok(0) => anyhow::bail!("debugger error stream closed"),
match reader
.read_line(&mut buffer)
.await
.context("reading error log line")
{
Ok(0) => break ConnectionResult::ConnectionReset,
Ok(_) => {
for (kind, log_handler) in log_handlers.lock().iter_mut() {
if matches!(kind, LogKind::Adapter) {
@@ -386,7 +422,7 @@ impl TransportDelegate {
buffer.truncate(0);
}
Err(error) => break Err(error.into()),
Err(error) => break ConnectionResult::Result(Err(error)),
}
};
@@ -420,7 +456,7 @@ impl TransportDelegate {
reader: &mut BufReader<Stdout>,
buffer: &mut String,
log_handlers: Option<&LogHandlers>,
) -> Result<Message>
) -> ConnectionResult<Message>
where
Stdout: AsyncRead + Unpin + Send + 'static,
{
@@ -428,48 +464,58 @@ impl TransportDelegate {
loop {
buffer.truncate(0);
if reader
match reader
.read_line(buffer)
.await
.with_context(|| "reading a message from server")?
== 0
.with_context(|| "reading a message from server")
{
anyhow::bail!("debugger reader stream closed, last string output: '{buffer}'");
Ok(0) => return ConnectionResult::ConnectionReset,
Ok(_) => {}
Err(e) => return ConnectionResult::Result(Err(e)),
};
if buffer == "\r\n" {
break;
}
let parts = buffer.trim().split_once(": ");
match parts {
Some(("Content-Length", value)) => {
content_length = Some(value.parse().context("invalid content length")?);
if let Some(("Content-Length", value)) = buffer.trim().split_once(": ") {
match value.parse().context("invalid content length") {
Ok(length) => content_length = Some(length),
Err(e) => return ConnectionResult::Result(Err(e)),
}
_ => {}
}
}
let content_length = content_length.context("missing content length")?;
let content_length = match content_length.context("missing content length") {
Ok(length) => length,
Err(e) => return ConnectionResult::Result(Err(e)),
};
let mut content = vec![0; content_length];
reader
if let Err(e) = reader
.read_exact(&mut content)
.await
.with_context(|| "reading after a loop")?;
.with_context(|| "reading after a loop")
{
return ConnectionResult::Result(Err(e));
}
let message = std::str::from_utf8(&content).context("invalid utf8 from server")?;
let message_str = match std::str::from_utf8(&content).context("invalid utf8 from server") {
Ok(str) => str,
Err(e) => return ConnectionResult::Result(Err(e)),
};
if let Some(log_handlers) = log_handlers {
for (kind, log_handler) in log_handlers.lock().iter_mut() {
if matches!(kind, LogKind::Rpc) {
log_handler(IoKind::StdOut, &message);
log_handler(IoKind::StdOut, message_str);
}
}
}
Ok(serde_json::from_str::<Message>(message)?)
ConnectionResult::Result(
serde_json::from_str::<Message>(message_str).context("deserializing server message"),
)
}
pub async fn shutdown(&self) -> Result<()> {
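`receive_server_message` above parses the standard DAP wire framing: one `Content-Length: <n>` header line, a blank `\r\n` separator, then exactly `n` bytes of JSON. The `build_rpc_message` helper that the fake transport below calls is not shown in this diff; a plausible sketch of the matching serializer under that assumption (hypothetical body, the real helper may differ):

// Hypothetical sketch of DAP framing as consumed by receive_server_message;
// the real TransportDelegate::build_rpc_message is not shown in this diff.
fn build_rpc_message(message: String) -> String {
    // Header with the payload's byte length, blank separator line, then the JSON itself.
    format!("Content-Length: {}\r\n\r\n{}", message.len(), message)
}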
@@ -777,71 +823,31 @@ impl FakeTransport {
let response_handlers = this.response_handlers.clone();
let stdout_writer = Arc::new(Mutex::new(stdout_writer));
cx.background_executor()
.spawn(async move {
let mut reader = BufReader::new(stdin_reader);
let mut buffer = String::new();
cx.background_spawn(async move {
let mut reader = BufReader::new(stdin_reader);
let mut buffer = String::new();
loop {
let message =
TransportDelegate::receive_server_message(&mut reader, &mut buffer, None)
.await;
match message {
Err(error) => {
break anyhow::anyhow!(error);
}
Ok(message) => {
match message {
Message::Request(request) => {
// redirect reverse requests to stdout writer/reader
if request.command == RunInTerminal::COMMAND
|| request.command == StartDebugging::COMMAND
{
let message =
serde_json::to_string(&Message::Request(request))
.unwrap();
let mut writer = stdout_writer.lock().await;
writer
.write_all(
TransportDelegate::build_rpc_message(message)
.as_bytes(),
)
.await
.unwrap();
writer.flush().await.unwrap();
} else {
let response = if let Some(handle) = request_handlers
.lock()
.get_mut(request.command.as_str())
{
handle(
request.seq,
request.arguments.unwrap_or(json!({})),
)
} else {
panic!("No request handler for {}", request.command);
};
let message =
serde_json::to_string(&Message::Response(response))
.unwrap();
let mut writer = stdout_writer.lock().await;
writer
.write_all(
TransportDelegate::build_rpc_message(message)
.as_bytes(),
)
.await
.unwrap();
writer.flush().await.unwrap();
}
}
Message::Event(event) => {
loop {
match TransportDelegate::receive_server_message(&mut reader, &mut buffer, None)
.await
{
ConnectionResult::Timeout => {
anyhow::bail!("Timed out when connecting to debugger");
}
ConnectionResult::ConnectionReset => {
log::info!("Debugger closed the connection");
break Ok(());
}
ConnectionResult::Result(Err(e)) => break Err(e),
ConnectionResult::Result(Ok(message)) => {
match message {
Message::Request(request) => {
// redirect reverse requests to stdout writer/reader
if request.command == RunInTerminal::COMMAND
|| request.command == StartDebugging::COMMAND
{
let message =
serde_json::to_string(&Message::Event(event)).unwrap();
serde_json::to_string(&Message::Request(request)).unwrap();
let mut writer = stdout_writer.lock().await;
writer
@@ -852,22 +858,58 @@ impl FakeTransport {
.await
.unwrap();
writer.flush().await.unwrap();
}
Message::Response(response) => {
if let Some(handle) =
response_handlers.lock().get(response.command.as_str())
} else {
let response = if let Some(handle) =
request_handlers.lock().get_mut(request.command.as_str())
{
handle(response);
handle(request.seq, request.arguments.unwrap_or(json!({})))
} else {
log::error!("No response handler for {}", response.command);
}
panic!("No request handler for {}", request.command);
};
let message =
serde_json::to_string(&Message::Response(response))
.unwrap();
let mut writer = stdout_writer.lock().await;
writer
.write_all(
TransportDelegate::build_rpc_message(message)
.as_bytes(),
)
.await
.unwrap();
writer.flush().await.unwrap();
}
}
Message::Event(event) => {
let message =
serde_json::to_string(&Message::Event(event)).unwrap();
let mut writer = stdout_writer.lock().await;
writer
.write_all(
TransportDelegate::build_rpc_message(message).as_bytes(),
)
.await
.unwrap();
writer.flush().await.unwrap();
}
Message::Response(response) => {
if let Some(handle) =
response_handlers.lock().get(response.command.as_str())
{
handle(response);
} else {
log::error!("No response handler for {}", response.command);
}
}
}
}
}
})
.detach();
}
})
.detach();
Ok((
TransportPipe::new(Box::new(stdin_writer), Box::new(stdout_reader), None, None),

View File

@@ -1,11 +1,8 @@
use std::{collections::HashMap, path::PathBuf, sync::OnceLock};
use anyhow::{Context as _, Result, anyhow};
use anyhow::{Context as _, Result};
use async_trait::async_trait;
use dap::{
StartDebuggingRequestArgumentsRequest,
adapters::{DebugTaskDefinition, latest_github_release},
};
use dap::adapters::{DebugTaskDefinition, latest_github_release};
use futures::StreamExt;
use gpui::AsyncApp;
use serde_json::Value;
@@ -37,7 +34,7 @@ impl CodeLldbDebugAdapter {
Value::String(String::from(task_definition.label.as_ref())),
);
let request = self.validate_config(&configuration)?;
let request = self.request_kind(&configuration)?;
Ok(dap::StartDebuggingRequestArguments {
request,
@@ -89,48 +86,6 @@ impl DebugAdapter for CodeLldbDebugAdapter {
DebugAdapterName(Self::ADAPTER_NAME.into())
}
fn validate_config(
&self,
config: &serde_json::Value,
) -> Result<StartDebuggingRequestArgumentsRequest> {
let map = config
.as_object()
.ok_or_else(|| anyhow!("Config isn't an object"))?;
let request_variant = map
.get("request")
.and_then(|r| r.as_str())
.ok_or_else(|| anyhow!("request field is required and must be a string"))?;
match request_variant {
"launch" => {
// For launch, verify that one of the required configs exists
if !(map.contains_key("program")
|| map.contains_key("targetCreateCommands")
|| map.contains_key("cargo"))
{
return Err(anyhow!(
"launch request requires either 'program', 'targetCreateCommands', or 'cargo' field"
));
}
Ok(StartDebuggingRequestArgumentsRequest::Launch)
}
"attach" => {
// For attach, verify that either pid or program exists
if !(map.contains_key("pid") || map.contains_key("program")) {
return Err(anyhow!(
"attach request requires either 'pid' or 'program' field"
));
}
Ok(StartDebuggingRequestArgumentsRequest::Attach)
}
_ => Err(anyhow!(
"request must be either 'launch' or 'attach', got '{}'",
request_variant
)),
}
}
fn config_from_zed_format(&self, zed_scenario: ZedDebugConfig) -> Result<DebugScenario> {
let mut configuration = json!({
"request": match zed_scenario.request {

View File

@@ -178,7 +178,7 @@ impl DebugAdapter for GdbDebugAdapter {
let gdb_path = user_setting_path.unwrap_or(gdb_path?);
let request_args = StartDebuggingRequestArguments {
request: self.validate_config(&config.config)?,
request: self.request_kind(&config.config)?,
configuration: config.config.clone(),
};

View File

@@ -1,6 +1,6 @@
use anyhow::{Context as _, anyhow, bail};
use anyhow::{Context as _, bail};
use dap::{
StartDebuggingRequestArguments, StartDebuggingRequestArgumentsRequest,
StartDebuggingRequestArguments,
adapters::{
DebugTaskDefinition, DownloadedFileType, download_adapter_from_github,
latest_github_release,
@@ -350,24 +350,6 @@ impl DebugAdapter for GoDebugAdapter {
})
}
fn validate_config(
&self,
config: &serde_json::Value,
) -> Result<StartDebuggingRequestArgumentsRequest> {
let map = config.as_object().context("Config isn't an object")?;
let request_variant = map
.get("request")
.and_then(|val| val.as_str())
.context("request argument is not found or invalid")?;
match request_variant {
"launch" => Ok(StartDebuggingRequestArgumentsRequest::Launch),
"attach" => Ok(StartDebuggingRequestArgumentsRequest::Attach),
_ => Err(anyhow!("request must be either 'launch' or 'attach'")),
}
}
fn config_from_zed_format(&self, zed_scenario: ZedDebugConfig) -> Result<DebugScenario> {
let mut args = match &zed_scenario.request {
dap::DebugRequest::Attach(attach_config) => {
@@ -488,7 +470,7 @@ impl DebugAdapter for GoDebugAdapter {
connection: None,
request_args: StartDebuggingRequestArguments {
configuration: task_definition.config.clone(),
request: self.validate_config(&task_definition.config)?,
request: self.request_kind(&task_definition.config)?,
},
})
}
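This adapter, like the others in this diff, drops its bespoke `validate_config` in favor of a shared `request_kind` on the `DebugAdapter` trait. That shared implementation is not part of the diff; a minimal sketch, assuming it mirrors the generic "launch"/"attach" parsing the Go and Python adapters delete here (hypothetical default method):

// Hypothetical sketch of the shared request_kind default, assuming it keeps the
// generic parsing removed from the Go and Python adapters in this diff.
fn request_kind(
    &self,
    config: &serde_json::Value,
) -> Result<StartDebuggingRequestArgumentsRequest> {
    let request = config
        .get("request")
        .and_then(|value| value.as_str())
        .context("request argument is not found or invalid")?;
    match request {
        "launch" => Ok(StartDebuggingRequestArgumentsRequest::Launch),
        "attach" => Ok(StartDebuggingRequestArgumentsRequest::Attach),
        _ => anyhow::bail!("request must be either 'launch' or 'attach'"),
    }
}

Adapters that can answer statically still override it; the PHP adapter later in this diff, for example, always returns `Launch`.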

View File

@@ -1,9 +1,6 @@
use adapters::latest_github_release;
use anyhow::{Context as _, anyhow};
use dap::{
StartDebuggingRequestArguments, StartDebuggingRequestArgumentsRequest,
adapters::DebugTaskDefinition,
};
use anyhow::Context as _;
use dap::{StartDebuggingRequestArguments, adapters::DebugTaskDefinition};
use gpui::AsyncApp;
use std::{collections::HashMap, path::PathBuf, sync::OnceLock};
use task::DebugRequest;
@@ -95,7 +92,7 @@ impl JsDebugAdapter {
}),
request_args: StartDebuggingRequestArguments {
configuration: task_definition.config.clone(),
request: self.validate_config(&task_definition.config)?,
request: self.request_kind(&task_definition.config)?,
},
})
}
@@ -107,29 +104,6 @@ impl DebugAdapter for JsDebugAdapter {
DebugAdapterName(Self::ADAPTER_NAME.into())
}
fn validate_config(
&self,
config: &serde_json::Value,
) -> Result<dap::StartDebuggingRequestArgumentsRequest> {
match config.get("request") {
Some(val) if val == "launch" => {
if config.get("program").is_none() && config.get("url").is_none() {
return Err(anyhow!(
"either program or url is required for launch request"
));
}
Ok(StartDebuggingRequestArgumentsRequest::Launch)
}
Some(val) if val == "attach" => {
if !config.get("processId").is_some_and(|val| val.is_u64()) {
return Err(anyhow!("processId must be a number"));
}
Ok(StartDebuggingRequestArgumentsRequest::Attach)
}
_ => Err(anyhow!("missing or invalid request field in config")),
}
}
fn config_from_zed_format(&self, zed_scenario: ZedDebugConfig) -> Result<DebugScenario> {
let mut args = json!({
"type": "pwa-node",

View File

@@ -94,7 +94,7 @@ impl PhpDebugAdapter {
envs: HashMap::default(),
request_args: StartDebuggingRequestArguments {
configuration: task_definition.config.clone(),
request: <Self as DebugAdapter>::validate_config(self, &task_definition.config)?,
request: <Self as DebugAdapter>::request_kind(self, &task_definition.config)?,
},
})
}
@@ -282,10 +282,7 @@ impl DebugAdapter for PhpDebugAdapter {
Some(SharedString::new_static("PHP").into())
}
fn validate_config(
&self,
_: &serde_json::Value,
) -> Result<StartDebuggingRequestArgumentsRequest> {
fn request_kind(&self, _: &serde_json::Value) -> Result<StartDebuggingRequestArgumentsRequest> {
Ok(StartDebuggingRequestArgumentsRequest::Launch)
}

View File

@@ -1,10 +1,8 @@
use crate::*;
use anyhow::{Context as _, anyhow};
use dap::{
DebugRequest, StartDebuggingRequestArguments, StartDebuggingRequestArgumentsRequest,
adapters::DebugTaskDefinition,
};
use gpui::{AsyncApp, SharedString};
use anyhow::Context as _;
use dap::adapters::latest_github_release;
use dap::{DebugRequest, StartDebuggingRequestArguments, adapters::DebugTaskDefinition};
use gpui::{AppContext, AsyncApp, SharedString};
use json_dotpath::DotPaths;
use language::{LanguageName, Toolchain};
use serde_json::Value;
@@ -24,12 +22,13 @@ pub(crate) struct PythonDebugAdapter {
impl PythonDebugAdapter {
const ADAPTER_NAME: &'static str = "Debugpy";
const DEBUG_ADAPTER_NAME: DebugAdapterName =
DebugAdapterName(SharedString::new_static(Self::ADAPTER_NAME));
const ADAPTER_PACKAGE_NAME: &'static str = "debugpy";
const ADAPTER_PATH: &'static str = "src/debugpy/adapter";
const LANGUAGE_NAME: &'static str = "Python";
async fn generate_debugpy_arguments(
&self,
host: &Ipv4Addr,
port: u16,
user_installed_path: Option<&Path>,
@@ -57,7 +56,7 @@ impl PythonDebugAdapter {
format!("--port={}", port),
])
} else {
let adapter_path = paths::debug_adapters_dir().join(self.name().as_ref());
let adapter_path = paths::debug_adapters_dir().join(Self::DEBUG_ADAPTER_NAME.as_ref());
let file_name_prefix = format!("{}_", Self::ADAPTER_NAME);
let debugpy_dir =
@@ -86,7 +85,7 @@ impl PythonDebugAdapter {
&self,
task_definition: &DebugTaskDefinition,
) -> Result<StartDebuggingRequestArguments> {
let request = self.validate_config(&task_definition.config)?;
let request = self.request_kind(&task_definition.config)?;
let mut configuration = task_definition.config.clone();
if let Ok(console) = configuration.dot_get_mut("console") {
@@ -110,22 +109,21 @@ impl PythonDebugAdapter {
repo_owner: "microsoft".into(),
};
adapters::fetch_latest_adapter_version_from_github(github_repo, delegate.as_ref()).await
fetch_latest_adapter_version_from_github(github_repo, delegate.as_ref()).await
}
async fn install_binary(
&self,
adapter_name: DebugAdapterName,
version: AdapterVersion,
delegate: &Arc<dyn DapDelegate>,
delegate: Arc<dyn DapDelegate>,
) -> Result<()> {
let version_path = adapters::download_adapter_from_github(
self.name(),
adapter_name,
version,
adapters::DownloadedFileType::Zip,
adapters::DownloadedFileType::GzipTar,
delegate.as_ref(),
)
.await?;
// only needed when you install the latest version for the first time
if let Some(debugpy_dir) =
util::fs::find_file_name_in_dir(version_path.as_path(), |file_name| {
@@ -174,14 +172,13 @@ impl PythonDebugAdapter {
let python_command = python_path.context("failed to find binary path for Python")?;
log::debug!("Using Python executable: {}", python_command);
let arguments = self
.generate_debugpy_arguments(
&host,
port,
user_installed_path.as_deref(),
installed_in_venv,
)
.await?;
let arguments = Self::generate_debugpy_arguments(
&host,
port,
user_installed_path.as_deref(),
installed_in_venv,
)
.await?;
log::debug!(
"Starting debugpy adapter with command: {} {}",
@@ -207,7 +204,7 @@ impl PythonDebugAdapter {
#[async_trait(?Send)]
impl DebugAdapter for PythonDebugAdapter {
fn name(&self) -> DebugAdapterName {
DebugAdapterName(Self::ADAPTER_NAME.into())
Self::DEBUG_ADAPTER_NAME
}
fn adapter_language_name(&self) -> Option<LanguageName> {
@@ -254,24 +251,6 @@ impl DebugAdapter for PythonDebugAdapter {
})
}
fn validate_config(
&self,
config: &serde_json::Value,
) -> Result<StartDebuggingRequestArgumentsRequest> {
let map = config.as_object().context("Config isn't an object")?;
let request_variant = map
.get("request")
.and_then(|val| val.as_str())
.context("request is not valid")?;
match request_variant {
"launch" => Ok(StartDebuggingRequestArgumentsRequest::Launch),
"attach" => Ok(StartDebuggingRequestArgumentsRequest::Attach),
_ => Err(anyhow!("request must be either 'launch' or 'attach'")),
}
}
async fn dap_schema(&self) -> serde_json::Value {
json!({
"properties": {
@@ -656,7 +635,9 @@ impl DebugAdapter for PythonDebugAdapter {
if self.checked.set(()).is_ok() {
delegate.output_to_console(format!("Checking latest version of {}...", self.name()));
if let Some(version) = self.fetch_latest_adapter_version(delegate).await.log_err() {
self.install_binary(version, delegate).await?;
cx.background_spawn(Self::install_binary(self.name(), version, delegate.clone()))
.await
.context("Failed to install debugpy")?;
}
}
@@ -665,6 +646,24 @@ impl DebugAdapter for PythonDebugAdapter {
}
}
async fn fetch_latest_adapter_version_from_github(
github_repo: GithubRepo,
delegate: &dyn DapDelegate,
) -> Result<AdapterVersion> {
let release = latest_github_release(
&format!("{}/{}", github_repo.repo_owner, github_repo.repo_name),
false,
false,
delegate.http_client(),
)
.await?;
Ok(AdapterVersion {
tag_name: release.tag_name,
url: release.tarball_url,
})
}
#[cfg(test)]
mod tests {
use super::*;
@@ -672,20 +671,18 @@ mod tests {
#[gpui::test]
async fn test_debugpy_install_path_cases() {
let adapter = PythonDebugAdapter::default();
let host = Ipv4Addr::new(127, 0, 0, 1);
let port = 5678;
// Case 1: User-defined debugpy path (highest precedence)
let user_path = PathBuf::from("/custom/path/to/debugpy");
let user_args = adapter
.generate_debugpy_arguments(&host, port, Some(&user_path), false)
.await
.unwrap();
let user_args =
PythonDebugAdapter::generate_debugpy_arguments(&host, port, Some(&user_path), false)
.await
.unwrap();
// Case 2: Venv-installed debugpy (uses -m debugpy.adapter)
let venv_args = adapter
.generate_debugpy_arguments(&host, port, None, true)
let venv_args = PythonDebugAdapter::generate_debugpy_arguments(&host, port, None, true)
.await
.unwrap();
@@ -700,9 +697,4 @@ mod tests {
// Note: Case 3 (GitHub-downloaded debugpy) is not tested since this requires mocking the GitHub API.
}
#[test]
fn test_adapter_path_constant() {
assert_eq!(PythonDebugAdapter::ADAPTER_PATH, "src/debugpy/adapter");
}
}

View File

@@ -265,7 +265,7 @@ impl DebugAdapter for RubyDebugAdapter {
cwd: None,
envs: std::collections::HashMap::default(),
request_args: StartDebuggingRequestArguments {
request: self.validate_config(&definition.config)?,
request: self.request_kind(&definition.config)?,
configuration: definition.config.clone(),
},
})

View File

@@ -50,6 +50,7 @@ project.workspace = true
rpc.workspace = true
serde.workspace = true
serde_json.workspace = true
# serde_json_lenient.workspace = true
settings.workspace = true
shlex.workspace = true
sysinfo.workspace = true

View File

@@ -3,11 +3,12 @@ use crate::session::DebugSession;
use crate::session::running::RunningState;
use crate::{
ClearAllBreakpoints, Continue, Detach, FocusBreakpointList, FocusConsole, FocusFrames,
FocusLoadedSources, FocusModules, FocusTerminal, FocusVariables, Pause, Restart,
ShowStackTrace, StepBack, StepInto, StepOut, StepOver, Stop, ToggleIgnoreBreakpoints,
ToggleSessionPicker, ToggleThreadPicker, persistence, spawn_task_or_modal,
FocusLoadedSources, FocusModules, FocusTerminal, FocusVariables, NewProcessModal,
NewProcessMode, Pause, Restart, ShowStackTrace, StepBack, StepInto, StepOut, StepOver, Stop,
ToggleExpandItem, ToggleIgnoreBreakpoints, ToggleSessionPicker, ToggleThreadPicker,
persistence, spawn_task_or_modal,
};
use anyhow::{Context as _, Result, anyhow};
use anyhow::Result;
use command_palette_hooks::CommandPaletteFilter;
use dap::StartDebuggingRequestArguments;
use dap::adapters::DebugAdapterName;
@@ -24,7 +25,7 @@ use gpui::{
use language::Buffer;
use project::debugger::session::{Session, SessionStateEvent};
use project::{Fs, ProjectPath, WorktreeId};
use project::{Fs, WorktreeId};
use project::{Project, debugger::session::ThreadStatus};
use rpc::proto::{self};
use settings::Settings;
@@ -69,6 +70,7 @@ pub struct DebugPanel {
pub(crate) thread_picker_menu_handle: PopoverMenuHandle<ContextMenu>,
pub(crate) session_picker_menu_handle: PopoverMenuHandle<ContextMenu>,
fs: Arc<dyn Fs>,
is_zoomed: bool,
_subscriptions: [Subscription; 1],
}
@@ -103,6 +105,7 @@ impl DebugPanel {
fs: workspace.app_state().fs.clone(),
thread_picker_menu_handle,
session_picker_menu_handle,
is_zoomed: false,
_subscriptions: [focus_subscription],
debug_scenario_scheduled_last: true,
}
@@ -334,10 +337,17 @@ impl DebugPanel {
let Some(task_inventory) = task_store.read(cx).task_inventory() else {
return;
};
let workspace = self.workspace.clone();
let Some(scenario) = task_inventory.read(cx).last_scheduled_scenario().cloned() else {
window.defer(cx, move |window, cx| {
workspace
.update(cx, |workspace, cx| {
NewProcessModal::show(workspace, window, NewProcessMode::Launch, None, cx);
})
.ok();
});
return;
};
let workspace = self.workspace.clone();
cx.spawn_in(window, async move |this, cx| {
let task_contexts = workspace
@@ -942,68 +952,69 @@ impl DebugPanel {
cx.notify();
}
pub(crate) fn save_scenario(
&self,
scenario: &DebugScenario,
worktree_id: WorktreeId,
window: &mut Window,
cx: &mut App,
) -> Task<Result<ProjectPath>> {
self.workspace
.update(cx, |workspace, cx| {
let Some(mut path) = workspace.absolute_path_of_worktree(worktree_id, cx) else {
return Task::ready(Err(anyhow!("Couldn't get worktree path")));
};
// TODO: restore once we have proper comment preserving file edits
// pub(crate) fn save_scenario(
// &self,
// scenario: &DebugScenario,
// worktree_id: WorktreeId,
// window: &mut Window,
// cx: &mut App,
// ) -> Task<Result<ProjectPath>> {
// self.workspace
// .update(cx, |workspace, cx| {
// let Some(mut path) = workspace.absolute_path_of_worktree(worktree_id, cx) else {
// return Task::ready(Err(anyhow!("Couldn't get worktree path")));
// };
let serialized_scenario = serde_json::to_value(scenario);
// let serialized_scenario = serde_json::to_value(scenario);
cx.spawn_in(window, async move |workspace, cx| {
let serialized_scenario = serialized_scenario?;
let fs =
workspace.read_with(cx, |workspace, _| workspace.app_state().fs.clone())?;
// cx.spawn_in(window, async move |workspace, cx| {
// let serialized_scenario = serialized_scenario?;
// let fs =
// workspace.read_with(cx, |workspace, _| workspace.app_state().fs.clone())?;
path.push(paths::local_settings_folder_relative_path());
if !fs.is_dir(path.as_path()).await {
fs.create_dir(path.as_path()).await?;
}
path.pop();
// path.push(paths::local_settings_folder_relative_path());
// if !fs.is_dir(path.as_path()).await {
// fs.create_dir(path.as_path()).await?;
// }
// path.pop();
path.push(paths::local_debug_file_relative_path());
let path = path.as_path();
// path.push(paths::local_debug_file_relative_path());
// let path = path.as_path();
if !fs.is_file(path).await {
let content =
serde_json::to_string_pretty(&serde_json::Value::Array(vec![
serialized_scenario,
]))?;
// if !fs.is_file(path).await {
// fs.create_file(path, Default::default()).await?;
// fs.write(
// path,
// initial_local_debug_tasks_content().to_string().as_bytes(),
// )
// .await?;
// }
fs.create_file(path, Default::default()).await?;
fs.save(path, &content.into(), Default::default()).await?;
} else {
let content = fs.load(path).await?;
let mut values = serde_json::from_str::<Vec<serde_json::Value>>(&content)?;
values.push(serialized_scenario);
fs.save(
path,
&serde_json::to_string_pretty(&values).map(Into::into)?,
Default::default(),
)
.await?;
}
// let content = fs.load(path).await?;
// let mut values =
// serde_json_lenient::from_str::<Vec<serde_json::Value>>(&content)?;
// values.push(serialized_scenario);
// fs.save(
// path,
// &serde_json_lenient::to_string_pretty(&values).map(Into::into)?,
// Default::default(),
// )
// .await?;
workspace.update(cx, |workspace, cx| {
workspace
.project()
.read(cx)
.project_path_for_absolute_path(&path, cx)
.context(
"Couldn't get project path for .zed/debug.json in active worktree",
)
})?
})
})
.unwrap_or_else(|err| Task::ready(Err(err)))
}
// workspace.update(cx, |workspace, cx| {
// workspace
// .project()
// .read(cx)
// .project_path_for_absolute_path(&path, cx)
// .context(
// "Couldn't get project path for .zed/debug.json in active worktree",
// )
// })?
// })
// })
// .unwrap_or_else(|err| Task::ready(Err(err)))
// }
pub(crate) fn toggle_thread_picker(&mut self, window: &mut Window, cx: &mut Context<Self>) {
self.thread_picker_menu_handle.toggle(window, cx);
@@ -1012,6 +1023,22 @@ impl DebugPanel {
pub(crate) fn toggle_session_picker(&mut self, window: &mut Window, cx: &mut Context<Self>) {
self.session_picker_menu_handle.toggle(window, cx);
}
fn toggle_zoom(
&mut self,
_: &workspace::ToggleZoom,
window: &mut Window,
cx: &mut Context<Self>,
) {
if self.is_zoomed {
cx.emit(PanelEvent::ZoomOut);
} else {
if !self.focus_handle(cx).contains_focused(window, cx) {
cx.focus_self(window);
}
cx.emit(PanelEvent::ZoomIn);
}
}
}
async fn register_session_inner(
@@ -1167,6 +1194,15 @@ impl Panel for DebugPanel {
}
fn set_active(&mut self, _: bool, _: &mut Window, _: &mut Context<Self>) {}
fn is_zoomed(&self, _window: &Window, _cx: &App) -> bool {
self.is_zoomed
}
fn set_zoomed(&mut self, zoomed: bool, _window: &mut Window, cx: &mut Context<Self>) {
self.is_zoomed = zoomed;
cx.notify();
}
}
impl Render for DebugPanel {
@@ -1307,6 +1343,23 @@ impl Render for DebugPanel {
.ok();
}
})
.on_action(cx.listener(Self::toggle_zoom))
.on_action(cx.listener(|panel, _: &ToggleExpandItem, _, cx| {
let Some(session) = panel.active_session() else {
return;
};
let active_pane = session
.read(cx)
.running_state()
.read(cx)
.active_pane()
.clone();
active_pane.update(cx, |pane, cx| {
let is_zoomed = pane.is_zoomed();
pane.set_zoomed(!is_zoomed, cx);
});
cx.notify();
}))
.when(self.active_session.is_some(), |this| {
this.on_mouse_down(
MouseButton::Right,
@@ -1410,4 +1463,10 @@ impl workspace::DebuggerProvider for DebuggerProvider {
fn debug_scenario_scheduled_last(&self, cx: &App) -> bool {
self.0.read(cx).debug_scenario_scheduled_last
}
fn active_thread_state(&self, cx: &App) -> Option<ThreadStatus> {
let session = self.0.read(cx).active_session()?;
let thread = session.read(cx).running_state().read(cx).thread_id()?;
session.read(cx).session(cx).read(cx).thread_state(thread)
}
}

View File

@@ -3,7 +3,7 @@ use debugger_panel::{DebugPanel, ToggleFocus};
use editor::Editor;
use feature_flags::{DebuggerFeatureFlag, FeatureFlagViewExt};
use gpui::{App, EntityInputHandler, actions};
use new_session_modal::{NewSessionModal, NewSessionMode};
use new_process_modal::{NewProcessModal, NewProcessMode};
use project::debugger::{self, breakpoint_store::SourceBreakpoint};
use session::DebugSession;
use settings::Settings;
@@ -15,7 +15,7 @@ use workspace::{ItemHandle, ShutdownDebugAdapters, Workspace};
pub mod attach_modal;
pub mod debugger_panel;
mod dropdown_menus;
mod new_session_modal;
mod new_process_modal;
mod persistence;
pub(crate) mod session;
mod stack_trace_view;
@@ -49,6 +49,7 @@ actions!(
ToggleThreadPicker,
ToggleSessionPicker,
RerunLastSession,
ToggleExpandItem,
]
);
@@ -210,7 +211,7 @@ pub fn init(cx: &mut App) {
},
)
.register_action(|workspace: &mut Workspace, _: &Start, window, cx| {
NewSessionModal::show(workspace, window, NewSessionMode::Launch, None, cx);
NewProcessModal::show(workspace, window, NewProcessMode::Debug, None, cx);
})
.register_action(
|workspace: &mut Workspace, _: &RerunLastSession, window, cx| {
@@ -352,7 +353,7 @@ fn spawn_task_or_modal(
.detach_and_log_err(cx)
}
Spawn::ViaModal { reveal_target } => {
NewSessionModal::show(workspace, window, NewSessionMode::Task, *reveal_target, cx);
NewProcessModal::show(workspace, window, NewProcessMode::Task, *reveal_target, cx);
}
}
}

View File

@@ -8,7 +8,8 @@ pub mod variable_list;
use std::{any::Any, ops::ControlFlow, path::PathBuf, sync::Arc, time::Duration};
use crate::{
new_session_modal::resolve_path,
ToggleExpandItem,
new_process_modal::resolve_path,
persistence::{self, DebuggerPaneItem, SerializedLayout},
};
@@ -285,6 +286,7 @@ pub(crate) fn new_debugger_pane(
&new_pane,
item_id_to_move,
new_pane.read(cx).active_item_index(),
true,
window,
cx,
);
@@ -347,6 +349,7 @@ pub(crate) fn new_debugger_pane(
false
}
})));
pane.set_can_toggle_zoom(false, cx);
pane.display_nav_history_buttons(None);
pane.set_custom_drop_handle(cx, custom_drop_handle);
pane.set_should_display_tab_bar(|_, _| true);
@@ -472,17 +475,19 @@ pub(crate) fn new_debugger_pane(
},
)
.icon_size(IconSize::XSmall)
.on_click(cx.listener(move |pane, _, window, cx| {
pane.toggle_zoom(&workspace::ToggleZoom, window, cx);
.on_click(cx.listener(move |pane, _, _, cx| {
let is_zoomed = pane.is_zoomed();
pane.set_zoomed(!is_zoomed, cx);
cx.notify();
}))
.tooltip({
let focus_handle = focus_handle.clone();
move |window, cx| {
let zoomed_text =
if zoomed { "Zoom Out" } else { "Zoom In" };
if zoomed { "Minimize" } else { "Expand" };
Tooltip::for_action_in(
zoomed_text,
&workspace::ToggleZoom,
&ToggleExpandItem,
&focus_handle,
window,
cx,
@@ -566,7 +571,7 @@ impl RunningState {
}
}
pub(crate) fn relativlize_paths(
pub(crate) fn relativize_paths(
key: Option<&str>,
config: &mut serde_json::Value,
context: &TaskContext,
@@ -574,12 +579,12 @@ impl RunningState {
match config {
serde_json::Value::Object(obj) => {
obj.iter_mut()
.for_each(|(key, value)| Self::relativlize_paths(Some(key), value, context));
.for_each(|(key, value)| Self::relativize_paths(Some(key), value, context));
}
serde_json::Value::Array(array) => {
array
.iter_mut()
.for_each(|value| Self::relativlize_paths(None, value, context));
.for_each(|value| Self::relativize_paths(None, value, context));
}
serde_json::Value::String(s) if key == Some("program") || key == Some("cwd") => {
// Some built-in zed tasks wrap their arguments in quotes as they might contain spaces.
@@ -806,13 +811,13 @@ impl RunningState {
mut config,
tcp_connection,
} = scenario;
Self::relativlize_paths(None, &mut config, &task_context);
Self::relativize_paths(None, &mut config, &task_context);
Self::substitute_variables_in_config(&mut config, &task_context);
let request_type = dap_registry
.adapter(&adapter)
.ok_or_else(|| anyhow!("{} is not a valid adapter name", &adapter))
.and_then(|adapter| adapter.validate_config(&config));
.and_then(|adapter| adapter.request_kind(&config));
let config_is_valid = request_type.is_ok();
@@ -897,7 +902,6 @@ impl RunningState {
weak_workspace,
None,
weak_project,
false,
window,
cx,
)
@@ -954,7 +958,10 @@ impl RunningState {
config = scenario.config;
Self::substitute_variables_in_config(&mut config, &task_context);
} else {
anyhow::bail!("No request or build provided");
let Err(e) = request_type else {
unreachable!();
};
anyhow::bail!("Zed cannot determine how to run this debug scenario. `build` field was not provided and Debug Adapter won't accept provided configuration because: {e}");
};
Ok(DebugTaskDefinition {
@@ -1048,15 +1055,7 @@ impl RunningState {
let terminal = terminal_task.await?;
let terminal_view = cx.new_window_entity(|window, cx| {
TerminalView::new(
terminal.clone(),
workspace,
None,
weak_project,
false,
window,
cx,
)
TerminalView::new(terminal.clone(), workspace, None, weak_project, window, cx)
})?;
running.update_in(cx, |running, window, cx| {
@@ -1257,18 +1256,6 @@ impl RunningState {
Event::Focus => {
this.active_pane = source_pane.clone();
}
Event::ZoomIn => {
source_pane.update(cx, |pane, cx| {
pane.set_zoomed(true, cx);
});
cx.notify();
}
Event::ZoomOut => {
source_pane.update(cx, |pane, cx| {
pane.set_zoomed(false, cx);
});
cx.notify();
}
_ => {}
}
}

Some files were not shown because too many files have changed in this diff.