Compare commits


77 Commits

Author SHA1 Message Date
David Kleingeld
b5d1eeff60 Add @dvdsk to git reviewers list 2025-11-03 18:50:00 +01:00
Nia
45b78482f5 perf: Fixup Hyperfine finding (#41837)
Release Notes:

- N/A
2025-11-03 17:36:33 +00:00
Antal Szabó
71f1f3728d windows: Remove null terminator from keyboard ID (#41785)
Closes #41486, closes #35862 

It is unnecessary, and it broke the `uses_altgr` function.

Also add Slovenian layout as using AltGr.

This should fix:
-
https://github.com/zed-industries/zed/pull/40536#issuecomment-3477121224
- https://github.com/zed-industries/zed/issues/41486
- https://github.com/zed-industries/zed/issues/35862

As the current strategy relies on manually adding layouts that have
AltGr, it's brittle and not very elegant. It also has other issues (it
requests the current layout on every keystroke and mouse movement).

**A potentially better and more comprehensive solution is at
https://github.com/zed-industries/zed/pull/41259**
This is just to fix the immediate issues while that gets reviewed.

Release Notes:

- windows: Fix AltGr handling on non-US layouts again.
2025-11-03 18:29:28 +01:00
Richard Feldman
8b560cd8aa Fix bug with uninstalled agent extensions (#41836)
Previously, uninstalled agent extensions didn't immediately disappear
from the menu. Now, they do!

Release Notes:

- N/A
2025-11-03 12:09:26 -05:00
Lukas Wirth
38e1e3f498 project: Use user configured shells for project env fetching (#41288)
Closes https://github.com/zed-industries/zed/issues/40464

Release Notes:

- Fix shell environment sourcing not respecting users' remote shells
2025-11-03 16:29:07 +00:00
Karl-Erik Enkelmann
a6b177d806 Update open buffers with newly registered completion trigger characters (#41243)
Closes https://github.com/zed-extensions/java/issues/108

Previously, when a language server dynamically registered completion
capabilities with trigger characters (hello JDTLS), buffers that were
already open for that language server would not get updated. This change
finds the open buffers for the language server and updates the trigger
characters in each of them when the new capability is registered.

Release Notes:

- N/A
2025-11-03 17:28:30 +01:00
Lukas Wirth
73366bef62 diagnostics: Live update diagnostics view on edits while focused (#41829)
Previously, we only updated the diagnostics pane when it was either
unfocused, saved, or when a disk-based diagnostic run finished (e.g.
cargo check). The reason for this is simple: we do not want to take away
the excerpt under the user's cursor while they are typing if they manage
to fix the diagnostic. Additionally, we need to prevent dropping the
changed buffer before it is saved.

Delaying updates was a simple way to work around these kinds of issues,
but it comes at a huge annoyance: the diagnostics pane does not reflect
the current state of the world but some snapshot of it, making it less
than ideal to work within for languages that do not leverage disk-based
diagnostics (that is, everything but rust-analyzer; and even for
rust-analyzer it's annoying).

This PR changes this. We now always live update the view but take care
to retain unsaved buffers as well as buffers that contain a cursor in
them (as well as some other "checkpoint" properties).

Release Notes:

- Improved diagnostics pane to live update when editing within its
editor
2025-11-03 16:16:05 +00:00
David Kleingeld
48bd253358 Adds instructions on how to use Perf & Flamegraph without debug symbols (#41831)
Release Notes:

- N/A
2025-11-03 16:11:30 +00:00
Bob Mannino
2131d88e48 Add center_on_match option for search (#40523)
[Closes discussion
#28943](https://github.com/zed-industries/zed/discussions/28943)

Release Notes:

- Added `center_on_match` option to center matched text in view during buffer or project search.

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-03 21:17:09 +05:30
Kirill Bulatov
42149df0f2 Show modal hover for one-off tasks (#41824)
Before, one-off commands did not show anything on hover at all; now they
show their command:

<img width="657" height="527" alt="after"
src="https://github.com/user-attachments/assets/d43292ca-9101-4a6a-b689-828277e8cfeb"
/>

Release Notes:

- Show modal hover for one-off tasks
2025-11-03 15:25:44 +00:00
Bing Wang
379bdb227a languages: Add ignore keyword for gomod (#41520)
Go 1.25 introduces an `ignore` directive in go.mod to specify
directories the go command should ignore.

ref: https://tip.golang.org/doc/go1.25#go-command
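
As an illustration of what the new directive looks like in a `go.mod`
file (directory paths here are placeholders; the block form mirrors that
of `require`/`exclude`):

```
module example.com/app

go 1.25

// Directories the go command should skip when matching
// package patterns like ./...
ignore ./node_modules
ignore (
	./testdata/generated
)
```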


Release Notes:

- Added syntax highlighting support for the new [`ignore`
directive](https://tip.golang.org/doc/go1.25#go-command) in `go.mod`
files
2025-11-03 09:11:50 -06:00
Dijana Pavlovic
04e53bff3d Move @punctuation.delimiter before @operator capture (#41663)
Closes #41593

From what I understand, the order of captures inside tree-sitter query
files matters, and the last capture wins. `?` and `:` are captured by
both `@operator` and `@punctuation.delimiter`, so for the ternary
operator to win, `@operator` should come after `@punctuation.delimiter`.
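
To illustrate the ordering (this is a sketch, not the actual
`highlights.scm` from this PR):

```scheme
; Later captures win when a node matches multiple patterns,
; so @operator must come after @punctuation.delimiter
; for `?` and `:` to highlight as operators.
[";" "," "."] @punctuation.delimiter
["?" ":"] @operator
```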

Before:
<img width="298" height="32" alt="Screenshot 2025-10-31 at 17 41 21"
src="https://github.com/user-attachments/assets/af376e52-88be-4f62-9e2b-a106731f8145"
/>


After:
<img width="303" height="39" alt="Screenshot 2025-10-31 at 17 41 33"
src="https://github.com/user-attachments/assets/9a754ae9-0521-4c70-9adb-90a562404ce8"
/>


Release Notes:

- Fixed an issue where the ternary operator symbols in TypeScript would
not be highlighted as operators.
2025-11-03 08:51:22 -06:00
Kirill Bulatov
28f30fc851 Fix racy inlay hints queries (#41816)
Follow-up of https://github.com/zed-industries/zed/pull/40183

Release Notes:

- (Preview only) Fixed inlay hints duplicating when multiple editors are
open for the same buffer

---------

Co-authored-by: Lukas Wirth <lukas@zed.dev>
2025-11-03 13:54:53 +00:00
Lukas Wirth
f8b414c22c zed: Reduce number of rayon threads, spawn with bigger stacks (#41812)
We already do this for the cli and remote server but forgot to do so for
the main binary.
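
rayon's `ThreadPoolBuilder` exposes `num_threads` and `stack_size` for
this; the same idea is shown below with a plain std thread (the 8 MiB
size is illustrative, not necessarily the value the PR uses):

```rust
use std::thread;

fn main() {
    // Spawn a worker with an explicit 8 MiB stack; rayon's
    // ThreadPoolBuilder::stack_size applies the same setting pool-wide.
    let handle = thread::Builder::new()
        .name("worker".into())
        .stack_size(8 * 1024 * 1024)
        .spawn(|| 2 + 2)
        .unwrap();
    assert_eq!(handle.join().unwrap(), 4);
}
```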

Release Notes:

- N/A
2025-11-03 12:28:32 +00:00
Lukas Wirth
50504793e6 file_finder: Fix highlighting panic in open path prompt (#41808)
Closes https://github.com/zed-industries/zed/issues/41249

Couldn't quite come up with a test case here but verified it works.

Release Notes:

- Fixed a panic in file finder when deleting characters
2025-11-03 12:07:13 +00:00
kallyaleksiev
b625263989 remote: Close window when SSH connection fails (#41782)
## Context 

This PR closes issue https://github.com/zed-industries/zed/issues/41781

It essentially `matches` the result of opening the connection here
f7153bbe8a/crates/recent_projects/src/remote_connections.rs (L650)

and adds a Close / Retry alert that, on 'Close', closes the new window
if the result is an error
2025-11-03 11:13:20 +00:00
Lukas Wirth
c8f9db2e24 remote: Fix more quoting issues with nushell (#41547)
https://github.com/zed-industries/zed/pull/40084#issuecomment-3464159871
Closes https://github.com/zed-industries/zed/pull/41547

Release Notes:

- Fixed remoting not working when the remote has nu set as its shell
2025-11-03 10:50:05 +00:00
Lukas Wirth
bc3c88e737 Revert "windows: Don't flood windows message queue with gpui messages" (#41803)
Reverts zed-industries/zed#41595

Closes #41704
2025-11-03 10:41:53 +00:00
Oleksiy Syvokon
3a058138c1 Fix Sonnet's regression with inserting </parameter></invoke> (#41800)
Sometimes, inside the edit agent, Sonnet thinks that it's doing a tool
call and closes its response with `</parameter></invoke>` instead of
properly closing `</new_text>`.

A better but more labor-intensive way of fixing this would be switching
to streaming tool calls for LLMs that support it.

Closes #39921

Release Notes:

- Fixed Sonnet's regression with inserting `</parameter></invoke>`
sometimes
2025-11-03 12:22:51 +02:00
Lukas Wirth
f2b539598e sum_tree: Spawn less tasks in SumTree::from_iter_async (#41793)
Release Notes:

- N/A
2025-11-03 10:02:31 +00:00
ᴀᴍᴛᴏᴀᴇʀ
dc503e9975 Support GitLab and self-hosted GitLab avatars in git blame (#41747)
Part of #11043.

Release Notes:

- Added support for showing GitLab and self-hosted GitLab avatars in git
blame
2025-11-03 07:51:04 +01:00
ᴀᴍᴛᴏᴀᴇʀ
73b75a7765 Support Gitee avatars in git blame (#41783)
Part of https://github.com/zed-industries/zed/issues/11043.

<img width="3596" height="1894" alt="CleanShot 2025-11-03 at 10 39
08@2x"
src="https://github.com/user-attachments/assets/68d16c32-fd23-4f54-9973-6cbda9685a8f"
/>


Release Notes:

- Added support for showing Gitee avatars in git blame
2025-11-03 07:47:20 +01:00
Danilo Leal
deacd3e922 extension_ui: Fix card label truncation (#41784)
Closes https://github.com/zed-industries/zed/issues/41763

Release Notes:

- N/A
2025-11-03 03:54:47 +00:00
Aero
f7153bbe8a agent_ui: Add delete button for compatible API-based LLM providers (#41739)
Discussion: https://github.com/zed-industries/zed/discussions/41736

Release Notes:

- agent panel: Added the ability to remove OpenAI-compatible LLM
providers directly from the UI.

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-11-02 16:05:29 -03:00
Xiaobo Liu
4e7ba8e680 acp_tools: Add vertical scrollbar to ACP logs (#41740)
Release Notes:

- N/A

---------

Signed-off-by: Xiaobo Liu <cppcoffee@gmail.com>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-11-02 18:11:21 +00:00
Danilo Leal
9909b59bd0 agent_ui: Improve the "go to file" affordance in the edit bar (#41762)
This PR makes it clearer that you can click on the file path to open the
corresponding file in the agent panel's "edit bar", which is the element
that shows up in the panel as soon as agent-made edits happen.

Release Notes:

- agent panel: Improved the "go to file" affordance in the edit bar.
2025-11-02 14:56:23 -03:00
Danilo Leal
00ff89f00f agent_ui: Make single file review actions match panel (#41718)
When we introduced the ACP-based agent panel, the condition that
determines when the "review" | "reject" | "keep" buttons are displayed
got mismatched between the panel and the pane (in the single file review
scenario). In the panel, the buttons appear as soon as there are changed
buffers, whereas in the pane, they appear when response generation is
done.

I believe that making them appear at the same time, observing the same
condition, is the desired behavior. Thus, I think the panel behavior is
more correct, because there are loads of times where agent response
generation isn't technically done (e.g., when there's a command waiting
for permission to be run) but the _file edit_ has already been performed
and is in a good state to be accepted or rejected.

So, this is what this PR is doing; effectively removing the "generating"
state from the agent diff, and switching to `EditorState::Reviewing`
when there are changed buffers.

Release Notes:

- Improved agent edit single file reviews by making the "reject" and
"accept" buttons appear at the same time.
2025-11-02 14:30:50 -03:00
Danilo Leal
12fe12b5ac docs: Update theme, icon theme, and visual customization pages (#41761)
Some housekeeping updates:

- Update hardcoded actions/keybindings so they're pulled from the repo
- Mention settings window when useful
- Add more info about agent panel's font size
- Break sentences in individual lines

Release Notes:

- N/A
2025-11-02 14:30:37 -03:00
Mayank Verma
a9bc890497 ui: Fix popover menu not restoring focus to the previously focused element (#41751)
Closes #26548

Here's a before/after comparison:


https://github.com/user-attachments/assets/21d49db7-28bb-4fe2-bdaf-e86b6400ae7a

Release Notes:

- Fixed popover menus not restoring focus to the previously focused
element
2025-11-02 16:56:58 +01:00
Xiaobo Liu
d887e2050f windows: Hide background helpers behind CREATE_NO_WINDOW (#41737)
Close https://github.com/zed-industries/zed/issues/41538

Release Notes:

- Fixed some processes on windows not spawning with CREATE_NO_WINDOW

---------

Signed-off-by: Xiaobo Liu <cppcoffee@gmail.com>
2025-11-02 09:15:01 +01:00
Anthony Eid
d5421ba1a8 windows: Fix click bleeding through collab follow (#41726)
On Windows, clicking on a collab user icon in the title bar would
minimize/expand Zed because the click would bleed through to the title
bar. This PR fixes this by stopping propagation.

#### Before (On MacOS with double clicks to mimic the same behavior)

https://github.com/user-attachments/assets/5a91f7ff-265a-4575-aa23-00b8d30daeed
#### After (On MacOS with double clicks to mimic the same behavior)

https://github.com/user-attachments/assets/e9fcb98f-4855-4f21-8926-2d306d256f1c

Release Notes:

- Windows: Fix clicking on user icon in title bar to follow
minimizing/expanding Zed

Co-authored-by: Remco Smits <djsmits12@gmail.com>
2025-11-02 07:35:11 +00:00
Joseph T. Lyons
548cdfde3a Delete release process docs (#41733)
These have been migrated to the README.md
[here](https://github.com/zed-industries/release_notes). These don't
need to be public. Putting them in the same repo where we draft
(`release_notes`) means less jumping around and allows us to include
additional information we might not want to make public.

Release Notes:

- N/A
2025-11-02 00:37:02 -04:00
Ben Kunkle
2408f767f4 gh-workflow unit evals (#41637)
Closes #ISSUE

Release Notes:

- N/A
2025-11-01 22:45:44 -04:00
Haojian Wu
df15d2d2fe Fix doc typos (#41727)
Release Notes:

- N/A
2025-11-01 20:52:32 -03:00
Anthony Eid
07dcb8f2bb debugger: Add program and module path fallbacks for debugpy toolchain (#40975)
Fixes the Debugpy toolchain detection bug in #40324 

When detecting what toolchain (venv) to use in the Debugpy configuration
stage, we used to only base it off of the current working directory
argument passed to the config. This is wrong behavior for cases like
mono repos, where the correct virtual environment to use is nested in
another folder.

This PR fixes this issue by adding the program and module fields as
fallbacks to check for virtual environments. We also added support for
program/module relative paths as well when cwd is not None.

Release Notes:

- debugger: Improve mono repo virtual environment detection with Debugpy

---------

Co-authored-by: Remco Smits <djsmits12@gmail.com>
2025-11-01 17:30:13 -04:00
Agus Zubiaga
06bdb28517 zeta cli: Add convert-example command (#41608)
Adds a `convert-example` subcommand to the zeta cli that converts eval
examples from/to `json`, `toml`, and `md` formats.

Release Notes:

- N/A

---------

Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
2025-11-01 19:35:04 +00:00
Mark Christiansen
d6b58bb948 agent_ui: Use agent font size tokens for thread markdown rendering (#41610)
Release Notes:

- N/A

--- 

Previously, agent markdown rendering used hardcoded font sizes
(TextSize::Default and TextSize::Small) which ignored the
agent_ui_font_size and agent_buffer_font_size settings. This updates the
markdown style to respect these settings.

This pull request adds support for customizing the font size of code
blocks in agent responses, making it possible to set a distinct font
size for code within the agent panel. The changes ensure that if the new
setting is not specified, the font size will fall back to the agent UI
font size, maintaining consistent appearance.

(I am a frontend developer without any Rust knowledge so this is
co-authored with Claude Code)


**Theme settings extension:**

* Added a new `agent_buffer_code_font_size` setting to
`ThemeSettingsContent`, `ThemeSettings`, and the default settings JSON,
allowing users to specify the font size for code blocks in agent
responses.
[[1]](diffhunk://#diff-a3bba02a485aba48e8e9a9d85485332378aa4fe29a0c50d11ae801ecfa0a56a4R69-R72)
[[2]](diffhunk://#diff-aed3a9217587d27844c57ac8aff4a749f1fb1fc5d54926ef5065bf85f8fd633aR118-R119)
[[3]](diffhunk://#diff-42e01d7aacb60673842554e30970b4ddbbaee7a2ec2c6f2be1c0b08b0dd89631R82-R83)
* Updated the VSCode import logic to recognize and import the new
`agent_buffer_code_font_size` setting.

**Font size application in agent UI:**

* Modified the agent UI rendering logic in `thread_view.rs` to use the
new `agent_buffer_code_font_size` for code blocks, and to fall back to
the agent UI font size if unset.
[[1]](diffhunk://#diff-f73942e8d4f8c4d4d173d57d7c58bb653c4bb6ae7079533ee501750cdca27d98L5584-R5584)
[[2]](diffhunk://#diff-f73942e8d4f8c4d4d173d57d7c58bb653c4bb6ae7079533ee501750cdca27d98L5596-R5598)
* Implemented a helper method in `ThemeSettings` to retrieve the code
block font size, with fallback logic to ensure a value is always used.
* Updated the settings application logic to propagate the new code block
font size setting throughout the theme system.


### Example Screenshots
![Screenshot 2025-10-31 at 12 38
28](https://github.com/user-attachments/assets/cbc34232-ab1f-40bf-a006-689678380e47)
![Screenshot 2025-10-31 at 12 37
45](https://github.com/user-attachments/assets/372b5cf8-2df8-425a-b052-12136de7c6bd)

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-11-01 11:14:21 -03:00
Charles McLaughlin
03e0581ee8 agent_ui: Show notifications also when the panel is hidden (#40942)
Currently Zed only displays agent notifications (e.g. when the agent
completes a task) if the user has switched apps and Zed is not in the
foreground. This adds PR supports the scenario where the agent finishes
a long-running task and the user is busy coding within Zed on something
else.

Release Notes:

- If agent notifications are turned on, they will now also be displayed
when the agent panel is hidden, in complement to them showing when the
Zed window is in the background.

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-11-01 11:14:12 -03:00
Conrad Irwin
1552e13799 Fix telemetry in release builds (#41695)
This was inadvertently broken in v0.211.1-pre when we rewrote the
release build

Release Notes:

- N/A
2025-10-31 22:55:12 -06:00
Dijana Pavlovic
ade0f1342c agent_ui: Prevent mode selector tooltip from going off-screen (#41589)
Closes #41458 

Dynamically position mode selector tooltip to prevent clipping.

Position tooltip on the right when panel is docked left, otherwise on
the left. This ensures the tooltip remains visible regardless of panel
position.

**Note:** The tooltip currently vertically aligns with the bottom of the
menu rather than individual items. Would be great if it can be aligned
with the option it explains. But this doesn't seem trivial to me to
implement and not sure if it's important enough atm?

Before: 
<img width="431" height="248" alt="Screenshot 2025-10-30 at 22 21 09"
src="https://github.com/user-attachments/assets/073f5440-b1bf-420b-b12f-558928b627f1"
/>

After:
<img width="632" height="158" alt="Screenshot 2025-10-30 at 17 26 52"
src="https://github.com/user-attachments/assets/e999e390-bf23-435e-9df0-3126dbc14ecb"
/>
<img width="685" height="175" alt="Screenshot 2025-10-30 at 17 27 15"
src="https://github.com/user-attachments/assets/84efca94-7920-474b-bcf8-062c7b59a812"
/>


Release Notes:

- Improved the agent panel's mode selector by preventing its tooltip
from going off-screen when the panel is docked to the left.
2025-11-01 04:29:58 +00:00
Joe Innes
04f7b08ab9 Give visual feedback when an operation is pending (#41686)
Currently, if a commit operation takes some time, there's no visual
feedback in the UI that anything's happening.

This PR changes the colour of the text on the button to the
`Color::Disabled` colour when a commit operation is pending.

Release Notes:

- Improved UI feedback when a commit is in progress

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-11-01 04:24:59 +00:00
Anthony Eid
ecbdffc84f debugger: Fix Debugpy attach with connect session startup (#41690)
Closes #38345, #34882, #33280

Debugpy has four distinct configuration scenarios, which are:
1. launch
2. attach with process id
3. attach with listen
4. attach with connect

Spawning Debugpy directly works with the first three scenarios but not
with "attach with connect", which requires host/port arguments to be
passed both with the attach request and when starting up Debugpy. This
PR passes the right arguments when spawning Debugpy in an
attach-with-connect scenario, thus fixing the bug.

The VsCode extension comment that explains this:
98f5b93ee4/src/extension/debugger/adapter/factory.ts (L43-L51)

Release Notes:

- debugger: Fix Python attach-based sessions not working with `connect`
or `port` arguments
2025-10-31 19:02:51 -04:00
Jakub Konka
aa61f25795 git: Make GitPanel more responsive to long-running staging ops (#41667)
Currently, this only applies to long-running individually selected
unstaged files in the git panel. Next up I would like to make this work
for `Stage All`/`Unstage All` however this will most likely require
pushing `PendingOperation` into `GitStore` (from the `GitPanel`).

Release Notes:

- N/A
2025-10-31 22:47:49 +01:00
Danilo Leal
d406409b72 Fix categorization of agent server extensions (#41689)
We missed making extensions that provide agent servers fill the
`provides` field with `agent-servers`, and thus, filtering for this type
of extension in both the app and site wouldn't return anything.

Release Notes:

- N/A
2025-10-31 21:18:22 +00:00
Danilo Leal
bf79592465 git_ui: Adjust stash picker (#41688)
Just tidying it up by removing the unnecessary eye icon buttons in all
list items and adding that action in the form of a button in the footer,
closer to all other actions. Also reordering the footer buttons so that
the likely most common action is in the far right.

Release Notes:

- N/A
2025-10-31 18:15:52 -03:00
Ben Kunkle
d3d7199507 Fix release.yml workflow (#41675)
Closes #ISSUE

Release Notes:

- N/A
2025-10-31 16:29:13 -04:00
Danilo Leal
743a9cf258 Add included agents in extensions search (#41679)
Given agent servers will soon be a thing, I'm adding Claude Code, Gemini
CLI, and Codex CLI as included agents in case anyone comes first to
search them as extensions before looking up on the agent panel.

Release Notes:

- N/A
2025-10-31 16:45:39 -03:00
Conrad Irwin
a05358f47f Delete old ci.yml (#41668)
The new one is much better

Release Notes:

- N/A
2025-10-31 12:58:47 -06:00
Ben Kunkle
3a4aba1df2 gh-workflow release (#41502)
Closes #ISSUE

Rewrite our release pipeline to be generated by `gh-workflow`

Release Notes:

- N/A
2025-10-31 14:23:25 -04:00
tidely
12d71b37bb ollama: Add button for refreshing available models (#38181)
Closes #17524

This PR adds a button to the bottom right corner of the ollama settings
UI. It resets the available ollama models, also resetting the
"Connected" state in the process, which means it can be used to check
whether the connection is still valid as well. It's an open question
whether we should clear the available models on ALL `fetch_models`
calls, since these only happen during auth anyway.

Ollama is a local model provider which means clicking the refresh button
often only flashes the "not connected" state because the latency of the
request is so low. This accentuates changes in the UI, however I don't
think there's a way around this without adding some rather cumbersome
deferred ui updates.

I've attached the refresh button to the "Connected" `ButtonLike`, since
I don't think automatic UI spacing should separate these elements. I
think this is okay because the "Connected" isn't actually something that
the user can interact with.

Before: 
<img width="211" height="245" alt="image"
src="https://github.com/user-attachments/assets/ea90e24a-b603-4ee2-9212-2917e1695774"
/>

After: 
<img width="211" height="250" alt="image"
src="https://github.com/user-attachments/assets/be9af950-86a2-4067-87a0-52034a80a823"
/>


Alternative approach: there was also a suggestion to simply add an entry
to the command palette, but since none of the other providers currently
have this ability either, I went with this approach, which also makes
the feature more discoverable to the user.

Release Notes:

- Added a button for refreshing available ollama models

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-10-31 18:12:02 +00:00
Conrad Irwin
34e0c97dbc Generate dwarf files for builds again (#41651)
Closes #ISSUE

Release Notes:

- N/A
2025-10-31 10:51:06 -06:00
Andrew Farkas
cf31b736f7 Add Andrew to REVIEWERS.conl (#41662)
Release Notes:

- N/A
2025-10-31 16:48:21 +00:00
versecafe
1cb512f336 bedrock: Fix duplicate region input (#41341)
Closes #41313

Release Notes:

- Fixes #41313

<img width="453" height="870" alt="Screenshot 2025-10-27 at 10 23 37 PM"
src="https://github.com/user-attachments/assets/93bfba18-1bff-494e-a4c2-05b54ad6eed8"
/>

Co-authored-by: Richard Feldman <oss@rtfeldman.com>
2025-10-31 16:43:45 +00:00
Lukas Wirth
4e6a562efe editor: Fix refresh_linked_ranges panics due to old snapshot use (#41657)
Fixes ZED-29Z

Release Notes:

- Fixed panic in `refresh_linked_ranges`
2025-10-31 17:39:55 +01:00
versecafe
c1dea842ff agent: Model name context (#41490)
Closes #41478

Release Notes:

- Fixed #41478

<img width="459" height="916" alt="Screenshot 2025-10-29 at 1 31 26 PM"
src="https://github.com/user-attachments/assets/1d5b9fdf-9800-44e4-bdd5-f0964f93625f"
/>

> caused by using haiku 4.5 from the anthropic provider and then
swapping to sonnet 3.7 through zed, doing this does mess with prompt
caching but a model swap already invalidates that so it shouldn't have
any cost impact on end users
2025-10-31 16:12:46 +00:00
Dino
c42d54af17 agent_ui: Autoscroll after inserting selections (#41370)
Update the behavior of the `zed_actions::agent::AddSelectionToThread`
action so that, after the selections are added to the current thread,
the editor automatically scrolls to the cursor's position, fixing an
issue where the inserted selection's UI component could wrap the cursor
to the next line below, leaving it outside the viewable area.

Closes #39694

Release Notes:

- Improved the `agent: add selection to thread` action so as to
automatically scroll to the cursor's position after selections are
inserted
2025-10-31 15:17:26 +00:00
Dino
f3a5ebc315 vim: Only focus when associated editor is also focused (#41487)
Update `Vim::activate` to ensure that the `Vim.focused` method is only
called if the associated editor is also focused.

This ensures that the `VimEvent::Focused` event is only emitted when the
editor is actually focused, preventing a bug where, after starting Zed,
Vim's mode indicator would show that the mode was `Insert` even though
it was in `Normal` mode in the main editor.

Closes #41353 

Release Notes:

- Fixed vim's mode being shown as `Insert` right after opening Zed

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-10-31 15:04:54 +00:00
Lukas Wirth
f73d6fe4ce terminal: Kill the terminal child process, not the terminal process on exit (#41631)
When rerunning a task, our process id fetching sometimes returns the
previous terminal's process id when respawning the task, causing us to
kill the new terminal once the previous one drops (we spawn the new one,
then drop the old one). This results in rerun sometimes spawning a blank
task, as the terminal immediately exits. The fix here is simple: on
exit, we want to kill the process running inside the terminal, not the
terminal process itself.

No relnotes as this was introduced yesterday in
https://github.com/zed-industries/zed/pull/41562

Release Notes:

- N/A
2025-10-31 15:01:50 +00:00
Lukas Wirth
c6d61870e2 editor: Fix incorrect hover popup row clamping (#41645)
Fixes ZED-2TR
Fixes ZED-2TQ
Fixes ZED-2TB
Fixes ZED-2SW
Fixes ZED-2SQ

Release Notes:

- Fixed panic in repainting hover popups

Co-authored by: David <david@zed.dev>
2025-10-31 14:24:23 +00:00
Danilo Leal
1f938c08d2 project panel: Remove extra separator when "Rename" is hidden (#41639)
Closes https://github.com/zed-industries/zed/issues/41633

Release Notes:

- N/A
2025-10-31 13:55:08 +00:00
Lukas Wirth
f2ce06c7b0 sum_tree: Replace rayon with futures (#41586)
Release Notes:

- N/A

Co-authored by: Kate <kate@zed.dev>
2025-10-31 10:39:01 +00:00
Mikayla Maki
7c29c6d7a6 Increased the max height of pickers (#41617)
Release Notes:

- Increased the max size of picker-based UI
2025-10-31 04:26:36 +00:00
Andrew Farkas
eab06eb1d9 Keep selection in SwitchToHelixNormalMode (#41583)
Closes #41125

Release Notes:

- Fixed `SwitchToHelixNormalMode` to keep selection
- Added default keybinds for `SwitchToHelixNormalMode` when in Helix
mode
2025-10-31 01:53:46 +00:00
Conrad Irwin
c2537fad43 Add a no-op compare_perf workflow (#41605)
Testing PR for @zed-zippy

Release Notes:

- N/A
2025-10-30 17:28:03 -06:00
Bennet Bo Fenner
977856407e Add bennetbo to REVIEWERS.conl (#41604)
Release Notes:

- N/A
2025-10-30 22:25:47 +00:00
Mikayla Maki
7070038c92 gpui: Remove type bound (#41603)
Release Notes:

- N/A
2025-10-30 22:17:45 +00:00
Bennet Bo Fenner
b059c1fce7 agent_servers: Expand ~ in path from settings (#41602)
Closes #40796


Release Notes:

- Fixed an issue where `~` would not be expanded when specifying the
path of an ACP server
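
A minimal sketch of the tilde expansion being described (hypothetical
helper, not Zed's implementation):

```rust
/// Expand a leading `~` or `~/` prefix using the given home directory.
fn expand_tilde(path: &str, home: &str) -> String {
    if path == "~" {
        home.to_string()
    } else if let Some(rest) = path.strip_prefix("~/") {
        format!("{}/{}", home, rest)
    } else {
        path.to_string()
    }
}

fn main() {
    assert_eq!(
        expand_tilde("~/bin/acp-server", "/home/alice"),
        "/home/alice/bin/acp-server"
    );
    // Absolute paths pass through unchanged.
    assert_eq!(expand_tilde("/usr/bin/server", "/home/alice"), "/usr/bin/server");
}
```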
2025-10-30 22:14:41 +00:00
Chris
03c6d6285c outline_panel: Fix collapse/expand all entries (#41342)
Closes #39937

Release Notes:

- Fixed expand/collapse all entries not working in singleton buffer mode
2025-10-30 21:48:37 +00:00
Agus Zubiaga
60c546196a zeta2: Expose llm-based context retrieval via zeta_cli (#41584)
Release Notes:

- N/A

---------

Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Oleksiy Syvokon <oleksiy.syvokon@gmail.com>
2025-10-30 21:41:09 +00:00
Dino
8aa2158418 vim: Improve pasting while in replace mode (#41549)
- Update `vim::normal::Vim.normal_replace` to work with more than one
  character
- Add `vim::replace::Vim.paste_replace` to handle pasting the 
  clipboard's contents while in replace mode
- Update vim's handling of the `editor::actions::Paste` action so that
  the `paste_replace` method is called when vim is in replace mode,
  otherwise it'll just call the regular `editor::Editor.paste` method

Closes #41378 

Release Notes:

- Improved pasting while in Vim's Replace mode, ensuring that Zed
replaces the same number of characters as the length of the contents
being pasted
2025-10-30 20:33:03 +00:00
Anthony Eid
5ae0768ce4 debugger: Polish breakpoint list UI (#41598)
This PR fixes breakpoint icon alignment so icons sit at the end of a
rendered entry, and enables editing breakpoint properties when there's
no active session.

The alignment issue was caused by some icons being invisible, so the
layout phase always accounted for the space they would take up. Only
laying out the icons when they are visible fixed the issue.

#### Before
<img width="1014" height="316" alt="image"
src="https://github.com/user-attachments/assets/9a9ced06-e219-4d9d-8793-6bdfdaca48e8"
/>

#### After
<img width="502" height="167" alt="Screenshot 2025-10-30 at 3 21 17 PM"
src="https://github.com/user-attachments/assets/23744868-e354-461c-a940-9b6812e1bcf4"
/>

Release Notes:

- Breakpoint list: Allow adding conditions, logs, and hit conditions to
breakpoints when there's no active session
2025-10-30 16:15:21 -04:00
Anthony Eid
44e5a962e6 debugger: Add horizontal scroll bars to variable list, memory view, and breakpoint list (#41594)
Closes #40360

This PR adds heuristics to determine which variable/breakpoint list
entry is widest when rendered, so the uniform list can correctly
determine which item has the longest width and use that to calculate the
scrollbar size.

The heuristic can be off if a non-monospace font is used in the UI; in
most cases, it's more than accurate enough though.
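
A sketch of such a width heuristic (assumed, not Zed's actual code):
with a monospace font, the widest rendered entry is simply the one with
the most characters.

```rust
/// Return the index of the entry that would render widest,
/// assuming a monospace font (width ~ character count).
fn widest_entry_index(entries: &[&str]) -> Option<usize> {
    entries
        .iter()
        .enumerate()
        .max_by_key(|(_, entry)| entry.chars().count())
        .map(|(index, _)| index)
}

fn main() {
    let entries = ["x = 1", "some_long_variable_name = vec![1, 2, 3]", "y"];
    assert_eq!(widest_entry_index(&entries), Some(1));
    assert_eq!(widest_entry_index(&[]), None);
}
```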

Release Notes:

- debugger: Add horizontal scroll bars to variable list, memory view,
and breakpoint list

---------

Co-authored-by: MrSubidubi <dev@bahn.sh>
2025-10-30 19:43:32 +00:00
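
The width heuristic can be sketched as below, assuming a monospace UI font: estimate each entry's rendered width as its character count times the em width and pick the widest. `widest_entry_index` and its parameters are hypothetical, not Zed's actual code.

```rust
// Sketch of a monospace width heuristic: width ≈ char count × em width.
// Approximate for proportional fonts, but usually close enough to size
// a horizontal scrollbar.
fn widest_entry_index(entries: &[&str], em_width_px: f32) -> Option<usize> {
    entries
        .iter()
        .enumerate()
        .max_by(|(_, a), (_, b)| {
            // chars().count() handles multi-byte UTF-8 correctly.
            let wa = a.chars().count() as f32 * em_width_px;
            let wb = b.chars().count() as f32 * em_width_px;
            wa.partial_cmp(&wb).unwrap()
        })
        .map(|(index, _)| index)
}

fn main() {
    let entries = ["x", "some_long_variable_name", "buf"];
    assert_eq!(widest_entry_index(&entries, 8.0), Some(1));
}
```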
Lukas Wirth
3944234bab windows: Don't flood windows message queue with gpui messages (#41595)
Release Notes:

- N/A

Co-authored-by: Max Brunsfeld <max@zed.dev>
2025-10-30 20:09:32 +01:00
Lukas Wirth
ac3b232dda Reduce amount of foreground tasks spawned on multibuffer/editor updates (#41479)
When doing a project-wide search for `hang` in Zed on Windows, Zed
freezes for a couple of seconds and ultimately starts erroring with
`Not enough quota is available to process this command.` when
dispatching Windows messages. The cause is that we simply overload the
Windows message pump with the sheer number of foreground tasks we spawn
while populating the project search.

This PR is an attempt at reducing this.

Release Notes:

- Reduced hangs and stutters in large project file searches
2025-10-30 17:40:56 +00:00
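
The reduction strategy amounts to batching: instead of dispatching one foreground task per search result (which floods the platform message pump), collect results into chunks and dispatch one task per chunk. A hedged sketch using plain channels in place of gpui's executor; `process_in_batches` and all names are illustrative:

```rust
use std::sync::mpsc;

// Drain search results and count how many "foreground tasks" would be
// dispatched when results are processed in fixed-size batches rather
// than one task per result.
fn process_in_batches(results: mpsc::Receiver<String>, batch_size: usize) -> usize {
    let mut dispatched_tasks = 0;
    let mut batch = Vec::with_capacity(batch_size);
    for result in results {
        batch.push(result);
        if batch.len() == batch_size {
            // One task handles the whole batch instead of a single item.
            dispatched_tasks += 1;
            batch.clear();
        }
    }
    if !batch.is_empty() {
        dispatched_tasks += 1; // flush the final partial batch
    }
    dispatched_tasks
}

fn main() {
    let (tx, rx) = mpsc::channel();
    for i in 0..250 {
        tx.send(format!("match {i}")).unwrap();
    }
    drop(tx);
    // 250 results in batches of 100 -> 3 tasks instead of 250.
    assert_eq!(process_in_batches(rx, 100), 3);
}
```

The same idea applies regardless of the executor: the message pump sees one wakeup per batch, so quota exhaustion scales with batch count, not result count.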
Paweł Kondzior
743180342a agent_ui: Insert thread summary as proper mention URI (#40722)
This ensures the thread summary is treated as a tracked mention with
accessible context.

Changes:
- Fixed `MessageEditor::insert_thread_summary()` to use proper mention
URI format
- Added test coverage to verify the fix

Release Notes:

- Fixed an issue where "New From Summary" was not properly inserting
thread summaries as contextual mentions when creating new threads.
Thread summaries are now inserted as proper mention URIs.
2025-10-30 17:19:32 +01:00
Bennet Fenner
3825ce523e agent_ui: Fix agent: Chat with follow not working (#41581)
Release Notes:

- Fixed an issue where `agent: Chat with follow` was not working anymore

Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
2025-10-30 16:03:12 +00:00
Anthony Eid
b4cf7e440e debugger: Get rid of initialize_args in php debugger setup docs (#41579)
Related to issue: #40887

Release Notes:

- N/A

Co-authored-by: Remco Smits <djsmits12@gmail.com>
2025-10-30 11:47:59 -04:00
218 changed files with 7276 additions and 4012 deletions


@@ -1,841 +0,0 @@
name: CI
on:
push:
tags:
- "v*"
concurrency:
# Allow only one workflow per any non-`main` branch.
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
jobs:
job_spec:
name: Decide which jobs to run
if: github.repository_owner == 'zed-industries'
outputs:
run_tests: ${{ steps.filter.outputs.run_tests }}
run_license: ${{ steps.filter.outputs.run_license }}
run_docs: ${{ steps.filter.outputs.run_docs }}
run_nix: ${{ steps.filter.outputs.run_nix }}
run_actionlint: ${{ steps.filter.outputs.run_actionlint }}
runs-on:
- namespace-profile-2x4-ubuntu-2404
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
# 350 is arbitrary; ~10days of history on main (5secs); full history is ~25secs
fetch-depth: ${{ github.ref == 'refs/heads/main' && 2 || 350 }}
- name: Fetch git history and generate output filters
id: filter
run: |
if [ -z "$GITHUB_BASE_REF" ]; then
echo "Not in a PR context (i.e., push to main/stable/preview)"
COMPARE_REV="$(git rev-parse HEAD~1)"
else
echo "In a PR context comparing to pull_request.base.ref"
git fetch origin "$GITHUB_BASE_REF" --depth=350
COMPARE_REV="$(git merge-base "origin/${GITHUB_BASE_REF}" HEAD)"
fi
CHANGED_FILES="$(git diff --name-only "$COMPARE_REV" ${{ github.sha }})"
# Specify anything which should potentially skip full test suite in this regex:
# - docs/
# - script/update_top_ranking_issues/
# - .github/ISSUE_TEMPLATE/
# - .github/workflows/ (except .github/workflows/ci.yml)
SKIP_REGEX='^(docs/|script/update_top_ranking_issues/|\.github/(ISSUE_TEMPLATE|workflows/(?!ci)))'
echo "$CHANGED_FILES" | grep -qvP "$SKIP_REGEX" && \
echo "run_tests=true" >> "$GITHUB_OUTPUT" || \
echo "run_tests=false" >> "$GITHUB_OUTPUT"
echo "$CHANGED_FILES" | grep -qP '^docs/' && \
echo "run_docs=true" >> "$GITHUB_OUTPUT" || \
echo "run_docs=false" >> "$GITHUB_OUTPUT"
echo "$CHANGED_FILES" | grep -qP '^\.github/(workflows/|actions/|actionlint.yml)' && \
echo "run_actionlint=true" >> "$GITHUB_OUTPUT" || \
echo "run_actionlint=false" >> "$GITHUB_OUTPUT"
echo "$CHANGED_FILES" | grep -qP '^(Cargo.lock|script/.*licenses)' && \
echo "run_license=true" >> "$GITHUB_OUTPUT" || \
echo "run_license=false" >> "$GITHUB_OUTPUT"
echo "$CHANGED_FILES" | grep -qP '^(nix/|flake\.|Cargo\.|rust-toolchain.toml|\.cargo/config.toml)' && \
echo "$GITHUB_REF_NAME" | grep -qvP '^v[0-9]+\.[0-9]+\.[0-9x](-pre)?$' && \
echo "run_nix=true" >> "$GITHUB_OUTPUT" || \
echo "run_nix=false" >> "$GITHUB_OUTPUT"
migration_checks:
name: Check Postgres and Protobuf migrations, mergability
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
timeout-minutes: 60
runs-on:
- self-mini-macos
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
fetch-depth: 0 # fetch full history
- name: Remove untracked files
run: git clean -df
- name: Find modified migrations
shell: bash -euxo pipefail {0}
run: |
export SQUAWK_GITHUB_TOKEN=${{ github.token }}
. ./script/squawk
- name: Ensure fresh merge
shell: bash -euxo pipefail {0}
run: |
if [ -z "$GITHUB_BASE_REF" ];
then
echo "BUF_BASE_BRANCH=$(git merge-base origin/main HEAD)" >> "$GITHUB_ENV"
else
git checkout -B temp
git merge -q "origin/$GITHUB_BASE_REF" -m "merge main into temp"
echo "BUF_BASE_BRANCH=$GITHUB_BASE_REF" >> "$GITHUB_ENV"
fi
- uses: bufbuild/buf-setup-action@v1
with:
version: v1.29.0
- uses: bufbuild/buf-breaking-action@v1
with:
input: "crates/proto/proto/"
against: "https://github.com/${GITHUB_REPOSITORY}.git#branch=${BUF_BASE_BRANCH},subdir=crates/proto/proto/"
style:
timeout-minutes: 60
name: Check formatting and spelling
needs: [job_spec]
if: github.repository_owner == 'zed-industries'
runs-on:
- namespace-profile-4x8-ubuntu-2204
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2 # v4.0.0
with:
version: 9
- name: Prettier Check on /docs
working-directory: ./docs
run: |
pnpm dlx "prettier@${PRETTIER_VERSION}" . --check || {
echo "To fix, run from the root of the Zed repo:"
echo " cd docs && pnpm dlx prettier@${PRETTIER_VERSION} . --write && cd .."
false
}
env:
PRETTIER_VERSION: 3.5.0
- name: Prettier Check on default.json
run: |
pnpm dlx "prettier@${PRETTIER_VERSION}" assets/settings/default.json --check || {
echo "To fix, run from the root of the Zed repo:"
echo " pnpm dlx prettier@${PRETTIER_VERSION} assets/settings/default.json --write"
false
}
env:
PRETTIER_VERSION: 3.5.0
      # To support writing comments that will certainly be revisited.
- name: Check for todo! and FIXME comments
run: script/check-todos
- name: Check modifier use in keymaps
run: script/check-keymaps
- name: Run style checks
uses: ./.github/actions/check_style
- name: Check for typos
uses: crate-ci/typos@80c8a4945eec0f6d464eaf9e65ed98ef085283d1 # v1.38.1
with:
config: ./typos.toml
check_docs:
timeout-minutes: 60
name: Check docs
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
(needs.job_spec.outputs.run_tests == 'true' || needs.job_spec.outputs.run_docs == 'true')
runs-on:
- namespace-profile-8x16-ubuntu-2204
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: Build docs
uses: ./.github/actions/build_docs
actionlint:
runs-on: namespace-profile-2x4-ubuntu-2404
if: github.repository_owner == 'zed-industries' && needs.job_spec.outputs.run_actionlint == 'true'
needs: [job_spec]
steps:
- uses: actions/checkout@v4
- name: Download actionlint
id: get_actionlint
run: bash <(curl https://raw.githubusercontent.com/rhysd/actionlint/main/scripts/download-actionlint.bash)
shell: bash
- name: Check workflow files
run: ${{ steps.get_actionlint.outputs.executable }} -color
shell: bash
macos_tests:
timeout-minutes: 60
name: (macOS) Run Clippy and tests
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on:
- self-mini-macos
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: Check that Cargo.lock is up to date
run: |
cargo update --locked --workspace
- name: cargo clippy
run: ./script/clippy
- name: Install cargo-machete
uses: clechasseur/rs-cargo@8435b10f6e71c2e3d4d3b7573003a8ce4bfc6386 # v2
with:
command: install
args: cargo-machete@0.7.0
- name: Check unused dependencies
uses: clechasseur/rs-cargo@8435b10f6e71c2e3d4d3b7573003a8ce4bfc6386 # v2
with:
command: machete
- name: Check licenses
run: |
script/check-licenses
if [[ "${{ needs.job_spec.outputs.run_license }}" == "true" ]]; then
script/generate-licenses /tmp/zed_licenses_output
fi
- name: Check for new vulnerable dependencies
if: github.event_name == 'pull_request'
uses: actions/dependency-review-action@67d4f4bd7a9b17a0db54d2a7519187c65e339de8 # v4
with:
license-check: false
- name: Run tests
uses: ./.github/actions/run_tests
- name: Build collab
        # we should do this on a linux x86 machine
run: cargo build -p collab
- name: Build other binaries and features
run: |
cargo build --workspace --bins --examples
      # Since the macOS runners are stateful, we need to remove the config file to prevent potential bugs.
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo
linux_tests:
timeout-minutes: 60
name: (Linux) Run Clippy and tests
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on:
- namespace-profile-16x32-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"
- name: Install Linux dependencies
run: ./script/linux
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: cargo clippy
run: ./script/clippy
- name: Run tests
uses: ./.github/actions/run_tests
- name: Build other binaries and features
run: |
cargo build -p zed
cargo check -p workspace
cargo check -p gpui --examples
      # Even though the Linux runner is not stateful, in theory there is no need to do this cleanup.
      # But, to avoid potential issues in the future if we choose to use a stateful Linux runner and forget to add code
      # to clean up the config file, I've included the cleanup code here as a precaution.
      # While it's not strictly necessary at this moment, I believe it's better to err on the side of caution.
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo
doctests:
# Nextest currently doesn't support doctests, so run them separately and in parallel.
timeout-minutes: 60
name: (Linux) Run doctests
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on:
- namespace-profile-16x32-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"
- name: Install Linux dependencies
run: ./script/linux
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: Run doctests
run: cargo test --workspace --doc --no-fail-fast
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo
build_remote_server:
timeout-minutes: 60
name: (Linux) Build Remote Server
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on:
- namespace-profile-16x32-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"
- name: Install Clang & Mold
run: ./script/remote-server && ./script/install-mold 2.34.0
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: Build Remote Server
run: cargo build -p remote_server
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo
windows_tests:
timeout-minutes: 60
name: (Windows) Run Clippy and tests
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on: [self-32vcpu-windows-2022]
steps:
- name: Environment Setup
run: |
$RunnerDir = Split-Path -Parent $env:RUNNER_WORKSPACE
Write-Output `
"RUSTUP_HOME=$RunnerDir\.rustup" `
"CARGO_HOME=$RunnerDir\.cargo" `
"PATH=$RunnerDir\.cargo\bin;$env:PATH" `
>> $env:GITHUB_ENV
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Configure CI
run: |
New-Item -ItemType Directory -Path "./../.cargo" -Force
Copy-Item -Path "./.cargo/ci-config.toml" -Destination "./../.cargo/config.toml"
- name: cargo clippy
run: |
.\script\clippy.ps1
- name: Run tests
uses: ./.github/actions/run_tests_windows
- name: Build Zed
run: cargo build
- name: Limit target directory size
run: ./script/clear-target-dir-if-larger-than.ps1 250
- name: Clean CI config file
if: always()
run: Remove-Item -Recurse -Path "./../.cargo" -Force -ErrorAction SilentlyContinue
tests_pass:
name: Tests Pass
runs-on: namespace-profile-2x4-ubuntu-2404
needs:
- job_spec
- style
- check_docs
- actionlint
- migration_checks
# run_tests: If adding required tests, add them here and to script below.
- linux_tests
- build_remote_server
- macos_tests
- windows_tests
if: |
github.repository_owner == 'zed-industries' &&
always()
steps:
- name: Check all tests passed
run: |
# Check dependent jobs...
RET_CODE=0
# Always check style
[[ "${{ needs.style.result }}" != 'success' ]] && { RET_CODE=1; echo "style tests failed"; }
if [[ "${{ needs.job_spec.outputs.run_docs }}" == "true" ]]; then
[[ "${{ needs.check_docs.result }}" != 'success' ]] && { RET_CODE=1; echo "docs checks failed"; }
fi
if [[ "${{ needs.job_spec.outputs.run_actionlint }}" == "true" ]]; then
[[ "${{ needs.actionlint.result }}" != 'success' ]] && { RET_CODE=1; echo "actionlint checks failed"; }
fi
# Only check test jobs if they were supposed to run
if [[ "${{ needs.job_spec.outputs.run_tests }}" == "true" ]]; then
[[ "${{ needs.macos_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "macOS tests failed"; }
[[ "${{ needs.linux_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "Linux tests failed"; }
[[ "${{ needs.windows_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "Windows tests failed"; }
[[ "${{ needs.build_remote_server.result }}" != 'success' ]] && { RET_CODE=1; echo "Remote server build failed"; }
# This check is intentionally disabled. See: https://github.com/zed-industries/zed/pull/28431
# [[ "${{ needs.migration_checks.result }}" != 'success' ]] && { RET_CODE=1; echo "Migration Checks failed"; }
fi
if [[ "$RET_CODE" -eq 0 ]]; then
echo "All tests passed successfully!"
fi
exit $RET_CODE
bundle-mac:
timeout-minutes: 120
name: Create a macOS bundle
runs-on:
- self-mini-macos
if: startsWith(github.ref, 'refs/tags/v')
needs: [macos_tests]
env:
MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}
MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}
APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}
APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
steps:
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
with:
node-version: "18"
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
# We need to fetch more than one commit so that `script/draft-release-notes`
# is able to diff between the current and previous tag.
#
# 25 was chosen arbitrarily.
fetch-depth: 25
clean: false
ref: ${{ github.ref }}
- name: Limit target directory size
run: script/clear-target-dir-if-larger-than 300
- name: Determine version and release channel
run: |
# This exports RELEASE_CHANNEL into env (GITHUB_ENV)
script/determine-release-channel
- name: Draft release notes
run: |
mkdir -p target/
# Ignore any errors that occur while drafting release notes to not fail the build.
script/draft-release-notes "$RELEASE_VERSION" "$RELEASE_CHANNEL" > target/release-notes.md || true
script/create-draft-release target/release-notes.md
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Create macOS app bundle (aarch64)
run: script/bundle-mac aarch64-apple-darwin
- name: Create macOS app bundle (x64)
run: script/bundle-mac x86_64-apple-darwin
- name: Rename binaries
run: |
mv target/aarch64-apple-darwin/release/Zed.dmg target/aarch64-apple-darwin/release/Zed-aarch64.dmg
mv target/x86_64-apple-darwin/release/Zed.dmg target/x86_64-apple-darwin/release/Zed-x86_64.dmg
- uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
name: Upload app bundle to release
if: ${{ env.RELEASE_CHANNEL == 'preview' || env.RELEASE_CHANNEL == 'stable' }}
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: |
target/zed-remote-server-macos-x86_64.gz
target/zed-remote-server-macos-aarch64.gz
target/aarch64-apple-darwin/release/Zed-aarch64.dmg
target/x86_64-apple-darwin/release/Zed-x86_64.dmg
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
bundle-linux-x86_x64:
timeout-minutes: 60
name: Linux x86_x64 release bundle
runs-on:
- namespace-profile-16x32-ubuntu-2004 # ubuntu 20.04 for minimal glibc
if: |
( startsWith(github.ref, 'refs/tags/v') )
needs: [linux_tests]
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Install Linux dependencies
run: ./script/linux && ./script/install-mold 2.34.0
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Determine version and release channel
run: |
# This exports RELEASE_CHANNEL into env (GITHUB_ENV)
script/determine-release-channel
- name: Create Linux .tar.gz bundle
run: script/bundle-linux
- name: Upload Artifacts to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: |
target/zed-remote-server-linux-x86_64.gz
target/release/zed-linux-x86_64.tar.gz
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  bundle-linux-aarch64:
timeout-minutes: 60
name: Linux arm64 release bundle
runs-on:
- namespace-profile-8x32-ubuntu-2004-arm-m4 # ubuntu 20.04 for minimal glibc
if: |
startsWith(github.ref, 'refs/tags/v')
needs: [linux_tests]
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Install Linux dependencies
run: ./script/linux
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Determine version and release channel
run: |
# This exports RELEASE_CHANNEL into env (GITHUB_ENV)
script/determine-release-channel
- name: Create and upload Linux .tar.gz bundles
run: script/bundle-linux
- name: Upload Artifacts to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: |
target/zed-remote-server-linux-aarch64.gz
target/release/zed-linux-aarch64.tar.gz
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
freebsd:
timeout-minutes: 60
runs-on: github-8vcpu-ubuntu-2404
if: |
false && ( startsWith(github.ref, 'refs/tags/v') )
needs: [linux_tests]
name: Build Zed on FreeBSD
steps:
- uses: actions/checkout@v4
- name: Build FreeBSD remote-server
id: freebsd-build
uses: vmactions/freebsd-vm@c3ae29a132c8ef1924775414107a97cac042aad5 # v1.2.0
with:
usesh: true
release: 13.5
copyback: true
prepare: |
pkg install -y \
bash curl jq git \
          rustup-init cmake-core llvm-devel-lite pkgconf protobuf # libx11 alsa-lib rust-bindgen-cli
run: |
freebsd-version
sysctl hw.model
sysctl hw.ncpu
sysctl hw.physmem
sysctl hw.usermem
git config --global --add safe.directory /home/runner/work/zed/zed
rustup-init --profile minimal --default-toolchain none -y
. "$HOME/.cargo/env"
./script/bundle-freebsd
mkdir -p out/
mv "target/zed-remote-server-freebsd-x86_64.gz" out/
rm -rf target/
cargo clean
- name: Upload Artifact to Workflow - zed-remote-server (run-bundling)
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-freebsd.gz
path: out/zed-remote-server-freebsd-x86_64.gz
- name: Upload Artifacts to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) }}
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: |
out/zed-remote-server-freebsd-x86_64.gz
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
nix-build:
name: Build with Nix
uses: ./.github/workflows/nix_build.yml
needs: [job_spec]
if: github.repository_owner == 'zed-industries' &&
(contains(github.event.pull_request.labels.*.name, 'run-nix') ||
needs.job_spec.outputs.run_nix == 'true')
secrets: inherit
bundle-windows-x64:
timeout-minutes: 120
name: Create a Windows installer for x86_64
runs-on: [self-32vcpu-windows-2022]
if: |
( startsWith(github.ref, 'refs/tags/v') )
needs: [windows_tests]
env:
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: "http://timestamp.acs.microsoft.com"
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Determine version and release channel
working-directory: ${{ env.ZED_WORKSPACE }}
run: |
# This exports RELEASE_CHANNEL into env (GITHUB_ENV)
script/determine-release-channel.ps1
- name: Build Zed installer
working-directory: ${{ env.ZED_WORKSPACE }}
run: script/bundle-windows.ps1
- name: Upload Artifacts to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: ${{ env.SETUP_PATH }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
bundle-windows-aarch64:
timeout-minutes: 120
name: Create a Windows installer for aarch64
runs-on: [self-32vcpu-windows-2022]
if: |
( startsWith(github.ref, 'refs/tags/v') )
needs: [windows_tests]
env:
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: "http://timestamp.acs.microsoft.com"
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Determine version and release channel
working-directory: ${{ env.ZED_WORKSPACE }}
run: |
# This exports RELEASE_CHANNEL into env (GITHUB_ENV)
script/determine-release-channel.ps1
- name: Build Zed installer
working-directory: ${{ env.ZED_WORKSPACE }}
run: script/bundle-windows.ps1 -Architecture aarch64
- name: Upload Artifacts to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: ${{ env.SETUP_PATH }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
auto-release-preview:
name: Auto release preview
if: |
false
&& startsWith(github.ref, 'refs/tags/v')
&& endsWith(github.ref, '-pre') && !endsWith(github.ref, '.0-pre')
needs: [bundle-mac, bundle-linux-x86_x64, bundle-linux-aarch64, bundle-windows-x64, bundle-windows-aarch64]
runs-on:
- self-mini-macos
steps:
- name: gh release
run: gh release edit "$GITHUB_REF_NAME" --draft=false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Create Sentry release
uses: getsentry/action-release@526942b68292201ac6bbb99b9a0747d4abee354c # v3
env:
SENTRY_ORG: zed-dev
SENTRY_PROJECT: zed
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
with:
environment: production

.github/workflows/compare_perf.yml vendored Normal file

@@ -0,0 +1,13 @@
# Generated from xtask::workflows::compare_perf
# Rebuild with `cargo xtask workflows`.
name: compare_perf
on:
workflow_dispatch: {}
jobs:
run_perf:
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false


@@ -29,10 +29,10 @@ jobs:
node-version: '20'
cache: pnpm
cache-dependency-path: script/danger/pnpm-lock.yaml
- name: danger::install_deps
- name: danger::danger_job::install_deps
run: pnpm install --dir script/danger
shell: bash -euxo pipefail {0}
- name: danger::run
- name: danger::danger_job::run
run: pnpm run --dir script/danger danger ci
shell: bash -euxo pipefail {0}
env:


@@ -1,71 +0,0 @@
name: Run Agent Eval
on:
schedule:
- cron: "0 0 * * *"
pull_request:
branches:
- "**"
types: [synchronize, reopened, labeled]
workflow_dispatch:
concurrency:
# Allow only one workflow per any non-`main` branch.
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_EVAL_TELEMETRY: 1
jobs:
run_eval:
timeout-minutes: 60
name: Run Agent Eval
if: >
github.repository_owner == 'zed-industries' &&
(github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'run-eval'))
runs-on:
- namespace-profile-16x32-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"
- name: Install Linux dependencies
run: ./script/linux
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: Compile eval
run: cargo build --package=eval
- name: Run eval
run: cargo run --package=eval -- --repetitions=8 --concurrency=1
      # Even though the Linux runner is not stateful, in theory there is no need to do this cleanup.
      # But, to avoid potential issues in the future if we choose to use a stateful Linux runner and forget to add code
      # to clean up the config file, I've included the cleanup code here as a precaution.
      # While it's not strictly necessary at this moment, I believe it's better to err on the side of caution.
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo


@@ -1,97 +0,0 @@
# Generated from xtask::workflows::nix_build
# Rebuild with `cargo xtask workflows`.
name: nix_build
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: '1'
CARGO_INCREMENTAL: '0'
on:
pull_request:
branches:
- '**'
paths:
- nix/**
- flake.*
- Cargo.*
- rust-toolchain.toml
- .cargo/config.toml
push:
branches:
- main
- v[0-9]+.[0-9]+.x
paths:
- nix/**
- flake.*
- Cargo.*
- rust-toolchain.toml
- .cargo/config.toml
workflow_call: {}
jobs:
build_nix_linux_x86_64:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-32x64-ubuntu-2004
env:
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
GIT_LFS_SKIP_SMUDGE: '1'
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: nix_build::install_nix
uses: cachix/install-nix-action@02a151ada4993995686f9ed4f1be7cfbb229e56f
with:
github_access_token: ${{ secrets.GITHUB_TOKEN }}
- name: nix_build::cachix_action
uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad
with:
name: zed
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
cachixArgs: -v
pushFilter: -zed-editor-[0-9.]*-nightly
- name: nix_build::build
run: nix build .#debug -L --accept-flake-config
shell: bash -euxo pipefail {0}
timeout-minutes: 60
continue-on-error: true
build_nix_mac_aarch64:
if: github.repository_owner == 'zed-industries'
runs-on: self-mini-macos
env:
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
GIT_LFS_SKIP_SMUDGE: '1'
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: nix_build::set_path
run: |
echo "/nix/var/nix/profiles/default/bin" >> "$GITHUB_PATH"
echo "/Users/administrator/.nix-profile/bin" >> "$GITHUB_PATH"
shell: bash -euxo pipefail {0}
- name: nix_build::cachix_action
uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad
with:
name: zed
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
cachixArgs: -v
pushFilter: -zed-editor-[0-9.]*-nightly
- name: nix_build::build
run: nix build .#debug -L --accept-flake-config
shell: bash -euxo pipefail {0}
- name: nix_build::limit_store
run: |-
if [ "$(du -sm /nix/store | cut -f1)" -gt 50000 ]; then
nix-collect-garbage -d || true
fi
shell: bash -euxo pipefail {0}
timeout-minutes: 60
continue-on-error: true
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true

.github/workflows/release.yml vendored Normal file

@@ -0,0 +1,486 @@
# Generated from xtask::workflows::release
# Rebuild with `cargo xtask workflows`.
name: release
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: '0'
RUST_BACKTRACE: '1'
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
on:
push:
tags:
- v*
jobs:
run_tests_mac:
if: github.repository_owner == 'zed-industries'
runs-on: self-mini-macos
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast --failure-output immediate-final
shell: bash -euxo pipefail {0}
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
timeout-minutes: 60
run_tests_linux:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 100
shell: bash -euxo pipefail {0}
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast --failure-output immediate-final
shell: bash -euxo pipefail {0}
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
timeout-minutes: 60
run_tests_windows:
if: github.repository_owner == 'zed-industries'
runs-on: self-32vcpu-windows-2022
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
New-Item -ItemType Directory -Path "./../.cargo" -Force
Copy-Item -Path "./.cargo/ci-config.toml" -Destination "./../.cargo/config.toml"
shell: pwsh
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
run: ./script/clippy.ps1
shell: pwsh
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: pwsh
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than.ps1 250
shell: pwsh
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast --failure-output immediate-final
shell: pwsh
- name: steps::cleanup_cargo_config
if: always()
run: |
Remove-Item -Recurse -Path "./../.cargo" -Force -ErrorAction SilentlyContinue
shell: pwsh
timeout-minutes: 60
check_scripts:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: run_tests::check_scripts::run_shellcheck
run: ./script/shellcheck-scripts error
shell: bash -euxo pipefail {0}
- id: get_actionlint
name: run_tests::check_scripts::download_actionlint
run: bash <(curl https://raw.githubusercontent.com/rhysd/actionlint/main/scripts/download-actionlint.bash)
shell: bash -euxo pipefail {0}
- name: run_tests::check_scripts::run_actionlint
run: |
${{ steps.get_actionlint.outputs.executable }} -color
shell: bash -euxo pipefail {0}
- name: run_tests::check_scripts::check_xtask_workflows
run: |
cargo xtask workflows
if ! git diff --exit-code .github; then
echo "Error: .github directory has uncommitted changes after running 'cargo xtask workflows'"
echo "Please run 'cargo xtask workflows' locally and commit the changes"
exit 1
fi
shell: bash -euxo pipefail {0}
timeout-minutes: 60
create_draft_release:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
fetch-depth: 25
ref: ${{ github.ref }}
- name: script/determine-release-channel
run: script/determine-release-channel
shell: bash -euxo pipefail {0}
- name: mkdir -p target/
run: mkdir -p target/
shell: bash -euxo pipefail {0}
- name: release::create_draft_release::generate_release_notes
run: node --redirect-warnings=/dev/null ./script/draft-release-notes "$RELEASE_VERSION" "$RELEASE_CHANNEL" > target/release-notes.md
shell: bash -euxo pipefail {0}
- name: release::create_draft_release::create_release
run: script/create-draft-release target/release-notes.md
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
timeout-minutes: 60
bundle_linux_arm64:
needs:
- run_tests_linux
- check_scripts
runs-on: namespace-profile-8x32-ubuntu-2004-arm-m4
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: ./script/bundle-linux
run: ./script/bundle-linux
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact zed-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.tar.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.tar.gz
path: target/release/zed-*.tar.gz
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.tar.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.tar.gz
path: target/zed-remote-server-*.gz
if-no-files-found: error
outputs:
zed: zed-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.tar.gz
remote-server: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.tar.gz
timeout-minutes: 60
bundle_linux_x86_64:
needs:
- run_tests_linux
- check_scripts
runs-on: namespace-profile-32x64-ubuntu-2004
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: ./script/bundle-linux
run: ./script/bundle-linux
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact zed-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.tar.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.tar.gz
path: target/release/zed-*.tar.gz
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.tar.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.tar.gz
path: target/zed-remote-server-*.gz
if-no-files-found: error
outputs:
zed: zed-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.tar.gz
remote-server: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.tar.gz
timeout-minutes: 60
bundle_mac_arm64:
needs:
- run_tests_mac
- check_scripts
runs-on: self-mini-macos
env:
MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}
MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}
APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}
APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}
- name: run_bundling::bundle_mac
run: ./script/bundle-mac aarch64-apple-darwin
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact Zed_${{ github.event.pull_request.head.sha || github.sha }}-aarch64.dmg'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-aarch64.dmg
path: target/aarch64-apple-darwin/release/Zed.dmg
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-macos-aarch64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-macos-aarch64.gz
path: target/zed-remote-server-macos-aarch64.gz
if-no-files-found: error
outputs:
zed: Zed_${{ github.event.pull_request.head.sha || github.sha }}-aarch64.dmg
remote-server: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-macos-aarch64.gz
timeout-minutes: 60
bundle_mac_x86_64:
needs:
- run_tests_mac
- check_scripts
runs-on: self-mini-macos
env:
MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}
MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}
APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}
APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}
- name: run_bundling::bundle_mac
run: ./script/bundle-mac x86_64-apple-darwin
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact Zed_${{ github.event.pull_request.head.sha || github.sha }}-x86_64.dmg'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-x86_64.dmg
path: target/x86_64-apple-darwin/release/Zed.dmg
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-macos-x86_64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-macos-x86_64.gz
path: target/zed-remote-server-macos-x86_64.gz
if-no-files-found: error
outputs:
zed: Zed_${{ github.event.pull_request.head.sha || github.sha }}-x86_64.dmg
remote-server: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-macos-x86_64.gz
timeout-minutes: 60
bundle_windows_arm64:
needs:
- run_tests_windows
- check_scripts
runs-on: self-32vcpu-windows-2022
env:
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: http://timestamp.acs.microsoft.com
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: run_bundling::bundle_windows
run: script/bundle-windows.ps1 -Architecture aarch64
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
- name: '@actions/upload-artifact Zed_${{ github.event.pull_request.head.sha || github.sha }}-aarch64.exe'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-aarch64.exe
path: ${{ env.SETUP_PATH }}
if-no-files-found: error
outputs:
zed: Zed_${{ github.event.pull_request.head.sha || github.sha }}-aarch64.exe
timeout-minutes: 60
bundle_windows_x86_64:
needs:
- run_tests_windows
- check_scripts
runs-on: self-32vcpu-windows-2022
env:
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: http://timestamp.acs.microsoft.com
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: run_bundling::bundle_windows
run: script/bundle-windows.ps1 -Architecture x86_64
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
- name: '@actions/upload-artifact Zed_${{ github.event.pull_request.head.sha || github.sha }}-x86_64.exe'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-x86_64.exe
path: ${{ env.SETUP_PATH }}
if-no-files-found: error
outputs:
zed: Zed_${{ github.event.pull_request.head.sha || github.sha }}-x86_64.exe
timeout-minutes: 60
upload_release_assets:
needs:
- create_draft_release
- bundle_linux_arm64
- bundle_linux_x86_64
- bundle_mac_arm64
- bundle_mac_x86_64
- bundle_windows_arm64
- bundle_windows_x86_64
runs-on: namespace-profile-4x8-ubuntu-2204
steps:
- name: release::upload_release_assets::download_workflow_artifacts
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53
with:
path: ./artifacts/
- name: ls -lR ./artifacts
run: ls -lR ./artifacts
shell: bash -euxo pipefail {0}
- name: release::upload_release_assets::prep_release_artifacts
run: |-
mkdir -p release-artifacts/
mv ./artifacts/${{ needs.bundle_mac_x86_64.outputs.zed }}/* release-artifacts/Zed-x86_64.dmg
mv ./artifacts/${{ needs.bundle_mac_arm64.outputs.zed }}/* release-artifacts/Zed-aarch64.dmg
mv ./artifacts/${{ needs.bundle_windows_x86_64.outputs.zed }}/* release-artifacts/Zed-x86_64.exe
mv ./artifacts/${{ needs.bundle_windows_arm64.outputs.zed }}/* release-artifacts/Zed-aarch64.exe
mv ./artifacts/${{ needs.bundle_linux_arm64.outputs.zed }}/* release-artifacts/zed-linux-aarch64.tar.gz
mv ./artifacts/${{ needs.bundle_linux_x86_64.outputs.zed }}/* release-artifacts/zed-linux-x86_64.tar.gz
mv ./artifacts/${{ needs.bundle_linux_x86_64.outputs.remote-server }}/* release-artifacts/zed-remote-server-linux-x86_64.gz
mv ./artifacts/${{ needs.bundle_linux_arm64.outputs.remote-server }}/* release-artifacts/zed-remote-server-linux-aarch64.gz
mv ./artifacts/${{ needs.bundle_mac_x86_64.outputs.remote-server }}/* release-artifacts/zed-remote-server-macos-x86_64.gz
mv ./artifacts/${{ needs.bundle_mac_arm64.outputs.remote-server }}/* release-artifacts/zed-remote-server-macos-aarch64.gz
shell: bash -euxo pipefail {0}
- name: gh release upload "$GITHUB_REF_NAME" --repo=zed-industries/zed release-artifacts/*
run: gh release upload "$GITHUB_REF_NAME" --repo=zed-industries/zed release-artifacts/*
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
auto_release_preview:
needs:
- upload_release_assets
if: |
false
&& startsWith(github.ref, 'refs/tags/v')
&& endsWith(github.ref, '-pre') && !endsWith(github.ref, '.0-pre')
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: gh release edit "$GITHUB_REF_NAME" --repo=zed-industries/zed --draft=false
run: gh release edit "$GITHUB_REF_NAME" --repo=zed-industries/zed --draft=false
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: release::auto_release_preview::create_sentry_release
uses: getsentry/action-release@526942b68292201ac6bbb99b9a0747d4abee354c
with:
environment: production
env:
SENTRY_ORG: zed-dev
SENTRY_PROJECT: zed
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true


@@ -201,9 +201,6 @@ jobs:
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: release_nightly::add_rust_to_path
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
shell: bash -euxo pipefail {0}
- name: ./script/linux
run: ./script/linux
shell: bash -euxo pipefail {0}
@@ -242,9 +239,6 @@ jobs:
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: release_nightly::add_rust_to_path
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
shell: bash -euxo pipefail {0}
- name: ./script/linux
run: ./script/linux
shell: bash -euxo pipefail {0}
@@ -298,11 +292,11 @@ jobs:
"nightly" | Set-Content -Path "crates/zed/RELEASE_CHANNEL"
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
- name: release_nightly::build_zed_installer
- name: run_bundling::bundle_windows
run: script/bundle-windows.ps1 -Architecture x86_64
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
- name: release_nightly::upload_zed_nightly_windows
- name: release_nightly::upload_zed_nightly
run: script/upload-nightly.ps1 -Architecture x86_64
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
@@ -340,11 +334,11 @@ jobs:
"nightly" | Set-Content -Path "crates/zed/RELEASE_CHANNEL"
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
- name: release_nightly::build_zed_installer
- name: run_bundling::bundle_windows
run: script/bundle-windows.ps1 -Architecture aarch64
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
- name: release_nightly::upload_zed_nightly_windows
- name: release_nightly::upload_zed_nightly
run: script/upload-nightly.ps1 -Architecture aarch64
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
@@ -365,17 +359,17 @@ jobs:
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: nix_build::install_nix
- name: nix_build::build_nix::install_nix
uses: cachix/install-nix-action@02a151ada4993995686f9ed4f1be7cfbb229e56f
with:
github_access_token: ${{ secrets.GITHUB_TOKEN }}
- name: nix_build::cachix_action
- name: nix_build::build_nix::cachix_action
uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad
with:
name: zed
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
cachixArgs: -v
- name: nix_build::build
- name: nix_build::build_nix::build
run: nix build .#default -L --accept-flake-config
shell: bash -euxo pipefail {0}
timeout-minutes: 60
@@ -396,21 +390,21 @@ jobs:
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: nix_build::set_path
- name: nix_build::build_nix::set_path
run: |
echo "/nix/var/nix/profiles/default/bin" >> "$GITHUB_PATH"
echo "/Users/administrator/.nix-profile/bin" >> "$GITHUB_PATH"
shell: bash -euxo pipefail {0}
- name: nix_build::cachix_action
- name: nix_build::build_nix::cachix_action
uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad
with:
name: zed
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
cachixArgs: -v
- name: nix_build::build
- name: nix_build::build_nix::build
run: nix build .#default -L --accept-flake-config
shell: bash -euxo pipefail {0}
- name: nix_build::limit_store
- name: nix_build::build_nix::limit_store
run: |-
if [ "$(du -sm /nix/store | cut -f1)" -gt 50000 ]; then
nix-collect-garbage -d || true
@@ -434,7 +428,7 @@ jobs:
with:
clean: false
fetch-depth: 0
- name: release_nightly::update_nightly_tag
- name: release_nightly::update_nightly_tag_job::update_nightly_tag
run: |
if [ "$(git rev-parse nightly)" = "$(git rev-parse HEAD)" ]; then
echo "Nightly tag already points to current commit. Skipping tagging."
@@ -445,7 +439,7 @@ jobs:
git tag -f nightly
git push origin nightly --force
shell: bash -euxo pipefail {0}
- name: release_nightly::create_sentry_release
- name: release_nightly::update_nightly_tag_job::create_sentry_release
uses: getsentry/action-release@526942b68292201ac6bbb99b9a0747d4abee354c
with:
environment: production

.github/workflows/run_agent_evals.yml vendored Normal file

@@ -0,0 +1,62 @@
# Generated from xtask::workflows::run_agent_evals
# Rebuild with `cargo xtask workflows`.
name: run_agent_evals
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: '0'
RUST_BACKTRACE: '1'
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_EVAL_TELEMETRY: '1'
on:
pull_request:
types:
- synchronize
- reopened
- labeled
branches:
- '**'
schedule:
- cron: 0 0 * * *
workflow_dispatch: {}
jobs:
agent_evals:
if: |
github.repository_owner == 'zed-industries' &&
(github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'run-eval'))
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::cache_rust_dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: cargo build --package=eval
run: cargo build --package=eval
shell: bash -euxo pipefail {0}
- name: run_agent_evals::agent_evals::run_eval
run: cargo run --package=eval -- --repetitions=8 --concurrency=1
shell: bash -euxo pipefail {0}
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
timeout-minutes: 60
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true


@@ -48,11 +48,16 @@ jobs:
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-x86_64.dmg
path: target/x86_64-apple-darwin/release/Zed.dmg
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-macos-x86_64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-macos-x86_64.gz
path: target/zed-remote-server-macos-x86_64.gz
if-no-files-found: error
outputs:
zed: Zed_${{ github.event.pull_request.head.sha || github.sha }}-x86_64.dmg
remote-server: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-macos-x86_64.gz
timeout-minutes: 60
bundle_mac_arm64:
if: |-
@@ -89,11 +94,16 @@ jobs:
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-aarch64.dmg
path: target/aarch64-apple-darwin/release/Zed.dmg
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-macos-aarch64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-macos-aarch64.gz
path: target/zed-remote-server-macos-aarch64.gz
if-no-files-found: error
outputs:
zed: Zed_${{ github.event.pull_request.head.sha || github.sha }}-aarch64.dmg
remote-server: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-macos-aarch64.gz
timeout-minutes: 60
bundle_linux_x86_64:
if: |-
@@ -123,11 +133,16 @@ jobs:
with:
name: zed-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.tar.gz
path: target/release/zed-*.tar.gz
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.tar.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.tar.gz
path: target/release/zed-remote-server-*.tar.gz
path: target/zed-remote-server-*.gz
if-no-files-found: error
outputs:
zed: zed-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.tar.gz
remote-server: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.tar.gz
timeout-minutes: 60
bundle_linux_arm64:
if: |-
@@ -157,11 +172,16 @@ jobs:
with:
name: zed-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.tar.gz
path: target/release/zed-*.tar.gz
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.tar.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.tar.gz
path: target/release/zed-remote-server-*.tar.gz
path: target/zed-remote-server-*.gz
if-no-files-found: error
outputs:
zed: zed-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.tar.gz
remote-server: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.tar.gz
timeout-minutes: 60
bundle_windows_x86_64:
if: |-
@@ -196,6 +216,9 @@ jobs:
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-x86_64.exe
path: ${{ env.SETUP_PATH }}
if-no-files-found: error
outputs:
zed: Zed_${{ github.event.pull_request.head.sha || github.sha }}-x86_64.exe
timeout-minutes: 60
bundle_windows_arm64:
if: |-
@@ -230,6 +253,9 @@ jobs:
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-aarch64.exe
path: ${{ env.SETUP_PATH }}
if-no-files-found: error
outputs:
zed: Zed_${{ github.event.pull_request.head.sha || github.sha }}-aarch64.exe
timeout-minutes: 60
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}


@@ -444,18 +444,18 @@ jobs:
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: nix_build::install_nix
- name: nix_build::build_nix::install_nix
uses: cachix/install-nix-action@02a151ada4993995686f9ed4f1be7cfbb229e56f
with:
github_access_token: ${{ secrets.GITHUB_TOKEN }}
- name: nix_build::cachix_action
- name: nix_build::build_nix::cachix_action
uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad
with:
name: zed
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
cachixArgs: -v
pushFilter: -zed-editor-[0-9.]*-nightly
- name: nix_build::build
- name: nix_build::build_nix::build
run: nix build .#debug -L --accept-flake-config
shell: bash -euxo pipefail {0}
timeout-minutes: 60
@@ -475,22 +475,22 @@ jobs:
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: nix_build::set_path
- name: nix_build::build_nix::set_path
run: |
echo "/nix/var/nix/profiles/default/bin" >> "$GITHUB_PATH"
echo "/Users/administrator/.nix-profile/bin" >> "$GITHUB_PATH"
shell: bash -euxo pipefail {0}
- name: nix_build::cachix_action
- name: nix_build::build_nix::cachix_action
uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad
with:
name: zed
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
cachixArgs: -v
pushFilter: -zed-editor-[0-9.]*-nightly
- name: nix_build::build
- name: nix_build::build_nix::build
run: nix build .#debug -L --accept-flake-config
shell: bash -euxo pipefail {0}
- name: nix_build::limit_store
- name: nix_build::build_nix::limit_store
run: |-
if [ "$(du -sm /nix/store | cut -f1)" -gt 50000 ]; then
nix-collect-garbage -d || true

.github/workflows/run_unit_evals.yml vendored Normal file

@@ -0,0 +1,63 @@
# Generated from xtask::workflows::run_agent_evals
# Rebuild with `cargo xtask workflows`.
name: run_agent_evals
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: '0'
RUST_BACKTRACE: '1'
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
on:
schedule:
- cron: 47 1 * * 2
workflow_dispatch: {}
jobs:
unit_evals:
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::cache_rust_dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 100
shell: bash -euxo pipefail {0}
- name: ./script/run-unit-evals
run: ./script/run-unit-evals
shell: bash -euxo pipefail {0}
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
- name: run_agent_evals::unit_evals::send_failure_to_slack
if: ${{ failure() }}
uses: slackapi/slack-github-action@b0fa283ad8fea605de13dc3f449259339835fc52
with:
method: chat.postMessage
token: ${{ secrets.SLACK_APP_ZED_UNIT_EVALS_BOT_TOKEN }}
payload: |
channel: C04UDRNNJFQ
text: "Unit Evals Failed: https://github.com/zed-industries/zed/actions/runs/${{ github.run_id }}"
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true


@@ -1,86 +0,0 @@
name: Run Unit Evals

on:
  schedule:
    # GitHub might drop jobs at busy times, so we choose a random time in the middle of the night.
    - cron: "47 1 * * 2"
  workflow_dispatch:

concurrency:
  # Allow only one workflow per any non-`main` branch.
  group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
  cancel-in-progress: true

env:
  CARGO_TERM_COLOR: always
  CARGO_INCREMENTAL: 0
  RUST_BACKTRACE: 1
  ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}

jobs:
  unit_evals:
    if: github.repository_owner == 'zed-industries'
    timeout-minutes: 60
    name: Run unit evals
    runs-on:
      - namespace-profile-16x32-ubuntu-2204
    steps:
      - name: Add Rust to the PATH
        run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
      - name: Checkout repo
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
        with:
          clean: false
      - name: Cache dependencies
        uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
        with:
          save-if: ${{ github.ref == 'refs/heads/main' }}
          # cache-provider: "buildjet"
      - name: Install Linux dependencies
        run: ./script/linux
      - name: Configure CI
        run: |
          mkdir -p ./../.cargo
          cp ./.cargo/ci-config.toml ./../.cargo/config.toml
      - name: Install Rust
        shell: bash -euxo pipefail {0}
        run: |
          cargo install cargo-nextest --locked
      - name: Install Node
        uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
        with:
          node-version: "18"
      - name: Limit target directory size
        shell: bash -euxo pipefail {0}
        run: script/clear-target-dir-if-larger-than 100
      - name: Run unit evals
        shell: bash -euxo pipefail {0}
        run: cargo nextest run --workspace --no-fail-fast --features unit-eval --no-capture -E 'test(::eval_)'
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
      - name: Send failure message to Slack channel if needed
        if: ${{ failure() }}
        uses: slackapi/slack-github-action@b0fa283ad8fea605de13dc3f449259339835fc52
        with:
          method: chat.postMessage
          token: ${{ secrets.SLACK_APP_ZED_UNIT_EVALS_BOT_TOKEN }}
          payload: |
            channel: C04UDRNNJFQ
            text: "Unit Evals Failed: https://github.com/zed-industries/zed/actions/runs/${{ github.run_id }}"
      # Since the Linux runner is not stateful, in theory there is no need to do this cleanup.
      # But, to avoid potential issues in the future if we choose to use a stateful Linux runner and forget to add code
      # to clean up the config file, I've included the cleanup code here as a precaution.
      # While it's not strictly necessary at this moment, I believe it's better to err on the side of caution.
      - name: Clean CI config file
        if: always()
        run: rm -rf ./../.cargo

Cargo.lock generated

@@ -1339,6 +1339,7 @@ dependencies = [
"settings",
"smol",
"tempfile",
"util",
"which 6.0.3",
"workspace",
]
@@ -4528,12 +4529,15 @@ dependencies = [
"fs",
"futures 0.3.31",
"gpui",
"http_client",
"json_dotpath",
"language",
"log",
"node_runtime",
"paths",
"serde",
"serde_json",
"settings",
"smol",
"task",
"util",
@@ -4932,6 +4936,7 @@ dependencies = [
"editor",
"gpui",
"indoc",
"itertools 0.14.0",
"language",
"log",
"lsp",
@@ -7074,6 +7079,7 @@ dependencies = [
"serde_json",
"settings",
"url",
"urlencoding",
"util",
]
@@ -12711,6 +12717,12 @@ version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5da3b0203fd7ee5720aa0b5e790b591aa5d3f41c3ed2c34a3a393382198af2f7"
[[package]]
name = "pollster"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2f3a9f18d041e6d0e102a0a46750538147e5e8992d3b4873aaafee2520b00ce3"
[[package]]
name = "portable-atomic"
version = "1.11.1"
@@ -12759,7 +12771,7 @@ dependencies = [
"log",
"parking_lot",
"pin-project",
"pollster",
"pollster 0.2.5",
"static_assertions",
"thiserror 1.0.69",
]
@@ -14311,7 +14323,6 @@ dependencies = [
"gpui",
"log",
"rand 0.9.2",
"rayon",
"sum_tree",
"unicode-segmentation",
"util",
@@ -16237,6 +16248,7 @@ checksum = "2b2231b7c3057d5e4ad0156fb3dc807d900806020c5ffa3ee6ff2c8c76fb8520"
name = "streaming_diff"
version = "0.1.0"
dependencies = [
"gpui",
"ordered-float 2.10.1",
"rand 0.9.2",
"rope",
@@ -16355,9 +16367,11 @@ version = "0.1.0"
dependencies = [
"arrayvec",
"ctor",
"futures 0.3.31",
"futures-lite 1.13.0",
"log",
"pollster 0.4.0",
"rand 0.9.2",
"rayon",
"zlog",
]
@@ -18042,7 +18056,7 @@ dependencies = [
[[package]]
name = "tree-sitter-gomod"
version = "1.1.1"
source = "git+https://github.com/camdencheek/tree-sitter-go-mod?rev=6efb59652d30e0e9cd5f3b3a669afd6f1a926d3c#6efb59652d30e0e9cd5f3b3a669afd6f1a926d3c"
source = "git+https://github.com/camdencheek/tree-sitter-go-mod?rev=2e886870578eeba1927a2dc4bd2e2b3f598c5f9a#2e886870578eeba1927a2dc4bd2e2b3f598c5f9a"
dependencies = [
"cc",
"tree-sitter-language",
@@ -21220,6 +21234,7 @@ dependencies = [
"project_symbols",
"prompt_store",
"proto",
"rayon",
"recent_projects",
"release_channel",
"remote",
@@ -21746,6 +21761,7 @@ dependencies = [
"polars",
"project",
"prompt_store",
"pulldown-cmark 0.12.2",
"release_channel",
"reqwest_client",
"serde",
@@ -21755,6 +21771,7 @@ dependencies = [
"smol",
"soa-rs",
"terminal_view",
"toml 0.8.23",
"util",
"watch",
"zeta",


@@ -680,7 +680,7 @@ tree-sitter-elixir = "0.3"
tree-sitter-embedded-template = "0.23.0"
tree-sitter-gitcommit = { git = "https://github.com/zed-industries/tree-sitter-git-commit", rev = "88309716a69dd13ab83443721ba6e0b491d37ee9" }
tree-sitter-go = "0.23"
tree-sitter-go-mod = { git = "https://github.com/camdencheek/tree-sitter-go-mod", rev = "6efb59652d30e0e9cd5f3b3a669afd6f1a926d3c", package = "tree-sitter-gomod" }
tree-sitter-go-mod = { git = "https://github.com/camdencheek/tree-sitter-go-mod", rev = "2e886870578eeba1927a2dc4bd2e2b3f598c5f9a", package = "tree-sitter-gomod" }
tree-sitter-gowork = { git = "https://github.com/zed-industries/tree-sitter-go-work", rev = "acb0617bf7f4fda02c6217676cc64acb89536dc7" }
tree-sitter-heex = { git = "https://github.com/zed-industries/tree-sitter-heex", rev = "1dd45142fbb05562e35b2040c6129c9bca346592" }
tree-sitter-html = "0.23"


@@ -19,6 +19,7 @@
= @dinocosta
= @smitbarmase
= @cole-miller
= @HactarCE
vim
= @ConradIrwin
@@ -32,6 +33,7 @@ gpui
git
= @cole-miller
= @danilo-leal
= @dvdsk
linux
= @dvdsk
@@ -80,6 +82,7 @@ ai
= @rtfeldman
= @danilo-leal
= @benbrandt
= @bennetbo
design
= @danilo-leal


@@ -421,6 +421,12 @@
"ctrl-[": "editor::Cancel"
}
},
{
"context": "vim_mode == helix_select && !menu",
"bindings": {
"escape": "vim::SwitchToHelixNormalMode"
}
},
{
"context": "(vim_mode == helix_normal || vim_mode == helix_select) && !menu",
"bindings": {


@@ -602,7 +602,9 @@
"whole_word": false,
"case_sensitive": false,
"include_ignored": false,
"regex": false
"regex": false,
// Whether to center the cursor on each search match when navigating.
"center_on_match": false
},
// When to populate a new search's query based on the text under the cursor.
// This setting can take the following three values:
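The new `center_on_match` option shown above slots in next to the existing search defaults. A minimal sketch of the relevant `settings.json` block, with the option enabled (key names are taken from the diff; the surrounding `"search"` object is assumed to match Zed's default settings layout):

```jsonc
// settings.json (excerpt): buffer/project search options
"search": {
  "whole_word": false,
  "case_sensitive": false,
  "include_ignored": false,
  "regex": false,
  // Center the cursor on each search match when navigating (new in this change).
  "center_on_match": true
}
```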


@@ -3,7 +3,6 @@ mod diff;
mod mention;
mod terminal;
use ::terminal::terminal_settings::TerminalSettings;
use agent_settings::AgentSettings;
use collections::HashSet;
pub use connection::*;
@@ -12,7 +11,7 @@ use language::language_settings::FormatOnSave;
pub use mention::*;
use project::lsp_store::{FormatTrigger, LspFormatTarget};
use serde::{Deserialize, Serialize};
use settings::{Settings as _, SettingsLocation};
use settings::Settings as _;
use task::{Shell, ShellBuilder};
pub use terminal::*;
@@ -2141,17 +2140,9 @@ impl AcpThread {
) -> Task<Result<Entity<Terminal>>> {
let env = match &cwd {
Some(dir) => self.project.update(cx, |project, cx| {
let worktree = project.find_worktree(dir.as_path(), cx);
let shell = TerminalSettings::get(
worktree.as_ref().map(|(worktree, path)| SettingsLocation {
worktree_id: worktree.read(cx).id(),
path: &path,
}),
cx,
)
.shell
.clone();
project.directory_environment(&shell, dir.as_path().into(), cx)
project.environment().update(cx, |env, cx| {
env.directory_environment(dir.as_path().into(), cx)
})
}),
None => Task::ready(None).shared(),
};


@@ -361,10 +361,12 @@ async fn build_buffer_diff(
) -> Result<Entity<BufferDiff>> {
let buffer = cx.update(|cx| buffer.read(cx).snapshot())?;
let executor = cx.background_executor().clone();
let old_text_rope = cx
.background_spawn({
let old_text = old_text.clone();
async move { Rope::from(old_text.as_str()) }
let executor = executor.clone();
async move { Rope::from_str(old_text.as_str(), &executor) }
})
.await;
let base_buffer = cx


@@ -5,10 +5,8 @@ use gpui::{App, AppContext, AsyncApp, Context, Entity, Task};
use language::LanguageRegistry;
use markdown::Markdown;
use project::Project;
use settings::{Settings as _, SettingsLocation};
use std::{path::PathBuf, process::ExitStatus, sync::Arc, time::Instant};
use task::Shell;
use terminal::terminal_settings::TerminalSettings;
use util::get_default_system_shell_preferring_bash;
pub struct Terminal {
@@ -187,17 +185,9 @@ pub async fn create_terminal_entity(
let mut env = if let Some(dir) = &cwd {
project
.update(cx, |project, cx| {
let worktree = project.find_worktree(dir.as_path(), cx);
let shell = TerminalSettings::get(
worktree.as_ref().map(|(worktree, path)| SettingsLocation {
worktree_id: worktree.read(cx).id(),
path: &path,
}),
cx,
)
.shell
.clone();
project.directory_environment(&shell, dir.clone().into(), cx)
project.environment().update(cx, |env, cx| {
env.directory_environment(dir.clone().into(), cx)
})
})?
.await
.unwrap_or_default()


@@ -19,7 +19,7 @@ use markdown::{CodeBlockRenderer, Markdown, MarkdownElement, MarkdownStyle};
use project::Project;
use settings::Settings;
use theme::ThemeSettings;
use ui::{Tooltip, prelude::*};
use ui::{Tooltip, WithScrollbar, prelude::*};
use util::ResultExt as _;
use workspace::{
Item, ItemHandle, ToolbarItemEvent, ToolbarItemLocation, ToolbarItemView, Workspace,
@@ -291,17 +291,19 @@ impl AcpTools {
let expanded = self.expanded.contains(&index);
v_flex()
.w_full()
.px_4()
.py_3()
.border_color(colors.border)
.border_b_1()
.gap_2()
.items_start()
.font_buffer(cx)
.text_size(base_size)
.id(index)
.group("message")
.cursor_pointer()
.font_buffer(cx)
.w_full()
.py_3()
.pl_4()
.pr_5()
.gap_2()
.items_start()
.text_size(base_size)
.border_color(colors.border)
.border_b_1()
.hover(|this| this.bg(colors.element_background.opacity(0.5)))
.on_click(cx.listener(move |this, _, _, cx| {
if this.expanded.contains(&index) {
@@ -323,15 +325,14 @@ impl AcpTools {
h_flex()
.w_full()
.gap_2()
.items_center()
.flex_shrink_0()
.child(match message.direction {
acp::StreamMessageDirection::Incoming => {
ui::Icon::new(ui::IconName::ArrowDown).color(Color::Error)
}
acp::StreamMessageDirection::Outgoing => {
ui::Icon::new(ui::IconName::ArrowUp).color(Color::Success)
}
acp::StreamMessageDirection::Incoming => Icon::new(IconName::ArrowDown)
.color(Color::Error)
.size(IconSize::Small),
acp::StreamMessageDirection::Outgoing => Icon::new(IconName::ArrowUp)
.color(Color::Success)
.size(IconSize::Small),
})
.child(
Label::new(message.name.clone())
@@ -501,7 +502,7 @@ impl Focusable for AcpTools {
}
impl Render for AcpTools {
fn render(&mut self, _window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
fn render(&mut self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
v_flex()
.track_focus(&self.focus_handle)
.size_full()
@@ -516,13 +517,19 @@ impl Render for AcpTools {
.child("No messages recorded yet")
.into_any()
} else {
list(
connection.list_state.clone(),
cx.processor(Self::render_message),
)
.with_sizing_behavior(gpui::ListSizingBehavior::Auto)
.flex_grow()
.into_any()
div()
.size_full()
.flex_grow()
.child(
list(
connection.list_state.clone(),
cx.processor(Self::render_message),
)
.with_sizing_behavior(gpui::ListSizingBehavior::Auto)
.size_full(),
)
.vertical_scrollbar_for(connection.list_state.clone(), window, cx)
.into_any()
}
}
None => h_flex()


@@ -3,7 +3,9 @@ use buffer_diff::BufferDiff;
use clock;
use collections::BTreeMap;
use futures::{FutureExt, StreamExt, channel::mpsc};
use gpui::{App, AppContext, AsyncApp, Context, Entity, Subscription, Task, WeakEntity};
use gpui::{
App, AppContext, AsyncApp, BackgroundExecutor, Context, Entity, Subscription, Task, WeakEntity,
};
use language::{Anchor, Buffer, BufferEvent, DiskState, Point, ToPoint};
use project::{Project, ProjectItem, lsp_store::OpenLspBufferHandle};
use std::{cmp, ops::Range, sync::Arc};
@@ -321,6 +323,7 @@ impl ActionLog {
let unreviewed_edits = tracked_buffer.unreviewed_edits.clone();
let edits = diff_snapshots(&old_snapshot, &new_snapshot);
let mut has_user_changes = false;
let executor = cx.background_executor().clone();
async move {
if let ChangeAuthor::User = author {
has_user_changes = apply_non_conflicting_edits(
@@ -328,6 +331,7 @@ impl ActionLog {
edits,
&mut base_text,
new_snapshot.as_rope(),
&executor,
);
}
@@ -382,6 +386,7 @@ impl ActionLog {
let agent_diff_base = tracked_buffer.diff_base.clone();
let git_diff_base = git_diff.read(cx).base_text().as_rope().clone();
let buffer_text = tracked_buffer.snapshot.as_rope().clone();
let executor = cx.background_executor().clone();
anyhow::Ok(cx.background_spawn(async move {
let mut old_unreviewed_edits = old_unreviewed_edits.into_iter().peekable();
let committed_edits = language::line_diff(
@@ -416,8 +421,11 @@ impl ActionLog {
),
new_agent_diff_base.max_point(),
));
new_agent_diff_base
.replace(old_byte_start..old_byte_end, &unreviewed_new);
new_agent_diff_base.replace(
old_byte_start..old_byte_end,
&unreviewed_new,
&executor,
);
row_delta +=
unreviewed.new_len() as i32 - unreviewed.old_len() as i32;
}
@@ -611,6 +619,7 @@ impl ActionLog {
.snapshot
.text_for_range(new_range)
.collect::<String>(),
cx.background_executor(),
);
delta += edit.new_len() as i32 - edit.old_len() as i32;
false
@@ -824,6 +833,7 @@ fn apply_non_conflicting_edits(
edits: Vec<Edit<u32>>,
old_text: &mut Rope,
new_text: &Rope,
executor: &BackgroundExecutor,
) -> bool {
let mut old_edits = patch.edits().iter().cloned().peekable();
let mut new_edits = edits.into_iter().peekable();
@@ -877,6 +887,7 @@ fn apply_non_conflicting_edits(
old_text.replace(
old_bytes,
&new_text.chunks_in_range(new_bytes).collect::<String>(),
executor,
);
applied_delta += new_edit.new_len() as i32 - new_edit.old_len() as i32;
has_made_changes = true;
@@ -2282,6 +2293,7 @@ mod tests {
old_text.replace(
old_start..old_end,
&new_text.slice_rows(edit.new.clone()).to_string(),
cx.background_executor(),
);
}
pretty_assertions::assert_eq!(old_text.to_string(), new_text.to_string());


@@ -13,7 +13,15 @@ const EDITS_END_TAG: &str = "</edits>";
const SEARCH_MARKER: &str = "<<<<<<< SEARCH";
const SEPARATOR_MARKER: &str = "=======";
const REPLACE_MARKER: &str = ">>>>>>> REPLACE";
const END_TAGS: [&str; 3] = [OLD_TEXT_END_TAG, NEW_TEXT_END_TAG, EDITS_END_TAG];
const SONNET_PARAMETER_INVOKE_1: &str = "</parameter>\n</invoke>";
const SONNET_PARAMETER_INVOKE_2: &str = "</parameter></invoke>";
const END_TAGS: [&str; 5] = [
OLD_TEXT_END_TAG,
NEW_TEXT_END_TAG,
EDITS_END_TAG,
SONNET_PARAMETER_INVOKE_1, // Remove this after switching to streaming tool call
SONNET_PARAMETER_INVOKE_2,
];
#[derive(Debug)]
pub enum EditParserEvent {
@@ -547,6 +555,37 @@ mod tests {
);
}
#[gpui::test(iterations = 1000)]
fn test_xml_edits_with_closing_parameter_invoke(mut rng: StdRng) {
// This case is a regression with Claude Sonnet 4.5.
// Sometimes Sonnet thinks that it's doing a tool call
// and closes its response with '</parameter></invoke>'
// instead of properly closing </new_text>
let mut parser = EditParser::new(EditFormat::XmlTags);
assert_eq!(
parse_random_chunks(
indoc! {"
<old_text>some text</old_text><new_text>updated text</parameter></invoke>
"},
&mut parser,
&mut rng
),
vec![Edit {
old_text: "some text".to_string(),
new_text: "updated text".to_string(),
line_hint: None,
},]
);
assert_eq!(
parser.finish(),
EditParserMetrics {
tags: 2,
mismatched_tags: 1
}
);
}
#[gpui::test(iterations = 1000)]
fn test_xml_nested_tags(mut rng: StdRng) {
let mut parser = EditParser::new(EditFormat::XmlTags);
@@ -1035,6 +1074,11 @@ mod tests {
last_ix = chunk_ix;
}
if new_text.is_some() {
pending_edit.new_text = new_text.take().unwrap();
edits.push(pending_edit);
}
edits
}
}


@@ -1581,6 +1581,7 @@ impl EditAgentTest {
let template = crate::SystemPromptTemplate {
project: &project_context,
available_tools: tool_names,
model_name: None,
};
let templates = Templates::new();
template.render(&templates).unwrap()


@@ -305,18 +305,20 @@ impl SearchMatrix {
#[cfg(test)]
mod tests {
use super::*;
use gpui::TestAppContext;
use indoc::indoc;
use language::{BufferId, TextBuffer};
use rand::prelude::*;
use text::ReplicaId;
use util::test::{generate_marked_text, marked_text_ranges};
#[test]
fn test_empty_query() {
#[gpui::test]
fn test_empty_query(cx: &mut gpui::TestAppContext) {
let buffer = TextBuffer::new(
ReplicaId::LOCAL,
BufferId::new(1).unwrap(),
"Hello world\nThis is a test\nFoo bar baz",
cx.background_executor(),
);
let snapshot = buffer.snapshot();
@@ -325,12 +327,13 @@ mod tests {
assert_eq!(finish(finder), None);
}
#[test]
fn test_streaming_exact_match() {
#[gpui::test]
fn test_streaming_exact_match(cx: &mut gpui::TestAppContext) {
let buffer = TextBuffer::new(
ReplicaId::LOCAL,
BufferId::new(1).unwrap(),
"Hello world\nThis is a test\nFoo bar baz",
cx.background_executor(),
);
let snapshot = buffer.snapshot();
@@ -349,8 +352,8 @@ mod tests {
assert_eq!(finish(finder), Some("This is a test".to_string()));
}
#[test]
fn test_streaming_fuzzy_match() {
#[gpui::test]
fn test_streaming_fuzzy_match(cx: &mut gpui::TestAppContext) {
let buffer = TextBuffer::new(
ReplicaId::LOCAL,
BufferId::new(1).unwrap(),
@@ -363,6 +366,7 @@ mod tests {
return x * y;
}
"},
cx.background_executor(),
);
let snapshot = buffer.snapshot();
@@ -383,12 +387,13 @@ mod tests {
);
}
#[test]
fn test_incremental_improvement() {
#[gpui::test]
fn test_incremental_improvement(cx: &mut gpui::TestAppContext) {
let buffer = TextBuffer::new(
ReplicaId::LOCAL,
BufferId::new(1).unwrap(),
"Line 1\nLine 2\nLine 3\nLine 4\nLine 5",
cx.background_executor(),
);
let snapshot = buffer.snapshot();
@@ -408,8 +413,8 @@ mod tests {
assert_eq!(finish(finder), Some("Line 3\nLine 4".to_string()));
}
#[test]
fn test_incomplete_lines_buffering() {
#[gpui::test]
fn test_incomplete_lines_buffering(cx: &mut gpui::TestAppContext) {
let buffer = TextBuffer::new(
ReplicaId::LOCAL,
BufferId::new(1).unwrap(),
@@ -418,6 +423,7 @@ mod tests {
jumps over the lazy dog
Pack my box with five dozen liquor jugs
"},
cx.background_executor(),
);
let snapshot = buffer.snapshot();
@@ -435,8 +441,8 @@ mod tests {
);
}
#[test]
fn test_multiline_fuzzy_match() {
#[gpui::test]
fn test_multiline_fuzzy_match(cx: &mut gpui::TestAppContext) {
let buffer = TextBuffer::new(
ReplicaId::LOCAL,
BufferId::new(1).unwrap(),
@@ -456,6 +462,7 @@ mod tests {
}
}
"#},
cx.background_executor(),
);
let snapshot = buffer.snapshot();
@@ -509,7 +516,7 @@ mod tests {
}
#[gpui::test(iterations = 100)]
fn test_resolve_location_single_line(mut rng: StdRng) {
fn test_resolve_location_single_line(mut rng: StdRng, cx: &mut TestAppContext) {
assert_location_resolution(
concat!(
" Lorem\n",
@@ -519,11 +526,12 @@ mod tests {
),
"ipsum",
&mut rng,
cx,
);
}
#[gpui::test(iterations = 100)]
fn test_resolve_location_multiline(mut rng: StdRng) {
fn test_resolve_location_multiline(mut rng: StdRng, cx: &mut TestAppContext) {
assert_location_resolution(
concat!(
" Lorem\n",
@@ -533,11 +541,12 @@ mod tests {
),
"ipsum\ndolor sit amet",
&mut rng,
cx,
);
}
#[gpui::test(iterations = 100)]
fn test_resolve_location_function_with_typo(mut rng: StdRng) {
fn test_resolve_location_function_with_typo(mut rng: StdRng, cx: &mut TestAppContext) {
assert_location_resolution(
indoc! {"
«fn foo1(a: usize) -> usize {
@@ -550,11 +559,12 @@ mod tests {
"},
"fn foo1(a: usize) -> u32 {\n40\n}",
&mut rng,
cx,
);
}
#[gpui::test(iterations = 100)]
fn test_resolve_location_class_methods(mut rng: StdRng) {
fn test_resolve_location_class_methods(mut rng: StdRng, cx: &mut TestAppContext) {
assert_location_resolution(
indoc! {"
class Something {
@@ -575,11 +585,12 @@ mod tests {
six() { return 6666; }
"},
&mut rng,
cx,
);
}
#[gpui::test(iterations = 100)]
fn test_resolve_location_imports_no_match(mut rng: StdRng) {
fn test_resolve_location_imports_no_match(mut rng: StdRng, cx: &mut TestAppContext) {
assert_location_resolution(
indoc! {"
use std::ops::Range;
@@ -609,11 +620,12 @@ mod tests {
use std::sync::Arc;
"},
&mut rng,
cx,
);
}
#[gpui::test(iterations = 100)]
fn test_resolve_location_nested_closure(mut rng: StdRng) {
fn test_resolve_location_nested_closure(mut rng: StdRng, cx: &mut TestAppContext) {
assert_location_resolution(
indoc! {"
impl Foo {
@@ -641,11 +653,12 @@ mod tests {
" });",
),
&mut rng,
cx,
);
}
#[gpui::test(iterations = 100)]
fn test_resolve_location_tool_invocation(mut rng: StdRng) {
fn test_resolve_location_tool_invocation(mut rng: StdRng, cx: &mut TestAppContext) {
assert_location_resolution(
indoc! {r#"
let tool = cx
@@ -673,11 +686,12 @@ mod tests {
" .output;",
),
&mut rng,
cx,
);
}
#[gpui::test]
fn test_line_hint_selection() {
fn test_line_hint_selection(cx: &mut TestAppContext) {
let text = indoc! {r#"
fn first_function() {
return 42;
@@ -696,6 +710,7 @@ mod tests {
ReplicaId::LOCAL,
BufferId::new(1).unwrap(),
text.to_string(),
cx.background_executor(),
);
let snapshot = buffer.snapshot();
let mut matcher = StreamingFuzzyMatcher::new(snapshot.clone());
@@ -727,9 +742,19 @@ mod tests {
}
#[track_caller]
fn assert_location_resolution(text_with_expected_range: &str, query: &str, rng: &mut StdRng) {
fn assert_location_resolution(
text_with_expected_range: &str,
query: &str,
rng: &mut StdRng,
cx: &mut TestAppContext,
) {
let (text, expected_ranges) = marked_text_ranges(text_with_expected_range, false);
let buffer = TextBuffer::new(ReplicaId::LOCAL, BufferId::new(1).unwrap(), text.clone());
let buffer = TextBuffer::new(
ReplicaId::LOCAL,
BufferId::new(1).unwrap(),
text.clone(),
cx.background_executor(),
);
let snapshot = buffer.snapshot();
let mut matcher = StreamingFuzzyMatcher::new(snapshot);

View File

@@ -38,6 +38,7 @@ pub struct SystemPromptTemplate<'a> {
#[serde(flatten)]
pub project: &'a prompt_store::ProjectContext,
pub available_tools: Vec<SharedString>,
pub model_name: Option<String>,
}
impl Template for SystemPromptTemplate<'_> {
@@ -79,9 +80,11 @@ mod tests {
let template = SystemPromptTemplate {
project: &project,
available_tools: vec!["echo".into()],
model_name: Some("test-model".to_string()),
};
let templates = Templates::new();
let rendered = template.render(&templates).unwrap();
assert!(rendered.contains("## Fixing Diagnostics"));
assert!(rendered.contains("test-model"));
}
}


@@ -150,6 +150,12 @@ Otherwise, follow debugging best practices:
Operating System: {{os}}
Default Shell: {{shell}}
{{#if model_name}}
## Model Information
You are powered by the model named {{model_name}}.
{{/if}}
{{#if (or has_rules has_user_rules)}}
## User's Custom Instructions


@@ -1928,6 +1928,7 @@ impl Thread {
let system_prompt = SystemPromptTemplate {
project: self.project_context.read(cx),
available_tools,
model_name: self.model.as_ref().map(|m| m.name().0.to_string()),
}
.render(&self.templates)
.context("failed to build system prompt")


@@ -569,6 +569,7 @@ mod tests {
use prompt_store::ProjectContext;
use serde_json::json;
use settings::SettingsStore;
use text::Rope;
use util::{path, rel_path::rel_path};
#[gpui::test]
@@ -741,7 +742,7 @@ mod tests {
// Create the file
fs.save(
path!("/root/src/main.rs").as_ref(),
&"initial content".into(),
&Rope::from_str_small("initial content"),
language::LineEnding::Unix,
)
.await
@@ -908,7 +909,7 @@ mod tests {
// Create a simple file with trailing whitespace
fs.save(
path!("/root/src/main.rs").as_ref(),
&"initial content".into(),
&Rope::from_str_small("initial content"),
language::LineEnding::Unix,
)
.await


@@ -1,4 +1,5 @@
use crate::{
ChatWithFollow,
acp::completion_provider::{ContextPickerCompletionProvider, SlashCommandCompletion},
context_picker::{ContextPickerAction, fetch_context_picker::fetch_url_content},
};
@@ -15,6 +16,7 @@ use editor::{
MultiBuffer, ToOffset,
actions::Paste,
display_map::{Crease, CreaseId, FoldId},
scroll::Autoscroll,
};
use futures::{
FutureExt as _,
@@ -49,7 +51,7 @@ use text::OffsetRangeExt;
use theme::ThemeSettings;
use ui::{ButtonLike, TintColor, Toggleable, prelude::*};
use util::{ResultExt, debug_panic, rel_path::RelPath};
use workspace::{Workspace, notifications::NotifyResultExt as _};
use workspace::{CollaboratorId, Workspace, notifications::NotifyResultExt as _};
use zed_actions::agent::Chat;
pub struct MessageEditor {
@@ -234,8 +236,16 @@ impl MessageEditor {
window: &mut Window,
cx: &mut Context<Self>,
) {
let uri = MentionUri::Thread {
id: thread.id.clone(),
name: thread.title.to_string(),
};
let content = format!("{}\n", uri.as_link());
let content_len = content.len() - 1;
let start = self.editor.update(cx, |editor, cx| {
editor.set_text(format!("{}\n", thread.title), window, cx);
editor.set_text(content, window, cx);
editor
.buffer()
.read(cx)
@@ -244,18 +254,8 @@ impl MessageEditor {
.text_anchor
});
self.confirm_mention_completion(
thread.title.clone(),
start,
thread.title.len(),
MentionUri::Thread {
id: thread.id.clone(),
name: thread.title.to_string(),
},
window,
cx,
)
.detach();
self.confirm_mention_completion(thread.title, start, content_len, uri, window, cx)
.detach();
}
#[cfg(test)]
@@ -592,6 +592,21 @@ impl MessageEditor {
),
);
}
// Take this explanation with a grain of salt, but with creases being
// inserted, GPUI recomputes the editor layout in the next frames, so
// directly calling `editor.request_autoscroll` wouldn't work as
// expected. We're leveraging `cx.on_next_frame` to wait 2 frames and
// ensure that the layout has been recalculated so that the autoscroll
// request actually shows the cursor's new position.
let editor = self.editor.clone();
cx.on_next_frame(window, move |_, window, cx| {
cx.on_next_frame(window, move |_, _, cx| {
editor.update(cx, |editor, cx| {
editor.request_autoscroll(Autoscroll::fit(), cx)
});
});
});
}
fn confirm_mention_for_thread(
@@ -813,6 +828,21 @@ impl MessageEditor {
self.send(cx);
}
fn chat_with_follow(
&mut self,
_: &ChatWithFollow,
window: &mut Window,
cx: &mut Context<Self>,
) {
self.workspace
.update(cx, |this, cx| {
this.follow(CollaboratorId::Agent, window, cx)
})
.log_err();
self.send(cx);
}
fn cancel(&mut self, _: &editor::actions::Cancel, _: &mut Window, cx: &mut Context<Self>) {
cx.emit(MessageEditorEvent::Cancel)
}
@@ -1016,6 +1046,7 @@ impl MessageEditor {
self.editor.update(cx, |message_editor, cx| {
message_editor.edit([(cursor_anchor..cursor_anchor, completion.new_text)], cx);
message_editor.request_autoscroll(Autoscroll::fit(), cx);
});
if let Some(confirm) = completion.confirm {
confirm(CompletionIntent::Complete, window, cx);
@@ -1276,6 +1307,7 @@ impl Render for MessageEditor {
div()
.key_context("MessageEditor")
.on_action(cx.listener(Self::chat))
.on_action(cx.listener(Self::chat_with_follow))
.on_action(cx.listener(Self::cancel))
.capture_action(cx.listener(Self::paste))
.flex_1()
@@ -1584,6 +1616,7 @@ mod tests {
use gpui::{
AppContext, Entity, EventEmitter, FocusHandle, Focusable, TestAppContext, VisualTestContext,
};
use language_model::LanguageModelRegistry;
use lsp::{CompletionContext, CompletionTriggerKind};
use project::{CompletionIntent, Project, ProjectPath};
use serde_json::json;
@@ -2730,6 +2763,82 @@ mod tests {
}
}
#[gpui::test]
async fn test_insert_thread_summary(cx: &mut TestAppContext) {
init_test(cx);
cx.update(LanguageModelRegistry::test);
let fs = FakeFs::new(cx.executor());
fs.insert_tree("/project", json!({"file": ""})).await;
let project = Project::test(fs, [Path::new(path!("/project"))], cx).await;
let (workspace, cx) =
cx.add_window_view(|window, cx| Workspace::test_new(project.clone(), window, cx));
let text_thread_store = cx.new(|cx| TextThreadStore::fake(project.clone(), cx));
let history_store = cx.new(|cx| HistoryStore::new(text_thread_store, cx));
// Create a thread metadata to insert as summary
let thread_metadata = agent::DbThreadMetadata {
id: acp::SessionId("thread-123".into()),
title: "Previous Conversation".into(),
updated_at: chrono::Utc::now(),
};
let message_editor = cx.update(|window, cx| {
cx.new(|cx| {
let mut editor = MessageEditor::new(
workspace.downgrade(),
project.clone(),
history_store.clone(),
None,
Default::default(),
Default::default(),
"Test Agent".into(),
"Test",
EditorMode::AutoHeight {
min_lines: 1,
max_lines: None,
},
window,
cx,
);
editor.insert_thread_summary(thread_metadata.clone(), window, cx);
editor
})
});
// Construct expected values for verification
let expected_uri = MentionUri::Thread {
id: thread_metadata.id.clone(),
name: thread_metadata.title.to_string(),
};
let expected_link = format!("[@{}]({})", thread_metadata.title, expected_uri.to_uri());
message_editor.read_with(cx, |editor, cx| {
let text = editor.text(cx);
assert!(
text.contains(&expected_link),
"Expected editor text to contain thread mention link.\nExpected substring: {}\nActual text: {}",
expected_link,
text
);
let mentions = editor.mentions();
assert_eq!(
mentions.len(),
1,
"Expected exactly one mention after inserting thread summary"
);
assert!(
mentions.contains(&expected_uri),
"Expected mentions to contain the thread URI"
);
});
}
#[gpui::test]
async fn test_whitespace_trimming(cx: &mut TestAppContext) {
init_test(cx);
@@ -2787,4 +2896,161 @@ mod tests {
})]
);
}
#[gpui::test]
async fn test_autoscroll_after_insert_selections(cx: &mut TestAppContext) {
init_test(cx);
let app_state = cx.update(AppState::test);
cx.update(|cx| {
language::init(cx);
editor::init(cx);
workspace::init(app_state.clone(), cx);
Project::init_settings(cx);
});
app_state
.fs
.as_fake()
.insert_tree(
path!("/dir"),
json!({
"test.txt": "line1\nline2\nline3\nline4\nline5\n",
}),
)
.await;
let project = Project::test(app_state.fs.clone(), [path!("/dir").as_ref()], cx).await;
let window = cx.add_window(|window, cx| Workspace::test_new(project.clone(), window, cx));
let workspace = window.root(cx).unwrap();
let worktree = project.update(cx, |project, cx| {
let mut worktrees = project.worktrees(cx).collect::<Vec<_>>();
assert_eq!(worktrees.len(), 1);
worktrees.pop().unwrap()
});
let worktree_id = worktree.read_with(cx, |worktree, _| worktree.id());
let mut cx = VisualTestContext::from_window(*window, cx);
// Open a regular editor with the created file, and select a portion of
// the text that will be used for the selections that are meant to be
// inserted in the agent panel.
let editor = workspace
.update_in(&mut cx, |workspace, window, cx| {
workspace.open_path(
ProjectPath {
worktree_id,
path: rel_path("test.txt").into(),
},
None,
false,
window,
cx,
)
})
.await
.unwrap()
.downcast::<Editor>()
.unwrap();
editor.update_in(&mut cx, |editor, window, cx| {
editor.change_selections(Default::default(), window, cx, |selections| {
selections.select_ranges([Point::new(0, 0)..Point::new(0, 5)]);
});
});
let text_thread_store = cx.new(|cx| TextThreadStore::fake(project.clone(), cx));
let history_store = cx.new(|cx| HistoryStore::new(text_thread_store, cx));
// Create a new `MessageEditor`. The `EditorMode::full()` has to be used
// to ensure we have a fixed viewport, so we can eventually actually
// place the cursor outside of the visible area.
let message_editor = workspace.update_in(&mut cx, |workspace, window, cx| {
let workspace_handle = cx.weak_entity();
let message_editor = cx.new(|cx| {
MessageEditor::new(
workspace_handle,
project.clone(),
history_store.clone(),
None,
Default::default(),
Default::default(),
"Test Agent".into(),
"Test",
EditorMode::full(),
window,
cx,
)
});
workspace.active_pane().update(cx, |pane, cx| {
pane.add_item(
Box::new(cx.new(|_| MessageEditorItem(message_editor.clone()))),
true,
true,
None,
window,
cx,
);
});
message_editor
});
message_editor.update_in(&mut cx, |message_editor, window, cx| {
message_editor.editor.update(cx, |editor, cx| {
// Update the Agent Panel's Message Editor text to have 100
// lines, ensuring that the cursor is set at line 90 and that we
// then scroll all the way to the top, so the cursor's position
// remains off screen.
let mut lines = String::new();
for _ in 1..=100 {
lines.push_str("Another line in the agent panel's message editor\n");
}
editor.set_text(lines.as_str(), window, cx);
editor.change_selections(Default::default(), window, cx, |selections| {
selections.select_ranges([Point::new(90, 0)..Point::new(90, 0)]);
});
editor.set_scroll_position(gpui::Point::new(0., 0.), window, cx);
});
});
cx.run_until_parked();
// Before proceeding, let's assert that the cursor is indeed off screen,
// otherwise the rest of the test doesn't make sense.
message_editor.update_in(&mut cx, |message_editor, window, cx| {
message_editor.editor.update(cx, |editor, cx| {
let snapshot = editor.snapshot(window, cx);
let cursor_row = editor.selections.newest::<Point>(&snapshot).head().row;
let scroll_top = snapshot.scroll_position().y as u32;
let visible_lines = editor.visible_line_count().unwrap() as u32;
let visible_range = scroll_top..(scroll_top + visible_lines);
assert!(!visible_range.contains(&cursor_row));
})
});
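The assertion above intersects the cursor row with the half-open range `scroll_top..scroll_top + visible_lines`. A minimal standalone sketch of that check (hypothetical helper names, not Zed's editor API):

```rust
// Sketch of the visibility check the test performs; names are stand-ins.
fn cursor_visible(cursor_row: u32, scroll_top: u32, visible_lines: u32) -> bool {
    (scroll_top..scroll_top + visible_lines).contains(&cursor_row)
}

fn main() {
    // Scrolled to the top with a 40-line viewport: line 90 is off screen.
    assert!(!cursor_visible(90, 0, 40));
    // With the viewport starting at line 80, line 90 is visible.
    assert!(cursor_visible(90, 80, 40));
}
```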
// Now let's insert the selection in the Agent Panel's editor and
// confirm that, after the insertion, the cursor is now in the visible
// range.
message_editor.update_in(&mut cx, |message_editor, window, cx| {
message_editor.insert_selections(window, cx);
});
cx.run_until_parked();
message_editor.update_in(&mut cx, |message_editor, window, cx| {
message_editor.editor.update(cx, |editor, cx| {
let snapshot = editor.snapshot(window, cx);
let cursor_row = editor.selections.newest::<Point>(&snapshot).head().row;
let scroll_top = snapshot.scroll_position().y as u32;
let visible_lines = editor.visible_line_count().unwrap() as u32;
let visible_range = scroll_top..(scroll_top + visible_lines);
assert!(visible_range.contains(&cursor_row));
})
});
}
}


@@ -1,8 +1,10 @@
use acp_thread::AgentSessionModes;
use agent_client_protocol as acp;
use agent_servers::AgentServer;
use agent_settings::AgentSettings;
use fs::Fs;
use gpui::{Context, Entity, FocusHandle, WeakEntity, Window, prelude::*};
use settings::Settings as _;
use std::{rc::Rc, sync::Arc};
use ui::{
Button, ContextMenu, ContextMenuEntry, DocumentationEdge, DocumentationSide, KeyBinding,
@@ -84,6 +86,14 @@ impl ModeSelector {
let current_mode = self.connection.current_mode();
let default_mode = self.agent_server.default_mode(cx);
let settings = AgentSettings::get_global(cx);
let side = match settings.dock {
settings::DockPosition::Left => DocumentationSide::Right,
settings::DockPosition::Bottom | settings::DockPosition::Right => {
DocumentationSide::Left
}
};
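The side selection above can be sketched in isolation with stand-in enums for the settings and UI types (hypothetical, not the real `settings`/`ui` definitions): a left-docked panel opens its documentation aside to the right so the popover is not clipped at the screen edge, and everything else opens it to the left.

```rust
#[derive(Debug, PartialEq)]
enum DocumentationSide { Left, Right }

enum DockPosition { Left, Bottom, Right }

// Mirror of the match in ModeSelector above, using stand-in types.
fn documentation_side(dock: DockPosition) -> DocumentationSide {
    match dock {
        DockPosition::Left => DocumentationSide::Right,
        DockPosition::Bottom | DockPosition::Right => DocumentationSide::Left,
    }
}

fn main() {
    assert_eq!(documentation_side(DockPosition::Left), DocumentationSide::Right);
    assert_eq!(documentation_side(DockPosition::Bottom), DocumentationSide::Left);
    assert_eq!(documentation_side(DockPosition::Right), DocumentationSide::Left);
}
```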
for mode in all_modes {
let is_selected = &mode.id == &current_mode;
let is_default = Some(&mode.id) == default_mode.as_ref();
@@ -91,7 +101,7 @@ impl ModeSelector {
.toggleable(IconPosition::End, is_selected);
let entry = if let Some(description) = &mode.description {
entry.documentation_aside(DocumentationSide::Left, DocumentationEdge::Bottom, {
entry.documentation_aside(side, DocumentationEdge::Bottom, {
let description = description.clone();
move |cx| {


@@ -3631,6 +3631,7 @@ impl AcpThreadView {
.child(
h_flex()
.id("edits-container")
.cursor_pointer()
.gap_1()
.child(Disclosure::new("edits-disclosure", expanded))
.map(|this| {
@@ -3770,6 +3771,7 @@ impl AcpThreadView {
Label::new(name.to_string())
.size(LabelSize::XSmall)
.buffer_font(cx)
.ml_1p5()
});
let file_icon = FileIcons::get_icon(path.as_std_path(), cx)
@@ -3801,14 +3803,30 @@ impl AcpThreadView {
})
.child(
h_flex()
.id(("file-name-row", index))
.relative()
.id(("file-name", index))
.pr_8()
.gap_1p5()
.w_full()
.overflow_x_scroll()
.child(file_icon)
.child(h_flex().gap_0p5().children(file_name).children(file_path))
.child(
h_flex()
.id(("file-name-path", index))
.cursor_pointer()
.pr_0p5()
.gap_0p5()
.hover(|s| s.bg(cx.theme().colors().element_hover))
.rounded_xs()
.child(file_icon)
.children(file_name)
.children(file_path)
.tooltip(Tooltip::text("Go to File"))
.on_click({
let buffer = buffer.clone();
cx.listener(move |this, _, window, cx| {
this.open_edited_buffer(&buffer, window, cx);
})
}),
)
.child(
div()
.absolute()
@@ -3818,13 +3836,7 @@ impl AcpThreadView {
.bottom_0()
.right_0()
.bg(overlay_gradient),
)
.on_click({
let buffer = buffer.clone();
cx.listener(move |this, _, window, cx| {
this.open_edited_buffer(&buffer, window, cx);
})
}),
),
)
.child(
h_flex()
@@ -4571,14 +4583,29 @@ impl AcpThreadView {
window: &mut Window,
cx: &mut Context<Self>,
) {
if window.is_window_active() || !self.notifications.is_empty() {
if !self.notifications.is_empty() {
return;
}
let settings = AgentSettings::get_global(cx);
let window_is_inactive = !window.is_window_active();
let panel_is_hidden = self
.workspace
.upgrade()
.map(|workspace| AgentPanel::is_hidden(&workspace, cx))
.unwrap_or(true);
let should_notify = window_is_inactive || panel_is_hidden;
if !should_notify {
return;
}
// TODO: Change this once we have title summarization for external agents.
let title = self.agent.name();
match AgentSettings::get_global(cx).notify_when_agent_waiting {
match settings.notify_when_agent_waiting {
NotifyWhenAgentWaiting::PrimaryScreen => {
if let Some(primary) = cx.primary_display() {
self.pop_up(icon, caption.into(), title, window, primary, cx);
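The gating logic introduced in this hunk reduces to: notify when the window is inactive OR the panel is hidden, but never while a notification is already showing. A minimal sketch of that decision (hypothetical stand-in for the view state, not the actual `AcpThreadView` method):

```rust
// Stand-in for the should_notify decision above.
fn should_notify(window_is_active: bool, panel_is_hidden: bool, already_notified: bool) -> bool {
    if already_notified {
        return false;
    }
    !window_is_active || panel_is_hidden
}

fn main() {
    assert!(should_notify(true, true, false));   // active window, hidden panel
    assert!(should_notify(false, false, false)); // inactive window, visible panel
    assert!(!should_notify(true, false, false)); // active window, visible panel
    assert!(!should_notify(false, true, true));  // a notification already exists
}
```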
@@ -5581,7 +5608,7 @@ fn default_markdown_style(
let theme_settings = ThemeSettings::get_global(cx);
let colors = cx.theme().colors();
let buffer_font_size = TextSize::Small.rems(cx);
let buffer_font_size = theme_settings.agent_buffer_font_size(cx);
let mut text_style = window.text_style();
let line_height = buffer_font_size * 1.75;
@@ -5593,9 +5620,9 @@ fn default_markdown_style(
};
let font_size = if buffer_font {
TextSize::Small.rems(cx)
theme_settings.agent_buffer_font_size(cx)
} else {
TextSize::Default.rems(cx)
theme_settings.agent_ui_font_size(cx)
};
let text_color = if muted_text {
@@ -5892,6 +5919,107 @@ pub(crate) mod tests {
);
}
#[gpui::test]
async fn test_notification_when_panel_hidden(cx: &mut TestAppContext) {
init_test(cx);
let (thread_view, cx) = setup_thread_view(StubAgentServer::default_response(), cx).await;
add_to_workspace(thread_view.clone(), cx);
let message_editor = cx.read(|cx| thread_view.read(cx).message_editor.clone());
message_editor.update_in(cx, |editor, window, cx| {
editor.set_text("Hello", window, cx);
});
// The window stays active (we don't deactivate it), but the panel is hidden.
// Note: in the test environment, the panel is not actually added to a dock,
// so `AgentPanel::is_hidden` returns true.
thread_view.update_in(cx, |thread_view, window, cx| {
thread_view.send(window, cx);
});
cx.run_until_parked();
// Should show notification because window is active but panel is hidden
assert!(
cx.windows()
.iter()
.any(|window| window.downcast::<AgentNotification>().is_some()),
"Expected notification when panel is hidden"
);
}
#[gpui::test]
async fn test_notification_still_works_when_window_inactive(cx: &mut TestAppContext) {
init_test(cx);
let (thread_view, cx) = setup_thread_view(StubAgentServer::default_response(), cx).await;
let message_editor = cx.read(|cx| thread_view.read(cx).message_editor.clone());
message_editor.update_in(cx, |editor, window, cx| {
editor.set_text("Hello", window, cx);
});
// Deactivate window - should show notification regardless of setting
cx.deactivate_window();
thread_view.update_in(cx, |thread_view, window, cx| {
thread_view.send(window, cx);
});
cx.run_until_parked();
// Should still show notification when window is inactive (existing behavior)
assert!(
cx.windows()
.iter()
.any(|window| window.downcast::<AgentNotification>().is_some()),
"Expected notification when window is inactive"
);
}
#[gpui::test]
async fn test_notification_respects_never_setting(cx: &mut TestAppContext) {
init_test(cx);
// Set notify_when_agent_waiting to Never
cx.update(|cx| {
AgentSettings::override_global(
AgentSettings {
notify_when_agent_waiting: NotifyWhenAgentWaiting::Never,
..AgentSettings::get_global(cx).clone()
},
cx,
);
});
let (thread_view, cx) = setup_thread_view(StubAgentServer::default_response(), cx).await;
let message_editor = cx.read(|cx| thread_view.read(cx).message_editor.clone());
message_editor.update_in(cx, |editor, window, cx| {
editor.set_text("Hello", window, cx);
});
// Window is active
thread_view.update_in(cx, |thread_view, window, cx| {
thread_view.send(window, cx);
});
cx.run_until_parked();
// Should NOT show notification because notify_when_agent_waiting is Never
assert!(
!cx.windows()
.iter()
.any(|window| window.downcast::<AgentNotification>().is_some()),
"Expected no notification when notify_when_agent_waiting is Never"
);
}
async fn setup_thread_view(
agent: impl AgentServer + 'static,
cx: &mut TestAppContext,


@@ -23,15 +23,18 @@ use language::LanguageRegistry;
use language_model::{
LanguageModelProvider, LanguageModelProviderId, LanguageModelRegistry, ZED_CLOUD_PROVIDER_ID,
};
use language_models::AllLanguageModelSettings;
use notifications::status_toast::{StatusToast, ToastIcon};
use project::{
agent_server_store::{AgentServerStore, CLAUDE_CODE_NAME, CODEX_NAME, GEMINI_NAME},
context_server_store::{ContextServerConfiguration, ContextServerStatus, ContextServerStore},
};
use settings::{SettingsStore, update_settings_file};
use rope::Rope;
use settings::{Settings, SettingsStore, update_settings_file};
use ui::{
Chip, CommonAnimationExt, ContextMenu, Disclosure, Divider, DividerColor, ElevationIndex,
Indicator, PopoverMenu, Switch, SwitchColor, Tooltip, WithScrollbar, prelude::*,
Button, ButtonStyle, Chip, CommonAnimationExt, ContextMenu, Disclosure, Divider, DividerColor,
ElevationIndex, IconName, IconPosition, IconSize, Indicator, LabelSize, PopoverMenu, Switch,
SwitchColor, Tooltip, WithScrollbar, prelude::*,
};
use util::ResultExt as _;
use workspace::{Workspace, create_and_open_local_file};
@@ -303,10 +306,76 @@ impl AgentConfiguration {
}
})),
)
}),
})
.when(
is_expanded && is_removable_provider(&provider.id(), cx),
|this| {
this.child(
Button::new(
SharedString::from(format!("delete-provider-{provider_id}")),
"Remove Provider",
)
.full_width()
.style(ButtonStyle::Outlined)
.icon_position(IconPosition::Start)
.icon(IconName::Trash)
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.label_size(LabelSize::Small)
.on_click(cx.listener({
let provider = provider.clone();
move |this, _event, window, cx| {
this.delete_provider(provider.clone(), window, cx);
}
})),
)
},
),
)
}
fn delete_provider(
&mut self,
provider: Arc<dyn LanguageModelProvider>,
window: &mut Window,
cx: &mut Context<Self>,
) {
let fs = self.fs.clone();
let provider_id = provider.id();
cx.spawn_in(window, async move |_, cx| {
cx.update(|_window, cx| {
update_settings_file(fs.clone(), cx, {
let provider_id = provider_id.clone();
move |settings, _| {
if let Some(ref mut openai_compatible) = settings
.language_models
.as_mut()
.and_then(|lm| lm.openai_compatible.as_mut())
{
let key_to_remove: Arc<str> = Arc::from(provider_id.0.as_ref());
openai_compatible.remove(&key_to_remove);
}
}
});
})
.log_err();
cx.update(|_window, cx| {
LanguageModelRegistry::global(cx).update(cx, {
let provider_id = provider_id.clone();
move |registry, cx| {
registry.unregister_provider(provider_id, cx);
}
})
})
.log_err();
anyhow::Ok(())
})
.detach_and_log_err(cx);
}
fn render_provider_configuration_section(
&mut self,
cx: &mut Context<Self>,
@@ -1114,8 +1183,11 @@ async fn open_new_agent_servers_entry_in_settings_editor(
) -> Result<()> {
let settings_editor = workspace
.update_in(cx, |_, window, cx| {
create_and_open_local_file(paths::settings_file(), window, cx, || {
settings::initial_user_settings_content().as_ref().into()
create_and_open_local_file(paths::settings_file(), window, cx, |cx| {
Rope::from_str(
&settings::initial_user_settings_content(),
cx.background_executor(),
)
})
})?
.await?
@@ -1221,3 +1293,14 @@ fn find_text_in_buffer(
None
}
}
// OpenAI-compatible providers are user-configured and can be removed,
// whereas built-in providers (like Anthropic, OpenAI, Google, etc.) can't.
//
// If in the future we have more "API-compatible-type" of providers,
// they should be included here as removable providers.
fn is_removable_provider(provider_id: &LanguageModelProviderId, cx: &App) -> bool {
AllLanguageModelSettings::get_global(cx)
.openai_compatible
.contains_key(provider_id.0.as_ref())
}
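The check above is a plain map lookup: a provider is removable iff its id appears in the user-configured `openai_compatible` settings map. A hypothetical stand-in using a bare `HashMap` (the real settings type differs):

```rust
use std::collections::HashMap;

// Stand-in for is_removable_provider: only ids present in the
// user-configured map count as removable.
fn is_removable(openai_compatible: &HashMap<String, String>, provider_id: &str) -> bool {
    openai_compatible.contains_key(provider_id)
}

fn main() {
    let mut settings = HashMap::new();
    settings.insert("my-proxy".to_string(), "https://llm.example.com/v1".to_string());
    assert!(is_removable(&settings, "my-proxy"));
    assert!(!is_removable(&settings, "anthropic")); // built-in: not removable
}
```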


@@ -70,14 +70,6 @@ impl AgentDiffThread {
}
}
fn is_generating(&self, cx: &App) -> bool {
match self {
AgentDiffThread::AcpThread(thread) => {
thread.read(cx).status() == acp_thread::ThreadStatus::Generating
}
}
}
fn has_pending_edit_tool_uses(&self, cx: &App) -> bool {
match self {
AgentDiffThread::AcpThread(thread) => thread.read(cx).has_pending_edit_tool_calls(),
@@ -970,9 +962,7 @@ impl AgentDiffToolbar {
None => ToolbarItemLocation::Hidden,
Some(AgentDiffToolbarItem::Pane(_)) => ToolbarItemLocation::PrimaryRight,
Some(AgentDiffToolbarItem::Editor { state, .. }) => match state {
EditorState::Generating | EditorState::Reviewing => {
ToolbarItemLocation::PrimaryRight
}
EditorState::Reviewing => ToolbarItemLocation::PrimaryRight,
EditorState::Idle => ToolbarItemLocation::Hidden,
},
}
@@ -1050,7 +1040,6 @@ impl Render for AgentDiffToolbar {
let content = match state {
EditorState::Idle => return Empty.into_any(),
EditorState::Generating => vec![spinner_icon],
EditorState::Reviewing => vec![
h_flex()
.child(
@@ -1222,7 +1211,6 @@ pub struct AgentDiff {
pub enum EditorState {
Idle,
Reviewing,
Generating,
}
struct WorkspaceThread {
@@ -1545,15 +1533,11 @@ impl AgentDiff {
multibuffer.add_diff(diff_handle.clone(), cx);
});
let new_state = if thread.is_generating(cx) {
EditorState::Generating
} else {
EditorState::Reviewing
};
let reviewing_state = EditorState::Reviewing;
let previous_state = self
.reviewing_editors
.insert(weak_editor.clone(), new_state.clone());
.insert(weak_editor.clone(), reviewing_state.clone());
if previous_state.is_none() {
editor.update(cx, |editor, cx| {
@@ -1566,7 +1550,9 @@ impl AgentDiff {
unaffected.remove(weak_editor);
}
if new_state == EditorState::Reviewing && previous_state != Some(new_state) {
if reviewing_state == EditorState::Reviewing
&& previous_state != Some(reviewing_state)
{
// Jump to first hunk when we enter review mode
editor.update(cx, |editor, cx| {
let snapshot = multibuffer.read(cx).snapshot(cx);


@@ -729,6 +729,25 @@ impl AgentPanel {
&self.context_server_registry
}
pub fn is_hidden(workspace: &Entity<Workspace>, cx: &App) -> bool {
let workspace_read = workspace.read(cx);
workspace_read
.panel::<AgentPanel>(cx)
.map(|panel| {
let panel_id = Entity::entity_id(&panel);
let is_visible = workspace_read.all_docks().iter().any(|dock| {
dock.read(cx)
.visible_panel()
.is_some_and(|visible_panel| visible_panel.panel_id() == panel_id)
});
!is_visible
})
.unwrap_or(true)
}
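The new `is_hidden` helper treats the panel as hidden unless some dock's currently visible panel carries the same entity id, and an unregistered panel also counts as hidden. A simplified mirror with stand-in types (hypothetical, not the workspace API):

```rust
#[derive(Clone, Copy, PartialEq)]
struct PanelId(u64);

struct Dock {
    visible_panel: Option<PanelId>,
}

// Stand-in for AgentPanel::is_hidden above.
fn is_hidden(panel: Option<PanelId>, docks: &[Dock]) -> bool {
    match panel {
        Some(id) => !docks.iter().any(|dock| dock.visible_panel == Some(id)),
        None => true,
    }
}

fn main() {
    let docks = [
        Dock { visible_panel: Some(PanelId(1)) },
        Dock { visible_panel: None },
    ];
    assert!(!is_hidden(Some(PanelId(1)), &docks)); // shown in the first dock
    assert!(is_hidden(Some(PanelId(2)), &docks));  // registered but not visible
    assert!(is_hidden(None, &docks));              // panel not registered at all
}
```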
fn active_thread_view(&self) -> Option<&Entity<AcpThreadView>> {
match &self.active_view {
ActiveView::ExternalAgentThread { thread_view, .. } => Some(thread_view),

View File

@@ -487,9 +487,10 @@ impl CodegenAlternative {
) {
let start_time = Instant::now();
let snapshot = self.snapshot.clone();
let selected_text = snapshot
.text_for_range(self.range.start..self.range.end)
.collect::<Rope>();
let selected_text = Rope::from_iter(
snapshot.text_for_range(self.range.start..self.range.end),
cx.background_executor(),
);
let selection_start = self.range.start.to_point(&snapshot);


@@ -2591,11 +2591,12 @@ impl SearchableItem for TextThreadEditor {
&mut self,
index: usize,
matches: &[Self::Match],
collapse: bool,
window: &mut Window,
cx: &mut Context<Self>,
) {
self.editor.update(cx, |editor, cx| {
editor.activate_match(index, matches, window, cx);
editor.activate_match(index, matches, collapse, window, cx);
});
}


@@ -744,12 +744,13 @@ impl TextThread {
telemetry: Option<Arc<Telemetry>>,
cx: &mut Context<Self>,
) -> Self {
let buffer = cx.new(|_cx| {
let buffer = cx.new(|cx| {
let buffer = Buffer::remote(
language::BufferId::new(1).unwrap(),
replica_id,
capability,
"",
cx.background_executor(),
);
buffer.set_language_registry(language_registry.clone());
buffer


@@ -26,6 +26,7 @@ serde_json.workspace = true
settings.workspace = true
smol.workspace = true
tempfile.workspace = true
util.workspace = true
workspace.workspace = true
[target.'cfg(not(target_os = "windows"))'.dependencies]


@@ -962,7 +962,7 @@ pub async fn finalize_auto_update_on_quit() {
.parent()
.map(|p| p.join("tools").join("auto_update_helper.exe"))
{
let mut command = smol::process::Command::new(helper);
let mut command = util::command::new_smol_command(helper);
command.arg("--launch");
command.arg("false");
if let Ok(mut cmd) = command.spawn() {


@@ -1,6 +1,9 @@
use futures::channel::oneshot;
use git2::{DiffLineType as GitDiffLineType, DiffOptions as GitOptions, Patch as GitPatch};
use gpui::{App, AppContext as _, AsyncApp, Context, Entity, EventEmitter, Task, TaskLabel};
use gpui::{
App, AppContext as _, AsyncApp, BackgroundExecutor, Context, Entity, EventEmitter, Task,
TaskLabel,
};
use language::{Language, LanguageRegistry};
use rope::Rope;
use std::{
@@ -191,7 +194,7 @@ impl BufferDiffSnapshot {
let base_text_exists;
let base_text_snapshot;
if let Some(text) = &base_text {
let base_text_rope = Rope::from(text.as_str());
let base_text_rope = Rope::from_str(text.as_str(), cx.background_executor());
base_text_pair = Some((text.clone(), base_text_rope.clone()));
let snapshot =
language::Buffer::build_snapshot(base_text_rope, language, language_registry, cx);
@@ -311,6 +314,7 @@ impl BufferDiffInner {
hunks: &[DiffHunk],
buffer: &text::BufferSnapshot,
file_exists: bool,
cx: &BackgroundExecutor,
) -> Option<Rope> {
let head_text = self
.base_text_exists
@@ -505,7 +509,7 @@ impl BufferDiffInner {
for (old_range, replacement_text) in edits {
new_index_text.append(index_cursor.slice(old_range.start));
index_cursor.seek_forward(old_range.end);
new_index_text.push(&replacement_text);
new_index_text.push(&replacement_text, cx);
}
new_index_text.append(index_cursor.suffix());
Some(new_index_text)
@@ -962,6 +966,7 @@ impl BufferDiff {
hunks,
buffer,
file_exists,
cx.background_executor(),
);
cx.emit(BufferDiffEvent::HunksStagedOrUnstaged(
@@ -1385,7 +1390,12 @@ mod tests {
"
.unindent();
let mut buffer = Buffer::new(ReplicaId::LOCAL, BufferId::new(1).unwrap(), buffer_text);
let mut buffer = Buffer::new(
ReplicaId::LOCAL,
BufferId::new(1).unwrap(),
buffer_text,
cx.background_executor(),
);
let mut diff = BufferDiffSnapshot::new_sync(buffer.clone(), diff_base.clone(), cx);
assert_hunks(
diff.hunks_intersecting_range(Anchor::MIN..Anchor::MAX, &buffer),
@@ -1394,7 +1404,7 @@ mod tests {
&[(1..2, "two\n", "HELLO\n", DiffHunkStatus::modified_none())],
);
buffer.edit([(0..0, "point five\n")]);
buffer.edit([(0..0, "point five\n")], cx.background_executor());
diff = BufferDiffSnapshot::new_sync(buffer.clone(), diff_base.clone(), cx);
assert_hunks(
diff.hunks_intersecting_range(Anchor::MIN..Anchor::MAX, &buffer),
@@ -1459,7 +1469,12 @@ mod tests {
"
.unindent();
let buffer = Buffer::new(ReplicaId::LOCAL, BufferId::new(1).unwrap(), buffer_text);
let buffer = Buffer::new(
ReplicaId::LOCAL,
BufferId::new(1).unwrap(),
buffer_text,
cx.background_executor(),
);
let unstaged_diff = BufferDiffSnapshot::new_sync(buffer.clone(), index_text, cx);
let mut uncommitted_diff =
BufferDiffSnapshot::new_sync(buffer.clone(), head_text.clone(), cx);
@@ -1528,7 +1543,12 @@ mod tests {
"
.unindent();
let buffer = Buffer::new(ReplicaId::LOCAL, BufferId::new(1).unwrap(), buffer_text);
let buffer = Buffer::new(
ReplicaId::LOCAL,
BufferId::new(1).unwrap(),
buffer_text,
cx.background_executor(),
);
let diff = cx
.update(|cx| {
BufferDiffSnapshot::new_with_base_text(
@@ -1791,7 +1811,12 @@ mod tests {
for example in table {
let (buffer_text, ranges) = marked_text_ranges(&example.buffer_marked_text, false);
let buffer = Buffer::new(ReplicaId::LOCAL, BufferId::new(1).unwrap(), buffer_text);
let buffer = Buffer::new(
ReplicaId::LOCAL,
BufferId::new(1).unwrap(),
buffer_text,
cx.background_executor(),
);
let hunk_range =
buffer.anchor_before(ranges[0].start)..buffer.anchor_before(ranges[0].end);
@@ -1868,6 +1893,7 @@ mod tests {
ReplicaId::LOCAL,
BufferId::new(1).unwrap(),
buffer_text.clone(),
cx.background_executor(),
);
let unstaged = BufferDiffSnapshot::new_sync(buffer.clone(), index_text, cx);
let uncommitted = BufferDiffSnapshot::new_sync(buffer.clone(), head_text.clone(), cx);
@@ -1941,7 +1967,12 @@ mod tests {
"
.unindent();
let mut buffer = Buffer::new(ReplicaId::LOCAL, BufferId::new(1).unwrap(), buffer_text_1);
let mut buffer = Buffer::new(
ReplicaId::LOCAL,
BufferId::new(1).unwrap(),
buffer_text_1,
cx.background_executor(),
);
let empty_diff = cx.update(|cx| BufferDiffSnapshot::empty(&buffer, cx));
let diff_1 = BufferDiffSnapshot::new_sync(buffer.clone(), base_text.clone(), cx);
@@ -1961,6 +1992,7 @@ mod tests {
NINE
"
.unindent(),
cx.background_executor(),
);
let diff_2 = BufferDiffSnapshot::new_sync(buffer.clone(), base_text.clone(), cx);
assert_eq!(None, diff_2.inner.compare(&diff_1.inner, &buffer));
@@ -1978,6 +2010,7 @@ mod tests {
NINE
"
.unindent(),
cx.background_executor(),
);
let diff_3 = BufferDiffSnapshot::new_sync(buffer.clone(), base_text.clone(), cx);
let range = diff_3.inner.compare(&diff_2.inner, &buffer).unwrap();
@@ -1995,6 +2028,7 @@ mod tests {
NINE
"
.unindent(),
cx.background_executor(),
);
let diff_4 = BufferDiffSnapshot::new_sync(buffer.clone(), base_text.clone(), cx);
let range = diff_4.inner.compare(&diff_3.inner, &buffer).unwrap();
@@ -2013,6 +2047,7 @@ mod tests {
NINE
"
.unindent(),
cx.background_executor(),
);
let diff_5 = BufferDiffSnapshot::new_sync(buffer.snapshot(), base_text.clone(), cx);
let range = diff_5.inner.compare(&diff_4.inner, &buffer).unwrap();
@@ -2031,6 +2066,7 @@ mod tests {
«nine»
"
.unindent(),
cx.background_executor(),
);
let diff_6 = BufferDiffSnapshot::new_sync(buffer.snapshot(), base_text, cx);
let range = diff_6.inner.compare(&diff_5.inner, &buffer).unwrap();
@@ -2140,14 +2176,14 @@ mod tests {
let working_copy = gen_working_copy(rng, &head_text);
let working_copy = cx.new(|cx| {
language::Buffer::local_normalized(
Rope::from(working_copy.as_str()),
Rope::from_str(working_copy.as_str(), cx.background_executor()),
text::LineEnding::default(),
cx,
)
});
let working_copy = working_copy.read_with(cx, |working_copy, _| working_copy.snapshot());
let mut index_text = if rng.random() {
Rope::from(head_text.as_str())
Rope::from_str(head_text.as_str(), cx.background_executor())
} else {
working_copy.as_rope().clone()
};


@@ -70,6 +70,7 @@ impl ChannelBuffer {
ReplicaId::new(response.replica_id as u16),
capability,
base_text,
cx.background_executor(),
)
})?;
buffer.update(cx, |buffer, cx| buffer.apply_ops(operations, cx))?;


@@ -182,8 +182,8 @@ pub fn build_prompt(
}
for related_file in &request.included_files {
writeln!(&mut prompt, "`````filename={}", related_file.path.display()).unwrap();
write_excerpts(
write_codeblock(
&related_file.path,
&related_file.excerpts,
if related_file.path == request.excerpt_path {
&insertions
@@ -194,7 +194,6 @@ pub fn build_prompt(
request.prompt_format == PromptFormat::NumLinesUniDiff,
&mut prompt,
);
write!(&mut prompt, "`````\n\n").unwrap();
}
}
@@ -205,6 +204,25 @@ pub fn build_prompt(
Ok((prompt, section_labels))
}
pub fn write_codeblock<'a>(
path: &Path,
excerpts: impl IntoIterator<Item = &'a Excerpt>,
sorted_insertions: &[(Point, &str)],
file_line_count: Line,
include_line_numbers: bool,
output: &'a mut String,
) {
writeln!(output, "`````{}", path.display()).unwrap();
write_excerpts(
excerpts,
sorted_insertions,
file_line_count,
include_line_numbers,
output,
);
write!(output, "`````\n\n").unwrap();
}
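The extracted `write_codeblock` above wraps excerpt text in a five-backtick fence whose info string is the file path. A hypothetical simplification (string body instead of the real `Excerpt` iterator, no line numbers):

```rust
use std::fmt::Write as _;
use std::path::Path;

// Simplified sketch of write_codeblock: emit a five-backtick fence whose
// info string is the path, the body, then a closing fence and blank line.
fn write_codeblock(path: &Path, body: &str, output: &mut String) {
    let fence = "`".repeat(5);
    writeln!(output, "{}{}", fence, path.display()).unwrap();
    output.push_str(body);
    output.push_str(&fence);
    output.push_str("\n\n");
}

fn main() {
    let mut out = String::new();
    write_codeblock(Path::new("src/main.rs"), "fn main() {}\n", &mut out);
    assert_eq!(out, "`````src/main.rs\nfn main() {}\n`````\n\n");
}
```

Five backticks are used so the fence survives even when the excerpt itself contains triple-backtick fenced blocks.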
pub fn write_excerpts<'a>(
excerpts: impl IntoIterator<Item = &'a Excerpt>,
sorted_insertions: &[(Point, &str)],
@@ -597,8 +615,7 @@ impl<'a> SyntaxBasedPrompt<'a> {
disjoint_snippets.push(current_snippet);
}
// TODO: remove filename=?
writeln!(output, "`````filename={}", file_path.display()).ok();
writeln!(output, "`````path={}", file_path.display()).ok();
let mut skipped_last_snippet = false;
for (snippet, range) in disjoint_snippets {
let section_index = section_ranges.len();


@@ -701,12 +701,12 @@ impl Database {
return Ok(());
}
let mut text_buffer = text::Buffer::new(
let mut text_buffer = text::Buffer::new_slow(
clock::ReplicaId::LOCAL,
text::BufferId::new(1).unwrap(),
base_text,
);
text_buffer.apply_ops(operations.into_iter().filter_map(operation_from_wire));
text_buffer.apply_ops(operations.into_iter().filter_map(operation_from_wire), None);
let base_text = text_buffer.text();
let epoch = buffer.epoch + 1;


@@ -74,11 +74,21 @@ async fn test_channel_buffers(db: &Arc<Database>) {
ReplicaId::new(0),
text::BufferId::new(1).unwrap(),
"".to_string(),
&db.test_options.as_ref().unwrap().executor,
);
let operations = vec![
buffer_a.edit([(0..0, "hello world")]),
buffer_a.edit([(5..5, ", cruel")]),
buffer_a.edit([(0..5, "goodbye")]),
buffer_a.edit(
[(0..0, "hello world")],
&db.test_options.as_ref().unwrap().executor,
),
buffer_a.edit(
[(5..5, ", cruel")],
&db.test_options.as_ref().unwrap().executor,
),
buffer_a.edit(
[(0..5, "goodbye")],
&db.test_options.as_ref().unwrap().executor,
),
buffer_a.undo().unwrap().1,
];
assert_eq!(buffer_a.text(), "hello, cruel world");
@@ -102,15 +112,19 @@ async fn test_channel_buffers(db: &Arc<Database>) {
ReplicaId::new(0),
text::BufferId::new(1).unwrap(),
buffer_response_b.base_text,
&db.test_options.as_ref().unwrap().executor,
);
buffer_b.apply_ops(
buffer_response_b.operations.into_iter().map(|operation| {
let operation = proto::deserialize_operation(operation).unwrap();
if let language::Operation::Buffer(operation) = operation {
operation
} else {
unreachable!()
}
}),
None,
);
buffer_b.apply_ops(buffer_response_b.operations.into_iter().map(|operation| {
let operation = proto::deserialize_operation(operation).unwrap();
if let language::Operation::Buffer(operation) = operation {
operation
} else {
unreachable!()
}
}));
assert_eq!(buffer_b.text(), "hello, cruel world");
@@ -247,6 +261,7 @@ async fn test_channel_buffers_last_operations(db: &Database) {
ReplicaId::new(res.replica_id as u16),
text::BufferId::new(1).unwrap(),
"".to_string(),
&db.test_options.as_ref().unwrap().executor,
));
}
@@ -255,9 +270,9 @@ async fn test_channel_buffers_last_operations(db: &Database) {
user_id,
db,
vec![
text_buffers[0].edit([(0..0, "a")]),
text_buffers[0].edit([(0..0, "b")]),
text_buffers[0].edit([(0..0, "c")]),
text_buffers[0].edit([(0..0, "a")], &db.test_options.as_ref().unwrap().executor),
text_buffers[0].edit([(0..0, "b")], &db.test_options.as_ref().unwrap().executor),
text_buffers[0].edit([(0..0, "c")], &db.test_options.as_ref().unwrap().executor),
],
)
.await;
@@ -267,9 +282,9 @@ async fn test_channel_buffers_last_operations(db: &Database) {
user_id,
db,
vec![
text_buffers[1].edit([(0..0, "d")]),
text_buffers[1].edit([(1..1, "e")]),
text_buffers[1].edit([(2..2, "f")]),
text_buffers[1].edit([(0..0, "d")], &db.test_options.as_ref().unwrap().executor),
text_buffers[1].edit([(1..1, "e")], &db.test_options.as_ref().unwrap().executor),
text_buffers[1].edit([(2..2, "f")], &db.test_options.as_ref().unwrap().executor),
],
)
.await;
@@ -286,14 +301,15 @@ async fn test_channel_buffers_last_operations(db: &Database) {
replica_id,
text::BufferId::new(1).unwrap(),
"def".to_string(),
&db.test_options.as_ref().unwrap().executor,
);
update_buffer(
buffers[1].channel_id,
user_id,
db,
vec![
text_buffers[1].edit([(0..0, "g")]),
text_buffers[1].edit([(0..0, "h")]),
text_buffers[1].edit([(0..0, "g")], &db.test_options.as_ref().unwrap().executor),
text_buffers[1].edit([(0..0, "h")], &db.test_options.as_ref().unwrap().executor),
],
)
.await;
@@ -302,7 +318,7 @@ async fn test_channel_buffers_last_operations(db: &Database) {
buffers[2].channel_id,
user_id,
db,
vec![text_buffers[2].edit([(0..0, "i")])],
vec![text_buffers[2].edit([(0..0, "i")], &db.test_options.as_ref().unwrap().executor)],
)
.await;


@@ -39,6 +39,7 @@ use std::{
Arc,
atomic::{self, AtomicBool, AtomicUsize},
},
time::Duration,
};
use text::Point;
use util::{path, rel_path::rel_path, uri};
@@ -1817,14 +1818,7 @@ async fn test_mutual_editor_inlay_hint_cache_update(
settings.project.all_languages.defaults.inlay_hints =
Some(InlayHintSettingsContent {
enabled: Some(true),
show_value_hints: Some(true),
edit_debounce_ms: Some(0),
scroll_debounce_ms: Some(0),
show_type_hints: Some(true),
show_parameter_hints: Some(false),
show_other_hints: Some(true),
show_background: Some(false),
toggle_on_modifiers_press: None,
..InlayHintSettingsContent::default()
})
});
});
@@ -1834,15 +1828,8 @@ async fn test_mutual_editor_inlay_hint_cache_update(
store.update_user_settings(cx, |settings| {
settings.project.all_languages.defaults.inlay_hints =
Some(InlayHintSettingsContent {
show_value_hints: Some(true),
enabled: Some(true),
edit_debounce_ms: Some(0),
scroll_debounce_ms: Some(0),
show_type_hints: Some(true),
show_parameter_hints: Some(false),
show_other_hints: Some(true),
show_background: Some(false),
toggle_on_modifiers_press: None,
..InlayHintSettingsContent::default()
})
});
});
@@ -1935,6 +1922,7 @@ async fn test_mutual_editor_inlay_hint_cache_update(
});
let fake_language_server = fake_language_servers.next().await.unwrap();
let editor_a = file_a.await.unwrap().downcast::<Editor>().unwrap();
executor.advance_clock(Duration::from_millis(100));
executor.run_until_parked();
let initial_edit = edits_made.load(atomic::Ordering::Acquire);
@@ -1955,6 +1943,7 @@ async fn test_mutual_editor_inlay_hint_cache_update(
.downcast::<Editor>()
.unwrap();
executor.advance_clock(Duration::from_millis(100));
executor.run_until_parked();
editor_b.update(cx_b, |editor, cx| {
assert_eq!(
@@ -1973,6 +1962,7 @@ async fn test_mutual_editor_inlay_hint_cache_update(
});
cx_b.focus(&editor_b);
executor.advance_clock(Duration::from_secs(1));
executor.run_until_parked();
editor_a.update(cx_a, |editor, cx| {
assert_eq!(
@@ -1996,6 +1986,7 @@ async fn test_mutual_editor_inlay_hint_cache_update(
});
cx_a.focus(&editor_a);
executor.advance_clock(Duration::from_secs(1));
executor.run_until_parked();
editor_a.update(cx_a, |editor, cx| {
assert_eq!(
@@ -2017,6 +2008,7 @@ async fn test_mutual_editor_inlay_hint_cache_update(
.into_response()
.expect("inlay refresh request failed");
executor.advance_clock(Duration::from_secs(1));
executor.run_until_parked();
editor_a.update(cx_a, |editor, cx| {
assert_eq!(


@@ -3694,7 +3694,7 @@ async fn test_buffer_reloading(
assert_eq!(buf.line_ending(), LineEnding::Unix);
});
let new_contents = Rope::from("d\ne\nf");
let new_contents = Rope::from_str_small("d\ne\nf");
client_a
.fs()
.save(
@@ -4479,7 +4479,7 @@ async fn test_reloading_buffer_manually(
.fs()
.save(
path!("/a/a.rs").as_ref(),
&Rope::from("let seven = 7;"),
&Rope::from_str_small("let seven = 7;"),
LineEnding::Unix,
)
.await


@@ -27,6 +27,7 @@ use std::{
rc::Rc,
sync::Arc,
};
use text::Rope;
use util::{
ResultExt, path,
paths::PathStyle,
@@ -938,7 +939,11 @@ impl RandomizedTest for ProjectCollaborationTest {
client
.fs()
.save(&path, &content.as_str().into(), text::LineEnding::Unix)
.save(
&path,
&Rope::from_str_small(content.as_str()),
text::LineEnding::Unix,
)
.await
.unwrap();
}


@@ -41,6 +41,10 @@ util.workspace = true
[dev-dependencies]
dap = { workspace = true, features = ["test-support"] }
fs = { workspace = true, features = ["test-support"] }
gpui = { workspace = true, features = ["test-support"] }
http_client.workspace = true
node_runtime.workspace = true
settings = { workspace = true, features = ["test-support"] }
task = { workspace = true, features = ["test-support"] }
util = { workspace = true, features = ["test-support"] }


@@ -4,6 +4,8 @@ mod go;
mod javascript;
mod python;
#[cfg(test)]
use std::path::PathBuf;
use std::sync::Arc;
use anyhow::Result;
@@ -38,3 +40,65 @@ pub fn init(cx: &mut App) {
}
})
}
#[cfg(test)]
mod test_mocks {
use super::*;
pub(crate) struct MockDelegate {
worktree_root: PathBuf,
}
impl MockDelegate {
pub(crate) fn new() -> Arc<dyn adapters::DapDelegate> {
Arc::new(Self {
worktree_root: PathBuf::from("/tmp/test"),
})
}
}
#[async_trait::async_trait]
impl adapters::DapDelegate for MockDelegate {
fn worktree_id(&self) -> settings::WorktreeId {
settings::WorktreeId::from_usize(0)
}
fn worktree_root_path(&self) -> &std::path::Path {
&self.worktree_root
}
fn http_client(&self) -> Arc<dyn http_client::HttpClient> {
unimplemented!("Not needed for tests")
}
fn node_runtime(&self) -> node_runtime::NodeRuntime {
unimplemented!("Not needed for tests")
}
fn toolchain_store(&self) -> Arc<dyn language::LanguageToolchainStore> {
unimplemented!("Not needed for tests")
}
fn fs(&self) -> Arc<dyn fs::Fs> {
unimplemented!("Not needed for tests")
}
fn output_to_console(&self, _msg: String) {}
async fn which(&self, _command: &std::ffi::OsStr) -> Option<PathBuf> {
None
}
async fn read_text_file(&self, _path: &util::rel_path::RelPath) -> Result<String> {
Ok(String::new())
}
async fn shell_env(&self) -> collections::HashMap<String, String> {
collections::HashMap::default()
}
fn is_headless(&self) -> bool {
false
}
}
}


@@ -23,6 +23,11 @@ use std::{
use util::command::new_smol_command;
use util::{ResultExt, paths::PathStyle, rel_path::RelPath};
enum DebugpyLaunchMode<'a> {
Normal,
AttachWithConnect { host: Option<&'a str> },
}
#[derive(Default)]
pub(crate) struct PythonDebugAdapter {
base_venv_path: OnceCell<Result<Arc<Path>, String>>,
@@ -36,10 +41,11 @@ impl PythonDebugAdapter {
const LANGUAGE_NAME: &'static str = "Python";
async fn generate_debugpy_arguments(
host: &Ipv4Addr,
async fn generate_debugpy_arguments<'a>(
host: &'a Ipv4Addr,
port: u16,
user_installed_path: Option<&Path>,
launch_mode: DebugpyLaunchMode<'a>,
user_installed_path: Option<&'a Path>,
user_args: Option<Vec<String>>,
) -> Result<Vec<String>> {
let mut args = if let Some(user_installed_path) = user_installed_path {
@@ -62,7 +68,20 @@ impl PythonDebugAdapter {
args.extend(if let Some(args) = user_args {
args
} else {
vec![format!("--host={}", host), format!("--port={}", port)]
match launch_mode {
DebugpyLaunchMode::Normal => {
vec![format!("--host={}", host), format!("--port={}", port)]
}
DebugpyLaunchMode::AttachWithConnect { host } => {
let mut args = vec!["connect".to_string()];
if let Some(host) = host {
args.push(format!("{host}:"));
}
args.push(format!("{port}"));
args
}
}
});
Ok(args)
}
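The hunk above switches the trailing debugpy arguments on the launch mode: `--host`/`--port` flags for a normal launch, and a positional `connect` form (optionally prefixed with `host:`) for attach-with-connect. A minimal standalone sketch of that argument assembly — the `DebugpyLaunchMode` enum comes from the diff, while `trailing_args` is an illustrative stand-in for the tail of `generate_debugpy_arguments`:

```rust
// Simplified sketch of the argument assembly in the hunk above.
// `DebugpyLaunchMode` mirrors the diff; this is not Zed's actual function.
enum DebugpyLaunchMode<'a> {
    Normal,
    AttachWithConnect { host: Option<&'a str> },
}

fn trailing_args(host: &str, port: u16, mode: DebugpyLaunchMode) -> Vec<String> {
    match mode {
        // Normal launch: pass host and port as flags.
        DebugpyLaunchMode::Normal => {
            vec![format!("--host={host}"), format!("--port={port}")]
        }
        // Attach-with-connect: positional `connect`, optional `host:` prefix,
        // then the port on its own.
        DebugpyLaunchMode::AttachWithConnect { host } => {
            let mut args = vec!["connect".to_string()];
            if let Some(host) = host {
                args.push(format!("{host}:"));
            }
            args.push(format!("{port}"));
            args
        }
    }
}
```

This matches the expectations asserted in `test_attach_with_connect_mode_generates_correct_arguments` further down the diff.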
@@ -315,7 +334,46 @@ impl PythonDebugAdapter {
user_env: Option<HashMap<String, String>>,
python_from_toolchain: Option<String>,
) -> Result<DebugAdapterBinary> {
let tcp_connection = config.tcp_connection.clone().unwrap_or_default();
let mut tcp_connection = config.tcp_connection.clone().unwrap_or_default();
let (config_port, config_host) = config
.config
.get("connect")
.map(|value| {
(
value
.get("port")
.and_then(|val| val.as_u64().map(|p| p as u16)),
value.get("host").and_then(|val| val.as_str()),
)
})
.unwrap_or_else(|| {
(
config
.config
.get("port")
.and_then(|port| port.as_u64().map(|p| p as u16)),
config.config.get("host").and_then(|host| host.as_str()),
)
});
let is_attach_with_connect = if config
.config
.get("request")
.is_some_and(|val| val.as_str().is_some_and(|request| request == "attach"))
{
if tcp_connection.host.is_some() && config_host.is_some() {
bail!("Cannot have two different hosts in debug configuration")
} else if tcp_connection.port.is_some() && config_port.is_some() {
bail!("Cannot have two different ports in debug configuration")
}
tcp_connection.port = config_port;
DebugpyLaunchMode::AttachWithConnect { host: config_host }
} else {
DebugpyLaunchMode::Normal
};
let (host, port, timeout) = crate::configure_tcp_connection(tcp_connection).await?;
let python_path = if let Some(toolchain) = python_from_toolchain {
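The conflict check above accepts a port (or host) from either the task's `tcp_connection` or the adapter config's `connect` block, and bails when both sides specify one. A reduced sketch of that merge rule for the port case — `merge_port` is an illustrative helper, not Zed's API, and it simply keeps whichever side is set:

```rust
// Sketch of the conflict rule in the hunk above: a port may come from the
// tcp_connection template or from the adapter config, but not from both.
fn merge_port(tcp_port: Option<u16>, config_port: Option<u16>) -> Result<Option<u16>, String> {
    match (tcp_port, config_port) {
        (Some(_), Some(_)) => {
            Err("Cannot have two different ports in debug configuration".into())
        }
        // Otherwise prefer the config value, falling back to the template.
        (tcp, config) => Ok(config.or(tcp)),
    }
}
```

The host check in the diff follows the same shape with `Option<&str>` values.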
@@ -330,6 +388,7 @@ impl PythonDebugAdapter {
let arguments = Self::generate_debugpy_arguments(
&host,
port,
is_attach_with_connect,
user_installed_path.as_deref(),
user_args,
)
@@ -765,29 +824,58 @@ impl DebugAdapter for PythonDebugAdapter {
.await;
}
let base_path = config
.config
.get("cwd")
.and_then(|cwd| {
RelPath::new(
cwd.as_str()
.map(Path::new)?
.strip_prefix(delegate.worktree_root_path())
.ok()?,
PathStyle::local(),
)
.ok()
let base_paths = ["cwd", "program", "module"]
.into_iter()
.filter_map(|key| {
config.config.get(key).and_then(|cwd| {
RelPath::new(
cwd.as_str()
.map(Path::new)?
.strip_prefix(delegate.worktree_root_path())
.ok()?,
PathStyle::local(),
)
.ok()
})
})
.unwrap_or_else(|| RelPath::empty().into());
let toolchain = delegate
.toolchain_store()
.active_toolchain(
delegate.worktree_id(),
base_path.into_arc(),
language::LanguageName::new(Self::LANGUAGE_NAME),
cx,
.chain(
// While Debugpy's wiki says absolute paths are required, it actually supports relative paths when cwd is passed in.
// (Which should always be the case, because Zed defaults cwd to the worktree root.)
// So we also want to check that these relative paths resolve toolchains; otherwise they would never be
// checked, because the strip_prefix in the iteration above returns an error for them.
config
.config
.get("cwd")
.map(|_| {
["program", "module"].into_iter().filter_map(|key| {
config.config.get(key).and_then(|value| {
let path = Path::new(value.as_str()?);
RelPath::new(path, PathStyle::local()).ok()
})
})
})
.into_iter()
.flatten(),
)
.await;
.chain([RelPath::empty().into()]);
let mut toolchain = None;
for base_path in base_paths {
if let Some(found_toolchain) = delegate
.toolchain_store()
.active_toolchain(
delegate.worktree_id(),
base_path.into_arc(),
language::LanguageName::new(Self::LANGUAGE_NAME),
cx,
)
.await
{
toolchain = Some(found_toolchain);
break;
}
}
self.fetch_debugpy_whl(toolchain.clone(), delegate)
.await
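The loop above probes each candidate base path in priority order and stops at the first one that yields an active toolchain, falling back to the worktree root last. The same first-match pattern can be sketched generically — `first_toolchain` and its `lookup` closure are illustrative stand-ins for the `active_toolchain` call, not Zed's API:

```rust
// First-match fallback: try candidates in priority order, stop on the first
// lookup that succeeds. Stand-in for the active_toolchain loop in the diff.
fn first_toolchain<'a>(
    candidates: impl IntoIterator<Item = &'a str>,
    lookup: impl Fn(&str) -> Option<String>,
) -> Option<String> {
    for base_path in candidates {
        if let Some(found) = lookup(base_path) {
            return Some(found);
        }
    }
    None
}
```

In the diff the candidate list is built from `cwd`, `program`, and `module` (both worktree-relative and as-given), chained with the empty path as the final fallback.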
@@ -824,7 +912,148 @@ mod tests {
use util::path;
use super::*;
use std::{net::Ipv4Addr, path::PathBuf};
use task::TcpArgumentsTemplate;
#[gpui::test]
async fn test_tcp_connection_conflict_with_connect_args() {
let adapter = PythonDebugAdapter {
base_venv_path: OnceCell::new(),
debugpy_whl_base_path: OnceCell::new(),
};
let config_with_port_conflict = json!({
"request": "attach",
"connect": {
"port": 5679
}
});
let tcp_connection = TcpArgumentsTemplate {
host: None,
port: Some(5678),
timeout: None,
};
let task_def = DebugTaskDefinition {
label: "test".into(),
adapter: PythonDebugAdapter::ADAPTER_NAME.into(),
config: config_with_port_conflict,
tcp_connection: Some(tcp_connection.clone()),
};
let result = adapter
.get_installed_binary(
&test_mocks::MockDelegate::new(),
&task_def,
None,
None,
None,
Some("python3".to_string()),
)
.await;
assert!(result.is_err());
assert!(
result
.unwrap_err()
.to_string()
.contains("Cannot have two different ports")
);
let host = Ipv4Addr::new(127, 0, 0, 1);
let config_with_host_conflict = json!({
"request": "attach",
"connect": {
"host": "192.168.1.1",
"port": 5678
}
});
let tcp_connection_with_host = TcpArgumentsTemplate {
host: Some(host),
port: None,
timeout: None,
};
let task_def_host = DebugTaskDefinition {
label: "test".into(),
adapter: PythonDebugAdapter::ADAPTER_NAME.into(),
config: config_with_host_conflict,
tcp_connection: Some(tcp_connection_with_host),
};
let result_host = adapter
.get_installed_binary(
&test_mocks::MockDelegate::new(),
&task_def_host,
None,
None,
None,
Some("python3".to_string()),
)
.await;
assert!(result_host.is_err());
assert!(
result_host
.unwrap_err()
.to_string()
.contains("Cannot have two different hosts")
);
}
#[gpui::test]
async fn test_attach_with_connect_mode_generates_correct_arguments() {
let host = Ipv4Addr::new(127, 0, 0, 1);
let port = 5678;
let args_without_host = PythonDebugAdapter::generate_debugpy_arguments(
&host,
port,
DebugpyLaunchMode::AttachWithConnect { host: None },
None,
None,
)
.await
.unwrap();
let expected_suffix = path!("debug_adapters/Debugpy/debugpy/adapter");
assert!(args_without_host[0].ends_with(expected_suffix));
assert_eq!(args_without_host[1], "connect");
assert_eq!(args_without_host[2], "5678");
let args_with_host = PythonDebugAdapter::generate_debugpy_arguments(
&host,
port,
DebugpyLaunchMode::AttachWithConnect {
host: Some("192.168.1.100"),
},
None,
None,
)
.await
.unwrap();
assert!(args_with_host[0].ends_with(expected_suffix));
assert_eq!(args_with_host[1], "connect");
assert_eq!(args_with_host[2], "192.168.1.100:");
assert_eq!(args_with_host[3], "5678");
let args_normal = PythonDebugAdapter::generate_debugpy_arguments(
&host,
port,
DebugpyLaunchMode::Normal,
None,
None,
)
.await
.unwrap();
assert!(args_normal[0].ends_with(expected_suffix));
assert_eq!(args_normal[1], "--host=127.0.0.1");
assert_eq!(args_normal[2], "--port=5678");
assert!(!args_normal.contains(&"connect".to_string()));
}
#[gpui::test]
async fn test_debugpy_install_path_cases() {
@@ -833,15 +1062,25 @@ mod tests {
// Case 1: User-defined debugpy path (highest precedence)
let user_path = PathBuf::from("/custom/path/to/debugpy/src/debugpy/adapter");
let user_args =
PythonDebugAdapter::generate_debugpy_arguments(&host, port, Some(&user_path), None)
.await
.unwrap();
let user_args = PythonDebugAdapter::generate_debugpy_arguments(
&host,
port,
DebugpyLaunchMode::Normal,
Some(&user_path),
None,
)
.await
.unwrap();
// Case 2: Venv-installed debugpy (uses -m debugpy.adapter)
let venv_args = PythonDebugAdapter::generate_debugpy_arguments(&host, port, None, None)
.await
.unwrap();
let venv_args = PythonDebugAdapter::generate_debugpy_arguments(
&host,
port,
DebugpyLaunchMode::Normal,
None,
None,
)
.await
.unwrap();
assert_eq!(user_args[0], "/custom/path/to/debugpy/src/debugpy/adapter");
assert_eq!(user_args[1], "--host=127.0.0.1");
@@ -856,6 +1095,7 @@ mod tests {
let user_args = PythonDebugAdapter::generate_debugpy_arguments(
&host,
port,
DebugpyLaunchMode::Normal,
Some(&user_path),
Some(vec!["foo".into()]),
)
@@ -864,6 +1104,7 @@ mod tests {
let venv_args = PythonDebugAdapter::generate_debugpy_arguments(
&host,
port,
DebugpyLaunchMode::Normal,
None,
Some(vec!["foo".into()]),
)


@@ -1029,11 +1029,13 @@ impl SearchableItem for DapLogView {
&mut self,
index: usize,
matches: &[Self::Match],
collapse: bool,
window: &mut Window,
cx: &mut Context<Self>,
) {
self.editor
.update(cx, |e, cx| e.activate_match(index, matches, window, cx))
self.editor.update(cx, |e, cx| {
e.activate_match(index, matches, collapse, window, cx)
})
}
fn select_matches(


@@ -12,6 +12,7 @@ use gpui::{
Action, AppContext, ClickEvent, Entity, FocusHandle, Focusable, MouseButton, ScrollStrategy,
Task, UniformListScrollHandle, WeakEntity, actions, uniform_list,
};
use itertools::Itertools;
use language::Point;
use project::{
Project,
@@ -24,7 +25,7 @@ use project::{
};
use ui::{
Divider, DividerColor, FluentBuilder as _, Indicator, IntoElement, ListItem, Render,
StatefulInteractiveElement, Tooltip, WithScrollbar, prelude::*,
ScrollAxes, StatefulInteractiveElement, Tooltip, WithScrollbar, prelude::*,
};
use util::rel_path::RelPath;
use workspace::Workspace;
@@ -55,6 +56,7 @@ pub(crate) struct BreakpointList {
focus_handle: FocusHandle,
scroll_handle: UniformListScrollHandle,
selected_ix: Option<usize>,
max_width_index: Option<usize>,
input: Entity<Editor>,
strip_mode: Option<ActiveBreakpointStripMode>,
serialize_exception_breakpoints_task: Option<Task<anyhow::Result<()>>>,
@@ -95,6 +97,7 @@ impl BreakpointList {
dap_store,
worktree_store,
breakpoints: Default::default(),
max_width_index: None,
workspace,
session,
focus_handle,
@@ -546,7 +549,7 @@ impl BreakpointList {
.session
.as_ref()
.map(|session| SupportedBreakpointProperties::from(session.read(cx).capabilities()))
.unwrap_or_else(SupportedBreakpointProperties::empty);
.unwrap_or_else(SupportedBreakpointProperties::all);
let strip_mode = self.strip_mode;
uniform_list(
@@ -570,6 +573,8 @@ impl BreakpointList {
.collect()
}),
)
.with_horizontal_sizing_behavior(gpui::ListHorizontalSizingBehavior::Unconstrained)
.with_width_from_item(self.max_width_index)
.track_scroll(self.scroll_handle.clone())
.flex_1()
}
@@ -732,6 +737,26 @@ impl Render for BreakpointList {
.chain(exception_breakpoints),
);
let text_pixels = ui::TextSize::Default.pixels(cx).to_f64() as f32;
self.max_width_index = self
.breakpoints
.iter()
.map(|entry| match &entry.kind {
BreakpointEntryKind::LineBreakpoint(line_bp) => {
let name_and_line = format!("{}:{}", line_bp.name, line_bp.line);
let dir_len = line_bp.dir.as_ref().map(|d| d.len()).unwrap_or(0);
(name_and_line.len() + dir_len) as f32 * text_pixels
}
BreakpointEntryKind::ExceptionBreakpoint(exc_bp) => {
exc_bp.data.label.len() as f32 * text_pixels
}
BreakpointEntryKind::DataBreakpoint(data_bp) => {
data_bp.0.context.human_readable_label().len() as f32 * text_pixels
}
})
.position_max_by(|left, right| left.total_cmp(right));
v_flex()
.id("breakpoint-list")
.key_context("BreakpointList")
@@ -749,7 +774,14 @@ impl Render for BreakpointList {
.size_full()
.pt_1()
.child(self.render_list(cx))
.vertical_scrollbar_for(self.scroll_handle.clone(), window, cx)
.custom_scrollbars(
ui::Scrollbars::new(ScrollAxes::Both)
.tracked_scroll_handle(self.scroll_handle.clone())
.with_track_along(ScrollAxes::Both, cx.theme().colors().panel_background)
.tracked_entity(cx.entity_id()),
window,
cx,
)
.when_some(self.strip_mode, |this, _| {
this.child(Divider::horizontal().color(DividerColor::Border))
.child(
@@ -1376,8 +1408,10 @@ impl RenderOnce for BreakpointOptionsStrip {
h_flex()
.gap_px()
.mr_3() // Space to avoid overlapping with the scrollbar
.child(
div()
.justify_end()
.when(has_logs || self.is_selected, |this| {
this.child(
div()
.map(self.add_focus_styles(
ActiveBreakpointStripMode::Log,
supports_logs,
@@ -1406,45 +1440,46 @@ impl RenderOnce for BreakpointOptionsStrip {
)
}),
)
.when(!has_logs && !self.is_selected, |this| this.invisible()),
)
.child(
div()
.map(self.add_focus_styles(
ActiveBreakpointStripMode::Condition,
supports_condition,
window,
cx,
))
.child(
IconButton::new(
SharedString::from(format!("{id}-condition-toggle")),
IconName::SplitAlt,
)
.shape(ui::IconButtonShape::Square)
.style(style_for_toggle(
)
})
.when(has_condition || self.is_selected, |this| {
this.child(
div()
.map(self.add_focus_styles(
ActiveBreakpointStripMode::Condition,
has_condition,
supports_condition,
window,
cx,
))
.icon_size(IconSize::Small)
.icon_color(color_for_toggle(has_condition))
.when(has_condition, |this| this.indicator(Indicator::dot().color(Color::Info)))
.disabled(!supports_condition)
.toggle_state(self.is_toggled(ActiveBreakpointStripMode::Condition))
.on_click(self.on_click_callback(ActiveBreakpointStripMode::Condition))
.tooltip(|_window, cx| {
Tooltip::with_meta(
"Set Condition",
None,
"Set condition to evaluate when a breakpoint is hit. Program execution will stop only when the condition is met.",
cx,
.child(
IconButton::new(
SharedString::from(format!("{id}-condition-toggle")),
IconName::SplitAlt,
)
}),
)
.when(!has_condition && !self.is_selected, |this| this.invisible()),
)
.child(
div()
.shape(ui::IconButtonShape::Square)
.style(style_for_toggle(
ActiveBreakpointStripMode::Condition,
has_condition,
))
.icon_size(IconSize::Small)
.icon_color(color_for_toggle(has_condition))
.when(has_condition, |this| this.indicator(Indicator::dot().color(Color::Info)))
.disabled(!supports_condition)
.toggle_state(self.is_toggled(ActiveBreakpointStripMode::Condition))
.on_click(self.on_click_callback(ActiveBreakpointStripMode::Condition))
.tooltip(|_window, cx| {
Tooltip::with_meta(
"Set Condition",
None,
"Set condition to evaluate when a breakpoint is hit. Program execution will stop only when the condition is met.",
cx,
)
}),
)
)
})
.when(has_hit_condition || self.is_selected, |this| {
this.child(div()
.map(self.add_focus_styles(
ActiveBreakpointStripMode::HitCondition,
supports_hit_condition,
@@ -1475,10 +1510,8 @@ impl RenderOnce for BreakpointOptionsStrip {
cx,
)
}),
)
.when(!has_hit_condition && !self.is_selected, |this| {
this.invisible()
}),
)
))
})
}
}


@@ -10,8 +10,9 @@ use std::{
use editor::{Editor, EditorElement, EditorStyle};
use gpui::{
Action, Along, AppContext, Axis, DismissEvent, DragMoveEvent, Empty, Entity, FocusHandle,
Focusable, MouseButton, Point, ScrollStrategy, ScrollWheelEvent, Subscription, Task, TextStyle,
UniformList, UniformListScrollHandle, WeakEntity, actions, anchored, deferred, uniform_list,
Focusable, ListHorizontalSizingBehavior, MouseButton, Point, ScrollStrategy, ScrollWheelEvent,
Subscription, Task, TextStyle, UniformList, UniformListScrollHandle, WeakEntity, actions,
anchored, deferred, uniform_list,
};
use notifications::status_toast::{StatusToast, ToastIcon};
use project::debugger::{MemoryCell, dap_command::DataBreakpointContext, session::Session};
@@ -229,6 +230,7 @@ impl MemoryView {
},
)
.track_scroll(view_state.scroll_handle)
.with_horizontal_sizing_behavior(ListHorizontalSizingBehavior::Unconstrained)
.on_scroll_wheel(cx.listener(|this, evt: &ScrollWheelEvent, window, _| {
let mut view_state = this.view_state();
let delta = evt.delta.pixel_delta(window.line_height());
@@ -917,7 +919,17 @@ impl Render for MemoryView {
)
.with_priority(1)
}))
.vertical_scrollbar_for(self.view_state_handle.clone(), window, cx),
.custom_scrollbars(
ui::Scrollbars::new(ui::ScrollAxes::Both)
.tracked_scroll_handle(self.view_state_handle.clone())
.with_track_along(
ui::ScrollAxes::Both,
cx.theme().colors().panel_background,
)
.tracked_entity(cx.entity_id()),
window,
cx,
),
)
}
}


@@ -11,15 +11,18 @@ use gpui::{
FocusHandle, Focusable, Hsla, MouseDownEvent, Point, Subscription, TextStyleRefinement,
UniformListScrollHandle, WeakEntity, actions, anchored, deferred, uniform_list,
};
use itertools::Itertools;
use menu::{SelectFirst, SelectLast, SelectNext, SelectPrevious};
use project::debugger::{
dap_command::DataBreakpointContext,
session::{Session, SessionEvent, Watcher},
};
use std::{collections::HashMap, ops::Range, sync::Arc};
use ui::{ContextMenu, ListItem, ScrollableHandle, Tooltip, WithScrollbar, prelude::*};
use ui::{ContextMenu, ListItem, ScrollAxes, ScrollableHandle, Tooltip, WithScrollbar, prelude::*};
use util::{debug_panic, maybe};
static INDENT_STEP_SIZE: Pixels = px(10.0);
actions!(
variable_list,
[
@@ -185,6 +188,7 @@ struct VariableColor {
pub struct VariableList {
entries: Vec<ListEntry>,
max_width_index: Option<usize>,
entry_states: HashMap<EntryPath, EntryState>,
selected_stack_frame_id: Option<StackFrameId>,
list_handle: UniformListScrollHandle,
@@ -243,6 +247,7 @@ impl VariableList {
disabled: false,
edited_path: None,
entries: Default::default(),
max_width_index: None,
entry_states: Default::default(),
weak_running,
memory_view,
@@ -368,6 +373,26 @@ impl VariableList {
}
self.entries = entries;
let text_pixels = ui::TextSize::Default.pixels(cx).to_f64() as f32;
let indent_size = INDENT_STEP_SIZE.to_f64() as f32;
self.max_width_index = self
.entries
.iter()
.map(|entry| match &entry.entry {
DapEntry::Scope(scope) => scope.name.len() as f32 * text_pixels,
DapEntry::Variable(variable) => {
(variable.value.len() + variable.name.len()) as f32 * text_pixels
+ (entry.path.indices.len() as f32 * indent_size)
}
DapEntry::Watcher(watcher) => {
(watcher.value.len() + watcher.expression.len()) as f32 * text_pixels
+ (entry.path.indices.len() as f32 * indent_size)
}
})
.position_max_by(|left, right| left.total_cmp(right));
cx.notify();
}
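Both this hunk and the earlier `BreakpointList` one estimate each row's pixel width from its text length (plus indentation) and record the index of the widest row, so the unconstrained horizontal list can size itself from that item. A self-contained sketch of the estimate — the diff uses itertools' `position_max_by`; this version inlines the same idea with plain iterator adapters, and the `(text, depth)` row shape is a simplification:

```rust
// Estimate each row's width as text length * per-character pixels plus
// indentation, and return the index of the widest row (None when empty).
fn max_width_index(rows: &[(String, usize)], text_px: f32, indent_px: f32) -> Option<usize> {
    rows.iter()
        .enumerate()
        .map(|(ix, (text, depth))| (ix, text.len() as f32 * text_px + *depth as f32 * indent_px))
        .max_by(|(_, a), (_, b)| a.total_cmp(b))
        .map(|(ix, _)| ix)
}
```

`total_cmp` gives a total order over `f32`, which is why the diff uses it rather than `partial_cmp`.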
@@ -1244,7 +1269,7 @@ impl VariableList {
.disabled(self.disabled)
.selectable(false)
.indent_level(state.depth)
.indent_step_size(px(10.))
.indent_step_size(INDENT_STEP_SIZE)
.always_show_disclosure_icon(true)
.when(var_ref > 0, |list_item| {
list_item.toggle(state.is_expanded).on_toggle(cx.listener({
@@ -1445,7 +1470,7 @@ impl VariableList {
.disabled(self.disabled)
.selectable(false)
.indent_level(state.depth)
.indent_step_size(px(10.))
.indent_step_size(INDENT_STEP_SIZE)
.always_show_disclosure_icon(true)
.when(var_ref > 0, |list_item| {
list_item.toggle(state.is_expanded).on_toggle(cx.listener({
@@ -1507,7 +1532,6 @@ impl Render for VariableList {
.key_context("VariableList")
.id("variable-list")
.group("variable-list")
.overflow_y_scroll()
.size_full()
.on_action(cx.listener(Self::select_first))
.on_action(cx.listener(Self::select_last))
@@ -1533,6 +1557,9 @@ impl Render for VariableList {
}),
)
.track_scroll(self.list_handle.clone())
.with_width_from_item(self.max_width_index)
.with_sizing_behavior(gpui::ListSizingBehavior::Auto)
.with_horizontal_sizing_behavior(gpui::ListHorizontalSizingBehavior::Unconstrained)
.gap_1_5()
.size_full()
.flex_grow(),
@@ -1546,7 +1573,15 @@ impl Render for VariableList {
)
.with_priority(1)
}))
.vertical_scrollbar_for(self.list_handle.clone(), window, cx)
// .vertical_scrollbar_for(self.list_handle.clone(), window, cx)
.custom_scrollbars(
ui::Scrollbars::new(ScrollAxes::Both)
.tracked_scroll_handle(self.list_handle.clone())
.with_track_along(ScrollAxes::Both, cx.theme().colors().panel_background)
.tracked_entity(cx.entity_id()),
window,
cx,
)
}
}


@@ -34,6 +34,7 @@ theme.workspace = true
ui.workspace = true
util.workspace = true
workspace.workspace = true
itertools.workspace = true
[dev-dependencies]
client = { workspace = true, features = ["test-support"] }


@@ -1,5 +1,5 @@
use crate::{
DIAGNOSTICS_UPDATE_DELAY, IncludeWarnings, ToggleWarnings, context_range_for_entry,
DIAGNOSTICS_UPDATE_DEBOUNCE, IncludeWarnings, ToggleWarnings, context_range_for_entry,
diagnostic_renderer::{DiagnosticBlock, DiagnosticRenderer},
toolbar_controls::DiagnosticsToolbarEditor,
};
@@ -283,7 +283,7 @@ impl BufferDiagnosticsEditor {
self.update_excerpts_task = Some(cx.spawn_in(window, async move |editor, cx| {
cx.background_executor()
.timer(DIAGNOSTICS_UPDATE_DELAY)
.timer(DIAGNOSTICS_UPDATE_DEBOUNCE)
.await;
if let Some(buffer) = buffer {
@@ -938,10 +938,6 @@ impl DiagnosticsToolbarEditor for WeakEntity<BufferDiagnosticsEditor> {
.unwrap_or(false)
}
fn has_stale_excerpts(&self, _cx: &App) -> bool {
false
}
fn is_updating(&self, cx: &App) -> bool {
self.read_with(cx, |buffer_diagnostics_editor, cx| {
buffer_diagnostics_editor.update_excerpts_task.is_some()


@@ -9,7 +9,7 @@ mod diagnostics_tests;
use anyhow::Result;
use buffer_diagnostics::BufferDiagnosticsEditor;
use collections::{BTreeSet, HashMap};
use collections::{BTreeSet, HashMap, HashSet};
use diagnostic_renderer::DiagnosticBlock;
use editor::{
Editor, EditorEvent, ExcerptRange, MultiBuffer, PathKey,
@@ -17,10 +17,11 @@ use editor::{
multibuffer_context_lines,
};
use gpui::{
AnyElement, AnyView, App, AsyncApp, Context, Entity, EventEmitter, FocusHandle, Focusable,
Global, InteractiveElement, IntoElement, ParentElement, Render, SharedString, Styled,
Subscription, Task, WeakEntity, Window, actions, div,
AnyElement, AnyView, App, AsyncApp, Context, Entity, EventEmitter, FocusHandle, FocusOutEvent,
Focusable, Global, InteractiveElement, IntoElement, ParentElement, Render, SharedString,
Styled, Subscription, Task, WeakEntity, Window, actions, div,
};
use itertools::Itertools as _;
use language::{
Bias, Buffer, BufferRow, BufferSnapshot, DiagnosticEntry, DiagnosticEntryRef, Point,
ToTreeSitterPoint,
@@ -32,7 +33,7 @@ use project::{
use settings::Settings;
use std::{
any::{Any, TypeId},
cmp::{self, Ordering},
cmp,
ops::{Range, RangeInclusive},
sync::Arc,
time::Duration,
@@ -89,8 +90,8 @@ pub(crate) struct ProjectDiagnosticsEditor {
impl EventEmitter<EditorEvent> for ProjectDiagnosticsEditor {}
const DIAGNOSTICS_UPDATE_DELAY: Duration = Duration::from_millis(50);
const DIAGNOSTICS_SUMMARY_UPDATE_DELAY: Duration = Duration::from_millis(30);
const DIAGNOSTICS_UPDATE_DEBOUNCE: Duration = Duration::from_millis(50);
const DIAGNOSTICS_SUMMARY_UPDATE_DEBOUNCE: Duration = Duration::from_millis(30);
impl Render for ProjectDiagnosticsEditor {
fn render(&mut self, _: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
@@ -149,6 +150,12 @@ impl Render for ProjectDiagnosticsEditor {
}
}
#[derive(PartialEq, Eq, Copy, Clone, Debug)]
enum RetainExcerpts {
Yes,
No,
}
impl ProjectDiagnosticsEditor {
pub fn register(
workspace: &mut Workspace,
@@ -165,14 +172,21 @@ impl ProjectDiagnosticsEditor {
window: &mut Window,
cx: &mut Context<Self>,
) -> Self {
let project_event_subscription =
cx.subscribe_in(&project_handle, window, |this, _project, event, window, cx| match event {
let project_event_subscription = cx.subscribe_in(
&project_handle,
window,
|this, _project, event, window, cx| match event {
project::Event::DiskBasedDiagnosticsStarted { .. } => {
cx.notify();
}
project::Event::DiskBasedDiagnosticsFinished { language_server_id } => {
log::debug!("disk based diagnostics finished for server {language_server_id}");
this.update_stale_excerpts(window, cx);
this.close_diagnosticless_buffers(
window,
cx,
this.editor.focus_handle(cx).contains_focused(window, cx)
|| this.focus_handle.contains_focused(window, cx),
);
}
project::Event::DiagnosticsUpdated {
language_server_id,
@@ -181,34 +195,39 @@ impl ProjectDiagnosticsEditor {
this.paths_to_update.extend(paths.clone());
this.diagnostic_summary_update = cx.spawn(async move |this, cx| {
cx.background_executor()
.timer(DIAGNOSTICS_SUMMARY_UPDATE_DELAY)
.timer(DIAGNOSTICS_SUMMARY_UPDATE_DEBOUNCE)
.await;
this.update(cx, |this, cx| {
this.update_diagnostic_summary(cx);
})
.log_err();
});
cx.emit(EditorEvent::TitleChanged);
if this.editor.focus_handle(cx).contains_focused(window, cx) || this.focus_handle.contains_focused(window, cx) {
log::debug!("diagnostics updated for server {language_server_id}, paths {paths:?}. recording change");
} else {
log::debug!("diagnostics updated for server {language_server_id}, paths {paths:?}. updating excerpts");
this.update_stale_excerpts(window, cx);
}
log::debug!(
"diagnostics updated for server {language_server_id}, \
paths {paths:?}. updating excerpts"
);
let focused = this.editor.focus_handle(cx).contains_focused(window, cx)
|| this.focus_handle.contains_focused(window, cx);
this.update_stale_excerpts(
if focused {
RetainExcerpts::Yes
} else {
RetainExcerpts::No
},
window,
cx,
);
}
_ => {}
});
},
);
let focus_handle = cx.focus_handle();
cx.on_focus_in(&focus_handle, window, |this, window, cx| {
this.focus_in(window, cx)
})
.detach();
cx.on_focus_out(&focus_handle, window, |this, _event, window, cx| {
this.focus_out(window, cx)
})
.detach();
cx.on_focus_in(&focus_handle, window, Self::focus_in)
.detach();
cx.on_focus_out(&focus_handle, window, Self::focus_out)
.detach();
let excerpts = cx.new(|cx| MultiBuffer::new(project_handle.read(cx).capability()));
let editor = cx.new(|cx| {
@@ -238,8 +257,11 @@ impl ProjectDiagnosticsEditor {
window.focus(&this.focus_handle);
}
}
EditorEvent::Blurred => this.update_stale_excerpts(window, cx),
EditorEvent::Saved => this.update_stale_excerpts(window, cx),
EditorEvent::Blurred => this.close_diagnosticless_buffers(window, cx, false),
EditorEvent::Saved => this.close_diagnosticless_buffers(window, cx, true),
EditorEvent::SelectionsChanged { .. } => {
this.close_diagnosticless_buffers(window, cx, true)
}
_ => {}
}
},
@@ -283,15 +305,67 @@ impl ProjectDiagnosticsEditor {
this
}
fn update_stale_excerpts(&mut self, window: &mut Window, cx: &mut Context<Self>) {
if self.update_excerpts_task.is_some() || self.multibuffer.read(cx).is_dirty(cx) {
/// Closes all excerpts of buffers that:
/// - have no diagnostics anymore
/// - are saved (not dirty)
/// - and, if `retain_selections` is true, do not have selections within them
fn close_diagnosticless_buffers(
&mut self,
_window: &mut Window,
cx: &mut Context<Self>,
retain_selections: bool,
) {
let buffer_ids = self.multibuffer.read(cx).all_buffer_ids();
let selected_buffers = self.editor.update(cx, |editor, cx| {
editor
.selections
.all_anchors(cx)
.iter()
.filter_map(|anchor| anchor.start.buffer_id)
.collect::<HashSet<_>>()
});
for buffer_id in buffer_ids {
if retain_selections && selected_buffers.contains(&buffer_id) {
continue;
}
let has_blocks = self
.blocks
.get(&buffer_id)
.is_none_or(|blocks| blocks.is_empty());
if !has_blocks {
continue;
}
let is_dirty = self
.multibuffer
.read(cx)
.buffer(buffer_id)
.is_some_and(|buffer| buffer.read(cx).is_dirty());
if !is_dirty {
continue;
}
self.multibuffer.update(cx, |b, cx| {
b.remove_excerpts_for_buffer(buffer_id, cx);
});
}
}
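The retention rules documented on `close_diagnosticless_buffers` reduce to a per-buffer predicate. A minimal sketch of that decision, following the doc comment's stated rules — `BufferState` and its fields are illustrative, not Zed's types:

```rust
// Illustrative per-buffer decision mirroring the documented rules above:
// close an excerpt only when the buffer has no diagnostic blocks left,
// is saved (not dirty), and, when selections are retained, holds none.
struct BufferState {
    has_blocks: bool,
    is_dirty: bool,
    has_selection: bool,
}

fn should_close(buf: &BufferState, retain_selections: bool) -> bool {
    if retain_selections && buf.has_selection {
        return false;
    }
    !buf.has_blocks && !buf.is_dirty
}
```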
fn update_stale_excerpts(
&mut self,
mut retain_excerpts: RetainExcerpts,
window: &mut Window,
cx: &mut Context<Self>,
) {
if self.update_excerpts_task.is_some() {
return;
}
if self.multibuffer.read(cx).is_dirty(cx) {
retain_excerpts = RetainExcerpts::Yes;
}
let project_handle = self.project.clone();
self.update_excerpts_task = Some(cx.spawn_in(window, async move |this, cx| {
cx.background_executor()
.timer(DIAGNOSTICS_UPDATE_DELAY)
.timer(DIAGNOSTICS_UPDATE_DEBOUNCE)
.await;
loop {
let Some(path) = this.update(cx, |this, cx| {
@@ -312,7 +386,7 @@ impl ProjectDiagnosticsEditor {
.log_err()
{
this.update_in(cx, |this, window, cx| {
this.update_excerpts(buffer, window, cx)
this.update_excerpts(buffer, retain_excerpts, window, cx)
})?
.await?;
}
@@ -378,10 +452,10 @@ impl ProjectDiagnosticsEditor {
}
}
fn focus_out(&mut self, window: &mut Window, cx: &mut Context<Self>) {
fn focus_out(&mut self, _: FocusOutEvent, window: &mut Window, cx: &mut Context<Self>) {
if !self.focus_handle.is_focused(window) && !self.editor.focus_handle(cx).is_focused(window)
{
self.update_stale_excerpts(window, cx);
self.close_diagnosticless_buffers(window, cx, false);
}
}
@@ -403,12 +477,13 @@ impl ProjectDiagnosticsEditor {
});
}
}
multibuffer.clear(cx);
});
self.paths_to_update = project_paths;
});
self.update_stale_excerpts(window, cx);
self.update_stale_excerpts(RetainExcerpts::No, window, cx);
}
fn diagnostics_are_unchanged(
@@ -431,6 +506,7 @@ impl ProjectDiagnosticsEditor {
fn update_excerpts(
&mut self,
buffer: Entity<Buffer>,
retain_excerpts: RetainExcerpts,
window: &mut Window,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
@@ -497,24 +573,27 @@ impl ProjectDiagnosticsEditor {
)
})?;
for item in more {
let i = blocks
.binary_search_by(|probe| {
probe
.initial_range
.start
.cmp(&item.initial_range.start)
.then(probe.initial_range.end.cmp(&item.initial_range.end))
.then(Ordering::Greater)
})
.unwrap_or_else(|i| i);
blocks.insert(i, item);
}
blocks.extend(more);
}
let mut excerpt_ranges: Vec<ExcerptRange<Point>> = Vec::new();
let mut excerpt_ranges: Vec<ExcerptRange<Point>> = match retain_excerpts {
RetainExcerpts::No => Vec::new(),
RetainExcerpts::Yes => this.update(cx, |this, cx| {
this.multibuffer.update(cx, |multi_buffer, cx| {
multi_buffer
.excerpts_for_buffer(buffer_id, cx)
.into_iter()
.map(|(_, range)| ExcerptRange {
context: range.context.to_point(&buffer_snapshot),
primary: range.primary.to_point(&buffer_snapshot),
})
.collect()
})
})?,
};
let mut result_blocks = vec![None; excerpt_ranges.len()];
let context_lines = cx.update(|_, cx| multibuffer_context_lines(cx))?;
for b in blocks.iter() {
for b in blocks {
let excerpt_range = context_range_for_entry(
b.initial_range.clone(),
context_lines,
@@ -541,7 +620,8 @@ impl ProjectDiagnosticsEditor {
context: excerpt_range,
primary: b.initial_range.clone(),
},
)
);
result_blocks.insert(i, Some(b));
}
this.update_in(cx, |this, window, cx| {
@@ -562,7 +642,7 @@ impl ProjectDiagnosticsEditor {
)
});
#[cfg(test)]
let cloned_blocks = blocks.clone();
let cloned_blocks = result_blocks.clone();
if was_empty && let Some(anchor_range) = anchor_ranges.first() {
let range_to_select = anchor_range.start..anchor_range.start;
@@ -576,22 +656,20 @@ impl ProjectDiagnosticsEditor {
}
}
let editor_blocks =
anchor_ranges
.into_iter()
.zip(blocks.into_iter())
.map(|(anchor, block)| {
let editor = this.editor.downgrade();
BlockProperties {
placement: BlockPlacement::Near(anchor.start),
height: Some(1),
style: BlockStyle::Flex,
render: Arc::new(move |bcx| {
block.render_block(editor.clone(), bcx)
}),
priority: 1,
}
});
let editor_blocks = anchor_ranges
.into_iter()
.zip_eq(result_blocks.into_iter())
.filter_map(|(anchor, block)| {
let block = block?;
let editor = this.editor.downgrade();
Some(BlockProperties {
placement: BlockPlacement::Near(anchor.start),
height: Some(1),
style: BlockStyle::Flex,
render: Arc::new(move |bcx| block.render_block(editor.clone(), bcx)),
priority: 1,
})
});
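The rewritten pipeline pairs each anchor with its optional result block and drops the `None` slots via `filter_map`. A std-only approximation of that shape (itertools' `zip_eq`, which panics on unequal lengths, is emulated here with an assertion; the helper name is mine):

```rust
// Pair each anchor with its optional block, skipping positions whose slot
// is None. The length assertion mimics zip_eq's panic-on-mismatch contract.
fn pair_blocks<A: Clone, B>(anchors: &[A], blocks: Vec<Option<B>>) -> Vec<(A, B)> {
    assert_eq!(anchors.len(), blocks.len(), "anchors and blocks must align");
    anchors
        .iter()
        .cloned()
        .zip(blocks)
        .filter_map(|(anchor, block)| Some((anchor, block?)))
        .collect()
}

fn main() {
    let pairs = pair_blocks(&[1, 2, 3], vec![Some("a"), None, Some("c")]);
    println!("{pairs:?}");
}
```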
let block_ids = this.editor.update(cx, |editor, cx| {
editor.display_map.update(cx, |display_map, cx| {
@@ -601,7 +679,9 @@ impl ProjectDiagnosticsEditor {
#[cfg(test)]
{
for (block_id, block) in block_ids.iter().zip(cloned_blocks.iter()) {
for (block_id, block) in
block_ids.iter().zip(cloned_blocks.into_iter().flatten())
{
let markdown = block.markdown.clone();
editor::test::set_block_content_for_tests(
&this.editor,
@@ -626,6 +706,7 @@ impl ProjectDiagnosticsEditor {
fn update_diagnostic_summary(&mut self, cx: &mut Context<Self>) {
self.summary = self.project.read(cx).diagnostic_summary(false, cx);
cx.emit(EditorEvent::TitleChanged);
}
}
@@ -843,13 +924,6 @@ impl DiagnosticsToolbarEditor for WeakEntity<ProjectDiagnosticsEditor> {
.unwrap_or(false)
}
fn has_stale_excerpts(&self, cx: &App) -> bool {
self.read_with(cx, |project_diagnostics_editor, _cx| {
!project_diagnostics_editor.paths_to_update.is_empty()
})
.unwrap_or(false)
}
fn is_updating(&self, cx: &App) -> bool {
self.read_with(cx, |project_diagnostics_editor, cx| {
project_diagnostics_editor.update_excerpts_task.is_some()
@@ -1010,12 +1084,6 @@ async fn heuristic_syntactic_expand(
return;
}
}
log::info!(
"Expanding to ancestor started on {} node \
exceeding row limit of {max_row_count}.",
node.grammar_name()
);
*ancestor_range = Some(None);
}
})


@@ -119,7 +119,7 @@ async fn test_diagnostics(cx: &mut TestAppContext) {
let editor = diagnostics.update(cx, |diagnostics, _| diagnostics.editor.clone());
diagnostics
.next_notification(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10), cx)
.next_notification(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10), cx)
.await;
pretty_assertions::assert_eq!(
@@ -190,7 +190,7 @@ async fn test_diagnostics(cx: &mut TestAppContext) {
});
diagnostics
.next_notification(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10), cx)
.next_notification(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10), cx)
.await;
pretty_assertions::assert_eq!(
@@ -277,7 +277,7 @@ async fn test_diagnostics(cx: &mut TestAppContext) {
});
diagnostics
.next_notification(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10), cx)
.next_notification(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10), cx)
.await;
pretty_assertions::assert_eq!(
@@ -391,7 +391,7 @@ async fn test_diagnostics_with_folds(cx: &mut TestAppContext) {
// Only the first language server's diagnostics are shown.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.executor().run_until_parked();
editor.update_in(cx, |editor, window, cx| {
editor.fold_ranges(vec![Point::new(0, 0)..Point::new(3, 0)], false, window, cx);
@@ -490,7 +490,7 @@ async fn test_diagnostics_multiple_servers(cx: &mut TestAppContext) {
// Only the first language server's diagnostics are shown.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.executor().run_until_parked();
pretty_assertions::assert_eq!(
@@ -530,7 +530,7 @@ async fn test_diagnostics_multiple_servers(cx: &mut TestAppContext) {
// Both language server's diagnostics are shown.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.executor().run_until_parked();
pretty_assertions::assert_eq!(
@@ -587,7 +587,7 @@ async fn test_diagnostics_multiple_servers(cx: &mut TestAppContext) {
// Only the first language server's diagnostics are updated.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.executor().run_until_parked();
pretty_assertions::assert_eq!(
@@ -629,7 +629,7 @@ async fn test_diagnostics_multiple_servers(cx: &mut TestAppContext) {
// Both language servers' diagnostics are updated.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.executor().run_until_parked();
pretty_assertions::assert_eq!(
@@ -760,7 +760,7 @@ async fn test_random_diagnostics_blocks(cx: &mut TestAppContext, mut rng: StdRng
.unwrap()
});
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.run_until_parked();
}
@@ -769,7 +769,7 @@ async fn test_random_diagnostics_blocks(cx: &mut TestAppContext, mut rng: StdRng
log::info!("updating mutated diagnostics view");
mutated_diagnostics.update_in(cx, |diagnostics, window, cx| {
diagnostics.update_stale_excerpts(window, cx)
diagnostics.update_stale_excerpts(RetainExcerpts::No, window, cx)
});
log::info!("constructing reference diagnostics view");
@@ -777,7 +777,7 @@ async fn test_random_diagnostics_blocks(cx: &mut TestAppContext, mut rng: StdRng
ProjectDiagnosticsEditor::new(true, project.clone(), workspace.downgrade(), window, cx)
});
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.run_until_parked();
let mutated_excerpts =
@@ -789,7 +789,12 @@ async fn test_random_diagnostics_blocks(cx: &mut TestAppContext, mut rng: StdRng
// The mutated view may contain more than the reference view as
// we don't currently shrink excerpts when diagnostics were removed.
let mut ref_iter = reference_excerpts.lines().filter(|line| *line != "§ -----");
let mut ref_iter = reference_excerpts.lines().filter(|line| {
// ignore "§ -----" separators and "§ <file>.rs" headers
!line.starts_with('§')
|| line.starts_with("§ diagnostic")
|| line.starts_with("§ related info")
});
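The filter above can be isolated into a predicate. This sketch (function name is mine) keeps ordinary content lines plus the two `§` marker kinds the test compares, and drops the other `§` lines such as separators and file headers:

```rust
// Keep plain content lines and the "§ diagnostic…" / "§ related info…"
// markers; drop any other '§' line ("§ -----" separators, "§ <file>.rs").
fn is_comparable_line(line: &str) -> bool {
    !line.starts_with('§')
        || line.starts_with("§ diagnostic")
        || line.starts_with("§ related info")
}

fn main() {
    let input = "fn main() {}\n§ -----\n§ diagnostic: unused variable\n§ main.rs";
    let kept: Vec<&str> = input.lines().filter(|l| is_comparable_line(l)).collect();
    println!("{kept:?}");
}
```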
let mut next_ref_line = ref_iter.next();
let mut skipped_block = false;
@@ -797,7 +802,12 @@ async fn test_random_diagnostics_blocks(cx: &mut TestAppContext, mut rng: StdRng
if let Some(ref_line) = next_ref_line {
if mut_line == ref_line {
next_ref_line = ref_iter.next();
} else if mut_line.contains('§') && mut_line != "§ -----" {
} else if mut_line.contains('§')
// ignore "§ -----" separators and "§ <file>.rs" headers
&& (!mut_line.starts_with('§')
|| mut_line.starts_with("§ diagnostic")
|| mut_line.starts_with("§ related info"))
{
skipped_block = true;
}
}
@@ -877,7 +887,7 @@ async fn test_random_diagnostics_with_inlays(cx: &mut TestAppContext, mut rng: S
vec![Inlay::edit_prediction(
post_inc(&mut next_inlay_id),
snapshot.buffer_snapshot().anchor_before(position),
Rope::from_iter(["Test inlay ", "next_inlay_id"]),
Rope::from_iter_small(["Test inlay ", "next_inlay_id"]),
)],
cx,
);
@@ -949,7 +959,7 @@ async fn test_random_diagnostics_with_inlays(cx: &mut TestAppContext, mut rng: S
.unwrap()
});
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.run_until_parked();
}
@@ -958,11 +968,11 @@ async fn test_random_diagnostics_with_inlays(cx: &mut TestAppContext, mut rng: S
log::info!("updating mutated diagnostics view");
mutated_diagnostics.update_in(cx, |diagnostics, window, cx| {
diagnostics.update_stale_excerpts(window, cx)
diagnostics.update_stale_excerpts(RetainExcerpts::No, window, cx)
});
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.run_until_parked();
}
@@ -1427,7 +1437,7 @@ async fn test_diagnostics_with_code(cx: &mut TestAppContext) {
let editor = diagnostics.update(cx, |diagnostics, _| diagnostics.editor.clone());
diagnostics
.next_notification(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10), cx)
.next_notification(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10), cx)
.await;
// Verify that the diagnostic codes are displayed correctly
@@ -1704,7 +1714,7 @@ async fn test_buffer_diagnostics(cx: &mut TestAppContext) {
// wait a little bit to ensure that the buffer diagnostic's editor content
// is rendered.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
pretty_assertions::assert_eq!(
editor_content_with_blocks(&editor, cx),
@@ -1837,7 +1847,7 @@ async fn test_buffer_diagnostics_without_warnings(cx: &mut TestAppContext) {
// wait a little bit to ensure that the buffer diagnostic's editor content
// is rendered.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
pretty_assertions::assert_eq!(
editor_content_with_blocks(&editor, cx),
@@ -1971,7 +1981,7 @@ async fn test_buffer_diagnostics_multiple_servers(cx: &mut TestAppContext) {
// wait a little bit to ensure that the buffer diagnostic's editor content
// is rendered.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
pretty_assertions::assert_eq!(
editor_content_with_blocks(&editor, cx),
@@ -2070,7 +2080,7 @@ fn random_lsp_diagnostic(
const ERROR_MARGIN: usize = 10;
let file_content = fs.read_file_sync(path).unwrap();
let file_text = Rope::from(String::from_utf8_lossy(&file_content).as_ref());
let file_text = Rope::from_str_small(String::from_utf8_lossy(&file_content).as_ref());
let start = rng.random_range(0..file_text.len().saturating_add(ERROR_MARGIN));
let end = rng.random_range(start..file_text.len().saturating_add(ERROR_MARGIN));


@@ -16,9 +16,6 @@ pub(crate) trait DiagnosticsToolbarEditor: Send + Sync {
/// Toggles whether warning diagnostics should be displayed by the
/// diagnostics editor.
fn toggle_warnings(&self, window: &mut Window, cx: &mut App);
/// Indicates whether any of the excerpts displayed by the diagnostics
/// editor are stale.
fn has_stale_excerpts(&self, cx: &App) -> bool;
/// Indicates whether the diagnostics editor is currently updating the
/// diagnostics.
fn is_updating(&self, cx: &App) -> bool;
@@ -37,14 +34,12 @@ pub(crate) trait DiagnosticsToolbarEditor: Send + Sync {
impl Render for ToolbarControls {
fn render(&mut self, _: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
let mut has_stale_excerpts = false;
let mut include_warnings = false;
let mut is_updating = false;
match &self.editor {
Some(editor) => {
include_warnings = editor.include_warnings(cx);
has_stale_excerpts = editor.has_stale_excerpts(cx);
is_updating = editor.is_updating(cx);
}
None => {}
@@ -86,7 +81,6 @@ impl Render for ToolbarControls {
IconButton::new("refresh-diagnostics", IconName::ArrowCircle)
.icon_color(Color::Info)
.shape(IconButtonShape::Square)
.disabled(!has_stale_excerpts)
.tooltip(Tooltip::for_action_title(
"Refresh diagnostics",
&ToggleDiagnosticsRefresh,


@@ -13,7 +13,7 @@ use gpui::{
};
use indoc::indoc;
use language::{
EditPredictionsMode, File, Language,
EditPredictionsMode, File, Language, Rope,
language_settings::{self, AllLanguageSettings, EditPredictionProvider, all_language_settings},
};
use project::DisableAiSettings;
@@ -1056,8 +1056,11 @@ async fn open_disabled_globs_setting_in_editor(
) -> Result<()> {
let settings_editor = workspace
.update_in(cx, |_, window, cx| {
create_and_open_local_file(paths::settings_file(), window, cx, || {
settings::initial_user_settings_content().as_ref().into()
create_and_open_local_file(paths::settings_file(), window, cx, |cx| {
Rope::from_str(
settings::initial_user_settings_content().as_ref(),
cx.background_executor(),
)
})
})?
.await?


@@ -565,15 +565,6 @@ impl DisplayMap {
self.inlay_map.current_inlays()
}
// pub(crate) fn sync_unsplittable_groups(&mut self) {
// let group_boundaries = /* read diff hunk info from self */;
// self.splice_unsplittable_groups(group_boundaries);
// }
// fn splice_unsplittable_groups(&mut self, group_boundaries: Vec<usize>) {
// }
pub(crate) fn splice_inlays(
&mut self,
to_remove: &[InlayId],
@@ -1578,6 +1569,7 @@ pub mod tests {
use lsp::LanguageServerId;
use project::Project;
use rand::{Rng, prelude::*};
use rope::Rope;
use settings::{SettingsContent, SettingsStore};
use smol::stream::StreamExt;
use std::{env, sync::Arc};
@@ -2083,7 +2075,7 @@ pub mod tests {
vec![Inlay::edit_prediction(
0,
buffer_snapshot.anchor_after(0),
"\n",
Rope::from_str_small("\n"),
)],
cx,
);


@@ -507,55 +507,6 @@ impl BlockMap {
BlockMapWriter(self)
}
// current problem: how do we know how many spacer lines?
// q: which buffer row if any does this wrap row end? log(n)
// iterate over buffer rows; for each one, translate into a wrap row and seek to that wrap row
// q: for a given buffer row, what is the wrap row count? log(n)
// q: which diff hunk if any does this wrap row end?
// q: for a given diff hunk, what is the wrap row count of the _other side_?
// struct DiffLineMapping {
// group_boundaries: Vec<usize>, // group_boundaries[x] would be the start phys row for the `x`th group
// // group_boundaries[x + 1] would be the end phys ...
// row_boundaries: Vec<usize>, // row_boundaries[x] would be the start wrapped row for the `x`th phys line
// // row_boundaries[x + 1] would be the end wrapped_row ...
// }
// group is either:
// - a physical line that is not in a diff
// - an entire diff
//
// we never split up a group with a spacer
// impl DiffLineMapping {
// // modulo off-by-one errors
// fn which_buffer_row_end(&self, wrap_row: usize) -> Option<usize> {
// self.row_boundaries.binary_search(wrap_row).ok()
// }
// fn number_of_spacers_needed(&self, physical_line: usize, other: &DiffLineMapping) -> isize {
// (self.row_boundaries[physical_line + 1] - self.row_boundaries[physical_line]) - (other.row_boundaries[physical_line + 1] - other.row_boundaries[physical_line])
// }
// // left[x].end - left[x.start] - (right[x].end - right[x].start)
// }
// - we get some wrap row edits
// - compute the affected range in the same way
//
// old:
// loop {
// find the next block in the affected region for the edit
// insert an isomorphic transform leading up to that block
// insert block
// }
//
// new:
// for each wrap row in the affected region {
//
// }
fn sync(&self, wrap_snapshot: &WrapSnapshot, mut edits: Patch<u32>) {
let buffer = wrap_snapshot.buffer_snapshot();


@@ -107,8 +107,6 @@ impl FoldPoint {
}
pub fn to_offset(self, snapshot: &FoldSnapshot) -> FoldOffset {
dbg!(&self);
dbg!(snapshot.max_point());
let (start, _, item) = snapshot
.transforms
.find::<Dimensions<FoldPoint, TransformSummary>, _>((), &self, Bias::Right);


@@ -590,7 +590,6 @@ impl InlayMap {
.is_none_or(|edit| edit.old.start >= cursor.end().0)
{
let transform_start = new_transforms.summary().input.len;
// FIXME: attempted to subtract with overflow
let transform_end =
buffer_edit.new.end + (cursor.end().0 - buffer_edit.old.end);
push_isomorphic(
@@ -701,16 +700,20 @@ impl InlayMap {
.collect::<String>();
let next_inlay = if i % 2 == 0 {
use rope::Rope;
Inlay::mock_hint(
post_inc(next_inlay_id),
snapshot.buffer.anchor_at(position, bias),
&text,
Rope::from_str_small(&text),
)
} else {
use rope::Rope;
Inlay::edit_prediction(
post_inc(next_inlay_id),
snapshot.buffer.anchor_at(position, bias),
&text,
Rope::from_str_small(&text),
)
};
let inlay_id = next_inlay.id;
@@ -1302,7 +1305,7 @@ mod tests {
vec![Inlay::mock_hint(
post_inc(&mut next_inlay_id),
buffer.read(cx).snapshot(cx).anchor_after(3),
"|123|",
Rope::from_str_small("|123|"),
)],
);
assert_eq!(inlay_snapshot.text(), "abc|123|defghi");
@@ -1379,12 +1382,12 @@ mod tests {
Inlay::mock_hint(
post_inc(&mut next_inlay_id),
buffer.read(cx).snapshot(cx).anchor_before(3),
"|123|",
Rope::from_str_small("|123|"),
),
Inlay::edit_prediction(
post_inc(&mut next_inlay_id),
buffer.read(cx).snapshot(cx).anchor_after(3),
"|456|",
Rope::from_str_small("|456|"),
),
],
);
@@ -1594,17 +1597,17 @@ mod tests {
Inlay::mock_hint(
post_inc(&mut next_inlay_id),
buffer.read(cx).snapshot(cx).anchor_before(0),
"|123|\n",
Rope::from_str_small("|123|\n"),
),
Inlay::mock_hint(
post_inc(&mut next_inlay_id),
buffer.read(cx).snapshot(cx).anchor_before(4),
"|456|",
Rope::from_str_small("|456|"),
),
Inlay::edit_prediction(
post_inc(&mut next_inlay_id),
buffer.read(cx).snapshot(cx).anchor_before(7),
"\n|567|\n",
Rope::from_str_small("\n|567|\n"),
),
],
);
@@ -1678,9 +1681,14 @@ mod tests {
(offset, inlay.clone())
})
.collect::<Vec<_>>();
let mut expected_text = Rope::from(&buffer_snapshot.text());
let mut expected_text =
Rope::from_str(&buffer_snapshot.text(), cx.background_executor());
for (offset, inlay) in inlays.iter().rev() {
expected_text.replace(*offset..*offset, &inlay.text().to_string());
expected_text.replace(
*offset..*offset,
&inlay.text().to_string(),
cx.background_executor(),
);
}
assert_eq!(inlay_snapshot.text(), expected_text.to_string());
@@ -2068,7 +2076,7 @@ mod tests {
let inlay = Inlay {
id: InlayId::Hint(0),
position,
content: InlayContent::Text(text::Rope::from(inlay_text)),
content: InlayContent::Text(text::Rope::from_str(inlay_text, cx.background_executor())),
};
let (inlay_snapshot, _) = inlay_map.splice(&[], vec![inlay]);
@@ -2182,7 +2190,10 @@ mod tests {
let inlay = Inlay {
id: InlayId::Hint(0),
position,
content: InlayContent::Text(text::Rope::from(test_case.inlay_text)),
content: InlayContent::Text(text::Rope::from_str(
test_case.inlay_text,
cx.background_executor(),
)),
};
let (inlay_snapshot, _) = inlay_map.splice(&[], vec![inlay]);


@@ -1042,7 +1042,7 @@ mod tests {
let (mut tab_map, _) = TabMap::new(fold_snapshot, tab_size);
let tabs_snapshot = tab_map.set_max_expansion_column(32);
let text = text::Rope::from(tabs_snapshot.text().as_str());
let text = text::Rope::from_str(tabs_snapshot.text().as_str(), cx.background_executor());
log::info!(
"TabMap text (tab size: {}): {:?}",
tab_size,


@@ -568,18 +568,15 @@ impl WrapSnapshot {
let mut old_start = old_cursor.start().output.lines;
old_start += tab_edit.old.start.0 - old_cursor.start().input.lines;
// todo(lw): Should these be seek_forward?
old_cursor.seek(&tab_edit.old.end, Bias::Right);
old_cursor.seek_forward(&tab_edit.old.end, Bias::Right);
let mut old_end = old_cursor.start().output.lines;
old_end += tab_edit.old.end.0 - old_cursor.start().input.lines;
// todo(lw): Should these be seek_forward?
new_cursor.seek(&tab_edit.new.start, Bias::Right);
let mut new_start = new_cursor.start().output.lines;
new_start += tab_edit.new.start.0 - new_cursor.start().input.lines;
// todo(lw): Should these be seek_forward?
new_cursor.seek(&tab_edit.new.end, Bias::Right);
new_cursor.seek_forward(&tab_edit.new.end, Bias::Right);
let mut new_end = new_cursor.start().output.lines;
new_end += tab_edit.new.end.0 - new_cursor.start().input.lines;
@@ -834,25 +831,6 @@ impl WrapSnapshot {
None
}
// Editor calls DisplayMap::sync()
// - that calls all the other WhateverMap::sync()s
//
// BlockMap::sync()
// - WrapSnapshot
// - GroupBoundary (for the other Editor's DisplayMap)
// - the other Editor's BlockMap::sync() must have been called
// - which requires this Editor's BlockMap::sync() to have been called
//
// MultiBuffer::sync() <- InlayMap::sync() <- ... <- WrapMap::sync() (us)
// MultiBuffer::sync() <- InlayMap::sync() <- ... <- WrapMap::sync() (them)
//
// BlockMap::sync(our wrap map, their wrap map) (us)
// BlockMap::sync(their wrap map, our wrap map) (them)
// pub fn next_group_boundary(&self, point: WrapPoint) -> Option<(u32, Option<(added_lines: u32, deleted_lines: u32)>)> {
// // ...
// }
#[cfg(test)]
pub fn text(&self) -> String {
self.text_chunks(0).collect()
@@ -885,7 +863,7 @@ impl WrapSnapshot {
}
}
let text = language::Rope::from(self.text().as_str());
let text = language::Rope::from_str_small(self.text().as_str());
let mut input_buffer_rows = self.tab_snapshot.rows(0);
let mut expected_buffer_rows = Vec::new();
let mut prev_tab_row = 0;
@@ -1435,9 +1413,10 @@ mod tests {
}
}
let mut initial_text = Rope::from(initial_snapshot.text().as_str());
let mut initial_text =
Rope::from_str(initial_snapshot.text().as_str(), cx.background_executor());
for (snapshot, patch) in edits {
let snapshot_text = Rope::from(snapshot.text().as_str());
let snapshot_text = Rope::from_str(snapshot.text().as_str(), cx.background_executor());
for edit in &patch {
let old_start = initial_text.point_to_offset(Point::new(edit.new.start, 0));
let old_end = initial_text.point_to_offset(cmp::min(
@@ -1453,7 +1432,7 @@ mod tests {
.chunks_in_range(new_start..new_end)
.collect::<String>();
initial_text.replace(old_start..old_end, &new_text);
initial_text.replace(old_start..old_end, &new_text, cx.background_executor());
}
assert_eq!(initial_text.to_string(), snapshot_text.to_string());
}


@@ -1069,7 +1069,6 @@ pub struct Editor {
searchable: bool,
cursor_shape: CursorShape,
current_line_highlight: Option<CurrentLineHighlight>,
collapse_matches: bool,
autoindent_mode: Option<AutoindentMode>,
workspace: Option<(WeakEntity<Workspace>, Option<WorkspaceId>)>,
input_enabled: bool,
@@ -1833,9 +1832,15 @@ impl Editor {
project::Event::RefreshCodeLens => {
// we always query lens with actions, without storing them, always refreshing them
}
project::Event::RefreshInlayHints(server_id) => {
project::Event::RefreshInlayHints {
server_id,
request_id,
} => {
editor.refresh_inlay_hints(
InlayHintRefreshReason::RefreshRequested(*server_id),
InlayHintRefreshReason::RefreshRequested {
server_id: *server_id,
request_id: *request_id,
},
cx,
);
}
@@ -2119,7 +2124,7 @@ impl Editor {
.unwrap_or_default(),
current_line_highlight: None,
autoindent_mode: Some(AutoindentMode::EachLine),
collapse_matches: false,
workspace: None,
input_enabled: !is_minimap,
use_modal_editing: full_mode,
@@ -2272,7 +2277,7 @@ impl Editor {
}
}
EditorEvent::Edited { .. } => {
if !vim_enabled(cx) {
if vim_flavor(cx).is_none() {
let display_map = editor.display_snapshot(cx);
let selections = editor.selections.all_adjusted_display(&display_map);
let pop_state = editor
@@ -2881,12 +2886,12 @@ impl Editor {
self.current_line_highlight = current_line_highlight;
}
pub fn set_collapse_matches(&mut self, collapse_matches: bool) {
self.collapse_matches = collapse_matches;
}
pub fn range_for_match<T: std::marker::Copy>(&self, range: &Range<T>) -> Range<T> {
if self.collapse_matches {
pub fn range_for_match<T: std::marker::Copy>(
&self,
range: &Range<T>,
collapse: bool,
) -> Range<T> {
if collapse {
return range.start..range.start;
}
range.clone()
@@ -7853,7 +7858,7 @@ impl Editor {
let inlay = Inlay::edit_prediction(
post_inc(&mut self.next_inlay_id),
range.start,
new_text.as_str(),
Rope::from_str_small(new_text.as_str()),
);
inlay_ids.push(inlay.id);
inlays.push(inlay);
@@ -16654,7 +16659,7 @@ impl Editor {
editor.update_in(cx, |editor, window, cx| {
let range = target_range.to_point(target_buffer.read(cx));
let range = editor.range_for_match(&range);
let range = editor.range_for_match(&range, false);
let range = collapse_multiline_range(range);
if !split
@@ -21457,7 +21462,7 @@ impl Editor {
.and_then(|e| e.to_str())
.map(|a| a.to_string()));
let vim_mode = vim_enabled(cx);
let vim_mode = vim_flavor(cx).is_some();
let edit_predictions_provider = all_language_settings(file, cx).edit_predictions.provider;
let copilot_enabled = edit_predictions_provider
@@ -22088,10 +22093,26 @@ fn edit_for_markdown_paste<'a>(
(range, new_text)
}
fn vim_enabled(cx: &App) -> bool {
vim_mode_setting::VimModeSetting::try_get(cx)
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
pub enum VimFlavor {
Vim,
Helix,
}
pub fn vim_flavor(cx: &App) -> Option<VimFlavor> {
if vim_mode_setting::HelixModeSetting::try_get(cx)
.map(|helix_mode| helix_mode.0)
.unwrap_or(false)
{
Some(VimFlavor::Helix)
} else if vim_mode_setting::VimModeSetting::try_get(cx)
.map(|vim_mode| vim_mode.0)
.unwrap_or(false)
{
Some(VimFlavor::Vim)
} else {
None // neither vim nor helix mode
}
}
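The precedence encoded in `vim_flavor` — Helix mode takes priority over Vim mode, and neither enabled yields `None` — can be sketched without the settings machinery. Plain booleans stand in for the real `HelixModeSetting` / `VimModeSetting` lookups, which are assumptions here:

```rust
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
enum VimFlavor {
    Vim,
    Helix,
}

// Helix wins when both modes are enabled; None when neither is.
fn vim_flavor(helix_mode: bool, vim_mode: bool) -> Option<VimFlavor> {
    if helix_mode {
        Some(VimFlavor::Helix)
    } else if vim_mode {
        Some(VimFlavor::Vim)
    } else {
        None
    }
}

fn main() {
    println!("{:?}", vim_flavor(true, true));
}
```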
fn process_completion_for_edit(


@@ -159,6 +159,7 @@ pub struct SearchSettings {
pub case_sensitive: bool,
pub include_ignored: bool,
pub regex: bool,
pub center_on_match: bool,
}
impl EditorSettings {
@@ -249,6 +250,7 @@ impl Settings for EditorSettings {
case_sensitive: search.case_sensitive.unwrap(),
include_ignored: search.include_ignored.unwrap(),
regex: search.regex.unwrap(),
center_on_match: search.center_on_match.unwrap(),
},
auto_signature_help: editor.auto_signature_help.unwrap(),
show_signature_help_after_edits: editor.show_signature_help_after_edits.unwrap(),


@@ -5114,19 +5114,21 @@ impl EditorElement {
cx,
)
});
let Some((position, hover_popovers)) = hover_popovers else {
let Some((popover_position, hover_popovers)) = hover_popovers else {
return;
};
// This is safe because we check on layout whether the required row is available
let hovered_row_layout =
&line_layouts[position.row().minus(visible_display_row_range.start) as usize];
let hovered_row_layout = &line_layouts[popover_position
.row()
.minus(visible_display_row_range.start)
as usize];
// Compute Hovered Point
let x = hovered_row_layout.x_for_index(position.column() as usize)
let x = hovered_row_layout.x_for_index(popover_position.column() as usize)
- Pixels::from(scroll_pixel_position.x);
let y = Pixels::from(
position.row().as_f64() * ScrollPixelOffset::from(line_height)
popover_position.row().as_f64() * ScrollPixelOffset::from(line_height)
- scroll_pixel_position.y,
);
let hovered_point = content_origin + point(x, y);


@@ -602,6 +602,7 @@ impl GitBlame {
}
fn regenerate_on_edit(&mut self, cx: &mut Context<Self>) {
// todo(lw): hot foreground spawn
self.regenerate_on_edit_task = cx.spawn(async move |this, cx| {
cx.background_executor()
.timer(REGENERATE_ON_EDIT_DEBOUNCE_INTERVAL)
@@ -1114,18 +1115,19 @@ mod tests {
let fs = FakeFs::new(cx.executor());
let buffer_initial_text_len = rng.random_range(5..15);
let mut buffer_initial_text = Rope::from(
let mut buffer_initial_text = Rope::from_str(
RandomCharIter::new(&mut rng)
.take(buffer_initial_text_len)
.collect::<String>()
.as_str(),
cx.background_executor(),
);
let mut newline_ixs = (0..buffer_initial_text_len).choose_multiple(&mut rng, 5);
newline_ixs.sort_unstable();
for newline_ix in newline_ixs.into_iter().rev() {
let newline_ix = buffer_initial_text.clip_offset(newline_ix, Bias::Right);
buffer_initial_text.replace(newline_ix..newline_ix, "\n");
buffer_initial_text.replace(newline_ix..newline_ix, "\n", cx.background_executor());
}
log::info!("initial buffer text: {:?}", buffer_initial_text);


@@ -797,9 +797,18 @@ impl HoverState {
})
})?;
let mut point = anchor.to_display_point(&snapshot.display_snapshot);
// Clamp the point within the visible rows in case the popup source spans multiple lines
if point.row() < visible_rows.start {
if visible_rows.end <= point.row() {
point = crate::movement::up_by_rows(
&snapshot.display_snapshot,
point,
1 + (point.row() - visible_rows.end).0,
text::SelectionGoal::None,
true,
text_layout_details,
)
.0;
} else if point.row() < visible_rows.start {
point = crate::movement::down_by_rows(
&snapshot.display_snapshot,
point,
@@ -809,16 +818,11 @@ impl HoverState {
text_layout_details,
)
.0;
} else if visible_rows.end <= point.row() {
point = crate::movement::up_by_rows(
&snapshot.display_snapshot,
point,
(visible_rows.end - point.row()).0,
text::SelectionGoal::None,
true,
text_layout_details,
)
.0;
}
if !visible_rows.contains(&point.row()) {
log::error!("Hover popover point out of bounds after moving");
return None;
}
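The reordered branches above clamp the popover's anchor row into the visible range, with the `1 + (row - visible.end)` step count landing on the last visible row. A simplified sketch of that arithmetic, assuming a half-open `Range<u32>` of visible rows (helper name is mine):

```rust
use std::ops::Range;

// Rows at or past the visible end move up onto the last visible row
// (visible.end - 1); rows before the start move down onto visible.start.
fn clamp_to_visible(row: u32, visible: &Range<u32>) -> u32 {
    if visible.end <= row {
        row - (1 + (row - visible.end)) // lands on visible.end - 1
    } else if row < visible.start {
        row + (visible.start - row) // lands on visible.start
    } else {
        row
    }
}

fn main() {
    let visible = 5..10;
    assert!(visible.contains(&clamp_to_visible(42, &visible)));
    println!("clamped: {}", clamp_to_visible(42, &visible));
}
```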
let mut elements = Vec::new();


@@ -59,10 +59,10 @@ impl Inlay {
pub fn hint(id: InlayId, position: Anchor, hint: &InlayHint) -> Self {
let mut text = hint.text();
if hint.padding_right && text.reversed_chars_at(text.len()).next() != Some(' ') {
text.push(" ");
text.push_small(" ");
}
if hint.padding_left && text.chars_at(0).next() != Some(' ') {
text.push_front(" ");
text.push_front_small(" ");
}
Self {
id,
@@ -72,11 +72,11 @@ impl Inlay {
}
#[cfg(any(test, feature = "test-support"))]
pub fn mock_hint(id: usize, position: Anchor, text: impl Into<Rope>) -> Self {
pub fn mock_hint(id: usize, position: Anchor, text: Rope) -> Self {
Self {
id: InlayId::Hint(id),
position,
content: InlayContent::Text(text.into()),
content: InlayContent::Text(text),
}
}
@@ -88,19 +88,19 @@ impl Inlay {
}
}
pub fn edit_prediction<T: Into<Rope>>(id: usize, position: Anchor, text: T) -> Self {
pub fn edit_prediction(id: usize, position: Anchor, text: Rope) -> Self {
Self {
id: InlayId::EditPrediction(id),
position,
content: InlayContent::Text(text.into()),
content: InlayContent::Text(text),
}
}
pub fn debugger<T: Into<Rope>>(id: usize, position: Anchor, text: T) -> Self {
pub fn debugger(id: usize, position: Anchor, text: Rope) -> Self {
Self {
id: InlayId::DebuggerValue(id),
position,
content: InlayContent::Text(text.into()),
content: InlayContent::Text(text),
}
}
@@ -108,7 +108,7 @@ impl Inlay {
static COLOR_TEXT: OnceLock<Rope> = OnceLock::new();
match &self.content {
InlayContent::Text(text) => text,
InlayContent::Color(_) => COLOR_TEXT.get_or_init(|| Rope::from("")),
InlayContent::Color(_) => COLOR_TEXT.get_or_init(|| Rope::from_str_small("")),
}
}


@@ -1,5 +1,4 @@
use std::{
collections::hash_map,
ops::{ControlFlow, Range},
time::Duration,
};
@@ -49,8 +48,8 @@ pub struct LspInlayHintData {
allowed_hint_kinds: HashSet<Option<InlayHintKind>>,
invalidate_debounce: Option<Duration>,
append_debounce: Option<Duration>,
hint_refresh_tasks: HashMap<BufferId, HashMap<Vec<Range<BufferRow>>, Vec<Task<()>>>>,
hint_chunk_fetched: HashMap<BufferId, (Global, HashSet<Range<BufferRow>>)>,
hint_refresh_tasks: HashMap<BufferId, Vec<Task<()>>>,
hint_chunk_fetching: HashMap<BufferId, (Global, HashSet<Range<BufferRow>>)>,
invalidate_hints_for_buffers: HashSet<BufferId>,
pub added_hints: HashMap<InlayId, Option<InlayHintKind>>,
}
@@ -63,7 +62,7 @@ impl LspInlayHintData {
enabled_in_settings: settings.enabled,
hint_refresh_tasks: HashMap::default(),
added_hints: HashMap::default(),
hint_chunk_fetched: HashMap::default(),
hint_chunk_fetching: HashMap::default(),
invalidate_hints_for_buffers: HashSet::default(),
invalidate_debounce: debounce_value(settings.edit_debounce_ms),
append_debounce: debounce_value(settings.scroll_debounce_ms),
@@ -99,9 +98,8 @@ impl LspInlayHintData {
pub fn clear(&mut self) {
self.hint_refresh_tasks.clear();
self.hint_chunk_fetched.clear();
self.hint_chunk_fetching.clear();
self.added_hints.clear();
self.invalidate_hints_for_buffers.clear();
}
/// Checks inlay hint settings for enabled hint kinds and general enabled state.
@@ -199,7 +197,7 @@ impl LspInlayHintData {
) {
for buffer_id in removed_buffer_ids {
self.hint_refresh_tasks.remove(buffer_id);
self.hint_chunk_fetched.remove(buffer_id);
self.hint_chunk_fetching.remove(buffer_id);
}
}
}
@@ -211,7 +209,10 @@ pub enum InlayHintRefreshReason {
SettingsChange(InlayHintSettings),
NewLinesShown,
BufferEdited(BufferId),
RefreshRequested(LanguageServerId),
RefreshRequested {
server_id: LanguageServerId,
request_id: Option<usize>,
},
ExcerptsRemoved(Vec<ExcerptId>),
}
@@ -296,7 +297,7 @@ impl Editor {
| InlayHintRefreshReason::Toggle(_)
| InlayHintRefreshReason::SettingsChange(_) => true,
InlayHintRefreshReason::NewLinesShown
| InlayHintRefreshReason::RefreshRequested(_)
| InlayHintRefreshReason::RefreshRequested { .. }
| InlayHintRefreshReason::ExcerptsRemoved(_) => false,
InlayHintRefreshReason::BufferEdited(buffer_id) => {
let Some(affected_language) = self
@@ -370,48 +371,45 @@ impl Editor {
let Some(buffer) = multi_buffer.read(cx).buffer(buffer_id) else {
continue;
};
let fetched_tasks = inlay_hints.hint_chunk_fetched.entry(buffer_id).or_default();
let (fetched_for_version, fetched_chunks) = inlay_hints
.hint_chunk_fetching
.entry(buffer_id)
.or_default();
if visible_excerpts
.buffer_version
.changed_since(&fetched_tasks.0)
.changed_since(fetched_for_version)
{
fetched_tasks.1.clear();
fetched_tasks.0 = visible_excerpts.buffer_version.clone();
*fetched_for_version = visible_excerpts.buffer_version.clone();
fetched_chunks.clear();
inlay_hints.hint_refresh_tasks.remove(&buffer_id);
}
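When `buffer_version.changed_since(fetched_for_version)` reports an edit, the code above clears the fetched-chunk set and bumps the stored version, so stale coverage never suppresses a refetch. The same invalidation shape with a plain counter standing in for the version clock (a sketch, not Zed's `Global` type):

```rust
use std::collections::HashSet;

struct ChunkCache {
    version: u64,
    fetched: HashSet<u32>,
}

// Drop all recorded chunk coverage whenever the buffer version advances,
// mirroring the changed_since() check in the diff.
fn sync_version(cache: &mut ChunkCache, current_version: u64) {
    if current_version > cache.version {
        cache.fetched.clear();
        cache.version = current_version;
    }
}
```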
let applicable_chunks =
semantics_provider.applicable_inlay_chunks(&buffer, &visible_excerpts.ranges, cx);
let known_chunks = if ignore_previous_fetches {
None
} else {
Some((fetched_for_version.clone(), fetched_chunks.clone()))
};
match inlay_hints
let mut applicable_chunks =
semantics_provider.applicable_inlay_chunks(&buffer, &visible_excerpts.ranges, cx);
applicable_chunks.retain(|chunk| fetched_chunks.insert(chunk.clone()));
if applicable_chunks.is_empty() && !ignore_previous_fetches {
continue;
}
inlay_hints
.hint_refresh_tasks
.entry(buffer_id)
.or_default()
.entry(applicable_chunks)
{
hash_map::Entry::Occupied(mut o) => {
if invalidate_cache.should_invalidate() || ignore_previous_fetches {
o.get_mut().push(spawn_editor_hints_refresh(
buffer_id,
invalidate_cache,
ignore_previous_fetches,
debounce,
visible_excerpts,
cx,
));
}
}
hash_map::Entry::Vacant(v) => {
v.insert(Vec::new()).push(spawn_editor_hints_refresh(
buffer_id,
invalidate_cache,
ignore_previous_fetches,
debounce,
visible_excerpts,
cx,
));
}
}
.push(spawn_editor_hints_refresh(
buffer_id,
invalidate_cache,
debounce,
visible_excerpts,
known_chunks,
applicable_chunks,
cx,
));
}
}
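The line `applicable_chunks.retain(|chunk| fetched_chunks.insert(chunk.clone()))` filters and records in one pass: `HashSet::insert` returns `true` only for values not already present, so `retain` keeps exactly the chunks that were never fetched before while marking them fetched. The idiom in isolation:

```rust
use std::collections::HashSet;

// Keep only elements not seen before, while marking them as seen.
// HashSet::insert returns true iff the value was newly inserted, so
// retain() drops already-fetched items and duplicates in a single pass.
fn retain_unseen(items: &mut Vec<u32>, seen: &mut HashSet<u32>) {
    items.retain(|item| seen.insert(*item));
}
```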
@@ -506,9 +504,13 @@ impl Editor {
}
InlayHintRefreshReason::NewLinesShown => InvalidationStrategy::None,
InlayHintRefreshReason::BufferEdited(_) => InvalidationStrategy::BufferEdited,
InlayHintRefreshReason::RefreshRequested(server_id) => {
InvalidationStrategy::RefreshRequested(*server_id)
}
InlayHintRefreshReason::RefreshRequested {
server_id,
request_id,
} => InvalidationStrategy::RefreshRequested {
server_id: *server_id,
request_id: *request_id,
},
};
match &mut self.inlay_hints {
@@ -718,44 +720,29 @@ impl Editor {
fn inlay_hints_for_buffer(
&mut self,
invalidate_cache: InvalidationStrategy,
ignore_previous_fetches: bool,
buffer_excerpts: VisibleExcerpts,
known_chunks: Option<(Global, HashSet<Range<BufferRow>>)>,
cx: &mut Context<Self>,
) -> Option<Vec<Task<(Range<BufferRow>, anyhow::Result<CacheInlayHints>)>>> {
let semantics_provider = self.semantics_provider()?;
let inlay_hints = self.inlay_hints.as_mut()?;
let buffer_id = buffer_excerpts.buffer.read(cx).remote_id();
let new_hint_tasks = semantics_provider
.inlay_hints(
invalidate_cache,
buffer_excerpts.buffer,
buffer_excerpts.ranges,
inlay_hints
.hint_chunk_fetched
.get(&buffer_id)
.filter(|_| !ignore_previous_fetches && !invalidate_cache.should_invalidate())
.cloned(),
known_chunks,
cx,
)
.unwrap_or_default();
let (known_version, known_chunks) =
inlay_hints.hint_chunk_fetched.entry(buffer_id).or_default();
if buffer_excerpts.buffer_version.changed_since(known_version) {
known_chunks.clear();
*known_version = buffer_excerpts.buffer_version;
}
let mut hint_tasks = Vec::new();
let mut hint_tasks = None;
for (row_range, new_hints_task) in new_hint_tasks {
let inserted = known_chunks.insert(row_range.clone());
if inserted || ignore_previous_fetches || invalidate_cache.should_invalidate() {
hint_tasks.push(cx.spawn(async move |_, _| (row_range, new_hints_task.await)));
}
hint_tasks
.get_or_insert_with(Vec::new)
.push(cx.spawn(async move |_, _| (row_range, new_hints_task.await)));
}
Some(hint_tasks)
hint_tasks
}
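In the rewritten `inlay_hints_for_buffer`, `hint_tasks` starts as `None` and is only materialized via `get_or_insert_with(Vec::new)` when the first task arrives, so the caller can distinguish "nothing to do" from "spawned an empty batch". A reduced sketch of that lazy-`Option<Vec>` pattern:

```rust
// Lazily build a Vec inside an Option: None means "no work was produced",
// Some(vec) means at least one item was queued.
fn collect_even(numbers: &[i32]) -> Option<Vec<i32>> {
    let mut out: Option<Vec<i32>> = None;
    for &n in numbers {
        if n % 2 == 0 {
            out.get_or_insert_with(Vec::new).push(n);
        }
    }
    out
}
```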
fn apply_fetched_hints(
@@ -793,20 +780,28 @@ impl Editor {
let excerpts = self.buffer.read(cx).excerpt_ids();
let hints_to_insert = new_hints
.into_iter()
.filter_map(|(chunk_range, hints_result)| match hints_result {
Ok(new_hints) => Some(new_hints),
Err(e) => {
log::error!(
"Failed to query inlays for buffer row range {chunk_range:?}, {e:#}"
);
if let Some((for_version, chunks_fetched)) =
inlay_hints.hint_chunk_fetched.get_mut(&buffer_id)
{
if for_version == &query_version {
chunks_fetched.remove(&chunk_range);
.filter_map(|(chunk_range, hints_result)| {
let chunks_fetched = inlay_hints.hint_chunk_fetching.get_mut(&buffer_id);
match hints_result {
Ok(new_hints) => {
if new_hints.is_empty() {
if let Some((_, chunks_fetched)) = chunks_fetched {
chunks_fetched.remove(&chunk_range);
}
}
Some(new_hints)
}
Err(e) => {
log::error!(
"Failed to query inlays for buffer row range {chunk_range:?}, {e:#}"
);
if let Some((for_version, chunks_fetched)) = chunks_fetched {
if for_version == &query_version {
chunks_fetched.remove(&chunk_range);
}
}
None
}
None
}
})
.flat_map(|hints| hints.into_values())
@@ -856,9 +851,10 @@ struct VisibleExcerpts {
fn spawn_editor_hints_refresh(
buffer_id: BufferId,
invalidate_cache: InvalidationStrategy,
ignore_previous_fetches: bool,
debounce: Option<Duration>,
buffer_excerpts: VisibleExcerpts,
known_chunks: Option<(Global, HashSet<Range<BufferRow>>)>,
applicable_chunks: Vec<Range<BufferRow>>,
cx: &mut Context<'_, Editor>,
) -> Task<()> {
cx.spawn(async move |editor, cx| {
@@ -869,12 +865,7 @@ fn spawn_editor_hints_refresh(
let query_version = buffer_excerpts.buffer_version.clone();
let Some(hint_tasks) = editor
.update(cx, |editor, cx| {
editor.inlay_hints_for_buffer(
invalidate_cache,
ignore_previous_fetches,
buffer_excerpts,
cx,
)
editor.inlay_hints_for_buffer(invalidate_cache, buffer_excerpts, known_chunks, cx)
})
.ok()
else {
@@ -882,6 +873,19 @@ fn spawn_editor_hints_refresh(
};
let hint_tasks = hint_tasks.unwrap_or_default();
if hint_tasks.is_empty() {
editor
.update(cx, |editor, _| {
if let Some((_, hint_chunk_fetching)) = editor
.inlay_hints
.as_mut()
.and_then(|inlay_hints| inlay_hints.hint_chunk_fetching.get_mut(&buffer_id))
{
for applicable_chunks in &applicable_chunks {
hint_chunk_fetching.remove(applicable_chunks);
}
}
})
.ok();
return;
}
let new_hints = join_all(hint_tasks).await;
@@ -1102,7 +1106,10 @@ pub mod tests {
editor
.update(cx, |editor, _window, cx| {
editor.refresh_inlay_hints(
InlayHintRefreshReason::RefreshRequested(fake_server.server.server_id()),
InlayHintRefreshReason::RefreshRequested {
server_id: fake_server.server.server_id(),
request_id: Some(1),
},
cx,
);
})
@@ -1958,15 +1965,8 @@ pub mod tests {
async fn test_large_buffer_inlay_requests_split(cx: &mut gpui::TestAppContext) {
init_test(cx, |settings| {
settings.defaults.inlay_hints = Some(InlayHintSettingsContent {
show_value_hints: Some(true),
enabled: Some(true),
edit_debounce_ms: Some(0),
scroll_debounce_ms: Some(0),
show_type_hints: Some(true),
show_parameter_hints: Some(true),
show_other_hints: Some(true),
show_background: Some(false),
toggle_on_modifiers_press: None,
..InlayHintSettingsContent::default()
})
});
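The test setup above is trimmed by spelling out only the fields under test and filling the rest with struct update syntax (`..InlayHintSettingsContent::default()`). The idiom with a hypothetical stand-in struct:

```rust
#[derive(Debug, Default, PartialEq)]
struct HintSettings {
    enabled: bool,
    edit_debounce_ms: u64,
    scroll_debounce_ms: u64,
    show_background: bool,
}

// Struct update syntax: name only the fields that matter for the test and
// take everything else from Default. Fields added to the struct later
// won't break this construction site.
fn test_settings() -> HintSettings {
    HintSettings {
        enabled: true,
        ..HintSettings::default()
    }
}
```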
@@ -2044,6 +2044,7 @@ pub mod tests {
cx.add_window(|window, cx| Editor::for_buffer(buffer, Some(project), window, cx));
cx.executor().run_until_parked();
let _fake_server = fake_servers.next().await.unwrap();
cx.executor().advance_clock(Duration::from_millis(100));
cx.executor().run_until_parked();
let ranges = lsp_request_ranges
@@ -2129,6 +2130,7 @@ pub mod tests {
);
})
.unwrap();
cx.executor().advance_clock(Duration::from_millis(100));
cx.executor().run_until_parked();
editor.update(cx, |_, _, _| {
let ranges = lsp_request_ranges
@@ -2145,6 +2147,7 @@ pub mod tests {
editor.handle_input("++++more text++++", window, cx);
})
.unwrap();
cx.executor().advance_clock(Duration::from_secs(1));
cx.executor().run_until_parked();
editor.update(cx, |editor, _window, cx| {
let mut ranges = lsp_request_ranges.lock().drain(..).collect::<Vec<_>>();
@@ -3887,7 +3890,10 @@ let c = 3;"#
editor
.update(cx, |editor, _, cx| {
editor.refresh_inlay_hints(
InlayHintRefreshReason::RefreshRequested(fake_server.server.server_id()),
InlayHintRefreshReason::RefreshRequested {
server_id: fake_server.server.server_id(),
request_id: Some(1),
},
cx,
);
})
@@ -4022,7 +4028,7 @@ let c = 3;"#
let mut all_fetched_hints = Vec::new();
for buffer in editor.buffer.read(cx).all_buffers() {
lsp_store.update(cx, |lsp_store, cx| {
let hints = &lsp_store.latest_lsp_data(&buffer, cx).inlay_hints();
let hints = lsp_store.latest_lsp_data(&buffer, cx).inlay_hints();
all_cached_labels.extend(hints.all_cached_hints().into_iter().map(|hint| {
let mut label = hint.text().to_string();
if hint.padding_left {

View File

@@ -1587,12 +1587,18 @@ impl SearchableItem for Editor {
&mut self,
index: usize,
matches: &[Range<Anchor>],
collapse: bool,
window: &mut Window,
cx: &mut Context<Self>,
) {
self.unfold_ranges(&[matches[index].clone()], false, true, cx);
let range = self.range_for_match(&matches[index]);
self.change_selections(Default::default(), window, cx, |s| {
let range = self.range_for_match(&matches[index], collapse);
let autoscroll = if EditorSettings::get_global(cx).search.center_on_match {
Autoscroll::center()
} else {
Autoscroll::fit()
};
self.change_selections(SelectionEffects::scroll(autoscroll), window, cx, |s| {
s.select_ranges([range]);
})
}

View File

@@ -1,5 +1,5 @@
use collections::HashMap;
use gpui::{Context, Window};
use gpui::{AppContext, Context, Window};
use itertools::Itertools;
use std::{ops::Range, time::Duration};
use text::{AnchorRangeExt, BufferId, ToPoint};
@@ -59,8 +59,9 @@ pub(super) fn refresh_linked_ranges(
let mut applicable_selections = Vec::new();
editor
.update(cx, |editor, cx| {
let selections = editor.selections.all::<usize>(&editor.display_snapshot(cx));
let snapshot = editor.buffer.read(cx).snapshot(cx);
let display_snapshot = editor.display_snapshot(cx);
let selections = editor.selections.all::<usize>(&display_snapshot);
let snapshot = display_snapshot.buffer_snapshot();
let buffer = editor.buffer.read(cx);
for selection in selections {
let cursor_position = selection.head();
@@ -90,14 +91,16 @@ pub(super) fn refresh_linked_ranges(
let highlights = project
.update(cx, |project, cx| {
let mut linked_edits_tasks = vec![];
for (buffer, start, end) in &applicable_selections {
let snapshot = buffer.read(cx).snapshot();
let buffer_id = buffer.read(cx).remote_id();
let linked_edits_task = project.linked_edits(buffer, *start, cx);
let highlights = move || async move {
let cx = cx.to_async();
let highlights = async move {
let edits = linked_edits_task.await.log_err()?;
let snapshot = cx
.read_entity(&buffer, |buffer, _| buffer.snapshot())
.ok()?;
let buffer_id = snapshot.remote_id();
// Find the range containing our current selection.
// We might not find one, because the selection contains both the start and end of the contained range
// (think of selecting <`html>foo`</html> - even though there's a matching closing tag, the selection goes beyond the range of the opening tag)
@@ -128,7 +131,7 @@ pub(super) fn refresh_linked_ranges(
siblings.sort_by(|lhs, rhs| lhs.0.cmp(&rhs.0, &snapshot));
Some((buffer_id, siblings))
};
linked_edits_tasks.push(highlights());
linked_edits_tasks.push(highlights);
}
linked_edits_tasks
})

View File

@@ -60,8 +60,10 @@ async fn lsp_task_context(
buffer: &Entity<Buffer>,
cx: &mut AsyncApp,
) -> Option<TaskContext> {
let worktree_store = project
.read_with(cx, |project, _| project.worktree_store())
let (worktree_store, environment) = project
.read_with(cx, |project, _| {
(project.worktree_store(), project.environment().clone())
})
.ok()?;
let worktree_abs_path = cx
@@ -74,9 +76,9 @@ async fn lsp_task_context(
})
.ok()?;
let project_env = project
.update(cx, |project, cx| {
project.buffer_environment(buffer, &worktree_store, cx)
let project_env = environment
.update(cx, |environment, cx| {
environment.buffer_environment(buffer, &worktree_store, cx)
})
.ok()?
.await;
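The `lsp_task_context` hunk folds two separate entity reads into one `read_with` returning a tuple, so the project is read once instead of twice. The same shape with a plain `Mutex` standing in for the entity (names hypothetical):

```rust
use std::sync::Mutex;

struct Project {
    worktree_count: usize,
    env_name: String,
}

// Read both fields under a single lock acquisition instead of two,
// mirroring the diff's single read_with that returns a tuple.
fn snapshot(project: &Mutex<Project>) -> (usize, String) {
    let guard = project.lock().unwrap();
    (guard.worktree_count, guard.env_name.clone())
}
```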

View File

@@ -878,6 +878,7 @@ mod tests {
use gpui::{AppContext as _, font, px};
use language::Capability;
use project::{Project, project_settings::DiagnosticSeverity};
use rope::Rope;
use settings::SettingsStore;
use util::post_inc;
@@ -1024,22 +1025,22 @@ mod tests {
Inlay::edit_prediction(
post_inc(&mut id),
buffer_snapshot.anchor_before(offset),
"test",
Rope::from_str_small("test"),
),
Inlay::edit_prediction(
post_inc(&mut id),
buffer_snapshot.anchor_after(offset),
"test",
Rope::from_str_small("test"),
),
Inlay::mock_hint(
post_inc(&mut id),
buffer_snapshot.anchor_before(offset),
"test",
Rope::from_str_small("test"),
),
Inlay::mock_hint(
post_inc(&mut id),
buffer_snapshot.anchor_after(offset),
"test",
Rope::from_str_small("test"),
),
]
})

View File

@@ -193,7 +193,7 @@ impl Editor {
if let Some(language) = language {
for signature in &mut signature_help.signatures {
let text = Rope::from(signature.label.as_ref());
let text = Rope::from_str_small(signature.label.as_ref());
let highlights = language
.highlight_text(&text, 0..signature.label.len())
.into_iter()

View File

@@ -145,6 +145,10 @@ fn extension_provides(manifest: &ExtensionManifest) -> BTreeSet<ExtensionProvide
provides.insert(ExtensionProvides::ContextServers);
}
if !manifest.agent_servers.is_empty() {
provides.insert(ExtensionProvides::AgentServers);
}
if manifest.snippets.is_some() {
provides.insert(ExtensionProvides::Snippets);
}

View File

@@ -1468,6 +1468,7 @@ impl ExtensionStore {
let extensions_dir = self.installed_dir.clone();
let index_path = self.index_path.clone();
let proxy = self.proxy.clone();
let executor = cx.background_executor().clone();
cx.background_spawn(async move {
let start_time = Instant::now();
let mut index = ExtensionIndex::default();
@@ -1501,10 +1502,14 @@ impl ExtensionStore {
}
if let Ok(index_json) = serde_json::to_string_pretty(&index) {
fs.save(&index_path, &index_json.as_str().into(), Default::default())
.await
.context("failed to save extension index")
.log_err();
fs.save(
&index_path,
&Rope::from_str(&index_json, &executor),
Default::default(),
)
.await
.context("failed to save extension index")
.log_err();
}
log::info!("rebuilt extension index in {:?}", start_time.elapsed());
@@ -1671,7 +1676,7 @@ impl ExtensionStore {
let manifest_toml = toml::to_string(&loaded_extension.manifest)?;
fs.save(
&tmp_dir.join(EXTENSION_TOML),
&Rope::from(manifest_toml),
&Rope::from_str_small(&manifest_toml),
language::LineEnding::Unix,
)
.await?;

View File

@@ -225,6 +225,9 @@ impl ExtensionFilter {
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Clone, Copy)]
enum Feature {
AgentClaude,
AgentCodex,
AgentGemini,
ExtensionRuff,
ExtensionTailwind,
Git,
@@ -244,6 +247,9 @@ fn keywords_by_feature() -> &'static BTreeMap<Feature, Vec<&'static str>> {
static KEYWORDS_BY_FEATURE: OnceLock<BTreeMap<Feature, Vec<&'static str>>> = OnceLock::new();
KEYWORDS_BY_FEATURE.get_or_init(|| {
BTreeMap::from_iter([
(Feature::AgentClaude, vec!["claude", "claude code"]),
(Feature::AgentCodex, vec!["codex", "codex cli"]),
(Feature::AgentGemini, vec!["gemini", "gemini cli"]),
(Feature::ExtensionRuff, vec!["ruff"]),
(Feature::ExtensionTailwind, vec!["tail", "tailwind"]),
(Feature::Git, vec!["git"]),
@@ -799,25 +805,22 @@ impl ExtensionsPage {
)
.child(
h_flex()
.gap_2()
.gap_1()
.justify_between()
.child(
h_flex()
.gap_1()
.child(
Icon::new(IconName::Person)
.size(IconSize::XSmall)
.color(Color::Muted),
)
.child(
Label::new(extension.manifest.authors.join(", "))
.size(LabelSize::Small)
.color(Color::Muted)
.truncate(),
),
Icon::new(IconName::Person)
.size(IconSize::XSmall)
.color(Color::Muted),
)
.child(
Label::new(extension.manifest.authors.join(", "))
.size(LabelSize::Small)
.color(Color::Muted)
.truncate(),
)
.child(
h_flex()
.ml_auto()
.gap_1()
.child(
IconButton::new(
@@ -1422,6 +1425,24 @@ impl ExtensionsPage {
for feature in &self.upsells {
let banner = match feature {
Feature::AgentClaude => self.render_feature_upsell_banner(
"Claude Code support is built-in to Zed!".into(),
"https://zed.dev/docs/ai/external-agents#claude-code".into(),
false,
cx,
),
Feature::AgentCodex => self.render_feature_upsell_banner(
"Codex CLI support is built-in to Zed!".into(),
"https://zed.dev/docs/ai/external-agents#codex-cli".into(),
false,
cx,
),
Feature::AgentGemini => self.render_feature_upsell_banner(
"Gemini CLI support is built-in to Zed!".into(),
"https://zed.dev/docs/ai/external-agents#gemini-cli".into(),
false,
cx,
),
Feature::ExtensionRuff => self.render_feature_upsell_banner(
"Ruff (linter for Python) support is built-in to Zed!".into(),
"https://zed.dev/docs/languages/python#code-formatting--linting".into(),

View File

@@ -711,7 +711,9 @@ impl PickerDelegate for OpenPathDelegate {
match &self.directory_state {
DirectoryState::List { parent_path, .. } => {
let (label, indices) = if *parent_path == self.prompt_root {
let (label, indices) = if is_current_dir_candidate {
("open this directory".to_string(), vec![])
} else if *parent_path == self.prompt_root {
match_positions.iter_mut().for_each(|position| {
*position += self.prompt_root.len();
});
@@ -719,8 +721,6 @@ impl PickerDelegate for OpenPathDelegate {
format!("{}{}", self.prompt_root, candidate.path.string),
match_positions,
)
} else if is_current_dir_candidate {
("open this directory".to_string(), vec![])
} else {
(candidate.path.string, match_positions)
};
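The fix above moves the `is_current_dir_candidate` branch ahead of the `parent_path == prompt_root` branch: when both conditions held, the old ordering hit the root branch first and never produced the "open this directory" label. A reduced sketch of the ordering hazard:

```rust
// If-else chains test in order: the more specific condition must come
// first, or it is shadowed whenever both conditions are true.
fn label(is_current_dir: bool, at_root: bool) -> &'static str {
    if is_current_dir {
        "open this directory"
    } else if at_root {
        "root entry"
    } else {
        "plain entry"
    }
}
```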

View File

@@ -377,7 +377,7 @@ impl Fs for RealFs {
#[cfg(windows)]
if smol::fs::metadata(&target).await?.is_dir() {
let status = smol::process::Command::new("cmd")
let status = new_smol_command("cmd")
.args(["/C", "mklink", "/J"])
.args([path, target.as_path()])
.status()

View File

@@ -23,6 +23,7 @@ serde.workspace = true
serde_json.workspace = true
settings.workspace = true
url.workspace = true
urlencoding.workspace = true
util.workspace = true
[dev-dependencies]

View File

@@ -1,5 +1,11 @@
use std::str::FromStr;
use std::{str::FromStr, sync::Arc};
use anyhow::{Context as _, Result, bail};
use async_trait::async_trait;
use futures::AsyncReadExt;
use gpui::SharedString;
use http_client::{AsyncBody, HttpClient, HttpRequestExt, Request};
use serde::Deserialize;
use url::Url;
use git::{
@@ -9,6 +15,55 @@ use git::{
pub struct Gitee;
#[derive(Debug, Deserialize)]
struct CommitDetails {
author: Option<Author>,
}
#[derive(Debug, Deserialize)]
struct Author {
avatar_url: String,
}
impl Gitee {
async fn fetch_gitee_commit_author(
&self,
repo_owner: &str,
repo: &str,
commit: &str,
client: &Arc<dyn HttpClient>,
) -> Result<Option<Author>> {
let url = format!("https://gitee.com/api/v5/repos/{repo_owner}/{repo}/commits/{commit}");
let request = Request::get(&url)
.header("Content-Type", "application/json")
.follow_redirects(http_client::RedirectPolicy::FollowAll);
let mut response = client
.send(request.body(AsyncBody::default())?)
.await
.with_context(|| format!("error fetching Gitee commit details at {:?}", url))?;
let mut body = Vec::new();
response.body_mut().read_to_end(&mut body).await?;
if response.status().is_client_error() {
let text = String::from_utf8_lossy(body.as_slice());
bail!(
"status error {}, response: {text:?}",
response.status().as_u16()
);
}
let body_str = std::str::from_utf8(&body)?;
serde_json::from_str::<CommitDetails>(body_str)
.map(|commit| commit.author)
.context("failed to deserialize Gitee commit details")
}
}
#[async_trait]
impl GitHostingProvider for Gitee {
fn name(&self) -> String {
"Gitee".to_string()
@@ -19,7 +74,7 @@ impl GitHostingProvider for Gitee {
}
fn supports_avatars(&self) -> bool {
false
true
}
fn format_line_number(&self, line: u32) -> String {
@@ -80,6 +135,26 @@ impl GitHostingProvider for Gitee {
);
permalink
}
async fn commit_author_avatar_url(
&self,
repo_owner: &str,
repo: &str,
commit: SharedString,
http_client: Arc<dyn HttpClient>,
) -> Result<Option<Url>> {
let commit = commit.to_string();
let avatar_url = self
.fetch_gitee_commit_author(repo_owner, repo, &commit, &http_client)
.await?
.map(|author| -> Result<Url, url::ParseError> {
let mut url = Url::parse(&author.avatar_url)?;
url.set_query(Some("width=128"));
Ok(url)
})
.transpose()?;
Ok(avatar_url)
}
}
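`commit_author_avatar_url` above replaces the avatar URL's query string with `width=128` to request a fixed-size image via `Url::set_query`. A std-only sketch of that rewrite (real code should use the `url` crate as the diff does):

```rust
// Replace a URL's query string with a fixed size hint, mimicking
// Url::set_query(Some("width=128")) without the url crate.
fn with_size_query(avatar_url: &str) -> String {
    let base = avatar_url.split('?').next().unwrap_or(avatar_url);
    format!("{base}?width=128")
}
```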
#[cfg(test)]

View File

@@ -1,6 +1,11 @@
use std::str::FromStr;
use std::{str::FromStr, sync::Arc};
use anyhow::{Result, bail};
use anyhow::{Context as _, Result, bail};
use async_trait::async_trait;
use futures::AsyncReadExt;
use gpui::SharedString;
use http_client::{AsyncBody, HttpClient, HttpRequestExt, Request};
use serde::Deserialize;
use url::Url;
use git::{
@@ -10,6 +15,16 @@ use git::{
use crate::get_host_from_git_remote_url;
#[derive(Debug, Deserialize)]
struct CommitDetails {
author_email: String,
}
#[derive(Debug, Deserialize)]
struct AvatarInfo {
avatar_url: String,
}
#[derive(Debug)]
pub struct Gitlab {
name: String,
@@ -46,8 +61,79 @@ impl Gitlab {
Url::parse(&format!("https://{}", host))?,
))
}
async fn fetch_gitlab_commit_author(
&self,
repo_owner: &str,
repo: &str,
commit: &str,
client: &Arc<dyn HttpClient>,
) -> Result<Option<AvatarInfo>> {
let Some(host) = self.base_url.host_str() else {
bail!("failed to get host from gitlab base url");
};
let project_path = format!("{}/{}", repo_owner, repo);
let project_path_encoded = urlencoding::encode(&project_path);
let url = format!(
"https://{host}/api/v4/projects/{project_path_encoded}/repository/commits/{commit}"
);
let request = Request::get(&url)
.header("Content-Type", "application/json")
.follow_redirects(http_client::RedirectPolicy::FollowAll);
let mut response = client
.send(request.body(AsyncBody::default())?)
.await
.with_context(|| format!("error fetching GitLab commit details at {:?}", url))?;
let mut body = Vec::new();
response.body_mut().read_to_end(&mut body).await?;
if response.status().is_client_error() {
let text = String::from_utf8_lossy(body.as_slice());
bail!(
"status error {}, response: {text:?}",
response.status().as_u16()
);
}
let body_str = std::str::from_utf8(&body)?;
let author_email = serde_json::from_str::<CommitDetails>(body_str)
.map(|commit| commit.author_email)
.context("failed to deserialize GitLab commit details")?;
let avatar_info_url = format!("https://{host}/api/v4/avatar?email={author_email}");
let request = Request::get(&avatar_info_url)
.header("Content-Type", "application/json")
.follow_redirects(http_client::RedirectPolicy::FollowAll);
let mut response = client
.send(request.body(AsyncBody::default())?)
.await
.with_context(|| format!("error fetching GitLab avatar info at {:?}", url))?;
let mut body = Vec::new();
response.body_mut().read_to_end(&mut body).await?;
if response.status().is_client_error() {
let text = String::from_utf8_lossy(body.as_slice());
bail!(
"status error {}, response: {text:?}",
response.status().as_u16()
);
}
let body_str = std::str::from_utf8(&body)?;
serde_json::from_str::<Option<AvatarInfo>>(body_str)
.context("failed to deserialize GitLab avatar info")
}
}
#[async_trait]
impl GitHostingProvider for Gitlab {
fn name(&self) -> String {
self.name.clone()
@@ -58,7 +144,7 @@ impl GitHostingProvider for Gitlab {
}
fn supports_avatars(&self) -> bool {
false
true
}
fn format_line_number(&self, line: u32) -> String {
@@ -122,6 +208,39 @@ impl GitHostingProvider for Gitlab {
);
permalink
}
async fn commit_author_avatar_url(
&self,
repo_owner: &str,
repo: &str,
commit: SharedString,
http_client: Arc<dyn HttpClient>,
) -> Result<Option<Url>> {
let commit = commit.to_string();
let avatar_url = self
.fetch_gitlab_commit_author(repo_owner, repo, &commit, &http_client)
.await?
.map(|author| -> Result<Url, url::ParseError> {
let mut url = Url::parse(&author.avatar_url)?;
if let Some(host) = url.host_str() {
let size_query = if host.contains("gravatar") || host.contains("libravatar") {
Some("s=128")
} else if self
.base_url
.host_str()
.is_some_and(|base_host| host.contains(base_host))
{
Some("width=128")
} else {
None
};
url.set_query(size_query);
}
Ok(url)
})
.transpose()?;
Ok(avatar_url)
}
}
#[cfg(test)]
@@ -134,8 +253,8 @@ mod tests {
#[test]
fn test_invalid_self_hosted_remote_url() {
let remote_url = "https://gitlab.com/zed-industries/zed.git";
let github = Gitlab::from_remote_url(remote_url);
assert!(github.is_err());
let gitlab = Gitlab::from_remote_url(remote_url);
assert!(gitlab.is_err());
}
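GitLab's projects API takes `owner/repo` as a single URL-encoded path segment, which is why the hunk above routes the project path through `urlencoding::encode` before formatting the commits URL. A std-only sketch of that encoding step (production code should keep using the `urlencoding` crate):

```rust
// Percent-encode a GitLab project path so "owner/repo" becomes one
// URL path segment ("owner%2Frepo"), as the /projects API requires.
fn encode_project_path(owner: &str, repo: &str) -> String {
    let raw = format!("{owner}/{repo}");
    let mut out = String::new();
    for byte in raw.bytes() {
        match byte {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'_' | b'.' | b'~' => {
                out.push(byte as char)
            }
            _ => out.push_str(&format!("%{byte:02X}")),
        }
    }
    out
}

fn commits_url(host: &str, owner: &str, repo: &str, commit: &str) -> String {
    format!(
        "https://{host}/api/v4/projects/{}/repository/commits/{commit}",
        encode_project_path(owner, repo)
    )
}
```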
#[test]

View File

@@ -170,7 +170,10 @@ impl CommitView {
ReplicaId::LOCAL,
cx.entity_id().as_non_zero_u64().into(),
LineEnding::default(),
format_commit(&commit, stash.is_some()).into(),
Rope::from_str(
&format_commit(&commit, stash.is_some()),
cx.background_executor(),
),
);
metadata_buffer_id = Some(buffer.remote_id());
Buffer::build(buffer, Some(file.clone()), Capability::ReadWrite)
@@ -336,7 +339,7 @@ async fn build_buffer(
) -> Result<Entity<Buffer>> {
let line_ending = LineEnding::detect(&text);
LineEnding::normalize(&mut text);
let text = Rope::from(text);
let text = Rope::from_str(&text, cx.background_executor());
let language = cx.update(|cx| language_registry.language_for_file(&blob, Some(&text), cx))?;
let language = if let Some(language) = language {
language_registry
@@ -376,7 +379,7 @@ async fn build_buffer_diff(
let base_buffer = cx
.update(|cx| {
Buffer::build_snapshot(
old_text.as_deref().unwrap_or("").into(),
Rope::from_str(old_text.as_deref().unwrap_or(""), cx.background_executor()),
buffer.language().cloned(),
Some(language_registry.clone()),
cx,

View File

@@ -359,6 +359,7 @@ mod tests {
use super::*;
use editor::test::editor_test_context::assert_state_with_diff;
use gpui::TestAppContext;
use language::Rope;
use project::{FakeFs, Fs, Project};
use settings::SettingsStore;
use std::path::PathBuf;
@@ -429,7 +430,7 @@ mod tests {
// Modify the new file on disk
fs.save(
path!("/test/new_file.txt").as_ref(),
&unindent(
&Rope::from_str_small(&unindent(
"
new line 1
line 2
@@ -437,8 +438,7 @@ mod tests {
line 4
new line 5
",
)
.into(),
)),
Default::default(),
)
.await
@@ -465,15 +465,14 @@ mod tests {
// Modify the old file on disk
fs.save(
path!("/test/old_file.txt").as_ref(),
&unindent(
&Rope::from_str_small(&unindent(
"
new line 1
line 2
old line 3
line 4
",
)
.into(),
)),
Default::default(),
)
.await

View File

@@ -58,8 +58,8 @@ use std::{collections::HashSet, sync::Arc, time::Duration, usize};
use strum::{IntoEnumIterator, VariantNames};
use time::OffsetDateTime;
use ui::{
Checkbox, CommonAnimationExt, ContextMenu, ElevationIndex, IconPosition, Label, LabelSize,
PopoverMenu, ScrollAxes, Scrollbars, SplitButton, Tooltip, WithScrollbar, prelude::*,
ButtonLike, Checkbox, CommonAnimationExt, ContextMenu, ElevationIndex, PopoverMenu, ScrollAxes,
Scrollbars, SplitButton, Tooltip, WithScrollbar, prelude::*,
};
use util::paths::PathStyle;
use util::{ResultExt, TryFutureExt, maybe};
@@ -286,6 +286,12 @@ struct PendingOperation {
op_id: usize,
}
impl PendingOperation {
fn contains_path(&self, path: &RepoPath) -> bool {
self.entries.iter().any(|p| &p.repo_path == path)
}
}
pub struct GitPanel {
pub(crate) active_repository: Option<Entity<Repository>>,
pub(crate) commit_editor: Entity<Editor>,
@@ -1240,19 +1246,21 @@ impl GitPanel {
};
let (stage, repo_paths) = match entry {
GitListEntry::Status(status_entry) => {
if status_entry.status.staging().is_fully_staged() {
let repo_paths = vec![status_entry.clone()];
let stage = if let Some(status) = self.entry_staging(&status_entry) {
!status.is_fully_staged()
} else if status_entry.status.staging().is_fully_staged() {
if let Some(op) = self.bulk_staging.clone()
&& op.anchor == status_entry.repo_path
{
self.bulk_staging = None;
}
(false, vec![status_entry.clone()])
false
} else {
self.set_bulk_staging_anchor(status_entry.repo_path.clone(), cx);
(true, vec![status_entry.clone()])
}
true
};
(stage, repo_paths)
}
GitListEntry::Header(section) => {
let goal_staged_state = !self.header_state(section.header).selected();
@@ -2677,10 +2685,7 @@ impl GitPanel {
if self.pending.iter().any(|pending| {
pending.target_status == TargetStatus::Reverted
&& !pending.finished
&& pending
.entries
.iter()
.any(|pending| pending.repo_path == entry.repo_path)
&& pending.contains_path(&entry.repo_path)
}) {
continue;
}
@@ -2731,10 +2736,7 @@ impl GitPanel {
last_pending_staged = pending.entries.first().cloned();
}
if let Some(single_staged) = &single_staged_entry
&& pending
.entries
.iter()
.any(|entry| entry.repo_path == single_staged.repo_path)
&& pending.contains_path(&single_staged.repo_path)
{
pending_status_for_single_staged = Some(pending.target_status);
}
@@ -2797,7 +2799,7 @@ impl GitPanel {
&& let Some(index) = bulk_staging_anchor_new_index
&& let Some(entry) = self.entries.get(index)
&& let Some(entry) = entry.status_entry()
&& self.entry_staging(entry) == StageStatus::Staged
&& self.entry_staging(entry).unwrap_or(entry.staging) == StageStatus::Staged
{
self.bulk_staging = bulk_staging;
}
@@ -2845,39 +2847,47 @@ impl GitPanel {
self.entry_count += 1;
if repo.had_conflict_on_last_merge_head_change(&status_entry.repo_path) {
self.conflicted_count += 1;
if self.entry_staging(status_entry).has_staged() {
if self
.entry_staging(status_entry)
.unwrap_or(status_entry.staging)
.has_staged()
{
self.conflicted_staged_count += 1;
}
} else if status_entry.status.is_created() {
self.new_count += 1;
if self.entry_staging(status_entry).has_staged() {
if self
.entry_staging(status_entry)
.unwrap_or(status_entry.staging)
.has_staged()
{
self.new_staged_count += 1;
}
} else {
self.tracked_count += 1;
if self.entry_staging(status_entry).has_staged() {
if self
.entry_staging(status_entry)
.unwrap_or(status_entry.staging)
.has_staged()
{
self.tracked_staged_count += 1;
}
}
}
}
fn entry_staging(&self, entry: &GitStatusEntry) -> StageStatus {
fn entry_staging(&self, entry: &GitStatusEntry) -> Option<StageStatus> {
for pending in self.pending.iter().rev() {
if pending
.entries
.iter()
.any(|pending_entry| pending_entry.repo_path == entry.repo_path)
{
if pending.contains_path(&entry.repo_path) {
match pending.target_status {
TargetStatus::Staged => return StageStatus::Staged,
TargetStatus::Unstaged => return StageStatus::Unstaged,
TargetStatus::Staged => return Some(StageStatus::Staged),
TargetStatus::Unstaged => return Some(StageStatus::Unstaged),
TargetStatus::Reverted => continue,
TargetStatus::Unchanged => continue,
}
}
}
entry.staging
None
}
pub(crate) fn has_staged_changes(&self) -> bool {
@@ -3495,6 +3505,12 @@ impl GitPanel {
let amend = self.amend_pending();
let signoff = self.signoff_enabled;
let label_color = if self.pending_commit.is_some() {
Color::Disabled
} else {
Color::Default
};
div()
.id("commit-wrapper")
.on_hover(cx.listener(move |this, hovered, _, cx| {
@@ -3503,14 +3519,15 @@ impl GitPanel {
cx.notify()
}))
.child(SplitButton::new(
ui::ButtonLike::new_rounded_left(ElementId::Name(
ButtonLike::new_rounded_left(ElementId::Name(
format!("split-button-left-{}", title).into(),
))
.layer(ui::ElevationIndex::ModalSurface)
.size(ui::ButtonSize::Compact)
.layer(ElevationIndex::ModalSurface)
.size(ButtonSize::Compact)
.child(
div()
.child(Label::new(title).size(LabelSize::Small))
Label::new(title)
.size(LabelSize::Small)
.color(label_color)
.mr_0p5(),
)
.on_click({
@@ -3710,7 +3727,8 @@ impl GitPanel {
let ix = self.entry_by_path(&repo_path, cx)?;
let entry = self.entries.get(ix)?;
let entry_staging = self.entry_staging(entry.status_entry()?);
let status = entry.status_entry()?;
let entry_staging = self.entry_staging(status).unwrap_or(status.staging);
let checkbox = Checkbox::new("stage-file", entry_staging.as_bool().into())
.disabled(!self.has_write_access(cx))
@@ -4004,8 +4022,8 @@ impl GitPanel {
let checkbox_id: ElementId =
ElementId::Name(format!("entry_{}_{}_checkbox", display_name, ix).into());
let entry_staging = self.entry_staging(entry);
let mut is_staged: ToggleState = self.entry_staging(entry).as_bool().into();
let entry_staging = self.entry_staging(entry).unwrap_or(entry.staging);
let mut is_staged: ToggleState = entry_staging.as_bool().into();
if self.show_placeholders && !self.has_staged_changes() && !entry.status.is_created() {
is_staged = ToggleState::Selected;
}
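In the git panel hunks above, `entry_staging` now returns `Option<StageStatus>`: `Some` when an in-flight operation overrides the on-disk state, `None` otherwise, with callers falling back via `unwrap_or(entry.staging)`. A reduced sketch of that override-then-fallback pattern (paths simplified to integers):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum StageStatus {
    Staged,
    Unstaged,
}

// Pending (in-flight) operations override the on-disk staging state;
// None means "no pending override, use what the repository reports".
fn pending_override(pending: &[(u32, StageStatus)], path: u32) -> Option<StageStatus> {
    pending
        .iter()
        .rev() // most recent operation wins
        .find(|(p, _)| *p == path)
        .map(|(_, status)| *status)
}

fn effective_staging(pending: &[(u32, StageStatus)], path: u32, on_disk: StageStatus) -> StageStatus {
    pending_override(pending, path).unwrap_or(on_disk)
}
```

Returning `Option` rather than silently falling back inside the helper lets call sites like the staged-count loops decide per-entry which fallback (`status_entry.staging`) applies.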

View File

@@ -5,16 +5,14 @@ use git::stash::StashEntry;
use gpui::{
Action, AnyElement, App, Context, DismissEvent, Entity, EventEmitter, FocusHandle, Focusable,
InteractiveElement, IntoElement, Modifiers, ModifiersChangedEvent, ParentElement, Render,
SharedString, Styled, Subscription, Task, WeakEntity, Window, actions, rems, svg,
SharedString, Styled, Subscription, Task, WeakEntity, Window, actions, rems,
};
use picker::{Picker, PickerDelegate};
use project::git_store::{Repository, RepositoryEvent};
use std::sync::Arc;
use time::{OffsetDateTime, UtcOffset};
use time_format;
use ui::{
ButtonLike, HighlightedLabel, KeyBinding, ListItem, ListItemSpacing, Tooltip, prelude::*,
};
use ui::{HighlightedLabel, KeyBinding, ListItem, ListItemSpacing, Tooltip, prelude::*};
use util::ResultExt;
use workspace::notifications::DetachAndPromptErr;
use workspace::{ModalView, Workspace};
@@ -434,7 +432,7 @@ impl PickerDelegate for StashListDelegate {
ix: usize,
selected: bool,
_window: &mut Window,
cx: &mut Context<Picker<Self>>,
_cx: &mut Context<Picker<Self>>,
) -> Option<Self::ListItem> {
let entry_match = &self.matches[ix];
@@ -446,23 +444,14 @@ impl PickerDelegate for StashListDelegate {
.into_any_element();
let branch_name = entry_match.entry.branch.clone().unwrap_or_default();
let branch_label = h_flex()
let branch_info = h_flex()
.gap_1p5()
.w_full()
.child(
h_flex()
.gap_0p5()
.child(
Icon::new(IconName::GitBranch)
.color(Color::Muted)
.size(IconSize::Small),
)
.child(
Label::new(branch_name)
.truncate()
.color(Color::Muted)
.size(LabelSize::Small),
),
Label::new(branch_name)
.truncate()
.color(Color::Muted)
.size(LabelSize::Small),
)
.child(
Label::new("")
@@ -476,42 +465,12 @@ impl PickerDelegate for StashListDelegate {
.size(LabelSize::Small),
);
let show_button = div()
.group("show-button-hover")
.child(
ButtonLike::new("show-button")
.child(
svg()
.size(IconSize::Medium.rems())
.flex_none()
.path(IconName::Eye.path())
.text_color(Color::Default.color(cx))
.group_hover("show-button-hover", |this| {
this.text_color(Color::Accent.color(cx))
})
.hover(|this| this.text_color(Color::Accent.color(cx))),
)
.tooltip(Tooltip::for_action_title("Show Stash", &ShowStashItem))
.on_click(cx.listener(move |picker, _, window, cx| {
cx.stop_propagation();
picker.delegate.show_stash_at(ix, window, cx);
})),
)
.into_any_element();
Some(
ListItem::new(SharedString::from(format!("stash-{ix}")))
.inset(true)
.spacing(ListItemSpacing::Sparse)
.toggle_state(selected)
.end_slot(show_button)
.child(
v_flex()
.w_full()
.overflow_hidden()
.child(stash_label)
.child(branch_label.into_element()),
)
.child(v_flex().w_full().child(stash_label).child(branch_info))
.tooltip(Tooltip::text(format!(
"stash@{{{}}}",
entry_match.entry.index
@@ -534,26 +493,6 @@ impl PickerDelegate for StashListDelegate {
.justify_end()
.border_t_1()
.border_color(cx.theme().colors().border_variant)
.child(
Button::new("apply-stash", "Apply")
.key_binding(
KeyBinding::for_action_in(&menu::Confirm, &focus_handle, cx)
.map(|kb| kb.size(rems_from_px(12.))),
)
.on_click(|_, window, cx| {
window.dispatch_action(menu::Confirm.boxed_clone(), cx)
}),
)
.child(
Button::new("pop-stash", "Pop")
.key_binding(
KeyBinding::for_action_in(&menu::SecondaryConfirm, &focus_handle, cx)
.map(|kb| kb.size(rems_from_px(12.))),
)
.on_click(|_, window, cx| {
window.dispatch_action(menu::SecondaryConfirm.boxed_clone(), cx)
}),
)
.child(
Button::new("drop-stash", "Drop")
.key_binding(
@@ -568,6 +507,42 @@ impl PickerDelegate for StashListDelegate {
window.dispatch_action(stash_picker::DropStashItem.boxed_clone(), cx)
}),
)
.child(
Button::new("view-stash", "View")
.key_binding(
KeyBinding::for_action_in(
&stash_picker::ShowStashItem,
&focus_handle,
cx,
)
.map(|kb| kb.size(rems_from_px(12.))),
)
.on_click(cx.listener(move |picker, _, window, cx| {
cx.stop_propagation();
let selected_ix = picker.delegate.selected_index();
picker.delegate.show_stash_at(selected_ix, window, cx);
})),
)
.child(
Button::new("pop-stash", "Pop")
.key_binding(
KeyBinding::for_action_in(&menu::SecondaryConfirm, &focus_handle, cx)
.map(|kb| kb.size(rems_from_px(12.))),
)
.on_click(|_, window, cx| {
window.dispatch_action(menu::SecondaryConfirm.boxed_clone(), cx)
}),
)
.child(
Button::new("apply-stash", "Apply")
.key_binding(
KeyBinding::for_action_in(&menu::Confirm, &focus_handle, cx)
.map(|kb| kb.size(rems_from_px(12.))),
)
.on_click(|_, window, cx| {
window.dispatch_action(menu::Confirm.boxed_clone(), cx)
}),
)
.into_any(),
)
}

View File

@@ -1,4 +1,4 @@
use editor::{Editor, MultiBufferSnapshot};
use editor::{Editor, EditorEvent, MultiBufferSnapshot};
use gpui::{App, Entity, FocusHandle, Focusable, Styled, Subscription, Task, WeakEntity};
use settings::Settings;
use std::{fmt::Write, num::NonZeroU32, time::Duration};
@@ -81,7 +81,7 @@ impl CursorPosition {
fn update_position(
&mut self,
editor: Entity<Editor>,
editor: &Entity<Editor>,
debounce: Option<Duration>,
window: &mut Window,
cx: &mut Context<Self>,
@@ -269,19 +269,21 @@ impl StatusItemView for CursorPosition {
cx: &mut Context<Self>,
) {
if let Some(editor) = active_pane_item.and_then(|item| item.act_as::<Editor>(cx)) {
self._observe_active_editor =
Some(
cx.observe_in(&editor, window, |cursor_position, editor, window, cx| {
Self::update_position(
cursor_position,
editor,
Some(UPDATE_DEBOUNCE),
window,
cx,
)
}),
);
self.update_position(editor, None, window, cx);
self._observe_active_editor = Some(cx.subscribe_in(
&editor,
window,
|cursor_position, editor, event, window, cx| match event {
EditorEvent::SelectionsChanged { .. } => Self::update_position(
cursor_position,
editor,
Some(UPDATE_DEBOUNCE),
window,
cx,
),
_ => {}
},
));
self.update_position(&editor, None, window, cx);
} else {
self.position = None;
self._observe_active_editor = None;

View File

@@ -176,7 +176,7 @@ impl AsyncApp {
lock.open_window(options, build_root_view)
}
/// Schedule a future to be polled in the background.
/// Schedule a future to be polled in the foreground.
#[track_caller]
pub fn spawn<AsyncFn, R>(&self, f: AsyncFn) -> Task<R>
where
@@ -260,6 +260,19 @@ impl AsyncApp {
}
}
impl sum_tree::BackgroundSpawn for BackgroundExecutor {
type Task<R>
= Task<R>
where
R: Send + Sync;
fn background_spawn<R>(&self, future: impl Future<Output = R> + Send + 'static) -> Self::Task<R>
where
R: Send + Sync + 'static,
{
self.spawn(future)
}
}
/// A cloneable, owned handle to the application context,
/// composed with the window associated with the current task.
#[derive(Clone, Deref, DerefMut)]

View File

@@ -393,6 +393,11 @@ impl TestAppContext {
}
}
/// Returns the background executor for this context.
pub fn background_executor(&self) -> &BackgroundExecutor {
&self.background_executor
}
/// Wait until there are no more pending tasks.
pub fn run_until_parked(&mut self) {
self.background_executor.run_until_parked()

View File

@@ -251,6 +251,8 @@ impl Element for UniformList {
None
}
// self.max_found_width = 0.0
//
fn request_layout(
&mut self,
global_id: Option<&GlobalElementId>,

View File

@@ -342,7 +342,7 @@ impl BackgroundExecutor {
/// for all of them to complete before returning.
pub async fn scoped<'scope, F>(&self, scheduler: F)
where
F: FnOnce(&mut Scope<'scope>),
F: for<'a> FnOnce(&'a mut Scope<'scope>),
{
let mut scope = Scope::new(self.clone());
(scheduler)(&mut scope);
@@ -479,7 +479,6 @@ impl ForegroundExecutor {
}
/// Enqueues the given Task to run on the main thread at some point in the future.
#[track_caller]
pub fn spawn<R>(&self, future: impl Future<Output = R> + 'static) -> Task<R>
where
R: 'static,

View File

@@ -9,7 +9,6 @@ use windows::Win32::UI::{
},
WindowsAndMessaging::KL_NAMELENGTH,
};
use windows_core::HSTRING;
use crate::{
KeybindingKeystroke, Keystroke, Modifiers, PlatformKeyboardLayout, PlatformKeyboardMapper,
@@ -93,14 +92,13 @@ impl PlatformKeyboardMapper for WindowsKeyboardMapper {
impl WindowsKeyboardLayout {
pub(crate) fn new() -> Result<Self> {
let mut buffer = [0u16; KL_NAMELENGTH as usize];
let mut buffer = [0u16; KL_NAMELENGTH as usize]; // KL_NAMELENGTH includes the null terminator
unsafe { GetKeyboardLayoutNameW(&mut buffer)? };
let id = HSTRING::from_wide(&buffer).to_string();
let id = String::from_utf16_lossy(&buffer[..buffer.len() - 1]); // Remove the null terminator
let entry = windows_registry::LOCAL_MACHINE.open(format!(
"System\\CurrentControlSet\\Control\\Keyboard Layouts\\{}",
id
"System\\CurrentControlSet\\Control\\Keyboard Layouts\\{id}"
))?;
let name = entry.get_hstring("Layout Text")?.to_string();
let name = entry.get_string("Layout Text")?;
Ok(Self { id, name })
}
@@ -135,6 +133,7 @@ impl WindowsKeyboardLayout {
b"0405" | // Czech
b"040E" | // Hungarian
b"0424" | // Slovenian
b"041A" | // Croatian
b"041B" | // Slovak
b"0418" // Romanian
)

View File

@@ -4326,10 +4326,10 @@ impl Window {
}
/// Returns a generic event listener that invokes the given listener with the view and context associated with the given view handle.
pub fn listener_for<V: Render, E>(
pub fn listener_for<T: 'static, E>(
&self,
view: &Entity<V>,
f: impl Fn(&mut V, &E, &mut Window, &mut Context<V>) + 'static,
view: &Entity<T>,
f: impl Fn(&mut T, &E, &mut Window, &mut Context<T>) + 'static,
) -> impl Fn(&E, &mut Window, &mut App) + 'static {
let view = view.downgrade();
move |e: &E, window: &mut Window, cx: &mut App| {

View File

@@ -22,7 +22,7 @@ use gpui::{
ScrollWheelEvent, Stateful, StyledText, Subscription, Task, TextStyleRefinement, WeakEntity,
actions, anchored, deferred, div,
};
use language::{Language, LanguageConfig, ToOffset as _};
use language::{Language, LanguageConfig, Rope, ToOffset as _};
use notifications::status_toast::{StatusToast, ToastIcon};
use project::{CompletionDisplayOptions, Project};
use settings::{
@@ -2119,7 +2119,7 @@ impl RenderOnce for SyntaxHighlightedText {
let highlights = self
.language
.highlight_text(&text.as_ref().into(), 0..text.len());
.highlight_text(&Rope::from_str_small(text.as_ref()), 0..text.len());
let mut runs = Vec::with_capacity(highlights.len());
let mut offset = 0;

View File

@@ -24,8 +24,8 @@ use collections::HashMap;
use fs::MTime;
use futures::channel::oneshot;
use gpui::{
App, AppContext as _, Context, Entity, EventEmitter, HighlightStyle, SharedString, StyledText,
Task, TaskLabel, TextStyle,
App, AppContext as _, BackgroundExecutor, Context, Entity, EventEmitter, HighlightStyle,
SharedString, StyledText, Task, TaskLabel, TextStyle,
};
use lsp::{LanguageServerId, NumberOrString};
@@ -832,6 +832,7 @@ impl Buffer {
ReplicaId::LOCAL,
cx.entity_id().as_non_zero_u64().into(),
base_text.into(),
&cx.background_executor(),
),
None,
Capability::ReadWrite,
@@ -862,9 +863,10 @@ impl Buffer {
replica_id: ReplicaId,
capability: Capability,
base_text: impl Into<String>,
cx: &BackgroundExecutor,
) -> Self {
Self::build(
TextBuffer::new(replica_id, remote_id, base_text.into()),
TextBuffer::new(replica_id, remote_id, base_text.into(), cx),
None,
capability,
)
@@ -877,9 +879,10 @@ impl Buffer {
capability: Capability,
message: proto::BufferState,
file: Option<Arc<dyn File>>,
cx: &BackgroundExecutor,
) -> Result<Self> {
let buffer_id = BufferId::new(message.id).context("Could not deserialize buffer_id")?;
let buffer = TextBuffer::new(replica_id, buffer_id, message.base_text);
let buffer = TextBuffer::new(replica_id, buffer_id, message.base_text, cx);
let mut this = Self::build(buffer, file, capability);
this.text.set_line_ending(proto::deserialize_line_ending(
rpc::proto::LineEnding::from_i32(message.line_ending).context("missing line_ending")?,
@@ -1138,13 +1141,14 @@ impl Buffer {
let old_snapshot = self.text.snapshot();
let mut branch_buffer = self.text.branch();
let mut syntax_snapshot = self.syntax_map.lock().snapshot();
let executor = cx.background_executor().clone();
cx.background_spawn(async move {
if !edits.is_empty() {
if let Some(language) = language.clone() {
syntax_snapshot.reparse(&old_snapshot, registry.clone(), language);
}
branch_buffer.edit(edits.iter().cloned());
branch_buffer.edit(edits.iter().cloned(), &executor);
let snapshot = branch_buffer.snapshot();
syntax_snapshot.interpolate(&snapshot);
@@ -1573,21 +1577,24 @@ impl Buffer {
self.reparse = None;
}
Err(parse_task) => {
// todo(lw): hot foreground spawn
self.reparse = Some(cx.spawn(async move |this, cx| {
let new_syntax_map = parse_task.await;
let new_syntax_map = cx.background_spawn(parse_task).await;
this.update(cx, move |this, cx| {
let grammar_changed =
let grammar_changed = || {
this.language.as_ref().is_none_or(|current_language| {
!Arc::ptr_eq(&language, current_language)
});
let language_registry_changed = new_syntax_map
.contains_unknown_injections()
&& language_registry.is_some_and(|registry| {
registry.version() != new_syntax_map.language_registry_version()
});
let parse_again = language_registry_changed
|| grammar_changed
|| this.version.changed_since(&parsed_version);
})
};
let language_registry_changed = || {
new_syntax_map.contains_unknown_injections()
&& language_registry.is_some_and(|registry| {
registry.version() != new_syntax_map.language_registry_version()
})
};
let parse_again = this.version.changed_since(&parsed_version)
|| language_registry_changed()
|| grammar_changed();
this.did_finish_parsing(new_syntax_map, cx);
this.reparse = None;
if parse_again {
@@ -2358,7 +2365,9 @@ impl Buffer {
let autoindent_request = autoindent_mode
.and_then(|mode| self.language.as_ref().map(|_| (self.snapshot(), mode)));
let edit_operation = self.text.edit(edits.iter().cloned());
let edit_operation = self
.text
.edit(edits.iter().cloned(), cx.background_executor());
let edit_id = edit_operation.timestamp();
if let Some((before_edit, mode)) = autoindent_request {
@@ -2589,7 +2598,8 @@ impl Buffer {
for operation in buffer_ops.iter() {
self.send_operation(Operation::Buffer(operation.clone()), false, cx);
}
self.text.apply_ops(buffer_ops);
self.text
.apply_ops(buffer_ops, Some(cx.background_executor()));
self.deferred_ops.insert(deferred_ops);
self.flush_deferred_ops(cx);
self.did_edit(&old_version, was_dirty, cx);

Some files were not shown because too many files have changed in this diff.