Compare commits

...

218 Commits

Author SHA1 Message Date
Conrad Irwin
b69a2ea200 Tag stack 2 2025-11-05 00:24:12 -07:00
Richard Feldman
0da52d6774 Add ACP terminal-login via _meta field (#41954)
As discussed with @benbrandt and @mikayla-maki:

* We now tell ACP clients we support the nonstandard `terminal-auth`
`_meta` field for terminal-based authentication
* In the future, we anticipate ACP itself supporting *some* form of
terminal-based authentication, but that hasn't been designed yet or gone
through the RFD process
* For now, this unblocks terminal-based auth

Release Notes:

- Added experimental terminal-based authentication to ACP support
2025-11-04 21:40:35 -05:00
Richard Feldman
60ee0dd19b Use our node runtime for ACP extensions (#41955)
Release Notes:

- Now ACP extensions use Zed's managed Node.js runtime

---------

Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
2025-11-04 21:40:23 -05:00
Conrad Irwin
9fc4abd8de Re-enable preview auto-release (#41952)
Patch releases to preview will now be released automatically

Release Notes:

- N/A
2025-11-04 19:26:33 -07:00
John Tur
2ead8c42fb Improve Windows text input for international keyboard layouts and IMEs (#41259)
- Custom handling of dead keys has been removed. UX for dead keys is now
the same as other applications on Windows.
- We could bring back some kind of custom UI, but only if UX is fully
compatible with expected Windows behavior (e.g. ability to move the
cursor after typing a dead key).
  - Fixes https://github.com/zed-industries/zed/issues/38838
- Character input via AltGr shift state now always has priority over
keybindings. This applies regardless of whether the keystroke used the
AltGr key or Ctrl+Alt to enter the shift state.
- In particular, we use the following heuristic to determine whether a
keystroke should trigger character input first or trigger keybindings
first (a sketch of this heuristic appears after the list below):
- If the keystroke does not have any of Ctrl/Alt/Win down, trigger
keybindings first.
- Otherwise, determine the character that would be entered by the
keystroke. If it is a control character, or no character at all, trigger
keybindings first.
- Otherwise, the keystroke has _any_ of Ctrl/Alt/Win down and generates
a printable character. Compare this character against the character that
would be generated if the keystroke had _none_ of Ctrl/Alt/Win down:
- If the character is the same, the modifiers are not significant;
trigger keybindings first.
- If there is no active input handler, or the active input handler
indicates that it isn't accepting text input (e.g. when an operator is
pending in Vim mode), character entry is not useful; trigger keybindings
first.
- Otherwise, assume the modifiers enable access to an otherwise
difficult-to-enter key; trigger character entry first.
  - Fixes https://github.com/zed-industries/zed/issues/35862
- Fixes
https://github.com/zed-industries/zed/issues/40054#issuecomment-3447833349
  - Fixes https://github.com/zed-industries/zed/issues/41486
- TranslateMessage calls are no longer skipped for unhandled keystrokes.
This fixes language input keys on Japanese and Korean keyboards (and
surely other cases as well).
- To avoid any other missing-TranslateMessage headaches in the future,
the message loop has been rewritten in a "traditional" Win32 style,
where accelerators are handled in the message loop and TranslateMessage
is called in the intended manner.
  - Fixes https://github.com/zed-industries/zed/issues/39971
  - Fixes https://github.com/zed-industries/zed/issues/40300
  - Fixes https://github.com/zed-industries/zed/issues/40321
  - Fixes https://github.com/zed-industries/zed/issues/40335
  - Fixes https://github.com/zed-industries/zed/issues/40592
  - Fixes https://github.com/zed-industries/zed/issues/40638
- As a bonus, Alt+Space now opens the system menu, since it is triggered
by the WM_SYSCHAR generated by TranslateMessage.
- VK_PROCESSKEYs are now ignored rather than being unwrapped and matched
against keybindings. This ensures that IMEs will reliably receive
keystrokes that they express interest in. This matches the behavior of
native Windows applications.
  - Fixes https://github.com/zed-industries/zed/issues/36736
  - Fixes https://github.com/zed-industries/zed/issues/39608
  - Fixes https://github.com/zed-industries/zed/issues/39991
  - Fixes https://github.com/zed-industries/zed/issues/41223
  - Fixes https://github.com/zed-industries/zed/issues/41656
  - Fixes https://github.com/zed-industries/zed/issues/34180
  - Fixes https://github.com/zed-industries/zed/issues/41766
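
The dispatch heuristic described in the AltGr bullets above boils down to a small decision function. The following is a minimal sketch, with illustrative names rather than the actual GPUI types:

```rust
// Illustrative sketch of the "character input vs. keybinding first" heuristic.
// All parameter names are assumptions; they stand in for state GPUI tracks.
fn prefer_character_input(
    ctrl: bool,
    alt: bool,
    win: bool,
    char_with_modifiers: Option<char>,
    char_without_modifiers: Option<char>,
    input_handler_accepts_text: bool,
) -> bool {
    // No Ctrl/Alt/Win held: keybindings win.
    if !(ctrl || alt || win) {
        return false;
    }
    // The keystroke produces no character, or only a control character: keybindings win.
    let Some(ch) = char_with_modifiers else { return false; };
    if ch.is_control() {
        return false;
    }
    // Same character with or without the modifiers: the modifiers are not
    // significant for text entry, so keybindings win.
    if char_without_modifiers == Some(ch) {
        return false;
    }
    // No active input handler, or it is not accepting text (e.g. a pending Vim
    // operator): character entry is not useful, so keybindings win.
    if !input_handler_accepts_text {
        return false;
    }
    // Otherwise assume the modifiers (AltGr / Ctrl+Alt) unlock an otherwise
    // hard-to-type character: character input wins.
    true
}
```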


Release Notes:

- windows: Improved keyboard input handling for international keyboard
layouts and IMEs
2025-11-04 20:28:12 -05:00
Danilo Leal
0a4b1ac696 inline assistant: Mention ability to add context with @ in the placeholder (#41950)
This has been possible in the inline assistant for ages now and maybe
you didn't know because we didn't say anything about it! This PR fixes
that by mentioning in the placeholder that you can @-mention context
there, the same way you can in the agent panel.

Release Notes:

- N/A
2025-11-04 21:18:49 -03:00
Conrad Irwin
f9fb855990 Fetch (just) enough refs in script/cherry-pick (#41949)
Before this change we'd download all the tagged commits, but none of
their ancestors, which was slow and made cherry-picking fail.

Release Notes:

- N/A
2025-11-04 17:09:43 -07:00
Conrad Irwin
b587a62ac3 No-op commit to test cherry-picking (#41948)
Closes #ISSUE

Release Notes:

- N/A
2025-11-04 23:35:19 +00:00
Conrad Irwin
1b2e38bb33 More tweaks to CI pipeline (#41941)
Closes #ISSUE

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-04 16:28:29 -07:00
Kirill Bulatov
4339c772e4 Tidy up Edit in json footer entries (#41890)
Before:
<img width="350" height="109" alt="before"
src="https://github.com/user-attachments/assets/d5d3e6bd-3a65-4d7d-8585-1e4d8f72997f"
/>

After:
<img width="310" height="103" alt="after"
src="https://github.com/user-attachments/assets/40137084-7323-4a79-b95b-a020c418646b"
/>

* All items got a keybinding label
* All items now use a non-small size, to match the text size of the
right button
* Keybindings are rendered as disabled for disabled buttons

Release Notes:

- N/A

Co-authored-by: Lukas Wirth <lukas@zed.dev>
2025-11-05 00:49:47 +02:00
tidely
ba7ea71c00 node_runtime: Improve proxy mapping (#41807)
Closes #ISSUE

More accurately map localhost to `127.0.0.1`. Previously we would
lowercase and simply replace all instances of localhost inside of the
URL string, meaning query parameters, username, password etc. could not
contain the string `localhost` or contain uppercase letters without
getting modified. Added a test ensuring the mapping logic works. The
previous implementation would fail this new test.
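
A minimal sketch of host-only mapping, assuming the `url` crate; the actual implementation in `node_runtime` may differ:

```rust
use url::Url;

// Map "localhost" to "127.0.0.1" only in the host component, so usernames,
// passwords, paths, and query strings containing "localhost" are untouched.
fn map_localhost_to_loopback(proxy: &str) -> Option<String> {
    let mut url = Url::parse(proxy).ok()?;
    if url
        .host_str()
        .map_or(false, |host| host.eq_ignore_ascii_case("localhost"))
    {
        url.set_host(Some("127.0.0.1")).ok()?;
    }
    Some(url.to_string())
}

// e.g. "http://user:localhostpw@LOCALHOST:8080/?q=localhost" keeps its
// credentials and query intact; only the host becomes 127.0.0.1.
```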

Release Notes:

- Improved the behavior of mapping `localhost` to `127.0.0.1` when
passing configured proxy urls to `node`
2025-11-04 17:11:54 -05:00
Andrew Farkas
cd04450273 Refactor completions (#41939)
This is progress toward multi-word snippets (including snippets with
prefixes containing symbols)

Release Notes:

- Removed `trigger` argument in `ShowCompletions` command

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-11-04 20:18:03 +00:00
Sergey Cherepanoff
769a8a650e windows: Automatically find Windows SDK when building gpui (#38711)
### Summary

This PR changes `gpui/build.rs` to look up the Windows SDK directory in
the registry instead of falling back to a hard-coded path.

---

### Problem

Currently, building `gpui` on Windows requires `fxc.exe` to be in `PATH`
or at a predefined location (unless `GPUI_FXC_PATH` is set). This
requires maintaining a certain build environment with proper paths/vars
or installing the specific SDK version.

It is possible to find the SDK automatically using the registry keys it
creates upon installation. Specifically, the
`SOFTWARE\\WOW6432Node\\Microsoft\\Microsoft SDKs\\Windows\\v10.0`
branch contains:
* `InstallationFolder`, giving the SDK installation location;
* `ProductVersion`, giving the SDK version in use.

These keys provide enough information to locate the SDK binaries, with
added robustness:
* handles non-standard SDK installation paths;
* deterministically selects the latest SDK when multiple versions are
present.
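
As a rough illustration, a lookup along these lines (using the `winreg` crate this PR adds) might look like the sketch below; the real `find_latest_windows_sdk_binary()` helper may differ in detail, and the four-part on-disk version suffix is an assumption:

```rust
use std::path::PathBuf;
use winreg::RegKey;
use winreg::enums::HKEY_LOCAL_MACHINE;

// A minimal sketch of locating the Windows SDK bin directory via the registry.
fn find_windows_sdk_bin() -> Option<PathBuf> {
    let key = RegKey::predef(HKEY_LOCAL_MACHINE)
        .open_subkey(r"SOFTWARE\WOW6432Node\Microsoft\Microsoft SDKs\Windows\v10.0")
        .ok()?;
    let folder: String = key.get_value("InstallationFolder").ok()?;
    let version: String = key.get_value("ProductVersion").ok()?;
    // Assumption: on disk the bin directory carries a four-part version,
    // e.g. registry "10.0.22621" corresponds to `bin\10.0.22621.0\x64`.
    Some(PathBuf::from(folder).join("bin").join(format!("{version}.0")).join("x64"))
}
```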

---

### Changes Made

* **Updated `crates/gpui/build.rs`**:

  * added dependency on `winreg`
  * introduced `find_latest_windows_sdk_binary()` helper
  * updated fallback logic to use registry lookup

This PR only changes the fallback location, and does not touch the
established environment-based workflow.

Release Notes:

- N/A

---

### Impact

Reduces manual configuration needed to build GPUI on Windows.

---------

Co-authored-by: John Tur <john-tur@outlook.com>
2025-11-04 20:14:54 +00:00
Anthony Eid
94ff4aa4b2 AI: Fix Github Copilot edit predictions failing to start (#41934)
Closes #41457 #41806 #41801

Copilot started using the `node:sqlite` module, which is an experimental
feature in Node v22-v23 (stable in v24). The fix was passing in the
experimental flag when Zed starts the Copilot LSP.

I tested this with v20.19.5 and v24.11.0. The fix got v20.19 working and
didn't affect v24.11 which was already working.
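
As a hedged sketch, the spawn-side change amounts to adding Node's `--experimental-sqlite` flag (the gate for `node:sqlite` on v22/v23) to the command line; the paths and the `--stdio` argument below are illustrative assumptions, not the actual Zed code:

```rust
use std::process::{Child, Command};

// Hypothetical sketch: binary path and `--stdio` argument are illustrative.
fn spawn_copilot_lsp(node_binary: &str, copilot_server_js: &str) -> std::io::Result<Child> {
    Command::new(node_binary)
        // Enables the experimental `node:sqlite` module on Node v22/v23;
        // harmless on v24+, where the module is stable.
        .arg("--experimental-sqlite")
        .arg(copilot_server_js)
        .arg("--stdio")
        .spawn()
}
```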

Release Notes:

- AI: Fix Github Copilot edit predictions failing to start
2025-11-04 14:31:51 -05:00
Ignasius
1e1480405a Fix integer underflow in autosave mode after delay in the settings (#41898)
Closes #41774 

Release Notes:

- settings_ui: Fixed an integer underflow panic when attempting to hit
the `-` sign on settings items that take delays in milliseconds
2025-11-04 14:12:05 -05:00
scuzqy
cdd7d4b2fb windows: Skip AppendCategory call when there are no shell links (#37926)
`ICustomDestinationList::AppendCategory` rejects an empty `IObjectArray`
and returns an `E_INVALIDARG` error. The error propagated and caused an
early return from `update_jump_list()`.
<img width="1628" height="540" alt="image"
src="https://github.com/user-attachments/assets/f8143297-c71e-42a1-a505-66cd77dfa599"
/>

Release Notes:

- N/A
2025-11-04 18:48:18 +00:00
Piotr Osiewicz
4fd2b3f374 editor: Jumping to diagnostics unfolds target locations (#41932)
Release Notes:

- Jumping to diagnostics no longer skips over folded regions. The folded
region that contains a target diagnostic is now unfolded.
2025-11-04 18:43:24 +00:00
Conrad Irwin
43a7f96462 Improve compare_perf.yml, cherry_pick.yml (#41606)
Release Notes:

- N/A

---------

Co-authored-by: Nia Espera <nia@zed.dev>
2025-11-04 11:29:35 -07:00
Joseph T. Lyons
2a2e04bb5c Add docs for settings profiles (#41931)
Release Notes:

- N/A
2025-11-04 18:27:39 +00:00
Piotr Osiewicz
9bf212bd1e lsp: Fix dynamic registration of document diagnostics (#41929)
- lsp: Fix dynamic registration of diagnostic capabilities not taking
effect when an initial capability is not specified
The gist of the issue lies in the use of `.get_mut` instead of `.entry`:
if we had not created any dynamic capability beforehand, we'd essentially
miss the registration (see the sketch below).

- **Determine whether to update remote caps in a smarter manner**
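
An illustrative reduction of the bug class, with plain `HashMap`s standing in for the actual capability store:

```rust
use std::collections::HashMap;

// With `get_mut`, a registration arriving before any entry exists for the
// server is silently dropped.
fn register_buggy(caps: &mut HashMap<String, Vec<String>>, server: &str, registration: String) {
    if let Some(regs) = caps.get_mut(server) {
        regs.push(registration); // never runs if `server` has no entry yet
    }
}

// With `entry(..).or_default()`, the registration is recorded either way.
fn register_fixed(caps: &mut HashMap<String, Vec<String>>, server: &str, registration: String) {
    caps.entry(server.to_string()).or_default().push(registration);
}
```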

Release Notes:

- Fixed document diagnostics with Ty language server.
2025-11-04 18:23:14 +00:00
localcc
38cd16aad9 Fix save_last_workspace (#41907)
Closes #37348

Release Notes:

- Fixed last workspace window restoration on linux/windows
2025-11-04 18:47:49 +01:00
Ben Kunkle
d8655f0656 settings_ui: Fix dropdowns after #41036 (#41920)
Closes #41533

Both of the issues fixed in this PR (listed in the release notes) were
caused by incorrect usage of the `window.use_state` API.
The first issue was caused by calling `window.use_state` in a render
helper, so the element ID used to share state was the same across
different pages, and the state was re-used when it should have been
re-created. The fix was to move the `window.use_state` call (and
rendering logic) into an `impl RenderOnce` component, so that the IDs
are resolved during the render, avoiding the state conflicts.

The second issue was caused by using a `move` closure in the
`window.use_state` call, resulting in stale closure values when the
window state is re-used.

Release Notes:

- settings_ui: Fixed an issue where some dropdown menus would show
options from a different dropdown when clicked
- settings_ui: Fixed an issue where attempting to change a setting in a
dropdown back to its original value after changing it would do nothing
2025-11-04 12:46:16 -05:00
Oleksiy Syvokon
91d631c229 Evaluate zeta2 context retrieval and edit predictions (#41921)
This PR implements the `zeta-cli eval` command. It will:

- Run the edit prediction model if there are no cached results
- Compute precision/recall/F1 for context retrieval at the line level:
every retrieved line of context is counted as a true positive (correct
retrieval), false positive (retrieved something that was not expected),
or false negative (didn't retrieve an expected line)
- Compute similar metrics for edit predictions
- Pretty-print results, highlighting the difference between actual and
expected when printing to tty
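
A minimal sketch of that line-level scoring over plain sets of lines (the real eval works per example on retrieved context lines):

```rust
use std::collections::HashSet;

// Precision, recall, and F1 computed from retrieved vs. expected line sets.
fn precision_recall_f1(retrieved: &HashSet<String>, expected: &HashSet<String>) -> (f64, f64, f64) {
    let true_pos = retrieved.intersection(expected).count() as f64; // correctly retrieved lines
    let false_pos = retrieved.difference(expected).count() as f64;  // retrieved but not expected
    let false_neg = expected.difference(retrieved).count() as f64;  // expected but missed
    let precision = if true_pos + false_pos > 0.0 { true_pos / (true_pos + false_pos) } else { 0.0 };
    let recall = if true_pos + false_neg > 0.0 { true_pos / (true_pos + false_neg) } else { 0.0 };
    let f1 = if precision + recall > 0.0 {
        2.0 * precision * recall / (precision + recall)
    } else {
        0.0
    };
    (precision, recall, f1)
}
```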

Other changes:
- `zeta-cli predict` accepts a `--format` argument with options `md`,
`json`, `diff`
- Code restructure

Release Notes:

- N/A

---------

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
Co-authored-by: Agus Zubiaga <agus@zed.dev>
2025-11-04 17:36:50 +00:00
Joseph T. Lyons
054d2e1524 Add Ben K to gpui PR reviewers (#41919)
Release Notes:

- N/A
2025-11-04 17:06:16 +00:00
Sathiyaraman M
982f2418f4 git: Add support for git pull with rebase (#41117)
- Adds a new action `git::PullRebase` which adds `--rebase` in the final
command invoked by existing Git-Pull implementation.
- Includes the new action in "Fetch/Push" button in the Git Panel
(screenshot below)
- Adds key-binding for `git::PullRebase` in all three platforms,
following the existing key-binding patterns (`ctrl-g shift-down`)
- Update git docs to include the new action.

Sidenote: This is my first ever OSS contribution

Screenshot:

<img width="234" height="215" alt="image"
src="https://github.com/user-attachments/assets/713d068f-5ea5-444f-8d66-444ca65affc8"
/>

---

Release Notes:

- Git: Added `git: pull rebase` for running `git pull --rebase`.
2025-11-04 16:41:06 +00:00
Joseph T. Lyons
9f580464f0 Add Anthony and Cameron to gpui PR reviewers (#41914)
Release Notes:

- N/A
2025-11-04 16:20:41 +00:00
ᴀᴍᴛᴏᴀᴇʀ
fc3e503cfe remote: Fix incorrect default repository selection when using remote (#41698)
If I understand this correctly: The `active_repo_id` uses
`get_or_insert_with`, which makes it dependent on the `RepositoryAdded`
event sequence. To ensure correct initialization of the `active_repo_id`
on the remote side, the first local `RepositoryAdded` event must
synchronously send an `UpdateRepository` to `updates_tx`.

Closes #30694

Release Notes:

- Fixed incorrect default repository selection when using remote
2025-11-04 11:15:35 -05:00
Joseph T. Lyons
52c49b86b2 Sort PR reviewer categories (#41912)
Release Notes:

- N/A
2025-11-04 16:07:46 +00:00
Joseph T. Lyons
07b707153d Sort PR reviewers per category (#41909)
Release Notes:

- N/A
2025-11-04 15:44:17 +00:00
Lukas Wirth
75cef88ea2 agent_ui: Render error context when failing to spawn agent thread (#41908)
Release Notes:

- Fixed not telling the user what went wrong when spawning ACP agents
2025-11-04 15:35:33 +00:00
Pranav Joglekar
46b39f0077 editor: Clean up edit prediction preview when edit prediction is disabled (#40159)
Update the `editor::Editor.handle_modifiers_changed` method to ensure
that `editor::Editor.update_edit_prediction_preview` is called, even
when edit prediction is disabled, if there's an active edit prediction
preview.

Without this change, users could get into a bad state when the modifiers
held to show the prediction were part of the keybinding used to disable
edit prediction. When that keybinding was used, edit prediction would be
disabled, but the edit prediction preview would remain active, so the
context menu for the editor would never be shown again, as the editor
would assume it was still showing the edit prediction preview.

Closes #40056 

Release Notes:

- Fixed a bug that could cause the completions menu to stop being shown
when edit predictions were disabled

---------

Co-authored-by: dino <dinojoaocosta@gmail.com>
2025-11-04 15:30:37 +00:00
Thomas Heartman
fb410ab3ae Support relative line number on wrapped lines (rework) (#41805)
## Add relative line numbers on wrapped lines, take 2

This is a rework of https://github.com/zed-industries/zed/pull/39268
that excludes
e7096d27a6.
This commit introduced some line number rendering issues as described in
https://github.com/zed-industries/zed/issues/41422.

While @ConradIrwin suggested we try to pass in the buffer rows from the
calling method instead of the snapshot, that
appears to have had unintended consequences and I don't think the two
calculations were intended to do the same thing. Hence, this PR has
removed those changes.

This PR also includes the migration fix originally done by @MrSubidubi
in https://github.com/zed-industries/zed/pull/41351.

## Original PR description and release notes.

**Problem:** Current relative line numbering creates a mismatch with
vim-style navigation when soft wrap is enabled. Users must mentally
calculate whether target lines are wrapped segments or logical lines,
making `<n>j/k` navigation unreliable and cognitively demanding.

**How things work today:**
- Real line navigation (`j/k` moves by logical lines): Requires
determining if visible lines are wrapped segments before jumping. Can't
jump to wrapped lines directly.
- Display line navigation (`j/k` moves by display rows): Line numbers
don't correspond to actual row distances for multi-line jumps.

**Proposed solution:** Count and number each display line (including
wrapped segments) for relative numbering. This creates direct
visual-to-navigational correspondence, where the relative number shown
always matches the `<n>j/k` distance needed.

**Benefits:**
- Eliminates mental overhead of distinguishing wrapped vs. logical lines
- Makes relative line numbers consistently actionable regardless of wrap
state
- Preserves intuitive "what you see is what you navigate" principle
- Maintains vim workflow efficiency in narrow window scenarios

Also explained and discussed in
https://github.com/zed-industries/zed/discussions/25733.

Release Notes:

- Added support for counting wrapped lines as relative lines and for
displaying line numbers for wrapped segments. Changes
`relative_line_numbers` from a boolean to an enum: `enabled`,
`disabled`, or `wrapped`.
2025-11-04 08:24:17 -07:00
B. Collier Jones
1b93242351 project_panel: Add hidden files glob patterns and action toggle hidden files visibility (#41532)
This PR adds the ability to configure which files are considered
"hidden" in the project panel and toggle their visibility with a
keyboard shortcut. Previously, the editor hardcoded dotfiles as hidden -
now users can customize the pattern and quickly show/hide them.

### Release Notes

- Added `project_panel::ToggleHideHidden` action with keyboard shortcuts
to toggle visibility of hidden files
- Added configurable `hidden_files` setting to customize which files are
marked as hidden (defaults to `**/.*` for dotfiles)

### Motivation

This change allows users to:
1. Quickly toggle hidden file visibility with a keyboard shortcut
2. Customize which files are considered "hidden" beyond just dotfiles
3. Better organize their project panel by hiding build artifacts, logs,
or other generated files

### Usage

**Toggle hidden files:**
- **macOS:** `cmd-alt-.`
- **Linux:** `ctrl-alt-.`
- **Windows:** `ctrl-alt-.`

**Customize patterns in settings:**
```json
{
  "hidden_files": ["**/.*", "**/*.tmp", "**/build/**"]
}
```

### Changes

**Core Implementation:**
- Added `hidden_files` setting (defaults to `**/.*` to match current
dotfile behavior)
- Replaced hardcoded `name.starts_with('.')` logic with configurable
pattern matching using `PathMatcher`
- Hidden status propagates through directory hierarchies (if a directory
is hidden, all children inherit that status)
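
For illustration only, the matching-plus-propagation behavior is roughly what the following sketch does using the `globset` crate; Zed's actual implementation uses its own `PathMatcher`, so treat this as an analogy rather than the real code:

```rust
use globset::{Glob, GlobSet, GlobSetBuilder};
use std::path::Path;

// Build a matcher from the configured `hidden_files`-style glob patterns.
fn build_hidden_matcher(patterns: &[&str]) -> GlobSet {
    let mut builder = GlobSetBuilder::new();
    for pattern in patterns {
        builder.add(Glob::new(pattern).expect("invalid glob"));
    }
    builder.build().expect("failed to build glob set")
}

// Hidden status propagates down the tree: children of a hidden directory
// stay hidden regardless of their own name.
fn is_hidden(matcher: &GlobSet, path: &Path, parent_hidden: bool) -> bool {
    parent_hidden || matcher.is_match(path)
}
```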

**User-Facing:**
- Added `ToggleHideHidden` action in the project panel
- Added keyboard shortcuts for all platforms
- Added settings UI entry for configuring `hidden_files` patterns

**Testing:**
- Added comprehensive test coverage validating default behavior, custom
patterns, propagation, and settings changes

### Implementation Notes

- Uses `PathMatcher` for efficient glob matching
- Settings changes automatically trigger worktree re-indexing
- No breaking changes - defaults maintain current behavior (hiding
dotfiles)

---

**Disclaimer:** This was implemented with a fair amount of copy/paste
(particularly the gitignore handling), trial and error, and a healthy
dose of Claude.

### Screenshots

**Project Panel with hidden files visible:**
<img width="1368" height="935" alt="Screenshot 2025-10-30 at 3 15 53 AM"
src="https://github.com/user-attachments/assets/1cbe90ce-504c-4f9b-bca8-bef02ab961be"
/>

**Project Panel with hidden files hidden:**
<img width="1363" height="917" alt="Screenshot 2025-10-30 at 3 16 07 AM"
src="https://github.com/user-attachments/assets/9297f43e-98c7-4b19-be8f-3934589d6451"
/>

**Toggle action in command palette:**
<img width="565" height="161" alt="Screenshot 2025-10-30 at 3 17 26 AM"
src="https://github.com/user-attachments/assets/4dc9e7b6-9c29-4972-b886-88d8018905da"
/>

Release Notes:

- Added the ability to configure glob patterns for files treated as
hidden in the project panel using the `hidden_files` setting.
- Added an action `project panel: toggle hidden files` to quickly show
or hide hidden files in the project panel.

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-04 20:35:37 +05:30
Lukas Wirth
fb6e41d51e remote_server: Fix panic due to invalid settings access (#41904)
Closes https://github.com/zed-industries/zed/issues/41860

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-04 14:56:24 +00:00
Lukas Wirth
0ec31db398 util: Add missing quotes in shell env capturing on windows (#41902)
Release Notes:

- Fixed shell env capturing failing if zed is installed on a path with
whitespace in it
2025-11-04 14:31:20 +00:00
Jakub Konka
a262ca1cd5 util: Log JSON with envs if failed to deserialize (#41894)
Release Notes:

- N/A
2025-11-04 15:30:26 +01:00
localcc
6a38d699dc Fix autoupdate nuking the app on Windows (#41571)
Closes #41477

Release Notes:

- N/A
2025-11-04 15:06:30 +01:00
Smit Barmase
95feefc1cf remote: Fix terminal crash on Elvish shell (#41893) 2025-11-04 17:48:42 +05:30
Coenen Benjamin
a827f25d00 file_finder: Respect .gitignore and file_scan_inclusions with ** in glob (#40654)
Closes #39037 

Previously, the code split the `**/.env` glob in `file_scan_inclusions`
into two sources for the `PathMatcher`: `["**", "**/.env"]`. This
approach works for directories, but including `**` will match all
directories and their files. To address this, I now select the
appropriate `PathMatcher` using only `**/.env` when specifically
targeting a file to determine whether to include it in the file finder.

Release Notes:

- Fixed: respect `.gitignore` and `file_scan_inclusions` settings with
`**` in glob for file finder

---------

Signed-off-by: Benjamin <5719034+bnjjj@users.noreply.github.com>
Co-authored-by: Julia Ryan <juliaryan3.14@gmail.com>
2025-11-04 12:11:17 +00:00
Kirill Bulatov
cc6208b17f Show inlay label parts' tooltips if those are present and hovered (#41889)
Part of https://github.com/zed-industries/zed/issues/33715


https://github.com/user-attachments/assets/d2d6f47d-3974-4c8c-aab9-9046891186bf

Unlike VSCode, which does more advanced hovering, this only works for
inlays that have LSP tooltip data in them.

Release Notes:

- Started to show inlay label parts' tooltips when they are hovered

Co-authored-by: Lukas Wirth <lukas@zed.dev>
2025-11-04 11:10:51 +00:00
Dino
8d15ec7f99 vim: Fix mini delimiters in multibuffer (#41834)
- Update `vim::object::find_mini_delimiters` in order to filter out the
  ranges before calling `vim::object::cover_or_next`, ensuring that the
  provided ranges are converted from multibuffer space into buffer
  space.
- Remove the `range_filter` from `vim::object::cover_or_next`, as the
  `find_mini_delimiters` function is the only caller and no longer uses
  it

Closes #41346 

Release Notes:

- Fixed a crash that could occur when using `vim::MiniQuotes` and
`vim::MiniBrackets` in a multibuffer
2025-11-04 11:08:51 +00:00
Jakub Konka
ca5a4dcffa terminal: Resolve env based on the project dir on the target (#41867)
Prior to this change we would always resolve envs when spawning a new
terminal window based on the inherited CLI environment. This works fine
as long as we open a new Zed instance from the terminal and use it
locally only; when Zed is connected to a remote server, however, that
environment is not meaningful. With this change, we correctly ping the
remote for the project-local envs and use those instead. This should
also fix a pesky issue when updating Zed: after Zed restarts, opening a
new terminal window would not run `direnv`, for example.

Release Notes:

- N/A
2025-11-04 09:52:28 +01:00
Lukas Wirth
3e7b8efb98 editor: Newtype WrapRow (#41843)
Makes a better distinction between `WrapRow` and `BlockRow`

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-04 08:48:38 +00:00
John Tur
4002b32ad4 Coalesce dispatcher window messages on Windows (#41861)
Take 2 at https://github.com/zed-industries/zed/pull/41595

Release Notes:

- N/A
2025-11-04 09:43:06 +01:00
Coenen Benjamin
3ef8163357 yaml: Fix indentation with dictionary when editing (#40913)
Closes #39570 

Release Notes:

- Fixed indentation with dictionaries when editing YAML files.

---------

Signed-off-by: Benjamin <5719034+bnjjj@users.noreply.github.com>
2025-11-04 12:23:34 +05:30
Conrad Irwin
b9524837bb Fix branch diff hunk expansion (#41873)
Closes #ISSUE

Release Notes:

- (preview only) Fixed a bug where hunks were not expanded when viewing
a branch diff
2025-11-04 03:46:02 +00:00
Alvaro Parker
e5660d25f1 git: Add git worktree picker (#38719)
Related discussions #26084 

Worktree creation is implemented similarly to how branch creation is
handled in the branch picker (the user types a new name that's not on
the list and a new entry option appears to create a new branch with that
name).


https://github.com/user-attachments/assets/39e58983-740c-4a91-be88-57ef95aed85b

With this picker you have a few workflows: 

- Open the picker and type the name of a branch that's checked out on an
existing worktree:
    - Press enter to open the worktree on a new window
- Press ctrl-enter to open the worktree and replace the current window
- Open the picker and type the name of a new branch or an existing one
that's not checked out in another worktree:
- Press enter to create the worktree and open it in a new window. If the
branch doesn't exist, we will create a new one based on the branch you
have currently checked out. If the branch does exist then we create a
worktree with that branch checked out.
- Press ctrl-enter to do everything in the previous point but instead
replace the current window with the new worktree.
- Open the picker and type the name of a new branch or an existing one
that's not checked out in another worktree:
- If a default branch is detected on the repo, you can create a new
worktree based on that branch by pressing ctrl-enter or
ctrl-shift-enter. The first one will open a new window and the last one
will replace the current one.


Note: If you prefer not to use the system prompt for choosing a
directory, you can set `"use_system_path_prompts": false` in Zed
settings.

Release Notes:

- Added git worktree picker to open a git worktree on a new window or
replace the current one
- Added git worktree creation action

---------

Co-authored-by: Cole Miller <cole@zed.dev>
2025-11-03 21:38:00 -05:00
Danilo Leal
57adf42492 agent_ui: Add profiles item in the panel menu (#41871)
Making the profile management modal accessible through the agent panel's
additional options menu as well, and, in the process, adjusting the menu
keybinding that was conflicting with something else.

Release Notes:

- N/A
2025-11-03 22:54:28 -03:00
Danilo Leal
4cdcb0c15e agent_ui: Improve display of external agents in configuration view (#41869)
This PR makes the agent panel's configuration view use the icon from an
external agent that comes directly from the extension, as well as some
other clean ups.

Release Notes:

- N/A
2025-11-03 22:22:50 -03:00
Conrad Irwin
9113a20b8b Shell out to real tar in extension builder (#41856)
We sometimes see `test_extension_store_with_test_extension` hang while
untarring the WASI SDK.

In lieu of trying to debug the problem, let's try shelling out for now
in the hope that the test becomes more reliable.

There's a bit of risk here because we're using async-tar for other
things (but probably not 300Mb tar files...)

Assisted-By: Zed AI

Closes #ISSUE

Release Notes:

- N/A
2025-11-03 16:13:03 -07:00
Max Brunsfeld
1631cec15a Add zeta-cli subcommand for running zeta2 predictions (#41722)
This PR adds a `zeta zeta2 predict` subcommand that takes an edit
prediction example markdown file as an argument, and performs zeta2's
prediction, showing the retrieved context and the predicted edit.

* [x] Apply uncommitted diff to get repo into the right state.
* [x] Apply edits in edit history
* [x] Display predicted edits as unified diff, regardless of model
output format

Release Notes:

- N/A

---------

Co-authored-by: Agus Zubiaga <agus@zed.dev>
Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
Co-authored-by: Ben Kunkle <ben.kunkle@gmail.com>
2025-11-03 15:12:08 -08:00
Kirill Bulatov
5e41ce17e3 Do not pull diagnostics when those are disabled (#41865)
Based on 

[hang.log](https://github.com/user-attachments/files/23319081/hang.log)


Release Notes:

- N/A
2025-11-03 22:42:26 +00:00
KyleBarton
5ed458497e Copy outline improvements from typescript over to tsx as well (#41862)
Closes #4483 

Release Notes:

- Interprets outline of tsx files with the same grammar as typescript,
including improvements from #39797
2025-11-03 14:38:05 -08:00
Kirill Bulatov
9ecf257502 Fix incorrect search ranges when rendering search matches in the outline panel (#41859)
Closes https://github.com/zed-industries/zed/issues/41792

Release Notes:

- Fixed outline panel panicking when rendering certain search matches
2025-11-03 22:01:52 +00:00
Anthony Eid
2eeb02305c editor: Add a setting to show a scrollbar in completion menu (#41849)
A user on Discord requested this feature:
https://discord.com/channels/869392257814519848/1434188637389717556/1434188637389717556

I added a scrollbar setting called `completion_menu_scrollbar` to the
completion menu and defaulted it to "Never" to match past behavior.

Release Notes:

- editor: Add `editor.completion_menu_scrollbar` setting to show a
scrollbar in the completion menu
2025-11-03 21:03:18 +00:00
Katie Geer
c2b3e60f6d settings: Change "remove trailing whitespace on save" to default false for Markdown (#41658)
Closes #ISSUE: reported on X by user. 

Release Notes:

- Made it so that the default value for the "remove trailing whitespace
on save" setting in Markdown is false, to fix cases where the removed
trailing whitespace had syntactic meaning
2025-11-03 13:00:30 -08:00
Conrad Irwin
d075a56ee7 Fix merge conflict (#41853)
Closes #ISSUE

Release Notes:

- N/A
2025-11-03 13:41:39 -07:00
Piotr Osiewicz
8217e57a2d ci: Enable namespace caching for Linux workers (#41652)
Release Notes:

- N/A
2025-11-03 21:07:36 +01:00
Finn Evers
08daedd014 editor: Add support for no scroll margin in full mode (#41838)
Noticed this whilst testing the Docker debugger. I randomly scrolled the
console off screen and was confused briefly as to why this was the case.

Release Notes:

- The debugger query console will no longer needlessly overscroll.
2025-11-03 20:47:39 +01:00
Conrad Irwin
4da5675920 Re-use the existing bundle steps for nightly too (#41699)
One of the reasons we didn't spot that we were missing the telemetry env
vars for the production builds was that nightly (which was working) had
its own set of build steps. This re-uses those and pushes the env vars
down from the workflow to the job.

It also fixes nightly releases to upload in one go so that all
platforms update in sync.

Closes #41655

Release Notes:

- N/A
2025-11-03 12:29:22 -07:00
Lukas Wirth
5fc54986c7 Revert "sum_tree: Replace rayon with futures (#41586) (#41846)
This causes the background executor to hang

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-03 19:25:15 +00:00
Finn Evers
cb5055aaec agent_ui: Fix expand message editor button not always working (#41845)
The button could not be clicked whenever the editor was currently not
focused. This PR fixes this and also registers the action on a more
global level, similar to how this is done for all the other agent
actions.

Release Notes:

- Fixed an issue where the `Expand message editor` button would not work
in agent threads if the message editor was not focused.
2025-11-03 19:08:27 +00:00
warrenjokinen
454d649b6e docs: Mark macOS 26.x as being supported (#41777)
Add "Tahoe" to list of supported macOS versions.

Closes #ISSUE

Release Notes:

- N/A
2025-11-03 13:43:42 -05:00
Vitaly Slobodin
222767e69b ruby: Disable Ruby LSP for ERB files (#41754)
The Ruby extension uses the `solargraph` language server by default for
Ruby files. However, when a user opens any ERB file, the extension
automatically starts the Ruby LSP. This affects developers because they
do not expect the Ruby LSP to be running.

Closes https://github.com/zed-extensions/ruby/issues/172

Release Notes:

- N/A
2025-11-03 19:35:36 +01:00
Xiaobo Liu
d7b7fa3ee2 agent: Add XML escaping for TextThreadContext title attribute (#39734)
Escape special characters (&, <, >, ", ') in the title attribute of
TextThreadContext's XML output to prevent malformed XML when titles
contain these characters.
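
A minimal sketch of the attribute escaping described above (the actual helper in `context.rs` may differ):

```rust
// Escape the five XML special characters so the value is safe inside an
// attribute like `title="..."`.
fn escape_xml_attr(s: &str) -> String {
    let mut out = String::with_capacity(s.len());
    for ch in s.chars() {
        match ch {
            '&' => out.push_str("&amp;"),
            '<' => out.push_str("&lt;"),
            '>' => out.push_str("&gt;"),
            '"' => out.push_str("&quot;"),
            '\'' => out.push_str("&apos;"),
            _ => out.push(ch),
        }
    }
    out
}

// Hypothetical usage: format!("<text_thread title=\"{}\">", escape_xml_attr(title))
```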

Resolves TODO at context.rs:629

Release Notes:

- N/A

Signed-off-by: Xiaobo Liu <cppcoffee@gmail.com>
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-11-03 18:16:36 +00:00
Dylan
7cfce60570 project_search: Add button to collapse/expand all excerpts (#41654)
<img width="500" height="834" alt="Screenshot 2025-11-03 at 12  59@2x"
src="https://github.com/user-attachments/assets/15c5e1fc-2291-41b4-9eec-a8cfa5a446c7"
/>

Release Notes:

- Added a button that allows expanding/collapsing all project search
excerpts at once.

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-11-03 14:50:34 -03:00
Nia
45b78482f5 perf: Fixup Hyperfine finding (#41837)
Release Notes:

- N/A
2025-11-03 17:36:33 +00:00
Antal Szabó
71f1f3728d windows: Remove null terminator from keyboard ID (#41785)
Closes #41486, closes #35862 

It is unnecessary, and it broke the `uses_altgr` function.

Also add Slovenian layout as using AltGr.
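
As a rough illustration, the fix amounts to trimming the trailing NUL from the layout ID before comparing it against the hand-maintained AltGr layout list; the function name follows the PR, but the layout identifiers below are illustrative examples, not the exact values the real code uses:

```rust
// Illustrative sketch: the layout ID read from the OS ended with a trailing
// NUL, so exact comparisons in `uses_altgr` never matched.
fn uses_altgr(raw_layout_id: &str) -> bool {
    let layout_id = raw_layout_id.trim_end_matches('\0');
    // Example KLIDs; the actual list is maintained manually in the codebase.
    const ALTGR_LAYOUTS: &[&str] = &[
        "00000407", // German
        "00000424", // Slovenian
    ];
    ALTGR_LAYOUTS.contains(&layout_id)
}
```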

This should fix:
-
https://github.com/zed-industries/zed/pull/40536#issuecomment-3477121224
- https://github.com/zed-industries/zed/issues/41486
- https://github.com/zed-industries/zed/issues/35862

As the current strategy relies on manually adding layouts that have
AltGr, it's brittle and not very elegant. It also has other issues (it
requests the current layout on every keystroke and mouse movement).

**A potentially better and more comprehensive solution is at
https://github.com/zed-industries/zed/pull/41259**
This is just to fix the immediate issues while that gets reviewed.

Release Notes:

- windows: Fix AltGr handling on non-US layouts again.
2025-11-03 18:29:28 +01:00
Richard Feldman
8b560cd8aa Fix bug with uninstalled agent extensions (#41836)
Previously, uninstalled agent extensions didn't immediately disappear
from the menu. Now, they do!

Release Notes:

- N/A
2025-11-03 12:09:26 -05:00
Lukas Wirth
38e1e3f498 project: Use user configured shells for project env fetching (#41288)
Closes https://github.com/zed-industries/zed/issues/40464

Release Notes:

- Fixed shell environment sourcing not respecting users' remote shells
2025-11-03 16:29:07 +00:00
Karl-Erik Enkelmann
a6b177d806 Update open buffers with newly registered completion trigger characters (#41243)
Closes https://github.com/zed-extensions/java/issues/108

Previously, when a language server dynamically registered completion
capabilities with trigger characters (hello JDTLS), already-open buffers
for that language server would not get updated. This change finds the
open buffers for the language server and updates the trigger characters
in each of them when the new capability is registered.

Release Notes:

- N/A
2025-11-03 17:28:30 +01:00
Lukas Wirth
73366bef62 diagnostics: Live update diagnostics view on edits while focused (#41829)
Previously we only updated the diagnostics pane when it was unfocused,
saved, or when a disk-based diagnostic run finished (aka cargo check).
The reason for this is simple: we do not want to take away the excerpt
under the user's cursor while they are typing if they manage to fix the
diagnostic. Additionally, we need to prevent dropping the changed buffer
before it is saved.

Delaying updates was a simple way to work around these kinds of issues,
but it comes at the huge annoyance that the diagnostics pane does not
actually reflect the current state of the world, only some snapshot of
it, making it less than ideal to work in for languages that do not
leverage disk-based diagnostics (that is, anything other than
rust-analyzer, and even for rust-analyzer it's annoying).

This PR changes this. We now always live-update the view, but take care
to retain unsaved buffers as well as buffers that contain a cursor
(as well as some other "checkpoint" properties).

Release Notes:

- Improved diagnostics pane to live update when editing within its
editor
2025-11-03 16:16:05 +00:00
David Kleingeld
48bd253358 Adds instructions on how to use Perf & Flamegraph without debug symbols (#41831)
Release Notes:

- N/A
2025-11-03 16:11:30 +00:00
Bob Mannino
2131d88e48 Add center_on_match option for search (#40523)
[Closes discussion
#28943](https://github.com/zed-industries/zed/discussions/28943)

Release Notes:

- Added `center_on_match` option to center matched text in view during buffer or project search.

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-03 21:17:09 +05:30
Kirill Bulatov
42149df0f2 Show modal hover for one-off tasks (#41824)
Before, one-off commands did not show anything on hover at all; now they
show their command:

<img width="657" height="527" alt="after"
src="https://github.com/user-attachments/assets/d43292ca-9101-4a6a-b689-828277e8cfeb"
/>

Release Notes:

- Show modal hover for one-off tasks
2025-11-03 15:25:44 +00:00
Bing Wang
379bdb227a languages: Add ignore keyword for gomod (#41520)
Go 1.25 introduces the `ignore` directive in go.mod to specify
directories the go command should ignore.

ref: https://tip.golang.org/doc/go1.25#go-command


Release Notes:

- Added syntax highlighting support for the new [`ignore`
directive](https://tip.golang.org/doc/go1.25#go-command) in `go.mod`
files
2025-11-03 09:11:50 -06:00
Dijana Pavlovic
04e53bff3d Move @punctuation.delimiter before @operator capture (#41663)
Closes #41593

From what I understand, the order of captures inside tree-sitter query
files matters, and the last capture will win. `?` and `:` are captured
by both `@operator` and `@punctuation.delimiter`. So in order for the
ternary operator to win, it should live after `@punctuation.delimiter`.

Before:
<img width="298" height="32" alt="Screenshot 2025-10-31 at 17 41 21"
src="https://github.com/user-attachments/assets/af376e52-88be-4f62-9e2b-a106731f8145"
/>


After:
<img width="303" height="39" alt="Screenshot 2025-10-31 at 17 41 33"
src="https://github.com/user-attachments/assets/9a754ae9-0521-4c70-9adb-90a562404ce8"
/>


Release Notes:

- Fixed an issue where the ternary operator symbols in TypeScript would
not be highlighted as operators.
2025-11-03 08:51:22 -06:00
Kirill Bulatov
28f30fc851 Fix racy inlay hints queries (#41816)
Follow-up of https://github.com/zed-industries/zed/pull/40183

Release Notes:

- (Preview only) Fixed inlay hints duplicating when multiple editors are
open for the same buffer

---------

Co-authored-by: Lukas Wirth <lukas@zed.dev>
2025-11-03 13:54:53 +00:00
Lukas Wirth
f8b414c22c zed: Reduce number of rayon threads, spawn with bigger stacks (#41812)
We already do this for the cli and remote server but forgot to do so for
the main binary

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-03 12:28:32 +00:00
Lukas Wirth
50504793e6 file_finder: Fix highlighting panic in open path prompt (#41808)
Closes https://github.com/zed-industries/zed/issues/41249

Couldn't quite come up with a test case here but verified it works.

Release Notes:

- Fixed a panic in file finder when deleting characters
2025-11-03 12:07:13 +00:00
kallyaleksiev
b625263989 remote: Close window when SSH connection fails (#41782)
## Context 

This PR closes issue https://github.com/zed-industries/zed/issues/41781

It essentially `matches` the result of opening the connection here
f7153bbe8a/crates/recent_projects/src/remote_connections.rs (L650)

and adds a Close/Retry alert that, upon 'Close', closes the new window
if the result is an error.
2025-11-03 11:13:20 +00:00
Lukas Wirth
c8f9db2e24 remote: Fix more quoting issues with nushell (#41547)
https://github.com/zed-industries/zed/pull/40084#issuecomment-3464159871
Closes https://github.com/zed-industries/zed/pull/41547

Release Notes:

- Fixed remoting not working when the remote has nu set as its shell
2025-11-03 10:50:05 +00:00
Lukas Wirth
bc3c88e737 Revert "windows: Don't flood windows message queue with gpui messages" (#41803)
Reverts zed-industries/zed#41595

Closes #41704
2025-11-03 10:41:53 +00:00
Oleksiy Syvokon
3a058138c1 Fix Sonnet's regression with inserting </parameter></invoke> (#41800)
Sometimes, inside the edit agent, Sonnet thinks that it's doing a tool
call and closes its response with `</parameter></invoke>` instead of
properly closing `</new_text>`.

A better but more labor-intensive way of fixing this would be switching
to streaming tool calls for LLMs that support it.

Closes #39921

Release Notes:

- Fixed Sonnet's regression with inserting `</parameter></invoke>`
sometimes
2025-11-03 12:22:51 +02:00
Lukas Wirth
f2b539598e sum_tree: Spawn less tasks in SumTree::from_iter_async (#41793)
Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-03 10:02:31 +00:00
ᴀᴍᴛᴏᴀᴇʀ
dc503e9975 Support GitLab and self-hosted GitLab avatars in git blame (#41747)
Part of #11043.

Release Notes:

- Added support for showing GitLab and self-hosted GitLab avatars in git
blame
2025-11-03 07:51:04 +01:00
ᴀᴍᴛᴏᴀᴇʀ
73b75a7765 Support Gitee avatars in git blame (#41783)
Part of https://github.com/zed-industries/zed/issues/11043.

<img width="3596" height="1894" alt="CleanShot 2025-11-03 at 10 39
08@2x"
src="https://github.com/user-attachments/assets/68d16c32-fd23-4f54-9973-6cbda9685a8f"
/>


Release Notes:

- Added support for showing Gitee avatars in git blame
2025-11-03 07:47:20 +01:00
Danilo Leal
deacd3e922 extension_ui: Fix card label truncation (#41784)
Closes https://github.com/zed-industries/zed/issues/41763

Release Notes:

- N/A
2025-11-03 03:54:47 +00:00
Aero
f7153bbe8a agent_ui: Add delete button for compatible API-based LLM providers (#41739)
Discussion: https://github.com/zed-industries/zed/discussions/41736

Release Notes:

- agent panel: Added the ability to remove OpenAI-compatible LLM
providers directly from the UI.

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-11-02 16:05:29 -03:00
Xiaobo Liu
4e7ba8e680 acp_tools: Add vertical scrollbar to ACP logs (#41740)
Release Notes:

- N/A

---------

Signed-off-by: Xiaobo Liu <cppcoffee@gmail.com>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-11-02 18:11:21 +00:00
Danilo Leal
9909b59bd0 agent_ui: Improve the "go to file" affordance in the edit bar (#41762)
This PR makes it clearer that you can click on the file path to open the
corresponding file in the agent panel's "edit bar", which is the element
that shows up in the panel as soon as agent-made edits happen.

Release Notes:

- agent panel: Improved the "go to file" affordance in the edit bar.
2025-11-02 14:56:23 -03:00
Danilo Leal
00ff89f00f agent_ui: Make single file review actions match panel (#41718)
When we introduced the ACP-based agent panel, the condition that
determines when the "review" | "reject" | "keep" buttons are displayed
got mismatched between the panel and the pane (in the single-file review
scenario). In the panel, the buttons appear as soon as there are
changed buffers, whereas in the pane, they appear when response
generation is done.

I believe that making them appear at the same time, observing the same
condition, is the desired behavior. Thus, I think the panel behavior is
more correct, because there are loads of times where agent response
generation isn't technically done (e.g., when there's a command waiting
for permission to be run) but the _file edit_ has already been performed
and is in a good state to be already accepted or rejected.

So, this is what this PR is doing; effectively removing the "generating"
state from the agent diff, and switching to `EditorState::Reviewing`
when there are changed buffers.

Release Notes:

- Improved agent edit single file reviews by making the "reject" and
"accept" buttons appear at the same time.
2025-11-02 14:30:50 -03:00
Danilo Leal
12fe12b5ac docs: Update theme, icon theme, and visual customization pages (#41761)
Some housekeeping updates:

- Update hardcoded actions/keybindings so they're pulled from the repo
- Mention settings window when useful
- Add more info about agent panel's font size
- Break sentences in individual lines

Release Notes:

- N/A
2025-11-02 14:30:37 -03:00
Mayank Verma
a9bc890497 ui: Fix popover menu not restoring focus to the previously focused element (#41751)
Closes #26548

Here's a before/after comparison:


https://github.com/user-attachments/assets/21d49db7-28bb-4fe2-bdaf-e86b6400ae7a

Release Notes:

- Fixed popover menus not restoring focus to the previously focused
element
2025-11-02 16:56:58 +01:00
Xiaobo Liu
d887e2050f windows: Hide background helpers behind CREATE_NO_WINDOW (#41737)
Close https://github.com/zed-industries/zed/issues/41538

Release Notes:

- Fixed some processes on windows not spawning with CREATE_NO_WINDOW

---------

Signed-off-by: Xiaobo Liu <cppcoffee@gmail.com>
2025-11-02 09:15:01 +01:00
Anthony Eid
d5421ba1a8 windows: Fix click bleeding through collab follow (#41726)
On Windows, clicking on a collab user icon in the title bar would
minimize/expand Zed because the click would bleed through to the title
bar. This PR fixes this by stopping propagation.

#### Before (On MacOS with double clicks to mimic the same behavior)

https://github.com/user-attachments/assets/5a91f7ff-265a-4575-aa23-00b8d30daeed
#### After (On MacOS with double clicks to mimic the same behavior)

https://github.com/user-attachments/assets/e9fcb98f-4855-4f21-8926-2d306d256f1c

Release Notes:

- Windows: Fixed clicking on a user icon in the title bar (to follow
them) also minimizing/expanding Zed

Co-authored-by: Remco Smits <djsmits12@gmail.com>
2025-11-02 07:35:11 +00:00
Joseph T. Lyons
548cdfde3a Delete release process docs (#41733)
These have been migrated to the README.md
[here](https://github.com/zed-industries/release_notes). These don't
need to be public. Putting them in the same repo where we draft
(`release_notes`) means less jumping around and allows us to include
additional information we might not want to make public.

Release Notes:

- N/A
2025-11-02 00:37:02 -04:00
Ben Kunkle
2408f767f4 gh-workflow unit evals (#41637)
Closes #ISSUE

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-01 22:45:44 -04:00
Haojian Wu
df15d2d2fe Fix doc typos (#41727)
Release Notes:

- N/A
2025-11-01 20:52:32 -03:00
Anthony Eid
07dcb8f2bb debugger: Add program and module path fallbacks for debugpy toolchain (#40975)
Fixes the Debugpy toolchain detection bug in #40324 

When detecting what toolchain (venv) to use in the Debugpy configuration
stage, we used to only base it off of the current working directory
argument passed to the config. This is wrong behavior for cases like
mono repos, where the correct virtual environment to use is nested in
another folder.

This PR fixes this issue by adding the program and module fields as
fallbacks to check for virtual environments. We also added support for
relative program/module paths when cwd is not None.

Release Notes:

- debugger: Improve mono repo virtual environment detection with Debugpy

---------

Co-authored-by: Remco Smits <djsmits12@gmail.com>
2025-11-01 17:30:13 -04:00
Agus Zubiaga
06bdb28517 zeta cli: Add convert-example command (#41608)
Adds a `convert-example` subcommand to the zeta cli that converts eval
examples from/to `json`, `toml`, and `md` formats.

Release Notes:

- N/A

---------

Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
2025-11-01 19:35:04 +00:00
Mark Christiansen
d6b58bb948 agent_ui: Use agent font size tokens for thread markdown rendering (#41610)
Release Notes:

- N/A

--- 

Previously, agent markdown rendering used hardcoded font sizes
(TextSize::Default and TextSize::Small) which ignored the
agent_ui_font_size and agent_buffer_font_size settings. This updates the
markdown style to respect these settings.

This pull request adds support for customizing the font size of code
blocks in agent responses, making it possible to set a distinct font
size for code within the agent panel. The changes ensure that if the new
setting is not specified, the font size will fall back to the agent UI
font size, maintaining consistent appearance.

(I am a frontend developer without any Rust knowledge so this is
co-authored with Claude Code)


**Theme settings extension:**

* Added a new `agent_buffer_code_font_size` setting to
`ThemeSettingsContent`, `ThemeSettings`, and the default settings JSON,
allowing users to specify the font size for code blocks in agent
responses.
* Updated the VSCode import logic to recognize and import the new
`agent_buffer_code_font_size` setting.

**Font size application in agent UI:**

* Modified the agent UI rendering logic in `thread_view.rs` to use the
new `agent_buffer_code_font_size` for code blocks, and to fall back to
the agent UI font size if unset.
* Implemented a helper method in `ThemeSettings` to retrieve the code
block font size, with fallback logic to ensure a value is always used.
* Updated the settings application logic to propagate the new code block
font size setting throughout the theme system.


### Example Screenshots
![Screenshot 2025-10-31 at 12 38
28](https://github.com/user-attachments/assets/cbc34232-ab1f-40bf-a006-689678380e47)
![Screenshot 2025-10-31 at 12 37
45](https://github.com/user-attachments/assets/372b5cf8-2df8-425a-b052-12136de7c6bd)

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-11-01 11:14:21 -03:00
Charles McLaughlin
03e0581ee8 agent_ui: Show notifications also when the panel is hidden (#40942)
Currently Zed only displays agent notifications (e.g. when the agent
completes a task) if the user has switched apps and Zed is not in the
foreground. This PR adds support for the scenario where the agent
finishes a long-running task and the user is busy coding on something
else within Zed.

Release Notes:

- If agent notifications are turned on, they will now also be displayed
when the agent panel is hidden, in complement to them showing when the
Zed window is in the background.

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-11-01 11:14:12 -03:00
Conrad Irwin
1552e13799 Fix telemetry in release builds (#41695)
This was inadvertently broken in v0.211.1-pre when we rewrote the
release build

Release Notes:

- N/A
2025-10-31 22:55:12 -06:00
Dijana Pavlovic
ade0f1342c agent_ui: Prevent mode selector tooltip from going off-screen (#41589)
Closes #41458 

Dynamically position mode selector tooltip to prevent clipping.

Position tooltip on the right when panel is docked left, otherwise on
the left. This ensures the tooltip remains visible regardless of panel
position.

**Note:** The tooltip currently aligns vertically with the bottom of the
menu rather than individual items. It would be great if it could be
aligned with the option it explains, but this doesn't seem trivial to
implement and I'm not sure it's important enough atm.

Before: 
<img width="431" height="248" alt="Screenshot 2025-10-30 at 22 21 09"
src="https://github.com/user-attachments/assets/073f5440-b1bf-420b-b12f-558928b627f1"
/>

After:
<img width="632" height="158" alt="Screenshot 2025-10-30 at 17 26 52"
src="https://github.com/user-attachments/assets/e999e390-bf23-435e-9df0-3126dbc14ecb"
/>
<img width="685" height="175" alt="Screenshot 2025-10-30 at 17 27 15"
src="https://github.com/user-attachments/assets/84efca94-7920-474b-bcf8-062c7b59a812"
/>


Release Notes:

- Improved the agent panel's mode selector by preventing it from going
off-screen when the panel is docked to the left.
2025-11-01 04:29:58 +00:00
Joe Innes
04f7b08ab9 Give visual feedback when an operation is pending (#41686)
Currently, if a commit operation takes some time, there's no visual
feedback in the UI that anything's happening.

This PR changes the colour of the text on the button to the
`Color::Disabled` colour when a commit operation is pending.

Release Notes:

- Improved UI feedback when a commit is in progress

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-11-01 04:24:59 +00:00
Anthony Eid
ecbdffc84f debugger: Fix Debugpy attach with connect session startup (#41690)
Closes #38345, #34882, #33280

Debugpy has four distinct configuration scenarios, which are:
1. launch
2. attach with process id
3. attach with listen
4. attach with connect

Spawning Debugpy directly works with the first three scenarios but not
with "attach with connect", which requires host/port arguments to be
passed both with the attach request and when starting up Debugpy. This
PR passes in the right arguments when spawning Debugpy in an
attach-with-connect scenario, thus fixing the bug.

The VsCode extension comment that explains this:
98f5b93ee4/src/extension/debugger/adapter/factory.ts (L43-L51)

Release Notes:

- debugger: Fix Python attach-based sessions not working with `connect`
or `port` arguments
2025-10-31 19:02:51 -04:00
Jakub Konka
aa61f25795 git: Make GitPanel more responsive to long-running staging ops (#41667)
Currently, this only applies to long-running staging operations on
individually selected files in the git panel. Next up I would like to
make this work for `Stage All`/`Unstage All`, however this will most
likely require pushing `PendingOperation` into `GitStore` (from the
`GitPanel`).

Release Notes:

- N/A
2025-10-31 22:47:49 +01:00
Danilo Leal
d406409b72 Fix categorization of agent server extensions (#41689)
We missed making extensions that provide agent servers fill the
`provides` field with `agent-servers`, and thus, filtering for this type
of extension in both the app and site wouldn't return anything.

Release Notes:

- N/A
2025-10-31 21:18:22 +00:00
Danilo Leal
bf79592465 git_ui: Adjust stash picker (#41688)
Just tidying it up by removing the unnecessary eye icon buttons in all
list items and adding that action in the form of a button in the footer,
closer to all other actions. Also reordering the footer buttons so that
the likely most common action is in the far right.

Release Notes:

- N/A
2025-10-31 18:15:52 -03:00
Ben Kunkle
d3d7199507 Fix release.yml workflow (#41675)
Closes #ISSUE

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-10-31 16:29:13 -04:00
Danilo Leal
743a9cf258 Add included agents in extensions search (#41679)
Given agent servers will soon be a thing, I'm adding Claude Code, Gemini
CLI, and Codex CLI as included agents in case anyone searches for them
as extensions first, before looking in the agent panel.

Release Notes:

- N/A
2025-10-31 16:45:39 -03:00
Conrad Irwin
a05358f47f Delete old ci.yml (#41668)
The new one is much better

Release Notes:

- N/A
2025-10-31 12:58:47 -06:00
Ben Kunkle
3a4aba1df2 gh-workflow release (#41502)
Closes #ISSUE

Rewrite our release pipeline to be generated by `gh-workflow`

Release Notes:

- N/A
2025-10-31 14:23:25 -04:00
tidely
12d71b37bb ollama: Add button for refreshing available models (#38181)
Closes #17524

This PR adds a button to the bottom-right corner of the Ollama settings
UI. It resets the available Ollama models and also resets the "Connected"
state in the process, which means it can also be used to check whether the
connection is still valid. It's an open question whether we should clear the
available models on all `fetch_models` calls, since these only happen
during auth anyway.

Ollama is a local model provider, which means clicking the refresh button
often only briefly flashes the "not connected" state because the latency of
the request is so low. This accentuates changes in the UI; however, I don't
think there's a way around this without adding some rather cumbersome
deferred UI updates.

I've attached the refresh button to the "Connected" `ButtonLike`, since
I don't think automatic UI spacing should separate these elements. I
think this is okay because the "Connected" indicator isn't actually
something that the user can interact with.

Before: 
<img width="211" height="245" alt="image"
src="https://github.com/user-attachments/assets/ea90e24a-b603-4ee2-9212-2917e1695774"
/>

After: 
<img width="211" height="250" alt="image"
src="https://github.com/user-attachments/assets/be9af950-86a2-4067-87a0-52034a80a823"
/>


Alternative approach: there was also a suggestion to simply add an entry
to the command palette; however, none of the other providers currently have
this ability either, so I went with this approach. The current approach also
makes it more discoverable to the user.

Release Notes:

- Added a button for refreshing available ollama models

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-10-31 18:12:02 +00:00
Conrad Irwin
34e0c97dbc Generate dwarf files for builds again (#41651)
Closes #ISSUE

Release Notes:

- N/A
2025-10-31 10:51:06 -06:00
Andrew Farkas
cf31b736f7 Add Andrew to REVIEWERS.conl (#41662)
Release Notes:

- N/A
2025-10-31 16:48:21 +00:00
versecafe
1cb512f336 bedrock: Fix duplicate region input (#41341)
Closes #41313

Release Notes:

- Fixes #41313

<img width="453" height="870" alt="Screenshot 2025-10-27 at 10 23 37 PM"
src="https://github.com/user-attachments/assets/93bfba18-1bff-494e-a4c2-05b54ad6eed8"
/>

Co-authored-by: Richard Feldman <oss@rtfeldman.com>
2025-10-31 16:43:45 +00:00
Lukas Wirth
4e6a562efe editor: Fix refresh_linked_ranges panics due to old snapshot use (#41657)
Fixes ZED-29Z

Release Notes:

- Fixed panic in `refresh_linked_ranges`
2025-10-31 17:39:55 +01:00
versecafe
c1dea842ff agent: Model name context (#41490)
Closes #41478

Release Notes:

- Fixed #41478

<img width="459" height="916" alt="Screenshot 2025-10-29 at 1 31 26 PM"
src="https://github.com/user-attachments/assets/1d5b9fdf-9800-44e4-bdd5-f0964f93625f"
/>

> Caused by using Haiku 4.5 from the Anthropic provider and then
swapping to Sonnet 3.7 through Zed. Doing this does mess with prompt
caching, but a model swap already invalidates that, so it shouldn't have
any cost impact on end users.
2025-10-31 16:12:46 +00:00
Dino
c42d54af17 agent_ui: Autoscroll after inserting selections (#41370)
Update the behavior of the `zed_actions::agent::AddSelectionToThread`
action so that, after the selections are added to the current thread,
the editor automatically scrolls to the cursor's position, fixing an
issue where the inserted selection's UI component could wrap the cursor
to the next line below, leaving it outside the viewable area.

Closes #39694

Release Notes:

- Improved the `agent: add selection to thread` action so that it
automatically scrolls to the cursor's position after selections are
inserted
2025-10-31 15:17:26 +00:00
Dino
f3a5ebc315 vim: Only focus when associated editor is also focused (#41487)
Update `Vim::activate` to ensure that the `Vim.focused` method is only
called if the associated editor is also focused.

This ensures that the `VimEvent::Focused` event is only emitted when the
editor is actually focused, preventing a bug where, after starting Zed,
Vim's mode indicator would show that the mode was `Insert` even though
it was in `Normal` mode in the main editor.

Closes #41353 

Release Notes:

- Fixed vim's mode being shown as `Insert` right after opening Zed

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-10-31 15:04:54 +00:00
Lukas Wirth
f73d6fe4ce terminal: Kill the terminal child process, not the terminal process on exit (#41631)
When rerunning a task, our process-id fetching seems to sometimes return
the previous terminal's process id when respawning the task, causing us
to kill the new terminal once the previous one drops (we spawn a new one,
then drop the old one). This results in rerun sometimes spawning a blank
task, as the terminal immediately exits. The fix here is simple: we
actually want to kill the process running inside the terminal, not the
terminal process itself, when the terminal exits.

No relnotes as this was introduced yesterday in
https://github.com/zed-industries/zed/pull/41562

Release Notes:

- N/A
2025-10-31 15:01:50 +00:00
Lukas Wirth
c6d61870e2 editor: Fix incorrect hover popup row clamping (#41645)
Fixes ZED-2TR
Fixes ZED-2TQ
Fixes ZED-2TB
Fixes ZED-2SW
Fixes ZED-2SQ

Release Notes:

- Fixed panic in repainting hover popups

Co-authored by: David <david@zed.dev>
2025-10-31 14:24:23 +00:00
Danilo Leal
1f938c08d2 project panel: Remove extra separator when "Rename" is hidden (#41639)
Closes https://github.com/zed-industries/zed/issues/41633

Release Notes:

- N/A
2025-10-31 13:55:08 +00:00
Lukas Wirth
f2ce06c7b0 sum_tree: Replace rayon with futures (#41586)
Release Notes:

- N/A

Co-authored by: Kate <kate@zed.dev>
2025-10-31 10:39:01 +00:00
Mikayla Maki
7c29c6d7a6 Increased the max height of pickers (#41617)
Release Notes:

- Increased the max size of picker-based UI
2025-10-31 04:26:36 +00:00
Andrew Farkas
eab06eb1d9 Keep selection in SwitchToHelixNormalMode (#41583)
Closes #41125

Release Notes:

- Fixed `SwitchToHelixNormalMode` to keep selection
- Added default keybinds for `SwitchToHelixNormalMode` when in Helix
mode
2025-10-31 01:53:46 +00:00
Conrad Irwin
c2537fad43 Add a no-op compare_perf workflow (#41605)
Testing PR for @zed-zippy

Release Notes:

- N/A
2025-10-30 17:28:03 -06:00
Bennet Bo Fenner
977856407e Add bennetbo to REVIEWERS.conl (#41604)
Release Notes:

- N/A
2025-10-30 22:25:47 +00:00
Mikayla Maki
7070038c92 gpui: Remove type bound (#41603)
Release Notes:

- N/A
2025-10-30 22:17:45 +00:00
Bennet Bo Fenner
b059c1fce7 agent_servers: Expand ~ in path from settings (#41602)
Closes #40796


Release Notes:

- Fixed an issue where `~` would not be expanded when specifying the
path of an ACP server
2025-10-30 22:14:41 +00:00
Chris
03c6d6285c outline_panel: Fix collapse/expand all entries (#41342)
Closes #39937

Release Notes:

- Fixed expand/collapse all entries not working in singleton buffer mode
2025-10-30 21:48:37 +00:00
Agus Zubiaga
60c546196a zeta2: Expose llm-based context retrieval via zeta_cli (#41584)
Release Notes:

- N/A

---------

Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Oleksiy Syvokon <oleksiy.syvokon@gmail.com>
2025-10-30 21:41:09 +00:00
Dino
8aa2158418 vim: Improve pasting while in replace mode (#41549)
- Update `vim::normal::Vim.normal_replace` to work with more than one
  character
- Add `vim::replace::Vim.paste_replace` to handle pasting the 
  clipboard's contents while in replace mode
- Update vim's handling of the `editor::actions::Paste` action so that
  the `paste_replace` method is called when vim is in replace mode,
  otherwise it'll just call the regular `editor::Editor.paste` method

Closes #41378 

Release Notes:

- Improved pasting while in Vim's Replace mode, ensuring that Zed
replaces the same number of characters as the length of the contents
being pasted
2025-10-30 20:33:03 +00:00
Anthony Eid
5ae0768ce4 debugger: Polish breakpoint list UI (#41598)
This PR fixes breakpoint icon alignment so icons sit at the end of a
rendered entry, and enables editing breakpoint qualities when there's no
active session.

The alignment issue was caused by some icons being invisible while still
being laid out, so the layout phase always accounted for the space they
would take up. Only laying out the icons when they are visible fixed the
issue.

#### Before
<img width="1014" height="316" alt="image"
src="https://github.com/user-attachments/assets/9a9ced06-e219-4d9d-8793-6bdfdaca48e8"
/>

#### After
<img width="502" height="167" alt="Screenshot 2025-10-30 at 3 21 17 PM"
src="https://github.com/user-attachments/assets/23744868-e354-461c-a940-9b6812e1bcf4"
/>

Release Notes:

- Breakpoint list: Allow adding conditions, logs, and hit conditions to
breakpoints when there's no active session
2025-10-30 16:15:21 -04:00
Anthony Eid
44e5a962e6 debugger: Add horizontal scroll bars to variable list, memory view, and breakpoint list (#41594)
Closes #40360

This PR adds a heuristic to determine which variable/breakpoint list
entry has the longest width when rendered. I added this so the
uniform list would correctly determine which item is widest
and use that to calculate the scrollbar size.

The heuristic can be off if a non-monospace font is used in the UI; in
most cases, it's more than accurate enough though.
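
Roughly, the heuristic boils down to something like the following sketch
(simplified and with an assumed fixed per-character advance; the real code
works against Zed's actual UI font metrics):

```rust
// Simplified sketch of the width heuristic, assuming a fixed per-character
// advance. This is exact for monospace UI fonts and approximate otherwise.
fn widest_entry_px(entries: &[String], char_advance_px: f32) -> f32 {
    entries
        .iter()
        .map(|entry| entry.chars().count() as f32 * char_advance_px)
        .fold(0.0, f32::max)
}

fn main() {
    let entries = vec![
        "counter: i32 = 42".to_string(),
        "very_long_variable_name: Vec<String> = [..]".to_string(),
    ];
    // The 7.2px advance is an assumed value for the UI font at the current size.
    println!("scroll width ≈ {:.1}px", widest_entry_px(&entries, 7.2));
}
```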

Release Notes:

- debugger: Add horizontal scroll bars to variable list, memory view,
and breakpoint list

---------

Co-authored-by: MrSubidubi <dev@bahn.sh>
2025-10-30 19:43:32 +00:00
Lukas Wirth
3944234bab windows: Don't flood windows message queue with gpui messages (#41595)
Release Notes:

- N/A

Co-authored by: Max Brunsfeld <max@zed.dev>
2025-10-30 20:09:32 +01:00
Lukas Wirth
ac3b232dda Reduce amount of foreground tasks spawned on multibuffer/editor updates (#41479)
When doing a project-wide search in Zed on Windows for `hang`, Zed
starts to freeze for a couple of seconds, ultimately erroring with
`Not enough quota is available to process this command.` when
dispatching Windows messages. The cause is that we simply
overload the Windows message pump due to the sheer number of foreground
tasks we spawn when we populate the project search.

This PR is an attempt at reducing this.
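
One generic way to cut the number of foreground wake-ups is to batch work on
the background side and hand the UI thread one message per batch. The sketch
below is a plain-std-Rust illustration of that batching idea, not Zed's
actual change or GPUI's executor API:

```rust
// Generic batching illustration: instead of one UI-thread task per search
// result, send one message per batch of results.
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel::<Vec<String>>();

    // Background "search" producing many results.
    thread::spawn(move || {
        let mut batch = Vec::new();
        for i in 0..10_000 {
            batch.push(format!("match {i}"));
            if batch.len() == 256 {
                tx.send(std::mem::take(&mut batch)).unwrap();
            }
        }
        if !batch.is_empty() {
            tx.send(batch).unwrap();
        }
    });

    // "Foreground" side: roughly 40 wake-ups instead of 10,000.
    let mut total = 0;
    for batch in rx {
        total += batch.len();
    }
    println!("received {total} matches");
}
```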

Release Notes:

- Reduced hangs and stutters in large project file searches
2025-10-30 17:40:56 +00:00
Paweł Kondzior
743180342a agent_ui: Insert thread summary as proper mention URI (#40722)
This ensures the thread summary is treated as a tracked mention with
accessible context.

Changes:
- Fixed `MessageEditor::insert_thread_summary()` to use proper mention
URI format
- Added test coverage to verify the fix

Release Notes:

- Fixed an issue where "New From Summary" was not properly inserting
thread summaries as contextual mentions when creating new threads.
Thread summaries are now inserted as proper mention URIs.
2025-10-30 17:19:32 +01:00
Bennet Fenner
3825ce523e agent_ui: Fix agent: Chat with follow not working (#41581)
Release Notes:

- Fixed an issue where `agent: Chat with follow` was not working anymore

Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
2025-10-30 16:03:12 +00:00
Anthony Eid
b4cf7e440e debugger: Get rid of initialize_args in php debugger setup docs (#41579)
Related to issue: #40887

Release Notes:

- N/A

Co-authored-by: Remco Smits <djsmits12@gmail.com>
2025-10-30 11:47:59 -04:00
Conrad Irwin
bdb2d6c8de Don't skip tests in nightly release (#41573)
Release Notes:

- N/A
2025-10-30 14:59:30 +00:00
Lukas Wirth
0c73252c9d project: Spawn terminal process on background executor (#41216)
Attempt 2 for https://github.com/zed-industries/zed/pull/40774

We were previously spawning the process on the foreground thread, which can
block for an arbitrary amount of time. Likewise, we no longer block
deserialization on the terminal loading.

Release Notes:

- Improved startup time on systems with slow process spawning
capabilities
2025-10-30 13:55:19 +00:00
Danilo Leal
c7aa805398 docs: Improve the Inline Assistant content (#41566)
Release Notes:

- N/A
2025-10-30 13:53:31 +00:00
Lukas Wirth
94ba24dadd terminal: Properly kill child process on terminal exit (#41562)
Release Notes:

- Fixed terminal processes occasionally leaking

Co-authored by: Jakub <jakub@zed.dev>
2025-10-30 13:40:31 +00:00
Agus Zubiaga
046b43f135 collab panel: Open selected channel notes (#41560)
Adds an action to open the notes for the currently selected channel in
the collab panel, which is mapped to `alt-enter` on all platforms.

Release Notes:

- collab: Add `collab_panel::OpenSelectedChannelNotes` action
(`alt-enter` by default)
2025-10-30 13:10:19 +00:00
Caleb Jasik
426040f08f Add cmd-d shortcut for (terminal) pane::SplitRight (#41139)
Add default keybinding for `pane::SplitRight` in the `Terminal` context
for all platforms.

Closes #ISSUE

Release Notes:

- Added VS Code's terminal split keybindings (`cmd-d` on macOS,
`ctrl-shift-5` on Windows and Linux)

---------

Co-authored-by: dino <dinojoaocosta@gmail.com>
2025-10-30 12:28:06 +00:00
Finn Evers
785b5ade6e extension_host: Do not try auto installing suppressed extensions (#41551)
Release Notes:

- Fixed an issue where Zed would try to install extensions specified
under `auto_install_extensions` which were moved into core.
2025-10-30 12:24:32 +00:00
A. Teo Welton
344f63c6ca Language: Fix minor C++ completion label formatting issue (#41544)
Closes #39515

**Details:**
- Improved logic for formatting completion labels, as some (such as
`namespace`) were missing space characters.
- Added extra logic as per the stale PR #39533
[comment](https://github.com/zed-industries/zed/pull/39533#issuecomment-3368549433),
ensuring that cases where extra spaces are not necessary (such as
functions) are not affected.
- I will note that I was not able to figure out how to fix the coloring of
`namespace` within completion labels as mentioned in that comment; if
someone would provide me with direction, I would be happy to look into
that too.

Previous:
<img width="812" height="530" alt="previous"
src="https://github.com/user-attachments/assets/b38f1590-ca2d-489d-9dcb-2d478eb6ed03"
/>

Fixed:
<img width="812" height="530" alt="fixed"
src="https://github.com/user-attachments/assets/020b151d-e5d9-467e-99c1-5b0cab057169"
/>


Release Notes:

- Fixed minor issue where some `clangd` labels would be missing a space
in formatting
2025-10-30 10:47:44 +00:00
claytonrcarter
e30d5998e4 bundle: Restore local install on macOS (#41482)
I just pulled and ran a local build via `script/bundle-mac -l -i` but
found that the resulting bundle wasn't installed as expected. (me:
"ToggleAllDocks!! Wait! Where is it?!") Looking into it, it looks like the
`-l` flag was removed in #41392, leaving the `$local_only` var orphaned,
which then left the `-i/$local_install` flag unreachable. I suspect that
this was unintentional, so this PR re-adds the `-l/$local_only` flag to
`script/bundle-mac`.

I ran the build again and confirmed that local install seemed to work as
expected. (i.e. "ToggleAllDocks!! 🎉")

While here, I also removed the last reference to `$local_arch`, because
all other references to it were removed in #41392.

/cc @osiewicz 

Release Notes:

- N/A

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-10-29 21:55:02 -06:00
Conrad Irwin
277ae27ca2 Use gh-workflow for tests (take 2) (#41420)
This re-implements the reverted commit 8b051d6cc3.

Closes #ISSUE

Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-10-29 21:28:43 -04:00
Danilo Leal
64fdc1d5b6 docs: Fix Codestral section title in edit prediction page (#41509)
Follow up to https://github.com/zed-industries/zed/pull/41507 as I
realized I didn't change the title for this section.

Release Notes:

- N/A
2025-10-30 01:18:43 +00:00
Danilo Leal
992448b560 edit prediction: Add ability to switch providers from the status bar menu (#41504)
Closes https://github.com/zed-industries/zed/issues/41500

<img width="500" height="1122" alt="Screenshot 2025-10-29 at 9  43@2x"
src="https://github.com/user-attachments/assets/ac2a81ad-99bb-43cd-b032-f2485fc23166"
/>

Release Notes:

- Added the ability to switch between configured edit prediction
providers through the status bar menu.
2025-10-29 22:16:13 -03:00
Danilo Leal
802b0e4968 docs: Add content about EP with Codestral (#41507)
This was missing after we added support for Codestral as an edit
prediction provider.

Release Notes:

- N/A
2025-10-29 22:07:38 -03:00
Agus Zubiaga
b8cdd38efb zeta2: Improve context search performance (#41501)
We'll now perform all searches from the context model concurrently, and
combine queries for the same glob into one, reducing the total number of
project searches.

For better readability, the debug context view now displays each
top-level regex alternation individually, grouped by its corresponding
glob:

<img width="1592" height="672" alt="CleanShot 2025-10-29 at 19 56 03@2x"
src="https://github.com/user-attachments/assets/f6e8408e-09d6-4e27-ba11-a739a772aa12"
/>

  
Release Notes:

- N/A
2025-10-29 23:06:49 +00:00
Danilo Leal
87f9ba380f settings_ui: Close the settings window when going to the JSON file (#41491)
Release Notes:

- N/A
2025-10-29 19:19:36 -03:00
Danilo Leal
12dae07108 agent_ui: Fix history view background color when zoomed in (#41493)
Release Notes:

- N/A
2025-10-29 17:58:31 -03:00
Danilo Leal
cf0f442869 settings_ui: Fix links for edit prediction items (#41492)
Follow up to the bonus commit we added in
https://github.com/zed-industries/zed/pull/41172/.

Release Notes:

- N/A
2025-10-29 17:58:22 -03:00
Joseph T. Lyons
de9c4127a5 Remove references to how-to blog posts (#41489)
Release Notes:

- N/A
2025-10-29 19:48:50 +00:00
Joseph T. Lyons
e7089fe45c Update release process doc (#41488)
Release Notes:

- N/A
2025-10-29 19:47:32 +00:00
Kirill Bulatov
901b6ffd28 Support numeric tokens in work report LSP requests (#41448)
Closes https://github.com/zed-industries/zed/issues/41347

Release Notes:

- Indicate progress for more kinds of language servers
2025-10-29 19:35:16 +00:00
Danilo Leal
edc380db80 settings_ui: Add edit prediction settings (#41480)
Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <Ben.kunkle@gmail.com>
2025-10-29 16:18:06 -03:00
Danilo Leal
33adfa443e docs: Add content about adding selection as context in the agent panel (#41485)
Release Notes:

- N/A
2025-10-29 16:17:45 -03:00
Finn Evers
9e5438906a svg_preview: Update preview on every buffer edit (#41270)
Closes https://github.com/zed-industries/zed/issues/39104

This fixes an issue where the preview would not work for remote buffers
in the process.

Release Notes:

- Fixed an issue where the SVG preview would not work in remote
scenarios.
- The SVG preview will now rerender on every keypress instead of only on
saves.
2025-10-29 18:49:39 +01:00
Joseph T. Lyons
fbe2907919 Document zed: reveal log in file manager in crash report template (#41053)
Merge once stable is v0.210 (10/29/2025).

Release Notes:

- N/A
2025-10-29 13:35:23 -04:00
Paul Xu
02f5a514ce gpui: Add justify_evenly to Styled (#41262)
Release Notes:
- gpui: Add `justify_evenly()` to `Styled`.
2025-10-29 12:56:53 -04:00
tidely
4bd4d76276 gpui: Fix GPUI prompts from bleeding clicks into lower windows (#41442)
Closes #41180 

When using the fallback prompt renderer (the default on Wayland), clicks
would bleed through into underlying windows. When such a click happens to
hit a button that creates a prompt, it drops the
`RenderablePromptHandle` contained within `Window`, causing the
`Receiver` that returns the index of the clicked `PromptButton` to
return `Err(Canceled)` even though a button was pressed.
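
The failure mode is the classic dropped-sender case. Here's a tiny
standalone illustration using a generic `futures` oneshot channel, not
GPUI's actual types:

```rust
// Standalone illustration of the failure mode: dropping the sending half
// early makes the receiver resolve to Err(Canceled), even if a button press
// would have sent a value.
use futures::channel::oneshot;
use futures::executor::block_on;

fn main() {
    let (tx, rx) = oneshot::channel::<usize>();
    // Analogous to dropping `RenderablePromptHandle` before the click is handled.
    drop(tx);
    assert!(matches!(block_on(rx), Err(oneshot::Canceled)));
    println!("receiver observed Err(Canceled)");
}
```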

This bug appears in the GPUI `window.rs` example, which can be run using
`cargo run -p gpui --example window`. macOS has a native
`PromptRenderer` and thus needs additional code adjustments to reproduce
the issue.

Release Notes:

- N/A
2025-10-29 12:53:06 -04:00
Cameron Mcloughlin
7a7e820030 settings_ui: Remove OpenSettingsAt from command palette (#41358)
Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-10-29 16:29:53 +00:00
Bennet Fenner
0e45500158 prompt_store: Remove unused code (#41473)
Release Notes:

- N/A
2025-10-29 16:27:30 +00:00
Danilo Leal
16c399876c settings_ui: Add ability to copy a link for a given setting (#41172)
Release Notes:

- settings_ui: Added the ability to copy a link to a given setting,
allowing users to quickly open the settings window at the correct
location.

---------

Co-authored-by: cameron <cameron.studdstreet@gmail.com>
Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-10-29 13:15:08 -03:00
Lukas Wirth
3583e129d1 editor: Limit the amount of git processes spawned per multibuffer (#41472)
Release Notes:

- Reduced the number of concurrent git processes spawned for blaming
2025-10-29 16:08:41 +00:00
skewb1k
75b1da0f65 Fix Gruvbox accent colors (#41470)
In #11503, the "accents" option was incorrectly at the top level. This
moves it under the "style" key so it takes effect.

### Before/After
<img width="872" height="499" alt="1761750444_screenshot"
src="https://github.com/user-attachments/assets/2720d576-33b7-42df-9290-7b6a56f5b6a6"
/>
<img width="901" height="501" alt="1761750448_screenshot"
src="https://github.com/user-attachments/assets/bd6b7ccb-77ef-467c-b7cc-a5107b093db5"
/>

Release Notes:

- N/A
2025-10-29 15:59:42 +00:00
Shardul Vaidya
207a202477 bedrock: Add support for Claude Haiku 4.5 model (#41045)
Release Notes:

- bedrock: Added support for Claude Haiku 4.5

---------

Co-authored-by: Ona <no-reply@ona.com>
2025-10-29 16:41:43 +01:00
Yordis Prieto
0871c539ee acp_tools: Add button to clear messages (#41206)
Added a "Clear Messages" button to the ACP logs toolbar that removes all
messages.

## Motivation

When debugging ACP protocol implementations, the message list can become
cluttered with old messages. This feature allows clearing all messages
with a single click to start fresh, making it easier to focus on new
interactions without closing and reopening the ACP logs view.

Release Notes:

- N/A
2025-10-29 16:40:02 +01:00
Hilmar Wiegand
b92664c52d gpui: Implement support for wlr layer shell (#35610)
This reintroduces `layer_shell` support after #32651 was reverted. On
top of that, it allows setting options for the created surface,
restricts the enum variant to the `wayland` feature, and adds an example
that renders a clock widget using the protocol.

I've renamed the `WindowKind` variant to `LayerShell` from `Overlay`,
since the protocol can also be used to render wallpapers and such, which
doesn't really fit with the word.

Things I'm still unsure of:
- We need to get the layer options types to the user somehow, but
nothing from the `platform::linux` crate was exported, I'm assuming
intentionally. I've kept the types inside the module (instead of doing
`pub use layer_shell::*`) to not pollute the global namespace with
generic words like `Anchor` or `Layer`. Let me know if you want to do
this differently.
- I've added the options to the `WindowKind` variant. That's the only
clean way I see to supply them when the window is created. This makes
the kind no longer implement `Copy`.
- The options don't have setter methods yet and can only be defined on
window creation. We'd have to make fallible functions for setting them,
which only work if the underlying surface is a `layer_shell` surface.
That feels un-rust-y.

CC @zeroeightysix  
Thanks to @wuliuqii, whose layer-shell implementation I've also looked
at while putting this together.

Release Notes:

- Add support for the `layer_shell` protocol on wayland

---------

Co-authored-by: Ridan Vandenbergh <ridanvandenbergh@gmail.com>
2025-10-29 11:32:01 -04:00
Anthony Eid
19099e808c editor: Add action to move between snippet tabstop positions (#41466)
Closes #41407

This solves a problem where users couldn't navigate between snippet
tabstops while the completion menu was open.

I named the action {Next, Previous}SnippetTabstop instead of Placeholder
to be more in line with the LSP spec naming convention and our codebase
names.

Release Notes:

- Editor: Add actions to move between snippet tabstop positions
2025-10-29 15:26:09 +00:00
Ben Kunkle
f29ac79bbd Add myself as a docs reviewer (#41463)
Closes #ISSUE

Release Notes:

- N/A
2025-10-29 14:21:56 +00:00
Angelo Verlain
797ac5ead4 docs: Update docs for using ESLint as the only formatter (#40679)
Closes #ISSUE

Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <Ben.kunkle@gmail.com>
2025-10-29 13:58:35 +00:00
Lukas Wirth
37c6cd43e0 project: Fix inlay hints duplicating on chunk start (#41461)
Release Notes:

- N/A
2025-10-29 13:52:03 +00:00
Justin Su
01a1b9b2c1 Document Go hard tabs in default settings (#41459)
Closes https://github.com/zed-industries/zed/issues/40876

This is already present in the code but missing from the default
settings, which is confusing.

Release Notes:

- N/A
2025-10-29 09:44:26 -04:00
Anthony Eid
d44437d543 display map: Fix left shift debug panic (#38656)
Closes https://github.com/zed-industries/zed/issues/38558

The bug occurred because `TabStopCursor`'s `chunk_position.1` was bounded
between 0 and 128. The fix was changing the bound to 0 and 127.

This also allowed me to simplify some of the tab stop cursor code to be
a bit faster (fewer branches and unbounded shifts).
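
For context, shifting a `u128` by its full bit width of 128 is an
arithmetic overflow in Rust, which is the class of bug described above. A
small standalone illustration (not the Zed code) of keeping the shift
amount in `0..=127`:

```rust
// Standalone illustration: a shift amount of 128 on a u128 overflows, so
// guard it and keep real shifts within 0..=127.
fn low_bits_mask(count: u32) -> u128 {
    if count >= 128 {
        u128::MAX // all bits set; computing `(1 << 128) - 1` would overflow
    } else {
        (1u128 << count) - 1
    }
}

fn main() {
    assert_eq!(low_bits_mask(3), 0b111);
    assert_eq!(low_bits_mask(128), u128::MAX);
    // Without the guard, a runtime shift amount of 128 would panic in debug
    // builds with "attempt to shift left with overflow".
}
```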

Release Notes:

- N/A
2025-10-29 09:34:33 -04:00
Finn Evers
6be029ff17 Document plain text soft wrap in default settings (#41456)
Closes #41169

This was already present in the code, but not documented in the
default settings, which could lead to confusion.

Release Notes:

- N/A
2025-10-29 12:38:18 +00:00
Finn Evers
d59ecf790f ui: Don't show scrollbar track in too many cases (#41455)
Follow-up to https://github.com/zed-industries/zed/pull/41354 which
introduced a small regression.

Release Notes:

- N/A
2025-10-29 12:20:57 +00:00
Lukas Wirth
bde7e55adb editor: Render diagnostic popover even if the source is out of view (#41449)
This happens quite often with cargo-based diagnostics, which may span
several lines (sometimes the entire screen); forcing the user to scroll
up to the start of the diagnostic just to see the hover message is not
great.

Release Notes:

- Fixed diagnostics hovers not working if the diagnostic spans out of
view
2025-10-29 12:18:34 +00:00
Lukas Wirth
b7d31fabc5 vim: Add helix mode toggle (#41454)
Just for parity with vim. This also prevents both toggles from being
enabled at the same time, as that is a buggy state.

Release Notes:

- Added command to toggle helix mode
2025-10-29 12:16:31 +00:00
Finn Evers
1a223e23fb Revert "Support relative line number on wrapped lines (#39268)" (#41450)
Closes #41422

This completely broke line numbering, as described in the linked issue,
and scrolling up no longer shows the correct numbers.

Release Notes:

- NOTE: The `relative_line_numbers` change
(https://github.com/zed-industries/zed/pull/39268) was reverted and did
not make the release cut!
2025-10-29 11:07:23 +00:00
Xiaobo Liu
f2c03d0d0a gpui: Fix typo in ForegroundExecutor documentation (#41446)
Release Notes:

- N/A

Signed-off-by: Xiaobo Liu <cppcoffee@gmail.com>
2025-10-29 10:53:52 +00:00
Lukas Wirth
6fa823417f editor: When expanding first excerpt up, scroll it into view (#41445)
Before

https://github.com/user-attachments/assets/2390e924-112a-43fa-8ab8-429a55456d12

After

https://github.com/user-attachments/assets/b47c95f0-ccd9-40a6-ab04-28295158102e

Release Notes:

- Fixed an issue where expanding the first excerpt upwards would expand
it out of view
2025-10-29 10:32:42 +00:00
Mayank Verma
8725a2d166 go_to_line: Fix scroll position restore on dismiss (#41234)
Closes #35347

Release Notes:

- Fixed Go To Line jumping back to previous position on dismiss
2025-10-29 08:47:48 +00:00
Conrad Irwin
f9c97d29c8 Bump Zed to v0.212 (#41417)
Release Notes:

- N/A
2025-10-29 02:10:33 +00:00
Conrad Irwin
5192233b59 Fix people who use gh instead of env vars (#41418)
Closes #ISSUE

Release Notes:

- N/A
2025-10-29 02:09:22 +00:00
Callum Tolley
d0d7b9cdcd Update docs to use == instead of = (#41415)
Closes #41219

Release Notes:

- Updated docs to use `==` instead of `=` in keymap context.

Hopefully I'm not mistaken here, but I think the docs have a bug in them
2025-10-28 19:34:19 -06:00
Ben Kunkle
8b051d6cc3 Revert "Use gh workflow for tests" (#41411)
Reverts zed-industries/zed#41384

The branch-protection rules work much better when there is a Job that
runs every time and can be depended on to pass; we no longer have this.

Release Notes:

- N/A
2025-10-29 00:37:57 +00:00
John Tur
16d84a31ec Adjust Windows fusion manifests (#41408)
- Declare UAC support. This will prevent Windows from flagging
`auto_update_helper.exe` as a legacy setup program that needs to run as
administrator.
- Declare support for Windows 10. This will stop Windows from applying
various application compatibility profiles.

The UAC policy is not really appropriate to apply to all GPUI
applications (e.g. an installer written in GPUI may want to declare
itself as `requireAdministrator` instead of `asInvoker`). I tried
splitting this into a Zed.exe-only manifest and enabling manifest file
merging, but I ran out of my time-box. We can fix this later if this is
flagged by GPUI users.

Release Notes:

- N/A
2025-10-28 20:21:31 -04:00
Ben Kunkle
4adff4aa8a Use gh workflow for tests (#41384)
Follow up for: #41304

Splits CI tests (cherry-picks and PRs only for now) into separate
workflows using `gh-workflow`. Includes a couple of restructures to
- run more things in parallel
- remove our previous shell-script-based checking to filter tests based
on files changed, instead using the builtin `paths:` workflow filters


Splitting the docs/style/rust tests & checks into separate workflows
means we lose the complete summary showing all the tests in one view,
but it's possible to re-add this in the future if we go back to checking
which files changed ourselves or always running everything.

Release Notes:

- N/A

---------

Co-authored-by: Conrad <conrad@zed.dev>
2025-10-28 23:31:38 +00:00
Danilo Leal
7de3c67b1d docs: Improve header on mobile (#41404)
Release Notes:

- N/A
2025-10-28 19:34:13 -03:00
Max Brunsfeld
60bd417d8b Allow inspection of zeta2's LLM-based context retrieval (#41340)
Release Notes:

- N/A

---------

Co-authored-by: Agus Zubiaga <agus@zed.dev>
2025-10-28 22:26:48 +00:00
Danilo Leal
d31194dcf8 docs: Fix keybinding display in /configuring-zed (#41402)
Release Notes:

- N/A
2025-10-28 19:20:49 -03:00
Piotr Osiewicz
4cc6d6a398 ci: Notarize in parallel (different flavor) (#41392)
Release Notes:

- N/A

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Conrad Irwin <conrad@zed.dev>
2025-10-28 16:18:17 -06:00
Piotr Osiewicz
b9eafb80fd extensions: Load extension byte repr in background thread (again) (#41398)
Release Notes:

- N/A
2025-10-28 20:54:35 +00:00
Cameron Mcloughlin
5e7927f628 [WIP] editor: Implement next/prev reference (#41078)
Co-authored-by: Cole <cole@zed.dev>
Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-10-28 20:41:44 +00:00
Katie Geer
b75736568b docs: Reorganize introduction (#41387)
Release Notes:
- Remove Windows/Linux from Getting Started
- Consolidate download & system into new Installation page
- Move Remote Dev out of Windows and into Remote Development
- Add Uninstall page
- Add updates page
- Remove addl learning materials from intro
2025-10-28 17:39:40 -03:00
Lukas Wirth
360074effb rope: Prevent stack overflows by bumping rayon stack sizes (#41397)
Thread stacks in Rust default to 2 megabytes, which sum-tree (or, in this
case, rope) workloads can easily exceed, depending on the workload.
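
The general shape of such a fix, sketched with rayon's thread-pool builder
API; the 8 MiB value below is an assumption for illustration, not
necessarily the size chosen in this PR:

```rust
// Minimal sketch of bumping worker stack sizes via rayon's builder API.
// The 8 MiB figure is an assumed example value.
fn build_pool() -> rayon::ThreadPool {
    rayon::ThreadPoolBuilder::new()
        .stack_size(8 * 1024 * 1024) // spawned-thread stacks default to 2 MiB
        .build()
        .expect("failed to build rayon thread pool")
}

fn main() {
    let pool = build_pool();
    let sum: u64 = pool.install(|| (0..1_000u64).sum());
    println!("sum = {sum}");
}
```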

Release Notes:

- Fixed stack overflows when constructing large ropes
2025-10-28 20:21:49 +00:00
Conrad Irwin
b1922b7156 Move Nightly release to gh-workflow (#41349)
Follow up to #41304 to move nightly release over

Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-10-28 13:57:23 -06:00
Danilo Leal
54fd7ea699 agent_ui: Don't show root project in path prefix in @-mention menu (#41372)
This PR hides the worktree root name from the path prefix displayed when
you @-mention a file or directory in the agent panel. Given the tight UI
real estate we have, I believe that having the project name in there is
redundant; the project you're in is already displayed in the title bar.
Not only was it the very first word you'd see after the file's name, but
it also made the path longer than it needs to be. A bit of a design
clean-up here :)

(PS: We still show the project name if there are more than one in the
same workspace.)

Release Notes:

- N/A
2025-10-28 16:15:53 -03:00
Danilo Leal
8564602c3b command palette: Make footer buttons justified to the right (#41382)
Release Notes:

- N/A
2025-10-28 16:15:44 -03:00
Marshall Bowers
e604ef3af9 Add a setting to prevent sharing projects in public channels (#41395)
This PR adds a setting to prevent projects from being shared in public
channels.

This can be enabled by adding the following to the project settings
(`.zed/settings.json`):

```json
{
  "prevent_sharing_in_public_channels": true
}
```

This will then disable the "Share" button when not in a private channel:

<img width="380" height="115" alt="Screenshot 2025-10-28 at 2 28 10 PM"
src="https://github.com/user-attachments/assets/6761ac34-c0d5-4451-a443-adf7a1c42bcd"
/>

Release Notes:

- collaboration: Added a `prevent_sharing_in_public_channels` project
setting for preventing projects from being shared in public channels.
2025-10-28 18:48:07 +00:00
Bennet Fenner
1f5101d9fd agent_ui: Trim whitespace when submitting message (#41391)
Closes #41017

Release Notes:

- N/A
2025-10-28 19:02:39 +01:00
Christian Durán Carvajal
37540d1ff7 agent: Only include tool guidance in system prompt when profile has tools enabled (#40413)
We were previously sending all of the tools to the LLM on the first message
of a conversation, regardless of the selected agent profile. This added
extra context, and tended to create scenarios where the LLM would
attempt to use a tool and fail, since the tool was not available.

To reproduce, create a new conversation where you ask the Minimal mode
which tools it has access to, or try to write to a file.

Release Notes:

- Fixed an issue in the agent where all tools would be presented as
available even when using the `Minimal` profile

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-10-28 16:56:39 +00:00
Piotr Osiewicz
d9d24582bb lsp: Fix workspace diagnostics when registered statically (#41386)
Closes #41379

Release Notes:

- Fixed diagnostics for Ruff and Biome
2025-10-28 17:16:42 +01:00
Sushant Mishra
a4a2acfaa5 gpui: Set initial window title on Wayland (#36844)
Closes #36843 

Release Notes:

- Fixed: set the initial window title correctly on startup in Wayland.
2025-10-28 16:52:01 +01:00
Jason Lee
55554a8653 markdown_preview: Improve nested list item prefix style (#39606)
Release Notes:

- Improved nested list item prefix style for Markdown preview.


## Before

<img width="667" height="450" alt="SCR-20251006-rtis"
src="https://github.com/user-attachments/assets/439160c4-7982-463c-9017-268d47c42c0c"
/>

## After

<img width="739" height="440" alt="SCR-20251006-rzlb"
src="https://github.com/user-attachments/assets/f6c237d9-3ff0-4468-ae9c-6853c5c2946a"
/>

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-10-28 16:44:05 +01:00
Paweł Kondzior
d00ad02b05 agent_ui: Add file name and line number to symbol completions (#40508)
Symbol completions in the context picker now show "SymbolName
filename.txt L123" instead of just "SymbolName". This helps distinguish
between symbols with the same name in different files.
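
A rough sketch of the label shape described above; the real
`build_symbol_label` helper (mentioned in the changes below) operates on
Zed's symbol/location types rather than bare strings:

```rust
// Rough sketch of the "SymbolName filename.txt L123" label shape only.
fn build_symbol_label(symbol_name: &str, file_name: &str, line: u32) -> String {
    format!("{symbol_name} {file_name} L{line}")
}

fn main() {
    assert_eq!(
        build_symbol_label("parse_args", "cli.rs", 123),
        "parse_args cli.rs L123"
    );
}
```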

## Motivation

I noticed that the message prompt editor only showed symbol names
without any context, making it hard to use in large projects with many
symbols sharing the same name. The inline prompt editor already includes
file context for symbol completions, so I brought that same UX
improvement to the message prompt editor. I've decided to match the
highlighting style with directory completion entries from the message
editor.

## Changes

- Extract file name from both InProject and OutsideProject symbol
locations
- Add `build_symbol_label` helper to format labels with file context
- Update test expectation for new label format

## Screenshots
Inline Prompt Editor
<img width="843" height="334" alt="Screenshot 2025-10-17 at 17 47 52"
src="https://github.com/user-attachments/assets/16752e1a-0b8c-4f58-a0b2-25ae5130f18c"
/>
Old Message Editor:
<img width="466" height="363" alt="Screenshot 2025-10-17 at 17 31 44"
src="https://github.com/user-attachments/assets/900659f3-0632-4dff-bc9a-ab2dad351964"
/>
New Message Editor:
<img width="482" height="359" alt="Screenshot 2025-10-17 at 17 31 23"
src="https://github.com/user-attachments/assets/bf382167-f88b-4f3d-ba3e-1e251a8df962"
/>


Release Notes:

- Added file names and line numbers to symbol completions in the agent
panel
2025-10-28 16:43:48 +01:00
David Kleingeld
233a1eb46e Open keymap editor from command palette entry (#40825)
Release Notes:

- Adds a footer to the command palette with buttons to add or change the
selected action's keybinding. Both open the keymap editor, though the add
button takes you directly to the modal for recording a new keybind.


https://github.com/user-attachments/assets/0ee6b91e-b1dd-4d7f-ad64-cc79689ceeb2

---------

Co-authored-by: Joseph T. Lyons <JosephTLyons@gmail.com>
Co-authored-by: Julia Ryan <juliaryan3.14@gmail.com>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-10-28 15:14:58 +00:00
Conrad Irwin
c656101862 Use gh-workflow for the run-bundling aspects of CI.yml (#41304)
To help make our GitHub Actions easier to understand, we're planning to
split the existing `ci.yml` into three separate workflows:

* run_bundling.yml (this PR)
* run_tests.yml 
* make_release.yml

To avoid the duplication that this might otherwise cause, we're planning
to write the workflows with gh-workflow, and use rust instead of
encoding logic in YAML conditions.

Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-10-28 09:12:04 -06:00
Lionel Henry
3248a05406 Propagate Jupyter client errors (#40886)
Closes #40884

- Make IOPub task return a `Result`
- Create a monitoring task (sketched below) that watches over the IOPub,
Control, Routing, and Shell tasks.
- If any of these tasks fail, report the error with `kernel_errored()`
(which is already used to report process crashes)
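
A minimal sketch of such a monitoring loop, using the `futures` crate and a
stand-in `kernel_errored` callback; the real implementation runs on Zed's
executor and uses different task and error types:

```rust
// Minimal sketch, not Zed's implementation: wait on several kernel I/O tasks
// and report the first failure via a stand-in `kernel_errored` callback.
use futures::future::{select_all, BoxFuture};
use futures::FutureExt as _;

async fn monitor(
    mut tasks: Vec<BoxFuture<'static, Result<(), String>>>,
    kernel_errored: impl Fn(String),
) {
    while !tasks.is_empty() {
        let (result, _index, remaining) = select_all(tasks).await;
        if let Err(err) = result {
            kernel_errored(err);
            return; // shutting the kernel down is out of scope here (see the note below)
        }
        tasks = remaining;
    }
}

fn main() {
    let iopub = async { Err::<(), _>("IOPub channel closed".to_string()) }.boxed();
    let shell: BoxFuture<'static, Result<(), String>> = async { Ok(()) }.boxed();
    futures::executor::block_on(monitor(
        vec![iopub, shell],
        |err| eprintln!("kernel error: {err}"),
    ));
}
```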


https://github.com/user-attachments/assets/3125f6c7-099a-41ca-b668-fe694ecc68b9

This is not perfect. I did not have time to look into this but:

- When such errors happen, the kernel should be shut down.
- The kernel should no longer appear as online in the UI

But at least the user is getting feedback on what went wrong.

Release Notes:

- Jupyter client errors are now surfaced in the UI (#40884)
2025-10-28 14:45:27 +00:00
Bennet Fenner
baaf87aa23 Fix unit_evals.yml (#41377)
Release Notes:

- N/A
2025-10-28 14:30:47 +00:00
347 changed files with 20825 additions and 7912 deletions


@@ -35,10 +35,8 @@ body:
attributes:
label: If applicable, attach your `Zed.log` file to this issue.
description: |
macOS: `~/Library/Logs/Zed/Zed.log`
Windows: `C:\Users\YOU\AppData\Local\Zed\logs\Zed.log`
Linux: `~/.local/share/zed/logs/Zed.log` or $XDG_DATA_HOME
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
From the command palette, run `zed: open log` to see the last 1000 lines.
Or run `zed: reveal log in file manager` to reveal the log file itself.
value: |
<details><summary>Zed.log</summary>

.github/workflows/cherry_pick.yml vendored Normal file

@@ -0,0 +1,39 @@
# Generated from xtask::workflows::cherry_pick
# Rebuild with `cargo xtask workflows`.
name: cherry_pick
on:
  workflow_dispatch:
    inputs:
      commit:
        description: commit
        required: true
        type: string
      branch:
        description: branch
        required: true
        type: string
      channel:
        description: channel
        required: true
        type: string
jobs:
  run_cherry_pick:
    runs-on: namespace-profile-2x4-ubuntu-2404
    steps:
    - name: steps::checkout_repo
      uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
      with:
        clean: false
    - id: get-app-token
      name: cherry_pick::run_cherry_pick::authenticate_as_zippy
      uses: actions/create-github-app-token@bef1eaf1c0ac2b148ee2a0a74c65fbe6db0631f1
      with:
        app-id: ${{ secrets.ZED_ZIPPY_APP_ID }}
        private-key: ${{ secrets.ZED_ZIPPY_APP_PRIVATE_KEY }}
    - name: cherry_pick::run_cherry_pick::cherry_pick
      run: ./script/cherry-pick ${{ inputs.branch }} ${{ inputs.commit }} ${{ inputs.channel }}
      shell: bash -euxo pipefail {0}
      env:
        GIT_COMMITTER_NAME: Zed Zippy
        GIT_COMMITTER_EMAIL: hi@zed.dev
        GITHUB_TOKEN: ${{ steps.get-app-token.outputs.token }}


@@ -1,926 +0,0 @@
name: CI
on:
push:
branches:
- main
- "v[0-9]+.[0-9]+.x"
tags:
- "v*"
pull_request:
branches:
- "**"
concurrency:
# Allow only one workflow per any non-`main` branch.
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
jobs:
job_spec:
name: Decide which jobs to run
if: github.repository_owner == 'zed-industries'
outputs:
run_tests: ${{ steps.filter.outputs.run_tests }}
run_license: ${{ steps.filter.outputs.run_license }}
run_docs: ${{ steps.filter.outputs.run_docs }}
run_nix: ${{ steps.filter.outputs.run_nix }}
run_actionlint: ${{ steps.filter.outputs.run_actionlint }}
runs-on:
- namespace-profile-2x4-ubuntu-2404
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
# 350 is arbitrary; ~10days of history on main (5secs); full history is ~25secs
fetch-depth: ${{ github.ref == 'refs/heads/main' && 2 || 350 }}
- name: Fetch git history and generate output filters
id: filter
run: |
if [ -z "$GITHUB_BASE_REF" ]; then
echo "Not in a PR context (i.e., push to main/stable/preview)"
COMPARE_REV="$(git rev-parse HEAD~1)"
else
echo "In a PR context comparing to pull_request.base.ref"
git fetch origin "$GITHUB_BASE_REF" --depth=350
COMPARE_REV="$(git merge-base "origin/${GITHUB_BASE_REF}" HEAD)"
fi
CHANGED_FILES="$(git diff --name-only "$COMPARE_REV" ${{ github.sha }})"
# Specify anything which should potentially skip full test suite in this regex:
# - docs/
# - script/update_top_ranking_issues/
# - .github/ISSUE_TEMPLATE/
# - .github/workflows/ (except .github/workflows/ci.yml)
SKIP_REGEX='^(docs/|script/update_top_ranking_issues/|\.github/(ISSUE_TEMPLATE|workflows/(?!ci)))'
echo "$CHANGED_FILES" | grep -qvP "$SKIP_REGEX" && \
echo "run_tests=true" >> "$GITHUB_OUTPUT" || \
echo "run_tests=false" >> "$GITHUB_OUTPUT"
echo "$CHANGED_FILES" | grep -qP '^docs/' && \
echo "run_docs=true" >> "$GITHUB_OUTPUT" || \
echo "run_docs=false" >> "$GITHUB_OUTPUT"
echo "$CHANGED_FILES" | grep -qP '^\.github/(workflows/|actions/|actionlint.yml)' && \
echo "run_actionlint=true" >> "$GITHUB_OUTPUT" || \
echo "run_actionlint=false" >> "$GITHUB_OUTPUT"
echo "$CHANGED_FILES" | grep -qP '^(Cargo.lock|script/.*licenses)' && \
echo "run_license=true" >> "$GITHUB_OUTPUT" || \
echo "run_license=false" >> "$GITHUB_OUTPUT"
echo "$CHANGED_FILES" | grep -qP '^(nix/|flake\.|Cargo\.|rust-toolchain.toml|\.cargo/config.toml)' && \
echo "$GITHUB_REF_NAME" | grep -qvP '^v[0-9]+\.[0-9]+\.[0-9x](-pre)?$' && \
echo "run_nix=true" >> "$GITHUB_OUTPUT" || \
echo "run_nix=false" >> "$GITHUB_OUTPUT"
migration_checks:
name: Check Postgres and Protobuf migrations, mergability
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
timeout-minutes: 60
runs-on:
- self-mini-macos
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
fetch-depth: 0 # fetch full history
- name: Remove untracked files
run: git clean -df
- name: Find modified migrations
shell: bash -euxo pipefail {0}
run: |
export SQUAWK_GITHUB_TOKEN=${{ github.token }}
. ./script/squawk
- name: Ensure fresh merge
shell: bash -euxo pipefail {0}
run: |
if [ -z "$GITHUB_BASE_REF" ];
then
echo "BUF_BASE_BRANCH=$(git merge-base origin/main HEAD)" >> "$GITHUB_ENV"
else
git checkout -B temp
git merge -q "origin/$GITHUB_BASE_REF" -m "merge main into temp"
echo "BUF_BASE_BRANCH=$GITHUB_BASE_REF" >> "$GITHUB_ENV"
fi
- uses: bufbuild/buf-setup-action@v1
with:
version: v1.29.0
- uses: bufbuild/buf-breaking-action@v1
with:
input: "crates/proto/proto/"
against: "https://github.com/${GITHUB_REPOSITORY}.git#branch=${BUF_BASE_BRANCH},subdir=crates/proto/proto/"
style:
timeout-minutes: 60
name: Check formatting and spelling
needs: [job_spec]
if: github.repository_owner == 'zed-industries'
runs-on:
- namespace-profile-4x8-ubuntu-2204
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2 # v4.0.0
with:
version: 9
- name: Prettier Check on /docs
working-directory: ./docs
run: |
pnpm dlx "prettier@${PRETTIER_VERSION}" . --check || {
echo "To fix, run from the root of the Zed repo:"
echo " cd docs && pnpm dlx prettier@${PRETTIER_VERSION} . --write && cd .."
false
}
env:
PRETTIER_VERSION: 3.5.0
- name: Prettier Check on default.json
run: |
pnpm dlx "prettier@${PRETTIER_VERSION}" assets/settings/default.json --check || {
echo "To fix, run from the root of the Zed repo:"
echo " pnpm dlx prettier@${PRETTIER_VERSION} assets/settings/default.json --write"
false
}
env:
PRETTIER_VERSION: 3.5.0
# To support writing comments that they will certainly be revisited.
- name: Check for todo! and FIXME comments
run: script/check-todos
- name: Check modifier use in keymaps
run: script/check-keymaps
- name: Run style checks
uses: ./.github/actions/check_style
- name: Check for typos
uses: crate-ci/typos@80c8a4945eec0f6d464eaf9e65ed98ef085283d1 # v1.38.1
with:
config: ./typos.toml
check_docs:
timeout-minutes: 60
name: Check docs
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
(needs.job_spec.outputs.run_tests == 'true' || needs.job_spec.outputs.run_docs == 'true')
runs-on:
- namespace-profile-8x16-ubuntu-2204
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: Build docs
uses: ./.github/actions/build_docs
actionlint:
runs-on: namespace-profile-2x4-ubuntu-2404
if: github.repository_owner == 'zed-industries' && needs.job_spec.outputs.run_actionlint == 'true'
needs: [job_spec]
steps:
- uses: actions/checkout@v4
- name: Download actionlint
id: get_actionlint
run: bash <(curl https://raw.githubusercontent.com/rhysd/actionlint/main/scripts/download-actionlint.bash)
shell: bash
- name: Check workflow files
run: ${{ steps.get_actionlint.outputs.executable }} -color
shell: bash
macos_tests:
timeout-minutes: 60
name: (macOS) Run Clippy and tests
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on:
- self-mini-macos
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: Check that Cargo.lock is up to date
run: |
cargo update --locked --workspace
- name: cargo clippy
run: ./script/clippy
- name: Install cargo-machete
uses: clechasseur/rs-cargo@8435b10f6e71c2e3d4d3b7573003a8ce4bfc6386 # v2
with:
command: install
args: cargo-machete@0.7.0
- name: Check unused dependencies
uses: clechasseur/rs-cargo@8435b10f6e71c2e3d4d3b7573003a8ce4bfc6386 # v2
with:
command: machete
- name: Check licenses
run: |
script/check-licenses
if [[ "${{ needs.job_spec.outputs.run_license }}" == "true" ]]; then
script/generate-licenses /tmp/zed_licenses_output
fi
- name: Check for new vulnerable dependencies
if: github.event_name == 'pull_request'
uses: actions/dependency-review-action@67d4f4bd7a9b17a0db54d2a7519187c65e339de8 # v4
with:
license-check: false
- name: Run tests
uses: ./.github/actions/run_tests
- name: Build collab
run: cargo build -p collab
- name: Build other binaries and features
run: |
cargo build --workspace --bins --all-features
cargo check -p gpui --features "macos-blade"
cargo check -p workspace
cargo build -p remote_server
cargo check -p gpui --examples
# Since the macOS runners are stateful, so we need to remove the config file to prevent potential bug.
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo
linux_tests:
timeout-minutes: 60
name: (Linux) Run Clippy and tests
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on:
- namespace-profile-16x32-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"
- name: Install Linux dependencies
run: ./script/linux
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: cargo clippy
run: ./script/clippy
- name: Run tests
uses: ./.github/actions/run_tests
- name: Build other binaries and features
run: |
cargo build -p zed
cargo check -p workspace
cargo check -p gpui --examples
# Even the Linux runner is not stateful, in theory there is no need to do this cleanup.
# But, to avoid potential issues in the future if we choose to use a stateful Linux runner and forget to add code
# to clean up the config file, I've included the cleanup code here as a precaution.
# While it's not strictly necessary at this moment, I believe it's better to err on the side of caution.
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo
doctests:
# Nextest currently doesn't support doctests, so run them separately and in parallel.
timeout-minutes: 60
name: (Linux) Run doctests
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on:
- namespace-profile-16x32-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"
- name: Install Linux dependencies
run: ./script/linux
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: Run doctests
run: cargo test --workspace --doc --no-fail-fast
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo
build_remote_server:
timeout-minutes: 60
name: (Linux) Build Remote Server
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on:
- namespace-profile-16x32-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"
- name: Install Clang & Mold
run: ./script/remote-server && ./script/install-mold 2.34.0
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: Build Remote Server
run: cargo build -p remote_server
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo
windows_tests:
timeout-minutes: 60
name: (Windows) Run Clippy and tests
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on: [self-32vcpu-windows-2022]
steps:
- name: Environment Setup
run: |
$RunnerDir = Split-Path -Parent $env:RUNNER_WORKSPACE
Write-Output `
"RUSTUP_HOME=$RunnerDir\.rustup" `
"CARGO_HOME=$RunnerDir\.cargo" `
"PATH=$RunnerDir\.cargo\bin;$env:PATH" `
>> $env:GITHUB_ENV
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Configure CI
run: |
New-Item -ItemType Directory -Path "./../.cargo" -Force
Copy-Item -Path "./.cargo/ci-config.toml" -Destination "./../.cargo/config.toml"
- name: cargo clippy
run: |
.\script\clippy.ps1
- name: Run tests
uses: ./.github/actions/run_tests_windows
- name: Build Zed
run: cargo build
- name: Limit target directory size
run: ./script/clear-target-dir-if-larger-than.ps1 250
- name: Clean CI config file
if: always()
run: Remove-Item -Recurse -Path "./../.cargo" -Force -ErrorAction SilentlyContinue
tests_pass:
name: Tests Pass
runs-on: namespace-profile-2x4-ubuntu-2404
needs:
- job_spec
- style
- check_docs
- actionlint
- migration_checks
# run_tests: If adding required tests, add them here and to script below.
- linux_tests
- build_remote_server
- macos_tests
- windows_tests
if: |
github.repository_owner == 'zed-industries' &&
always()
steps:
- name: Check all tests passed
run: |
# Check dependent jobs...
RET_CODE=0
# Always check style
[[ "${{ needs.style.result }}" != 'success' ]] && { RET_CODE=1; echo "style tests failed"; }
if [[ "${{ needs.job_spec.outputs.run_docs }}" == "true" ]]; then
[[ "${{ needs.check_docs.result }}" != 'success' ]] && { RET_CODE=1; echo "docs checks failed"; }
fi
if [[ "${{ needs.job_spec.outputs.run_actionlint }}" == "true" ]]; then
[[ "${{ needs.actionlint.result }}" != 'success' ]] && { RET_CODE=1; echo "actionlint checks failed"; }
fi
# Only check test jobs if they were supposed to run
if [[ "${{ needs.job_spec.outputs.run_tests }}" == "true" ]]; then
[[ "${{ needs.macos_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "macOS tests failed"; }
[[ "${{ needs.linux_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "Linux tests failed"; }
[[ "${{ needs.windows_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "Windows tests failed"; }
[[ "${{ needs.build_remote_server.result }}" != 'success' ]] && { RET_CODE=1; echo "Remote server build failed"; }
# This check is intentionally disabled. See: https://github.com/zed-industries/zed/pull/28431
# [[ "${{ needs.migration_checks.result }}" != 'success' ]] && { RET_CODE=1; echo "Migration Checks failed"; }
fi
if [[ "$RET_CODE" -eq 0 ]]; then
echo "All tests passed successfully!"
fi
exit $RET_CODE
bundle-mac:
timeout-minutes: 120
name: Create a macOS bundle
runs-on:
- self-mini-macos
if: |
( startsWith(github.ref, 'refs/tags/v')
|| contains(github.event.pull_request.labels.*.name, 'run-bundling') )
needs: [macos_tests]
env:
MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}
MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}
APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}
APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
steps:
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
with:
node-version: "18"
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
# We need to fetch more than one commit so that `script/draft-release-notes`
# is able to diff between the current and previous tag.
#
# 25 was chosen arbitrarily.
fetch-depth: 25
clean: false
ref: ${{ github.ref }}
- name: Limit target directory size
run: script/clear-target-dir-if-larger-than 300
- name: Determine version and release channel
if: ${{ startsWith(github.ref, 'refs/tags/v') }}
run: |
# This exports RELEASE_CHANNEL into env (GITHUB_ENV)
script/determine-release-channel
- name: Draft release notes
if: ${{ startsWith(github.ref, 'refs/tags/v') }}
run: |
mkdir -p target/
# Ignore any errors that occur while drafting release notes to not fail the build.
script/draft-release-notes "$RELEASE_VERSION" "$RELEASE_CHANNEL" > target/release-notes.md || true
script/create-draft-release target/release-notes.md
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Create macOS app bundle
run: script/bundle-mac
- name: Rename binaries
if: ${{ github.ref == 'refs/heads/main' || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
run: |
mv target/aarch64-apple-darwin/release/Zed.dmg target/aarch64-apple-darwin/release/Zed-aarch64.dmg
mv target/x86_64-apple-darwin/release/Zed.dmg target/x86_64-apple-darwin/release/Zed-x86_64.dmg
- name: Upload app bundle (aarch64) to workflow run if main branch or specific label
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: ${{ github.ref == 'refs/heads/main' || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-aarch64.dmg
path: target/aarch64-apple-darwin/release/Zed-aarch64.dmg
- name: Upload app bundle (x86_64) to workflow run if main branch or specific label
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: ${{ github.ref == 'refs/heads/main' || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-x86_64.dmg
path: target/x86_64-apple-darwin/release/Zed-x86_64.dmg
- uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
name: Upload app bundle to release
if: ${{ env.RELEASE_CHANNEL == 'preview' || env.RELEASE_CHANNEL == 'stable' }}
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: |
target/zed-remote-server-macos-x86_64.gz
target/zed-remote-server-macos-aarch64.gz
target/aarch64-apple-darwin/release/Zed-aarch64.dmg
target/x86_64-apple-darwin/release/Zed-x86_64.dmg
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
bundle-linux-x86_x64:
timeout-minutes: 60
name: Linux x86_x64 release bundle
runs-on:
- namespace-profile-16x32-ubuntu-2004 # ubuntu 20.04 for minimal glibc
if: |
( startsWith(github.ref, 'refs/tags/v')
|| contains(github.event.pull_request.labels.*.name, 'run-bundling') )
needs: [linux_tests]
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Install Linux dependencies
run: ./script/linux && ./script/install-mold 2.34.0
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Determine version and release channel
if: startsWith(github.ref, 'refs/tags/v')
run: |
# This exports RELEASE_CHANNEL into env (GITHUB_ENV)
script/determine-release-channel
- name: Create Linux .tar.gz bundle
run: script/bundle-linux
- name: Upload Artifact to Workflow - zed (run-bundling)
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
with:
name: zed-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.tar.gz
path: target/release/zed-*.tar.gz
- name: Upload Artifact to Workflow - zed-remote-server (run-bundling)
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.gz
path: target/zed-remote-server-linux-x86_64.gz
- name: Upload Artifacts to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) }}
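# The job-level condition admits tag pushes and `run-bundling` PRs; excluding the label here
# means this release upload only runs for tag builds.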
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: |
target/zed-remote-server-linux-x86_64.gz
target/release/zed-linux-x86_64.tar.gz
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
bundle-linux-aarch64:
timeout-minutes: 60
name: Linux arm64 release bundle
runs-on:
- namespace-profile-8x32-ubuntu-2004-arm-m4 # ubuntu 20.04 for minimal glibc
if: |
startsWith(github.ref, 'refs/tags/v')
|| contains(github.event.pull_request.labels.*.name, 'run-bundling')
needs: [linux_tests]
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Install Linux dependencies
run: ./script/linux
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Determine version and release channel
if: startsWith(github.ref, 'refs/tags/v')
run: |
# This exports RELEASE_CHANNEL into env (GITHUB_ENV)
script/determine-release-channel
- name: Create and upload Linux .tar.gz bundles
run: script/bundle-linux
- name: Upload Artifact to Workflow - zed (run-bundling)
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
with:
name: zed-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.tar.gz
path: target/release/zed-*.tar.gz
- name: Upload Artifact to Workflow - zed-remote-server (run-bundling)
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.gz
path: target/zed-remote-server-linux-aarch64.gz
- name: Upload Artifacts to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) }}
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: |
target/zed-remote-server-linux-aarch64.gz
target/release/zed-linux-aarch64.tar.gz
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
freebsd:
timeout-minutes: 60
runs-on: github-8vcpu-ubuntu-2404
if: |
false && ( startsWith(github.ref, 'refs/tags/v')
|| contains(github.event.pull_request.labels.*.name, 'run-bundling') )
needs: [linux_tests]
name: Build Zed on FreeBSD
steps:
- uses: actions/checkout@v4
- name: Build FreeBSD remote-server
id: freebsd-build
uses: vmactions/freebsd-vm@c3ae29a132c8ef1924775414107a97cac042aad5 # v1.2.0
with:
usesh: true
release: 13.5
copyback: true
prepare: |
pkg install -y \
bash curl jq git \
rustup-init cmake-core llvm-devel-lite pkgconf protobuf # libx11 alsa-lib rust-bindgen-cli
run: |
freebsd-version
sysctl hw.model
sysctl hw.ncpu
sysctl hw.physmem
sysctl hw.usermem
git config --global --add safe.directory /home/runner/work/zed/zed
rustup-init --profile minimal --default-toolchain none -y
. "$HOME/.cargo/env"
./script/bundle-freebsd
mkdir -p out/
mv "target/zed-remote-server-freebsd-x86_64.gz" out/
rm -rf target/
cargo clean
- name: Upload Artifact to Workflow - zed-remote-server (run-bundling)
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-freebsd.gz
path: out/zed-remote-server-freebsd-x86_64.gz
- name: Upload Artifacts to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) }}
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: |
out/zed-remote-server-freebsd-x86_64.gz
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
nix-build:
name: Build with Nix
uses: ./.github/workflows/nix.yml
needs: [job_spec]
if: github.repository_owner == 'zed-industries' &&
(contains(github.event.pull_request.labels.*.name, 'run-nix') ||
needs.job_spec.outputs.run_nix == 'true')
secrets: inherit
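# `secrets: inherit` forwards this workflow's secrets to the reusable nix.yml workflow.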
with:
flake-output: debug
# excludes the final package to only cache dependencies
cachix-filter: "-zed-editor-[0-9.]*-nightly"
bundle-windows-x64:
timeout-minutes: 120
name: Create a Windows installer for x86_64
runs-on: [self-32vcpu-windows-2022]
if: |
( startsWith(github.ref, 'refs/tags/v')
|| contains(github.event.pull_request.labels.*.name, 'run-bundling') )
needs: [windows_tests]
env:
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: "http://timestamp.acs.microsoft.com"
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Determine version and release channel
working-directory: ${{ env.ZED_WORKSPACE }}
if: ${{ startsWith(github.ref, 'refs/tags/v') }}
run: |
# This exports RELEASE_CHANNEL into env (GITHUB_ENV)
script/determine-release-channel.ps1
- name: Build Zed installer
working-directory: ${{ env.ZED_WORKSPACE }}
run: script/bundle-windows.ps1
- name: Upload installer (x86_64) to Workflow - zed (run-bundling)
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-x86_64.exe
path: ${{ env.SETUP_PATH }}
- name: Upload Artifacts to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) }}
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: ${{ env.SETUP_PATH }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
bundle-windows-aarch64:
timeout-minutes: 120
name: Create a Windows installer for aarch64
runs-on: [self-32vcpu-windows-2022]
if: |
( startsWith(github.ref, 'refs/tags/v')
|| contains(github.event.pull_request.labels.*.name, 'run-bundling') )
needs: [windows_tests]
env:
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: "http://timestamp.acs.microsoft.com"
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Determine version and release channel
working-directory: ${{ env.ZED_WORKSPACE }}
if: ${{ startsWith(github.ref, 'refs/tags/v') }}
run: |
# This exports RELEASE_CHANNEL into env (GITHUB_ENV)
script/determine-release-channel.ps1
- name: Build Zed installer
working-directory: ${{ env.ZED_WORKSPACE }}
run: script/bundle-windows.ps1 -Architecture aarch64
- name: Upload installer (aarch64) to Workflow - zed (run-bundling)
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-aarch64.exe
path: ${{ env.SETUP_PATH }}
- name: Upload Artifacts to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) }}
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: ${{ env.SETUP_PATH }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
auto-release-preview:
name: Auto release preview
if: |
false
&& startsWith(github.ref, 'refs/tags/v')
&& endsWith(github.ref, '-pre') && !endsWith(github.ref, '.0-pre')
needs: [bundle-mac, bundle-linux-x86_x64, bundle-linux-aarch64, bundle-windows-x64, bundle-windows-aarch64]
runs-on:
- self-mini-macos
steps:
- name: gh release
run: gh release edit "$GITHUB_REF_NAME" --draft=false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: Create Sentry release
uses: getsentry/action-release@526942b68292201ac6bbb99b9a0747d4abee354c # v3
env:
SENTRY_ORG: zed-dev
SENTRY_PROJECT: zed
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
with:
environment: production

.github/workflows/compare_perf.yml (new file, 78 lines)

@@ -0,0 +1,78 @@
# Generated from xtask::workflows::compare_perf
# Rebuild with `cargo xtask workflows`.
name: compare_perf
on:
workflow_dispatch:
inputs:
head:
description: head
required: true
type: string
base:
description: base
required: true
type: string
crate_name:
description: crate_name
type: string
default: ''
jobs:
run_perf:
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: compare_perf::run_perf::install_hyperfine
run: cargo install hyperfine
shell: bash -euxo pipefail {0}
- name: steps::git_checkout
run: git fetch origin ${{ inputs.base }} && git checkout ${{ inputs.base }}
shell: bash -euxo pipefail {0}
- name: compare_perf::run_perf::cargo_perf_test
run: |2-
if [ -n "${{ inputs.crate_name }}" ]; then
cargo perf-test -p ${{ inputs.crate_name }} -- --json=${{ inputs.base }};
else
cargo perf-test -p vim -- --json=${{ inputs.base }};
fi
shell: bash -euxo pipefail {0}
- name: steps::git_checkout
run: git fetch origin ${{ inputs.head }} && git checkout ${{ inputs.head }}
shell: bash -euxo pipefail {0}
- name: compare_perf::run_perf::cargo_perf_test
run: |2-
if [ -n "${{ inputs.crate_name }}" ]; then
cargo perf-test -p ${{ inputs.crate_name }} -- --json=${{ inputs.head }};
else
cargo perf-test -p vim -- --json=${{ inputs.head }};
fi
shell: bash -euxo pipefail {0}
- name: compare_perf::run_perf::compare_runs
run: cargo perf-compare --save=results.md ${{ inputs.base }} ${{ inputs.head }}
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact results.md'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: results.md
path: results.md
if-no-files-found: error
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}


@@ -1,5 +1,6 @@
# generated `cargo xtask workflows`. Do not edit.
name: Danger
# Generated from xtask::workflows::danger
# Rebuild with `cargo xtask workflows`.
name: danger
on:
pull_request:
types:
@@ -16,20 +17,22 @@ jobs:
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_pnpm
uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2
with:
version: '9'
- name: steps::danger::setup_node
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
cache: pnpm
cache-dependency-path: script/danger/pnpm-lock.yaml
- name: steps::danger::install_deps
- name: danger::danger_job::install_deps
run: pnpm install --dir script/danger
shell: bash -euxo pipefail {0}
- name: steps::danger::run
- name: danger::danger_job::run
run: pnpm run --dir script/danger danger ci
shell: bash -euxo pipefail {0}
env:


@@ -1,71 +0,0 @@
name: Run Agent Eval
on:
schedule:
- cron: "0 0 * * *"
pull_request:
branches:
- "**"
types: [synchronize, reopened, labeled]
workflow_dispatch:
concurrency:
# Allow only one workflow per any non-`main` branch.
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_EVAL_TELEMETRY: 1
jobs:
run_eval:
timeout-minutes: 60
name: Run Agent Eval
if: >
github.repository_owner == 'zed-industries' &&
(github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'run-eval'))
runs-on:
- namespace-profile-16x32-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"
- name: Install Linux dependencies
run: ./script/linux
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: Compile eval
run: cargo build --package=eval
- name: Run eval
run: cargo run --package=eval -- --repetitions=8 --concurrency=1
# Even though the Linux runner is not stateful and, in theory, there is no need for this cleanup,
# I've included the cleanup code here as a precaution, to avoid potential issues in the future
# if we choose to use a stateful Linux runner and forget to add code to clean up the config file.
# While it's not strictly necessary at this moment, I believe it's better to err on the side of caution.
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo


@@ -1,76 +0,0 @@
# generated `cargo xtask workflows`. Do not edit.
name: Nix build
on:
workflow_call:
inputs:
flake-output:
type: string
default: default
cachix-filter:
type: string
jobs:
nix-build-linux-x86:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-16x32-ubuntu-2204
env:
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_MINIDUMP_ENDPOINT }}
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
GIT_LFS_SKIP_SMUDGE: '1'
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: 'false'
- name: steps::nix::install_nix
uses: cachix/install-nix-action@02a151ada4993995686f9ed4f1be7cfbb229e56f
with:
github_access_token: ${{ secrets.GITHUB_TOKEN }}
- name: steps::nix::cachix_action
uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad
with:
name: zed
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
pushFilter: ${{ inputs.cachix-filter }}
cachixArgs: -v
- name: steps::nix::build
run: nix build .#${{ inputs.flake-output }} -L --accept-flake-config
shell: bash -euxo pipefail {0}
timeout-minutes: 60
continue-on-error: true
nix-build-mac-arm:
if: github.repository_owner == 'zed-industries'
runs-on: self-mini-macos
env:
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_MINIDUMP_ENDPOINT }}
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
GIT_LFS_SKIP_SMUDGE: '1'
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: 'false'
- name: steps::nix::set_path
run: |
echo "/nix/var/nix/profiles/default/bin" >> "$GITHUB_PATH"
echo "/Users/administrator/.nix-profile/bin" >> "$GITHUB_PATH"
shell: bash -euxo pipefail {0}
- name: steps::nix::cachix_action
uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad
with:
name: zed
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
pushFilter: ${{ inputs.cachix-filter }}
cachixArgs: -v
- name: steps::nix::build
run: nix build .#${{ inputs.flake-output }} -L --accept-flake-config
shell: bash -euxo pipefail {0}
- name: steps::nix::limit_store
run: |-
if [ "$(du -sm /nix/store | cut -f1)" -gt 50000 ]; then
nix-collect-garbage -d || true
fi
shell: bash -euxo pipefail {0}
timeout-minutes: 60
continue-on-error: true

.github/workflows/release.yml (new file, 488 lines)

@@ -0,0 +1,488 @@
# Generated from xtask::workflows::release
# Rebuild with `cargo xtask workflows`.
name: release
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: '1'
on:
push:
tags:
- v*
jobs:
run_tests_mac:
if: github.repository_owner == 'zed-industries'
runs-on: self-mini-macos
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast --failure-output immediate-final
shell: bash -euxo pipefail {0}
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
timeout-minutes: 60
run_tests_linux:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: steps::cache_rust_dependencies_namespace
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 250
shell: bash -euxo pipefail {0}
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast --failure-output immediate-final
shell: bash -euxo pipefail {0}
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
timeout-minutes: 60
run_tests_windows:
if: github.repository_owner == 'zed-industries'
runs-on: self-32vcpu-windows-2022
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
New-Item -ItemType Directory -Path "./../.cargo" -Force
Copy-Item -Path "./.cargo/ci-config.toml" -Destination "./../.cargo/config.toml"
shell: pwsh
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
run: ./script/clippy.ps1
shell: pwsh
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: pwsh
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than.ps1 250
shell: pwsh
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast --failure-output immediate-final
shell: pwsh
- name: steps::cleanup_cargo_config
if: always()
run: |
Remove-Item -Recurse -Path "./../.cargo" -Force -ErrorAction SilentlyContinue
shell: pwsh
timeout-minutes: 60
check_scripts:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: run_tests::check_scripts::run_shellcheck
run: ./script/shellcheck-scripts error
shell: bash -euxo pipefail {0}
- id: get_actionlint
name: run_tests::check_scripts::download_actionlint
run: bash <(curl https://raw.githubusercontent.com/rhysd/actionlint/main/scripts/download-actionlint.bash)
shell: bash -euxo pipefail {0}
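# download-actionlint.bash exposes the downloaded binary's path as the `executable`
# step output, which the next step reads via `steps.get_actionlint.outputs.executable`.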
- name: run_tests::check_scripts::run_actionlint
run: |
${{ steps.get_actionlint.outputs.executable }} -color
shell: bash -euxo pipefail {0}
- name: run_tests::check_scripts::check_xtask_workflows
run: |
cargo xtask workflows
if ! git diff --exit-code .github; then
echo "Error: .github directory has uncommitted changes after running 'cargo xtask workflows'"
echo "Please run 'cargo xtask workflows' locally and commit the changes"
exit 1
fi
shell: bash -euxo pipefail {0}
timeout-minutes: 60
create_draft_release:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
fetch-depth: 25
ref: ${{ github.ref }}
- name: script/determine-release-channel
run: script/determine-release-channel
shell: bash -euxo pipefail {0}
- name: mkdir -p target/
run: mkdir -p target/
shell: bash -euxo pipefail {0}
- name: release::create_draft_release::generate_release_notes
run: node --redirect-warnings=/dev/null ./script/draft-release-notes "$RELEASE_VERSION" "$RELEASE_CHANNEL" > target/release-notes.md
shell: bash -euxo pipefail {0}
- name: release::create_draft_release::create_release
run: script/create-draft-release target/release-notes.md
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
timeout-minutes: 60
bundle_linux_aarch64:
needs:
- run_tests_linux
- check_scripts
runs-on: namespace-profile-8x32-ubuntu-2004-arm-m4
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: ./script/bundle-linux
run: ./script/bundle-linux
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact zed-linux-aarch64.tar.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-linux-aarch64.tar.gz
path: target/release/zed-linux-aarch64.tar.gz
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-linux-aarch64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-linux-aarch64.gz
path: target/zed-remote-server-linux-aarch64.gz
if-no-files-found: error
timeout-minutes: 60
bundle_linux_x86_64:
needs:
- run_tests_linux
- check_scripts
runs-on: namespace-profile-32x64-ubuntu-2004
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: ./script/bundle-linux
run: ./script/bundle-linux
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact zed-linux-x86_64.tar.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-linux-x86_64.tar.gz
path: target/release/zed-linux-x86_64.tar.gz
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-linux-x86_64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-linux-x86_64.gz
path: target/zed-remote-server-linux-x86_64.gz
if-no-files-found: error
timeout-minutes: 60
bundle_mac_aarch64:
needs:
- run_tests_mac
- check_scripts
runs-on: self-mini-macos
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}
MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}
APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}
APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}
- name: run_bundling::bundle_mac::bundle_mac
run: ./script/bundle-mac aarch64-apple-darwin
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact Zed-aarch64.dmg'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed-aarch64.dmg
path: target/aarch64-apple-darwin/release/Zed-aarch64.dmg
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-macos-aarch64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-macos-aarch64.gz
path: target/zed-remote-server-macos-aarch64.gz
if-no-files-found: error
timeout-minutes: 60
bundle_mac_x86_64:
needs:
- run_tests_mac
- check_scripts
runs-on: self-mini-macos
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}
MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}
APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}
APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}
- name: run_bundling::bundle_mac::bundle_mac
run: ./script/bundle-mac x86_64-apple-darwin
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact Zed-x86_64.dmg'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed-x86_64.dmg
path: target/x86_64-apple-darwin/release/Zed-x86_64.dmg
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-macos-x86_64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-macos-x86_64.gz
path: target/zed-remote-server-macos-x86_64.gz
if-no-files-found: error
timeout-minutes: 60
bundle_windows_aarch64:
needs:
- run_tests_windows
- check_scripts
runs-on: self-32vcpu-windows-2022
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: http://timestamp.acs.microsoft.com
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: run_bundling::bundle_windows::bundle_windows
run: script/bundle-windows.ps1 -Architecture aarch64
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
- name: '@actions/upload-artifact Zed-aarch64.exe'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed-aarch64.exe
path: target/Zed-aarch64.exe
if-no-files-found: error
timeout-minutes: 60
bundle_windows_x86_64:
needs:
- run_tests_windows
- check_scripts
runs-on: self-32vcpu-windows-2022
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: http://timestamp.acs.microsoft.com
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: run_bundling::bundle_windows::bundle_windows
run: script/bundle-windows.ps1 -Architecture x86_64
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
- name: '@actions/upload-artifact Zed-x86_64.exe'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed-x86_64.exe
path: target/Zed-x86_64.exe
if-no-files-found: error
timeout-minutes: 60
upload_release_assets:
needs:
- create_draft_release
- bundle_linux_aarch64
- bundle_linux_x86_64
- bundle_mac_aarch64
- bundle_mac_x86_64
- bundle_windows_aarch64
- bundle_windows_x86_64
runs-on: namespace-profile-4x8-ubuntu-2204
steps:
- name: release::download_workflow_artifacts
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53
with:
path: ./artifacts/
- name: ls -lR ./artifacts
run: ls -lR ./artifacts
shell: bash -euxo pipefail {0}
- name: release::prep_release_artifacts
run: |-
mkdir -p release-artifacts/
mv ./artifacts/Zed-aarch64.dmg/Zed-aarch64.dmg release-artifacts/Zed-aarch64.dmg
mv ./artifacts/Zed-x86_64.dmg/Zed-x86_64.dmg release-artifacts/Zed-x86_64.dmg
mv ./artifacts/zed-linux-aarch64.tar.gz/zed-linux-aarch64.tar.gz release-artifacts/zed-linux-aarch64.tar.gz
mv ./artifacts/zed-linux-x86_64.tar.gz/zed-linux-x86_64.tar.gz release-artifacts/zed-linux-x86_64.tar.gz
mv ./artifacts/Zed-x86_64.exe/Zed-x86_64.exe release-artifacts/Zed-x86_64.exe
mv ./artifacts/Zed-aarch64.exe/Zed-aarch64.exe release-artifacts/Zed-aarch64.exe
mv ./artifacts/zed-remote-server-macos-aarch64.gz/zed-remote-server-macos-aarch64.gz release-artifacts/zed-remote-server-macos-aarch64.gz
mv ./artifacts/zed-remote-server-macos-x86_64.gz/zed-remote-server-macos-x86_64.gz release-artifacts/zed-remote-server-macos-x86_64.gz
mv ./artifacts/zed-remote-server-linux-aarch64.gz/zed-remote-server-linux-aarch64.gz release-artifacts/zed-remote-server-linux-aarch64.gz
mv ./artifacts/zed-remote-server-linux-x86_64.gz/zed-remote-server-linux-x86_64.gz release-artifacts/zed-remote-server-linux-x86_64.gz
shell: bash -euxo pipefail {0}
- name: gh release upload "$GITHUB_REF_NAME" --repo=zed-industries/zed release-artifacts/*
run: gh release upload "$GITHUB_REF_NAME" --repo=zed-industries/zed release-artifacts/*
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
auto_release_preview:
needs:
- upload_release_assets
if: startsWith(github.ref, 'refs/tags/v') && endsWith(github.ref, '-pre') && !endsWith(github.ref, '.0-pre')
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: gh release edit "$GITHUB_REF_NAME" --repo=zed-industries/zed --draft=false
run: gh release edit "$GITHUB_REF_NAME" --repo=zed-industries/zed --draft=false
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: release::create_sentry_release
uses: getsentry/action-release@526942b68292201ac6bbb99b9a0747d4abee354c
with:
environment: production
env:
SENTRY_ORG: zed-dev
SENTRY_PROJECT: zed
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
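# For tag pushes `github.ref_name` is the tag name, never `main`, so re-runs for the same
# tag share one concurrency group and a newer run cancels any in-progress one.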


@@ -1,367 +1,489 @@
name: Release Nightly
on:
schedule:
# Fire every day at 7:00am UTC (Roughly before EU workday and after US workday)
- cron: "0 7 * * *"
push:
tags:
- "nightly"
# Generated from xtask::workflows::release_nightly
# Rebuild with `cargo xtask workflows`.
name: release_nightly
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
RUST_BACKTRACE: '1'
on:
push:
tags:
- nightly
schedule:
- cron: 0 7 * * *
jobs:
style:
timeout-minutes: 60
name: Check formatting and Clippy lints
check_style:
if: github.repository_owner == 'zed-industries'
runs-on:
- self-hosted
- macOS
runs-on: self-mini-macos
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
fetch-depth: 0
- name: Run style checks
uses: ./.github/actions/check_style
- name: Run clippy
run: ./script/clippy
tests:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
fetch-depth: 0
- name: steps::cargo_fmt
run: cargo fmt --all -- --check
shell: bash -euxo pipefail {0}
- name: ./script/clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
timeout-minutes: 60
name: Run tests
run_tests_windows:
if: github.repository_owner == 'zed-industries'
runs-on:
- self-hosted
- macOS
needs: style
runs-on: self-32vcpu-windows-2022
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Run tests
uses: ./.github/actions/run_tests
windows-tests:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
New-Item -ItemType Directory -Path "./../.cargo" -Force
Copy-Item -Path "./.cargo/ci-config.toml" -Destination "./../.cargo/config.toml"
shell: pwsh
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
run: ./script/clippy.ps1
shell: pwsh
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: pwsh
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than.ps1 250
shell: pwsh
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast --failure-output immediate-final
shell: pwsh
- name: steps::cleanup_cargo_config
if: always()
run: |
Remove-Item -Recurse -Path "./../.cargo" -Force -ErrorAction SilentlyContinue
shell: pwsh
timeout-minutes: 60
name: Run tests on Windows
if: github.repository_owner == 'zed-industries'
runs-on: [self-32vcpu-windows-2022]
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Configure CI
run: |
New-Item -ItemType Directory -Path "./../.cargo" -Force
Copy-Item -Path "./.cargo/ci-config.toml" -Destination "./../.cargo/config.toml"
- name: Run tests
uses: ./.github/actions/run_tests_windows
- name: Limit target directory size
run: ./script/clear-target-dir-if-larger-than.ps1 1024
- name: Clean CI config file
if: always()
run: Remove-Item -Recurse -Path "./../.cargo" -Force -ErrorAction SilentlyContinue
bundle-mac:
timeout-minutes: 60
name: Create a macOS bundle
if: github.repository_owner == 'zed-industries'
runs-on:
- self-mini-macos
needs: tests
bundle_linux_aarch64:
needs:
- check_style
- run_tests_windows
runs-on: namespace-profile-8x32-ubuntu-2004-arm-m4
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: run_bundling::set_release_channel_to_nightly
run: |
set -eu
version=$(git rev-parse --short HEAD)
echo "Publishing version: ${version} on release channel nightly"
echo "nightly" > crates/zed/RELEASE_CHANNEL
shell: bash -euxo pipefail {0}
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: ./script/bundle-linux
run: ./script/bundle-linux
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact zed-linux-aarch64.tar.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-linux-aarch64.tar.gz
path: target/release/zed-linux-aarch64.tar.gz
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-linux-aarch64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-linux-aarch64.gz
path: target/zed-remote-server-linux-aarch64.gz
if-no-files-found: error
timeout-minutes: 60
bundle_linux_x86_64:
needs:
- check_style
- run_tests_windows
runs-on: namespace-profile-32x64-ubuntu-2004
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: run_bundling::set_release_channel_to_nightly
run: |
set -eu
version=$(git rev-parse --short HEAD)
echo "Publishing version: ${version} on release channel nightly"
echo "nightly" > crates/zed/RELEASE_CHANNEL
shell: bash -euxo pipefail {0}
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: ./script/bundle-linux
run: ./script/bundle-linux
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact zed-linux-x86_64.tar.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-linux-x86_64.tar.gz
path: target/release/zed-linux-x86_64.tar.gz
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-linux-x86_64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-linux-x86_64.gz
path: target/zed-remote-server-linux-x86_64.gz
if-no-files-found: error
timeout-minutes: 60
bundle_mac_aarch64:
needs:
- check_style
- run_tests_windows
runs-on: self-mini-macos
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}
MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}
APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}
APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
steps:
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
with:
node-version: "18"
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Set release channel to nightly
run: |
set -eu
version=$(git rev-parse --short HEAD)
echo "Publishing version: ${version} on release channel nightly"
echo "nightly" > crates/zed/RELEASE_CHANNEL
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Create macOS app bundle
run: script/bundle-mac
- name: Upload Zed Nightly
run: script/upload-nightly macos
bundle-linux-x86:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: run_bundling::set_release_channel_to_nightly
run: |
set -eu
version=$(git rev-parse --short HEAD)
echo "Publishing version: ${version} on release channel nightly"
echo "nightly" > crates/zed/RELEASE_CHANNEL
shell: bash -euxo pipefail {0}
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}
- name: run_bundling::bundle_mac::bundle_mac
run: ./script/bundle-mac aarch64-apple-darwin
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact Zed-aarch64.dmg'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed-aarch64.dmg
path: target/aarch64-apple-darwin/release/Zed-aarch64.dmg
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-macos-aarch64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-macos-aarch64.gz
path: target/zed-remote-server-macos-aarch64.gz
if-no-files-found: error
timeout-minutes: 60
name: Create a Linux *.tar.gz bundle for x86
if: github.repository_owner == 'zed-industries'
runs-on:
- namespace-profile-16x32-ubuntu-2004 # ubuntu 20.04 for minimal glibc
needs: tests
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
- name: Install Linux dependencies
run: ./script/linux && ./script/install-mold 2.34.0
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Limit target directory size
run: script/clear-target-dir-if-larger-than 100
- name: Set release channel to nightly
run: |
set -euo pipefail
version=$(git rev-parse --short HEAD)
echo "Publishing version: ${version} on release channel nightly"
echo "nightly" > crates/zed/RELEASE_CHANNEL
- name: Create Linux .tar.gz bundle
run: script/bundle-linux
- name: Upload Zed Nightly
run: script/upload-nightly linux-targz
bundle-linux-arm:
timeout-minutes: 60
name: Create a Linux *.tar.gz bundle for ARM
if: github.repository_owner == 'zed-industries'
runs-on:
- namespace-profile-8x32-ubuntu-2004-arm-m4 # ubuntu 20.04 for minimal glibc
needs: tests
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Install Linux dependencies
run: ./script/linux
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Limit target directory size
run: script/clear-target-dir-if-larger-than 100
- name: Set release channel to nightly
run: |
set -euo pipefail
version=$(git rev-parse --short HEAD)
echo "Publishing version: ${version} on release channel nightly"
echo "nightly" > crates/zed/RELEASE_CHANNEL
- name: Create Linux .tar.gz bundle
run: script/bundle-linux
- name: Upload Zed Nightly
run: script/upload-nightly linux-targz
freebsd:
timeout-minutes: 60
if: false && github.repository_owner == 'zed-industries'
runs-on: github-8vcpu-ubuntu-2404
needs: tests
name: Build Zed on FreeBSD
steps:
- uses: actions/checkout@v4
- name: Build FreeBSD remote-server
id: freebsd-build
uses: vmactions/freebsd-vm@c3ae29a132c8ef1924775414107a97cac042aad5 # v1.2.0
with:
# envs: "MYTOKEN MYTOKEN2"
usesh: true
release: 13.5
copyback: true
prepare: |
pkg install -y \
bash curl jq git \
rustup-init cmake-core llvm-devel-lite pkgconf protobuf # libx11 alsa-lib rust-bindgen-cli
run: |
freebsd-version
sysctl hw.model
sysctl hw.ncpu
sysctl hw.physmem
sysctl hw.usermem
git config --global --add safe.directory /home/runner/work/zed/zed
rustup-init --profile minimal --default-toolchain none -y
. "$HOME/.cargo/env"
./script/bundle-freebsd
mkdir -p out/
mv "target/zed-remote-server-freebsd-x86_64.gz" out/
rm -rf target/
cargo clean
- name: Upload Zed Nightly
run: script/upload-nightly freebsd
bundle-nix:
name: Build and cache Nix package
needs: tests
secrets: inherit
uses: ./.github/workflows/nix.yml
bundle-windows-x64:
timeout-minutes: 60
name: Create a Windows installer for x86_64
if: github.repository_owner == 'zed-industries'
runs-on: [self-32vcpu-windows-2022]
needs: windows-tests
env:
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: "http://timestamp.acs.microsoft.com"
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Set release channel to nightly
working-directory: ${{ env.ZED_WORKSPACE }}
run: |
$ErrorActionPreference = "Stop"
$version = git rev-parse --short HEAD
Write-Host "Publishing version: $version on release channel nightly"
"nightly" | Set-Content -Path "crates/zed/RELEASE_CHANNEL"
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Build Zed installer
working-directory: ${{ env.ZED_WORKSPACE }}
run: script/bundle-windows.ps1 -Architecture x86_64
- name: Upload Zed Nightly
working-directory: ${{ env.ZED_WORKSPACE }}
run: script/upload-nightly.ps1 -Architecture x86_64
bundle-windows-arm64:
timeout-minutes: 60
name: Create a Windows installer for aarch64
if: github.repository_owner == 'zed-industries'
runs-on: [self-32vcpu-windows-2022]
needs: windows-tests
env:
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: "http://timestamp.acs.microsoft.com"
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Set release channel to nightly
working-directory: ${{ env.ZED_WORKSPACE }}
run: |
$ErrorActionPreference = "Stop"
$version = git rev-parse --short HEAD
Write-Host "Publishing version: $version on release channel nightly"
"nightly" | Set-Content -Path "crates/zed/RELEASE_CHANNEL"
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Build Zed installer
working-directory: ${{ env.ZED_WORKSPACE }}
run: script/bundle-windows.ps1 -Architecture aarch64
- name: Upload Zed Nightly
working-directory: ${{ env.ZED_WORKSPACE }}
run: script/upload-nightly.ps1 -Architecture aarch64
update-nightly-tag:
name: Update nightly tag
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
bundle_mac_x86_64:
needs:
- bundle-mac
- bundle-linux-x86
- bundle-linux-arm
- bundle-windows-x64
- bundle-windows-arm64
- check_style
- run_tests_windows
runs-on: self-mini-macos
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}
MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}
APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}
APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
fetch-depth: 0
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: run_bundling::set_release_channel_to_nightly
run: |
set -eu
version=$(git rev-parse --short HEAD)
echo "Publishing version: ${version} on release channel nightly"
echo "nightly" > crates/zed/RELEASE_CHANNEL
shell: bash -euxo pipefail {0}
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}
- name: run_bundling::bundle_mac::bundle_mac
run: ./script/bundle-mac x86_64-apple-darwin
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact Zed-x86_64.dmg'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed-x86_64.dmg
path: target/x86_64-apple-darwin/release/Zed-x86_64.dmg
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-macos-x86_64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-macos-x86_64.gz
path: target/zed-remote-server-macos-x86_64.gz
if-no-files-found: error
timeout-minutes: 60
bundle_windows_aarch64:
needs:
- check_style
- run_tests_windows
runs-on: self-32vcpu-windows-2022
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: http://timestamp.acs.microsoft.com
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: run_bundling::set_release_channel_to_nightly
run: |
$ErrorActionPreference = "Stop"
$version = git rev-parse --short HEAD
Write-Host "Publishing version: $version on release channel nightly"
"nightly" | Set-Content -Path "crates/zed/RELEASE_CHANNEL"
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: run_bundling::bundle_windows::bundle_windows
run: script/bundle-windows.ps1 -Architecture aarch64
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
- name: '@actions/upload-artifact Zed-aarch64.exe'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed-aarch64.exe
path: target/Zed-aarch64.exe
if-no-files-found: error
timeout-minutes: 60
bundle_windows_x86_64:
needs:
- check_style
- run_tests_windows
runs-on: self-32vcpu-windows-2022
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: http://timestamp.acs.microsoft.com
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: run_bundling::set_release_channel_to_nightly
run: |
$ErrorActionPreference = "Stop"
$version = git rev-parse --short HEAD
Write-Host "Publishing version: $version on release channel nightly"
"nightly" | Set-Content -Path "crates/zed/RELEASE_CHANNEL"
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: run_bundling::bundle_windows::bundle_windows
run: script/bundle-windows.ps1 -Architecture x86_64
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
- name: '@actions/upload-artifact Zed-x86_64.exe'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed-x86_64.exe
path: target/Zed-x86_64.exe
if-no-files-found: error
timeout-minutes: 60
build_nix_linux_x86_64:
needs:
- check_style
- run_tests_windows
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-32x64-ubuntu-2004
env:
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
GIT_LFS_SKIP_SMUDGE: '1'
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: nix_build::build_nix::install_nix
uses: cachix/install-nix-action@02a151ada4993995686f9ed4f1be7cfbb229e56f
with:
github_access_token: ${{ secrets.GITHUB_TOKEN }}
- name: nix_build::build_nix::cachix_action
uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad
with:
name: zed
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
cachixArgs: -v
- name: nix_build::build_nix::build
run: nix build .#default -L --accept-flake-config
shell: bash -euxo pipefail {0}
timeout-minutes: 60
continue-on-error: true
build_nix_mac_aarch64:
needs:
- check_style
- run_tests_windows
if: github.repository_owner == 'zed-industries'
runs-on: self-mini-macos
env:
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
GIT_LFS_SKIP_SMUDGE: '1'
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: nix_build::build_nix::set_path
run: |
echo "/nix/var/nix/profiles/default/bin" >> "$GITHUB_PATH"
echo "/Users/administrator/.nix-profile/bin" >> "$GITHUB_PATH"
shell: bash -euxo pipefail {0}
- name: nix_build::build_nix::cachix_action
uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad
with:
name: zed
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
cachixArgs: -v
- name: nix_build::build_nix::build
run: nix build .#default -L --accept-flake-config
shell: bash -euxo pipefail {0}
- name: nix_build::build_nix::limit_store
run: |-
if [ "$(du -sm /nix/store | cut -f1)" -gt 50000 ]; then
nix-collect-garbage -d || true
fi
shell: bash -euxo pipefail {0}
timeout-minutes: 60
continue-on-error: true
update_nightly_tag:
needs:
- bundle_linux_aarch64
- bundle_linux_x86_64
- bundle_mac_aarch64
- bundle_mac_x86_64
- bundle_windows_aarch64
- bundle_windows_x86_64
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-4x8-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
fetch-depth: 0
- name: release::download_workflow_artifacts
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53
with:
path: ./artifacts/
- name: ls -lR ./artifacts
run: ls -lR ./artifacts
shell: bash -euxo pipefail {0}
- name: release::prep_release_artifacts
run: |-
mkdir -p release-artifacts/
mv ./artifacts/Zed-aarch64.dmg/Zed-aarch64.dmg release-artifacts/Zed-aarch64.dmg
mv ./artifacts/Zed-x86_64.dmg/Zed-x86_64.dmg release-artifacts/Zed-x86_64.dmg
mv ./artifacts/zed-linux-aarch64.tar.gz/zed-linux-aarch64.tar.gz release-artifacts/zed-linux-aarch64.tar.gz
mv ./artifacts/zed-linux-x86_64.tar.gz/zed-linux-x86_64.tar.gz release-artifacts/zed-linux-x86_64.tar.gz
mv ./artifacts/Zed-x86_64.exe/Zed-x86_64.exe release-artifacts/Zed-x86_64.exe
mv ./artifacts/Zed-aarch64.exe/Zed-aarch64.exe release-artifacts/Zed-aarch64.exe
mv ./artifacts/zed-remote-server-macos-aarch64.gz/zed-remote-server-macos-aarch64.gz release-artifacts/zed-remote-server-macos-aarch64.gz
mv ./artifacts/zed-remote-server-macos-x86_64.gz/zed-remote-server-macos-x86_64.gz release-artifacts/zed-remote-server-macos-x86_64.gz
mv ./artifacts/zed-remote-server-linux-aarch64.gz/zed-remote-server-linux-aarch64.gz release-artifacts/zed-remote-server-linux-aarch64.gz
mv ./artifacts/zed-remote-server-linux-x86_64.gz/zed-remote-server-linux-x86_64.gz release-artifacts/zed-remote-server-linux-x86_64.gz
shell: bash -euxo pipefail {0}
- name: ./script/upload-nightly
run: ./script/upload-nightly
shell: bash -euxo pipefail {0}
env:
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
- name: release_nightly::update_nightly_tag_job::update_nightly_tag
run: |
if [ "$(git rev-parse nightly)" = "$(git rev-parse HEAD)" ]; then
echo "Nightly tag already points to current commit. Skipping tagging."
exit 0
fi
git config user.name github-actions
git config user.email github-actions@github.com
git tag -f nightly
git push origin nightly --force
shell: bash -euxo pipefail {0}
- name: release::create_sentry_release
uses: getsentry/action-release@526942b68292201ac6bbb99b9a0747d4abee354c
with:
environment: production
env:
SENTRY_ORG: zed-dev
SENTRY_PROJECT: zed
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
timeout-minutes: 60

62
.github/workflows/run_agent_evals.yml vendored Normal file
View File

@@ -0,0 +1,62 @@
# Generated from xtask::workflows::run_agent_evals
# Rebuild with `cargo xtask workflows`.
name: run_agent_evals
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: '0'
RUST_BACKTRACE: '1'
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_EVAL_TELEMETRY: '1'
on:
pull_request:
types:
- synchronize
- reopened
- labeled
branches:
- '**'
schedule:
- cron: 0 0 * * *
workflow_dispatch: {}
jobs:
agent_evals:
if: |
github.repository_owner == 'zed-industries' &&
(github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'run-eval'))
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::cache_rust_dependencies_namespace
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: cargo build --package=eval
run: cargo build --package=eval
shell: bash -euxo pipefail {0}
- name: run_agent_evals::agent_evals::run_eval
run: cargo run --package=eval -- --repetitions=8 --concurrency=1
shell: bash -euxo pipefail {0}
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
timeout-minutes: 60
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true

263
.github/workflows/run_bundling.yml vendored Normal file
View File

@@ -0,0 +1,263 @@
# Generated from xtask::workflows::run_bundling
# Rebuild with `cargo xtask workflows`.
name: run_bundling
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: '1'
on:
pull_request:
types:
- labeled
- synchronize
jobs:
bundle_linux_aarch64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: namespace-profile-8x32-ubuntu-2004-arm-m4
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: ./script/bundle-linux
run: ./script/bundle-linux
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact zed-linux-aarch64.tar.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-linux-aarch64.tar.gz
path: target/release/zed-linux-aarch64.tar.gz
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-linux-aarch64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-linux-aarch64.gz
path: target/zed-remote-server-linux-aarch64.gz
if-no-files-found: error
timeout-minutes: 60
bundle_linux_x86_64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: namespace-profile-32x64-ubuntu-2004
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: ./script/bundle-linux
run: ./script/bundle-linux
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact zed-linux-x86_64.tar.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-linux-x86_64.tar.gz
path: target/release/zed-linux-x86_64.tar.gz
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-linux-x86_64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-linux-x86_64.gz
path: target/zed-remote-server-linux-x86_64.gz
if-no-files-found: error
timeout-minutes: 60
bundle_mac_aarch64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: self-mini-macos
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}
MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}
APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}
APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}
- name: run_bundling::bundle_mac::bundle_mac
run: ./script/bundle-mac aarch64-apple-darwin
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact Zed-aarch64.dmg'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed-aarch64.dmg
path: target/aarch64-apple-darwin/release/Zed-aarch64.dmg
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-macos-aarch64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-macos-aarch64.gz
path: target/zed-remote-server-macos-aarch64.gz
if-no-files-found: error
timeout-minutes: 60
bundle_mac_x86_64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: self-mini-macos
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}
MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}
APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}
APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}
- name: run_bundling::bundle_mac::bundle_mac
run: ./script/bundle-mac x86_64-apple-darwin
shell: bash -euxo pipefail {0}
- name: '@actions/upload-artifact Zed-x86_64.dmg'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed-x86_64.dmg
path: target/x86_64-apple-darwin/release/Zed-x86_64.dmg
if-no-files-found: error
- name: '@actions/upload-artifact zed-remote-server-macos-x86_64.gz'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: zed-remote-server-macos-x86_64.gz
path: target/zed-remote-server-macos-x86_64.gz
if-no-files-found: error
timeout-minutes: 60
bundle_windows_aarch64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: self-32vcpu-windows-2022
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: http://timestamp.acs.microsoft.com
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: run_bundling::bundle_windows::bundle_windows
run: script/bundle-windows.ps1 -Architecture aarch64
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
- name: '@actions/upload-artifact Zed-aarch64.exe'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed-aarch64.exe
path: target/Zed-aarch64.exe
if-no-files-found: error
timeout-minutes: 60
bundle_windows_x86_64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: self-32vcpu-windows-2022
env:
CARGO_INCREMENTAL: 0
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: http://timestamp.acs.microsoft.com
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_sentry
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: run_bundling::bundle_windows::bundle_windows
run: script/bundle-windows.ps1 -Architecture x86_64
shell: pwsh
working-directory: ${{ env.ZED_WORKSPACE }}
- name: '@actions/upload-artifact Zed-x86_64.exe'
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: Zed-x86_64.exe
path: target/Zed-x86_64.exe
if-no-files-found: error
timeout-minutes: 60
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.ref }}
cancel-in-progress: true

569
.github/workflows/run_tests.yml vendored Normal file
View File

@@ -0,0 +1,569 @@
# Generated from xtask::workflows::run_tests
# Rebuild with `cargo xtask workflows`.
name: run_tests
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: '1'
CARGO_INCREMENTAL: '0'
on:
pull_request:
branches:
- '**'
push:
branches:
- main
- v[0-9]+.[0-9]+.x
jobs:
orchestrate:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
fetch-depth: ${{ github.ref == 'refs/heads/main' && 2 || 350 }}
- id: filter
name: filter
run: |
if [ -z "$GITHUB_BASE_REF" ]; then
echo "Not in a PR context (i.e., push to main/stable/preview)"
COMPARE_REV="$(git rev-parse HEAD~1)"
else
echo "In a PR context comparing to pull_request.base.ref"
git fetch origin "$GITHUB_BASE_REF" --depth=350
COMPARE_REV="$(git merge-base "origin/${GITHUB_BASE_REF}" HEAD)"
fi
CHANGED_FILES="$(git diff --name-only "$COMPARE_REV" ${{ github.sha }})"
check_pattern() {
local output_name="$1"
local pattern="$2"
local grep_arg="$3"
echo "$CHANGED_FILES" | grep "$grep_arg" "$pattern" && \
echo "${output_name}=true" >> "$GITHUB_OUTPUT" || \
echo "${output_name}=false" >> "$GITHUB_OUTPUT"
}
check_pattern "run_action_checks" '^\.github/(workflows/|actions/|actionlint.yml)|tooling/xtask|script/' -qP
check_pattern "run_docs" '^docs/' -qP
check_pattern "run_licenses" '^(Cargo.lock|script/.*licenses)' -qP
check_pattern "run_nix" '^(nix/|flake\.|Cargo\.|rust-toolchain.toml|\.cargo/config.toml)' -qP
check_pattern "run_tests" '^(docs/|script/update_top_ranking_issues/|\.github/(ISSUE_TEMPLATE|workflows/(?!run_tests)))' -qvP
shell: bash -euxo pipefail {0}
outputs:
run_action_checks: ${{ steps.filter.outputs.run_action_checks }}
run_docs: ${{ steps.filter.outputs.run_docs }}
run_licenses: ${{ steps.filter.outputs.run_licenses }}
run_nix: ${{ steps.filter.outputs.run_nix }}
run_tests: ${{ steps.filter.outputs.run_tests }}
check_style:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-4x8-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::cache_rust_dependencies_namespace
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: steps::setup_pnpm
uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2
with:
version: '9'
- name: ./script/prettier
run: ./script/prettier
shell: bash -euxo pipefail {0}
- name: ./script/check-todos
run: ./script/check-todos
shell: bash -euxo pipefail {0}
- name: ./script/check-keymaps
run: ./script/check-keymaps
shell: bash -euxo pipefail {0}
- name: run_tests::check_style::check_for_typos
uses: crate-ci/typos@80c8a4945eec0f6d464eaf9e65ed98ef085283d1
with:
config: ./typos.toml
- name: steps::cargo_fmt
run: cargo fmt --all -- --check
shell: bash -euxo pipefail {0}
timeout-minutes: 60
run_tests_windows:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_tests == 'true'
runs-on: self-32vcpu-windows-2022
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
New-Item -ItemType Directory -Path "./../.cargo" -Force
Copy-Item -Path "./.cargo/ci-config.toml" -Destination "./../.cargo/config.toml"
shell: pwsh
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
run: ./script/clippy.ps1
shell: pwsh
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: pwsh
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than.ps1 250
shell: pwsh
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast --failure-output immediate-final
shell: pwsh
- name: steps::cleanup_cargo_config
if: always()
run: |
Remove-Item -Recurse -Path "./../.cargo" -Force -ErrorAction SilentlyContinue
shell: pwsh
timeout-minutes: 60
run_tests_linux:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_tests == 'true'
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: steps::cache_rust_dependencies_namespace
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 250
shell: bash -euxo pipefail {0}
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast --failure-output immediate-final
shell: bash -euxo pipefail {0}
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
timeout-minutes: 60
run_tests_mac:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_tests == 'true'
runs-on: self-mini-macos
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::setup_node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}
- name: steps::cargo_nextest
run: cargo nextest run --workspace --no-fail-fast --failure-output immediate-final
shell: bash -euxo pipefail {0}
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
timeout-minutes: 60
doctests:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_tests == 'true'
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::cache_rust_dependencies_namespace
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- id: run_doctests
name: run_tests::doctests::run_doctests
run: |
cargo test --workspace --doc --no-fail-fast
shell: bash -euxo pipefail {0}
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
timeout-minutes: 60
check_workspace_binaries:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_tests == 'true'
runs-on: namespace-profile-8x16-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: steps::cache_rust_dependencies_namespace
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: cargo build -p collab
run: cargo build -p collab
shell: bash -euxo pipefail {0}
- name: cargo build --workspace --bins --examples
run: cargo build --workspace --bins --examples
shell: bash -euxo pipefail {0}
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
timeout-minutes: 60
check_postgres_and_protobuf_migrations:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_tests == 'true'
runs-on: self-mini-macos
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
fetch-depth: 0
- name: run_tests::check_postgres_and_protobuf_migrations::remove_untracked_files
run: git clean -df
shell: bash -euxo pipefail {0}
- name: run_tests::check_postgres_and_protobuf_migrations::ensure_fresh_merge
run: |
if [ -z "$GITHUB_BASE_REF" ];
then
echo "BUF_BASE_BRANCH=$(git merge-base origin/main HEAD)" >> "$GITHUB_ENV"
else
git checkout -B temp
git merge -q "origin/$GITHUB_BASE_REF" -m "merge main into temp"
echo "BUF_BASE_BRANCH=$GITHUB_BASE_REF" >> "$GITHUB_ENV"
fi
shell: bash -euxo pipefail {0}
- name: run_tests::check_postgres_and_protobuf_migrations::bufbuild_setup_action
uses: bufbuild/buf-setup-action@v1
with:
version: v1.29.0
- name: run_tests::check_postgres_and_protobuf_migrations::bufbuild_breaking_action
uses: bufbuild/buf-breaking-action@v1
with:
input: crates/proto/proto/
against: https://github.com/${GITHUB_REPOSITORY}.git#branch=${BUF_BASE_BRANCH},subdir=crates/proto/proto/
timeout-minutes: 60
check_dependencies:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_tests == 'true'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::cache_rust_dependencies_namespace
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: run_tests::check_dependencies::install_cargo_machete
uses: clechasseur/rs-cargo@8435b10f6e71c2e3d4d3b7573003a8ce4bfc6386
with:
command: install
args: cargo-machete@0.7.0
- name: run_tests::check_dependencies::run_cargo_machete
uses: clechasseur/rs-cargo@8435b10f6e71c2e3d4d3b7573003a8ce4bfc6386
with:
command: machete
- name: run_tests::check_dependencies::check_cargo_lock
run: cargo update --locked --workspace
shell: bash -euxo pipefail {0}
- name: run_tests::check_dependencies::check_vulnerable_dependencies
if: github.event_name == 'pull_request'
uses: actions/dependency-review-action@67d4f4bd7a9b17a0db54d2a7519187c65e339de8
with:
license-check: false
timeout-minutes: 60
check_docs:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_docs == 'true'
runs-on: namespace-profile-8x16-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::cache_rust_dependencies_namespace
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: run_tests::check_docs::lychee_link_check
uses: lycheeverse/lychee-action@82202e5e9c2f4ef1a55a3d02563e1cb6041e5332
with:
args: --no-progress --exclude '^http' './docs/src/**/*'
fail: true
jobSummary: false
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: run_tests::check_docs::install_mdbook
uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08
with:
mdbook-version: 0.4.37
- name: run_tests::check_docs::build_docs
run: |
mkdir -p target/deploy
mdbook build ./docs --dest-dir=../target/deploy/docs/
shell: bash -euxo pipefail {0}
- name: run_tests::check_docs::lychee_link_check
uses: lycheeverse/lychee-action@82202e5e9c2f4ef1a55a3d02563e1cb6041e5332
with:
args: --no-progress --exclude '^http' 'target/deploy/docs'
fail: true
jobSummary: false
timeout-minutes: 60
check_licenses:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_licenses == 'true'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::cache_rust_dependencies_namespace
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: ./script/check-licenses
run: ./script/check-licenses
shell: bash -euxo pipefail {0}
- name: ./script/generate-licenses
run: ./script/generate-licenses
shell: bash -euxo pipefail {0}
check_scripts:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_action_checks == 'true'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: run_tests::check_scripts::run_shellcheck
run: ./script/shellcheck-scripts error
shell: bash -euxo pipefail {0}
- id: get_actionlint
name: run_tests::check_scripts::download_actionlint
run: bash <(curl https://raw.githubusercontent.com/rhysd/actionlint/main/scripts/download-actionlint.bash)
shell: bash -euxo pipefail {0}
- name: run_tests::check_scripts::run_actionlint
run: |
${{ steps.get_actionlint.outputs.executable }} -color
shell: bash -euxo pipefail {0}
- name: run_tests::check_scripts::check_xtask_workflows
run: |
cargo xtask workflows
if ! git diff --exit-code .github; then
echo "Error: .github directory has uncommitted changes after running 'cargo xtask workflows'"
echo "Please run 'cargo xtask workflows' locally and commit the changes"
exit 1
fi
shell: bash -euxo pipefail {0}
timeout-minutes: 60
build_nix_linux_x86_64:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_nix == 'true'
runs-on: namespace-profile-32x64-ubuntu-2004
env:
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
GIT_LFS_SKIP_SMUDGE: '1'
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: nix_build::build_nix::install_nix
uses: cachix/install-nix-action@02a151ada4993995686f9ed4f1be7cfbb229e56f
with:
github_access_token: ${{ secrets.GITHUB_TOKEN }}
- name: nix_build::build_nix::cachix_action
uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad
with:
name: zed
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
cachixArgs: -v
pushFilter: -zed-editor-[0-9.]*-nightly
- name: nix_build::build_nix::build
run: nix build .#debug -L --accept-flake-config
shell: bash -euxo pipefail {0}
timeout-minutes: 60
continue-on-error: true
build_nix_mac_aarch64:
needs:
- orchestrate
if: needs.orchestrate.outputs.run_nix == 'true'
runs-on: self-mini-macos
env:
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
GIT_LFS_SKIP_SMUDGE: '1'
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: nix_build::build_nix::set_path
run: |
echo "/nix/var/nix/profiles/default/bin" >> "$GITHUB_PATH"
echo "/Users/administrator/.nix-profile/bin" >> "$GITHUB_PATH"
shell: bash -euxo pipefail {0}
- name: nix_build::build_nix::cachix_action
uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad
with:
name: zed
authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
cachixArgs: -v
pushFilter: -zed-editor-[0-9.]*-nightly
- name: nix_build::build_nix::build
run: nix build .#debug -L --accept-flake-config
shell: bash -euxo pipefail {0}
- name: nix_build::build_nix::limit_store
run: |-
if [ "$(du -sm /nix/store | cut -f1)" -gt 50000 ]; then
nix-collect-garbage -d || true
fi
shell: bash -euxo pipefail {0}
timeout-minutes: 60
continue-on-error: true
tests_pass:
needs:
- orchestrate
- check_style
- run_tests_windows
- run_tests_linux
- run_tests_mac
- doctests
- check_workspace_binaries
- check_postgres_and_protobuf_migrations
- check_dependencies
- check_docs
- check_licenses
- check_scripts
- build_nix_linux_x86_64
- build_nix_mac_aarch64
if: github.repository_owner == 'zed-industries' && always()
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: run_tests::tests_pass
run: |
set +x
EXIT_CODE=0
check_result() {
echo "* $1: $2"
if [[ "$2" != "skipped" && "$2" != "success" ]]; then EXIT_CODE=1; fi
}
check_result "orchestrate" "${{ needs.orchestrate.result }}"
check_result "check_style" "${{ needs.check_style.result }}"
check_result "run_tests_windows" "${{ needs.run_tests_windows.result }}"
check_result "run_tests_linux" "${{ needs.run_tests_linux.result }}"
check_result "run_tests_mac" "${{ needs.run_tests_mac.result }}"
check_result "doctests" "${{ needs.doctests.result }}"
check_result "check_workspace_binaries" "${{ needs.check_workspace_binaries.result }}"
check_result "check_postgres_and_protobuf_migrations" "${{ needs.check_postgres_and_protobuf_migrations.result }}"
check_result "check_dependencies" "${{ needs.check_dependencies.result }}"
check_result "check_docs" "${{ needs.check_docs.result }}"
check_result "check_licenses" "${{ needs.check_licenses.result }}"
check_result "check_scripts" "${{ needs.check_scripts.result }}"
check_result "build_nix_linux_x86_64" "${{ needs.build_nix_linux_x86_64.result }}"
check_result "build_nix_mac_aarch64" "${{ needs.build_nix_mac_aarch64.result }}"
exit $EXIT_CODE
shell: bash -euxo pipefail {0}
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true

63
.github/workflows/run_unit_evals.yml vendored Normal file
View File

@@ -0,0 +1,63 @@
# Generated from xtask::workflows::run_agent_evals
# Rebuild with `cargo xtask workflows`.
name: run_agent_evals
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: '0'
RUST_BACKTRACE: '1'
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
on:
schedule:
- cron: 47 1 * * 2
workflow_dispatch: {}
jobs:
unit_evals:
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::cache_rust_dependencies_namespace
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 250
shell: bash -euxo pipefail {0}
- name: ./script/run-unit-evals
run: ./script/run-unit-evals
shell: bash -euxo pipefail {0}
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
- name: run_agent_evals::unit_evals::send_failure_to_slack
if: ${{ failure() }}
uses: slackapi/slack-github-action@b0fa283ad8fea605de13dc3f449259339835fc52
with:
method: chat.postMessage
token: ${{ secrets.SLACK_APP_ZED_UNIT_EVALS_BOT_TOKEN }}
payload: |
channel: C04UDRNNJFQ
text: "Unit Evals Failed: https://github.com/zed-industries/zed/actions/runs/${{ github.run_id }}"
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true

View File

@@ -1,21 +0,0 @@
name: Script
on:
pull_request:
paths:
- "script/**"
push:
branches:
- main
jobs:
shellcheck:
name: "ShellCheck Scripts"
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- name: Shellcheck ./scripts
run: |
./script/shellcheck-scripts error

View File

@@ -1,86 +0,0 @@
name: Run Unit Evals
on:
schedule:
# GitHub might drop jobs at busy times, so we choose a random time in the middle of the night.
- cron: "47 1 * * 2"
workflow_dispatch:
concurrency:
# Allow only one workflow per any non-`main` branch.
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
jobs:
unit_evals:
if: github.repository_owner == 'zed-industries'
timeout-minutes: 60
name: Run unit evals
runs-on:
- namespace-profile-16x32-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"
- name: Install Linux dependencies
run: ./script/linux
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: Install Rust
shell: bash -euxo pipefail {0}
run: |
cargo install cargo-nextest --locked
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
with:
node-version: "18"
- name: Limit target directory size
shell: bash -euxo pipefail {0}
run: script/clear-target-dir-if-larger-than 100
- name: Run unit evals
shell: bash -euxo pipefail {0}
run: cargo nextest run --workspace --no-fail-fast --features eval --no-capture -E 'test(::eval_)'
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
- name: Send failure message to Slack channel if needed
if: ${{ failure() }}
uses: slackapi/slack-github-action@b0fa283ad8fea605de13dc3f449259339835fc52
with:
method: chat.postMessage
token: ${{ secrets.SLACK_APP_ZED_UNIT_EVALS_BOT_TOKEN }}
payload: |
channel: C04UDRNNJFQ
text: "Unit Evals Failed: https://github.com/zed-industries/zed/actions/runs/${{ github.run_id }}"
# Since the Linux runner is not stateful, in theory there is no need to do this cleanup.
# But, to avoid potential issues in the future if we choose to use a stateful Linux runner and forget to add code
# to clean up the config file, I've included the cleanup code here as a precaution.
# While it's not strictly necessary at this moment, I believe it's better to err on the side of caution.
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo

45
Cargo.lock generated
View File

@@ -1339,6 +1339,7 @@ dependencies = [
"settings",
"smol",
"tempfile",
"util",
"which 6.0.3",
"workspace",
]
@@ -1350,6 +1351,7 @@ dependencies = [
"anyhow",
"log",
"simplelog",
"tempfile",
"windows 0.61.3",
"winresource",
]
@@ -4528,12 +4530,15 @@ dependencies = [
"fs",
"futures 0.3.31",
"gpui",
"http_client",
"json_dotpath",
"language",
"log",
"node_runtime",
"paths",
"serde",
"serde_json",
"settings",
"smol",
"task",
"util",
@@ -4932,6 +4937,7 @@ dependencies = [
"editor",
"gpui",
"indoc",
"itertools 0.14.0",
"language",
"log",
"lsp",
@@ -5832,8 +5838,6 @@ name = "extension"
version = "0.1.0"
dependencies = [
"anyhow",
"async-compression",
"async-tar",
"async-trait",
"collections",
"dap",
@@ -6957,7 +6961,7 @@ dependencies = [
[[package]]
name = "gh-workflow"
version = "0.8.0"
source = "git+https://github.com/zed-industries/gh-workflow?rev=a0b197dd77c0ed2390c150e601f9d4f9a0ca7105#a0b197dd77c0ed2390c150e601f9d4f9a0ca7105"
source = "git+https://github.com/zed-industries/gh-workflow?rev=3eaa84abca0778eb54272f45a312cb24f9a0b435#3eaa84abca0778eb54272f45a312cb24f9a0b435"
dependencies = [
"async-trait",
"derive_more 2.0.1",
@@ -6974,7 +6978,7 @@ dependencies = [
[[package]]
name = "gh-workflow-macros"
version = "0.8.0"
source = "git+https://github.com/zed-industries/gh-workflow?rev=a0b197dd77c0ed2390c150e601f9d4f9a0ca7105#a0b197dd77c0ed2390c150e601f9d4f9a0ca7105"
source = "git+https://github.com/zed-industries/gh-workflow?rev=3eaa84abca0778eb54272f45a312cb24f9a0b435#3eaa84abca0778eb54272f45a312cb24f9a0b435"
dependencies = [
"heck 0.5.0",
"quote",
@@ -7074,6 +7078,7 @@ dependencies = [
"serde_json",
"settings",
"url",
"urlencoding",
"util",
]
@@ -7112,6 +7117,8 @@ dependencies = [
"picker",
"pretty_assertions",
"project",
"recent_projects",
"remote",
"schemars 1.0.4",
"serde",
"serde_json",
@@ -7263,6 +7270,7 @@ dependencies = [
"async-task",
"backtrace",
"bindgen 0.71.1",
"bitflags 2.9.4",
"blade-graphics",
"blade-macros",
"blade-util",
@@ -7342,6 +7350,7 @@ dependencies = [
"wayland-cursor",
"wayland-protocols 0.31.2",
"wayland-protocols-plasma",
"wayland-protocols-wlr",
"windows 0.61.3",
"windows-core 0.61.2",
"windows-numerics",
@@ -13123,7 +13132,6 @@ dependencies = [
"paths",
"rope",
"serde",
"serde_json",
"text",
"util",
"uuid",
@@ -16494,7 +16502,7 @@ dependencies = [
"editor",
"file_icons",
"gpui",
"multi_buffer",
"language",
"ui",
"workspace",
]
@@ -17439,6 +17447,7 @@ dependencies = [
"anyhow",
"auto_update",
"call",
"channel",
"chrono",
"client",
"cloud_llm_client",
@@ -18040,7 +18049,7 @@ dependencies = [
[[package]]
name = "tree-sitter-gomod"
version = "1.1.1"
source = "git+https://github.com/camdencheek/tree-sitter-go-mod?rev=6efb59652d30e0e9cd5f3b3a669afd6f1a926d3c#6efb59652d30e0e9cd5f3b3a669afd6f1a926d3c"
source = "git+https://github.com/camdencheek/tree-sitter-go-mod?rev=2e886870578eeba1927a2dc4bd2e2b3f598c5f9a#2e886870578eeba1927a2dc4bd2e2b3f598c5f9a"
dependencies = [
"cc",
"tree-sitter-language",
@@ -19489,6 +19498,19 @@ dependencies = [
"wayland-scanner",
]
[[package]]
name = "wayland-protocols-wlr"
version = "0.3.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "efd94963ed43cf9938a090ca4f7da58eb55325ec8200c3848963e98dc25b78ec"
dependencies = [
"bitflags 2.9.4",
"wayland-backend",
"wayland-client",
"wayland-protocols 0.32.9",
"wayland-scanner",
]
[[package]]
name = "wayland-scanner"
version = "0.31.7"
@@ -21113,7 +21135,7 @@ dependencies = [
[[package]]
name = "zed"
version = "0.211.0"
version = "0.212.0"
dependencies = [
"acp_tools",
"activity_indicator",
@@ -21205,6 +21227,7 @@ dependencies = [
"project_symbols",
"prompt_store",
"proto",
"rayon",
"recent_projects",
"release_channel",
"remote",
@@ -21669,6 +21692,7 @@ dependencies = [
name = "zeta2_tools"
version = "0.1.0"
dependencies = [
"anyhow",
"chrono",
"clap",
"client",
@@ -21686,6 +21710,7 @@ dependencies = [
"ordered-float 2.10.1",
"pretty_assertions",
"project",
"regex-syntax",
"serde",
"serde_json",
"settings",
@@ -21717,6 +21742,7 @@ dependencies = [
"futures 0.3.31",
"gpui",
"gpui_tokio",
"indoc",
"language",
"language_extension",
"language_model",
@@ -21727,8 +21753,10 @@ dependencies = [
"ordered-float 2.10.1",
"paths",
"polars",
"pretty_assertions",
"project",
"prompt_store",
"pulldown-cmark 0.12.2",
"release_channel",
"reqwest_client",
"serde",
@@ -21738,6 +21766,7 @@ dependencies = [
"smol",
"soa-rs",
"terminal_view",
"toml 0.8.23",
"util",
"watch",
"zeta",

View File

@@ -508,7 +508,7 @@ fork = "0.2.0"
futures = "0.3"
futures-batch = "0.6.1"
futures-lite = "1.13"
gh-workflow = { git = "https://github.com/zed-industries/gh-workflow", rev = "a0b197dd77c0ed2390c150e601f9d4f9a0ca7105" }
gh-workflow = { git = "https://github.com/zed-industries/gh-workflow", rev = "3eaa84abca0778eb54272f45a312cb24f9a0b435" }
git2 = { version = "0.20.1", default-features = false }
globset = "0.4"
handlebars = "4.3"
@@ -680,7 +680,7 @@ tree-sitter-elixir = "0.3"
tree-sitter-embedded-template = "0.23.0"
tree-sitter-gitcommit = { git = "https://github.com/zed-industries/tree-sitter-git-commit", rev = "88309716a69dd13ab83443721ba6e0b491d37ee9" }
tree-sitter-go = "0.23"
tree-sitter-go-mod = { git = "https://github.com/camdencheek/tree-sitter-go-mod", rev = "6efb59652d30e0e9cd5f3b3a669afd6f1a926d3c", package = "tree-sitter-gomod" }
tree-sitter-go-mod = { git = "https://github.com/camdencheek/tree-sitter-go-mod", rev = "2e886870578eeba1927a2dc4bd2e2b3f598c5f9a", package = "tree-sitter-gomod" }
tree-sitter-gowork = { git = "https://github.com/zed-industries/tree-sitter-go-work", rev = "acb0617bf7f4fda02c6217676cc64acb89536dc7" }
tree-sitter-heex = { git = "https://github.com/zed-industries/tree-sitter-heex", rev = "1dd45142fbb05562e35b2040c6129c9bca346592" }
tree-sitter-html = "0.23"

View File

@@ -8,100 +8,110 @@
; to other areas too.
<all>
= @cole-miller
= @ConradIrwin
= @maxdeviant
= @SomeoneToIgnore
= @probably-neb
= @danilo-leal
= @Veykril
= @dinocosta
= @HactarCE
= @kubkon
= @maxdeviant
= @p1n3appl3
= @dinocosta
= @smitbarmase
= @cole-miller
vim
= @ConradIrwin
= @probably-neb
= @p1n3appl3
= @dinocosta
gpui
= @mikayla-maki
git
= @cole-miller
= @danilo-leal
linux
= @dvdsk
= @smitbarmase
= @p1n3appl3
= @cole-miller
windows
= @reflectronic
= @localcc
pickers
= @p1n3appl3
= @dvdsk
= @SomeoneToIgnore
= @Veykril
ai
= @benbrandt
= @bennetbo
= @danilo-leal
= @rtfeldman
audio
= @dvdsk
helix
= @kubkon
terminal
= @kubkon
= @Veykril
debugger
= @kubkon
= @osiewicz
= @Anthony-Eid
extension
= @kubkon
settings_ui
= @probably-neb
= @danilo-leal
= @Anthony-Eid
crashes
= @p1n3appl3
= @Veykril
ai
= @rtfeldman
= @danilo-leal
= @benbrandt
debugger
= @Anthony-Eid
= @kubkon
= @osiewicz
design
= @danilo-leal
docs
= @probably-neb
extension
= @kubkon
git
= @cole-miller
= @danilo-leal
gpui
= @Anthony-Eid
= @cameron1024
= @mikayla-maki
= @probably-neb
helix
= @kubkon
languages
= @osiewicz
= @probably-neb
= @smitbarmase
= @SomeoneToIgnore
= @Veykril
linux
= @cole-miller
= @dvdsk
= @p1n3appl3
= @probably-neb
= @smitbarmase
lsp
= @osiewicz
= @smitbarmase
= @SomeoneToIgnore
= @Veykril
multi_buffer
= @Veykril
= @SomeoneToIgnore
lsp
= @osiewicz
= @Veykril
= @smitbarmase
= @SomeoneToIgnore
languages
= @osiewicz
= @Veykril
= @smitbarmase
pickers
= @dvdsk
= @p1n3appl3
= @SomeoneToIgnore
project_panel
= @smitbarmase
settings_ui
= @Anthony-Eid
= @danilo-leal
= @probably-neb
tasks
= @SomeoneToIgnore
= @Veykril
terminal
= @kubkon
= @Veykril
vim
= @ConradIrwin
= @dinocosta
= @p1n3appl3
= @probably-neb
windows
= @localcc
= @reflectronic

View File

@@ -0,0 +1,4 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M11.3335 13.3333L8.00017 10L4.66685 13.3333" stroke="black" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M11.3335 2.66669L8.00017 6.00002L4.66685 2.66669" stroke="black" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

After

Width:  |  Height:  |  Size: 382 B

5
assets/icons/link.svg Normal file
View File

@@ -0,0 +1,5 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M6.2 11H5C4.20435 11 3.44129 10.6839 2.87868 10.1213C2.31607 9.55871 2 8.79565 2 8C2 7.20435 2.31607 6.44129 2.87868 5.87868C3.44129 5.31607 4.20435 5 5 5H6.2" stroke="#C4CAD4" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M9.80005 5H11C11.7957 5 12.5588 5.31607 13.1214 5.87868C13.684 6.44129 14 7.20435 14 8C14 8.79565 13.684 9.55871 13.1214 10.1213C12.5588 10.6839 11.7957 11 11 11H9.80005" stroke="#C4CAD4" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M5.6001 8H10.4001" stroke="#C4CAD4" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

After

Width:  |  Height:  |  Size: 735 B

View File

@@ -43,7 +43,8 @@
"f11": "zed::ToggleFullScreen",
"ctrl-alt-z": "edit_prediction::RateCompletions",
"ctrl-alt-shift-i": "edit_prediction::ToggleMenu",
"ctrl-alt-l": "lsp_tool::ToggleMenu"
"ctrl-alt-l": "lsp_tool::ToggleMenu",
"ctrl-alt-.": "project_panel::ToggleHideHidden"
}
},
{
@@ -235,12 +236,13 @@
"ctrl-alt-n": "agent::NewTextThread",
"ctrl-shift-h": "agent::OpenHistory",
"ctrl-alt-c": "agent::OpenSettings",
"ctrl-alt-p": "agent::OpenRulesLibrary",
"ctrl-alt-p": "agent::ManageProfiles",
"ctrl-alt-l": "agent::OpenRulesLibrary",
"ctrl-i": "agent::ToggleProfileSelector",
"ctrl-alt-/": "agent::ToggleModelSelector",
"ctrl-shift-a": "agent::ToggleContextPicker",
"ctrl-shift-j": "agent::ToggleNavigationMenu",
"ctrl-shift-i": "agent::ToggleOptionsMenu",
"ctrl-alt-i": "agent::ToggleOptionsMenu",
"ctrl-alt-shift-n": "agent::ToggleNewThreadMenu",
"shift-alt-escape": "agent::ExpandMessageEditor",
"ctrl->": "agent::AddSelectionToThread",
@@ -407,6 +409,7 @@
"bindings": {
"escape": "project_search::ToggleFocus",
"shift-find": "search::FocusSearch",
"shift-enter": "project_search::ToggleAllSearchResults",
"ctrl-shift-f": "search::FocusSearch",
"ctrl-shift-h": "search::ToggleReplace",
"alt-ctrl-g": "search::ToggleRegex",
@@ -479,6 +482,7 @@
"alt-w": "search::ToggleWholeWord",
"alt-find": "project_search::ToggleFilters",
"alt-ctrl-f": "project_search::ToggleFilters",
"shift-enter": "project_search::ToggleAllSearchResults",
"ctrl-alt-shift-r": "search::ToggleRegex",
"ctrl-alt-shift-x": "search::ToggleRegex",
"alt-r": "search::ToggleRegex",
@@ -731,6 +735,14 @@
"tab": "editor::ComposeCompletion"
}
},
{
"context": "Editor && in_snippet",
"use_key_equivalents": true,
"bindings": {
"alt-right": "editor::NextSnippetTabstop",
"alt-left": "editor::PreviousSnippetTabstop"
}
},
// Bindings for accepting edit predictions
//
// alt-l is provided as an alternative to tab/alt-tab. and will be displayed in the UI. This is
@@ -933,6 +945,7 @@
"ctrl-g ctrl-g": "git::Fetch",
"ctrl-g up": "git::Push",
"ctrl-g down": "git::Pull",
"ctrl-g shift-down": "git::PullRebase",
"ctrl-g shift-up": "git::ForcePush",
"ctrl-g d": "git::Diff",
"ctrl-g backspace": "git::RestoreTrackedFiles",
@@ -1012,7 +1025,8 @@
"context": "CollabPanel",
"bindings": {
"alt-up": "collab_panel::MoveChannelUp",
"alt-down": "collab_panel::MoveChannelDown"
"alt-down": "collab_panel::MoveChannelDown",
"alt-enter": "collab_panel::OpenSelectedChannelNotes"
}
},
{
@@ -1126,7 +1140,8 @@
"ctrl-shift-space": "terminal::ToggleViMode",
"ctrl-shift-r": "terminal::RerunTask",
"ctrl-alt-r": "terminal::RerunTask",
"alt-t": "terminal::RerunTask"
"alt-t": "terminal::RerunTask",
"ctrl-shift-5": "pane::SplitRight"
}
},
{
@@ -1242,6 +1257,14 @@
"ctrl-shift-enter": "workspace::OpenWithSystem"
}
},
{
"context": "GitWorktreeSelector || (GitWorktreeSelector > Picker > Editor)",
"use_key_equivalents": true,
"bindings": {
"ctrl-shift-space": "git::WorktreeFromDefaultOnWindow",
"ctrl-space": "git::WorktreeFromDefault"
}
},
{
"context": "SettingsWindow",
"use_key_equivalents": true,
@@ -1298,5 +1321,12 @@
"ctrl-enter up": "dev::Zeta2RatePredictionPositive",
"ctrl-enter down": "dev::Zeta2RatePredictionNegative"
}
},
{
"context": "Zeta2Context > Editor",
"bindings": {
"alt-left": "dev::Zeta2ContextGoBack",
"alt-right": "dev::Zeta2ContextGoForward"
}
}
]

View File

@@ -49,7 +49,8 @@
"ctrl-cmd-f": "zed::ToggleFullScreen",
"ctrl-cmd-z": "edit_prediction::RateCompletions",
"ctrl-cmd-i": "edit_prediction::ToggleMenu",
"ctrl-cmd-l": "lsp_tool::ToggleMenu"
"ctrl-cmd-l": "lsp_tool::ToggleMenu",
"cmd-alt-.": "project_panel::ToggleHideHidden"
}
},
{
@@ -274,12 +275,13 @@
"cmd-alt-n": "agent::NewTextThread",
"cmd-shift-h": "agent::OpenHistory",
"cmd-alt-c": "agent::OpenSettings",
"cmd-alt-p": "agent::OpenRulesLibrary",
"cmd-alt-l": "agent::OpenRulesLibrary",
"cmd-alt-p": "agent::ManageProfiles",
"cmd-i": "agent::ToggleProfileSelector",
"cmd-alt-/": "agent::ToggleModelSelector",
"cmd-shift-a": "agent::ToggleContextPicker",
"cmd-shift-j": "agent::ToggleNavigationMenu",
"cmd-shift-i": "agent::ToggleOptionsMenu",
"cmd-alt-m": "agent::ToggleOptionsMenu",
"cmd-alt-shift-n": "agent::ToggleNewThreadMenu",
"shift-alt-escape": "agent::ExpandMessageEditor",
"cmd->": "agent::AddSelectionToThread",
@@ -468,6 +470,7 @@
"bindings": {
"escape": "project_search::ToggleFocus",
"cmd-shift-j": "project_search::ToggleFilters",
"shift-enter": "project_search::ToggleAllSearchResults",
"cmd-shift-f": "search::FocusSearch",
"cmd-shift-h": "search::ToggleReplace",
"alt-cmd-g": "search::ToggleRegex",
@@ -496,6 +499,7 @@
"bindings": {
"escape": "project_search::ToggleFocus",
"cmd-shift-j": "project_search::ToggleFilters",
"shift-enter": "project_search::ToggleAllSearchResults",
"cmd-shift-h": "search::ToggleReplace",
"alt-cmd-g": "search::ToggleRegex",
"alt-cmd-x": "search::ToggleRegex"
@@ -801,6 +805,14 @@
"tab": "editor::ComposeCompletion"
}
},
{
"context": "Editor && in_snippet",
"use_key_equivalents": true,
"bindings": {
"alt-right": "editor::NextSnippetTabstop",
"alt-left": "editor::PreviousSnippetTabstop"
}
},
{
"context": "Editor && edit_prediction",
"bindings": {
@@ -1026,6 +1038,7 @@
"ctrl-g ctrl-g": "git::Fetch",
"ctrl-g up": "git::Push",
"ctrl-g down": "git::Pull",
"ctrl-g shift-down": "git::PullRebase",
"ctrl-g shift-up": "git::ForcePush",
"ctrl-g d": "git::Diff",
"ctrl-g backspace": "git::RestoreTrackedFiles",
@@ -1077,7 +1090,8 @@
"use_key_equivalents": true,
"bindings": {
"alt-up": "collab_panel::MoveChannelUp",
"alt-down": "collab_panel::MoveChannelDown"
"alt-down": "collab_panel::MoveChannelDown",
"alt-enter": "collab_panel::OpenSelectedChannelNotes"
}
},
{
@@ -1209,6 +1223,7 @@
"ctrl-alt-down": "pane::SplitDown",
"ctrl-alt-left": "pane::SplitLeft",
"ctrl-alt-right": "pane::SplitRight",
"cmd-d": "pane::SplitRight",
"cmd-alt-r": "terminal::RerunTask"
}
},
@@ -1347,6 +1362,14 @@
"ctrl-shift-enter": "workspace::OpenWithSystem"
}
},
{
"context": "GitWorktreeSelector || (GitWorktreeSelector > Picker > Editor)",
"use_key_equivalents": true,
"bindings": {
"ctrl-shift-space": "git::WorktreeFromDefaultOnWindow",
"ctrl-space": "git::WorktreeFromDefault"
}
},
{
"context": "SettingsWindow",
"use_key_equivalents": true,
@@ -1404,5 +1427,12 @@
"cmd-enter up": "dev::Zeta2RatePredictionPositive",
"cmd-enter down": "dev::Zeta2RatePredictionNegative"
}
},
{
"context": "Zeta2Context > Editor",
"bindings": {
"alt-left": "dev::Zeta2ContextGoBack",
"alt-right": "dev::Zeta2ContextGoForward"
}
}
]

View File

@@ -41,7 +41,8 @@
"shift-f11": "debugger::StepOut",
"f11": "zed::ToggleFullScreen",
"ctrl-shift-i": "edit_prediction::ToggleMenu",
"shift-alt-l": "lsp_tool::ToggleMenu"
"shift-alt-l": "lsp_tool::ToggleMenu",
"ctrl-alt-.": "project_panel::ToggleHideHidden"
}
},
{
@@ -236,12 +237,13 @@
"shift-alt-n": "agent::NewTextThread",
"ctrl-shift-h": "agent::OpenHistory",
"shift-alt-c": "agent::OpenSettings",
"shift-alt-p": "agent::OpenRulesLibrary",
"shift-alt-l": "agent::OpenRulesLibrary",
"shift-alt-p": "agent::ManageProfiles",
"ctrl-i": "agent::ToggleProfileSelector",
"shift-alt-/": "agent::ToggleModelSelector",
"ctrl-shift-a": "agent::ToggleContextPicker",
"ctrl-shift-j": "agent::ToggleNavigationMenu",
"ctrl-shift-i": "agent::ToggleOptionsMenu",
"ctrl-alt-i": "agent::ToggleOptionsMenu",
// "ctrl-shift-alt-n": "agent::ToggleNewThreadMenu",
"shift-alt-escape": "agent::ExpandMessageEditor",
"ctrl-shift-.": "agent::AddSelectionToThread",
@@ -488,6 +490,7 @@
"alt-c": "search::ToggleCaseSensitive",
"alt-w": "search::ToggleWholeWord",
"alt-f": "project_search::ToggleFilters",
"shift-enter": "project_search::ToggleAllSearchResults",
"alt-r": "search::ToggleRegex",
// "ctrl-shift-alt-x": "search::ToggleRegex",
"ctrl-k shift-enter": "pane::TogglePinTab"
@@ -736,6 +739,14 @@
"tab": "editor::ComposeCompletion"
}
},
{
"context": "Editor && in_snippet",
"use_key_equivalents": true,
"bindings": {
"alt-right": "editor::NextSnippetTabstop",
"alt-left": "editor::PreviousSnippetTabstop"
}
},
// Bindings for accepting edit predictions
//
// alt-l is provided as an alternative to tab/alt-tab. and will be displayed in the UI. This is
@@ -943,6 +954,7 @@
"ctrl-g ctrl-g": "git::Fetch",
"ctrl-g up": "git::Push",
"ctrl-g down": "git::Pull",
"ctrl-g shift-down": "git::PullRebase",
"ctrl-g shift-up": "git::ForcePush",
"ctrl-g d": "git::Diff",
"ctrl-g backspace": "git::RestoreTrackedFiles",
@@ -1030,7 +1042,8 @@
"use_key_equivalents": true,
"bindings": {
"alt-up": "collab_panel::MoveChannelUp",
"alt-down": "collab_panel::MoveChannelDown"
"alt-down": "collab_panel::MoveChannelDown",
"alt-enter": "collab_panel::OpenSelectedChannelNotes"
}
},
{
@@ -1152,7 +1165,8 @@
"ctrl-shift-space": "terminal::ToggleViMode",
"ctrl-shift-r": "terminal::RerunTask",
"ctrl-alt-r": "terminal::RerunTask",
"alt-t": "terminal::RerunTask"
"alt-t": "terminal::RerunTask",
"ctrl-shift-5": "pane::SplitRight"
}
},
{
@@ -1270,6 +1284,14 @@
"shift-alt-a": "onboarding::OpenAccount"
}
},
{
"context": "GitWorktreeSelector || (GitWorktreeSelector > Picker > Editor)",
"use_key_equivalents": true,
"bindings": {
"ctrl-shift-space": "git::WorktreeFromDefaultOnWindow",
"ctrl-space": "git::WorktreeFromDefault"
}
},
{
"context": "SettingsWindow",
"use_key_equivalents": true,
@@ -1327,5 +1349,12 @@
"ctrl-enter up": "dev::Zeta2RatePredictionPositive",
"ctrl-enter down": "dev::Zeta2RatePredictionNegative"
}
},
{
"context": "Zeta2Context > Editor",
"bindings": {
"alt-left": "dev::Zeta2ContextGoBack",
"alt-right": "dev::Zeta2ContextGoForward"
}
}
]

View File

@@ -67,6 +67,7 @@
"ctrl-o": "pane::GoBack",
"ctrl-i": "pane::GoForward",
"ctrl-]": "editor::GoToDefinition",
"ctrl-t": "pane::GoToOlderTag",
"escape": "vim::SwitchToNormalMode",
"ctrl-[": "vim::SwitchToNormalMode",
"v": "vim::ToggleVisual",
@@ -220,6 +221,8 @@
"[ {": ["vim::UnmatchedBackward", { "char": "{" }],
"] )": ["vim::UnmatchedForward", { "char": ")" }],
"[ (": ["vim::UnmatchedBackward", { "char": "(" }],
"[ r": "vim::GoToPreviousReference",
"] r": "vim::GoToNextReference",
// tree-sitter related commands
"[ x": "vim::SelectLargerSyntaxNode",
"] x": "vim::SelectSmallerSyntaxNode"
@@ -419,6 +422,12 @@
"ctrl-[": "editor::Cancel"
}
},
{
"context": "vim_mode == helix_select && !menu",
"bindings": {
"escape": "vim::SwitchToHelixNormalMode"
}
},
{
"context": "(vim_mode == helix_normal || vim_mode == helix_select) && !menu",
"bindings": {

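The vim keymap gains tag-stack navigation (`ctrl-t`), reference navigation (`[ r` / `] r`), and an escape binding for helix select mode. A hedged sketch of remapping the reference-navigation actions in a user `keymap.json` — the context string is an assumption, not taken from this diff:

```json
[
  {
    "context": "vim_mode == normal && !menu",
    "bindings": {
      // Alternative keys for the new reference-navigation actions.
      "g [": "vim::GoToPreviousReference",
      "g ]": "vim::GoToNextReference"
    }
  }
]
```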
View File

@@ -1,179 +0,0 @@
You are a highly skilled software engineer with extensive knowledge in many programming languages, frameworks, design patterns, and best practices.
## Communication
1. Be conversational but professional.
2. Refer to the user in the second person and yourself in the first person.
3. Format your responses in markdown. Use backticks to format file, directory, function, and class names.
4. NEVER lie or make things up.
5. Refrain from apologizing all the time when results are unexpected. Instead, just try your best to proceed or explain the circumstances to the user without apologizing.
{{#if has_tools}}
## Tool Use
1. Make sure to adhere to the tools schema.
2. Provide every required argument.
3. DO NOT use tools to access items that are already available in the context section.
4. Use only the tools that are currently available.
5. DO NOT use a tool that is not available just because it appears in the conversation. This means the user turned it off.
6. NEVER run commands that don't terminate on their own such as web servers (like `npm run start`, `npm run dev`, `python -m http.server`, etc) or file watchers.
7. Avoid HTML entity escaping - use plain characters instead.
## Searching and Reading
If you are unsure how to fulfill the user's request, gather more information with tool calls and/or clarifying questions.
{{! TODO: If there are files, we should mention it but otherwise omit that fact }}
If appropriate, use tool calls to explore the current project, which contains the following root directories:
{{#each worktrees}}
- `{{abs_path}}`
{{/each}}
- Bias towards not asking the user for help if you can find the answer yourself.
- When providing paths to tools, the path should always start with the name of a project root directory listed above.
- Before you read or edit a file, you must first find the full path. DO NOT ever guess a file path!
{{# if (has_tool 'grep') }}
- When looking for symbols in the project, prefer the `grep` tool.
- As you learn about the structure of the project, use that information to scope `grep` searches to targeted subtrees of the project.
- The user might specify a partial file path. If you don't know the full path, use `find_path` (not `grep`) before you read the file.
{{/if}}
{{else}}
You are being tasked with providing a response, but you have no ability to use tools or to read or write any aspect of the user's system (other than any context the user might have provided to you).
As such, if you need the user to perform any actions for you, you must request them explicitly. Bias towards giving a response to the best of your ability, and then making requests for the user to take action (e.g. to give you more context) only optionally.
The one exception to this is if the user references something you don't know about - for example, the name of a source code file, function, type, or other piece of code that you have no awareness of. In this case, you MUST NOT MAKE SOMETHING UP, or assume you know what that thing is or how it works. Instead, you must ask the user for clarification rather than giving a response.
{{/if}}
## Code Block Formatting
Whenever you mention a code block, you MUST use ONLY the following format:
```path/to/Something.blah#L123-456
(code goes here)
```
The `#L123-456` means the line number range 123 through 456, and the path/to/Something.blah
is a path in the project. (If there is no valid path in the project, then you can use
/dev/null/path.extension for its path.) This is the ONLY valid way to format code blocks, because the Markdown parser
does not understand the more common ```language syntax, or bare ``` blocks. It only
understands this path-based syntax, and if the path is missing, then it will error and you will have to do it over again.
Just to be really clear about this, if you ever find yourself writing three backticks followed by a language name, STOP!
You have made a mistake. You can only ever put paths after triple backticks!
<example>
Based on all the information I've gathered, here's a summary of how this system works:
1. The README file is loaded into the system.
2. The system finds the first two headers, including everything in between. In this case, that would be:
```path/to/README.md#L8-12
# First Header
This is the info under the first header.
## Sub-header
```
3. Then the system finds the last header in the README:
```path/to/README.md#L27-29
## Last Header
This is the last header in the README.
```
4. Finally, it passes this information on to the next process.
</example>
<example>
In Markdown, hash marks signify headings. For example:
```/dev/null/example.md#L1-3
# Level 1 heading
## Level 2 heading
### Level 3 heading
```
</example>
Here are examples of ways you must never render code blocks:
<bad_example_do_not_do_this>
In Markdown, hash marks signify headings. For example:
```
# Level 1 heading
## Level 2 heading
### Level 3 heading
```
</bad_example_do_not_do_this>
This example is unacceptable because it does not include the path.
<bad_example_do_not_do_this>
In Markdown, hash marks signify headings. For example:
```markdown
# Level 1 heading
## Level 2 heading
### Level 3 heading
```
</bad_example_do_not_do_this>
This example is unacceptable because it has the language instead of the path.
<bad_example_do_not_do_this>
In Markdown, hash marks signify headings. For example:
# Level 1 heading
## Level 2 heading
### Level 3 heading
</bad_example_do_not_do_this>
This example is unacceptable because it uses indentation to mark the code block
instead of backticks with a path.
<bad_example_do_not_do_this>
In Markdown, hash marks signify headings. For example:
```markdown
/dev/null/example.md#L1-3
# Level 1 heading
## Level 2 heading
### Level 3 heading
```
</bad_example_do_not_do_this>
This example is unacceptable because the path is in the wrong place. The path must be directly after the opening backticks.
{{#if has_tools}}
## Fixing Diagnostics
1. Make 1-2 attempts at fixing diagnostics, then defer to the user.
2. Never simplify code you've written just to solve diagnostics. Complete, mostly correct code is more valuable than perfect code that doesn't solve the problem.
## Debugging
When debugging, only make code changes if you are certain that you can solve the problem.
Otherwise, follow debugging best practices:
1. Address the root cause instead of the symptoms.
2. Add descriptive logging statements and error messages to track variable and code state.
3. Add test functions and statements to isolate the problem.
{{/if}}
## Calling External APIs
1. Unless explicitly requested by the user, use the best suited external APIs and packages to solve the task. There is no need to ask the user for permission.
2. When selecting which version of an API or package to use, choose one that is compatible with the user's dependency management file(s). If no such file exists or if the package is not present, use the latest version that is in your training data.
3. If an external API requires an API Key, be sure to point this out to the user. Adhere to best security practices (e.g. DO NOT hardcode an API key in a place where it can be exposed)
## System Information
Operating System: {{os}}
Default Shell: {{shell}}
{{#if (or has_rules has_user_rules)}}
## User's Custom Instructions
The following additional instructions are provided by the user, and should be followed to the best of your ability{{#if has_tools}} without interfering with the tool use guidelines{{/if}}.
{{#if has_rules}}
There are project rules that apply to these root directories:
{{#each worktrees}}
{{#if rules_file}}
`{{root_name}}/{{rules_file.path_in_worktree}}`:
``````
{{{rules_file.text}}}
``````
{{/if}}
{{/each}}
{{/if}}
{{#if has_user_rules}}
The user has specified the following rules that should be applied:
{{#each user_rules}}
{{#if title}}
Rules title: {{title}}
{{/if}}
``````
{{contents}}
``````
{{/each}}
{{/if}}
{{/if}}

View File

@@ -255,6 +255,19 @@
// Whether to display inline and alongside documentation for items in the
// completions menu
"show_completion_documentation": true,
// When to show the scrollbar in the completion menu.
// This setting can take four values:
//
// 1. Show the scrollbar if there's important information or
// follow the system's configured behavior
// "auto"
// 2. Match the system's configured behavior:
// "system"
// 3. Always show the scrollbar:
// "always"
// 4. Never show the scrollbar:
// "never" (default)
"completion_menu_scrollbar": "never",
// Show method signatures in the editor, when inside parentheses.
"auto_signature_help": false,
// Whether to show the signature help after completion or a bracket pair inserted.
@@ -602,7 +615,9 @@
"whole_word": false,
"case_sensitive": false,
"include_ignored": false,
"regex": false
"regex": false,
// Whether to center the cursor on each search match when navigating.
"center_on_match": false
},
// When to populate a new search's query based on the text under the cursor.
// This setting can take the following three values:
@@ -1232,6 +1247,9 @@
// that are overly broad can slow down Zed's file scanning. `file_scan_exclusions` takes
// precedence over these inclusions.
"file_scan_inclusions": [".env*"],
// Globs to match files that will be considered "hidden". These files can be hidden from the
// project panel by toggling the "hide_hidden" setting.
"hidden_files": ["**/.*"],
// Git gutter behavior configuration.
"git": {
// Control whether the git gutter is shown. May take 2 values:
@@ -1329,7 +1347,7 @@
"model": null,
"max_tokens": null
},
// Whether edit predictions are enabled when editing text threads.
// Whether edit predictions are enabled when editing text threads in the agent panel.
// This setting has no effect if globally disabled.
"enabled_in_text_threads": true
},
@@ -1700,6 +1718,7 @@
"preferred_line_length": 72
},
"Go": {
"hard_tabs": true,
"code_actions_on_format": {
"source.organizeImports": true
},
@@ -1718,6 +1737,9 @@
"allowed": true
}
},
"HTML+ERB": {
"language_servers": ["herb", "!ruby-lsp", "..."]
},
"Java": {
"prettier": {
"allowed": true,
@@ -1740,6 +1762,9 @@
"allowed": true
}
},
"JS+ERB": {
"language_servers": ["!ruby-lsp", "..."]
},
"Kotlin": {
"language_servers": ["!kotlin-language-server", "kotlin-lsp", "..."]
},
@@ -1754,6 +1779,7 @@
"Markdown": {
"format_on_save": "off",
"use_on_type_format": false,
"remove_trailing_whitespace_on_save": false,
"allow_rewrap": "anywhere",
"soft_wrap": "editor_width",
"prettier": {
@@ -1769,7 +1795,8 @@
}
},
"Plain Text": {
"allow_rewrap": "anywhere"
"allow_rewrap": "anywhere",
"soft_wrap": "editor_width"
},
"Python": {
"code_actions_on_format": {
@@ -1843,6 +1870,9 @@
"allowed": true
}
},
"YAML+ERB": {
"language_servers": ["!ruby-lsp", "..."]
},
"Zig": {
"language_servers": ["zls", "..."]
}

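Several of the new default-settings entries above are user-tunable. A minimal `settings.json` fragment exercising them might look like the following — the values are illustrative rather than the defaults, and the nesting of `center_on_match` under the search object follows the hunk it appears in:

```json
{
  // Always show the completion menu scrollbar instead of the "never" default.
  "completion_menu_scrollbar": "always",
  "search": {
    // Center the viewport on each match while stepping through results.
    "center_on_match": true
  },
  // Dotfiles count as hidden, so project_panel::ToggleHideHidden can hide them.
  "hidden_files": ["**/.*"]
}
```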
View File

@@ -6,8 +6,8 @@
{
"name": "Gruvbox Dark",
"appearance": "dark",
"accents": ["#cc241dff", "#98971aff", "#d79921ff", "#458588ff", "#b16286ff", "#689d6aff", "#d65d0eff"],
"style": {
"accents": ["#cc241dff", "#98971aff", "#d79921ff", "#458588ff", "#b16286ff", "#689d6aff", "#d65d0eff"],
"border": "#5b534dff",
"border.variant": "#494340ff",
"border.focused": "#303a36ff",
@@ -412,8 +412,8 @@
{
"name": "Gruvbox Dark Hard",
"appearance": "dark",
"accents": ["#cc241dff", "#98971aff", "#d79921ff", "#458588ff", "#b16286ff", "#689d6aff", "#d65d0eff"],
"style": {
"accents": ["#cc241dff", "#98971aff", "#d79921ff", "#458588ff", "#b16286ff", "#689d6aff", "#d65d0eff"],
"border": "#5b534dff",
"border.variant": "#494340ff",
"border.focused": "#303a36ff",
@@ -818,8 +818,8 @@
{
"name": "Gruvbox Dark Soft",
"appearance": "dark",
"accents": ["#cc241dff", "#98971aff", "#d79921ff", "#458588ff", "#b16286ff", "#689d6aff", "#d65d0eff"],
"style": {
"accents": ["#cc241dff", "#98971aff", "#d79921ff", "#458588ff", "#b16286ff", "#689d6aff", "#d65d0eff"],
"border": "#5b534dff",
"border.variant": "#494340ff",
"border.focused": "#303a36ff",
@@ -1224,8 +1224,8 @@
{
"name": "Gruvbox Light",
"appearance": "light",
"accents": ["#cc241dff", "#98971aff", "#d79921ff", "#458588ff", "#b16286ff", "#689d6aff", "#d65d0eff"],
"style": {
"accents": ["#cc241dff", "#98971aff", "#d79921ff", "#458588ff", "#b16286ff", "#689d6aff", "#d65d0eff"],
"border": "#c8b899ff",
"border.variant": "#ddcca7ff",
"border.focused": "#adc5ccff",
@@ -1630,8 +1630,8 @@
{
"name": "Gruvbox Light Hard",
"appearance": "light",
"accents": ["#cc241dff", "#98971aff", "#d79921ff", "#458588ff", "#b16286ff", "#689d6aff", "#d65d0eff"],
"style": {
"accents": ["#cc241dff", "#98971aff", "#d79921ff", "#458588ff", "#b16286ff", "#689d6aff", "#d65d0eff"],
"border": "#c8b899ff",
"border.variant": "#ddcca7ff",
"border.focused": "#adc5ccff",
@@ -2036,8 +2036,8 @@
{
"name": "Gruvbox Light Soft",
"appearance": "light",
"accents": ["#cc241dff", "#98971aff", "#d79921ff", "#458588ff", "#b16286ff", "#689d6aff", "#d65d0eff"],
"style": {
"accents": ["#cc241dff", "#98971aff", "#d79921ff", "#458588ff", "#b16286ff", "#689d6aff", "#d65d0eff"],
"border": "#c8b899ff",
"border.variant": "#ddcca7ff",
"border.focused": "#adc5ccff",

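The Gruvbox hunks only move the `accents` array from the theme root into `style`. A hedged sketch of the resulting shape, with the accent list truncated for brevity:

```json
{
  "name": "Gruvbox Dark",
  "appearance": "dark",
  "style": {
    "accents": ["#cc241dff", "#98971aff", "#d79921ff"],
    "border": "#5b534dff"
  }
}
```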
ci/Dockerfile.namespace (new file, 21 lines)
View File

@@ -0,0 +1,21 @@
ARG NAMESPACE_BASE_IMAGE_REF=""
# Your image must build FROM NAMESPACE_BASE_IMAGE_REF
FROM ${NAMESPACE_BASE_IMAGE_REF} AS base
# Remove problematic git-lfs packagecloud source
RUN sudo rm -f /etc/apt/sources.list.d/*git-lfs*.list
# Install git and SSH for cloning private repositories
RUN sudo apt-get update && \
sudo apt-get install -y git openssh-client
# Clone the Zed repository
RUN git clone https://github.com/zed-industries/zed.git ~/zed
# Run the Linux installation script
WORKDIR /home/runner/zed
RUN ./script/linux
# Clean up unnecessary files to reduce image size
RUN sudo apt-get clean && sudo rm -rf \
/home/runner/zed

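The new `ci/Dockerfile.namespace` takes its base image as a build argument. Assuming a locally resolvable base reference, a build invocation might look like this — the image reference below is a placeholder, not a real CI value:

```sh
docker build \
  --build-arg NAMESPACE_BASE_IMAGE_REF=ghcr.io/example/namespace-base:latest \
  -f ci/Dockerfile.namespace \
  -t zed-namespace-ci .
```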
View File

@@ -3,7 +3,6 @@ mod diff;
mod mention;
mod terminal;
use ::terminal::terminal_settings::TerminalSettings;
use agent_settings::AgentSettings;
use collections::HashSet;
pub use connection::*;
@@ -12,7 +11,7 @@ use language::language_settings::FormatOnSave;
pub use mention::*;
use project::lsp_store::{FormatTrigger, LspFormatTarget};
use serde::{Deserialize, Serialize};
use settings::{Settings as _, SettingsLocation};
use settings::Settings as _;
use task::{Shell, ShellBuilder};
pub use terminal::*;
@@ -2141,17 +2140,9 @@ impl AcpThread {
) -> Task<Result<Entity<Terminal>>> {
let env = match &cwd {
Some(dir) => self.project.update(cx, |project, cx| {
let worktree = project.find_worktree(dir.as_path(), cx);
let shell = TerminalSettings::get(
worktree.as_ref().map(|(worktree, path)| SettingsLocation {
worktree_id: worktree.read(cx).id(),
path: &path,
}),
cx,
)
.shell
.clone();
project.directory_environment(&shell, dir.as_path().into(), cx)
project.environment().update(cx, |env, cx| {
env.directory_environment(dir.as_path().into(), cx)
})
}),
None => Task::ready(None).shared(),
};

View File

@@ -5,10 +5,8 @@ use gpui::{App, AppContext, AsyncApp, Context, Entity, Task};
use language::LanguageRegistry;
use markdown::Markdown;
use project::Project;
use settings::{Settings as _, SettingsLocation};
use std::{path::PathBuf, process::ExitStatus, sync::Arc, time::Instant};
use task::Shell;
use terminal::terminal_settings::TerminalSettings;
use util::get_default_system_shell_preferring_bash;
pub struct Terminal {
@@ -187,17 +185,9 @@ pub async fn create_terminal_entity(
let mut env = if let Some(dir) = &cwd {
project
.update(cx, |project, cx| {
let worktree = project.find_worktree(dir.as_path(), cx);
let shell = TerminalSettings::get(
worktree.as_ref().map(|(worktree, path)| SettingsLocation {
worktree_id: worktree.read(cx).id(),
path: &path,
}),
cx,
)
.shell
.clone();
project.directory_environment(&shell, dir.clone().into(), cx)
project.environment().update(cx, |env, cx| {
env.directory_environment(dir.clone().into(), cx)
})
})?
.await
.unwrap_or_default()

View File

@@ -19,7 +19,7 @@ use markdown::{CodeBlockRenderer, Markdown, MarkdownElement, MarkdownStyle};
use project::Project;
use settings::Settings;
use theme::ThemeSettings;
use ui::{Tooltip, prelude::*};
use ui::{Tooltip, WithScrollbar, prelude::*};
use util::ResultExt as _;
use workspace::{
Item, ItemHandle, ToolbarItemEvent, ToolbarItemLocation, ToolbarItemView, Workspace,
@@ -259,6 +259,15 @@ impl AcpTools {
serde_json::to_string_pretty(&messages).ok()
}
fn clear_messages(&mut self, cx: &mut Context<Self>) {
if let Some(connection) = self.watched_connection.as_mut() {
connection.messages.clear();
connection.list_state.reset(0);
self.expanded.clear();
cx.notify();
}
}
fn render_message(
&mut self,
index: usize,
@@ -282,17 +291,19 @@ impl AcpTools {
let expanded = self.expanded.contains(&index);
v_flex()
.w_full()
.px_4()
.py_3()
.border_color(colors.border)
.border_b_1()
.gap_2()
.items_start()
.font_buffer(cx)
.text_size(base_size)
.id(index)
.group("message")
.cursor_pointer()
.font_buffer(cx)
.w_full()
.py_3()
.pl_4()
.pr_5()
.gap_2()
.items_start()
.text_size(base_size)
.border_color(colors.border)
.border_b_1()
.hover(|this| this.bg(colors.element_background.opacity(0.5)))
.on_click(cx.listener(move |this, _, _, cx| {
if this.expanded.contains(&index) {
@@ -314,15 +325,14 @@ impl AcpTools {
h_flex()
.w_full()
.gap_2()
.items_center()
.flex_shrink_0()
.child(match message.direction {
acp::StreamMessageDirection::Incoming => {
ui::Icon::new(ui::IconName::ArrowDown).color(Color::Error)
}
acp::StreamMessageDirection::Outgoing => {
ui::Icon::new(ui::IconName::ArrowUp).color(Color::Success)
}
acp::StreamMessageDirection::Incoming => Icon::new(IconName::ArrowDown)
.color(Color::Error)
.size(IconSize::Small),
acp::StreamMessageDirection::Outgoing => Icon::new(IconName::ArrowUp)
.color(Color::Success)
.size(IconSize::Small),
})
.child(
Label::new(message.name.clone())
@@ -492,7 +502,7 @@ impl Focusable for AcpTools {
}
impl Render for AcpTools {
fn render(&mut self, _window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
fn render(&mut self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
v_flex()
.track_focus(&self.focus_handle)
.size_full()
@@ -507,13 +517,19 @@ impl Render for AcpTools {
.child("No messages recorded yet")
.into_any()
} else {
list(
connection.list_state.clone(),
cx.processor(Self::render_message),
)
.with_sizing_behavior(gpui::ListSizingBehavior::Auto)
.flex_grow()
.into_any()
div()
.size_full()
.flex_grow()
.child(
list(
connection.list_state.clone(),
cx.processor(Self::render_message),
)
.with_sizing_behavior(gpui::ListSizingBehavior::Auto)
.size_full(),
)
.vertical_scrollbar_for(connection.list_state.clone(), window, cx)
.into_any()
}
}
None => h_flex()
@@ -547,10 +563,16 @@ impl Render for AcpToolsToolbarItemView {
};
let acp_tools = acp_tools.clone();
let has_messages = acp_tools
.read(cx)
.watched_connection
.as_ref()
.is_some_and(|connection| !connection.messages.is_empty());
h_flex()
.gap_2()
.child(
.child({
let acp_tools = acp_tools.clone();
IconButton::new(
"copy_all_messages",
if self.just_copied {
@@ -565,13 +587,7 @@ impl Render for AcpToolsToolbarItemView {
} else {
"Copy All Messages"
}))
.disabled(
acp_tools
.read(cx)
.watched_connection
.as_ref()
.is_none_or(|connection| connection.messages.is_empty()),
)
.disabled(!has_messages)
.on_click(cx.listener(move |this, _, _window, cx| {
if let Some(content) = acp_tools.read(cx).serialize_observed_messages() {
cx.write_to_clipboard(ClipboardItem::new_string(content));
@@ -586,7 +602,18 @@ impl Render for AcpToolsToolbarItemView {
})
.detach();
}
})),
}))
})
.child(
IconButton::new("clear_messages", IconName::Trash)
.icon_size(IconSize::Small)
.tooltip(Tooltip::text("Clear Messages"))
.disabled(!has_messages)
.on_click(cx.listener(move |_this, _, _window, cx| {
acp_tools.update(cx, |acp_tools, cx| {
acp_tools.clear_messages(cx);
});
})),
)
.into_any()
}

View File

@@ -11,7 +11,7 @@ use language::{
LanguageServerStatusUpdate, ServerHealth,
};
use project::{
LanguageServerProgress, LspStoreEvent, Project, ProjectEnvironmentEvent,
LanguageServerProgress, LspStoreEvent, ProgressToken, Project, ProjectEnvironmentEvent,
git_store::{GitStoreEvent, Repository},
};
use smallvec::SmallVec;
@@ -61,7 +61,7 @@ struct ServerStatus {
struct PendingWork<'a> {
language_server_id: LanguageServerId,
progress_token: &'a str,
progress_token: &'a ProgressToken,
progress: &'a LanguageServerProgress,
}
@@ -313,9 +313,9 @@ impl ActivityIndicator {
let mut pending_work = status
.pending_work
.iter()
.map(|(token, progress)| PendingWork {
.map(|(progress_token, progress)| PendingWork {
language_server_id: server_id,
progress_token: token.as_str(),
progress_token,
progress,
})
.collect::<SmallVec<[_; 4]>>();
@@ -358,11 +358,7 @@ impl ActivityIndicator {
..
}) = pending_work.next()
{
let mut message = progress
.title
.as_deref()
.unwrap_or(progress_token)
.to_string();
let mut message = progress.title.clone().unwrap_or(progress_token.to_string());
if let Some(percentage) = progress.percentage {
write!(&mut message, " ({}%)", percentage).unwrap();
@@ -773,7 +769,7 @@ impl Render for ActivityIndicator {
let Some(content) = self.content_to_render(cx) else {
return result;
};
let this = cx.entity().downgrade();
let activity_indicator = cx.entity().downgrade();
let truncate_content = content.message.len() > MAX_MESSAGE_LEN;
result.gap_2().child(
PopoverMenu::new("activity-indicator-popover")
@@ -815,22 +811,21 @@ impl Render for ActivityIndicator {
)
.anchor(gpui::Corner::BottomLeft)
.menu(move |window, cx| {
let strong_this = this.upgrade()?;
let strong_this = activity_indicator.upgrade()?;
let mut has_work = false;
let menu = ContextMenu::build(window, cx, |mut menu, _, cx| {
for work in strong_this.read(cx).pending_language_server_work(cx) {
has_work = true;
let this = this.clone();
let activity_indicator = activity_indicator.clone();
let mut title = work
.progress
.title
.as_deref()
.unwrap_or(work.progress_token)
.to_owned();
.clone()
.unwrap_or(work.progress_token.to_string());
if work.progress.is_cancellable {
let language_server_id = work.language_server_id;
let token = work.progress_token.to_string();
let token = work.progress_token.clone();
let title = SharedString::from(title);
menu = menu.custom_entry(
move |_, _| {
@@ -842,18 +837,23 @@ impl Render for ActivityIndicator {
.into_any_element()
},
move |_, cx| {
this.update(cx, |this, cx| {
this.project.update(cx, |project, cx| {
project.cancel_language_server_work(
language_server_id,
Some(token.clone()),
let token = token.clone();
activity_indicator
.update(cx, |activity_indicator, cx| {
activity_indicator.project.update(
cx,
|project, cx| {
project.cancel_language_server_work(
language_server_id,
Some(token),
cx,
);
},
);
});
this.context_menu_handle.hide(cx);
cx.notify();
})
.ok();
activity_indicator.context_menu_handle.hide(cx);
cx.notify();
})
.ok();
},
);
} else {

View File

@@ -11,7 +11,7 @@ path = "src/agent.rs"
[features]
test-support = ["db/test-support"]
eval = []
edit-agent-eval = []
unit-eval = []
e2e = []
[lints]

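With the feature flag renamed from `edit-agent-eval` to `unit-eval`, the eval tests below stay ignored unless that feature is enabled. Assuming the crate is named `agent`, a local run might look like:

```sh
# Enables the renamed feature so the eval tests lose their #[ignore] attribute.
cargo test -p agent --features unit-eval
```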
View File

@@ -13,7 +13,15 @@ const EDITS_END_TAG: &str = "</edits>";
const SEARCH_MARKER: &str = "<<<<<<< SEARCH";
const SEPARATOR_MARKER: &str = "=======";
const REPLACE_MARKER: &str = ">>>>>>> REPLACE";
const END_TAGS: [&str; 3] = [OLD_TEXT_END_TAG, NEW_TEXT_END_TAG, EDITS_END_TAG];
const SONNET_PARAMETER_INVOKE_1: &str = "</parameter>\n</invoke>";
const SONNET_PARAMETER_INVOKE_2: &str = "</parameter></invoke>";
const END_TAGS: [&str; 5] = [
OLD_TEXT_END_TAG,
NEW_TEXT_END_TAG,
EDITS_END_TAG,
SONNET_PARAMETER_INVOKE_1, // Remove this after switching to streaming tool call
SONNET_PARAMETER_INVOKE_2,
];
#[derive(Debug)]
pub enum EditParserEvent {
@@ -547,6 +555,37 @@ mod tests {
);
}
#[gpui::test(iterations = 1000)]
fn test_xml_edits_with_closing_parameter_invoke(mut rng: StdRng) {
// This case is a regression with Claude Sonnet 4.5.
// Sometimes Sonnet thinks that it's doing a tool call
// and closes its response with '</parameter></invoke>'
// instead of properly closing </new_text>
let mut parser = EditParser::new(EditFormat::XmlTags);
assert_eq!(
parse_random_chunks(
indoc! {"
<old_text>some text</old_text><new_text>updated text</parameter></invoke>
"},
&mut parser,
&mut rng
),
vec![Edit {
old_text: "some text".to_string(),
new_text: "updated text".to_string(),
line_hint: None,
},]
);
assert_eq!(
parser.finish(),
EditParserMetrics {
tags: 2,
mismatched_tags: 1
}
);
}
#[gpui::test(iterations = 1000)]
fn test_xml_nested_tags(mut rng: StdRng) {
let mut parser = EditParser::new(EditFormat::XmlTags);
@@ -1035,6 +1074,11 @@ mod tests {
last_ix = chunk_ix;
}
if new_text.is_some() {
pending_edit.new_text = new_text.take().unwrap();
edits.push(pending_edit);
}
edits
}
}

View File

@@ -31,7 +31,7 @@ use std::{
use util::path;
#[test]
#[cfg_attr(not(feature = "edit-agent-eval"), ignore)]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_extract_handle_command_output() {
// Test how well agent generates multiple edit hunks.
//
@@ -108,7 +108,7 @@ fn eval_extract_handle_command_output() {
}
#[test]
#[cfg_attr(not(feature = "edit-agent-eval"), ignore)]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_delete_run_git_blame() {
// Model | Pass rate
// ----------------------------|----------
@@ -171,7 +171,7 @@ fn eval_delete_run_git_blame() {
}
#[test]
#[cfg_attr(not(feature = "edit-agent-eval"), ignore)]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_translate_doc_comments() {
// Model | Pass rate
// ============================================
@@ -234,7 +234,7 @@ fn eval_translate_doc_comments() {
}
#[test]
#[cfg_attr(not(feature = "edit-agent-eval"), ignore)]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_use_wasi_sdk_in_compile_parser_to_wasm() {
// Model | Pass rate
// ============================================
@@ -360,7 +360,7 @@ fn eval_use_wasi_sdk_in_compile_parser_to_wasm() {
}
#[test]
#[cfg_attr(not(feature = "edit-agent-eval"), ignore)]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_disable_cursor_blinking() {
// Model | Pass rate
// ============================================
@@ -446,7 +446,7 @@ fn eval_disable_cursor_blinking() {
}
#[test]
#[cfg_attr(not(feature = "edit-agent-eval"), ignore)]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_from_pixels_constructor() {
// Results for 2025-06-13
//
@@ -656,7 +656,7 @@ fn eval_from_pixels_constructor() {
}
#[test]
#[cfg_attr(not(feature = "edit-agent-eval"), ignore)]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_zode() {
// Model | Pass rate
// ============================================
@@ -763,7 +763,7 @@ fn eval_zode() {
}
#[test]
#[cfg_attr(not(feature = "edit-agent-eval"), ignore)]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_add_overwrite_test() {
// Model | Pass rate
// ============================================
@@ -995,7 +995,7 @@ fn eval_add_overwrite_test() {
}
#[test]
#[cfg_attr(not(feature = "edit-agent-eval"), ignore)]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_create_empty_file() {
// Check that Edit Agent can create a file without writing its
// thoughts into it. This issue is not specific to empty files, but
@@ -1581,6 +1581,7 @@ impl EditAgentTest {
let template = crate::SystemPromptTemplate {
project: &project_context,
available_tools: tool_names,
model_name: None,
};
let templates = Templates::new();
template.render(&templates).unwrap()

View File

@@ -38,6 +38,7 @@ pub struct SystemPromptTemplate<'a> {
#[serde(flatten)]
pub project: &'a prompt_store::ProjectContext,
pub available_tools: Vec<SharedString>,
pub model_name: Option<String>,
}
impl Template for SystemPromptTemplate<'_> {
@@ -79,9 +80,11 @@ mod tests {
let template = SystemPromptTemplate {
project: &project,
available_tools: vec!["echo".into()],
model_name: Some("test-model".to_string()),
};
let templates = Templates::new();
let rendered = template.render(&templates).unwrap();
assert!(rendered.contains("## Fixing Diagnostics"));
assert!(rendered.contains("test-model"));
}
}

View File

@@ -150,6 +150,12 @@ Otherwise, follow debugging best practices:
Operating System: {{os}}
Default Shell: {{shell}}
{{#if model_name}}
## Model Information
You are powered by the model named {{model_name}}.
{{/if}}
{{#if (or has_rules has_user_rules)}}
## User's Custom Instructions

View File

@@ -160,6 +160,42 @@ async fn test_system_prompt(cx: &mut TestAppContext) {
);
}
#[gpui::test]
async fn test_system_prompt_without_tools(cx: &mut TestAppContext) {
let ThreadTest { model, thread, .. } = setup(cx, TestModel::Fake).await;
let fake_model = model.as_fake();
thread
.update(cx, |thread, cx| {
thread.send(UserMessageId::new(), ["abc"], cx)
})
.unwrap();
cx.run_until_parked();
let mut pending_completions = fake_model.pending_completions();
assert_eq!(
pending_completions.len(),
1,
"unexpected pending completions: {:?}",
pending_completions
);
let pending_completion = pending_completions.pop().unwrap();
assert_eq!(pending_completion.messages[0].role, Role::System);
let system_message = &pending_completion.messages[0];
let system_prompt = system_message.content[0].to_str().unwrap();
assert!(
!system_prompt.contains("## Tool Use"),
"unexpected system message: {:?}",
system_message
);
assert!(
!system_prompt.contains("## Fixing Diagnostics"),
"unexpected system message: {:?}",
system_message
);
}
#[gpui::test]
async fn test_prompt_caching(cx: &mut TestAppContext) {
let ThreadTest { model, thread, .. } = setup(cx, TestModel::Fake).await;

View File

@@ -1816,9 +1816,15 @@ impl Thread {
log::debug!("Completion intent: {:?}", completion_intent);
log::debug!("Completion mode: {:?}", self.completion_mode);
let messages = self.build_request_messages(cx);
let available_tools: Vec<_> = self
.running_turn
.as_ref()
.map(|turn| turn.tools.keys().cloned().collect())
.unwrap_or_default();
log::debug!("Request includes {} tools", available_tools.len());
let messages = self.build_request_messages(available_tools, cx);
log::debug!("Request will include {} messages", messages.len());
log::debug!("Request includes {} tools", tools.len());
let request = LanguageModelRequest {
thread_id: Some(self.id.to_string()),
@@ -1909,7 +1915,11 @@ impl Thread {
self.running_turn.as_ref()?.tools.get(name).cloned()
}
fn build_request_messages(&self, cx: &App) -> Vec<LanguageModelRequestMessage> {
fn build_request_messages(
&self,
available_tools: Vec<SharedString>,
cx: &App,
) -> Vec<LanguageModelRequestMessage> {
log::trace!(
"Building request messages from {} thread messages",
self.messages.len()
@@ -1917,7 +1927,8 @@ impl Thread {
let system_prompt = SystemPromptTemplate {
project: self.project_context.read(cx),
available_tools: self.tools.keys().cloned().collect(),
available_tools,
model_name: self.model.as_ref().map(|m| m.name().0.to_string()),
}
.render(&self.templates)
.context("failed to build system prompt")

View File

@@ -178,6 +178,7 @@ impl AcpConnection {
meta: Some(serde_json::json!({
// Experimental: Allow for rendering terminal output from the agents
"terminal_output": true,
"terminal-auth": true,
})),
},
client_info: Some(acp::Implementation {

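Combined with the thread-view parsing further down — which reads `command`, `label`, and optional `args`/`env` from the auth method's `_meta` — an agent advertising terminal-based login might attach something like the following to one of its auth methods. The concrete values here are placeholders; only the field names are taken from the code:

```json
{
  "id": "terminal-login",
  "name": "Log in from the terminal",
  "_meta": {
    "terminal-auth": {
      "label": "Agent CLI login",
      "command": "agent-cli",
      "args": ["login"],
      "env": { "NO_COLOR": "1" }
    }
  }
}
```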
View File

@@ -253,17 +253,22 @@ impl ContextPickerCompletionProvider {
) -> Option<Completion> {
let project = workspace.read(cx).project().clone();
let label = CodeLabel::plain(symbol.name.clone(), None);
let abs_path = match &symbol.path {
SymbolLocation::InProject(project_path) => {
project.read(cx).absolute_path(&project_path, cx)?
}
let (abs_path, file_name) = match &symbol.path {
SymbolLocation::InProject(project_path) => (
project.read(cx).absolute_path(&project_path, cx)?,
project_path.path.file_name()?.to_string().into(),
),
SymbolLocation::OutsideProject {
abs_path,
signature: _,
} => PathBuf::from(abs_path.as_ref()),
} => (
PathBuf::from(abs_path.as_ref()),
abs_path.file_name().map(|f| f.to_string_lossy())?,
),
};
let label = build_symbol_label(&symbol.name, &file_name, symbol.range.start.0.row + 1, cx);
let uri = MentionUri::Symbol {
abs_path,
name: symbol.name.clone(),
@@ -570,6 +575,7 @@ impl ContextPickerCompletionProvider {
.unwrap_or_default();
let workspace = workspace.read(cx);
let project = workspace.project().read(cx);
let include_root_name = workspace.visible_worktrees(cx).count() > 1;
if let Some(agent_panel) = workspace.panel::<AgentPanel>(cx)
&& let Some(thread) = agent_panel.read(cx).active_agent_thread(cx)
@@ -596,7 +602,11 @@ impl ContextPickerCompletionProvider {
project
.worktree_for_id(project_path.worktree_id, cx)
.map(|worktree| {
let path_prefix = worktree.read(cx).root_name().into();
let path_prefix = if include_root_name {
worktree.read(cx).root_name().into()
} else {
RelPath::empty().into()
};
Match::File(FileMatch {
mat: fuzzy::PathMatch {
score: 1.,
@@ -674,6 +684,17 @@ impl ContextPickerCompletionProvider {
}
}
fn build_symbol_label(symbol_name: &str, file_name: &str, line: u32, cx: &App) -> CodeLabel {
let comment_id = cx.theme().syntax().highlight_id("comment").map(HighlightId);
let mut label = CodeLabelBuilder::default();
label.push_str(symbol_name, None);
label.push_str(" ", None);
label.push_str(&format!("{} L{}", file_name, line), comment_id);
label.build()
}
fn build_code_label_for_full_path(file_name: &str, directory: Option<&str>, cx: &App) -> CodeLabel {
let comment_id = cx.theme().syntax().highlight_id("comment").map(HighlightId);
let mut label = CodeLabelBuilder::default();
@@ -812,9 +833,21 @@ impl CompletionProvider for ContextPickerCompletionProvider {
path: mat.path.clone(),
};
// If path is empty, this means we're matching with the root directory itself
// so we use the path_prefix as the name
let path_prefix = if mat.path.is_empty() {
project
.read(cx)
.worktree_for_id(project_path.worktree_id, cx)
.map(|wt| wt.read(cx).root_name().into())
.unwrap_or_else(|| mat.path_prefix.clone())
} else {
mat.path_prefix.clone()
};
Self::completion_for_path(
project_path,
&mat.path_prefix,
&path_prefix,
is_recent,
mat.is_dir,
source_range.clone(),

View File

@@ -4,7 +4,7 @@ use acp_thread::{AcpThread, AgentThreadEntry};
use agent::HistoryStore;
use agent_client_protocol::{self as acp, ToolCallId};
use collections::HashMap;
use editor::{Editor, EditorMode, MinimapVisibility};
use editor::{Editor, EditorMode, MinimapVisibility, SizingBehavior};
use gpui::{
AnyEntity, App, AppContext as _, Entity, EntityId, EventEmitter, FocusHandle, Focusable,
ScrollHandle, SharedString, TextStyleRefinement, WeakEntity, Window,
@@ -357,7 +357,7 @@ fn create_editor_diff(
EditorMode::Full {
scale_ui_elements_with_buffer_font_size: false,
show_active_line_background: false,
sized_by_content: true,
sizing_behavior: SizingBehavior::SizeByContent,
},
diff.read(cx).multibuffer().clone(),
None,

View File

@@ -1,4 +1,5 @@
use crate::{
ChatWithFollow,
acp::completion_provider::{ContextPickerCompletionProvider, SlashCommandCompletion},
context_picker::{ContextPickerAction, fetch_context_picker::fetch_url_content},
};
@@ -15,6 +16,7 @@ use editor::{
MultiBuffer, ToOffset,
actions::Paste,
display_map::{Crease, CreaseId, FoldId},
scroll::Autoscroll,
};
use futures::{
FutureExt as _,
@@ -49,7 +51,7 @@ use text::OffsetRangeExt;
use theme::ThemeSettings;
use ui::{ButtonLike, TintColor, Toggleable, prelude::*};
use util::{ResultExt, debug_panic, rel_path::RelPath};
use workspace::{Workspace, notifications::NotifyResultExt as _};
use workspace::{CollaboratorId, Workspace, notifications::NotifyResultExt as _};
use zed_actions::agent::Chat;
pub struct MessageEditor {
@@ -234,8 +236,16 @@ impl MessageEditor {
window: &mut Window,
cx: &mut Context<Self>,
) {
let uri = MentionUri::Thread {
id: thread.id.clone(),
name: thread.title.to_string(),
};
let content = format!("{}\n", uri.as_link());
let content_len = content.len() - 1;
let start = self.editor.update(cx, |editor, cx| {
editor.set_text(format!("{}\n", thread.title), window, cx);
editor.set_text(content, window, cx);
editor
.buffer()
.read(cx)
@@ -244,18 +254,8 @@ impl MessageEditor {
.text_anchor
});
self.confirm_mention_completion(
thread.title.clone(),
start,
thread.title.len(),
MentionUri::Thread {
id: thread.id.clone(),
name: thread.title.to_string(),
},
window,
cx,
)
.detach();
self.confirm_mention_completion(thread.title, start, content_len, uri, window, cx)
.detach();
}
#[cfg(test)]
@@ -592,6 +592,21 @@ impl MessageEditor {
),
);
}
// Take this explanation with a grain of salt but, with creases being
// inserted, GPUI recomputes the editor layout in the next frames, so
// directly calling `editor.request_autoscroll` wouldn't work as
// expected. We're leveraging `cx.on_next_frame` to wait 2 frames and
// ensure that the layout has been recalculated so that the autoscroll
// request actually shows the cursor's new position.
let editor = self.editor.clone();
cx.on_next_frame(window, move |_, window, cx| {
cx.on_next_frame(window, move |_, _, cx| {
editor.update(cx, |editor, cx| {
editor.request_autoscroll(Autoscroll::fit(), cx)
});
});
});
}
fn confirm_mention_for_thread(
@@ -702,7 +717,7 @@ impl MessageEditor {
let mut all_tracked_buffers = Vec::new();
let result = editor.update(cx, |editor, cx| {
let mut ix = 0;
let mut ix = text.chars().position(|c| !c.is_whitespace()).unwrap_or(0);
let mut chunks: Vec<acp::ContentBlock> = Vec::new();
let text = editor.text(cx);
editor.display_map.update(cx, |map, cx| {
@@ -714,15 +729,6 @@ impl MessageEditor {
let crease_range = crease.range().to_offset(&snapshot.buffer_snapshot());
if crease_range.start > ix {
//todo(): Custom slash command ContentBlock?
// let chunk = if prevent_slash_commands
// && ix == 0
// && parse_slash_command(&text[ix..]).is_some()
// {
// format!(" {}", &text[ix..crease_range.start]).into()
// } else {
// text[ix..crease_range.start].into()
// };
let chunk = text[ix..crease_range.start].into();
chunks.push(chunk);
}
@@ -783,15 +789,6 @@ impl MessageEditor {
}
if ix < text.len() {
//todo(): Custom slash command ContentBlock?
// let last_chunk = if prevent_slash_commands
// && ix == 0
// && parse_slash_command(&text[ix..]).is_some()
// {
// format!(" {}", text[ix..].trim_end())
// } else {
// text[ix..].trim_end().to_owned()
// };
let last_chunk = text[ix..].trim_end().to_owned();
if !last_chunk.is_empty() {
chunks.push(last_chunk.into());
@@ -831,6 +828,21 @@ impl MessageEditor {
self.send(cx);
}
fn chat_with_follow(
&mut self,
_: &ChatWithFollow,
window: &mut Window,
cx: &mut Context<Self>,
) {
self.workspace
.update(cx, |this, cx| {
this.follow(CollaboratorId::Agent, window, cx)
})
.log_err();
self.send(cx);
}
fn cancel(&mut self, _: &editor::actions::Cancel, _: &mut Window, cx: &mut Context<Self>) {
cx.emit(MessageEditorEvent::Cancel)
}
@@ -1034,6 +1046,7 @@ impl MessageEditor {
self.editor.update(cx, |message_editor, cx| {
message_editor.edit([(cursor_anchor..cursor_anchor, completion.new_text)], cx);
message_editor.request_autoscroll(Autoscroll::fit(), cx);
});
if let Some(confirm) = completion.confirm {
confirm(CompletionIntent::Complete, window, cx);
@@ -1294,6 +1307,7 @@ impl Render for MessageEditor {
div()
.key_context("MessageEditor")
.on_action(cx.listener(Self::chat))
.on_action(cx.listener(Self::chat_with_follow))
.on_action(cx.listener(Self::cancel))
.capture_action(cx.listener(Self::paste))
.flex_1()
@@ -1602,6 +1616,7 @@ mod tests {
use gpui::{
AppContext, Entity, EventEmitter, FocusHandle, Focusable, TestAppContext, VisualTestContext,
};
use language_model::LanguageModelRegistry;
use lsp::{CompletionContext, CompletionTriggerKind};
use project::{CompletionIntent, Project, ProjectPath};
use serde_json::json;
@@ -2183,10 +2198,10 @@ mod tests {
assert_eq!(
current_completion_labels(editor),
&[
format!("eight.txt dir{slash}b{slash}"),
format!("seven.txt dir{slash}b{slash}"),
format!("six.txt dir{slash}b{slash}"),
format!("five.txt dir{slash}b{slash}"),
format!("eight.txt b{slash}"),
format!("seven.txt b{slash}"),
format!("six.txt b{slash}"),
format!("five.txt b{slash}"),
]
);
editor.set_text("", window, cx);
@@ -2214,10 +2229,10 @@ mod tests {
assert_eq!(
current_completion_labels(editor),
&[
format!("eight.txt dir{slash}b{slash}"),
format!("seven.txt dir{slash}b{slash}"),
format!("six.txt dir{slash}b{slash}"),
format!("five.txt dir{slash}b{slash}"),
format!("eight.txt b{slash}"),
format!("seven.txt b{slash}"),
format!("six.txt b{slash}"),
format!("five.txt b{slash}"),
"Files & Directories".into(),
"Symbols".into(),
"Threads".into(),
@@ -2250,7 +2265,7 @@ mod tests {
assert!(editor.has_visible_completions_menu());
assert_eq!(
current_completion_labels(editor),
vec![format!("one.txt dir{slash}a{slash}")]
vec![format!("one.txt a{slash}")]
);
});
@@ -2473,7 +2488,7 @@ mod tests {
format!("Lorem [@one.txt]({url_one}) Ipsum [@eight.txt]({url_eight}) @symbol ")
);
assert!(editor.has_visible_completions_menu());
assert_eq!(current_completion_labels(editor), &["MySymbol"]);
assert_eq!(current_completion_labels(editor), &["MySymbol one.txt L1"]);
});
editor.update_in(&mut cx, |editor, window, cx| {
@@ -2529,7 +2544,7 @@ mod tests {
format!("Lorem [@one.txt]({url_one}) Ipsum [@eight.txt]({url_eight}) [@MySymbol]({}) @file x.png", symbol.to_uri())
);
assert!(editor.has_visible_completions_menu());
assert_eq!(current_completion_labels(editor), &[format!("x.png dir{slash}")]);
assert_eq!(current_completion_labels(editor), &["x.png "]);
});
editor.update_in(&mut cx, |editor, window, cx| {
@@ -2571,7 +2586,7 @@ mod tests {
format!("Lorem [@one.txt]({url_one}) Ipsum [@eight.txt]({url_eight}) [@MySymbol]({}) @file x.png", symbol.to_uri())
);
assert!(editor.has_visible_completions_menu());
assert_eq!(current_completion_labels(editor), &[format!("x.png dir{slash}")]);
assert_eq!(current_completion_labels(editor), &["x.png "]);
});
editor.update_in(&mut cx, |editor, window, cx| {
@@ -2747,4 +2762,295 @@ mod tests {
_ => panic!("Expected Text mention for small file"),
}
}
#[gpui::test]
async fn test_insert_thread_summary(cx: &mut TestAppContext) {
init_test(cx);
cx.update(LanguageModelRegistry::test);
let fs = FakeFs::new(cx.executor());
fs.insert_tree("/project", json!({"file": ""})).await;
let project = Project::test(fs, [Path::new(path!("/project"))], cx).await;
let (workspace, cx) =
cx.add_window_view(|window, cx| Workspace::test_new(project.clone(), window, cx));
let text_thread_store = cx.new(|cx| TextThreadStore::fake(project.clone(), cx));
let history_store = cx.new(|cx| HistoryStore::new(text_thread_store, cx));
// Create a thread metadata to insert as summary
let thread_metadata = agent::DbThreadMetadata {
id: acp::SessionId("thread-123".into()),
title: "Previous Conversation".into(),
updated_at: chrono::Utc::now(),
};
let message_editor = cx.update(|window, cx| {
cx.new(|cx| {
let mut editor = MessageEditor::new(
workspace.downgrade(),
project.clone(),
history_store.clone(),
None,
Default::default(),
Default::default(),
"Test Agent".into(),
"Test",
EditorMode::AutoHeight {
min_lines: 1,
max_lines: None,
},
window,
cx,
);
editor.insert_thread_summary(thread_metadata.clone(), window, cx);
editor
})
});
// Construct expected values for verification
let expected_uri = MentionUri::Thread {
id: thread_metadata.id.clone(),
name: thread_metadata.title.to_string(),
};
let expected_link = format!("[@{}]({})", thread_metadata.title, expected_uri.to_uri());
message_editor.read_with(cx, |editor, cx| {
let text = editor.text(cx);
assert!(
text.contains(&expected_link),
"Expected editor text to contain thread mention link.\nExpected substring: {}\nActual text: {}",
expected_link,
text
);
let mentions = editor.mentions();
assert_eq!(
mentions.len(),
1,
"Expected exactly one mention after inserting thread summary"
);
assert!(
mentions.contains(&expected_uri),
"Expected mentions to contain the thread URI"
);
});
}
#[gpui::test]
async fn test_whitespace_trimming(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree("/project", json!({"file.rs": "fn main() {}"}))
.await;
let project = Project::test(fs, [Path::new(path!("/project"))], cx).await;
let (workspace, cx) =
cx.add_window_view(|window, cx| Workspace::test_new(project.clone(), window, cx));
let text_thread_store = cx.new(|cx| TextThreadStore::fake(project.clone(), cx));
let history_store = cx.new(|cx| HistoryStore::new(text_thread_store, cx));
let message_editor = cx.update(|window, cx| {
cx.new(|cx| {
MessageEditor::new(
workspace.downgrade(),
project.clone(),
history_store.clone(),
None,
Default::default(),
Default::default(),
"Test Agent".into(),
"Test",
EditorMode::AutoHeight {
min_lines: 1,
max_lines: None,
},
window,
cx,
)
})
});
let editor = message_editor.update(cx, |message_editor, _| message_editor.editor.clone());
cx.run_until_parked();
editor.update_in(cx, |editor, window, cx| {
editor.set_text(" hello world ", window, cx);
});
let (content, _) = message_editor
.update(cx, |message_editor, cx| message_editor.contents(false, cx))
.await
.unwrap();
assert_eq!(
content,
vec![acp::ContentBlock::Text(acp::TextContent {
text: "hello world".into(),
annotations: None,
meta: None
})]
);
}
#[gpui::test]
async fn test_autoscroll_after_insert_selections(cx: &mut TestAppContext) {
init_test(cx);
let app_state = cx.update(AppState::test);
cx.update(|cx| {
language::init(cx);
editor::init(cx);
workspace::init(app_state.clone(), cx);
Project::init_settings(cx);
});
app_state
.fs
.as_fake()
.insert_tree(
path!("/dir"),
json!({
"test.txt": "line1\nline2\nline3\nline4\nline5\n",
}),
)
.await;
let project = Project::test(app_state.fs.clone(), [path!("/dir").as_ref()], cx).await;
let window = cx.add_window(|window, cx| Workspace::test_new(project.clone(), window, cx));
let workspace = window.root(cx).unwrap();
let worktree = project.update(cx, |project, cx| {
let mut worktrees = project.worktrees(cx).collect::<Vec<_>>();
assert_eq!(worktrees.len(), 1);
worktrees.pop().unwrap()
});
let worktree_id = worktree.read_with(cx, |worktree, _| worktree.id());
let mut cx = VisualTestContext::from_window(*window, cx);
// Open a regular editor with the created file, and select a portion of
// the text that will be used for the selections that are meant to be
// inserted in the agent panel.
let editor = workspace
.update_in(&mut cx, |workspace, window, cx| {
workspace.open_path(
ProjectPath {
worktree_id,
path: rel_path("test.txt").into(),
},
None,
false,
window,
cx,
)
})
.await
.unwrap()
.downcast::<Editor>()
.unwrap();
editor.update_in(&mut cx, |editor, window, cx| {
editor.change_selections(Default::default(), window, cx, |selections| {
selections.select_ranges([Point::new(0, 0)..Point::new(0, 5)]);
});
});
let text_thread_store = cx.new(|cx| TextThreadStore::fake(project.clone(), cx));
let history_store = cx.new(|cx| HistoryStore::new(text_thread_store, cx));
// Create a new `MessageEditor`. The `EditorMode::full()` has to be used
// to ensure we have a fixed viewport, so we can eventually actually
// place the cursor outside of the visible area.
let message_editor = workspace.update_in(&mut cx, |workspace, window, cx| {
let workspace_handle = cx.weak_entity();
let message_editor = cx.new(|cx| {
MessageEditor::new(
workspace_handle,
project.clone(),
history_store.clone(),
None,
Default::default(),
Default::default(),
"Test Agent".into(),
"Test",
EditorMode::full(),
window,
cx,
)
});
workspace.active_pane().update(cx, |pane, cx| {
pane.add_item(
Box::new(cx.new(|_| MessageEditorItem(message_editor.clone()))),
true,
true,
None,
window,
cx,
);
});
message_editor
});
message_editor.update_in(&mut cx, |message_editor, window, cx| {
message_editor.editor.update(cx, |editor, cx| {
// Update the Agent Panel's Message Editor text to have 100
// lines, ensuring that the cursor is set at line 90 and that we
// then scroll all the way to the top, so the cursor's position
// remains off screen.
let mut lines = String::new();
for _ in 1..=100 {
lines.push_str(&"Another line in the agent panel's message editor\n");
}
editor.set_text(lines.as_str(), window, cx);
editor.change_selections(Default::default(), window, cx, |selections| {
selections.select_ranges([Point::new(90, 0)..Point::new(90, 0)]);
});
editor.set_scroll_position(gpui::Point::new(0., 0.), window, cx);
});
});
cx.run_until_parked();
// Before proceeding, let's assert that the cursor is indeed off screen,
// otherwise the rest of the test doesn't make sense.
message_editor.update_in(&mut cx, |message_editor, window, cx| {
message_editor.editor.update(cx, |editor, cx| {
let snapshot = editor.snapshot(window, cx);
let cursor_row = editor.selections.newest::<Point>(&snapshot).head().row;
let scroll_top = snapshot.scroll_position().y as u32;
let visible_lines = editor.visible_line_count().unwrap() as u32;
let visible_range = scroll_top..(scroll_top + visible_lines);
assert!(!visible_range.contains(&cursor_row));
})
});
// Now let's insert the selection in the Agent Panel's editor and
// confirm that, after the insertion, the cursor is now in the visible
// range.
message_editor.update_in(&mut cx, |message_editor, window, cx| {
message_editor.insert_selections(window, cx);
});
cx.run_until_parked();
message_editor.update_in(&mut cx, |message_editor, window, cx| {
message_editor.editor.update(cx, |editor, cx| {
let snapshot = editor.snapshot(window, cx);
let cursor_row = editor.selections.newest::<Point>(&snapshot).head().row;
let scroll_top = snapshot.scroll_position().y as u32;
let visible_lines = editor.visible_line_count().unwrap() as u32;
let visible_range = scroll_top..(scroll_top + visible_lines);
assert!(visible_range.contains(&cursor_row));
})
});
}
}

View File

@@ -1,8 +1,10 @@
use acp_thread::AgentSessionModes;
use agent_client_protocol as acp;
use agent_servers::AgentServer;
use agent_settings::AgentSettings;
use fs::Fs;
use gpui::{Context, Entity, FocusHandle, WeakEntity, Window, prelude::*};
use settings::Settings as _;
use std::{rc::Rc, sync::Arc};
use ui::{
Button, ContextMenu, ContextMenuEntry, DocumentationEdge, DocumentationSide, KeyBinding,
@@ -84,6 +86,14 @@ impl ModeSelector {
let current_mode = self.connection.current_mode();
let default_mode = self.agent_server.default_mode(cx);
let settings = AgentSettings::get_global(cx);
let side = match settings.dock {
settings::DockPosition::Left => DocumentationSide::Right,
settings::DockPosition::Bottom | settings::DockPosition::Right => {
DocumentationSide::Left
}
};
for mode in all_modes {
let is_selected = &mode.id == &current_mode;
let is_default = Some(&mode.id) == default_mode.as_ref();
@@ -91,7 +101,7 @@ impl ModeSelector {
.toggleable(IconPosition::End, is_selected);
let entry = if let Some(description) = &mode.description {
entry.documentation_aside(DocumentationSide::Left, DocumentationEdge::Bottom, {
entry.documentation_aside(side, DocumentationEdge::Bottom, {
let description = description.clone();
move |cx| {

View File

@@ -450,6 +450,7 @@ impl Render for AcpThreadHistory {
v_flex()
.key_context("ThreadHistory")
.size_full()
.bg(cx.theme().colors().panel_background)
.on_action(cx.listener(Self::select_previous))
.on_action(cx.listener(Self::select_next))
.on_action(cx.listener(Self::select_first))

View File

@@ -17,7 +17,9 @@ use client::zed_urls;
use cloud_llm_client::PlanV1;
use collections::{HashMap, HashSet};
use editor::scroll::Autoscroll;
use editor::{Editor, EditorEvent, EditorMode, MultiBuffer, PathKey, SelectionEffects};
use editor::{
Editor, EditorEvent, EditorMode, MultiBuffer, PathKey, SelectionEffects, SizingBehavior,
};
use file_icons::FileIcons;
use fs::Fs;
use futures::FutureExt as _;
@@ -102,7 +104,7 @@ impl ThreadError {
{
Self::AuthenticationRequired(acp_error.message.clone().into())
} else {
let string = error.to_string();
let string = format!("{:#}", error);
// TODO: we should have Gemini return better errors here.
if agent.clone().downcast::<agent_servers::Gemini>().is_some()
&& string.contains("Could not load the default credentials")
@@ -111,7 +113,7 @@ impl ThreadError {
{
Self::AuthenticationRequired(string.into())
} else {
Self::Other(error.to_string().into())
Self::Other(string.into())
}
}
}
@@ -793,7 +795,8 @@ impl AcpThreadView {
if let Some(load_err) = err.downcast_ref::<LoadError>() {
self.thread_state = ThreadState::LoadError(load_err.clone());
} else {
self.thread_state = ThreadState::LoadError(LoadError::Other(err.to_string().into()))
self.thread_state =
ThreadState::LoadError(LoadError::Other(format!("{:#}", err).into()))
}
if self.message_editor.focus_handle(cx).is_focused(window) {
self.focus_handle.focus(window)
@@ -881,6 +884,7 @@ impl AcpThreadView {
cx: &mut Context<Self>,
) {
self.set_editor_is_expanded(!self.editor_expanded, cx);
cx.stop_propagation();
cx.notify();
}
@@ -892,7 +896,7 @@ impl AcpThreadView {
EditorMode::Full {
scale_ui_elements_with_buffer_font_size: false,
show_active_line_background: false,
sized_by_content: false,
sizing_behavior: SizingBehavior::ExcludeOverscrollMargin,
},
cx,
)
@@ -1469,6 +1473,106 @@ impl AcpThreadView {
return;
};
// Check for the experimental "terminal-auth" _meta field
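// As a hedged sketch (the values below are hypothetical; only the key names are
// read by the parsing code that follows), an auth method advertising terminal
// login might carry a `_meta` payload shaped like:
// {
//   "terminal-auth": {
//     "label": "Claude Code Login",
//     "command": "claude",
//     "args": ["login"],
//     "env": { "NO_BROWSER": "1" }
//   }
// }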
let auth_method = connection.auth_methods().iter().find(|m| m.id == method);
if let Some(auth_method) = auth_method {
if let Some(meta) = &auth_method.meta {
if let Some(terminal_auth) = meta.get("terminal-auth") {
// Extract terminal auth details from meta
if let (Some(command), Some(label)) = (
terminal_auth.get("command").and_then(|v| v.as_str()),
terminal_auth.get("label").and_then(|v| v.as_str()),
) {
let args = terminal_auth
.get("args")
.and_then(|v| v.as_array())
.map(|arr| {
arr.iter()
.filter_map(|v| v.as_str().map(String::from))
.collect()
})
.unwrap_or_default();
let env = terminal_auth
.get("env")
.and_then(|v| v.as_object())
.map(|obj| {
obj.iter()
.filter_map(|(k, v)| {
v.as_str().map(|val| (k.clone(), val.to_string()))
})
.collect::<HashMap<String, String>>()
})
.unwrap_or_default();
// Build SpawnInTerminal from _meta
let login = task::SpawnInTerminal {
id: task::TaskId(format!("external-agent-{}-login", label)),
full_label: label.to_string(),
label: label.to_string(),
command: Some(command.to_string()),
args,
command_label: label.to_string(),
env,
use_new_terminal: true,
allow_concurrent_runs: true,
hide: task::HideStrategy::Always,
..Default::default()
};
self.thread_error.take();
configuration_view.take();
pending_auth_method.replace(method.clone());
if let Some(workspace) = self.workspace.upgrade() {
let authenticate = Self::spawn_external_agent_login(
login, workspace, false, window, cx,
);
cx.notify();
self.auth_task = Some(cx.spawn_in(window, {
let agent = self.agent.clone();
async move |this, cx| {
let result = authenticate.await;
match &result {
Ok(_) => telemetry::event!(
"Authenticate Agent Succeeded",
agent = agent.telemetry_id()
),
Err(_) => {
telemetry::event!(
"Authenticate Agent Failed",
agent = agent.telemetry_id(),
)
}
}
this.update_in(cx, |this, window, cx| {
if let Err(err) = result {
if let ThreadState::Unauthenticated {
pending_auth_method,
..
} = &mut this.thread_state
{
pending_auth_method.take();
}
this.handle_thread_error(err, cx);
} else {
this.reset(window, cx);
}
this.auth_task.take()
})
.ok();
}
}));
}
return;
}
}
}
}
if method.0.as_ref() == "gemini-api-key" {
let registry = LanguageModelRegistry::global(cx);
let provider = registry
@@ -3631,6 +3735,7 @@ impl AcpThreadView {
.child(
h_flex()
.id("edits-container")
.cursor_pointer()
.gap_1()
.child(Disclosure::new("edits-disclosure", expanded))
.map(|this| {
@@ -3770,6 +3875,7 @@ impl AcpThreadView {
Label::new(name.to_string())
.size(LabelSize::XSmall)
.buffer_font(cx)
.ml_1p5()
});
let file_icon = FileIcons::get_icon(path.as_std_path(), cx)
@@ -3801,14 +3907,30 @@ impl AcpThreadView {
})
.child(
h_flex()
.id(("file-name-row", index))
.relative()
.id(("file-name", index))
.pr_8()
.gap_1p5()
.w_full()
.overflow_x_scroll()
.child(file_icon)
.child(h_flex().gap_0p5().children(file_name).children(file_path))
.child(
h_flex()
.id(("file-name-path", index))
.cursor_pointer()
.pr_0p5()
.gap_0p5()
.hover(|s| s.bg(cx.theme().colors().element_hover))
.rounded_xs()
.child(file_icon)
.children(file_name)
.children(file_path)
.tooltip(Tooltip::text("Go to File"))
.on_click({
let buffer = buffer.clone();
cx.listener(move |this, _, window, cx| {
this.open_edited_buffer(&buffer, window, cx);
})
}),
)
.child(
div()
.absolute()
@@ -3818,13 +3940,7 @@ impl AcpThreadView {
.bottom_0()
.right_0()
.bg(overlay_gradient),
)
.on_click({
let buffer = buffer.clone();
cx.listener(move |this, _, window, cx| {
this.open_edited_buffer(&buffer, window, cx);
})
}),
),
)
.child(
h_flex()
@@ -3966,8 +4082,12 @@ impl AcpThreadView {
)
}
})
.on_click(cx.listener(|_, _, window, cx| {
window.dispatch_action(Box::new(ExpandMessageEditor), cx);
.on_click(cx.listener(|this, _, window, cx| {
this.expand_message_editor(
&ExpandMessageEditor,
window,
cx,
);
})),
),
),
@@ -4571,14 +4691,29 @@ impl AcpThreadView {
window: &mut Window,
cx: &mut Context<Self>,
) {
if window.is_window_active() || !self.notifications.is_empty() {
if !self.notifications.is_empty() {
return;
}
let settings = AgentSettings::get_global(cx);
let window_is_inactive = !window.is_window_active();
let panel_is_hidden = self
.workspace
.upgrade()
.map(|workspace| AgentPanel::is_hidden(&workspace, cx))
.unwrap_or(true);
let should_notify = window_is_inactive || panel_is_hidden;
if !should_notify {
return;
}
// TODO: Change this once we have title summarization for external agents.
let title = self.agent.name();
match AgentSettings::get_global(cx).notify_when_agent_waiting {
match settings.notify_when_agent_waiting {
NotifyWhenAgentWaiting::PrimaryScreen => {
if let Some(primary) = cx.primary_display() {
self.pop_up(icon, caption.into(), title, window, primary, cx);
@@ -5581,7 +5716,7 @@ fn default_markdown_style(
let theme_settings = ThemeSettings::get_global(cx);
let colors = cx.theme().colors();
let buffer_font_size = TextSize::Small.rems(cx);
let buffer_font_size = theme_settings.agent_buffer_font_size(cx);
let mut text_style = window.text_style();
let line_height = buffer_font_size * 1.75;
@@ -5593,9 +5728,9 @@ fn default_markdown_style(
};
let font_size = if buffer_font {
TextSize::Small.rems(cx)
theme_settings.agent_buffer_font_size(cx)
} else {
TextSize::Default.rems(cx)
theme_settings.agent_ui_font_size(cx)
};
let text_color = if muted_text {
@@ -5892,6 +6027,107 @@ pub(crate) mod tests {
);
}
#[gpui::test]
async fn test_notification_when_panel_hidden(cx: &mut TestAppContext) {
init_test(cx);
let (thread_view, cx) = setup_thread_view(StubAgentServer::default_response(), cx).await;
add_to_workspace(thread_view.clone(), cx);
let message_editor = cx.read(|cx| thread_view.read(cx).message_editor.clone());
message_editor.update_in(cx, |editor, window, cx| {
editor.set_text("Hello", window, cx);
});
// Window is active (don't deactivate), but panel will be hidden
// Note: In the test environment, the panel is not actually added to the dock,
// so AgentPanel::is_hidden will return true
thread_view.update_in(cx, |thread_view, window, cx| {
thread_view.send(window, cx);
});
cx.run_until_parked();
// Should show notification because window is active but panel is hidden
assert!(
cx.windows()
.iter()
.any(|window| window.downcast::<AgentNotification>().is_some()),
"Expected notification when panel is hidden"
);
}
#[gpui::test]
async fn test_notification_still_works_when_window_inactive(cx: &mut TestAppContext) {
init_test(cx);
let (thread_view, cx) = setup_thread_view(StubAgentServer::default_response(), cx).await;
let message_editor = cx.read(|cx| thread_view.read(cx).message_editor.clone());
message_editor.update_in(cx, |editor, window, cx| {
editor.set_text("Hello", window, cx);
});
// Deactivate window - should show notification regardless of setting
cx.deactivate_window();
thread_view.update_in(cx, |thread_view, window, cx| {
thread_view.send(window, cx);
});
cx.run_until_parked();
// Should still show notification when window is inactive (existing behavior)
assert!(
cx.windows()
.iter()
.any(|window| window.downcast::<AgentNotification>().is_some()),
"Expected notification when window is inactive"
);
}
#[gpui::test]
async fn test_notification_respects_never_setting(cx: &mut TestAppContext) {
init_test(cx);
// Set notify_when_agent_waiting to Never
cx.update(|cx| {
AgentSettings::override_global(
AgentSettings {
notify_when_agent_waiting: NotifyWhenAgentWaiting::Never,
..AgentSettings::get_global(cx).clone()
},
cx,
);
});
let (thread_view, cx) = setup_thread_view(StubAgentServer::default_response(), cx).await;
let message_editor = cx.read(|cx| thread_view.read(cx).message_editor.clone());
message_editor.update_in(cx, |editor, window, cx| {
editor.set_text("Hello", window, cx);
});
// Window is active
thread_view.update_in(cx, |thread_view, window, cx| {
thread_view.send(window, cx);
});
cx.run_until_parked();
// Should NOT show notification because notify_when_agent_waiting is Never
assert!(
!cx.windows()
.iter()
.any(|window| window.downcast::<AgentNotification>().is_some()),
"Expected no notification when notify_when_agent_waiting is Never"
);
}
async fn setup_thread_view(
agent: impl AgentServer + 'static,
cx: &mut TestAppContext,

View File

@@ -23,15 +23,17 @@ use language::LanguageRegistry;
use language_model::{
LanguageModelProvider, LanguageModelProviderId, LanguageModelRegistry, ZED_CLOUD_PROVIDER_ID,
};
use language_models::AllLanguageModelSettings;
use notifications::status_toast::{StatusToast, ToastIcon};
use project::{
agent_server_store::{AgentServerStore, CLAUDE_CODE_NAME, CODEX_NAME, GEMINI_NAME},
context_server_store::{ContextServerConfiguration, ContextServerStatus, ContextServerStore},
};
use settings::{SettingsStore, update_settings_file};
use settings::{Settings, SettingsStore, update_settings_file};
use ui::{
Chip, CommonAnimationExt, ContextMenu, Disclosure, Divider, DividerColor, ElevationIndex,
Indicator, PopoverMenu, Switch, SwitchColor, Tooltip, WithScrollbar, prelude::*,
Button, ButtonStyle, Chip, CommonAnimationExt, ContextMenu, Disclosure, Divider, DividerColor,
ElevationIndex, IconName, IconPosition, IconSize, Indicator, LabelSize, PopoverMenu, Switch,
SwitchColor, Tooltip, WithScrollbar, prelude::*,
};
use util::ResultExt as _;
use workspace::{Workspace, create_and_open_local_file};
@@ -152,7 +154,42 @@ pub enum AssistantConfigurationEvent {
impl EventEmitter<AssistantConfigurationEvent> for AgentConfiguration {}
enum AgentIcon {
Name(IconName),
Path(SharedString),
}
impl AgentConfiguration {
fn render_section_title(
&mut self,
title: impl Into<SharedString>,
description: impl Into<SharedString>,
menu: AnyElement,
) -> impl IntoElement {
h_flex()
.p_4()
.pb_0()
.mb_2p5()
.items_start()
.justify_between()
.child(
v_flex()
.w_full()
.gap_0p5()
.child(
h_flex()
.pr_1()
.w_full()
.gap_2()
.justify_between()
.flex_wrap()
.child(Headline::new(title.into()))
.child(menu),
)
.child(Label::new(description.into()).color(Color::Muted)),
)
}
fn render_provider_configuration_block(
&mut self,
provider: &Arc<dyn LanguageModelProvider>,
@@ -287,7 +324,7 @@ impl AgentConfiguration {
"Start New Thread",
)
.full_width()
.style(ButtonStyle::Filled)
.style(ButtonStyle::Outlined)
.layer(ElevationIndex::ModalSurface)
.icon_position(IconPosition::Start)
.icon(IconName::Thread)
@@ -303,89 +340,122 @@ impl AgentConfiguration {
}
})),
)
}),
})
.when(
is_expanded && is_removable_provider(&provider.id(), cx),
|this| {
this.child(
Button::new(
SharedString::from(format!("delete-provider-{provider_id}")),
"Remove Provider",
)
.full_width()
.style(ButtonStyle::Outlined)
.icon_position(IconPosition::Start)
.icon(IconName::Trash)
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.label_size(LabelSize::Small)
.on_click(cx.listener({
let provider = provider.clone();
move |this, _event, window, cx| {
this.delete_provider(provider.clone(), window, cx);
}
})),
)
},
),
)
}
fn delete_provider(
&mut self,
provider: Arc<dyn LanguageModelProvider>,
window: &mut Window,
cx: &mut Context<Self>,
) {
let fs = self.fs.clone();
let provider_id = provider.id();
cx.spawn_in(window, async move |_, cx| {
cx.update(|_window, cx| {
update_settings_file(fs.clone(), cx, {
let provider_id = provider_id.clone();
move |settings, _| {
if let Some(ref mut openai_compatible) = settings
.language_models
.as_mut()
.and_then(|lm| lm.openai_compatible.as_mut())
{
let key_to_remove: Arc<str> = Arc::from(provider_id.0.as_ref());
openai_compatible.remove(&key_to_remove);
}
}
});
})
.log_err();
cx.update(|_window, cx| {
LanguageModelRegistry::global(cx).update(cx, {
let provider_id = provider_id.clone();
move |registry, cx| {
registry.unregister_provider(provider_id, cx);
}
})
})
.log_err();
anyhow::Ok(())
})
.detach_and_log_err(cx);
}
fn render_provider_configuration_section(
&mut self,
cx: &mut Context<Self>,
) -> impl IntoElement {
let providers = LanguageModelRegistry::read_global(cx).providers();
let popover_menu = PopoverMenu::new("add-provider-popover")
.trigger(
Button::new("add-provider", "Add Provider")
.style(ButtonStyle::Outlined)
.icon_position(IconPosition::Start)
.icon(IconName::Plus)
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.label_size(LabelSize::Small),
)
.anchor(gpui::Corner::TopRight)
.menu({
let workspace = self.workspace.clone();
move |window, cx| {
Some(ContextMenu::build(window, cx, |menu, _window, _cx| {
menu.header("Compatible APIs").entry("OpenAI", None, {
let workspace = workspace.clone();
move |window, cx| {
workspace
.update(cx, |workspace, cx| {
AddLlmProviderModal::toggle(
LlmCompatibleProvider::OpenAi,
workspace,
window,
cx,
);
})
.log_err();
}
})
}))
}
});
v_flex()
.w_full()
.child(
h_flex()
.p(DynamicSpacing::Base16.rems(cx))
.pr(DynamicSpacing::Base20.rems(cx))
.pb_0()
.mb_2p5()
.items_start()
.justify_between()
.child(
v_flex()
.w_full()
.gap_0p5()
.child(
h_flex()
.pr_1()
.w_full()
.gap_2()
.justify_between()
.child(Headline::new("LLM Providers"))
.child(
PopoverMenu::new("add-provider-popover")
.trigger(
Button::new("add-provider", "Add Provider")
.style(ButtonStyle::Filled)
.layer(ElevationIndex::ModalSurface)
.icon_position(IconPosition::Start)
.icon(IconName::Plus)
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.label_size(LabelSize::Small),
)
.anchor(gpui::Corner::TopRight)
.menu({
let workspace = self.workspace.clone();
move |window, cx| {
Some(ContextMenu::build(
window,
cx,
|menu, _window, _cx| {
menu.header("Compatible APIs").entry(
"OpenAI",
None,
{
let workspace =
workspace.clone();
move |window, cx| {
workspace
.update(cx, |workspace, cx| {
AddLlmProviderModal::toggle(
LlmCompatibleProvider::OpenAi,
workspace,
window,
cx,
);
})
.log_err();
}
},
)
},
))
}
}),
),
)
.child(
Label::new("Add at least one provider to use AI-powered features with Zed's native agent.")
.color(Color::Muted),
),
),
)
.child(self.render_section_title(
"LLM Providers",
"Add at least one provider to use AI-powered features with Zed's native agent.",
popover_menu.into_any_element(),
))
.child(
div()
.w_full()
@@ -464,8 +534,7 @@ impl AgentConfiguration {
let add_server_popover = PopoverMenu::new("add-server-popover")
.trigger(
Button::new("add-server", "Add Server")
.style(ButtonStyle::Filled)
.layer(ElevationIndex::ModalSurface)
.style(ButtonStyle::Outlined)
.icon_position(IconPosition::Start)
.icon(IconName::Plus)
.icon_size(IconSize::Small)
@@ -498,61 +567,57 @@ impl AgentConfiguration {
});
v_flex()
.p(DynamicSpacing::Base16.rems(cx))
.pr(DynamicSpacing::Base20.rems(cx))
.gap_2()
.border_b_1()
.border_color(cx.theme().colors().border)
.child(self.render_section_title(
"Model Context Protocol (MCP) Servers",
"All MCP servers connected directly or via a Zed extension.",
add_server_popover.into_any_element(),
))
.child(
h_flex()
v_flex()
.pl_4()
.pb_4()
.pr_5()
.w_full()
.items_start()
.justify_between()
.gap_1()
.child(
v_flex()
.gap_0p5()
.child(Headline::new("Model Context Protocol (MCP) Servers"))
.child(
Label::new(
"All MCP servers connected directly or via a Zed extension.",
)
.color(Color::Muted),
),
)
.child(add_server_popover),
)
.child(v_flex().w_full().gap_1().map(|mut parent| {
if context_server_ids.is_empty() {
parent.child(
h_flex()
.p_4()
.justify_center()
.border_1()
.border_dashed()
.border_color(cx.theme().colors().border.opacity(0.6))
.rounded_sm()
.child(
Label::new("No MCP servers added yet.")
.color(Color::Muted)
.size(LabelSize::Small),
),
)
} else {
for (index, context_server_id) in context_server_ids.into_iter().enumerate() {
if index > 0 {
parent = parent.child(
Divider::horizontal()
.color(DividerColor::BorderFaded)
.into_any_element(),
);
.map(|mut parent| {
if context_server_ids.is_empty() {
parent.child(
h_flex()
.p_4()
.justify_center()
.border_1()
.border_dashed()
.border_color(cx.theme().colors().border.opacity(0.6))
.rounded_sm()
.child(
Label::new("No MCP servers added yet.")
.color(Color::Muted)
.size(LabelSize::Small),
),
)
} else {
for (index, context_server_id) in
context_server_ids.into_iter().enumerate()
{
if index > 0 {
parent = parent.child(
Divider::horizontal()
.color(DividerColor::BorderFaded)
.into_any_element(),
);
}
parent = parent.child(self.render_context_server(
context_server_id,
window,
cx,
));
}
parent
}
parent =
parent.child(self.render_context_server(context_server_id, window, cx));
}
parent
}
}))
}),
)
}
fn render_context_server(
@@ -597,12 +662,12 @@ impl AgentConfiguration {
let (source_icon, source_tooltip) = if is_from_extension {
(
IconName::ZedMcpExtension,
IconName::ZedSrcExtension,
"This MCP server was installed from an extension.",
)
} else {
(
IconName::ZedMcpCustom,
IconName::ZedSrcCustom,
"This custom MCP server was installed directly.",
)
};
@@ -884,9 +949,9 @@ impl AgentConfiguration {
}
fn render_agent_servers_section(&mut self, cx: &mut Context<Self>) -> impl IntoElement {
let user_defined_agents = self
.agent_server_store
.read(cx)
let agent_server_store = self.agent_server_store.read(cx);
let user_defined_agents = agent_server_store
.external_agents()
.filter(|name| {
name.0 != GEMINI_NAME && name.0 != CLAUDE_CODE_NAME && name.0 != CODEX_NAME
@@ -897,102 +962,121 @@ impl AgentConfiguration {
let user_defined_agents = user_defined_agents
.into_iter()
.map(|name| {
self.render_agent_server(IconName::Ai, name)
let icon = if let Some(icon_path) = agent_server_store.agent_icon(&name) {
AgentIcon::Path(icon_path)
} else {
AgentIcon::Name(IconName::Ai)
};
self.render_agent_server(icon, name, true)
.into_any_element()
})
.collect::<Vec<_>>();
let add_agents_button = Button::new("add-agent", "Add Agent")
.style(ButtonStyle::Outlined)
.icon_position(IconPosition::Start)
.icon(IconName::Plus)
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.label_size(LabelSize::Small)
.on_click(move |_, window, cx| {
if let Some(workspace) = window.root().flatten() {
let workspace = workspace.downgrade();
window
.spawn(cx, async |cx| {
open_new_agent_servers_entry_in_settings_editor(workspace, cx).await
})
.detach_and_log_err(cx);
}
});
v_flex()
.border_b_1()
.border_color(cx.theme().colors().border)
.child(
v_flex()
.p(DynamicSpacing::Base16.rems(cx))
.pr(DynamicSpacing::Base20.rems(cx))
.gap_2()
.child(self.render_section_title(
"External Agents",
"All agents connected through the Agent Client Protocol.",
add_agents_button.into_any_element(),
))
.child(
v_flex()
.gap_0p5()
.child(
h_flex()
.pr_1()
.w_full()
.gap_2()
.justify_between()
.child(Headline::new("External Agents"))
.child(
Button::new("add-agent", "Add Agent")
.style(ButtonStyle::Filled)
.layer(ElevationIndex::ModalSurface)
.icon_position(IconPosition::Start)
.icon(IconName::Plus)
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.label_size(LabelSize::Small)
.on_click(
move |_, window, cx| {
if let Some(workspace) = window.root().flatten() {
let workspace = workspace.downgrade();
window
.spawn(cx, async |cx| {
open_new_agent_servers_entry_in_settings_editor(
workspace,
cx,
).await
})
.detach_and_log_err(cx);
}
}
),
)
)
.child(
Label::new(
"All agents connected through the Agent Client Protocol.",
)
.color(Color::Muted),
),
)
.child(self.render_agent_server(
IconName::AiClaude,
"Claude Code",
))
.child(Divider::horizontal().color(DividerColor::BorderFaded))
.child(self.render_agent_server(
IconName::AiOpenAi,
"Codex",
))
.child(Divider::horizontal().color(DividerColor::BorderFaded))
.child(self.render_agent_server(
IconName::AiGemini,
"Gemini CLI",
))
.map(|mut parent| {
for agent in user_defined_agents {
parent = parent.child(Divider::horizontal().color(DividerColor::BorderFaded))
.child(agent);
}
parent
})
.p_4()
.pt_0()
.gap_2()
.child(self.render_agent_server(
AgentIcon::Name(IconName::AiClaude),
"Claude Code",
false,
))
.child(Divider::horizontal().color(DividerColor::BorderFaded))
.child(self.render_agent_server(
AgentIcon::Name(IconName::AiOpenAi),
"Codex",
false,
))
.child(Divider::horizontal().color(DividerColor::BorderFaded))
.child(self.render_agent_server(
AgentIcon::Name(IconName::AiGemini),
"Gemini CLI",
false,
))
.map(|mut parent| {
for agent in user_defined_agents {
parent = parent
.child(
Divider::horizontal().color(DividerColor::BorderFaded),
)
.child(agent);
}
parent
}),
),
)
}
fn render_agent_server(
&self,
icon: IconName,
icon: AgentIcon,
name: impl Into<SharedString>,
external: bool,
) -> impl IntoElement {
h_flex().gap_1p5().justify_between().child(
h_flex()
.gap_1p5()
.child(Icon::new(icon).size(IconSize::Small).color(Color::Muted))
.child(Label::new(name.into()))
.child(
Icon::new(IconName::Check)
.size(IconSize::Small)
.color(Color::Success),
),
)
let name = name.into();
let icon = match icon {
AgentIcon::Name(icon_name) => Icon::new(icon_name)
.size(IconSize::Small)
.color(Color::Muted),
AgentIcon::Path(icon_path) => Icon::from_path(icon_path)
.size(IconSize::Small)
.color(Color::Muted),
};
let tooltip_id = SharedString::new(format!("agent-source-{}", name));
let tooltip_message = format!("The {} agent was installed from an extension.", name);
h_flex()
.gap_1p5()
.child(icon)
.child(Label::new(name))
.when(external, |this| {
this.child(
div()
.id(tooltip_id)
.flex_none()
.tooltip(Tooltip::text(tooltip_message))
.child(
Icon::new(IconName::ZedSrcExtension)
.size(IconSize::Small)
.color(Color::Muted),
),
)
})
.child(
Icon::new(IconName::Check)
.color(Color::Success)
.size(IconSize::Small),
)
}
}
@@ -1221,3 +1305,14 @@ fn find_text_in_buffer(
None
}
}
// OpenAI-compatible providers are user-configured and can be removed,
// whereas built-in providers (like Anthropic, OpenAI, Google, etc.) can't.
//
// If we add more API-compatible provider types in the future,
// they should also be treated as removable here.
fn is_removable_provider(provider_id: &LanguageModelProviderId, cx: &App) -> bool {
AllLanguageModelSettings::get_global(cx)
.openai_compatible
.contains_key(provider_id.0.as_ref())
}
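// For instance, a hypothetical user-added "my-proxy" entry under
// `language_models.openai_compatible` would match here and be removable, while a
// built-in provider id such as Anthropic's would not match and therefore stays.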

View File

@@ -25,6 +25,7 @@ use std::{
any::{Any, TypeId},
collections::hash_map::Entry,
ops::Range,
rc::Rc,
sync::Arc,
};
use ui::{CommonAnimationExt, IconButtonShape, KeyBinding, Tooltip, prelude::*, vertical_divider};
@@ -70,14 +71,6 @@ impl AgentDiffThread {
}
}
fn is_generating(&self, cx: &App) -> bool {
match self {
AgentDiffThread::AcpThread(thread) => {
thread.read(cx).status() == acp_thread::ThreadStatus::Generating
}
}
}
fn has_pending_edit_tool_uses(&self, cx: &App) -> bool {
match self {
AgentDiffThread::AcpThread(thread) => thread.read(cx).has_pending_edit_tool_calls(),
@@ -524,12 +517,7 @@ impl Item for AgentDiffPane {
.update(cx, |editor, cx| editor.deactivated(window, cx));
}
fn navigate(
&mut self,
data: Box<dyn Any>,
window: &mut Window,
cx: &mut Context<Self>,
) -> bool {
fn navigate(&mut self, data: Rc<dyn Any>, window: &mut Window, cx: &mut Context<Self>) -> bool {
self.editor
.update(cx, |editor, cx| editor.navigate(data, window, cx))
}
@@ -970,9 +958,7 @@ impl AgentDiffToolbar {
None => ToolbarItemLocation::Hidden,
Some(AgentDiffToolbarItem::Pane(_)) => ToolbarItemLocation::PrimaryRight,
Some(AgentDiffToolbarItem::Editor { state, .. }) => match state {
EditorState::Generating | EditorState::Reviewing => {
ToolbarItemLocation::PrimaryRight
}
EditorState::Reviewing => ToolbarItemLocation::PrimaryRight,
EditorState::Idle => ToolbarItemLocation::Hidden,
},
}
@@ -1050,7 +1036,6 @@ impl Render for AgentDiffToolbar {
let content = match state {
EditorState::Idle => return Empty.into_any(),
EditorState::Generating => vec![spinner_icon],
EditorState::Reviewing => vec![
h_flex()
.child(
@@ -1222,7 +1207,6 @@ pub struct AgentDiff {
pub enum EditorState {
Idle,
Reviewing,
Generating,
}
struct WorkspaceThread {
@@ -1545,15 +1529,11 @@ impl AgentDiff {
multibuffer.add_diff(diff_handle.clone(), cx);
});
let new_state = if thread.is_generating(cx) {
EditorState::Generating
} else {
EditorState::Reviewing
};
let reviewing_state = EditorState::Reviewing;
let previous_state = self
.reviewing_editors
.insert(weak_editor.clone(), new_state.clone());
.insert(weak_editor.clone(), reviewing_state.clone());
if previous_state.is_none() {
editor.update(cx, |editor, cx| {
@@ -1566,7 +1546,9 @@ impl AgentDiff {
unaffected.remove(weak_editor);
}
if new_state == EditorState::Reviewing && previous_state != Some(new_state) {
if reviewing_state == EditorState::Reviewing
&& previous_state != Some(reviewing_state)
{
// Jump to first hunk when we enter review mode
editor.update(cx, |editor, cx| {
let snapshot = multibuffer.read(cx).snapshot(cx);

View File

@@ -19,8 +19,6 @@ use settings::{
use zed_actions::OpenBrowser;
use zed_actions::agent::{OpenClaudeCodeOnboardingModal, ReauthenticateAgent};
use crate::acp::{AcpThreadHistory, ThreadHistoryEvent};
use crate::context_store::ContextStore;
use crate::ui::{AcpOnboardingModal, ClaudeCodeOnboardingModal};
use crate::{
AddContextServer, AgentDiffPane, DeleteRecentlyOpenThread, Follow, InlineAssistant,
@@ -33,9 +31,14 @@ use crate::{
text_thread_editor::{AgentPanelDelegate, TextThreadEditor, make_lsp_adapter_delegate},
ui::{AgentOnboardingModal, EndTrialUpsell},
};
use crate::{
ExpandMessageEditor,
acp::{AcpThreadHistory, ThreadHistoryEvent},
};
use crate::{
ExternalAgent, NewExternalAgentThread, NewNativeAgentThreadFromSummary, placeholder_command,
};
use crate::{ManageProfiles, context_store::ContextStore};
use agent_settings::AgentSettings;
use ai_onboarding::AgentPanelOnboarding;
use anyhow::{Result, anyhow};
@@ -106,6 +109,12 @@ pub fn init(cx: &mut App) {
}
},
)
.register_action(|workspace, _: &ExpandMessageEditor, window, cx| {
if let Some(panel) = workspace.panel::<AgentPanel>(cx) {
workspace.focus_panel::<AgentPanel>(window, cx);
panel.update(cx, |panel, cx| panel.expand_message_editor(window, cx));
}
})
.register_action(|workspace, _: &OpenHistory, window, cx| {
if let Some(panel) = workspace.panel::<AgentPanel>(cx) {
workspace.focus_panel::<AgentPanel>(window, cx);
@@ -729,6 +738,25 @@ impl AgentPanel {
&self.context_server_registry
}
pub fn is_hidden(workspace: &Entity<Workspace>, cx: &App) -> bool {
let workspace_read = workspace.read(cx);
workspace_read
.panel::<AgentPanel>(cx)
.map(|panel| {
let panel_id = Entity::entity_id(&panel);
let is_visible = workspace_read.all_docks().iter().any(|dock| {
dock.read(cx)
.visible_panel()
.is_some_and(|visible_panel| visible_panel.panel_id() == panel_id)
});
!is_visible
})
.unwrap_or(true)
}
fn active_thread_view(&self) -> Option<&Entity<AcpThreadView>> {
match &self.active_view {
ActiveView::ExternalAgentThread { thread_view, .. } => Some(thread_view),
@@ -925,6 +953,15 @@ impl AgentPanel {
.detach_and_log_err(cx);
}
fn expand_message_editor(&mut self, window: &mut Window, cx: &mut Context<Self>) {
if let Some(thread_view) = self.active_thread_view() {
thread_view.update(cx, |view, cx| {
view.expand_message_editor(&ExpandMessageEditor, window, cx);
view.focus_handle(cx).focus(window);
});
}
}
fn open_history(&mut self, window: &mut Window, cx: &mut Context<Self>) {
if matches!(self.active_view, ActiveView::History) {
if let Some(previous_view) = self.previous_view.take() {
@@ -1743,10 +1780,9 @@ impl AgentPanel {
}),
)
.action("Add Custom Server…", Box::new(AddContextServer))
.separator();
menu = menu
.separator()
.action("Rules", Box::new(OpenRulesLibrary::default()))
.action("Profiles", Box::new(ManageProfiles::default()))
.action("Settings", Box::new(OpenSettings))
.separator()
.action(full_screen_label, Box::new(ToggleZoom));

View File

@@ -620,8 +620,18 @@ impl TextThreadContextHandle {
impl Display for TextThreadContext {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
// TODO: escape title?
writeln!(f, "<text_thread title=\"{}\">", self.title)?;
write!(f, "<text_thread title=\"")?;
for c in self.title.chars() {
match c {
'&' => write!(f, "&amp;")?,
'<' => write!(f, "&lt;")?,
'>' => write!(f, "&gt;")?,
'"' => write!(f, "&quot;")?,
'\'' => write!(f, "&apos;")?,
_ => write!(f, "{}", c)?,
}
}
writeln!(f, "\">")?;
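// For example, a hypothetical title of `Notes <1> & "2"` is written out as
// <text_thread title="Notes &lt;1&gt; &amp; &quot;2&quot;">, keeping the wrapper tag well-formed.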
write!(f, "{}", self.text.trim())?;
write!(f, "\n</text_thread>")
}

View File

@@ -662,6 +662,7 @@ pub(crate) fn recent_context_picker_entries(
let mut recent = Vec::with_capacity(6);
let workspace = workspace.read(cx);
let project = workspace.project().read(cx);
let include_root_name = workspace.visible_worktrees(cx).count() > 1;
recent.extend(
workspace
@@ -675,9 +676,16 @@ pub(crate) fn recent_context_picker_entries(
.filter_map(|(project_path, _)| {
project
.worktree_for_id(project_path.worktree_id, cx)
.map(|worktree| RecentEntry::File {
project_path,
path_prefix: worktree.read(cx).root_name().into(),
.map(|worktree| {
let path_prefix = if include_root_name {
worktree.read(cx).root_name().into()
} else {
RelPath::empty().into()
};
RecentEntry::File {
project_path,
path_prefix,
}
})
}),
);

View File

@@ -655,13 +655,12 @@ impl ContextPickerCompletionProvider {
let SymbolLocation::InProject(symbol_path) = &symbol.path else {
return None;
};
let path_prefix = workspace
let _path_prefix = workspace
.read(cx)
.project()
.read(cx)
.worktree_for_id(symbol_path.worktree_id, cx)?
.read(cx)
.root_name();
.worktree_for_id(symbol_path.worktree_id, cx)?;
let path_prefix = RelPath::empty();
let (file_name, directory) = super::file_context_picker::extract_file_name_and_directory(
&symbol_path.path,
@@ -818,9 +817,21 @@ impl CompletionProvider for ContextPickerCompletionProvider {
return None;
}
// If path is empty, this means we're matching with the root directory itself
// so we use the path_prefix as the name
let path_prefix = if mat.path.is_empty() {
project
.read(cx)
.worktree_for_id(project_path.worktree_id, cx)
.map(|wt| wt.read(cx).root_name().into())
.unwrap_or_else(|| mat.path_prefix.clone())
} else {
mat.path_prefix.clone()
};
Some(Self::completion_for_path(
project_path,
&mat.path_prefix,
&path_prefix,
is_recent,
mat.is_dir,
excerpt_id,
@@ -1309,10 +1320,10 @@ mod tests {
assert_eq!(
current_completion_labels(editor),
&[
format!("seven.txt dir{slash}b{slash}"),
format!("six.txt dir{slash}b{slash}"),
format!("five.txt dir{slash}b{slash}"),
format!("four.txt dir{slash}a{slash}"),
format!("seven.txt b{slash}"),
format!("six.txt b{slash}"),
format!("five.txt b{slash}"),
format!("four.txt a{slash}"),
"Files & Directories".into(),
"Symbols".into(),
"Fetch".into()
@@ -1344,7 +1355,7 @@ mod tests {
assert!(editor.has_visible_completions_menu());
assert_eq!(
current_completion_labels(editor),
vec![format!("one.txt dir{slash}a{slash}")]
vec![format!("one.txt a{slash}")]
);
});
@@ -1356,12 +1367,12 @@ mod tests {
editor.update(&mut cx, |editor, cx| {
assert_eq!(
editor.text(cx),
format!("Lorem [@one.txt](@file:dir{slash}a{slash}one.txt) ")
format!("Lorem [@one.txt](@file:a{slash}one.txt) ")
);
assert!(!editor.has_visible_completions_menu());
assert_eq!(
fold_ranges(editor, cx),
vec![Point::new(0, 6)..Point::new(0, 37)]
vec![Point::new(0, 6)..Point::new(0, 33)]
);
});
@@ -1370,12 +1381,12 @@ mod tests {
editor.update(&mut cx, |editor, cx| {
assert_eq!(
editor.text(cx),
format!("Lorem [@one.txt](@file:dir{slash}a{slash}one.txt) ")
format!("Lorem [@one.txt](@file:a{slash}one.txt) ")
);
assert!(!editor.has_visible_completions_menu());
assert_eq!(
fold_ranges(editor, cx),
vec![Point::new(0, 6)..Point::new(0, 37)]
vec![Point::new(0, 6)..Point::new(0, 33)]
);
});
@@ -1384,12 +1395,12 @@ mod tests {
editor.update(&mut cx, |editor, cx| {
assert_eq!(
editor.text(cx),
format!("Lorem [@one.txt](@file:dir{slash}a{slash}one.txt) Ipsum "),
format!("Lorem [@one.txt](@file:a{slash}one.txt) Ipsum "),
);
assert!(!editor.has_visible_completions_menu());
assert_eq!(
fold_ranges(editor, cx),
vec![Point::new(0, 6)..Point::new(0, 37)]
vec![Point::new(0, 6)..Point::new(0, 33)]
);
});
@@ -1398,12 +1409,12 @@ mod tests {
editor.update(&mut cx, |editor, cx| {
assert_eq!(
editor.text(cx),
format!("Lorem [@one.txt](@file:dir{slash}a{slash}one.txt) Ipsum @file "),
format!("Lorem [@one.txt](@file:a{slash}one.txt) Ipsum @file "),
);
assert!(editor.has_visible_completions_menu());
assert_eq!(
fold_ranges(editor, cx),
vec![Point::new(0, 6)..Point::new(0, 37)]
vec![Point::new(0, 6)..Point::new(0, 33)]
);
});
@@ -1416,14 +1427,14 @@ mod tests {
editor.update(&mut cx, |editor, cx| {
assert_eq!(
editor.text(cx),
format!("Lorem [@one.txt](@file:dir{slash}a{slash}one.txt) Ipsum [@seven.txt](@file:dir{slash}b{slash}seven.txt) ")
format!("Lorem [@one.txt](@file:a{slash}one.txt) Ipsum [@seven.txt](@file:b{slash}seven.txt) ")
);
assert!(!editor.has_visible_completions_menu());
assert_eq!(
fold_ranges(editor, cx),
vec![
Point::new(0, 6)..Point::new(0, 37),
Point::new(0, 45)..Point::new(0, 80)
Point::new(0, 6)..Point::new(0, 33),
Point::new(0, 41)..Point::new(0, 72)
]
);
});
@@ -1433,14 +1444,14 @@ mod tests {
editor.update(&mut cx, |editor, cx| {
assert_eq!(
editor.text(cx),
format!("Lorem [@one.txt](@file:dir{slash}a{slash}one.txt) Ipsum [@seven.txt](@file:dir{slash}b{slash}seven.txt) \n@")
format!("Lorem [@one.txt](@file:a{slash}one.txt) Ipsum [@seven.txt](@file:b{slash}seven.txt) \n@")
);
assert!(editor.has_visible_completions_menu());
assert_eq!(
fold_ranges(editor, cx),
vec![
Point::new(0, 6)..Point::new(0, 37),
Point::new(0, 45)..Point::new(0, 80)
Point::new(0, 6)..Point::new(0, 33),
Point::new(0, 41)..Point::new(0, 72)
]
);
});
@@ -1454,20 +1465,203 @@ mod tests {
editor.update(&mut cx, |editor, cx| {
assert_eq!(
editor.text(cx),
format!("Lorem [@one.txt](@file:dir{slash}a{slash}one.txt) Ipsum [@seven.txt](@file:dir{slash}b{slash}seven.txt) \n[@six.txt](@file:dir{slash}b{slash}six.txt) ")
format!("Lorem [@one.txt](@file:a{slash}one.txt) Ipsum [@seven.txt](@file:b{slash}seven.txt) \n[@six.txt](@file:b{slash}six.txt) ")
);
assert!(!editor.has_visible_completions_menu());
assert_eq!(
fold_ranges(editor, cx),
vec![
Point::new(0, 6)..Point::new(0, 37),
Point::new(0, 45)..Point::new(0, 80),
Point::new(1, 0)..Point::new(1, 31)
Point::new(0, 6)..Point::new(0, 33),
Point::new(0, 41)..Point::new(0, 72),
Point::new(1, 0)..Point::new(1, 27)
]
);
});
}
#[gpui::test]
async fn test_context_completion_provider_multiple_worktrees(cx: &mut TestAppContext) {
init_test(cx);
let app_state = cx.update(AppState::test);
cx.update(|cx| {
language::init(cx);
editor::init(cx);
workspace::init(app_state.clone(), cx);
Project::init_settings(cx);
});
app_state
.fs
.as_fake()
.insert_tree(
path!("/project1"),
json!({
"a": {
"one.txt": "",
"two.txt": "",
}
}),
)
.await;
app_state
.fs
.as_fake()
.insert_tree(
path!("/project2"),
json!({
"b": {
"three.txt": "",
"four.txt": "",
}
}),
)
.await;
let project = Project::test(
app_state.fs.clone(),
[path!("/project1").as_ref(), path!("/project2").as_ref()],
cx,
)
.await;
let window = cx.add_window(|window, cx| Workspace::test_new(project.clone(), window, cx));
let workspace = window.root(cx).unwrap();
let worktrees = project.update(cx, |project, cx| {
let worktrees = project.worktrees(cx).collect::<Vec<_>>();
assert_eq!(worktrees.len(), 2);
worktrees
});
let mut cx = VisualTestContext::from_window(*window.deref(), cx);
let slash = PathStyle::local().separator();
for (worktree_idx, paths) in [
vec![rel_path("a/one.txt"), rel_path("a/two.txt")],
vec![rel_path("b/three.txt"), rel_path("b/four.txt")],
]
.iter()
.enumerate()
{
let worktree_id = worktrees[worktree_idx].read_with(&cx, |wt, _| wt.id());
for path in paths {
workspace
.update_in(&mut cx, |workspace, window, cx| {
workspace.open_path(
ProjectPath {
worktree_id,
path: (*path).into(),
},
None,
false,
window,
cx,
)
})
.await
.unwrap();
}
}
let editor = workspace.update_in(&mut cx, |workspace, window, cx| {
let editor = cx.new(|cx| {
Editor::new(
editor::EditorMode::full(),
multi_buffer::MultiBuffer::build_simple("", cx),
None,
window,
cx,
)
});
workspace.active_pane().update(cx, |pane, cx| {
pane.add_item(
Box::new(cx.new(|_| AtMentionEditor(editor.clone()))),
true,
true,
None,
window,
cx,
);
});
editor
});
let context_store = cx.new(|_| ContextStore::new(project.downgrade()));
let editor_entity = editor.downgrade();
editor.update_in(&mut cx, |editor, window, cx| {
window.focus(&editor.focus_handle(cx));
editor.set_completion_provider(Some(Rc::new(ContextPickerCompletionProvider::new(
workspace.downgrade(),
context_store.downgrade(),
None,
None,
editor_entity,
None,
))));
});
cx.simulate_input("@");
// With multiple worktrees, we should see the project name as prefix
editor.update(&mut cx, |editor, cx| {
assert_eq!(editor.text(cx), "@");
assert!(editor.has_visible_completions_menu());
let labels = current_completion_labels(editor);
assert!(
labels.contains(&format!("four.txt project2{slash}b{slash}")),
"Expected 'four.txt project2{slash}b{slash}' in labels: {:?}",
labels
);
assert!(
labels.contains(&format!("three.txt project2{slash}b{slash}")),
"Expected 'three.txt project2{slash}b{slash}' in labels: {:?}",
labels
);
});
editor.update_in(&mut cx, |editor, window, cx| {
editor.context_menu_next(&editor::actions::ContextMenuNext, window, cx);
editor.context_menu_next(&editor::actions::ContextMenuNext, window, cx);
editor.context_menu_next(&editor::actions::ContextMenuNext, window, cx);
editor.context_menu_next(&editor::actions::ContextMenuNext, window, cx);
editor.confirm_completion(&editor::actions::ConfirmCompletion::default(), window, cx);
});
cx.run_until_parked();
editor.update(&mut cx, |editor, cx| {
assert_eq!(editor.text(cx), "@file ");
assert!(editor.has_visible_completions_menu());
});
cx.simulate_input("one");
editor.update(&mut cx, |editor, cx| {
assert_eq!(editor.text(cx), "@file one");
assert!(editor.has_visible_completions_menu());
assert_eq!(
current_completion_labels(editor),
vec![format!("one.txt project1{slash}a{slash}")]
);
});
editor.update_in(&mut cx, |editor, window, cx| {
editor.confirm_completion(&editor::actions::ConfirmCompletion::default(), window, cx);
});
editor.update(&mut cx, |editor, cx| {
assert_eq!(
editor.text(cx),
format!("[@one.txt](@file:project1{slash}a{slash}one.txt) ")
);
assert!(!editor.has_visible_completions_menu());
});
}
fn fold_ranges(editor: &Editor, cx: &mut App) -> Vec<Range<Point>> {
let snapshot = editor.buffer().read(cx).snapshot(cx);
editor.display_map.update(cx, |display_map, cx| {

View File

@@ -197,34 +197,50 @@ pub(crate) fn search_files(
if query.is_empty() {
let workspace = workspace.read(cx);
let project = workspace.project().read(cx);
let visible_worktrees = workspace.visible_worktrees(cx).collect::<Vec<_>>();
let include_root_name = visible_worktrees.len() > 1;
let recent_matches = workspace
.recent_navigation_history(Some(10), cx)
.into_iter()
.filter_map(|(project_path, _)| {
let worktree = project.worktree_for_id(project_path.worktree_id, cx)?;
Some(FileMatch {
.map(|(project_path, _)| {
let path_prefix = if include_root_name {
project
.worktree_for_id(project_path.worktree_id, cx)
.map(|wt| wt.read(cx).root_name().into())
.unwrap_or_else(|| RelPath::empty().into())
} else {
RelPath::empty().into()
};
FileMatch {
mat: PathMatch {
score: 0.,
positions: Vec::new(),
worktree_id: project_path.worktree_id.to_usize(),
path: project_path.path,
path_prefix: worktree.read(cx).root_name().into(),
path_prefix,
distance_to_relative_ancestor: 0,
is_dir: false,
},
is_recent: true,
})
}
});
let file_matches = project.worktrees(cx).flat_map(|worktree| {
let file_matches = visible_worktrees.into_iter().flat_map(|worktree| {
let worktree = worktree.read(cx);
let path_prefix: Arc<RelPath> = if include_root_name {
worktree.root_name().into()
} else {
RelPath::empty().into()
};
worktree.entries(false, 0).map(move |entry| FileMatch {
mat: PathMatch {
score: 0.,
positions: Vec::new(),
worktree_id: worktree.id().to_usize(),
path: entry.path.clone(),
path_prefix: worktree.root_name().into(),
path_prefix: path_prefix.clone(),
distance_to_relative_ancestor: 0,
is_dir: entry.is_dir(),
},
@@ -235,6 +251,7 @@ pub(crate) fn search_files(
Task::ready(recent_matches.chain(file_matches).collect())
} else {
let worktrees = workspace.read(cx).visible_worktrees(cx).collect::<Vec<_>>();
let include_root_name = worktrees.len() > 1;
let candidate_sets = worktrees
.into_iter()
.map(|worktree| {
@@ -243,7 +260,7 @@ pub(crate) fn search_files(
PathMatchCandidateSet {
snapshot: worktree.snapshot(),
include_ignored: worktree.root_entry().is_some_and(|entry| entry.is_ignored),
include_root_name: true,
include_root_name,
candidates: project::Candidates::Entries,
}
})
@@ -276,6 +293,12 @@ pub fn extract_file_name_and_directory(
path_prefix: &RelPath,
path_style: PathStyle,
) -> (SharedString, Option<SharedString>) {
// If path is empty, this means we're matching with the root directory itself
// so we use the path_prefix as the name
if path.is_empty() && !path_prefix.is_empty() {
return (path_prefix.display(path_style).to_string().into(), None);
}
let full_path = path_prefix.join(path);
let file_name = full_path.file_name().unwrap_or_default();
let display_path = full_path.display(path_style);

View File

@@ -260,10 +260,10 @@ impl<T: 'static> PromptEditor<T> {
let agent_panel_keybinding =
ui::text_for_action(&zed_actions::assistant::ToggleFocus, window, cx)
.map(|keybinding| format!("{keybinding} to chat"))
.map(|keybinding| format!("{keybinding} to chat"))
.unwrap_or_default();
format!("{action}… ({agent_panel_keybinding}↓↑ for history)")
format!("{action}… ({agent_panel_keybinding}↓↑ for history — @ to include context)")
}
pub fn prompt(&self, cx: &App) -> String {

View File

@@ -477,7 +477,7 @@ impl TextThreadEditor {
editor.insert(&format!("/{name}"), window, cx);
if command.accepts_arguments() {
editor.insert(" ", window, cx);
editor.show_completions(&ShowCompletions::default(), window, cx);
editor.show_completions(&ShowCompletions, window, cx);
}
});
});
@@ -2530,7 +2530,7 @@ impl Item for TextThreadEditor {
fn navigate(
&mut self,
data: Box<dyn std::any::Any>,
data: Rc<dyn std::any::Any>,
window: &mut Window,
cx: &mut Context<Self>,
) -> bool {
@@ -2591,11 +2591,12 @@ impl SearchableItem for TextThreadEditor {
&mut self,
index: usize,
matches: &[Self::Match],
collapse: bool,
window: &mut Window,
cx: &mut Context<Self>,
) {
self.editor.update(cx, |editor, cx| {
editor.activate_match(index, matches, window, cx);
editor.activate_match(index, matches, collapse, window, cx);
});
}

View File

@@ -26,6 +26,7 @@ serde_json.workspace = true
settings.workspace = true
smol.workspace = true
tempfile.workspace = true
util.workspace = true
workspace.workspace = true
[target.'cfg(not(target_os = "windows"))'.dependencies]

View File

@@ -331,6 +331,16 @@ impl AutoUpdater {
pub fn start_polling(&self, cx: &mut Context<Self>) -> Task<Result<()>> {
cx.spawn(async move |this, cx| {
#[cfg(target_os = "windows")]
{
use util::ResultExt;
cleanup_windows()
.await
.context("failed to cleanup old directories")
.log_err();
}
loop {
this.update(cx, |this, cx| this.poll(UpdateCheckType::Automatic, cx))?;
cx.background_executor().timer(POLL_INTERVAL).await;
@@ -923,6 +933,32 @@ async fn install_release_macos(
Ok(None)
}
#[cfg(target_os = "windows")]
async fn cleanup_windows() -> Result<()> {
use util::ResultExt;
let parent = std::env::current_exe()?
.parent()
.context("No parent dir for Zed.exe")?
.to_owned();
// keep in sync with crates/auto_update_helper/src/updater.rs
smol::fs::remove_dir(parent.join("updates"))
.await
.context("failed to remove updates dir")
.log_err();
smol::fs::remove_dir(parent.join("install"))
.await
.context("failed to remove install dir")
.log_err();
smol::fs::remove_dir(parent.join("old"))
.await
.context("failed to remove old version dir")
.log_err();
Ok(())
}
async fn install_release_windows(downloaded_installer: PathBuf) -> Result<Option<PathBuf>> {
let output = Command::new(downloaded_installer)
.arg("/verysilent")
@@ -962,7 +998,7 @@ pub async fn finalize_auto_update_on_quit() {
.parent()
.map(|p| p.join("tools").join("auto_update_helper.exe"))
{
let mut command = smol::process::Command::new(helper);
let mut command = util::command::new_smol_command(helper);
command.arg("--launch");
command.arg("false");
if let Ok(mut cmd) = command.spawn() {

View File

@@ -21,6 +21,9 @@ simplelog.workspace = true
[target.'cfg(target_os = "windows")'.dependencies]
windows.workspace = true
[target.'cfg(target_os = "windows")'.dev-dependencies]
tempfile.workspace = true
[target.'cfg(target_os = "windows")'.build-dependencies]
winresource = "0.1"

View File

@@ -1,16 +1,32 @@
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0" xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">
<asmv3:application>
<asmv3:windowsSettings>
<dpiAware xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">true</dpiAware>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
<trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
<security>
<requestedPrivileges>
<requestedExecutionLevel level="asInvoker" uiAccess="false" />
</requestedPrivileges>
</security>
</trustInfo>
<compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
<application>
<!-- Windows 10 -->
<supportedOS Id="{8e0f7a12-bfb3-4fe8-b9a5-48fd50a15a9a}" />
</application>
</compatibility>
<application xmlns="urn:schemas-microsoft-com:asm.v3">
<windowsSettings>
<dpiAware xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">true/pm</dpiAware>
<dpiAwareness xmlns="http://schemas.microsoft.com/SMI/2016/WindowsSettings">PerMonitorV2</dpiAwareness>
</asmv3:windowsSettings>
</asmv3:application>
</windowsSettings>
</application>
<dependency>
<dependentAssembly>
<assemblyIdentity type='win32'
<assemblyIdentity
type='win32'
name='Microsoft.Windows.Common-Controls'
version='6.0.0.0' processorArchitecture='*'
publicKeyToken='6595b64144ccf1df' />
version='6.0.0.0'
processorArchitecture='*'
publicKeyToken='6595b64144ccf1df'
/>
</dependentAssembly>
</dependency>
</assembly>

View File

@@ -1,4 +1,5 @@
use std::{
cell::LazyCell,
path::Path,
time::{Duration, Instant},
};
@@ -11,210 +12,274 @@ use windows::Win32::{
use crate::windows_impl::WM_JOB_UPDATED;
type Job = fn(&Path) -> Result<()>;
pub(crate) struct Job {
pub apply: Box<dyn Fn(&Path) -> Result<()>>,
pub rollback: Box<dyn Fn(&Path) -> Result<()>>,
}
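// A minimal, hypothetical Job pairs an apply step with its inverse, e.g.:
// Job {
//     apply: Box::new(|app_dir| {
//         std::fs::create_dir_all(app_dir.join("staging")).context("create staging dir")
//     }),
//     rollback: Box::new(|app_dir| {
//         std::fs::remove_dir_all(app_dir.join("staging")).context("remove staging dir")
//     }),
// }
// perform_update applies jobs in order and, on failure, walks the completed jobs'
// rollbacks in reverse.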
#[cfg(not(test))]
pub(crate) const JOBS: &[Job] = &[
// Delete old files
|app_dir| {
let zed_executable = app_dir.join("Zed.exe");
log::info!("Removing old file: {}", zed_executable.display());
std::fs::remove_file(&zed_executable).context(format!(
"Failed to remove old file {}",
zed_executable.display()
))
},
|app_dir| {
let zed_cli = app_dir.join("bin\\zed.exe");
log::info!("Removing old file: {}", zed_cli.display());
std::fs::remove_file(&zed_cli)
.context(format!("Failed to remove old file {}", zed_cli.display()))
},
|app_dir| {
let zed_wsl = app_dir.join("bin\\zed");
log::info!("Removing old file: {}", zed_wsl.display());
std::fs::remove_file(&zed_wsl)
.context(format!("Failed to remove old file {}", zed_wsl.display()))
},
// TODO: remove after a few weeks once everyone is on the new version and this file never exists
|app_dir| {
let open_console = app_dir.join("OpenConsole.exe");
if open_console.exists() {
log::info!("Removing old file: {}", open_console.display());
std::fs::remove_file(&open_console).context(format!(
"Failed to remove old file {}",
open_console.display()
))?
impl Job {
pub fn mkdir(name: &'static Path) -> Self {
Job {
apply: Box::new(move |app_dir| {
let dir = app_dir.join(name);
std::fs::create_dir_all(&dir)
.context(format!("Failed to create directory {}", dir.display()))
}),
rollback: Box::new(move |app_dir| {
let dir = app_dir.join(name);
std::fs::remove_dir_all(&dir)
.context(format!("Failed to remove directory {}", dir.display()))
}),
}
Ok(())
},
|app_dir| {
let archs = ["x64", "arm64"];
for arch in archs {
let open_console = app_dir.join(format!("{arch}\\OpenConsole.exe"));
if open_console.exists() {
log::info!("Removing old file: {}", open_console.display());
std::fs::remove_file(&open_console).context(format!(
"Failed to remove old file {}",
open_console.display()
))?
}
}
pub fn mkdir_if_exists(name: &'static Path, check: &'static Path) -> Self {
Job {
apply: Box::new(move |app_dir| {
let dir = app_dir.join(name);
let check = app_dir.join(check);
if check.exists() {
std::fs::create_dir_all(&dir)
.context(format!("Failed to create directory {}", dir.display()))?
}
Ok(())
}),
rollback: Box::new(move |app_dir| {
let dir = app_dir.join(name);
if dir.exists() {
std::fs::remove_dir_all(&dir)
.context(format!("Failed to remove directory {}", dir.display()))?
}
Ok(())
}),
}
Ok(())
},
|app_dir| {
let conpty = app_dir.join("conpty.dll");
log::info!("Removing old file: {}", conpty.display());
std::fs::remove_file(&conpty)
.context(format!("Failed to remove old file {}", conpty.display()))
},
// Copy new files
|app_dir| {
let zed_executable_source = app_dir.join("install\\Zed.exe");
let zed_executable_dest = app_dir.join("Zed.exe");
log::info!(
"Copying new file {} to {}",
zed_executable_source.display(),
zed_executable_dest.display()
);
std::fs::copy(&zed_executable_source, &zed_executable_dest)
.map(|_| ())
.context(format!(
"Failed to copy new file {} to {}",
zed_executable_source.display(),
zed_executable_dest.display()
))
},
|app_dir| {
let zed_cli_source = app_dir.join("install\\bin\\zed.exe");
let zed_cli_dest = app_dir.join("bin\\zed.exe");
log::info!(
"Copying new file {} to {}",
zed_cli_source.display(),
zed_cli_dest.display()
);
std::fs::copy(&zed_cli_source, &zed_cli_dest)
.map(|_| ())
.context(format!(
"Failed to copy new file {} to {}",
zed_cli_source.display(),
zed_cli_dest.display()
))
},
|app_dir| {
let zed_wsl_source = app_dir.join("install\\bin\\zed");
let zed_wsl_dest = app_dir.join("bin\\zed");
log::info!(
"Copying new file {} to {}",
zed_wsl_source.display(),
zed_wsl_dest.display()
);
std::fs::copy(&zed_wsl_source, &zed_wsl_dest)
.map(|_| ())
.context(format!(
"Failed to copy new file {} to {}",
zed_wsl_source.display(),
zed_wsl_dest.display()
))
},
|app_dir| {
let archs = ["x64", "arm64"];
for arch in archs {
let open_console_source = app_dir.join(format!("install\\{arch}\\OpenConsole.exe"));
let open_console_dest = app_dir.join(format!("{arch}\\OpenConsole.exe"));
if open_console_source.exists() {
}
pub fn move_file(filename: &'static Path, new_filename: &'static Path) -> Self {
Job {
apply: Box::new(move |app_dir| {
let old_file = app_dir.join(filename);
let new_file = app_dir.join(new_filename);
log::info!(
"Copying new file {} to {}",
open_console_source.display(),
open_console_dest.display()
"Moving file: {}->{}",
old_file.display(),
new_file.display()
);
let parent = open_console_dest.parent().context(format!(
"Failed to get parent directory of {}",
open_console_dest.display()
))?;
std::fs::create_dir_all(parent)
.context(format!("Failed to create directory {}", parent.display()))?;
std::fs::copy(&open_console_source, &open_console_dest)
.map(|_| ())
.context(format!(
"Failed to copy new file {} to {}",
open_console_source.display(),
open_console_dest.display()
))?
}
}
Ok(())
},
|app_dir| {
let conpty_source = app_dir.join("install\\conpty.dll");
let conpty_dest = app_dir.join("conpty.dll");
log::info!(
"Copying new file {} to {}",
conpty_source.display(),
conpty_dest.display()
);
std::fs::copy(&conpty_source, &conpty_dest)
.map(|_| ())
.context(format!(
"Failed to copy new file {} to {}",
conpty_source.display(),
conpty_dest.display()
))
},
// Clean up installer folder and updates folder
|app_dir| {
let updates_folder = app_dir.join("updates");
log::info!("Cleaning up: {}", updates_folder.display());
std::fs::remove_dir_all(&updates_folder).context(format!(
"Failed to remove updates folder {}",
updates_folder.display()
))
},
|app_dir| {
let installer_folder = app_dir.join("install");
log::info!("Cleaning up: {}", installer_folder.display());
std::fs::remove_dir_all(&installer_folder).context(format!(
"Failed to remove installer folder {}",
installer_folder.display()
))
},
];
std::fs::rename(&old_file, new_file)
.context(format!("Failed to move file {}", old_file.display()))
}),
rollback: Box::new(move |app_dir| {
let old_file = app_dir.join(filename);
let new_file = app_dir.join(new_filename);
log::info!(
"Rolling back file move: {}->{}",
old_file.display(),
new_file.display()
);
std::fs::rename(&new_file, &old_file).context(format!(
"Failed to rollback file move {}->{}",
new_file.display(),
old_file.display()
))
}),
}
}
pub fn move_if_exists(filename: &'static Path, new_filename: &'static Path) -> Self {
Job {
apply: Box::new(move |app_dir| {
let old_file = app_dir.join(filename);
let new_file = app_dir.join(new_filename);
if old_file.exists() {
log::info!(
"Moving file: {}->{}",
old_file.display(),
new_file.display()
);
std::fs::rename(&old_file, new_file)
.context(format!("Failed to move file {}", old_file.display()))?;
}
Ok(())
}),
rollback: Box::new(move |app_dir| {
let old_file = app_dir.join(filename);
let new_file = app_dir.join(new_filename);
if new_file.exists() {
log::info!(
"Rolling back file move: {}->{}",
old_file.display(),
new_file.display()
);
std::fs::rename(&new_file, &old_file).context(format!(
"Failed to rollback file move {}->{}",
new_file.display(),
old_file.display()
))?
}
Ok(())
}),
}
}
pub fn rmdir_nofail(filename: &'static Path) -> Self {
Job {
apply: Box::new(move |app_dir| {
let filename = app_dir.join(filename);
log::info!("Removing file: {}", filename.display());
if let Err(e) = std::fs::remove_dir_all(&filename) {
log::warn!("Failed to remove directory: {}", e);
}
Ok(())
}),
rollback: Box::new(move |app_dir| {
let filename = app_dir.join(filename);
anyhow::bail!(
"Delete operations cannot be rolled back, file: {}",
filename.display()
)
}),
}
}
}
// app is single threaded
#[cfg(not(test))]
#[allow(clippy::declare_interior_mutable_const)]
pub(crate) const JOBS: LazyCell<[Job; 22]> = LazyCell::new(|| {
fn p(value: &str) -> &Path {
Path::new(value)
}
[
// Move old files
// Not deleting because installing new files can fail
Job::mkdir(p("old")),
Job::move_file(p("Zed.exe"), p("old\\Zed.exe")),
Job::mkdir(p("old\\bin")),
Job::move_file(p("bin\\Zed.exe"), p("old\\bin\\Zed.exe")),
Job::move_file(p("bin\\zed"), p("old\\bin\\zed")),
//
// TODO: remove after a few weeks once everyone is on the new version and this file never exists
Job::move_if_exists(p("OpenConsole.exe"), p("old\\OpenConsole.exe")),
Job::mkdir(p("old\\x64")),
Job::mkdir(p("old\\arm64")),
Job::move_if_exists(p("x64\\OpenConsole.exe"), p("old\\x64\\OpenConsole.exe")),
Job::move_if_exists(
p("arm64\\OpenConsole.exe"),
p("old\\arm64\\OpenConsole.exe"),
),
//
Job::move_file(p("conpty.dll"), p("old\\conpty.dll")),
// Copy new files
Job::move_file(p("install\\Zed.exe"), p("Zed.exe")),
Job::move_file(p("install\\bin\\Zed.exe"), p("bin\\Zed.exe")),
Job::move_file(p("install\\bin\\zed"), p("bin\\zed")),
//
Job::mkdir_if_exists(p("x64"), p("install\\x64")),
Job::mkdir_if_exists(p("arm64"), p("install\\arm64")),
Job::move_if_exists(
p("install\\x64\\OpenConsole.exe"),
p("x64\\OpenConsole.exe"),
),
Job::move_if_exists(
p("install\\arm64\\OpenConsole.exe"),
p("arm64\\OpenConsole.exe"),
),
//
Job::move_file(p("install\\conpty.dll"), p("conpty.dll")),
// Cleanup installer and updates folder
Job::rmdir_nofail(p("updates")),
Job::rmdir_nofail(p("install")),
// Cleanup old installation
Job::rmdir_nofail(p("old")),
]
});
// app is single threaded
#[cfg(test)]
pub(crate) const JOBS: &[Job] = &[
|_| {
std::thread::sleep(Duration::from_millis(1000));
if let Ok(config) = std::env::var("ZED_AUTO_UPDATE") {
match config.as_str() {
"err" => Err(std::io::Error::other("Simulated error")).context("Anyhow!"),
_ => panic!("Unknown ZED_AUTO_UPDATE value: {}", config),
}
} else {
Ok(())
}
},
|_| {
std::thread::sleep(Duration::from_millis(1000));
if let Ok(config) = std::env::var("ZED_AUTO_UPDATE") {
match config.as_str() {
"err" => Err(std::io::Error::other("Simulated error")).context("Anyhow!"),
_ => panic!("Unknown ZED_AUTO_UPDATE value: {}", config),
}
} else {
Ok(())
}
},
];
#[allow(clippy::declare_interior_mutable_const)]
pub(crate) const JOBS: LazyCell<[Job; 9]> = LazyCell::new(|| {
fn p(value: &str) -> &Path {
Path::new(value)
}
[
Job {
apply: Box::new(|_| {
std::thread::sleep(Duration::from_millis(1000));
if let Ok(config) = std::env::var("ZED_AUTO_UPDATE") {
match config.as_str() {
"err1" => Err(std::io::Error::other("Simulated error")).context("Anyhow!"),
"err2" => Ok(()),
_ => panic!("Unknown ZED_AUTO_UPDATE value: {}", config),
}
} else {
Ok(())
}
}),
rollback: Box::new(|_| {
unsafe { std::env::set_var("ZED_AUTO_UPDATE_RB", "rollback1") };
Ok(())
}),
},
Job::mkdir(p("test1")),
Job::mkdir_if_exists(p("test_exists"), p("test1")),
Job::mkdir_if_exists(p("test_missing"), p("dont")),
Job {
apply: Box::new(|folder| {
std::fs::write(folder.join("test1/test"), "test")?;
Ok(())
}),
rollback: Box::new(|folder| {
std::fs::remove_file(folder.join("test1/test"))?;
Ok(())
}),
},
Job::move_file(p("test1/test"), p("test1/moved")),
Job::move_if_exists(p("test1/test"), p("test1/noop")),
Job {
apply: Box::new(|_| {
std::thread::sleep(Duration::from_millis(1000));
if let Ok(config) = std::env::var("ZED_AUTO_UPDATE") {
match config.as_str() {
"err1" => Ok(()),
"err2" => Err(std::io::Error::other("Simulated error")).context("Anyhow!"),
_ => panic!("Unknown ZED_AUTO_UPDATE value: {}", config),
}
} else {
Ok(())
}
}),
rollback: Box::new(|_| Ok(())),
},
Job::rmdir_nofail(p("test1/nofolder")),
]
});
pub(crate) fn perform_update(app_dir: &Path, hwnd: Option<isize>, launch: bool) -> Result<()> {
let hwnd = hwnd.map(|ptr| HWND(ptr as _));
for job in JOBS.iter() {
let mut last_successful_job = None;
'outer: for (i, job) in JOBS.iter().enumerate() {
let start = Instant::now();
loop {
anyhow::ensure!(start.elapsed().as_secs() <= 2, "Timed out");
match (*job)(app_dir) {
if start.elapsed().as_secs() > 2 {
log::error!("Timed out, rolling back");
break 'outer;
}
match (job.apply)(app_dir) {
Ok(_) => {
last_successful_job = Some(i);
unsafe { PostMessageW(hwnd, WM_JOB_UPDATED, WPARAM(0), LPARAM(0))? };
break;
}
@@ -223,6 +288,7 @@ pub(crate) fn perform_update(app_dir: &Path, hwnd: Option<isize>, launch: bool)
let io_err = err.downcast_ref::<std::io::Error>().unwrap();
if io_err.kind() == std::io::ErrorKind::NotFound {
log::warn!("File or folder not found.");
last_successful_job = Some(i);
unsafe { PostMessageW(hwnd, WM_JOB_UPDATED, WPARAM(0), LPARAM(0))? };
break;
}
@@ -233,6 +299,28 @@ pub(crate) fn perform_update(app_dir: &Path, hwnd: Option<isize>, launch: bool)
}
}
}
if last_successful_job
.map(|job| job != JOBS.len() - 1)
.unwrap_or(true)
{
let Some(last_successful_job) = last_successful_job else {
anyhow::bail!("Autoupdate failed, nothing to rollback");
};
for job in (0..=last_successful_job).rev() {
let job = &JOBS[job];
if let Err(e) = (job.rollback)(app_dir) {
anyhow::bail!(
"Job rollback failed, the app might be left in an inconsistent state: ({:?})",
e
);
}
}
anyhow::bail!("Autoupdate failed, rollback successful");
}
if launch {
#[allow(clippy::disallowed_methods, reason = "doesn't run in the main binary")]
let _ = std::process::Command::new(app_dir.join("Zed.exe")).spawn();
@@ -247,12 +335,27 @@ mod test {
#[test]
fn test_perform_update() {
let app_dir = std::path::Path::new("C:/");
let app_dir = tempfile::tempdir().unwrap();
let app_dir = app_dir.path();
assert!(perform_update(app_dir, None, false).is_ok());
let app_dir = tempfile::tempdir().unwrap();
let app_dir = app_dir.path();
// Simulate a timeout
unsafe { std::env::set_var("ZED_AUTO_UPDATE", "err") };
unsafe { std::env::set_var("ZED_AUTO_UPDATE", "err1") };
let ret = perform_update(app_dir, None, false);
assert!(ret.is_err_and(|e| e.to_string().as_str() == "Timed out"));
assert!(
ret.is_err_and(|e| e.to_string().as_str() == "Autoupdate failed, nothing to rollback")
);
let app_dir = tempfile::tempdir().unwrap();
let app_dir = app_dir.path();
// Simulate a timeout
unsafe { std::env::set_var("ZED_AUTO_UPDATE", "err2") };
let ret = perform_update(app_dir, None, false);
assert!(
ret.is_err_and(|e| e.to_string().as_str() == "Autoupdate failed, rollback successful")
);
assert!(std::env::var("ZED_AUTO_UPDATE_RB").is_ok_and(|e| e == "rollback1"));
}
}
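To keep the transactional shape of these jobs easy to follow outside the surrounding diff, here is a minimal, self-contained sketch of the same apply/rollback idea. It assumes the `anyhow` crate; the `Job` struct and the reverse-order rollback mirror the updater code above, while the concrete job and directory names are purely illustrative and not part of this change.

use std::path::Path;

struct Job {
    apply: Box<dyn Fn(&Path) -> anyhow::Result<()>>,
    rollback: Box<dyn Fn(&Path) -> anyhow::Result<()>>,
}

fn run_jobs(jobs: &[Job], app_dir: &Path) -> anyhow::Result<()> {
    let mut completed = 0;
    for job in jobs {
        if let Err(err) = (job.apply)(app_dir) {
            // Undo the jobs that already succeeded, newest first, before failing.
            for done in jobs[..completed].iter().rev() {
                (done.rollback)(app_dir)?;
            }
            return Err(err);
        }
        completed += 1;
    }
    Ok(())
}

fn main() -> anyhow::Result<()> {
    // Illustrative job: create a scratch directory, and remove it again on rollback.
    let jobs = vec![Job {
        apply: Box::new(|dir| Ok(std::fs::create_dir_all(dir.join("scratch"))?)),
        rollback: Box::new(|dir| Ok(std::fs::remove_dir_all(dir.join("scratch"))?)),
    }];
    run_jobs(&jobs, Path::new("."))
}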

View File

@@ -66,6 +66,8 @@ pub enum Model {
Claude3Sonnet,
#[serde(rename = "claude-3-5-haiku", alias = "claude-3-5-haiku-latest")]
Claude3_5Haiku,
#[serde(rename = "claude-haiku-4-5", alias = "claude-haiku-4-5-latest")]
ClaudeHaiku4_5,
Claude3_5Sonnet,
Claude3Haiku,
// Amazon Nova Models
@@ -147,6 +149,8 @@ impl Model {
Ok(Self::Claude3Sonnet)
} else if id.starts_with("claude-3-5-haiku") {
Ok(Self::Claude3_5Haiku)
} else if id.starts_with("claude-haiku-4-5") {
Ok(Self::ClaudeHaiku4_5)
} else if id.starts_with("claude-3-7-sonnet") {
Ok(Self::Claude3_7Sonnet)
} else if id.starts_with("claude-3-7-sonnet-thinking") {
@@ -180,6 +184,7 @@ impl Model {
Model::Claude3Sonnet => "claude-3-sonnet",
Model::Claude3Haiku => "claude-3-haiku",
Model::Claude3_5Haiku => "claude-3-5-haiku",
Model::ClaudeHaiku4_5 => "claude-haiku-4-5",
Model::Claude3_7Sonnet => "claude-3-7-sonnet",
Model::Claude3_7SonnetThinking => "claude-3-7-sonnet-thinking",
Model::AmazonNovaLite => "amazon-nova-lite",
@@ -246,6 +251,7 @@ impl Model {
Model::Claude3Sonnet => "anthropic.claude-3-sonnet-20240229-v1:0",
Model::Claude3Haiku => "anthropic.claude-3-haiku-20240307-v1:0",
Model::Claude3_5Haiku => "anthropic.claude-3-5-haiku-20241022-v1:0",
Model::ClaudeHaiku4_5 => "anthropic.claude-haiku-4-5-20251001-v1:0",
Model::Claude3_7Sonnet | Model::Claude3_7SonnetThinking => {
"anthropic.claude-3-7-sonnet-20250219-v1:0"
}
@@ -309,6 +315,7 @@ impl Model {
Self::Claude3Sonnet => "Claude 3 Sonnet",
Self::Claude3Haiku => "Claude 3 Haiku",
Self::Claude3_5Haiku => "Claude 3.5 Haiku",
Self::ClaudeHaiku4_5 => "Claude Haiku 4.5",
Self::Claude3_7Sonnet => "Claude 3.7 Sonnet",
Self::Claude3_7SonnetThinking => "Claude 3.7 Sonnet Thinking",
Self::AmazonNovaLite => "Amazon Nova Lite",
@@ -363,6 +370,7 @@ impl Model {
| Self::Claude3Opus
| Self::Claude3Sonnet
| Self::Claude3_5Haiku
| Self::ClaudeHaiku4_5
| Self::Claude3_7Sonnet
| Self::ClaudeSonnet4
| Self::ClaudeOpus4
@@ -385,7 +393,7 @@ impl Model {
Self::Claude3Opus | Self::Claude3Sonnet | Self::Claude3_5Haiku => 4_096,
Self::Claude3_7Sonnet | Self::Claude3_7SonnetThinking => 128_000,
Self::ClaudeSonnet4 | Self::ClaudeSonnet4Thinking => 64_000,
Self::ClaudeSonnet4_5 | Self::ClaudeSonnet4_5Thinking => 64_000,
Self::ClaudeSonnet4_5 | Self::ClaudeSonnet4_5Thinking | Self::ClaudeHaiku4_5 => 64_000,
Self::ClaudeOpus4
| Self::ClaudeOpus4Thinking
| Self::ClaudeOpus4_1
@@ -404,6 +412,7 @@ impl Model {
| Self::Claude3Opus
| Self::Claude3Sonnet
| Self::Claude3_5Haiku
| Self::ClaudeHaiku4_5
| Self::Claude3_7Sonnet
| Self::ClaudeOpus4
| Self::ClaudeOpus4Thinking
@@ -438,7 +447,8 @@ impl Model {
| Self::ClaudeSonnet4Thinking
| Self::ClaudeSonnet4_5
| Self::ClaudeSonnet4_5Thinking
| Self::Claude3_5Haiku => true,
| Self::Claude3_5Haiku
| Self::ClaudeHaiku4_5 => true,
// Amazon Nova models (all support tool use)
Self::AmazonNovaPremier
@@ -464,6 +474,7 @@ impl Model {
// Nova models support only text caching
// https://docs.aws.amazon.com/bedrock/latest/userguide/prompt-caching.html#prompt-caching-models
Self::Claude3_5Haiku
| Self::ClaudeHaiku4_5
| Self::Claude3_7Sonnet
| Self::Claude3_7SonnetThinking
| Self::ClaudeSonnet4
@@ -500,7 +511,7 @@ impl Model {
min_total_token: 1024,
}),
Self::Claude3_5Haiku => Some(BedrockModelCacheConfiguration {
Self::Claude3_5Haiku | Self::ClaudeHaiku4_5 => Some(BedrockModelCacheConfiguration {
max_cache_anchors: 4,
min_total_token: 2048,
}),
@@ -569,6 +580,7 @@ impl Model {
(
Model::AmazonNovaPremier
| Model::Claude3_5Haiku
| Model::ClaudeHaiku4_5
| Model::Claude3_5Sonnet
| Model::Claude3_5SonnetV2
| Model::Claude3_7Sonnet
@@ -606,6 +618,7 @@ impl Model {
// Models available in EU
(
Model::Claude3_5Sonnet
| Model::ClaudeHaiku4_5
| Model::Claude3_7Sonnet
| Model::Claude3_7SonnetThinking
| Model::ClaudeSonnet4
@@ -624,6 +637,7 @@ impl Model {
(
Model::Claude3_5Sonnet
| Model::Claude3_5SonnetV2
| Model::ClaudeHaiku4_5
| Model::Claude3Haiku
| Model::Claude3Sonnet
| Model::Claude3_7Sonnet

View File

@@ -100,13 +100,21 @@ impl Render for Breadcrumbs {
let breadcrumbs_stack = h_flex().gap_1().children(breadcrumbs);
let prefix_element = active_item.breadcrumb_prefix(window, cx);
let breadcrumbs = if let Some(prefix) = prefix_element {
h_flex().gap_1p5().child(prefix).child(breadcrumbs_stack)
} else {
breadcrumbs_stack
};
match active_item
.downcast::<Editor>()
.map(|editor| editor.downgrade())
{
Some(editor) => element.child(
ButtonLike::new("toggle outline view")
.child(breadcrumbs_stack)
.child(breadcrumbs)
.style(ButtonStyle::Transparent)
.on_click({
let editor = editor.clone();
@@ -141,7 +149,7 @@ impl Render for Breadcrumbs {
// Match the height and padding of the `ButtonLike` in the other arm.
.h(rems_from_px(22.))
.pl_1()
.child(breadcrumbs_stack),
.child(breadcrumbs),
}
}
}

View File

@@ -358,6 +358,7 @@ fn main() -> Result<()> {
rayon::ThreadPoolBuilder::new()
.num_threads(4)
.stack_size(10 * 1024 * 1024)
.thread_name(|ix| format!("RayonWorker{}", ix))
.build_global()
.unwrap();

View File

@@ -1,4 +1,5 @@
pub mod predict_edits_v3;
pub mod udiff;
use std::str::FromStr;
use std::sync::Arc;

View File

@@ -0,0 +1,294 @@
use std::{borrow::Cow, fmt::Display};
#[derive(Debug, PartialEq)]
pub enum DiffLine<'a> {
OldPath { path: Cow<'a, str> },
NewPath { path: Cow<'a, str> },
HunkHeader(Option<HunkLocation>),
Context(&'a str),
Deletion(&'a str),
Addition(&'a str),
Garbage(&'a str),
}
#[derive(Debug, PartialEq)]
pub struct HunkLocation {
start_line_old: u32,
count_old: u32,
start_line_new: u32,
count_new: u32,
}
impl<'a> DiffLine<'a> {
pub fn parse(line: &'a str) -> Self {
Self::try_parse(line).unwrap_or(Self::Garbage(line))
}
fn try_parse(line: &'a str) -> Option<Self> {
if let Some(header) = line.strip_prefix("---").and_then(eat_required_whitespace) {
let path = parse_header_path("a/", header);
Some(Self::OldPath { path })
} else if let Some(header) = line.strip_prefix("+++").and_then(eat_required_whitespace) {
Some(Self::NewPath {
path: parse_header_path("b/", header),
})
} else if let Some(header) = line.strip_prefix("@@").and_then(eat_required_whitespace) {
if header.starts_with("...") {
return Some(Self::HunkHeader(None));
}
let (start_line_old, header) = header.strip_prefix('-')?.split_once(',')?;
let mut parts = header.split_ascii_whitespace();
let count_old = parts.next()?;
let (start_line_new, count_new) = parts.next()?.strip_prefix('+')?.split_once(',')?;
Some(Self::HunkHeader(Some(HunkLocation {
start_line_old: start_line_old.parse::<u32>().ok()?.saturating_sub(1),
count_old: count_old.parse().ok()?,
start_line_new: start_line_new.parse::<u32>().ok()?.saturating_sub(1),
count_new: count_new.parse().ok()?,
})))
} else if let Some(deleted_header) = line.strip_prefix("-") {
Some(Self::Deletion(deleted_header))
} else if line.is_empty() {
Some(Self::Context(""))
} else if let Some(context) = line.strip_prefix(" ") {
Some(Self::Context(context))
} else {
Some(Self::Addition(line.strip_prefix("+")?))
}
}
}
impl<'a> Display for DiffLine<'a> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
DiffLine::OldPath { path } => write!(f, "--- {path}"),
DiffLine::NewPath { path } => write!(f, "+++ {path}"),
DiffLine::HunkHeader(Some(hunk_location)) => {
write!(
f,
"@@ -{},{} +{},{} @@",
hunk_location.start_line_old + 1,
hunk_location.count_old,
hunk_location.start_line_new + 1,
hunk_location.count_new
)
}
DiffLine::HunkHeader(None) => write!(f, "@@ ... @@"),
DiffLine::Context(content) => write!(f, " {content}"),
DiffLine::Deletion(content) => write!(f, "-{content}"),
DiffLine::Addition(content) => write!(f, "+{content}"),
DiffLine::Garbage(line) => write!(f, "{line}"),
}
}
}
fn parse_header_path<'a>(strip_prefix: &'static str, header: &'a str) -> Cow<'a, str> {
if !header.contains(['"', '\\']) {
let path = header.split_ascii_whitespace().next().unwrap_or(header);
return Cow::Borrowed(path.strip_prefix(strip_prefix).unwrap_or(path));
}
let mut path = String::with_capacity(header.len());
let mut in_quote = false;
let mut chars = header.chars().peekable();
let mut strip_prefix = Some(strip_prefix);
while let Some(char) = chars.next() {
if char == '"' {
in_quote = !in_quote;
} else if char == '\\' {
let Some(&next_char) = chars.peek() else {
break;
};
chars.next();
path.push(next_char);
} else if char.is_ascii_whitespace() && !in_quote {
break;
} else {
path.push(char);
}
if let Some(prefix) = strip_prefix
&& path == prefix
{
strip_prefix.take();
path.clear();
}
}
Cow::Owned(path)
}
fn eat_required_whitespace(header: &str) -> Option<&str> {
let trimmed = header.trim_ascii_start();
if trimmed.len() == header.len() {
None
} else {
Some(trimmed)
}
}
#[cfg(test)]
mod tests {
use super::*;
use indoc::indoc;
#[test]
fn parse_lines_simple() {
let input = indoc! {"
diff --git a/text.txt b/text.txt
index 86c770d..a1fd855 100644
--- a/file.txt
+++ b/file.txt
@@ -1,2 +1,3 @@
context
-deleted
+inserted
garbage
--- b/file.txt
+++ a/file.txt
"};
let lines = input.lines().map(DiffLine::parse).collect::<Vec<_>>();
pretty_assertions::assert_eq!(
lines,
&[
DiffLine::Garbage("diff --git a/text.txt b/text.txt"),
DiffLine::Garbage("index 86c770d..a1fd855 100644"),
DiffLine::OldPath {
path: "file.txt".into()
},
DiffLine::NewPath {
path: "file.txt".into()
},
DiffLine::HunkHeader(Some(HunkLocation {
start_line_old: 0,
count_old: 2,
start_line_new: 0,
count_new: 3
})),
DiffLine::Context("context"),
DiffLine::Deletion("deleted"),
DiffLine::Addition("inserted"),
DiffLine::Garbage("garbage"),
DiffLine::Context(""),
DiffLine::OldPath {
path: "b/file.txt".into()
},
DiffLine::NewPath {
path: "a/file.txt".into()
},
]
);
}
#[test]
fn file_header_extra_space() {
let options = ["--- file", "--- file", "---\tfile"];
for option in options {
pretty_assertions::assert_eq!(
DiffLine::parse(option),
DiffLine::OldPath {
path: "file".into()
},
"{option}",
);
}
}
#[test]
fn hunk_header_extra_space() {
let options = [
"@@ -1,2 +1,3 @@",
"@@ -1,2 +1,3 @@",
"@@\t-1,2\t+1,3\t@@",
"@@ -1,2 +1,3 @@",
"@@ -1,2 +1,3 @@",
"@@ -1,2 +1,3 @@",
"@@ -1,2 +1,3 @@ garbage",
];
for option in options {
pretty_assertions::assert_eq!(
DiffLine::parse(option),
DiffLine::HunkHeader(Some(HunkLocation {
start_line_old: 0,
count_old: 2,
start_line_new: 0,
count_new: 3
})),
"{option}",
);
}
}
#[test]
fn hunk_header_without_location() {
pretty_assertions::assert_eq!(DiffLine::parse("@@ ... @@"), DiffLine::HunkHeader(None));
}
#[test]
fn test_parse_path() {
assert_eq!(parse_header_path("a/", "foo.txt"), "foo.txt");
assert_eq!(
parse_header_path("a/", "foo/bar/baz.txt"),
"foo/bar/baz.txt"
);
assert_eq!(parse_header_path("a/", "a/foo.txt"), "foo.txt");
assert_eq!(
parse_header_path("a/", "a/foo/bar/baz.txt"),
"foo/bar/baz.txt"
);
// Extra
assert_eq!(
parse_header_path("a/", "a/foo/bar/baz.txt 2025"),
"foo/bar/baz.txt"
);
assert_eq!(
parse_header_path("a/", "a/foo/bar/baz.txt\t2025"),
"foo/bar/baz.txt"
);
assert_eq!(
parse_header_path("a/", "a/foo/bar/baz.txt \""),
"foo/bar/baz.txt"
);
// Quoted
assert_eq!(
parse_header_path("a/", "a/foo/bar/\"baz quox.txt\""),
"foo/bar/baz quox.txt"
);
assert_eq!(
parse_header_path("a/", "\"a/foo/bar/baz quox.txt\""),
"foo/bar/baz quox.txt"
);
assert_eq!(
parse_header_path("a/", "\"foo/bar/baz quox.txt\""),
"foo/bar/baz quox.txt"
);
assert_eq!(parse_header_path("a/", "\"whatever 🤷\""), "whatever 🤷");
assert_eq!(
parse_header_path("a/", "\"foo/bar/baz quox.txt\" 2025"),
"foo/bar/baz quox.txt"
);
// unescaped quotes are dropped
assert_eq!(parse_header_path("a/", "foo/\"bar\""), "foo/bar");
// Escaped
assert_eq!(
parse_header_path("a/", "\"foo/\\\"bar\\\"/baz.txt\""),
"foo/\"bar\"/baz.txt"
);
assert_eq!(
parse_header_path("a/", "\"C:\\\\Projects\\\\My App\\\\old file.txt\""),
"C:\\Projects\\My App\\old file.txt"
);
}
}
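As a quick orientation to this new parser, here is a hedged usage sketch exercising only the `parse` and `Display` behavior shown above. The `udiff` import path is an assumption for illustration; adjust it to wherever `pub mod udiff` is actually exposed.

use udiff::DiffLine; // assumed import path, not confirmed by this change

fn main() {
    // A hunk header parses even with trailing garbage, and Display re-renders it
    // in canonical 1-based form (start lines are stored 0-based internally).
    let header = DiffLine::parse("@@ -1,2 +1,3 @@ garbage");
    assert!(matches!(header, DiffLine::HunkHeader(Some(_))));
    assert_eq!(header.to_string(), "@@ -1,2 +1,3 @@");

    // Lines outside the unified-diff grammar are preserved verbatim as Garbage.
    let noise = DiffLine::parse("diff --git a/text.txt b/text.txt");
    assert_eq!(noise.to_string(), "diff --git a/text.txt b/text.txt");
}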

View File

@@ -182,8 +182,8 @@ pub fn build_prompt(
}
for related_file in &request.included_files {
writeln!(&mut prompt, "`````filename={}", related_file.path.display()).unwrap();
write_excerpts(
write_codeblock(
&related_file.path,
&related_file.excerpts,
if related_file.path == request.excerpt_path {
&insertions
@@ -194,7 +194,6 @@ pub fn build_prompt(
request.prompt_format == PromptFormat::NumLinesUniDiff,
&mut prompt,
);
write!(&mut prompt, "`````\n\n").unwrap();
}
}
@@ -205,6 +204,25 @@ pub fn build_prompt(
Ok((prompt, section_labels))
}
pub fn write_codeblock<'a>(
path: &Path,
excerpts: impl IntoIterator<Item = &'a Excerpt>,
sorted_insertions: &[(Point, &str)],
file_line_count: Line,
include_line_numbers: bool,
output: &'a mut String,
) {
writeln!(output, "`````{}", path.display()).unwrap();
write_excerpts(
excerpts,
sorted_insertions,
file_line_count,
include_line_numbers,
output,
);
write!(output, "`````\n\n").unwrap();
}
pub fn write_excerpts<'a>(
excerpts: impl IntoIterator<Item = &'a Excerpt>,
sorted_insertions: &[(Point, &str)],
@@ -597,8 +615,7 @@ impl<'a> SyntaxBasedPrompt<'a> {
disjoint_snippets.push(current_snippet);
}
// TODO: remove filename=?
writeln!(output, "`````filename={}", file_path.display()).ok();
writeln!(output, "`````path={}", file_path.display()).ok();
let mut skipped_last_snippet = false;
for (snippet, range) in disjoint_snippets {
let section_index = section_ranges.len();

View File

@@ -66,6 +66,14 @@ impl CodestralCompletionProvider {
Self::api_key(cx).is_some()
}
/// This is so we can immediately show Codestral as a provider users can
/// switch to in the edit prediction menu, if the API key has been added
pub fn ensure_api_key_loaded(http_client: Arc<dyn HttpClient>, cx: &mut App) {
MistralLanguageModelProvider::global(http_client, cx)
.load_codestral_api_key(cx)
.detach();
}
fn api_key(cx: &App) -> Option<Arc<str>> {
MistralLanguageModelProvider::try_global(cx)
.and_then(|provider| provider.codestral_api_key(CODESTRAL_API_URL, cx))

View File

@@ -17,12 +17,14 @@ use editor::{
use fs::Fs;
use futures::{SinkExt, StreamExt, channel::mpsc, lock::Mutex};
use git::repository::repo_path;
use gpui::{App, Rgba, TestAppContext, UpdateGlobal, VisualContext, VisualTestContext};
use gpui::{
App, Rgba, SharedString, TestAppContext, UpdateGlobal, VisualContext, VisualTestContext,
};
use indoc::indoc;
use language::FakeLspAdapter;
use lsp::LSP_REQUEST_TIMEOUT;
use project::{
ProjectPath, SERVER_PROGRESS_THROTTLE_TIMEOUT,
ProgressToken, ProjectPath, SERVER_PROGRESS_THROTTLE_TIMEOUT,
lsp_store::lsp_ext_command::{ExpandedMacro, LspExtExpandMacro},
};
use recent_projects::disconnected_overlay::DisconnectedOverlay;
@@ -37,6 +39,7 @@ use std::{
Arc,
atomic::{self, AtomicBool, AtomicUsize},
},
time::Duration,
};
use text::Point;
use util::{path, rel_path::rel_path, uri};
@@ -1283,12 +1286,14 @@ async fn test_language_server_statuses(cx_a: &mut TestAppContext, cx_b: &mut Tes
});
executor.run_until_parked();
let token = ProgressToken::String(SharedString::from("the-token"));
project_a.read_with(cx_a, |project, cx| {
let status = project.language_server_statuses(cx).next().unwrap().1;
assert_eq!(status.name.0, "the-language-server");
assert_eq!(status.pending_work.len(), 1);
assert_eq!(
status.pending_work["the-token"].message.as_ref().unwrap(),
status.pending_work[&token].message.as_ref().unwrap(),
"the-message"
);
});
@@ -1322,7 +1327,7 @@ async fn test_language_server_statuses(cx_a: &mut TestAppContext, cx_b: &mut Tes
assert_eq!(status.name.0, "the-language-server");
assert_eq!(status.pending_work.len(), 1);
assert_eq!(
status.pending_work["the-token"].message.as_ref().unwrap(),
status.pending_work[&token].message.as_ref().unwrap(),
"the-message-2"
);
});
@@ -1332,7 +1337,7 @@ async fn test_language_server_statuses(cx_a: &mut TestAppContext, cx_b: &mut Tes
assert_eq!(status.name.0, "the-language-server");
assert_eq!(status.pending_work.len(), 1);
assert_eq!(
status.pending_work["the-token"].message.as_ref().unwrap(),
status.pending_work[&token].message.as_ref().unwrap(),
"the-message-2"
);
});
@@ -1813,14 +1818,7 @@ async fn test_mutual_editor_inlay_hint_cache_update(
settings.project.all_languages.defaults.inlay_hints =
Some(InlayHintSettingsContent {
enabled: Some(true),
show_value_hints: Some(true),
edit_debounce_ms: Some(0),
scroll_debounce_ms: Some(0),
show_type_hints: Some(true),
show_parameter_hints: Some(false),
show_other_hints: Some(true),
show_background: Some(false),
toggle_on_modifiers_press: None,
..InlayHintSettingsContent::default()
})
});
});
@@ -1830,15 +1828,8 @@ async fn test_mutual_editor_inlay_hint_cache_update(
store.update_user_settings(cx, |settings| {
settings.project.all_languages.defaults.inlay_hints =
Some(InlayHintSettingsContent {
show_value_hints: Some(true),
enabled: Some(true),
edit_debounce_ms: Some(0),
scroll_debounce_ms: Some(0),
show_type_hints: Some(true),
show_parameter_hints: Some(false),
show_other_hints: Some(true),
show_background: Some(false),
toggle_on_modifiers_press: None,
..InlayHintSettingsContent::default()
})
});
});
@@ -1931,6 +1922,7 @@ async fn test_mutual_editor_inlay_hint_cache_update(
});
let fake_language_server = fake_language_servers.next().await.unwrap();
let editor_a = file_a.await.unwrap().downcast::<Editor>().unwrap();
executor.advance_clock(Duration::from_millis(100));
executor.run_until_parked();
let initial_edit = edits_made.load(atomic::Ordering::Acquire);
@@ -1951,6 +1943,7 @@ async fn test_mutual_editor_inlay_hint_cache_update(
.downcast::<Editor>()
.unwrap();
executor.advance_clock(Duration::from_millis(100));
executor.run_until_parked();
editor_b.update(cx_b, |editor, cx| {
assert_eq!(
@@ -1969,6 +1962,7 @@ async fn test_mutual_editor_inlay_hint_cache_update(
});
cx_b.focus(&editor_b);
executor.advance_clock(Duration::from_secs(1));
executor.run_until_parked();
editor_a.update(cx_a, |editor, cx| {
assert_eq!(
@@ -1992,6 +1986,7 @@ async fn test_mutual_editor_inlay_hint_cache_update(
});
cx_a.focus(&editor_a);
executor.advance_clock(Duration::from_secs(1));
executor.run_until_parked();
editor_a.update(cx_a, |editor, cx| {
assert_eq!(
@@ -2013,6 +2008,7 @@ async fn test_mutual_editor_inlay_hint_cache_update(
.into_response()
.expect("inlay refresh request failed");
executor.advance_clock(Duration::from_secs(1));
executor.run_until_parked();
editor_a.update(cx_a, |editor, cx| {
assert_eq!(

View File

@@ -18,6 +18,7 @@ use project::Project;
use rpc::proto::ChannelVisibility;
use std::{
any::{Any, TypeId},
rc::Rc,
sync::Arc,
};
use ui::prelude::*;
@@ -515,12 +516,7 @@ impl Item for ChannelView {
})))
}
fn navigate(
&mut self,
data: Box<dyn Any>,
window: &mut Window,
cx: &mut Context<Self>,
) -> bool {
fn navigate(&mut self, data: Rc<dyn Any>, window: &mut Window, cx: &mut Context<Self>) -> bool {
self.editor
.update(cx, |editor, cx| editor.navigate(data, window, cx))
}

View File

@@ -54,6 +54,10 @@ actions!(
CollapseSelectedChannel,
/// Expands the selected channel in the tree view.
ExpandSelectedChannel,
/// Opens the meeting notes for the selected channel in the panel.
///
/// Use `collab::OpenChannelNotes` to open the channel notes for the current call.
OpenSelectedChannelNotes,
/// Starts moving a channel to a new location.
StartMoveChannel,
/// Moves the selected item to the current location.
@@ -1856,6 +1860,17 @@ impl CollabPanel {
}
}
fn open_selected_channel_notes(
&mut self,
_: &OpenSelectedChannelNotes,
window: &mut Window,
cx: &mut Context<Self>,
) {
if let Some(channel) = self.selected_channel() {
self.open_channel_notes(channel.id, window, cx);
}
}
fn set_channel_visibility(
&mut self,
channel_id: ChannelId,
@@ -2976,6 +2991,7 @@ impl Render for CollabPanel {
.on_action(cx.listener(CollabPanel::remove_selected_channel))
.on_action(cx.listener(CollabPanel::show_inline_context_menu))
.on_action(cx.listener(CollabPanel::rename_selected_channel))
.on_action(cx.listener(CollabPanel::open_selected_channel_notes))
.on_action(cx.listener(CollabPanel::collapse_selected_channel))
.on_action(cx.listener(CollabPanel::expand_selected_channel))
.on_action(cx.listener(CollabPanel::start_move_selected_channel))

View File

@@ -20,6 +20,7 @@ command_palette_hooks.workspace = true
db.workspace = true
fuzzy.workspace = true
gpui.workspace = true
menu.workspace = true
log.workspace = true
picker.workspace = true
postage.workspace = true

View File

@@ -22,7 +22,7 @@ use persistence::COMMAND_PALETTE_HISTORY;
use picker::{Picker, PickerDelegate};
use postage::{sink::Sink, stream::Stream};
use settings::Settings;
use ui::{HighlightedLabel, KeyBinding, ListItem, ListItemSpacing, h_flex, prelude::*, v_flex};
use ui::{HighlightedLabel, KeyBinding, ListItem, ListItemSpacing, prelude::*};
use util::ResultExt;
use workspace::{ModalView, Workspace, WorkspaceSettings};
use zed_actions::{OpenZedUrl, command_palette::Toggle};
@@ -143,7 +143,7 @@ impl Focusable for CommandPalette {
}
impl Render for CommandPalette {
fn render(&mut self, _window: &mut Window, _cx: &mut Context<Self>) -> impl IntoElement {
fn render(&mut self, _window: &mut Window, _: &mut Context<Self>) -> impl IntoElement {
v_flex()
.key_context("CommandPalette")
.w(rems(34.))
@@ -261,6 +261,17 @@ impl CommandPaletteDelegate {
HashMap::new()
}
}
fn selected_command(&self) -> Option<&Command> {
let action_ix = self
.matches
.get(self.selected_ix)
.map(|m| m.candidate_id)
.unwrap_or(self.selected_ix);
// this gets called in headless tests where there are no commands loaded
// so we need to return an Option here
self.commands.get(action_ix)
}
}
impl PickerDelegate for CommandPaletteDelegate {
@@ -411,7 +422,20 @@ impl PickerDelegate for CommandPaletteDelegate {
.log_err();
}
fn confirm(&mut self, _: bool, window: &mut Window, cx: &mut Context<Picker<Self>>) {
fn confirm(&mut self, secondary: bool, window: &mut Window, cx: &mut Context<Picker<Self>>) {
if secondary {
let Some(selected_command) = self.selected_command() else {
return;
};
let action_name = selected_command.action.name();
let open_keymap = Box::new(zed_actions::ChangeKeybinding {
action: action_name.to_string(),
});
window.dispatch_action(open_keymap, cx);
self.dismissed(window, cx);
return;
}
if self.matches.is_empty() {
self.dismissed(window, cx);
return;
@@ -448,6 +472,7 @@ impl PickerDelegate for CommandPaletteDelegate {
) -> Option<Self::ListItem> {
let matching_command = self.matches.get(ix)?;
let command = self.commands.get(matching_command.candidate_id)?;
Some(
ListItem::new(ix)
.inset(true)
@@ -470,6 +495,59 @@ impl PickerDelegate for CommandPaletteDelegate {
),
)
}
fn render_footer(
&self,
window: &mut Window,
cx: &mut Context<Picker<Self>>,
) -> Option<AnyElement> {
let selected_command = self.selected_command()?;
let keybind =
KeyBinding::for_action_in(&*selected_command.action, &self.previous_focus_handle, cx);
let focus_handle = &self.previous_focus_handle;
let keybinding_buttons = if keybind.has_binding(window) {
Button::new("change", "Change Keybinding…")
.key_binding(
KeyBinding::for_action_in(&menu::SecondaryConfirm, focus_handle, cx)
.map(|kb| kb.size(rems_from_px(12.))),
)
.on_click(move |_, window, cx| {
window.dispatch_action(menu::SecondaryConfirm.boxed_clone(), cx);
})
} else {
Button::new("add", "Add Keybinding…")
.key_binding(
KeyBinding::for_action_in(&menu::SecondaryConfirm, focus_handle, cx)
.map(|kb| kb.size(rems_from_px(12.))),
)
.on_click(move |_, window, cx| {
window.dispatch_action(menu::SecondaryConfirm.boxed_clone(), cx);
})
};
Some(
h_flex()
.w_full()
.p_1p5()
.gap_1()
.justify_end()
.border_t_1()
.border_color(cx.theme().colors().border_variant)
.child(keybinding_buttons)
.child(
Button::new("run-action", "Run")
.key_binding(
KeyBinding::for_action_in(&menu::Confirm, &focus_handle, cx)
.map(|kb| kb.size(rems_from_px(12.))),
)
.on_click(|_, window, cx| {
window.dispatch_action(menu::Confirm.boxed_clone(), cx)
}),
)
.into_any(),
)
}
}
pub fn humanize_action_name(name: &str) -> String {

View File

@@ -489,7 +489,11 @@ impl Copilot {
let node_path = node_runtime.binary_path().await?;
ensure_node_version_for_copilot(&node_path).await?;
let arguments: Vec<OsString> = vec![server_path.into(), "--stdio".into()];
let arguments: Vec<OsString> = vec![
"--experimental-sqlite".into(),
server_path.into(),
"--stdio".into(),
];
let binary = LanguageServerBinary {
path: node_path,
arguments,

View File

@@ -41,6 +41,10 @@ util.workspace = true
[dev-dependencies]
dap = { workspace = true, features = ["test-support"] }
fs = { workspace = true, features = ["test-support"] }
gpui = { workspace = true, features = ["test-support"] }
http_client.workspace = true
node_runtime.workspace = true
settings = { workspace = true, features = ["test-support"] }
task = { workspace = true, features = ["test-support"] }
util = { workspace = true, features = ["test-support"] }

View File

@@ -4,6 +4,8 @@ mod go;
mod javascript;
mod python;
#[cfg(test)]
use std::path::PathBuf;
use std::sync::Arc;
use anyhow::Result;
@@ -38,3 +40,65 @@ pub fn init(cx: &mut App) {
}
})
}
#[cfg(test)]
mod test_mocks {
use super::*;
pub(crate) struct MockDelegate {
worktree_root: PathBuf,
}
impl MockDelegate {
pub(crate) fn new() -> Arc<dyn adapters::DapDelegate> {
Arc::new(Self {
worktree_root: PathBuf::from("/tmp/test"),
})
}
}
#[async_trait::async_trait]
impl adapters::DapDelegate for MockDelegate {
fn worktree_id(&self) -> settings::WorktreeId {
settings::WorktreeId::from_usize(0)
}
fn worktree_root_path(&self) -> &std::path::Path {
&self.worktree_root
}
fn http_client(&self) -> Arc<dyn http_client::HttpClient> {
unimplemented!("Not needed for tests")
}
fn node_runtime(&self) -> node_runtime::NodeRuntime {
unimplemented!("Not needed for tests")
}
fn toolchain_store(&self) -> Arc<dyn language::LanguageToolchainStore> {
unimplemented!("Not needed for tests")
}
fn fs(&self) -> Arc<dyn fs::Fs> {
unimplemented!("Not needed for tests")
}
fn output_to_console(&self, _msg: String) {}
async fn which(&self, _command: &std::ffi::OsStr) -> Option<PathBuf> {
None
}
async fn read_text_file(&self, _path: &util::rel_path::RelPath) -> Result<String> {
Ok(String::new())
}
async fn shell_env(&self) -> collections::HashMap<String, String> {
collections::HashMap::default()
}
fn is_headless(&self) -> bool {
false
}
}
}

View File

@@ -23,6 +23,11 @@ use std::{
use util::command::new_smol_command;
use util::{ResultExt, paths::PathStyle, rel_path::RelPath};
enum DebugpyLaunchMode<'a> {
Normal,
AttachWithConnect { host: Option<&'a str> },
}
#[derive(Default)]
pub(crate) struct PythonDebugAdapter {
base_venv_path: OnceCell<Result<Arc<Path>, String>>,
@@ -36,10 +41,11 @@ impl PythonDebugAdapter {
const LANGUAGE_NAME: &'static str = "Python";
async fn generate_debugpy_arguments(
host: &Ipv4Addr,
async fn generate_debugpy_arguments<'a>(
host: &'a Ipv4Addr,
port: u16,
user_installed_path: Option<&Path>,
launch_mode: DebugpyLaunchMode<'a>,
user_installed_path: Option<&'a Path>,
user_args: Option<Vec<String>>,
) -> Result<Vec<String>> {
let mut args = if let Some(user_installed_path) = user_installed_path {
@@ -62,7 +68,20 @@ impl PythonDebugAdapter {
args.extend(if let Some(args) = user_args {
args
} else {
vec![format!("--host={}", host), format!("--port={}", port)]
match launch_mode {
DebugpyLaunchMode::Normal => {
vec![format!("--host={}", host), format!("--port={}", port)]
}
DebugpyLaunchMode::AttachWithConnect { host } => {
let mut args = vec!["connect".to_string()];
if let Some(host) = host {
args.push(format!("{host}:"));
}
args.push(format!("{port}"));
args
}
}
});
Ok(args)
}
@@ -315,7 +334,46 @@ impl PythonDebugAdapter {
user_env: Option<HashMap<String, String>>,
python_from_toolchain: Option<String>,
) -> Result<DebugAdapterBinary> {
let tcp_connection = config.tcp_connection.clone().unwrap_or_default();
let mut tcp_connection = config.tcp_connection.clone().unwrap_or_default();
let (config_port, config_host) = config
.config
.get("connect")
.map(|value| {
(
value
.get("port")
.and_then(|val| val.as_u64().map(|p| p as u16)),
value.get("host").and_then(|val| val.as_str()),
)
})
.unwrap_or_else(|| {
(
config
.config
.get("port")
.and_then(|port| port.as_u64().map(|p| p as u16)),
config.config.get("host").and_then(|host| host.as_str()),
)
});
let is_attach_with_connect = if config
.config
.get("request")
.is_some_and(|val| val.as_str().is_some_and(|request| request == "attach"))
{
if tcp_connection.host.is_some() && config_host.is_some() {
bail!("Cannot have two different hosts in debug configuration")
} else if tcp_connection.port.is_some() && config_port.is_some() {
bail!("Cannot have two different ports in debug configuration")
}
tcp_connection.port = config_port;
DebugpyLaunchMode::AttachWithConnect { host: config_host }
} else {
DebugpyLaunchMode::Normal
};
let (host, port, timeout) = crate::configure_tcp_connection(tcp_connection).await?;
let python_path = if let Some(toolchain) = python_from_toolchain {
@@ -330,6 +388,7 @@ impl PythonDebugAdapter {
let arguments = Self::generate_debugpy_arguments(
&host,
port,
is_attach_with_connect,
user_installed_path.as_deref(),
user_args,
)
@@ -765,29 +824,58 @@ impl DebugAdapter for PythonDebugAdapter {
.await;
}
let base_path = config
.config
.get("cwd")
.and_then(|cwd| {
RelPath::new(
cwd.as_str()
.map(Path::new)?
.strip_prefix(delegate.worktree_root_path())
.ok()?,
PathStyle::local(),
)
.ok()
let base_paths = ["cwd", "program", "module"]
.into_iter()
.filter_map(|key| {
config.config.get(key).and_then(|cwd| {
RelPath::new(
cwd.as_str()
.map(Path::new)?
.strip_prefix(delegate.worktree_root_path())
.ok()?,
PathStyle::local(),
)
.ok()
})
})
.unwrap_or_else(|| RelPath::empty().into());
let toolchain = delegate
.toolchain_store()
.active_toolchain(
delegate.worktree_id(),
base_path.into_arc(),
language::LanguageName::new(Self::LANGUAGE_NAME),
cx,
.chain(
// While Debugpy's wiki says absolute paths are required, it actually supports relative paths when cwd is passed in.
// (Which should always be the case, because Zed defaults cwd to the worktree root.)
// So we want to check that these relative paths find toolchains as well; otherwise they won't be checked,
// because the strip_prefix call in the iteration above will return an error.
config
.config
.get("cwd")
.map(|_| {
["program", "module"].into_iter().filter_map(|key| {
config.config.get(key).and_then(|value| {
let path = Path::new(value.as_str()?);
RelPath::new(path, PathStyle::local()).ok()
})
})
})
.into_iter()
.flatten(),
)
.await;
.chain([RelPath::empty().into()]);
let mut toolchain = None;
for base_path in base_paths {
if let Some(found_toolchain) = delegate
.toolchain_store()
.active_toolchain(
delegate.worktree_id(),
base_path.into_arc(),
language::LanguageName::new(Self::LANGUAGE_NAME),
cx,
)
.await
{
toolchain = Some(found_toolchain);
break;
}
}
self.fetch_debugpy_whl(toolchain.clone(), delegate)
.await
@@ -824,7 +912,148 @@ mod tests {
use util::path;
use super::*;
use std::{net::Ipv4Addr, path::PathBuf};
use task::TcpArgumentsTemplate;
#[gpui::test]
async fn test_tcp_connection_conflict_with_connect_args() {
let adapter = PythonDebugAdapter {
base_venv_path: OnceCell::new(),
debugpy_whl_base_path: OnceCell::new(),
};
let config_with_port_conflict = json!({
"request": "attach",
"connect": {
"port": 5679
}
});
let tcp_connection = TcpArgumentsTemplate {
host: None,
port: Some(5678),
timeout: None,
};
let task_def = DebugTaskDefinition {
label: "test".into(),
adapter: PythonDebugAdapter::ADAPTER_NAME.into(),
config: config_with_port_conflict,
tcp_connection: Some(tcp_connection.clone()),
};
let result = adapter
.get_installed_binary(
&test_mocks::MockDelegate::new(),
&task_def,
None,
None,
None,
Some("python3".to_string()),
)
.await;
assert!(result.is_err());
assert!(
result
.unwrap_err()
.to_string()
.contains("Cannot have two different ports")
);
let host = Ipv4Addr::new(127, 0, 0, 1);
let config_with_host_conflict = json!({
"request": "attach",
"connect": {
"host": "192.168.1.1",
"port": 5678
}
});
let tcp_connection_with_host = TcpArgumentsTemplate {
host: Some(host),
port: None,
timeout: None,
};
let task_def_host = DebugTaskDefinition {
label: "test".into(),
adapter: PythonDebugAdapter::ADAPTER_NAME.into(),
config: config_with_host_conflict,
tcp_connection: Some(tcp_connection_with_host),
};
let result_host = adapter
.get_installed_binary(
&test_mocks::MockDelegate::new(),
&task_def_host,
None,
None,
None,
Some("python3".to_string()),
)
.await;
assert!(result_host.is_err());
assert!(
result_host
.unwrap_err()
.to_string()
.contains("Cannot have two different hosts")
);
}
#[gpui::test]
async fn test_attach_with_connect_mode_generates_correct_arguments() {
let host = Ipv4Addr::new(127, 0, 0, 1);
let port = 5678;
let args_without_host = PythonDebugAdapter::generate_debugpy_arguments(
&host,
port,
DebugpyLaunchMode::AttachWithConnect { host: None },
None,
None,
)
.await
.unwrap();
let expected_suffix = path!("debug_adapters/Debugpy/debugpy/adapter");
assert!(args_without_host[0].ends_with(expected_suffix));
assert_eq!(args_without_host[1], "connect");
assert_eq!(args_without_host[2], "5678");
let args_with_host = PythonDebugAdapter::generate_debugpy_arguments(
&host,
port,
DebugpyLaunchMode::AttachWithConnect {
host: Some("192.168.1.100"),
},
None,
None,
)
.await
.unwrap();
assert!(args_with_host[0].ends_with(expected_suffix));
assert_eq!(args_with_host[1], "connect");
assert_eq!(args_with_host[2], "192.168.1.100:");
assert_eq!(args_with_host[3], "5678");
let args_normal = PythonDebugAdapter::generate_debugpy_arguments(
&host,
port,
DebugpyLaunchMode::Normal,
None,
None,
)
.await
.unwrap();
assert!(args_normal[0].ends_with(expected_suffix));
assert_eq!(args_normal[1], "--host=127.0.0.1");
assert_eq!(args_normal[2], "--port=5678");
assert!(!args_normal.contains(&"connect".to_string()));
}
#[gpui::test]
async fn test_debugpy_install_path_cases() {
@@ -833,15 +1062,25 @@ mod tests {
// Case 1: User-defined debugpy path (highest precedence)
let user_path = PathBuf::from("/custom/path/to/debugpy/src/debugpy/adapter");
let user_args =
PythonDebugAdapter::generate_debugpy_arguments(&host, port, Some(&user_path), None)
.await
.unwrap();
let user_args = PythonDebugAdapter::generate_debugpy_arguments(
&host,
port,
DebugpyLaunchMode::Normal,
Some(&user_path),
None,
)
.await
.unwrap();
// Case 2: Venv-installed debugpy (uses -m debugpy.adapter)
let venv_args = PythonDebugAdapter::generate_debugpy_arguments(&host, port, None, None)
.await
.unwrap();
let venv_args = PythonDebugAdapter::generate_debugpy_arguments(
&host,
port,
DebugpyLaunchMode::Normal,
None,
None,
)
.await
.unwrap();
assert_eq!(user_args[0], "/custom/path/to/debugpy/src/debugpy/adapter");
assert_eq!(user_args[1], "--host=127.0.0.1");
@@ -856,6 +1095,7 @@ mod tests {
let user_args = PythonDebugAdapter::generate_debugpy_arguments(
&host,
port,
DebugpyLaunchMode::Normal,
Some(&user_path),
Some(vec!["foo".into()]),
)
@@ -864,6 +1104,7 @@ mod tests {
let venv_args = PythonDebugAdapter::generate_debugpy_arguments(
&host,
port,
DebugpyLaunchMode::Normal,
None,
Some(vec!["foo".into()]),
)
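For orientation on the new attach-with-connect path above, here is a hedged sketch of a debug configuration that would select it. The field names (`request`, `connect.host`, `connect.port`) follow what the adapter reads from `config`; the concrete values and the use of `serde_json::json!` are illustrative assumptions, not part of this change.

use serde_json::json;

fn main() {
    // With "request": "attach" and a nested "connect" object, the adapter chooses
    // DebugpyLaunchMode::AttachWithConnect and emits `connect <host>: <port>`
    // arguments instead of the usual `--host=.. --port=..` pair.
    let config = json!({
        "request": "attach",
        "connect": {
            "host": "127.0.0.1",
            "port": 5678
        }
    });
    println!("{config}");
}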

View File

@@ -1029,11 +1029,13 @@ impl SearchableItem for DapLogView {
&mut self,
index: usize,
matches: &[Self::Match],
collapse: bool,
window: &mut Window,
cx: &mut Context<Self>,
) {
self.editor
.update(cx, |e, cx| e.activate_match(index, matches, window, cx))
self.editor.update(cx, |e, cx| {
e.activate_match(index, matches, collapse, window, cx)
})
}
fn select_matches(

View File

@@ -697,6 +697,7 @@ impl Render for NewProcessModal {
.justify_between()
.border_t_1()
.border_color(cx.theme().colors().border_variant);
let secondary_action = menu::SecondaryConfirm.boxed_clone();
match self.mode {
NewProcessMode::Launch => el.child(
container
@@ -706,6 +707,7 @@ impl Render for NewProcessModal {
.on_click(cx.listener(|this, _, window, cx| {
this.save_debug_scenario(window, cx);
}))
.key_binding(KeyBinding::for_action(&*secondary_action, cx))
.disabled(
self.debugger.is_none()
|| self
@@ -749,7 +751,6 @@ impl Render for NewProcessModal {
container
.child(div().child({
Button::new("edit-attach-task", "Edit in debug.json")
.label_size(LabelSize::Small)
.key_binding(KeyBinding::for_action(&*secondary_action, cx))
.on_click(move |_, window, cx| {
window.dispatch_action(secondary_action.boxed_clone(), cx)
@@ -1192,7 +1193,7 @@ impl PickerDelegate for DebugDelegate {
}
fn placeholder_text(&self, _window: &mut Window, _cx: &mut App) -> std::sync::Arc<str> {
"Find a debug task, or debug a command.".into()
"Find a debug task, or debug a command".into()
}
fn update_matches(
@@ -1453,18 +1454,17 @@ impl PickerDelegate for DebugDelegate {
.child({
let action = menu::SecondaryConfirm.boxed_clone();
if self.matches.is_empty() {
Button::new("edit-debug-json", "Edit debug.json")
.label_size(LabelSize::Small)
.on_click(cx.listener(|_picker, _, window, cx| {
Button::new("edit-debug-json", "Edit debug.json").on_click(cx.listener(
|_picker, _, window, cx| {
window.dispatch_action(
zed_actions::OpenProjectDebugTasks.boxed_clone(),
cx,
);
cx.emit(DismissEvent);
}))
},
))
} else {
Button::new("edit-debug-task", "Edit in debug.json")
.label_size(LabelSize::Small)
.key_binding(KeyBinding::for_action(&*action, cx))
.on_click(move |_, window, cx| {
window.dispatch_action(action.boxed_clone(), cx)

View File

@@ -12,6 +12,7 @@ use gpui::{
Action, AppContext, ClickEvent, Entity, FocusHandle, Focusable, MouseButton, ScrollStrategy,
Task, UniformListScrollHandle, WeakEntity, actions, uniform_list,
};
use itertools::Itertools;
use language::Point;
use project::{
Project,
@@ -24,7 +25,7 @@ use project::{
};
use ui::{
Divider, DividerColor, FluentBuilder as _, Indicator, IntoElement, ListItem, Render,
StatefulInteractiveElement, Tooltip, WithScrollbar, prelude::*,
ScrollAxes, StatefulInteractiveElement, Tooltip, WithScrollbar, prelude::*,
};
use util::rel_path::RelPath;
use workspace::Workspace;
@@ -55,6 +56,7 @@ pub(crate) struct BreakpointList {
focus_handle: FocusHandle,
scroll_handle: UniformListScrollHandle,
selected_ix: Option<usize>,
max_width_index: Option<usize>,
input: Entity<Editor>,
strip_mode: Option<ActiveBreakpointStripMode>,
serialize_exception_breakpoints_task: Option<Task<anyhow::Result<()>>>,
@@ -95,6 +97,7 @@ impl BreakpointList {
dap_store,
worktree_store,
breakpoints: Default::default(),
max_width_index: None,
workspace,
session,
focus_handle,
@@ -546,7 +549,7 @@ impl BreakpointList {
.session
.as_ref()
.map(|session| SupportedBreakpointProperties::from(session.read(cx).capabilities()))
.unwrap_or_else(SupportedBreakpointProperties::empty);
.unwrap_or_else(SupportedBreakpointProperties::all);
let strip_mode = self.strip_mode;
uniform_list(
@@ -570,6 +573,8 @@ impl BreakpointList {
.collect()
}),
)
.with_horizontal_sizing_behavior(gpui::ListHorizontalSizingBehavior::Unconstrained)
.with_width_from_item(self.max_width_index)
.track_scroll(self.scroll_handle.clone())
.flex_1()
}
@@ -732,6 +737,26 @@ impl Render for BreakpointList {
.chain(exception_breakpoints),
);
let text_pixels = ui::TextSize::Default.pixels(cx).to_f64() as f32;
self.max_width_index = self
.breakpoints
.iter()
.map(|entry| match &entry.kind {
BreakpointEntryKind::LineBreakpoint(line_bp) => {
let name_and_line = format!("{}:{}", line_bp.name, line_bp.line);
let dir_len = line_bp.dir.as_ref().map(|d| d.len()).unwrap_or(0);
(name_and_line.len() + dir_len) as f32 * text_pixels
}
BreakpointEntryKind::ExceptionBreakpoint(exc_bp) => {
exc_bp.data.label.len() as f32 * text_pixels
}
BreakpointEntryKind::DataBreakpoint(data_bp) => {
data_bp.0.context.human_readable_label().len() as f32 * text_pixels
}
})
.position_max_by(|left, right| left.total_cmp(right));
v_flex()
.id("breakpoint-list")
.key_context("BreakpointList")
@@ -749,7 +774,14 @@ impl Render for BreakpointList {
.size_full()
.pt_1()
.child(self.render_list(cx))
.vertical_scrollbar_for(self.scroll_handle.clone(), window, cx)
.custom_scrollbars(
ui::Scrollbars::new(ScrollAxes::Both)
.tracked_scroll_handle(self.scroll_handle.clone())
.with_track_along(ScrollAxes::Both, cx.theme().colors().panel_background)
.tracked_entity(cx.entity_id()),
window,
cx,
)
.when_some(self.strip_mode, |this, _| {
this.child(Divider::horizontal().color(DividerColor::Border))
.child(
@@ -1376,8 +1408,10 @@ impl RenderOnce for BreakpointOptionsStrip {
h_flex()
.gap_px()
.mr_3() // Space to avoid overlapping with the scrollbar
.child(
div()
.justify_end()
.when(has_logs || self.is_selected, |this| {
this.child(
div()
.map(self.add_focus_styles(
ActiveBreakpointStripMode::Log,
supports_logs,
@@ -1406,45 +1440,46 @@ impl RenderOnce for BreakpointOptionsStrip {
)
}),
)
.when(!has_logs && !self.is_selected, |this| this.invisible()),
)
.child(
div()
.map(self.add_focus_styles(
ActiveBreakpointStripMode::Condition,
supports_condition,
window,
cx,
))
.child(
IconButton::new(
SharedString::from(format!("{id}-condition-toggle")),
IconName::SplitAlt,
)
.shape(ui::IconButtonShape::Square)
.style(style_for_toggle(
)
})
.when(has_condition || self.is_selected, |this| {
this.child(
div()
.map(self.add_focus_styles(
ActiveBreakpointStripMode::Condition,
has_condition,
supports_condition,
window,
cx,
))
.icon_size(IconSize::Small)
.icon_color(color_for_toggle(has_condition))
.when(has_condition, |this| this.indicator(Indicator::dot().color(Color::Info)))
.disabled(!supports_condition)
.toggle_state(self.is_toggled(ActiveBreakpointStripMode::Condition))
.on_click(self.on_click_callback(ActiveBreakpointStripMode::Condition))
.tooltip(|_window, cx| {
Tooltip::with_meta(
"Set Condition",
None,
"Set condition to evaluate when a breakpoint is hit. Program execution will stop only when the condition is met.",
cx,
.child(
IconButton::new(
SharedString::from(format!("{id}-condition-toggle")),
IconName::SplitAlt,
)
}),
)
.when(!has_condition && !self.is_selected, |this| this.invisible()),
)
.child(
div()
.shape(ui::IconButtonShape::Square)
.style(style_for_toggle(
ActiveBreakpointStripMode::Condition,
has_condition,
))
.icon_size(IconSize::Small)
.icon_color(color_for_toggle(has_condition))
.when(has_condition, |this| this.indicator(Indicator::dot().color(Color::Info)))
.disabled(!supports_condition)
.toggle_state(self.is_toggled(ActiveBreakpointStripMode::Condition))
.on_click(self.on_click_callback(ActiveBreakpointStripMode::Condition))
.tooltip(|_window, cx| {
Tooltip::with_meta(
"Set Condition",
None,
"Set condition to evaluate when a breakpoint is hit. Program execution will stop only when the condition is met.",
cx,
)
}),
)
)
})
.when(has_hit_condition || self.is_selected, |this| {
this.child(div()
.map(self.add_focus_styles(
ActiveBreakpointStripMode::HitCondition,
supports_hit_condition,
@@ -1475,10 +1510,8 @@ impl RenderOnce for BreakpointOptionsStrip {
cx,
)
}),
)
.when(!has_hit_condition && !self.is_selected, |this| {
this.invisible()
}),
)
))
})
}
}

View File

@@ -6,7 +6,10 @@ use alacritty_terminal::vte::ansi;
use anyhow::Result;
use collections::HashMap;
use dap::{CompletionItem, CompletionItemType, OutputEvent};
use editor::{Bias, CompletionProvider, Editor, EditorElement, EditorStyle, ExcerptId};
use editor::{
Bias, CompletionProvider, Editor, EditorElement, EditorMode, EditorStyle, ExcerptId,
SizingBehavior,
};
use fuzzy::StringMatchCandidate;
use gpui::{
Action as _, AppContext, Context, Corner, Entity, FocusHandle, Focusable, HighlightStyle, Hsla,
@@ -59,6 +62,11 @@ impl Console {
) -> Self {
let console = cx.new(|cx| {
let mut editor = Editor::multi_line(window, cx);
editor.set_mode(EditorMode::Full {
scale_ui_elements_with_buffer_font_size: true,
show_active_line_background: true,
sizing_behavior: SizingBehavior::ExcludeOverscrollMargin,
});
editor.move_to_end(&editor::actions::MoveToEnd, window, cx);
editor.set_read_only(true);
editor.disable_scrollbars_and_minimap(window, cx);

View File

@@ -10,8 +10,9 @@ use std::{
use editor::{Editor, EditorElement, EditorStyle};
use gpui::{
Action, Along, AppContext, Axis, DismissEvent, DragMoveEvent, Empty, Entity, FocusHandle,
Focusable, MouseButton, Point, ScrollStrategy, ScrollWheelEvent, Subscription, Task, TextStyle,
UniformList, UniformListScrollHandle, WeakEntity, actions, anchored, deferred, uniform_list,
Focusable, ListHorizontalSizingBehavior, MouseButton, Point, ScrollStrategy, ScrollWheelEvent,
Subscription, Task, TextStyle, UniformList, UniformListScrollHandle, WeakEntity, actions,
anchored, deferred, uniform_list,
};
use notifications::status_toast::{StatusToast, ToastIcon};
use project::debugger::{MemoryCell, dap_command::DataBreakpointContext, session::Session};
@@ -229,6 +230,7 @@ impl MemoryView {
},
)
.track_scroll(view_state.scroll_handle)
.with_horizontal_sizing_behavior(ListHorizontalSizingBehavior::Unconstrained)
.on_scroll_wheel(cx.listener(|this, evt: &ScrollWheelEvent, window, _| {
let mut view_state = this.view_state();
let delta = evt.delta.pixel_delta(window.line_height());
@@ -917,7 +919,17 @@ impl Render for MemoryView {
)
.with_priority(1)
}))
.vertical_scrollbar_for(self.view_state_handle.clone(), window, cx),
.custom_scrollbars(
ui::Scrollbars::new(ui::ScrollAxes::Both)
.tracked_scroll_handle(self.view_state_handle.clone())
.with_track_along(
ui::ScrollAxes::Both,
cx.theme().colors().panel_background,
)
.tracked_entity(cx.entity_id()),
window,
cx,
),
)
}
}

View File

@@ -11,15 +11,18 @@ use gpui::{
FocusHandle, Focusable, Hsla, MouseDownEvent, Point, Subscription, TextStyleRefinement,
UniformListScrollHandle, WeakEntity, actions, anchored, deferred, uniform_list,
};
use itertools::Itertools;
use menu::{SelectFirst, SelectLast, SelectNext, SelectPrevious};
use project::debugger::{
dap_command::DataBreakpointContext,
session::{Session, SessionEvent, Watcher},
};
use std::{collections::HashMap, ops::Range, sync::Arc};
use ui::{ContextMenu, ListItem, ScrollableHandle, Tooltip, WithScrollbar, prelude::*};
use ui::{ContextMenu, ListItem, ScrollAxes, ScrollableHandle, Tooltip, WithScrollbar, prelude::*};
use util::{debug_panic, maybe};
static INDENT_STEP_SIZE: Pixels = px(10.0);
actions!(
variable_list,
[
@@ -185,6 +188,7 @@ struct VariableColor {
pub struct VariableList {
entries: Vec<ListEntry>,
max_width_index: Option<usize>,
entry_states: HashMap<EntryPath, EntryState>,
selected_stack_frame_id: Option<StackFrameId>,
list_handle: UniformListScrollHandle,
@@ -243,6 +247,7 @@ impl VariableList {
disabled: false,
edited_path: None,
entries: Default::default(),
max_width_index: None,
entry_states: Default::default(),
weak_running,
memory_view,
@@ -368,6 +373,26 @@ impl VariableList {
}
self.entries = entries;
let text_pixels = ui::TextSize::Default.pixels(cx).to_f64() as f32;
let indent_size = INDENT_STEP_SIZE.to_f64() as f32;
self.max_width_index = self
.entries
.iter()
.map(|entry| match &entry.entry {
DapEntry::Scope(scope) => scope.name.len() as f32 * text_pixels,
DapEntry::Variable(variable) => {
(variable.value.len() + variable.name.len()) as f32 * text_pixels
+ (entry.path.indices.len() as f32 * indent_size)
}
DapEntry::Watcher(watcher) => {
(watcher.value.len() + watcher.expression.len()) as f32 * text_pixels
+ (entry.path.indices.len() as f32 * indent_size)
}
})
.position_max_by(|left, right| left.total_cmp(right));
cx.notify();
}
@@ -1244,7 +1269,7 @@ impl VariableList {
.disabled(self.disabled)
.selectable(false)
.indent_level(state.depth)
.indent_step_size(px(10.))
.indent_step_size(INDENT_STEP_SIZE)
.always_show_disclosure_icon(true)
.when(var_ref > 0, |list_item| {
list_item.toggle(state.is_expanded).on_toggle(cx.listener({
@@ -1445,7 +1470,7 @@ impl VariableList {
.disabled(self.disabled)
.selectable(false)
.indent_level(state.depth)
.indent_step_size(px(10.))
.indent_step_size(INDENT_STEP_SIZE)
.always_show_disclosure_icon(true)
.when(var_ref > 0, |list_item| {
list_item.toggle(state.is_expanded).on_toggle(cx.listener({
@@ -1507,7 +1532,6 @@ impl Render for VariableList {
.key_context("VariableList")
.id("variable-list")
.group("variable-list")
.overflow_y_scroll()
.size_full()
.on_action(cx.listener(Self::select_first))
.on_action(cx.listener(Self::select_last))
@@ -1533,6 +1557,9 @@ impl Render for VariableList {
}),
)
.track_scroll(self.list_handle.clone())
.with_width_from_item(self.max_width_index)
.with_sizing_behavior(gpui::ListSizingBehavior::Auto)
.with_horizontal_sizing_behavior(gpui::ListHorizontalSizingBehavior::Unconstrained)
.gap_1_5()
.size_full()
.flex_grow(),
@@ -1546,7 +1573,15 @@ impl Render for VariableList {
)
.with_priority(1)
}))
.vertical_scrollbar_for(self.list_handle.clone(), window, cx)
// .vertical_scrollbar_for(self.list_handle.clone(), window, cx)
.custom_scrollbars(
ui::Scrollbars::new(ScrollAxes::Both)
.tracked_scroll_handle(self.list_handle.clone())
.with_track_along(ScrollAxes::Both, cx.theme().colors().panel_background)
.tracked_entity(cx.entity_id()),
window,
cx,
)
}
}

View File

@@ -1,4 +1,7 @@
use std::any::{Any, TypeId};
use std::{
any::{Any, TypeId},
rc::Rc,
};
use collections::HashMap;
use dap::StackFrameId;
@@ -331,12 +334,7 @@ impl Item for StackTraceView {
.update(cx, |editor, cx| editor.deactivated(window, cx));
}
fn navigate(
&mut self,
data: Box<dyn Any>,
window: &mut Window,
cx: &mut Context<Self>,
) -> bool {
fn navigate(&mut self, data: Rc<dyn Any>, window: &mut Window, cx: &mut Context<Self>) -> bool {
self.editor
.update(cx, |editor, cx| editor.navigate(data, window, cx))
}

View File

@@ -34,6 +34,7 @@ theme.workspace = true
ui.workspace = true
util.workspace = true
workspace.workspace = true
itertools.workspace = true
[dev-dependencies]
client = { workspace = true, features = ["test-support"] }

View File

@@ -1,5 +1,5 @@
use crate::{
DIAGNOSTICS_UPDATE_DELAY, IncludeWarnings, ToggleWarnings, context_range_for_entry,
DIAGNOSTICS_UPDATE_DEBOUNCE, IncludeWarnings, ToggleWarnings, context_range_for_entry,
diagnostic_renderer::{DiagnosticBlock, DiagnosticRenderer},
toolbar_controls::DiagnosticsToolbarEditor,
};
@@ -24,6 +24,7 @@ use settings::Settings;
use std::{
any::{Any, TypeId},
cmp::Ordering,
rc::Rc,
sync::Arc,
};
use text::{Anchor, BufferSnapshot, OffsetRangeExt};
@@ -283,7 +284,7 @@ impl BufferDiagnosticsEditor {
self.update_excerpts_task = Some(cx.spawn_in(window, async move |editor, cx| {
cx.background_executor()
.timer(DIAGNOSTICS_UPDATE_DELAY)
.timer(DIAGNOSTICS_UPDATE_DEBOUNCE)
.await;
if let Some(buffer) = buffer {
@@ -734,12 +735,7 @@ impl Item for BufferDiagnosticsEditor {
self.multibuffer.read(cx).is_dirty(cx)
}
fn navigate(
&mut self,
data: Box<dyn Any>,
window: &mut Window,
cx: &mut Context<Self>,
) -> bool {
fn navigate(&mut self, data: Rc<dyn Any>, window: &mut Window, cx: &mut Context<Self>) -> bool {
self.editor
.update(cx, |editor, cx| editor.navigate(data, window, cx))
}
@@ -938,10 +934,6 @@ impl DiagnosticsToolbarEditor for WeakEntity<BufferDiagnosticsEditor> {
.unwrap_or(false)
}
fn has_stale_excerpts(&self, _cx: &App) -> bool {
false
}
fn is_updating(&self, cx: &App) -> bool {
self.read_with(cx, |buffer_diagnostics_editor, cx| {
buffer_diagnostics_editor.update_excerpts_task.is_some()

View File

@@ -9,7 +9,7 @@ mod diagnostics_tests;
use anyhow::Result;
use buffer_diagnostics::BufferDiagnosticsEditor;
use collections::{BTreeSet, HashMap};
use collections::{BTreeSet, HashMap, HashSet};
use diagnostic_renderer::DiagnosticBlock;
use editor::{
Editor, EditorEvent, ExcerptRange, MultiBuffer, PathKey,
@@ -17,10 +17,11 @@ use editor::{
multibuffer_context_lines,
};
use gpui::{
AnyElement, AnyView, App, AsyncApp, Context, Entity, EventEmitter, FocusHandle, Focusable,
Global, InteractiveElement, IntoElement, ParentElement, Render, SharedString, Styled,
Subscription, Task, WeakEntity, Window, actions, div,
AnyElement, AnyView, App, AsyncApp, Context, Entity, EventEmitter, FocusHandle, FocusOutEvent,
Focusable, Global, InteractiveElement, IntoElement, ParentElement, Render, SharedString,
Styled, Subscription, Task, WeakEntity, Window, actions, div,
};
use itertools::Itertools as _;
use language::{
Bias, Buffer, BufferRow, BufferSnapshot, DiagnosticEntry, DiagnosticEntryRef, Point,
ToTreeSitterPoint,
@@ -32,8 +33,9 @@ use project::{
use settings::Settings;
use std::{
any::{Any, TypeId},
cmp::{self, Ordering},
cmp,
ops::{Range, RangeInclusive},
rc::Rc,
sync::Arc,
time::Duration,
};
@@ -89,8 +91,8 @@ pub(crate) struct ProjectDiagnosticsEditor {
impl EventEmitter<EditorEvent> for ProjectDiagnosticsEditor {}
const DIAGNOSTICS_UPDATE_DELAY: Duration = Duration::from_millis(50);
const DIAGNOSTICS_SUMMARY_UPDATE_DELAY: Duration = Duration::from_millis(30);
const DIAGNOSTICS_UPDATE_DEBOUNCE: Duration = Duration::from_millis(50);
const DIAGNOSTICS_SUMMARY_UPDATE_DEBOUNCE: Duration = Duration::from_millis(30);
impl Render for ProjectDiagnosticsEditor {
fn render(&mut self, _: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
@@ -149,6 +151,12 @@ impl Render for ProjectDiagnosticsEditor {
}
}
#[derive(PartialEq, Eq, Copy, Clone, Debug)]
enum RetainExcerpts {
Yes,
No,
}
impl ProjectDiagnosticsEditor {
pub fn register(
workspace: &mut Workspace,
@@ -165,14 +173,21 @@ impl ProjectDiagnosticsEditor {
window: &mut Window,
cx: &mut Context<Self>,
) -> Self {
let project_event_subscription =
cx.subscribe_in(&project_handle, window, |this, _project, event, window, cx| match event {
let project_event_subscription = cx.subscribe_in(
&project_handle,
window,
|this, _project, event, window, cx| match event {
project::Event::DiskBasedDiagnosticsStarted { .. } => {
cx.notify();
}
project::Event::DiskBasedDiagnosticsFinished { language_server_id } => {
log::debug!("disk based diagnostics finished for server {language_server_id}");
this.update_stale_excerpts(window, cx);
this.close_diagnosticless_buffers(
window,
cx,
this.editor.focus_handle(cx).contains_focused(window, cx)
|| this.focus_handle.contains_focused(window, cx),
);
}
project::Event::DiagnosticsUpdated {
language_server_id,
@@ -181,34 +196,39 @@ impl ProjectDiagnosticsEditor {
this.paths_to_update.extend(paths.clone());
this.diagnostic_summary_update = cx.spawn(async move |this, cx| {
cx.background_executor()
.timer(DIAGNOSTICS_SUMMARY_UPDATE_DELAY)
.timer(DIAGNOSTICS_SUMMARY_UPDATE_DEBOUNCE)
.await;
this.update(cx, |this, cx| {
this.update_diagnostic_summary(cx);
})
.log_err();
});
cx.emit(EditorEvent::TitleChanged);
if this.editor.focus_handle(cx).contains_focused(window, cx) || this.focus_handle.contains_focused(window, cx) {
log::debug!("diagnostics updated for server {language_server_id}, paths {paths:?}. recording change");
} else {
log::debug!("diagnostics updated for server {language_server_id}, paths {paths:?}. updating excerpts");
this.update_stale_excerpts(window, cx);
}
log::debug!(
"diagnostics updated for server {language_server_id}, \
paths {paths:?}. updating excerpts"
);
let focused = this.editor.focus_handle(cx).contains_focused(window, cx)
|| this.focus_handle.contains_focused(window, cx);
this.update_stale_excerpts(
if focused {
RetainExcerpts::Yes
} else {
RetainExcerpts::No
},
window,
cx,
);
}
_ => {}
});
},
);
let focus_handle = cx.focus_handle();
cx.on_focus_in(&focus_handle, window, |this, window, cx| {
this.focus_in(window, cx)
})
.detach();
cx.on_focus_out(&focus_handle, window, |this, _event, window, cx| {
this.focus_out(window, cx)
})
.detach();
cx.on_focus_in(&focus_handle, window, Self::focus_in)
.detach();
cx.on_focus_out(&focus_handle, window, Self::focus_out)
.detach();
let excerpts = cx.new(|cx| MultiBuffer::new(project_handle.read(cx).capability()));
let editor = cx.new(|cx| {
@@ -238,8 +258,11 @@ impl ProjectDiagnosticsEditor {
window.focus(&this.focus_handle);
}
}
EditorEvent::Blurred => this.update_stale_excerpts(window, cx),
EditorEvent::Saved => this.update_stale_excerpts(window, cx),
EditorEvent::Blurred => this.close_diagnosticless_buffers(window, cx, false),
EditorEvent::Saved => this.close_diagnosticless_buffers(window, cx, true),
EditorEvent::SelectionsChanged { .. } => {
this.close_diagnosticless_buffers(window, cx, true)
}
_ => {}
}
},
@@ -283,15 +306,67 @@ impl ProjectDiagnosticsEditor {
this
}
fn update_stale_excerpts(&mut self, window: &mut Window, cx: &mut Context<Self>) {
if self.update_excerpts_task.is_some() || self.multibuffer.read(cx).is_dirty(cx) {
/// Closes all excerpts of buffers that:
/// - have no diagnostics anymore
/// - are saved (not dirty)
/// - and, if `retain_selections` is true, do not have selections within them
fn close_diagnosticless_buffers(
&mut self,
_window: &mut Window,
cx: &mut Context<Self>,
retain_selections: bool,
) {
let buffer_ids = self.multibuffer.read(cx).all_buffer_ids();
let selected_buffers = self.editor.update(cx, |editor, cx| {
editor
.selections
.all_anchors(cx)
.iter()
.filter_map(|anchor| anchor.start.buffer_id)
.collect::<HashSet<_>>()
});
for buffer_id in buffer_ids {
if retain_selections && selected_buffers.contains(&buffer_id) {
continue;
}
let has_diagnostics = self
.blocks
.get(&buffer_id)
.is_some_and(|blocks| !blocks.is_empty());
if has_diagnostics {
continue;
}
let is_dirty = self
.multibuffer
.read(cx)
.buffer(buffer_id)
.is_some_and(|buffer| buffer.read(cx).is_dirty());
if is_dirty {
continue;
}
self.multibuffer.update(cx, |b, cx| {
b.remove_excerpts_for_buffer(buffer_id, cx);
});
}
}
fn update_stale_excerpts(
&mut self,
mut retain_excerpts: RetainExcerpts,
window: &mut Window,
cx: &mut Context<Self>,
) {
if self.update_excerpts_task.is_some() {
return;
}
if self.multibuffer.read(cx).is_dirty(cx) {
retain_excerpts = RetainExcerpts::Yes;
}
let project_handle = self.project.clone();
self.update_excerpts_task = Some(cx.spawn_in(window, async move |this, cx| {
cx.background_executor()
.timer(DIAGNOSTICS_UPDATE_DELAY)
.timer(DIAGNOSTICS_UPDATE_DEBOUNCE)
.await;
loop {
let Some(path) = this.update(cx, |this, cx| {
@@ -312,7 +387,7 @@ impl ProjectDiagnosticsEditor {
.log_err()
{
this.update_in(cx, |this, window, cx| {
this.update_excerpts(buffer, window, cx)
this.update_excerpts(buffer, retain_excerpts, window, cx)
})?
.await?;
}
@@ -378,10 +453,10 @@ impl ProjectDiagnosticsEditor {
}
}
fn focus_out(&mut self, window: &mut Window, cx: &mut Context<Self>) {
fn focus_out(&mut self, _: FocusOutEvent, window: &mut Window, cx: &mut Context<Self>) {
if !self.focus_handle.is_focused(window) && !self.editor.focus_handle(cx).is_focused(window)
{
self.update_stale_excerpts(window, cx);
self.close_diagnosticless_buffers(window, cx, false);
}
}
@@ -403,12 +478,13 @@ impl ProjectDiagnosticsEditor {
});
}
}
multibuffer.clear(cx);
});
self.paths_to_update = project_paths;
});
self.update_stale_excerpts(window, cx);
self.update_stale_excerpts(RetainExcerpts::No, window, cx);
}
fn diagnostics_are_unchanged(
@@ -431,6 +507,7 @@ impl ProjectDiagnosticsEditor {
fn update_excerpts(
&mut self,
buffer: Entity<Buffer>,
retain_excerpts: RetainExcerpts,
window: &mut Window,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
@@ -497,24 +574,27 @@ impl ProjectDiagnosticsEditor {
)
})?;
for item in more {
let i = blocks
.binary_search_by(|probe| {
probe
.initial_range
.start
.cmp(&item.initial_range.start)
.then(probe.initial_range.end.cmp(&item.initial_range.end))
.then(Ordering::Greater)
})
.unwrap_or_else(|i| i);
blocks.insert(i, item);
}
blocks.extend(more);
}
let mut excerpt_ranges: Vec<ExcerptRange<Point>> = Vec::new();
let mut excerpt_ranges: Vec<ExcerptRange<Point>> = match retain_excerpts {
RetainExcerpts::No => Vec::new(),
RetainExcerpts::Yes => this.update(cx, |this, cx| {
this.multibuffer.update(cx, |multi_buffer, cx| {
multi_buffer
.excerpts_for_buffer(buffer_id, cx)
.into_iter()
.map(|(_, range)| ExcerptRange {
context: range.context.to_point(&buffer_snapshot),
primary: range.primary.to_point(&buffer_snapshot),
})
.collect()
})
})?,
};
let mut result_blocks = vec![None; excerpt_ranges.len()];
let context_lines = cx.update(|_, cx| multibuffer_context_lines(cx))?;
for b in blocks.iter() {
for b in blocks {
let excerpt_range = context_range_for_entry(
b.initial_range.clone(),
context_lines,
@@ -541,7 +621,8 @@ impl ProjectDiagnosticsEditor {
context: excerpt_range,
primary: b.initial_range.clone(),
},
)
);
result_blocks.insert(i, Some(b));
}
this.update_in(cx, |this, window, cx| {
@@ -562,7 +643,7 @@ impl ProjectDiagnosticsEditor {
)
});
#[cfg(test)]
let cloned_blocks = blocks.clone();
let cloned_blocks = result_blocks.clone();
if was_empty && let Some(anchor_range) = anchor_ranges.first() {
let range_to_select = anchor_range.start..anchor_range.start;
@@ -576,22 +657,20 @@ impl ProjectDiagnosticsEditor {
}
}
let editor_blocks =
anchor_ranges
.into_iter()
.zip(blocks.into_iter())
.map(|(anchor, block)| {
let editor = this.editor.downgrade();
BlockProperties {
placement: BlockPlacement::Near(anchor.start),
height: Some(1),
style: BlockStyle::Flex,
render: Arc::new(move |bcx| {
block.render_block(editor.clone(), bcx)
}),
priority: 1,
}
});
let editor_blocks = anchor_ranges
.into_iter()
.zip_eq(result_blocks.into_iter())
.filter_map(|(anchor, block)| {
let block = block?;
let editor = this.editor.downgrade();
Some(BlockProperties {
placement: BlockPlacement::Near(anchor.start),
height: Some(1),
style: BlockStyle::Flex,
render: Arc::new(move |bcx| block.render_block(editor.clone(), bcx)),
priority: 1,
})
});
let block_ids = this.editor.update(cx, |editor, cx| {
editor.display_map.update(cx, |display_map, cx| {
@@ -601,7 +680,9 @@ impl ProjectDiagnosticsEditor {
#[cfg(test)]
{
for (block_id, block) in block_ids.iter().zip(cloned_blocks.iter()) {
for (block_id, block) in
block_ids.iter().zip(cloned_blocks.into_iter().flatten())
{
let markdown = block.markdown.clone();
editor::test::set_block_content_for_tests(
&this.editor,
@@ -626,6 +707,7 @@ impl ProjectDiagnosticsEditor {
fn update_diagnostic_summary(&mut self, cx: &mut Context<Self>) {
self.summary = self.project.read(cx).diagnostic_summary(false, cx);
cx.emit(EditorEvent::TitleChanged);
}
}
@@ -647,12 +729,7 @@ impl Item for ProjectDiagnosticsEditor {
.update(cx, |editor, cx| editor.deactivated(window, cx));
}
fn navigate(
&mut self,
data: Box<dyn Any>,
window: &mut Window,
cx: &mut Context<Self>,
) -> bool {
fn navigate(&mut self, data: Rc<dyn Any>, window: &mut Window, cx: &mut Context<Self>) -> bool {
self.editor
.update(cx, |editor, cx| editor.navigate(data, window, cx))
}
@@ -843,13 +920,6 @@ impl DiagnosticsToolbarEditor for WeakEntity<ProjectDiagnosticsEditor> {
.unwrap_or(false)
}
fn has_stale_excerpts(&self, cx: &App) -> bool {
self.read_with(cx, |project_diagnostics_editor, _cx| {
!project_diagnostics_editor.paths_to_update.is_empty()
})
.unwrap_or(false)
}
fn is_updating(&self, cx: &App) -> bool {
self.read_with(cx, |project_diagnostics_editor, cx| {
project_diagnostics_editor.update_excerpts_task.is_some()
@@ -1010,12 +1080,6 @@ async fn heuristic_syntactic_expand(
return;
}
}
log::info!(
"Expanding to ancestor started on {} node\
exceeding row limit of {max_row_count}.",
node.grammar_name()
);
*ancestor_range = Some(None);
}
})


@@ -119,7 +119,7 @@ async fn test_diagnostics(cx: &mut TestAppContext) {
let editor = diagnostics.update(cx, |diagnostics, _| diagnostics.editor.clone());
diagnostics
.next_notification(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10), cx)
.next_notification(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10), cx)
.await;
pretty_assertions::assert_eq!(
@@ -190,7 +190,7 @@ async fn test_diagnostics(cx: &mut TestAppContext) {
});
diagnostics
.next_notification(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10), cx)
.next_notification(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10), cx)
.await;
pretty_assertions::assert_eq!(
@@ -277,7 +277,7 @@ async fn test_diagnostics(cx: &mut TestAppContext) {
});
diagnostics
.next_notification(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10), cx)
.next_notification(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10), cx)
.await;
pretty_assertions::assert_eq!(
@@ -391,7 +391,7 @@ async fn test_diagnostics_with_folds(cx: &mut TestAppContext) {
// Only the first language server's diagnostics are shown.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.executor().run_until_parked();
editor.update_in(cx, |editor, window, cx| {
editor.fold_ranges(vec![Point::new(0, 0)..Point::new(3, 0)], false, window, cx);
@@ -490,7 +490,7 @@ async fn test_diagnostics_multiple_servers(cx: &mut TestAppContext) {
// Only the first language server's diagnostics are shown.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.executor().run_until_parked();
pretty_assertions::assert_eq!(
@@ -530,7 +530,7 @@ async fn test_diagnostics_multiple_servers(cx: &mut TestAppContext) {
// Both language server's diagnostics are shown.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.executor().run_until_parked();
pretty_assertions::assert_eq!(
@@ -587,7 +587,7 @@ async fn test_diagnostics_multiple_servers(cx: &mut TestAppContext) {
// Only the first language server's diagnostics are updated.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.executor().run_until_parked();
pretty_assertions::assert_eq!(
@@ -629,7 +629,7 @@ async fn test_diagnostics_multiple_servers(cx: &mut TestAppContext) {
// Both language servers' diagnostics are updated.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.executor().run_until_parked();
pretty_assertions::assert_eq!(
@@ -760,7 +760,7 @@ async fn test_random_diagnostics_blocks(cx: &mut TestAppContext, mut rng: StdRng
.unwrap()
});
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.run_until_parked();
}
@@ -769,7 +769,7 @@ async fn test_random_diagnostics_blocks(cx: &mut TestAppContext, mut rng: StdRng
log::info!("updating mutated diagnostics view");
mutated_diagnostics.update_in(cx, |diagnostics, window, cx| {
diagnostics.update_stale_excerpts(window, cx)
diagnostics.update_stale_excerpts(RetainExcerpts::No, window, cx)
});
log::info!("constructing reference diagnostics view");
@@ -777,7 +777,7 @@ async fn test_random_diagnostics_blocks(cx: &mut TestAppContext, mut rng: StdRng
ProjectDiagnosticsEditor::new(true, project.clone(), workspace.downgrade(), window, cx)
});
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.run_until_parked();
let mutated_excerpts =
@@ -789,7 +789,12 @@ async fn test_random_diagnostics_blocks(cx: &mut TestAppContext, mut rng: StdRng
// The mutated view may contain more than the reference view as
// we don't currently shrink excerpts when diagnostics were removed.
let mut ref_iter = reference_excerpts.lines().filter(|line| *line != "§ -----");
let mut ref_iter = reference_excerpts.lines().filter(|line| {
// ignore "§ -----" and "§ <file>.rs" header lines
!line.starts_with('§')
|| line.starts_with("§ diagnostic")
|| line.starts_with("§ related info")
});
let mut next_ref_line = ref_iter.next();
let mut skipped_block = false;
@@ -797,7 +802,12 @@ async fn test_random_diagnostics_blocks(cx: &mut TestAppContext, mut rng: StdRng
if let Some(ref_line) = next_ref_line {
if mut_line == ref_line {
next_ref_line = ref_iter.next();
} else if mut_line.contains('§') && mut_line != "§ -----" {
} else if mut_line.contains('§')
// ignore "§ -----" and "§ <file>.rs" header lines
&& (!mut_line.starts_with('§')
|| mut_line.starts_with("§ diagnostic")
|| mut_line.starts_with("§ related info"))
{
skipped_block = true;
}
}
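
As an aside, here is a minimal sketch (not part of the diff) of the line filter both of these hunks apply, pulled out as a standalone predicate: excerpt separators such as "§ -----" and file headers such as "§ main.rs" are skipped, while "§ diagnostic" and "§ related info" rows are kept for comparison. The function name and the sample file name are illustrative only.

fn is_comparable_line(line: &str) -> bool {
    // keep plain buffer text, diagnostic blocks, and related-info blocks;
    // drop excerpt separators ("§ -----") and file headers ("§ main.rs")
    !line.starts_with('§')
        || line.starts_with("§ diagnostic")
        || line.starts_with("§ related info")
}

#[test]
fn filters_header_lines() {
    assert!(is_comparable_line("fn main() {}"));
    assert!(is_comparable_line("§ diagnostic: unused variable"));
    assert!(!is_comparable_line("§ -----"));
    assert!(!is_comparable_line("§ main.rs"));
}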
@@ -949,7 +959,7 @@ async fn test_random_diagnostics_with_inlays(cx: &mut TestAppContext, mut rng: S
.unwrap()
});
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.run_until_parked();
}
@@ -958,11 +968,11 @@ async fn test_random_diagnostics_with_inlays(cx: &mut TestAppContext, mut rng: S
log::info!("updating mutated diagnostics view");
mutated_diagnostics.update_in(cx, |diagnostics, window, cx| {
diagnostics.update_stale_excerpts(window, cx)
diagnostics.update_stale_excerpts(RetainExcerpts::No, window, cx)
});
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
cx.run_until_parked();
}
@@ -1427,7 +1437,7 @@ async fn test_diagnostics_with_code(cx: &mut TestAppContext) {
let editor = diagnostics.update(cx, |diagnostics, _| diagnostics.editor.clone());
diagnostics
.next_notification(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10), cx)
.next_notification(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10), cx)
.await;
// Verify that the diagnostic codes are displayed correctly
@@ -1704,7 +1714,7 @@ async fn test_buffer_diagnostics(cx: &mut TestAppContext) {
// wait a little bit to ensure that the buffer diagnostic's editor content
// is rendered.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
pretty_assertions::assert_eq!(
editor_content_with_blocks(&editor, cx),
@@ -1837,7 +1847,7 @@ async fn test_buffer_diagnostics_without_warnings(cx: &mut TestAppContext) {
// wait a little bit to ensure that the buffer diagnostic's editor content
// is rendered.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
pretty_assertions::assert_eq!(
editor_content_with_blocks(&editor, cx),
@@ -1971,7 +1981,7 @@ async fn test_buffer_diagnostics_multiple_servers(cx: &mut TestAppContext) {
// wait a little bit to ensure that the buffer diagnostic's editor content
// is rendered.
cx.executor()
.advance_clock(DIAGNOSTICS_UPDATE_DELAY + Duration::from_millis(10));
.advance_clock(DIAGNOSTICS_UPDATE_DEBOUNCE + Duration::from_millis(10));
pretty_assertions::assert_eq!(
editor_content_with_blocks(&editor, cx),


@@ -16,9 +16,6 @@ pub(crate) trait DiagnosticsToolbarEditor: Send + Sync {
/// Toggles whether warning diagnostics should be displayed by the
/// diagnostics editor.
fn toggle_warnings(&self, window: &mut Window, cx: &mut App);
/// Indicates whether any of the excerpts displayed by the diagnostics
/// editor are stale.
fn has_stale_excerpts(&self, cx: &App) -> bool;
/// Indicates whether the diagnostics editor is currently updating the
/// diagnostics.
fn is_updating(&self, cx: &App) -> bool;
@@ -37,14 +34,12 @@ pub(crate) trait DiagnosticsToolbarEditor: Send + Sync {
impl Render for ToolbarControls {
fn render(&mut self, _: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
let mut has_stale_excerpts = false;
let mut include_warnings = false;
let mut is_updating = false;
match &self.editor {
Some(editor) => {
include_warnings = editor.include_warnings(cx);
has_stale_excerpts = editor.has_stale_excerpts(cx);
is_updating = editor.is_updating(cx);
}
None => {}
@@ -86,7 +81,6 @@ impl Render for ToolbarControls {
IconButton::new("refresh-diagnostics", IconName::ArrowCircle)
.icon_color(Color::Info)
.shape(IconButtonShape::Square)
.disabled(!has_stale_excerpts)
.tooltip(Tooltip::for_action_title(
"Refresh diagnostics",
&ToggleDiagnosticsRefresh,


@@ -1,5 +1,5 @@
use anyhow::Result;
use client::{UserStore, zed_urls};
use client::{Client, UserStore, zed_urls};
use cloud_llm_client::UsageLimit;
use codestral::CodestralCompletionProvider;
use copilot::{Copilot, Status};
@@ -192,6 +192,7 @@ impl Render for EditPredictionButton {
Some(ContextMenu::build(window, cx, |menu, _, _| {
let fs = fs.clone();
let activate_url = activate_url.clone();
menu.entry("Sign In", None, move |_, cx| {
cx.open_url(activate_url.as_str())
})
@@ -244,15 +245,8 @@ impl Render for EditPredictionButton {
} else {
Some(ContextMenu::build(window, cx, |menu, _, _| {
let fs = fs.clone();
menu.entry("Use Zed AI instead", None, move |_, cx| {
set_completion_provider(
fs.clone(),
cx,
EditPredictionProvider::Zed,
)
})
.separator()
.entry(
menu.entry(
"Configure Codestral API Key",
None,
move |window, cx| {
@@ -262,6 +256,18 @@ impl Render for EditPredictionButton {
);
},
)
.separator()
.entry(
"Use Zed AI instead",
None,
move |_, cx| {
set_completion_provider(
fs.clone(),
cx,
EditPredictionProvider::Zed,
)
},
)
}))
}
})
@@ -412,6 +418,7 @@ impl EditPredictionButton {
fs: Arc<dyn Fs>,
user_store: Entity<UserStore>,
popover_menu_handle: PopoverMenuHandle<ContextMenu>,
client: Arc<Client>,
cx: &mut Context<Self>,
) -> Self {
if let Some(copilot) = Copilot::global(cx) {
@@ -421,6 +428,8 @@ impl EditPredictionButton {
cx.observe_global::<SettingsStore>(move |_, cx| cx.notify())
.detach();
CodestralCompletionProvider::ensure_api_key_loaded(client.http_client(), cx);
Self {
editor_subscription: None,
editor_enabled: None,
@@ -435,6 +444,89 @@ impl EditPredictionButton {
}
}
fn get_available_providers(&self, cx: &App) -> Vec<EditPredictionProvider> {
let mut providers = Vec::new();
providers.push(EditPredictionProvider::Zed);
if let Some(copilot) = Copilot::global(cx) {
if matches!(copilot.read(cx).status(), Status::Authorized) {
providers.push(EditPredictionProvider::Copilot);
}
}
if let Some(supermaven) = Supermaven::global(cx) {
if let Supermaven::Spawned(agent) = supermaven.read(cx) {
if matches!(agent.account_status, AccountStatus::Ready) {
providers.push(EditPredictionProvider::Supermaven);
}
}
}
if CodestralCompletionProvider::has_api_key(cx) {
providers.push(EditPredictionProvider::Codestral);
}
providers
}
fn add_provider_switching_section(
&self,
mut menu: ContextMenu,
current_provider: EditPredictionProvider,
cx: &App,
) -> ContextMenu {
let available_providers = self.get_available_providers(cx);
let other_providers: Vec<_> = available_providers
.into_iter()
.filter(|p| *p != current_provider && *p != EditPredictionProvider::None)
.collect();
if !other_providers.is_empty() {
menu = menu.separator().header("Switch Providers");
for provider in other_providers {
let fs = self.fs.clone();
menu = match provider {
EditPredictionProvider::Zed => menu.item(
ContextMenuEntry::new("Zed AI")
.documentation_aside(
DocumentationSide::Left,
DocumentationEdge::Top,
|_| {
Label::new("Zed's edit prediction is powered by Zeta, an open-source, dataset mode.")
.into_any_element()
},
)
.handler(move |_, cx| {
set_completion_provider(fs.clone(), cx, provider);
}),
),
EditPredictionProvider::Copilot => {
menu.entry("GitHub Copilot", None, move |_, cx| {
set_completion_provider(fs.clone(), cx, provider);
})
}
EditPredictionProvider::Supermaven => {
menu.entry("Supermaven", None, move |_, cx| {
set_completion_provider(fs.clone(), cx, provider);
})
}
EditPredictionProvider::Codestral => {
menu.entry("Codestral", None, move |_, cx| {
set_completion_provider(fs.clone(), cx, provider);
})
}
EditPredictionProvider::None => continue,
};
}
}
menu
}
pub fn build_copilot_start_menu(
&mut self,
window: &mut Window,
@@ -572,8 +664,10 @@ impl EditPredictionButton {
}
menu = menu.separator().header("Privacy");
if let Some(provider) = &self.edit_prediction_provider {
let data_collection = provider.data_collection_state(cx);
if data_collection.is_supported() {
let provider = provider.clone();
let enabled = data_collection.is_enabled();
@@ -691,7 +785,7 @@ impl EditPredictionButton {
}
}),
).item(
ContextMenuEntry::new("View Documentation")
ContextMenuEntry::new("View Docs")
.icon(IconName::FileGeneric)
.icon_color(Color::Muted)
.handler(move |_, cx| {
@@ -711,6 +805,7 @@ impl EditPredictionButton {
if let Some(editor_focus_handle) = self.editor_focus_handle.clone() {
menu = menu
.separator()
.header("Actions")
.entry(
"Predict Edit at Cursor",
Some(Box::new(ShowEditPrediction)),
@@ -721,7 +816,11 @@ impl EditPredictionButton {
}
},
)
.context(editor_focus_handle);
.context(editor_focus_handle)
.when(
cx.has_flag::<PredictEditsRateCompletionsFeatureFlag>(),
|this| this.action("Rate Completions", RateCompletions.boxed_clone()),
);
}
menu
@@ -733,15 +832,11 @@ impl EditPredictionButton {
cx: &mut Context<Self>,
) -> Entity<ContextMenu> {
ContextMenu::build(window, cx, |menu, window, cx| {
self.build_language_settings_menu(menu, window, cx)
.separator()
.entry("Use Zed AI instead", None, {
let fs = self.fs.clone();
move |_window, cx| {
set_completion_provider(fs.clone(), cx, EditPredictionProvider::Zed)
}
})
.separator()
let menu = self.build_language_settings_menu(menu, window, cx);
let menu =
self.add_provider_switching_section(menu, EditPredictionProvider::Copilot, cx);
menu.separator()
.link(
"Go to Copilot Settings",
OpenBrowser {
@@ -759,8 +854,11 @@ impl EditPredictionButton {
cx: &mut Context<Self>,
) -> Entity<ContextMenu> {
ContextMenu::build(window, cx, |menu, window, cx| {
self.build_language_settings_menu(menu, window, cx)
.separator()
let menu = self.build_language_settings_menu(menu, window, cx);
let menu =
self.add_provider_switching_section(menu, EditPredictionProvider::Supermaven, cx);
menu.separator()
.action("Sign Out", supermaven::SignOut.boxed_clone())
})
}
@@ -770,14 +868,12 @@ impl EditPredictionButton {
window: &mut Window,
cx: &mut Context<Self>,
) -> Entity<ContextMenu> {
let fs = self.fs.clone();
ContextMenu::build(window, cx, |menu, window, cx| {
self.build_language_settings_menu(menu, window, cx)
.separator()
.entry("Use Zed AI instead", None, move |_, cx| {
set_completion_provider(fs.clone(), cx, EditPredictionProvider::Zed)
})
.separator()
let menu = self.build_language_settings_menu(menu, window, cx);
let menu =
self.add_provider_switching_section(menu, EditPredictionProvider::Codestral, cx);
menu.separator()
.entry("Configure Codestral API Key", None, move |window, cx| {
window.dispatch_action(zed_actions::agent::OpenSettings.boxed_clone(), cx);
})
@@ -872,10 +968,10 @@ impl EditPredictionButton {
.separator();
}
self.build_language_settings_menu(menu, window, cx).when(
cx.has_flag::<PredictEditsRateCompletionsFeatureFlag>(),
|this| this.action("Rate Completions", RateCompletions.boxed_clone()),
)
let menu = self.build_language_settings_menu(menu, window, cx);
let menu = self.add_provider_switching_section(menu, EditPredictionProvider::Zed, cx);
menu
})
}


@@ -213,15 +213,6 @@ pub struct ExpandExcerptsDown {
pub(super) lines: u32,
}
/// Shows code completion suggestions at the cursor position.
#[derive(PartialEq, Clone, Deserialize, Default, JsonSchema, Action)]
#[action(namespace = editor)]
#[serde(deny_unknown_fields)]
pub struct ShowCompletions {
#[serde(default)]
pub(super) trigger: Option<String>,
}
/// Handles text input in the editor.
#[derive(PartialEq, Clone, Deserialize, Default, JsonSchema, Action)]
#[action(namespace = editor)]
@@ -539,6 +530,10 @@ actions!(
GoToParentModule,
/// Goes to the previous change in the file.
GoToPreviousChange,
/// Goes to the next reference to the symbol under the cursor.
GoToNextReference,
/// Goes to the previous reference to the symbol under the cursor.
GoToPreviousReference,
/// Goes to the type definition of the symbol at cursor.
GoToTypeDefinition,
/// Goes to type definition in a split pane.
@@ -617,6 +612,8 @@ actions!(
NextEditPrediction,
/// Scrolls to the next screen.
NextScreen,
/// Goes to the next snippet tabstop if one exists.
NextSnippetTabstop,
/// Opens the context menu at cursor position.
OpenContextMenu,
/// Opens excerpts from the current file.
@@ -650,6 +647,8 @@ actions!(
Paste,
/// Navigates to the previous edit prediction.
PreviousEditPrediction,
/// Goes to the previous snippet tabstop if one exists.
PreviousSnippetTabstop,
/// Redoes the last undone edit.
Redo,
/// Redoes the last selection change.
@@ -728,6 +727,8 @@ actions!(
SelectToStartOfParagraph,
/// Extends selection up.
SelectUp,
/// Shows code completion suggestions at the cursor position.
ShowCompletions,
/// Shows the system character palette.
ShowCharacterPalette,
/// Shows edit prediction at cursor.


@@ -28,10 +28,12 @@ use std::{
rc::Rc,
};
use task::ResolvedTask;
use ui::{Color, IntoElement, ListItem, Pixels, Popover, Styled, prelude::*};
use ui::{
Color, IntoElement, ListItem, Pixels, Popover, ScrollAxes, Scrollbars, Styled, WithScrollbar,
prelude::*,
};
use util::ResultExt;
use crate::CodeActionSource;
use crate::hover_popover::{hover_markdown_style, open_markdown_url};
use crate::{
CodeActionProvider, CompletionId, CompletionItemKind, CompletionProvider, DisplayRow, Editor,
@@ -39,7 +41,8 @@ use crate::{
actions::{ConfirmCodeAction, ConfirmCompletion},
split_words, styled_runs_for_code_label,
};
use settings::SnippetSortOrder;
use crate::{CodeActionSource, EditorSettings};
use settings::{Settings, SnippetSortOrder};
pub const MENU_GAP: Pixels = px(4.);
pub const MENU_ASIDE_X_PADDING: Pixels = px(16.);
@@ -249,8 +252,17 @@ enum MarkdownCacheKey {
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum CompletionsMenuSource {
/// Show all completions (words, snippets, LSP)
Normal,
/// Show only snippets (not words or LSP)
///
/// Used after typing a non-word character
SnippetsOnly,
/// Tab stops within a snippet that have a predefined finite set of choices
SnippetChoices,
/// Show only words (not snippets or LSP)
///
/// Used when word completions are explicitly triggered
Words { ignore_threshold: bool },
}
@@ -261,6 +273,20 @@ impl Drop for CompletionsMenu {
}
}
struct CompletionMenuScrollBarSetting;
impl ui::scrollbars::GlobalSetting for CompletionMenuScrollBarSetting {
fn get_value(_cx: &App) -> &Self {
&Self
}
}
impl ui::scrollbars::ScrollbarVisibility for CompletionMenuScrollBarSetting {
fn visibility(&self, cx: &App) -> ui::scrollbars::ShowScrollbar {
EditorSettings::get_global(cx).completion_menu_scrollbar
}
}
impl CompletionsMenu {
pub fn new(
id: CompletionId,
@@ -898,7 +924,17 @@ impl CompletionsMenu {
}
});
Popover::new().child(list).into_any_element()
Popover::new()
.child(
div().child(list).custom_scrollbars(
Scrollbars::for_settings::<CompletionMenuScrollBarSetting>()
.show_along(ScrollAxes::Vertical)
.tracked_scroll_handle(self.scroll_handle.clone()),
window,
cx,
),
)
.into_any_element()
}
fn render_aside(


@@ -17,6 +17,9 @@
//! [Editor]: crate::Editor
//! [EditorElement]: crate::element::EditorElement
#[macro_use]
mod dimensions;
mod block_map;
mod crease_map;
mod custom_highlights;
@@ -167,11 +170,12 @@ impl DisplayMap {
}
pub fn snapshot(&mut self, cx: &mut Context<Self>) -> DisplaySnapshot {
let tab_size = Self::tab_size(&self.buffer, cx);
let buffer_snapshot = self.buffer.read(cx).snapshot(cx);
let edits = self.buffer_subscription.consume().into_inner();
let (inlay_snapshot, edits) = self.inlay_map.sync(buffer_snapshot, edits);
let (fold_snapshot, edits) = self.fold_map.read(inlay_snapshot, edits);
let tab_size = Self::tab_size(&self.buffer, cx);
let (tab_snapshot, edits) = self.tab_map.sync(fold_snapshot, edits, tab_size);
let (wrap_snapshot, edits) = self
.wrap_map
@@ -915,7 +919,7 @@ impl DisplaySnapshot {
pub fn text_chunks(&self, display_row: DisplayRow) -> impl Iterator<Item = &str> {
self.block_snapshot
.chunks(
display_row.0..self.max_point().row().next_row().0,
BlockRow(display_row.0)..BlockRow(self.max_point().row().next_row().0),
false,
self.masked,
Highlights::default(),
@@ -927,7 +931,12 @@ impl DisplaySnapshot {
pub fn reverse_text_chunks(&self, display_row: DisplayRow) -> impl Iterator<Item = &str> {
(0..=display_row.0).rev().flat_map(move |row| {
self.block_snapshot
.chunks(row..row + 1, false, self.masked, Highlights::default())
.chunks(
BlockRow(row)..BlockRow(row + 1),
false,
self.masked,
Highlights::default(),
)
.map(|h| h.text)
.collect::<Vec<_>>()
.into_iter()
@@ -942,7 +951,7 @@ impl DisplaySnapshot {
highlight_styles: HighlightStyles,
) -> DisplayChunks<'_> {
self.block_snapshot.chunks(
display_rows.start.0..display_rows.end.0,
BlockRow(display_rows.start.0)..BlockRow(display_rows.end.0),
language_aware,
self.masked,
Highlights {
@@ -1178,8 +1187,8 @@ impl DisplaySnapshot {
rows: Range<DisplayRow>,
) -> impl Iterator<Item = (DisplayRow, &Block)> {
self.block_snapshot
.blocks_in_range(rows.start.0..rows.end.0)
.map(|(row, block)| (DisplayRow(row), block))
.blocks_in_range(BlockRow(rows.start.0)..BlockRow(rows.end.0))
.map(|(row, block)| (DisplayRow(row.0), block))
}
pub fn sticky_header_excerpt(&self, row: f64) -> Option<StickyHeaderExcerpt<'_>> {
@@ -1211,7 +1220,7 @@ impl DisplaySnapshot {
pub fn soft_wrap_indent(&self, display_row: DisplayRow) -> Option<u32> {
let wrap_row = self
.block_snapshot
.to_wrap_point(BlockPoint::new(display_row.0, 0), Bias::Left)
.to_wrap_point(BlockPoint::new(BlockRow(display_row.0), 0), Bias::Left)
.row();
self.wrap_snapshot().soft_wrap_indent(wrap_row)
}
@@ -1242,7 +1251,7 @@ impl DisplaySnapshot {
}
pub fn longest_row(&self) -> DisplayRow {
DisplayRow(self.block_snapshot.longest_row())
DisplayRow(self.block_snapshot.longest_row().0)
}
pub fn longest_row_in_range(&self, range: Range<DisplayRow>) -> DisplayRow {

File diff suppressed because it is too large.


@@ -0,0 +1,96 @@
#[derive(Copy, Clone, Debug, Default, Eq, Ord, PartialOrd, PartialEq)]
pub struct RowDelta(pub u32);
impl RowDelta {
pub fn saturating_sub(self, other: RowDelta) -> RowDelta {
RowDelta(self.0.saturating_sub(other.0))
}
}
impl ::std::ops::Add for RowDelta {
type Output = RowDelta;
fn add(self, rhs: RowDelta) -> Self::Output {
RowDelta(self.0 + rhs.0)
}
}
impl ::std::ops::Sub for RowDelta {
type Output = RowDelta;
fn sub(self, rhs: RowDelta) -> Self::Output {
RowDelta(self.0 - rhs.0)
}
}
impl ::std::ops::AddAssign for RowDelta {
fn add_assign(&mut self, rhs: RowDelta) {
self.0 += rhs.0;
}
}
impl ::std::ops::SubAssign for RowDelta {
fn sub_assign(&mut self, rhs: RowDelta) {
self.0 -= rhs.0;
}
}
macro_rules! impl_for_row_types {
($row:ident => $row_delta:ident) => {
impl $row {
pub fn saturating_sub(self, other: $row_delta) -> $row {
$row(self.0.saturating_sub(other.0))
}
}
impl ::std::ops::Add for $row {
type Output = Self;
fn add(self, rhs: Self) -> Self::Output {
Self(self.0 + rhs.0)
}
}
impl ::std::ops::Add<$row_delta> for $row {
type Output = Self;
fn add(self, rhs: $row_delta) -> Self::Output {
Self(self.0 + rhs.0)
}
}
impl ::std::ops::Sub for $row {
type Output = $row_delta;
fn sub(self, rhs: Self) -> Self::Output {
$row_delta(self.0 - rhs.0)
}
}
impl ::std::ops::Sub<$row_delta> for $row {
type Output = $row;
fn sub(self, rhs: $row_delta) -> Self::Output {
$row(self.0 - rhs.0)
}
}
impl ::std::ops::AddAssign for $row {
fn add_assign(&mut self, rhs: Self) {
self.0 += rhs.0;
}
}
impl ::std::ops::AddAssign<$row_delta> for $row {
fn add_assign(&mut self, rhs: $row_delta) {
self.0 += rhs.0;
}
}
impl ::std::ops::SubAssign<$row_delta> for $row {
fn sub_assign(&mut self, rhs: $row_delta) {
self.0 -= rhs.0;
}
}
};
}
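
A minimal usage sketch (not part of the diff), assuming the macro above is in scope in the same module; `DemoRow` is a hypothetical newtype, but `WrapRow` later in this change is wired up the same way.

#[derive(Copy, Clone, Debug, Default, Eq, Ord, PartialOrd, PartialEq)]
pub struct DemoRow(pub u32);

impl_for_row_types! {
    DemoRow => RowDelta
}

fn demo() {
    let a = DemoRow(10);
    let b = DemoRow(4);
    let delta: RowDelta = a - b;          // subtracting two rows yields a delta
    assert_eq!(delta, RowDelta(6));
    assert_eq!(a + delta, DemoRow(16));   // adding a delta back yields a row
    assert_eq!(b.saturating_sub(RowDelta(9)), DemoRow(0)); // saturates at zero
}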


@@ -1440,7 +1440,7 @@ where
self.current_chunk.as_ref().and_then(|(chunk, idx)| {
let mut idx = *idx;
let mut diff = 0;
while idx > 0 && chunk.chars & (1 << idx) == 0 {
while idx > 0 && chunk.chars & (1u128.unbounded_shl(idx)) == 0 {
idx -= 1;
diff += 1;
}
@@ -1460,7 +1460,7 @@ where
fn is_char_boundary(&self) -> bool {
self.current_chunk
.as_ref()
.is_some_and(|(chunk, idx)| (chunk.chars & (1 << *idx.min(&127))) != 0)
.is_some_and(|(chunk, idx)| (chunk.chars & 1u128.unbounded_shl(*idx)) != 0)
}
/// distance: length to move forward while searching for the next tab stop
@@ -1483,18 +1483,20 @@ where
self.byte_offset += overshoot;
self.char_offset += get_char_offset(
chunk_position..(chunk_position + overshoot).saturating_sub(1).min(127),
chunk_position..(chunk_position + overshoot).saturating_sub(1),
chunk.chars,
);
self.current_chunk = Some((chunk, chunk_position + overshoot));
if chunk_position + overshoot < 128 {
self.current_chunk = Some((chunk, chunk_position + overshoot));
}
return None;
}
self.byte_offset += chunk_distance;
self.char_offset += get_char_offset(
chunk_position..(chunk_position + chunk_distance).saturating_sub(1).min(127),
chunk_position..(chunk_position + chunk_distance).saturating_sub(1),
chunk.chars,
);
distance_traversed += chunk_distance;
@@ -1546,8 +1548,6 @@ where
#[inline(always)]
fn get_char_offset(range: Range<u32>, bit_map: u128) -> u32 {
// This edge case can happen when we're at chunk position 128
if range.start == range.end {
return if (1u128 << range.start) & bit_map == 0 {
0
@@ -1555,7 +1555,7 @@ fn get_char_offset(range: Range<u32>, bit_map: u128) -> u32 {
1
};
}
let end_shift: u128 = 127u128 - range.end.min(127) as u128;
let end_shift: u128 = 127u128 - range.end as u128;
let mut bit_mask = (u128::MAX >> range.start) << range.start;
bit_mask = (bit_mask << end_shift) >> end_shift;
let bit_map = bit_map & bit_mask;
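
A worked sketch (not from the diff) of the masking idea above: bits below `range.start` and above `range.end` are cleared, and the survivors are then counted. The final `count_ones` step is an assumption based on the truncated function body, and the end index is treated as inclusive, mirroring the callers above; inputs are assumed to satisfy `range.end <= 127`.

fn count_bits_in_range(range: std::ops::Range<u32>, bit_map: u128) -> u32 {
    let end_shift: u128 = 127u128 - range.end as u128;
    let mut bit_mask = (u128::MAX >> range.start) << range.start; // clear bits below start
    bit_mask = (bit_mask << end_shift) >> end_shift;              // clear bits above end (inclusive)
    (bit_map & bit_mask).count_ones()
}

// bits 1, 3, and 6 are set; only bits 1 and 3 fall within 0..=4
// assert_eq!(count_bits_in_range(0..4, 0b0100_1010), 2);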


@@ -1,5 +1,6 @@
use super::{
Highlights,
dimensions::RowDelta,
fold_map::{Chunk, FoldRows},
tab_map::{self, TabEdit, TabPoint, TabSnapshot},
};
@@ -7,13 +8,20 @@ use gpui::{App, AppContext as _, Context, Entity, Font, LineWrapper, Pixels, Tas
use language::Point;
use multi_buffer::{MultiBufferSnapshot, RowInfo};
use smol::future::yield_now;
use std::sync::LazyLock;
use std::{cmp, collections::VecDeque, mem, ops::Range, time::Duration};
use std::{cmp, collections::VecDeque, mem, ops::Range, sync::LazyLock, time::Duration};
use sum_tree::{Bias, Cursor, Dimensions, SumTree};
use text::Patch;
pub use super::tab_map::TextSummary;
pub type WrapEdit = text::Edit<u32>;
pub type WrapEdit = text::Edit<WrapRow>;
pub type WrapPatch = text::Patch<WrapRow>;
#[derive(Copy, Clone, Debug, Default, Eq, Ord, PartialOrd, PartialEq)]
pub struct WrapRow(pub u32);
impl_for_row_types! {
WrapRow => RowDelta
}
/// Handles soft wrapping of text.
///
@@ -21,8 +29,8 @@ pub type WrapEdit = text::Edit<u32>;
pub struct WrapMap {
snapshot: WrapSnapshot,
pending_edits: VecDeque<(TabSnapshot, Vec<TabEdit>)>,
interpolated_edits: Patch<u32>,
edits_since_sync: Patch<u32>,
interpolated_edits: WrapPatch,
edits_since_sync: WrapPatch,
wrap_width: Option<Pixels>,
background_task: Option<Task<()>>,
font_with_size: (Font, Pixels),
@@ -54,7 +62,7 @@ pub struct WrapChunks<'a> {
input_chunks: tab_map::TabChunks<'a>,
input_chunk: Chunk<'a>,
output_position: WrapPoint,
max_output_row: u32,
max_output_row: WrapRow,
transforms: Cursor<'a, 'static, Transform, Dimensions<WrapPoint, TabPoint>>,
snapshot: &'a WrapSnapshot,
}
@@ -63,19 +71,19 @@ pub struct WrapChunks<'a> {
pub struct WrapRows<'a> {
input_buffer_rows: FoldRows<'a>,
input_buffer_row: RowInfo,
output_row: u32,
output_row: WrapRow,
soft_wrapped: bool,
max_output_row: u32,
max_output_row: WrapRow,
transforms: Cursor<'a, 'static, Transform, Dimensions<WrapPoint, TabPoint>>,
}
impl WrapRows<'_> {
pub(crate) fn seek(&mut self, start_row: u32) {
pub(crate) fn seek(&mut self, start_row: WrapRow) {
self.transforms
.seek(&WrapPoint::new(start_row, 0), Bias::Left);
let mut input_row = self.transforms.start().1.row();
if self.transforms.item().is_some_and(|t| t.is_isomorphic()) {
input_row += start_row - self.transforms.start().0.row();
input_row += (start_row - self.transforms.start().0.row()).0;
}
self.soft_wrapped = self.transforms.item().is_some_and(|t| !t.is_isomorphic());
self.input_buffer_rows.seek(input_row);
@@ -120,7 +128,7 @@ impl WrapMap {
tab_snapshot: TabSnapshot,
edits: Vec<TabEdit>,
cx: &mut Context<Self>,
) -> (WrapSnapshot, Patch<u32>) {
) -> (WrapSnapshot, WrapPatch) {
if self.wrap_width.is_some() {
self.pending_edits.push_back((tab_snapshot, edits));
self.flush_edits(cx);
@@ -226,8 +234,8 @@ impl WrapMap {
let new_rows = self.snapshot.transforms.summary().output.lines.row + 1;
self.snapshot.interpolated = false;
self.edits_since_sync = self.edits_since_sync.compose(Patch::new(vec![WrapEdit {
old: 0..old_rows,
new: 0..new_rows,
old: WrapRow(0)..WrapRow(old_rows),
new: WrapRow(0)..WrapRow(new_rows),
}]));
}
}
@@ -331,7 +339,7 @@ impl WrapSnapshot {
self.tab_snapshot.buffer_snapshot()
}
fn interpolate(&mut self, new_tab_snapshot: TabSnapshot, tab_edits: &[TabEdit]) -> Patch<u32> {
fn interpolate(&mut self, new_tab_snapshot: TabSnapshot, tab_edits: &[TabEdit]) -> WrapPatch {
let mut new_transforms;
if tab_edits.is_empty() {
new_transforms = self.transforms.clone();
@@ -401,7 +409,7 @@ impl WrapSnapshot {
tab_edits: &[TabEdit],
wrap_width: Pixels,
line_wrapper: &mut LineWrapper,
) -> Patch<u32> {
) -> WrapPatch {
#[derive(Debug)]
struct RowEdit {
old_rows: Range<u32>,
@@ -554,7 +562,7 @@ impl WrapSnapshot {
old_snapshot.compute_edits(tab_edits, self)
}
fn compute_edits(&self, tab_edits: &[TabEdit], new_snapshot: &WrapSnapshot) -> Patch<u32> {
fn compute_edits(&self, tab_edits: &[TabEdit], new_snapshot: &WrapSnapshot) -> WrapPatch {
let mut wrap_edits = Vec::with_capacity(tab_edits.len());
let mut old_cursor = self.transforms.cursor::<TransformSummary>(());
let mut new_cursor = new_snapshot.transforms.cursor::<TransformSummary>(());
@@ -568,24 +576,21 @@ impl WrapSnapshot {
let mut old_start = old_cursor.start().output.lines;
old_start += tab_edit.old.start.0 - old_cursor.start().input.lines;
// todo(lw): Should these be seek_forward?
old_cursor.seek(&tab_edit.old.end, Bias::Right);
old_cursor.seek_forward(&tab_edit.old.end, Bias::Right);
let mut old_end = old_cursor.start().output.lines;
old_end += tab_edit.old.end.0 - old_cursor.start().input.lines;
// todo(lw): Should these be seek_forward?
new_cursor.seek(&tab_edit.new.start, Bias::Right);
let mut new_start = new_cursor.start().output.lines;
new_start += tab_edit.new.start.0 - new_cursor.start().input.lines;
// todo(lw): Should these be seek_forward?
new_cursor.seek(&tab_edit.new.end, Bias::Right);
new_cursor.seek_forward(&tab_edit.new.end, Bias::Right);
let mut new_end = new_cursor.start().output.lines;
new_end += tab_edit.new.end.0 - new_cursor.start().input.lines;
wrap_edits.push(WrapEdit {
old: old_start.row..old_end.row,
new: new_start.row..new_end.row,
old: WrapRow(old_start.row)..WrapRow(old_end.row),
new: WrapRow(new_start.row)..WrapRow(new_end.row),
});
}
@@ -595,7 +600,7 @@ impl WrapSnapshot {
pub(crate) fn chunks<'a>(
&'a self,
rows: Range<u32>,
rows: Range<WrapRow>,
language_aware: bool,
highlights: Highlights<'a>,
) -> WrapChunks<'a> {
@@ -630,17 +635,17 @@ impl WrapSnapshot {
WrapPoint(self.transforms.summary().output.lines)
}
pub fn line_len(&self, row: u32) -> u32 {
pub fn line_len(&self, row: WrapRow) -> u32 {
let (start, _, item) = self.transforms.find::<Dimensions<WrapPoint, TabPoint>, _>(
(),
&WrapPoint::new(row + 1, 0),
&WrapPoint::new(row + WrapRow(1), 0),
Bias::Left,
);
if item.is_some_and(|transform| transform.is_isomorphic()) {
let overshoot = row - start.0.row();
let tab_row = start.1.row() + overshoot;
let tab_row = start.1.row() + overshoot.0;
let tab_line_len = self.tab_snapshot.line_len(tab_row);
if overshoot == 0 {
if overshoot.0 == 0 {
start.0.column() + (tab_line_len - start.1.column())
} else {
tab_line_len
@@ -650,7 +655,7 @@ impl WrapSnapshot {
}
}
pub fn text_summary_for_range(&self, rows: Range<u32>) -> TextSummary {
pub fn text_summary_for_range(&self, rows: Range<WrapRow>) -> TextSummary {
let mut summary = TextSummary::default();
let start = WrapPoint::new(rows.start, 0);
@@ -711,10 +716,12 @@ impl WrapSnapshot {
summary
}
pub fn soft_wrap_indent(&self, row: u32) -> Option<u32> {
let (.., item) =
self.transforms
.find::<WrapPoint, _>((), &WrapPoint::new(row + 1, 0), Bias::Right);
pub fn soft_wrap_indent(&self, row: WrapRow) -> Option<u32> {
let (.., item) = self.transforms.find::<WrapPoint, _>(
(),
&WrapPoint::new(row + WrapRow(1), 0),
Bias::Right,
);
item.and_then(|transform| {
if transform.is_isomorphic() {
None
@@ -728,14 +735,14 @@ impl WrapSnapshot {
self.transforms.summary().output.longest_row
}
pub fn row_infos(&self, start_row: u32) -> WrapRows<'_> {
pub fn row_infos(&self, start_row: WrapRow) -> WrapRows<'_> {
let mut transforms = self
.transforms
.cursor::<Dimensions<WrapPoint, TabPoint>>(());
transforms.seek(&WrapPoint::new(start_row, 0), Bias::Left);
let mut input_row = transforms.start().1.row();
if transforms.item().is_some_and(|t| t.is_isomorphic()) {
input_row += start_row - transforms.start().0.row();
input_row += (start_row - transforms.start().0.row()).0;
}
let soft_wrapped = transforms.item().is_some_and(|t| !t.is_isomorphic());
let mut input_buffer_rows = self.tab_snapshot.rows(input_row);
@@ -790,9 +797,9 @@ impl WrapSnapshot {
self.tab_point_to_wrap_point(self.tab_snapshot.clip_point(self.to_tab_point(point), bias))
}
pub fn prev_row_boundary(&self, mut point: WrapPoint) -> u32 {
pub fn prev_row_boundary(&self, mut point: WrapPoint) -> WrapRow {
if self.transforms.is_empty() {
return 0;
return WrapRow(0);
}
*point.column_mut() = 0;
@@ -816,7 +823,7 @@ impl WrapSnapshot {
unreachable!()
}
pub fn next_row_boundary(&self, mut point: WrapPoint) -> Option<u32> {
pub fn next_row_boundary(&self, mut point: WrapPoint) -> Option<WrapRow> {
point.0 += Point::new(1, 0);
let mut cursor = self
@@ -836,13 +843,13 @@ impl WrapSnapshot {
#[cfg(test)]
pub fn text(&self) -> String {
self.text_chunks(0).collect()
self.text_chunks(WrapRow(0)).collect()
}
#[cfg(test)]
pub fn text_chunks(&self, wrap_row: u32) -> impl Iterator<Item = &str> {
pub fn text_chunks(&self, wrap_row: WrapRow) -> impl Iterator<Item = &str> {
self.chunks(
wrap_row..self.max_point().row() + 1,
wrap_row..self.max_point().row() + WrapRow(1),
false,
Highlights::default(),
)
@@ -870,21 +877,22 @@ impl WrapSnapshot {
let mut input_buffer_rows = self.tab_snapshot.rows(0);
let mut expected_buffer_rows = Vec::new();
let mut prev_tab_row = 0;
for display_row in 0..=self.max_point().row() {
for display_row in 0..=self.max_point().row().0 {
let display_row = WrapRow(display_row);
let tab_point = self.to_tab_point(WrapPoint::new(display_row, 0));
if tab_point.row() == prev_tab_row && display_row != 0 {
if tab_point.row() == prev_tab_row && display_row != WrapRow(0) {
expected_buffer_rows.push(None);
} else {
expected_buffer_rows.push(input_buffer_rows.next().unwrap().buffer_row);
}
prev_tab_row = tab_point.row();
assert_eq!(self.line_len(display_row), text.line_len(display_row));
assert_eq!(self.line_len(display_row), text.line_len(display_row.0));
}
for start_display_row in 0..expected_buffer_rows.len() {
assert_eq!(
self.row_infos(start_display_row as u32)
self.row_infos(WrapRow(start_display_row as u32))
.map(|row_info| row_info.buffer_row)
.collect::<Vec<_>>(),
&expected_buffer_rows[start_display_row..],
@@ -897,7 +905,7 @@ impl WrapSnapshot {
}
impl WrapChunks<'_> {
pub(crate) fn seek(&mut self, rows: Range<u32>) {
pub(crate) fn seek(&mut self, rows: Range<WrapRow>) {
let output_start = WrapPoint::new(rows.start, 0);
let output_end = WrapPoint::new(rows.end, 0);
self.transforms.seek(&output_start, Bias::Right);
@@ -934,7 +942,7 @@ impl<'a> Iterator for WrapChunks<'a> {
// Exclude newline starting prior to the desired row.
start_ix = 1;
summary.row = 0;
} else if self.output_position.row() + 1 >= self.max_output_row {
} else if self.output_position.row() + WrapRow(1) >= self.max_output_row {
// Exclude soft indentation ending after the desired row.
end_ix = 1;
summary.column = 0;
@@ -1000,7 +1008,7 @@ impl Iterator for WrapRows<'_> {
let soft_wrapped = self.soft_wrapped;
let diff_status = self.input_buffer_row.diff_status;
self.output_row += 1;
self.output_row += WrapRow(1);
self.transforms
.seek_forward(&WrapPoint::new(self.output_row, 0), Bias::Left);
if self.transforms.item().is_some_and(|t| t.is_isomorphic()) {
@@ -1111,12 +1119,12 @@ impl SumTreeExt for SumTree<Transform> {
}
impl WrapPoint {
pub fn new(row: u32, column: u32) -> Self {
Self(Point::new(row, column))
pub fn new(row: WrapRow, column: u32) -> Self {
Self(Point::new(row.0, column))
}
pub fn row(self) -> u32 {
self.0.row
pub fn row(self) -> WrapRow {
WrapRow(self.0.row)
}
pub fn row_mut(&mut self) -> &mut u32 {
@@ -1421,14 +1429,14 @@ mod tests {
for (snapshot, patch) in edits {
let snapshot_text = Rope::from(snapshot.text().as_str());
for edit in &patch {
let old_start = initial_text.point_to_offset(Point::new(edit.new.start, 0));
let old_start = initial_text.point_to_offset(Point::new(edit.new.start.0, 0));
let old_end = initial_text.point_to_offset(cmp::min(
Point::new(edit.new.start + edit.old.len() as u32, 0),
Point::new(edit.new.start.0 + (edit.old.end - edit.old.start).0, 0),
initial_text.max_point(),
));
let new_start = snapshot_text.point_to_offset(Point::new(edit.new.start, 0));
let new_start = snapshot_text.point_to_offset(Point::new(edit.new.start.0, 0));
let new_end = snapshot_text.point_to_offset(cmp::min(
Point::new(edit.new.end, 0),
Point::new(edit.new.end.0, 0),
snapshot_text.max_point(),
));
let new_text = snapshot_text
@@ -1488,11 +1496,11 @@ mod tests {
impl WrapSnapshot {
fn verify_chunks(&mut self, rng: &mut impl Rng) {
for _ in 0..5 {
let mut end_row = rng.random_range(0..=self.max_point().row());
let mut end_row = rng.random_range(0..=self.max_point().row().0);
let start_row = rng.random_range(0..=end_row);
end_row += 1;
let mut expected_text = self.text_chunks(start_row).collect::<String>();
let mut expected_text = self.text_chunks(WrapRow(start_row)).collect::<String>();
if expected_text.ends_with('\n') {
expected_text.push('\n');
}
@@ -1501,12 +1509,16 @@ mod tests {
.take((end_row - start_row) as usize)
.collect::<Vec<_>>()
.join("\n");
if end_row <= self.max_point().row() {
if end_row <= self.max_point().row().0 {
expected_text.push('\n');
}
let actual_text = self
.chunks(start_row..end_row, true, Highlights::default())
.chunks(
WrapRow(start_row)..WrapRow(end_row),
true,
Highlights::default(),
)
.map(|c| c.text)
.collect::<String>();
assert_eq!(


@@ -1,12 +1,13 @@
use edit_prediction::EditPredictionProvider;
use gpui::{Entity, prelude::*};
use gpui::{Entity, KeyBinding, Modifiers, prelude::*};
use indoc::indoc;
use multi_buffer::{Anchor, MultiBufferSnapshot, ToPoint};
use std::ops::Range;
use text::{Point, ToOffset};
use crate::{
EditPrediction, editor_tests::init_test, test::editor_test_context::EditorTestContext,
AcceptEditPrediction, EditPrediction, MenuEditPredictionsPolicy, editor_tests::init_test,
test::editor_test_context::EditorTestContext,
};
#[gpui::test]
@@ -270,6 +271,63 @@ async fn test_edit_prediction_jump_disabled_for_non_zed_providers(cx: &mut gpui:
});
}
#[gpui::test]
async fn test_edit_prediction_preview_cleanup_on_toggle_off(cx: &mut gpui::TestAppContext) {
init_test(cx, |_| {});
// Bind `ctrl-shift-a` to accept the provided edit prediction. The actual key
// binding here doesn't matter, we simply need to confirm that holding the
// binding's modifiers triggers the edit prediction preview.
cx.update(|cx| cx.bind_keys([KeyBinding::new("ctrl-shift-a", AcceptEditPrediction, None)]));
let mut cx = EditorTestContext::new(cx).await;
let provider = cx.new(|_| FakeEditPredictionProvider::default());
assign_editor_completion_provider(provider.clone(), &mut cx);
cx.set_state("let x = ˇ;");
propose_edits(&provider, vec![(8..8, "42")], &mut cx);
cx.update_editor(|editor, window, cx| {
editor.set_menu_edit_predictions_policy(MenuEditPredictionsPolicy::ByProvider);
editor.update_visible_edit_prediction(window, cx)
});
cx.editor(|editor, _, _| {
assert!(editor.has_active_edit_prediction());
});
// Simulate pressing the modifiers for `AcceptEditPrediction`, namely
// `ctrl-shift`, so that we can confirm that the edit prediction preview is
// activated.
let modifiers = Modifiers::control_shift();
cx.simulate_modifiers_change(modifiers);
cx.run_until_parked();
cx.editor(|editor, _, _| {
assert!(editor.edit_prediction_preview_is_active());
});
// Disable showing edit predictions without issuing a new modifiers changed
// event, to confirm that the edit prediction preview is still active.
cx.update_editor(|editor, window, cx| {
editor.set_show_edit_predictions(Some(false), window, cx);
});
cx.editor(|editor, _, _| {
assert!(!editor.has_active_edit_prediction());
assert!(editor.edit_prediction_preview_is_active());
});
// Simulate releasing all modifiers, ensuring that even with edit prediction
// disabled, the edit prediction preview is cleaned up.
cx.simulate_modifiers_change(Modifiers::none());
cx.run_until_parked();
cx.editor(|editor, _, _| {
assert!(!editor.edit_prediction_preview_is_active());
});
}
fn assert_editor_active_edit_completion(
cx: &mut EditorTestContext,
assert: impl FnOnce(MultiBufferSnapshot, &Vec<(Range<Anchor>, String)>),
@@ -395,7 +453,7 @@ impl EditPredictionProvider for FakeEditPredictionProvider {
}
fn show_completions_in_menu() -> bool {
false
true
}
fn supports_jump_to_edit() -> bool {


@@ -455,6 +455,20 @@ pub enum SelectMode {
All,
}
#[derive(Copy, Clone, Default, PartialEq, Eq, Debug)]
pub enum SizingBehavior {
/// The editor will lay itself out using `size_full` and will include the vertical
/// scroll margin as requested by user settings.
#[default]
Default,
/// The editor will lay itself out using `size_full`, but will not have any
/// vertical overscroll.
ExcludeOverscrollMargin,
/// The editor will request a vertical size according to its content and will be
/// laid out without a vertical scroll margin.
SizeByContent,
}
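
A small sketch (not from the diff) of how layout code might branch on this enum; the helper names are hypothetical, not Zed APIs.

fn wants_overscroll(behavior: SizingBehavior) -> bool {
    // only the default behavior keeps the vertical scroll margin
    matches!(behavior, SizingBehavior::Default)
}

fn sized_by_content(behavior: SizingBehavior) -> bool {
    // replaces the old `sized_by_content: bool` flag
    matches!(behavior, SizingBehavior::SizeByContent)
}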
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum EditorMode {
SingleLine,
@@ -467,8 +481,8 @@ pub enum EditorMode {
scale_ui_elements_with_buffer_font_size: bool,
/// When set to `true`, the editor will render a background for the active line.
show_active_line_background: bool,
/// When set to `true`, the editor's height will be determined by its content.
sized_by_content: bool,
/// Determines the sizing behavior for this editor
sizing_behavior: SizingBehavior,
},
Minimap {
parent: WeakEntity<Editor>,
@@ -480,7 +494,7 @@ impl EditorMode {
Self::Full {
scale_ui_elements_with_buffer_font_size: true,
show_active_line_background: true,
sized_by_content: false,
sizing_behavior: SizingBehavior::Default,
}
}
@@ -1072,7 +1086,6 @@ pub struct Editor {
searchable: bool,
cursor_shape: CursorShape,
current_line_highlight: Option<CurrentLineHighlight>,
collapse_matches: bool,
autoindent_mode: Option<AutoindentMode>,
workspace: Option<(WeakEntity<Workspace>, Option<WorkspaceId>)>,
input_enabled: bool,
@@ -1836,9 +1849,15 @@ impl Editor {
project::Event::RefreshCodeLens => {
// we always query lens with actions, without storing them, always refreshing them
}
project::Event::RefreshInlayHints(server_id) => {
project::Event::RefreshInlayHints {
server_id,
request_id,
} => {
editor.refresh_inlay_hints(
InlayHintRefreshReason::RefreshRequested(*server_id),
InlayHintRefreshReason::RefreshRequested {
server_id: *server_id,
request_id: *request_id,
},
cx,
);
}
@@ -2122,7 +2141,7 @@ impl Editor {
.unwrap_or_default(),
current_line_highlight: None,
autoindent_mode: Some(AutoindentMode::EachLine),
collapse_matches: false,
workspace: None,
input_enabled: !is_minimap,
use_modal_editing: full_mode,
@@ -2275,7 +2294,7 @@ impl Editor {
}
}
EditorEvent::Edited { .. } => {
if !vim_enabled(cx) {
if vim_flavor(cx).is_none() {
let display_map = editor.display_snapshot(cx);
let selections = editor.selections.all_adjusted_display(&display_map);
let pop_state = editor
@@ -2442,6 +2461,10 @@ impl Editor {
key_context.add("renaming");
}
if !self.snippet_stack.is_empty() {
key_context.add("in_snippet");
}
match self.context_menu.borrow().as_ref() {
Some(CodeContextMenu::Completions(menu)) => {
if menu.visible() {
@@ -2880,12 +2903,12 @@ impl Editor {
self.current_line_highlight = current_line_highlight;
}
pub fn set_collapse_matches(&mut self, collapse_matches: bool) {
self.collapse_matches = collapse_matches;
}
pub fn range_for_match<T: std::marker::Copy>(&self, range: &Range<T>) -> Range<T> {
if self.collapse_matches {
pub fn range_for_match<T: std::marker::Copy>(
&self,
range: &Range<T>,
collapse: bool,
) -> Range<T> {
if collapse {
return range.start..range.start;
}
range.clone()
@@ -3119,7 +3142,7 @@ impl Editor {
};
if continue_showing {
self.show_completions(&ShowCompletions { trigger: None }, window, cx);
self.open_or_update_completions_menu(None, None, false, window, cx);
} else {
self.hide_context_menu(window, cx);
}
@@ -4949,57 +4972,18 @@ impl Editor {
ignore_threshold: false,
}),
None,
trigger_in_words,
window,
cx,
);
}
Some(CompletionsMenuSource::Normal)
| Some(CompletionsMenuSource::SnippetChoices)
| None
if self.is_completion_trigger(
text,
trigger_in_words,
completions_source.is_some(),
cx,
) =>
{
self.show_completions(
&ShowCompletions {
trigger: Some(text.to_owned()).filter(|x| !x.is_empty()),
},
window,
cx,
)
}
_ => {
self.hide_context_menu(window, cx);
}
}
}
fn is_completion_trigger(
&self,
text: &str,
trigger_in_words: bool,
menu_is_open: bool,
cx: &mut Context<Self>,
) -> bool {
let position = self.selections.newest_anchor().head();
let Some(buffer) = self.buffer.read(cx).buffer_for_anchor(position, cx) else {
return false;
};
if let Some(completion_provider) = &self.completion_provider {
completion_provider.is_completion_trigger(
&buffer,
position.text_anchor,
text,
trigger_in_words,
menu_is_open,
_ => self.open_or_update_completions_menu(
None,
Some(text.to_owned()).filter(|x| !x.is_empty()),
true,
window,
cx,
)
} else {
false
),
}
}
@@ -5277,6 +5261,7 @@ impl Editor {
ignore_threshold: true,
}),
None,
false,
window,
cx,
);
@@ -5284,17 +5269,18 @@ impl Editor {
pub fn show_completions(
&mut self,
options: &ShowCompletions,
_: &ShowCompletions,
window: &mut Window,
cx: &mut Context<Self>,
) {
self.open_or_update_completions_menu(None, options.trigger.as_deref(), window, cx);
self.open_or_update_completions_menu(None, None, false, window, cx);
}
fn open_or_update_completions_menu(
&mut self,
requested_source: Option<CompletionsMenuSource>,
trigger: Option<&str>,
trigger: Option<String>,
trigger_in_words: bool,
window: &mut Window,
cx: &mut Context<Self>,
) {
@@ -5302,6 +5288,15 @@ impl Editor {
return;
}
let completions_source = self
.context_menu
.borrow()
.as_ref()
.and_then(|menu| match menu {
CodeContextMenu::Completions(completions_menu) => Some(completions_menu.source),
CodeContextMenu::CodeActions(_) => None,
});
let multibuffer_snapshot = self.buffer.read(cx).read(cx);
// Typically `start` == `end`, but with snippet tabstop choices the default choice is
@@ -5349,7 +5344,8 @@ impl Editor {
ignore_word_threshold = ignore_threshold;
None
}
Some(CompletionsMenuSource::SnippetChoices) => {
Some(CompletionsMenuSource::SnippetChoices)
| Some(CompletionsMenuSource::SnippetsOnly) => {
log::error!("bug: SnippetChoices requested_source is not handled");
None
}
@@ -5363,13 +5359,19 @@ impl Editor {
.as_ref()
.is_none_or(|provider| provider.filter_completions());
let was_snippets_only = matches!(
completions_source,
Some(CompletionsMenuSource::SnippetsOnly)
);
if let Some(CodeContextMenu::Completions(menu)) = self.context_menu.borrow_mut().as_mut() {
if filter_completions {
menu.filter(query.clone(), provider.clone(), window, cx);
}
// When `is_incomplete` is false, no need to re-query completions when the current query
// is a suffix of the initial query.
if !menu.is_incomplete {
let was_complete = !menu.is_incomplete;
if was_complete && !was_snippets_only {
// If the new query is a suffix of the old query (typing more characters) and
// the previous result was complete, the existing completions can be filtered.
//
@@ -5393,23 +5395,6 @@ impl Editor {
}
};
let trigger_kind = match trigger {
Some(trigger) if buffer.read(cx).completion_triggers().contains(trigger) => {
CompletionTriggerKind::TRIGGER_CHARACTER
}
_ => CompletionTriggerKind::INVOKED,
};
let completion_context = CompletionContext {
trigger_character: trigger.and_then(|trigger| {
if trigger_kind == CompletionTriggerKind::TRIGGER_CHARACTER {
Some(String::from(trigger))
} else {
None
}
}),
trigger_kind,
};
let Anchor {
excerpt_id: buffer_excerpt_id,
text_anchor: buffer_position,
@@ -5467,49 +5452,72 @@ impl Editor {
&& match &query {
Some(query) => query.chars().count() < completion_settings.words_min_length,
None => completion_settings.words_min_length != 0,
});
})
|| (provider.is_some() && completion_settings.words == WordsCompletionMode::Disabled);
let (mut words, provider_responses) = match &provider {
Some(provider) => {
let provider_responses = provider.completions(
buffer_excerpt_id,
let mut words = if omit_word_completions {
Task::ready(BTreeMap::default())
} else {
cx.background_spawn(async move {
buffer_snapshot.words_in_range(WordsQuery {
fuzzy_contents: None,
range: word_search_range,
skip_digits,
})
})
};
let load_provider_completions = provider.as_ref().is_some_and(|provider| {
trigger.as_ref().is_none_or(|trigger| {
provider.is_completion_trigger(
&buffer,
buffer_position,
completion_context,
window,
position.text_anchor,
trigger,
trigger_in_words,
completions_source.is_some(),
cx,
);
)
})
});
let words = match (omit_word_completions, completion_settings.words) {
(true, _) | (_, WordsCompletionMode::Disabled) => {
Task::ready(BTreeMap::default())
}
(false, WordsCompletionMode::Enabled | WordsCompletionMode::Fallback) => cx
.background_spawn(async move {
buffer_snapshot.words_in_range(WordsQuery {
fuzzy_contents: None,
range: word_search_range,
skip_digits,
})
}),
};
let provider_responses = if let Some(provider) = &provider
&& load_provider_completions
{
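// Build the CompletionContext handed to the provider: if the typed text is one of the
// buffer's registered trigger characters, report TRIGGER_CHARACTER, otherwise INVOKED.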
let trigger_character =
trigger.filter(|trigger| buffer.read(cx).completion_triggers().contains(trigger));
let completion_context = CompletionContext {
trigger_kind: match &trigger_character {
Some(_) => CompletionTriggerKind::TRIGGER_CHARACTER,
None => CompletionTriggerKind::INVOKED,
},
trigger_character,
};
(words, provider_responses)
}
None => {
let words = if omit_word_completions {
Task::ready(BTreeMap::default())
} else {
cx.background_spawn(async move {
buffer_snapshot.words_in_range(WordsQuery {
fuzzy_contents: None,
range: word_search_range,
skip_digits,
})
})
};
(words, Task::ready(Ok(Vec::new())))
}
provider.completions(
buffer_excerpt_id,
&buffer,
buffer_position,
completion_context,
window,
cx,
)
} else {
Task::ready(Ok(Vec::new()))
};
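// With provider-side snippet injection removed (see the `Entity<Project>` provider further
// below), snippet completions are requested here whenever the provider opts in via
// `show_snippets` and a project is available.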
let snippets = if let Some(provider) = &provider
&& provider.show_snippets()
&& let Some(project) = self.project()
{
project.update(cx, |project, cx| {
snippet_completions(project, &buffer, buffer_position, cx)
})
} else {
Task::ready(Ok(CompletionResponse {
completions: Vec::new(),
display_options: Default::default(),
is_incomplete: false,
}))
};
let snippet_sort_order = EditorSettings::get_global(cx).snippet_sort_order;
@@ -5567,6 +5575,13 @@ impl Editor {
confirm: None,
}));
completions.extend(
snippets
.await
.into_iter()
.flat_map(|response| response.completions),
);
let menu = if completions.is_empty() {
None
} else {
@@ -5578,7 +5593,11 @@ impl Editor {
.map(|workspace| workspace.read(cx).app_state().languages.clone());
let menu = CompletionsMenu::new(
id,
requested_source.unwrap_or(CompletionsMenuSource::Normal),
requested_source.unwrap_or(if load_provider_completions {
CompletionsMenuSource::Normal
} else {
CompletionsMenuSource::SnippetsOnly
}),
sort_completions,
show_completion_documentation,
position,
@@ -5908,7 +5927,7 @@ impl Editor {
.as_ref()
.is_some_and(|confirm| confirm(intent, window, cx));
if show_new_completions_on_confirm {
self.show_completions(&ShowCompletions { trigger: None }, window, cx);
self.open_or_update_completions_menu(None, None, false, window, cx);
}
let provider = self.completion_provider.as_ref()?;
@@ -7553,7 +7572,14 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Self>,
) {
if self.show_edit_predictions_in_menu() {
// Ensure that the edit prediction preview is updated, even when not
// enabled, if there's an active edit prediction preview.
if self.show_edit_predictions_in_menu()
|| matches!(
self.edit_prediction_preview,
EditPredictionPreview::Active { .. }
)
{
self.update_edit_prediction_preview(&modifiers, window, cx);
}
@@ -9950,6 +9976,38 @@ impl Editor {
self.outdent(&Outdent, window, cx);
}
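/// Moves the cursor to the next tabstop of the active snippet, hiding the mouse cursor on
/// success; no-op in single-line editors or when the snippet stack is empty.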
pub fn next_snippet_tabstop(
&mut self,
_: &NextSnippetTabstop,
window: &mut Window,
cx: &mut Context<Self>,
) {
if self.mode.is_single_line() || self.snippet_stack.is_empty() {
return;
}
if self.move_to_next_snippet_tabstop(window, cx) {
self.hide_mouse_cursor(HideMouseCursorOrigin::TypingAction, cx);
return;
}
}
pub fn previous_snippet_tabstop(
&mut self,
_: &PreviousSnippetTabstop,
window: &mut Window,
cx: &mut Context<Self>,
) {
if self.mode.is_single_line() || self.snippet_stack.is_empty() {
return;
}
if self.move_to_prev_snippet_tabstop(window, cx) {
self.hide_mouse_cursor(HideMouseCursorOrigin::TypingAction, cx);
return;
}
}
pub fn tab(&mut self, _: &Tab, window: &mut Window, cx: &mut Context<Self>) {
if self.mode.is_single_line() {
cx.propagate();
@@ -12662,6 +12720,10 @@ impl Editor {
});
}
// trigger_in_words truth table:
//                            | !show_in_menu | show_in_menu |
// no active edit prediction  | true          | true         |
// had active edit prediction | false         | true         |
let trigger_in_words =
this.show_edit_predictions_in_menu() || !had_active_edit_prediction;
@@ -13976,6 +14038,27 @@ impl Editor {
);
}
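/// Records the destination of a tag-style jump (cursor anchor plus scroll state) in the
/// navigation history, pairing with the `start_tag_jump` call made when the jump begins.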
fn finish_tag_jump(&mut self, point: Point, cx: &mut Context<Self>) {
let Some(nav_history) = self.nav_history.as_mut() else {
return;
};
let snapshot = self.buffer.read(cx).snapshot(cx);
let cursor_anchor = snapshot.anchor_after(point);
let cursor_position = cursor_anchor.to_point(&snapshot);
let scroll_state = self.scroll_manager.anchor();
let scroll_top_row = scroll_state.top_row(&snapshot);
dbg!("finish tag jump", cursor_position);
nav_history.finish_tag_jump(
Some(NavigationData {
cursor_anchor,
cursor_position,
scroll_anchor: scroll_state,
scroll_top_row,
}),
cx,
);
}
fn push_to_nav_history(
&mut self,
cursor_anchor: Anchor,
@@ -15845,7 +15928,7 @@ impl Editor {
) {
let current_scroll_position = self.scroll_position(cx);
let lines_to_expand = EditorSettings::get_global(cx).expand_excerpt_lines;
let mut should_scroll_up = false;
let mut scroll = None;
if direction == ExpandExcerptDirection::Down {
let multi_buffer = self.buffer.read(cx);
@@ -15858,17 +15941,30 @@ impl Editor {
let excerpt_end_row = Point::from_anchor(&excerpt_range.end, &buffer_snapshot).row;
let last_row = buffer_snapshot.max_point().row;
let lines_below = last_row.saturating_sub(excerpt_end_row);
should_scroll_up = lines_below >= lines_to_expand;
if lines_below >= lines_to_expand {
scroll = Some(
current_scroll_position
+ gpui::Point::new(0.0, lines_to_expand as ScrollOffset),
);
}
}
}
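// When expanding the first excerpt upward there is no excerpt above it; re-applying the
// current scroll position here is presumably meant to keep the viewport from jumping.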
if direction == ExpandExcerptDirection::Up
&& self
.buffer
.read(cx)
.snapshot(cx)
.excerpt_before(excerpt)
.is_none()
{
scroll = Some(current_scroll_position);
}
self.buffer.update(cx, |buffer, cx| {
buffer.expand_excerpts([excerpt], lines_to_expand, direction, cx)
});
if should_scroll_up {
let new_scroll_position =
current_scroll_position + gpui::Point::new(0.0, lines_to_expand as ScrollOffset);
if let Some(new_scroll_position) = scroll {
self.set_scroll_position(new_scroll_position, window, cx);
}
}
@@ -15950,7 +16046,6 @@ impl Editor {
}
fn filtered<'a>(
snapshot: EditorSnapshot,
severity: GoToDiagnosticSeverityFilter,
diagnostics: impl Iterator<Item = DiagnosticEntryRef<'a, usize>>,
) -> impl Iterator<Item = DiagnosticEntryRef<'a, usize>> {
@@ -15958,19 +16053,15 @@ impl Editor {
.filter(move |entry| severity.matches(entry.diagnostic.severity))
.filter(|entry| entry.range.start != entry.range.end)
.filter(|entry| !entry.diagnostic.is_unnecessary)
.filter(move |entry| !snapshot.intersects_fold(entry.range.start))
}
let snapshot = self.snapshot(window, cx);
let before = filtered(
snapshot.clone(),
severity,
buffer
.diagnostics_in_range(0..selection.start)
.filter(|entry| entry.range.start <= selection.start),
);
let after = filtered(
snapshot,
severity,
buffer
.diagnostics_in_range(selection.start..buffer.len())
@@ -16009,6 +16100,15 @@ impl Editor {
let Some(buffer_id) = buffer.buffer_id_for_anchor(next_diagnostic_start) else {
return;
};
let snapshot = self.snapshot(window, cx);
if snapshot.intersects_fold(next_diagnostic.range.start) {
self.unfold_ranges(
std::slice::from_ref(&next_diagnostic.range),
true,
false,
cx,
);
}
self.change_selections(Default::default(), window, cx, |s| {
s.select_ranges(vec![
next_diagnostic.range.start..next_diagnostic.range.start,
@@ -16358,18 +16458,37 @@ impl Editor {
let Some(provider) = self.semantics_provider.clone() else {
return Task::ready(Ok(Navigated::No));
};
let head = self
let cursor = self
.selections
.newest::<usize>(&self.display_snapshot(cx))
.head();
let buffer = self.buffer.read(cx);
let Some((buffer, head)) = buffer.text_anchor_for_position(head, cx) else {
let multi_buffer = self.buffer.read(cx);
let Some((buffer, head)) = multi_buffer.text_anchor_for_position(cursor, cx) else {
return Task::ready(Ok(Navigated::No));
};
let Some(definitions) = provider.definitions(&buffer, head, kind, cx) else {
return Task::ready(Ok(Navigated::No));
};
if let Some(nav_history) = self.nav_history.as_mut() {
let snapshot = self.buffer.read(cx).snapshot(cx);
let cursor_anchor = snapshot.anchor_after(cursor);
let cursor_position = snapshot.offset_to_point(cursor);
let scroll_anchor = self.scroll_manager.anchor();
let scroll_top_row = scroll_anchor.top_row(&snapshot);
dbg!("start tag jump", cursor_position);
nav_history.start_tag_jump(
Some(NavigationData {
cursor_anchor,
cursor_position,
scroll_anchor,
scroll_top_row,
}),
cx,
);
}
cx.spawn_in(window, async move |editor, cx| {
let Some(definitions) = definitions.await? else {
return Ok(Navigated::No);
@@ -16608,12 +16727,13 @@ impl Editor {
editor.update_in(cx, |editor, window, cx| {
let range = target_range.to_point(target_buffer.read(cx));
let range = editor.range_for_match(&range);
let range = editor.range_for_match(&range, false);
let range = collapse_multiline_range(range);
if !split
&& Some(&target_buffer) == editor.buffer.read(cx).as_singleton().as_ref()
{
editor.finish_tag_jump(range.start, cx);
editor.go_to_singleton_buffer_range(range, window, cx);
} else {
let pane = workspace.read(cx).active_pane().clone();
@@ -16639,6 +16759,7 @@ impl Editor {
// When selecting a definition in a different buffer, disable the nav history
// to avoid creating a history entry at the previous cursor location.
pane.update(cx, |pane, _| pane.disable_history());
target_editor.finish_tag_jump(range.start, cx);
target_editor.go_to_singleton_buffer_range(range, window, cx);
pane.update(cx, |pane, _| pane.enable_history());
});
@@ -16686,6 +16807,139 @@ impl Editor {
})
}
fn go_to_next_reference(
&mut self,
_: &GoToNextReference,
window: &mut Window,
cx: &mut Context<Self>,
) {
let task = self.go_to_reference_before_or_after_position(Direction::Next, 1, window, cx);
if let Some(task) = task {
task.detach();
};
}
fn go_to_prev_reference(
&mut self,
_: &GoToPreviousReference,
window: &mut Window,
cx: &mut Context<Self>,
) {
let task = self.go_to_reference_before_or_after_position(Direction::Prev, 1, window, cx);
if let Some(task) = task {
task.detach();
};
}
pub fn go_to_reference_before_or_after_position(
&mut self,
direction: Direction,
count: usize,
window: &mut Window,
cx: &mut Context<Self>,
) -> Option<Task<Result<()>>> {
let selection = self.selections.newest_anchor();
let head = selection.head();
let multi_buffer = self.buffer.read(cx);
let (buffer, text_head) = multi_buffer.text_anchor_for_position(head, cx)?;
let workspace = self.workspace()?;
let project = workspace.read(cx).project().clone();
let references =
project.update(cx, |project, cx| project.references(&buffer, text_head, cx));
Some(cx.spawn_in(window, async move |editor, cx| -> Result<()> {
let Some(locations) = references.await? else {
return Ok(());
};
if locations.is_empty() {
// totally normal - the cursor may be on something which is not
// a symbol (e.g. a keyword)
log::info!("no references found under cursor");
return Ok(());
}
let multi_buffer = editor.read_with(cx, |editor, _| editor.buffer().clone())?;
let multi_buffer_snapshot =
multi_buffer.read_with(cx, |multi_buffer, cx| multi_buffer.snapshot(cx))?;
let (locations, current_location_index) =
multi_buffer.update(cx, |multi_buffer, cx| {
let mut locations = locations
.into_iter()
.filter_map(|loc| {
let start = multi_buffer.buffer_anchor_to_anchor(
&loc.buffer,
loc.range.start,
cx,
)?;
let end = multi_buffer.buffer_anchor_to_anchor(
&loc.buffer,
loc.range.end,
cx,
)?;
Some(start..end)
})
.collect::<Vec<_>>();
// There is an O(n) implementation, but given this list will be
// small (usually <100 items), the extra O(log(n)) factor isn't
// worth the (surprisingly large amount of) extra complexity.
locations
.sort_unstable_by(|l, r| l.start.cmp(&r.start, &multi_buffer_snapshot));
let head_offset = head.to_offset(&multi_buffer_snapshot);
let current_location_index = locations.iter().position(|loc| {
loc.start.to_offset(&multi_buffer_snapshot) <= head_offset
&& loc.end.to_offset(&multi_buffer_snapshot) >= head_offset
});
(locations, current_location_index)
})?;
let Some(current_location_index) = current_location_index else {
// This indicates something has gone wrong, because we already
// handle the "no references" case above
log::error!(
"failed to find current reference under cursor. Total references: {}",
locations.len()
);
return Ok(());
};
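// Wrap-around index arithmetic: with 5 references, a current index of 1, and count 3,
// Next lands on (1 + 3) % 5 == 4 and Prev on (1 + 5 - 3) % 5 == 3.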
let destination_location_index = match direction {
Direction::Next => (current_location_index + count) % locations.len(),
Direction::Prev => {
(current_location_index + locations.len() - count % locations.len())
% locations.len()
}
};
// TODO(cameron): is this needed?
// the thinking is to avoid "jumping to the current location" (avoid
// polluting "jumplist" in vim terms)
if current_location_index == destination_location_index {
return Ok(());
}
let Range { start, end } = locations[destination_location_index];
editor.update_in(cx, |editor, window, cx| {
let effects = SelectionEffects::default();
editor.unfold_ranges(&[start..end], false, false, cx);
editor.change_selections(effects, window, cx, |s| {
s.select_ranges([start..start]);
});
})?;
Ok(())
}))
}
pub fn find_all_references(
&mut self,
_: &FindAllReferences,
@@ -16827,6 +17081,7 @@ impl Editor {
multibuffer.with_title(title)
});
let first_range = ranges.first().cloned();
let existing = workspace.active_pane().update(cx, |pane, cx| {
pane.items()
.filter_map(|item| item.downcast::<Editor>())
@@ -16879,6 +17134,21 @@ impl Editor {
});
}
});
cx.defer({
let editor = editor.clone();
move |cx| {
let Some(range) = first_range else { return };
editor.update(cx, |editor, cx| {
let point = editor
.buffer()
.read(cx)
.snapshot(cx)
.summary_for_anchor(&range.start);
editor.finish_tag_jump(point, cx)
})
}
});
let item = Box::new(editor);
let item_id = item.item_id();
@@ -17649,6 +17919,7 @@ impl Editor {
.unwrap_or(self.diagnostics_max_severity);
if !self.inline_diagnostics_enabled()
|| !self.diagnostics_enabled()
|| !self.show_inline_diagnostics
|| max_severity == DiagnosticSeverity::Off
{
@@ -17727,7 +17998,7 @@ impl Editor {
window: &Window,
cx: &mut Context<Self>,
) -> Option<()> {
if self.ignore_lsp_data() {
if self.ignore_lsp_data() || !self.diagnostics_enabled() {
return None;
}
let pull_diagnostics_settings = ProjectSettings::get_global(cx)
@@ -21285,7 +21556,7 @@ impl Editor {
.and_then(|e| e.to_str())
.map(|a| a.to_string()));
let vim_mode = vim_enabled(cx);
let vim_mode = vim_flavor(cx).is_some();
let edit_predictions_provider = all_language_settings(file, cx).edit_predictions.provider;
let copilot_enabled = edit_predictions_provider
@@ -21916,10 +22187,26 @@ fn edit_for_markdown_paste<'a>(
(range, new_text)
}
fn vim_enabled(cx: &App) -> bool {
vim_mode_setting::VimModeSetting::try_get(cx)
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
pub enum VimFlavor {
Vim,
Helix,
}
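/// Returns the modal-editing flavor currently enabled in settings, preferring Helix over
/// Vim when both are on, and `None` when neither setting is enabled.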
pub fn vim_flavor(cx: &App) -> Option<VimFlavor> {
if vim_mode_setting::HelixModeSetting::try_get(cx)
.map(|helix_mode| helix_mode.0)
.unwrap_or(false)
{
Some(VimFlavor::Helix)
} else if vim_mode_setting::VimModeSetting::try_get(cx)
.map(|vim_mode| vim_mode.0)
.unwrap_or(false)
{
Some(VimFlavor::Vim)
} else {
None // neither vim nor helix mode
}
}
fn process_completion_for_edit(
@@ -22696,6 +22983,10 @@ pub trait CompletionProvider {
fn filter_completions(&self) -> bool {
true
}
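/// Whether snippet completions should be gathered alongside this provider's results.
/// Defaults to `false`; the `Entity<Project>` provider overrides it to return `true`.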
fn show_snippets(&self) -> bool {
false
}
}
pub trait CodeActionProvider {
@@ -22956,16 +23247,8 @@ impl CompletionProvider for Entity<Project> {
cx: &mut Context<Editor>,
) -> Task<Result<Vec<CompletionResponse>>> {
self.update(cx, |project, cx| {
let snippets = snippet_completions(project, buffer, buffer_position, cx);
let project_completions = project.completions(buffer, buffer_position, options, cx);
cx.background_spawn(async move {
let mut responses = project_completions.await?;
let snippets = snippets.await?;
if !snippets.completions.is_empty() {
responses.push(snippets);
}
Ok(responses)
})
let task = project.completions(buffer, buffer_position, options, cx);
cx.background_spawn(task)
})
}
@@ -23037,6 +23320,10 @@ impl CompletionProvider for Entity<Project> {
buffer.completion_triggers().contains(text)
}
fn show_snippets(&self) -> bool {
true
}
}
impl SemanticsProvider for Entity<Project> {
@@ -23965,6 +24252,10 @@ impl EntityInputHandler for Editor {
let utf16_offset = anchor.to_offset_utf16(&position_map.snapshot.buffer_snapshot());
Some(utf16_offset.0)
}
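/// Exposes whether the editor currently accepts text input; this simply returns
/// `input_enabled`.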
fn accepts_text_input(&self, _window: &mut Window, _cx: &mut Context<Self>) -> bool {
self.input_enabled
}
}
trait SelectionExt {
