Compare commits


113 Commits

Author SHA1 Message Date
dino
f414b65002 chore: add vim commands and vertical split alternate 2025-11-13 22:10:17 +00:00
dino
1ccf554e14 feat: initial working version of projections 2025-11-13 20:48:58 +00:00
dino
82d86110d2 chore(vim): add initial projections structs 2025-11-13 20:48:58 +00:00
dino
7143d4017b chore(vim): add projections 🌅 2025-11-13 20:48:58 +00:00
AidanV
84f24e4b62 vim: Add :<range>w <filename> command (#41256)
Release Notes:

- Adds support for `:[range]w {file}`
  - This writes the lines in the range to the specified file
- Adds support for `:[range]w`
  - This replaces the current file with the selected lines
2025-11-13 13:27:08 -07:00
Abul Hossain Khan
03fad4b951 workspace: Fix pinned tab causing resize loop on adjacent tab (#41884)
Closes #41467 

My first PR in Zed, any guidance or tips are appreciated.

This fixes the flickering/resize loop that occurred on the tab
immediately to the right of a pinned tab.

Removed the conditional border on the pinned tabs container. The border
was a visual indicator to show when unpinned tabs were scrolled, but it
wasn't essential and was causing the layout thrashing.

Release Notes:

- Fixed pinned tab causing a resize loop on the adjacent tab

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-14 01:52:57 +05:30
Kevin Rubio
c626e770a0 outline_panel: Remove toggle expanded behavior from OpenSelectedEntry (#42214)
Fixed outline panel space key behavior by removing duplicate toggle call

The `open_selected_entry` function in `outline_panel.rs` was incorrectly
calling `self.toggle_expanded(&selected_entry, window, cx)` in addition
to its primary logic, causing the space key to both open/close entries
AND toggle their expanded state. Removed the redundant `toggle_expanded`
call to achieve the intended behavior.

Closes #41711

Release Notes:

- Fixed issue with the outline panel where pressing space would cause an
open selected entry to collapse and cause a closed selected entry to
open.

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-14 01:07:22 +05:30
Lionel Henry
fa0c7500c1 Update runtimed to fix compatibility issue with the Ark kernel (#40889)
Closes #40888

This updates runtimed to the latest version, which handles the
"starting" variant of `execution_state`. It actually handles a bunch of
other variants that are not documented in the protocol (see
https://jupyter-client.readthedocs.io/en/stable/messaging.html#kernel-status),
like "starting", "terminating", etc. I added implementations for these
variants as well.

Release Notes:

- Fixed issue that prevented the Ark kernel from working in Zed
(#40888).

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-11-13 19:35:45 +00:00
Richard Feldman
e91be9e98e Fix ACP CLI login via remote (#42647)
Release Notes:

- Fixed logging into Gemini CLI and Claude Code when remoting and
authenticating via CLI

Co-authored-by: Lukas Wirth <me@lukaswirth.dev>
2025-11-13 19:13:09 +01:00
Rafael Lüder
46eb9e5223 Update scale factor and drawable size when macOS window changes screen (#38269)
Summary

Fixes UI scaling issue that occurs when starting Zed after disconnecting
an external monitor on macOS. The window's scale factor and drawable
size are now properly updated when the window changes screens.

Problem Description

When an external monitor is disconnected and Zed is started with only
the built-in screen active, the UI scale becomes incorrect. This happens
because:

1. macOS triggers the `window_did_change_screen` callback when a window
moves between displays (including when displays are disconnected)
2. The existing implementation only restarted the display link but
didn't update the window's scale factor or drawable size
3. This left the window with stale scaling information from the previous
display configuration

Root Cause

The `window_did_change_screen` callback in
`crates/gpui/src/platform/mac/window.rs` was missing the logic to update
the window's scale factor and drawable size when moving between screens.
This logic was only present in the `view_did_change_backing_properties
callback`, which isn't triggered when external monitors are
disconnected.

Solution

- Extracted common logic: Created a new `update_window_scale_factor()`
function that encapsulates the scale factor and drawable size update
logic
- Added scale factor update to screen change: Modified
`window_did_change_screen` to call this function after restarting the
display link
- Refactored existing code: Updated `view_did_change_backing_properties`
to use the new shared function, reducing code duplication

The fix ensures that whenever a window changes screens (due to monitor
disconnect, reconnect, or manual movement), the scale factor, drawable
size, and renderer state are properly synchronized.

Testing

- Verified that UI scaling remains correct after disconnecting an
external monitor
- Confirmed that reconnecting an external monitor works properly
- Tested that manual window movement between displays updates scaling
correctly
- No regressions observed in normal window operations

To verify that my fix worked, I had to copy my preview workspace over my
dev workspace; once I had done this, I could reproduce the issue on main
consistently. After switching to the branch with this fix, the issue was
resolved.

The fix is similar to what was done on
https://github.com/zed-industries/zed/pull/35686 (Windows)

Closes #37245 #38229

Release Notes:

- Fixed: Update scale factor and drawable size when macOS window changes
screen

---------

Co-authored-by: Kate <work@localcc.cc>
2025-11-13 16:51:13 +00:00
Conrad Irwin
cb7bd5fe19 Include source PR number in cherry-picks (#42642)
Release Notes:

- N/A
2025-11-13 16:06:26 +00:00
Ben Kunkle
b900ac2ac7 ci: Fix script/clear-target-dir-if-larger-than post #41652 (#42640)
Closes #ISSUE

The namespace runners mount the `target` directory to the cache drive,
so `rm -rf target` would fail with `Device Busy`. Instead we now do `rm
-rf target/* target/.*` to remove all files (including hidden files)
from the `target` directory, without removing the target directory
itself.

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-13 10:58:59 -05:00
Dino
b709996ec6 editor: Fix pane's tab buttons flicker on right-click (#42549)
Whenever right-click was used on the editor, the pane's tab buttons
would flicker, which was confirmed to happen because of the following
check:

```
self.focus_handle.contains_focused(window, cx)
    || self
        .active_item()
        .is_some_and(|item| {
            item.item_focus_handle(cx).contains_focused(window, cx)
        })
```

This check returned `false` immediately after right-clicking, only to
return `true` shortly afterwards. Digging into it a little more, this
appears to happen because the editor's `MouseContextMenu` relies on
`ContextMenu`, which is rendered in a deferred fashion, while
`MouseContextMenu` moves the window's focus to it instantaneously.

Since the `ContextMenu` is rendered in a deferred fashion, its focus
handle is not yet a descendant of the editor (pane's active item) focus
handle, so the `contains_focused(window, cx)` call would return `false`,
with it returning `true` after the menu was rendered.

This commit updates the `MouseContextMenu::new` function to leverage
`cx.on_next_frame` and ensure that the focus is only moved to the
`ContextMenu` 2 frames later, ensuring that by the time the focus is
moved, the `ContextMenu`'s focus handle is a descendant of the editor's.

Closes #41771 

Release Notes:

- Fixed pane's tab buttons flickering when using right-click on the
editor
2025-11-13 15:57:26 +00:00
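The two-frame deferral described above can be modeled with a tiny frame queue. This is an illustrative sketch of the idea only; gpui's real `on_next_frame` and focus APIs are more involved, and the `Frames` type here is made up:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Toy frame scheduler: callbacks queued via `on_next_frame` run when
// the next frame is drawn. Hypothetical stand-in for gpui's scheduler.
struct Frames {
    queue: Vec<Box<dyn FnOnce(&mut Frames)>>,
}

impl Frames {
    fn on_next_frame(&mut self, f: impl FnOnce(&mut Frames) + 'static) {
        self.queue.push(Box::new(f));
    }

    fn draw(&mut self) {
        // Run everything queued for this frame; callbacks may queue
        // more work for the following frame.
        for f in std::mem::take(&mut self.queue) {
            f(self);
        }
    }
}

fn main() {
    let focused = Rc::new(RefCell::new(false));
    let mut frames = Frames { queue: Vec::new() };
    let flag = focused.clone();
    // Defer twice so the inner closure runs on the second frame,
    // i.e. after the deferred ContextMenu would have been rendered.
    frames.on_next_frame(move |fr: &mut Frames| {
        fr.on_next_frame(move |_: &mut Frames| *flag.borrow_mut() = true);
    });
    frames.draw();
    assert!(!*focused.borrow()); // after one frame, focus not yet moved
    frames.draw();
    assert!(*focused.borrow()); // focus moves two frames later
    println!("ok");
}
```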
Smit Barmase
b6972d70a5 editor: Fix panic when calculating jump data for buffer header (#42639)
Just on nightly.

Release Notes:

- N/A *or* Added/Fixed/Improved ...

Co-authored-by: Lukas Wirth <me@lukaswirth.dev>
2025-11-13 15:48:05 +00:00
kitt
ec1664f61a zed: Enable line wrapping for cli help (#42496)
This enables clap's [wrap-help] feature and sets max_term_width to wrap
after 100 columns (the value clap is planning to default to in clap-v5).

This commit also adds blank lines which cause clap to split longer doc
comments into separate help (displayed for `-h`) and long_help
(displayed for `--help`) messages, as per [doc-processing].

[wrap-help]:
https://docs.rs/clap/4.5.49/clap/_features/index.html#optional-features
[doc-processing]:
https://docs.rs/clap/4.5.49/clap/_derive/index.html#pre-processing

![before: some lines of help text stretch across the whole screen.
after: all lines are wrapped at 100 columns, and some manual linebreaks
are preserved where it makes sense (in particular, when listing the
user-data-dir locations on each
platform)](https://github.com/user-attachments/assets/359067b4-5ffb-4fe3-80bd-5e1062986417)


Release Notes:

- N/A
2025-11-13 10:46:51 -05:00
Agus Zubiaga
c2c5fceb5b zeta eval: Allow no headings under "Expected Context" (#42638)
Release Notes:

- N/A
2025-11-13 15:43:22 +00:00
Richard Feldman
eadc2301e0 Fetch the unit eval commit before checking it out (#42636)
Release Notes:

- N/A
2025-11-13 15:21:53 +00:00
Richard Feldman
b500470391 Disabled agent commands (#42579)
Closes #31346

Release Notes:

- Agent commands no longer show up in the command palette when `agent`
is disabled. Same for edit predictions.
2025-11-13 10:10:02 -05:00
Oleksiy Syvokon
55e4258147 agent: Workaround for Sonnet inserting </parameter> tag (#42634)
Release Notes:

- N/A
2025-11-13 15:09:16 +00:00
Agus Zubiaga
8467a1b08b zeta eval: Improve output (#42629)
Hides the aggregated scores if only one example/repetition ran. It also
fixes an issue with the expected context scoring.

Release Notes:

- N/A
2025-11-13 14:47:48 +00:00
Tim McLean
fb90b12073 Add retry support for OpenAI-compatible LLM providers (#37891)
Automatically retry the agent's LLM completion requests when the
provider returns 429 Too Many Requests. Uses the Retry-After header to
determine the retry delay if it is available.

Many providers are frequently overloaded or have low rate limits. These
providers are essentially unusable without automatic retries.

Tested with Cerebras configured via openai_compatible.

Related: #31531 

Release Notes:

- Added automatic retries for OpenAI-compatible LLM providers

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-11-13 14:15:46 +00:00
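The retry-delay logic described above can be sketched as follows. `retry_delay` is a hypothetical helper, not Zed's actual code, and a real implementation would also need to handle the HTTP-date form of `Retry-After`:

```rust
use std::time::Duration;

// Decide how long to wait before retrying after a 429 Too Many
// Requests response: honor a `Retry-After: <seconds>` header if
// present, otherwise fall back to exponential backoff.
fn retry_delay(retry_after: Option<&str>, attempt: u32) -> Duration {
    retry_after
        .and_then(|v| v.trim().parse::<u64>().ok())
        .map(Duration::from_secs)
        .unwrap_or_else(|| Duration::from_secs(2u64.saturating_pow(attempt)))
}

fn main() {
    // Header present: use the server-provided delay.
    assert_eq!(retry_delay(Some("7"), 0), Duration::from_secs(7));
    // Header absent: exponential backoff (2^3 seconds on attempt 3).
    assert_eq!(retry_delay(None, 3), Duration::from_secs(8));
    println!("ok");
}
```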
Mayank Verma
92e64f9cf0 settings: Add tilde expansion support for LSP binary path (#41715)
Closes #38227

Release Notes:

- Added tilde expansion support for LSP binary path in `settings.json`
2025-11-13 09:14:18 -05:00
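Tilde expansion of this kind can be sketched as below. `expand_tilde` is a hypothetical helper with the home directory passed in explicitly, and the paths are made up; Zed's actual implementation resolves the user's home itself and may handle more forms:

```rust
use std::path::PathBuf;

// Expand a leading `~` or `~/` against the given home directory,
// leaving other paths untouched.
fn expand_tilde(path: &str, home: &str) -> PathBuf {
    match path.strip_prefix("~/") {
        Some(rest) => PathBuf::from(home).join(rest),
        None if path == "~" => PathBuf::from(home),
        None => PathBuf::from(path),
    }
}

fn main() {
    assert_eq!(
        expand_tilde("~/.local/bin/typescript-language-server", "/home/alice"),
        PathBuf::from("/home/alice/.local/bin/typescript-language-server")
    );
    // Absolute paths pass through unchanged.
    assert_eq!(
        expand_tilde("/usr/bin/lsp", "/home/alice"),
        PathBuf::from("/usr/bin/lsp")
    );
    println!("ok");
}
```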
Remco Smits
f318bb5fd7 markdown: Add support for HTML href elements (#42265)
This PR adds support for `HTML` href elements. It also refactors the
way we store the regions; without that, I would have had to add 2 extra
arguments to each `HTML` parser method. It's now also more in line with
how we have done it for the highlights.


**Small note**: the markdown parser only supports HTML href tags inside
a paragraph tag. So adding them as a root node will result in just
showing the inner text. This is a limitation of the markdown parser we
use itself.

**Before**
<img width="935" height="174" alt="Screenshot 2025-11-08 at 15 40 28"
src="https://github.com/user-attachments/assets/42172222-ed49-4a4b-8957-a46330e54c69"
/>

**After**
<img width="1026" height="180" alt="Screenshot 2025-11-08 at 15 29 55"
src="https://github.com/user-attachments/assets/9e139c2d-d43a-4952-8d1f-15eb92966241"
/>

**Example code**
```markdown
<p>asd <a href="https://example.com">Link Text</a> more text</p>
<p><a href="https://example.com">Link Text</a></p>

[Duck Duck Go](https://duckduckgo.com)
```

**TODO**:
- [x] Add tests

cc @bennetbo

Release Notes:

- Markdown Preview: Add support for `HTML` href elements.

---------

Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
2025-11-13 15:12:17 +01:00
Piotr Osiewicz
430b55405a search: New recent old search implementation (#40835)
This is in-progress work on changing how the task scheduler affects the
performance of project search. Instead of relying on tasks being
executed at the discretion of the task scheduler, we want to experiment
with having a set of "agents" that prioritize driving in-progress
project search matches to completion over pushing the whole thing to
completion. This should hopefully significantly improve the throughput
and latency of project search.

This PR has been reverted previously in #40831.

Release Notes:
- Improved project search performance in local projects.

---------

Co-authored-by: Smit Barmase <smit@zed.dev>
Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-13 14:56:40 +01:00
Lukas Wirth
27f700e2b2 askpass: Quote paths in generated askpass script (#42622)
Closes https://github.com/zed-industries/zed/issues/42618

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-13 14:37:47 +01:00
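Quoting a path for interpolation into a generated POSIX shell script typically uses single quotes with the `'\''` escape for embedded quotes. `shell_quote` below is a hypothetical helper illustrating the technique, not the actual askpass code:

```rust
// Single-quote a string for safe use in a POSIX shell script.
// Inside single quotes nothing is special, so the only case to
// handle is an embedded single quote: close the quote, emit an
// escaped quote, and reopen (`'` -> `'\''`).
fn shell_quote(path: &str) -> String {
    format!("'{}'", path.replace('\'', r"'\''"))
}

fn main() {
    // Spaces no longer split the path into multiple words.
    assert_eq!(shell_quote("/tmp/my dir/sock"), "'/tmp/my dir/sock'");
    // Embedded single quotes are escaped.
    assert_eq!(shell_quote("a'b"), r"'a'\''b'");
    println!("ok");
}
```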
Smit Barmase
b5633f5bc7 editor: Improve multi-buffer header filename click to jump to the latest selection from that buffer - take 2 (#42613)
Relands https://github.com/zed-industries/zed/pull/42480

Release Notes:

- Clicking the multi-buffer header file name or the "Open file" button
now jumps to the most recent selection in that buffer, if one exists.

---------

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
2025-11-13 17:14:33 +05:30
R.Amogh
b9ce52dc95 agent_ui: Fix scrolling in context server configuration modal (#42502)
## Summary

Fixes #42342

When installing a dev extension with long installation instructions, the
configuration modal would overflow and users couldn't scroll to see the
full content or interact with buttons at the bottom.

## Solution

This PR adds a `ScrollHandle` to the `ConfigureContextServerModal` and
passes it to the `Modal` component, enabling the built-in modal
scrolling capability. This ensures all content remains accessible
regardless of length.

## Changes

- Added `ScrollHandle` import to the ui imports
- Added `scroll_handle: ScrollHandle` field to
`ConfigureContextServerModal` struct
- Initialize `scroll_handle` with `ScrollHandle::new()` when creating
the modal
- Pass the scroll handle to `Modal::new()` instead of `None`

## Testing

- Built the changes locally
- Tested with extensions that have long installation instructions
- Verified scrolling works and all content is accessible
- Confirmed no regression for extensions with short descriptions

Release Notes:

- Fixed scrolling issue in extension configuration modal when
installation instructions overflow the viewport

---------

Co-authored-by: Finn Evers <finn.evers@outlook.de>
2025-11-13 12:41:38 +01:00
mikeHag
34a7cfb2e5 Update cargo.rs to allow debugging of integration test annotated with the ignore attribute (#42574)
Address #40429

If an integration test is annotated with the ignore attribute, allow the
"debug: Test" option of the debug scenario or Code Action to run with
"--include-ignored"

Closes #40429

Release Notes:

- N/A
2025-11-13 12:31:23 +01:00
Kirill Bulatov
99016e3a85 Update outdated dependencies (#42611)
New rustc starts to output a few warnings, fix them by updating the
corresponding packages.

<details>
  <summary>Incompatibility notes</summary>
    
  ```
The following warnings were discovered during the build. These warnings
are an
indication that the packages contain code that will become an error in a
future release of Rust. These warnings typically cover changes to close
soundness problems, unintended or undocumented behavior, or critical
problems
that cannot be fixed in a backwards-compatible fashion, and are not
expected
to be in wide use.

Each warning should contain a link for more information on what the
warning
means and how to resolve it.


To solve this problem, you can try the following approaches:


- Some affected dependencies have newer versions available.
You may want to consider updating them to a newer version to see if the
issue has been fixed.

num-bigint-dig v0.8.4 has the following newer versions available: 0.8.5,
0.9.0, 0.9.1

- If the issue is not solved by updating the dependencies, a fix has to
be
implemented by those dependencies. You can help with that by notifying
the
maintainers of this problem (e.g. by creating a bug report) or by
proposing a
fix to the maintainers (e.g. by creating a pull request):

  - num-bigint-dig@0.8.4
  - Repository: https://github.com/dignifiedquire/num-bigint
- Detailed warning command: `cargo report future-incompatibilities --id
1 --package num-bigint-dig@0.8.4`

- If waiting for an upstream fix is not an option, you can use the
`[patch]`
section in `Cargo.toml` to use your own version of the dependency. For
more
information, see:

https://doc.rust-lang.org/cargo/reference/overriding-dependencies.html#the-patch-section

The package `num-bigint-dig v0.8.4` currently triggers the following
future incompatibility lints:
> warning: macro `vec` is private
> -->
/Users/someonetoignore/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/num-bigint-dig-0.8.4/src/biguint.rs:490:22
>     |
> 490 |         BigUint::new(vec![1])
>     |                      ^^^
>     |
> = warning: this was previously accepted by the compiler but is being
phased out; it will become a hard error in a future release!
> = note: for more information, see issue #120192
<https://github.com/rust-lang/rust/issues/120192>
> 
> warning: macro `vec` is private
> -->
/Users/someonetoignore/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/num-bigint-dig-0.8.4/src/biguint.rs:2005:9
>      |
> 2005 |         vec![0]
>      |         ^^^
>      |
> = warning: this was previously accepted by the compiler but is being
phased out; it will become a hard error in a future release!
> = note: for more information, see issue #120192
<https://github.com/rust-lang/rust/issues/120192>
> 
> warning: macro `vec` is private
> -->
/Users/someonetoignore/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/num-bigint-dig-0.8.4/src/biguint.rs:2027:16
>      |
> 2027 |         return vec![b'0'];
>      |                ^^^
>      |
> = warning: this was previously accepted by the compiler but is being
phased out; it will become a hard error in a future release!
> = note: for more information, see issue #120192
<https://github.com/rust-lang/rust/issues/120192>
> 
> warning: macro `vec` is private
> -->
/Users/someonetoignore/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/num-bigint-dig-0.8.4/src/biguint.rs:2313:13
>      |
> 2313 |             vec![0]
>      |             ^^^
>      |
> = warning: this was previously accepted by the compiler but is being
phased out; it will become a hard error in a future release!
> = note: for more information, see issue #120192
<https://github.com/rust-lang/rust/issues/120192>
> 
> warning: macro `vec` is private
> -->
/Users/someonetoignore/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/num-bigint-dig-0.8.4/src/prime.rs:138:22
>     |
> 138 |     let mut moduli = vec![BigUint::zero(); prime_limit];
>     |                      ^^^
>     |
> = warning: this was previously accepted by the compiler but is being
phased out; it will become a hard error in a future release!
> = note: for more information, see issue #120192
<https://github.com/rust-lang/rust/issues/120192>
> 
> warning: macro `vec` is private
> -->
/Users/someonetoignore/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/num-bigint-dig-0.8.4/src/bigrand.rs:319:25
>     |
> 319 |         let mut bytes = vec![0u8; bytes_len];
>     |                         ^^^
>     |
> = warning: this was previously accepted by the compiler but is being
phased out; it will become a hard error in a future release!
> = note: for more information, see issue #120192
<https://github.com/rust-lang/rust/issues/120192>
> 

  ```
  
</details>

Release Notes:

- N/A
2025-11-13 10:35:16 +00:00
Lukas Wirth
dea3c8c949 remote: More nushell fixes (#42608)
Closes https://github.com/zed-industries/zed/issues/42594

Release Notes:

- Fixed remote server installation failing with nushell
2025-11-13 09:53:31 +00:00
Lukas Wirth
7eac6d242c diagnostics: Workaround weird panic in update_path_excerpts (#42602)
Fixes ZED-36P

Patching this over for now until I can figure out the cause of this

Release Notes:

- Fixed panic in diagnostics pane
2025-11-13 09:13:54 +00:00
Lukas Wirth
b92b28314f Replace {floor/ceil}_char_boundary polyfills with std (#42599)
Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-13 08:11:18 +00:00
AidanV
1fc0642de1 vim: Make each vim repeat its own transaction (#41735)
Release Notes:

- Pressing `u` after multiple `.` in rapid succession will now only undo
the latest repeat instead of all repeats.

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-11-13 06:46:14 +00:00
Conrad Irwin
045ac6d1b6 Release failure visibility (#42572)
Closes #ISSUE

Release Notes:

- N/A
2025-11-12 23:11:09 -07:00
Sean Hagstrom
1936f16c62 editor: Use a single newline between each copied line from a multi-cursor selection (#41204)
Closes #40923

Release Notes:

- Fixed the number of newlines between copied lines from a multi-cursor
selection of multiple full-line copies.

---


https://github.com/user-attachments/assets/ab7474d6-0e49-4c29-9700-7692cd019cef
2025-11-12 22:58:13 -07:00
Conrad Irwin
b32559f07d Avoid re-creating releases when re-running workflows (#42573)
Closes #ISSUE

Release Notes:

- N/A
2025-11-12 21:50:15 -07:00
Julia Ryan
28adedf1fa Disable env clearing for npm subcommands (#42587)
Fixes #39448

Several node version managers such as [volta](https://volta.sh) use thin
wrappers that locate the "real" node/npm binary with an env var that
points at their install root. When it finds this, it prepends the
correct directory to PATH, otherwise it'll check a hardcoded default
location and prepend that to PATH if it exists.

We were clearing env for npm subcommands, which meant that volta and co.
failed to locate the install root, and because they were installed via
scoop they don't use the default install path either so it simply
doesn't prepend anything to PATH (winget on the other hand installs
volta to the right place, which is why it worked when using that instead
of scoop to install volta @IllusionaryX).

So volta's npm wrapper executes a subcommand `npm`, but when that
doesn't prepend a different directory to PATH the first `npm` found in
PATH is that same wrapper itself, which horrifyingly causes itself to
re-exec continuously. I think they might have some logic to try to
prevent this using, you'll never guess, another env var that they set
whenever a volta wrapper execs something. Of course since we clear the
env that var also fails to propagate.

Removing env clearing (but keeping the prepending of npm path from your
settings) fixes these issues.

Release Notes:

- Fixed issues with scoop installations of mise/volta

Co-authored-by: John Tur <john-tur@outlook.com>
2025-11-12 22:03:59 -06:00
Max Brunsfeld
c9e231043a Report discarded zeta predictions and indicate whether they were shown (#42403)
Release Notes:

- N/A

---------

Co-authored-by: Michael Sloan <mgsloan@gmail.com>
Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Agus Zubiaga <agus@zed.dev>
2025-11-12 16:41:04 -08:00
Richard Feldman
ede3b1dae6 Allow running concurrent unit evals (#42578)
Right now only one unit eval GitHub Action can be run at a time. This
permits them to run concurrently.

Release Notes:

- N/A
2025-11-12 22:04:38 +00:00
Agus Zubiaga
b0700a4625 zeta eval: --repeat flag (#42569)
Adds a `--repeat` flag to the zeta eval that runs each example as many
times as specified. Also makes the output nicer in a few ways.

Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Michael <michael@zed.dev>
2025-11-12 16:58:22 -05:00
Michael Sloan
f2a1eb9963 Make check-licenses script check that AGPL crates are not included in release binaries (#42571)
See discussion in #24657. Recalled that I had a stashed change for this,
so polished it up

Release Notes:

- N/A
2025-11-12 21:58:12 +00:00
Andrew Farkas
0c1ca2a45a Improve pane: reopen closed item to not reopen closed tabs (#42568)
Closes #42134

Release Notes:

- Improved `pane: reopen closed item` to not reopen closed tabs.
2025-11-12 21:08:41 +00:00
Conrad Irwin
8fd8b989a6 Use powershell for winget job steps (#42565)
Co-Authored-By: Claude

Release Notes:

- N/A
2025-11-12 13:41:20 -07:00
Lucas Parry
fd837b348f project_panel: Make natural sort ordering consistent with other apps (#41080)
The existing sorting approach when faced with `Dir1`, `dir2`, `Dir3`,
would only get as far as comparing the stems without numbers (`dir` and
`Dir`), and then the lowercase-first tie breaker in that function would
determine that `dir2` should come first, resulting in an undesirable
order of `dir2`, `Dir1`, `Dir3`.

This patch defers tie-breaking until it's determined that there's no
other difference in the strings outside of case to order on, at which
point we tie-break to provide a stable sort.

Natural number sorting is still preserved, and mixing different cases
alphabetically (as opposed to all lowercase alpha, followed by all
uppercase alpha) is preserved.

Closes #41080


Release Notes:

- Fixed: ProjectPanel sorting bug

Screenshots:

Before | After
----|---
<img width="237" height="325" alt="image"
src="https://github.com/user-attachments/assets/6e92e8c0-2172-4a8f-a058-484749da047b"
/> | <img width="239" height="325" alt="image"
src="https://github.com/user-attachments/assets/874ad29f-7238-4bfc-b89b-fd64f9b8889a"
/>

I'm having trouble reasoning through what was previously going wrong
with `docs` in the before screenshot, but it also seems to now appear
alphabetically where you'd expect it with this patch

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-13 02:04:40 +05:30
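The deferred case tie-break described above can be sketched as follows, assuming a simple tokenizer that splits names into lowercased text runs and numeric runs. This illustrates the idea only, not the project panel's actual comparator:

```rust
use std::cmp::Ordering;

// A name is compared as a sequence of parts: numeric runs compare
// numerically, text runs compare case-insensitively.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
enum Part {
    Num(u64),
    Text(String),
}

fn key(s: &str) -> Vec<Part> {
    let mut parts = Vec::new();
    let mut chars = s.chars().peekable();
    while let Some(&c) = chars.peek() {
        if c.is_ascii_digit() {
            let mut n: u64 = 0;
            while let Some(d) = chars.peek().and_then(|d| d.to_digit(10)) {
                n = n * 10 + d as u64;
                chars.next();
            }
            parts.push(Part::Num(n));
        } else {
            let mut t = String::new();
            while let Some(&d) = chars.peek() {
                if d.is_ascii_digit() {
                    break;
                }
                t.extend(d.to_lowercase());
                chars.next();
            }
            parts.push(Part::Text(t));
        }
    }
    parts
}

// Case-insensitive natural comparison; case is consulted only as a
// final tie-breaker so the sort stays stable and deterministic.
fn natural_cmp(a: &str, b: &str) -> Ordering {
    key(a).cmp(&key(b)).then_with(|| a.cmp(b))
}

fn main() {
    let mut names = vec!["dir2", "Dir3", "Dir1"];
    names.sort_by(|a, b| natural_cmp(a, b));
    // The desirable order from the PR, rather than `dir2, Dir1, Dir3`.
    assert_eq!(names, vec!["Dir1", "dir2", "Dir3"]);
    println!("ok");
}
```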
Piotr Osiewicz
6b239c3a9a Bump Rust to 1.91.1 (#42561)
Release Notes:

- N/A

---------

Co-authored-by: Julia Ryan <juliaryan3.14@gmail.com>
2025-11-12 20:27:04 +00:00
Piotr Osiewicz
73e5df6445 ci: Install pre-built cargo nextest instead of rolling our own (#42556)
Closes #ISSUE

Release Notes:

- N/A
2025-11-12 20:05:40 +00:00
KyleBarton
b403c199df Add additional comment for context in TypeScript highlights (#42564)
This adds additional comments which were left out from #42494 by
accident. Namely, it describes why we have additional custom
highlighting in `highlights.scm` for the Typescript grammar.

Release Notes:

- N/A
2025-11-12 19:59:10 +00:00
Konstantinos Lyrakis
cb4067723b Fix typo (#42559)
Fixed a typo in the docs

Release Notes:

- N/A
2025-11-12 21:07:34 +02:00
Ben Kunkle
1c625f8783 Fix JSON Schema documentation for code_actions_on_format (#42128)
Closes #ISSUE

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-11-12 18:33:02 +00:00
KyleBarton
4adec27a3d Implement pretty TypeScript errors (#42494)
Closes #7844

This change uses tree-sitter highlights as a method of showing
typescript errors prettily, keeping regex as simple as possible:

<img width="832" height="446" alt="Screenshot 2025-11-11 at 3 40 24 PM"
src="https://github.com/user-attachments/assets/0b3b6cf1-4d4d-4398-b89b-ef5ec0df87ec"
/>

It covers three main areas:

1. Diagnostics

Diagnostics are now rendered with language-aware typescript, by
providing the project's language registry.

2. Vtsls

The LSP provider for typescript now implements the
`diagnostic_message_to_markdown` function in the `LspAdapter` trait, so
as to provide Diagnostics with \`\`\`typescript...\`\`\`-style code
blocks for any selection of typescript longer than one word. In the
single-word case, it simply wraps with \`\`

3. Typescript's `highlights.scm`

`vtsls` doesn't provide strictly valid typescript in much of its
messaging. Rather, it returns a message with snippets of typescript
values which are invalid. Tree-sitter was not properly highlighting
these snippets because it was expecting key-value formats. For instance:
```
type foo = { foo: string; bar: string; baz: number[] }
```
is valid, whereas simply
```
{ foo: string; bar: string; baz: number[] }
```
is not.

Therefore, highlights.scm needed to be adjusted in order to
pattern-match on literal values that might be returned from the vtsls
diagnostics messages. This was done by a) identifying arrow functions on
their own, and b) augmenting the `statement_block` pattern matching in
order to match on values which were clearly object literals.

This approach may not be exhaustive - I'm happy to work on any
additional cases we might identify from `vtsls` here - but hopefully
demonstrates an extensible approach to making these messages look nice,
without taking on the technical burden of extensive regex.

Release Notes:

- Show pretty TypeScript errors with language-aware Markdown.
2025-11-12 10:32:46 -08:00
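The one-word versus multi-word wrapping rule from point 2 can be sketched like this. `wrap_snippet` is a hypothetical helper, not the actual `diagnostic_message_to_markdown` implementation:

```rust
// Wrap a TypeScript snippet from a diagnostic message for markdown
// rendering: multi-word snippets become fenced typescript code blocks,
// single words get inline backticks.
fn wrap_snippet(snippet: &str) -> String {
    // Build the triple backtick at runtime to keep this example readable.
    let fence = "`".repeat(3);
    if snippet.split_whitespace().count() > 1 {
        format!("{fence}typescript\n{snippet}\n{fence}")
    } else {
        format!("`{snippet}`")
    }
}

fn main() {
    let fence = "`".repeat(3);
    // One word: inline code span.
    assert_eq!(wrap_snippet("string"), "`string`");
    // Multiple words: fenced block tagged `typescript`.
    let block = wrap_snippet("{ foo: string; bar: number }");
    assert!(block.starts_with(&format!("{fence}typescript")));
    assert!(block.ends_with(&fence));
    println!("ok");
}
```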
Remco Smits
e8daab15ab debugger: Fix prevent creating breakpoints inside breakpoint editor (#42475)
Closes #38057

This PR ensures that you can no longer create breakpoints inside the
breakpoint editor (called `BreakpointPromptEditor` in code). As the
after video shows, no nested breakpoint editor is created anymore.

**Before**


https://github.com/user-attachments/assets/c4e02684-ac40-4176-bd19-f8f08e831dde

**After**


https://github.com/user-attachments/assets/f5b1176f-9545-4629-be12-05c64697a3de

Release Notes:

- Debugger: Prevent breakpoints from being created inside the breakpoint
editor
2025-11-12 18:18:10 +00:00
Ben Kunkle
6501b0c311 zeta eval: Improve determinism and debugging ergonomics (#42478)
- Improves the determinism of the search step for better cache
reusability
- Adds a `--cache force` mode that refuses to make any requests or
searches that aren't cached
- The structure of the `zeta-*` directories under `target` has been
rethought for convenience

Release Notes:

- N/A

---------

Co-authored-by: Agus <agus@zed.dev>
2025-11-12 18:16:13 +00:00
Ben Kunkle
6c0069ca98 zeta2: Improve error reporting and eval purity (#42470)
Closes #ISSUE

Improves error reporting for various failure modes of zeta2, including
failing to parse the `<old_text>`/`<new_text>` pattern, and the contents
of `<old_text>` failing to match.

Additionally, makes it so that evals are checked out into a worktree
with the _repo_ name instead of the _example_ name, in order to make
sure that the eval name has no influence on the models prediction. The
repo name worktrees are still namespaced by the example name like
`{example_name}/{repo_name}` to ensure evals pointing to the same repo
do not conflict.

Release Notes:

- N/A *or* Added/Fixed/Improved ...

---------

Co-authored-by: Agus <agus@zed.dev>
2025-11-12 12:52:11 -05:00
Conrad Irwin
c8930e07a3 Allow multiple parked threads in tests (#42551)
Closes #ISSUE

Release Notes:

- N/A

Co-Authored-By: Piotr <piotr@zed.dev>
2025-11-12 10:29:31 -07:00
Richard Feldman
ab352f669e Gracefully handle @mention-ing large files with no outlines (#42543)
Closes #32098

Release Notes:

- In the Agent panel, when `@mention`-ing large files with no outline,
their first 1KB is now added to context
2025-11-12 16:55:25 +00:00
Finn Evers
e79188261b fs: Fix wrong watcher trace log on Linux (#42544)
Follow-up to #40200

Release Notes:

- N/A
2025-11-12 16:26:53 +00:00
Marshall Bowers
ab62739605 collab: Remove unused methods from User model (#42536)
This PR removes some unused methods from the `User` model.

Release Notes:

- N/A
2025-11-12 15:38:16 +00:00
Marco Mihai Condrache
cfbde91833 terminal: Add setting for scroll multiplier (#39463)
Closes #5130

Release Notes:

- Added setting option for scroll multiplier of the terminal

---------

Signed-off-by: Marco Mihai Condrache <52580954+marcocondrache@users.noreply.github.com>
Co-authored-by: MrSubidubi <finn@zed.dev>
2025-11-12 16:38:06 +01:00
Vasyl Protsiv
80b32ddaad gpui: Add 'Nearest' scrolling strategy to 'UniformList' (#41844)
This PR introduces `Nearest` scrolling strategy to `UniformList`. This
is now used in completions menu and the picker to choose the appropriate
scrolling strategy depending on movement direction. Previously,
selecting the next element after the last visible item caused the menu
to scroll with `ScrollStrategy::Top`, which scrolled the whole page and
placed the next element at the top. This behavior is inconsistent,
because using `ScrollStrategy::Top` when moving up only scrolls one
element, not the whole page.


https://github.com/user-attachments/assets/ccfb238f-8f76-4a18-a18d-bbcb63340c5a

The solution is to introduce the `Nearest` scrolling strategy which will
internally choose the scrolling strategy depending on whether the new
selected item is below or above currently visible items. This ensures a
single-item scroll regardless of movement direction.
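The selection logic described above can be sketched like this (names are illustrative, not gpui's actual API):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum ScrollStrategy {
    Top,
    Bottom,
}

/// Resolve a hypothetical `Nearest` strategy: scroll just far enough to
/// reveal `target`, picking `Top` when the item sits above the visible
/// range and `Bottom` when it sits below.
fn resolve_nearest(target: usize, first_visible: usize, last_visible: usize) -> Option<ScrollStrategy> {
    if target < first_visible {
        Some(ScrollStrategy::Top)
    } else if target > last_visible {
        Some(ScrollStrategy::Bottom)
    } else {
        None // already visible: no scrolling needed
    }
}
```

Either branch moves the list by a single item when stepping past the edge of the viewport, which is what makes the behavior symmetric in both directions.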


https://github.com/user-attachments/assets/8502efb8-e2c0-4ab1-bd8d-93103841a9c4


I also noticed that some functions in the file have different logic
depending on `y_flipped`. This appears related to reversing the order of
elements in the list when the completion menu appears above the cursor.
This was a feature suggested in #11200 and implemented in #23446. It
looks like this feature was reverted in #27765, and there currently seems
to be no way to set `y_flipped` to `true`.

My understanding is that the opposite scroll strategy should be used if
`y_flipped`, but since there is no way to enable this feature to test it
and I don't know if the feature is ever going to be reintroduced, I
decided not to include it in this PR.


Release Notes:

- gpui: Add 'Nearest' scrolling strategy to 'UniformList'
2025-11-12 16:37:14 +01:00
Joseph T. Lyons
53652cdb3f Bump Zed to v0.214 (#42539)
Release Notes:

- N/A
2025-11-12 15:36:28 +00:00
Smit Barmase
1d75a9c4b2 Reverts "add OpenExcerptsSplit and dispatches on click" (#42538)
Partially reverts https://github.com/zed-industries/zed/pull/42283 to
restore the old behavior of excerpt clicking.

Release Notes:

- N/A
2025-11-12 20:47:29 +05:30
Richard Feldman
c5ab1d4679 Stop thread on Restore Checkpoint (#42537)
Closes #35142

In addition to cleaning up the terminals, also stops the conversation.

Release Notes:

- Restoring a checkpoint now stops the agent conversation.
2025-11-12 15:13:40 +00:00
Smit Barmase
1fdd95a9b3 Revert "editor: Improve multi-buffer header filename click to jump to the latest selection from that buffer" (#42534)
Reverts zed-industries/zed#42480

This panics on Nightly in cases where anchor might not be valid for that
snapshot. Taking it back before the cutoff.

Release Notes:

- N/A
2025-11-12 20:31:43 +05:30
localcc
49634f6041 Miniprofiler (#42385)
Release Notes:

- Added hang detection and a built-in performance profiler
2025-11-12 15:31:20 +01:00
Jakub Konka
2119ac42d7 git_panel: Fix partially staged changes not showing up (#42530)
Release Notes:

- N/A
2025-11-12 15:13:29 +01:00
Hans
e833d1af8d vim: Fix change surround adding unwanted spaces with quotes (#42431)
Update `Vim.change_surround` to ensure that there are no
overlapping edits by keeping track of where the open string range ends
and ensuring that the closing string range start does not go lower than
the open string range end.
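The clamping described above can be sketched on plain offset ranges (a simplification; the real `change_surround` works on editor anchors, and the function name is illustrative):

```rust
use std::ops::Range;

/// Clamp the closing-delimiter range so it never starts before the
/// opening delimiter's range ends, preventing overlapping edits.
fn clamp_close_range(open: &Range<usize>, close: Range<usize>) -> Range<usize> {
    let start = close.start.max(open.end);
    start..close.end.max(start)
}
```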

Closes #42316 

Release Notes:

- Fixed vim's change surround (`cs`) inserting unwanted spaces with quotes
by preventing overlapping edits

---------

Co-authored-by: dino <dinojoaocosta@gmail.com>
2025-11-12 13:04:24 +00:00
Ben Kunkle
7be76c74d6 Use set -x in script/clear-target-dir-if-larger-than (#42525)
Closes #ISSUE

Release Notes:

- N/A
2025-11-12 12:52:19 +00:00
Piotr Osiewicz
c2980cba18 remote_server: Bump fork to 0.4.0 (#42520)
Release Notes:

- N/A
2025-11-12 11:57:53 +00:00
Lena
a0be53a190 Wake up stalebot with an updated config (#42516)
- switch the bot from looking at the `bug/crash` labels which we don't
  use anymore to the Bug/Crash issue types which we do use
- shorten the period of time after which a bug is suspected to be stale
  (with our pace they can indeed be outdated in 60 days)
- extend the grace period for someone to come around and say nope, this
  problem still exists (people might be away for a couple of weeks).


Release Notes:

- N/A
2025-11-12 12:40:26 +01:00
Lena
70feff3c7a Add a one-off cleanup script for GH issue types (#42515)
Mainly for historical purposes and in case we want to do something similar enough in the future.

Release Notes:

- N/A
2025-11-12 11:40:31 +01:00
Finn Evers
f46990bac8 extensions_ui: Add XML extension suggestion for XML files (#42514)
Closes #41798

Release Notes:

- N/A
2025-11-12 10:12:02 +00:00
Lukas Wirth
78f466559a vim: Fix empty selections panic in insert_at_previous (#42504)
Fixes ZED-15C

Release Notes:

- N/A
2025-11-12 09:54:22 +00:00
CnsMaple
4f158c1983 docs: Update basedpyright settings examples (#42497)
The
[example](https://docs.basedpyright.com/latest/configuration/language-server-settings/#zed)
on the official website of basedpyright is correct.

Release Notes:

- Update basedpyright settings examples
2025-11-12 10:05:17 +01:00
Kirill Bulatov
ddf762e368 Revert "gpui: Unify the index_for_x methods (#42162)" (#42505)
This reverts commit 082b80ec89.

This broke clicking, e.g. in snippets like

```rs
let x = vec![
    1, 2, //
    3,
];
```

clicking between `2` and `,` is quite off now.

Release Notes:

- N/A
2025-11-12 08:24:06 +00:00
Lukas Wirth
f2cadad49a gpui: Fix RefCell already borrowed in WindowsPlatform::run (#42506)
Relands #42440 

Fixes ZED-1VX

Release Notes:

- N/A
2025-11-12 08:19:32 +00:00
Lukas Wirth
231d1b1d58 diagnostics: Close diagnosticsless buffers on refresh (#42503)
Release Notes:

- N/A
2025-11-12 08:11:50 +00:00
Andrew Farkas
2bcfc12951 Absolutize LSP and DAP paths more conservatively (#42482)
Fixes a regression caused by #42135 where LSP and DAP binaries weren't
being used from `PATH` env var

Now we absolutize the path if (path is relative AND (path has multiple
components OR path exists in worktree)).

- Relative paths with multiple components might not exist in the
worktree because they are ignored. Paths with a single component will at
least have an entry saying that they exist and are ignored.
- Relative paths with multiple components will never use the `PATH` env
var, so they can be safely absolutized
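The rule stated above reduces to one predicate; a minimal sketch (the `exists_in_worktree` flag stands in for the real worktree lookup, and the function name is an assumption):

```rust
use std::path::Path;

/// Absolutize an LSP/DAP binary path only when it is relative AND
/// (it has multiple components OR it exists in the worktree).
/// Bare single-component names fall through to the `PATH` env var lookup.
fn should_absolutize(path: &Path, exists_in_worktree: bool) -> bool {
    path.is_relative() && (path.components().count() > 1 || exists_in_worktree)
}
```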

Release Notes:

- N/A
2025-11-12 01:36:22 +00:00
Richard Feldman
cf6ae01d07 Show recommended models under normal category too (#42489)
<img width="395" height="444" alt="Screenshot 2025-11-11 at 4 04 57 PM"
src="https://github.com/user-attachments/assets/8da68721-6e33-4d01-810d-4aa1e2f3402d"
/>

Discussed with @danilo-leal and we're going with the "it's checked in
both places" design!

Closes #40910

Release Notes:

- Recommended AI models now still appear in their normal category in
addition to "Recommended:"
2025-11-11 22:10:46 +00:00
Miguel Cárdenas
2ad7ecbcf0 project_panel: Add auto_open settings (#40435)
- Based on #40234, and improvement of #40331

Release Notes:

- Added granular settings to control when files auto-open in the project
panel (project_panel.auto_open.on_create, on_paste, on_drop)

<img width="662" height="367" alt="Screenshot_2025-10-16_17-28-31"
src="https://github.com/user-attachments/assets/930a0a50-fc89-4c5d-8d05-b1fa2279de8b"
/>

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-12 03:23:40 +05:30
Lukas Wirth
854c6873c7 Revert "gpui: Fix RefCell already borrowed in WindowsPlatform::run" (#42481)
Reverts zed-industries/zed#42440

There are invalid temporaries in here keeping the borrows alive for
longer
2025-11-11 21:42:59 +00:00
Andrew Farkas
da94f898e6 Add support for multi-word snippet prefixes (#42398)
Supersedes #41126

Closes #39559, #35397, and #41426

Release Notes:

- Added support for multi-word snippet prefixes

---------

Co-authored-by: Agus Zubiaga <hi@aguz.me>
Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
Co-authored-by: Cole Miller <cole@zed.dev>
2025-11-11 16:34:25 -05:00
Richard Feldman
f62bfe1dfa Use enterprise_uri for settings when provided (#42485)
Closes #34945

Release Notes:

- Fixed `enterprise_uri` not being used for GitHub settings URL when
provided
2025-11-11 21:31:42 +00:00
Richard Feldman
a56693d9e8 Fix panic when opening an invalid URL (#42483)
Now instead of a panic we see this:

<img width="511" height="132" alt="Screenshot 2025-11-11 at 3 47 25 PM"
src="https://github.com/user-attachments/assets/48ba2f41-c5c0-4030-9331-0d3acfbf9461"
/>


Release Notes:

- Trying to open invalid URLs in a browser now shows an error instead of
panicking
2025-11-11 21:24:37 +00:00
Smit Barmase
b4b7a23c39 editor: Improve multi-buffer header filename click to jump to the latest selection from that buffer (#42480)
Closes https://github.com/zed-industries/zed/pull/42099

Regressed in https://github.com/zed-industries/zed/pull/42283

Release Notes:

- Clicking the multi-buffer header file name or the "Open file" button
now jumps to the most recent selection in that buffer, if one exists.
2025-11-12 02:04:37 +05:30
Richard Feldman
0d56ed7d91 Only send unit eval failures to Slack for cron job (#42479)
Release Notes:

- N/A
2025-11-11 20:19:34 +00:00
Lay Sheth
e01e0b83c4 Avoid panics in LSP store path handling (#42117)
Release Notes:

- Fixed incorrect journal paths handling
2025-11-11 20:51:57 +02:00
Richard Feldman
908ef03502 Split out cron and non-cron unit evals (#42472)
Release Notes:

- N/A

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-11-11 13:45:48 -05:00
feeiyu
5f4d0dbaab Fix circular reference issue around PopoverMenu (#42461)
Follow up to https://github.com/zed-industries/zed/pull/42351

Release Notes:

- N/A
2025-11-11 19:20:38 +02:00
brequet
c50f821613 docs: Fix typo in configuring-zed.md (#42454)
Fix a minor typo in the setting key: `auto_install_extension` should be
`auto_install_extensions`.

Release Notes:

- N/A
2025-11-11 17:58:18 +01:00
Marshall Bowers
7e491ac500 collab: Drop embeddings table (#42466)
This PR drops the `embeddings` table, as it is no longer used.

Release Notes:

- N/A
2025-11-11 11:44:04 -05:00
Richard Feldman
9e1e732db8 Use longer timeout on evals (#42465)
The GPT-5 ones in particular can take a long time!

Release Notes:

- N/A

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-11-11 16:37:20 +00:00
Lukas Wirth
83351283e4 settings: Skip terminal env vars with substitutions in vscode import (#42464)
Closes https://github.com/zed-industries/zed/issues/40547

Release Notes:

- Fixed vscode import creating faulty terminal env vars in terminal
settings
2025-11-11 16:15:12 +00:00
Marshall Bowers
03acbb7de3 collab: Remove unused embeddings queries and model (#42463)
This PR removes the queries and database model for embeddings, as
they're no longer used.

Release Notes:

- N/A
2025-11-11 16:13:59 +00:00
Richard Feldman
0268b17096 Add more secrets to eval workflows (#42459)
Release Notes:

- N/A
2025-11-11 16:07:57 +00:00
Danilo Leal
993919d360 agent_ui: Add icon button to trigger the @-mention completions menu (#42449)
Closes https://github.com/zed-industries/zed/issues/37087

This PR adds an icon button to the footer of the message editor, enabling
users to trigger and interact with the @-mention completions menu with the
mouse. This is a first step towards making other types of context you
can add in Zed's agent panel more discoverable. Next, I want to improve
the discoverability of images and selections, given that you wouldn't
necessarily know they work in Zed without a clear way to see them. But I
think that for now, this is enough to close the issue above, which had
lots of productive comments and discussion!

<img width="500" height="540" alt="Screenshot 2025-11-11 at 10  46 3@2x"
src="https://github.com/user-attachments/assets/fd028442-6f77-4153-bea1-c0b815da4ac6"
/>

Release Notes:

- agent: Added an icon button to the agent panel that allows triggering
the @-mention menu (for adding context) with the mouse.
2025-11-11 12:50:56 -03:00
Danilo Leal
8467a3dbd6 agent_ui: Allow uninstalling agent servers from the settings view (#42445)
This PR also adds items within the "Add Agent" menu to:
1. Add more agent servers from extensions, opening up the extensions
page with "Agent Servers" already filtered
2. Go to the agent server + ACP docs to learn more about them

I feel like having them there is a nice way to promote this knowledge
from within the product and have users learn more about them.

<img width="500" height="540" alt="Screenshot 2025-11-11 at 10  46 3@2x"
src="https://github.com/user-attachments/assets/9449df2e-1568-44d8-83ca-87cbb9eefdd2"
/>

Release Notes:

- agent: Enabled uninstalling agent servers from the agent panel's
settings view.
2025-11-11 12:47:08 -03:00
Bennet Bo Fenner
ee2e690657 agent_servers: Fix panic when setting default mode (#42452)
Closes ZED-35A

Release Notes:

- Fixed an issue where Zed would panic when trying to set the default
mode for ACP agents
2025-11-11 15:25:27 +00:00
tidely
28d019be2e ollama: Fix tool calling (#42275)
Closes #42303

Ollama added tool call identifiers
(https://github.com/ollama/ollama/pull/12956) in its latest version
[v0.12.10](https://github.com/ollama/ollama/releases/tag/v0.12.10). This
broke our JSON schema and made all tool calls fail.

This PR fixes the schema and uses the Ollama provided tool call
identifier when available. We remain backwards compatible and still use
our own identifier with older versions of Ollama. I added a `TODO` to
remove the `Option` around the new field when most users have updated
their installations to v0.12.10 or above.
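The backwards-compatible handling can be sketched as an `Option` with a local fallback (field and function names here are illustrative, not the crate's actual types):

```rust
/// A tool call as deserialized from Ollama. Newer versions (v0.12.10+)
/// supply an identifier; older versions do not.
struct ToolCall {
    id: Option<String>, // TODO: drop the Option once most users are on v0.12.10+
}

/// Use the Ollama-provided identifier when available, otherwise fall
/// back to a locally generated one for older installations.
fn effective_id(call: &ToolCall, local_counter: u64) -> String {
    call.id.clone().unwrap_or_else(|| format!("tool-call-{local_counter}"))
}
```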

Note to reviewer: The fix to this issue should likely get cherry-picked
into the next release, since Ollama becomes unusable as an agent without
it.

Release Notes:

- Fixed tool calling when using the latest version of Ollama
2025-11-11 16:10:47 +01:00
Lukas Wirth
a19d11184d remote: Add more context to error logging in wsl (#42450)
cc https://github.com/zed-industries/zed/issues/40892

Release Notes:

- N/A
2025-11-11 15:09:56 +00:00
Lukas Wirth
38e2c7aa66 editor: Hide file blame on editor cancel (ESC) (#42436)
Release Notes:

- N/A
2025-11-11 13:56:04 +00:00
liuyanghejerry
10d5d78ded Improve error messages on extension loading (#42266)
This pull request improves the error message when extension loading goes
wrong.

Before:

```
2025-11-08T21:16:02+08:00 ERROR [extension_host::extension_host] failed to load arkts extension.toml

Caused by:
    No such file or directory (os error 2)
```

Now:

```
2025-11-08T22:57:00+08:00 ERROR [extension_host::extension_host] failed to load arkts extension.toml, "/Users/user_name_placeholder/Library/Application Support/Zed/extensions/installed/arkts/extension.toml"

Caused by:
    No such file or directory (os error 2)

```
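The gist of the change is simply including the offending path in the message; a minimal sketch (the real code goes through `anyhow` error context, and this helper name is hypothetical):

```rust
/// Format an extension-load failure so the log shows which file on disk
/// could not be read, matching the "Now:" output above.
fn load_error(extension_id: &str, path: &str) -> String {
    format!("failed to load {extension_id} extension.toml, {path:?}")
}
```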

Release Notes:

- N/A
2025-11-11 15:45:03 +02:00
Terra
dfd7e85d5d Replace deprecated json.schemastore.org with www.schemastore.org (#42336)
Release Notes:

- N/A

According to
[microsoft/vscode#254689](https://github.com/microsoft/vscode/issues/254689),
the json.schemastore.org domain has been deprecated and should now use
www.schemastore.org (or schemastore.org) instead.

This PR updates all occurrences of the old domain within the Zed
codebase,
including code, documentation, and configuration files.
2025-11-11 15:43:25 +02:00
Lukas Wirth
b8fcd3ea04 gpui: Fix RefCell already borrowed in WindowsPlatform::run (#42440)
Fixes ZED-1VX

Release Notes:

- N/A
2025-11-11 13:43:06 +00:00
Libon
9be5e31aca Add clear recent files history command (#42176)
![2025-11-07
181619](https://github.com/user-attachments/assets/a9bef7a6-dc0b-4db2-85e5-2e1df7b21cfa)


Release Notes:

- Added "workspace: clear navigation history" command
2025-11-11 15:42:00 +02:00
Kirill Bulatov
58db38722b Find proper applicable chunks for visible ranges (#42422)
Release Notes:

- Fixed inlay hints not being queried for certain long-ranged jumps

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
Co-authored-by: Lukas Wirth <lukas@zed.dev>
2025-11-11 13:38:28 +00:00
Agus Zubiaga
f2ad0d716f zeta cli: Print log paths when running predict (#42396)
Release Notes:

- N/A

Co-authored-by: Michael Sloan <mgsloan@gmail.com>
Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-11-11 09:56:20 -03:00
Lukas Wirth
777b46533f auto_update: Ignore dir removal errors on windows (#42435)
The auto update helper already removes these when successful, so these
will always fail in the common case.

Additionally, this replaces a mutable const with a static, as otherwise
we'd rebuild the job list on every access.

Release Notes:

- N/A
2025-11-11 12:55:19 +00:00
Miguel Raz Guzmán Macedo
b3dd51560b docs: Fix broken links in docs with lychee (#42404)
Lychee is a [Rust based](https://lychee.cli.rs) async parallel link
checker.

I ran it against the codebase to suss out stale links and fixed those
up.

There are currently 2 remaining cases that I don't know how to resolve:

1. https://flathub.org/apps/dev.zed.Zed - nginx is giving a 502 bad
gateway
2.
https://github.com/zed-industries/zed/actions/workflows/ci.yml/badge.svg
- I don't want to mess with the CI pipeline in this PR.

Once again, I'll punt to the Docs Czar to see if this gets incorporated
into CI later.

---

## Running `lychee` locally:

```
cargo binstall -y lychee
lychee .
```

---
Release Notes:

- N/A

Signed-off-by: mrg <miguelraz@ciencias.unam.mx>
2025-11-11 13:55:02 +01:00
dDostalker
25489c2b7a Fix adding a Python virtual environment sometimes duplicating the "open this dictionary" string when modifying content (#41840)
Release Notes:

- Fixed an issue where adding a Python virtual environment could cause
duplicate "open this dictionary" entries

- Trigger condition:
Type `C:\`, delete `\`, then repeatedly add `\`.

- Videos

bug:

https://github.com/user-attachments/assets/f68008bb-9138-4451-a842-25b58574493b

fix:

https://github.com/user-attachments/assets/2913b8c2-adee-4275-af7e-e055fd78915f
2025-11-11 13:22:32 +01:00
Alexandre Anício
dc372e8a84 editor: Unfold buffers with selections on edit + Remove selections on buffer fold (#37953)
Closes #36376 

Problem:
Multi-cursor edits/selections in multi-buffers view were jumping to
incorrect locations after toggling buffer folds. When users created
multiple selections across different buffers in a multi-buffer view
(like project search results) and then folded one of the buffers,
subsequent text insertion would either:

1. Insert text at wrong locations (like at the top of the first unfolded
buffer)
2. Replace the entire content in some buffers instead of inserting at
the intended cursor positions
3. Create orphaned selections that caused corruption in the editing
experience

The issue seems to happen because when a buffer gets folded in a
multi-buffer view, the existing selections associated with that buffer
become invalid anchor points.

Solution:
1. Selection Cleanup on Buffer Folding
- Added `remove_selections_from_buffer()` method that filters out all
selections from a buffer when it gets folded
- This prevents invalid selections from corrupting subsequent editing
operations
- Includes edge case handling: if all selections are removed (all
buffers folded), it creates a default selection at the start of the
first buffer to prevent panics

2. Unfolding buffers before editing  
- Added `unfold_buffers_with_selections()` call in `handle_input()`
ensures buffers with active selections are automatically unfolded before
editing
- This helps in fixing an edge case (covered in the tests) where, if you
fold all buffers in a multi-buffer view, and try to insert text in a
selection, it gets unfolded before the edit happens. Without this, the
inserted text would override the entire buffer content.
- If we don't care about this edge case, we could remove this method. I
find it ok to add since we already trigger buffer unfolding after edits
with `Event::ExcerptsEdited`.
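The cleanup-with-fallback behavior from step 1 can be sketched like this (illustrative types; the real method filters editor selections by excerpt, not by a bare buffer id):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Selection {
    buffer_id: usize,
    offset: usize,
}

/// Drop all selections living in a folded buffer. If that removes every
/// selection (all buffers folded), fall back to a default selection at
/// the start of the first buffer so later edits have a valid anchor.
fn remove_selections_from_buffer(selections: &mut Vec<Selection>, folded_buffer_id: usize) {
    selections.retain(|s| s.buffer_id != folded_buffer_id);
    if selections.is_empty() {
        selections.push(Selection { buffer_id: 0, offset: 0 });
    }
}
```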

Release Notes:

- Fixed multi-cursor edits jumping to incorrect locations after toggling
buffer folds in multi-buffer views (e.g, project search)
- Multi-cursor selections now properly handle buffer folding/unfolding
operations
- Text insertion no longer occurs at the wrong positions when buffers
are folded during multi-cursor editing
- Eliminated content replacement bugs where entire buffer contents were
incorrectly overwritten
- Added safe fallback behavior when all buffers in a multi-buffer view
are folded

---------

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-11-11 16:59:44 +05:30
Lukas Wirth
1c4bb60209 gpui: Fix invalid unwrap in windows window creation (#42426)
Fixes ZED-34M

Release Notes:

- N/A
2025-11-11 10:55:19 +00:00
Dino
97100ce52f editor: Respect search case sensitivity when selecting occurrences (#42121)
Update how the editor's `select_*` methods work in order to respect the
`search.case_sensitive` setting, or to be overridden by the
`BufferSearchBar` search options.

- Update both the `SearchableItem` and `SearchableItemHandle` traits
  with a new `set_search_is_case_sensitive` method that allows callers
  to set the case sensitivity of the search
- Update the `BufferSearchBar` to leverage
  `SearchableItemHandle.set_search_is_case_sensitive` in order to sync
  its case sensitivity options with the searchable item
- Update the implementation of the `SearchableItem` trait for `Editor`
  so as to store the argument provided to the
  `set_search_is_case_sensitive` method
- Update the way search queries are built by `Editor` so as to rely on
  `SearchableItem.set_search_is_case_sensitive` argument, if not `None`,
  or default to the editor's `search.case_sensitive` settings
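The resolution order described in the bullets above boils down to "override wins, setting is the fallback"; a minimal sketch with illustrative names:

```rust
/// Resolve case sensitivity for the `select_*` occurrence actions: the
/// `BufferSearchBar` override takes precedence when set, otherwise fall
/// back to the `search.case_sensitive` setting.
fn resolve_case_sensitive(search_bar_override: Option<bool>, setting: bool) -> bool {
    search_bar_override.unwrap_or(setting)
}
```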

Closes #41070 

Release Notes:

- Improved the "Select Next Occurrence", "Select Previous Occurrence"
and "Select All Occurrences" actions in order to respect the case
sensitivity search settings

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-11-11 10:26:40 +00:00
Dino
dcf56144b5 vim: Sort whole buffer when no range is specified (#42376)
- Introduce a `default_range` field to `VimCommand`, to be optionally
  used when no range is specified for the command
- Update `VimCommand.parse` to take into consideration the
  `default_range`
- Introduce `CommandRange::buffer` to obtain the `CommandRange` which
  corresponds to the whole buffer
- Update the `VimCommand` definitions for both `sort` and `sort i` to
  default to the whole buffer when no range is specified
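The `default_range` idea above can be sketched with line numbers (a simplification; the real `CommandRange` carries richer position information):

```rust
use std::ops::RangeInclusive;

/// When a command such as `:sort` is given no explicit range, fall back
/// to its default range covering the whole buffer.
fn effective_range(explicit: Option<RangeInclusive<u32>>, last_line: u32) -> RangeInclusive<u32> {
    explicit.unwrap_or(1..=last_line)
}
```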

Closes #41750 

Release Notes:

- Improved vim's `:sort` command to sort the buffer's content when no
selection is used
2025-11-11 10:04:30 +00:00
232 changed files with 9263 additions and 2375 deletions

View File

@@ -1,4 +1,4 @@
# yaml-language-server: $schema=https://json.schemastore.org/github-issue-config.json
# yaml-language-server: $schema=https://www.schemastore.org/github-issue-config.json
blank_issues_enabled: false
contact_links:
- name: Feature Request

View File

@@ -56,14 +56,14 @@ jobs:
- id: set-package-name
name: after_release::publish_winget::set_package_name
run: |
if [ "${{ github.event.release.prerelease }}" == "true" ]; then
PACKAGE_NAME=ZedIndustries.Zed.Preview
else
PACKAGE_NAME=ZedIndustries.Zed
fi
if ("${{ github.event.release.prerelease }}" -eq "true") {
$PACKAGE_NAME = "ZedIndustries.Zed.Preview"
} else {
$PACKAGE_NAME = "ZedIndustries.Zed"
}
echo "PACKAGE_NAME=$PACKAGE_NAME" >> "$GITHUB_OUTPUT"
shell: bash -euxo pipefail {0}
echo "PACKAGE_NAME=$PACKAGE_NAME" >> $env:GITHUB_OUTPUT
shell: pwsh
- name: after_release::publish_winget::winget_releaser
uses: vedantmgoyal9/winget-releaser@19e706d4c9121098010096f9c495a70a7518b30f
with:
@@ -86,3 +86,19 @@ jobs:
SENTRY_ORG: zed-dev
SENTRY_PROJECT: zed
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
notify_on_failure:
needs:
- rebuild_releases_page
- post_to_discord
- publish_winget
- create_sentry_release
if: failure()
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: release::notify_on_failure::notify_slack
run: |-
curl -X POST -H 'Content-type: application/json'\
--data '{"text":"${{ github.workflow }} failed: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"}' "$SLACK_WEBHOOK"
shell: bash -euxo pipefail {0}
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_WORKFLOW_FAILURES }}

View File

@@ -1,6 +1,7 @@
# Generated from xtask::workflows::cherry_pick
# Rebuild with `cargo xtask workflows`.
name: cherry_pick
run-name: 'cherry_pick to ${{ inputs.channel }} #${{ inputs.pr_number }}'
on:
workflow_dispatch:
inputs:
@@ -16,6 +17,10 @@ on:
description: channel
required: true
type: string
pr_number:
description: pr_number
required: true
type: string
jobs:
run_cherry_pick:
runs-on: namespace-profile-2x4-ubuntu-2404

View File

@@ -15,13 +15,13 @@ jobs:
stale-issue-message: >
Hi there! 👋
We're working to clean up our issue tracker by closing older issues that might not be relevant anymore. If you are able to reproduce this issue in the latest version of Zed, please let us know by commenting on this issue, and we will keep it open. If you can't reproduce it, feel free to close the issue yourself. Otherwise, we'll close it in 7 days.
We're working to clean up our issue tracker by closing older bugs that might not be relevant anymore. If you are able to reproduce this issue in the latest version of Zed, please let us know by commenting on this issue, and it will be kept open. If you can't reproduce it, feel free to close the issue yourself. Otherwise, it will close automatically in 14 days.
Thanks for your help!
close-issue-message: "This issue was closed due to inactivity. If you're still experiencing this problem, please open a new issue with a link to this issue."
days-before-stale: 120
days-before-close: 7
any-of-issue-labels: "bug,panic / crash"
days-before-stale: 60
days-before-close: 14
only-issue-types: "Bug,Crash"
operations-per-run: 1000
ascending: true
enable-statistics: true

View File

@@ -29,9 +29,6 @@ jobs:
- name: steps::clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}
@@ -78,8 +75,7 @@ jobs:
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
uses: taiki-e/install-action@nextest
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 250
shell: bash -euxo pipefail {0}
@@ -112,9 +108,6 @@ jobs:
- name: steps::clippy
run: ./script/clippy.ps1
shell: pwsh
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: pwsh
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than.ps1 250
shell: pwsh
@@ -484,6 +477,20 @@ jobs:
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
notify_on_failure:
needs:
- upload_release_assets
- auto_release_preview
if: failure()
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: release::notify_on_failure::notify_slack
run: |-
curl -X POST -H 'Content-type: application/json'\
--data '{"text":"${{ github.workflow }} failed: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"}' "$SLACK_WEBHOOK"
shell: bash -euxo pipefail {0}
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_WORKFLOW_FAILURES }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true

View File

@@ -47,9 +47,6 @@ jobs:
- name: steps::clippy
run: ./script/clippy.ps1
shell: pwsh
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: pwsh
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than.ps1 250
shell: pwsh
@@ -493,3 +490,21 @@ jobs:
SENTRY_PROJECT: zed
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
timeout-minutes: 60
notify_on_failure:
needs:
- bundle_linux_aarch64
- bundle_linux_x86_64
- bundle_mac_aarch64
- bundle_mac_x86_64
- bundle_windows_aarch64
- bundle_windows_x86_64
if: failure()
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: release::notify_on_failure::notify_slack
run: |-
curl -X POST -H 'Content-type: application/json'\
--data '{"text":"${{ github.workflow }} failed: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}"}' "$SLACK_WEBHOOK"
shell: bash -euxo pipefail {0}
env:
SLACK_WEBHOOK: ${{ secrets.SLACK_WEBHOOK_WORKFLOW_FAILURES }}

View File

@@ -6,6 +6,9 @@ env:
CARGO_INCREMENTAL: '0'
RUST_BACKTRACE: '1'
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
GOOGLE_CLOUD_PROJECT: ${{ secrets.GOOGLE_CLOUD_PROJECT }}
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_EVAL_TELEMETRY: '1'
MODEL_NAME: ${{ inputs.model_name }}
@@ -48,6 +51,11 @@ jobs:
- name: run_agent_evals::agent_evals::run_eval
run: cargo run --package=eval -- --repetitions=8 --concurrency=1 --model "${MODEL_NAME}"
shell: bash -euxo pipefail {0}
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
GOOGLE_CLOUD_PROJECT: ${{ secrets.GOOGLE_CLOUD_PROJECT }}
- name: steps::cleanup_cargo_config
if: always()
run: |

View File

@@ -0,0 +1,68 @@
# Generated from xtask::workflows::run_cron_unit_evals
# Rebuild with `cargo xtask workflows`.
name: run_cron_unit_evals
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: '0'
RUST_BACKTRACE: '1'
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
on:
schedule:
- cron: 47 1 * * 2
workflow_dispatch: {}
jobs:
cron_unit_evals:
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::cache_rust_dependencies_namespace
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: steps::download_wasi_sdk
run: ./script/download-wasi-sdk
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
uses: taiki-e/install-action@nextest
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 250
shell: bash -euxo pipefail {0}
- name: ./script/run-unit-evals
run: ./script/run-unit-evals
shell: bash -euxo pipefail {0}
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
GOOGLE_CLOUD_PROJECT: ${{ secrets.GOOGLE_CLOUD_PROJECT }}
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
- name: run_agent_evals::cron_unit_evals::send_failure_to_slack
if: ${{ failure() }}
uses: slackapi/slack-github-action@b0fa283ad8fea605de13dc3f449259339835fc52
with:
method: chat.postMessage
token: ${{ secrets.SLACK_APP_ZED_UNIT_EVALS_BOT_TOKEN }}
payload: |
channel: C04UDRNNJFQ
text: "Unit Evals Failed: https://github.com/zed-industries/zed/actions/runs/${{ github.run_id }}"
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true

View File

@@ -113,9 +113,6 @@ jobs:
- name: steps::clippy
run: ./script/clippy.ps1
shell: pwsh
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: pwsh
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than.ps1 250
shell: pwsh
@@ -164,8 +161,7 @@ jobs:
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
uses: taiki-e/install-action@nextest
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 250
shell: bash -euxo pipefail {0}
@@ -200,9 +196,6 @@ jobs:
- name: steps::clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 300
shell: bash -euxo pipefail {0}


@@ -1,17 +1,26 @@
# Generated from xtask::workflows::run_agent_evals
# Generated from xtask::workflows::run_unit_evals
# Rebuild with `cargo xtask workflows`.
name: run_agent_evals
name: run_unit_evals
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: '0'
RUST_BACKTRACE: '1'
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_EVAL_TELEMETRY: '1'
MODEL_NAME: ${{ inputs.model_name }}
on:
schedule:
- cron: 47 1 * * 2
workflow_dispatch: {}
workflow_dispatch:
inputs:
model_name:
description: model_name
required: true
type: string
commit_sha:
description: commit_sha
required: true
type: string
jobs:
unit_evals:
run_unit_evals:
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
@@ -37,8 +46,7 @@ jobs:
run: ./script/download-wasi-sdk
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
run: cargo install cargo-nextest --locked
shell: bash -euxo pipefail {0}
uses: taiki-e/install-action@nextest
- name: steps::clear_target_dir_if_large
run: ./script/clear-target-dir-if-larger-than 250
shell: bash -euxo pipefail {0}
@@ -47,20 +55,15 @@ jobs:
shell: bash -euxo pipefail {0}
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
- name: run_agent_evals::unit_evals::send_failure_to_slack
if: ${{ failure() }}
uses: slackapi/slack-github-action@b0fa283ad8fea605de13dc3f449259339835fc52
with:
method: chat.postMessage
token: ${{ secrets.SLACK_APP_ZED_UNIT_EVALS_BOT_TOKEN }}
payload: |
channel: C04UDRNNJFQ
text: "Unit Evals Failed: https://github.com/zed-industries/zed/actions/runs/${{ github.run_id }}"
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
GOOGLE_CLOUD_PROJECT: ${{ secrets.GOOGLE_CLOUD_PROJECT }}
UNIT_EVAL_COMMIT: ${{ inputs.commit_sha }}
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.run_id }}
cancel-in-progress: true

Cargo.lock (generated)

@@ -1461,6 +1461,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "879b6c89592deb404ba4dc0ae6b58ffd1795c78991cbb5b8bc441c48a070440d"
dependencies = [
"aws-lc-sys",
"untrusted 0.7.1",
"zeroize",
]
@@ -6248,7 +6249,7 @@ dependencies = [
"futures-core",
"futures-sink",
"nanorand",
"spin",
"spin 0.9.8",
]
[[package]]
@@ -6359,9 +6360,9 @@ checksum = "aa9a19cbb55df58761df49b23516a86d432839add4af60fc256da840f66ed35b"
[[package]]
name = "fork"
version = "0.2.0"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "05dc8b302e04a1c27f4fe694439ef0f29779ca4edc205b7b58f00db04e29656d"
checksum = "30268f1eefccc9d72f43692e8b89e659aeb52e84016c3b32b6e7e9f1c8f38f94"
dependencies = [
"libc",
]
@@ -7287,6 +7288,7 @@ dependencies = [
"calloop",
"calloop-wayland-source",
"cbindgen",
"circular-buffer",
"cocoa 0.26.0",
"cocoa-foundation 0.2.0",
"collections",
@@ -7342,6 +7344,7 @@ dependencies = [
"slotmap",
"smallvec",
"smol",
"spin 0.10.0",
"stacksafe",
"strum 0.27.2",
"sum_tree",
@@ -8656,23 +8659,25 @@ dependencies = [
[[package]]
name = "jupyter-protocol"
version = "0.6.0"
source = "git+https://github.com/ConradIrwin/runtimed?rev=7130c804216b6914355d15d0b91ea91f6babd734#7130c804216b6914355d15d0b91ea91f6babd734"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d9c047f6b5e551563af2ddb13dafed833f0ec5a5b0f9621d5ad740a9ff1e1095"
dependencies = [
"anyhow",
"async-trait",
"bytes 1.10.1",
"chrono",
"futures 0.3.31",
"serde",
"serde_json",
"thiserror 2.0.17",
"uuid",
]
[[package]]
name = "jupyter-websocket-client"
version = "0.9.0"
source = "git+https://github.com/ConradIrwin/runtimed?rev=7130c804216b6914355d15d0b91ea91f6babd734#7130c804216b6914355d15d0b91ea91f6babd734"
version = "0.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4197fa926a6b0bddfed7377d9fed3d00a0dec44a1501e020097bd26604699cae"
dependencies = [
"anyhow",
"async-trait",
@@ -8681,6 +8686,7 @@ dependencies = [
"jupyter-protocol",
"serde",
"serde_json",
"tokio",
"url",
"uuid",
]
@@ -8870,6 +8876,7 @@ dependencies = [
"icons",
"image",
"log",
"open_ai",
"open_router",
"parking_lot",
"proto",
@@ -9072,7 +9079,7 @@ version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe"
dependencies = [
"spin",
"spin 0.9.8",
]
[[package]]
@@ -10014,6 +10021,18 @@ version = "0.2.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "68354c5c6bd36d73ff3feceb05efa59b6acb7626617f4962be322a825e61f79a"
[[package]]
name = "miniprofiler_ui"
version = "0.1.0"
dependencies = [
"gpui",
"serde_json",
"smol",
"util",
"workspace",
"zed_actions",
]
[[package]]
name = "miniz_oxide"
version = "0.8.9"
@@ -10218,8 +10237,9 @@ dependencies = [
[[package]]
name = "nbformat"
version = "0.10.0"
source = "git+https://github.com/ConradIrwin/runtimed?rev=7130c804216b6914355d15d0b91ea91f6babd734#7130c804216b6914355d15d0b91ea91f6babd734"
version = "0.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89c7229d604d847227002715e1235cd84e81919285d904ccb290a42ecc409348"
dependencies = [
"anyhow",
"chrono",
@@ -10505,11 +10525,10 @@ dependencies = [
[[package]]
name = "num-bigint-dig"
version = "0.8.4"
version = "0.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "dc84195820f291c7697304f3cbdadd1cb7199c0efc917ff5eafd71225c136151"
checksum = "e661dda6640fad38e827a6d4a310ff4763082116fe217f279885c97f511bb0b7"
dependencies = [
"byteorder",
"lazy_static",
"libm",
"num-integer",
@@ -11012,6 +11031,7 @@ dependencies = [
"serde_json",
"settings",
"strum 0.27.2",
"thiserror 2.0.17",
]
[[package]]
@@ -13051,6 +13071,23 @@ dependencies = [
"zlog",
]
[[package]]
name = "project_benchmarks"
version = "0.1.0"
dependencies = [
"anyhow",
"clap",
"client",
"futures 0.3.31",
"gpui",
"http_client",
"language",
"node_runtime",
"project",
"settings",
"watch",
]
[[package]]
name = "project_panel"
version = "0.1.0"
@@ -13078,6 +13115,7 @@ dependencies = [
"settings",
"smallvec",
"telemetry",
"tempfile",
"theme",
"ui",
"util",
@@ -14241,7 +14279,7 @@ dependencies = [
"cfg-if",
"getrandom 0.2.16",
"libc",
"untrusted",
"untrusted 0.9.0",
"windows-sys 0.52.0",
]
@@ -14370,9 +14408,9 @@ dependencies = [
[[package]]
name = "rsa"
version = "0.9.8"
version = "0.9.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78928ac1ed176a5ca1d17e578a1825f3d81ca54cf41053a592584b020cfd691b"
checksum = "40a0376c50d0358279d9d643e4bf7b7be212f1f4ff1da9070a7b54d22ef75c88"
dependencies = [
"const-oid",
"digest",
@@ -14422,25 +14460,26 @@ dependencies = [
[[package]]
name = "runtimelib"
version = "0.25.0"
source = "git+https://github.com/ConradIrwin/runtimed?rev=7130c804216b6914355d15d0b91ea91f6babd734#7130c804216b6914355d15d0b91ea91f6babd734"
version = "0.30.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "481b48894073a0096f28cbe9860af01fc1b861e55b3bc96afafc645ee3de62dc"
dependencies = [
"anyhow",
"async-dispatcher",
"async-std",
"aws-lc-rs",
"base64 0.22.1",
"bytes 1.10.1",
"chrono",
"data-encoding",
"dirs 5.0.1",
"dirs 6.0.0",
"futures 0.3.31",
"glob",
"jupyter-protocol",
"ring",
"serde",
"serde_json",
"shellexpand 3.1.1",
"smol",
"thiserror 2.0.17",
"uuid",
"zeromq",
]
@@ -14708,7 +14747,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b6275d1ee7a1cd780b64aca7726599a1dbc893b1e64144529e55c3c2f745765"
dependencies = [
"ring",
"untrusted",
"untrusted 0.9.0",
]
[[package]]
@@ -14720,7 +14759,7 @@ dependencies = [
"aws-lc-rs",
"ring",
"rustls-pki-types",
"untrusted",
"untrusted 0.9.0",
]
[[package]]
@@ -14950,7 +14989,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "da046153aa2352493d6cb7da4b6e5c0c057d8a1d0a9aa8560baffdd945acd414"
dependencies = [
"ring",
"untrusted",
"untrusted 0.9.0",
]
[[package]]
@@ -15853,6 +15892,15 @@ dependencies = [
"lock_api",
]
[[package]]
name = "spin"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d5fe4ccb98d9c292d56fec89a5e07da7fc4cf0dc11e156b41793132775d3e591"
dependencies = [
"lock_api",
]
[[package]]
name = "spirv"
version = "0.3.0+sdk-1.3.268.0"
@@ -18534,6 +18582,12 @@ version = "0.2.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "673aac59facbab8a9007c7f6108d11f63b603f7cabff99fabf650fea5c32b861"
[[package]]
name = "untrusted"
version = "0.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a156c684c91ea7d62626509bce3cb4e1d9ed5c4d978f7b4352658f96a4c26b4a"
[[package]]
name = "untrusted"
version = "0.9.0"
@@ -21146,7 +21200,7 @@ dependencies = [
[[package]]
name = "zed"
version = "0.213.0"
version = "0.214.0"
dependencies = [
"acp_tools",
"activity_indicator",
@@ -21164,6 +21218,7 @@ dependencies = [
"breadcrumbs",
"call",
"channel",
"chrono",
"clap",
"cli",
"client",
@@ -21221,6 +21276,7 @@ dependencies = [
"menu",
"migrator",
"mimalloc",
"miniprofiler_ui",
"nc",
"nix 0.29.0",
"node_runtime",
@@ -21693,6 +21749,7 @@ dependencies = [
"serde_json",
"settings",
"smol",
"strsim",
"thiserror 2.0.17",
"util",
"uuid",


@@ -110,6 +110,7 @@ members = [
"crates/menu",
"crates/migrator",
"crates/mistral",
"crates/miniprofiler_ui",
"crates/multi_buffer",
"crates/nc",
"crates/net",
@@ -126,6 +127,7 @@ members = [
"crates/picker",
"crates/prettier",
"crates/project",
"crates/project_benchmarks",
"crates/project_panel",
"crates/project_symbols",
"crates/prompt_store",
@@ -341,6 +343,7 @@ menu = { path = "crates/menu" }
migrator = { path = "crates/migrator" }
mistral = { path = "crates/mistral" }
multi_buffer = { path = "crates/multi_buffer" }
miniprofiler_ui = { path = "crates/miniprofiler_ui" }
nc = { path = "crates/nc" }
net = { path = "crates/net" }
node_runtime = { path = "crates/node_runtime" }
@@ -482,7 +485,7 @@ cfg-if = "1.0.3"
chrono = { version = "0.4", features = ["serde"] }
ciborium = "0.2"
circular-buffer = "1.0"
clap = { version = "4.4", features = ["derive"] }
clap = { version = "4.4", features = ["derive", "wrap_help"] }
cocoa = "=0.26.0"
cocoa-foundation = "=0.2.0"
convert_case = "0.8.0"
@@ -504,7 +507,7 @@ emojis = "0.6.1"
env_logger = "0.11"
exec = "0.3.1"
fancy-regex = "0.14.0"
fork = "0.2.0"
fork = "0.4.0"
futures = "0.3"
futures-batch = "0.6.1"
futures-lite = "1.13"
@@ -531,8 +534,8 @@ itertools = "0.14.0"
json_dotpath = "1.1"
jsonschema = "0.30.0"
jsonwebtoken = "9.3"
jupyter-protocol = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
jupyter-websocket-client = { git = "https://github.com/ConradIrwin/runtimed" ,rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
jupyter-protocol = "0.10.0"
jupyter-websocket-client = "0.15.0"
libc = "0.2"
libsqlite3-sys = { version = "0.30.1", features = ["bundled"] }
linkify = "0.10.0"
@@ -545,7 +548,7 @@ minidumper = "0.8"
moka = { version = "0.12.10", features = ["sync"] }
naga = { version = "25.0", features = ["wgsl-in"] }
nanoid = "0.4"
nbformat = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
nbformat = "0.15.0"
nix = "0.29"
num-format = "0.4.4"
num-traits = "0.2"
@@ -616,8 +619,8 @@ reqwest = { git = "https://github.com/zed-industries/reqwest.git", rev = "c15662
"stream",
], package = "zed-reqwest", version = "0.12.15-zed" }
rsa = "0.9.6"
runtimelib = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734", default-features = false, features = [
"async-dispatcher-runtime",
runtimelib = { version = "0.30.0", default-features = false, features = [
"async-dispatcher-runtime", "aws-lc-rs"
] }
rust-embed = { version = "8.4", features = ["include-exclude"] }
rustc-hash = "2.1.0"


@@ -1,6 +1,6 @@
# syntax = docker/dockerfile:1.2
FROM rust:1.90-bookworm as builder
FROM rust:1.91.1-bookworm as builder
WORKDIR app
COPY . .

assets/icons/at_sign.svg Normal file

@@ -0,0 +1,4 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M8.00156 10.3996C9.32705 10.3996 10.4016 9.32509 10.4016 7.99961C10.4016 6.67413 9.32705 5.59961 8.00156 5.59961C6.67608 5.59961 5.60156 6.67413 5.60156 7.99961C5.60156 9.32509 6.67608 10.3996 8.00156 10.3996Z" stroke="black" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M10.4 5.6V8.6C10.4 9.07739 10.5896 9.53523 10.9272 9.8728C11.2648 10.2104 11.7226 10.4 12.2 10.4C12.6774 10.4 13.1352 10.2104 13.4728 9.8728C13.8104 9.53523 14 9.07739 14 8.6V8C14 6.64839 13.5436 5.33636 12.7048 4.27651C11.8661 3.21665 10.694 2.47105 9.37852 2.16051C8.06306 1.84997 6.68129 1.99269 5.45707 2.56554C4.23285 3.13838 3.23791 4.1078 2.63344 5.31672C2.02898 6.52565 1.85041 7.90325 2.12667 9.22633C2.40292 10.5494 3.11782 11.7405 4.15552 12.6065C5.19323 13.4726 6.49295 13.9629 7.84411 13.998C9.19527 14.0331 10.5187 13.611 11.6 12.8" stroke="black" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>



@@ -616,9 +616,13 @@
"search": {
// Whether to show the project search button in the status bar.
"button": true,
// Whether to only match on whole words.
"whole_word": false,
// Whether to match case sensitively.
"case_sensitive": false,
// Whether to include gitignored files in search results.
"include_ignored": false,
// Whether to interpret the search query as a regular expression.
"regex": false,
// Whether to center the cursor on each search match when navigating.
"center_on_match": false
@@ -744,8 +748,15 @@
"hide_root": false,
// Whether to hide the hidden entries in the project panel.
"hide_hidden": false,
// Whether to automatically open files when pasting them in the project panel.
"open_file_on_paste": true
// Settings for automatically opening files.
"auto_open": {
// Whether to automatically open newly created files in the editor.
"on_create": true,
// Whether to automatically open files after pasting or duplicating them.
"on_paste": true,
// Whether to automatically open files dropped from external sources.
"on_drop": true
}
},
"outline_panel": {
// Whether to show the outline panel button in the status bar
@@ -1539,6 +1550,8 @@
// Default: 10_000, maximum: 100_000 (all bigger values set will be treated as 100_000), 0 disables the scrolling.
// Existing terminals will not pick up this change until they are recreated.
"max_scroll_history_lines": 10000,
// The multiplier for scrolling speed in the terminal.
"scroll_multiplier": 1.0,
// The minimum APCA perceptual contrast between foreground and background colors.
// APCA (Accessible Perceptual Contrast Algorithm) is more accurate than WCAG 2.x,
// especially for dark mode. Values range from 0 to 106.


@@ -1866,10 +1866,14 @@ impl AcpThread {
.checkpoint
.as_ref()
.map(|c| c.git_checkpoint.clone());
// Cancel any in-progress generation before restoring
let cancel_task = self.cancel(cx);
let rewind = self.rewind(id.clone(), cx);
let git_store = self.project.read(cx).git_store().clone();
cx.spawn(async move |_, cx| {
cancel_task.await;
rewind.await?;
if let Some(checkpoint) = checkpoint {
git_store
@@ -1894,9 +1898,25 @@ impl AcpThread {
cx.update(|cx| truncate.run(id.clone(), cx))?.await?;
this.update(cx, |this, cx| {
if let Some((ix, _)) = this.user_message_mut(&id) {
// Collect all terminals from entries that will be removed
let terminals_to_remove: Vec<acp::TerminalId> = this.entries[ix..]
.iter()
.flat_map(|entry| entry.terminals())
.filter_map(|terminal| terminal.read(cx).id().clone().into())
.collect();
let range = ix..this.entries.len();
this.entries.truncate(ix);
cx.emit(AcpThreadEvent::EntriesRemoved(range));
// Kill and remove the terminals
for terminal_id in terminals_to_remove {
if let Some(terminal) = this.terminals.remove(&terminal_id) {
terminal.update(cx, |terminal, cx| {
terminal.kill(cx);
});
}
}
}
this.action_log().update(cx, |action_log, cx| {
action_log.reject_all_edits(Some(telemetry), cx)
@@ -3803,4 +3823,314 @@ mod tests {
}
});
}
/// Tests that restoring a checkpoint properly cleans up terminals that were
/// created after that checkpoint, and cancels any in-progress generation.
///
/// Reproduces issue #35142: When a checkpoint is restored, any terminal processes
/// that were started after that checkpoint should be terminated, and any in-progress
/// AI generation should be canceled.
#[gpui::test]
async fn test_restore_checkpoint_kills_terminal(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
let project = Project::test(fs, [], cx).await;
let connection = Rc::new(FakeAgentConnection::new());
let thread = cx
.update(|cx| connection.new_thread(project, Path::new(path!("/test")), cx))
.await
.unwrap();
// Send first user message to create a checkpoint
cx.update(|cx| {
thread.update(cx, |thread, cx| {
thread.send(vec!["first message".into()], cx)
})
})
.await
.unwrap();
// Send second message (creates another checkpoint) - we'll restore to this one
cx.update(|cx| {
thread.update(cx, |thread, cx| {
thread.send(vec!["second message".into()], cx)
})
})
.await
.unwrap();
// Create 2 terminals BEFORE the checkpoint that have completed running
let terminal_id_1 = acp::TerminalId(uuid::Uuid::new_v4().to_string().into());
let mock_terminal_1 = cx.new(|cx| {
let builder = ::terminal::TerminalBuilder::new_display_only(
::terminal::terminal_settings::CursorShape::default(),
::terminal::terminal_settings::AlternateScroll::On,
None,
0,
)
.unwrap();
builder.subscribe(cx)
});
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Created {
terminal_id: terminal_id_1.clone(),
label: "echo 'first'".to_string(),
cwd: Some(PathBuf::from("/test")),
output_byte_limit: None,
terminal: mock_terminal_1.clone(),
},
cx,
);
});
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Output {
terminal_id: terminal_id_1.clone(),
data: b"first\n".to_vec(),
},
cx,
);
});
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Exit {
terminal_id: terminal_id_1.clone(),
status: acp::TerminalExitStatus {
exit_code: Some(0),
signal: None,
meta: None,
},
},
cx,
);
});
let terminal_id_2 = acp::TerminalId(uuid::Uuid::new_v4().to_string().into());
let mock_terminal_2 = cx.new(|cx| {
let builder = ::terminal::TerminalBuilder::new_display_only(
::terminal::terminal_settings::CursorShape::default(),
::terminal::terminal_settings::AlternateScroll::On,
None,
0,
)
.unwrap();
builder.subscribe(cx)
});
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Created {
terminal_id: terminal_id_2.clone(),
label: "echo 'second'".to_string(),
cwd: Some(PathBuf::from("/test")),
output_byte_limit: None,
terminal: mock_terminal_2.clone(),
},
cx,
);
});
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Output {
terminal_id: terminal_id_2.clone(),
data: b"second\n".to_vec(),
},
cx,
);
});
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Exit {
terminal_id: terminal_id_2.clone(),
status: acp::TerminalExitStatus {
exit_code: Some(0),
signal: None,
meta: None,
},
},
cx,
);
});
// Get the second message ID to restore to
let second_message_id = thread.read_with(cx, |thread, _| {
// At this point we have:
// - Index 0: First user message (with checkpoint)
// - Index 1: Second user message (with checkpoint)
// No assistant responses because FakeAgentConnection just returns EndTurn
let AgentThreadEntry::UserMessage(message) = &thread.entries[1] else {
panic!("expected user message at index 1");
};
message.id.clone().unwrap()
});
// Create a terminal AFTER the checkpoint we'll restore to.
// This simulates the AI agent starting a long-running terminal command.
let terminal_id = acp::TerminalId(uuid::Uuid::new_v4().to_string().into());
let mock_terminal = cx.new(|cx| {
let builder = ::terminal::TerminalBuilder::new_display_only(
::terminal::terminal_settings::CursorShape::default(),
::terminal::terminal_settings::AlternateScroll::On,
None,
0,
)
.unwrap();
builder.subscribe(cx)
});
// Register the terminal as created
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Created {
terminal_id: terminal_id.clone(),
label: "sleep 1000".to_string(),
cwd: Some(PathBuf::from("/test")),
output_byte_limit: None,
terminal: mock_terminal.clone(),
},
cx,
);
});
// Simulate the terminal producing output (still running)
thread.update(cx, |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Output {
terminal_id: terminal_id.clone(),
data: b"terminal is running...\n".to_vec(),
},
cx,
);
});
// Create a tool call entry that references this terminal
// This represents the agent requesting a terminal command
thread.update(cx, |thread, cx| {
thread
.handle_session_update(
acp::SessionUpdate::ToolCall(acp::ToolCall {
id: acp::ToolCallId("terminal-tool-1".into()),
title: "Running command".into(),
kind: acp::ToolKind::Execute,
status: acp::ToolCallStatus::InProgress,
content: vec![acp::ToolCallContent::Terminal {
terminal_id: terminal_id.clone(),
}],
locations: vec![],
raw_input: Some(
serde_json::json!({"command": "sleep 1000", "cd": "/test"}),
),
raw_output: None,
meta: None,
}),
cx,
)
.unwrap();
});
// Verify terminal exists and is in the thread
let terminal_exists_before =
thread.read_with(cx, |thread, _| thread.terminals.contains_key(&terminal_id));
assert!(
terminal_exists_before,
"Terminal should exist before checkpoint restore"
);
// Verify the terminal's underlying task is still running (not completed)
let terminal_running_before = thread.read_with(cx, |thread, _cx| {
let terminal_entity = thread.terminals.get(&terminal_id).unwrap();
terminal_entity.read_with(cx, |term, _cx| {
term.output().is_none() // output is None means it's still running
})
});
assert!(
terminal_running_before,
"Terminal should be running before checkpoint restore"
);
// Verify we have the expected entries before restore
let entry_count_before = thread.read_with(cx, |thread, _| thread.entries.len());
assert!(
entry_count_before > 1,
"Should have multiple entries before restore"
);
// Restore the checkpoint to the second message.
// This should:
// 1. Cancel any in-progress generation (via the cancel() call)
// 2. Remove the terminal that was created after that point
thread
.update(cx, |thread, cx| {
thread.restore_checkpoint(second_message_id, cx)
})
.await
.unwrap();
// Verify that no send_task is in progress after restore
// (cancel() clears the send_task)
let has_send_task_after = thread.read_with(cx, |thread, _| thread.send_task.is_some());
assert!(
!has_send_task_after,
"Should not have a send_task after restore (cancel should have cleared it)"
);
// Verify the entries were truncated (restoring to index 1 truncates at 1, keeping only index 0)
let entry_count = thread.read_with(cx, |thread, _| thread.entries.len());
assert_eq!(
entry_count, 1,
"Should have 1 entry after restore (only the first user message)"
);
// Verify the 2 completed terminals from before the checkpoint still exist
let terminal_1_exists = thread.read_with(cx, |thread, _| {
thread.terminals.contains_key(&terminal_id_1)
});
assert!(
terminal_1_exists,
"Terminal 1 (from before checkpoint) should still exist"
);
let terminal_2_exists = thread.read_with(cx, |thread, _| {
thread.terminals.contains_key(&terminal_id_2)
});
assert!(
terminal_2_exists,
"Terminal 2 (from before checkpoint) should still exist"
);
// Verify they're still in completed state
let terminal_1_completed = thread.read_with(cx, |thread, _cx| {
let terminal_entity = thread.terminals.get(&terminal_id_1).unwrap();
terminal_entity.read_with(cx, |term, _cx| term.output().is_some())
});
assert!(terminal_1_completed, "Terminal 1 should still be completed");
let terminal_2_completed = thread.read_with(cx, |thread, _cx| {
let terminal_entity = thread.terminals.get(&terminal_id_2).unwrap();
terminal_entity.read_with(cx, |term, _cx| term.output().is_some())
});
assert!(terminal_2_completed, "Terminal 2 should still be completed");
// Verify the running terminal (created after checkpoint) was removed
let terminal_3_exists =
thread.read_with(cx, |thread, _| thread.terminals.contains_key(&terminal_id));
assert!(
!terminal_3_exists,
"Terminal 3 (created after checkpoint) should have been removed"
);
// Verify total count is 2 (the two from before the checkpoint)
let terminal_count = thread.read_with(cx, |thread, _| thread.terminals.len());
assert_eq!(
terminal_count, 2,
"Should have exactly 2 terminals (the completed ones from before checkpoint)"
);
}
}


@@ -133,9 +133,7 @@ impl LanguageModels {
for model in provider.provided_models(cx) {
let model_info = Self::map_language_model_to_info(&model, &provider);
let model_id = model_info.id.clone();
if !recommended_models.contains(&(model.provider_id(), model.id())) {
provider_models.push(model_info);
}
provider_models.push(model_info);
models.insert(model_id, model);
}
if !provider_models.is_empty() {


@@ -15,12 +15,14 @@ const SEPARATOR_MARKER: &str = "=======";
const REPLACE_MARKER: &str = ">>>>>>> REPLACE";
const SONNET_PARAMETER_INVOKE_1: &str = "</parameter>\n</invoke>";
const SONNET_PARAMETER_INVOKE_2: &str = "</parameter></invoke>";
const END_TAGS: [&str; 5] = [
const SONNET_PARAMETER_INVOKE_3: &str = "</parameter>";
const END_TAGS: [&str; 6] = [
OLD_TEXT_END_TAG,
NEW_TEXT_END_TAG,
EDITS_END_TAG,
SONNET_PARAMETER_INVOKE_1, // Remove this after switching to streaming tool call
SONNET_PARAMETER_INVOKE_1, // Remove these after switching to streaming tool call
SONNET_PARAMETER_INVOKE_2,
SONNET_PARAMETER_INVOKE_3,
];
#[derive(Debug)]
@@ -567,21 +569,29 @@ mod tests {
parse_random_chunks(
indoc! {"
<old_text>some text</old_text><new_text>updated text</parameter></invoke>
<old_text>more text</old_text><new_text>upd</parameter></new_text>
"},
&mut parser,
&mut rng
),
vec![Edit {
old_text: "some text".to_string(),
new_text: "updated text".to_string(),
line_hint: None,
},]
vec![
Edit {
old_text: "some text".to_string(),
new_text: "updated text".to_string(),
line_hint: None,
},
Edit {
old_text: "more text".to_string(),
new_text: "upd".to_string(),
line_hint: None,
},
]
);
assert_eq!(
parser.finish(),
EditParserMetrics {
tags: 2,
mismatched_tags: 1
tags: 4,
mismatched_tags: 2
}
);
}


@@ -44,6 +44,25 @@ pub async fn get_buffer_content_or_outline(
.collect::<Vec<_>>()
})?;
// If no outline exists, fall back to first 1KB so the agent has some context
if outline_items.is_empty() {
let text = buffer.read_with(cx, |buffer, _| {
let snapshot = buffer.snapshot();
let len = snapshot.len().min(1024);
let content = snapshot.text_for_range(0..len).collect::<String>();
if let Some(path) = path {
format!("# First 1KB of {path} (file too large to show full content, and no outline available)\n\n{content}")
} else {
format!("# First 1KB of file (file too large to show full content, and no outline available)\n\n{content}")
}
})?;
return Ok(BufferContent {
text,
is_outline: false,
});
}
let outline_text = render_outline(outline_items, None, 0, usize::MAX).await?;
let text = if let Some(path) = path {
@@ -140,3 +159,62 @@ fn render_entries(
entries_rendered
}
#[cfg(test)]
mod tests {
use super::*;
use fs::FakeFs;
use gpui::TestAppContext;
use project::Project;
use settings::SettingsStore;
#[gpui::test]
async fn test_large_file_fallback_to_subset(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings = SettingsStore::test(cx);
cx.set_global(settings);
});
let fs = FakeFs::new(cx.executor());
let project = Project::test(fs, [], cx).await;
let content = "A".repeat(100 * 1024); // 100KB
let content_len = content.len();
let buffer = project
.update(cx, |project, cx| project.create_buffer(true, cx))
.await
.expect("failed to create buffer");
buffer.update(cx, |buffer, cx| buffer.set_text(content, cx));
let result = cx
.spawn(|cx| async move { get_buffer_content_or_outline(buffer, None, &cx).await })
.await
.unwrap();
// Should contain some of the actual file content
assert!(
result.text.contains("AAAAAAAAAA"),
"Result did not contain content subset"
);
// Should be marked as not an outline (it's truncated content)
assert!(
!result.is_outline,
"Large file without outline should not be marked as outline"
);
// Should be reasonably sized (much smaller than original)
assert!(
result.text.len() < 50 * 1024,
"Result size {} should be smaller than 50KB",
result.text.len()
);
// Should be significantly smaller than the original content
assert!(
result.text.len() < content_len / 10,
"Result should be much smaller than original content"
);
}
}


@@ -50,13 +50,14 @@ impl crate::AgentServer for CustomAgentServer {
fn set_default_mode(&self, mode_id: Option<acp::SessionModeId>, fs: Arc<dyn Fs>, cx: &mut App) {
let name = self.name();
update_settings_file(fs, cx, move |settings, _| {
settings
if let Some(settings) = settings
.agent_servers
.get_or_insert_default()
.custom
.get_mut(&name)
.unwrap()
.default_mode = mode_id.map(|m| m.to_string())
{
settings.default_mode = mode_id.map(|m| m.to_string())
}
});
}


@@ -109,6 +109,8 @@ impl ContextPickerCompletionProvider {
icon_path: Some(mode.icon().path().into()),
documentation: None,
source: project::CompletionSource::Custom,
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
// This ensures that when a user accepts this completion, the
// completion menu will still be shown after "@category " is
@@ -146,6 +148,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
insert_text_mode: None,
source: project::CompletionSource::Custom,
match_start: None,
snippet_deduplication_key: None,
icon_path: Some(icon_for_completion),
confirm: Some(confirm_completion_callback(
thread_entry.title().clone(),
@@ -177,6 +181,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
insert_text_mode: None,
source: project::CompletionSource::Custom,
match_start: None,
snippet_deduplication_key: None,
icon_path: Some(icon_path),
confirm: Some(confirm_completion_callback(
rule.title,
@@ -233,6 +239,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(completion_icon_path),
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
file_name,
@@ -284,6 +292,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(icon_path),
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
symbol.name.into(),
@@ -316,6 +326,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(icon_path),
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
url_to_fetch.to_string().into(),
@@ -384,6 +396,8 @@ impl ContextPickerCompletionProvider {
icon_path: Some(action.icon().path().into()),
documentation: None,
source: project::CompletionSource::Custom,
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
// This ensures that when a user accepts this completion, the
// completion menu will still be shown after "@category " is
@@ -694,14 +708,18 @@ fn build_symbol_label(symbol_name: &str, file_name: &str, line: u32, cx: &App) -
}
fn build_code_label_for_full_path(file_name: &str, directory: Option<&str>, cx: &App) -> CodeLabel {
let comment_id = cx.theme().syntax().highlight_id("comment").map(HighlightId);
let path = cx
.theme()
.syntax()
.highlight_id("variable")
.map(HighlightId);
let mut label = CodeLabelBuilder::default();
label.push_str(file_name, None);
label.push_str(" ", None);
if let Some(directory) = directory {
label.push_str(directory, comment_id);
label.push_str(directory, path);
}
label.build()
@@ -770,6 +788,8 @@ impl CompletionProvider for ContextPickerCompletionProvider {
)),
source: project::CompletionSource::Custom,
icon_path: None,
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm: Some(Arc::new({
let editor = editor.clone();
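The repeated `match_start: None` / `snippet_deduplication_key: None` lines threaded through every call site above suggest the completion struct gained two optional fields that existing providers simply leave unset. A minimal standalone sketch of that pattern — the field names come from the diff, but this miniature struct shape and its `Default` impl are assumptions, not the real `project::Completion`:

```rust
// Hypothetical miniature of a completion struct gaining two optional
// fields. Call sites can write `match_start: None` explicitly (as the
// diff does) or lean on struct-update syntax with Default, as here.
#[derive(Default, Debug)]
struct Completion {
    new_text: String,
    match_start: Option<usize>,
    snippet_deduplication_key: Option<u64>,
}

fn main() {
    let completion = Completion {
        new_text: "@file ".to_string(),
        ..Default::default()
    };
    // The new fields default to None, matching the diff's call sites.
    assert!(completion.match_start.is_none());
    assert!(completion.snippet_deduplication_key.is_none());
    println!("{completion:?}");
}
```

Adding fields with an `Option` type and a `Default` impl is what makes a change like this mechanical: every provider compiles again after inserting two `None` lines.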

View File

@@ -15,6 +15,7 @@ use editor::{
EditorEvent, EditorMode, EditorSnapshot, EditorStyle, ExcerptId, FoldPlaceholder, Inlay,
MultiBuffer, ToOffset,
actions::Paste,
code_context_menus::CodeContextMenu,
display_map::{Crease, CreaseId, FoldId},
scroll::Autoscroll,
};
@@ -272,6 +273,15 @@ impl MessageEditor {
self.editor.read(cx).is_empty(cx)
}
pub fn is_completions_menu_visible(&self, cx: &App) -> bool {
self.editor
.read(cx)
.context_menu()
.borrow()
.as_ref()
.is_some_and(|menu| matches!(menu, CodeContextMenu::Completions(_)) && menu.visible())
}
pub fn mentions(&self) -> HashSet<MentionUri> {
self.mention_set
.mentions
@@ -836,6 +846,45 @@ impl MessageEditor {
cx.emit(MessageEditorEvent::Send)
}
pub fn trigger_completion_menu(&mut self, window: &mut Window, cx: &mut Context<Self>) {
let editor = self.editor.clone();
cx.spawn_in(window, async move |_, cx| {
editor
.update_in(cx, |editor, window, cx| {
let menu_is_open =
editor.context_menu().borrow().as_ref().is_some_and(|menu| {
matches!(menu, CodeContextMenu::Completions(_)) && menu.visible()
});
let has_at_sign = {
let snapshot = editor.display_snapshot(cx);
let cursor = editor.selections.newest::<text::Point>(&snapshot).head();
let offset = cursor.to_offset(&snapshot);
if offset > 0 {
snapshot
.buffer_snapshot()
.reversed_chars_at(offset)
.next()
.map(|sign| sign == '@')
.unwrap_or(false)
} else {
false
}
};
if menu_is_open && has_at_sign {
return;
}
editor.insert("@", window, cx);
editor.show_completions(&editor::actions::ShowCompletions, window, cx);
})
.log_err();
})
.detach();
}
fn chat(&mut self, _: &Chat, _: &mut Window, cx: &mut Context<Self>) {
self.send(cx);
}
@@ -2622,13 +2671,14 @@ mod tests {
}
#[gpui::test]
async fn test_large_file_mention_uses_outline(cx: &mut TestAppContext) {
async fn test_large_file_mention_fallback(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
// Create a large file that exceeds AUTO_OUTLINE_SIZE
const LINE: &str = "fn example_function() { /* some code */ }\n";
// Using plain text without a configured language, so no outline is available
const LINE: &str = "This is a line of text in the file\n";
let large_content = LINE.repeat(2 * (outline::AUTO_OUTLINE_SIZE / LINE.len()));
assert!(large_content.len() > outline::AUTO_OUTLINE_SIZE);
@@ -2639,8 +2689,8 @@ mod tests {
fs.insert_tree(
"/project",
json!({
"large_file.rs": large_content.clone(),
"small_file.rs": small_content,
"large_file.txt": large_content.clone(),
"small_file.txt": small_content,
}),
)
.await;
@@ -2686,7 +2736,7 @@ mod tests {
let large_file_abs_path = project.read_with(cx, |project, cx| {
let worktree = project.worktrees(cx).next().unwrap();
let worktree_root = worktree.read(cx).abs_path();
worktree_root.join("large_file.rs")
worktree_root.join("large_file.txt")
});
let large_file_task = message_editor.update(cx, |editor, cx| {
editor.confirm_mention_for_file(large_file_abs_path, cx)
@@ -2695,11 +2745,20 @@ mod tests {
let large_file_mention = large_file_task.await.unwrap();
match large_file_mention {
Mention::Text { content, .. } => {
// Should contain outline header for large files
assert!(content.contains("File outline for"));
assert!(content.contains("file too large to show full content"));
// Should not contain the full repeated content
assert!(!content.contains(&LINE.repeat(100)));
// Should contain some of the content but not all of it
assert!(
content.contains(LINE),
"Should contain some of the file content"
);
assert!(
!content.contains(&LINE.repeat(100)),
"Should not contain the full file"
);
// Should be much smaller than original
assert!(
content.len() < large_content.len() / 10,
"Should be significantly truncated"
);
}
_ => panic!("Expected Text mention for large file"),
}
@@ -2709,7 +2768,7 @@ mod tests {
let small_file_abs_path = project.read_with(cx, |project, cx| {
let worktree = project.worktrees(cx).next().unwrap();
let worktree_root = worktree.read(cx).abs_path();
worktree_root.join("small_file.rs")
worktree_root.join("small_file.txt")
});
let small_file_task = message_editor.update(cx, |editor, cx| {
editor.confirm_mention_for_file(small_file_abs_path, cx)
@@ -2718,10 +2777,8 @@ mod tests {
let small_file_mention = small_file_task.await.unwrap();
match small_file_mention {
Mention::Text { content, .. } => {
// Should contain the actual content
// Should contain the full actual content
assert_eq!(content, small_content);
// Should not contain outline header
assert!(!content.contains("File outline for"));
}
_ => panic!("Expected Text mention for small file"),
}
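The `trigger_completion_menu` logic above reduces to one check: only insert a fresh `@` when the character immediately before the cursor is not already an at-sign. A standalone sketch of that check, stripped of the editor and snapshot types (the function name is hypothetical, and the menu-visibility half of the real guard is omitted):

```rust
// Returns true when a new "@" should be inserted before showing the
// completion menu, i.e. when the char immediately before the cursor
// is not already an at-sign. `cursor` is a byte offset into `text`,
// playing the role of the reversed_chars_at(offset).next() lookup.
fn needs_at_sign(text: &str, cursor: usize) -> bool {
    text[..cursor].chars().next_back() != Some('@')
}

fn main() {
    assert!(needs_at_sign("hello ", 6)); // plain text: insert "@"
    assert!(!needs_at_sign("see @", 5)); // "@" already present: skip
    assert!(needs_at_sign("", 0)); // empty buffer: insert "@"
}
```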

View File

@@ -4188,6 +4188,8 @@ impl AcpThreadView {
.justify_between()
.child(
h_flex()
.gap_0p5()
.child(self.render_add_context_button(cx))
.child(self.render_follow_toggle(cx))
.children(self.render_burn_mode_toggle(cx)),
)
@@ -4502,6 +4504,29 @@ impl AcpThreadView {
}))
}
fn render_add_context_button(&self, cx: &mut Context<Self>) -> impl IntoElement {
let message_editor = self.message_editor.clone();
let menu_visible = message_editor.read(cx).is_completions_menu_visible(cx);
IconButton::new("add-context", IconName::AtSign)
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.when(!menu_visible, |this| {
this.tooltip(move |_window, cx| {
Tooltip::with_meta("Add Context", None, "Or type @ to include context", cx)
})
})
.on_click(cx.listener(move |_this, _, window, cx| {
let message_editor_clone = message_editor.clone();
window.defer(cx, move |window, cx| {
message_editor_clone.update(cx, |message_editor, cx| {
message_editor.trigger_completion_menu(window, cx);
});
});
}))
}
fn render_markdown(&self, markdown: Entity<Markdown>, style: MarkdownStyle) -> MarkdownElement {
let workspace = self.workspace.clone();
MarkdownElement::new(markdown, style).on_url_click(move |text, window, cx| {

View File

@@ -8,6 +8,7 @@ use std::{ops::Range, sync::Arc};
use agent::ContextServerRegistry;
use anyhow::Result;
use client::zed_urls;
use cloud_llm_client::{Plan, PlanV1, PlanV2};
use collections::HashMap;
use context_server::ContextServerId;
@@ -26,18 +27,20 @@ use language_model::{
use language_models::AllLanguageModelSettings;
use notifications::status_toast::{StatusToast, ToastIcon};
use project::{
agent_server_store::{AgentServerStore, CLAUDE_CODE_NAME, CODEX_NAME, GEMINI_NAME},
agent_server_store::{
AgentServerStore, CLAUDE_CODE_NAME, CODEX_NAME, ExternalAgentServerName, GEMINI_NAME,
},
context_server_store::{ContextServerConfiguration, ContextServerStatus, ContextServerStore},
};
use settings::{Settings, SettingsStore, update_settings_file};
use ui::{
Button, ButtonStyle, Chip, CommonAnimationExt, ContextMenu, Disclosure, Divider, DividerColor,
ElevationIndex, IconName, IconPosition, IconSize, Indicator, LabelSize, PopoverMenu, Switch,
SwitchColor, Tooltip, WithScrollbar, prelude::*,
Button, ButtonStyle, Chip, CommonAnimationExt, ContextMenu, ContextMenuEntry, Disclosure,
Divider, DividerColor, ElevationIndex, IconName, IconPosition, IconSize, Indicator, LabelSize,
PopoverMenu, Switch, SwitchColor, Tooltip, WithScrollbar, prelude::*,
};
use util::ResultExt as _;
use workspace::{Workspace, create_and_open_local_file};
use zed_actions::ExtensionCategoryFilter;
use zed_actions::{ExtensionCategoryFilter, OpenBrowser};
pub(crate) use configure_context_server_modal::ConfigureContextServerModal;
pub(crate) use configure_context_server_tools_modal::ConfigureContextServerToolsModal;
@@ -415,6 +418,7 @@ impl AgentConfiguration {
cx: &mut Context<Self>,
) -> impl IntoElement {
let providers = LanguageModelRegistry::read_global(cx).providers();
let popover_menu = PopoverMenu::new("add-provider-popover")
.trigger(
Button::new("add-provider", "Add Provider")
@@ -425,7 +429,6 @@ impl AgentConfiguration {
.icon_color(Color::Muted)
.label_size(LabelSize::Small),
)
.anchor(gpui::Corner::TopRight)
.menu({
let workspace = self.workspace.clone();
move |window, cx| {
@@ -447,6 +450,11 @@ impl AgentConfiguration {
})
}))
}
})
.anchor(gpui::Corner::TopRight)
.offset(gpui::Point {
x: px(0.0),
y: px(2.0),
});
v_flex()
@@ -541,7 +549,6 @@ impl AgentConfiguration {
.icon_color(Color::Muted)
.label_size(LabelSize::Small),
)
.anchor(gpui::Corner::TopRight)
.menu({
move |window, cx| {
Some(ContextMenu::build(window, cx, |menu, _window, _cx| {
@@ -564,6 +571,11 @@ impl AgentConfiguration {
})
}))
}
})
.anchor(gpui::Corner::TopRight)
.offset(gpui::Point {
x: px(0.0),
y: px(2.0),
});
v_flex()
@@ -943,7 +955,7 @@ impl AgentConfiguration {
.cloned()
.collect::<Vec<_>>();
let user_defined_agents = user_defined_agents
let user_defined_agents: Vec<_> = user_defined_agents
.into_iter()
.map(|name| {
let icon = if let Some(icon_path) = agent_server_store.agent_icon(&name) {
@@ -951,27 +963,93 @@ impl AgentConfiguration {
} else {
AgentIcon::Name(IconName::Ai)
};
self.render_agent_server(icon, name, true)
.into_any_element()
(name, icon)
})
.collect::<Vec<_>>();
.collect();
let add_agens_button = Button::new("add-agent", "Add Agent")
.style(ButtonStyle::Outlined)
.icon_position(IconPosition::Start)
.icon(IconName::Plus)
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.label_size(LabelSize::Small)
.on_click(move |_, window, cx| {
if let Some(workspace) = window.root().flatten() {
let workspace = workspace.downgrade();
window
.spawn(cx, async |cx| {
open_new_agent_servers_entry_in_settings_editor(workspace, cx).await
let add_agent_popover = PopoverMenu::new("add-agent-server-popover")
.trigger(
Button::new("add-agent", "Add Agent")
.style(ButtonStyle::Outlined)
.icon_position(IconPosition::Start)
.icon(IconName::Plus)
.icon_size(IconSize::Small)
.icon_color(Color::Muted)
.label_size(LabelSize::Small),
)
.menu({
move |window, cx| {
Some(ContextMenu::build(window, cx, |menu, _window, _cx| {
menu.entry("Install from Extensions", None, {
|window, cx| {
window.dispatch_action(
zed_actions::Extensions {
category_filter: Some(
ExtensionCategoryFilter::AgentServers,
),
id: None,
}
.boxed_clone(),
cx,
)
}
})
.detach_and_log_err(cx);
.entry("Add Custom Agent", None, {
move |window, cx| {
if let Some(workspace) = window.root().flatten() {
let workspace = workspace.downgrade();
window
.spawn(cx, async |cx| {
open_new_agent_servers_entry_in_settings_editor(
workspace, cx,
)
.await
})
.detach_and_log_err(cx);
}
}
})
.separator()
.header("Learn More")
.item(
ContextMenuEntry::new("Agent Servers Docs")
.icon(IconName::ArrowUpRight)
.icon_color(Color::Muted)
.icon_position(IconPosition::End)
.handler({
move |window, cx| {
window.dispatch_action(
Box::new(OpenBrowser {
url: zed_urls::agent_server_docs(cx),
}),
cx,
);
}
}),
)
.item(
ContextMenuEntry::new("ACP Docs")
.icon(IconName::ArrowUpRight)
.icon_color(Color::Muted)
.icon_position(IconPosition::End)
.handler({
move |window, cx| {
window.dispatch_action(
Box::new(OpenBrowser {
url: "https://agentclientprotocol.com/".into(),
}),
cx,
);
}
}),
)
}))
}
})
.anchor(gpui::Corner::TopRight)
.offset(gpui::Point {
x: px(0.0),
y: px(2.0),
});
v_flex()
@@ -982,7 +1060,7 @@ impl AgentConfiguration {
.child(self.render_section_title(
"External Agents",
"All agents connected through the Agent Client Protocol.",
add_agens_button.into_any_element(),
add_agent_popover.into_any_element(),
))
.child(
v_flex()
@@ -993,26 +1071,29 @@ impl AgentConfiguration {
AgentIcon::Name(IconName::AiClaude),
"Claude Code",
false,
cx,
))
.child(Divider::horizontal().color(DividerColor::BorderFaded))
.child(self.render_agent_server(
AgentIcon::Name(IconName::AiOpenAi),
"Codex CLI",
false,
cx,
))
.child(Divider::horizontal().color(DividerColor::BorderFaded))
.child(self.render_agent_server(
AgentIcon::Name(IconName::AiGemini),
"Gemini CLI",
false,
cx,
))
.map(|mut parent| {
for agent in user_defined_agents {
for (name, icon) in user_defined_agents {
parent = parent
.child(
Divider::horizontal().color(DividerColor::BorderFaded),
)
.child(agent);
.child(self.render_agent_server(icon, name, true, cx));
}
parent
}),
@@ -1025,6 +1106,7 @@ impl AgentConfiguration {
icon: AgentIcon,
name: impl Into<SharedString>,
external: bool,
cx: &mut Context<Self>,
) -> impl IntoElement {
let name = name.into();
let icon = match icon {
@@ -1039,28 +1121,53 @@ impl AgentConfiguration {
let tooltip_id = SharedString::new(format!("agent-source-{}", name));
let tooltip_message = format!("The {} agent was installed from an extension.", name);
let agent_server_name = ExternalAgentServerName(name.clone());
let uninstall_btn_id = SharedString::from(format!("uninstall-{}", name));
let uninstall_button = IconButton::new(uninstall_btn_id, IconName::Trash)
.icon_color(Color::Muted)
.icon_size(IconSize::Small)
.tooltip(Tooltip::text("Uninstall Agent Extension"))
.on_click(cx.listener(move |this, _, _window, cx| {
let agent_name = agent_server_name.clone();
if let Some(ext_id) = this.agent_server_store.update(cx, |store, _cx| {
store.get_extension_id_for_agent(&agent_name)
}) {
ExtensionStore::global(cx)
.update(cx, |store, cx| store.uninstall_extension(ext_id, cx))
.detach_and_log_err(cx);
}
}));
h_flex()
.gap_1p5()
.child(icon)
.child(Label::new(name))
.when(external, |this| {
this.child(
div()
.id(tooltip_id)
.flex_none()
.tooltip(Tooltip::text(tooltip_message))
.child(
Icon::new(IconName::ZedSrcExtension)
.size(IconSize::Small)
.color(Color::Muted),
),
)
})
.gap_1()
.justify_between()
.child(
Icon::new(IconName::Check)
.color(Color::Success)
.size(IconSize::Small),
h_flex()
.gap_1p5()
.child(icon)
.child(Label::new(name))
.when(external, |this| {
this.child(
div()
.id(tooltip_id)
.flex_none()
.tooltip(Tooltip::text(tooltip_message))
.child(
Icon::new(IconName::ZedSrcExtension)
.size(IconSize::Small)
.color(Color::Muted),
),
)
})
.child(
Icon::new(IconName::Check)
.color(Color::Success)
.size(IconSize::Small),
),
)
.when(external, |this| this.child(uninstall_button))
}
}

View File

@@ -7,8 +7,8 @@ use anyhow::{Context as _, Result};
use context_server::{ContextServerCommand, ContextServerId};
use editor::{Editor, EditorElement, EditorStyle};
use gpui::{
AsyncWindowContext, DismissEvent, Entity, EventEmitter, FocusHandle, Focusable, Task,
TextStyle, TextStyleRefinement, UnderlineStyle, WeakEntity, prelude::*,
AsyncWindowContext, DismissEvent, Entity, EventEmitter, FocusHandle, Focusable, ScrollHandle,
Task, TextStyle, TextStyleRefinement, UnderlineStyle, WeakEntity, prelude::*,
};
use language::{Language, LanguageRegistry};
use markdown::{Markdown, MarkdownElement, MarkdownStyle};
@@ -23,7 +23,8 @@ use project::{
use settings::{Settings as _, update_settings_file};
use theme::ThemeSettings;
use ui::{
CommonAnimationExt, KeyBinding, Modal, ModalFooter, ModalHeader, Section, Tooltip, prelude::*,
CommonAnimationExt, KeyBinding, Modal, ModalFooter, ModalHeader, Section, Tooltip,
WithScrollbar, prelude::*,
};
use util::ResultExt as _;
use workspace::{ModalView, Workspace};
@@ -252,6 +253,7 @@ pub struct ConfigureContextServerModal {
source: ConfigurationSource,
state: State,
original_server_id: Option<ContextServerId>,
scroll_handle: ScrollHandle,
}
impl ConfigureContextServerModal {
@@ -361,6 +363,7 @@ impl ConfigureContextServerModal {
window,
cx,
),
scroll_handle: ScrollHandle::new(),
})
})
})
@@ -680,6 +683,7 @@ impl ConfigureContextServerModal {
impl Render for ConfigureContextServerModal {
fn render(&mut self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
let scroll_handle = self.scroll_handle.clone();
div()
.elevation_3(cx)
.w(rems(34.))
@@ -699,14 +703,29 @@ impl Render for ConfigureContextServerModal {
Modal::new("configure-context-server", None)
.header(self.render_modal_header())
.section(
Section::new()
.child(self.render_modal_description(window, cx))
.child(self.render_modal_content(cx))
.child(match &self.state {
State::Idle => div(),
State::Waiting => Self::render_waiting_for_context_server(),
State::Error(error) => Self::render_modal_error(error.clone()),
}),
Section::new().child(
div()
.size_full()
.child(
div()
.id("modal-content")
.max_h(vh(0.7, window))
.overflow_y_scroll()
.track_scroll(&scroll_handle)
.child(self.render_modal_description(window, cx))
.child(self.render_modal_content(cx))
.child(match &self.state {
State::Idle => div(),
State::Waiting => {
Self::render_waiting_for_context_server()
}
State::Error(error) => {
Self::render_modal_error(error.clone())
}
}),
)
.vertical_scrollbar_for(scroll_handle, window, cx),
),
)
.footer(self.render_modal_footer(cx)),
)

View File

@@ -30,7 +30,10 @@ use command_palette_hooks::CommandPaletteFilter;
use feature_flags::FeatureFlagAppExt as _;
use fs::Fs;
use gpui::{Action, App, Entity, SharedString, actions};
use language::LanguageRegistry;
use language::{
LanguageRegistry,
language_settings::{AllLanguageSettings, EditPredictionProvider},
};
use language_model::{
ConfiguredModel, LanguageModel, LanguageModelId, LanguageModelProviderId, LanguageModelRegistry,
};
@@ -286,7 +289,25 @@ pub fn init(
fn update_command_palette_filter(cx: &mut App) {
let disable_ai = DisableAiSettings::get_global(cx).disable_ai;
let agent_enabled = AgentSettings::get_global(cx).enabled;
let edit_prediction_provider = AllLanguageSettings::get_global(cx)
.edit_predictions
.provider;
CommandPaletteFilter::update_global(cx, |filter, _| {
use editor::actions::{
AcceptEditPrediction, AcceptPartialEditPrediction, NextEditPrediction,
PreviousEditPrediction, ShowEditPrediction, ToggleEditPrediction,
};
let edit_prediction_actions = [
TypeId::of::<AcceptEditPrediction>(),
TypeId::of::<AcceptPartialEditPrediction>(),
TypeId::of::<ShowEditPrediction>(),
TypeId::of::<NextEditPrediction>(),
TypeId::of::<PreviousEditPrediction>(),
TypeId::of::<ToggleEditPrediction>(),
];
if disable_ai {
filter.hide_namespace("agent");
filter.hide_namespace("assistant");
@@ -295,42 +316,45 @@ fn update_command_palette_filter(cx: &mut App) {
filter.hide_namespace("zed_predict_onboarding");
filter.hide_namespace("edit_prediction");
use editor::actions::{
AcceptEditPrediction, AcceptPartialEditPrediction, NextEditPrediction,
PreviousEditPrediction, ShowEditPrediction, ToggleEditPrediction,
};
let edit_prediction_actions = [
TypeId::of::<AcceptEditPrediction>(),
TypeId::of::<AcceptPartialEditPrediction>(),
TypeId::of::<ShowEditPrediction>(),
TypeId::of::<NextEditPrediction>(),
TypeId::of::<PreviousEditPrediction>(),
TypeId::of::<ToggleEditPrediction>(),
];
filter.hide_action_types(&edit_prediction_actions);
filter.hide_action_types(&[TypeId::of::<zed_actions::OpenZedPredictOnboarding>()]);
} else {
filter.show_namespace("agent");
if agent_enabled {
filter.show_namespace("agent");
} else {
filter.hide_namespace("agent");
}
filter.show_namespace("assistant");
filter.show_namespace("copilot");
match edit_prediction_provider {
EditPredictionProvider::None => {
filter.hide_namespace("edit_prediction");
filter.hide_namespace("copilot");
filter.hide_namespace("supermaven");
filter.hide_action_types(&edit_prediction_actions);
}
EditPredictionProvider::Copilot => {
filter.show_namespace("edit_prediction");
filter.show_namespace("copilot");
filter.hide_namespace("supermaven");
filter.show_action_types(edit_prediction_actions.iter());
}
EditPredictionProvider::Supermaven => {
filter.show_namespace("edit_prediction");
filter.hide_namespace("copilot");
filter.show_namespace("supermaven");
filter.show_action_types(edit_prediction_actions.iter());
}
EditPredictionProvider::Zed | EditPredictionProvider::Codestral => {
filter.show_namespace("edit_prediction");
filter.hide_namespace("copilot");
filter.hide_namespace("supermaven");
filter.show_action_types(edit_prediction_actions.iter());
}
}
filter.show_namespace("zed_predict_onboarding");
filter.show_namespace("edit_prediction");
use editor::actions::{
AcceptEditPrediction, AcceptPartialEditPrediction, NextEditPrediction,
PreviousEditPrediction, ShowEditPrediction, ToggleEditPrediction,
};
let edit_prediction_actions = [
TypeId::of::<AcceptEditPrediction>(),
TypeId::of::<AcceptPartialEditPrediction>(),
TypeId::of::<ShowEditPrediction>(),
TypeId::of::<NextEditPrediction>(),
TypeId::of::<PreviousEditPrediction>(),
TypeId::of::<ToggleEditPrediction>(),
];
filter.show_action_types(edit_prediction_actions.iter());
filter.show_action_types(&[TypeId::of::<zed_actions::OpenZedPredictOnboarding>()]);
}
});
@@ -420,3 +444,137 @@ fn register_slash_commands(cx: &mut App) {
})
.detach();
}
#[cfg(test)]
mod tests {
use super::*;
use agent_settings::{AgentProfileId, AgentSettings, CompletionMode};
use command_palette_hooks::CommandPaletteFilter;
use editor::actions::AcceptEditPrediction;
use gpui::{BorrowAppContext, TestAppContext, px};
use project::DisableAiSettings;
use settings::{
DefaultAgentView, DockPosition, NotifyWhenAgentWaiting, Settings, SettingsStore,
};
#[gpui::test]
fn test_agent_command_palette_visibility(cx: &mut TestAppContext) {
// Init settings
cx.update(|cx| {
let store = SettingsStore::test(cx);
cx.set_global(store);
command_palette_hooks::init(cx);
AgentSettings::register(cx);
DisableAiSettings::register(cx);
AllLanguageSettings::register(cx);
});
let agent_settings = AgentSettings {
enabled: true,
button: true,
dock: DockPosition::Right,
default_width: px(300.),
default_height: px(600.),
default_model: None,
inline_assistant_model: None,
commit_message_model: None,
thread_summary_model: None,
inline_alternatives: vec![],
default_profile: AgentProfileId::default(),
default_view: DefaultAgentView::Thread,
profiles: Default::default(),
always_allow_tool_actions: false,
notify_when_agent_waiting: NotifyWhenAgentWaiting::default(),
play_sound_when_agent_done: false,
single_file_review: false,
model_parameters: vec![],
preferred_completion_mode: CompletionMode::Normal,
enable_feedback: false,
expand_edit_card: true,
expand_terminal_card: true,
use_modifier_to_send: true,
message_editor_min_lines: 1,
};
cx.update(|cx| {
AgentSettings::override_global(agent_settings.clone(), cx);
DisableAiSettings::override_global(DisableAiSettings { disable_ai: false }, cx);
// Initial update
update_command_palette_filter(cx);
});
// Assert visible
cx.update(|cx| {
let filter = CommandPaletteFilter::try_global(cx).unwrap();
assert!(
!filter.is_hidden(&NewThread),
"NewThread should be visible by default"
);
});
// Disable agent
cx.update(|cx| {
let mut new_settings = agent_settings.clone();
new_settings.enabled = false;
AgentSettings::override_global(new_settings, cx);
// Trigger update
update_command_palette_filter(cx);
});
// Assert hidden
cx.update(|cx| {
let filter = CommandPaletteFilter::try_global(cx).unwrap();
assert!(
filter.is_hidden(&NewThread),
"NewThread should be hidden when agent is disabled"
);
});
// Test EditPredictionProvider
// Enable EditPredictionProvider::Copilot
cx.update(|cx| {
cx.update_global::<SettingsStore, _>(|store, cx| {
store.update_user_settings(cx, |s| {
s.project
.all_languages
.features
.get_or_insert(Default::default())
.edit_prediction_provider = Some(EditPredictionProvider::Copilot);
});
});
update_command_palette_filter(cx);
});
cx.update(|cx| {
let filter = CommandPaletteFilter::try_global(cx).unwrap();
assert!(
!filter.is_hidden(&AcceptEditPrediction),
"EditPrediction should be visible when provider is Copilot"
);
});
// Disable EditPredictionProvider (None)
cx.update(|cx| {
cx.update_global::<SettingsStore, _>(|store, cx| {
store.update_user_settings(cx, |s| {
s.project
.all_languages
.features
.get_or_insert(Default::default())
.edit_prediction_provider = Some(EditPredictionProvider::None);
});
});
update_command_palette_filter(cx);
});
cx.update(|cx| {
let filter = CommandPaletteFilter::try_global(cx).unwrap();
assert!(
filter.is_hidden(&AcceptEditPrediction),
"EditPrediction should be hidden when provider is None"
);
});
}
}
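The rewritten `update_command_palette_filter` above replaces blanket show/hide calls with a per-provider `match`. The decision table can be sketched on its own with a plain `HashSet` standing in for `CommandPaletteFilter` — this is a simplified model of the namespace logic, not the real filter API:

```rust
use std::collections::HashSet;

// Simplified model: each edit-prediction provider choice decides which
// related namespaces stay hidden in the command palette.
#[derive(Clone, Copy)]
enum EditPredictionProvider {
    None,
    Copilot,
    Supermaven,
    Zed,
    Codestral,
}

fn hidden_namespaces(provider: EditPredictionProvider) -> HashSet<&'static str> {
    use EditPredictionProvider::*;
    let mut hidden = HashSet::new();
    match provider {
        None => {
            // No provider: hide everything prediction-related.
            hidden.insert("edit_prediction");
            hidden.insert("copilot");
            hidden.insert("supermaven");
        }
        Copilot => {
            hidden.insert("supermaven");
        }
        Supermaven => {
            hidden.insert("copilot");
        }
        Zed | Codestral => {
            hidden.insert("copilot");
            hidden.insert("supermaven");
        }
    }
    hidden
}

fn main() {
    assert!(hidden_namespaces(EditPredictionProvider::None).contains("edit_prediction"));
    assert!(!hidden_namespaces(EditPredictionProvider::Copilot).contains("copilot"));
    assert!(hidden_namespaces(EditPredictionProvider::Zed).contains("supermaven"));
}
```

Centralizing the table in one `match` is what the diff buys: the compiler forces every provider variant to state its visibility rules, instead of scattering `show_namespace`/`hide_namespace` calls across branches.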

View File

@@ -1089,7 +1089,7 @@ mod tests {
}
#[gpui::test]
async fn test_large_file_uses_outline(cx: &mut TestAppContext) {
async fn test_large_file_uses_fallback(cx: &mut TestAppContext) {
init_test_settings(cx);
// Create a large file that exceeds AUTO_OUTLINE_SIZE
@@ -1101,16 +1101,16 @@ mod tests {
let file_context = load_context_for("file.txt", large_content, cx).await;
// Should contain some of the actual file content
assert!(
file_context
.text
.contains(&format!("# File outline for {}", path!("test/file.txt"))),
"Large files should not get an outline"
file_context.text.contains(LINE),
"Should contain some of the file content"
);
// Should be much smaller than original
assert!(
file_context.text.len() < content_len,
"Outline should be smaller than original content"
file_context.text.len() < content_len / 10,
"Should be significantly smaller than original content"
);
}
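Both renamed tests (`test_large_file_mention_fallback` above and `test_large_file_uses_fallback` here) assert the same invariants: the fallback keeps a prefix of the content and is much smaller than the original. A sketch of a truncation helper with those properties — the name and the cut-at-char-boundary strategy are assumptions, not the actual implementation behind `AUTO_OUTLINE_SIZE`:

```rust
// Keep at most `limit` bytes of `content`, cutting on a char boundary
// so the result is still valid UTF-8.
fn truncate_context(content: &str, limit: usize) -> &str {
    if content.len() <= limit {
        return content;
    }
    let mut end = limit;
    while !content.is_char_boundary(end) {
        end -= 1;
    }
    &content[..end]
}

fn main() {
    const LINE: &str = "This is a line of text in the file\n";
    let large = LINE.repeat(1000);
    let truncated = truncate_context(&large, large.len() / 10);
    assert!(truncated.contains(LINE)); // keeps some of the content
    assert!(truncated.len() < large.len() / 5); // significantly smaller
    assert_eq!(truncate_context("small", 100), "small"); // small files pass through
}
```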

View File

@@ -278,6 +278,8 @@ impl ContextPickerCompletionProvider {
icon_path: Some(mode.icon().path().into()),
documentation: None,
source: project::CompletionSource::Custom,
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
// This ensures that when a user accepts this completion, the
// completion menu will still be shown after "@category " is
@@ -386,6 +388,8 @@ impl ContextPickerCompletionProvider {
icon_path: Some(action.icon().path().into()),
documentation: None,
source: project::CompletionSource::Custom,
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
// This ensures that when a user accepts this completion, the
// completion menu will still be shown after "@category " is
@@ -417,6 +421,8 @@ impl ContextPickerCompletionProvider {
replace_range: source_range.clone(),
new_text,
label: CodeLabel::plain(thread_entry.title().to_string(), None),
match_start: None,
snippet_deduplication_key: None,
documentation: None,
insert_text_mode: None,
source: project::CompletionSource::Custom,
@@ -484,6 +490,8 @@ impl ContextPickerCompletionProvider {
replace_range: source_range.clone(),
new_text,
label: CodeLabel::plain(rules.title.to_string(), None),
match_start: None,
snippet_deduplication_key: None,
documentation: None,
insert_text_mode: None,
source: project::CompletionSource::Custom,
@@ -524,6 +532,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(IconName::ToolWeb.path().into()),
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
IconName::ToolWeb.path().into(),
@@ -612,6 +622,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(completion_icon_path),
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
crease_icon_path,
@@ -689,6 +701,8 @@ impl ContextPickerCompletionProvider {
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(IconName::Code.path().into()),
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
IconName::Code.path().into(),

View File

@@ -1,6 +1,6 @@
use std::{cmp::Reverse, sync::Arc};
use collections::{HashSet, IndexMap};
use collections::IndexMap;
use fuzzy::{StringMatch, StringMatchCandidate, match_strings};
use gpui::{Action, AnyElement, App, BackgroundExecutor, DismissEvent, Subscription, Task};
use language_model::{
@@ -57,7 +57,7 @@ fn all_models(cx: &App) -> GroupedModels {
})
.collect();
let other = providers
let all = providers
.iter()
.flat_map(|provider| {
provider
@@ -70,7 +70,7 @@ fn all_models(cx: &App) -> GroupedModels {
})
.collect();
GroupedModels::new(other, recommended)
GroupedModels::new(all, recommended)
}
#[derive(Clone)]
@@ -210,33 +210,24 @@ impl LanguageModelPickerDelegate {
struct GroupedModels {
recommended: Vec<ModelInfo>,
other: IndexMap<LanguageModelProviderId, Vec<ModelInfo>>,
all: IndexMap<LanguageModelProviderId, Vec<ModelInfo>>,
}
impl GroupedModels {
pub fn new(other: Vec<ModelInfo>, recommended: Vec<ModelInfo>) -> Self {
let recommended_ids = recommended
.iter()
.map(|info| (info.model.provider_id(), info.model.id()))
.collect::<HashSet<_>>();
let mut other_by_provider: IndexMap<_, Vec<ModelInfo>> = IndexMap::default();
for model in other {
if recommended_ids.contains(&(model.model.provider_id(), model.model.id())) {
continue;
}
pub fn new(all: Vec<ModelInfo>, recommended: Vec<ModelInfo>) -> Self {
let mut all_by_provider: IndexMap<_, Vec<ModelInfo>> = IndexMap::default();
for model in all {
let provider = model.model.provider_id();
if let Some(models) = other_by_provider.get_mut(&provider) {
if let Some(models) = all_by_provider.get_mut(&provider) {
models.push(model);
} else {
other_by_provider.insert(provider, vec![model]);
all_by_provider.insert(provider, vec![model]);
}
}
Self {
recommended,
other: other_by_provider,
all: all_by_provider,
}
}
@@ -252,7 +243,7 @@ impl GroupedModels {
);
}
for models in self.other.values() {
for models in self.all.values() {
if models.is_empty() {
continue;
}
@@ -267,20 +258,6 @@ impl GroupedModels {
}
entries
}
fn model_infos(&self) -> Vec<ModelInfo> {
let other = self
.other
.values()
.flat_map(|model| model.iter())
.cloned()
.collect::<Vec<_>>();
self.recommended
.iter()
.chain(&other)
.cloned()
.collect::<Vec<_>>()
}
}
enum LanguageModelPickerEntry {
@@ -425,8 +402,9 @@ impl PickerDelegate for LanguageModelPickerDelegate {
.collect::<Vec<_>>();
let available_models = all_models
.model_infos()
.iter()
.all
.values()
.flat_map(|models| models.iter())
.filter(|m| configured_provider_ids.contains(&m.model.provider_id()))
.cloned()
.collect::<Vec<_>>();
@@ -764,46 +742,52 @@ mod tests {
}
#[gpui::test]
fn test_exclude_recommended_models(_cx: &mut TestAppContext) {
fn test_recommended_models_also_appear_in_other(_cx: &mut TestAppContext) {
let recommended_models = create_models(vec![("zed", "claude")]);
let all_models = create_models(vec![
("zed", "claude"), // Should be filtered out from "other"
("zed", "claude"), // Should also appear in "other"
("zed", "gemini"),
("copilot", "o3"),
]);
let grouped_models = GroupedModels::new(all_models, recommended_models);
let actual_other_models = grouped_models
.other
let actual_all_models = grouped_models
.all
.values()
.flatten()
.cloned()
.collect::<Vec<_>>();
// Recommended models should not appear in "other"
assert_models_eq(actual_other_models, vec!["zed/gemini", "copilot/o3"]);
// Recommended models should also appear in "all"
assert_models_eq(
actual_all_models,
vec!["zed/claude", "zed/gemini", "copilot/o3"],
);
}
#[gpui::test]
fn test_dont_exclude_models_from_other_providers(_cx: &mut TestAppContext) {
fn test_models_from_different_providers(_cx: &mut TestAppContext) {
let recommended_models = create_models(vec![("zed", "claude")]);
let all_models = create_models(vec![
("zed", "claude"), // Should be filtered out from "other"
("zed", "claude"), // Should also appear in "other"
("zed", "gemini"),
("copilot", "claude"), // Should not be filtered out from "other"
("copilot", "claude"), // Different provider, should appear in "other"
]);
let grouped_models = GroupedModels::new(all_models, recommended_models);
let actual_other_models = grouped_models
.other
let actual_all_models = grouped_models
.all
.values()
.flatten()
.cloned()
.collect::<Vec<_>>();
// Recommended models should not appear in "other"
assert_models_eq(actual_other_models, vec!["zed/gemini", "copilot/claude"]);
// All models should appear in "all" regardless of recommended status
assert_models_eq(
actual_all_models,
vec!["zed/claude", "zed/gemini", "copilot/claude"],
);
}
}
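The behavioral change these tests pin down can be sketched in isolation: grouping no longer subtracts recommended models from the full set, so a recommended model appears both in `recommended` and in `all`. The `GroupedModels` below is a std-only stand-in (a `BTreeMap` keyed by provider, `"provider/model"` id strings); it is not Zed's actual type.

```rust
use std::collections::BTreeMap;

// Hypothetical stand-in for Zed's GroupedModels: `all` keeps every model,
// including the ones that are also listed as recommended.
struct GroupedModels {
    recommended: Vec<String>,
    all: BTreeMap<String, Vec<String>>, // provider -> "provider/model" ids
}

impl GroupedModels {
    fn new(all_models: &[(&str, &str)], recommended: &[(&str, &str)]) -> Self {
        let mut all: BTreeMap<String, Vec<String>> = BTreeMap::new();
        for (provider, model) in all_models {
            all.entry(provider.to_string())
                .or_default()
                .push(format!("{provider}/{model}"));
        }
        Self {
            recommended: recommended
                .iter()
                .map(|(p, m)| format!("{p}/{m}"))
                .collect(),
            all,
        }
    }

    // Mirrors the tests above: flatten the per-provider map into one list.
    fn all_flat(&self) -> Vec<String> {
        self.all.values().flatten().cloned().collect()
    }
}

fn main() {
    let grouped = GroupedModels::new(
        &[("zed", "claude"), ("zed", "gemini"), ("copilot", "o3")],
        &[("zed", "claude")],
    );
    assert_eq!(grouped.recommended.len(), 1);
    // The recommended model is NOT filtered out of `all`.
    assert!(grouped.all_flat().contains(&"zed/claude".to_string()));
    assert_eq!(grouped.all_flat().len(), 3);
    println!("{:?}", grouped.all_flat());
}
```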


@@ -127,6 +127,8 @@ impl SlashCommandCompletionProvider {
new_text,
label: command.label(cx),
icon_path: None,
match_start: None,
snippet_deduplication_key: None,
insert_text_mode: None,
confirm,
source: CompletionSource::Custom,
@@ -232,6 +234,8 @@ impl SlashCommandCompletionProvider {
icon_path: None,
new_text,
documentation: None,
match_start: None,
snippet_deduplication_key: None,
confirm,
insert_text_mode: None,
source: CompletionSource::Custom,


@@ -205,13 +205,9 @@ impl PasswordProxy {
} else {
ShellKind::Posix
};
let askpass_program = ASKPASS_PROGRAM
.get_or_init(|| current_exec)
.try_shell_safe(shell_kind)
.context("Failed to shell-escape Askpass program path.")?
.to_string();
let askpass_program = ASKPASS_PROGRAM.get_or_init(|| current_exec);
// Create an askpass script that communicates back to this process.
let askpass_script = generate_askpass_script(&askpass_program, &askpass_socket);
let askpass_script = generate_askpass_script(shell_kind, askpass_program, &askpass_socket)?;
let _task = executor.spawn(async move {
maybe!(async move {
let listener =
@@ -334,23 +330,51 @@ pub fn set_askpass_program(path: std::path::PathBuf) {
#[inline]
#[cfg(not(target_os = "windows"))]
fn generate_askpass_script(askpass_program: &str, askpass_socket: &std::path::Path) -> String {
format!(
fn generate_askpass_script(
shell_kind: ShellKind,
askpass_program: &std::path::Path,
askpass_socket: &std::path::Path,
) -> Result<String> {
let askpass_program = shell_kind.prepend_command_prefix(
askpass_program
.to_str()
.context("Askpass program is on a non-utf8 path")?,
);
let askpass_program = shell_kind
.try_quote_prefix_aware(&askpass_program)
.context("Failed to shell-escape Askpass program path")?;
let askpass_socket = askpass_socket
.try_shell_safe(shell_kind)
.context("Failed to shell-escape Askpass socket path")?;
let print_args = "printf '%s\\0' \"$@\"";
let shebang = "#!/bin/sh";
Ok(format!(
"{shebang}\n{print_args} | {askpass_program} --askpass={askpass_socket} 2> /dev/null \n",
askpass_socket = askpass_socket.display(),
print_args = "printf '%s\\0' \"$@\"",
shebang = "#!/bin/sh",
)
))
}
#[inline]
#[cfg(target_os = "windows")]
fn generate_askpass_script(askpass_program: &str, askpass_socket: &std::path::Path) -> String {
format!(
fn generate_askpass_script(
shell_kind: ShellKind,
askpass_program: &std::path::Path,
askpass_socket: &std::path::Path,
) -> Result<String> {
let askpass_program = shell_kind.prepend_command_prefix(
askpass_program
.to_str()
.context("Askpass program is on a non-utf8 path")?,
);
let askpass_program = shell_kind
.try_quote_prefix_aware(&askpass_program)
.context("Failed to shell-escape Askpass program path")?;
let askpass_socket = askpass_socket
.try_shell_safe(shell_kind)
.context("Failed to shell-escape Askpass socket path")?;
Ok(format!(
r#"
$ErrorActionPreference = 'Stop';
($args -join [char]0) | & {askpass_program} --askpass={askpass_socket} 2> $null
"#,
askpass_socket = askpass_socket.display(),
)
))
}
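The refactor above routes both the askpass program path and the socket path through shell-escaping before splicing them into the generated script. `try_shell_safe` and `try_quote_prefix_aware` are Zed-internal helpers; the sketch below illustrates the underlying idea with a minimal POSIX single-quote escape (an assumption, not Zed's implementation).

```rust
// Minimal POSIX single-quote escaping: wrap in '...' and turn every
// embedded ' into '\'' (close quote, escaped quote, reopen quote).
// A sketch of the idea behind `try_shell_safe`, not its actual code.
fn posix_quote(s: &str) -> String {
    format!("'{}'", s.replace('\'', r"'\''"))
}

// Hypothetical equivalent of the Unix generate_askpass_script: both
// paths are quoted before being interpolated into the shell script.
fn generate_askpass_script(program: &str, socket: &str) -> String {
    format!(
        "#!/bin/sh\nprintf '%s\\0' \"$@\" | {} --askpass={} 2> /dev/null\n",
        posix_quote(program),
        posix_quote(socket),
    )
}

fn main() {
    let script = generate_askpass_script("/Apps/Zed Preview/zed", "/tmp/askpass.sock");
    // The space in the program path is neutralized by the quoting.
    assert!(script.contains("'/Apps/Zed Preview/zed'"));
    println!("{script}");
}
```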


@@ -350,8 +350,7 @@ impl AutoUpdater {
pub fn start_polling(&self, cx: &mut Context<Self>) -> Task<Result<()>> {
cx.spawn(async move |this, cx| {
#[cfg(target_os = "windows")]
{
if cfg!(target_os = "windows") {
use util::ResultExt;
cleanup_windows()
@@ -903,28 +902,16 @@ async fn install_release_macos(
Ok(None)
}
#[cfg(target_os = "windows")]
async fn cleanup_windows() -> Result<()> {
use util::ResultExt;
let parent = std::env::current_exe()?
.parent()
.context("No parent dir for Zed.exe")?
.to_owned();
// keep in sync with crates/auto_update_helper/src/updater.rs
smol::fs::remove_dir(parent.join("updates"))
.await
.context("failed to remove updates dir")
.log_err();
smol::fs::remove_dir(parent.join("install"))
.await
.context("failed to remove install dir")
.log_err();
smol::fs::remove_dir(parent.join("old"))
.await
.context("failed to remove old version dir")
.log_err();
_ = smol::fs::remove_dir(parent.join("updates")).await;
_ = smol::fs::remove_dir(parent.join("install")).await;
_ = smol::fs::remove_dir(parent.join("old")).await;
Ok(())
}
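The cleanup above swaps `.context(...).log_err()` chains for plain `_ =` discards, on the reasoning that failing to remove a leftover directory is non-fatal. A small std-only sketch of that pattern (directory names borrowed from the diff):

```rust
use std::fs;
use std::path::Path;

// Best-effort cleanup: the directories may not exist, and that's fine.
// `_ = expr;` discards the Result without an unused-result warning,
// at the cost of losing the log line the old `.log_err()` produced.
fn cleanup(parent: &Path) {
    _ = fs::remove_dir(parent.join("updates"));
    _ = fs::remove_dir(parent.join("install"));
    _ = fs::remove_dir(parent.join("old"));
}

fn main() {
    // Removing directories that don't exist does not panic.
    cleanup(Path::new("/nonexistent-zed-test-dir"));
    println!("cleanup completed");
}
```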


@@ -1,6 +1,6 @@
use std::{
cell::LazyCell,
path::Path,
sync::LazyLock,
time::{Duration, Instant},
};
@@ -13,8 +13,8 @@ use windows::Win32::{
use crate::windows_impl::WM_JOB_UPDATED;
pub(crate) struct Job {
pub apply: Box<dyn Fn(&Path) -> Result<()>>,
pub rollback: Box<dyn Fn(&Path) -> Result<()>>,
pub apply: Box<dyn Fn(&Path) -> Result<()> + Send + Sync>,
pub rollback: Box<dyn Fn(&Path) -> Result<()> + Send + Sync>,
}
impl Job {
@@ -154,10 +154,8 @@ impl Job {
}
}
// app is single threaded
#[cfg(not(test))]
#[allow(clippy::declare_interior_mutable_const)]
pub(crate) const JOBS: LazyCell<[Job; 22]> = LazyCell::new(|| {
pub(crate) static JOBS: LazyLock<[Job; 22]> = LazyLock::new(|| {
fn p(value: &str) -> &Path {
Path::new(value)
}
@@ -206,10 +204,8 @@ pub(crate) const JOBS: LazyCell<[Job; 22]> = LazyCell::new(|| {
]
});
// app is single threaded
#[cfg(test)]
#[allow(clippy::declare_interior_mutable_const)]
pub(crate) const JOBS: LazyCell<[Job; 9]> = LazyCell::new(|| {
pub(crate) static JOBS: LazyLock<[Job; 9]> = LazyLock::new(|| {
fn p(value: &str) -> &Path {
Path::new(value)
}


@@ -1683,7 +1683,9 @@ impl LiveKitRoom {
}
}
#[derive(Default)]
enum LocalTrack<Stream: ?Sized> {
#[default]
None,
Pending {
publish_id: usize,
@@ -1694,12 +1696,6 @@ enum LocalTrack<Stream: ?Sized> {
},
}
impl<T: ?Sized> Default for LocalTrack<T> {
fn default() -> Self {
Self::None
}
}
#[derive(Copy, Clone, PartialEq, Eq)]
pub enum RoomStatus {
Online,

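The `LocalTrack` change relies on the standard derive: since Rust 1.62, `#[derive(Default)]` works on enums when one unit variant carries `#[default]`, which is what lets the diff delete the manual `impl Default`. A simplified sketch (the generic `Stream` parameter is dropped here for brevity):

```rust
// `#[derive(Default)]` on an enum requires exactly one variant to be
// marked `#[default]`; it replaces the hand-written impl the diff removes.
#[derive(Default, Debug, PartialEq)]
enum LocalTrack {
    #[default]
    None,
    Pending { publish_id: usize },
}

fn main() {
    assert_eq!(LocalTrack::default(), LocalTrack::None);
    // Non-default variants are unaffected by the attribute.
    let pending = LocalTrack::Pending { publish_id: 1 };
    assert_ne!(pending, LocalTrack::default());
    println!("{:?}", LocalTrack::default());
}
```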

@@ -51,3 +51,11 @@ pub fn external_agents_docs(cx: &App) -> String {
server_url = server_url(cx)
)
}
/// Returns the URL to Zed agent servers documentation.
pub fn agent_server_docs(cx: &App) -> String {
format!(
"{server_url}/docs/extensions/agent-servers",
server_url = server_url(cx)
)
}


@@ -58,6 +58,9 @@ pub const SERVER_SUPPORTS_STATUS_MESSAGES_HEADER_NAME: &str =
/// The name of the header used by the client to indicate that it supports receiving xAI models.
pub const CLIENT_SUPPORTS_X_AI_HEADER_NAME: &str = "x-zed-client-supports-x-ai";
/// The maximum number of edit predictions that can be rejected per request.
pub const MAX_EDIT_PREDICTION_REJECTIONS_PER_REQUEST: usize = 100;
#[derive(Debug, PartialEq, Clone, Copy, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum UsageLimit {
@@ -192,6 +195,17 @@ pub struct AcceptEditPredictionBody {
pub request_id: String,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct RejectEditPredictionsBody {
pub rejections: Vec<EditPredictionRejection>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct EditPredictionRejection {
pub request_id: String,
pub was_shown: bool,
}
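Taken together, the new constant and request body suggest clients batch rejections and cap each request at 100 entries. A hypothetical std-only sketch of such batching (the chunking logic is an assumption for illustration, not Zed's client code; serde derives are omitted):

```rust
// Mirrors the protocol types above, minus the serde derives.
const MAX_EDIT_PREDICTION_REJECTIONS_PER_REQUEST: usize = 100;

#[derive(Debug, Clone)]
struct EditPredictionRejection {
    request_id: String,
    was_shown: bool,
}

// Hypothetical client-side batching: split pending rejections into
// request-sized chunks that respect the per-request cap.
fn batch_rejections(
    pending: Vec<EditPredictionRejection>,
) -> Vec<Vec<EditPredictionRejection>> {
    pending
        .chunks(MAX_EDIT_PREDICTION_REJECTIONS_PER_REQUEST)
        .map(|chunk| chunk.to_vec())
        .collect()
}

fn main() {
    let pending: Vec<_> = (0..250)
        .map(|i| EditPredictionRejection {
            request_id: format!("req-{i}"),
            was_shown: i % 2 == 0,
        })
        .collect();
    let batches = batch_rejections(pending);
    assert_eq!(batches.len(), 3); // 100 + 100 + 50
    assert!(batches.iter().all(|b| b.len() <= 100));
    assert_eq!(batches[0][0].request_id, "req-0");
    assert!(batches[0][0].was_shown);
    println!("{} batches", batches.len());
}
```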
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Clone, Copy, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum CompletionMode {


@@ -44,7 +44,7 @@ pub struct SearchToolInput {
}
/// Search for relevant code by path, syntax hierarchy, and content.
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema)]
#[derive(Debug, Clone, Serialize, Deserialize, JsonSchema, Hash)]
pub struct SearchToolQuery {
/// 1. A glob pattern to match file paths in the codebase to search in.
pub glob: String,


@@ -0,0 +1 @@
drop table embeddings;


@@ -5,7 +5,6 @@ pub mod buffers;
pub mod channels;
pub mod contacts;
pub mod contributors;
pub mod embeddings;
pub mod extensions;
pub mod notifications;
pub mod projects;


@@ -1,94 +0,0 @@
use super::*;
use time::Duration;
use time::OffsetDateTime;
impl Database {
pub async fn get_embeddings(
&self,
model: &str,
digests: &[Vec<u8>],
) -> Result<HashMap<Vec<u8>, Vec<f32>>> {
self.transaction(|tx| async move {
let embeddings = {
let mut db_embeddings = embedding::Entity::find()
.filter(
embedding::Column::Model.eq(model).and(
embedding::Column::Digest
.is_in(digests.iter().map(|digest| digest.as_slice())),
),
)
.stream(&*tx)
.await?;
let mut embeddings = HashMap::default();
while let Some(db_embedding) = db_embeddings.next().await {
let db_embedding = db_embedding?;
embeddings.insert(db_embedding.digest, db_embedding.dimensions);
}
embeddings
};
if !embeddings.is_empty() {
let now = OffsetDateTime::now_utc();
let retrieved_at = PrimitiveDateTime::new(now.date(), now.time());
embedding::Entity::update_many()
.filter(
embedding::Column::Digest
.is_in(embeddings.keys().map(|digest| digest.as_slice())),
)
.col_expr(embedding::Column::RetrievedAt, Expr::value(retrieved_at))
.exec(&*tx)
.await?;
}
Ok(embeddings)
})
.await
}
pub async fn save_embeddings(
&self,
model: &str,
embeddings: &HashMap<Vec<u8>, Vec<f32>>,
) -> Result<()> {
self.transaction(|tx| async move {
embedding::Entity::insert_many(embeddings.iter().map(|(digest, dimensions)| {
let now_offset_datetime = OffsetDateTime::now_utc();
let retrieved_at =
PrimitiveDateTime::new(now_offset_datetime.date(), now_offset_datetime.time());
embedding::ActiveModel {
model: ActiveValue::set(model.to_string()),
digest: ActiveValue::set(digest.clone()),
dimensions: ActiveValue::set(dimensions.clone()),
retrieved_at: ActiveValue::set(retrieved_at),
}
}))
.on_conflict(
OnConflict::columns([embedding::Column::Model, embedding::Column::Digest])
.do_nothing()
.to_owned(),
)
.exec_without_returning(&*tx)
.await?;
Ok(())
})
.await
}
pub async fn purge_old_embeddings(&self) -> Result<()> {
self.transaction(|tx| async move {
embedding::Entity::delete_many()
.filter(
embedding::Column::RetrievedAt
.lte(OffsetDateTime::now_utc() - Duration::days(60)),
)
.exec(&*tx)
.await?;
Ok(())
})
.await
}
}


@@ -8,7 +8,6 @@ pub mod channel_chat_participant;
pub mod channel_member;
pub mod contact;
pub mod contributor;
pub mod embedding;
pub mod extension;
pub mod extension_version;
pub mod follower;


@@ -1,18 +0,0 @@
use sea_orm::entity::prelude::*;
use time::PrimitiveDateTime;
#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "embeddings")]
pub struct Model {
#[sea_orm(primary_key)]
pub model: String,
#[sea_orm(primary_key)]
pub digest: Vec<u8>,
pub dimensions: Vec<f32>,
pub retrieved_at: PrimitiveDateTime,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}
impl ActiveModelBehavior for ActiveModel {}


@@ -39,25 +39,6 @@ pub enum Relation {
Contributor,
}
impl Model {
/// Returns the timestamp of when the user's account was created.
///
/// This will be the earlier of the `created_at` and `github_user_created_at` timestamps.
pub fn account_created_at(&self) -> NaiveDateTime {
let mut account_created_at = self.created_at;
if let Some(github_created_at) = self.github_user_created_at {
account_created_at = account_created_at.min(github_created_at);
}
account_created_at
}
/// Returns the age of the user's account.
pub fn account_age(&self) -> chrono::Duration {
chrono::Utc::now().naive_utc() - self.account_created_at()
}
}
impl Related<super::access_token::Entity> for Entity {
fn to() -> RelationDef {
Relation::AccessToken.def()


@@ -2,9 +2,6 @@ mod buffer_tests;
mod channel_tests;
mod contributor_tests;
mod db_tests;
// we only run postgres tests on macos right now
#[cfg(target_os = "macos")]
mod embedding_tests;
mod extension_tests;
use crate::migrations::run_database_migrations;


@@ -1,87 +0,0 @@
use super::TestDb;
use crate::db::embedding;
use collections::HashMap;
use sea_orm::{ColumnTrait, EntityTrait, QueryFilter, sea_query::Expr};
use std::ops::Sub;
use time::{Duration, OffsetDateTime, PrimitiveDateTime};
// SQLite does not support array arguments, so we only test this against a real postgres instance
#[gpui::test]
async fn test_get_embeddings_postgres(cx: &mut gpui::TestAppContext) {
let test_db = TestDb::postgres(cx.executor());
let db = test_db.db();
let provider = "test_model";
let digest1 = vec![1, 2, 3];
let digest2 = vec![4, 5, 6];
let embeddings = HashMap::from_iter([
(digest1.clone(), vec![0.1, 0.2, 0.3]),
(digest2.clone(), vec![0.4, 0.5, 0.6]),
]);
// Save embeddings
db.save_embeddings(provider, &embeddings).await.unwrap();
// Retrieve embeddings
let retrieved_embeddings = db
.get_embeddings(provider, &[digest1.clone(), digest2.clone()])
.await
.unwrap();
assert_eq!(retrieved_embeddings.len(), 2);
assert!(retrieved_embeddings.contains_key(&digest1));
assert!(retrieved_embeddings.contains_key(&digest2));
// Check if the retrieved embeddings are correct
assert_eq!(retrieved_embeddings[&digest1], vec![0.1, 0.2, 0.3]);
assert_eq!(retrieved_embeddings[&digest2], vec![0.4, 0.5, 0.6]);
}
#[gpui::test]
async fn test_purge_old_embeddings(cx: &mut gpui::TestAppContext) {
let test_db = TestDb::postgres(cx.executor());
let db = test_db.db();
let model = "test_model";
let digest = vec![7, 8, 9];
let embeddings = HashMap::from_iter([(digest.clone(), vec![0.7, 0.8, 0.9])]);
// Save old embeddings
db.save_embeddings(model, &embeddings).await.unwrap();
// Reach into the DB and change the retrieved at to be > 60 days
db.transaction(|tx| {
let digest = digest.clone();
async move {
let sixty_days_ago = OffsetDateTime::now_utc().sub(Duration::days(61));
let retrieved_at = PrimitiveDateTime::new(sixty_days_ago.date(), sixty_days_ago.time());
embedding::Entity::update_many()
.filter(
embedding::Column::Model
.eq(model)
.and(embedding::Column::Digest.eq(digest)),
)
.col_expr(embedding::Column::RetrievedAt, Expr::value(retrieved_at))
.exec(&*tx)
.await
.unwrap();
Ok(())
}
})
.await
.unwrap();
// Purge old embeddings
db.purge_old_embeddings().await.unwrap();
// Try to retrieve the purged embeddings
let retrieved_embeddings = db
.get_embeddings(model, std::slice::from_ref(&digest))
.await
.unwrap();
assert!(
retrieved_embeddings.is_empty(),
"Old embeddings should have been purged"
);
}


@@ -13,7 +13,7 @@ use collab::llm::db::LlmDatabase;
use collab::migrations::run_database_migrations;
use collab::{
AppState, Config, Result, api::fetch_extensions_from_blob_store_periodically, db, env,
executor::Executor, rpc::ResultExt,
executor::Executor,
};
use db::Database;
use std::{
@@ -95,8 +95,6 @@ async fn main() -> Result<()> {
let state = AppState::new(config, Executor::Production).await?;
if mode.is_collab() {
state.db.purge_old_embeddings().await.trace_err();
let epoch = state
.db
.create_server(&state.config.zed_environment)


@@ -677,6 +677,8 @@ impl ConsoleQueryBarCompletionProvider {
),
new_text: string_match.string.clone(),
label: CodeLabel::plain(string_match.string.clone(), None),
match_start: None,
snippet_deduplication_key: None,
icon_path: None,
documentation: Some(CompletionDocumentation::MultiLineMarkdown(
variable_value.into(),
@@ -790,6 +792,8 @@ impl ConsoleQueryBarCompletionProvider {
documentation: completion.detail.map(|detail| {
CompletionDocumentation::MultiLineMarkdown(detail.into())
}),
match_start: None,
snippet_deduplication_key: None,
confirm: None,
source: project::CompletionSource::Dap { sort_text },
insert_text_mode: None,


@@ -370,11 +370,16 @@ impl BufferDiagnosticsEditor {
continue;
}
let languages = buffer_diagnostics_editor
.read_with(cx, |b, cx| b.project.read(cx).languages().clone())
.ok();
let diagnostic_blocks = cx.update(|_window, cx| {
DiagnosticRenderer::diagnostic_blocks_for_group(
group,
buffer_snapshot.remote_id(),
Some(Arc::new(buffer_diagnostics_editor.clone())),
languages,
cx,
)
})?;


@@ -6,7 +6,7 @@ use editor::{
hover_popover::diagnostics_markdown_style,
};
use gpui::{AppContext, Entity, Focusable, WeakEntity};
use language::{BufferId, Diagnostic, DiagnosticEntryRef};
use language::{BufferId, Diagnostic, DiagnosticEntryRef, LanguageRegistry};
use lsp::DiagnosticSeverity;
use markdown::{Markdown, MarkdownElement};
use settings::Settings;
@@ -27,6 +27,7 @@ impl DiagnosticRenderer {
diagnostic_group: Vec<DiagnosticEntryRef<'_, Point>>,
buffer_id: BufferId,
diagnostics_editor: Option<Arc<dyn DiagnosticsToolbarEditor>>,
language_registry: Option<Arc<LanguageRegistry>>,
cx: &mut App,
) -> Vec<DiagnosticBlock> {
let Some(primary_ix) = diagnostic_group
@@ -75,11 +76,14 @@ impl DiagnosticRenderer {
))
}
}
results.push(DiagnosticBlock {
initial_range: primary.range.clone(),
severity: primary.diagnostic.severity,
diagnostics_editor: diagnostics_editor.clone(),
markdown: cx.new(|cx| Markdown::new(markdown.into(), None, None, cx)),
markdown: cx.new(|cx| {
Markdown::new(markdown.into(), language_registry.clone(), None, cx)
}),
});
} else {
if entry.range.start.row.abs_diff(primary.range.start.row) >= 5 {
@@ -91,7 +95,9 @@ impl DiagnosticRenderer {
initial_range: entry.range.clone(),
severity: entry.diagnostic.severity,
diagnostics_editor: diagnostics_editor.clone(),
markdown: cx.new(|cx| Markdown::new(markdown.into(), None, None, cx)),
markdown: cx.new(|cx| {
Markdown::new(markdown.into(), language_registry.clone(), None, cx)
}),
});
}
}
@@ -118,9 +124,16 @@ impl editor::DiagnosticRenderer for DiagnosticRenderer {
buffer_id: BufferId,
snapshot: EditorSnapshot,
editor: WeakEntity<Editor>,
language_registry: Option<Arc<LanguageRegistry>>,
cx: &mut App,
) -> Vec<BlockProperties<Anchor>> {
let blocks = Self::diagnostic_blocks_for_group(diagnostic_group, buffer_id, None, cx);
let blocks = Self::diagnostic_blocks_for_group(
diagnostic_group,
buffer_id,
None,
language_registry,
cx,
);
blocks
.into_iter()
@@ -146,9 +159,16 @@ impl editor::DiagnosticRenderer for DiagnosticRenderer {
diagnostic_group: Vec<DiagnosticEntryRef<'_, Point>>,
range: Range<Point>,
buffer_id: BufferId,
language_registry: Option<Arc<LanguageRegistry>>,
cx: &mut App,
) -> Option<Entity<Markdown>> {
let blocks = Self::diagnostic_blocks_for_group(diagnostic_group, buffer_id, None, cx);
let blocks = Self::diagnostic_blocks_for_group(
diagnostic_group,
buffer_id,
None,
language_registry,
cx,
);
blocks
.into_iter()
.find_map(|block| (block.initial_range == range).then(|| block.markdown))
@@ -206,6 +226,11 @@ impl DiagnosticBlock {
self.markdown.clone(),
diagnostics_markdown_style(bcx.window, cx),
)
.code_block_renderer(markdown::CodeBlockRenderer::Default {
copy_button: false,
copy_button_on_hover: false,
border: false,
})
.on_url_click({
move |link, window, cx| {
editor


@@ -73,7 +73,7 @@ pub fn init(cx: &mut App) {
}
pub(crate) struct ProjectDiagnosticsEditor {
project: Entity<Project>,
pub project: Entity<Project>,
workspace: WeakEntity<Workspace>,
focus_handle: FocusHandle,
editor: Entity<Editor>,
@@ -182,7 +182,6 @@ impl ProjectDiagnosticsEditor {
project::Event::DiskBasedDiagnosticsFinished { language_server_id } => {
log::debug!("disk based diagnostics finished for server {language_server_id}");
this.close_diagnosticless_buffers(
window,
cx,
this.editor.focus_handle(cx).contains_focused(window, cx)
|| this.focus_handle.contains_focused(window, cx),
@@ -247,10 +246,10 @@ impl ProjectDiagnosticsEditor {
window.focus(&this.focus_handle);
}
}
EditorEvent::Blurred => this.close_diagnosticless_buffers(window, cx, false),
EditorEvent::Saved => this.close_diagnosticless_buffers(window, cx, true),
EditorEvent::Blurred => this.close_diagnosticless_buffers(cx, false),
EditorEvent::Saved => this.close_diagnosticless_buffers(cx, true),
EditorEvent::SelectionsChanged { .. } => {
this.close_diagnosticless_buffers(window, cx, true)
this.close_diagnosticless_buffers(cx, true)
}
_ => {}
}
@@ -298,12 +297,7 @@ impl ProjectDiagnosticsEditor {
/// - have no diagnostics anymore
/// - are saved (not dirty)
/// - and, if `retain_selections` is true, do not have selections within them
fn close_diagnosticless_buffers(
&mut self,
_window: &mut Window,
cx: &mut Context<Self>,
retain_selections: bool,
) {
fn close_diagnosticless_buffers(&mut self, cx: &mut Context<Self>, retain_selections: bool) {
let snapshot = self
.editor
.update(cx, |editor, cx| editor.display_snapshot(cx));
@@ -447,7 +441,7 @@ impl ProjectDiagnosticsEditor {
fn focus_out(&mut self, _: FocusOutEvent, window: &mut Window, cx: &mut Context<Self>) {
if !self.focus_handle.is_focused(window) && !self.editor.focus_handle(cx).is_focused(window)
{
self.close_diagnosticless_buffers(window, cx, false);
self.close_diagnosticless_buffers(cx, false);
}
}
@@ -461,8 +455,7 @@ impl ProjectDiagnosticsEditor {
});
}
});
self.multibuffer
.update(cx, |multibuffer, cx| multibuffer.clear(cx));
self.close_diagnosticless_buffers(cx, false);
self.project.update(cx, |project, cx| {
self.paths_to_update = project
.diagnostic_summaries(false, cx)
@@ -552,11 +545,15 @@ impl ProjectDiagnosticsEditor {
if group_severity.is_none_or(|s| s > max_severity) {
continue;
}
let languages = this
.read_with(cx, |t, cx| t.project.read(cx).languages().clone())
.ok();
let more = cx.update(|_, cx| {
crate::diagnostic_renderer::DiagnosticRenderer::diagnostic_blocks_for_group(
group,
buffer_snapshot.remote_id(),
Some(diagnostics_toolbar_editor.clone()),
languages,
cx,
)
})?;


@@ -104,6 +104,7 @@ pub trait EditPredictionProvider: 'static + Sized {
);
fn accept(&mut self, cx: &mut Context<Self>);
fn discard(&mut self, cx: &mut Context<Self>);
fn did_show(&mut self, _cx: &mut Context<Self>) {}
fn suggest(
&mut self,
buffer: &Entity<Buffer>,
@@ -142,6 +143,7 @@ pub trait EditPredictionProviderHandle {
direction: Direction,
cx: &mut App,
);
fn did_show(&self, cx: &mut App);
fn accept(&self, cx: &mut App);
fn discard(&self, cx: &mut App);
fn suggest(
@@ -233,6 +235,10 @@ where
self.update(cx, |this, cx| this.discard(cx))
}
fn did_show(&self, cx: &mut App) {
self.update(cx, |this, cx| this.did_show(cx))
}
fn suggest(
&self,
buffer: &Entity<Buffer>,

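The `did_show` additions above lean on trait default methods: giving the new hook an empty default body means existing `EditPredictionProvider` implementors keep compiling unchanged, and only providers that care about "shown" events override it. A simplified sketch (the gpui `Context` parameter is omitted):

```rust
// Adding a method with a default body keeps existing implementors
// compiling; only providers that care about the event override it.
trait EditPredictionProvider {
    fn accept(&mut self);
    // New hook, default no-op (mirrors the `did_show` added above).
    fn did_show(&mut self) {}
}

struct Silent;
impl EditPredictionProvider for Silent {
    fn accept(&mut self) {}
    // No did_show: the default no-op applies.
}

struct Counting {
    shown: usize,
}
impl EditPredictionProvider for Counting {
    fn accept(&mut self) {}
    fn did_show(&mut self) {
        self.shown += 1;
    }
}

fn main() {
    let mut s = Silent;
    s.did_show(); // compiles via the default impl

    let mut c = Counting { shown: 0 };
    c.did_show();
    assert_eq!(c.shown, 1);
    println!("shown = {}", c.shown);
}
```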

@@ -43,7 +43,8 @@ actions!(
]
);
const COPILOT_SETTINGS_URL: &str = "https://github.com/settings/copilot";
const COPILOT_SETTINGS_PATH: &str = "/settings/copilot";
const COPILOT_SETTINGS_URL: &str = concat!("https://github.com", "/settings/copilot");
const PRIVACY_DOCS: &str = "https://zed.dev/docs/ai/privacy-and-security";
struct CopilotErrorToast;
@@ -128,20 +129,21 @@ impl Render for EditPredictionButton {
}),
);
}
let this = cx.entity();
let this = cx.weak_entity();
div().child(
PopoverMenu::new("copilot")
.menu(move |window, cx| {
let current_status = Copilot::global(cx)?.read(cx).status();
Some(match current_status {
match current_status {
Status::Authorized => this.update(cx, |this, cx| {
this.build_copilot_context_menu(window, cx)
}),
_ => this.update(cx, |this, cx| {
this.build_copilot_start_menu(window, cx)
}),
})
}
.ok()
})
.anchor(Corner::BottomRight)
.trigger_with_tooltip(
@@ -182,7 +184,7 @@ impl Render for EditPredictionButton {
let icon = status.to_icon();
let tooltip_text = status.to_tooltip();
let has_menu = status.has_menu();
let this = cx.entity();
let this = cx.weak_entity();
let fs = self.fs.clone();
div().child(
@@ -209,9 +211,11 @@ impl Render for EditPredictionButton {
)
}))
}
SupermavenButtonStatus::Ready => Some(this.update(cx, |this, cx| {
this.build_supermaven_context_menu(window, cx)
})),
SupermavenButtonStatus::Ready => this
.update(cx, |this, cx| {
this.build_supermaven_context_menu(window, cx)
})
.ok(),
_ => None,
})
.anchor(Corner::BottomRight)
@@ -233,15 +237,16 @@ impl Render for EditPredictionButton {
let enabled = self.editor_enabled.unwrap_or(true);
let has_api_key = CodestralCompletionProvider::has_api_key(cx);
let fs = self.fs.clone();
let this = cx.entity();
let this = cx.weak_entity();
div().child(
PopoverMenu::new("codestral")
.menu(move |window, cx| {
if has_api_key {
Some(this.update(cx, |this, cx| {
this.update(cx, |this, cx| {
this.build_codestral_context_menu(window, cx)
}))
})
.ok()
} else {
Some(ContextMenu::build(window, cx, |menu, _, _| {
let fs = fs.clone();
@@ -832,6 +837,16 @@ impl EditPredictionButton {
window: &mut Window,
cx: &mut Context<Self>,
) -> Entity<ContextMenu> {
let all_language_settings = all_language_settings(None, cx);
let copilot_config = copilot::copilot_chat::CopilotChatConfiguration {
enterprise_uri: all_language_settings
.edit_predictions
.copilot
.enterprise_uri
.clone(),
};
let settings_url = copilot_settings_url(copilot_config.enterprise_uri.as_deref());
ContextMenu::build(window, cx, |menu, window, cx| {
let menu = self.build_language_settings_menu(menu, window, cx);
let menu =
@@ -840,10 +855,7 @@ impl EditPredictionButton {
menu.separator()
.link(
"Go to Copilot Settings",
OpenBrowser {
url: COPILOT_SETTINGS_URL.to_string(),
}
.boxed_clone(),
OpenBrowser { url: settings_url }.boxed_clone(),
)
.action("Sign Out", copilot::SignOut.boxed_clone())
})
@@ -1172,3 +1184,99 @@ fn toggle_edit_prediction_mode(fs: Arc<dyn Fs>, mode: EditPredictionsMode, cx: &
});
}
}
fn copilot_settings_url(enterprise_uri: Option<&str>) -> String {
match enterprise_uri {
Some(uri) => {
format!("{}{}", uri.trim_end_matches('/'), COPILOT_SETTINGS_PATH)
}
None => COPILOT_SETTINGS_URL.to_string(),
}
}
#[cfg(test)]
mod tests {
use super::*;
use gpui::TestAppContext;
#[gpui::test]
async fn test_copilot_settings_url_with_enterprise_uri(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
});
cx.update_global(|settings_store: &mut SettingsStore, cx| {
settings_store
.set_user_settings(
r#"{"edit_predictions":{"copilot":{"enterprise_uri":"https://my-company.ghe.com"}}}"#,
cx,
)
.unwrap();
});
let url = cx.update(|cx| {
let all_language_settings = all_language_settings(None, cx);
copilot_settings_url(
all_language_settings
.edit_predictions
.copilot
.enterprise_uri
.as_deref(),
)
});
assert_eq!(url, "https://my-company.ghe.com/settings/copilot");
}
#[gpui::test]
async fn test_copilot_settings_url_with_enterprise_uri_trailing_slash(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
});
cx.update_global(|settings_store: &mut SettingsStore, cx| {
settings_store
.set_user_settings(
r#"{"edit_predictions":{"copilot":{"enterprise_uri":"https://my-company.ghe.com/"}}}"#,
cx,
)
.unwrap();
});
let url = cx.update(|cx| {
let all_language_settings = all_language_settings(None, cx);
copilot_settings_url(
all_language_settings
.edit_predictions
.copilot
.enterprise_uri
.as_deref(),
)
});
assert_eq!(url, "https://my-company.ghe.com/settings/copilot");
}
#[gpui::test]
async fn test_copilot_settings_url_without_enterprise_uri(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
});
let url = cx.update(|cx| {
let all_language_settings = all_language_settings(None, cx);
copilot_settings_url(
all_language_settings
.edit_predictions
.copilot
.enterprise_uri
.as_deref(),
)
});
assert_eq!(url, "https://github.com/settings/copilot");
}
}


@@ -305,6 +305,8 @@ impl CompletionBuilder {
icon_path: None,
insert_text_mode: None,
confirm: None,
match_start: None,
snippet_deduplication_key: None,
}
}
}


@@ -17,7 +17,6 @@ use project::{CompletionDisplayOptions, CompletionSource};
use task::DebugScenario;
use task::TaskContext;
use std::collections::VecDeque;
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::{
@@ -36,12 +35,13 @@ use util::ResultExt;
use crate::hover_popover::{hover_markdown_style, open_markdown_url};
use crate::{
CodeActionProvider, CompletionId, CompletionItemKind, CompletionProvider, DisplayRow, Editor,
EditorStyle, ResolvedTasks,
CodeActionProvider, CompletionId, CompletionProvider, DisplayRow, Editor, EditorStyle,
ResolvedTasks,
actions::{ConfirmCodeAction, ConfirmCompletion},
split_words, styled_runs_for_code_label,
};
use crate::{CodeActionSource, EditorSettings};
use collections::{HashSet, VecDeque};
use settings::{Settings, SnippetSortOrder};
pub const MENU_GAP: Pixels = px(4.);
@@ -220,7 +220,9 @@ pub struct CompletionsMenu {
pub is_incomplete: bool,
pub buffer: Entity<Buffer>,
pub completions: Rc<RefCell<Box<[Completion]>>>,
match_candidates: Arc<[StringMatchCandidate]>,
/// String match candidate for each completion, grouped by `match_start`.
match_candidates: Arc<[(Option<text::Anchor>, Vec<StringMatchCandidate>)]>,
/// Entries displayed in the menu, which is a filtered and sorted subset of `match_candidates`.
pub entries: Rc<RefCell<Box<[StringMatch]>>>,
pub selected_item: usize,
filter_task: Task<()>,
@@ -308,6 +310,8 @@ impl CompletionsMenu {
.iter()
.enumerate()
.map(|(id, completion)| StringMatchCandidate::new(id, completion.label.filter_text()))
.into_group_map_by(|candidate| completions[candidate.id].match_start)
.into_iter()
.collect();
let completions_menu = Self {
@@ -355,6 +359,8 @@ impl CompletionsMenu {
replace_range: selection.start.text_anchor..selection.end.text_anchor,
new_text: choice.to_string(),
label: CodeLabel::plain(choice.to_string(), None),
match_start: None,
snippet_deduplication_key: None,
icon_path: None,
documentation: None,
confirm: None,
@@ -363,11 +369,14 @@ impl CompletionsMenu {
})
.collect();
let match_candidates = choices
.iter()
.enumerate()
.map(|(id, completion)| StringMatchCandidate::new(id, completion))
.collect();
let match_candidates = Arc::new([(
None,
choices
.iter()
.enumerate()
.map(|(id, completion)| StringMatchCandidate::new(id, completion))
.collect(),
)]);
let entries = choices
.iter()
.enumerate()
@@ -497,7 +506,7 @@ impl CompletionsMenu {
cx: &mut Context<Editor>,
) {
self.scroll_handle
.scroll_to_item(self.selected_item, ScrollStrategy::Top);
.scroll_to_item(self.selected_item, ScrollStrategy::Nearest);
if let Some(provider) = provider {
let entries = self.entries.borrow();
let entry = if self.selected_item < entries.len() {
@@ -948,7 +957,7 @@ impl CompletionsMenu {
}
let mat = &self.entries.borrow()[self.selected_item];
let completions = self.completions.borrow_mut();
let completions = self.completions.borrow();
let multiline_docs = match completions[mat.candidate_id].documentation.as_ref() {
Some(CompletionDocumentation::MultiLinePlainText(text)) => div().child(text.clone()),
Some(CompletionDocumentation::SingleLineAndMultiLinePlainText {
@@ -1026,57 +1035,74 @@ impl CompletionsMenu {
pub fn filter(
&mut self,
query: Option<Arc<String>>,
query: Arc<String>,
query_end: text::Anchor,
buffer: &Entity<Buffer>,
provider: Option<Rc<dyn CompletionProvider>>,
window: &mut Window,
cx: &mut Context<Editor>,
) {
self.cancel_filter.store(true, Ordering::Relaxed);
if let Some(query) = query {
self.cancel_filter = Arc::new(AtomicBool::new(false));
let matches = self.do_async_filtering(query, cx);
let id = self.id;
self.filter_task = cx.spawn_in(window, async move |editor, cx| {
let matches = matches.await;
editor
.update_in(cx, |editor, window, cx| {
editor.with_completions_menu_matching_id(id, |this| {
if let Some(this) = this {
this.set_filter_results(matches, provider, window, cx);
}
});
})
.ok();
});
} else {
self.filter_task = Task::ready(());
let matches = self.unfiltered_matches();
self.set_filter_results(matches, provider, window, cx);
}
self.cancel_filter = Arc::new(AtomicBool::new(false));
let matches = self.do_async_filtering(query, query_end, buffer, cx);
let id = self.id;
self.filter_task = cx.spawn_in(window, async move |editor, cx| {
let matches = matches.await;
editor
.update_in(cx, |editor, window, cx| {
editor.with_completions_menu_matching_id(id, |this| {
if let Some(this) = this {
this.set_filter_results(matches, provider, window, cx);
}
});
})
.ok();
});
}
pub fn do_async_filtering(
&self,
query: Arc<String>,
query_end: text::Anchor,
buffer: &Entity<Buffer>,
cx: &Context<Editor>,
) -> Task<Vec<StringMatch>> {
let matches_task = cx.background_spawn({
let query = query.clone();
let match_candidates = self.match_candidates.clone();
let cancel_filter = self.cancel_filter.clone();
let background_executor = cx.background_executor().clone();
async move {
fuzzy::match_strings(
&match_candidates,
&query,
query.chars().any(|c| c.is_uppercase()),
false,
1000,
&cancel_filter,
background_executor,
)
.await
let buffer_snapshot = buffer.read(cx).snapshot();
let background_executor = cx.background_executor().clone();
let match_candidates = self.match_candidates.clone();
let cancel_filter = self.cancel_filter.clone();
let default_query = query.clone();
let matches_task = cx.background_spawn(async move {
let queries_and_candidates = match_candidates
.iter()
.map(|(query_start, candidates)| {
let query_for_batch = match query_start {
Some(start) => {
Arc::new(buffer_snapshot.text_for_range(*start..query_end).collect())
}
None => default_query.clone(),
};
(query_for_batch, candidates)
})
.collect_vec();
let mut results = vec![];
for (query, match_candidates) in queries_and_candidates {
results.extend(
fuzzy::match_strings(
&match_candidates,
&query,
query.chars().any(|c| c.is_uppercase()),
false,
1000,
&cancel_filter,
background_executor.clone(),
)
.await,
);
}
results
});
let completions = self.completions.clone();
@@ -1085,45 +1111,31 @@ impl CompletionsMenu {
cx.foreground_executor().spawn(async move {
let mut matches = matches_task.await;
let completions_ref = completions.borrow();
if sort_completions {
matches = Self::sort_string_matches(
matches,
Some(&query),
Some(&query), // used for non-snippets only
snippet_sort_order,
completions.borrow().as_ref(),
&completions_ref,
);
}
// Remove duplicate snippet prefixes (e.g., "cool code" will match
// the text "c c" in two places; we should only show the longer one)
let mut snippets_seen = HashSet::<(usize, usize)>::default();
matches.retain(|result| {
match completions_ref[result.candidate_id].snippet_deduplication_key {
Some(key) => snippets_seen.insert(key),
None => true,
}
});
matches
})
}
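The snippet deduplication step above retains only the first match per `snippet_deduplication_key` (a `(usize, usize)` pairing a snippet index with a prefix index), relying on the matches already being ordered longest-first. A minimal standalone sketch of that retain, with illustrative names:

```rust
use std::collections::HashSet;

// Keep only the first item per deduplication key, preserving order; items
// without a key are always kept. Because longer matches are sorted first,
// the survivor for each key is the longest one.
fn dedup_by_key<T>(items: &mut Vec<(T, Option<(usize, usize)>)>) {
    let mut seen = HashSet::new();
    items.retain(|(_, key)| match key {
        Some(k) => seen.insert(*k), // `insert` returns false on a repeat key
        None => true,
    });
}

fn main() {
    // "cool code" and "c c" both come from the same snippet/prefix pair (0, 0):
    let mut matches = vec![
        ("cool code", Some((0, 0))),
        ("c c", Some((0, 0))),
        ("plain word", None),
    ];
    dedup_by_key(&mut matches);
    assert_eq!(matches.len(), 2);
    assert_eq!(matches[0].0, "cool code");
    println!("ok");
}
```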
/// Like `do_async_filtering` but there is no filter query, so no need to spawn tasks.
pub fn unfiltered_matches(&self) -> Vec<StringMatch> {
let mut matches = self
.match_candidates
.iter()
.enumerate()
.map(|(candidate_id, candidate)| StringMatch {
candidate_id,
score: Default::default(),
positions: Default::default(),
string: candidate.string.clone(),
})
.collect();
if self.sort_completions {
matches = Self::sort_string_matches(
matches,
None,
self.snippet_sort_order,
self.completions.borrow().as_ref(),
);
}
matches
}
pub fn set_filter_results(
&mut self,
matches: Vec<StringMatch>,
@@ -1166,28 +1178,13 @@ impl CompletionsMenu {
.and_then(|c| c.to_lowercase().next());
if snippet_sort_order == SnippetSortOrder::None {
matches.retain(|string_match| {
let completion = &completions[string_match.candidate_id];
let is_snippet = matches!(
&completion.source,
CompletionSource::Lsp { lsp_completion, .. }
if lsp_completion.kind == Some(CompletionItemKind::SNIPPET)
);
!is_snippet
});
matches
.retain(|string_match| !completions[string_match.candidate_id].is_snippet_kind());
}
matches.sort_unstable_by_key(|string_match| {
let completion = &completions[string_match.candidate_id];
let is_snippet = matches!(
&completion.source,
CompletionSource::Lsp { lsp_completion, .. }
if lsp_completion.kind == Some(CompletionItemKind::SNIPPET)
);
let sort_text = match &completion.source {
CompletionSource::Lsp { lsp_completion, .. } => lsp_completion.sort_text.as_deref(),
CompletionSource::Dap { sort_text } => Some(sort_text.as_str()),
@@ -1199,14 +1196,17 @@ impl CompletionsMenu {
let score = string_match.score;
let sort_score = Reverse(OrderedFloat(score));
let query_start_doesnt_match_split_words = query_start_lower
.map(|query_char| {
!split_words(&string_match.string).any(|word| {
word.chars().next().and_then(|c| c.to_lowercase().next())
== Some(query_char)
// Snippets do their own first-letter matching logic elsewhere.
let is_snippet = completion.is_snippet_kind();
let query_start_doesnt_match_split_words = !is_snippet
&& query_start_lower
.map(|query_char| {
!split_words(&string_match.string).any(|word| {
word.chars().next().and_then(|c| c.to_lowercase().next())
== Some(query_char)
})
})
})
.unwrap_or(false);
.unwrap_or(false);
if query_start_doesnt_match_split_words {
MatchTier::OtherMatch { sort_score }
@@ -1218,6 +1218,7 @@ impl CompletionsMenu {
SnippetSortOrder::None => Reverse(0),
};
let sort_positions = string_match.positions.clone();
// This exact matching won't work for multi-word snippets, but it's fine
let sort_exact = Reverse(if Some(completion.label.filter_text()) == query {
1
} else {


@@ -1097,7 +1097,7 @@ impl DisplaySnapshot {
details: &TextLayoutDetails,
) -> u32 {
let layout_line = self.layout_row(display_row, details);
layout_line.index_for_x(x) as u32
layout_line.closest_index_for_x(x) as u32
}
pub fn grapheme_at(&self, mut point: DisplayPoint) -> Option<SharedString> {


@@ -248,10 +248,8 @@ impl<'a> Iterator for InlayChunks<'a> {
// Determine split index handling edge cases
let split_index = if desired_bytes >= chunk.text.len() {
chunk.text.len()
} else if chunk.text.is_char_boundary(desired_bytes) {
desired_bytes
} else {
find_next_utf8_boundary(chunk.text, desired_bytes)
chunk.text.ceil_char_boundary(desired_bytes)
};
let (prefix, suffix) = chunk.text.split_at(split_index);
@@ -373,10 +371,8 @@ impl<'a> Iterator for InlayChunks<'a> {
.next()
.map(|c| c.len_utf8())
.unwrap_or(1)
} else if inlay_chunk.is_char_boundary(next_inlay_highlight_endpoint) {
next_inlay_highlight_endpoint
} else {
find_next_utf8_boundary(inlay_chunk, next_inlay_highlight_endpoint)
inlay_chunk.ceil_char_boundary(next_inlay_highlight_endpoint)
};
let (chunk, remainder) = inlay_chunk.split_at(split_index);
@@ -1146,31 +1142,6 @@ fn push_isomorphic(sum_tree: &mut SumTree<Transform>, summary: TextSummary) {
}
}
/// Given a byte index that is NOT a UTF-8 boundary, find the next one.
/// Assumes: 0 < byte_index < text.len() and !text.is_char_boundary(byte_index)
#[inline(always)]
fn find_next_utf8_boundary(text: &str, byte_index: usize) -> usize {
let bytes = text.as_bytes();
let mut idx = byte_index + 1;
// Scan forward until we find a boundary
while idx < text.len() {
if is_utf8_char_boundary(bytes[idx]) {
return idx;
}
idx += 1;
}
// Hit the end, return the full length
text.len()
}
// Private helper function taken from Rust's core::num module (which is both Apache2 and MIT licensed)
const fn is_utf8_char_boundary(byte: u8) -> bool {
// This is bit magic equivalent to: b < 128 || b >= 192
(byte as i8) >= -0x40
}
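The removed `find_next_utf8_boundary` helper above is replaced in the hunks by `str::ceil_char_boundary`, which snaps a byte index forward to the nearest char boundary at or after it. A standalone sketch of that behavior using only `str::is_char_boundary` (the function name here is illustrative):

```rust
// Snap `index` forward to the next UTF-8 char boundary, clamping to `len`.
fn ceil_boundary(text: &str, mut index: usize) -> usize {
    if index >= text.len() {
        return text.len();
    }
    // UTF-8 sequences are at most 4 bytes, so this loop advances at most 3 times.
    while !text.is_char_boundary(index) {
        index += 1;
    }
    index
}

fn main() {
    let s = "a€b"; // '€' occupies bytes 1..=3; 'b' starts at byte 4
    assert_eq!(ceil_boundary(s, 2), 4); // mid-'€' snaps forward to 'b'
    assert_eq!(ceil_boundary(s, 1), 1); // already a boundary: unchanged
    assert_eq!(ceil_boundary(s, 9), 5); // past the end clamps to len
    println!("ok");
}
```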
#[cfg(test)]
mod tests {
use super::*;


@@ -74,7 +74,7 @@ use ::git::{
blame::{BlameEntry, ParsedCommitMessage},
status::FileStatus,
};
use aho_corasick::AhoCorasick;
use aho_corasick::{AhoCorasick, AhoCorasickBuilder, BuildError};
use anyhow::{Context as _, Result, anyhow};
use blink_manager::BlinkManager;
use buffer_diff::DiffHunkStatus;
@@ -117,8 +117,9 @@ use language::{
AutoindentMode, BlockCommentConfig, BracketMatch, BracketPair, Buffer, BufferRow,
BufferSnapshot, Capability, CharClassifier, CharKind, CharScopeContext, CodeLabel, CursorShape,
DiagnosticEntryRef, DiffOptions, EditPredictionsMode, EditPreview, HighlightedText, IndentKind,
IndentSize, Language, OffsetRangeExt, OutlineItem, Point, Runnable, RunnableRange, Selection,
SelectionGoal, TextObject, TransactionId, TreeSitterOptions, WordsQuery,
IndentSize, Language, LanguageRegistry, OffsetRangeExt, OutlineItem, Point, Runnable,
RunnableRange, Selection, SelectionGoal, TextObject, TransactionId, TreeSitterOptions,
WordsQuery,
language_settings::{
self, LspInsertMode, RewrapBehavior, WordsCompletionMode, all_language_settings,
language_settings,
@@ -371,6 +372,7 @@ pub trait DiagnosticRenderer {
buffer_id: BufferId,
snapshot: EditorSnapshot,
editor: WeakEntity<Editor>,
language_registry: Option<Arc<LanguageRegistry>>,
cx: &mut App,
) -> Vec<BlockProperties<Anchor>>;
@@ -379,6 +381,7 @@ pub trait DiagnosticRenderer {
diagnostic_group: Vec<DiagnosticEntryRef<'_, Point>>,
range: Range<Point>,
buffer_id: BufferId,
language_registry: Option<Arc<LanguageRegistry>>,
cx: &mut App,
) -> Option<Entity<markdown::Markdown>>;
@@ -1190,6 +1193,7 @@ pub struct Editor {
refresh_colors_task: Task<()>,
inlay_hints: Option<LspInlayHintData>,
folding_newlines: Task<()>,
select_next_is_case_sensitive: Option<bool>,
pub lookup_key: Option<Box<dyn Any + Send + Sync>>,
}
@@ -1296,8 +1300,9 @@ struct SelectionHistoryEntry {
add_selections_state: Option<AddSelectionsState>,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
#[derive(Copy, Clone, Default, Debug, PartialEq, Eq)]
enum SelectionHistoryMode {
#[default]
Normal,
Undoing,
Redoing,
@@ -1310,12 +1315,6 @@ struct HoveredCursor {
selection_id: usize,
}
impl Default for SelectionHistoryMode {
fn default() -> Self {
Self::Normal
}
}
#[derive(Debug)]
/// SelectionEffects controls the side-effects of updating the selection.
///
@@ -2333,6 +2332,7 @@ impl Editor {
selection_drag_state: SelectionDragState::None,
folding_newlines: Task::ready(()),
lookup_key: None,
select_next_is_case_sensitive: None,
};
if is_minimap {
@@ -3447,6 +3447,21 @@ impl Editor {
Subscription::join(other_subscription, this_subscription)
}
fn unfold_buffers_with_selections(&mut self, cx: &mut Context<Self>) {
if self.buffer().read(cx).is_singleton() {
return;
}
let snapshot = self.buffer.read(cx).snapshot(cx);
let buffer_ids: HashSet<BufferId> = self
.selections
.disjoint_anchor_ranges()
.flat_map(|range| snapshot.buffer_ids_for_range(range))
.collect();
for buffer_id in buffer_ids {
self.unfold_buffer(buffer_id, cx);
}
}
/// Changes selections using the provided mutation function. Changes to `self.selections` occur
/// immediately, but when run within `transact` or `with_selection_effects_deferred` other
/// effects of selection change occur at the end of the transaction.
@@ -4059,17 +4074,24 @@ impl Editor {
self.selection_mark_mode = false;
self.selection_drag_state = SelectionDragState::None;
if self.dismiss_menus_and_popups(true, window, cx) {
cx.notify();
return;
}
if self.clear_expanded_diff_hunks(cx) {
cx.notify();
return;
}
if self.dismiss_menus_and_popups(true, window, cx) {
if self.show_git_blame_gutter {
self.show_git_blame_gutter = false;
cx.notify();
return;
}
if self.mode.is_full()
&& self.change_selections(Default::default(), window, cx, |s| s.try_cancel())
{
cx.notify();
return;
}
@@ -4188,6 +4210,8 @@ impl Editor {
self.hide_mouse_cursor(HideMouseCursorOrigin::TypingAction, cx);
self.unfold_buffers_with_selections(cx);
let selections = self.selections.all_adjusted(&self.display_snapshot(cx));
let mut bracket_inserted = false;
let mut edits = Vec::new();
@@ -5491,7 +5515,14 @@ impl Editor {
if let Some(CodeContextMenu::Completions(menu)) = self.context_menu.borrow_mut().as_mut() {
if filter_completions {
menu.filter(query.clone(), provider.clone(), window, cx);
menu.filter(
query.clone().unwrap_or_default(),
buffer_position.text_anchor,
&buffer,
provider.clone(),
window,
cx,
);
}
// When `is_incomplete` is false, no need to re-query completions when the current query
// is a suffix of the initial query.
@@ -5500,7 +5531,7 @@ impl Editor {
// If the new query is a suffix of the old query (typing more characters) and
// the previous result was complete, the existing completions can be filtered.
//
// Note that this is always true for snippet completions.
// Note that snippet completions are always complete.
let query_matches = match (&menu.initial_query, &query) {
(Some(initial_query), Some(query)) => query.starts_with(initial_query.as_ref()),
(None, _) => true,
@@ -5630,12 +5661,15 @@ impl Editor {
};
let mut words = if load_word_completions {
cx.background_spawn(async move {
buffer_snapshot.words_in_range(WordsQuery {
fuzzy_contents: None,
range: word_search_range,
skip_digits,
})
cx.background_spawn({
let buffer_snapshot = buffer_snapshot.clone();
async move {
buffer_snapshot.words_in_range(WordsQuery {
fuzzy_contents: None,
range: word_search_range,
skip_digits,
})
}
})
} else {
Task::ready(BTreeMap::default())
@@ -5645,8 +5679,11 @@ impl Editor {
&& provider.show_snippets()
&& let Some(project) = self.project()
{
let char_classifier = buffer_snapshot
.char_classifier_at(buffer_position)
.scope_context(Some(CharScopeContext::Completion));
project.update(cx, |project, cx| {
snippet_completions(project, &buffer, buffer_position, cx)
snippet_completions(project, &buffer, buffer_position, char_classifier, cx)
})
} else {
Task::ready(Ok(CompletionResponse {
@@ -5701,6 +5738,8 @@ impl Editor {
replace_range: word_replace_range.clone(),
new_text: word.clone(),
label: CodeLabel::plain(word, None),
match_start: None,
snippet_deduplication_key: None,
icon_path: None,
documentation: None,
source: CompletionSource::BufferWord {
@@ -5749,11 +5788,12 @@ impl Editor {
);
let query = if filter_completions { query } else { None };
let matches_task = if let Some(query) = query {
menu.do_async_filtering(query, cx)
} else {
Task::ready(menu.unfiltered_matches())
};
let matches_task = menu.do_async_filtering(
query.unwrap_or_default(),
buffer_position,
&buffer,
cx,
);
(menu, matches_task)
}) else {
return;
@@ -5770,7 +5810,7 @@ impl Editor {
return;
};
// Only valid to take prev_menu because it the new menu is immediately set
// Only valid to take prev_menu because either the new menu is immediately set
// below, or the menu is hidden.
if let Some(CodeContextMenu::Completions(prev_menu)) =
editor.context_menu.borrow_mut().take()
@@ -7825,6 +7865,10 @@ impl Editor {
self.edit_prediction_preview,
EditPredictionPreview::Inactive { .. }
) {
if let Some(provider) = self.edit_prediction_provider.as_ref() {
provider.provider.did_show(cx)
}
self.edit_prediction_preview = EditPredictionPreview::Active {
previous_scroll_position: None,
since: Instant::now(),
@@ -8004,6 +8048,9 @@ impl Editor {
&& !self.edit_predictions_hidden_for_vim_mode;
if show_completions_in_buffer {
if let Some(provider) = &self.edit_prediction_provider {
provider.provider.did_show(cx);
}
if edits
.iter()
.all(|(range, _)| range.to_offset(&multibuffer).is_empty())
@@ -11018,6 +11065,10 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Self>,
) {
if self.breakpoint_store.is_none() {
return;
}
for (anchor, breakpoint) in self.breakpoints_at_cursors(window, cx) {
let breakpoint = breakpoint.unwrap_or_else(|| Breakpoint {
message: None,
@@ -11077,6 +11128,10 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Self>,
) {
if self.breakpoint_store.is_none() {
return;
}
for (anchor, breakpoint) in self.breakpoints_at_cursors(window, cx) {
let Some(breakpoint) = breakpoint.filter(|breakpoint| breakpoint.is_disabled()) else {
continue;
@@ -11096,6 +11151,10 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Self>,
) {
if self.breakpoint_store.is_none() {
return;
}
for (anchor, breakpoint) in self.breakpoints_at_cursors(window, cx) {
let Some(breakpoint) = breakpoint.filter(|breakpoint| breakpoint.is_enabled()) else {
continue;
@@ -11115,6 +11174,10 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Self>,
) {
if self.breakpoint_store.is_none() {
return;
}
for (anchor, breakpoint) in self.breakpoints_at_cursors(window, cx) {
if let Some(breakpoint) = breakpoint {
self.edit_breakpoint_at_anchor(
@@ -12522,6 +12585,7 @@ impl Editor {
{
let max_point = buffer.max_point();
let mut is_first = true;
let mut prev_selection_was_entire_line = false;
for selection in &mut selections {
let is_entire_line =
(selection.is_empty() && cut_no_selection_line) || self.selections.line_mode();
@@ -12536,9 +12600,10 @@ impl Editor {
}
if is_first {
is_first = false;
} else {
} else if !prev_selection_was_entire_line {
text += "\n";
}
prev_selection_was_entire_line = is_entire_line;
let mut len = 0;
for chunk in buffer.text_for_range(selection.start..selection.end) {
text.push_str(chunk);
@@ -12621,6 +12686,7 @@ impl Editor {
{
let max_point = buffer.max_point();
let mut is_first = true;
let mut prev_selection_was_entire_line = false;
for selection in &selections {
let mut start = selection.start;
let mut end = selection.end;
@@ -12679,9 +12745,10 @@ impl Editor {
for trimmed_range in trimmed_selections {
if is_first {
is_first = false;
} else {
} else if !prev_selection_was_entire_line {
text += "\n";
}
prev_selection_was_entire_line = is_entire_line;
let mut len = 0;
for chunk in buffer.text_for_range(trimmed_range.start..trimmed_range.end) {
text.push_str(chunk);
@@ -12755,7 +12822,11 @@ impl Editor {
let end_offset = start_offset + clipboard_selection.len;
to_insert = &clipboard_text[start_offset..end_offset];
entire_line = clipboard_selection.is_entire_line;
start_offset = end_offset + 1;
start_offset = if entire_line {
end_offset
} else {
end_offset + 1
};
original_indent_column = Some(clipboard_selection.first_line_indent);
} else {
to_insert = &*clipboard_text;
@@ -14645,7 +14716,7 @@ impl Editor {
.collect::<String>();
let is_empty = query.is_empty();
let select_state = SelectNextState {
query: AhoCorasick::new(&[query])?,
query: self.build_query(&[query], cx)?,
wordwise: true,
done: is_empty,
};
@@ -14655,7 +14726,7 @@ impl Editor {
}
} else if let Some(selected_text) = selected_text {
self.select_next_state = Some(SelectNextState {
query: AhoCorasick::new(&[selected_text])?,
query: self.build_query(&[selected_text], cx)?,
wordwise: false,
done: false,
});
@@ -14863,7 +14934,7 @@ impl Editor {
.collect::<String>();
let is_empty = query.is_empty();
let select_state = SelectNextState {
query: AhoCorasick::new(&[query.chars().rev().collect::<String>()])?,
query: self.build_query(&[query.chars().rev().collect::<String>()], cx)?,
wordwise: true,
done: is_empty,
};
@@ -14873,7 +14944,8 @@ impl Editor {
}
} else if let Some(selected_text) = selected_text {
self.select_prev_state = Some(SelectNextState {
query: AhoCorasick::new(&[selected_text.chars().rev().collect::<String>()])?,
query: self
.build_query(&[selected_text.chars().rev().collect::<String>()], cx)?,
wordwise: false,
done: false,
});
@@ -14883,6 +14955,25 @@ impl Editor {
Ok(())
}
/// Builds an `AhoCorasick` automaton from the provided patterns, while
/// setting the case sensitivity based on the global
/// `SelectNextCaseSensitive` setting, if set, otherwise based on the
/// editor's settings.
fn build_query<I, P>(&self, patterns: I, cx: &Context<Self>) -> Result<AhoCorasick, BuildError>
where
I: IntoIterator<Item = P>,
P: AsRef<[u8]>,
{
let case_sensitive = self.select_next_is_case_sensitive.map_or_else(
|| EditorSettings::get_global(cx).search.case_sensitive,
|value| value,
);
let mut builder = AhoCorasickBuilder::new();
builder.ascii_case_insensitive(!case_sensitive);
builder.build(patterns)
}
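`build_query` above routes every `select_next`/`select_previous` search through one `AhoCorasickBuilder`, toggling `ascii_case_insensitive(!case_sensitive)` from the setting. As a rough standalone illustration of the resulting matching semantics (plain `str::find` standing in for the aho-corasick crate):

```rust
// When case-insensitive, compare after ASCII lowercasing, mirroring
// `AhoCorasickBuilder::ascii_case_insensitive(true)`. ASCII lowercasing
// keeps byte offsets stable, so the returned index is valid either way.
fn first_match(haystack: &str, pattern: &str, case_sensitive: bool) -> Option<usize> {
    if case_sensitive {
        haystack.find(pattern)
    } else {
        haystack
            .to_ascii_lowercase()
            .find(&pattern.to_ascii_lowercase())
    }
}

fn main() {
    assert_eq!(first_match("FOO foo", "foo", true), Some(4));
    assert_eq!(first_match("FOO foo", "foo", false), Some(0));
    println!("ok");
}
```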
pub fn find_next_match(
&mut self,
_: &FindNextMatch,
@@ -17869,8 +17960,18 @@ impl Editor {
.diagnostic_group(buffer_id, diagnostic.diagnostic.group_id)
.collect::<Vec<_>>();
let blocks =
renderer.render_group(diagnostic_group, buffer_id, snapshot, cx.weak_entity(), cx);
let language_registry = self
.project()
.map(|project| project.read(cx).languages().clone());
let blocks = renderer.render_group(
diagnostic_group,
buffer_id,
snapshot,
cx.weak_entity(),
language_registry,
cx,
);
let blocks = self.display_map.update(cx, |display_map, cx| {
display_map.insert_blocks(blocks, cx).into_iter().collect()
@@ -18857,10 +18958,17 @@ impl Editor {
if self.buffer().read(cx).is_singleton() || self.is_buffer_folded(buffer_id, cx) {
return;
}
let folded_excerpts = self.buffer().read(cx).excerpts_for_buffer(buffer_id, cx);
self.display_map.update(cx, |display_map, cx| {
display_map.fold_buffers([buffer_id], cx)
});
let snapshot = self.display_snapshot(cx);
self.selections.change_with(&snapshot, |selections| {
selections.remove_selections_from_buffer(buffer_id);
});
cx.emit(EditorEvent::BufferFoldToggled {
ids: folded_excerpts.iter().map(|&(id, _)| id).collect(),
folded: true,
@@ -23148,10 +23256,11 @@ impl CodeActionProvider for Entity<Project> {
fn snippet_completions(
project: &Project,
buffer: &Entity<Buffer>,
buffer_position: text::Anchor,
buffer_anchor: text::Anchor,
classifier: CharClassifier,
cx: &mut App,
) -> Task<Result<CompletionResponse>> {
let languages = buffer.read(cx).languages_at(buffer_position);
let languages = buffer.read(cx).languages_at(buffer_anchor);
let snippet_store = project.snippets().read(cx);
let scopes: Vec<_> = languages
@@ -23180,97 +23289,146 @@ fn snippet_completions(
let executor = cx.background_executor().clone();
cx.background_spawn(async move {
let is_word_char = |c| classifier.is_word(c);
let mut is_incomplete = false;
let mut completions: Vec<Completion> = Vec::new();
for (scope, snippets) in scopes.into_iter() {
let classifier =
CharClassifier::new(Some(scope)).scope_context(Some(CharScopeContext::Completion));
const MAX_WORD_PREFIX_LEN: usize = 128;
let last_word: String = snapshot
.reversed_chars_for_range(text::Anchor::MIN..buffer_position)
.take(MAX_WORD_PREFIX_LEN)
.take_while(|c| classifier.is_word(*c))
.collect::<String>()
.chars()
.rev()
.collect();
const MAX_PREFIX_LEN: usize = 128;
let buffer_offset = text::ToOffset::to_offset(&buffer_anchor, &snapshot);
let window_start = buffer_offset.saturating_sub(MAX_PREFIX_LEN);
let window_start = snapshot.clip_offset(window_start, Bias::Left);
if last_word.is_empty() {
return Ok(CompletionResponse {
completions: vec![],
display_options: CompletionDisplayOptions::default(),
is_incomplete: true,
});
let max_buffer_window: String = snapshot
.text_for_range(window_start..buffer_offset)
.collect();
if max_buffer_window.is_empty() {
return Ok(CompletionResponse {
completions: vec![],
display_options: CompletionDisplayOptions::default(),
is_incomplete: true,
});
}
for (_scope, snippets) in scopes.into_iter() {
// Sort snippets by word count to match longer snippet prefixes first.
let mut sorted_snippet_candidates = snippets
.iter()
.enumerate()
.flat_map(|(snippet_ix, snippet)| {
snippet
.prefix
.iter()
.enumerate()
.map(move |(prefix_ix, prefix)| {
let word_count =
snippet_candidate_suffixes(prefix, is_word_char).count();
((snippet_ix, prefix_ix), prefix, word_count)
})
})
.collect_vec();
sorted_snippet_candidates
.sort_unstable_by_key(|(_, _, word_count)| Reverse(*word_count));
// Each prefix may be matched multiple times; the completion menu must filter out duplicates.
let buffer_windows = snippet_candidate_suffixes(&max_buffer_window, is_word_char)
.take(
sorted_snippet_candidates
.first()
.map(|(_, _, word_count)| *word_count)
.unwrap_or_default(),
)
.collect_vec();
const MAX_RESULTS: usize = 100;
// Each match also remembers how many characters from the buffer it consumed.
let mut matches: Vec<(StringMatch, usize)> = vec![];
let mut snippet_list_cutoff_index = 0;
for (buffer_index, buffer_window) in buffer_windows.iter().enumerate().rev() {
let word_count = buffer_index + 1;
// Increase `snippet_list_cutoff_index` until we have all of the
// snippets with sufficiently many words.
while sorted_snippet_candidates
.get(snippet_list_cutoff_index)
.is_some_and(|(_ix, _prefix, snippet_word_count)| {
*snippet_word_count >= word_count
})
{
snippet_list_cutoff_index += 1;
}
// Take only the candidates with at least `word_count` many words
let snippet_candidates_at_word_len =
&sorted_snippet_candidates[..snippet_list_cutoff_index];
let candidates = snippet_candidates_at_word_len
.iter()
.map(|(_snippet_ix, prefix, _snippet_word_count)| prefix)
.enumerate() // index in `sorted_snippet_candidates`
// First char must match
.filter(|(_ix, prefix)| {
itertools::equal(
prefix
.chars()
.next()
.into_iter()
.flat_map(|c| c.to_lowercase()),
buffer_window
.chars()
.next()
.into_iter()
.flat_map(|c| c.to_lowercase()),
)
})
.map(|(ix, prefix)| StringMatchCandidate::new(ix, prefix))
.collect::<Vec<StringMatchCandidate>>();
matches.extend(
fuzzy::match_strings(
&candidates,
&buffer_window,
buffer_window.chars().any(|c| c.is_uppercase()),
true,
MAX_RESULTS - matches.len(), // always prioritize longer snippets
&Default::default(),
executor.clone(),
)
.await
.into_iter()
.map(|string_match| (string_match, buffer_window.len())),
);
if matches.len() >= MAX_RESULTS {
break;
}
}
let as_offset = text::ToOffset::to_offset(&buffer_position, &snapshot);
let to_lsp = |point: &text::Anchor| {
let end = text::ToPointUtf16::to_point_utf16(point, &snapshot);
point_to_lsp(end)
};
let lsp_end = to_lsp(&buffer_position);
let candidates = snippets
.iter()
.enumerate()
.flat_map(|(ix, snippet)| {
snippet
.prefix
.iter()
.map(move |prefix| StringMatchCandidate::new(ix, prefix))
})
.collect::<Vec<StringMatchCandidate>>();
const MAX_RESULTS: usize = 100;
let mut matches = fuzzy::match_strings(
&candidates,
&last_word,
last_word.chars().any(|c| c.is_uppercase()),
true,
MAX_RESULTS,
&Default::default(),
executor.clone(),
)
.await;
let lsp_end = to_lsp(&buffer_anchor);
if matches.len() >= MAX_RESULTS {
is_incomplete = true;
}
// Remove all candidates where the query's start does not match the start of any word in the candidate
if let Some(query_start) = last_word.chars().next() {
matches.retain(|string_match| {
split_words(&string_match.string).any(|word| {
// Check that the first codepoint of the word as lowercase matches the first
// codepoint of the query as lowercase
word.chars()
.flat_map(|codepoint| codepoint.to_lowercase())
.zip(query_start.to_lowercase())
.all(|(word_cp, query_cp)| word_cp == query_cp)
})
});
}
let matched_strings = matches
.into_iter()
.map(|m| m.string)
.collect::<HashSet<_>>();
completions.extend(snippets.iter().filter_map(|snippet| {
let matching_prefix = snippet
.prefix
.iter()
.find(|prefix| matched_strings.contains(*prefix))?;
let start = as_offset - last_word.len();
completions.extend(matches.iter().map(|(string_match, buffer_window_len)| {
let ((snippet_index, prefix_index), matching_prefix, _snippet_word_count) =
sorted_snippet_candidates[string_match.candidate_id];
let snippet = &snippets[snippet_index];
let start = buffer_offset - buffer_window_len;
let start = snapshot.anchor_before(start);
let range = start..buffer_position;
let range = start..buffer_anchor;
let lsp_start = to_lsp(&start);
let lsp_range = lsp::Range {
start: lsp_start,
end: lsp_end,
};
Some(Completion {
Completion {
replace_range: range,
new_text: snippet.body.clone(),
source: CompletionSource::Lsp {
@@ -23300,7 +23458,11 @@ fn snippet_completions(
}),
lsp_defaults: None,
},
label: CodeLabel::plain(matching_prefix.clone(), None),
label: CodeLabel {
text: matching_prefix.clone(),
runs: Vec::new(),
filter_range: 0..matching_prefix.len(),
},
icon_path: None,
documentation: Some(CompletionDocumentation::SingleLineAndMultiLinePlainText {
single_line: snippet.name.clone().into(),
@@ -23311,8 +23473,10 @@ fn snippet_completions(
}),
insert_text_mode: None,
confirm: None,
})
}))
match_start: Some(start),
snippet_deduplication_key: Some((snippet_index, prefix_index)),
}
}));
}
Ok(CompletionResponse {
@@ -24558,6 +24722,33 @@ pub(crate) fn split_words(text: &str) -> impl std::iter::Iterator<Item = &str> +
})
}
/// Given a string of text immediately before the cursor, iterates over the
/// possible strings a snippet could match against. More precisely: returns an iterator over
/// suffixes of `text` created by splitting at word boundaries (before & after
/// every non-word character).
///
/// Shorter suffixes are returned first.
pub(crate) fn snippet_candidate_suffixes(
text: &str,
is_word_char: impl Fn(char) -> bool,
) -> impl std::iter::Iterator<Item = &str> {
let mut prev_index = text.len();
let mut prev_codepoint = None;
text.char_indices()
.rev()
.chain([(0, '\0')])
.filter_map(move |(index, codepoint)| {
let prev_index = std::mem::replace(&mut prev_index, index);
let prev_codepoint = prev_codepoint.replace(codepoint)?;
if is_word_char(prev_codepoint) && is_word_char(codepoint) {
None
} else {
let chunk = &text[prev_index..]; // go to end of string
Some(chunk)
}
})
}
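A self-contained copy of the iterator above shows the suffixes it yields: for the text "multi word" with an alphanumeric word classifier, splits land before and after every non-word character, shortest suffix first.

```rust
// Verbatim logic of `snippet_candidate_suffixes`, extracted so it can run
// standalone: walk the text backwards, and at every word boundary yield the
// suffix from that position to the end of the string.
fn snippet_candidate_suffixes<'a>(
    text: &'a str,
    is_word_char: impl Fn(char) -> bool + 'a,
) -> impl Iterator<Item = &'a str> + 'a {
    let mut prev_index = text.len();
    let mut prev_codepoint = None;
    text.char_indices()
        .rev()
        .chain([(0, '\0')]) // sentinel so the full string is also yielded
        .filter_map(move |(index, codepoint)| {
            let prev_index = std::mem::replace(&mut prev_index, index);
            let prev_codepoint = prev_codepoint.replace(codepoint)?;
            if is_word_char(prev_codepoint) && is_word_char(codepoint) {
                None // still inside a single word: no boundary here
            } else {
                Some(&text[prev_index..]) // boundary: suffix to end of string
            }
        })
}

fn main() {
    let suffixes: Vec<&str> =
        snippet_candidate_suffixes("multi word", |c| c.is_alphanumeric()).collect();
    assert_eq!(suffixes, ["word", " word", "multi word"]);
    println!("ok");
}
```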
pub trait RangeToAnchorExt: Sized {
fn to_anchors(self, snapshot: &MultiBufferSnapshot) -> Range<Anchor>;


@@ -162,10 +162,15 @@ pub struct DragAndDropSelection {
pub struct SearchSettings {
/// Whether to show the project search button in the status bar.
pub button: bool,
/// Whether to only match on whole words.
pub whole_word: bool,
/// Whether to match case sensitively.
pub case_sensitive: bool,
/// Whether to include gitignored files in search results.
pub include_ignored: bool,
/// Whether to interpret the search query as a regular expression.
pub regex: bool,
/// Whether to center the cursor on each search match when navigating.
pub center_on_match: bool,
}


@@ -44,8 +44,8 @@ use project::{
};
use serde_json::{self, json};
use settings::{
AllLanguageSettingsContent, IndentGuideBackgroundColoring, IndentGuideColoring,
ProjectSettingsContent,
AllLanguageSettingsContent, EditorSettingsContent, IndentGuideBackgroundColoring,
IndentGuideColoring, ProjectSettingsContent, SearchSettingsContent,
};
use std::{cell::RefCell, future::Future, rc::Rc, sync::atomic::AtomicBool, time::Instant};
use std::{
@@ -8314,8 +8314,15 @@ async fn test_add_selection_above_below_multi_cursor_existing_state(cx: &mut Tes
#[gpui::test]
async fn test_select_next(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
// Enable case sensitive search.
update_test_editor_settings(&mut cx, |settings| {
let mut search_settings = SearchSettingsContent::default();
search_settings.case_sensitive = Some(true);
settings.search = Some(search_settings);
});
cx.set_state("abc\nˇabc abc\ndefabc\nabc");
cx.update_editor(|e, window, cx| e.select_next(&SelectNext::default(), window, cx))
@@ -8346,14 +8353,41 @@ async fn test_select_next(cx: &mut TestAppContext) {
cx.update_editor(|e, window, cx| e.select_next(&SelectNext::default(), window, cx))
.unwrap();
cx.assert_editor_state("abc\n«ˇabc» «ˇabc»\ndefabc\nabc");
// Test case sensitivity
cx.set_state("«ˇfoo»\nFOO\nFoo\nfoo");
cx.update_editor(|e, window, cx| {
e.select_next(&SelectNext::default(), window, cx).unwrap();
});
cx.assert_editor_state("«ˇfoo»\nFOO\nFoo\n«ˇfoo»");
// Disable case sensitive search.
update_test_editor_settings(&mut cx, |settings| {
let mut search_settings = SearchSettingsContent::default();
search_settings.case_sensitive = Some(false);
settings.search = Some(search_settings);
});
cx.set_state("«ˇfoo»\nFOO\nFoo");
cx.update_editor(|e, window, cx| {
e.select_next(&SelectNext::default(), window, cx).unwrap();
e.select_next(&SelectNext::default(), window, cx).unwrap();
});
cx.assert_editor_state("«ˇfoo»\n«ˇFOO»\n«ˇFoo»");
}
#[gpui::test]
async fn test_select_all_matches(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
// Enable case sensitive search.
update_test_editor_settings(&mut cx, |settings| {
let mut search_settings = SearchSettingsContent::default();
search_settings.case_sensitive = Some(true);
settings.search = Some(search_settings);
});
// Test caret-only selections
cx.set_state("abc\nˇabc abc\ndefabc\nabc");
cx.update_editor(|e, window, cx| e.select_all_matches(&SelectAllMatches, window, cx))
@@ -8398,6 +8432,26 @@ async fn test_select_all_matches(cx: &mut TestAppContext) {
e.set_clip_at_line_ends(false, cx);
});
cx.assert_editor_state("«abcˇ»");
// Test case sensitivity
cx.set_state("fˇoo\nFOO\nFoo");
cx.update_editor(|e, window, cx| {
e.select_all_matches(&SelectAllMatches, window, cx).unwrap();
});
cx.assert_editor_state("«fooˇ»\nFOO\nFoo");
// Disable case sensitive search.
update_test_editor_settings(&mut cx, |settings| {
let mut search_settings = SearchSettingsContent::default();
search_settings.case_sensitive = Some(false);
settings.search = Some(search_settings);
});
cx.set_state("fˇoo\nFOO\nFoo");
cx.update_editor(|e, window, cx| {
e.select_all_matches(&SelectAllMatches, window, cx).unwrap();
});
cx.assert_editor_state("«fooˇ»\n«FOOˇ»\n«Fooˇ»");
}
#[gpui::test]
@@ -8769,8 +8823,15 @@ let foo = «2ˇ»;"#,
#[gpui::test]
async fn test_select_previous_with_single_selection(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
// Enable case sensitive search.
update_test_editor_settings(&mut cx, |settings| {
let mut search_settings = SearchSettingsContent::default();
search_settings.case_sensitive = Some(true);
settings.search = Some(search_settings);
});
cx.set_state("abc\n«ˇabc» abc\ndefabc\nabc");
cx.update_editor(|e, window, cx| e.select_previous(&SelectPrevious::default(), window, cx))
@@ -8795,6 +8856,32 @@ async fn test_select_previous_with_single_selection(cx: &mut TestAppContext) {
cx.update_editor(|e, window, cx| e.select_previous(&SelectPrevious::default(), window, cx))
.unwrap();
cx.assert_editor_state("«ˇabc»\n«ˇabc» «ˇabc»\ndef«ˇabc»\n«ˇabc»");
// Test case sensitivity
cx.set_state("foo\nFOO\nFoo\n«ˇfoo»");
cx.update_editor(|e, window, cx| {
e.select_previous(&SelectPrevious::default(), window, cx)
.unwrap();
e.select_previous(&SelectPrevious::default(), window, cx)
.unwrap();
});
cx.assert_editor_state("«ˇfoo»\nFOO\nFoo\n«ˇfoo»");
// Disable case sensitive search.
update_test_editor_settings(&mut cx, |settings| {
let mut search_settings = SearchSettingsContent::default();
search_settings.case_sensitive = Some(false);
settings.search = Some(search_settings);
});
cx.set_state("foo\nFOO\n«ˇFoo»");
cx.update_editor(|e, window, cx| {
e.select_previous(&SelectPrevious::default(), window, cx)
.unwrap();
e.select_previous(&SelectPrevious::default(), window, cx)
.unwrap();
});
cx.assert_editor_state("«ˇfoo»\n«ˇFOO»\n«ˇFoo»");
}
#[gpui::test]
@@ -11327,6 +11414,53 @@ async fn test_snippet_indentation(cx: &mut TestAppContext) {
ˇ"});
}
#[gpui::test]
async fn test_snippet_with_multi_word_prefix(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
cx.update_editor(|editor, _, cx| {
editor.project().unwrap().update(cx, |project, cx| {
project.snippets().update(cx, |snippets, _cx| {
let snippet = project::snippet_provider::Snippet {
prefix: vec!["multi word".to_string()],
body: "this is many words".to_string(),
description: Some("description".to_string()),
name: "multi-word snippet test".to_string(),
};
snippets.add_snippet_for_test(
None,
PathBuf::from("test_snippets.json"),
vec![Arc::new(snippet)],
);
});
})
});
for (input_to_simulate, should_match_snippet) in [
("m", true),
("m ", true),
("m w", true),
("aa m w", true),
("aa m g", false),
] {
cx.set_state("ˇ");
cx.simulate_input(input_to_simulate); // fails correctly
cx.update_editor(|editor, _, _| {
let Some(CodeContextMenu::Completions(context_menu)) = &*editor.context_menu.borrow()
else {
assert!(!should_match_snippet); // no completions! don't even show the menu
return;
};
assert!(context_menu.visible());
let completions = context_menu.completions.borrow();
assert_eq!(!completions.is_empty(), should_match_snippet);
});
}
}
#[gpui::test]
async fn test_document_format_during_save(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -17282,6 +17416,41 @@ fn test_split_words() {
assert_eq!(split(":do_the_thing"), &[":", "do_", "the_", "thing"]);
}
#[test]
fn test_split_words_for_snippet_prefix() {
fn split(text: &str) -> Vec<&str> {
snippet_candidate_suffixes(text, |c| c.is_alphanumeric() || c == '_').collect()
}
assert_eq!(split("HelloWorld"), &["HelloWorld"]);
assert_eq!(split("hello_world"), &["hello_world"]);
assert_eq!(split("_hello_world_"), &["_hello_world_"]);
assert_eq!(split("Hello_World"), &["Hello_World"]);
assert_eq!(split("helloWOrld"), &["helloWOrld"]);
assert_eq!(split("helloworld"), &["helloworld"]);
assert_eq!(
split("this@is!@#$^many . symbols"),
&[
"symbols",
" symbols",
". symbols",
" . symbols",
" . symbols",
" . symbols",
"many . symbols",
"^many . symbols",
"$^many . symbols",
"#$^many . symbols",
"@#$^many . symbols",
"!@#$^many . symbols",
"is!@#$^many . symbols",
"@is!@#$^many . symbols",
"this@is!@#$^many . symbols",
],
);
assert_eq!(split("a.s"), &["s", ".s", "a.s"]);
}
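The expectations above pin down the shape of `snippet_candidate_suffixes`: every suffix of the query that starts at the beginning of the text, at a non-word character, or at the start of a word run, reported shortest first. A minimal standalone sketch of that iteration (not Zed's implementation; the name `candidate_suffixes` and returning a `Vec` instead of an iterator are simplifications for illustration):

```rust
/// Minimal sketch: all suffixes of `text` that begin anywhere except in the
/// middle of a run of word characters, ordered shortest first, mirroring the
/// expectations in the test above.
fn candidate_suffixes<'a>(text: &'a str, is_word: impl Fn(char) -> bool) -> Vec<&'a str> {
    let mut cuts = Vec::new();
    let mut prev_is_word = false;
    for (i, c) in text.char_indices() {
        // Cut everywhere except inside a run of word characters, so each
        // non-word character starts its own candidate.
        if !(prev_is_word && is_word(c)) {
            cuts.push(i);
        }
        prev_is_word = is_word(c);
    }
    // Later cut points give shorter suffixes; report shortest first.
    cuts.into_iter().rev().map(|i| &text[i..]).collect()
}
```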
#[gpui::test]
async fn test_move_to_enclosing_bracket(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -22291,7 +22460,7 @@ async fn test_folding_buffers(cx: &mut TestAppContext) {
assert_eq!(
multi_buffer_editor.update(cx, |editor, cx| editor.display_text(cx)),
"\n\nB\n\n\n\n\n\n\nllll\nmmmm\nnnnn\n\n\nqqqq\nrrrr\n\n\nuuuu\n\n",
"\n\naaaa\nBbbbb\ncccc\n\n\nffff\ngggg\n\n\njjjj\n\n\nllll\nmmmm\nnnnn\n\n\nqqqq\nrrrr\n\n\nuuuu\n\n",
"After unfolding the first buffer, its and 2nd buffer's text should be displayed"
);
@@ -22300,7 +22469,7 @@ async fn test_folding_buffers(cx: &mut TestAppContext) {
});
assert_eq!(
multi_buffer_editor.update(cx, |editor, cx| editor.display_text(cx)),
"\n\nB\n\n\n\n\n\n\nllll\nmmmm\nnnnn\n\n\nqqqq\nrrrr\n\n\nuuuu\n\n\nvvvv\nwwww\nxxxx\n\n\n1111\n2222\n\n\n5555",
"\n\naaaa\nBbbbb\ncccc\n\n\nffff\ngggg\n\n\njjjj\n\n\nllll\nmmmm\nnnnn\n\n\nqqqq\nrrrr\n\n\nuuuu\n\n\nvvvv\nwwww\nxxxx\n\n\n1111\n2222\n\n\n5555",
"After unfolding the all buffers, all original text should be displayed"
);
}
@@ -25533,6 +25702,195 @@ pub fn check_displayed_completions(expected: Vec<&'static str>, cx: &mut EditorL
});
}
#[gpui::test]
async fn test_mixed_completions_with_multi_word_snippet(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorLspTestContext::new_rust(
lsp::ServerCapabilities {
completion_provider: Some(lsp::CompletionOptions {
..Default::default()
}),
..Default::default()
},
cx,
)
.await;
cx.lsp
.set_request_handler::<lsp::request::Completion, _, _>(move |_, _| async move {
Ok(Some(lsp::CompletionResponse::Array(vec![
lsp::CompletionItem {
label: "unsafe".into(),
text_edit: Some(lsp::CompletionTextEdit::Edit(lsp::TextEdit {
range: lsp::Range {
start: lsp::Position {
line: 0,
character: 9,
},
end: lsp::Position {
line: 0,
character: 11,
},
},
new_text: "unsafe".to_string(),
})),
insert_text_mode: Some(lsp::InsertTextMode::AS_IS),
..Default::default()
},
])))
});
cx.update_editor(|editor, _, cx| {
editor.project().unwrap().update(cx, |project, cx| {
project.snippets().update(cx, |snippets, _cx| {
snippets.add_snippet_for_test(
None,
PathBuf::from("test_snippets.json"),
vec![
Arc::new(project::snippet_provider::Snippet {
prefix: vec![
"unlimited word count".to_string(),
"unlimit word count".to_string(),
"unlimited unknown".to_string(),
],
body: "this is many words".to_string(),
description: Some("description".to_string()),
name: "multi-word snippet test".to_string(),
}),
Arc::new(project::snippet_provider::Snippet {
prefix: vec!["unsnip".to_string(), "@few".to_string()],
body: "fewer words".to_string(),
description: Some("alt description".to_string()),
name: "other name".to_string(),
}),
Arc::new(project::snippet_provider::Snippet {
prefix: vec!["ab aa".to_string()],
body: "abcd".to_string(),
description: None,
name: "alphabet".to_string(),
}),
],
);
});
})
});
let get_completions = |cx: &mut EditorLspTestContext| {
cx.update_editor(|editor, _, _| match &*editor.context_menu.borrow() {
Some(CodeContextMenu::Completions(context_menu)) => {
let entries = context_menu.entries.borrow();
entries
.iter()
.map(|entry| entry.string.clone())
.collect_vec()
}
_ => vec![],
})
};
// snippets:
// @foo
// foo bar
//
// when typing:
// - if I type a symbol "open the completions with snippets only"
// - if I type a word character "open the completions menu" (if it had been open snippets only, clear it out)
//
// stuff we need:
// - filtering logic change?
// - remember how far back the completion started.
let test_cases: &[(&str, &[&str])] = &[
(
"un",
&[
"unsafe",
"unlimit word count",
"unlimited unknown",
"unlimited word count",
"unsnip",
],
),
(
"u ",
&[
"unlimit word count",
"unlimited unknown",
"unlimited word count",
],
),
("u a", &["ab aa", "unsafe"]), // unsAfe
(
"u u",
&[
"unsafe",
"unlimit word count",
"unlimited unknown", // ranked highest among snippets
"unlimited word count",
"unsnip",
],
),
("uw c", &["unlimit word count", "unlimited word count"]),
(
"u w",
&[
"unlimit word count",
"unlimited word count",
"unlimited unknown",
],
),
("u w ", &["unlimit word count", "unlimited word count"]),
(
"u ",
&[
"unlimit word count",
"unlimited unknown",
"unlimited word count",
],
),
("wor", &[]),
("uf", &["unsafe"]),
("af", &["unsafe"]),
("afu", &[]),
(
"ue",
&["unsafe", "unlimited unknown", "unlimited word count"],
),
("@", &["@few"]),
("@few", &["@few"]),
("@ ", &[]),
("a@", &["@few"]),
("a@f", &["@few", "unsafe"]),
("a@fw", &["@few"]),
("a", &["ab aa", "unsafe"]),
("aa", &["ab aa"]),
("aaa", &["ab aa"]),
("ab", &["ab aa"]),
("ab ", &["ab aa"]),
("ab a", &["ab aa", "unsafe"]),
("ab ab", &["ab aa"]),
("ab ab aa", &["ab aa"]),
];
for &(input_to_simulate, expected_completions) in test_cases {
cx.set_state("fn a() { ˇ }\n");
for c in input_to_simulate.split("") {
cx.simulate_input(c);
cx.run_until_parked();
}
let expected_completions = expected_completions
.iter()
.map(|s| s.to_string())
.collect_vec();
assert_eq!(
get_completions(&mut cx),
expected_completions,
"< actual / expected >, input = {input_to_simulate:?}",
);
}
}
/// Handle completion request passing a marked string specifying where the completion
/// should be triggered from using '|' character, what range should be replaced, and what completions
/// should be returned using '<' and '>' to delimit the range.
@@ -25717,6 +26075,17 @@ pub(crate) fn update_test_project_settings(
});
}
pub(crate) fn update_test_editor_settings(
cx: &mut TestAppContext,
f: impl Fn(&mut EditorSettingsContent),
) {
cx.update(|cx| {
SettingsStore::update_global(cx, |store, cx| {
store.update_user_settings(cx, |settings| f(&mut settings.editor));
})
})
}
pub(crate) fn init_test(cx: &mut TestAppContext, f: fn(&mut AllLanguageSettingsContent)) {
cx.update(|cx| {
assets::Assets.load_test_fonts(cx);
@@ -27006,6 +27375,60 @@ async fn test_copy_line_without_trailing_newline(cx: &mut TestAppContext) {
cx.assert_editor_state("line1\nline2\nˇ");
}
#[gpui::test]
async fn test_multi_selection_copy_with_newline_between_copied_lines(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
cx.set_state("ˇline1\nˇline2\nˇline3\n");
cx.update_editor(|e, window, cx| e.copy(&Copy, window, cx));
let clipboard_text = cx
.read_from_clipboard()
.and_then(|item| item.text().as_deref().map(str::to_string));
assert_eq!(
clipboard_text,
Some("line1\nline2\nline3\n".to_string()),
"Copying multiple lines should include a single newline between lines"
);
cx.set_state("lineA\nˇ");
cx.update_editor(|e, window, cx| e.paste(&Paste, window, cx));
cx.assert_editor_state("lineA\nline1\nline2\nline3\nˇ");
}
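The clipboard shape asserted above can be sketched standalone, under the simplifying assumption that each caret-only selection copies its whole line (the helper name below is illustrative, not Zed's API):

```rust
/// Sketch of the expected clipboard contents when copying N caret-only
/// selections: each caret's full line, separated by exactly one newline,
/// with a trailing newline after the final line.
fn clipboard_for_line_copies(lines: &[&str]) -> String {
    let mut out = String::new();
    for line in lines {
        out.push_str(line);
        out.push('\n');
    }
    out
}
```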
#[gpui::test]
async fn test_multi_selection_cut_with_newline_between_copied_lines(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
cx.set_state("ˇline1\nˇline2\nˇline3\n");
cx.update_editor(|e, window, cx| e.cut(&Cut, window, cx));
let clipboard_text = cx
.read_from_clipboard()
.and_then(|item| item.text().as_deref().map(str::to_string));
assert_eq!(
clipboard_text,
Some("line1\nline2\nline3\n".to_string()),
"Copying multiple lines should include a single newline between lines"
);
cx.set_state("lineA\nˇ");
cx.update_editor(|e, window, cx| e.paste(&Paste, window, cx));
cx.assert_editor_state("lineA\nline1\nline2\nline3\nˇ");
}
#[gpui::test]
async fn test_end_of_editor_context(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -27355,3 +27778,213 @@ async fn test_next_prev_reference(cx: &mut TestAppContext) {
_move(Direction::Prev, 2, &mut cx).await;
cx.assert_editor_state(CYCLE_POSITIONS[1]);
}
#[gpui::test]
async fn test_multibuffer_selections_with_folding(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let (editor, cx) = cx.add_window_view(|window, cx| {
let multi_buffer = MultiBuffer::build_multi(
[
("1\n2\n3\n", vec![Point::row_range(0..3)]),
("1\n2\n3\n", vec![Point::row_range(0..3)]),
],
cx,
);
Editor::new(EditorMode::full(), multi_buffer, None, window, cx)
});
let mut cx = EditorTestContext::for_editor_in(editor.clone(), cx).await;
let buffer_ids = cx.multibuffer(|mb, _| mb.excerpt_buffer_ids());
cx.assert_excerpts_with_selections(indoc! {"
[EXCERPT]
ˇ1
2
3
[EXCERPT]
1
2
3
"});
// Scenario 1: Unfolded buffers, position cursor on "2", select all matches, then insert
cx.update_editor(|editor, window, cx| {
editor.change_selections(None.into(), window, cx, |s| {
s.select_ranges([2..3]);
});
});
cx.assert_excerpts_with_selections(indoc! {"
[EXCERPT]
1
3
[EXCERPT]
1
2
3
"});
cx.update_editor(|editor, window, cx| {
editor
.select_all_matches(&SelectAllMatches, window, cx)
.unwrap();
});
cx.assert_excerpts_with_selections(indoc! {"
[EXCERPT]
1
3
[EXCERPT]
1
3
"});
cx.update_editor(|editor, window, cx| {
editor.handle_input("X", window, cx);
});
cx.assert_excerpts_with_selections(indoc! {"
[EXCERPT]
1
3
[EXCERPT]
1
3
"});
// Scenario 2: Select "2", then fold second buffer before insertion
cx.update_multibuffer(|mb, cx| {
for buffer_id in buffer_ids.iter() {
let buffer = mb.buffer(*buffer_id).unwrap();
buffer.update(cx, |buffer, cx| {
buffer.edit([(0..buffer.len(), "1\n2\n3\n")], None, cx);
});
}
});
// Select "2" and select all matches
cx.update_editor(|editor, window, cx| {
editor.change_selections(None.into(), window, cx, |s| {
s.select_ranges([2..3]);
});
editor
.select_all_matches(&SelectAllMatches, window, cx)
.unwrap();
});
// Fold second buffer - should remove selections from folded buffer
cx.update_editor(|editor, _, cx| {
editor.fold_buffer(buffer_ids[1], cx);
});
cx.assert_excerpts_with_selections(indoc! {"
[EXCERPT]
1
3
[EXCERPT]
[FOLDED]
"});
// Insert text - should only affect first buffer
cx.update_editor(|editor, window, cx| {
editor.handle_input("Y", window, cx);
});
cx.update_editor(|editor, _, cx| {
editor.unfold_buffer(buffer_ids[1], cx);
});
cx.assert_excerpts_with_selections(indoc! {"
[EXCERPT]
1
3
[EXCERPT]
1
2
3
"});
// Scenario 3: Select "2", then fold first buffer before insertion
cx.update_multibuffer(|mb, cx| {
for buffer_id in buffer_ids.iter() {
let buffer = mb.buffer(*buffer_id).unwrap();
buffer.update(cx, |buffer, cx| {
buffer.edit([(0..buffer.len(), "1\n2\n3\n")], None, cx);
});
}
});
// Select "2" and select all matches
cx.update_editor(|editor, window, cx| {
editor.change_selections(None.into(), window, cx, |s| {
s.select_ranges([2..3]);
});
editor
.select_all_matches(&SelectAllMatches, window, cx)
.unwrap();
});
// Fold first buffer - should remove selections from folded buffer
cx.update_editor(|editor, _, cx| {
editor.fold_buffer(buffer_ids[0], cx);
});
cx.assert_excerpts_with_selections(indoc! {"
[EXCERPT]
[FOLDED]
[EXCERPT]
1
3
"});
// Insert text - should only affect second buffer
cx.update_editor(|editor, window, cx| {
editor.handle_input("Z", window, cx);
});
cx.update_editor(|editor, _, cx| {
editor.unfold_buffer(buffer_ids[0], cx);
});
cx.assert_excerpts_with_selections(indoc! {"
[EXCERPT]
1
2
3
[EXCERPT]
1
3
"});
// Edge case scenario: fold all buffers, then try to insert
cx.update_editor(|editor, _, cx| {
editor.fold_buffer(buffer_ids[0], cx);
editor.fold_buffer(buffer_ids[1], cx);
});
cx.assert_excerpts_with_selections(indoc! {"
[EXCERPT]
ˇ[FOLDED]
[EXCERPT]
[FOLDED]
"});
// Insert should work via default selection
cx.update_editor(|editor, window, cx| {
editor.handle_input("W", window, cx);
});
cx.update_editor(|editor, _, cx| {
editor.unfold_buffer(buffer_ids[0], cx);
editor.unfold_buffer(buffer_ids[1], cx);
});
cx.assert_excerpts_with_selections(indoc! {"
[EXCERPT]
Wˇ1
2
3
[EXCERPT]
1
Z
3
"});
}

View File

@@ -6,10 +6,10 @@ use crate::{
EditDisplayMode, EditPrediction, Editor, EditorMode, EditorSettings, EditorSnapshot,
EditorStyle, FILE_HEADER_HEIGHT, FocusedBlock, GutterDimensions, HalfPageDown, HalfPageUp,
HandleInput, HoveredCursor, InlayHintRefreshReason, JumpData, LineDown, LineHighlight, LineUp,
MAX_LINE_LEN, MINIMAP_FONT_SIZE, MULTI_BUFFER_EXCERPT_HEADER_HEIGHT, OpenExcerpts,
OpenExcerptsSplit, PageDown, PageUp, PhantomBreakpointIndicator, Point, RowExt, RowRangeExt,
SelectPhase, SelectedTextHighlight, Selection, SelectionDragState, SelectionEffects,
SizingBehavior, SoftWrap, StickyHeaderExcerpt, ToPoint, ToggleFold, ToggleFoldAll,
MAX_LINE_LEN, MINIMAP_FONT_SIZE, MULTI_BUFFER_EXCERPT_HEADER_HEIGHT, OpenExcerpts, PageDown,
PageUp, PhantomBreakpointIndicator, Point, RowExt, RowRangeExt, SelectPhase,
SelectedTextHighlight, Selection, SelectionDragState, SelectionEffects, SizingBehavior,
SoftWrap, StickyHeaderExcerpt, ToPoint, ToggleFold, ToggleFoldAll,
code_context_menus::{CodeActionsMenu, MENU_ASIDE_MAX_WIDTH, MENU_ASIDE_MIN_WIDTH, MENU_GAP},
display_map::{
Block, BlockContext, BlockStyle, ChunkRendererId, DisplaySnapshot, EditorMargins,
@@ -3664,6 +3664,7 @@ impl EditorElement {
row_block_types: &mut HashMap<DisplayRow, bool>,
selections: &[Selection<Point>],
selected_buffer_ids: &Vec<BufferId>,
latest_selection_anchors: &HashMap<BufferId, Anchor>,
is_row_soft_wrapped: impl Copy + Fn(usize) -> bool,
sticky_header_excerpt_id: Option<ExcerptId>,
window: &mut Window,
@@ -3739,7 +3740,13 @@ impl EditorElement {
let selected = selected_buffer_ids.contains(&first_excerpt.buffer_id);
let result = v_flex().id(block_id).w_full().pr(editor_margins.right);
let jump_data = header_jump_data(snapshot, block_row_start, *height, first_excerpt);
let jump_data = header_jump_data(
snapshot,
block_row_start,
*height,
first_excerpt,
latest_selection_anchors,
);
result
.child(self.render_buffer_header(
first_excerpt,
@@ -3774,7 +3781,13 @@ impl EditorElement {
Block::BufferHeader { excerpt, height } => {
let mut result = v_flex().id(block_id).w_full();
let jump_data = header_jump_data(snapshot, block_row_start, *height, excerpt);
let jump_data = header_jump_data(
snapshot,
block_row_start,
*height,
excerpt,
latest_selection_anchors,
);
if sticky_header_excerpt_id != Some(excerpt.id) {
let selected = selected_buffer_ids.contains(&excerpt.buffer_id);
@@ -4042,24 +4055,17 @@ impl EditorElement {
)
.group_hover("", |div| div.underline()),
)
.on_click({
let focus_handle = focus_handle.clone();
move |event, window, cx| {
if event.modifiers().secondary() {
focus_handle.dispatch_action(
&OpenExcerptsSplit,
window,
cx,
);
} else {
focus_handle.dispatch_action(
&OpenExcerpts,
window,
cx,
);
}
.on_click(window.listener_for(&self.editor, {
let jump_data = jump_data.clone();
move |editor, e: &ClickEvent, window, cx| {
editor.open_excerpts_common(
Some(jump_data.clone()),
e.modifiers().secondary(),
window,
cx,
);
}
}),
})),
)
.when_some(parent_path, |then, path| {
then.child(div().child(path).text_color(
@@ -4087,24 +4093,17 @@ impl EditorElement {
cx,
)),
)
.on_click({
let focus_handle = focus_handle.clone();
move |event, window, cx| {
if event.modifiers().secondary() {
focus_handle.dispatch_action(
&OpenExcerptsSplit,
window,
cx,
);
} else {
focus_handle.dispatch_action(
&OpenExcerpts,
window,
cx,
);
}
.on_click(window.listener_for(&self.editor, {
let jump_data = jump_data.clone();
move |editor, e: &ClickEvent, window, cx| {
editor.open_excerpts_common(
Some(jump_data.clone()),
e.modifiers().secondary(),
window,
cx,
);
}
}),
})),
)
},
)
@@ -4250,6 +4249,7 @@ impl EditorElement {
line_layouts: &mut [LineWithInvisibles],
selections: &[Selection<Point>],
selected_buffer_ids: &Vec<BufferId>,
latest_selection_anchors: &HashMap<BufferId, Anchor>,
is_row_soft_wrapped: impl Copy + Fn(usize) -> bool,
sticky_header_excerpt_id: Option<ExcerptId>,
window: &mut Window,
@@ -4293,6 +4293,7 @@ impl EditorElement {
&mut row_block_types,
selections,
selected_buffer_ids,
latest_selection_anchors,
is_row_soft_wrapped,
sticky_header_excerpt_id,
window,
@@ -4350,6 +4351,7 @@ impl EditorElement {
&mut row_block_types,
selections,
selected_buffer_ids,
latest_selection_anchors,
is_row_soft_wrapped,
sticky_header_excerpt_id,
window,
@@ -4405,6 +4407,7 @@ impl EditorElement {
&mut row_block_types,
selections,
selected_buffer_ids,
latest_selection_anchors,
is_row_soft_wrapped,
sticky_header_excerpt_id,
window,
@@ -4487,6 +4490,7 @@ impl EditorElement {
hitbox: &Hitbox,
selected_buffer_ids: &Vec<BufferId>,
blocks: &[BlockLayout],
latest_selection_anchors: &HashMap<BufferId, Anchor>,
window: &mut Window,
cx: &mut App,
) -> AnyElement {
@@ -4495,6 +4499,7 @@ impl EditorElement {
DisplayRow(scroll_position.y as u32),
FILE_HEADER_HEIGHT + MULTI_BUFFER_EXCERPT_HEADER_HEIGHT,
excerpt,
latest_selection_anchors,
);
let editor_bg_color = cx.theme().colors().editor_background;
@@ -7794,18 +7799,52 @@ fn file_status_label_color(file_status: Option<FileStatus>) -> Color {
}
fn header_jump_data(
editor_snapshot: &EditorSnapshot,
block_row_start: DisplayRow,
height: u32,
first_excerpt: &ExcerptInfo,
latest_selection_anchors: &HashMap<BufferId, Anchor>,
) -> JumpData {
let jump_target = if let Some(anchor) = latest_selection_anchors.get(&first_excerpt.buffer_id)
&& let Some(range) = editor_snapshot.context_range_for_excerpt(anchor.excerpt_id)
&& let Some(buffer) = editor_snapshot
.buffer_snapshot()
.buffer_for_excerpt(anchor.excerpt_id)
{
JumpTargetInExcerptInput {
id: anchor.excerpt_id,
buffer,
excerpt_start_anchor: range.start,
jump_anchor: anchor.text_anchor,
}
} else {
JumpTargetInExcerptInput {
id: first_excerpt.id,
buffer: &first_excerpt.buffer,
excerpt_start_anchor: first_excerpt.range.context.start,
jump_anchor: first_excerpt.range.primary.start,
}
};
header_jump_data_inner(editor_snapshot, block_row_start, height, &jump_target)
}
struct JumpTargetInExcerptInput<'a> {
id: ExcerptId,
buffer: &'a language::BufferSnapshot,
excerpt_start_anchor: text::Anchor,
jump_anchor: text::Anchor,
}
fn header_jump_data_inner(
snapshot: &EditorSnapshot,
block_row_start: DisplayRow,
height: u32,
for_excerpt: &ExcerptInfo,
for_excerpt: &JumpTargetInExcerptInput,
) -> JumpData {
let range = &for_excerpt.range;
let buffer = &for_excerpt.buffer;
let jump_anchor = range.primary.start;
let excerpt_start = range.context.start;
let jump_position = language::ToPoint::to_point(&jump_anchor, buffer);
let rows_from_excerpt_start = if jump_anchor == excerpt_start {
let jump_position = language::ToPoint::to_point(&for_excerpt.jump_anchor, buffer);
let excerpt_start = for_excerpt.excerpt_start_anchor;
let rows_from_excerpt_start = if for_excerpt.jump_anchor == excerpt_start {
0
} else {
let excerpt_start_point = language::ToPoint::to_point(&excerpt_start, buffer);
@@ -7822,7 +7861,7 @@ fn header_jump_data(
JumpData::MultiBufferPoint {
excerpt_id: for_excerpt.id,
anchor: jump_anchor,
anchor: for_excerpt.jump_anchor,
position: jump_position,
line_offset_from_top,
}
@@ -8620,7 +8659,7 @@ impl LineWithInvisibles {
let fragment_end_x = fragment_start_x + shaped_line.width;
if x < fragment_end_x {
return Some(
fragment_start_index + shaped_line.index_for_x(x - fragment_start_x),
fragment_start_index + shaped_line.index_for_x(x - fragment_start_x)?,
);
}
fragment_start_x = fragment_end_x;
@@ -9139,15 +9178,18 @@ impl Element for EditorElement {
cx,
);
let (local_selections, selected_buffer_ids): (
let (local_selections, selected_buffer_ids, latest_selection_anchors): (
Vec<Selection<Point>>,
Vec<BufferId>,
HashMap<BufferId, Anchor>,
) = self
.editor_with_selections(cx)
.map(|editor| {
editor.update(cx, |editor, cx| {
let all_selections =
editor.selections.all::<Point>(&snapshot.display_snapshot);
let all_anchor_selections =
editor.selections.all_anchors(&snapshot.display_snapshot);
let selected_buffer_ids =
if editor.buffer_kind(cx) == ItemBufferKind::Singleton {
Vec::new()
@@ -9176,10 +9218,31 @@ impl Element for EditorElement {
selections
.extend(editor.selections.pending(&snapshot.display_snapshot));
(selections, selected_buffer_ids)
let mut anchors_by_buffer: HashMap<BufferId, (usize, Anchor)> =
HashMap::default();
for selection in all_anchor_selections.iter() {
let head = selection.head();
if let Some(buffer_id) = head.buffer_id {
anchors_by_buffer
.entry(buffer_id)
.and_modify(|(latest_id, latest_anchor)| {
if selection.id > *latest_id {
*latest_id = selection.id;
*latest_anchor = head;
}
})
.or_insert((selection.id, head));
}
}
let latest_selection_anchors = anchors_by_buffer
.into_iter()
.map(|(buffer_id, (_, anchor))| (buffer_id, anchor))
.collect();
(selections, selected_buffer_ids, latest_selection_anchors)
})
})
.unwrap_or_default();
.unwrap_or_else(|| (Vec::new(), Vec::new(), HashMap::default()));
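The reduction above (per buffer, keep the head of the selection with the highest id) can be sketched standalone; the tuple layout below is a stand-in for Zed's `Selection`/`Anchor` types, assuming selection ids grow monotonically as selections are created:

```rust
use std::collections::HashMap;

/// For each buffer, keep the anchor of the most recently created selection,
/// identified by the largest selection id seen for that buffer.
fn latest_anchor_per_buffer<A: Copy>(
    selections: &[(u64, usize, A)], // (buffer_id, selection_id, head anchor)
) -> HashMap<u64, A> {
    let mut by_buffer: HashMap<u64, (usize, A)> = HashMap::new();
    for &(buffer_id, id, anchor) in selections {
        by_buffer
            .entry(buffer_id)
            .and_modify(|(latest_id, latest_anchor)| {
                if id > *latest_id {
                    *latest_id = id;
                    *latest_anchor = anchor;
                }
            })
            .or_insert((id, anchor));
    }
    // Drop the ids once the newest anchor per buffer is known.
    by_buffer
        .into_iter()
        .map(|(buffer_id, (_, anchor))| (buffer_id, anchor))
        .collect()
}
```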
let (selections, mut active_rows, newest_selection_head) = self
.layout_selections(
@@ -9410,6 +9473,7 @@ impl Element for EditorElement {
&mut line_layouts,
&local_selections,
&selected_buffer_ids,
&latest_selection_anchors,
is_row_soft_wrapped,
sticky_header_excerpt_id,
window,
@@ -9443,6 +9507,7 @@ impl Element for EditorElement {
&hitbox,
&selected_buffer_ids,
&blocks,
&latest_selection_anchors,
window,
cx,
)

View File

@@ -341,7 +341,13 @@ fn show_hover(
renderer
.as_ref()
.and_then(|renderer| {
renderer.render_hover(group, point_range, buffer_id, cx)
renderer.render_hover(
group,
point_range,
buffer_id,
language_registry.clone(),
cx,
)
})
.context("no rendered diagnostic")
})??;
@@ -986,6 +992,11 @@ impl DiagnosticPopover {
self.markdown.clone(),
diagnostics_markdown_style(window, cx),
)
.code_block_renderer(markdown::CodeBlockRenderer::Default {
copy_button: false,
copy_button_on_hover: false,
border: false,
})
.on_url_click(
move |link, window, cx| {
if let Some(renderer) = GlobalDiagnosticRenderer::global(cx)

View File

@@ -1796,6 +1796,14 @@ impl SearchableItem for Editor {
fn search_bar_visibility_changed(&mut self, _: bool, _: &mut Window, _: &mut Context<Self>) {
self.expect_bounds_change = self.last_bounds;
}
fn set_search_is_case_sensitive(
&mut self,
case_sensitive: Option<bool>,
_cx: &mut Context<Self>,
) {
self.select_next_is_case_sensitive = case_sensitive;
}
}
pub fn active_match_index(

View File

@@ -81,7 +81,19 @@ impl MouseContextMenu {
cx: &mut Context<Editor>,
) -> Self {
let context_menu_focus = context_menu.focus_handle(cx);
window.focus(&context_menu_focus);
// Since `ContextMenu` is rendered in a deferred fashion, its focus
// handle is not linked to the Editor's until after the deferred draw
// callback runs.
// We need to wait for that to happen before focusing it, so that
// calling `contains_focused` on the editor's focus handle returns
// `true` when the `ContextMenu` is focused.
let focus_handle = context_menu_focus.clone();
cx.on_next_frame(window, move |_, window, cx| {
cx.on_next_frame(window, move |_, window, _cx| {
window.focus(&focus_handle);
});
});
let _dismiss_subscription = cx.subscribe_in(&context_menu, window, {
let context_menu_focus = context_menu_focus.clone();
@@ -329,8 +341,18 @@ mod tests {
}
"});
cx.editor(|editor, _window, _app| assert!(editor.mouse_context_menu.is_none()));
cx.update_editor(|editor, window, cx| {
deploy_context_menu(editor, Some(Default::default()), point, window, cx)
deploy_context_menu(editor, Some(Default::default()), point, window, cx);
// Assert that, even after deploying the editor's mouse context
// menu, the editor's focus handle still contains the focused
// element. The pane's tab bar relies on this to determine whether
// to show the tab bar buttons and there was a small flicker when
// deploying the mouse context menu that would cause this to not be
// true, making it so that the buttons would disappear for a couple
// of frames.
assert!(editor.focus_handle.contains_focused(window, cx));
});
cx.assert_editor_state(indoc! {"

View File

@@ -372,7 +372,7 @@ impl SelectionsCollection {
let is_empty = positions.start == positions.end;
let line_len = display_map.line_len(row);
let line = display_map.layout_row(row, text_layout_details);
let start_col = line.index_for_x(positions.start) as u32;
let start_col = line.closest_index_for_x(positions.start) as u32;
let (start, end) = if is_empty {
let point = DisplayPoint::new(row, std::cmp::min(start_col, line_len));
@@ -382,7 +382,7 @@ impl SelectionsCollection {
return None;
}
let start = DisplayPoint::new(row, start_col);
let end_col = line.index_for_x(positions.end) as u32;
let end_col = line.closest_index_for_x(positions.end) as u32;
let end = DisplayPoint::new(row, end_col);
(start, end)
};
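The switch from `index_for_x` to `closest_index_for_x` above relies on the distinction that the former fails for an x past the end of the laid-out line while the latter clamps. A hedged sketch over a plain slice of glyph advances (the real `ShapedLine` API is more involved; the `Option` return mirrors the `?` added in the fragment lookup earlier in this diff):

```rust
/// Returns the index of the glyph containing `x`, or None when `x` lies
/// past the end of the line.
fn index_for_x(advances: &[f32], x: f32) -> Option<usize> {
    let mut start = 0.0;
    for (i, &width) in advances.iter().enumerate() {
        if x < start + width {
            return Some(i);
        }
        start += width;
    }
    None
}

/// Like `index_for_x`, but clamps to the end of the line instead of
/// failing, which is what column resolution for selections wants.
fn closest_index_for_x(advances: &[f32], x: f32) -> usize {
    index_for_x(advances, x).unwrap_or(advances.len())
}
```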
@@ -487,6 +487,43 @@ impl<'snap, 'a> MutableSelectionsCollection<'snap, 'a> {
self.selections_changed |= changed;
}
pub fn remove_selections_from_buffer(&mut self, buffer_id: language::BufferId) {
let mut changed = false;
let filtered_selections: Arc<[Selection<Anchor>]> = {
self.disjoint
.iter()
.filter(|selection| {
if let Some(selection_buffer_id) =
self.snapshot.buffer_id_for_anchor(selection.start)
{
let should_remove = selection_buffer_id == buffer_id;
changed |= should_remove;
!should_remove
} else {
true
}
})
.cloned()
.collect()
};
if filtered_selections.is_empty() {
let default_anchor = self.snapshot.anchor_before(0);
self.collection.disjoint = Arc::from([Selection {
id: post_inc(&mut self.collection.next_selection_id),
start: default_anchor,
end: default_anchor,
reversed: false,
goal: SelectionGoal::None,
}]);
} else {
self.collection.disjoint = filtered_selections;
}
self.selections_changed |= changed;
}
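The fallback rule in `remove_selections_from_buffer` above (dropping every selection in a folded buffer must never leave the collection empty) can be sketched with simplified stand-in types; `Sel` and the buffer-id-0 default below are illustrative, not Zed's actual types:

```rust
#[derive(Clone, Debug, PartialEq)]
struct Sel {
    buffer_id: u64,
    start: usize,
}

/// Drop every selection living in `buffer_id`; if that would empty the
/// collection, substitute a single default caret at offset 0.
fn remove_from_buffer(selections: Vec<Sel>, buffer_id: u64) -> Vec<Sel> {
    let kept: Vec<Sel> = selections
        .into_iter()
        .filter(|s| s.buffer_id != buffer_id)
        .collect();
    if kept.is_empty() {
        vec![Sel { buffer_id: 0, start: 0 }]
    } else {
        kept
    }
}
```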
pub fn clear_pending(&mut self) {
if self.collection.pending.is_some() {
self.collection.pending = None;

View File

@@ -59,6 +59,17 @@ impl EditorTestContext {
})
.await
.unwrap();
let language = project
.read_with(cx, |project, _cx| {
project.languages().language_for_name("Plain Text")
})
.await
.unwrap();
buffer.update(cx, |buffer, cx| {
buffer.set_language(Some(language), cx);
});
let editor = cx.add_window(|window, cx| {
let editor = build_editor_with_project(
project,

View File

@@ -463,8 +463,8 @@ pub fn find_model(
.ok_or_else(|| {
anyhow::anyhow!(
"No language model with ID {}/{} was available. Available models: {}",
selected.model.0,
selected.provider.0,
selected.model.0,
model_registry
.available_models(cx)
.map(|model| format!("{}/{}", model.provider_id().0, model.id().0))

View File

@@ -267,10 +267,9 @@ impl ExtensionManifest {
let mut extension_manifest_path = extension_dir.join("extension.json");
if fs.is_file(&extension_manifest_path).await {
let manifest_content = fs
.load(&extension_manifest_path)
.await
.with_context(|| format!("failed to load {extension_name} extension.json"))?;
let manifest_content = fs.load(&extension_manifest_path).await.with_context(|| {
format!("loading {extension_name} extension.json, {extension_manifest_path:?}")
})?;
let manifest_json = serde_json::from_str::<OldExtensionManifest>(&manifest_content)
.with_context(|| {
format!("invalid extension.json for extension {extension_name}")
@@ -279,10 +278,9 @@ impl ExtensionManifest {
Ok(manifest_from_old_manifest(manifest_json, extension_name))
} else {
extension_manifest_path.set_extension("toml");
let manifest_content = fs
.load(&extension_manifest_path)
.await
.with_context(|| format!("failed to load {extension_name} extension.toml"))?;
let manifest_content = fs.load(&extension_manifest_path).await.with_context(|| {
format!("loading {extension_name} extension.toml, {extension_manifest_path:?}")
})?;
toml::from_str(&manifest_content).map_err(|err| {
anyhow!("Invalid extension.toml for extension {extension_name}:\n{err}")
})

View File

@@ -763,17 +763,17 @@ impl WasmExtension {
.fs
.open_sync(&path)
.await
.context("failed to open wasm file")?;
.context(format!("opening wasm file, path: {path:?}"))?;
let mut wasm_bytes = Vec::new();
wasm_file
.read_to_end(&mut wasm_bytes)
.context("failed to read wasm")?;
.context(format!("reading wasm file, path: {path:?}"))?;
wasm_host
.load_extension(wasm_bytes, manifest, cx)
.await
.with_context(|| format!("failed to load wasm extension {}", manifest.id))
.with_context(|| format!("loading wasm extension: {}", manifest.id))
}
pub async fn call<T, Fn>(&self, f: Fn) -> Result<T>

View File

@@ -75,6 +75,7 @@ const SUGGESTIONS_BY_EXTENSION_ID: &[(&str, &[&str])] = &[
("vue", &["vue"]),
("wgsl", &["wgsl"]),
("wit", &["wit"]),
("xml", &["xml"]),
("zig", &["zig"]),
];

View File

@@ -3452,3 +3452,99 @@ async fn test_paths_with_starting_slash(cx: &mut TestAppContext) {
assert_eq!(active_editor.read(cx).title(cx), "file1.txt");
});
}
#[gpui::test]
async fn test_clear_navigation_history(cx: &mut TestAppContext) {
let app_state = init_test(cx);
app_state
.fs
.as_fake()
.insert_tree(
path!("/src"),
json!({
"test": {
"first.rs": "// First file",
"second.rs": "// Second file",
"third.rs": "// Third file",
}
}),
)
.await;
let project = Project::test(app_state.fs.clone(), [path!("/src").as_ref()], cx).await;
let (workspace, cx) = cx.add_window_view(|window, cx| Workspace::test_new(project, window, cx));
workspace.update_in(cx, |_workspace, window, cx| window.focused(cx));
// Open some files to generate navigation history
open_close_queried_buffer("fir", 1, "first.rs", &workspace, cx).await;
open_close_queried_buffer("sec", 1, "second.rs", &workspace, cx).await;
let history_before_clear =
open_close_queried_buffer("thi", 1, "third.rs", &workspace, cx).await;
assert_eq!(
history_before_clear.len(),
2,
"Should have history items before clearing"
);
// Verify that file finder shows history items
let picker = open_file_picker(&workspace, cx);
cx.simulate_input("fir");
picker.update(cx, |finder, _| {
let matches = collect_search_matches(finder);
assert!(
!matches.history.is_empty(),
"File finder should show history items before clearing"
);
});
workspace.update_in(cx, |_, window, cx| {
window.dispatch_action(menu::Cancel.boxed_clone(), cx);
});
// Verify navigation state before clear
workspace.update(cx, |workspace, cx| {
let pane = workspace.active_pane();
pane.read(cx).can_navigate_backward()
});
// Clear navigation history
cx.dispatch_action(workspace::ClearNavigationHistory);
// Verify that navigation is disabled immediately after clear
workspace.update(cx, |workspace, cx| {
let pane = workspace.active_pane();
assert!(
!pane.read(cx).can_navigate_backward(),
"Should not be able to navigate backward after clearing history"
);
assert!(
!pane.read(cx).can_navigate_forward(),
"Should not be able to navigate forward after clearing history"
);
});
// Verify that file finder no longer shows history items
let picker = open_file_picker(&workspace, cx);
cx.simulate_input("fir");
picker.update(cx, |finder, _| {
let matches = collect_search_matches(finder);
assert!(
matches.history.is_empty(),
"File finder should not show history items after clearing"
);
});
workspace.update_in(cx, |_, window, cx| {
window.dispatch_action(menu::Cancel.boxed_clone(), cx);
});
// Verify history is empty by opening a new file
// (this should not show any previous history)
let history_after_clear =
open_close_queried_buffer("sec", 1, "second.rs", &workspace, cx).await;
assert_eq!(
history_after_clear.len(),
0,
"Should have no history items after clearing"
);
}

View File

@@ -399,7 +399,12 @@ impl PickerDelegate for OpenPathDelegate {
}
})
.unwrap_or(false);
if should_prepend_with_current_dir {
let current_dir_in_new_entries = new_entries
.iter()
.any(|entry| &entry.path.string == current_dir);
if should_prepend_with_current_dir && !current_dir_in_new_entries {
new_entries.insert(
0,
CandidateInfo {

View File

@@ -72,8 +72,8 @@ impl Watcher for FsWatcher {
}
#[cfg(target_os = "linux")]
{
log::trace!("path to watch is already watched: {path:?}");
if self.registrations.lock().contains_key(path) {
log::trace!("path to watch is already watched: {path:?}");
return Ok(());
}
}

View File

@@ -3980,9 +3980,9 @@ impl GitPanel {
.map(|ops| ops.staging() || ops.staged())
.or_else(|| {
repo.status_for_path(&entry.repo_path)
.map(|status| status.status.staging().has_staged())
.and_then(|status| status.status.staging().as_bool())
})
.unwrap_or(entry.staging.has_staged());
.or_else(|| entry.staging.as_bool());
let mut is_staged: ToggleState = is_staging_or_staged.into();
if self.show_placeholders && !self.has_staged_changes() && !entry.status.is_created() {
is_staged = ToggleState::Selected;
@@ -4102,7 +4102,9 @@ impl GitPanel {
}
})
.tooltip(move |_window, cx| {
let action = if is_staging_or_staged {
// If is_staging_or_staged is None, this implies the file was partially staged, and so
// we allow the user to stage it in full by displaying `Stage` in the tooltip.
let action = if is_staging_or_staged.unwrap_or(false) {
"Unstage"
} else {
"Stage"

View File

@@ -138,6 +138,8 @@ waker-fn = "1.2.0"
lyon = "1.0"
libc.workspace = true
pin-project = "1.1.10"
circular-buffer.workspace = true
spin = "0.10.0"
[target.'cfg(target_os = "macos")'.dependencies]
block = "0.1"

View File

@@ -178,7 +178,7 @@ impl TextInput {
if position.y > bounds.bottom() {
return self.content.len();
}
line.index_for_x(position.x - bounds.left())
line.closest_index_for_x(position.x - bounds.left())
}
fn select_to(&mut self, offset: usize, cx: &mut Context<Self>) {
@@ -380,7 +380,7 @@ impl EntityInputHandler for TextInput {
let last_layout = self.last_layout.as_ref()?;
assert_eq!(last_layout.text, self.content);
let utf8_index = last_layout.index_for_x(point.x - line_point.x);
let utf8_index = last_layout.index_for_x(point.x - line_point.x)?;
Some(self.offset_to_utf16(utf8_index))
}
}

View File

@@ -92,6 +92,10 @@ pub enum ScrollStrategy {
/// May not be possible if there aren't enough list items above the item scrolled to:
/// in this case, the element will be placed at the closest possible position.
Bottom,
/// If the element is not visible, attempt to place it at:
/// - The top of the list's viewport if the target element is above currently visible elements.
/// - The bottom of the list's viewport if the target element is below currently visible elements.
Nearest,
}
#[derive(Clone, Copy, Debug)]
@@ -391,39 +395,42 @@ impl Element for UniformList {
scroll_offset.x = Pixels::ZERO;
}
if let Some(deferred_scroll) = shared_scroll_to_item {
let mut ix = deferred_scroll.item_index;
if let Some(DeferredScrollToItem {
mut item_index,
mut strategy,
offset,
scroll_strict,
}) = shared_scroll_to_item
{
if y_flipped {
ix = self.item_count.saturating_sub(ix + 1);
item_index = self.item_count.saturating_sub(item_index + 1);
}
let list_height = padded_bounds.size.height;
let mut updated_scroll_offset = shared_scroll_offset.borrow_mut();
let item_top = item_height * ix;
let item_top = item_height * item_index;
let item_bottom = item_top + item_height;
let scroll_top = -updated_scroll_offset.y;
let offset_pixels = item_height * deferred_scroll.offset;
let mut scrolled_to_top = false;
let offset_pixels = item_height * offset;
if item_top < scroll_top + offset_pixels {
scrolled_to_top = true;
// todo: using the padding here is wrong - this only works well for few scenarios
updated_scroll_offset.y = -item_top + padding.top + offset_pixels;
} else if item_bottom > scroll_top + list_height {
scrolled_to_top = true;
updated_scroll_offset.y = -(item_bottom - list_height);
}
// is the selected item above/below currently visible items
let is_above = item_top < scroll_top + offset_pixels;
let is_below = item_bottom > scroll_top + list_height;
if deferred_scroll.scroll_strict
|| (scrolled_to_top
&& (item_top < scroll_top + offset_pixels
|| item_bottom > scroll_top + list_height))
{
match deferred_scroll.strategy {
if scroll_strict || is_above || is_below {
if strategy == ScrollStrategy::Nearest {
if is_above {
strategy = ScrollStrategy::Top;
} else if is_below {
strategy = ScrollStrategy::Bottom;
}
}
let max_scroll_offset =
(content_height - list_height).max(Pixels::ZERO);
match strategy {
ScrollStrategy::Top => {
updated_scroll_offset.y = -(item_top - offset_pixels)
.max(Pixels::ZERO)
.min(content_height - list_height)
.max(Pixels::ZERO);
.clamp(Pixels::ZERO, max_scroll_offset);
}
ScrollStrategy::Center => {
let item_center = item_top + item_height / 2.0;
@@ -431,18 +438,15 @@ impl Element for UniformList {
let viewport_height = list_height - offset_pixels;
let viewport_center = offset_pixels + viewport_height / 2.0;
let target_scroll_top = item_center - viewport_center;
updated_scroll_offset.y = -target_scroll_top
.max(Pixels::ZERO)
.min(content_height - list_height)
.max(Pixels::ZERO);
updated_scroll_offset.y =
-target_scroll_top.clamp(Pixels::ZERO, max_scroll_offset);
}
ScrollStrategy::Bottom => {
updated_scroll_offset.y = -(item_bottom - list_height
+ offset_pixels)
.max(Pixels::ZERO)
.min(content_height - list_height)
.max(Pixels::ZERO);
updated_scroll_offset.y = -(item_bottom - list_height)
.clamp(Pixels::ZERO, max_scroll_offset);
}
ScrollStrategy::Nearest => {
// Nearest, but the item is visible -> no scroll is required
}
}
}
@@ -695,3 +699,150 @@ impl InteractiveElement for UniformList {
&mut self.interactivity
}
}
#[cfg(test)]
mod test {
use crate::TestAppContext;
#[gpui::test]
fn test_scroll_strategy_nearest(cx: &mut TestAppContext) {
use crate::{
Context, FocusHandle, ScrollStrategy, UniformListScrollHandle, Window, actions, div,
prelude::*, px, uniform_list,
};
use std::ops::Range;
actions!(example, [SelectNext, SelectPrev]);
struct TestView {
index: usize,
length: usize,
scroll_handle: UniformListScrollHandle,
focus_handle: FocusHandle,
visible_range: Range<usize>,
}
impl TestView {
pub fn select_next(
&mut self,
_: &SelectNext,
window: &mut Window,
_: &mut Context<Self>,
) {
if self.index + 1 == self.length {
self.index = 0
} else {
self.index += 1;
}
self.scroll_handle
.scroll_to_item(self.index, ScrollStrategy::Nearest);
window.refresh();
}
pub fn select_previous(
&mut self,
_: &SelectPrev,
window: &mut Window,
_: &mut Context<Self>,
) {
if self.index == 0 {
self.index = self.length - 1
} else {
self.index -= 1;
}
self.scroll_handle
.scroll_to_item(self.index, ScrollStrategy::Nearest);
window.refresh();
}
}
impl Render for TestView {
fn render(&mut self, _window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
div()
.id("list-example")
.track_focus(&self.focus_handle)
.on_action(cx.listener(Self::select_next))
.on_action(cx.listener(Self::select_previous))
.size_full()
.child(
uniform_list(
"entries",
self.length,
cx.processor(|this, range: Range<usize>, _window, _cx| {
this.visible_range = range.clone();
range
.map(|ix| div().id(ix).h(px(20.0)).child(format!("Item {ix}")))
.collect()
}),
)
.track_scroll(self.scroll_handle.clone())
.h(px(200.0)),
)
}
}
let (view, cx) = cx.add_window_view(|window, cx| {
let focus_handle = cx.focus_handle();
window.focus(&focus_handle);
TestView {
scroll_handle: UniformListScrollHandle::new(),
index: 0,
focus_handle,
length: 47,
visible_range: 0..0,
}
});
// 10 out of 47 items are visible
// First 9 times selecting next item does not scroll
for ix in 1..10 {
cx.dispatch_action(SelectNext);
view.read_with(cx, |view, _| {
assert_eq!(view.index, ix);
assert_eq!(view.visible_range, 0..10);
})
}
// Now each time the list scrolls down by 1
for ix in 10..47 {
cx.dispatch_action(SelectNext);
view.read_with(cx, |view, _| {
assert_eq!(view.index, ix);
assert_eq!(view.visible_range, ix - 9..ix + 1);
})
}
// After the last item we move back to the start
cx.dispatch_action(SelectNext);
view.read_with(cx, |view, _| {
assert_eq!(view.index, 0);
assert_eq!(view.visible_range, 0..10);
});
// Return to the last element
cx.dispatch_action(SelectPrev);
view.read_with(cx, |view, _| {
assert_eq!(view.index, 46);
assert_eq!(view.visible_range, 37..47);
});
// First 9 times selecting previous does not scroll
for ix in (37..46).rev() {
cx.dispatch_action(SelectPrev);
view.read_with(cx, |view, _| {
assert_eq!(view.index, ix);
assert_eq!(view.visible_range, 37..47);
})
}
// Now each time the list scrolls up by 1
for ix in (0..37).rev() {
cx.dispatch_action(SelectPrev);
view.read_with(cx, |view, _| {
assert_eq!(view.index, ix);
assert_eq!(view.visible_range, ix..ix + 10);
})
}
}
}

View File

@@ -1,4 +1,4 @@
use crate::{App, PlatformDispatcher};
use crate::{App, PlatformDispatcher, RunnableMeta, RunnableVariant};
use async_task::Runnable;
use futures::channel::mpsc;
use smol::prelude::*;
@@ -62,7 +62,7 @@ enum TaskState<T> {
Ready(Option<T>),
/// A task that is currently running.
Spawned(async_task::Task<T>),
Spawned(async_task::Task<T, RunnableMeta>),
}
impl<T> Task<T> {
@@ -146,6 +146,7 @@ impl BackgroundExecutor {
}
/// Enqueues the given future to be run to completion on a background thread.
#[track_caller]
pub fn spawn<R>(&self, future: impl Future<Output = R> + Send + 'static) -> Task<R>
where
R: Send + 'static,
@@ -155,6 +156,7 @@ impl BackgroundExecutor {
/// Enqueues the given future to be run to completion on a background thread.
/// The given label can be used to control the priority of the task in tests.
#[track_caller]
pub fn spawn_labeled<R>(
&self,
label: TaskLabel,
@@ -166,14 +168,20 @@ impl BackgroundExecutor {
self.spawn_internal::<R>(Box::pin(future), Some(label))
}
#[track_caller]
fn spawn_internal<R: Send + 'static>(
&self,
future: AnyFuture<R>,
label: Option<TaskLabel>,
) -> Task<R> {
let dispatcher = self.dispatcher.clone();
let (runnable, task) =
async_task::spawn(future, move |runnable| dispatcher.dispatch(runnable, label));
let location = core::panic::Location::caller();
let (runnable, task) = async_task::Builder::new()
.metadata(RunnableMeta { location })
.spawn(
move |_| future,
move |runnable| dispatcher.dispatch(RunnableVariant::Meta(runnable), label),
);
runnable.schedule();
Task(TaskState::Spawned(task))
}
@@ -281,7 +289,11 @@ impl BackgroundExecutor {
});
let mut cx = std::task::Context::from_waker(&waker);
let duration = Duration::from_secs(180);
let duration = Duration::from_secs(
option_env!("GPUI_TEST_TIMEOUT")
.and_then(|s| s.parse::<u64>().ok())
.unwrap_or(180),
);
let mut test_should_end_by = Instant::now() + duration;
loop {
@@ -315,10 +327,8 @@ impl BackgroundExecutor {
"parked with nothing left to run{waiting_message}{backtrace_message}",
)
}
dispatcher.set_unparker(unparker.clone());
parker.park_timeout(
test_should_end_by.saturating_duration_since(Instant::now()),
);
dispatcher.push_unparker(unparker.clone());
parker.park_timeout(Duration::from_millis(1));
if Instant::now() > test_should_end_by {
panic!("test timed out after {duration:?} with allow_parking")
}
@@ -370,10 +380,13 @@ impl BackgroundExecutor {
if duration.is_zero() {
return Task::ready(());
}
let (runnable, task) = async_task::spawn(async move {}, {
let dispatcher = self.dispatcher.clone();
move |runnable| dispatcher.dispatch_after(duration, runnable)
});
let location = core::panic::Location::caller();
let (runnable, task) = async_task::Builder::new()
.metadata(RunnableMeta { location })
.spawn(move |_| async move {}, {
let dispatcher = self.dispatcher.clone();
move |runnable| dispatcher.dispatch_after(duration, RunnableVariant::Meta(runnable))
});
runnable.schedule();
Task(TaskState::Spawned(task))
}
@@ -479,24 +492,29 @@ impl ForegroundExecutor {
}
/// Enqueues the given Task to run on the main thread at some point in the future.
#[track_caller]
pub fn spawn<R>(&self, future: impl Future<Output = R> + 'static) -> Task<R>
where
R: 'static,
{
let dispatcher = self.dispatcher.clone();
let location = core::panic::Location::caller();
#[track_caller]
fn inner<R: 'static>(
dispatcher: Arc<dyn PlatformDispatcher>,
future: AnyLocalFuture<R>,
location: &'static core::panic::Location<'static>,
) -> Task<R> {
let (runnable, task) = spawn_local_with_source_location(future, move |runnable| {
dispatcher.dispatch_on_main_thread(runnable)
});
let (runnable, task) = spawn_local_with_source_location(
future,
move |runnable| dispatcher.dispatch_on_main_thread(RunnableVariant::Meta(runnable)),
RunnableMeta { location },
);
runnable.schedule();
Task(TaskState::Spawned(task))
}
inner::<R>(dispatcher, Box::pin(future))
inner::<R>(dispatcher, Box::pin(future), location)
}
}
@@ -505,14 +523,16 @@ impl ForegroundExecutor {
/// Copy-modified from:
/// <https://github.com/smol-rs/async-task/blob/ca9dbe1db9c422fd765847fa91306e30a6bb58a9/src/runnable.rs#L405>
#[track_caller]
fn spawn_local_with_source_location<Fut, S>(
fn spawn_local_with_source_location<Fut, S, M>(
future: Fut,
schedule: S,
) -> (Runnable<()>, async_task::Task<Fut::Output, ()>)
metadata: M,
) -> (Runnable<M>, async_task::Task<Fut::Output, M>)
where
Fut: Future + 'static,
Fut::Output: 'static,
S: async_task::Schedule<()> + Send + Sync + 'static,
S: async_task::Schedule<M> + Send + Sync + 'static,
M: 'static,
{
#[inline]
fn thread_id() -> ThreadId {
@@ -560,7 +580,11 @@ where
location: Location::caller(),
};
unsafe { async_task::spawn_unchecked(future, schedule) }
unsafe {
async_task::Builder::new()
.metadata(metadata)
.spawn_unchecked(move |_| future, schedule)
}
}
/// Scope manages a set of tasks that are enqueued and waited on together. See [`BackgroundExecutor::scoped`].
@@ -590,6 +614,7 @@ impl<'a> Scope<'a> {
}
/// Spawn a future into this scope.
#[track_caller]
pub fn spawn<F>(&mut self, f: F)
where
F: Future<Output = ()> + Send + 'a,

View File

@@ -30,6 +30,7 @@ mod keymap;
mod path_builder;
mod platform;
pub mod prelude;
mod profiler;
mod scene;
mod shared_string;
mod shared_uri;
@@ -87,6 +88,7 @@ use key_dispatch::*;
pub use keymap::*;
pub use path_builder::*;
pub use platform::*;
pub use profiler::*;
pub use refineable::*;
pub use scene::*;
pub use shared_string::*;

View File

@@ -305,9 +305,10 @@ pub enum KeyboardButton {
}
/// An enum representing the mouse button that was pressed.
#[derive(Hash, PartialEq, Eq, Copy, Clone, Debug)]
#[derive(Hash, Default, PartialEq, Eq, Copy, Clone, Debug)]
pub enum MouseButton {
/// The left mouse button.
#[default]
Left,
/// The right mouse button.
@@ -333,28 +334,17 @@ impl MouseButton {
}
}
impl Default for MouseButton {
fn default() -> Self {
Self::Left
}
}
/// A navigation direction, such as back or forward.
#[derive(Hash, PartialEq, Eq, Copy, Clone, Debug)]
#[derive(Hash, Default, PartialEq, Eq, Copy, Clone, Debug)]
pub enum NavigationDirection {
/// The back button.
#[default]
Back,
/// The forward button.
Forward,
}
impl Default for NavigationDirection {
fn default() -> Self {
Self::Back
}
}
/// A mouse move event from the platform.
#[derive(Clone, Debug, Default)]
pub struct MouseMoveEvent {

View File

@@ -40,8 +40,8 @@ use crate::{
DEFAULT_WINDOW_SIZE, DevicePixels, DispatchEventResult, Font, FontId, FontMetrics, FontRun,
ForegroundExecutor, GlyphId, GpuSpecs, ImageSource, Keymap, LineLayout, Pixels, PlatformInput,
Point, RenderGlyphParams, RenderImage, RenderImageParams, RenderSvgParams, Scene, ShapedGlyph,
ShapedRun, SharedString, Size, SvgRenderer, SystemWindowTab, Task, TaskLabel, Window,
WindowControlArea, hash, point, px, size,
ShapedRun, SharedString, Size, SvgRenderer, SystemWindowTab, Task, TaskLabel, TaskTiming,
ThreadTaskTimings, Window, WindowControlArea, hash, point, px, size,
};
use anyhow::Result;
use async_task::Runnable;
@@ -559,14 +559,32 @@ pub(crate) trait PlatformWindow: HasWindowHandle + HasDisplayHandle {
}
}
/// This type is public so that our test macro can generate and use it, but it should not
/// be considered part of our public API.
#[doc(hidden)]
#[derive(Debug)]
pub struct RunnableMeta {
/// Location of the runnable
pub location: &'static core::panic::Location<'static>,
}
#[doc(hidden)]
pub enum RunnableVariant {
Meta(Runnable<RunnableMeta>),
Compat(Runnable),
}
/// This type is public so that our test macro can generate and use it, but it should not
/// be considered part of our public API.
#[doc(hidden)]
pub trait PlatformDispatcher: Send + Sync {
fn get_all_timings(&self) -> Vec<ThreadTaskTimings>;
fn get_current_thread_timings(&self) -> Vec<TaskTiming>;
fn is_main_thread(&self) -> bool;
fn dispatch(&self, runnable: Runnable, label: Option<TaskLabel>);
fn dispatch_on_main_thread(&self, runnable: Runnable);
fn dispatch_after(&self, duration: Duration, runnable: Runnable);
fn dispatch(&self, runnable: RunnableVariant, label: Option<TaskLabel>);
fn dispatch_on_main_thread(&self, runnable: RunnableVariant);
fn dispatch_after(&self, duration: Duration, runnable: RunnableVariant);
fn now(&self) -> Instant {
Instant::now()
}
@@ -1328,11 +1346,12 @@ pub enum WindowKind {
///
/// On macOS, this corresponds to named [`NSAppearance`](https://developer.apple.com/documentation/appkit/nsappearance)
/// values.
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
#[derive(Copy, Clone, Debug, Default, PartialEq, Eq)]
pub enum WindowAppearance {
/// A light appearance.
///
/// On macOS, this corresponds to the `aqua` appearance.
#[default]
Light,
/// A light appearance with vibrant colors.
@@ -1351,12 +1370,6 @@ pub enum WindowAppearance {
VibrantDark,
}
impl Default for WindowAppearance {
fn default() -> Self {
Self::Light
}
}
/// The appearance of the background of the window itself, when there is
/// no content or the content is transparent.
#[derive(Copy, Clone, Debug, Default, PartialEq)]
@@ -1457,9 +1470,10 @@ impl From<&str> for PromptButton {
}
/// The style of the cursor (pointer)
#[derive(Copy, Clone, Debug, PartialEq, Eq, Hash, Serialize, Deserialize, JsonSchema)]
#[derive(Copy, Clone, Default, Debug, PartialEq, Eq, Hash, Serialize, Deserialize, JsonSchema)]
pub enum CursorStyle {
/// The default cursor
#[default]
Arrow,
/// A text input cursor
@@ -1546,12 +1560,6 @@ pub enum CursorStyle {
None,
}
impl Default for CursorStyle {
fn default() -> Self {
Self::Arrow
}
}
/// A clipboard item that should be copied to the clipboard
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct ClipboardItem {

View File

@@ -1,5 +1,7 @@
use crate::{PlatformDispatcher, TaskLabel};
use async_task::Runnable;
use crate::{
GLOBAL_THREAD_TIMINGS, PlatformDispatcher, RunnableVariant, THREAD_TIMINGS, TaskLabel,
TaskTiming, ThreadTaskTimings,
};
use calloop::{
EventLoop,
channel::{self, Sender},
@@ -13,20 +15,20 @@ use util::ResultExt;
struct TimerAfter {
duration: Duration,
runnable: Runnable,
runnable: RunnableVariant,
}
pub(crate) struct LinuxDispatcher {
main_sender: Sender<Runnable>,
main_sender: Sender<RunnableVariant>,
timer_sender: Sender<TimerAfter>,
background_sender: flume::Sender<Runnable>,
background_sender: flume::Sender<RunnableVariant>,
_background_threads: Vec<thread::JoinHandle<()>>,
main_thread_id: thread::ThreadId,
}
impl LinuxDispatcher {
pub fn new(main_sender: Sender<Runnable>) -> Self {
let (background_sender, background_receiver) = flume::unbounded::<Runnable>();
pub fn new(main_sender: Sender<RunnableVariant>) -> Self {
let (background_sender, background_receiver) = flume::unbounded::<RunnableVariant>();
let thread_count = std::thread::available_parallelism()
.map(|i| i.get())
.unwrap_or(1);
@@ -40,7 +42,36 @@ impl LinuxDispatcher {
for runnable in receiver {
let start = Instant::now();
runnable.run();
let mut location = match runnable {
RunnableVariant::Meta(runnable) => {
let location = runnable.metadata().location;
let timing = TaskTiming {
location,
start,
end: None,
};
Self::add_task_timing(timing);
runnable.run();
timing
}
RunnableVariant::Compat(runnable) => {
let location = core::panic::Location::caller();
let timing = TaskTiming {
location,
start,
end: None,
};
Self::add_task_timing(timing);
runnable.run();
timing
}
};
let end = Instant::now();
location.end = Some(end);
Self::add_task_timing(location);
log::trace!(
"background thread {}: ran runnable. took: {:?}",
@@ -72,7 +103,36 @@ impl LinuxDispatcher {
calloop::timer::Timer::from_duration(timer.duration),
move |_, _, _| {
if let Some(runnable) = runnable.take() {
runnable.run();
let start = Instant::now();
let mut timing = match runnable {
RunnableVariant::Meta(runnable) => {
let location = runnable.metadata().location;
let timing = TaskTiming {
location,
start,
end: None,
};
Self::add_task_timing(timing);
runnable.run();
timing
}
RunnableVariant::Compat(runnable) => {
let timing = TaskTiming {
location: core::panic::Location::caller(),
start,
end: None,
};
Self::add_task_timing(timing);
runnable.run();
timing
}
};
let end = Instant::now();
timing.end = Some(end);
Self::add_task_timing(timing);
}
TimeoutAction::Drop
},
@@ -96,18 +156,53 @@ impl LinuxDispatcher {
main_thread_id: thread::current().id(),
}
}
pub(crate) fn add_task_timing(timing: TaskTiming) {
THREAD_TIMINGS.with(|timings| {
let mut timings = timings.lock();
let timings = &mut timings.timings;
if let Some(last_timing) = timings.iter_mut().rev().next() {
if last_timing.location == timing.location {
last_timing.end = timing.end;
return;
}
}
timings.push_back(timing);
});
}
}
impl PlatformDispatcher for LinuxDispatcher {
fn get_all_timings(&self) -> Vec<crate::ThreadTaskTimings> {
let global_timings = GLOBAL_THREAD_TIMINGS.lock();
ThreadTaskTimings::convert(&global_timings)
}
fn get_current_thread_timings(&self) -> Vec<crate::TaskTiming> {
THREAD_TIMINGS.with(|timings| {
let timings = timings.lock();
let timings = &timings.timings;
let mut vec = Vec::with_capacity(timings.len());
let (s1, s2) = timings.as_slices();
vec.extend_from_slice(s1);
vec.extend_from_slice(s2);
vec
})
}
fn is_main_thread(&self) -> bool {
thread::current().id() == self.main_thread_id
}
fn dispatch(&self, runnable: Runnable, _: Option<TaskLabel>) {
fn dispatch(&self, runnable: RunnableVariant, _: Option<TaskLabel>) {
self.background_sender.send(runnable).unwrap();
}
fn dispatch_on_main_thread(&self, runnable: Runnable) {
fn dispatch_on_main_thread(&self, runnable: RunnableVariant) {
self.main_sender.send(runnable).unwrap_or_else(|runnable| {
// NOTE: Runnable may wrap a Future that is !Send.
//
@@ -121,7 +216,7 @@ impl PlatformDispatcher for LinuxDispatcher {
});
}
fn dispatch_after(&self, duration: Duration, runnable: Runnable) {
fn dispatch_after(&self, duration: Duration, runnable: RunnableVariant) {
self.timer_sender
.send(TimerAfter { duration, runnable })
.ok();

View File

@@ -31,7 +31,10 @@ impl HeadlessClient {
handle
.insert_source(main_receiver, |event, _, _: &mut HeadlessClient| {
if let calloop::channel::Event::Msg(runnable) = event {
runnable.run();
match runnable {
crate::RunnableVariant::Meta(runnable) => runnable.run(),
crate::RunnableVariant::Compat(runnable) => runnable.run(),
};
}
})
.ok();

View File

@@ -15,7 +15,6 @@ use std::{
};
use anyhow::{Context as _, anyhow};
use async_task::Runnable;
use calloop::{LoopSignal, channel::Channel};
use futures::channel::oneshot;
use util::ResultExt as _;
@@ -26,7 +25,8 @@ use crate::{
Action, AnyWindowHandle, BackgroundExecutor, ClipboardItem, CursorStyle, DisplayId,
ForegroundExecutor, Keymap, LinuxDispatcher, Menu, MenuItem, OwnedMenu, PathPromptOptions,
Pixels, Platform, PlatformDisplay, PlatformKeyboardLayout, PlatformKeyboardMapper,
PlatformTextSystem, PlatformWindow, Point, Result, Task, WindowAppearance, WindowParams, px,
PlatformTextSystem, PlatformWindow, Point, Result, RunnableVariant, Task, WindowAppearance,
WindowParams, px,
};
#[cfg(any(feature = "wayland", feature = "x11"))]
@@ -105,8 +105,8 @@ pub(crate) struct LinuxCommon {
}
impl LinuxCommon {
pub fn new(signal: LoopSignal) -> (Self, Channel<Runnable>) {
let (main_sender, main_receiver) = calloop::channel::channel::<Runnable>();
pub fn new(signal: LoopSignal) -> (Self, Channel<RunnableVariant>) {
let (main_sender, main_receiver) = calloop::channel::channel::<RunnableVariant>();
#[cfg(any(feature = "wayland", feature = "x11"))]
let text_system = Arc::new(crate::CosmicTextSystem::new());

View File

@@ -71,7 +71,6 @@ use super::{
window::{ImeInput, WaylandWindowStatePtr},
};
use crate::platform::{PlatformWindow, blade::BladeContext};
use crate::{
AnyWindowHandle, Bounds, Capslock, CursorStyle, DOUBLE_CLICK_INTERVAL, DevicePixels, DisplayId,
FileDropEvent, ForegroundExecutor, KeyDownEvent, KeyUpEvent, Keystroke, LinuxCommon,
@@ -80,6 +79,10 @@ use crate::{
PlatformInput, PlatformKeyboardLayout, Point, SCROLL_LINES, ScrollDelta, ScrollWheelEvent,
Size, TouchPhase, WindowParams, point, px, size,
};
use crate::{
LinuxDispatcher, RunnableVariant, TaskTiming,
platform::{PlatformWindow, blade::BladeContext},
};
use crate::{
SharedString,
platform::linux::{
@@ -491,7 +494,37 @@ impl WaylandClient {
move |event, _, _: &mut WaylandClientStatePtr| {
if let calloop::channel::Event::Msg(runnable) = event {
handle.insert_idle(|_| {
runnable.run();
let start = Instant::now();
let mut timing = match runnable {
RunnableVariant::Meta(runnable) => {
let location = runnable.metadata().location;
let timing = TaskTiming {
location,
start,
end: None,
};
LinuxDispatcher::add_task_timing(timing);
runnable.run();
timing
}
RunnableVariant::Compat(runnable) => {
let location = core::panic::Location::caller();
let timing = TaskTiming {
location,
start,
end: None,
};
LinuxDispatcher::add_task_timing(timing);
runnable.run();
timing
}
};
let end = Instant::now();
timing.end = Some(end);
LinuxDispatcher::add_task_timing(timing);
});
}
}

View File

@@ -1,4 +1,4 @@
use crate::{Capslock, xcb_flush};
use crate::{Capslock, LinuxDispatcher, RunnableVariant, TaskTiming, xcb_flush};
use anyhow::{Context as _, anyhow};
use ashpd::WindowIdentifier;
use calloop::{
@@ -313,7 +313,37 @@ impl X11Client {
// events have higher priority and runnables are only worked off after the event
// callbacks.
handle.insert_idle(|_| {
runnable.run();
let start = Instant::now();
let mut timing = match runnable {
RunnableVariant::Meta(runnable) => {
let location = runnable.metadata().location;
let timing = TaskTiming {
location,
start,
end: None,
};
LinuxDispatcher::add_task_timing(timing);
runnable.run();
timing
}
RunnableVariant::Compat(runnable) => {
let location = core::panic::Location::caller();
let timing = TaskTiming {
location,
start,
end: None,
};
LinuxDispatcher::add_task_timing(timing);
runnable.run();
timing
}
};
let end = Instant::now();
timing.end = Some(end);
LinuxDispatcher::add_task_timing(timing);
});
}
}

View File

@@ -2,7 +2,11 @@
#![allow(non_camel_case_types)]
#![allow(non_snake_case)]
use crate::{PlatformDispatcher, TaskLabel};
use crate::{
GLOBAL_THREAD_TIMINGS, PlatformDispatcher, RunnableMeta, RunnableVariant, THREAD_TIMINGS,
TaskLabel, TaskTiming, ThreadTaskTimings,
};
use async_task::Runnable;
use objc::{
class, msg_send,
@@ -12,7 +16,7 @@ use objc::{
use std::{
ffi::c_void,
ptr::{NonNull, addr_of},
time::Duration,
time::{Duration, Instant},
};
/// All items in the generated file are marked as pub, so we're gonna wrap it in a separate mod to prevent
@@ -29,47 +33,155 @@ pub(crate) fn dispatch_get_main_queue() -> dispatch_queue_t {
pub(crate) struct MacDispatcher;
impl PlatformDispatcher for MacDispatcher {
fn get_all_timings(&self) -> Vec<ThreadTaskTimings> {
let global_timings = GLOBAL_THREAD_TIMINGS.lock();
ThreadTaskTimings::convert(&global_timings)
}
fn get_current_thread_timings(&self) -> Vec<TaskTiming> {
THREAD_TIMINGS.with(|timings| {
let timings = &timings.lock().timings;
let mut vec = Vec::with_capacity(timings.len());
let (s1, s2) = timings.as_slices();
vec.extend_from_slice(s1);
vec.extend_from_slice(s2);
vec
})
}
fn is_main_thread(&self) -> bool {
let is_main_thread: BOOL = unsafe { msg_send![class!(NSThread), isMainThread] };
is_main_thread == YES
}
fn dispatch(&self, runnable: Runnable, _: Option<TaskLabel>) {
fn dispatch(&self, runnable: RunnableVariant, _: Option<TaskLabel>) {
let (context, trampoline) = match runnable {
RunnableVariant::Meta(runnable) => (
runnable.into_raw().as_ptr() as *mut c_void,
Some(trampoline as unsafe extern "C" fn(*mut c_void)),
),
RunnableVariant::Compat(runnable) => (
runnable.into_raw().as_ptr() as *mut c_void,
Some(trampoline_compat as unsafe extern "C" fn(*mut c_void)),
),
};
unsafe {
dispatch_async_f(
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH.try_into().unwrap(), 0),
runnable.into_raw().as_ptr() as *mut c_void,
Some(trampoline),
context,
trampoline,
);
}
}
fn dispatch_on_main_thread(&self, runnable: Runnable) {
fn dispatch_on_main_thread(&self, runnable: RunnableVariant) {
let (context, trampoline) = match runnable {
RunnableVariant::Meta(runnable) => (
runnable.into_raw().as_ptr() as *mut c_void,
Some(trampoline as unsafe extern "C" fn(*mut c_void)),
),
RunnableVariant::Compat(runnable) => (
runnable.into_raw().as_ptr() as *mut c_void,
Some(trampoline_compat as unsafe extern "C" fn(*mut c_void)),
),
};
unsafe {
dispatch_async_f(
dispatch_get_main_queue(),
runnable.into_raw().as_ptr() as *mut c_void,
Some(trampoline),
);
dispatch_async_f(dispatch_get_main_queue(), context, trampoline);
}
}
fn dispatch_after(&self, duration: Duration, runnable: Runnable) {
fn dispatch_after(&self, duration: Duration, runnable: RunnableVariant) {
let (context, trampoline) = match runnable {
RunnableVariant::Meta(runnable) => (
runnable.into_raw().as_ptr() as *mut c_void,
Some(trampoline as unsafe extern "C" fn(*mut c_void)),
),
RunnableVariant::Compat(runnable) => (
runnable.into_raw().as_ptr() as *mut c_void,
Some(trampoline_compat as unsafe extern "C" fn(*mut c_void)),
),
};
unsafe {
let queue =
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH.try_into().unwrap(), 0);
let when = dispatch_time(DISPATCH_TIME_NOW as u64, duration.as_nanos() as i64);
dispatch_after_f(when, queue, context, trampoline);
}
}
}
extern "C" fn trampoline(runnable: *mut c_void) {
let task =
unsafe { Runnable::<RunnableMeta>::from_raw(NonNull::new_unchecked(runnable as *mut ())) };
let location = task.metadata().location;
let start = Instant::now();
let timing = TaskTiming {
location,
start,
end: None,
};
THREAD_TIMINGS.with(|timings| {
let mut timings = timings.lock();
let timings = &mut timings.timings;
if let Some(last_timing) = timings.iter_mut().rev().next() {
if last_timing.location == timing.location {
return;
}
}
timings.push_back(timing);
});
task.run();
let end = Instant::now();
THREAD_TIMINGS.with(|timings| {
let mut timings = timings.lock();
let timings = &mut timings.timings;
let Some(last_timing) = timings.iter_mut().rev().next() else {
return;
};
last_timing.end = Some(end);
});
}
extern "C" fn trampoline_compat(runnable: *mut c_void) {
let task = unsafe { Runnable::<()>::from_raw(NonNull::new_unchecked(runnable as *mut ())) };
let location = core::panic::Location::caller();
let start = Instant::now();
let timing = TaskTiming {
location,
start,
end: None,
};
THREAD_TIMINGS.with(|timings| {
let mut timings = timings.lock();
let timings = &mut timings.timings;
if let Some(last_timing) = timings.iter_mut().rev().next() {
if last_timing.location == timing.location {
return;
}
}
timings.push_back(timing);
});
task.run();
let end = Instant::now();
THREAD_TIMINGS.with(|timings| {
let mut timings = timings.lock();
let timings = &mut timings.timings;
let Some(last_timing) = timings.iter_mut().rev().next() else {
return;
};
last_timing.end = Some(end);
});
}
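The macOS, Windows, and test dispatchers in this diff all split a `RunnableVariant` into a context pointer plus the matching C trampoline before enqueueing it on a platform queue. A minimal standalone sketch of that pattern, with made-up task types standing in for GPUI's `Runnable<RunnableMeta>` and `Runnable<()>` (the counters exist only so the example can observe which trampoline ran):

```rust
use std::os::raw::c_void;
use std::sync::atomic::{AtomicUsize, Ordering};

// Counters so the example can observe which trampoline ran.
static META_RUNS: AtomicUsize = AtomicUsize::new(0);
static COMPAT_RUNS: AtomicUsize = AtomicUsize::new(0);

// Hypothetical stand-ins for the two task flavors (not the real GPUI types).
struct MetaTask;
struct CompatTask;

enum RunnableVariant {
    Meta(MetaTask),
    Compat(CompatTask),
}

// The trampoline cast must match the pointee type, so the variant is
// destructured once, up front, into a (context, function) pair.
fn split(runnable: RunnableVariant) -> (*mut c_void, unsafe extern "C" fn(*mut c_void)) {
    match runnable {
        RunnableVariant::Meta(t) => (Box::into_raw(Box::new(t)) as *mut c_void, run_meta),
        RunnableVariant::Compat(t) => (Box::into_raw(Box::new(t)) as *mut c_void, run_compat),
    }
}

unsafe extern "C" fn run_meta(ctx: *mut c_void) {
    // Reclaim ownership of the boxed task; dropping it frees the allocation.
    let _task = unsafe { Box::from_raw(ctx as *mut MetaTask) };
    META_RUNS.fetch_add(1, Ordering::SeqCst);
}

unsafe extern "C" fn run_compat(ctx: *mut c_void) {
    let _task = unsafe { Box::from_raw(ctx as *mut CompatTask) };
    COMPAT_RUNS.fetch_add(1, Ordering::SeqCst);
}

// The real dispatcher hands this pair to dispatch_async_f; here we run it inline.
fn dispatch_now(runnable: RunnableVariant) {
    let (ctx, trampoline) = split(runnable);
    unsafe { trampoline(ctx) }
}

fn main() {
    dispatch_now(RunnableVariant::Meta(MetaTask));
    dispatch_now(RunnableVariant::Compat(CompatTask));
    println!(
        "meta={} compat={}",
        META_RUNS.load(Ordering::SeqCst),
        COMPAT_RUNS.load(Ordering::SeqCst)
    );
}
```

This is why the diff replaces the single `trampoline` argument with a `(context, trampoline)` pair computed by the `match`: the function pointer has to agree with the type the context pointer will be cast back to.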


@@ -651,9 +651,12 @@ impl Platform for MacPlatform {
fn open_url(&self, url: &str) {
unsafe {
let ns_url = NSURL::alloc(nil).initWithString_(ns_string(url));
if ns_url.is_null() {
log::error!("Failed to create NSURL from string: {}", url);
return;
}
let url = ns_url.autorelease();
let workspace: id = msg_send![class!(NSWorkspace), sharedWorkspace];
msg_send![workspace, openURL: url]
}


@@ -1967,10 +1967,36 @@ extern "C" fn window_did_move(this: &Object, _: Sel, _: id) {
}
}
// Update the window scale factor and drawable size, and call the resize callback if any.
fn update_window_scale_factor(window_state: &Arc<Mutex<MacWindowState>>) {
let mut lock = window_state.as_ref().lock();
let scale_factor = lock.scale_factor();
let size = lock.content_size();
let drawable_size = size.to_device_pixels(scale_factor);
unsafe {
let _: () = msg_send![
lock.renderer.layer(),
setContentsScale: scale_factor as f64
];
}
lock.renderer.update_drawable_size(drawable_size);
if let Some(mut callback) = lock.resize_callback.take() {
let content_size = lock.content_size();
let scale_factor = lock.scale_factor();
drop(lock);
callback(content_size, scale_factor);
window_state.as_ref().lock().resize_callback = Some(callback);
};
}
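The take/drop/reinstall dance around `resize_callback` in `update_window_scale_factor` matters because the callback may re-enter the window state; holding the mutex across the call would deadlock. A simplified sketch of the pattern, with stand-in types rather than the real `MacWindowState`:

```rust
use std::sync::{Arc, Mutex};

// Hypothetical stand-in for the window state guarded by the mutex.
struct WindowState {
    resize_callback: Option<Box<dyn FnMut(u32)>>,
    size: u32,
}

// Take the callback out of the state, drop the lock, invoke the callback
// (which may re-lock the state), then reinstall it for the next event.
fn fire_resize(state: &Arc<Mutex<WindowState>>) {
    let mut lock = state.lock().unwrap();
    if let Some(mut callback) = lock.resize_callback.take() {
        let size = lock.size;
        drop(lock); // released before the callback runs
        callback(size);
        state.lock().unwrap().resize_callback = Some(callback);
    }
}

fn main() {
    let state = Arc::new(Mutex::new(WindowState {
        resize_callback: Some(Box::new(|size| println!("resized to {size}"))),
        size: 7,
    }));
    fire_resize(&state);
}
```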
extern "C" fn window_did_change_screen(this: &Object, _: Sel, _: id) {
let window_state = unsafe { get_window_state(this) };
let mut lock = window_state.as_ref().lock();
lock.start_display_link();
drop(lock);
update_window_scale_factor(&window_state);
}
extern "C" fn window_did_change_key_status(this: &Object, selector: Sel, _: id) {
@@ -2079,27 +2105,7 @@ extern "C" fn make_backing_layer(this: &Object, _: Sel) -> id {
extern "C" fn view_did_change_backing_properties(this: &Object, _: Sel) {
let window_state = unsafe { get_window_state(this) };
update_window_scale_factor(&window_state);
}
extern "C" fn set_frame_size(this: &Object, _: Sel, size: NSSize) {


@@ -1,5 +1,4 @@
use crate::{PlatformDispatcher, RunnableVariant, TaskLabel};
use backtrace::Backtrace;
use collections::{HashMap, HashSet, VecDeque};
use parking::Unparker;
@@ -26,10 +25,10 @@ pub struct TestDispatcher {
struct TestDispatcherState {
random: StdRng,
foreground: HashMap<TestDispatcherId, VecDeque<RunnableVariant>>,
background: Vec<RunnableVariant>,
deprioritized_background: Vec<RunnableVariant>,
delayed: Vec<(Duration, RunnableVariant)>,
start_time: Instant,
time: Duration,
is_main_thread: bool,
@@ -39,7 +38,7 @@ struct TestDispatcherState {
waiting_backtrace: Option<Backtrace>,
deprioritized_task_labels: HashSet<TaskLabel>,
block_on_ticks: RangeInclusive<usize>,
unparkers: Vec<Unparker>,
}
impl TestDispatcher {
@@ -59,7 +58,7 @@ impl TestDispatcher {
waiting_backtrace: None,
deprioritized_task_labels: Default::default(),
block_on_ticks: 0..=1000,
unparkers: Default::default(),
};
TestDispatcher {
@@ -175,7 +174,13 @@ impl TestDispatcher {
let was_main_thread = state.is_main_thread;
state.is_main_thread = main_thread;
drop(state);
// todo(localcc): add timings to tests
match runnable {
RunnableVariant::Meta(runnable) => runnable.run(),
RunnableVariant::Compat(runnable) => runnable.run(),
};
self.state.lock().is_main_thread = was_main_thread;
true
@@ -240,20 +245,14 @@ impl TestDispatcher {
let block_on_ticks = lock.block_on_ticks.clone();
lock.random.random_range(block_on_ticks)
}
pub fn unpark_all(&self) {
self.state.lock().unparkers.retain(|parker| parker.unpark());
}
pub fn push_unparker(&self, unparker: Unparker) {
let mut state = self.state.lock();
state.unparkers.push(unparker);
}
}
@@ -268,6 +267,14 @@ impl Clone for TestDispatcher {
}
impl PlatformDispatcher for TestDispatcher {
fn get_all_timings(&self) -> Vec<crate::ThreadTaskTimings> {
Vec::new()
}
fn get_current_thread_timings(&self) -> Vec<crate::TaskTiming> {
Vec::new()
}
fn is_main_thread(&self) -> bool {
self.state.lock().is_main_thread
}
@@ -277,7 +284,7 @@ impl PlatformDispatcher for TestDispatcher {
state.start_time + state.time
}
fn dispatch(&self, runnable: RunnableVariant, label: Option<TaskLabel>) {
{
let mut state = self.state.lock();
if label.is_some_and(|label| state.deprioritized_task_labels.contains(&label)) {
@@ -286,20 +293,20 @@ impl PlatformDispatcher for TestDispatcher {
state.background.push(runnable);
}
}
self.unpark_all();
}
fn dispatch_on_main_thread(&self, runnable: RunnableVariant) {
self.state
.lock()
.foreground
.entry(self.id)
.or_default()
.push_back(runnable);
self.unpark_all();
}
fn dispatch_after(&self, duration: std::time::Duration, runnable: RunnableVariant) {
let mut state = self.state.lock();
let next_time = state.time + duration;
let ix = match state.delayed.binary_search_by_key(&next_time, |e| e.0) {


@@ -1,10 +1,9 @@
use std::{
sync::atomic::{AtomicBool, Ordering},
thread::{ThreadId, current},
time::{Duration, Instant},
};
use flume::Sender;
use util::ResultExt;
use windows::{
@@ -18,12 +17,13 @@ use windows::{
};
use crate::{
GLOBAL_THREAD_TIMINGS, HWND, PlatformDispatcher, RunnableVariant, SafeHwnd, THREAD_TIMINGS,
TaskLabel, TaskTiming, ThreadTaskTimings, WM_GPUI_TASK_DISPATCHED_ON_MAIN_THREAD,
};
pub(crate) struct WindowsDispatcher {
pub(crate) wake_posted: AtomicBool,
main_sender: Sender<RunnableVariant>,
main_thread_id: ThreadId,
platform_window_handle: SafeHwnd,
validation_number: usize,
@@ -31,7 +31,7 @@ pub(crate) struct WindowsDispatcher {
impl WindowsDispatcher {
pub(crate) fn new(
main_sender: Sender<RunnableVariant>,
platform_window_handle: HWND,
validation_number: usize,
) -> Self {
@@ -47,42 +47,115 @@ impl WindowsDispatcher {
}
}
fn dispatch_on_threadpool(&self, runnable: RunnableVariant) {
let handler = {
let mut task_wrapper = Some(runnable);
WorkItemHandler::new(move |_| {
Self::execute_runnable(task_wrapper.take().unwrap());
Ok(())
})
};
ThreadPool::RunWithPriorityAsync(&handler, WorkItemPriority::High).log_err();
}
fn dispatch_on_threadpool_after(&self, runnable: RunnableVariant, duration: Duration) {
let handler = {
let mut task_wrapper = Some(runnable);
TimerElapsedHandler::new(move |_| {
Self::execute_runnable(task_wrapper.take().unwrap());
Ok(())
})
};
ThreadPoolTimer::CreateTimer(&handler, duration.into()).log_err();
}
#[inline(always)]
pub(crate) fn execute_runnable(runnable: RunnableVariant) {
let start = Instant::now();
let mut timing = match runnable {
RunnableVariant::Meta(runnable) => {
let location = runnable.metadata().location;
let timing = TaskTiming {
location,
start,
end: None,
};
Self::add_task_timing(timing);
runnable.run();
timing
}
RunnableVariant::Compat(runnable) => {
let timing = TaskTiming {
location: core::panic::Location::caller(),
start,
end: None,
};
Self::add_task_timing(timing);
runnable.run();
timing
}
};
let end = Instant::now();
timing.end = Some(end);
Self::add_task_timing(timing);
}
pub(crate) fn add_task_timing(timing: TaskTiming) {
THREAD_TIMINGS.with(|timings| {
let mut timings = timings.lock();
let timings = &mut timings.timings;
if let Some(last_timing) = timings.iter_mut().rev().next() {
if last_timing.location == timing.location {
last_timing.end = timing.end;
return;
}
}
timings.push_back(timing);
});
}
}
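`add_task_timing` above deliberately coalesces consecutive timings that report the same call site: instead of growing the ring buffer, it only updates the previous entry's `end`, which bounds memory use for hot call sites that run back-to-back. The core of that logic can be sketched with a plain `VecDeque` and a string standing in for `&'static Location<'static>` (`back_mut` is equivalent to the `iter_mut().rev().next()` the diff uses):

```rust
use std::collections::VecDeque;

#[derive(Debug, Clone, Copy, PartialEq)]
struct Timing {
    location: &'static str, // stand-in for &'static core::panic::Location<'static>
    start: u64,
    end: Option<u64>,
}

// A timing that reports the same call site as the most recent entry only
// updates that entry's `end` instead of growing the buffer.
fn add_task_timing(timings: &mut VecDeque<Timing>, timing: Timing) {
    if let Some(last) = timings.back_mut() {
        if last.location == timing.location {
            last.end = timing.end;
            return;
        }
    }
    timings.push_back(timing);
}

fn main() {
    let mut buf = VecDeque::new();
    add_task_timing(&mut buf, Timing { location: "a.rs:1", start: 0, end: None });
    add_task_timing(&mut buf, Timing { location: "a.rs:1", start: 0, end: Some(5) });
    add_task_timing(&mut buf, Timing { location: "b.rs:9", start: 6, end: Some(7) });
    println!("{} entries", buf.len()); // the two "a.rs:1" records collapsed into one
}
```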
impl PlatformDispatcher for WindowsDispatcher {
fn get_all_timings(&self) -> Vec<ThreadTaskTimings> {
let global_thread_timings = GLOBAL_THREAD_TIMINGS.lock();
ThreadTaskTimings::convert(&global_thread_timings)
}
fn get_current_thread_timings(&self) -> Vec<crate::TaskTiming> {
THREAD_TIMINGS.with(|timings| {
let timings = timings.lock();
let timings = &timings.timings;
let mut vec = Vec::with_capacity(timings.len());
let (s1, s2) = timings.as_slices();
vec.extend_from_slice(s1);
vec.extend_from_slice(s2);
vec
})
}
fn is_main_thread(&self) -> bool {
current().id() == self.main_thread_id
}
fn dispatch(&self, runnable: RunnableVariant, label: Option<TaskLabel>) {
self.dispatch_on_threadpool(runnable);
if let Some(label) = label {
log::debug!("TaskLabel: {label:?}");
}
}
fn dispatch_on_main_thread(&self, runnable: RunnableVariant) {
match self.main_sender.send(runnable) {
Ok(_) => {
if !self.wake_posted.swap(true, Ordering::AcqRel) {
@@ -111,7 +184,7 @@ impl PlatformDispatcher for WindowsDispatcher {
}
}
fn dispatch_after(&self, duration: Duration, runnable: RunnableVariant) {
self.dispatch_on_threadpool_after(runnable, duration);
}
}


@@ -239,7 +239,7 @@ impl WindowsWindowInner {
fn handle_timer_msg(&self, handle: HWND, wparam: WPARAM) -> Option<isize> {
if wparam.0 == SIZE_MOVE_LOOP_TIMER_ID {
for runnable in self.main_receiver.drain() {
WindowsDispatcher::execute_runnable(runnable);
}
self.handle_paint_msg(handle)
} else {
@@ -487,14 +487,12 @@ impl WindowsWindowInner {
let scale_factor = lock.scale_factor;
let wheel_scroll_amount = match modifiers.shift {
true => {
self.system_settings()
.mouse_wheel_settings
.wheel_scroll_chars
}
false => {
self.system_settings()
.mouse_wheel_settings
.wheel_scroll_lines
}
@@ -541,8 +539,7 @@ impl WindowsWindowInner {
};
let scale_factor = lock.scale_factor;
let wheel_scroll_chars = self
.system_settings()
.mouse_wheel_settings
.wheel_scroll_chars;
drop(lock);
@@ -677,8 +674,7 @@ impl WindowsWindowInner {
// used by Chrome. However, it may result in one row of pixels being obscured
// in our client area. But as Chrome says, "there seems to be no better solution."
if is_maximized
&& let Some(ref taskbar_position) = self.system_settings().auto_hide_taskbar_position
{
// For the auto-hide taskbar, adjust in by 1 pixel on taskbar edge,
// so the window isn't treated as a "fullscreen app", which would cause
@@ -1072,7 +1068,7 @@ impl WindowsWindowInner {
lock.border_offset.update(handle).log_err();
// system settings may emit a window message which wants to take the refcell lock, so drop it
drop(lock);
self.system_settings_mut().update(display, wparam.0);
} else {
self.handle_system_theme_changed(handle, lparam)?;
};
@@ -1146,8 +1142,10 @@ impl WindowsWindowInner {
require_presentation: false,
force_render,
});
self.state.borrow_mut().callbacks.request_frame = Some(request_frame);
unsafe { ValidateRect(Some(handle), None).ok().log_err() };
Some(0)
}


@@ -8,7 +8,6 @@ use std::{
use ::util::{ResultExt, paths::SanitizedPath};
use anyhow::{Context as _, Result, anyhow};
use futures::channel::oneshot::{self, Receiver};
use itertools::Itertools;
use parking_lot::RwLock;
@@ -46,7 +45,7 @@ struct WindowsPlatformInner {
raw_window_handles: std::sync::Weak<RwLock<SmallVec<[SafeHwnd; 4]>>>,
// The below members will never change throughout the entire lifecycle of the app.
validation_number: usize,
main_receiver: flume::Receiver<RunnableVariant>,
dispatcher: Arc<WindowsDispatcher>,
}
@@ -93,7 +92,7 @@ impl WindowsPlatform {
OleInitialize(None).context("unable to initialize Windows OLE")?;
}
let directx_devices = DirectXDevices::new().context("Creating DirectX devices")?;
let (main_sender, main_receiver) = flume::unbounded::<RunnableVariant>();
let validation_number = if usize::BITS == 64 {
rand::random::<u64>() as usize
} else {
@@ -342,9 +341,8 @@ impl Platform for WindowsPlatform {
}
}
self.inner
.with_callback(|callbacks| &mut callbacks.quit, |callback| callback());
}
fn quit(&self) {
@@ -578,14 +576,13 @@ impl Platform for WindowsPlatform {
fn set_cursor_style(&self, style: CursorStyle) {
let hcursor = load_cursor(style);
if self.inner.state.borrow_mut().current_cursor.map(|c| c.0) != hcursor.map(|c| c.0) {
self.post_message(
WM_GPUI_CURSOR_STYLE_CHANGED,
WPARAM(0),
LPARAM(hcursor.map_or(0, |c| c.0 as isize)),
);
self.inner.state.borrow_mut().current_cursor = hcursor;
}
}
@@ -724,6 +721,19 @@ impl WindowsPlatformInner {
}))
}
/// Calls `project` to project to the corresponding callback field, removes it from callbacks, calls `f` with the callback and then puts the callback back.
fn with_callback<T>(
&self,
project: impl Fn(&mut PlatformCallbacks) -> &mut Option<T>,
f: impl FnOnce(&mut T),
) {
let callback = project(&mut self.state.borrow_mut().callbacks).take();
if let Some(mut callback) = callback {
f(&mut callback);
*project(&mut self.state.borrow_mut().callbacks) = Some(callback)
}
}
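`with_callback` exists because invoking a callback while the state is still borrowed from the `RefCell` would panic on any re-entrant borrow inside the callback; taking the callback out first, then putting it back, sidesteps that. A standalone sketch of the same projection pattern with a simplified `Callbacks` struct (not the real `PlatformCallbacks`):

```rust
use std::cell::RefCell;

// Hypothetical, pared-down callback storage.
struct Callbacks {
    quit: Option<Box<dyn FnMut()>>,
}

// Project to one callback slot, take it out of the RefCell-guarded state,
// call it with the borrow released, then reinstall it.
fn with_callback<T>(
    state: &RefCell<Callbacks>,
    project: impl Fn(&mut Callbacks) -> &mut Option<T>,
    f: impl FnOnce(&mut T),
) {
    let callback = project(&mut state.borrow_mut()).take();
    if let Some(mut callback) = callback {
        f(&mut callback); // `state` is not borrowed here, so `f` may re-borrow it
        *project(&mut state.borrow_mut()) = Some(callback);
    }
}

fn main() {
    let state = RefCell::new(Callbacks {
        quit: Some(Box::new(|| println!("quit callback ran"))),
    });
    with_callback(&state, |c| &mut c.quit, |cb| cb());
    assert!(state.borrow().quit.is_some()); // the callback was put back
}
```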
fn handle_msg(
self: &Rc<Self>,
handle: HWND,
@@ -783,7 +793,7 @@ impl WindowsPlatformInner {
fn run_foreground_task(&self) -> Option<isize> {
loop {
for runnable in self.main_receiver.drain() {
runnable.run();
WindowsDispatcher::execute_runnable(runnable);
}
// Someone could enqueue a Runnable here. The flag is still true, so they will not PostMessage.
@@ -794,7 +804,8 @@ impl WindowsPlatformInner {
match self.main_receiver.try_recv() {
Ok(runnable) => {
let _ = dispatcher.wake_posted.swap(true, Ordering::AcqRel);
runnable.run();
WindowsDispatcher::execute_runnable(runnable);
continue;
}
_ => {
@@ -807,40 +818,36 @@ impl WindowsPlatformInner {
}
fn handle_dock_action_event(&self, action_idx: usize) -> Option<isize> {
let Some(action) = self
.state
.borrow_mut()
.jump_list
.dock_menus
.get(action_idx)
.map(|dock_menu| dock_menu.action.boxed_clone())
else {
log::error!("Dock menu for index {action_idx} not found");
return Some(1);
};
self.with_callback(
|callbacks| &mut callbacks.app_menu_action,
|callback| callback(&*action),
);
Some(0)
}
fn handle_keyboard_layout_change(&self) -> Option<isize> {
self.with_callback(
|callbacks| &mut callbacks.keyboard_layout_change,
|callback| callback(),
);
Some(0)
}
fn handle_device_lost(&self, lparam: LPARAM) -> Option<isize> {
let directx_devices = lparam.0 as *const DirectXDevices;
let directx_devices = unsafe { &*directx_devices };
let mut lock = self.state.borrow_mut();
lock.directx_devices.take();
lock.directx_devices = Some(directx_devices.clone());
@@ -866,7 +873,7 @@ pub(crate) struct WindowCreationInfo {
pub(crate) windows_version: WindowsVersion,
pub(crate) drop_target_helper: IDropTargetHelper,
pub(crate) validation_number: usize,
pub(crate) main_receiver: flume::Receiver<RunnableVariant>,
pub(crate) platform_window_handle: HWND,
pub(crate) disable_direct_composition: bool,
pub(crate) directx_devices: DirectXDevices,
@@ -876,8 +883,8 @@ struct PlatformWindowCreateContext {
inner: Option<Result<Rc<WindowsPlatformInner>>>,
raw_window_handles: std::sync::Weak<RwLock<SmallVec<[SafeHwnd; 4]>>>,
validation_number: usize,
main_sender: Option<flume::Sender<RunnableVariant>>,
main_receiver: Option<flume::Receiver<RunnableVariant>>,
directx_devices: Option<DirectXDevices>,
dispatcher: Option<Arc<WindowsDispatcher>>,
}


@@ -12,7 +12,6 @@ use std::{
use ::util::ResultExt;
use anyhow::{Context as _, Result};
use futures::channel::oneshot::{self, Receiver};
use raw_window_handle as rwh;
use smallvec::SmallVec;
@@ -63,14 +62,14 @@ pub(crate) struct WindowsWindowInner {
hwnd: HWND,
drop_target_helper: IDropTargetHelper,
pub(crate) state: RefCell<WindowsWindowState>,
system_settings: RefCell<WindowsSystemSettings>,
pub(crate) handle: AnyWindowHandle,
pub(crate) hide_title_bar: bool,
pub(crate) is_movable: bool,
pub(crate) executor: ForegroundExecutor,
pub(crate) windows_version: WindowsVersion,
pub(crate) validation_number: usize,
pub(crate) main_receiver: flume::Receiver<RunnableVariant>,
pub(crate) platform_window_handle: HWND,
}
@@ -321,6 +320,14 @@ impl WindowsWindowInner {
}
Ok(())
}
pub(crate) fn system_settings(&self) -> std::cell::Ref<'_, WindowsSystemSettings> {
self.system_settings.borrow()
}
pub(crate) fn system_settings_mut(&self) -> std::cell::RefMut<'_, WindowsSystemSettings> {
self.system_settings.borrow_mut()
}
}
#[derive(Default)]
@@ -349,7 +356,7 @@ struct WindowCreateContext {
windows_version: WindowsVersion,
drop_target_helper: IDropTargetHelper,
validation_number: usize,
main_receiver: flume::Receiver<RunnableVariant>,
platform_window_handle: HWND,
appearance: WindowAppearance,
disable_direct_composition: bool,
@@ -453,8 +460,9 @@ impl WindowsWindow {
// Failure to create a `WindowsWindowState` can cause window creation to fail,
// so check the inner result first.
let this = context.inner.take().transpose()?;
let hwnd = creation_result?;
let this = this.unwrap();
register_drag_drop(&this)?;
configure_dwm_dark_mode(hwnd, appearance);

crates/gpui/src/profiler.rs

@@ -0,0 +1,218 @@
use std::{
cell::LazyCell,
hash::Hasher,
hash::{DefaultHasher, Hash},
sync::Arc,
thread::ThreadId,
time::Instant,
};
use serde::{Deserialize, Serialize};
#[doc(hidden)]
#[derive(Debug, Copy, Clone)]
pub struct TaskTiming {
pub location: &'static core::panic::Location<'static>,
pub start: Instant,
pub end: Option<Instant>,
}
#[doc(hidden)]
#[derive(Debug, Clone)]
pub struct ThreadTaskTimings {
pub thread_name: Option<String>,
pub thread_id: ThreadId,
pub timings: Vec<TaskTiming>,
}
impl ThreadTaskTimings {
pub(crate) fn convert(timings: &[GlobalThreadTimings]) -> Vec<Self> {
timings
.iter()
.filter_map(|t| match t.timings.upgrade() {
Some(timings) => Some((t.thread_id, timings)),
_ => None,
})
.map(|(thread_id, timings)| {
let timings = timings.lock();
let thread_name = timings.thread_name.clone();
let timings = &timings.timings;
let mut vec = Vec::with_capacity(timings.len());
let (s1, s2) = timings.as_slices();
vec.extend_from_slice(s1);
vec.extend_from_slice(s2);
ThreadTaskTimings {
thread_name,
thread_id,
timings: vec,
}
})
.collect()
}
}
/// Serializable variant of [`core::panic::Location`]
#[derive(Debug, Copy, Clone, Serialize, Deserialize)]
pub struct SerializedLocation<'a> {
/// Name of the source file
pub file: &'a str,
/// Line in the source file
pub line: u32,
/// Column in the source file
pub column: u32,
}
impl<'a> From<&'a core::panic::Location<'a>> for SerializedLocation<'a> {
fn from(value: &'a core::panic::Location<'a>) -> Self {
SerializedLocation {
file: value.file(),
line: value.line(),
column: value.column(),
}
}
}
/// Serializable variant of [`TaskTiming`]
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SerializedTaskTiming<'a> {
/// Location of the timing
#[serde(borrow)]
pub location: SerializedLocation<'a>,
/// Time at which the measurement was reported in nanoseconds
pub start: u128,
/// Duration of the measurement in nanoseconds
pub duration: u128,
}
impl<'a> SerializedTaskTiming<'a> {
/// Convert an array of [`TaskTiming`] into their serializable format
///
/// # Params
///
/// `anchor` - [`Instant`] that should be earlier than all timings to use as base anchor
pub fn convert(anchor: Instant, timings: &[TaskTiming]) -> Vec<SerializedTaskTiming<'static>> {
let serialized = timings
.iter()
.map(|timing| {
let start = timing.start.duration_since(anchor).as_nanos();
let duration = timing
.end
.unwrap_or_else(Instant::now)
.duration_since(timing.start)
.as_nanos();
SerializedTaskTiming {
location: timing.location.into(),
start,
duration,
}
})
.collect::<Vec<_>>();
serialized
}
}
/// Serializable variant of [`ThreadTaskTimings`]
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct SerializedThreadTaskTimings<'a> {
/// Thread name
pub thread_name: Option<String>,
/// Hash of the thread id
pub thread_id: u64,
/// Timing records for this thread
#[serde(borrow)]
pub timings: Vec<SerializedTaskTiming<'a>>,
}
impl<'a> SerializedThreadTaskTimings<'a> {
/// Convert [`ThreadTaskTimings`] into their serializable format
///
/// # Params
///
/// `anchor` - [`Instant`] that should be earlier than all timings to use as base anchor
pub fn convert(
anchor: Instant,
timings: ThreadTaskTimings,
) -> SerializedThreadTaskTimings<'static> {
let serialized_timings = SerializedTaskTiming::convert(anchor, &timings.timings);
let mut hasher = DefaultHasher::new();
timings.thread_id.hash(&mut hasher);
let thread_id = hasher.finish();
SerializedThreadTaskTimings {
thread_name: timings.thread_name,
thread_id,
timings: serialized_timings,
}
}
}
// Allow 20mb of task timing entries
const MAX_TASK_TIMINGS: usize = (20 * 1024 * 1024) / core::mem::size_of::<TaskTiming>();
pub(crate) type TaskTimings = circular_buffer::CircularBuffer<MAX_TASK_TIMINGS, TaskTiming>;
pub(crate) type GuardedTaskTimings = spin::Mutex<ThreadTimings>;
pub(crate) struct GlobalThreadTimings {
pub thread_id: ThreadId,
pub timings: std::sync::Weak<GuardedTaskTimings>,
}
pub(crate) static GLOBAL_THREAD_TIMINGS: spin::Mutex<Vec<GlobalThreadTimings>> =
spin::Mutex::new(Vec::new());
thread_local! {
pub(crate) static THREAD_TIMINGS: LazyCell<Arc<GuardedTaskTimings>> = LazyCell::new(|| {
let current_thread = std::thread::current();
let thread_name = current_thread.name();
let thread_id = current_thread.id();
let timings = ThreadTimings::new(thread_name.map(|e| e.to_string()), thread_id);
let timings = Arc::new(spin::Mutex::new(timings));
{
let timings = Arc::downgrade(&timings);
let global_timings = GlobalThreadTimings {
thread_id: std::thread::current().id(),
timings,
};
GLOBAL_THREAD_TIMINGS.lock().push(global_timings);
}
timings
});
}
pub(crate) struct ThreadTimings {
pub thread_name: Option<String>,
pub thread_id: ThreadId,
pub timings: Box<TaskTimings>,
}
impl ThreadTimings {
pub(crate) fn new(thread_name: Option<String>, thread_id: ThreadId) -> Self {
ThreadTimings {
thread_name,
thread_id,
timings: TaskTimings::boxed(),
}
}
}
impl Drop for ThreadTimings {
fn drop(&mut self) {
let mut thread_timings = GLOBAL_THREAD_TIMINGS.lock();
let Some((index, _)) = thread_timings
.iter()
.enumerate()
.find(|(_, t)| t.thread_id == self.thread_id)
else {
return;
};
thread_timings.swap_remove(index);
}
}
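Two details of the serialization above are worth isolating: absolute `Instant`s cannot be serialized, so every timestamp becomes a nanosecond offset from a single `anchor` taken before any timing was recorded, and `ThreadId` has no stable integer form, so it is hashed with `DefaultHasher`. A self-contained sketch of both conversions (simplified signatures, not the real `SerializedTaskTiming::convert`):

```rust
use std::hash::{DefaultHasher, Hash, Hasher};
use std::time::{Duration, Instant};

// Convert one timing into (start offset, duration), both in nanoseconds.
// A still-running task (end == None) is measured up to "now".
fn serialize_timing(anchor: Instant, start: Instant, end: Option<Instant>) -> (u128, u128) {
    let offset = start.duration_since(anchor).as_nanos();
    let duration = end.unwrap_or_else(Instant::now).duration_since(start).as_nanos();
    (offset, duration)
}

// ThreadId exposes no integer, so hash it into a serializable u64.
fn hashed_thread_id() -> u64 {
    let mut hasher = DefaultHasher::new();
    std::thread::current().id().hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let anchor = Instant::now();
    let (offset, duration) = serialize_timing(anchor, Instant::now(), None);
    println!("offset={offset}ns duration={duration}ns thread={:#x}", hashed_thread_id());
}
```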


@@ -320,7 +320,7 @@ mod tests {
let focus_map = Arc::new(FocusMap::default());
let mut tab_index_map = TabStopMap::default();
let focus_handles = [
FocusHandle::new(&focus_map).tab_stop(true).tab_index(0),
FocusHandle::new(&focus_map).tab_stop(true).tab_index(1),
FocusHandle::new(&focus_map).tab_stop(true).tab_index(1),


@@ -54,9 +54,25 @@ pub struct ShapedGlyph {
}
impl LineLayout {
/// The index for the character at the given x coordinate
pub fn index_for_x(&self, x: Pixels) -> Option<usize> {
if x >= self.width {
None
} else {
for run in self.runs.iter().rev() {
for glyph in run.glyphs.iter().rev() {
if glyph.position.x <= x {
return Some(glyph.index);
}
}
}
Some(0)
}
}
/// closest_index_for_x returns the character boundary closest to the given x coordinate
/// (e.g. to handle aligning up/down arrow keys)
pub fn closest_index_for_x(&self, x: Pixels) -> usize {
let mut prev_index = 0;
let mut prev_x = px(0.);
@@ -262,10 +278,34 @@ impl WrappedLineLayout {
}
/// The index corresponding to a given position in this layout for the given line height.
///
/// See also [`Self::closest_index_for_position`].
pub fn index_for_position(
&self,
position: Point<Pixels>,
line_height: Pixels,
) -> Result<usize, usize> {
self._index_for_position(position, line_height, false)
}
/// The closest index to a given position in this layout for the given line height.
///
/// Closest means the character boundary closest to the given position.
///
/// See also [`LineLayout::closest_index_for_x`].
pub fn closest_index_for_position(
&self,
position: Point<Pixels>,
line_height: Pixels,
) -> Result<usize, usize> {
self._index_for_position(position, line_height, true)
}
fn _index_for_position(
&self,
mut position: Point<Pixels>,
line_height: Pixels,
closest: bool,
) -> Result<usize, usize> {
let wrapped_line_ix = (position.y / line_height) as usize;
@@ -305,9 +345,16 @@ impl WrappedLineLayout {
} else if position_in_unwrapped_line.x >= wrapped_line_end_x {
Err(wrapped_line_end_index)
} else {
if closest {
Ok(self
.unwrapped_layout
.closest_index_for_x(position_in_unwrapped_line.x))
} else {
Ok(self
.unwrapped_layout
.index_for_x(position_in_unwrapped_line.x)
.unwrap())
}
}
}
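The split introduced here is between a strict lookup (`index_for_x`, which returns `None` past the line's width) and a "closest" lookup (`closest_index_for_x`, which snaps to the nearest character boundary, the behavior up/down arrow alignment wants). The distinction can be illustrated with made-up glyph boundary positions rather than GPUI's real `Pixels`/glyph types:

```rust
// Strict lookup: None past the line, otherwise the last boundary at or before x.
fn index_for_x(boundaries: &[f32], width: f32, x: f32) -> Option<usize> {
    if x >= width {
        return None;
    }
    boundaries.iter().rposition(|&b| b <= x).or(Some(0))
}

// Closest lookup: the boundary with the smallest distance to x, even past the end.
fn closest_index_for_x(boundaries: &[f32], x: f32) -> usize {
    let mut best = 0;
    let mut best_dist = f32::MAX;
    for (i, &b) in boundaries.iter().enumerate() {
        let dist = (b - x).abs();
        if dist < best_dist {
            best_dist = dist;
            best = i;
        }
    }
    best
}

fn main() {
    let boundaries = [0.0, 8.0, 16.0]; // x offset of each character boundary
    assert_eq!(index_for_x(&boundaries, 24.0, 30.0), None); // past the line
    assert_eq!(closest_index_for_x(&boundaries, 15.0), 2); // snaps to nearest boundary
}
```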


@@ -35,6 +35,7 @@ pub enum IconName {
ArrowUp,
ArrowUpRight,
Attach,
AtSign,
AudioOff,
AudioOn,
Backspace,


@@ -664,6 +664,8 @@ impl CompletionProvider for RustStyleCompletionProvider {
replace_range: replace_range.clone(),
new_text: format!(".{}()", method.name),
label: CodeLabel::plain(method.name.to_string(), None),
match_start: None,
snippet_deduplication_key: None,
icon_path: None,
documentation: method.documentation.map(|documentation| {
CompletionDocumentation::MultiLineMarkdown(documentation.into())


@@ -173,9 +173,15 @@ pub fn new_journal_entry(workspace: &Workspace, window: &mut Window, cx: &mut Ap
}
fn journal_dir(path: &str) -> Option<PathBuf> {
let expanded = shellexpand::full(path).ok()?;
let base_path = Path::new(expanded.as_ref());
let absolute_path = if base_path.is_absolute() {
base_path.to_path_buf()
} else {
log::warn!("Invalid journal path {path:?} (not absolute), falling back to home directory",);
std::env::home_dir()?
};
Some(absolute_path.join("journal"))
}
fn heading_entry(now: NaiveTime, hour_format: &HourFormat) -> String {
@@ -224,4 +230,65 @@ mod tests {
assert_eq!(actual_heading_entry, expected_heading_entry);
}
}
mod journal_dir_tests {
use super::super::*;
#[test]
#[cfg(target_family = "unix")]
fn test_absolute_unix_path() {
let result = journal_dir("/home/user");
assert!(result.is_some());
let path = result.unwrap();
assert!(path.is_absolute());
assert_eq!(path, PathBuf::from("/home/user/journal"));
}
#[test]
fn test_tilde_expansion() {
let result = journal_dir("~/documents");
assert!(result.is_some());
let path = result.unwrap();
assert!(path.is_absolute(), "Tilde should expand to absolute path");
if let Some(home) = std::env::home_dir() {
assert_eq!(path, home.join("documents").join("journal"));
}
}
#[test]
fn test_relative_path_falls_back_to_home() {
for relative_path in ["relative/path", "NONEXT/some/path", "../some/path"] {
let result = journal_dir(relative_path);
assert!(result.is_some(), "Failed for path: {}", relative_path);
let path = result.unwrap();
assert!(
path.is_absolute(),
"Path should be absolute for input '{}', got: {:?}",
relative_path,
path
);
if let Some(home) = std::env::home_dir() {
assert_eq!(
path,
home.join("journal"),
"Should fall back to home directory for input '{}'",
relative_path
);
}
}
}
#[test]
#[cfg(target_os = "windows")]
fn test_absolute_path_windows_style() {
let result = journal_dir("C:\\Users\\user\\Documents");
assert!(result.is_some());
let path = result.unwrap();
assert_eq!(path, PathBuf::from("C:\\Users\\user\\Documents\\journal"));
}
}
}
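The hardened `journal_dir` falls back to the home directory whenever expansion yields a relative path. The core decision can be sketched free of `shellexpand` and `std::env`; `expanded` and `home` below are parameters standing in for `shellexpand::full`'s output and `std::env::home_dir`:

```rust
use std::path::{Path, PathBuf};

// Expand has already happened; if the result is not absolute, fall back to
// the home directory before appending the "journal" subdirectory.
fn journal_dir(expanded: &str, home: &Path) -> PathBuf {
    let base = Path::new(expanded);
    let absolute = if base.is_absolute() {
        base.to_path_buf()
    } else {
        home.to_path_buf()
    };
    absolute.join("journal")
}

fn main() {
    let home = Path::new("/home/user");
    println!("{:?}", journal_dir("relative/notes", home)); // falls back under home
}
```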


@@ -1030,22 +1030,22 @@
"$ref": "#"
},
"eslintConfig": {
"$ref": "https://www.schemastore.org/eslintrc.json"
},
"prettier": {
"$ref": "https://www.schemastore.org/prettierrc.json"
},
"stylelint": {
"$ref": "https://www.schemastore.org/stylelintrc.json"
},
"ava": {
"$ref": "https://www.schemastore.org/ava.json"
},
"release": {
"$ref": "https://www.schemastore.org/semantic-release.json"
},
"jscpd": {
"$ref": "https://www.schemastore.org/jscpd.json"
},
"pnpm": {
"description": "Defines pnpm specific configuration.",
@@ -1305,5 +1305,5 @@
]
}
],
"$id": "https://www.schemastore.org/package.json"
}

Some files were not shown because too many files have changed in this diff.