Compare commits


109 Commits

Author SHA1 Message Date
Richard Feldman
171d1fdbd0 wip 3 2025-04-11 01:35:58 -04:00
Richard Feldman
5f20674bca wip 2 2025-04-11 01:33:13 -04:00
Max Brunsfeld
353ae2335b Fix staging/unstaging hunks remotely (#28560)
Fixes a regression introduced in
https://github.com/zed-industries/zed/pull/28377 where the pending hunks
didn't get cleared properly when staging/unstaging hunks remotely. I
didn't add new tests, because the fix was to simplify some code.

Release Notes:

- N/A
2025-04-10 21:31:43 -07:00
João Marcos
ad39d3226f Add new actions editor::FindNextMatch and editor::FindPreviousMatch (#28559)
Closes #7903

Release Notes:

- Add new actions `editor::FindNextMatch` and
`editor::FindPreviousMatch` that are similar to `editor::SelectNext` and
`editor::SelectPrevious` with `"replace_newest": true`, but jump to the
first or last selection when there are multiple selections.
2025-04-11 03:43:55 +00:00
Piotr Osiewicz
c35238bd72 debugger: Add support for setting multiple breakpoints via actions (#28437)
Allow setting multiple breakpoints with multi cursors

Release Notes:

- N/A

---------

Co-authored-by: Anthony Eid <hello@anthonyeid.me>
Co-authored-by: Remco Smits <djsmits12@gmail.com>
Co-authored-by: Anthony <anthony@zed.dev>
2025-04-10 23:31:57 -04:00
Bennet Bo Fenner
5757e352b0 agent: Fix bug where wrong crease for @mention would be displayed (#28558)
Release Notes:

- agent: Fix a bug where an inserted @mention did not show up as the one
that was selected
2025-04-11 02:04:03 +00:00
Danilo Leal
c124838a73 agent: Fix "new text thread" action name (#28555)
Moving from "NewPromptEditor" to "NewTextThread". We recently renamed
the action and this reference was missed.

Release Notes:

- N/A
2025-04-10 22:23:44 -03:00
Marshall Bowers
5ebac7e30c agent: Clean up thread auto-capturing (#28550)
This PR cleans up the thread auto-capturing added in #28271.

- Removed usage of `unsafe`
- Fixed feature flag check
  - We were incorrectly not respecting the feature flag in release builds
- Made sure the telemetry event was being run on the background executor

Release Notes:

- N/A
2025-04-11 01:08:24 +00:00
Mikayla Maki
c143846e42 Revert buggy pr (#28554)
Earlier, I merged #24723

Before merging it, I made a change that was incorrect and fast followed
with a fix: #28548

Following that fix, @bennetbo discovered that the modals were no longer
highlighting correctly, particularly the outline modal.

So I'm going to revert it all.

Release Notes:

- N/A
2025-04-10 18:58:36 -06:00
Danilo Leal
71c2a11bd9 agent: Make the message editor expandable (#28420)
This PR allows expanding the message editor textarea to fit almost the
total height of the Agent Panel. Stylistically, I'm also changing the
font family we use in the textarea to the buffer font; I want to
experiment with this for a bit.

Release Notes:

- agent: The Agent Panel textarea can now be expanded to fill almost the
total height of the panel.

---------

Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-04-10 21:53:52 -03:00
Antonio Scandurra
2440faf4b2 Actually run the eval and fix a hang when retrieving outline (#28547)
Release Notes:

- Fixed a regression that caused the agent to hang sometimes.

---------

Co-authored-by: Thomas Mickley-Doyle <tmickleydoyle@gmail.com>
Co-authored-by: Nathan Sobo <nathan@zed.dev>
Co-authored-by: Michael Sloan <mgsloan@gmail.com>
2025-04-11 00:01:33 +00:00
Mikayla Maki
c0262cf62f Fix bug where all editor completions would be black (#28548)
Release Notes:

- N/A
2025-04-10 17:59:10 -06:00
Jason Lee
fd256d159d gpui: Keep drag cursor style when dragging (#24797)
Release Notes:

- Improved keeping the drag cursor style while dragging resize handles.

---

### Before


https://github.com/user-attachments/assets/d4100d01-ac02-42b8-b923-9f2b4633c458

### After


https://github.com/user-attachments/assets/b5a450cd-c6de-4b39-a79c-2d73fcbad209

With example:

```
cargo run -p gpui --example drag_drop
```


https://github.com/user-attachments/assets/4cba1966-1578-40ce-a435-64ec11bcace5
2025-04-10 23:54:12 +00:00
Danilo Leal
a2a3d1a4bd editor: Refactor EditorMode::Full (#28546)
This PR lightly refactors `EditorMode::Full`, exposing two new
methods: `is_full` and `set_mode`.

The motivation is to expose fields that modify the behavior when the editor
is in `Full` mode. By using `mode.is_full()` instead of matching on
`EditorMode::Full`, we can introduce new fields without breaking other
places in the code.

Release Notes:

- N/A

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-04-10 23:22:27 +00:00
Max Brunsfeld
294a1b63c0 Fix diff recalculation hang (#28377)
Fixes https://github.com/zed-industries/zed/issues/26039

Release Notes:

- Fixed an issue where diffs stopped updating when they were closed and
reopened after staging hunks.
- Fixed a bug where staging a hunk while the cursor was in a deleted
line would move the cursor erroneously.

---------

Co-authored-by: Cole Miller <m@cole-miller.net>
Co-authored-by: João Marcos <marcospb19@hotmail.com>
2025-04-10 22:58:41 +00:00
Jason Lee
ffdf725f32 gpui: Fix text hover & active style (#24723)
Release Notes:

- N/A

---

Fixes this long-standing issue so that we can support link hover colors.

Also renames the `text_layout` example to `text_style`.

---

I spent some time studying how this text style change flows through the
system and found it a bit complicated.

At first, I thought there was a problem with `refine` and that it was not
being passed properly. After changing it, I found that was not the problem.

Then I found that it was because `TextRun` had already stored the
`color`, `background`, `underline`, and `strikethrough` values by the
`request_layout` stage. They are calculated during `request_layout`, but
at that stage there is no `hitbox` yet, so the hover state could not be
obtained.

```bash
cargo run -p gpui --example text_style
```


https://github.com/user-attachments/assets/24f88f73-775e-41d3-a502-75a7a39ac82b

---------

Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
2025-04-10 22:34:47 +00:00
vipex
8ee6a2b454 html: Fix leading slash on Windows paths (#28542)
This PR builds on the fix proposed in
[zed-extensions/astro#5](https://github.com/zed-extensions/astro/pull/5)
and serves as a workaround for certain LSPs affected by
[zed-industries/zed#20559](https://github.com/zed-industries/zed/issues/20559)—specifically,
the HTML language server in this case.

Credit to @maxdeviant for identifying and implementing the original fix.
This PR extends that solution to other areas where it may be beneficial.
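
For illustration, the kind of normalization involved looks roughly like this sketch (a hypothetical helper, not the extension's actual code): strip the leading slash from URI-derived drive-letter paths such as `/C:/Users/...`.

```rust
// Hypothetical sketch, not the shipped fix: drop the leading slash from
// URI-derived Windows paths like "/C:/Users/me/project".
fn strip_leading_slash(path: &str) -> &str {
    match path.as_bytes() {
        [b'/', drive, b':', ..] if drive.is_ascii_alphabetic() => &path[1..],
        _ => path,
    }
}
```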

Release Notes:

- N/A
2025-04-10 18:34:22 -04:00
Anthony Eid
cf65d9437a debugger: Add console indicator and resolve debug configs from NewSessionModal (#28489)
The debug console will now show an indicator when it's not open and
there are unread messages.

`NewSessionModal` attempts to resolve debug configurations before using
the config to start debugging. This allows users to use Zed's task
variables in the modal prompt.

I had to invert tasks_ui's dependency on debugger_ui so `NewSessionModal`
could get the correct `TaskContexts` by calling tasks_ui functions. A
consequence of this is that workspace has a new event, `ShowAttachModal`,
that I'm not a big fan of. @osiewicz, if you have time could you please
take a look to see if there's a way around adding the event? I'm open to
pairing on it too.

Release Notes:

- N/A
2025-04-10 22:29:03 +00:00
Ben Kunkle
66dd6726df zlog: Support configuring log levels with env var (#28544)
Reimplemented logic from `env_logger` to parse log configuration from
environment variables.

Had to re-implement this instead of using the `env_filter` crate that
`env_logger` uses, as it does not export the information required to
integrate it.
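
As a rough sketch of what an `env_logger`-style filter parser does (the syntax and names here are assumptions for illustration, not zlog's actual code), a spec such as `info,my_crate=debug` splits into a default level plus per-module overrides:

```rust
use std::collections::HashMap;

// Sketch only: parse "info,my_crate=debug" into a default level and
// per-module overrides, in the spirit of env_logger's filter syntax.
fn parse_filter(spec: &str) -> (Option<String>, HashMap<String, String>) {
    let mut default_level = None;
    let mut per_module = HashMap::new();
    for part in spec.split(',').filter(|part| !part.is_empty()) {
        match part.split_once('=') {
            Some((module, level)) => {
                per_module.insert(module.to_string(), level.to_string());
            }
            None => default_level = Some(part.to_string()),
        }
    }
    (default_level, per_module)
}
```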


Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-04-10 22:00:44 +00:00
Bennet Bo Fenner
44cb8e582b markdown: Track code block metadata in parser (#28543)
This allows us to not scan the codeblock content for newlines on every
frame in `active_thread`

Release Notes:

- N/A
2025-04-10 21:49:08 +00:00
Danilo Leal
73305ce45e Change zed.dev's default model to Claude 3.7 Sonnet (#28541)
From Claude 3.5 Sonnet to **Claude 3.7 Sonnet**.

Release Notes:

- Change the default model of Zed's hosted LLM service to Claude 3.7
Sonnet.
2025-04-10 18:34:04 -03:00
James Tucker
94b75f3ad9 gpui: Enable per-pixel, GPU composited transparency on Windows (#26645)
Move the SetLayeredWindowAttributes call to immediately after window
construction, and initialize it with per-pixel transparency settings, no
color key and no global blending. The render pipeline will perform alpha
blending during compositing.

Cleaned up the DWM acrylic API calls a bit, to explicitly set one of the
three appropriate modes depending on the opaque, transparent, or blurred
settings. The API internally hides versioning concerns from the caller.

Set the window class background color to black; this prevents a
flashbang on slow startup, e.g. debug builds on a heavily loaded system.

The outcome is that the window no longer receives paint demands for
underlying window updates, while also having per-pixel transparency -
opaque theme elements are now correctly opaque. The transparency
settings are now portable across Windows and macOS, with mostly similar
outcomes (modulo palette differences). Small fonts may still appear to
be alpha blended - this seems to come from the glyph atlas, where their
pixels are not actually opaque. Larger fonts (or higher DPIs) don't
suffer from this and are as opaque as expected. When layering the window
atop one that is rendering at 120fps, the editor window can drop to its
8fps idle state while still being composited with a 120fps alpha blend
in the background, in both blur and transparent modes.

Updates #20400

Release Notes:

- Improved transparency on Windows to be more efficient, support fully
opaque elements and more closely match other platforms.
2025-04-10 21:27:19 +00:00
Marko Kungla
384868e597 Add --user-data-dir CLI flag and propose renaming support_dir to data_dir (#26886)
This PR introduces support for a `--user-data-dir` CLI flag to override
Zed's data directory and proposes renaming `support_dir` to `data_dir`
for better cross-platform clarity. It builds on the discussion in #25349
about custom data directories, aiming to provide a flexible
cross-platform solution.

### Changes

The PR is split into two commits:
1. **[feat(cli): add --user-data-dir to override data
directory](28e8889105)**
2. **[refactor(paths): rename support_dir to data_dir for cross-platform
clarity](affd2fc606)**


### Context
Inspired by the need for custom data directories discussed in #25349,
this PR provides an immediate implementation in the first commit, while
the second commit suggests a naming improvement for broader appeal.
@mikayla-maki, I’d appreciate your feedback, especially on the rename
proposal, given your involvement in the original discussion!

### Testing
- `cargo build `
- `./target/debug/zed --user-data-dir ~/custom-data-dir`

Release Notes:
- Added --user-data-dir CLI flag

---------

Signed-off-by: Marko Kungla <marko.kungla@gmail.com>
2025-04-10 21:16:43 +00:00
Marshall Bowers
d88694f8da language_models: Fix non-streaming Copilot Chat models (#28537)
This PR fixes usage of non-streaming Copilot Chat models.

Closes https://github.com/zed-industries/zed/issues/28528.

Release Notes:

- Fixed an issue with using non-streaming Copilot Chat models (e.g., o1,
o3-mini).
2025-04-10 20:48:08 +00:00
Terminal
90f30b5c20 gpui: Allow DisplayId to be compared to u32 (#27895)
Allow `DisplayId` to be compared to `u32`. This is handy since gpui doesn't
provide a method to detect the user's currently active display, so when
using the mouse location to get the active display we need to compare
that display's `u32` to the `DisplayId`.
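
A minimal sketch of what such a comparison can look like (the trait impls below are an assumption for illustration; gpui's real `DisplayId` may differ):

```rust
// Sketch of a newtype that can be compared against a raw u32 display id.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct DisplayId(u32);

impl From<DisplayId> for u32 {
    fn from(id: DisplayId) -> Self {
        id.0
    }
}

impl PartialEq<u32> for DisplayId {
    fn eq(&self, other: &u32) -> bool {
        self.0 == *other
    }
}
```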

Release Notes:
- Added a `From` implementation to allow `u32` comparison
2025-04-10 14:41:10 -06:00
tidely
24d4f8ca18 gpui: Optimize coalesce float sign checking (#28072)
Optimize away a multiplication in the `coalesce` function. Our
goal is to check whether the signs of two floats are the same.

Instead of multiplying the `.signum()` values and checking that the result
is non-negative, we can simply check that the signums are equal. This
removes a float multiplication.

```rust
a.signum() * b.signum() >= 0.0
```

turns into

```rust
a.signum() == b.signum()
```



Release Notes:

- Fix documentation for `Pixels::signum`
2025-04-10 14:39:50 -06:00
Kirill Bulatov
804066a047 Do not query for LSP tasks buffers that do not belong to the position given (#28536)
Follow-up of https://github.com/zed-industries/zed/pull/28359

Release Notes:

- Fixed a panic when LSP tasks are queried in certain multi buffer
excerpts
2025-04-10 20:37:21 +00:00
Niklas Eicker
4a356466b1 rust: Enable required features when executing main functions in tasks (#27312)
Closes #13344

This PR causes required features to be read from `cargo metadata` and
enabled when executing an example/bin in Rust.
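
Roughly, the idea looks like this sketch (an assumption for illustration, not Zed's actual implementation): query `cargo metadata` for the target's `required-features` so the generated task can pass them along via `--features`.

```rust
use std::process::Command;

// Sketch only: look up the `required-features` declared for a bin/example
// target in the `cargo metadata` output (parsed here with serde_json).
fn required_features(target_name: &str) -> Option<Vec<String>> {
    let output = Command::new("cargo")
        .args(["metadata", "--format-version", "1", "--no-deps"])
        .output()
        .ok()?;
    let metadata: serde_json::Value = serde_json::from_slice(&output.stdout).ok()?;
    let target = metadata["packages"]
        .as_array()?
        .iter()
        .flat_map(|package| package["targets"].as_array().into_iter().flatten())
        .find(|target| target["name"] == target_name)?;
    Some(
        target["required-features"]
            .as_array()
            .map(|features| {
                features
                    .iter()
                    .filter_map(|feature| feature.as_str().map(String::from))
                    .collect()
            })
            .unwrap_or_default(),
    )
}
```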

Release Notes:

- Required features are now enabled when executing a Rust example or bin
through a task
2025-04-10 20:29:07 +00:00
Smit Barmase
0921762b59 install_cli: Show feedback when installing CLI from welcome screen (#28532)
Closes #28408

Release Notes:

- Fixed missing feedback when installing the CLI from the welcome page.
2025-04-11 01:47:40 +05:30
Thomas Mickley-Doyle
46b1df2e2d agent: Auto-capture telemetry feature flag (#28271)
Release Notes:

- N/A
2025-04-10 15:07:48 -05:00
Conrad Irwin
986da332db Bump rustls (#28531)
Closes #26699

Release Notes:

- Fixed a panic when enabling or disabling a VPN on macOS
2025-04-10 14:04:34 -06:00
Austin Merrick
dad33f7cc2 Fix code action selection bug while using vim visual mode (#27817)
## Problem

Code actions do not handle vim line mode correctly. See this video where
`Extract to function` doesn't extract both selected lines:


https://github.com/user-attachments/assets/8fa0fb28-0403-44f6-9e55-a59b6713dffd

## Solution

Use `selections.newest_adjusted` instead of `selections.newest_anchor`
so code actions consider the full selection.

Correct behavior:


https://github.com/user-attachments/assets/174d5a34-3873-4d20-b67d-103edec4cdbe

---

Release Notes:

- vim: Fixed code actions in visual line mode
2025-04-10 13:53:00 -06:00
Smit Barmase
64241f7d2f editor: Fix extra characters were being written at the end of an HTML tag (#28529)
Closes #25586

It is caused by the assumption that all characters being typed are word
characters, so linked edit ranges were still used even after the first
non-word character was typed. Because the next character passes all the
criteria (it is a word character, and the anchor matches the previous
range from before typing started), a wrong edit takes place.

This PR fixes it by clearing the linked edit ranges when a non-word
character is typed.

Before:

`<div cx^></div>cx` when typing fast.

After:

`<div cx^></div>` always.
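
In other words (a schematic sketch with made-up names, not the editor's actual types), linked edit ranges only survive as long as word characters are typed:

```rust
// Illustrative only: once a non-word character arrives, the linked edit
// ranges created for the opening tag are no longer valid and are dropped.
fn is_word_char(c: char) -> bool {
    c.is_alphanumeric() || c == '_'
}

fn on_char_typed(c: char, linked_edit_ranges: &mut Vec<std::ops::Range<usize>>) {
    if !is_word_char(c) {
        linked_edit_ranges.clear();
    }
}
```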


Release Notes:

- Fixed a case where extra characters were being written at the end of
an HTML tag.
2025-04-11 00:17:34 +05:30
Ben Kunkle
fbbc23bec3 editor: Restore selections to positions after last edit (#28527)
Closes #22692

Makes it so when undoing a format operation, the selections are set to
where they were at the last edit.

Release Notes:

- Made it so the cursor position is reset to where it was after the last
edit when undoing a format operation. This will only result in different
behavior when you make an edit, scroll away, initiate formatting (either
by saving or manually) and then undo the format.

---------

Co-authored-by: Zed AI <ai@zed.dev>
2025-04-10 18:33:49 +00:00
Piotr Osiewicz
26f4705198 debugger: Add breakpoint list (#28496)
![image](https://github.com/user-attachments/assets/2cbe60cc-bf04-4233-a7bc-32affff8eef5)
Release Notes:

- N/A

---------

Co-authored-by: Anthony Eid <hello@anthonyeid.me>
2025-04-10 18:18:58 +00:00
Nate Butler
3abf95216c Add progress bar component (#28518)
- Adds the progress bar component

Release Notes:

- N/A
2025-04-10 12:11:58 -06:00
Andy Waite
b0b52f299c docs: Prefer bin/rails when running Rails tests (#28167) 2025-04-10 17:46:12 +00:00
Ben Kunkle
53cde329da Clean up formatting code and add testing for formatting with multiple formatters (including code actions!) (#28457)
Release Notes:

- N/A

---------

Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
2025-04-10 15:32:43 +00:00
Cole Miller
b55b310ad0 Downgrade environment-related logging (#28509)
Closes #ISSUE

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-04-10 14:38:29 +00:00
Cole Miller
8ab25e2bac Fix merge conflicts jumping (#28508)
This regressed in #27568, oops.

Release Notes:

- Fixed a bug causing conflicted files in the git panel to jump to the
"Tracked" section as soon as they were staged.
2025-04-10 14:29:36 +00:00
Peter Tripp
c10b1f7c61 docs: Update system requirements (#28504)
Explicitly note that macOS 12.x Monterey is required for screen sharing.

Release Notes:

- N/A
2025-04-10 14:18:12 +00:00
Thomas Mickley-Doyle
cb1ee01a66 agent: Add selected tool names to agent panel telemetry (#28247)
Release Notes:

- N/A
2025-04-10 08:43:52 -05:00
Agus Zubiaga
90bcde116f agent: Use current shell (#28470)
Release Notes:

- agent: Replace `bash` tool with `terminal` tool which uses the current
shell

---------

Co-authored-by: Bennet <bennet@zed.dev>
Co-authored-by: Antonio <antonio@zed.dev>
2025-04-09 23:38:36 -06:00
Antonio Scandurra
8ac378b86e Lay the groundwork for a Rust-based eval (#28488)
Also, we moved the logic for driving the agentic loop into `Thread` so
that we don't have to re-implement it.

Release Notes:

- N/A

---------

Co-authored-by: Nathan Sobo <nathan@zed.dev>
2025-04-10 04:45:27 +00:00
Bennet Bo Fenner
55760295d9 agent: Optimize render_markdown_block function (#28487)
Co-Authored-by: Agus <agus@zed.dev>

Closes #ISSUE

Release Notes:

- N/A

Co-authored-by: Agus <agus@zed.dev>
2025-04-10 04:29:47 +00:00
Antonio Scandurra
9dfb907f97 Revert "Add reminder message about system prompt" (#28482)
This breaks the agentic loop.
2025-04-09 22:12:33 -06:00
Danilo Leal
e20daa7639 agent: Fix toolbar spacing (#28485)
Release Notes:

- N/A
2025-04-10 01:08:07 -03:00
Danilo Leal
b46ab367ef agent: Add button to open thread as markdown (#28481)
<img
src="https://github.com/user-attachments/assets/92ca8f64-a949-4cc1-a657-3978a2c65839"
width="600"/>

Release Notes:

- agent: The action to open the current active thread in Markdown is now
exposed in the UI.
2025-04-10 00:09:34 -03:00
5brian
12212dc329 agent: Prevent sending whitespace only messages (#28409)
Prevent this from happening when sending a prompt with only spaces and
newlines:


![image](https://github.com/user-attachments/assets/b275f4c5-c013-4695-8fb4-e3ad75d41750)
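
The guard itself boils down to a trim check (a sketch with an assumed function name, not the panel's actual code):

```rust
// Illustrative only: a message consisting solely of spaces and newlines
// should not be sent.
fn should_send(message: &str) -> bool {
    !message.trim().is_empty()
}
```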

Release Notes:

- agent: Prevent sending messages containing only whitespace.
2025-04-09 23:57:42 -03:00
Conrad Irwin
324e4658ba Reset modifiers when the window active state changes (#28348)
Closes #23449

Release Notes:

- Fixed a bug causing shift to get stuck down when the window focus
changes

---------

Co-authored-by: Dino <dinojoaocosta@gmail.com>
2025-04-09 20:55:19 -06:00
Conrad Irwin
ed500dacb6 Fix typo in symbolicate script (#28456)
Fix silly typo in symbolicate script

Release Notes:

- N/A
2025-04-09 20:55:10 -06:00
Danilo Leal
2f4b48129b agent: Collapse code blocks in the active thread (#28467)
Release Notes:

- N/A

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-04-09 23:44:02 -03:00
Dino
ed7c55a04e vim: Reset search range after substitute (#28403)
Update the `Vim::replace_command` method to reset the range in the
`BufferSearchBar` after running the replacement. This fixes the issue
where the number of matches in the search bar would be incorrect after
the replacement was done, because it only took into consideration the
range in which the replacement happened instead of the whole buffer.

To get this working, a new
`BufferSearchBar::clear_search_within_ranges` method is introduced in
these changes.

Release Notes:

- Fixed the number of matches displayed in the search bar after running
vim's substitute command.
2025-04-09 20:43:53 -06:00
Richard Feldman
6db4ab381c Add code action tool and rename tool (#28453)
Having a separate rename tool seems to make the agent more likely to use
it compared to having it be part of the code actions tool.

Release Notes:

- Added code action tool and rename tool.
2025-04-09 22:38:01 -04:00
renovate[bot]
0e72a7e6ce Update Rust crate smallvec to v1.15.0 (#28469)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [smallvec](https://redirect.github.com/servo/rust-smallvec) | workspace.dependencies | minor | `1.14.0` -> `1.15.0` |

---

### Release Notes

<details>
<summary>servo/rust-smallvec (smallvec)</summary>

###
[`v1.15.0`](https://redirect.github.com/servo/rust-smallvec/releases/tag/v1.15.0)

[Compare
Source](https://redirect.github.com/servo/rust-smallvec/compare/v1.14.0...v1.15.0)

#### What's Changed

- Fix typos by
[@&#8203;waywardmonkeys](https://redirect.github.com/waywardmonkeys) in
[https://github.com/servo/rust-smallvec/pull/373](https://redirect.github.com/servo/rust-smallvec/pull/373)
- Implement bincode2 encode/decode support for smallvec v1 by
[@&#8203;markbt](https://redirect.github.com/markbt) in
[https://github.com/servo/rust-smallvec/pull/375](https://redirect.github.com/servo/rust-smallvec/pull/375)

#### New Contributors

- [@&#8203;markbt](https://redirect.github.com/markbt) made their first
contribution in
[https://github.com/servo/rust-smallvec/pull/375](https://redirect.github.com/servo/rust-smallvec/pull/375)

**Full Changelog**:
https://github.com/servo/rust-smallvec/compare/v1.14.0...v1.15.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-09 17:37:32 -06:00
renovate[bot]
3dc3ab062d Update Rust crate prometheus to 0.14 (#28468)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [prometheus](https://redirect.github.com/tikv/rust-prometheus) | dependencies | minor | `0.13` -> `0.14` |

---

### Release Notes

<details>
<summary>tikv/rust-prometheus (prometheus)</summary>

###
[`v0.14.0`](https://redirect.github.com/tikv/rust-prometheus/blob/HEAD/CHANGELOG.md#0140)

[Compare
Source](https://redirect.github.com/tikv/rust-prometheus/compare/v0.13.4...v0.14.0)

- API change: Use `AsRef<str>` for owned label values
([#&#8203;537](https://redirect.github.com/tikv/rust-prometheus/issues/537))

- Improvement: Hashing improvements
([#&#8203;532](https://redirect.github.com/tikv/rust-prometheus/issues/532))

- Dependency upgrade: Update `hyper` to 1.6
([#&#8203;524](https://redirect.github.com/tikv/rust-prometheus/issues/524))

- Dependency upgrade: Update `procfs` to 0.17
([#&#8203;543](https://redirect.github.com/tikv/rust-prometheus/issues/543))

- Dependency upgrade: Update `protobuf` to 3.7.2 for RUSTSEC-2024-0437
([#&#8203;541](https://redirect.github.com/tikv/rust-prometheus/issues/541))

- Dependency upgrade: Update `thiserror` to 2.0
([#&#8203;534](https://redirect.github.com/tikv/rust-prometheus/issues/534))

- Internal change: Fix LSP and Clippy warnings
([#&#8203;540](https://redirect.github.com/tikv/rust-prometheus/issues/540))

- Internal change: Bump MSRV to 1.81
([#&#8203;539](https://redirect.github.com/tikv/rust-prometheus/issues/539))

- Documentation: Fix `register_histogram_vec_with_registry` docstring
([#&#8203;528](https://redirect.github.com/tikv/rust-prometheus/issues/528))

- Documentation: Fix typos in static-metric docstrings
([#&#8203;479](https://redirect.github.com/tikv/rust-prometheus/issues/479))

- Documentation: Add missing `protobuf` feature to README list
([#&#8203;531](https://redirect.github.com/tikv/rust-prometheus/issues/531))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-09 17:37:15 -06:00
Michael Sloan
ed63f216e3 Remove "use_key_equivalents" from linux keymap as it does nothing (#28464)
Release Notes:

- N/A
2025-04-09 23:02:45 +00:00
Michael Sloan
ba767a1998 Fix directory context paths (#28459)
Release Notes:

- N/A
2025-04-09 21:40:46 +00:00
renovate[bot]
23c3f5f410 Update Rust crate indexmap to v2.9.0 (#28455)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [indexmap](https://redirect.github.com/indexmap-rs/indexmap) | workspace.dependencies | minor | `2.8.0` -> `2.9.0` |

---

### Release Notes

<details>
<summary>indexmap-rs/indexmap (indexmap)</summary>

###
[`v2.9.0`](https://redirect.github.com/indexmap-rs/indexmap/blob/HEAD/RELEASES.md#290-2025-04-04)

[Compare
Source](https://redirect.github.com/indexmap-rs/indexmap/compare/2.8.0...2.9.0)

- Added a `get_disjoint_mut` method to `IndexMap`, matching Rust 1.86's
  `HashMap` method.
- Added a `get_disjoint_indices_mut` method to `IndexMap` and `map::Slice`,
  matching Rust 1.86's `get_disjoint_mut` method on slices.
- Deprecated the `borsh` feature in favor of their own `indexmap` feature,
  solving a cyclic dependency that occurred via `borsh-derive`.

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-09 14:52:40 -06:00
Vitaly Slobodin
b3be294c90 lsp_store: Preserve environment variables from ExtensionLspAdapter (#28173)
## Description

In https://github.com/zed-industries/zed/pull/27213 a new feature for
setting env variables for LSPs was added, but env vars passed from an
instance of `ExtensionLspAdapter` are now lost. This means that if an
extension returns any env variable like this:

```rust
zed::Command {
  command: some_command,
  args: some_args,
  env: vec![("A", "value_for_a")],
}
```

The env variable `A` will never be used by `LspStore`. This commit
preserves env variables passed from an instance of
`ExtensionLspAdapter`.

After this change, overwriting of env variables happens in the
following order:

```plaintext
shell <- variables from an extension <- variables from settings
```
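
A minimal sketch of that ordering (the names are illustrative, not Zed's actual API): later sources overwrite keys set by earlier ones.

```rust
use std::collections::HashMap;

// Sketch only: shell env first, then extension-provided vars, then vars from
// settings; each later layer overrides keys set by earlier ones.
fn merged_env(
    shell: HashMap<String, String>,
    extension: HashMap<String, String>,
    settings: HashMap<String, String>,
) -> HashMap<String, String> {
    let mut env = shell;
    env.extend(extension);
    env.extend(settings);
    env
}
```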

## How to reproduce

Allow any extension to return a `zed::Command` with environment
variables to Zed. You can use [this
branch](https://github.com/zed-extensions/ruby/pull/48) for the Ruby
extension:

1. Check out the branch and install the dev version of the Ruby
extension.
2. Ensure you have the `solargraph` LSP configured and enabled for the
Ruby extension. This LSP is enabled by default in Zed and in the Ruby
extension.
3. Make sure you don’t have `solargraph` installed in your user gemset.
4. Open any Ruby project, such as [this
one](https://github.com/vitallium/stimulus-lsp-error-zed).
5. Open a Ruby file and wait for the error message about failing to
start `solargraph`. It should look like this or something similar:

```
[2025-04-05T23:17:26+02:00 ERROR project::lsp_store] server stderr: "/Users/vslobodin/.local/share/mise/installs/ruby/3.4.1/lib/ruby/site_ruby/3.4.0/rubygems.rb:262:in 'Gem.find_spec_for_exe': can't find gem solargraph (>= 0.a) with executable solargraph (Gem::GemNotFoundException)\n\tfrom /Users/vslobodin/.local/share/mise/installs/ruby/3.4.1/lib/ruby/site_ruby/3.4.0/rubygems.rb:281:in 'Gem.activate_bin_path'\n"
```

This error occurs because the Ruby extension passes the `GEM_PATH`
environment variable to specify the location of Ruby gems. Without it,
Zed tries to spawn the `solargraph` gem in the user's gemset scope. Ruby
fails to start it because the `solargraph` gem is not installed in the
user gemset but in the extension directory. By setting the `GEM_PATH`
environment variable, Ruby searches additional locations to start the
`solargraph` LSP.

I hope I've described it correctly. Please let me know if you need more
information. Thanks!

Release Notes:

- Fixed the issue where environment variables from `ExtensionLspAdapter`
were lost
2025-04-09 14:50:50 -06:00
Dino
af5318df98 Update default vim substitute command behavior and add support for 'g' flag (#28138)
This Pull Request updates the default behavior of the substitute (`s`)
command in vim mode to replace only the next match instead of all
matches, and to replace all matches only when the `g` flag is provided,
making it more similar to Neovim's behavior.

In order to achieve this, the following changes were introduced:

- Update `BufferSearchBar::replace_next` to be a public method, so it
can be called from `Vim::replace_command`.
- Update the `Replacement::parse` to set the `should_replace_all` field
to `false` by default, and only set it to `true` if the `'g'` flag is
present in the query.
- Add support for when the `Replacement.should_replace_all` is set to
`false` in `Vim::replace_command`, so as to have it only replace the
next occurrence instead of all occurrences in the line.
- Introduce `BufferSearchBar::select_first_match` so as to activate the
first match on the line under the cursor.

Closes #24450 

Release Notes:

- Improved vim's substitute command so as to only replace the first
match by default, and replace all matches if the `'g'` flag is provided

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-04-09 14:34:51 -06:00
5brian
60c420a2da docs: Update vim features (#28360)
Follow up:
https://github.com/zed-industries/zed/pull/28044#issuecomment-2786769520

Adds
- Indent wise motions
- :ls
- :set

Release Notes:

- vim: Added documentation for indent-wise motions, `:ls`, and `:set`
2025-04-09 16:30:50 -04:00
5brian
ee6c33ffb3 Fix vim test keystroke (#28406)
I wrote the test wrongly in
https://github.com/zed-industries/zed/pull/28005:

It should be `$` instead of `shift-4`, so it was just yanking from the
middle of the line instead of the newline character. Fixed it and
regenerated it.

Release Notes:

- N/A
2025-04-09 14:29:03 -06:00
tidely
9ae4f4b158 gpui: Use BoolExt trait in more places (#28052)
Use the `BoolExt` trait, which converts Rust booleans to their Objective-C
equivalents, where applicable.


Release Notes:

- N/A
2025-04-09 14:28:15 -06:00
renovate[bot]
915a1cb116 Update actions/dependency-review-action digest to 67d4f4b (#28450)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [actions/dependency-review-action](https://redirect.github.com/actions/dependency-review-action) | action | digest | `3b139cf` -> `67d4f4b` |

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-09 14:23:48 -06:00
renovate[bot]
aead0e11ff Update Rust crate mimalloc to v0.1.46 (#27964)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [mimalloc](https://redirect.github.com/purpleprotocol/mimalloc_rust) | dependencies | patch | `0.1.45` -> `0.1.46` |

---

### Release Notes

<details>
<summary>purpleprotocol/mimalloc_rust (mimalloc)</summary>

###
[`v0.1.46`](https://redirect.github.com/purpleprotocol/mimalloc_rust/releases/tag/v0.1.46):
Version 0.1.46

[Compare
Source](https://redirect.github.com/purpleprotocol/mimalloc_rust/compare/v0.1.45...v0.1.46)

##### Changes

-   Fixed musl builds.

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-09 14:23:19 -06:00
Anthony Eid
2752c08810 debugger: Add run to cursor and evaluate selected text actions (#28405)
## Summary

### Actions

This PR implements actions that allow a user to "run to cursor" and
"evaluate selected text" while there's an active debug session and
exposes the functionality to the UI as well.

- Run to cursor: Can be accessed by right clicking on the gutter
- Evaluate selected text: Can be accessed by selecting text then right
clicking in the editor

### Bug fixes

I also fixed these bugs:

- Panic when using debugger: Stop action
- Debugger actions command palette filter not working properly in all
cases
- We stopped displaying the correct label in the session's context menu
when a session was terminated

Release Notes:

- N/A

---------

Co-authored-by: Max Brunsfeld <max@zed.dev>
Co-authored-by: Remco Smits <djsmits12@gmail.com>
2025-04-09 19:57:29 +00:00
Bennet Bo Fenner
780143298a agent: Fuzzy match on paths and symbols when typing @ (#28357)
Release Notes:

- agent: Improve fuzzy matching when using @-mentions
2025-04-09 19:00:23 +00:00
João Marcos
088d7c1342 Add sublime keybinding for git::Restore (#28444)
Release Notes:

- Sublime Keymap: Added `git::Restore` compatibility bind (revert_hunk).
Mac: `cmd-k cmd-z` and Linux: `ctrl-k ctrl-z`.
2025-04-09 14:57:15 -03:00
neunato
64de6bd2a8 Don't scroll the editor on select all matches (#28435)
Part of https://github.com/zed-industries/zed/issues/9309

Release Notes:

- Improved scroll behavior of `editor: select all matches`

---------

Co-authored-by: Kirill Bulatov <kirill@zed.dev>
2025-04-09 17:50:14 +00:00
Finn Evers
6aa0248ab3 docs: Update outdated keybind for opening extensions page (#28443)
This PR updates an outdated keybind for opening the extensions page (the
shown keybind opens the project panel instead) on the `Configuring
Languages` page.

It also updates a nearby keybind to use the preprocessor syntax instead.

Release Notes:

- N/A
2025-04-09 13:46:12 -04:00
Thomas Mickley-Doyle
342134fbab agent: Add reactions at the response level (#27958)
Release Notes:

- Added the user reaction (👍 or 👎) to each agent response.
- 👎 will trigger a comment box linked to the response

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Agus Zubiaga <hi@aguz.me>
2025-04-09 14:21:07 -03:00
João Marcos
b47aa33459 Remove actions UnfoldAt and FoldAt (#28442)
`UnfoldAt` and `FoldAt` are used internally and don't really work
when users try to trigger them. They do, however, appear in the command
palette and keybindings, misleading users into trying to use them.

Release Notes:

- Remove unused actions `UnfoldAt` and `FoldAt` (prefer `Fold` and
`Unfold`).
2025-04-09 17:13:41 +00:00
Michael Sloan
9f6c5e2877 Reapply "Use Project instead of Workspace in ContextStore (#28402)" (#28441)
The motivation for this change is to use `ContextStore` in the headless
assistant, which requires it to not depend on UI entities like
`Workspace`.

This reapplies a change that was reverted in #28428, and fixes the panic.

Release Notes:

- N/A
2025-04-09 16:56:14 +00:00
Cole Miller
7bf6cd4ccf Fix ancestor git repositories going missing (#28436)
Closes #ISSUE

Release Notes:

- Fixed a bug that caused Zed to sometimes not discover git repositories
above a worktree root.
2025-04-09 12:44:29 -04:00
Peter Tripp
c7963c8a93 ci: Require workspace_hack for PR merge (#28431)
Release Notes:

- N/A
2025-04-09 16:43:38 +00:00
renovate[bot]
dd4629433b Update cachix/install-nix-action digest to d1ca217 (#27951)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [cachix/install-nix-action](https://redirect.github.com/cachix/install-nix-action) | action | digest | `02a151a` -> `d1ca217` |

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-09 09:31:13 -07:00
Rodrigo Freire
2e56935997 Fix invalid number of space characters inserted for tab (#27336)
Closes #25941 

Release Notes:

- Corrected SoftTab indentation handling for lines with mixed spaces and
tabs across .go files and other file types.
- Renamed the editor test `test_tab_with_mixed_whitespace` to
`test_tab_with_mixed_whitespace_rust` as it only tested this behavior
for Rust buffers, which have auto-indentation support. This change
clarifies that the test does not cover default files without
language-specific features.
- Added a new editor test `test_tab_with_mixed_whitespace_txt` to ensure
proper coverage for files with no associated language.

While investigating the issue — initially thought to be Go-related — I
discovered that the underlying problem was how soft tabs were calculated
in `Editor::tab`, given that the problem could also be observed on
`.txt` files.

The correct soft tab indentation is now determined by treating all `\t`
characters before the cursor (on the same row) as new indentation
levels, resetting the remainder counter accordingly.
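
A simplified sketch of that calculation (for illustration only, not the exact code in `Editor::tab`): each `\t` before the cursor advances to the next tab stop, and the soft tab then fills the remaining distance to the following stop.

```rust
// Illustrative only: how many spaces a soft tab should insert, given the text
// before the cursor on the current row and the configured tab size.
fn soft_tab_width(before_cursor: &str, tab_size: usize) -> usize {
    let mut column = 0;
    for ch in before_cursor.chars() {
        if ch == '\t' {
            // A hard tab jumps to the next tab stop, resetting the remainder.
            column += tab_size - column % tab_size;
        } else {
            column += 1;
        }
    }
    tab_size - column % tab_size
}
```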


https://github.com/user-attachments/assets/78192e98-2b81-43cb-ae6f-7c48cd17d168
2025-04-09 16:22:14 +00:00
Richard Feldman
e43a397f1d Make regex search tool optionally case-sensitive (#28427)
Release Notes:

- The agent panel's regex search tool is now optionally case-sensitive.
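
Assuming the tool compiles its pattern with the `regex` crate, the toggle amounts to something like this sketch (not necessarily the tool's actual code):

```rust
use regex::RegexBuilder;

// Illustrative only: compile the search pattern with optional case sensitivity.
fn build_search(pattern: &str, case_sensitive: bool) -> Result<regex::Regex, regex::Error> {
    RegexBuilder::new(pattern)
        .case_insensitive(!case_sensitive)
        .build()
}
```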
2025-04-09 16:21:21 +00:00
Richard Feldman
9d0fe164a7 Revert to fix panic in inline assistant (#28428)
This reverts commit f12a554f86, which
introduced a panic in inline assistant (cc @mgsloan) - I'm not sure what
the motivation was for that change, but I figure we can revert to fix
the inline assistant now and deal with that later. 😄

Panic was:

> Thread "main" panicked with "cannot read workspace::Workspace while it
is already being updated" at
/Users/rtfeldman/code/zed/crates/gpui/src/app/entity_map.rs:139:32


Release Notes:

- N/A
2025-04-09 11:24:53 -04:00
Kainoa Kanter
6d7fef6fd3 Add icon for Vyper files (#28307)
Release Notes:

- Added icon for Vyper (`.vy`, `.vyi`) files
2025-04-09 10:49:39 -04:00
5brian
b67d3fd21b git_ui: Show disabled states in context menu (#28288)
Other elements in the git panel are shown as disabled when an action is
not actionable (For example: stage all, commit). Updating the context
menu to match this behavior when an action does nothing.

|Before|After|
|--|--|
|![image](https://github.com/user-attachments/assets/e517f758-216f-4451-911b-7121dce0c53b)|![image](https://github.com/user-attachments/assets/a85905c1-2f42-44c3-8b11-2f93c8a6f686)|





Release Notes:

- Git: Improved the Git panel context menu to show actions with no
effect as disabled.
2025-04-09 10:46:21 -04:00
Agus Zubiaga
1cb4f8288d Fix bash tool output (#28391) 2025-04-09 08:20:24 -06:00
Richard Feldman
3a8fe4d973 Add reminder message about system prompt (#28344)
Trying out sending the model a reminder message about code blocks in the
system prompt. If this seems to work well, we can include more specific
reminder messages, e.g. tool-specific ones.

Release Notes:

- N/A
2025-04-09 10:09:48 -04:00
Joseph T. Lyons
9d6d152918 Bump Zed to v0.183 (#28419)
Release Notes:

- N/A
2025-04-09 09:11:25 -04:00
Joseph T. Lyons
31034f8296 Add toggle case command (#28415)
A small addition for those coming from JetBrains IDEs. A behavioral
detail: when any uppercase character is detected, the command defaults
to toggling to lowercase.

> Note that when you apply the toggle case action to the CamelCase name
format, IntelliJ IDEA converts the name to the lower case.


https://www.jetbrains.com/help/idea/working-with-source-code.html#edit_code_fragments
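
A sketch of the described behavior (illustrative only, not the command's actual implementation):

```rust
// Illustrative only: if the selection contains any uppercase character,
// lowercase everything; otherwise uppercase everything.
fn toggle_case(text: &str) -> String {
    if text.chars().any(char::is_uppercase) {
        text.to_lowercase()
    } else {
        text.to_uppercase()
    }
}
```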

Release Notes:

- Added an `editor: toggle case` command. Use `cmd-shift-u` for macOS
and `ctrl-shift-u` for Linux, when using the `JetBrains` keymap.
2025-04-09 08:44:53 -04:00
Piotr Osiewicz
c441b651fa debugger: Add support for CodeLLDB (#28376)
Closes #ISSUE

Release Notes:

- N/A
2025-04-09 12:57:24 +02:00
Piotr Osiewicz
61ddcd516f chore: Add workspace-hack dependency to agent_rules (#28412)
Closes #ISSUE

Release Notes:

- N/A
2025-04-09 10:19:54 +00:00
Michael Sloan
f12a554f86 Use Project instead of Workspace in ContextStore (#28402)
Release Notes:

- N/A
2025-04-09 05:05:24 +00:00
Cole Miller
9dae4d8c59 Remove references to SSH remoting beta (#28399)
Release Notes:

- N/A
2025-04-09 03:26:22 +00:00
Cole Miller
f0b7f355a2 Clean up environment loading a bit (#28356)
Closes #ISSUE

Release Notes:

- N/A
2025-04-08 22:16:35 -04:00
Cole Miller
b687a5e56d git: Always reload current branch after pushing (#28327)
Closes #27347 

Release Notes:

- Fixed a bug causing the git panel to not update after pushing to a
remote
2025-04-08 22:16:03 -04:00
Ben Kunkle
e66a24edcf format: Re-implement support for formatting with code actions that contain commands (#28392)
Closes #27692
Closes #27935

Release Notes:

- Fixed a regression where code-actions used when formatting on save
were rejected if they contained commands
2025-04-09 01:53:54 +00:00
Michael Sloan
301fc7cd7b Pull out plain rules file loading code into a new agent_rules crate (#28383)
Also renames for rules file templated into the system prompt

Release Notes:

- N/A
2025-04-09 01:31:56 +00:00
Mikayla Maki
020a1071d5 Add the project search as an item in the status bar (#28388)
Was chatting with @wilhelmklopp, and he pointed out that our current
UI-accessible way to reach the project search was pretty obscure.


<img width="393" alt="Screenshot 2025-04-08 at 6 57 51 PM"
src="https://github.com/user-attachments/assets/636053cd-5a88-4a5e-8155-6d41d189b7db"
/>

Release Notes:

- Added a button to open the project search to the status bar
2025-04-09 01:13:48 +00:00
Bennet Bo Fenner
38d2487630 agent: Polish Generating... animation (#28379)
https://github.com/user-attachments/assets/9e798a50-9403-4e1c-a3df-2931e748b77d



Release Notes:

- N/A
2025-04-08 18:14:30 -06:00
0x2CA
79c9f2bbd9 editor: Fix invalid read-only with split pane (#28012)
Closes #28004

Release Notes:

- Fixed invalid read-only with split pane
2025-04-08 18:09:34 -06:00
Danilo Leal
c8caae03df agent: Change the reject changes keybinding (#28381)
This PR makes the reject keybinding, in the Review Changes multibuffer,
`cmd-n`.

Release Notes:

- N/A
2025-04-08 21:09:05 -03:00
Danilo Leal
dabc4d8ff5 agent: Remove type of item in the panel history view (#28382)
This PR removes the labels displaying whether a certain item in the
Agent Panel's history is a thread or prompt editor.

Release Notes:

- N/A
2025-04-08 21:08:56 -03:00
Antonio Scandurra
c0ad3e8183 Introduce a telemetry event for when a tool finishes (#28380)
This should help us understand which tools fail the most.

Release Notes:

- N/A
2025-04-09 00:07:06 +00:00
Kirill Bulatov
afde25a5cb Fix a docs typo (#28384)
Closes https://github.com/zed-industries/zed/pull/28053

Release Notes:

- N/A
2025-04-09 00:05:58 +00:00
Michael Sloan
9f708ee789 Fix refactoring bug in dashes around rounded corners (#28378)
Accidentally introduced in #28341

Release Notes:

- N/A
2025-04-09 00:00:30 +00:00
Michael Sloan
58731e2fd1 Remove log when pulldown_cmark produces long substituted text (#28375)
Turns out that consecutive dashes are substituted with half the number
of input dashes. Extended the test with this case as well

Release Notes:

- N/A
2025-04-08 23:45:49 +00:00
Antonio Scandurra
d0632a5332 Fix truncation of bash output (#28374)
Release Notes:

- Fixed a regression that caused the bash tool to not include all of the
output.

---------

Co-authored-by: Agus Zubiaga <hi@aguz.me>
2025-04-08 23:41:20 +00:00
Danilo Leal
64cea2f1f1 agent: Refine toolbar spacing (#28373)
Release Notes:

- N/A
2025-04-08 23:28:25 +00:00
Antonio Scandurra
ac958d4a2d Encourage agent to edit files it just created (#28372)
Release Notes:

- Fixed a problem that would cause the agent to keep recreating a file
instead of editing it.
2025-04-08 23:18:34 +00:00
Danilo Leal
2df06cd2e4 agent: Improve thinking design display (#28186)
Release Notes:

- N/A
2025-04-08 20:13:49 -03:00
Danilo Leal
0d4ca71e68 agent: Change "prompt editor" to "text thread" (#28370)
Release Notes:

- N/A
2025-04-08 19:56:01 -03:00
Danilo Leal
e2d6505d12 agent: Make the copy button in the codeblock visible on hover (#28371)
This simplifies the UI a little bit.

Release Notes:

- N/A
2025-04-08 19:55:53 -03:00
210 changed files with 8986 additions and 4852 deletions


@@ -225,7 +225,7 @@ jobs:
- name: Check for new vulnerable dependencies
if: github.event_name == 'pull_request'
uses: actions/dependency-review-action@3b139cfc5fae8b618d3eae3675e383bb1769c019 # v4
uses: actions/dependency-review-action@67d4f4bd7a9b17a0db54d2a7519187c65e339de8 # v4
with:
license-check: false
@@ -465,6 +465,7 @@ jobs:
- job_spec
- style
- migration_checks
# run_tests: If adding required tests, add them here and to script below.
- workspace_hack
- linux_tests
- build_remote_server
@@ -482,11 +483,14 @@ jobs:
# Only check test jobs if they were supposed to run
if [[ "${{ needs.job_spec.outputs.run_tests }}" == "true" ]]; then
[[ "${{ needs.workspace_hack.result }}" != 'success' ]] && { RET_CODE=1; echo "Workspace Hack failed"; }
[[ "${{ needs.macos_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "macOS tests failed"; }
[[ "${{ needs.linux_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "Linux tests failed"; }
[[ "${{ needs.windows_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "Windows tests failed"; }
[[ "${{ needs.windows_clippy.result }}" != 'success' ]] && { RET_CODE=1; echo "Windows clippy failed"; }
[[ "${{ needs.build_remote_server.result }}" != 'success' ]] && { RET_CODE=1; echo "Remote server build failed"; }
# This check is intentionally disabled. See: https://github.com/zed-industries/zed/pull/28431
# [[ "${{ needs.migration_checks.result }}" != 'success' ]] && { RET_CODE=1; echo "Migration Checks failed"; }
fi
if [[ "$RET_CODE" -eq 0 ]]; then
echo "All tests passed successfully!"
@@ -739,7 +743,7 @@ jobs:
echo "/nix/var/nix/profiles/default/bin" >> $GITHUB_PATH
echo "/Users/administrator/.nix-profile/bin" >> $GITHUB_PATH
- uses: cachix/install-nix-action@02a151ada4993995686f9ed4f1be7cfbb229e56f # v31
- uses: cachix/install-nix-action@d1ca217b388ee87b2507a9a93bf01368bde7cec2 # v31
if: ${{ matrix.system.install_nix }}
with:
github_access_token: ${{ secrets.GITHUB_TOKEN }}


@@ -206,7 +206,7 @@ jobs:
echo "/nix/var/nix/profiles/default/bin" >> $GITHUB_PATH
echo "/Users/administrator/.nix-profile/bin" >> $GITHUB_PATH
- uses: cachix/install-nix-action@02a151ada4993995686f9ed4f1be7cfbb229e56f # v31
- uses: cachix/install-nix-action@d1ca217b388ee87b2507a9a93bf01368bde7cec2 # v31
if: ${{ matrix.system.install_nix }}
with:
github_access_token: ${{ secrets.GITHUB_TOKEN }}


@@ -1,13 +1,13 @@
[
{
"label": "Debug Zed with LLDB",
"adapter": "LLDB",
"label": "Debug Zed (CodeLLDB)",
"adapter": "CodeLLDB",
"program": "$ZED_WORKTREE_ROOT/target/debug/zed",
"request": "launch",
"cwd": "$ZED_WORKTREE_ROOT"
},
{
"label": "Debug Zed with GDB",
"label": "Debug Zed (GDB)",
"adapter": "GDB",
"program": "$ZED_WORKTREE_ROOT/target/debug/zed",
"request": "launch",

148
Cargo.lock generated

@@ -64,6 +64,7 @@ dependencies = [
"clock",
"collections",
"command_palette_hooks",
"component",
"context_server",
"convert_case 0.8.0",
"db",
@@ -84,6 +85,7 @@ dependencies = [
"language",
"language_model",
"language_model_selector",
"linkme",
"log",
"lsp",
"markdown",
@@ -113,6 +115,7 @@ dependencies = [
"terminal_view",
"text",
"theme",
"thiserror 2.0.12",
"time",
"time_format",
"ui",
@@ -124,43 +127,6 @@ dependencies = [
"zed_actions",
]
[[package]]
name = "agent_eval"
version = "0.1.0"
dependencies = [
"agent",
"anyhow",
"assistant_tool",
"assistant_tools",
"clap",
"client",
"collections",
"context_server",
"dap",
"env_logger 0.11.8",
"fs",
"futures 0.3.31",
"gpui",
"gpui_tokio",
"language",
"language_model",
"language_models",
"node_runtime",
"project",
"prompt_store",
"release_channel",
"reqwest_client",
"serde",
"serde_json",
"serde_json_lenient",
"settings",
"smol",
"tempfile",
"util",
"walkdir",
"workspace-hack",
]
[[package]]
name = "ahash"
version = "0.7.8"
@@ -1624,7 +1590,7 @@ dependencies = [
"hyper-util",
"pin-project-lite",
"rustls 0.21.12",
"rustls 0.23.25",
"rustls 0.23.26",
"rustls-native-certs 0.8.1",
"rustls-pki-types",
"tokio",
@@ -4215,6 +4181,7 @@ dependencies = [
"settings",
"sysinfo",
"task",
"tasks_ui",
"terminal_view",
"theme",
"ui",
@@ -4585,6 +4552,7 @@ dependencies = [
"client",
"clock",
"collections",
"command_palette_hooks",
"convert_case 0.8.0",
"ctor",
"db",
@@ -4885,6 +4853,37 @@ dependencies = [
"num-traits",
]
[[package]]
name = "eval"
version = "0.1.0"
dependencies = [
"agent",
"anyhow",
"assistant_settings",
"assistant_tool",
"assistant_tools",
"client",
"context_server",
"dap",
"env_logger 0.11.8",
"fs",
"futures 0.3.31",
"gpui",
"gpui_tokio",
"language",
"language_model",
"language_models",
"node_runtime",
"project",
"prompt_store",
"release_channel",
"reqwest_client",
"serde",
"settings",
"toml 0.8.20",
"workspace-hack",
]
[[package]]
name = "evals"
version = "0.1.0"
@@ -6620,7 +6619,7 @@ dependencies = [
name = "http_client_tls"
version = "0.1.0"
dependencies = [
"rustls 0.23.25",
"rustls 0.23.26",
"rustls-platform-verifier",
"workspace-hack",
]
@@ -6719,7 +6718,7 @@ dependencies = [
"http 1.3.1",
"hyper 1.6.0",
"hyper-util",
"rustls 0.23.25",
"rustls 0.23.26",
"rustls-native-certs 0.8.1",
"rustls-pki-types",
"tokio",
@@ -7064,9 +7063,9 @@ dependencies = [
[[package]]
name = "indexmap"
version = "2.8.0"
version = "2.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3954d50fe15b02142bf25d3b8bdadb634ec3948f103d04ffe3031bc8fe9d7058"
checksum = "cea70ddb795996207ad57735b50c5982d8844f38ba9ee5f1aedcfb708a2aa11e"
dependencies = [
"equivalent",
"hashbrown 0.15.2",
@@ -7172,9 +7171,12 @@ name = "install_cli"
version = "0.1.0"
dependencies = [
"anyhow",
"client",
"gpui",
"release_channel",
"smol",
"util",
"workspace",
"workspace-hack",
]
@@ -7921,9 +7923,9 @@ checksum = "8355be11b20d696c8f18f6cc018c4e372165b1fa8126cef092399c9951984ffa"
[[package]]
name = "libmimalloc-sys"
version = "0.1.41"
version = "0.1.42"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6b20daca3a4ac14dbdc753c5e90fc7b490a48a9131daed3c9a9ced7b2defd37b"
checksum = "ec9d6fac27761dabcd4ee73571cdb06b7022dc99089acbe5435691edffaac0f4"
dependencies = [
"cc",
"libc",
@@ -8594,9 +8596,9 @@ dependencies = [
[[package]]
name = "mimalloc"
version = "0.1.45"
version = "0.1.46"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "03cb1f88093fe50061ca1195d336ffec131347c7b833db31f9ab62a2d1b7925f"
checksum = "995942f432bbb4822a7e9c3faa87a695185b0d09273ba85f097b54f4e458f2af"
dependencies = [
"libmimalloc-sys",
]
@@ -10934,9 +10936,9 @@ dependencies = [
[[package]]
name = "prometheus"
version = "0.13.4"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3d33c28a30771f7f96db69893f78b857f7450d7e0237e9c8fc6427a81bae7ed1"
checksum = "3ca5326d8d0b950a9acd87e6a3f94745394f62e4dae1b1ee22b2bc0c394af43a"
dependencies = [
"cfg-if",
"fnv",
@@ -10944,7 +10946,7 @@ dependencies = [
"memchr",
"parking_lot",
"protobuf",
"thiserror 1.0.69",
"thiserror 2.0.12",
]
[[package]]
@@ -11119,9 +11121,23 @@ dependencies = [
[[package]]
name = "protobuf"
version = "2.28.0"
version = "3.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "106dd99e98437432fed6519dedecfade6a06a73bb7b2a1e019fdd2bee5778d94"
checksum = "d65a1d4ddae7d8b5de68153b48f6aa3bba8cb002b243dbdbc55a5afbc98f99f4"
dependencies = [
"once_cell",
"protobuf-support",
"thiserror 1.0.69",
]
[[package]]
name = "protobuf-support"
version = "3.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3e36c2f31e0a47f9280fb347ef5e461ffcd2c52dd520d8e216b52f93b0b0d7d6"
dependencies = [
"thiserror 1.0.69",
]
[[package]]
name = "psm"
@@ -11247,7 +11263,7 @@ dependencies = [
"quinn-proto",
"quinn-udp",
"rustc-hash 2.1.1",
"rustls 0.23.25",
"rustls 0.23.26",
"socket2",
"thiserror 2.0.12",
"tokio",
@@ -11266,7 +11282,7 @@ dependencies = [
"rand 0.9.0",
"ring",
"rustc-hash 2.1.1",
"rustls 0.23.25",
"rustls 0.23.26",
"rustls-pki-types",
"slab",
"thiserror 2.0.12",
@@ -11874,7 +11890,7 @@ dependencies = [
"percent-encoding",
"pin-project-lite",
"quinn",
"rustls 0.23.25",
"rustls 0.23.26",
"rustls-native-certs 0.8.1",
"rustls-pemfile 2.2.0",
"rustls-pki-types",
@@ -12265,9 +12281,9 @@ dependencies = [
[[package]]
name = "rustls"
version = "0.23.25"
version = "0.23.26"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "822ee9188ac4ec04a2f0531e55d035fb2de73f18b41a63c70c2712503b6fb13c"
checksum = "df51b5869f3a441595eac5e8ff14d486ff285f7b8c0df8770e49c3b56351f0f0"
dependencies = [
"aws-lc-rs",
"log",
@@ -12341,7 +12357,7 @@ dependencies = [
"jni",
"log",
"once_cell",
"rustls 0.23.25",
"rustls 0.23.26",
"rustls-native-certs 0.8.1",
"rustls-platform-verifier-android",
"rustls-webpki 0.103.1",
@@ -13206,9 +13222,9 @@ dependencies = [
[[package]]
name = "smallvec"
version = "1.14.0"
version = "1.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7fcf8323ef1faaee30a44a340193b1ac6814fd9b7b4e88e9d4519a3e4abe1cfd"
checksum = "8917285742e9f3e1683f0a9c4e6b57960b7314d0b08d30d1ecd426713ee2eee9"
dependencies = [
"serde",
]
@@ -13439,7 +13455,7 @@ dependencies = [
"once_cell",
"percent-encoding",
"rust_decimal",
"rustls 0.23.25",
"rustls 0.23.26",
"rustls-pemfile 2.2.0",
"serde",
"serde_json",
@@ -14174,9 +14190,7 @@ version = "0.1.0"
dependencies = [
"anyhow",
"collections",
"debugger_ui",
"editor",
"feature_flags",
"file_icons",
"fuzzy",
"gpui",
@@ -14734,7 +14748,7 @@ version = "0.26.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e727b36a1a0e8b74c376ac2211e40c2c8af09fb4013c60d910495810f008e9b"
dependencies = [
"rustls 0.23.25",
"rustls 0.23.26",
"tokio",
]
@@ -14794,7 +14808,7 @@ checksum = "7a9daff607c6d2bf6c16fd681ccb7eecc83e4e2cdc1ca067ffaadfca5de7f084"
dependencies = [
"futures-util",
"log",
"rustls 0.23.25",
"rustls 0.23.26",
"rustls-pki-types",
"tokio",
"tokio-rustls 0.26.2",
@@ -15353,7 +15367,7 @@ dependencies = [
"httparse",
"log",
"rand 0.9.0",
"rustls 0.23.25",
"rustls 0.23.26",
"rustls-pki-types",
"sha1",
"thiserror 2.0.12",
@@ -17701,7 +17715,7 @@ dependencies = [
"rust_decimal",
"rustix 0.38.44",
"rustix 1.0.5",
"rustls 0.23.25",
"rustls 0.23.26",
"rustls-webpki 0.103.1",
"scopeguard",
"sea-orm",
@@ -18221,6 +18235,7 @@ dependencies = [
"workspace-hack",
"zed_actions",
"zeta",
"zlog",
"zlog_settings",
]
@@ -18533,6 +18548,7 @@ dependencies = [
name = "zlog"
version = "0.1.0"
dependencies = [
"anyhow",
"log",
"workspace-hack",
]


@@ -8,7 +8,6 @@ members = [
"crates/assets",
"crates/assistant",
"crates/assistant_context_editor",
"crates/agent_eval",
"crates/assistant_settings",
"crates/assistant_slash_command",
"crates/assistant_slash_commands",
@@ -46,6 +45,7 @@ members = [
"crates/diagnostics",
"crates/docs_preprocessor",
"crates/editor",
"crates/eval",
"crates/evals",
"crates/extension",
"crates/extension_api",
@@ -215,7 +215,6 @@ askpass = { path = "crates/askpass" }
assets = { path = "crates/assets" }
assistant = { path = "crates/assistant" }
assistant_context_editor = { path = "crates/assistant_context_editor" }
assistant_eval = { path = "crates/agent_eval" }
assistant_settings = { path = "crates/assistant_settings" }
assistant_slash_command = { path = "crates/assistant_slash_command" }
assistant_slash_commands = { path = "crates/assistant_slash_commands" }
@@ -507,7 +506,7 @@ runtimelib = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804
rustc-demangle = "0.1.23"
rust-embed = { version = "8.4", features = ["include-exclude"] }
rustc-hash = "2.1.0"
rustls = { version = "0.23.22" }
rustls = { version = "0.23.26" }
rustls-platform-verifier = "0.5.0"
scap = { git = "https://github.com/zed-industries/scap", rev = "08f0a01417505cc0990b9931a37e5120db92e0d0", default-features = false }
schemars = { version = "0.8", features = ["impl_json_schema", "indexmap2"] }

assets/icons/binary.svg Normal file

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-binary-icon lucide-binary"><rect x="14" y="14" width="4" height="6" rx="2"/><rect x="6" y="4" width="4" height="6" rx="2"/><path d="M6 20h4"/><path d="M14 10h4"/><path d="M6 14h2v6"/><path d="M14 4h2v6"/></svg>



@@ -0,0 +1 @@
<svg width="16" height="16" fill="none" xml:space="preserve" xmlns="http://www.w3.org/2000/svg"><g style="fill:#000;fill-opacity:1" fill="#180c25"><path d="m-116.1-101.4-28.9-28.9a6.7 6.7 0 0 1-1.8-4.7v-41.2c0-2.4-2.4-4.8-4.8-4.8h-9.6a5.2 5.2 0 0 0-4.8 4.8v48c0 2.5 1 5 2.7 6.8l33.6 33.6a9.6 9.6 0 0 0 6.8 2.8h4.8c2.7 0 4.8-2.2 4.8-4.8v-4.8c0-2.5-1-5-2.8-6.8zM-79.6-176.2c0-2.4-2.4-4.8-4.8-4.8h-9.7a5.2 5.2 0 0 0-4.7 4.8v41.2c0 1.8-.8 3.5-2 4.7l-9.6 9.7a9.5 9.5 0 0 0-2.8 6.8v4.8c0 2.6 2.1 4.7 4.8 4.7h4.8c2.4 0 4.9-.9 6.7-2.8l14.4-14.3a9.6 9.6 0 0 0 2.8-6.8v-48z" style="fill:#000;fill-opacity:1;stroke-width:.255894" transform="translate(21.6 22.7) scale(.11067)"/></g></svg>


assets/icons/flame.svg Normal file

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-flame-icon lucide-flame"><path d="M8.5 14.5A2.5 2.5 0 0 0 11 12c0-1.38-.5-2-1-3-1.072-2.143-.224-4.054 2-6 .5 2.5 2 4.9 4 6.5 2 1.6 3 3.5 3 5.5a7 7 0 1 1-14 0c0-1.153.433-2.294 1-3a2.5 2.5 0 0 0 2.5 2.5z"/></svg>



@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-square-function-icon lucide-square-function"><rect width="18" height="18" x="3" y="3" rx="2" ry="2"/><path d="M9 17c2 0 2.8-1 2.8-2.8V10c0-2 1-3.3 3.2-3"/><path d="M9 11.2h5.7"/></svg>



@@ -0,0 +1,3 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10.1331 11.3776C10.2754 10.6665 10.1331 9.78593 11.1998 8.53327C11.82 7.80489 12.2664 6.96894 12.2664 6.04456C12.2664 4.91305 11.8169 3.82788 11.0168 3.02778C10.2167 2.22769 9.13152 1.7782 8.00001 1.7782C6.8685 1.7782 5.78334 2.22769 4.98324 3.02778C4.18314 3.82788 3.73364 4.91305 3.73364 6.04456C3.73364 6.75562 3.87586 7.6089 4.80024 8.53327C5.86683 9.80679 5.72462 10.6665 5.86683 11.3776M10.1331 11.3776V12.8821C10.1331 13.622 9.53341 14.2218 8.79353 14.2218H7.2065C6.46662 14.2218 5.86683 13.622 5.86683 12.8821V11.3776M10.1331 11.3776H5.86683" stroke="black" stroke-width="1.33333" stroke-linecap="round" stroke-linejoin="round"/>
</svg>



@@ -150,7 +150,7 @@
"context": "AgentDiff",
"bindings": {
"ctrl-y": "agent::Keep",
"ctrl-k ctrl-r": "agent::Reject"
"ctrl-n": "agent::Reject"
}
},
{
@@ -625,12 +625,13 @@
"context": "AgentPanel",
"bindings": {
"ctrl-n": "agent::NewThread",
"ctrl-alt-n": "agent::NewPromptEditor",
"ctrl-alt-n": "agent::NewTextThread",
"ctrl-shift-h": "agent::OpenHistory",
"ctrl-alt-c": "agent::OpenConfiguration",
"ctrl-i": "agent::ToggleProfileSelector",
"ctrl-alt-/": "assistant::ToggleModelSelector",
"ctrl-shift-a": "agent::ToggleContextPicker",
"shift-escape": "agent::ExpandMessageEditor",
"ctrl-e": "agent::ChatMode",
"ctrl-alt-e": "agent::RemoveAllContext"
}
@@ -644,9 +645,8 @@
},
{
"context": "AgentPanel && prompt_editor",
"use_key_equivalents": true,
"bindings": {
"cmd-n": "agent::NewPromptEditor",
"cmd-n": "agent::NewTextThread",
"cmd-alt-t": "agent::NewThread"
}
},
@@ -660,7 +660,6 @@
},
{
"context": "EditMessageEditor > Editor",
"use_key_equivalents": true,
"bindings": {
"escape": "menu::Cancel",
"enter": "menu::Confirm",
@@ -669,7 +668,6 @@
},
{
"context": "AgentFeedbackMessageEditor > Editor",
"use_key_equivalents": true,
"bindings": {
"escape": "menu::Cancel",
"enter": "menu::Confirm",
@@ -801,7 +799,6 @@
},
{
"context": "GitPanel",
"use_key_equivalents": true,
"bindings": {
"ctrl-g ctrl-g": "git::Fetch",
"ctrl-g up": "git::Push",


@@ -242,7 +242,7 @@
"use_key_equivalents": true,
"bindings": {
"cmd-y": "agent::Keep",
"cmd-alt-z": "agent::Reject"
"cmd-n": "agent::Reject"
}
},
{
@@ -281,12 +281,13 @@
"use_key_equivalents": true,
"bindings": {
"cmd-n": "agent::NewThread",
"cmd-alt-n": "agent::NewPromptEditor",
"cmd-alt-n": "agent::NewTextThread",
"cmd-shift-h": "agent::OpenHistory",
"cmd-alt-c": "agent::OpenConfiguration",
"cmd-i": "agent::ToggleProfileSelector",
"cmd-alt-/": "assistant::ToggleModelSelector",
"cmd-shift-a": "agent::ToggleContextPicker",
"shift-escape": "agent::ExpandMessageEditor",
"cmd-e": "agent::ChatMode",
"cmd-alt-e": "agent::RemoveAllContext"
}
@@ -302,7 +303,7 @@
"context": "AgentPanel && prompt_editor",
"use_key_equivalents": true,
"bindings": {
"cmd-n": "agent::NewPromptEditor",
"cmd-n": "agent::NewTextThread",
"cmd-alt-t": "agent::NewThread"
}
},


@@ -58,7 +58,8 @@
"ctrl-shift-home": "editor::SelectToBeginning",
"ctrl-shift-end": "editor::SelectToEnd",
"ctrl-f8": "editor::ToggleBreakpoint",
"ctrl-shift-f8": "editor::EditLogBreakpoint"
"ctrl-shift-f8": "editor::EditLogBreakpoint",
"ctrl-shift-u": "editor::ToggleCase"
}
},
{


@@ -49,7 +49,9 @@
"ctrl-k ctrl-l": "editor::ConvertToLowerCase",
"shift-alt-m": "markdown::OpenPreviewToTheSide",
"ctrl-backspace": "editor::DeleteToPreviousWordStart",
"ctrl-delete": "editor::DeleteToNextWordEnd"
"ctrl-delete": "editor::DeleteToNextWordEnd",
"f3": "editor::FindNextMatch",
"shift-f3": "editor::FindPreviousMatch"
}
},
{
@@ -58,6 +60,12 @@
"ctrl-r": "outline::Toggle"
}
},
{
"context": "Editor && !agent_diff",
"bindings": {
"ctrl-k ctrl-z": "git::Restore"
}
},
{
"context": "Pane",
"bindings": {


@@ -55,7 +55,8 @@
"cmd-shift-home": "editor::SelectToBeginning",
"cmd-shift-end": "editor::SelectToEnd",
"ctrl-f8": "editor::ToggleBreakpoint",
"ctrl-shift-f8": "editor::EditLogBreakpoint"
"ctrl-shift-f8": "editor::EditLogBreakpoint",
"cmd-shift-u": "editor::ToggleCase"
}
},
{


@@ -51,7 +51,9 @@
"cmd-shift-j": "editor::JoinLines",
"shift-alt-m": "markdown::OpenPreviewToTheSide",
"ctrl-backspace": "editor::DeleteToPreviousWordStart",
"ctrl-delete": "editor::DeleteToNextWordEnd"
"ctrl-delete": "editor::DeleteToNextWordEnd",
"cmd-g": "editor::FindNextMatch",
"cmd-shift-g": "editor::FindPreviousMatch"
}
},
{
@@ -60,6 +62,12 @@
"cmd-r": "outline::Toggle"
}
},
{
"context": "Editor && !agent_diff",
"bindings": {
"cmd-k cmd-z": "git::Restore"
}
},
{
"context": "Pane",
"bindings": {


@@ -155,7 +155,7 @@ There are rules that apply to these root directories:
{{#each worktrees}}
{{#if rules_file}}
`{{root_name}}/{{rules_file.rel_path}}`:
`{{root_name}}/{{rules_file.path_in_worktree}}`:
``````
{{{rules_file.text}}}
@@ -163,3 +163,8 @@ There are rules that apply to these root directories:
{{/if}}
{{/each}}
{{/if}}
<user_environment>
Operating System: {{os}} ({{arch}})
Shell: {{shell}}
</user_environment>


@@ -624,14 +624,14 @@
// The provider to use.
"provider": "zed.dev",
// The model to use.
"model": "claude-3-5-sonnet-latest"
"model": "claude-3-7-sonnet-latest"
},
// The model to use when applying edits from the assistant.
"editor_model": {
// The provider to use.
"provider": "zed.dev",
// The model to use.
"model": "claude-3-5-sonnet-latest"
"model": "claude-3-7-sonnet-latest"
},
// When enabled, the agent can run potentially destructive actions without asking for your confirmation.
"always_allow_tool_actions": false,
@@ -656,8 +656,9 @@
"name": "Write",
"enable_all_context_servers": true,
"tools": {
"bash": true,
"terminal": true,
"batch_tool": true,
"code_actions": true,
"code_symbols": true,
"copy_path": false,
"create_file": true,
@@ -671,6 +672,7 @@
"path_search": true,
"read_file": true,
"regex_search": true,
"rename": true,
"symbol_info": true,
"thinking": true
}


@@ -31,6 +31,7 @@ client.workspace = true
clock.workspace = true
collections.workspace = true
command_palette_hooks.workspace = true
component.workspace = true
context_server.workspace = true
convert_case.workspace = true
db.workspace = true
@@ -50,6 +51,7 @@ itertools.workspace = true
language.workspace = true
language_model.workspace = true
language_model_selector.workspace = true
linkme.workspace = true
log.workspace = true
lsp.workspace = true
markdown.workspace = true
@@ -78,15 +80,16 @@ terminal.workspace = true
terminal_view.workspace = true
text.workspace = true
theme.workspace = true
thiserror.workspace = true
time.workspace = true
time_format.workspace = true
ui.workspace = true
ui_input.workspace = true
util.workspace = true
uuid.workspace = true
workspace-hack.workspace = true
workspace.workspace = true
zed_actions.workspace = true
workspace-hack.workspace = true
[dev-dependencies]
buffer_diff = { workspace = true, features = ["test-support"] }

File diff suppressed because it is too large


@@ -921,15 +921,16 @@ mod tests {
})
.unwrap();
let thread_store = cx.update(|cx| {
ThreadStore::new(
project.clone(),
Arc::default(),
Arc::new(PromptBuilder::new(None).unwrap()),
cx,
)
.unwrap()
});
let thread_store = cx
.update(|cx| {
ThreadStore::load(
project.clone(),
Arc::default(),
Arc::new(PromptBuilder::new(None).unwrap()),
cx,
)
})
.await;
let thread = thread_store.update(cx, |store, cx| store.create_thread(cx));
let action_log = thread.read_with(cx, |thread, _| thread.action_log().clone());


@@ -46,10 +46,11 @@ pub use agent_diff::{AgentDiff, AgentDiffToolbar};
actions!(
agent,
[
NewPromptEditor,
NewTextThread,
ToggleContextPicker,
ToggleProfileSelector,
RemoveAllContext,
ExpandMessageEditor,
OpenHistory,
AddContextServer,
RemoveSelectedThread,


@@ -44,7 +44,7 @@ use crate::thread::{Thread, ThreadError, ThreadId, TokenUsageRatio};
use crate::thread_history::{PastContext, PastThread, ThreadHistory};
use crate::thread_store::ThreadStore;
use crate::{
AgentDiff, InlineAssistant, NewPromptEditor, NewThread, OpenActiveThreadAsMarkdown,
AgentDiff, InlineAssistant, NewTextThread, NewThread, OpenActiveThreadAsMarkdown,
OpenAgentDiff, OpenHistory, ThreadEvent, ToggleContextPicker,
};
@@ -70,7 +70,7 @@ pub fn init(cx: &mut App) {
panel.update(cx, |panel, cx| panel.open_configuration(window, cx));
}
})
.register_action(|workspace, _: &NewPromptEditor, window, cx| {
.register_action(|workspace, _: &NewTextThread, window, cx| {
if let Some(panel) = workspace.panel::<AssistantPanel>(cx) {
workspace.focus_panel::<AssistantPanel>(window, cx);
panel.update(cx, |panel, cx| panel.new_prompt_editor(window, cx));
@@ -194,10 +194,12 @@ impl AssistantPanel {
) -> Task<Result<Entity<Self>>> {
cx.spawn(async move |cx| {
let tools = Arc::new(ToolWorkingSet::default());
let thread_store = workspace.update(cx, |workspace, cx| {
let project = workspace.project().clone();
ThreadStore::new(project, tools.clone(), prompt_builder.clone(), cx)
})??;
let thread_store = workspace
.update(cx, |workspace, cx| {
let project = workspace.project().clone();
ThreadStore::load(project, tools.clone(), prompt_builder.clone(), cx)
})?
.await;
let slash_commands = Arc::new(SlashCommandWorkingSet::default());
let context_store = workspace
@@ -227,14 +229,14 @@ impl AssistantPanel {
) -> Self {
let thread = thread_store.update(cx, |this, cx| this.create_thread(cx));
let fs = workspace.app_state().fs.clone();
let project = workspace.project().clone();
let project = workspace.project();
let language_registry = project.read(cx).languages().clone();
let workspace = workspace.weak_handle();
let weak_self = cx.entity().downgrade();
let message_editor_context_store = cx.new(|_cx| {
crate::context_store::ContextStore::new(
workspace.clone(),
project.downgrade(),
Some(thread_store.downgrade()),
)
});
@@ -344,7 +346,7 @@ impl AssistantPanel {
let message_editor_context_store = cx.new(|_cx| {
crate::context_store::ContextStore::new(
self.workspace.clone(),
self.project.downgrade(),
Some(self.thread_store.downgrade()),
)
});
@@ -521,7 +523,7 @@ impl AssistantPanel {
this.set_active_view(thread_view, window, cx);
let message_editor_context_store = cx.new(|_cx| {
crate::context_store::ContextStore::new(
this.workspace.clone(),
this.project.downgrade(),
Some(this.thread_store.downgrade()),
)
});
@@ -855,13 +857,19 @@ impl AssistantPanel {
if is_empty {
Label::new(Thread::DEFAULT_SUMMARY.clone())
.truncate()
.ml_2()
.into_any_element()
} else if summary.is_none() {
Label::new(LOADING_SUMMARY_PLACEHOLDER)
.ml_2()
.truncate()
.into_any_element()
} else {
change_title_editor.clone().into_any_element()
div()
.ml_2()
.w_full()
.child(change_title_editor.clone())
.into_any_element()
}
}
ActiveView::PromptEditor => {
@@ -873,7 +881,7 @@ impl AssistantPanel {
})
.unwrap_or_else(|| SharedString::from(LOADING_SUMMARY_PLACEHOLDER));
Label::new(title).truncate().into_any_element()
Label::new(title).ml_2().truncate().into_any_element()
}
ActiveView::History => Label::new("History").truncate().into_any_element(),
ActiveView::Configuration => Label::new("Settings").truncate().into_any_element(),
@@ -910,23 +918,25 @@ impl AssistantPanel {
let go_back_button = match &self.active_view {
ActiveView::History | ActiveView::Configuration => Some(
IconButton::new("go-back", IconName::ArrowLeft)
.icon_size(IconSize::Small)
.on_click(cx.listener(|this, _, window, cx| {
this.go_back(&workspace::GoBack, window, cx);
}))
.tooltip({
let focus_handle = focus_handle.clone();
move |window, cx| {
Tooltip::for_action_in(
"Go Back",
&workspace::GoBack,
&focus_handle,
window,
cx,
)
}
}),
div().pl_1().child(
IconButton::new("go-back", IconName::ArrowLeft)
.icon_size(IconSize::Small)
.on_click(cx.listener(|this, _, window, cx| {
this.go_back(&workspace::GoBack, window, cx);
}))
.tooltip({
let focus_handle = focus_handle.clone();
move |window, cx| {
Tooltip::for_action_in(
"Go Back",
&workspace::GoBack,
&focus_handle,
window,
cx,
)
}
}),
),
),
_ => None,
};
@@ -944,8 +954,7 @@ impl AssistantPanel {
.child(
h_flex()
.w_full()
.pl_2()
.gap_2()
.gap_1()
.children(go_back_button)
.child(self.render_title_view(window, cx)),
)
@@ -1080,8 +1089,8 @@ impl AssistantPanel {
cx,
|menu, _window, _cx| {
menu.action(
"New Prompt Editor",
NewPromptEditor.boxed_clone(),
"New Text Thread",
NewTextThread.boxed_clone(),
)
.when(!is_empty, |menu| {
menu.action(
@@ -1289,6 +1298,7 @@ impl AssistantPanel {
let configuration_error_ref = &configuration_error;
parent
.overflow_hidden()
.p_1p5()
.justify_end()
.gap_1()
@@ -1621,7 +1631,21 @@ impl prompt_library::InlineAssistDelegate for PromptLibraryInlineAssist {
cx: &mut Context<PromptLibrary>,
) {
InlineAssistant::update_global(cx, |assistant, cx| {
assistant.assist(&prompt_editor, self.workspace.clone(), None, window, cx)
let Some(project) = self
.workspace
.upgrade()
.map(|workspace| workspace.read(cx).project().downgrade())
else {
return;
};
assistant.assist(
&prompt_editor,
self.workspace.clone(),
project,
None,
window,
cx,
)
})
}


@@ -1,9 +1,9 @@
use std::{ops::Range, sync::Arc};
use std::{ops::Range, path::Path, sync::Arc};
use gpui::{App, Entity, SharedString};
use language::{Buffer, File};
use language_model::LanguageModelRequestMessage;
use project::ProjectPath;
use project::{ProjectPath, Worktree};
use serde::{Deserialize, Serialize};
use text::{Anchor, BufferId};
use ui::IconName;
@@ -69,10 +69,21 @@ pub struct FileContext {
#[derive(Debug, Clone)]
pub struct DirectoryContext {
pub id: ContextId,
pub project_path: ProjectPath,
pub worktree: Entity<Worktree>,
pub path: Arc<Path>,
/// Buffers of the files within the directory.
pub context_buffers: Vec<ContextBuffer>,
}
impl DirectoryContext {
pub fn project_path(&self, cx: &App) -> ProjectPath {
ProjectPath {
worktree_id: self.worktree.read(cx).id(),
path: self.path.clone(),
}
}
}
#[derive(Debug, Clone)]
pub struct SymbolContext {
pub id: ContextId,
@@ -86,12 +97,11 @@ pub struct FetchedUrlContext {
pub text: SharedString,
}
// TODO: Model<Thread> holds onto the thread even if the thread is deleted. Can either handle this
// explicitly or have a WeakModel<Thread> and remove during snapshot.
#[derive(Debug, Clone)]
pub struct ThreadContext {
pub id: ContextId,
// TODO: Entity<Thread> holds onto the thread even if the thread is deleted. Should probably be
// a WeakEntity and handle removal from the UI when it has dropped.
pub thread: Entity<Thread>,
pub text: SharedString,
}
@@ -105,12 +115,11 @@ impl ThreadContext {
}
}
// TODO: Model<Buffer> holds onto the buffer even if the file is deleted and closed. Should remove
// the context from the message editor in this case.
#[derive(Clone)]
pub struct ContextBuffer {
pub id: BufferId,
// TODO: Entity<Buffer> holds onto the thread even if the thread is deleted. Should probably be
// a WeakEntity and handle removal from the UI when it has dropped.
pub buffer: Entity<Buffer>,
pub file: Arc<dyn File>,
pub version: clock::Global,


@@ -289,12 +289,14 @@ impl ContextPicker {
path_prefix,
} => {
let context_store = self.context_store.clone();
let worktree_id = project_path.worktree_id;
let path = project_path.path.clone();
ContextMenuItem::custom_entry(
move |_window, cx| {
render_file_context_entry(
ElementId::NamedInteger("ctx-recent".into(), ix),
worktree_id,
&path,
&path_prefix,
false,
@@ -466,7 +468,7 @@ fn recent_context_picker_entries(
recent.extend(
workspace
.recent_navigation_history_iter(cx)
.filter(|(path, _)| !current_files.contains(&path.path.to_path_buf()))
.filter(|(path, _)| !current_files.contains(path))
.take(4)
.filter_map(|(project_path, _)| {
project
@@ -507,14 +509,13 @@ fn recent_context_picker_entries(
recent
}
pub(crate) fn insert_crease_for_mention(
pub(crate) fn insert_fold_for_mention(
excerpt_id: ExcerptId,
crease_start: text::Anchor,
content_len: usize,
crease_label: SharedString,
crease_icon_path: SharedString,
editor_entity: Entity<Editor>,
window: &mut Window,
cx: &mut App,
) {
editor_entity.update(cx, |editor, cx| {
@@ -533,6 +534,7 @@ pub(crate) fn insert_crease_for_mention(
crease_label,
editor_entity.downgrade(),
),
merge_adjacent: false,
..Default::default()
};
@@ -546,8 +548,9 @@ pub(crate) fn insert_crease_for_mention(
render_trailer,
);
editor.insert_creases(vec![crease.clone()], cx);
editor.fold_creases(vec![crease], false, window, cx);
editor.display_map.update(cx, |display_map, cx| {
display_map.fold(vec![crease], cx);
});
});
}
@@ -604,12 +607,13 @@ fn render_fold_icon_button(
.gap_1()
.child(
Icon::from_path(icon_path.clone())
.size(IconSize::Small)
.size(IconSize::XSmall)
.color(Color::Muted),
)
.child(
Label::new(label.clone())
.size(LabelSize::Small)
.buffer_font(cx)
.single_line(),
),
)


@@ -18,16 +18,133 @@ use text::{Anchor, ToPoint};
use ui::prelude::*;
use workspace::Workspace;
use crate::context::AssistantContext;
use crate::context_picker::file_context_picker::search_files;
use crate::context_picker::symbol_context_picker::search_symbols;
use crate::context_store::ContextStore;
use crate::thread_store::ThreadStore;
use super::fetch_context_picker::fetch_url_content;
use super::thread_context_picker::ThreadContextEntry;
use super::file_context_picker::FileMatch;
use super::symbol_context_picker::SymbolMatch;
use super::thread_context_picker::{ThreadContextEntry, ThreadMatch, search_threads};
use super::{
ContextPickerMode, MentionLink, recent_context_picker_entries, supported_context_picker_modes,
ContextPickerMode, MentionLink, RecentEntry, recent_context_picker_entries,
supported_context_picker_modes,
};
pub(crate) enum Match {
Symbol(SymbolMatch),
File(FileMatch),
Thread(ThreadMatch),
Fetch(SharedString),
Mode(ContextPickerMode),
}
fn search(
mode: Option<ContextPickerMode>,
query: String,
cancellation_flag: Arc<AtomicBool>,
recent_entries: Vec<RecentEntry>,
thread_store: Option<WeakEntity<ThreadStore>>,
workspace: Entity<Workspace>,
cx: &mut App,
) -> Task<Vec<Match>> {
match mode {
Some(ContextPickerMode::File) => {
let search_files_task =
search_files(query.clone(), cancellation_flag.clone(), &workspace, cx);
cx.background_spawn(async move {
search_files_task
.await
.into_iter()
.map(Match::File)
.collect()
})
}
Some(ContextPickerMode::Symbol) => {
let search_symbols_task =
search_symbols(query.clone(), cancellation_flag.clone(), &workspace, cx);
cx.background_spawn(async move {
search_symbols_task
.await
.into_iter()
.map(Match::Symbol)
.collect()
})
}
Some(ContextPickerMode::Thread) => {
if let Some(thread_store) = thread_store.as_ref().and_then(|t| t.upgrade()) {
let search_threads_task =
search_threads(query.clone(), cancellation_flag.clone(), thread_store, cx);
cx.background_spawn(async move {
search_threads_task
.await
.into_iter()
.map(Match::Thread)
.collect()
})
} else {
Task::ready(Vec::new())
}
}
Some(ContextPickerMode::Fetch) => {
if !query.is_empty() {
Task::ready(vec![Match::Fetch(query.into())])
} else {
Task::ready(Vec::new())
}
}
None => {
if query.is_empty() {
let mut matches = recent_entries
.into_iter()
.map(|entry| match entry {
super::RecentEntry::File {
project_path,
path_prefix,
} => Match::File(FileMatch {
mat: fuzzy::PathMatch {
score: 1.,
positions: Vec::new(),
worktree_id: project_path.worktree_id.to_usize(),
path: project_path.path,
path_prefix,
is_dir: false,
distance_to_relative_ancestor: 0,
},
is_recent: true,
}),
super::RecentEntry::Thread(thread_context_entry) => {
Match::Thread(ThreadMatch {
thread: thread_context_entry,
is_recent: true,
})
}
})
.collect::<Vec<_>>();
matches.extend(
supported_context_picker_modes(&thread_store)
.into_iter()
.map(Match::Mode),
);
Task::ready(matches)
} else {
let search_files_task =
search_files(query.clone(), cancellation_flag.clone(), &workspace, cx);
cx.background_spawn(async move {
search_files_task
.await
.into_iter()
.map(Match::File)
.collect()
})
}
}
}
}
pub struct ContextPickerCompletionProvider {
workspace: WeakEntity<Workspace>,
context_store: WeakEntity<ContextStore>,
@@ -50,97 +167,20 @@ impl ContextPickerCompletionProvider {
}
}
fn default_completions(
excerpt_id: ExcerptId,
source_range: Range<Anchor>,
context_store: Entity<ContextStore>,
thread_store: Option<WeakEntity<ThreadStore>>,
editor: Entity<Editor>,
workspace: Entity<Workspace>,
cx: &App,
) -> Vec<Completion> {
let mut completions = Vec::new();
completions.extend(
recent_context_picker_entries(
context_store.clone(),
thread_store.clone(),
workspace.clone(),
cx,
)
.iter()
.filter_map(|entry| match entry {
super::RecentEntry::File {
project_path,
path_prefix,
} => Some(Self::completion_for_path(
project_path.clone(),
path_prefix,
true,
false,
excerpt_id,
source_range.clone(),
editor.clone(),
context_store.clone(),
cx,
)),
super::RecentEntry::Thread(thread_context_entry) => {
let thread_store = thread_store
.as_ref()
.and_then(|thread_store| thread_store.upgrade())?;
Some(Self::completion_for_thread(
thread_context_entry.clone(),
excerpt_id,
source_range.clone(),
true,
editor.clone(),
context_store.clone(),
thread_store,
))
}
}),
);
completions.extend(
supported_context_picker_modes(&thread_store)
.iter()
.map(|mode| {
Completion {
replace_range: source_range.clone(),
new_text: format!("@{} ", mode.mention_prefix()),
label: CodeLabel::plain(mode.label().to_string(), None),
icon_path: Some(mode.icon().path().into()),
documentation: None,
source: project::CompletionSource::Custom,
insert_text_mode: None,
// This ensures that when a user accepts this completion, the
// completion menu will still be shown after "@category " is
// inserted
confirm: Some(Arc::new(|_, _, _| true)),
}
}),
);
completions
}
fn build_code_label_for_full_path(
file_name: &str,
directory: Option<&str>,
cx: &App,
) -> CodeLabel {
let comment_id = cx.theme().syntax().highlight_id("comment").map(HighlightId);
let mut label = CodeLabel::default();
label.push_str(&file_name, None);
label.push_str(" ", None);
if let Some(directory) = directory {
label.push_str(&directory, comment_id);
fn completion_for_mode(source_range: Range<Anchor>, mode: ContextPickerMode) -> Completion {
Completion {
replace_range: source_range.clone(),
new_text: format!("@{} ", mode.mention_prefix()),
label: CodeLabel::plain(mode.label().to_string(), None),
icon_path: Some(mode.icon().path().into()),
documentation: None,
source: project::CompletionSource::Custom,
insert_text_mode: None,
// This ensures that when a user accepts this completion, the
// completion menu will still be shown after "@category " is
// inserted
confirm: Some(Arc::new(|_, _, _| true)),
}
label.filter_range = 0..label.text().len();
label
}
fn completion_for_thread(
@@ -261,11 +301,8 @@ impl ContextPickerCompletionProvider {
path_prefix,
);
let label = Self::build_code_label_for_full_path(
&file_name,
directory.as_ref().map(|s| s.as_ref()),
cx,
);
let label =
build_code_label_for_full_path(&file_name, directory.as_ref().map(|s| s.as_ref()), cx);
let full_path = if let Some(directory) = directory {
format!("{}{}", directory, file_name)
} else {
@@ -382,6 +419,22 @@ impl ContextPickerCompletionProvider {
}
}
fn build_code_label_for_full_path(file_name: &str, directory: Option<&str>, cx: &App) -> CodeLabel {
let comment_id = cx.theme().syntax().highlight_id("comment").map(HighlightId);
let mut label = CodeLabel::default();
label.push_str(&file_name, None);
label.push_str(" ", None);
if let Some(directory) = directory {
label.push_str(&directory, comment_id);
}
label.filter_range = 0..label.text().len();
label
}
impl CompletionProvider for ContextPickerCompletionProvider {
fn completions(
&self,
@@ -404,10 +457,9 @@ impl CompletionProvider for ContextPickerCompletionProvider {
return Task::ready(Ok(None));
};
let Some(workspace) = self.workspace.upgrade() else {
return Task::ready(Ok(None));
};
let Some(context_store) = self.context_store.upgrade() else {
let Some((workspace, context_store)) =
self.workspace.upgrade().zip(self.context_store.upgrade())
else {
return Task::ready(Ok(None));
};
@@ -419,154 +471,89 @@ impl CompletionProvider for ContextPickerCompletionProvider {
let editor = self.editor.clone();
let http_client = workspace.read(cx).client().http_client().clone();
let MentionCompletion { mode, argument, .. } = state;
let query = argument.unwrap_or_else(|| "".to_string());
let recent_entries = recent_context_picker_entries(
context_store.clone(),
thread_store.clone(),
workspace.clone(),
cx,
);
let search_task = search(
mode,
query,
Arc::<AtomicBool>::default(),
recent_entries,
thread_store.clone(),
workspace.clone(),
cx,
);
cx.spawn(async move |_, cx| {
let mut completions = Vec::new();
let matches = search_task.await;
let Some(editor) = editor.upgrade() else {
return Ok(None);
};
let MentionCompletion { mode, argument, .. } = state;
let query = argument.unwrap_or_else(|| "".to_string());
match mode {
Some(ContextPickerMode::File) => {
let path_matches = cx
.update(|cx| {
super::file_context_picker::search_paths(
query,
Arc::<AtomicBool>::default(),
&workspace,
cx,
)
})?
.await;
if let Some(editor) = editor.upgrade() {
completions.reserve(path_matches.len());
cx.update(|cx| {
completions.extend(path_matches.iter().map(|mat| {
Self::completion_for_path(
ProjectPath {
worktree_id: WorktreeId::from_usize(mat.worktree_id),
path: mat.path.clone(),
},
&mat.path_prefix,
false,
mat.is_dir,
excerpt_id,
source_range.clone(),
editor.clone(),
context_store.clone(),
cx,
)
}));
})?;
}
}
Some(ContextPickerMode::Symbol) => {
if let Some(editor) = editor.upgrade() {
let symbol_matches = cx
.update(|cx| {
super::symbol_context_picker::search_symbols(
query,
Arc::new(AtomicBool::default()),
&workspace,
cx,
)
})?
.await?;
cx.update(|cx| {
completions.extend(symbol_matches.into_iter().filter_map(
|(_, symbol)| {
Self::completion_for_symbol(
symbol,
excerpt_id,
source_range.clone(),
editor.clone(),
context_store.clone(),
workspace.clone(),
cx,
)
Ok(Some(cx.update(|cx| {
matches
.into_iter()
.filter_map(|mat| match mat {
Match::File(FileMatch { mat, is_recent }) => {
Some(Self::completion_for_path(
ProjectPath {
worktree_id: WorktreeId::from_usize(mat.worktree_id),
path: mat.path.clone(),
},
));
})?;
}
}
Some(ContextPickerMode::Fetch) => {
if let Some(editor) = editor.upgrade() {
if !query.is_empty() {
completions.push(Self::completion_for_fetch(
source_range.clone(),
query.into(),
&mat.path_prefix,
is_recent,
mat.is_dir,
excerpt_id,
source_range.clone(),
editor.clone(),
context_store.clone(),
http_client.clone(),
));
}
context_store.update(cx, |store, _| {
let urls = store.context().iter().filter_map(|context| {
if let AssistantContext::FetchedUrl(context) = context {
Some(context.url.clone())
} else {
None
}
});
for url in urls {
completions.push(Self::completion_for_fetch(
source_range.clone(),
url,
excerpt_id,
editor.clone(),
context_store.clone(),
http_client.clone(),
));
}
})?;
}
}
Some(ContextPickerMode::Thread) => {
if let Some((thread_store, editor)) = thread_store
.and_then(|thread_store| thread_store.upgrade())
.zip(editor.upgrade())
{
let threads = cx
.update(|cx| {
super::thread_context_picker::search_threads(
query,
thread_store.clone(),
cx,
)
})?
.await;
for thread in threads {
completions.push(Self::completion_for_thread(
thread.clone(),
excerpt_id,
source_range.clone(),
false,
editor.clone(),
context_store.clone(),
thread_store.clone(),
));
}
}
}
None => {
cx.update(|cx| {
if let Some(editor) = editor.upgrade() {
completions.extend(Self::default_completions(
excerpt_id,
source_range.clone(),
context_store.clone(),
thread_store.clone(),
editor,
workspace.clone(),
cx,
));
))
}
})?;
}
}
Ok(Some(completions))
Match::Symbol(SymbolMatch { symbol, .. }) => Self::completion_for_symbol(
symbol,
excerpt_id,
source_range.clone(),
editor.clone(),
context_store.clone(),
workspace.clone(),
cx,
),
Match::Thread(ThreadMatch {
thread, is_recent, ..
}) => {
let thread_store = thread_store.as_ref().and_then(|t| t.upgrade())?;
Some(Self::completion_for_thread(
thread,
excerpt_id,
source_range.clone(),
is_recent,
editor.clone(),
context_store.clone(),
thread_store,
))
}
Match::Fetch(url) => Some(Self::completion_for_fetch(
source_range.clone(),
url,
excerpt_id,
editor.clone(),
context_store.clone(),
http_client.clone(),
)),
Match::Mode(mode) => {
Some(Self::completion_for_mode(source_range.clone(), mode))
}
})
.collect()
})?))
})
}
@@ -623,21 +610,20 @@ fn confirm_completion_callback(
editor: Entity<Editor>,
add_context_fn: impl Fn(&mut App) -> () + Send + Sync + 'static,
) -> Arc<dyn Fn(CompletionIntent, &mut Window, &mut App) -> bool + Send + Sync> {
Arc::new(move |_, window, cx| {
Arc::new(move |_, _, cx| {
add_context_fn(cx);
let crease_text = crease_text.clone();
let crease_icon_path = crease_icon_path.clone();
let editor = editor.clone();
window.defer(cx, move |window, cx| {
crate::context_picker::insert_crease_for_mention(
cx.defer(move |cx| {
crate::context_picker::insert_fold_for_mention(
excerpt_id,
start,
content_len,
crease_text,
crease_icon_path,
editor,
window,
cx,
);
});
@@ -676,7 +662,12 @@ impl MentionCompletion {
let mut end = last_mention_start + 1;
if let Some(mode_text) = parts.next() {
end += mode_text.len();
mode = ContextPickerMode::try_from(mode_text).ok();
if let Some(parsed_mode) = ContextPickerMode::try_from(mode_text).ok() {
mode = Some(parsed_mode);
} else {
argument = Some(mode_text.to_string());
}
match rest_of_line[mode_text.len()..].find(|c: char| !c.is_whitespace()) {
Some(whitespace_count) => {
if let Some(argument_text) = parts.next() {
@@ -702,13 +693,14 @@ impl MentionCompletion {
#[cfg(test)]
mod tests {
use super::*;
use gpui::{Focusable, TestAppContext, VisualTestContext};
use editor::AnchorRangeExt;
use gpui::{EventEmitter, FocusHandle, Focusable, TestAppContext, VisualTestContext};
use project::{Project, ProjectPath};
use serde_json::json;
use settings::SettingsStore;
use std::{ops::Deref, path::PathBuf};
use std::ops::Deref;
use util::{path, separator};
use workspace::AppState;
use workspace::{AppState, Item};
#[test]
fn test_mention_completion_parse() {
@@ -768,9 +760,42 @@ mod tests {
})
);
assert_eq!(
MentionCompletion::try_parse("Lorem @main", 0),
Some(MentionCompletion {
source_range: 6..11,
mode: None,
argument: Some("main".to_string()),
})
);
assert_eq!(MentionCompletion::try_parse("test@", 0), None);
}
struct AtMentionEditor(Entity<Editor>);
impl Item for AtMentionEditor {
type Event = ();
fn include_in_nav_history() -> bool {
false
}
}
impl EventEmitter<()> for AtMentionEditor {}
impl Focusable for AtMentionEditor {
fn focus_handle(&self, cx: &App) -> FocusHandle {
self.0.read(cx).focus_handle(cx).clone()
}
}
impl Render for AtMentionEditor {
fn render(&mut self, _window: &mut Window, _cx: &mut Context<Self>) -> impl IntoElement {
self.0.clone().into_any_element()
}
}
#[gpui::test]
async fn test_context_completion_provider(cx: &mut TestAppContext) {
init_test(cx);
@@ -846,28 +871,30 @@ mod tests {
.unwrap();
}
let item = workspace
.update_in(&mut cx, |workspace, window, cx| {
workspace.open_path(
ProjectPath {
worktree_id,
path: PathBuf::from("editor").into(),
},
let editor = workspace.update_in(&mut cx, |workspace, window, cx| {
let editor = cx.new(|cx| {
Editor::new(
editor::EditorMode::full(),
multi_buffer::MultiBuffer::build_simple("", cx),
None,
true,
window,
cx,
)
})
.await
.expect("Could not open test file");
let editor = cx.update(|_, cx| {
item.act_as::<Editor>(cx)
.expect("Opened test file wasn't an editor")
});
workspace.active_pane().update(cx, |pane, cx| {
pane.add_item(
Box::new(cx.new(|_| AtMentionEditor(editor.clone()))),
true,
true,
None,
window,
cx,
);
});
editor
});
let context_store = cx.new(|_| ContextStore::new(workspace.downgrade(), None));
let context_store = cx.new(|_| ContextStore::new(project.downgrade(), None));
let editor_entity = editor.downgrade();
editor.update_in(&mut cx, |editor, window, cx| {
@@ -895,10 +922,10 @@ mod tests {
assert_eq!(
current_completion_labels(editor),
&[
"editor dir/",
"seven.txt dir/b/",
"six.txt dir/b/",
"five.txt dir/b/",
"four.txt dir/a/",
"Files & Directories",
"Symbols",
"Fetch"
@@ -940,7 +967,7 @@ mod tests {
assert_eq!(editor.text(cx), "Lorem [@one.txt](@file:dir/a/one.txt)",);
assert!(!editor.has_visible_completions_menu());
assert_eq!(
crease_ranges(editor, cx),
fold_ranges(editor, cx),
vec![Point::new(0, 6)..Point::new(0, 37)]
);
});
@@ -951,7 +978,7 @@ mod tests {
assert_eq!(editor.text(cx), "Lorem [@one.txt](@file:dir/a/one.txt) ",);
assert!(!editor.has_visible_completions_menu());
assert_eq!(
crease_ranges(editor, cx),
fold_ranges(editor, cx),
vec![Point::new(0, 6)..Point::new(0, 37)]
);
});
@@ -965,7 +992,7 @@ mod tests {
);
assert!(!editor.has_visible_completions_menu());
assert_eq!(
crease_ranges(editor, cx),
fold_ranges(editor, cx),
vec![Point::new(0, 6)..Point::new(0, 37)]
);
});
@@ -979,7 +1006,7 @@ mod tests {
);
assert!(editor.has_visible_completions_menu());
assert_eq!(
crease_ranges(editor, cx),
fold_ranges(editor, cx),
vec![Point::new(0, 6)..Point::new(0, 37)]
);
});
@@ -993,14 +1020,14 @@ mod tests {
editor.update(&mut cx, |editor, cx| {
assert_eq!(
editor.text(cx),
"Lorem [@one.txt](@file:dir/a/one.txt) Ipsum [@editor](@file:dir/editor)"
"Lorem [@one.txt](@file:dir/a/one.txt) Ipsum [@seven.txt](@file:dir/b/seven.txt)"
);
assert!(!editor.has_visible_completions_menu());
assert_eq!(
crease_ranges(editor, cx),
fold_ranges(editor, cx),
vec![
Point::new(0, 6)..Point::new(0, 37),
Point::new(0, 44)..Point::new(0, 71)
Point::new(0, 44)..Point::new(0, 79)
]
);
});
@@ -1010,14 +1037,14 @@ mod tests {
editor.update(&mut cx, |editor, cx| {
assert_eq!(
editor.text(cx),
"Lorem [@one.txt](@file:dir/a/one.txt) Ipsum [@editor](@file:dir/editor)\n@"
"Lorem [@one.txt](@file:dir/a/one.txt) Ipsum [@seven.txt](@file:dir/b/seven.txt)\n@"
);
assert!(editor.has_visible_completions_menu());
assert_eq!(
crease_ranges(editor, cx),
fold_ranges(editor, cx),
vec![
Point::new(0, 6)..Point::new(0, 37),
Point::new(0, 44)..Point::new(0, 71)
Point::new(0, 44)..Point::new(0, 79)
]
);
});
@@ -1031,29 +1058,27 @@ mod tests {
editor.update(&mut cx, |editor, cx| {
assert_eq!(
editor.text(cx),
"Lorem [@one.txt](@file:dir/a/one.txt) Ipsum [@editor](@file:dir/editor)\n[@seven.txt](@file:dir/b/seven.txt)"
"Lorem [@one.txt](@file:dir/a/one.txt) Ipsum [@seven.txt](@file:dir/b/seven.txt)\n[@six.txt](@file:dir/b/six.txt)"
);
assert!(!editor.has_visible_completions_menu());
assert_eq!(
crease_ranges(editor, cx),
fold_ranges(editor, cx),
vec![
Point::new(0, 6)..Point::new(0, 37),
Point::new(0, 44)..Point::new(0, 71),
Point::new(1, 0)..Point::new(1, 35)
Point::new(0, 44)..Point::new(0, 79),
Point::new(1, 0)..Point::new(1, 31)
]
);
});
}
fn crease_ranges(editor: &Editor, cx: &mut App) -> Vec<Range<Point>> {
fn fold_ranges(editor: &Editor, cx: &mut App) -> Vec<Range<Point>> {
let snapshot = editor.buffer().read(cx).snapshot(cx);
editor.display_map.update(cx, |display_map, cx| {
display_map
.snapshot(cx)
.crease_snapshot
.crease_items_with_offsets(&snapshot)
.into_iter()
.map(|(_, range)| range)
.folds_in_range(0..snapshot.len())
.map(|fold| fold.range.to_point(&snapshot))
.collect()
})
}


@@ -58,7 +58,7 @@ pub struct FileContextPickerDelegate {
workspace: WeakEntity<Workspace>,
context_store: WeakEntity<ContextStore>,
confirm_behavior: ConfirmBehavior,
matches: Vec<PathMatch>,
matches: Vec<FileMatch>,
selected_index: usize,
}
@@ -114,7 +114,7 @@ impl PickerDelegate for FileContextPickerDelegate {
return Task::ready(());
};
let search_task = search_paths(query, Arc::<AtomicBool>::default(), &workspace, cx);
let search_task = search_files(query, Arc::<AtomicBool>::default(), &workspace, cx);
cx.spawn_in(window, async move |this, cx| {
// TODO: This should be probably be run in the background.
@@ -128,7 +128,7 @@ impl PickerDelegate for FileContextPickerDelegate {
}
fn confirm(&mut self, _secondary: bool, window: &mut Window, cx: &mut Context<Picker<Self>>) {
let Some(mat) = self.matches.get(self.selected_index) else {
let Some(FileMatch { mat, .. }) = self.matches.get(self.selected_index) else {
return;
};
@@ -181,7 +181,7 @@ impl PickerDelegate for FileContextPickerDelegate {
_window: &mut Window,
cx: &mut Context<Picker<Self>>,
) -> Option<Self::ListItem> {
let path_match = &self.matches[ix];
let FileMatch { mat, .. } = &self.matches[ix];
Some(
ListItem::new(ix)
@@ -189,9 +189,10 @@ impl PickerDelegate for FileContextPickerDelegate {
.toggle_state(selected)
.child(render_file_context_entry(
ElementId::NamedInteger("file-ctx-picker".into(), ix),
&path_match.path,
&path_match.path_prefix,
path_match.is_dir,
WorktreeId::from_usize(mat.worktree_id),
&mat.path,
&mat.path_prefix,
mat.is_dir,
self.context_store.clone(),
cx,
)),
@@ -199,12 +200,17 @@ impl PickerDelegate for FileContextPickerDelegate {
}
}
pub(crate) fn search_paths(
pub struct FileMatch {
pub mat: PathMatch,
pub is_recent: bool,
}
pub(crate) fn search_files(
query: String,
cancellation_flag: Arc<AtomicBool>,
workspace: &Entity<Workspace>,
cx: &App,
) -> Task<Vec<PathMatch>> {
) -> Task<Vec<FileMatch>> {
if query.is_empty() {
let workspace = workspace.read(cx);
let project = workspace.project().read(cx);
@@ -213,28 +219,34 @@ pub(crate) fn search_paths(
.into_iter()
.filter_map(|(project_path, _)| {
let worktree = project.worktree_for_id(project_path.worktree_id, cx)?;
Some(PathMatch {
score: 0.,
positions: Vec::new(),
worktree_id: project_path.worktree_id.to_usize(),
path: project_path.path,
path_prefix: worktree.read(cx).root_name().into(),
distance_to_relative_ancestor: 0,
is_dir: false,
Some(FileMatch {
mat: PathMatch {
score: 0.,
positions: Vec::new(),
worktree_id: project_path.worktree_id.to_usize(),
path: project_path.path,
path_prefix: worktree.read(cx).root_name().into(),
distance_to_relative_ancestor: 0,
is_dir: false,
},
is_recent: true,
})
});
let file_matches = project.worktrees(cx).flat_map(|worktree| {
let worktree = worktree.read(cx);
let path_prefix: Arc<str> = worktree.root_name().into();
worktree.entries(false, 0).map(move |entry| PathMatch {
score: 0.,
positions: Vec::new(),
worktree_id: worktree.id().to_usize(),
path: entry.path.clone(),
path_prefix: path_prefix.clone(),
distance_to_relative_ancestor: 0,
is_dir: entry.is_dir(),
worktree.entries(false, 0).map(move |entry| FileMatch {
mat: PathMatch {
score: 0.,
positions: Vec::new(),
worktree_id: worktree.id().to_usize(),
path: entry.path.clone(),
path_prefix: path_prefix.clone(),
distance_to_relative_ancestor: 0,
is_dir: entry.is_dir(),
},
is_recent: false,
})
});
@@ -269,6 +281,12 @@ pub(crate) fn search_paths(
executor,
)
.await
.into_iter()
.map(|mat| FileMatch {
mat,
is_recent: false,
})
.collect::<Vec<_>>()
})
}
}
@@ -311,19 +329,26 @@ pub fn extract_file_name_and_directory(
pub fn render_file_context_entry(
id: ElementId,
path: &Path,
worktree_id: WorktreeId,
path: &Arc<Path>,
path_prefix: &Arc<str>,
is_directory: bool,
context_store: WeakEntity<ContextStore>,
cx: &App,
) -> Stateful<Div> {
let (file_name, directory) = extract_file_name_and_directory(path, path_prefix);
let (file_name, directory) = extract_file_name_and_directory(&path, path_prefix);
let added = context_store.upgrade().and_then(|context_store| {
let project_path = ProjectPath {
worktree_id,
path: path.clone(),
};
if is_directory {
context_store.read(cx).includes_directory(path)
context_store.read(cx).includes_directory(&project_path)
} else {
context_store.read(cx).will_include_file_path(path, cx)
context_store
.read(cx)
.will_include_file_path(&project_path, cx)
}
});
@@ -363,8 +388,9 @@ pub fn render_file_context_entry(
)
.child(Label::new("Added").size(LabelSize::Small)),
),
FileInclusion::InDirectory(dir_name) => {
let dir_name = dir_name.to_string_lossy().into_owned();
FileInclusion::InDirectory(directory_project_path) => {
// TODO: Consider using worktree full_path to include worktree name.
let directory_path = directory_project_path.path.to_string_lossy().into_owned();
el.child(
h_flex()
@@ -378,7 +404,7 @@ pub fn render_file_context_entry(
)
.child(Label::new("Included").size(LabelSize::Small)),
)
.tooltip(Tooltip::text(format!("in {dir_name}")))
.tooltip(Tooltip::text(format!("in {directory_path}")))
}
})
}


@@ -2,7 +2,7 @@ use std::cmp::Reverse;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use anyhow::{Context as _, Result};
use anyhow::Result;
use fuzzy::{StringMatch, StringMatchCandidate};
use gpui::{
App, AppContext, DismissEvent, Entity, FocusHandle, Focusable, Stateful, Task, WeakEntity,
@@ -119,11 +119,7 @@ impl PickerDelegate for SymbolContextPickerDelegate {
let search_task = search_symbols(query, Arc::<AtomicBool>::default(), &workspace, cx);
let context_store = self.context_store.clone();
cx.spawn_in(window, async move |this, cx| {
let symbols = search_task
.await
.context("Failed to load symbols")
.log_err()
.unwrap_or_default();
let symbols = search_task.await;
let symbol_entries = context_store
.read_with(cx, |context_store, cx| {
@@ -285,12 +281,16 @@ fn find_matching_symbol(symbol: &Symbol, candidates: &[DocumentSymbol]) -> Optio
}
}
pub struct SymbolMatch {
pub symbol: Symbol,
}
pub(crate) fn search_symbols(
query: String,
cancellation_flag: Arc<AtomicBool>,
workspace: &Entity<Workspace>,
cx: &mut App,
) -> Task<Result<Vec<(StringMatch, Symbol)>>> {
) -> Task<Vec<SymbolMatch>> {
let symbols_task = workspace.update(cx, |workspace, cx| {
workspace
.project()
@@ -298,19 +298,28 @@ pub(crate) fn search_symbols(
});
let project = workspace.read(cx).project().clone();
cx.spawn(async move |cx| {
let symbols = symbols_task.await?;
let (visible_match_candidates, external_match_candidates): (Vec<_>, Vec<_>) = project
.update(cx, |project, cx| {
symbols
.iter()
.enumerate()
.map(|(id, symbol)| StringMatchCandidate::new(id, &symbol.label.filter_text()))
.partition(|candidate| {
project
.entry_for_path(&symbols[candidate.id].path, cx)
.map_or(false, |e| !e.is_ignored)
})
})?;
let Some(symbols) = symbols_task.await.log_err() else {
return Vec::new();
};
let Some((visible_match_candidates, external_match_candidates)): Option<(Vec<_>, Vec<_>)> =
project
.update(cx, |project, cx| {
symbols
.iter()
.enumerate()
.map(|(id, symbol)| {
StringMatchCandidate::new(id, &symbol.label.filter_text())
})
.partition(|candidate| {
project
.entry_for_path(&symbols[candidate.id].path, cx)
.map_or(false, |e| !e.is_ignored)
})
})
.log_err()
else {
return Vec::new();
};
const MAX_MATCHES: usize = 100;
let mut visible_matches = cx.background_executor().block(fuzzy::match_strings(
@@ -339,7 +348,7 @@ pub(crate) fn search_symbols(
let mut matches = visible_matches;
matches.append(&mut external_matches);
Ok(matches
matches
.into_iter()
.map(|mut mat| {
let symbol = symbols[mat.candidate_id].clone();
@@ -347,19 +356,19 @@ pub(crate) fn search_symbols(
for position in &mut mat.positions {
*position += filter_start;
}
(mat, symbol)
SymbolMatch { symbol }
})
.collect())
.collect()
})
}
fn compute_symbol_entries(
symbols: Vec<(StringMatch, Symbol)>,
symbols: Vec<SymbolMatch>,
context_store: &ContextStore,
cx: &App,
) -> Vec<SymbolEntry> {
let mut symbol_entries = Vec::with_capacity(symbols.len());
for (_, symbol) in symbols {
for SymbolMatch { symbol, .. } in symbols {
let symbols_for_path = context_store.included_symbols_by_path().get(&symbol.path);
let is_included = if let Some(symbols_for_path) = symbols_for_path {
let mut is_included = false;


@@ -1,4 +1,5 @@
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use fuzzy::StringMatchCandidate;
use gpui::{App, DismissEvent, Entity, FocusHandle, Focusable, Task, WeakEntity};
@@ -114,11 +115,11 @@ impl PickerDelegate for ThreadContextPickerDelegate {
return Task::ready(());
};
let search_task = search_threads(query, threads, cx);
let search_task = search_threads(query, Arc::new(AtomicBool::default()), threads, cx);
cx.spawn_in(window, async move |this, cx| {
let matches = search_task.await;
this.update(cx, |this, cx| {
this.delegate.matches = matches;
this.delegate.matches = matches.into_iter().map(|mat| mat.thread).collect();
this.delegate.selected_index = 0;
cx.notify();
})
@@ -217,11 +218,18 @@ pub fn render_thread_context_entry(
})
}
#[derive(Clone)]
pub struct ThreadMatch {
pub thread: ThreadContextEntry,
pub is_recent: bool,
}
pub(crate) fn search_threads(
query: String,
cancellation_flag: Arc<AtomicBool>,
thread_store: Entity<ThreadStore>,
cx: &mut App,
) -> Task<Vec<ThreadContextEntry>> {
) -> Task<Vec<ThreadMatch>> {
let threads = thread_store.update(cx, |this, _cx| {
this.threads()
.into_iter()
@@ -236,6 +244,12 @@ pub(crate) fn search_threads(
cx.background_spawn(async move {
if query.is_empty() {
threads
.into_iter()
.map(|thread| ThreadMatch {
thread,
is_recent: false,
})
.collect()
} else {
let candidates = threads
.iter()
@@ -247,14 +261,17 @@ pub(crate) fn search_threads(
&query,
false,
100,
&Default::default(),
&cancellation_flag,
executor,
)
.await;
matches
.into_iter()
.map(|mat| threads[mat.candidate_id].clone())
.map(|mat| ThreadMatch {
thread: threads[mat.candidate_id].clone(),
is_recent: false,
})
.collect()
}
})


@@ -1,5 +1,5 @@
use std::ops::Range;
use std::path::{Path, PathBuf};
use std::path::Path;
use std::sync::Arc;
use anyhow::{Context as _, Result, anyhow};
@@ -8,11 +8,10 @@ use futures::future::join_all;
use futures::{self, Future, FutureExt, future};
use gpui::{App, AppContext as _, Context, Entity, SharedString, Task, WeakEntity};
use language::{Buffer, File};
use project::{ProjectItem, ProjectPath, Worktree};
use project::{Project, ProjectItem, ProjectPath, Worktree};
use rope::Rope;
use text::{Anchor, BufferId, OffsetRangeExt};
use util::{ResultExt as _, maybe};
use workspace::Workspace;
use crate::ThreadStore;
use crate::context::{
@@ -23,13 +22,13 @@ use crate::context_strip::SuggestedContext;
use crate::thread::{Thread, ThreadId};
pub struct ContextStore {
workspace: WeakEntity<Workspace>,
project: WeakEntity<Project>,
context: Vec<AssistantContext>,
thread_store: Option<WeakEntity<ThreadStore>>,
// TODO: If an EntityId is used for all context types (like BufferId), can remove ContextId.
next_context_id: ContextId,
files: BTreeMap<BufferId, ContextId>,
directories: HashMap<PathBuf, ContextId>,
directories: HashMap<ProjectPath, ContextId>,
symbols: HashMap<ContextSymbolId, ContextId>,
symbol_buffers: HashMap<ContextSymbolId, Entity<Buffer>>,
symbols_by_path: HashMap<ProjectPath, Vec<ContextSymbolId>>,
@@ -40,11 +39,11 @@ pub struct ContextStore {
impl ContextStore {
pub fn new(
workspace: WeakEntity<Workspace>,
project: WeakEntity<Project>,
thread_store: Option<WeakEntity<ThreadStore>>,
) -> Self {
Self {
workspace,
project,
thread_store,
context: Vec::new(),
next_context_id: ContextId(0),
@@ -81,12 +80,7 @@ impl ContextStore {
remove_if_exists: bool,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
let workspace = self.workspace.clone();
let Some(project) = workspace
.upgrade()
.map(|workspace| workspace.read(cx).project().clone())
else {
let Some(project) = self.project.upgrade() else {
return Task::ready(Err(anyhow!("failed to read project")));
};
@@ -99,7 +93,7 @@ impl ContextStore {
let buffer_id = this.update(cx, |_, cx| buffer.read(cx).remote_id())?;
let already_included = this.update(cx, |this, cx| {
match this.will_include_buffer(buffer_id, &project_path.path) {
match this.will_include_buffer(buffer_id, &project_path) {
Some(FileInclusion::Direct(context_id)) => {
if remove_if_exists {
this.remove_context(context_id, cx);
@@ -161,15 +155,11 @@ impl ContextStore {
remove_if_exists: bool,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
let workspace = self.workspace.clone();
let Some(project) = workspace
.upgrade()
.map(|workspace| workspace.read(cx).project().clone())
else {
let Some(project) = self.project.upgrade() else {
return Task::ready(Err(anyhow!("failed to read project")));
};
let already_included = match self.includes_directory(&project_path.path) {
let already_included = match self.includes_directory(&project_path) {
Some(FileInclusion::Direct(context_id)) => {
if remove_if_exists {
self.remove_context(context_id, cx);
@@ -233,14 +223,12 @@ impl ContextStore {
.collect::<Vec<_>>();
if context_buffers.is_empty() {
return Err(anyhow!(
"No text files found in {}",
&project_path.path.display()
));
let full_path = cx.update(|cx| worktree.read(cx).full_path(&project_path.path))?;
return Err(anyhow!("No text files found in {}", &full_path.display()));
}
this.update(cx, |this, cx| {
this.insert_directory(project_path, context_buffers, cx);
this.insert_directory(worktree, project_path, context_buffers, cx);
})?;
anyhow::Ok(())
@@ -249,17 +237,20 @@ impl ContextStore {
fn insert_directory(
&mut self,
worktree: Entity<Worktree>,
project_path: ProjectPath,
context_buffers: Vec<ContextBuffer>,
cx: &mut Context<Self>,
) {
let id = self.next_context_id.post_inc();
self.directories.insert(project_path.path.to_path_buf(), id);
let path = project_path.path.clone();
self.directories.insert(project_path, id);
self.context
.push(AssistantContext::Directory(DirectoryContext {
id,
project_path,
worktree,
path,
context_buffers,
}));
cx.notify();
@@ -488,23 +479,31 @@ impl ContextStore {
/// Returns whether the buffer is already included directly in the context, or if it will be
/// included in the context via a directory. Directory inclusion is based on paths rather than
/// buffer IDs as the directory will be re-scanned.
pub fn will_include_buffer(&self, buffer_id: BufferId, path: &Path) -> Option<FileInclusion> {
pub fn will_include_buffer(
&self,
buffer_id: BufferId,
project_path: &ProjectPath,
) -> Option<FileInclusion> {
if let Some(context_id) = self.files.get(&buffer_id) {
return Some(FileInclusion::Direct(*context_id));
}
self.will_include_file_path_via_directory(path)
self.will_include_file_path_via_directory(project_path)
}
/// Returns whether this file path is already included directly in the context, or if it will be
/// included in the context via a directory.
pub fn will_include_file_path(&self, path: &Path, cx: &App) -> Option<FileInclusion> {
pub fn will_include_file_path(
&self,
project_path: &ProjectPath,
cx: &App,
) -> Option<FileInclusion> {
if !self.files.is_empty() {
let found_file_context = self.context.iter().find(|context| match &context {
AssistantContext::File(file_context) => {
let buffer = file_context.context_buffer.buffer.read(cx);
if let Some(file_path) = buffer_path_log_err(buffer, cx) {
*file_path == *path
if let Some(context_path) = buffer.project_path(cx) {
&context_path == project_path
} else {
false
}
@@ -516,31 +515,40 @@ impl ContextStore {
}
}
self.will_include_file_path_via_directory(path)
self.will_include_file_path_via_directory(project_path)
}
fn will_include_file_path_via_directory(&self, path: &Path) -> Option<FileInclusion> {
fn will_include_file_path_via_directory(
&self,
project_path: &ProjectPath,
) -> Option<FileInclusion> {
if self.directories.is_empty() {
return None;
}
let mut buf = path.to_path_buf();
let mut path_buf = project_path.path.to_path_buf();
while buf.pop() {
if let Some(_) = self.directories.get(&buf) {
return Some(FileInclusion::InDirectory(buf));
while path_buf.pop() {
// TODO: This isn't very efficient. Consider using a better representation of the
// directories map.
let directory_project_path = ProjectPath {
worktree_id: project_path.worktree_id,
path: path_buf.clone().into(),
};
if let Some(_) = self.directories.get(&directory_project_path) {
return Some(FileInclusion::InDirectory(directory_project_path));
}
}
None
}
pub fn includes_directory(&self, path: &Path) -> Option<FileInclusion> {
if let Some(context_id) = self.directories.get(path) {
pub fn includes_directory(&self, project_path: &ProjectPath) -> Option<FileInclusion> {
if let Some(context_id) = self.directories.get(project_path) {
return Some(FileInclusion::Direct(*context_id));
}
self.will_include_file_path_via_directory(path)
self.will_include_file_path_via_directory(project_path)
}
pub fn included_symbol(&self, symbol_id: &ContextSymbolId) -> Option<ContextId> {
@@ -574,13 +582,13 @@ impl ContextStore {
}
}
pub fn file_paths(&self, cx: &App) -> HashSet<PathBuf> {
pub fn file_paths(&self, cx: &App) -> HashSet<ProjectPath> {
self.context
.iter()
.filter_map(|context| match context {
AssistantContext::File(file) => {
let buffer = file.context_buffer.buffer.read(cx);
buffer_path_log_err(buffer, cx).map(|p| p.to_path_buf())
buffer.project_path(cx)
}
AssistantContext::Directory(_)
| AssistantContext::Symbol(_)
@@ -597,7 +605,7 @@ impl ContextStore {
pub enum FileInclusion {
Direct(ContextId),
InDirectory(PathBuf),
InDirectory(ProjectPath),
}
// ContextBuffer without text.
@@ -664,19 +672,6 @@ fn collect_buffer_info_and_text(
Ok((buffer_info, text_task))
}
pub fn buffer_path_log_err(buffer: &Buffer, cx: &App) -> Option<Arc<Path>> {
if let Some(file) = buffer.file() {
let mut path = file.path().clone();
if path.as_os_str().is_empty() {
path = file.full_path(cx).into();
}
Some(path)
} else {
log::error!("Buffer that had a path unexpectedly no longer has a path.");
None
}
}
fn to_fenced_codeblock(path: &Path, content: Rope) -> SharedString {
let path_extension = path.extension().and_then(|ext| ext.to_str());
let path_string = path.to_string_lossy();
@@ -752,13 +747,13 @@ pub fn refresh_context_store_text(
}
}
AssistantContext::Directory(directory_context) => {
let directory_path = directory_context.project_path(cx);
let should_refresh = changed_buffers.is_empty()
|| changed_buffers.iter().any(|buffer| {
let buffer = buffer.read(cx);
buffer_path_log_err(&buffer, cx).map_or(false, |path| {
path.starts_with(&directory_context.project_path.path)
})
let Some(buffer_path) = buffer.read(cx).project_path(cx) else {
return false;
};
buffer_path.starts_with(&directory_path)
});
if should_refresh {
@@ -845,14 +840,16 @@ fn refresh_directory_text(
let context_buffers = future::join_all(futures);
let id = directory_context.id;
let project_path = directory_context.project_path.clone();
let worktree = directory_context.worktree.clone();
let path = directory_context.path.clone();
Some(cx.spawn(async move |cx| {
let context_buffers = context_buffers.await;
context_store
.update(cx, |context_store, _| {
let new_directory_context = DirectoryContext {
id,
project_path,
worktree,
path,
context_buffers,
};
context_store.replace_context(AssistantContext::Directory(new_directory_context));

View File

@@ -1,3 +1,4 @@
use std::path::Path;
use std::rc::Rc;
use collections::HashSet;
@@ -9,6 +10,7 @@ use gpui::{
};
use itertools::Itertools;
use language::Buffer;
use project::ProjectItem;
use ui::{KeyBinding, PopoverMenu, PopoverMenuHandle, Tooltip, prelude::*};
use workspace::{Workspace, notifications::NotifyResultExt};
@@ -93,26 +95,23 @@ impl ContextStrip {
let active_buffer_entity = editor.buffer().read(cx).as_singleton()?;
let active_buffer = active_buffer_entity.read(cx);
let path = active_buffer.file()?.full_path(cx);
let project_path = active_buffer.project_path(cx)?;
if self
.context_store
.read(cx)
.will_include_buffer(active_buffer.remote_id(), &path)
.will_include_buffer(active_buffer.remote_id(), &project_path)
.is_some()
{
return None;
}
let name = match path.file_name() {
Some(name) => name.to_string_lossy().into_owned().into(),
None => path.to_string_lossy().into_owned().into(),
};
let file_name = active_buffer.file()?.file_name(cx);
let icon_path = FileIcons::get_icon(&path, cx);
let icon_path = FileIcons::get_icon(&Path::new(&file_name), cx);
Some(SuggestedContext::File {
name,
name: file_name.to_string_lossy().into_owned().into(),
buffer: active_buffer_entity.downgrade(),
icon_path,
})

View File

@@ -28,6 +28,7 @@ use language_model::{LanguageModelRegistry, report_assistant_event};
use multi_buffer::MultiBufferRow;
use parking_lot::Mutex;
use project::LspAction;
use project::Project;
use project::{CodeAction, ProjectTransaction};
use prompt_store::PromptBuilder;
use settings::{Settings, SettingsStore};
@@ -254,6 +255,7 @@ impl InlineAssistant {
assistant.assist(
&active_editor,
cx.entity().downgrade(),
workspace.project().downgrade(),
thread_store,
window,
cx,
@@ -265,6 +267,7 @@ impl InlineAssistant {
assistant.assist(
&active_terminal,
cx.entity().downgrade(),
workspace.project().downgrade(),
thread_store,
window,
cx,
@@ -318,6 +321,7 @@ impl InlineAssistant {
&mut self,
editor: &Entity<Editor>,
workspace: WeakEntity<Workspace>,
project: WeakEntity<Project>,
thread_store: Option<WeakEntity<ThreadStore>>,
window: &mut Window,
cx: &mut App,
@@ -425,7 +429,7 @@ impl InlineAssistant {
for range in codegen_ranges {
let assist_id = self.next_assist_id.post_inc();
let context_store =
cx.new(|_cx| ContextStore::new(workspace.clone(), thread_store.clone()));
cx.new(|_cx| ContextStore::new(project.clone(), thread_store.clone()));
let codegen = cx.new(|cx| {
BufferCodegen::new(
editor.read(cx).buffer().clone(),
@@ -519,7 +523,7 @@ impl InlineAssistant {
initial_prompt: String,
initial_transaction_id: Option<TransactionId>,
focus: bool,
workspace: WeakEntity<Workspace>,
workspace: Entity<Workspace>,
thread_store: Option<WeakEntity<ThreadStore>>,
window: &mut Window,
cx: &mut App,
@@ -537,8 +541,8 @@ impl InlineAssistant {
range.end = range.end.bias_right(&snapshot);
}
let context_store =
cx.new(|_cx| ContextStore::new(workspace.clone(), thread_store.clone()));
let project = workspace.read(cx).project().downgrade();
let context_store = cx.new(|_cx| ContextStore::new(project, thread_store.clone()));
let codegen = cx.new(|cx| {
BufferCodegen::new(
@@ -562,7 +566,7 @@ impl InlineAssistant {
codegen.clone(),
self.fs.clone(),
context_store,
workspace.clone(),
workspace.downgrade(),
thread_store,
window,
cx,
@@ -589,7 +593,7 @@ impl InlineAssistant {
end_block_id,
range,
codegen.clone(),
workspace.clone(),
workspace.downgrade(),
window,
cx,
),
@@ -1779,6 +1783,7 @@ impl CodeActionProvider for AssistantCodeActionProvider {
let workspace = self.workspace.clone();
let thread_store = self.thread_store.clone();
window.spawn(cx, async move |cx| {
let workspace = workspace.upgrade().context("workspace was released")?;
let editor = editor.upgrade().context("editor was released")?;
let range = editor
.update(cx, |editor, cx| {

View File

@@ -3,14 +3,17 @@ use std::sync::Arc;
use crate::assistant_model_selector::ModelType;
use collections::HashSet;
use editor::actions::MoveUp;
use editor::{ContextMenuOptions, ContextMenuPlacement, Editor, EditorElement, EditorStyle};
use editor::{
ContextMenuOptions, ContextMenuPlacement, Editor, EditorElement, EditorMode, EditorStyle,
MultiBuffer,
};
use file_icons::FileIcons;
use fs::Fs;
use gpui::{
Animation, AnimationExt, App, DismissEvent, Entity, Focusable, Subscription, TextStyle,
WeakEntity, linear_color_stop, linear_gradient, point, pulsating_between,
};
use language::Buffer;
use language::{Buffer, Language};
use language_model::{ConfiguredModel, LanguageModelRegistry};
use language_model_selector::ToggleModelSelector;
use multi_buffer;
@@ -30,7 +33,7 @@ use crate::profile_selector::ProfileSelector;
use crate::thread::{RequestKind, Thread, TokenUsageRatio};
use crate::thread_store::ThreadStore;
use crate::{
AgentDiff, Chat, ChatMode, NewThread, OpenAgentDiff, RemoveAllContext, ThreadEvent,
AgentDiff, Chat, ChatMode, ExpandMessageEditor, NewThread, OpenAgentDiff, RemoveAllContext,
ToggleContextPicker, ToggleProfileSelector,
};
@@ -48,10 +51,13 @@ pub struct MessageEditor {
model_selector: Entity<AssistantModelSelector>,
profile_selector: Entity<ProfileSelector>,
edits_expanded: bool,
editor_is_expanded: bool,
waiting_for_summaries_to_send: bool,
_subscriptions: Vec<Subscription>,
}
const MAX_EDITOR_LINES: usize = 10;
impl MessageEditor {
pub fn new(
fs: Arc<dyn Fs>,
@@ -66,8 +72,26 @@ impl MessageEditor {
let inline_context_picker_menu_handle = PopoverMenuHandle::default();
let model_selector_menu_handle = PopoverMenuHandle::default();
let language = Language::new(
language::LanguageConfig {
completion_query_characters: HashSet::from_iter(['.', '-', '_', '@']),
..Default::default()
},
None,
);
let editor = cx.new(|cx| {
let mut editor = Editor::auto_height(10, window, cx);
let buffer = cx.new(|cx| Buffer::local("", cx).with_language(Arc::new(language), cx));
let buffer = cx.new(|cx| MultiBuffer::singleton(buffer, cx));
let mut editor = Editor::new(
editor::EditorMode::AutoHeight {
max_lines: MAX_EDITOR_LINES,
},
buffer,
None,
window,
cx,
);
editor.set_placeholder_text("Ask anything, @ to mention, ↑ to select", cx);
editor.set_show_indent_guides(false, cx);
editor.set_context_menu_options(ContextMenuOptions {
@@ -75,7 +99,6 @@ impl MessageEditor {
max_entries_visible: 12,
placement: Some(ContextMenuPlacement::Above),
});
editor
});
@@ -142,6 +165,7 @@ impl MessageEditor {
)
}),
edits_expanded: false,
editor_is_expanded: false,
waiting_for_summaries_to_send: false,
profile_selector: cx
.new(|cx| ProfileSelector::new(fs, thread_store, editor.focus_handle(cx), cx)),
@@ -153,6 +177,32 @@ impl MessageEditor {
cx.notify();
}
fn expand_message_editor(
&mut self,
_: &ExpandMessageEditor,
_window: &mut Window,
cx: &mut Context<Self>,
) {
self.set_editor_is_expanded(!self.editor_is_expanded, cx);
}
fn set_editor_is_expanded(&mut self, is_expanded: bool, cx: &mut Context<Self>) {
self.editor_is_expanded = is_expanded;
self.editor.update(cx, |editor, _| {
if self.editor_is_expanded {
editor.set_mode(EditorMode::Full {
scale_ui_elements_with_buffer_font_size: false,
show_active_line_background: false,
})
} else {
editor.set_mode(EditorMode::AutoHeight {
max_lines: MAX_EDITOR_LINES,
})
}
});
cx.notify();
}
fn toggle_context_picker(
&mut self,
_: &ToggleContextPicker,
@@ -180,11 +230,14 @@ impl MessageEditor {
return;
}
self.set_editor_is_expanded(false, cx);
self.send_to_model(RequestKind::Chat, window, cx);
cx.notify();
}
fn is_editor_empty(&self, cx: &App) -> bool {
self.editor.read(cx).text(cx).is_empty()
self.editor.read(cx).text(cx).trim().is_empty()
}
fn is_model_selected(&self, cx: &App) -> bool {
@@ -218,8 +271,6 @@ impl MessageEditor {
let refresh_task =
refresh_context_store_text(self.context_store.clone(), &HashSet::default(), cx);
let system_prompt_context_task = self.thread.read(cx).load_system_prompt_context(cx);
let thread = self.thread.clone();
let context_store = self.context_store.clone();
let git_store = self.project.read(cx).git_store().clone();
@@ -228,16 +279,6 @@ impl MessageEditor {
cx.spawn(async move |this, cx| {
let checkpoint = checkpoint.await.ok();
refresh_task.await;
let (system_prompt_context, load_error) = system_prompt_context_task.await;
thread
.update(cx, |thread, cx| {
thread.set_system_prompt_context(system_prompt_context);
if let Some(load_error) = load_error {
cx.emit(ThreadEvent::ShowError(load_error));
}
})
.log_err();
thread
.update(cx, |thread, cx| {
@@ -340,12 +381,20 @@ impl Focusable for MessageEditor {
impl Render for MessageEditor {
fn render(&mut self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
let font_size = TextSize::Default.rems(cx);
let font_size = TextSize::Small.rems(cx);
let line_height = font_size.to_pixels(window.rem_size()) * 1.5;
let focus_handle = self.editor.focus_handle(cx);
let focus_handle_clone = focus_handle.clone();
let inline_context_picker = self.inline_context_picker.clone();
let is_editor_expanded = self.editor_is_expanded;
let expand_icon = if is_editor_expanded {
IconName::Minimize
} else {
IconName::Maximize
};
let thread = self.thread.read(cx);
let is_generating = thread.is_generating();
let total_token_usage = thread.total_token_usage(cx);
@@ -652,24 +701,56 @@ impl Render for MessageEditor {
.on_action(cx.listener(Self::remove_all_context))
.on_action(cx.listener(Self::move_up))
.on_action(cx.listener(Self::toggle_chat_mode))
.on_action(cx.listener(Self::expand_message_editor))
.gap_2()
.p_2()
.bg(editor_bg_color)
.border_t_1()
.border_color(cx.theme().colors().border)
.child(h_flex().justify_between().child(self.context_strip.clone()))
.child(
h_flex()
.items_start()
.justify_between()
.child(self.context_strip.clone())
.child(
IconButton::new("toggle-height", expand_icon)
.icon_size(IconSize::XSmall)
.icon_color(Color::Muted)
.tooltip(move |window, cx| {
let focus_handle = focus_handle.clone();
let expand_label = if is_editor_expanded {
"Minimize Message Editor".to_string()
} else {
"Expand Message Editor".to_string()
};
Tooltip::for_action_in(
expand_label,
&ExpandMessageEditor,
&focus_handle,
window,
cx,
)
})
.on_click(cx.listener(|_, _, window, cx| {
window.dispatch_action(Box::new(ExpandMessageEditor), cx);
}))
)
)
.child(
v_flex()
.gap_5()
.child({
.size_full()
.gap_4()
.when(is_editor_expanded, |this| this.h(vh(0.8, window)).justify_between())
.child(div().when(is_editor_expanded, |this| this.h_full()).child({
let settings = ThemeSettings::get_global(cx);
let text_style = TextStyle {
color: cx.theme().colors().text,
font_family: settings.ui_font.family.clone(),
font_fallbacks: settings.ui_font.fallbacks.clone(),
font_features: settings.ui_font.features.clone(),
font_family: settings.buffer_font.family.clone(),
font_fallbacks: settings.buffer_font.fallbacks.clone(),
font_features: settings.buffer_font.features.clone(),
font_size: font_size.into(),
font_weight: settings.ui_font.weight,
line_height: line_height.into(),
..Default::default()
};
@@ -684,7 +765,7 @@ impl Render for MessageEditor {
..Default::default()
},
).into_any()
})
}))
.child(
PopoverMenu::new("inline-context-picker")
.menu(move |window, cx| {
@@ -704,11 +785,13 @@ impl Render for MessageEditor {
)
.child(
h_flex()
.flex_none()
.justify_between()
.child(h_flex().gap_2().child(self.profile_selector.clone()))
.child(
h_flex().gap_1().child(self.model_selector.clone())
.map(|parent| {
h_flex().gap_1()
.child(self.model_selector.clone())
.map(move |parent| {
if is_generating {
parent.child(
IconButton::new("stop-generation", IconName::StopFilled)
@@ -722,12 +805,15 @@ impl Render for MessageEditor {
cx,
)
})
.on_click(move |_event, window, cx| {
focus_handle.dispatch_action(
&editor::actions::Cancel,
window,
cx,
);
.on_click({
let focus_handle = focus_handle_clone.clone();
move |_event, window, cx| {
focus_handle.dispatch_action(
&editor::actions::Cancel,
window,
cx,
);
}
})
.with_animation(
"pulsating-label",
@@ -747,8 +833,11 @@ impl Render for MessageEditor {
|| !is_model_selected
|| self.waiting_for_summaries_to_send
)
.on_click(move |_event, window, cx| {
focus_handle.dispatch_action(&Chat, window, cx);
.on_click({
let focus_handle = focus_handle_clone.clone();
move |_event, window, cx| {
focus_handle.dispatch_action(&Chat, window, cx);
}
})
.when(!is_editor_empty && is_model_selected, |button| {
button.tooltip(move |window, cx| {

View File

@@ -16,6 +16,7 @@ use language_model::{
ConfiguredModel, LanguageModelRegistry, LanguageModelRequest, LanguageModelRequestMessage,
Role, report_assistant_event,
};
use project::Project;
use prompt_store::PromptBuilder;
use std::sync::Arc;
use telemetry_events::{AssistantEventData, AssistantKind, AssistantPhase};
@@ -67,6 +68,7 @@ impl TerminalInlineAssistant {
&mut self,
terminal_view: &Entity<TerminalView>,
workspace: WeakEntity<Workspace>,
project: WeakEntity<Project>,
thread_store: Option<WeakEntity<ThreadStore>>,
window: &mut Window,
cx: &mut App,
@@ -75,8 +77,7 @@ impl TerminalInlineAssistant {
let assist_id = self.next_assist_id.post_inc();
let prompt_buffer =
cx.new(|cx| MultiBuffer::singleton(cx.new(|cx| Buffer::local(String::new(), cx)), cx));
let context_store =
cx.new(|_cx| ContextStore::new(workspace.clone(), thread_store.clone()));
let context_store = cx.new(|_cx| ContextStore::new(project, thread_store.clone()));
let codegen = cx.new(|_| TerminalCodegen::new(terminal, self.telemetry.clone()));
let prompt_editor = cx.new(|cx| {

View File

@@ -2,13 +2,14 @@ use std::fmt::Write as _;
use std::io::Write;
use std::ops::Range;
use std::sync::Arc;
use std::time::Instant;
use anyhow::{Context as _, Result, anyhow};
use assistant_settings::AssistantSettings;
use assistant_tool::{ActionLog, Tool, ToolWorkingSet};
use chrono::{DateTime, Utc};
use collections::{BTreeMap, HashMap};
use fs::Fs;
use feature_flags::{self, FeatureFlagAppExt};
use futures::future::Shared;
use futures::{FutureExt, StreamExt as _};
use git::repository::DiffType;
@@ -19,21 +20,20 @@ use language_model::{
LanguageModelToolResult, LanguageModelToolUseId, MaxMonthlySpendReachedError, MessageContent,
PaymentRequiredError, Role, StopReason, TokenUsage,
};
use project::Project;
use project::git_store::{GitStore, GitStoreCheckpoint, RepositoryState};
use project::{Project, Worktree};
use prompt_store::{
AssistantSystemPromptContext, PromptBuilder, RulesFile, WorktreeInfoForSystemPrompt,
};
use prompt_store::PromptBuilder;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use settings::Settings;
use util::{ResultExt as _, TryFutureExt as _, maybe, post_inc};
use thiserror::Error;
use util::{ResultExt as _, TryFutureExt as _, post_inc};
use uuid::Uuid;
use crate::context::{AssistantContext, ContextId, format_context_as_string};
use crate::thread_store::{
SerializedMessage, SerializedMessageSegment, SerializedThread, SerializedToolResult,
SerializedToolUse,
SerializedToolUse, SharedProjectContext,
};
use crate::tool_use::{PendingToolUse, ToolUse, ToolUseState, USING_TOOL_MARKER};
@@ -183,7 +183,7 @@ pub struct ThreadCheckpoint {
git_checkpoint: GitStoreCheckpoint,
}
#[derive(Copy, Clone, Debug)]
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum ThreadFeedback {
Positive,
Negative,
@@ -247,7 +247,7 @@ pub struct Thread {
next_message_id: MessageId,
context: BTreeMap<ContextId, AssistantContext>,
context_by_message: HashMap<MessageId, Vec<ContextId>>,
system_prompt_context: Option<AssistantSystemPromptContext>,
project_context: SharedProjectContext,
checkpoints_by_message: HashMap<MessageId, ThreadCheckpoint>,
completion_count: usize,
pending_completions: Vec<PendingCompletion>,
@@ -261,6 +261,8 @@ pub struct Thread {
initial_project_snapshot: Shared<Task<Option<Arc<ProjectSnapshot>>>>,
cumulative_token_usage: TokenUsage,
feedback: Option<ThreadFeedback>,
message_feedback: HashMap<MessageId, ThreadFeedback>,
last_auto_capture_at: Option<Instant>,
}
impl Thread {
@@ -268,6 +270,7 @@ impl Thread {
project: Entity<Project>,
tools: Arc<ToolWorkingSet>,
prompt_builder: Arc<PromptBuilder>,
system_prompt: SharedProjectContext,
cx: &mut Context<Self>,
) -> Self {
Self {
@@ -280,7 +283,7 @@ impl Thread {
next_message_id: MessageId(0),
context: BTreeMap::default(),
context_by_message: HashMap::default(),
system_prompt_context: None,
project_context: system_prompt,
checkpoints_by_message: HashMap::default(),
completion_count: 0,
pending_completions: Vec::new(),
@@ -299,6 +302,8 @@ impl Thread {
},
cumulative_token_usage: TokenUsage::default(),
feedback: None,
message_feedback: HashMap::default(),
last_auto_capture_at: None,
}
}
@@ -308,6 +313,7 @@ impl Thread {
project: Entity<Project>,
tools: Arc<ToolWorkingSet>,
prompt_builder: Arc<PromptBuilder>,
project_context: SharedProjectContext,
cx: &mut Context<Self>,
) -> Self {
let next_message_id = MessageId(
@@ -348,7 +354,7 @@ impl Thread {
next_message_id,
context: BTreeMap::default(),
context_by_message: HashMap::default(),
system_prompt_context: None,
project_context,
checkpoints_by_message: HashMap::default(),
completion_count: 0,
pending_completions: Vec::new(),
@@ -362,6 +368,8 @@ impl Thread {
initial_project_snapshot: Task::ready(serialized.initial_project_snapshot).shared(),
cumulative_token_usage: serialized.cumulative_token_usage,
feedback: None,
message_feedback: HashMap::default(),
last_auto_capture_at: None,
}
}
@@ -385,6 +393,10 @@ impl Thread {
self.summary.clone()
}
pub fn project_context(&self) -> SharedProjectContext {
self.project_context.clone()
}
pub const DEFAULT_SUMMARY: SharedString = SharedString::new_static("New Thread");
pub fn summary_or_default(&self) -> SharedString {
@@ -675,6 +687,9 @@ impl Thread {
git_checkpoint,
});
}
self.auto_capture_telemetry(cx);
message_id
}
@@ -806,117 +821,6 @@ impl Thread {
})
}
pub fn set_system_prompt_context(&mut self, context: AssistantSystemPromptContext) {
self.system_prompt_context = Some(context);
}
pub fn system_prompt_context(&self) -> &Option<AssistantSystemPromptContext> {
&self.system_prompt_context
}
pub fn load_system_prompt_context(
&self,
cx: &App,
) -> Task<(AssistantSystemPromptContext, Option<ThreadError>)> {
let project = self.project.read(cx);
let tasks = project
.visible_worktrees(cx)
.map(|worktree| {
Self::load_worktree_info_for_system_prompt(
project.fs().clone(),
worktree.read(cx),
cx,
)
})
.collect::<Vec<_>>();
cx.spawn(async |_cx| {
let results = futures::future::join_all(tasks).await;
let mut first_err = None;
let worktrees = results
.into_iter()
.map(|(worktree, err)| {
if first_err.is_none() && err.is_some() {
first_err = err;
}
worktree
})
.collect::<Vec<_>>();
(AssistantSystemPromptContext::new(worktrees), first_err)
})
}
fn load_worktree_info_for_system_prompt(
fs: Arc<dyn Fs>,
worktree: &Worktree,
cx: &App,
) -> Task<(WorktreeInfoForSystemPrompt, Option<ThreadError>)> {
let root_name = worktree.root_name().into();
let abs_path = worktree.abs_path();
// Note that Cline supports `.clinerules` being a directory, but that is not currently
// supported. This doesn't seem to occur often in GitHub repositories.
const RULES_FILE_NAMES: [&'static str; 6] = [
".rules",
".cursorrules",
".windsurfrules",
".clinerules",
".github/copilot-instructions.md",
"CLAUDE.md",
];
let selected_rules_file = RULES_FILE_NAMES
.into_iter()
.filter_map(|name| {
worktree
.entry_for_path(name)
.filter(|entry| entry.is_file())
.map(|entry| (entry.path.clone(), worktree.absolutize(&entry.path)))
})
.next();
if let Some((rel_rules_path, abs_rules_path)) = selected_rules_file {
cx.spawn(async move |_| {
let rules_file_result = maybe!(async move {
let abs_rules_path = abs_rules_path?;
let text = fs.load(&abs_rules_path).await.with_context(|| {
format!("Failed to load assistant rules file {:?}", abs_rules_path)
})?;
anyhow::Ok(RulesFile {
rel_path: rel_rules_path,
abs_path: abs_rules_path.into(),
text: text.trim().to_string(),
})
})
.await;
let (rules_file, rules_file_error) = match rules_file_result {
Ok(rules_file) => (Some(rules_file), None),
Err(err) => (
None,
Some(ThreadError::Message {
header: "Error loading rules file".into(),
message: format!("{err}").into(),
}),
),
};
let worktree_info = WorktreeInfoForSystemPrompt {
root_name,
abs_path,
rules_file,
};
(worktree_info, rules_file_error)
})
} else {
Task::ready((
WorktreeInfoForSystemPrompt {
root_name,
abs_path,
rules_file: None,
},
None,
))
}
}
pub fn send_to_model(
&mut self,
model: Arc<dyn LanguageModel>,
@@ -966,10 +870,10 @@ impl Thread {
temperature: None,
};
if let Some(system_prompt_context) = self.system_prompt_context.as_ref() {
if let Some(project_context) = self.project_context.borrow().as_ref() {
if let Some(system_prompt) = self
.prompt_builder
.generate_assistant_system_prompt(system_prompt_context)
.generate_assistant_system_prompt(project_context)
.context("failed to generate assistant system prompt")
.log_err()
{
@@ -980,7 +884,7 @@ impl Thread {
});
}
} else {
log::error!("system_prompt_context not set.")
log::error!("project_context not set.")
}
for message in &self.messages {
@@ -1184,6 +1088,8 @@ impl Thread {
thread.touch_updated_at();
cx.emit(ThreadEvent::StreamedCompletion);
cx.notify();
thread.auto_capture_telemetry(cx);
})?;
smol::future::yield_now().await;
@@ -1210,7 +1116,8 @@ impl Thread {
match result.as_ref() {
Ok(stop_reason) => match stop_reason {
StopReason::ToolUse => {
cx.emit(ThreadEvent::UsePendingTools);
let tool_uses = thread.use_pending_tools(cx);
cx.emit(ThreadEvent::UsePendingTools { tool_uses });
}
StopReason::EndTurn => {}
StopReason::MaxTokens => {}
@@ -1237,7 +1144,9 @@ impl Thread {
thread.cancel_last_completion(cx);
}
}
cx.emit(ThreadEvent::DoneStreaming);
cx.emit(ThreadEvent::Stopped(result.map_err(Arc::new)));
thread.auto_capture_telemetry(cx);
if let Ok(initial_usage) = initial_token_usage {
let usage = thread.cumulative_token_usage.clone() - initial_usage;
@@ -1398,10 +1307,8 @@ impl Thread {
)
}
pub fn use_pending_tools(
&mut self,
cx: &mut Context<Self>,
) -> impl IntoIterator<Item = PendingToolUse> + use<> {
pub fn use_pending_tools(&mut self, cx: &mut Context<Self>) -> Vec<PendingToolUse> {
self.auto_capture_telemetry(cx);
let request = self.to_completion_request(RequestKind::Chat, cx);
let messages = Arc::new(request.messages);
let pending_tool_uses = self
@@ -1489,18 +1396,36 @@ impl Thread {
output,
cx,
);
cx.emit(ThreadEvent::ToolFinished {
tool_use_id,
pending_tool_use,
canceled: false,
});
thread.tool_finished(tool_use_id, pending_tool_use, false, cx);
})
.ok();
}
})
}
fn tool_finished(
&mut self,
tool_use_id: LanguageModelToolUseId,
pending_tool_use: Option<PendingToolUse>,
canceled: bool,
cx: &mut Context<Self>,
) {
if self.all_tools_finished() {
let model_registry = LanguageModelRegistry::read_global(cx);
if let Some(ConfiguredModel { model, .. }) = model_registry.default_model() {
self.attach_tool_results(cx);
if !canceled {
self.send_to_model(model, RequestKind::Chat, cx);
}
}
}
cx.emit(ThreadEvent::ToolFinished {
tool_use_id,
pending_tool_use,
});
}
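The new tool_finished method above centralizes what was previously repeated at each ToolFinished emit site: once the last pending tool completes, the tool results are attached and, unless the run was canceled, a follow-up request goes to the configured model. A toy, self-contained model of that control flow, omitting the model-registry check (names are illustrative, not Zed's types):

/// Each finished tool decrements a pending count; the last one to finish attaches the
/// results and (unless canceled) kicks off the follow-up request.
struct ToolRun {
    pending: usize,
    results_attached: bool,
    follow_up_sent: bool,
}

impl ToolRun {
    fn new(pending: usize) -> Self {
        Self { pending, results_attached: false, follow_up_sent: false }
    }

    fn tool_finished(&mut self, canceled: bool) {
        self.pending = self.pending.saturating_sub(1);
        if self.pending == 0 {
            self.results_attached = true;
            if !canceled {
                self.follow_up_sent = true;
            }
        }
    }
}

fn main() {
    let mut run = ToolRun::new(2);
    run.tool_finished(false);
    assert!(!run.results_attached); // one tool still outstanding
    run.tool_finished(false);
    assert!(run.results_attached && run.follow_up_sent);
}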
pub fn attach_tool_results(&mut self, cx: &mut Context<Self>) {
// Insert a user message to contain the tool results.
self.insert_user_message(
@@ -1524,11 +1449,12 @@ impl Thread {
let mut canceled = false;
for pending_tool_use in self.tool_use.cancel_pending() {
canceled = true;
cx.emit(ThreadEvent::ToolFinished {
tool_use_id: pending_tool_use.id.clone(),
pending_tool_use: Some(pending_tool_use),
canceled: true,
});
self.tool_finished(
pending_tool_use.id.clone(),
Some(pending_tool_use),
true,
cx,
);
}
canceled
};
@@ -1536,24 +1462,45 @@ impl Thread {
canceled
}
/// Returns the feedback given to the thread, if any.
pub fn feedback(&self) -> Option<ThreadFeedback> {
self.feedback
}
/// Reports feedback about the thread and stores it in our telemetry backend.
pub fn report_feedback(
pub fn message_feedback(&self, message_id: MessageId) -> Option<ThreadFeedback> {
self.message_feedback.get(&message_id).copied()
}
pub fn report_message_feedback(
&mut self,
message_id: MessageId,
feedback: ThreadFeedback,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
if self.message_feedback.get(&message_id) == Some(&feedback) {
return Task::ready(Ok(()));
}
let final_project_snapshot = Self::project_snapshot(self.project.clone(), cx);
let serialized_thread = self.serialize(cx);
let thread_id = self.id().clone();
let client = self.project.read(cx).client();
self.feedback = Some(feedback);
let enabled_tool_names: Vec<String> = self
.tools()
.enabled_tools(cx)
.iter()
.map(|tool| tool.name().to_string())
.collect();
self.message_feedback.insert(message_id, feedback);
cx.notify();
let message_content = self
.message(message_id)
.map(|msg| msg.to_string())
.unwrap_or_default();
cx.background_spawn(async move {
let final_project_snapshot = final_project_snapshot.await;
let serialized_thread = serialized_thread.await?;
@@ -1568,6 +1515,9 @@ impl Thread {
"Assistant Thread Rated",
rating,
thread_id,
enabled_tool_names,
message_id = message_id.0,
message_content,
thread_data,
final_project_snapshot
);
@@ -1577,6 +1527,52 @@ impl Thread {
})
}
pub fn report_feedback(
&mut self,
feedback: ThreadFeedback,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
let last_assistant_message_id = self
.messages
.iter()
.rev()
.find(|msg| msg.role == Role::Assistant)
.map(|msg| msg.id);
if let Some(message_id) = last_assistant_message_id {
self.report_message_feedback(message_id, feedback, cx)
} else {
let final_project_snapshot = Self::project_snapshot(self.project.clone(), cx);
let serialized_thread = self.serialize(cx);
let thread_id = self.id().clone();
let client = self.project.read(cx).client();
self.feedback = Some(feedback);
cx.notify();
cx.background_spawn(async move {
let final_project_snapshot = final_project_snapshot.await;
let serialized_thread = serialized_thread.await?;
let thread_data = serde_json::to_value(serialized_thread)
.unwrap_or_else(|_| serde_json::Value::Null);
let rating = match feedback {
ThreadFeedback::Positive => "positive",
ThreadFeedback::Negative => "negative",
};
telemetry::event!(
"Assistant Thread Rated",
rating,
thread_id,
thread_data,
final_project_snapshot
);
client.telemetry().flush_events();
Ok(())
})
}
}
/// Create a snapshot of the current project state including git information and unsaved buffers.
fn project_snapshot(
project: Entity<Project>,
@@ -1792,6 +1788,50 @@ impl Thread {
self.cumulative_token_usage.clone()
}
pub fn auto_capture_telemetry(&mut self, cx: &mut Context<Self>) {
if !cx.has_flag::<feature_flags::ThreadAutoCapture>() {
return;
}
let now = Instant::now();
if let Some(last) = self.last_auto_capture_at {
if now.duration_since(last).as_secs() < 10 {
return;
}
}
self.last_auto_capture_at = Some(now);
let thread_id = self.id().clone();
let github_login = self
.project
.read(cx)
.user_store()
.read(cx)
.current_user()
.map(|user| user.github_login.clone());
let client = self.project.read(cx).client().clone();
let serialize_task = self.serialize(cx);
cx.background_executor()
.spawn(async move {
if let Ok(serialized_thread) = serialize_task.await {
if let Ok(thread_data) = serde_json::to_value(serialized_thread) {
telemetry::event!(
"Agent Thread Auto-Captured",
thread_id = thread_id.to_string(),
thread_data = thread_data,
auto_capture_reason = "tracked_user",
github_login = github_login
);
client.telemetry().flush_events();
}
}
})
.detach();
}
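auto_capture_telemetry above throttles captures via last_auto_capture_at, skipping any capture within 10 seconds of the previous one. A standalone sketch of that throttle pattern using only std::time (names are illustrative):

use std::time::{Duration, Instant};

/// Records when something last fired and suppresses repeats inside `min_interval`.
struct Throttle {
    last: Option<Instant>,
    min_interval: Duration,
}

impl Throttle {
    fn new(min_interval: Duration) -> Self {
        Self { last: None, min_interval }
    }

    /// Returns true (and records the time) only if `min_interval` has elapsed since the
    /// previous successful call; the first call always succeeds.
    fn should_fire(&mut self) -> bool {
        let now = Instant::now();
        if let Some(last) = self.last {
            if now.duration_since(last) < self.min_interval {
                return false;
            }
        }
        self.last = Some(now);
        true
    }
}

fn main() {
    let mut throttle = Throttle::new(Duration::from_secs(10));
    assert!(throttle.should_fire());   // first capture goes through
    assert!(!throttle.should_fire());  // immediate retry is suppressed
}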
pub fn total_token_usage(&self, cx: &App) -> TotalTokenUsage {
let model_registry = LanguageModelRegistry::read_global(cx);
let Some(model) = model_registry.default_model() else {
@@ -1833,19 +1873,17 @@ impl Thread {
self.tool_use
.insert_tool_output(tool_use_id.clone(), tool_name, err, cx);
cx.emit(ThreadEvent::ToolFinished {
tool_use_id,
pending_tool_use: None,
canceled: true,
});
self.tool_finished(tool_use_id.clone(), None, true, cx);
}
}
#[derive(Debug, Clone)]
#[derive(Debug, Clone, Error)]
pub enum ThreadError {
#[error("Payment required")]
PaymentRequired,
#[error("Max monthly spend reached")]
MaxMonthlySpendReached,
#[error("Message {header}: {message}")]
Message {
header: SharedString,
message: SharedString,
@@ -1858,20 +1896,20 @@ pub enum ThreadEvent {
StreamedCompletion,
StreamedAssistantText(MessageId, String),
StreamedAssistantThinking(MessageId, String),
DoneStreaming,
Stopped(Result<StopReason, Arc<anyhow::Error>>),
MessageAdded(MessageId),
MessageEdited(MessageId),
MessageDeleted(MessageId),
SummaryGenerated,
SummaryChanged,
UsePendingTools,
UsePendingTools {
tool_uses: Vec<PendingToolUse>,
},
ToolFinished {
#[allow(unused)]
tool_use_id: LanguageModelToolUseId,
/// The pending tool use that corresponds to this tool.
pending_tool_use: Option<PendingToolUse>,
/// Whether the tool was canceled by the user.
canceled: bool,
},
CheckpointChanged,
ToolConfirmationNeeded,
@@ -1964,9 +2002,9 @@ fn main() {{
thread.to_completion_request(RequestKind::Chat, cx)
});
assert_eq!(request.messages.len(), 1);
assert_eq!(request.messages.len(), 2);
let expected_full_message = format!("{}Please explain this code", expected_context);
assert_eq!(request.messages[0].string_contents(), expected_full_message);
assert_eq!(request.messages[1].string_contents(), expected_full_message);
}
#[gpui::test]
@@ -2057,20 +2095,20 @@ fn main() {{
});
// The request should contain all 3 messages
assert_eq!(request.messages.len(), 3);
assert_eq!(request.messages.len(), 4);
// Check that the contexts are properly formatted in each message
assert!(request.messages[0].string_contents().contains("file1.rs"));
assert!(!request.messages[0].string_contents().contains("file2.rs"));
assert!(!request.messages[0].string_contents().contains("file3.rs"));
assert!(!request.messages[1].string_contents().contains("file1.rs"));
assert!(request.messages[1].string_contents().contains("file2.rs"));
assert!(request.messages[1].string_contents().contains("file1.rs"));
assert!(!request.messages[1].string_contents().contains("file2.rs"));
assert!(!request.messages[1].string_contents().contains("file3.rs"));
assert!(!request.messages[2].string_contents().contains("file1.rs"));
assert!(!request.messages[2].string_contents().contains("file2.rs"));
assert!(request.messages[2].string_contents().contains("file3.rs"));
assert!(request.messages[2].string_contents().contains("file2.rs"));
assert!(!request.messages[2].string_contents().contains("file3.rs"));
assert!(!request.messages[3].string_contents().contains("file1.rs"));
assert!(!request.messages[3].string_contents().contains("file2.rs"));
assert!(request.messages[3].string_contents().contains("file3.rs"));
}
#[gpui::test]
@@ -2108,9 +2146,9 @@ fn main() {{
thread.to_completion_request(RequestKind::Chat, cx)
});
assert_eq!(request.messages.len(), 1);
assert_eq!(request.messages.len(), 2);
assert_eq!(
request.messages[0].string_contents(),
request.messages[1].string_contents(),
"What is the best way to learn Rust?"
);
@@ -2128,13 +2166,13 @@ fn main() {{
thread.to_completion_request(RequestKind::Chat, cx)
});
assert_eq!(request.messages.len(), 2);
assert_eq!(request.messages.len(), 3);
assert_eq!(
request.messages[0].string_contents(),
request.messages[1].string_contents(),
"What is the best way to learn Rust?"
);
assert_eq!(
request.messages[1].string_contents(),
request.messages[2].string_contents(),
"Are there any good books?"
);
}
@@ -2255,18 +2293,19 @@ fn main() {{
let (workspace, cx) =
cx.add_window_view(|window, cx| Workspace::test_new(project.clone(), window, cx));
let thread_store = cx.update(|_, cx| {
ThreadStore::new(
project.clone(),
Arc::default(),
Arc::new(PromptBuilder::new(None).unwrap()),
cx,
)
.unwrap()
});
let thread_store = cx
.update(|_, cx| {
ThreadStore::load(
project.clone(),
Arc::default(),
Arc::new(PromptBuilder::new(None).unwrap()),
cx,
)
})
.await;
let thread = thread_store.update(cx, |store, cx| store.create_thread(cx));
let context_store = cx.new(|_cx| ContextStore::new(workspace.downgrade(), None));
let context_store = cx.new(|_cx| ContextStore::new(project.downgrade(), None));
(workspace, thread_store, thread, context_store)
}

View File

@@ -431,17 +431,6 @@ impl RenderOnce for PastThread {
.end_slot(
h_flex()
.gap_1p5()
.child(
Label::new("Thread")
.color(Color::Muted)
.size(LabelSize::XSmall),
)
.child(
div()
.size(px(3.))
.rounded_full()
.bg(cx.theme().colors().text_disabled),
)
.child(
Label::new(thread_timestamp)
.color(Color::Muted)
@@ -452,12 +441,7 @@ impl RenderOnce for PastThread {
.shape(IconButtonShape::Square)
.icon_size(IconSize::XSmall)
.tooltip(move |window, cx| {
Tooltip::for_action(
"Delete Thread",
&RemoveSelectedThread,
window,
cx,
)
Tooltip::for_action("Delete", &RemoveSelectedThread, window, cx)
})
.on_click({
let assistant_panel = self.assistant_panel.clone();
@@ -538,17 +522,6 @@ impl RenderOnce for PastContext {
.end_slot(
h_flex()
.gap_1p5()
.child(
Label::new("Prompt Editor")
.color(Color::Muted)
.size(LabelSize::XSmall),
)
.child(
div()
.size(px(3.))
.rounded_full()
.bg(cx.theme().colors().text_disabled),
)
.child(
Label::new(context_timestamp)
.color(Color::Muted)
@@ -559,12 +532,7 @@ impl RenderOnce for PastContext {
.shape(IconButtonShape::Square)
.icon_size(IconSize::XSmall)
.tooltip(move |window, cx| {
Tooltip::for_action(
"Delete Prompt Editor",
&RemoveSelectedThread,
window,
cx,
)
Tooltip::for_action("Delete", &RemoveSelectedThread, window, cx)
})
.on_click({
let assistant_panel = self.assistant_panel.clone();

View File

@@ -1,37 +1,57 @@
use std::borrow::Cow;
use std::path::PathBuf;
use std::cell::{Ref, RefCell};
use std::path::{Path, PathBuf};
use std::rc::Rc;
use std::sync::Arc;
use anyhow::{Result, anyhow};
use anyhow::{Context as _, Result, anyhow};
use assistant_settings::{AgentProfile, AgentProfileId, AssistantSettings};
use assistant_tool::{ToolId, ToolSource, ToolWorkingSet};
use chrono::{DateTime, Utc};
use collections::HashMap;
use context_server::manager::ContextServerManager;
use context_server::{ContextServerFactoryRegistry, ContextServerTool};
use fs::Fs;
use futures::FutureExt as _;
use futures::future::{self, BoxFuture, Shared};
use gpui::{
App, BackgroundExecutor, Context, Entity, Global, ReadGlobal, SharedString, Subscription, Task,
prelude::*,
App, BackgroundExecutor, Context, Entity, EventEmitter, Global, ReadGlobal, SharedString,
Subscription, Task, prelude::*,
};
use heed::Database;
use heed::types::SerdeBincode;
use language_model::{LanguageModelToolUseId, Role, TokenUsage};
use project::Project;
use prompt_store::PromptBuilder;
use project::{Project, Worktree};
use prompt_store::{ProjectContext, PromptBuilder, RulesFileContext, WorktreeContext};
use serde::{Deserialize, Serialize};
use settings::{Settings as _, SettingsStore};
use util::ResultExt as _;
use crate::thread::{
DetailedSummaryState, MessageId, ProjectSnapshot, Thread, ThreadEvent, ThreadId,
};
use crate::thread::{DetailedSummaryState, MessageId, ProjectSnapshot, Thread, ThreadId};
const RULES_FILE_NAMES: [&'static str; 6] = [
".rules",
".cursorrules",
".windsurfrules",
".clinerules",
".github/copilot-instructions.md",
"CLAUDE.md",
];
pub fn init(cx: &mut App) {
ThreadsDatabase::init(cx);
}
/// Project context shared by all threads created by this ThreadStore, used to build each thread's system prompt
#[derive(Clone, Default)]
pub struct SharedProjectContext(Rc<RefCell<Option<ProjectContext>>>);
impl SharedProjectContext {
pub fn borrow(&self) -> Ref<Option<ProjectContext>> {
self.0.borrow()
}
}
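SharedProjectContext wraps the context in Rc<RefCell<Option<..>>>, so the ThreadStore and every Thread it creates hold clones of the same cell; when reload_system_prompt later fills it in, existing threads see the value on their next borrow. A minimal standalone sketch of that sharing pattern, with a toy ProjectContext standing in for the prompt_store type:

use std::cell::RefCell;
use std::rc::Rc;

// Toy stand-in for prompt_store::ProjectContext; illustrative only.
struct ProjectContext {
    worktree_root: String,
}

#[derive(Clone, Default)]
struct SharedProjectContext(Rc<RefCell<Option<ProjectContext>>>);

fn main() {
    let store_handle = SharedProjectContext::default();
    // A thread receives a clone of the handle; both clones point at the same cell.
    let thread_handle = store_handle.clone();
    assert!(thread_handle.0.borrow().is_none());

    // When the store finishes loading worktree info, every clone observes the value.
    *store_handle.0.borrow_mut() = Some(ProjectContext {
        worktree_root: "zed".into(),
    });
    assert_eq!(
        thread_handle.0.borrow().as_ref().map(|ctx| ctx.worktree_root.clone()),
        Some("zed".to_string())
    );
}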
pub struct ThreadStore {
project: Entity<Project>,
tools: Arc<ToolWorkingSet>,
@@ -39,43 +59,187 @@ pub struct ThreadStore {
context_server_manager: Entity<ContextServerManager>,
context_server_tool_ids: HashMap<Arc<str>, Vec<ToolId>>,
threads: Vec<SerializedThreadMetadata>,
project_context: SharedProjectContext,
_subscriptions: Vec<Subscription>,
}
pub struct RulesLoadingError {
pub message: SharedString,
}
impl EventEmitter<RulesLoadingError> for ThreadStore {}
impl ThreadStore {
pub fn new(
pub fn load(
project: Entity<Project>,
tools: Arc<ToolWorkingSet>,
prompt_builder: Arc<PromptBuilder>,
cx: &mut App,
) -> Result<Entity<Self>> {
let this = cx.new(|cx| {
let context_server_factory_registry = ContextServerFactoryRegistry::default_global(cx);
let context_server_manager = cx.new(|cx| {
ContextServerManager::new(context_server_factory_registry, project.clone(), cx)
});
let settings_subscription =
cx.observe_global::<SettingsStore>(move |this: &mut Self, cx| {
this.load_default_profile(cx);
});
) -> Task<Entity<Self>> {
let thread_store = cx.new(|cx| Self::new(project, tools, prompt_builder, cx));
let reload = thread_store.update(cx, |store, cx| store.reload_system_prompt(cx));
cx.foreground_executor().spawn(async move {
reload.await;
thread_store
})
}
let this = Self {
project,
tools,
prompt_builder,
context_server_manager,
context_server_tool_ids: HashMap::default(),
threads: Vec::new(),
_subscriptions: vec![settings_subscription],
};
this.load_default_profile(cx);
this.register_context_server_handlers(cx);
this.reload(cx).detach_and_log_err(cx);
this
fn new(
project: Entity<Project>,
tools: Arc<ToolWorkingSet>,
prompt_builder: Arc<PromptBuilder>,
cx: &mut Context<Self>,
) -> Self {
let context_server_factory_registry = ContextServerFactoryRegistry::default_global(cx);
let context_server_manager = cx.new(|cx| {
ContextServerManager::new(context_server_factory_registry, project.clone(), cx)
});
let settings_subscription =
cx.observe_global::<SettingsStore>(move |this: &mut Self, cx| {
this.load_default_profile(cx);
});
let project_subscription = cx.subscribe(&project, Self::handle_project_event);
Ok(this)
let this = Self {
project,
tools,
prompt_builder,
context_server_manager,
context_server_tool_ids: HashMap::default(),
threads: Vec::new(),
project_context: SharedProjectContext::default(),
_subscriptions: vec![settings_subscription, project_subscription],
};
this.load_default_profile(cx);
this.register_context_server_handlers(cx);
this.reload(cx).detach_and_log_err(cx);
this
}
fn handle_project_event(
&mut self,
_project: Entity<Project>,
event: &project::Event,
cx: &mut Context<Self>,
) {
match event {
project::Event::WorktreeAdded(_) | project::Event::WorktreeRemoved(_) => {
self.reload_system_prompt(cx).detach();
}
project::Event::WorktreeUpdatedEntries(_, items) => {
if items.iter().any(|(path, _, _)| {
RULES_FILE_NAMES
.iter()
.any(|name| path.as_ref() == Path::new(name))
}) {
self.reload_system_prompt(cx).detach();
}
}
_ => {}
}
}
pub fn reload_system_prompt(&self, cx: &mut Context<Self>) -> Task<()> {
let project = self.project.read(cx);
let tasks = project
.visible_worktrees(cx)
.map(|worktree| {
Self::load_worktree_info_for_system_prompt(
project.fs().clone(),
worktree.read(cx),
cx,
)
})
.collect::<Vec<_>>();
cx.spawn(async move |this, cx| {
let results = futures::future::join_all(tasks).await;
let worktrees = results
.into_iter()
.map(|(worktree, rules_error)| {
if let Some(rules_error) = rules_error {
this.update(cx, |_, cx| cx.emit(rules_error)).ok();
}
worktree
})
.collect::<Vec<_>>();
this.update(cx, |this, _cx| {
*this.project_context.0.borrow_mut() = Some(ProjectContext::new(worktrees));
})
.ok();
})
}
fn load_worktree_info_for_system_prompt(
fs: Arc<dyn Fs>,
worktree: &Worktree,
cx: &App,
) -> Task<(WorktreeContext, Option<RulesLoadingError>)> {
let root_name = worktree.root_name().into();
let abs_path = worktree.abs_path();
let rules_task = Self::load_worktree_rules_file(fs, worktree, cx);
let Some(rules_task) = rules_task else {
return Task::ready((
WorktreeContext {
root_name,
abs_path,
rules_file: None,
},
None,
));
};
cx.spawn(async move |_| {
let (rules_file, rules_file_error) = match rules_task.await {
Ok(rules_file) => (Some(rules_file), None),
Err(err) => (
None,
Some(RulesLoadingError {
message: format!("{err}").into(),
}),
),
};
let worktree_info = WorktreeContext {
root_name,
abs_path,
rules_file,
};
(worktree_info, rules_file_error)
})
}
fn load_worktree_rules_file(
fs: Arc<dyn Fs>,
worktree: &Worktree,
cx: &App,
) -> Option<Task<Result<RulesFileContext>>> {
let selected_rules_file = RULES_FILE_NAMES
.into_iter()
.filter_map(|name| {
worktree
.entry_for_path(name)
.filter(|entry| entry.is_file())
.map(|entry| (entry.path.clone(), worktree.absolutize(&entry.path)))
})
.next();
// Note that Cline supports `.clinerules` being a directory, but that is not currently
// supported. This doesn't seem to occur often in GitHub repositories.
selected_rules_file.map(|(path_in_worktree, abs_path)| {
let fs = fs.clone();
cx.background_spawn(async move {
let abs_path = abs_path?;
let text = fs.load(&abs_path).await.with_context(|| {
format!("Failed to load assistant rules file {:?}", abs_path)
})?;
anyhow::Ok(RulesFileContext {
path_in_worktree,
abs_path: abs_path.into(),
text: text.trim().to_string(),
})
})
})
}
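load_worktree_rules_file above takes the first name in RULES_FILE_NAMES that exists as a file, so the array order acts as a priority list (.rules wins over CLAUDE.md, for example). A standalone sketch of that selection, with a plain slice of existing paths standing in for the worktree entry lookup (illustrative only):

const RULES_FILE_NAMES: [&str; 6] = [
    ".rules",
    ".cursorrules",
    ".windsurfrules",
    ".clinerules",
    ".github/copilot-instructions.md",
    "CLAUDE.md",
];

/// Picks the highest-priority rules file that actually exists. `existing` stands in for
/// the worktree's entry lookup.
fn select_rules_file<'a>(existing: &[&'a str]) -> Option<&'a str> {
    RULES_FILE_NAMES
        .into_iter()
        .find_map(|name| existing.iter().copied().find(|entry| *entry == name))
}

fn main() {
    // Both files exist, but `.cursorrules` comes first in the priority list.
    assert_eq!(
        select_rules_file(&["CLAUDE.md", ".cursorrules"]),
        Some(".cursorrules")
    );
    assert_eq!(select_rules_file(&["README.md"]), None);
}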
pub fn context_server_manager(&self) -> Entity<ContextServerManager> {
@@ -107,6 +271,7 @@ impl ThreadStore {
self.project.clone(),
self.tools.clone(),
self.prompt_builder.clone(),
self.project_context.clone(),
cx,
)
})
@@ -134,21 +299,12 @@ impl ThreadStore {
this.project.clone(),
this.tools.clone(),
this.prompt_builder.clone(),
this.project_context.clone(),
cx,
)
})
})?;
let (system_prompt_context, load_error) = thread
.update(cx, |thread, cx| thread.load_system_prompt_context(cx))?
.await;
thread.update(cx, |thread, cx| {
thread.set_system_prompt_context(system_prompt_context);
if let Some(load_error) = load_error {
cx.emit(ThreadEvent::ShowError(load_error));
}
})?;
Ok(thread)
})
}
@@ -491,7 +647,7 @@ impl ThreadsDatabase {
let database_future = executor
.spawn({
let executor = executor.clone();
let database_path = paths::support_dir().join("threads/threads-db.1.mdb");
let database_path = paths::data_dir().join("threads/threads-db.1.mdb");
async move { ThreadsDatabase::new(database_path, executor) }
})
.then(|result| future::ready(result.map(Arc::new).map_err(Arc::new)))

View File

@@ -334,6 +334,8 @@ impl ToolUseState {
output: Result<String>,
cx: &App,
) -> Option<PendingToolUse> {
telemetry::event!("Agent Tool Finished", tool_name, success = output.is_ok());
match output {
Ok(tool_result) => {
let model_registry = LanguageModelRegistry::read_global(cx);

View File

@@ -1,5 +1,7 @@
mod agent_notification;
mod context_pill;
mod user_spending;
pub use agent_notification::*;
pub use context_pill::*;
// pub use user_spending::*;

View File

@@ -280,9 +280,10 @@ impl AddedContext {
}
AssistantContext::Directory(directory_context) => {
// TODO: handle worktree disambiguation. Maybe by storing an `Arc<dyn File>` to also
// handle renames?
let full_path = &directory_context.project_path.path;
let full_path = directory_context
.worktree
.read(cx)
.full_path(&directory_context.path);
let full_path_string: SharedString =
full_path.to_string_lossy().into_owned().into();
let name = full_path

View File

@@ -0,0 +1,186 @@
use gpui::{Entity, Render};
use ui::{ProgressBar, prelude::*};
#[derive(RegisterComponent)]
pub struct UserSpending {
free_tier_current: u32,
free_tier_cap: u32,
over_tier_current: u32,
over_tier_cap: u32,
free_tier_progress: Entity<ProgressBar>,
over_tier_progress: Entity<ProgressBar>,
}
impl UserSpending {
pub fn new(
free_tier_current: u32,
free_tier_cap: u32,
over_tier_current: u32,
over_tier_cap: u32,
cx: &mut App,
) -> Self {
let free_tier_capped = free_tier_current == free_tier_cap;
let free_tier_near_capped =
free_tier_current as f32 / 100.0 >= free_tier_cap as f32 / 100.0 * 0.9;
let over_tier_capped = over_tier_current == over_tier_cap;
let over_tier_near_capped =
over_tier_current as f32 / 100.0 >= over_tier_cap as f32 / 100.0 * 0.9;
let free_tier_progress = cx.new(|cx| {
ProgressBar::new(
"free_tier",
free_tier_current as f32,
free_tier_cap as f32,
cx,
)
});
let over_tier_progress = cx.new(|cx| {
ProgressBar::new(
"over_tier",
over_tier_current as f32,
over_tier_cap as f32,
cx,
)
});
if free_tier_capped {
free_tier_progress.update(cx, |progress_bar, cx| {
progress_bar.fg_color(cx.theme().status().error);
});
} else if free_tier_near_capped {
free_tier_progress.update(cx, |progress_bar, cx| {
progress_bar.fg_color(cx.theme().status().warning);
});
}
if over_tier_capped {
over_tier_progress.update(cx, |progress_bar, cx| {
progress_bar.fg_color(cx.theme().status().error);
});
} else if over_tier_near_capped {
over_tier_progress.update(cx, |progress_bar, cx| {
progress_bar.fg_color(cx.theme().status().warning);
});
}
Self {
free_tier_current,
free_tier_cap,
over_tier_current,
over_tier_cap,
free_tier_progress,
over_tier_progress,
}
}
}
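In UserSpending, amounts are stored in cents and rendered as dollars, and the warning color kicks in once usage reaches 90% of the cap (the source divides both sides by 100 before comparing, which cancels out). A standalone sketch of those two calculations, not the component's actual API:

/// Spending values are in cents; "near capped" means usage has reached 90% of the cap.
fn is_near_cap(current_cents: u32, cap_cents: u32) -> bool {
    current_cents as f32 >= cap_cents as f32 * 0.9
}

/// Converts a cent amount to the dollar figure used in the "$current / $cap" labels above.
fn format_dollars(cents: u32) -> String {
    format!("${}", cents as f32 / 100.0)
}

fn main() {
    assert!(is_near_cap(1800, 2000));  // $18.00 of a $20.00 cap
    assert!(!is_near_cap(1000, 2000)); // $10.00 of a $20.00 cap
    assert_eq!(format_dollars(2000), "$20");
}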
impl Render for UserSpending {
fn render(&mut self, _window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
let formatted_free_tier = format!(
"${} / ${}",
self.free_tier_current as f32 / 100.0,
self.free_tier_cap as f32 / 100.0
);
let formatted_over_tier = format!(
"${} / ${}",
self.over_tier_current as f32 / 100.0,
self.over_tier_cap as f32 / 100.0
);
v_group()
.elevation_2(cx)
.py_1p5()
.px_2p5()
.w(px(360.))
.child(
v_flex()
.child(
v_flex()
.p_1p5()
.gap_0p5()
.child(
h_flex()
.justify_between()
.child(Label::new("Free Tier Usage").size(LabelSize::Small))
.child(
Label::new(formatted_free_tier)
.size(LabelSize::Small)
.color(Color::Muted),
),
)
.child(self.free_tier_progress.clone()),
)
.child(
v_flex()
.p_1p5()
.gap_0p5()
.child(
h_flex()
.justify_between()
.child(Label::new("Current Spending").size(LabelSize::Small))
.child(
Label::new(formatted_over_tier)
.size(LabelSize::Small)
.color(Color::Muted),
),
)
.child(self.over_tier_progress.clone()),
),
)
}
}
impl Component for UserSpending {
fn scope() -> ComponentScope {
ComponentScope::None
}
fn preview(_window: &mut Window, cx: &mut App) -> Option<AnyElement> {
let new_user = cx.new(|cx| UserSpending::new(0, 2000, 0, 2000, cx));
let free_capped = cx.new(|cx| UserSpending::new(2000, 2000, 0, 2000, cx));
let free_near_capped = cx.new(|cx| UserSpending::new(1800, 2000, 0, 2000, cx));
let over_near_capped = cx.new(|cx| UserSpending::new(2000, 2000, 1800, 2000, cx));
let over_capped = cx.new(|cx| UserSpending::new(1000, 2000, 2000, 2000, cx));
Some(
v_flex()
.gap_6()
.p_4()
.children(vec![example_group(vec![
single_example(
"New User",
div().size_full().child(new_user.clone()).into_any_element(),
),
single_example(
"Free Tier Capped",
div()
.size_full()
.child(free_capped.clone())
.into_any_element(),
),
single_example(
"Free Tier Near Capped",
div()
.size_full()
.child(free_near_capped.clone())
.into_any_element(),
),
single_example(
"Over Tier Near Capped",
div()
.size_full()
.child(over_near_capped.clone())
.into_any_element(),
),
single_example(
"Over Tier Capped",
div()
.size_full()
.child(over_capped.clone())
.into_any_element(),
),
])])
.into_any_element(),
)
}
}

View File

@@ -1,52 +0,0 @@
// Copied from `crates/zed/build.rs`, with removal of code for including the zed icon on windows.
use std::process::Command;
fn main() {
if cfg!(target_os = "macos") {
println!("cargo:rustc-env=MACOSX_DEPLOYMENT_TARGET=10.15.7");
// Weakly link ReplayKit to ensure Zed can be used on macOS 10.15+.
println!("cargo:rustc-link-arg=-Wl,-weak_framework,ReplayKit");
// Seems to be required to enable Swift concurrency
println!("cargo:rustc-link-arg=-Wl,-rpath,/usr/lib/swift");
// Register exported Objective-C selectors, protocols, etc
println!("cargo:rustc-link-arg=-Wl,-ObjC");
}
// Populate git sha environment variable if git is available
println!("cargo:rerun-if-changed=../../.git/logs/HEAD");
println!(
"cargo:rustc-env=TARGET={}",
std::env::var("TARGET").unwrap()
);
if let Ok(output) = Command::new("git").args(["rev-parse", "HEAD"]).output() {
if output.status.success() {
let git_sha = String::from_utf8_lossy(&output.stdout);
let git_sha = git_sha.trim();
println!("cargo:rustc-env=ZED_COMMIT_SHA={git_sha}");
if let Ok(build_profile) = std::env::var("PROFILE") {
if build_profile == "release" {
// This is currently the best way to make `cargo build ...`'s build script
// to print something to stdout without extra verbosity.
println!(
"cargo:warning=Info: using '{git_sha}' hash for ZED_COMMIT_SHA env var"
);
}
}
}
}
#[cfg(target_os = "windows")]
{
#[cfg(target_env = "msvc")]
{
// todo(windows): This is to avoid stack overflow. Remove it when solved.
println!("cargo:rustc-link-arg=/stack:{}", 8 * 1024 * 1024);
}
}
}

View File

@@ -1,384 +0,0 @@
use crate::git_commands::{run_git, setup_temp_repo};
use crate::headless_assistant::{HeadlessAppState, HeadlessAssistant};
use crate::{get_exercise_language, get_exercise_name};
use agent::RequestKind;
use anyhow::{Result, anyhow};
use collections::HashMap;
use gpui::{App, Task};
use language_model::{LanguageModel, TokenUsage};
use serde::{Deserialize, Serialize};
use std::{
fs,
io::Write,
path::{Path, PathBuf},
sync::Arc,
time::{Duration, SystemTime},
};
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct EvalResult {
pub exercise_name: String,
pub diff: String,
pub assistant_response: String,
pub elapsed_time_ms: u128,
pub timestamp: u128,
// Token usage fields
pub input_tokens: usize,
pub output_tokens: usize,
pub total_tokens: usize,
pub tool_use_counts: usize,
}
pub struct EvalOutput {
pub diff: String,
pub last_message: String,
pub elapsed_time: Duration,
pub assistant_response_count: usize,
pub tool_use_counts: HashMap<Arc<str>, u32>,
pub token_usage: TokenUsage,
}
#[derive(Deserialize)]
pub struct EvalSetup {
pub url: String,
pub base_sha: String,
}
pub struct Eval {
pub repo_path: PathBuf,
pub eval_setup: EvalSetup,
pub user_prompt: String,
}
impl Eval {
// Keep this method for potential future use, but mark it as intentionally unused
#[allow(dead_code)]
pub async fn load(_name: String, path: PathBuf, repos_dir: &Path) -> Result<Self> {
let prompt_path = path.join("prompt.txt");
let user_prompt = smol::unblock(|| std::fs::read_to_string(prompt_path)).await?;
let setup_path = path.join("setup.json");
let setup_contents = smol::unblock(|| std::fs::read_to_string(setup_path)).await?;
let eval_setup = serde_json_lenient::from_str_lenient::<EvalSetup>(&setup_contents)?;
// Move this internal function inside the load method since it's only used here
fn repo_dir_name(url: &str) -> String {
url.trim_start_matches("https://")
.replace(|c: char| !c.is_alphanumeric(), "_")
}
let repo_path = repos_dir.join(repo_dir_name(&eval_setup.url));
Ok(Eval {
repo_path,
eval_setup,
user_prompt,
})
}
pub fn run(
self,
app_state: Arc<HeadlessAppState>,
model: Arc<dyn LanguageModel>,
cx: &mut App,
) -> Task<Result<EvalOutput>> {
cx.spawn(async move |cx| {
run_git(&self.repo_path, &["checkout", &self.eval_setup.base_sha]).await?;
let (assistant, done_rx) =
cx.update(|cx| HeadlessAssistant::new(app_state.clone(), cx))??;
let _worktree = assistant
.update(cx, |assistant, cx| {
assistant.project.update(cx, |project, cx| {
project.create_worktree(&self.repo_path, true, cx)
})
})?
.await?;
let start_time = std::time::SystemTime::now();
let (system_prompt_context, load_error) = cx
.update(|cx| {
assistant
.read(cx)
.thread
.read(cx)
.load_system_prompt_context(cx)
})?
.await;
if let Some(load_error) = load_error {
return Err(anyhow!("{:?}", load_error));
};
assistant.update(cx, |assistant, cx| {
assistant.thread.update(cx, |thread, cx| {
let context = vec![];
thread.insert_user_message(self.user_prompt.clone(), context, None, cx);
thread.set_system_prompt_context(system_prompt_context);
thread.send_to_model(model, RequestKind::Chat, cx);
});
})?;
done_rx.recv().await??;
// Add this section to check untracked files
println!("Checking for untracked files:");
let untracked = run_git(
&self.repo_path,
&["ls-files", "--others", "--exclude-standard"],
)
.await?;
if untracked.is_empty() {
println!("No untracked files found");
} else {
// Add all files to git so they appear in the diff
println!("Adding untracked files to git");
run_git(&self.repo_path, &["add", "."]).await?;
}
// get git status
let _status = run_git(&self.repo_path, &["status", "--short"]).await?;
let elapsed_time = start_time.elapsed()?;
// Get diff of staged changes (the files we just added)
let staged_diff = run_git(&self.repo_path, &["diff", "--staged"]).await?;
// Get diff of unstaged changes
let unstaged_diff = run_git(&self.repo_path, &["diff"]).await?;
// Combine both diffs
let diff = if unstaged_diff.is_empty() {
staged_diff
} else if staged_diff.is_empty() {
unstaged_diff
} else {
format!(
"# Staged changes\n{}\n\n# Unstaged changes\n{}",
staged_diff, unstaged_diff
)
};
assistant.update(cx, |assistant, cx| {
let thread = assistant.thread.read(cx);
let last_message = thread.messages().last().unwrap();
if last_message.role != language_model::Role::Assistant {
return Err(anyhow!("Last message is not from assistant"));
}
let assistant_response_count = thread
.messages()
.filter(|message| message.role == language_model::Role::Assistant)
.count();
Ok(EvalOutput {
diff,
last_message: last_message.to_string(),
elapsed_time,
assistant_response_count,
tool_use_counts: assistant.tool_use_counts.clone(),
token_usage: thread.cumulative_token_usage(),
})
})?
})
}
}
impl EvalOutput {
// Keep this method for potential future use, but mark it as intentionally unused
#[allow(dead_code)]
pub fn save_to_directory(&self, output_dir: &Path, eval_output_value: String) -> Result<()> {
// Create the output directory if it doesn't exist
fs::create_dir_all(&output_dir)?;
// Save the diff to a file
let diff_path = output_dir.join("diff.patch");
let mut diff_file = fs::File::create(&diff_path)?;
diff_file.write_all(self.diff.as_bytes())?;
// Save the last message to a file
let message_path = output_dir.join("assistant_response.txt");
let mut message_file = fs::File::create(&message_path)?;
message_file.write_all(self.last_message.as_bytes())?;
// Current metrics for this run
let current_metrics = serde_json::json!({
"elapsed_time_ms": self.elapsed_time.as_millis(),
"assistant_response_count": self.assistant_response_count,
"tool_use_counts": self.tool_use_counts,
"token_usage": self.token_usage,
"eval_output_value": eval_output_value,
});
// Get current timestamp in milliseconds
let timestamp = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)?
.as_millis()
.to_string();
// Path to metrics file
let metrics_path = output_dir.join("metrics.json");
// Load existing metrics if the file exists, or create a new object
let mut historical_metrics = if metrics_path.exists() {
let metrics_content = fs::read_to_string(&metrics_path)?;
serde_json::from_str::<serde_json::Value>(&metrics_content)
.unwrap_or_else(|_| serde_json::json!({}))
} else {
serde_json::json!({})
};
// Add new run with timestamp as key
if let serde_json::Value::Object(ref mut map) = historical_metrics {
map.insert(timestamp, current_metrics);
}
// Write updated metrics back to file
let metrics_json = serde_json::to_string_pretty(&historical_metrics)?;
let mut metrics_file = fs::File::create(&metrics_path)?;
metrics_file.write_all(metrics_json.as_bytes())?;
Ok(())
}
}
pub async fn read_instructions(exercise_path: &Path) -> Result<String> {
let instructions_path = exercise_path.join(".docs").join("instructions.md");
println!("Reading instructions from: {}", instructions_path.display());
let instructions = smol::unblock(move || std::fs::read_to_string(&instructions_path)).await?;
Ok(instructions)
}
pub async fn save_eval_results(exercise_path: &Path, results: Vec<EvalResult>) -> Result<()> {
let eval_dir = exercise_path.join("evaluation");
fs::create_dir_all(&eval_dir)?;
let eval_file = eval_dir.join("evals.json");
println!("Saving evaluation results to: {}", eval_file.display());
println!(
"Results to save: {} evaluations for exercise path: {}",
results.len(),
exercise_path.display()
);
// Check file existence before reading/writing
if eval_file.exists() {
println!("Existing evals.json file found, will update it");
} else {
println!("No existing evals.json file found, will create new one");
}
// Structure to organize evaluations by test name and timestamp
let mut eval_data: serde_json::Value = if eval_file.exists() {
let content = fs::read_to_string(&eval_file)?;
serde_json::from_str(&content).unwrap_or_else(|_| serde_json::json!({}))
} else {
serde_json::json!({})
};
// Get current timestamp for this batch of results
let timestamp = SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)?
.as_millis()
.to_string();
// Group the new results by test name (exercise name)
for result in results {
let exercise_name = &result.exercise_name;
println!("Adding result: exercise={}", exercise_name);
// Ensure the exercise entry exists
if eval_data.get(exercise_name).is_none() {
eval_data[exercise_name] = serde_json::json!({});
}
// Ensure the timestamp entry exists as an object
if eval_data[exercise_name].get(&timestamp).is_none() {
eval_data[exercise_name][&timestamp] = serde_json::json!({});
}
// Store this result under the timestamp key for this exercise
eval_data[exercise_name][&timestamp] = serde_json::to_value(&result)?;
}
// Write back to file with pretty formatting
let json_content = serde_json::to_string_pretty(&eval_data)?;
match fs::write(&eval_file, json_content) {
Ok(_) => println!("✓ Successfully saved results to {}", eval_file.display()),
Err(e) => println!("✗ Failed to write results file: {}", e),
}
Ok(())
}
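For reference, a minimal sketch of the evals.json layout this function produces: one object per exercise name, keyed by the millisecond timestamp of the run, holding the serialized EvalResult. All field values below are placeholders, not output from a real run.

use serde_json::json;

fn main() {
    // Shape: { "<exercise_name>": { "<timestamp_ms>": { ...serialized EvalResult... } } }
    let example = json!({
        "anagram": {
            "1712800000000": {
                "exercise_name": "anagram",
                "diff": "--- a/src/lib.rs\n+++ b/src/lib.rs\n...",
                "assistant_response": "...",
                "elapsed_time_ms": 42000,
                "timestamp": 1712800000000u64,
                "input_tokens": 1200,
                "output_tokens": 800,
                "total_tokens": 2000,
                "tool_use_counts": 3
            }
        }
    });
    println!("{}", serde_json::to_string_pretty(&example).unwrap());
}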
pub async fn run_exercise_eval(
exercise_path: PathBuf,
model: Arc<dyn LanguageModel>,
app_state: Arc<HeadlessAppState>,
base_sha: String,
_framework_path: PathBuf,
cx: gpui::AsyncApp,
) -> Result<EvalResult> {
let exercise_name = get_exercise_name(&exercise_path);
let language = get_exercise_language(&exercise_path)?;
let mut instructions = read_instructions(&exercise_path).await?;
instructions.push_str(&format!(
"\n\nWhen writing the code for this prompt, use {} to achieve the goal.",
language
));
println!("Running evaluation for exercise: {}", exercise_name);
// Create temporary directory with exercise files
let temp_dir = setup_temp_repo(&exercise_path, &base_sha).await?;
let temp_path = temp_dir.path().to_path_buf();
let local_commit_sha = run_git(&temp_path, &["rev-parse", "HEAD"]).await?;
let start_time = SystemTime::now();
// Create a basic eval struct to work with the existing system
let eval = Eval {
repo_path: temp_path.clone(),
eval_setup: EvalSetup {
url: format!("file://{}", temp_path.display()),
base_sha: local_commit_sha, // Use the local commit SHA instead of the framework base SHA
},
user_prompt: instructions.clone(),
};
// Run the evaluation
let eval_output = cx
.update(|cx| eval.run(app_state.clone(), model.clone(), cx))?
.await?;
// Get diff from git
let diff = eval_output.diff.clone();
let elapsed_time = start_time.elapsed()?;
// Calculate total tokens as the sum of input and output tokens
let input_tokens = eval_output.token_usage.input_tokens;
let output_tokens = eval_output.token_usage.output_tokens;
let tool_use_counts = eval_output.tool_use_counts.values().sum::<u32>();
let total_tokens = input_tokens + output_tokens;
// Save results to evaluation directory
let result = EvalResult {
exercise_name: exercise_name.clone(),
diff,
assistant_response: eval_output.last_message.clone(),
elapsed_time_ms: elapsed_time.as_millis(),
timestamp: SystemTime::now()
.duration_since(SystemTime::UNIX_EPOCH)?
.as_millis(),
// Convert u32 token counts to usize
input_tokens: input_tokens.try_into().unwrap(),
output_tokens: output_tokens.try_into().unwrap(),
total_tokens: total_tokens.try_into().unwrap(),
tool_use_counts: tool_use_counts.try_into().unwrap(),
};
Ok(result)
}

View File

@@ -1,149 +0,0 @@
use anyhow::{Result, anyhow};
use std::{
fs,
path::{Path, PathBuf},
};
pub fn get_exercise_name(exercise_path: &Path) -> String {
exercise_path
.file_name()
.unwrap_or_default()
.to_string_lossy()
.to_string()
}
pub fn get_exercise_language(exercise_path: &Path) -> Result<String> {
// Extract the language from the path (eval_code/python/exercises/... => python)
let parts: Vec<_> = exercise_path.components().collect();
for (i, part) in parts.iter().enumerate() {
if i > 0 && part.as_os_str() == "eval_code" {
if i + 1 < parts.len() {
let language = parts[i + 1].as_os_str().to_string_lossy().to_string();
return Ok(language);
}
}
}
Err(anyhow!(
"Could not determine language from path: {:?}",
exercise_path
))
}
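A small illustrative check of the two helpers above, assuming it lives in the same module; the exercise path is made up.

use std::path::Path;

fn main() -> anyhow::Result<()> {
    // Made-up exercise path inside a zed-ace-framework checkout.
    let exercise = Path::new("../zed-ace-framework/eval_code/python/exercises/practice/anagram");
    // The final path component is the exercise name.
    assert_eq!(get_exercise_name(exercise), "anagram");
    // The component right after "eval_code" is treated as the language.
    assert_eq!(get_exercise_language(exercise)?, "python");
    Ok(())
}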
pub fn find_exercises(
framework_path: &Path,
languages: &[&str],
max_per_language: Option<usize>,
) -> Result<Vec<PathBuf>> {
let mut all_exercises = Vec::new();
println!("Searching for exercises in languages: {:?}", languages);
for language in languages {
let language_dir = framework_path
.join("eval_code")
.join(language)
.join("exercises")
.join("practice");
println!("Checking language directory: {:?}", language_dir);
if !language_dir.exists() {
println!("Warning: Language directory not found: {:?}", language_dir);
continue;
}
let mut exercises = Vec::new();
match fs::read_dir(&language_dir) {
Ok(entries) => {
for entry_result in entries {
match entry_result {
Ok(entry) => {
let path = entry.path();
if path.is_dir() {
// Special handling for "internal" directory
if *language == "internal" {
// Check for repo_info.json to validate it's an internal exercise
let repo_info_path = path.join(".meta").join("repo_info.json");
let instructions_path =
path.join(".docs").join("instructions.md");
if repo_info_path.exists() && instructions_path.exists() {
exercises.push(path);
}
} else {
// Map the language to the file extension - original code
let language_extension = match *language {
"python" => "py",
"go" => "go",
"rust" => "rs",
"typescript" => "ts",
"javascript" => "js",
"ruby" => "rb",
"php" => "php",
"bash" => "sh",
"multi" => "diff",
_ => continue, // Skip unsupported languages
};
// Check if this is a valid exercise with instructions and example
let instructions_path =
path.join(".docs").join("instructions.md");
let has_instructions = instructions_path.exists();
let example_path = path
.join(".meta")
.join(format!("example.{}", language_extension));
let has_example = example_path.exists();
if has_instructions && has_example {
exercises.push(path);
}
}
}
}
Err(err) => println!("Error reading directory entry: {}", err),
}
}
}
Err(err) => println!(
"Error reading directory {}: {}",
language_dir.display(),
err
),
}
// Sort exercises by name for consistent selection
exercises.sort_by(|a, b| {
let a_name = a.file_name().unwrap_or_default().to_string_lossy();
let b_name = b.file_name().unwrap_or_default().to_string_lossy();
a_name.cmp(&b_name)
});
// Apply the limit if specified
if let Some(limit) = max_per_language {
if exercises.len() > limit {
println!(
"Limiting {} exercises to {} for language {}",
exercises.len(),
limit,
language
);
exercises.truncate(limit);
}
}
println!(
"Found {} exercises for language {}: {:?}",
exercises.len(),
language,
exercises
.iter()
.map(|p| p.file_name().unwrap_or_default().to_string_lossy())
.collect::<Vec<_>>()
);
all_exercises.extend(exercises);
}
Ok(all_exercises)
}
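For orientation, a hedged sketch of the on-disk layout this function scans. The concrete language, exercise, and file names below are invented; only the eval_code/<language>/exercises/practice structure and the .docs/.meta checks come from the code above.

use std::path::PathBuf;

fn main() {
    let framework = PathBuf::from("../zed-ace-framework");
    // For a regular language, an exercise is valid when both of these exist:
    let exercise = framework.join("eval_code/python/exercises/practice/anagram");
    let _instructions = exercise.join(".docs/instructions.md");
    let _example = exercise.join(".meta/example.py");
    // For the "internal" pseudo-language, repo_info.json replaces the example file:
    let internal = framework.join("eval_code/internal/exercises/practice/some_thread");
    let _repo_info = internal.join(".meta/repo_info.json");
    let _internal_instructions = internal.join(".docs/instructions.md");
}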

View File

@@ -1,125 +0,0 @@
use anyhow::{Result, anyhow};
use serde::Deserialize;
use std::{fs, path::Path};
use tempfile::TempDir;
use util::command::new_smol_command;
use walkdir::WalkDir;
#[derive(Debug, Deserialize)]
pub struct SetupConfig {
#[serde(rename = "base.sha")]
pub base_sha: String,
}
#[derive(Debug, Deserialize)]
pub struct RepoInfo {
pub remote_url: String,
pub head_sha: String,
}
pub async fn run_git(repo_path: &Path, args: &[&str]) -> Result<String> {
let output = new_smol_command("git")
.current_dir(repo_path)
.args(args)
.output()
.await?;
if output.status.success() {
Ok(String::from_utf8(output.stdout)?.trim().to_string())
} else {
Err(anyhow!(
"Git command failed: {} with status: {}",
args.join(" "),
output.status
))
}
}
pub async fn read_base_sha(framework_path: &Path) -> Result<String> {
let setup_path = framework_path.join("setup.json");
let setup_content = smol::unblock(move || std::fs::read_to_string(&setup_path)).await?;
let setup_config: SetupConfig = serde_json_lenient::from_str_lenient(&setup_content)?;
Ok(setup_config.base_sha)
}
pub async fn read_repo_info(exercise_path: &Path) -> Result<RepoInfo> {
let repo_info_path = exercise_path.join(".meta").join("repo_info.json");
println!("Reading repo info from: {}", repo_info_path.display());
let repo_info_content = smol::unblock(move || std::fs::read_to_string(&repo_info_path)).await?;
let repo_info: RepoInfo = serde_json_lenient::from_str_lenient(&repo_info_content)?;
// Remove any quotes from the strings
let remote_url = repo_info.remote_url.trim_matches('"').to_string();
let head_sha = repo_info.head_sha.trim_matches('"').to_string();
Ok(RepoInfo {
remote_url,
head_sha,
})
}
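The .meta/repo_info.json files themselves are not part of this diff; here is a minimal, made-up example of the shape RepoInfo expects, assuming this snippet sits next to the definitions above.

fn main() -> anyhow::Result<()> {
    // Placeholder contents; real files point at the exercise's source repository.
    let contents = r#"{
        "remote_url": "https://example.com/some/repo.git",
        "head_sha": "0123456789abcdef0123456789abcdef01234567"
    }"#;
    let info: RepoInfo = serde_json_lenient::from_str_lenient(contents)?;
    assert_eq!(info.head_sha.len(), 40);
    Ok(())
}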
pub async fn setup_temp_repo(exercise_path: &Path, _base_sha: &str) -> Result<TempDir> {
let temp_dir = TempDir::new()?;
// Check if this is an internal exercise by looking for repo_info.json
let repo_info_path = exercise_path.join(".meta").join("repo_info.json");
if repo_info_path.exists() {
// This is an internal exercise, handle it differently
let repo_info = read_repo_info(exercise_path).await?;
// Clone the repository to the temp directory
let url = repo_info.remote_url;
let clone_path = temp_dir.path();
println!(
"Cloning repository from {} to {}",
url,
clone_path.display()
);
run_git(
&std::env::current_dir()?,
&["clone", &url, &clone_path.to_string_lossy()],
)
.await?;
// Checkout the specified commit
println!("Checking out commit: {}", repo_info.head_sha);
run_git(temp_dir.path(), &["checkout", &repo_info.head_sha]).await?;
println!("Successfully set up internal repository");
} else {
// Original code for regular exercises
// Copy the exercise files to the temp directory, excluding .docs and .meta
for entry in WalkDir::new(exercise_path).min_depth(0).max_depth(10) {
let entry = entry?;
let source_path = entry.path();
// Skip .docs and .meta directories completely
if source_path.starts_with(exercise_path.join(".docs"))
|| source_path.starts_with(exercise_path.join(".meta"))
{
continue;
}
if source_path.is_file() {
let relative_path = source_path.strip_prefix(exercise_path)?;
let dest_path = temp_dir.path().join(relative_path);
// Make sure parent directories exist
if let Some(parent) = dest_path.parent() {
fs::create_dir_all(parent)?;
}
fs::copy(source_path, dest_path)?;
}
}
// Initialize git repo in the temp directory
run_git(temp_dir.path(), &["init"]).await?;
run_git(temp_dir.path(), &["add", "."]).await?;
run_git(temp_dir.path(), &["commit", "-m", "Initial commit"]).await?;
println!("Created temp repo without .docs and .meta directories");
}
Ok(temp_dir)
}

View File

@@ -1,246 +0,0 @@
use agent::{RequestKind, Thread, ThreadEvent, ThreadStore};
use anyhow::anyhow;
use assistant_tool::ToolWorkingSet;
use client::{Client, UserStore};
use collections::HashMap;
use dap::DapRegistry;
use gpui::{App, Entity, SemanticVersion, Subscription, Task, prelude::*};
use language::LanguageRegistry;
use language_model::{
AuthenticateError, LanguageModel, LanguageModelProviderId, LanguageModelRegistry,
};
use node_runtime::NodeRuntime;
use project::{Project, RealFs};
use prompt_store::PromptBuilder;
use settings::SettingsStore;
use smol::channel;
use std::sync::Arc;
/// Subset of `workspace::AppState` needed by `HeadlessAssistant`, with additional fields.
pub struct HeadlessAppState {
pub languages: Arc<LanguageRegistry>,
pub client: Arc<Client>,
pub user_store: Entity<UserStore>,
pub fs: Arc<dyn fs::Fs>,
pub node_runtime: NodeRuntime,
// Additional fields not present in `workspace::AppState`.
pub prompt_builder: Arc<PromptBuilder>,
}
pub struct HeadlessAssistant {
pub thread: Entity<Thread>,
pub project: Entity<Project>,
#[allow(dead_code)]
pub thread_store: Entity<ThreadStore>,
pub tool_use_counts: HashMap<Arc<str>, u32>,
pub done_tx: channel::Sender<anyhow::Result<()>>,
_subscription: Subscription,
}
impl HeadlessAssistant {
pub fn new(
app_state: Arc<HeadlessAppState>,
cx: &mut App,
) -> anyhow::Result<(Entity<Self>, channel::Receiver<anyhow::Result<()>>)> {
let env = None;
let project = Project::local(
app_state.client.clone(),
app_state.node_runtime.clone(),
app_state.user_store.clone(),
app_state.languages.clone(),
Arc::new(DapRegistry::default()),
app_state.fs.clone(),
env,
cx,
);
let tools = Arc::new(ToolWorkingSet::default());
let thread_store =
ThreadStore::new(project.clone(), tools, app_state.prompt_builder.clone(), cx)?;
let thread = thread_store.update(cx, |thread_store, cx| thread_store.create_thread(cx));
let (done_tx, done_rx) = channel::unbounded::<anyhow::Result<()>>();
let headless_thread = cx.new(move |cx| Self {
_subscription: cx.subscribe(&thread, Self::handle_thread_event),
thread,
project,
thread_store,
tool_use_counts: HashMap::default(),
done_tx,
});
Ok((headless_thread, done_rx))
}
fn handle_thread_event(
&mut self,
thread: Entity<Thread>,
event: &ThreadEvent,
cx: &mut Context<Self>,
) {
match event {
ThreadEvent::ShowError(err) => self
.done_tx
.send_blocking(Err(anyhow!("{:?}", err)))
.unwrap(),
ThreadEvent::DoneStreaming => {
let thread = thread.read(cx);
if let Some(message) = thread.messages().last() {
println!("Message: {}", message.to_string());
}
if thread.all_tools_finished() {
self.done_tx.send_blocking(Ok(())).unwrap()
}
}
ThreadEvent::UsePendingTools => {
thread.update(cx, |thread, cx| {
thread.use_pending_tools(cx);
});
}
ThreadEvent::ToolConfirmationNeeded => {
// Automatically approve all tools that need confirmation in headless mode
println!("Tool confirmation needed - automatically approving in headless mode");
// Get the tools needing confirmation
let tools_needing_confirmation: Vec<_> = thread
.read(cx)
.tools_needing_confirmation()
.cloned()
.collect();
// Run each tool that needs confirmation
for tool_use in tools_needing_confirmation {
if let Some(tool) = thread.read(cx).tools().tool(&tool_use.name, cx) {
thread.update(cx, |thread, cx| {
println!("Auto-approving tool: {}", tool_use.name);
// Create a request to send to the tool
let request = thread.to_completion_request(RequestKind::Chat, cx);
let messages = Arc::new(request.messages);
// Run the tool
thread.run_tool(
tool_use.id.clone(),
tool_use.ui_text.clone(),
tool_use.input.clone(),
&messages,
tool,
cx,
);
});
}
}
}
ThreadEvent::ToolFinished {
tool_use_id,
pending_tool_use,
..
} => {
if let Some(pending_tool_use) = pending_tool_use {
println!(
"Used tool {} with input: {}",
pending_tool_use.name, pending_tool_use.input
);
*self
.tool_use_counts
.entry(pending_tool_use.name.clone())
.or_insert(0) += 1;
}
if let Some(tool_result) = thread.read(cx).tool_result(tool_use_id) {
println!("Tool result: {:?}", tool_result);
}
if thread.read(cx).all_tools_finished() {
let model_registry = LanguageModelRegistry::read_global(cx);
if let Some(model) = model_registry.default_model() {
thread.update(cx, |thread, cx| {
thread.attach_tool_results(cx);
thread.send_to_model(model.model, RequestKind::Chat, cx);
});
} else {
println!(
"Warning: No active language model available to continue conversation"
);
}
}
}
_ => {}
}
}
}
pub fn init(cx: &mut App) -> Arc<HeadlessAppState> {
release_channel::init(SemanticVersion::default(), cx);
gpui_tokio::init(cx);
let mut settings_store = SettingsStore::new(cx);
settings_store
.set_default_settings(settings::default_settings().as_ref(), cx)
.unwrap();
cx.set_global(settings_store);
client::init_settings(cx);
Project::init_settings(cx);
let client = Client::production(cx);
cx.set_http_client(client.http_client().clone());
let git_binary_path = None;
let fs = Arc::new(RealFs::new(
git_binary_path,
cx.background_executor().clone(),
));
let languages = Arc::new(LanguageRegistry::new(cx.background_executor().clone()));
let user_store = cx.new(|cx| UserStore::new(client.clone(), cx));
language::init(cx);
language_model::init(client.clone(), cx);
language_models::init(user_store.clone(), client.clone(), fs.clone(), cx);
assistant_tools::init(client.http_client().clone(), cx);
context_server::init(cx);
let stdout_is_a_pty = false;
let prompt_builder = PromptBuilder::load(fs.clone(), stdout_is_a_pty, cx);
agent::init(fs.clone(), client.clone(), prompt_builder.clone(), cx);
Arc::new(HeadlessAppState {
languages,
client,
user_store,
fs,
node_runtime: NodeRuntime::unavailable(),
prompt_builder,
})
}
pub fn find_model(model_name: &str, cx: &App) -> anyhow::Result<Arc<dyn LanguageModel>> {
let model_registry = LanguageModelRegistry::read_global(cx);
let model = model_registry
.available_models(cx)
.find(|model| model.id().0 == model_name);
let Some(model) = model else {
return Err(anyhow!(
"No language model named {} was available. Available models: {}",
model_name,
model_registry
.available_models(cx)
.map(|model| model.id().0.clone())
.collect::<Vec<_>>()
.join(", ")
));
};
Ok(model)
}
pub fn authenticate_model_provider(
provider_id: LanguageModelProviderId,
cx: &mut App,
) -> Task<std::result::Result<(), AuthenticateError>> {
let model_registry = LanguageModelRegistry::read_global(cx);
let model_provider = model_registry.provider(&provider_id).unwrap();
model_provider.authenticate(cx)
}

View File

@@ -1,205 +0,0 @@
mod eval;
mod get_exercise;
mod git_commands;
mod headless_assistant;
use clap::Parser;
use eval::{run_exercise_eval, save_eval_results};
use futures::stream::{self, StreamExt};
use get_exercise::{find_exercises, get_exercise_language, get_exercise_name};
use git_commands::read_base_sha;
use gpui::Application;
use headless_assistant::{authenticate_model_provider, find_model};
use language_model::LanguageModelRegistry;
use reqwest_client::ReqwestClient;
use std::{path::PathBuf, sync::Arc};
#[derive(Parser, Debug)]
#[command(
name = "agent_eval",
disable_version_flag = true,
before_help = "Tool eval runner"
)]
struct Args {
/// Match the names of evals to run.
#[arg(long)]
exercise_names: Vec<String>,
/// Run all exercises. If exercise names are also given, they take precedence and this flag is ignored.
#[arg(long)]
all: bool,
/// Supported language types to evaluate (default: internal).
/// "internal" refers to data generated from the Agent Panel.
#[arg(long, default_value = "internal")]
languages: String,
/// Name of the model (default: "claude-3-7-sonnet-latest")
#[arg(long, default_value = "claude-3-7-sonnet-latest")]
model_name: String,
/// Name of the editor model (default: value of `--model_name`).
#[arg(long)]
editor_model_name: Option<String>,
/// Number of evaluations to run concurrently (default: 5)
#[arg(short, long, default_value = "5")]
concurrency: usize,
/// Maximum number of exercises to evaluate per language
#[arg(long)]
max_exercises_per_language: Option<usize>,
}
fn main() {
env_logger::init();
let args = Args::parse();
let http_client = Arc::new(ReqwestClient::new());
let app = Application::headless().with_http_client(http_client.clone());
// Path to the zed-ace-framework repo
let framework_path = PathBuf::from("../zed-ace-framework")
.canonicalize()
.unwrap();
// Use owned Strings (rather than borrowed slices) so `languages` can be cloned and moved into the closure below
let languages: Vec<String> = args.languages.split(',').map(|s| s.to_string()).collect();
println!("Using zed-ace-framework at: {:?}", framework_path);
println!("Evaluating languages: {:?}", languages);
app.run(move |cx| {
let app_state = headless_assistant::init(cx);
let model = find_model(&args.model_name, cx).unwrap();
let editor_model = if let Some(model_name) = &args.editor_model_name {
find_model(model_name, cx).unwrap()
} else {
model.clone()
};
LanguageModelRegistry::global(cx).update(cx, |registry, cx| {
registry.set_default_model(Some(model.clone()), cx);
});
let model_provider_id = model.provider_id();
let editor_model_provider_id = editor_model.provider_id();
let framework_path_clone = framework_path.clone();
let languages_clone = languages.clone();
let exercise_names = args.exercise_names.clone();
let all_flag = args.all;
cx.spawn(async move |cx| {
// Authenticate all model providers first
cx.update(|cx| authenticate_model_provider(model_provider_id.clone(), cx))
.unwrap()
.await
.unwrap();
cx.update(|cx| authenticate_model_provider(editor_model_provider_id.clone(), cx))
.unwrap()
.await
.unwrap();
println!("framework path: {}", framework_path_clone.display());
let base_sha = read_base_sha(&framework_path_clone).await.unwrap();
println!("base sha: {}", base_sha);
let all_exercises = find_exercises(
&framework_path_clone,
&languages_clone
.iter()
.map(|s| s.as_str())
.collect::<Vec<_>>(),
args.max_exercises_per_language,
)
.unwrap();
println!("Found {} exercises total", all_exercises.len());
// Filter exercises if specific ones were requested
let exercises_to_run = if !exercise_names.is_empty() {
// If exercise names are specified, filter by them regardless of --all flag
all_exercises
.into_iter()
.filter(|path| {
let name = get_exercise_name(path);
exercise_names.iter().any(|filter| name.contains(filter))
})
.collect()
} else if all_flag {
// Only use all_flag if no exercise names are specified
all_exercises
} else {
// Default behavior (no filters)
all_exercises
};
println!("Will run {} exercises", exercises_to_run.len());
// Create exercise eval tasks - each exercise runs as a single task
let exercise_tasks: Vec<_> = exercises_to_run
.into_iter()
.map(|exercise_path| {
let exercise_name = get_exercise_name(&exercise_path);
let model_clone = model.clone();
let app_state_clone = app_state.clone();
let base_sha_clone = base_sha.clone();
let framework_path_clone = framework_path_clone.clone();
let cx_clone = cx.clone();
async move {
println!("Processing exercise: {}", exercise_name);
let mut exercise_results = Vec::new();
match run_exercise_eval(
exercise_path.clone(),
model_clone.clone(),
app_state_clone.clone(),
base_sha_clone.clone(),
framework_path_clone.clone(),
cx_clone.clone(),
)
.await
{
Ok(result) => {
println!("Completed {}", exercise_name);
exercise_results.push(result);
}
Err(err) => {
println!("Error running {}: {}", exercise_name, err);
}
}
// Save results for this exercise
if !exercise_results.is_empty() {
if let Err(err) =
save_eval_results(&exercise_path, exercise_results.clone()).await
{
println!("Error saving results for {}: {}", exercise_name, err);
} else {
println!("Saved results for {}", exercise_name);
}
}
exercise_results
}
})
.collect();
println!(
"Running {} exercises with concurrency: {}",
exercise_tasks.len(),
args.concurrency
);
// Run the exercise tasks concurrently, up to the configured concurrency limit
let all_results = stream::iter(exercise_tasks)
.buffer_unordered(args.concurrency)
.flat_map(stream::iter)
.collect::<Vec<_>>()
.await;
println!("Completed {} evaluation runs", all_results.len());
cx.update(|cx| cx.quit()).unwrap();
})
.detach();
});
println!("Done running evals");
}

View File

@@ -37,9 +37,9 @@ pub enum AnthropicModelMode {
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, EnumIter)]
pub enum Model {
#[default]
#[serde(rename = "claude-3-5-sonnet", alias = "claude-3-5-sonnet-latest")]
Claude3_5Sonnet,
#[default]
#[serde(rename = "claude-3-7-sonnet", alias = "claude-3-7-sonnet-latest")]
Claude3_7Sonnet,
#[serde(

View File

@@ -10,7 +10,7 @@ use collections::{BTreeSet, HashMap, HashSet, hash_map};
use editor::{
Anchor, Editor, EditorEvent, MenuInlineCompletionsPolicy, ProposedChangeLocation,
ProposedChangesEditor, RowExt, ToOffset as _, ToPoint,
actions::{FoldAt, MoveToEndOfLine, Newline, ShowCompletions, UnfoldAt},
actions::{MoveToEndOfLine, Newline, ShowCompletions},
display_map::{
BlockContext, BlockId, BlockPlacement, BlockProperties, BlockStyle, Crease, CreaseMetadata,
CustomBlockId, FoldId, RenderBlock, ToDisplayPoint,
@@ -1053,7 +1053,7 @@ impl ContextEditor {
let creases = editor.insert_creases(creases, cx);
for buffer_row in buffer_rows_to_fold.into_iter().rev() {
editor.fold_at(&FoldAt { buffer_row }, window, cx);
editor.fold_at(buffer_row, window, cx);
}
creases
@@ -1109,7 +1109,7 @@ impl ContextEditor {
buffer_rows_to_fold.clear();
}
for buffer_row in buffer_rows_to_fold.into_iter().rev() {
editor.fold_at(&FoldAt { buffer_row }, window, cx);
editor.fold_at(buffer_row, window, cx);
}
});
}
@@ -1844,13 +1844,7 @@ impl ContextEditor {
|_, _, _, _| Empty.into_any(),
);
editor.insert_creases(vec![crease], cx);
editor.fold_at(
&FoldAt {
buffer_row: start_row,
},
window,
cx,
);
editor.fold_at(start_row, window, cx);
}
})
}
@@ -2042,7 +2036,7 @@ impl ContextEditor {
cx,
);
for buffer_row in buffer_rows_to_fold.into_iter().rev() {
editor.fold_at(&FoldAt { buffer_row }, window, cx);
editor.fold_at(buffer_row, window, cx);
}
}
});
@@ -2793,7 +2787,7 @@ fn render_thought_process_fold_icon_button(
let button = match status {
ThoughtProcessStatus::Pending => button
.child(
Icon::new(IconName::Brain)
Icon::new(IconName::LightBulb)
.size(IconSize::Small)
.color(Color::Muted),
)
@@ -2808,7 +2802,7 @@ fn render_thought_process_fold_icon_button(
),
ThoughtProcessStatus::Completed => button
.style(ButtonStyle::Filled)
.child(Icon::new(IconName::Brain).size(IconSize::Small))
.child(Icon::new(IconName::LightBulb).size(IconSize::Small))
.child(Label::new("Thought Process").single_line()),
};
@@ -2820,7 +2814,7 @@ fn render_thought_process_fold_icon_button(
.start
.to_point(&editor.buffer().read(cx).read(cx));
let buffer_row = MultiBufferRow(buffer_start.row);
editor.unfold_at(&UnfoldAt { buffer_row }, window, cx);
editor.unfold_at(buffer_row, window, cx);
})
.ok();
})
@@ -2847,7 +2841,7 @@ fn render_fold_icon_button(
.start
.to_point(&editor.buffer().read(cx).read(cx));
let buffer_row = MultiBufferRow(buffer_start.row);
editor.unfold_at(&UnfoldAt { buffer_row }, window, cx);
editor.unfold_at(buffer_row, window, cx);
})
.ok();
})
@@ -2907,7 +2901,7 @@ fn quote_selection_fold_placeholder(title: String, editor: WeakEntity<Editor>) -
.start
.to_point(&editor.buffer().read(cx).read(cx));
let buffer_row = MultiBufferRow(buffer_start.row);
editor.unfold_at(&UnfoldAt { buffer_row }, window, cx);
editor.unfold_at(buffer_row, window, cx);
})
.ok();
})

View File

@@ -69,7 +69,7 @@ pub enum AssistantProviderContentV1 {
},
}
#[derive(Debug, Default)]
#[derive(Clone, Debug, Default)]
pub struct AssistantSettings {
pub enabled: bool,
pub button: bool,
@@ -742,7 +742,7 @@ mod tests {
AssistantSettings::get_global(cx).default_model,
LanguageModelSelection {
provider: "zed.dev".into(),
model: "claude-3-5-sonnet-latest".into(),
model: "claude-3-7-sonnet-latest".into(),
}
);
});

View File

@@ -235,7 +235,7 @@ impl ActionLog {
.await;
diff.update(cx, |diff, cx| {
diff.set_snapshot(diff_snapshot, &buffer_snapshot, None, cx)
diff.set_snapshot(diff_snapshot, &buffer_snapshot, cx)
})?;
}
this.update(cx, |this, cx| {

View File

@@ -1,5 +1,5 @@
mod bash_tool;
mod batch_tool;
mod code_action_tool;
mod code_symbols_tool;
mod copy_path_tool;
mod create_directory_tool;
@@ -15,9 +15,11 @@ mod open_tool;
mod path_search_tool;
mod read_file_tool;
mod regex_search_tool;
mod rename_tool;
mod replace;
mod schema;
mod symbol_info_tool;
mod terminal_tool;
mod thinking_tool;
use std::sync::Arc;
@@ -28,8 +30,8 @@ use gpui::App;
use http_client::HttpClientWithUrl;
use move_path_tool::MovePathTool;
use crate::bash_tool::BashTool;
use crate::batch_tool::BatchTool;
use crate::code_action_tool::CodeActionTool;
use crate::code_symbols_tool::CodeSymbolsTool;
use crate::create_directory_tool::CreateDirectoryTool;
use crate::create_file_tool::CreateFileTool;
@@ -43,14 +45,16 @@ use crate::open_tool::OpenTool;
use crate::path_search_tool::PathSearchTool;
use crate::read_file_tool::ReadFileTool;
use crate::regex_search_tool::RegexSearchTool;
use crate::rename_tool::RenameTool;
use crate::symbol_info_tool::SymbolInfoTool;
use crate::terminal_tool::TerminalTool;
use crate::thinking_tool::ThinkingTool;
pub fn init(http_client: Arc<HttpClientWithUrl>, cx: &mut App) {
assistant_tool::init(cx);
let registry = ToolRegistry::global(cx);
registry.register_tool(BashTool);
registry.register_tool(TerminalTool);
registry.register_tool(BatchTool);
registry.register_tool(CreateDirectoryTool);
registry.register_tool(CreateFileTool);
@@ -58,6 +62,7 @@ pub fn init(http_client: Arc<HttpClientWithUrl>, cx: &mut App) {
registry.register_tool(DeletePathTool);
registry.register_tool(FindReplaceFileTool);
registry.register_tool(SymbolInfoTool);
registry.register_tool(CodeActionTool);
registry.register_tool(MovePathTool);
registry.register_tool(DiagnosticsTool);
registry.register_tool(ListDirectoryTool);
@@ -67,6 +72,7 @@ pub fn init(http_client: Arc<HttpClientWithUrl>, cx: &mut App) {
registry.register_tool(PathSearchTool);
registry.register_tool(ReadFileTool);
registry.register_tool(RegexSearchTool);
registry.register_tool(RenameTool);
registry.register_tool(ThinkingTool);
registry.register_tool(FetchTool::new(http_client));
}

View File

@@ -1,235 +0,0 @@
use crate::schema::json_schema_for;
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use futures::io::BufReader;
use futures::{AsyncBufReadExt, AsyncReadExt, FutureExt};
use gpui::{App, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::path::Path;
use std::sync::Arc;
use ui::IconName;
use util::command::new_smol_command;
use util::markdown::MarkdownString;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct BashToolInput {
/// The bash one-liner command to execute.
command: String,
/// Working directory for the command. This must be one of the root directories of the project.
cd: String,
}
pub struct BashTool;
impl Tool for BashTool {
fn name(&self) -> String {
"bash".to_string()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
true
}
fn description(&self) -> String {
include_str!("./bash_tool/description.md").to_string()
}
fn icon(&self) -> IconName {
IconName::Terminal
}
fn input_schema(&self, format: LanguageModelToolSchemaFormat) -> serde_json::Value {
json_schema_for::<BashToolInput>(format)
}
fn ui_text(&self, input: &serde_json::Value) -> String {
match serde_json::from_value::<BashToolInput>(input.clone()) {
Ok(input) => {
let mut lines = input.command.lines();
let first_line = lines.next().unwrap_or_default();
let remaining_line_count = lines.count();
match remaining_line_count {
0 => MarkdownString::inline_code(&first_line).0,
1 => {
MarkdownString::inline_code(&format!(
"{} - {} more line",
first_line, remaining_line_count
))
.0
}
n => {
MarkdownString::inline_code(&format!("{} - {} more lines", first_line, n)).0
}
}
}
Err(_) => "Run bash command".to_string(),
}
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let input: BashToolInput = match serde_json::from_value(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
};
let project = project.read(cx);
let input_path = Path::new(&input.cd);
let working_dir = if input.cd == "." {
// Accept "." as meaning "the one worktree" if we only have one worktree.
let mut worktrees = project.worktrees(cx);
let only_worktree = match worktrees.next() {
Some(worktree) => worktree,
None => return Task::ready(Err(anyhow!("No worktrees found in the project"))),
};
if worktrees.next().is_some() {
return Task::ready(Err(anyhow!(
"'.' is ambiguous in multi-root workspaces. Please specify a root directory explicitly."
)));
}
only_worktree.read(cx).abs_path()
} else if input_path.is_absolute() {
// Absolute paths are allowed, but only if they're in one of the project's worktrees.
if !project
.worktrees(cx)
.any(|worktree| input_path.starts_with(&worktree.read(cx).abs_path()))
{
return Task::ready(Err(anyhow!(
"The absolute path must be within one of the project's worktrees"
)));
}
input_path.into()
} else {
let Some(worktree) = project.worktree_for_root_name(&input.cd, cx) else {
return Task::ready(Err(anyhow!(
"`cd` directory {} not found in the project",
&input.cd
)));
};
worktree.read(cx).abs_path()
};
cx.spawn(async move |cx| {
// Add 2>&1 to merge stderr into stdout for proper interleaving.
let command = format!("({}) 2>&1", input.command);
let mut cmd = new_smol_command("bash")
.arg("-c")
.arg(&command)
.current_dir(working_dir)
.stdout(std::process::Stdio::piped())
.spawn()
.context("Failed to execute bash command")?;
// Capture stdout with a limit
let stdout = cmd.stdout.take().unwrap();
let mut reader = BufReader::new(stdout);
const MESSAGE_1: &str = "Command output too long. The first ";
const MESSAGE_2: &str = " bytes:\n\n";
const ERR_MESSAGE_1: &str = "Command failed with exit code ";
const ERR_MESSAGE_2: &str = "\n\n";
const STDOUT_LIMIT: usize = 8192;
const LIMIT: usize = STDOUT_LIMIT
- (MESSAGE_1.len()
+ (STDOUT_LIMIT.ilog10() as usize + 1) // byte count
+ MESSAGE_2.len()
+ ERR_MESSAGE_1.len()
+ 3 // status code
+ ERR_MESSAGE_2.len());
// Read one more byte to determine whether the output was truncated
let mut buffer = vec![0; LIMIT + 1];
let bytes_read = reader.read(&mut buffer).await?;
let mut timer = cx
.background_executor()
.timer(std::time::Duration::from_secs(10))
.fuse();
// Drain any remaining output (without copying it) so the process can finish, resetting the timeout whenever progress is made.
loop {
let mut skipped_bytes = reader.fill_buf().fuse();
futures::select! {
skipped_bytes = skipped_bytes => {
let skipped_bytes = skipped_bytes?;
if skipped_bytes.is_empty() {
break;
}
let skipped_bytes_len = skipped_bytes.len();
reader.consume_unpin(skipped_bytes_len);
timer = cx
.background_executor()
.timer(std::time::Duration::from_secs(10))
.fuse();
}
_ = timer => {
return Err(anyhow!("Command timed out. Output so far:\n{}", String::from_utf8_lossy(&buffer[..bytes_read])));
}
}
}
let output_bytes = &buffer[..bytes_read];
// Let the process continue running
let status = cmd.status().await.context("Failed to get command status")?;
let output_string = if bytes_read > LIMIT {
// Searching for a `\n` byte is valid in UTF-8, because ASCII bytes (0-127) never
// appear inside multi-byte characters.
let last_line_ix = output_bytes.iter().rposition(|b| *b == b'\n');
let output_string = String::from_utf8_lossy(
&output_bytes[..last_line_ix.unwrap_or(output_bytes.len())],
);
format!(
"{}{}{}{}",
MESSAGE_1,
output_string.len(),
MESSAGE_2,
output_string
)
} else {
String::from_utf8_lossy(&output_bytes).into()
};
let output_with_status = if status.success() {
if output_string.is_empty() {
"Command executed successfully.".to_string()
} else {
output_string.to_string()
}
} else {
format!(
"{}{}{}{}",
ERR_MESSAGE_1,
status.code().unwrap_or(-1),
ERR_MESSAGE_2,
output_string,
)
};
debug_assert!(output_with_status.len() <= STDOUT_LIMIT);
Ok(output_with_status)
})
}
}

View File

@@ -1,7 +0,0 @@
Executes a bash one-liner and returns the combined output.
This tool spawns a bash process, combines stdout and stderr into one interleaved stream as they are produced (preserving the order of writes), and captures that stream into a string which is returned.
Make sure you use the `cd` parameter to select one of the root directories of the project as the working directory. NEVER change directories as part of the `command` itself; otherwise the call will error.
Remember that each invocation of this tool will spawn a new bash process, so you can't rely on any state from previous invocations.
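To make the contract concrete, here is a hedged sketch of a well-formed input for this tool, matching the BashToolInput fields defined above; the worktree name is invented.

use serde_json::json;

fn main() {
    // `cd` must name one of the project's root directories ("." is allowed only in
    // single-worktree projects); changing directories inside `command` will error.
    let input = json!({
        "command": "ls -la",
        "cd": "zed"
    });
    println!("{input}");
}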

View File

@@ -0,0 +1,389 @@
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
use language::{self, Anchor, Buffer, ToPointUtf16};
use language_model::LanguageModelRequestMessage;
use project::{self, LspAction, Project};
use regex::Regex;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::{ops::Range, sync::Arc};
use ui::IconName;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct CodeActionToolInput {
/// The relative path to the file containing the text range.
///
/// WARNING: you MUST start this path with one of the project's root directories.
pub path: String,
/// The specific code action to execute.
///
/// If this field is provided, the tool will execute the specified action.
/// If omitted, the tool will list all available code actions for the text range.
///
/// Here are some actions that are commonly supported (but may not be for this particular
/// text range; you can omit this field to list all the actions, if you want to know
/// what your options are, or you can just try an action and if it fails I'll tell you
/// what the available actions were instead):
/// - "quickfix.all" - applies all available quick fixes in the range
/// - "source.organizeImports" - sorts and cleans up import statements
/// - "source.fixAll" - applies all available auto fixes
/// - "refactor.extract" - extracts selected code into a new function or variable
/// - "refactor.inline" - inlines a variable by replacing references with its value
/// - "refactor.rewrite" - general code rewriting operations
/// - "source.addMissingImports" - adds imports for references that lack them
/// - "source.removeUnusedImports" - removes imports that aren't being used
/// - "source.implementInterface" - generates methods required by an interface/trait
/// - "source.generateAccessors" - creates getter/setter methods
/// - "source.convertToAsyncFunction" - converts callback-style code to async/await
///
/// Also, there is a special case: if you specify exactly "textDocument/rename" as the action,
/// then this will rename the symbol to whatever string you specified for the `arguments` field.
pub action: Option<String>,
/// Optional arguments to pass to the code action.
///
/// For rename operations (when action="textDocument/rename"), this should contain the new name.
/// For other code actions, these arguments may be passed to the language server.
pub arguments: Option<serde_json::Value>,
/// The text that comes immediately before the text range in the file.
pub context_before_range: String,
/// The text range. This text must appear in the file right between `context_before_range`
/// and `context_after_range`.
///
/// The file must contain exactly one occurrence of `context_before_range` followed by
/// `text_range` followed by `context_after_range`. If the file contains zero occurrences,
/// or if it contains more than one occurrence, the tool will fail, so it is absolutely
/// critical that you verify ahead of time that the string is unique. You can search
/// the file's contents to verify this ahead of time.
///
/// To make the string more likely to be unique, include a minimum of 1 line of context
/// before the text range, as well as a minimum of 1 line of context after the text range.
/// If these lines of context are not enough to obtain a string that appears only once
/// in the file, then double the number of context lines until the string becomes unique.
/// (Start with 1 line before and 1 line after though, because too much context is
/// needlessly costly.)
///
/// Do not alter the context lines of code in any way, and make sure to preserve all
/// whitespace and indentation for all lines of code. The combined string must be exactly
/// as it appears in the file, or else this tool call will fail.
pub text_range: String,
/// The text that comes immediately after the text range in the file.
pub context_after_range: String,
}
pub struct CodeActionTool;
impl Tool for CodeActionTool {
fn name(&self) -> String {
"code_actions".into()
}
fn needs_confirmation(&self, _input: &serde_json::Value, _cx: &App) -> bool {
false
}
fn description(&self) -> String {
include_str!("./code_action_tool/description.md").into()
}
fn icon(&self) -> IconName {
IconName::Wand
}
fn input_schema(
&self,
_format: language_model::LanguageModelToolSchemaFormat,
) -> serde_json::Value {
let schema = schemars::schema_for!(CodeActionToolInput);
serde_json::to_value(&schema).unwrap()
}
fn ui_text(&self, input: &serde_json::Value) -> String {
match serde_json::from_value::<CodeActionToolInput>(input.clone()) {
Ok(input) => {
if let Some(action) = &input.action {
if action == "textDocument/rename" {
let new_name = match &input.arguments {
Some(serde_json::Value::String(new_name)) => new_name.clone(),
Some(value) => {
if let Ok(new_name) =
serde_json::from_value::<String>(value.clone())
{
new_name
} else {
"invalid name".to_string()
}
}
None => "missing name".to_string(),
};
format!("Rename '{}' to '{}'", input.text_range, new_name)
} else {
format!(
"Execute code action '{}' for '{}'",
action, input.text_range
)
}
} else {
format!("List available code actions for '{}'", input.text_range)
}
}
Err(_) => "Perform code action".to_string(),
}
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let input = match serde_json::from_value::<CodeActionToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
};
cx.spawn(async move |cx| {
let buffer = {
let project_path = project.read_with(cx, |project, cx| {
project
.find_project_path(&input.path, cx)
.context("Path not found in project")
})??;
project.update(cx, |project, cx| project.open_buffer(project_path, cx))?.await?
};
action_log.update(cx, |action_log, cx| {
action_log.buffer_read(buffer.clone(), cx);
})?;
let range = {
let Some(range) = buffer.read_with(cx, |buffer, _cx| {
find_text_range(&buffer, &input.context_before_range, &input.text_range, &input.context_after_range)
})? else {
return Err(anyhow!(
"Failed to locate the text specified by context_before_range, text_range, and context_after_range. Make sure context_before_range and context_after_range each match exactly once in the file."
));
};
range
};
if let Some(action_type) = &input.action {
// Special-case the `rename` operation
let response = if action_type == "textDocument/rename" {
let Some(new_name) = input.arguments.and_then(|args| serde_json::from_value::<String>(args).ok()) else {
return Err(anyhow!("For rename operations, 'arguments' must be a string containing the new name"));
};
let position = buffer.read_with(cx, |buffer, _| {
range.start.to_point_utf16(&buffer.snapshot())
})?;
project
.update(cx, |project, cx| {
project.perform_rename(buffer.clone(), position, new_name.clone(), cx)
})?
.await?;
format!("Renamed '{}' to '{}'", input.text_range, new_name)
} else {
// Get code actions for the range
let actions = project
.update(cx, |project, cx| {
project.code_actions(&buffer, range.clone(), None, cx)
})?
.await?;
if actions.is_empty() {
return Err(anyhow!("No code actions available for this range"));
}
// Find all matching actions
let regex = match Regex::new(action_type) {
Ok(regex) => regex,
Err(err) => return Err(anyhow!("Invalid regex pattern: {}", err)),
};
let mut matching_actions = actions
.into_iter()
.filter(|action| { regex.is_match(action.lsp_action.title()) });
let Some(action) = matching_actions.next() else {
return Err(anyhow!("No code actions match the pattern: {}", action_type));
};
// There should have been exactly one matching action.
if let Some(second) = matching_actions.next() {
let mut all_matches = vec![action, second];
all_matches.extend(matching_actions);
return Err(anyhow!(
"Pattern '{}' matches multiple code actions: {}",
action_type,
all_matches.into_iter().map(|action| action.lsp_action.title().to_string()).collect::<Vec<_>>().join(", ")
));
}
let title = action.lsp_action.title().to_string();
project
.update(cx, |project, cx| {
project.apply_code_action(buffer.clone(), action, true, cx)
})?
.await?;
format!("Completed code action: {}", title)
};
project
.update(cx, |project, cx| project.save_buffer(buffer.clone(), cx))?
.await?;
action_log.update(cx, |log, cx| {
log.buffer_edited(buffer.clone(), cx)
})?;
Ok(response)
} else {
// No action specified, so list the available ones.
let (position_start, position_end) = buffer.read_with(cx, |buffer, _| {
let snapshot = buffer.snapshot();
(
range.start.to_point_utf16(&snapshot),
range.end.to_point_utf16(&snapshot)
)
})?;
// Convert position to display coordinates (1-based)
let position_start_display = language::Point {
row: position_start.row + 1,
column: position_start.column + 1,
};
let position_end_display = language::Point {
row: position_end.row + 1,
column: position_end.column + 1,
};
// Get code actions for the range
let actions = project
.update(cx, |project, cx| {
project.code_actions(&buffer, range.clone(), None, cx)
})?
.await?;
let mut response = format!(
"Available code actions for text range '{}' at position {}:{} to {}:{} (UTF-16 coordinates):\n\n",
input.text_range,
position_start_display.row, position_start_display.column,
position_end_display.row, position_end_display.column
);
if actions.is_empty() {
response.push_str("No code actions available for this range.");
} else {
for (i, action) in actions.iter().enumerate() {
let title = match &action.lsp_action {
LspAction::Action(code_action) => code_action.title.as_str(),
LspAction::Command(command) => command.title.as_str(),
LspAction::CodeLens(code_lens) => {
if let Some(cmd) = &code_lens.command {
cmd.title.as_str()
} else {
"Unknown code lens"
}
},
};
let kind = match &action.lsp_action {
LspAction::Action(code_action) => {
if let Some(kind) = &code_action.kind {
kind.as_str()
} else {
"unknown"
}
},
LspAction::Command(_) => "command",
LspAction::CodeLens(_) => "code_lens",
};
response.push_str(&format!("{}. {title} ({kind})\n", i + 1));
}
}
Ok(response)
}
})
}
}
/// Finds the range of the text in the buffer, if it appears between context_before_range
/// and context_after_range, and if that combined string has one unique result in the buffer.
///
/// If an exact match fails, it tries adding a newline to the end of context_before_range and
/// to the beginning of context_after_range to accommodate line-based context matching.
fn find_text_range(
buffer: &Buffer,
context_before_range: &str,
text_range: &str,
context_after_range: &str,
) -> Option<Range<Anchor>> {
let snapshot = buffer.snapshot();
let text = snapshot.text();
// First try with exact match
let search_string = format!("{context_before_range}{text_range}{context_after_range}");
let mut positions = text.match_indices(&search_string);
let position_result = positions.next();
if let Some(position) = position_result {
// Check if the matched string is unique
if positions.next().is_none() {
let range_start = position.0 + context_before_range.len();
let range_end = range_start + text_range.len();
let range_start_anchor = snapshot.anchor_before(snapshot.offset_to_point(range_start));
let range_end_anchor = snapshot.anchor_before(snapshot.offset_to_point(range_end));
return Some(range_start_anchor..range_end_anchor);
}
}
// If exact match fails or is not unique, try with line-based context
// Add a newline to the end of before context and beginning of after context
let line_based_before = if context_before_range.ends_with('\n') {
context_before_range.to_string()
} else {
format!("{context_before_range}\n")
};
let line_based_after = if context_after_range.starts_with('\n') {
context_after_range.to_string()
} else {
format!("\n{context_after_range}")
};
let line_search_string = format!("{line_based_before}{text_range}{line_based_after}");
let mut line_positions = text.match_indices(&line_search_string);
let line_position = line_positions.next()?;
// The line-based search string must also appear exactly once
if line_positions.next().is_some() {
return None;
}
let line_range_start = line_position.0 + line_based_before.len();
let line_range_end = line_range_start + text_range.len();
let line_range_start_anchor =
snapshot.anchor_before(snapshot.offset_to_point(line_range_start));
let line_range_end_anchor = snapshot.anchor_before(snapshot.offset_to_point(line_range_end));
Some(line_range_start_anchor..line_range_end_anchor)
}
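Because the uniqueness requirement is the part callers most often get wrong, here is a simplified, buffer-free sketch of the same matching rule on a plain string; it is an illustration, not the code the tool runs.

/// Returns the byte range of `target` only if `before + target + after`
/// occurs exactly once in `text` - the same uniqueness rule used above.
fn unique_range(text: &str, before: &str, target: &str, after: &str) -> Option<std::ops::Range<usize>> {
    let needle = format!("{before}{target}{after}");
    let mut hits = text.match_indices(&needle);
    let (start, _) = hits.next()?;
    if hits.next().is_some() {
        return None; // ambiguous: more than one occurrence
    }
    Some(start + before.len()..start + before.len() + target.len())
}

fn main() {
    let file = "fn a() {}\nfn b() {}\n";
    // Unique with one line of context:
    assert_eq!(unique_range(file, "fn a() {}\nfn ", "b", "() {}"), Some(13..14));
    // Ambiguous without any context:
    assert_eq!(unique_range(file, "", "fn ", ""), None);
}

This is why the doc comments above ask for at least one line of context on each side, doubling it until the combined string is unique in the file.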

View File

@@ -0,0 +1,19 @@
A tool for applying code actions to specific sections of your code. It uses language servers to provide refactoring capabilities similar to what you'd find in an IDE.
This tool can:
- List all available code actions for a selected text range
- Execute a specific code action on that range
- Rename symbols across your codebase. This is the preferred way to rename code symbols; always use it rather than textual find/replace when both are available.
Use this tool when you want to:
- Discover what code actions are available for a piece of code
- Apply automatic fixes and code transformations
- Rename variables, functions, or other symbols consistently throughout your project
- Clean up imports, implement interfaces, or perform other language-specific operations
Tips for using it effectively:
- If unsure what actions are available, call the tool without specifying an action to get a list
- For common operations, you can directly specify actions like "quickfix.all" or "source.organizeImports"
- For renaming, use the special "textDocument/rename" action and provide the new name in the arguments field
- Be specific with your text range and context to ensure the tool identifies the correct code location
The tool will automatically save any changes it makes to your files.
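To make the two modes concrete, here is a hedged sketch of example inputs following the CodeActionToolInput fields from the Rust source above; the path and code snippets are invented.

use serde_json::json;

fn main() {
    // 1. List available actions for a range (no `action` field).
    let list_actions = json!({
        "path": "zed/src/main.rs",
        "context_before_range": "fn main() {\n    ",
        "text_range": "let config = load_config();",
        "context_after_range": "\n    run(config);"
    });

    // 2. Rename a symbol via the special "textDocument/rename" action.
    let rename = json!({
        "path": "zed/src/main.rs",
        "action": "textDocument/rename",
        "arguments": "configuration",
        "context_before_range": "fn main() {\n    let ",
        "text_range": "config",
        "context_after_range": " = load_config();"
    });

    println!("{list_actions}\n{rename}");
}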

View File

@@ -179,11 +179,9 @@ pub async fn file_outline(
// Wait until the buffer has been fully parsed, so that we can read its outline.
let mut parse_status = buffer.read_with(cx, |buffer, _| buffer.parse_status())?;
while parse_status
.recv()
.await
.map_or(false, |status| status != ParseStatus::Idle)
{}
while *parse_status.borrow() != ParseStatus::Idle {
parse_status.changed().await?;
}
let snapshot = buffer.read_with(cx, |buffer, _| buffer.snapshot())?;
let Some(outline) = snapshot.outline(None) else {

View File

@@ -63,6 +63,16 @@ pub struct FindReplaceFileToolInput {
/// even one character in this string is different in any way from how it appears
/// in the file, then the tool call will fail.
///
/// If you get an error that the `find` string was not found, this means that either
/// you made a mistake, or that the file has changed since you last looked at it.
/// Either way, when this happens, you should retry doing this tool call until it
/// succeeds, up to 3 times. Each time you retry, you should take another look at
/// the exact text of the file in question, to make sure that you are searching for
/// exactly the right string. Regardless of whether it was because you made a mistake
/// or because the file changed since you last looked at it, you should be extra
/// careful when retrying in this way. It's a bad experience for the user if
/// this `find` string isn't found, so be super careful to get it exactly right!
///
/// <example>
/// If a file contains this code:
///

View File

@@ -1,9 +1,13 @@
Find one unique part of a file in the project and replace that text with new text.
This tool is the preferred way to make edits to files. If you have multiple edits to make, including edits across multiple files, then make a plan to respond with a single message containing multiple calls to this tool - one call for each find/replace operation.
This tool is the preferred way to make edits to files *except* when making a rename. When making a rename specifically, the rename tool must always be used instead.
You should use this tool when you want to edit a subset of a file's contents, but not the entire file. You should not use this tool when you want to replace the entire contents of a file with completely different contents. You also should not use this tool when you want to move or rename a file. You absolutely must NEVER use this tool to create new files from scratch. If you ever consider using this tool to create a new file from scratch, for any reason, instead you must reconsider and choose a different approach.
If you have multiple edits to make, including edits across multiple files, then make a plan to respond with a single message containing a batch of calls to this tool - one call for each find/replace operation.
You should only use this tool when you want to edit a subset of a file's contents, but not the entire file. You should not use this tool when you want to replace the entire contents of a file with completely different contents. You also should not use this tool when you want to move or rename a file. You absolutely must NEVER use this tool to create new files from scratch. If you ever consider using this tool to create a new file from scratch, for any reason, instead you must reconsider and choose a different approach.
DO NOT call this tool until the code to be edited appears in the conversation! You must use another tool to read the file's contents into the conversation, or ask the user to add it to context first.
Never call this tool with identical "find" and "replace" strings. Instead, stop and think about what you actually want to do.
REMEMBER: You can use this tool after you just used the `create_file` tool. It's better to edit the file you just created than to recreate a new file from scratch.

View File

@@ -26,6 +26,10 @@ pub struct RegexSearchToolInput {
/// When not provided, starts from the beginning.
#[serde(default)]
pub offset: u32,
/// Whether the regex is case-sensitive. Defaults to false (case-insensitive).
#[serde(default)]
pub case_sensitive: bool,
}
impl RegexSearchToolInput {
@@ -64,12 +68,17 @@ impl Tool for RegexSearchTool {
match serde_json::from_value::<RegexSearchToolInput>(input.clone()) {
Ok(input) => {
let page = input.page();
let regex = MarkdownString::inline_code(&input.regex);
let regex_str = MarkdownString::inline_code(&input.regex);
let case_info = if input.case_sensitive {
" (case-sensitive)"
} else {
""
};
if page > 1 {
format!("Get page {page} of search results for regex {regex}")
format!("Get page {page} of search results for regex {regex_str}{case_info}")
} else {
format!("Search files for regex {regex}")
format!("Search files for regex {regex_str}{case_info}")
}
}
Err(_) => "Search with regex".to_string(),
@@ -86,14 +95,16 @@ impl Tool for RegexSearchTool {
) -> Task<Result<String>> {
const CONTEXT_LINES: u32 = 2;
let (offset, regex) = match serde_json::from_value::<RegexSearchToolInput>(input) {
Ok(input) => (input.offset, input.regex),
Err(err) => return Task::ready(Err(anyhow!(err))),
};
let (offset, regex, case_sensitive) =
match serde_json::from_value::<RegexSearchToolInput>(input) {
Ok(input) => (input.offset, input.regex, input.case_sensitive),
Err(err) => return Task::ready(Err(anyhow!(err))),
};
let query = match SearchQuery::regex(
&regex,
false,
case_sensitive,
false,
false,
PathMatcher::default(),

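A hedged sketch of an input exercising the new flag; only the regex, offset, and case_sensitive fields visible in this hunk are assumed.

use serde_json::json;

fn main() {
    // Case-sensitive search; `offset` and `case_sensitive` default to 0 and false
    // when omitted, per the #[serde(default)] attributes above.
    let input = json!({
        "regex": "fn [A-Z]\\w+",
        "case_sensitive": true
    });
    println!("{input}");
}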
View File

@@ -0,0 +1,205 @@
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
use language::{self, Buffer, ToPointUtf16};
use language_model::LanguageModelRequestMessage;
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use ui::IconName;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct RenameToolInput {
/// The relative path to the file containing the symbol to rename.
///
/// WARNING: you MUST start this path with one of the project's root directories.
pub path: String,
/// The new name to give to the symbol.
pub new_name: String,
/// The text that comes immediately before the symbol in the file.
pub context_before_symbol: String,
/// The symbol to rename. This text must appear in the file right between
/// `context_before_symbol` and `context_after_symbol`.
///
/// The file must contain exactly one occurrence of `context_before_symbol` followed by
/// `symbol` followed by `context_after_symbol`. If the file contains zero occurrences,
/// or if it contains more than one occurrence, the tool will fail, so it is absolutely
/// critical that you verify ahead of time that the string is unique. You can search
/// the file's contents to verify this ahead of time.
///
/// To make the string more likely to be unique, include a minimum of 1 line of context
/// before the symbol, as well as a minimum of 1 line of context after the symbol.
/// If these lines of context are not enough to obtain a string that appears only once
/// in the file, then double the number of context lines until the string becomes unique.
/// (Start with 1 line before and 1 line after though, because too much context is
/// needlessly costly.)
///
/// Do not alter the context lines of code in any way, and make sure to preserve all
/// whitespace and indentation for all lines of code. The combined string must be exactly
/// as it appears in the file, or else this tool call will fail.
pub symbol: String,
/// The text that comes immediately after the symbol in the file.
pub context_after_symbol: String,
}
pub struct RenameTool;
impl Tool for RenameTool {
fn name(&self) -> String {
"rename".into()
}
fn needs_confirmation(&self, _input: &serde_json::Value, _cx: &App) -> bool {
false
}
fn description(&self) -> String {
include_str!("./rename_tool/description.md").into()
}
fn icon(&self) -> IconName {
IconName::Pencil
}
fn input_schema(
&self,
_format: language_model::LanguageModelToolSchemaFormat,
) -> serde_json::Value {
let schema = schemars::schema_for!(RenameToolInput);
serde_json::to_value(&schema).unwrap()
}
fn ui_text(&self, input: &serde_json::Value) -> String {
match serde_json::from_value::<RenameToolInput>(input.clone()) {
Ok(input) => {
format!("Rename '{}' to '{}'", input.symbol, input.new_name)
}
Err(_) => "Rename symbol".to_string(),
}
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let input = match serde_json::from_value::<RenameToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
};
cx.spawn(async move |cx| {
let buffer = {
let project_path = project.read_with(cx, |project, cx| {
project
.find_project_path(&input.path, cx)
.context("Path not found in project")
})??;
project.update(cx, |project, cx| project.open_buffer(project_path, cx))?.await?
};
action_log.update(cx, |action_log, cx| {
action_log.buffer_read(buffer.clone(), cx);
})?;
let position = {
let Some(position) = buffer.read_with(cx, |buffer, _cx| {
find_symbol_position(&buffer, &input.context_before_symbol, &input.symbol, &input.context_after_symbol)
})? else {
return Err(anyhow!(
"Failed to locate the symbol specified by context_before_symbol, symbol, and context_after_symbol. Make sure context_before_symbol and context_after_symbol each match exactly once in the file."
));
};
buffer.read_with(cx, |buffer, _| {
position.to_point_utf16(&buffer.snapshot())
})?
};
project
.update(cx, |project, cx| {
project.perform_rename(buffer.clone(), position, input.new_name.clone(), cx)
})?
.await?;
project
.update(cx, |project, cx| project.save_buffer(buffer.clone(), cx))?
.await?;
action_log.update(cx, |log, cx| {
log.buffer_edited(buffer.clone(), cx)
})?;
Ok(format!("Renamed '{}' to '{}'", input.symbol, input.new_name))
})
}
}
/// Finds the position of the symbol in the buffer, if it appears between context_before_symbol
/// and context_after_symbol, and if that combined string appears exactly once in the buffer.
///
/// If an exact match fails, it tries adding a newline to the end of context_before_symbol and
/// to the beginning of context_after_symbol to accommodate line-based context matching.
fn find_symbol_position(
buffer: &Buffer,
context_before_symbol: &str,
symbol: &str,
context_after_symbol: &str,
) -> Option<language::Anchor> {
let snapshot = buffer.snapshot();
let text = snapshot.text();
// First try with exact match
let search_string = format!("{context_before_symbol}{symbol}{context_after_symbol}");
let mut positions = text.match_indices(&search_string);
let position_result = positions.next();
if let Some(position) = position_result {
// Check if the matched string is unique
if positions.next().is_none() {
let symbol_start = position.0 + context_before_symbol.len();
let symbol_start_anchor =
snapshot.anchor_before(snapshot.offset_to_point(symbol_start));
return Some(symbol_start_anchor);
}
}
// If exact match fails or is not unique, try with line-based context
// Add a newline to the end of before context and beginning of after context
let line_based_before = if context_before_symbol.ends_with('\n') {
context_before_symbol.to_string()
} else {
format!("{context_before_symbol}\n")
};
let line_based_after = if context_after_symbol.starts_with('\n') {
context_after_symbol.to_string()
} else {
format!("\n{context_after_symbol}")
};
let line_search_string = format!("{line_based_before}{symbol}{line_based_after}");
let mut line_positions = text.match_indices(&line_search_string);
let line_position = line_positions.next()?;
// The line-based search string must also appear exactly once
if line_positions.next().is_some() {
return None;
}
let line_symbol_start = line_position.0 + line_based_before.len();
let line_symbol_start_anchor =
snapshot.anchor_before(snapshot.offset_to_point(line_symbol_start));
Some(line_symbol_start_anchor)
}
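// Illustrative sketch (not part of the original change): the uniqueness rule implemented
// above, reduced to plain strings. `match_indices` yields every occurrence of the combined
// needle; a symbol offset is only accepted when exactly one occurrence exists.
#[cfg(test)]
mod find_symbol_position_sketch {
    #[test]
    fn unique_context_resolves_symbol_offset() {
        let text = "fn main() {\n    let count = 0;\n    println!(\"{count}\");\n}\n";
        let (before, symbol, after) = ("    let ", "count", " = 0;");
        let needle = format!("{before}{symbol}{after}");
        let mut hits = text.match_indices(&needle);
        let first = hits.next().expect("needle should appear in the text");
        assert!(hits.next().is_none(), "needle must be unique");
        let symbol_start = first.0 + before.len();
        assert_eq!(&text[symbol_start..symbol_start + symbol.len()], "count");
    }
}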

View File

@@ -0,0 +1,15 @@
Renames a symbol across your codebase using the language server's semantic knowledge.
This tool performs a rename refactoring operation on a specified symbol. It uses the project's language server to analyze the code and perform the rename correctly across all files where the symbol is referenced.
Unlike a simple find and replace, this tool understands the semantic meaning of the code, so it renames only the specific symbol you target, not unrelated text that happens to have the same name.
Examples of symbols you can rename:
- Variables
- Functions
- Classes/structs
- Fields/properties
- Methods
- Interfaces/traits
The language server handles updating all references to the renamed symbol throughout the codebase.
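As a rough illustration of the required input shape (field names come from `RenameToolInput`; the path and code snippets below are hypothetical), a call that renames a local variable could look like this:

```rust
// Hypothetical rename tool input; all values are illustrative, not taken from the original change.
serde_json::json!({
    "path": "my-project/src/main.rs",
    "new_name": "total_count",
    "context_before_symbol": "fn main() {\n    let ",
    "symbol": "count",
    "context_after_symbol": " = items.len();\n"
})
```

The combined `context_before_symbol` + `symbol` + `context_after_symbol` string must appear exactly once in the file, as described above.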

View File

@@ -0,0 +1,366 @@
use crate::schema::json_schema_for;
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use futures::io::BufReader;
use futures::{AsyncBufReadExt, AsyncReadExt, FutureExt};
use gpui::{App, AppContext, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::future;
use util::get_system_shell;
use std::path::Path;
use std::sync::Arc;
use ui::IconName;
use util::command::new_smol_command;
use util::markdown::MarkdownString;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct TerminalToolInput {
/// The one-liner command to execute.
command: String,
/// Working directory for the command. This must be one of the root directories of the project.
cd: String,
}
pub struct TerminalTool;
impl Tool for TerminalTool {
fn name(&self) -> String {
"terminal".to_string()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
true
}
fn description(&self) -> String {
include_str!("./terminal_tool/description.md").to_string()
}
fn icon(&self) -> IconName {
IconName::Terminal
}
fn input_schema(&self, format: LanguageModelToolSchemaFormat) -> serde_json::Value {
json_schema_for::<TerminalToolInput>(format)
}
fn ui_text(&self, input: &serde_json::Value) -> String {
match serde_json::from_value::<TerminalToolInput>(input.clone()) {
Ok(input) => {
let mut lines = input.command.lines();
let first_line = lines.next().unwrap_or_default();
let remaining_line_count = lines.count();
match remaining_line_count {
0 => MarkdownString::inline_code(&first_line).0,
1 => {
MarkdownString::inline_code(&format!(
"{} - {} more line",
first_line, remaining_line_count
))
.0
}
n => {
MarkdownString::inline_code(&format!("{} - {} more lines", first_line, n)).0
}
}
}
Err(_) => "Run terminal command".to_string(),
}
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let input: TerminalToolInput = match serde_json::from_value(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
};
let project = project.read(cx);
let input_path = Path::new(&input.cd);
let working_dir = if input.cd == "." {
// Accept "." as meaning "the one worktree" if we only have one worktree.
let mut worktrees = project.worktrees(cx);
let only_worktree = match worktrees.next() {
Some(worktree) => worktree,
None => return Task::ready(Err(anyhow!("No worktrees found in the project"))),
};
if worktrees.next().is_some() {
return Task::ready(Err(anyhow!(
"'.' is ambiguous in multi-root workspaces. Please specify a root directory explicitly."
)));
}
only_worktree.read(cx).abs_path()
} else if input_path.is_absolute() {
// Absolute paths are allowed, but only if they're in one of the project's worktrees.
if !project
.worktrees(cx)
.any(|worktree| input_path.starts_with(&worktree.read(cx).abs_path()))
{
return Task::ready(Err(anyhow!(
"The absolute path must be within one of the project's worktrees"
)));
}
input_path.into()
} else {
let Some(worktree) = project.worktree_for_root_name(&input.cd, cx) else {
return Task::ready(Err(anyhow!(
"`cd` directory {} not found in the project",
&input.cd
)));
};
worktree.read(cx).abs_path()
};
cx.background_spawn(run_command_limited(working_dir, input.command))
}
}
const LIMIT: usize = 16 * 1024;
async fn run_command_limited(working_dir: Arc<Path>, command: String) -> Result<String> {
let shell = get_system_shell();
let mut cmd = new_smol_command(&shell)
.arg("-c")
.arg(&command)
.current_dir(working_dir)
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::piped())
.spawn()
.context("Failed to execute terminal command")?;
let mut combined_buffer = String::with_capacity(LIMIT + 1);
let mut out_reader = BufReader::new(cmd.stdout.take().context("Failed to get stdout")?);
let mut out_tmp_buffer = String::with_capacity(512);
let mut err_reader = BufReader::new(cmd.stderr.take().context("Failed to get stderr")?);
let mut err_tmp_buffer = String::with_capacity(512);
let mut out_line = Box::pin(
out_reader
.read_line(&mut out_tmp_buffer)
.left_future()
.fuse(),
);
let mut err_line = Box::pin(
err_reader
.read_line(&mut err_tmp_buffer)
.left_future()
.fuse(),
);
let mut has_stdout = true;
let mut has_stderr = true;
while (has_stdout || has_stderr) && combined_buffer.len() < LIMIT + 1 {
futures::select_biased! {
read = out_line => {
drop(out_line);
combined_buffer.extend(out_tmp_buffer.drain(..));
if read? == 0 {
out_line = Box::pin(future::pending().right_future().fuse());
has_stdout = false;
} else {
out_line = Box::pin(out_reader.read_line(&mut out_tmp_buffer).left_future().fuse());
}
}
read = err_line => {
drop(err_line);
combined_buffer.extend(err_tmp_buffer.drain(..));
if read? == 0 {
err_line = Box::pin(future::pending().right_future().fuse());
has_stderr = false;
} else {
err_line = Box::pin(err_reader.read_line(&mut err_tmp_buffer).left_future().fuse());
}
}
};
}
drop((out_line, err_line));
let truncated = combined_buffer.len() > LIMIT;
combined_buffer.truncate(LIMIT);
consume_reader(out_reader, truncated).await?;
consume_reader(err_reader, truncated).await?;
let status = cmd.status().await.context("Failed to get command status")?;
let output_string = if truncated {
// Searching for `\n` byte-wise is valid in UTF-8, since ASCII bytes (0-127) never
// occur inside a multi-byte character.
let last_line_ix = combined_buffer.bytes().rposition(|b| b == b'\n');
let combined_buffer = &combined_buffer[..last_line_ix.unwrap_or(combined_buffer.len())];
format!(
"Command output too long. The first {} bytes:\n\n{}",
combined_buffer.len(),
output_block(&combined_buffer),
)
} else {
output_block(&combined_buffer)
};
let output_with_status = if status.success() {
if output_string.is_empty() {
"Command executed successfully.".to_string()
} else {
output_string.to_string()
}
} else {
format!(
"Command failed with exit code {} (shell: {}).\n\n{}",
status.code().unwrap_or(-1),
shell,
output_string,
)
};
Ok(output_with_status)
}
async fn consume_reader<T: AsyncReadExt + Unpin>(
mut reader: BufReader<T>,
truncated: bool,
) -> Result<(), std::io::Error> {
loop {
let skipped_bytes = reader.fill_buf().await?;
if skipped_bytes.is_empty() {
break;
}
let skipped_bytes_len = skipped_bytes.len();
reader.consume_unpin(skipped_bytes_len);
// Should only skip if we went over the limit
debug_assert!(truncated);
}
Ok(())
}
fn output_block(output: &str) -> String {
format!(
"```\n{}{}```",
output,
if output.ends_with('\n') { "" } else { "\n" }
)
}
#[cfg(test)]
#[cfg(not(windows))]
mod tests {
use gpui::TestAppContext;
use super::*;
#[gpui::test(iterations = 10)]
async fn test_run_command_simple(cx: &mut TestAppContext) {
cx.executor().allow_parking();
let result =
run_command_limited(Path::new(".").into(), "echo 'Hello, World!'".to_string()).await;
assert!(result.is_ok());
assert_eq!(result.unwrap(), "```\nHello, World!\n```");
}
#[gpui::test(iterations = 10)]
async fn test_interleaved_stdout_stderr(cx: &mut TestAppContext) {
cx.executor().allow_parking();
let command = "echo 'stdout 1' && sleep 0.01 && echo 'stderr 1' >&2 && sleep 0.01 && echo 'stdout 2' && sleep 0.01 && echo 'stderr 2' >&2";
let result = run_command_limited(Path::new(".").into(), command.to_string()).await;
assert!(result.is_ok());
assert_eq!(
result.unwrap(),
"```\nstdout 1\nstderr 1\nstdout 2\nstderr 2\n```"
);
}
#[gpui::test(iterations = 10)]
async fn test_multiple_output_reads(cx: &mut TestAppContext) {
cx.executor().allow_parking();
// Command with multiple outputs that might require multiple reads
let result = run_command_limited(
Path::new(".").into(),
"echo '1'; sleep 0.01; echo '2'; sleep 0.01; echo '3'".to_string(),
)
.await;
assert!(result.is_ok());
assert_eq!(result.unwrap(), "```\n1\n2\n3\n```");
}
#[gpui::test(iterations = 10)]
async fn test_output_truncation_single_line(cx: &mut TestAppContext) {
cx.executor().allow_parking();
let cmd = format!("echo '{}'; sleep 0.01;", "X".repeat(LIMIT * 2));
let result = run_command_limited(Path::new(".").into(), cmd).await;
assert!(result.is_ok());
let output = result.unwrap();
let content_start = output.find("```\n").map(|i| i + 4).unwrap_or(0);
let content_end = output.rfind("\n```").unwrap_or(output.len());
let content_length = content_end - content_start;
// Output should be exactly the limit
assert_eq!(content_length, LIMIT);
}
#[gpui::test(iterations = 10)]
async fn test_output_truncation_multiline(cx: &mut TestAppContext) {
cx.executor().allow_parking();
let cmd = format!("echo '{}'; ", "X".repeat(120)).repeat(160);
let result = run_command_limited(Path::new(".").into(), cmd).await;
assert!(result.is_ok());
let output = result.unwrap();
assert!(output.starts_with("Command output too long. The first 16334 bytes:\n\n"));
let content_start = output.find("```\n").map(|i| i + 4).unwrap_or(0);
let content_end = output.rfind("\n```").unwrap_or(output.len());
let content_length = content_end - content_start;
assert!(content_length <= LIMIT);
}
#[gpui::test(iterations = 10)]
async fn test_command_failure(cx: &mut TestAppContext) {
cx.executor().allow_parking();
let result = run_command_limited(Path::new(".").into(), "exit 42".to_string()).await;
assert!(result.is_ok());
let output = result.unwrap();
// Extract the shell name from path for cleaner test output
let shell_path = std::env::var("SHELL").unwrap_or("bash".to_string());
let expected_output = format!(
"Command failed with exit code 42 (shell: {}).\n\n```\n\n```",
shell_path
);
assert_eq!(output, expected_output);
}
}

View File

@@ -0,0 +1,9 @@
Executes a shell one-liner and returns the combined output.
This tool spawns a process using the user's current shell, combines stdout and stderr into one interleaved stream as they are produced (preserving the order of writes), and captures that stream into a string which is returned.
Make sure you use the `cd` parameter to set the working directory to one of the root directories of the project. NEVER change the directory as part of the `command` itself, or the command will fail with an error.
Do not use this tool for commands that run indefinitely, such as servers (e.g., `python -m http.server`) or file watchers that don't terminate on their own.
Remember that each invocation of this tool will spawn a new shell process, so you can't rely on any state from previous invocations.
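As a sketch of the expected input shape (field names come from `TerminalToolInput`; the command and directory are hypothetical), a single invocation might pass:

```rust
// Hypothetical terminal tool input; `cd` must name a project root, not a subdirectory.
serde_json::json!({
    "command": "cargo test -p my_crate",
    "cd": "my-project"
})
```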

View File

@@ -33,7 +33,7 @@ impl Tool for ThinkingTool {
}
fn icon(&self) -> IconName {
IconName::Brain
IconName::LightBulb
}
fn input_schema(&self, format: LanguageModelToolSchemaFormat) -> serde_json::Value {

View File

@@ -1,13 +1,21 @@
use futures::channel::oneshot;
use git2::{DiffLineType as GitDiffLineType, DiffOptions as GitOptions, Patch as GitPatch};
use gpui::{App, AppContext as _, AsyncApp, Context, Entity, EventEmitter, Task};
use gpui::{App, AppContext as _, AsyncApp, Context, Entity, EventEmitter, Task, TaskLabel};
use language::{Language, LanguageRegistry};
use rope::Rope;
use std::{cmp::Ordering, future::Future, iter, mem, ops::Range, sync::Arc};
use std::{
cmp::Ordering,
future::Future,
iter,
ops::Range,
sync::{Arc, LazyLock},
};
use sum_tree::SumTree;
use text::{Anchor, Bias, BufferId, OffsetRangeExt, Point, ToOffset as _};
use util::ResultExt;
pub static CALCULATE_DIFF_TASK: LazyLock<TaskLabel> = LazyLock::new(TaskLabel::new);
pub struct BufferDiff {
pub buffer_id: BufferId,
inner: BufferDiffInner,
@@ -181,10 +189,12 @@ impl BufferDiffSnapshot {
base_text_exists = false;
};
let hunks = cx.background_spawn({
let buffer = buffer.clone();
async move { compute_hunks(base_text_pair, buffer) }
});
let hunks = cx
.background_executor()
.spawn_labeled(*CALCULATE_DIFF_TASK, {
let buffer = buffer.clone();
async move { compute_hunks(base_text_pair, buffer) }
});
async move {
let (base_text, hunks) = futures::join!(base_text_snapshot, hunks);
@@ -208,17 +218,18 @@ impl BufferDiffSnapshot {
) -> impl Future<Output = Self> + use<> {
let base_text_exists = base_text.is_some();
let base_text_pair = base_text.map(|text| (text, base_text_snapshot.as_rope().clone()));
cx.background_spawn(async move {
Self {
inner: BufferDiffInner {
base_text: base_text_snapshot,
pending_hunks: SumTree::new(&buffer),
hunks: compute_hunks(base_text_pair, buffer),
base_text_exists,
},
secondary_diff: None,
}
})
cx.background_executor()
.spawn_labeled(*CALCULATE_DIFF_TASK, async move {
Self {
inner: BufferDiffInner {
base_text: base_text_snapshot,
pending_hunks: SumTree::new(&buffer),
hunks: compute_hunks(base_text_pair, buffer),
base_text_exists,
},
secondary_diff: None,
}
})
}
#[cfg(test)]
@@ -381,6 +392,7 @@ impl BufferDiffInner {
while let Some(PendingHunk {
buffer_range,
diff_base_byte_range,
new_status,
..
}) = pending_hunks_iter.next()
{
@@ -439,16 +451,23 @@ impl BufferDiffInner {
let index_end = prev_unstaged_hunk_base_text_end + end_overshoot;
let index_byte_range = index_start..index_end;
let replacement_text = if stage {
log::debug!("stage hunk {:?}", buffer_offset_range);
buffer
.text_for_range(buffer_offset_range)
.collect::<String>()
} else {
log::debug!("unstage hunk {:?}", buffer_offset_range);
head_text
.chunks_in_range(diff_base_byte_range.clone())
.collect::<String>()
let replacement_text = match new_status {
DiffHunkSecondaryStatus::SecondaryHunkRemovalPending => {
log::debug!("staging hunk {:?}", buffer_offset_range);
buffer
.text_for_range(buffer_offset_range)
.collect::<String>()
}
DiffHunkSecondaryStatus::SecondaryHunkAdditionPending => {
log::debug!("unstaging hunk {:?}", buffer_offset_range);
head_text
.chunks_in_range(diff_base_byte_range.clone())
.collect::<String>()
}
_ => {
debug_assert!(false);
continue;
}
};
edits.push((index_byte_range, replacement_text));
@@ -631,28 +650,6 @@ impl BufferDiffInner {
})
}
fn set_state(
&mut self,
new_state: Self,
buffer: &text::BufferSnapshot,
) -> Option<Range<Anchor>> {
let (base_text_changed, changed_range) =
match (self.base_text_exists, new_state.base_text_exists) {
(false, false) => (true, None),
(true, true) if self.base_text.remote_id() == new_state.base_text.remote_id() => {
(false, new_state.compare(&self, buffer))
}
_ => (true, Some(text::Anchor::MIN..text::Anchor::MAX)),
};
let pending_hunks = mem::replace(&mut self.pending_hunks, SumTree::new(buffer));
*self = new_state;
if !base_text_changed {
self.pending_hunks = pending_hunks;
}
changed_range
}
fn compare(&self, old: &Self, new_snapshot: &text::BufferSnapshot) -> Option<Range<Anchor>> {
let mut new_cursor = self.hunks.cursor::<()>(new_snapshot);
let mut old_cursor = old.hunks.cursor::<()>(new_snapshot);
@@ -1011,26 +1008,61 @@ impl BufferDiff {
&mut self,
new_snapshot: BufferDiffSnapshot,
buffer: &text::BufferSnapshot,
secondary_changed_range: Option<Range<Anchor>>,
cx: &mut Context<Self>,
) -> Option<Range<Anchor>> {
let changed_range = self.inner.set_state(new_snapshot.inner, buffer);
self.set_snapshot_with_secondary(new_snapshot, buffer, None, false, cx)
}
let changed_range = match (secondary_changed_range, changed_range) {
(None, None) => None,
(Some(unstaged_range), None) => self.range_to_hunk_range(unstaged_range, &buffer, cx),
(None, Some(uncommitted_range)) => Some(uncommitted_range),
(Some(unstaged_range), Some(uncommitted_range)) => {
let mut start = uncommitted_range.start;
let mut end = uncommitted_range.end;
if let Some(unstaged_range) = self.range_to_hunk_range(unstaged_range, &buffer, cx)
{
start = unstaged_range.start.min(&uncommitted_range.start, &buffer);
end = unstaged_range.end.max(&uncommitted_range.end, &buffer);
pub fn set_snapshot_with_secondary(
&mut self,
new_snapshot: BufferDiffSnapshot,
buffer: &text::BufferSnapshot,
secondary_diff_change: Option<Range<Anchor>>,
clear_pending_hunks: bool,
cx: &mut Context<Self>,
) -> Option<Range<Anchor>> {
log::debug!("set snapshot with secondary {secondary_diff_change:?}");
let state = &mut self.inner;
let new_state = new_snapshot.inner;
let (base_text_changed, mut changed_range) =
match (state.base_text_exists, new_state.base_text_exists) {
(false, false) => (true, None),
(true, true) if state.base_text.remote_id() == new_state.base_text.remote_id() => {
(false, new_state.compare(&state, buffer))
}
_ => (true, Some(text::Anchor::MIN..text::Anchor::MAX)),
};
if let Some(secondary_changed_range) = secondary_diff_change {
if let Some(secondary_hunk_range) =
self.range_to_hunk_range(secondary_changed_range, &buffer, cx)
{
if let Some(range) = &mut changed_range {
range.start = secondary_hunk_range.start.min(&range.start, &buffer);
range.end = secondary_hunk_range.end.max(&range.end, &buffer);
} else {
changed_range = Some(secondary_hunk_range);
}
Some(start..end)
}
};
}
let state = &mut self.inner;
state.base_text_exists = new_state.base_text_exists;
state.base_text = new_state.base_text;
state.hunks = new_state.hunks;
if base_text_changed || clear_pending_hunks {
if let Some((first, last)) = state.pending_hunks.first().zip(state.pending_hunks.last())
{
if let Some(range) = &mut changed_range {
range.start = range.start.min(&first.buffer_range.start, &buffer);
range.end = range.end.max(&last.buffer_range.end, &buffer);
} else {
changed_range = Some(first.buffer_range.start..last.buffer_range.end);
}
}
state.pending_hunks = SumTree::new(buffer);
}
cx.emit(BufferDiffEvent::DiffChanged {
changed_range: changed_range.clone(),
@@ -1138,7 +1170,7 @@ impl BufferDiff {
return;
};
this.update(cx, |this, cx| {
this.set_snapshot(snapshot, &buffer, None, cx);
this.set_snapshot(snapshot, &buffer, cx);
})
.log_err();
drop(complete_on_drop)
@@ -1163,7 +1195,7 @@ impl BufferDiff {
cx,
);
let snapshot = cx.background_executor().block(snapshot);
self.set_snapshot(snapshot, &buffer, None, cx);
self.set_snapshot(snapshot, &buffer, cx);
}
}
@@ -1752,13 +1784,13 @@ mod tests {
let unstaged_diff = cx.new(|cx| {
let mut diff = BufferDiff::new(&buffer, cx);
diff.set_snapshot(unstaged, &buffer, None, cx);
diff.set_snapshot(unstaged, &buffer, cx);
diff
});
let uncommitted_diff = cx.new(|cx| {
let mut diff = BufferDiff::new(&buffer, cx);
diff.set_snapshot(uncommitted, &buffer, None, cx);
diff.set_snapshot(uncommitted, &buffer, cx);
diff.set_secondary_diff(unstaged_diff);
diff
});
@@ -1819,12 +1851,12 @@ mod tests {
let uncommitted = BufferDiffSnapshot::new_sync(buffer.clone(), head_text.clone(), cx);
let unstaged_diff = cx.new(|cx| {
let mut diff = BufferDiff::new(&buffer, cx);
diff.set_snapshot(unstaged, &buffer, None, cx);
diff.set_snapshot(unstaged, &buffer, cx);
diff
});
let uncommitted_diff = cx.new(|cx| {
let mut diff = BufferDiff::new(&buffer, cx);
diff.set_snapshot(uncommitted, &buffer, None, cx);
diff.set_snapshot(uncommitted, &buffer, cx);
diff.set_secondary_diff(unstaged_diff.clone());
diff
});

View File

@@ -16,6 +16,7 @@ pub enum CliRequest {
wait: bool,
open_new_workspace: Option<bool>,
env: Option<HashMap<String, String>>,
user_data_dir: Option<String>,
},
}

View File

@@ -26,7 +26,11 @@ struct Detect;
trait InstalledApp {
fn zed_version_string(&self) -> String;
fn launch(&self, ipc_url: String) -> anyhow::Result<()>;
fn run_foreground(&self, ipc_url: String) -> io::Result<ExitStatus>;
fn run_foreground(
&self,
ipc_url: String,
user_data_dir: Option<&str>,
) -> io::Result<ExitStatus>;
fn path(&self) -> PathBuf;
}
@@ -58,6 +62,13 @@ struct Args {
/// Create a new workspace
#[arg(short, long, overrides_with = "add")]
new: bool,
/// Sets a custom directory for all user data (e.g., database, extensions, logs).
/// This overrides the default platform-specific data directory location.
/// On macOS, the default is `~/Library/Application Support/Zed`.
/// On Linux/FreeBSD, the default is `$XDG_DATA_HOME/zed`.
/// On Windows, the default is `%LOCALAPPDATA%\Zed`.
#[arg(long, value_name = "DIR")]
user_data_dir: Option<String>,
/// The paths to open in Zed (space-separated).
///
/// Use `path:line:column` syntax to open a file at the given line and column.
@@ -135,6 +146,12 @@ fn main() -> Result<()> {
}
let args = Args::parse();
// Set custom data directory before any path operations
let user_data_dir = args.user_data_dir.clone();
if let Some(dir) = &user_data_dir {
paths::set_custom_data_dir(dir);
}
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
let args = flatpak::set_bin_if_no_escape(args);
@@ -246,6 +263,7 @@ fn main() -> Result<()> {
let sender: JoinHandle<anyhow::Result<()>> = thread::spawn({
let exit_status = exit_status.clone();
let user_data_dir_for_thread = user_data_dir.clone();
move || {
let (_, handshake) = server.accept().context("Handshake after Zed spawn")?;
let (tx, rx) = (handshake.requests, handshake.responses);
@@ -256,6 +274,7 @@ fn main() -> Result<()> {
wait: args.wait,
open_new_workspace,
env,
user_data_dir: user_data_dir_for_thread,
})?;
while let Ok(response) = rx.recv() {
@@ -291,7 +310,7 @@ fn main() -> Result<()> {
.collect();
if args.foreground {
app.run_foreground(url)?;
app.run_foreground(url, user_data_dir.as_deref())?;
} else {
app.launch(url)?;
sender.join().unwrap()?;
@@ -437,7 +456,7 @@ mod linux {
}
fn launch(&self, ipc_url: String) -> anyhow::Result<()> {
let sock_path = paths::support_dir().join(format!("zed-{}.sock", *RELEASE_CHANNEL));
let sock_path = paths::data_dir().join(format!("zed-{}.sock", *RELEASE_CHANNEL));
let sock = UnixDatagram::unbound()?;
if sock.connect(&sock_path).is_err() {
self.boot_background(ipc_url)?;
@@ -447,10 +466,17 @@ mod linux {
Ok(())
}
fn run_foreground(&self, ipc_url: String) -> io::Result<ExitStatus> {
std::process::Command::new(self.0.clone())
.arg(ipc_url)
.status()
fn run_foreground(
&self,
ipc_url: String,
user_data_dir: Option<&str>,
) -> io::Result<ExitStatus> {
let mut cmd = std::process::Command::new(self.0.clone());
cmd.arg(ipc_url);
if let Some(dir) = user_data_dir {
cmd.arg("--user-data-dir").arg(dir);
}
cmd.status()
}
fn path(&self) -> PathBuf {
@@ -688,12 +714,17 @@ mod windows {
Ok(())
}
fn run_foreground(&self, ipc_url: String) -> io::Result<ExitStatus> {
std::process::Command::new(self.0.clone())
.arg(ipc_url)
.arg("--foreground")
.spawn()?
.wait()
fn run_foreground(
&self,
ipc_url: String,
user_data_dir: Option<&str>,
) -> io::Result<ExitStatus> {
let mut cmd = std::process::Command::new(self.0.clone());
cmd.arg(ipc_url).arg("--foreground");
if let Some(dir) = user_data_dir {
cmd.arg("--user-data-dir").arg(dir);
}
cmd.spawn()?.wait()
}
fn path(&self) -> PathBuf {
@@ -875,13 +906,22 @@ mod mac_os {
Ok(())
}
fn run_foreground(&self, ipc_url: String) -> io::Result<ExitStatus> {
fn run_foreground(
&self,
ipc_url: String,
user_data_dir: Option<&str>,
) -> io::Result<ExitStatus> {
let path = match self {
Bundle::App { app_bundle, .. } => app_bundle.join("Contents/MacOS/zed"),
Bundle::LocalPath { executable, .. } => executable.clone(),
};
std::process::Command::new(path).arg(ipc_url).status()
let mut cmd = std::process::Command::new(path);
cmd.arg(ipc_url);
if let Some(dir) = user_data_dir {
cmd.arg("--user-data-dir").arg(dir);
}
cmd.status()
}
fn path(&self) -> PathBuf {

View File

@@ -44,7 +44,7 @@ log.workspace = true
nanoid.workspace = true
open_ai.workspace = true
parking_lot.workspace = true
prometheus = "0.13"
prometheus = "0.14"
prost.workspace = true
rand.workspace = true
reqwest = { version = "0.11", features = ["json"] }

View File

@@ -274,7 +274,7 @@ async fn create_billing_subscription(
customer.id
};
let default_model = llm_db.model(rpc::LanguageModelProvider::Anthropic, "claude-3-5-sonnet")?;
let default_model = llm_db.model(rpc::LanguageModelProvider::Anthropic, "claude-3-7-sonnet")?;
let stripe_model = stripe_billing.register_model(default_model).await?;
let success_url = format!(
"{}/account?checkout_complete=1",

View File

@@ -34,6 +34,7 @@ static MENTIONS_SEARCH: LazyLock<SearchQuery> = LazyLock::new(|| {
false,
false,
false,
false,
Default::default(),
Default::default(),
None,

View File

@@ -187,22 +187,20 @@ impl ComponentPreview {
let mut entries = Vec::new();
let known_scopes = [
ComponentScope::Layout,
ComponentScope::Input,
ComponentScope::Editor,
ComponentScope::Notification,
ComponentScope::Collaboration,
ComponentScope::VersionControl,
ComponentScope::None,
];
// Always show all components first
entries.push(PreviewEntry::AllComponents);
entries.push(PreviewEntry::Separator);
for scope in known_scopes.iter() {
if let Some(components) = scope_groups.remove(scope) {
let mut scopes: Vec<_> = scope_groups
.keys()
.filter(|scope| !matches!(**scope, ComponentScope::None))
.cloned()
.collect();
scopes.sort_by_key(|s| s.to_string());
for scope in scopes {
if let Some(components) = scope_groups.remove(&scope) {
if !components.is_empty() {
entries.push(PreviewEntry::SectionHeader(scope.to_string().into()));
let mut sorted_components = components;
@@ -215,6 +213,7 @@ impl ComponentPreview {
}
}
// Add uncategorized components last
if let Some(components) = scope_groups.get(&ComponentScope::None) {
if !components.is_empty() {
entries.push(PreviewEntry::Separator);
@@ -272,7 +271,12 @@ impl ComponentPreview {
.into_any_element()
}
PreviewEntry::Separator => ListItem::new(ix)
.child(h_flex().pt_3().child(Divider::horizontal_dashed()))
.child(
h_flex()
.occlude()
.pt_3()
.child(Divider::horizontal_dashed()),
)
.into_any_element(),
}
}

View File

@@ -93,7 +93,7 @@ pub struct TcpArguments {
pub port: u16,
pub timeout: Option<u64>,
}
#[derive(Debug, Clone)]
#[derive(Default, Debug, Clone)]
pub struct DebugAdapterBinary {
pub command: String,
pub arguments: Option<Vec<OsString>>,
@@ -102,6 +102,7 @@ pub struct DebugAdapterBinary {
pub connection: Option<TcpArguments>,
}
#[derive(Debug)]
pub struct AdapterVersion {
pub tag_name: String,
pub url: String,

View File

@@ -0,0 +1,149 @@
use std::{path::PathBuf, sync::OnceLock};
use anyhow::{Result, bail};
use async_trait::async_trait;
use dap::adapters::latest_github_release;
use gpui::AsyncApp;
use task::{DebugAdapterConfig, DebugRequestType, DebugTaskDefinition};
use crate::*;
#[derive(Default)]
pub(crate) struct CodeLldbDebugAdapter {
last_known_version: OnceLock<String>,
}
impl CodeLldbDebugAdapter {
const ADAPTER_NAME: &'static str = "CodeLLDB";
}
#[async_trait(?Send)]
impl DebugAdapter for CodeLldbDebugAdapter {
fn name(&self) -> DebugAdapterName {
DebugAdapterName(Self::ADAPTER_NAME.into())
}
async fn install_binary(
&self,
version: AdapterVersion,
delegate: &dyn DapDelegate,
) -> Result<()> {
adapters::download_adapter_from_github(
self.name(),
version,
adapters::DownloadedFileType::Vsix,
delegate,
)
.await?;
Ok(())
}
async fn fetch_latest_adapter_version(
&self,
delegate: &dyn DapDelegate,
) -> Result<AdapterVersion> {
let release =
latest_github_release("vadimcn/codelldb", true, false, delegate.http_client()).await?;
let arch = match std::env::consts::ARCH {
"aarch64" => "arm64",
"x86_64" => "x64",
_ => {
return Err(anyhow!(
"unsupported architecture {}",
std::env::consts::ARCH
));
}
};
let platform = match std::env::consts::OS {
"macos" => "darwin",
"linux" => "linux",
"windows" => "win32",
_ => {
return Err(anyhow!(
"unsupported operating system {}",
std::env::consts::OS
));
}
};
let asset_name = format!("codelldb-{platform}-{arch}.vsix");
let _ = self.last_known_version.set(release.tag_name.clone());
let ret = AdapterVersion {
tag_name: release.tag_name,
url: release
.assets
.iter()
.find(|asset| asset.name == asset_name)
.ok_or_else(|| anyhow!("no asset found matching {:?}", asset_name))?
.browser_download_url
.clone(),
};
Ok(ret)
}
async fn get_installed_binary(
&self,
_: &dyn DapDelegate,
_: &DebugAdapterConfig,
_: Option<PathBuf>,
_: &mut AsyncApp,
) -> Result<DebugAdapterBinary> {
let Some(version) = self.last_known_version.get() else {
bail!("Could not determine latest CodeLLDB version");
};
let adapter_path = paths::debug_adapters_dir().join(&Self::ADAPTER_NAME);
let version_path = adapter_path.join(format!("{}_{}", Self::ADAPTER_NAME, version));
let adapter_dir = version_path.join("extension").join("adapter");
let command = adapter_dir.join("codelldb");
let command = command
.to_str()
.map(ToOwned::to_owned)
.ok_or_else(|| anyhow!("Adapter path is expected to be valid UTF-8"))?;
Ok(DebugAdapterBinary {
command,
cwd: Some(adapter_dir),
arguments: Some(vec![
"--settings".into(),
json!({"sourceLanguages": ["cpp", "rust"]})
.to_string()
.into(),
]),
..Default::default()
})
}
fn request_args(&self, config: &DebugTaskDefinition) -> Value {
let mut args = json!({
"request": match config.request {
DebugRequestType::Launch(_) => "launch",
DebugRequestType::Attach(_) => "attach",
},
});
let map = args.as_object_mut().unwrap();
// CodeLLDB uses `name` for a terminal label.
map.insert("name".into(), Value::String(config.label.clone()));
match &config.request {
DebugRequestType::Attach(attach) => {
map.insert("pid".into(), attach.process_id.into());
}
DebugRequestType::Launch(launch) => {
map.insert("program".into(), launch.program.clone().into());
if !launch.args.is_empty() {
map.insert("args".into(), launch.args.clone().into());
}
if let Some(stop_on_entry) = config.stop_on_entry {
map.insert("stopOnEntry".into(), stop_on_entry.into());
}
if let Some(cwd) = launch.cwd.as_ref() {
map.insert("cwd".into(), cwd.to_string_lossy().into_owned().into());
}
}
}
args
}
}

View File

@@ -1,3 +1,4 @@
mod codelldb;
mod gdb;
mod go;
mod javascript;
@@ -9,6 +10,7 @@ use std::{net::Ipv4Addr, sync::Arc};
use anyhow::{Result, anyhow};
use async_trait::async_trait;
use codelldb::CodeLldbDebugAdapter;
use dap::{
DapRegistry,
adapters::{
@@ -26,6 +28,7 @@ use serde_json::{Value, json};
use task::{DebugAdapterConfig, TCPHost};
pub fn init(registry: Arc<DapRegistry>) {
registry.add_adapter(Arc::from(CodeLldbDebugAdapter::default()));
registry.add_adapter(Arc::from(PythonDebugAdapter));
registry.add_adapter(Arc::from(PhpDebugAdapter));
registry.add_adapter(Arc::from(JsDebugAdapter::default()));

View File

@@ -45,6 +45,7 @@ serde_json.workspace = true
settings.workspace = true
sysinfo.workspace = true
task.workspace = true
tasks_ui.workspace = true
terminal_view.workspace = true
theme.workspace = true
ui.workspace = true

View File

@@ -15,6 +15,7 @@ use gpui::{
Action, App, AsyncWindowContext, Context, Entity, EntityId, EventEmitter, FocusHandle,
Focusable, Subscription, Task, WeakEntity, actions,
};
use project::{
Project,
debugger::{
@@ -25,7 +26,9 @@ use project::{
};
use rpc::proto::{self};
use settings::Settings;
use std::{any::TypeId, path::PathBuf};
use std::any::TypeId;
use std::path::Path;
use std::sync::Arc;
use task::DebugTaskDefinition;
use terminal_view::terminal_panel::TerminalPanel;
use ui::{ContextMenu, Divider, DropdownMenu, Tooltip, prelude::*};
@@ -74,8 +77,45 @@ impl DebugPanel {
let project = workspace.project().clone();
let dap_store = project.read(cx).dap_store();
let _subscriptions =
vec![cx.subscribe_in(&dap_store, window, Self::handle_dap_store_event)];
let weak = cx.weak_entity();
let modal_subscription =
cx.observe_new::<tasks_ui::TasksModal>(move |_, window, cx| {
let modal_entity = cx.entity();
weak.update(cx, |_: &mut DebugPanel, cx| {
let Some(window) = window else {
log::error!("Debug panel couldn't subscribe to tasks modal because there was no window");
return;
};
cx.subscribe_in(
&modal_entity,
window,
|panel, _, event: &tasks_ui::ShowAttachModal, window, cx| {
panel.workspace.update(cx, |workspace, cx| {
let project = workspace.project().clone();
workspace.toggle_modal(window, cx, |window, cx| {
crate::attach_modal::AttachModal::new(
project,
event.debug_config.clone(),
true,
window,
cx,
)
});
}).ok();
},
)
.detach();
})
.ok();
});
let _subscriptions = vec![
cx.subscribe_in(&dap_store, window, Self::handle_dap_store_event),
modal_subscription,
];
let debug_panel = Self {
size: px(300.),
@@ -92,6 +132,87 @@ impl DebugPanel {
})
}
fn filter_action_types(&self, cx: &mut App) {
let (has_active_session, supports_restart, support_step_back, status) = self
.active_session()
.map(|item| {
let running = item.read(cx).mode().as_running().cloned();
match running {
Some(running) => {
let caps = running.read(cx).capabilities(cx);
(
!running.read(cx).session().read(cx).is_terminated(),
caps.supports_restart_request.unwrap_or_default(),
caps.supports_step_back.unwrap_or_default(),
running.read(cx).thread_status(cx),
)
}
None => (false, false, false, None),
}
})
.unwrap_or((false, false, false, None));
let filter = CommandPaletteFilter::global_mut(cx);
let debugger_action_types = [
TypeId::of::<Disconnect>(),
TypeId::of::<Stop>(),
TypeId::of::<ToggleIgnoreBreakpoints>(),
];
let running_action_types = [TypeId::of::<Pause>()];
let stopped_action_type = [
TypeId::of::<Continue>(),
TypeId::of::<StepOver>(),
TypeId::of::<StepInto>(),
TypeId::of::<StepOut>(),
TypeId::of::<editor::actions::DebuggerRunToCursor>(),
TypeId::of::<editor::actions::DebuggerEvaluateSelectedText>(),
];
let step_back_action_type = [TypeId::of::<StepBack>()];
let restart_action_type = [TypeId::of::<Restart>()];
if has_active_session {
filter.show_action_types(debugger_action_types.iter());
if supports_restart {
filter.show_action_types(restart_action_type.iter());
} else {
filter.hide_action_types(&restart_action_type);
}
if support_step_back {
filter.show_action_types(step_back_action_type.iter());
} else {
filter.hide_action_types(&step_back_action_type);
}
match status {
Some(ThreadStatus::Running) => {
filter.show_action_types(running_action_types.iter());
filter.hide_action_types(&stopped_action_type);
}
Some(ThreadStatus::Stopped) => {
filter.show_action_types(stopped_action_type.iter());
filter.hide_action_types(&running_action_types);
}
_ => {
filter.hide_action_types(&running_action_types);
filter.hide_action_types(&stopped_action_type);
}
}
} else {
// Show only the `debug: start` action
filter.hide_action_types(&debugger_action_types);
filter.hide_action_types(&step_back_action_type);
filter.hide_action_types(&restart_action_type);
filter.hide_action_types(&running_action_types);
filter.hide_action_types(&stopped_action_type);
}
}
pub fn load(
workspace: WeakEntity<Workspace>,
cx: AsyncWindowContext,
@@ -109,63 +230,15 @@ impl DebugPanel {
)
});
cx.observe_new::<DebugPanel>(|debug_panel, _, cx| {
Self::filter_action_types(debug_panel, cx);
})
.detach();
cx.observe(&debug_panel, |_, debug_panel, cx| {
let (has_active_session, supports_restart, support_step_back) = debug_panel
.update(cx, |this, cx| {
this.active_session()
.map(|item| {
let running = item.read(cx).mode().as_running().cloned();
match running {
Some(running) => {
let caps = running.read(cx).capabilities(cx);
(
true,
caps.supports_restart_request.unwrap_or_default(),
caps.supports_step_back.unwrap_or_default(),
)
}
None => (false, false, false),
}
})
.unwrap_or((false, false, false))
});
let filter = CommandPaletteFilter::global_mut(cx);
let debugger_action_types = [
TypeId::of::<Continue>(),
TypeId::of::<StepOver>(),
TypeId::of::<StepInto>(),
TypeId::of::<StepOut>(),
TypeId::of::<Stop>(),
TypeId::of::<Disconnect>(),
TypeId::of::<Pause>(),
TypeId::of::<ToggleIgnoreBreakpoints>(),
];
let step_back_action_type = [TypeId::of::<StepBack>()];
let restart_action_type = [TypeId::of::<Restart>()];
if has_active_session {
filter.show_action_types(debugger_action_types.iter());
if supports_restart {
filter.show_action_types(restart_action_type.iter());
} else {
filter.hide_action_types(&restart_action_type);
}
if support_step_back {
filter.show_action_types(step_back_action_type.iter());
} else {
filter.hide_action_types(&step_back_action_type);
}
} else {
// Show only the `debug: start` action
filter.hide_action_types(&debugger_action_types);
filter.hide_action_types(&step_back_action_type);
filter.hide_action_types(&restart_action_type);
}
debug_panel.update(cx, |debug_panel, cx| {
Self::filter_action_types(debug_panel, cx);
});
})
.detach();
@@ -241,6 +314,12 @@ impl DebugPanel {
cx,
);
if let Some(running) = session_item.read(cx).mode().as_running().cloned() {
// We might want to make this an event subscription and only notify when a new thread is selected
// This is used to filter the command menu correctly
cx.observe(&running, |_, _, cx| cx.notify()).detach();
}
self.sessions.push(session_item.clone());
self.activate_session(session_item, window, cx);
}
@@ -272,7 +351,7 @@ impl DebugPanel {
fn handle_run_in_terminal_request(
&self,
title: Option<String>,
cwd: PathBuf,
cwd: Option<Arc<Path>>,
command: Option<String>,
args: Vec<String>,
envs: HashMap<String, String>,
@@ -358,6 +437,8 @@ impl DebugPanel {
self.active_session = self.sessions.first().cloned();
}
}
cx.notify();
}
fn sessions_drop_down_menu(
@@ -373,19 +454,26 @@ impl DebugPanel {
DropdownMenu::new_with_element(
"debugger-session-list",
label,
ContextMenu::build(window, cx, move |mut this, _, _| {
ContextMenu::build(window, cx, move |mut this, _, cx| {
let context_menu = cx.weak_entity();
for session in sessions.into_iter() {
let weak_session = session.downgrade();
let weak_id = weak_session.entity_id();
let weak_session_id = weak_session.entity_id();
this = this.custom_entry(
{
let weak = weak.clone();
let context_menu = context_menu.clone();
move |_, cx| {
weak_session
.read_with(cx, |session, cx| {
let context_menu = context_menu.clone();
let id: SharedString =
format!("debug-session-{}", session.session_id(cx).0)
.into();
h_flex()
.w_full()
.group(id.clone())
.justify_between()
.child(session.label_element(cx))
.child(
@@ -393,14 +481,25 @@ impl DebugPanel {
"close-debug-session",
IconName::Close,
)
.visible_on_hover(id.clone())
.icon_size(IconSize::Small)
.on_click({
let weak = weak.clone();
move |_, _, cx| {
move |_, window, cx| {
weak.update(cx, |panel, cx| {
panel.close_session(weak_id, cx);
panel
.close_session(weak_session_id, cx);
})
.ok();
context_menu
.update(cx, |this, cx| {
this.cancel(
&Default::default(),
window,
cx,
);
})
.ok();
}
}),
)

View File

@@ -1,10 +1,13 @@
use dap::debugger_settings::DebuggerSettings;
use debugger_panel::{DebugPanel, ToggleFocus};
use editor::Editor;
use feature_flags::{Debugger, FeatureFlagViewExt};
use gpui::{App, actions};
use gpui::{App, EntityInputHandler, actions};
use new_session_modal::NewSessionModal;
use project::debugger::{self, breakpoint_store::SourceBreakpoint};
use session::DebugSession;
use settings::Settings;
use util::maybe;
use workspace::{ShutdownDebugAdapters, Workspace};
pub mod attach_modal;
@@ -110,7 +113,9 @@ pub fn init(cx: &mut App) {
.active_session()
.and_then(|session| session.read(cx).mode().as_running().cloned())
}) {
active_item.update(cx, |item, cx| item.stop_thread(cx))
cx.defer(move |cx| {
active_item.update(cx, |item, cx| item.stop_thread(cx))
})
}
}
})
@@ -151,8 +156,105 @@ pub fn init(cx: &mut App) {
});
}
},
);
)
.register_action(|workspace: &mut Workspace, _: &Start, window, cx| {
tasks_ui::toggle_modal(
workspace,
None,
task::TaskModal::DebugModal,
window,
cx,
)
.detach();
});
})
})
.detach();
cx.observe_new({
move |editor: &mut Editor, _, cx| {
editor
.register_action(cx.listener(
move |editor, _: &editor::actions::DebuggerRunToCursor, _, cx| {
maybe!({
let debug_panel =
editor.workspace()?.read(cx).panel::<DebugPanel>(cx)?;
let cursor_point: language::Point = editor.selections.newest(cx).head();
let active_session = debug_panel.read(cx).active_session()?;
let (buffer, position, _) = editor
.buffer()
.read(cx)
.point_to_buffer_point(cursor_point, cx)?;
let path =
debugger::breakpoint_store::BreakpointStore::abs_path_from_buffer(
&buffer, cx,
)?;
let source_breakpoint = SourceBreakpoint {
row: position.row,
path,
message: None,
condition: None,
hit_condition: None,
state: debugger::breakpoint_store::BreakpointState::Enabled,
};
active_session
.update(cx, |session_item, _| {
session_item.mode().as_running().cloned()
})?
.update(cx, |state, cx| {
if let Some(thread_id) = state.selected_thread_id() {
state.session().update(cx, |session, cx| {
session.run_to_position(
source_breakpoint,
thread_id,
cx,
);
})
}
});
Some(())
});
},
))
.detach();
editor
.register_action(cx.listener(
move |editor, _: &editor::actions::DebuggerEvaluateSelectedText, window, cx| {
maybe!({
let debug_panel =
editor.workspace()?.read(cx).panel::<DebugPanel>(cx)?;
let active_session = debug_panel.read(cx).active_session()?;
let text = editor.text_for_range(
editor.selections.newest(cx).range(),
&mut None,
window,
cx,
)?;
active_session
.update(cx, |session_item, _| {
session_item.mode().as_running().cloned()
})?
.update(cx, |state, cx| {
let stack_id = state.selected_stack_frame_id(cx);
state.session().update(cx, |session, cx| {
session.evaluate(text, None, stack_id, None, cx);
});
});
Some(())
});
},
))
.detach();
}
})
.detach();
}

View File

@@ -102,7 +102,8 @@ impl NewSessionModal {
},
})
}
fn start_new_session(&self, cx: &mut Context<Self>) -> Result<()> {
fn start_new_session(&self, window: &mut Window, cx: &mut Context<Self>) -> Result<()> {
let workspace = self.workspace.clone();
let config = self
.debug_config(cx)
@@ -112,10 +113,41 @@ impl NewSessionModal {
panel.past_debug_definition = Some(config.clone());
});
let task_contexts = workspace
.update(cx, |workspace, cx| {
tasks_ui::task_contexts(workspace, window, cx)
})
.ok();
cx.spawn(async move |this, cx| {
let task_context = if let Some(task) = task_contexts {
task.await
.active_worktree_context
.map_or(task::TaskContext::default(), |context| context.1)
} else {
task::TaskContext::default()
};
let project = workspace.update(cx, |workspace, _| workspace.project().clone())?;
let task =
project.update(cx, |this, cx| this.start_debug_session(config.into(), cx))?;
let task = project.update(cx, |this, cx| {
if let Some(debug_config) =
config
.clone()
.to_zed_format()
.ok()
.and_then(|task_template| {
task_template
.resolve_task("debug_task", &task_context)
.and_then(|resolved_task| {
resolved_task.resolved_debug_adapter_config()
})
})
{
this.start_debug_session(debug_config, cx)
} else {
this.start_debug_session(config.into(), cx)
}
})?;
let spawn_result = task.await;
if spawn_result.is_ok() {
this.update(cx, |_, cx| {
@@ -614,8 +646,8 @@ impl Render for NewSessionModal {
})
.child(
Button::new("debugger-spawn", "Start")
.on_click(cx.listener(|this, _, _, cx| {
this.start_new_session(cx).log_err();
.on_click(cx.listener(|this, _, window, cx| {
this.start_new_session(window, cx).log_err();
}))
.disabled(self.debugger.is_none()),
),

View File

@@ -1,5 +1,7 @@
pub mod running;
use std::sync::OnceLock;
use dap::client::SessionId;
use gpui::{App, Entity, EventEmitter, FocusHandle, Focusable, Subscription, Task, WeakEntity};
use project::Project;
@@ -30,6 +32,7 @@ impl DebugSessionState {
pub struct DebugSession {
remote_id: Option<workspace::ViewId>,
mode: DebugSessionState,
label: OnceLock<String>,
dap_store: WeakEntity<DapStore>,
_debug_panel: WeakEntity<DebugPanel>,
_worktree_store: WeakEntity<WorktreeStore>,
@@ -68,6 +71,7 @@ impl DebugSession {
})],
remote_id: None,
mode: DebugSessionState::Running(mode),
label: OnceLock::new(),
dap_store: project.read(cx).dap_store().downgrade(),
_debug_panel,
_worktree_store: project.read(cx).worktree_store().downgrade(),
@@ -92,36 +96,45 @@ impl DebugSession {
}
pub(crate) fn label(&self, cx: &App) -> String {
if let Some(label) = self.label.get() {
return label.to_owned();
}
let session_id = match &self.mode {
DebugSessionState::Running(running_state) => running_state.read(cx).session_id(),
};
let Ok(Some(session)) = self
.dap_store
.read_with(cx, |store, _| store.session_by_id(session_id))
else {
return "".to_owned();
};
session
.read(cx)
.as_local()
.expect("Remote Debug Sessions are not implemented yet")
.label()
self.label
.get_or_init(|| {
session
.read(cx)
.as_local()
.expect("Remote Debug Sessions are not implemented yet")
.label()
})
.to_owned()
}
pub(crate) fn label_element(&self, cx: &App) -> AnyElement {
let label = self.label(cx);
let (icon, color) = match &self.mode {
let icon = match &self.mode {
DebugSessionState::Running(state) => {
if state.read(cx).session().read(cx).is_terminated() {
(Some(Indicator::dot().color(Color::Error)), Color::Error)
Some(Indicator::dot().color(Color::Error))
} else {
match state.read(cx).thread_status(cx).unwrap_or_default() {
project::debugger::session::ThreadStatus::Stopped => (
Some(Indicator::dot().color(Color::Conflict)),
Color::Conflict,
),
_ => (Some(Indicator::dot().color(Color::Success)), Color::Success),
project::debugger::session::ThreadStatus::Stopped => {
Some(Indicator::dot().color(Color::Conflict))
}
_ => Some(Indicator::dot().color(Color::Success)),
}
}
}
@@ -131,7 +144,7 @@ impl DebugSession {
.gap_2()
.when_some(icon, |this, indicator| this.child(indicator))
.justify_between()
.child(Label::new(label).color(color))
.child(Label::new(label))
.into_any_element()
}
}

View File

@@ -1,3 +1,4 @@
mod breakpoint_list;
mod console;
mod loaded_source_list;
mod module_list;
@@ -7,6 +8,7 @@ pub mod variable_list;
use std::{any::Any, ops::ControlFlow, sync::Arc};
use super::DebugPanelItemEvent;
use breakpoint_list::BreakpointList;
use collections::HashMap;
use console::Console;
use dap::{Capabilities, Thread, client::SessionId, debugger_settings::DebuggerSettings};
@@ -24,8 +26,8 @@ use rpc::proto::ViewId;
use settings::Settings;
use stack_frame_list::StackFrameList;
use ui::{
App, Context, ContextMenu, DropdownMenu, InteractiveElement, IntoElement, ParentElement,
Render, SharedString, Styled, Window, div, h_flex, v_flex,
AnyElement, App, Context, ContextMenu, DropdownMenu, InteractiveElement, IntoElement, Label,
LabelCommon as _, ParentElement, Render, SharedString, Styled, Window, div, h_flex, v_flex,
};
use util::ResultExt;
use variable_list::VariableList;
@@ -84,6 +86,7 @@ struct SubView {
inner: AnyView,
pane_focus_handle: FocusHandle,
tab_name: SharedString,
show_indicator: Box<dyn Fn(&App) -> bool>,
}
impl SubView {
@@ -91,12 +94,14 @@ impl SubView {
pane_focus_handle: FocusHandle,
view: AnyView,
tab_name: SharedString,
show_indicator: Option<Box<dyn Fn(&App) -> bool>>,
cx: &mut App,
) -> Entity<Self> {
cx.new(|_| Self {
tab_name,
inner: view,
pane_focus_handle,
show_indicator: show_indicator.unwrap_or(Box::new(|_| false)),
})
}
}
@@ -108,8 +113,27 @@ impl Focusable for SubView {
impl EventEmitter<()> for SubView {}
impl Item for SubView {
type Event = ();
fn tab_content_text(&self, _window: &Window, _cx: &App) -> Option<SharedString> {
Some(self.tab_name.clone())
fn tab_content(
&self,
params: workspace::item::TabContentParams,
_: &Window,
cx: &App,
) -> AnyElement {
let label = Label::new(self.tab_name.clone())
.color(params.text_color())
.into_any_element();
if !params.selected && self.show_indicator.as_ref()(cx) {
return h_flex()
.justify_between()
.child(ui::Indicator::dot())
.gap_2()
.child(label)
.into_any_element();
}
label
}
}
@@ -313,6 +337,7 @@ impl RunningState {
this.focus_handle(cx),
stack_frame_list.clone().into(),
SharedString::new_static("Frames"),
None,
cx,
)),
true,
@@ -321,6 +346,22 @@ impl RunningState {
window,
cx,
);
let breakpoints = BreakpointList::new(session.clone(), workspace.clone(), &project, cx);
this.add_item(
Box::new(SubView::new(
breakpoints.focus_handle(cx),
breakpoints.into(),
SharedString::new_static("Breakpoints"),
None,
cx,
)),
true,
false,
None,
window,
cx,
);
this.activate_item(0, false, false, window, cx);
});
let center_pane = new_debugger_pane(workspace.clone(), project.clone(), window, cx);
center_pane.update(cx, |this, cx| {
@@ -329,6 +370,7 @@ impl RunningState {
variable_list.focus_handle(cx),
variable_list.clone().into(),
SharedString::new_static("Variables"),
None,
cx,
)),
true,
@@ -342,6 +384,7 @@ impl RunningState {
this.focus_handle(cx),
module_list.clone().into(),
SharedString::new_static("Modules"),
None,
cx,
)),
false,
@@ -354,11 +397,17 @@ impl RunningState {
});
let rightmost_pane = new_debugger_pane(workspace.clone(), project.clone(), window, cx);
rightmost_pane.update(cx, |this, cx| {
let weak_console = console.downgrade();
this.add_item(
Box::new(SubView::new(
this.focus_handle(cx),
console.clone().into(),
SharedString::new_static("Console"),
Some(Box::new(move |cx| {
weak_console
.read_with(cx, |console, cx| console.show_indicator(cx))
.unwrap_or_default()
})),
cx,
)),
true,
@@ -432,6 +481,10 @@ impl RunningState {
self.session_id
}
pub(crate) fn selected_stack_frame_id(&self, cx: &App) -> Option<dap::StackFrameId> {
self.stack_frame_list.read(cx).selected_stack_frame_id()
}
#[cfg(test)]
pub fn stack_frame_list(&self) -> &Entity<StackFrameList> {
&self.stack_frame_list
@@ -492,7 +545,6 @@ impl RunningState {
}
}
#[cfg(test)]
pub(crate) fn selected_thread_id(&self) -> Option<ThreadId> {
self.thread_id
}

View File

@@ -0,0 +1,482 @@
use std::{
path::{Path, PathBuf},
time::Duration,
};
use dap::ExceptionBreakpointsFilter;
use editor::Editor;
use gpui::{
AppContext, Entity, FocusHandle, Focusable, ListState, MouseButton, Stateful, Task, WeakEntity,
list,
};
use language::Point;
use project::{
Project,
debugger::{
breakpoint_store::{BreakpointEditAction, BreakpointStore, SourceBreakpoint},
session::Session,
},
worktree_store::WorktreeStore,
};
use ui::{
App, Clickable, Color, Context, Div, Icon, IconButton, IconName, Indicator, InteractiveElement,
IntoElement, Label, LabelCommon, LabelSize, ListItem, ParentElement, Render, RenderOnce,
Scrollbar, ScrollbarState, SharedString, StatefulInteractiveElement, Styled, Window, div,
h_flex, px, v_flex,
};
use util::{ResultExt, maybe};
use workspace::Workspace;
pub(super) struct BreakpointList {
workspace: WeakEntity<Workspace>,
breakpoint_store: Entity<BreakpointStore>,
worktree_store: Entity<WorktreeStore>,
list_state: ListState,
scrollbar_state: ScrollbarState,
breakpoints: Vec<BreakpointEntry>,
session: Entity<Session>,
hide_scrollbar_task: Option<Task<()>>,
show_scrollbar: bool,
focus_handle: FocusHandle,
}
impl Focusable for BreakpointList {
fn focus_handle(&self, _: &App) -> gpui::FocusHandle {
self.focus_handle.clone()
}
}
impl BreakpointList {
pub(super) fn new(
session: Entity<Session>,
workspace: WeakEntity<Workspace>,
project: &Entity<Project>,
cx: &mut App,
) -> Entity<Self> {
let project = project.read(cx);
let breakpoint_store = project.breakpoint_store();
let worktree_store = project.worktree_store();
cx.new(|cx| {
let weak: gpui::WeakEntity<Self> = cx.weak_entity();
let list_state = ListState::new(
0,
gpui::ListAlignment::Top,
px(1000.),
move |ix, window, cx| {
let Ok(Some(breakpoint)) =
weak.update(cx, |this, _| this.breakpoints.get(ix).cloned())
else {
return div().into_any_element();
};
breakpoint.render(window, cx).into_any_element()
},
);
Self {
breakpoint_store,
worktree_store,
scrollbar_state: ScrollbarState::new(list_state.clone()),
list_state,
breakpoints: Default::default(),
hide_scrollbar_task: None,
show_scrollbar: false,
workspace,
session,
focus_handle: cx.focus_handle(),
}
})
}
fn hide_scrollbar(&mut self, window: &mut Window, cx: &mut Context<Self>) {
const SCROLLBAR_SHOW_INTERVAL: Duration = Duration::from_secs(1);
self.hide_scrollbar_task = Some(cx.spawn_in(window, async move |panel, cx| {
cx.background_executor()
.timer(SCROLLBAR_SHOW_INTERVAL)
.await;
panel
.update(cx, |panel, cx| {
panel.show_scrollbar = false;
cx.notify();
})
.log_err();
}))
}
fn render_vertical_scrollbar(&self, cx: &mut Context<Self>) -> Option<Stateful<Div>> {
if !(self.show_scrollbar || self.scrollbar_state.is_dragging()) {
return None;
}
Some(
div()
.occlude()
.id("breakpoint-list-vertical-scrollbar")
.on_mouse_move(cx.listener(|_, _, _, cx| {
cx.notify();
cx.stop_propagation()
}))
.on_hover(|_, _, cx| {
cx.stop_propagation();
})
.on_any_mouse_down(|_, _, cx| {
cx.stop_propagation();
})
.on_mouse_up(
MouseButton::Left,
cx.listener(|_, _, _, cx| {
cx.stop_propagation();
}),
)
.on_scroll_wheel(cx.listener(|_, _, _, cx| {
cx.notify();
}))
.h_full()
.absolute()
.right_1()
.top_1()
.bottom_0()
.w(px(12.))
.cursor_default()
.children(Scrollbar::vertical(self.scrollbar_state.clone())),
)
}
}
impl Render for BreakpointList {
fn render(
&mut self,
_window: &mut ui::Window,
cx: &mut ui::Context<Self>,
) -> impl ui::IntoElement {
let old_len = self.breakpoints.len();
let breakpoints = self.breakpoint_store.read(cx).all_breakpoints(cx);
self.breakpoints.clear();
let weak = cx.weak_entity();
let breakpoints = breakpoints.into_iter().flat_map(|(path, mut breakpoints)| {
let relative_worktree_path = self
.worktree_store
.read(cx)
.find_worktree(&path, cx)
.and_then(|(worktree, relative_path)| {
worktree
.read(cx)
.is_visible()
.then(|| Path::new(worktree.read(cx).root_name()).join(relative_path))
});
breakpoints.sort_by_key(|breakpoint| breakpoint.row);
let weak = weak.clone();
breakpoints.into_iter().filter_map(move |breakpoint| {
debug_assert_eq!(&path, &breakpoint.path);
let file_name = breakpoint.path.file_name()?;
let dir = relative_worktree_path
.clone()
.unwrap_or_else(|| PathBuf::from(&*breakpoint.path))
.parent()
.and_then(|parent| {
parent
.to_str()
.map(ToOwned::to_owned)
.map(SharedString::from)
});
let name = file_name
.to_str()
.map(ToOwned::to_owned)
.map(SharedString::from)?;
let weak = weak.clone();
let line = format!("Line {}", breakpoint.row + 1).into();
Some(BreakpointEntry {
kind: BreakpointEntryKind::LineBreakpoint(LineBreakpoint {
name,
dir,
line,
breakpoint,
}),
weak,
})
})
});
let exception_breakpoints =
self.session
.read(cx)
.exception_breakpoints()
.map(|(data, is_enabled)| BreakpointEntry {
kind: BreakpointEntryKind::ExceptionBreakpoint(ExceptionBreakpoint {
id: data.filter.clone(),
data: data.clone(),
is_enabled: *is_enabled,
}),
weak: weak.clone(),
});
self.breakpoints
.extend(breakpoints.chain(exception_breakpoints));
if self.breakpoints.len() != old_len {
self.list_state.reset(self.breakpoints.len());
}
v_flex()
.id("breakpoint-list")
.on_hover(cx.listener(|this, hovered, window, cx| {
if *hovered {
this.show_scrollbar = true;
this.hide_scrollbar_task.take();
cx.notify();
} else if !this.focus_handle.contains_focused(window, cx) {
this.hide_scrollbar(window, cx);
}
}))
.size_full()
.m_0p5()
.child(list(self.list_state.clone()).flex_grow())
.children(self.render_vertical_scrollbar(cx))
}
}
#[derive(Clone, Debug)]
struct LineBreakpoint {
name: SharedString,
dir: Option<SharedString>,
line: SharedString,
breakpoint: SourceBreakpoint,
}
impl LineBreakpoint {
fn render(self, weak: WeakEntity<BreakpointList>) -> ListItem {
let LineBreakpoint {
name,
dir,
line,
breakpoint,
} = self;
let icon_name = if breakpoint.state.is_enabled() {
IconName::DebugBreakpoint
} else {
IconName::DebugDisabledBreakpoint
};
let path = breakpoint.path;
let row = breakpoint.row;
let indicator = div()
.id(SharedString::from(format!(
"breakpoint-ui-toggle-{:?}/{}:{}",
dir, name, line
)))
.cursor_pointer()
.on_click({
let weak = weak.clone();
let path = path.clone();
move |_, _, cx| {
weak.update(cx, |this, cx| {
this.breakpoint_store.update(cx, |this, cx| {
if let Some((buffer, breakpoint)) =
this.breakpoint_at_row(&path, row, cx)
{
this.toggle_breakpoint(
buffer,
breakpoint,
BreakpointEditAction::InvertState,
cx,
);
} else {
log::error!("Couldn't find breakpoint at row event though it exists: row {row}")
}
})
})
.ok();
}
})
.child(Indicator::icon(Icon::new(icon_name)).color(Color::Debugger))
.on_mouse_down(MouseButton::Left, move |_, _, _| {});
ListItem::new(SharedString::from(format!(
"breakpoint-ui-item-{:?}/{}:{}",
dir, name, line
)))
.start_slot(indicator)
.rounded()
.end_hover_slot(
IconButton::new(
SharedString::from(format!(
"breakpoint-ui-on-click-go-to-line-{:?}/{}:{}",
dir, name, line
)),
IconName::Close,
)
.on_click({
let weak = weak.clone();
let path = path.clone();
move |_, _, cx| {
weak.update(cx, |this, cx| {
this.breakpoint_store.update(cx, |this, cx| {
if let Some((buffer, breakpoint)) =
this.breakpoint_at_row(&path, row, cx)
{
this.toggle_breakpoint(
buffer,
breakpoint,
BreakpointEditAction::Toggle,
cx,
);
} else {
log::error!("Couldn't find breakpoint at row event though it exists: row {row}")
}
})
})
.ok();
}
})
.icon_size(ui::IconSize::XSmall),
)
.child(
v_flex()
.id(SharedString::from(format!(
"breakpoint-ui-on-click-go-to-line-{:?}/{}:{}",
dir, name, line
)))
.on_click(move |_, window, cx| {
let path = path.clone();
let weak = weak.clone();
let row = breakpoint.row;
maybe!({
let task = weak
.update(cx, |this, cx| {
this.worktree_store.update(cx, |this, cx| {
this.find_or_create_worktree(path, false, cx)
})
})
.ok()?;
window
.spawn(cx, async move |cx| {
let (worktree, relative_path) = task.await?;
let worktree_id = worktree.update(cx, |this, _| this.id())?;
let item = weak
.update_in(cx, |this, window, cx| {
this.workspace.update(cx, |this, cx| {
this.open_path(
(worktree_id, relative_path),
None,
true,
window,
cx,
)
})
})??
.await?;
if let Some(editor) = item.downcast::<Editor>() {
editor
.update_in(cx, |this, window, cx| {
this.go_to_singleton_buffer_point(
Point { row, column: 0 },
window,
cx,
);
})
.ok();
}
Result::<_, anyhow::Error>::Ok(())
})
.detach();
Some(())
});
})
.cursor_pointer()
.py_1()
.items_center()
.child(
h_flex()
.gap_1()
.child(
Label::new(name)
.size(LabelSize::Small)
.line_height_style(ui::LineHeightStyle::UiLabel),
)
.children(dir.map(|dir| {
Label::new(dir)
.color(Color::Muted)
.size(LabelSize::Small)
.line_height_style(ui::LineHeightStyle::UiLabel)
})),
)
.child(
Label::new(line)
.size(LabelSize::XSmall)
.color(Color::Muted)
.line_height_style(ui::LineHeightStyle::UiLabel),
),
)
}
}
#[derive(Clone, Debug)]
struct ExceptionBreakpoint {
id: String,
data: ExceptionBreakpointsFilter,
is_enabled: bool,
}
impl ExceptionBreakpoint {
fn render(self, list: WeakEntity<BreakpointList>) -> ListItem {
let color = if self.is_enabled {
Color::Debugger
} else {
Color::Muted
};
let id = SharedString::from(&self.id);
ListItem::new(SharedString::from(format!(
"exception-breakpoint-ui-item-{}",
self.id
)))
.rounded()
.start_slot(
div()
.id(SharedString::from(format!(
"exception-breakpoint-ui-item-{}-click-handler",
self.id
)))
.on_click(move |_, _, cx| {
list.update(cx, |this, cx| {
this.session.update(cx, |this, cx| {
this.toggle_exception_breakpoint(&id, cx);
});
cx.notify();
})
.ok();
})
.cursor_pointer()
.child(Indicator::icon(Icon::new(IconName::Flame)).color(color)),
)
.child(
div()
.py_1()
.gap_1()
.child(
Label::new(self.data.label)
.size(LabelSize::Small)
.line_height_style(ui::LineHeightStyle::UiLabel),
)
.children(self.data.description.map(|description| {
Label::new(description)
.size(LabelSize::XSmall)
.line_height_style(ui::LineHeightStyle::UiLabel)
.color(Color::Muted)
})),
)
}
}
#[derive(Clone, Debug)]
enum BreakpointEntryKind {
LineBreakpoint(LineBreakpoint),
ExceptionBreakpoint(ExceptionBreakpoint),
}
#[derive(Clone, Debug)]
struct BreakpointEntry {
kind: BreakpointEntryKind,
weak: WeakEntity<BreakpointList>,
}
impl RenderOnce for BreakpointEntry {
fn render(self, _: &mut ui::Window, _: &mut App) -> impl ui::IntoElement {
match self.kind {
BreakpointEntryKind::LineBreakpoint(line_breakpoint) => {
line_breakpoint.render(self.weak)
}
BreakpointEntryKind::ExceptionBreakpoint(exception_breakpoint) => {
exception_breakpoint.render(self.weak)
}
}
}
}

View File

@@ -17,7 +17,7 @@ use project::{
use settings::Settings;
use std::{cell::RefCell, rc::Rc, usize};
use theme::ThemeSettings;
use ui::prelude::*;
use ui::{Divider, prelude::*};
pub struct Console {
console: Entity<Editor>,
@@ -105,6 +105,10 @@ impl Console {
}
}
pub(crate) fn show_indicator(&self, cx: &App) -> bool {
self.session.read(cx).has_new_output(self.last_token)
}
pub fn add_messages<'a>(
&mut self,
events: impl Iterator<Item = &'a OutputEvent>,
@@ -141,7 +145,7 @@ impl Console {
state.evaluate(
expression,
Some(dap::EvaluateArgumentsContext::Variables),
self.stack_frame_list.read(cx).current_stack_frame_id(),
self.stack_frame_list.read(cx).selected_stack_frame_id(),
None,
cx,
);
@@ -229,7 +233,8 @@ impl Render for Console {
.size_full()
.child(self.render_console(cx))
.when(self.is_local(cx), |this| {
this.child(self.render_query_bar(cx))
this.child(Divider::horizontal())
.child(self.render_query_bar(cx))
.pt(DynamicSpacing::Base04.rems(cx))
})
.border_2()
@@ -384,7 +389,7 @@ impl ConsoleQueryBarCompletionProvider {
) -> Task<Result<Option<Vec<Completion>>>> {
let completion_task = console.update(cx, |console, cx| {
console.session.update(cx, |state, cx| {
let frame_id = console.stack_frame_list.read(cx).current_stack_frame_id();
let frame_id = console.stack_frame_list.read(cx).selected_stack_frame_id();
state.completions(
CompletionsQuery::new(buffer.read(cx), buffer_position, frame_id),

View File

@@ -31,7 +31,7 @@ pub struct StackFrameList {
invalidate: bool,
entries: Vec<StackFrameEntry>,
workspace: WeakEntity<Workspace>,
current_stack_frame_id: Option<StackFrameId>,
selected_stack_frame_id: Option<StackFrameId>,
scrollbar_state: ScrollbarState,
}
@@ -85,7 +85,7 @@ impl StackFrameList {
_subscription,
invalidate: true,
entries: Default::default(),
current_stack_frame_id: None,
selected_stack_frame_id: None,
}
}
@@ -132,8 +132,8 @@ impl StackFrameList {
.unwrap_or(0)
}
pub fn current_stack_frame_id(&self) -> Option<StackFrameId> {
self.current_stack_frame_id
pub fn selected_stack_frame_id(&self) -> Option<StackFrameId> {
self.selected_stack_frame_id
}
pub(super) fn refresh(&mut self, cx: &mut Context<Self>) {
@@ -188,20 +188,20 @@ impl StackFrameList {
}
pub fn go_to_selected_stack_frame(&mut self, window: &Window, cx: &mut Context<Self>) {
if let Some(current_stack_frame_id) = self.current_stack_frame_id {
if let Some(selected_stack_frame_id) = self.selected_stack_frame_id {
let frame = self
.entries
.iter()
.find_map(|entry| match entry {
StackFrameEntry::Normal(dap) => {
if dap.id == current_stack_frame_id {
if dap.id == selected_stack_frame_id {
Some(dap)
} else {
None
}
}
StackFrameEntry::Collapsed(daps) => {
daps.iter().find(|dap| dap.id == current_stack_frame_id)
daps.iter().find(|dap| dap.id == selected_stack_frame_id)
}
})
.cloned();
@@ -220,7 +220,7 @@ impl StackFrameList {
window: &Window,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
self.current_stack_frame_id = Some(stack_frame.id);
self.selected_stack_frame_id = Some(stack_frame.id);
cx.emit(StackFrameListEvent::SelectedStackFrameChanged(
stack_frame.id,
@@ -319,7 +319,7 @@ impl StackFrameList {
cx: &mut Context<Self>,
) -> AnyElement {
let source = stack_frame.source.clone();
let is_selected_frame = Some(stack_frame.id) == self.current_stack_frame_id;
let is_selected_frame = Some(stack_frame.id) == self.selected_stack_frame_id;
let formatted_path = format!(
"{}:{}",

View File

@@ -1172,7 +1172,7 @@ async fn test_send_breakpoints_when_editor_has_been_saved(
let (editor, cx) = cx.add_window_view(|window, cx| {
Editor::new(
EditorMode::Full,
EditorMode::full(),
MultiBuffer::build_from_buffer(buffer, cx),
Some(project.clone()),
window,
@@ -1347,7 +1347,7 @@ async fn test_unsetting_breakpoints_on_clear_breakpoint_action(
let (first_editor, cx) = cx.add_window_view(|window, cx| {
Editor::new(
EditorMode::Full,
EditorMode::full(),
MultiBuffer::build_from_buffer(first, cx),
Some(project.clone()),
window,
@@ -1357,7 +1357,7 @@ async fn test_unsetting_breakpoints_on_clear_breakpoint_action(
let (second_editor, cx) = cx.add_window_view(|window, cx| {
Editor::new(
EditorMode::Full,
EditorMode::full(),
MultiBuffer::build_from_buffer(second, cx),
Some(project.clone()),
window,

View File

@@ -191,7 +191,7 @@ async fn test_fetch_initial_stack_frames_and_go_to_stack_frame(
.update(cx, |state, _| state.stack_frame_list().clone());
stack_frame_list.update(cx, |stack_frame_list, cx| {
assert_eq!(Some(1), stack_frame_list.current_stack_frame_id());
assert_eq!(Some(1), stack_frame_list.selected_stack_frame_id());
assert_eq!(stack_frames, stack_frame_list.dap_stack_frames(cx));
});
});
@@ -425,7 +425,7 @@ async fn test_select_stack_frame(executor: BackgroundExecutor, cx: &mut TestAppC
.unwrap();
stack_frame_list.update(cx, |stack_frame_list, cx| {
assert_eq!(Some(1), stack_frame_list.current_stack_frame_id());
assert_eq!(Some(1), stack_frame_list.selected_stack_frame_id());
assert_eq!(stack_frames, stack_frame_list.dap_stack_frames(cx));
});
@@ -440,7 +440,7 @@ async fn test_select_stack_frame(executor: BackgroundExecutor, cx: &mut TestAppC
cx.run_until_parked();
stack_frame_list.update(cx, |stack_frame_list, cx| {
assert_eq!(Some(2), stack_frame_list.current_stack_frame_id());
assert_eq!(Some(2), stack_frame_list.selected_stack_frame_id());
assert_eq!(stack_frames, stack_frame_list.dap_stack_frames(cx));
});

View File

@@ -212,7 +212,7 @@ async fn test_basic_fetch_initial_scope_and_variables(
running_state.update(cx, |running_state, cx| {
let (stack_frame_list, stack_frame_id) =
running_state.stack_frame_list().update(cx, |list, _| {
(list.flatten_entries(), list.current_stack_frame_id())
(list.flatten_entries(), list.selected_stack_frame_id())
});
assert_eq!(stack_frames, stack_frame_list);
@@ -483,7 +483,7 @@ async fn test_fetch_variables_for_multiple_scopes(
running_state.update(cx, |running_state, cx| {
let (stack_frame_list, stack_frame_id) =
running_state.stack_frame_list().update(cx, |list, _| {
(list.flatten_entries(), list.current_stack_frame_id())
(list.flatten_entries(), list.selected_stack_frame_id())
});
assert_eq!(Some(1), stack_frame_id);
@@ -1565,7 +1565,7 @@ async fn test_variable_list_only_sends_requests_when_rendering(
running_state.update(cx, |running_state, cx| {
let (stack_frame_list, stack_frame_id) =
running_state.stack_frame_list().update(cx, |list, _| {
(list.flatten_entries(), list.current_stack_frame_id())
(list.flatten_entries(), list.selected_stack_frame_id())
});
assert_eq!(Some(1), stack_frame_id);
@@ -1877,7 +1877,7 @@ async fn test_it_fetches_scopes_variables_when_you_select_a_stack_frame(
running_state.update(cx, |running_state, cx| {
let (stack_frame_list, stack_frame_id) =
running_state.stack_frame_list().update(cx, |list, _| {
(list.flatten_entries(), list.current_stack_frame_id())
(list.flatten_entries(), list.selected_stack_frame_id())
});
let variable_list = running_state.variable_list().read(cx);
@@ -1888,7 +1888,7 @@ async fn test_it_fetches_scopes_variables_when_you_select_a_stack_frame(
running_state
.stack_frame_list()
.read(cx)
.current_stack_frame_id(),
.selected_stack_frame_id(),
Some(1)
);
@@ -1934,7 +1934,7 @@ async fn test_it_fetches_scopes_variables_when_you_select_a_stack_frame(
running_state.update(cx, |running_state, cx| {
let (stack_frame_list, stack_frame_id) =
running_state.stack_frame_list().update(cx, |list, _| {
(list.flatten_entries(), list.current_stack_frame_id())
(list.flatten_entries(), list.selected_stack_frame_id())
});
let variable_list = running_state.variable_list().read(cx);

View File

@@ -35,6 +35,7 @@ assets.workspace = true
client.workspace = true
clock.workspace = true
collections.workspace = true
command_palette_hooks.workspace = true
convert_case.workspace = true
db.workspace = true
buffer_diff.workspace = true

View File

@@ -110,20 +110,6 @@ pub struct ToggleComments {
pub ignore_indent: bool,
}
#[derive(PartialEq, Clone, Deserialize, Default, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct FoldAt {
#[serde(skip)]
pub buffer_row: MultiBufferRow,
}
#[derive(PartialEq, Clone, Deserialize, Default, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct UnfoldAt {
#[serde(skip)]
pub buffer_row: MultiBufferRow,
}
#[derive(PartialEq, Clone, Deserialize, Default, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct MoveUpByLines {
@@ -226,7 +212,6 @@ impl_actions!(
ExpandExcerpts,
ExpandExcerptsDown,
ExpandExcerptsUp,
FoldAt,
HandleInput,
MoveDownByLines,
MovePageDown,
@@ -244,7 +229,6 @@ impl_actions!(
ShowCompletions,
ToggleCodeActions,
ToggleComments,
UnfoldAt,
FoldAtLevel,
]
);
@@ -299,6 +283,8 @@ actions!(
DuplicateSelection,
ExpandMacroRecursively,
FindAllReferences,
FindNextMatch,
FindPreviousMatch,
Fold,
FoldAll,
FoldFunctionBodies,
@@ -420,9 +406,12 @@ actions!(
Tab,
Backtab,
ToggleBreakpoint,
ToggleCase,
DisableBreakpoint,
EnableBreakpoint,
EditLogBreakpoint,
DebuggerRunToCursor,
DebuggerEvaluateSelectedText,
ToggleAutoSignatureHelp,
ToggleGitBlameInline,
OpenGitBlameCommit,

View File

@@ -396,9 +396,31 @@ pub enum SelectMode {
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
pub enum EditorMode {
SingleLine { auto_width: bool },
AutoHeight { max_lines: usize },
Full,
SingleLine {
auto_width: bool,
},
AutoHeight {
max_lines: usize,
},
Full {
/// When set to `true`, the editor will scale its UI elements with the buffer font size.
scale_ui_elements_with_buffer_font_size: bool,
/// When set to `true`, the editor will render a background for the active line.
show_active_line_background: bool,
},
}
impl EditorMode {
pub fn full() -> Self {
Self::Full {
scale_ui_elements_with_buffer_font_size: true,
show_active_line_background: true,
}
}
pub fn is_full(&self) -> bool {
matches!(self, Self::Full { .. })
}
}
#[derive(Copy, Clone, Debug)]
@@ -1198,7 +1220,7 @@ impl Editor {
pub fn multi_line(window: &mut Window, cx: &mut Context<Self>) -> Self {
let buffer = cx.new(|cx| Buffer::local("", cx));
let buffer = cx.new(|cx| MultiBuffer::singleton(buffer, cx));
Self::new(EditorMode::Full, buffer, None, window, cx)
Self::new(EditorMode::full(), buffer, None, window, cx)
}
pub fn auto_width(window: &mut Window, cx: &mut Context<Self>) -> Self {
@@ -1232,7 +1254,7 @@ impl Editor {
cx: &mut Context<Self>,
) -> Self {
let buffer = cx.new(|cx| MultiBuffer::singleton(buffer, cx));
Self::new(EditorMode::Full, buffer, project, window, cx)
Self::new(EditorMode::full(), buffer, project, window, cx)
}
pub fn for_multibuffer(
@@ -1241,7 +1263,7 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Self>,
) -> Self {
Self::new(EditorMode::Full, buffer, project, window, cx)
Self::new(EditorMode::full(), buffer, project, window, cx)
}
pub fn clone(&self, window: &mut Window, cx: &mut Context<Self>) -> Self {
@@ -1262,6 +1284,7 @@ impl Editor {
clone.selections.clone_state(&self.selections);
clone.scroll_manager.clone_state(&self.scroll_manager);
clone.searchable = self.searchable;
clone.read_only = self.read_only;
clone
}
@@ -1328,7 +1351,7 @@ impl Editor {
.then(|| language_settings::SoftWrap::None);
let mut project_subscriptions = Vec::new();
if mode == EditorMode::Full {
if mode.is_full() {
if let Some(project) = project.as_ref() {
project_subscriptions.push(cx.subscribe_in(
project,
@@ -1416,7 +1439,7 @@ impl Editor {
};
let breakpoint_store = match (mode, project.as_ref()) {
(EditorMode::Full, Some(project)) => Some(project.read(cx).breakpoint_store()),
(EditorMode::Full { .. }, Some(project)) => Some(project.read(cx).breakpoint_store()),
_ => None,
};
@@ -1467,7 +1490,7 @@ impl Editor {
show_scrollbars: true,
mode,
show_breadcrumbs: EditorSettings::get_global(cx).toolbar.breadcrumbs,
show_gutter: mode == EditorMode::Full,
show_gutter: mode.is_full(),
show_line_numbers: None,
use_relative_line_numbers: None,
show_git_diff_gutter: None,
@@ -1509,7 +1532,7 @@ impl Editor {
collapse_matches: false,
workspace: None,
input_enabled: true,
use_modal_editing: mode == EditorMode::Full,
use_modal_editing: mode.is_full(),
read_only: false,
use_autoclose: true,
use_auto_surround: true,
@@ -1526,7 +1549,7 @@ impl Editor {
edit_prediction_preview: EditPredictionPreview::Inactive {
released_too_fast: false,
},
inline_diagnostics_enabled: mode == EditorMode::Full,
inline_diagnostics_enabled: mode.is_full(),
inlay_hint_cache: InlayHintCache::new(inlay_hint_settings),
gutter_hovered: false,
@@ -1634,7 +1657,7 @@ impl Editor {
this.scroll_manager.show_scrollbars(window, cx);
jsx_tag_auto_close::refresh_enabled_in_any_buffer(&mut this, &buffer, cx);
if mode == EditorMode::Full {
if mode.is_full() {
let should_auto_hide_scrollbars = cx.should_auto_hide_scrollbars();
cx.set_global(ScrollbarAutoHide(should_auto_hide_scrollbars));
@@ -1696,7 +1719,7 @@ impl Editor {
let mode = match self.mode {
EditorMode::SingleLine { .. } => "single_line",
EditorMode::AutoHeight { .. } => "auto_height",
EditorMode::Full => "full",
EditorMode::Full { .. } => "full",
};
if EditorSettings::jupyter_enabled(cx) {
@@ -1983,6 +2006,10 @@ impl Editor {
self.mode
}
pub fn set_mode(&mut self, mode: EditorMode) {
self.mode = mode;
}
pub fn collaboration_hub(&self) -> Option<&dyn CollaborationHub> {
self.collaboration_hub.as_deref()
}
@@ -3005,7 +3032,7 @@ impl Editor {
return;
}
if self.mode == EditorMode::Full
if self.mode.is_full()
&& self.change_selections(Some(Autoscroll::fit()), window, cx, |s| s.try_cancel())
{
return;
@@ -3048,7 +3075,7 @@ impl Editor {
return true;
}
if self.mode == EditorMode::Full && self.active_diagnostics.is_some() {
if self.mode.is_full() && self.active_diagnostics.is_some() {
self.dismiss_diagnostics(cx);
return true;
}
@@ -3129,6 +3156,7 @@ impl Editor {
let mut new_selections = Vec::with_capacity(selections.len());
let mut new_autoclose_regions = Vec::new();
let snapshot = self.buffer.read(cx).read(cx);
let mut clear_linked_edit_ranges = false;
for (selection, autoclose_region) in
self.selections_with_autoclose_regions(selections, &snapshot)
@@ -3356,6 +3384,8 @@ impl Editor {
.extend(edits.into_iter().map(|range| (range, text.clone())));
}
}
} else {
clear_linked_edit_ranges = true;
}
}
@@ -3366,6 +3396,9 @@ impl Editor {
drop(snapshot);
self.transact(window, cx, |this, window, cx| {
if clear_linked_edit_ranges {
this.linked_edit_ranges.clear();
}
let initial_buffer_versions =
jsx_tag_auto_close::construct_initial_buffer_versions_map(this, &edits, cx);
@@ -4030,7 +4063,7 @@ impl Editor {
}
fn refresh_inlay_hints(&mut self, reason: InlayHintRefreshReason, cx: &mut Context<Self>) {
if self.semantics_provider.is_none() || self.mode != EditorMode::Full {
if self.semantics_provider.is_none() || !self.mode.is_full() {
return;
}
@@ -4271,6 +4304,7 @@ impl Editor {
buffer
.update(cx, |buffer, _| {
buffer.push_transaction(transaction, Instant::now());
buffer.finalize_last_transaction();
})
.ok();
}
@@ -5164,13 +5198,16 @@ impl Editor {
}
fn refresh_code_actions(&mut self, window: &mut Window, cx: &mut Context<Self>) -> Option<()> {
let buffer = self.buffer.read(cx);
let newest_selection = self.selections.newest_anchor().clone();
let newest_selection_adjusted = self.selections.newest_adjusted(cx).clone();
let buffer = self.buffer.read(cx);
if newest_selection.head().diff_base_anchor.is_some() {
return None;
}
let (start_buffer, start) = buffer.text_anchor_for_position(newest_selection.start, cx)?;
let (end_buffer, end) = buffer.text_anchor_for_position(newest_selection.end, cx)?;
let (start_buffer, start) =
buffer.text_anchor_for_position(newest_selection_adjusted.start, cx)?;
let (end_buffer, end) =
buffer.text_anchor_for_position(newest_selection_adjusted.end, cx)?;
if start_buffer != end_buffer {
return None;
}
@@ -5534,7 +5571,7 @@ impl Editor {
buffer_position: language::Anchor,
cx: &App,
) -> EditPredictionSettings {
if self.mode != EditorMode::Full
if !self.mode.is_full()
|| !self.show_inline_completions_override.unwrap_or(true)
|| self.inline_completions_disabled_in_scope(buffer, buffer_position, cx)
{
@@ -6413,6 +6450,9 @@ impl Editor {
"Set Breakpoint"
};
let run_to_cursor = command_palette_hooks::CommandPaletteFilter::try_global(cx)
.map_or(false, |filter| !filter.is_hidden(&DebuggerRunToCursor));
let toggle_state_msg = breakpoint.as_ref().map_or(None, |bp| match bp.1.state {
BreakpointState::Enabled => Some("Disable"),
BreakpointState::Disabled => Some("Enable"),
@@ -6424,6 +6464,21 @@ impl Editor {
ui::ContextMenu::build(window, cx, |menu, _, _cx| {
menu.on_blur_subscription(Subscription::new(|| {}))
.context(focus_handle)
.when(run_to_cursor, |this| {
let weak_editor = weak_editor.clone();
this.entry("Run to cursor", None, move |window, cx| {
weak_editor
.update(cx, |editor, cx| {
editor.change_selections(None, window, cx, |s| {
s.select_ranges([Point::new(row, 0)..Point::new(row, 0)])
});
})
.ok();
window.dispatch_action(Box::new(DebuggerRunToCursor), cx);
})
.separator()
})
.when_some(toggle_state_msg, |this, msg| {
this.entry(msg, None, {
let weak_editor = weak_editor.clone();
@@ -8207,12 +8262,18 @@ impl Editor {
IndentSize::tab()
} else {
let tab_size = settings.tab_size.get();
let char_column = snapshot
let indent_remainder = snapshot
.text_for_range(Point::new(cursor.row, 0)..cursor)
.flat_map(str::chars)
.count()
+ row_delta as usize;
let chars_to_next_tab_stop = tab_size - (char_column as u32 % tab_size);
.fold(row_delta % tab_size, |counter: u32, c| {
if c == '\t' {
0
} else {
(counter + 1) % tab_size
}
});
let chars_to_next_tab_stop = tab_size - indent_remainder;
IndentSize::spaces(chars_to_next_tab_stop)
};
selection.start = Point::new(cursor.row, cursor.column + row_delta + tab_size.len);
@@ -8793,15 +8854,6 @@ impl Editor {
});
}
fn breakpoint_at_cursor_head(
&self,
window: &mut Window,
cx: &mut Context<Self>,
) -> Option<(Anchor, Breakpoint)> {
let cursor_position: Point = self.selections.newest(cx).head();
self.breakpoint_at_row(cursor_position.row, window, cx)
}
pub(crate) fn breakpoint_at_row(
&self,
row: u32,
@@ -8811,6 +8863,15 @@ impl Editor {
let snapshot = self.snapshot(window, cx);
let breakpoint_position = snapshot.buffer_snapshot.anchor_before(Point::new(row, 0));
self.breakpoint_at_anchor(breakpoint_position, &snapshot, cx)
}
pub(crate) fn breakpoint_at_anchor(
&self,
breakpoint_position: Anchor,
snapshot: &EditorSnapshot,
cx: &mut Context<Self>,
) -> Option<(Anchor, Breakpoint)> {
let project = self.project.clone()?;
let buffer_id = breakpoint_position.buffer_id.or_else(|| {
@@ -8868,29 +8929,51 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Self>,
) {
let (anchor, bp) = self
.breakpoint_at_cursor_head(window, cx)
.unwrap_or_else(|| {
let cursor_position: Point = self.selections.newest(cx).head();
for (anchor, breakpoint) in self.breakpoints_at_cursors(window, cx) {
let breakpoint = breakpoint.unwrap_or_else(|| Breakpoint {
message: None,
state: BreakpointState::Enabled,
condition: None,
hit_condition: None,
});
let breakpoint_position = self
.snapshot(window, cx)
self.add_edit_breakpoint_block(
anchor,
&breakpoint,
BreakpointPromptEditAction::Log,
window,
cx,
);
}
}
fn breakpoints_at_cursors(
&self,
window: &mut Window,
cx: &mut Context<Self>,
) -> Vec<(Anchor, Option<Breakpoint>)> {
let snapshot = self.snapshot(window, cx);
let cursors = self
.selections
.disjoint_anchors()
.into_iter()
.map(|selection| {
let cursor_position: Point = selection.head().to_point(&snapshot.buffer_snapshot);
let breakpoint_position = snapshot
.display_snapshot
.buffer_snapshot
.anchor_after(Point::new(cursor_position.row, 0));
let breakpoint = self
.breakpoint_at_anchor(breakpoint_position, &snapshot, cx)
.map(|(anchor, breakpoint)| (anchor, Some(breakpoint)));
(
breakpoint_position,
Breakpoint {
message: None,
state: BreakpointState::Enabled,
condition: None,
hit_condition: None,
},
)
});
breakpoint.unwrap_or_else(|| (breakpoint_position, None))
})
// There might be multiple cursors on the same line; they all resolve to the same breakpoint anchor, so collecting into a map keyed by anchor dedups the list.
.collect::<HashMap<Anchor, _>>();
self.add_edit_breakpoint_block(anchor, &bp, BreakpointPromptEditAction::Log, window, cx);
cursors.into_iter().collect()
}
pub fn enable_breakpoint(
@@ -8899,15 +8982,16 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Self>,
) {
if let Some((anchor, breakpoint)) = self.breakpoint_at_cursor_head(window, cx) {
if breakpoint.is_disabled() {
self.edit_breakpoint_at_anchor(
anchor,
breakpoint,
BreakpointEditAction::InvertState,
cx,
);
}
for (anchor, breakpoint) in self.breakpoints_at_cursors(window, cx) {
let Some(breakpoint) = breakpoint.filter(|breakpoint| breakpoint.is_disabled()) else {
continue;
};
self.edit_breakpoint_at_anchor(
anchor,
breakpoint,
BreakpointEditAction::InvertState,
cx,
);
}
}
@@ -8917,15 +9001,16 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Self>,
) {
if let Some((anchor, breakpoint)) = self.breakpoint_at_cursor_head(window, cx) {
if breakpoint.is_enabled() {
self.edit_breakpoint_at_anchor(
anchor,
breakpoint,
BreakpointEditAction::InvertState,
cx,
);
}
for (anchor, breakpoint) in self.breakpoints_at_cursors(window, cx) {
let Some(breakpoint) = breakpoint.filter(|breakpoint| breakpoint.is_enabled()) else {
continue;
};
self.edit_breakpoint_at_anchor(
anchor,
breakpoint,
BreakpointEditAction::InvertState,
cx,
);
}
}
@@ -8935,25 +9020,22 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Self>,
) {
let edit_action = BreakpointEditAction::Toggle;
if let Some((anchor, breakpoint)) = self.breakpoint_at_cursor_head(window, cx) {
self.edit_breakpoint_at_anchor(anchor, breakpoint, edit_action, cx);
} else {
let cursor_position: Point = self.selections.newest(cx).head();
let breakpoint_position = self
.snapshot(window, cx)
.display_snapshot
.buffer_snapshot
.anchor_after(Point::new(cursor_position.row, 0));
self.edit_breakpoint_at_anchor(
breakpoint_position,
Breakpoint::new_standard(),
edit_action,
cx,
);
for (anchor, breakpoint) in self.breakpoints_at_cursors(window, cx) {
if let Some(breakpoint) = breakpoint {
self.edit_breakpoint_at_anchor(
anchor,
breakpoint,
BreakpointEditAction::Toggle,
cx,
);
} else {
self.edit_breakpoint_at_anchor(
anchor,
Breakpoint::new_standard(),
BreakpointEditAction::Toggle,
cx,
);
}
}
}
@@ -9141,6 +9223,17 @@ impl Editor {
});
}
pub fn toggle_case(&mut self, _: &ToggleCase, window: &mut Window, cx: &mut Context<Self>) {
self.manipulate_text(window, cx, |text| {
let has_upper_case_characters = text.chars().any(|c| c.is_uppercase());
if has_upper_case_characters {
text.to_lowercase()
} else {
text.to_uppercase()
}
})
}
pub fn convert_to_upper_case(
&mut self,
_: &ConvertToUpperCase,
@@ -11554,7 +11647,7 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Editor>,
) {
this.unfold_ranges(&[range.clone()], false, true, cx);
this.unfold_ranges(&[range.clone()], false, auto_scroll.is_some(), cx);
this.change_selections(auto_scroll, window, cx, |s| {
if replace_newest {
s.delete(s.newest_anchor().id);
@@ -11729,16 +11822,21 @@ impl Editor {
return Ok(());
}
let mut new_selections = self.selections.all::<usize>(cx);
let mut new_selections = Vec::new();
let reversed = self.selections.oldest::<usize>(cx).reversed;
let buffer = &display_map.buffer_snapshot;
let query_matches = select_next_state
.query
.stream_find_iter(buffer.bytes_in_range(0..buffer.len()));
for query_match in query_matches {
let query_match = query_match.unwrap(); // can only fail due to I/O
let offset_range = query_match.start()..query_match.end();
for query_match in query_matches.into_iter() {
let query_match = query_match.context("query match for select all action")?; // can only fail due to I/O
let offset_range = if reversed {
query_match.end()..query_match.start()
} else {
query_match.start()..query_match.end()
};
let display_range = offset_range.start.to_display_point(&display_map)
..offset_range.end.to_display_point(&display_map);
@@ -11746,52 +11844,14 @@ impl Editor {
|| (!movement::is_inside_word(&display_map, display_range.start)
&& !movement::is_inside_word(&display_map, display_range.end))
{
self.selections.change_with(cx, |selections| {
new_selections.push(Selection {
id: selections.new_selection_id(),
start: offset_range.start,
end: offset_range.end,
reversed: false,
goal: SelectionGoal::None,
});
});
new_selections.push(offset_range.start..offset_range.end);
}
}
new_selections.sort_by_key(|selection| selection.start);
let mut ix = 0;
while ix + 1 < new_selections.len() {
let current_selection = &new_selections[ix];
let next_selection = &new_selections[ix + 1];
if current_selection.range().overlaps(&next_selection.range()) {
if current_selection.id < next_selection.id {
new_selections.remove(ix + 1);
} else {
new_selections.remove(ix);
}
} else {
ix += 1;
}
}
let reversed = self.selections.oldest::<usize>(cx).reversed;
for selection in new_selections.iter_mut() {
selection.reversed = reversed;
}
select_next_state.done = true;
self.unfold_ranges(
&new_selections
.iter()
.map(|selection| selection.range())
.collect::<Vec<_>>(),
false,
false,
cx,
);
self.change_selections(Some(Autoscroll::fit()), window, cx, |selections| {
selections.select(new_selections)
self.unfold_ranges(&new_selections.clone(), false, false, cx);
self.change_selections(None, window, cx, |selections| {
selections.select_ranges(new_selections)
});
Ok(())
@@ -11960,6 +12020,54 @@ impl Editor {
Ok(())
}
pub fn find_next_match(
&mut self,
_: &FindNextMatch,
window: &mut Window,
cx: &mut Context<Self>,
) -> Result<()> {
let selections = self.selections.disjoint_anchors();
match selections.first() {
Some(first) if selections.len() >= 2 => {
self.change_selections(Some(Autoscroll::fit()), window, cx, |s| {
s.select_ranges([first.range()]);
});
}
_ => self.select_next(
&SelectNext {
replace_newest: true,
},
window,
cx,
)?,
}
Ok(())
}
pub fn find_previous_match(
&mut self,
_: &FindPreviousMatch,
window: &mut Window,
cx: &mut Context<Self>,
) -> Result<()> {
let selections = self.selections.disjoint_anchors();
match selections.last() {
Some(last) if selections.len() >= 2 => {
self.change_selections(Some(Autoscroll::fit()), window, cx, |s| {
s.select_ranges([last.range()]);
});
}
_ => self.select_previous(
&SelectPrevious {
replace_newest: true,
},
window,
cx,
)?,
}
Ok(())
}
pub fn toggle_comments(
&mut self,
action: &ToggleComments,
@@ -14171,12 +14279,28 @@ impl Editor {
}
};
let transaction_id_prev = buffer.read_with(cx, |b, cx| b.last_transaction_id(cx));
let selections_prev = transaction_id_prev
.and_then(|transaction_id_prev| {
// Default to the selections as they were after the last edit, if we have them,
// instead of how they are now.
// That way, editing, moving somewhere else, formatting, and then undoing the format
// takes you back to where you made the last edit, instead of leaving you where you scrolled.
self.selection_history
.transaction(transaction_id_prev)
.map(|t| t.0.clone())
})
.unwrap_or_else(|| {
log::info!("Failed to determine selections from before format. Falling back to selections when format was initiated");
self.selections.disjoint_anchors()
});
let mut timeout = cx.background_executor().timer(FORMAT_TIMEOUT).fuse();
let format = project.update(cx, |project, cx| {
project.format(buffers, target, true, trigger, cx)
});
cx.spawn_in(window, async move |_, cx| {
cx.spawn_in(window, async move |editor, cx| {
let transaction = futures::select_biased! {
transaction = format.log_err().fuse() => transaction,
() = timeout => {
@@ -14196,6 +14320,19 @@ impl Editor {
})
.ok();
if let Some(transaction_id_now) =
buffer.read_with(cx, |b, cx| b.last_transaction_id(cx))?
{
let has_new_transaction = transaction_id_prev != Some(transaction_id_now);
if has_new_transaction {
_ = editor.update(cx, |editor, _| {
editor
.selection_history
.insert_transaction(transaction_id_now, selections_prev);
});
}
}
Ok(())
})
}
@@ -14896,8 +15033,12 @@ impl Editor {
self.fold_creases(to_fold, true, window, cx);
}
pub fn fold_at(&mut self, fold_at: &FoldAt, window: &mut Window, cx: &mut Context<Self>) {
let buffer_row = fold_at.buffer_row;
pub fn fold_at(
&mut self,
buffer_row: MultiBufferRow,
window: &mut Window,
cx: &mut Context<Self>,
) {
let display_map = self.display_map.update(cx, |map, cx| map.snapshot(cx));
if let Some(crease) = display_map.crease_for_buffer_row(buffer_row) {
@@ -14967,16 +15108,16 @@ impl Editor {
pub fn unfold_at(
&mut self,
unfold_at: &UnfoldAt,
buffer_row: MultiBufferRow,
_window: &mut Window,
cx: &mut Context<Self>,
) {
let display_map = self.display_map.update(cx, |map, cx| map.snapshot(cx));
let intersection_range = Point::new(unfold_at.buffer_row.0, 0)
let intersection_range = Point::new(buffer_row.0, 0)
..Point::new(
unfold_at.buffer_row.0,
display_map.buffer_snapshot.line_len(unfold_at.buffer_row),
buffer_row.0,
display_map.buffer_snapshot.line_len(buffer_row),
);
let autoscroll = self
@@ -17038,6 +17179,59 @@ impl Editor {
self.active_indent_guides_state.dirty = true;
self.refresh_active_diagnostics(cx);
self.refresh_code_actions(window, cx);
// Check if the edit occurred within a crease (like an @-mention)
if let Some(buffer) = buffer_edited {
// Get display snapshot for crease information
let display_snapshot = self.display_map.update(cx, |display_map, cx| {
display_map.snapshot(cx)
});
// Get multibuffer snapshot for reference
let multibuffer_snapshot = self.buffer.update(cx, |multi_buffer, cx| {
multi_buffer.snapshot(cx)
});
// Access the buffer that was edited
buffer.update(cx, |buffer, _| {
// Log that we're checking for edits within creases
let buffer_id = buffer.remote_id();
dbg!("Checking for edits in @-mentions (creases)", buffer_id);
// Try to find creases in the edited buffer
let mut found_crease = false;
// Check each row in the buffer for creases
for row_idx in 0..=multibuffer_snapshot.max_point().row {
let row = multi_buffer::MultiBufferRow(row_idx);
// Check if there's a crease at this row
if let Some(crease) = display_snapshot.crease_for_buffer_row(row) {
found_crease = true;
// Get crease range information
let crease_range = crease.range();
let crease_start = crease_range.start.to_point(&multibuffer_snapshot);
let crease_end = crease_range.end.to_point(&multibuffer_snapshot);
// Log that we found a crease - these are the @-mentions
dbg!("Found a crease (@-mention) in the buffer",
format!("Row: {}", row_idx),
format!("Range: {:?}..{:?}", crease_start, crease_end));
// Since we're in the buffer_edited event handler, an edit has just happened
// Log that this edit might have affected the crease
dbg!("Edit detected that might affect this @-mention");
}
}
if !found_crease {
// No creases found in this buffer
dbg!("No @-mentions (creases) found in the edited buffer");
}
});
}
if self.has_active_inline_completion() {
self.update_visible_inline_completion(window, cx);
}
@@ -17212,7 +17406,7 @@ impl Editor {
let project_settings = ProjectSettings::get_global(cx);
self.serialize_dirty_buffers = project_settings.session.restore_unsaved_buffers;
if self.mode == EditorMode::Full {
if self.mode.is_full() {
let show_inline_diagnostics = project_settings.diagnostics.inline.enabled;
let inline_blame_enabled = project_settings.git.inline_blame_enabled();
if self.show_inline_diagnostics != show_inline_diagnostics {
@@ -19345,15 +19539,11 @@ impl EditorSnapshot {
Arc::new(move |folded, window: &mut Window, cx: &mut App| {
if folded {
editor.update(cx, |editor, cx| {
editor.fold_at(&crate::FoldAt { buffer_row }, window, cx)
editor.fold_at(buffer_row, window, cx)
});
} else {
editor.update(cx, |editor, cx| {
editor.unfold_at(
&crate::UnfoldAt { buffer_row },
window,
cx,
)
editor.unfold_at(buffer_row, window, cx)
});
}
});
@@ -19377,9 +19567,9 @@ impl EditorSnapshot {
.toggle_state(folded)
.on_click(window.listener_for(&editor, move |this, _e, window, cx| {
if folded {
this.unfold_at(&UnfoldAt { buffer_row }, window, cx);
this.unfold_at(buffer_row, window, cx);
} else {
this.fold_at(&FoldAt { buffer_row }, window, cx);
this.fold_at(buffer_row, window, cx);
}
}))
.into_any_element(),
@@ -19500,7 +19690,7 @@ impl Render for Editor {
line_height: relative(settings.buffer_line_height.value()),
..Default::default()
},
EditorMode::Full => TextStyle {
EditorMode::Full { .. } => TextStyle {
color: cx.theme().colors().editor_foreground,
font_family: settings.buffer_font.family.clone(),
font_features: settings.buffer_font.features.clone(),
@@ -19518,7 +19708,7 @@ impl Render for Editor {
let background = match self.mode {
EditorMode::SingleLine { .. } => cx.theme().system().transparent,
EditorMode::AutoHeight { max_lines: _ } => cx.theme().system().transparent,
EditorMode::Full => cx.theme().colors().editor_background,
EditorMode::Full { .. } => cx.theme().colors().editor_background,
};
EditorElement::new(

View File

@@ -2918,7 +2918,32 @@ async fn test_tab_in_leading_whitespace_auto_indents_lines(cx: &mut TestAppConte
}
#[gpui::test]
async fn test_tab_with_mixed_whitespace(cx: &mut TestAppContext) {
async fn test_tab_with_mixed_whitespace_txt(cx: &mut TestAppContext) {
init_test(cx, |settings| {
settings.defaults.tab_size = NonZeroU32::new(3)
});
let mut cx = EditorTestContext::new(cx).await;
cx.set_state(indoc! {"
ˇ
\t ˇ
\t ˇ
\t ˇ
\t \t\t \t \t\t \t\t \t \t ˇ
"});
cx.update_editor(|e, window, cx| e.tab(&Tab, window, cx));
cx.assert_editor_state(indoc! {"
ˇ
\t ˇ
\t ˇ
\t ˇ
\t \t\t \t \t\t \t\t \t \t ˇ
"});
}
#[gpui::test]
async fn test_tab_with_mixed_whitespace_rust(cx: &mut TestAppContext) {
init_test(cx, |settings| {
settings.defaults.tab_size = NonZeroU32::new(4)
});
@@ -3875,6 +3900,41 @@ async fn test_manipulate_lines_with_multi_selection(cx: &mut TestAppContext) {
"});
}
#[gpui::test]
async fn test_toggle_case(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
// If all lower case -> upper case
cx.set_state(indoc! {"
«hello worldˇ»
"});
cx.update_editor(|e, window, cx| e.toggle_case(&ToggleCase, window, cx));
cx.assert_editor_state(indoc! {"
«HELLO WORLDˇ»
"});
// If all upper case -> lower case
cx.set_state(indoc! {"
«HELLO WORLDˇ»
"});
cx.update_editor(|e, window, cx| e.toggle_case(&ToggleCase, window, cx));
cx.assert_editor_state(indoc! {"
«hello worldˇ»
"});
// If any upper case characters are identified -> lower case
// This matches JetBrains IDEs
cx.set_state(indoc! {"
«hEllo worldˇ»
"});
cx.update_editor(|e, window, cx| e.toggle_case(&ToggleCase, window, cx));
cx.assert_editor_state(indoc! {"
«hello worldˇ»
"});
}
#[gpui::test]
async fn test_manipulate_text(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -5782,6 +5842,114 @@ async fn test_select_all_matches(cx: &mut TestAppContext) {
cx.assert_editor_state("abc\n« ˇ»abc\nabc");
}
#[gpui::test]
async fn test_select_all_matches_does_not_scroll(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
let large_body_1 = "\nd".repeat(200);
let large_body_2 = "\ne".repeat(200);
cx.set_state(&format!(
"abc\nabc{large_body_1} «ˇa»bc{large_body_2}\nefabc\nabc"
));
let initial_scroll_position = cx.update_editor(|editor, _, cx| {
let scroll_position = editor.scroll_position(cx);
assert!(scroll_position.y > 0.0, "Initial selection is between two large bodies and should have the editor scrolled to it");
scroll_position
});
cx.update_editor(|e, window, cx| e.select_all_matches(&SelectAllMatches, window, cx))
.unwrap();
cx.assert_editor_state(&format!(
"«ˇa»bc\n«ˇa»bc{large_body_1} «ˇa»bc{large_body_2}\nef«ˇa»bc\n«ˇa»bc"
));
let scroll_position_after_selection =
cx.update_editor(|editor, _, cx| editor.scroll_position(cx));
assert_eq!(
initial_scroll_position, scroll_position_after_selection,
"Scroll position should not change after selecting all matches"
);
}
#[gpui::test]
async fn test_undo_format_scrolls_to_last_edit_pos(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorLspTestContext::new_rust(
lsp::ServerCapabilities {
document_formatting_provider: Some(lsp::OneOf::Left(true)),
..Default::default()
},
cx,
)
.await;
cx.set_state(indoc! {"
line 1
line 2
linˇe 3
line 4
line 5
"});
// Make an edit
cx.update_editor(|editor, window, cx| {
editor.handle_input("X", window, cx);
});
// Move cursor to a different position
cx.update_editor(|editor, window, cx| {
editor.change_selections(None, window, cx, |s| {
s.select_ranges([Point::new(4, 2)..Point::new(4, 2)]);
});
});
cx.assert_editor_state(indoc! {"
line 1
line 2
linXe 3
line 4
liˇne 5
"});
cx.lsp
.set_request_handler::<lsp::request::Formatting, _, _>(move |_, _| async move {
Ok(Some(vec![lsp::TextEdit::new(
lsp::Range::new(lsp::Position::new(0, 0), lsp::Position::new(0, 0)),
"PREFIX ".to_string(),
)]))
});
cx.update_editor(|editor, window, cx| editor.format(&Default::default(), window, cx))
.unwrap()
.await
.unwrap();
cx.assert_editor_state(indoc! {"
PREFIX line 1
line 2
linXe 3
line 4
liˇne 5
"});
// Undo formatting
cx.update_editor(|editor, window, cx| {
editor.undo(&Default::default(), window, cx);
});
// Verify cursor moved back to position after edit
cx.assert_editor_state(indoc! {"
line 1
line 2
linXˇe 3
line 4
line 5
"});
}
#[gpui::test]
async fn test_select_next_with_multiple_carets(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -7665,77 +7833,81 @@ async fn test_document_format_during_save(cx: &mut TestAppContext) {
cx.executor().start_waiting();
let fake_server = fake_servers.next().await.unwrap();
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
fake_server
.set_request_handler::<lsp::request::Formatting, _, _>(move |params, _| async move {
assert_eq!(
params.text_document.uri,
lsp::Url::from_file_path(path!("/file.rs")).unwrap()
);
assert_eq!(params.options.tab_size, 4);
Ok(Some(vec![lsp::TextEdit::new(
lsp::Range::new(lsp::Position::new(0, 3), lsp::Position::new(1, 0)),
", ".to_string(),
)]))
})
.next()
.await;
cx.executor().start_waiting();
save.await;
{
fake_server.set_request_handler::<lsp::request::Formatting, _, _>(
move |params, _| async move {
assert_eq!(
params.text_document.uri,
lsp::Url::from_file_path(path!("/file.rs")).unwrap()
);
assert_eq!(params.options.tab_size, 4);
Ok(Some(vec![lsp::TextEdit::new(
lsp::Range::new(lsp::Position::new(0, 3), lsp::Position::new(1, 0)),
", ".to_string(),
)]))
},
);
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
cx.executor().start_waiting();
save.await;
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
"one, two\nthree\n"
);
assert!(!cx.read(|cx| editor.is_dirty(cx)));
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
"one, two\nthree\n"
);
assert!(!cx.read(|cx| editor.is_dirty(cx)));
}
editor.update_in(cx, |editor, window, cx| {
editor.set_text("one\ntwo\nthree\n", window, cx)
});
assert!(cx.read(|cx| editor.is_dirty(cx)));
{
editor.update_in(cx, |editor, window, cx| {
editor.set_text("one\ntwo\nthree\n", window, cx)
});
assert!(cx.read(|cx| editor.is_dirty(cx)));
// Ensure we can still save even if formatting hangs.
fake_server.set_request_handler::<lsp::request::Formatting, _, _>(
move |params, _| async move {
assert_eq!(
params.text_document.uri,
lsp::Url::from_file_path(path!("/file.rs")).unwrap()
);
futures::future::pending::<()>().await;
unreachable!()
},
);
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
cx.executor().advance_clock(super::FORMAT_TIMEOUT);
cx.executor().start_waiting();
save.await;
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
"one\ntwo\nthree\n"
);
assert!(!cx.read(|cx| editor.is_dirty(cx)));
// Ensure we can still save even if formatting hangs.
fake_server.set_request_handler::<lsp::request::Formatting, _, _>(
move |params, _| async move {
assert_eq!(
params.text_document.uri,
lsp::Url::from_file_path(path!("/file.rs")).unwrap()
);
futures::future::pending::<()>().await;
unreachable!()
},
);
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
cx.executor().advance_clock(super::FORMAT_TIMEOUT);
cx.executor().start_waiting();
save.await;
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
"one\ntwo\nthree\n"
);
}
// For non-dirty buffer, no formatting request should be sent
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
let _pending_format_request = fake_server
.set_request_handler::<lsp::request::RangeFormatting, _, _>(move |_, _| async move {
{
assert!(!cx.read(|cx| editor.is_dirty(cx)));
fake_server.set_request_handler::<lsp::request::Formatting, _, _>(move |_, _| async move {
panic!("Should not be invoked on non-dirty buffer");
})
.next();
cx.executor().start_waiting();
save.await;
});
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
cx.executor().start_waiting();
save.await;
}
// Set rust language override and assert overridden tabsize is sent to language server
update_test_language_settings(cx, |settings| {
@@ -7748,28 +7920,28 @@ async fn test_document_format_during_save(cx: &mut TestAppContext) {
);
});
editor.update_in(cx, |editor, window, cx| {
editor.set_text("somehting_new\n", window, cx)
});
assert!(cx.read(|cx| editor.is_dirty(cx)));
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
fake_server
.set_request_handler::<lsp::request::Formatting, _, _>(move |params, _| async move {
assert_eq!(
params.text_document.uri,
lsp::Url::from_file_path(path!("/file.rs")).unwrap()
);
assert_eq!(params.options.tab_size, 8);
Ok(Some(vec![]))
})
.next()
.await;
cx.executor().start_waiting();
save.await;
{
editor.update_in(cx, |editor, window, cx| {
editor.set_text("somehting_new\n", window, cx)
});
assert!(cx.read(|cx| editor.is_dirty(cx)));
let _formatting_request_signal = fake_server
.set_request_handler::<lsp::request::Formatting, _, _>(move |params, _| async move {
assert_eq!(
params.text_document.uri,
lsp::Url::from_file_path(path!("/file.rs")).unwrap()
);
assert_eq!(params.options.tab_size, 8);
Ok(Some(vec![]))
});
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
cx.executor().start_waiting();
save.await;
}
}
#[gpui::test]
@@ -7881,7 +8053,7 @@ async fn test_multibuffer_format_during_save(cx: &mut TestAppContext) {
});
let multi_buffer_editor = cx.new_window_entity(|window, cx| {
Editor::new(
EditorMode::Full,
EditorMode::full(),
multi_buffer,
Some(project.clone()),
window,
@@ -8251,6 +8423,272 @@ async fn test_document_format_manual_trigger(cx: &mut TestAppContext) {
);
}
#[gpui::test]
async fn test_multiple_formatters(cx: &mut TestAppContext) {
init_test(cx, |settings| {
settings.defaults.remove_trailing_whitespace_on_save = Some(true);
settings.defaults.formatter =
Some(language_settings::SelectedFormatter::List(FormatterList(
vec![
Formatter::LanguageServer { name: None },
Formatter::CodeActions(
[
("code-action-1".into(), true),
("code-action-2".into(), true),
]
.into_iter()
.collect(),
),
]
.into(),
)))
});
let fs = FakeFs::new(cx.executor());
fs.insert_file(path!("/file.rs"), "one \ntwo \nthree".into())
.await;
let project = Project::test(fs, [path!("/").as_ref()], cx).await;
let language_registry = project.read_with(cx, |project, _| project.languages().clone());
language_registry.add(rust_lang());
let mut fake_servers = language_registry.register_fake_lsp(
"Rust",
FakeLspAdapter {
capabilities: lsp::ServerCapabilities {
document_formatting_provider: Some(lsp::OneOf::Left(true)),
execute_command_provider: Some(lsp::ExecuteCommandOptions {
commands: vec!["the-command-for-code-action-1".into()],
..Default::default()
}),
code_action_provider: Some(lsp::CodeActionProviderCapability::Simple(true)),
..Default::default()
},
..Default::default()
},
);
let buffer = project
.update(cx, |project, cx| {
project.open_local_buffer(path!("/file.rs"), cx)
})
.await
.unwrap();
let buffer = cx.new(|cx| MultiBuffer::singleton(buffer, cx));
let (editor, cx) = cx.add_window_view(|window, cx| {
build_editor_with_project(project.clone(), buffer, window, cx)
});
cx.executor().start_waiting();
let fake_server = fake_servers.next().await.unwrap();
fake_server.set_request_handler::<lsp::request::Formatting, _, _>(
move |_params, _| async move {
Ok(Some(vec![lsp::TextEdit::new(
lsp::Range::new(lsp::Position::new(0, 0), lsp::Position::new(0, 0)),
"applied-formatting\n".to_string(),
)]))
},
);
fake_server.set_request_handler::<lsp::request::CodeActionRequest, _, _>(
move |params, _| async move {
assert_eq!(
params.context.only,
Some(vec!["code-action-1".into(), "code-action-2".into()])
);
let uri = lsp::Url::from_file_path(path!("/file.rs")).unwrap();
Ok(Some(vec![
lsp::CodeActionOrCommand::CodeAction(lsp::CodeAction {
kind: Some("code-action-1".into()),
edit: Some(lsp::WorkspaceEdit::new(
[(
uri.clone(),
vec![lsp::TextEdit::new(
lsp::Range::new(lsp::Position::new(0, 0), lsp::Position::new(0, 0)),
"applied-code-action-1-edit\n".to_string(),
)],
)]
.into_iter()
.collect(),
)),
command: Some(lsp::Command {
command: "the-command-for-code-action-1".into(),
..Default::default()
}),
..Default::default()
}),
lsp::CodeActionOrCommand::CodeAction(lsp::CodeAction {
kind: Some("code-action-2".into()),
edit: Some(lsp::WorkspaceEdit::new(
[(
uri.clone(),
vec![lsp::TextEdit::new(
lsp::Range::new(lsp::Position::new(0, 0), lsp::Position::new(0, 0)),
"applied-code-action-2-edit\n".to_string(),
)],
)]
.into_iter()
.collect(),
)),
..Default::default()
}),
]))
},
);
fake_server.set_request_handler::<lsp::request::CodeActionResolveRequest, _, _>({
move |params, _| async move { Ok(params) }
});
let command_lock = Arc::new(futures::lock::Mutex::new(()));
fake_server.set_request_handler::<lsp::request::ExecuteCommand, _, _>({
let fake = fake_server.clone();
let lock = command_lock.clone();
move |params, _| {
assert_eq!(params.command, "the-command-for-code-action-1");
let fake = fake.clone();
let lock = lock.clone();
async move {
lock.lock().await;
fake.server
.request::<lsp::request::ApplyWorkspaceEdit>(lsp::ApplyWorkspaceEditParams {
label: None,
edit: lsp::WorkspaceEdit {
changes: Some(
[(
lsp::Url::from_file_path(path!("/file.rs")).unwrap(),
vec![lsp::TextEdit {
range: lsp::Range::new(
lsp::Position::new(0, 0),
lsp::Position::new(0, 0),
),
new_text: "applied-code-action-1-command\n".into(),
}],
)]
.into_iter()
.collect(),
),
..Default::default()
},
})
.await
.unwrap();
Ok(Some(json!(null)))
}
}
});
cx.executor().start_waiting();
editor
.update_in(cx, |editor, window, cx| {
editor.perform_format(
project.clone(),
FormatTrigger::Manual,
FormatTarget::Buffers,
window,
cx,
)
})
.unwrap()
.await;
editor.update(cx, |editor, cx| {
assert_eq!(
editor.text(cx),
r#"
applied-code-action-2-edit
applied-code-action-1-command
applied-code-action-1-edit
applied-formatting
one
two
three
"#
.unindent()
);
});
editor.update_in(cx, |editor, window, cx| {
editor.undo(&Default::default(), window, cx);
assert_eq!(editor.text(cx), "one \ntwo \nthree");
});
// Perform a manual edit while waiting for an LSP command
// that's being run as part of a formatting code action.
let lock_guard = command_lock.lock().await;
let format = editor
.update_in(cx, |editor, window, cx| {
editor.perform_format(
project.clone(),
FormatTrigger::Manual,
FormatTarget::Buffers,
window,
cx,
)
})
.unwrap();
cx.run_until_parked();
editor.update(cx, |editor, cx| {
assert_eq!(
editor.text(cx),
r#"
applied-code-action-1-edit
applied-formatting
one
two
three
"#
.unindent()
);
editor.buffer.update(cx, |buffer, cx| {
let ix = buffer.len(cx);
buffer.edit([(ix..ix, "edited\n")], None, cx);
});
});
// Allow the LSP command to proceed. Because the buffer was edited,
// the second code action will not be run.
drop(lock_guard);
format.await;
editor.update_in(cx, |editor, window, cx| {
assert_eq!(
editor.text(cx),
r#"
applied-code-action-1-command
applied-code-action-1-edit
applied-formatting
one
two
three
edited
"#
.unindent()
);
// The manual edit is undone first, because it is the last thing the user did
// (even though the command completed afterwards).
editor.undo(&Default::default(), window, cx);
assert_eq!(
editor.text(cx),
r#"
applied-code-action-1-command
applied-code-action-1-edit
applied-formatting
one
two
three
"#
.unindent()
);
// All the formatting (including the command, which completed after the manual edit)
// is undone together.
editor.undo(&Default::default(), window, cx);
assert_eq!(editor.text(cx), "one \ntwo \nthree");
});
}
#[gpui::test]
async fn test_organize_imports_manual_trigger(cx: &mut TestAppContext) {
init_test(cx, |settings| {
@@ -14062,7 +14500,7 @@ async fn test_mutlibuffer_in_navigation_history(cx: &mut TestAppContext) {
let cx = &mut VisualTestContext::from_window(*workspace.deref(), cx);
let multi_buffer_editor = cx.new_window_entity(|window, cx| {
Editor::new(
EditorMode::Full,
EditorMode::full(),
multi_buffer,
Some(project.clone()),
window,
@@ -14521,7 +14959,7 @@ async fn test_toggle_diff_expand_in_multi_buffer(cx: &mut TestAppContext) {
});
let editor =
cx.add_window(|window, cx| Editor::new(EditorMode::Full, multi_buffer, None, window, cx));
cx.add_window(|window, cx| Editor::new(EditorMode::full(), multi_buffer, None, window, cx));
editor
.update(cx, |editor, _window, cx| {
for (buffer, diff_base) in [
@@ -14632,7 +15070,7 @@ async fn test_expand_diff_hunk_at_excerpt_boundary(cx: &mut TestAppContext) {
});
let editor =
cx.add_window(|window, cx| Editor::new(EditorMode::Full, multi_buffer, None, window, cx));
cx.add_window(|window, cx| Editor::new(EditorMode::full(), multi_buffer, None, window, cx));
editor
.update(cx, |editor, _window, cx| {
let diff = cx.new(|cx| BufferDiff::new_with_base_text(base, &buffer, cx));
@@ -16188,7 +16626,7 @@ async fn test_display_diff_hunks(cx: &mut TestAppContext) {
});
let editor = cx.add_window(|window, cx| {
Editor::new(EditorMode::Full, multibuffer, Some(project), window, cx)
Editor::new(EditorMode::full(), multibuffer, Some(project), window, cx)
});
cx.run_until_parked();
@@ -16705,7 +17143,7 @@ async fn test_find_enclosing_node_with_task(cx: &mut TestAppContext) {
let editor = cx.new_window_entity(|window, cx| {
Editor::new(
EditorMode::Full,
EditorMode::full(),
multi_buffer,
Some(project.clone()),
window,
@@ -16832,7 +17270,7 @@ async fn test_folding_buffers(cx: &mut TestAppContext) {
});
let multi_buffer_editor = cx.new_window_entity(|window, cx| {
Editor::new(
EditorMode::Full,
EditorMode::full(),
multi_buffer.clone(),
Some(project.clone()),
window,
@@ -16989,7 +17427,7 @@ async fn test_folding_buffers_with_one_excerpt(cx: &mut TestAppContext) {
let multi_buffer_editor = cx.new_window_entity(|window, cx| {
Editor::new(
EditorMode::Full,
EditorMode::full(),
multi_buffer,
Some(project.clone()),
window,
@@ -17107,7 +17545,7 @@ async fn test_folding_buffer_when_multibuffer_has_only_one_excerpt(cx: &mut Test
});
let multi_buffer_editor = cx.new_window_entity(|window, cx| {
Editor::new(
EditorMode::Full,
EditorMode::full(),
multi_buffer,
Some(project.clone()),
window,
@@ -17157,7 +17595,7 @@ async fn test_multi_buffer_navigation_with_folded_buffers(cx: &mut TestAppContex
],
cx,
);
let mut editor = Editor::new(EditorMode::Full, multi_buffer.clone(), None, window, cx);
let mut editor = Editor::new(EditorMode::full(), multi_buffer.clone(), None, window, cx);
let buffer_ids = multi_buffer.read(cx).excerpt_buffer_ids();
// fold all but the second buffer, so that we test navigating between two
@@ -17469,7 +17907,7 @@ async fn assert_highlighted_edits(
) {
let window = cx.add_window(|window, cx| {
let buffer = MultiBuffer::build_simple(text, cx);
Editor::new(EditorMode::Full, buffer, None, window, cx)
Editor::new(EditorMode::full(), buffer, None, window, cx)
});
let cx = &mut VisualTestContext::from_window(*window, cx);
@@ -17561,7 +17999,15 @@ fn add_log_breakpoint_at_cursor(
cx: &mut Context<Editor>,
) {
let (anchor, bp) = editor
.breakpoint_at_cursor_head(window, cx)
.breakpoints_at_cursors(window, cx)
.first()
.and_then(|(anchor, bp)| {
if let Some(bp) = bp {
Some((*anchor, bp.clone()))
} else {
None
}
})
.unwrap_or_else(|| {
let cursor_position: Point = editor.selections.newest(cx).head();
@@ -17627,7 +18073,7 @@ async fn test_breakpoint_toggling(cx: &mut TestAppContext) {
let (editor, cx) = cx.add_window_view(|window, cx| {
Editor::new(
EditorMode::Full,
EditorMode::full(),
MultiBuffer::build_from_buffer(buffer, cx),
Some(project.clone()),
window,
@@ -17744,7 +18190,7 @@ async fn test_log_breakpoint_editing(cx: &mut TestAppContext) {
let (editor, cx) = cx.add_window_view(|window, cx| {
Editor::new(
EditorMode::Full,
EditorMode::full(),
MultiBuffer::build_from_buffer(buffer, cx),
Some(project.clone()),
window,
@@ -17919,7 +18365,7 @@ async fn test_breakpoint_enabling_and_disabling(cx: &mut TestAppContext) {
let (editor, cx) = cx.add_window_view(|window, cx| {
Editor::new(
EditorMode::Full,
EditorMode::full(),
MultiBuffer::build_from_buffer(buffer, cx),
Some(project.clone()),
window,


@@ -211,6 +211,7 @@ impl EditorElement {
register_action(editor, window, Editor::sort_lines_case_insensitive);
register_action(editor, window, Editor::reverse_lines);
register_action(editor, window, Editor::shuffle_lines);
register_action(editor, window, Editor::toggle_case);
register_action(editor, window, Editor::convert_to_upper_case);
register_action(editor, window, Editor::convert_to_lower_case);
register_action(editor, window, Editor::convert_to_title_case);
@@ -324,6 +325,12 @@ impl EditorElement {
register_action(editor, window, |editor, action, window, cx| {
editor.select_previous(action, window, cx).log_err();
});
register_action(editor, window, |editor, action, window, cx| {
editor.find_next_match(action, window, cx).log_err();
});
register_action(editor, window, |editor, action, window, cx| {
editor.find_previous_match(action, window, cx).log_err();
});
register_action(editor, window, Editor::toggle_comments);
register_action(editor, window, Editor::select_larger_syntax_node);
register_action(editor, window, Editor::select_smaller_syntax_node);
@@ -386,14 +393,12 @@ impl EditorElement {
register_action(editor, window, Editor::fold_at_level);
register_action(editor, window, Editor::fold_all);
register_action(editor, window, Editor::fold_function_bodies);
register_action(editor, window, Editor::fold_at);
register_action(editor, window, Editor::fold_recursive);
register_action(editor, window, Editor::toggle_fold);
register_action(editor, window, Editor::toggle_fold_recursive);
register_action(editor, window, Editor::unfold_lines);
register_action(editor, window, Editor::unfold_recursive);
register_action(editor, window, Editor::unfold_all);
register_action(editor, window, Editor::unfold_at);
register_action(editor, window, Editor::fold_selected_ranges);
register_action(editor, window, Editor::set_mark);
register_action(editor, window, Editor::swap_selection_ends);
@@ -1403,7 +1408,7 @@ impl EditorElement {
window: &mut Window,
cx: &mut App,
) -> Option<EditorScrollbars> {
if snapshot.mode != EditorMode::Full {
if !snapshot.mode.is_full() {
return None;
}
@@ -2372,7 +2377,7 @@ impl EditorElement {
cx: &mut App,
) -> Arc<HashMap<MultiBufferRow, LineNumberLayout>> {
let include_line_numbers = snapshot.show_line_numbers.unwrap_or_else(|| {
EditorSettings::get_global(cx).gutter.line_numbers && snapshot.mode == EditorMode::Full
EditorSettings::get_global(cx).gutter.line_numbers && snapshot.mode.is_full()
});
if !include_line_numbers {
return Arc::default();
@@ -2478,7 +2483,7 @@ impl EditorElement {
cx: &mut App,
) -> Vec<Option<AnyElement>> {
let include_fold_statuses = EditorSettings::get_global(cx).gutter.folds
&& snapshot.mode == EditorMode::Full
&& snapshot.mode.is_full()
&& self.editor.read(cx).is_singleton(cx);
if include_fold_statuses {
row_infos
@@ -4177,7 +4182,11 @@ impl EditorElement {
self.style.background,
));
if let EditorMode::Full = layout.mode {
if let EditorMode::Full {
show_active_line_background,
..
} = layout.mode
{
let mut active_rows = layout.active_rows.iter().peekable();
while let Some((start_row, contains_non_empty_selection)) = active_rows.next() {
let mut end_row = start_row.0;
@@ -4192,7 +4201,7 @@ impl EditorElement {
end_row += 1;
}
if !contains_non_empty_selection.selection {
if show_active_line_background && !contains_non_empty_selection.selection {
let highlight_h_range =
match layout.position_map.snapshot.current_line_highlight {
CurrentLineHighlight::Gutter => Some(Range {
@@ -6060,7 +6069,7 @@ impl LineWithInvisibles {
strikethrough: text_style.strikethrough,
});
if editor_mode == EditorMode::Full {
if editor_mode.is_full() {
// Line wrap pads its contents with fake whitespaces,
// avoid printing them
let is_soft_wrapped = is_row_soft_wrapped(row);
@@ -6415,7 +6424,13 @@ impl EditorElement {
/// This allows UI elements to scale based on the `buffer_font_size`.
fn rem_size(&self, cx: &mut App) -> Option<Pixels> {
match self.editor.read(cx).mode {
EditorMode::Full => {
EditorMode::Full {
scale_ui_elements_with_buffer_font_size,
..
} => {
if !scale_ui_elements_with_buffer_font_size {
return None;
}
let buffer_font_size = self.style.text.font_size;
match buffer_font_size {
AbsoluteLength::Pixels(pixels) => {
@@ -6532,7 +6547,7 @@ impl Element for EditorElement {
},
)
}
EditorMode::Full => {
EditorMode::Full { .. } => {
let mut style = Style::default();
style.size.width = relative(1.).into();
style.size.height = relative(1.).into();
@@ -8508,7 +8523,7 @@ mod tests {
init_test(cx, |_| {});
let window = cx.add_window(|window, cx| {
let buffer = MultiBuffer::build_simple(&sample_text(6, 6, 'a'), cx);
Editor::new(EditorMode::Full, buffer, None, window, cx)
Editor::new(EditorMode::full(), buffer, None, window, cx)
});
let editor = window.root(cx).unwrap();
@@ -8609,7 +8624,7 @@ mod tests {
let window = cx.add_window(|window, cx| {
let buffer = MultiBuffer::build_simple(&(sample_text(6, 6, 'a') + "\n"), cx);
Editor::new(EditorMode::Full, buffer, None, window, cx)
Editor::new(EditorMode::full(), buffer, None, window, cx)
});
let cx = &mut VisualTestContext::from_window(*window, cx);
let editor = window.root(cx).unwrap();
@@ -8680,7 +8695,7 @@ mod tests {
let window = cx.add_window(|window, cx| {
let buffer = MultiBuffer::build_simple("", cx);
Editor::new(EditorMode::Full, buffer, None, window, cx)
Editor::new(EditorMode::full(), buffer, None, window, cx)
});
let cx = &mut VisualTestContext::from_window(*window, cx);
let editor = window.root(cx).unwrap();
@@ -8766,7 +8781,7 @@ mod tests {
let actual_invisibles = collect_invisibles_from_new_editor(
cx,
EditorMode::Full,
EditorMode::full(),
input_text,
px(500.0),
show_line_numbers,
@@ -8860,7 +8875,7 @@ mod tests {
let actual_invisibles = collect_invisibles_from_new_editor(
cx,
EditorMode::Full,
EditorMode::full(),
&input_text,
px(editor_width),
show_line_numbers,

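The repeated `EditorMode::Full` → `EditorMode::full()` and `mode == EditorMode::Full` → `mode.is_full()` replacements in this file suggest that `Full` changed from a unit variant to a struct variant carrying per-editor flags. The sketch below is a guess at that shape, assuming the only new fields are the two visible in these hunks (`show_active_line_background` and `scale_ui_elements_with_buffer_font_size`); the real definition lives in `editor.rs` and is not part of this diff.

```rust
// Sketch only: other EditorMode variants are elided, and the field set is
// inferred from the hunks above rather than taken from editor.rs.
enum EditorMode {
    // ... single-line, auto-height, etc. elided ...
    Full {
        scale_ui_elements_with_buffer_font_size: bool,
        show_active_line_background: bool,
    },
}

impl EditorMode {
    /// Default full-editor configuration, replacing the old unit variant.
    fn full() -> Self {
        EditorMode::Full {
            scale_ui_elements_with_buffer_font_size: true,
            show_active_line_background: true,
        }
    }

    /// Replaces the old `mode == EditorMode::Full` comparisons.
    fn is_full(&self) -> bool {
        matches!(self, EditorMode::Full { .. })
    }
}

fn main() {
    let mode = EditorMode::full();
    assert!(mode.is_full());
    if let EditorMode::Full {
        show_active_line_background,
        scale_ui_elements_with_buffer_font_size,
    } = mode
    {
        assert!(show_active_line_background && scale_ui_elements_with_buffer_font_size);
    }
}
```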

@@ -1540,8 +1540,24 @@ impl SearchableItem for Editor {
let text = self.buffer.read(cx);
let text = text.snapshot(cx);
let mut edits = vec![];
let mut last_point: Option<Point> = None;
for m in matches {
let point = m.start.to_point(&text);
let text = text.text_for_range(m.clone()).collect::<Vec<_>>();
// If this match starts on the same row as the previous match and the
// `one_match_per_line` option is enabled, skip it so that only the
// first match on each row gets replaced.
if last_point.is_none() {
last_point = Some(point);
} else if last_point.is_some() && point.row != last_point.unwrap().row {
last_point = Some(point);
} else if query.one_match_per_line().is_some_and(|enabled| enabled) {
continue;
}
let text: Cow<_> = if text.len() == 1 {
text.first().cloned().unwrap().into()
} else {

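The replace path above now skips additional matches that start on a row that has already been handled when `one_match_per_line` is enabled. A standalone sketch of just that row-deduplication step, using plain `(row, text)` pairs instead of editor anchors (the function name and types are illustrative, not Zed's):

```rust
// Keep only the first match on each row when `one_match_per_line` is set.
fn first_match_per_row<'a>(
    matches: &[(u32, &'a str)],
    one_match_per_line: bool,
) -> Vec<(u32, &'a str)> {
    let mut last_row: Option<u32> = None;
    let mut kept = Vec::new();
    for &(row, text) in matches {
        if one_match_per_line && last_row == Some(row) {
            // Still on the same line as the previous match: skip it.
            continue;
        }
        last_row = Some(row);
        kept.push((row, text));
    }
    kept
}

fn main() {
    let matches: [(u32, &str); 3] = [(0, "foo"), (0, "foo"), (2, "foo")];
    assert_eq!(first_match_per_row(&matches, true), vec![(0, "foo"), (2, "foo")]);
    assert_eq!(first_match_per_row(&matches, false).len(), 3);
}
```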

@@ -34,6 +34,10 @@ impl LinkedEditingRanges {
pub(super) fn is_empty(&self) -> bool {
self.0.is_empty()
}
pub(super) fn clear(&mut self) {
self.0.clear();
}
}
const UPDATE_DEBOUNCE: Duration = Duration::from_millis(50);


@@ -85,6 +85,10 @@ pub fn lsp_tasks(
.map(|(name, buffer_ids)| {
let buffers = buffer_ids
.iter()
.filter(|&&buffer_id| match for_position {
Some(for_position) => for_position.buffer_id == Some(buffer_id),
None => true,
})
.filter_map(|&buffer_id| project.read(cx).buffer_for_id(buffer_id, cx))
.collect::<Vec<_>>();
language_server_for_buffers(project.clone(), name.clone(), buffers, cx)

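The new filter above narrows the task buffers to the one a position refers to, when a position is supplied. A self-contained sketch of the same optional-position filter, with `BufferId` and `Position` as simplified stand-ins for Zed's types:

```rust
type BufferId = u64;

struct Position {
    buffer_id: Option<BufferId>,
}

// With a position, keep only the buffer it points at; otherwise keep everything.
fn buffers_for_tasks(buffer_ids: &[BufferId], for_position: Option<&Position>) -> Vec<BufferId> {
    buffer_ids
        .iter()
        .copied()
        .filter(|&buffer_id| match for_position {
            Some(position) => position.buffer_id == Some(buffer_id),
            None => true,
        })
        .collect()
}

fn main() {
    let ids = [1, 2, 3];
    let at = Position { buffer_id: Some(2) };
    assert_eq!(buffers_for_tasks(&ids, Some(&at)), vec![2]);
    assert_eq!(buffers_for_tasks(&ids, None), vec![1, 2, 3]);
}
```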

@@ -1,10 +1,10 @@
use crate::CopyAndTrim;
use crate::actions::FormatSelections;
use crate::{
Copy, CopyPermalinkToLine, Cut, DisplayPoint, DisplaySnapshot, Editor, EditorMode,
FindAllReferences, GoToDeclaration, GoToDefinition, GoToImplementation, GoToTypeDefinition,
Paste, Rename, RevealInFileManager, SelectMode, ToDisplayPoint, ToggleCodeActions,
actions::Format, selections_collection::SelectionsCollection,
Copy, CopyAndTrim, CopyPermalinkToLine, Cut, DebuggerEvaluateSelectedText, DisplayPoint,
DisplaySnapshot, Editor, FindAllReferences, GoToDeclaration, GoToDefinition,
GoToImplementation, GoToTypeDefinition, Paste, Rename, RevealInFileManager, SelectMode,
ToDisplayPoint, ToggleCodeActions,
actions::{Format, FormatSelections},
selections_collection::SelectionsCollection,
};
use gpui::prelude::FluentBuilder;
use gpui::{Context, DismissEvent, Entity, Focusable as _, Pixels, Point, Subscription, Window};
@@ -123,7 +123,7 @@ pub fn deploy_context_menu(
}
// Don't show context menu for inline editors
if editor.mode() != EditorMode::Full {
if !editor.mode().is_full() {
return;
}
@@ -169,9 +169,19 @@ pub fn deploy_context_menu(
.is_some()
});
let evaluate_selection = command_palette_hooks::CommandPaletteFilter::try_global(cx)
.map_or(false, |filter| {
!filter.is_hidden(&DebuggerEvaluateSelectedText)
});
ui::ContextMenu::build(window, cx, |menu, _window, _cx| {
let builder = menu
.on_blur_subscription(Subscription::new(|| {}))
.when(evaluate_selection && has_selections, |builder| {
builder
.action("Evaluate Selection", Box::new(DebuggerEvaluateSelectedText))
.separator()
})
.action("Go to Definition", Box::new(GoToDefinition))
.action("Go to Declaration", Box::new(GoToDeclaration))
.action("Go to Type Definition", Box::new(GoToTypeDefinition))

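The context menu now prepends an "Evaluate Selection" entry only when the debugger action is not hidden by the command palette filter and there is a selection, via gpui's `.when(...)` fluent builder. A minimal sketch of that conditional-builder pattern on an illustrative stand-in type (not Zed's actual `ContextMenu` API):

```rust
// `Menu` is a toy builder used only to illustrate `.when(condition, ...)`.
struct Menu {
    items: Vec<String>,
}

impl Menu {
    fn new() -> Self {
        Menu { items: Vec::new() }
    }

    fn action(mut self, label: &str) -> Self {
        self.items.push(label.to_string());
        self
    }

    // Apply the closure to the builder only when the condition holds.
    fn when(self, condition: bool, then: impl FnOnce(Self) -> Self) -> Self {
        if condition { then(self) } else { self }
    }
}

fn main() {
    let evaluate_selection = true;
    let has_selections = true;
    let menu = Menu::new()
        .when(evaluate_selection && has_selections, |menu| {
            menu.action("Evaluate Selection")
        })
        .action("Go to Definition");
    assert_eq!(menu.items, ["Evaluate Selection", "Go to Definition"]);
}
```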

@@ -108,7 +108,7 @@ pub(crate) fn build_editor(
window: &mut Window,
cx: &mut Context<Editor>,
) -> Editor {
Editor::new(EditorMode::Full, buffer, None, window, cx)
Editor::new(EditorMode::full(), buffer, None, window, cx)
}
pub(crate) fn build_editor_with_project(
@@ -117,5 +117,5 @@ pub(crate) fn build_editor_with_project(
window: &mut Window,
cx: &mut Context<Editor>,
) -> Editor {
Editor::new(EditorMode::Full, buffer, Some(project), window, cx)
Editor::new(EditorMode::full(), buffer, Some(project), window, cx)
}


@@ -1,25 +1,16 @@
[package]
name = "agent_eval"
name = "eval"
version = "0.1.0"
edition.workspace = true
publish.workspace = true
license = "GPL-3.0-or-later"
[lints]
workspace = true
[[bin]]
name = "agent_eval"
path = "src/main.rs"
edition.workspace = true
[dependencies]
agent.workspace = true
anyhow.workspace = true
assistant_tool.workspace = true
assistant_tools.workspace = true
clap.workspace = true
assistant_settings.workspace = true
client.workspace = true
collections.workspace = true
context_server.workspace = true
dap.workspace = true
env_logger.workspace = true
@@ -36,11 +27,13 @@ prompt_store.workspace = true
release_channel.workspace = true
reqwest_client.workspace = true
serde.workspace = true
serde_json.workspace = true
serde_json_lenient.workspace = true
settings.workspace = true
smol.workspace = true
tempfile.workspace = true
util.workspace = true
walkdir.workspace = true
toml.workspace = true
workspace-hack.workspace = true
[[bin]]
name = "eval"
path = "src/eval.rs"
[lints]
workspace = true

crates/eval/README.md (new file)

@@ -0,0 +1,7 @@
# Eval
This eval assumes the working directory is the root of the repository. Run it with:
```sh
cargo run -p eval
```

Some files were not shown because too many files have changed in this diff.