Compare commits

..

57 Commits

Author SHA1 Message Date
Richard Feldman
9d8fe5d2da Drop backticks 2025-04-11 00:01:47 -04:00
Richard Feldman
5b9c7e378e Backticks 2025-04-11 00:01:25 -04:00
Richard Feldman
484648f6ff Stream create file tool output 2025-04-10 22:48:31 -04:00
Richard Feldman
b4572eec23 wip 2025-04-10 16:36:48 -04:00
Richard Feldman
35904fdd1b wip 2025-04-10 16:34:57 -04:00
Richard Feldman
1e64e6779b wip 2025-04-10 15:52:21 -04:00
Richard Feldman
73e1b9d4af wip 2025-04-10 15:13:24 -04:00
Ben Kunkle
53cde329da Clean up formatting code and add testing for formatting with multiple formatters (including code actions!) (#28457)
Release Notes:

- N/A

---------

Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
2025-04-10 15:32:43 +00:00
Cole Miller
b55b310ad0 Downgrade environment-related logging (#28509)
Closes #ISSUE

Release Notes:

- N/A *or* Added/Fixed/Improved ...
2025-04-10 14:38:29 +00:00
Cole Miller
8ab25e2bac Fix merge conflicts jumping (#28508)
This regressed in #27568, oops.

Release Notes:

- Fixed a bug causing conflicted files in the git panel to jump to the
"Tracked" section as soon as they were staged.
2025-04-10 14:29:36 +00:00
Peter Tripp
c10b1f7c61 docs: Update system requirements (#28504)
Explicitly note that macOS 12.x Monterey is required for screen sharing.

Release Notes:

- N/A
2025-04-10 14:18:12 +00:00
Thomas Mickley-Doyle
cb1ee01a66 agent: Add selected tool names to agent panel telemetry (#28247)
Release Notes:

- N/A
2025-04-10 08:43:52 -05:00
Agus Zubiaga
90bcde116f agent: Use current shell (#28470)
Release Notes:

- agent: Replaced the `bash` tool with a `terminal` tool, which uses the current
shell

---------

Co-authored-by: Bennet <bennet@zed.dev>
Co-authored-by: Antonio <antonio@zed.dev>
2025-04-09 23:38:36 -06:00
Antonio Scandurra
8ac378b86e Lay the groundwork for a Rust-based eval (#28488)
Also, we moved the logic for driving the agentic loop into `Thread` so
that we don't have to re-implement it.

Release Notes:

- N/A

---------

Co-authored-by: Nathan Sobo <nathan@zed.dev>
2025-04-10 04:45:27 +00:00
Bennet Bo Fenner
55760295d9 agent: Optimize render_markdown_block function (#28487)
Co-Authored-by: Agus <agus@zed.dev>

Closes #ISSUE

Release Notes:

- N/A

Co-authored-by: Agus <agus@zed.dev>
2025-04-10 04:29:47 +00:00
Antonio Scandurra
9dfb907f97 Revert "Add reminder message about system prompt" (#28482)
This breaks the agentic loop.
2025-04-09 22:12:33 -06:00
Danilo Leal
e20daa7639 agent: Fix toolbar spacing (#28485)
Release Notes:

- N/A
2025-04-10 01:08:07 -03:00
Danilo Leal
b46ab367ef agent: Add button to open thread as markdown (#28481)
<img
src="https://github.com/user-attachments/assets/92ca8f64-a949-4cc1-a657-3978a2c65839"
width="600"/>

Release Notes:

- agent: The action to open the currently active thread as Markdown is now
exposed in the UI.
2025-04-10 00:09:34 -03:00
5brian
12212dc329 agent: Prevent sending whitespace only messages (#28409)
Prevent this from happening when sending a prompt with only spaces and
newlines:


![image](https://github.com/user-attachments/assets/b275f4c5-c013-4695-8fb4-e3ad75d41750)
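
A minimal sketch of the kind of guard this implies (hypothetical helper; not the actual agent panel code):

```rust
/// Treat a prompt as empty if it contains only whitespace
/// (spaces, tabs, newlines) and skip sending it.
fn should_send(prompt: &str) -> bool {
    !prompt.trim().is_empty()
}

// should_send("  \n ") == false
// should_send("hello") == true
```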

Release Notes:

- agent: Prevented sending messages containing only whitespace.
2025-04-09 23:57:42 -03:00
Conrad Irwin
324e4658ba Reset modifiers when the window active state changes (#28348)
Closes #23449

Release Notes:

- Fixed a bug causing shift to get stuck down when the window focus
changes

---------

Co-authored-by: Dino <dinojoaocosta@gmail.com>
2025-04-09 20:55:19 -06:00
Conrad Irwin
ed500dacb6 Fix typo in symbolicate script (#28456)
Fix silly typo in symbolicate script

Release Notes:

- N/A
2025-04-09 20:55:10 -06:00
Danilo Leal
2f4b48129b agent: Collapse code blocks in the active thread (#28467)
Release Notes:

- N/A

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-04-09 23:44:02 -03:00
Dino
ed7c55a04e vim: Reset search range after substitute (#28403)
Update the `Vim::replace_command` method to reset the range in the
`BufferSearchBar` after running the replacement. This fixes the issue where
the number of matches shown in the search bar would be incorrect after the
replacement, because it only took into account the range in which the
replacement happened instead of the whole buffer.

In order to get this working a new
`BufferSearchBar::clear_search_within_ranges` method is introduced in
these changes.

Release Notes:

- Fixed the number of matches displayed in the search bar after running
vim's substitute command.
2025-04-09 20:43:53 -06:00
Richard Feldman
6db4ab381c Add code action tool and rename tool (#28453)
Having a separate rename tool seems to make the agent more likely to use
it compared to having it be part of the code actions tool.

Release Notes:

- Added code action tool and rename tool.
2025-04-09 22:38:01 -04:00
renovate[bot]
0e72a7e6ce Update Rust crate smallvec to v1.15.0 (#28469)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [smallvec](https://redirect.github.com/servo/rust-smallvec) |
workspace.dependencies | minor | `1.14.0` -> `1.15.0` |

---

### Release Notes

<details>
<summary>servo/rust-smallvec (smallvec)</summary>

###
[`v1.15.0`](https://redirect.github.com/servo/rust-smallvec/releases/tag/v1.15.0)

[Compare
Source](https://redirect.github.com/servo/rust-smallvec/compare/v1.14.0...v1.15.0)

#### What's Changed

- Fix typos by
[@&#8203;waywardmonkeys](https://redirect.github.com/waywardmonkeys) in
[https://github.com/servo/rust-smallvec/pull/373](https://redirect.github.com/servo/rust-smallvec/pull/373)
- Implement bincode2 encode/decode support for smallvec v1 by
[@&#8203;markbt](https://redirect.github.com/markbt) in
[https://github.com/servo/rust-smallvec/pull/375](https://redirect.github.com/servo/rust-smallvec/pull/375)

#### New Contributors

- [@&#8203;markbt](https://redirect.github.com/markbt) made their first
contribution in
[https://github.com/servo/rust-smallvec/pull/375](https://redirect.github.com/servo/rust-smallvec/pull/375)

**Full Changelog**:
https://github.com/servo/rust-smallvec/compare/v1.14.0...v1.15.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzOS4yMzguMCIsInVwZGF0ZWRJblZlciI6IjM5LjIzOC4wIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-09 17:37:32 -06:00
renovate[bot]
3dc3ab062d Update Rust crate prometheus to 0.14 (#28468)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [prometheus](https://redirect.github.com/tikv/rust-prometheus) |
dependencies | minor | `0.13` -> `0.14` |

---

### Release Notes

<details>
<summary>tikv/rust-prometheus (prometheus)</summary>

###
[`v0.14.0`](https://redirect.github.com/tikv/rust-prometheus/blob/HEAD/CHANGELOG.md#0140)

[Compare
Source](https://redirect.github.com/tikv/rust-prometheus/compare/v0.13.4...v0.14.0)

- API change: Use `AsRef<str>` for owned label values
([#&#8203;537](https://redirect.github.com/tikv/rust-prometheus/issues/537))

- Improvement: Hashing improvements
([#&#8203;532](https://redirect.github.com/tikv/rust-prometheus/issues/532))

- Dependency upgrade: Update `hyper` to 1.6
([#&#8203;524](https://redirect.github.com/tikv/rust-prometheus/issues/524))

- Dependency upgrade: Update `procfs` to 0.17
([#&#8203;543](https://redirect.github.com/tikv/rust-prometheus/issues/543))

- Dependency upgrade: Update `protobuf` to 3.7.2 for RUSTSEC-2024-0437
([#&#8203;541](https://redirect.github.com/tikv/rust-prometheus/issues/541))

- Dependency upgrade: Update `thiserror` to 2.0
([#&#8203;534](https://redirect.github.com/tikv/rust-prometheus/issues/534))

- Internal change: Fix LSP and Clippy warnings
([#&#8203;540](https://redirect.github.com/tikv/rust-prometheus/issues/540))

- Internal change: Bump MSRV to 1.81
([#&#8203;539](https://redirect.github.com/tikv/rust-prometheus/issues/539))

- Documentation: Fix `register_histogram_vec_with_registry` docstring
([#&#8203;528](https://redirect.github.com/tikv/rust-prometheus/issues/528))

- Documentation: Fix typos in static-metric docstrings
([#&#8203;479](https://redirect.github.com/tikv/rust-prometheus/issues/479))

- Documentation: Add missing `protobuf` feature to README list
([#&#8203;531](https://redirect.github.com/tikv/rust-prometheus/issues/531))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzOS4yMzguMCIsInVwZGF0ZWRJblZlciI6IjM5LjIzOC4wIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-09 17:37:15 -06:00
Michael Sloan
ed63f216e3 Remove "use_key_equivalents" from linux keymap as it does nothing (#28464)
Release Notes:

- N/A
2025-04-09 23:02:45 +00:00
Michael Sloan
ba767a1998 Fix directory context paths (#28459)
Release Notes:

- N/A
2025-04-09 21:40:46 +00:00
renovate[bot]
23c3f5f410 Update Rust crate indexmap to v2.9.0 (#28455)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [indexmap](https://redirect.github.com/indexmap-rs/indexmap) |
workspace.dependencies | minor | `2.8.0` -> `2.9.0` |

---

### Release Notes

<details>
<summary>indexmap-rs/indexmap (indexmap)</summary>

###
[`v2.9.0`](https://redirect.github.com/indexmap-rs/indexmap/blob/HEAD/RELEASES.md#290-2025-04-04)

[Compare
Source](https://redirect.github.com/indexmap-rs/indexmap/compare/2.8.0...2.9.0)

- Added a `get_disjoint_mut` method to `IndexMap`, matching Rust 1.86's
    `HashMap` method.
- Added a `get_disjoint_indices_mut` method to `IndexMap` and
`map::Slice`,
    matching Rust 1.86's `get_disjoint_mut` method on slices.
- Deprecated the `borsh` feature in favor of their own `indexmap`
feature,
    solving a cyclic dependency that occurred via `borsh-derive`.

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzOS4yMzguMCIsInVwZGF0ZWRJblZlciI6IjM5LjIzOC4wIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-09 14:52:40 -06:00
Vitaly Slobodin
b3be294c90 lsp_store: Preserve environment variables from ExtensionLspAdapter (#28173)
## Description

In https://github.com/zed-industries/zed/pull/27213, a new feature for
setting env variables for LSPs was added, but env vars passed from an
instance of `ExtensionLspAdapter` are now lost. This means that if an
extension returns an env variable like this:
```rust
zed::Command {
  command: some_command,
  args: some_args,
  env: vec![("A", "value_for_a")],
}
```

The env variable `A` will never be used by `LspStore`. This commit
preserves env variables passed from an instance of
`ExtensionLspAdapter`.

After this change, env variables are overwritten in the following order:

```plaintext
shell <- variables from an extension <- variables from settings
```
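
To illustrate that precedence, here is a minimal sketch of merging the three sources, assuming the arrow means each later source overrides the earlier one (hypothetical helper; not the actual `LspStore` code):

```rust
use std::collections::HashMap;

/// Merge env maps so that later sources override earlier ones:
/// shell <- extension <- settings.
fn merge_env(
    shell: HashMap<String, String>,
    extension: HashMap<String, String>,
    settings: HashMap<String, String>,
) -> HashMap<String, String> {
    let mut merged = shell;
    merged.extend(extension); // extension values override shell values
    merged.extend(settings); // settings values override both
    merged
}
```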

## How to reproduce

Allow any extension to return a `zed::Command` with environment
variables to Zed. You can use [this
branch](https://github.com/zed-extensions/ruby/pull/48) for the Ruby
extension:

1. Check out the branch and install the dev version of the Ruby
extension.
2. Ensure you have the `solargraph` LSP configured and enabled for the
Ruby extension. This LSP is enabled by default in Zed and in the Ruby
extension.
3. Make sure you don’t have `solargraph` installed in your user gemset.
4. Open any Ruby project, such as [this
one](https://github.com/vitallium/stimulus-lsp-error-zed).
5. Open a Ruby file and wait for the error message about failing to
start `solargraph`. It should look like this or something similar:

```
[2025-04-05T23:17:26+02:00 ERROR project::lsp_store] server stderr: "/Users/vslobodin/.local/share/mise/installs/ruby/3.4.1/lib/ruby/site_ruby/3.4.0/rubygems.rb:262:in 'Gem.find_spec_for_exe': can't find gem solargraph (>= 0.a) with executable solargraph (Gem::GemNotFoundException)\n\tfrom /Users/vslobodin/.local/share/mise/installs/ruby/3.4.1/lib/ruby/site_ruby/3.4.0/rubygems.rb:281:in 'Gem.activate_bin_path'\n"
```

This error occurs because the Ruby extension passes the `GEM_PATH`
environment variable to specify the location of Ruby gems. Without it,
Zed tries to spawn the `solargraph` gem in the user's gemset scope. Ruby
fails to start it because the `solargraph` gem is not installed in the
user gemset but in the extension directory. By setting the `GEM_PATH`
environment variable, Ruby searches additional locations to start the
`solargraph` LSP.

I hope I've described it correctly. Please let me know if you need more
information. Thanks!

Release Notes:

- Fixed the issue where environment variables from `ExtensionLspAdapter`
were lost
2025-04-09 14:50:50 -06:00
Dino
af5318df98 Update default vim substitute command behavior and add support for 'g' flag (#28138)
This Pull Request updates the default behavior of the substitute (`s`)
command in vim mode to only replace the next match by default, instead
of all, and replace all matches only when the `g` flag is provided,
making it more similar to NeoVim's behavior.

In order to achieve this, the following changes were introduced:

- Update `BufferSearchBar::replace_next` to be a public method, so it
can be called from `Vim::replace_command` .
- Update the `Replacement::parse` to set the `should_replace_all` field
to `false` by default, and only set it to `true` if the `'g'` flag is
present in the query.
- Add support for when the `Replacement.should_replace_all` is set to
`false` in `Vim::replace_command`, so as to have it only replace the
next occurrence instead of all occurrences in the line.
- Introduce `BufferSearchBar::select_first_match` so as to activate the
first match on the line under the cursor.
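
For illustration, a minimal sketch of the flag-parsing rule described above (hypothetical struct and parser; not the actual `Replacement` implementation):

```rust
// Replace-all is off unless the query carries a `g` flag, e.g. `:s/foo/bar/g`.
struct Replacement {
    search: String,
    replacement: String,
    should_replace_all: bool,
}

fn parse(query: &str) -> Option<Replacement> {
    // Expect `search/replacement[/flags]` after the `s` command.
    let mut parts = query.splitn(3, '/');
    let search = parts.next()?.to_string();
    let replacement = parts.next()?.to_string();
    let flags = parts.next().unwrap_or("");
    Some(Replacement {
        search,
        replacement,
        should_replace_all: flags.contains('g'),
    })
}
```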

Closes #24450 

Release Notes:

- Improved vim's substitute command so as to only replace the first
match by default, and replace all matches if the `'g'` flag is provided

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-04-09 14:34:51 -06:00
5brian
60c420a2da docs: Update vim features (#28360)
Follow up:
https://github.com/zed-industries/zed/pull/28044#issuecomment-2786769520

Adds
- Indent wise motions
- :ls
- :set

Release Notes:

- vim: Added documentation for indent-wise motions, `:ls`, and `:set`
2025-04-09 16:30:50 -04:00
5brian
ee6c33ffb3 Fix vim test keystroke (#28406)
I wrote the test wrongly in
https://github.com/zed-industries/zed/pull/28005:

It should be `$` instead of `shift-4`, so it was just yanking from the
middle of the line instead of the newline character. Fixed it and
regenerated it.

Release Notes:

- N/A
2025-04-09 14:29:03 -06:00
tidely
9ae4f4b158 gpui: Use BoolExt trait in more places (#28052)
Use the `BoolExt` trait which converts rust booleans to their objc
equivalent when applicable.


Release Notes:

- N/A
2025-04-09 14:28:15 -06:00
renovate[bot]
915a1cb116 Update actions/dependency-review-action digest to 67d4f4b (#28450)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[actions/dependency-review-action](https://redirect.github.com/actions/dependency-review-action)
| action | digest | `3b139cf` -> `67d4f4b` |

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzOS4yMzguMCIsInVwZGF0ZWRJblZlciI6IjM5LjIzOC4wIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-09 14:23:48 -06:00
renovate[bot]
aead0e11ff Update Rust crate mimalloc to v0.1.46 (#27964)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [mimalloc](https://redirect.github.com/purpleprotocol/mimalloc_rust) |
dependencies | patch | `0.1.45` -> `0.1.46` |

---

### Release Notes

<details>
<summary>purpleprotocol/mimalloc_rust (mimalloc)</summary>

###
[`v0.1.46`](https://redirect.github.com/purpleprotocol/mimalloc_rust/releases/tag/v0.1.46):
Version 0.1.46

[Compare
Source](https://redirect.github.com/purpleprotocol/mimalloc_rust/compare/v0.1.45...v0.1.46)

##### Changes

-   Fixed musl builds.

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzOS4yMjcuMyIsInVwZGF0ZWRJblZlciI6IjM5LjIzOC4wIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-09 14:23:19 -06:00
Anthony Eid
2752c08810 debugger: Add run to cursor and evaluate selected text actions (#28405)
## Summary

### Actions

This PR implements actions that allow a user to "run to cursor" and
"evaluate selected text" while there's an active debug session and
exposes the functionality to the UI as well.

- Run to cursor: Can be accessed by right clicking on the gutter
- Evaluate selected text: Can be accessed by selecting text then right
clicking in the editor

### Bug fixes

I also fixed these bugs as well

- Panic when using debugger: Stop action
- Debugger actions command palette filter not working properly in all
cases
- We stopped displaying the correct label in the session's context menu
when a session was terminated

Release Notes:

- N/A

---------

Co-authored-by: Max Brunsfeld <max@zed.dev>
Co-authored-by: Remco Smits <djsmits12@gmail.com>
2025-04-09 19:57:29 +00:00
Bennet Bo Fenner
780143298a agent: Fuzzy match on paths and symbols when typing @ (#28357)
Release Notes:

- agent: Improve fuzzy matching when using @-mentions
2025-04-09 19:00:23 +00:00
João Marcos
088d7c1342 Add sublime keybinding for git::Restore (#28444)
Release Notes:

- Sublime Keymap: Added `git::Restore` compatibility bind (revert_hunk).
Mac: `cmd-k cmd-z` and Linux: `ctrl-k ctrl-z`.
2025-04-09 14:57:15 -03:00
neunato
64de6bd2a8 Don't scroll the editor on select all matches (#28435)
Part of https://github.com/zed-industries/zed/issues/9309

Release Notes:

- Improved scroll behavior of `editor: select all matches`

---------

Co-authored-by: Kirill Bulatov <kirill@zed.dev>
2025-04-09 17:50:14 +00:00
Finn Evers
6aa0248ab3 docs: Update outdated keybind for opening extensions page (#28443)
This PR updates an outdated keybind for opening the extensions page (the
shown keybind opens the project panel instead) on the `Configuring
Languages` page.

It also updates a nearby keybind to use the preprocessor syntax instead.

Release Notes:

- N/A
2025-04-09 13:46:12 -04:00
Thomas Mickley-Doyle
342134fbab agent: Add reactions at the response level (#27958)
Release Notes:

- Added the user reaction (👍 or 👎) to each agent response.
- 👎 will trigger a comment box linked to the response

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Agus Zubiaga <hi@aguz.me>
2025-04-09 14:21:07 -03:00
João Marcos
b47aa33459 Remove actions UnfoldAt and FoldAt (#28442)
`UnfoldAt` and `FoldAt` are used internally and don't really work when
users try to trigger them; they do, however, appear in the command palette
and keybindings, misleading users into trying to use them.
Release Notes:

- Remove unused actions `UnfoldAt` and `FoldAt` (prefer `Fold` and
`Unfold`).
2025-04-09 17:13:41 +00:00
Michael Sloan
9f6c5e2877 Reapply "Use Project instead of Workspace in ContextStore (#28402)" (#28441)
Motivation for this change is to use `ContextStore` in headless
assistant, which requires it to not depend on UI entities like
`Workspace`.

This reapplies a change that was reverted in #28428, and fixes the panic.

Release Notes:

- N/A
2025-04-09 16:56:14 +00:00
Cole Miller
7bf6cd4ccf Fix ancestor git repositories going missing (#28436)
Closes #ISSUE

Release Notes:

- Fixed a bug that caused Zed to sometimes not discover git repositories
above a worktree root.
2025-04-09 12:44:29 -04:00
Peter Tripp
c7963c8a93 ci: Require workspace_hack for PR merge (#28431)
Release Notes:

- N/A
2025-04-09 16:43:38 +00:00
renovate[bot]
dd4629433b Update cachix/install-nix-action digest to d1ca217 (#27951)
This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[cachix/install-nix-action](https://redirect.github.com/cachix/install-nix-action)
| action | digest | `02a151a` -> `d1ca217` |

---

Release Notes:

- N/A

<!--renovate-debug:eyJjcmVhdGVkSW5WZXIiOiIzOS4yMjcuMyIsInVwZGF0ZWRJblZlciI6IjM5LjIyNy4zIiwidGFyZ2V0QnJhbmNoIjoibWFpbiIsImxhYmVscyI6W119-->

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2025-04-09 09:31:13 -07:00
Rodrigo Freire
2e56935997 Fix invalid number of space characters inserted for tab (#27336)
Closes #25941 

Release Notes:

- Corrected SoftTab indentation handling for lines with mixed spaces and
tabs across .go files and other file types.
- Renamed the editor test `test_tab_with_mixed_whitespace` to
`test_tab_with_mixed_whitespace_rust` as it only tested this behavior
for Rust buffers, which have auto-indentation support. This change
clarifies that the test does not cover default files without
language-specific features.
- Added a new editor test `test_tab_with_mixed_whitespace_txt` to ensure
proper coverage for files with no associated language.

While investigating the issue (initially thought to be Go-related), I
discovered that the underlying problem was how soft tabs were calculated
in `Editor::tab`, given that the problem could also be observed in `.txt`
files.

The correct soft tab indentation is now determined by treating all `\t`
characters before the cursor (on the same row) as new indentation
levels, resetting the remainder counter accordingly.
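
A minimal sketch of that rule (illustrative only; not the actual `Editor::tab` implementation):

```rust
// Walk the characters before the cursor on the current row, treating every
// literal '\t' as a jump to the next indentation level (which resets the
// remainder counter), then pad to the next tab stop.
fn spaces_for_soft_tab(chars_before_cursor: &str, tab_size: usize) -> usize {
    let mut column_in_tab_stop = 0;
    for ch in chars_before_cursor.chars() {
        if ch == '\t' {
            column_in_tab_stop = 0;
        } else {
            column_in_tab_stop = (column_in_tab_stop + 1) % tab_size;
        }
    }
    tab_size - column_in_tab_stop
}

// With tab_size = 4, a line starting with "\t " needs 3 more spaces to reach
// the next tab stop, not 4.
```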


https://github.com/user-attachments/assets/78192e98-2b81-43cb-ae6f-7c48cd17d168
2025-04-09 16:22:14 +00:00
Richard Feldman
e43a397f1d Make regex search tool optionally case-sensitive (#28427)
Release Notes:

- The agent panel's regex search tool is now optionally case-sensitive.
2025-04-09 16:21:21 +00:00
Richard Feldman
9d0fe164a7 Revert to fix panic in inline assistant (#28428)
This reverts commit f12a554f86, which
introduced a panic in inline assistant (cc @mgsloan) - I'm not sure what
the motivation was for that change, but I figure we can revert to fix
the inline assistant now and deal with that later. 😄

Panic was:

> Thread "main" panicked with "cannot read workspace::Workspace while it
is already being updated" at
/Users/rtfeldman/code/zed/crates/gpui/src/app/entity_map.rs:139:32


Release Notes:

- N/A
2025-04-09 11:24:53 -04:00
Kainoa Kanter
6d7fef6fd3 Add icon for Vyper files (#28307)
Release Notes:

- Added icon for Vyper (`.vy`, `.vyi`) files
2025-04-09 10:49:39 -04:00
5brian
b67d3fd21b git_ui: Show disabled states in context menu (#28288)
Other elements in the git panel are shown as disabled when an action is
not actionable (for example, stage all and commit). This updates the
context menu to match that behavior when an action does nothing.

|Before|After|
|--|--|
|![image](https://github.com/user-attachments/assets/e517f758-216f-4451-911b-7121dce0c53b)|![image](https://github.com/user-attachments/assets/a85905c1-2f42-44c3-8b11-2f93c8a6f686)|

Release Notes:

- Git: Improved the Git panel context menu to show actions with no
effect as disabled.
2025-04-09 10:46:21 -04:00
Agus Zubiaga
1cb4f8288d Fix bash tool output (#28391) 2025-04-09 08:20:24 -06:00
Richard Feldman
3a8fe4d973 Add reminder message about system prompt (#28344)
Trying out sending the model a reminder message about code blocks in the
system prompt. If this seems to work well, we can include more specific
reminder messages, e.g. tool-specific ones.

Release Notes:

- N/A
2025-04-09 10:09:48 -04:00
Joseph T. Lyons
9d6d152918 Bump Zed to v0.183 (#28419)
Release Notes:

- N/A
2025-04-09 09:11:25 -04:00
Joseph T. Lyons
31034f8296 Add toggle case command (#28415)
A small addition for those coming from JetBrains IDEs. A behavioral
detail: when any uppercase character is detected, the command defaults
to toggling to lower case.

> Note that when you apply the toggle case action to the CamelCase name
format, IntelliJ IDEA converts the name to the lower case.


https://www.jetbrains.com/help/idea/working-with-source-code.html#edit_code_fragments
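
An illustrative sketch of that rule (not Zed's actual implementation):

```rust
// If the selection contains any uppercase character, toggle the whole
// selection to lowercase; otherwise uppercase it.
fn toggle_case(text: &str) -> String {
    if text.chars().any(|c| c.is_uppercase()) {
        text.to_lowercase()
    } else {
        text.to_uppercase()
    }
}

// toggle_case("CamelCase") == "camelcase"
// toggle_case("snake_case") == "SNAKE_CASE"
```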

Release Notes:

- Added an `editor: toggle case` command. Use `cmd-shift-u` for macOS
and `ctrl-shift-u` for Linux, when using the `JetBrains` keymap.
2025-04-09 08:44:53 -04:00
Piotr Osiewicz
c441b651fa debugger: Add support for CodeLLDB (#28376)
Closes #ISSUE

Release Notes:

- N/A
2025-04-09 12:57:24 +02:00
127 changed files with 4982 additions and 2590 deletions

View File

@@ -225,7 +225,7 @@ jobs:
- name: Check for new vulnerable dependencies
if: github.event_name == 'pull_request'
uses: actions/dependency-review-action@3b139cfc5fae8b618d3eae3675e383bb1769c019 # v4
uses: actions/dependency-review-action@67d4f4bd7a9b17a0db54d2a7519187c65e339de8 # v4
with:
license-check: false
@@ -465,6 +465,7 @@ jobs:
- job_spec
- style
- migration_checks
# run_tests: If adding required tests, add them here and to script below.
- workspace_hack
- linux_tests
- build_remote_server
@@ -482,11 +483,14 @@ jobs:
# Only check test jobs if they were supposed to run
if [[ "${{ needs.job_spec.outputs.run_tests }}" == "true" ]]; then
[[ "${{ needs.workspace_hack.result }}" != 'success' ]] && { RET_CODE=1; echo "Workspace Hack failed"; }
[[ "${{ needs.macos_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "macOS tests failed"; }
[[ "${{ needs.linux_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "Linux tests failed"; }
[[ "${{ needs.windows_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "Windows tests failed"; }
[[ "${{ needs.windows_clippy.result }}" != 'success' ]] && { RET_CODE=1; echo "Windows clippy failed"; }
[[ "${{ needs.build_remote_server.result }}" != 'success' ]] && { RET_CODE=1; echo "Remote server build failed"; }
# This check is intentionally disabled. See: https://github.com/zed-industries/zed/pull/28431
# [[ "${{ needs.migration_checks.result }}" != 'success' ]] && { RET_CODE=1; echo "Migration Checks failed"; }
fi
if [[ "$RET_CODE" -eq 0 ]]; then
echo "All tests passed successfully!"
@@ -739,7 +743,7 @@ jobs:
echo "/nix/var/nix/profiles/default/bin" >> $GITHUB_PATH
echo "/Users/administrator/.nix-profile/bin" >> $GITHUB_PATH
- uses: cachix/install-nix-action@02a151ada4993995686f9ed4f1be7cfbb229e56f # v31
- uses: cachix/install-nix-action@d1ca217b388ee87b2507a9a93bf01368bde7cec2 # v31
if: ${{ matrix.system.install_nix }}
with:
github_access_token: ${{ secrets.GITHUB_TOKEN }}

View File

@@ -206,7 +206,7 @@ jobs:
echo "/nix/var/nix/profiles/default/bin" >> $GITHUB_PATH
echo "/Users/administrator/.nix-profile/bin" >> $GITHUB_PATH
- uses: cachix/install-nix-action@02a151ada4993995686f9ed4f1be7cfbb229e56f # v31
- uses: cachix/install-nix-action@d1ca217b388ee87b2507a9a93bf01368bde7cec2 # v31
if: ${{ matrix.system.install_nix }}
with:
github_access_token: ${{ secrets.GITHUB_TOKEN }}

72
Cargo.lock generated
View File

@@ -4600,6 +4600,7 @@ dependencies = [
"client",
"clock",
"collections",
"command_palette_hooks",
"convert_case 0.8.0",
"ctor",
"db",
@@ -4900,6 +4901,37 @@ dependencies = [
"num-traits",
]
[[package]]
name = "eval"
version = "0.1.0"
dependencies = [
"agent",
"anyhow",
"assistant_tool",
"assistant_tools",
"client",
"collections",
"context_server",
"dap",
"env_logger 0.11.8",
"fs",
"gpui",
"gpui_tokio",
"language",
"language_model",
"language_models",
"node_runtime",
"project",
"prompt_store",
"release_channel",
"reqwest_client",
"serde",
"settings",
"smol",
"toml 0.8.20",
"workspace-hack",
]
[[package]]
name = "evals"
version = "0.1.0"
@@ -7079,9 +7111,9 @@ dependencies = [
[[package]]
name = "indexmap"
version = "2.8.0"
version = "2.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3954d50fe15b02142bf25d3b8bdadb634ec3948f103d04ffe3031bc8fe9d7058"
checksum = "cea70ddb795996207ad57735b50c5982d8844f38ba9ee5f1aedcfb708a2aa11e"
dependencies = [
"equivalent",
"hashbrown 0.15.2",
@@ -7936,9 +7968,9 @@ checksum = "8355be11b20d696c8f18f6cc018c4e372165b1fa8126cef092399c9951984ffa"
[[package]]
name = "libmimalloc-sys"
version = "0.1.41"
version = "0.1.42"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6b20daca3a4ac14dbdc753c5e90fc7b490a48a9131daed3c9a9ced7b2defd37b"
checksum = "ec9d6fac27761dabcd4ee73571cdb06b7022dc99089acbe5435691edffaac0f4"
dependencies = [
"cc",
"libc",
@@ -8609,9 +8641,9 @@ dependencies = [
[[package]]
name = "mimalloc"
version = "0.1.45"
version = "0.1.46"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "03cb1f88093fe50061ca1195d336ffec131347c7b833db31f9ab62a2d1b7925f"
checksum = "995942f432bbb4822a7e9c3faa87a695185b0d09273ba85f097b54f4e458f2af"
dependencies = [
"libmimalloc-sys",
]
@@ -10949,9 +10981,9 @@ dependencies = [
[[package]]
name = "prometheus"
version = "0.13.4"
version = "0.14.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3d33c28a30771f7f96db69893f78b857f7450d7e0237e9c8fc6427a81bae7ed1"
checksum = "3ca5326d8d0b950a9acd87e6a3f94745394f62e4dae1b1ee22b2bc0c394af43a"
dependencies = [
"cfg-if",
"fnv",
@@ -10959,7 +10991,7 @@ dependencies = [
"memchr",
"parking_lot",
"protobuf",
"thiserror 1.0.69",
"thiserror 2.0.12",
]
[[package]]
@@ -11134,9 +11166,23 @@ dependencies = [
[[package]]
name = "protobuf"
version = "2.28.0"
version = "3.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "106dd99e98437432fed6519dedecfade6a06a73bb7b2a1e019fdd2bee5778d94"
checksum = "d65a1d4ddae7d8b5de68153b48f6aa3bba8cb002b243dbdbc55a5afbc98f99f4"
dependencies = [
"once_cell",
"protobuf-support",
"thiserror 1.0.69",
]
[[package]]
name = "protobuf-support"
version = "3.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3e36c2f31e0a47f9280fb347ef5e461ffcd2c52dd520d8e216b52f93b0b0d7d6"
dependencies = [
"thiserror 1.0.69",
]
[[package]]
name = "psm"
@@ -13221,9 +13267,9 @@ dependencies = [
[[package]]
name = "smallvec"
version = "1.14.0"
version = "1.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7fcf8323ef1faaee30a44a340193b1ac6814fd9b7b4e88e9d4519a3e4abe1cfd"
checksum = "8917285742e9f3e1683f0a9c4e6b57960b7314d0b08d30d1ecd426713ee2eee9"
dependencies = [
"serde",
]

View File

@@ -47,6 +47,7 @@ members = [
"crates/diagnostics",
"crates/docs_preprocessor",
"crates/editor",
"crates/eval",
"crates/evals",
"crates/extension",
"crates/extension_api",

View File

@@ -0,0 +1 @@
<svg width="16" height="16" fill="none" xml:space="preserve" xmlns="http://www.w3.org/2000/svg"><g style="fill:#000;fill-opacity:1" fill="#180c25"><path d="m-116.1-101.4-28.9-28.9a6.7 6.7 0 0 1-1.8-4.7v-41.2c0-2.4-2.4-4.8-4.8-4.8h-9.6a5.2 5.2 0 0 0-4.8 4.8v48c0 2.5 1 5 2.7 6.8l33.6 33.6a9.6 9.6 0 0 0 6.8 2.8h4.8c2.7 0 4.8-2.2 4.8-4.8v-4.8c0-2.5-1-5-2.8-6.8zM-79.6-176.2c0-2.4-2.4-4.8-4.8-4.8h-9.7a5.2 5.2 0 0 0-4.7 4.8v41.2c0 1.8-.8 3.5-2 4.7l-9.6 9.7a9.5 9.5 0 0 0-2.8 6.8v4.8c0 2.6 2.1 4.7 4.8 4.7h4.8c2.4 0 4.9-.9 6.7-2.8l14.4-14.3a9.6 9.6 0 0 0 2.8-6.8v-48z" style="fill:#000;fill-opacity:1;stroke-width:.255894" transform="translate(21.6 22.7) scale(.11067)"/></g></svg>


View File

@@ -644,7 +644,6 @@
},
{
"context": "AgentPanel && prompt_editor",
"use_key_equivalents": true,
"bindings": {
"cmd-n": "agent::NewPromptEditor",
"cmd-alt-t": "agent::NewThread"
@@ -660,7 +659,6 @@
},
{
"context": "EditMessageEditor > Editor",
"use_key_equivalents": true,
"bindings": {
"escape": "menu::Cancel",
"enter": "menu::Confirm",
@@ -669,7 +667,6 @@
},
{
"context": "AgentFeedbackMessageEditor > Editor",
"use_key_equivalents": true,
"bindings": {
"escape": "menu::Cancel",
"enter": "menu::Confirm",
@@ -801,7 +798,6 @@
},
{
"context": "GitPanel",
"use_key_equivalents": true,
"bindings": {
"ctrl-g ctrl-g": "git::Fetch",
"ctrl-g up": "git::Push",

View File

@@ -58,7 +58,8 @@
"ctrl-shift-home": "editor::SelectToBeginning",
"ctrl-shift-end": "editor::SelectToEnd",
"ctrl-f8": "editor::ToggleBreakpoint",
"ctrl-shift-f8": "editor::EditLogBreakpoint"
"ctrl-shift-f8": "editor::EditLogBreakpoint",
"ctrl-shift-u": "editor::ToggleCase"
}
},
{

View File

@@ -58,6 +58,12 @@
"ctrl-r": "outline::Toggle"
}
},
{
"context": "Editor && !agent_diff",
"bindings": {
"ctrl-k ctrl-z": "git::Restore"
}
},
{
"context": "Pane",
"bindings": {

View File

@@ -55,7 +55,8 @@
"cmd-shift-home": "editor::SelectToBeginning",
"cmd-shift-end": "editor::SelectToEnd",
"ctrl-f8": "editor::ToggleBreakpoint",
"ctrl-shift-f8": "editor::EditLogBreakpoint"
"ctrl-shift-f8": "editor::EditLogBreakpoint",
"cmd-shift-u": "editor::ToggleCase"
}
},
{

View File

@@ -60,6 +60,12 @@
"cmd-r": "outline::Toggle"
}
},
{
"context": "Editor && !agent_diff",
"bindings": {
"cmd-k cmd-z": "git::Restore"
}
},
{
"context": "Pane",
"bindings": {

View File

@@ -163,3 +163,8 @@ There are rules that apply to these root directories:
{{/if}}
{{/each}}
{{/if}}
<user_environment>
Operating System: {{os}} ({{arch}})
Shell: {{shell}}
</user_environment>

View File

@@ -656,8 +656,8 @@
"name": "Write",
"enable_all_context_servers": true,
"tools": {
"bash": true,
"batch_tool": true,
"terminal": true,
"code_actions": true,
"code_symbols": true,
"copy_path": false,
"create_file": true,
@@ -671,6 +671,7 @@
"path_search": true,
"read_file": true,
"regex_search": true,
"rename": true,
"symbol_info": true,
"thinking": true
}

View File

@@ -1,4 +1,3 @@
use crate::AssistantPanel;
use crate::context::{AssistantContext, ContextId};
use crate::context_picker::MentionLink;
use crate::thread::{
@@ -8,6 +7,7 @@ use crate::thread::{
use crate::thread_store::ThreadStore;
use crate::tool_use::{PendingToolUseStatus, ToolUse, ToolUseStatus};
use crate::ui::{AddedContext, AgentNotification, AgentNotificationEvent, ContextPill};
use crate::{AssistantPanel, OpenActiveThreadAsMarkdown};
use anyhow::Context as _;
use assistant_settings::{AssistantSettings, NotifyWhenAgentWaiting};
use collections::{HashMap, HashSet};
@@ -21,7 +21,7 @@ use gpui::{
linear_color_stop, linear_gradient, list, percentage, pulsating_between,
};
use language::{Buffer, LanguageRegistry};
use language_model::{ConfiguredModel, LanguageModelRegistry, LanguageModelToolUseId, Role};
use language_model::{LanguageModelRegistry, LanguageModelToolUseId, Role};
use markdown::parser::CodeBlockKind;
use markdown::{Markdown, MarkdownElement, MarkdownStyle, ParsedMarkdown, without_fences};
use project::ProjectItem as _;
@@ -57,12 +57,13 @@ pub struct ActiveThread {
editing_message: Option<(MessageId, EditMessageState)>,
expanded_tool_uses: HashMap<LanguageModelToolUseId, bool>,
expanded_thinking_segments: HashMap<(MessageId, usize), bool>,
expanded_code_blocks: HashMap<(MessageId, usize), bool>,
last_error: Option<ThreadError>,
notifications: Vec<WindowHandle<AgentNotification>>,
copied_code_block_ids: HashSet<(MessageId, usize)>,
_subscriptions: Vec<Subscription>,
notification_subscriptions: HashMap<WindowHandle<AgentNotification>, Vec<Subscription>>,
feedback_message_editor: Option<Entity<Editor>>,
open_feedback_editors: HashMap<MessageId, Entity<Editor>>,
}
struct RenderedMessage {
@@ -297,7 +298,7 @@ fn render_markdown_code_block(
codeblock_range: Range<usize>,
active_thread: Entity<ActiveThread>,
workspace: WeakEntity<Workspace>,
_window: &mut Window,
_window: &Window,
cx: &App,
) -> Div {
let label = match kind {
@@ -377,16 +378,20 @@ fn render_markdown_code_block(
.rounded_sm()
.hover(|item| item.bg(cx.theme().colors().element_hover.opacity(0.5)))
.tooltip(Tooltip::text("Jump to File"))
.children(
file_icons::FileIcons::get_icon(&path_range.path, cx)
.map(Icon::from_path)
.map(|icon| icon.color(Color::Muted).size(IconSize::XSmall)),
)
.child(content)
.child(
Icon::new(IconName::ArrowUpRight)
.size(IconSize::XSmall)
.color(Color::Ignored),
h_flex()
.gap_0p5()
.children(
file_icons::FileIcons::get_icon(&path_range.path, cx)
.map(Icon::from_path)
.map(|icon| icon.color(Color::Muted).size(IconSize::XSmall)),
)
.child(content)
.child(
Icon::new(IconName::ArrowUpRight)
.size(IconSize::XSmall)
.color(Color::Ignored),
),
)
.on_click({
let path_range = path_range.clone();
@@ -444,16 +449,32 @@ fn render_markdown_code_block(
}),
};
let codeblock_was_copied = active_thread
.read(cx)
.copied_code_block_ids
.contains(&(message_id, ix));
let is_expanded = active_thread
.read(cx)
.expanded_code_blocks
.get(&(message_id, ix))
.copied()
.unwrap_or(false);
let codeblock_header_bg = cx
.theme()
.colors()
.element_background
.blend(cx.theme().colors().editor_foreground.opacity(0.01));
let codeblock_was_copied = active_thread
.read(cx)
.copied_code_block_ids
.contains(&(message_id, ix));
const CODE_FENCES_LINE_COUNT: usize = 2;
const MAX_COLLAPSED_LINES: usize = 5;
let line_count = parsed_markdown.source()[codeblock_range.clone()]
.bytes()
.filter(|c| *c == b'\n')
.count()
.saturating_sub(CODE_FENCES_LINE_COUNT - 1);
let codeblock_header = h_flex()
.group("codeblock_header")
@@ -466,57 +487,104 @@ fn render_markdown_code_block(
.rounded_t_md()
.children(label)
.child(
div().visible_on_hover("codeblock_header").child(
IconButton::new(
("copy-markdown-code", ix),
if codeblock_was_copied {
IconName::Check
} else {
IconName::Copy
},
)
.icon_color(Color::Muted)
.shape(ui::IconButtonShape::Square)
.tooltip(Tooltip::text("Copy Code"))
.on_click({
let active_thread = active_thread.clone();
let parsed_markdown = parsed_markdown.clone();
move |_event, _window, cx| {
active_thread.update(cx, |this, cx| {
this.copied_code_block_ids.insert((message_id, ix));
h_flex()
.gap_1()
.child(
div().visible_on_hover("codeblock_header").child(
IconButton::new(
("copy-markdown-code", ix),
if codeblock_was_copied {
IconName::Check
} else {
IconName::Copy
},
)
.icon_color(Color::Muted)
.shape(ui::IconButtonShape::Square)
.tooltip(Tooltip::text("Copy Code"))
.on_click({
let active_thread = active_thread.clone();
let parsed_markdown = parsed_markdown.clone();
move |_event, _window, cx| {
active_thread.update(cx, |this, cx| {
this.copied_code_block_ids.insert((message_id, ix));
let code =
without_fences(&parsed_markdown.source()[codeblock_range.clone()])
let code = without_fences(
&parsed_markdown.source()[codeblock_range.clone()],
)
.to_string();
cx.write_to_clipboard(ClipboardItem::new_string(code.clone()));
cx.write_to_clipboard(ClipboardItem::new_string(code.clone()));
cx.spawn(async move |this, cx| {
cx.background_executor().timer(Duration::from_secs(2)).await;
cx.spawn(async move |this, cx| {
cx.background_executor()
.timer(Duration::from_secs(2))
.await;
cx.update(|cx| {
this.update(cx, |this, cx| {
this.copied_code_block_ids.remove(&(message_id, ix));
cx.notify();
cx.update(|cx| {
this.update(cx, |this, cx| {
this.copied_code_block_ids
.remove(&(message_id, ix));
cx.notify();
})
})
.ok();
})
})
.ok();
})
.detach();
});
}
.detach();
});
}
}),
),
)
.when(line_count > MAX_COLLAPSED_LINES, |header| {
header.child(
IconButton::new(
("expand-collapse-code", ix),
if is_expanded {
IconName::ChevronUp
} else {
IconName::ChevronDown
},
)
.icon_color(Color::Muted)
.shape(ui::IconButtonShape::Square)
.tooltip(Tooltip::text(if is_expanded {
"Collapse Code"
} else {
"Expand Code"
}))
.on_click({
let active_thread = active_thread.clone();
move |_event, _window, cx| {
active_thread.update(cx, |this, cx| {
let is_expanded = this
.expanded_code_blocks
.entry((message_id, ix))
.or_insert(false);
*is_expanded = !*is_expanded;
cx.notify();
});
}
}),
)
}),
),
);
v_flex()
.mb_2()
.relative()
.my_2()
.overflow_hidden()
.rounded_lg()
.border_1()
.border_color(cx.theme().colors().border_variant)
.bg(cx.theme().colors().editor_background)
.child(codeblock_header)
.when(line_count > MAX_COLLAPSED_LINES, |this| {
if is_expanded {
this.h_full()
} else {
this.max_h_40()
}
})
}
fn open_markdown_link(
@@ -626,6 +694,7 @@ impl ActiveThread {
rendered_tool_uses: HashMap::default(),
expanded_tool_uses: HashMap::default(),
expanded_thinking_segments: HashMap::default(),
expanded_code_blocks: HashMap::default(),
list_state: list_state.clone(),
scrollbar_state: ScrollbarState::new(list_state),
show_scrollbar: false,
@@ -636,7 +705,7 @@ impl ActiveThread {
notifications: Vec::new(),
_subscriptions: subscriptions,
notification_subscriptions: HashMap::default(),
feedback_message_editor: None,
open_feedback_editors: HashMap::default(),
};
for message in thread.read(cx).messages().cloned().collect::<Vec<_>>() {
@@ -754,7 +823,7 @@ impl ActiveThread {
fn handle_thread_event(
&mut self,
_thread: &Entity<Thread>,
thread: &Entity<Thread>,
event: &ThreadEvent,
window: &mut Window,
cx: &mut Context<Self>,
@@ -828,11 +897,7 @@ impl ActiveThread {
self.save_thread(cx);
cx.notify();
}
ThreadEvent::UsePendingTools => {
let tool_uses = self
.thread
.update(cx, |thread, cx| thread.use_pending_tools(cx));
ThreadEvent::UsePendingTools { tool_uses } => {
for tool_use in tool_uses {
self.render_tool_use_markdown(
tool_use.id.clone(),
@@ -844,11 +909,8 @@ impl ActiveThread {
}
}
ThreadEvent::ToolFinished {
pending_tool_use,
canceled,
..
pending_tool_use, ..
} => {
let canceled = *canceled;
if let Some(tool_use) = pending_tool_use {
self.render_tool_use_markdown(
tool_use.id.clone(),
@@ -862,20 +924,47 @@ impl ActiveThread {
cx,
);
}
if self.thread.read(cx).all_tools_finished() {
let model_registry = LanguageModelRegistry::read_global(cx);
if let Some(ConfiguredModel { model, .. }) = model_registry.default_model() {
self.thread.update(cx, |thread, cx| {
thread.attach_tool_results(cx);
if !canceled {
thread.send_to_model(model, RequestKind::Chat, cx);
}
});
}
}
}
ThreadEvent::CheckpointChanged => cx.notify(),
ThreadEvent::StreamedFileChunk { path, chunk } => {
let project = thread.read(cx).project().clone();
if let Some(project_path) = project.update(cx, |project, cx| {
project.find_project_path(path.as_ref(), cx)
}) {
log::info!("Appending {}B chunk to {path}", chunk.len());
let path = path.clone();
let chunk = chunk.clone();
cx.spawn(async move |_, cx| {
match project.update(cx, |project, cx| {
project.open_buffer(project_path.clone(), cx)
}) {
Ok(buffer_task) => {
if let Ok(buffer) = buffer_task.await {
// Append the text to the buffer
if let Err(e) = cx.update(|cx| {
buffer.update(cx, |buffer, cx| {
let point = buffer.snapshot().len();
buffer.edit([(point..point, chunk.as_str())], None, cx)
})
}) {
log::warn!("Failed to edit buffer: {}", e);
} else {
log::info!("Successfully appended chunk to file: {path}",);
}
}
}
Err(_) => {
let todo = (); // TODO record the error in the messages somehow.
todo!();
}
}
})
.detach();
}
}
}
}
@@ -939,7 +1028,7 @@ impl ActiveThread {
|this, _, event, window, cx| match event {
AgentNotificationEvent::Accepted => {
let handle = window.window_handle();
cx.activate(true); // Switch back to the Zed application
cx.activate(true);
let workspace_handle = this.workspace.clone();
@@ -1111,34 +1200,37 @@ impl ActiveThread {
fn handle_feedback_click(
&mut self,
message_id: MessageId,
feedback: ThreadFeedback,
window: &mut Window,
cx: &mut Context<Self>,
) {
let report = self.thread.update(cx, |thread, cx| {
thread.report_message_feedback(message_id, feedback, cx)
});
cx.spawn(async move |this, cx| {
report.await?;
this.update(cx, |_this, cx| cx.notify())
})
.detach_and_log_err(cx);
match feedback {
ThreadFeedback::Positive => {
let report = self
.thread
.update(cx, |thread, cx| thread.report_feedback(feedback, cx));
let this = cx.entity().downgrade();
cx.spawn(async move |_, cx| {
report.await?;
this.update(cx, |_this, cx| cx.notify())
})
.detach_and_log_err(cx);
self.open_feedback_editors.remove(&message_id);
}
ThreadFeedback::Negative => {
self.handle_show_feedback_comments(window, cx);
self.handle_show_feedback_comments(message_id, window, cx);
}
}
}
fn handle_show_feedback_comments(&mut self, window: &mut Window, cx: &mut Context<Self>) {
if self.feedback_message_editor.is_some() {
return;
}
fn handle_show_feedback_comments(
&mut self,
message_id: MessageId,
window: &mut Window,
cx: &mut Context<Self>,
) {
let buffer = cx.new(|cx| {
let empty_string = String::new();
MultiBuffer::singleton(cx.new(|cx| Buffer::local(empty_string, cx)), cx)
@@ -1160,34 +1252,47 @@ impl ActiveThread {
});
editor.read(cx).focus_handle(cx).focus(window);
self.feedback_message_editor = Some(editor);
self.open_feedback_editors.insert(message_id, editor);
cx.notify();
}
fn submit_feedback_message(&mut self, cx: &mut Context<Self>) {
let Some(editor) = self.feedback_message_editor.clone() else {
fn submit_feedback_message(&mut self, message_id: MessageId, cx: &mut Context<Self>) {
let Some(editor) = self.open_feedback_editors.get(&message_id) else {
return;
};
let report_task = self.thread.update(cx, |thread, cx| {
thread.report_feedback(ThreadFeedback::Negative, cx)
thread.report_message_feedback(message_id, ThreadFeedback::Negative, cx)
});
let comments = editor.read(cx).text(cx);
if !comments.is_empty() {
let thread_id = self.thread.read(cx).id().clone();
let comments_value = String::from(comments.as_str());
telemetry::event!("Assistant Thread Feedback Comments", thread_id, comments);
let message_content = self
.thread
.read(cx)
.message(message_id)
.map(|msg| msg.to_string())
.unwrap_or_default();
telemetry::event!(
"Assistant Thread Feedback Comments",
thread_id,
message_id = message_id.0,
message_content,
comments = comments_value
);
self.open_feedback_editors.remove(&message_id);
cx.spawn(async move |this, cx| {
report_task.await?;
this.update(cx, |_this, cx| cx.notify())
})
.detach_and_log_err(cx);
}
self.feedback_message_editor = None;
let this = cx.entity().downgrade();
cx.spawn(async move |_, cx| {
report_task.await?;
this.update(cx, |_this, cx| cx.notify())
})
.detach_and_log_err(cx);
}
fn render_message(&self, ix: usize, window: &mut Window, cx: &mut Context<Self>) -> AnyElement {
@@ -1214,7 +1319,18 @@ impl ActiveThread {
let is_first_message = ix == 0;
let is_last_message = ix == self.messages.len() - 1;
let show_feedback = is_last_message && message.role != Role::User;
let show_feedback = (!is_generating && is_last_message && message.role != Role::User)
|| self.messages.get(ix + 1).map_or(false, |next_id| {
self.thread
.read(cx)
.message(*next_id)
.map_or(false, |next_message| {
next_message.role == Role::User
&& thread.tool_uses_for_message(*next_id, cx).is_empty()
&& thread.tool_results_for_message(*next_id).is_empty()
})
});
let needs_confirmation = tool_uses.iter().any(|tool_use| tool_use.needs_confirmation);
@@ -1287,8 +1403,17 @@ impl ActiveThread {
let editor_bg_color = colors.editor_background;
let bg_user_message_header = editor_bg_color.blend(active_color.opacity(0.25));
let feedback_container = h_flex().pt_2().pb_4().px_4().gap_1().justify_between();
let feedback_items = match self.thread.read(cx).feedback() {
let open_as_markdown = IconButton::new("open-as-markdown", IconName::FileCode)
.shape(ui::IconButtonShape::Square)
.icon_size(IconSize::XSmall)
.icon_color(Color::Ignored)
.tooltip(Tooltip::text("Open Thread as Markdown"))
.on_click(|_event, window, cx| {
window.dispatch_action(Box::new(OpenActiveThreadAsMarkdown), cx)
});
let feedback_container = h_flex().py_2().px_4().gap_1().justify_between();
let feedback_items = match self.thread.read(cx).message_feedback(message_id) {
Some(feedback) => feedback_container
.child(
Label::new(match feedback {
@@ -1302,18 +1427,20 @@ impl ActiveThread {
)
.child(
h_flex()
.pr_1()
.gap_1()
.child(
IconButton::new("feedback-thumbs-up", IconName::ThumbsUp)
IconButton::new(("feedback-thumbs-up", ix), IconName::ThumbsUp)
.shape(ui::IconButtonShape::Square)
.icon_size(IconSize::XSmall)
.icon_color(match feedback {
ThreadFeedback::Positive => Color::Accent,
ThreadFeedback::Negative => Color::Ignored,
})
.shape(ui::IconButtonShape::Square)
.tooltip(Tooltip::text("Helpful Response"))
.on_click(cx.listener(move |this, _, window, cx| {
this.handle_feedback_click(
message_id,
ThreadFeedback::Positive,
window,
cx,
@@ -1321,22 +1448,24 @@ impl ActiveThread {
})),
)
.child(
IconButton::new("feedback-thumbs-down", IconName::ThumbsDown)
IconButton::new(("feedback-thumbs-down", ix), IconName::ThumbsDown)
.shape(ui::IconButtonShape::Square)
.icon_size(IconSize::XSmall)
.icon_color(match feedback {
ThreadFeedback::Positive => Color::Ignored,
ThreadFeedback::Negative => Color::Accent,
})
.shape(ui::IconButtonShape::Square)
.tooltip(Tooltip::text("Not Helpful"))
.on_click(cx.listener(move |this, _, window, cx| {
this.handle_feedback_click(
message_id,
ThreadFeedback::Negative,
window,
cx,
);
})),
),
)
.child(open_as_markdown),
)
.into_any_element(),
None => feedback_container
@@ -1349,15 +1478,17 @@ impl ActiveThread {
)
.child(
h_flex()
.pr_1()
.gap_1()
.child(
IconButton::new("feedback-thumbs-up", IconName::ThumbsUp)
IconButton::new(("feedback-thumbs-up", ix), IconName::ThumbsUp)
.icon_size(IconSize::XSmall)
.icon_color(Color::Ignored)
.shape(ui::IconButtonShape::Square)
.tooltip(Tooltip::text("Helpful Response"))
.on_click(cx.listener(move |this, _, window, cx| {
this.handle_feedback_click(
message_id,
ThreadFeedback::Positive,
window,
cx,
@@ -1365,19 +1496,21 @@ impl ActiveThread {
})),
)
.child(
IconButton::new("feedback-thumbs-down", IconName::ThumbsDown)
IconButton::new(("feedback-thumbs-down", ix), IconName::ThumbsDown)
.icon_size(IconSize::XSmall)
.icon_color(Color::Ignored)
.shape(ui::IconButtonShape::Square)
.tooltip(Tooltip::text("Not Helpful"))
.on_click(cx.listener(move |this, _, window, cx| {
this.handle_feedback_click(
message_id,
ThreadFeedback::Negative,
window,
cx,
);
})),
),
)
.child(open_as_markdown),
)
.into_any_element(),
};
@@ -1669,31 +1802,31 @@ impl ActiveThread {
.child(generating_label.unwrap()),
)
})
.when(show_feedback && !is_generating, |parent| {
.when(show_feedback, move |parent| {
parent.child(feedback_items).when_some(
self.feedback_message_editor.clone(),
|parent, feedback_editor| {
self.open_feedback_editors.get(&message_id),
move |parent, feedback_editor| {
let focus_handle = feedback_editor.focus_handle(cx);
parent.child(
v_flex()
.key_context("AgentFeedbackMessageEditor")
.on_action(cx.listener(|this, _: &menu::Cancel, _, cx| {
this.feedback_message_editor = None;
.on_action(cx.listener(move |this, _: &menu::Cancel, _, cx| {
this.open_feedback_editors.remove(&message_id);
cx.notify();
}))
.on_action(cx.listener(|this, _: &menu::Confirm, _, cx| {
this.submit_feedback_message(cx);
.on_action(cx.listener(move |this, _: &menu::Confirm, _, cx| {
this.submit_feedback_message(message_id, cx);
cx.notify();
}))
.on_action(cx.listener(Self::confirm_editing_message))
.my_3()
.mb_2()
.mx_4()
.p_2()
.rounded_md()
.border_1()
.border_color(cx.theme().colors().border)
.bg(cx.theme().colors().editor_background)
.child(feedback_editor)
.child(feedback_editor.clone())
.child(
h_flex()
.gap_1()
@@ -1710,10 +1843,13 @@ impl ActiveThread {
)
.map(|kb| kb.size(rems_from_px(10.))),
)
.on_click(cx.listener(|this, _, _, cx| {
this.feedback_message_editor = None;
cx.notify();
})),
.on_click(cx.listener(
move |this, _, _window, cx| {
this.open_feedback_editors
.remove(&message_id);
cx.notify();
},
)),
)
.child(
Button::new(
@@ -1732,9 +1868,9 @@ impl ActiveThread {
.map(|kb| kb.size(rems_from_px(10.))),
)
.on_click(
cx.listener(|this, _, _, cx| {
this.submit_feedback_message(cx);
cx.notify();
cx.listener(move |this, _, _window, cx| {
this.submit_feedback_message(message_id, cx);
cx.notify()
}),
),
),
@@ -1799,10 +1935,10 @@ impl ActiveThread {
render: Arc::new({
let workspace = workspace.clone();
let active_thread = cx.entity();
move |id, kind, parsed_markdown, range, window, cx| {
move |kind, parsed_markdown, range, window, cx| {
render_markdown_code_block(
message_id,
id,
range.start,
kind,
parsed_markdown,
range,
@@ -1813,6 +1949,44 @@ impl ActiveThread {
)
}
}),
transform: Some(Arc::new({
let active_thread = cx.entity();
move |el, range, _, cx| {
let is_expanded = active_thread
.read(cx)
.expanded_code_blocks
.get(&(message_id, range.start))
.copied()
.unwrap_or(false);
if is_expanded {
return el;
}
el.child(
div()
.absolute()
.bottom_0()
.left_0()
.w_full()
.h_1_4()
.rounded_b_lg()
.bg(gpui::linear_gradient(
0.,
gpui::linear_color_stop(
cx.theme().colors().editor_background,
0.,
),
gpui::linear_color_stop(
cx.theme()
.colors()
.editor_background
.opacity(0.),
1.,
),
)),
)
}
})),
})
.on_url_click({
let workspace = self.workspace.clone();
@@ -2804,10 +2978,10 @@ pub(crate) fn open_context(
}
}
AssistantContext::Directory(directory_context) => {
let path = directory_context.project_path.clone();
let project_path = directory_context.project_path(cx);
workspace.update(cx, |workspace, cx| {
workspace.project().update(cx, |project, cx| {
if let Some(entry) = project.entry_for_path(&path, cx) {
if let Some(entry) = project.entry_for_path(&project_path, cx) {
cx.emit(project::Event::RevealInProjectPanel(entry.id));
}
})

View File

@@ -863,7 +863,11 @@ impl AssistantPanel {
.truncate()
.into_any_element()
} else {
change_title_editor.clone().into_any_element()
div()
.ml_2()
.w_full()
.child(change_title_editor.clone())
.into_any_element()
}
}
ActiveView::PromptEditor => {
@@ -1624,7 +1628,21 @@ impl prompt_library::InlineAssistDelegate for PromptLibraryInlineAssist {
cx: &mut Context<PromptLibrary>,
) {
InlineAssistant::update_global(cx, |assistant, cx| {
assistant.assist(&prompt_editor, self.workspace.clone(), None, window, cx)
let Some(project) = self
.workspace
.upgrade()
.map(|workspace| workspace.read(cx).project().downgrade())
else {
return;
};
assistant.assist(
&prompt_editor,
self.workspace.clone(),
project,
None,
window,
cx,
)
})
}

View File

@@ -1,9 +1,9 @@
use std::{ops::Range, sync::Arc};
use std::{ops::Range, path::Path, sync::Arc};
use gpui::{App, Entity, SharedString};
use language::{Buffer, File};
use language_model::LanguageModelRequestMessage;
use project::ProjectPath;
use project::{ProjectPath, Worktree};
use serde::{Deserialize, Serialize};
use text::{Anchor, BufferId};
use ui::IconName;
@@ -69,10 +69,21 @@ pub struct FileContext {
#[derive(Debug, Clone)]
pub struct DirectoryContext {
pub id: ContextId,
pub project_path: ProjectPath,
pub worktree: Entity<Worktree>,
pub path: Arc<Path>,
/// Buffers of the files within the directory.
pub context_buffers: Vec<ContextBuffer>,
}
impl DirectoryContext {
pub fn project_path(&self, cx: &App) -> ProjectPath {
ProjectPath {
worktree_id: self.worktree.read(cx).id(),
path: self.path.clone(),
}
}
}
#[derive(Debug, Clone)]
pub struct SymbolContext {
pub id: ContextId,
@@ -86,12 +97,11 @@ pub struct FetchedUrlContext {
pub text: SharedString,
}
// TODO: Model<Thread> holds onto the thread even if the thread is deleted. Can either handle this
// explicitly or have a WeakModel<Thread> and remove during snapshot.
#[derive(Debug, Clone)]
pub struct ThreadContext {
pub id: ContextId,
// TODO: Entity<Thread> holds onto the thread even if the thread is deleted. Should probably be
// a WeakEntity and handle removal from the UI when it has dropped.
pub thread: Entity<Thread>,
pub text: SharedString,
}
@@ -105,12 +115,11 @@ impl ThreadContext {
}
}
// TODO: Model<Buffer> holds onto the buffer even if the file is deleted and closed. Should remove
// the context from the message editor in this case.
#[derive(Clone)]
pub struct ContextBuffer {
pub id: BufferId,
// TODO: Entity<Buffer> holds onto the buffer even if the buffer is deleted. Should probably be
// a WeakEntity and handle removal from the UI when it has dropped.
pub buffer: Entity<Buffer>,
pub file: Arc<dyn File>,
pub version: clock::Global,

View File

@@ -289,12 +289,14 @@ impl ContextPicker {
path_prefix,
} => {
let context_store = self.context_store.clone();
let worktree_id = project_path.worktree_id;
let path = project_path.path.clone();
ContextMenuItem::custom_entry(
move |_window, cx| {
render_file_context_entry(
ElementId::NamedInteger("ctx-recent".into(), ix),
worktree_id,
&path,
&path_prefix,
false,
@@ -466,7 +468,7 @@ fn recent_context_picker_entries(
recent.extend(
workspace
.recent_navigation_history_iter(cx)
.filter(|(path, _)| !current_files.contains(&path.path.to_path_buf()))
.filter(|(path, _)| !current_files.contains(path))
.take(4)
.filter_map(|(project_path, _)| {
project

View File

@@ -18,16 +18,133 @@ use text::{Anchor, ToPoint};
use ui::prelude::*;
use workspace::Workspace;
use crate::context::AssistantContext;
use crate::context_picker::file_context_picker::search_files;
use crate::context_picker::symbol_context_picker::search_symbols;
use crate::context_store::ContextStore;
use crate::thread_store::ThreadStore;
use super::fetch_context_picker::fetch_url_content;
use super::thread_context_picker::ThreadContextEntry;
use super::file_context_picker::FileMatch;
use super::symbol_context_picker::SymbolMatch;
use super::thread_context_picker::{ThreadContextEntry, ThreadMatch, search_threads};
use super::{
ContextPickerMode, MentionLink, recent_context_picker_entries, supported_context_picker_modes,
ContextPickerMode, MentionLink, RecentEntry, recent_context_picker_entries,
supported_context_picker_modes,
};
pub(crate) enum Match {
Symbol(SymbolMatch),
File(FileMatch),
Thread(ThreadMatch),
Fetch(SharedString),
Mode(ContextPickerMode),
}
fn search(
mode: Option<ContextPickerMode>,
query: String,
cancellation_flag: Arc<AtomicBool>,
recent_entries: Vec<RecentEntry>,
thread_store: Option<WeakEntity<ThreadStore>>,
workspace: Entity<Workspace>,
cx: &mut App,
) -> Task<Vec<Match>> {
match mode {
Some(ContextPickerMode::File) => {
let search_files_task =
search_files(query.clone(), cancellation_flag.clone(), &workspace, cx);
cx.background_spawn(async move {
search_files_task
.await
.into_iter()
.map(Match::File)
.collect()
})
}
Some(ContextPickerMode::Symbol) => {
let search_symbols_task =
search_symbols(query.clone(), cancellation_flag.clone(), &workspace, cx);
cx.background_spawn(async move {
search_symbols_task
.await
.into_iter()
.map(Match::Symbol)
.collect()
})
}
Some(ContextPickerMode::Thread) => {
if let Some(thread_store) = thread_store.as_ref().and_then(|t| t.upgrade()) {
let search_threads_task =
search_threads(query.clone(), cancellation_flag.clone(), thread_store, cx);
cx.background_spawn(async move {
search_threads_task
.await
.into_iter()
.map(Match::Thread)
.collect()
})
} else {
Task::ready(Vec::new())
}
}
Some(ContextPickerMode::Fetch) => {
if !query.is_empty() {
Task::ready(vec![Match::Fetch(query.into())])
} else {
Task::ready(Vec::new())
}
}
None => {
if query.is_empty() {
let mut matches = recent_entries
.into_iter()
.map(|entry| match entry {
super::RecentEntry::File {
project_path,
path_prefix,
} => Match::File(FileMatch {
mat: fuzzy::PathMatch {
score: 1.,
positions: Vec::new(),
worktree_id: project_path.worktree_id.to_usize(),
path: project_path.path,
path_prefix,
is_dir: false,
distance_to_relative_ancestor: 0,
},
is_recent: true,
}),
super::RecentEntry::Thread(thread_context_entry) => {
Match::Thread(ThreadMatch {
thread: thread_context_entry,
is_recent: true,
})
}
})
.collect::<Vec<_>>();
matches.extend(
supported_context_picker_modes(&thread_store)
.into_iter()
.map(Match::Mode),
);
Task::ready(matches)
} else {
let search_files_task =
search_files(query.clone(), cancellation_flag.clone(), &workspace, cx);
cx.background_spawn(async move {
search_files_task
.await
.into_iter()
.map(Match::File)
.collect()
})
}
}
}
}
pub struct ContextPickerCompletionProvider {
workspace: WeakEntity<Workspace>,
context_store: WeakEntity<ContextStore>,
@@ -50,97 +167,20 @@ impl ContextPickerCompletionProvider {
}
}
fn default_completions(
excerpt_id: ExcerptId,
source_range: Range<Anchor>,
context_store: Entity<ContextStore>,
thread_store: Option<WeakEntity<ThreadStore>>,
editor: Entity<Editor>,
workspace: Entity<Workspace>,
cx: &App,
) -> Vec<Completion> {
let mut completions = Vec::new();
completions.extend(
recent_context_picker_entries(
context_store.clone(),
thread_store.clone(),
workspace.clone(),
cx,
)
.iter()
.filter_map(|entry| match entry {
super::RecentEntry::File {
project_path,
path_prefix,
} => Some(Self::completion_for_path(
project_path.clone(),
path_prefix,
true,
false,
excerpt_id,
source_range.clone(),
editor.clone(),
context_store.clone(),
cx,
)),
super::RecentEntry::Thread(thread_context_entry) => {
let thread_store = thread_store
.as_ref()
.and_then(|thread_store| thread_store.upgrade())?;
Some(Self::completion_for_thread(
thread_context_entry.clone(),
excerpt_id,
source_range.clone(),
true,
editor.clone(),
context_store.clone(),
thread_store,
))
}
}),
);
completions.extend(
supported_context_picker_modes(&thread_store)
.iter()
.map(|mode| {
Completion {
replace_range: source_range.clone(),
new_text: format!("@{} ", mode.mention_prefix()),
label: CodeLabel::plain(mode.label().to_string(), None),
icon_path: Some(mode.icon().path().into()),
documentation: None,
source: project::CompletionSource::Custom,
insert_text_mode: None,
// This ensures that when a user accepts this completion, the
// completion menu will still be shown after "@category " is
// inserted
confirm: Some(Arc::new(|_, _, _| true)),
}
}),
);
completions
}
fn build_code_label_for_full_path(
file_name: &str,
directory: Option<&str>,
cx: &App,
) -> CodeLabel {
let comment_id = cx.theme().syntax().highlight_id("comment").map(HighlightId);
let mut label = CodeLabel::default();
label.push_str(&file_name, None);
label.push_str(" ", None);
if let Some(directory) = directory {
label.push_str(&directory, comment_id);
fn completion_for_mode(source_range: Range<Anchor>, mode: ContextPickerMode) -> Completion {
Completion {
replace_range: source_range.clone(),
new_text: format!("@{} ", mode.mention_prefix()),
label: CodeLabel::plain(mode.label().to_string(), None),
icon_path: Some(mode.icon().path().into()),
documentation: None,
source: project::CompletionSource::Custom,
insert_text_mode: None,
// This ensures that when a user accepts this completion, the
// completion menu will still be shown after "@category " is
// inserted
confirm: Some(Arc::new(|_, _, _| true)),
}
label.filter_range = 0..label.text().len();
label
}
fn completion_for_thread(
@@ -261,11 +301,8 @@ impl ContextPickerCompletionProvider {
path_prefix,
);
let label = Self::build_code_label_for_full_path(
&file_name,
directory.as_ref().map(|s| s.as_ref()),
cx,
);
let label =
build_code_label_for_full_path(&file_name, directory.as_ref().map(|s| s.as_ref()), cx);
let full_path = if let Some(directory) = directory {
format!("{}{}", directory, file_name)
} else {
@@ -382,6 +419,22 @@ impl ContextPickerCompletionProvider {
}
}
fn build_code_label_for_full_path(file_name: &str, directory: Option<&str>, cx: &App) -> CodeLabel {
let comment_id = cx.theme().syntax().highlight_id("comment").map(HighlightId);
let mut label = CodeLabel::default();
label.push_str(&file_name, None);
label.push_str(" ", None);
if let Some(directory) = directory {
label.push_str(&directory, comment_id);
}
label.filter_range = 0..label.text().len();
label
}
impl CompletionProvider for ContextPickerCompletionProvider {
fn completions(
&self,
@@ -404,10 +457,9 @@ impl CompletionProvider for ContextPickerCompletionProvider {
return Task::ready(Ok(None));
};
let Some(workspace) = self.workspace.upgrade() else {
return Task::ready(Ok(None));
};
let Some(context_store) = self.context_store.upgrade() else {
let Some((workspace, context_store)) =
self.workspace.upgrade().zip(self.context_store.upgrade())
else {
return Task::ready(Ok(None));
};
@@ -419,154 +471,89 @@ impl CompletionProvider for ContextPickerCompletionProvider {
let editor = self.editor.clone();
let http_client = workspace.read(cx).client().http_client().clone();
let MentionCompletion { mode, argument, .. } = state;
let query = argument.unwrap_or_else(|| "".to_string());
let recent_entries = recent_context_picker_entries(
context_store.clone(),
thread_store.clone(),
workspace.clone(),
cx,
);
let search_task = search(
mode,
query,
Arc::<AtomicBool>::default(),
recent_entries,
thread_store.clone(),
workspace.clone(),
cx,
);
cx.spawn(async move |_, cx| {
let mut completions = Vec::new();
let matches = search_task.await;
let Some(editor) = editor.upgrade() else {
return Ok(None);
};
let MentionCompletion { mode, argument, .. } = state;
let query = argument.unwrap_or_else(|| "".to_string());
match mode {
Some(ContextPickerMode::File) => {
let path_matches = cx
.update(|cx| {
super::file_context_picker::search_paths(
query,
Arc::<AtomicBool>::default(),
&workspace,
cx,
)
})?
.await;
if let Some(editor) = editor.upgrade() {
completions.reserve(path_matches.len());
cx.update(|cx| {
completions.extend(path_matches.iter().map(|mat| {
Self::completion_for_path(
ProjectPath {
worktree_id: WorktreeId::from_usize(mat.worktree_id),
path: mat.path.clone(),
},
&mat.path_prefix,
false,
mat.is_dir,
excerpt_id,
source_range.clone(),
editor.clone(),
context_store.clone(),
cx,
)
}));
})?;
}
}
Some(ContextPickerMode::Symbol) => {
if let Some(editor) = editor.upgrade() {
let symbol_matches = cx
.update(|cx| {
super::symbol_context_picker::search_symbols(
query,
Arc::new(AtomicBool::default()),
&workspace,
cx,
)
})?
.await?;
cx.update(|cx| {
completions.extend(symbol_matches.into_iter().filter_map(
|(_, symbol)| {
Self::completion_for_symbol(
symbol,
excerpt_id,
source_range.clone(),
editor.clone(),
context_store.clone(),
workspace.clone(),
cx,
)
Ok(Some(cx.update(|cx| {
matches
.into_iter()
.filter_map(|mat| match mat {
Match::File(FileMatch { mat, is_recent }) => {
Some(Self::completion_for_path(
ProjectPath {
worktree_id: WorktreeId::from_usize(mat.worktree_id),
path: mat.path.clone(),
},
));
})?;
}
}
Some(ContextPickerMode::Fetch) => {
if let Some(editor) = editor.upgrade() {
if !query.is_empty() {
completions.push(Self::completion_for_fetch(
source_range.clone(),
query.into(),
&mat.path_prefix,
is_recent,
mat.is_dir,
excerpt_id,
source_range.clone(),
editor.clone(),
context_store.clone(),
http_client.clone(),
));
}
context_store.update(cx, |store, _| {
let urls = store.context().iter().filter_map(|context| {
if let AssistantContext::FetchedUrl(context) = context {
Some(context.url.clone())
} else {
None
}
});
for url in urls {
completions.push(Self::completion_for_fetch(
source_range.clone(),
url,
excerpt_id,
editor.clone(),
context_store.clone(),
http_client.clone(),
));
}
})?;
}
}
Some(ContextPickerMode::Thread) => {
if let Some((thread_store, editor)) = thread_store
.and_then(|thread_store| thread_store.upgrade())
.zip(editor.upgrade())
{
let threads = cx
.update(|cx| {
super::thread_context_picker::search_threads(
query,
thread_store.clone(),
cx,
)
})?
.await;
for thread in threads {
completions.push(Self::completion_for_thread(
thread.clone(),
excerpt_id,
source_range.clone(),
false,
editor.clone(),
context_store.clone(),
thread_store.clone(),
));
}
}
}
None => {
cx.update(|cx| {
if let Some(editor) = editor.upgrade() {
completions.extend(Self::default_completions(
excerpt_id,
source_range.clone(),
context_store.clone(),
thread_store.clone(),
editor,
workspace.clone(),
cx,
));
))
}
})?;
}
}
Ok(Some(completions))
Match::Symbol(SymbolMatch { symbol, .. }) => Self::completion_for_symbol(
symbol,
excerpt_id,
source_range.clone(),
editor.clone(),
context_store.clone(),
workspace.clone(),
cx,
),
Match::Thread(ThreadMatch {
thread, is_recent, ..
}) => {
let thread_store = thread_store.as_ref().and_then(|t| t.upgrade())?;
Some(Self::completion_for_thread(
thread,
excerpt_id,
source_range.clone(),
is_recent,
editor.clone(),
context_store.clone(),
thread_store,
))
}
Match::Fetch(url) => Some(Self::completion_for_fetch(
source_range.clone(),
url,
excerpt_id,
editor.clone(),
context_store.clone(),
http_client.clone(),
)),
Match::Mode(mode) => {
Some(Self::completion_for_mode(source_range.clone(), mode))
}
})
.collect()
})?))
})
}
@@ -676,7 +663,12 @@ impl MentionCompletion {
let mut end = last_mention_start + 1;
if let Some(mode_text) = parts.next() {
end += mode_text.len();
mode = ContextPickerMode::try_from(mode_text).ok();
if let Some(parsed_mode) = ContextPickerMode::try_from(mode_text).ok() {
mode = Some(parsed_mode);
} else {
argument = Some(mode_text.to_string());
}
match rest_of_line[mode_text.len()..].find(|c: char| !c.is_whitespace()) {
Some(whitespace_count) => {
if let Some(argument_text) = parts.next() {
@@ -702,13 +694,13 @@ impl MentionCompletion {
#[cfg(test)]
mod tests {
use super::*;
use gpui::{Focusable, TestAppContext, VisualTestContext};
use gpui::{EventEmitter, FocusHandle, Focusable, TestAppContext, VisualTestContext};
use project::{Project, ProjectPath};
use serde_json::json;
use settings::SettingsStore;
use std::{ops::Deref, path::PathBuf};
use std::ops::Deref;
use util::{path, separator};
use workspace::AppState;
use workspace::{AppState, Item};
#[test]
fn test_mention_completion_parse() {
@@ -768,9 +760,42 @@ mod tests {
})
);
assert_eq!(
MentionCompletion::try_parse("Lorem @main", 0),
Some(MentionCompletion {
source_range: 6..11,
mode: None,
argument: Some("main".to_string()),
})
);
assert_eq!(MentionCompletion::try_parse("test@", 0), None);
}
struct AtMentionEditor(Entity<Editor>);
impl Item for AtMentionEditor {
type Event = ();
fn include_in_nav_history() -> bool {
false
}
}
impl EventEmitter<()> for AtMentionEditor {}
impl Focusable for AtMentionEditor {
fn focus_handle(&self, cx: &App) -> FocusHandle {
self.0.read(cx).focus_handle(cx).clone()
}
}
impl Render for AtMentionEditor {
fn render(&mut self, _window: &mut Window, _cx: &mut Context<Self>) -> impl IntoElement {
self.0.clone().into_any_element()
}
}
#[gpui::test]
async fn test_context_completion_provider(cx: &mut TestAppContext) {
init_test(cx);
@@ -846,25 +871,27 @@ mod tests {
.unwrap();
}
let item = workspace
.update_in(&mut cx, |workspace, window, cx| {
workspace.open_path(
ProjectPath {
worktree_id,
path: PathBuf::from("editor").into(),
},
let editor = workspace.update_in(&mut cx, |workspace, window, cx| {
let editor = cx.new(|cx| {
Editor::new(
editor::EditorMode::Full,
multi_buffer::MultiBuffer::build_simple("", cx),
None,
true,
window,
cx,
)
})
.await
.expect("Could not open test file");
let editor = cx.update(|_, cx| {
item.act_as::<Editor>(cx)
.expect("Opened test file wasn't an editor")
});
workspace.active_pane().update(cx, |pane, cx| {
pane.add_item(
Box::new(cx.new(|_| AtMentionEditor(editor.clone()))),
true,
true,
None,
window,
cx,
);
});
editor
});
let context_store = cx.new(|_| ContextStore::new(project.downgrade(), None));
@@ -895,10 +922,10 @@ mod tests {
assert_eq!(
current_completion_labels(editor),
&[
"editor dir/",
"seven.txt dir/b/",
"six.txt dir/b/",
"five.txt dir/b/",
"four.txt dir/a/",
"Files & Directories",
"Symbols",
"Fetch"
@@ -993,14 +1020,14 @@ mod tests {
editor.update(&mut cx, |editor, cx| {
assert_eq!(
editor.text(cx),
"Lorem [@one.txt](@file:dir/a/one.txt) Ipsum [@editor](@file:dir/editor)"
"Lorem [@one.txt](@file:dir/a/one.txt) Ipsum [@seven.txt](@file:dir/b/seven.txt)"
);
assert!(!editor.has_visible_completions_menu());
assert_eq!(
crease_ranges(editor, cx),
vec![
Point::new(0, 6)..Point::new(0, 37),
Point::new(0, 44)..Point::new(0, 71)
Point::new(0, 44)..Point::new(0, 79)
]
);
});
@@ -1010,14 +1037,14 @@ mod tests {
editor.update(&mut cx, |editor, cx| {
assert_eq!(
editor.text(cx),
"Lorem [@one.txt](@file:dir/a/one.txt) Ipsum [@editor](@file:dir/editor)\n@"
"Lorem [@one.txt](@file:dir/a/one.txt) Ipsum [@seven.txt](@file:dir/b/seven.txt)\n@"
);
assert!(editor.has_visible_completions_menu());
assert_eq!(
crease_ranges(editor, cx),
vec![
Point::new(0, 6)..Point::new(0, 37),
Point::new(0, 44)..Point::new(0, 71)
Point::new(0, 44)..Point::new(0, 79)
]
);
});
@@ -1031,15 +1058,15 @@ mod tests {
editor.update(&mut cx, |editor, cx| {
assert_eq!(
editor.text(cx),
"Lorem [@one.txt](@file:dir/a/one.txt) Ipsum [@editor](@file:dir/editor)\n[@seven.txt](@file:dir/b/seven.txt)"
"Lorem [@one.txt](@file:dir/a/one.txt) Ipsum [@seven.txt](@file:dir/b/seven.txt)\n[@six.txt](@file:dir/b/six.txt)"
);
assert!(!editor.has_visible_completions_menu());
assert_eq!(
crease_ranges(editor, cx),
vec![
Point::new(0, 6)..Point::new(0, 37),
Point::new(0, 44)..Point::new(0, 71),
Point::new(1, 0)..Point::new(1, 35)
Point::new(0, 44)..Point::new(0, 79),
Point::new(1, 0)..Point::new(1, 31)
]
);
});

View File

@@ -58,7 +58,7 @@ pub struct FileContextPickerDelegate {
workspace: WeakEntity<Workspace>,
context_store: WeakEntity<ContextStore>,
confirm_behavior: ConfirmBehavior,
matches: Vec<PathMatch>,
matches: Vec<FileMatch>,
selected_index: usize,
}
@@ -114,7 +114,7 @@ impl PickerDelegate for FileContextPickerDelegate {
return Task::ready(());
};
let search_task = search_paths(query, Arc::<AtomicBool>::default(), &workspace, cx);
let search_task = search_files(query, Arc::<AtomicBool>::default(), &workspace, cx);
cx.spawn_in(window, async move |this, cx| {
// TODO: This should probably be run in the background.
@@ -128,7 +128,7 @@ impl PickerDelegate for FileContextPickerDelegate {
}
fn confirm(&mut self, _secondary: bool, window: &mut Window, cx: &mut Context<Picker<Self>>) {
let Some(mat) = self.matches.get(self.selected_index) else {
let Some(FileMatch { mat, .. }) = self.matches.get(self.selected_index) else {
return;
};
@@ -181,7 +181,7 @@ impl PickerDelegate for FileContextPickerDelegate {
_window: &mut Window,
cx: &mut Context<Picker<Self>>,
) -> Option<Self::ListItem> {
let path_match = &self.matches[ix];
let FileMatch { mat, .. } = &self.matches[ix];
Some(
ListItem::new(ix)
@@ -189,9 +189,10 @@ impl PickerDelegate for FileContextPickerDelegate {
.toggle_state(selected)
.child(render_file_context_entry(
ElementId::NamedInteger("file-ctx-picker".into(), ix),
&path_match.path,
&path_match.path_prefix,
path_match.is_dir,
WorktreeId::from_usize(mat.worktree_id),
&mat.path,
&mat.path_prefix,
mat.is_dir,
self.context_store.clone(),
cx,
)),
@@ -199,12 +200,17 @@ impl PickerDelegate for FileContextPickerDelegate {
}
}
pub(crate) fn search_paths(
pub struct FileMatch {
pub mat: PathMatch,
pub is_recent: bool,
}
pub(crate) fn search_files(
query: String,
cancellation_flag: Arc<AtomicBool>,
workspace: &Entity<Workspace>,
cx: &App,
) -> Task<Vec<PathMatch>> {
) -> Task<Vec<FileMatch>> {
if query.is_empty() {
let workspace = workspace.read(cx);
let project = workspace.project().read(cx);
@@ -213,28 +219,34 @@ pub(crate) fn search_paths(
.into_iter()
.filter_map(|(project_path, _)| {
let worktree = project.worktree_for_id(project_path.worktree_id, cx)?;
Some(PathMatch {
score: 0.,
positions: Vec::new(),
worktree_id: project_path.worktree_id.to_usize(),
path: project_path.path,
path_prefix: worktree.read(cx).root_name().into(),
distance_to_relative_ancestor: 0,
is_dir: false,
Some(FileMatch {
mat: PathMatch {
score: 0.,
positions: Vec::new(),
worktree_id: project_path.worktree_id.to_usize(),
path: project_path.path,
path_prefix: worktree.read(cx).root_name().into(),
distance_to_relative_ancestor: 0,
is_dir: false,
},
is_recent: true,
})
});
let file_matches = project.worktrees(cx).flat_map(|worktree| {
let worktree = worktree.read(cx);
let path_prefix: Arc<str> = worktree.root_name().into();
worktree.entries(false, 0).map(move |entry| PathMatch {
score: 0.,
positions: Vec::new(),
worktree_id: worktree.id().to_usize(),
path: entry.path.clone(),
path_prefix: path_prefix.clone(),
distance_to_relative_ancestor: 0,
is_dir: entry.is_dir(),
worktree.entries(false, 0).map(move |entry| FileMatch {
mat: PathMatch {
score: 0.,
positions: Vec::new(),
worktree_id: worktree.id().to_usize(),
path: entry.path.clone(),
path_prefix: path_prefix.clone(),
distance_to_relative_ancestor: 0,
is_dir: entry.is_dir(),
},
is_recent: false,
})
});
@@ -269,6 +281,12 @@ pub(crate) fn search_paths(
executor,
)
.await
.into_iter()
.map(|mat| FileMatch {
mat,
is_recent: false,
})
.collect::<Vec<_>>()
})
}
}
@@ -311,19 +329,26 @@ pub fn extract_file_name_and_directory(
pub fn render_file_context_entry(
id: ElementId,
path: &Path,
worktree_id: WorktreeId,
path: &Arc<Path>,
path_prefix: &Arc<str>,
is_directory: bool,
context_store: WeakEntity<ContextStore>,
cx: &App,
) -> Stateful<Div> {
let (file_name, directory) = extract_file_name_and_directory(path, path_prefix);
let (file_name, directory) = extract_file_name_and_directory(&path, path_prefix);
let added = context_store.upgrade().and_then(|context_store| {
let project_path = ProjectPath {
worktree_id,
path: path.clone(),
};
if is_directory {
context_store.read(cx).includes_directory(path)
context_store.read(cx).includes_directory(&project_path)
} else {
context_store.read(cx).will_include_file_path(path, cx)
context_store
.read(cx)
.will_include_file_path(&project_path, cx)
}
});
@@ -363,8 +388,9 @@ pub fn render_file_context_entry(
)
.child(Label::new("Added").size(LabelSize::Small)),
),
FileInclusion::InDirectory(dir_name) => {
let dir_name = dir_name.to_string_lossy().into_owned();
FileInclusion::InDirectory(directory_project_path) => {
// TODO: Consider using worktree full_path to include worktree name.
let directory_path = directory_project_path.path.to_string_lossy().into_owned();
el.child(
h_flex()
@@ -378,7 +404,7 @@ pub fn render_file_context_entry(
)
.child(Label::new("Included").size(LabelSize::Small)),
)
.tooltip(Tooltip::text(format!("in {dir_name}")))
.tooltip(Tooltip::text(format!("in {directory_path}")))
}
})
}

View File

@@ -2,7 +2,7 @@ use std::cmp::Reverse;
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use anyhow::{Context as _, Result};
use anyhow::Result;
use fuzzy::{StringMatch, StringMatchCandidate};
use gpui::{
App, AppContext, DismissEvent, Entity, FocusHandle, Focusable, Stateful, Task, WeakEntity,
@@ -119,11 +119,7 @@ impl PickerDelegate for SymbolContextPickerDelegate {
let search_task = search_symbols(query, Arc::<AtomicBool>::default(), &workspace, cx);
let context_store = self.context_store.clone();
cx.spawn_in(window, async move |this, cx| {
let symbols = search_task
.await
.context("Failed to load symbols")
.log_err()
.unwrap_or_default();
let symbols = search_task.await;
let symbol_entries = context_store
.read_with(cx, |context_store, cx| {
@@ -285,12 +281,16 @@ fn find_matching_symbol(symbol: &Symbol, candidates: &[DocumentSymbol]) -> Optio
}
}
pub struct SymbolMatch {
pub symbol: Symbol,
}
pub(crate) fn search_symbols(
query: String,
cancellation_flag: Arc<AtomicBool>,
workspace: &Entity<Workspace>,
cx: &mut App,
) -> Task<Result<Vec<(StringMatch, Symbol)>>> {
) -> Task<Vec<SymbolMatch>> {
let symbols_task = workspace.update(cx, |workspace, cx| {
workspace
.project()
@@ -298,19 +298,28 @@ pub(crate) fn search_symbols(
});
let project = workspace.read(cx).project().clone();
cx.spawn(async move |cx| {
let symbols = symbols_task.await?;
let (visible_match_candidates, external_match_candidates): (Vec<_>, Vec<_>) = project
.update(cx, |project, cx| {
symbols
.iter()
.enumerate()
.map(|(id, symbol)| StringMatchCandidate::new(id, &symbol.label.filter_text()))
.partition(|candidate| {
project
.entry_for_path(&symbols[candidate.id].path, cx)
.map_or(false, |e| !e.is_ignored)
})
})?;
let Some(symbols) = symbols_task.await.log_err() else {
return Vec::new();
};
let Some((visible_match_candidates, external_match_candidates)): Option<(Vec<_>, Vec<_>)> =
project
.update(cx, |project, cx| {
symbols
.iter()
.enumerate()
.map(|(id, symbol)| {
StringMatchCandidate::new(id, &symbol.label.filter_text())
})
.partition(|candidate| {
project
.entry_for_path(&symbols[candidate.id].path, cx)
.map_or(false, |e| !e.is_ignored)
})
})
.log_err()
else {
return Vec::new();
};
const MAX_MATCHES: usize = 100;
let mut visible_matches = cx.background_executor().block(fuzzy::match_strings(
@@ -339,7 +348,7 @@ pub(crate) fn search_symbols(
let mut matches = visible_matches;
matches.append(&mut external_matches);
Ok(matches
matches
.into_iter()
.map(|mut mat| {
let symbol = symbols[mat.candidate_id].clone();
@@ -347,19 +356,19 @@ pub(crate) fn search_symbols(
for position in &mut mat.positions {
*position += filter_start;
}
(mat, symbol)
SymbolMatch { symbol }
})
.collect())
.collect()
})
}
fn compute_symbol_entries(
symbols: Vec<(StringMatch, Symbol)>,
symbols: Vec<SymbolMatch>,
context_store: &ContextStore,
cx: &App,
) -> Vec<SymbolEntry> {
let mut symbol_entries = Vec::with_capacity(symbols.len());
for (_, symbol) in symbols {
for SymbolMatch { symbol, .. } in symbols {
let symbols_for_path = context_store.included_symbols_by_path().get(&symbol.path);
let is_included = if let Some(symbols_for_path) = symbols_for_path {
let mut is_included = false;

View File

@@ -1,4 +1,5 @@
use std::sync::Arc;
use std::sync::atomic::AtomicBool;
use fuzzy::StringMatchCandidate;
use gpui::{App, DismissEvent, Entity, FocusHandle, Focusable, Task, WeakEntity};
@@ -114,11 +115,11 @@ impl PickerDelegate for ThreadContextPickerDelegate {
return Task::ready(());
};
let search_task = search_threads(query, threads, cx);
let search_task = search_threads(query, Arc::new(AtomicBool::default()), threads, cx);
cx.spawn_in(window, async move |this, cx| {
let matches = search_task.await;
this.update(cx, |this, cx| {
this.delegate.matches = matches;
this.delegate.matches = matches.into_iter().map(|mat| mat.thread).collect();
this.delegate.selected_index = 0;
cx.notify();
})
@@ -217,11 +218,18 @@ pub fn render_thread_context_entry(
})
}
#[derive(Clone)]
pub struct ThreadMatch {
pub thread: ThreadContextEntry,
pub is_recent: bool,
}
pub(crate) fn search_threads(
query: String,
cancellation_flag: Arc<AtomicBool>,
thread_store: Entity<ThreadStore>,
cx: &mut App,
) -> Task<Vec<ThreadContextEntry>> {
) -> Task<Vec<ThreadMatch>> {
let threads = thread_store.update(cx, |this, _cx| {
this.threads()
.into_iter()
@@ -236,6 +244,12 @@ pub(crate) fn search_threads(
cx.background_spawn(async move {
if query.is_empty() {
threads
.into_iter()
.map(|thread| ThreadMatch {
thread,
is_recent: false,
})
.collect()
} else {
let candidates = threads
.iter()
@@ -247,14 +261,17 @@ pub(crate) fn search_threads(
&query,
false,
100,
&Default::default(),
&cancellation_flag,
executor,
)
.await;
matches
.into_iter()
.map(|mat| threads[mat.candidate_id].clone())
.map(|mat| ThreadMatch {
thread: threads[mat.candidate_id].clone(),
is_recent: false,
})
.collect()
}
})

View File

@@ -1,5 +1,5 @@
use std::ops::Range;
use std::path::{Path, PathBuf};
use std::path::Path;
use std::sync::Arc;
use anyhow::{Context as _, Result, anyhow};
@@ -28,7 +28,7 @@ pub struct ContextStore {
// TODO: If an EntityId is used for all context types (like BufferId), can remove ContextId.
next_context_id: ContextId,
files: BTreeMap<BufferId, ContextId>,
directories: HashMap<PathBuf, ContextId>,
directories: HashMap<ProjectPath, ContextId>,
symbols: HashMap<ContextSymbolId, ContextId>,
symbol_buffers: HashMap<ContextSymbolId, Entity<Buffer>>,
symbols_by_path: HashMap<ProjectPath, Vec<ContextSymbolId>>,
@@ -93,7 +93,7 @@ impl ContextStore {
let buffer_id = this.update(cx, |_, cx| buffer.read(cx).remote_id())?;
let already_included = this.update(cx, |this, cx| {
match this.will_include_buffer(buffer_id, &project_path.path) {
match this.will_include_buffer(buffer_id, &project_path) {
Some(FileInclusion::Direct(context_id)) => {
if remove_if_exists {
this.remove_context(context_id, cx);
@@ -159,7 +159,7 @@ impl ContextStore {
return Task::ready(Err(anyhow!("failed to read project")));
};
let already_included = match self.includes_directory(&project_path.path) {
let already_included = match self.includes_directory(&project_path) {
Some(FileInclusion::Direct(context_id)) => {
if remove_if_exists {
self.remove_context(context_id, cx);
@@ -223,14 +223,12 @@ impl ContextStore {
.collect::<Vec<_>>();
if context_buffers.is_empty() {
return Err(anyhow!(
"No text files found in {}",
&project_path.path.display()
));
let full_path = cx.update(|cx| worktree.read(cx).full_path(&project_path.path))?;
return Err(anyhow!("No text files found in {}", &full_path.display()));
}
this.update(cx, |this, cx| {
this.insert_directory(project_path, context_buffers, cx);
this.insert_directory(worktree, project_path, context_buffers, cx);
})?;
anyhow::Ok(())
@@ -239,17 +237,20 @@ impl ContextStore {
fn insert_directory(
&mut self,
worktree: Entity<Worktree>,
project_path: ProjectPath,
context_buffers: Vec<ContextBuffer>,
cx: &mut Context<Self>,
) {
let id = self.next_context_id.post_inc();
self.directories.insert(project_path.path.to_path_buf(), id);
let path = project_path.path.clone();
self.directories.insert(project_path, id);
self.context
.push(AssistantContext::Directory(DirectoryContext {
id,
project_path,
worktree,
path,
context_buffers,
}));
cx.notify();
@@ -478,23 +479,31 @@ impl ContextStore {
/// Returns whether the buffer is already included directly in the context, or if it will be
/// included in the context via a directory. Directory inclusion is based on paths rather than
/// buffer IDs as the directory will be re-scanned.
pub fn will_include_buffer(&self, buffer_id: BufferId, path: &Path) -> Option<FileInclusion> {
pub fn will_include_buffer(
&self,
buffer_id: BufferId,
project_path: &ProjectPath,
) -> Option<FileInclusion> {
if let Some(context_id) = self.files.get(&buffer_id) {
return Some(FileInclusion::Direct(*context_id));
}
self.will_include_file_path_via_directory(path)
self.will_include_file_path_via_directory(project_path)
}
/// Returns whether this file path is already included directly in the context, or if it will be
/// included in the context via a directory.
pub fn will_include_file_path(&self, path: &Path, cx: &App) -> Option<FileInclusion> {
pub fn will_include_file_path(
&self,
project_path: &ProjectPath,
cx: &App,
) -> Option<FileInclusion> {
if !self.files.is_empty() {
let found_file_context = self.context.iter().find(|context| match &context {
AssistantContext::File(file_context) => {
let buffer = file_context.context_buffer.buffer.read(cx);
if let Some(file_path) = buffer_path_log_err(buffer, cx) {
*file_path == *path
if let Some(context_path) = buffer.project_path(cx) {
&context_path == project_path
} else {
false
}
@@ -506,31 +515,40 @@ impl ContextStore {
}
}
self.will_include_file_path_via_directory(path)
self.will_include_file_path_via_directory(project_path)
}
fn will_include_file_path_via_directory(&self, path: &Path) -> Option<FileInclusion> {
fn will_include_file_path_via_directory(
&self,
project_path: &ProjectPath,
) -> Option<FileInclusion> {
if self.directories.is_empty() {
return None;
}
let mut buf = path.to_path_buf();
let mut path_buf = project_path.path.to_path_buf();
while buf.pop() {
if let Some(_) = self.directories.get(&buf) {
return Some(FileInclusion::InDirectory(buf));
while path_buf.pop() {
// TODO: This isn't very efficient. Consider using a better representation of the
// directories map.
let directory_project_path = ProjectPath {
worktree_id: project_path.worktree_id,
path: path_buf.clone().into(),
};
if let Some(_) = self.directories.get(&directory_project_path) {
return Some(FileInclusion::InDirectory(directory_project_path));
}
}
None
}
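A tiny runnable illustration of the ancestor walk performed by will_include_file_path_via_directory above; the sketch uses paths only, while the real check also carries the worktree id in each ProjectPath lookup:

use std::path::PathBuf;

fn main() {
    // Mirrors the `while path_buf.pop()` loop: visit each ancestor of a
    // worktree-relative file path and check it against the included directories.
    let mut path = PathBuf::from("dir/a/one.txt");
    while path.pop() {
        println!("check ancestor: {:?}", path);
    }
    // Prints "dir/a", then "dir", then "" before the loop ends.
}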
pub fn includes_directory(&self, path: &Path) -> Option<FileInclusion> {
if let Some(context_id) = self.directories.get(path) {
pub fn includes_directory(&self, project_path: &ProjectPath) -> Option<FileInclusion> {
if let Some(context_id) = self.directories.get(project_path) {
return Some(FileInclusion::Direct(*context_id));
}
self.will_include_file_path_via_directory(path)
self.will_include_file_path_via_directory(project_path)
}
pub fn included_symbol(&self, symbol_id: &ContextSymbolId) -> Option<ContextId> {
@@ -564,13 +582,13 @@ impl ContextStore {
}
}
pub fn file_paths(&self, cx: &App) -> HashSet<PathBuf> {
pub fn file_paths(&self, cx: &App) -> HashSet<ProjectPath> {
self.context
.iter()
.filter_map(|context| match context {
AssistantContext::File(file) => {
let buffer = file.context_buffer.buffer.read(cx);
buffer_path_log_err(buffer, cx).map(|p| p.to_path_buf())
buffer.project_path(cx)
}
AssistantContext::Directory(_)
| AssistantContext::Symbol(_)
@@ -587,7 +605,7 @@ impl ContextStore {
pub enum FileInclusion {
Direct(ContextId),
InDirectory(PathBuf),
InDirectory(ProjectPath),
}
// ContextBuffer without text.
@@ -654,19 +672,6 @@ fn collect_buffer_info_and_text(
Ok((buffer_info, text_task))
}
pub fn buffer_path_log_err(buffer: &Buffer, cx: &App) -> Option<Arc<Path>> {
if let Some(file) = buffer.file() {
let mut path = file.path().clone();
if path.as_os_str().is_empty() {
path = file.full_path(cx).into();
}
Some(path)
} else {
log::error!("Buffer that had a path unexpectedly no longer has a path.");
None
}
}
fn to_fenced_codeblock(path: &Path, content: Rope) -> SharedString {
let path_extension = path.extension().and_then(|ext| ext.to_str());
let path_string = path.to_string_lossy();
@@ -742,13 +747,13 @@ pub fn refresh_context_store_text(
}
}
AssistantContext::Directory(directory_context) => {
let directory_path = directory_context.project_path(cx);
let should_refresh = changed_buffers.is_empty()
|| changed_buffers.iter().any(|buffer| {
let buffer = buffer.read(cx);
buffer_path_log_err(&buffer, cx).map_or(false, |path| {
path.starts_with(&directory_context.project_path.path)
})
let Some(buffer_path) = buffer.read(cx).project_path(cx) else {
return false;
};
buffer_path.starts_with(&directory_path)
});
if should_refresh {
@@ -835,14 +840,16 @@ fn refresh_directory_text(
let context_buffers = future::join_all(futures);
let id = directory_context.id;
let project_path = directory_context.project_path.clone();
let worktree = directory_context.worktree.clone();
let path = directory_context.path.clone();
Some(cx.spawn(async move |cx| {
let context_buffers = context_buffers.await;
context_store
.update(cx, |context_store, _| {
let new_directory_context = DirectoryContext {
id,
project_path,
worktree,
path,
context_buffers,
};
context_store.replace_context(AssistantContext::Directory(new_directory_context));

View File

@@ -1,3 +1,4 @@
use std::path::Path;
use std::rc::Rc;
use collections::HashSet;
@@ -9,6 +10,7 @@ use gpui::{
};
use itertools::Itertools;
use language::Buffer;
use project::ProjectItem;
use ui::{KeyBinding, PopoverMenu, PopoverMenuHandle, Tooltip, prelude::*};
use workspace::{Workspace, notifications::NotifyResultExt};
@@ -93,26 +95,23 @@ impl ContextStrip {
let active_buffer_entity = editor.buffer().read(cx).as_singleton()?;
let active_buffer = active_buffer_entity.read(cx);
let path = active_buffer.file()?.full_path(cx);
let project_path = active_buffer.project_path(cx)?;
if self
.context_store
.read(cx)
.will_include_buffer(active_buffer.remote_id(), &path)
.will_include_buffer(active_buffer.remote_id(), &project_path)
.is_some()
{
return None;
}
let name = match path.file_name() {
Some(name) => name.to_string_lossy().into_owned().into(),
None => path.to_string_lossy().into_owned().into(),
};
let file_name = active_buffer.file()?.file_name(cx);
let icon_path = FileIcons::get_icon(&path, cx);
let icon_path = FileIcons::get_icon(&Path::new(&file_name), cx);
Some(SuggestedContext::File {
name,
name: file_name.to_string_lossy().into_owned().into(),
buffer: active_buffer_entity.downgrade(),
icon_path,
})

View File

@@ -28,6 +28,7 @@ use language_model::{LanguageModelRegistry, report_assistant_event};
use multi_buffer::MultiBufferRow;
use parking_lot::Mutex;
use project::LspAction;
use project::Project;
use project::{CodeAction, ProjectTransaction};
use prompt_store::PromptBuilder;
use settings::{Settings, SettingsStore};
@@ -254,6 +255,7 @@ impl InlineAssistant {
assistant.assist(
&active_editor,
cx.entity().downgrade(),
workspace.project().downgrade(),
thread_store,
window,
cx,
@@ -262,7 +264,14 @@ impl InlineAssistant {
}
InlineAssistTarget::Terminal(active_terminal) => {
TerminalInlineAssistant::update_global(cx, |assistant, cx| {
assistant.assist(&active_terminal, cx.entity(), thread_store, window, cx)
assistant.assist(
&active_terminal,
cx.entity().downgrade(),
workspace.project().downgrade(),
thread_store,
window,
cx,
)
})
}
};
@@ -312,17 +321,11 @@ impl InlineAssistant {
&mut self,
editor: &Entity<Editor>,
workspace: WeakEntity<Workspace>,
project: WeakEntity<Project>,
thread_store: Option<WeakEntity<ThreadStore>>,
window: &mut Window,
cx: &mut App,
) {
let Some(project) = workspace
.upgrade()
.map(|workspace| workspace.read(cx).project().downgrade())
else {
return;
};
let (snapshot, initial_selections) = editor.update(cx, |editor, cx| {
(
editor.snapshot(window, cx),

View File

@@ -3,14 +3,16 @@ use std::sync::Arc;
use crate::assistant_model_selector::ModelType;
use collections::HashSet;
use editor::actions::MoveUp;
use editor::{ContextMenuOptions, ContextMenuPlacement, Editor, EditorElement, EditorStyle};
use editor::{
ContextMenuOptions, ContextMenuPlacement, Editor, EditorElement, EditorStyle, MultiBuffer,
};
use file_icons::FileIcons;
use fs::Fs;
use gpui::{
Animation, AnimationExt, App, DismissEvent, Entity, Focusable, Subscription, TextStyle,
WeakEntity, linear_color_stop, linear_gradient, point, pulsating_between,
};
use language::Buffer;
use language::{Buffer, Language};
use language_model::{ConfiguredModel, LanguageModelRegistry};
use language_model_selector::ToggleModelSelector;
use multi_buffer;
@@ -66,8 +68,24 @@ impl MessageEditor {
let inline_context_picker_menu_handle = PopoverMenuHandle::default();
let model_selector_menu_handle = PopoverMenuHandle::default();
let language = Language::new(
language::LanguageConfig {
completion_query_characters: HashSet::from_iter(['.', '-', '_', '@']),
..Default::default()
},
None,
);
let editor = cx.new(|cx| {
let mut editor = Editor::auto_height(10, window, cx);
let buffer = cx.new(|cx| Buffer::local("", cx).with_language(Arc::new(language), cx));
let buffer = cx.new(|cx| MultiBuffer::singleton(buffer, cx));
let mut editor = Editor::new(
editor::EditorMode::AutoHeight { max_lines: 10 },
buffer,
None,
window,
cx,
);
editor.set_placeholder_text("Ask anything, @ to mention, ↑ to select", cx);
editor.set_show_indent_guides(false, cx);
editor.set_context_menu_options(ContextMenuOptions {
@@ -75,7 +93,6 @@ impl MessageEditor {
max_entries_visible: 12,
placement: Some(ContextMenuPlacement::Above),
});
editor
});
@@ -184,7 +201,7 @@ impl MessageEditor {
}
fn is_editor_empty(&self, cx: &App) -> bool {
self.editor.read(cx).text(cx).is_empty()
self.editor.read(cx).text(cx).trim().is_empty()
}
fn is_model_selected(&self, cx: &App) -> bool {

View File

@@ -16,6 +16,7 @@ use language_model::{
ConfiguredModel, LanguageModelRegistry, LanguageModelRequest, LanguageModelRequestMessage,
Role, report_assistant_event,
};
use project::Project;
use prompt_store::PromptBuilder;
use std::sync::Arc;
use telemetry_events::{AssistantEventData, AssistantKind, AssistantPhase};
@@ -66,7 +67,8 @@ impl TerminalInlineAssistant {
pub fn assist(
&mut self,
terminal_view: &Entity<TerminalView>,
workspace: Entity<Workspace>,
workspace: WeakEntity<Workspace>,
project: WeakEntity<Project>,
thread_store: Option<WeakEntity<ThreadStore>>,
window: &mut Window,
cx: &mut App,
@@ -75,7 +77,6 @@ impl TerminalInlineAssistant {
let assist_id = self.next_assist_id.post_inc();
let prompt_buffer =
cx.new(|cx| MultiBuffer::singleton(cx.new(|cx| Buffer::local(String::new(), cx)), cx));
let project = workspace.read(cx).project().downgrade();
let context_store = cx.new(|_cx| ContextStore::new(project, thread_store.clone()));
let codegen = cx.new(|_| TerminalCodegen::new(terminal, self.telemetry.clone()));
@@ -87,7 +88,7 @@ impl TerminalInlineAssistant {
codegen,
self.fs.clone(),
context_store.clone(),
workspace.downgrade(),
workspace.clone(),
thread_store.clone(),
window,
cx,
@@ -106,7 +107,7 @@ impl TerminalInlineAssistant {
assist_id,
terminal_view,
prompt_editor,
workspace.downgrade(),
workspace.clone(),
context_store,
window,
cx,

View File

@@ -6,7 +6,7 @@ use std::sync::Arc;
use agent_rules::load_worktree_rules_file;
use anyhow::{Context as _, Result, anyhow};
use assistant_settings::AssistantSettings;
use assistant_tool::{ActionLog, Tool, ToolWorkingSet};
use assistant_tool::{ActionLog, ResponseDest, Tool, ToolWorkingSet};
use chrono::{DateTime, Utc};
use collections::{BTreeMap, HashMap};
use fs::Fs;
@@ -182,7 +182,7 @@ pub struct ThreadCheckpoint {
git_checkpoint: GitStoreCheckpoint,
}
#[derive(Copy, Clone, Debug)]
#[derive(Copy, Clone, Debug, PartialEq, Eq)]
pub enum ThreadFeedback {
Positive,
Negative,
@@ -260,6 +260,8 @@ pub struct Thread {
initial_project_snapshot: Shared<Task<Option<Arc<ProjectSnapshot>>>>,
cumulative_token_usage: TokenUsage,
feedback: Option<ThreadFeedback>,
message_feedback: HashMap<MessageId, ThreadFeedback>,
response_dest: ResponseDest,
}
impl Thread {
@@ -298,6 +300,8 @@ impl Thread {
},
cumulative_token_usage: TokenUsage::default(),
feedback: None,
message_feedback: HashMap::default(),
response_dest: ResponseDest::default(),
}
}
@@ -361,6 +365,8 @@ impl Thread {
initial_project_snapshot: Task::ready(serialized.initial_project_snapshot).shared(),
cumulative_token_usage: serialized.cumulative_token_usage,
feedback: None,
message_feedback: HashMap::default(),
response_dest: ResponseDest::default(),
}
}
@@ -455,6 +461,10 @@ impl Thread {
!self.tool_use.pending_tool_uses().is_empty()
}
pub fn pending_tool_uses(&self) -> Vec<&PendingToolUse> {
self.tool_use.pending_tool_uses()
}
pub fn checkpoint_for_message(&self, id: MessageId) -> Option<ThreadCheckpoint> {
self.checkpoints_by_message.get(&id).cloned()
}
@@ -1088,6 +1098,20 @@ impl Thread {
current_token_usage = token_usage;
}
LanguageModelCompletionEvent::Text(chunk) => {
match &thread.response_dest {
ResponseDest::File { path } => {
log::info!(
"Emitting {}B StreamedFileChunk for {path}",
chunk.len()
);
cx.emit(ThreadEvent::StreamedFileChunk {
path: path.clone(),
chunk: chunk.clone(),
});
}
ResponseDest::TextOnly => {}
}
if let Some(last_message) = thread.messages.last_mut() {
if last_message.role == Role::Assistant {
last_message.push_text(&chunk);
@@ -1103,7 +1127,7 @@ impl Thread {
// will result in duplicating the text of the chunk in the rendered Markdown.
thread.insert_message(
Role::Assistant,
vec![MessageSegment::Text(chunk.to_string())],
vec![MessageSegment::Text(chunk)],
cx,
);
};
@@ -1178,7 +1202,8 @@ impl Thread {
match result.as_ref() {
Ok(stop_reason) => match stop_reason {
StopReason::ToolUse => {
cx.emit(ThreadEvent::UsePendingTools);
let tool_uses = thread.use_pending_tools(cx);
cx.emit(ThreadEvent::UsePendingTools { tool_uses });
}
StopReason::EndTurn => {}
StopReason::MaxTokens => {}
@@ -1366,10 +1391,7 @@ impl Thread {
)
}
pub fn use_pending_tools(
&mut self,
cx: &mut Context<Self>,
) -> impl IntoIterator<Item = PendingToolUse> + use<> {
pub fn use_pending_tools(&mut self, cx: &mut Context<Self>) -> Vec<PendingToolUse> {
let request = self.to_completion_request(RequestKind::Chat, cx);
let messages = Arc::new(request.messages);
let pending_tool_uses = self
@@ -1447,7 +1469,24 @@ impl Thread {
cx.spawn({
async move |thread: WeakEntity<Thread>, cx| {
let output = run_tool.await;
let new_response_dest;
let output = match run_tool.await {
Ok((response_dest, output)) => {
new_response_dest = response_dest;
Ok(output)
}
Err(err) => {
new_response_dest = ResponseDest::default();
Err(err)
}
};
thread.upgrade().map(|thread| {
thread.update(cx, |thread, _cx| {
thread.response_dest = new_response_dest;
})
});
thread
.update(cx, |thread, cx| {
@@ -1457,18 +1496,36 @@ impl Thread {
output,
cx,
);
cx.emit(ThreadEvent::ToolFinished {
tool_use_id,
pending_tool_use,
canceled: false,
});
thread.tool_finished(tool_use_id, pending_tool_use, false, cx);
})
.ok();
}
})
}
fn tool_finished(
&mut self,
tool_use_id: LanguageModelToolUseId,
pending_tool_use: Option<PendingToolUse>,
canceled: bool,
cx: &mut Context<Self>,
) {
if self.all_tools_finished() {
let model_registry = LanguageModelRegistry::read_global(cx);
if let Some(ConfiguredModel { model, .. }) = model_registry.default_model() {
self.attach_tool_results(cx);
if !canceled {
self.send_to_model(model, RequestKind::Chat, cx);
}
}
}
cx.emit(ThreadEvent::ToolFinished {
tool_use_id,
pending_tool_use,
});
}
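tool_finished centralizes behavior that previously lived at each ToolFinished emit site and in the headless assistant: once every pending tool has finished and none were canceled, the thread attaches the tool results and immediately sends another request. A condensed, hypothetical sketch of that decision (the real method also always emits ThreadEvent::ToolFinished and still attaches results when canceled):

enum Next {
    ContinueWithModel, // attach tool results and send another request
    WaitForMoreTools,
    Stop, // canceled, or no default model configured
}

// Hypothetical condensation of the branching in Thread::tool_finished.
fn on_tool_finished(all_tools_finished: bool, canceled: bool, have_model: bool) -> Next {
    if !all_tools_finished {
        Next::WaitForMoreTools
    } else if canceled || !have_model {
        Next::Stop
    } else {
        Next::ContinueWithModel
    }
}

fn main() {
    assert!(matches!(on_tool_finished(true, false, true), Next::ContinueWithModel));
    assert!(matches!(on_tool_finished(true, true, true), Next::Stop));
    assert!(matches!(on_tool_finished(false, false, true), Next::WaitForMoreTools));
}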
pub fn attach_tool_results(&mut self, cx: &mut Context<Self>) {
// Insert a user message to contain the tool results.
self.insert_user_message(
@@ -1492,11 +1549,12 @@ impl Thread {
let mut canceled = false;
for pending_tool_use in self.tool_use.cancel_pending() {
canceled = true;
cx.emit(ThreadEvent::ToolFinished {
tool_use_id: pending_tool_use.id.clone(),
pending_tool_use: Some(pending_tool_use),
canceled: true,
});
self.tool_finished(
pending_tool_use.id.clone(),
Some(pending_tool_use),
true,
cx,
);
}
canceled
};
@@ -1504,24 +1562,45 @@ impl Thread {
canceled
}
/// Returns the feedback given to the thread, if any.
pub fn feedback(&self) -> Option<ThreadFeedback> {
self.feedback
}
/// Reports feedback about the thread and stores it in our telemetry backend.
pub fn report_feedback(
pub fn message_feedback(&self, message_id: MessageId) -> Option<ThreadFeedback> {
self.message_feedback.get(&message_id).copied()
}
pub fn report_message_feedback(
&mut self,
message_id: MessageId,
feedback: ThreadFeedback,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
if self.message_feedback.get(&message_id) == Some(&feedback) {
return Task::ready(Ok(()));
}
let final_project_snapshot = Self::project_snapshot(self.project.clone(), cx);
let serialized_thread = self.serialize(cx);
let thread_id = self.id().clone();
let client = self.project.read(cx).client();
self.feedback = Some(feedback);
let enabled_tool_names: Vec<String> = self
.tools()
.enabled_tools(cx)
.iter()
.map(|tool| tool.name().to_string())
.collect();
self.message_feedback.insert(message_id, feedback);
cx.notify();
let message_content = self
.message(message_id)
.map(|msg| msg.to_string())
.unwrap_or_default();
cx.background_spawn(async move {
let final_project_snapshot = final_project_snapshot.await;
let serialized_thread = serialized_thread.await?;
@@ -1536,6 +1615,9 @@ impl Thread {
"Assistant Thread Rated",
rating,
thread_id,
enabled_tool_names,
message_id = message_id.0,
message_content,
thread_data,
final_project_snapshot
);
@@ -1545,6 +1627,52 @@ impl Thread {
})
}
pub fn report_feedback(
&mut self,
feedback: ThreadFeedback,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
let last_assistant_message_id = self
.messages
.iter()
.rev()
.find(|msg| msg.role == Role::Assistant)
.map(|msg| msg.id);
if let Some(message_id) = last_assistant_message_id {
self.report_message_feedback(message_id, feedback, cx)
} else {
let final_project_snapshot = Self::project_snapshot(self.project.clone(), cx);
let serialized_thread = self.serialize(cx);
let thread_id = self.id().clone();
let client = self.project.read(cx).client();
self.feedback = Some(feedback);
cx.notify();
cx.background_spawn(async move {
let final_project_snapshot = final_project_snapshot.await;
let serialized_thread = serialized_thread.await?;
let thread_data = serde_json::to_value(serialized_thread)
.unwrap_or_else(|_| serde_json::Value::Null);
let rating = match feedback {
ThreadFeedback::Positive => "positive",
ThreadFeedback::Negative => "negative",
};
telemetry::event!(
"Assistant Thread Rated",
rating,
thread_id,
thread_data,
final_project_snapshot
);
client.telemetry().flush_events();
Ok(())
})
}
}
/// Create a snapshot of the current project state including git information and unsaved buffers.
fn project_snapshot(
project: Entity<Project>,
@@ -1801,12 +1929,7 @@ impl Thread {
self.tool_use
.insert_tool_output(tool_use_id.clone(), tool_name, err, cx);
cx.emit(ThreadEvent::ToolFinished {
tool_use_id,
pending_tool_use: None,
canceled: true,
});
self.tool_finished(tool_use_id.clone(), None, true, cx);
}
}
@@ -1826,20 +1949,24 @@ pub enum ThreadEvent {
StreamedCompletion,
StreamedAssistantText(MessageId, String),
StreamedAssistantThinking(MessageId, String),
StreamedFileChunk {
path: Arc<str>,
chunk: String,
},
DoneStreaming,
MessageAdded(MessageId),
MessageEdited(MessageId),
MessageDeleted(MessageId),
SummaryGenerated,
SummaryChanged,
UsePendingTools,
UsePendingTools {
tool_uses: Vec<PendingToolUse>,
},
ToolFinished {
#[allow(unused)]
tool_use_id: LanguageModelToolUseId,
/// The pending tool use that corresponds to this tool.
pending_tool_use: Option<PendingToolUse>,
/// Whether the tool was canceled by the user.
canceled: bool,
},
CheckpointChanged,
ToolConfirmationNeeded,

View File

@@ -280,9 +280,10 @@ impl AddedContext {
}
AssistantContext::Directory(directory_context) => {
// TODO: handle worktree disambiguation. Maybe by storing an `Arc<dyn File>` to also
// handle renames?
let full_path = &directory_context.project_path.path;
let full_path = directory_context
.worktree
.read(cx)
.full_path(&directory_context.path);
let full_path_string: SharedString =
full_path.to_string_lossy().into_owned().into();
let name = full_path

View File

@@ -95,11 +95,7 @@ impl HeadlessAssistant {
self.done_tx.send_blocking(Ok(())).unwrap()
}
}
ThreadEvent::UsePendingTools => {
thread.update(cx, |thread, cx| {
thread.use_pending_tools(cx);
});
}
ThreadEvent::UsePendingTools { .. } => {}
ThreadEvent::ToolConfirmationNeeded => {
// Automatically approve all tools that need confirmation in headless mode
println!("Tool confirmation needed - automatically approving in headless mode");
@@ -152,19 +148,6 @@ impl HeadlessAssistant {
if let Some(tool_result) = thread.read(cx).tool_result(tool_use_id) {
println!("Tool result: {:?}", tool_result);
}
if thread.read(cx).all_tools_finished() {
let model_registry = LanguageModelRegistry::read_global(cx);
if let Some(model) = model_registry.default_model() {
thread.update(cx, |thread, cx| {
thread.attach_tool_results(cx);
thread.send_to_model(model.model, RequestKind::Chat, cx);
});
} else {
println!(
"Warning: No active language model available to continue conversation"
);
}
}
}
_ => {}
}

View File

@@ -10,7 +10,7 @@ use collections::{BTreeSet, HashMap, HashSet, hash_map};
use editor::{
Anchor, Editor, EditorEvent, MenuInlineCompletionsPolicy, ProposedChangeLocation,
ProposedChangesEditor, RowExt, ToOffset as _, ToPoint,
actions::{FoldAt, MoveToEndOfLine, Newline, ShowCompletions, UnfoldAt},
actions::{MoveToEndOfLine, Newline, ShowCompletions},
display_map::{
BlockContext, BlockId, BlockPlacement, BlockProperties, BlockStyle, Crease, CreaseMetadata,
CustomBlockId, FoldId, RenderBlock, ToDisplayPoint,
@@ -1053,7 +1053,7 @@ impl ContextEditor {
let creases = editor.insert_creases(creases, cx);
for buffer_row in buffer_rows_to_fold.into_iter().rev() {
editor.fold_at(&FoldAt { buffer_row }, window, cx);
editor.fold_at(buffer_row, window, cx);
}
creases
@@ -1109,7 +1109,7 @@ impl ContextEditor {
buffer_rows_to_fold.clear();
}
for buffer_row in buffer_rows_to_fold.into_iter().rev() {
editor.fold_at(&FoldAt { buffer_row }, window, cx);
editor.fold_at(buffer_row, window, cx);
}
});
}
@@ -1844,13 +1844,7 @@ impl ContextEditor {
|_, _, _, _| Empty.into_any(),
);
editor.insert_creases(vec![crease], cx);
editor.fold_at(
&FoldAt {
buffer_row: start_row,
},
window,
cx,
);
editor.fold_at(start_row, window, cx);
}
})
}
@@ -2042,7 +2036,7 @@ impl ContextEditor {
cx,
);
for buffer_row in buffer_rows_to_fold.into_iter().rev() {
editor.fold_at(&FoldAt { buffer_row }, window, cx);
editor.fold_at(buffer_row, window, cx);
}
}
});
@@ -2820,7 +2814,7 @@ fn render_thought_process_fold_icon_button(
.start
.to_point(&editor.buffer().read(cx).read(cx));
let buffer_row = MultiBufferRow(buffer_start.row);
editor.unfold_at(&UnfoldAt { buffer_row }, window, cx);
editor.unfold_at(buffer_row, window, cx);
})
.ok();
})
@@ -2847,7 +2841,7 @@ fn render_fold_icon_button(
.start
.to_point(&editor.buffer().read(cx).read(cx));
let buffer_row = MultiBufferRow(buffer_start.row);
editor.unfold_at(&UnfoldAt { buffer_row }, window, cx);
editor.unfold_at(buffer_row, window, cx);
})
.ok();
})
@@ -2907,7 +2901,7 @@ fn quote_selection_fold_placeholder(title: String, editor: WeakEntity<Editor>) -
.start
.to_point(&editor.buffer().read(cx).read(cx));
let buffer_row = MultiBufferRow(buffer_start.row);
editor.unfold_at(&UnfoldAt { buffer_row }, window, cx);
editor.unfold_at(buffer_row, window, cx);
})
.ok();
})

View File

@@ -18,6 +18,20 @@ pub use crate::action_log::*;
pub use crate::tool_registry::*;
pub use crate::tool_working_set::*;
/// Where the streamed-in text should go.
/// For example, the file creation tool streams it to a file.
#[derive(Debug, Clone)]
pub enum ResponseDest {
File { path: Arc<str> },
TextOnly,
}
impl Default for ResponseDest {
fn default() -> Self {
Self::TextOnly
}
}
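The Tool trait just below now returns a ResponseDest alongside its textual output, telling the thread where streamed model text should be routed. A self-contained sketch of that routing decision (the Thread's StreamedFileChunk handling emits a ThreadEvent rather than printing; the local enum copy here is only so the sketch compiles on its own):

use std::sync::Arc;

// Local copy of the ResponseDest enum above.
#[derive(Debug, Clone)]
enum ResponseDest {
    File { path: Arc<str> },
    TextOnly,
}

fn route_chunk(dest: &ResponseDest, chunk: &str) {
    match dest {
        // e.g. the create-file tool: stream the text straight into the file.
        ResponseDest::File { path } => println!("append {} bytes to {path}", chunk.len()),
        // Default: the text only becomes part of the assistant message.
        ResponseDest::TextOnly => println!("append {} bytes to the transcript", chunk.len()),
    }
}

fn main() {
    route_chunk(&ResponseDest::File { path: "notes.md".into() }, "Hello!");
    route_chunk(&ResponseDest::TextOnly, "Hello!");
}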
pub fn init(cx: &mut App) {
ToolRegistry::default_global(cx);
}
@@ -66,7 +80,7 @@ pub trait Tool: 'static + Send + Sync {
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>>;
) -> Task<Result<(ResponseDest, String)>>;
}
impl Debug for dyn Tool {

View File

@@ -1,5 +1,4 @@
mod bash_tool;
mod batch_tool;
mod code_action_tool;
mod code_symbols_tool;
mod copy_path_tool;
mod create_directory_tool;
@@ -15,9 +14,11 @@ mod open_tool;
mod path_search_tool;
mod read_file_tool;
mod regex_search_tool;
mod rename_tool;
mod replace;
mod schema;
mod symbol_info_tool;
mod terminal_tool;
mod thinking_tool;
use std::sync::Arc;
@@ -28,8 +29,7 @@ use gpui::App;
use http_client::HttpClientWithUrl;
use move_path_tool::MovePathTool;
use crate::bash_tool::BashTool;
use crate::batch_tool::BatchTool;
use crate::code_action_tool::CodeActionTool;
use crate::code_symbols_tool::CodeSymbolsTool;
use crate::create_directory_tool::CreateDirectoryTool;
use crate::create_file_tool::CreateFileTool;
@@ -43,21 +43,23 @@ use crate::open_tool::OpenTool;
use crate::path_search_tool::PathSearchTool;
use crate::read_file_tool::ReadFileTool;
use crate::regex_search_tool::RegexSearchTool;
use crate::rename_tool::RenameTool;
use crate::symbol_info_tool::SymbolInfoTool;
use crate::terminal_tool::TerminalTool;
use crate::thinking_tool::ThinkingTool;
pub fn init(http_client: Arc<HttpClientWithUrl>, cx: &mut App) {
assistant_tool::init(cx);
let registry = ToolRegistry::global(cx);
registry.register_tool(BashTool);
registry.register_tool(BatchTool);
registry.register_tool(TerminalTool);
registry.register_tool(CreateDirectoryTool);
registry.register_tool(CreateFileTool);
registry.register_tool(CopyPathTool);
registry.register_tool(DeletePathTool);
registry.register_tool(FindReplaceFileTool);
registry.register_tool(SymbolInfoTool);
registry.register_tool(CodeActionTool);
registry.register_tool(MovePathTool);
registry.register_tool(DiagnosticsTool);
registry.register_tool(ListDirectoryTool);
@@ -67,6 +69,7 @@ pub fn init(http_client: Arc<HttpClientWithUrl>, cx: &mut App) {
registry.register_tool(PathSearchTool);
registry.register_tool(ReadFileTool);
registry.register_tool(RegexSearchTool);
registry.register_tool(RenameTool);
registry.register_tool(ThinkingTool);
registry.register_tool(FetchTool::new(http_client));
}

View File

@@ -1,230 +0,0 @@
use crate::schema::json_schema_for;
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use futures::io::BufReader;
use futures::{AsyncBufReadExt, AsyncReadExt};
use gpui::{App, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::path::Path;
use std::sync::Arc;
use ui::IconName;
use util::command::new_smol_command;
use util::markdown::MarkdownString;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct BashToolInput {
/// The bash one-liner command to execute.
command: String,
/// Working directory for the command. This must be one of the root directories of the project.
cd: String,
}
pub struct BashTool;
impl Tool for BashTool {
fn name(&self) -> String {
"bash".to_string()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
true
}
fn description(&self) -> String {
include_str!("./bash_tool/description.md").to_string()
}
fn icon(&self) -> IconName {
IconName::Terminal
}
fn input_schema(&self, format: LanguageModelToolSchemaFormat) -> serde_json::Value {
json_schema_for::<BashToolInput>(format)
}
fn ui_text(&self, input: &serde_json::Value) -> String {
match serde_json::from_value::<BashToolInput>(input.clone()) {
Ok(input) => {
let mut lines = input.command.lines();
let first_line = lines.next().unwrap_or_default();
let remaining_line_count = lines.count();
match remaining_line_count {
0 => MarkdownString::inline_code(&first_line).0,
1 => {
MarkdownString::inline_code(&format!(
"{} - {} more line",
first_line, remaining_line_count
))
.0
}
n => {
MarkdownString::inline_code(&format!("{} - {} more lines", first_line, n)).0
}
}
}
Err(_) => "Run bash command".to_string(),
}
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let input: BashToolInput = match serde_json::from_value(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
};
let project = project.read(cx);
let input_path = Path::new(&input.cd);
let working_dir = if input.cd == "." {
// Accept "." as meaning "the one worktree" if we only have one worktree.
let mut worktrees = project.worktrees(cx);
let only_worktree = match worktrees.next() {
Some(worktree) => worktree,
None => return Task::ready(Err(anyhow!("No worktrees found in the project"))),
};
if worktrees.next().is_some() {
return Task::ready(Err(anyhow!(
"'.' is ambiguous in multi-root workspaces. Please specify a root directory explicitly."
)));
}
only_worktree.read(cx).abs_path()
} else if input_path.is_absolute() {
// Absolute paths are allowed, but only if they're in one of the project's worktrees.
if !project
.worktrees(cx)
.any(|worktree| input_path.starts_with(&worktree.read(cx).abs_path()))
{
return Task::ready(Err(anyhow!(
"The absolute path must be within one of the project's worktrees"
)));
}
input_path.into()
} else {
let Some(worktree) = project.worktree_for_root_name(&input.cd, cx) else {
return Task::ready(Err(anyhow!(
"`cd` directory {} not found in the project",
&input.cd
)));
};
worktree.read(cx).abs_path()
};
cx.spawn(async move |_| {
// Add 2>&1 to merge stderr into stdout for proper interleaving.
let command = format!("({}) 2>&1", input.command);
let mut cmd = new_smol_command("bash")
.arg("-c")
.arg(&command)
.current_dir(working_dir)
.stdout(std::process::Stdio::piped())
.spawn()
.context("Failed to execute bash command")?;
// Capture stdout with a limit
let stdout = cmd.stdout.take().unwrap();
let mut reader = BufReader::new(stdout);
const MESSAGE_1: &str = "Command output too long. The first ";
const MESSAGE_2: &str = " bytes:\n\n";
const ERR_MESSAGE_1: &str = "Command failed with exit code ";
const ERR_MESSAGE_2: &str = "\n\n";
const STDOUT_LIMIT: usize = 8192;
const LIMIT: usize = STDOUT_LIMIT
- (MESSAGE_1.len()
+ (STDOUT_LIMIT.ilog10() as usize + 1) // byte count
+ MESSAGE_2.len()
+ ERR_MESSAGE_1.len()
+ 3 // status code
+ ERR_MESSAGE_2.len());
// Read one more byte to determine whether the output was truncated
let mut buffer = vec![0; LIMIT + 1];
let mut bytes_read = 0;
// Read until we reach the limit
loop {
let read = reader.read(&mut buffer).await?;
if read == 0 {
break;
}
bytes_read += read;
if bytes_read > LIMIT {
bytes_read = LIMIT + 1;
break;
}
}
// Repeatedly fill the output reader's buffer without copying it.
loop {
let skipped_bytes = reader.fill_buf().await?;
if skipped_bytes.is_empty() {
break;
}
let skipped_bytes_len = skipped_bytes.len();
reader.consume_unpin(skipped_bytes_len);
}
let output_bytes = &buffer[..bytes_read];
// Let the process continue running
let status = cmd.status().await.context("Failed to get command status")?;
let output_string = if bytes_read > LIMIT {
// Valid to find `\n` in UTF-8 since 0-127 ASCII characters are not used in
// multi-byte characters.
let last_line_ix = output_bytes.iter().rposition(|b| *b == b'\n');
let output_string = String::from_utf8_lossy(
&output_bytes[..last_line_ix.unwrap_or(output_bytes.len())],
);
format!(
"{}{}{}{}",
MESSAGE_1,
output_string.len(),
MESSAGE_2,
output_string
)
} else {
String::from_utf8_lossy(&output_bytes).into()
};
let output_with_status = if status.success() {
if output_string.is_empty() {
"Command executed successfully.".to_string()
} else {
output_string.to_string()
}
} else {
format!(
"{}{}{}{}",
ERR_MESSAGE_1,
status.code().unwrap_or(-1),
ERR_MESSAGE_2,
output_string,
)
};
debug_assert!(output_with_status.len() <= STDOUT_LIMIT);
Ok(output_with_status)
})
}
}

View File

@@ -1,7 +0,0 @@
Executes a bash one-liner and returns the combined output.
This tool spawns a bash process, combines stdout and stderr into one interleaved stream as they are produced (preserving the order of writes), and captures that stream into a string which is returned.
Make sure you use the `cd` parameter to navigate to one of the root directories of the project. NEVER do it as part of the `command` itself, otherwise it will error.
Remember that each invocation of this tool will spawn a new bash process, so you can't rely on any state from previous invocations.

View File

@@ -1,310 +0,0 @@
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool, ToolWorkingSet};
use futures::future::join_all;
use gpui::{App, AppContext, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use ui::IconName;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct ToolInvocation {
/// The name of the tool to invoke
pub name: String,
/// The input to the tool in JSON format
pub input: serde_json::Value,
}
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct BatchToolInput {
/// The tool invocations to run as a batch. These tools will be run either sequentially
/// or concurrently depending on the `run_tools_concurrently` flag.
///
/// <example>
/// Basic file operations (concurrent)
///
/// ```json
/// {
/// "invocations": [
/// {
/// "name": "read_file",
/// "input": {
/// "path": "src/main.rs"
/// }
/// },
/// {
/// "name": "list_directory",
/// "input": {
/// "path": "src/lib"
/// }
/// },
/// {
/// "name": "regex_search",
/// "input": {
/// "regex": "fn run\\("
/// }
/// }
/// ],
/// "run_tools_concurrently": true
/// }
/// ```
/// </example>
///
/// <example>
/// Multiple find-replace operations on the same file (sequential)
///
/// ```json
/// {
/// "invocations": [
/// {
/// "name": "find_replace_file",
/// "input": {
/// "path": "src/config.rs",
/// "display_description": "Update default timeout value",
/// "find": "pub const DEFAULT_TIMEOUT: u64 = 30;\n\npub const MAX_RETRIES: u32 = 3;\n\npub const SERVER_URL: &str = \"https://api.example.com\";",
/// "replace": "pub const DEFAULT_TIMEOUT: u64 = 60;\n\npub const MAX_RETRIES: u32 = 3;\n\npub const SERVER_URL: &str = \"https://api.example.com\";"
/// }
/// },
/// {
/// "name": "find_replace_file",
/// "input": {
/// "path": "src/config.rs",
/// "display_description": "Update API endpoint URL",
/// "find": "pub const MAX_RETRIES: u32 = 3;\n\npub const SERVER_URL: &str = \"https://api.example.com\";\n\npub const API_VERSION: &str = \"v1\";",
/// "replace": "pub const MAX_RETRIES: u32 = 3;\n\npub const SERVER_URL: &str = \"https://api.newdomain.com\";\n\npub const API_VERSION: &str = \"v1\";"
/// }
/// }
/// ],
/// "run_tools_concurrently": false
/// }
/// ```
/// </example>
///
/// <example>
/// Searching and analyzing code (concurrent)
///
/// ```json
/// {
/// "invocations": [
/// {
/// "name": "regex_search",
/// "input": {
/// "regex": "impl Database"
/// }
/// },
/// {
/// "name": "path_search",
/// "input": {
/// "glob": "**/*test*.rs"
/// }
/// }
/// ],
/// "run_tools_concurrently": true
/// }
/// ```
/// </example>
///
/// <example>
/// Multi-file refactoring (concurrent)
///
/// ```json
/// {
/// "invocations": [
/// {
/// "name": "find_replace_file",
/// "input": {
/// "path": "src/models/user.rs",
/// "display_description": "Add email field to User struct",
/// "find": "pub struct User {\n pub id: u64,\n pub username: String,\n pub created_at: DateTime<Utc>,\n}",
/// "replace": "pub struct User {\n pub id: u64,\n pub username: String,\n pub email: String,\n pub created_at: DateTime<Utc>,\n}"
/// }
/// },
/// {
/// "name": "find_replace_file",
/// "input": {
/// "path": "src/db/queries.rs",
/// "display_description": "Update user insertion query",
/// "find": "pub async fn insert_user(conn: &mut Connection, user: &User) -> Result<(), DbError> {\n conn.execute(\n \"INSERT INTO users (id, username, created_at) VALUES ($1, $2, $3)\",\n &[&user.id, &user.username, &user.created_at],\n ).await?;\n \n Ok(())\n}",
/// "replace": "pub async fn insert_user(conn: &mut Connection, user: &User) -> Result<(), DbError> {\n conn.execute(\n \"INSERT INTO users (id, username, email, created_at) VALUES ($1, $2, $3, $4)\",\n &[&user.id, &user.username, &user.email, &user.created_at],\n ).await?;\n \n Ok(())\n}"
/// }
/// }
/// ],
/// "run_tools_concurrently": true
/// }
/// ```
/// </example>
pub invocations: Vec<ToolInvocation>,
/// Whether to run the tools in this batch concurrently. If this is false (the default), the tools will run sequentially.
#[serde(default)]
pub run_tools_concurrently: bool,
}
pub struct BatchTool;
impl Tool for BatchTool {
fn name(&self) -> String {
"batch_tool".into()
}
fn needs_confirmation(&self, input: &serde_json::Value, cx: &App) -> bool {
serde_json::from_value::<BatchToolInput>(input.clone())
.map(|input| {
let working_set = ToolWorkingSet::default();
input.invocations.iter().any(|invocation| {
working_set
.tool(&invocation.name, cx)
.map_or(false, |tool| tool.needs_confirmation(&invocation.input, cx))
})
})
.unwrap_or(false)
}
fn description(&self) -> String {
include_str!("./batch_tool/description.md").into()
}
fn icon(&self) -> IconName {
IconName::Cog
}
fn input_schema(&self, format: LanguageModelToolSchemaFormat) -> serde_json::Value {
json_schema_for::<BatchToolInput>(format)
}
fn ui_text(&self, input: &serde_json::Value) -> String {
match serde_json::from_value::<BatchToolInput>(input.clone()) {
Ok(input) => {
let count = input.invocations.len();
let mode = if input.run_tools_concurrently {
"concurrently"
} else {
"sequentially"
};
let first_tool_name = input
.invocations
.first()
.map(|inv| inv.name.clone())
.unwrap_or_default();
let all_same = input
.invocations
.iter()
.all(|invocation| invocation.name == first_tool_name);
if all_same {
format!(
"Run `{}` {} times {}",
first_tool_name,
input.invocations.len(),
mode
)
} else {
format!("Run {} tools {}", count, mode)
}
}
Err(_) => "Batch tools".to_string(),
}
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let input = match serde_json::from_value::<BatchToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
};
if input.invocations.is_empty() {
return Task::ready(Err(anyhow!("No tool invocations provided")));
}
let run_tools_concurrently = input.run_tools_concurrently;
let foreground_task = {
let working_set = ToolWorkingSet::default();
let invocations = input.invocations;
let messages = messages.to_vec();
cx.spawn(async move |cx| {
let mut tasks = Vec::new();
let mut tool_names = Vec::new();
for invocation in invocations {
let tool_name = invocation.name.clone();
tool_names.push(tool_name.clone());
let tool = cx
.update(|cx| working_set.tool(&tool_name, cx))
.map_err(|err| {
anyhow!("Failed to look up tool '{}': {}", tool_name, err)
})?;
let Some(tool) = tool else {
return Err(anyhow!("Tool '{}' not found", tool_name));
};
let project = project.clone();
let action_log = action_log.clone();
let messages = messages.clone();
let task = cx
.update(|cx| tool.run(invocation.input, &messages, project, action_log, cx))
.map_err(|err| anyhow!("Failed to start tool '{}': {}", tool_name, err))?;
tasks.push(task);
}
Ok((tasks, tool_names))
})
};
cx.background_spawn(async move {
let (tasks, tool_names) = foreground_task.await?;
let mut results = Vec::with_capacity(tasks.len());
if run_tools_concurrently {
results.extend(join_all(tasks).await)
} else {
for task in tasks {
results.push(task.await);
}
};
let mut formatted_results = String::new();
let mut error_occurred = false;
for (i, result) in results.into_iter().enumerate() {
let tool_name = &tool_names[i];
match result {
Ok(output) => {
formatted_results
.push_str(&format!("Tool '{}' result:\n{}\n\n", tool_name, output));
}
Err(err) => {
error_occurred = true;
formatted_results
.push_str(&format!("Tool '{}' error: {}\n\n", tool_name, err));
}
}
}
if error_occurred {
formatted_results
.push_str("Note: Some tool invocations failed. See individual results above.");
}
Ok(formatted_results.trim().to_string())
})
}
}

View File

@@ -0,0 +1,389 @@
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use gpui::{App, Entity, Task};
use language::{self, Anchor, Buffer, ToPointUtf16};
use language_model::LanguageModelRequestMessage;
use project::{self, LspAction, Project};
use regex::Regex;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::{ops::Range, sync::Arc};
use ui::IconName;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct CodeActionToolInput {
/// The relative path to the file containing the text range.
///
/// WARNING: you MUST start this path with one of the project's root directories.
pub path: String,
/// The specific code action to execute.
///
/// If this field is provided, the tool will execute the specified action.
/// If omitted, the tool will list all available code actions for the text range.
///
/// Here are some actions that are commonly supported (though they may not be available for this
/// particular text range; you can omit this field to list all the actions if you want to know
/// what your options are, or you can just try an action and, if it fails, I'll tell you
/// what the available actions were instead):
/// - "quickfix.all" - applies all available quick fixes in the range
/// - "source.organizeImports" - sorts and cleans up import statements
/// - "source.fixAll" - applies all available auto fixes
/// - "refactor.extract" - extracts selected code into a new function or variable
/// - "refactor.inline" - inlines a variable by replacing references with its value
/// - "refactor.rewrite" - general code rewriting operations
/// - "source.addMissingImports" - adds imports for references that lack them
/// - "source.removeUnusedImports" - removes imports that aren't being used
/// - "source.implementInterface" - generates methods required by an interface/trait
/// - "source.generateAccessors" - creates getter/setter methods
/// - "source.convertToAsyncFunction" - converts callback-style code to async/await
///
/// Also, there is a special case: if you specify exactly "textDocument/rename" as the action,
/// then this will rename the symbol to whatever string you specified for the `arguments` field.
pub action: Option<String>,
/// Optional arguments to pass to the code action.
///
/// For rename operations (when action="textDocument/rename"), this should contain the new name.
/// For other code actions, these arguments may be passed to the language server.
pub arguments: Option<serde_json::Value>,
/// The text that comes immediately before the text range in the file.
pub context_before_range: String,
/// The text range. This text must appear in the file right between `context_before_range`
/// and `context_after_range`.
///
/// The file must contain exactly one occurrence of `context_before_range` followed by
/// `text_range` followed by `context_after_range`. If the file contains zero occurrences,
/// or if it contains more than one occurrence, the tool will fail, so it is absolutely
/// critical that you verify ahead of time that the string is unique. You can search
/// the file's contents to verify this ahead of time.
///
/// To make the string more likely to be unique, include a minimum of 1 line of context
/// before the text range, as well as a minimum of 1 line of context after the text range.
/// If these lines of context are not enough to obtain a string that appears only once
/// in the file, then double the number of context lines until the string becomes unique.
/// (Start with 1 line before and 1 line after though, because too much context is
/// needlessly costly.)
///
/// Do not alter the context lines of code in any way, and make sure to preserve all
/// whitespace and indentation for all lines of code. The combined string must be exactly
/// as it appears in the file, or else this tool call will fail.
pub text_range: String,
/// The text that comes immediately after the text range in the file.
pub context_after_range: String,
}
pub struct CodeActionTool;
impl Tool for CodeActionTool {
fn name(&self) -> String {
"code_actions".into()
}
fn needs_confirmation(&self, _input: &serde_json::Value, _cx: &App) -> bool {
false
}
fn description(&self) -> String {
include_str!("./code_action_tool/description.md").into()
}
fn icon(&self) -> IconName {
IconName::Wand
}
fn input_schema(
&self,
_format: language_model::LanguageModelToolSchemaFormat,
) -> serde_json::Value {
let schema = schemars::schema_for!(CodeActionToolInput);
serde_json::to_value(&schema).unwrap()
}
fn ui_text(&self, input: &serde_json::Value) -> String {
match serde_json::from_value::<CodeActionToolInput>(input.clone()) {
Ok(input) => {
if let Some(action) = &input.action {
if action == "textDocument/rename" {
let new_name = match &input.arguments {
Some(serde_json::Value::String(new_name)) => new_name.clone(),
Some(value) => {
if let Ok(new_name) =
serde_json::from_value::<String>(value.clone())
{
new_name
} else {
"invalid name".to_string()
}
}
None => "missing name".to_string(),
};
format!("Rename '{}' to '{}'", input.text_range, new_name)
} else {
format!(
"Execute code action '{}' for '{}'",
action, input.text_range
)
}
} else {
format!("List available code actions for '{}'", input.text_range)
}
}
Err(_) => "Perform code action".to_string(),
}
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<(ResponseDest, String)>> {
let input = match serde_json::from_value::<CodeActionToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
};
cx.spawn(async move |cx| {
let buffer = {
let project_path = project.read_with(cx, |project, cx| {
project
.find_project_path(&input.path, cx)
.context("Path not found in project")
})??;
project.update(cx, |project, cx| project.open_buffer(project_path, cx))?.await?
};
action_log.update(cx, |action_log, cx| {
action_log.buffer_read(buffer.clone(), cx);
})?;
let range = {
let Some(range) = buffer.read_with(cx, |buffer, _cx| {
find_text_range(&buffer, &input.context_before_range, &input.text_range, &input.context_after_range)
})? else {
return Err(anyhow!(
"Failed to locate the text specified by context_before_range, text_range, and context_after_range. Make sure context_before_range and context_after_range each match exactly once in the file."
));
};
range
};
if let Some(action_type) = &input.action {
// Special-case the `rename` operation
let response = if action_type == "textDocument/rename" {
let Some(new_name) = input.arguments.and_then(|args| serde_json::from_value::<String>(args).ok()) else {
return Err(anyhow!("For rename operations, 'arguments' must be a string containing the new name"));
};
let position = buffer.read_with(cx, |buffer, _| {
range.start.to_point_utf16(&buffer.snapshot())
})?;
project
.update(cx, |project, cx| {
project.perform_rename(buffer.clone(), position, new_name.clone(), cx)
})?
.await?;
format!("Renamed '{}' to '{}'", input.text_range, new_name)
} else {
// Get code actions for the range
let actions = project
.update(cx, |project, cx| {
project.code_actions(&buffer, range.clone(), None, cx)
})?
.await?;
if actions.is_empty() {
return Err(anyhow!("No code actions available for this range"));
}
// Find all matching actions
let regex = match Regex::new(action_type) {
Ok(regex) => regex,
Err(err) => return Err(anyhow!("Invalid regex pattern: {}", err)),
};
let mut matching_actions = actions
.into_iter()
.filter(|action| { regex.is_match(action.lsp_action.title()) });
let Some(action) = matching_actions.next() else {
return Err(anyhow!("No code actions match the pattern: {}", action_type));
};
// There should have been exactly one matching action.
if let Some(second) = matching_actions.next() {
let mut all_matches = vec![action, second];
all_matches.extend(matching_actions);
return Err(anyhow!(
"Pattern '{}' matches multiple code actions: {}",
action_type,
all_matches.into_iter().map(|action| action.lsp_action.title().to_string()).collect::<Vec<_>>().join(", ")
));
}
let title = action.lsp_action.title().to_string();
project
.update(cx, |project, cx| {
project.apply_code_action(buffer.clone(), action, true, cx)
})?
.await?;
format!("Completed code action: {}", title)
};
project
.update(cx, |project, cx| project.save_buffer(buffer.clone(), cx))?
.await?;
action_log.update(cx, |log, cx| {
log.buffer_edited(buffer.clone(), cx)
})?;
Ok((ResponseDest::TextOnly, response))
} else {
// No action specified, so list the available ones.
let (position_start, position_end) = buffer.read_with(cx, |buffer, _| {
let snapshot = buffer.snapshot();
(
range.start.to_point_utf16(&snapshot),
range.end.to_point_utf16(&snapshot)
)
})?;
// Convert position to display coordinates (1-based)
let position_start_display = language::Point {
row: position_start.row + 1,
column: position_start.column + 1,
};
let position_end_display = language::Point {
row: position_end.row + 1,
column: position_end.column + 1,
};
// Get code actions for the range
let actions = project
.update(cx, |project, cx| {
project.code_actions(&buffer, range.clone(), None, cx)
})?
.await?;
let mut response = format!(
"Available code actions for text range '{}' at position {}:{} to {}:{} (UTF-16 coordinates):\n\n",
input.text_range,
position_start_display.row, position_start_display.column,
position_end_display.row, position_end_display.column
);
if actions.is_empty() {
response.push_str("No code actions available for this range.");
} else {
for (i, action) in actions.iter().enumerate() {
let title = match &action.lsp_action {
LspAction::Action(code_action) => code_action.title.as_str(),
LspAction::Command(command) => command.title.as_str(),
LspAction::CodeLens(code_lens) => {
if let Some(cmd) = &code_lens.command {
cmd.title.as_str()
} else {
"Unknown code lens"
}
},
};
let kind = match &action.lsp_action {
LspAction::Action(code_action) => {
if let Some(kind) = &code_action.kind {
kind.as_str()
} else {
"unknown"
}
},
LspAction::Command(_) => "command",
LspAction::CodeLens(_) => "code_lens",
};
response.push_str(&format!("{}. {title} ({kind})\n", i + 1));
}
}
Ok((ResponseDest::TextOnly, response))
}
})
}
}
/// Finds the range of the text in the buffer, if it appears between context_before_range
/// and context_after_range, and if that combined string has one unique result in the buffer.
///
/// If an exact match fails, it tries adding a newline to the end of context_before_range and
/// to the beginning of context_after_range to accommodate line-based context matching.
fn find_text_range(
buffer: &Buffer,
context_before_range: &str,
text_range: &str,
context_after_range: &str,
) -> Option<Range<Anchor>> {
let snapshot = buffer.snapshot();
let text = snapshot.text();
// First try with exact match
let search_string = format!("{context_before_range}{text_range}{context_after_range}");
let mut positions = text.match_indices(&search_string);
let position_result = positions.next();
if let Some(position) = position_result {
// Check if the matched string is unique
if positions.next().is_none() {
let range_start = position.0 + context_before_range.len();
let range_end = range_start + text_range.len();
let range_start_anchor = snapshot.anchor_before(snapshot.offset_to_point(range_start));
let range_end_anchor = snapshot.anchor_before(snapshot.offset_to_point(range_end));
return Some(range_start_anchor..range_end_anchor);
}
}
// If exact match fails or is not unique, try with line-based context
// Add a newline to the end of before context and beginning of after context
let line_based_before = if context_before_range.ends_with('\n') {
context_before_range.to_string()
} else {
format!("{context_before_range}\n")
};
let line_based_after = if context_after_range.starts_with('\n') {
context_after_range.to_string()
} else {
format!("\n{context_after_range}")
};
let line_search_string = format!("{line_based_before}{text_range}{line_based_after}");
let mut line_positions = text.match_indices(&line_search_string);
let line_position = line_positions.next()?;
// The line-based search string must also appear exactly once
if line_positions.next().is_some() {
return None;
}
let line_range_start = line_position.0 + line_based_before.len();
let line_range_end = line_range_start + text_range.len();
let line_range_start_anchor =
snapshot.anchor_before(snapshot.offset_to_point(line_range_start));
let line_range_end_anchor = snapshot.anchor_before(snapshot.offset_to_point(line_range_end));
Some(line_range_start_anchor..line_range_end_anchor)
}
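Since exercising `find_text_range` directly requires a `language::Buffer` (and therefore a gpui context), here is a standalone sketch of the same two-pass matching strategy on a plain `&str`; the function name and the offset-pair return type are illustrative only.

```rust
fn find_unique_range(text: &str, before: &str, target: &str, after: &str) -> Option<(usize, usize)> {
    // Pass 1: the exact concatenation must occur exactly once in the text.
    let exact = format!("{before}{target}{after}");
    let mut hits = text.match_indices(&exact);
    if let Some((start, _)) = hits.next() {
        if hits.next().is_none() {
            let s = start + before.len();
            return Some((s, s + target.len()));
        }
    }
    // Pass 2: pad the context with newlines to allow line-based matching.
    let before_ln = if before.ends_with('\n') { before.to_string() } else { format!("{before}\n") };
    let after_ln = if after.starts_with('\n') { after.to_string() } else { format!("\n{after}") };
    let padded = format!("{before_ln}{target}{after_ln}");
    let mut hits = text.match_indices(&padded);
    let (start, _) = hits.next()?;
    if hits.next().is_some() {
        return None; // still ambiguous
    }
    let s = start + before_ln.len();
    Some((s, s + target.len()))
}

fn main() {
    let file = "fn main() {\n    let count = 0;\n    println!(\"{count}\");\n}\n";
    let (start, end) = find_unique_range(file, "fn main() {\n    let ", "count", " = 0;").unwrap();
    // The located range covers exactly the target text, not the later `count`.
    assert_eq!(&file[start..end], "count");
}
```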

View File

@@ -0,0 +1,19 @@
A tool for applying code actions to specific sections of your code. It uses language servers to provide refactoring capabilities similar to what you'd find in an IDE.
This tool can:
- List all available code actions for a selected text range
- Execute a specific code action on that range
- Rename symbols across your codebase. This tool is the preferred way to rename things; always use it rather than textual find/replace when both are available.
Use this tool when you want to:
- Discover what code actions are available for a piece of code
- Apply automatic fixes and code transformations
- Rename variables, functions, or other symbols consistently throughout your project
- Clean up imports, implement interfaces, or perform other language-specific operations
- If unsure what actions are available, call the tool without specifying an action to get a list
- For common operations, you can directly specify actions like "quickfix.all" or "source.organizeImports"
- For renaming, use the special "textDocument/rename" action and provide the new name in the arguments field
- Be specific with your text range and context to ensure the tool identifies the correct code location
The tool will automatically save any changes it makes to your files.
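For instance, a rename through the special-cased `textDocument/rename` action could be requested with input shaped like the following. This is a hypothetical sketch assuming the `serde_json` crate; the field names come from `CodeActionToolInput` above, while the path and code values are made up.

```rust
use serde_json::json;

fn main() {
    let input = json!({
        "path": "zed/src/main.rs",
        "action": "textDocument/rename",
        // For rename operations, `arguments` carries the new name.
        "arguments": "window_count",
        // One line of context on each side keeps the combined string unique.
        "context_before_range": "fn main() {\n    let ",
        "text_range": "count",
        "context_after_range": " = open_windows();\n"
    });
    println!("{}", serde_json::to_string_pretty(&input).unwrap());
}
```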

View File

@@ -4,7 +4,7 @@ use std::sync::Arc;
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use collections::IndexMap;
use gpui::{App, AsyncApp, Entity, Task};
use language::{OutlineItem, ParseStatus, Point};
@@ -129,7 +129,7 @@ impl Tool for CodeSymbolsTool {
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
let input = match serde_json::from_value::<CodeSymbolsInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
@@ -146,9 +146,14 @@ impl Tool for CodeSymbolsTool {
None => None,
};
cx.spawn(async move |cx| match input.path {
Some(path) => file_outline(project, path, action_log, regex, input.offset, cx).await,
None => project_symbols(project, regex, input.offset, cx).await,
cx.spawn(async move |cx| {
match input.path {
Some(path) => {
file_outline(project, path, action_log, regex, input.offset, cx).await
}
None => project_symbols(project, regex, input.offset, cx).await,
}
.map(|output| (ResponseDest::TextOnly, output))
})
}
}

View File

@@ -1,5 +1,6 @@
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::ResponseDest;
use assistant_tool::{ActionLog, Tool};
use gpui::{App, AppContext, Entity, Task};
use language_model::LanguageModelRequestMessage;
@@ -77,7 +78,7 @@ impl Tool for CopyPathTool {
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
let input = match serde_json::from_value::<CopyPathToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
@@ -105,9 +106,9 @@ impl Tool for CopyPathTool {
cx.background_spawn(async move {
match copy_task.await {
Ok(_) => Ok(format!(
"Copied {} to {}",
input.source_path, input.destination_path
Ok(_) => Ok((
ResponseDest::TextOnly,
format!("Copied {} to {}", input.source_path, input.destination_path),
)),
Err(err) => Err(anyhow!(
"Failed to copy {} to {}: {}",

View File

@@ -1,5 +1,6 @@
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::ResponseDest;
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
use language_model::LanguageModelRequestMessage;
@@ -68,7 +69,7 @@ impl Tool for CreateDirectoryTool {
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
let input = match serde_json::from_value::<CreateDirectoryToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
@@ -87,7 +88,10 @@ impl Tool for CreateDirectoryTool {
.await
.map_err(|err| anyhow!("Unable to create directory {destination_path}: {err}"))?;
Ok(format!("Created directory {destination_path}"))
Ok((
ResponseDest::TextOnly,
format!("Created directory {destination_path}"),
))
})
}
}

View File

@@ -1,5 +1,6 @@
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::ResponseDest;
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
use language_model::LanguageModelRequestMessage;
@@ -24,13 +25,6 @@ pub struct CreateFileToolInput {
/// You can create a new file by providing a path of "directory1/new_file.txt"
/// </example>
pub path: String,
/// The text contents of the file to create.
///
/// <example>
/// To create a file with the text "Hello, World!", provide contents of "Hello, World!"
/// </example>
pub contents: String,
}
pub struct CreateFileTool;
@@ -73,7 +67,7 @@ impl Tool for CreateFileTool {
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
let input = match serde_json::from_value::<CreateFileToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
@@ -82,7 +76,6 @@ impl Tool for CreateFileTool {
Some(project_path) => project_path,
None => return Task::ready(Err(anyhow!("Path to create was outside the project"))),
};
let contents: Arc<str> = input.contents.as_str().into();
let destination_path: Arc<str> = input.path.as_str().into();
cx.spawn(async move |cx| {
@@ -93,7 +86,6 @@ impl Tool for CreateFileTool {
.await
.map_err(|err| anyhow!("Unable to open buffer for {destination_path}: {err}"))?;
cx.update(|cx| {
buffer.update(cx, |buffer, cx| buffer.set_text(contents, cx));
action_log.update(cx, |action_log, cx| {
action_log.will_create_buffer(buffer.clone(), cx)
});
@@ -104,7 +96,7 @@ impl Tool for CreateFileTool {
.await
.map_err(|err| anyhow!("Unable to save buffer for {destination_path}: {err}"))?;
Ok(format!("Created file {destination_path}"))
Ok((ResponseDest::File { path: destination_path.clone() }, format!("Created file {destination_path} - next, your response will become its exact contents:")))
})
}
}

View File

@@ -1,3 +1,5 @@
Creates a new file at the specified path within the project, containing the given text content. Returns confirmation that the file was created.
Creates a new file at the specified path within the project. The entire message you respond with will then be streamed into the file.
This tool is the most efficient way to create new files within the project, so it should always be chosen whenever it's necessary to create a new file in the project with specific text content, or whenever a file in the project needs such a drastic change that you would prefer to replace the entire thing instead of making individual edits. This tool should not be used when making changes to parts of an existing file but not all of it. In those cases, it's better to use another approach to edit the file.
This tool is the most efficient way to create new files within the project, so it should always be chosen whenever it's necessary to create a new file in the project with specific text content, or whenever a file in the project needs such a drastic change that you would prefer to replace the entire thing instead of making individual edits. This tool should not be used when making changes to parts of an existing file but not all of it.
Note that *all* the text you respond with will be streamed into the file, so you must ONLY respond with the contents of the file.
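As a minimal sketch of the pair this tool now resolves with (using a simplified local `ResponseDest` mirror rather than the real gpui-backed types, and a made-up destination path):

```rust
use std::sync::Arc;

#[allow(dead_code)] // `TextOnly` is unused in this sketch.
#[derive(Debug, Clone)]
enum ResponseDest {
    File { path: Arc<str> },
    TextOnly,
}

// What `CreateFileTool::run` now resolves with: a destination telling the agent
// loop to stream the model's response into the new file, plus a status line.
fn create_file_result(destination_path: Arc<str>) -> (ResponseDest, String) {
    (
        ResponseDest::File { path: destination_path.clone() },
        format!("Created file {destination_path} - next, your response will become its exact contents:"),
    )
}

fn main() {
    let (dest, message) = create_file_result("src/new_module.rs".into());
    println!("{dest:?}\n{message}");
}
```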

View File

@@ -1,6 +1,6 @@
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use futures::{SinkExt, StreamExt, channel::mpsc};
use gpui::{App, AppContext, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
@@ -63,7 +63,7 @@ impl Tool for DeletePathTool {
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
let path_str = match serde_json::from_value::<DeletePathToolInput>(input) {
Ok(input) => input.path,
Err(err) => return Task::ready(Err(anyhow!(err))),
@@ -124,7 +124,7 @@ impl Tool for DeletePathTool {
match delete {
Some(deletion_task) => match deletion_task.await {
Ok(()) => Ok(format!("Deleted {path_str}")),
Ok(()) => Ok((ResponseDest::TextOnly, format!("Deleted {path_str}"))),
Err(err) => Err(anyhow!("Failed to delete {path_str}: {err}")),
},
None => Err(anyhow!(

View File

@@ -1,6 +1,6 @@
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use gpui::{App, Entity, Task};
use language::{DiagnosticSeverity, OffsetRangeExt};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
@@ -83,7 +83,7 @@ impl Tool for DiagnosticsTool {
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
match serde_json::from_value::<DiagnosticsToolInput>(input)
.ok()
.and_then(|input| input.path)
@@ -119,11 +119,14 @@ impl Tool for DiagnosticsTool {
)?;
}
if output.is_empty() {
Ok("File doesn't have errors or warnings!".to_string())
} else {
Ok(output)
}
Ok((
ResponseDest::TextOnly,
if output.is_empty() {
"File doesn't have errors or warnings!".to_string()
} else {
output
},
))
})
}
_ => {
@@ -154,11 +157,14 @@ impl Tool for DiagnosticsTool {
action_log.checked_project_diagnostics();
});
if has_diagnostics {
Task::ready(Ok(output))
} else {
Task::ready(Ok("No errors or warnings found in the project.".to_string()))
}
Task::ready(Ok((
ResponseDest::TextOnly,
if has_diagnostics {
output
} else {
"No errors or warnings found in the project.".to_string()
},
)))
}
}
}

View File

@@ -4,7 +4,7 @@ use std::sync::Arc;
use crate::schema::json_schema_for;
use anyhow::{Context as _, Result, anyhow, bail};
use assistant_tool::{ActionLog, Tool};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use futures::AsyncReadExt as _;
use gpui::{App, AppContext as _, Entity, Task};
use html_to_markdown::{TagHandler, convert_html_to_markdown, markdown};
@@ -146,7 +146,7 @@ impl Tool for FetchTool {
_project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
let input = match serde_json::from_value::<FetchToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
@@ -164,7 +164,7 @@ impl Tool for FetchTool {
bail!("no textual content found");
}
Ok(text)
Ok((ResponseDest::TextOnly, text))
})
}
}

View File

@@ -1,6 +1,6 @@
use crate::{replace::replace_with_flexible_indent, schema::json_schema_for};
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use gpui::{App, AppContext, AsyncApp, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::Project;
@@ -63,6 +63,16 @@ pub struct FindReplaceFileToolInput {
/// even one character in this string is different in any way from how it appears
/// in the file, then the tool call will fail.
///
/// If you get an error that the `find` string was not found, this means that either
/// you made a mistake, or that the file has changed since you last looked at it.
/// Either way, when this happens, you should retry this tool call until it
/// succeeds, up to 3 times. Each time you retry, you should take another look at
/// the exact text of the file in question, to make sure that you are searching for
/// exactly the right string. Regardless of whether it was because you made a mistake
/// or because the file changed since you last looked at it, you should be extra
/// careful when retrying in this way. It's a bad experience for the user if
/// this `find` string isn't found, so be super careful to get it exactly right!
///
/// <example>
/// If a file contains this code:
///
@@ -159,7 +169,7 @@ impl Tool for FindReplaceFileTool {
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
let input = match serde_json::from_value::<FindReplaceFileToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
@@ -251,8 +261,7 @@ impl Tool for FindReplaceFileTool {
}).await;
Ok(format!("Edited {}:\n\n```diff\n{}\n```", input.path.display(), diff_str))
Ok((ResponseDest::TextOnly,format!("Edited {}:\n\n```diff\n{}\n```", input.path.display(), diff_str)))
})
}
}

View File

@@ -1,8 +1,10 @@
Find one unique part of a file in the project and replace that text with new text.
This tool is the preferred way to make edits to files. If you have multiple edits to make, including edits across multiple files, then make a plan to respond with a single message containing multiple calls to this tool - one call for each find/replace operation.
This tool is the preferred way to make edits to files *except* when making a rename. When making a rename specifically, the rename tool must always be used instead.
You should use this tool when you want to edit a subset of a file's contents, but not the entire file. You should not use this tool when you want to replace the entire contents of a file with completely different contents. You also should not use this tool when you want to move or rename a file. You absolutely must NEVER use this tool to create new files from scratch. If you ever consider using this tool to create a new file from scratch, for any reason, instead you must reconsider and choose a different approach.
If you have multiple edits to make, including edits across multiple files, then make a plan to respond with a single message containing a batch of calls to this tool - one call for each find/replace operation.
You should only use this tool when you want to edit a subset of a file's contents, but not the entire file. You should not use this tool when you want to replace the entire contents of a file with completely different contents. You also should not use this tool when you want to move or rename a file. You absolutely must NEVER use this tool to create new files from scratch. If you ever consider using this tool to create a new file from scratch, for any reason, instead you must reconsider and choose a different approach.
DO NOT call this tool until the code to be edited appears in the conversation! You must use another tool to read the file's contents into the conversation, or ask the user to add it to context first.

View File

@@ -1,6 +1,6 @@
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use gpui::{App, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::Project;
@@ -77,7 +77,7 @@ impl Tool for ListDirectoryTool {
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
let input = match serde_json::from_value::<ListDirectoryToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
@@ -101,7 +101,7 @@ impl Tool for ListDirectoryTool {
.collect::<Vec<_>>()
.join("\n");
return Task::ready(Ok(output));
return Task::ready(Ok((ResponseDest::TextOnly, output)));
}
let Some(project_path) = project.read(cx).find_project_path(&input.path, cx) else {
@@ -132,9 +132,13 @@ impl Tool for ListDirectoryTool {
)
.unwrap();
}
if output.is_empty() {
return Task::ready(Ok(format!("{} is empty.", input.path)));
}
Task::ready(Ok(output))
Task::ready(Ok((
ResponseDest::TextOnly,
if output.is_empty() {
format!("{} is empty.", input.path)
} else {
output
},
)))
}
}

View File

@@ -1,6 +1,6 @@
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use gpui::{App, AppContext, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::Project;
@@ -90,7 +90,7 @@ impl Tool for MovePathTool {
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
let input = match serde_json::from_value::<MovePathToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
@@ -116,9 +116,9 @@ impl Tool for MovePathTool {
cx.background_spawn(async move {
match rename_task.await {
Ok(_) => Ok(format!(
"Moved {} to {}",
input.source_path, input.destination_path
Ok(_) => Ok((
ResponseDest::TextOnly,
format!("Moved {} to {}", input.source_path, input.destination_path),
)),
Err(err) => Err(anyhow!(
"Failed to move {} to {}: {}",

View File

@@ -2,7 +2,7 @@ use std::sync::Arc;
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use chrono::{Local, Utc};
use gpui::{App, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
@@ -60,7 +60,7 @@ impl Tool for NowTool {
_project: Entity<Project>,
_action_log: Entity<ActionLog>,
_cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
let input: NowToolInput = match serde_json::from_value(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
@@ -72,6 +72,6 @@ impl Tool for NowTool {
};
let text = format!("The current datetime is {now}.");
Task::ready(Ok(text))
Task::ready(Ok((ResponseDest::TextOnly, text)))
}
}

View File

@@ -1,6 +1,6 @@
use crate::schema::json_schema_for;
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use gpui::{App, AppContext, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::Project;
@@ -53,7 +53,7 @@ impl Tool for OpenTool {
_project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
let input: OpenToolInput = match serde_json::from_value(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
@@ -62,7 +62,10 @@ impl Tool for OpenTool {
cx.background_spawn(async move {
open::that(&input.path_or_url).context("Failed to open URL or file path")?;
Ok(format!("Successfully opened {}", input.path_or_url))
Ok((
ResponseDest::TextOnly,
format!("Successfully opened {}", input.path_or_url),
))
})
}
}

View File

@@ -1,6 +1,6 @@
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use gpui::{App, AppContext, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::Project;
@@ -71,7 +71,7 @@ impl Tool for PathSearchTool {
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
let (offset, glob) = match serde_json::from_value::<PathSearchToolInput>(input) {
Ok(input) => (input.offset, input.glob),
Err(err) => return Task::ready(Err(anyhow!(err))),
@@ -109,33 +109,35 @@ impl Tool for PathSearchTool {
}
}
if matches.is_empty() {
Ok(format!("No paths in the project matched the glob {glob:?}"))
} else {
// Sort to group entries in the same directory together.
matches.sort();
let total_matches = matches.len();
let response = if total_matches > RESULTS_PER_PAGE + offset as usize {
let paginated_matches: Vec<_> = matches
.into_iter()
.skip(offset as usize)
.take(RESULTS_PER_PAGE)
.collect();
format!(
"Found {} total matches. Showing results {}-{} (provide 'offset' parameter for more results):\n\n{}",
total_matches,
offset + 1,
offset as usize + paginated_matches.len(),
paginated_matches.join("\n")
)
Ok((ResponseDest::TextOnly,
if matches.is_empty() {
format!("No paths in the project matched the glob {glob:?}")
} else {
matches.join("\n")
};
// Sort to group entries in the same directory together.
matches.sort();
Ok(response)
}
})
let total_matches = matches.len();
let response = if total_matches > RESULTS_PER_PAGE + offset as usize {
let paginated_matches: Vec<_> = matches
.into_iter()
.skip(offset as usize)
.take(RESULTS_PER_PAGE)
.collect();
format!(
"Found {} total matches. Showing results {}-{} (provide 'offset' parameter for more results):\n\n{}",
total_matches,
offset + 1,
offset as usize + paginated_matches.len(),
paginated_matches.join("\n")
)
} else {
matches.join("\n")
};
response
}
))
})
}
}

View File

@@ -2,7 +2,7 @@ use std::sync::Arc;
use crate::{code_symbols_tool::file_outline, schema::json_schema_for};
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use gpui::{App, Entity, Task};
use itertools::Itertools;
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
@@ -88,7 +88,7 @@ impl Tool for ReadFileTool {
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
let input = match serde_json::from_value::<ReadFileToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
@@ -106,46 +106,48 @@ impl Tool for ReadFileTool {
})?
.await?;
// Check if specific line ranges are provided
if input.start_line.is_some() || input.end_line.is_some() {
let result = buffer.read_with(cx, |buffer, _cx| {
let text = buffer.text();
let start = input.start_line.unwrap_or(1);
let lines = text.split('\n').skip(start - 1);
if let Some(end) = input.end_line {
let count = end.saturating_sub(start).max(1); // Ensure at least 1 line
Itertools::intersperse(lines.take(count), "\n").collect()
} else {
Itertools::intersperse(lines, "\n").collect()
}
})?;
action_log.update(cx, |log, cx| {
log.buffer_read(buffer, cx);
})?;
Ok(result)
} else {
// No line ranges specified, so check file size to see if it's too big.
let file_size = buffer.read_with(cx, |buffer, _cx| buffer.text().len())?;
if file_size <= MAX_FILE_SIZE_TO_READ {
// File is small enough, so return its contents.
let result = buffer.read_with(cx, |buffer, _cx| buffer.text())?;
Ok((ResponseDest::TextOnly,
// Check if specific line ranges are provided
if input.start_line.is_some() || input.end_line.is_some() {
let answer = buffer.read_with(cx, |buffer, _cx| {
let text = buffer.text();
let start = input.start_line.unwrap_or(1);
let lines = text.split('\n').skip(start - 1);
if let Some(end) = input.end_line {
let count = end.saturating_sub(start).max(1); // Ensure at least 1 line
Itertools::intersperse(lines.take(count), "\n").collect()
} else {
Itertools::intersperse(lines, "\n").collect()
}
})?;
action_log.update(cx, |log, cx| {
log.buffer_read(buffer, cx);
})?;
Ok(result)
answer
} else {
// File is too big, so return an error with the outline
// and a suggestion to read again with line numbers.
let outline = file_outline(project, file_path, action_log, None, 0, cx).await?;
// No line ranges specified, so check file size to see if it's too big.
let file_size = buffer.read_with(cx, |buffer, _cx| buffer.text().len())?;
Ok(format!("This file was too big to read all at once. Here is an outline of its symbols:\n\n{outline}\n\nUsing the line numbers in this outline, you can call this tool again while specifying the start_line and end_line fields to see the implementations of symbols in the outline."))
if file_size <= MAX_FILE_SIZE_TO_READ {
// File is small enough, so return its contents.
let answer = buffer.read_with(cx, |buffer, _cx| buffer.text())?;
action_log.update(cx, |log, cx| {
log.buffer_read(buffer, cx);
})?;
answer
} else {
// File is too big, so return an error with the outline
// and a suggestion to read again with line numbers.
let outline = file_outline(project, file_path, action_log, None, 0, cx).await?;
format!("This file was too big to read all at once. Here is an outline of its symbols:\n\n{outline}\n\nUsing the line numbers in this outline, you can call this tool again while specifying the start_line and end_line fields to see the implementations of symbols in the outline.")
}
}
}
))
})
}
}

View File

@@ -1,6 +1,6 @@
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use futures::StreamExt;
use gpui::{App, Entity, Task};
use language::OffsetRangeExt;
@@ -26,6 +26,10 @@ pub struct RegexSearchToolInput {
/// When not provided, starts from the beginning.
#[serde(default)]
pub offset: u32,
/// Whether the regex is case-sensitive. Defaults to false (case-insensitive).
#[serde(default)]
pub case_sensitive: bool,
}
impl RegexSearchToolInput {
@@ -64,12 +68,17 @@ impl Tool for RegexSearchTool {
match serde_json::from_value::<RegexSearchToolInput>(input.clone()) {
Ok(input) => {
let page = input.page();
let regex = MarkdownString::inline_code(&input.regex);
let regex_str = MarkdownString::inline_code(&input.regex);
let case_info = if input.case_sensitive {
" (case-sensitive)"
} else {
""
};
if page > 1 {
format!("Get page {page} of search results for regex {regex}")
format!("Get page {page} of search results for regex {regex_str}{case_info}")
} else {
format!("Search files for regex {regex}")
format!("Search files for regex {regex_str}{case_info}")
}
}
Err(_) => "Search with regex".to_string(),
@@ -83,17 +92,19 @@ impl Tool for RegexSearchTool {
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
const CONTEXT_LINES: u32 = 2;
let (offset, regex) = match serde_json::from_value::<RegexSearchToolInput>(input) {
Ok(input) => (input.offset, input.regex),
Err(err) => return Task::ready(Err(anyhow!(err))),
};
let (offset, regex, case_sensitive) =
match serde_json::from_value::<RegexSearchToolInput>(input) {
Ok(input) => (input.offset, input.regex, input.case_sensitive),
Err(err) => return Task::ready(Err(anyhow!(err))),
};
let query = match SearchQuery::regex(
&regex,
false,
case_sensitive,
false,
false,
PathMatcher::default(),
@@ -178,18 +189,20 @@ impl Tool for RegexSearchTool {
})??;
}
if matches_found == 0 {
Ok("No matches found".to_string())
} else if has_more_matches {
Ok(format!(
"Showing matches {}-{} (there were more matches found; use offset: {} to see next page):\n{output}",
offset + 1,
offset + matches_found,
offset + RESULTS_PER_PAGE,
))
} else {
Ok(format!("Found {matches_found} matches:\n{output}"))
}
Ok((ResponseDest::TextOnly,
if matches_found == 0 {
"No matches found".to_string()
} else if has_more_matches {
format!(
"Showing matches {}-{} (there were more matches found; use offset: {} to see next page):\n{output}",
offset + 1,
offset + matches_found,
offset + RESULTS_PER_PAGE,
)
} else {
format!("Found {matches_found} matches:\n{output}")
}
))
})
}
}

View File

@@ -0,0 +1,205 @@
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use gpui::{App, Entity, Task};
use language::{self, Buffer, ToPointUtf16};
use language_model::LanguageModelRequestMessage;
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use ui::IconName;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct RenameToolInput {
/// The relative path to the file containing the symbol to rename.
///
/// WARNING: you MUST start this path with one of the project's root directories.
pub path: String,
/// The new name to give to the symbol.
pub new_name: String,
/// The text that comes immediately before the symbol in the file.
pub context_before_symbol: String,
/// The symbol to rename. This text must appear in the file right between
/// `context_before_symbol` and `context_after_symbol`.
///
/// The file must contain exactly one occurrence of `context_before_symbol` followed by
/// `symbol` followed by `context_after_symbol`. If the file contains zero occurrences,
/// or if it contains more than one occurrence, the tool will fail, so it is absolutely
/// critical that you verify ahead of time that the string is unique. You can search
/// the file's contents to verify this ahead of time.
///
/// To make the string more likely to be unique, include a minimum of 1 line of context
/// before the symbol, as well as a minimum of 1 line of context after the symbol.
/// If these lines of context are not enough to obtain a string that appears only once
/// in the file, then double the number of context lines until the string becomes unique.
/// (Start with 1 line before and 1 line after though, because too much context is
/// needlessly costly.)
///
/// Do not alter the context lines of code in any way, and make sure to preserve all
/// whitespace and indentation for all lines of code. The combined string must be exactly
/// as it appears in the file, or else this tool call will fail.
pub symbol: String,
/// The text that comes immediately after the symbol in the file.
pub context_after_symbol: String,
}
pub struct RenameTool;
impl Tool for RenameTool {
fn name(&self) -> String {
"rename".into()
}
fn needs_confirmation(&self, _input: &serde_json::Value, _cx: &App) -> bool {
false
}
fn description(&self) -> String {
include_str!("./rename_tool/description.md").into()
}
fn icon(&self) -> IconName {
IconName::Pencil
}
fn input_schema(
&self,
_format: language_model::LanguageModelToolSchemaFormat,
) -> serde_json::Value {
let schema = schemars::schema_for!(RenameToolInput);
serde_json::to_value(&schema).unwrap()
}
fn ui_text(&self, input: &serde_json::Value) -> String {
match serde_json::from_value::<RenameToolInput>(input.clone()) {
Ok(input) => {
format!("Rename '{}' to '{}'", input.symbol, input.new_name)
}
Err(_) => "Rename symbol".to_string(),
}
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<(ResponseDest, String)>> {
let input = match serde_json::from_value::<RenameToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
};
cx.spawn(async move |cx| {
let buffer = {
let project_path = project.read_with(cx, |project, cx| {
project
.find_project_path(&input.path, cx)
.context("Path not found in project")
})??;
project.update(cx, |project, cx| project.open_buffer(project_path, cx))?.await?
};
action_log.update(cx, |action_log, cx| {
action_log.buffer_read(buffer.clone(), cx);
})?;
let position = {
let Some(position) = buffer.read_with(cx, |buffer, _cx| {
find_symbol_position(&buffer, &input.context_before_symbol, &input.symbol, &input.context_after_symbol)
})? else {
return Err(anyhow!(
"Failed to locate the symbol specified by context_before_symbol, symbol, and context_after_symbol. Make sure context_before_symbol and context_after_symbol each match exactly once in the file."
));
};
buffer.read_with(cx, |buffer, _| {
position.to_point_utf16(&buffer.snapshot())
})?
};
project
.update(cx, |project, cx| {
project.perform_rename(buffer.clone(), position, input.new_name.clone(), cx)
})?
.await?;
project
.update(cx, |project, cx| project.save_buffer(buffer.clone(), cx))?
.await?;
action_log.update(cx, |log, cx| {
log.buffer_edited(buffer.clone(), cx)
})?;
Ok((ResponseDest::TextOnly, format!("Renamed '{}' to '{}'", input.symbol, input.new_name)))
})
}
}
/// Finds the position of the symbol in the buffer, if it appears between context_before_symbol
/// and context_after_symbol, and if that combined string has one unique result in the buffer.
///
/// If an exact match fails, it tries adding a newline to the end of context_before_symbol and
/// to the beginning of context_after_symbol to accommodate line-based context matching.
fn find_symbol_position(
buffer: &Buffer,
context_before_symbol: &str,
symbol: &str,
context_after_symbol: &str,
) -> Option<language::Anchor> {
let snapshot = buffer.snapshot();
let text = snapshot.text();
// First try with exact match
let search_string = format!("{context_before_symbol}{symbol}{context_after_symbol}");
let mut positions = text.match_indices(&search_string);
let position_result = positions.next();
if let Some(position) = position_result {
// Check if the matched string is unique
if positions.next().is_none() {
let symbol_start = position.0 + context_before_symbol.len();
let symbol_start_anchor =
snapshot.anchor_before(snapshot.offset_to_point(symbol_start));
return Some(symbol_start_anchor);
}
}
// If exact match fails or is not unique, try with line-based context
// Add a newline to the end of before context and beginning of after context
let line_based_before = if context_before_symbol.ends_with('\n') {
context_before_symbol.to_string()
} else {
format!("{context_before_symbol}\n")
};
let line_based_after = if context_after_symbol.starts_with('\n') {
context_after_symbol.to_string()
} else {
format!("\n{context_after_symbol}")
};
let line_search_string = format!("{line_based_before}{symbol}{line_based_after}");
let mut line_positions = text.match_indices(&line_search_string);
let line_position = line_positions.next()?;
// The line-based search string must also appear exactly once
if line_positions.next().is_some() {
return None;
}
let line_symbol_start = line_position.0 + line_based_before.len();
let line_symbol_start_anchor =
snapshot.anchor_before(snapshot.offset_to_point(line_symbol_start));
Some(line_symbol_start_anchor)
}
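// Editor's illustrative sketch (hypothetical, not part of the original change): the same
// two-pass idea on a plain `&str`, without the `Buffer`/anchor machinery. This simplified
// fallback always forces a newline between the context strings and the symbol, whereas the
// function above only adds one where it is missing.
#[allow(dead_code)]
fn example_unique_symbol_offset(text: &str, before: &str, symbol: &str, after: &str) -> Option<usize> {
    // First pass: the exact concatenation must match exactly once.
    let exact = format!("{before}{symbol}{after}");
    let mut hits = text.match_indices(&exact);
    if let Some((offset, _)) = hits.next() {
        if hits.next().is_none() {
            return Some(offset + before.len());
        }
    }
    // Second pass: retry with line-based context, which must also match exactly once.
    let line_based = format!("{before}\n{symbol}\n{after}");
    let mut hits = text.match_indices(&line_based);
    let (offset, _) = hits.next()?;
    if hits.next().is_some() {
        return None;
    }
    Some(offset + before.len() + 1)
}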

View File

@@ -0,0 +1,15 @@
Renames a symbol across your codebase using the language server's semantic knowledge.
This tool performs a rename refactoring operation on a specified symbol. It uses the project's language server to analyze the code and perform the rename correctly across all files where the symbol is referenced.
Unlike a simple find and replace, this tool understands the semantic meaning of the code, so it only renames the specific symbol you specify and not unrelated text that happens to have the same name.
Examples of symbols you can rename:
- Variables
- Functions
- Classes/structs
- Fields/properties
- Methods
- Interfaces/traits
The language server handles updating all references to the renamed symbol throughout the codebase.
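For example, renaming a method named `get_user` on one struct leaves an unrelated free function that also happens to be called `get_user` untouched, because the language server resolves actual references instead of matching text.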

View File

@@ -1,5 +1,5 @@
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use gpui::{App, AsyncApp, Entity, Task};
use language::{self, Anchor, Buffer, BufferSnapshot, Location, Point, ToPoint, ToPointUtf16};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
@@ -122,7 +122,7 @@ impl Tool for SymbolInfoTool {
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
let input = match serde_json::from_value::<SymbolInfoToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
@@ -203,7 +203,7 @@ impl Tool for SymbolInfoTool {
if output.is_empty() {
Err(anyhow!("None found."))
} else {
Ok(output)
Ok((ResponseDest::TextOnly, output))
}
})
}

View File

@@ -0,0 +1,369 @@
use crate::schema::json_schema_for;
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use futures::io::BufReader;
use futures::{AsyncBufReadExt, AsyncReadExt, FutureExt};
use gpui::{App, AppContext, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::future;
use util::get_system_shell;
use std::path::Path;
use std::sync::Arc;
use ui::IconName;
use util::command::new_smol_command;
use util::markdown::MarkdownString;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct TerminalToolInput {
/// The one-liner command to execute.
command: String,
/// Working directory for the command. This must be one of the root directories of the project.
cd: String,
}
pub struct TerminalTool;
impl Tool for TerminalTool {
fn name(&self) -> String {
"terminal".to_string()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
true
}
fn description(&self) -> String {
include_str!("./terminal_tool/description.md").to_string()
}
fn icon(&self) -> IconName {
IconName::Terminal
}
fn input_schema(&self, format: LanguageModelToolSchemaFormat) -> serde_json::Value {
json_schema_for::<TerminalToolInput>(format)
}
fn ui_text(&self, input: &serde_json::Value) -> String {
match serde_json::from_value::<TerminalToolInput>(input.clone()) {
Ok(input) => {
let mut lines = input.command.lines();
let first_line = lines.next().unwrap_or_default();
let remaining_line_count = lines.count();
match remaining_line_count {
0 => MarkdownString::inline_code(&first_line).0,
1 => {
MarkdownString::inline_code(&format!(
"{} - {} more line",
first_line, remaining_line_count
))
.0
}
n => {
MarkdownString::inline_code(&format!("{} - {} more lines", first_line, n)).0
}
}
}
Err(_) => "Run terminal command".to_string(),
}
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<(ResponseDest, String)>> {
let input: TerminalToolInput = match serde_json::from_value(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
};
let project = project.read(cx);
let input_path = Path::new(&input.cd);
let working_dir = if input.cd == "." {
// Accept "." as meaning "the one worktree" if we only have one worktree.
let mut worktrees = project.worktrees(cx);
let only_worktree = match worktrees.next() {
Some(worktree) => worktree,
None => return Task::ready(Err(anyhow!("No worktrees found in the project"))),
};
if worktrees.next().is_some() {
return Task::ready(Err(anyhow!(
"'.' is ambiguous in multi-root workspaces. Please specify a root directory explicitly."
)));
}
only_worktree.read(cx).abs_path()
} else if input_path.is_absolute() {
// Absolute paths are allowed, but only if they're in one of the project's worktrees.
if !project
.worktrees(cx)
.any(|worktree| input_path.starts_with(&worktree.read(cx).abs_path()))
{
return Task::ready(Err(anyhow!(
"The absolute path must be within one of the project's worktrees"
)));
}
input_path.into()
} else {
let Some(worktree) = project.worktree_for_root_name(&input.cd, cx) else {
return Task::ready(Err(anyhow!(
"`cd` directory {} not found in the project",
&input.cd
)));
};
worktree.read(cx).abs_path()
};
cx.background_spawn(run_command_limited(working_dir, input.command))
}
}
const LIMIT: usize = 16 * 1024;
async fn run_command_limited(
working_dir: Arc<Path>,
command: String,
) -> Result<(ResponseDest, String)> {
let shell = get_system_shell();
let mut cmd = new_smol_command(&shell)
.arg("-c")
.arg(&command)
.current_dir(working_dir)
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::piped())
.spawn()
.context("Failed to execute terminal command")?;
let mut combined_buffer = String::with_capacity(LIMIT + 1);
let mut out_reader = BufReader::new(cmd.stdout.take().context("Failed to get stdout")?);
let mut out_tmp_buffer = String::with_capacity(512);
let mut err_reader = BufReader::new(cmd.stderr.take().context("Failed to get stderr")?);
let mut err_tmp_buffer = String::with_capacity(512);
let mut out_line = Box::pin(
out_reader
.read_line(&mut out_tmp_buffer)
.left_future()
.fuse(),
);
let mut err_line = Box::pin(
err_reader
.read_line(&mut err_tmp_buffer)
.left_future()
.fuse(),
);
let mut has_stdout = true;
let mut has_stderr = true;
while (has_stdout || has_stderr) && combined_buffer.len() < LIMIT + 1 {
futures::select_biased! {
read = out_line => {
drop(out_line);
combined_buffer.extend(out_tmp_buffer.drain(..));
if read? == 0 {
out_line = Box::pin(future::pending().right_future().fuse());
has_stdout = false;
} else {
out_line = Box::pin(out_reader.read_line(&mut out_tmp_buffer).left_future().fuse());
}
}
read = err_line => {
drop(err_line);
combined_buffer.extend(err_tmp_buffer.drain(..));
if read? == 0 {
err_line = Box::pin(future::pending().right_future().fuse());
has_stderr = false;
} else {
err_line = Box::pin(err_reader.read_line(&mut err_tmp_buffer).left_future().fuse());
}
}
};
}
drop((out_line, err_line));
let truncated = combined_buffer.len() > LIMIT;
combined_buffer.truncate(LIMIT);
consume_reader(out_reader, truncated).await?;
consume_reader(err_reader, truncated).await?;
let status = cmd.status().await.context("Failed to get command status")?;
let output_string = if truncated {
// Valid to find `\n` in UTF-8 since 0-127 ASCII characters are not used in
// multi-byte characters.
let last_line_ix = combined_buffer.bytes().rposition(|b| b == b'\n');
let combined_buffer = &combined_buffer[..last_line_ix.unwrap_or(combined_buffer.len())];
format!(
"Command output too long. The first {} bytes:\n\n{}",
combined_buffer.len(),
output_block(&combined_buffer),
)
} else {
output_block(&combined_buffer)
};
let output_with_status = if status.success() {
if output_string.is_empty() {
"Command executed successfully.".to_string()
} else {
output_string.to_string()
}
} else {
format!(
"Command failed with exit code {} (shell: {}).\n\n{}",
status.code().unwrap_or(-1),
shell,
output_string,
)
};
Ok((ResponseDest::TextOnly, output_with_status))
}
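// Editor's illustrative sketch (not part of the original change): the truncate-to-the-last-
// complete-line step above, in isolation. Searching for `b'\n'` bytewise is valid in UTF-8
// because bytes 0-127 never occur inside a multi-byte character; if no newline is found,
// the whole (already length-limited) output is kept, matching the `unwrap_or` above.
#[allow(dead_code)]
fn example_trim_to_last_full_line(truncated: &str) -> &str {
    match truncated.bytes().rposition(|b| b == b'\n') {
        Some(last_newline_ix) => &truncated[..last_newline_ix],
        None => truncated,
    }
}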
async fn consume_reader<T: AsyncReadExt + Unpin>(
mut reader: BufReader<T>,
truncated: bool,
) -> Result<(), std::io::Error> {
loop {
let skipped_bytes = reader.fill_buf().await?;
if skipped_bytes.is_empty() {
break;
}
let skipped_bytes_len = skipped_bytes.len();
reader.consume_unpin(skipped_bytes_len);
// Should only skip if we went over the limit
debug_assert!(truncated);
}
Ok(())
}
fn output_block(output: &str) -> String {
format!(
"```\n{}{}```",
output,
if output.ends_with('\n') { "" } else { "\n" }
)
}
#[cfg(test)]
#[cfg(not(windows))]
mod tests {
use gpui::TestAppContext;
use super::*;
#[gpui::test(iterations = 10)]
async fn test_run_command_simple(cx: &mut TestAppContext) {
cx.executor().allow_parking();
let result =
run_command_limited(Path::new(".").into(), "echo 'Hello, World!'".to_string()).await;
assert!(result.is_ok());
assert_eq!(result.unwrap().1, "```\nHello, World!\n```");
}
#[gpui::test(iterations = 10)]
async fn test_interleaved_stdout_stderr(cx: &mut TestAppContext) {
cx.executor().allow_parking();
let command = "echo 'stdout 1' && sleep 0.01 && echo 'stderr 1' >&2 && sleep 0.01 && echo 'stdout 2' && sleep 0.01 && echo 'stderr 2' >&2";
let result = run_command_limited(Path::new(".").into(), command.to_string()).await;
assert!(result.is_ok());
assert_eq!(
result.unwrap().1,
"```\nstdout 1\nstderr 1\nstdout 2\nstderr 2\n```"
);
}
#[gpui::test(iterations = 10)]
async fn test_multiple_output_reads(cx: &mut TestAppContext) {
cx.executor().allow_parking();
// Command with multiple outputs that might require multiple reads
let result = run_command_limited(
Path::new(".").into(),
"echo '1'; sleep 0.01; echo '2'; sleep 0.01; echo '3'".to_string(),
)
.await;
assert!(result.is_ok());
assert_eq!(result.unwrap().1, "```\n1\n2\n3\n```");
}
#[gpui::test(iterations = 10)]
async fn test_output_truncation_single_line(cx: &mut TestAppContext) {
cx.executor().allow_parking();
let cmd = format!("echo '{}'; sleep 0.01;", "X".repeat(LIMIT * 2));
let result = run_command_limited(Path::new(".").into(), cmd).await;
assert!(result.is_ok());
let output = result.unwrap().1;
let content_start = output.find("```\n").map(|i| i + 4).unwrap_or(0);
let content_end = output.rfind("\n```").unwrap_or(output.len());
let content_length = content_end - content_start;
// Output should be exactly the limit
assert_eq!(content_length, LIMIT);
}
#[gpui::test(iterations = 10)]
async fn test_output_truncation_multiline(cx: &mut TestAppContext) {
cx.executor().allow_parking();
let cmd = format!("echo '{}'; ", "X".repeat(120)).repeat(160);
let result = run_command_limited(Path::new(".").into(), cmd).await;
assert!(result.is_ok());
let output = result.unwrap().1;
assert!(output.starts_with("Command output too long. The first 16334 bytes:\n\n"));
let content_start = output.find("```\n").map(|i| i + 4).unwrap_or(0);
let content_end = output.rfind("\n```").unwrap_or(output.len());
let content_length = content_end - content_start;
assert!(content_length <= LIMIT);
}
#[gpui::test(iterations = 10)]
async fn test_command_failure(cx: &mut TestAppContext) {
cx.executor().allow_parking();
let result = run_command_limited(Path::new(".").into(), "exit 42".to_string()).await;
assert!(result.is_ok());
let output = result.unwrap().1;
// The failure message includes the shell path taken from $SHELL
let shell_path = std::env::var("SHELL").unwrap_or("bash".to_string());
let expected_output = format!(
"Command failed with exit code 42 (shell: {}).\n\n```\n\n```",
shell_path
);
assert_eq!(output, expected_output);
}
}

View File

@@ -0,0 +1,9 @@
Executes a shell one-liner and returns the combined output.
This tool spawns a process using the user's current shell, combines stdout and stderr into one interleaved stream as they are produced (preserving the order of writes), and captures that stream into a string which is returned.
Make sure you use the `cd` parameter to navigate to one of the root directories of the project. NEVER change directories as part of the `command` itself, or the command will fail.
Do not use this tool for commands that run indefinitely, such as servers (e.g., `python -m http.server`) or file watchers that don't terminate on their own.
Remember that each invocation of this tool will spawn a new shell process, so you can't rely on any state from previous invocations.
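For example (hypothetical project layout): with a single worktree root named `zed`, prefer setting `cd` to `zed` and `command` to `cargo test -p editor` over embedding a directory change such as `cd crates/editor && cargo test` inside the command itself.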

View File

@@ -2,7 +2,7 @@ use std::sync::Arc;
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use assistant_tool::{ActionLog, ResponseDest, Tool};
use gpui::{App, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::Project;
@@ -51,10 +51,10 @@ impl Tool for ThinkingTool {
_project: Entity<Project>,
_action_log: Entity<ActionLog>,
_cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
// This tool just "thinks out loud" and doesn't perform any actions.
Task::ready(match serde_json::from_value::<ThinkingToolInput>(input) {
Ok(_input) => Ok("Finished thinking.".to_string()),
Ok(_input) => Ok((ResponseDest::TextOnly, "Finished thinking.".to_string())),
Err(err) => Err(anyhow!(err)),
})
}

View File

@@ -44,7 +44,7 @@ log.workspace = true
nanoid.workspace = true
open_ai.workspace = true
parking_lot.workspace = true
prometheus = "0.13"
prometheus = "0.14"
prost.workspace = true
rand.workspace = true
reqwest = { version = "0.11", features = ["json"] }

View File

@@ -34,6 +34,7 @@ static MENTIONS_SEARCH: LazyLock<SearchQuery> = LazyLock::new(|| {
false,
false,
false,
false,
Default::default(),
Default::default(),
None,

View File

@@ -1,7 +1,7 @@
use std::sync::Arc;
use anyhow::{Result, anyhow, bail};
use assistant_tool::{ActionLog, Tool, ToolSource};
use assistant_tool::{ActionLog, ResponseDest, Tool, ToolSource};
use gpui::{App, Entity, Task};
use icons::IconName;
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
@@ -76,7 +76,7 @@ impl Tool for ContextServerTool {
_project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
) -> Task<Result<(ResponseDest, String)>> {
if let Some(server) = self.server_manager.read(cx).get_server(&self.server_id) {
let tool_name = self.tool.name.clone();
let server_clone = server.clone();
@@ -114,7 +114,7 @@ impl Tool for ContextServerTool {
}
}
}
Ok(result)
Ok((ResponseDest::TextOnly, result))
})
} else {
Task::ready(Err(anyhow!("Context server not found")))

View File

@@ -93,7 +93,7 @@ pub struct TcpArguments {
pub port: u16,
pub timeout: Option<u64>,
}
#[derive(Debug, Clone)]
#[derive(Default, Debug, Clone)]
pub struct DebugAdapterBinary {
pub command: String,
pub arguments: Option<Vec<OsString>>,
@@ -102,6 +102,7 @@ pub struct DebugAdapterBinary {
pub connection: Option<TcpArguments>,
}
#[derive(Debug)]
pub struct AdapterVersion {
pub tag_name: String,
pub url: String,

View File

@@ -0,0 +1,141 @@
use std::{path::PathBuf, sync::OnceLock};
use anyhow::{Result, bail};
use async_trait::async_trait;
use dap::adapters::latest_github_release;
use gpui::AsyncApp;
use task::{DebugAdapterConfig, DebugRequestType, DebugTaskDefinition};
use crate::*;
#[derive(Default)]
pub(crate) struct CodeLldbDebugAdapter {
last_known_version: OnceLock<String>,
}
impl CodeLldbDebugAdapter {
const ADAPTER_NAME: &'static str = "CodeLLDB";
}
#[async_trait(?Send)]
impl DebugAdapter for CodeLldbDebugAdapter {
fn name(&self) -> DebugAdapterName {
DebugAdapterName(Self::ADAPTER_NAME.into())
}
async fn install_binary(
&self,
version: AdapterVersion,
delegate: &dyn DapDelegate,
) -> Result<()> {
adapters::download_adapter_from_github(
self.name(),
version,
adapters::DownloadedFileType::Vsix,
delegate,
)
.await?;
Ok(())
}
async fn fetch_latest_adapter_version(
&self,
delegate: &dyn DapDelegate,
) -> Result<AdapterVersion> {
let release =
latest_github_release("vadimcn/codelldb", true, false, delegate.http_client()).await?;
let arch = match std::env::consts::ARCH {
"aarch64" => "arm64",
"x86_64" => "x64",
_ => {
return Err(anyhow!(
"unsupported architecture {}",
std::env::consts::ARCH
));
}
};
let platform = match std::env::consts::OS {
"macos" => "darwin",
"linux" => "linux",
"windows" => "win32",
_ => {
return Err(anyhow!(
"unsupported operating system {}",
std::env::consts::OS
));
}
};
let asset_name = format!("codelldb-{platform}-{arch}.vsix");
let _ = self.last_known_version.set(release.tag_name.clone());
let ret = AdapterVersion {
tag_name: release.tag_name,
url: release
.assets
.iter()
.find(|asset| asset.name == asset_name)
.ok_or_else(|| anyhow!("no asset found matching {:?}", asset_name))?
.browser_download_url
.clone(),
};
Ok(ret)
}
async fn get_installed_binary(
&self,
_: &dyn DapDelegate,
_: &DebugAdapterConfig,
_: Option<PathBuf>,
_: &mut AsyncApp,
) -> Result<DebugAdapterBinary> {
let Some(version) = self.last_known_version.get() else {
bail!("Could not determine latest CodeLLDB version");
};
let adapter_path = paths::debug_adapters_dir().join(&Self::ADAPTER_NAME);
let version_path = adapter_path.join(format!("{}_{}", Self::ADAPTER_NAME, version));
let adapter_dir = version_path.join("extension").join("adapter");
let command = adapter_dir.join("codelldb");
let command = command
.to_str()
.map(ToOwned::to_owned)
.ok_or_else(|| anyhow!("Adapter path is expected to be valid UTF-8"))?;
Ok(DebugAdapterBinary {
command,
cwd: Some(adapter_dir),
..Default::default()
})
}
fn request_args(&self, config: &DebugTaskDefinition) -> Value {
let mut args = json!({
"request": match config.request {
DebugRequestType::Launch(_) => "launch",
DebugRequestType::Attach(_) => "attach",
},
});
let map = args.as_object_mut().unwrap();
match &config.request {
DebugRequestType::Attach(attach) => {
map.insert("pid".into(), attach.process_id.into());
}
DebugRequestType::Launch(launch) => {
map.insert("program".into(), launch.program.clone().into());
if !launch.args.is_empty() {
map.insert("args".into(), launch.args.clone().into());
}
if let Some(stop_on_entry) = config.stop_on_entry {
map.insert("stopOnEntry".into(), stop_on_entry.into());
}
if let Some(cwd) = launch.cwd.as_ref() {
map.insert("cwd".into(), cwd.to_string_lossy().into_owned().into());
}
}
}
args
}
}

View File

@@ -1,3 +1,4 @@
mod codelldb;
mod gdb;
mod go;
mod javascript;
@@ -9,6 +10,7 @@ use std::{net::Ipv4Addr, sync::Arc};
use anyhow::{Result, anyhow};
use async_trait::async_trait;
use codelldb::CodeLldbDebugAdapter;
use dap::{
DapRegistry,
adapters::{
@@ -26,6 +28,7 @@ use serde_json::{Value, json};
use task::{DebugAdapterConfig, TCPHost};
pub fn init(registry: Arc<DapRegistry>) {
registry.add_adapter(Arc::from(CodeLldbDebugAdapter::default()));
registry.add_adapter(Arc::from(PythonDebugAdapter));
registry.add_adapter(Arc::from(PhpDebugAdapter));
registry.add_adapter(Arc::from(JsDebugAdapter::default()));

View File

@@ -15,6 +15,7 @@ use gpui::{
Action, App, AsyncWindowContext, Context, Entity, EntityId, EventEmitter, FocusHandle,
Focusable, Subscription, Task, WeakEntity, actions,
};
use project::{
Project,
debugger::{
@@ -25,7 +26,9 @@ use project::{
};
use rpc::proto::{self};
use settings::Settings;
use std::{any::TypeId, path::PathBuf};
use std::any::TypeId;
use std::path::Path;
use std::sync::Arc;
use task::DebugTaskDefinition;
use terminal_view::terminal_panel::TerminalPanel;
use ui::{ContextMenu, Divider, DropdownMenu, Tooltip, prelude::*};
@@ -92,6 +95,87 @@ impl DebugPanel {
})
}
fn filter_action_types(&self, cx: &mut App) {
let (has_active_session, supports_restart, support_step_back, status) = self
.active_session()
.map(|item| {
let running = item.read(cx).mode().as_running().cloned();
match running {
Some(running) => {
let caps = running.read(cx).capabilities(cx);
(
!running.read(cx).session().read(cx).is_terminated(),
caps.supports_restart_request.unwrap_or_default(),
caps.supports_step_back.unwrap_or_default(),
running.read(cx).thread_status(cx),
)
}
None => (false, false, false, None),
}
})
.unwrap_or((false, false, false, None));
let filter = CommandPaletteFilter::global_mut(cx);
let debugger_action_types = [
TypeId::of::<Disconnect>(),
TypeId::of::<Stop>(),
TypeId::of::<ToggleIgnoreBreakpoints>(),
];
let running_action_types = [TypeId::of::<Pause>()];
let stopped_action_type = [
TypeId::of::<Continue>(),
TypeId::of::<StepOver>(),
TypeId::of::<StepInto>(),
TypeId::of::<StepOut>(),
TypeId::of::<editor::actions::DebuggerRunToCursor>(),
TypeId::of::<editor::actions::DebuggerEvaluateSelectedText>(),
];
let step_back_action_type = [TypeId::of::<StepBack>()];
let restart_action_type = [TypeId::of::<Restart>()];
if has_active_session {
filter.show_action_types(debugger_action_types.iter());
if supports_restart {
filter.show_action_types(restart_action_type.iter());
} else {
filter.hide_action_types(&restart_action_type);
}
if support_step_back {
filter.show_action_types(step_back_action_type.iter());
} else {
filter.hide_action_types(&step_back_action_type);
}
match status {
Some(ThreadStatus::Running) => {
filter.show_action_types(running_action_types.iter());
filter.hide_action_types(&stopped_action_type);
}
Some(ThreadStatus::Stopped) => {
filter.show_action_types(stopped_action_type.iter());
filter.hide_action_types(&running_action_types);
}
_ => {
filter.hide_action_types(&running_action_types);
filter.hide_action_types(&stopped_action_type);
}
}
} else {
// show only the `debug: start`
filter.hide_action_types(&debugger_action_types);
filter.hide_action_types(&step_back_action_type);
filter.hide_action_types(&restart_action_type);
filter.hide_action_types(&running_action_types);
filter.hide_action_types(&stopped_action_type);
}
}
pub fn load(
workspace: WeakEntity<Workspace>,
cx: AsyncWindowContext,
@@ -109,63 +193,15 @@ impl DebugPanel {
)
});
cx.observe_new::<DebugPanel>(|debug_panel, _, cx| {
Self::filter_action_types(debug_panel, cx);
})
.detach();
cx.observe(&debug_panel, |_, debug_panel, cx| {
let (has_active_session, supports_restart, support_step_back) = debug_panel
.update(cx, |this, cx| {
this.active_session()
.map(|item| {
let running = item.read(cx).mode().as_running().cloned();
match running {
Some(running) => {
let caps = running.read(cx).capabilities(cx);
(
true,
caps.supports_restart_request.unwrap_or_default(),
caps.supports_step_back.unwrap_or_default(),
)
}
None => (false, false, false),
}
})
.unwrap_or((false, false, false))
});
let filter = CommandPaletteFilter::global_mut(cx);
let debugger_action_types = [
TypeId::of::<Continue>(),
TypeId::of::<StepOver>(),
TypeId::of::<StepInto>(),
TypeId::of::<StepOut>(),
TypeId::of::<Stop>(),
TypeId::of::<Disconnect>(),
TypeId::of::<Pause>(),
TypeId::of::<ToggleIgnoreBreakpoints>(),
];
let step_back_action_type = [TypeId::of::<StepBack>()];
let restart_action_type = [TypeId::of::<Restart>()];
if has_active_session {
filter.show_action_types(debugger_action_types.iter());
if supports_restart {
filter.show_action_types(restart_action_type.iter());
} else {
filter.hide_action_types(&restart_action_type);
}
if support_step_back {
filter.show_action_types(step_back_action_type.iter());
} else {
filter.hide_action_types(&step_back_action_type);
}
} else {
// show only the `debug: start`
filter.hide_action_types(&debugger_action_types);
filter.hide_action_types(&step_back_action_type);
filter.hide_action_types(&restart_action_type);
}
debug_panel.update(cx, |debug_panel, cx| {
Self::filter_action_types(debug_panel, cx);
});
})
.detach();
@@ -241,6 +277,12 @@ impl DebugPanel {
cx,
);
if let Some(running) = session_item.read(cx).mode().as_running().cloned() {
// We might want to make this an event subscription and only notify when a new thread is selected
// This is used to filter the command menu correctly
cx.observe(&running, |_, _, cx| cx.notify()).detach();
}
self.sessions.push(session_item.clone());
self.activate_session(session_item, window, cx);
}
@@ -272,7 +314,7 @@ impl DebugPanel {
fn handle_run_in_terminal_request(
&self,
title: Option<String>,
cwd: PathBuf,
cwd: Option<Arc<Path>>,
command: Option<String>,
args: Vec<String>,
envs: HashMap<String, String>,
@@ -358,6 +400,8 @@ impl DebugPanel {
self.active_session = self.sessions.first().cloned();
}
}
cx.notify();
}
fn sessions_drop_down_menu(
@@ -376,7 +420,7 @@ impl DebugPanel {
ContextMenu::build(window, cx, move |mut this, _, _| {
for session in sessions.into_iter() {
let weak_session = session.downgrade();
let weak_id = weak_session.entity_id();
let weak_session_id = weak_session.entity_id();
this = this.custom_entry(
{
@@ -398,7 +442,8 @@ impl DebugPanel {
let weak = weak.clone();
move |_, _, cx| {
weak.update(cx, |panel, cx| {
panel.close_session(weak_id, cx);
panel
.close_session(weak_session_id, cx);
})
.ok();
}

View File

@@ -1,10 +1,13 @@
use dap::debugger_settings::DebuggerSettings;
use debugger_panel::{DebugPanel, ToggleFocus};
use editor::Editor;
use feature_flags::{Debugger, FeatureFlagViewExt};
use gpui::{App, actions};
use gpui::{App, EntityInputHandler, actions};
use new_session_modal::NewSessionModal;
use project::debugger::{self, breakpoint_store::SourceBreakpoint};
use session::DebugSession;
use settings::Settings;
use util::maybe;
use workspace::{ShutdownDebugAdapters, Workspace};
pub mod attach_modal;
@@ -110,7 +113,9 @@ pub fn init(cx: &mut App) {
.active_session()
.and_then(|session| session.read(cx).mode().as_running().cloned())
}) {
active_item.update(cx, |item, cx| item.stop_thread(cx))
cx.defer(move |cx| {
active_item.update(cx, |item, cx| item.stop_thread(cx))
})
}
}
})
@@ -155,4 +160,91 @@ pub fn init(cx: &mut App) {
})
})
.detach();
cx.observe_new({
move |editor: &mut Editor, _, cx| {
editor
.register_action(cx.listener(
move |editor, _: &editor::actions::DebuggerRunToCursor, _, cx| {
maybe!({
let debug_panel =
editor.workspace()?.read(cx).panel::<DebugPanel>(cx)?;
let cursor_point: language::Point = editor.selections.newest(cx).head();
let active_session = debug_panel.read(cx).active_session()?;
let (buffer, position, _) = editor
.buffer()
.read(cx)
.point_to_buffer_point(cursor_point, cx)?;
let path =
debugger::breakpoint_store::BreakpointStore::abs_path_from_buffer(
&buffer, cx,
)?;
let source_breakpoint = SourceBreakpoint {
row: position.row,
path,
message: None,
condition: None,
hit_condition: None,
state: debugger::breakpoint_store::BreakpointState::Enabled,
};
active_session
.update(cx, |session_item, _| {
session_item.mode().as_running().cloned()
})?
.update(cx, |state, cx| {
if let Some(thread_id) = state.selected_thread_id() {
state.session().update(cx, |session, cx| {
session.run_to_position(
source_breakpoint,
thread_id,
cx,
);
})
}
});
Some(())
});
},
))
.detach();
editor
.register_action(cx.listener(
move |editor, _: &editor::actions::DebuggerEvaluateSelectedText, window, cx| {
maybe!({
let debug_panel =
editor.workspace()?.read(cx).panel::<DebugPanel>(cx)?;
let active_session = debug_panel.read(cx).active_session()?;
let text = editor.text_for_range(
editor.selections.newest(cx).range(),
&mut None,
window,
cx,
)?;
active_session
.update(cx, |session_item, _| {
session_item.mode().as_running().cloned()
})?
.update(cx, |state, cx| {
let stack_id = state.selected_stack_frame_id(cx);
state.session().update(cx, |session, cx| {
session.evaluate(text, None, stack_id, None, cx);
})
});
Some(())
});
},
))
.detach();
}
})
.detach();
}

View File

@@ -1,5 +1,7 @@
pub mod running;
use std::sync::OnceLock;
use dap::client::SessionId;
use gpui::{App, Entity, EventEmitter, FocusHandle, Focusable, Subscription, Task, WeakEntity};
use project::Project;
@@ -30,6 +32,7 @@ impl DebugSessionState {
pub struct DebugSession {
remote_id: Option<workspace::ViewId>,
mode: DebugSessionState,
label: OnceLock<String>,
dap_store: WeakEntity<DapStore>,
_debug_panel: WeakEntity<DebugPanel>,
_worktree_store: WeakEntity<WorktreeStore>,
@@ -68,6 +71,7 @@ impl DebugSession {
})],
remote_id: None,
mode: DebugSessionState::Running(mode),
label: OnceLock::new(),
dap_store: project.read(cx).dap_store().downgrade(),
_debug_panel,
_worktree_store: project.read(cx).worktree_store().downgrade(),
@@ -92,36 +96,45 @@ impl DebugSession {
}
pub(crate) fn label(&self, cx: &App) -> String {
if let Some(label) = self.label.get() {
return label.to_owned();
}
let session_id = match &self.mode {
DebugSessionState::Running(running_state) => running_state.read(cx).session_id(),
};
let Ok(Some(session)) = self
.dap_store
.read_with(cx, |store, _| store.session_by_id(session_id))
else {
return "".to_owned();
};
session
.read(cx)
.as_local()
.expect("Remote Debug Sessions are not implemented yet")
.label()
self.label
.get_or_init(|| {
session
.read(cx)
.as_local()
.expect("Remote Debug Sessions are not implemented yet")
.label()
})
.to_owned()
}
pub(crate) fn label_element(&self, cx: &App) -> AnyElement {
let label = self.label(cx);
let (icon, color) = match &self.mode {
let icon = match &self.mode {
DebugSessionState::Running(state) => {
if state.read(cx).session().read(cx).is_terminated() {
(Some(Indicator::dot().color(Color::Error)), Color::Error)
Some(Indicator::dot().color(Color::Error))
} else {
match state.read(cx).thread_status(cx).unwrap_or_default() {
project::debugger::session::ThreadStatus::Stopped => (
Some(Indicator::dot().color(Color::Conflict)),
Color::Conflict,
),
_ => (Some(Indicator::dot().color(Color::Success)), Color::Success),
project::debugger::session::ThreadStatus::Stopped => {
Some(Indicator::dot().color(Color::Conflict))
}
_ => Some(Indicator::dot().color(Color::Success)),
}
}
}
@@ -131,7 +144,7 @@ impl DebugSession {
.gap_2()
.when_some(icon, |this, indicator| this.child(indicator))
.justify_between()
.child(Label::new(label).color(color))
.child(Label::new(label))
.into_any_element()
}
}

View File

@@ -432,6 +432,10 @@ impl RunningState {
self.session_id
}
pub(crate) fn selected_stack_frame_id(&self, cx: &App) -> Option<dap::StackFrameId> {
self.stack_frame_list.read(cx).selected_stack_frame_id()
}
#[cfg(test)]
pub fn stack_frame_list(&self) -> &Entity<StackFrameList> {
&self.stack_frame_list
@@ -492,7 +496,6 @@ impl RunningState {
}
}
#[cfg(test)]
pub(crate) fn selected_thread_id(&self) -> Option<ThreadId> {
self.thread_id
}

View File

@@ -141,7 +141,7 @@ impl Console {
state.evaluate(
expression,
Some(dap::EvaluateArgumentsContext::Variables),
self.stack_frame_list.read(cx).current_stack_frame_id(),
self.stack_frame_list.read(cx).selected_stack_frame_id(),
None,
cx,
);
@@ -384,7 +384,7 @@ impl ConsoleQueryBarCompletionProvider {
) -> Task<Result<Option<Vec<Completion>>>> {
let completion_task = console.update(cx, |console, cx| {
console.session.update(cx, |state, cx| {
let frame_id = console.stack_frame_list.read(cx).current_stack_frame_id();
let frame_id = console.stack_frame_list.read(cx).selected_stack_frame_id();
state.completions(
CompletionsQuery::new(buffer.read(cx), buffer_position, frame_id),

View File

@@ -31,7 +31,7 @@ pub struct StackFrameList {
invalidate: bool,
entries: Vec<StackFrameEntry>,
workspace: WeakEntity<Workspace>,
current_stack_frame_id: Option<StackFrameId>,
selected_stack_frame_id: Option<StackFrameId>,
scrollbar_state: ScrollbarState,
}
@@ -85,7 +85,7 @@ impl StackFrameList {
_subscription,
invalidate: true,
entries: Default::default(),
current_stack_frame_id: None,
selected_stack_frame_id: None,
}
}
@@ -132,8 +132,8 @@ impl StackFrameList {
.unwrap_or(0)
}
pub fn current_stack_frame_id(&self) -> Option<StackFrameId> {
self.current_stack_frame_id
pub fn selected_stack_frame_id(&self) -> Option<StackFrameId> {
self.selected_stack_frame_id
}
pub(super) fn refresh(&mut self, cx: &mut Context<Self>) {
@@ -188,20 +188,20 @@ impl StackFrameList {
}
pub fn go_to_selected_stack_frame(&mut self, window: &Window, cx: &mut Context<Self>) {
if let Some(current_stack_frame_id) = self.current_stack_frame_id {
if let Some(selected_stack_frame_id) = self.selected_stack_frame_id {
let frame = self
.entries
.iter()
.find_map(|entry| match entry {
StackFrameEntry::Normal(dap) => {
if dap.id == current_stack_frame_id {
if dap.id == selected_stack_frame_id {
Some(dap)
} else {
None
}
}
StackFrameEntry::Collapsed(daps) => {
daps.iter().find(|dap| dap.id == current_stack_frame_id)
daps.iter().find(|dap| dap.id == selected_stack_frame_id)
}
})
.cloned();
@@ -220,7 +220,7 @@ impl StackFrameList {
window: &Window,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
self.current_stack_frame_id = Some(stack_frame.id);
self.selected_stack_frame_id = Some(stack_frame.id);
cx.emit(StackFrameListEvent::SelectedStackFrameChanged(
stack_frame.id,
@@ -319,7 +319,7 @@ impl StackFrameList {
cx: &mut Context<Self>,
) -> AnyElement {
let source = stack_frame.source.clone();
let is_selected_frame = Some(stack_frame.id) == self.current_stack_frame_id;
let is_selected_frame = Some(stack_frame.id) == self.selected_stack_frame_id;
let formatted_path = format!(
"{}:{}",

View File

@@ -191,7 +191,7 @@ async fn test_fetch_initial_stack_frames_and_go_to_stack_frame(
.update(cx, |state, _| state.stack_frame_list().clone());
stack_frame_list.update(cx, |stack_frame_list, cx| {
assert_eq!(Some(1), stack_frame_list.current_stack_frame_id());
assert_eq!(Some(1), stack_frame_list.selected_stack_frame_id());
assert_eq!(stack_frames, stack_frame_list.dap_stack_frames(cx));
});
});
@@ -425,7 +425,7 @@ async fn test_select_stack_frame(executor: BackgroundExecutor, cx: &mut TestAppC
.unwrap();
stack_frame_list.update(cx, |stack_frame_list, cx| {
assert_eq!(Some(1), stack_frame_list.current_stack_frame_id());
assert_eq!(Some(1), stack_frame_list.selected_stack_frame_id());
assert_eq!(stack_frames, stack_frame_list.dap_stack_frames(cx));
});
@@ -440,7 +440,7 @@ async fn test_select_stack_frame(executor: BackgroundExecutor, cx: &mut TestAppC
cx.run_until_parked();
stack_frame_list.update(cx, |stack_frame_list, cx| {
assert_eq!(Some(2), stack_frame_list.current_stack_frame_id());
assert_eq!(Some(2), stack_frame_list.selected_stack_frame_id());
assert_eq!(stack_frames, stack_frame_list.dap_stack_frames(cx));
});

View File

@@ -212,7 +212,7 @@ async fn test_basic_fetch_initial_scope_and_variables(
running_state.update(cx, |running_state, cx| {
let (stack_frame_list, stack_frame_id) =
running_state.stack_frame_list().update(cx, |list, _| {
(list.flatten_entries(), list.current_stack_frame_id())
(list.flatten_entries(), list.selected_stack_frame_id())
});
assert_eq!(stack_frames, stack_frame_list);
@@ -483,7 +483,7 @@ async fn test_fetch_variables_for_multiple_scopes(
running_state.update(cx, |running_state, cx| {
let (stack_frame_list, stack_frame_id) =
running_state.stack_frame_list().update(cx, |list, _| {
(list.flatten_entries(), list.current_stack_frame_id())
(list.flatten_entries(), list.selected_stack_frame_id())
});
assert_eq!(Some(1), stack_frame_id);
@@ -1565,7 +1565,7 @@ async fn test_variable_list_only_sends_requests_when_rendering(
running_state.update(cx, |running_state, cx| {
let (stack_frame_list, stack_frame_id) =
running_state.stack_frame_list().update(cx, |list, _| {
(list.flatten_entries(), list.current_stack_frame_id())
(list.flatten_entries(), list.selected_stack_frame_id())
});
assert_eq!(Some(1), stack_frame_id);
@@ -1877,7 +1877,7 @@ async fn test_it_fetches_scopes_variables_when_you_select_a_stack_frame(
running_state.update(cx, |running_state, cx| {
let (stack_frame_list, stack_frame_id) =
running_state.stack_frame_list().update(cx, |list, _| {
(list.flatten_entries(), list.current_stack_frame_id())
(list.flatten_entries(), list.selected_stack_frame_id())
});
let variable_list = running_state.variable_list().read(cx);
@@ -1888,7 +1888,7 @@ async fn test_it_fetches_scopes_variables_when_you_select_a_stack_frame(
running_state
.stack_frame_list()
.read(cx)
.current_stack_frame_id(),
.selected_stack_frame_id(),
Some(1)
);
@@ -1934,7 +1934,7 @@ async fn test_it_fetches_scopes_variables_when_you_select_a_stack_frame(
running_state.update(cx, |running_state, cx| {
let (stack_frame_list, stack_frame_id) =
running_state.stack_frame_list().update(cx, |list, _| {
(list.flatten_entries(), list.current_stack_frame_id())
(list.flatten_entries(), list.selected_stack_frame_id())
});
let variable_list = running_state.variable_list().read(cx);

View File

@@ -35,6 +35,7 @@ assets.workspace = true
client.workspace = true
clock.workspace = true
collections.workspace = true
command_palette_hooks.workspace = true
convert_case.workspace = true
db.workspace = true
buffer_diff.workspace = true

View File

@@ -110,20 +110,6 @@ pub struct ToggleComments {
pub ignore_indent: bool,
}
#[derive(PartialEq, Clone, Deserialize, Default, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct FoldAt {
#[serde(skip)]
pub buffer_row: MultiBufferRow,
}
#[derive(PartialEq, Clone, Deserialize, Default, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct UnfoldAt {
#[serde(skip)]
pub buffer_row: MultiBufferRow,
}
#[derive(PartialEq, Clone, Deserialize, Default, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct MoveUpByLines {
@@ -226,7 +212,6 @@ impl_actions!(
ExpandExcerpts,
ExpandExcerptsDown,
ExpandExcerptsUp,
FoldAt,
HandleInput,
MoveDownByLines,
MovePageDown,
@@ -244,7 +229,6 @@ impl_actions!(
ShowCompletions,
ToggleCodeActions,
ToggleComments,
UnfoldAt,
FoldAtLevel,
]
);
@@ -420,9 +404,12 @@ actions!(
Tab,
Backtab,
ToggleBreakpoint,
ToggleCase,
DisableBreakpoint,
EnableBreakpoint,
EditLogBreakpoint,
DebuggerRunToCursor,
DebuggerEvaluateSelectedText,
ToggleAutoSignatureHelp,
ToggleGitBlameInline,
OpenGitBlameCommit,

View File

@@ -6415,6 +6415,9 @@ impl Editor {
"Set Breakpoint"
};
let run_to_cursor = command_palette_hooks::CommandPaletteFilter::try_global(cx)
.map_or(false, |filter| !filter.is_hidden(&DebuggerRunToCursor));
let toggle_state_msg = breakpoint.as_ref().map_or(None, |bp| match bp.1.state {
BreakpointState::Enabled => Some("Disable"),
BreakpointState::Disabled => Some("Enable"),
@@ -6426,6 +6429,21 @@ impl Editor {
ui::ContextMenu::build(window, cx, |menu, _, _cx| {
menu.on_blur_subscription(Subscription::new(|| {}))
.context(focus_handle)
.when(run_to_cursor, |this| {
let weak_editor = weak_editor.clone();
this.entry("Run to cursor", None, move |window, cx| {
weak_editor
.update(cx, |editor, cx| {
editor.change_selections(None, window, cx, |s| {
s.select_ranges([Point::new(row, 0)..Point::new(row, 0)])
});
})
.ok();
window.dispatch_action(Box::new(DebuggerRunToCursor), cx);
})
.separator()
})
.when_some(toggle_state_msg, |this, msg| {
this.entry(msg, None, {
let weak_editor = weak_editor.clone();
@@ -8209,12 +8227,18 @@ impl Editor {
IndentSize::tab()
} else {
let tab_size = settings.tab_size.get();
let char_column = snapshot
let indent_remainder = snapshot
.text_for_range(Point::new(cursor.row, 0)..cursor)
.flat_map(str::chars)
.count()
+ row_delta as usize;
let chars_to_next_tab_stop = tab_size - (char_column as u32 % tab_size);
.fold(row_delta % tab_size, |counter: u32, c| {
if c == '\t' {
0
} else {
(counter + 1) % tab_size
}
});
let chars_to_next_tab_stop = tab_size - indent_remainder;
IndentSize::spaces(chars_to_next_tab_stop)
};
selection.start = Point::new(cursor.row, cursor.column + row_delta + tab_size.len);
@@ -9143,6 +9167,17 @@ impl Editor {
});
}
pub fn toggle_case(&mut self, _: &ToggleCase, window: &mut Window, cx: &mut Context<Self>) {
self.manipulate_text(window, cx, |text| {
let has_upper_case_characters = text.chars().any(|c| c.is_uppercase());
if has_upper_case_characters {
text.to_lowercase()
} else {
text.to_uppercase()
}
})
}
pub fn convert_to_upper_case(
&mut self,
_: &ConvertToUpperCase,
@@ -11556,7 +11591,7 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Editor>,
) {
this.unfold_ranges(&[range.clone()], false, true, cx);
this.unfold_ranges(&[range.clone()], false, auto_scroll.is_some(), cx);
this.change_selections(auto_scroll, window, cx, |s| {
if replace_newest {
s.delete(s.newest_anchor().id);
@@ -11731,16 +11766,21 @@ impl Editor {
return Ok(());
}
let mut new_selections = self.selections.all::<usize>(cx);
let mut new_selections = Vec::new();
let reversed = self.selections.oldest::<usize>(cx).reversed;
let buffer = &display_map.buffer_snapshot;
let query_matches = select_next_state
.query
.stream_find_iter(buffer.bytes_in_range(0..buffer.len()));
for query_match in query_matches {
let query_match = query_match.unwrap(); // can only fail due to I/O
let offset_range = query_match.start()..query_match.end();
for query_match in query_matches.into_iter() {
let query_match = query_match.context("query match for select all action")?; // can only fail due to I/O
let offset_range = if reversed {
query_match.end()..query_match.start()
} else {
query_match.start()..query_match.end()
};
let display_range = offset_range.start.to_display_point(&display_map)
..offset_range.end.to_display_point(&display_map);
@@ -11748,52 +11788,14 @@ impl Editor {
|| (!movement::is_inside_word(&display_map, display_range.start)
&& !movement::is_inside_word(&display_map, display_range.end))
{
self.selections.change_with(cx, |selections| {
new_selections.push(Selection {
id: selections.new_selection_id(),
start: offset_range.start,
end: offset_range.end,
reversed: false,
goal: SelectionGoal::None,
});
});
new_selections.push(offset_range.start..offset_range.end);
}
}
new_selections.sort_by_key(|selection| selection.start);
let mut ix = 0;
while ix + 1 < new_selections.len() {
let current_selection = &new_selections[ix];
let next_selection = &new_selections[ix + 1];
if current_selection.range().overlaps(&next_selection.range()) {
if current_selection.id < next_selection.id {
new_selections.remove(ix + 1);
} else {
new_selections.remove(ix);
}
} else {
ix += 1;
}
}
let reversed = self.selections.oldest::<usize>(cx).reversed;
for selection in new_selections.iter_mut() {
selection.reversed = reversed;
}
select_next_state.done = true;
self.unfold_ranges(
&new_selections
.iter()
.map(|selection| selection.range())
.collect::<Vec<_>>(),
false,
false,
cx,
);
self.change_selections(Some(Autoscroll::fit()), window, cx, |selections| {
selections.select(new_selections)
self.unfold_ranges(&new_selections.clone(), false, false, cx);
self.change_selections(None, window, cx, |selections| {
selections.select_ranges(new_selections)
});
Ok(())
@@ -14898,8 +14900,12 @@ impl Editor {
self.fold_creases(to_fold, true, window, cx);
}
pub fn fold_at(&mut self, fold_at: &FoldAt, window: &mut Window, cx: &mut Context<Self>) {
let buffer_row = fold_at.buffer_row;
pub fn fold_at(
&mut self,
buffer_row: MultiBufferRow,
window: &mut Window,
cx: &mut Context<Self>,
) {
let display_map = self.display_map.update(cx, |map, cx| map.snapshot(cx));
if let Some(crease) = display_map.crease_for_buffer_row(buffer_row) {
@@ -14969,16 +14975,16 @@ impl Editor {
pub fn unfold_at(
&mut self,
unfold_at: &UnfoldAt,
buffer_row: MultiBufferRow,
_window: &mut Window,
cx: &mut Context<Self>,
) {
let display_map = self.display_map.update(cx, |map, cx| map.snapshot(cx));
let intersection_range = Point::new(unfold_at.buffer_row.0, 0)
let intersection_range = Point::new(buffer_row.0, 0)
..Point::new(
unfold_at.buffer_row.0,
display_map.buffer_snapshot.line_len(unfold_at.buffer_row),
buffer_row.0,
display_map.buffer_snapshot.line_len(buffer_row),
);
let autoscroll = self
@@ -19347,15 +19353,11 @@ impl EditorSnapshot {
Arc::new(move |folded, window: &mut Window, cx: &mut App| {
if folded {
editor.update(cx, |editor, cx| {
editor.fold_at(&crate::FoldAt { buffer_row }, window, cx)
editor.fold_at(buffer_row, window, cx)
});
} else {
editor.update(cx, |editor, cx| {
editor.unfold_at(
&crate::UnfoldAt { buffer_row },
window,
cx,
)
editor.unfold_at(buffer_row, window, cx)
});
}
});
@@ -19379,9 +19381,9 @@ impl EditorSnapshot {
.toggle_state(folded)
.on_click(window.listener_for(&editor, move |this, _e, window, cx| {
if folded {
this.unfold_at(&UnfoldAt { buffer_row }, window, cx);
this.unfold_at(buffer_row, window, cx);
} else {
this.fold_at(&FoldAt { buffer_row }, window, cx);
this.fold_at(buffer_row, window, cx);
}
}))
.into_any_element(),

View File

@@ -2918,7 +2918,32 @@ async fn test_tab_in_leading_whitespace_auto_indents_lines(cx: &mut TestAppConte
}
#[gpui::test]
async fn test_tab_with_mixed_whitespace(cx: &mut TestAppContext) {
async fn test_tab_with_mixed_whitespace_txt(cx: &mut TestAppContext) {
init_test(cx, |settings| {
settings.defaults.tab_size = NonZeroU32::new(3)
});
let mut cx = EditorTestContext::new(cx).await;
cx.set_state(indoc! {"
ˇ
\t ˇ
\t ˇ
\t ˇ
\t \t\t \t \t\t \t\t \t \t ˇ
"});
cx.update_editor(|e, window, cx| e.tab(&Tab, window, cx));
cx.assert_editor_state(indoc! {"
ˇ
\t ˇ
\t ˇ
\t ˇ
\t \t\t \t \t\t \t\t \t \t ˇ
"});
}
#[gpui::test]
async fn test_tab_with_mixed_whitespace_rust(cx: &mut TestAppContext) {
init_test(cx, |settings| {
settings.defaults.tab_size = NonZeroU32::new(4)
});
@@ -3875,6 +3900,41 @@ async fn test_manipulate_lines_with_multi_selection(cx: &mut TestAppContext) {
"});
}
#[gpui::test]
async fn test_toggle_case(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
// If all lower case -> upper case
cx.set_state(indoc! {"
«hello worldˇ»
"});
cx.update_editor(|e, window, cx| e.toggle_case(&ToggleCase, window, cx));
cx.assert_editor_state(indoc! {"
«HELLO WORLDˇ»
"});
// If all upper case -> lower case
cx.set_state(indoc! {"
«HELLO WORLDˇ»
"});
cx.update_editor(|e, window, cx| e.toggle_case(&ToggleCase, window, cx));
cx.assert_editor_state(indoc! {"
«hello worldˇ»
"});
// If any upper case characters are identified -> lower case
// This matches JetBrains IDEs
cx.set_state(indoc! {"
«hEllo worldˇ»
"});
cx.update_editor(|e, window, cx| e.toggle_case(&ToggleCase, window, cx));
cx.assert_editor_state(indoc! {"
«hello worldˇ»
"});
}
#[gpui::test]
async fn test_manipulate_text(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -5782,6 +5842,37 @@ async fn test_select_all_matches(cx: &mut TestAppContext) {
cx.assert_editor_state("abc\n« ˇ»abc\nabc");
}
#[gpui::test]
async fn test_select_all_matches_does_not_scroll(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
let large_body_1 = "\nd".repeat(200);
let large_body_2 = "\ne".repeat(200);
cx.set_state(&format!(
"abc\nabc{large_body_1} «ˇa»bc{large_body_2}\nefabc\nabc"
));
let initial_scroll_position = cx.update_editor(|editor, _, cx| {
let scroll_position = editor.scroll_position(cx);
assert!(scroll_position.y > 0.0, "Initial selection is between two large bodies and should have the editor scrolled to it");
scroll_position
});
cx.update_editor(|e, window, cx| e.select_all_matches(&SelectAllMatches, window, cx))
.unwrap();
cx.assert_editor_state(&format!(
"«ˇa»bc\n«ˇa»bc{large_body_1} «ˇa»bc{large_body_2}\nef«ˇa»bc\n«ˇa»bc"
));
let scroll_position_after_selection =
cx.update_editor(|editor, _, cx| editor.scroll_position(cx));
assert_eq!(
initial_scroll_position, scroll_position_after_selection,
"Scroll position should not change after selecting all matches"
);
}
#[gpui::test]
async fn test_select_next_with_multiple_carets(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -7665,77 +7756,81 @@ async fn test_document_format_during_save(cx: &mut TestAppContext) {
cx.executor().start_waiting();
let fake_server = fake_servers.next().await.unwrap();
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
fake_server
.set_request_handler::<lsp::request::Formatting, _, _>(move |params, _| async move {
assert_eq!(
params.text_document.uri,
lsp::Url::from_file_path(path!("/file.rs")).unwrap()
);
assert_eq!(params.options.tab_size, 4);
Ok(Some(vec![lsp::TextEdit::new(
lsp::Range::new(lsp::Position::new(0, 3), lsp::Position::new(1, 0)),
", ".to_string(),
)]))
})
.next()
.await;
cx.executor().start_waiting();
save.await;
{
fake_server.set_request_handler::<lsp::request::Formatting, _, _>(
move |params, _| async move {
assert_eq!(
params.text_document.uri,
lsp::Url::from_file_path(path!("/file.rs")).unwrap()
);
assert_eq!(params.options.tab_size, 4);
Ok(Some(vec![lsp::TextEdit::new(
lsp::Range::new(lsp::Position::new(0, 3), lsp::Position::new(1, 0)),
", ".to_string(),
)]))
},
);
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
cx.executor().start_waiting();
save.await;
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
"one, two\nthree\n"
);
assert!(!cx.read(|cx| editor.is_dirty(cx)));
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
"one, two\nthree\n"
);
assert!(!cx.read(|cx| editor.is_dirty(cx)));
}
editor.update_in(cx, |editor, window, cx| {
editor.set_text("one\ntwo\nthree\n", window, cx)
});
assert!(cx.read(|cx| editor.is_dirty(cx)));
{
editor.update_in(cx, |editor, window, cx| {
editor.set_text("one\ntwo\nthree\n", window, cx)
});
assert!(cx.read(|cx| editor.is_dirty(cx)));
// Ensure we can still save even if formatting hangs.
fake_server.set_request_handler::<lsp::request::Formatting, _, _>(
move |params, _| async move {
assert_eq!(
params.text_document.uri,
lsp::Url::from_file_path(path!("/file.rs")).unwrap()
);
futures::future::pending::<()>().await;
unreachable!()
},
);
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
cx.executor().advance_clock(super::FORMAT_TIMEOUT);
cx.executor().start_waiting();
save.await;
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
"one\ntwo\nthree\n"
);
assert!(!cx.read(|cx| editor.is_dirty(cx)));
// Ensure we can still save even if formatting hangs.
fake_server.set_request_handler::<lsp::request::Formatting, _, _>(
move |params, _| async move {
assert_eq!(
params.text_document.uri,
lsp::Url::from_file_path(path!("/file.rs")).unwrap()
);
futures::future::pending::<()>().await;
unreachable!()
},
);
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
cx.executor().advance_clock(super::FORMAT_TIMEOUT);
cx.executor().start_waiting();
save.await;
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
"one\ntwo\nthree\n"
);
}
// For non-dirty buffer, no formatting request should be sent
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
let _pending_format_request = fake_server
.set_request_handler::<lsp::request::RangeFormatting, _, _>(move |_, _| async move {
{
assert!(!cx.read(|cx| editor.is_dirty(cx)));
fake_server.set_request_handler::<lsp::request::Formatting, _, _>(move |_, _| async move {
panic!("Should not be invoked on non-dirty buffer");
})
.next();
cx.executor().start_waiting();
save.await;
});
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
cx.executor().start_waiting();
save.await;
}
// Set rust language override and assert overridden tabsize is sent to language server
update_test_language_settings(cx, |settings| {
@@ -7748,28 +7843,28 @@ async fn test_document_format_during_save(cx: &mut TestAppContext) {
);
});
editor.update_in(cx, |editor, window, cx| {
editor.set_text("somehting_new\n", window, cx)
});
assert!(cx.read(|cx| editor.is_dirty(cx)));
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
fake_server
.set_request_handler::<lsp::request::Formatting, _, _>(move |params, _| async move {
assert_eq!(
params.text_document.uri,
lsp::Url::from_file_path(path!("/file.rs")).unwrap()
);
assert_eq!(params.options.tab_size, 8);
Ok(Some(vec![]))
})
.next()
.await;
cx.executor().start_waiting();
save.await;
{
editor.update_in(cx, |editor, window, cx| {
editor.set_text("somehting_new\n", window, cx)
});
assert!(cx.read(|cx| editor.is_dirty(cx)));
let _formatting_request_signal = fake_server
.set_request_handler::<lsp::request::Formatting, _, _>(move |params, _| async move {
assert_eq!(
params.text_document.uri,
lsp::Url::from_file_path(path!("/file.rs")).unwrap()
);
assert_eq!(params.options.tab_size, 8);
Ok(Some(vec![]))
});
let save = editor
.update_in(cx, |editor, window, cx| {
editor.save(true, project.clone(), window, cx)
})
.unwrap();
cx.executor().start_waiting();
save.await;
}
}
#[gpui::test]
@@ -8251,6 +8346,272 @@ async fn test_document_format_manual_trigger(cx: &mut TestAppContext) {
);
}
#[gpui::test]
async fn test_multiple_formatters(cx: &mut TestAppContext) {
init_test(cx, |settings| {
settings.defaults.remove_trailing_whitespace_on_save = Some(true);
settings.defaults.formatter =
Some(language_settings::SelectedFormatter::List(FormatterList(
vec![
Formatter::LanguageServer { name: None },
Formatter::CodeActions(
[
("code-action-1".into(), true),
("code-action-2".into(), true),
]
.into_iter()
.collect(),
),
]
.into(),
)))
});
let fs = FakeFs::new(cx.executor());
fs.insert_file(path!("/file.rs"), "one \ntwo \nthree".into())
.await;
let project = Project::test(fs, [path!("/").as_ref()], cx).await;
let language_registry = project.read_with(cx, |project, _| project.languages().clone());
language_registry.add(rust_lang());
let mut fake_servers = language_registry.register_fake_lsp(
"Rust",
FakeLspAdapter {
capabilities: lsp::ServerCapabilities {
document_formatting_provider: Some(lsp::OneOf::Left(true)),
execute_command_provider: Some(lsp::ExecuteCommandOptions {
commands: vec!["the-command-for-code-action-1".into()],
..Default::default()
}),
code_action_provider: Some(lsp::CodeActionProviderCapability::Simple(true)),
..Default::default()
},
..Default::default()
},
);
let buffer = project
.update(cx, |project, cx| {
project.open_local_buffer(path!("/file.rs"), cx)
})
.await
.unwrap();
let buffer = cx.new(|cx| MultiBuffer::singleton(buffer, cx));
let (editor, cx) = cx.add_window_view(|window, cx| {
build_editor_with_project(project.clone(), buffer, window, cx)
});
cx.executor().start_waiting();
let fake_server = fake_servers.next().await.unwrap();
fake_server.set_request_handler::<lsp::request::Formatting, _, _>(
move |_params, _| async move {
Ok(Some(vec![lsp::TextEdit::new(
lsp::Range::new(lsp::Position::new(0, 0), lsp::Position::new(0, 0)),
"applied-formatting\n".to_string(),
)]))
},
);
fake_server.set_request_handler::<lsp::request::CodeActionRequest, _, _>(
move |params, _| async move {
assert_eq!(
params.context.only,
Some(vec!["code-action-1".into(), "code-action-2".into()])
);
let uri = lsp::Url::from_file_path(path!("/file.rs")).unwrap();
Ok(Some(vec![
lsp::CodeActionOrCommand::CodeAction(lsp::CodeAction {
kind: Some("code-action-1".into()),
edit: Some(lsp::WorkspaceEdit::new(
[(
uri.clone(),
vec![lsp::TextEdit::new(
lsp::Range::new(lsp::Position::new(0, 0), lsp::Position::new(0, 0)),
"applied-code-action-1-edit\n".to_string(),
)],
)]
.into_iter()
.collect(),
)),
command: Some(lsp::Command {
command: "the-command-for-code-action-1".into(),
..Default::default()
}),
..Default::default()
}),
lsp::CodeActionOrCommand::CodeAction(lsp::CodeAction {
kind: Some("code-action-2".into()),
edit: Some(lsp::WorkspaceEdit::new(
[(
uri.clone(),
vec![lsp::TextEdit::new(
lsp::Range::new(lsp::Position::new(0, 0), lsp::Position::new(0, 0)),
"applied-code-action-2-edit\n".to_string(),
)],
)]
.into_iter()
.collect(),
)),
..Default::default()
}),
]))
},
);
fake_server.set_request_handler::<lsp::request::CodeActionResolveRequest, _, _>({
move |params, _| async move { Ok(params) }
});
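// Lock used to hold the `ExecuteCommand` handler open so the test can perform
// a manual edit while a code action's command is still running.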
let command_lock = Arc::new(futures::lock::Mutex::new(()));
fake_server.set_request_handler::<lsp::request::ExecuteCommand, _, _>({
let fake = fake_server.clone();
let lock = command_lock.clone();
move |params, _| {
assert_eq!(params.command, "the-command-for-code-action-1");
let fake = fake.clone();
let lock = lock.clone();
async move {
lock.lock().await;
fake.server
.request::<lsp::request::ApplyWorkspaceEdit>(lsp::ApplyWorkspaceEditParams {
label: None,
edit: lsp::WorkspaceEdit {
changes: Some(
[(
lsp::Url::from_file_path(path!("/file.rs")).unwrap(),
vec![lsp::TextEdit {
range: lsp::Range::new(
lsp::Position::new(0, 0),
lsp::Position::new(0, 0),
),
new_text: "applied-code-action-1-command\n".into(),
}],
)]
.into_iter()
.collect(),
),
..Default::default()
},
})
.await
.unwrap();
Ok(Some(json!(null)))
}
}
});
cx.executor().start_waiting();
editor
.update_in(cx, |editor, window, cx| {
editor.perform_format(
project.clone(),
FormatTrigger::Manual,
FormatTarget::Buffers,
window,
cx,
)
})
.unwrap()
.await;
editor.update(cx, |editor, cx| {
assert_eq!(
editor.text(cx),
r#"
applied-code-action-2-edit
applied-code-action-1-command
applied-code-action-1-edit
applied-formatting
one
two
three
"#
.unindent()
);
});
editor.update_in(cx, |editor, window, cx| {
editor.undo(&Default::default(), window, cx);
assert_eq!(editor.text(cx), "one \ntwo \nthree");
});
// Perform a manual edit while waiting for an LSP command
// that's being run as part of a formatting code action.
let lock_guard = command_lock.lock().await;
let format = editor
.update_in(cx, |editor, window, cx| {
editor.perform_format(
project.clone(),
FormatTrigger::Manual,
FormatTarget::Buffers,
window,
cx,
)
})
.unwrap();
cx.run_until_parked();
editor.update(cx, |editor, cx| {
assert_eq!(
editor.text(cx),
r#"
applied-code-action-1-edit
applied-formatting
one
two
three
"#
.unindent()
);
editor.buffer.update(cx, |buffer, cx| {
let ix = buffer.len(cx);
buffer.edit([(ix..ix, "edited\n")], None, cx);
});
});
// Allow the LSP command to proceed. Because the buffer was edited,
// the second code action will not be run.
drop(lock_guard);
format.await;
editor.update_in(cx, |editor, window, cx| {
assert_eq!(
editor.text(cx),
r#"
applied-code-action-1-command
applied-code-action-1-edit
applied-formatting
one
two
three
edited
"#
.unindent()
);
// The manual edit is undone first, because it is the last thing the user did
// (even though the command completed afterwards).
editor.undo(&Default::default(), window, cx);
assert_eq!(
editor.text(cx),
r#"
applied-code-action-1-command
applied-code-action-1-edit
applied-formatting
one
two
three
"#
.unindent()
);
// All the formatting (including the command, which completed after the manual edit)
// is undone together.
editor.undo(&Default::default(), window, cx);
assert_eq!(editor.text(cx), "one \ntwo \nthree");
});
}
#[gpui::test]
async fn test_organize_imports_manual_trigger(cx: &mut TestAppContext) {
init_test(cx, |settings| {

View File

@@ -211,6 +211,7 @@ impl EditorElement {
register_action(editor, window, Editor::sort_lines_case_insensitive);
register_action(editor, window, Editor::reverse_lines);
register_action(editor, window, Editor::shuffle_lines);
register_action(editor, window, Editor::toggle_case);
register_action(editor, window, Editor::convert_to_upper_case);
register_action(editor, window, Editor::convert_to_lower_case);
register_action(editor, window, Editor::convert_to_title_case);
@@ -386,14 +387,12 @@ impl EditorElement {
register_action(editor, window, Editor::fold_at_level);
register_action(editor, window, Editor::fold_all);
register_action(editor, window, Editor::fold_function_bodies);
register_action(editor, window, Editor::fold_at);
register_action(editor, window, Editor::fold_recursive);
register_action(editor, window, Editor::toggle_fold);
register_action(editor, window, Editor::toggle_fold_recursive);
register_action(editor, window, Editor::unfold_lines);
register_action(editor, window, Editor::unfold_recursive);
register_action(editor, window, Editor::unfold_all);
register_action(editor, window, Editor::unfold_at);
register_action(editor, window, Editor::fold_selected_ranges);
register_action(editor, window, Editor::set_mark);
register_action(editor, window, Editor::swap_selection_ends);

View File

@@ -1540,8 +1540,24 @@ impl SearchableItem for Editor {
let text = self.buffer.read(cx);
let text = text.snapshot(cx);
let mut edits = vec![];
let mut last_point: Option<Point> = None;
for m in matches {
let point = m.start.to_point(&text);
let text = text.text_for_range(m.clone()).collect::<Vec<_>>();
// If this match is on the same row as the previous one and the
// `one_match_per_line` option is enabled, skip it; otherwise remember
// this match's row.
if last_point.is_none() || point.row != last_point.unwrap().row {
last_point = Some(point);
} else if query.one_match_per_line().is_some_and(|enabled| enabled) {
continue;
}
let text: Cow<_> = if text.len() == 1 {
text.first().cloned().unwrap().into()
} else {

View File

@@ -1,10 +1,10 @@
use crate::CopyAndTrim;
use crate::actions::FormatSelections;
use crate::{
Copy, CopyPermalinkToLine, Cut, DisplayPoint, DisplaySnapshot, Editor, EditorMode,
FindAllReferences, GoToDeclaration, GoToDefinition, GoToImplementation, GoToTypeDefinition,
Paste, Rename, RevealInFileManager, SelectMode, ToDisplayPoint, ToggleCodeActions,
actions::Format, selections_collection::SelectionsCollection,
Copy, CopyAndTrim, CopyPermalinkToLine, Cut, DebuggerEvaluateSelectedText, DisplayPoint,
DisplaySnapshot, Editor, EditorMode, FindAllReferences, GoToDeclaration, GoToDefinition,
GoToImplementation, GoToTypeDefinition, Paste, Rename, RevealInFileManager, SelectMode,
ToDisplayPoint, ToggleCodeActions,
actions::{Format, FormatSelections},
selections_collection::SelectionsCollection,
};
use gpui::prelude::FluentBuilder;
use gpui::{Context, DismissEvent, Entity, Focusable as _, Pixels, Point, Subscription, Window};
@@ -169,9 +169,19 @@ pub fn deploy_context_menu(
.is_some()
});
let evaluate_selection = command_palette_hooks::CommandPaletteFilter::try_global(cx)
.map_or(false, |filter| {
!filter.is_hidden(&DebuggerEvaluateSelectedText)
});
ui::ContextMenu::build(window, cx, |menu, _window, _cx| {
let builder = menu
.on_blur_subscription(Subscription::new(|| {}))
.when(evaluate_selection && has_selections, |builder| {
builder
.action("Evaluate Selection", Box::new(DebuggerEvaluateSelectedText))
.separator()
})
.action("Go to Definition", Box::new(GoToDefinition))
.action("Go to Declaration", Box::new(GoToDeclaration))
.action("Go to Type Definition", Box::new(GoToTypeDefinition))

39
crates/eval/Cargo.toml Normal file
View File

@@ -0,0 +1,39 @@
[package]
name = "eval"
version = "0.1.0"
publish.workspace = true
edition.workspace = true
[dependencies]
agent.workspace = true
anyhow.workspace = true
assistant_tool.workspace = true
assistant_tools.workspace = true
client.workspace = true
collections.workspace = true
context_server.workspace = true
dap.workspace = true
env_logger.workspace = true
fs.workspace = true
gpui.workspace = true
gpui_tokio.workspace = true
language.workspace = true
language_model.workspace = true
language_models.workspace = true
node_runtime.workspace = true
project.workspace = true
prompt_store.workspace = true
release_channel.workspace = true
reqwest_client.workspace = true
serde.workspace = true
settings.workspace = true
smol.workspace = true
toml.workspace = true
workspace-hack.workspace = true
[[bin]]
name = "eval"
path = "src/eval.rs"
[lints]
workspace = true

1
crates/eval/LICENSE-GPL Symbolic link
View File

@@ -0,0 +1 @@
../../LICENSE-GPL

7
crates/eval/README.md Normal file
View File

@@ -0,0 +1,7 @@
# Eval
This eval assumes the working directory is the root of the repository. Run it with:
```sh
cargo run -p eval
```
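For orientation, each eval example is a directory containing `base.toml`, `prompt.md`, and `rubric.md`; the next few files in this diff make up one such example. Below is a minimal sketch of loading and preparing one with the `Example` helper that `crates/eval/src/eval.rs` adds later in this diff — the directory path is the one mentioned in that file's comments, and the printing is illustrative only:

```rust
use anyhow::Result;

// Assumes this lives alongside `Example` in crates/eval/src/eval.rs.
fn load_one_example() -> Result<()> {
    // Reads base.toml, prompt.md, and rubric.md from the example directory.
    let example = Example::load_from_directory(
        "./crates/eval/examples/find_and_replace_diff_card",
    )?;
    // Checks out the pinned revision in the example's worktree via `git checkout`.
    example.setup()?;
    println!("prompt:\n{}", example.prompt);
    println!("rubric:\n{}", example.rubric);
    Ok(())
}
```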

View File

@@ -0,0 +1,2 @@
path = "../zed_worktree"
revision = "38fcadf9481d018543c65f36ac3bafeba190179b"

View File

@@ -0,0 +1,3 @@
Look at the `find_replace_file_tool.rs`. I want to implement a card for it. The card should be a brand new `Entity` with a `Render` implementation.
The card should show a diff. It should be a beautifully presented diff. The card "box" should look like what we show for markdown codeblocks (look at `MarkdownElement`). I want to see a red background for lines that were deleted and a green background for lines that were added. We should have a div per diff line.
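The prompt above asks for an entity with a `Render` implementation that shows a diff, one div per line, with red and green backgrounds. Purely as a reading aid, and not part of this diff, a rough GPUI sketch of such a card might look like the following (the struct, its fields, and the colors are all invented here):

```rust
use gpui::{div, rgb, Context, IntoElement, ParentElement, Render, Styled, Window};

// Hypothetical card entity: one div per diff line, colored by change kind.
// This illustrates the prompt's request; it is not the eval's expected output.
struct DiffCard {
    // ('-', '+', or ' ') plus the line's text.
    lines: Vec<(char, String)>,
}

impl Render for DiffCard {
    fn render(&mut self, _window: &mut Window, _cx: &mut Context<Self>) -> impl IntoElement {
        div().children(self.lines.iter().map(|(sign, text)| {
            let row = div().child(format!("{sign} {text}"));
            match sign {
                '-' => row.bg(rgb(0x3f1d1d)), // deleted line: red background
                '+' => row.bg(rgb(0x1d3f1d)), // added line: green background
                _ => row,
            }
        }))
    }
}
```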

229
crates/eval/src/agent.rs Normal file
View File

@@ -0,0 +1,229 @@
use ::agent::{RequestKind, Thread, ThreadEvent, ThreadStore};
use anyhow::anyhow;
use assistant_tool::ToolWorkingSet;
use client::{Client, UserStore};
use collections::HashMap;
use dap::DapRegistry;
use gpui::{App, Entity, SemanticVersion, Subscription, Task, prelude::*};
use language::LanguageRegistry;
use language_model::{
AuthenticateError, LanguageModel, LanguageModelProviderId, LanguageModelRegistry,
};
use node_runtime::NodeRuntime;
use project::{Project, RealFs};
use prompt_store::PromptBuilder;
use settings::SettingsStore;
use smol::channel;
use std::sync::Arc;
/// Subset of `workspace::AppState` needed by the headless eval `Agent`, plus additional fields.
pub struct AgentAppState {
pub languages: Arc<LanguageRegistry>,
pub client: Arc<Client>,
pub user_store: Entity<UserStore>,
pub fs: Arc<dyn fs::Fs>,
pub node_runtime: NodeRuntime,
// Additional fields not present in `workspace::AppState`.
pub prompt_builder: Arc<PromptBuilder>,
}
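/// Headless wrapper around an agent `Thread`: it subscribes to thread events,
/// auto-runs tools that need confirmation, and reports success or errors over `done_tx`.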
pub struct Agent {
// pub thread: Entity<Thread>,
// pub project: Entity<Project>,
#[allow(dead_code)]
pub thread_store: Entity<ThreadStore>,
pub tool_use_counts: HashMap<Arc<str>, u32>,
pub done_tx: channel::Sender<anyhow::Result<()>>,
_subscription: Subscription,
}
impl Agent {
pub fn new(
app_state: Arc<AgentAppState>,
cx: &mut App,
) -> anyhow::Result<(Entity<Self>, channel::Receiver<anyhow::Result<()>>)> {
let env = None;
let project = Project::local(
app_state.client.clone(),
app_state.node_runtime.clone(),
app_state.user_store.clone(),
app_state.languages.clone(),
Arc::new(DapRegistry::default()),
app_state.fs.clone(),
env,
cx,
);
let tools = Arc::new(ToolWorkingSet::default());
let thread_store =
ThreadStore::new(project.clone(), tools, app_state.prompt_builder.clone(), cx)?;
let thread = thread_store.update(cx, |thread_store, cx| thread_store.create_thread(cx));
let (done_tx, done_rx) = channel::unbounded::<anyhow::Result<()>>();
let headless_thread = cx.new(move |cx| Self {
_subscription: cx.subscribe(&thread, Self::handle_thread_event),
// thread,
// project,
thread_store,
tool_use_counts: HashMap::default(),
done_tx,
});
Ok((headless_thread, done_rx))
}
fn handle_thread_event(
&mut self,
thread: Entity<Thread>,
event: &ThreadEvent,
cx: &mut Context<Self>,
) {
match event {
ThreadEvent::ShowError(err) => self
.done_tx
.send_blocking(Err(anyhow!("{:?}", err)))
.unwrap(),
ThreadEvent::DoneStreaming => {
let thread = thread.read(cx);
if let Some(message) = thread.messages().last() {
println!("Message: {}", message.to_string());
}
if thread.all_tools_finished() {
self.done_tx.send_blocking(Ok(())).unwrap()
}
}
ThreadEvent::UsePendingTools { .. } => {}
ThreadEvent::ToolConfirmationNeeded => {
// Automatically approve all tools that need confirmation in headless mode
println!("Tool confirmation needed - automatically approving in headless mode");
// Get the tools needing confirmation
let tools_needing_confirmation: Vec<_> = thread
.read(cx)
.tools_needing_confirmation()
.cloned()
.collect();
// Run each tool that needs confirmation
for tool_use in tools_needing_confirmation {
if let Some(tool) = thread.read(cx).tools().tool(&tool_use.name, cx) {
thread.update(cx, |thread, cx| {
println!("Auto-approving tool: {}", tool_use.name);
// Create a request to send to the tool
let request = thread.to_completion_request(RequestKind::Chat, cx);
let messages = Arc::new(request.messages);
// Run the tool
thread.run_tool(
tool_use.id.clone(),
tool_use.ui_text.clone(),
tool_use.input.clone(),
&messages,
tool,
cx,
);
});
}
}
}
ThreadEvent::ToolFinished {
tool_use_id,
pending_tool_use,
..
} => {
if let Some(pending_tool_use) = pending_tool_use {
println!(
"Used tool {} with input: {}",
pending_tool_use.name, pending_tool_use.input
);
*self
.tool_use_counts
.entry(pending_tool_use.name.clone())
.or_insert(0) += 1;
}
if let Some(tool_result) = thread.read(cx).tool_result(tool_use_id) {
println!("Tool result: {:?}", tool_result);
}
}
_ => {}
}
}
}
pub fn init(cx: &mut App) -> Arc<AgentAppState> {
release_channel::init(SemanticVersion::default(), cx);
gpui_tokio::init(cx);
let mut settings_store = SettingsStore::new(cx);
settings_store
.set_default_settings(settings::default_settings().as_ref(), cx)
.unwrap();
cx.set_global(settings_store);
client::init_settings(cx);
Project::init_settings(cx);
let client = Client::production(cx);
cx.set_http_client(client.http_client().clone());
let git_binary_path = None;
let fs = Arc::new(RealFs::new(
git_binary_path,
cx.background_executor().clone(),
));
let languages = Arc::new(LanguageRegistry::new(cx.background_executor().clone()));
let user_store = cx.new(|cx| UserStore::new(client.clone(), cx));
language::init(cx);
language_model::init(client.clone(), cx);
language_models::init(user_store.clone(), client.clone(), fs.clone(), cx);
assistant_tools::init(client.http_client().clone(), cx);
context_server::init(cx);
let stdout_is_a_pty = false;
let prompt_builder = PromptBuilder::load(fs.clone(), stdout_is_a_pty, cx);
agent::init(fs.clone(), client.clone(), prompt_builder.clone(), cx);
Arc::new(AgentAppState {
languages,
client,
user_store,
fs,
node_runtime: NodeRuntime::unavailable(),
prompt_builder,
})
}
pub fn find_model(model_name: &str, cx: &App) -> anyhow::Result<Arc<dyn LanguageModel>> {
let model_registry = LanguageModelRegistry::read_global(cx);
let model = model_registry
.available_models(cx)
.find(|model| model.id().0 == model_name);
let Some(model) = model else {
return Err(anyhow!(
"No language model named {} was available. Available models: {}",
model_name,
model_registry
.available_models(cx)
.map(|model| model.id().0.clone())
.collect::<Vec<_>>()
.join(", ")
));
};
Ok(model)
}
pub fn authenticate_model_provider(
provider_id: LanguageModelProviderId,
cx: &mut App,
) -> Task<std::result::Result<(), AuthenticateError>> {
let model_registry = LanguageModelRegistry::read_global(cx);
let model_provider = model_registry.provider(&provider_id).unwrap();
model_provider.authenticate(cx)
}

101
crates/eval/src/eval.rs Normal file
View File

@@ -0,0 +1,101 @@
use agent::Agent;
use anyhow::Result;
use gpui::Application;
use language_model::LanguageModelRegistry;
use reqwest_client::ReqwestClient;
use serde::Deserialize;
use std::{
fs,
path::{Path, PathBuf},
sync::Arc,
};
mod agent;
#[derive(Debug, Deserialize)]
pub struct ExampleBase {
pub path: PathBuf,
pub revision: String,
}
#[derive(Debug)]
pub struct Example {
pub base: ExampleBase,
/// Content of the prompt.md file
pub prompt: String,
/// Content of the rubric.md file
pub rubric: String,
}
impl Example {
/// Load an example from a directory containing base.toml, prompt.md, and rubric.md
pub fn load_from_directory<P: AsRef<Path>>(dir_path: P) -> Result<Self> {
let base_path = dir_path.as_ref().join("base.toml");
let prompt_path = dir_path.as_ref().join("prompt.md");
let rubric_path = dir_path.as_ref().join("rubric.md");
let mut base: ExampleBase = toml::from_str(&fs::read_to_string(&base_path)?)?;
base.path = base.path.canonicalize()?;
Ok(Example {
base,
prompt: fs::read_to_string(prompt_path)?,
rubric: fs::read_to_string(rubric_path)?,
})
}
/// Set up the example by checking out the specified Git revision
pub fn setup(&self) -> Result<()> {
use std::process::Command;
// Check if the directory exists
let path = Path::new(&self.base.path);
anyhow::ensure!(path.exists(), "Path does not exist: {:?}", self.base.path);
// Change to the project directory and checkout the specified revision
let output = Command::new("git")
.current_dir(&self.base.path)
.arg("checkout")
.arg(&self.base.revision)
.output()?;
anyhow::ensure!(
output.status.success(),
"Failed to checkout revision {}: {}",
self.base.revision,
String::from_utf8_lossy(&output.stderr),
);
Ok(())
}
}
fn main() {
env_logger::init();
let http_client = Arc::new(ReqwestClient::new());
let app = Application::headless().with_http_client(http_client.clone());
app.run(move |cx| {
let app_state = crate::agent::init(cx);
let _agent = Agent::new(app_state, cx);
let model = agent::find_model("claude-3-7-sonnet-thinking-latest", cx).unwrap();
LanguageModelRegistry::global(cx).update(cx, |registry, cx| {
registry.set_default_model(Some(model.clone()), cx);
});
let model_provider_id = model.provider_id();
let authenticate = agent::authenticate_model_provider(model_provider_id.clone(), cx);
cx.spawn(async move |_cx| {
authenticate.await.unwrap();
})
.detach();
});
// let example =
// Example::load_from_directory("./crates/eval/examples/find_and_replace_diff_card")?;
// example.setup()?;
}

View File

@@ -105,21 +105,56 @@ enum TrashCancel {
Cancel,
}
struct GitMenuState {
has_tracked_changes: bool,
has_staged_changes: bool,
has_unstaged_changes: bool,
has_new_changes: bool,
}
fn git_panel_context_menu(
focus_handle: FocusHandle,
state: GitMenuState,
window: &mut Window,
cx: &mut App,
) -> Entity<ContextMenu> {
ContextMenu::build(window, cx, |context_menu, _, _| {
ContextMenu::build(window, cx, move |context_menu, _, _| {
context_menu
.context(focus_handle)
.action("Stage All", StageAll.boxed_clone())
.action("Unstage All", UnstageAll.boxed_clone())
.map(|menu| {
if state.has_unstaged_changes {
menu.action("Stage All", StageAll.boxed_clone())
} else {
menu.disabled_action("Stage All", StageAll.boxed_clone())
}
})
.map(|menu| {
if state.has_staged_changes {
menu.action("Unstage All", UnstageAll.boxed_clone())
} else {
menu.disabled_action("Unstage All", UnstageAll.boxed_clone())
}
})
.separator()
.action("Open Diff", project_diff::Diff.boxed_clone())
.separator()
.action("Discard Tracked Changes", RestoreTrackedFiles.boxed_clone())
.action("Trash Untracked Files", TrashUntrackedFiles.boxed_clone())
.map(|menu| {
if state.has_tracked_changes {
menu.action("Discard Tracked Changes", RestoreTrackedFiles.boxed_clone())
} else {
menu.disabled_action(
"Discard Tracked Changes",
RestoreTrackedFiles.boxed_clone(),
)
}
})
.map(|menu| {
if state.has_new_changes {
menu.action("Trash Untracked Files", TrashUntrackedFiles.boxed_clone())
} else {
menu.disabled_action("Trash Untracked Files", TrashUntrackedFiles.boxed_clone())
}
})
})
}
@@ -2571,13 +2606,30 @@ impl GitPanel {
fn render_overflow_menu(&self, id: impl Into<ElementId>) -> impl IntoElement {
let focus_handle = self.focus_handle.clone();
let has_tracked_changes = self.has_tracked_changes();
let has_staged_changes = self.has_staged_changes();
let has_unstaged_changes = self.has_unstaged_changes();
let has_new_changes = self.new_count > 0;
PopoverMenu::new(id.into())
.trigger(
IconButton::new("overflow-menu-trigger", IconName::EllipsisVertical)
.icon_size(IconSize::Small)
.icon_color(Color::Muted),
)
.menu(move |window, cx| Some(git_panel_context_menu(focus_handle.clone(), window, cx)))
.menu(move |window, cx| {
Some(git_panel_context_menu(
focus_handle.clone(),
GitMenuState {
has_tracked_changes,
has_staged_changes,
has_unstaged_changes,
has_new_changes,
},
window,
cx,
))
})
.anchor(Corner::TopRight)
}
@@ -3449,7 +3501,17 @@ impl GitPanel {
window: &mut Window,
cx: &mut Context<Self>,
) {
let context_menu = git_panel_context_menu(self.focus_handle.clone(), window, cx);
let context_menu = git_panel_context_menu(
self.focus_handle.clone(),
GitMenuState {
has_tracked_changes: self.has_tracked_changes(),
has_staged_changes: self.has_staged_changes(),
has_unstaged_changes: self.has_unstaged_changes(),
has_new_changes: self.new_count > 0,
},
window,
cx,
);
self.set_context_menu(context_menu, position, window, cx);
}

View File

@@ -1,4 +1,4 @@
use super::{MacDisplay, NSRange, NSStringExt, ns_string, renderer};
use super::{BoolExt, MacDisplay, NSRange, NSStringExt, ns_string, renderer};
use crate::{
AnyWindowHandle, Bounds, DisplayLink, ExternalPaths, FileDropEvent, ForegroundExecutor,
KeyDownEvent, Keystroke, Modifiers, ModifiersChangedEvent, MouseButton, MouseDownEvent,
@@ -1021,11 +1021,8 @@ impl PlatformWindow for MacWindow {
} else {
0
};
let opaque = if background_appearance == WindowBackgroundAppearance::Opaque {
YES
} else {
NO
};
let opaque = (background_appearance == WindowBackgroundAppearance::Opaque).to_objc();
unsafe {
this.native_window.setOpaque_(opaque);
// Shadows for transparent windows cause artifacts and performance issues
@@ -1981,14 +1978,11 @@ extern "C" fn dragging_exited(this: &Object, _: Sel, _: id) {
extern "C" fn perform_drag_operation(this: &Object, _: Sel, dragging_info: id) -> BOOL {
let window_state = unsafe { get_window_state(this) };
let position = drag_event_position(&window_state, dragging_info);
if send_new_event(
send_new_event(
&window_state,
PlatformInput::FileDrop(FileDropEvent::Submit { position }),
) {
YES
} else {
NO
}
)
.to_objc()
}
fn external_paths_from_event(dragging_info: *mut Object) -> Option<ExternalPaths> {

View File

@@ -845,6 +845,7 @@ impl Window {
handle
.update(&mut cx, |_, window, cx| {
window.active.set(active);
window.modifiers = window.platform_window.modifiers();
window
.activation_observers
.clone()

View File

@@ -88,12 +88,21 @@ struct Options {
}
pub enum CodeBlockRenderer {
Default { copy_button: bool },
Custom { render: CodeBlockRenderFn },
Default {
copy_button: bool,
},
Custom {
render: CodeBlockRenderFn,
/// A function that can modify the parent container after the code block
/// content has been appended as a child element.
transform: Option<CodeBlockTransformFn>,
},
}
pub type CodeBlockRenderFn =
Arc<dyn Fn(usize, &CodeBlockKind, &ParsedMarkdown, Range<usize>, &mut Window, &App) -> Div>;
Arc<dyn Fn(&CodeBlockKind, &ParsedMarkdown, Range<usize>, &mut Window, &App) -> Div>;
pub type CodeBlockTransformFn = Arc<dyn Fn(AnyDiv, Range<usize>, &mut Window, &App) -> AnyDiv>;
actions!(markdown, [Copy, CopyAsMarkdown]);
@@ -594,7 +603,7 @@ impl Element for MarkdownElement {
0
};
for (index, (range, event)) in parsed_markdown.events.iter().enumerate() {
for (range, event) in parsed_markdown.events.iter() {
match event {
MarkdownEvent::Start(tag) => {
match tag {
@@ -676,15 +685,9 @@ impl Element for MarkdownElement {
builder.push_code_block(language);
builder.push_div(code_block, range, markdown_end);
}
(CodeBlockRenderer::Custom { render }, _) => {
let parent_container = render(
index,
kind,
&parsed_markdown,
range.clone(),
window,
cx,
);
(CodeBlockRenderer::Custom { render, .. }, _) => {
let parent_container =
render(kind, &parsed_markdown, range.clone(), window, cx);
builder.push_div(parent_container, range, markdown_end);
@@ -695,9 +698,12 @@ impl Element for MarkdownElement {
if self.style.code_block_overflow_x_scroll {
code_block.style().restrict_scroll_to_axis =
Some(true);
code_block.flex().overflow_x_scroll()
code_block
.flex()
.overflow_x_scroll()
.overflow_y_hidden()
} else {
code_block.w_full()
code_block.w_full().overflow_hidden()
}
});
@@ -846,6 +852,14 @@ impl Element for MarkdownElement {
builder.pop_text_style();
}
if let CodeBlockRenderer::Custom {
transform: Some(modify),
..
} = &self.code_block_renderer
{
builder.modify_current_div(|el| modify(el, range.clone(), window, cx));
}
if matches!(
&self.code_block_renderer,
CodeBlockRenderer::Default { copy_button: true }
@@ -1049,7 +1063,7 @@ impl IntoElement for MarkdownElement {
}
}
enum AnyDiv {
pub enum AnyDiv {
Div(Div),
Stateful(Stateful<Div>),
}

View File

@@ -37,3 +37,9 @@ pub(crate) mod m_2025_03_29 {
pub(crate) use settings::SETTINGS_PATTERNS;
}
pub(crate) mod m_2025_04_15 {
mod settings;
pub(crate) use settings::SETTINGS_PATTERNS;
}

View File

@@ -0,0 +1,29 @@
use std::ops::Range;
use tree_sitter::{Query, QueryMatch};
use crate::MigrationPatterns;
use crate::patterns::SETTINGS_ASSISTANT_TOOLS_PATTERN;
pub const SETTINGS_PATTERNS: MigrationPatterns = &[(
SETTINGS_ASSISTANT_TOOLS_PATTERN,
replace_bash_with_terminal_in_profiles,
)];
fn replace_bash_with_terminal_in_profiles(
contents: &str,
mat: &QueryMatch,
query: &Query,
) -> Option<(Range<usize>, String)> {
let tool_name_capture_ix = query.capture_index_for_name("tool_name")?;
let tool_name_range = mat
.nodes_for_capture_index(tool_name_capture_ix)
.next()?
.byte_range();
let tool_name = contents.get(tool_name_range.clone())?;
if tool_name != "bash" {
return None;
}
Some((tool_name_range, "terminal".to_string()))
}

View File

@@ -120,6 +120,10 @@ pub fn migrate_settings(text: &str) -> Result<Option<String>> {
migrations::m_2025_03_29::SETTINGS_PATTERNS,
&SETTINGS_QUERY_2025_03_29,
),
(
migrations::m_2025_04_15::SETTINGS_PATTERNS,
&SETTINGS_QUERY_2025_04_15,
),
];
run_migrations(text, migrations)
}
@@ -190,6 +194,10 @@ define_query!(
SETTINGS_QUERY_2025_03_29,
migrations::m_2025_03_29::SETTINGS_PATTERNS
);
define_query!(
SETTINGS_QUERY_2025_04_15,
migrations::m_2025_04_15::SETTINGS_PATTERNS
);
// custom query
static EDIT_PREDICTION_SETTINGS_MIGRATION_QUERY: LazyLock<Query> = LazyLock::new(|| {
@@ -527,4 +535,103 @@ mod tests {
),
)
}
#[test]
fn test_replace_bash_with_terminal_in_profiles() {
assert_migrate_settings(
r#"
{
"assistant": {
"profiles": {
"custom": {
"name": "Custom",
"tools": {
"bash": true,
"diagnostics": true
}
}
}
}
}
"#,
Some(
r#"
{
"assistant": {
"profiles": {
"custom": {
"name": "Custom",
"tools": {
"terminal": true,
"diagnostics": true
}
}
}
}
}
"#,
),
)
}
#[test]
fn test_replace_bash_false_with_terminal_in_profiles() {
assert_migrate_settings(
r#"
{
"assistant": {
"profiles": {
"custom": {
"name": "Custom",
"tools": {
"bash": false,
"diagnostics": true
}
}
}
}
}
"#,
Some(
r#"
{
"assistant": {
"profiles": {
"custom": {
"name": "Custom",
"tools": {
"terminal": false,
"diagnostics": true
}
}
}
}
}
"#,
),
)
}
#[test]
fn test_no_bash_in_profiles() {
assert_migrate_settings(
r#"
{
"assistant": {
"profiles": {
"custom": {
"name": "Custom",
"tools": {
"diagnostics": true,
"path_search": true,
"read_file": true
}
}
}
}
}
"#,
None,
)
}
}

View File

@@ -7,5 +7,6 @@ pub(crate) use keymap::{
};
pub(crate) use settings::{
SETTINGS_LANGUAGES_PATTERN, SETTINGS_NESTED_KEY_VALUE_PATTERN, SETTINGS_ROOT_KEY_VALUE_PATTERN,
SETTINGS_ASSISTANT_TOOLS_PATTERN, SETTINGS_LANGUAGES_PATTERN,
SETTINGS_NESTED_KEY_VALUE_PATTERN, SETTINGS_ROOT_KEY_VALUE_PATTERN,
};

View File

@@ -39,3 +39,35 @@ pub const SETTINGS_LANGUAGES_PATTERN: &str = r#"(document
)
(#eq? @languages "languages")
)"#;
pub const SETTINGS_ASSISTANT_TOOLS_PATTERN: &str = r#"(document
(object
(pair
key: (string (string_content) @assistant)
value: (object
(pair
key: (string (string_content) @profiles)
value: (object
(pair
key: (_)
value: (object
(pair
key: (string (string_content) @tools_key)
value: (object
(pair
key: (string (string_content) @tool_name)
value: (_) @tool_value
)
)
)
)
)
)
)
)
)
)
(#eq? @assistant "assistant")
(#eq? @profiles "profiles")
(#eq? @tools_key "tools")
)"#;

View File

@@ -218,7 +218,7 @@ impl BreakpointStore {
}
}
fn abs_path_from_buffer(buffer: &Entity<Buffer>, cx: &App) -> Option<Arc<Path>> {
pub fn abs_path_from_buffer(buffer: &Entity<Buffer>, cx: &App) -> Option<Arc<Path>> {
worktree::File::from_dyn(buffer.read(cx).file())
.and_then(|file| file.worktree.read(cx).absolutize(&file.path).ok())
.map(Arc::<Path>::from)

View File

@@ -38,7 +38,7 @@ use std::{
borrow::Borrow,
collections::{BTreeMap, HashSet},
ffi::OsStr,
path::PathBuf,
path::{Path, PathBuf},
sync::{Arc, atomic::Ordering::SeqCst},
};
use std::{collections::VecDeque, sync::atomic::AtomicU32};
@@ -57,7 +57,7 @@ pub enum DapStoreEvent {
RunInTerminal {
session_id: SessionId,
title: Option<String>,
cwd: PathBuf,
cwd: Option<Arc<Path>>,
command: Option<String>,
args: Vec<String>,
envs: HashMap<String, String>,
@@ -549,10 +549,10 @@ impl DapStore {
let seq = request.seq;
let cwd = PathBuf::from(request_args.cwd);
let cwd = Path::new(&request_args.cwd);
match cwd.try_exists() {
Ok(true) => (),
Ok(false) | Err(_) => {
Ok(false) | Err(_) if !request_args.cwd.is_empty() => {
return session.update(cx, |session, cx| {
session.respond_to_client(
seq,
@@ -574,8 +574,8 @@ impl DapStore {
)
});
}
_ => (),
}
let mut args = request_args.args.clone();
// Handle special case for NodeJS debug adapter
@@ -602,7 +602,19 @@ impl DapStore {
}
let (tx, mut rx) = mpsc::channel::<Result<u32>>(1);
let cwd = Some(cwd)
.filter(|cwd| !cwd.as_os_str().is_empty())
.map(Arc::from)
.or_else(|| {
self.session_by_id(session_id)
.and_then(|session| {
session
.read(cx)
.configuration()
.and_then(|config| config.request.cwd())
})
.map(Arc::from)
});
cx.emit(DapStoreEvent::RunInTerminal {
session_id,
title: request_args.title,

View File

@@ -1,6 +1,8 @@
use crate::project_settings::ProjectSettings;
use super::breakpoint_store::{BreakpointStore, BreakpointStoreEvent, BreakpointUpdatedReason};
use super::breakpoint_store::{
BreakpointStore, BreakpointStoreEvent, BreakpointUpdatedReason, SourceBreakpoint,
};
use super::dap_command::{
self, Attach, ConfigurationDone, ContinueCommand, DapCommand, DisconnectCommand,
EvaluateCommand, Initialize, Launch, LoadedSourcesCommand, LocalDapCommand, LocationsCommand,
@@ -163,6 +165,7 @@ pub struct LocalMode {
config: DebugAdapterConfig,
adapter: Arc<dyn DebugAdapter>,
breakpoint_store: Entity<BreakpointStore>,
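/// Temporary breakpoint installed by `run_to_position`; removed again on the
/// next stopped event.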
tmp_breakpoint: Option<SourceBreakpoint>,
}
fn client_source(abs_path: &Path) -> dap::Source {
@@ -383,6 +386,7 @@ impl LocalMode {
client,
adapter,
breakpoint_store,
tmp_breakpoint: None,
config: config.clone(),
};
@@ -431,6 +435,7 @@ impl LocalMode {
.read_with(cx, |store, cx| store.breakpoints_from_path(&abs_path, cx))
.into_iter()
.filter(|bp| bp.state.is_enabled())
.chain(self.tmp_breakpoint.clone())
.map(Into::into)
.collect();
@@ -1040,6 +1045,40 @@ impl Session {
}
}
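/// Installs a temporary breakpoint at `breakpoint` and resumes `active_thread_id`,
/// so the debuggee runs until it reaches that position ("run to cursor").
/// Does nothing unless the thread is currently stopped, or for remote sessions.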
pub fn run_to_position(
&mut self,
breakpoint: SourceBreakpoint,
active_thread_id: ThreadId,
cx: &mut Context<Self>,
) {
match &mut self.mode {
Mode::Local(local_mode) => {
if !matches!(
self.thread_states.thread_state(active_thread_id),
Some(ThreadStatus::Stopped)
) {
return;
};
let path = breakpoint.path.clone();
local_mode.tmp_breakpoint = Some(breakpoint);
let task = local_mode.send_breakpoints_from_path(
path,
BreakpointUpdatedReason::Toggled,
cx,
);
cx.spawn(async move |this, cx| {
task.await;
this.update(cx, |this, cx| {
this.continue_thread(active_thread_id, cx);
})
})
.detach();
}
Mode::Remote(_) => {}
}
}
pub fn output(
&self,
since: OutputToken,
@@ -1086,6 +1125,16 @@ impl Session {
}
fn handle_stopped_event(&mut self, event: StoppedEvent, cx: &mut Context<Self>) {
if let Some((local, path)) = self.as_local_mut().and_then(|local| {
let breakpoint = local.tmp_breakpoint.take()?;
let path = breakpoint.path.clone();
Some((local, path))
}) {
local
.send_breakpoints_from_path(path, BreakpointUpdatedReason::Toggled, cx)
.detach();
};
if event.all_threads_stopped.unwrap_or_default() || event.thread_id.is_none() {
self.thread_states.stop_all_threads();

Some files were not shown because too many files have changed in this diff.