Compare commits

..

66 Commits

Author SHA1 Message Date
Richard Feldman
8ffdc2c326 wip 2025-04-10 11:48:53 -04:00
Richard Feldman
c3eacb8c83 wip 2025-04-10 11:33:12 -04:00
Richard Feldman
c97f067fd3 Make diagnostics support multiple paths 2025-04-10 11:20:08 -04:00
Richard Feldman
a8c295e844 Allow configurable diagnostic levels 2025-04-10 10:50:23 -04:00
Richard Feldman
fd863ac9e9 wip 2025-04-10 09:06:01 -04:00
Richard Feldman
6f2ad775e5 Add quickfix tool 2025-04-09 22:37:38 -04:00
Richard Feldman
3340abd127 Fix typo in comment 2025-04-09 19:57:14 -04:00
Richard Feldman
b3911355b8 Split out rename_tool 2025-04-09 15:31:43 -04:00
Richard Feldman
9a5633b8e2 Cleanup 2025-04-09 14:37:10 -04:00
Richard Feldman
9cf5f85c8b Use snake_case for action name 2025-04-09 10:47:16 -04:00
Richard Feldman
a6c4f46bef Merge remote-tracking branch 'origin/main' into add-code-action-tool 2025-04-09 10:36:28 -04:00
Richard Feldman
b298ae47f6 Revert "Try replacing find-replace tool with code action tool"
This reverts commit 1927dc039e.
2025-04-09 10:33:08 -04:00
Agus Zubiaga
1cb4f8288d Fix bash tool output (#28391) 2025-04-09 08:20:24 -06:00
Richard Feldman
3a8fe4d973 Add reminder message about system prompt (#28344)
Trying out sending the model a reminder message about code blocks in the
system prompt. If this seems to work well, we can include more specific
reminder messages, e.g. tool-specific ones.

Release Notes:

- N/A
2025-04-09 10:09:48 -04:00
Joseph T. Lyons
9d6d152918 Bump Zed to v0.183 (#28419)
Release Notes:

- N/A
2025-04-09 09:11:25 -04:00
Joseph T. Lyons
31034f8296 Add toggle case command (#28415)
A small addition for those coming from JetBrains IDEs. A behavioral
detail: when any upper case character is detected, the command defaults
to toggling to lower case.

> Note that when you apply the toggle case action to the CamelCase name
format, IntelliJ IDEA converts the name to the lower case.


https://www.jetbrains.com/help/idea/working-with-source-code.html#edit_code_fragments
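
If you're not on the `JetBrains` keymap, a user keymap entry along these
lines should bind the same action (the keys chosen here are illustrative;
only the `editor::ToggleCase` action name comes from this PR's keymap changes):

```jsonc
[
  {
    "context": "Editor",
    "bindings": {
      // Illustrative binding; pick whatever keys you prefer.
      "cmd-shift-u": "editor::ToggleCase"
    }
  }
]
```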

Release Notes:

- Added an `editor: toggle case` command. Use `cmd-shift-u` for macOS
and `ctrl-shift-u` for Linux, when using the `JetBrains` keymap.
2025-04-09 08:44:53 -04:00
Piotr Osiewicz
c441b651fa debugger: Add support for CodeLLDB (#28376)
Closes #ISSUE

Release Notes:

- N/A
2025-04-09 12:57:24 +02:00
Piotr Osiewicz
61ddcd516f chore: Add workspace-hack dependency to agent_rules (#28412)
Closes #ISSUE

Release Notes:

- N/A
2025-04-09 10:19:54 +00:00
Michael Sloan
f12a554f86 Use Project instead of Workspace in ContextStore (#28402)
Release Notes:

- N/A
2025-04-09 05:05:24 +00:00
Cole Miller
9dae4d8c59 Remove references to SSH remoting beta (#28399)
Release Notes:

- N/A
2025-04-09 03:26:22 +00:00
Cole Miller
f0b7f355a2 Clean up environment loading a bit (#28356)
Closes #ISSUE

Release Notes:

- N/A
2025-04-08 22:16:35 -04:00
Cole Miller
b687a5e56d git: Always reload current branch after pushing (#28327)
Closes #27347 

Release Notes:

- Fixed a bug causing the git panel to not update after pushing to a
remote
2025-04-08 22:16:03 -04:00
Ben Kunkle
e66a24edcf format: Re-implement support for formatting with code actions that contain commands (#28392)
Closes #27692
Closes #27935

Release Notes:

- Fixed a regression where code-actions used when formatting on save
were rejected if they contained commands
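
For reference, code actions that run as part of formatting are configured
through the `code_actions_on_format` setting (it appears in this release's
default settings); a hedged sketch, where the action kind is purely
illustrative:

```jsonc
{
  "code_actions_on_format": {
    // Example LSP code action kind; use whatever your language server advertises.
    "source.organizeImports": true
  }
}
```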
2025-04-09 01:53:54 +00:00
Michael Sloan
301fc7cd7b Pull out plain rules file loading code into a new agent_rules crate (#28383)
Also includes renames related to the rules file templated into the system prompt.

Release Notes:

- N/A
2025-04-09 01:31:56 +00:00
Mikayla Maki
020a1071d5 Add the project search as an item in the status bar (#28388)
Was chatting with @wilhelmklopp, and he pointed out that our current
UI-accessible entry point to the project search was pretty obscure.


<img width="393" alt="Screenshot 2025-04-08 at 6 57 51 PM"
src="https://github.com/user-attachments/assets/636053cd-5a88-4a5e-8155-6d41d189b7db"
/>

Release Notes:

- Added a button to open the project search to the status bar
2025-04-09 01:13:48 +00:00
Bennet Bo Fenner
38d2487630 agent: Polish Generating... animation (#28379)
https://github.com/user-attachments/assets/9e798a50-9403-4e1c-a3df-2931e748b77d



Release Notes:

- N/A
2025-04-08 18:14:30 -06:00
0x2CA
79c9f2bbd9 editor: Fix invalid read-only with split pane (#28012)
Closes #28004

Release Notes:

- Fixed an invalid read-only state when using split panes
2025-04-08 18:09:34 -06:00
Danilo Leal
c8caae03df agent: Change the reject changes keybinding (#28381)
This PR makes the reject keybinding in the Review Changes multibuffer
`cmd-n`.

Release Notes:

- N/A
2025-04-08 21:09:05 -03:00
Danilo Leal
dabc4d8ff5 agent: Remove type of item in the panel history view (#28382)
This PR removes the labels displaying whether a certain item in the
Agent Panel's history is a thread or prompt editor.

Release Notes:

- N/A
2025-04-08 21:08:56 -03:00
Antonio Scandurra
c0ad3e8183 Introduce a telemetry event for when a tool finishes (#28380)
This should help us understand which tools fail the most.

Release Notes:

- N/A
2025-04-09 00:07:06 +00:00
Kirill Bulatov
afde25a5cb Fix a docs typo (#28384)
Closes https://github.com/zed-industries/zed/pull/28053

Release Notes:

- N/A
2025-04-09 00:05:58 +00:00
Michael Sloan
9f708ee789 Fix refactoring bug in dashes around rounded corners (#28378)
Accidentally introduced in #28341

Release Notes:

- N/A
2025-04-09 00:00:30 +00:00
Michael Sloan
58731e2fd1 Remove log when pulldown_cmark produces long substituted text (#28375)
Turns out that consecutive dashes are substituted with half the number
of input dashes. Extended the test to cover this case as well.

Release Notes:

- N/A
2025-04-08 23:45:49 +00:00
Antonio Scandurra
d0632a5332 Fix truncation of bash output (#28374)
Release Notes:

- Fixed a regression that caused the bash tool to not include all of the
output.

---------

Co-authored-by: Agus Zubiaga <hi@aguz.me>
2025-04-08 23:41:20 +00:00
Danilo Leal
64cea2f1f1 agent: Refine toolbar spacing (#28373)
Release Notes:

- N/A
2025-04-08 23:28:25 +00:00
Antonio Scandurra
ac958d4a2d Encourage agent to edit files it just created (#28372)
Release Notes:

- Fixed a problem that would cause the agent to keep recreating a file
instead of editing it.
2025-04-08 23:18:34 +00:00
Danilo Leal
2df06cd2e4 agent: Improve thinking design display (#28186)
Release Notes:

- N/A
2025-04-08 20:13:49 -03:00
Danilo Leal
0d4ca71e68 agent: Change "prompt editor" to "text thread" (#28370)
Release Notes:

- N/A
2025-04-08 19:56:01 -03:00
Danilo Leal
e2d6505d12 agent: Make the copy button in the codeblock visible on hover (#28371)
This simplifies the UI a little bit.

Release Notes:

- N/A
2025-04-08 19:55:53 -03:00
Kirill Bulatov
f7c3c533a3 Update task defaults (#28368)
Follow-up of https://github.com/zed-industries/zed/pull/28359

Release Notes:

- N/A
2025-04-08 22:20:00 +00:00
Nate Butler
c05bf096f8 Merge Component and ComponentPreview trait (#28365)
- Merge `Component` and `ComponentPreview` trait
- Adds a number of component previews
- Removes a number of stories

Release Notes:

- N/A
2025-04-08 16:09:06 -06:00
João Marcos
b15ee1b1cc Add dedicated actions for LSP completions insertion mode (#28121)
Adds actions so you can have customized keybindings for `insert` and
`replace` modes.

It also adds `shift-enter` as a default binding for `replace`; this overrides
the default setting `completions.lsp_insert_mode`, which is set to
`replace_suffix` and tries to "smartly" decide whether to replace or insert
based on the surrounding text.

For those coming from VSCode who want to mimic its behavior, you only have to
set `completions.lsp_insert_mode` to `insert`.
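
As a concrete sketch, assuming the dotted name maps onto a nested
`completions` object in `settings.json`:

```jsonc
{
  "completions": {
    // Always insert the completion text rather than replacing the rest of
    // the word under the cursor (VSCode-like behavior).
    "lsp_insert_mode": "insert"
  }
}
```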

If you want `tab` and `enter` to do different things, you need to remap them;
here is an example:

```jsonc
[
  // ...
  {
    "context": "Editor && showing_completions",
    "bindings": {
      "enter": "editor::ConfirmCompletionInsert",
      "tab": "editor::ConfirmCompletionReplace"
    }
  },
]
```

Closes #24577

- [x] Make the LSP completion insertion mode decision on the guest's machine
(the host is currently deciding it, not allowing guests to have their own
setting for it)
- [x] Add shift-enter as a hotkey for `replace` by default.
- [x] Test actions.
- [x] Respect the setting being specified per language, instead of using
the "defaults".
- [x] Move `insert_range` of `Completion` to the Lsp variant of
`.source`.
- [x] Fix broken default, forgotten after
https://github.com/zed-industries/zed/pull/27453#pullrequestreview-2736906628,
should be `replace_suffix` and not `insert`.

Release Notes:

- LSP completions: added `ConfirmCompletionInsert` and
`ConfirmCompletionReplace` actions that control how completions are inserted.
These override `completions.lsp_insert_mode`; by default, `shift-enter`
triggers `ConfirmCompletionReplace`, which replaces the whole word.
2025-04-08 22:03:03 +00:00
Cole Miller
0459b1d303 Fix panic when a file in a path-based multibuffer excerpt is renamed (#28364)
Closes #ISSUE

Release Notes:

- Fixed a panic that could occur when paths changed in the project diff.

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-04-08 22:01:40 +00:00
5brian
246013cfc2 tab_switcher: Add keybind to close tab tooltip (#27212)
| prev | new |
|--|--|
|<img width="619" alt="image"
src="https://github.com/user-attachments/assets/53b14fd4-17ee-4336-81ca-30324d918e15"
/>|<img width="620" alt="image"
src="https://github.com/user-attachments/assets/316699b3-295b-4f83-9fb1-b799f7c71d7f"
/>|


Release Notes:

- N/A
2025-04-08 15:57:36 -06:00
Bennet Bo Fenner
47eaf274d6 agent: Only require confirmation for batch tool when subset of tool calls require confirmation (#28363)
Release Notes:

- agent: Only require confirmation for the batch tool when a subset of its
tool calls requires confirmation
2025-04-08 21:37:10 +00:00
Peter Tripp
ef4b5b0698 script: Ignore feature/meta issues from issue_response nag (#28332)
Release Notes:

- N/A
2025-04-08 17:14:07 -04:00
Kirill Bulatov
39c98ce882 Support tasks from rust-analyzer (#28359)
(and, in theory, any other language server that exposes a similar LSP
extension endpoint for this)

Closes https://github.com/zed-industries/zed/issues/16160

* adds a way to disable tree-sitter tasks (the ones from the plugins,
enabled by default) with the following language settings (a combined sketch
of both settings appears below this list):
```json5
"languages": {
  "Rust": {
    "tasks": {
      "enabled": false
    }
  }
}
```

* adds a way to disable LSP tasks (the ones from the rust-analyzer
language server, enabled by default) with
```json5
"lsp": {
  "rust-analyzer": {
    "enable_lsp_tasks": false,
  }
}
```

* adds rust-analyzer tasks to the tasks modal and the gutter:

<img width="1728" alt="modal"
src="https://github.com/user-attachments/assets/22b9cee1-4ffb-4c9e-b1f1-d01e80e72508"
/>

<img width="396" alt="gutter"
src="https://github.com/user-attachments/assets/bd818079-e247-4332-bdb5-1b7cb1cce768"
/>
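
Putting the two settings above together, a complete `settings.json` fragment
might look like this (a sketch combining the snippets from the list; both
values shown are the non-default "disable" choices):

```jsonc
{
  "languages": {
    "Rust": {
      "tasks": {
        // Turn off tree-sitter/plugin-provided tasks for Rust.
        "enabled": false
      }
    }
  },
  "lsp": {
    "rust-analyzer": {
      // Turn off tasks provided by the language server.
      "enable_lsp_tasks": false
    }
  }
}
```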


Release Notes:

- Added tasks from rust-analyzer
2025-04-08 15:07:56 -06:00
Joseph T. Lyons
763cc6dba3 Tell the model not to act on TODO type comments (#28358)
Release Notes:

- Adjusted system prompt to direct it to never act on TODO-type comments
it encounters, unless the user directly asked it to do so or they relate
to the current task at hand.
2025-04-08 21:00:02 +00:00
Piotr Osiewicz
0b75c13034 chore: Replace as_any functions with trait upcasting (#28221)
Closes #ISSUE

Release Notes:

- N/A
2025-04-08 22:16:27 +02:00
Ben Kunkle
38ec45008c project: Workaround invalid code action edits from pyright (#28354)
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Piotr Osiewicz <piotr@zed.dev>

Fixes an issue where:

In a two-line Python file like this:
```

Path()
```

If the user asks for code actions on `Path` and they select
"from pathlib import Path",
the result they get is
```

Pathfrom pathlib import Path


Path()
```
Instead of 

```

from pathlib import Path



Path()
```

This is due to the following non-LSP-spec-compliant response from pyright:

```json
{"jsonrpc":"2.0","id":40,"result":[{"title":"from pathlib import Path","edit":{"changes":{"file:///Users/neb/Zed/example-project/pyright-project/main.py":[{"range":{"start":{"line":2,"character":0},"end":{"line":2,"character":4}},"newText":"Path"},{"range":{"start":{"line":2,"character":0},"end":{"line":2,"character":0}},"newText":"from pathlib import Path\n\n\n"}]}},"kind":"quickfix"}]}
```

Release Notes:

- Fixed an issue when using auto-import code actions provided by pyright
(or basedpyright) where the added import statement would be jumbled together
with the edited symbol, resulting in invalid code

Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Anthony Eid <hello@anthonyeid.me>
2025-04-08 20:13:44 +00:00
Antonio Scandurra
97641c3298 Use tree-sitter when returning symbols to the model for a given file (#28352)
This also increases the threshold for when we return an outline during
`read_file`.

Release Notes:

- Fixed an issue that caused the agent to fail to read large files if the
language server hadn't started yet.
2025-04-08 16:11:05 -04:00
Joseph T. Lyons
ca8f6e8a3f Tell the model not to remove tests (#28349)
Release Notes:

- Adjusted system prompt to direct it to never remove tests as a way to
have the test suite pass, unless the user directly asks for test
removal.
2025-04-08 19:26:43 +00:00
Piotr Osiewicz
db53da49e1 debugger: Respect initialize_args from user profiles (#28347)
Closes #ISSUE

Release Notes:

- N/A

Co-authored-by: Ben Kunkle <ben.kunkle@gmail.com>
2025-04-08 21:15:05 +02:00
Peter Tripp
df94dcdea6 ci: Only run workspace_hack when tests run (#28346)
Skip `workspace_hack` for PRs that don't trigger tests (docs-only,
.github issue templates, etc).

Release Notes:

- N/A
2025-04-08 18:55:54 +00:00
Richard Feldman
1c85901440 Tell the model not to create .bak files (#28244)
Release Notes:

- Adjusted system prompt to avoid having the agent create backup files
unnecessarily.
2025-04-08 18:45:35 +00:00
Peter Tripp
9fb77ad176 Refine GitHub Issue templates (#28345)
Make various improvements to our github issue templates.

- Adjust line lengths to not wrap in the constrained new-issue view (85
columns), not just the full-screen view (95 columns)
- Remove the reference to drag-and-dropping logs to upload (recently,
multiple issues have had dead upload links)
- Clean up the list view

Release Notes:

- N/A
2025-04-08 14:41:55 -04:00
Michael Sloan
feafad2f9d Improve comments on shader code for dashed borders (#28341)
Improvements from going over the code with @as-cii 

Release Notes:

- N/A
2025-04-08 18:08:22 +00:00
Piotr Osiewicz
86ef00054b pylsp: Upgrade existing installation if possible (#28338)
Closes #ISSUE

Release Notes:

- Zed-managed pylsp installations will now correctly upgrade themselves
2025-04-08 20:01:09 +02:00
Richard Feldman
1927dc039e Try replacing find-replace tool with code action tool 2025-04-04 10:17:40 -05:00
Richard Feldman
af0e2068cd wip 2025-03-31 16:35:17 -05:00
Richard Feldman
44e6701ccc Save buffers after running tool actions 2025-03-31 15:53:00 -05:00
Richard Feldman
47948f8309 Get basic code actions working 2025-03-31 15:46:25 -05:00
Richard Feldman
84a67d82cb Let the code action actually do edits 2025-03-31 13:34:13 -05:00
Richard Feldman
d478a709ed Make code-actions a default action 2025-03-31 13:34:13 -05:00
Richard Feldman
07f7a391c9 Add ListActions to tool 2025-03-31 13:34:13 -05:00
Richard Feldman
09af38a144 Add Code Action Tool 2025-03-31 13:34:13 -05:00
190 changed files with 7830 additions and 3730 deletions

36
.github/ISSUE_TEMPLATE/01_bug_agent.yml vendored Normal file
View File

@@ -0,0 +1,36 @@
name: Bug Report (Agent Panel)
description: Zed Agent Panel Bugs
type: "Bug"
labels: ["agent", "ai"]
title: "Agent Panel: <a short description of the Agent Panel bug>"
body:
- type: textarea
attributes:
label: Summary
description: Describe the bug with a one line summary, and provide detailed reproduction steps
value: |
<!-- Please insert a one line summary of the issue below -->
SUMMARY_SENTENCE_HERE
### Description
<!-- Describe with sufficient detail to reproduce from a clean Zed install. -->
<!-- Please include the LLM provider and model name you are using -->
Steps to trigger the problem:
1.
2.
3.
Actual Behavior:
Expected Behavior:
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed Version and System Specs
description: 'Open Zed, and in the command palette select "zed: Copy System Specs Into Clipboard"'
placeholder: |
Output of "zed: Copy System Specs Into Clipboard"
validations:
required: true

View File

@@ -1,51 +0,0 @@
name: Git Bug Report
description: There is a bug related to Git features in Zed
type: "Bug"
labels: ["git"]
title: "Git: <a short description of the Git bug>"
body:
- type: textarea
attributes:
label: Summary
description: Describe the bug with a one line summary, and provide detailed reproduction steps
value: |
<!-- Please insert a one line summary of the issue below -->
<!-- Include all steps necessary to reproduce from a clean Zed installation. Be verbose -->
Steps to trigger the problem:
1.
2.
3.
Actual Behavior:
Expected Behavior:
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed Version and System Specs
description: 'Open Zed, and in the command palette select "zed: Copy System Specs Into Clipboard"'
placeholder: |
Output of "zed: Copy System Specs Into Clipboard"
validations:
required: true
- type: textarea
attributes:
label: If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
description: |
macOS: `~/Library/Logs/Zed/Zed.log`
Linux: `~/.local/share/zed/logs/Zed.log` or $XDG_DATA_HOME
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
value: |
<details><summary>Zed.log</summary>
<!-- Click below this line and paste or drag-and-drop your log-->
```
```
<!-- Click above this line and paste or drag-and-drop your log--></details>
validations:
required: false

View File

@@ -1,51 +0,0 @@
name: Agent Panel Bug Report
description: There is a bug related to the Agent Panel in Zed
type: "Bug"
labels: ["agent", "ai"]
title: "Agent Panel: <a short description of the Agent Panel bug>"
body:
- type: textarea
attributes:
label: Summary
description: Describe the bug with a one line summary, and provide detailed reproduction steps
value: |
<!-- Please insert a one line summary of the issue below -->
<!-- Include all steps necessary to reproduce from a clean Zed installation. Be verbose -->
Steps to trigger the problem:
1.
2.
3.
Actual Behavior:
Expected Behavior:
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed Version and System Specs
description: 'Open Zed, and in the command palette select "zed: Copy System Specs Into Clipboard"'
placeholder: |
Output of "zed: Copy System Specs Into Clipboard"
validations:
required: true
- type: textarea
attributes:
label: If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
description: |
macOS: `~/Library/Logs/Zed/Zed.log`
Linux: `~/.local/share/zed/logs/Zed.log` or $XDG_DATA_HOME
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
value: |
<details><summary>Zed.log</summary>
<!-- Click below this line and paste or drag-and-drop your log-->
```
```
<!-- Click above this line and paste or drag-and-drop your log--></details>
validations:
required: false

View File

@@ -1,5 +1,5 @@
name: Edit Predictions Bug Report
description: There is a bug related to Edit Predictions in Zed
name: Bug Report (Edit Predictions)
description: Zed Edit Predictions bugs
type: "Bug"
labels: ["ai", "inline completion", "zeta"]
title: "Edit Predictions: <a short description of the Edit Prediction bug>"
@@ -10,19 +10,21 @@ body:
description: Describe the bug with a one line summary, and provide detailed reproduction steps
value: |
<!-- Please insert a one line summary of the issue below -->
SUMMARY_SENTENCE_HERE
<!-- Include all steps necessary to reproduce from a clean Zed installation. Be verbose -->
### Description
<!-- Describe with sufficient detail to reproduce from a clean Zed install. -->
<!-- Please include the LLM provider and model name you are using -->
Steps to trigger the problem:
1.
2.
3.
Actual Behavior:
Expected Behavior:
validations:
required: true
- type: textarea
id: environment
attributes:
@@ -32,20 +34,3 @@ body:
Output of "zed: Copy System Specs Into Clipboard"
validations:
required: true
- type: textarea
attributes:
label: If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
description: |
macOS: `~/Library/Logs/Zed/Zed.log`
Linux: `~/.local/share/zed/logs/Zed.log` or $XDG_DATA_HOME
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
value: |
<details><summary>Zed.log</summary>
<!-- Click below this line and paste or drag-and-drop your log-->
```
```
<!-- Click above this line and paste or drag-and-drop your log--></details>
validations:
required: false

35
.github/ISSUE_TEMPLATE/03_bug_git.yml vendored Normal file
View File

@@ -0,0 +1,35 @@
name: Bug Report (Git)
description: Zed Git-Related Bugs
type: "Bug"
labels: ["git"]
title: "Git: <a short description of the Git bug>"
body:
- type: textarea
attributes:
label: Summary
description: Describe the bug with a one line summary, and provide detailed reproduction steps
value: |
<!-- Please insert a one line summary of the issue below -->
SUMMARY_SENTENCE_HERE
### Description
<!-- Describe with sufficient detail to reproduce from a clean Zed install. -->
Steps to trigger the problem:
1.
2.
3.
Actual Behavior:
Expected Behavior:
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed Version and System Specs
description: 'Open Zed, and in the command palette select "zed: Copy System Specs Into Clipboard"'
placeholder: |
Output of "zed: Copy System Specs Into Clipboard"
validations:
required: true

View File

@@ -1,46 +1,44 @@
name: Bug Report
name: Bug Report (Other)
description: |
Something is broken in Zed (exclude crashing).
Something else is broken in Zed (exclude crashing).
type: "Bug"
body:
- type: textarea
attributes:
label: Summary
description: Describe the bug with a one line summary, and provide detailed reproduction steps
description: Provide a one sentence summary and detailed reproduction steps
value: |
<!-- Please insert a one line summary of the issue below -->
<!-- Begin your issue with a one sentence summary -->
SUMMARY_SENTENCE_HERE
<!-- Be verbose: Include all steps necessary to reproduce from a clean Zed installation. -->
<!-- Code snippets are better than images, a repository link that reproduces the issue is ideal. -->
### Description
<!-- Describe with sufficient detail to reproduce from a clean Zed install.
- Any code must be sufficient to reproduce (include context!)
- Code must as text, not just as a screenshot.
- Issues with insufficient detail may be summarily closed.
-->
Steps to trigger the problem:
Steps to reproduce:
1.
2.
3.
4.
Expected Behavior:
Actual Behavior:
Expected Behavior:
<!-- Before Submitting, did you:
1. Include settings.json, keymap.json, .editorconfig if relevant?
2. Check your Zed.log for relevant errors? (please include!)
3. Click Preview to ensure everything looks right?
4. Hide videos, large images and logs in ``` inside collapsible blocks:
<!--
Is there anything additional necessary to reproduce this issue?
- settings.json, keymap.json, .editorconfig etc?
- Does it happen intermittently or only with specific projects / file types?
- Have you found a workaround?
<details><summary>click to expand</summary>
Did you check your Zed.log to see if there is any relevant details there?
- When including large items (videos, screenshots, logs, configs) please wrap with:
```json
<details><summary>See inside for XXXXYYY</summary>
```shell
code
```
</details>
```
</details>
-->
validations:
@@ -50,7 +48,8 @@ body:
id: environment
attributes:
label: Zed Version and System Specs
description: 'Open Zed, and in the command palette select "zed: Copy System Specs Into Clipboard"'
description: |
Open Zed, from the command palette select "zed: Copy System Specs Into Clipboard"
placeholder: |
Output of "zed: Copy System Specs Into Clipboard"
validations:

View File

@@ -5,10 +5,12 @@ body:
- type: textarea
attributes:
label: Summary
description: Describe the bug with a one line summary, and provide detailed reproduction steps
description: Summarize the issue with detailed reproduction steps
value: |
<!-- Please insert a one line summary of the issue below -->
<!-- Begin your issue with a one sentence summary -->
SUMMARY_SENTENCE_HERE
### Description
<!-- Include all steps necessary to reproduce from a clean Zed installation. Be verbose -->
Steps to trigger the problem:
1.
@@ -16,7 +18,6 @@ body:
3.
Actual Behavior:
Expected Behavior:
validations:
@@ -40,10 +41,11 @@ body:
value: |
<details><summary>Zed.log</summary>
<!-- Click below this line and paste or drag-and-drop your log-->
```
<!-- Paste your log inside the code block. -->
```log
```
<!-- Click above this line and paste or drag-and-drop your log--></details>
</details>
validations:
required: false

View File

@@ -4,9 +4,6 @@ contact_links:
- name: Feature Request
url: https://github.com/zed-industries/zed/discussions/new/choose
about: To request a feature, open a new Discussion in one of the appropriate Discussion categories
- name: Zed Discussion Forum
url: https://github.com/zed-industries/zed/discussions
about: A community discussion forum
- name: "Zed Discord: #Support Channel"
- name: "Zed Discord"
url: https://zed.dev/community-links
about: Real-time discussion and user support

View File

@@ -114,7 +114,9 @@ jobs:
timeout-minutes: 60
name: Check workspace-hack crate
needs: [job_spec]
if: github.repository_owner == 'zed-industries'
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on:
- buildjet-8vcpu-ubuntu-2204
steps:

86
Cargo.lock generated
View File

@@ -52,6 +52,7 @@ dependencies = [
name = "agent"
version = "0.1.0"
dependencies = [
"agent_rules",
"anyhow",
"assistant_context_editor",
"assistant_settings",
@@ -161,6 +162,20 @@ dependencies = [
"workspace-hack",
]
[[package]]
name = "agent_rules"
version = "0.1.0"
dependencies = [
"anyhow",
"fs",
"gpui",
"indoc",
"prompt_store",
"util",
"workspace-hack",
"worktree",
]
[[package]]
name = "ahash"
version = "0.7.8"
@@ -746,7 +761,8 @@ dependencies = [
"itertools 0.14.0",
"language",
"language_model",
"lsp",
"log",
"lsp-types",
"open",
"project",
"rand 0.8.5",
@@ -754,6 +770,7 @@ dependencies = [
"schemars",
"serde",
"serde_json",
"strum",
"ui",
"unindent",
"util",
@@ -3330,6 +3347,15 @@ version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6245d59a3e82a7fc217c5828a6692dbc6dfb63a0c8c90495621f7b9d79704a0e"
[[package]]
name = "convert_case"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec182b0ca2f35d8fc196cf3404988fd8b8c739a4d270ff118a398feb0cbec1ca"
dependencies = [
"unicode-segmentation",
]
[[package]]
name = "convert_case"
version = "0.8.0"
@@ -4462,6 +4488,32 @@ dependencies = [
"workspace-hack",
]
[[package]]
name = "documented"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc6db32f0995bc4553d2de888999075acd0dbeef75ba923503f6a724263dc6f3"
dependencies = [
"documented-macros",
"phf",
"thiserror 1.0.69",
]
[[package]]
name = "documented-macros"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a394bb35929b58f9a5fd418f7c6b17a4b616efcc1e53e6995ca123948f87e5fa"
dependencies = [
"convert_case 0.6.0",
"itertools 0.13.0",
"optfield",
"proc-macro2",
"quote",
"strum",
"syn 2.0.100",
]
[[package]]
name = "dotenvy"
version = "0.15.7"
@@ -8608,7 +8660,8 @@ checksum = "e53debba6bda7a793e5f99b8dacf19e626084f525f7829104ba9898f367d85ff"
[[package]]
name = "mio"
version = "1.0.3"
source = "git+https://github.com/ConradIrwin/mio?rev=d30ff26870457cdeee2f638be65543d65faff37d#d30ff26870457cdeee2f638be65543d65faff37d"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2886843bf800fba2e3377cff24abf6379b4c4d5c6681eaf9ea5b0d15090450bd"
dependencies = [
"libc",
"log",
@@ -9542,6 +9595,17 @@ dependencies = [
"vcpkg",
]
[[package]]
name = "optfield"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fa59f025cde9c698fcb4fcb3533db4621795374065bee908215263488f2d2a1d"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.100",
]
[[package]]
name = "option-ext"
version = "0.2.0"
@@ -12220,7 +12284,8 @@ dependencies = [
[[package]]
name = "rustls"
version = "0.23.25"
source = "git+https://github.com/ConradIrwin/rustls?rev=fa9c96ea259d6ce69445de54755401fa197e63cd#fa9c96ea259d6ce69445de54755401fa197e63cd"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "822ee9188ac4ec04a2f0531e55d035fb2de73f18b41a63c70c2712503b6fb13c"
dependencies = [
"aws-lc-rs",
"log",
@@ -14126,12 +14191,14 @@ name = "tasks_ui"
version = "0.1.0"
dependencies = [
"anyhow",
"collections",
"debugger_ui",
"editor",
"feature_flags",
"file_icons",
"fuzzy",
"gpui",
"itertools 0.14.0",
"language",
"menu",
"picker",
@@ -14622,7 +14689,8 @@ dependencies = [
[[package]]
name = "tokio"
version = "1.44.2"
source = "git+https://github.com/ConradIrwin/tokio?rev=5499df6df21837104cff5f85b51d3529179ed6f4#5499df6df21837104cff5f85b51d3529179ed6f4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6b88822cbe49de4185e3a4cbf8321dd487cf5fe0c5c65695fef6346371e9c48"
dependencies = [
"backtrace",
"bytes 1.10.1",
@@ -14650,7 +14718,8 @@ dependencies = [
[[package]]
name = "tokio-macros"
version = "2.5.0"
source = "git+https://github.com/ConradIrwin/tokio?rev=5499df6df21837104cff5f85b51d3529179ed6f4#5499df6df21837104cff5f85b51d3529179ed6f4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6e06d43f1345a3bcd39f6a56dbb7dcab2ba47e68e8ac134855e7e2bdbaf8cab8"
dependencies = [
"proc-macro2",
"quote",
@@ -14680,7 +14749,8 @@ dependencies = [
[[package]]
name = "tokio-rustls"
version = "0.26.2"
source = "git+https://github.com/ConradIrwin/tokio-rustls?rev=f544a5d2f2eff8b2dd5527bd7dc78c854c218d06#f544a5d2f2eff8b2dd5527bd7dc78c854c218d06"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e727b36a1a0e8b74c376ac2211e40c2c8af09fb4013c60d910495810f008e9b"
dependencies = [
"rustls 0.23.25",
"tokio",
@@ -15343,6 +15413,7 @@ version = "0.1.0"
dependencies = [
"chrono",
"component",
"documented",
"gpui",
"icons",
"itertools 0.14.0",
@@ -17608,6 +17679,7 @@ dependencies = [
"indexmap",
"inout",
"itertools 0.12.1",
"itertools 0.13.0",
"lazy_static",
"libc",
"libsqlite3-sys",
@@ -18036,7 +18108,7 @@ dependencies = [
[[package]]
name = "zed"
version = "0.182.0"
version = "0.183.0"
dependencies = [
"activity_indicator",
"agent",

View File

@@ -3,6 +3,7 @@ resolver = "2"
members = [
"crates/activity_indicator",
"crates/agent",
"crates/agent_rules",
"crates/anthropic",
"crates/askpass",
"crates/assets",
@@ -209,6 +210,7 @@ edition = "2024"
activity_indicator = { path = "crates/activity_indicator" }
agent = { path = "crates/agent" }
agent_rules = { path = "crates/agent_rules" }
ai = { path = "crates/ai" }
anthropic = { path = "crates/anthropic" }
askpass = { path = "crates/askpass" }
@@ -294,6 +296,7 @@ livekit_api = { path = "crates/livekit_api" }
livekit_client = { path = "crates/livekit_client" }
lmstudio = { path = "crates/lmstudio" }
lsp = { path = "crates/lsp" }
lsp-types = { git = "https://github.com/zed-industries/lsp-types", rev = "1fff0dd12e2071c5667327394cfec163d2a466ab" }
markdown = { path = "crates/markdown" }
markdown_preview = { path = "crates/markdown_preview" }
media = { path = "crates/media" }
@@ -663,11 +666,6 @@ features = [
cpal = { git = "https://github.com/zed-industries/cpal", rev = "fd8bc2fd39f1f5fdee5a0690656caff9a26d9d50" }
notify = { git = "https://github.com/zed-industries/notify.git", rev = "bbb9ea5ae52b253e095737847e367c30653a2e96" }
notify-types = { git = "https://github.com/zed-industries/notify.git", rev = "bbb9ea5ae52b253e095737847e367c30653a2e96" }
rustls = { git = "https://github.com/ConradIrwin/rustls", rev = "fa9c96ea259d6ce69445de54755401fa197e63cd"}
tokio-rustls = { git = "https://github.com/ConradIrwin/tokio-rustls", rev = "f544a5d2f2eff8b2dd5527bd7dc78c854c218d06" }
tokio = { git = "https://github.com/ConradIrwin/tokio", rev = "5499df6df21837104cff5f85b51d3529179ed6f4"}
mio = { git = "https://github.com/ConradIrwin/mio", rev = "d30ff26870457cdeee2f638be65543d65faff37d"}
# Makes the workspace hack crate refer to the local one, but only when you're building locally
workspace-hack = { path = "tooling/workspace-hack" }

View File

@@ -0,0 +1,3 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10.1331 11.3776C10.2754 10.6665 10.1331 9.78593 11.1998 8.53327C11.82 7.80489 12.2664 6.96894 12.2664 6.04456C12.2664 4.91305 11.8169 3.82788 11.0168 3.02778C10.2167 2.22769 9.13152 1.7782 8.00001 1.7782C6.8685 1.7782 5.78334 2.22769 4.98324 3.02778C4.18314 3.82788 3.73364 4.91305 3.73364 6.04456C3.73364 6.75562 3.87586 7.6089 4.80024 8.53327C5.86683 9.80679 5.72462 10.6665 5.86683 11.3776M10.1331 11.3776V12.8821C10.1331 13.622 9.53341 14.2218 8.79353 14.2218H7.2065C6.46662 14.2218 5.86683 13.622 5.86683 12.8821V11.3776M10.1331 11.3776H5.86683" stroke="black" stroke-width="1.33333" stroke-linecap="round" stroke-linejoin="round"/>
</svg>


View File

@@ -150,7 +150,7 @@
"context": "AgentDiff",
"bindings": {
"ctrl-y": "agent::Keep",
"ctrl-k ctrl-r": "agent::Reject"
"ctrl-n": "agent::Reject"
}
},
{
@@ -532,6 +532,7 @@
"context": "Editor && showing_completions",
"bindings": {
"enter": "editor::ConfirmCompletion",
"shift-enter": "editor::ConfirmCompletionReplace",
"tab": "editor::ComposeCompletion"
}
},

View File

@@ -242,7 +242,7 @@
"use_key_equivalents": true,
"bindings": {
"cmd-y": "agent::Keep",
"cmd-alt-z": "agent::Reject"
"cmd-n": "agent::Reject"
}
},
{
@@ -681,6 +681,7 @@
"use_key_equivalents": true,
"bindings": {
"enter": "editor::ConfirmCompletion",
"shift-enter": "editor::ConfirmCompletionReplace",
"tab": "editor::ComposeCompletion"
}
},

View File

@@ -58,7 +58,8 @@
"ctrl-shift-home": "editor::SelectToBeginning",
"ctrl-shift-end": "editor::SelectToEnd",
"ctrl-f8": "editor::ToggleBreakpoint",
"ctrl-shift-f8": "editor::EditLogBreakpoint"
"ctrl-shift-f8": "editor::EditLogBreakpoint",
"ctrl-shift-u": "editor::ToggleCase"
}
},
{

View File

@@ -55,7 +55,8 @@
"cmd-shift-home": "editor::SelectToBeginning",
"cmd-shift-end": "editor::SelectToEnd",
"ctrl-f8": "editor::ToggleBreakpoint",
"ctrl-shift-f8": "editor::EditLogBreakpoint"
"ctrl-shift-f8": "editor::EditLogBreakpoint",
"cmd-shift-u": "editor::ToggleCase"
}
},
{

View File

@@ -6,11 +6,18 @@ You are an AI assistant integrated into a code editor. You have the programming
It will be up to you to decide which of these you are doing based on what the user has told you. When unclear, ask clarifying questions to understand the user's intent before proceeding.
You should only perform actions that modify the user's system if explicitly requested by the user:
- If the user asks a question about how to accomplish a task, provide guidance or information, and use read-only tools (e.g., search) to assist. You may suggest potential actions, but do not directly modify the users system without explicit instruction.
- If the user asks a question about how to accomplish a task, provide guidance or information, and use read-only tools (e.g., search) to assist. You may suggest potential actions, but do not directly modify the user's system without explicit instruction.
- If the user clearly requests that you perform an action, carry out the action directly without explaining why you are doing so.
When answering questions, it's okay to give incomplete examples containing comments about what would go there in a real version. When being asked to directly perform tasks on the code base, you must ALWAYS make fully working code. You may never "simplify" the code by omitting or deleting functionality you know the user has requested, and you must NEVER write comments like "in a full version, this would..." - instead, you must actually implement the real version. Don't be lazy!
Note that project files are automatically backed up. The user can always get them back later if anything goes wrong, so there's
no need to create backup files (e.g. `.bak` files) because these files will just take up unnecessary space on the user's disk.
When attempting to resolve issues around failing tests, never simply remove the failing tests. Unless the user explicitly asks you to remove tests, ALWAYS attempt to fix the code causing the tests to fail.
Ignore "TODO"-type comments unless they're relevant to the user's explicit request or the user specifically asks you to address them. It is, however, okay to include them in codebase summaries.
<style>
Editing code:
- Make sure to take previous edits into account.
@@ -148,7 +155,7 @@ There are rules that apply to these root directories:
{{#each worktrees}}
{{#if rules_file}}
`{{root_name}}/{{rules_file.rel_path}}`:
`{{root_name}}/{{rules_file.path_in_worktree}}`:
``````
{{{rules_file.text}}}

View File

@@ -0,0 +1 @@
In your response, and also when thinking, make sure to remember and follow my instructions about how to format code blocks (and don't ever mention that you are remembering it, just follow the instructions).

View File

@@ -658,6 +658,7 @@
"tools": {
"bash": true,
"batch_tool": true,
"code_actions": true,
"code_symbols": true,
"copy_path": false,
"create_file": true,
@@ -671,6 +672,7 @@
"path_search": true,
"read_file": true,
"regex_search": true,
"rename": true,
"symbol_info": true,
"thinking": true
}
@@ -1136,7 +1138,8 @@
"code_actions_on_format": {},
// Settings related to running tasks.
"tasks": {
"variables": {}
"variables": {},
"enabled": true
},
// An object whose keys are language names, and whose values
// are arrays of filenames or extensions of files that should
@@ -1456,6 +1459,8 @@
"lsp": {
// Specify the LSP name as a key here.
// "rust-analyzer": {
// // A special flag for rust-analyzer integration, to use server-provided tasks
// enable_lsp_tasks": true,
// // These initialization options are merged into Zed's defaults
// "initialization_options": {
// "check": {

View File

@@ -19,6 +19,7 @@ test-support = [
]
[dependencies]
agent_rules.workspace = true
anyhow.workspace = true
assistant_context_editor.workspace = true
assistant_settings.workspace = true

View File

@@ -376,7 +376,7 @@ fn render_markdown_code_block(
.cursor_pointer()
.rounded_sm()
.hover(|item| item.bg(cx.theme().colors().element_hover.opacity(0.5)))
.tooltip(Tooltip::text("Jump to file"))
.tooltip(Tooltip::text("Jump to File"))
.children(
file_icons::FileIcons::get_icon(&path_range.path, cx)
.map(Icon::from_path)
@@ -456,6 +456,7 @@ fn render_markdown_code_block(
.contains(&(message_id, ix));
let codeblock_header = h_flex()
.group("codeblock_header")
.p_1()
.gap_1()
.justify_between()
@@ -465,45 +466,47 @@ fn render_markdown_code_block(
.rounded_t_md()
.children(label)
.child(
IconButton::new(
("copy-markdown-code", ix),
if codeblock_was_copied {
IconName::Check
} else {
IconName::Copy
},
)
.icon_color(Color::Muted)
.shape(ui::IconButtonShape::Square)
.tooltip(Tooltip::text("Copy Code"))
.on_click({
let active_thread = active_thread.clone();
let parsed_markdown = parsed_markdown.clone();
move |_event, _window, cx| {
active_thread.update(cx, |this, cx| {
this.copied_code_block_ids.insert((message_id, ix));
div().visible_on_hover("codeblock_header").child(
IconButton::new(
("copy-markdown-code", ix),
if codeblock_was_copied {
IconName::Check
} else {
IconName::Copy
},
)
.icon_color(Color::Muted)
.shape(ui::IconButtonShape::Square)
.tooltip(Tooltip::text("Copy Code"))
.on_click({
let active_thread = active_thread.clone();
let parsed_markdown = parsed_markdown.clone();
move |_event, _window, cx| {
active_thread.update(cx, |this, cx| {
this.copied_code_block_ids.insert((message_id, ix));
let code =
without_fences(&parsed_markdown.source()[codeblock_range.clone()])
.to_string();
let code =
without_fences(&parsed_markdown.source()[codeblock_range.clone()])
.to_string();
cx.write_to_clipboard(ClipboardItem::new_string(code.clone()));
cx.write_to_clipboard(ClipboardItem::new_string(code.clone()));
cx.spawn(async move |this, cx| {
cx.background_executor().timer(Duration::from_secs(2)).await;
cx.spawn(async move |this, cx| {
cx.background_executor().timer(Duration::from_secs(2)).await;
cx.update(|cx| {
this.update(cx, |this, cx| {
this.copied_code_block_ids.remove(&(message_id, ix));
cx.notify();
cx.update(|cx| {
this.update(cx, |this, cx| {
this.copied_code_block_ids.remove(&(message_id, ix));
cx.notify();
})
})
.ok();
})
.ok();
})
.detach();
});
}
}),
.detach();
});
}
}),
),
);
v_flex()
@@ -1219,17 +1222,30 @@ impl ActiveThread {
Label::new("Generating")
.color(Color::Muted)
.size(LabelSize::Small)
.with_animation(
.with_animations(
"generating-label",
Animation::new(Duration::from_secs(1)).repeat(),
|mut label, delta| {
let text = match delta {
d if d < 0.25 => "Generating",
d if d < 0.5 => "Generating.",
d if d < 0.75 => "Generating..",
_ => "Generating...",
};
label.set_text(text);
vec![
Animation::new(Duration::from_secs(1)),
Animation::new(Duration::from_secs(1)).repeat(),
],
|mut label, animation_ix, delta| {
match animation_ix {
0 => {
let chars_to_show = (delta * 10.).ceil() as usize;
let text = &"Generating"[0..chars_to_show];
label.set_text(text);
}
1 => {
let text = match delta {
d if d < 0.25 => "Generating",
d if d < 0.5 => "Generating.",
d if d < 0.75 => "Generating..",
_ => "Generating...",
};
label.set_text(text);
}
_ => {}
}
label
},
)
@@ -1753,7 +1769,7 @@ impl ActiveThread {
None
};
div()
v_flex()
.text_ui(cx)
.gap_2()
.children(
@@ -1838,177 +1854,225 @@ impl ActiveThread {
.copied()
.unwrap_or_default();
let editor_bg = cx.theme().colors().editor_background;
let editor_bg = cx.theme().colors().panel_background;
div().pt_0p5().pb_2().child(
v_flex()
.rounded_lg()
.border_1()
.border_color(self.tool_card_border_color(cx))
.child(
h_flex()
.group("disclosure-header")
.justify_between()
.py_1()
.px_2()
.bg(self.tool_card_header_bg(cx))
.map(|this| {
if pending || is_open {
this.rounded_t_md()
.border_b_1()
.border_color(self.tool_card_border_color(cx))
} else {
this.rounded_md()
}
})
.child(
h_flex()
.gap_1p5()
.child(
Icon::new(IconName::Brain)
.size(IconSize::XSmall)
.color(Color::Muted),
)
.child({
if pending {
Label::new("Thinking…")
div().map(|this| {
if pending {
this.v_flex()
.mt_neg_2()
.mb_1p5()
.child(
h_flex()
.group("disclosure-header")
.justify_between()
.child(
h_flex()
.gap_1p5()
.child(
Icon::new(IconName::LightBulb)
.size(IconSize::XSmall)
.color(Color::Muted),
)
.child({
Label::new("Thinking")
.color(Color::Muted)
.size(LabelSize::Small)
.buffer_font(cx)
.with_animation(
"generating-label",
Animation::new(Duration::from_secs(1)).repeat(),
|mut label, delta| {
let text = match delta {
d if d < 0.25 => "Thinking",
d if d < 0.5 => "Thinking.",
d if d < 0.75 => "Thinking..",
_ => "Thinking...",
};
label.set_text(text);
label
},
)
.with_animation(
"pulsating-label",
Animation::new(Duration::from_secs(2))
.repeat()
.with_easing(pulsating_between(0.4, 0.8)),
|label, delta| label.alpha(delta),
.with_easing(pulsating_between(0.6, 1.)),
|label, delta| {
label.map_element(|label| label.alpha(delta))
},
)
.into_any_element()
} else {
Label::new("Thought Process")
.size(LabelSize::Small)
.buffer_font(cx)
.into_any_element()
}
}),
)
.child(
h_flex()
.gap_1()
.child(
div().visible_on_hover("disclosure-header").child(
Disclosure::new("thinking-disclosure", is_open)
.opened_icon(IconName::ChevronUp)
.closed_icon(IconName::ChevronDown)
.on_click(cx.listener({
move |this, _event, _window, _cx| {
let is_open = this
.expanded_thinking_segments
.entry((message_id, ix))
.or_insert(false);
*is_open = !*is_open;
}
})),
),
)
.child({
let (icon_name, color, animated) = if pending {
(IconName::ArrowCircle, Color::Accent, true)
} else {
(IconName::Check, Color::Success, false)
};
let icon =
Icon::new(icon_name).color(color).size(IconSize::Small);
if animated {
icon.with_animation(
"arrow-circle",
Animation::new(Duration::from_secs(2)).repeat(),
|icon, delta| {
icon.transform(Transformation::rotate(percentage(
delta,
)))
},
)
.into_any_element()
} else {
icon.into_any_element()
}
}),
),
)
.when(pending && !is_open, |this| {
let gradient_overlay = div()
.rounded_b_lg()
.h_20()
.absolute()
.w_full()
.bottom_0()
.left_0()
.bg(linear_gradient(
180.,
linear_color_stop(editor_bg, 1.),
linear_color_stop(editor_bg.opacity(0.2), 0.),
));
this.child(
div()
.relative()
.bg(editor_bg)
.rounded_b_lg()
.child(
div()
.id(("thinking-content", ix))
.p_2()
.h_20()
.track_scroll(scroll_handle)
.text_ui_sm(cx)
.child(
MarkdownElement::new(
markdown.clone(),
default_markdown_style(window, cx),
)
.on_url_click({
let workspace = self.workspace.clone();
move |text, window, cx| {
open_markdown_link(
text,
workspace.clone(),
window,
cx,
);
}
}),
)
.overflow_hidden(),
}),
)
.child(gradient_overlay),
)
})
.when(is_open, |this| {
this.child(
div()
.id(("thinking-content", ix))
.h_full()
.p_2()
.rounded_b_lg()
.bg(editor_bg)
.text_ui_sm(cx)
.child(
MarkdownElement::new(
markdown.clone(),
default_markdown_style(window, cx),
)
.on_url_click({
let workspace = self.workspace.clone();
move |text, window, cx| {
open_markdown_link(text, workspace.clone(), window, cx);
}
}),
h_flex()
.gap_1()
.child(
div().visible_on_hover("disclosure-header").child(
Disclosure::new("thinking-disclosure", is_open)
.opened_icon(IconName::ChevronUp)
.closed_icon(IconName::ChevronDown)
.on_click(cx.listener({
move |this, _event, _window, _cx| {
let is_open = this
.expanded_thinking_segments
.entry((message_id, ix))
.or_insert(false);
*is_open = !*is_open;
}
})),
),
)
.child({
Icon::new(IconName::ArrowCircle)
.color(Color::Accent)
.size(IconSize::Small)
.with_animation(
"arrow-circle",
Animation::new(Duration::from_secs(2)).repeat(),
|icon, delta| {
icon.transform(Transformation::rotate(
percentage(delta),
))
},
)
}),
),
)
}),
)
.when(!is_open, |this| {
let gradient_overlay = div()
.rounded_b_lg()
.h_full()
.absolute()
.w_full()
.bottom_0()
.left_0()
.bg(linear_gradient(
180.,
linear_color_stop(editor_bg, 1.),
linear_color_stop(editor_bg.opacity(0.2), 0.),
));
this.child(
div()
.relative()
.bg(editor_bg)
.rounded_b_lg()
.mt_2()
.pl_4()
.child(
div()
.id(("thinking-content", ix))
.max_h_20()
.track_scroll(scroll_handle)
.text_ui_sm(cx)
.overflow_hidden()
.child(
MarkdownElement::new(
markdown.clone(),
default_markdown_style(window, cx),
)
.on_url_click({
let workspace = self.workspace.clone();
move |text, window, cx| {
open_markdown_link(
text,
workspace.clone(),
window,
cx,
);
}
}),
),
)
.child(gradient_overlay),
)
})
.when(is_open, |this| {
this.child(
div()
.id(("thinking-content", ix))
.h_full()
.bg(editor_bg)
.text_ui_sm(cx)
.child(
MarkdownElement::new(
markdown.clone(),
default_markdown_style(window, cx),
)
.on_url_click({
let workspace = self.workspace.clone();
move |text, window, cx| {
open_markdown_link(text, workspace.clone(), window, cx);
}
}),
),
)
})
} else {
this.v_flex()
.mt_neg_2()
.child(
h_flex()
.group("disclosure-header")
.pr_1()
.justify_between()
.opacity(0.8)
.hover(|style| style.opacity(1.))
.child(
h_flex()
.gap_1p5()
.child(
Icon::new(IconName::LightBulb)
.size(IconSize::XSmall)
.color(Color::Muted),
)
.child(Label::new("Thought Process").size(LabelSize::Small)),
)
.child(
div().visible_on_hover("disclosure-header").child(
Disclosure::new("thinking-disclosure", is_open)
.opened_icon(IconName::ChevronUp)
.closed_icon(IconName::ChevronDown)
.on_click(cx.listener({
move |this, _event, _window, _cx| {
let is_open = this
.expanded_thinking_segments
.entry((message_id, ix))
.or_insert(false);
*is_open = !*is_open;
}
})),
),
),
)
.child(
div()
.id(("thinking-content", ix))
.relative()
.mt_1p5()
.ml_1p5()
.pl_2p5()
.border_l_1()
.border_color(cx.theme().colors().border_variant)
.text_ui_sm(cx)
.when(is_open, |this| {
this.child(
MarkdownElement::new(
markdown.clone(),
default_markdown_style(window, cx),
)
.on_url_click({
let workspace = self.workspace.clone();
move |text, window, cx| {
open_markdown_link(text, workspace.clone(), window, cx);
}
}),
)
}),
)
}
})
}
fn render_tool_use(
@@ -2030,6 +2094,7 @@ impl ActiveThread {
.upgrade()
.map(|workspace| workspace.read(cx).app_state().fs.clone());
let needs_confirmation = matches!(&tool_use.status, ToolUseStatus::NeedsConfirmation);
let edit_tools = tool_use.needs_confirmation;
let status_icons = div().child(match &tool_use.status {
ToolUseStatus::Pending | ToolUseStatus::NeedsConfirmation => {
@@ -2206,10 +2271,10 @@ impl ActiveThread {
};
div().map(|element| {
if !tool_use.needs_confirmation {
if !edit_tools {
element.child(
v_flex()
.my_1p5()
.my_2()
.child(
h_flex()
.group("disclosure-header")
@@ -2516,7 +2581,7 @@ impl ActiveThread {
let label_text = match rules_files.as_slice() {
&[] => return div().into_any(),
&[rules_file] => {
format!("Using {:?} file", rules_file.rel_path)
format!("Using {:?} file", rules_file.path_in_worktree)
}
rules_files => {
format!("Using {} rules files", rules_files.len())

View File

@@ -227,14 +227,14 @@ impl AssistantPanel {
) -> Self {
let thread = thread_store.update(cx, |this, cx| this.create_thread(cx));
let fs = workspace.app_state().fs.clone();
let project = workspace.project().clone();
let project = workspace.project();
let language_registry = project.read(cx).languages().clone();
let workspace = workspace.weak_handle();
let weak_self = cx.entity().downgrade();
let message_editor_context_store = cx.new(|_cx| {
crate::context_store::ContextStore::new(
workspace.clone(),
project.downgrade(),
Some(thread_store.downgrade()),
)
});
@@ -344,7 +344,7 @@ impl AssistantPanel {
let message_editor_context_store = cx.new(|_cx| {
crate::context_store::ContextStore::new(
self.workspace.clone(),
self.project.downgrade(),
Some(self.thread_store.downgrade()),
)
});
@@ -521,7 +521,7 @@ impl AssistantPanel {
this.set_active_view(thread_view, window, cx);
let message_editor_context_store = cx.new(|_cx| {
crate::context_store::ContextStore::new(
this.workspace.clone(),
this.project.downgrade(),
Some(this.thread_store.downgrade()),
)
});
@@ -855,9 +855,11 @@ impl AssistantPanel {
if is_empty {
Label::new(Thread::DEFAULT_SUMMARY.clone())
.truncate()
.ml_2()
.into_any_element()
} else if summary.is_none() {
Label::new(LOADING_SUMMARY_PLACEHOLDER)
.ml_2()
.truncate()
.into_any_element()
} else {
@@ -873,7 +875,7 @@ impl AssistantPanel {
})
.unwrap_or_else(|| SharedString::from(LOADING_SUMMARY_PLACEHOLDER));
Label::new(title).truncate().into_any_element()
Label::new(title).ml_2().truncate().into_any_element()
}
ActiveView::History => Label::new("History").truncate().into_any_element(),
ActiveView::Configuration => Label::new("Settings").truncate().into_any_element(),
@@ -910,23 +912,25 @@ impl AssistantPanel {
let go_back_button = match &self.active_view {
ActiveView::History | ActiveView::Configuration => Some(
IconButton::new("go-back", IconName::ArrowLeft)
.icon_size(IconSize::Small)
.on_click(cx.listener(|this, _, window, cx| {
this.go_back(&workspace::GoBack, window, cx);
}))
.tooltip({
let focus_handle = focus_handle.clone();
move |window, cx| {
Tooltip::for_action_in(
"Go Back",
&workspace::GoBack,
&focus_handle,
window,
cx,
)
}
}),
div().pl_1().child(
IconButton::new("go-back", IconName::ArrowLeft)
.icon_size(IconSize::Small)
.on_click(cx.listener(|this, _, window, cx| {
this.go_back(&workspace::GoBack, window, cx);
}))
.tooltip({
let focus_handle = focus_handle.clone();
move |window, cx| {
Tooltip::for_action_in(
"Go Back",
&workspace::GoBack,
&focus_handle,
window,
cx,
)
}
}),
),
),
_ => None,
};
@@ -944,8 +948,7 @@ impl AssistantPanel {
.child(
h_flex()
.w_full()
.pl_2()
.gap_2()
.gap_1()
.children(go_back_button)
.child(self.render_title_view(window, cx)),
)
@@ -1080,7 +1083,7 @@ impl AssistantPanel {
cx,
|menu, _window, _cx| {
menu.action(
"New Prompt Editor",
"New Text Thread",
NewPromptEditor.boxed_clone(),
)
.when(!is_empty, |menu| {

View File

@@ -106,7 +106,7 @@ impl ContextPickerCompletionProvider {
.iter()
.map(|mode| {
Completion {
old_range: source_range.clone(),
replace_range: source_range.clone(),
new_text: format!("@{} ", mode.mention_prefix()),
label: CodeLabel::plain(mode.label().to_string(), None),
icon_path: Some(mode.icon().path().into()),
@@ -160,7 +160,7 @@ impl ContextPickerCompletionProvider {
let new_text = MentionLink::for_thread(&thread_entry);
let new_text_len = new_text.len();
Completion {
old_range: source_range.clone(),
replace_range: source_range.clone(),
new_text,
label: CodeLabel::plain(thread_entry.summary.to_string(), None),
documentation: None,
@@ -205,7 +205,7 @@ impl ContextPickerCompletionProvider {
let new_text = MentionLink::for_fetch(&url_to_fetch);
let new_text_len = new_text.len();
Completion {
old_range: source_range.clone(),
replace_range: source_range.clone(),
new_text,
label: CodeLabel::plain(url_to_fetch.to_string(), None),
documentation: None,
@@ -287,7 +287,7 @@ impl ContextPickerCompletionProvider {
let new_text = MentionLink::for_file(&file_name, &full_path);
let new_text_len = new_text.len();
Completion {
old_range: source_range.clone(),
replace_range: source_range.clone(),
new_text,
label,
documentation: None,
@@ -350,7 +350,7 @@ impl ContextPickerCompletionProvider {
let new_text = MentionLink::for_symbol(&symbol.name, &full_path);
let new_text_len = new_text.len();
Some(Completion {
old_range: source_range.clone(),
replace_range: source_range.clone(),
new_text,
label,
documentation: None,
@@ -867,7 +867,7 @@ mod tests {
.expect("Opened test file wasn't an editor")
});
let context_store = cx.new(|_| ContextStore::new(workspace.downgrade(), None));
let context_store = cx.new(|_| ContextStore::new(project.downgrade(), None));
let editor_entity = editor.downgrade();
editor.update_in(&mut cx, |editor, window, cx| {

View File

@@ -8,11 +8,10 @@ use futures::future::join_all;
use futures::{self, Future, FutureExt, future};
use gpui::{App, AppContext as _, Context, Entity, SharedString, Task, WeakEntity};
use language::{Buffer, File};
use project::{ProjectItem, ProjectPath, Worktree};
use project::{Project, ProjectItem, ProjectPath, Worktree};
use rope::Rope;
use text::{Anchor, BufferId, OffsetRangeExt};
use util::{ResultExt as _, maybe};
use workspace::Workspace;
use crate::ThreadStore;
use crate::context::{
@@ -23,7 +22,7 @@ use crate::context_strip::SuggestedContext;
use crate::thread::{Thread, ThreadId};
pub struct ContextStore {
workspace: WeakEntity<Workspace>,
project: WeakEntity<Project>,
context: Vec<AssistantContext>,
thread_store: Option<WeakEntity<ThreadStore>>,
// TODO: If an EntityId is used for all context types (like BufferId), can remove ContextId.
@@ -40,11 +39,11 @@ pub struct ContextStore {
impl ContextStore {
pub fn new(
workspace: WeakEntity<Workspace>,
project: WeakEntity<Project>,
thread_store: Option<WeakEntity<ThreadStore>>,
) -> Self {
Self {
workspace,
project,
thread_store,
context: Vec::new(),
next_context_id: ContextId(0),
@@ -81,12 +80,7 @@ impl ContextStore {
remove_if_exists: bool,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
let workspace = self.workspace.clone();
let Some(project) = workspace
.upgrade()
.map(|workspace| workspace.read(cx).project().clone())
else {
let Some(project) = self.project.upgrade() else {
return Task::ready(Err(anyhow!("failed to read project")));
};
@@ -161,11 +155,7 @@ impl ContextStore {
remove_if_exists: bool,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
let workspace = self.workspace.clone();
let Some(project) = workspace
.upgrade()
.map(|workspace| workspace.read(cx).project().clone())
else {
let Some(project) = self.project.upgrade() else {
return Task::ready(Err(anyhow!("failed to read project")));
};

View File

@@ -262,13 +262,7 @@ impl InlineAssistant {
}
InlineAssistTarget::Terminal(active_terminal) => {
TerminalInlineAssistant::update_global(cx, |assistant, cx| {
assistant.assist(
&active_terminal,
cx.entity().downgrade(),
thread_store,
window,
cx,
)
assistant.assist(&active_terminal, cx.entity(), thread_store, window, cx)
})
}
};
@@ -322,6 +316,13 @@ impl InlineAssistant {
window: &mut Window,
cx: &mut App,
) {
let Some(project) = workspace
.upgrade()
.map(|workspace| workspace.read(cx).project().downgrade())
else {
return;
};
let (snapshot, initial_selections) = editor.update(cx, |editor, cx| {
(
editor.snapshot(window, cx),
@@ -425,7 +426,7 @@ impl InlineAssistant {
for range in codegen_ranges {
let assist_id = self.next_assist_id.post_inc();
let context_store =
cx.new(|_cx| ContextStore::new(workspace.clone(), thread_store.clone()));
cx.new(|_cx| ContextStore::new(project.clone(), thread_store.clone()));
let codegen = cx.new(|cx| {
BufferCodegen::new(
editor.read(cx).buffer().clone(),
@@ -519,7 +520,7 @@ impl InlineAssistant {
initial_prompt: String,
initial_transaction_id: Option<TransactionId>,
focus: bool,
workspace: WeakEntity<Workspace>,
workspace: Entity<Workspace>,
thread_store: Option<WeakEntity<ThreadStore>>,
window: &mut Window,
cx: &mut App,
@@ -537,8 +538,8 @@ impl InlineAssistant {
range.end = range.end.bias_right(&snapshot);
}
let context_store =
cx.new(|_cx| ContextStore::new(workspace.clone(), thread_store.clone()));
let project = workspace.read(cx).project().downgrade();
let context_store = cx.new(|_cx| ContextStore::new(project, thread_store.clone()));
let codegen = cx.new(|cx| {
BufferCodegen::new(
@@ -562,7 +563,7 @@ impl InlineAssistant {
codegen.clone(),
self.fs.clone(),
context_store,
workspace.clone(),
workspace.downgrade(),
thread_store,
window,
cx,
@@ -589,7 +590,7 @@ impl InlineAssistant {
end_block_id,
range,
codegen.clone(),
workspace.clone(),
workspace.downgrade(),
window,
cx,
),
@@ -1779,6 +1780,7 @@ impl CodeActionProvider for AssistantCodeActionProvider {
let workspace = self.workspace.clone();
let thread_store = self.thread_store.clone();
window.spawn(cx, async move |cx| {
let workspace = workspace.upgrade().context("workspace was released")?;
let editor = editor.upgrade().context("editor was released")?;
let range = editor
.update(cx, |editor, cx| {

View File

@@ -66,7 +66,7 @@ impl TerminalInlineAssistant {
pub fn assist(
&mut self,
terminal_view: &Entity<TerminalView>,
workspace: WeakEntity<Workspace>,
workspace: Entity<Workspace>,
thread_store: Option<WeakEntity<ThreadStore>>,
window: &mut Window,
cx: &mut App,
@@ -75,8 +75,8 @@ impl TerminalInlineAssistant {
let assist_id = self.next_assist_id.post_inc();
let prompt_buffer =
cx.new(|cx| MultiBuffer::singleton(cx.new(|cx| Buffer::local(String::new(), cx)), cx));
let context_store =
cx.new(|_cx| ContextStore::new(workspace.clone(), thread_store.clone()));
let project = workspace.read(cx).project().downgrade();
let context_store = cx.new(|_cx| ContextStore::new(project, thread_store.clone()));
let codegen = cx.new(|_| TerminalCodegen::new(terminal, self.telemetry.clone()));
let prompt_editor = cx.new(|cx| {
@@ -87,7 +87,7 @@ impl TerminalInlineAssistant {
codegen,
self.fs.clone(),
context_store.clone(),
workspace.clone(),
workspace.downgrade(),
thread_store.clone(),
window,
cx,
@@ -106,7 +106,7 @@ impl TerminalInlineAssistant {
assist_id,
terminal_view,
prompt_editor,
workspace.clone(),
workspace.downgrade(),
context_store,
window,
cx,

View File

@@ -3,6 +3,7 @@ use std::io::Write;
use std::ops::Range;
use std::sync::Arc;
use agent_rules::load_worktree_rules_file;
use anyhow::{Context as _, Result, anyhow};
use assistant_settings::AssistantSettings;
use assistant_tool::{ActionLog, Tool, ToolWorkingSet};
@@ -21,13 +22,11 @@ use language_model::{
};
use project::git_store::{GitStore, GitStoreCheckpoint, RepositoryState};
use project::{Project, Worktree};
use prompt_store::{
AssistantSystemPromptContext, PromptBuilder, RulesFile, WorktreeInfoForSystemPrompt,
};
use prompt_store::{AssistantSystemPromptContext, PromptBuilder, WorktreeInfoForSystemPrompt};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use settings::Settings;
use util::{ResultExt as _, TryFutureExt as _, maybe, post_inc};
use util::{ResultExt as _, TryFutureExt as _, post_inc};
use uuid::Uuid;
use crate::context::{AssistantContext, ContextId, format_context_as_string};
@@ -854,67 +853,36 @@ impl Thread {
let root_name = worktree.root_name().into();
let abs_path = worktree.abs_path();
// Note that Cline supports `.clinerules` being a directory, but that is not currently
// supported. This doesn't seem to occur often in GitHub repositories.
const RULES_FILE_NAMES: [&'static str; 6] = [
".rules",
".cursorrules",
".windsurfrules",
".clinerules",
".github/copilot-instructions.md",
"CLAUDE.md",
];
let selected_rules_file = RULES_FILE_NAMES
.into_iter()
.filter_map(|name| {
worktree
.entry_for_path(name)
.filter(|entry| entry.is_file())
.map(|entry| (entry.path.clone(), worktree.absolutize(&entry.path)))
})
.next();
if let Some((rel_rules_path, abs_rules_path)) = selected_rules_file {
cx.spawn(async move |_| {
let rules_file_result = maybe!(async move {
let abs_rules_path = abs_rules_path?;
let text = fs.load(&abs_rules_path).await.with_context(|| {
format!("Failed to load assistant rules file {:?}", abs_rules_path)
})?;
anyhow::Ok(RulesFile {
rel_path: rel_rules_path,
abs_path: abs_rules_path.into(),
text: text.trim().to_string(),
})
})
.await;
let (rules_file, rules_file_error) = match rules_file_result {
Ok(rules_file) => (Some(rules_file), None),
Err(err) => (
None,
Some(ThreadError::Message {
header: "Error loading rules file".into(),
message: format!("{err}").into(),
}),
),
};
let worktree_info = WorktreeInfoForSystemPrompt {
root_name,
abs_path,
rules_file,
};
(worktree_info, rules_file_error)
})
} else {
Task::ready((
let rules_task = load_worktree_rules_file(fs, worktree, cx);
let Some(rules_task) = rules_task else {
return Task::ready((
WorktreeInfoForSystemPrompt {
root_name,
abs_path,
rules_file: None,
},
None,
))
}
));
};
cx.spawn(async move |_| {
let (rules_file, rules_file_error) = match rules_task.await {
Ok(rules_file) => (Some(rules_file), None),
Err(err) => (
None,
Some(ThreadError::Message {
header: "Error loading rules file".into(),
message: format!("{err}").into(),
}),
),
};
let worktree_info = WorktreeInfoForSystemPrompt {
root_name,
abs_path,
rules_file,
};
(worktree_info, rules_file_error)
})
}
pub fn send_to_model(
@@ -1029,6 +997,21 @@ impl Thread {
self.attached_tracked_files_state(&mut request.messages, cx);
// Add reminder to the last user message about
// easily-forgotten aspects of the system prompt.
if let Some(last_user_message) = request
.messages
.iter_mut()
.rev()
.find(|msg| msg.role == Role::User)
{
last_user_message
.content
.push(MessageContent::Text(system_prompt_reminder(
&self.prompt_builder,
)));
}
request
}
@@ -1414,7 +1397,7 @@ impl Thread {
for tool_use in pending_tool_uses.iter() {
if let Some(tool) = self.tools.tool(&tool_use.name, cx) {
if tool.needs_confirmation()
if tool.needs_confirmation(&tool_use.input, cx)
&& !AssistantSettings::get_global(cx).always_allow_tool_actions
{
self.tool_use.confirm_tool_use(
@@ -1842,6 +1825,12 @@ impl Thread {
}
}
pub fn system_prompt_reminder(prompt_builder: &prompt_store::PromptBuilder) -> String {
prompt_builder
.generate_assistant_system_prompt_reminder()
.unwrap_or_default()
}
#[derive(Debug, Clone)]
pub enum ThreadError {
PaymentRequired,
@@ -1911,7 +1900,7 @@ mod tests {
)
.await;
let (_workspace, _thread_store, thread, context_store) =
let (_workspace, _thread_store, thread, context_store, prompt_builder) =
setup_test_environment(cx, project.clone()).await;
add_file_to_context(&project, &context_store, "test/code.rs", cx)
@@ -1965,8 +1954,14 @@ fn main() {{
});
assert_eq!(request.messages.len(), 1);
let expected_full_message = format!("{}Please explain this code", expected_context);
assert_eq!(request.messages[0].string_contents(), expected_full_message);
let actual_message = request.messages[0].string_contents();
let expected_content = format!(
"{}Please explain this code{}",
expected_context,
system_prompt_reminder(&prompt_builder)
);
assert_eq!(actual_message, expected_content);
}
#[gpui::test]
@@ -1983,7 +1978,7 @@ fn main() {{
)
.await;
let (_, _thread_store, thread, context_store) =
let (_, _thread_store, thread, context_store, _prompt_builder) =
setup_test_environment(cx, project.clone()).await;
// Open files individually
@@ -2083,7 +2078,7 @@ fn main() {{
)
.await;
let (_, _thread_store, thread, _context_store) =
let (_, _thread_store, thread, _context_store, prompt_builder) =
setup_test_environment(cx, project.clone()).await;
// Insert user message without any context (empty context vector)
@@ -2109,11 +2104,14 @@ fn main() {{
});
assert_eq!(request.messages.len(), 1);
assert_eq!(
request.messages[0].string_contents(),
"What is the best way to learn Rust?"
let actual_message = request.messages[0].string_contents();
let expected_content = format!(
"What is the best way to learn Rust?{}",
system_prompt_reminder(&prompt_builder)
);
assert_eq!(actual_message, expected_content);
// Add second message, also without context
let message2_id = thread.update(cx, |thread, cx| {
thread.insert_user_message("Are there any good books?", vec![], None, cx)
@@ -2129,14 +2127,17 @@ fn main() {{
});
assert_eq!(request.messages.len(), 2);
assert_eq!(
request.messages[0].string_contents(),
"What is the best way to learn Rust?"
);
assert_eq!(
request.messages[1].string_contents(),
"Are there any good books?"
// First message should still be the first user message (the reminder only goes on the last one)
assert_eq!(request.messages[0].role, Role::User);
// Second message should be the user message with prompt reminder
let actual_message = request.messages[1].string_contents();
let expected_content = format!(
"Are there any good books?{}",
system_prompt_reminder(&prompt_builder)
);
assert_eq!(actual_message, expected_content);
}
#[gpui::test]
@@ -2149,7 +2150,7 @@ fn main() {{
)
.await;
let (_workspace, _thread_store, thread, context_store) =
let (_workspace, _thread_store, thread, context_store, prompt_builder) =
setup_test_environment(cx, project.clone()).await;
// Open buffer and add it to context
@@ -2209,11 +2210,14 @@ fn main() {{
// The last message should be the stale buffer notification
assert_eq!(last_message.role, Role::User);
// Check the exact content of the message
let expected_content = "These files changed since last read:\n- code.rs\n";
let actual_message = last_message.string_contents();
let expected_content = format!(
"These files changed since last read:\n- code.rs\n{}",
system_prompt_reminder(&prompt_builder)
);
assert_eq!(
last_message.string_contents(),
expected_content,
actual_message, expected_content,
"Last message should be exactly the stale buffer notification"
);
}
@@ -2251,24 +2255,27 @@ fn main() {{
Entity<ThreadStore>,
Entity<Thread>,
Entity<ContextStore>,
Arc<PromptBuilder>,
) {
let (workspace, cx) =
cx.add_window_view(|window, cx| Workspace::test_new(project.clone(), window, cx));
let prompt_builder = Arc::new(PromptBuilder::new(None).unwrap());
let thread_store = cx.update(|_, cx| {
ThreadStore::new(
project.clone(),
Arc::default(),
Arc::new(PromptBuilder::new(None).unwrap()),
cx,
)
.unwrap()
ThreadStore::new(project.clone(), Arc::default(), prompt_builder.clone(), cx).unwrap()
});
let thread = thread_store.update(cx, |store, cx| store.create_thread(cx));
let context_store = cx.new(|_cx| ContextStore::new(workspace.downgrade(), None));
let context_store = cx.new(|_cx| ContextStore::new(project.downgrade(), None));
(workspace, thread_store, thread, context_store)
(
workspace,
thread_store,
thread,
context_store,
prompt_builder,
)
}
async fn add_file_to_context(

View File

@@ -431,17 +431,6 @@ impl RenderOnce for PastThread {
.end_slot(
h_flex()
.gap_1p5()
.child(
Label::new("Thread")
.color(Color::Muted)
.size(LabelSize::XSmall),
)
.child(
div()
.size(px(3.))
.rounded_full()
.bg(cx.theme().colors().text_disabled),
)
.child(
Label::new(thread_timestamp)
.color(Color::Muted)
@@ -452,12 +441,7 @@ impl RenderOnce for PastThread {
.shape(IconButtonShape::Square)
.icon_size(IconSize::XSmall)
.tooltip(move |window, cx| {
Tooltip::for_action(
"Delete Thread",
&RemoveSelectedThread,
window,
cx,
)
Tooltip::for_action("Delete", &RemoveSelectedThread, window, cx)
})
.on_click({
let assistant_panel = self.assistant_panel.clone();
@@ -538,17 +522,6 @@ impl RenderOnce for PastContext {
.end_slot(
h_flex()
.gap_1p5()
.child(
Label::new("Prompt Editor")
.color(Color::Muted)
.size(LabelSize::XSmall),
)
.child(
div()
.size(px(3.))
.rounded_full()
.bg(cx.theme().colors().text_disabled),
)
.child(
Label::new(context_timestamp)
.color(Color::Muted)
@@ -559,12 +532,7 @@ impl RenderOnce for PastContext {
.shape(IconButtonShape::Square)
.icon_size(IconSize::XSmall)
.tooltip(move |window, cx| {
Tooltip::for_action(
"Delete Prompt Editor",
&RemoveSelectedThread,
window,
cx,
)
Tooltip::for_action("Delete", &RemoveSelectedThread, window, cx)
})
.on_click({
let assistant_panel = self.assistant_panel.clone();

View File

@@ -201,7 +201,7 @@ impl ToolUseState {
let (icon, needs_confirmation) = if let Some(tool) = self.tools.tool(&tool_use.name, cx)
{
(tool.icon(), tool.needs_confirmation())
(tool.icon(), tool.needs_confirmation(&tool_use.input, cx))
} else {
(IconName::Cog, false)
};
@@ -334,6 +334,8 @@ impl ToolUseState {
output: Result<String>,
cx: &App,
) -> Option<PendingToolUse> {
telemetry::event!("Agent Tool Finished", tool_name, success = output.is_ok());
match output {
Ok(tool_result) => {
let model_registry = LanguageModelRegistry::read_global(cx);

View File

@@ -0,0 +1,25 @@
[package]
name = "agent_rules"
version = "0.1.0"
edition.workspace = true
publish.workspace = true
license = "GPL-3.0-or-later"
[lints]
workspace = true
[lib]
path = "src/agent_rules.rs"
doctest = false
[dependencies]
anyhow.workspace = true
fs.workspace = true
gpui.workspace = true
prompt_store.workspace = true
util.workspace = true
worktree.workspace = true
workspace-hack = { version = "0.1", path = "../../tooling/workspace-hack" }
[dev-dependencies]
indoc.workspace = true

View File

@@ -0,0 +1 @@
../../LICENSE-GPL

View File

@@ -0,0 +1,51 @@
use std::sync::Arc;
use anyhow::{Context as _, Result};
use fs::Fs;
use gpui::{App, AppContext, Task};
use prompt_store::SystemPromptRulesFile;
use util::maybe;
use worktree::Worktree;
const RULES_FILE_NAMES: [&'static str; 6] = [
".rules",
".cursorrules",
".windsurfrules",
".clinerules",
".github/copilot-instructions.md",
"CLAUDE.md",
];
pub fn load_worktree_rules_file(
fs: Arc<dyn Fs>,
worktree: &Worktree,
cx: &App,
) -> Option<Task<Result<SystemPromptRulesFile>>> {
let selected_rules_file = RULES_FILE_NAMES
.into_iter()
.filter_map(|name| {
worktree
.entry_for_path(name)
.filter(|entry| entry.is_file())
.map(|entry| (entry.path.clone(), worktree.absolutize(&entry.path)))
})
.next();
// Note that Cline supports `.clinerules` being a directory, but that is not currently
// supported. This doesn't seem to occur often in GitHub repositories.
selected_rules_file.map(|(path_in_worktree, abs_path)| {
let fs = fs.clone();
cx.background_spawn(maybe!(async move {
let abs_path = abs_path?;
let text = fs
.load(&abs_path)
.await
.with_context(|| format!("Failed to load assistant rules file {:?}", abs_path))?;
anyhow::Ok(SystemPromptRulesFile {
path_in_worktree,
abs_path: abs_path.into(),
text: text.trim().to_string(),
})
}))
})
}
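A minimal consumer sketch, not part of the diff; it assumes `fs: Arc<dyn Fs>`, `worktree: &Worktree`, and `cx: &App` are in scope (the same gpui `Task`/`background_spawn` APIs this file already uses), and treats a missing or unreadable rules file as `None` instead of surfacing the error the way thread.rs does above:
let rules_text: Task<Option<String>> = match load_worktree_rules_file(fs.clone(), worktree, cx) {
    Some(task) => cx.background_spawn(async move { task.await.ok().map(|rules| rules.text) }),
    None => Task::ready(None),
};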

View File

@@ -2793,7 +2793,7 @@ fn render_thought_process_fold_icon_button(
let button = match status {
ThoughtProcessStatus::Pending => button
.child(
Icon::new(IconName::Brain)
Icon::new(IconName::LightBulb)
.size(IconSize::Small)
.color(Color::Muted),
)
@@ -2808,7 +2808,7 @@ fn render_thought_process_fold_icon_button(
),
ThoughtProcessStatus::Completed => button
.style(ButtonStyle::Filled)
.child(Icon::new(IconName::Brain).size(IconSize::Small))
.child(Icon::new(IconName::LightBulb).size(IconSize::Small))
.child(Label::new("Thought Process").single_line()),
};

View File

@@ -120,7 +120,7 @@ impl SlashCommandCompletionProvider {
) as Arc<_>
});
Some(project::Completion {
old_range: name_range.clone(),
replace_range: name_range.clone(),
documentation: Some(CompletionDocumentation::SingleLine(
command.description().into(),
)),
@@ -219,7 +219,7 @@ impl SlashCommandCompletionProvider {
}
project::Completion {
old_range: if new_argument.replace_previous_arguments {
replace_range: if new_argument.replace_previous_arguments {
argument_range.clone()
} else {
last_argument_range.clone()

View File

@@ -48,7 +48,7 @@ pub trait Tool: 'static + Send + Sync {
/// Returns true iff the tool needs the user's confirmation
/// before having permission to run.
fn needs_confirmation(&self) -> bool;
fn needs_confirmation(&self, input: &serde_json::Value, cx: &App) -> bool;
/// Returns the JSON schema that describes the tool's input.
fn input_schema(&self, _: LanguageModelToolSchemaFormat) -> serde_json::Value {
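A hedged illustration of what the new signature enables for implementers, not taken from the diff; it assumes `gpui::App` and `serde_json` are in scope as in the tool impls above, and the input field name is invented:
// Hypothetical standalone example: confirmation can now depend on the tool's own input.
fn needs_confirmation(input: &serde_json::Value, _cx: &App) -> bool {
    input
        .get("destructive") // made-up field, for illustration only
        .and_then(|value| value.as_bool())
        .unwrap_or(true) // when in doubt, ask the user
}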

View File

@@ -23,16 +23,18 @@ http_client.workspace = true
itertools.workspace = true
language.workspace = true
language_model.workspace = true
lsp.workspace = true
log.workspace = true
lsp-types.workspace = true
open = { workspace = true }
project.workspace = true
regex.workspace = true
schemars.workspace = true
serde.workspace = true
serde_json.workspace = true
strum.workspace = true
ui.workspace = true
util.workspace = true
worktree.workspace = true
open = { workspace = true }
workspace-hack.workspace = true
[dev-dependencies]

View File

@@ -1,6 +1,6 @@
mod bash_tool;
mod batch_tool;
mod code_symbol_iter;
mod code_action_tool;
mod code_symbols_tool;
mod copy_path_tool;
mod create_directory_tool;
@@ -14,8 +14,10 @@ mod move_path_tool;
mod now_tool;
mod open_tool;
mod path_search_tool;
mod quickfix_tool;
mod read_file_tool;
mod regex_search_tool;
mod rename_tool;
mod replace;
mod schema;
mod symbol_info_tool;
@@ -31,6 +33,7 @@ use move_path_tool::MovePathTool;
use crate::bash_tool::BashTool;
use crate::batch_tool::BatchTool;
use crate::code_action_tool::CodeActionTool;
use crate::code_symbols_tool::CodeSymbolsTool;
use crate::create_directory_tool::CreateDirectoryTool;
use crate::create_file_tool::CreateFileTool;
@@ -42,8 +45,10 @@ use crate::list_directory_tool::ListDirectoryTool;
use crate::now_tool::NowTool;
use crate::open_tool::OpenTool;
use crate::path_search_tool::PathSearchTool;
use crate::quickfix_tool::QuickfixTool;
use crate::read_file_tool::ReadFileTool;
use crate::regex_search_tool::RegexSearchTool;
use crate::rename_tool::RenameTool;
use crate::symbol_info_tool::SymbolInfoTool;
use crate::thinking_tool::ThinkingTool;
@@ -59,6 +64,7 @@ pub fn init(http_client: Arc<HttpClientWithUrl>, cx: &mut App) {
registry.register_tool(DeletePathTool);
registry.register_tool(FindReplaceFileTool);
registry.register_tool(SymbolInfoTool);
registry.register_tool(CodeActionTool);
registry.register_tool(MovePathTool);
registry.register_tool(DiagnosticsTool);
registry.register_tool(ListDirectoryTool);
@@ -66,8 +72,10 @@ pub fn init(http_client: Arc<HttpClientWithUrl>, cx: &mut App) {
registry.register_tool(OpenTool);
registry.register_tool(CodeSymbolsTool);
registry.register_tool(PathSearchTool);
registry.register_tool(QuickfixTool);
registry.register_tool(ReadFileTool);
registry.register_tool(RegexSearchTool);
registry.register_tool(RenameTool);
registry.register_tool(ThinkingTool);
registry.register_tool(FetchTool::new(http_client));
}

View File

@@ -3,7 +3,7 @@ use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use futures::io::BufReader;
use futures::{AsyncBufReadExt, AsyncReadExt};
use gpui::{App, Entity, Task};
use gpui::{App, AppContext, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::Project;
use schemars::JsonSchema;
@@ -16,7 +16,7 @@ use util::markdown::MarkdownString;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct BashToolInput {
/// The bash command to execute as a one-liner.
/// The bash one-liner command to execute.
command: String,
/// Working directory for the command. This must be one of the root directories of the project.
cd: String,
@@ -29,7 +29,7 @@ impl Tool for BashTool {
"bash".to_string()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
true
}
@@ -123,94 +123,184 @@ impl Tool for BashTool {
worktree.read(cx).abs_path()
};
cx.spawn(async move |_| {
// Add 2>&1 to merge stderr into stdout for proper interleaving.
let command = format!("({}) 2>&1", input.command);
let mut cmd = new_smol_command("bash")
.arg("-c")
.arg(&command)
.current_dir(working_dir)
.stdout(std::process::Stdio::piped())
.spawn()
.context("Failed to execute bash command")?;
// Capture stdout with a limit
let stdout = cmd.stdout.take().unwrap();
let mut reader = BufReader::new(stdout);
const MESSAGE_1: &str = "Command output too long. The first ";
const MESSAGE_2: &str = " bytes:\n\n";
const ERR_MESSAGE_1: &str = "Command failed with exit code ";
const ERR_MESSAGE_2: &str = "\n\n";
const STDOUT_LIMIT: usize = 8192;
const LIMIT: usize = STDOUT_LIMIT
- (MESSAGE_1.len()
+ (STDOUT_LIMIT.ilog10() as usize + 1) // byte count
+ MESSAGE_2.len()
+ ERR_MESSAGE_1.len()
+ 3 // status code
+ ERR_MESSAGE_2.len());
// Read one more byte to determine whether the output was truncated
let mut buffer = vec![0; LIMIT + 1];
let bytes_read = reader.read(&mut buffer).await?;
// Repeatedly fill the output reader's buffer without copying it.
loop {
let skipped_bytes = reader.fill_buf().await?;
if skipped_bytes.is_empty() {
break;
}
let skipped_bytes_len = skipped_bytes.len();
reader.consume_unpin(skipped_bytes_len);
}
let output_bytes = &buffer[..bytes_read];
// Let the process continue running
let status = cmd.status().await.context("Failed to get command status")?;
let output_string = if bytes_read > LIMIT {
// Valid to find `\n` in UTF-8 since 0-127 ASCII characters are not used in
// multi-byte characters.
let last_line_ix = output_bytes.iter().rposition(|b| *b == b'\n');
let output_string = String::from_utf8_lossy(
&output_bytes[..last_line_ix.unwrap_or(output_bytes.len())],
);
format!(
"{}{}{}{}",
MESSAGE_1,
output_string.len(),
MESSAGE_2,
output_string
)
} else {
String::from_utf8_lossy(&output_bytes).into()
};
let output_with_status = if status.success() {
if output_string.is_empty() {
"Command executed successfully.".to_string()
} else {
output_string.to_string()
}
} else {
format!(
"{}{}{}{}",
ERR_MESSAGE_1,
status.code().unwrap_or(-1),
ERR_MESSAGE_2,
output_string,
)
};
debug_assert!(output_with_status.len() <= STDOUT_LIMIT);
Ok(output_with_status)
})
cx.background_spawn(run_command_limited(working_dir, input.command))
}
}
const LIMIT: usize = 16 * 1024;
async fn run_command_limited(working_dir: Arc<Path>, command: String) -> Result<String> {
// Add 2>&1 to merge stderr into stdout for proper interleaving.
let command = format!("({}) 2>&1", command);
let mut cmd = new_smol_command("bash")
.arg("-c")
.arg(&command)
.current_dir(working_dir)
.stdout(std::process::Stdio::piped())
.spawn()
.context("Failed to execute bash command")?;
// Capture stdout with a limit
let stdout = cmd.stdout.take().unwrap();
let mut reader = BufReader::new(stdout);
// Read one more byte to determine whether the output was truncated
let mut buffer = vec![0; LIMIT + 1];
let mut bytes_read = 0;
// Read until we reach the limit
loop {
let read = reader.read(&mut buffer[bytes_read..]).await?;
if read == 0 {
break;
}
bytes_read += read;
if bytes_read > LIMIT {
bytes_read = LIMIT + 1;
break;
}
}
// Drain the rest of the output by repeatedly filling the reader's buffer and discarding it, without extra copies.
loop {
let skipped_bytes = reader.fill_buf().await?;
if skipped_bytes.is_empty() {
break;
}
let skipped_bytes_len = skipped_bytes.len();
reader.consume_unpin(skipped_bytes_len);
}
let output_bytes = &buffer[..bytes_read.min(LIMIT)];
let status = cmd.status().await.context("Failed to get command status")?;
let output_string = if bytes_read > LIMIT {
// Safe to search for `\n` in UTF-8, since ASCII bytes (0-127) never appear inside
// multi-byte sequences.
let last_line_ix = output_bytes.iter().rposition(|b| *b == b'\n');
let until_last_line = &output_bytes[..last_line_ix.unwrap_or(output_bytes.len())];
let output_string = String::from_utf8_lossy(until_last_line);
format!(
"Command output too long. The first {} bytes:\n\n{}",
output_string.len(),
output_block(&output_string),
)
} else {
output_block(&String::from_utf8_lossy(&output_bytes))
};
let output_with_status = if status.success() {
if output_string.is_empty() {
"Command executed successfully.".to_string()
} else {
output_string.to_string()
}
} else {
format!(
"Command failed with exit code {}\n\n{}",
status.code().unwrap_or(-1),
output_string,
)
};
Ok(output_with_status)
}
fn output_block(output: &str) -> String {
format!(
"```\n{}{}```",
output,
if output.ends_with('\n') { "" } else { "\n" }
)
}
#[cfg(test)]
#[cfg(not(windows))]
mod tests {
use gpui::TestAppContext;
use super::*;
#[gpui::test]
async fn test_run_command_simple(cx: &mut TestAppContext) {
cx.executor().allow_parking();
let result =
run_command_limited(Path::new(".").into(), "echo 'Hello, World!'".to_string()).await;
assert!(result.is_ok());
assert_eq!(result.unwrap(), "```\nHello, World!\n```");
}
#[gpui::test]
async fn test_interleaved_stdout_stderr(cx: &mut TestAppContext) {
cx.executor().allow_parking();
let command =
"echo 'stdout 1' && echo 'stderr 1' >&2 && echo 'stdout 2' && echo 'stderr 2' >&2";
let result = run_command_limited(Path::new(".").into(), command.to_string()).await;
assert!(result.is_ok());
assert_eq!(
result.unwrap(),
"```\nstdout 1\nstderr 1\nstdout 2\nstderr 2\n```"
);
}
#[gpui::test]
async fn test_multiple_output_reads(cx: &mut TestAppContext) {
cx.executor().allow_parking();
// Command with multiple outputs that might require multiple reads
let result = run_command_limited(
Path::new(".").into(),
"echo '1'; sleep 0.01; echo '2'; sleep 0.01; echo '3'".to_string(),
)
.await;
assert!(result.is_ok());
assert_eq!(result.unwrap(), "```\n1\n2\n3\n```");
}
#[gpui::test]
async fn test_output_truncation_single_line(cx: &mut TestAppContext) {
cx.executor().allow_parking();
let cmd = format!("echo '{}';", "X".repeat(LIMIT * 2));
let result = run_command_limited(Path::new(".").into(), cmd).await;
assert!(result.is_ok());
let output = result.unwrap();
let content_start = output.find("```\n").map(|i| i + 4).unwrap_or(0);
let content_end = output.rfind("\n```").unwrap_or(output.len());
let content_length = content_end - content_start;
// Output should be exactly the limit
assert_eq!(content_length, LIMIT);
}
#[gpui::test]
async fn test_output_truncation_multiline(cx: &mut TestAppContext) {
cx.executor().allow_parking();
let cmd = format!("echo '{}'; ", "X".repeat(120)).repeat(160);
let result = run_command_limited(Path::new(".").into(), cmd).await;
assert!(result.is_ok());
let output = result.unwrap();
assert!(output.starts_with("Command output too long. The first 16334 bytes:\n\n"));
let content_start = output.find("```\n").map(|i| i + 4).unwrap_or(0);
let content_end = output.rfind("\n```").unwrap_or(output.len());
let content_length = content_end - content_start;
assert!(content_length <= LIMIT);
}
}

View File

@@ -151,8 +151,17 @@ impl Tool for BatchTool {
"batch_tool".into()
}
fn needs_confirmation(&self) -> bool {
true
fn needs_confirmation(&self, input: &serde_json::Value, cx: &App) -> bool {
serde_json::from_value::<BatchToolInput>(input.clone())
.map(|input| {
let working_set = ToolWorkingSet::default();
input.invocations.iter().any(|invocation| {
working_set
.tool(&invocation.name, cx)
.map_or(false, |tool| tool.needs_confirmation(&invocation.input, cx))
})
})
.unwrap_or(false)
}
fn description(&self) -> String {
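With this change, a batch prompts for confirmation only if one of its nested invocations would. A hedged example input; the `invocations`/`name`/`input` shape follows the fields referenced in the code above, while the nested tool inputs are illustrative only:
use serde_json::json;
// `bash` always needs confirmation (see the bash tool change above), so this whole batch should prompt.
let input = json!({
    "invocations": [
        { "name": "read_file", "input": { "path": "lorem/src/main.rs" } },
        { "name": "bash", "input": { "command": "cargo test", "cd": "lorem" } }
    ]
});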

View File

@@ -0,0 +1,389 @@
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
use language::{self, Anchor, Buffer, ToPointUtf16};
use language_model::LanguageModelRequestMessage;
use project::{self, LspAction, Project};
use regex::Regex;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::{ops::Range, sync::Arc};
use ui::IconName;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct CodeActionToolInput {
/// The relative path to the file containing the text range.
///
/// WARNING: you MUST start this path with one of the project's root directories.
pub path: String,
/// The specific code action to execute.
///
/// If this field is provided, the tool will execute the specified action.
/// If omitted, the tool will list all available code actions for the text range.
///
/// Here are some actions that are commonly supported (but may not be for this particular
/// text range; you can omit this field to list all the actions, if you want to know
/// what your options are, or you can just try an action and if it fails I'll tell you
/// what the available actions were instead):
/// - "quickfix.all" - applies all available quick fixes in the range
/// - "source.organizeImports" - sorts and cleans up import statements
/// - "source.fixAll" - applies all available auto fixes
/// - "refactor.extract" - extracts selected code into a new function or variable
/// - "refactor.inline" - inlines a variable by replacing references with its value
/// - "refactor.rewrite" - general code rewriting operations
/// - "source.addMissingImports" - adds imports for references that lack them
/// - "source.removeUnusedImports" - removes imports that aren't being used
/// - "source.implementInterface" - generates methods required by an interface/trait
/// - "source.generateAccessors" - creates getter/setter methods
/// - "source.convertToAsyncFunction" - converts callback-style code to async/await
///
/// Also, there is a special case: if you specify exactly "textDocument/rename" as the action,
/// then this will rename the symbol to whatever string you specified for the `arguments` field.
pub action: Option<String>,
/// Optional arguments to pass to the code action.
///
/// For rename operations (when action="textDocument/rename"), this should contain the new name.
/// For other code actions, these arguments may be passed to the language server.
pub arguments: Option<serde_json::Value>,
/// The text that comes immediately before the text range in the file.
pub context_before_range: String,
/// The text range. This text must appear in the file right between `context_before_range`
/// and `context_after_range`.
///
/// The file must contain exactly one occurrence of `context_before_range` followed by
/// `text_range` followed by `context_after_range`. If the file contains zero occurrences,
/// or if it contains more than one occurrence, the tool will fail, so it is absolutely
/// critical that you verify ahead of time that the string is unique. You can search
/// the file's contents to verify this ahead of time.
///
/// To make the string more likely to be unique, include a minimum of 1 line of context
/// before the text range, as well as a minimum of 1 line of context after the text range.
/// If these lines of context are not enough to obtain a string that appears only once
/// in the file, then double the number of context lines until the string becomes unique.
/// (Start with 1 line before and 1 line after though, because too much context is
/// needlessly costly.)
///
/// Do not alter the context lines of code in any way, and make sure to preserve all
/// whitespace and indentation for all lines of code. The combined string must be exactly
/// as it appears in the file, or else this tool call will fail.
pub text_range: String,
/// The text that comes immediately after the text range in the file.
pub context_after_range: String,
}
pub struct CodeActionTool;
impl Tool for CodeActionTool {
fn name(&self) -> String {
"code_actions".into()
}
fn needs_confirmation(&self, _input: &serde_json::Value, _cx: &App) -> bool {
false
}
fn description(&self) -> String {
include_str!("./code_action_tool/description.md").into()
}
fn icon(&self) -> IconName {
IconName::Wand
}
fn input_schema(
&self,
_format: language_model::LanguageModelToolSchemaFormat,
) -> serde_json::Value {
let schema = schemars::schema_for!(CodeActionToolInput);
serde_json::to_value(&schema).unwrap()
}
fn ui_text(&self, input: &serde_json::Value) -> String {
match serde_json::from_value::<CodeActionToolInput>(input.clone()) {
Ok(input) => {
if let Some(action) = &input.action {
if action == "textDocument/rename" {
let new_name = match &input.arguments {
Some(serde_json::Value::String(new_name)) => new_name.clone(),
Some(value) => {
if let Ok(new_name) =
serde_json::from_value::<String>(value.clone())
{
new_name
} else {
"invalid name".to_string()
}
}
None => "missing name".to_string(),
};
format!("Rename '{}' to '{}'", input.text_range, new_name)
} else {
format!(
"Execute code action '{}' for '{}'",
action, input.text_range
)
}
} else {
format!("List available code actions for '{}'", input.text_range)
}
}
Err(_) => "Perform code action".to_string(),
}
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let input = match serde_json::from_value::<CodeActionToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
};
cx.spawn(async move |cx| {
let buffer = {
let project_path = project.read_with(cx, |project, cx| {
project
.find_project_path(&input.path, cx)
.context("Path not found in project")
})??;
project.update(cx, |project, cx| project.open_buffer(project_path, cx))?.await?
};
action_log.update(cx, |action_log, cx| {
action_log.buffer_read(buffer.clone(), cx);
})?;
let range = {
let Some(range) = buffer.read_with(cx, |buffer, _cx| {
find_text_range(&buffer, &input.context_before_range, &input.text_range, &input.context_after_range)
})? else {
return Err(anyhow!(
"Failed to locate the text specified by context_before_range, text_range, and context_after_range. Make sure context_before_range and context_after_range each match exactly once in the file."
));
};
range
};
if let Some(action_type) = &input.action {
// Special-case the `rename` operation
let response = if action_type == "textDocument/rename" {
let Some(new_name) = input.arguments.and_then(|args| serde_json::from_value::<String>(args).ok()) else {
return Err(anyhow!("For rename operations, 'arguments' must be a string containing the new name"));
};
let position = buffer.read_with(cx, |buffer, _| {
range.start.to_point_utf16(&buffer.snapshot())
})?;
project
.update(cx, |project, cx| {
project.perform_rename(buffer.clone(), position, new_name.clone(), cx)
})?
.await?;
format!("Renamed '{}' to '{}'", input.text_range, new_name)
} else {
// Get code actions for the range
let actions = project
.update(cx, |project, cx| {
project.code_actions(&buffer, range.clone(), None, cx)
})?
.await?;
if actions.is_empty() {
return Err(anyhow!("No code actions available for this range"));
}
// Find all matching actions
let regex = match Regex::new(action_type) {
Ok(regex) => regex,
Err(err) => return Err(anyhow!("Invalid regex pattern: {}", err)),
};
let mut matching_actions = actions
.into_iter()
.filter(|action| { regex.is_match(action.lsp_action.title()) });
let Some(action) = matching_actions.next() else {
return Err(anyhow!("No code actions match the pattern: {}", action_type));
};
// There should have been exactly one matching action.
if let Some(second) = matching_actions.next() {
let mut all_matches = vec![action, second];
all_matches.extend(matching_actions);
return Err(anyhow!(
"Pattern '{}' matches multiple code actions: {}",
action_type,
all_matches.into_iter().map(|action| action.lsp_action.title().to_string()).collect::<Vec<_>>().join(", ")
));
}
let title = action.lsp_action.title().to_string();
project
.update(cx, |project, cx| {
project.apply_code_action(buffer.clone(), action, true, cx)
})?
.await?;
format!("Completed code action: {}", title)
};
project
.update(cx, |project, cx| project.save_buffer(buffer.clone(), cx))?
.await?;
action_log.update(cx, |log, cx| {
log.buffer_edited(buffer.clone(), cx)
})?;
Ok(response)
} else {
// No action specified, so list the available ones.
let (position_start, position_end) = buffer.read_with(cx, |buffer, _| {
let snapshot = buffer.snapshot();
(
range.start.to_point_utf16(&snapshot),
range.end.to_point_utf16(&snapshot)
)
})?;
// Convert position to display coordinates (1-based)
let position_start_display = language::Point {
row: position_start.row + 1,
column: position_start.column + 1,
};
let position_end_display = language::Point {
row: position_end.row + 1,
column: position_end.column + 1,
};
// Get code actions for the range
let actions = project
.update(cx, |project, cx| {
project.code_actions(&buffer, range.clone(), None, cx)
})?
.await?;
let mut response = format!(
"Available code actions for text range '{}' at position {}:{} to {}:{} (UTF-16 coordinates):\n\n",
input.text_range,
position_start_display.row, position_start_display.column,
position_end_display.row, position_end_display.column
);
if actions.is_empty() {
response.push_str("No code actions available for this range.");
} else {
for (i, action) in actions.iter().enumerate() {
let title = match &action.lsp_action {
LspAction::Action(code_action) => code_action.title.as_str(),
LspAction::Command(command) => command.title.as_str(),
LspAction::CodeLens(code_lens) => {
if let Some(cmd) = &code_lens.command {
cmd.title.as_str()
} else {
"Unknown code lens"
}
},
};
let kind = match &action.lsp_action {
LspAction::Action(code_action) => {
if let Some(kind) = &code_action.kind {
kind.as_str()
} else {
"unknown"
}
},
LspAction::Command(_) => "command",
LspAction::CodeLens(_) => "code_lens",
};
response.push_str(&format!("{}. {title} ({kind})\n", i + 1));
}
}
Ok(response)
}
})
}
}
/// Finds the range of `text_range` in the buffer, provided it appears between `context_before_range`
/// and `context_after_range` and that combined string occurs exactly once in the buffer.
///
/// If an exact match fails, it tries adding a newline to the end of context_before_range and
/// to the beginning of context_after_range to accommodate line-based context matching.
fn find_text_range(
buffer: &Buffer,
context_before_range: &str,
text_range: &str,
context_after_range: &str,
) -> Option<Range<Anchor>> {
let snapshot = buffer.snapshot();
let text = snapshot.text();
// First try with exact match
let search_string = format!("{context_before_range}{text_range}{context_after_range}");
let mut positions = text.match_indices(&search_string);
let position_result = positions.next();
if let Some(position) = position_result {
// Check if the matched string is unique
if positions.next().is_none() {
let range_start = position.0 + context_before_range.len();
let range_end = range_start + text_range.len();
let range_start_anchor = snapshot.anchor_before(snapshot.offset_to_point(range_start));
let range_end_anchor = snapshot.anchor_before(snapshot.offset_to_point(range_end));
return Some(range_start_anchor..range_end_anchor);
}
}
// If exact match fails or is not unique, try with line-based context
// Add a newline to the end of before context and beginning of after context
let line_based_before = if context_before_range.ends_with('\n') {
context_before_range.to_string()
} else {
format!("{context_before_range}\n")
};
let line_based_after = if context_after_range.starts_with('\n') {
context_after_range.to_string()
} else {
format!("\n{context_after_range}")
};
let line_search_string = format!("{line_based_before}{text_range}{line_based_after}");
let mut line_positions = text.match_indices(&line_search_string);
let line_position = line_positions.next()?;
// The line-based search string must also appear exactly once
if line_positions.next().is_some() {
return None;
}
let line_range_start = line_position.0 + line_based_before.len();
let line_range_end = line_range_start + text_range.len();
let line_range_start_anchor =
snapshot.anchor_before(snapshot.offset_to_point(line_range_start));
let line_range_end_anchor = snapshot.anchor_before(snapshot.offset_to_point(line_range_end));
Some(line_range_start_anchor..line_range_end_anchor)
}
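A small worked case, with invented file contents, of how the three fields combine during matching:
// Suppose the file contains exactly:
// fn main() {
//     let x = 1;
//     println!("{x}");
// }
let context_before_range = "fn main() {\n";
let text_range = "    let x = 1;";
let context_after_range = "\n    println!(\"{x}\");";
// The concatenation "fn main() {\n    let x = 1;\n    println!(\"{x}\");" occurs exactly
// once, so find_text_range returns anchors spanning only the `    let x = 1;` line.
let _ = (context_before_range, text_range, context_after_range);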

View File

@@ -0,0 +1,19 @@
A tool for applying code actions to specific sections of your code. It uses language servers to provide refactoring capabilities similar to what you'd find in an IDE.
This tool can:
- List all available code actions for a selected text range
- Execute a specific code action on that range
- Rename symbols across your codebase. This tool is the preferred way to rename things, and you should always prefer to rename code symbols using this tool rather than using textual find/replace when both are available.
Use this tool when you want to:
- Discover what code actions are available for a piece of code
- Apply automatic fixes and code transformations
- Rename variables, functions, or other symbols consistently throughout your project
- Clean up imports, implement interfaces, or perform other language-specific operations
- If unsure what actions are available, call the tool without specifying an action to get a list
- For common operations, you can directly specify actions like "quickfix.all" or "source.organizeImports"
- For renaming, use the special "textDocument/rename" action and provide the new name in the arguments field
- Be specific with your text range and context to ensure the tool identifies the correct code location
The tool will automatically save any changes it makes to your files.
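A hedged example invocation for the rename path described above; the field names match `CodeActionToolInput`, and the values are invented:
use serde_json::json;
let input = json!({
    "path": "lorem/src/lib.rs",
    "action": "textDocument/rename",
    "arguments": "new_symbol_name",
    "context_before_range": "fn caller() {\n    ",
    "text_range": "old_symbol_name",
    "context_after_range": "();\n}"
});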

View File

@@ -1,88 +0,0 @@
use project::DocumentSymbol;
use regex::Regex;
#[derive(Debug, Clone)]
pub struct Entry {
pub name: String,
pub kind: lsp::SymbolKind,
pub depth: u32,
pub start_line: usize,
pub end_line: usize,
}
/// An iterator that filters document symbols based on a regex pattern.
/// This iterator recursively traverses the document symbol tree, incrementing depth for child symbols.
#[derive(Debug, Clone)]
pub struct CodeSymbolIterator<'a> {
symbols: &'a [DocumentSymbol],
regex: Option<Regex>,
// Stack of (symbol, depth) pairs to process
pending_symbols: Vec<(&'a DocumentSymbol, u32)>,
current_index: usize,
current_depth: u32,
}
impl<'a> CodeSymbolIterator<'a> {
pub fn new(symbols: &'a [DocumentSymbol], regex: Option<Regex>) -> Self {
Self {
symbols,
regex,
pending_symbols: Vec::new(),
current_index: 0,
current_depth: 0,
}
}
}
impl Iterator for CodeSymbolIterator<'_> {
type Item = Entry;
fn next(&mut self) -> Option<Self::Item> {
if let Some((symbol, depth)) = self.pending_symbols.pop() {
for child in symbol.children.iter().rev() {
self.pending_symbols.push((child, depth + 1));
}
return Some(Entry {
name: symbol.name.clone(),
kind: symbol.kind,
depth,
start_line: symbol.range.start.0.row as usize,
end_line: symbol.range.end.0.row as usize,
});
}
while self.current_index < self.symbols.len() {
let regex = self.regex.as_ref();
let symbol = &self.symbols[self.current_index];
self.current_index += 1;
if regex.is_none_or(|regex| regex.is_match(&symbol.name)) {
// Push in reverse order to maintain traversal order
for child in symbol.children.iter().rev() {
self.pending_symbols.push((child, self.current_depth + 1));
}
return Some(Entry {
name: symbol.name.clone(),
kind: symbol.kind,
depth: self.current_depth,
start_line: symbol.range.start.0.row as usize,
end_line: symbol.range.end.0.row as usize,
});
} else {
// Even if parent doesn't match, push children to check them later
for child in symbol.children.iter().rev() {
self.pending_symbols.push((child, self.current_depth + 1));
}
// Check if any pending children match our criteria
if let Some(result) = self.next() {
return Some(result);
}
}
}
None
}
}

View File

@@ -1,24 +1,21 @@
use std::fmt::{self, Write};
use std::fmt::Write;
use std::path::PathBuf;
use std::sync::Arc;
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use collections::IndexMap;
use gpui::{App, AsyncApp, Entity, Task};
use language::{CodeLabel, Language, LanguageRegistry};
use language::{OutlineItem, ParseStatus, Point};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use lsp::SymbolKind;
use project::{DocumentSymbol, Project, Symbol};
use project::{Project, Symbol};
use regex::{Regex, RegexBuilder};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use ui::IconName;
use util::markdown::MarkdownString;
use crate::code_symbol_iter::{CodeSymbolIterator, Entry};
use crate::schema::json_schema_for;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct CodeSymbolsInput {
/// The relative path of the source code file to read and get the symbols for.
@@ -82,7 +79,7 @@ impl Tool for CodeSymbolsTool {
"code_symbols".into()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
false
}
@@ -180,24 +177,28 @@ pub async fn file_outline(
action_log.buffer_read(buffer.clone(), cx);
})?;
let symbols = project
.update(cx, |project, cx| project.document_symbols(&buffer, cx))?
.await?;
// Wait until the buffer has been fully parsed, so that we can read its outline.
let mut parse_status = buffer.read_with(cx, |buffer, _| buffer.parse_status())?;
while parse_status
.recv()
.await
.map_or(false, |status| status != ParseStatus::Idle)
{}
if symbols.is_empty() {
return Err(
if buffer.read_with(cx, |buffer, _| buffer.snapshot().is_empty())? {
anyhow!("This file is empty.")
} else {
anyhow!("No outline information available for this file.")
},
);
}
let snapshot = buffer.read_with(cx, |buffer, _| buffer.snapshot())?;
let Some(outline) = snapshot.outline(None) else {
return Err(anyhow!("No outline information available for this file."));
};
let language = buffer.read_with(cx, |buffer, _| buffer.language().cloned())?;
let language_registry = project.read_with(cx, |project, _| project.languages().clone())?;
render_outline(&symbols, language, language_registry, regex, offset).await
render_outline(
outline
.items
.into_iter()
.map(|item| item.to_point(&snapshot)),
regex,
offset,
)
.await
}
async fn project_symbols(
@@ -292,61 +293,27 @@ async fn project_symbols(
}
async fn render_outline(
symbols: &[DocumentSymbol],
language: Option<Arc<Language>>,
registry: Arc<LanguageRegistry>,
items: impl IntoIterator<Item = OutlineItem<Point>>,
regex: Option<Regex>,
offset: u32,
) -> Result<String> {
const RESULTS_PER_PAGE_USIZE: usize = RESULTS_PER_PAGE as usize;
let entries = CodeSymbolIterator::new(symbols, regex.clone())
.skip(offset as usize)
// Take 1 more than RESULTS_PER_PAGE so we can tell if there are more results.
.take(RESULTS_PER_PAGE_USIZE.saturating_add(1))
.collect::<Vec<Entry>>();
let has_more = entries.len() > RESULTS_PER_PAGE_USIZE;
// Get language-specific labels, if available
let labels = match &language {
Some(lang) => {
let entries_for_labels: Vec<(String, SymbolKind)> = entries
.iter()
.take(RESULTS_PER_PAGE_USIZE)
.map(|entry| (entry.name.clone(), entry.kind))
.collect();
let mut items = items.into_iter().skip(offset as usize);
let lang_name = lang.name();
if let Some(lsp_adapter) = registry.lsp_adapters(&lang_name).first().cloned() {
lsp_adapter
.labels_for_symbols(&entries_for_labels, lang)
.await
.ok()
} else {
None
}
}
None => None,
};
let entries = items
.by_ref()
.filter(|item| {
regex
.as_ref()
.is_none_or(|regex| regex.is_match(&item.text))
})
.take(RESULTS_PER_PAGE_USIZE)
.collect::<Vec<_>>();
let has_more = items.next().is_some();
let mut output = String::new();
let entries_rendered = match &labels {
Some(label_list) => render_entries(
&mut output,
entries
.into_iter()
.take(RESULTS_PER_PAGE_USIZE)
.zip(label_list.iter())
.map(|(entry, label)| (entry, label.as_ref())),
),
None => render_entries(
&mut output,
entries
.into_iter()
.take(RESULTS_PER_PAGE_USIZE)
.map(|entry| (entry, None)),
),
};
let entries_rendered = render_entries(&mut output, entries);
// Calculate pagination information
let page_start = offset + 1;
@@ -372,31 +339,19 @@ async fn render_outline(
Ok(output)
}
fn render_entries<'a>(
output: &mut String,
entries: impl IntoIterator<Item = (Entry, Option<&'a CodeLabel>)>,
) -> u32 {
fn render_entries(output: &mut String, items: impl IntoIterator<Item = OutlineItem<Point>>) -> u32 {
let mut entries_rendered = 0;
for (entry, label) in entries {
for item in items {
// Indent based on depth ("" for level 0, " " for level 1, etc.)
for _ in 0..entry.depth {
output.push_str(" ");
}
match label {
Some(label) => {
output.push_str(label.text());
}
None => {
write_symbol_kind(output, entry.kind).ok();
output.push_str(&entry.name);
}
for _ in 0..item.depth {
output.push(' ');
}
output.push_str(&item.text);
// Add position information - convert to 1-based line numbers for display
let start_line = entry.start_line + 1;
let end_line = entry.end_line + 1;
let start_line = item.range.start.row + 1;
let end_line = item.range.end.row + 1;
if start_line == end_line {
writeln!(output, " [L{}]", start_line).ok();
@@ -408,38 +363,3 @@ fn render_entries<'a>(
entries_rendered
}
// We may not have a language server adapter that provides language-specific
// ways to translate SymbolKind into a string. In that situation,
// fall back on some reasonable default strings to render.
fn write_symbol_kind(buf: &mut String, kind: SymbolKind) -> Result<(), fmt::Error> {
match kind {
SymbolKind::FILE => write!(buf, "file "),
SymbolKind::MODULE => write!(buf, "module "),
SymbolKind::NAMESPACE => write!(buf, "namespace "),
SymbolKind::PACKAGE => write!(buf, "package "),
SymbolKind::CLASS => write!(buf, "class "),
SymbolKind::METHOD => write!(buf, "method "),
SymbolKind::PROPERTY => write!(buf, "property "),
SymbolKind::FIELD => write!(buf, "field "),
SymbolKind::CONSTRUCTOR => write!(buf, "constructor "),
SymbolKind::ENUM => write!(buf, "enum "),
SymbolKind::INTERFACE => write!(buf, "interface "),
SymbolKind::FUNCTION => write!(buf, "function "),
SymbolKind::VARIABLE => write!(buf, "variable "),
SymbolKind::CONSTANT => write!(buf, "constant "),
SymbolKind::STRING => write!(buf, "string "),
SymbolKind::NUMBER => write!(buf, "number "),
SymbolKind::BOOLEAN => write!(buf, "boolean "),
SymbolKind::ARRAY => write!(buf, "array "),
SymbolKind::OBJECT => write!(buf, "object "),
SymbolKind::KEY => write!(buf, "key "),
SymbolKind::NULL => write!(buf, "null "),
SymbolKind::ENUM_MEMBER => write!(buf, "enum member "),
SymbolKind::STRUCT => write!(buf, "struct "),
SymbolKind::EVENT => write!(buf, "event "),
SymbolKind::OPERATOR => write!(buf, "operator "),
SymbolKind::TYPE_PARAMETER => write!(buf, "type parameter "),
_ => Ok(()),
}
}
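The replacement pagination in render_outline leans on a small iterator idiom: skip to the offset, pull one page through `by_ref`, then probe the same iterator once more to detect further results. A standalone sketch of that pattern with invented data:
fn page_of<T>(items: impl IntoIterator<Item = T>, offset: usize, per_page: usize) -> (Vec<T>, bool) {
    let mut iter = items.into_iter().skip(offset);
    let page: Vec<T> = iter.by_ref().take(per_page).collect();
    let has_more = iter.next().is_some();
    (page, has_more)
}
// page_of(1..=10, 0, 4) == (vec![1, 2, 3, 4], true)
// page_of(1..=10, 8, 4) == (vec![9, 10], false)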

View File

@@ -43,7 +43,7 @@ impl Tool for CopyPathTool {
"copy_path".into()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
true
}

View File

@@ -33,7 +33,7 @@ impl Tool for CreateDirectoryTool {
"create_directory".into()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
true
}

View File

@@ -40,7 +40,7 @@ impl Tool for CreateFileTool {
"create_file".into()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
false
}

View File

@@ -33,7 +33,7 @@ impl Tool for DeletePathTool {
"delete_path".into()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
true
}

View File

@@ -11,32 +11,58 @@ use std::{fmt::Write, path::Path, sync::Arc};
use ui::IconName;
use util::markdown::MarkdownString;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
#[derive(Debug, Serialize, Deserialize, Default, JsonSchema)]
pub struct DiagnosticsToolInput {
/// The path to get diagnostics for. If not provided, returns a project-wide summary.
/// The specific paths to get detailed diagnostics for (including individual line numbers).
///
/// This path should never be absolute, and the first component
/// of the path should always be a root directory in a project.
/// Regardless of whether any paths are specified here, a count of the total number of warnings
/// and errors in the project will be reported, so providing paths here gets you strictly
/// more information.
///
/// These paths should never be absolute, and the first component
/// of each path should always be a root directory in a project.
///
/// <example>
/// If the project has the following root directories:
///
/// - lorem
/// - ipsum
/// - amet
///
/// If you wanna access diagnostics for `dolor.txt` in `ipsum`, you should use the path `ipsum/dolor.txt`.
/// If you want detailed diagnostics with line numbers for `dolor.txt` in `ipsum` and `consectetur.txt` in `amet`, you should use:
///
/// "paths": ["ipsum/dolor.txt", "amet/consectetur.txt"]
/// </example>
#[serde(deserialize_with = "deserialize_path")]
pub path: Option<String>,
#[serde(default)]
pub paths: Vec<String>,
/// Which severity levels to show. Default is all.
/// To show only errors and warnings, you should use:
///
/// "severity": ["error", "warning"]
#[serde(default)]
pub severity: Vec<Severity>,
}
fn deserialize_path<'de, D>(deserializer: D) -> Result<Option<String>, D::Error>
#[derive(
Debug, Serialize, Deserialize, JsonSchema, PartialEq, Eq, Copy, Clone, strum::Display, Hash,
)]
#[serde(rename_all = "camelCase")]
pub enum Severity {
Error,
Warning,
Information,
Hint,
}
fn deserialize_path<'de, D>(deserializer: D) -> Result<Vec<String>, D::Error>
where
D: serde::Deserializer<'de>,
{
let opt = Option::<String>::deserialize(deserializer)?;
// The model passes an empty string sometimes
Ok(opt.filter(|s| !s.is_empty()))
let paths = Vec::<String>::deserialize(deserializer)?;
// The model passes an empty string for some paths
Ok(paths.into_iter().filter(|s| !s.is_empty()).collect())
}
pub struct DiagnosticsTool;
@@ -46,7 +72,7 @@ impl Tool for DiagnosticsTool {
"diagnostics".into()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
false
}
@@ -63,17 +89,21 @@ impl Tool for DiagnosticsTool {
}
fn ui_text(&self, input: &serde_json::Value) -> String {
if let Some(path) = serde_json::from_value::<DiagnosticsToolInput>(input.clone())
serde_json::from_value::<DiagnosticsToolInput>(input.clone())
.ok()
.and_then(|input| match input.path {
Some(path) if !path.is_empty() => Some(MarkdownString::inline_code(&path)),
_ => None,
.and_then(|input| {
input.paths.first().map(|first_path| {
if input.paths.len() > 1 {
format!("Check diagnostics for {} paths", input.paths.len())
} else {
format!(
"Check diagnostics for {}",
MarkdownString::inline_code(first_path)
)
}
})
})
{
format!("Check diagnostics for {path}")
} else {
"Check project diagnostics".to_string()
}
.unwrap_or_else(|| "Check project diagnostics".to_string())
}
fn run(
@@ -84,62 +114,20 @@ impl Tool for DiagnosticsTool {
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
match serde_json::from_value::<DiagnosticsToolInput>(input)
.ok()
.and_then(|input| input.path)
let input = serde_json::from_value::<DiagnosticsToolInput>(input).unwrap_or_default();
let severity_filter = input.severity;
let mut summary_output = String::new();
let mut has_diagnostics = false;
// Always report the global diagnostics summary.
{
Some(path) if !path.is_empty() => {
let Some(project_path) = project.read(cx).find_project_path(&path, cx) else {
return Task::ready(Err(anyhow!("Could not find path {path} in project",)));
};
let buffer =
project.update(cx, |project, cx| project.open_buffer(project_path, cx));
cx.spawn(async move |cx| {
let mut output = String::new();
let buffer = buffer.await?;
let snapshot = buffer.read_with(cx, |buffer, _cx| buffer.snapshot())?;
for (_, group) in snapshot.diagnostic_groups(None) {
let entry = &group.entries[group.primary_ix];
let range = entry.range.to_point(&snapshot);
let severity = match entry.diagnostic.severity {
DiagnosticSeverity::ERROR => "error",
DiagnosticSeverity::WARNING => "warning",
_ => continue,
};
writeln!(
output,
"{} at line {}: {}",
severity,
range.start.row + 1,
entry.diagnostic.message
)?;
}
if output.is_empty() {
Ok("File doesn't have errors or warnings!".to_string())
} else {
Ok(output)
}
})
}
_ => {
let project = project.read(cx);
let mut output = String::new();
let mut has_diagnostics = false;
for (project_path, _, summary) in project.diagnostic_summaries(true, cx) {
if summary.error_count > 0 || summary.warning_count > 0 {
let Some(worktree) = project.worktree_for_id(project_path.worktree_id, cx)
else {
continue;
};
let project = project.read(cx);
for (project_path, _, summary) in project.diagnostic_summaries(true, cx) {
if summary.error_count > 0 || summary.warning_count > 0 {
if let Some(worktree) = project.worktree_for_id(project_path.worktree_id, cx) {
has_diagnostics = true;
output.push_str(&format!(
summary_output.push_str(&format!(
"{}: {} error(s), {} warning(s)\n",
Path::new(worktree.read(cx).root_name())
.join(project_path.path)
@@ -149,17 +137,113 @@ impl Tool for DiagnosticsTool {
));
}
}
}
action_log.update(cx, |action_log, _cx| {
action_log.checked_project_diagnostics();
});
action_log.update(cx, |action_log, _cx| {
action_log.checked_project_diagnostics();
});
}
if input.paths.is_empty() {
// If no paths specified, just return the summary
if has_diagnostics {
Task::ready(Ok(summary_output))
} else {
Task::ready(Ok("No errors or warnings found.".to_string()))
}
} else {
let buffer_tasks = input
.paths
.into_iter()
.filter_map(|path| {
project
.read(cx)
.find_project_path(&path, cx)
.map(|project_path| {
(
project_path.clone(),
project.update(cx, |project, cx| {
project.open_buffer(project_path, cx)
}),
)
})
})
.collect::<Vec<_>>();
            cx.spawn(async move |cx| {
                let mut output = String::new();
                output.push_str("# Project Summary\n\n");
                if has_diagnostics {
                    output.push_str(&summary_output);
                    output.push_str("\n");
                } else {
                    output.push_str("No errors or warnings found in the project.\n");
                }
output.push('\n');
let mut header_printed = false;
for (project_path, buffer_task) in buffer_tasks {
let mut path_printed = false;
let buffer = buffer_task.await?;
let snapshot = buffer.read_with(cx, |buffer, _cx| buffer.snapshot())?;
for (_, group) in snapshot.diagnostic_groups(None) {
let entry = &group.entries[group.primary_ix];
let range = entry.range.to_point(&snapshot);
if let Ok(severity) = Severity::try_from(&entry.diagnostic.severity) {
if severity_filter.is_empty() || severity_filter.contains(&severity) {
if !header_printed {
output.push_str("# Per-Path Diagnostics\n\n");
header_printed = true;
}
if !path_printed {
writeln!(output, "## {}", project_path.path.display())?;
path_printed = true;
}
writeln!(
output,
"\n### {severity} at line {}\n{}",
range.start.row + 1,
entry.diagnostic.message
)?;
}
}
}
}
if !header_printed {
output.push_str("No specific diagnostics found for the requested paths.");
}
Ok(output)
})
}
}
}
impl TryFrom<&DiagnosticSeverity> for Severity {
type Error = ();
fn try_from(
value: &DiagnosticSeverity,
) -> Result<Self, <Severity as TryFrom<&DiagnosticSeverity>>::Error> {
if *value == DiagnosticSeverity::ERROR {
Ok(Self::Error)
} else if *value == DiagnosticSeverity::WARNING {
Ok(Self::Warning)
} else if *value == DiagnosticSeverity::INFORMATION {
Ok(Self::Information)
} else if *value == DiagnosticSeverity::HINT {
Ok(Self::Hint)
} else {
Err(())
}
}
}
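A minimal sketch of the inputs this tool now accepts, assuming DiagnosticsToolInput exposes `paths: Vec<String>` and `severity: Vec<Severity>` with both defaulting to empty (field names are taken from the usage in `run` above; the exact types and serde representation of `Severity` are assumptions):

    // Hypothetical inputs only.
    use serde_json::json;

    // Project-wide summary only: no paths, no severity filter.
    let summary_only = json!({});

    // Per-path diagnostics for two files, restricted to errors.
    let filtered = json!({
        "paths": ["lorem/src/main.rs", "ipsum/dolor.rs"],
        "severity": ["error"],
    });

The string form of each severity value depends on how `Severity` derives its serialization, so treat "error" here as illustrative.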

View File

@@ -116,7 +116,7 @@ impl Tool for FetchTool {
"fetch".to_string()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
true
}

View File

@@ -63,6 +63,16 @@ pub struct FindReplaceFileToolInput {
/// even one character in this string is different in any way from how it appears
/// in the file, then the tool call will fail.
///
/// If you get an error that the `find` string was not found, it means either
/// you made a mistake or the file has changed since you last looked at it.
/// Either way, retry this tool call until it succeeds, up to 3 times. On each
/// retry, take another look at the exact text of the file in question, to make
/// sure you are searching for exactly the right string. A `find` string that
/// isn't found is a bad experience for the user, so whatever the cause, be
/// extra careful to get it exactly right when retrying.
///
/// <example>
/// If a file contains this code:
///
@@ -129,7 +139,7 @@ impl Tool for FindReplaceFileTool {
"find_replace_file".into()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
false
}

View File

@@ -1,9 +1,13 @@
Find one unique part of a file in the project and replace that text with new text.
This tool is the preferred way to make edits to files *except* when making a rename. When making a rename specifically, the rename tool must always be used instead.
If you have multiple edits to make, including edits across multiple files, then make a plan to respond with a single message containing a batch of calls to this tool - one call for each find/replace operation.
You should only use this tool when you want to edit a subset of a file's contents, but not the entire file. You should not use this tool when you want to replace the entire contents of a file with completely different contents. You also should not use this tool when you want to move or rename a file. You absolutely must NEVER use this tool to create new files from scratch. If you ever consider using this tool to create a new file from scratch, for any reason, instead you must reconsider and choose a different approach.
DO NOT call this tool until the code to be edited appears in the conversation! You must use another tool to read the file's contents into the conversation, or ask the user to add it to context first.
Never call this tool with identical "find" and "replace" strings. Instead, stop and think about what you actually want to do.
REMEMBER: You can use this tool after you just used the `create_file` tool. It's better to edit the file you just created than to recreate a new file from scratch.
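A hypothetical call shaped after this guidance; the input's exact field names are not shown in this diff, so `path`, `find`, and `replace` below are illustrative only:

    // Illustrative only: edits one function body, leaving the rest of the file untouched.
    let edit = serde_json::json!({
        "path": "lorem/src/lib.rs",
        "find": "fn add(a: i32, b: i32) -> i32 {\n    a + b\n}",
        "replace": "fn add(a: i32, b: i32) -> i32 {\n    a.saturating_add(b)\n}"
    });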

View File

@@ -44,7 +44,7 @@ impl Tool for ListDirectoryTool {
"list_directory".into()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
false
}

View File

@@ -42,7 +42,7 @@ impl Tool for MovePathTool {
"move_path".into()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
true
}

View File

@@ -33,7 +33,7 @@ impl Tool for NowTool {
"now".into()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
false
}

View File

@@ -23,7 +23,7 @@ impl Tool for OpenTool {
"open".to_string()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
true
}

View File

@@ -41,7 +41,7 @@ impl Tool for PathSearchTool {
"path_search".into()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
false
}

View File

@@ -0,0 +1,361 @@
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use gpui::AppContext;
use gpui::{App, Entity, Task};
use language::{DiagnosticSeverity, OffsetRangeExt};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use lsp_types::CodeActionKind;
use project::LspAction;
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::{fmt::Write, path::Path, sync::Arc};
use ui::IconName;
use util::markdown::MarkdownString;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct QuickfixToolInput {
/// The path to get diagnostics for and to apply quickfixes to. If not provided, checks the entire project.
///
/// This path should never be absolute, and the first component
/// of the path should always be a root directory in a project.
///
/// <example>
/// If the project has the following root directories:
///
/// - lorem
/// - ipsum
///
/// If you want to apply quickfixes for `dolor.txt` in `ipsum`, you should use the path `ipsum/dolor.txt`.
/// </example>
pub path: Option<String>,
}
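A small sketch of the two ways this input can be supplied, following the path convention documented above (values are illustrative):

    // Check and fix a single file; the path starts with a project root directory.
    let one_file = serde_json::json!({ "path": "ipsum/dolor.txt" });

    // Omit `path` to run quickfixes across the whole project.
    let whole_project = serde_json::json!({});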
pub struct QuickfixTool;
impl Tool for QuickfixTool {
fn name(&self) -> String {
"quickfix".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
false
}
fn description(&self) -> String {
include_str!("./quickfix_tool/description.md").into()
}
fn icon(&self) -> IconName {
IconName::Check
}
fn input_schema(&self, format: LanguageModelToolSchemaFormat) -> serde_json::Value {
json_schema_for::<QuickfixToolInput>(format)
}
fn ui_text(&self, input: &serde_json::Value) -> String {
if let Some(path) = serde_json::from_value::<QuickfixToolInput>(input.clone())
.ok()
.and_then(|input| match input.path {
Some(path) if !path.is_empty() => Some(MarkdownString::inline_code(&path)),
_ => None,
})
{
format!("Apply quickfixes for {path}")
} else {
"Apply project-wide quickfixes".to_string()
}
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
match serde_json::from_value::<QuickfixToolInput>(input)
.ok()
.and_then(|input| input.path)
{
Some(path) if !path.is_empty() => {
let Some(project_path) = project.read(cx).find_project_path(&path, cx) else {
return Task::ready(Err(anyhow!("Could not find path {path} in project")));
};
let buffer =
project.update(cx, |project, cx| project.open_buffer(project_path, cx));
cx.spawn(async move |cx| {
let mut output = String::new();
let mut fixes_applied = 0;
let buffer = buffer.await?;
let snapshot = buffer.read_with(cx, |buffer, _cx| buffer.snapshot())?;
let mut unfixed = Vec::new();
// Collect diagnostics to apply quickfixes for
for entry in snapshot.diagnostic_groups(None).into_iter().flat_map(
|(_, group)| group.entries.into_iter()
) {
let range = entry.range.to_point(&snapshot);
let actions = project
.update(cx, |project, cx| {
project.code_actions(
&buffer,
range.start..range.end,
None,
cx
)
})?.await?;
let quickfixes = actions.into_iter().filter(|action| {
if let LspAction::Action(code_action) = &action.lsp_action {
code_action.kind == Some(CodeActionKind::QUICKFIX)
} else {
false
}
}).collect::<Vec<_>>();
match quickfixes.first() {
Some(default_quickfix) => {
// If there's a quickfix marked as preferred, use that.
// Otherwise fall back on the first available quickfix in the list.
let preferred_action = quickfixes.iter().find(|action| {
if let LspAction::Action(code_action) = &action.lsp_action {
code_action.is_preferred.unwrap_or(false)
} else {
false
}
}).unwrap_or(default_quickfix);
project
.update(cx, |project, cx| {
project.apply_code_action(buffer.clone(), preferred_action.clone(), true, cx)
})?
.await?;
log::info!("Applied quickfix: {}", preferred_action.lsp_action.title());
fixes_applied += 1;
}
None => {
unfixed.push(entry);
}
}
}
// Save the buffer after applying fixes
if fixes_applied > 0 {
project
.update(cx, |project, cx| project.save_buffer(buffer.clone(), cx))?
.await?;
action_log.update(cx, |log, cx| {
log.buffer_edited(buffer.clone(), cx)
})?;
}
// Generate summary
if fixes_applied == 0 && unfixed.is_empty() {
Ok("No issues found in the file!".to_string())
} else {
writeln!(output, "\nSummary: Applied {} quickfixes. Remaining issues: {} - use the diagnostics tool to see them.",
fixes_applied, unfixed.len())?;
Ok(output)
}
})
}
_ => {
todo!("Share code with the other branch.");
let mut output = String::new();
let mut files_with_diagnostics = Vec::new();
// Collect all files with diagnostics for processing
{
let project_ref = project.read(cx);
for (project_path, _, summary) in project_ref.diagnostic_summaries(true, cx) {
if summary.error_count > 0 || summary.warning_count > 0 {
if let Some(worktree) =
project_ref.worktree_for_id(project_path.worktree_id, cx)
{
let path_str = Path::new(worktree.read(cx).root_name())
.join(project_path.path)
.display()
.to_string();
files_with_diagnostics.push(path_str);
}
}
}
}
// Create a task to process all files with diagnostics
let project = project.clone();
let action_log = action_log.clone();
let files_to_process = files_with_diagnostics.clone();
cx.spawn(async move |cx| {
let mut total_fixes_applied = 0;
let mut total_errors_unfixed = 0;
let mut total_warnings_unfixed = 0;
// Process each file with diagnostics
for file_path in files_to_process {
writeln!(output, "Processing {}...", file_path)?;
let Ok(Some(project_path)) = project.read_with(cx, |project, cx| project.find_project_path(&file_path, cx)) else {
writeln!(output, " Could not resolve project path for {}", file_path)?;
continue;
};
let buffer_open_result = project.update(cx, |project, cx|
project.open_buffer(project_path, cx));
let Ok(buffer_handle) = buffer_open_result else {
writeln!(output, " Failed to open buffer for {}", file_path)?;
continue;
};
let buffer = match buffer_handle.await {
Ok(buffer) => buffer,
Err(err) => {
writeln!(output, " Failed to load buffer for {}: {}", file_path, err)?;
continue;
}
};
let mut file_fixes_applied = 0;
let mut file_errors_unfixed = 0;
let mut file_warnings_unfixed = 0;
let mut needs_save = false;
let snapshot = buffer.read_with(cx, |buffer, _cx| buffer.snapshot())?;
// Process diagnostics for this file
for (_, group) in snapshot.diagnostic_groups(None) {
let entry = &group.entries[group.primary_ix];
let range = entry.range.to_point(&snapshot);
let severity = entry.diagnostic.severity;
// Skip informational and hint diagnostics
if severity != DiagnosticSeverity::ERROR &&
severity != DiagnosticSeverity::WARNING {
continue;
}
// Get code actions (quickfixes) for this diagnostic
let actions = project
.update(cx, |project, cx| {
project.code_actions(
&buffer,
range.start..range.end,
None,
cx
)
})?;
let actions_result = actions.await;
match actions_result {
Ok(actions) => {
// Find quickfix actions
let quickfixes = actions.into_iter().filter(|action| {
if let LspAction::Action(code_action) = &action.lsp_action {
if let Some(kind) = &code_action.kind {
kind.as_str().starts_with("quickfix")
} else {
false
}
} else {
false
}
}).collect::<Vec<_>>();
if !quickfixes.is_empty() {
// Find the preferred quickfix (marked as isPreferred or the first one)
let preferred_action = quickfixes.iter().find(|action| {
if let LspAction::Action(code_action) = &action.lsp_action {
code_action.is_preferred.unwrap_or(false)
} else {
false
}
}).unwrap_or(&quickfixes[0]);
// Apply the quickfix
let title = preferred_action.lsp_action.title().to_string();
project
.update(cx, |project, cx| {
project.apply_code_action(buffer.clone(), preferred_action.clone(), true, cx)
})?
.await?;
writeln!(output, " Applied quickfix: {title}")?;
file_fixes_applied += 1;
needs_save = true;
} else {
// Track unfixed diagnostics
match severity {
DiagnosticSeverity::ERROR => file_errors_unfixed += 1,
DiagnosticSeverity::WARNING => file_warnings_unfixed += 1,
_ => {}
}
}
},
Err(_) => {
// Track unfixed diagnostics
match severity {
DiagnosticSeverity::ERROR => file_errors_unfixed += 1,
DiagnosticSeverity::WARNING => file_warnings_unfixed += 1,
_ => {}
}
}
}
}
// Save the buffer after applying fixes
if needs_save {
project
.update(cx, |project, cx| project.save_buffer(buffer.clone(), cx))?
.await?;
action_log.update(cx, |log, cx| {
log.buffer_edited(buffer.clone(), cx)
})?;
}
// Update totals
total_fixes_applied += file_fixes_applied;
total_errors_unfixed += file_errors_unfixed;
total_warnings_unfixed += file_warnings_unfixed;
// File summary
if file_fixes_applied > 0 || file_errors_unfixed > 0 || file_warnings_unfixed > 0 {
writeln!(output, " {} quickfixes applied. Remaining: {} errors, {} warnings\n",
file_fixes_applied, file_errors_unfixed, file_warnings_unfixed)?;
} else {
writeln!(output, " No issues fixed or found\n")?;
}
}
// Mark that we've checked diagnostics
action_log.update(cx, |action_log, _cx| {
action_log.checked_project_diagnostics();
})?;
// Generate overall summary
if files_with_diagnostics.is_empty() {
Ok("No issues found in the project!".to_string())
} else {
writeln!(output, "\nProject-wide summary: Applied {} quickfixes. Remaining issues: {} errors, {} warnings.",
total_fixes_applied, total_errors_unfixed, total_warnings_unfixed)?;
Ok(output)
}
})
}
}
}
}

View File

@@ -0,0 +1,16 @@
Get errors and warnings for the project or a specific file and apply quickfixes automatically where possible.
This tool can be invoked to find diagnostics in your code and automatically apply available quickfixes to resolve them. Quickfixes are code edits suggested by language servers that can automatically fix common issues.
When a path is provided, it checks that specific file for diagnostics and applies quickfixes.
When no path is provided, it finds diagnostics project-wide and applies quickfixes where possible.
<example>
To automatically fix issues in a specific file:
{
"path": "src/main.rs"
}
To find and fix issues across the entire project:
{}
</example>

View File

@@ -1,7 +1,6 @@
use std::sync::Arc;
use crate::code_symbols_tool::file_outline;
use crate::schema::json_schema_for;
use crate::{code_symbols_tool::file_outline, schema::json_schema_for};
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
@@ -16,7 +15,7 @@ use util::markdown::MarkdownString;
/// If the model requests to read a file whose size exceeds this, then
/// the tool will return an error along with the model's symbol outline,
/// and suggest trying again using line ranges from the outline.
const MAX_FILE_SIZE_TO_READ: usize = 4096;
const MAX_FILE_SIZE_TO_READ: usize = 16384;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct ReadFileToolInput {
@@ -52,7 +51,7 @@ impl Tool for ReadFileTool {
"read_file".into()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
false
}

View File

@@ -44,7 +44,7 @@ impl Tool for RegexSearchTool {
"regex_search".into()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
false
}

View File

@@ -0,0 +1,205 @@
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
use language::{self, Buffer, ToPointUtf16};
use language_model::LanguageModelRequestMessage;
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::sync::Arc;
use ui::IconName;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct RenameToolInput {
/// The relative path to the file containing the symbol to rename.
///
/// WARNING: you MUST start this path with one of the project's root directories.
pub path: String,
/// The new name to give to the symbol.
pub new_name: String,
/// The text that comes immediately before the symbol in the file.
pub context_before_symbol: String,
/// The symbol to rename. This text must appear in the file right between
/// `context_before_symbol` and `context_after_symbol`.
///
/// The file must contain exactly one occurrence of `context_before_symbol` followed by
/// `symbol` followed by `context_after_symbol`. If the file contains zero occurrences,
/// or if it contains more than one occurrence, the tool will fail, so it is absolutely
/// critical that you verify ahead of time that the string is unique. You can search
/// the file's contents to verify this ahead of time.
///
/// To make the string more likely to be unique, include a minimum of 1 line of context
/// before the symbol, as well as a minimum of 1 line of context after the symbol.
/// If these lines of context are not enough to obtain a string that appears only once
/// in the file, then double the number of context lines until the string becomes unique.
/// (Start with 1 line before and 1 line after though, because too much context is
/// needlessly costly.)
///
/// Do not alter the context lines of code in any way, and make sure to preserve all
/// whitespace and indentation for all lines of code. The combined string must be exactly
/// as it appears in the file, or else this tool call will fail.
pub symbol: String,
/// The text that comes immediately after the symbol in the file.
pub context_after_symbol: String,
}
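A sketch of a well-formed request that follows the uniqueness rules above, with one line of context on each side of the symbol (all values are illustrative):

    let input = serde_json::json!({
        "path": "lorem/src/main.rs",
        "new_name": "total_price",
        "context_before_symbol": "fn main() {\n    let result = ",
        "symbol": "calc_total",
        "context_after_symbol": "(&items);\n    println!(\"{result}\");"
    });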
pub struct RenameTool;
impl Tool for RenameTool {
fn name(&self) -> String {
"rename".into()
}
fn needs_confirmation(&self, _input: &serde_json::Value, _cx: &App) -> bool {
false
}
fn description(&self) -> String {
include_str!("./rename_tool/description.md").into()
}
fn icon(&self) -> IconName {
IconName::Pencil
}
fn input_schema(
&self,
_format: language_model::LanguageModelToolSchemaFormat,
) -> serde_json::Value {
let schema = schemars::schema_for!(RenameToolInput);
serde_json::to_value(&schema).unwrap()
}
fn ui_text(&self, input: &serde_json::Value) -> String {
match serde_json::from_value::<RenameToolInput>(input.clone()) {
Ok(input) => {
format!("Rename '{}' to '{}'", input.symbol, input.new_name)
}
Err(_) => "Rename symbol".to_string(),
}
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let input = match serde_json::from_value::<RenameToolInput>(input) {
Ok(input) => input,
Err(err) => return Task::ready(Err(anyhow!(err))),
};
cx.spawn(async move |cx| {
let buffer = {
let project_path = project.read_with(cx, |project, cx| {
project
.find_project_path(&input.path, cx)
.context("Path not found in project")
})??;
project.update(cx, |project, cx| project.open_buffer(project_path, cx))?.await?
};
action_log.update(cx, |action_log, cx| {
action_log.buffer_read(buffer.clone(), cx);
})?;
let position = {
let Some(position) = buffer.read_with(cx, |buffer, _cx| {
find_symbol_position(&buffer, &input.context_before_symbol, &input.symbol, &input.context_after_symbol)
})? else {
return Err(anyhow!(
"Failed to locate the symbol specified by context_before_symbol, symbol, and context_after_symbol. Make sure context_before_symbol and context_after_symbol each match exactly once in the file."
));
};
buffer.read_with(cx, |buffer, _| {
position.to_point_utf16(&buffer.snapshot())
})?
};
project
.update(cx, |project, cx| {
project.perform_rename(buffer.clone(), position, input.new_name.clone(), cx)
})?
.await?;
project
.update(cx, |project, cx| project.save_buffer(buffer.clone(), cx))?
.await?;
action_log.update(cx, |log, cx| {
log.buffer_edited(buffer.clone(), cx)
})?;
Ok(format!("Renamed '{}' to '{}'", input.symbol, input.new_name))
})
}
}
/// Finds the position of the symbol in the buffer, if it appears between context_before_symbol
/// and context_after_symbol, and if that combined string has one unique result in the buffer.
///
/// If an exact match fails, it tries adding a newline to the end of context_before_symbol and
/// to the beginning of context_after_symbol to accommodate line-based context matching.
fn find_symbol_position(
buffer: &Buffer,
context_before_symbol: &str,
symbol: &str,
context_after_symbol: &str,
) -> Option<language::Anchor> {
let snapshot = buffer.snapshot();
let text = snapshot.text();
// First try with exact match
let search_string = format!("{context_before_symbol}{symbol}{context_after_symbol}");
let mut positions = text.match_indices(&search_string);
let position_result = positions.next();
if let Some(position) = position_result {
// Check if the matched string is unique
if positions.next().is_none() {
let symbol_start = position.0 + context_before_symbol.len();
let symbol_start_anchor =
snapshot.anchor_before(snapshot.offset_to_point(symbol_start));
return Some(symbol_start_anchor);
}
}
// If exact match fails or is not unique, try with line-based context
// Add a newline to the end of before context and beginning of after context
let line_based_before = if context_before_symbol.ends_with('\n') {
context_before_symbol.to_string()
} else {
format!("{context_before_symbol}\n")
};
let line_based_after = if context_after_symbol.starts_with('\n') {
context_after_symbol.to_string()
} else {
format!("\n{context_after_symbol}")
};
let line_search_string = format!("{line_based_before}{symbol}{line_based_after}");
let mut line_positions = text.match_indices(&line_search_string);
let line_position = line_positions.next()?;
// The line-based search string must also appear exactly once
if line_positions.next().is_some() {
return None;
}
let line_symbol_start = line_position.0 + line_based_before.len();
let line_symbol_start_anchor =
snapshot.anchor_before(snapshot.offset_to_point(line_symbol_start));
Some(line_symbol_start_anchor)
}
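The uniqueness requirement above is easier to see on a plain string; this simplified sketch mirrors the exact-match branch only (no Buffer, no anchors):

    // Returns the byte offset of `symbol` only when `before + symbol + after`
    // occurs exactly once in `text`, mirroring the check in find_symbol_position.
    fn unique_symbol_offset(text: &str, before: &str, symbol: &str, after: &str) -> Option<usize> {
        let needle = format!("{before}{symbol}{after}");
        let mut matches = text.match_indices(&needle);
        let (offset, _) = matches.next()?;
        if matches.next().is_some() {
            return None; // ambiguous context, give up like the real tool does
        }
        Some(offset + before.len())
    }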

View File

@@ -0,0 +1,15 @@
Renames a symbol across your codebase using the language server's semantic knowledge.
This tool performs a rename refactoring operation on a specified symbol. It uses the project's language server to analyze the code and perform the rename correctly across all files where the symbol is referenced.
Unlike a simple find and replace, this tool understands the semantic meaning of the code, so it only renames the specific symbol you specify and not unrelated text that happens to have the same name.
Examples of symbols you can rename:
- Variables
- Functions
- Classes/structs
- Fields/properties
- Methods
- Interfaces/traits
The language server handles updating all references to the renamed symbol throughout the codebase.

View File

@@ -72,7 +72,7 @@ impl Tool for SymbolInfoTool {
"symbol_info".into()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
false
}

View File

@@ -24,7 +24,7 @@ impl Tool for ThinkingTool {
"thinking".to_string()
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
false
}
@@ -33,7 +33,7 @@ impl Tool for ThinkingTool {
}
fn icon(&self) -> IconName {
IconName::Brain
IconName::LightBulb
}
fn input_schema(&self, format: LanguageModelToolSchemaFormat) -> serde_json::Value {

View File

@@ -318,6 +318,7 @@ impl Server {
.add_request_handler(forward_read_only_project_request::<proto::OpenUncommittedDiff>)
.add_request_handler(forward_read_only_project_request::<proto::LspExtExpandMacro>)
.add_request_handler(forward_read_only_project_request::<proto::LspExtOpenDocs>)
.add_request_handler(forward_mutating_project_request::<proto::LspExtRunnables>)
.add_request_handler(
forward_read_only_project_request::<proto::LspExtSwitchSourceHeader>,
)

View File

@@ -309,7 +309,7 @@ impl MessageEditor {
.map(|mat| {
let (new_text, label) = completion_fn(&mat);
Completion {
old_range: range.clone(),
replace_range: range.clone(),
new_text,
label,
icon_path: None,

View File

@@ -3,37 +3,62 @@ use std::ops::{Deref, DerefMut};
use std::sync::LazyLock;
use collections::HashMap;
use gpui::{AnyElement, App, IntoElement, RenderOnce, SharedString, Window, div, prelude::*, px};
use gpui::{
AnyElement, App, IntoElement, RenderOnce, SharedString, Window, div, pattern_slash, prelude::*,
px, rems,
};
use linkme::distributed_slice;
use parking_lot::RwLock;
use theme::ActiveTheme;
pub trait Component {
fn scope() -> Option<ComponentScope>;
fn scope() -> ComponentScope {
ComponentScope::None
}
fn name() -> &'static str {
std::any::type_name::<Self>()
}
/// Returns a name that the component should be sorted by.
///
/// Implement this if the component should be sorted in an alternate order than its name.
///
/// For example, to group related components together when sorted:
///
/// - Button -> ButtonA
/// - IconButton -> ButtonBIcon
/// - ToggleButton -> ButtonCToggle
///
/// This naming scheme keeps these components together and allows them to be sorted in a logical order.
fn sort_name() -> &'static str {
Self::name()
}
fn description() -> Option<&'static str> {
None
}
}
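A hypothetical implementation showing how the new `scope` default and the `sort_name` hook are meant to be used, following the button example in the docs above:

    // Hypothetical component; the sort name groups it with the other buttons.
    struct IconButton;

    impl Component for IconButton {
        fn scope() -> ComponentScope {
            ComponentScope::Input
        }
        fn name() -> &'static str {
            "IconButton"
        }
        fn sort_name() -> &'static str {
            "ButtonBIcon"
        }
        fn description() -> Option<&'static str> {
            Some("A button that renders only an icon.")
        }
    }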
pub trait ComponentPreview: Component {
fn preview(_window: &mut Window, _cx: &mut App) -> AnyElement;
fn preview(_window: &mut Window, _cx: &mut App) -> Option<AnyElement> {
None
}
}
#[distributed_slice]
pub static __ALL_COMPONENTS: [fn()] = [..];
#[distributed_slice]
pub static __ALL_PREVIEWS: [fn()] = [..];
pub static COMPONENT_DATA: LazyLock<RwLock<ComponentRegistry>> =
LazyLock::new(|| RwLock::new(ComponentRegistry::new()));
pub struct ComponentRegistry {
components: Vec<(Option<ComponentScope>, &'static str, Option<&'static str>)>,
previews: HashMap<&'static str, fn(&mut Window, &mut App) -> AnyElement>,
components: Vec<(
ComponentScope,
// name
&'static str,
// sort name
&'static str,
// description
Option<&'static str>,
)>,
previews: HashMap<&'static str, fn(&mut Window, &mut App) -> Option<AnyElement>>,
}
impl ComponentRegistry {
@@ -47,30 +72,16 @@ impl ComponentRegistry {
pub fn init() {
let component_fns: Vec<_> = __ALL_COMPONENTS.iter().cloned().collect();
let preview_fns: Vec<_> = __ALL_PREVIEWS.iter().cloned().collect();
for f in component_fns {
f();
}
for f in preview_fns {
f();
}
}
pub fn register_component<T: Component>() {
let component_data = (T::scope(), T::name(), T::description());
COMPONENT_DATA.write().components.push(component_data);
}
pub fn register_preview<T: ComponentPreview>() {
let preview_data = (
T::name(),
T::preview as fn(&mut Window, &mut App) -> AnyElement,
);
COMPONENT_DATA
.write()
.previews
.insert(preview_data.0, preview_data.1);
let component_data = (T::scope(), T::name(), T::sort_name(), T::description());
let mut data = COMPONENT_DATA.write();
data.components.push(component_data);
data.previews.insert(T::name(), T::preview);
}
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
@@ -80,29 +91,41 @@ pub struct ComponentId(pub &'static str);
pub struct ComponentMetadata {
id: ComponentId,
name: SharedString,
scope: Option<ComponentScope>,
sort_name: SharedString,
scope: ComponentScope,
description: Option<SharedString>,
preview: Option<fn(&mut Window, &mut App) -> AnyElement>,
preview: Option<fn(&mut Window, &mut App) -> Option<AnyElement>>,
}
impl ComponentMetadata {
pub fn id(&self) -> ComponentId {
self.id.clone()
}
pub fn name(&self) -> SharedString {
self.name.clone()
}
    pub fn sort_name(&self) -> SharedString {
        self.sort_name.clone()
}
pub fn scopeless_name(&self) -> SharedString {
self.name
.clone()
.split("::")
.last()
.unwrap_or(&self.name)
.to_string()
.into()
}
pub fn scope(&self) -> ComponentScope {
self.scope.clone()
}
pub fn description(&self) -> Option<SharedString> {
self.description.clone()
}
pub fn preview(&self) -> Option<fn(&mut Window, &mut App) -> AnyElement> {
pub fn preview(&self) -> Option<fn(&mut Window, &mut App) -> Option<AnyElement>> {
self.preview
}
}
@@ -113,26 +136,18 @@ impl AllComponents {
pub fn new() -> Self {
AllComponents(HashMap::default())
}
/// Returns all components with previews
pub fn all_previews(&self) -> Vec<&ComponentMetadata> {
self.0.values().filter(|c| c.preview.is_some()).collect()
}
/// Returns all components with previews sorted by name
pub fn all_previews_sorted(&self) -> Vec<ComponentMetadata> {
let mut previews: Vec<ComponentMetadata> =
self.all_previews().into_iter().cloned().collect();
previews.sort_by_key(|a| a.name());
previews
}
/// Returns all components
pub fn all(&self) -> Vec<&ComponentMetadata> {
self.0.values().collect()
}
/// Returns all components sorted by name
pub fn all_sorted(&self) -> Vec<ComponentMetadata> {
let mut components: Vec<ComponentMetadata> = self.all().into_iter().cloned().collect();
components.sort_by_key(|a| a.name());
@@ -142,7 +157,6 @@ impl AllComponents {
impl Deref for AllComponents {
type Target = HashMap<ComponentId, ComponentMetadata>;
fn deref(&self) -> &Self::Target {
&self.0
}
@@ -157,139 +171,127 @@ impl DerefMut for AllComponents {
pub fn components() -> AllComponents {
let data = COMPONENT_DATA.read();
let mut all_components = AllComponents::new();
for (scope, name, description) in &data.components {
for (scope, name, sort_name, description) in &data.components {
let preview = data.previews.get(name).cloned();
let component_name = SharedString::new_static(name);
let sort_name = SharedString::new_static(sort_name);
let id = ComponentId(name);
all_components.insert(
id.clone(),
ComponentMetadata {
id,
name: component_name,
sort_name,
scope: scope.clone(),
description: description.map(Into::into),
preview,
},
);
}
all_components
}
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub enum ComponentScope {
Layout,
Input,
Notification,
Editor,
Collaboration,
DataDisplay,
Editor,
Images,
Input,
Layout,
Loading,
Navigation,
None,
Notification,
Overlays,
Status,
Typography,
VersionControl,
Unknown(SharedString),
}
impl Display for ComponentScope {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
ComponentScope::Layout => write!(f, "Layout"),
ComponentScope::Input => write!(f, "Input"),
ComponentScope::Notification => write!(f, "Notification"),
ComponentScope::Editor => write!(f, "Editor"),
ComponentScope::Collaboration => write!(f, "Collaboration"),
ComponentScope::DataDisplay => write!(f, "Data Display"),
ComponentScope::Editor => write!(f, "Editor"),
ComponentScope::Images => write!(f, "Images & Icons"),
ComponentScope::Input => write!(f, "Forms & Input"),
ComponentScope::Layout => write!(f, "Layout & Structure"),
ComponentScope::Loading => write!(f, "Loading & Progress"),
ComponentScope::Navigation => write!(f, "Navigation"),
ComponentScope::None => write!(f, "Unsorted"),
ComponentScope::Notification => write!(f, "Notification"),
ComponentScope::Overlays => write!(f, "Overlays & Layering"),
ComponentScope::Status => write!(f, "Status"),
ComponentScope::Typography => write!(f, "Typography"),
ComponentScope::VersionControl => write!(f, "Version Control"),
ComponentScope::Unknown(name) => write!(f, "Unknown: {}", name),
}
}
}
impl From<&str> for ComponentScope {
fn from(value: &str) -> Self {
match value {
"Layout" => ComponentScope::Layout,
"Input" => ComponentScope::Input,
"Notification" => ComponentScope::Notification,
"Editor" => ComponentScope::Editor,
"Collaboration" => ComponentScope::Collaboration,
"Version Control" | "VersionControl" => ComponentScope::VersionControl,
_ => ComponentScope::Unknown(SharedString::new(value)),
}
}
}
impl From<String> for ComponentScope {
fn from(value: String) -> Self {
match value.as_str() {
"Layout" => ComponentScope::Layout,
"Input" => ComponentScope::Input,
"Notification" => ComponentScope::Notification,
"Editor" => ComponentScope::Editor,
"Collaboration" => ComponentScope::Collaboration,
"Version Control" | "VersionControl" => ComponentScope::VersionControl,
_ => ComponentScope::Unknown(SharedString::new(value)),
}
}
}
/// Which side of the preview to show labels on
#[derive(Default, Debug, Clone, Copy, PartialEq, Eq)]
pub enum ExampleLabelSide {
/// Left side
Left,
/// Right side
Right,
/// Top side
#[default]
Top,
/// Bottom side
Bottom,
}
/// A single example of a component.
#[derive(IntoElement)]
pub struct ComponentExample {
variant_name: SharedString,
element: AnyElement,
label_side: ExampleLabelSide,
grow: bool,
pub variant_name: SharedString,
pub description: Option<SharedString>,
pub element: AnyElement,
}
impl RenderOnce for ComponentExample {
fn render(self, _window: &mut Window, cx: &mut App) -> impl IntoElement {
let base = div().flex();
let base = match self.label_side {
ExampleLabelSide::Right => base.flex_row(),
ExampleLabelSide::Left => base.flex_row_reverse(),
ExampleLabelSide::Bottom => base.flex_col(),
ExampleLabelSide::Top => base.flex_col_reverse(),
};
base.gap_2()
.p_2()
.text_size(px(10.))
.text_color(cx.theme().colors().text_muted)
.when(self.grow, |this| this.flex_1())
.when(!self.grow, |this| this.flex_none())
.child(self.element)
.child(self.variant_name)
div()
.w_full()
.flex()
.flex_col()
.gap_3()
.child(
div()
.child(self.variant_name.clone())
.text_size(rems(1.25))
.text_color(cx.theme().colors().text),
)
.when_some(self.description, |this, description| {
this.child(
div()
.text_size(rems(0.9375))
.text_color(cx.theme().colors().text_muted)
.child(description.clone()),
)
})
.child(
div()
.flex()
.w_full()
.rounded_xl()
.min_h(px(100.))
.justify_center()
.p_8()
.border_1()
.border_color(cx.theme().colors().border)
.bg(pattern_slash(
cx.theme().colors().surface_background.opacity(0.5),
24.0,
24.0,
))
.shadow_sm()
.child(self.element),
)
.into_any_element()
}
}
impl ComponentExample {
/// Create a new example with the given variant name and example value.
pub fn new(variant_name: impl Into<SharedString>, element: AnyElement) -> Self {
Self {
variant_name: variant_name.into(),
element,
label_side: ExampleLabelSide::default(),
grow: false,
description: None,
}
}
    pub fn description(mut self, description: impl Into<SharedString>) -> Self {
        self.description = Some(description.into());
        self
}
}
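A sketch of the new builder in use; the element here is just a placeholder gpui div:

    // Illustrative only: the description renders under the variant name in the preview page.
    let example = ComponentExample::new(
        "Default",
        div().child("preview goes here").into_any_element(),
    )
    .description("Shown when the component has extra context worth explaining.");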
@@ -309,7 +311,7 @@ impl RenderOnce for ComponentExampleGroup {
.flex_col()
.text_sm()
.text_color(cx.theme().colors().text_muted)
.when(self.grow, |this| this.w_full().flex_1())
.w_full()
.when_some(self.title, |this, title| {
this.gap_4().child(
div()
@@ -336,7 +338,7 @@ impl RenderOnce for ComponentExampleGroup {
.child(
div()
.flex()
.when(self.vertical, |this| this.flex_col())
.flex_col()
.items_start()
.w_full()
.gap_6()
@@ -348,7 +350,6 @@ impl RenderOnce for ComponentExampleGroup {
}
impl ComponentExampleGroup {
/// Create a new group of examples with the given title.
pub fn new(examples: Vec<ComponentExample>) -> Self {
Self {
title: None,
@@ -357,8 +358,6 @@ impl ComponentExampleGroup {
vertical: false,
}
}
/// Create a new group of examples with the given title.
pub fn with_title(title: impl Into<SharedString>, examples: Vec<ComponentExample>) -> Self {
Self {
title: Some(title.into()),
@@ -367,21 +366,16 @@ impl ComponentExampleGroup {
vertical: false,
}
}
/// Set the group to grow to fill the available horizontal space.
pub fn grow(mut self) -> Self {
self.grow = true;
self
}
/// Lay the group out vertically.
pub fn vertical(mut self) -> Self {
self.vertical = true;
self
}
}
/// Create a single example
pub fn single_example(
variant_name: impl Into<SharedString>,
example: AnyElement,
@@ -389,12 +383,10 @@ pub fn single_example(
ComponentExample::new(variant_name, example)
}
/// Create a group of examples without a title
pub fn example_group(examples: Vec<ComponentExample>) -> ComponentExampleGroup {
ComponentExampleGroup::new(examples)
}
/// Create a group of examples with a title
pub fn example_group_with_title(
title: impl Into<SharedString>,
examples: Vec<ComponentExample>,

View File

@@ -43,6 +43,7 @@ pub fn init(app_state: Arc<AppState>, cx: &mut App) {
language_registry,
user_store,
None,
None,
cx,
)
});
@@ -106,10 +107,12 @@ impl ComponentPreview {
language_registry: Arc<LanguageRegistry>,
user_store: Entity<UserStore>,
selected_index: impl Into<Option<usize>>,
active_page: Option<PreviewPage>,
cx: &mut Context<Self>,
) -> Self {
let sorted_components = components().all_sorted();
let selected_index = selected_index.into().unwrap_or(0);
let active_page = active_page.unwrap_or(PreviewPage::AllComponents);
let component_list = ListState::new(
sorted_components.len(),
@@ -135,7 +138,7 @@ impl ComponentPreview {
language_registry,
user_store,
workspace,
active_page: PreviewPage::AllComponents,
active_page,
component_map: components().0,
components: sorted_components,
component_list,
@@ -169,8 +172,7 @@ impl ComponentPreview {
fn scope_ordered_entries(&self) -> Vec<PreviewEntry> {
use std::collections::HashMap;
let mut scope_groups: HashMap<Option<ComponentScope>, Vec<ComponentMetadata>> =
HashMap::default();
let mut scope_groups: HashMap<ComponentScope, Vec<ComponentMetadata>> = HashMap::default();
for component in &self.components {
scope_groups
@@ -192,6 +194,7 @@ impl ComponentPreview {
ComponentScope::Notification,
ComponentScope::Collaboration,
ComponentScope::VersionControl,
ComponentScope::None,
];
// Always show all components first
@@ -199,38 +202,27 @@ impl ComponentPreview {
entries.push(PreviewEntry::Separator);
        for scope in known_scopes.iter() {
            if let Some(components) = scope_groups.remove(scope) {
                if !components.is_empty() {
                    entries.push(PreviewEntry::SectionHeader(scope.to_string().into()));
                    let mut sorted_components = components;
                    sorted_components.sort_by_key(|component| component.sort_name());
                    for component in sorted_components {
                        entries.push(PreviewEntry::Component(component));
                    }
                }
            }
        }
        if let Some(components) = scope_groups.get(&ComponentScope::None) {
            if !components.is_empty() {
                entries.push(PreviewEntry::Separator);
                entries.push(PreviewEntry::SectionHeader("Uncategorized".into()));
                let mut sorted_components = components.clone();
                sorted_components.sort_by_key(|c| c.sort_name());
                for component in sorted_components {
entries.push(PreviewEntry::Component(component.clone()));
}
}
@@ -250,7 +242,10 @@ impl ComponentPreview {
let id = component_metadata.id();
let selected = self.active_page == PreviewPage::Component(id.clone());
ListItem::new(ix)
.child(Label::new(component_metadata.name().clone()).color(Color::Default))
.child(
Label::new(component_metadata.scopeless_name().clone())
.color(Color::Default),
)
.selectable(true)
.toggle_state(selected)
.inset(true)
@@ -333,7 +328,7 @@ impl ComponentPreview {
window: &mut Window,
cx: &mut App,
) -> impl IntoElement {
let name = component.name();
let name = component.scopeless_name();
let scope = component.scope();
let description = component.description();
@@ -354,13 +349,12 @@ impl ComponentPreview {
v_flex()
.gap_1()
.child(
                h_flex().gap_1().text_xl().child(div().child(name)).when(
                    !matches!(scope, ComponentScope::None),
                    |this| {
                        this.child(div().opacity(0.5).child(format!("({})", scope)))
                    },
                ),
)
.when_some(description, |this, description| {
this.child(
@@ -373,7 +367,7 @@ impl ComponentPreview {
}),
)
.when_some(component.preview(), |this, preview| {
this.child(preview(window, cx))
this.children(preview(window, cx))
}),
)
.into_any_element()
@@ -395,17 +389,16 @@ impl ComponentPreview {
fn render_component_page(
&mut self,
component_id: &ComponentId,
window: &mut Window,
cx: &mut Context<Self>,
_window: &mut Window,
_cx: &mut Context<Self>,
) -> impl IntoElement {
let component = self.component_map.get(&component_id);
if let Some(component) = component {
v_flex()
.w_full()
.flex_initial()
.min_h_full()
.child(self.render_preview(component, window, cx))
.id("render-component-page")
.size_full()
.child(ComponentPreviewPage::new(component.clone()))
.into_any_element()
} else {
v_flex()
@@ -445,10 +438,11 @@ impl Render for ComponentPreview {
.overflow_hidden()
.size_full()
.track_focus(&self.focus_handle)
.px_2()
.bg(cx.theme().colors().editor_background)
.child(
v_flex()
.border_r_1()
.border_color(cx.theme().colors().border)
.h_full()
.child(
uniform_list(
@@ -465,6 +459,7 @@ impl Render for ComponentPreview {
)
.track_scroll(self.nav_scroll_handle.clone())
.pt_4()
.px_4()
.w(px(240.))
.h_full()
.flex_1(),
@@ -527,6 +522,7 @@ impl Item for ComponentPreview {
let user_store = self.user_store.clone();
let weak_workspace = self.workspace.clone();
let selected_index = self.cursor_index;
let active_page = self.active_page.clone();
Some(cx.new(|cx| {
Self::new(
@@ -534,6 +530,7 @@ impl Item for ComponentPreview {
language_registry,
user_store,
selected_index,
Some(active_page),
cx,
)
}))
@@ -566,7 +563,14 @@ impl SerializableItem for ComponentPreview {
let weak_workspace = workspace.clone();
cx.update(|_, cx| {
Ok(cx.new(|cx| {
ComponentPreview::new(weak_workspace, language_registry, user_store, None, cx)
ComponentPreview::new(
weak_workspace,
language_registry,
user_store,
None,
None,
cx,
)
}))
})?
})
@@ -600,3 +604,76 @@ impl SerializableItem for ComponentPreview {
false
}
}
#[derive(IntoElement)]
pub struct ComponentPreviewPage {
// languages: Arc<LanguageRegistry>,
component: ComponentMetadata,
}
impl ComponentPreviewPage {
pub fn new(
component: ComponentMetadata,
// languages: Arc<LanguageRegistry>
) -> Self {
Self {
// languages,
component,
}
}
fn render_header(&self, _: &Window, cx: &App) -> impl IntoElement {
v_flex()
.px_12()
.pt_16()
.pb_12()
.gap_6()
.bg(cx.theme().colors().surface_background)
.border_b_1()
.border_color(cx.theme().colors().border)
.child(
v_flex()
.gap_0p5()
.child(
Label::new(self.component.scope().to_string())
.size(LabelSize::Small)
.color(Color::Muted),
)
.child(
Headline::new(self.component.scopeless_name()).size(HeadlineSize::XLarge),
),
)
.when_some(self.component.description(), |this, description| {
this.child(div().text_sm().child(description))
})
}
fn render_preview(&self, window: &mut Window, cx: &mut App) -> impl IntoElement {
v_flex()
.flex_1()
.px_12()
.py_6()
.bg(cx.theme().colors().editor_background)
.child(if let Some(preview) = self.component.preview() {
preview(window, cx).unwrap_or_else(|| {
div()
.child("Failed to load preview. This path should be unreachable")
.into_any_element()
})
} else {
div().child("No preview available").into_any_element()
})
}
}
impl RenderOnce for ComponentPreviewPage {
fn render(self, window: &mut Window, cx: &mut App) -> impl IntoElement {
v_flex()
.id("component-preview-page")
.overflow_y_scroll()
.overflow_x_hidden()
.w_full()
.child(self.render_header(window, cx))
.child(self.render_preview(window, cx))
}
}

View File

@@ -49,7 +49,7 @@ impl Tool for ContextServerTool {
}
}
fn needs_confirmation(&self) -> bool {
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
true
}

View File

@@ -1280,10 +1280,6 @@ mod tests {
unimplemented!()
}
fn as_any(&self) -> &dyn std::any::Any {
unimplemented!()
}
fn to_proto(&self, _: &App) -> rpc::proto::File {
unimplemented!()
}

View File

@@ -93,7 +93,7 @@ pub struct TcpArguments {
pub port: u16,
pub timeout: Option<u64>,
}
#[derive(Debug, Clone)]
#[derive(Default, Debug, Clone)]
pub struct DebugAdapterBinary {
pub command: String,
pub arguments: Option<Vec<OsString>>,
@@ -102,6 +102,7 @@ pub struct DebugAdapterBinary {
pub connection: Option<TcpArguments>,
}
#[derive(Debug)]
pub struct AdapterVersion {
pub tag_name: String,
pub url: String,

View File

@@ -0,0 +1,141 @@
use std::{path::PathBuf, sync::OnceLock};
use anyhow::{Result, bail};
use async_trait::async_trait;
use dap::adapters::latest_github_release;
use gpui::AsyncApp;
use task::{DebugAdapterConfig, DebugRequestType, DebugTaskDefinition};
use crate::*;
#[derive(Default)]
pub(crate) struct CodeLldbDebugAdapter {
last_known_version: OnceLock<String>,
}
impl CodeLldbDebugAdapter {
const ADAPTER_NAME: &'static str = "CodeLLDB";
}
#[async_trait(?Send)]
impl DebugAdapter for CodeLldbDebugAdapter {
fn name(&self) -> DebugAdapterName {
DebugAdapterName(Self::ADAPTER_NAME.into())
}
async fn install_binary(
&self,
version: AdapterVersion,
delegate: &dyn DapDelegate,
) -> Result<()> {
adapters::download_adapter_from_github(
self.name(),
version,
adapters::DownloadedFileType::Vsix,
delegate,
)
.await?;
Ok(())
}
async fn fetch_latest_adapter_version(
&self,
delegate: &dyn DapDelegate,
) -> Result<AdapterVersion> {
let release =
latest_github_release("vadimcn/codelldb", true, false, delegate.http_client()).await?;
let arch = match std::env::consts::ARCH {
"aarch64" => "arm64",
"x86_64" => "x64",
_ => {
return Err(anyhow!(
"unsupported architecture {}",
std::env::consts::ARCH
));
}
};
let platform = match std::env::consts::OS {
"macos" => "darwin",
"linux" => "linux",
"windows" => "win32",
_ => {
return Err(anyhow!(
"unsupported operating system {}",
std::env::consts::OS
));
}
};
let asset_name = format!("codelldb-{platform}-{arch}.vsix");
let _ = self.last_known_version.set(release.tag_name.clone());
let ret = AdapterVersion {
tag_name: release.tag_name,
url: release
.assets
.iter()
.find(|asset| asset.name == asset_name)
.ok_or_else(|| anyhow!("no asset found matching {:?}", asset_name))?
.browser_download_url
.clone(),
};
Ok(ret)
}
async fn get_installed_binary(
&self,
_: &dyn DapDelegate,
_: &DebugAdapterConfig,
_: Option<PathBuf>,
_: &mut AsyncApp,
) -> Result<DebugAdapterBinary> {
let Some(version) = self.last_known_version.get() else {
bail!("Could not determine latest CodeLLDB version");
};
let adapter_path = paths::debug_adapters_dir().join(&Self::ADAPTER_NAME);
let version_path = adapter_path.join(format!("{}_{}", Self::ADAPTER_NAME, version));
let adapter_dir = version_path.join("extension").join("adapter");
let command = adapter_dir.join("codelldb");
let command = command
.to_str()
.map(ToOwned::to_owned)
.ok_or_else(|| anyhow!("Adapter path is expected to be valid UTF-8"))?;
Ok(DebugAdapterBinary {
command,
cwd: Some(adapter_dir),
..Default::default()
})
}
fn request_args(&self, config: &DebugTaskDefinition) -> Value {
let mut args = json!({
"request": match config.request {
DebugRequestType::Launch(_) => "launch",
DebugRequestType::Attach(_) => "attach",
},
});
let map = args.as_object_mut().unwrap();
match &config.request {
DebugRequestType::Attach(attach) => {
map.insert("pid".into(), attach.process_id.into());
}
DebugRequestType::Launch(launch) => {
map.insert("program".into(), launch.program.clone().into());
if !launch.args.is_empty() {
map.insert("args".into(), launch.args.clone().into());
}
if let Some(stop_on_entry) = config.stop_on_entry {
map.insert("stopOnEntry".into(), stop_on_entry.into());
}
if let Some(cwd) = launch.cwd.as_ref() {
map.insert("cwd".into(), cwd.to_string_lossy().into_owned().into());
}
}
}
args
}
}
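For reference, `request_args` above would produce JSON along these lines for a hypothetical launch configuration with program, args, cwd, and stop_on_entry all set:

    let expected = serde_json::json!({
        "request": "launch",
        "program": "/home/me/project/target/debug/app",
        "args": ["--verbose"],
        "stopOnEntry": true,
        "cwd": "/home/me/project"
    });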

View File

@@ -1,3 +1,4 @@
mod codelldb;
mod gdb;
mod go;
mod javascript;
@@ -9,6 +10,7 @@ use std::{net::Ipv4Addr, sync::Arc};
use anyhow::{Result, anyhow};
use async_trait::async_trait;
use codelldb::CodeLldbDebugAdapter;
use dap::{
DapRegistry,
adapters::{
@@ -26,6 +28,7 @@ use serde_json::{Value, json};
use task::{DebugAdapterConfig, TCPHost};
pub fn init(registry: Arc<DapRegistry>) {
registry.add_adapter(Arc::from(CodeLldbDebugAdapter::default()));
registry.add_adapter(Arc::from(PythonDebugAdapter));
registry.add_adapter(Arc::from(PhpDebugAdapter));
registry.add_adapter(Arc::from(JsDebugAdapter::default()));

View File

@@ -25,7 +25,9 @@ use project::{
};
use rpc::proto::{self};
use settings::Settings;
use std::{any::TypeId, path::PathBuf};
use std::any::TypeId;
use std::path::Path;
use std::sync::Arc;
use task::DebugTaskDefinition;
use terminal_view::terminal_panel::TerminalPanel;
use ui::{ContextMenu, Divider, DropdownMenu, Tooltip, prelude::*};
@@ -272,7 +274,7 @@ impl DebugPanel {
fn handle_run_in_terminal_request(
&self,
title: Option<String>,
cwd: PathBuf,
cwd: Option<Arc<Path>>,
command: Option<String>,
args: Vec<String>,
envs: HashMap<String, String>,

View File

@@ -31,6 +31,7 @@ pub(super) struct NewSessionModal {
debug_panel: WeakEntity<DebugPanel>,
mode: NewSessionMode,
stop_on_entry: ToggleState,
initialize_args: Option<serde_json::Value>,
debugger: Option<SharedString>,
last_selected_profile_name: Option<SharedString>,
}
@@ -82,17 +83,17 @@ impl NewSessionModal {
.map(Into::into)
.unwrap_or(ToggleState::Unselected),
last_selected_profile_name: None,
initialize_args: None,
}
}
fn debug_config(&self, cx: &App) -> Option<DebugTaskDefinition> {
let request = self.mode.debug_task(cx);
Some(DebugTaskDefinition {
adapter: self.debugger.clone()?.to_string(),
label: suggested_label(&request, self.debugger.as_deref()?),
request,
initialize_args: None,
initialize_args: self.initialize_args.clone(),
tcp_connection: None,
locator: None,
stop_on_entry: match self.stop_on_entry {
@@ -228,7 +229,7 @@ impl NewSessionModal {
weak.update(cx, |this, cx| {
this.last_selected_profile_name = Some(SharedString::from(&task.label));
this.debugger = Some(task.adapter.clone().into());
this.initialize_args = task.initialize_args.clone();
match &task.request {
DebugRequestType::Launch(launch_config) => {
this.mode = NewSessionMode::launch(

View File

@@ -356,7 +356,7 @@ impl ConsoleQueryBarCompletionProvider {
let variable_value = variables.get(&string_match.string)?;
Some(project::Completion {
old_range: buffer_position..buffer_position,
replace_range: buffer_position..buffer_position,
new_text: string_match.string.clone(),
label: CodeLabel {
filter_range: 0..string_match.string.len(),
@@ -428,10 +428,10 @@ impl ConsoleQueryBarCompletionProvider {
let buffer_offset = buffer_position.to_offset(&snapshot);
let start = buffer_offset - word_bytes_length;
let start = snapshot.anchor_before(start);
let old_range = start..buffer_position;
let replace_range = start..buffer_position;
project::Completion {
old_range,
replace_range,
new_text,
label: CodeLabel {
filter_range: 0..completion.label.len(),

View File

@@ -3,6 +3,7 @@ use super::*;
use gpui::{action_as, action_with_deprecated_aliases, actions};
use schemars::JsonSchema;
use util::serde::default_true;
#[derive(PartialEq, Clone, Deserialize, Default, JsonSchema)]
#[serde(deny_unknown_fields)]
pub struct SelectNext {
@@ -262,6 +263,8 @@ actions!(
Cancel,
CancelLanguageServerWork,
ConfirmRename,
ConfirmCompletionInsert,
ConfirmCompletionReplace,
ContextMenuFirst,
ContextMenuLast,
ContextMenuNext,
@@ -417,6 +420,7 @@ actions!(
Tab,
Backtab,
ToggleBreakpoint,
ToggleCase,
DisableBreakpoint,
EnableBreakpoint,
EditLogBreakpoint,

View File

@@ -230,7 +230,7 @@ impl CompletionsMenu {
let completions = choices
.iter()
.map(|choice| Completion {
old_range: selection.start.text_anchor..selection.end.text_anchor,
replace_range: selection.start.text_anchor..selection.end.text_anchor,
new_text: choice.to_string(),
label: CodeLabel {
text: choice.to_string(),

View File

@@ -109,8 +109,8 @@ use language::{
IndentKind, IndentSize, Language, OffsetRangeExt, Point, Selection, SelectionGoal, TextObject,
TransactionId, TreeSitterOptions, WordsQuery,
language_settings::{
self, InlayHintSettings, RewrapBehavior, WordsCompletionMode, all_language_settings,
language_settings,
self, InlayHintSettings, LspInsertMode, RewrapBehavior, WordsCompletionMode,
all_language_settings, language_settings,
},
point_from_lsp, text_diff_with_options,
};
@@ -131,7 +131,7 @@ pub use proposed_changes_editor::{
};
use smallvec::smallvec;
use std::{cell::OnceCell, iter::Peekable};
use task::{ResolvedTask, TaskTemplate, TaskVariables};
use task::{ResolvedTask, RunnableTag, TaskTemplate, TaskVariables};
pub use lsp::CompletionContext;
use lsp::{
@@ -140,6 +140,7 @@ use lsp::{
};
use language::BufferSnapshot;
pub use lsp_ext::lsp_tasks;
use movement::TextLayoutDetails;
pub use multi_buffer::{
Anchor, AnchorRangeExt, ExcerptId, ExcerptRange, MultiBuffer, MultiBufferSnapshot, RowInfo,
@@ -1261,6 +1262,7 @@ impl Editor {
clone.selections.clone_state(&self.selections);
clone.scroll_manager.clone_state(&self.scroll_manager);
clone.searchable = self.searchable;
clone.read_only = self.read_only;
clone
}
@@ -4270,6 +4272,7 @@ impl Editor {
buffer
.update(cx, |buffer, _| {
buffer.push_transaction(transaction, Instant::now());
buffer.finalize_last_transaction();
})
.ok();
}
@@ -4461,7 +4464,7 @@ impl Editor {
words.remove(&lsp_completion.new_text);
}
completions.extend(words.into_iter().map(|(word, word_range)| Completion {
old_range: old_range.clone(),
replace_range: old_range.clone(),
new_text: word.clone(),
label: CodeLabel::plain(word, None),
icon_path: None,
@@ -4568,6 +4571,26 @@ impl Editor {
self.do_completion(action.item_ix, CompletionIntent::Complete, window, cx)
}
pub fn confirm_completion_insert(
&mut self,
_: &ConfirmCompletionInsert,
window: &mut Window,
cx: &mut Context<Self>,
) -> Option<Task<Result<()>>> {
self.hide_mouse_cursor(&HideMouseCursorOrigin::TypingAction);
self.do_completion(None, CompletionIntent::CompleteWithInsert, window, cx)
}
pub fn confirm_completion_replace(
&mut self,
_: &ConfirmCompletionReplace,
window: &mut Window,
cx: &mut Context<Self>,
) -> Option<Task<Result<()>>> {
self.hide_mouse_cursor(&HideMouseCursorOrigin::TypingAction);
self.do_completion(None, CompletionIntent::CompleteWithReplace, window, cx)
}
pub fn compose_completion(
&mut self,
action: &ComposeCompletion,
@@ -4587,12 +4610,10 @@ impl Editor {
) -> Option<Task<Result<()>>> {
use language::ToOffset as _;
let completions_menu =
if let CodeContextMenu::Completions(menu) = self.hide_context_menu(window, cx)? {
menu
} else {
return None;
};
let CodeContextMenu::Completions(completions_menu) = self.hide_context_menu(window, cx)?
else {
return None;
};
let candidate_id = {
let entries = completions_menu.entries.borrow();
@@ -4621,9 +4642,12 @@ impl Editor {
new_text = completion.new_text.clone();
};
let selections = self.selections.all::<usize>(cx);
let replace_range = choose_completion_range(&completion, intent, &buffer_handle, cx);
let buffer = buffer_handle.read(cx);
let old_range = completion.old_range.to_offset(buffer);
let old_text = buffer.text_for_range(old_range.clone()).collect::<String>();
let old_text = buffer
.text_for_range(replace_range.clone())
.collect::<String>();
let newest_selection = self.selections.newest_anchor();
if newest_selection.start.buffer_id != Some(buffer_handle.read(cx).remote_id()) {
@@ -4634,8 +4658,8 @@ impl Editor {
.start
.text_anchor
.to_offset(buffer)
.saturating_sub(old_range.start);
let lookahead = old_range
.saturating_sub(replace_range.start);
let lookahead = replace_range
.end
.saturating_sub(newest_selection.end.text_anchor.to_offset(buffer));
let mut common_prefix_len = 0;
@@ -4664,8 +4688,8 @@ impl Editor {
ranges.clear();
ranges.extend(selections.iter().map(|s| {
if s.id == newest_selection.id {
range_to_replace = Some(old_range.clone());
old_range.clone()
range_to_replace = Some(replace_range.clone());
replace_range.clone()
} else {
s.start..s.end
}
@@ -9119,6 +9143,17 @@ impl Editor {
});
}
pub fn toggle_case(&mut self, _: &ToggleCase, window: &mut Window, cx: &mut Context<Self>) {
self.manipulate_text(window, cx, |text| {
let has_upper_case_characters = text.chars().any(|c| c.is_uppercase());
if has_upper_case_characters {
text.to_lowercase()
} else {
text.to_uppercase()
}
})
}
pub fn convert_to_upper_case(
&mut self,
_: &ConvertToUpperCase,
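The new toggle_case action applies a single rule per selection: if the selected text contains any uppercase character it is lowercased, otherwise it is uppercased. A minimal standalone sketch of that rule in plain Rust, outside the editor and without gpui types:

fn toggle_case(text: &str) -> String {
    // Any uppercase character forces lowercasing, matching the JetBrains behavior.
    let has_upper = text.chars().any(|c| c.is_uppercase());
    if has_upper {
        text.to_lowercase()
    } else {
        text.to_uppercase()
    }
}

fn main() {
    assert_eq!(toggle_case("hello world"), "HELLO WORLD");
    assert_eq!(toggle_case("HELLO WORLD"), "hello world");
    assert_eq!(toggle_case("hEllo world"), "hello world");
}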
@@ -12449,12 +12484,13 @@ impl Editor {
return Task::ready(());
}
let project = self.project.as_ref().map(Entity::downgrade);
cx.spawn_in(window, async move |this, cx| {
let task_sources = self.lsp_task_sources(cx);
cx.spawn_in(window, async move |editor, cx| {
cx.background_executor().timer(UPDATE_DEBOUNCE).await;
let Some(project) = project.and_then(|p| p.upgrade()) else {
return;
};
let Ok(display_snapshot) = this.update(cx, |this, cx| {
let Ok(display_snapshot) = editor.update(cx, |this, cx| {
this.display_map.update(cx, |map, cx| map.snapshot(cx))
}) else {
return;
@@ -12477,15 +12513,77 @@ impl Editor {
}
})
.await;
let Ok(lsp_tasks) =
cx.update(|_, cx| crate::lsp_tasks(project.clone(), &task_sources, None, cx))
else {
return;
};
let lsp_tasks = lsp_tasks.await;
let Ok(mut lsp_tasks_by_rows) = cx.update(|_, cx| {
lsp_tasks
.into_iter()
.flat_map(|(kind, tasks)| {
tasks.into_iter().filter_map(move |(location, task)| {
Some((kind.clone(), location?, task))
})
})
.fold(HashMap::default(), |mut acc, (kind, location, task)| {
let buffer = location.target.buffer;
let buffer_snapshot = buffer.read(cx).snapshot();
let offset = display_snapshot.buffer_snapshot.excerpts().find_map(
|(excerpt_id, snapshot, _)| {
if snapshot.remote_id() == buffer_snapshot.remote_id() {
display_snapshot
.buffer_snapshot
.anchor_in_excerpt(excerpt_id, location.target.range.start)
} else {
None
}
},
);
if let Some(offset) = offset {
let task_buffer_range =
location.target.range.to_point(&buffer_snapshot);
let context_buffer_range =
task_buffer_range.to_offset(&buffer_snapshot);
let context_range = BufferOffset(context_buffer_range.start)
..BufferOffset(context_buffer_range.end);
acc.entry((buffer_snapshot.remote_id(), task_buffer_range.start.row))
.or_insert_with(|| RunnableTasks {
templates: Vec::new(),
offset,
column: task_buffer_range.start.column,
extra_variables: HashMap::default(),
context_range,
})
.templates
.push((kind, task.original_task().clone()));
}
acc
})
}) else {
return;
};
let rows = Self::runnable_rows(project, display_snapshot, new_rows, cx.clone());
this.update(cx, |this, _| {
this.clear_tasks();
for (key, value) in rows {
this.insert_tasks(key, value);
}
})
.ok();
editor
.update(cx, |editor, _| {
editor.clear_tasks();
for (key, mut value) in rows {
if let Some(lsp_tasks) = lsp_tasks_by_rows.remove(&key) {
value.templates.extend(lsp_tasks.templates);
}
editor.insert_tasks(key, value);
}
for (key, value) in lsp_tasks_by_rows {
editor.insert_tasks(key, value);
}
})
.ok();
})
}
fn fetch_runnable_ranges(
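The loop above merges two per-row task maps: rows discovered via tree-sitter runnables and rows reported by language servers, both keyed by (buffer id, row). A simplified standalone model of that merge; `Tasks` is a stand-in for the real `RunnableTasks`, and the template strings are purely illustrative:

use std::collections::HashMap;

#[derive(Default)]
struct Tasks {
    templates: Vec<&'static str>,
}

// Rows present in both maps have their templates combined; rows that only
// have LSP tasks are inserted as-is, mirroring the two loops above.
fn merge_tasks(
    mut runnable_rows: HashMap<(u64, u32), Tasks>,
    lsp_rows: HashMap<(u64, u32), Tasks>,
) -> HashMap<(u64, u32), Tasks> {
    for (key, tasks) in lsp_rows {
        runnable_rows
            .entry(key)
            .or_default()
            .templates
            .extend(tasks.templates);
    }
    runnable_rows
}

fn main() {
    let mut runnable = HashMap::new();
    runnable.insert((1, 3), Tasks { templates: vec!["cargo test"] });
    let mut lsp = HashMap::new();
    lsp.insert((1, 3), Tasks { templates: vec!["lsp: run test"] });
    lsp.insert((1, 9), Tasks { templates: vec!["lsp: run main"] });
    let merged = merge_tasks(runnable, lsp);
    assert_eq!(merged[&(1, 3)].templates.len(), 2);
    assert_eq!(merged[&(1, 9)].templates.len(), 1);
}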
@@ -12500,7 +12598,7 @@ impl Editor {
snapshot: DisplaySnapshot,
runnable_ranges: Vec<RunnableRange>,
mut cx: AsyncWindowContext,
) -> Vec<((BufferId, u32), RunnableTasks)> {
) -> Vec<((BufferId, BufferRow), RunnableTasks)> {
runnable_ranges
.into_iter()
.filter_map(|mut runnable| {
@@ -12557,11 +12655,9 @@ impl Editor {
)
});
let tags = mem::take(&mut runnable.tags);
let mut tags: Vec<_> = tags
let mut templates_with_tags = mem::take(&mut runnable.tags)
.into_iter()
.flat_map(|tag| {
let tag = tag.0.clone();
.flat_map(|RunnableTag(tag)| {
inventory
.as_ref()
.into_iter()
@@ -12578,20 +12674,20 @@ impl Editor {
})
})
.sorted_by_key(|(kind, _)| kind.to_owned())
.collect();
if let Some((leading_tag_source, _)) = tags.first() {
.collect::<Vec<_>>();
if let Some((leading_tag_source, _)) = templates_with_tags.first() {
// Strongest source wins: if we have a worktree tag binding, prefer it to
// global and language bindings; if we have a global binding, prefer it to
// the language binding.
let first_mismatch = tags
let first_mismatch = templates_with_tags
.iter()
.position(|(tag_source, _)| tag_source != leading_tag_source);
if let Some(index) = first_mismatch {
tags.truncate(index);
templates_with_tags.truncate(index);
}
}
tags
templates_with_tags
}
pub fn move_to_enclosing_bracket(
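The truncation above implements a "strongest source wins" policy: only the leading run of templates whose task source matches the first entry's source is kept. A standalone sketch of that policy over generic (source, template) pairs:

fn keep_strongest_source<K: PartialEq, V>(entries: &mut Vec<(K, V)>) {
    if entries.is_empty() {
        return;
    }
    // Find the first entry whose source differs from the leading source, and
    // drop it and everything after it.
    let first_mismatch = entries
        .iter()
        .position(|(source, _)| *source != entries[0].0);
    if let Some(index) = first_mismatch {
        entries.truncate(index);
    }
}

fn main() {
    let mut entries = vec![
        ("worktree", "task a"),
        ("worktree", "task b"),
        ("global", "task c"),
        ("language", "task d"),
    ];
    keep_strongest_source(&mut entries);
    assert_eq!(entries.len(), 2);
}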
@@ -17918,6 +18014,81 @@ impl Editor {
}
}
// Consider user intent and default settings
fn choose_completion_range(
completion: &Completion,
intent: CompletionIntent,
buffer: &Entity<Buffer>,
cx: &mut Context<Editor>,
) -> Range<usize> {
fn should_replace(
completion: &Completion,
insert_range: &Range<text::Anchor>,
intent: CompletionIntent,
completion_mode_setting: LspInsertMode,
buffer: &Buffer,
) -> bool {
// specific actions take precedence over settings
match intent {
CompletionIntent::CompleteWithInsert => return false,
CompletionIntent::CompleteWithReplace => return true,
CompletionIntent::Complete | CompletionIntent::Compose => {}
}
match completion_mode_setting {
LspInsertMode::Insert => false,
LspInsertMode::Replace => true,
LspInsertMode::ReplaceSubsequence => {
let mut text_to_replace = buffer.chars_for_range(
buffer.anchor_before(completion.replace_range.start)
..buffer.anchor_after(completion.replace_range.end),
);
let mut completion_text = completion.new_text.chars();
// is `text_to_replace` a subsequence of `completion_text`
text_to_replace
.all(|needle_ch| completion_text.any(|haystack_ch| haystack_ch == needle_ch))
}
LspInsertMode::ReplaceSuffix => {
let range_after_cursor = insert_range.end..completion.replace_range.end;
let text_after_cursor = buffer
.text_for_range(
buffer.anchor_before(range_after_cursor.start)
..buffer.anchor_after(range_after_cursor.end),
)
.collect::<String>();
completion.new_text.ends_with(&text_after_cursor)
}
}
}
let buffer = buffer.read(cx);
if let CompletionSource::Lsp {
insert_range: Some(insert_range),
..
} = &completion.source
{
let completion_mode_setting =
language_settings(buffer.language().map(|l| l.name()), buffer.file(), cx)
.completions
.lsp_insert_mode;
if !should_replace(
completion,
&insert_range,
intent,
completion_mode_setting,
buffer,
) {
return insert_range.to_offset(buffer);
}
}
completion.replace_range.to_offset(buffer)
}
fn insert_extra_newline_brackets(
buffer: &MultiBufferSnapshot,
range: Range<usize>,
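When lsp_insert_mode is set to replace_subsequence, the check above only switches to replace mode if the text that would be replaced is a (not necessarily contiguous) subsequence of the completion's new text. A standalone sketch of that predicate, with assertions taken from the completion-mode test table further down:

// Returns true if `needle` is a subsequence of `haystack`: every character of
// `needle` appears in `haystack` in order, possibly with gaps in between.
fn is_subsequence(needle: &str, haystack: &str) -> bool {
    let mut haystack_chars = haystack.chars();
    needle
        .chars()
        .all(|needle_ch| haystack_chars.any(|haystack_ch| haystack_ch == needle_ch))
}

fn main() {
    // "SubError" is a subsequence of "SubscriptionError", so replace mode is used.
    assert!(is_subsequence("SubError", "SubscriptionError"));
    // "element_element_2" is not a subsequence of "element_1", so insert mode is used.
    assert!(!is_subsequence("element_element_2", "element_1"));
}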
@@ -18639,9 +18810,10 @@ fn snippet_completions(
end: lsp_end,
};
Some(Completion {
old_range: range,
replace_range: range,
new_text: snippet.body.clone(),
source: CompletionSource::Lsp {
insert_range: None,
server_id: LanguageServerId(usize::MAX),
resolved: true,
lsp_completion: Box::new(lsp::CompletionItem {

View File

@@ -3875,6 +3875,41 @@ async fn test_manipulate_lines_with_multi_selection(cx: &mut TestAppContext) {
"});
}
#[gpui::test]
async fn test_toggle_case(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
// If all lower case -> upper case
cx.set_state(indoc! {"
«hello worldˇ»
"});
cx.update_editor(|e, window, cx| e.toggle_case(&ToggleCase, window, cx));
cx.assert_editor_state(indoc! {"
«HELLO WORLDˇ»
"});
// If all upper case -> lower case
cx.set_state(indoc! {"
«HELLO WORLDˇ»
"});
cx.update_editor(|e, window, cx| e.toggle_case(&ToggleCase, window, cx));
cx.assert_editor_state(indoc! {"
«hello worldˇ»
"});
// If any upper case characters are identified -> lower case
// This matches JetBrains IDEs
cx.set_state(indoc! {"
«hEllo worldˇ»
"});
cx.update_editor(|e, window, cx| e.toggle_case(&ToggleCase, window, cx));
cx.assert_editor_state(indoc! {"
«hello worldˇ»
"});
}
#[gpui::test]
async fn test_manipulate_text(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -9218,7 +9253,7 @@ async fn test_completion_mode(cx: &mut TestAppContext) {
initial_state: String,
buffer_marked_text: String,
completion_text: &'static str,
expected_with_insertion_mode: String,
expected_with_insert_mode: String,
expected_with_replace_mode: String,
expected_with_replace_subsequence_mode: String,
expected_with_replace_suffix_mode: String,
@@ -9230,7 +9265,7 @@ async fn test_completion_mode(cx: &mut TestAppContext) {
initial_state: "before ediˇ after".into(),
buffer_marked_text: "before <edi|> after".into(),
completion_text: "editor",
expected_with_insertion_mode: "before editorˇ after".into(),
expected_with_insert_mode: "before editorˇ after".into(),
expected_with_replace_mode: "before editorˇ after".into(),
expected_with_replace_subsequence_mode: "before editorˇ after".into(),
expected_with_replace_suffix_mode: "before editorˇ after".into(),
@@ -9240,7 +9275,7 @@ async fn test_completion_mode(cx: &mut TestAppContext) {
initial_state: "before ediˇtor after".into(),
buffer_marked_text: "before <edi|tor> after".into(),
completion_text: "editor",
expected_with_insertion_mode: "before editorˇtor after".into(),
expected_with_insert_mode: "before editorˇtor after".into(),
expected_with_replace_mode: "before ediˇtor after".into(),
expected_with_replace_subsequence_mode: "before ediˇtor after".into(),
expected_with_replace_suffix_mode: "before ediˇtor after".into(),
@@ -9250,7 +9285,7 @@ async fn test_completion_mode(cx: &mut TestAppContext) {
initial_state: "before torˇ after".into(),
buffer_marked_text: "before <tor|> after".into(),
completion_text: "editor",
expected_with_insertion_mode: "before editorˇ after".into(),
expected_with_insert_mode: "before editorˇ after".into(),
expected_with_replace_mode: "before editorˇ after".into(),
expected_with_replace_subsequence_mode: "before editorˇ after".into(),
expected_with_replace_suffix_mode: "before editorˇ after".into(),
@@ -9260,7 +9295,7 @@ async fn test_completion_mode(cx: &mut TestAppContext) {
initial_state: "before ˇtor after".into(),
buffer_marked_text: "before <|tor> after".into(),
completion_text: "editor",
expected_with_insertion_mode: "before editorˇtor after".into(),
expected_with_insert_mode: "before editorˇtor after".into(),
expected_with_replace_mode: "before editorˇ after".into(),
expected_with_replace_subsequence_mode: "before editorˇ after".into(),
expected_with_replace_suffix_mode: "before editorˇ after".into(),
@@ -9270,7 +9305,7 @@ async fn test_completion_mode(cx: &mut TestAppContext) {
initial_state: "pˇfield: bool".into(),
buffer_marked_text: "<p|field>: bool".into(),
completion_text: "pub ",
expected_with_insertion_mode: "pub ˇfield: bool".into(),
expected_with_insert_mode: "pub ˇfield: bool".into(),
expected_with_replace_mode: "pub ˇ: bool".into(),
expected_with_replace_subsequence_mode: "pub ˇfield: bool".into(),
expected_with_replace_suffix_mode: "pub ˇfield: bool".into(),
@@ -9280,7 +9315,7 @@ async fn test_completion_mode(cx: &mut TestAppContext) {
initial_state: "[element_ˇelement_2]".into(),
buffer_marked_text: "[<element_|element_2>]".into(),
completion_text: "element_1",
expected_with_insertion_mode: "[element_1ˇelement_2]".into(),
expected_with_insert_mode: "[element_1ˇelement_2]".into(),
expected_with_replace_mode: "[element_1ˇ]".into(),
expected_with_replace_subsequence_mode: "[element_1ˇelement_2]".into(),
expected_with_replace_suffix_mode: "[element_1ˇelement_2]".into(),
@@ -9290,7 +9325,7 @@ async fn test_completion_mode(cx: &mut TestAppContext) {
initial_state: "[elˇelement]".into(),
buffer_marked_text: "[<el|element>]".into(),
completion_text: "element",
expected_with_insertion_mode: "[elementˇelement]".into(),
expected_with_insert_mode: "[elementˇelement]".into(),
expected_with_replace_mode: "[elˇement]".into(),
expected_with_replace_subsequence_mode: "[elementˇelement]".into(),
expected_with_replace_suffix_mode: "[elˇement]".into(),
@@ -9300,7 +9335,7 @@ async fn test_completion_mode(cx: &mut TestAppContext) {
initial_state: "SubˇError".into(),
buffer_marked_text: "<Sub|Error>".into(),
completion_text: "SubscriptionError",
expected_with_insertion_mode: "SubscriptionErrorˇError".into(),
expected_with_insert_mode: "SubscriptionErrorˇError".into(),
expected_with_replace_mode: "SubscriptionErrorˇ".into(),
expected_with_replace_subsequence_mode: "SubscriptionErrorˇ".into(),
expected_with_replace_suffix_mode: "SubscriptionErrorˇ".into(),
@@ -9310,7 +9345,7 @@ async fn test_completion_mode(cx: &mut TestAppContext) {
initial_state: "SubˇErr".into(),
buffer_marked_text: "<Sub|Err>".into(),
completion_text: "SubscriptionError",
expected_with_insertion_mode: "SubscriptionErrorˇErr".into(),
expected_with_insert_mode: "SubscriptionErrorˇErr".into(),
expected_with_replace_mode: "SubscriptionErrorˇ".into(),
expected_with_replace_subsequence_mode: "SubscriptionErrorˇ".into(),
expected_with_replace_suffix_mode: "SubscriptionErrorˇErr".into(),
@@ -9320,7 +9355,7 @@ async fn test_completion_mode(cx: &mut TestAppContext) {
initial_state: "Suˇscrirr".into(),
buffer_marked_text: "<Su|scrirr>".into(),
completion_text: "SubscriptionError",
expected_with_insertion_mode: "SubscriptionErrorˇscrirr".into(),
expected_with_insert_mode: "SubscriptionErrorˇscrirr".into(),
expected_with_replace_mode: "SubscriptionErrorˇ".into(),
expected_with_replace_subsequence_mode: "SubscriptionErrorˇ".into(),
expected_with_replace_suffix_mode: "SubscriptionErrorˇscrirr".into(),
@@ -9330,7 +9365,7 @@ async fn test_completion_mode(cx: &mut TestAppContext) {
initial_state: "foo(indˇix)".into(),
buffer_marked_text: "foo(<ind|ix>)".into(),
completion_text: "node_index",
expected_with_insertion_mode: "foo(node_indexˇix)".into(),
expected_with_insert_mode: "foo(node_indexˇix)".into(),
expected_with_replace_mode: "foo(node_indexˇ)".into(),
expected_with_replace_subsequence_mode: "foo(node_indexˇix)".into(),
expected_with_replace_suffix_mode: "foo(node_indexˇix)".into(),
@@ -9339,7 +9374,7 @@ async fn test_completion_mode(cx: &mut TestAppContext) {
for run in runs {
let run_variations = [
(LspInsertMode::Insert, run.expected_with_insertion_mode),
(LspInsertMode::Insert, run.expected_with_insert_mode),
(LspInsertMode::Replace, run.expected_with_replace_mode),
(
LspInsertMode::ReplaceSubsequence,
@@ -9395,6 +9430,98 @@ async fn test_completion_mode(cx: &mut TestAppContext) {
}
}
#[gpui::test]
async fn test_completion_with_mode_specified_by_action(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorLspTestContext::new_rust(
lsp::ServerCapabilities {
completion_provider: Some(lsp::CompletionOptions {
resolve_provider: Some(true),
..Default::default()
}),
..Default::default()
},
cx,
)
.await;
let initial_state = "SubˇError";
let buffer_marked_text = "<Sub|Error>";
let completion_text = "SubscriptionError";
let expected_with_insert_mode = "SubscriptionErrorˇError";
let expected_with_replace_mode = "SubscriptionErrorˇ";
update_test_language_settings(&mut cx, |settings| {
settings.defaults.completions = Some(CompletionSettings {
words: WordsCompletionMode::Disabled,
// set the opposite here to ensure that the action is overriding the default behavior
lsp_insert_mode: LspInsertMode::Insert,
lsp: true,
lsp_fetch_timeout_ms: 0,
});
});
cx.set_state(initial_state);
cx.update_editor(|editor, window, cx| {
editor.show_completions(&ShowCompletions { trigger: None }, window, cx);
});
let counter = Arc::new(AtomicUsize::new(0));
handle_completion_request_with_insert_and_replace(
&mut cx,
&buffer_marked_text,
vec![completion_text],
counter.clone(),
)
.await;
cx.condition(|editor, _| editor.context_menu_visible())
.await;
assert_eq!(counter.load(atomic::Ordering::Acquire), 1);
let apply_additional_edits = cx.update_editor(|editor, window, cx| {
editor
.confirm_completion_replace(&ConfirmCompletionReplace, window, cx)
.unwrap()
});
cx.assert_editor_state(&expected_with_replace_mode);
handle_resolve_completion_request(&mut cx, None).await;
apply_additional_edits.await.unwrap();
update_test_language_settings(&mut cx, |settings| {
settings.defaults.completions = Some(CompletionSettings {
words: WordsCompletionMode::Disabled,
// set the opposite here to ensure that the action is overriding the default behavior
lsp_insert_mode: LspInsertMode::Replace,
lsp: true,
lsp_fetch_timeout_ms: 0,
});
});
cx.set_state(initial_state);
cx.update_editor(|editor, window, cx| {
editor.show_completions(&ShowCompletions { trigger: None }, window, cx);
});
handle_completion_request_with_insert_and_replace(
&mut cx,
&buffer_marked_text,
vec![completion_text],
counter.clone(),
)
.await;
cx.condition(|editor, _| editor.context_menu_visible())
.await;
assert_eq!(counter.load(atomic::Ordering::Acquire), 2);
let apply_additional_edits = cx.update_editor(|editor, window, cx| {
editor
.confirm_completion_insert(&ConfirmCompletionInsert, window, cx)
.unwrap()
});
cx.assert_editor_state(&expected_with_insert_mode);
handle_resolve_completion_request(&mut cx, None).await;
apply_additional_edits.await.unwrap();
}
#[gpui::test]
async fn test_completion(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -12539,6 +12666,7 @@ async fn test_language_server_restart_due_to_settings_change(cx: &mut TestAppCon
initialization_options: Some(json!({
"some other init value": false
})),
enable_lsp_tasks: false,
},
);
});
@@ -12558,6 +12686,7 @@ async fn test_language_server_restart_due_to_settings_change(cx: &mut TestAppCon
initialization_options: Some(json!({
"anotherInitValue": false
})),
enable_lsp_tasks: false,
},
);
});
@@ -12577,6 +12706,7 @@ async fn test_language_server_restart_due_to_settings_change(cx: &mut TestAppCon
initialization_options: Some(json!({
"anotherInitValue": false
})),
enable_lsp_tasks: false,
},
);
});
@@ -12594,6 +12724,7 @@ async fn test_language_server_restart_due_to_settings_change(cx: &mut TestAppCon
binary: None,
settings: None,
initialization_options: None,
enable_lsp_tasks: false,
},
);
});

View File

@@ -211,6 +211,7 @@ impl EditorElement {
register_action(editor, window, Editor::sort_lines_case_insensitive);
register_action(editor, window, Editor::reverse_lines);
register_action(editor, window, Editor::shuffle_lines);
register_action(editor, window, Editor::toggle_case);
register_action(editor, window, Editor::convert_to_upper_case);
register_action(editor, window, Editor::convert_to_lower_case);
register_action(editor, window, Editor::convert_to_title_case);
@@ -461,6 +462,20 @@ impl EditorElement {
cx.propagate();
}
});
register_action(editor, window, |editor, action, window, cx| {
if let Some(task) = editor.confirm_completion_replace(action, window, cx) {
task.detach_and_notify_err(window, cx);
} else {
cx.propagate();
}
});
register_action(editor, window, |editor, action, window, cx| {
if let Some(task) = editor.confirm_completion_insert(action, window, cx) {
task.detach_and_notify_err(window, cx);
} else {
cx.propagate();
}
});
register_action(editor, window, |editor, action, window, cx| {
if let Some(task) = editor.compose_completion(action, window, cx) {
task.detach_and_notify_err(window, cx);

View File

@@ -1,12 +1,25 @@
use std::sync::Arc;
use crate::Editor;
use collections::HashMap;
use futures::stream::FuturesUnordered;
use gpui::{App, AppContext as _, Entity, Task};
use itertools::Itertools;
use language::Buffer;
use language::Language;
use lsp::LanguageServerId;
use lsp::LanguageServerName;
use multi_buffer::Anchor;
use project::LanguageServerToQuery;
use project::LocationLink;
use project::Project;
use project::TaskSourceKind;
use project::lsp_store::lsp_ext_command::GetLspRunnables;
use smol::stream::StreamExt;
use task::ResolvedTask;
use task::TaskContext;
use text::BufferId;
use util::ResultExt as _;
pub(crate) fn find_specific_language_server_in_selection<F>(
editor: &Editor,
@@ -60,3 +73,83 @@ where
None
})
}
pub fn lsp_tasks(
project: Entity<Project>,
task_sources: &HashMap<LanguageServerName, Vec<BufferId>>,
for_position: Option<text::Anchor>,
cx: &mut App,
) -> Task<Vec<(TaskSourceKind, Vec<(Option<LocationLink>, ResolvedTask)>)>> {
let mut lsp_task_sources = task_sources
.iter()
.map(|(name, buffer_ids)| {
let buffers = buffer_ids
.iter()
.filter_map(|&buffer_id| project.read(cx).buffer_for_id(buffer_id, cx))
.collect::<Vec<_>>();
language_server_for_buffers(project.clone(), name.clone(), buffers, cx)
})
.collect::<FuturesUnordered<_>>();
cx.spawn(async move |cx| {
let mut lsp_tasks = Vec::new();
let lsp_task_context = TaskContext::default();
while let Some(server_to_query) = lsp_task_sources.next().await {
if let Some((server_id, buffers)) = server_to_query {
let source_kind = TaskSourceKind::Lsp(server_id);
let id_base = source_kind.to_id_base();
let mut new_lsp_tasks = Vec::new();
for buffer in buffers {
if let Ok(runnables_task) = project.update(cx, |project, cx| {
let buffer_id = buffer.read(cx).remote_id();
project.request_lsp(
buffer,
LanguageServerToQuery::Other(server_id),
GetLspRunnables {
buffer_id,
position: for_position,
},
cx,
)
}) {
if let Some(new_runnables) = runnables_task.await.log_err() {
new_lsp_tasks.extend(new_runnables.runnables.into_iter().filter_map(
|(location, runnable)| {
let resolved_task =
runnable.resolve_task(&id_base, &lsp_task_context)?;
Some((location, resolved_task))
},
));
}
}
}
lsp_tasks.push((source_kind, new_lsp_tasks));
}
}
lsp_tasks
})
}
fn language_server_for_buffers(
project: Entity<Project>,
name: LanguageServerName,
candidates: Vec<Entity<Buffer>>,
cx: &mut App,
) -> Task<Option<(LanguageServerId, Vec<Entity<Buffer>>)>> {
cx.spawn(async move |cx| {
for buffer in &candidates {
let server_id = buffer
.update(cx, |buffer, cx| {
project.update(cx, |project, cx| {
project.language_server_id_for_name(buffer, &name.0, cx)
})
})
.ok()?
.await;
if let Some(server_id) = server_id {
return Some((server_id, candidates));
}
}
None
})
}

View File

@@ -1,9 +1,12 @@
use crate::Editor;
use collections::HashMap;
use gpui::{App, Task, Window};
use project::Location;
use lsp::LanguageServerName;
use project::{Location, project_settings::ProjectSettings};
use settings::Settings as _;
use task::{TaskContext, TaskVariables, VariableName};
use text::{ToOffset, ToPoint};
use text::{BufferId, ToOffset, ToPoint};
impl Editor {
pub fn task_context(&self, window: &mut Window, cx: &mut App) -> Task<Option<TaskContext>> {
@@ -70,4 +73,38 @@ impl Editor {
})
})
}
pub fn lsp_task_sources(&self, cx: &App) -> HashMap<LanguageServerName, Vec<BufferId>> {
let lsp_settings = &ProjectSettings::get_global(cx).lsp;
self.buffer()
.read(cx)
.all_buffers()
.into_iter()
.filter_map(|buffer| {
let lsp_tasks_source = buffer
.read(cx)
.language()?
.context_provider()?
.lsp_task_source()?;
if lsp_settings
.get(&lsp_tasks_source)
.map_or(true, |s| s.enable_lsp_tasks)
{
let buffer_id = buffer.read(cx).remote_id();
Some((lsp_tasks_source, buffer_id))
} else {
None
}
})
.fold(
HashMap::default(),
|mut acc, (lsp_task_source, buffer_id)| {
acc.entry(lsp_task_source)
.or_insert_with(Vec::new)
.push(buffer_id);
acc
},
)
}
}
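lsp_task_sources above collects, per language server name, the ids of all open buffers whose language declares an LSP task source and whose lsp settings have enable_lsp_tasks on. A simplified sketch of the grouping step, with plain String / u64 standing in for LanguageServerName / BufferId and the server names used purely as examples:

use std::collections::HashMap;

fn group_task_sources(
    pairs: impl IntoIterator<Item = (String, u64)>,
) -> HashMap<String, Vec<u64>> {
    pairs
        .into_iter()
        .fold(HashMap::new(), |mut acc, (server, buffer_id)| {
            // Buffers served by the same language server end up in one bucket.
            acc.entry(server).or_insert_with(Vec::new).push(buffer_id);
            acc
        })
}

fn main() {
    let grouped = group_task_sources([
        ("rust-analyzer".to_string(), 1),
        ("rust-analyzer".to_string(), 2),
        ("gopls".to_string(), 3),
    ]);
    assert_eq!(grouped["rust-analyzer"].len(), 2);
    assert_eq!(grouped["gopls"].len(), 1);
}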

View File

@@ -244,10 +244,6 @@ impl language::File for GitBlob {
self.worktree_id
}
fn as_any(&self) -> &dyn Any {
self
}
fn to_proto(&self, _cx: &App) -> language::proto::File {
unimplemented!()
}
@@ -282,10 +278,6 @@ impl language::File for CommitMetadataFile {
self.worktree_id
}
fn as_any(&self) -> &dyn Any {
self
}
fn to_proto(&self, _: &App) -> language::proto::File {
unimplemented!()
}

View File

@@ -3953,8 +3953,7 @@ impl Render for GitPanelMessageTooltip {
}
}
#[derive(IntoElement, IntoComponent)]
#[component(scope = "Version Control")]
#[derive(IntoElement, RegisterComponent)]
pub struct PanelRepoFooter {
active_repository: SharedString,
branch: Option<Branch>,
@@ -4134,8 +4133,12 @@ impl RenderOnce for PanelRepoFooter {
}
}
impl ComponentPreview for PanelRepoFooter {
fn preview(_window: &mut Window, _cx: &mut App) -> AnyElement {
impl Component for PanelRepoFooter {
fn scope() -> ComponentScope {
ComponentScope::VersionControl
}
fn preview(_window: &mut Window, _cx: &mut App) -> Option<AnyElement> {
let unknown_upstream = None;
let no_remote_upstream = Some(UpstreamTracking::Gone);
let ahead_of_upstream = Some(
@@ -4207,192 +4210,180 @@ impl ComponentPreview for PanelRepoFooter {
}
let example_width = px(340.);
v_flex()
.gap_6()
.w_full()
.flex_none()
.children(vec![
example_group_with_title(
"Action Button States",
vec![
single_example(
"No Branch",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
active_repository(1).clone(),
None,
))
.into_any_element(),
)
.grow(),
single_example(
"Remote status unknown",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
active_repository(2).clone(),
Some(branch(unknown_upstream)),
))
.into_any_element(),
)
.grow(),
single_example(
"No Remote Upstream",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
active_repository(3).clone(),
Some(branch(no_remote_upstream)),
))
.into_any_element(),
)
.grow(),
single_example(
"Not Ahead or Behind",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
active_repository(4).clone(),
Some(branch(not_ahead_or_behind_upstream)),
))
.into_any_element(),
)
.grow(),
single_example(
"Behind remote",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
active_repository(5).clone(),
Some(branch(behind_upstream)),
))
.into_any_element(),
)
.grow(),
single_example(
"Ahead of remote",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
active_repository(6).clone(),
Some(branch(ahead_of_upstream)),
))
.into_any_element(),
)
.grow(),
single_example(
"Ahead and behind remote",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
active_repository(7).clone(),
Some(branch(ahead_and_behind_upstream)),
))
.into_any_element(),
)
.grow(),
],
)
.grow()
.vertical(),
])
.children(vec![
example_group_with_title(
"Labels",
vec![
single_example(
"Short Branch & Repo",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
SharedString::from("zed"),
Some(custom("main", behind_upstream)),
))
.into_any_element(),
)
.grow(),
single_example(
"Long Branch",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
SharedString::from("zed"),
Some(custom(
"redesign-and-update-git-ui-list-entry-style",
behind_upstream,
)),
))
.into_any_element(),
)
.grow(),
single_example(
"Long Repo",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
SharedString::from("zed-industries-community-examples"),
Some(custom("gpui", ahead_of_upstream)),
))
.into_any_element(),
)
.grow(),
single_example(
"Long Repo & Branch",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
SharedString::from("zed-industries-community-examples"),
Some(custom(
"redesign-and-update-git-ui-list-entry-style",
behind_upstream,
)),
))
.into_any_element(),
)
.grow(),
single_example(
"Uppercase Repo",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
SharedString::from("LICENSES"),
Some(custom("main", ahead_of_upstream)),
))
.into_any_element(),
)
.grow(),
single_example(
"Uppercase Branch",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
SharedString::from("zed"),
Some(custom("update-README", behind_upstream)),
))
.into_any_element(),
)
.grow(),
],
)
.grow()
.vertical(),
])
.into_any_element()
Some(
v_flex()
.gap_6()
.w_full()
.flex_none()
.children(vec![
example_group_with_title(
"Action Button States",
vec![
single_example(
"No Branch",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
active_repository(1).clone(),
None,
))
.into_any_element(),
),
single_example(
"Remote status unknown",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
active_repository(2).clone(),
Some(branch(unknown_upstream)),
))
.into_any_element(),
),
single_example(
"No Remote Upstream",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
active_repository(3).clone(),
Some(branch(no_remote_upstream)),
))
.into_any_element(),
),
single_example(
"Not Ahead or Behind",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
active_repository(4).clone(),
Some(branch(not_ahead_or_behind_upstream)),
))
.into_any_element(),
),
single_example(
"Behind remote",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
active_repository(5).clone(),
Some(branch(behind_upstream)),
))
.into_any_element(),
),
single_example(
"Ahead of remote",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
active_repository(6).clone(),
Some(branch(ahead_of_upstream)),
))
.into_any_element(),
),
single_example(
"Ahead and behind remote",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
active_repository(7).clone(),
Some(branch(ahead_and_behind_upstream)),
))
.into_any_element(),
),
],
)
.grow()
.vertical(),
])
.children(vec![
example_group_with_title(
"Labels",
vec![
single_example(
"Short Branch & Repo",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
SharedString::from("zed"),
Some(custom("main", behind_upstream)),
))
.into_any_element(),
),
single_example(
"Long Branch",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
SharedString::from("zed"),
Some(custom(
"redesign-and-update-git-ui-list-entry-style",
behind_upstream,
)),
))
.into_any_element(),
),
single_example(
"Long Repo",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
SharedString::from("zed-industries-community-examples"),
Some(custom("gpui", ahead_of_upstream)),
))
.into_any_element(),
),
single_example(
"Long Repo & Branch",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
SharedString::from("zed-industries-community-examples"),
Some(custom(
"redesign-and-update-git-ui-list-entry-style",
behind_upstream,
)),
))
.into_any_element(),
),
single_example(
"Uppercase Repo",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
SharedString::from("LICENSES"),
Some(custom("main", ahead_of_upstream)),
))
.into_any_element(),
),
single_example(
"Uppercase Branch",
div()
.w(example_width)
.overflow_hidden()
.child(PanelRepoFooter::new_preview(
SharedString::from("zed"),
Some(custom("update-README", behind_upstream)),
))
.into_any_element(),
),
],
)
.grow()
.vertical(),
])
.into_any_element(),
)
}
}
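These previews now implement a single Component trait (registered via RegisterComponent) whose scope is a method and whose preview is optional. The following is a stand-alone analog of that shape, not the real component-system API; `Element` stands in for gpui's AnyElement and the preview string is only illustrative:

struct Element(String);

enum ComponentScope {
    VersionControl,
}

trait Component {
    fn scope() -> ComponentScope;
    // Components with nothing to preview can simply return None.
    fn preview() -> Option<Element> {
        None
    }
}

struct PanelRepoFooterPreview;

impl Component for PanelRepoFooterPreview {
    fn scope() -> ComponentScope {
        ComponentScope::VersionControl
    }

    fn preview() -> Option<Element> {
        Some(Element("repo footer examples".into()))
    }
}

fn main() {
    assert!(matches!(
        PanelRepoFooterPreview::scope(),
        ComponentScope::VersionControl
    ));
    let preview = PanelRepoFooterPreview::preview().expect("has a preview");
    assert_eq!(preview.0, "repo footer examples");
}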

View File

@@ -441,8 +441,8 @@ mod remote_button {
}
}
#[derive(IntoElement, IntoComponent)]
#[component(scope = "Version Control")]
/// A visual representation of a file's Git status.
#[derive(IntoElement, RegisterComponent)]
pub struct GitStatusIcon {
status: FileStatus,
}
@@ -484,8 +484,12 @@ impl RenderOnce for GitStatusIcon {
}
// View this component preview using `workspace: open component-preview`
impl ComponentPreview for GitStatusIcon {
fn preview(_window: &mut Window, _cx: &mut App) -> AnyElement {
impl Component for GitStatusIcon {
fn scope() -> ComponentScope {
ComponentScope::VersionControl
}
fn preview(_window: &mut Window, _cx: &mut App) -> Option<AnyElement> {
fn tracked_file_status(code: StatusCode) -> FileStatus {
FileStatus::Tracked(git::status::TrackedStatus {
index_status: code,
@@ -502,17 +506,19 @@ impl ComponentPreview for GitStatusIcon {
}
.into();
v_flex()
.gap_6()
.children(vec![example_group(vec![
single_example("Modified", GitStatusIcon::new(modified).into_any_element()),
single_example("Added", GitStatusIcon::new(added).into_any_element()),
single_example("Deleted", GitStatusIcon::new(deleted).into_any_element()),
single_example(
"Conflicted",
GitStatusIcon::new(conflict).into_any_element(),
),
])])
.into_any_element()
Some(
v_flex()
.gap_6()
.children(vec![example_group(vec![
single_example("Modified", GitStatusIcon::new(modified).into_any_element()),
single_example("Added", GitStatusIcon::new(added).into_any_element()),
single_example("Deleted", GitStatusIcon::new(deleted).into_any_element()),
single_example(
"Conflicted",
GitStatusIcon::new(conflict).into_any_element(),
),
])])
.into_any_element(),
)
}
}

View File

@@ -1005,8 +1005,7 @@ impl Render for ProjectDiffToolbar {
}
}
#[derive(IntoElement, IntoComponent)]
#[component(scope = "Version Control")]
#[derive(IntoElement, RegisterComponent)]
pub struct ProjectDiffEmptyState {
pub no_repo: bool,
pub can_push_and_pull: bool,
@@ -1178,8 +1177,12 @@ mod preview {
use super::ProjectDiffEmptyState;
// View this component preview using `workspace: open component-preview`
impl ComponentPreview for ProjectDiffEmptyState {
fn preview(_window: &mut Window, _cx: &mut App) -> AnyElement {
impl Component for ProjectDiffEmptyState {
fn scope() -> ComponentScope {
ComponentScope::VersionControl
}
fn preview(_window: &mut Window, _cx: &mut App) -> Option<AnyElement> {
let unknown_upstream: Option<UpstreamTracking> = None;
let ahead_of_upstream: Option<UpstreamTracking> = Some(
UpstreamTrackingStatus {
@@ -1244,46 +1247,48 @@ mod preview {
let (width, height) = (px(480.), px(320.));
v_flex()
.gap_6()
.children(vec![
example_group(vec![
single_example(
"No Repo",
div()
.w(width)
.h(height)
.child(no_repo_state)
.into_any_element(),
),
single_example(
"No Changes",
div()
.w(width)
.h(height)
.child(no_changes_state)
.into_any_element(),
),
single_example(
"Unknown Upstream",
div()
.w(width)
.h(height)
.child(unknown_upstream_state)
.into_any_element(),
),
single_example(
"Ahead of Remote",
div()
.w(width)
.h(height)
.child(ahead_of_upstream_state)
.into_any_element(),
),
Some(
v_flex()
.gap_6()
.children(vec![
example_group(vec![
single_example(
"No Repo",
div()
.w(width)
.h(height)
.child(no_repo_state)
.into_any_element(),
),
single_example(
"No Changes",
div()
.w(width)
.h(height)
.child(no_changes_state)
.into_any_element(),
),
single_example(
"Unknown Upstream",
div()
.w(width)
.h(height)
.child(unknown_upstream_state)
.into_any_element(),
),
single_example(
"Ahead of Remote",
div()
.w(width)
.h(height)
.child(ahead_of_upstream_state)
.into_any_element(),
),
])
.vertical(),
])
.vertical(),
])
.into_any_element()
.into_any_element(),
)
}
}
}

View File

@@ -42,13 +42,10 @@ use std::{
/// }
/// register_action!(Paste);
/// ```
pub trait Action: 'static + Send {
pub trait Action: Any + Send {
/// Clone the action into a new box
fn boxed_clone(&self) -> Box<dyn Action>;
/// Cast the action to the any type
fn as_any(&self) -> &dyn Any;
/// Do a partial equality check on this action and the other
fn partial_eq(&self, action: &dyn Action) -> bool;
@@ -94,9 +91,9 @@ impl std::fmt::Debug for dyn Action {
}
impl dyn Action {
/// Get the type id of this action
pub fn type_id(&self) -> TypeId {
self.as_any().type_id()
/// Type-erase Action type.
pub fn as_any(&self) -> &dyn Any {
self as &dyn Any
}
}
@@ -557,9 +554,6 @@ macro_rules! __impl_action {
::std::boxed::Box::new(self.clone())
}
fn as_any(&self) -> &dyn ::std::any::Any {
self
}
$($items)*
}
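The Action change above removes the per-implementor as_any method: the trait now requires Any as a supertrait, and the cast is provided once on the trait object itself (this relies on trait-object upcasting, available on recent stable Rust). A minimal stand-alone sketch of the pattern:

use std::any::Any;

trait Action: Any {
    fn name(&self) -> &'static str;
}

impl dyn Action {
    // One shared cast to `&dyn Any`, instead of an `as_any` method on every
    // implementor. Requires trait-object upcasting (recent stable Rust).
    fn as_any(&self) -> &dyn Any {
        self as &dyn Any
    }
}

struct Paste;

impl Action for Paste {
    fn name(&self) -> &'static str {
        "Paste"
    }
}

fn main() {
    let action: Box<dyn Action> = Box::new(Paste);
    assert_eq!(action.name(), "Paste");
    // Downcasting still works through the blanket cast.
    assert!(action.as_any().downcast_ref::<Paste>().is_some());
}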

View File

@@ -3,6 +3,7 @@ use std::time::{Duration, Instant};
use crate::{AnyElement, App, Element, ElementId, GlobalElementId, IntoElement, Window};
pub use easing::*;
use smallvec::SmallVec;
/// An animation that can be applied to an element.
pub struct Animation {
@@ -50,6 +51,24 @@ pub trait AnimationExt {
animation: Animation,
animator: impl Fn(Self, f32) -> Self + 'static,
) -> AnimationElement<Self>
where
Self: Sized,
{
AnimationElement {
id: id.into(),
element: Some(self),
animator: Box::new(move |this, _, value| animator(this, value)),
animations: smallvec::smallvec![animation],
}
}
/// Render this component or element with a chain of animations
fn with_animations(
self,
id: impl Into<ElementId>,
animations: Vec<Animation>,
animator: impl Fn(Self, usize, f32) -> Self + 'static,
) -> AnimationElement<Self>
where
Self: Sized,
{
@@ -57,7 +76,7 @@ pub trait AnimationExt {
id: id.into(),
element: Some(self),
animator: Box::new(animator),
animation,
animations: animations.into(),
}
}
}
@@ -68,8 +87,8 @@ impl<E> AnimationExt for E {}
pub struct AnimationElement<E> {
id: ElementId,
element: Option<E>,
animation: Animation,
animator: Box<dyn Fn(E, f32) -> E + 'static>,
animations: SmallVec<[Animation; 1]>,
animator: Box<dyn Fn(E, usize, f32) -> E + 'static>,
}
impl<E> AnimationElement<E> {
@@ -91,6 +110,7 @@ impl<E: IntoElement + 'static> IntoElement for AnimationElement<E> {
struct AnimationState {
start: Instant,
animation_ix: usize,
}
impl<E: IntoElement + 'static> Element for AnimationElement<E> {
@@ -108,22 +128,30 @@ impl<E: IntoElement + 'static> Element for AnimationElement<E> {
cx: &mut App,
) -> (crate::LayoutId, Self::RequestLayoutState) {
window.with_element_state(global_id.unwrap(), |state, window| {
let state = state.unwrap_or_else(|| AnimationState {
let mut state = state.unwrap_or_else(|| AnimationState {
start: Instant::now(),
animation_ix: 0,
});
let mut delta =
state.start.elapsed().as_secs_f32() / self.animation.duration.as_secs_f32();
let animation_ix = state.animation_ix;
let mut delta = state.start.elapsed().as_secs_f32()
/ self.animations[animation_ix].duration.as_secs_f32();
let mut done = false;
if delta > 1.0 {
if self.animation.oneshot {
done = true;
if self.animations[animation_ix].oneshot {
if animation_ix >= self.animations.len() - 1 {
done = true;
} else {
state.start = Instant::now();
state.animation_ix += 1;
}
delta = 1.0;
} else {
delta %= 1.0;
}
}
let delta = (self.animation.easing)(delta);
let delta = (self.animations[animation_ix].easing)(delta);
debug_assert!(
(0.0..=1.0).contains(&delta),
@@ -131,7 +159,7 @@ impl<E: IntoElement + 'static> Element for AnimationElement<E> {
);
let element = self.element.take().expect("should only be called once");
let mut element = (self.animator)(element, delta).into_any_element();
let mut element = (self.animator)(element, animation_ix, delta).into_any_element();
if !done {
window.request_animation_frame();
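with_animations chains several one-shot animations: the element state tracks which animation is active (animation_ix) and restarts the clock whenever one finishes, until the last one completes. A simplified, pure sketch of that bookkeeping, ignoring looping (non-oneshot) animations and easing:

// Given per-animation durations in seconds and the time elapsed since the
// chain started, return the active animation's index and its progress in
// 0.0..=1.0. The real element instead stores `start: Instant` and
// `animation_ix` and advances them frame by frame.
fn chain_progress(durations: &[f32], mut elapsed: f32) -> (usize, f32) {
    assert!(!durations.is_empty());
    for (ix, &duration) in durations.iter().enumerate() {
        if elapsed <= duration || ix == durations.len() - 1 {
            return (ix, (elapsed / duration).min(1.0));
        }
        elapsed -= duration;
    }
    unreachable!()
}

fn main() {
    let durations = [1.0, 2.0];
    assert_eq!(chain_progress(&durations, 0.5), (0, 0.5));
    assert_eq!(chain_progress(&durations, 2.0), (1, 0.5));
    assert_eq!(chain_progress(&durations, 10.0), (1, 1.0));
}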

View File

@@ -597,10 +597,6 @@ mod tests {
Box::new(TestAction)
}
fn as_any(&self) -> &dyn ::std::any::Any {
self
}
fn build(_value: serde_json::Value) -> anyhow::Result<Box<dyn Action>>
where
Self: Sized,

View File

@@ -513,9 +513,8 @@ fn fs_quad(input: QuadVarying) -> @location(0) vec4<f32> {
let point = input.position.xy - quad.bounds.origin;
let center_to_point = point - half_size;
// Signed distance field threshold for inclusion of pixels. Use of 0.5
// instead of 1.0 causes the width of rounded borders to appear more
// consistent with straight borders.
// Signed distance field threshold for inclusion of pixels. 0.5 is the
// minimum distance between the center of the pixel and the edge.
let antialias_threshold = 0.5;
// Radius of the nearest corner
@@ -612,24 +611,29 @@ fn fs_quad(input: QuadVarying) -> @location(0) vec4<f32> {
// Dashed border logic when border_style == 1
if (quad.border_style == 1) {
// Position in "dash space", where each dash period has length 1
// Position along the perimeter in "dash space", where each dash
// period has length 1
var t = 0.0;
// Total number of dash periods, so that the dash spacing can be
// adjusted to evenly divide it
var max_t = 0.0;
// Since border width affects the dash size, the density of dashes
// varies, and this is indicated by dash_velocity. It has units
// (dash period / pixel). So a dash velocity of (1 / 10) is 1 dash
// every 10 pixels.
var dash_velocity = 0.0;
// Border width is proportional to dash size. This is the behavior
// used by browsers, but also avoids dashes from different segments
// overlapping when dash size is smaller than the border width.
//
// Dash pattern: (2 * border width) dash, (1 * border width) gap
let dash_length_per_width = 2.0;
let dash_gap_per_width = 1.0;
let dash_period_per_width = dash_length_per_width + dash_gap_per_width;
// Since the dash size is determined by border width, the density of
// dashes varies. Multiplying a pixel distance by this returns a
// position in dash space - it has units (dash period / pixels). So
// a dash velocity of (1 / 10) is 1 dash every 10 pixels.
var dash_velocity = 0.0;
// Dividing this by the border width gives the dash velocity
let dv_numerator = 1.0 / dash_period_per_width;
@@ -645,8 +649,8 @@ fn fs_quad(input: QuadVarying) -> @location(0) vec4<f32> {
t = select(point.y, point.x, is_horizontal) * dash_velocity;
max_t = select(size.y, size.x, is_horizontal) * dash_velocity;
} else {
// When corners are rounded, the dashes are laid out around the
// whole perimeter.
// When corners are rounded, the dashes are laid out clockwise
// around the whole perimeter.
let r_tr = quad.corner_radii.top_right;
let r_br = quad.corner_radii.bottom_right;
@@ -699,17 +703,28 @@ fn fs_quad(input: QuadVarying) -> @location(0) vec4<f32> {
if (center_to_point.x >= 0.0) {
if (center_to_point.y < 0.0) {
dash_velocity = corner_dash_velocity_tr;
// Subtracted because radians is pi/2 to 0 when
// going clockwise around the top right corner,
// since the y axis has been flipped
t = upto_r - corner_t * dash_velocity;
} else {
dash_velocity = corner_dash_velocity_br;
// Added because radians is 0 to pi/2 when going
// clockwise around the bottom-right corner
t = upto_br + corner_t * dash_velocity;
}
} else {
if (center_to_point.y >= 0.0) {
dash_velocity = corner_dash_velocity_bl;
// Subtracted because radians is pi/2 to 0 when
// going clockwise around the bottom-left corner,
// since the x axis has been flipped
t = upto_l - corner_t * dash_velocity;
} else {
dash_velocity = corner_dash_velocity_tl;
// Added because radians is 0 to pi/2 when going
// clockwise around the top-left corner, since both
// axes were flipped
t = upto_tl + corner_t * dash_velocity;
}
}
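The dash-space arithmetic above works in units of dash periods: with a dash of 2x the border width and a gap of 1x, one period spans 3x the border width, so the dash velocity is 1 / (3 * border_width) periods per pixel. A small worked example of those units, in Rust purely for the arithmetic:

fn dash_velocity(border_width: f32) -> f32 {
    // Dash pattern: (2 * border width) dash, (1 * border width) gap.
    let dash_length_per_width = 2.0;
    let dash_gap_per_width = 1.0;
    let dash_period_per_width = dash_length_per_width + dash_gap_per_width;
    // Units: dash periods per pixel.
    (1.0 / dash_period_per_width) / border_width
}

fn main() {
    // A 2px border gives one dash period every 6px along a straight edge.
    let velocity = dash_velocity(2.0);
    assert!((velocity - 1.0 / 6.0).abs() < 1e-6);
    // 30px along that edge corresponds to 5 dash periods.
    assert!((30.0 * velocity - 5.0).abs() < 1e-4);
}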

View File

@@ -121,7 +121,8 @@ fragment float4 quad_fragment(QuadFragmentInput input [[stage_in]],
float2 point = input.position.xy - float2(quad.bounds.origin.x, quad.bounds.origin.y);
float2 center_to_point = point - half_size;
// Signed distance field threshold for inclusion of pixels
// Signed distance field threshold for inclusion of pixels. 0.5 is the
// minimum distance between the center of the pixel and the edge.
const float antialias_threshold = 0.5;
// Radius of the nearest corner
@@ -211,24 +212,29 @@ fragment float4 quad_fragment(QuadFragmentInput input [[stage_in]],
// Dashed border logic when border_style == 1
if (quad.border_style == 1) {
// Position in "dash space", where each dash period has length 1
// Position along the perimeter in "dash space", where each dash
// period has length 1
float t = 0.0;
// Total number of dash periods, so that the dash spacing can be
// adjusted to evenly divide it
float max_t = 0.0;
// Since border width affects the dash size, the density of dashes
// varies, and this is indicated by dash_velocity. It has units
// (dash period / pixel). So a dash velocity of (1 / 10) is 1 dash
// every 10 pixels.
float dash_velocity = 0.0;
// Border width is proportional to dash size. This is the behavior
// used by browsers, but also avoids dashes from different segments
// overlapping when dash size is smaller than the border width.
//
// Dash pattern: (2 * border width) dash, (1 * border width) gap
const float dash_length_per_width = 2.0;
const float dash_gap_per_width = 1.0;
const float dash_period_per_width = dash_length_per_width + dash_gap_per_width;
// Since the dash size is determined by border width, the density of
// dashes varies. Multiplying a pixel distance by this returns a
// position in dash space - it has units (dash period / pixels). So
// a dash velocity of (1 / 10) is 1 dash every 10 pixels.
float dash_velocity = 0.0;
// Dividing this by the border width gives the dash velocity
const float dv_numerator = 1.0 / dash_period_per_width;
@@ -244,8 +250,8 @@ fragment float4 quad_fragment(QuadFragmentInput input [[stage_in]],
max_t = is_horizontal ? size.x : size.y;
max_t *= dash_velocity;
} else {
// When corners are rounded, the dashes are laid out around the
// whole perimeter.
// When corners are rounded, the dashes are laid out clockwise
// around the whole perimeter.
float r_tr = quad.corner_radii.top_right;
float r_br = quad.corner_radii.bottom_right;
@@ -297,17 +303,28 @@ fragment float4 quad_fragment(QuadFragmentInput input [[stage_in]],
if (center_to_point.x >= 0.0) {
if (center_to_point.y < 0.0) {
dash_velocity = corner_dash_velocity_tr;
// Subtracted because radians is pi/2 to 0 when
// going clockwise around the top right corner,
// since the y axis has been flipped
t = upto_r - corner_t * dash_velocity;
} else {
dash_velocity = corner_dash_velocity_br;
// Added because radians is 0 to pi/2 when going
// clockwise around the bottom-right corner
t = upto_br + corner_t * dash_velocity;
}
} else {
if (center_to_point.y >= 0.0) {
dash_velocity = corner_dash_velocity_bl;
// Subtracted because radians is pi/2 to 0 when
// going clockwise around the bottom-left corner,
// since the x axis has been flipped
t = upto_l - corner_t * dash_velocity;
} else {
dash_velocity = corner_dash_velocity_tl;
// Added because radians is 0 to pi/2 when going
// clockwise around the top-left corner, since both
// axes were flipped
t = upto_tl + corner_t * dash_velocity;
}
}

View File

@@ -22,10 +22,6 @@ fn test_action_macros() {
unimplemented!()
}
fn as_any(&self) -> &dyn std::any::Any {
unimplemented!()
}
fn partial_eq(&self, _action: &dyn gpui::Action) -> bool {
unimplemented!()
}

View File

@@ -141,6 +141,7 @@ pub enum IconName {
InlayHint,
Keyboard,
Library,
LightBulb,
LineHeight,
Link,
ListTree,

View File

@@ -306,7 +306,7 @@ pub enum BufferEvent {
}
/// The file associated with a buffer.
pub trait File: Send + Sync {
pub trait File: Send + Sync + Any {
/// Returns the [`LocalFile`] associated with this file, if the
/// file is local.
fn as_local(&self) -> Option<&dyn LocalFile>;
@@ -336,9 +336,6 @@ pub trait File: Send + Sync {
/// This is needed for looking up project-specific settings.
fn worktree_id(&self, cx: &App) -> WorktreeId;
/// Converts this file into an [`Any`] trait object.
fn as_any(&self) -> &dyn Any;
/// Converts this file into a protobuf message.
fn to_proto(&self, cx: &App) -> rpc::proto::File;
@@ -2018,11 +2015,16 @@ impl Buffer {
}
/// Manually remove a transaction from the buffer's undo history
pub fn forget_transaction(&mut self, transaction_id: TransactionId) {
self.text.forget_transaction(transaction_id);
pub fn forget_transaction(&mut self, transaction_id: TransactionId) -> Option<Transaction> {
self.text.forget_transaction(transaction_id)
}
/// Manually merge two adjacent transactions in the buffer's undo history.
/// Retrieve a transaction from the buffer's undo history
pub fn get_transaction(&self, transaction_id: TransactionId) -> Option<&Transaction> {
self.text.get_transaction(transaction_id)
}
/// Manually merge two transactions in the buffer's undo history.
pub fn merge_transactions(&mut self, transaction: TransactionId, destination: TransactionId) {
self.text.merge_transactions(transaction, destination);
}
@@ -4610,10 +4612,6 @@ impl File for TestFile {
WorktreeId::from_usize(0)
}
fn as_any(&self) -> &dyn std::any::Any {
unimplemented!()
}
fn to_proto(&self, _: &App) -> rpc::proto::File {
unimplemented!()
}

View File

@@ -572,7 +572,11 @@ pub trait LspAdapter: 'static + Send + Sync {
}
/// Support custom initialize params.
fn prepare_initialize_params(&self, original: InitializeParams) -> Result<InitializeParams> {
fn prepare_initialize_params(
&self,
original: InitializeParams,
_: &App,
) -> Result<InitializeParams> {
Ok(original)
}

View File

@@ -370,7 +370,7 @@ fn default_words_completion_mode() -> WordsCompletionMode {
}
fn default_lsp_insert_mode() -> LspInsertMode {
LspInsertMode::Insert
LspInsertMode::ReplaceSuffix
}
fn default_lsp_fetch_timeout_ms() -> u64 {
@@ -1029,7 +1029,10 @@ fn scroll_debounce_ms() -> u64 {
#[derive(Debug, Clone, Deserialize, PartialEq, Serialize, JsonSchema)]
pub struct LanguageTaskConfig {
/// Extra task variables to set for a particular language.
#[serde(default)]
pub variables: HashMap<String, String>,
#[serde(default = "default_true")]
pub enabled: bool,
}
impl InlayHintSettings {

View File

@@ -5,6 +5,7 @@ use crate::{LanguageToolchainStore, Location, Runnable};
use anyhow::Result;
use collections::HashMap;
use gpui::{App, Task};
use lsp::LanguageServerName;
use task::{TaskTemplates, TaskVariables};
use text::BufferId;
@@ -15,6 +16,7 @@ pub struct RunnableRange {
pub runnable: Runnable,
pub extra_captures: HashMap<String, String>,
}
/// Language Contexts are used by Zed tasks to extract information about the source file where the tasks are supposed to be scheduled from.
/// Multiple context providers may be used together: by default, Zed provides a base [`BasicContextProvider`] context that fills all non-custom [`VariableName`] variants.
///
@@ -40,4 +42,9 @@ pub trait ContextProvider: Send + Sync {
) -> Option<TaskTemplates> {
None
}
/// A language server name, that can return tasks using LSP (ext) for this language.
fn lsp_task_source(&self) -> Option<LanguageServerName> {
None
}
}
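The new lsp_task_source hook defaults to None, so only languages whose context provider names a task-capable language server opt into LSP tasks. A minimal stand-alone analog of that shape; ServerName stands in for LanguageServerName, and "rust-analyzer" is used purely as an illustrative name:

type ServerName = &'static str;

trait ContextProvider {
    // Defaulted: most languages have no LSP task source.
    fn lsp_task_source(&self) -> Option<ServerName> {
        None
    }
}

struct PlainTextProvider;
impl ContextProvider for PlainTextProvider {}

struct RustProvider;
impl ContextProvider for RustProvider {
    fn lsp_task_source(&self) -> Option<ServerName> {
        Some("rust-analyzer")
    }
}

fn main() {
    assert!(PlainTextProvider.lsp_task_source().is_none());
    assert_eq!(RustProvider.lsp_task_source(), Some("rust-analyzer"));
}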

View File

@@ -1,7 +1,7 @@
use anyhow::{Context, Result, anyhow, bail};
use async_trait::async_trait;
use futures::StreamExt;
use gpui::AsyncApp;
use gpui::{App, AsyncApp};
use http_client::github::{GitHubLspBinaryVersion, latest_github_release};
pub use language::*;
use lsp::{DiagnosticTag, InitializeParams, LanguageServerBinary, LanguageServerName};
@@ -273,6 +273,7 @@ impl super::LspAdapter for CLspAdapter {
fn prepare_initialize_params(
&self,
mut original: InitializeParams,
_: &App,
) -> Result<InitializeParams> {
let experimental = json!({
"textDocument": {

View File

@@ -991,6 +991,7 @@ impl LspAdapter for PyLspAdapter {
util::command::new_smol_command(pip_path.as_path())
.arg("install")
.arg("python-lsp-server")
.arg("-U")
.output()
.await?
.status
@@ -1001,6 +1002,7 @@ impl LspAdapter for PyLspAdapter {
util::command::new_smol_command(pip_path.as_path())
.arg("install")
.arg("python-lsp-server[all]")
.arg("-U")
.output()
.await?
.status
@@ -1011,6 +1013,7 @@ impl LspAdapter for PyLspAdapter {
util::command::new_smol_command(pip_path)
.arg("install")
.arg("pylsp-mypy")
.arg("-U")
.output()
.await?
.status

Some files were not shown because too many files have changed in this diff.