Compare commits

..

29 Commits

Author SHA1 Message Date
smit
6a9fa435f0 initial idea 2025-03-17 15:29:58 +05:30
Jason Lee
b81a1ad91d gpui: Fix text underline width (#26827)
Release Notes:

- N/A 

Fixes a mistake from #24721 to make sure the underline width matches the text width.

## Before


![image](https://github.com/user-attachments/assets/1fe6a8c2-517f-41be-bdf0-0ee777b7f8aa)

## After

<img width="912" alt="image"
src="https://github.com/user-attachments/assets/222b5dcb-c0fb-4ec1-8e23-d68247621375"
/>
2025-03-15 09:18:11 -07:00
Peter Tripp
5f390f1bf8 Initial PyLSP documentation (#26835)
Closes https://github.com/zed-industries/zed/issues/26820

Release Notes:

- N/A
2025-03-15 11:03:35 -04:00
Richard Hao
c282acbe65 terminal: Don’t include line breaks for soft wrap in Assistant terminal context (#25415)
> Detects and combines wrapped lines into single logical lines, more
accurately representing the actual terminal content.


```shell
perl -i -pe \
    's/"vscode-languageserver(\/node)?"/"\@zed-industries\/vscode-languageserver$1"/g' packages/css/lib/node/cssServerMain.js
```
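The combining step described above can be sketched roughly like this (illustrative Rust; `Row` and its `wrapped` flag are hypothetical stand-ins for the terminal grid's real types):

```rust
// Illustrative sketch: combining soft-wrapped terminal rows back into logical
// lines. Rows flagged as wrapped are joined without inserting a line break.
struct Row {
    text: String,
    wrapped: bool, // true if this row soft-wraps onto the next row
}

fn logical_lines(rows: &[Row]) -> Vec<String> {
    let mut lines = Vec::new();
    let mut current = String::new();
    for row in rows {
        // Wrapped rows accumulate into the same logical line.
        current.push_str(&row.text);
        if !row.wrapped {
            lines.push(std::mem::take(&mut current));
        }
    }
    if !current.is_empty() {
        lines.push(current);
    }
    lines
}
```

A command like the perl invocation above, displayed across several terminal rows, would come back out as a single logical line.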

<img width="518" alt="image"
src="https://github.com/user-attachments/assets/52d9327c-c381-4e5f-a676-0cf84c824388"
/>

<img width="1314" alt="image"
src="https://github.com/user-attachments/assets/0a32e1f9-7e95-482e-9beb-2e8a6c40584c"
/>




Closes https://github.com/zed-industries/zed/issues/25341

Release Notes:

- Fixed a bug where context for the terminal assistant would add line
breaks in the presence of soft wrapped lines.

---------

Co-authored-by: Peter Tripp <peter@zed.dev>
2025-03-15 14:28:26 +00:00
Marshall Bowers
021d6584cc Revert "Use textDocument/codeLens data in the actions menu when applicable (#26811)" (#26832)
This reverts commit b61171f152.

This PR reverts #26811, as it has broken `rust-analyzer` code actions.

With this commit reverted, my code actions are working again.

Release Notes:

- Community: Reverted https://github.com/zed-industries/zed/pull/26811.
2025-03-15 14:14:29 +00:00
Marshall Bowers
b547cd1c70 ci: Remove migration_checks as a required check (#26833)
This PR removes the `migration_checks` job as a required check.

This was not required before, and we shouldn't make it required, as
there are cases where we need to bypass it, as is the case in
https://github.com/zed-industries/zed/pull/26832.

Release Notes:

- N/A
2025-03-15 13:59:25 +00:00
张小白
8f841d1ab7 Revert unintended Cargo.lock changes (#26830)
This PR reverts some of the changes made to `Cargo.lock` in #25702. In
that PR, several crate versions were unintentionally downgraded,
including `aws-lc-rs`, which has caused release builds to fail on
Windows again.

Release Notes:

- N/A
2025-03-15 21:08:55 +08:00
Jason Lee
4b153e7f7f gpui: Fix line_through, underline position when used text center or right (#24721)
Release Notes:

- N/A

---

| Before | After |
| --- | --- |
| <img width="912" alt="image"
src="https://github.com/user-attachments/assets/0640ac85-ee5d-4707-b866-997e36608c18"
/> | <img width="912" alt="image"
src="https://github.com/user-attachments/assets/caf84477-a7bc-4c22-a9e6-f44c3b6f86ef"
/> |
 
And fix the `line_through` doc link.
2025-03-15 11:44:51 +02:00
Kirill Bulatov
b61171f152 Use textDocument/codeLens data in the actions menu when applicable (#26811)
Similar to how tasks are fetched via LSP, also queries for document's
code lens and filters the ones with the commands, supported in server
capabilities.

Whatever is left and applicable to the given range is added to the actions
menu:


![image](https://github.com/user-attachments/assets/6161e87f-f4b4-4173-8bf9-30db5e94b1ce)

This way, Zed can get more actions to run, albeit neither r-a nor vtsls
seems to provide anything by default.
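The capability filtering described above can be sketched as follows (a minimal sketch with hypothetical types, not Zed's actual LSP plumbing): keep only the lenses whose command appears among the server's declared `executeCommand` commands.

```rust
// Illustrative sketch: filter code lenses down to the ones whose command the
// server declared support for. `CodeLens` is a hypothetical stand-in type.
struct CodeLens {
    command: Option<String>,
}

fn supported_lenses(lenses: Vec<CodeLens>, supported_commands: &[String]) -> Vec<CodeLens> {
    lenses
        .into_iter()
        .filter(|lens| {
            lens.command
                .as_ref()
                .map_or(false, |cmd| supported_commands.iter().any(|s| s == cmd))
        })
        .collect()
}
```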

Currently, there are no plans to render code lenses the way VSCode does;
they just appear as extra actions in the menu.

------------------

As part of the attempts to use rust-analyzer LSP data about the
runnables, I've explored a way to get this data via standard LSP.

When particular experimental client capabilities are enabled (similar to
how clangd does this now), r-a starts to send back code lens with the
data needed to run a cargo command:

```json
{
  "jsonrpc": "2.0",
  "id": 48,
  "result": {
    "range": {
      "start": { "line": 0, "character": 0 },
      "end": { "line": 98, "character": 0 }
    },
    "command": {
      "title": "▶︎ Run Tests",
      "command": "rust-analyzer.runSingle",
      "arguments": [
        {
          "label": "test-mod tests::ecparser",
          "location": {
            "targetUri": "file:///Users/someonetoignore/work/ec4rs/src/tests/ecparser.rs",
            "targetRange": {
              "start": { "line": 0, "character": 0 },
              "end": { "line": 98, "character": 0 }
            },
            "targetSelectionRange": {
              "start": { "line": 0, "character": 0 },
              "end": { "line": 98, "character": 0 }
            }
          },
          "kind": "cargo",
          "args": {
            "environment": {
              "RUSTC_TOOLCHAIN": "/Users/someonetoignore/.rustup/toolchains/1.85-aarch64-apple-darwin"
            },
            "cwd": "/Users/someonetoignore/work/ec4rs",
            "overrideCargo": null,
            "workspaceRoot": "/Users/someonetoignore/work/ec4rs",
            "cargoArgs": ["test", "--package", "ec4rs", "--lib"],
            "executableArgs": ["tests::ecparser", "--show-output"]
          }
        }
      ]
    }
  }
}
```

This data is passed as-is to the VSCode task processor, registered in


60cd01864a/editors/code/src/main.ts (L195)

where it gets eventually executed as a VSCode's task, all handled by the
r-a's extension code.

rust-analyzer does not declare server capabilities for such tasks and has
no `workspace/executeCommand` handler, and Zed needs interactive terminal
output during the test runs, so we cannot ask rust-analyzer for more than
these descriptions.

Given that Zed needs experimental capabilities set to get these lens:

60cd01864a/editors/code/src/client.ts (L318-L327)

and that the lenses may contain other odd tasks (e.g. opening docs or
looking up references), a protocol extension for getting runnables looks
preferable to lenses:
https://rust-analyzer.github.io/book/contributing/lsp-extensions.html#runnables

This PR does not include any work in this direction, limiting itself to
general code lens support.

As a proof of concept, it's possible to get the lens and even attempt to
run it, to no avail:

![image](https://github.com/user-attachments/assets/56950880-d387-48f9-b865-727f97b5633b)


Release Notes:

- Used `textDocument/codeLens` data in the actions menu when applicable
2025-03-15 09:50:32 +02:00
张小白
0b492c11de Use line_endings macro for the edit tool tests (#26642)
This aligns with how we handle other tests on Windows.

Release Notes:

- N/A
2025-03-15 14:16:10 +08:00
AidanV
265caed15e vim: Add global marks (#25702)
Closes https://github.com/zed-industries/zed/issues/13111

Release Notes:

- vim: Added global marks `'[A-Z]`
- vim: Added persistence for global (and local) marks. When re-opening
the same workspace your previous marks will be available.

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-03-15 05:58:34 +00:00
Ryan Hawkins
148131786f Reveal always_included entries in Project Panel (#26197)
If the user has the `auto_reveal` option enabled, as well as
`file_scan_inclusions` and opens a file that is gitignored but is also
set to be always included, that file won't be revealed in the project
panel. I've personally found this annoying, as the project panel can
provide useful context on where you are in a codebase. It also just
feels weird for it to be out of sync with the editor state.
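The fixed reveal decision reduces to something like this (an illustrative sketch; the flag names are hypothetical): an entry is auto-revealed when `auto_reveal` is on and the entry is either not gitignored or matched by a `file_scan_inclusions` pattern.

```rust
// Illustrative sketch of the reveal decision, not Zed's actual code.
fn should_auto_reveal(auto_reveal: bool, is_gitignored: bool, always_included: bool) -> bool {
    auto_reveal && (!is_gitignored || always_included)
}
```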

Release Notes:

- Fixed the interaction between `auto_reveal`, `file_scan_inclusions`,
and `.gitignore` within the Project Panel. Files that are always
included will now be auto-revealed in the Project Panel, even if those
files are also gitignored.
2025-03-15 01:42:11 +00:00
Jakub Charvat
7c1405db37 Update rendering of gutter diff hunks to show whether a hunk is staged or not (#26809)
In the gutter, it seems more intuitive to me for the unstaged hunks to
be hollow, indicating an action left to complete, and the staged hunks
to be filled. I therefore flipped the style of expanded hunks to match
the gutter icons. Is that acceptable? And would it be a breaking change?
If it is not acceptable, then 058dc216d5
contains the opposite behaviour; it is not a problem to revert to it.

In the following images, the first hunk is always ~unstaged~ staged and
the second is ~staged~ unstaged.

<img width="138" alt="image"
src="https://github.com/user-attachments/assets/35927069-da90-424a-8988-a4eb984d865f"
/>
<img width="133" alt="image"
src="https://github.com/user-attachments/assets/4edd0e0d-a2b5-453a-8172-47684e065c82"
/>

<br />
<img width="143" alt="image"
src="https://github.com/user-attachments/assets/2f295944-81aa-45f3-a103-c13b92bc2aba"
/>
<img width="133" alt="image"
src="https://github.com/user-attachments/assets/35248218-7104-4059-8742-ae0e54da6c6b"
/>


Release Notes:

- Improved gutter diff hunks to show whether a hunk is staged
2025-03-14 16:49:53 -07:00
Finn Evers
96b747e31d editor: Disable edit predictions in read-only buffers (#26804)
Closes #26797

Release Notes:

- Fixed edit predictions appearing in read-only buffers.

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-03-14 23:15:49 +00:00
Michael Sloan
7a888de9f5 Add initial implementation of evaluating changes generated by the assistant (#26799)
Release Notes:

- N/A

---------

Co-authored-by: Richard Feldman <oss@rtfeldman.com>
Co-authored-by: Thomas <thomas@zed.dev>
2025-03-14 23:10:25 +00:00
Finn Evers
e9b4fa1465 rust: Follow-up fixes for attribute highlighting (#26172)
Closes #26124

This PR fixes some more cases of improper attribute highlights for rust.

In #25501 I tried to address the regression in highlighting rust
attributes which were introduced by #25333 . However, I failed to
properly check all cases of attribute highlights as shown in the linked
issue - really sorry for that! Thus, this is a follow-up fix aiming to
resolve the issues the previous PR did not cover.

The changes do not affect any highlighting shown in the [previous
PR](https://github.com/zed-industries/zed/pull/25501):

| `main` | <img width="719" alt="main-working"
src="https://github.com/user-attachments/assets/9aa0e611-7bda-4b50-9335-c87da4c38057"
/> |
| --- | --- |
| This PR | <img width="719" alt="PR-working"
src="https://github.com/user-attachments/assets/605b275c-1d68-4bd7-97c6-251d7614a7ed"
/> |

But resolves the mentioned regressions in the linked issue:

| `main` | <img width="371" alt="main_broken"
src="https://github.com/user-attachments/assets/ebbb47b7-7945-41e0-b030-2fe3f2198653"
/> |
| --- | --- |
| This PR | <img width="371" alt="PR_broken"
src="https://github.com/user-attachments/assets/fa97408b-e1d6-4d99-81c1-cfb8073961a4"
/> |

Again, sorry for not checking this more thoroughly.


Release Notes:

- Fixed attributes in Rust being improperly highlighted.

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-03-14 23:02:45 +00:00
Devzeth
ead60d1857 docs: Add documentation for icon theme (#25973)
Adds documentation for the icon theme setting (mostly based on the
documentation from theme but adjusted for icon theme).

Release Notes:

- N/A

---------

Co-authored-by: Peter Tripp <peter@zed.dev>
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-03-14 22:39:11 +00:00
Cole Miller
768dfc8b6b Reinstate failing worktree tests (#26733)
Just debugging for now

Release Notes:

- N/A
2025-03-14 22:20:24 +00:00
Cole Miller
f2f9c786da Fix the feedback modal (#26793)
Closes #26787

Release Notes:

- Fixed a bug that prevented typing in the in-app feedback form
2025-03-14 17:55:52 -04:00
Smit Barmase
e5d2678d94 editor: Disable selection highlights for single line editor (#26805)
Fixes the selection highlight appearing in single-line editors like the
file picker, command palette, etc.

Release Notes:

- Fixed selection highlight appearing in input fields like the file
picker, command palette, etc.
2025-03-15 03:02:40 +05:30
Smit Barmase
3ad9074e63 editor: Fix auto-closing quotes after word character (#26803)
Closes #14349

When typing quotes immediately after a word character, it resulted in
auto-closing the quote.

```js
const thing = this is text^;
```

Typing a quote resulted in `this is text""^;`, which is not correct; it
should be `this is text"^;`.

This PR changes the auto-close logic:

1. We now prevent auto-closing for bracket pairs whose start and end
characters are identical (`` ` ``, `"`, `'`) when they're typed immediately
after a word character.
2. Other bracket pairs like `{}`, `()`, etc. continue to auto-close
regardless of the preceding character, so `func^` to `func()^` keeps
working.
3. Auto-closing in other contexts, such as after spaces and punctuation,
still works.
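The rules above boil down to a small predicate, sketched here (illustrative, not Zed's actual implementation):

```rust
// Pairs whose open and close characters are identical do not auto-close
// right after a word character; asymmetric pairs always do.
fn should_auto_close(open: char, close: char, preceding: Option<char>) -> bool {
    let symmetric = open == close; // `"`, `'`, and backtick
    match preceding {
        Some(c) if c.is_alphanumeric() || c == '_' => !symmetric,
        _ => true, // start of line, whitespace, punctuation, etc.
    }
}
```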

Before:

![before](https://github.com/user-attachments/assets/6be02c95-4c71-488b-901d-b7b98c4170a4)

After:

![after](https://github.com/user-attachments/assets/680ece4d-20cb-428c-b430-846da3a2d643)

Release Notes:

- Fixed auto-paired quotes being inserted when typing a quote
immediately next to a word character.
2025-03-15 02:46:57 +05:30
Richard Feldman
f40b22c02a Add action log to thinking tool (#26802)
Release Notes:

- N/A
2025-03-14 20:44:36 +00:00
Richard Feldman
8490d0d4ef Add thinking tool (#26675)
Release Notes:

- N/A
2025-03-14 16:26:22 -04:00
Finn Evers
afd0da97b9 language_selector: Improve lookup for language icons (#26376)
This PR fixes a rare case where icons could be missing in the language
selector.

Currently, whilst looking up an icon, all file suffixes starting with a
dot are filtered out. While this works fine for some languages, there
are some languages whose only file suffixes start with a dot, e.g.
the "Git Attributes" language provided by the "Git Firefly" extension.
This results in no icon being displayed in the list, as shown in the
screenshots below.

To solve this, we can simply remove the check for this special case
as well as the construction of an artificial file name in the code, as
both are not needed. A simple path consisting just of the extension is
sufficient, as we currently do not differentiate between file names and
file suffixes during an icon lookup. See the relevant code below:


013a646799/crates/file_icons/src/file_icons.rs (L23-L52)

As the first lookup is directly done using the entire file name and then
checked against all suffixes, we actually do not have to construct an
artificial file name at all. Should that produce no match, we check for
a hidden file right after, so we do not have to filter hidden file names
out.
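The lookup order described above can be sketched like this (a simplified model with a plain map standing in for the real language/icon configuration):

```rust
use std::collections::HashMap;

// Try the whole file name first, then progressively shorter dotted suffixes,
// so a name like ".gitattributes" can match even though it has no non-dot stem.
fn icon_for(file_name: &str, icons: &HashMap<String, String>) -> Option<String> {
    if let Some(icon) = icons.get(file_name) {
        return Some(icon.clone());
    }
    // Walk suffixes: "a.tar.gz" -> "tar.gz" -> "gz".
    let mut rest = file_name;
    while let Some(idx) = rest.find('.') {
        rest = &rest[idx + 1..];
        if let Some(icon) = icons.get(rest) {
            return Some(icon.clone());
        }
    }
    None
}
```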

With this fix, nothing changes for "normal" file suffixes; for some
cases where languages provide entire file names as a path suffix, the
matching might improve; and for languages with only hidden associated
file names, the initially described issue is resolved.

I do believe the behavior of matching icons to languages could be
improved in general. However, I think that is beyond the scope of
this change.

| Current main | <img width="546" alt="main"
src="https://github.com/user-attachments/assets/5c3c9fdc-cadf-4e44-9667-2530374aa0d2"
/> |
| --- | --- |
| This PR |<img width="546" alt="PR"
src="https://github.com/user-attachments/assets/82e59108-e31f-4ca9-8bbd-b9fd2b34feb0"
/>|

Additionally, in 4395f78fb2 I refactored
the code which acquires the label and icon for a match, since I found it
a bit hard to read initially. The majority of this diff comes from this
change. Should that not be wanted, I can revert it.

Release Notes:

- Fixed a rare case where languages had no associated icon in the
language selector.
2025-03-14 20:13:59 +00:00
Agus Zubiaga
1bf1c7223f assistant edit tool: Fix editing files in context (#26751)
When the user had attached context in the thread, the editor model request
would fail because its tool use wouldn't be removed properly, leading to
an API error.

Also, after an edit, we'd keep the old file snapshot in the context.
This would make the model think that the edits didn't apply and make it
go in a loop.

Release Notes:

- N/A
2025-03-14 17:07:43 -03:00
0x2CA
ba8b9ec2c7 gpui: Add interval in pattern (#26459)

[git: Use font size to determine pattern slash width
#26446](https://github.com/zed-industries/zed/pull/26446)

That PR only used the font size as the slant line width; this one further
uses the line height to control the interval between slant lines.

Before:


![image](https://github.com/user-attachments/assets/a8f2406e-5eed-4528-a9a2-867513613fc7)


Now:


![image](https://github.com/user-attachments/assets/9b8ccca9-8023-4cb2-a6fe-0e42e19642a4)

Big line height:


![image](https://github.com/user-attachments/assets/4498e858-4f25-432c-80ee-355726d9c41b)


Release Notes:

- N/A
2025-03-14 12:51:09 -07:00
Ben Kunkle
685536c27e editor: Change order of format and timeout futures (#26796)
Very small change: simply reorder the futures we pass to
`select_biased!` so that if the format request and the timeout resolve
at the same time (highly unlikely), we choose the format request instead
of choosing the timeout and throwing away our work!
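The tie-breaking behavior can be modeled with a toy function (purely illustrative, not the `select_biased!` macro itself): when both branches are ready, the first-listed one wins, which is why the format future should come first.

```rust
// Toy model of biased selection: the first ready branch listed wins a tie.
fn biased_select<T>(first: Option<T>, second: Option<T>) -> Option<T> {
    first.or(second)
}
```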

Release Notes:

- N/A
2025-03-14 18:42:00 +00:00
Smit Barmase
ae017c3f96 file_finder: Fix panic when file name contains new line (#26791)
Closes #26777

This PR fixes a panic when a file name contains a newline and a
multi-byte character like 👋 (4 bytes in UTF-8). The issue was in the
regex not considering newlines in file names, causing it to match only
the latter part of the file name.

For example:

```
 left: PathWithPosition { path: "ab", row: None, column: None } // matched
 right: PathWithPosition { path: "ab\ncd", row: None, column: None } // actual file name
```


This resulted in incorrect index calculation later in the code, which
went unnoticed until now due to the lack of tests with file names
containing newlines.

We discovered this issue when a panic occurred due to incorrect index
calculation while trying to get the index of a multi-byte character.
After the newline fix, the index calculation is always correct, even in
the case of multi-byte characters.
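A toy model of the bug (illustrative functions, not the file_finder code): a matcher that effectively only sees the last line of a multi-line file name reports a byte index relative to that line, while the correct index is relative to the whole name — and with multi-byte characters a wrong byte offset can land mid-character.

```rust
// The buggy behavior: index computed against the last line only.
fn last_line_index(name: &str, query: &str) -> Option<usize> {
    name.lines().last()?.find(query)
}

// The fixed behavior: index computed against the whole name, newlines included.
fn full_name_index(name: &str, query: &str) -> Option<usize> {
    name.find(query)
}
```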

Release Notes:

- Fixed an issue where file names with newlines and multi-byte
characters could cause a crash in certain cases.
2025-03-14 22:50:33 +05:30
João Marcos
f587e95a7e Add seed argument to #[gpui::test] attribute macro (#26764)
This PR introduces the arguments `seed` and `seeds` to `gpui::test`,
e.g.:
- `#[gpui::test(seed = 10)]`
- `#[gpui::test(seeds(10, 20, 30, 40))]`

This allows us to run a test against a specific seed value without
slowing down our tests the way `iterations` does with high values.

This was motivated by a diff hunk test that only fails at a 400+ seed,
but is slow to run 400+ times for every `cargo test`.

If your test failed with a specific seed, you can now add the `seed` arg
to increase the chances of detecting a regression.

There are now three ways of setting seeds: the `SEED` env var,
`iterations`, and the args this PR adds. See the docs in `gpui::test`.
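What pinning a seed buys: the same seed replays the same pseudo-random sequence, so a failure found at one seed is deterministic. Sketched with a toy linear congruential generator (gpui's harness uses its own RNG plumbing; this is only illustrative):

```rust
// Toy LCG using Knuth's MMIX constants, to show seed-determined sequences.
struct Rng(u64);

impl Rng {
    fn new(seed: u64) -> Self {
        Rng(seed)
    }

    fn next_u64(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
}
```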

---

I also relaxed the limitation on `retries` not working with
`iterations`, as
that seemed unnecessary.

Release Notes:

- N/A
2025-03-14 13:40:02 -03:00
81 changed files with 3355 additions and 628 deletions


@@ -453,7 +453,6 @@ jobs:
[[ "${{ needs.linux_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "Linux tests failed"; }
[[ "${{ needs.windows_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "Windows tests failed"; }
[[ "${{ needs.windows_clippy.result }}" != 'success' ]] && { RET_CODE=1; echo "Windows clippy failed"; }
[[ "${{ needs.migration_checks.result }}" != 'success' ]] && { RET_CODE=1; echo "Migration checks failed"; }
[[ "${{ needs.build_remote_server.result }}" != 'success' ]] && { RET_CODE=1; echo "Remote server build failed"; }
fi
if [[ "$RET_CODE" -eq 0 ]]; then

Cargo.lock generated

@@ -566,6 +566,41 @@ dependencies = [
"workspace",
]
[[package]]
name = "assistant_eval"
version = "0.1.0"
dependencies = [
"anyhow",
"assistant2",
"assistant_tool",
"assistant_tools",
"clap",
"client",
"collections",
"context_server",
"env_logger 0.11.7",
"fs",
"futures 0.3.31",
"gpui",
"gpui_tokio",
"itertools 0.14.0",
"language",
"language_model",
"language_models",
"node_runtime",
"project",
"prompt_store",
"regex",
"release_channel",
"reqwest_client",
"serde",
"serde_json",
"serde_json_lenient",
"settings",
"smol",
"util",
]
[[package]]
name = "assistant_settings"
version = "0.1.0"
@@ -660,6 +695,7 @@ dependencies = [
"collections",
"derive_more",
"gpui",
"language",
"language_model",
"parking_lot",
"project",
@@ -15148,6 +15184,7 @@ dependencies = [
"collections",
"command_palette",
"command_palette_hooks",
"db",
"editor",
"futures 0.3.31",
"git_ui",
@@ -15173,6 +15210,7 @@ dependencies = [
"serde_json",
"settings",
"task",
"text",
"theme",
"tokio",
"ui",


@@ -8,6 +8,7 @@ members = [
"crates/assistant",
"crates/assistant2",
"crates/assistant_context_editor",
"crates/assistant_eval",
"crates/assistant_settings",
"crates/assistant_slash_command",
"crates/assistant_slash_commands",
@@ -206,6 +207,7 @@ assets = { path = "crates/assets" }
assistant = { path = "crates/assistant" }
assistant2 = { path = "crates/assistant2" }
assistant_context_editor = { path = "crates/assistant_context_editor" }
assistant_eval = { path = "crates/assistant_eval" }
assistant_settings = { path = "crates/assistant_settings" }
assistant_slash_command = { path = "crates/assistant_slash_command" }
assistant_slash_commands = { path = "crates/assistant_slash_commands" }


@@ -155,7 +155,6 @@
"z +": ["workspace::SendKeystrokes", "shift-l j z t ^"],
"z t": "editor::ScrollCursorTop",
"z z": "editor::ScrollCursorCenter",
"z l": "vim::ScrollLeftHalfWay",
"z .": ["workspace::SendKeystrokes", "z z ^"],
"z b": "editor::ScrollCursorBottom",
"z a": "editor::ToggleFold",


@@ -22,10 +22,13 @@ use ui::Color;
use ui::{prelude::*, Disclosure, KeyBinding};
use util::ResultExt as _;
use crate::context_store::{refresh_context_store_text, ContextStore};
pub struct ActiveThread {
language_registry: Arc<LanguageRegistry>,
thread_store: Entity<ThreadStore>,
thread: Entity<Thread>,
context_store: Entity<ContextStore>,
save_thread_task: Option<Task<()>>,
messages: Vec<MessageId>,
list_state: ListState,
@@ -46,6 +49,7 @@ impl ActiveThread {
thread: Entity<Thread>,
thread_store: Entity<ThreadStore>,
language_registry: Arc<LanguageRegistry>,
context_store: Entity<ContextStore>,
window: &mut Window,
cx: &mut Context<Self>,
) -> Self {
@@ -58,6 +62,7 @@ impl ActiveThread {
language_registry,
thread_store,
thread: thread.clone(),
context_store,
save_thread_task: None,
messages: Vec::new(),
rendered_messages_by_id: HashMap::default(),
@@ -293,6 +298,7 @@ impl ActiveThread {
ThreadEvent::StreamedCompletion | ThreadEvent::SummaryChanged => {
self.save_thread(cx);
}
ThreadEvent::DoneStreaming => {}
ThreadEvent::StreamedAssistantText(message_id, text) => {
if let Some(markdown) = self.rendered_messages_by_id.get_mut(&message_id) {
markdown.update(cx, |markdown, cx| {
@@ -350,11 +356,51 @@ impl ActiveThread {
}
if self.thread.read(cx).all_tools_finished() {
let pending_refresh_buffers = self.thread.update(cx, |thread, cx| {
thread.action_log().update(cx, |action_log, _cx| {
action_log.take_pending_refresh_buffers()
})
});
let context_update_task = if !pending_refresh_buffers.is_empty() {
let refresh_task = refresh_context_store_text(
self.context_store.clone(),
&pending_refresh_buffers,
cx,
);
cx.spawn(|this, mut cx| async move {
let updated_context_ids = refresh_task.await;
this.update(&mut cx, |this, cx| {
this.context_store.read_with(cx, |context_store, cx| {
context_store
.context()
.iter()
.filter(|context| {
updated_context_ids.contains(&context.id())
})
.flat_map(|context| context.snapshot(cx))
.collect()
})
})
})
} else {
Task::ready(anyhow::Ok(Vec::new()))
};
let model_registry = LanguageModelRegistry::read_global(cx);
if let Some(model) = model_registry.active_model() {
self.thread.update(cx, |thread, cx| {
thread.send_tool_results_to_model(model, cx);
});
cx.spawn(|this, mut cx| async move {
let updated_context = context_update_task.await?;
this.update(&mut cx, |this, cx| {
this.thread.update(cx, |thread, cx| {
thread.send_tool_results_to_model(model, updated_context, cx);
});
})
})
.detach();
}
}
}


@@ -31,8 +31,11 @@ use gpui::{actions, App};
use prompt_store::PromptBuilder;
use settings::Settings as _;
pub use crate::active_thread::ActiveThread;
pub use crate::assistant_panel::{AssistantPanel, ConcreteAssistantPanelDelegate};
pub use crate::inline_assistant::InlineAssistant;
pub use crate::thread::{Message, RequestKind, Thread, ThreadEvent};
pub use crate::thread_store::ThreadStore;
actions!(
assistant2,


@@ -155,10 +155,14 @@ impl AssistantPanel {
let workspace = workspace.weak_handle();
let weak_self = cx.entity().downgrade();
let message_editor_context_store =
cx.new(|_cx| crate::context_store::ContextStore::new(workspace.clone()));
let message_editor = cx.new(|cx| {
MessageEditor::new(
fs.clone(),
workspace.clone(),
message_editor_context_store.clone(),
thread_store.downgrade(),
thread.clone(),
window,
@@ -174,6 +178,7 @@ impl AssistantPanel {
thread.clone(),
thread_store.clone(),
language_registry.clone(),
message_editor_context_store.clone(),
window,
cx,
)
@@ -242,11 +247,16 @@ impl AssistantPanel {
.update(cx, |this, cx| this.create_thread(cx));
self.active_view = ActiveView::Thread;
let message_editor_context_store =
cx.new(|_cx| crate::context_store::ContextStore::new(self.workspace.clone()));
self.thread = cx.new(|cx| {
ActiveThread::new(
thread.clone(),
self.thread_store.clone(),
self.language_registry.clone(),
message_editor_context_store.clone(),
window,
cx,
)
@@ -255,6 +265,7 @@ impl AssistantPanel {
MessageEditor::new(
self.fs.clone(),
self.workspace.clone(),
message_editor_context_store,
self.thread_store.downgrade(),
thread,
window,
@@ -375,11 +386,14 @@ impl AssistantPanel {
let thread = open_thread_task.await?;
this.update_in(&mut cx, |this, window, cx| {
this.active_view = ActiveView::Thread;
let message_editor_context_store =
cx.new(|_cx| crate::context_store::ContextStore::new(this.workspace.clone()));
this.thread = cx.new(|cx| {
ActiveThread::new(
thread.clone(),
this.thread_store.clone(),
this.language_registry.clone(),
message_editor_context_store.clone(),
window,
cx,
)
@@ -388,6 +402,7 @@ impl AssistantPanel {
MessageEditor::new(
this.fs.clone(),
this.workspace.clone(),
message_editor_context_store,
this.thread_store.downgrade(),
thread,
window,


@@ -9,6 +9,7 @@ use language::Buffer;
use project::{ProjectPath, Worktree};
use rope::Rope;
use text::BufferId;
use util::maybe;
use workspace::Workspace;
use crate::context::{
@@ -531,35 +532,59 @@ fn collect_files_in_path(worktree: &Worktree, path: &Path) -> Vec<Arc<Path>> {
pub fn refresh_context_store_text(
context_store: Entity<ContextStore>,
changed_buffers: &HashSet<Entity<Buffer>>,
cx: &App,
) -> impl Future<Output = ()> {
) -> impl Future<Output = Vec<ContextId>> {
let mut tasks = Vec::new();
for context in &context_store.read(cx).context {
match context {
AssistantContext::File(file_context) => {
let context_store = context_store.clone();
if let Some(task) = refresh_file_text(context_store, file_context, cx) {
tasks.push(task);
let id = context.id();
let task = maybe!({
match context {
AssistantContext::File(file_context) => {
if changed_buffers.is_empty()
|| changed_buffers.contains(&file_context.context_buffer.buffer)
{
let context_store = context_store.clone();
return refresh_file_text(context_store, file_context, cx);
}
}
}
AssistantContext::Directory(directory_context) => {
let context_store = context_store.clone();
if let Some(task) = refresh_directory_text(context_store, directory_context, cx) {
tasks.push(task);
AssistantContext::Directory(directory_context) => {
let should_refresh = changed_buffers.is_empty()
|| changed_buffers.iter().any(|buffer| {
let buffer = buffer.read(cx);
buffer_path_log_err(&buffer)
.map_or(false, |path| path.starts_with(&directory_context.path))
});
if should_refresh {
let context_store = context_store.clone();
return refresh_directory_text(context_store, directory_context, cx);
}
}
AssistantContext::Thread(thread_context) => {
if changed_buffers.is_empty() {
let context_store = context_store.clone();
return Some(refresh_thread_text(context_store, thread_context, cx));
}
}
// Intentionally omit refreshing fetched URLs as it doesn't seem all that useful,
// and doing the caching properly could be tricky (unless it's already handled by
// the HttpClient?).
AssistantContext::FetchedUrl(_) => {}
}
AssistantContext::Thread(thread_context) => {
let context_store = context_store.clone();
tasks.push(refresh_thread_text(context_store, thread_context, cx));
}
// Intentionally omit refreshing fetched URLs as it doesn't seem all that useful,
// and doing the caching properly could be tricky (unless it's already handled by
// the HttpClient?).
AssistantContext::FetchedUrl(_) => {}
None
});
if let Some(task) = task {
tasks.push(task.map(move |_| id));
}
}
future::join_all(tasks).map(|_| ())
future::join_all(tasks)
}
fn refresh_file_text(


@@ -1,5 +1,6 @@
use std::sync::Arc;
use collections::HashSet;
use editor::actions::MoveUp;
use editor::{Editor, EditorElement, EditorEvent, EditorStyle};
use file_icons::FileIcons;
@@ -51,13 +52,13 @@ impl MessageEditor {
pub fn new(
fs: Arc<dyn Fs>,
workspace: WeakEntity<Workspace>,
context_store: Entity<ContextStore>,
thread_store: WeakEntity<ThreadStore>,
thread: Entity<Thread>,
window: &mut Window,
cx: &mut Context<Self>,
) -> Self {
let tools = thread.read(cx).tools().clone();
let context_store = cx.new(|_cx| ContextStore::new(workspace.clone()));
let context_picker_menu_handle = PopoverMenuHandle::default();
let inline_context_picker_menu_handle = PopoverMenuHandle::default();
let model_selector_menu_handle = PopoverMenuHandle::default();
@@ -200,7 +201,8 @@ impl MessageEditor {
text
});
let refresh_task = refresh_context_store_text(self.context_store.clone(), cx);
let refresh_task =
refresh_context_store_text(self.context_store.clone(), &HashSet::default(), cx);
let thread = self.thread.clone();
let context_store = self.context_store.clone();


@@ -2,7 +2,7 @@ use std::io::Write;
use std::sync::Arc;
use anyhow::{Context as _, Result};
use assistant_tool::ToolWorkingSet;
use assistant_tool::{ActionLog, ToolWorkingSet};
use chrono::{DateTime, Utc};
use collections::{BTreeMap, HashMap, HashSet};
use futures::future::Shared;
@@ -104,6 +104,7 @@ pub struct Thread {
prompt_builder: Arc<PromptBuilder>,
tools: Arc<ToolWorkingSet>,
tool_use: ToolUseState,
action_log: Entity<ActionLog>,
scripting_session: Entity<ScriptingSession>,
scripting_tool_use: ToolUseState,
initial_project_snapshot: Shared<Task<Option<Arc<ProjectSnapshot>>>>,
@@ -134,6 +135,7 @@ impl Thread {
tool_use: ToolUseState::new(),
scripting_session: cx.new(|cx| ScriptingSession::new(project.clone(), cx)),
scripting_tool_use: ToolUseState::new(),
action_log: cx.new(|_| ActionLog::new()),
initial_project_snapshot: {
let project_snapshot = Self::project_snapshot(project, cx);
cx.foreground_executor()
@@ -191,6 +193,7 @@ impl Thread {
prompt_builder,
tools,
tool_use,
action_log: cx.new(|_| ActionLog::new()),
scripting_session,
scripting_tool_use,
initial_project_snapshot: Task::ready(serialized.initial_project_snapshot).shared(),
@@ -281,6 +284,10 @@ impl Thread {
self.tool_use.tool_results_for_message(id)
}
pub fn tool_result(&self, id: &LanguageModelToolUseId) -> Option<&LanguageModelToolResult> {
self.tool_use.tool_result(id)
}
pub fn scripting_tool_results_for_message(
&self,
id: MessageId,
@@ -649,32 +656,37 @@ impl Thread {
let result = stream_completion.await;
thread
.update(&mut cx, |thread, cx| match result.as_ref() {
Ok(stop_reason) => match stop_reason {
StopReason::ToolUse => {
cx.emit(ThreadEvent::UsePendingTools);
}
StopReason::EndTurn => {}
StopReason::MaxTokens => {}
},
Err(error) => {
if error.is::<PaymentRequiredError>() {
cx.emit(ThreadEvent::ShowError(ThreadError::PaymentRequired));
} else if error.is::<MaxMonthlySpendReachedError>() {
cx.emit(ThreadEvent::ShowError(ThreadError::MaxMonthlySpendReached));
} else {
let error_message = error
.chain()
.map(|err| err.to_string())
.collect::<Vec<_>>()
.join("\n");
cx.emit(ThreadEvent::ShowError(ThreadError::Message(
SharedString::from(error_message.clone()),
)));
}
.update(&mut cx, |thread, cx| {
match result.as_ref() {
Ok(stop_reason) => match stop_reason {
StopReason::ToolUse => {
cx.emit(ThreadEvent::UsePendingTools);
}
StopReason::EndTurn => {}
StopReason::MaxTokens => {}
},
Err(error) => {
if error.is::<PaymentRequiredError>() {
cx.emit(ThreadEvent::ShowError(ThreadError::PaymentRequired));
} else if error.is::<MaxMonthlySpendReachedError>() {
cx.emit(ThreadEvent::ShowError(
ThreadError::MaxMonthlySpendReached,
));
} else {
let error_message = error
.chain()
.map(|err| err.to_string())
.collect::<Vec<_>>()
.join("\n");
cx.emit(ThreadEvent::ShowError(ThreadError::Message(
SharedString::from(error_message.clone()),
)));
}
thread.cancel_last_completion();
thread.cancel_last_completion();
}
}
cx.emit(ThreadEvent::DoneStreaming);
})
.ok();
});
@@ -750,7 +762,13 @@ impl Thread {
for tool_use in pending_tool_uses {
if let Some(tool) = self.tools.tool(&tool_use.name, cx) {
let task = tool.run(tool_use.input, &request.messages, self.project.clone(), cx);
let task = tool.run(
tool_use.input,
&request.messages,
self.project.clone(),
self.action_log.clone(),
cx,
);
self.insert_tool_output(tool_use.id.clone(), task, cx);
}
@@ -857,8 +875,15 @@ impl Thread {
pub fn send_tool_results_to_model(
&mut self,
model: Arc<dyn LanguageModel>,
updated_context: Vec<ContextSnapshot>,
cx: &mut Context<Self>,
) {
self.context.extend(
updated_context
.into_iter()
.map(|context| (context.id, context)),
);
// Insert a user message to contain the tool results.
self.insert_user_message(
// TODO: Sending up a user message without any content results in the model sending back
@@ -1057,6 +1082,10 @@ impl Thread {
Ok(String::from_utf8_lossy(&markdown).to_string())
}
pub fn action_log(&self) -> &Entity<ActionLog> {
&self.action_log
}
pub fn cumulative_token_usage(&self) -> TokenUsage {
self.cumulative_token_usage.clone()
}
@@ -1074,6 +1103,7 @@ pub enum ThreadEvent {
ShowError(ThreadError),
StreamedCompletion,
StreamedAssistantText(MessageId, String),
DoneStreaming,
MessageAdded(MessageId),
MessageEdited(MessageId),
MessageDeleted(MessageId),

View File

@@ -182,6 +182,13 @@ impl ToolUseState {
.map_or(false, |results| !results.is_empty())
}
pub fn tool_result(
&self,
tool_use_id: &LanguageModelToolUseId,
) -> Option<&LanguageModelToolResult> {
self.tool_results.get(tool_use_id)
}
pub fn request_tool_use(
&mut self,
assistant_message_id: MessageId,
@@ -226,12 +233,12 @@ impl ToolUseState {
output: Result<String>,
) -> Option<PendingToolUse> {
match output {
Ok(output) => {
Ok(tool_result) => {
self.tool_results.insert(
tool_use_id.clone(),
LanguageModelToolResult {
tool_use_id: tool_use_id.clone(),
content: output.into(),
content: tool_result.into(),
is_error: false,
},
);

View File

@@ -0,0 +1,44 @@
[package]
name = "assistant_eval"
version = "0.1.0"
edition.workspace = true
publish.workspace = true
license = "GPL-3.0-or-later"
[lints]
workspace = true
[[bin]]
name = "assistant_eval"
path = "src/main.rs"
[dependencies]
anyhow.workspace = true
assistant2.workspace = true
assistant_tool.workspace = true
assistant_tools.workspace = true
clap.workspace = true
client.workspace = true
collections.workspace = true
context_server.workspace = true
env_logger.workspace = true
fs.workspace = true
futures.workspace = true
gpui.workspace = true
gpui_tokio.workspace = true
itertools.workspace = true
language.workspace = true
language_model.workspace = true
language_models.workspace = true
node_runtime.workspace = true
project.workspace = true
prompt_store.workspace = true
regex.workspace = true
release_channel.workspace = true
reqwest_client.workspace = true
serde.workspace = true
serde_json.workspace = true
serde_json_lenient.workspace = true
settings.workspace = true
smol.workspace = true
util.workspace = true

View File

@@ -0,0 +1 @@
../../LICENSE-GPL

View File

@@ -0,0 +1,77 @@
# Tool Evals
A framework for evaluating and benchmarking AI assistant performance in the Zed editor.
## Overview
Tool Evals provides a headless environment for running assistant evaluations on code repositories. It automates the process of:
1. Cloning and setting up test repositories
2. Sending prompts to language models
3. Allowing the assistant to use tools to modify code
4. Collecting metrics on performance
5. Evaluating results against known good solutions
## How It Works
The system consists of several key components:
- **Eval**: Loads test cases from the `evaluation_data` directory, clones repos, and executes evaluations
- **HeadlessAssistant**: Provides a headless environment for running the AI assistant
- **Judge**: Compares AI-generated diffs with reference solutions and scores their functional similarity
The evaluation flow:
1. An evaluation is loaded from the `evaluation_data` directory
2. The target repository is cloned and checked out at a specific commit
3. A HeadlessAssistant instance is created with the specified language model
4. The user prompt is sent to the assistant
5. The assistant responds and uses tools to modify code
6. Upon completion, a diff is generated from the changes
7. Results are saved including the diff, assistant's response, and performance metrics
8. If a reference solution exists, a Judge evaluates the similarity of the solution
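The judge prompt later in this commit pins down step 8's scoring contract: the model's only output should be a score between 0.0 and 1.0. A caller might validate that output along these lines — a hedged sketch; `parse_judge_score` is a hypothetical helper, not part of this commit:

```rust
/// Parse the judge model's output, which per the judge prompt should be a
/// bare similarity score between 0.0 and 1.0. (Hypothetical helper.)
fn parse_judge_score(output: &str) -> Result<f64, String> {
    let trimmed = output.trim();
    // Reject anything that isn't a plain number.
    let score: f64 = trimmed
        .parse()
        .map_err(|_| format!("judge output is not a number: {trimmed:?}"))?;
    // Reject numbers outside the documented 0.0..=1.0 range.
    if (0.0..=1.0).contains(&score) {
        Ok(score)
    } else {
        Err(format!("judge score {score} is outside 0.0..=1.0"))
    }
}

fn main() {
    assert_eq!(parse_judge_score("0.85\n"), Ok(0.85));
    assert!(parse_judge_score("great diff!").is_err());
    println!("ok");
}
```

This keeps a malformed judge response from silently being recorded as a valid metric.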
## Setup Requirements
### Prerequisites
- Rust and Cargo
- Git
- Network access to clone repositories
- Appropriate API keys for language models and git services (Anthropic, GitHub, etc.)
### Environment Variables
Ensure you have the required API keys set, either from a dev run of Zed or via these environment variables:
- `ZED_ANTHROPIC_API_KEY` for Claude models
- `ZED_OPENAI_API_KEY` for OpenAI models
- `ZED_GITHUB_API_KEY` for GitHub API (or similar)
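A minimal shell session, assuming only the variable names listed above (the values are placeholders, not real keys):

```shell
# Placeholder values — substitute real keys before running evals.
export ZED_ANTHROPIC_API_KEY="<your-anthropic-key>"
export ZED_GITHUB_API_KEY="<your-github-token>"

# Exported variables are visible to child processes such as `cargo run`.
sh -c 'test -n "$ZED_ANTHROPIC_API_KEY" && echo "anthropic key set"'
```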
## Usage
### Running a Single Evaluation
To run a specific evaluation:
```bash
cargo run -p assistant_eval -- bubbletea-add-set-window-title
```
The arguments are regex patterns matched against evaluation names. For example, to run every evaluation whose name contains `bubbletea`:
```bash
cargo run -p assistant_eval -- bubbletea
```
To run all evaluations:
```bash
cargo run -p assistant_eval -- --all
```
## Evaluation Data Structure
Each evaluation should be placed in the `evaluation_data` directory with the following structure:
* `prompt.txt`: The user's prompt.
* `original.diff`: The `git diff` of the change anticipated for this prompt.
* `setup.json`: Information about the repo used for the evaluation.
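For example, `setup.json` mirrors the `EvalSetup` struct added in this commit (`url` and `base_sha`); the values below are illustrative placeholders:

```json
{
  "url": "https://github.com/charmbracelet/bubbletea",
  "base_sha": "<full-commit-sha-to-check-out>"
}
```

The runner clones `url` into `repos/`, fetches `base_sha` with `--depth 1`, and checks it out before sending the prompt.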

View File

@@ -0,0 +1,61 @@
// Copied from `crates/zed/build.rs`, with removal of code for including the zed icon on windows.
use std::process::Command;
fn main() {
if cfg!(target_os = "macos") {
println!("cargo:rustc-env=MACOSX_DEPLOYMENT_TARGET=10.15.7");
println!("cargo:rerun-if-env-changed=ZED_BUNDLE");
if std::env::var("ZED_BUNDLE").ok().as_deref() == Some("true") {
// Find WebRTC.framework in the Frameworks folder when running as part of an application bundle.
println!("cargo:rustc-link-arg=-Wl,-rpath,@executable_path/../Frameworks");
} else {
// Find WebRTC.framework as a sibling of the executable when running outside of an application bundle.
println!("cargo:rustc-link-arg=-Wl,-rpath,@executable_path");
}
// Weakly link ReplayKit to ensure Zed can be used on macOS 10.15+.
println!("cargo:rustc-link-arg=-Wl,-weak_framework,ReplayKit");
// Seems to be required to enable Swift concurrency
println!("cargo:rustc-link-arg=-Wl,-rpath,/usr/lib/swift");
// Register exported Objective-C selectors, protocols, etc
println!("cargo:rustc-link-arg=-Wl,-ObjC");
}
// Populate git sha environment variable if git is available
println!("cargo:rerun-if-changed=../../.git/logs/HEAD");
println!(
"cargo:rustc-env=TARGET={}",
std::env::var("TARGET").unwrap()
);
if let Ok(output) = Command::new("git").args(["rev-parse", "HEAD"]).output() {
if output.status.success() {
let git_sha = String::from_utf8_lossy(&output.stdout);
let git_sha = git_sha.trim();
println!("cargo:rustc-env=ZED_COMMIT_SHA={git_sha}");
if let Ok(build_profile) = std::env::var("PROFILE") {
if build_profile == "release" {
// This is currently the best way to make `cargo build ...`'s build script
// print something to stdout without extra verbosity.
println!(
"cargo:warning=Info: using '{git_sha}' hash for ZED_COMMIT_SHA env var"
);
}
}
}
}
#[cfg(target_os = "windows")]
{
#[cfg(target_env = "msvc")]
{
// todo(windows): This is to avoid stack overflow. Remove it when solved.
println!("cargo:rustc-link-arg=/stack:{}", 8 * 1024 * 1024);
}
}
}

View File

@@ -0,0 +1,252 @@
use crate::headless_assistant::{HeadlessAppState, HeadlessAssistant};
use anyhow::anyhow;
use assistant2::RequestKind;
use collections::HashMap;
use gpui::{App, Task};
use language_model::{LanguageModel, TokenUsage};
use serde::{Deserialize, Serialize};
use std::{
fs,
io::Write,
path::{Path, PathBuf},
sync::Arc,
time::Duration,
};
use util::command::new_smol_command;
pub struct Eval {
pub name: String,
pub path: PathBuf,
pub repo_path: PathBuf,
pub eval_setup: EvalSetup,
pub user_prompt: String,
}
#[derive(Debug, Serialize)]
pub struct EvalOutput {
pub diff: String,
pub last_message: String,
pub elapsed_time: Duration,
pub assistant_response_count: usize,
pub tool_use_counts: HashMap<Arc<str>, u32>,
pub token_usage: TokenUsage,
}
#[derive(Deserialize)]
pub struct EvalSetup {
pub url: String,
pub base_sha: String,
}
impl Eval {
/// Loads the eval from a path (typically in `evaluation_data`). Clones and checks out the repo
/// if necessary.
pub async fn load(name: String, path: PathBuf, repos_dir: &Path) -> anyhow::Result<Self> {
let prompt_path = path.join("prompt.txt");
let user_prompt = smol::unblock(|| std::fs::read_to_string(prompt_path)).await?;
let setup_path = path.join("setup.json");
let setup_contents = smol::unblock(|| std::fs::read_to_string(setup_path)).await?;
let eval_setup = serde_json_lenient::from_str_lenient::<EvalSetup>(&setup_contents)?;
let repo_path = repos_dir.join(repo_dir_name(&eval_setup.url));
Ok(Eval {
name,
path,
repo_path,
eval_setup,
user_prompt,
})
}
pub fn run(
self,
app_state: Arc<HeadlessAppState>,
model: Arc<dyn LanguageModel>,
cx: &mut App,
) -> Task<anyhow::Result<EvalOutput>> {
cx.spawn(move |mut cx| async move {
checkout_repo(&self.eval_setup, &self.repo_path).await?;
let (assistant, done_rx) =
cx.update(|cx| HeadlessAssistant::new(app_state.clone(), cx))??;
let _worktree = assistant
.update(&mut cx, |assistant, cx| {
assistant.project.update(cx, |project, cx| {
project.create_worktree(&self.repo_path, true, cx)
})
})?
.await?;
let start_time = std::time::SystemTime::now();
assistant.update(&mut cx, |assistant, cx| {
assistant.thread.update(cx, |thread, cx| {
let context = vec![];
thread.insert_user_message(self.user_prompt.clone(), context, cx);
thread.send_to_model(model, RequestKind::Chat, cx);
});
})?;
done_rx.recv().await??;
let elapsed_time = start_time.elapsed()?;
let diff = query_git(&self.repo_path, vec!["diff"]).await?;
assistant.update(&mut cx, |assistant, cx| {
let thread = assistant.thread.read(cx);
let last_message = thread.messages().last().unwrap();
if last_message.role != language_model::Role::Assistant {
return Err(anyhow!("Last message is not from assistant"));
}
let assistant_response_count = thread
.messages()
.filter(|message| message.role == language_model::Role::Assistant)
.count();
Ok(EvalOutput {
diff,
last_message: last_message.text.clone(),
elapsed_time,
assistant_response_count,
tool_use_counts: assistant.tool_use_counts.clone(),
token_usage: thread.cumulative_token_usage(),
})
})?
})
}
}
impl EvalOutput {
// Method to save the output to a directory
pub fn save_to_directory(
&self,
output_dir: &Path,
eval_output_value: String,
) -> anyhow::Result<()> {
// Create the output directory if it doesn't exist
fs::create_dir_all(&output_dir)?;
// Save the diff to a file
let diff_path = output_dir.join("diff.patch");
let mut diff_file = fs::File::create(&diff_path)?;
diff_file.write_all(self.diff.as_bytes())?;
// Save the last message to a file
let message_path = output_dir.join("assistant_response.txt");
let mut message_file = fs::File::create(&message_path)?;
message_file.write_all(self.last_message.as_bytes())?;
// Current metrics for this run
let current_metrics = serde_json::json!({
"elapsed_time_ms": self.elapsed_time.as_millis(),
"assistant_response_count": self.assistant_response_count,
"tool_use_counts": self.tool_use_counts,
"token_usage": self.token_usage,
"eval_output_value": eval_output_value,
});
// Get current timestamp in milliseconds
let timestamp = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)?
.as_millis()
.to_string();
// Path to metrics file
let metrics_path = output_dir.join("metrics.json");
// Load existing metrics if the file exists, or create a new object
let mut historical_metrics = if metrics_path.exists() {
let metrics_content = fs::read_to_string(&metrics_path)?;
serde_json::from_str::<serde_json::Value>(&metrics_content)
.unwrap_or_else(|_| serde_json::json!({}))
} else {
serde_json::json!({})
};
// Add new run with timestamp as key
if let serde_json::Value::Object(ref mut map) = historical_metrics {
map.insert(timestamp, current_metrics);
}
// Write updated metrics back to file
let metrics_json = serde_json::to_string_pretty(&historical_metrics)?;
let mut metrics_file = fs::File::create(&metrics_path)?;
metrics_file.write_all(metrics_json.as_bytes())?;
Ok(())
}
}
fn repo_dir_name(url: &str) -> String {
url.trim_start_matches("https://")
.replace(|c: char| !c.is_alphanumeric(), "_")
}
async fn checkout_repo(eval_setup: &EvalSetup, repo_path: &Path) -> anyhow::Result<()> {
if !repo_path.exists() {
smol::unblock({
let repo_path = repo_path.to_path_buf();
|| std::fs::create_dir_all(repo_path)
})
.await?;
run_git(repo_path, vec!["init"]).await?;
run_git(repo_path, vec!["remote", "add", "origin", &eval_setup.url]).await?;
} else {
let actual_origin = query_git(repo_path, vec!["remote", "get-url", "origin"]).await?;
if actual_origin != eval_setup.url {
return Err(anyhow!(
"remote origin {} does not match expected origin {}",
actual_origin,
eval_setup.url
));
}
// TODO: consider including "-x" to remove ignored files. The downside of this is that it will
// also remove build artifacts, and so prevent incremental reuse there.
run_git(repo_path, vec!["clean", "--force", "-d"]).await?;
run_git(repo_path, vec!["reset", "--hard", "HEAD"]).await?;
}
run_git(
repo_path,
vec!["fetch", "--depth", "1", "origin", &eval_setup.base_sha],
)
.await?;
run_git(repo_path, vec!["checkout", &eval_setup.base_sha]).await?;
Ok(())
}
async fn run_git(repo_path: &Path, args: Vec<&str>) -> anyhow::Result<()> {
let exit_status = new_smol_command("git")
.current_dir(repo_path)
.args(args.clone())
.status()
.await?;
if exit_status.success() {
Ok(())
} else {
Err(anyhow!(
"`git {}` failed with {}",
args.join(" "),
exit_status,
))
}
}
async fn query_git(repo_path: &Path, args: Vec<&str>) -> anyhow::Result<String> {
let output = new_smol_command("git")
.current_dir(repo_path)
.args(args.clone())
.output()
.await?;
if output.status.success() {
Ok(String::from_utf8(output.stdout)?.trim().to_string())
} else {
Err(anyhow!(
"`git {}` failed with {}",
args.join(" "),
output.status
))
}
}

View File

@@ -0,0 +1,241 @@
use anyhow::anyhow;
use assistant2::{Thread, ThreadEvent, ThreadStore};
use assistant_tool::ToolWorkingSet;
use client::{Client, UserStore};
use collections::HashMap;
use futures::StreamExt;
use gpui::{prelude::*, App, AsyncApp, Entity, SemanticVersion, Subscription, Task};
use language::LanguageRegistry;
use language_model::{
AuthenticateError, LanguageModel, LanguageModelProviderId, LanguageModelRegistry,
LanguageModelRequest,
};
use node_runtime::NodeRuntime;
use project::{Project, RealFs};
use prompt_store::PromptBuilder;
use settings::SettingsStore;
use smol::channel;
use std::sync::Arc;
/// Subset of `workspace::AppState` needed by `HeadlessAssistant`, with additional fields.
pub struct HeadlessAppState {
pub languages: Arc<LanguageRegistry>,
pub client: Arc<Client>,
pub user_store: Entity<UserStore>,
pub fs: Arc<dyn fs::Fs>,
pub node_runtime: NodeRuntime,
// Additional fields not present in `workspace::AppState`.
pub prompt_builder: Arc<PromptBuilder>,
}
pub struct HeadlessAssistant {
pub thread: Entity<Thread>,
pub project: Entity<Project>,
#[allow(dead_code)]
pub thread_store: Entity<ThreadStore>,
pub tool_use_counts: HashMap<Arc<str>, u32>,
pub done_tx: channel::Sender<anyhow::Result<()>>,
_subscription: Subscription,
}
impl HeadlessAssistant {
pub fn new(
app_state: Arc<HeadlessAppState>,
cx: &mut App,
) -> anyhow::Result<(Entity<Self>, channel::Receiver<anyhow::Result<()>>)> {
let env = None;
let project = Project::local(
app_state.client.clone(),
app_state.node_runtime.clone(),
app_state.user_store.clone(),
app_state.languages.clone(),
app_state.fs.clone(),
env,
cx,
);
let tools = Arc::new(ToolWorkingSet::default());
let thread_store =
ThreadStore::new(project.clone(), tools, app_state.prompt_builder.clone(), cx)?;
let thread = thread_store.update(cx, |thread_store, cx| thread_store.create_thread(cx));
let (done_tx, done_rx) = channel::unbounded::<anyhow::Result<()>>();
let headless_thread = cx.new(move |cx| Self {
_subscription: cx.subscribe(&thread, Self::handle_thread_event),
thread,
project,
thread_store,
tool_use_counts: HashMap::default(),
done_tx,
});
Ok((headless_thread, done_rx))
}
fn handle_thread_event(
&mut self,
thread: Entity<Thread>,
event: &ThreadEvent,
cx: &mut Context<Self>,
) {
match event {
ThreadEvent::ShowError(err) => self
.done_tx
.send_blocking(Err(anyhow!("{:?}", err)))
.unwrap(),
ThreadEvent::DoneStreaming => {
let thread = thread.read(cx);
if let Some(message) = thread.messages().last() {
println!("Message: {}", message.text);
}
if thread.all_tools_finished() {
self.done_tx.send_blocking(Ok(())).unwrap()
}
}
ThreadEvent::UsePendingTools => {
thread.update(cx, |thread, cx| {
thread.use_pending_tools(cx);
});
}
ThreadEvent::ToolFinished {
tool_use_id,
pending_tool_use,
} => {
if let Some(pending_tool_use) = pending_tool_use {
println!(
"Used tool {} with input: {}",
pending_tool_use.name, pending_tool_use.input
);
*self
.tool_use_counts
.entry(pending_tool_use.name.clone())
.or_insert(0) += 1;
}
if let Some(tool_result) = thread.read(cx).tool_result(tool_use_id) {
println!("Tool result: {:?}", tool_result);
}
if thread.read(cx).all_tools_finished() {
let model_registry = LanguageModelRegistry::read_global(cx);
if let Some(model) = model_registry.active_model() {
thread.update(cx, |thread, cx| {
// Currently evals do not support specifying context.
let updated_context = vec![];
thread.send_tool_results_to_model(model, updated_context, cx);
});
}
}
}
ThreadEvent::StreamedCompletion
| ThreadEvent::SummaryChanged
| ThreadEvent::StreamedAssistantText(_, _)
| ThreadEvent::MessageAdded(_)
| ThreadEvent::MessageEdited(_)
| ThreadEvent::MessageDeleted(_) => {}
}
}
}
pub fn init(cx: &mut App) -> Arc<HeadlessAppState> {
release_channel::init(SemanticVersion::default(), cx);
gpui_tokio::init(cx);
let mut settings_store = SettingsStore::new(cx);
settings_store
.set_default_settings(settings::default_settings().as_ref(), cx)
.unwrap();
cx.set_global(settings_store);
client::init_settings(cx);
Project::init_settings(cx);
let client = Client::production(cx);
cx.set_http_client(client.http_client().clone());
let git_binary_path = None;
let fs = Arc::new(RealFs::new(git_binary_path));
let languages = Arc::new(LanguageRegistry::new(cx.background_executor().clone()));
let user_store = cx.new(|cx| UserStore::new(client.clone(), cx));
language::init(cx);
language_model::init(client.clone(), cx);
language_models::init(user_store.clone(), client.clone(), fs.clone(), cx);
assistant_tools::init(cx);
context_server::init(cx);
let stdout_is_a_pty = false;
let prompt_builder = PromptBuilder::load(fs.clone(), stdout_is_a_pty, cx);
assistant2::init(fs.clone(), client.clone(), prompt_builder.clone(), cx);
Arc::new(HeadlessAppState {
languages,
client,
user_store,
fs,
node_runtime: NodeRuntime::unavailable(),
prompt_builder,
})
}
pub fn find_model(model_name: &str, cx: &App) -> anyhow::Result<Arc<dyn LanguageModel>> {
let model_registry = LanguageModelRegistry::read_global(cx);
let model = model_registry
.available_models(cx)
.find(|model| model.id().0 == model_name);
let Some(model) = model else {
return Err(anyhow!(
"No language model named {} was available. Available models: {}",
model_name,
model_registry
.available_models(cx)
.map(|model| model.id().0.clone())
.collect::<Vec<_>>()
.join(", ")
));
};
Ok(model)
}
pub fn authenticate_model_provider(
provider_id: LanguageModelProviderId,
cx: &mut App,
) -> Task<std::result::Result<(), AuthenticateError>> {
let model_registry = LanguageModelRegistry::read_global(cx);
let model_provider = model_registry.provider(&provider_id).unwrap();
model_provider.authenticate(cx)
}
pub async fn send_language_model_request(
model: Arc<dyn LanguageModel>,
request: LanguageModelRequest,
cx: AsyncApp,
) -> anyhow::Result<String> {
match model.stream_completion_text(request, &cx).await {
Ok(mut stream) => {
let mut full_response = String::new();
// Process the response stream
while let Some(chunk_result) = stream.stream.next().await {
match chunk_result {
Ok(chunk_str) => {
full_response.push_str(&chunk_str);
}
Err(err) => {
return Err(anyhow!(
"Error receiving response from language model: {err}"
));
}
}
}
Ok(full_response)
}
Err(err) => Err(anyhow!(
"Failed to get response from language model. Error was: {err}"
)),
}
}

View File

@@ -0,0 +1,121 @@
use crate::eval::EvalOutput;
use crate::headless_assistant::send_language_model_request;
use anyhow::anyhow;
use gpui::{App, Task};
use language_model::{
LanguageModel, LanguageModelRequest, LanguageModelRequestMessage, MessageContent, Role,
};
use std::{path::Path, sync::Arc};
pub struct Judge {
pub original_diff: Option<String>,
#[allow(dead_code)]
pub original_message: Option<String>,
pub model: Arc<dyn LanguageModel>,
}
impl Judge {
pub async fn load(eval_path: &Path, model: Arc<dyn LanguageModel>) -> anyhow::Result<Judge> {
let original_diff_path = eval_path.join("original.diff");
let original_diff = smol::unblock(move || {
if std::fs::exists(&original_diff_path)? {
anyhow::Ok(Some(std::fs::read_to_string(&original_diff_path)?))
} else {
anyhow::Ok(None)
}
});
let original_message_path = eval_path.join("original_message.txt");
let original_message = smol::unblock(move || {
if std::fs::exists(&original_message_path)? {
anyhow::Ok(Some(std::fs::read_to_string(&original_message_path)?))
} else {
anyhow::Ok(None)
}
});
Ok(Self {
original_diff: original_diff.await?,
original_message: original_message.await?,
model,
})
}
pub fn run(&self, eval_output: &EvalOutput, cx: &mut App) -> Task<anyhow::Result<String>> {
let Some(original_diff) = self.original_diff.as_ref() else {
return Task::ready(Err(anyhow!("No original.diff found")));
};
// TODO: check for empty diff?
let prompt = diff_comparison_prompt(&original_diff, &eval_output.diff);
let request = LanguageModelRequest {
messages: vec![LanguageModelRequestMessage {
role: Role::User,
content: vec![MessageContent::Text(prompt)],
cache: false,
}],
temperature: Some(0.0),
tools: Vec::new(),
stop: Vec::new(),
};
let model = self.model.clone();
cx.spawn(move |cx| send_language_model_request(model, request, cx))
}
}
pub fn diff_comparison_prompt(original_diff: &str, new_diff: &str) -> String {
format!(
r#"# Git Diff Similarity Evaluation Template
## Instructions
Compare the two diffs and score them between 0.0 and 1.0 based on their functional similarity.
- 1.0 = Perfect functional match (achieves identical results)
- 0.0 = No functional similarity whatsoever
## Evaluation Criteria
Please consider the following aspects in order of importance:
1. **Functional Equivalence (60%)**
- Do both diffs achieve the same end result?
- Are the changes functionally equivalent despite possibly using different approaches?
- Do the modifications address the same issues or implement the same features?
2. **Logical Structure (20%)**
- Are the logical flows similar?
- Do the modifications affect the same code paths?
- Are control structures (if/else, loops, etc.) modified in similar ways?
3. **Code Content (15%)**
- Are similar lines added/removed?
- Are the same variables, functions, or methods being modified?
- Are the same APIs or libraries being used?
4. **File Layout (5%)**
- Are the same files being modified?
- Are changes occurring in similar locations within files?
## Input
Original Diff:
```git
{}
```
New Diff:
```git
{}
```
## Output Format
THE ONLY OUTPUT SHOULD BE A SCORE BETWEEN 0.0 AND 1.0.
Example output:
0.85"#,
original_diff, new_diff
)
}

View File

@@ -0,0 +1,234 @@
mod eval;
mod headless_assistant;
mod judge;
use clap::Parser;
use eval::{Eval, EvalOutput};
use futures::{stream, StreamExt};
use gpui::{Application, AsyncApp};
use headless_assistant::{authenticate_model_provider, find_model, HeadlessAppState};
use itertools::Itertools;
use judge::Judge;
use language_model::{LanguageModel, LanguageModelRegistry};
use regex::Regex;
use reqwest_client::ReqwestClient;
use std::{cmp, path::PathBuf, sync::Arc};
#[derive(Parser, Debug)]
#[command(
name = "assistant_eval",
disable_version_flag = true,
before_help = "Tool eval runner"
)]
struct Args {
/// Regexes to match the names of evals to run.
eval_name_regexes: Vec<String>,
/// Runs all evals in `evaluation_data`, ignoring any name regexes.
#[arg(long)]
all: bool,
/// Name of the model (default: "claude-3-7-sonnet-latest")
#[arg(long, default_value = "claude-3-7-sonnet-latest")]
model_name: String,
/// Name of the editor model (default: value of `--model_name`).
#[arg(long)]
editor_model_name: Option<String>,
/// Name of the judge model (default: value of `--model_name`).
#[arg(long)]
judge_model_name: Option<String>,
/// Number of evaluations to run concurrently (default: 10)
#[arg(short, long, default_value = "10")]
concurrency: usize,
}
fn main() {
env_logger::init();
let args = Args::parse();
let http_client = Arc::new(ReqwestClient::new());
let app = Application::headless().with_http_client(http_client.clone());
let crate_dir = PathBuf::from("../zed-agent-bench");
let evaluation_data_dir = crate_dir.join("evaluation_data").canonicalize().unwrap();
let repos_dir = crate_dir.join("repos").canonicalize().unwrap();
let all_evals = std::fs::read_dir(&evaluation_data_dir)
.unwrap()
.map(|path| path.unwrap().file_name().to_string_lossy().to_string())
.collect::<Vec<_>>();
let evals_to_run = if args.all {
all_evals
} else {
args.eval_name_regexes
.into_iter()
.map(|regex_string| Regex::new(&regex_string).unwrap())
.flat_map(|regex| {
all_evals
.iter()
.filter(|eval_name| regex.is_match(eval_name))
.cloned()
.collect::<Vec<_>>()
})
.collect::<Vec<_>>()
};
if evals_to_run.is_empty() {
panic!("Names of evals to run must be provided or `--all` specified");
}
println!("Will run the following evals: {evals_to_run:?}");
println!("Running up to {} evals concurrently", args.concurrency);
let editor_model_name = if let Some(model_name) = args.editor_model_name {
model_name
} else {
args.model_name.clone()
};
let judge_model_name = if let Some(model_name) = args.judge_model_name {
model_name
} else {
args.model_name.clone()
};
app.run(move |cx| {
let app_state = headless_assistant::init(cx);
let model = find_model(&args.model_name, cx).unwrap();
let editor_model = find_model(&editor_model_name, cx).unwrap();
let judge_model = find_model(&judge_model_name, cx).unwrap();
LanguageModelRegistry::global(cx).update(cx, |registry, cx| {
registry.set_active_model(Some(model.clone()), cx);
registry.set_editor_model(Some(editor_model.clone()), cx);
});
let model_provider_id = model.provider_id();
let editor_model_provider_id = editor_model.provider_id();
let judge_model_provider_id = judge_model.provider_id();
cx.spawn(move |cx| async move {
// Authenticate all model providers first
cx.update(|cx| authenticate_model_provider(model_provider_id.clone(), cx))
.unwrap()
.await
.unwrap();
cx.update(|cx| authenticate_model_provider(editor_model_provider_id.clone(), cx))
.unwrap()
.await
.unwrap();
cx.update(|cx| authenticate_model_provider(judge_model_provider_id.clone(), cx))
.unwrap()
.await
.unwrap();
let loaded_evals = stream::iter(evals_to_run)
.map(|eval_name| {
let eval_path = evaluation_data_dir.join(&eval_name);
let repos_dir = repos_dir.clone();
async move {
match Eval::load(eval_name.clone(), eval_path, &repos_dir).await {
Ok(eval) => Some(eval),
Err(err) => {
// TODO: Persist errors / surface errors at the end.
println!("Error loading {eval_name}: {err}");
None
}
}
}
})
.buffer_unordered(args.concurrency)
.collect::<Vec<_>>()
.await
.into_iter()
.flatten()
.collect::<Vec<_>>();
// The evals need to be loaded and grouped by URL before concurrently running, since
// evals that use the same remote URL will use the same working directory.
let mut evals_grouped_by_url: Vec<Vec<Eval>> = loaded_evals
.into_iter()
.map(|eval| (eval.eval_setup.url.clone(), eval))
.into_group_map()
.into_values()
.collect::<Vec<_>>();
// Sort groups in descending order, so that bigger groups start first.
evals_grouped_by_url.sort_by_key(|evals| cmp::Reverse(evals.len()));
let results = stream::iter(evals_grouped_by_url)
.map(|evals| {
let model = model.clone();
let judge_model = judge_model.clone();
let app_state = app_state.clone();
let cx = cx.clone();
async move {
let mut results = Vec::new();
for eval in evals {
let name = eval.name.clone();
println!("Starting eval named {}", name);
let result = run_eval(
eval,
model.clone(),
judge_model.clone(),
app_state.clone(),
cx.clone(),
)
.await;
results.push((name, result));
}
results
}
})
.buffer_unordered(args.concurrency)
.collect::<Vec<_>>()
.await
.into_iter()
.flatten()
.collect::<Vec<_>>();
// Process results in order of completion
for (eval_name, result) in results {
match result {
Ok((eval_output, judge_output)) => {
println!("Generated diff for {eval_name}:\n");
println!("{}\n", eval_output.diff);
println!("Last message for {eval_name}:\n");
println!("{}\n", eval_output.last_message);
println!("Elapsed time: {:?}", eval_output.elapsed_time);
println!(
"Assistant response count: {}",
eval_output.assistant_response_count
);
println!("Tool use counts: {:?}", eval_output.tool_use_counts);
println!("Judge output for {eval_name}: {judge_output}");
}
Err(err) => {
// TODO: Persist errors / surface errors at the end.
println!("Error running {eval_name}: {err}");
}
}
}
cx.update(|cx| cx.quit()).unwrap();
})
.detach();
});
println!("Done running evals");
}
async fn run_eval(
eval: Eval,
model: Arc<dyn LanguageModel>,
judge_model: Arc<dyn LanguageModel>,
app_state: Arc<HeadlessAppState>,
cx: AsyncApp,
) -> anyhow::Result<(EvalOutput, String)> {
let path = eval.path.clone();
let judge = Judge::load(&path, judge_model).await?;
let eval_output = cx.update(|cx| eval.run(app_state, model, cx))?.await?;
let judge_output = cx.update(|cx| judge.run(&eval_output, cx))?.await?;
eval_output.save_to_directory(&path, judge_output.to_string())?;
Ok((eval_output, judge_output))
}

View File

@@ -15,8 +15,9 @@ path = "src/assistant_tool.rs"
anyhow.workspace = true
collections.workspace = true
derive_more.workspace = true
language_model.workspace = true
gpui.workspace = true
language.workspace = true
language_model.workspace = true
parking_lot.workspace = true
project.workspace = true
serde.workspace = true

View File

@@ -4,7 +4,10 @@ mod tool_working_set;
use std::sync::Arc;
use anyhow::Result;
use collections::HashSet;
use gpui::Context;
use gpui::{App, Entity, SharedString, Task};
use language::Buffer;
use language_model::LanguageModelRequestMessage;
use project::Project;
@@ -47,6 +50,39 @@ pub trait Tool: 'static + Send + Sync {
input: serde_json::Value,
messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>>;
}
/// Tracks actions performed by tools in a thread
#[derive(Debug)]
pub struct ActionLog {
changed_buffers: HashSet<Entity<Buffer>>,
pending_refresh: HashSet<Entity<Buffer>>,
}
impl ActionLog {
/// Creates a new, empty action log.
pub fn new() -> Self {
Self {
changed_buffers: HashSet::default(),
pending_refresh: HashSet::default(),
}
}
/// Registers buffers that have changed and need refreshing.
pub fn notify_buffers_changed(
&mut self,
buffers: HashSet<Entity<Buffer>>,
_cx: &mut Context<Self>,
) {
self.changed_buffers.extend(buffers.clone());
self.pending_refresh.extend(buffers);
}
/// Takes and returns the set of buffers pending refresh, clearing internal state.
pub fn take_pending_refresh_buffers(&mut self) -> HashSet<Entity<Buffer>> {
std::mem::take(&mut self.pending_refresh)
}
}
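`ActionLog` above accumulates changed buffers and drains a pending-refresh set with `std::mem::take`. A standalone sketch of that take-and-clear pattern, using plain integer ids as a simplification in place of gpui `Entity<Buffer>` handles:

```rust
use std::collections::HashSet;

// Simplified ActionLog: u64 ids stand in for gpui Entity<Buffer> handles.
#[derive(Default, Debug)]
struct ActionLog {
    changed_buffers: HashSet<u64>,
    pending_refresh: HashSet<u64>,
}

impl ActionLog {
    // Record buffers that changed; both sets receive the ids.
    fn notify_buffers_changed(&mut self, buffers: HashSet<u64>) {
        self.changed_buffers.extend(buffers.iter().copied());
        self.pending_refresh.extend(buffers);
    }

    // Drain the pending set, leaving it empty for the next round,
    // while the changed set keeps the full history.
    fn take_pending_refresh_buffers(&mut self) -> HashSet<u64> {
        std::mem::take(&mut self.pending_refresh)
    }
}
```

`mem::take` swaps in the default (empty) set, so the caller gets ownership of the pending buffers without cloning.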


@@ -7,6 +7,7 @@ mod now_tool;
mod path_search_tool;
mod read_file_tool;
mod regex_search;
mod thinking_tool;
use assistant_tool::ToolRegistry;
use gpui::App;
@@ -20,6 +21,7 @@ use crate::now_tool::NowTool;
use crate::path_search_tool::PathSearchTool;
use crate::read_file_tool::ReadFileTool;
use crate::regex_search::RegexSearchTool;
use crate::thinking_tool::ThinkingTool;
pub fn init(cx: &mut App) {
assistant_tool::init(cx);
@@ -35,4 +37,5 @@ pub fn init(cx: &mut App) {
registry.register_tool(PathSearchTool);
registry.register_tool(ReadFileTool);
registry.register_tool(RegexSearchTool);
registry.register_tool(ThinkingTool);
}


@@ -1,5 +1,5 @@
use anyhow::{anyhow, Context as _, Result};
use assistant_tool::Tool;
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
use language_model::LanguageModelRequestMessage;
use project::Project;
@@ -37,6 +37,7 @@ impl Tool for BashTool {
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let input: BashToolInput = match serde_json::from_value(input) {


@@ -1,5 +1,5 @@
use anyhow::{anyhow, Result};
use assistant_tool::Tool;
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
use language_model::LanguageModelRequestMessage;
use project::Project;
@@ -45,6 +45,7 @@ impl Tool for DeletePathTool {
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let glob = match serde_json::from_value::<DeletePathToolInput>(input) {


@@ -1,5 +1,5 @@
use anyhow::{anyhow, Result};
use assistant_tool::Tool;
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
use language::{DiagnosticSeverity, OffsetRangeExt};
use language_model::LanguageModelRequestMessage;
@@ -51,6 +51,7 @@ impl Tool for DiagnosticsTool {
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let input = match serde_json::from_value::<DiagnosticsToolInput>(input) {


@@ -2,13 +2,13 @@ mod edit_action;
pub mod log;
use anyhow::{anyhow, Context, Result};
use assistant_tool::Tool;
use assistant_tool::{ActionLog, Tool};
use collections::HashSet;
use edit_action::{EditAction, EditActionParser};
use futures::StreamExt;
use gpui::{App, AsyncApp, Entity, Task};
use language_model::{
LanguageModelRegistry, LanguageModelRequest, LanguageModelRequestMessage, Role,
LanguageModelRegistry, LanguageModelRequest, LanguageModelRequestMessage, MessageContent, Role,
};
use log::{EditToolLog, EditToolRequestId};
use project::{search::SearchQuery, Project};
@@ -80,6 +80,7 @@ impl Tool for EditFilesTool {
input: serde_json::Value,
messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let input = match serde_json::from_value::<EditFilesToolInput>(input) {
@@ -93,8 +94,14 @@ impl Tool for EditFilesTool {
log.new_request(input.edit_instructions.clone(), cx)
});
let task =
EditToolRequest::new(input, messages, project, Some((log.clone(), req_id)), cx);
let task = EditToolRequest::new(
input,
messages,
project,
action_log,
Some((log.clone(), req_id)),
cx,
);
cx.spawn(|mut cx| async move {
let result = task.await;
@@ -113,7 +120,7 @@ impl Tool for EditFilesTool {
})
}
None => EditToolRequest::new(input, messages, project, None, cx),
None => EditToolRequest::new(input, messages, project, action_log, None, cx),
}
}
}
@@ -123,7 +130,8 @@ struct EditToolRequest {
changed_buffers: HashSet<Entity<language::Buffer>>,
bad_searches: Vec<BadSearch>,
project: Entity<Project>,
log: Option<(Entity<EditToolLog>, EditToolRequestId)>,
action_log: Entity<ActionLog>,
tool_log: Option<(Entity<EditToolLog>, EditToolRequestId)>,
}
#[derive(Debug)]
@@ -143,7 +151,8 @@ impl EditToolRequest {
input: EditFilesToolInput,
messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
log: Option<(Entity<EditToolLog>, EditToolRequestId)>,
action_log: Entity<ActionLog>,
tool_log: Option<(Entity<EditToolLog>, EditToolRequestId)>,
cx: &mut App,
) -> Task<Result<String>> {
let model_registry = LanguageModelRegistry::read_global(cx);
@@ -152,12 +161,23 @@ impl EditToolRequest {
};
let mut messages = messages.to_vec();
if let Some(last_message) = messages.last_mut() {
// Strip out tool use from the last message because we're in the middle of executing a tool call.
last_message
.content
.retain(|content| !matches!(content, language_model::MessageContent::ToolUse(_)))
// Remove the last tool use (this run) to prevent an invalid request
'outer: for message in messages.iter_mut().rev() {
for (index, content) in message.content.iter().enumerate().rev() {
match content {
MessageContent::ToolUse(_) => {
message.content.remove(index);
break 'outer;
}
MessageContent::ToolResult(_) => {
// If we find any tool results before a tool use, the request is already valid
break 'outer;
}
MessageContent::Text(_) | MessageContent::Image(_) => {}
}
}
}
messages.push(LanguageModelRequestMessage {
role: Role::User,
content: vec![
@@ -182,8 +202,9 @@ impl EditToolRequest {
parser: EditActionParser::new(),
changed_buffers: HashSet::default(),
bad_searches: Vec::new(),
action_log,
project,
log,
tool_log,
};
while let Some(chunk) = chunks.stream.next().await {
@@ -197,7 +218,7 @@ impl EditToolRequest {
async fn process_response_chunk(&mut self, chunk: &str, cx: &mut AsyncApp) -> Result<()> {
let new_actions = self.parser.parse_chunk(chunk);
if let Some((ref log, req_id)) = self.log {
if let Some((ref log, req_id)) = self.tool_log {
log.update(cx, |log, cx| {
log.push_editor_response_chunk(req_id, chunk, &new_actions, cx)
})
@@ -310,7 +331,7 @@ impl EditToolRequest {
};
// Save each buffer once at the end
for buffer in self.changed_buffers {
for buffer in &self.changed_buffers {
let (path, save_task) = self.project.update(cx, |project, cx| {
let path = buffer
.read(cx)
@@ -329,10 +350,17 @@ impl EditToolRequest {
}
}
self.action_log
.update(cx, |log, cx| {
log.notify_buffers_changed(self.changed_buffers, cx)
})
.log_err();
let errors = self.parser.errors();
if errors.is_empty() && self.bad_searches.is_empty() {
Ok(answer.trim_end().to_string())
let answer = answer.trim_end().to_string();
Ok(answer)
} else {
if !self.bad_searches.is_empty() {
writeln!(
@@ -369,7 +397,7 @@ impl EditToolRequest {
but errors are part of the conversation so you don't need to repeat them."
)?;
Err(anyhow!(answer))
Err(anyhow!(answer.trim_end().to_string()))
}
}
}
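The reverse scan added above removes the most recent `ToolUse` from the message history, but stops early if a `ToolResult` is found first, since the request is already valid in that case. A self-contained sketch of that invariant over a simplified `MessageContent` enum (hypothetical types, not Zed's):

```rust
// Simplified stand-in for language_model::MessageContent.
#[derive(Debug, PartialEq)]
enum MessageContent {
    Text(String),
    ToolUse(String),
    ToolResult(String),
}

// Walk content from the end; drop the last ToolUse (the in-flight call),
// but stop if a ToolResult appears first -- that tool use is answered,
// so the request is already valid.
fn strip_dangling_tool_use(content: &mut Vec<MessageContent>) {
    for index in (0..content.len()).rev() {
        match &content[index] {
            MessageContent::ToolUse(_) => {
                content.remove(index);
                return;
            }
            MessageContent::ToolResult(_) => return,
            MessageContent::Text(_) => {}
        }
    }
}
```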


@@ -355,6 +355,7 @@ impl std::fmt::Display for ParseError {
mod tests {
use super::*;
use rand::prelude::*;
use util::line_endings;
#[test]
fn test_simple_edit_action() {
@@ -798,19 +799,17 @@ fn new_utils_func() {}
EditAction::Replace {
file_path: PathBuf::from("mathweb/flask/app.py"),
old: "from flask import Flask".to_string(),
new: "import math\nfrom flask import Flask".to_string(),
}
.fix_lf(),
new: line_endings!("import math\nfrom flask import Flask").to_string(),
},
);
assert_eq!(
actions[1],
EditAction::Replace {
file_path: PathBuf::from("mathweb/flask/app.py"),
old: "def factorial(n):\n \"compute factorial\"\n\n if n == 0:\n return 1\n else:\n return n * factorial(n-1)\n".to_string(),
old: line_endings!("def factorial(n):\n \"compute factorial\"\n\n if n == 0:\n return 1\n else:\n return n * factorial(n-1)\n").to_string(),
new: "".to_string(),
}
.fix_lf()
);
assert_eq!(
@@ -819,28 +818,30 @@ fn new_utils_func() {}
file_path: PathBuf::from("mathweb/flask/app.py"),
old: " return str(factorial(n))".to_string(),
new: " return str(math.factorial(n))".to_string(),
}
.fix_lf(),
},
);
assert_eq!(
actions[3],
EditAction::Write {
file_path: PathBuf::from("hello.py"),
content: "def hello():\n \"print a greeting\"\n\n print(\"hello\")"
.to_string(),
}
.fix_lf(),
content: line_endings!(
"def hello():\n \"print a greeting\"\n\n print(\"hello\")"
)
.to_string(),
},
);
assert_eq!(
actions[4],
EditAction::Replace {
file_path: PathBuf::from("main.py"),
old: "def hello():\n \"print a greeting\"\n\n print(\"hello\")".to_string(),
old: line_endings!(
"def hello():\n \"print a greeting\"\n\n print(\"hello\")"
)
.to_string(),
new: "from hello import hello".to_string(),
}
.fix_lf(),
},
);
// The system prompt includes some text that would produce errors
@@ -860,29 +861,6 @@ fn new_utils_func() {}
);
}
impl EditAction {
fn fix_lf(self: EditAction) -> EditAction {
#[cfg(windows)]
match self {
EditAction::Replace {
file_path,
old,
new,
} => EditAction::Replace {
file_path: file_path.clone(),
old: old.replace("\n", "\r\n"),
new: new.replace("\n", "\r\n"),
},
EditAction::Write { file_path, content } => EditAction::Write {
file_path: file_path.clone(),
content: content.replace("\n", "\r\n"),
},
}
#[cfg(not(windows))]
self
}
}
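The removed `fix_lf` helper rewrote `\n` to `\r\n` on Windows so test expectations match platform line endings; the `line_endings!` macro presumably does the same at the string level. A sketch of that normalization as a plain function (an assumption about the macro's behavior, inferred from the deleted code above):

```rust
// Normalize expected-string line endings per platform, as the removed
// fix_lf helper did: "\r\n" on Windows, "\n" everywhere else.
fn normalize_line_endings(s: &str) -> String {
    if cfg!(windows) {
        s.replace('\n', "\r\n")
    } else {
        s.to_string()
    }
}
```

Doing this once per string literal, rather than per `EditAction` variant, is what lets the tests drop the match over `Replace`/`Write`.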
#[test]
fn test_print_error() {
let input = r#"src/main.rs


@@ -1,5 +1,5 @@
use anyhow::{anyhow, Result};
use assistant_tool::Tool;
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
use language_model::LanguageModelRequestMessage;
use project::Project;
@@ -55,6 +55,7 @@ impl Tool for ListDirectoryTool {
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let input = match serde_json::from_value::<ListDirectoryToolInput>(input) {


@@ -1,7 +1,7 @@
use std::sync::Arc;
use anyhow::{anyhow, Result};
use assistant_tool::Tool;
use assistant_tool::{ActionLog, Tool};
use chrono::{Local, Utc};
use gpui::{App, Entity, Task};
use language_model::LanguageModelRequestMessage;
@@ -45,6 +45,7 @@ impl Tool for NowTool {
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
_project: Entity<Project>,
_action_log: Entity<ActionLog>,
_cx: &mut App,
) -> Task<Result<String>> {
let input: NowToolInput = match serde_json::from_value(input) {


@@ -1,5 +1,5 @@
use anyhow::{anyhow, Result};
use assistant_tool::Tool;
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
use language_model::LanguageModelRequestMessage;
use project::Project;
@@ -45,6 +45,7 @@ impl Tool for PathSearchTool {
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let glob = match serde_json::from_value::<PathSearchToolInput>(input) {


@@ -2,7 +2,7 @@ use std::path::Path;
use std::sync::Arc;
use anyhow::{anyhow, Result};
use assistant_tool::Tool;
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
use language_model::LanguageModelRequestMessage;
use project::Project;
@@ -49,6 +49,7 @@ impl Tool for ReadFileTool {
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
let input = match serde_json::from_value::<ReadFileToolInput>(input) {


@@ -1,5 +1,5 @@
use anyhow::{anyhow, Result};
use assistant_tool::Tool;
use assistant_tool::{ActionLog, Tool};
use futures::StreamExt;
use gpui::{App, Entity, Task};
use language::OffsetRangeExt;
@@ -38,6 +38,7 @@ impl Tool for RegexSearchTool {
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
const CONTEXT_LINES: u32 = 2;
@@ -110,7 +111,7 @@ impl Tool for RegexSearchTool {
}
if output.is_empty() {
Ok("No matches found".into())
Ok("No matches found".to_string())
} else {
Ok(output)
}


@@ -0,0 +1,48 @@
use std::sync::Arc;
use anyhow::{anyhow, Result};
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
use language_model::LanguageModelRequestMessage;
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct ThinkingToolInput {
/// Content to think about. This should be a description of what to think about or
/// a problem to solve.
content: String,
}
pub struct ThinkingTool;
impl Tool for ThinkingTool {
fn name(&self) -> String {
"thinking".to_string()
}
fn description(&self) -> String {
include_str!("./thinking_tool/description.md").to_string()
}
fn input_schema(&self) -> serde_json::Value {
let schema = schemars::schema_for!(ThinkingToolInput);
serde_json::to_value(&schema).unwrap()
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
_project: Entity<Project>,
_action_log: Entity<ActionLog>,
_cx: &mut App,
) -> Task<Result<String>> {
// This tool just "thinks out loud" and doesn't perform any actions.
Task::ready(match serde_json::from_value::<ThinkingToolInput>(input) {
Ok(_input) => Ok("Finished thinking.".to_string()),
Err(err) => Err(anyhow!(err)),
})
}
}


@@ -0,0 +1 @@
A tool for thinking through problems, brainstorming ideas, or planning without executing any actions. Use this tool when you need to work through complex problems, develop strategies, or outline approaches before taking action.


@@ -1,7 +1,7 @@
use std::sync::Arc;
use anyhow::{anyhow, bail, Result};
use assistant_tool::{Tool, ToolSource};
use assistant_tool::{ActionLog, Tool, ToolSource};
use gpui::{App, Entity, Task};
use language_model::LanguageModelRequestMessage;
use project::Project;
@@ -61,6 +61,7 @@ impl Tool for ContextServerTool {
input: serde_json::Value,
_messages: &[LanguageModelRequestMessage],
_project: Entity<Project>,
_action_log: Entity<ActionLog>,
cx: &mut App,
) -> Task<Result<String>> {
if let Some(server) = self.server_manager.read(cx).get_server(&self.server_id) {


@@ -38,6 +38,7 @@ mod proposed_changes_editor;
mod rust_analyzer_ext;
pub mod scroll;
mod selections_collection;
mod smooth_cursor_manager;
pub mod tasks;
#[cfg(test)]
@@ -152,6 +153,7 @@ use selections_collection::{
use serde::{Deserialize, Serialize};
use settings::{update_settings_file, Settings, SettingsLocation, SettingsStore};
use smallvec::SmallVec;
use smooth_cursor_manager::SmoothCursorManager;
use snippet::Snippet;
use std::{
any::TypeId,
@@ -760,6 +762,7 @@ pub struct Editor {
toggle_fold_multiple_buffers: Task<()>,
_scroll_cursor_center_top_bottom_task: Task<()>,
serialize_selections: Task<()>,
smooth_cursor_manager: SmoothCursorManager,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq, Default)]
@@ -1467,6 +1470,7 @@ impl Editor {
serialize_selections: Task::ready(()),
text_style_refinement: None,
load_diff_task: load_uncommitted_diff,
smooth_cursor_manager: SmoothCursorManager::Inactive,
};
this.tasks_update_task = Some(this.refresh_runnables(window, cx));
this._subscriptions.extend(project_subscriptions);
@@ -2030,6 +2034,7 @@ impl Editor {
local: bool,
old_cursor_position: &Anchor,
show_completions: bool,
pre_edit_pixel_points: HashMap<usize, Option<gpui::Point<Pixels>>>,
window: &mut Window,
cx: &mut Context<Self>,
) {
@@ -2162,6 +2167,23 @@ impl Editor {
hide_hover(self, cx);
let mut post_edit_pixel_points = HashMap::default();
for selection in self.selections.disjoint_anchors().iter() {
let head_point =
self.to_pixel_point(selection.head(), &self.snapshot(window, cx), window);
post_edit_pixel_points.insert(selection.id, head_point);
}
if let Some(pending) = self.selections.pending_anchor() {
let head_point =
self.to_pixel_point(pending.head(), &self.snapshot(window, cx), window);
post_edit_pixel_points.insert(pending.id, head_point);
}
self.smooth_cursor_manager
.update(pre_edit_pixel_points, post_edit_pixel_points);
if old_cursor_position.to_display_point(&display_map).row()
!= new_cursor_position.to_display_point(&display_map).row()
{
@@ -2279,6 +2301,21 @@ impl Editor {
change: impl FnOnce(&mut MutableSelectionsCollection<'_>) -> R,
) -> R {
let old_cursor_position = self.selections.newest_anchor().head();
let mut pre_edit_pixel_points = HashMap::default();
for selection in self.selections.disjoint_anchors().iter() {
let head_point =
self.to_pixel_point(selection.head(), &self.snapshot(window, cx), window);
pre_edit_pixel_points.insert(selection.id, head_point);
}
if let Some(pending) = self.selections.pending_anchor() {
let head_point =
self.to_pixel_point(pending.head(), &self.snapshot(window, cx), window);
pre_edit_pixel_points.insert(pending.id, head_point);
}
self.push_to_selection_history();
let (changed, result) = self.selections.change_with(cx, change);
@@ -2287,7 +2324,14 @@ impl Editor {
if let Some(autoscroll) = autoscroll {
self.request_autoscroll(autoscroll, cx);
}
self.selections_did_change(true, &old_cursor_position, request_completions, window, cx);
self.selections_did_change(
true,
&old_cursor_position,
request_completions,
pre_edit_pixel_points,
window,
cx,
);
if self.should_open_signature_help_automatically(
&old_cursor_position,
@@ -2918,6 +2962,17 @@ impl Editor {
.next()
.map_or(true, |c| scope.should_autoclose_before(c));
let preceding_text_allows_autoclose = selection.start.column == 0
|| snapshot.reversed_chars_at(selection.start).next().map_or(
true,
|c| {
bracket_pair.start != bracket_pair.end
|| !snapshot
.char_classifier_at(selection.start)
.is_word(c)
},
);
let is_closing_quote = if bracket_pair.end == bracket_pair.start
&& bracket_pair.start.len() == 1
{
@@ -2935,6 +2990,7 @@ impl Editor {
if autoclose
&& bracket_pair.close
&& following_text_allows_autoclose
&& preceding_text_allows_autoclose
&& !is_closing_quote
{
let anchor = snapshot.anchor_before(selection.end);
@@ -3088,6 +3144,20 @@ impl Editor {
let initial_buffer_versions =
jsx_tag_auto_close::construct_initial_buffer_versions_map(this, &edits, cx);
let mut pre_edit_pixel_points = HashMap::default();
for selection in this.selections.disjoint_anchors().iter() {
let head_point =
this.to_pixel_point(selection.head(), &this.snapshot(window, cx), window);
pre_edit_pixel_points.insert(selection.id, head_point);
}
if let Some(pending) = this.selections.pending_anchor() {
let head_point =
this.to_pixel_point(pending.head(), &this.snapshot(window, cx), window);
pre_edit_pixel_points.insert(pending.id, head_point);
}
this.buffer.update(cx, |buffer, cx| {
buffer.edit(edits, this.autoindent_mode.clone(), cx);
});
@@ -3189,6 +3259,22 @@ impl Editor {
linked_editing_ranges::refresh_linked_ranges(this, window, cx);
this.refresh_inline_completion(true, false, window, cx);
jsx_tag_auto_close::handle_from(this, initial_buffer_versions, window, cx);
let mut post_edit_pixel_points = HashMap::default();
for selection in this.selections.disjoint_anchors().iter() {
let head_point =
this.to_pixel_point(selection.head(), &this.snapshot(window, cx), window);
post_edit_pixel_points.insert(selection.id, head_point);
}
if let Some(pending) = this.selections.pending_anchor() {
let head_point =
this.to_pixel_point(pending.head(), &this.snapshot(window, cx), window);
post_edit_pixel_points.insert(pending.id, head_point);
}
this.smooth_cursor_manager
.update(pre_edit_pixel_points, post_edit_pixel_points);
});
}
@@ -3241,6 +3327,20 @@ impl Editor {
pub fn newline(&mut self, _: &Newline, window: &mut Window, cx: &mut Context<Self>) {
self.transact(window, cx, |this, window, cx| {
let mut pre_edit_pixel_points = HashMap::default();
for selection in this.selections.disjoint_anchors().iter() {
let head_point =
this.to_pixel_point(selection.head(), &this.snapshot(window, cx), window);
pre_edit_pixel_points.insert(selection.id, head_point);
}
if let Some(pending) = this.selections.pending_anchor() {
let head_point =
this.to_pixel_point(pending.head(), &this.snapshot(window, cx), window);
pre_edit_pixel_points.insert(pending.id, head_point);
}
let (edits, selection_fixup_info): (Vec<_>, Vec<_>) = {
let selections = this.selections.all::<usize>(cx);
let multi_buffer = this.buffer.read(cx);
@@ -3351,6 +3451,23 @@ impl Editor {
s.select(new_selections)
});
this.refresh_inline_completion(true, false, window, cx);
let mut post_edit_pixel_points = HashMap::default();
for selection in this.selections.disjoint_anchors().iter() {
let head_point =
this.to_pixel_point(selection.head(), &this.snapshot(window, cx), window);
post_edit_pixel_points.insert(selection.id, head_point);
}
if let Some(pending) = this.selections.pending_anchor() {
let head_point =
this.to_pixel_point(pending.head(), &this.snapshot(window, cx), window);
post_edit_pixel_points.insert(pending.id, head_point);
}
this.smooth_cursor_manager
.update(pre_edit_pixel_points, post_edit_pixel_points);
});
}
@@ -4937,6 +5054,9 @@ impl Editor {
window: &mut Window,
cx: &mut Context<Editor>,
) {
if matches!(self.mode, EditorMode::SingleLine { .. }) {
return;
}
self.selection_highlight_task.take();
if !EditorSettings::get_global(cx).selection_highlight {
self.clear_background_highlights::<SelectedTextHighlight>(cx);
@@ -5185,6 +5305,9 @@ impl Editor {
cx: &App,
) -> bool {
maybe!({
if self.read_only(cx) {
return Some(false);
}
let provider = self.edit_prediction_provider()?;
if !provider.is_enabled(&buffer, buffer_position, cx) {
return Some(false);
@@ -12820,11 +12943,11 @@ impl Editor {
cx.spawn_in(window, |_, mut cx| async move {
let transaction = futures::select_biased! {
transaction = format.log_err().fuse() => transaction,
() = timeout => {
log::warn!("timed out waiting for formatting");
None
}
transaction = format.log_err().fuse() => transaction,
};
buffer
@@ -13167,7 +13290,14 @@ impl Editor {
s.clear_pending();
}
});
self.selections_did_change(false, &old_cursor_position, true, window, cx);
self.selections_did_change(
false,
&old_cursor_position,
true,
HashMap::default(),
window,
cx,
);
}
fn push_to_selection_history(&mut self) {


@@ -6357,12 +6357,29 @@ async fn test_autoclose_and_auto_surround_pairs(cx: &mut TestAppContext) {
cx.update_editor(|editor, window, cx| editor.handle_input("{", window, cx));
cx.assert_editor_state("{«aˇ»} b");
// Autoclose pair where the start and end characters are the same
// Autoclose when not immediately after a word character
cx.set_state("a ˇ");
cx.update_editor(|editor, window, cx| editor.handle_input("\"", window, cx));
cx.assert_editor_state("a \"ˇ\"");
// Autoclose pair where the start and end characters are the same
cx.update_editor(|editor, window, cx| editor.handle_input("\"", window, cx));
cx.assert_editor_state("a \"\"ˇ");
// Don't autoclose when immediately after a word character
cx.set_state("");
cx.update_editor(|editor, window, cx| editor.handle_input("\"", window, cx));
cx.assert_editor_state("a\"ˇ\"");
cx.assert_editor_state("a\"ˇ");
// Do autoclose when after a non-word character
cx.set_state("");
cx.update_editor(|editor, window, cx| editor.handle_input("\"", window, cx));
cx.assert_editor_state("a\"\"ˇ");
cx.assert_editor_state("{\"ˇ\"");
// Non identical pairs autoclose regardless of preceding character
cx.set_state("");
cx.update_editor(|editor, window, cx| editor.handle_input("{", window, cx));
cx.assert_editor_state("a{ˇ}");
// Don't autoclose pair if autoclose is disabled
cx.set_state("ˇ");
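The tests above encode the new rule: identical-pair characters (like quotes) no longer autoclose immediately after a word character, while non-identical pairs (like braces) autoclose regardless. A minimal predicate capturing that rule, with `is_alphanumeric`/`_` as a simplified stand-in for the editor's `char_classifier` word check:

```rust
// Simplified autoclose decision for the character preceding the cursor.
// `identical_pair` is true when the open and close characters match
// (e.g. quotes); the word-character test is an approximation.
fn preceding_text_allows_autoclose(preceding: Option<char>, identical_pair: bool) -> bool {
    match preceding {
        // Start of line: always allow.
        None => true,
        // Non-identical pairs autoclose regardless of the preceding char;
        // identical pairs only when it is not a word character.
        Some(c) => !identical_pair || !(c.is_alphanumeric() || c == '_'),
    }
}
```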


@@ -15,15 +15,15 @@ use crate::{
inlay_hint_settings,
items::BufferSearchHighlights,
mouse_context_menu::{self, MenuPosition, MouseContextMenu},
scroll::{axis_pair, scroll_amount::ScrollAmount, AxisPair, HorizontalLayoutDetails},
scroll::{axis_pair, scroll_amount::ScrollAmount, AxisPair},
BlockId, ChunkReplacement, CursorShape, CustomBlockId, DisplayDiffHunk, DisplayPoint,
DisplayRow, DocumentHighlightRead, DocumentHighlightWrite, EditDisplayMode, Editor, EditorMode,
EditorSettings, EditorSnapshot, EditorStyle, FocusedBlock, GoToHunk, GoToPreviousHunk,
GutterDimensions, HalfPageDown, HalfPageUp, HandleInput, HoveredCursor, InlayHintRefreshReason,
InlineCompletion, JumpData, LineDown, LineHighlight, LineUp, OpenExcerpts, PageDown, PageUp,
Point, RowExt, RowRangeExt, SelectPhase, SelectedTextHighlight, Selection, SoftWrap,
StickyHeaderExcerpt, ToPoint, ToggleFold, COLUMNAR_SELECTION_MODIFIERS, CURSORS_VISIBLE_FOR,
FILE_HEADER_HEIGHT, GIT_BLAME_MAX_AUTHOR_CHARS_DISPLAYED, MAX_LINE_LEN,
Point, RowExt, RowRangeExt, SelectPhase, SelectedTextHighlight, Selection, SmoothCursorManager,
SoftWrap, StickyHeaderExcerpt, ToPoint, ToggleFold, COLUMNAR_SELECTION_MODIFIERS,
CURSORS_VISIBLE_FOR, FILE_HEADER_HEIGHT, GIT_BLAME_MAX_AUTHOR_CHARS_DISPLAYED, MAX_LINE_LEN,
MULTI_BUFFER_EXCERPT_HEADER_HEIGHT,
};
use buffer_diff::{DiffHunkStatus, DiffHunkStatusKind};
@@ -83,6 +83,7 @@ const INLINE_BLAME_PADDING_EM_WIDTHS: f32 = 7.;
const MIN_SCROLL_THUMB_SIZE: f32 = 25.;
struct SelectionLayout {
id: usize,
head: DisplayPoint,
cursor_shape: CursorShape,
is_newest: bool,
@@ -140,6 +141,7 @@ impl SelectionLayout {
}
Self {
id: selection.id,
head,
cursor_shape,
is_newest,
@@ -1151,12 +1153,29 @@ impl EditorElement {
let cursor_layouts = self.editor.update(cx, |editor, cx| {
let mut cursors = Vec::new();
let is_animating =
!matches!(editor.smooth_cursor_manager, SmoothCursorManager::Inactive);
let animated_selection_ids = if is_animating {
match &editor.smooth_cursor_manager {
SmoothCursorManager::Active { cursors } => {
cursors.keys().copied().collect::<HashSet<_>>()
}
_ => HashSet::default(),
}
} else {
HashSet::default()
};
let show_local_cursors = editor.show_local_cursors(window, cx);
for (player_color, selections) in selections {
for selection in selections {
let cursor_position = selection.head;
if animated_selection_ids.contains(&selection.id) {
continue;
}
let in_range = visible_display_row_range.contains(&cursor_position.row());
if (selection.is_local && !show_local_cursors)
|| !in_range
@@ -1283,6 +1302,19 @@ impl EditorElement {
}
}
if is_animating {
let animated_cursors = self.layout_animated_cursors(
editor,
content_origin,
line_height,
em_advance,
selections,
window,
cx,
);
cursors.extend(animated_cursors);
}
cursors
});
@@ -1293,6 +1325,47 @@ impl EditorElement {
cursor_layouts
}
fn layout_animated_cursors(
&self,
editor: &mut Editor,
content_origin: gpui::Point<Pixels>,
line_height: Pixels,
em_advance: Pixels,
selections: &[(PlayerColor, Vec<SelectionLayout>)],
window: &mut Window,
cx: &mut App,
) -> Vec<CursorLayout> {
let new_positions = editor.smooth_cursor_manager.animate();
if !new_positions.is_empty() {
window.request_animation_frame();
}
new_positions
.into_iter()
.map(|(id, position)| {
// TODO (smit): inefficient way to look up cursor shape and player color
let (cursor_shape, player_color) = selections
.iter()
.find_map(|(player_color, sels)| {
sels.iter()
.find(|sel| sel.id == id)
.map(|sel| (sel.cursor_shape, *player_color))
})
.unwrap_or((CursorShape::Bar, editor.current_user_player_color(cx)));
let mut cursor = CursorLayout {
color: player_color.cursor,
block_width: em_advance,
origin: position,
line_height,
shape: cursor_shape,
block_text: None,
cursor_name: None,
};
cursor.layout(content_origin, None, window, cx);
cursor
})
.collect()
}
fn layout_scrollbars(
&self,
snapshot: &EditorSnapshot,
@@ -4376,7 +4449,9 @@ impl EditorElement {
}),
};
if let Some((hunk_bounds, background_color, corner_radii, _)) = hunk_to_paint {
if let Some((hunk_bounds, background_color, corner_radii, status)) = hunk_to_paint {
let unstaged = status.has_secondary_hunk();
// Flatten the background color with the editor color to prevent
// elements below transparent hunks from showing through
let flattened_background_color = cx
@@ -4385,13 +4460,29 @@ impl EditorElement {
.editor_background
.blend(background_color);
window.paint_quad(quad(
hunk_bounds,
corner_radii,
flattened_background_color,
Edges::default(),
transparent_black(),
));
if unstaged {
window.paint_quad(quad(
hunk_bounds,
corner_radii,
flattened_background_color,
Edges::default(),
transparent_black(),
));
} else {
let flattened_unstaged_background_color = cx
.theme()
.colors()
.editor_background
.blend(background_color.opacity(0.3));
window.paint_quad(quad(
hunk_bounds,
corner_radii,
flattened_unstaged_background_color,
Edges::all(Pixels(1.0)),
flattened_background_color,
));
}
}
}
});
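The `blend(background_color.opacity(0.3))` call above is ordinary alpha compositing of a semi-transparent hunk color over the opaque editor background. A per-channel sketch of that math (illustrative floats, not gpui's `Hsla` blend):

```rust
// Alpha-composite `src` (with the given alpha) over an opaque `dst`
// background, per channel: out = src * alpha + dst * (1 - alpha).
fn blend_channel(dst: f32, src: f32, alpha: f32) -> f32 {
    src * alpha + dst * (1.0 - alpha)
}

fn blend_rgb(dst: [f32; 3], src: [f32; 3], alpha: f32) -> [f32; 3] {
    [
        blend_channel(dst[0], src[0], alpha),
        blend_channel(dst[1], src[1], alpha),
        blend_channel(dst[2], src[2], alpha),
    ]
}
```

Flattening against the background up front, as the code above does, keeps elements beneath a transparent hunk from showing through when the quad is painted.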
@@ -7031,11 +7122,6 @@ impl Element for EditorElement {
);
self.editor.update(cx, |editor, cx| {
editor.scroll_manager.latest_horizontal_details = HorizontalLayoutDetails {
letter_width: letter_size.width.0,
editor_width: editor_width.0,
scroll_max: scroll_max.x,
};
let clamped = editor.scroll_manager.clamp_scroll_left(scroll_max.x);
let autoscrolled = if autoscroll_horizontally {


@@ -172,13 +172,6 @@ impl OngoingScroll {
}
}
#[derive(Default, Debug)]
pub struct HorizontalLayoutDetails {
pub letter_width: f32,
pub editor_width: f32,
pub scroll_max: f32,
}
pub struct ScrollManager {
pub(crate) vertical_scroll_margin: f32,
anchor: ScrollAnchor,
@@ -190,7 +183,6 @@ pub struct ScrollManager {
dragging_scrollbar: AxisPair<bool>,
visible_line_count: Option<f32>,
forbid_vertical_scroll: bool,
pub(crate) latest_horizontal_details: HorizontalLayoutDetails,
}
impl ScrollManager {
@@ -206,7 +198,6 @@ impl ScrollManager {
last_autoscroll: None,
visible_line_count: None,
forbid_vertical_scroll: false,
latest_horizontal_details: Default::default(),
}
}
@@ -388,15 +379,6 @@ impl ScrollManager {
cx.notify();
}
pub fn horizontal_scroll(
&mut self,
f: impl Fn(f32, &HorizontalLayoutDetails) -> f32,
cx: &mut Context<Editor>,
) {
self.anchor.offset.x = f(self.anchor.offset.x, &self.latest_horizontal_details);
cx.notify();
}
pub fn clamp_scroll_left(&mut self, max: f32) -> bool {
if max < self.anchor.offset.x {
self.anchor.offset.x = max;


@@ -0,0 +1,117 @@
use collections::HashMap;
use gpui::Pixels;
const DELTA_PERCENT_PER_FRAME: f32 = 0.01;
pub struct Cursor {
current_position: gpui::Point<Pixels>,
target_position: gpui::Point<Pixels>,
}
pub enum SmoothCursorManager {
Inactive,
Active { cursors: HashMap<usize, Cursor> },
}
impl SmoothCursorManager {
pub fn update(
&mut self,
source_positions: HashMap<usize, Option<gpui::Point<Pixels>>>,
target_positions: HashMap<usize, Option<gpui::Point<Pixels>>>,
) {
if source_positions.len() == 1 && target_positions.len() == 1 {
let old_id = source_positions.keys().next().unwrap();
let new_id = target_positions.keys().next().unwrap();
if old_id != new_id {
if let (Some(Some(old_pos)), Some(Some(new_pos))) = (
source_positions.values().next(),
target_positions.values().next(),
) {
*self = Self::Active {
cursors: HashMap::from_iter([(
*new_id,
Cursor {
current_position: *old_pos,
target_position: *new_pos,
},
)]),
};
return;
}
}
}
match self {
Self::Inactive => {
let mut cursors = HashMap::default();
for (id, target_position) in target_positions.iter() {
let Some(target_position) = target_position else {
continue;
};
let Some(Some(source_position)) = source_positions.get(id) else {
continue;
};
if source_position == target_position {
continue;
}
cursors.insert(
*id,
Cursor {
current_position: *source_position,
target_position: *target_position,
},
);
}
if !cursors.is_empty() {
*self = Self::Active { cursors };
}
}
Self::Active { cursors } => {
for (id, target_position) in target_positions.iter() {
let Some(target_position) = target_position else {
continue;
};
if let Some(cursor) = cursors.get_mut(id) {
cursor.target_position = *target_position;
}
}
}
}
}
pub fn animate(&mut self) -> HashMap<usize, gpui::Point<Pixels>> {
match self {
Self::Inactive => HashMap::default(),
Self::Active { cursors } => {
let mut new_positions = HashMap::default();
let mut completed = Vec::new();
for (id, cursor) in cursors.iter_mut() {
let dx = cursor.target_position.x - cursor.current_position.x;
let dy = cursor.target_position.y - cursor.current_position.y;
let distance = (dx.0.powi(2) + dy.0.powi(2)).sqrt();
if distance < 0.2 {
new_positions.insert(*id, cursor.target_position);
completed.push(*id);
} else {
cursor.current_position.x =
Pixels(cursor.current_position.x.0 + dx.0 * DELTA_PERCENT_PER_FRAME);
cursor.current_position.y =
Pixels(cursor.current_position.y.0 + dy.0 * DELTA_PERCENT_PER_FRAME);
new_positions.insert(*id, cursor.current_position);
}
}
for id in completed {
cursors.remove(&id);
}
if cursors.is_empty() {
*self = Self::Inactive;
}
new_positions
}
}
}
}
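The easing rule implemented by `animate` above can be shown in isolation: each frame the cursor moves a fixed fraction of the remaining distance toward its target, and snaps once it gets close enough. A minimal sketch with `gpui::Pixels` replaced by plain `f32` pairs (the `step` name and the tuple representation are assumptions, not the actual API):

```rust
/// Fraction of the remaining distance covered on each animation frame.
const DELTA_PERCENT_PER_FRAME: f32 = 0.01;
/// Distance (in pixels) below which the cursor snaps to its target.
const SNAP_THRESHOLD: f32 = 0.2;

/// Advances a cursor one frame toward its target, returning the new
/// position and whether the animation for this cursor has completed.
fn step(current: (f32, f32), target: (f32, f32)) -> ((f32, f32), bool) {
    let (dx, dy) = (target.0 - current.0, target.1 - current.1);
    let distance = (dx * dx + dy * dy).sqrt();
    if distance < SNAP_THRESHOLD {
        // Close enough: land exactly on the target and finish.
        (target, true)
    } else {
        // Exponential ease-out: cover 1% of the remaining distance.
        (
            (
                current.0 + dx * DELTA_PERCENT_PER_FRAME,
                current.1 + dy * DELTA_PERCENT_PER_FRAME,
            ),
            false,
        )
    }
}
```

Because each frame only covers a fixed percentage of what remains, the motion decays geometrically and would never quite arrive; the snap threshold is what terminates the animation.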

View File

@@ -4,6 +4,7 @@ version = "0.1.0"
edition.workspace = true
authors = ["Nathan Sobo <nathan@zed.dev>"]
description = "Zed's GPU-accelerated UI framework"
repository = "https://github.com/zed-industries/zed"
publish.workspace = true
license = "Apache-2.0"

View File

@@ -25,30 +25,26 @@ impl Render for PatternExample {
.flex_col()
.border_1()
.border_color(gpui::blue())
.child(
div()
.w(px(54.0))
.h(px(18.0))
.bg(pattern_slash(gpui::red(), 18.0 / 2.0)),
)
.child(
div()
.w(px(54.0))
.h(px(18.0))
.bg(pattern_slash(gpui::red(), 18.0 / 2.0)),
)
.child(
div()
.w(px(54.0))
.h(px(18.0))
.bg(pattern_slash(gpui::red(), 18.0 / 2.0)),
)
.child(
div()
.w(px(54.0))
.h(px(18.0))
.bg(pattern_slash(gpui::red(), 18.0 / 2.0)),
),
.child(div().w(px(54.0)).h(px(18.0)).bg(pattern_slash(
gpui::red(),
18.0 / 4.0,
18.0 / 4.0,
)))
.child(div().w(px(54.0)).h(px(18.0)).bg(pattern_slash(
gpui::red(),
18.0 / 4.0,
18.0 / 4.0,
)))
.child(div().w(px(54.0)).h(px(18.0)).bg(pattern_slash(
gpui::red(),
18.0 / 4.0,
18.0 / 4.0,
)))
.child(div().w(px(54.0)).h(px(18.0)).bg(pattern_slash(
gpui::red(),
18.0 / 4.0,
18.0 / 2.0,
))),
)
.child(
div()
@@ -58,30 +54,26 @@ impl Render for PatternExample {
.border_color(gpui::blue())
.bg(gpui::green().opacity(0.16))
.child("Elements the same height should align")
.child(
div()
.w(px(256.0))
.h(px(56.0))
.bg(pattern_slash(gpui::red(), 56.0 / 3.0)),
)
.child(
div()
.w(px(256.0))
.h(px(56.0))
.bg(pattern_slash(gpui::green(), 56.0 / 3.0)),
)
.child(
div()
.w(px(256.0))
.h(px(56.0))
.bg(pattern_slash(gpui::blue(), 56.0 / 3.0)),
)
.child(
div()
.w(px(256.0))
.h(px(26.0))
.bg(pattern_slash(gpui::yellow(), 56.0 / 3.0)),
),
.child(div().w(px(256.0)).h(px(56.0)).bg(pattern_slash(
gpui::red(),
56.0 / 6.0,
56.0 / 6.0,
)))
.child(div().w(px(256.0)).h(px(56.0)).bg(pattern_slash(
gpui::green(),
56.0 / 6.0,
56.0 / 6.0,
)))
.child(div().w(px(256.0)).h(px(56.0)).bg(pattern_slash(
gpui::blue(),
56.0 / 6.0,
56.0 / 6.0,
)))
.child(div().w(px(256.0)).h(px(26.0)).bg(pattern_slash(
gpui::yellow(),
56.0 / 6.0,
56.0 / 6.0,
))),
)
.child(
div()

View File

@@ -11,12 +11,38 @@ impl Render for HelloWorld {
.bg(gpui::white())
.flex()
.flex_col()
.gap_3()
.gap_2()
.p_4()
.size_full()
.child(div().child("Text left"))
.child(div().text_center().child("Text center"))
.child(div().text_right().child("Text right"))
.child(div().text_decoration_1().child("Text left (underline)"))
.child(
div()
.text_center()
.text_decoration_1()
.child("Text center (underline)"),
)
.child(
div()
.text_right()
.text_decoration_1()
.child("Text right (underline)"),
)
.child(div().line_through().child("Text left (line_through)"))
.child(
div()
.text_center()
.line_through()
.child("Text center (line_through)"),
)
.child(
div()
.text_right()
.line_through()
.child("Text right (line_through)"),
)
.child(
div()
.flex()

View File

@@ -683,7 +683,11 @@ impl Default for Background {
}
/// Creates a hash pattern background
pub fn pattern_slash(color: Hsla, height: f32) -> Background {
pub fn pattern_slash(color: Hsla, width: f32, interval: f32) -> Background {
let width_scaled = (width * 255.0) as u32;
let interval_scaled = (interval * 255.0) as u32;
let height = ((width_scaled * 0xFFFF) + interval_scaled) as f32;
Background {
tag: BackgroundTag::PatternSlash,
solid: color,

View File

@@ -360,7 +360,10 @@ fn gradient_color(background: Background, position: vec2<f32>, bounds: Bounds,
}
}
case 2u: {
let pattern_height = background.gradient_angle_or_pattern_height;
let gradient_angle_or_pattern_height = background.gradient_angle_or_pattern_height;
let pattern_width = (gradient_angle_or_pattern_height / 65535.0f) / 255.0f;
let pattern_interval = (gradient_angle_or_pattern_height % 65535.0f) / 255.0f;
let pattern_height = pattern_width + pattern_interval;
let stripe_angle = M_PI_F / 4.0;
let pattern_period = pattern_height * sin(stripe_angle);
let rotation = mat2x2<f32>(
@@ -370,7 +373,7 @@ fn gradient_color(background: Background, position: vec2<f32>, bounds: Bounds,
let relative_position = position - bounds.origin;
let rotated_point = rotation * relative_position;
let pattern = rotated_point.x % pattern_period;
let distance = min(pattern, pattern_period - pattern) - pattern_period / 4;
let distance = min(pattern, pattern_period - pattern) - pattern_period * (pattern_width / pattern_height) / 2.0f;
background_color = solid_color;
background_color.a *= saturate(0.5 - distance);
}
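The new `pattern_slash` signature packs both the stripe width and the interval into the single `f32` field that previously held only the pattern height, and the shaders split the value back apart with a divide and a modulo by `0xFFFF`. A minimal standalone sketch of that round trip (function names are hypothetical; the bodies mirror the encode in `pattern_slash` and the shader decode above):

```rust
/// Packs a stripe width and interval (in pixels) into the single f32
/// slot that previously carried only the pattern height.
fn pack_width_and_interval(width: f32, interval: f32) -> f32 {
    let width_scaled = (width * 255.0) as u32;
    let interval_scaled = (interval * 255.0) as u32;
    ((width_scaled * 0xFFFF) + interval_scaled) as f32
}

/// Decodes the packed value the way the WGSL/Metal shaders do:
/// division by 65535 recovers the width, the remainder the interval.
fn unpack_width_and_interval(packed: f32) -> (f32, f32) {
    let width = (packed / 65535.0) / 255.0;
    let interval = (packed % 65535.0) / 255.0;
    (width, interval)
}
```

Note the decode is approximate: for larger widths the packed integer exceeds the 24-bit mantissa of an `f32`, so the recovered interval drifts slightly; the shaders tolerate this because the values only steer stripe geometry.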

View File

@@ -875,14 +875,17 @@ float4 fill_color(Background background,
break;
}
case 2: {
float pattern_height = background.gradient_angle_or_pattern_height;
float gradient_angle_or_pattern_height = background.gradient_angle_or_pattern_height;
float pattern_width = (gradient_angle_or_pattern_height / 65535.0f) / 255.0f;
float pattern_interval = fmod(gradient_angle_or_pattern_height, 65535.0f) / 255.0f;
float pattern_height = pattern_width + pattern_interval;
float stripe_angle = M_PI_F / 4.0;
float pattern_period = pattern_height * sin(stripe_angle);
float2x2 rotation = rotate2d(stripe_angle);
float2 relative_position = position - float2(bounds.origin.x, bounds.origin.y);
float2 rotated_point = rotation * relative_position;
float pattern = fmod(rotated_point.x, pattern_period);
float distance = min(pattern, pattern_period - pattern) - pattern_period / 4.0;
float distance = min(pattern, pattern_period - pattern) - pattern_period * (pattern_width / pattern_height) / 2.0f;
color = solid_color;
color.a *= saturate(0.5 - distance);
break;

View File

@@ -478,7 +478,7 @@ pub trait Styled: Sized {
}
/// Sets the font style of the element to normal (not italic).
/// [Docs](https://tailwindcss.com/docs/font-style#italicizing-text)
/// [Docs](https://tailwindcss.com/docs/font-style#displaying-text-normally)
fn not_italic(mut self) -> Self {
self.text_style()
.get_or_insert_with(Default::default)
@@ -498,7 +498,7 @@ pub trait Styled: Sized {
}
/// Sets the decoration of the text to have a line through it.
/// [Docs](https://tailwindcss.com/docs/text-decoration#setting-the-text-decoration)
/// [Docs](https://tailwindcss.com/docs/text-decoration-line#adding-a-line-through-text)
fn line_through(mut self) -> Self {
let style = self.text_style().get_or_insert_with(Default::default);
style.strikethrough = Some(StrikethroughStyle {

View File

@@ -39,23 +39,18 @@ use std::{
/// This is intended for use with the `gpui::test` macro
/// and generally should not be used directly.
pub fn run_test(
mut num_iterations: u64,
num_iterations: usize,
explicit_seeds: &[u64],
max_retries: usize,
test_fn: &mut (dyn RefUnwindSafe + Fn(TestDispatcher, u64)),
on_fail_fn: Option<fn()>,
) {
let starting_seed = env::var("SEED")
.map(|seed| seed.parse().expect("invalid SEED variable"))
.unwrap_or(0);
if let Ok(iterations) = env::var("ITERATIONS") {
num_iterations = iterations.parse().expect("invalid ITERATIONS variable");
}
let is_randomized = num_iterations > 1;
let (seeds, is_multiple_runs) = calculate_seeds(num_iterations as u64, explicit_seeds);
for seed in starting_seed..starting_seed + num_iterations {
let mut retry = 0;
for seed in seeds {
let mut attempt = 0;
loop {
if is_randomized {
if is_multiple_runs {
eprintln!("seed = {seed}");
}
let result = panic::catch_unwind(|| {
@@ -66,15 +61,15 @@ pub fn run_test(
match result {
Ok(_) => break,
Err(error) => {
if retry < max_retries {
println!("retrying: attempt {}", retry);
retry += 1;
if attempt < max_retries {
println!("attempt {} failed, retrying", attempt);
attempt += 1;
} else {
if is_randomized {
if is_multiple_runs {
eprintln!("failing seed: {}", seed);
}
if let Some(f) = on_fail_fn {
f()
if let Some(on_fail_fn) = on_fail_fn {
on_fail_fn()
}
panic::resume_unwind(error);
}
@@ -84,6 +79,54 @@ pub fn run_test(
}
}
fn calculate_seeds(
iterations: u64,
explicit_seeds: &[u64],
) -> (impl Iterator<Item = u64> + '_, bool) {
let iterations = env::var("ITERATIONS")
.ok()
.map(|var| var.parse().expect("invalid ITERATIONS variable"))
.unwrap_or(iterations);
let env_num = env::var("SEED")
.map(|seed| seed.parse().expect("invalid SEED variable as integer"))
.ok();
let empty_range = || 0..0;
let iter = {
let env_range = if let Some(env_num) = env_num {
env_num..env_num + 1
} else {
empty_range()
};
// if `iterations` is 1 and neither `explicit_seeds` nor `SEED` is given, run only the default seed `0`
// if `iterations` is 1 and `explicit_seeds` is non-empty or `SEED` is set, skip the default run `0`
// if `iterations` isn't 1 and `SEED` is set, run `SEED..SEED + iterations`
// otherwise, run `0..iterations`
let iterations_range = match (iterations, env_num) {
(1, None) if explicit_seeds.is_empty() => 0..1,
(1, None) | (1, Some(_)) => empty_range(),
(iterations, Some(env)) => env..env + iterations,
(iterations, None) => 0..iterations,
};
// if `SEED` is set, ignore `explicit_seeds`
let explicit_seeds = if env_num.is_some() {
&[]
} else {
explicit_seeds
};
env_range
.chain(iterations_range)
.chain(explicit_seeds.iter().copied())
};
let is_multiple_runs = iter.clone().nth(1).is_some();
(iter, is_multiple_runs)
}
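The seed-selection rules described in the comments above can be sketched as a plain function. This is a hypothetical standalone model: the `SEED` and `ITERATIONS` env vars become ordinary parameters, and the iterator is collected into a `Vec` instead of being returned lazily:

```rust
/// Mirrors calculate_seeds: an explicit SEED runs first and suppresses
/// attribute seeds; iterations expand into a contiguous seed range.
fn calculate_seeds(iterations: u64, env_seed: Option<u64>, explicit_seeds: &[u64]) -> Vec<u64> {
    let env_range = env_seed.map(|s| s..s + 1).unwrap_or(0..0);
    let iterations_range = match (iterations, env_seed) {
        // Default case: a single run with seed 0.
        (1, None) if explicit_seeds.is_empty() => 0..1,
        // Explicit seeds or SEED replace the default run.
        (1, _) => 0..0,
        (n, Some(seed)) => seed..seed + n,
        (n, None) => 0..n,
    };
    // An explicit SEED overrides any seeds declared on the attribute.
    let explicit_seeds = if env_seed.is_some() { &[][..] } else { explicit_seeds };
    env_range
        .chain(iterations_range)
        .chain(explicit_seeds.iter().copied())
        .collect()
}
```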
/// A test struct for converting an observation callback into a stream.
pub struct Observation<T> {
rx: Pin<Box<channel::Receiver<T>>>,

View File

@@ -196,11 +196,15 @@ fn paint_line(
);
let mut prev_glyph_position = Point::default();
let mut max_glyph_size = size(px(0.), px(0.));
let mut first_glyph_x = origin.x;
for (run_ix, run) in layout.runs.iter().enumerate() {
max_glyph_size = text_system.bounding_box(run.font_id, layout.font_size).size;
for (glyph_ix, glyph) in run.glyphs.iter().enumerate() {
glyph_origin.x += glyph.position.x - prev_glyph_position.x;
if glyph_ix == 0 {
first_glyph_x = glyph_origin.x;
}
if wraps.peek() == Some(&&WrapBoundary { run_ix, glyph_ix }) {
wraps.next();
@@ -355,7 +359,7 @@ fn paint_line(
}
}
let mut last_line_end_x = origin.x + layout.width;
let mut last_line_end_x = first_glyph_x + layout.width;
if let Some(boundary) = wrap_boundaries.last() {
let run = &layout.runs[boundary.run_ix];
let glyph = &run.glyphs[boundary.glyph_ix];

View File

@@ -144,7 +144,8 @@ pub fn box_shadow_style_methods(input: TokenStream) -> TokenStream {
}
/// `#[gpui::test]` can be used to annotate test functions that run with GPUI support.
/// it supports both synchronous and asynchronous tests, and can provide you with
///
/// It supports both synchronous and asynchronous tests, and can provide you with
/// as many `TestAppContext` instances as you need.
/// The output contains a `#[test]` annotation so this can be used with any existing
/// test harness (`cargo test` or `cargo-nextest`).
@@ -160,11 +161,25 @@ pub fn box_shadow_style_methods(input: TokenStream) -> TokenStream {
/// Using the same `StdRng` for behavior in your test will allow you to exercise a wide
/// variety of scenarios and interleavings just by changing the seed.
///
/// `#[gpui::test]` also takes three different arguments:
/// - `#[gpui::test(iterations=10)]` will run the test ten times with a different initial SEED.
/// - `#[gpui::test(retries=3)]` will run the test up to four times if it fails to try and make it pass.
/// - `#[gpui::test(on_failure="crate::test::report_failure")]` will call the specified function after the
/// # Arguments
///
/// - `#[gpui::test]` with no arguments runs once with seed `0`, or with the value of the `SEED` env var if it is set.
/// - `#[gpui::test(seed = 10)]` runs once with the seed `10`.
/// - `#[gpui::test(seeds(10, 20, 30))]` runs three times with seeds `10`, `20`, and `30`.
/// - `#[gpui::test(iterations = 5)]` runs five times, providing as seed the values in the range `0..5`.
/// - `#[gpui::test(retries = 3)]` reruns a failing test up to three more times (four runs in total) before reporting failure.
/// - `#[gpui::test(on_failure = "crate::test::report_failure")]` will call the specified function after the
/// tests fail so that you can write out more detail about the failure.
///
/// You can combine `iterations = ...` with `seeds(...)`:
/// - `#[gpui::test(iterations = 5, seed = 10)]` is equivalent to `#[gpui::test(seeds(0, 1, 2, 3, 4, 10))]`.
/// - `#[gpui::test(iterations = 5, seeds(10, 20, 30))]` is equivalent to `#[gpui::test(seeds(0, 1, 2, 3, 4, 10, 20, 30))]`.
/// - `#[gpui::test(seeds(10, 20, 30), iterations = 5)]` is equivalent to `#[gpui::test(seeds(0, 1, 2, 3, 4, 10, 20, 30))]`.
///
/// # Environment Variables
///
/// - `SEED`: sets a seed for the first run
/// - `ITERATIONS`: forces the value of the `iterations` argument
#[proc_macro_attribute]
pub fn test(args: TokenStream, function: TokenStream) -> TokenStream {
test::test(args, function)

View File

@@ -3,73 +3,72 @@ use proc_macro2::Ident;
use quote::{format_ident, quote};
use std::mem;
use syn::{
parse_macro_input, parse_quote, spanned::Spanned as _, AttributeArgs, FnArg, ItemFn, Lit, Meta,
NestedMeta, Type,
parse_quote, spanned::Spanned, AttributeArgs, FnArg, ItemFn, Lit, Meta, MetaList, NestedMeta,
PathSegment, Type,
};
pub fn test(args: TokenStream, function: TokenStream) -> TokenStream {
let args = syn::parse_macro_input!(args as AttributeArgs);
try_test(args, function).unwrap_or_else(|err| err)
}
fn try_test(args: Vec<NestedMeta>, function: TokenStream) -> Result<TokenStream, TokenStream> {
let mut seeds = Vec::<u64>::new();
let mut max_retries = 0;
let mut num_iterations = 1;
let mut on_failure_fn_name = quote!(None);
for arg in args {
match arg {
NestedMeta::Meta(Meta::NameValue(meta)) => {
let key_name = meta.path.get_ident().map(|i| i.to_string());
let result = (|| {
match key_name.as_deref() {
Some("retries") => max_retries = parse_int(&meta.lit)?,
Some("iterations") => num_iterations = parse_int(&meta.lit)?,
Some("on_failure") => {
if let Lit::Str(name) = meta.lit {
let mut path = syn::Path {
leading_colon: None,
segments: Default::default(),
};
for part in name.value().split("::") {
path.segments.push(Ident::new(part, name.span()).into());
}
on_failure_fn_name = quote!(Some(#path));
} else {
return Err(TokenStream::from(
syn::Error::new(
meta.lit.span(),
"on_failure argument must be a string",
)
.into_compile_error(),
));
}
}
_ => {
return Err(TokenStream::from(
syn::Error::new(meta.path.span(), "invalid argument")
.into_compile_error(),
))
}
}
Ok(())
})();
let NestedMeta::Meta(arg) = arg else {
return Err(error_with_message("unexpected literal", arg));
};
if let Err(tokens) = result {
return tokens;
}
let ident = {
let meta_path = match &arg {
Meta::NameValue(meta) => &meta.path,
Meta::List(list) => &list.path,
Meta::Path(path) => return Err(error_with_message("invalid path argument", path)),
};
let Some(ident) = meta_path.get_ident() else {
return Err(error_with_message("unexpected path", meta_path));
};
ident.to_string()
};
match (&arg, ident.as_str()) {
(Meta::NameValue(meta), "retries") => max_retries = parse_usize(&meta.lit)?,
(Meta::NameValue(meta), "iterations") => num_iterations = parse_usize(&meta.lit)?,
(Meta::NameValue(meta), "on_failure") => {
let Lit::Str(name) = &meta.lit else {
return Err(error_with_message(
"on_failure argument must be a string",
&meta.lit,
));
};
let segments = name
.value()
.split("::")
.map(|part| PathSegment::from(Ident::new(part, name.span())))
.collect();
let path = syn::Path {
leading_colon: None,
segments,
};
on_failure_fn_name = quote!(Some(#path));
}
other => {
return TokenStream::from(
syn::Error::new_spanned(other, "invalid argument").into_compile_error(),
)
(Meta::NameValue(meta), "seed") => seeds = vec![parse_usize(&meta.lit)? as u64],
(Meta::List(list), "seeds") => seeds = parse_u64_array(&list)?,
(Meta::Path(path), _) => {
return Err(error_with_message("invalid path argument", path));
}
(_, _) => {
return Err(error_with_message("invalid argument name", arg));
}
}
}
let seeds = quote!( #(#seeds),* );
let mut inner_fn = parse_macro_input!(function as ItemFn);
if max_retries > 0 && num_iterations > 1 {
return TokenStream::from(
syn::Error::new_spanned(inner_fn, "retries and randomized iterations can't be mixed")
.into_compile_error(),
);
}
let mut inner_fn = syn::parse::<ItemFn>(function).map_err(error_to_stream)?;
let inner_fn_attributes = mem::take(&mut inner_fn.attrs);
let inner_fn_name = format_ident!("_{}", inner_fn.sig.ident);
let outer_fn_name = mem::replace(&mut inner_fn.sig.ident, inner_fn_name.clone());
@@ -122,9 +121,7 @@ pub fn test(args: TokenStream, function: TokenStream) -> TokenStream {
}
}
return TokenStream::from(
syn::Error::new_spanned(arg, "invalid argument").into_compile_error(),
);
return Err(error_with_message("invalid function signature", arg));
}
parse_quote! {
@@ -133,7 +130,8 @@ pub fn test(args: TokenStream, function: TokenStream) -> TokenStream {
#inner_fn
gpui::run_test(
#num_iterations as u64,
#num_iterations,
&[#seeds],
#max_retries,
&mut |dispatcher, _seed| {
let executor = gpui::BackgroundExecutor::new(std::sync::Arc::new(dispatcher.clone()));
@@ -205,9 +203,7 @@ pub fn test(args: TokenStream, function: TokenStream) -> TokenStream {
}
}
return TokenStream::from(
syn::Error::new_spanned(arg, "invalid argument").into_compile_error(),
);
return Err(error_with_message("invalid function signature", arg));
}
parse_quote! {
@@ -216,7 +212,8 @@ pub fn test(args: TokenStream, function: TokenStream) -> TokenStream {
#inner_fn
gpui::run_test(
#num_iterations as u64,
#num_iterations,
&[#seeds],
#max_retries,
&mut |dispatcher, _seed| {
#cx_vars
@@ -230,15 +227,34 @@ pub fn test(args: TokenStream, function: TokenStream) -> TokenStream {
};
outer_fn.attrs.extend(inner_fn_attributes);
TokenStream::from(quote!(#outer_fn))
Ok(TokenStream::from(quote!(#outer_fn)))
}
fn parse_int(literal: &Lit) -> Result<usize, TokenStream> {
let result = if let Lit::Int(int) = &literal {
int.base10_parse()
} else {
Err(syn::Error::new(literal.span(), "must be an integer"))
fn parse_usize(literal: &Lit) -> Result<usize, TokenStream> {
let Lit::Int(int) = &literal else {
return Err(error_with_message("expected a usize", literal));
};
result.map_err(|err| TokenStream::from(err.into_compile_error()))
int.base10_parse().map_err(error_to_stream)
}
fn parse_u64_array(meta_list: &MetaList) -> Result<Vec<u64>, TokenStream> {
meta_list
.nested
.iter()
.map(|meta| {
if let NestedMeta::Lit(literal) = &meta {
parse_usize(literal).map(|value| value as u64)
} else {
Err(error_with_message("expected an integer", meta.span()))
}
})
.collect()
}
fn error_with_message(message: &str, spanned: impl Spanned) -> TokenStream {
error_to_stream(syn::Error::new(spanned.span(), message))
}
fn error_to_stream(err: syn::Error) -> TokenStream {
TokenStream::from(err.into_compile_error())
}

View File

@@ -139,31 +139,27 @@ impl LanguageSelectorDelegate {
let mut label = mat.string.clone();
let buffer_language = self.buffer.read(cx).language();
let need_icon = FileFinderSettings::get_global(cx).file_icons;
if let Some(buffer_language) = buffer_language {
let buffer_language_name = buffer_language.name();
if buffer_language_name.as_ref() == mat.string.as_str() {
label.push_str(" (current)");
let icon = need_icon
.then(|| self.language_icon(&buffer_language.config().matcher, cx))
.flatten();
return (label, icon);
}
}
if need_icon {
let language_name = LanguageName::new(mat.string.as_str());
match self
.language_registry
.available_language_for_name(language_name.as_ref())
{
Some(available_language) => {
let icon = self.language_icon(available_language.matcher(), cx);
(label, icon)
}
None => (label, None),
}
if let Some(buffer_language) = buffer_language
.filter(|buffer_language| buffer_language.name().as_ref() == mat.string.as_str())
{
label.push_str(" (current)");
let icon = need_icon
.then(|| self.language_icon(&buffer_language.config().matcher, cx))
.flatten();
(label, icon)
} else {
(label, None)
let icon = need_icon
.then(|| {
let language_name = LanguageName::new(mat.string.as_str());
self.language_registry
.available_language_for_name(language_name.as_ref())
.and_then(|available_language| {
self.language_icon(available_language.matcher(), cx)
})
})
.flatten();
(label, icon)
}
}
@@ -171,13 +167,7 @@ impl LanguageSelectorDelegate {
matcher
.path_suffixes
.iter()
.find_map(|extension| {
if extension.contains('.') {
None
} else {
FileIcons::get_icon(Path::new(&format!("file.{extension}")), cx)
}
})
.find_map(|extension| FileIcons::get_icon(Path::new(extension), cx))
.map(Icon::from_path)
.map(|icon| icon.color(Color::Muted))
}

View File

@@ -190,9 +190,15 @@ operator: "/" @operator
(parameter (identifier) @variable.parameter)
(attribute_item (attribute (identifier) @attribute))
(inner_attribute_item (attribute (identifier) @attribute))
(attribute_item (attribute [
(identifier) @attribute
(scoped_identifier name: (identifier) @attribute)
]))
(inner_attribute_item (attribute [
(identifier) @attribute
(scoped_identifier name: (identifier) @attribute)
]))
; Match nested snake case identifiers in attribute items.
(token_tree (identifier) @attribute (#match? @attribute "^[a-z\\d_]*$"))
; Override the attribute match for paths in scoped identifiers.
(token_tree (identifier) @variable "::")
; Override the attribute match for paths in scoped type/enum identifiers.
(token_tree (identifier) @variable "::" (identifier) @type (#match? @type "^[A-Z]"))

View File

@@ -4246,7 +4246,7 @@ impl ProjectPanel {
if skip_ignored
&& worktree
.entry_for_id(entry_id)
.map_or(true, |entry| entry.is_ignored)
.map_or(true, |entry| entry.is_ignored && !entry.is_always_included)
{
return;
}
@@ -7871,6 +7871,123 @@ mod tests {
);
}
#[gpui::test]
async fn test_gitignored_and_always_included(cx: &mut gpui::TestAppContext) {
init_test_with_editor(cx);
cx.update(|cx| {
cx.update_global::<SettingsStore, _>(|store, cx| {
store.update_user_settings::<WorktreeSettings>(cx, |worktree_settings| {
worktree_settings.file_scan_exclusions = Some(Vec::new());
worktree_settings.file_scan_inclusions =
Some(vec!["always_included_but_ignored_dir/*".to_string()]);
});
store.update_user_settings::<ProjectPanelSettings>(cx, |project_panel_settings| {
project_panel_settings.auto_reveal_entries = Some(false)
});
})
});
let fs = FakeFs::new(cx.background_executor.clone());
fs.insert_tree(
"/project_root",
json!({
".git": {},
".gitignore": "**/gitignored_dir\n/always_included_but_ignored_dir",
"dir_1": {
"file_1.py": "# File 1_1 contents",
"file_2.py": "# File 1_2 contents",
"file_3.py": "# File 1_3 contents",
"gitignored_dir": {
"file_a.py": "# File contents",
"file_b.py": "# File contents",
"file_c.py": "# File contents",
},
},
"dir_2": {
"file_1.py": "# File 2_1 contents",
"file_2.py": "# File 2_2 contents",
"file_3.py": "# File 2_3 contents",
},
"always_included_but_ignored_dir": {
"file_a.py": "# File contents",
"file_b.py": "# File contents",
"file_c.py": "# File contents",
},
}),
)
.await;
let project = Project::test(fs.clone(), ["/project_root".as_ref()], cx).await;
let workspace =
cx.add_window(|window, cx| Workspace::test_new(project.clone(), window, cx));
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let panel = workspace.update(cx, ProjectPanel::new).unwrap();
assert_eq!(
visible_entries_as_strings(&panel, 0..20, cx),
&[
"v project_root",
" > .git",
" > always_included_but_ignored_dir",
" > dir_1",
" > dir_2",
" .gitignore",
]
);
let gitignored_dir_file =
find_project_entry(&panel, "project_root/dir_1/gitignored_dir/file_a.py", cx);
let always_included_but_ignored_dir_file = find_project_entry(
&panel,
"project_root/always_included_but_ignored_dir/file_a.py",
cx,
)
.expect("file that is .gitignored but set to always be included should have an entry");
assert_eq!(
gitignored_dir_file, None,
"File in the gitignored dir should not have an entry unless its directory is toggled"
);
toggle_expand_dir(&panel, "project_root/dir_1", cx);
cx.run_until_parked();
cx.update(|_, cx| {
cx.update_global::<SettingsStore, _>(|store, cx| {
store.update_user_settings::<ProjectPanelSettings>(cx, |project_panel_settings| {
project_panel_settings.auto_reveal_entries = Some(true)
});
})
});
panel.update(cx, |panel, cx| {
panel.project.update(cx, |_, cx| {
cx.emit(project::Event::ActiveEntryChanged(Some(
always_included_but_ignored_dir_file,
)))
})
});
cx.run_until_parked();
assert_eq!(
visible_entries_as_strings(&panel, 0..20, cx),
&[
"v project_root",
" > .git",
" v always_included_but_ignored_dir",
" file_a.py <== selected <== marked",
" file_b.py",
" file_c.py",
" v dir_1",
" > gitignored_dir",
" file_1.py",
" file_2.py",
" file_3.py",
" > dir_2",
" .gitignore",
],
"When auto reveal is enabled, a gitignored but always included selected entry should be revealed in the project panel"
);
}
#[gpui::test]
async fn test_explicit_reveal(cx: &mut gpui::TestAppContext) {
init_test_with_editor(cx);

View File

@@ -326,6 +326,13 @@ impl Bind for Arc<Path> {
self.as_ref().bind(statement, start_index)
}
}
impl Column for Arc<Path> {
fn column(statement: &mut Statement, start_index: i32) -> Result<(Self, i32)> {
let blob = statement.column_blob(start_index)?;
PathBuf::try_from_bytes(blob).map(|path| (Arc::from(path.as_path()), start_index + 1))
}
}
impl StaticColumnCount for PathBuf {}
impl Bind for PathBuf {

View File

@@ -8,12 +8,12 @@ pub mod terminal_settings;
use alacritty_terminal::{
event::{Event as AlacTermEvent, EventListener, Notify, WindowSize},
event_loop::{EventLoop, Msg, Notifier},
grid::{Dimensions, Scroll as AlacScroll},
grid::{Dimensions, Grid, Row, Scroll as AlacScroll},
index::{Boundary, Column, Direction as AlacDirection, Line, Point as AlacPoint},
selection::{Selection, SelectionRange, SelectionType},
sync::FairMutex,
term::{
cell::Cell,
cell::{Cell, Flags},
search::{Match, RegexIter, RegexSearch},
Config, RenderableCursor, TermMode,
},
@@ -1386,28 +1386,59 @@ impl Terminal {
pub fn last_n_non_empty_lines(&self, n: usize) -> Vec<String> {
let term = self.term.clone();
let terminal = term.lock_unfair();
let grid = terminal.grid();
let mut lines = Vec::new();
let mut current_line = terminal.bottommost_line();
while lines.len() < n {
let mut line_buffer = String::new();
for cell in &terminal.grid()[current_line] {
line_buffer.push(cell.c);
}
let line = line_buffer.trim_end();
if !line.is_empty() {
lines.push(line.to_string());
let mut current_line = grid.bottommost_line().0;
let topmost_line = grid.topmost_line().0;
while current_line >= topmost_line && lines.len() < n {
let logical_line_start = self.find_logical_line_start(grid, current_line, topmost_line);
let logical_line = self.construct_logical_line(grid, logical_line_start, current_line);
if let Some(line) = self.process_line(logical_line) {
lines.push(line);
}
if current_line == terminal.topmost_line() {
break;
}
current_line = Line(current_line.0 - 1);
// Move to the line above the start of the current logical line
current_line = logical_line_start - 1;
}
lines.reverse();
lines
}
fn find_logical_line_start(&self, grid: &Grid<Cell>, current: i32, topmost: i32) -> i32 {
let mut line_start = current;
while line_start > topmost {
let prev_line = Line(line_start - 1);
let last_cell = &grid[prev_line][Column(grid.columns() - 1)];
if !last_cell.flags.contains(Flags::WRAPLINE) {
break;
}
line_start -= 1;
}
line_start
}
fn construct_logical_line(&self, grid: &Grid<Cell>, start: i32, end: i32) -> String {
let mut logical_line = String::new();
for row in start..=end {
let grid_row = &grid[Line(row)];
logical_line.push_str(&row_to_string(grid_row));
}
logical_line
}
fn process_line(&self, line: String) -> Option<String> {
let trimmed = line.trim_end().to_string();
if !trimmed.is_empty() {
Some(trimmed)
} else {
None
}
}
pub fn focus_in(&self) {
if self.last_content.mode.contains(TermMode::FOCUS_IN_OUT) {
self.write_to_pty("\x1b[I".to_string());
@@ -1838,6 +1869,14 @@ impl Terminal {
}
}
// Helper function to convert a grid row to a string
pub fn row_to_string(row: &Row<Cell>) -> String {
row[..Column(row.len())]
.iter()
.map(|cell| cell.c)
.collect::<String>()
}
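The wrapped-line handling above can be modeled without alacritty's `Grid` types. This hypothetical sketch glues each row whose wrap flag is set onto the row below it, the same role `Flags::WRAPLINE` plays in `find_logical_line_start` and `construct_logical_line`:

```rust
/// Simplified model of a terminal row: its text plus whether the row is
/// soft-wrapped (i.e. continues onto the next row without a hard newline).
struct Row {
    text: String,
    wrapped: bool,
}

/// Joins soft-wrapped rows into logical lines so a wrapped shell command
/// is reported as one line rather than several visual fragments.
fn logical_lines(rows: &[Row]) -> Vec<String> {
    let mut lines = Vec::new();
    let mut current = String::new();
    for row in rows {
        current.push_str(&row.text);
        if !row.wrapped {
            // A hard line break: finish the logical line.
            lines.push(current.trim_end().to_string());
            current.clear();
        }
    }
    if !current.is_empty() {
        lines.push(current.trim_end().to_string());
    }
    lines
}
```

This is the behavior change described in the release notes: a long `perl` one-liner that soft-wraps across three terminal rows now reaches the Assistant as a single logical line.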
fn is_path_surrounded_by_common_symbols(path: &str) -> bool {
// Avoid detecting `[]` or `()` strings as paths, surrounded by common symbols
path.len() > 2

View File

@@ -169,7 +169,7 @@ impl<T: AsRef<Path>> From<T> for SanitizedPath {
/// A delimiter to use in `path_query:row_number:column_number` strings parsing.
pub const FILE_ROW_COLUMN_DELIMITER: char = ':';
const ROW_COL_CAPTURE_REGEX: &str = r"(?x)
const ROW_COL_CAPTURE_REGEX: &str = r"(?xs)
([^\(]+)(?:
\((\d+)[,:](\d+)\) # filename(row,column), filename(row:column)
|
@@ -624,6 +624,24 @@ mod tests {
column: None
}
);
assert_eq!(
PathWithPosition::parse_str("ab\ncd"),
PathWithPosition {
path: PathBuf::from("ab\ncd"),
row: None,
column: None
}
);
assert_eq!(
PathWithPosition::parse_str("👋\nab"),
PathWithPosition {
path: PathBuf::from("👋\nab"),
row: None,
column: None
}
);
}
#[test]

View File

@@ -29,7 +29,7 @@ use anyhow::{anyhow, Context as _};
pub use take_until::*;
#[cfg(any(test, feature = "test-support"))]
pub use util_macros::{separator, uri};
pub use util_macros::{line_endings, separator, uri};
#[macro_export]
macro_rules! debug_panic {

View File

@@ -54,3 +54,29 @@ pub fn uri(input: TokenStream) -> TokenStream {
#uri
})
}
/// This macro replaces the line endings `\n` with `\r\n` when compiling for Windows.
/// On any other target OS, the string is returned unchanged.
///
/// # Example
/// ```rust
/// use util_macros::line_endings;
///
/// let text = line_endings!("Hello\nWorld");
/// #[cfg(target_os = "windows")]
/// assert_eq!(text, "Hello\r\nWorld");
/// #[cfg(not(target_os = "windows"))]
/// assert_eq!(text, "Hello\nWorld");
/// ```
#[proc_macro]
pub fn line_endings(input: TokenStream) -> TokenStream {
let text = parse_macro_input!(input as LitStr);
let text = text.value();
#[cfg(target_os = "windows")]
let text = text.replace("\n", "\r\n");
TokenStream::from(quote! {
#text
})
}

View File

@@ -22,6 +22,7 @@ async-trait = { workspace = true, "optional" = true }
collections.workspace = true
command_palette.workspace = true
command_palette_hooks.workspace = true
db.workspace = true
editor.workspace = true
futures.workspace = true
gpui.workspace = true
@@ -32,6 +33,7 @@ log.workspace = true
multi_buffer.workspace = true
nvim-rs = { git = "https://github.com/KillTheMule/nvim-rs", branch = "master", features = ["use_tokio"], optional = true }
picker.workspace = true
project.workspace = true
regex.workspace = true
schemars.workspace = true
search.workspace = true
@@ -40,6 +42,7 @@ serde_derive.workspace = true
serde_json.workspace = true
settings.workspace = true
task.workspace = true
text.workspace = true
theme.workspace = true
tokio = { version = "1.15", features = ["full"], optional = true }
ui.workspace = true

View File

@@ -1,4 +1,6 @@
use editor::{display_map::ToDisplayPoint, movement, scroll::Autoscroll, Bias, Direction, Editor};
use editor::{
display_map::ToDisplayPoint, movement, scroll::Autoscroll, Anchor, Bias, Direction, Editor,
};
use gpui::{actions, Context, Window};
use crate::{state::Mode, Vim};
@@ -48,8 +50,10 @@ impl Vim {
}
pub(crate) fn push_to_change_list(&mut self, window: &mut Window, cx: &mut Context<Self>) {
let Some((map, selections)) = self.update_editor(window, cx, |_, editor, _, cx| {
editor.selections.all_adjusted_display(cx)
let Some((map, selections, buffer)) = self.update_editor(window, cx, |_, editor, _, cx| {
let (map, selections) = editor.selections.all_adjusted_display(cx);
let buffer = editor.buffer().clone();
(map, selections, buffer)
}) else {
return;
};
@@ -65,7 +69,7 @@ impl Vim {
})
.unwrap_or(false);
let new_positions = selections
let new_positions: Vec<Anchor> = selections
.into_iter()
.map(|s| {
let point = if self.mode == Mode::Insert {
@@ -81,7 +85,8 @@ impl Vim {
if pop_state {
self.change_list.pop();
}
self.change_list.push(new_positions);
self.change_list.push(new_positions.clone());
self.set_mark(".".to_string(), new_positions, &buffer, window, cx)
}
}


@@ -37,7 +37,7 @@ use crate::{
JoinLines,
},
object::Object,
state::Mode,
state::{Mark, Mode},
visual::VisualDeleteLine,
ToggleRegistersView, Vim,
};
@@ -284,6 +284,7 @@ pub fn register(editor: &mut Editor, cx: &mut Context<Vim>) {
true,
true,
vec![Point::new(range.start.0, 0)..end],
window,
cx,
)
}
@@ -594,9 +595,14 @@ impl Position {
}
}
Position::Mark { name, offset } => {
let Some(mark) = vim.marks.get(&name.to_string()).and_then(|vec| vec.last()) else {
let Some(Mark::Local(anchors)) =
vim.get_mark(&name.to_string(), editor, window, cx)
else {
return Err(anyhow!("mark {} not set", name));
};
let Some(mark) = anchors.last() else {
return Err(anyhow!("mark {} contains empty anchors", name));
};
mark.to_point(&snapshot.buffer_snapshot)
.row
.saturating_add_signed(*offset)

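The `saturating_add_signed` call in the hunk above clamps the resolved row at the `u32` bounds instead of panicking on underflow, so a negative mark offset that would point above the first line resolves to row 0. A quick standalone illustration of that std behavior:

```rust
fn main() {
    let mark_row: u32 = 2;

    // An offset reaching past the start of the buffer saturates at 0
    // rather than wrapping or panicking.
    assert_eq!(mark_row.saturating_add_signed(-5), 0);

    // Offsets within range apply normally.
    assert_eq!(mark_row.saturating_add_signed(3), 5);
}
```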

@@ -254,7 +254,7 @@ impl Vim {
});
});
vim.copy_selections_content(editor, false, cx);
vim.copy_selections_content(editor, false, window, cx);
editor.insert("", window, cx);
});
}


@@ -25,7 +25,7 @@ impl Vim {
let count = Vim::take_count(cx).unwrap_or(1);
self.stop_recording_immediately(action.boxed_clone(), cx);
if count <= 1 || Vim::globals(cx).dot_replaying {
self.create_mark("^".into(), false, window, cx);
self.create_mark("^".into(), window, cx);
self.update_editor(window, cx, |_, editor, window, cx| {
editor.dismiss_menus_and_popups(false, window, cx);
editor.change_selections(Some(Autoscroll::fit()), window, cx, |s| {


@@ -1889,7 +1889,7 @@ pub(crate) fn end_of_line(
}
}
fn sentence_backwards(
pub(crate) fn sentence_backwards(
map: &DisplaySnapshot,
point: DisplayPoint,
mut times: usize,
@@ -1935,7 +1935,11 @@ fn sentence_backwards(
DisplayPoint::zero()
}
fn sentence_forwards(map: &DisplaySnapshot, point: DisplayPoint, mut times: usize) -> DisplayPoint {
pub(crate) fn sentence_forwards(
map: &DisplaySnapshot,
point: DisplayPoint,
mut times: usize,
) -> DisplayPoint {
let start = point.to_point(map).to_offset(&map.buffer_snapshot);
let mut chars = map.buffer_chars_at(start).peekable();


@@ -18,7 +18,7 @@ use crate::{
indent::IndentDirection,
motion::{self, first_non_whitespace, next_line_end, right, Motion},
object::Object,
state::{Mode, Operator},
state::{Mark, Mode, Operator},
surrounds::SurroundsType,
Vim,
};
@@ -355,11 +355,13 @@ impl Vim {
self.start_recording(cx);
self.switch_mode(Mode::Insert, false, window, cx);
self.update_editor(window, cx, |vim, editor, window, cx| {
if let Some(marks) = vim.marks.get("^") {
editor.change_selections(Some(Autoscroll::fit()), window, cx, |s| {
s.select_anchor_ranges(marks.iter().map(|mark| *mark..*mark))
});
}
let Some(Mark::Local(marks)) = vim.get_mark("^", editor, window, cx) else {
return;
};
editor.change_selections(Some(Autoscroll::fit()), window, cx, |s| {
s.select_anchor_ranges(marks.iter().map(|mark| *mark..*mark))
});
});
}


@@ -76,7 +76,7 @@ impl Vim {
}
});
});
vim.copy_selections_content(editor, motion.linewise(), cx);
vim.copy_selections_content(editor, motion.linewise(), window, cx);
editor.insert("", window, cx);
editor.refresh_inline_completion(true, false, window, cx);
});
@@ -107,7 +107,7 @@ impl Vim {
});
});
if objects_found {
vim.copy_selections_content(editor, false, cx);
vim.copy_selections_content(editor, false, window, cx);
editor.insert("", window, cx);
editor.refresh_inline_completion(true, false, window, cx);
}


@@ -60,7 +60,7 @@ impl Vim {
}
});
});
vim.copy_selections_content(editor, motion.linewise(), cx);
vim.copy_selections_content(editor, motion.linewise(), window, cx);
editor.insert("", window, cx);
// Fixup cursor position after the deletion
@@ -148,7 +148,7 @@ impl Vim {
}
});
});
vim.copy_selections_content(editor, false, cx);
vim.copy_selections_content(editor, false, window, cx);
editor.insert("", window, cx);
// Fixup cursor position after the deletion


@@ -1,39 +1,34 @@
use std::{ops::Range, sync::Arc};
use std::{ops::Range, path::Path, sync::Arc};
use editor::{
display_map::{DisplaySnapshot, ToDisplayPoint},
movement,
scroll::Autoscroll,
Anchor, Bias, DisplayPoint,
Anchor, Bias, DisplayPoint, Editor, MultiBuffer,
};
use gpui::{Context, Window};
use gpui::{Context, Entity, EntityId, UpdateGlobal, Window};
use language::SelectionGoal;
use text::Point;
use ui::App;
use workspace::OpenOptions;
use crate::{
motion::{self, Motion},
state::Mode,
state::{Mark, Mode, VimGlobals},
Vim,
};
impl Vim {
pub fn create_mark(
&mut self,
text: Arc<str>,
tail: bool,
window: &mut Window,
cx: &mut Context<Self>,
) {
let Some(anchors) = self.update_editor(window, cx, |_, editor, _, _| {
editor
pub fn create_mark(&mut self, text: Arc<str>, window: &mut Window, cx: &mut Context<Self>) {
self.update_editor(window, cx, |vim, editor, window, cx| {
let anchors = editor
.selections
.disjoint_anchors()
.iter()
.map(|s| if tail { s.tail() } else { s.head() })
.collect::<Vec<_>>()
}) else {
return;
};
self.marks.insert(text.to_string(), anchors);
.map(|s| s.head())
.collect::<Vec<_>>();
vim.set_mark(text.to_string(), anchors, editor.buffer(), window, cx);
});
self.clear_operator(window, cx);
}
@@ -55,7 +50,7 @@ impl Vim {
let mut ends = vec![];
let mut reversed = vec![];
self.update_editor(window, cx, |_, editor, _, cx| {
self.update_editor(window, cx, |vim, editor, window, cx| {
let (map, selections) = editor.selections.all_display(cx);
for selection in selections {
let end = movement::saturating_left(&map, selection.end);
@@ -69,13 +64,121 @@ impl Vim {
);
reversed.push(selection.reversed)
}
vim.set_mark("<".to_string(), starts, editor.buffer(), window, cx);
vim.set_mark(">".to_string(), ends, editor.buffer(), window, cx);
});
self.marks.insert("<".to_string(), starts);
self.marks.insert(">".to_string(), ends);
self.stored_visual_mode.replace((mode, reversed));
}
fn open_buffer_mark(
&mut self,
line: bool,
entity_id: EntityId,
anchors: Vec<Anchor>,
window: &mut Window,
cx: &mut Context<Self>,
) {
let Some(workspace) = self.workspace(window) else {
return;
};
workspace.update(cx, |workspace, cx| {
let item = workspace.items(cx).find(|item| {
item.act_as::<Editor>(cx)
.is_some_and(|editor| editor.read(cx).buffer().entity_id() == entity_id)
});
let Some(item) = item.cloned() else {
return;
};
if let Some(pane) = workspace.pane_for(item.as_ref()) {
pane.update(cx, |pane, cx| {
if let Some(index) = pane.index_for_item(item.as_ref()) {
pane.activate_item(index, true, true, window, cx);
}
});
};
item.act_as::<Editor>(cx).unwrap().update(cx, |editor, cx| {
let map = editor.snapshot(window, cx);
let mut ranges: Vec<Range<Anchor>> = Vec::new();
for mut anchor in anchors {
if line {
let mut point = anchor.to_display_point(&map.display_snapshot);
point = motion::first_non_whitespace(&map.display_snapshot, false, point);
anchor = map
.display_snapshot
.buffer_snapshot
.anchor_before(point.to_point(&map.display_snapshot));
}
if ranges.last() != Some(&(anchor..anchor)) {
ranges.push(anchor..anchor);
}
}
editor.change_selections(Some(Autoscroll::fit()), window, cx, |s| {
s.select_anchor_ranges(ranges)
});
})
});
return;
}
fn open_path_mark(
&mut self,
line: bool,
path: Arc<Path>,
points: Vec<Point>,
window: &mut Window,
cx: &mut Context<Self>,
) {
let Some(workspace) = self.workspace(window) else {
return;
};
let task = workspace.update(cx, |workspace, cx| {
workspace.open_abs_path(
path.to_path_buf(),
OpenOptions {
visible: Some(workspace::OpenVisible::All),
focus: Some(true),
..Default::default()
},
window,
cx,
)
});
cx.spawn_in(window, |this, mut cx| async move {
let editor = task.await?;
this.update_in(&mut cx, |_, window, cx| {
if let Some(editor) = editor.act_as::<Editor>(cx) {
editor.update(cx, |editor, cx| {
let map = editor.snapshot(window, cx);
let points: Vec<_> = points
.into_iter()
.map(|p| {
if line {
let point = p.to_display_point(&map.display_snapshot);
motion::first_non_whitespace(
&map.display_snapshot,
false,
point,
)
.to_point(&map.display_snapshot)
} else {
p
}
})
.collect();
editor.change_selections(Some(Autoscroll::fit()), window, cx, |s| {
s.select_ranges(points.into_iter().map(|p| p..p))
})
})
}
})
})
.detach_and_log_err(cx);
}
pub fn jump(
&mut self,
text: Arc<str>,
@@ -84,25 +187,22 @@ impl Vim {
cx: &mut Context<Self>,
) {
self.pop_operator(window, cx);
let anchors = match &*text {
"{" | "}" => self.update_editor(window, cx, |_, editor, _, cx| {
let (map, selections) = editor.selections.all_display(cx);
selections
.into_iter()
.map(|selection| {
let point = if &*text == "{" {
movement::start_of_paragraph(&map, selection.head(), 1)
} else {
movement::end_of_paragraph(&map, selection.head(), 1)
};
map.buffer_snapshot
.anchor_before(point.to_offset(&map, Bias::Left))
})
.collect::<Vec<Anchor>>()
}),
"." => self.change_list.last().cloned(),
_ => self.marks.get(&*text).cloned(),
let mark = self
.update_editor(window, cx, |vim, editor, window, cx| {
vim.get_mark(&text, editor, window, cx)
})
.flatten();
let anchors = match mark {
None => None,
Some(Mark::Local(anchors)) => Some(anchors),
Some(Mark::Buffer(entity_id, anchors)) => {
self.open_buffer_mark(line, entity_id, anchors, window, cx);
return;
}
Some(Mark::Path(path, points)) => {
self.open_path_mark(line, path, points, window, cx);
return;
}
};
let Some(mut anchors) = anchors else { return };
@@ -144,7 +244,7 @@ impl Vim {
}
}
if !should_jump {
if !should_jump && !ranges.is_empty() {
editor.change_selections(Some(Autoscroll::fit()), window, cx, |s| {
s.select_anchor_ranges(ranges)
});
@@ -158,6 +258,62 @@ impl Vim {
}
}
}
pub fn set_mark(
&mut self,
name: String,
anchors: Vec<Anchor>,
buffer_entity: &Entity<MultiBuffer>,
window: &mut Window,
cx: &mut App,
) {
let Some(workspace) = self.workspace(window) else {
return;
};
let entity_id = workspace.entity_id();
Vim::update_globals(cx, |vim_globals, cx| {
let Some(marks_state) = vim_globals.marks.get(&entity_id) else {
return;
};
marks_state.update(cx, |ms, cx| {
ms.set_mark(name.clone(), buffer_entity, anchors, cx);
});
});
}
pub fn get_mark(
&self,
name: &str,
editor: &mut Editor,
window: &mut Window,
cx: &mut App,
) -> Option<Mark> {
if matches!(name, "{" | "}" | "(" | ")") {
let (map, selections) = editor.selections.all_display(cx);
let anchors = selections
.into_iter()
.map(|selection| {
let point = match name {
"{" => movement::start_of_paragraph(&map, selection.head(), 1),
"}" => movement::end_of_paragraph(&map, selection.head(), 1),
"(" => motion::sentence_backwards(&map, selection.head(), 1),
")" => motion::sentence_forwards(&map, selection.head(), 1),
_ => unreachable!(),
};
map.buffer_snapshot
.anchor_before(point.to_offset(&map, Bias::Left))
})
.collect::<Vec<Anchor>>();
return Some(Mark::Local(anchors));
}
VimGlobals::update_global(cx, |globals, cx| {
let workspace_id = self.workspace(window)?.entity_id();
globals
.marks
.get_mut(&workspace_id)?
.update(cx, |ms, cx| ms.get_mark(name, editor.buffer(), cx))
})
}
}
pub fn jump_motion(


@@ -50,7 +50,7 @@ impl Vim {
.filter(|sel| sel.len() > 1 && vim.mode != Mode::VisualLine);
if !action.preserve_clipboard && vim.mode.is_visual() {
vim.copy_selections_content(editor, vim.mode == Mode::VisualLine, cx);
vim.copy_selections_content(editor, vim.mode == Mode::VisualLine, window, cx);
}
let (display_map, current_selections) = editor.selections.all_adjusted_display(cx);


@@ -7,20 +7,10 @@ use editor::{
use gpui::{actions, Context, Window};
use language::Bias;
use settings::Settings;
use ui::px;
actions!(
vim,
[
LineUp,
LineDown,
ScrollUp,
ScrollDown,
PageUp,
PageDown,
ScrollLeftHalfWay,
ScrollRightHalfWay,
]
[LineUp, LineDown, ScrollUp, ScrollDown, PageUp, PageDown]
);
pub fn register(editor: &mut Editor, cx: &mut Context<Vim>) {
@@ -54,25 +44,6 @@ pub fn register(editor: &mut Editor, cx: &mut Context<Vim>) {
}
})
});
Vim::action(editor, cx, |vim, _: &ScrollLeftHalfWay, window, cx| {
dbg!("here!");
vim.update_editor(window, cx, |_, editor, window, cx| {
editor.scroll_manager.horizontal_scroll(
|current, details| (current + details.editor_width / 2.).min(details.scroll_max),
cx,
);
});
});
// Vim::action(editor, cx, |vim, _: &ScrollRightHalfWay, window, cx| {
// vim.scroll_horizontal(window, cx, |c| {
// if let Some(c) = c {
// ScrollAmount::Line(-c)
// } else {
// ScrollAmount::Page(-0.5)
// }
// })
// });
}
impl Vim {


@@ -75,7 +75,7 @@ impl Vim {
}
})
});
vim.copy_selections_content(editor, line_mode, cx);
vim.copy_selections_content(editor, line_mode, window, cx);
let selections = editor.selections.all::<Point>(cx).into_iter();
let edits = selections.map(|selection| (selection.start..selection.end, ""));
editor.edit(edits, cx);


@@ -36,7 +36,7 @@ impl Vim {
motion.expand_selection(map, selection, times, true, &text_layout_details);
});
});
vim.yank_selections_content(editor, motion.linewise(), cx);
vim.yank_selections_content(editor, motion.linewise(), window, cx);
editor.change_selections(None, window, cx, |s| {
s.move_with(|_, selection| {
let (head, goal) = original_positions.remove(&selection.id).unwrap();
@@ -66,7 +66,7 @@ impl Vim {
start_positions.insert(selection.id, start_position);
});
});
vim.yank_selections_content(editor, false, cx);
vim.yank_selections_content(editor, false, window, cx);
editor.change_selections(None, window, cx, |s| {
s.move_with(|_, selection| {
let (head, goal) = start_positions.remove(&selection.id).unwrap();
@@ -82,6 +82,7 @@ impl Vim {
&mut self,
editor: &mut Editor,
linewise: bool,
window: &mut Window,
cx: &mut Context<Editor>,
) {
self.copy_ranges(
@@ -94,6 +95,7 @@ impl Vim {
.iter()
.map(|s| s.range())
.collect(),
window,
cx,
)
}
@@ -102,6 +104,7 @@ impl Vim {
&mut self,
editor: &mut Editor,
linewise: bool,
window: &mut Window,
cx: &mut Context<Editor>,
) {
self.copy_ranges(
@@ -114,6 +117,7 @@ impl Vim {
.iter()
.map(|s| s.range())
.collect(),
window,
cx,
)
}
@@ -124,28 +128,35 @@ impl Vim {
linewise: bool,
is_yank: bool,
selections: Vec<Range<Point>>,
window: &mut Window,
cx: &mut Context<Editor>,
) {
let buffer = editor.buffer().read(cx).snapshot(cx);
let mut text = String::new();
let mut clipboard_selections = Vec::with_capacity(selections.len());
let mut ranges_to_highlight = Vec::new();
self.marks.insert(
self.set_mark(
"[".to_string(),
selections
.iter()
.map(|s| buffer.anchor_before(s.start))
.collect(),
editor.buffer(),
window,
cx,
);
self.marks.insert(
self.set_mark(
"]".to_string(),
selections
.iter()
.map(|s| buffer.anchor_after(s.end))
.collect(),
editor.buffer(),
window,
cx,
);
let mut text = String::new();
let mut clipboard_selections = Vec::with_capacity(selections.len());
let mut ranges_to_highlight = Vec::new();
{
let mut is_first = true;
for selection in selections.iter() {


@@ -3,27 +3,34 @@ use crate::normal::repeat::Replayer;
use crate::surrounds::SurroundsType;
use crate::{motion::Motion, object::Object};
use crate::{ToggleRegistersView, UseSystemClipboard, Vim, VimSettings};
use anyhow::Result;
use collections::HashMap;
use command_palette_hooks::{CommandPaletteFilter, CommandPaletteInterceptor};
use db::define_connection;
use db::sqlez_macros::sql;
use editor::display_map::{is_invisible, replacement};
use editor::{Anchor, ClipboardSelection, Editor};
use editor::{Anchor, ClipboardSelection, Editor, MultiBuffer};
use gpui::{
Action, App, BorrowAppContext, ClipboardEntry, ClipboardItem, Entity, Global, HighlightStyle,
StyledText, Task, TextStyle, WeakEntity,
Action, App, AppContext, BorrowAppContext, ClipboardEntry, ClipboardItem, Entity, EntityId,
Global, HighlightStyle, StyledText, Subscription, Task, TextStyle, WeakEntity,
};
use language::Point;
use language::{Buffer, BufferEvent, BufferId, Point};
use picker::{Picker, PickerDelegate};
use project::{Project, ProjectItem, ProjectPath};
use serde::{Deserialize, Serialize};
use settings::{Settings, SettingsStore};
use std::borrow::BorrowMut;
use std::path::Path;
use std::{fmt::Display, ops::Range, sync::Arc};
use text::Bias;
use theme::ThemeSettings;
use ui::{
h_flex, rems, ActiveTheme, Context, Div, FluentBuilder, KeyBinding, ParentElement,
SharedString, Styled, StyledTypography, Window,
};
use util::ResultExt;
use workspace::searchable::Direction;
use workspace::Workspace;
use workspace::{Workspace, WorkspaceDb, WorkspaceId};
#[derive(Clone, Copy, Debug, PartialEq, Serialize, Deserialize)]
pub enum Mode {
@@ -179,7 +186,7 @@ impl From<String> for Register {
}
}
#[derive(Default, Clone)]
#[derive(Default)]
pub struct VimGlobals {
pub last_find: Option<Motion>,
@@ -208,7 +215,399 @@ pub struct VimGlobals {
pub recordings: HashMap<char, Vec<ReplayableAction>>,
pub focused_vim: Option<WeakEntity<Vim>>,
pub marks: HashMap<EntityId, Entity<MarksState>>,
}
pub struct MarksState {
workspace: WeakEntity<Workspace>,
multibuffer_marks: HashMap<EntityId, HashMap<String, Vec<Anchor>>>,
buffer_marks: HashMap<BufferId, HashMap<String, Vec<text::Anchor>>>,
watched_buffers: HashMap<BufferId, (MarkLocation, Subscription, Subscription)>,
serialized_marks: HashMap<Arc<Path>, HashMap<String, Vec<Point>>>,
global_marks: HashMap<String, MarkLocation>,
_subscription: Subscription,
}
#[derive(Debug, PartialEq, Eq, Clone)]
pub enum MarkLocation {
Buffer(EntityId),
Path(Arc<Path>),
}
pub enum Mark {
Local(Vec<Anchor>),
Buffer(EntityId, Vec<Anchor>),
Path(Arc<Path>, Vec<Point>),
}
impl MarksState {
pub fn new(workspace: &Workspace, cx: &mut App) -> Entity<MarksState> {
cx.new(|cx| {
let buffer_store = workspace.project().read(cx).buffer_store().clone();
let subscription =
cx.subscribe(
&buffer_store,
move |this: &mut Self, _, event, cx| match event {
project::buffer_store::BufferStoreEvent::BufferAdded(buffer) => {
this.on_buffer_loaded(buffer, cx);
}
_ => {}
},
);
let mut this = Self {
workspace: workspace.weak_handle(),
multibuffer_marks: HashMap::default(),
buffer_marks: HashMap::default(),
watched_buffers: HashMap::default(),
serialized_marks: HashMap::default(),
global_marks: HashMap::default(),
_subscription: subscription,
};
this.load(cx);
this
})
}
fn workspace_id(&self, cx: &App) -> Option<WorkspaceId> {
self.workspace
.read_with(cx, |workspace, _| workspace.database_id())
.ok()
.flatten()
}
fn project(&self, cx: &App) -> Option<Entity<Project>> {
self.workspace
.read_with(cx, |workspace, _| workspace.project().clone())
.ok()
}
fn load(&mut self, cx: &mut Context<Self>) {
cx.spawn(|this, mut cx| async move {
let Some(workspace_id) = this.update(&mut cx, |this, cx| this.workspace_id(cx))? else {
return Ok(());
};
let (marks, paths) = cx
.background_spawn(async move {
let marks = DB.get_marks(workspace_id)?;
let paths = DB.get_global_marks_paths(workspace_id)?;
anyhow::Ok((marks, paths))
})
.await?;
this.update(&mut cx, |this, cx| this.loaded(marks, paths, cx))
})
.detach_and_log_err(cx);
}
fn loaded(
&mut self,
marks: Vec<SerializedMark>,
global_mark_paths: Vec<(String, Arc<Path>)>,
cx: &mut Context<Self>,
) {
let Some(project) = self.project(cx) else {
return;
};
for mark in marks {
self.serialized_marks
.entry(mark.path)
.or_default()
.insert(mark.name, mark.points);
}
for (name, path) in global_mark_paths {
self.global_marks
.insert(name, MarkLocation::Path(path.clone()));
let project_path = project
.read(cx)
.worktrees(cx)
.filter_map(|worktree| {
let relative = path.strip_prefix(worktree.read(cx).abs_path()).ok()?;
Some(ProjectPath {
worktree_id: worktree.read(cx).id(),
path: relative.into(),
})
})
.next();
if let Some(buffer) = project_path
.and_then(|project_path| project.read(cx).get_open_buffer(&project_path, cx))
{
self.on_buffer_loaded(&buffer, cx)
}
}
}
pub fn on_buffer_loaded(&mut self, buffer_handle: &Entity<Buffer>, cx: &mut Context<Self>) {
let Some(project) = self.project(cx) else {
return;
};
let Some(project_path) = buffer_handle.read(cx).project_path(cx) else {
return;
};
let Some(abs_path) = project.read(cx).absolute_path(&project_path, cx) else {
return;
};
let abs_path: Arc<Path> = abs_path.into();
let Some(serialized_marks) = self.serialized_marks.get(&abs_path) else {
return;
};
let mut loaded_marks = HashMap::default();
let buffer = buffer_handle.read(cx);
for (name, points) in serialized_marks.iter() {
loaded_marks.insert(
name.clone(),
points
.iter()
.map(|point| buffer.anchor_before(buffer.clip_point(*point, Bias::Left)))
.collect(),
);
}
self.buffer_marks.insert(buffer.remote_id(), loaded_marks);
self.watch_buffer(MarkLocation::Path(abs_path), buffer_handle, cx)
}
fn serialize_buffer_marks(
&mut self,
path: Arc<Path>,
buffer: &Entity<Buffer>,
cx: &mut Context<Self>,
) {
let new_points: HashMap<String, Vec<Point>> =
if let Some(anchors) = self.buffer_marks.get(&buffer.read(cx).remote_id()) {
anchors
.iter()
.map(|(name, anchors)| {
(
name.clone(),
buffer
.read(cx)
.summaries_for_anchors::<Point, _>(anchors)
.collect(),
)
})
.collect()
} else {
HashMap::default()
};
let old_points = self.serialized_marks.get(&path.clone());
if old_points == Some(&new_points) {
return;
}
let mut to_write = HashMap::default();
for (key, value) in &new_points {
if self.is_global_mark(key) {
if self.global_marks.get(key) != Some(&MarkLocation::Path(path.clone())) {
if let Some(workspace_id) = self.workspace_id(cx) {
let path = path.clone();
let key = key.clone();
cx.background_spawn(async move {
DB.set_global_mark_path(workspace_id, key, path).await
})
.detach_and_log_err(cx);
}
self.global_marks
.insert(key.clone(), MarkLocation::Path(path.clone()));
}
}
if old_points.and_then(|o| o.get(key)) != Some(value) {
to_write.insert(key.clone(), value.clone());
}
}
self.serialized_marks.insert(path.clone(), new_points);
if let Some(workspace_id) = self.workspace_id(cx) {
cx.background_spawn(async move {
DB.set_marks(workspace_id, path.clone(), to_write).await?;
anyhow::Ok(())
})
.detach_and_log_err(cx);
}
}
fn is_global_mark(&self, key: &str) -> bool {
key.chars()
.next()
.is_some_and(|c| c.is_uppercase() || c.is_digit(10))
}
fn rename_buffer(
&mut self,
old_path: MarkLocation,
new_path: Arc<Path>,
buffer: &Entity<Buffer>,
cx: &mut Context<Self>,
) {
if let MarkLocation::Buffer(entity_id) = old_path {
if let Some(old_marks) = self.multibuffer_marks.remove(&entity_id) {
let buffer_marks = old_marks
.into_iter()
.map(|(k, v)| (k, v.into_iter().map(|anchor| anchor.text_anchor).collect()))
.collect();
self.buffer_marks
.insert(buffer.read(cx).remote_id(), buffer_marks);
}
}
self.watch_buffer(MarkLocation::Path(new_path.clone()), buffer, cx);
self.serialize_buffer_marks(new_path, buffer, cx);
}
fn path_for_buffer(&self, buffer: &Entity<Buffer>, cx: &App) -> Option<Arc<Path>> {
let project_path = buffer.read(cx).project_path(cx)?;
let project = self.project(cx)?;
let abs_path = project.read(cx).absolute_path(&project_path, cx)?;
Some(abs_path.into())
}
fn points_at(
&self,
location: &MarkLocation,
multi_buffer: &Entity<MultiBuffer>,
cx: &App,
) -> bool {
match location {
MarkLocation::Buffer(entity_id) => entity_id == &multi_buffer.entity_id(),
MarkLocation::Path(path) => {
let Some(singleton) = multi_buffer.read(cx).as_singleton() else {
return false;
};
self.path_for_buffer(&singleton, cx).as_ref() == Some(path)
}
}
}
pub fn watch_buffer(
&mut self,
mark_location: MarkLocation,
buffer_handle: &Entity<Buffer>,
cx: &mut Context<Self>,
) {
let on_change = cx.subscribe(buffer_handle, move |this, buffer, event, cx| match event {
BufferEvent::Edited => {
if let Some(path) = this.path_for_buffer(&buffer, cx) {
this.serialize_buffer_marks(path, &buffer, cx);
}
}
BufferEvent::FileHandleChanged => {
let buffer_id = buffer.read(cx).remote_id();
if let Some(old_path) = this
.watched_buffers
.get(&buffer_id.clone())
.map(|(path, _, _)| path.clone())
{
if let Some(new_path) = this.path_for_buffer(&buffer, cx) {
this.rename_buffer(old_path, new_path, &buffer, cx)
}
}
}
_ => {}
});
let on_release = cx.observe_release(buffer_handle, |this, buffer, _| {
this.watched_buffers.remove(&buffer.remote_id());
this.buffer_marks.remove(&buffer.remote_id());
});
self.watched_buffers.insert(
buffer_handle.read(cx).remote_id(),
(mark_location, on_change, on_release),
);
}
pub fn set_mark(
&mut self,
name: String,
multibuffer: &Entity<MultiBuffer>,
anchors: Vec<Anchor>,
cx: &mut Context<Self>,
) {
let buffer = multibuffer.read(cx).as_singleton();
let abs_path = buffer.as_ref().and_then(|b| self.path_for_buffer(&b, cx));
let Some(abs_path) = abs_path else {
self.multibuffer_marks
.entry(multibuffer.entity_id())
.or_default()
.insert(name.clone(), anchors);
if self.is_global_mark(&name) {
self.global_marks
.insert(name.clone(), MarkLocation::Buffer(multibuffer.entity_id()));
}
if let Some(buffer) = buffer {
let buffer_id = buffer.read(cx).remote_id();
if !self.watched_buffers.contains_key(&buffer_id) {
self.watch_buffer(MarkLocation::Buffer(multibuffer.entity_id()), &buffer, cx)
}
}
return;
};
let buffer = buffer.unwrap();
let buffer_id = buffer.read(cx).remote_id();
self.buffer_marks.entry(buffer_id).or_default().insert(
name.clone(),
anchors
.into_iter()
.map(|anchor| anchor.text_anchor)
.collect(),
);
if !self.watched_buffers.contains_key(&buffer_id) {
self.watch_buffer(MarkLocation::Path(abs_path.clone()), &buffer, cx)
}
self.serialize_buffer_marks(abs_path, &buffer, cx)
}
pub fn get_mark(
&self,
name: &str,
multi_buffer: &Entity<MultiBuffer>,
cx: &App,
) -> Option<Mark> {
let target = self.global_marks.get(name);
if !self.is_global_mark(name) || target.is_some_and(|t| self.points_at(t, multi_buffer, cx))
{
if let Some(anchors) = self.multibuffer_marks.get(&multi_buffer.entity_id()) {
return Some(Mark::Local(anchors.get(name)?.clone()));
}
let singleton = multi_buffer.read(cx).as_singleton()?;
let excerpt_id = *multi_buffer.read(cx).excerpt_ids().first().unwrap();
let buffer_id = singleton.read(cx).remote_id();
if let Some(anchors) = self.buffer_marks.get(&buffer_id) {
let text_anchors = anchors.get(name)?;
let anchors = text_anchors
.into_iter()
.map(|anchor| Anchor::in_buffer(excerpt_id, buffer_id, *anchor))
.collect();
return Some(Mark::Local(anchors));
}
}
match target? {
MarkLocation::Buffer(entity_id) => {
let anchors = self.multibuffer_marks.get(&entity_id)?;
return Some(Mark::Buffer(*entity_id, anchors.get(name)?.clone()));
}
MarkLocation::Path(path) => {
let points = self.serialized_marks.get(path)?;
return Some(Mark::Path(path.clone(), points.get(name)?.clone()));
}
}
}
}
impl Global for VimGlobals {}
impl VimGlobals {
@@ -228,8 +627,15 @@ impl VimGlobals {
})
.detach();
let mut was_enabled = None;
cx.observe_global::<SettingsStore>(move |cx| {
if Vim::enabled(cx) {
let is_enabled = Vim::enabled(cx);
if was_enabled == Some(is_enabled) {
return;
}
was_enabled = Some(is_enabled);
if is_enabled {
KeyBinding::set_vim_mode(cx, true);
CommandPaletteFilter::update_global(cx, |filter, _| {
filter.show_namespace(Vim::NAMESPACE);
@@ -237,6 +643,17 @@ impl VimGlobals {
CommandPaletteInterceptor::update_global(cx, |interceptor, _| {
interceptor.set(Box::new(command_interceptor));
});
for window in cx.windows() {
if let Some(workspace) = window.downcast::<Workspace>() {
workspace
.update(cx, |workspace, _, cx| {
Vim::update_globals(cx, |globals, cx| {
globals.register_workspace(workspace, cx)
});
})
.ok();
}
}
} else {
KeyBinding::set_vim_mode(cx, false);
*Vim::globals(cx) = VimGlobals::default();
@@ -249,6 +666,21 @@ impl VimGlobals {
}
})
.detach();
cx.observe_new(|workspace: &mut Workspace, _, cx| {
Vim::update_globals(cx, |globals, cx| globals.register_workspace(workspace, cx));
})
.detach()
}
fn register_workspace(&mut self, workspace: &Workspace, cx: &mut Context<Workspace>) {
let entity_id = cx.entity_id();
self.marks.insert(entity_id, MarksState::new(workspace, cx));
cx.observe_release(&cx.entity(), move |_, _, cx| {
Vim::update_globals(cx, |globals, _| {
globals.marks.remove(&entity_id);
})
})
.detach();
}
pub(crate) fn write_registers(
@@ -799,3 +1231,111 @@ impl RegistersView {
.modal(true)
}
}
define_connection! (
pub static ref DB: VimDb<WorkspaceDb> = &[
sql! (
CREATE TABLE vim_marks (
workspace_id INTEGER,
mark_name TEXT,
path BLOB,
value TEXT
);
CREATE UNIQUE INDEX idx_vim_marks ON vim_marks (workspace_id, mark_name, path);
),
sql! (
CREATE TABLE vim_global_marks_paths(
workspace_id INTEGER,
mark_name TEXT,
path BLOB
);
CREATE UNIQUE INDEX idx_vim_global_marks_paths
ON vim_global_marks_paths(workspace_id, mark_name);
),
];
);
struct SerializedMark {
path: Arc<Path>,
name: String,
points: Vec<Point>,
}
impl VimDb {
pub(crate) async fn set_marks(
&self,
workspace_id: WorkspaceId,
path: Arc<Path>,
marks: HashMap<String, Vec<Point>>,
) -> Result<()> {
let result = self
.write(move |conn| {
let mut query = conn.exec_bound(sql!(
INSERT OR REPLACE INTO vim_marks
(workspace_id, mark_name, path, value)
VALUES
(?, ?, ?, ?)
))?;
for (mark_name, value) in marks {
let pairs: Vec<(u32, u32)> = value
.into_iter()
.map(|point| (point.row, point.column))
.collect();
let serialized = serde_json::to_string(&pairs)?;
query((workspace_id, mark_name, path.clone(), serialized))?;
}
Ok(())
})
.await;
result
}
fn get_marks(&self, workspace_id: WorkspaceId) -> Result<Vec<SerializedMark>> {
let result: Vec<(Arc<Path>, String, String)> = self.select_bound(sql!(
SELECT path, mark_name, value FROM vim_marks
WHERE workspace_id = ?
))?(workspace_id)?;
Ok(result
.into_iter()
.filter_map(|(path, name, value)| {
let pairs: Vec<(u32, u32)> = serde_json::from_str(&value).log_err()?;
Some(SerializedMark {
path,
name,
points: pairs
.into_iter()
.map(|(row, column)| Point { row, column })
.collect(),
})
})
.collect())
}
pub(crate) async fn set_global_mark_path(
&self,
workspace_id: WorkspaceId,
mark_name: String,
path: Arc<Path>,
) -> Result<()> {
self.write(move |conn| {
conn.exec_bound(sql!(
INSERT OR REPLACE INTO vim_global_marks_paths
(workspace_id, mark_name, path)
VALUES
(?, ?, ?)
))?((workspace_id, mark_name, path))
})
.await
}
pub fn get_global_marks_paths(
&self,
workspace_id: WorkspaceId,
) -> Result<Vec<(String, Arc<Path>)>> {
self.select_bound(sql!(
SELECT mark_name, path FROM vim_global_marks_paths
WHERE workspace_id = ?
))?(workspace_id)
}
}
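`set_marks` above serializes each mark's points with `serde_json` as a JSON array of `[row, column]` pairs before writing them to the `vim_marks` table. A dependency-free sketch of that on-disk shape (hand-rolled formatting standing in for `serde_json`, purely for illustration):

```rust
// Encode mark points the way `set_marks` does conceptually: a list of
// (row, column) pairs rendered as a JSON array of two-element arrays.
fn encode_points(points: &[(u32, u32)]) -> String {
    let pairs: Vec<String> = points
        .iter()
        .map(|(row, col)| format!("[{},{}]", row, col))
        .collect();
    format!("[{}]", pairs.join(","))
}

fn main() {
    // A mark spanning two cursors: row 3 col 0 and row 10 col 4.
    let encoded = encode_points(&[(3, 0), (10, 4)]);
    assert_eq!(encoded, "[[3,0],[10,4]]");
}
```

serde serializes `Vec<(u32, u32)>` to exactly this nested-array form, which is what `get_marks` parses back into `Point`s on load.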


@@ -26,7 +26,7 @@ use editor::{
Anchor, Bias, Editor, EditorEvent, EditorMode, EditorSettings, ToPoint,
};
use gpui::{
actions, impl_actions, Action, App, AppContext as _, Axis, Context, Entity, EventEmitter,
actions, impl_actions, Action, App, AppContext, Axis, Context, Entity, EventEmitter,
KeyContext, KeystrokeEvent, Render, Subscription, Task, WeakEntity, Window,
};
use insert::{NormalBefore, TemporaryNormal};
@@ -314,7 +314,6 @@ pub(crate) struct Vim {
operator_stack: Vec<Operator>,
pub(crate) replacements: Vec<(Range<editor::Anchor>, String)>,
pub(crate) marks: HashMap<String, Vec<Anchor>>,
pub(crate) stored_visual_mode: Option<(Mode, Vec<bool>)>,
pub(crate) change_list: Vec<Vec<Anchor>>,
pub(crate) change_list_position: Option<usize>,
@@ -362,7 +361,6 @@ impl Vim {
operator_stack: Vec::new(),
replacements: Vec::new(),
marks: HashMap::default(),
stored_visual_mode: None,
change_list: Vec::new(),
change_list_position: None,
@@ -1573,7 +1571,7 @@ impl Vim {
}
_ => self.clear_operator(window, cx),
},
Some(Operator::Mark) => self.create_mark(text, false, window, cx),
Some(Operator::Mark) => self.create_mark(text, window, cx),
Some(Operator::RecordRegister) => {
self.record_register(text.chars().next().unwrap(), window, cx)
}


@@ -17,7 +17,7 @@ use workspace::searchable::Direction;
use crate::{
motion::{first_non_whitespace, next_line_end, start_of_line, Motion},
object::Object,
state::{Mode, Operator},
state::{Mark, Mode, Operator},
Vim,
};
@@ -107,14 +107,20 @@ pub fn register(editor: &mut Editor, cx: &mut Context<Vim>) {
let Some((stored_mode, reversed)) = vim.stored_visual_mode.take() else {
return;
};
let Some((start, end)) = vim.marks.get("<").zip(vim.marks.get(">")) else {
let marks = vim
.update_editor(window, cx, |vim, editor, window, cx| {
vim.get_mark("<", editor, window, cx)
.zip(vim.get_mark(">", editor, window, cx))
})
.flatten();
let Some((Mark::Local(start), Mark::Local(end))) = marks else {
return;
};
let ranges = start
.iter()
.zip(end)
.zip(reversed)
.map(|((start, end), reversed)| (*start, *end, reversed))
.map(|((start, end), reversed)| (*start, end, reversed))
.collect::<Vec<_>>();
if vim.mode.is_visual() {
@@ -499,7 +505,7 @@ impl Vim {
selection.goal = SelectionGoal::None;
});
});
vim.copy_selections_content(editor, line_mode, cx);
vim.copy_selections_content(editor, line_mode, window, cx);
editor.insert("", window, cx);
// Fixup cursor position after the deletion
@@ -528,7 +534,7 @@ impl Vim {
self.update_editor(window, cx, |vim, editor, window, cx| {
let line_mode = line_mode || editor.selections.line_mode;
editor.selections.line_mode = line_mode;
vim.yank_selections_content(editor, line_mode, cx);
vim.yank_selections_content(editor, line_mode, window, cx);
editor.change_selections(None, window, cx, |s| {
s.move_with(|map, selection| {
if line_mode {

View File

@@ -193,7 +193,7 @@ impl Render for ModalLayer {
.child(
h_flex()
.occlude()
.child(div().child(active_modal.modal.view()))
.child(active_modal.modal.view())
.on_mouse_down(MouseButton::Left, |_, _, cx| {
cx.stop_propagation();
}),

View File

@@ -61,7 +61,7 @@ use std::{
path::{Component, Path, PathBuf},
pin::Pin,
sync::{
atomic::{self, AtomicU32, AtomicUsize, Ordering::SeqCst},
atomic::{self, AtomicI32, AtomicUsize, Ordering::SeqCst},
Arc,
},
time::{Duration, Instant},
@@ -1525,6 +1525,7 @@ impl LocalWorktree {
fs,
fs_case_sensitive,
status_updates_tx: scan_states_tx,
scans_running: Arc::new(AtomicI32::new(0)),
executor: background,
scan_requests_rx,
path_prefixes_to_scan_rx,
@@ -4249,11 +4250,6 @@ struct PathEntry {
scan_id: usize,
}
#[derive(Debug, Default)]
struct FsScanned {
status_scans: Arc<AtomicU32>,
}
impl sum_tree::Item for PathEntry {
type Summary = PathEntrySummary;
@@ -4321,6 +4317,7 @@ struct BackgroundScanner {
fs: Arc<dyn Fs>,
fs_case_sensitive: bool,
status_updates_tx: UnboundedSender<ScanState>,
scans_running: Arc<AtomicI32>,
executor: BackgroundExecutor,
scan_requests_rx: channel::Receiver<ScanRequest>,
path_prefixes_to_scan_rx: channel::Receiver<PathPrefixScanRequest>,
@@ -4428,13 +4425,13 @@ impl BackgroundScanner {
// Perform an initial scan of the directory.
drop(scan_job_tx);
let scans_running = self.scan_dirs(true, scan_job_rx).await;
self.scan_dirs(true, scan_job_rx).await;
{
let mut state = self.state.lock();
state.snapshot.completed_scan_id = state.snapshot.scan_id;
}
let scanning = scans_running.status_scans.load(atomic::Ordering::Acquire) > 0;
let scanning = self.scans_running.load(atomic::Ordering::Acquire) > 0;
self.send_status_update(scanning, SmallVec::new());
// Process any FS events that occurred while performing the initial scan.
@@ -4461,7 +4458,7 @@ impl BackgroundScanner {
// these before handling changes reported by the filesystem.
request = self.next_scan_request().fuse() => {
let Ok(request) = request else { break };
let scanning = scans_running.status_scans.load(atomic::Ordering::Acquire) > 0;
let scanning = self.scans_running.load(atomic::Ordering::Acquire) > 0;
if !self.process_scan_request(request, scanning).await {
return;
}
@@ -4484,7 +4481,7 @@ impl BackgroundScanner {
self.process_events(vec![abs_path]).await;
}
}
let scanning = scans_running.status_scans.load(atomic::Ordering::Acquire) > 0;
let scanning = self.scans_running.load(atomic::Ordering::Acquire) > 0;
self.send_status_update(scanning, request.done);
}
@@ -4678,7 +4675,7 @@ impl BackgroundScanner {
.await;
self.update_ignore_statuses(scan_job_tx).await;
let scans_running = self.scan_dirs(false, scan_job_rx).await;
self.scan_dirs(false, scan_job_rx).await;
let status_update = if !dot_git_abs_paths.is_empty() {
Some(self.update_git_repositories(dot_git_abs_paths))
@@ -4689,6 +4686,7 @@ impl BackgroundScanner {
let phase = self.phase;
let status_update_tx = self.status_updates_tx.clone();
let state = self.state.clone();
let scans_running = self.scans_running.clone();
self.executor
.spawn(async move {
if let Some(status_update) = status_update {
@@ -4704,7 +4702,7 @@ impl BackgroundScanner {
#[cfg(test)]
state.snapshot.check_git_invariants();
}
let scanning = scans_running.status_scans.load(atomic::Ordering::Acquire) > 0;
let scanning = scans_running.load(atomic::Ordering::Acquire) > 0;
send_status_update_inner(phase, state, status_update_tx, scanning, SmallVec::new());
})
.detach();
@@ -4729,9 +4727,8 @@ impl BackgroundScanner {
}
drop(scan_job_tx);
}
let scans_running = Arc::new(AtomicU32::new(0));
while let Ok(job) = scan_job_rx.recv().await {
self.scan_dir(&scans_running, &job).await.log_err();
self.scan_dir(&job).await.log_err();
}
!mem::take(&mut self.state.lock().paths_to_scan).is_empty()
@@ -4741,16 +4738,16 @@ impl BackgroundScanner {
&self,
enable_progress_updates: bool,
scan_jobs_rx: channel::Receiver<ScanJob>,
) -> FsScanned {
) {
if self
.status_updates_tx
.unbounded_send(ScanState::Started)
.is_err()
{
return FsScanned::default();
return;
}
let scans_running = Arc::new(AtomicU32::new(1));
inc_scans_running(&self.scans_running);
let progress_update_count = AtomicUsize::new(0);
self.executor
.scoped(|scope| {
@@ -4795,7 +4792,7 @@ impl BackgroundScanner {
// Recursively load directories from the file system.
job = scan_jobs_rx.recv().fuse() => {
let Ok(job) = job else { break };
if let Err(err) = self.scan_dir(&scans_running, &job).await {
if let Err(err) = self.scan_dir(&job).await {
if job.path.as_ref() != Path::new("") {
log::error!("error scanning directory {:?}: {}", job.abs_path, err);
}
@@ -4808,10 +4805,7 @@ impl BackgroundScanner {
})
.await;
scans_running.fetch_sub(1, atomic::Ordering::Release);
FsScanned {
status_scans: scans_running,
}
dec_scans_running(&self.scans_running, 1);
}
fn send_status_update(&self, scanning: bool, barrier: SmallVec<[barrier::Sender; 1]>) -> bool {
@@ -4824,7 +4818,7 @@ impl BackgroundScanner {
)
}
async fn scan_dir(&self, scans_running: &Arc<AtomicU32>, job: &ScanJob) -> Result<()> {
async fn scan_dir(&self, job: &ScanJob) -> Result<()> {
let root_abs_path;
let root_char_bag;
{
@@ -4879,7 +4873,7 @@ impl BackgroundScanner {
self.watcher.as_ref(),
);
if let Some(local_repo) = repo {
scans_running.fetch_add(1, atomic::Ordering::Release);
inc_scans_running(&self.scans_running);
git_status_update_jobs
.push(self.schedule_git_statuses_update(&mut state, local_repo));
}
@@ -5002,7 +4996,7 @@ impl BackgroundScanner {
let task_state = self.state.clone();
let phase = self.phase;
let status_updates_tx = self.status_updates_tx.clone();
let scans_running = scans_running.clone();
let scans_running = self.scans_running.clone();
self.executor
.spawn(async move {
if !git_status_update_jobs.is_empty() {
@@ -5010,7 +5004,7 @@ impl BackgroundScanner {
let status_updated = status_updates
.iter()
.any(|update_result| update_result.is_ok());
scans_running.fetch_sub(status_updates.len() as u32, atomic::Ordering::Release);
dec_scans_running(&scans_running, status_updates.len() as i32);
if status_updated {
let scanning = scans_running.load(atomic::Ordering::Acquire) > 0;
send_status_update_inner(
@@ -5512,106 +5506,15 @@ impl BackgroundScanner {
fn schedule_git_statuses_update(
&self,
state: &mut BackgroundScannerState,
mut local_repository: LocalRepositoryEntry,
local_repository: LocalRepositoryEntry,
) -> oneshot::Receiver<()> {
let repository_name = local_repository.work_directory.display_name();
let path_key = local_repository.work_directory.path_key();
let job_state = self.state.clone();
let (tx, rx) = oneshot::channel();
state.repository_scans.insert(
path_key.clone(),
self.executor.spawn(async move {
update_branches(&job_state, &mut local_repository)
.await
.log_err();
log::trace!("updating git statuses for repo {repository_name}",);
let t0 = Instant::now();
let Some(statuses) = local_repository
.repo()
.status(&[git::WORK_DIRECTORY_REPO_PATH.clone()])
.log_err()
else {
return;
};
log::trace!(
"computed git statuses for repo {repository_name} in {:?}",
t0.elapsed()
);
let t0 = Instant::now();
let mut changed_paths = Vec::new();
let snapshot = job_state.lock().snapshot.snapshot.clone();
let Some(mut repository) = snapshot
.repository(path_key)
.context(
"Tried to update git statuses for a repository that isn't in the snapshot",
)
.log_err()
else {
return;
};
let merge_head_shas = local_repository.repo().merge_head_shas();
if merge_head_shas != local_repository.current_merge_head_shas {
mem::take(&mut repository.current_merge_conflicts);
}
let mut new_entries_by_path = SumTree::new(&());
for (repo_path, status) in statuses.entries.iter() {
let project_path = repository.work_directory.try_unrelativize(repo_path);
new_entries_by_path.insert_or_replace(
StatusEntry {
repo_path: repo_path.clone(),
status: *status,
},
&(),
);
if status.is_conflicted() {
repository.current_merge_conflicts.insert(repo_path.clone());
}
if let Some(path) = project_path {
changed_paths.push(path);
}
}
repository.statuses_by_path = new_entries_by_path;
let mut state = job_state.lock();
state
.snapshot
.repositories
.insert_or_replace(repository, &());
state.snapshot.git_repositories.update(
&local_repository.work_directory_id,
|entry| {
entry.current_merge_head_shas = merge_head_shas;
entry.merge_message = std::fs::read_to_string(
local_repository.dot_git_dir_abs_path.join("MERGE_MSG"),
)
.ok()
.and_then(|merge_msg| Some(merge_msg.lines().next()?.to_owned()));
entry.status_scan_id += 1;
},
);
util::extend_sorted(
&mut state.changed_paths,
changed_paths,
usize::MAX,
Ord::cmp,
);
log::trace!(
"applied git status updates for repo {repository_name} in {:?}",
t0.elapsed(),
);
tx.send(()).ok();
}),
local_repository.work_directory.path_key(),
self.executor
.spawn(do_git_status_update(job_state, local_repository, tx)),
);
rx
}
@@ -5643,6 +5546,15 @@ impl BackgroundScanner {
}
}
fn inc_scans_running(scans_running: &AtomicI32) {
scans_running.fetch_add(1, atomic::Ordering::Release);
}
fn dec_scans_running(scans_running: &AtomicI32, by: i32) {
let old = scans_running.fetch_sub(by, atomic::Ordering::Release);
debug_assert!(old >= by);
}
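These helpers centralize the counter bookkeeping shared by the scanner and its spawned status tasks: increment when a scan or git status job starts, decrement when it finishes, and report `scanning` as long as the count is positive. A self-contained sketch of the same pattern (the helper names mirror the diff; the threaded driver is illustrative):

```rust
use std::sync::atomic::{AtomicI32, Ordering};
use std::sync::Arc;
use std::thread;

// A signed counter so an over-decrement is observable via debug_assert
// instead of silently wrapping as an unsigned counter would.
fn inc_scans_running(scans_running: &AtomicI32) {
    scans_running.fetch_add(1, Ordering::Release);
}

fn dec_scans_running(scans_running: &AtomicI32, by: i32) {
    let old = scans_running.fetch_sub(by, Ordering::Release);
    debug_assert!(old >= by);
}

fn main() {
    let scans_running = Arc::new(AtomicI32::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = scans_running.clone();
            thread::spawn(move || {
                inc_scans_running(&counter); // a scan starts
                // ... directory scan work would happen here ...
                dec_scans_running(&counter, 1); // and finishes
            })
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    // With every started scan finished, "scanning" must read false.
    let scanning = scans_running.load(Ordering::Acquire) > 0;
    assert!(!scanning);
    println!("scanning = {scanning}");
}
```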
fn send_status_update_inner(
phase: BackgroundScannerPhase,
state: Arc<Mutex<BackgroundScannerState>>,
@@ -5690,6 +5602,100 @@ async fn update_branches(
Ok(())
}
async fn do_git_status_update(
job_state: Arc<Mutex<BackgroundScannerState>>,
mut local_repository: LocalRepositoryEntry,
tx: oneshot::Sender<()>,
) {
let repository_name = local_repository.work_directory.display_name();
log::trace!("updating git branches for repo {repository_name}");
update_branches(&job_state, &mut local_repository)
.await
.log_err();
let t0 = Instant::now();
log::trace!("updating git statuses for repo {repository_name}");
let Some(statuses) = local_repository
.repo()
.status(&[git::WORK_DIRECTORY_REPO_PATH.clone()])
.log_err()
else {
return;
};
log::trace!(
"computed git statuses for repo {repository_name} in {:?}",
t0.elapsed()
);
let t0 = Instant::now();
let mut changed_paths = Vec::new();
let snapshot = job_state.lock().snapshot.snapshot.clone();
let Some(mut repository) = snapshot
.repository(local_repository.work_directory.path_key())
.context("Tried to update git statuses for a repository that isn't in the snapshot")
.log_err()
else {
return;
};
let merge_head_shas = local_repository.repo().merge_head_shas();
if merge_head_shas != local_repository.current_merge_head_shas {
mem::take(&mut repository.current_merge_conflicts);
}
let mut new_entries_by_path = SumTree::new(&());
for (repo_path, status) in statuses.entries.iter() {
let project_path = repository.work_directory.try_unrelativize(repo_path);
new_entries_by_path.insert_or_replace(
StatusEntry {
repo_path: repo_path.clone(),
status: *status,
},
&(),
);
if status.is_conflicted() {
repository.current_merge_conflicts.insert(repo_path.clone());
}
if let Some(path) = project_path {
changed_paths.push(path);
}
}
repository.statuses_by_path = new_entries_by_path;
let mut state = job_state.lock();
state
.snapshot
.repositories
.insert_or_replace(repository, &());
state
.snapshot
.git_repositories
.update(&local_repository.work_directory_id, |entry| {
entry.current_merge_head_shas = merge_head_shas;
entry.merge_message =
std::fs::read_to_string(local_repository.dot_git_dir_abs_path.join("MERGE_MSG"))
.ok()
.and_then(|merge_msg| Some(merge_msg.lines().next()?.to_owned()));
entry.status_scan_id += 1;
});
util::extend_sorted(
&mut state.changed_paths,
changed_paths,
usize::MAX,
Ord::cmp,
);
log::trace!(
"applied git status updates for repo {repository_name} in {:?}",
t0.elapsed(),
);
tx.send(()).ok();
}
fn build_diff(
phase: BackgroundScannerPhase,
old_snapshot: &Snapshot,

View File

@@ -845,9 +845,7 @@ async fn test_update_gitignore(cx: &mut TestAppContext) {
});
}
// TODO: Fix flaky test.
// #[gpui::test]
#[allow(unused)]
#[gpui::test]
async fn test_write_file(cx: &mut TestAppContext) {
init_test(cx);
cx.executor().allow_parking();
@@ -2432,9 +2430,7 @@ async fn test_git_repository_for_path(cx: &mut TestAppContext) {
// you can't rename a directory that some program already has open. This is a
// limitation of Windows. See:
// https://stackoverflow.com/questions/41365318/access-is-denied-when-renaming-folder
// TODO: Fix flaky test.
// #[gpui::test]
#[allow(unused)]
#[gpui::test]
#[cfg_attr(target_os = "windows", ignore)]
async fn test_file_status(cx: &mut TestAppContext) {
init_test(cx);
@@ -2627,9 +2623,7 @@ async fn test_file_status(cx: &mut TestAppContext) {
});
}
// TODO: Fix flaky test.
// #[gpui::test]
#[allow(unused)]
#[gpui::test]
async fn test_git_repository_status(cx: &mut TestAppContext) {
init_test(cx);
cx.executor().allow_parking();
@@ -2743,9 +2737,7 @@ async fn test_git_repository_status(cx: &mut TestAppContext) {
});
}
// TODO: Fix flaky test.
// #[gpui::test]
#[allow(unused)]
#[gpui::test]
async fn test_git_status_postprocessing(cx: &mut TestAppContext) {
init_test(cx);
cx.executor().allow_parking();
@@ -3541,8 +3533,6 @@ fn git_cherry_pick(commit: &git2::Commit<'_>, repo: &git2::Repository) {
repo.cherrypick(commit, None).expect("Failed to cherrypick");
}
// TODO: Remove allow(unused) once flaky tests are reinstated
#[allow(unused)]
#[track_caller]
fn git_stash(repo: &mut git2::Repository) {
use git2::Signature;
@@ -3552,8 +3542,6 @@ fn git_stash(repo: &mut git2::Repository) {
.expect("Failed to stash");
}
// TODO: Remove allow(unused) once flaky tests are reinstated
#[allow(unused)]
#[track_caller]
fn git_reset(offset: usize, repo: &git2::Repository) {
let head = repo.head().expect("Couldn't get repo head");

View File

@@ -1539,6 +1539,78 @@ To interpret all `.c` files as C++, files called `MyLockFile` as TOML and files
`boolean` values
## Icon Theme
- Description: The icon theme setting can be specified in two forms - either as the name of an icon theme or as an object containing the `mode`, `dark`, and `light` icon themes for files/folders inside Zed.
- Setting: `icon_theme`
- Default: `Zed (Default)`
### Icon Theme Object
- Description: Specify the icon theme using an object that includes the `mode`, `dark`, and `light` themes.
- Setting: `icon_theme`
- Default:
```json
"icon_theme": {
"mode": "system",
"dark": "Zed (Default)",
"light": "Zed (Default)"
},
```
### Mode
- Description: Specify the icon theme mode.
- Setting: `mode`
- Default: `system`
**Options**
1. Set the icon theme to dark mode
```json
{
"mode": "dark"
}
```
2. Set the icon theme to light mode
```json
{
"mode": "light"
}
```
3. Set the icon theme to system mode
```json
{
"mode": "system"
}
```
### Dark
- Description: The name of the dark icon theme.
- Setting: `dark`
- Default: `Zed (Default)`
**Options**
Run the `icon theme selector: toggle` action in the command palette to see the current list of valid icon theme names.
### Light
- Description: The name of the light icon theme.
- Setting: `light`
- Default: `Zed (Default)`
**Options**
Run the `icon theme selector: toggle` action in the command palette to see the current list of valid icon theme names.
## Inlay hints
- Description: Configuration for displaying extra text with hints in the editor.

View File

@@ -3,15 +3,36 @@
Python support is available natively in Zed.
- Tree-sitter: [tree-sitter-python](https://github.com/tree-sitter/tree-sitter-python)
- Language Server: [microsoft/pyright](https://github.com/microsoft/pyright)
- Language Servers:
- [microsoft/pyright](https://github.com/microsoft/pyright)
- [python-lsp/python-lsp-server](https://github.com/python-lsp/python-lsp-server) (PyLSP)
## Configuration
## Language Servers
Zed supports multiple Python language servers, some of which may require configuration to work properly.
See: [Working with Language Servers](https://zed.dev/docs/configuring-languages#working-with-language-servers) for more information.
## Virtual Environments in the Terminal {#terminal-detect_venv}
Zed will detect Python virtual environments and automatically activate them in the terminal if available.
See: [detect_venv documentation](../configuring-zed.md#terminal-detect_venv) for more.
## PyLSP
[python-lsp-server](https://github.com/python-lsp/python-lsp-server/), more commonly known as PyLSP, integrates with a number of external tools by default (autopep8, mccabe, pycodestyle, yapf), while others (flake8, pylint) are optional and must be explicitly enabled and configured.
See [Python Language Server Configuration](https://github.com/python-lsp/python-lsp-server/blob/develop/CONFIGURATION.md) for more.
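As a concrete, illustrative example, individual plugins can be toggled from the `lsp` section of Zed's `settings.json`; the plugin names below come from the PyLSP configuration docs, and the exact keys should be checked there:

```json
{
  "lsp": {
    "pylsp": {
      "settings": {
        "plugins": {
          "pycodestyle": { "enabled": false },
          "flake8": { "enabled": true }
        }
      }
    }
  }
}
```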
## PyRight
### PyRight Configuration
The [pyright](https://github.com/microsoft/pyright) language server offers flexible configuration options specified in a JSON-formatted configuration file. By default, the file is called `pyrightconfig.json` and is located in the root directory of your project. Pyright settings can also be specified in a `[tool.pyright]` section of a `pyproject.toml` file. A `pyrightconfig.json` file always takes precedence over `pyproject.toml` if both are present.
For more information, see the Pyright [configuration documentation](https://microsoft.github.io/pyright/#/configuration).
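For instance, a minimal `pyrightconfig.json` might look like this (field names are from the Pyright configuration docs; the values are illustrative):

```json
{
  "venvPath": ".",
  "venv": ".venv",
  "typeCheckingMode": "basic",
  "exclude": ["**/__pycache__"]
}
```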
## Settings
### PyRight Settings
The [pyright](https://github.com/microsoft/pyright) language server also accepts specific LSP-related settings, not necessarily connected to a project. These can be changed in the `lsp` section of your `settings.json`.
@@ -41,7 +62,7 @@ For example, in order to:
For more information, see the Pyright [settings documentation](https://microsoft.github.io/pyright/#/settings).
## Virtual environments
### Pyright Virtual environments
A Python [virtual environment](https://docs.python.org/3/tutorial/venv.html) allows you to store all of a project's dependencies, including the Python interpreter and package manager, in a single directory that's isolated from any other Python projects on your computer.
@@ -94,17 +115,12 @@ You can also configure this option directly in your `settings.json` file ([pyrig
}
```
## Code formatting & Linting
### Code formatting & Linting
The Pyright language server does not provide code formatting or linting. If you want to detect lint errors and reformat your Python code upon saving, you'll need to set up an additional tool.
A common tool for formatting Python code is [Ruff](https://docs.astral.sh/ruff/), an extremely fast Python linter and code formatter written in Rust. It is available through the [Ruff extension](https://github.com/zed-industries/zed/tree/main/extensions/ruff/). To configure the Ruff extension to work within Zed, see the setup documentation [here](https://docs.astral.sh/ruff/editors/setup/#zed).
## Virtual Environments in the Terminal {#terminal-detect_venv}
Zed will also detect virtual environments and automatically activate them in terminal if available.
See: [detect_venv documentation](../configuring-zed.md#terminal-detect_venv) for more.
<!--
TBD: Expand Python Ruff docs.
TBD: Ruff pyproject.toml, ruff.toml docs. `ruff.configuration`.