Compare commits

..

65 Commits

Author SHA1 Message Date
Cole Miller
8001cb9037 Remove log 2025-04-08 16:11:31 -04:00
Cole Miller
18144e7d63 Clean up ProjectEnvironment constructor 2025-04-08 16:06:57 -04:00
Cole Miller
f2bb495bff Remove buggy attempt to GC environments 2025-04-08 16:01:53 -04:00
Cole Miller
1106ba81ff Clean up environment loading a bit 2025-04-08 15:53:37 -04:00
Joseph T. Lyons
3a9fe61516 zed 0.181.4 2025-04-07 08:22:24 -04:00
Agus Zubiaga
28cca2527c agent: Refresh UI when context or thread history changes (#28188)
I found a few more cases where the UI wasn't updated immediately after
an interaction.

Release Notes:

- agent: Fixed delay after removing threads from "Past Interactions"
- agent: Fixed delay after adding/removing context via keyboard
2025-04-06 22:18:58 -04:00
Agus Zubiaga
3fde499eee agent: Refresh UI when sending first message (#28180)
Release Notes:

- Agent Beta: Fixed a delay when sending the first message in a new
thread
2025-04-06 22:18:51 -04:00
Richard Feldman
f176715f1e Try to improve behavior when agent is stuck (#28169)
Currently, it's pretty common that when the agent gets stuck, it deletes
whatever it's stuck on and replaces it with a TODO comment, then
cheerfully reports that it has "simplified" the implementation. This is
worse than leaving the broken code, because at least a human could take
over and try to get it across the finish line.

This system prompt adjustment attempts to make the agent do something
more useful when in this situation: report that it's stuck, explain why
it's stuck, and ask the user what to do.

Release Notes:

- N/A
2025-04-06 22:18:46 -04:00
Richard Feldman
4a302a28b7 If file is too big, provide the outline and suggest a follow-up tool (#28158)
<img width="622" alt="Screenshot 2025-04-05 at 5 48 14 PM"
src="https://github.com/user-attachments/assets/24b9c7d4-d3e2-4929-bca8-79db5b4e5748"
/>

Release Notes:

- The `read_files` tool now reads only the symbol outline for files above a certain size, to conserve context window space. It then suggests that the agent call `read_files` again with the relevant line ranges it saw in the outline.
2025-04-06 22:18:38 -04:00
Marshall Bowers
3154b530a0 assistant: Fix assistant: open prompt library not opening the prompt library (#28156)
This PR fixes the `assistant: open prompt library` action in the command
palette not opening the prompt library when the Assistant Panel did not
have focus.

Fixes https://github.com/zed-industries/zed/issues/28058.

Release Notes:

- assistant: Fixed `assistant: open prompt library` not opening the
prompt library when the Assistant Panel was not focused.
2025-04-06 22:17:59 -04:00
Shardul Vaidya
959d5f52a4 bedrock: Add support for tool use, cross-region inference, and Claude 3.7 Thinking (#28137)
Closes #27223
Merges: #27996, #26734, #27949 

Release Notes:

- AWS Bedrock: Added advanced authentication strategies with:
  - Short-lived credentials with Session Tokens
  - AWS Named Profile
  - EC2 Identity, Pod Identity, Web Identity
- AWS Bedrock: Added Claude 3.7 Thinking support.
- AWS Bedrock: Added Cross-Region Inference for all combinations of regions and model availability.
- Agent Beta: Added support for AWS Bedrock.

---------

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-04-06 22:17:35 -04:00
gcp-cherry-pick-bot[bot]
2771623bac copilot: Create Copilot directory if it does not exist (cherry-pick #28157) (#28178)
Cherry-picked copilot: Create Copilot directory if it does not exist
(#28157)

Closes https://github.com/zed-industries/zed/issues/27966

Issue:

- Copilot is failing to launch because the copilot directory is missing

<img width="497" alt="image"

src="https://github.com/user-attachments/assets/af35eb66-7e91-4dc6-a862-d1575da33b5b"
/>


<img width="943" alt="image"

src="https://github.com/user-attachments/assets/0b195c8c-52eb-42b9-bf36-40086398cc3f"
/>


Release Notes:

- copilot: Fixed an issue where GitHub Copilot would not install
properly if the directory was not present.

---------

Co-authored-by: Marshall Bowers <git@maxdeviant.com>

Co-authored-by: Richard Hao <richard@0xdev.dev>
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-04-06 07:50:12 -06:00
gcp-cherry-pick-bot[bot]
920487706d agent: Fix opening configuration view from the model selector (cherry-pick #28154) (#28155)
Cherry-picked agent: Fix opening configuration view from the model
selector (#28154)

This PR fixes an issue where opening the configuration view from the
model selector in the Agent (or inline assist) was not working properly.

Fixes https://github.com/zed-industries/zed/issues/28078.

Release Notes:

- Agent Beta: Fixed an issue where selecting "Configure" in the model
selector would not bring up the configuration view.

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-04-05 12:56:40 -04:00
gcp-cherry-pick-bot[bot]
3d5035d8a6 title_bar: Ensure git onboarding banner dismissal is properly respected (cherry-pick #28147) (#28148)
Cherry-picked title_bar: Ensure git onboarding banner dismissal is
properly respected (#28147)

A user reported this issue [on
Discord](https://discord.com/channels/869392257814519848/873292398204170290/1357879959422636185).

The issue only arises for users who recently installed Zed or had not previously dismissed the Git Onboarding component. It was introduced by https://github.com/zed-industries/zed/pull/27412, which made the banner component reusable.

For every banner, a value is stored in the KVP store when it is first dismissed. For the git onboarding banner, this key was originally `zed_git_banner_dismissed_at`, but the linked PR would have changed it. That change would have caused the banner to be shown again for users who had already dismissed it, so for the special case of `Git Onboarding`, a check was added to ensure this would not happen.

However, this check was only added when reading the key from the DB, not when writing the git onboarding dismissal to the DB. Thus, if a user who had not previously dismissed the panel opened Zed, we would check whether the old key was present in the DB. Since it would not be, the banner would be shown. If the user then dismissed the panel, the dismissal would be stored in the database under the new key. Thus, on reopening Zed, the banner would be shown again, since there would still be no value for the old key, and users were unable to dismiss the panel.


This PR fixes this behavior by moving the check into the method that generates the key. With this, users who were unaffected by the bug will still not see the panel again, and users who install Zed with this change present will be able to properly dismiss the panel as well. Users who were affected by the bug will need to dismiss the banner one more time. That is because I did not want to modify the dismissal check to look for two keys (the original one and the new one), as it would clutter the logic even further for this special case. If that would be preferred, feel free to let me know.
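
A minimal sketch of the resulting pattern, with hypothetical names (a `HashMap` standing in for the KVP store, not the actual `title_bar` code):

```rust
use std::collections::HashMap;

struct OnboardingBanner {
    id: &'static str,
}

impl OnboardingBanner {
    fn dismissal_key(&self) -> String {
        // The special case lives in the one place that generates the key, so
        // the read and write paths can never disagree about which key to use.
        if self.id == "git" {
            "zed_git_banner_dismissed_at".to_string()
        } else {
            format!("zed_{}_banner_dismissed_at", self.id)
        }
    }

    fn is_dismissed(&self, kvp: &HashMap<String, String>) -> bool {
        kvp.contains_key(&self.dismissal_key())
    }

    fn dismiss(&self, kvp: &mut HashMap<String, String>, dismissed_at: String) {
        // Before the fix, the write effectively used the new key while the
        // read checked the old one, so the banner kept reappearing.
        kvp.insert(self.dismissal_key(), dismissed_at);
    }
}

fn main() {
    let banner = OnboardingBanner { id: "git" };
    let mut kvp = HashMap::new();
    assert!(!banner.is_dismissed(&kvp));
    banner.dismiss(&mut kvp, "2025-04-05".to_string());
    assert!(banner.is_dismissed(&kvp)); // stays dismissed on the next launch
}
```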

Release Notes:

- Fixed an issue where dismissing the git onboarding banner would not be
persisted across sessions.

Co-authored-by: Finn Evers <dev@bahn.sh>
2025-04-05 11:08:31 -04:00
Agus Zubiaga
d047801295 agent: Fix tool use output rendering (#28146)
Tool use output wouldn't get rendered in some states.

Release Notes:

- N/A
2025-04-05 10:20:44 -04:00
Agus Zubiaga
e7d1d731f4 agent: Add missing notify in ThreadHistory::delete_thread (#28144)
This would cause the history view not to get refreshed immediately when a thread was deleted.

Release Notes:

- agent: Fixed a bug where the history view wouldn't refresh after
deleting a thread
2025-04-05 10:20:34 -04:00
Agus Zubiaga
149db5cad4 agent: Fix thread summary generation (#28143)
#28102 introduced a bug where thread summaries wouldn't get generated
because they would get set to the default title instead of `None`.

Not adding a release note because the bug didn't make it to Preview.

Release Notes:

- N/A
2025-04-05 10:20:27 -04:00
Conrad Irwin
992d791852 Introduce "Near" block type (#28032)
A "Near" block acts similarly to a "Below" block, but can (if it's
height is <= one line height) be shown on the end of the preceding line
instead of adding an entire blank line to the editor.

You can test it out by pasting this into `go_to_diagnostic_impl` and
then press `F8`
```
        let buffer = self.buffer.read(cx).snapshot(cx);
        let selection = self.selections.newest_anchor();

        self.display_map.update(cx, |display_map, cx| {
            display_map.insert_blocks(
                [BlockProperties {
                    placement: BlockPlacement::Near(selection.start),
                    height: Some(1),
                    style: BlockStyle::Flex,
                    render: Arc::new(|_| {
                        div()
                            .w(px(100.))
                            .h(px(16.))
                            .bg(gpui::hsla(0., 0., 1., 0.5))
                            .into_any_element()
                    }),
                    priority: 0,
                }],
                cx,
            )
        });
        return;
```

Release Notes:

- N/A

---------

Co-authored-by: Antonio Scandurra <me@as-cii.com>
2025-04-05 10:19:58 -04:00
Nathan Sobo
18bf07822e Fix panic calling blocks_intersecting_buffer_range with an empty range (#28049)
Previously, when comparing a block with an empty range to an empty query
range in non-inclusive mode, our binary search logic could end up
computing an inverted range, causing a panic.

This commit adds special casing when comparing empty blocks with empty
ranges.
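
For illustration only (not the actual block map code), a minimal sketch of how non-inclusive bounds over sorted ranges can come back inverted when both the block and the query range are empty:

```rust
use std::ops::Range;

fn main() {
    // Blocks sorted by position; the middle one is an empty (zero-length) block.
    let blocks: Vec<Range<usize>> = vec![0..2, 5..5, 8..10];
    let query: Range<usize> = 5..5; // empty query range at the same position

    // Non-inclusive bounds: a block ending exactly at query.start doesn't
    // intersect, and neither does one starting exactly at query.end.
    let first = blocks.partition_point(|b| b.end <= query.start); // 2
    let last = blocks.partition_point(|b| b.start < query.end); // 1

    // first > last: the computed index range is inverted, and slicing the
    // block list with it panics ("slice index starts at 2 but ends at 1").
    println!("intersecting blocks: {first}..{last}");
    let _intersecting = &blocks[first..last];
}
```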

cc @as-cii: I'm realizing that the approach to searching for the
intersecting replacement blocks makes some invalid assumptions about the
ordering of replace decorations. They aren't ordered at all by their end
range. @maxbrunsfeld and I are wondering if long term, we should remove
replace decorations and find another solution for folding buffers in
multi buffers.

Release Notes:

- Fixed an occasional panic that would occur when navigating to the next
change hunk with a pending inline transformation present.

Co-authored-by: Peter Tripp <petertripp@gmail.com>
2025-04-05 10:19:51 -04:00
Marshall Bowers
3ba619db5c agent: Fix panic when opening Agent diff from the workspace (#28132)
This PR fixes a panic that could occur when opening the Agent diff from
the workspace (with the agent panel closed).

Release Notes:

- agent: Fixed a panic when running the `agent: open agent diff` command
with the Agent Panel closed.
2025-04-05 10:15:06 -04:00
Danilo Leal
3559464840 agent: Use Markdown to render tool input and output content (#28127)
Release Notes:

- agent: Tool call's input and output content are now rendered with
Markdown, which allows them to be selected and copied.

---------

Co-authored-by: Agus Zubiaga <hi@aguz.me>
2025-04-05 10:14:57 -04:00
Danilo Leal
32a03ee58a agent: Hide the scrollbar if there's no mouse movement (#28129)
Release Notes:

- agent: The scrollbar now automatically hides if there's no mouse
movement on the thread list.

---------

Co-authored-by: Agus Zubiaga <agus@zed.dev>
Co-authored-by: Agus Zubiaga <hi@aguz.me>
2025-04-05 10:14:52 -04:00
Bennet Bo Fenner
912ce5dc1a Add tool calling support for GitHub Copilot Chat (#28035)
This PR adds tool calling support for GitHub Copilot Chat models.

Currently only supports the Claude family of models.

Release Notes:

- agent: Added tool calling support for Claude models in GitHub Copilot
Chat.

---------

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-04-05 10:14:35 -04:00
Agus Zubiaga
97e8f499fd agent: Fix deleting threads in history via keyboard (#28113)
Using `shift-backspace` now because we need `backspace` for search

Release Notes:
- agent: Fix deleting threads in history via keyboard
2025-04-05 10:14:24 -04:00
Agus Zubiaga
736d5e6da1 agent: Disable redundant tools (might delete later) (#28114)
Release Notes:

- agent: Disable tools that are redundant in the presence of the bash
tool
2025-04-05 10:13:41 -04:00
gcp-cherry-pick-bot[bot]
68023fdc44 Properly query remote ssh server for language servers by name (cherry-pick #28124) (#28125)
Cherry-picked Properly query remote ssh server for language servers by
name (#28124)

Follow-up of https://github.com/zed-industries/zed/pull/27775

Release Notes:

- N/A

Co-authored-by: Kirill Bulatov <kirill@zed.dev>
2025-04-04 14:22:09 -06:00
gcp-cherry-pick-bot[bot]
253ef7ef44 jsx-tag-auto-close: Remove potential source of bugs and panics (cherry-pick #28119) (#28122)
Cherry-picked jsx-tag-auto-close: Remove potential source of bugs and
panics (#28119)

Switch to using anchors for storing edited ranges rather than offsets as
they have to be used with multiple buffer snapshots

Release Notes:

- N/A

Co-authored-by: Ben Kunkle <ben@zed.dev>
2025-04-04 16:03:03 -04:00
Max Brunsfeld
687cc5d661 Fix panic or bad hunks when expanding hunks w/ multiple ranges in 1 hunk (#28117)
Release Notes:

- Fixed a crash that could happen when expanding diff hunks with
multiple cursors in one hunk.
2025-04-04 12:22:53 -07:00
Bennet Bo Fenner
290d249e38 agent: Fix invalid tool names in batch tool description (#28109)
The description of the Batch Tool still referred to `-` as the separator for tool names.

Release Notes:

- N/A
2025-04-04 13:19:37 -04:00
Agus Zubiaga
4922a000aa agent: Remove edit_files tool (#28041)
Release Notes:

- agent: Removed the `edit_files` tool in favor of `find_replace`
2025-04-04 13:19:30 -04:00
Agus Zubiaga
96c9824997 agent: Allow renaming threads (#28102)
Release Notes:

- agent: Add support for renaming threads

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
2025-04-04 13:19:22 -04:00
Richard Feldman
a83151730b Try adding beta token-efficient tool use for 3.7 Sonnet (#28100)
Release Notes:

- Enabled [token-efficient tool use
(beta)](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/token-efficient-tool-use)
for Claude 3.7 Sonnet models
2025-04-04 13:19:12 -04:00
Joseph T. Lyons
99335128eb zed 0.181.3 2025-04-04 10:45:15 -04:00
Agus Zubiaga
e8457656c6 ai: Separate model settings for each feature (#28088)
Closes: https://github.com/zed-industries/zed/issues/20582

Allows users to select a specific model for each AI-powered feature:
- Agent panel
- Inline assistant
- Thread summarization
- Commit message generation

If unspecified for a given feature, it will use the `default_model`
setting.

Release Notes:

- Added support for configuring a specific model for each AI-powered
feature

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-04-04 10:43:27 -04:00
Thomas Mickley-Doyle
b037c913e5 assistant_eval: Add ACE framework (#27181)
Release Notes:

- N/A

---------

Co-authored-by: Michael Sloan <michael@zed.dev>
2025-04-04 10:43:17 -04:00
Agus Zubiaga
aca7f54773 agent: Add search to Thread History (#28085)
![CleanShot 2025-04-04 at 09 45
47@2x](https://github.com/user-attachments/assets/a8ec4086-f71e-4ff4-a5b3-4eb5d4c48294)


Release Notes:

- agent: Add search box to thread history

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-04-04 09:28:14 -04:00
gcp-cherry-pick-bot[bot]
10a3ad078c Clear path-based excerpt data properly (cherry-pick #28026) (#28082)
Cherry-picked Clear path-based excerpt data properly (#28026)

Follow-up of https://github.com/zed-industries/zed/pull/27893

Release Notes:

- N/A

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>

Co-authored-by: Kirill Bulatov <kirill@zed.dev>
Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-04-04 09:18:56 -04:00
gcp-cherry-pick-bot[bot]
c4bbdd03c5 Temporarily disable flaky conflicted-cherry-pick test (cherry-pick #27950) (#28087)
Cherry-picked Temporarily disable flaky conflicted-cherry-pick test
(#27950)

Closes #ISSUE

Release Notes:

- N/A

Co-authored-by: Cole Miller <cole@zed.dev>
2025-04-04 09:18:20 -04:00
Antonio Scandurra
27a47233c9 Implement edit rejection in ActionLog (#28080)
Release Notes:

- Fixed a bug that would prevent rejecting certain agent edits.
2025-04-04 08:32:26 -04:00
Bennet Bo Fenner
0b8c14f0d6 agent: Show which lines were read when using read_file tool (#28077)
This makes sure that we specify which lines the agent actually read, avoiding confusing scenarios such as:

<img width="642" alt="Screenshot 2025-04-04 at 10 22 10"
src="https://github.com/user-attachments/assets/2680c313-4f77-4971-8743-8e3f5327c18d"
/>

Here the agent starts out by reading only a certain number of lines in the first tool call, then makes a second tool call to read the whole file. To the user, this looks like two identical tool calls.

Now:
<img width="621" alt="image"
src="https://github.com/user-attachments/assets/76222258-9cc8-4b7c-98c0-6d5cffb282f2"
/>
<img width="362" alt="image"
src="https://github.com/user-attachments/assets/293f2fc0-365d-4b84-8400-4c11474caeb8"
/>
<img width="420" alt="image"
src="https://github.com/user-attachments/assets/ca92493e-67ce-4d45-8f83-0168df575326"
/>
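
As a rough illustration (hypothetical helper, not the actual tool code), including the range that was read keeps repeated calls against the same file distinguishable:

```rust
fn read_file_label(path: &str, line_range: Option<(u32, u32)>) -> String {
    match line_range {
        // A bounded read shows exactly which lines were consumed.
        Some((start, end)) => format!("Read file `{path}` (lines {start}-{end})"),
        // A full read has no range suffix.
        None => format!("Read file `{path}`"),
    }
}

fn main() {
    println!("{}", read_file_label("src/editor.rs", Some((1, 200))));
    println!("{}", read_file_label("src/editor.rs", None));
}
```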



Release Notes:

- N/A
2025-04-04 08:31:37 -04:00
Bennet Bo Fenner
800f40524b agent: Differentiate @mentions from markdown links (#28073)
This ensures that we display @mentions and normal markdown links
differently:

<img width="670" alt="Screenshot 2025-04-04 at 11 07 51"
src="https://github.com/user-attachments/assets/0a4d0881-abb9-42a8-b3fa-912cd6873ae0"
/>


Release Notes:

- N/A
2025-04-04 08:31:19 -04:00
Antonio Scandurra
1f4a2d50a0 Scroll to first hunk when clicking on a file to review in Agent Panel (#28075)
Release Notes:

- Added the ability to scroll to a file when clicking on it in the Agent
Panel review section.
2025-04-04 08:31:14 -04:00
Marshall Bowers
c706acbfcb open_ai: Disable parallel_tool_calls (#28056)
This PR disables `parallel_tool_calls` for the models that support it,
as the Agent currently expects at most one tool use per turn.

It was a bit of trial and error to figure this out. OpenAI's API, annoyingly, will return an error if you pass `parallel_tool_calls` to a model that doesn't support it.
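
A hedged sketch of the shape of the fix (the model check below is an assumption for illustration; in Zed it would come from the model metadata):

```rust
use serde_json::{json, Value};

// Assumption for illustration: in practice the list of models that accept
// `parallel_tool_calls` comes from model metadata, not a prefix check.
fn supports_parallel_tool_calls(model: &str) -> bool {
    model.starts_with("gpt-4o")
}

fn build_chat_request(model: &str, messages: Value, tools: Value) -> Value {
    let mut body = json!({
        "model": model,
        "messages": messages,
        "tools": tools,
    });
    // Sending the flag to a model that doesn't support it makes the API return
    // an error, so only add it for models that accept it, and set it to false
    // because the agent expects at most one tool use per turn.
    if supports_parallel_tool_calls(model) {
        body["parallel_tool_calls"] = json!(false);
    }
    body
}

fn main() {
    let request = build_chat_request("gpt-4o", json!([]), json!([]));
    println!("{request}");
}
```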

Release Notes:

- N/A
2025-04-04 00:37:35 -04:00
Marshall Bowers
3f19ae1689 Add tool use support for OpenAI models (#28051)
This PR adds support for using tools to the OpenAI models.

Release Notes:

- agent: Added support for tool use with OpenAI models (Preview only).
2025-04-04 00:37:26 -04:00
Marshall Bowers
4744c4ff7e Remove unused extract_tool_args_from_events functions (#28038)
This PR removes the unused `extract_tool_args_from_events` functions
that were defined in some of the LLM provider crates.

Release Notes:

- N/A
2025-04-04 00:37:10 -04:00
Nate Butler
aa168c696b ui_input: TextField -> SingleLineInput (#28031)
- Rename `TextField` -> `SingleLineInput`
- Add a component preview for `SingleLineInput`
- Apply `SingleLineInput` to the AddContextServerModal

Release Notes:

- N/A

---------

Co-authored-by: Agus Zubiaga <hi@aguz.me>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Danilo Leal <67129314+danilo-leal@users.noreply.github.com>
2025-04-03 23:57:13 -04:00
Agus Zubiaga
ce2918ef46 agent: Snapshot context in user message instead of recreating it (#27967)
This makes context essentially work the same way as `read-file`,
increasing the likelihood of cache hits.

Just like with `read-file`, we'll notify the model when the user makes
an edit to one of the tracked files. In the future, we want to send a
diff instead of just a list of files, but that's an orthogonal change.
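
A rough sketch of the idea (hypothetical types, not the actual agent code): capture each context file's text when the message is sent, and on later turns only report which tracked files have changed since that snapshot.

```rust
use std::collections::HashMap;

struct Context {
    // path -> text captured when the user message was sent
    snapshots: HashMap<String, String>,
}

impl Context {
    // The snapshot is embedded in the user message itself, so as long as the
    // file is unchanged the conversation prefix stays byte-identical and can
    // be served from the prompt cache.
    fn attach(&mut self, path: &str, current_text: &str) -> String {
        self.snapshots
            .insert(path.to_string(), current_text.to_string());
        format!("File {path}:\n{current_text}")
    }

    // Later turns only name the files whose contents changed, instead of
    // recreating the whole context (a diff could be sent here in the future).
    fn stale_paths(&self, current: &HashMap<String, String>) -> Vec<String> {
        let mut stale = Vec::new();
        for (path, snapshot) in &self.snapshots {
            if current.get(path) != Some(snapshot) {
                stale.push(path.clone());
            }
        }
        stale
    }
}

fn main() {
    let mut context = Context { snapshots: HashMap::new() };
    println!("{}", context.attach("src/lib.rs", "fn add(a: i32, b: i32) -> i32 { a + b }"));

    let mut current = HashMap::new();
    current.insert("src/lib.rs".to_string(), "fn add(a: i32, b: i32) -> i32 { a - b }".to_string());
    println!("changed since snapshot: {:?}", context.stale_paths(&current));
}
```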


Release Notes:
- agent: Improved caching of files in context

---------

Co-authored-by: Antonio Scandurra <me@as-cii.com>
2025-04-03 23:56:49 -04:00
Joseph T. Lyons
3cfaadd2af zed 0.181.2 2025-04-03 14:45:38 -04:00
Danilo Leal
b69219e9e0 agent: Add token count in the thread view (#28037)
This PR adds the token count to the active thread view. It doesn't behave quite like Assistant 1, where it updates as you type, though; it updates after you submit the message.

<img
src="https://github.com/user-attachments/assets/82d2a180-554a-43ee-b776-3743359b609b"
width="700" />

---

Release Notes:

- agent: Add token count in the thread view

---------

Co-authored-by: Agus Zubiaga <hi@aguz.me>
2025-04-03 14:44:32 -04:00
Antonio Scandurra
0ae3518b1c Fix soft-wrapping with fold creases (#28029)
Release Notes:

- Fixed a rendering bug that caused context in the agent to not wrap
properly.

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
Co-authored-by: Zed AI <ai+claude-3.7@zed.dev>
2025-04-03 14:21:43 -04:00
Agus Zubiaga
73964353a4 agent: Handle tool use without text (#28030)
### Context 

The Anthropic API fails if a request message contains a tool use and no
`Text` segments or it only contains empty `Text` segments. These are
cases that the model itself produces, but the API doesn't support
sending them back.

#27917 fixed this by appending "Using tool..." to the thread's message, but this caused the actual conversation to include it, so it would appear in the UI (we would actually display a gap because we never rendered its markdown, but "Using tool..." would show up when the thread was restored).

### Solution

We'll now only append this placeholder when we build the request, so the
API still sees it, but the UI/Thread doesn't.

Another issue we found is that the model starts mimicking these placeholders in later tool uses, which is undesirable. So, unfortunately, we had to add logic to filter them out.
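
As a hedged sketch of that flow (hypothetical types, not the actual agent code):

```rust
// The placeholder only exists in the outgoing request and is stripped from
// model output, so it never lands in the stored thread or the UI.
const TOOL_USE_PLACEHOLDER: &str = "Using tool...";

#[derive(Clone)]
struct AssistantMessage {
    text_segments: Vec<String>,
    has_tool_use: bool,
}

fn to_request_message(message: &AssistantMessage) -> AssistantMessage {
    let mut request = message.clone();
    let has_text = request.text_segments.iter().any(|s| !s.trim().is_empty());
    // The Anthropic API rejects a tool-use message whose text segments are all
    // empty, so inject a placeholder only while building the request.
    if request.has_tool_use && !has_text {
        request.text_segments = vec![TOOL_USE_PLACEHOLDER.to_string()];
    }
    request
}

fn strip_placeholder(model_text: &str) -> String {
    // The model starts mimicking the placeholder in later turns, so filter it
    // back out of responses before storing them in the thread.
    model_text.replace(TOOL_USE_PLACEHOLDER, "").trim().to_string()
}

fn main() {
    let stored = AssistantMessage { text_segments: vec![String::new()], has_tool_use: true };
    let sent = to_request_message(&stored);
    assert_eq!(sent.text_segments, vec![TOOL_USE_PLACEHOLDER.to_string()]);
    assert_eq!(strip_placeholder("Using tool...\nDone."), "Done.");
}
```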

Release Notes:

- agent: Improved rendering of tool uses without text

---------

Co-authored-by: Bennet <bennet@zed.dev>
2025-04-03 14:20:56 -04:00
Danilo Leal
bf6e7cb6ee agent: Add button to continue iterating once all reviews are done (#28027)
This PR adds a button on the review tab empty state that toggles the
focus back to the agent panel so that users can keep iterating on the
thread that's active in the panel.

<img
src="https://github.com/user-attachments/assets/ace5cf93-8869-49bb-8106-e03a9e3c90f2"
width="700"/>

Release Notes:

- N/A
2025-04-03 14:20:37 -04:00
Joseph T. Lyons
36c4f6082c zed 0.181.1 2025-04-03 09:50:23 -04:00
Bennet Bo Fenner
3d032bcf2c agent: Fix thinking step showing up as pending when completion is cancelled (#28019)
Previously the "Thinking..." step would show up as pending, even though
the user cancelled the generation:
<img width="672" alt="image"
src="https://github.com/user-attachments/assets/c9cdce0a-d827-4e23-96f5-b150465911a7"
/>


Release Notes:

- Fixed an issue where the thinking step would show up as pending even
when the generation was cancelled
2025-04-03 09:49:24 -04:00
Agus Zubiaga
3e28fa2cc4 agent: Include active file in recent history (#27914)
This happened for two reasons:

- `Workspace::recent_navigation_history` didn't include the current file
- The context picker added the current file to an exclude list

The latter was actually intentional because we already show the file in the suggested context, but now that we have mentions, it's just inconvenient not to have it there.

Release Notes:

- N/A
2025-04-03 09:49:18 -04:00
Julia Ryan
c42442974d workspace-hack: remove openssl from remote_server (#27990)
This was accidentally getting added due to increased feature unification. We've manually excluded reqwest to go back to the desired behavior: remote_server doesn't depend on openssl.

Release Notes:

- N/A

Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
2025-04-03 08:54:26 -04:00
Julia Ryan
f23b972203 Add workspace-hack (#27277)
This adds a "workspace-hack" crate, see
[mozilla's](https://hg.mozilla.org/mozilla-central/file/3a265fdc9f33e5946f0ca0a04af73acd7e6d1a39/build/workspace-hack/Cargo.toml#l7)
for a concise explanation of why this is useful. For us in practice this
means that if I were to run all the tests (`cargo nextest r
--workspace`) and then `cargo r`, all the deps from the previous cargo
command will be reused. Before this PR it would rebuild many deps due to
resolving different sets of features for them. For me this frequently
caused long rebuilds when things "should" already be cached.

To avoid manually maintaining our workspace-hack crate, we will use
[cargo hakari](https://docs.rs/cargo-hakari) to update the build files
when there's a necessary change. I've added a step to CI that checks
whether the workspace-hack crate is up to date, and instructs you to
re-run `script/update-workspace-hack` when it fails.

Finally, to make sure that people can still depend on crates in our
workspace without pulling in all the workspace deps, we use a `[patch]`
section following [hakari's
instructions](https://docs.rs/cargo-hakari/0.9.36/cargo_hakari/patch_directive/index.html)

One possible followup task would be making guppy use our
`rust-toolchain.toml` instead of having to duplicate that list in its
config, I opened an issue for that upstream: guppy-rs/guppy#481.

TODO:
- [x] Fix the extension test failure
- [x] Ensure the dev dependencies aren't being unified by Hakari into
the main dependencies
- [x] Ensure that the remote-server binary continues to not depend on
LibSSL

Release Notes:

- N/A

---------

Co-authored-by: Mikayla <mikayla@zed.dev>
Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
2025-04-03 08:54:26 -04:00
5brian
07d9cd7e88 agent: Update thread label to use plural form (#27971)
Update thread label to match the other contexts.

|Before|After|
|--|--|
|![image](https://github.com/user-attachments/assets/6e02808e-50d7-480f-a9ca-251e9519a71d)|![image](https://github.com/user-attachments/assets/174aad84-9e55-4531-bb4a-1a1adaa46418)|

Release Notes:

- N/A
2025-04-03 08:54:01 -04:00
Marshall Bowers
a48238701d agent: Allow editing previous messages (#27965)
This PR adds the ability to edit previous user messages in the thread.

Release Notes:

- Agent: Added the ability to edit previous user messages
(Preview-only).
2025-04-03 08:54:01 -04:00
Danilo Leal
d17d747c62 agent: Change loading label if command is waiting on permission (#27955)
If there's a command pending confirmation, the label changes from
"Generating" to "Waiting for confirmation".

<img
src="https://github.com/user-attachments/assets/d804e382-5315-40b0-9588-c257cca2430c"
width="600"/>

Release Notes:

- N/A
2025-04-03 08:54:01 -04:00
Danilo Leal
bc08df2dfd agent: Refine feedback message input (#27948)
<img
src="https://github.com/user-attachments/assets/cde37a88-9973-4c27-80b7-459f5e986c74"
width="650" />

Release Notes:

- N/A

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-04-03 08:54:01 -04:00
Shardul Vaidya
0f4b734d91 aws_http_client: Copy response headers (#27941)
Preemptive fixes required for #26734

Release Notes:

- N/A

---------

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-04-03 08:53:37 -04:00
Marshall Bowers
6df29d3279 agent: Do some cleanup of feedback comments submission (#27940)
This PR does some stylistic cleanup of the feedback comments submission
code.

Release Notes:

- N/A
2025-04-03 08:53:29 -04:00
Michael Sloan
647cca8c8d Use worktree qualified paths in agent file context + some code cleanup (#27943)
Release Notes:

- N/A
2025-04-03 08:53:19 -04:00
Joseph T. Lyons
1f81674927 v0.181.x preview 2025-04-02 13:44:55 -04:00
367 changed files with 11181 additions and 17742 deletions

View File

@@ -1,36 +0,0 @@
name: Bug Report (Agent Panel)
description: Zed Agent Panel Bugs
type: "Bug"
labels: ["agent", "ai"]
title: "Agent Panel: <a short description of the Agent Panel bug>"
body:
- type: textarea
attributes:
label: Summary
description: Describe the bug with a one line summary, and provide detailed reproduction steps
value: |
<!-- Please insert a one line summary of the issue below -->
SUMMARY_SENTENCE_HERE
### Description
<!-- Describe with sufficient detail to reproduce from a clean Zed install. -->
<!-- Please include the LLM provider and model name you are using -->
Steps to trigger the problem:
1.
2.
3.
Actual Behavior:
Expected Behavior:
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed Version and System Specs
description: 'Open Zed, and in the command palette select "zed: Copy System Specs Into Clipboard"'
placeholder: |
Output of "zed: Copy System Specs Into Clipboard"
validations:
required: true

View File

@@ -1,36 +0,0 @@
name: Bug Report (Edit Predictions)
description: Zed Edit Predictions bugs
type: "Bug"
labels: ["ai", "inline completion", "zeta"]
title: "Edit Predictions: <a short description of the Edit Prediction bug>"
body:
- type: textarea
attributes:
label: Summary
description: Describe the bug with a one line summary, and provide detailed reproduction steps
value: |
<!-- Please insert a one line summary of the issue below -->
SUMMARY_SENTENCE_HERE
### Description
<!-- Describe with sufficient detail to reproduce from a clean Zed install. -->
<!-- Please include the LLM provider and model name you are using -->
Steps to trigger the problem:
1.
2.
3.
Actual Behavior:
Expected Behavior:
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed Version and System Specs
description: 'Open Zed, and in the command palette select "zed: Copy System Specs Into Clipboard"'
placeholder: |
Output of "zed: Copy System Specs Into Clipboard"
validations:
required: true

View File

@@ -1,35 +0,0 @@
name: Bug Report (Git)
description: Zed Git-Related Bugs
type: "Bug"
labels: ["git"]
title: "Git: <a short description of the Git bug>"
body:
- type: textarea
attributes:
label: Summary
description: Describe the bug with a one line summary, and provide detailed reproduction steps
value: |
<!-- Please insert a one line summary of the issue below -->
SUMMARY_SENTENCE_HERE
### Description
<!-- Describe with sufficient detail to reproduce from a clean Zed install. -->
Steps to trigger the problem:
1.
2.
3.
Actual Behavior:
Expected Behavior:
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed Version and System Specs
description: 'Open Zed, and in the command palette select "zed: Copy System Specs Into Clipboard"'
placeholder: |
Output of "zed: Copy System Specs Into Clipboard"
validations:
required: true

View File

@@ -1,56 +0,0 @@
name: Bug Report (Other)
description: |
Something else is broken in Zed (exclude crashing).
type: "Bug"
body:
- type: textarea
attributes:
label: Summary
description: Provide a one sentence summary and detailed reproduction steps
value: |
<!-- Begin your issue with a one sentence summary -->
SUMMARY_SENTENCE_HERE
### Description
<!-- Describe with sufficient detail to reproduce from a clean Zed install.
- Any code must be sufficient to reproduce (include context!)
- Code must as text, not just as a screenshot.
- Issues with insufficient detail may be summarily closed.
-->
Steps to reproduce:
1.
2.
3.
4.
Expected Behavior:
Actual Behavior:
<!-- Before Submitting, did you:
1. Include settings.json, keymap.json, .editorconfig if relevant?
2. Check your Zed.log for relevant errors? (please include!)
3. Click Preview to ensure everything looks right?
4. Hide videos, large images and logs in ``` inside collapsible blocks:
<details><summary>click to expand</summary>
```json
```
</details>
-->
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed Version and System Specs
description: |
Open Zed, from the command palette select "zed: Copy System Specs Into Clipboard"
placeholder: |
Output of "zed: Copy System Specs Into Clipboard"
validations:
required: true

.github/ISSUE_TEMPLATE/1_bug_report.yml (vendored, new file, 57 lines)
View File

@@ -0,0 +1,57 @@
name: Bug Report
description: |
Something is broken in Zed (exclude crashing).
type: "Bug"
body:
- type: textarea
attributes:
label: Summary
description: Describe the bug with a one line summary, and provide detailed reproduction steps
value: |
<!-- Please insert a one line summary of the issue below -->
SUMMARY_SENTENCE_HERE
<!-- Be verbose: Include all steps necessary to reproduce from a clean Zed installation. -->
<!-- Code snippets are better than images, a repository link that reproduces the issue is ideal. -->
Steps to trigger the problem:
1.
2.
3.
4.
Actual Behavior:
Expected Behavior:
<!--
Is there anything additional necessary to reproduce this issue?
- settings.json, keymap.json, .editorconfig etc?
- Does it happen intermittently or only with specific projects / file types?
- Have you found a workaround?
Did you check your Zed.log to see if there is any relevant details there?
- When including large items (videos, screenshots, logs, configs) please wrap with:
<details><summary>See inside for XXXXYYY</summary>
```shell
code
```
</details>
-->
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed Version and System Specs
description: 'Open Zed, and in the command palette select "zed: Copy System Specs Into Clipboard"'
placeholder: |
Output of "zed: Copy System Specs Into Clipboard"
validations:
required: true

View File

@@ -5,12 +5,10 @@ body:
- type: textarea
attributes:
label: Summary
description: Summarize the issue with detailed reproduction steps
description: Describe the bug with a one line summary, and provide detailed reproduction steps
value: |
<!-- Begin your issue with a one sentence summary -->
SUMMARY_SENTENCE_HERE
<!-- Please insert a one line summary of the issue below -->
### Description
<!-- Include all steps necessary to reproduce from a clean Zed installation. Be verbose -->
Steps to trigger the problem:
1.
@@ -18,6 +16,7 @@ body:
3.
Actual Behavior:
Expected Behavior:
validations:
@@ -41,11 +40,10 @@ body:
value: |
<details><summary>Zed.log</summary>
<!-- Paste your log inside the code block. -->
```log
<!-- Click below this line and paste or drag-and-drop your log-->
```
</details>
```
<!-- Click above this line and paste or drag-and-drop your log--></details>
validations:
required: false

View File

@@ -4,6 +4,9 @@ contact_links:
- name: Feature Request
url: https://github.com/zed-industries/zed/discussions/new/choose
about: To request a feature, open a new Discussion in one of the appropriate Discussion categories
- name: "Zed Discord"
- name: Zed Discussion Forum
url: https://github.com/zed-industries/zed/discussions
about: A community discussion forum
- name: "Zed Discord: #Support Channel"
url: https://zed.dev/community-links
about: Real-time discussion and user support

View File

@@ -23,4 +23,4 @@ runs:
- name: Run tests
shell: pwsh
working-directory: ${{ inputs.working-directory }}
run: cargo nextest run --workspace --no-fail-fast --config='profile.dev.debug="limited"'
run: cargo nextest run --workspace --no-fail-fast

View File

@@ -114,9 +114,7 @@ jobs:
timeout-minutes: 60
name: Check workspace-hack crate
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
if: github.repository_owner == 'zed-industries'
runs-on:
- buildjet-8vcpu-ubuntu-2204
steps:
@@ -133,13 +131,13 @@ jobs:
- name: Check workspace-hack Cargo.toml is up-to-date
run: |
cargo hakari generate --diff || {
echo "To fix, run script/update-workspace-hack or script/update-workspace-hack.ps1";
echo "To fix, run script/update-workspace-hack";
false
}
- name: Check all crates depend on workspace-hack
run: |
cargo hakari manage-deps --dry-run || {
echo "To fix, run script/update-workspace-hack or script/update-workspace-hack.ps1"
echo "To fix, run script/update-workspace-hack"
false
}
@@ -708,51 +706,6 @@ jobs:
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
nix-build:
timeout-minutes: 60
name: Nix Build
continue-on-error: true
if: github.repository_owner == 'zed-industries' && contains(github.event.pull_request.labels.*.name, 'run-nix')
strategy:
fail-fast: false
matrix:
system:
- os: x86 Linux
runner: buildjet-16vcpu-ubuntu-2204
install_nix: true
- os: arm Mac
runner: [macOS, ARM64, test]
install_nix: false
runs-on: ${{ matrix.system.runner }}
env:
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
GIT_LFS_SKIP_SMUDGE: 1 # breaks the livekit rust sdk examples which we don't actually depend on
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Set path
if: ${{ ! matrix.system.install_nix }}
run: |
echo "/nix/var/nix/profiles/default/bin" >> $GITHUB_PATH
echo "/Users/administrator/.nix-profile/bin" >> $GITHUB_PATH
- uses: cachix/install-nix-action@02a151ada4993995686f9ed4f1be7cfbb229e56f # v31
if: ${{ matrix.system.install_nix }}
with:
github_access_token: ${{ secrets.GITHUB_TOKEN }}
- uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad # v16
with:
name: zed-industries
authToken: "${{ secrets.CACHIX_AUTH_TOKEN }}"
skipPush: true
- run: nix build .#debug
- name: Limit /nix/store to 50GB
run: "[ $(du -sm /nix/store | cut -f1) -gt 50000 ] && nix-collect-garbage -d"
auto-release-preview:
name: Auto release preview
if: |

View File

@@ -216,8 +216,7 @@ jobs:
name: zed-industries
authToken: "${{ secrets.CACHIX_AUTH_TOKEN }}"
- run: nix build
- name: Limit /nix/store to 50GB
run: '[ $(du -sm /nix/store | cut -f1) -gt 50000 ] && nix-collect-garbage -d'
- run: nix-collect-garbage -d
update-nightly-tag:
name: Update nightly tag

View File

@@ -1,14 +1,14 @@
[
{
"label": "Debug Zed with LLDB",
"adapter": "LLDB",
"adapter": "lldb",
"program": "$ZED_WORKTREE_ROOT/target/debug/zed",
"request": "launch",
"cwd": "$ZED_WORKTREE_ROOT"
},
{
"label": "Debug Zed with GDB",
"adapter": "GDB",
"adapter": "gdb",
"program": "$ZED_WORKTREE_ROOT/target/debug/zed",
"request": "launch",
"cwd": "$ZED_WORKTREE_ROOT",

Cargo.lock (generated, 430 lines)
View File

@@ -119,48 +119,12 @@ dependencies = [
"ui_input",
"util",
"uuid",
"vim_mode_setting",
"workspace",
"workspace-hack",
"zed_actions",
]
[[package]]
name = "agent_eval"
version = "0.1.0"
dependencies = [
"agent",
"anyhow",
"assistant_tool",
"assistant_tools",
"clap",
"client",
"collections",
"context_server",
"dap",
"env_logger 0.11.8",
"fs",
"futures 0.3.31",
"gpui",
"gpui_tokio",
"language",
"language_model",
"language_models",
"node_runtime",
"project",
"prompt_store",
"release_channel",
"reqwest_client",
"serde",
"serde_json",
"serde_json_lenient",
"settings",
"smol",
"tempfile",
"util",
"walkdir",
"workspace-hack",
]
[[package]]
name = "ahash"
version = "0.7.8"
@@ -616,6 +580,43 @@ dependencies = [
"zed_actions",
]
[[package]]
name = "assistant_eval"
version = "0.1.0"
dependencies = [
"agent",
"anyhow",
"assistant_tool",
"assistant_tools",
"clap",
"client",
"collections",
"context_server",
"dap",
"env_logger 0.11.8",
"fs",
"futures 0.3.31",
"gpui",
"gpui_tokio",
"language",
"language_model",
"language_models",
"node_runtime",
"project",
"prompt_store",
"release_channel",
"reqwest_client",
"serde",
"serde_json",
"serde_json_lenient",
"settings",
"smol",
"tempfile",
"util",
"walkdir",
"workspace-hack",
]
[[package]]
name = "assistant_settings"
version = "0.1.0"
@@ -746,6 +747,7 @@ dependencies = [
"itertools 0.14.0",
"language",
"language_model",
"lsp",
"open",
"project",
"rand 0.8.5",
@@ -912,6 +914,18 @@ dependencies = [
"pin-project-lite",
]
[[package]]
name = "async-native-tls"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9343dc5acf07e79ff82d0c37899f079db3534d99f189a1837c8e549c99405bec"
dependencies = [
"futures-util",
"native-tls",
"thiserror 1.0.69",
"url",
]
[[package]]
name = "async-net"
version = "2.0.0"
@@ -1081,6 +1095,18 @@ version = "4.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b75356056920673b02621b35afd0f7dda9306d03c79a30f5c56c44cf256e3de"
[[package]]
name = "async-tls"
version = "0.13.0"
source = "git+https://github.com/zed-industries/async-tls?rev=1e759a4b5e370f87dc15e40756ac4f8815b61d9d#1e759a4b5e370f87dc15e40756ac4f8815b61d9d"
dependencies = [
"futures-core",
"futures-io",
"rustls 0.23.25",
"rustls-pemfile 2.2.0",
"webpki-roots",
]
[[package]]
name = "async-trait"
version = "0.1.88"
@@ -1094,10 +1120,12 @@ dependencies = [
[[package]]
name = "async-tungstenite"
version = "0.29.1"
version = "0.28.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ef0f7efedeac57d9b26170f72965ecfd31473ca52ca7a64e925b0b6f5f079886"
checksum = "1c348fb0b6d132c596eca3dcd941df48fb597aafcb07a738ec41c004b087dc99"
dependencies = [
"async-std",
"async-tls",
"atomic-waker",
"futures-core",
"futures-io",
@@ -1105,10 +1133,7 @@ dependencies = [
"futures-util",
"log",
"pin-project-lite",
"rustls-pki-types",
"tokio",
"tokio-rustls 0.26.2",
"tungstenite 0.26.2",
"tungstenite 0.24.0",
]
[[package]]
@@ -2814,6 +2839,7 @@ name = "client"
version = "0.1.0"
dependencies = [
"anyhow",
"async-native-tls",
"async-recursion 0.3.2",
"async-tungstenite",
"chrono",
@@ -2824,7 +2850,6 @@ dependencies = [
"feature_flags",
"futures 0.3.31",
"gpui",
"gpui_tokio",
"http_client",
"http_client_tls",
"log",
@@ -2846,7 +2871,6 @@ dependencies = [
"thiserror 2.0.12",
"time",
"tiny_http",
"tokio",
"tokio-socks",
"url",
"util",
@@ -3329,15 +3353,6 @@ version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6245d59a3e82a7fc217c5828a6692dbc6dfb63a0c8c90495621f7b9d79704a0e"
[[package]]
name = "convert_case"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ec182b0ca2f35d8fc196cf3404988fd8b8c739a4d270ff118a398feb0cbec1ca"
dependencies = [
"unicode-segmentation",
]
[[package]]
name = "convert_case"
version = "0.8.0"
@@ -3441,19 +3456,6 @@ dependencies = [
"libc",
]
[[package]]
name = "core-graphics-helmer-fork"
version = "0.24.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32eb7c354ae9f6d437a6039099ce7ecd049337a8109b23d73e48e8ffba8e9cd5"
dependencies = [
"bitflags 2.9.0",
"core-foundation 0.9.4",
"core-graphics-types 0.1.3",
"foreign-types 0.5.0",
"libc",
]
[[package]]
name = "core-graphics-types"
version = "0.1.3"
@@ -4036,7 +4038,7 @@ dependencies = [
[[package]]
name = "dap-types"
version = "0.0.1"
source = "git+https://github.com/zed-industries/dap-types?rev=be69a016ba710191b9fdded28c8b042af4b617f7#be69a016ba710191b9fdded28c8b042af4b617f7"
source = "git+https://github.com/zed-industries/dap-types?rev=bfd4af0#bfd4af084bbaa5f344e6925370d7642e41d0b5b8"
dependencies = [
"schemars",
"serde",
@@ -4429,12 +4431,6 @@ dependencies = [
"windows-sys 0.48.0",
]
[[package]]
name = "dispatch"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bd0c93bb4b0c6d9b77f4435b0ae98c24d17f1c45b2ff844c6151a07256ca923b"
[[package]]
name = "displaydoc"
version = "0.2.5"
@@ -4470,32 +4466,6 @@ dependencies = [
"workspace-hack",
]
[[package]]
name = "documented"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bc6db32f0995bc4553d2de888999075acd0dbeef75ba923503f6a724263dc6f3"
dependencies = [
"documented-macros",
"phf",
"thiserror 1.0.69",
]
[[package]]
name = "documented-macros"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a394bb35929b58f9a5fd418f7c6b17a4b616efcc1e53e6995ca123948f87e5fa"
dependencies = [
"convert_case 0.6.0",
"itertools 0.13.0",
"optfield",
"proc-macro2",
"quote",
"strum",
"syn 2.0.100",
]
[[package]]
name = "dotenvy"
version = "0.15.7"
@@ -6131,7 +6101,6 @@ dependencies = [
"refineable",
"reqwest_client",
"resvg",
"scap",
"schemars",
"seahash",
"semantic_version",
@@ -7464,7 +7433,8 @@ dependencies = [
[[package]]
name = "jupyter-protocol"
version = "0.6.0"
source = "git+https://github.com/ConradIrwin/runtimed?rev=7130c804216b6914355d15d0b91ea91f6babd734#7130c804216b6914355d15d0b91ea91f6babd734"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c9ae6296f9476658b3550293c113996daf75fa542cd8d078abb4c60207bded14"
dependencies = [
"anyhow",
"async-trait",
@@ -7479,7 +7449,8 @@ dependencies = [
[[package]]
name = "jupyter-websocket-client"
version = "0.9.0"
source = "git+https://github.com/ConradIrwin/runtimed?rev=7130c804216b6914355d15d0b91ea91f6babd734#7130c804216b6914355d15d0b91ea91f6babd734"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "49c1ba895c5271ff8dcae51c347fd3588905ba0025a57e20955fd231fe1228cc"
dependencies = [
"anyhow",
"async-trait",
@@ -8155,7 +8126,6 @@ dependencies = [
"objc",
"parking_lot",
"postage",
"scap",
"serde",
"serde_json",
"sha2",
@@ -8372,7 +8342,6 @@ dependencies = [
"node_runtime",
"pulldown-cmark 0.12.2",
"settings",
"sum_tree",
"theme",
"ui",
"util",
@@ -8693,6 +8662,7 @@ dependencies = [
"collections",
"ctor",
"env_logger 0.11.8",
"futures 0.3.31",
"gpui",
"indoc",
"itertools 0.14.0",
@@ -8786,7 +8756,8 @@ dependencies = [
[[package]]
name = "nbformat"
version = "0.10.0"
source = "git+https://github.com/ConradIrwin/runtimed?rev=7130c804216b6914355d15d0b91ea91f6babd734#7130c804216b6914355d15d0b91ea91f6babd734"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "244c1673f02b4d5f3c51b6f8ed28d57182cb166a50a6dbf651a3d53e23dc81c0"
dependencies = [
"anyhow",
"chrono",
@@ -9167,18 +9138,6 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "915b1b472bc21c53464d6c8461c9d3af805ba1ef837e1cac254428f4a77177b1"
dependencies = [
"malloc_buf",
"objc_exception",
]
[[package]]
name = "objc-foundation"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1add1b659e36c9607c7aab864a76c7a4c2760cd0cd2e120f3fb8b952c7e22bf9"
dependencies = [
"block",
"objc",
"objc_id",
]
[[package]]
@@ -9383,24 +9342,6 @@ dependencies = [
"objc2-foundation",
]
[[package]]
name = "objc_exception"
version = "0.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ad970fb455818ad6cba4c122ad012fae53ae8b4795f86378bce65e4f6bab2ca4"
dependencies = [
"cc",
]
[[package]]
name = "objc_id"
version = "0.1.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c92d4ddb4bd7b50d730c215ff871754d0da6b2178849f8a2a2ab69712d0c073b"
dependencies = [
"objc",
]
[[package]]
name = "object"
version = "0.36.7"
@@ -9565,6 +9506,15 @@ version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d05e27ee213611ffe7d6348b942e8f942b37114c00cc03cec254295a4a17852e"
[[package]]
name = "openssl-src"
version = "300.4.2+3.4.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "168ce4e058f975fe43e89d9ccf78ca668601887ae736090aacc23ae353c298e2"
dependencies = [
"cc",
]
[[package]]
name = "openssl-sys"
version = "0.9.107"
@@ -9573,21 +9523,11 @@ checksum = "8288979acd84749c744a9014b4382d42b8f7b2592847b5afb2ed29e5d16ede07"
dependencies = [
"cc",
"libc",
"openssl-src",
"pkg-config",
"vcpkg",
]
[[package]]
name = "optfield"
version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fa59f025cde9c698fcb4fcb3533db4621795374065bee908215263488f2d2a1d"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.100",
]
[[package]]
name = "option-ext"
version = "0.2.0"
@@ -10007,7 +9947,7 @@ dependencies = [
[[package]]
name = "pet"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"clap",
"env_logger 0.10.2",
@@ -10044,7 +9984,7 @@ dependencies = [
[[package]]
name = "pet-conda"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"env_logger 0.10.2",
"lazy_static",
@@ -10063,7 +10003,7 @@ dependencies = [
[[package]]
name = "pet-core"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"clap",
"lazy_static",
@@ -10078,7 +10018,7 @@ dependencies = [
[[package]]
name = "pet-env-var-path"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"lazy_static",
"log",
@@ -10094,7 +10034,7 @@ dependencies = [
[[package]]
name = "pet-fs"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -10103,7 +10043,7 @@ dependencies = [
[[package]]
name = "pet-global-virtualenvs"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -10116,7 +10056,7 @@ dependencies = [
[[package]]
name = "pet-homebrew"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"lazy_static",
"log",
@@ -10134,7 +10074,7 @@ dependencies = [
[[package]]
name = "pet-jsonrpc"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"env_logger 0.10.2",
"log",
@@ -10147,7 +10087,7 @@ dependencies = [
[[package]]
name = "pet-linux-global-python"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -10160,7 +10100,7 @@ dependencies = [
[[package]]
name = "pet-mac-commandlinetools"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -10173,7 +10113,7 @@ dependencies = [
[[package]]
name = "pet-mac-python-org"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -10186,7 +10126,7 @@ dependencies = [
[[package]]
name = "pet-mac-xcode"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -10199,7 +10139,7 @@ dependencies = [
[[package]]
name = "pet-pipenv"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -10212,7 +10152,7 @@ dependencies = [
[[package]]
name = "pet-pixi"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -10224,7 +10164,7 @@ dependencies = [
[[package]]
name = "pet-poetry"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"base64 0.22.1",
"lazy_static",
@@ -10245,7 +10185,7 @@ dependencies = [
[[package]]
name = "pet-pyenv"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"lazy_static",
"log",
@@ -10263,7 +10203,7 @@ dependencies = [
[[package]]
name = "pet-python-utils"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"env_logger 0.10.2",
"lazy_static",
@@ -10280,7 +10220,7 @@ dependencies = [
[[package]]
name = "pet-reporter"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"env_logger 0.10.2",
"log",
@@ -10294,7 +10234,7 @@ dependencies = [
[[package]]
name = "pet-telemetry"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"env_logger 0.10.2",
"lazy_static",
@@ -10309,7 +10249,7 @@ dependencies = [
[[package]]
name = "pet-venv"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -10321,7 +10261,7 @@ dependencies = [
[[package]]
name = "pet-virtualenv"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -10333,7 +10273,7 @@ dependencies = [
[[package]]
name = "pet-virtualenvwrapper"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"log",
"msvc_spectre_libs",
@@ -10346,7 +10286,7 @@ dependencies = [
[[package]]
name = "pet-windows-registry"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"lazy_static",
"log",
@@ -10358,13 +10298,13 @@ dependencies = [
"pet-virtualenv",
"pet-windows-store",
"regex",
"winreg 0.55.0",
"winreg 0.52.0",
]
[[package]]
name = "pet-windows-store"
version = "0.1.0"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=845945b830297a50de0e24020b980a65e4820559#845945b830297a50de0e24020b980a65e4820559"
source = "git+https://github.com/microsoft/python-environment-tools.git?rev=1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0#1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0"
dependencies = [
"lazy_static",
"log",
@@ -10374,7 +10314,7 @@ dependencies = [
"pet-python-utils",
"pet-virtualenv",
"regex",
"winreg 0.55.0",
"winreg 0.52.0",
]
[[package]]
@@ -11208,15 +11148,6 @@ version = "2.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a993555f31e5a609f617c12db6250dedcac1b0a85076912c436e6fc9b2c8e6a3"
[[package]]
name = "quick-xml"
version = "0.30.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eff6510e86862b57b210fd8cbe8ed3f0d7d600b9c2863cd4549a2e033c66e956"
dependencies = [
"memchr",
]
[[package]]
name = "quick-xml"
version = "0.32.0"
@@ -12112,7 +12043,8 @@ dependencies = [
[[package]]
name = "runtimelib"
version = "0.25.0"
source = "git+https://github.com/ConradIrwin/runtimed?rev=7130c804216b6914355d15d0b91ea91f6babd734#7130c804216b6914355d15d0b91ea91f6babd734"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9af6ed9fd10d7ee940676945510c197c2a472806bb652096a713985c44ffd643"
dependencies = [
"anyhow",
"async-dispatcher",
@@ -12444,27 +12376,6 @@ dependencies = [
"winapi-util",
]
[[package]]
name = "scap"
version = "0.0.8"
source = "git+https://github.com/zed-industries/scap?rev=08f0a01417505cc0990b9931a37e5120db92e0d0#08f0a01417505cc0990b9931a37e5120db92e0d0"
dependencies = [
"anyhow",
"cocoa 0.25.0",
"core-graphics-helmer-fork",
"log",
"objc",
"rand 0.8.5",
"screencapturekit",
"screencapturekit-sys",
"sysinfo",
"tao-core-video-sys",
"windows 0.61.1",
"windows-capture",
"x11",
"xcb",
]
[[package]]
name = "schannel"
version = "0.1.27"
@@ -12531,29 +12442,6 @@ version = "1.0.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9f6280af86e5f559536da57a45ebc84948833b3bee313a7dd25232e09c878a52"
[[package]]
name = "screencapturekit"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1a5eeeb57ac94960cfe5ff4c402be6585ae4c8d29a2cf41b276048c2e849d64e"
dependencies = [
"screencapturekit-sys",
]
[[package]]
name = "screencapturekit-sys"
version = "0.2.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "22411b57f7d49e7fe08025198813ee6fd65e1ee5eff4ebc7880c12c82bde4c60"
dependencies = [
"block",
"dispatch",
"objc",
"objc-foundation",
"objc_id",
"once_cell",
]
[[package]]
name = "scrypt"
version = "0.11.0"
@@ -14116,18 +14004,6 @@ version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8bdb6fa0dfa67b38c1e66b7041ba9dcf23b99d8121907cd31c807a332f7a0bbb"
[[package]]
name = "tao-core-video-sys"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "271450eb289cb4d8d0720c6ce70c72c8c858c93dd61fc625881616752e6b98f6"
dependencies = [
"cfg-if",
"core-foundation-sys",
"libc",
"objc",
]
[[package]]
name = "tap"
version = "1.0.1"
@@ -14173,14 +14049,12 @@ name = "tasks_ui"
version = "0.1.0"
dependencies = [
"anyhow",
"collections",
"debugger_ui",
"editor",
"feature_flags",
"file_icons",
"fuzzy",
"gpui",
"itertools 0.14.0",
"language",
"menu",
"picker",
@@ -14670,9 +14544,9 @@ dependencies = [
[[package]]
name = "tokio"
version = "1.44.2"
version = "1.44.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6b88822cbe49de4185e3a4cbf8321dd487cf5fe0c5c65695fef6346371e9c48"
checksum = "f382da615b842244d4b8738c82ed1275e6c5dd90c459a30941cd07080b06c91a"
dependencies = [
"backtrace",
"bytes 1.10.1",
@@ -15341,6 +15215,24 @@ dependencies = [
"utf-8",
]
[[package]]
name = "tungstenite"
version = "0.24.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "18e5b8366ee7a95b16d32197d0b2604b43a0be89dc5fac9f8e96ccafbaedda8a"
dependencies = [
"byteorder",
"bytes 1.10.1",
"data-encoding",
"http 1.3.1",
"httparse",
"log",
"rand 0.8.5",
"sha1",
"thiserror 1.0.69",
"utf-8",
]
[[package]]
name = "tungstenite"
version = "0.26.2"
@@ -15395,7 +15287,6 @@ version = "0.1.0"
dependencies = [
"chrono",
"component",
"documented",
"gpui",
"icons",
"itertools 0.14.0",
@@ -16782,20 +16673,6 @@ dependencies = [
"windows-numerics",
]
[[package]]
name = "windows-capture"
version = "1.4.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6001b777f61cafce437201de46a019ed7f4afed3b669f02e5ce4e0759164cb3e"
dependencies = [
"clap",
"ctrlc",
"parking_lot",
"rayon",
"thiserror 1.0.69",
"windows 0.58.0",
]
[[package]]
name = "windows-collections"
version = "0.2.0"
@@ -17245,16 +17122,6 @@ dependencies = [
"windows-sys 0.48.0",
]
[[package]]
name = "winreg"
version = "0.55.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cb5a765337c50e9ec252c2069be9bf91c7df47afb103b642ba3a53bf8101be97"
dependencies = [
"cfg-if",
"windows-sys 0.59.0",
]
[[package]]
name = "winresource"
version = "0.1.20"
@@ -17661,7 +17528,6 @@ dependencies = [
"indexmap",
"inout",
"itertools 0.12.1",
"itertools 0.13.0",
"lazy_static",
"libc",
"libsqlite3-sys",
@@ -17706,6 +17572,7 @@ dependencies = [
"scopeguard",
"sea-orm",
"sea-query-binder",
"security-framework 2.11.1",
"security-framework 3.2.0",
"security-framework-sys",
"semver",
@@ -17736,7 +17603,6 @@ dependencies = [
"toml_edit",
"tracing",
"tracing-core",
"tungstenite 0.26.2",
"unicode-properties",
"url",
"uuid",
@@ -17810,16 +17676,6 @@ dependencies = [
"tap",
]
[[package]]
name = "x11"
version = "2.21.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "502da5464ccd04011667b11c435cb992822c2c0dbde1770c988480d312a0db2e"
dependencies = [
"libc",
"pkg-config",
]
[[package]]
name = "x11-clipboard"
version = "0.9.3"
@@ -17858,18 +17714,6 @@ dependencies = [
"libc",
]
[[package]]
name = "xcb"
version = "1.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f1e2f212bb1a92cd8caac8051b829a6582ede155ccb60b5d5908b81b100952be"
dependencies = [
"bitflags 1.3.2",
"libc",
"quick-xml 0.30.0",
"x11",
]
[[package]]
name = "xcursor"
version = "0.3.8"
@@ -18090,7 +17934,7 @@ dependencies = [
[[package]]
name = "zed"
version = "0.182.0"
version = "0.181.4"
dependencies = [
"activity_indicator",
"agent",


@@ -8,7 +8,7 @@ members = [
"crates/assets",
"crates/assistant",
"crates/assistant_context_editor",
"crates/agent_eval",
"crates/assistant_eval",
"crates/assistant_settings",
"crates/assistant_slash_command",
"crates/assistant_slash_commands",
@@ -215,7 +215,7 @@ askpass = { path = "crates/askpass" }
assets = { path = "crates/assets" }
assistant = { path = "crates/assistant" }
assistant_context_editor = { path = "crates/assistant_context_editor" }
assistant_eval = { path = "crates/agent_eval" }
assistant_eval = { path = "crates/assistant_eval" }
assistant_settings = { path = "crates/assistant_settings" }
assistant_slash_command = { path = "crates/assistant_slash_command" }
assistant_slash_commands = { path = "crates/assistant_slash_commands" }
@@ -396,7 +396,7 @@ async-pipe = { git = "https://github.com/zed-industries/async-pipe-rs", rev = "8
async-recursion = "1.0.0"
async-tar = "0.5.0"
async-trait = "0.1"
async-tungstenite = "0.29.1"
async-tungstenite = "0.28"
async-watch = "0.3.1"
async_zip = { version = "0.0.17", features = ["deflate", "deflate64"] }
aws-config = { version = "1.6.1", features = ["behavior-version-latest"] }
@@ -425,7 +425,7 @@ core-foundation = "0.10.0"
core-foundation-sys = "0.8.6"
ctor = "0.4.0"
dashmap = "6.0"
dap-types = { git = "https://github.com/zed-industries/dap-types", rev = "be69a016ba710191b9fdded28c8b042af4b617f7" }
dap-types = { git = "https://github.com/zed-industries/dap-types", rev = "bfd4af0" }
derive_more = "0.99.17"
dirs = "4.0"
ec4rs = "1.1"
@@ -453,8 +453,8 @@ indoc = "2"
inventory = "0.3.19"
itertools = "0.14.0"
jsonwebtoken = "9.3"
jupyter-protocol = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
jupyter-websocket-client = { git = "https://github.com/ConradIrwin/runtimed" ,rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
jupyter-protocol = { version = "0.6.0" }
jupyter-websocket-client = { version = "0.9.0" }
libc = "0.2"
libsqlite3-sys = { version = "0.30.1", features = ["bundled"] }
linkify = "0.10.0"
@@ -463,22 +463,21 @@ log = { version = "0.4.16", features = ["kv_unstable_serde", "serde"] }
markup5ever_rcdom = "0.3.0"
mlua = { version = "0.10", features = ["lua54", "vendored", "async", "send"] }
nanoid = "0.4"
nbformat = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
nbformat = { version = "0.10.0" }
nix = "0.29"
objc = "0.2"
open = "5.0.0"
num-format = "0.4.4"
ordered-float = "2.1.1"
palette = { version = "0.7.5", default-features = false, features = ["std"] }
parking_lot = "0.12.1"
pathdiff = "0.2"
pet = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
pet-fs = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
pet-pixi = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
pet-conda = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
pet-core = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
pet-poetry = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
pet-reporter = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
pet = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0" }
pet-fs = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0" }
pet-pixi = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0" }
pet-conda = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0" }
pet-core = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0" }
pet-poetry = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0" }
pet-reporter = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1abe5cec5ebfbe97ca71746a4cfc7fe89bddf8e0" }
postage = { version = "0.5", features = ["futures-traits"] }
pretty_assertions = { version = "1.3.0", features = ["unstable"] }
proc-macro2 = "1.0.93"
@@ -501,7 +500,7 @@ reqwest = { git = "https://github.com/zed-industries/reqwest.git", rev = "fd110f
"stream",
] }
rsa = "0.9.6"
runtimelib = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734", default-features = false, features = [
runtimelib = { version = "0.25.0", default-features = false, features = [
"async-dispatcher-runtime",
] }
rustc-demangle = "0.1.23"
@@ -509,7 +508,6 @@ rust-embed = { version = "8.4", features = ["include-exclude"] }
rustc-hash = "2.1.0"
rustls = { version = "0.23.22" }
rustls-platform-verifier = "0.5.0"
scap = { git = "https://github.com/zed-industries/scap", rev = "08f0a01417505cc0990b9931a37e5120db92e0d0", default-features = false }
schemars = { version = "0.8", features = ["impl_json_schema", "indexmap2"] }
semver = "1.0"
serde = { version = "1.0", features = ["derive", "rc"] }
@@ -549,7 +547,7 @@ time = { version = "0.3", features = [
tiny_http = "0.8"
toml = "0.8"
tokio = { version = "1" }
tokio-tungstenite = { version = "0.26", features = ["__rustls-tls"] }
tokio-tungstenite = { version = "0.26", features = ["__rustls-tls"]}
tower-http = "0.4.4"
tree-sitter = { version = "0.25.3", features = ["wasm"] }
tree-sitter-bash = "0.23"
@@ -661,6 +659,7 @@ features = [
# TODO livekit https://github.com/RustAudio/cpal/pull/891
[patch.crates-io]
cpal = { git = "https://github.com/zed-industries/cpal", rev = "fd8bc2fd39f1f5fdee5a0690656caff9a26d9d50" }
real-async-tls = { git = "https://github.com/zed-industries/async-tls", rev = "1e759a4b5e370f87dc15e40756ac4f8815b61d9d", package = "async-tls" }
notify = { git = "https://github.com/zed-industries/notify.git", rev = "bbb9ea5ae52b253e095737847e367c30653a2e96" }
notify-types = { git = "https://github.com/zed-industries/notify.git", rev = "bbb9ea5ae52b253e095737847e367c30653a2e96" }
@@ -669,6 +668,7 @@ workspace-hack = { path = "tooling/workspace-hack" }
[profile.dev]
split-debuginfo = "unpacked"
debug = "limited"
codegen-units = 16
[profile.dev.package]


@@ -1,6 +1,6 @@
# syntax = docker/dockerfile:1.2
FROM rust:1.86-bookworm as builder
FROM rust:1.81-bookworm as builder
WORKDIR app
COPY . .


@@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-arrow-down-right-icon lucide-arrow-down-right"><path d="m7 7 10 10"/><path d="M17 7v10H7"/></svg>


@@ -1 +1,3 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-arrow-up-right-icon lucide-arrow-up-right"><path d="M7 7h10v10"/><path d="M7 17 17 7"/></svg>
<svg width="8" height="8" viewBox="0 0 8 8" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M6 2H6.5C6.5 1.86739 6.44732 1.74021 6.35355 1.64645C6.25979 1.55268 6.13261 1.5 6 1.5V2ZM2 1.5C1.72386 1.5 1.5 1.72386 1.5 2C1.5 2.27614 1.72386 2.5 2 2.5L2 1.5ZM5.5 6C5.5 6.27614 5.72386 6.5 6 6.5C6.27614 6.5 6.5 6.27614 6.5 6H5.5ZM1.64645 5.64645C1.45118 5.84171 1.45118 6.15829 1.64645 6.35355C1.84171 6.54882 2.15829 6.54882 2.35355 6.35355L1.64645 5.64645ZM6 1.5H2L2 2.5H6V1.5ZM5.5 2V6H6.5V2H5.5ZM5.64645 1.64645L1.64645 5.64645L2.35355 6.35355L6.35355 2.35355L5.64645 1.64645Z" fill="white"/>
</svg>


@@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-bug-off-icon lucide-bug-off"><path d="M15 7.13V6a3 3 0 0 0-5.14-2.1L8 2"/><path d="M14.12 3.88 16 2"/><path d="M22 13h-4v-2a4 4 0 0 0-4-4h-1.3"/><path d="M20.97 5c0 2.1-1.6 3.8-3.5 4"/><path d="m2 2 20 20"/><path d="M7.7 7.7A4 4 0 0 0 6 11v3a6 6 0 0 0 11.13 3.13"/><path d="M12 20v-8"/><path d="M6 13H2"/><path d="M3 21c0-2.1 1.7-3.9 3.8-4"/></svg>


@@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-circle-off-icon lucide-circle-off"><path d="m2 2 20 20"/><path d="M8.35 2.69A10 10 0 0 1 21.3 15.65"/><path d="M19.08 19.08A10 10 0 1 1 4.92 4.92"/></svg>


@@ -1 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-power-icon lucide-power"><path d="M12 2v10"/><path d="M18.4 6.6a9 9 0 1 1-12.77.04"/></svg>


@@ -1,4 +0,0 @@
<svg width="14" height="14" viewBox="0 0 14 14" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M3.09666 3.02263C3.0567 3.00312 3.01178 2.9961 2.96778 3.0025C2.92377 3.00889 2.88271 3.02839 2.84995 3.05847C2.8172 3.08854 2.79426 3.12778 2.78413 3.17108C2.77401 3.21439 2.77716 3.25973 2.79319 3.30121L4.05638 6.69C4.13088 6.89005 4.13088 7.11022 4.05638 7.31027L2.79363 10.6991C2.77769 10.7405 2.77457 10.7858 2.78469 10.829C2.79481 10.8722 2.8177 10.9114 2.85038 10.9414C2.88306 10.9715 2.92402 10.991 2.96794 10.9975C3.01186 11.0039 3.05671 10.997 3.09666 10.9776L11.0943 7.20097C11.1324 7.18297 11.1645 7.15455 11.187 7.11899C11.2096 7.08344 11.2215 7.04222 11.2215 7.00014C11.2215 6.95805 11.2096 6.91683 11.187 6.88128C11.1645 6.84573 11.1324 6.8173 11.0943 6.79931L3.09666 3.02263Z" stroke="black" stroke-width="1.33333" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M4.11255 7.00014H11.2216" stroke="black" stroke-width="1.33333" stroke-linecap="round" stroke-linejoin="round"/>
</svg>


@@ -1,3 +0,0 @@
<svg width="14" height="14" viewBox="0 0 14 14" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M4 9.8V4.2C4 4.08954 4.08954 4 4.2 4H9.8C9.91046 4 10 4.08954 10 4.2V9.8C10 9.91046 9.91046 10 9.8 10H4.2C4.08954 10 4 9.91046 4 9.8Z" fill="#C56757" stroke="#C56757" stroke-width="1.25" stroke-linejoin="round"/>
</svg>


@@ -311,7 +311,6 @@
"ctrl-k t": ["pane::CloseItemsToTheRight", { "close_pinned": false }],
"ctrl-k u": ["pane::CloseCleanItems", { "close_pinned": false }],
"ctrl-k w": ["pane::CloseAllItems", { "close_pinned": false }],
"ctrl-k ctrl-w": "workspace::CloseAllItemsAndPanes",
"back": "pane::GoBack",
"ctrl-alt--": "pane::GoBack",
"ctrl-alt-_": "pane::GoForward",
@@ -482,8 +481,6 @@
"alt-shift-r": ["task::Spawn", { "reveal_target": "center" }]
// also possible to spawn tasks by name:
// "foo-bar": ["task::Spawn", { "task_name": "MyTask", "reveal_target": "dock" }]
// or by tag:
// "foo-bar": ["task::Spawn", { "task_tag": "MyTag" }],
}
},
{
@@ -532,7 +529,6 @@
"context": "Editor && showing_completions",
"bindings": {
"enter": "editor::ConfirmCompletion",
"shift-enter": "editor::ConfirmCompletionReplace",
"tab": "editor::ComposeCompletion"
}
},
@@ -625,6 +621,7 @@
"context": "AgentPanel",
"bindings": {
"ctrl-n": "agent::NewThread",
"new": "agent::NewThread",
"ctrl-alt-n": "agent::NewPromptEditor",
"ctrl-shift-h": "agent::OpenHistory",
"ctrl-alt-c": "agent::OpenConfiguration",
@@ -635,13 +632,6 @@
"ctrl-alt-e": "agent::RemoveAllContext"
}
},
{
"context": "AgentPanel > Markdown",
"bindings": {
"copy": "markdown::CopyAsMarkdown",
"ctrl-c": "markdown::CopyAsMarkdown"
}
},
{
"context": "AgentPanel && prompt_editor",
"use_key_equivalents": true,


@@ -291,13 +291,6 @@
"cmd-alt-e": "agent::RemoveAllContext"
}
},
{
"context": "AgentPanel > Markdown",
"use_key_equivalents": true,
"bindings": {
"cmd-c": "markdown::CopyAsMarkdown"
}
},
{
"context": "AgentPanel && prompt_editor",
"use_key_equivalents": true,
@@ -345,24 +338,6 @@
"enter": "agent::AcceptSuggestedContext"
}
},
{
"context": "AgentConfiguration",
"bindings": {
"ctrl--": "pane::GoBack"
}
},
{
"context": "ThreadHistory",
"bindings": {
"ctrl--": "pane::GoBack"
}
},
{
"context": "ThreadHistory",
"bindings": {
"ctrl--": "pane::GoBack"
}
},
{
"context": "ThreadHistory > Editor",
"bindings": {
@@ -455,8 +430,7 @@
"cmd-k e": ["pane::CloseItemsToTheLeft", { "close_pinned": false }],
"cmd-k t": ["pane::CloseItemsToTheRight", { "close_pinned": false }],
"cmd-k u": ["pane::CloseCleanItems", { "close_pinned": false }],
"cmd-k w": ["pane::CloseAllItems", { "close_pinned": false }],
"cmd-k cmd-w": "workspace::CloseAllItemsAndPanes",
"cmd-k cmd-w": ["pane::CloseAllItems", { "close_pinned": false }],
"cmd-f": "project_search::ToggleFocus",
"cmd-g": "search::SelectNextMatch",
"cmd-shift-g": "search::SelectPreviousMatch",
@@ -525,7 +499,7 @@
"cmd-k cmd-9": ["editor::FoldAtLevel", 9],
"cmd-k cmd-0": "editor::FoldAll",
"cmd-k cmd-j": "editor::UnfoldAll",
// Using `ctrl-space` / `ctrl-shift-space` in Zed requires disabling the macOS global shortcut.
// Using `ctrl-space` in Zed requires disabling the macOS global shortcut.
// System Preferences->Keyboard->Keyboard Shortcuts->Input Sources->Select the previous input source (uncheck)
"ctrl-space": "editor::ShowCompletions",
"ctrl-shift-space": "editor::ShowWordCompletions",
@@ -633,8 +607,6 @@
"ctrl-alt-shift-r": ["task::Spawn", { "reveal_target": "center" }]
// also possible to spawn tasks by name:
// "foo-bar": ["task::Spawn", { "task_name": "MyTask", "reveal_target": "dock" }]
// or by tag:
// "foo-bar": ["task::Spawn", { "task_tag": "MyTag" }],
}
},
// Bindings from Sublime Text
@@ -681,7 +653,6 @@
"use_key_equivalents": true,
"bindings": {
"enter": "editor::ConfirmCompletion",
"shift-enter": "editor::ConfirmCompletionReplace",
"tab": "editor::ComposeCompletion"
}
},
@@ -1004,8 +975,6 @@
"cmd-home": "terminal::ScrollToTop",
"shift-end": "terminal::ScrollToBottom",
"cmd-end": "terminal::ScrollToBottom",
// Using `ctrl-shift-space` in Zed requires disabling the macOS global shortcut.
// System Preferences->Keyboard->Keyboard Shortcuts->Input Sources->Select the previous input source (uncheck)
"ctrl-shift-space": "terminal::ToggleViMode",
"ctrl-k up": "pane::SplitUp",
"ctrl-k down": "pane::SplitDown",


@@ -44,12 +44,6 @@
"[ /": "vim::PreviousComment",
"] *": "vim::NextComment",
"] /": "vim::NextComment",
"[ -": "vim::PreviousLesserIndent",
"[ +": "vim::PreviousGreaterIndent",
"[ =": "vim::PreviousSameIndent",
"] -": "vim::NextLesserIndent",
"] +": "vim::NextGreaterIndent",
"] =": "vim::NextSameIndent",
// Word motions
"w": "vim::NextWordStart",
"e": "vim::NextWordEnd",
@@ -341,106 +335,27 @@
}
},
{
"context": "vim_mode == helix_normal && !menu",
"context": "vim_mode == helix_normal",
"bindings": {
"escape": "editor::Cancel",
"ctrl-[": "editor::Cancel",
":": "command_palette::Toggle",
"shift-d": "vim::DeleteToEndOfLine",
"shift-j": "vim::JoinLines",
"y": "editor::Copy",
"shift-y": "vim::YankLine",
"i": "vim::InsertBefore",
"shift-i": "vim::InsertFirstNonWhitespace",
"a": "vim::InsertAfter",
"shift-a": "vim::InsertEndOfLine",
"o": "vim::InsertLineBelow",
"shift-o": "vim::InsertLineAbove",
"~": "vim::ChangeCase",
"ctrl-a": "vim::Increment",
"ctrl-x": "vim::Decrement",
"p": "vim::Paste",
"shift-p": ["vim::Paste", { "before": true }],
"u": "vim::Undo",
"ctrl-r": "vim::Redo",
"r": "vim::PushReplace",
"s": "vim::Substitute",
"shift-s": "vim::SubstituteLine",
">": "vim::Indent",
"<": "vim::Outdent",
"=": "vim::AutoIndent",
"g u": "vim::PushLowercase",
"g shift-u": "vim::PushUppercase",
"g ~": "vim::PushOppositeCase",
"\"": "vim::PushRegister",
"g q": "vim::PushRewrap",
"g w": "vim::PushRewrap",
"ctrl-pagedown": "pane::ActivateNextItem",
"ctrl-pageup": "pane::ActivatePreviousItem",
"insert": "vim::InsertBefore",
// tree-sitter related commands
"[ x": "editor::SelectLargerSyntaxNode",
"] x": "editor::SelectSmallerSyntaxNode",
"] d": "editor::GoToDiagnostic",
"[ d": "editor::GoToPreviousDiagnostic",
"] c": "editor::GoToHunk",
"[ c": "editor::GoToPreviousHunk",
// Goto mode
"g n": "pane::ActivateNextItem",
"g p": "pane::ActivatePreviousItem",
// "tab": "pane::ActivateNextItem",
// "shift-tab": "pane::ActivatePrevItem",
"shift-h": "pane::ActivatePreviousItem",
"shift-l": "pane::ActivateNextItem",
"g l": "vim::EndOfLine",
"g h": "vim::StartOfLine",
"g s": "vim::FirstNonWhitespace", // "g s" default behavior is "space s"
"g e": "vim::EndOfDocument",
"g y": "editor::GoToTypeDefinition",
"g r": "editor::FindAllReferences", // zed specific
"g t": "vim::WindowTop",
"g c": "vim::WindowMiddle",
"g b": "vim::WindowBottom",
"x": "editor::SelectLine",
"shift-x": "editor::SelectLine",
// Window mode
"space w h": "workspace::ActivatePaneLeft",
"space w l": "workspace::ActivatePaneRight",
"space w k": "workspace::ActivatePaneUp",
"space w j": "workspace::ActivatePaneDown",
"space w q": "pane::CloseActiveItem",
"space w s": "pane::SplitRight",
"space w r": "pane::SplitRight",
"space w v": "pane::SplitDown",
"space w d": "pane::SplitDown",
// Space mode
"space f": "file_finder::Toggle",
"space k": "editor::Hover",
"space s": "outline::Toggle",
"space shift-s": "project_symbols::Toggle",
"space d": "editor::GoToDiagnostic",
"space r": "editor::Rename",
"space a": "editor::ToggleCodeActions",
"space h": "editor::SelectAllMatches",
"space c": "editor::ToggleComments",
"space y": "editor::Copy",
"space p": "editor::Paste",
// Match mode
"m m": "vim::Matching",
"m i w": ["workspace::SendKeystrokes", "v i w"],
"shift-u": "editor::Redo",
"ctrl-c": "editor::ToggleComments",
"d": "vim::HelixDelete",
"c": "vim::Substitute",
"shift-c": "editor::AddSelectionBelow"
"w": "vim::NextWordStart",
"e": "vim::NextWordEnd",
"b": "vim::PreviousWordStart",
"h": "vim::Left",
"j": "vim::Down",
"k": "vim::Up",
"l": "vim::Right"
}
},
{
"context": "vim_mode == insert && !(showing_code_actions || showing_completions)",
"bindings": {
"ctrl-p": "editor::ShowWordCompletions",
"ctrl-n": "editor::ShowWordCompletions"
"ctrl-p": "editor::ShowCompletions",
"ctrl-n": "editor::ShowCompletions"
}
},
{


@@ -6,19 +6,9 @@ You are an AI assistant integrated into a code editor. You have the programming
It will be up to you to decide which of these you are doing based on what the user has told you. When unclear, ask clarifying questions to understand the user's intent before proceeding.
You should only perform actions that modify the user's system if explicitly requested by the user:
- If the user asks a question about how to accomplish a task, provide guidance or information, and use read-only tools (e.g., search) to assist. You may suggest potential actions, but do not directly modify the user's system without explicit instruction.
- If the user asks a question about how to accomplish a task, provide guidance or information, and use read-only tools (e.g., search) to assist. You may suggest potential actions, but do not directly modify the users system without explicit instruction.
- If the user clearly requests that you perform an action, carry out the action directly without explaining why you are doing so.
When answering questions, it's okay to give incomplete examples containing comments about what would go there in a real version. When being asked to directly perform tasks on the code base, you must ALWAYS make fully working code. You may never "simplify" the code by omitting or deleting functionality you know the user has requested, and you must NEVER write comments like "in a full version, this would..." - instead, you must actually implement the real version. Don't be lazy!
Note that project files are automatically backed up. The user can always get them back later if anything goes wrong, so there's
no need to create backup files (e.g. `.bak` files) because these files will just take up unnecessary space on the user's disk.
When attempting to resolve issues around failing tests, never simply remove the failing tests. Unless the user explicitly asks you to remove tests, ALWAYS attempt to fix the code causing the tests to fail.
Ignore "TODO"-type comments unless they're relevant to the user's explicit request or the user specifically asks you to address them. It is, however, okay to include them in codebase summaries.
<style>
Editing code:
- Make sure to take previous edits into account.
- The edits you perform might lead to errors or warnings. At the end of your changes, check whether you introduced any problems, and fix them before providing a summary of the changes you made.
@@ -44,106 +34,6 @@ Responding:
For example, don't say "Now I'm going to check diagnostics to see if there are any warnings or errors," followed by running a tool which checks diagnostics and reports warnings or errors; instead, just request the tool call without saying anything.
- All tool results are provided to you automatically, so DO NOT thank the user when this happens.
Whenever you mention a code block, you MUST use ONLY the following format:
```language path/to/Something.blah#L123-456
(code goes here)
```
The `#L123-456` means the line number range 123 through 456, and the path/to/Something.blah
is a path in the project. (If there is no valid path in the project, then you can use
/dev/null/path.extension for its path.) This is the ONLY valid way to format code blocks, because the Markdown parser
does not understand the more common ```language syntax, or bare ``` blocks. It only
understands this path-based syntax, and if the path is missing, then it will error and you will have to do it over again.
Just to be really clear about this, if you ever find yourself writing three backticks followed by a language name, STOP!
You have made a mistake. You can only ever put paths after triple backticks!
<example>
Based on all the information I've gathered, here's a summary of how this system works:
1. The README file is loaded into the system.
2. The system finds the first two headers, including everything in between. In this case, that would be:
```path/to/README.md#L8-12
# First Header
This is the info under the first header.
## Sub-header
```
3. Then the system finds the last header in the README:
```path/to/README.md#L27-29
## Last Header
This is the last header in the README.
```
4. Finally, it passes this information on to the next process.
</example>
<example>
In Markdown, hash marks signify headings. For example:
```/dev/null/example.md#L1-3
# Level 1 heading
## Level 2 heading
### Level 3 heading
```
</example>
Here are examples of ways you must never render code blocks:
<bad_example_do_not_do_this>
In Markdown, hash marks signify headings. For example:
```
# Level 1 heading
## Level 2 heading
### Level 3 heading
```
</bad_example_do_not_do_this>
This example is unacceptable because it does not include the path.
<bad_example_do_not_do_this>
In Markdown, hash marks signify headings. For example:
```markdown
# Level 1 heading
## Level 2 heading
### Level 3 heading
```
</bad_example_do_not_do_this>
This example is unacceptable because it has the language instead of the path.
<bad_example_do_not_do_this>
In Markdown, hash marks signify headings. For example:
# Level 1 heading
## Level 2 heading
### Level 3 heading
</bad_example_do_not_do_this>
This example is unacceptable because it uses indentation to mark the code block
instead of backticks with a path.
<bad_example_do_not_do_this>
In Markdown, hash marks signify headings. For example:
```markdown
/dev/null/example.md#L1-3
# Level 1 heading
## Level 2 heading
### Level 3 heading
```
</bad_example_do_not_do_this>
This example is unacceptable because the path is in the wrong place. The path must be directly after the opening backticks.
</style>
The user has opened a project that contains the following root directories/files. Whenever you specify a path in the project, it must be a relative path which begins with one of these root directories/files:
{{#each worktrees}}


@@ -0,0 +1,8 @@
A software developer is asking a question about their project. The source files in their project have been indexed into a database of semantic text embeddings.
Your task is to generate a list of 4 diverse search queries that can be run on this embedding database, in order to retrieve a list of code snippets
that are relevant to the developer's question. Redundant search queries will be heavily penalized, so only include another query if it's sufficiently
distinct from previous ones.
Here is the question that's been asked, together with context that the developer has added manually:
{{{context_buffer}}}
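
As a quick illustration of how a template like the one above could be consumed, here is a minimal, hypothetical Rust sketch that renders it with the `handlebars` crate. The crate choice, the template-registration code, and the sample question are assumptions made for illustration only; the sole things taken from the diff above are the prompt wording and the `context_buffer` variable.

```rust
// Hypothetical rendering sketch; not code from this diff.
use handlebars::Handlebars;
use serde_json::json;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Abbreviated copy of the prompt template; "[...]" elides the middle.
    let template = "A software developer is asking a question about their project. [...]\n\
                    Here is the question that's been asked, together with context that the developer has added manually:\n\
                    {{{context_buffer}}}";

    let mut registry = Handlebars::new();
    registry.register_template_string("search_queries", template)?;

    // Triple braces ({{{ ... }}}) interpolate the buffer without HTML-escaping it.
    let prompt = registry.render(
        "search_queries",
        &json!({ "context_buffer": "How is the embedding index kept up to date?" }),
    )?;
    println!("{prompt}");
    Ok(())
}
```

The rendered string is what would be sent to the model, with the developer's question and manually added context substituted for `{{{context_buffer}}}`.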


@@ -1136,8 +1136,7 @@
"code_actions_on_format": {},
// Settings related to running tasks.
"tasks": {
"variables": {},
"enabled": true
"variables": {}
},
// An object whose keys are language names, and whose values
// are arrays of filenames or extensions of files that should
@@ -1201,27 +1200,7 @@
// When set to 0, waits indefinitely.
//
// Default: 0
"lsp_fetch_timeout_ms": 0,
// Controls what range to replace when accepting LSP completions.
//
// When LSP servers give an `InsertReplaceEdit` completion, they provide two ranges: `insert` and `replace`. Usually, `insert`
// contains the word prefix before your cursor and `replace` contains the whole word.
//
// Effectively, this setting just changes whether Zed will use the received range for `insert` or `replace`, so the results may
// differ depending on the underlying LSP server.
//
// Possible values:
// 1. "insert"
// Replaces text before the cursor, using the `insert` range described in the LSP specification.
// 2. "replace"
// Replaces text before and after the cursor, using the `replace` range described in the LSP specification.
// 3. "replace_subsequence"
// Behaves like `"replace"` if the text that would be replaced is a subsequence of the completion text,
// and like `"insert"` otherwise.
// 4. "replace_suffix"
// Behaves like `"replace"` if the text after the cursor is a suffix of the completion, and like
// `"insert"` otherwise.
"lsp_insert_mode": "replace_suffix"
"lsp_fetch_timeout_ms": 0
},
// Different settings for specific languages.
"languages": {
@@ -1457,8 +1436,6 @@
"lsp": {
// Specify the LSP name as a key here.
// "rust-analyzer": {
// // A special flag for rust-analyzer integration, to use server-provided tasks
// enable_lsp_tasks": true,
// // These initialization options are merged into Zed's defaults
// "initialization_options": {
// "check": {


@@ -43,8 +43,6 @@
// "args": ["--login"]
// }
// }
"shell": "system",
// Represents the tags for inline runnable indicators, or spawning multiple tasks at once.
"tags": []
"shell": "system"
}
]


@@ -87,9 +87,9 @@
"terminal.ansi.blue": "#83a598ff",
"terminal.ansi.bright_blue": "#414f4aff",
"terminal.ansi.dim_blue": "#c0d2cbff",
"terminal.ansi.magenta": "#d3869bff",
"terminal.ansi.bright_magenta": "#8e5868ff",
"terminal.ansi.dim_magenta": "#ff9ebbff",
"terminal.ansi.magenta": "#a89984ff",
"terminal.ansi.bright_magenta": "#514a41ff",
"terminal.ansi.dim_magenta": "#d2cabfff",
"terminal.ansi.cyan": "#8ec07cff",
"terminal.ansi.bright_cyan": "#45603eff",
"terminal.ansi.dim_cyan": "#c7dfbdff",
@@ -472,9 +472,9 @@
"terminal.ansi.blue": "#83a598ff",
"terminal.ansi.bright_blue": "#414f4aff",
"terminal.ansi.dim_blue": "#c0d2cbff",
"terminal.ansi.magenta": "#d3869bff",
"terminal.ansi.bright_magenta": "#8e5868ff",
"terminal.ansi.dim_magenta": "#ff9ebbff",
"terminal.ansi.magenta": "#a89984ff",
"terminal.ansi.bright_magenta": "#514a41ff",
"terminal.ansi.dim_magenta": "#d2cabfff",
"terminal.ansi.cyan": "#8ec07cff",
"terminal.ansi.bright_cyan": "#45603eff",
"terminal.ansi.dim_cyan": "#c7dfbdff",
@@ -857,9 +857,9 @@
"terminal.ansi.blue": "#83a598ff",
"terminal.ansi.bright_blue": "#414f4aff",
"terminal.ansi.dim_blue": "#c0d2cbff",
"terminal.ansi.magenta": "#d3869bff",
"terminal.ansi.bright_magenta": "#8e5868ff",
"terminal.ansi.dim_magenta": "#ff9ebbff",
"terminal.ansi.magenta": "#a89984ff",
"terminal.ansi.bright_magenta": "#514a41ff",
"terminal.ansi.dim_magenta": "#d2cabfff",
"terminal.ansi.cyan": "#8ec07cff",
"terminal.ansi.bright_cyan": "#45603eff",
"terminal.ansi.dim_cyan": "#c7dfbdff",
@@ -1242,9 +1242,9 @@
"terminal.ansi.blue": "#0b6678ff",
"terminal.ansi.bright_blue": "#8fb0baff",
"terminal.ansi.dim_blue": "#14333bff",
"terminal.ansi.magenta": "#8f3e71ff",
"terminal.ansi.bright_magenta": "#c76da0ff",
"terminal.ansi.dim_magenta": "#5c2848ff",
"terminal.ansi.magenta": "#7c6f64ff",
"terminal.ansi.bright_magenta": "#bcb5afff",
"terminal.ansi.dim_magenta": "#3e3833ff",
"terminal.ansi.cyan": "#437b59ff",
"terminal.ansi.bright_cyan": "#9fbca8ff",
"terminal.ansi.dim_cyan": "#253e2eff",
@@ -1627,9 +1627,9 @@
"terminal.ansi.blue": "#0b6678ff",
"terminal.ansi.bright_blue": "#8fb0baff",
"terminal.ansi.dim_blue": "#14333bff",
"terminal.ansi.magenta": "#8f3e71ff",
"terminal.ansi.bright_magenta": "#c76da0ff",
"terminal.ansi.dim_magenta": "#5c2848ff",
"terminal.ansi.magenta": "#7c6f64ff",
"terminal.ansi.bright_magenta": "#bcb5afff",
"terminal.ansi.dim_magenta": "#3e3833ff",
"terminal.ansi.cyan": "#437b59ff",
"terminal.ansi.bright_cyan": "#9fbca8ff",
"terminal.ansi.dim_cyan": "#253e2eff",
@@ -2012,9 +2012,9 @@
"terminal.ansi.blue": "#0b6678ff",
"terminal.ansi.bright_blue": "#8fb0baff",
"terminal.ansi.dim_blue": "#14333bff",
"terminal.ansi.magenta": "#8f3e71ff",
"terminal.ansi.bright_magenta": "#c76da0ff",
"terminal.ansi.dim_magenta": "#5c2848ff",
"terminal.ansi.magenta": "#7c6f64ff",
"terminal.ansi.bright_magenta": "#bcb5afff",
"terminal.ansi.dim_magenta": "#3e3833ff",
"terminal.ansi.cyan": "#437b59ff",
"terminal.ansi.bright_cyan": "#9fbca8ff",
"terminal.ansi.dim_cyan": "#253e2eff",


@@ -11,22 +11,13 @@ use language::{BinaryStatus, LanguageRegistry, LanguageServerId};
use project::{
EnvironmentErrorMessage, LanguageServerProgress, LspStoreEvent, Project,
ProjectEnvironmentEvent,
git_store::{GitStoreEvent, Repository},
};
use smallvec::SmallVec;
use std::{
cmp::Reverse,
fmt::Write,
path::Path,
sync::Arc,
time::{Duration, Instant},
};
use std::{cmp::Reverse, fmt::Write, path::Path, sync::Arc, time::Duration};
use ui::{ButtonLike, ContextMenu, PopoverMenu, PopoverMenuHandle, Tooltip, prelude::*};
use util::truncate_and_trailoff;
use workspace::{StatusItemView, Workspace, item::ItemHandle};
const GIT_OPERATION_DELAY: Duration = Duration::from_millis(0);
actions!(activity_indicator, [ShowErrorMessage]);
pub enum Event {
@@ -114,15 +105,6 @@ impl ActivityIndicator {
)
.detach();
cx.subscribe(
&project.read(cx).git_store().clone(),
|_, _, event: &GitStoreEvent, cx| match event {
project::git_store::GitStoreEvent::JobsUpdated => cx.notify(),
_ => {}
},
)
.detach();
if let Some(auto_updater) = auto_updater.as_ref() {
cx.observe(auto_updater, |_, _, cx| cx.notify()).detach();
}
@@ -303,34 +285,6 @@ impl ActivityIndicator {
});
}
let current_job = self
.project
.read(cx)
.active_repository(cx)
.map(|r| r.read(cx))
.and_then(Repository::current_job);
// Show any long-running git command
if let Some(job_info) = current_job {
if Instant::now() - job_info.start >= GIT_OPERATION_DELAY {
return Some(Content {
icon: Some(
Icon::new(IconName::ArrowCircle)
.size(IconSize::Small)
.with_animation(
"arrow-circle",
Animation::new(Duration::from_secs(2)).repeat(),
|icon, delta| {
icon.transform(Transformation::rotate(percentage(delta)))
},
)
.into_any_element(),
),
message: job_info.message.into(),
on_click: None,
});
}
}
// Show any language server installation info.
let mut downloading = SmallVec::<[_; 3]>::new();
let mut checking_for_update = SmallVec::<[_; 3]>::new();


@@ -84,6 +84,7 @@ ui.workspace = true
ui_input.workspace = true
util.workspace = true
uuid.workspace = true
vim_mode_setting.workspace = true
workspace.workspace = true
zed_actions.workspace = true
workspace-hack.workspace = true

File diff suppressed because it is too large.


@@ -792,11 +792,15 @@ impl editor::Addon for AgentDiffAddon {
pub struct AgentDiffToolbar {
agent_diff: Option<WeakEntity<AgentDiff>>,
_workspace: WeakEntity<Workspace>,
}
impl AgentDiffToolbar {
pub fn new() -> Self {
Self { agent_diff: None }
pub fn new(workspace: &Workspace, _: &mut Context<Self>) -> Self {
Self {
agent_diff: None,
_workspace: workspace.weak_handle(),
}
}
fn agent_diff(&self, _: &App) -> Option<Entity<AgentDiff>> {


@@ -409,7 +409,6 @@ impl Render for AssistantConfiguration {
v_flex()
.id("assistant-configuration")
.key_context("AgentConfiguration")
.track_focus(&self.focus_handle(cx))
.bg(cx.theme().colors().panel_background)
.size_full()


@@ -178,7 +178,6 @@ pub struct AssistantPanel {
configuration_subscription: Option<Subscription>,
local_timezone: UtcOffset,
active_view: ActiveView,
previous_view: Option<ActiveView>,
history_store: Entity<HistoryStore>,
history: Entity<ThreadHistory>,
assistant_dropdown_menu_handle: PopoverMenuHandle<ContextMenu>,
@@ -293,7 +292,6 @@ impl AssistantPanel {
chrono::Local::now().offset().local_minus_utc(),
)
.unwrap(),
previous_view: None,
history_store: history_store.clone(),
history: cx.new(|cx| ThreadHistory::new(weak_self, history_store, window, cx)),
assistant_dropdown_menu_handle: PopoverMenuHandle::default(),
@@ -339,8 +337,7 @@ impl AssistantPanel {
.thread_store
.update(cx, |this, cx| this.create_thread(cx));
let thread_view = ActiveView::thread(thread.clone(), window, cx);
self.set_active_view(thread_view, window, cx);
self.active_view = ActiveView::thread(thread.clone(), window, cx);
let message_editor_context_store = cx.new(|_cx| {
crate::context_store::ContextStore::new(
@@ -403,7 +400,7 @@ impl AssistantPanel {
}
fn new_prompt_editor(&mut self, window: &mut Window, cx: &mut Context<Self>) {
self.set_active_view(ActiveView::PromptEditor, window, cx);
self.active_view = ActiveView::PromptEditor;
let context = self
.context_store
@@ -453,16 +450,11 @@ impl AssistantPanel {
}
fn open_history(&mut self, window: &mut Window, cx: &mut Context<Self>) {
if matches!(self.active_view, ActiveView::History) {
if let Some(previous_view) = self.previous_view.take() {
self.set_active_view(previous_view, window, cx);
}
} else {
self.thread_store
.update(cx, |thread_store, cx| thread_store.reload(cx))
.detach_and_log_err(cx);
self.set_active_view(ActiveView::History, window, cx);
}
self.thread_store
.update(cx, |thread_store, cx| thread_store.reload(cx))
.detach_and_log_err(cx);
self.active_view = ActiveView::History;
self.history.focus_handle(cx).focus(window);
cx.notify();
}
@@ -495,7 +487,7 @@ impl AssistantPanel {
cx,
)
});
this.set_active_view(ActiveView::PromptEditor, window, cx);
this.active_view = ActiveView::PromptEditor;
this.context_editor = Some(editor);
anyhow::Ok(())
@@ -517,8 +509,7 @@ impl AssistantPanel {
cx.spawn_in(window, async move |this, cx| {
let thread = open_thread_task.await?;
this.update_in(cx, |this, window, cx| {
let thread_view = ActiveView::thread(thread.clone(), window, cx);
this.set_active_view(thread_view, window, cx);
this.active_view = ActiveView::thread(thread.clone(), window, cx);
let message_editor_context_store = cx.new(|_cx| {
crate::context_store::ContextStore::new(
this.workspace.clone(),
@@ -552,17 +543,6 @@ impl AssistantPanel {
})
}
pub fn go_back(&mut self, _: &workspace::GoBack, window: &mut Window, cx: &mut Context<Self>) {
match self.active_view {
ActiveView::Configuration | ActiveView::History => {
self.active_view =
ActiveView::thread(self.thread.read(cx).thread().clone(), window, cx);
cx.notify();
}
_ => {}
}
}
pub fn open_agent_diff(
&mut self,
_: &OpenAgentDiff,
@@ -582,7 +562,7 @@ impl AssistantPanel {
let tools = self.thread_store.read(cx).tools();
let fs = self.fs.clone();
self.set_active_view(ActiveView::Configuration, window, cx);
self.active_view = ActiveView::Configuration;
self.configuration =
Some(cx.new(|cx| {
AssistantConfiguration::new(fs, context_server_manager, tools, window, cx)
@@ -710,29 +690,6 @@ impl AssistantPanel {
self.context_store
.update(cx, |this, cx| this.delete_local_context(path, cx))
}
fn set_active_view(
&mut self,
new_view: ActiveView,
window: &mut Window,
cx: &mut Context<Self>,
) {
let current_is_history = matches!(self.active_view, ActiveView::History);
let new_is_history = matches!(new_view, ActiveView::History);
if current_is_history && !new_is_history {
self.active_view = new_view;
} else if !current_is_history && new_is_history {
self.previous_view = Some(std::mem::replace(&mut self.active_view, new_view));
} else {
if !new_is_history {
self.previous_view = None;
}
self.active_view = new_view;
}
self.focus_handle(cx).focus(window);
}
}
impl Focusable for AssistantPanel {
@@ -882,6 +839,7 @@ impl AssistantPanel {
h_flex()
.key_context("TitleEditor")
.id("TitleEditor")
.pl_2()
.flex_grow()
.w_full()
.max_w_full()
@@ -900,37 +858,12 @@ impl AssistantPanel {
let is_empty = active_thread.is_empty();
let focus_handle = self.focus_handle(cx);
let is_history = matches!(self.active_view, ActiveView::History);
let show_token_count = match &self.active_view {
ActiveView::Thread { .. } => !is_empty,
ActiveView::PromptEditor => self.context_editor.is_some(),
_ => false,
};
let go_back_button = match &self.active_view {
ActiveView::History | ActiveView::Configuration => Some(
IconButton::new("go-back", IconName::ArrowLeft)
.icon_size(IconSize::Small)
.on_click(cx.listener(|this, _, window, cx| {
this.go_back(&workspace::GoBack, window, cx);
}))
.tooltip({
let focus_handle = focus_handle.clone();
move |window, cx| {
Tooltip::for_action_in(
"Go Back",
&workspace::GoBack,
&focus_handle,
window,
cx,
)
}
}),
),
_ => None,
};
h_flex()
.id("assistant-toolbar")
.h(Tab::container_height(cx))
@@ -941,14 +874,7 @@ impl AssistantPanel {
.bg(cx.theme().colors().tab_bar_background)
.border_b_1()
.border_color(cx.theme().colors().border)
.child(
h_flex()
.w_full()
.pl_2()
.gap_2()
.children(go_back_button)
.child(self.render_title_view(window, cx)),
)
.child(self.render_title_view(window, cx))
.child(
h_flex()
.h_full()
@@ -1043,27 +969,6 @@ impl AssistantPanel {
);
}),
)
.child(
IconButton::new("open-history", IconName::HistoryRerun)
.icon_size(IconSize::Small)
.toggle_state(is_history)
.selected_icon_color(Color::Accent)
.tooltip({
let focus_handle = self.focus_handle(cx);
move |window, cx| {
Tooltip::for_action_in(
"History",
&OpenHistory,
&focus_handle,
window,
cx,
)
}
})
.on_click(move |_event, window, cx| {
window.dispatch_action(OpenHistory.boxed_clone(), cx);
}),
)
.child(
PopoverMenu::new("assistant-menu")
.trigger_with_tooltip(
@@ -1080,6 +985,12 @@ impl AssistantPanel {
cx,
|menu, _window, _cx| {
menu.action(
"New Thread",
Box::new(NewThread {
from_thread_id: None,
}),
)
.action(
"New Prompt Editor",
NewPromptEditor.boxed_clone(),
)
@@ -1092,6 +1003,7 @@ impl AssistantPanel {
)
})
.separator()
.action("History", OpenHistory.boxed_clone())
.action("Settings", OpenConfiguration.boxed_clone())
},
))
@@ -1588,7 +1500,6 @@ impl Render for AssistantPanel {
.on_action(cx.listener(Self::open_active_thread_as_markdown))
.on_action(cx.listener(Self::deploy_prompt_library))
.on_action(cx.listener(Self::open_agent_diff))
.on_action(cx.listener(Self::go_back))
.child(self.render_toolbar(window, cx))
.map(|parent| match self.active_view {
ActiveView::Thread { .. } => parent


@@ -28,7 +28,7 @@ use std::{
time::Instant,
};
use streaming_diff::{CharOperation, LineDiff, LineOperation, StreamingDiff};
use telemetry_events::{AssistantEventData, AssistantKind, AssistantPhase};
use telemetry_events::{AssistantEvent, AssistantKind, AssistantPhase};
pub struct BufferCodegen {
alternatives: Vec<Entity<CodegenAlternative>>,
@@ -601,7 +601,7 @@ impl CodegenAlternative {
let error_message = result.as_ref().err().map(|error| error.to_string());
report_assistant_event(
AssistantEventData {
AssistantEvent {
conversation_id: None,
message_id,
kind: AssistantKind::Inline,


@@ -106,13 +106,12 @@ impl ContextPickerCompletionProvider {
.iter()
.map(|mode| {
Completion {
replace_range: source_range.clone(),
old_range: source_range.clone(),
new_text: format!("@{} ", mode.mention_prefix()),
label: CodeLabel::plain(mode.label().to_string(), None),
icon_path: Some(mode.icon().path().into()),
documentation: None,
source: project::CompletionSource::Custom,
insert_text_mode: None,
// This ensures that when a user accepts this completion, the
// completion menu will still be shown after "@category " is
// inserted
@@ -160,11 +159,10 @@ impl ContextPickerCompletionProvider {
let new_text = MentionLink::for_thread(&thread_entry);
let new_text_len = new_text.len();
Completion {
replace_range: source_range.clone(),
old_range: source_range.clone(),
new_text,
label: CodeLabel::plain(thread_entry.summary.to_string(), None),
documentation: None,
insert_text_mode: None,
source: project::CompletionSource::Custom,
icon_path: Some(icon_for_completion.path().into()),
confirm: Some(confirm_completion_callback(
@@ -205,13 +203,12 @@ impl ContextPickerCompletionProvider {
let new_text = MentionLink::for_fetch(&url_to_fetch);
let new_text_len = new_text.len();
Completion {
replace_range: source_range.clone(),
old_range: source_range.clone(),
new_text,
label: CodeLabel::plain(url_to_fetch.to_string(), None),
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(IconName::Globe.path().into()),
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
IconName::Globe.path().into(),
url_to_fetch.clone(),
@@ -287,13 +284,12 @@ impl ContextPickerCompletionProvider {
let new_text = MentionLink::for_file(&file_name, &full_path);
let new_text_len = new_text.len();
Completion {
replace_range: source_range.clone(),
old_range: source_range.clone(),
new_text,
label,
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(completion_icon_path),
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
crease_icon_path,
file_name,
@@ -350,13 +346,12 @@ impl ContextPickerCompletionProvider {
let new_text = MentionLink::for_symbol(&symbol.name, &full_path);
let new_text_len = new_text.len();
Some(Completion {
replace_range: source_range.clone(),
old_range: source_range.clone(),
new_text,
label,
documentation: None,
source: project::CompletionSource::Custom,
icon_path: Some(IconName::Code.path().into()),
insert_text_mode: None,
confirm: Some(confirm_completion_callback(
IconName::Code.path().into(),
symbol.name.clone().into(),


@@ -31,7 +31,7 @@ use project::LspAction;
use project::{CodeAction, ProjectTransaction};
use prompt_store::PromptBuilder;
use settings::{Settings, SettingsStore};
use telemetry_events::{AssistantEventData, AssistantKind, AssistantPhase};
use telemetry_events::{AssistantEvent, AssistantKind, AssistantPhase};
use terminal_view::{TerminalView, terminal_panel::TerminalPanel};
use text::{OffsetRangeExt, ToPoint as _};
use ui::prelude::*;
@@ -402,7 +402,7 @@ impl InlineAssistant {
codegen_ranges.push(anchor_range);
if let Some(model) = LanguageModelRegistry::read_global(cx).inline_assistant_model() {
self.telemetry.report_assistant_event(AssistantEventData {
self.telemetry.report_assistant_event(AssistantEvent {
conversation_id: None,
kind: AssistantKind::Inline,
phase: AssistantPhase::Invoked,
@@ -987,7 +987,7 @@ impl InlineAssistant {
.map(|language| language.name())
});
report_assistant_event(
AssistantEventData {
AssistantEvent {
conversation_id: None,
kind: AssistantKind::Inline,
message_id,


@@ -8,7 +8,7 @@ use file_icons::FileIcons;
use fs::Fs;
use gpui::{
Animation, AnimationExt, App, DismissEvent, Entity, Focusable, Subscription, TextStyle,
WeakEntity, linear_color_stop, linear_gradient, point, pulsating_between,
WeakEntity, linear_color_stop, linear_gradient, point,
};
use language::Buffer;
use language_model::{ConfiguredModel, LanguageModelRegistry};
@@ -18,8 +18,12 @@ use project::Project;
use settings::Settings;
use std::time::Duration;
use theme::ThemeSettings;
use ui::{Disclosure, KeyBinding, PopoverMenu, PopoverMenuHandle, Tooltip, prelude::*};
use ui::{
ButtonLike, Disclosure, KeyBinding, PlatformStyle, PopoverMenu, PopoverMenuHandle, Tooltip,
prelude::*,
};
use util::ResultExt as _;
use vim_mode_setting::VimModeSetting;
use workspace::Workspace;
use crate::assistant_model_selector::AssistantModelSelector;
@@ -222,8 +226,7 @@ impl MessageEditor {
let thread = self.thread.clone();
let context_store = self.context_store.clone();
let git_store = self.project.read(cx).git_store().clone();
let checkpoint = git_store.update(cx, |git_store, cx| git_store.checkpoint(cx));
let checkpoint = self.project.read(cx).git_store().read(cx).checkpoint(cx);
cx.spawn(async move |this, cx| {
let checkpoint = checkpoint.await.ok();
@@ -351,7 +354,24 @@ impl Render for MessageEditor {
let total_token_usage = thread.total_token_usage(cx);
let is_model_selected = self.is_model_selected(cx);
let is_editor_empty = self.is_editor_empty(cx);
let is_edit_changes_expanded = self.edits_expanded;
let needs_confirmation =
thread.has_pending_tool_uses() && thread.tools_needing_confirmation().next().is_some();
let submit_label_color = if is_editor_empty {
Color::Muted
} else {
Color::Default
};
let vim_mode_enabled = VimModeSetting::get_global(cx).0;
let platform = PlatformStyle::platform();
let linux = platform == PlatformStyle::Linux;
let windows = platform == PlatformStyle::Windows;
let button_width = if linux || windows || vim_mode_enabled {
px(82.)
} else {
px(64.)
};
let action_log = self.thread.read(cx).action_log();
let changed_buffers = action_log.read(cx).changed_buffers(cx);
@@ -399,6 +419,70 @@ impl Render for MessageEditor {
),
)
})
.when(is_generating, |parent| {
let focus_handle = self.editor.focus_handle(cx).clone();
parent.child(
h_flex().py_3().w_full().justify_center().child(
h_flex()
.flex_none()
.pl_2()
.pr_1()
.py_1()
.bg(editor_bg_color)
.border_1()
.border_color(cx.theme().colors().border_variant)
.rounded_lg()
.shadow_md()
.gap_1()
.child(
Icon::new(IconName::ArrowCircle)
.size(IconSize::XSmall)
.color(Color::Muted)
.with_animation(
"arrow-circle",
Animation::new(Duration::from_secs(2)).repeat(),
|icon, delta| {
icon.transform(gpui::Transformation::rotate(
gpui::percentage(delta),
))
},
),
)
.child({
Label::new(if needs_confirmation {
"Waiting for confirmation…"
} else {
"Generating…"
})
.size(LabelSize::XSmall)
.color(Color::Muted)
})
.child(ui::Divider::vertical())
.child(
Button::new("cancel-generation", "Cancel")
.label_size(LabelSize::XSmall)
.key_binding(
KeyBinding::for_action_in(
&editor::actions::Cancel,
&focus_handle,
window,
cx,
)
.map(|kb| kb.size(rems_from_px(10.))),
)
.on_click(move |_event, window, cx| {
focus_handle.dispatch_action(
&editor::actions::Cancel,
window,
cx,
);
}),
),
),
)
})
.when(changed_buffers_count > 0, |parent| {
parent.child(
v_flex()
@@ -417,12 +501,12 @@ impl Render for MessageEditor {
.child(
h_flex()
.id("edits-container")
.cursor_pointer()
.p_1p5()
.justify_between()
.when(is_edit_changes_expanded, |this| {
.when(self.edits_expanded, |this| {
this.border_b_1().border_color(border_color)
})
.cursor_pointer()
.on_click(cx.listener(|this, _, window, cx| {
this.handle_review_click(window, cx)
}))
@@ -432,7 +516,7 @@ impl Render for MessageEditor {
.child(
Disclosure::new(
"edits-disclosure",
is_edit_changes_expanded,
self.edits_expanded,
)
.on_click(
cx.listener(|this, _ev, _window, cx| {
@@ -482,9 +566,9 @@ impl Render for MessageEditor {
})),
),
)
.when(is_edit_changes_expanded, |parent| {
.when(self.edits_expanded, |parent| {
parent.child(
v_flex().children(
v_flex().bg(cx.theme().colors().editor_background).children(
changed_buffers.into_iter().enumerate().flat_map(
|(index, (buffer, _diff))| {
let file = buffer.read(cx).file()?;
@@ -498,7 +582,7 @@ impl Render for MessageEditor {
} else {
Some(
Label::new(format!(
"/{}{}",
"{}{}",
parent_str,
std::path::MAIN_SEPARATOR_STR
))
@@ -526,105 +610,70 @@ impl Render for MessageEditor {
.size(IconSize::Small)
});
let hover_color = cx.theme()
.colors()
.element_background
.blend(cx.theme().colors().editor_foreground.opacity(0.025));
let overlay_gradient = linear_gradient(
90.,
linear_color_stop(
editor_bg_color,
1.,
),
linear_color_stop(
editor_bg_color
.opacity(0.2),
0.,
),
);
let overlay_gradient_hover = linear_gradient(
90.,
linear_color_stop(
hover_color,
1.,
),
linear_color_stop(
hover_color
.opacity(0.2),
0.,
),
);
let element = h_flex()
.group("edited-code")
.id(("file-container", index))
.cursor_pointer()
let element = div()
.relative()
.py_1()
.pl_2()
.pr_1()
.gap_2()
.justify_between()
.bg(cx.theme().colors().editor_background)
.hover(|style| style.bg(hover_color))
.px_2()
.when(index + 1 < changed_buffers_count, |parent| {
parent.border_color(border_color).border_b_1()
})
.child(
h_flex()
.id("file-name")
.pr_8()
.gap_1p5()
.max_w_full()
.overflow_x_scroll()
.child(file_icon)
.gap_2()
.justify_between()
.child(
h_flex()
.gap_0p5()
.children(name_label)
.children(parent_label)
) // TODO: show lines changed
.child(
Label::new("+")
.color(Color::Created),
.id(("file-container", index))
.pr_8()
.gap_1p5()
.max_w_full()
.overflow_x_scroll()
.cursor_pointer()
.on_click({
let buffer = buffer.clone();
cx.listener(move |this, _, window, cx| {
this.handle_file_click(buffer.clone(), window, cx);
})
})
.tooltip(
Tooltip::text(format!("Review {}", path.display()))
)
.child(file_icon)
.child(
h_flex()
.children(parent_label)
.children(name_label),
) // TODO: show lines changed
.child(
Label::new("+")
.color(Color::Created),
)
.child(
Label::new("-")
.color(Color::Deleted),
),
)
.child(
Label::new("-")
.color(Color::Deleted),
div()
.h_full()
.absolute()
.w_8()
.bottom_0()
.right_0()
.bg(linear_gradient(
90.,
linear_color_stop(
editor_bg_color,
1.,
),
linear_color_stop(
editor_bg_color
.opacity(0.2),
0.,
),
)),
),
)
.child(
div().visible_on_hover("edited-code").child(
Button::new("review", "Review")
.label_size(LabelSize::Small)
.on_click({
let buffer = buffer.clone();
cx.listener(move |this, _, window, cx| {
this.handle_file_click(buffer.clone(), window, cx);
})
})
)
)
.child(
div()
.id("gradient-overlay")
.absolute()
.h_5_6()
.w_12()
.bottom_0()
.right(px(52.))
.bg(overlay_gradient)
.group_hover("edited-code", |style| style.bg(overlay_gradient_hover))
,
)
.on_click({
let buffer = buffer.clone();
cx.listener(move |this, _, window, cx| {
this.handle_file_click(buffer.clone(), window, cx);
})
});
);
Some(element)
},
@@ -684,6 +733,7 @@ impl Render for MessageEditor {
..Default::default()
},
).into_any()
})
.child(
PopoverMenu::new("inline-context-picker")
@@ -691,6 +741,7 @@ impl Render for MessageEditor {
inline_context_picker.update(cx, |this, cx| {
this.init(window, cx);
});
Some(inline_context_picker.clone())
})
.attach(gpui::Corner::TopLeft)
@@ -707,72 +758,60 @@ impl Render for MessageEditor {
.justify_between()
.child(h_flex().gap_2().child(self.profile_selector.clone()))
.child(
h_flex().gap_1().child(self.model_selector.clone())
.map(|parent| {
if is_generating {
parent.child(
IconButton::new("stop-generation", IconName::StopFilled)
.icon_color(Color::Error)
.style(ButtonStyle::Tinted(ui::TintColor::Error))
.tooltip(move |window, cx| {
Tooltip::for_action(
"Stop Generation",
&editor::actions::Cancel,
window,
cx,
)
})
.on_click(move |_event, window, cx| {
focus_handle.dispatch_action(
&editor::actions::Cancel,
window,
cx,
);
})
.with_animation(
"pulsating-label",
Animation::new(Duration::from_secs(2))
.repeat()
.with_easing(pulsating_between(0.4, 1.0)),
|icon_button, delta| icon_button.alpha(delta),
),
)
} else {
parent.child(
IconButton::new("send-message", IconName::Send)
.icon_color(Color::Accent)
.style(ButtonStyle::Filled)
.disabled(
is_editor_empty
|| !is_model_selected
|| self.waiting_for_summaries_to_send
h_flex().gap_1().child(self.model_selector.clone()).child(
ButtonLike::new("submit-message")
.width(button_width.into())
.style(ButtonStyle::Filled)
.disabled(
is_editor_empty
|| !is_model_selected
|| is_generating
|| self.waiting_for_summaries_to_send
)
.child(
h_flex()
.w_full()
.justify_between()
.child(
Label::new("Submit")
.size(LabelSize::Small)
.color(submit_label_color),
)
.children(
KeyBinding::for_action_in(
&Chat,
&focus_handle,
window,
cx,
)
.on_click(move |_event, window, cx| {
focus_handle.dispatch_action(&Chat, window, cx);
})
.when(!is_editor_empty && is_model_selected, |button| {
button.tooltip(move |window, cx| {
Tooltip::for_action(
"Send",
&Chat,
window,
cx,
)
})
})
.when(is_editor_empty, |button| {
button.tooltip(Tooltip::text(
"Type a message to submit",
))
})
.when(!is_model_selected, |button| {
button.tooltip(Tooltip::text(
"Select a model to continue",
))
})
)
}
})
.map(|binding| {
binding
.when(vim_mode_enabled, |kb| {
kb.size(rems_from_px(12.))
})
.into_any_element()
}),
),
)
.on_click(move |_event, window, cx| {
focus_handle.dispatch_action(&Chat, window, cx);
})
.when(is_editor_empty, |button| {
button.tooltip(Tooltip::text(
"Type a message to submit",
))
})
.when(is_generating, |button| {
button.tooltip(Tooltip::text(
"Cancel to submit a new message",
))
})
.when(!is_model_selected, |button| {
button.tooltip(Tooltip::text(
"Select a model to continue",
))
}),
),
),
),
)

View File

@@ -6,7 +6,7 @@ use language_model::{
ConfiguredModel, LanguageModelRegistry, LanguageModelRequest, report_assistant_event,
};
use std::{sync::Arc, time::Instant};
use telemetry_events::{AssistantEventData, AssistantKind, AssistantPhase};
use telemetry_events::{AssistantEvent, AssistantKind, AssistantPhase};
use terminal::Terminal;
pub struct TerminalCodegen {
@@ -79,7 +79,7 @@ impl TerminalCodegen {
let error_message = result.as_ref().err().map(|error| error.to_string());
report_assistant_event(
AssistantEventData {
AssistantEvent {
conversation_id: None,
kind: AssistantKind::InlineTerminal,
message_id,

View File

@@ -18,7 +18,7 @@ use language_model::{
};
use prompt_store::PromptBuilder;
use std::sync::Arc;
use telemetry_events::{AssistantEventData, AssistantKind, AssistantPhase};
use telemetry_events::{AssistantEvent, AssistantKind, AssistantPhase};
use terminal_view::TerminalView;
use ui::prelude::*;
use util::ResultExt;
@@ -292,7 +292,7 @@ impl TerminalInlineAssistant {
let codegen = assist.codegen.read(cx);
let executor = cx.background_executor().clone();
report_assistant_event(
AssistantEventData {
AssistantEvent {
conversation_id: None,
kind: AssistantKind::InlineTerminal,
message_id: codegen.message_id.clone(),

View File

@@ -471,11 +471,11 @@ impl Thread {
cx.emit(ThreadEvent::CheckpointChanged);
cx.notify();
let git_store = self.project().read(cx).git_store().clone();
let restore = git_store.update(cx, |git_store, cx| {
git_store.restore_checkpoint(checkpoint.git_checkpoint.clone(), cx)
});
let project = self.project.read(cx);
let restore = project
.git_store()
.read(cx)
.restore_checkpoint(checkpoint.git_checkpoint.clone(), cx);
cx.spawn(async move |this, cx| {
let result = restore.await;
this.update(cx, |this, cx| {
@@ -506,11 +506,11 @@ impl Thread {
};
let git_store = self.project.read(cx).git_store().clone();
let final_checkpoint = git_store.update(cx, |git_store, cx| git_store.checkpoint(cx));
let final_checkpoint = git_store.read(cx).checkpoint(cx);
cx.spawn(async move |this, cx| match final_checkpoint.await {
Ok(final_checkpoint) => {
let equal = git_store
.update(cx, |store, cx| {
.read_with(cx, |store, cx| {
store.compare_checkpoints(
pending_checkpoint.git_checkpoint.clone(),
final_checkpoint.clone(),
@@ -522,7 +522,7 @@ impl Thread {
if equal {
git_store
.update(cx, |store, cx| {
.read_with(cx, |store, cx| {
store.delete_checkpoint(pending_checkpoint.git_checkpoint, cx)
})?
.detach();
@@ -533,7 +533,7 @@ impl Thread {
}
git_store
.update(cx, |store, cx| {
.read_with(cx, |store, cx| {
store.delete_checkpoint(final_checkpoint, cx)
})?
.detach();
@@ -1414,7 +1414,7 @@ impl Thread {
for tool_use in pending_tool_uses.iter() {
if let Some(tool) = self.tools.tool(&tool_use.name, cx) {
if tool.needs_confirmation(&tool_use.input, cx)
if tool.needs_confirmation()
&& !AssistantSettings::get_global(cx).always_allow_tool_actions
{
self.tool_use.confirm_tool_use(
@@ -1487,7 +1487,6 @@ impl Thread {
tool_use_id.clone(),
tool_name,
output,
cx,
);
cx.emit(ThreadEvent::ToolFinished {
@@ -1651,10 +1650,10 @@ impl Thread {
.ok()
.flatten()
.map(|repo| {
repo.update(cx, |repo, _| {
repo.read_with(cx, |repo, _| {
let current_branch =
repo.branch.as_ref().map(|branch| branch.name.to_string());
repo.send_job(None, |state, _| async move {
repo.send_job(|state, _| async move {
let RepositoryState::Local { backend, .. } = state else {
return GitState {
remote_url: None,
@@ -1832,7 +1831,7 @@ impl Thread {
));
self.tool_use
.insert_tool_output(tool_use_id.clone(), tool_name, err, cx);
.insert_tool_output(tool_use_id.clone(), tool_name, err);
cx.emit(ThreadEvent::ToolFinished {
tool_use_id,

View File

@@ -558,14 +558,7 @@ impl RenderOnce for PastContext {
IconButton::new("delete", IconName::TrashAlt)
.shape(IconButtonShape::Square)
.icon_size(IconSize::XSmall)
.tooltip(move |window, cx| {
Tooltip::for_action(
"Delete Prompt Editor",
&RemoveSelectedThread,
window,
cx,
)
})
.tooltip(Tooltip::text("Delete Prompt Editor"))
.on_click({
let assistant_panel = self.assistant_panel.clone();
let path = self.context.path.clone();

View File

@@ -7,11 +7,10 @@ use futures::FutureExt as _;
use futures::future::Shared;
use gpui::{App, SharedString, Task};
use language_model::{
LanguageModelRegistry, LanguageModelRequestMessage, LanguageModelToolResult,
LanguageModelToolUse, LanguageModelToolUseId, MessageContent, Role,
LanguageModelRequestMessage, LanguageModelToolResult, LanguageModelToolUse,
LanguageModelToolUseId, MessageContent, Role,
};
use ui::IconName;
use util::truncate_lines_to_byte_limit;
use crate::thread::MessageId;
use crate::thread_store::SerializedMessage;
@@ -201,7 +200,7 @@ impl ToolUseState {
let (icon, needs_confirmation) = if let Some(tool) = self.tools.tool(&tool_use.name, cx)
{
(tool.icon(), tool.needs_confirmation(&tool_use.input, cx))
(tool.icon(), tool.needs_confirmation())
} else {
(IconName::Cog, false)
};
@@ -332,32 +331,9 @@ impl ToolUseState {
tool_use_id: LanguageModelToolUseId,
tool_name: Arc<str>,
output: Result<String>,
cx: &App,
) -> Option<PendingToolUse> {
match output {
Ok(tool_result) => {
let model_registry = LanguageModelRegistry::read_global(cx);
const BYTES_PER_TOKEN_ESTIMATE: usize = 3;
// Protect from clearly large output
let tool_output_limit = model_registry
.default_model()
.map(|model| model.model.max_token_count() * BYTES_PER_TOKEN_ESTIMATE)
.unwrap_or(usize::MAX);
let tool_result = if tool_result.len() <= tool_output_limit {
tool_result
} else {
let truncated = truncate_lines_to_byte_limit(&tool_result, tool_output_limit);
format!(
"Tool result too long. The first {} bytes:\n\n{}",
truncated.len(),
truncated
)
};
self.tool_results.insert(
tool_use_id.clone(),
LanguageModelToolResult {

View File

@@ -321,54 +321,38 @@ pub async fn stream_completion(
.map(|output| output.0)
}
/// An individual rate limit.
#[derive(Debug)]
pub struct RateLimit {
pub limit: usize,
pub remaining: usize,
pub reset: DateTime<Utc>,
}
impl RateLimit {
fn from_headers(resource: &str, headers: &HeaderMap<HeaderValue>) -> Result<Self> {
let limit =
get_header(&format!("anthropic-ratelimit-{resource}-limit"), headers)?.parse()?;
let remaining = get_header(
&format!("anthropic-ratelimit-{resource}-remaining"),
headers,
)?
.parse()?;
let reset = DateTime::parse_from_rfc3339(get_header(
&format!("anthropic-ratelimit-{resource}-reset"),
headers,
)?)?
.to_utc();
Ok(Self {
limit,
remaining,
reset,
})
}
}
/// <https://docs.anthropic.com/en/api/rate-limits#response-headers>
#[derive(Debug)]
pub struct RateLimitInfo {
pub requests: Option<RateLimit>,
pub tokens: Option<RateLimit>,
pub input_tokens: Option<RateLimit>,
pub output_tokens: Option<RateLimit>,
pub requests_limit: usize,
pub requests_remaining: usize,
pub requests_reset: DateTime<Utc>,
pub tokens_limit: usize,
pub tokens_remaining: usize,
pub tokens_reset: DateTime<Utc>,
}
impl RateLimitInfo {
fn from_headers(headers: &HeaderMap<HeaderValue>) -> Self {
Self {
requests: RateLimit::from_headers("requests", headers).log_err(),
tokens: RateLimit::from_headers("tokens", headers).log_err(),
input_tokens: RateLimit::from_headers("input-tokens", headers).log_err(),
output_tokens: RateLimit::from_headers("output-tokens", headers).log_err(),
}
fn from_headers(headers: &HeaderMap<HeaderValue>) -> Result<Self> {
let tokens_limit = get_header("anthropic-ratelimit-tokens-limit", headers)?.parse()?;
let requests_limit = get_header("anthropic-ratelimit-requests-limit", headers)?.parse()?;
let tokens_remaining =
get_header("anthropic-ratelimit-tokens-remaining", headers)?.parse()?;
let requests_remaining =
get_header("anthropic-ratelimit-requests-remaining", headers)?.parse()?;
let requests_reset = get_header("anthropic-ratelimit-requests-reset", headers)?;
let tokens_reset = get_header("anthropic-ratelimit-tokens-reset", headers)?;
let requests_reset = DateTime::parse_from_rfc3339(requests_reset)?.to_utc();
let tokens_reset = DateTime::parse_from_rfc3339(tokens_reset)?.to_utc();
Ok(Self {
requests_limit,
tokens_limit,
requests_remaining,
tokens_remaining,
requests_reset,
tokens_reset,
})
}
}
@@ -434,7 +418,7 @@ pub async fn stream_completion_with_rate_limit_info(
}
})
.boxed();
Ok((stream, Some(rate_limits)))
Ok((stream, rate_limits.log_err()))
} else {
let mut body = Vec::new();
response

View File

@@ -57,7 +57,7 @@ use std::{
time::{Duration, Instant},
};
use streaming_diff::{CharOperation, LineDiff, LineOperation, StreamingDiff};
use telemetry_events::{AssistantEventData, AssistantKind, AssistantPhase};
use telemetry_events::{AssistantEvent, AssistantKind, AssistantPhase};
use terminal_view::terminal_panel::TerminalPanel;
use text::{OffsetRangeExt, ToPoint as _};
use theme::ThemeSettings;
@@ -315,7 +315,7 @@ impl InlineAssistant {
if let Some(ConfiguredModel { model, .. }) =
LanguageModelRegistry::read_global(cx).default_model()
{
self.telemetry.report_assistant_event(AssistantEventData {
self.telemetry.report_assistant_event(AssistantEvent {
conversation_id: None,
kind: AssistantKind::Inline,
phase: AssistantPhase::Invoked,
@@ -892,7 +892,7 @@ impl InlineAssistant {
.map(|language| language.name())
});
report_assistant_event(
AssistantEventData {
AssistantEvent {
conversation_id: None,
kind: AssistantKind::Inline,
message_id,
@@ -3148,7 +3148,7 @@ impl CodegenAlternative {
let error_message = result.as_ref().err().map(|error| error.to_string());
report_assistant_event(
AssistantEventData {
AssistantEvent {
conversation_id: None,
message_id,
kind: AssistantKind::Inline,

View File

@@ -27,7 +27,7 @@ use std::{
sync::Arc,
time::{Duration, Instant},
};
use telemetry_events::{AssistantEventData, AssistantKind, AssistantPhase};
use telemetry_events::{AssistantEvent, AssistantKind, AssistantPhase};
use terminal::Terminal;
use terminal_view::TerminalView;
use theme::ThemeSettings;
@@ -324,7 +324,7 @@ impl TerminalInlineAssistant {
let codegen = assist.codegen.read(cx);
let executor = cx.background_executor().clone();
report_assistant_event(
AssistantEventData {
AssistantEvent {
conversation_id: None,
kind: AssistantKind::InlineTerminal,
message_id: codegen.message_id.clone(),
@@ -1183,7 +1183,7 @@ impl Codegen {
let error_message = result.as_ref().err().map(|error| error.to_string());
report_assistant_event(
AssistantEventData {
AssistantEvent {
conversation_id: None,
kind: AssistantKind::InlineTerminal,
message_id,

View File

@@ -40,7 +40,7 @@ use std::{
sync::Arc,
time::{Duration, Instant},
};
use telemetry_events::{AssistantEventData, AssistantKind, AssistantPhase};
use telemetry_events::{AssistantEvent, AssistantKind, AssistantPhase};
use text::{BufferSnapshot, ToPoint};
use ui::IconName;
use util::{ResultExt, TryFutureExt, post_inc};
@@ -2498,7 +2498,7 @@ impl AssistantContext {
.language()
.map(|language| language.name());
report_assistant_event(
AssistantEventData {
AssistantEvent {
conversation_id: Some(this.id.0.clone()),
kind: AssistantKind::Panel,
phase: AssistantPhase::Response,

View File

@@ -120,14 +120,13 @@ impl SlashCommandCompletionProvider {
) as Arc<_>
});
Some(project::Completion {
replace_range: name_range.clone(),
old_range: name_range.clone(),
documentation: Some(CompletionDocumentation::SingleLine(
command.description().into(),
)),
new_text,
label: command.label(cx),
icon_path: None,
insert_text_mode: None,
confirm,
source: CompletionSource::Custom,
})
@@ -219,7 +218,7 @@ impl SlashCommandCompletionProvider {
}
project::Completion {
replace_range: if new_argument.replace_previous_arguments {
old_range: if new_argument.replace_previous_arguments {
argument_range.clone()
} else {
last_argument_range.clone()
@@ -229,7 +228,6 @@ impl SlashCommandCompletionProvider {
new_text,
documentation: None,
confirm,
insert_text_mode: None,
source: CompletionSource::Custom,
}
})

View File

@@ -1,5 +1,5 @@
[package]
name = "agent_eval"
name = "assistant_eval"
version = "0.1.0"
edition.workspace = true
publish.workspace = true
@@ -9,7 +9,7 @@ license = "GPL-3.0-or-later"
workspace = true
[[bin]]
name = "agent_eval"
name = "assistant_eval"
path = "src/main.rs"
[dependencies]

View File

@@ -0,0 +1,68 @@
# Tool Evals
A framework for evaluating and benchmarking generations from the agent panel.
## Overview
Tool Evals provides a headless environment for running assistant evaluations on code repositories. It automates the following steps (an example run appears after the list):
1. Setting up test code and repositories
2. Sending prompts to language models
3. Allowing the assistant to use tools to modify code
4. Collecting metrics on performance and tool usage
5. Evaluating results against known good solutions
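For example, a typical end-to-end run over a couple of languages might look like this; it uses only the flags documented in the Usage section below, and the framework handles repo setup, prompting, and scoring for each exercise:
```bash
# Run every exercise for the selected languages end to end:
# clone the exercises, prompt the model, let it edit code, then judge the diff.
cargo run -p assistant_eval -- --all --languages python,rust
```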
## How It Works
The system consists of several key components:
- **Eval**: Loads exercises from the zed-ace-framework repository, creates temporary repos, and executes evaluations
- **HeadlessAssistant**: Provides a headless environment for running the AI assistant
- **Judge**: Evaluates AI-generated solutions against reference implementations and assigns scores
- **Templates**: Defines evaluation frameworks for different tasks (Project Creation, Code Modification, Conversational Guidance)
## Setup Requirements
### Prerequisites
- Rust and Cargo
- Git
- Python (for report generation)
- Network access to clone repositories
- Appropriate API keys for language models and git services (Anthropic, GitHub, etc.)
### Environment Variables
Ensure you have the required API keys set, either from a dev run of Zed or via these environment variables:
- `ZED_ANTHROPIC_API_KEY` for Claude models
- `ZED_GITHUB_API_KEY` for GitHub API (or similar)
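In a shell session that might look like the following; the values are placeholders, not real credentials:
```bash
# Placeholder values for illustration only; substitute your own credentials.
export ZED_ANTHROPIC_API_KEY="<your-anthropic-key>"
export ZED_GITHUB_API_KEY="<your-github-token>"
```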
## Usage
### Running Evaluations
```bash
# Run all tests
cargo run -p assistant_eval -- --all
# Run only specific languages
cargo run -p assistant_eval -- --all --languages python,rust
# Limit concurrent evaluations
cargo run -p assistant_eval -- --all --concurrency 5
# Limit number of exercises per language
cargo run -p assistant_eval -- --all --max-exercises-per-language 3
```
### Evaluation Template Types
The system supports three types of evaluation templates:
1. **ProjectCreation**: Tests the model's ability to create new implementations from scratch
2. **CodeModification**: Tests the model's ability to modify existing code to meet new requirements
3. **ConversationalGuidance**: Tests the model's ability to provide guidance without writing code
### Support Repo
The [zed-industries/zed-ace-framework](https://github.com/zed-industries/zed-ace-framework) contains the analytics and reporting scripts.

View File

@@ -1,6 +1,6 @@
use crate::git_commands::{run_git, setup_temp_repo};
use crate::headless_assistant::{HeadlessAppState, HeadlessAssistant};
use crate::{get_exercise_language, get_exercise_name};
use crate::{get_exercise_language, get_exercise_name, templates_eval::Template};
use agent::RequestKind;
use anyhow::{Result, anyhow};
use collections::HashMap;
@@ -18,6 +18,8 @@ use std::{
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct EvalResult {
pub exercise_name: String,
pub template_name: String,
pub score: String,
pub diff: String,
pub assistant_response: String,
pub elapsed_time_ms: u128,
@@ -27,6 +29,7 @@ pub struct EvalResult {
pub output_tokens: usize,
pub total_tokens: usize,
pub tool_use_counts: usize,
pub judge_model_name: String, // Added field for judge model name
}
pub struct EvalOutput {
@@ -248,6 +251,29 @@ pub async fn read_instructions(exercise_path: &Path) -> Result<String> {
Ok(instructions)
}
pub async fn read_example_solution(exercise_path: &Path, language: &str) -> Result<String> {
// Map the language to the file extension
let language_extension = match language {
"python" => "py",
"go" => "go",
"rust" => "rs",
"typescript" => "ts",
"javascript" => "js",
"ruby" => "rb",
"php" => "php",
"bash" => "sh",
"multi" => "diff",
"internal" => "diff",
_ => return Err(anyhow!("Unsupported language: {}", language)),
};
let example_path = exercise_path
.join(".meta")
.join(format!("example.{}", language_extension));
println!("Reading example solution from: {}", example_path.display());
let example = smol::unblock(move || std::fs::read_to_string(&example_path)).await?;
Ok(example)
}
pub async fn save_eval_results(exercise_path: &Path, results: Vec<EvalResult>) -> Result<()> {
let eval_dir = exercise_path.join("evaluation");
fs::create_dir_all(&eval_dir)?;
@@ -285,8 +311,12 @@ pub async fn save_eval_results(exercise_path: &Path, results: Vec<EvalResult>) -
// Group the new results by test name (exercise name)
for result in results {
let exercise_name = &result.exercise_name;
let template_name = &result.template_name;
println!("Adding result: exercise={}", exercise_name);
println!(
"Adding result: exercise={}, template={}",
exercise_name, template_name
);
// Ensure the exercise entry exists
if eval_data.get(exercise_name).is_none() {
@@ -299,7 +329,7 @@ pub async fn save_eval_results(exercise_path: &Path, results: Vec<EvalResult>) -
}
// Add this result under the timestamp with template name as key
eval_data[exercise_name][&timestamp] = serde_json::to_value(&result)?;
eval_data[exercise_name][&timestamp][template_name] = serde_json::to_value(&result)?;
}
// Write back to file with pretty formatting
@@ -314,7 +344,9 @@ pub async fn save_eval_results(exercise_path: &Path, results: Vec<EvalResult>) -
pub async fn run_exercise_eval(
exercise_path: PathBuf,
template: Template,
model: Arc<dyn LanguageModel>,
judge_model: Arc<dyn LanguageModel>,
app_state: Arc<HeadlessAppState>,
base_sha: String,
_framework_path: PathBuf,
@@ -327,15 +359,68 @@ pub async fn run_exercise_eval(
"\n\nWhen writing the code for this prompt, use {} to achieve the goal.",
language
));
let example_solution = read_example_solution(&exercise_path, &language).await?;
println!("Running evaluation for exercise: {}", exercise_name);
println!(
"Running evaluation for exercise: {} with template: {}",
exercise_name, template.name
);
// Create temporary directory with exercise files
let temp_dir = setup_temp_repo(&exercise_path, &base_sha).await?;
let temp_path = temp_dir.path().to_path_buf();
if template.name == "ProjectCreation" {
for entry in fs::read_dir(&temp_path)? {
let entry = entry?;
let path = entry.path();
// Skip directories that start with dot (like .docs, .meta, .git)
if path.is_dir()
&& path
.file_name()
.and_then(|name| name.to_str())
.map(|name| name.starts_with("."))
.unwrap_or(false)
{
continue;
}
// Delete regular files
if path.is_file() {
println!(" Deleting file: {}", path.display());
fs::remove_file(path)?;
}
}
// Commit the deletion so it shows up in the diff
run_git(&temp_path, &["add", "."]).await?;
run_git(
&temp_path,
&["commit", "-m", "Remove root files for clean slate"],
)
.await?;
}
let local_commit_sha = run_git(&temp_path, &["rev-parse", "HEAD"]).await?;
// Prepare prompt based on template
let prompt = match template.name {
"ProjectCreation" => format!(
"I need to create a new implementation for this exercise. Please create all the necessary files in the best location.\n\n{}",
instructions
),
"CodeModification" => format!(
"I need help updating my code to meet these requirements. Please modify the appropriate files:\n\n{}",
instructions
),
"ConversationalGuidance" => format!(
"I'm trying to solve this coding exercise but I'm not sure where to start. Can you help me understand the requirements and guide me through the solution process without writing code for me?\n\n{}",
instructions
),
_ => instructions.clone(),
};
let start_time = SystemTime::now();
// Create a basic eval struct to work with the existing system
@@ -345,7 +430,7 @@ pub async fn run_exercise_eval(
url: format!("file://{}", temp_path.display()),
base_sha: local_commit_sha, // Use the local commit SHA instead of the framework base SHA
},
user_prompt: instructions.clone(),
user_prompt: prompt,
};
// Run the evaluation
@@ -356,6 +441,79 @@ pub async fn run_exercise_eval(
// Get diff from git
let diff = eval_output.diff.clone();
// For project creation template, we need to compare with reference implementation
let judge_output = if template.name == "ProjectCreation" {
let project_judge_prompt = template
.content
.replace(
"<!-- ```requirements go here``` -->",
&format!("```\n{}\n```", instructions),
)
.replace(
"<!-- ```reference code goes here``` -->",
&format!("```{}\n{}\n```", language, example_solution),
)
.replace(
"<!-- ```git diff goes here``` -->",
&format!("```\n{}\n```", diff),
);
// Use the run_with_prompt method which we'll add to judge.rs
let judge = crate::judge::Judge {
original_diff: None,
original_message: Some(project_judge_prompt),
model: judge_model.clone(),
};
cx.update(|cx| judge.run_with_prompt(cx))?.await?
} else if template.name == "CodeModification" {
// For CodeModification, we'll compare the example solution with the LLM-generated solution
let code_judge_prompt = template
.content
.replace(
"<!-- ```reference code goes here``` -->",
&format!("```{}\n{}\n```", language, example_solution),
)
.replace(
"<!-- ```git diff goes here``` -->",
&format!("```\n{}\n```", diff),
);
// Use the run_with_prompt method
let judge = crate::judge::Judge {
original_diff: None,
original_message: Some(code_judge_prompt),
model: judge_model.clone(),
};
cx.update(|cx| judge.run_with_prompt(cx))?.await?
} else {
// Conversational template
let conv_judge_prompt = template
.content
.replace(
"<!-- ```query goes here``` -->",
&format!("```\n{}\n```", instructions),
)
.replace(
"<!-- ```transcript goes here``` -->",
&format!("```\n{}\n```", eval_output.last_message),
)
.replace(
"<!-- ```git diff goes here``` -->",
&format!("```\n{}\n```", diff),
);
// Use the run_with_prompt method for consistency
let judge = crate::judge::Judge {
original_diff: None,
original_message: Some(conv_judge_prompt),
model: judge_model.clone(),
};
cx.update(|cx| judge.run_with_prompt(cx))?.await?
};
let elapsed_time = start_time.elapsed()?;
// Calculate total tokens as the sum of input and output tokens
@@ -364,9 +522,14 @@ pub async fn run_exercise_eval(
let tool_use_counts = eval_output.tool_use_counts.values().sum::<u32>();
let total_tokens = input_tokens + output_tokens;
// Get judge model name
let judge_model_name = judge_model.id().0.to_string();
// Save results to evaluation directory
let result = EvalResult {
exercise_name: exercise_name.clone(),
template_name: template.name.to_string(),
score: judge_output.trim().to_string(),
diff,
assistant_response: eval_output.last_message.clone(),
elapsed_time_ms: elapsed_time.as_millis(),
@@ -378,6 +541,7 @@ pub async fn run_exercise_eval(
output_tokens: output_tokens.try_into().unwrap(),
total_tokens: total_tokens.try_into().unwrap(),
tool_use_counts: tool_use_counts.try_into().unwrap(),
judge_model_name, // Add judge model name to result
};
Ok(result)

View File

@@ -4,10 +4,12 @@ use assistant_tool::ToolWorkingSet;
use client::{Client, UserStore};
use collections::HashMap;
use dap::DapRegistry;
use gpui::{App, Entity, SemanticVersion, Subscription, Task, prelude::*};
use futures::StreamExt;
use gpui::{App, AsyncApp, Entity, SemanticVersion, Subscription, Task, prelude::*};
use language::LanguageRegistry;
use language_model::{
AuthenticateError, LanguageModel, LanguageModelProviderId, LanguageModelRegistry,
LanguageModelRequest,
};
use node_runtime::NodeRuntime;
use project::{Project, RealFs};
@@ -244,3 +246,34 @@ pub fn authenticate_model_provider(
let model_provider = model_registry.provider(&provider_id).unwrap();
model_provider.authenticate(cx)
}
pub async fn send_language_model_request(
model: Arc<dyn LanguageModel>,
request: LanguageModelRequest,
cx: &mut AsyncApp,
) -> anyhow::Result<String> {
match model.stream_completion_text(request, &cx).await {
Ok(mut stream) => {
let mut full_response = String::new();
// Process the response stream
while let Some(chunk_result) = stream.stream.next().await {
match chunk_result {
Ok(chunk_str) => {
full_response.push_str(&chunk_str);
}
Err(err) => {
return Err(anyhow!(
"Error receiving response from language model: {err}"
));
}
}
}
Ok(full_response)
}
Err(err) => Err(anyhow!(
"Failed to get response from language model. Error was: {err}"
)),
}
}

View File

@@ -0,0 +1,37 @@
use crate::headless_assistant::send_language_model_request;
use anyhow::anyhow;
use gpui::{App, Task};
use language_model::{
LanguageModel, LanguageModelRequest, LanguageModelRequestMessage, MessageContent, Role,
};
use std::sync::Arc;
pub struct Judge {
#[allow(dead_code)]
pub original_diff: Option<String>,
pub original_message: Option<String>,
pub model: Arc<dyn LanguageModel>,
}
impl Judge {
pub fn run_with_prompt(&self, cx: &mut App) -> Task<anyhow::Result<String>> {
let Some(prompt) = self.original_message.as_ref() else {
return Task::ready(Err(anyhow!("No prompt provided in original_message")));
};
let request = LanguageModelRequest {
messages: vec![LanguageModelRequestMessage {
role: Role::User,
content: vec![MessageContent::Text(prompt.clone())],
cache: false,
}],
temperature: Some(0.0),
tools: Vec::new(),
stop: Vec::new(),
};
let model = self.model.clone();
let request = request.clone();
cx.spawn(async move |cx| send_language_model_request(model, request, cx).await)
}
}

View File

@@ -2,6 +2,8 @@ mod eval;
mod get_exercise;
mod git_commands;
mod headless_assistant;
mod judge;
mod templates_eval;
use clap::Parser;
use eval::{run_exercise_eval, save_eval_results};
@@ -13,10 +15,11 @@ use headless_assistant::{authenticate_model_provider, find_model};
use language_model::LanguageModelRegistry;
use reqwest_client::ReqwestClient;
use std::{path::PathBuf, sync::Arc};
use templates_eval::all_templates;
#[derive(Parser, Debug)]
#[command(
name = "agent_eval",
name = "assistant_eval",
disable_version_flag = true,
before_help = "Tool eval runner"
)]
@@ -34,17 +37,24 @@ struct Args {
/// Name of the model (default: "claude-3-7-sonnet-latest")
#[arg(long, default_value = "claude-3-7-sonnet-latest")]
model_name: String,
/// Name of the editor model (default: value of `--model_name`).
/// Name of the judge model (default: value of `--model_name`).
#[arg(long)]
editor_model_name: Option<String>,
judge_model_name: Option<String>,
/// Number of evaluations to run concurrently (default: 3)
#[arg(short, long, default_value = "5")]
#[arg(short, long, default_value = "3")]
concurrency: usize,
/// Maximum number of exercises to evaluate per language
#[arg(long)]
max_exercises_per_language: Option<usize>,
}
// First, let's define the order in which templates should be executed
const TEMPLATE_EXECUTION_ORDER: [&str; 3] = [
"ProjectCreation",
"CodeModification",
"ConversationalGuidance",
];
fn main() {
env_logger::init();
let args = Args::parse();
@@ -66,7 +76,7 @@ fn main() {
let app_state = headless_assistant::init(cx);
let model = find_model(&args.model_name, cx).unwrap();
let editor_model = if let Some(model_name) = &args.editor_model_name {
let judge_model = if let Some(model_name) = &args.judge_model_name {
find_model(model_name, cx).unwrap()
} else {
model.clone()
@@ -77,7 +87,7 @@ fn main() {
});
let model_provider_id = model.provider_id();
let editor_model_provider_id = editor_model.provider_id();
let judge_model_provider_id = judge_model.provider_id();
let framework_path_clone = framework_path.clone();
let languages_clone = languages.clone();
@@ -90,17 +100,15 @@ fn main() {
.unwrap()
.await
.unwrap();
cx.update(|cx| authenticate_model_provider(editor_model_provider_id.clone(), cx))
cx.update(|cx| authenticate_model_provider(judge_model_provider_id.clone(), cx))
.unwrap()
.await
.unwrap();
println!("framework path: {}", framework_path_clone.display());
// Read base SHA from setup.json
let base_sha = read_base_sha(&framework_path_clone).await.unwrap();
println!("base sha: {}", base_sha);
// Find all exercises for the specified languages
let all_exercises = find_exercises(
&framework_path_clone,
&languages_clone
@@ -132,12 +140,23 @@ fn main() {
println!("Will run {} exercises", exercises_to_run.len());
// Get all templates and sort them according to the execution order
let mut templates = all_templates();
templates.sort_by_key(|template| {
TEMPLATE_EXECUTION_ORDER
.iter()
.position(|&name| name == template.name)
.unwrap_or(usize::MAX)
});
// Create exercise eval tasks - each exercise is a single task that will run templates sequentially
let exercise_tasks: Vec<_> = exercises_to_run
.into_iter()
.map(|exercise_path| {
let exercise_name = get_exercise_name(&exercise_path);
let templates_clone = templates.clone();
let model_clone = model.clone();
let judge_model_clone = judge_model.clone();
let app_state_clone = app_state.clone();
let base_sha_clone = base_sha.clone();
let framework_path_clone = framework_path_clone.clone();
@@ -147,22 +166,56 @@ fn main() {
println!("Processing exercise: {}", exercise_name);
let mut exercise_results = Vec::new();
match run_exercise_eval(
exercise_path.clone(),
model_clone.clone(),
app_state_clone.clone(),
base_sha_clone.clone(),
framework_path_clone.clone(),
cx_clone.clone(),
)
.await
{
Ok(result) => {
println!("Completed {}", exercise_name);
exercise_results.push(result);
}
// Determine the language for this exercise
let language = match get_exercise_language(&exercise_path) {
Ok(lang) => lang,
Err(err) => {
println!("Error running {}: {}", exercise_name, err);
println!(
"Error determining language for {}: {}",
exercise_name, err
);
return exercise_results;
}
};
// Run each template sequentially for this exercise
for template in templates_clone {
// For "multi" or "internal" language, only run the CodeModification template
if (language == "multi" || language == "internal")
&& template.name != "CodeModification"
{
println!(
"Skipping {} template for {} language",
template.name, language
);
continue;
}
match run_exercise_eval(
exercise_path.clone(),
template.clone(),
model_clone.clone(),
judge_model_clone.clone(),
app_state_clone.clone(),
base_sha_clone.clone(),
framework_path_clone.clone(),
cx_clone.clone(),
)
.await
{
Ok(result) => {
println!(
"Completed {} with template {} - score: {}",
exercise_name, template.name, result.score
);
exercise_results.push(result);
}
Err(err) => {
println!(
"Error running {} with template {}: {}",
exercise_name, template.name, err
);
}
}
}

View File

@@ -0,0 +1,210 @@
#[derive(Clone, Debug)]
pub struct Template {
pub name: &'static str,
pub content: &'static str,
}
pub fn all_templates() -> Vec<Template> {
vec![
Template {
name: "ProjectCreation",
content: r#"
# Project Creation Evaluation Template
## Instructions
Evaluate how well the AI assistant created a new implementation from scratch. Score it between 0.0 and 1.0 based on quality and fulfillment of requirements.
- 1.0 = Perfect implementation that creates all necessary files with correct functionality.
- 0.0 = Completely fails to create working files or meet requirements.
Note: A git diff output is required. If no code changes are provided (i.e., no git diff output), the score must be 0.0.
## Evaluation Criteria
Please consider the following aspects in order of importance:
1. **File Creation (25%)**
- Did the assistant create all necessary files?
- Are the files appropriately named and organized?
- Did the assistant create a complete solution without missing components?
2. **Functional Correctness (40%)**
- Does the implementation fulfill all specified requirements?
- Does it handle edge cases properly?
- Is it free of logical errors and bugs?
- Do all components work together as expected?
3. **Code Quality (20%)**
- Is the code well-structured, readable and well-documented?
- Does it follow language-specific best practices?
- Is there proper error handling?
- Are naming conventions clear and consistent?
4. **Architecture Design (15%)**
- Is the code modular and extensible?
- Is there proper separation of concerns?
- Are appropriate design patterns used?
- Is the overall architecture appropriate for the requirements?
## Input
Requirements:
<!-- ```requirements go here``` -->
Reference Implementation:
<!-- ```reference code goes here``` -->
AI-Generated Implementation (git diff output):
<!-- ```git diff goes here``` -->
## Output Format
THE ONLY OUTPUT SHOULD BE A SCORE BETWEEN 0.0 AND 1.0.
EXAMPLE ONE:
0.92
EXAMPLE TWO:
0.85
EXAMPLE THREE:
0.78
"#,
},
Template {
name: "CodeModification",
content: r#"
# Code Modification Evaluation Template
## Instructions
Evaluate how well the AI assistant modified existing code to meet requirements. Score between 0.0 and 1.0 based on quality and appropriateness of changes.
- 1.0 = Perfect modifications that correctly implement all requirements.
- 0.0 = Failed to make appropriate changes or introduced serious errors.
## Evaluation Criteria
Please consider the following aspects in order of importance:
1. **Functional Correctness (50%)**
- Do the modifications correctly implement the requirements?
- Did the assistant modify the right files and code sections?
- Are the changes free of bugs and logical errors?
- Do the modifications maintain compatibility with existing code?
2. **Modification Approach (25%)**
- Are the changes minimal and focused on what needs to be changed?
- Did the assistant avoid unnecessary modifications?
- Are the changes integrated seamlessly with the existing codebase?
- Did the assistant preserve the original code style and patterns?
3. **Code Quality (15%)**
- Are the modifications well-structured and documented?
- Do they follow the same conventions as the original code?
- Is there proper error handling in the modified code?
- Are the changes readable and maintainable?
4. **Solution Completeness (10%)**
- Do the modifications completely address all requirements?
- Are there any missing changes or overlooked requirements?
- Did the assistant consider all necessary edge cases?
## Input
Original:
<!-- ```reference code goes here``` -->
New (git diff output):
<!-- ```git diff goes here``` -->
## Output Format
THE ONLY OUTPUT SHOULD BE A SCORE BETWEEN 0.0 AND 1.0.
EXAMPLE ONE:
0.92
EXAMPLE TWO:
0.85
EXAMPLE THREE:
0.78
"#,
},
Template {
name: "ConversationalGuidance",
content: r#"
# Conversational Guidance Evaluation Template
## Instructions
Evaluate the quality of the AI assistant's conversational guidance and score it between 0.0 and 1.0.
- 1.0 = Perfect guidance with ideal information gathering, clarification, and advice without writing code.
- 0.0 = Completely unhelpful, inappropriate guidance, or wrote code when it should not have.
## Evaluation Criteria
ABSOLUTE REQUIREMENT:
- The assistant should NOT generate complete code solutions in conversation mode.
- If the git diff shows the assistant wrote complete code, the score should be significantly reduced.
1. **Information Gathering Effectiveness (30%)**
- Did the assistant ask relevant and precise questions?
- Did it efficiently narrow down the problem scope?
- Did it avoid unnecessary or redundant questions?
- Was questioning appropriately paced and contextual?
2. **Conceptual Guidance (30%)**
- Did the assistant provide high-level approaches and strategies?
- Did it explain relevant concepts and algorithms?
- Did it offer planning advice without implementing the solution?
- Did it suggest a structured approach to solving the problem?
3. **Educational Value (20%)**
- Did the assistant help the user understand the problem better?
- Did it provide explanations that would help the user learn?
- Did it guide without simply giving away answers?
- Did it encourage the user to think through parts of the problem?
4. **Conversation Quality (20%)**
- Was the conversation logically structured and easy to follow?
- Did the assistant maintain appropriate context throughout?
- Was the interaction helpful without being condescending?
- Did the conversation reach a satisfactory conclusion with clear next steps?
## Input
Initial Query:
<!-- ```query goes here``` -->
Conversation Transcript:
<!-- ```transcript goes here``` -->
Git Diff:
<!-- ```git diff goes here``` -->
## Output Format
THE ONLY OUTPUT SHOULD BE A SCORE BETWEEN 0.0 AND 1.0.
EXAMPLE ONE:
0.92
EXAMPLE TWO:
0.85
EXAMPLE THREE:
0.78
"#,
},
]
}

View File

@@ -48,7 +48,7 @@ pub trait Tool: 'static + Send + Sync {
/// Returns true iff the tool needs the user's confirmation
/// before having permission to run.
fn needs_confirmation(&self, input: &serde_json::Value, cx: &App) -> bool;
fn needs_confirmation(&self) -> bool;
/// Returns the JSON schema that describes the tool's input.
fn input_schema(&self, _: LanguageModelToolSchemaFormat) -> serde_json::Value {

View File

@@ -23,6 +23,7 @@ http_client.workspace = true
itertools.workspace = true
language.workspace = true
language_model.workspace = true
lsp.workspace = true
project.workspace = true
regex.workspace = true
schemars.workspace = true

View File

@@ -1,5 +1,6 @@
mod bash_tool;
mod batch_tool;
mod code_symbol_iter;
mod code_symbols_tool;
mod copy_path_tool;
mod create_directory_tool;

View File

@@ -1,8 +1,6 @@
use crate::schema::json_schema_for;
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use futures::io::BufReader;
use futures::{AsyncBufReadExt, AsyncReadExt, FutureExt};
use gpui::{App, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::Project;
@@ -16,7 +14,7 @@ use util::markdown::MarkdownString;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct BashToolInput {
/// The bash one-liner command to execute.
/// The bash command to execute as a one-liner.
command: String,
/// Working directory for the command. This must be one of the root directories of the project.
cd: String,
@@ -29,7 +27,7 @@ impl Tool for BashTool {
"bash".to_string()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
true
}
@@ -123,113 +121,33 @@ impl Tool for BashTool {
worktree.read(cx).abs_path()
};
cx.spawn(async move |cx| {
cx.spawn(async move |_| {
// Add 2>&1 to merge stderr into stdout for proper interleaving.
let command = format!("({}) 2>&1", input.command);
let mut cmd = new_smol_command("bash")
let output = new_smol_command("bash")
.arg("-c")
.arg(&command)
.current_dir(working_dir)
.stdout(std::process::Stdio::piped())
.spawn()
.output()
.await
.context("Failed to execute bash command")?;
// Capture stdout with a limit
let stdout = cmd.stdout.take().unwrap();
let mut reader = BufReader::new(stdout);
let output_string = String::from_utf8_lossy(&output.stdout).to_string();
const MESSAGE_1: &str = "Command output too long. The first ";
const MESSAGE_2: &str = " bytes:\n\n";
const ERR_MESSAGE_1: &str = "Command failed with exit code ";
const ERR_MESSAGE_2: &str = "\n\n";
const STDOUT_LIMIT: usize = 8192;
const LIMIT: usize = STDOUT_LIMIT
- (MESSAGE_1.len()
+ (STDOUT_LIMIT.ilog10() as usize + 1) // byte count
+ MESSAGE_2.len()
+ ERR_MESSAGE_1.len()
+ 3 // status code
+ ERR_MESSAGE_2.len());
// Read one more byte to determine whether the output was truncated
let mut buffer = vec![0; LIMIT + 1];
let bytes_read = reader.read(&mut buffer).await?;
let mut timer = cx
.background_executor()
.timer(std::time::Duration::from_secs(10))
.fuse();
// Repeatedly fill the output reader's buffer without copying it.
loop {
let mut skipped_bytes = reader.fill_buf().fuse();
futures::select! {
skipped_bytes = skipped_bytes => {
let skipped_bytes = skipped_bytes?;
if skipped_bytes.is_empty() {
break;
}
let skipped_bytes_len = skipped_bytes.len();
reader.consume_unpin(skipped_bytes_len);
timer = cx
.background_executor()
.timer(std::time::Duration::from_secs(10))
.fuse();
}
_ = timer => {
return Err(anyhow!("Command timed out. Output so far:\n{}", String::from_utf8_lossy(&buffer[..bytes_read])));
}
}
}
let output_bytes = &buffer[..bytes_read];
// Let the process continue running
let status = cmd.status().await.context("Failed to get command status")?;
let output_string = if bytes_read > LIMIT {
// Valid to find `\n` in UTF-8 since 0-127 ASCII characters are not used in
// multi-byte characters.
let last_line_ix = output_bytes.iter().rposition(|b| *b == b'\n');
let output_string = String::from_utf8_lossy(
&output_bytes[..last_line_ix.unwrap_or(output_bytes.len())],
);
format!(
"{}{}{}{}",
MESSAGE_1,
output_string.len(),
MESSAGE_2,
output_string
)
} else {
String::from_utf8_lossy(&output_bytes).into()
};
let output_with_status = if status.success() {
if output.status.success() {
if output_string.is_empty() {
"Command executed successfully.".to_string()
Ok("Command executed successfully.".to_string())
} else {
output_string.to_string()
Ok(output_string)
}
} else {
format!(
"{}{}{}{}",
ERR_MESSAGE_1,
status.code().unwrap_or(-1),
ERR_MESSAGE_2,
output_string,
)
};
debug_assert!(output_with_status.len() <= STDOUT_LIMIT);
Ok(output_with_status)
Ok(format!(
"Command failed with exit code {}\n{}",
output.status.code().unwrap_or(-1),
&output_string
))
}
})
}
}

View File

@@ -151,17 +151,8 @@ impl Tool for BatchTool {
"batch_tool".into()
}
fn needs_confirmation(&self, input: &serde_json::Value, cx: &App) -> bool {
serde_json::from_value::<BatchToolInput>(input.clone())
.map(|input| {
let working_set = ToolWorkingSet::default();
input.invocations.iter().any(|invocation| {
working_set
.tool(&invocation.name, cx)
.map_or(false, |tool| tool.needs_confirmation(&invocation.input, cx))
})
})
.unwrap_or(false)
fn needs_confirmation(&self) -> bool {
true
}
fn description(&self) -> String {

View File

@@ -0,0 +1,88 @@
use project::DocumentSymbol;
use regex::Regex;
#[derive(Debug, Clone)]
pub struct Entry {
pub name: String,
pub kind: lsp::SymbolKind,
pub depth: u32,
pub start_line: usize,
pub end_line: usize,
}
/// An iterator that filters document symbols based on a regex pattern.
/// This iterator recursively traverses the document symbol tree, incrementing depth for child symbols.
#[derive(Debug, Clone)]
pub struct CodeSymbolIterator<'a> {
symbols: &'a [DocumentSymbol],
regex: Option<Regex>,
// Stack of (symbol, depth) pairs to process
pending_symbols: Vec<(&'a DocumentSymbol, u32)>,
current_index: usize,
current_depth: u32,
}
impl<'a> CodeSymbolIterator<'a> {
pub fn new(symbols: &'a [DocumentSymbol], regex: Option<Regex>) -> Self {
Self {
symbols,
regex,
pending_symbols: Vec::new(),
current_index: 0,
current_depth: 0,
}
}
}
impl Iterator for CodeSymbolIterator<'_> {
type Item = Entry;
fn next(&mut self) -> Option<Self::Item> {
if let Some((symbol, depth)) = self.pending_symbols.pop() {
for child in symbol.children.iter().rev() {
self.pending_symbols.push((child, depth + 1));
}
return Some(Entry {
name: symbol.name.clone(),
kind: symbol.kind,
depth,
start_line: symbol.range.start.0.row as usize,
end_line: symbol.range.end.0.row as usize,
});
}
while self.current_index < self.symbols.len() {
let regex = self.regex.as_ref();
let symbol = &self.symbols[self.current_index];
self.current_index += 1;
if regex.is_none_or(|regex| regex.is_match(&symbol.name)) {
// Push in reverse order to maintain traversal order
for child in symbol.children.iter().rev() {
self.pending_symbols.push((child, self.current_depth + 1));
}
return Some(Entry {
name: symbol.name.clone(),
kind: symbol.kind,
depth: self.current_depth,
start_line: symbol.range.start.0.row as usize,
end_line: symbol.range.end.0.row as usize,
});
} else {
// Even if parent doesn't match, push children to check them later
for child in symbol.children.iter().rev() {
self.pending_symbols.push((child, self.current_depth + 1));
}
// Check if any pending children match our criteria
if let Some(result) = self.next() {
return Some(result);
}
}
}
None
}
}

View File

@@ -1,21 +1,24 @@
use std::fmt::Write;
use std::fmt::{self, Write};
use std::path::PathBuf;
use std::sync::Arc;
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use collections::IndexMap;
use gpui::{App, AsyncApp, Entity, Task};
use language::{OutlineItem, ParseStatus, Point};
use language::{CodeLabel, Language, LanguageRegistry};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::{Project, Symbol};
use lsp::SymbolKind;
use project::{DocumentSymbol, Project, Symbol};
use regex::{Regex, RegexBuilder};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use ui::IconName;
use util::markdown::MarkdownString;
use crate::code_symbol_iter::{CodeSymbolIterator, Entry};
use crate::schema::json_schema_for;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct CodeSymbolsInput {
/// The relative path of the source code file to read and get the symbols for.
@@ -79,7 +82,7 @@ impl Tool for CodeSymbolsTool {
"code_symbols".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
false
}
@@ -177,28 +180,24 @@ pub async fn file_outline(
action_log.buffer_read(buffer.clone(), cx);
})?;
// Wait until the buffer has been fully parsed, so that we can read its outline.
let mut parse_status = buffer.read_with(cx, |buffer, _| buffer.parse_status())?;
while parse_status
.recv()
.await
.map_or(false, |status| status != ParseStatus::Idle)
{}
let symbols = project
.update(cx, |project, cx| project.document_symbols(&buffer, cx))?
.await?;
let snapshot = buffer.read_with(cx, |buffer, _| buffer.snapshot())?;
let Some(outline) = snapshot.outline(None) else {
return Err(anyhow!("No outline information available for this file."));
};
if symbols.is_empty() {
return Err(
if buffer.read_with(cx, |buffer, _| buffer.snapshot().is_empty())? {
anyhow!("This file is empty.")
} else {
anyhow!("No outline information available for this file.")
},
);
}
render_outline(
outline
.items
.into_iter()
.map(|item| item.to_point(&snapshot)),
regex,
offset,
)
.await
let language = buffer.read_with(cx, |buffer, _| buffer.language().cloned())?;
let language_registry = project.read_with(cx, |project, _| project.languages().clone())?;
render_outline(&symbols, language, language_registry, regex, offset).await
}
async fn project_symbols(
@@ -293,27 +292,61 @@ async fn project_symbols(
}
async fn render_outline(
items: impl IntoIterator<Item = OutlineItem<Point>>,
symbols: &[DocumentSymbol],
language: Option<Arc<Language>>,
registry: Arc<LanguageRegistry>,
regex: Option<Regex>,
offset: u32,
) -> Result<String> {
const RESULTS_PER_PAGE_USIZE: usize = RESULTS_PER_PAGE as usize;
let entries = CodeSymbolIterator::new(symbols, regex.clone())
.skip(offset as usize)
// Take 1 more than RESULTS_PER_PAGE so we can tell if there are more results.
.take(RESULTS_PER_PAGE_USIZE.saturating_add(1))
.collect::<Vec<Entry>>();
let has_more = entries.len() > RESULTS_PER_PAGE_USIZE;
let mut items = items.into_iter().skip(offset as usize);
// Get language-specific labels, if available
let labels = match &language {
Some(lang) => {
let entries_for_labels: Vec<(String, SymbolKind)> = entries
.iter()
.take(RESULTS_PER_PAGE_USIZE)
.map(|entry| (entry.name.clone(), entry.kind))
.collect();
let entries = items
.by_ref()
.filter(|item| {
regex
.as_ref()
.is_none_or(|regex| regex.is_match(&item.text))
})
.take(RESULTS_PER_PAGE_USIZE)
.collect::<Vec<_>>();
let has_more = items.next().is_some();
let lang_name = lang.name();
if let Some(lsp_adapter) = registry.lsp_adapters(&lang_name).first().cloned() {
lsp_adapter
.labels_for_symbols(&entries_for_labels, lang)
.await
.ok()
} else {
None
}
}
None => None,
};
let mut output = String::new();
let entries_rendered = render_entries(&mut output, entries);
let entries_rendered = match &labels {
Some(label_list) => render_entries(
&mut output,
entries
.into_iter()
.take(RESULTS_PER_PAGE_USIZE)
.zip(label_list.iter())
.map(|(entry, label)| (entry, label.as_ref())),
),
None => render_entries(
&mut output,
entries
.into_iter()
.take(RESULTS_PER_PAGE_USIZE)
.map(|entry| (entry, None)),
),
};
// Calculate pagination information
let page_start = offset + 1;
@@ -339,19 +372,31 @@ async fn render_outline(
Ok(output)
}
fn render_entries(output: &mut String, items: impl IntoIterator<Item = OutlineItem<Point>>) -> u32 {
fn render_entries<'a>(
output: &mut String,
entries: impl IntoIterator<Item = (Entry, Option<&'a CodeLabel>)>,
) -> u32 {
let mut entries_rendered = 0;
for item in items {
for (entry, label) in entries {
// Indent based on depth ("" for level 0, " " for level 1, etc.)
for _ in 0..item.depth {
output.push(' ');
for _ in 0..entry.depth {
output.push_str(" ");
}
match label {
Some(label) => {
output.push_str(label.text());
}
None => {
write_symbol_kind(output, entry.kind).ok();
output.push_str(&entry.name);
}
}
output.push_str(&item.text);
// Add position information - convert to 1-based line numbers for display
let start_line = item.range.start.row + 1;
let end_line = item.range.end.row + 1;
let start_line = entry.start_line + 1;
let end_line = entry.end_line + 1;
if start_line == end_line {
writeln!(output, " [L{}]", start_line).ok();
@@ -363,3 +408,38 @@ fn render_entries(output: &mut String, items: impl IntoIterator<Item = OutlineIt
entries_rendered
}
// We may not have a language server adapter that provides language-specific
// ways to translate a SymbolKind into a string. In that situation,
// fall back on some reasonable default strings to render.
fn write_symbol_kind(buf: &mut String, kind: SymbolKind) -> Result<(), fmt::Error> {
match kind {
SymbolKind::FILE => write!(buf, "file "),
SymbolKind::MODULE => write!(buf, "module "),
SymbolKind::NAMESPACE => write!(buf, "namespace "),
SymbolKind::PACKAGE => write!(buf, "package "),
SymbolKind::CLASS => write!(buf, "class "),
SymbolKind::METHOD => write!(buf, "method "),
SymbolKind::PROPERTY => write!(buf, "property "),
SymbolKind::FIELD => write!(buf, "field "),
SymbolKind::CONSTRUCTOR => write!(buf, "constructor "),
SymbolKind::ENUM => write!(buf, "enum "),
SymbolKind::INTERFACE => write!(buf, "interface "),
SymbolKind::FUNCTION => write!(buf, "function "),
SymbolKind::VARIABLE => write!(buf, "variable "),
SymbolKind::CONSTANT => write!(buf, "constant "),
SymbolKind::STRING => write!(buf, "string "),
SymbolKind::NUMBER => write!(buf, "number "),
SymbolKind::BOOLEAN => write!(buf, "boolean "),
SymbolKind::ARRAY => write!(buf, "array "),
SymbolKind::OBJECT => write!(buf, "object "),
SymbolKind::KEY => write!(buf, "key "),
SymbolKind::NULL => write!(buf, "null "),
SymbolKind::ENUM_MEMBER => write!(buf, "enum member "),
SymbolKind::STRUCT => write!(buf, "struct "),
SymbolKind::EVENT => write!(buf, "event "),
SymbolKind::OPERATOR => write!(buf, "operator "),
SymbolKind::TYPE_PARAMETER => write!(buf, "type parameter "),
_ => Ok(()),
}
}

View File

@@ -43,7 +43,7 @@ impl Tool for CopyPathTool {
"copy_path".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
true
}

View File

@@ -33,7 +33,7 @@ impl Tool for CreateDirectoryTool {
"create_directory".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
true
}

View File

@@ -40,7 +40,7 @@ impl Tool for CreateFileTool {
"create_file".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
false
}

View File

@@ -33,7 +33,7 @@ impl Tool for DeletePathTool {
"delete_path".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
true
}

View File

@@ -46,7 +46,7 @@ impl Tool for DiagnosticsTool {
"diagnostics".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
false
}

View File

@@ -116,7 +116,7 @@ impl Tool for FetchTool {
"fetch".to_string()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
true
}

View File

@@ -129,7 +129,7 @@ impl Tool for FindReplaceFileTool {
"find_replace_file".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
false
}

View File

@@ -44,7 +44,7 @@ impl Tool for ListDirectoryTool {
"list_directory".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
false
}

View File

@@ -42,7 +42,7 @@ impl Tool for MovePathTool {
"move_path".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
true
}

View File

@@ -33,7 +33,7 @@ impl Tool for NowTool {
"now".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
false
}

View File

@@ -23,7 +23,7 @@ impl Tool for OpenTool {
"open".to_string()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
true
}

View File

@@ -41,7 +41,7 @@ impl Tool for PathSearchTool {
"path_search".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
false
}

View File

@@ -1,6 +1,7 @@
use std::sync::Arc;
use crate::{code_symbols_tool::file_outline, schema::json_schema_for};
use crate::code_symbols_tool::file_outline;
use crate::schema::json_schema_for;
use anyhow::{Result, anyhow};
use assistant_tool::{ActionLog, Tool};
use gpui::{App, Entity, Task};
@@ -15,7 +16,7 @@ use util::markdown::MarkdownString;
/// If the model requests to read a file whose size exceeds this, then
/// the tool will return an error along with the model's symbol outline,
/// and suggest trying again using line ranges from the outline.
const MAX_FILE_SIZE_TO_READ: usize = 16384;
const MAX_FILE_SIZE_TO_READ: usize = 4096;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct ReadFileToolInput {
@@ -51,7 +52,7 @@ impl Tool for ReadFileTool {
"read_file".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
false
}
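
The doc comment in this hunk explains the oversized-file behavior: rather than returning the whole file, the tool reports an error that carries the file's outline and suggests retrying `read_file` with a line range. A minimal sketch of that size check; the outline helper and the retry wording below are illustrative stand-ins, not the tool's real internals:

use anyhow::{Result, anyhow};

const MAX_FILE_SIZE_TO_READ: usize = 4096;

// Hypothetical stand-in for the real outline generation, which walks a
// language-aware symbol tree rather than matching line prefixes.
fn file_outline_sketch(contents: &str) -> String {
    contents
        .lines()
        .enumerate()
        .filter(|(_, line)| {
            let line = line.trim_start();
            line.starts_with("fn ") || line.starts_with("struct ") || line.starts_with("impl ")
        })
        .map(|(i, line)| format!("{}: {}", i + 1, line.trim()))
        .collect::<Vec<_>>()
        .join("\n")
}

fn read_file_or_outline(contents: &str) -> Result<String> {
    if contents.len() <= MAX_FILE_SIZE_TO_READ {
        return Ok(contents.to_string());
    }
    // Too large: return an error carrying the outline plus a retry hint, so the
    // model can call the tool again with a specific line range.
    Err(anyhow!(
        "file is {} bytes (limit {} bytes); outline:\n{}\nCall read_file again with a line range from the outline.",
        contents.len(),
        MAX_FILE_SIZE_TO_READ,
        file_outline_sketch(contents)
    ))
}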

View File

@@ -44,7 +44,7 @@ impl Tool for RegexSearchTool {
"regex_search".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
false
}

View File

@@ -72,7 +72,7 @@ impl Tool for SymbolInfoTool {
"symbol_info".into()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
false
}

View File

@@ -24,7 +24,7 @@ impl Tool for ThinkingTool {
"thinking".to_string()
}
fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
fn needs_confirmation(&self) -> bool {
false
}

View File

@@ -131,9 +131,8 @@ impl Render for Breadcrumbs {
}),
),
None => element
// Match the height and padding of the `ButtonLike` in the other arm.
// Match the height of the `ButtonLike` in the other arm.
.h(rems_from_px(22.))
.pl_1()
.child(breadcrumbs_stack),
}
}

View File

@@ -19,7 +19,7 @@ use livekit_client::{self as livekit, TrackSid};
use postage::{sink::Sink, stream::Stream, watch};
use project::Project;
use settings::Settings as _;
use std::{any::Any, future::Future, mem, rc::Rc, sync::Arc, time::Duration};
use std::{any::Any, future::Future, mem, sync::Arc, time::Duration};
use util::{ResultExt, TryFutureExt, post_inc};
pub const RECONNECT_TIMEOUT: Duration = Duration::from_secs(30);
@@ -944,8 +944,8 @@ impl Room {
)
})?;
if self.live_kit.as_ref().map_or(true, |kit| kit.deafened) {
if publication.is_audio() {
publication.set_enabled(false, cx);
if matches!(track, livekit_client::RemoteTrack::Audio(_)) {
track.set_enabled(false, cx);
}
}
match track {
@@ -1594,7 +1594,7 @@ fn spawn_room_connection(
let muted_by_user = Room::mute_on_join(cx);
this.live_kit = Some(LiveKitRoom {
room: Rc::new(room),
room: Arc::new(room),
screen_track: LocalTrack::None,
microphone_track: LocalTrack::None,
next_publish_id: 0,
@@ -1617,7 +1617,7 @@ fn spawn_room_connection(
}
struct LiveKitRoom {
room: Rc<livekit::Room>,
room: Arc<livekit::Room>,
screen_track: LocalTrack,
microphone_track: LocalTrack,
/// Tracks whether we're currently in a muted state due to auto-mute from deafening or manual mute performed by user.
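
The deafen handling in this hunk is the same rule on both sides: while the local participant is deafened, remote audio tracks are disabled as they arrive, and video is left alone. A reduced sketch of that rule, with stand-in types instead of the real `livekit_client` ones:

// Stand-in types for illustration; the real ones come from livekit_client.
enum RemoteTrackSketch {
    Audio { enabled: bool },
    Video { enabled: bool },
}

fn apply_deafen_on_subscribe(deafened: bool, track: &mut RemoteTrackSketch) {
    // Only audio is muted by deafening; video tracks keep playing.
    if deafened {
        if let RemoteTrackSketch::Audio { enabled } = track {
            *enabled = false;
        }
    }
}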

View File

@@ -18,7 +18,7 @@ test-support = ["clock/test-support", "collections/test-support", "gpui/test-sup
[dependencies]
anyhow.workspace = true
async-recursion = "0.3"
async-tungstenite = { workspace = true, features = ["tokio", "tokio-rustls-manual-roots"] }
async-tungstenite = { workspace = true, features = ["async-std", "async-tls"] }
chrono = { workspace = true, features = ["serde"] }
clock.workspace = true
collections.workspace = true
@@ -26,7 +26,6 @@ credentials_provider.workspace = true
feature_flags.workspace = true
futures.workspace = true
gpui.workspace = true
gpui_tokio.workspace = true
http_client.workspace = true
http_client_tls.workspace = true
log.workspace = true
@@ -52,7 +51,6 @@ url.workspace = true
util.workspace = true
worktree.workspace = true
telemetry.workspace = true
tokio.workspace = true
workspace-hack.workspace = true
[dev-dependencies]
@@ -69,3 +67,4 @@ windows.workspace = true
[target.'cfg(target_os = "macos")'.dependencies]
cocoa.workspace = true
async-native-tls = { version = "0.5.0", features = ["vendored"] }

View File

@@ -20,7 +20,7 @@ use futures::{
AsyncReadExt, FutureExt, SinkExt, Stream, StreamExt, TryFutureExt as _, TryStreamExt,
channel::oneshot, future::BoxFuture,
};
use gpui::{App, AsyncApp, Entity, Global, Task, WeakEntity, actions};
use gpui::{App, AppContext as _, AsyncApp, Entity, Global, Task, WeakEntity, actions};
use http_client::{AsyncBody, HttpClient, HttpClientWithUrl};
use parking_lot::RwLock;
use postage::watch;
@@ -1086,7 +1086,7 @@ impl Client {
let rpc_url = self.rpc_url(http, release_channel);
let system_id = self.telemetry.system_id();
let metrics_id = self.telemetry.metrics_id();
cx.spawn(async move |cx| {
cx.background_spawn(async move {
use HttpOrHttps::*;
#[derive(Debug)]
@@ -1105,12 +1105,7 @@ impl Client {
.host_str()
.zip(rpc_url.port_or_known_default())
.ok_or_else(|| anyhow!("missing host in rpc url"))?;
let stream = {
let handle = cx.update(|cx| gpui_tokio::Tokio::handle(cx)).ok().unwrap();
let _guard = handle.enter();
connect_socks_proxy_stream(proxy.as_ref(), rpc_host).await?
};
let stream = connect_socks_proxy_stream(proxy.as_ref(), rpc_host).await?;
log::info!("connected to rpc endpoint {}", rpc_url);
@@ -1149,19 +1144,30 @@ impl Client {
request_headers.insert("x-zed-metrics-id", HeaderValue::from_str(&metrics_id)?);
}
let (stream, _) = async_tungstenite::tokio::client_async_tls_with_connector_and_config(
request,
stream,
Some(Arc::new(http_client_tls::tls_config()).into()),
None,
)
.await?;
Ok(Connection::new(
stream
.map_err(|error| anyhow!(error))
.sink_map_err(|error| anyhow!(error)),
))
match url_scheme {
Https => {
let (stream, _) =
async_tungstenite::async_tls::client_async_tls_with_connector(
request,
stream,
Some(http_client_tls::tls_config().into()),
)
.await?;
Ok(Connection::new(
stream
.map_err(|error| anyhow!(error))
.sink_map_err(|error| anyhow!(error)),
))
}
Http => {
let (stream, _) = async_tungstenite::client_async(request, stream).await?;
Ok(Connection::new(
stream
.map_err(|error| anyhow!(error))
.sink_map_err(|error| anyhow!(error)),
))
}
}
})
}
@@ -1633,7 +1639,7 @@ mod tests {
use crate::test::FakeServer;
use clock::FakeSystemClock;
use gpui::{AppContext as _, BackgroundExecutor, TestAppContext};
use gpui::{BackgroundExecutor, TestAppContext};
use http_client::FakeHttpClient;
use parking_lot::Mutex;
use proto::TypedEnvelope;

View File

@@ -1,7 +1,11 @@
//! socks proxy
use anyhow::{Result, anyhow};
use futures::io::{AsyncRead, AsyncWrite};
use http_client::Uri;
use tokio_socks::tcp::{Socks4Stream, Socks5Stream};
use tokio_socks::{
io::Compat,
tcp::{Socks4Stream, Socks5Stream},
};
pub(crate) async fn connect_socks_proxy_stream(
proxy: Option<&Uri>,
@@ -10,7 +14,7 @@ pub(crate) async fn connect_socks_proxy_stream(
let stream = match parse_socks_proxy(proxy) {
Some((socks_proxy, SocksVersion::V4)) => {
let stream = Socks4Stream::connect_with_socket(
tokio::net::TcpStream::connect(socks_proxy).await?,
Compat::new(smol::net::TcpStream::connect(socks_proxy).await?),
rpc_host,
)
.await
@@ -19,15 +23,13 @@ pub(crate) async fn connect_socks_proxy_stream(
}
Some((socks_proxy, SocksVersion::V5)) => Box::new(
Socks5Stream::connect_with_socket(
tokio::net::TcpStream::connect(socks_proxy).await?,
Compat::new(smol::net::TcpStream::connect(socks_proxy).await?),
rpc_host,
)
.await
.map_err(|err| anyhow!("error connecting to socks {}", err))?,
) as Box<dyn AsyncReadWrite>,
None => {
Box::new(tokio::net::TcpStream::connect(rpc_host).await?) as Box<dyn AsyncReadWrite>
}
None => Box::new(smol::net::TcpStream::connect(rpc_host).await?) as Box<dyn AsyncReadWrite>,
};
Ok(stream)
}
@@ -58,11 +60,5 @@ enum SocksVersion {
V5,
}
pub(crate) trait AsyncReadWrite:
tokio::io::AsyncRead + tokio::io::AsyncWrite + Unpin + Send + 'static
{
}
impl<T: tokio::io::AsyncRead + tokio::io::AsyncWrite + Unpin + Send + 'static> AsyncReadWrite
for T
{
}
pub(crate) trait AsyncReadWrite: AsyncRead + AsyncWrite + Unpin + Send + 'static {}
impl<T: AsyncRead + AsyncWrite + Unpin + Send + 'static> AsyncReadWrite for T {}
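
This hunk swaps the raw TCP socket between runtimes; on the smol side, `tokio_socks::io::Compat` adapts the futures-io stream so `Socks5Stream` (which speaks tokio's IO traits) can drive the SOCKS handshake over it. A small sketch of that wrapping, assuming tokio_socks is built with its futures-io support; the proxy address and host are placeholders supplied by the caller:

use anyhow::Result;
use tokio_socks::{io::Compat, tcp::Socks5Stream};

// Placeholder inputs; in the client these come from the proxy settings and the RPC URL.
async fn connect_via_socks5_sketch(
    proxy_addr: &str,
    rpc_host: (&str, u16),
) -> Result<Socks5Stream<Compat<smol::net::TcpStream>>> {
    // Open the TCP connection with smol, then wrap it in Compat so tokio_socks
    // can perform the SOCKS5 handshake over a futures-io stream.
    let raw = smol::net::TcpStream::connect(proxy_addr).await?;
    let stream = Socks5Stream::connect_with_socket(Compat::new(raw), rpc_host).await?;
    Ok(stream)
}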

View File

@@ -17,7 +17,7 @@ use std::io::Write;
use std::sync::LazyLock;
use std::time::Instant;
use std::{env, mem, path::PathBuf, sync::Arc, time::Duration};
use telemetry_events::{AssistantEventData, AssistantPhase, Event, EventRequestBody, EventWrapper};
use telemetry_events::{AssistantEvent, AssistantPhase, Event, EventRequestBody, EventWrapper};
use util::{ResultExt, TryFutureExt};
use worktree::{UpdatedEntriesSet, WorktreeId};
@@ -329,7 +329,7 @@ impl Telemetry {
drop(state);
}
pub fn report_assistant_event(self: &Arc<Self>, event: AssistantEventData) {
pub fn report_assistant_event(self: &Arc<Self>, event: AssistantEvent) {
let event_type = match event.phase {
AssistantPhase::Response => "Assistant Responded",
AssistantPhase::Invoked => "Assistant Invoked",

View File

@@ -33,6 +33,7 @@ clock.workspace = true
collections.workspace = true
dashmap.workspace = true
derive_more.workspace = true
buffer_diff.workspace = true
envy = "0.4.2"
futures.workspace = true
google_ai.workspace = true
@@ -84,7 +85,6 @@ assistant_slash_command.workspace = true
assistant_tool.workspace = true
async-trait.workspace = true
audio.workspace = true
buffer_diff.workspace = true
call = { workspace = true, features = ["test-support"] }
channel.workspace = true
client = { workspace = true, features = ["test-support"] }

View File

@@ -1,3 +0,0 @@
alter table models
add column max_input_tokens_per_minute bigint not null default 0,
add column max_output_tokens_per_minute bigint not null default 0;

View File

@@ -127,7 +127,7 @@ async fn update_billing_preferences(
SnowflakeRow::new(
"Spend Limit Updated",
Some(user.metrics_id),
user.metrics_id,
user.admin,
None,
json!({

View File

@@ -149,37 +149,6 @@ pub async fn post_crash(
"crash report"
);
if let Some(kinesis_client) = app.kinesis_client.clone() {
if let Some(stream) = app.config.kinesis_stream.clone() {
let properties = json!({
"app_version": report.header.app_version,
"os_version": report.header.os_version,
"os_name": "macOS",
"bundle_id": report.header.bundle_id,
"incident_id": report.header.incident_id,
"installation_id": installation_id,
"description": description,
"backtrace": summary,
});
let row = SnowflakeRow::new(
"Crash Reported",
None,
false,
Some(installation_id),
properties,
);
let data = serde_json::to_vec(&row)?;
kinesis_client
.put_record()
.stream_name(stream)
.partition_key(row.insert_id.unwrap_or_default())
.data(data.into())
.send()
.await
.log_err();
}
}
if let Some(slack_panics_webhook) = app.config.slack_panics_webhook.clone() {
let payload = slack::WebhookBody::new(|w| {
w.add_section(|s| s.text(slack::Text::markdown(description)))
@@ -345,8 +314,6 @@ pub async fn post_panic(
.ok();
}
let backtrace = panic.backtrace.join("\n");
tracing::error!(
service = "client",
version = %panic.app_version,
@@ -355,40 +322,10 @@ pub async fn post_panic(
incident_id = %incident_id,
installation_id = %panic.installation_id.clone().unwrap_or_default(),
description = %panic.payload,
backtrace = %backtrace,
backtrace = %panic.backtrace.join("\n"),
"panic report"
);
if let Some(kinesis_client) = app.kinesis_client.clone() {
if let Some(stream) = app.config.kinesis_stream.clone() {
let properties = json!({
"app_version": panic.app_version,
"os_name": panic.os_name,
"os_version": panic.os_version,
"incident_id": incident_id,
"installation_id": panic.installation_id,
"description": panic.payload,
"backtrace": backtrace,
});
let row = SnowflakeRow::new(
"Panic Reported",
None,
false,
panic.installation_id.clone(),
properties,
);
let data = serde_json::to_vec(&row)?;
kinesis_client
.put_record()
.stream_name(stream)
.partition_key(row.insert_id.unwrap_or_default())
.data(data.into())
.send()
.await
.log_err();
}
}
let backtrace = if panic.backtrace.len() > 25 {
let total = panic.backtrace.len();
format!(
@@ -774,7 +711,7 @@ pub struct SnowflakeRow {
impl SnowflakeRow {
pub fn new(
event_type: impl Into<String>,
metrics_id: Option<Uuid>,
metrics_id: Uuid,
is_staff: bool,
system_id: Option<String>,
event_properties: serde_json::Value,
@@ -783,7 +720,7 @@ impl SnowflakeRow {
time: chrono::Utc::now(),
event_type: event_type.into(),
device_id: system_id,
user_id: metrics_id.map(|id| id.to_string()),
user_id: Some(metrics_id.to_string()),
insert_id: Some(uuid::Uuid::new_v4().to_string()),
event_properties,
user_properties: Some(json!({"is_staff": is_staff})),
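
The `SnowflakeRow` constructor is the only thing that changes shape in this file: with an optional `metrics_id`, rows for events that have no signed-in user (the crash and panic reports above) can still be built, and each row is serialized to JSON before being pushed to Kinesis. A trimmed-down mirror of that constructor, with the field list reduced for illustration:

use uuid::Uuid;

// Reduced mirror of the row for illustration; the real struct has more fields
// and derives Serialize so it can be sent to Kinesis as JSON.
struct SnowflakeRowSketch {
    time: chrono::DateTime<chrono::Utc>,
    event_type: String,
    device_id: Option<String>,
    user_id: Option<String>,
    insert_id: Option<String>,
    event_properties: serde_json::Value,
    user_properties: Option<serde_json::Value>,
}

impl SnowflakeRowSketch {
    fn new(
        event_type: impl Into<String>,
        metrics_id: Option<Uuid>,
        is_staff: bool,
        system_id: Option<String>,
        event_properties: serde_json::Value,
    ) -> Self {
        Self {
            time: chrono::Utc::now(),
            event_type: event_type.into(),
            device_id: system_id,
            // An absent metrics_id simply leaves user_id unset instead of
            // forcing callers to invent one.
            user_id: metrics_id.map(|id| id.to_string()),
            insert_id: Some(Uuid::new_v4().to_string()),
            event_properties,
            user_properties: Some(serde_json::json!({ "is_staff": is_staff })),
        }
    }
}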

View File

@@ -316,14 +316,10 @@ async fn perform_completion(
is_staff = claims.is_staff,
provider = params.provider.to_string(),
model = model,
tokens_remaining = rate_limit_info.tokens.as_ref().map(|limits| limits.remaining),
input_tokens_remaining = rate_limit_info.input_tokens.as_ref().map(|limits| limits.remaining),
output_tokens_remaining = rate_limit_info.output_tokens.as_ref().map(|limits| limits.remaining),
requests_remaining = rate_limit_info.requests.as_ref().map(|limits| limits.remaining),
requests_reset = ?rate_limit_info.requests.as_ref().map(|limits| limits.reset),
tokens_reset = ?rate_limit_info.tokens.as_ref().map(|limits| limits.reset),
input_tokens_reset = ?rate_limit_info.input_tokens.as_ref().map(|limits| limits.reset),
output_tokens_reset = ?rate_limit_info.output_tokens.as_ref().map(|limits| limits.reset),
tokens_remaining = rate_limit_info.tokens_remaining,
requests_remaining = rate_limit_info.requests_remaining,
requests_reset = ?rate_limit_info.requests_reset,
tokens_reset = ?rate_limit_info.tokens_reset,
);
}
@@ -503,10 +499,6 @@ async fn check_usage_limit(
model.max_requests_per_minute as usize / users_in_recent_minutes;
let per_user_max_tokens_per_minute =
model.max_tokens_per_minute as usize / users_in_recent_minutes;
let per_user_max_input_tokens_per_minute =
model.max_input_tokens_per_minute as usize / users_in_recent_minutes;
let per_user_max_output_tokens_per_minute =
model.max_output_tokens_per_minute as usize / users_in_recent_minutes;
let per_user_max_tokens_per_day = model.max_tokens_per_day as usize / users_in_recent_days;
let usage = state
@@ -514,55 +506,29 @@ async fn check_usage_limit(
.get_usage(user_id, provider, model_name, Utc::now())
.await?;
let checks = match (provider, model_name) {
(LanguageModelProvider::Anthropic, "claude-3-7-sonnet") => vec![
(
usage.requests_this_minute,
per_user_max_requests_per_minute,
UsageMeasure::RequestsPerMinute,
),
(
usage.input_tokens_this_minute,
per_user_max_tokens_per_minute,
UsageMeasure::InputTokensPerMinute,
),
(
usage.output_tokens_this_minute,
per_user_max_tokens_per_minute,
UsageMeasure::OutputTokensPerMinute,
),
(
usage.tokens_this_day,
per_user_max_tokens_per_day,
UsageMeasure::TokensPerDay,
),
],
_ => vec![
(
usage.requests_this_minute,
per_user_max_requests_per_minute,
UsageMeasure::RequestsPerMinute,
),
(
usage.tokens_this_minute,
per_user_max_tokens_per_minute,
UsageMeasure::TokensPerMinute,
),
(
usage.tokens_this_day,
per_user_max_tokens_per_day,
UsageMeasure::TokensPerDay,
),
],
};
let checks = [
(
usage.requests_this_minute,
per_user_max_requests_per_minute,
UsageMeasure::RequestsPerMinute,
),
(
usage.tokens_this_minute,
per_user_max_tokens_per_minute,
UsageMeasure::TokensPerMinute,
),
(
usage.tokens_this_day,
per_user_max_tokens_per_day,
UsageMeasure::TokensPerDay,
),
];
for (used, limit, usage_measure) in checks {
if used > limit {
let resource = match usage_measure {
UsageMeasure::RequestsPerMinute => "requests_per_minute",
UsageMeasure::TokensPerMinute => "tokens_per_minute",
UsageMeasure::InputTokensPerMinute => "input_tokens_per_minute",
UsageMeasure::OutputTokensPerMinute => "output_tokens_per_minute",
UsageMeasure::TokensPerDay => "tokens_per_day",
};
@@ -574,24 +540,19 @@ async fn check_usage_limit(
is_staff = claims.is_staff,
provider = provider.to_string(),
model = model.name,
usage_measure = resource,
requests_this_minute = usage.requests_this_minute,
tokens_this_minute = usage.tokens_this_minute,
input_tokens_this_minute = usage.input_tokens_this_minute,
output_tokens_this_minute = usage.output_tokens_this_minute,
tokens_this_day = usage.tokens_this_day,
users_in_recent_minutes = users_in_recent_minutes,
users_in_recent_days = users_in_recent_days,
max_requests_per_minute = per_user_max_requests_per_minute,
max_tokens_per_minute = per_user_max_tokens_per_minute,
max_input_tokens_per_minute = per_user_max_input_tokens_per_minute,
max_output_tokens_per_minute = per_user_max_output_tokens_per_minute,
max_tokens_per_day = per_user_max_tokens_per_day,
);
SnowflakeRow::new(
"Language Model Rate Limited",
Some(claims.metrics_id),
claims.metrics_id,
claims.is_staff,
claims.system_id.clone(),
json!({
@@ -600,8 +561,6 @@ async fn check_usage_limit(
"users_in_recent_days": users_in_recent_days,
"max_requests_per_minute": per_user_max_requests_per_minute,
"max_tokens_per_minute": per_user_max_tokens_per_minute,
"max_input_tokens_per_minute": per_user_max_input_tokens_per_minute,
"max_output_tokens_per_minute": per_user_max_output_tokens_per_minute,
"max_tokens_per_day": per_user_max_tokens_per_day,
"plan": match claims.plan {
Plan::Free => "free".to_string(),
@@ -697,12 +656,8 @@ impl<S> Drop for TokenCountingStream<S> {
login = claims.github_user_login,
authn.jti = claims.jti,
is_staff = claims.is_staff,
provider = provider.to_string(),
model = model,
requests_this_minute = usage.requests_this_minute,
tokens_this_minute = usage.tokens_this_minute,
input_tokens_this_minute = usage.input_tokens_this_minute,
output_tokens_this_minute = usage.output_tokens_this_minute,
);
let properties = json!({
@@ -719,7 +674,7 @@ impl<S> Drop for TokenCountingStream<S> {
});
SnowflakeRow::new(
"Language Model Used",
Some(claims.metrics_id),
claims.metrics_id,
claims.is_staff,
claims.system_id.clone(),
properties,
@@ -771,8 +726,6 @@ pub fn log_usage_periodically(state: Arc<LlmState>) {
model = usage.model,
requests_this_minute = usage.requests_this_minute,
tokens_this_minute = usage.tokens_this_minute,
input_tokens_this_minute = usage.input_tokens_this_minute,
output_tokens_this_minute = usage.output_tokens_this_minute,
);
}
}
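
Both sides of this hunk follow the same pattern: divide each model-wide limit by the number of users active in the recent window to get a per-user ceiling, then walk a list of (used, limit, measure) checks and reject on the first one exceeded. A compact sketch of that pattern, with simplified stand-in types in place of the real usage and model records:

// Simplified stand-ins for the usage snapshot and model limits used in this file.
struct UsageSketch {
    requests_this_minute: usize,
    tokens_this_minute: usize,
    tokens_this_day: usize,
}

struct ModelLimitsSketch {
    max_requests_per_minute: usize,
    max_tokens_per_minute: usize,
    max_tokens_per_day: usize,
}

// Returns the name of the first exceeded resource, if any.
fn first_exceeded_limit(
    usage: &UsageSketch,
    model: &ModelLimitsSketch,
    users_in_recent_minutes: usize,
    users_in_recent_days: usize,
) -> Option<&'static str> {
    // Split each model-wide limit across the users seen in the relevant window
    // (the .max(1) guards only keep this standalone sketch free of a zero divisor).
    let per_user_requests_per_minute =
        model.max_requests_per_minute / users_in_recent_minutes.max(1);
    let per_user_tokens_per_minute = model.max_tokens_per_minute / users_in_recent_minutes.max(1);
    let per_user_tokens_per_day = model.max_tokens_per_day / users_in_recent_days.max(1);

    let checks = [
        (
            usage.requests_this_minute,
            per_user_requests_per_minute,
            "requests_per_minute",
        ),
        (
            usage.tokens_this_minute,
            per_user_tokens_per_minute,
            "tokens_per_minute",
        ),
        (usage.tokens_this_day, per_user_tokens_per_day, "tokens_per_day"),
    ];
    checks
        .into_iter()
        .find(|(used, limit, _)| used > limit)
        .map(|(_, _, resource)| resource)
}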

View File

@@ -27,8 +27,6 @@ impl TokenUsage {
pub struct Usage {
pub requests_this_minute: usize,
pub tokens_this_minute: usize,
pub input_tokens_this_minute: usize,
pub output_tokens_this_minute: usize,
pub tokens_this_day: usize,
pub tokens_this_month: TokenUsage,
pub spending_this_month: Cents,
@@ -41,8 +39,6 @@ pub struct ApplicationWideUsage {
pub model: String,
pub requests_this_minute: usize,
pub tokens_this_minute: usize,
pub input_tokens_this_minute: usize,
pub output_tokens_this_minute: usize,
}
#[derive(Clone, Copy, Debug, Default)]
@@ -98,10 +94,6 @@ impl LlmDatabase {
let past_minute = now - Duration::minutes(1);
let requests_per_minute = self.usage_measure_ids[&UsageMeasure::RequestsPerMinute];
let tokens_per_minute = self.usage_measure_ids[&UsageMeasure::TokensPerMinute];
let input_tokens_per_minute =
self.usage_measure_ids[&UsageMeasure::InputTokensPerMinute];
let output_tokens_per_minute =
self.usage_measure_ids[&UsageMeasure::OutputTokensPerMinute];
let mut results = Vec::new();
for ((provider, model_name), model) in self.models.iter() {
@@ -122,8 +114,6 @@ impl LlmDatabase {
let mut requests_this_minute = 0;
let mut tokens_this_minute = 0;
let mut input_tokens_this_minute = 0;
let mut output_tokens_this_minute = 0;
while let Some(usage) = usages.next().await {
let usage = usage?;
if usage.measure_id == requests_per_minute {
@@ -146,26 +136,6 @@ impl LlmDatabase {
.iter()
.copied()
.sum::<i64>() as usize;
} else if usage.measure_id == input_tokens_per_minute {
input_tokens_this_minute += Self::get_live_buckets(
&usage,
now.naive_utc(),
UsageMeasure::InputTokensPerMinute,
)
.0
.iter()
.copied()
.sum::<i64>() as usize;
} else if usage.measure_id == output_tokens_per_minute {
output_tokens_this_minute += Self::get_live_buckets(
&usage,
now.naive_utc(),
UsageMeasure::OutputTokensPerMinute,
)
.0
.iter()
.copied()
.sum::<i64>() as usize;
}
}
@@ -174,8 +144,6 @@ impl LlmDatabase {
model: model_name.clone(),
requests_this_minute,
tokens_this_minute,
input_tokens_this_minute,
output_tokens_this_minute,
})
}
@@ -271,10 +239,6 @@ impl LlmDatabase {
self.get_usage_for_measure(&usages, now, UsageMeasure::RequestsPerMinute)?;
let tokens_this_minute =
self.get_usage_for_measure(&usages, now, UsageMeasure::TokensPerMinute)?;
let input_tokens_this_minute =
self.get_usage_for_measure(&usages, now, UsageMeasure::InputTokensPerMinute)?;
let output_tokens_this_minute =
self.get_usage_for_measure(&usages, now, UsageMeasure::OutputTokensPerMinute)?;
let tokens_this_day =
self.get_usage_for_measure(&usages, now, UsageMeasure::TokensPerDay)?;
let spending_this_month = if let Some(monthly_usage) = &monthly_usage {
@@ -303,8 +267,6 @@ impl LlmDatabase {
Ok(Usage {
requests_this_minute,
tokens_this_minute,
input_tokens_this_minute,
output_tokens_this_minute,
tokens_this_day,
tokens_this_month: TokenUsage {
input: monthly_usage
@@ -375,31 +337,6 @@ impl LlmDatabase {
&tx,
)
.await?;
let input_tokens_this_minute = self
.update_usage_for_measure(
user_id,
is_staff,
model.id,
&usages,
UsageMeasure::InputTokensPerMinute,
now,
// Cache read input tokens are not counted for the purposes of rate limits (but they are still billed).
tokens.input + tokens.input_cache_creation,
&tx,
)
.await?;
let output_tokens_this_minute = self
.update_usage_for_measure(
user_id,
is_staff,
model.id,
&usages,
UsageMeasure::OutputTokensPerMinute,
now,
tokens.output,
&tx,
)
.await?;
let tokens_this_day = self
.update_usage_for_measure(
user_id,
@@ -548,8 +485,6 @@ impl LlmDatabase {
Ok(Usage {
requests_this_minute,
tokens_this_minute,
input_tokens_this_minute,
output_tokens_this_minute,
tokens_this_day,
tokens_this_month: TokenUsage {
input: monthly_usage.input_tokens as usize,
@@ -749,9 +684,7 @@ impl UsageMeasure {
fn bucket_count(&self) -> usize {
match self {
UsageMeasure::RequestsPerMinute => MINUTE_BUCKET_COUNT,
UsageMeasure::TokensPerMinute
| UsageMeasure::InputTokensPerMinute
| UsageMeasure::OutputTokensPerMinute => MINUTE_BUCKET_COUNT,
UsageMeasure::TokensPerMinute => MINUTE_BUCKET_COUNT,
UsageMeasure::TokensPerDay => DAY_BUCKET_COUNT,
}
}
@@ -759,9 +692,7 @@ impl UsageMeasure {
fn total_duration(&self) -> Duration {
match self {
UsageMeasure::RequestsPerMinute => Duration::minutes(1),
UsageMeasure::TokensPerMinute
| UsageMeasure::InputTokensPerMinute
| UsageMeasure::OutputTokensPerMinute => Duration::minutes(1),
UsageMeasure::TokensPerMinute => Duration::minutes(1),
UsageMeasure::TokensPerDay => Duration::hours(24),
}
}
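
One behavioral detail captured in this file: when recording input tokens against the per-minute rate limit, cache-creation tokens count but cache reads do not, even though cache reads are still billed. A tiny sketch of that accounting, using a hypothetical token-usage shape rather than the real type:

// Hypothetical token-usage shape; the real type lives in the llm database crate.
struct TokenUsageSketch {
    input: usize,
    input_cache_creation: usize,
    input_cache_read: usize,
    output: usize,
}

// Input tokens counted toward the per-minute rate limit: cache reads are
// excluded here even though they are still billed.
fn rate_limited_input_tokens(tokens: &TokenUsageSketch) -> usize {
    tokens.input + tokens.input_cache_creation
}

fn rate_limited_output_tokens(tokens: &TokenUsageSketch) -> usize {
    tokens.output
}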

View File

@@ -12,8 +12,6 @@ pub struct Model {
pub name: String,
pub max_requests_per_minute: i64,
pub max_tokens_per_minute: i64,
pub max_input_tokens_per_minute: i64,
pub max_output_tokens_per_minute: i64,
pub max_tokens_per_day: i64,
pub price_per_million_input_tokens: i32,
pub price_per_million_cache_creation_input_tokens: i32,

View File

@@ -8,8 +8,6 @@ use sea_orm::entity::prelude::*;
pub enum UsageMeasure {
RequestsPerMinute,
TokensPerMinute,
InputTokensPerMinute,
OutputTokensPerMinute,
TokensPerDay,
}

View File

@@ -83,8 +83,6 @@ async fn test_tracking_usage(db: &mut LlmDatabase) {
Usage {
requests_this_minute: 2,
tokens_this_minute: 3000,
input_tokens_this_minute: 3000,
output_tokens_this_minute: 0,
tokens_this_day: 3000,
tokens_this_month: TokenUsage {
input: 3000,
@@ -104,8 +102,6 @@ async fn test_tracking_usage(db: &mut LlmDatabase) {
Usage {
requests_this_minute: 1,
tokens_this_minute: 2000,
input_tokens_this_minute: 2000,
output_tokens_this_minute: 0,
tokens_this_day: 3000,
tokens_this_month: TokenUsage {
input: 3000,
@@ -144,8 +140,6 @@ async fn test_tracking_usage(db: &mut LlmDatabase) {
Usage {
requests_this_minute: 2,
tokens_this_minute: 5000,
input_tokens_this_minute: 5000,
output_tokens_this_minute: 0,
tokens_this_day: 6000,
tokens_this_month: TokenUsage {
input: 6000,
@@ -166,8 +160,6 @@ async fn test_tracking_usage(db: &mut LlmDatabase) {
Usage {
requests_this_minute: 0,
tokens_this_minute: 0,
input_tokens_this_minute: 0,
output_tokens_this_minute: 0,
tokens_this_day: 5000,
tokens_this_month: TokenUsage {
input: 6000,
@@ -205,8 +197,6 @@ async fn test_tracking_usage(db: &mut LlmDatabase) {
Usage {
requests_this_minute: 1,
tokens_this_minute: 4000,
input_tokens_this_minute: 4000,
output_tokens_this_minute: 0,
tokens_this_day: 9000,
tokens_this_month: TokenUsage {
input: 10000,
@@ -250,8 +240,6 @@ async fn test_tracking_usage(db: &mut LlmDatabase) {
Usage {
requests_this_minute: 1,
tokens_this_minute: 1500,
input_tokens_this_minute: 1500,
output_tokens_this_minute: 0,
tokens_this_day: 1500,
tokens_this_month: TokenUsage {
input: 1000,
@@ -290,8 +278,6 @@ async fn test_tracking_usage(db: &mut LlmDatabase) {
Usage {
requests_this_minute: 2,
tokens_this_minute: 2800,
input_tokens_this_minute: 2500,
output_tokens_this_minute: 0,
tokens_this_day: 2800,
tokens_this_month: TokenUsage {
input: 2000,

View File

@@ -318,7 +318,6 @@ impl Server {
.add_request_handler(forward_read_only_project_request::<proto::OpenUncommittedDiff>)
.add_request_handler(forward_read_only_project_request::<proto::LspExtExpandMacro>)
.add_request_handler(forward_read_only_project_request::<proto::LspExtOpenDocs>)
.add_request_handler(forward_mutating_project_request::<proto::LspExtRunnables>)
.add_request_handler(
forward_read_only_project_request::<proto::LspExtSwitchSourceHeader>,
)
@@ -357,7 +356,6 @@ impl Server {
.add_request_handler(forward_mutating_project_request::<proto::BlameBuffer>)
.add_request_handler(forward_mutating_project_request::<proto::MultiLspQuery>)
.add_request_handler(forward_mutating_project_request::<proto::RestartLanguageServers>)
.add_request_handler(forward_mutating_project_request::<proto::StopLanguageServers>)
.add_request_handler(forward_mutating_project_request::<proto::LinkedEditingRange>)
.add_message_handler(create_buffer_for_peer)
.add_request_handler(update_buffer)
@@ -985,7 +983,7 @@ impl Server {
}
}
pub async fn snapshot(self: &Arc<Self>) -> ServerSnapshot {
pub async fn snapshot<'a>(self: &'a Arc<Self>) -> ServerSnapshot<'a> {
ServerSnapshot {
connection_pool: ConnectionPoolGuard {
guard: self.connection_pool.lock(),
@@ -4154,13 +4152,13 @@ async fn get_llm_api_token(
fn to_axum_message(message: TungsteniteMessage) -> anyhow::Result<AxumMessage> {
let message = match message {
TungsteniteMessage::Text(payload) => AxumMessage::Text(payload.as_str().to_string()),
TungsteniteMessage::Binary(payload) => AxumMessage::Binary(payload.into()),
TungsteniteMessage::Ping(payload) => AxumMessage::Ping(payload.into()),
TungsteniteMessage::Pong(payload) => AxumMessage::Pong(payload.into()),
TungsteniteMessage::Text(payload) => AxumMessage::Text(payload),
TungsteniteMessage::Binary(payload) => AxumMessage::Binary(payload),
TungsteniteMessage::Ping(payload) => AxumMessage::Ping(payload),
TungsteniteMessage::Pong(payload) => AxumMessage::Pong(payload),
TungsteniteMessage::Close(frame) => AxumMessage::Close(frame.map(|frame| AxumCloseFrame {
code: frame.code.into(),
reason: frame.reason.as_str().to_owned().into(),
reason: frame.reason,
})),
// We should never receive a frame while reading the message, according
// to the `tungstenite` maintainers:
@@ -4180,14 +4178,14 @@ fn to_axum_message(message: TungsteniteMessage) -> anyhow::Result<AxumMessage> {
fn to_tungstenite_message(message: AxumMessage) -> TungsteniteMessage {
match message {
AxumMessage::Text(payload) => TungsteniteMessage::Text(payload.into()),
AxumMessage::Binary(payload) => TungsteniteMessage::Binary(payload.into()),
AxumMessage::Ping(payload) => TungsteniteMessage::Ping(payload.into()),
AxumMessage::Pong(payload) => TungsteniteMessage::Pong(payload.into()),
AxumMessage::Text(payload) => TungsteniteMessage::Text(payload),
AxumMessage::Binary(payload) => TungsteniteMessage::Binary(payload),
AxumMessage::Ping(payload) => TungsteniteMessage::Ping(payload),
AxumMessage::Pong(payload) => TungsteniteMessage::Pong(payload),
AxumMessage::Close(frame) => {
TungsteniteMessage::Close(frame.map(|frame| TungsteniteCloseFrame {
code: frame.code.into(),
reason: frame.reason.as_ref().into(),
reason: frame.reason,
}))
}
}

View File

@@ -674,7 +674,7 @@ async fn test_peers_following_each_other(cx_a: &mut TestAppContext, cx_b: &mut T
client_a
.fs()
.insert_tree(
path!("/a"),
"/a",
json!({
"1.txt": "one",
"2.txt": "two",
@@ -683,7 +683,7 @@ async fn test_peers_following_each_other(cx_a: &mut TestAppContext, cx_b: &mut T
}),
)
.await;
let (project_a, worktree_id) = client_a.build_local_project(path!("/a"), cx_a).await;
let (project_a, worktree_id) = client_a.build_local_project("/a", cx_a).await;
active_call_a
.update(cx_a, |call, cx| call.set_location(Some(&project_a), cx))
.await

Some files were not shown because too many files have changed in this diff.