Compare commits

..

104 Commits

Author SHA1 Message Date
Zed Bot
0890516c1f Bump to 0.183.13 for @osiewicz 2025-04-30 11:04:01 +00:00
gcp-cherry-pick-bot[bot]
570585d917 Revert "python: Enable subroot detection for pylsp and pyright (#27364)" (cherry-pick #29658) (#29662)
Cherry-picked Revert "python: Enable subroot detection for pylsp and
pyright (#27364)" (#29658)

This reverts commit e661a0afd6.

Closes #ISSUE

Release Notes:

- Reverted changes to Python subroot detection which could have caused
multiple python processes to be spawned when working in projects with
multiple `pyproject.toml` files.

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
2025-04-30 12:59:47 +02:00
Peter Tripp
34cb4a1d67 zed 0.183.12 2025-04-29 09:13:40 -04:00
Ben Kunkle
267c74db5b Fix data loss when project settings opened with ".zed" in file_scan_exclusions (#29578)
Closes #28640

Before creating an entry for a file opened with `open_local_file`, make
sure it doesn't already exist on disk, in addition to checking that it
isn't already tracked in the workspace.
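
A minimal sketch of that guard, using only the standard library; `open_local_file` and the `tracked_paths` set here are stand-ins for Zed's actual workspace and worktree APIs:

```rust
use std::collections::HashSet;
use std::fs;
use std::path::{Path, PathBuf};

// Only create (and thereby truncate) the settings file if it is neither
// tracked in the workspace nor already present on disk. The extra on-disk
// check matters when ".zed" is excluded via "file_scan_exclusions", because
// then the worktree doesn't know the file exists.
fn open_local_file(path: &Path, tracked_paths: &HashSet<PathBuf>) -> std::io::Result<String> {
    let tracked = tracked_paths.contains(path);
    if !tracked && !path.exists() {
        if let Some(dir) = path.parent() {
            fs::create_dir_all(dir)?;
        }
        fs::write(path, "")?; // create an empty file only when it is truly missing
    }
    fs::read_to_string(path)
}
```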

Release Notes:

- Fixed an issue where the project settings file would be truncated when
opened with `zed: open project settings` if the ".zed" directory was
excluded from the files scanned in a workspace (in
"file_scan_exclusions")
2025-04-29 09:10:45 -04:00
Peter Tripp
5d140dee49 Fix ctrl-enter opening inline-assistant in assistant text threads (#29313)
Closes: https://github.com/zed-industries/zed/issues/24501

This has been broken for a while on Linux (at least since Feb 8th!) for Assistant1.
It is also broken for Text Threads in Assistant2 (on macOS and Linux).

This should fix both.

Potentially related:
- https://github.com/zed-industries/zed/pull/29107

Release Notes:

- Fix for `ctrl-enter` shortcut in Assistant text threads incorrectly
opening inline assist instead of triggering Send.

Co-authored-by: Conrad Irwin <conrad@zed.dev>
2025-04-28 21:49:13 -04:00
Peter Tripp
f74c86dc49 ollama: Add Qwen3 and Gemma3 (default to 16K context) (#29580)
If you have the VRAM you can increase the context by adding this to your
settings.json:

```json
  "language_models": {
    "ollama": {
      "available_models": [
        { "max_tokens": 65536, "name": "qwen3", "display_name": "Qwen3-64k" }
      ]
    }
  },
```

Release Notes:

- ollama: Add support for Qwen3. Defaults to 16K token context. See:
[Assistant Configuration
Docs](https://zed.dev/docs/assistant/configuration#ollama-context) to
increase.
2025-04-28 21:47:14 -04:00
shenjack
68c15fbad6 ollama: Add DeepSeek v3 max token length (#29156)
Add deepseek-v3 max token length for ollama

Release Notes:

- N/A
2025-04-28 21:47:05 -04:00
Zed Bot
84f21f46b6 Bump to 0.183.11 for @bennetbo 2025-04-25 13:20:48 +00:00
gcp-cherry-pick-bot[bot]
75feba2107 assistant: Fix issue when using inline assistant with Gemini models (cherry-pick #29407) (#29408)
Cherry-picked assistant: Fix issue when using inline assistant with
Gemini models (#29407)

Closes #29020

Release Notes:

- assistant: Fix issue when using inline assistant with Gemini models

Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
2025-04-25 14:47:08 +02:00
Smit Barmase
f26b3337f6 editor: Revert flattening of code actions in mouse context menu (#28988)
In light of keeping the context menu from moving dynamically, these
changes are being reverted.

- Doing it async will lead to a loading state, which moves the context
menu.
- Doing it sync introduces noticeable lag in opening the context menu.

A future idea is to introduce fixed code actions like refactor, rewrite,
etc., depending on the code action kind [(see
more)](https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#codeActionKind)
which will use submenus.

Release Notes:

- N/A
2025-04-24 02:27:52 +05:30
Ben Kunkle
52db5223c7 editor: Dismiss mouse context menus on selections change (#28729)
Closes #ISSUE

Adds an extra subscription for mouse context menus (i.e. right click context menu) so that when selections change in the editor while the context menu is open (e.g. with vim motions), the context menu closes.

Release Notes:

- N/A
2025-04-24 02:07:29 +05:30
Joseph T. Lyons
7736c850ae v0.183.x stable 2025-04-23 11:57:41 -04:00
gcp-cherry-pick-bot[bot]
28bfcc603c Fix panic in vim selection restoration (cherry-pick #29251) (#29254)
Cherry-picked Fix panic in vim selection restoration (#29251)

Closes #27986

Closes #ISSUE

Release Notes:

- vim: Fixed a panic when using `gv` after `p` in visual line mode

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-04-23 09:06:58 -06:00
Marko Kungla
11392e4bd5 Add zed to Flatpak config and data directories (#28952)
Closes #28944 

Release Notes:

- linux: Fixed incorrect config directory being used when Zed is
installed via Flatpak

Signed-off-by: Marko Kungla <marko.kungla@gmail.com>
2025-04-23 09:40:22 -04:00
gcp-cherry-pick-bot[bot]
17ca3f8e9a Fix panic when collaborating with new multibuffers (cherry-pick #29245) (#29252)
Cherry-picked Fix panic when collaborating with new multibuffers
(#29245)

Before this change, when syncing a multibuffer (such as
find-all-references) to a remote, we would renumber the excerpts from 1.
This did not matter in the past because the buffers' list of excerpts
could not change. In #27876, I added the ability for excerpts to merge,
which meant that the excerpt list could change. This manifested as
people seeing "invalid excerpt id" panics when syncing.

The initial fix to this (to re-use the excerpt ids from the host) ran
into problems because `insert_excerpts_with_ids_after` assumes that you
call it in excerpt-id order. This change de-optimizes that code to
insert the excerpts 1-by-1 in excerpt-id order, but with the
insert_after set to preserve the correct UI order.

I hope to soon remove this code path and use something more like
set-excerpts-for-path for syncing, but in the meantime we should not
panic.
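
A rough sketch of that insertion order; the types and the final `insert` call are placeholders rather than the real multibuffer API:

```rust
// Excerpts arrive with host-assigned ids and a desired UI order; insert them
// one at a time in ascending id order, but anchor each insertion after the
// closest already-present excerpt that precedes it in UI order.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct ExcerptId(u64);

fn sync_excerpts(ui_order: &[ExcerptId], existing: &mut Vec<ExcerptId>) {
    let mut to_insert: Vec<ExcerptId> = ui_order
        .iter()
        .copied()
        .filter(|id| !existing.contains(id))
        .collect();
    to_insert.sort(); // the underlying call assumes ascending excerpt-id order

    for id in to_insert {
        let ui_pos = ui_order.iter().position(|x| *x == id).unwrap();
        let insert_after = ui_order[..ui_pos]
            .iter()
            .copied()
            .rev()
            .find(|prev| existing.contains(prev));
        let dest = match insert_after {
            Some(prev) => existing.iter().position(|x| *x == prev).unwrap() + 1,
            None => 0,
        };
        existing.insert(dest, id); // stand-in for insert_excerpts_with_ids_after
    }
}

fn main() {
    let ui_order = vec![ExcerptId(7), ExcerptId(3), ExcerptId(9)];
    let mut existing = vec![ExcerptId(3)];
    sync_excerpts(&ui_order, &mut existing);
    assert_eq!(existing, vec![ExcerptId(7), ExcerptId(3), ExcerptId(9)]);
}
```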

Release Notes:

- Fix a panic when joining a project with a multibuffer with merged
excerpts

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-04-22 22:28:24 -06:00
Joseph T. Lyons
6874c0d483 zed 0.183.10 2025-04-22 16:22:24 -04:00
Smit Barmase
2fb5f57afb language: Fix language_scope_at for markdown code comments (#29230)
Closes #29176

This PR fixes an issue where uncommenting a code block in Markdown would
add Markdown comments instead of removing the language-specific
comments.

Why?
`language_scope_at` for comments in a code block in Markdown would
result in the language being detected as Markdown. This happens because
the smallest range, such as `//` or `#` on the Markdown layer, is
preferred over `// whole comment line` for any other language. This
results in language detection as Markdown for that point.

To fix this, we also use a depth factor and try to prefer the layer with
greater depth over one with lesser depth. In this case, the code block's
language depth would be preferred over Markdown. The smallest range is
now used as a tiebreaker.

Added a test for this case.
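
A minimal sketch of the selection rule, with `Layer` standing in for Zed's syntax-layer data:

```rust
use std::ops::Range;

// Prefer the deepest injection layer (e.g. the fenced code block's language
// inside Markdown); only use the smallest covering range as a tiebreaker.
struct Layer {
    language: &'static str,
    depth: usize,
    range: Range<usize>,
}

fn language_scope_at(layers: &[Layer], offset: usize) -> Option<&'static str> {
    layers
        .iter()
        .filter(|layer| layer.range.contains(&offset))
        // Greater depth wins; for equal depth, the smaller range wins.
        .max_by_key(|layer| (layer.depth, std::cmp::Reverse(layer.range.end - layer.range.start)))
        .map(|layer| layer.language)
}

fn main() {
    let layers = vec![
        Layer { language: "Markdown", depth: 0, range: 0..100 },
        Layer { language: "Rust", depth: 1, range: 40..60 },
    ];
    assert_eq!(language_scope_at(&layers, 50), Some("Rust"));
}
```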

Release Notes:

- Fixed issue where uncommenting a code block in Markdown would add
Markdown comments instead of removing the language comments.
2025-04-22 16:21:44 -04:00
Bennet Bo Fenner
2a72164262 agent: Improve the review changes UX (#29221)
Release Notes:

- agent: Improved the AI-generated changes review UX by clearly exposing
the generating state in the multibuffer tab.

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-04-22 13:01:21 -04:00
Danilo Leal
f7130ebf21 agent: Add small design tweaks (#29218)
Nothing too serious over here, just spacing and other small-ish tweaks.

Release Notes:

- N/A
2025-04-22 13:01:15 -04:00
Richard Feldman
57cbea7b93 Streaming tool calls (#29179)
https://github.com/user-attachments/assets/7854a737-ef83-414c-b397-45122e4f32e8



Release Notes:

- Create file and edit file tools now stream their tool descriptions, so
you can see what they're doing sooner.

---------

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-04-22 13:00:07 -04:00
Danilo Leal
a4665c2db6 agent: Show project name in the Agent notification (#29211)
Release Notes:

- agent: Added the project name in the Agent Panel notification.
2025-04-22 12:59:29 -04:00
Joseph T. Lyons
7bd0822135 zed 0.183.9 2025-04-22 09:29:13 -04:00
Nate Butler
89384cafad Add example agent tool preview (#28984)
This PR adds an example of rendering previews for tools using the new
Agent ToolCard style.

![CleanShot 2025-04-17 at 13 03
12@2x](https://github.com/user-attachments/assets/d4c7d266-cc32-4038-9170-f3e070fce60e)

Release Notes:

- N/A

---------

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-04-22 09:14:12 -04:00
Danilo Leal
c3239ca4a6 agent: Refine the web search tool call UI (#29190)
This PR refines the web search tool UI a bit by introducing a component
(`ToolCallCardHeader`) that aims to standardize the heading element of
tool calls in the thread.

In terms of next steps, I plan to evolve this component further soon
(e.g., building a full-blown "tool call card" component), and even move
it to a place where I can re-use it in the active_thread as well without
making the `assistant_tools` a dependency of it.

Release Notes:

- N/A
2025-04-22 09:11:04 -04:00
Danilo Leal
029b3434ff agent: Simplify user message design (#29165)
Mainly removing the "You" label, which didn't add a lot of value. Still
figuring out an issue with font size Markdown rendering before merging
this PR.

Release Notes:

- N/A
2025-04-22 09:11:04 -04:00
Bennet Bo Fenner
d3113ef126 agent: Support inserting selections as context via @selection (#29045)
WIP

Release Notes:

- N/A
2025-04-22 09:11:04 -04:00
Stephan Seidt
7cb9b46eb1 agent: Add support for google gemini 2.5 flash preview (#29205)
Adds support for the new gemini-2.5-flash-preview-04-17

Release Notes:

- agent: Added support for gemini-2.5-flash-preview
2025-04-22 09:11:03 -04:00
Bennet Bo Fenner
9c3ea6d86e gemini: Add support for passing images as part of the prompt (#29203)
Release Notes:

- agent: Add support for adding images as context when using Google
Gemini
2025-04-22 09:11:03 -04:00
Bennet Bo Fenner
cc8d096cf3 agent: Support pasting images as context (#29177)
https://github.com/user-attachments/assets/d6a27b05-3590-4f40-a820-f6f99f6bd581

Release Notes:

- agent: Added support for pasting images as context

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-04-22 09:11:03 -04:00
Agus Zubiaga
937f57a862 agent: Do not add <using_tool> placeholder (#29194)
Our provider code in `language_models` filters out messages that
`LanguageModelRequestMessage::contents_empty` reports as empty. This
doesn't seem wrong by itself, but `contents_empty` treated a message as
empty when its first segment contained no non-whitespace text, even if
it contained other non-empty segments. This caused requests to fail
when a message with a tool call didn't contain any preceding text.
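
A minimal sketch of the corrected check, with `Segment` as a stand-in for the real message segment type:

```rust
// A message is only "empty" if every segment is empty, not just the first
// one; a tool-call segment with no preceding text must keep the message alive.
enum Segment {
    Text(String),
    ToolUse { name: String },
}

fn segment_is_empty(segment: &Segment) -> bool {
    match segment {
        Segment::Text(text) => text.trim().is_empty(),
        Segment::ToolUse { .. } => false,
    }
}

fn contents_empty(segments: &[Segment]) -> bool {
    // Old behavior: only the first segment was inspected, so a message whose
    // first segment was blank text got dropped even when it carried a tool
    // call. New behavior: check every segment.
    segments.iter().all(segment_is_empty)
}

fn main() {
    let message = vec![
        Segment::Text("   ".into()),
        Segment::ToolUse { name: "grep".into() },
    ];
    assert!(!contents_empty(&message)); // kept, so the request no longer fails
}
```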

Release Notes:

- N/A
2025-04-22 09:11:03 -04:00
Michael Sloan
06c4720055 agent: Fix file context renames affecting display + simplify loading code (#29192)
Release Notes:

- N/A
2025-04-22 09:11:03 -04:00
Michael Sloan
1255a1b355 agent: Make directory context display update on rename (#29189)
Release Notes:

- N/A
2025-04-22 09:11:03 -04:00
Zed Bot
aefa9e73d8 Bump to 0.183.8 for @probably-neb 2025-04-22 00:26:44 +00:00
gcp-cherry-pick-bot[bot]
2009abb22a editor: Expand selection to word under cursor before expanding to next enclosing syntax node (cherry-pick #28864) (#29184)
Cherry-picked editor: Expand selection to word under cursor before
expanding to next enclosing syntax node (#28864)

Closes #27995

For strings in any language and Markdown, `select_larger_syntax_node`
will first select the word and then expand from there if:
- The cursor is on the word.
- The selection is inside the word.

It will not select the word and will directly proceed to expand if:
- The word is already selected.
- Multiple partial words are selected.

Todo:
- [x] Tests

Release Notes:

- Fixed `select_larger_syntax_node` to first expand to the word within a
string, and then to the larger syntax node.

Co-authored-by: Smit Barmase <heysmitbarmase@gmail.com>
2025-04-21 19:50:22 -04:00
gcp-cherry-pick-bot[bot]
f0f56d72b5 Use buffer size for markdown preview (cherry-pick #29172) (#29183)
Cherry-picked Use buffer size for markdown preview (#29172)

Note:

This is implemented in a very hacky and one-off manner. The primary
change is to pass a rem size through the markdown render tree, and scale
all sizing (rems & pixels) based on the passed in rem size manually.
This required copying the `CheckBox` component from `ui::CheckBox` so it
can use the manual rem scaling without modifying the `CheckBox`
implementation directly, as it is used elsewhere.

A better solution is required, likely involving `window.with_rem_size`
and/or _actual_ `em` units that allow text-size-relative scaling.
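
A rough sketch of the manual scaling, with illustrative names and constants rather than GPUI's actual types:

```rust
// A rem size derived from the buffer font size is threaded through the render
// tree, and every rem- or pixel-based length is scaled against the default UI
// rem size.
const DEFAULT_REM_SIZE_PX: f32 = 16.0;

#[derive(Clone, Copy)]
struct RemSize(f32); // in pixels, e.g. derived from the buffer font size

impl RemSize {
    fn rems(self, value: f32) -> f32 {
        value * self.0
    }
    fn pixels(self, value: f32) -> f32 {
        value * (self.0 / DEFAULT_REM_SIZE_PX)
    }
}

fn main() {
    let buffer_rem = RemSize(20.0); // buffer font size of 20px
    assert_eq!(buffer_rem.rems(1.5), 30.0); // 1.5rem heading
    assert_eq!(buffer_rem.pixels(8.0), 10.0); // 8px padding scaled up
}
```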

Release Notes:

- Made it so Markdown preview uses the _buffer_ font size instead of the
_ui_ font size.

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Nate Butler <nate@zed.dev>

Co-authored-by: Mikayla Maki <mikayla@zed.dev>
Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Nate Butler <nate@zed.dev>
2025-04-21 19:48:22 -04:00
Michael Sloan
6c53ee23c5 Add ability to attach rules as context (#29109)
Release Notes:

- agent: Added support for adding rules as context.
2025-04-21 18:08:11 -04:00
gcp-cherry-pick-bot[bot]
18ac67372e Fix ctrl-c in vim normal mode (cherry-pick #29167) (#29169)
Cherry-picked Fix ctrl-c in vim normal mode (#29167)

This was broken when we added helix keybindings, because we populate the
menu's shortcut based on the "last" seen binding for an action, ignoring
context.

Release Notes:

- Fix `ctrl-c` in vim normal mode

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-04-21 14:15:16 -06:00
Joseph T. Lyons
4464589942 zed 0.183.7 2025-04-21 13:30:52 -04:00
Agus Zubiaga
754c5f2eb5 agent: Migrate tool names in settings (#29168)
Release Notes:

- agent: Add migration to rename `find_replace_file` tool to
`edit_file`, and `regex_search` to `grep`.
2025-04-21 13:29:49 -04:00
Marshall Bowers
0860283cba agent: Add additional fields to Agent Tool Finished telemetry event (#29163)
This PR adds additional fields to the `Agent Tool Finished` telemetry
event:

- `model`
- `model_provider`
- `thread_id`
- `prompt_id`

Release Notes:

- N/A
2025-04-21 13:29:49 -04:00
Danilo Leal
b1ac0d9390 agent: Update Switch color in the settings view (#29154)
Just using the color method for the Switch component added in
https://github.com/zed-industries/zed/pull/29074.

Release Notes:

- N/A
2025-04-21 11:37:45 -04:00
Agus Zubiaga
98f2208314 edit tool: Handle over-indentation in replace_with_flexible_indent (#29153)
Release Notes:

- agent: Correct over-indentation in search/replace strings from model
2025-04-21 11:37:45 -04:00
Danilo Leal
2084e0b339 ui: Add .color method to the Switch (#29074)
This allows you to pass, for example, `.color(SwitchColor::Accent)` to the
Switch component and have it render differently.

<img
src="https://github.com/user-attachments/assets/c60bac8a-c5ae-4693-912a-c754e5081f45"
width="550"/>

Release Notes:

- N/A
2025-04-21 11:37:45 -04:00
Agus Zubiaga
7c0db88457 inline assistant: Fix model picker (#29136)
Release Notes:

- inline assistant: Fixed a bug where the default model would be used
even when a specific inline assistant model was configured
2025-04-21 08:48:38 -04:00
Nathan Sobo
933032013b Rename regex search tool to grep and accept an include glob pattern (#29100)
This PR renames the `regex_search` tool to `grep` because I think it
conveys more meaning to the model: the idea of searching the filesystem
with a regular expression. It's also one word and the model seems to be
using it effectively after some additional prompt tuning.

It also takes an include pattern to filter on the specific files we try
to search. I'd like to encourage the model to scope its searches more
aggressively, as in my testing, I'm only seeing it filter on file
extension.
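
A rough sketch of the tool input, with a deliberately simplified include filter (the real implementation uses proper glob and regex matching):

```rust
use std::path::Path;

struct GrepToolInput {
    regex: String,
    include_pattern: Option<String>, // e.g. "*.rs" or "crates/editor/**"
}

fn file_matches_include(path: &Path, include: &Option<String>) -> bool {
    match include {
        None => true,
        Some(pattern) => {
            // Simplified: treat "*.ext" as an extension filter and anything
            // else as a path-prefix filter.
            if let Some(ext) = pattern.strip_prefix("*.") {
                path.extension().and_then(|e| e.to_str()) == Some(ext)
            } else {
                path.starts_with(pattern.trim_end_matches("/**"))
            }
        }
    }
}

fn main() {
    let input = GrepToolInput {
        regex: r"fn\s+main".to_string(),
        include_pattern: Some("*.rs".to_string()),
    };
    assert!(file_matches_include(Path::new("src/main.rs"), &input.include_pattern));
    assert!(!file_matches_include(Path::new("README.md"), &input.include_pattern));
    let _ = input.regex;
}
```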

Release Notes:

- N/A
2025-04-21 08:48:05 -04:00
Smit Barmase
5ac0baa536 editor: Improve selection highlights speed (#29097)
Before, we used to debounce selection highlight because it needed to
search the whole file to show gutter line highlights, etc. This
experience felt extremely laggy.

This PR introduces a new approach where:
1. We query only visible rows without debounce. The search function
itself is async and runs in a background thread, so it's not blocking
anything. With no debounce and such a small search space, highlights
feel realtime.
2. In parallel, we also query the whole file (still debounced, like
before). Once this query resolves, it updates highlights across the
file, making scrollbar markers visible.

This hybrid approach gives the feeling of real-time highlighting while
keeping the same functionality.
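
A thread-based sketch of the two queries; the real implementation uses GPUI background tasks and the actual occurrence search, so the names here are illustrative:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// The search itself is just a substring scan here; in Zed it is the real
// selection-occurrence search running on a background thread.
fn search(haystack: &str, needle: &str) -> Vec<usize> {
    haystack.match_indices(needle).map(|(i, _)| i).collect()
}

fn main() {
    let file = "let x = y; let y = x; let z = x;".to_string();
    let visible = file[..12].to_string(); // pretend only this slice is on screen
    let needle = "let".to_string();

    let (tx, rx) = mpsc::channel();

    // 1. Fast query: visible rows only, no debounce.
    let tx_fast = tx.clone();
    let (v, n) = (visible.clone(), needle.clone());
    thread::spawn(move || {
        tx_fast.send(("visible rows", search(&v, &n))).unwrap();
    });

    // 2. Debounced query: whole file; when it resolves it also drives the
    //    scrollbar markers.
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(75)); // static debounce
        tx.send(("whole file", search(&file, &needle))).unwrap();
    });

    for (scope, matches) in rx.iter().take(2) {
        println!("{scope}: {} highlight(s)", matches.len());
    }
}
```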


https://github.com/user-attachments/assets/432b65f1-89d2-4658-ad5e-048921b06a23

P.S. I have removed the user setting for custom debounce delay, because
(one) now it doesn't really make sense to configure that, and (two) the
whole logic is based on the assumption that the fast query will resolve
before the debounced query. A static debounce time makes sure of that.
Configuring it might lead to cases where the fast query resolves after
the debounced query, and we end up only seeing visible viewport
highlights.

Release Notes:

- Improved selection highlight speed.
2025-04-21 08:47:55 -04:00
Michael Sloan
05b8a6da25 agent: Reorder some linux keybindings to match mac keybindings (#29107)
Release Notes:

- Made keybindings for agent panel closer to the precedence order used
on Mac. This fixes use of `enter` to add context from the menu triggered
by `@` referencing.
2025-04-21 08:45:18 -04:00
Michael Sloan
e0e46daa62 Default to fast model for thread summaries and titles + don't include system prompt / context / thinking segments (#29102)
* Adds a fast / cheaper model to providers and defaults thread
summarization to this model. Initial motivation for this was that
https://github.com/zed-industries/zed/pull/29099 would cause these
requests to fail when used with a thinking model. It doesn't seem
correct to use a thinking model for summarization.

* Skips system prompt, context, and thinking segments.

* If tool use is happening, allows 2 tool uses + one more agent response
before summarizing.

Downside of this is that there was potential for some prefix cache reuse
before, especially for title summarization (thread summarization omitted
tool results and so would not share a prefix for those). This seems fine
as these requests should typically be fairly small. Even for full thread
summarization, skipping all tool use / context should greatly reduce the
token use.

Release Notes:

- N/A
2025-04-21 08:43:33 -04:00
Nathan Sobo
220b2cd959 Don't send dummy user text with tool results (#29099)
Previously, we were including the dummy text "Here are the tool
results." whenever reporting tool call results. I'm worried this is
adding noise and confusing the model, because the user didn't actually
say anything. This inserts an empty message to be populated later. My
preference would be something less stateful, where tool results (or
batches of them requested simultaneously) would be sent to the model as
soon as they were ready, without bothering to do this message
association dance. But for now, this seems to work.

Release Notes:

- N/A
2025-04-21 08:43:24 -04:00
Bennet Bo Fenner
9375cb2277 agent: Preserve thinking blocks between requests (#29055)
Looks like the required backend component of this was deployed.

https://github.com/zed-industries/monorepo/actions/runs/14541199197

Release Notes:

- N/A

---------

Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Agus Zubiaga <hi@aguz.me>
Co-authored-by: Richard Feldman <oss@rtfeldman.com>
Co-authored-by: Nathan Sobo <nathan@zed.dev>
2025-04-21 08:43:13 -04:00
Danilo Leal
9e8af50cd8 agent: Add item to add custom MCP server in the panel's menu (#29091)
This is based on user feedback that the Agent Panel menu was only
linking to extensions as a way to add MCP servers, while we also support
adding "custom" servers, which don't go through the extensions flow.

Release Notes:

- N/A
2025-04-21 08:43:02 -04:00
Michael Sloan
b7eb695a09 Simplify language model registry + only emit change events on change (#29086)
* Now only does default fallback logic in the registry

* Only emits change events when there is actually a change

Release Notes:

- N/A
2025-04-21 08:42:55 -04:00
Nathan Sobo
bf64cd4eb4 Systematically optimize agentic editing performance (#28961)
Now that we've established a proper eval in tree, this PR reboots
our agent loop back to a set of minimal tools and simpler prompts. We
should aim to get this branch feeling subjectively competitive with
what's on main and then merge it, and build from there.

Let's invest in our eval and use it to drive better performance of the
agent loop. How you can help: Pick an example, and then make the outcome
faster or better. It's fine to even use your own subjective judgment, as
our evaluation criteria likely need tuning as well at this point. Focus
on making the agent work better in your own subjective experience first.
Let's focus on simple/practical improvements to make this thing work
better, then determine how we can craft our judgment criteria to lock
those improvements in.

Release Notes:

- N/A

---------

Co-authored-by: Max <max@zed.dev>
Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Agus <agus@zed.dev>
Co-authored-by: Richard <richard@zed.dev>
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Michael Sloan <mgsloan@gmail.com>
2025-04-21 08:42:13 -04:00
Danilo Leal
0b9ae6e7b3 agent: Make copy button show while hovering the codeblock container (#29075) 2025-04-21 08:42:13 -04:00
Marshall Bowers
ba9c033770 language_models: Fix passing of thread_id and prompt_id (#29071)
This PR is a follow-up to
https://github.com/zed-industries/zed/pull/29069 that fixes an issue
where the thread ID and prompt ID were not being sent up correctly.

Release Notes:

- N/A
2025-04-18 17:27:34 -04:00
Marshall Bowers
07ca5a6a33 agent: Attach thread ID and prompt ID to telemetry events (#29069)
This PR attaches the thread ID and the new prompt ID to telemetry events
for completions in the Agent panel.

Release Notes:

- N/A

---------

Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
2025-04-18 17:27:34 -04:00
Michael Sloan
0c13b42c3a Add hidden prompt_to_focus field to OpenPromptLibrary action (#29062)
Release Notes:

- N/A
2025-04-18 17:27:34 -04:00
Michael Sloan
bb1b132922 Init prompt store in agent eval (#29068)
Needed after #28915

Release Notes:

- N/A
2025-04-18 16:18:11 -04:00
Danilo Leal
eefdcb36be agent: Simplify design of the settings view (#29041)
Containing everything in boxes wasn't super necessary here. Want to
still improve the switch color contrast here, but will probably do that
in a separate PR.

<img
src="https://github.com/user-attachments/assets/f826a7a8-beaf-45d0-9dc2-36dc210c418e"
width="700"/>

Release Notes:

- N/A
2025-04-18 13:39:13 -04:00
Michael Sloan
2ac8a84c9f agent: Use default prompts from prompt library in system prompt (#28915)
Related to #28490.

- Default prompts from the prompt library are now included as "user
rules" in the system prompt.
- Presence of these user rules is shown at the beginning of the thread
in the UI.
- Now uses an `Entity<PromptStore>` instead of an `Arc<PromptStore>`.
Motivation for this is emitting a `PromptsUpdatedEvent`.
- Now disallows concurrent reloading of the system prompt. Before this
change it was possible for reloads to race.

Release Notes:

- agent: Added support for including default prompts from the Prompt
Library as "user rules" in the system prompt.

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-04-18 13:39:13 -04:00
Kirill Bulatov
d35ffc7e10 debugger: Fix gutter tasks display for users without the debugger feature flag (#29056) 2025-04-18 11:20:13 -06:00
Joseph T. Lyons
093248ae05 zed 0.183.6 2025-04-18 09:39:04 -04:00
Bennet Bo Fenner
21ff2bb39a agent: Do not insert selection as context when selection is empty (#29031)
Release Notes:

- N/A
2025-04-18 09:14:31 -04:00
Bennet Bo Fenner
843a621b1c agent: Remove selections as context once message is sent (#29030)
Release Notes:

- N/A
2025-04-18 09:14:31 -04:00
gcp-cherry-pick-bot[bot]
bbe956f750 Make Copy and Trim ignore empty lines, and fix vim line selections (cherry-pick #29019) (#29023)
Cherry-picked Make Copy and Trim ignore empty lines, and fix vim line
selections (#29019)

Close #28519 

Release Notes:

Update `editor: copy and trim` command:

1. Ignore empty lines in the middle:

    ```
      Line 1

      Line 2
    ```

    Will copy text to clipboard:

    ```
    Line 1

    Line 2
    ```

    Before this change, the trim was not performed.

2. Fixed trimming not working when using Vim line selections.

Co-authored-by: redforks <redforks@gmail.com>
2025-04-17 21:50:04 -06:00
Marshall Bowers
0179e4c511 agent: Report usage from thread summarization requests (#29012)
This PR makes it so the thread summarization also reports the model
request usage, to prevent the case where the count would appear to jump
by 2 the next time a message was sent after summarization.

Release Notes:

- N/A
2025-04-17 22:43:16 -04:00
Marshall Bowers
df49cad705 agent: Show request usage in the panel (#29006)
This PR adds a banner showing request usage in the Agent panel:

<img width="640" alt="Screenshot 2025-04-17 at 5 51 46 PM"
src="https://github.com/user-attachments/assets/e0eb036c-57c1-441c-bbab-7dab1c6e56d9"
/>

Only visible to users on the new billing.

Note to Joseph: Doesn't need to be cherry-picked to Preview.

Release Notes:

- N/A

---------

Co-authored-by: Nate <nate@zed.dev>
2025-04-17 22:43:16 -04:00
Marshall Bowers
13b3beb4d8 agent: Extract usage information from response headers (#29002)
This PR updates the Agent to extract the usage information from the
response headers, if they are present.

For now we just log the information, but we'll be using this soon to
populate some UI.
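
A minimal sketch of the idea, with hypothetical header names and a placeholder usage struct (the actual zed.dev header names and shape may differ):

```rust
use std::collections::HashMap;

// Hypothetical header names; the point is only "read the headers if present,
// log for now".
#[derive(Debug)]
struct ModelRequestUsage {
    amount: u32,
    limit: u32,
}

fn usage_from_headers(headers: &HashMap<String, String>) -> Option<ModelRequestUsage> {
    let amount = headers.get("x-zed-usage-amount")?.parse().ok()?;
    let limit = headers.get("x-zed-usage-limit")?.parse().ok()?;
    Some(ModelRequestUsage { amount, limit })
}

fn main() {
    let mut headers = HashMap::new();
    headers.insert("x-zed-usage-amount".to_string(), "42".to_string());
    headers.insert("x-zed-usage-limit".to_string(), "500".to_string());

    if let Some(usage) = usage_from_headers(&headers) {
        // For now we just log it; later this feeds the usage banner UI.
        eprintln!("model request usage: {:?}", usage);
    }
}
```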

Release Notes:

- N/A
2025-04-17 22:43:16 -04:00
Marshall Bowers
5f8efc9370 zeta: Extract usage information from response headers (#28999)
This PR updates the Zeta provider to extract the usage information from
the response headers, if they are present.

For now we just log the information, but we'll need to figure out where
this needs to be threaded through in order to display it in the UI.

Release Notes:

- N/A
2025-04-17 22:43:16 -04:00
Marshall Bowers
a1d643103a Use more types/constants from zed_llm_client (#28909)
This PR makes it so we use more types and constants from the
`zed_llm_client` crate to avoid duplicating information.

Also updates the current usage endpoint to use limits derived from the
`Plan`.

Release Notes:

- N/A
2025-04-17 22:43:16 -04:00
Marshall Bowers
220d853dba rpc: Remove llm module in favor of zed_llm_client (#28900)
This PR removes the `llm` module of the `rpc` crate in favor of using
the types from the `zed_llm_client`.

Release Notes:

- N/A
2025-04-17 22:43:16 -04:00
Marshall Bowers
911f329303 collab: Add plan column to subscription_usages (#28889)
This PR adds a `plan` column to the `subscription_usages` table.

These tables don't have any records in them yet, so it's fine to make
the column required without a default.

Release Notes:

- N/A
2025-04-17 22:43:16 -04:00
Marshall Bowers
1bdcf318a6 proto: Add ZedProTrial to Plan (#28885)
This PR adds the `ZedProTrial` member to the `Plan` enum.

Release Notes:

- N/A
2025-04-17 22:43:16 -04:00
Zed Bot
d4f44c1137 Bump to 0.183.5 for @probably-neb 2025-04-17 21:02:02 +00:00
gcp-cherry-pick-bot[bot]
e0dc131418 Fix multiline completions when surrounding text doesn't match completion text (cherry-pick #28995) (#28997)
Cherry-picked Fix multiline completions when surroundings don't match
completion text (#28995)

Follow up to the scenarios I overlooked in
https://github.com/zed-industries/zed/pull/28586.

Release Notes:

- N/A

Co-authored-by: João Marcos <marcospb19@hotmail.com>
2025-04-17 15:50:37 -03:00
gcp-cherry-pick-bot[bot]
5054d0768d Revert "git_panel: Pad end of list to avoid obscuring final entry with horizontal scrollbar (#28823)" (cherry-pick #28971) (#28985)
Cherry-picked Revert "git_panel: Pad end of list to avoid obscuring
final entry with horizontal scrollbar (#28823)" (#28971)

This reverts commit 1d98b33ae0.

Not sure why, but seems like this breaks the binary search used to
correlate items to each other in the lists.

Release Notes:

- N/A

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-04-17 14:02:28 -04:00
gcp-cherry-pick-bot[bot]
40add8682a Escape all runnables' cargo extra arguments coming from rust-analyzer (cherry-pick #28977) (#28981)
Cherry-picked Escape all runnables' cargo extra arguments coming from
rust-analyzer (#28977)

Closes https://github.com/zed-industries/zed/issues/28947

Release Notes:

- Fixed certain doctests not being run properly

Co-authored-by: Kirill Bulatov <kirill@zed.dev>
2025-04-17 11:07:25 -06:00
Danilo Leal
d168fb5a16 agent: Add design tweaks (#28963)
One more batch of fine-tuning the agent panel's design.

Release Notes:

- N/A
2025-04-17 12:08:11 -04:00
Bennet Bo Fenner
6bfd2593c9 agent: Support adding selection as context (#28964)
https://github.com/user-attachments/assets/42ebe911-3392-48f7-8583-caab285aca09

Release Notes:

- agent: Support adding selections via @selection or `assistant: Quote
selection` as context
2025-04-17 11:21:24 -04:00
Umesh Yadav
6db3b9c2e7 Add support for OpenAI o3 and o4-mini models (#28881)
Release Notes:

- Add support for OpenAI o3 and o4-mini models via OpenAI API and
Copilot Chat providers.

---------

Co-authored-by: Peter Tripp <peter@zed.dev>
2025-04-17 10:59:13 -04:00
redforks
01daf6e7d4 Fix snippets from extensions being listed twice (#28940)
`lookup_snippets()` merges global snippets and extension snippets, but
`global_snippets::lookup_snippets()` also returns extension snippets, so
they showed up twice.

Closes #28661 

Release Notes:

- Fixed a bug where extension provided snippets were being displayed in
duplicate.
2025-04-17 10:59:10 -04:00
Joseph T. Lyons
e50872ca26 zed 0.183.4 2025-04-17 09:33:41 -04:00
Danilo Leal
8b288aa98d agent: Fix "open thread as markdown" button (#28962)
Just now realized that the reason this button wasn't working reliably is
that we weren't passing the index to it. It's now fixed.

Release Notes:

- N/A
2025-04-17 09:31:06 -04:00
Agus Zubiaga
aa1d400024 edit prediction: Assign providers when client status changes (#28919)
There was recently a change that caused the Zed Edit Prediction provider
to only be assigned when the client was connected. However, this check
happened too early, resulting in restored buffers never getting
registered. We'll now subscribe to client status changes and reassign
providers accordingly.
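
A minimal sketch of the approach, using a plain callback list in place of GPUI subscriptions:

```rust
// Instead of checking the connection status once at startup, register a
// callback for status changes and (re)assign the edit-prediction provider
// whenever the client becomes connected.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Status {
    SigningIn,
    Connected,
    Disconnected,
}

#[derive(Default)]
struct Client {
    listeners: Vec<Box<dyn FnMut(Status)>>,
}

impl Client {
    fn on_status_change(&mut self, callback: impl FnMut(Status) + 'static) {
        self.listeners.push(Box::new(callback));
    }
    fn set_status(&mut self, status: Status) {
        for listener in &mut self.listeners {
            listener(status);
        }
    }
}

fn main() {
    let mut client = Client::default();
    client.on_status_change(|status| {
        if status == Status::Connected {
            // Previously this assignment only happened if the client was
            // already connected at registration time, so restored buffers
            // never got a provider.
            println!("assigning edit prediction provider to open buffers");
        }
    });
    client.set_status(Status::SigningIn);
    client.set_status(Status::Connected);
}
```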

Release Notes:

- edit prediction: Fixed bug disabling prediction in restored buffers
2025-04-17 08:56:35 -04:00
Bennet Bo Fenner
fd6e093827 agent: Show context server name in incompatible tool warning (#28954)
<img width="410" alt="image"
src="https://github.com/user-attachments/assets/e29a0ba8-3d37-4e66-b90c-398b24da0453"
/>


Release Notes:

- N/A
2025-04-17 08:16:29 -04:00
Zed Bot
91581d6d2b Bump to 0.183.3 for @bennetbo 2025-04-17 09:12:51 +00:00
gcp-cherry-pick-bot[bot]
7102d40414 gemini: Fix invalid field name in request (cherry-pick #28949) (#28950)
Cherry-picked agent: Fix system instructions typo (#28949)

See #28793; the name of the field is actually `systemInstruction`, not
`systemInstructions`.

Release Notes:

- Fixed an issue where Gemini requests would fail

Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
2025-04-17 11:10:20 +02:00
gcp-cherry-pick-bot[bot]
718e0a9851 Fix panic when diagnostics first opens (cherry-pick #28935) (#28939)
Cherry-picked Fix panic when diagnostics first opens (#28935)

Closes #ISSUE

Release Notes:

- N/A

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-04-16 22:26:12 -06:00
Zed Bot
2f4bd2a24b Bump to 0.183.2 for @ConradIrwin 2025-04-16 22:44:23 +00:00
gcp-cherry-pick-bot[bot]
3aac735cb2 Fix more inlay/excerpt race conditions (cherry-pick #28914) (#28916)
Cherry-picked Fix more inlay/excerpt race conditions (#28914)

Closes #ISSUE

Release Notes:

- N/A

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-04-16 16:41:38 -06:00
Agus Zubiaga
82fb597b95 agent: Fix conversation token usage and estimate unsent message (#28878)
The UI was mistakenly using the cumulative token usage for the token
counter. It will now display the last request token count, plus an
estimation of the tokens in the message editor and context entries that
haven't been sent yet.


https://github.com/user-attachments/assets/0438c501-b850-4397-9135-57214ca3c07a

Additionally, when the user edits a message, we'll display the actual
token count up to it and estimate the tokens in the new message.

Note: We don't currently estimate the delta when switching profiles. In
the future, we want to use the count tokens API to measure every part of
the request and display a breakdown.
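
A back-of-the-envelope sketch of how the displayed count is assembled; the bytes-divided-by-four estimate is purely illustrative:

```rust
// Rough token estimate for text that hasn't gone through the model yet.
fn estimate_tokens(text: &str) -> usize {
    (text.len() + 3) / 4
}

// Displayed count = tokens reported by the last request, plus estimates for
// the unsent message and any context entries added since.
fn displayed_token_count(
    last_request_tokens: usize,
    unsent_message: &str,
    unsent_context: &[&str],
) -> usize {
    let pending: usize = estimate_tokens(unsent_message)
        + unsent_context.iter().map(|c| estimate_tokens(c)).sum::<usize>();
    last_request_tokens + pending
}

fn main() {
    let total = displayed_token_count(1_200, "Refactor this function", &["fn main() {}"]);
    println!("~{total} tokens"); // last request + estimated unsent input
}
```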

Release Notes:

- agent: Made the token count more accurate and added back estimation of
used tokens as you type and add context.

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
2025-04-16 16:05:57 -04:00
Thomas Mickley-Doyle
07a0d91ea2 agent: Add git commit ID to the eval telemetry data (#28895)
Release Notes:

- N/A
2025-04-16 16:05:57 -04:00
Bennet Bo Fenner
88ddd7be46 agent: Allow quoting selection when text thread is active (#28887)
This makes the `assistant: Quote selection` command work again for text threads.
Next up is supporting this in normal threads as well.

Release Notes:

- agent: Add support for inserting selections (assistant: Quote
selection) into text threads
2025-04-16 16:05:57 -04:00
Conrad Irwin
f701d69233 Show all warnings (#28899)
Release Notes:

- (preview only) Fixes a bug where some warnings were not rendered
correctly in the Diagnostics view
2025-04-16 14:03:32 -06:00
gcp-cherry-pick-bot[bot]
a8a99414d0 Fix anchor_in_excerpt on replaced excerpts (cherry-pick #28880) (#28892)
Cherry-picked Fix anchor_in_excerpt on replaced excerpts (#28880)

Release Notes:

- N/A

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-04-16 14:01:45 -06:00
Mikayla Maki
83ce1712dc zed 0.183.1 2025-04-16 12:00:52 -07:00
Mikayla Maki
9a54d111ef Remove bottom dock layout button (#28876)
Release Notes:

- Preview: Removed the layout button from the title bar. The
`bottom_dock_layout` setting still functions.
- Added a setting, `bottom_dock_layout`, for controlling the
relationship between the bottom dock and the left and right docks.
2025-04-16 11:59:53 -07:00
Bennet Bo Fenner
c2ff375787 agent: Improve fuzzy matching for @mentions (#28883)
Make fuzzy search in @-mention match paths and context kinds as well
(e.g., typing "sym" should let me select the "Symbols" label, as opposed
to just paths)

Release Notes:

- agent: Improve fuzzy-matching when using @mentions
2025-04-16 14:05:13 -04:00
Danilo Leal
1a81946137 agent: Add item to open Prompt Library in the panel's menu (#28877)
Release Notes:

- agent: Added a menu item to open the Prompt Library from the panel's
dropdown menu on the top right.
2025-04-16 14:05:13 -04:00
Bennet Bo Fenner
36ca5ab7c2 agent: Add websearch tool (#28621)
Staff only for now. We'll work on making this usable for non-zed.dev
users later.

Release Notes:

- N/A

---------

Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-04-16 14:05:13 -04:00
Danilo Leal
ad3a319465 agent: Add small design tweaks (#28874)
Some small adjustments to simplify the agent panel's design.

Release Notes:

- N/A
2025-04-16 12:40:07 -04:00
gcp-cherry-pick-bot[bot]
19b7c1ae89 Fix more panics when removing excerpts (cherry-pick #28836) (#28873)
Cherry-picked Fix more panics when removing excerpts (#28836)

Release Notes:

- Fixed a panic when an excerpt removed has an edit suggestion inlay in
it

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>

Co-authored-by: Kirill Bulatov <kirill@zed.dev>
Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2025-04-16 10:06:28 -06:00
Marshall Bowers
9f8320f3a3 agent: Show an error when the model requests limit has been reached (#28868)
This PR adds an error message when the model requests limit has been
hit.

Release Notes:

- N/A

Co-authored-by: Oleksiy Syvokon <oleksiy.syvokon@gmail.com>
2025-04-16 11:33:31 -04:00
Joseph T. Lyons
7c483b231d v0.183.x preview 2025-04-16 08:45:21 -04:00
387 changed files with 6142 additions and 10865 deletions

View File

@@ -1 +0,0 @@
.rules

View File

@@ -1 +0,0 @@
.rules

View File

@@ -10,7 +10,7 @@ runs:
cargo install cargo-nextest --locked
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
uses: actions/setup-node@cdca7365b2dadb8aad0a33bc7601856ffabcc48e # v4
with:
node-version: "18"

View File

@@ -16,7 +16,7 @@ runs:
run: cargo install cargo-nextest --locked
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
uses: actions/setup-node@cdca7365b2dadb8aad0a33bc7601856ffabcc48e # v4
with:
node-version: "18"

View File

@@ -519,7 +519,7 @@ jobs:
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
steps:
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
uses: actions/setup-node@cdca7365b2dadb8aad0a33bc7601856ffabcc48e # v4
with:
node-version: "18"

View File

@@ -22,7 +22,7 @@ jobs:
version: 9
- name: Setup Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
uses: actions/setup-node@cdca7365b2dadb8aad0a33bc7601856ffabcc48e # v4
with:
node-version: "20"
cache: "pnpm"

View File

@@ -1,71 +0,0 @@
name: Run Agent Eval
on:
schedule:
- cron: "0 * * * *"
pull_request:
branches:
- "**"
types: [opened, synchronize, reopened, labeled]
workflow_dispatch:
concurrency:
# Allow only one workflow per any non-`main` branch.
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_EVAL_TELEMETRY: 1
jobs:
run_eval:
timeout-minutes: 60
name: Run Agent Eval
if: >
github.repository_owner == 'zed-industries' &&
(github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'run-eval'))
runs-on:
- buildjet-16vcpu-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
cache-provider: "buildjet"
- name: Install Linux dependencies
run: ./script/linux
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: Compile eval
run: cargo build --package=eval
- name: Run eval
run: cargo run --package=eval -- --repetitions=3 --concurrency=1
# Even though the Linux runner is not stateful and in theory there is no need to do this cleanup,
# to avoid potential issues in the future if we choose to use a stateful Linux runner and forget to add code
# to clean up the config file, I've included the cleanup code here as a precaution.
# While it's not strictly necessary at this moment, I believe it's better to err on the side of caution.
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo

View File

@@ -18,7 +18,7 @@ jobs:
version: 9
- name: Setup Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
uses: actions/setup-node@cdca7365b2dadb8aad0a33bc7601856ffabcc48e # v4
with:
node-version: "20"
cache: "pnpm"

View File

@@ -23,7 +23,7 @@ jobs:
- buildjet-16vcpu-ubuntu-2204
steps:
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
uses: actions/setup-node@cdca7365b2dadb8aad0a33bc7601856ffabcc48e # v4
with:
node-version: "18"

View File

@@ -71,7 +71,7 @@ jobs:
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
steps:
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
uses: actions/setup-node@cdca7365b2dadb8aad0a33bc7601856ffabcc48e # v4
with:
node-version: "18"

1
.gitignore vendored
View File

@@ -18,6 +18,7 @@
.venv
.vscode
.wrangler
/.direnv
/assets/*licenses.*
/crates/collab/seed.json
/crates/theme/schemas/theme.json

121
.rules
View File

@@ -1,121 +0,0 @@
# Rust coding guidelines
* Prioritize code correctness and clarity. Speed and efficiency are secondary priorities unless otherwise specified.
* Do not write organizational comments or comments that summarize the code. Comments should only be written in order to explain "why" the code is written in some way in the case there is a reason that is tricky / non-obvious.
* Prefer implementing functionality in existing files unless it is a new logical component. Avoid creating many small files.
* Avoid using functions that panic like `unwrap()`, instead use mechanisms like `?` to propagate errors.
* Be careful with operations like indexing which may panic if the indexes are out of bounds.
* Never create files with `mod.rs` paths - prefer `src/some_module.rs` instead of `src/some_module/mod.rs`.
# GPUI
GPUI is a UI framework which also provides primitives for state and concurrency management.
## Context
Context types allow interaction with global state, windows, entities, and system services. They are typically passed to functions as the argument named `cx`. When a function takes callbacks they come after the `cx` parameter.
* `App` is the root context type, providing access to global state and read and update of entities.
* `Context<T>` is provided when updating an `Entity<T>`. This context dereferences into `App`, so functions which take `&App` can also take `&Context<T>`.
* `AsyncApp` and `AsyncWindowContext` are provided by `cx.spawn` and `cx.spawn_in`. These can be held across await points.
## `Window`
`Window` provides access to the state of an application window. It is passed to functions as an argument named `window` and comes before `cx` when present. It is used for managing focus, dispatching actions, directly drawing, getting user input state, etc.
## Entities
An `Entity<T>` is a handle to state of type `T`. With `thing: Entity<T>`:
* `thing.entity_id()` returns `EntityId`
* `thing.downgrade()` returns `WeakEntity<T>`
* `thing.read(cx: &App)` returns `&T`.
* `thing.read_with(cx, |thing: &T, cx: &App| ...)` returns the closure's return value.
* `thing.update(cx, |thing: &mut T, cx: &mut Context<T>| ...)` allows the closure to mutate the state, and provides a `Context<T>` for interacting with the entity. It returns the closure's return value.
* `thing.update_in(cx, |thing: &mut T, window: &mut Window, cx: &mut Context<T>| ...)` takes a `AsyncWindowContext` or `VisualTestContext`. It's the same as `update` while also providing the `Window`.
Within the closures, the inner `cx` provided to the closure must be used instead of the outer `cx` to avoid issues with multiple borrows.
Trying to update an entity while it's already being updated must be avoided as this will cause a panic.
When `read_with`, `update`, or `update_in` are used with an async context, the closure's return value is wrapped in an `anyhow::Result`.
`WeakEntity<T>` is a weak handle. It has `read_with`, `update`, and `update_in` methods that work the same, but always return an `anyhow::Result` so that they can fail if the entity no longer exists. This can be useful to avoid memory leaks - if entities have mutually recursive handles to each other they will never be dropped.
## Concurrency
All use of entities and UI rendering occurs on a single foreground thread.
`cx.spawn(async move |cx| ...)` runs an async closure on the foreground thread. Within the closure, `cx` is an async context like `AsyncApp` or `AsyncWindowContext`.
When the outer cx is a `Context<T>`, the use of `spawn` instead looks like `cx.spawn(async move |handle, cx| ...)`, where `handle: WeakEntity<T>`.
To do work on other threads, `cx.background_spawn(async move { ... })` is used. Often this background task is awaited on by a foreground task which uses the results to update state.
Both `cx.spawn` and `cx.background_spawn` return a `Task<R>`, which is a future that can be awaited upon. If this task is dropped, then its work is cancelled. To prevent this one of the following must be done:
* Awaiting the task in some other async context.
* Detaching the task via `task.detach()` or `task.detach_and_log_err(cx)`, allowing it to run indefinitely.
* Storing the task in a field, if the work should be halted when the struct is dropped.
A task which doesn't do anything but provide a value can be created with `Task::ready(value)`.
## Elements
The `Render` trait is used to render some state into an element tree that is laid out using flexbox layout. An `Entity<T>` where `T` implements `Render` is sometimes called a "view".
Example:
```
struct TextWithBorder(SharedString);
impl Render for TextWithBorder {
fn render(&mut self, _window: &mut Window, _cx: &mut Context<Self>) -> impl IntoElement {
div().border_1().child(self.0.clone())
}
}
```
Since `impl IntoElement for SharedString` exists, it can be used as an argument to `child`. `SharedString` is used to avoid copying strings, and is either an `&'static str` or `Arc<str>`.
UI components that are constructed just to be turned into elements can instead implement the `RenderOnce` trait, which is similar to `Render`, but its `render` method takes ownership of `self`. Types that implement this trait can use `#[derive(IntoElement)]` to use them directly as children.
The style methods on elements are similar to those used by Tailwind CSS.
If some attributes or children of an element tree are conditional, `.when(condition, |this| ...)` can be used to run the closure only when `condition` is true. Similarly, `.when_some(option, |this, value| ...)` runs the closure when the `Option` has a value.
## Input events
Input event handlers can be registered on an element via methods like `.on_click(|event, window, cx: &mut App| ...)`.
Often event handlers will want to update the entity that's in the current `Context<T>`. The `cx.listener` method provides this - its use looks like `.on_click(cx.listener(|this: &mut T, event, window, cx: &mut Context<T>| ...)`.
## Actions
Actions are dispatched via user keyboard interaction or in code via `window.dispatch_action(SomeAction.boxed_clone(), cx)` or `focus_handle.dispatch_action(&SomeAction, window, cx)`.
Actions which have no data inside are created and registered with the `actions!(some_namespace, [SomeAction, AnotherAction])` macro call.
Actions that do have data must implement `Clone, Default, PartialEq, Deserialize, JsonSchema` and can be registered with an `impl_actions!(some_namespace, [SomeActionWithData])` macro call.
Action handlers can be registered on an element via the event handler `.on_action(|action, window, cx| ...)`. Like other event handlers, this is often used with `cx.listener`.
## Notify
When a view's state has changed in a way that may affect its rendering, it should call `cx.notify()`. This will cause the view to be rerendered. It will also cause any observe callbacks registered for the entity with `cx.observe` to be called.
## Entity events
While updating an entity (`cx: Context<T>`), it can emit an event using `cx.emit(event)`. Entities register which events they can emit by declaring `impl EventEmitter<EventType> for EntityType {}`.
Other entities can then register a callback to handle these events by doing `cx.subscribe(other_entity, |this, other_entity, event, cx| ...)`. This will return a `Subscription` which deregisters the callback when dropped. Typically `cx.subscribe` happens when creating a new entity and the subscriptions are stored in a `_subscriptions: Vec<Subscription>` field.
## Recent API changes
GPUI has had some changes to its APIs. Always write code using the new APIs:
* `spawn` methods now take async closures (`AsyncFn`), and so should be called like `cx.spawn(async move |cx| ...)`.
* Use `Entity<T>`. This replaces `Model<T>` and `View<T>`, which no longer exist and should NEVER be used.
* Use `App` references. This replaces `AppContext` which no longer exists and should NEVER be used.
* Use `Context<T>` references. This replaces `ModelContext<T>` which no longer exists and should NEVER be used.
* `Window` is now passed around explicitly. The new interface adds a `Window` reference parameter to some methods, and adds some new "*_in" methods for plumbing `Window`. The old types `WindowContext` and `ViewContext<T>` should NEVER be used.

View File

@@ -1 +0,0 @@
.rules

View File

@@ -45,6 +45,5 @@
"hard_tabs": false,
"formatter": "auto",
"remove_trailing_whitespace_on_save": true,
"ensure_final_newline_on_save": true,
"file_scan_exclusions": ["crates/eval/worktrees/", "crates/eval/repos/"]
"ensure_final_newline_on_save": true
}

View File

@@ -1 +0,0 @@
.rules

140
Cargo.lock generated
View File

@@ -338,9 +338,9 @@ checksum = "34cd60c5e3152cef0a592f1b296f1cc93715d89d2551d85315828c3a09575ff4"
[[package]]
name = "anyhow"
version = "1.0.98"
version = "1.0.97"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e16d2d3311acee920a9eb8d33b8cbc1787ce4a264e85f964c2404b969bdcd487"
checksum = "dcfed56ad506cb2c684a14971b8861fdc3baaaae314b9e5f9bb532cbe3ba7a4f"
[[package]]
name = "approx"
@@ -449,6 +449,7 @@ dependencies = [
"smol",
"tempfile",
"util",
"which 6.0.3",
"workspace-hack",
]
@@ -1344,13 +1345,12 @@ dependencies = [
[[package]]
name = "aws-sdk-bedrockruntime"
version = "1.82.0"
version = "1.80.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8cb95f77abd4321348dd2f52a25e1de199732f54d2a35860ad20f5df21c66b44"
checksum = "39ee8ef191b908d013659ca2c0670215f0c920c781998e1dc55904d6bdb73b51"
dependencies = [
"aws-credential-types",
"aws-runtime",
"aws-sigv4",
"aws-smithy-async",
"aws-smithy-eventstream",
"aws-smithy-http",
@@ -1362,7 +1362,6 @@ dependencies = [
"bytes 1.10.1",
"fastrand 2.3.0",
"http 0.2.12",
"hyper 0.14.32",
"once_cell",
"regex-lite",
"tracing",
@@ -2726,9 +2725,9 @@ dependencies = [
[[package]]
name = "clap"
version = "4.5.36"
version = "4.5.35"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2df961d8c8a0d08aa9945718ccf584145eee3f3aa06cddbeac12933781102e04"
checksum = "d8aa86934b44c19c50f87cc2790e19f54f7a67aedb64101c2e1a2e5ecfb73944"
dependencies = [
"clap_builder",
"clap_derive",
@@ -2736,9 +2735,9 @@ dependencies = [
[[package]]
name = "clap_builder"
version = "4.5.36"
version = "4.5.35"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "132dbda40fb6753878316a489d5a1242a8ef2f0d9e47ba01c951ea8aa7d013a5"
checksum = "2414dbb2dd0695280da6ea9261e327479e9d37b0630f6b53ba2a11c60c679fd9"
dependencies = [
"anstream",
"anstyle",
@@ -3042,7 +3041,6 @@ dependencies = [
"strum 0.27.1",
"subtle",
"supermaven_api",
"task",
"telemetry_events",
"text",
"theme",
@@ -4014,7 +4012,6 @@ dependencies = [
"node_runtime",
"parking_lot",
"paths",
"proto",
"schemars",
"serde",
"serde_json",
@@ -4190,7 +4187,6 @@ dependencies = [
"command_palette_hooks",
"dap",
"db",
"debugger_tools",
"editor",
"env_logger 0.11.8",
"feature_flags",
@@ -4200,7 +4196,6 @@ dependencies = [
"language",
"log",
"menu",
"parking_lot",
"picker",
"pretty_assertions",
"project",
@@ -4895,13 +4890,13 @@ dependencies = [
"anyhow",
"assistant_tool",
"assistant_tools",
"async-trait",
"async-watch",
"chrono",
"clap",
"client",
"collections",
"context_server",
"dap",
"dirs 5.0.1",
"env_logger 0.11.8",
"extension",
@@ -4919,15 +4914,12 @@ dependencies = [
"paths",
"project",
"prompt_store",
"regex",
"release_channel",
"reqwest_client",
"rust-embed",
"serde",
"serde_json",
"settings",
"shellexpand 2.1.2",
"smol",
"telemetry",
"toml 0.8.20",
"unindent",
@@ -4936,6 +4928,36 @@ dependencies = [
"workspace-hack",
]
[[package]]
name = "evals"
version = "0.1.0"
dependencies = [
"anyhow",
"clap",
"client",
"clock",
"collections",
"dap",
"env_logger 0.11.8",
"feature_flags",
"fs",
"gpui",
"http_client",
"language",
"languages",
"node_runtime",
"open_ai",
"project",
"reqwest_client",
"semantic_index",
"serde",
"serde_json",
"settings",
"smol",
"util",
"workspace-hack",
]
[[package]]
name = "event-listener"
version = "2.5.3"
@@ -6179,7 +6201,6 @@ dependencies = [
"windows 0.61.1",
"windows-core 0.61.0",
"windows-numerics",
"windows-registry 0.5.1",
"workspace-hack",
"x11-clipboard",
"x11rb",
@@ -6282,7 +6303,6 @@ dependencies = [
"log",
"pest",
"pest_derive",
"rust-embed",
"serde",
"serde_json",
"thiserror 1.0.69",
@@ -7087,9 +7107,9 @@ dependencies = [
[[package]]
name = "indexmap"
version = "2.8.0"
version = "2.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3954d50fe15b02142bf25d3b8bdadb634ec3948f103d04ffe3031bc8fe9d7058"
checksum = "cea70ddb795996207ad57735b50c5982d8844f38ba9ee5f1aedcfb708a2aa11e"
dependencies = [
"equivalent",
"hashbrown 0.15.2",
@@ -7123,12 +7143,10 @@ dependencies = [
name = "inline_completion"
version = "0.1.0"
dependencies = [
"anyhow",
"gpui",
"language",
"project",
"workspace-hack",
"zed_llm_client",
]
[[package]]
@@ -7159,7 +7177,6 @@ dependencies = [
"workspace",
"workspace-hack",
"zed_actions",
"zed_llm_client",
"zeta",
]
@@ -7900,9 +7917,9 @@ checksum = "03087c2bad5e1034e8cace5926dec053fb3790248370865f5117a7d0213354c8"
[[package]]
name = "libc"
version = "0.2.172"
version = "0.2.171"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d750af042f7ef4f724306de029d18836c26c1765a54a6a3f094cbd23a7267ffa"
checksum = "c19937216e9d3aa9956d9bb8dfc0b0c8beb6058fc4f7a4dc4d850edf86a237d6"
[[package]]
name = "libdbus-sys"
@@ -10805,9 +10822,9 @@ dependencies = [
[[package]]
name = "proc-macro2"
version = "1.0.95"
version = "1.0.94"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "02b3e5e68a3a1a02aad3ec490a98007cbc13c37cbe84a3cd7b8e406d76e7f778"
checksum = "a31971752e70b8b2686d7e46ec17fb38dad4051d94024c88df49b667caea9c84"
dependencies = [
"unicode-ident",
]
@@ -11750,7 +11767,6 @@ name = "remote_server"
version = "0.1.0"
dependencies = [
"anyhow",
"askpass",
"async-watch",
"backtrace",
"cargo_toml",
@@ -11759,7 +11775,6 @@ dependencies = [
"client",
"clock",
"dap",
"dap_adapters",
"env_logger 0.11.8",
"extension",
"extension_host",
@@ -11948,7 +11963,7 @@ dependencies = [
"wasm-bindgen-futures",
"wasm-streams",
"web-sys",
"windows-registry 0.2.0",
"windows-registry",
]
[[package]]
@@ -12646,9 +12661,9 @@ dependencies = [
[[package]]
name = "sea-orm"
version = "1.1.10"
version = "1.1.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "21e61af841881c137d4bc8e0d8411cee9168548b404f9e4788e8af7e8f94bd4e"
checksum = "013d6c9e421b9c44c6eb6ebbee283b2a2c90eab2682088c4a2449706a42d117f"
dependencies = [
"async-stream",
"async-trait",
@@ -12675,9 +12690,9 @@ dependencies = [
[[package]]
name = "sea-orm-macros"
version = "1.1.10"
version = "1.1.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d6b86e3e77b548e6c6c1f612a1ca024d557dffdb81b838bf482ad3222140c77b"
checksum = "31feeff3ad5e999c64b2b8fd30933b2871911567c4d62dcf70b3effd970c7891"
dependencies = [
"heck 0.4.1",
"proc-macro2",
@@ -13261,9 +13276,9 @@ dependencies = [
[[package]]
name = "smallvec"
version = "1.14.0"
version = "1.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7fcf8323ef1faaee30a44a340193b1ac6814fd9b7b4e88e9d4519a3e4abe1cfd"
checksum = "8917285742e9f3e1683f0a9c4e6b57960b7314d0b08d30d1ecd426713ee2eee9"
dependencies = [
"serde",
]
@@ -13459,9 +13474,9 @@ dependencies = [
[[package]]
name = "sqlx"
version = "0.8.5"
version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f3c3a85280daca669cfd3bcb68a337882a8bc57ec882f72c5d13a430613a738e"
checksum = "4410e73b3c0d8442c5f99b425d7a435b5ee0ae4167b3196771dd3f7a01be745f"
dependencies = [
"sqlx-core",
"sqlx-macros",
@@ -13472,11 +13487,10 @@ dependencies = [
[[package]]
name = "sqlx-core"
version = "0.8.5"
version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f743f2a3cea30a58cd479013f75550e879009e3a02f616f18ca699335aa248c3"
checksum = "6a007b6936676aa9ab40207cde35daab0a04b823be8ae004368c0793b96a61e0"
dependencies = [
"base64 0.22.1",
"bigdecimal",
"bytes 1.10.1",
"chrono",
@@ -13497,6 +13511,7 @@ dependencies = [
"percent-encoding",
"rust_decimal",
"rustls 0.23.26",
"rustls-pemfile 2.2.0",
"serde",
"serde_json",
"sha2",
@@ -13513,9 +13528,9 @@ dependencies = [
[[package]]
name = "sqlx-macros"
version = "0.8.5"
version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7f4200e0fde19834956d4252347c12a083bdcb237d7a1a1446bffd8768417dce"
checksum = "3112e2ad78643fef903618d78cf0aec1cb3134b019730edb039b69eaf531f310"
dependencies = [
"proc-macro2",
"quote",
@@ -13526,9 +13541,9 @@ dependencies = [
[[package]]
name = "sqlx-macros-core"
version = "0.8.5"
version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "882ceaa29cade31beca7129b6beeb05737f44f82dbe2a9806ecea5a7093d00b7"
checksum = "4e9f90acc5ab146a99bf5061a7eb4976b573f560bc898ef3bf8435448dd5e7ad"
dependencies = [
"dotenvy",
"either",
@@ -13552,9 +13567,9 @@ dependencies = [
[[package]]
name = "sqlx-mysql"
version = "0.8.5"
version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0afdd3aa7a629683c2d750c2df343025545087081ab5942593a5288855b1b7a7"
checksum = "4560278f0e00ce64938540546f59f590d60beee33fffbd3b9cd47851e5fff233"
dependencies = [
"atoi",
"base64 0.22.1",
@@ -13599,9 +13614,9 @@ dependencies = [
[[package]]
name = "sqlx-postgres"
version = "0.8.5"
version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a0bedbe1bbb5e2615ef347a5e9d8cd7680fb63e77d9dafc0f29be15e53f1ebe6"
checksum = "c5b98a57f363ed6764d5b3a12bfedf62f07aa16e1856a7ddc2a0bb190a959613"
dependencies = [
"atoi",
"base64 0.22.1",
@@ -13642,9 +13657,9 @@ dependencies = [
[[package]]
name = "sqlx-sqlite"
version = "0.8.5"
version = "0.8.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c26083e9a520e8eb87a06b12347679b142dc2ea29e6e409f805644a7a979a5bc"
checksum = "f85ca71d3a5b24e64e1d08dd8fe36c6c95c339a896cc33068148906784620540"
dependencies = [
"atoi",
"chrono",
@@ -13660,7 +13675,6 @@ dependencies = [
"serde",
"serde_urlencoded",
"sqlx-core",
"thiserror 2.0.12",
"time",
"tracing",
"url",
@@ -14231,12 +14245,11 @@ version = "0.1.0"
dependencies = [
"anyhow",
"collections",
"dap-types",
"futures 0.3.31",
"gpui",
"hex",
"parking_lot",
"pretty_assertions",
"proto",
"schemars",
"serde",
"serde_json",
@@ -15857,6 +15870,7 @@ dependencies = [
"indoc",
"itertools 0.14.0",
"language",
"libc",
"log",
"lsp",
"multi_buffer",
@@ -17063,17 +17077,6 @@ dependencies = [
"windows-targets 0.52.6",
]
[[package]]
name = "windows-registry"
version = "0.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ad1da3e436dc7653dfdf3da67332e22bff09bb0e28b0239e1624499c7830842e"
dependencies = [
"windows-link",
"windows-result 0.3.2",
"windows-strings 0.4.0",
]
[[package]]
name = "windows-result"
version = "0.1.2"
@@ -18211,13 +18214,12 @@ dependencies = [
[[package]]
name = "zed"
version = "0.184.0"
version = "0.183.13"
dependencies = [
"activity_indicator",
"agent",
"anyhow",
"ashpd",
"askpass",
"assets",
"assistant",
"assistant_context_editor",


@@ -47,6 +47,7 @@ members = [
"crates/docs_preprocessor",
"crates/editor",
"crates/eval",
"crates/evals",
"crates/extension",
"crates/extension_api",
"crates/extension_cli",
@@ -696,6 +697,7 @@ breadcrumbs = { codegen-units = 1 }
collections = { codegen-units = 1 }
command_palette = { codegen-units = 1 }
command_palette_hooks = { codegen-units = 1 }
evals = { codegen-units = 1 }
extension_cli = { codegen-units = 1 }
feature_flags = { codegen-units = 1 }
file_icons = { codegen-units = 1 }


@@ -125,9 +125,7 @@
"shift-f10": "editor::OpenContextMenu",
"ctrl-shift-e": "editor::ToggleEditPrediction",
"f9": "editor::ToggleBreakpoint",
"shift-f9": "editor::EditLogBreakpoint",
"ctrl-shift-backspace": "editor::GoToPreviousChange",
"ctrl-shift-alt-backspace": "editor::GoToNextChange"
"shift-f9": "editor::EditLogBreakpoint"
}
},
{
@@ -675,7 +673,7 @@
}
},
{
"context": "Editor && mode == full",
"context": "!ContextEditor > Editor && mode == full",
"bindings": {
"alt-enter": "editor::OpenExcerpts",
"shift-enter": "editor::ExpandExcerpts",
@@ -721,7 +719,7 @@
"alt-shift-copy": "workspace::CopyRelativePath",
"alt-ctrl-shift-c": "workspace::CopyRelativePath",
"alt-ctrl-r": "outline_panel::RevealInFileManager",
"space": "outline_panel::OpenSelectedEntry",
"space": "outline_panel::Open",
"shift-down": "menu::SelectNext",
"shift-up": "menu::SelectPrevious",
"alt-enter": "editor::OpenExcerpts",
@@ -920,7 +918,6 @@
"ctrl-enter": "assistant::InlineAssist",
"alt-b": ["terminal::SendText", "\u001bb"],
"alt-f": ["terminal::SendText", "\u001bf"],
"alt-.": ["terminal::SendText", "\u001b."],
// Overrides for conflicting keybindings
"ctrl-b": ["terminal::SendKeystroke", "ctrl-b"],
"ctrl-c": ["terminal::SendKeystroke", "ctrl-c"],


@@ -542,9 +542,7 @@
"cmd-\\": "pane::SplitRight",
"cmd-k v": "markdown::OpenPreviewToTheSide",
"cmd-shift-v": "markdown::OpenPreview",
"ctrl-cmd-c": "editor::DisplayCursorNames",
"cmd-shift-backspace": "editor::GoToPreviousChange",
"cmd-shift-alt-backspace": "editor::GoToNextChange"
"ctrl-cmd-c": "editor::DisplayCursorNames"
}
},
{
@@ -738,7 +736,7 @@
}
},
{
"context": "Editor && mode == full",
"context": "!ContextEditor > Editor && mode == full",
"use_key_equivalents": true,
"bindings": {
"alt-enter": "editor::OpenExcerpts",
@@ -789,7 +787,7 @@
"cmd-alt-c": "workspace::CopyPath",
"alt-cmd-shift-c": "workspace::CopyRelativePath",
"alt-cmd-r": "outline_panel::RevealInFileManager",
"space": "outline_panel::OpenSelectedEntry",
"space": "outline_panel::Open",
"shift-down": "menu::SelectNext",
"shift-up": "menu::SelectPrevious",
"alt-enter": "editor::OpenExcerpts",
@@ -1005,7 +1003,6 @@
"alt-right": ["terminal::SendText", "\u001bf"],
"alt-b": ["terminal::SendText", "\u001bb"],
"alt-f": ["terminal::SendText", "\u001bf"],
"alt-.": ["terminal::SendText", "\u001b."],
// There are conflicting bindings for these keys in the global context.
// these bindings override them, remove at your own risk:
"up": ["terminal::SendKeystroke", "up"],
@@ -1028,10 +1025,10 @@
// Using `ctrl-shift-space` in Zed requires disabling the macOS global shortcut.
// System Preferences->Keyboard->Keyboard Shortcuts->Input Sources->Select the previous input source (uncheck)
"ctrl-shift-space": "terminal::ToggleViMode",
"ctrl-alt-up": "pane::SplitUp",
"ctrl-alt-down": "pane::SplitDown",
"ctrl-alt-left": "pane::SplitLeft",
"ctrl-alt-right": "pane::SplitRight"
"ctrl-k up": "pane::SplitUp",
"ctrl-k down": "pane::SplitDown",
"ctrl-k left": "pane::SplitLeft",
"ctrl-k right": "pane::SplitRight"
}
},
{


@@ -190,8 +190,7 @@
"ctrl-w g shift-d": "editor::GoToTypeDefinitionSplit",
"ctrl-w space": "editor::OpenExcerptsSplit",
"ctrl-w g space": "editor::OpenExcerptsSplit",
"ctrl-6": "pane::AlternateFile",
"ctrl-^": "pane::AlternateFile"
"ctrl-6": "pane::AlternateFile"
}
},
{
@@ -787,7 +786,8 @@
"{": "project_panel::SelectPrevDirectory",
"shift-g": "menu::SelectLast",
"g g": "menu::SelectFirst",
"-": "project_panel::SelectParent"
"-": "project_panel::SelectParent",
"ctrl-6": "pane::AlternateFile"
}
},
{


@@ -1489,12 +1489,7 @@
"use_multiline_find": false,
"use_smartcase_find": false,
"highlight_on_yank_duration": 200,
"custom_digraphs": {},
// Cursor shape for each mode.
// Specify the mode as the key and the shape as the value.
// The mode can be one of the following: "normal", "replace", "insert", "visual".
// The shape can be one of the following: "block", "bar", "underline", "hollow".
"cursor_shape": {}
"custom_digraphs": {}
},
// The server to connect to. If the environment variable
// ZED_SERVER_URL is set, it will override this setting.


@@ -1032,7 +1032,6 @@ impl ActiveThread {
}
}
ThreadEvent::CheckpointChanged => cx.notify(),
ThreadEvent::ReceivedTextChunk => {}
}
}


@@ -988,7 +988,7 @@ mod tests {
.await
.unwrap();
cx.update(|_, cx| {
action_log.update(cx, |log, cx| log.track_buffer(buffer.clone(), cx));
action_log.update(cx, |log, cx| log.buffer_read(buffer.clone(), cx));
buffer.update(cx, |buffer, cx| {
buffer
.edit(


@@ -1550,7 +1550,7 @@ impl AssistantPanel {
fn render_usage_banner(&self, cx: &mut Context<Self>) -> Option<AnyElement> {
let usage = self.thread.read(cx).last_usage()?;
Some(UsageBanner::new(zed_llm_client::Plan::ZedProTrial, usage).into_any_element())
Some(UsageBanner::new(zed_llm_client::Plan::ZedProTrial, usage.amount).into_any_element())
}
fn render_last_error(&self, cx: &mut Context<Self>) -> Option<AnyElement> {


@@ -315,7 +315,6 @@ pub struct Thread {
request_callback: Option<
Box<dyn FnMut(&LanguageModelRequest, &[Result<LanguageModelCompletionEvent, String>])>,
>,
remaining_turns: u32,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -369,7 +368,6 @@ impl Thread {
message_feedback: HashMap::default(),
last_auto_capture_at: None,
request_callback: None,
remaining_turns: u32::MAX,
}
}
@@ -444,7 +442,6 @@ impl Thread {
message_feedback: HashMap::default(),
last_auto_capture_at: None,
request_callback: None,
remaining_turns: u32::MAX,
}
}
@@ -525,7 +522,7 @@ impl Thread {
self.messages.iter().find(|message| message.id == id)
}
pub fn messages(&self) -> impl ExactSizeIterator<Item = &Message> {
pub fn messages(&self) -> impl Iterator<Item = &Message> {
self.messages.iter()
}
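
The two `messages()` signatures above differ only in the iterator bound: `ExactSizeIterator` lets callers ask for `.len()` without consuming the iterator, while a plain `Iterator` does not. A minimal, self-contained sketch of that difference, using stand-in `Thread` and `Message` types rather than the real crate types:

```rust
// Stand-in types: the real `Message` carries role, text, and tool-use data.
struct Message;

struct Thread {
    messages: Vec<Message>,
}

impl Thread {
    // Variant with the `ExactSizeIterator` bound.
    fn messages(&self) -> impl ExactSizeIterator<Item = &Message> {
        self.messages.iter()
    }
}

fn main() {
    let thread = Thread {
        messages: vec![Message, Message],
    };
    // `.len()` comes for free here; with a bare `Iterator` bound, callers would
    // need `.count()`, which consumes the iterator instead.
    assert_eq!(thread.messages().len(), 2);
}
```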
@@ -619,12 +616,24 @@ impl Thread {
.await
.unwrap_or(false);
if !equal {
if equal {
git_store
.update(cx, |store, cx| {
store.delete_checkpoint(pending_checkpoint.git_checkpoint, cx)
})?
.detach();
} else {
this.update(cx, |this, cx| {
this.insert_checkpoint(pending_checkpoint, cx)
})?;
}
git_store
.update(cx, |store, cx| {
store.delete_checkpoint(final_checkpoint, cx)
})?
.detach();
Ok(())
}
Err(_) => this.update(cx, |this, cx| {
@@ -758,18 +767,24 @@ impl Thread {
for ctx in &new_context {
match ctx {
AssistantContext::File(file_ctx) => {
log.track_buffer(file_ctx.context_buffer.buffer.clone(), cx);
log.buffer_added_as_context(file_ctx.context_buffer.buffer.clone(), cx);
}
AssistantContext::Directory(dir_ctx) => {
for context_buffer in &dir_ctx.context_buffers {
log.track_buffer(context_buffer.buffer.clone(), cx);
log.buffer_added_as_context(context_buffer.buffer.clone(), cx);
}
}
AssistantContext::Symbol(symbol_ctx) => {
log.track_buffer(symbol_ctx.context_symbol.buffer.clone(), cx);
log.buffer_added_as_context(
symbol_ctx.context_symbol.buffer.clone(),
cx,
);
}
AssistantContext::Selection(selection_context) => {
log.track_buffer(selection_context.context_buffer.buffer.clone(), cx);
log.buffer_added_as_context(
selection_context.context_buffer.buffer.clone(),
cx,
);
}
AssistantContext::FetchedUrl(_)
| AssistantContext::Thread(_)
@@ -943,21 +958,7 @@ impl Thread {
})
}
pub fn remaining_turns(&self) -> u32 {
self.remaining_turns
}
pub fn set_remaining_turns(&mut self, remaining_turns: u32) {
self.remaining_turns = remaining_turns;
}
pub fn send_to_model(&mut self, model: Arc<dyn LanguageModel>, cx: &mut Context<Self>) {
if self.remaining_turns == 0 {
return;
}
self.remaining_turns -= 1;
let mut request = self.to_completion_request(cx);
if model.supports_tools() {
request.tools = {
@@ -1262,7 +1263,6 @@ impl Thread {
current_token_usage = token_usage;
}
LanguageModelCompletionEvent::Text(chunk) => {
cx.emit(ThreadEvent::ReceivedTextChunk);
if let Some(last_message) = thread.messages.last_mut() {
if last_message.role == Role::Assistant {
last_message.push_text(&chunk);
@@ -1812,7 +1812,7 @@ impl Thread {
thread_data,
final_project_snapshot
);
client.telemetry().flush_events().await;
client.telemetry().flush_events();
Ok(())
})
@@ -1857,7 +1857,7 @@ impl Thread {
thread_data,
final_project_snapshot
);
client.telemetry().flush_events().await;
client.telemetry().flush_events();
Ok(())
})
@@ -2113,7 +2113,7 @@ impl Thread {
github_login = github_login
);
client.telemetry().flush_events().await;
client.telemetry().flush_events();
}
}
})
@@ -2231,7 +2231,6 @@ pub enum ThreadEvent {
ShowError(ThreadError),
UsageUpdated(RequestUsage),
StreamedCompletion,
ReceivedTextChunk,
StreamedAssistantText(MessageId, String),
StreamedAssistantThinking(MessageId, String),
StreamedToolUse {


@@ -1,30 +1,31 @@
use client::zed_urls;
use language_model::RequestUsage;
use ui::{Banner, ProgressBar, Severity, prelude::*};
use zed_llm_client::{Plan, UsageLimit};
#[derive(IntoElement, RegisterComponent)]
pub struct UsageBanner {
plan: Plan,
usage: RequestUsage,
requests: i32,
}
impl UsageBanner {
pub fn new(plan: Plan, usage: RequestUsage) -> Self {
Self { plan, usage }
pub fn new(plan: Plan, requests: i32) -> Self {
Self { plan, requests }
}
}
impl RenderOnce for UsageBanner {
fn render(self, _window: &mut Window, cx: &mut App) -> impl IntoElement {
let used_percentage = match self.usage.limit {
UsageLimit::Limited(limit) => Some((self.usage.amount as f32 / limit as f32) * 100.),
let request_limit = self.plan.model_requests_limit();
let used_percentage = match request_limit {
UsageLimit::Limited(limit) => Some((self.requests as f32 / limit as f32) * 100.),
UsageLimit::Unlimited => None,
};
let (severity, message) = match self.usage.limit {
let (severity, message) = match request_limit {
UsageLimit::Limited(limit) => {
if self.usage.amount >= limit {
if self.requests >= limit {
let message = match self.plan {
Plan::ZedPro => "Monthly request limit reached",
Plan::ZedProTrial => "Trial request limit reached",
@@ -32,7 +33,7 @@ impl RenderOnce for UsageBanner {
};
(Severity::Error, message)
} else if (self.usage.amount as f32 / limit as f32) >= 0.9 {
} else if (self.requests as f32 / limit as f32) >= 0.9 {
(Severity::Warning, "Approaching request limit")
} else {
let message = match self.plan {
@@ -80,11 +81,11 @@ impl RenderOnce for UsageBanner {
.child(ProgressBar::new("usage", percent, 100., cx))
}))
.child(
Label::new(match self.usage.limit {
Label::new(match request_limit {
UsageLimit::Limited(limit) => {
format!("{} / {limit}", self.usage.amount)
format!("{} / {limit}", self.requests)
}
UsageLimit::Unlimited => format!("{} / ∞", self.usage.amount),
UsageLimit::Unlimited => format!("{} / ∞", self.requests),
})
.size(LabelSize::Small)
.color(Color::Muted),
@@ -103,131 +104,74 @@ impl Component for UsageBanner {
}
fn preview(_window: &mut Window, _cx: &mut App) -> Option<AnyElement> {
let trial_limit = Plan::ZedProTrial.model_requests_limit();
let trial_examples = vec![
single_example(
"Zed Pro Trial - New User",
div()
.size_full()
.child(UsageBanner::new(
Plan::ZedProTrial,
RequestUsage {
limit: trial_limit,
amount: 10,
},
))
.child(UsageBanner::new(Plan::ZedProTrial, 10))
.into_any_element(),
),
single_example(
"Zed Pro Trial - Approaching Limit",
div()
.size_full()
.child(UsageBanner::new(
Plan::ZedProTrial,
RequestUsage {
limit: trial_limit,
amount: 135,
},
))
.child(UsageBanner::new(Plan::ZedProTrial, 135))
.into_any_element(),
),
single_example(
"Zed Pro Trial - Request Limit Reached",
div()
.size_full()
.child(UsageBanner::new(
Plan::ZedProTrial,
RequestUsage {
limit: trial_limit,
amount: 150,
},
))
.child(UsageBanner::new(Plan::ZedProTrial, 150))
.into_any_element(),
),
];
let free_limit = Plan::Free.model_requests_limit();
let free_examples = vec![
single_example(
"Free - Normal Usage",
div()
.size_full()
.child(UsageBanner::new(
Plan::Free,
RequestUsage {
limit: free_limit,
amount: 25,
},
))
.child(UsageBanner::new(Plan::Free, 25))
.into_any_element(),
),
single_example(
"Free - Approaching Limit",
div()
.size_full()
.child(UsageBanner::new(
Plan::Free,
RequestUsage {
limit: free_limit,
amount: 45,
},
))
.child(UsageBanner::new(Plan::Free, 45))
.into_any_element(),
),
single_example(
"Free - Request Limit Reached",
div()
.size_full()
.child(UsageBanner::new(
Plan::Free,
RequestUsage {
limit: free_limit,
amount: 50,
},
))
.child(UsageBanner::new(Plan::Free, 50))
.into_any_element(),
),
];
let zed_pro_limit = Plan::ZedPro.model_requests_limit();
let zed_pro_examples = vec![
single_example(
"Zed Pro - Normal Usage",
div()
.size_full()
.child(UsageBanner::new(
Plan::ZedPro,
RequestUsage {
limit: zed_pro_limit,
amount: 250,
},
))
.child(UsageBanner::new(Plan::ZedPro, 250))
.into_any_element(),
),
single_example(
"Zed Pro - Approaching Limit",
div()
.size_full()
.child(UsageBanner::new(
Plan::ZedPro,
RequestUsage {
limit: zed_pro_limit,
amount: 450,
},
))
.child(UsageBanner::new(Plan::ZedPro, 450))
.into_any_element(),
),
single_example(
"Zed Pro - Request Limit Reached",
div()
.size_full()
.child(UsageBanner::new(
Plan::ZedPro,
RequestUsage {
limit: zed_pro_limit,
amount: 500,
},
))
.child(UsageBanner::new(Plan::ZedPro, 500))
.into_any_element(),
),
];
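
In the hunks above, one side of the diff hands `UsageBanner` a full `RequestUsage` (amount plus limit), while the other passes only a request count and derives the limit from `Plan::model_requests_limit()`. A small sketch of the percentage and threshold math, with `UsageLimit` reduced to a local stand-in enum so the example runs on its own:

```rust
// Stand-in for the `zed_llm_client::UsageLimit` type referenced above.
enum UsageLimit {
    Limited(i32),
    Unlimited,
}

fn used_percentage(requests: i32, limit: UsageLimit) -> Option<f32> {
    match limit {
        UsageLimit::Limited(limit) => Some((requests as f32 / limit as f32) * 100.),
        UsageLimit::Unlimited => None,
    }
}

fn main() {
    // 135 of 150 trial requests lands at 90%, the "approaching limit" threshold
    // used by the render code above.
    let pct = used_percentage(135, UsageLimit::Limited(150)).unwrap();
    assert!((pct - 90.0).abs() < 0.01);
    println!("used {pct:.1}% of the request limit");

    // Unlimited plans render no progress bar at all.
    assert!(used_percentage(10, UsageLimit::Unlimited).is_none());
}
```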


@@ -18,4 +18,5 @@ gpui.workspace = true
smol.workspace = true
tempfile.workspace = true
util.workspace = true
which.workspace = true
workspace-hack.workspace = true


@@ -72,8 +72,6 @@ impl AskPassSession {
let (askpass_opened_tx, askpass_opened_rx) = oneshot::channel::<()>();
let listener =
UnixListener::bind(&askpass_socket).context("failed to create askpass socket")?;
let zed_path = std::env::current_exe()
.context("Failed to figure out current executable path for use in askpass")?;
let (askpass_kill_master_tx, askpass_kill_master_rx) = oneshot::channel::<()>();
let mut kill_tx = Some(askpass_kill_master_tx);
@@ -112,10 +110,21 @@ impl AskPassSession {
drop(temp_dir)
});
anyhow::ensure!(
which::which("nc").is_ok(),
"Cannot find `nc` command (netcat), which is required to connect over SSH."
);
// Create an askpass script that communicates back to this process.
let askpass_script = format!(
"{shebang}\n{print_args} | {zed_exe} --askpass={askpass_socket} 2> /dev/null \n",
zed_exe = zed_path.display(),
"{shebang}\n{print_args} | {nc} -U {askpass_socket} 2> /dev/null \n",
// on macOS `brew install netcat` provides the GNU netcat implementation
// which does not support -U.
nc = if cfg!(target_os = "macos") {
"/usr/bin/nc"
} else {
"nc"
},
askpass_socket = askpass_socket.display(),
print_args = "printf '%s\\0' \"$@\"",
shebang = "#!/bin/sh",
@@ -161,51 +170,6 @@ impl AskPassSession {
}
}
/// The main function for when Zed is running in netcat mode for use in askpass.
/// Called from both the remote server binary and the zed binary in their respective main functions.
#[cfg(unix)]
pub fn main(socket: &str) {
use std::io::{self, Read, Write};
use std::os::unix::net::UnixStream;
use std::process::exit;
let mut stream = match UnixStream::connect(socket) {
Ok(stream) => stream,
Err(err) => {
eprintln!("Error connecting to socket {}: {}", socket, err);
exit(1);
}
};
let mut buffer = Vec::new();
if let Err(err) = io::stdin().read_to_end(&mut buffer) {
eprintln!("Error reading from stdin: {}", err);
exit(1);
}
if buffer.last() != Some(&b'\0') {
buffer.push(b'\0');
}
if let Err(err) = stream.write_all(&buffer) {
eprintln!("Error writing to socket: {}", err);
exit(1);
}
let mut response = Vec::new();
if let Err(err) = stream.read_to_end(&mut response) {
eprintln!("Error reading from socket: {}", err);
exit(1);
}
if let Err(err) = io::stdout().write_all(&response) {
eprintln!("Error writing to stdout: {}", err);
exit(1);
}
}
#[cfg(not(unix))]
pub fn main(_socket: &str) {}
#[cfg(not(unix))]
pub struct AskPassSession {
path: PathBuf,


@@ -2611,7 +2611,9 @@ impl AssistantContext {
.map(MessageContent::Text),
);
completion_request.messages.push(request_message);
if !request_message.contents_empty() {
completion_request.messages.push(request_message);
}
}
if let RequestType::SuggestEdits = request_type {


@@ -39,9 +39,10 @@ impl ActionLog {
self.edited_since_project_diagnostics_check
}
fn track_buffer_internal(
fn track_buffer(
&mut self,
buffer: Entity<Buffer>,
created: bool,
cx: &mut Context<Self>,
) -> &mut TrackedBuffer {
let tracked_buffer = self
@@ -58,11 +59,7 @@ impl ActionLog {
let base_text;
let status;
let unreviewed_changes;
if buffer
.read(cx)
.file()
.map_or(true, |file| !file.disk_state().exists())
{
if created {
base_text = Rope::default();
status = TrackedBufferStatus::Created;
unreviewed_changes = Patch::new(vec![Edit {
@@ -149,7 +146,7 @@ impl ActionLog {
// resurrected externally, we want to clear the changes we
// were tracking and reset the buffer's state.
self.tracked_buffers.remove(&buffer);
self.track_buffer_internal(buffer, cx);
self.track_buffer(buffer, false, cx);
}
cx.notify();
}
@@ -263,15 +260,26 @@ impl ActionLog {
}
/// Track a buffer as read, so we can notify the model about user edits.
pub fn track_buffer(&mut self, buffer: Entity<Buffer>, cx: &mut Context<Self>) {
self.track_buffer_internal(buffer, cx);
pub fn buffer_read(&mut self, buffer: Entity<Buffer>, cx: &mut Context<Self>) {
self.track_buffer(buffer, false, cx);
}
/// Track a buffer that was added as context, so we can notify the model about user edits.
pub fn buffer_added_as_context(&mut self, buffer: Entity<Buffer>, cx: &mut Context<Self>) {
self.track_buffer(buffer, false, cx);
}
/// Track a buffer that is about to be created, so we can notify the model about user edits.
pub fn will_create_buffer(&mut self, buffer: Entity<Buffer>, cx: &mut Context<Self>) {
self.track_buffer(buffer.clone(), true, cx);
self.buffer_edited(buffer, cx)
}
/// Mark a buffer as edited, so we can refresh it in the context
pub fn buffer_edited(&mut self, buffer: Entity<Buffer>, cx: &mut Context<Self>) {
self.edited_since_project_diagnostics_check = true;
let tracked_buffer = self.track_buffer_internal(buffer.clone(), cx);
let tracked_buffer = self.track_buffer(buffer.clone(), false, cx);
if let TrackedBufferStatus::Deleted = tracked_buffer.status {
tracked_buffer.status = TrackedBufferStatus::Modified;
}
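
One side of this diff replaces the single public `track_buffer` entry point with intent-specific wrappers (`buffer_read`, `buffer_added_as_context`, `will_create_buffer`, `buffer_edited`) over an internal `track_buffer(buffer, created, cx)` helper. A compilable stand-in sketch of that surface, with gpui entities and contexts replaced by plain buffer names:

```rust
// Stand-in: the real methods take `Entity<Buffer>` handles and a gpui `Context`.
#[derive(Default)]
struct ActionLog {
    tracked: Vec<(String, bool)>, // (buffer name, created flag)
}

impl ActionLog {
    // Internal helper, mirroring `track_buffer(buffer, created, cx)`.
    fn track_buffer(&mut self, buffer: &str, created: bool) {
        self.tracked.push((buffer.to_string(), created));
    }
    /// Buffer the model read.
    fn buffer_read(&mut self, buffer: &str) {
        self.track_buffer(buffer, false);
    }
    /// Buffer attached to the conversation as context.
    fn buffer_added_as_context(&mut self, buffer: &str) {
        self.track_buffer(buffer, false);
    }
    /// Buffer a tool is about to create from scratch.
    fn will_create_buffer(&mut self, buffer: &str) {
        self.track_buffer(buffer, true);
    }
}

fn main() {
    let mut log = ActionLog::default();
    log.buffer_read("src/lib.rs");
    log.buffer_added_as_context("README.md");
    log.will_create_buffer("src/generated.rs");
    assert_eq!(log.tracked.len(), 3);
    assert!(log.tracked.iter().any(|(_, created)| *created));
}
```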
@@ -279,7 +287,7 @@ impl ActionLog {
}
pub fn will_delete_buffer(&mut self, buffer: Entity<Buffer>, cx: &mut Context<Self>) {
let tracked_buffer = self.track_buffer_internal(buffer.clone(), cx);
let tracked_buffer = self.track_buffer(buffer.clone(), false, cx);
match tracked_buffer.status {
TrackedBufferStatus::Created => {
self.tracked_buffers.remove(&buffer);
@@ -389,7 +397,7 @@ impl ActionLog {
// Clear all tracked changes for this buffer and start over as if we just read it.
self.tracked_buffers.remove(&buffer);
self.track_buffer_internal(buffer.clone(), cx);
self.track_buffer(buffer.clone(), false, cx);
cx.notify();
save
}
@@ -687,20 +695,12 @@ mod tests {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(path!("/dir"), json!({"file": "abc\ndef\nghi\njkl\nmno"}))
.await;
let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
let project = Project::test(fs.clone(), [], cx).await;
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let file_path = project
.read_with(cx, |project, cx| project.find_project_path("dir/file", cx))
.unwrap();
let buffer = project
.update(cx, |project, cx| project.open_buffer(file_path, cx))
.await
.unwrap();
let buffer = cx.new(|cx| Buffer::local("abc\ndef\nghi\njkl\nmno", cx));
cx.update(|cx| {
action_log.update(cx, |log, cx| log.track_buffer(buffer.clone(), cx));
action_log.update(cx, |log, cx| log.buffer_read(buffer.clone(), cx));
buffer.update(cx, |buffer, cx| {
buffer
.edit([(Point::new(1, 1)..Point::new(1, 2), "E")], None, cx)
@@ -765,23 +765,12 @@ mod tests {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(
path!("/dir"),
json!({"file": "abc\ndef\nghi\njkl\nmno\npqr"}),
)
.await;
let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
let project = Project::test(fs.clone(), [], cx).await;
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let file_path = project
.read_with(cx, |project, cx| project.find_project_path("dir/file", cx))
.unwrap();
let buffer = project
.update(cx, |project, cx| project.open_buffer(file_path, cx))
.await
.unwrap();
let buffer = cx.new(|cx| Buffer::local("abc\ndef\nghi\njkl\nmno\npqr", cx));
cx.update(|cx| {
action_log.update(cx, |log, cx| log.track_buffer(buffer.clone(), cx));
action_log.update(cx, |log, cx| log.buffer_read(buffer.clone(), cx));
buffer.update(cx, |buffer, cx| {
buffer
.edit([(Point::new(1, 0)..Point::new(2, 0), "")], None, cx)
@@ -850,20 +839,12 @@ mod tests {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(path!("/dir"), json!({"file": "abc\ndef\nghi\njkl\nmno"}))
.await;
let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
let project = Project::test(fs.clone(), [], cx).await;
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let file_path = project
.read_with(cx, |project, cx| project.find_project_path("dir/file", cx))
.unwrap();
let buffer = project
.update(cx, |project, cx| project.open_buffer(file_path, cx))
.await
.unwrap();
let buffer = cx.new(|cx| Buffer::local("abc\ndef\nghi\njkl\nmno", cx));
cx.update(|cx| {
action_log.update(cx, |log, cx| log.track_buffer(buffer.clone(), cx));
action_log.update(cx, |log, cx| log.buffer_read(buffer.clone(), cx));
buffer.update(cx, |buffer, cx| {
buffer
.edit([(Point::new(1, 2)..Point::new(2, 3), "F\nGHI")], None, cx)
@@ -947,21 +928,25 @@ mod tests {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(path!("/dir"), json!({})).await;
let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let fs = FakeFs::new(cx.executor());
fs.insert_tree(path!("/dir"), json!({})).await;
let project = Project::test(fs.clone(), [path!("/dir").as_ref()], cx).await;
let file_path = project
.read_with(cx, |project, cx| project.find_project_path("dir/file1", cx))
.unwrap();
// Simulate file2 being recreated by a tool.
let buffer = project
.update(cx, |project, cx| project.open_buffer(file_path, cx))
.await
.unwrap();
cx.update(|cx| {
action_log.update(cx, |log, cx| log.track_buffer(buffer.clone(), cx));
buffer.update(cx, |buffer, cx| buffer.set_text("lorem", cx));
action_log.update(cx, |log, cx| log.buffer_edited(buffer.clone(), cx));
action_log.update(cx, |log, cx| log.will_create_buffer(buffer.clone(), cx));
});
project
.update(cx, |project, cx| project.save_buffer(buffer.clone(), cx))
@@ -1082,9 +1067,8 @@ mod tests {
.update(cx, |project, cx| project.open_buffer(file2_path, cx))
.await
.unwrap();
action_log.update(cx, |log, cx| log.track_buffer(buffer2.clone(), cx));
buffer2.update(cx, |buffer, cx| buffer.set_text("IPSUM", cx));
action_log.update(cx, |log, cx| log.buffer_edited(buffer2.clone(), cx));
action_log.update(cx, |log, cx| log.will_create_buffer(buffer2.clone(), cx));
project
.update(cx, |project, cx| project.save_buffer(buffer2.clone(), cx))
.await
@@ -1129,7 +1113,7 @@ mod tests {
.unwrap();
cx.update(|cx| {
action_log.update(cx, |log, cx| log.track_buffer(buffer.clone(), cx));
action_log.update(cx, |log, cx| log.buffer_read(buffer.clone(), cx));
buffer.update(cx, |buffer, cx| {
buffer
.edit([(Point::new(1, 1)..Point::new(1, 2), "E\nXYZ")], None, cx)
@@ -1264,7 +1248,7 @@ mod tests {
.unwrap();
cx.update(|cx| {
action_log.update(cx, |log, cx| log.track_buffer(buffer.clone(), cx));
action_log.update(cx, |log, cx| log.buffer_read(buffer.clone(), cx));
buffer.update(cx, |buffer, cx| {
buffer
.edit([(Point::new(1, 1)..Point::new(1, 2), "E\nXYZ")], None, cx)
@@ -1397,9 +1381,8 @@ mod tests {
.await
.unwrap();
cx.update(|cx| {
action_log.update(cx, |log, cx| log.track_buffer(buffer.clone(), cx));
buffer.update(cx, |buffer, cx| buffer.set_text("content", cx));
action_log.update(cx, |log, cx| log.buffer_edited(buffer.clone(), cx));
action_log.update(cx, |log, cx| log.will_create_buffer(buffer.clone(), cx));
});
project
.update(cx, |project, cx| project.save_buffer(buffer.clone(), cx))
@@ -1455,7 +1438,7 @@ mod tests {
.await
.unwrap();
action_log.update(cx, |log, cx| log.track_buffer(buffer.clone(), cx));
action_log.update(cx, |log, cx| log.buffer_read(buffer.clone(), cx));
for _ in 0..operations {
match rng.gen_range(0..100) {
@@ -1507,7 +1490,7 @@ mod tests {
log::info!("quiescing...");
cx.run_until_parked();
action_log.update(cx, |log, cx| {
let tracked_buffer = log.track_buffer_internal(buffer.clone(), cx);
let tracked_buffer = log.track_buffer(buffer.clone(), false, cx);
let mut old_text = tracked_buffer.base_text.clone();
let new_text = buffer.read(cx).as_rope();
for edit in tracked_buffer.unreviewed_changes.edits() {


@@ -56,8 +56,6 @@ use crate::symbol_info_tool::SymbolInfoTool;
use crate::terminal_tool::TerminalTool;
use crate::thinking_tool::ThinkingTool;
pub use path_search_tool::PathSearchToolInput;
pub fn init(http_client: Arc<HttpClientWithUrl>, cx: &mut App) {
assistant_tool::init(cx);


@@ -159,7 +159,7 @@ impl Tool for CodeActionTool {
};
action_log.update(cx, |action_log, cx| {
action_log.track_buffer(buffer.clone(), cx);
action_log.buffer_read(buffer.clone(), cx);
})?;
let range = {


@@ -174,7 +174,7 @@ pub async fn file_outline(
};
action_log.update(cx, |action_log, cx| {
action_log.track_buffer(buffer.clone(), cx);
action_log.buffer_read(buffer.clone(), cx);
})?;
// Wait until the buffer has been fully parsed, so that we can read its outline.


@@ -209,7 +209,7 @@ impl Tool for ContentsTool {
})?;
action_log.update(cx, |log, cx| {
log.track_buffer(buffer, cx);
log.buffer_read(buffer, cx);
})?;
Ok(result)
@@ -221,7 +221,7 @@ impl Tool for ContentsTool {
let result = buffer.read_with(cx, |buffer, _cx| buffer.text())?;
action_log.update(cx, |log, cx| {
log.track_buffer(buffer, cx);
log.buffer_read(buffer, cx);
})?;
Ok(result)


@@ -112,12 +112,9 @@ impl Tool for CreateFileTool {
.await
.map_err(|err| anyhow!("Unable to open buffer for {destination_path}: {err}"))?;
cx.update(|cx| {
action_log.update(cx, |action_log, cx| {
action_log.track_buffer(buffer.clone(), cx)
});
buffer.update(cx, |buffer, cx| buffer.set_text(contents, cx));
action_log.update(cx, |action_log, cx| {
action_log.buffer_edited(buffer.clone(), cx)
action_log.will_create_buffer(buffer.clone(), cx)
});
})?;


@@ -182,7 +182,7 @@ impl Tool for EditFileTool {
let snapshot = cx.update(|cx| {
action_log.update(cx, |log, cx| {
log.track_buffer(buffer.clone(), cx)
log.buffer_read(buffer.clone(), cx)
});
let snapshot = buffer.update(cx, |buffer, cx| {
buffer.finalize_last_transaction();


@@ -134,7 +134,7 @@ impl Tool for ReadFileTool {
})?;
action_log.update(cx, |log, cx| {
log.track_buffer(buffer, cx);
log.buffer_read(buffer, cx);
})?;
Ok(result)
@@ -147,7 +147,7 @@ impl Tool for ReadFileTool {
let result = buffer.read_with(cx, |buffer, _cx| buffer.text())?;
action_log.update(cx, |log, cx| {
log.track_buffer(buffer, cx);
log.buffer_read(buffer, cx);
})?;
Ok(result)


@@ -106,7 +106,7 @@ impl Tool for RenameTool {
};
action_log.update(cx, |action_log, cx| {
action_log.track_buffer(buffer.clone(), cx);
action_log.buffer_read(buffer.clone(), cx);
})?;
let position = {


@@ -140,7 +140,7 @@ impl Tool for SymbolInfoTool {
};
action_log.update(cx, |action_log, cx| {
action_log.track_buffer(buffer.clone(), cx);
action_log.buffer_read(buffer.clone(), cx);
})?;
let position = {


@@ -4,7 +4,7 @@ use crate::TelemetrySettings;
use anyhow::Result;
use clock::SystemClock;
use futures::channel::mpsc;
use futures::{Future, FutureExt, StreamExt};
use futures::{Future, StreamExt};
use gpui::{App, AppContext as _, BackgroundExecutor, Task};
use http_client::{self, AsyncBody, HttpClient, HttpClientWithUrl, Method, Request};
use parking_lot::Mutex;
@@ -290,10 +290,6 @@ impl Telemetry {
paths::logs_dir().join("telemetry.log")
}
pub fn has_checksum_seed(&self) -> bool {
ZED_CLIENT_CHECKSUM_SEED.is_some()
}
pub fn start(
self: &Arc<Self>,
system_id: Option<String>,
@@ -434,7 +430,7 @@ impl Telemetry {
let executor = self.executor.clone();
state.flush_events_task = Some(self.executor.spawn(async move {
executor.timer(FLUSH_INTERVAL).await;
this.flush_events().detach();
this.flush_events();
}));
}
@@ -460,7 +456,7 @@ impl Telemetry {
if state.installation_id.is_some() && state.events_queue.len() >= state.max_queue_size {
drop(state);
self.flush_events().detach();
self.flush_events();
}
}
@@ -503,59 +499,60 @@ impl Telemetry {
.body(json_bytes.into())?)
}
pub fn flush_events(self: &Arc<Self>) -> Task<()> {
pub fn flush_events(self: &Arc<Self>) {
let mut state = self.state.lock();
state.first_event_date_time = None;
let mut events = mem::take(&mut state.events_queue);
state.flush_events_task.take();
drop(state);
if events.is_empty() {
return Task::ready(());
return;
}
let this = self.clone();
self.executor.spawn(
async move {
let mut json_bytes = Vec::new();
self.executor
.spawn(
async move {
let mut json_bytes = Vec::new();
if let Some(file) = &mut this.state.lock().log_file {
for event in &mut events {
json_bytes.clear();
serde_json::to_writer(&mut json_bytes, event)?;
file.write_all(&json_bytes)?;
file.write_all(b"\n")?;
if let Some(file) = &mut this.state.lock().log_file {
for event in &mut events {
json_bytes.clear();
serde_json::to_writer(&mut json_bytes, event)?;
file.write_all(&json_bytes)?;
file.write_all(b"\n")?;
}
}
}
let request_body = {
let state = this.state.lock();
let request_body = {
let state = this.state.lock();
EventRequestBody {
system_id: state.system_id.as_deref().map(Into::into),
installation_id: state.installation_id.as_deref().map(Into::into),
session_id: state.session_id.clone(),
metrics_id: state.metrics_id.as_deref().map(Into::into),
is_staff: state.is_staff,
app_version: state.app_version.clone(),
os_name: state.os_name.clone(),
os_version: state.os_version.clone(),
architecture: state.architecture.to_string(),
EventRequestBody {
system_id: state.system_id.as_deref().map(Into::into),
installation_id: state.installation_id.as_deref().map(Into::into),
session_id: state.session_id.clone(),
metrics_id: state.metrics_id.as_deref().map(Into::into),
is_staff: state.is_staff,
app_version: state.app_version.clone(),
os_name: state.os_name.clone(),
os_version: state.os_version.clone(),
architecture: state.architecture.to_string(),
release_channel: state.release_channel.map(Into::into),
events,
release_channel: state.release_channel.map(Into::into),
events,
}
};
let request = this.build_request(json_bytes, request_body)?;
let response = this.http_client.send(request).await?;
if response.status() != 200 {
log::error!("Failed to send events: HTTP {:?}", response.status());
}
};
let request = this.build_request(json_bytes, request_body)?;
let response = this.http_client.send(request).await?;
if response.status() != 200 {
log::error!("Failed to send events: HTTP {:?}", response.status());
anyhow::Ok(())
}
anyhow::Ok(())
}
.log_err()
.map(|_| ()),
)
.log_err(),
)
.detach();
}
}


@@ -128,7 +128,6 @@ serde_json.workspace = true
session = { workspace = true, features = ["test-support"] }
settings = { workspace = true, features = ["test-support"] }
sqlx = { version = "0.8", features = ["sqlite"] }
task.workspace = true
theme.workspace = true
unindent.workspace = true
util.workspace = true


@@ -117,7 +117,6 @@ CREATE TABLE "project_repositories" (
"is_deleted" BOOL NOT NULL,
"current_merge_conflicts" VARCHAR,
"branch_summary" VARCHAR,
"head_commit_details" VARCHAR,
PRIMARY KEY (project_id, id)
);
@@ -492,8 +491,7 @@ CREATE TABLE IF NOT EXISTS billing_customers (
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
user_id INTEGER NOT NULL REFERENCES users (id),
has_overdue_invoices BOOLEAN NOT NULL DEFAULT FALSE,
stripe_customer_id TEXT NOT NULL,
trial_started_at TIMESTAMP
stripe_customer_id TEXT NOT NULL
);
CREATE UNIQUE INDEX "uix_billing_customers_on_user_id" ON billing_customers (user_id);


@@ -1,2 +0,0 @@
alter table billing_customers
add column trial_started_at timestamp without time zone;


@@ -287,7 +287,7 @@ async fn create_billing_subscription(
}
}
let customer_id = if let Some(existing_customer) = &existing_billing_customer {
let customer_id = if let Some(existing_customer) = existing_billing_customer {
CustomerId::from_str(&existing_customer.stripe_customer_id)
.context("failed to parse customer ID")?
} else {
@@ -320,18 +320,9 @@ async fn create_billing_subscription(
.await?
}
Some(ProductCode::ZedProTrial) => {
if let Some(existing_billing_customer) = &existing_billing_customer {
if existing_billing_customer.trial_started_at.is_some() {
return Err(Error::http(
StatusCode::FORBIDDEN,
"user already used free trial".into(),
));
}
}
stripe_billing
.checkout_with_zed_pro_trial(
app.config.zed_pro_price_id()?,
.checkout_with_price(
app.config.zed_pro_trial_price_id()?,
customer_id,
&user.github_login,
&success_url,
@@ -453,34 +444,12 @@ async fn manage_billing_subscription(
ManageSubscriptionIntent::ManageSubscription => None,
ManageSubscriptionIntent::UpgradeToPro => {
let zed_pro_price_id = app.config.zed_pro_price_id()?;
let zed_pro_trial_price_id = app.config.zed_pro_trial_price_id()?;
let zed_free_price_id = app.config.zed_free_price_id()?;
let stripe_subscription =
Subscription::retrieve(&stripe_client, &subscription_id, &[]).await?;
let is_on_zed_pro_trial = stripe_subscription.status == SubscriptionStatus::Trialing
&& stripe_subscription.items.data.iter().any(|item| {
item.price
.as_ref()
.map_or(false, |price| price.id == zed_pro_price_id)
});
if is_on_zed_pro_trial {
// If the user is already on a Zed Pro trial and wants to upgrade to Pro, we just need to end their trial early.
Subscription::update(
&stripe_client,
&stripe_subscription.id,
stripe::UpdateSubscription {
trial_end: Some(stripe::Scheduled::now()),
..Default::default()
},
)
.await?;
return Ok(Json(ManageBillingSubscriptionResponse {
billing_portal_session_url: None,
}));
}
let subscription_item_to_update = stripe_subscription
.items
.data
@@ -488,7 +457,7 @@ async fn manage_billing_subscription(
.find_map(|item| {
let price = item.price.as_ref()?;
if price.id == zed_free_price_id {
if price.id == zed_free_price_id || price.id == zed_pro_trial_price_id {
Some(item.id.clone())
} else {
None
@@ -802,17 +771,16 @@ async fn handle_customer_subscription_event(
let subscription_kind = maybe!({
let zed_pro_price_id = app.config.zed_pro_price_id().ok()?;
let zed_pro_trial_price_id = app.config.zed_pro_trial_price_id().ok()?;
let zed_free_price_id = app.config.zed_free_price_id().ok()?;
subscription.items.data.iter().find_map(|item| {
let price = item.price.as_ref()?;
if price.id == zed_pro_price_id {
Some(if subscription.status == SubscriptionStatus::Trialing {
SubscriptionKind::ZedProTrial
} else {
SubscriptionKind::ZedPro
})
Some(SubscriptionKind::ZedPro)
} else if price.id == zed_pro_trial_price_id {
Some(SubscriptionKind::ZedProTrial)
} else if price.id == zed_free_price_id {
Some(SubscriptionKind::ZedFree)
} else {
@@ -826,24 +794,6 @@ async fn handle_customer_subscription_event(
.await?
.ok_or_else(|| anyhow!("billing customer not found"))?;
if let Some(SubscriptionKind::ZedProTrial) = subscription_kind {
if subscription.status == SubscriptionStatus::Trialing {
let current_period_start =
DateTime::from_timestamp(subscription.current_period_start, 0)
.ok_or_else(|| anyhow!("No trial subscription period start"))?;
app.db
.update_billing_customer(
billing_customer.id,
&UpdateBillingCustomerParams {
trial_started_at: ActiveValue::set(Some(current_period_start.naive_utc())),
..Default::default()
},
)
.await?;
}
}
let was_canceled_due_to_payment_failure = subscription.status == SubscriptionStatus::Canceled
&& subscription
.cancellation_details
@@ -870,28 +820,6 @@ async fn handle_customer_subscription_event(
.get_billing_subscription_by_stripe_subscription_id(&subscription.id)
.await?
{
let llm_db = app
.llm_db
.clone()
.ok_or_else(|| anyhow!("LLM DB not initialized"))?;
let new_period_start_at =
chrono::DateTime::from_timestamp(subscription.current_period_start, 0)
.ok_or_else(|| anyhow!("No subscription period start"))?;
let new_period_end_at =
chrono::DateTime::from_timestamp(subscription.current_period_end, 0)
.ok_or_else(|| anyhow!("No subscription period end"))?;
llm_db
.transfer_existing_subscription_usage(
billing_customer.user_id,
&existing_subscription,
subscription_kind,
new_period_start_at,
new_period_end_at,
)
.await?;
app.db
.update_billing_subscription(
existing_subscription.id,


@@ -516,7 +516,6 @@ pub async fn post_events(
if let Some(kinesis_client) = app.kinesis_client.clone() {
if let Some(stream) = app.config.kinesis_stream.clone() {
let mut request = kinesis_client.put_records().stream_name(stream);
let mut has_records = false;
for row in for_snowflake(
request_body.clone(),
first_event_at,
@@ -531,12 +530,9 @@ pub async fn post_events(
.build()
.unwrap(),
);
has_records = true;
}
}
if has_records {
request.send().await.log_err();
}
request.send().await.log_err();
}
};
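
One side of this hunk adds a `has_records` flag so the Kinesis `PutRecords` call is only issued when at least one record was actually queued. A stand-in sketch of that guard; the AWS client is replaced by a fake batch type so the example runs on its own:

```rust
// Fake stand-in for the Kinesis put-records builder used above.
struct FakeBatch {
    records: Vec<String>,
}

impl FakeBatch {
    fn put_record(&mut self, record: String) {
        self.records.push(record);
    }
    fn send(&self) {
        println!("sending {} records", self.records.len());
    }
}

fn main() {
    // An empty batch, e.g. when every event was filtered out upstream.
    let rows: Vec<String> = Vec::new();

    let mut request = FakeBatch { records: Vec::new() };
    let mut has_records = false;
    for row in rows {
        request.put_record(row);
        has_records = true;
    }
    // Without the flag, an empty PutRecords request would still go out.
    if has_records {
        request.send();
    } else {
        println!("skipping empty batch");
    }
}
```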
@@ -559,7 +555,7 @@ fn for_snowflake(
country_code: Option<String>,
checksum_matched: bool,
) -> impl Iterator<Item = SnowflakeRow> {
body.events.into_iter().filter_map(move |event| {
body.events.into_iter().flat_map(move |event| {
let timestamp =
first_event_at + Duration::milliseconds(event.milliseconds_since_first_event);
// We will need to double check, but I believe all of the events that
@@ -748,11 +744,9 @@ fn for_snowflake(
// NOTE: most amplitude user properties are read out of our event_properties
// dictionary. See https://app.amplitude.com/data/zed/Zed/sources/detail/production/falcon%3A159998
// for how that is configured.
let user_properties = body.is_staff.map(|is_staff| {
serde_json::json!({
"is_staff": is_staff,
})
});
let user_properties = Some(serde_json::json!({
"is_staff": body.is_staff,
}));
Some(SnowflakeRow {
time: timestamp,


@@ -11,7 +11,6 @@ pub struct UpdateBillingCustomerParams {
pub user_id: ActiveValue<UserId>,
pub stripe_customer_id: ActiveValue<String>,
pub has_overdue_invoices: ActiveValue<bool>,
pub trial_started_at: ActiveValue<Option<DateTime>>,
}
impl Database {
@@ -46,8 +45,7 @@ impl Database {
user_id: params.user_id.clone(),
stripe_customer_id: params.stripe_customer_id.clone(),
has_overdue_invoices: params.has_overdue_invoices.clone(),
trial_started_at: params.trial_started_at.clone(),
created_at: ActiveValue::not_set(),
..Default::default()
})
.exec(&*tx)
.await?;


@@ -119,15 +119,8 @@ impl Database {
.filter(
Condition::all()
.add(
Condition::any()
.add(
billing_subscription::Column::StripeSubscriptionStatus
.eq(StripeSubscriptionStatus::Active),
)
.add(
billing_subscription::Column::StripeSubscriptionStatus
.eq(StripeSubscriptionStatus::Trialing),
),
billing_subscription::Column::StripeSubscriptionStatus
.eq(StripeSubscriptionStatus::Active),
)
.add(billing_subscription::Column::Kind.is_not_null()),
)


@@ -348,10 +348,9 @@ impl Database {
.unwrap(),
)),
// Old clients do not use abs path, entry ids or head_commit_details.
// Old clients do not use abs path or entry ids.
abs_path: ActiveValue::set(String::new()),
entry_ids: ActiveValue::set("[]".into()),
head_commit_details: ActiveValue::set(None),
}
}),
)
@@ -491,12 +490,6 @@ impl Database {
.as_ref()
.map(|summary| serde_json::to_string(summary).unwrap()),
),
head_commit_details: ActiveValue::Set(
update
.head_commit_details
.as_ref()
.map(|details| serde_json::to_string(details).unwrap()),
),
current_merge_conflicts: ActiveValue::Set(Some(
serde_json::to_string(&update.current_merge_conflicts).unwrap(),
)),
@@ -512,7 +505,6 @@ impl Database {
project_repository::Column::EntryIds,
project_repository::Column::AbsPath,
project_repository::Column::CurrentMergeConflicts,
project_repository::Column::HeadCommitDetails,
])
.to_owned(),
)
@@ -936,13 +928,6 @@ impl Database {
.transpose()?
.unwrap_or_default();
let head_commit_details = db_repository_entry
.head_commit_details
.as_ref()
.map(|head_commit_details| serde_json::from_str(&head_commit_details))
.transpose()?
.unwrap_or_default();
let entry_ids = serde_json::from_str(&db_repository_entry.entry_ids)
.context("failed to deserialize repository's entry ids")?;
@@ -969,7 +954,6 @@ impl Database {
removed_statuses: Vec::new(),
current_merge_conflicts,
branch_summary,
head_commit_details,
scan_id: db_repository_entry.scan_id as u64,
is_last_update: true,
});


@@ -755,13 +755,6 @@ impl Database {
.transpose()?
.unwrap_or_default();
let head_commit_details = db_repository
.head_commit_details
.as_ref()
.map(|head_commit_details| serde_json::from_str(&head_commit_details))
.transpose()?
.unwrap_or_default();
let entry_ids = serde_json::from_str(&db_repository.entry_ids)
.context("failed to deserialize repository's entry ids")?;
@@ -785,7 +778,6 @@ impl Database {
removed_statuses,
current_merge_conflicts,
branch_summary,
head_commit_details,
project_id: project_id.to_proto(),
id: db_repository.id as u64,
abs_path: db_repository.abs_path,


@@ -10,7 +10,6 @@ pub struct Model {
pub user_id: UserId,
pub stripe_customer_id: String,
pub has_overdue_invoices: bool,
pub trial_started_at: Option<DateTime>,
pub created_at: DateTime,
}


@@ -18,8 +18,6 @@ pub struct Model {
pub current_merge_conflicts: Option<String>,
// A JSON object representing the current Branch values
pub branch_summary: Option<String>,
// A JSON object representing the current Head commit values
pub head_commit_details: Option<String>,
}
#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]


@@ -207,6 +207,13 @@ impl Config {
Self::parse_stripe_price_id("Zed Pro", self.stripe_zed_pro_price_id.as_deref())
}
pub fn zed_pro_trial_price_id(&self) -> anyhow::Result<stripe::PriceId> {
Self::parse_stripe_price_id(
"Zed Pro Trial",
self.stripe_zed_pro_trial_price_id.as_deref(),
)
}
pub fn zed_free_price_id(&self) -> anyhow::Result<stripe::PriceId> {
Self::parse_stripe_price_id("Zed Free", self.stripe_zed_pro_price_id.as_deref())
}


@@ -1,87 +1,8 @@
use chrono::Timelike;
use time::PrimitiveDateTime;
use crate::db::billing_subscription::SubscriptionKind;
use crate::db::{UserId, billing_subscription};
use crate::db::UserId;
use super::*;
fn convert_chrono_to_time(datetime: DateTimeUtc) -> anyhow::Result<PrimitiveDateTime> {
use chrono::{Datelike as _, Timelike as _};
let date = time::Date::from_calendar_date(
datetime.year(),
time::Month::try_from(datetime.month() as u8).unwrap(),
datetime.day() as u8,
)?;
let time = time::Time::from_hms_nano(
datetime.hour() as u8,
datetime.minute() as u8,
datetime.second() as u8,
datetime.nanosecond(),
)?;
Ok(PrimitiveDateTime::new(date, time))
}
impl LlmDatabase {
pub async fn create_subscription_usage(
&self,
user_id: UserId,
period_start_at: DateTimeUtc,
period_end_at: DateTimeUtc,
plan: SubscriptionKind,
model_requests: i32,
edit_predictions: i32,
) -> Result<subscription_usage::Model> {
self.transaction(|tx| async move {
self.create_subscription_usage_in_tx(
user_id,
period_start_at,
period_end_at,
plan,
model_requests,
edit_predictions,
&tx,
)
.await
})
.await
}
async fn create_subscription_usage_in_tx(
&self,
user_id: UserId,
period_start_at: DateTimeUtc,
period_end_at: DateTimeUtc,
plan: SubscriptionKind,
model_requests: i32,
edit_predictions: i32,
tx: &DatabaseTransaction,
) -> Result<subscription_usage::Model> {
// Clear out the nanoseconds so that these timestamps are comparable with Unix timestamps.
let period_start_at = period_start_at.with_nanosecond(0).unwrap();
let period_end_at = period_end_at.with_nanosecond(0).unwrap();
let period_start_at = convert_chrono_to_time(period_start_at)?;
let period_end_at = convert_chrono_to_time(period_end_at)?;
Ok(
subscription_usage::Entity::insert(subscription_usage::ActiveModel {
id: ActiveValue::not_set(),
user_id: ActiveValue::set(user_id),
period_start_at: ActiveValue::set(period_start_at),
period_end_at: ActiveValue::set(period_end_at),
plan: ActiveValue::set(plan),
model_requests: ActiveValue::set(model_requests),
edit_predictions: ActiveValue::set(edit_predictions),
})
.exec_with_returning(tx)
.await?,
)
}
pub async fn get_subscription_usage_for_period(
&self,
user_id: UserId,
@@ -89,77 +10,12 @@ impl LlmDatabase {
period_end_at: DateTimeUtc,
) -> Result<Option<subscription_usage::Model>> {
self.transaction(|tx| async move {
self.get_subscription_usage_for_period_in_tx(
user_id,
period_start_at,
period_end_at,
&tx,
)
.await
})
.await
}
async fn get_subscription_usage_for_period_in_tx(
&self,
user_id: UserId,
period_start_at: DateTimeUtc,
period_end_at: DateTimeUtc,
tx: &DatabaseTransaction,
) -> Result<Option<subscription_usage::Model>> {
Ok(subscription_usage::Entity::find()
.filter(subscription_usage::Column::UserId.eq(user_id))
.filter(subscription_usage::Column::PeriodStartAt.eq(period_start_at))
.filter(subscription_usage::Column::PeriodEndAt.eq(period_end_at))
.one(tx)
.await?)
}
pub async fn transfer_existing_subscription_usage(
&self,
user_id: UserId,
existing_subscription: &billing_subscription::Model,
new_subscription_kind: Option<SubscriptionKind>,
new_period_start_at: DateTimeUtc,
new_period_end_at: DateTimeUtc,
) -> Result<Option<subscription_usage::Model>> {
self.transaction(|tx| async move {
match existing_subscription.kind {
Some(SubscriptionKind::ZedProTrial) => {
let trial_period_start_at = existing_subscription
.current_period_start_at()
.ok_or_else(|| anyhow!("No trial subscription period start"))?;
let trial_period_end_at = existing_subscription
.current_period_end_at()
.ok_or_else(|| anyhow!("No trial subscription period end"))?;
let existing_usage = self
.get_subscription_usage_for_period_in_tx(
user_id,
trial_period_start_at,
trial_period_end_at,
&tx,
)
.await?;
if let Some(existing_usage) = existing_usage {
return Ok(Some(
self.create_subscription_usage_in_tx(
user_id,
new_period_start_at,
new_period_end_at,
new_subscription_kind.unwrap_or(existing_usage.plan),
existing_usage.model_requests,
existing_usage.edit_predictions,
&tx,
)
.await?,
));
}
}
_ => {}
}
Ok(None)
Ok(subscription_usage::Entity::find()
.filter(subscription_usage::Column::UserId.eq(user_id))
.filter(subscription_usage::Column::PeriodStartAt.eq(period_start_at))
.filter(subscription_usage::Column::PeriodEndAt.eq(period_end_at))
.one(&*tx)
.await?)
})
.await
}


@@ -1,5 +1,4 @@
mod provider_tests;
mod subscription_usage_tests;
use gpui::BackgroundExecutor;
use parking_lot::Mutex;


@@ -1,69 +0,0 @@
use chrono::{Duration, Utc};
use pretty_assertions::assert_eq;
use crate::db::billing_subscription::SubscriptionKind;
use crate::db::{UserId, billing_subscription};
use crate::llm::db::LlmDatabase;
use crate::test_llm_db;
test_llm_db!(
test_transfer_existing_subscription_usage,
test_transfer_existing_subscription_usage_postgres
);
async fn test_transfer_existing_subscription_usage(db: &mut LlmDatabase) {
let user_id = UserId(1);
let now = Utc::now();
let trial_period_start_at = now - Duration::days(14);
let trial_period_end_at = now;
let new_period_start_at = now;
let new_period_end_at = now + Duration::days(30);
let existing_subscription = billing_subscription::Model {
kind: Some(SubscriptionKind::ZedProTrial),
stripe_current_period_start: Some(trial_period_start_at.timestamp()),
stripe_current_period_end: Some(trial_period_end_at.timestamp()),
..Default::default()
};
let existing_usage = db
.create_subscription_usage(
user_id,
trial_period_start_at,
trial_period_end_at,
SubscriptionKind::ZedProTrial,
25,
1_000,
)
.await
.unwrap();
let transferred_usage = db
.transfer_existing_subscription_usage(
user_id,
&existing_subscription,
Some(SubscriptionKind::ZedPro),
new_period_start_at,
new_period_end_at,
)
.await
.unwrap();
assert!(
transferred_usage.is_some(),
"subscription usage not transferred successfully"
);
let transferred_usage = transferred_usage.unwrap();
assert_eq!(
transferred_usage.model_requests,
existing_usage.model_requests
);
assert_eq!(
transferred_usage.edit_predictions,
existing_usage.edit_predictions
);
}


@@ -1,5 +1,4 @@
use crate::Cents;
use crate::db::billing_subscription::SubscriptionKind;
use crate::db::{billing_subscription, user};
use crate::llm::{DEFAULT_MAX_MONTHLY_SPEND, FREE_TIER_MONTHLY_SPENDING_LIMIT};
use crate::{Config, db::billing_preference};
@@ -27,14 +26,13 @@ pub struct LlmTokenClaims {
pub is_staff: bool,
pub has_llm_closed_beta_feature_flag: bool,
pub bypass_account_age_check: bool,
pub has_predict_edits_feature_flag: bool,
pub has_llm_subscription: bool,
pub max_monthly_spend_in_cents: u32,
pub custom_llm_monthly_allowance_in_cents: Option<u32>,
pub plan: Plan,
#[serde(default)]
pub subscription_period: Option<(NaiveDateTime, NaiveDateTime)>,
#[serde(default)]
pub can_use_web_search_tool: bool,
}
const LLM_TOKEN_LIFETIME: Duration = Duration::from_secs(60 * 60);
@@ -46,6 +44,7 @@ impl LlmTokenClaims {
billing_preferences: Option<billing_preference::Model>,
feature_flags: &Vec<String>,
has_legacy_llm_subscription: bool,
plan: rpc::proto::Plan,
subscription: Option<billing_subscription::Model>,
system_id: Option<String>,
config: &Config,
@@ -72,7 +71,9 @@ impl LlmTokenClaims {
bypass_account_age_check: feature_flags
.iter()
.any(|flag| flag == "bypass-account-age-check"),
can_use_web_search_tool: feature_flags.iter().any(|flag| flag == "assistant2"),
has_predict_edits_feature_flag: feature_flags
.iter()
.any(|flag| flag == "predict-edits"),
has_llm_subscription: has_legacy_llm_subscription,
max_monthly_spend_in_cents: billing_preferences
.map_or(DEFAULT_MAX_MONTHLY_SPEND.0, |preferences| {
@@ -81,14 +82,11 @@ impl LlmTokenClaims {
custom_llm_monthly_allowance_in_cents: user
.custom_llm_monthly_allowance_in_cents
.map(|allowance| allowance as u32),
plan: subscription
.as_ref()
.and_then(|subscription| subscription.kind)
.map_or(Plan::Free, |kind| match kind {
SubscriptionKind::ZedFree => Plan::Free,
SubscriptionKind::ZedPro => Plan::ZedPro,
SubscriptionKind::ZedProTrial => Plan::ZedProTrial,
}),
plan: match plan {
rpc::proto::Plan::Free => Plan::Free,
rpc::proto::Plan::ZedPro => Plan::ZedPro,
rpc::proto::Plan::ZedProTrial => Plan::ZedProTrial,
},
subscription_period: maybe!({
let subscription = subscription?;
let period_start_at = subscription.current_period_start_at()?;


@@ -4147,6 +4147,7 @@ async fn get_llm_api_token(
billing_preferences,
&flags,
has_legacy_llm_subscription,
session.current_plan(&db).await?,
billing_subscription,
session.system_id.clone(),
&session.app_state.config,


@@ -405,39 +405,6 @@ impl StripeBilling {
let session = stripe::CheckoutSession::create(&self.client, params).await?;
Ok(session.url.context("no checkout session URL")?)
}
pub async fn checkout_with_zed_pro_trial(
&self,
zed_pro_price_id: PriceId,
customer_id: stripe::CustomerId,
github_login: &str,
success_url: &str,
) -> Result<String> {
let mut params = stripe::CreateCheckoutSession::new();
params.subscription_data = Some(stripe::CreateCheckoutSessionSubscriptionData {
trial_period_days: Some(14),
trial_settings: Some(stripe::CreateCheckoutSessionSubscriptionDataTrialSettings {
end_behavior: stripe::CreateCheckoutSessionSubscriptionDataTrialSettingsEndBehavior {
missing_payment_method: stripe::CreateCheckoutSessionSubscriptionDataTrialSettingsEndBehaviorMissingPaymentMethod::Pause,
}
}),
..Default::default()
});
params.mode = Some(stripe::CheckoutSessionMode::Subscription);
params.payment_method_collection =
Some(stripe::CheckoutSessionPaymentMethodCollection::IfRequired);
params.customer = Some(customer_id);
params.client_reference_id = Some(github_login);
params.line_items = Some(vec![stripe::CreateCheckoutSessionLineItems {
price: Some(zed_pro_price_id.to_string()),
quantity: Some(1),
..Default::default()
}]);
params.success_url = Some(success_url);
let session = stripe::CheckoutSession::create(&self.client, params).await?;
Ok(session.url.context("no checkout session URL")?)
}
}
#[derive(Serialize)]


@@ -1,7 +1,7 @@
use call::ActiveCall;
use dap::DebugRequestType;
use dap::requests::{Initialize, Launch, StackTrace};
use dap::{SourceBreakpoint, requests::SetBreakpoints};
use dap::DebugRequestType;
use dap::{requests::SetBreakpoints, SourceBreakpoint};
use debugger_ui::debugger_panel::DebugPanel;
use debugger_ui::session::DebugSession;
use editor::Editor;
@@ -13,7 +13,7 @@ use std::{
path::Path,
sync::atomic::{AtomicBool, Ordering},
};
use workspace::{Workspace, dock::Panel};
use workspace::{dock::Panel, Workspace};
use super::{TestClient, TestServer};


@@ -1,14 +1,12 @@
use crate::tests::TestServer;
use call::ActiveCall;
use collections::{HashMap, HashSet};
use debugger_ui::debugger_panel::DebugPanel;
use dap::DapRegistry;
use extension::ExtensionHostProxy;
use fs::{FakeFs, Fs as _, RemoveOptions};
use futures::StreamExt as _;
use gpui::{
AppContext as _, BackgroundExecutor, SemanticVersion, TestAppContext, UpdateGlobal as _,
VisualContext,
};
use http_client::BlockedHttpClient;
use language::{
@@ -26,7 +24,6 @@ use project::{
};
use remote::SshRemoteClient;
use remote_server::{HeadlessAppState, HeadlessProject};
use rpc::proto;
use serde_json::json;
use settings::SettingsStore;
use std::{path::Path, sync::Arc};
@@ -89,6 +86,7 @@ async fn test_sharing_an_ssh_remote_project(
http_client: remote_http_client,
node_runtime: node,
languages,
debug_adapters: Arc::new(DapRegistry::fake()),
extension_host_proxy: Arc::new(ExtensionHostProxy::new()),
},
cx,
@@ -256,6 +254,7 @@ async fn test_ssh_collaboration_git_branches(
http_client: remote_http_client,
node_runtime: node,
languages,
debug_adapters: Arc::new(DapRegistry::fake()),
extension_host_proxy: Arc::new(ExtensionHostProxy::new()),
},
cx,
@@ -461,6 +460,7 @@ async fn test_ssh_collaboration_formatting_with_prettier(
http_client: remote_http_client,
node_runtime: NodeRuntime::unavailable(),
languages,
debug_adapters: Arc::new(DapRegistry::fake()),
extension_host_proxy: Arc::new(ExtensionHostProxy::new()),
},
cx,
@@ -579,108 +579,3 @@ async fn test_ssh_collaboration_formatting_with_prettier(
"Prettier formatting was not applied to client buffer after host's request"
);
}
#[gpui::test]
async fn test_remote_server_debugger(cx_a: &mut TestAppContext, server_cx: &mut TestAppContext) {
cx_a.update(|cx| {
release_channel::init(SemanticVersion::default(), cx);
command_palette_hooks::init(cx);
if std::env::var("RUST_LOG").is_ok() {
env_logger::try_init().ok();
}
});
server_cx.update(|cx| {
release_channel::init(SemanticVersion::default(), cx);
});
let (opts, server_ssh) = SshRemoteClient::fake_server(cx_a, server_cx);
let remote_fs = FakeFs::new(server_cx.executor());
remote_fs
.insert_tree(
path!("/code"),
json!({
"lib.rs": "fn one() -> usize { 1 }"
}),
)
.await;
// User A connects to the remote project via SSH.
server_cx.update(HeadlessProject::init);
let remote_http_client = Arc::new(BlockedHttpClient);
let node = NodeRuntime::unavailable();
let languages = Arc::new(LanguageRegistry::new(server_cx.executor()));
let _headless_project = server_cx.new(|cx| {
client::init_settings(cx);
HeadlessProject::new(
HeadlessAppState {
session: server_ssh,
fs: remote_fs.clone(),
http_client: remote_http_client,
node_runtime: node,
languages,
extension_host_proxy: Arc::new(ExtensionHostProxy::new()),
},
cx,
)
});
let client_ssh = SshRemoteClient::fake_client(opts, cx_a).await;
let mut server = TestServer::start(server_cx.executor()).await;
let client_a = server.create_client(cx_a, "user_a").await;
cx_a.update(|cx| {
debugger_ui::init(cx);
command_palette_hooks::init(cx);
});
let (project_a, _) = client_a
.build_ssh_project(path!("/code"), client_ssh.clone(), cx_a)
.await;
let (workspace, cx_a) = client_a.build_workspace(&project_a, cx_a);
let debugger_panel = workspace
.update_in(cx_a, |_workspace, window, cx| {
cx.spawn_in(window, DebugPanel::load)
})
.await
.unwrap();
workspace.update_in(cx_a, |workspace, window, cx| {
workspace.add_panel(debugger_panel, window, cx);
});
cx_a.run_until_parked();
let debug_panel = workspace
.update(cx_a, |workspace, cx| workspace.panel::<DebugPanel>(cx))
.unwrap();
let workspace_window = cx_a
.window_handle()
.downcast::<workspace::Workspace>()
.unwrap();
let session = debugger_ui::tests::start_debug_session(&workspace_window, cx_a, |_| {}).unwrap();
cx_a.run_until_parked();
debug_panel.update(cx_a, |debug_panel, cx| {
assert_eq!(
debug_panel.active_session().unwrap().read(cx).session(cx),
session
)
});
session.update(cx_a, |session, _| {
assert_eq!(session.binary().command, "ssh");
});
let shutdown_session = workspace.update(cx_a, |workspace, cx| {
workspace.project().update(cx, |project, cx| {
project.dap_store().update(cx, |dap_store, cx| {
dap_store.shutdown_session(session.read(cx).session_id(), cx)
})
})
});
client_ssh.update(cx_a, |a, _| {
a.shutdown_processes(Some(proto::ShutdownRemoteServer {}))
});
shutdown_session.await.unwrap();
}

View File

@@ -14,7 +14,7 @@ use client::{
use clock::FakeSystemClock;
use collab_ui::channel_view::ChannelView;
use collections::{HashMap, HashSet};
use dap::DapRegistry;
use fs::FakeFs;
use futures::{StreamExt as _, channel::oneshot};
use git::GitHostingProviderRegistry;
@@ -275,12 +275,14 @@ impl TestServer {
let user_store = cx.new(|cx| UserStore::new(client.clone(), cx));
let workspace_store = cx.new(|cx| WorkspaceStore::new(client.clone(), cx));
let language_registry = Arc::new(LanguageRegistry::test(cx.executor()));
let debug_adapters = Arc::new(DapRegistry::default());
let session = cx.new(|cx| AppSession::new(Session::test(), cx));
let app_state = Arc::new(workspace::AppState {
client: client.clone(),
user_store: user_store.clone(),
workspace_store,
languages: language_registry,
debug_adapters,
fs: fs.clone(),
build_window_options: |_, _| Default::default(),
node_runtime: NodeRuntime::unavailable(),
@@ -796,6 +798,7 @@ impl TestClient {
self.app_state.node_runtime.clone(),
self.app_state.user_store.clone(),
self.app_state.languages.clone(),
self.app_state.debug_adapters.clone(),
self.app_state.fs.clone(),
None,
cx,

View File

@@ -166,9 +166,6 @@ impl ComponentPreview {
component_preview.update_component_list(cx);
let focus_handle = component_preview.filter_editor.read(cx).focus_handle(cx);
window.focus(&focus_handle);
component_preview
}
@@ -782,13 +779,10 @@ impl Item for ComponentPreview {
fn added_to_workspace(
&mut self,
workspace: &mut Workspace,
window: &mut Window,
cx: &mut Context<Self>,
_window: &mut Window,
_cx: &mut Context<Self>,
) {
self.workspace_id = workspace.database_id();
let focus_handle = self.filter_editor.read(cx).focus_handle(cx);
window.focus(&focus_handle);
}
}

View File

@@ -39,7 +39,6 @@ log.workspace = true
node_runtime.workspace = true
parking_lot.workspace = true
paths.workspace = true
proto.workspace = true
schemars.workspace = true
serde.workspace = true
serde_json.workspace = true

View File

@@ -1,10 +1,9 @@
use ::fs::Fs;
use anyhow::{Context as _, Result, anyhow};
use anyhow::{Context as _, Ok, Result, anyhow};
use async_compression::futures::bufread::GzipDecoder;
use async_tar::Archive;
use async_trait::async_trait;
use collections::HashMap;
use dap_types::{StartDebuggingRequestArguments, StartDebuggingRequestArgumentsRequest};
use dap_types::StartDebuggingRequestArguments;
use futures::io::BufReader;
use gpui::{AsyncApp, SharedString};
pub use http_client::{HttpClient, github::latest_github_release};
@@ -14,10 +13,16 @@ use serde::{Deserialize, Serialize};
use settings::WorktreeId;
use smol::{self, fs::File, lock::Mutex};
use std::{
borrow::Borrow, collections::HashSet, ffi::OsStr, fmt::Debug, net::Ipv4Addr, ops::Deref,
path::PathBuf, sync::Arc,
borrow::Borrow,
collections::{HashMap, HashSet},
ffi::{OsStr, OsString},
fmt::Debug,
net::Ipv4Addr,
ops::Deref,
path::PathBuf,
sync::Arc,
};
use task::{DebugTaskDefinition, TcpArgumentsTemplate};
use task::DebugTaskDefinition;
use util::ResultExt;
#[derive(Clone, Debug, PartialEq, Eq)]
@@ -88,91 +93,17 @@ pub struct TcpArguments {
pub port: u16,
pub timeout: Option<u64>,
}
impl TcpArguments {
pub fn from_proto(proto: proto::TcpHost) -> anyhow::Result<Self> {
let host = TcpArgumentsTemplate::from_proto(proto)?;
Ok(TcpArguments {
host: host.host.ok_or_else(|| anyhow!("missing host"))?,
port: host.port.ok_or_else(|| anyhow!("missing port"))?,
timeout: host.timeout,
})
}
pub fn to_proto(&self) -> proto::TcpHost {
TcpArgumentsTemplate {
host: Some(self.host),
port: Some(self.port),
timeout: self.timeout,
}
.to_proto()
}
}
#[derive(Debug, Clone)]
pub struct DebugAdapterBinary {
pub adapter_name: DebugAdapterName,
pub command: String,
pub arguments: Vec<String>,
pub envs: HashMap<String, String>,
pub arguments: Option<Vec<OsString>>,
pub envs: Option<HashMap<String, String>>,
pub cwd: Option<PathBuf>,
pub connection: Option<TcpArguments>,
pub request_args: StartDebuggingRequestArguments,
}
impl DebugAdapterBinary {
pub fn from_proto(binary: proto::DebugAdapterBinary) -> anyhow::Result<Self> {
let request = match binary.launch_type() {
proto::debug_adapter_binary::LaunchType::Launch => {
StartDebuggingRequestArgumentsRequest::Launch
}
proto::debug_adapter_binary::LaunchType::Attach => {
StartDebuggingRequestArgumentsRequest::Attach
}
};
Ok(DebugAdapterBinary {
command: binary.command,
arguments: binary.arguments,
envs: binary.envs.into_iter().collect(),
connection: binary
.connection
.map(TcpArguments::from_proto)
.transpose()?,
request_args: StartDebuggingRequestArguments {
configuration: serde_json::from_str(&binary.configuration)?,
request,
},
cwd: binary.cwd.map(|cwd| cwd.into()),
})
}
pub fn to_proto(&self) -> proto::DebugAdapterBinary {
proto::DebugAdapterBinary {
command: self.command.clone(),
arguments: self.arguments.clone(),
envs: self
.envs
.iter()
.map(|(k, v)| (k.clone(), v.clone()))
.collect(),
cwd: self
.cwd
.as_ref()
.map(|cwd| cwd.to_string_lossy().to_string()),
connection: self.connection.as_ref().map(|c| c.to_proto()),
launch_type: match self.request_args.request {
StartDebuggingRequestArgumentsRequest::Launch => {
proto::debug_adapter_binary::LaunchType::Launch.into()
}
StartDebuggingRequestArgumentsRequest::Attach => {
proto::debug_adapter_binary::LaunchType::Attach.into()
}
},
configuration: self.request_args.configuration.to_string(),
}
}
}
#[derive(Debug)]
pub struct AdapterVersion {
pub tag_name: String,
@@ -325,21 +256,7 @@ pub trait DebugAdapter: 'static + Send + Sync {
self.name()
);
delegate.update_status(self.name(), DapStatus::Downloading);
match self.install_binary(version, delegate).await {
Ok(_) => {
delegate.update_status(self.name(), DapStatus::None);
}
Err(error) => {
delegate.update_status(
self.name(),
DapStatus::Failed {
error: error.to_string(),
},
);
return Err(error);
}
}
self.install_binary(version, delegate).await?;
delegate
.updated_adapters()
@@ -387,22 +304,22 @@ impl FakeAdapter {
fn request_args(&self, config: &DebugTaskDefinition) -> StartDebuggingRequestArguments {
use serde_json::json;
use task::DebugRequest;
use task::DebugRequestType;
let value = json!({
"request": match config.request {
DebugRequest::Launch(_) => "launch",
DebugRequest::Attach(_) => "attach",
DebugRequestType::Launch(_) => "launch",
DebugRequestType::Attach(_) => "attach",
},
"process_id": if let DebugRequest::Attach(attach_config) = &config.request {
"process_id": if let DebugRequestType::Attach(attach_config) = &config.request {
attach_config.process_id
} else {
None
},
});
let request = match config.request {
DebugRequest::Launch(_) => dap_types::StartDebuggingRequestArgumentsRequest::Launch,
DebugRequest::Attach(_) => dap_types::StartDebuggingRequestArgumentsRequest::Attach,
DebugRequestType::Launch(_) => dap_types::StartDebuggingRequestArgumentsRequest::Launch,
DebugRequestType::Attach(_) => dap_types::StartDebuggingRequestArgumentsRequest::Attach,
};
StartDebuggingRequestArguments {
configuration: value,
@@ -426,10 +343,11 @@ impl DebugAdapter for FakeAdapter {
_: &mut AsyncApp,
) -> Result<DebugAdapterBinary> {
Ok(DebugAdapterBinary {
adapter_name: Self::ADAPTER_NAME.into(),
command: "command".into(),
arguments: vec![],
arguments: None,
connection: None,
envs: HashMap::default(),
envs: None,
cwd: None,
request_args: self.request_args(config),
})
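Editor's note: the reshaped `DebugAdapterBinary` above makes `arguments` and `envs` optional. A minimal, self-contained sketch of how such a description might be turned into a process command, assuming a plain `std::process::Command` and a hypothetical, trimmed-down `AdapterBinary` struct that mirrors only the fields shown in this diff (the real type also carries `adapter_name`, `connection`, and `request_args`):

```rust
use std::collections::HashMap;
use std::ffi::OsString;
use std::path::PathBuf;
use std::process::Command;

// Hypothetical, trimmed-down mirror of the binary description in the diff above.
struct AdapterBinary {
    command: String,
    arguments: Option<Vec<OsString>>,
    envs: Option<HashMap<String, String>>,
    cwd: Option<PathBuf>,
}

fn build_command(binary: &AdapterBinary) -> Command {
    let mut command = Command::new(&binary.command);
    // Optional fields are applied only when present, matching the
    // `if let Some(..)` pattern used in the transport changes below.
    if let Some(args) = &binary.arguments {
        command.args(args);
    }
    if let Some(envs) = &binary.envs {
        command.envs(envs);
    }
    if let Some(cwd) = &binary.cwd {
        command.current_dir(cwd);
    }
    command
}

fn main() {
    let binary = AdapterBinary {
        command: "gdb".into(),
        arguments: Some(vec!["-i=dap".into()]),
        envs: None,
        cwd: None,
    };
    println!("{:?}", build_command(&binary));
}
```

The upside of `Option` here is that "no arguments configured" and "explicitly empty argument list" stay distinguishable, at the cost of the small unwrapping dance at every call site.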

View File

@@ -1,5 +1,5 @@
use crate::{
adapters::DebugAdapterBinary,
adapters::{DebugAdapterBinary, DebugAdapterName},
transport::{IoKind, LogKind, TransportDelegate},
};
use anyhow::{Result, anyhow};
@@ -88,6 +88,7 @@ impl DebugAdapterClient {
) -> Result<Self> {
let binary = match self.transport_delegate.transport() {
crate::transport::Transport::Tcp(tcp_transport) => DebugAdapterBinary {
adapter_name: binary.adapter_name,
command: binary.command,
arguments: binary.arguments,
envs: binary.envs,
@@ -218,6 +219,9 @@ impl DebugAdapterClient {
self.id
}
pub fn name(&self) -> DebugAdapterName {
self.binary.adapter_name.clone()
}
pub fn binary(&self) -> &DebugAdapterBinary {
&self.binary
}
@@ -318,6 +322,7 @@ mod tests {
let client = DebugAdapterClient::start(
crate::client::SessionId(1),
DebugAdapterBinary {
adapter_name: "adapter".into(),
command: "command".into(),
arguments: Default::default(),
envs: Default::default(),
@@ -388,6 +393,7 @@ mod tests {
let client = DebugAdapterClient::start(
crate::client::SessionId(1),
DebugAdapterBinary {
adapter_name: "adapter".into(),
command: "command".into(),
arguments: Default::default(),
envs: Default::default(),
@@ -441,6 +447,7 @@ mod tests {
let client = DebugAdapterClient::start(
crate::client::SessionId(1),
DebugAdapterBinary {
adapter_name: "test-adapter".into(),
command: "command".into(),
arguments: Default::default(),
envs: Default::default(),

View File

@@ -7,7 +7,7 @@ pub mod transport;
pub use dap_types::*;
pub use registry::DapRegistry;
pub use task::DebugRequest;
pub use task::{DebugAdapterConfig, DebugRequestType};
pub type ScopeId = u64;
pub type VariableReference = u64;

View File

@@ -1,4 +1,3 @@
use gpui::{App, Global};
use parking_lot::RwLock;
use crate::adapters::{DebugAdapter, DebugAdapterName};
@@ -12,20 +11,8 @@ struct DapRegistryState {
#[derive(Clone, Default)]
/// Stores available debug adapters.
pub struct DapRegistry(Arc<RwLock<DapRegistryState>>);
impl Global for DapRegistry {}
impl DapRegistry {
pub fn global(cx: &mut App) -> &mut Self {
let ret = cx.default_global::<Self>();
#[cfg(any(test, feature = "test-support"))]
if ret.adapter(crate::FakeAdapter::ADAPTER_NAME).is_none() {
ret.add_adapter(Arc::new(crate::FakeAdapter::new()));
}
ret
}
pub fn add_adapter(&self, adapter: Arc<dyn DebugAdapter>) {
let name = adapter.name();
let _previous_value = self.0.write().adapters.insert(name, adapter);
@@ -34,12 +21,19 @@ impl DapRegistry {
"Attempted to insert a new debug adapter when one is already registered"
);
}
pub fn adapter(&self, name: &str) -> Option<Arc<dyn DebugAdapter>> {
self.0.read().adapters.get(name).cloned()
}
pub fn enumerate_adapters(&self) -> Vec<DebugAdapterName> {
self.0.read().adapters.keys().cloned().collect()
}
#[cfg(any(test, feature = "test-support"))]
pub fn fake() -> Self {
use crate::FakeAdapter;
let register = Self::default();
register.add_adapter(Arc::new(FakeAdapter::new()));
register
}
}
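Editor's note: the registry loses its gpui `Global` integration here and becomes a plain shared map with a test-only `fake()` constructor. A rough, standalone sketch of that shape, using `std::sync::RwLock` instead of `parking_lot` and hypothetical `Adapter`/`FakeAdapter` stand-ins for the real trait objects:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

// Stand-in for the real DebugAdapter trait.
trait Adapter: Send + Sync {
    fn name(&self) -> String;
}

struct FakeAdapter;
impl Adapter for FakeAdapter {
    fn name(&self) -> String {
        "fake-adapter".into()
    }
}

// Shared registry: cloning the handle shares the same underlying map.
#[derive(Clone, Default)]
struct Registry(Arc<RwLock<HashMap<String, Arc<dyn Adapter>>>>);

impl Registry {
    fn add_adapter(&self, adapter: Arc<dyn Adapter>) {
        let previous = self.0.write().unwrap().insert(adapter.name(), adapter);
        debug_assert!(previous.is_none(), "adapter registered twice");
    }

    fn adapter(&self, name: &str) -> Option<Arc<dyn Adapter>> {
        self.0.read().unwrap().get(name).cloned()
    }

    fn enumerate_adapters(&self) -> Vec<String> {
        self.0.read().unwrap().keys().cloned().collect()
    }

    // Mirrors the test-only constructor added in the diff.
    fn fake() -> Self {
        let registry = Self::default();
        registry.add_adapter(Arc::new(FakeAdapter));
        registry
    }
}

fn main() {
    let registry = Registry::fake();
    assert!(registry.adapter("fake-adapter").is_some());
    println!("{:?}", registry.enumerate_adapters());
}
```

Passing this handle explicitly (as the test server and `HeadlessAppState` changes above do) keeps adapter registration visible at construction time instead of hiding it behind a process-wide global.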

View File

@@ -21,8 +21,8 @@ use std::{
sync::Arc,
time::Duration,
};
use task::TcpArgumentsTemplate;
use util::{ResultExt as _, TryFutureExt};
use task::TCPHost;
use util::ResultExt as _;
use crate::{adapters::DebugAdapterBinary, debugger_settings::DebuggerSettings};
@@ -74,14 +74,16 @@ pub enum Transport {
}
impl Transport {
async fn start(binary: &DebugAdapterBinary, cx: AsyncApp) -> Result<(TransportPipe, Self)> {
#[cfg(any(test, feature = "test-support"))]
async fn start(_: &DebugAdapterBinary, cx: AsyncApp) -> Result<(TransportPipe, Self)> {
#[cfg(any(test, feature = "test-support"))]
if cfg!(any(test, feature = "test-support")) {
return FakeTransport::start(cx)
.await
.map(|(transports, fake)| (transports, Self::Fake(fake)));
}
return FakeTransport::start(cx)
.await
.map(|(transports, fake)| (transports, Self::Fake(fake)));
}
#[cfg(not(any(test, feature = "test-support")))]
async fn start(binary: &DebugAdapterBinary, cx: AsyncApp) -> Result<(TransportPipe, Self)> {
if binary.connection.is_some() {
TcpTransport::start(binary, cx)
.await
@@ -126,7 +128,6 @@ pub(crate) struct TransportDelegate {
pending_requests: Requests,
transport: Transport,
server_tx: Arc<Mutex<Option<Sender<Message>>>>,
_tasks: Vec<gpui::Task<Option<()>>>,
}
impl TransportDelegate {
@@ -141,7 +142,6 @@ impl TransportDelegate {
log_handlers: Default::default(),
current_requests: Default::default(),
pending_requests: Default::default(),
_tasks: Default::default(),
};
let messages = this.start_handlers(transport_pipes, cx).await?;
Ok((messages, this))
@@ -168,43 +168,35 @@ impl TransportDelegate {
cx.update(|cx| {
if let Some(stdout) = params.stdout.take() {
self._tasks.push(
cx.background_executor()
.spawn(Self::handle_adapter_log(stdout, log_handler.clone()).log_err()),
);
cx.background_executor()
.spawn(Self::handle_adapter_log(stdout, log_handler.clone()))
.detach_and_log_err(cx);
}
self._tasks.push(
cx.background_executor().spawn(
Self::handle_output(
params.output,
client_tx,
self.pending_requests.clone(),
log_handler.clone(),
)
.log_err(),
),
);
cx.background_executor()
.spawn(Self::handle_output(
params.output,
client_tx,
self.pending_requests.clone(),
log_handler.clone(),
))
.detach_and_log_err(cx);
if let Some(stderr) = params.stderr.take() {
self._tasks.push(
cx.background_executor()
.spawn(Self::handle_error(stderr, self.log_handlers.clone()).log_err()),
);
cx.background_executor()
.spawn(Self::handle_error(stderr, self.log_handlers.clone()))
.detach_and_log_err(cx);
}
self._tasks.push(
cx.background_executor().spawn(
Self::handle_input(
params.input,
client_rx,
self.current_requests.clone(),
self.pending_requests.clone(),
log_handler.clone(),
)
.log_err(),
),
);
cx.background_executor()
.spawn(Self::handle_input(
params.input,
client_rx,
self.current_requests.clone(),
self.pending_requests.clone(),
log_handler.clone(),
))
.detach_and_log_err(cx);
})?;
{
@@ -377,7 +369,6 @@ impl TransportDelegate {
where
Stderr: AsyncRead + Unpin + Send + 'static,
{
log::debug!("Handle error started");
let mut buffer = String::new();
let mut reader = BufReader::new(stderr);
@@ -529,21 +520,18 @@ pub struct TcpTransport {
impl TcpTransport {
/// Get an open port to use with the tcp client when not supplied by debug config
pub async fn port(host: &TcpArgumentsTemplate) -> Result<u16> {
pub async fn port(host: &TCPHost) -> Result<u16> {
if let Some(port) = host.port {
Ok(port)
} else {
Self::unused_port(host.host()).await
Ok(TcpListener::bind(SocketAddrV4::new(host.host(), 0))
.await?
.local_addr()?
.port())
}
}
pub async fn unused_port(host: Ipv4Addr) -> Result<u16> {
Ok(TcpListener::bind(SocketAddrV4::new(host, 0))
.await?
.local_addr()?
.port())
}
#[allow(dead_code, reason = "This is used in non test builds of Zed")]
async fn start(binary: &DebugAdapterBinary, cx: AsyncApp) -> Result<(TransportPipe, Self)> {
let Some(connection_args) = binary.connection.as_ref() else {
return Err(anyhow!("No connection arguments provided"));
@@ -558,8 +546,13 @@ impl TcpTransport {
command.current_dir(cwd);
}
command.args(&binary.arguments);
command.envs(&binary.envs);
if let Some(args) = &binary.arguments {
command.args(args);
}
if let Some(envs) = &binary.envs {
command.envs(envs);
}
command
.stdin(Stdio::null())
@@ -642,8 +635,13 @@ impl StdioTransport {
command.current_dir(cwd);
}
command.args(&binary.arguments);
command.envs(&binary.envs);
if let Some(args) = &binary.arguments {
command.args(args);
}
if let Some(envs) = &binary.envs {
command.envs(envs);
}
command
.stdin(Stdio::piped())
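Editor's note: the `unused_port` helper is folded back into `port()` here, but both versions rely on the same trick: bind to port 0 and let the OS hand back a free ephemeral port. A small synchronous sketch of that idea with `std::net` (the real code uses smol's async `TcpListener` and a `TcpArgumentsTemplate`-style config):

```rust
use std::net::{Ipv4Addr, SocketAddrV4, TcpListener};

/// Return the configured port if one was given, otherwise ask the OS
/// for a free ephemeral port by binding to port 0 and reading back
/// the locally assigned address.
fn pick_port(host: Ipv4Addr, configured: Option<u16>) -> std::io::Result<u16> {
    match configured {
        Some(port) => Ok(port),
        None => {
            // The listener is dropped immediately, so there is a small window
            // in which another process could grab the port -- the same caveat
            // applies to the original helper.
            let listener = TcpListener::bind(SocketAddrV4::new(host, 0))?;
            Ok(listener.local_addr()?.port())
        }
    }
}

fn main() -> std::io::Result<()> {
    let port = pick_port(Ipv4Addr::LOCALHOST, None)?;
    println!("debug adapter can listen on 127.0.0.1:{port}");
    Ok(())
}
```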

View File

@@ -1,10 +1,10 @@
use std::{collections::HashMap, path::PathBuf, sync::OnceLock};
use std::{path::PathBuf, sync::OnceLock};
use anyhow::{Result, bail};
use async_trait::async_trait;
use dap::adapters::latest_github_release;
use gpui::AsyncApp;
use task::{DebugRequest, DebugTaskDefinition};
use task::{DebugRequestType, DebugTaskDefinition};
use crate::*;
@@ -19,8 +19,8 @@ impl CodeLldbDebugAdapter {
fn request_args(&self, config: &DebugTaskDefinition) -> dap::StartDebuggingRequestArguments {
let mut configuration = json!({
"request": match config.request {
DebugRequest::Launch(_) => "launch",
DebugRequest::Attach(_) => "attach",
DebugRequestType::Launch(_) => "launch",
DebugRequestType::Attach(_) => "attach",
},
});
let map = configuration.as_object_mut().unwrap();
@@ -28,10 +28,10 @@ impl CodeLldbDebugAdapter {
map.insert("name".into(), Value::String(config.label.clone()));
let request = config.request.to_dap();
match &config.request {
DebugRequest::Attach(attach) => {
DebugRequestType::Attach(attach) => {
map.insert("pid".into(), attach.process_id.into());
}
DebugRequest::Launch(launch) => {
DebugRequestType::Launch(launch) => {
map.insert("program".into(), launch.program.clone().into());
if !launch.args.is_empty() {
@@ -140,13 +140,16 @@ impl DebugAdapter for CodeLldbDebugAdapter {
.ok_or_else(|| anyhow!("Adapter path is expected to be valid UTF-8"))?;
Ok(DebugAdapterBinary {
command,
cwd: None,
arguments: vec![
cwd: Some(adapter_dir),
arguments: Some(vec![
"--settings".into(),
json!({"sourceLanguages": ["cpp", "rust"]}).to_string(),
],
json!({"sourceLanguages": ["cpp", "rust"]})
.to_string()
.into(),
]),
request_args: self.request_args(config),
envs: HashMap::default(),
adapter_name: "test".into(),
envs: None,
connection: None,
})
}
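Editor's note: this adapter and the ones below (GDB, Delve, js-debug, PHP, debugpy) all build their `StartDebuggingRequestArguments` payload the same way: match on the renamed request enum, emit `"launch"` or `"attach"`, then splice in request-specific fields. A standalone sketch of that pattern with `serde_json` and hypothetical local types standing in for `task::DebugRequestType`:

```rust
use serde_json::{Value, json};

// Local stand-ins for the launch/attach request variants used in the diff.
enum DebugRequestType {
    Launch { program: String, args: Vec<String> },
    Attach { process_id: Option<u32> },
}

fn request_args(request: &DebugRequestType) -> Value {
    let mut value = json!({
        "request": match request {
            DebugRequestType::Launch { .. } => "launch",
            DebugRequestType::Attach { .. } => "attach",
        },
    });
    // Splice in the request-specific fields on top of the base object.
    let map = value.as_object_mut().unwrap();
    match request {
        DebugRequestType::Attach { process_id } => {
            map.insert("pid".into(), json!(process_id));
        }
        DebugRequestType::Launch { program, args } => {
            map.insert("program".into(), json!(program));
            if !args.is_empty() {
                map.insert("args".into(), json!(args));
            }
        }
    }
    value
}

fn main() {
    let launch = DebugRequestType::Launch {
        program: "target/debug/app".into(),
        args: vec!["--verbose".into()],
    };
    println!("{}", request_args(&launch));
}
```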

View File

@@ -11,7 +11,7 @@ use anyhow::{Result, anyhow};
use async_trait::async_trait;
use codelldb::CodeLldbDebugAdapter;
use dap::{
DapRegistry, DebugRequest,
DapRegistry, DebugRequestType,
adapters::{
self, AdapterVersion, DapDelegate, DebugAdapter, DebugAdapterBinary, DebugAdapterName,
GithubRepo,
@@ -19,26 +19,23 @@ use dap::{
};
use gdb::GdbDebugAdapter;
use go::GoDebugAdapter;
use gpui::{App, BorrowAppContext};
use javascript::JsDebugAdapter;
use php::PhpDebugAdapter;
use python::PythonDebugAdapter;
use serde_json::{Value, json};
use task::TcpArgumentsTemplate;
use task::TCPHost;
pub fn init(cx: &mut App) {
cx.update_default_global(|registry: &mut DapRegistry, _cx| {
registry.add_adapter(Arc::from(CodeLldbDebugAdapter::default()));
registry.add_adapter(Arc::from(PythonDebugAdapter));
registry.add_adapter(Arc::from(PhpDebugAdapter));
registry.add_adapter(Arc::from(JsDebugAdapter));
registry.add_adapter(Arc::from(GoDebugAdapter));
registry.add_adapter(Arc::from(GdbDebugAdapter));
})
pub fn init(registry: Arc<DapRegistry>) {
registry.add_adapter(Arc::from(CodeLldbDebugAdapter::default()));
registry.add_adapter(Arc::from(PythonDebugAdapter));
registry.add_adapter(Arc::from(PhpDebugAdapter));
registry.add_adapter(Arc::from(JsDebugAdapter));
registry.add_adapter(Arc::from(GoDebugAdapter));
registry.add_adapter(Arc::from(GdbDebugAdapter));
}
pub(crate) async fn configure_tcp_connection(
tcp_connection: TcpArgumentsTemplate,
tcp_connection: TCPHost,
) -> Result<(Ipv4Addr, u16, Option<u64>)> {
let host = tcp_connection.host();
let timeout = tcp_connection.timeout;
@@ -56,7 +53,7 @@ trait ToDap {
fn to_dap(&self) -> dap::StartDebuggingRequestArgumentsRequest;
}
impl ToDap for DebugRequest {
impl ToDap for DebugRequestType {
fn to_dap(&self) -> dap::StartDebuggingRequestArgumentsRequest {
match self {
Self::Launch(_) => dap::StartDebuggingRequestArgumentsRequest::Launch,

View File

@@ -1,10 +1,10 @@
use std::{collections::HashMap, ffi::OsStr};
use std::ffi::OsStr;
use anyhow::{Result, bail};
use async_trait::async_trait;
use dap::StartDebuggingRequestArguments;
use gpui::AsyncApp;
use task::{DebugRequest, DebugTaskDefinition};
use task::{DebugRequestType, DebugTaskDefinition};
use crate::*;
@@ -17,18 +17,18 @@ impl GdbDebugAdapter {
fn request_args(&self, config: &DebugTaskDefinition) -> StartDebuggingRequestArguments {
let mut args = json!({
"request": match config.request {
DebugRequest::Launch(_) => "launch",
DebugRequest::Attach(_) => "attach",
DebugRequestType::Launch(_) => "launch",
DebugRequestType::Attach(_) => "attach",
},
});
let map = args.as_object_mut().unwrap();
match &config.request {
DebugRequest::Attach(attach) => {
DebugRequestType::Attach(attach) => {
map.insert("pid".into(), attach.process_id.into());
}
DebugRequest::Launch(launch) => {
DebugRequestType::Launch(launch) => {
map.insert("program".into(), launch.program.clone().into());
if !launch.args.is_empty() {
@@ -82,9 +82,10 @@ impl DebugAdapter for GdbDebugAdapter {
let gdb_path = user_setting_path.unwrap_or(gdb_path?);
Ok(DebugAdapterBinary {
adapter_name: Self::ADAPTER_NAME.into(),
command: gdb_path,
arguments: vec!["-i=dap".into()],
envs: HashMap::default(),
arguments: Some(vec!["-i=dap".into()]),
envs: None,
cwd: None,
connection: None,
request_args: self.request_args(config),

View File

@@ -1,6 +1,6 @@
use dap::StartDebuggingRequestArguments;
use gpui::AsyncApp;
use std::{collections::HashMap, ffi::OsStr, path::PathBuf};
use std::{ffi::OsStr, path::PathBuf};
use task::DebugTaskDefinition;
use crate::*;
@@ -12,12 +12,12 @@ impl GoDebugAdapter {
const ADAPTER_NAME: &'static str = "Delve";
fn request_args(&self, config: &DebugTaskDefinition) -> StartDebuggingRequestArguments {
let mut args = match &config.request {
dap::DebugRequest::Attach(attach_config) => {
dap::DebugRequestType::Attach(attach_config) => {
json!({
"processId": attach_config.process_id,
})
}
dap::DebugRequest::Launch(launch_config) => json!({
dap::DebugRequestType::Launch(launch_config) => json!({
"program": launch_config.program,
"cwd": launch_config.cwd,
"args": launch_config.args
@@ -92,14 +92,15 @@ impl DebugAdapter for GoDebugAdapter {
let (host, port, timeout) = crate::configure_tcp_connection(tcp_connection).await?;
Ok(DebugAdapterBinary {
adapter_name: self.name(),
command: delve_path,
arguments: vec![
arguments: Some(vec![
"dap".into(),
"--listen".into(),
format!("{}:{}", host, port),
],
format!("{}:{}", host, port).into(),
]),
cwd: None,
envs: HashMap::default(),
envs: None,
connection: Some(adapters::TcpArguments {
host,
port,

View File

@@ -1,8 +1,8 @@
use adapters::latest_github_release;
use dap::StartDebuggingRequestArguments;
use gpui::AsyncApp;
use std::{collections::HashMap, path::PathBuf};
use task::{DebugRequest, DebugTaskDefinition};
use std::path::PathBuf;
use task::{DebugRequestType, DebugTaskDefinition};
use crate::*;
@@ -18,16 +18,16 @@ impl JsDebugAdapter {
let mut args = json!({
"type": "pwa-node",
"request": match config.request {
DebugRequest::Launch(_) => "launch",
DebugRequest::Attach(_) => "attach",
DebugRequestType::Launch(_) => "launch",
DebugRequestType::Attach(_) => "attach",
},
});
let map = args.as_object_mut().unwrap();
match &config.request {
DebugRequest::Attach(attach) => {
DebugRequestType::Attach(attach) => {
map.insert("processId".into(), attach.process_id.into());
}
DebugRequest::Launch(launch) => {
DebugRequestType::Launch(launch) => {
map.insert("program".into(), launch.program.clone().into());
if !launch.args.is_empty() {
@@ -106,22 +106,20 @@ impl DebugAdapter for JsDebugAdapter {
let (host, port, timeout) = crate::configure_tcp_connection(tcp_connection).await?;
Ok(DebugAdapterBinary {
adapter_name: self.name(),
command: delegate
.node_runtime()
.binary_path()
.await?
.to_string_lossy()
.into_owned(),
arguments: vec![
adapter_path
.join(Self::ADAPTER_PATH)
.to_string_lossy()
.to_string(),
port.to_string(),
host.to_string(),
],
arguments: Some(vec![
adapter_path.join(Self::ADAPTER_PATH).into(),
port.to_string().into(),
host.to_string().into(),
]),
cwd: None,
envs: HashMap::default(),
envs: None,
connection: Some(adapters::TcpArguments {
host,
port,

View File

@@ -1,7 +1,7 @@
use adapters::latest_github_release;
use dap::adapters::TcpArguments;
use gpui::AsyncApp;
use std::{collections::HashMap, path::PathBuf};
use std::path::PathBuf;
use task::DebugTaskDefinition;
use crate::*;
@@ -19,18 +19,20 @@ impl PhpDebugAdapter {
config: &DebugTaskDefinition,
) -> Result<dap::StartDebuggingRequestArguments> {
match &config.request {
dap::DebugRequest::Attach(_) => {
dap::DebugRequestType::Attach(_) => {
anyhow::bail!("php adapter does not support attaching")
}
dap::DebugRequest::Launch(launch_config) => Ok(dap::StartDebuggingRequestArguments {
configuration: json!({
"program": launch_config.program,
"cwd": launch_config.cwd,
"args": launch_config.args,
"stopOnEntry": config.stop_on_entry.unwrap_or_default(),
}),
request: config.request.to_dap(),
}),
dap::DebugRequestType::Launch(launch_config) => {
Ok(dap::StartDebuggingRequestArguments {
configuration: json!({
"program": launch_config.program,
"cwd": launch_config.cwd,
"args": launch_config.args,
"stopOnEntry": config.stop_on_entry.unwrap_or_default(),
}),
request: config.request.to_dap(),
})
}
}
}
}
@@ -92,26 +94,24 @@ impl DebugAdapter for PhpDebugAdapter {
let (host, port, timeout) = crate::configure_tcp_connection(tcp_connection).await?;
Ok(DebugAdapterBinary {
adapter_name: self.name(),
command: delegate
.node_runtime()
.binary_path()
.await?
.to_string_lossy()
.into_owned(),
arguments: vec![
adapter_path
.join(Self::ADAPTER_PATH)
.to_string_lossy()
.to_string(),
format!("--server={}", port),
],
arguments: Some(vec![
adapter_path.join(Self::ADAPTER_PATH).into(),
format!("--server={}", port).into(),
]),
connection: Some(TcpArguments {
port,
host,
timeout,
}),
cwd: None,
envs: HashMap::default(),
envs: None,
request_args: self.request_args(config)?,
})
}

View File

@@ -1,7 +1,7 @@
use crate::*;
use dap::{DebugRequest, StartDebuggingRequestArguments};
use dap::{DebugRequestType, StartDebuggingRequestArguments};
use gpui::AsyncApp;
use std::{collections::HashMap, ffi::OsStr, path::PathBuf};
use std::{ffi::OsStr, path::PathBuf};
use task::DebugTaskDefinition;
#[derive(Default)]
@@ -16,18 +16,18 @@ impl PythonDebugAdapter {
fn request_args(&self, config: &DebugTaskDefinition) -> StartDebuggingRequestArguments {
let mut args = json!({
"request": match config.request {
DebugRequest::Launch(_) => "launch",
DebugRequest::Attach(_) => "attach",
DebugRequestType::Launch(_) => "launch",
DebugRequestType::Attach(_) => "attach",
},
"subProcess": true,
"redirectOutput": true,
});
let map = args.as_object_mut().unwrap();
match &config.request {
DebugRequest::Attach(attach) => {
DebugRequestType::Attach(attach) => {
map.insert("processId".into(), attach.process_id.into());
}
DebugRequest::Launch(launch) => {
DebugRequestType::Launch(launch) => {
map.insert("program".into(), launch.program.clone().into());
map.insert("args".into(), launch.args.clone().into());
@@ -141,22 +141,20 @@ impl DebugAdapter for PythonDebugAdapter {
};
Ok(DebugAdapterBinary {
adapter_name: self.name(),
command: python_path.ok_or(anyhow!("failed to find binary path for python"))?,
arguments: vec![
debugpy_dir
.join(Self::ADAPTER_PATH)
.to_string_lossy()
.to_string(),
format!("--port={}", port),
format!("--host={}", host),
],
arguments: Some(vec![
debugpy_dir.join(Self::ADAPTER_PATH).into(),
format!("--port={}", port).into(),
format!("--host={}", host).into(),
]),
connection: Some(adapters::TcpArguments {
host,
port,
timeout,
}),
cwd: None,
envs: HashMap::default(),
envs: None,
request_args: self.request_args(config),
})
}

View File

@@ -12,9 +12,6 @@ workspace = true
path = "src/debugger_tools.rs"
doctest = false
[features]
test-support = []
[dependencies]
anyhow.workspace = true
dap.workspace = true

View File

@@ -41,7 +41,7 @@ struct DapLogView {
_subscriptions: Vec<Subscription>,
}
pub struct LogStore {
struct LogStore {
projects: HashMap<WeakEntity<Project>, ProjectState>,
debug_clients: HashMap<SessionId, DebugAdapterState>,
rpc_tx: UnboundedSender<(SessionId, IoKind, String)>,
@@ -101,7 +101,7 @@ impl DebugAdapterState {
}
impl LogStore {
pub fn new(cx: &Context<Self>) -> Self {
fn new(cx: &Context<Self>) -> Self {
let (rpc_tx, mut rpc_rx) = unbounded::<(SessionId, IoKind, String)>();
cx.spawn(async move |this, cx| {
while let Some((client_id, io_kind, message)) = rpc_rx.next().await {
@@ -566,13 +566,11 @@ impl DapLogView {
.dap_store()
.read(cx)
.sessions()
.filter_map(|session| {
let session = session.read(cx);
session.adapter_name();
let client = session.adapter_client()?;
.filter_map(|client| {
let client = client.read(cx).adapter_client()?;
Some(DapMenuItem {
client_id: client.id(),
client_name: session.adapter_name().to_string(),
client_name: client.name().0.as_ref().into(),
has_adapter_logs: client.has_adapter_logs(),
selected_entry: self.current_view.map_or(LogKind::Adapter, |(_, kind)| kind),
})
@@ -845,29 +843,3 @@ impl EventEmitter<Event> for LogStore {}
impl EventEmitter<Event> for DapLogView {}
impl EventEmitter<EditorEvent> for DapLogView {}
impl EventEmitter<SearchEvent> for DapLogView {}
#[cfg(any(test, feature = "test-support"))]
impl LogStore {
pub fn contained_session_ids(&self) -> Vec<SessionId> {
self.debug_clients.keys().cloned().collect()
}
pub fn rpc_messages_for_session_id(&self, session_id: SessionId) -> Vec<String> {
self.debug_clients
.get(&session_id)
.expect("This session should exist if a test is calling")
.rpc_messages
.messages
.clone()
.into()
}
pub fn log_messages_for_session_id(&self, session_id: SessionId) -> Vec<String> {
self.debug_clients
.get(&session_id)
.expect("This session should exist if a test is calling")
.log_messages
.clone()
.into()
}
}

View File

@@ -20,9 +20,6 @@ test-support = [
"project/test-support",
"util/test-support",
"workspace/test-support",
"env_logger",
"unindent",
"debugger_tools"
]
[dependencies]
@@ -40,7 +37,6 @@ gpui.workspace = true
language.workspace = true
log.workspace = true
menu.workspace = true
parking_lot.workspace = true
picker.workspace = true
pretty_assertions.workspace = true
project.workspace = true
@@ -57,13 +53,9 @@ ui.workspace = true
util.workspace = true
workspace.workspace = true
workspace-hack.workspace = true
env_logger = { workspace = true, optional = true }
debugger_tools = { workspace = true, optional = true }
unindent = { workspace = true, optional = true }
[dev-dependencies]
dap = { workspace = true, features = ["test-support"] }
debugger_tools = { workspace = true, features = ["test-support"] }
editor = { workspace = true, features = ["test-support"] }
env_logger.workspace = true
gpui = { workspace = true, features = ["test-support"] }

View File

@@ -1,7 +1,7 @@
use dap::DebugRequest;
use dap::DebugRequestType;
use fuzzy::{StringMatch, StringMatchCandidate};
use gpui::Subscription;
use gpui::{DismissEvent, Entity, EventEmitter, Focusable, Render};
use gpui::{Subscription, WeakEntity};
use picker::{Picker, PickerDelegate};
use std::sync::Arc;
@@ -9,9 +9,7 @@ use sysinfo::System;
use ui::{Context, Tooltip, prelude::*};
use ui::{ListItem, ListItemSpacing};
use util::debug_panic;
use workspace::{ModalView, Workspace};
use crate::debugger_panel::DebugPanel;
use workspace::ModalView;
#[derive(Debug, Clone)]
pub(super) struct Candidate {
@@ -24,19 +22,19 @@ pub(crate) struct AttachModalDelegate {
selected_index: usize,
matches: Vec<StringMatch>,
placeholder_text: Arc<str>,
workspace: WeakEntity<Workspace>,
project: Entity<project::Project>,
pub(crate) debug_config: task::DebugTaskDefinition,
candidates: Arc<[Candidate]>,
}
impl AttachModalDelegate {
fn new(
workspace: Entity<Workspace>,
project: Entity<project::Project>,
debug_config: task::DebugTaskDefinition,
candidates: Arc<[Candidate]>,
) -> Self {
Self {
workspace: workspace.downgrade(),
project,
debug_config,
candidates,
selected_index: 0,
@@ -53,7 +51,7 @@ pub struct AttachModal {
impl AttachModal {
pub fn new(
workspace: Entity<Workspace>,
project: Entity<project::Project>,
debug_config: task::DebugTaskDefinition,
modal: bool,
window: &mut Window,
@@ -77,11 +75,11 @@ impl AttachModal {
.collect();
processes.sort_by_key(|k| k.name.clone());
let processes = processes.into_iter().collect();
Self::with_processes(workspace, debug_config, processes, modal, window, cx)
Self::with_processes(project, debug_config, processes, modal, window, cx)
}
pub(super) fn with_processes(
workspace: Entity<Workspace>,
project: Entity<project::Project>,
debug_config: task::DebugTaskDefinition,
processes: Arc<[Candidate]>,
modal: bool,
@@ -90,7 +88,7 @@ impl AttachModal {
) -> Self {
let picker = cx.new(|cx| {
Picker::uniform_list(
AttachModalDelegate::new(workspace, debug_config, processes),
AttachModalDelegate::new(project, debug_config, processes),
window,
cx,
)
@@ -204,7 +202,7 @@ impl PickerDelegate for AttachModalDelegate {
})
}
fn confirm(&mut self, _: bool, window: &mut Window, cx: &mut Context<Picker<Self>>) {
fn confirm(&mut self, _: bool, _window: &mut Window, cx: &mut Context<Picker<Self>>) {
let candidate = self
.matches
.get(self.selected_index())
@@ -218,26 +216,23 @@ impl PickerDelegate for AttachModalDelegate {
};
match &mut self.debug_config.request {
DebugRequest::Attach(config) => {
DebugRequestType::Attach(config) => {
config.process_id = Some(candidate.pid);
}
DebugRequest::Launch(_) => {
DebugRequestType::Launch(_) => {
debug_panic!("Debugger attach modal used on launch debug config");
return;
}
}
let definition = self.debug_config.clone();
let panel = self
.workspace
.update(cx, |workspace, cx| workspace.panel::<DebugPanel>(cx))
.ok()
.flatten();
if let Some(panel) = panel {
panel.update(cx, |panel, cx| {
panel.start_session(definition, window, cx);
});
}
let config = self.debug_config.clone();
self.project
.update(cx, |project, cx| {
let ret = project.start_debug_session(config, cx);
ret
})
.detach_and_log_err(cx);
cx.emit(DismissEvent);
}

View File

@@ -6,7 +6,6 @@ use crate::{new_session_modal::NewSessionModal, session::DebugSession};
use anyhow::{Result, anyhow};
use collections::HashMap;
use command_palette_hooks::CommandPaletteFilter;
use dap::StartDebuggingRequestArguments;
use dap::{
ContinuedEvent, LoadedSourceEvent, ModuleEvent, OutputEvent, StoppedEvent, ThreadEvent,
client::SessionId, debugger_settings::DebuggerSettings,
@@ -18,7 +17,6 @@ use gpui::{
actions, anchored, deferred,
};
use project::debugger::session::{Session, SessionStateEvent};
use project::{
Project,
debugger::{
@@ -32,7 +30,7 @@ use settings::Settings;
use std::any::TypeId;
use std::path::Path;
use std::sync::Arc;
use task::{DebugTaskDefinition, DebugTaskTemplate};
use task::DebugTaskDefinition;
use terminal_view::terminal_panel::TerminalPanel;
use ui::{ContextMenu, Divider, DropdownMenu, Tooltip, prelude::*};
use workspace::{
@@ -64,7 +62,7 @@ pub struct DebugPanel {
active_session: Option<Entity<DebugSession>>,
/// This represents the last debug definition that was created in the new session modal
pub(crate) past_debug_definition: Option<DebugTaskDefinition>,
project: Entity<Project>,
project: WeakEntity<Project>,
workspace: WeakEntity<Workspace>,
focus_handle: FocusHandle,
context_menu: Option<(Entity<ContextMenu>, Point<Pixels>, Subscription)>,
@@ -98,10 +96,10 @@ impl DebugPanel {
window,
|panel, _, event: &tasks_ui::ShowAttachModal, window, cx| {
panel.workspace.update(cx, |workspace, cx| {
let workspace_handle = cx.entity().clone();
let project = workspace.project().clone();
workspace.toggle_modal(window, cx, |window, cx| {
crate::attach_modal::AttachModal::new(
workspace_handle,
project,
event.debug_config.clone(),
true,
window,
@@ -128,7 +126,7 @@ impl DebugPanel {
_subscriptions,
past_debug_definition: None,
focus_handle: cx.focus_handle(),
project,
project: project.downgrade(),
workspace: workspace.weak_handle(),
context_menu: None,
};
@@ -220,7 +218,7 @@ impl DebugPanel {
pub fn load(
workspace: WeakEntity<Workspace>,
cx: &mut AsyncWindowContext,
cx: AsyncWindowContext,
) -> Task<Result<Entity<Self>>> {
cx.spawn(async move |cx| {
workspace.update_in(cx, |workspace, window, cx| {
@@ -246,226 +244,102 @@ impl DebugPanel {
});
})
.detach();
workspace.set_debugger_provider(DebuggerProvider(debug_panel.clone()));
debug_panel
})
})
}
pub fn start_session(
&mut self,
definition: DebugTaskDefinition,
window: &mut Window,
cx: &mut Context<Self>,
) {
let task_contexts = self
.workspace
.update(cx, |workspace, cx| {
tasks_ui::task_contexts(workspace, window, cx)
})
.ok();
let dap_store = self.project.read(cx).dap_store().clone();
cx.spawn_in(window, async move |this, cx| {
let task_context = if let Some(task) = task_contexts {
task.await
.active_worktree_context
.map_or(task::TaskContext::default(), |context| context.1)
} else {
task::TaskContext::default()
};
let (session, task) = dap_store.update(cx, |dap_store, cx| {
let template = DebugTaskTemplate {
locator: None,
definition: definition.clone(),
};
let session = if let Some(debug_config) = template
.to_zed_format()
.resolve_task("debug_task", &task_context)
.and_then(|resolved_task| resolved_task.resolved_debug_adapter_config())
{
dap_store.new_session(debug_config.definition, None, cx)
} else {
dap_store.new_session(definition.clone(), None, cx)
};
(session.clone(), dap_store.boot_session(session, cx))
})?;
match task.await {
Err(e) => {
this.update(cx, |this, cx| {
this.workspace
.update(cx, |workspace, cx| {
workspace.show_error(&e, cx);
})
.ok();
})
.ok();
session
.update(cx, |session, cx| session.shutdown(cx))?
.await;
}
Ok(_) => Self::register_session(this, session, cx).await?,
}
anyhow::Ok(())
})
.detach_and_log_err(cx);
}
async fn register_session(
this: WeakEntity<Self>,
session: Entity<Session>,
cx: &mut AsyncWindowContext,
) -> Result<()> {
let adapter_name = session.update(cx, |session, _| session.adapter_name())?;
this.update_in(cx, |_, window, cx| {
cx.subscribe_in(
&session,
window,
move |_, session, event: &SessionStateEvent, window, cx| match event {
SessionStateEvent::Restart => {
let mut curr_session = session.clone();
while let Some(parent_session) = curr_session
.read_with(cx, |session, _| session.parent_session().cloned())
{
curr_session = parent_session;
}
let definition = curr_session.update(cx, |session, _| session.definition());
let task = curr_session.update(cx, |session, cx| session.shutdown(cx));
let definition = definition.clone();
cx.spawn_in(window, async move |this, cx| {
task.await;
this.update_in(cx, |this, window, cx| {
this.start_session(definition, window, cx)
})
})
.detach_and_log_err(cx);
}
_ => {}
},
)
.detach();
})
.ok();
let serialized_layout = persistence::get_serialized_pane_layout(adapter_name).await;
let workspace = this.update_in(cx, |this, window, cx| {
this.sessions.retain(|session| {
session
.read(cx)
.mode()
.as_running()
.map_or(false, |running_state| {
!running_state.read(cx).session().read(cx).is_terminated()
})
});
let session_item = DebugSession::running(
this.project.clone(),
this.workspace.clone(),
session,
cx.weak_entity(),
serialized_layout,
window,
cx,
);
if let Some(running) = session_item.read(cx).mode().as_running().cloned() {
// We might want to make this an event subscription and only notify when a new thread is selected
// This is used to filter the command menu correctly
cx.observe(&running, |_, _, cx| cx.notify()).detach();
}
this.sessions.push(session_item.clone());
this.activate_session(session_item, window, cx);
this.workspace.clone()
})?;
workspace.update_in(cx, |workspace, window, cx| {
workspace.focus_panel::<Self>(window, cx);
})?;
Ok(())
}
pub fn start_child_session(
&mut self,
request: &StartDebuggingRequestArguments,
parent_session: Entity<Session>,
window: &mut Window,
cx: &mut Context<Self>,
) {
let Some(worktree) = parent_session.read(cx).worktree() else {
log::error!("Attempted to start a child session from non local debug session");
return;
};
let dap_store_handle = self.project.read(cx).dap_store().clone();
let breakpoint_store = self.project.read(cx).breakpoint_store();
let definition = parent_session.read(cx).definition().clone();
let mut binary = parent_session.read(cx).binary().clone();
binary.request_args = request.clone();
cx.spawn_in(window, async move |this, cx| {
let (session, task) = dap_store_handle.update(cx, |dap_store, cx| {
let session =
dap_store.new_session(definition.clone(), Some(parent_session.clone()), cx);
let task = session.update(cx, |session, cx| {
session.boot(
binary,
worktree,
breakpoint_store,
dap_store_handle.downgrade(),
cx,
)
});
(session, task)
})?;
match task.await {
Err(e) => {
this.update(cx, |this, cx| {
this.workspace
.update(cx, |workspace, cx| {
workspace.show_error(&e, cx);
})
.ok();
})
.ok();
session
.update(cx, |session, cx| session.shutdown(cx))?
.await;
}
Ok(_) => Self::register_session(this, session, cx).await?,
}
anyhow::Ok(())
})
.detach_and_log_err(cx);
}
pub fn active_session(&self) -> Option<Entity<DebugSession>> {
self.active_session.clone()
}
pub fn debug_panel_items_by_client(
&self,
client_id: &SessionId,
cx: &Context<Self>,
) -> Vec<Entity<DebugSession>> {
self.sessions
.iter()
.filter(|item| item.read(cx).session_id(cx) == *client_id)
.map(|item| item.clone())
.collect()
}
pub fn debug_panel_item_by_client(
&self,
client_id: SessionId,
cx: &mut Context<Self>,
) -> Option<Entity<DebugSession>> {
self.sessions
.iter()
.find(|item| {
let item = item.read(cx);
item.session_id(cx) == client_id
})
.cloned()
}
fn handle_dap_store_event(
&mut self,
_dap_store: &Entity<DapStore>,
dap_store: &Entity<DapStore>,
event: &dap_store::DapStoreEvent,
window: &mut Window,
cx: &mut Context<Self>,
) {
match event {
dap_store::DapStoreEvent::DebugSessionInitialized(session_id) => {
let Some(session) = dap_store.read(cx).session_by_id(session_id) else {
return log::error!(
"Couldn't get session with id: {session_id:?} from DebugClientStarted event"
);
};
let adapter_name = session.read(cx).adapter_name();
let session_id = *session_id;
cx.spawn_in(window, async move |this, cx| {
let serialized_layout =
persistence::get_serialized_pane_layout(adapter_name).await;
this.update_in(cx, |this, window, cx| {
let Some(project) = this.project.upgrade() else {
return log::error!(
"Debug Panel out lived it's weak reference to Project"
);
};
if this
.sessions
.iter()
.any(|item| item.read(cx).session_id(cx) == session_id)
{
// We already have an item for this session.
return;
}
let session_item = DebugSession::running(
project,
this.workspace.clone(),
session,
cx.weak_entity(),
serialized_layout,
window,
cx,
);
if let Some(running) = session_item.read(cx).mode().as_running().cloned() {
// We might want to make this an event subscription and only notify when a new thread is selected
// This is used to filter the command menu correctly
cx.observe(&running, |_, _, cx| cx.notify()).detach();
}
this.sessions.push(session_item.clone());
this.activate_session(session_item, window, cx);
})
})
.detach();
}
dap_store::DapStoreEvent::RunInTerminal {
title,
cwd,
@@ -487,12 +361,6 @@ impl DebugPanel {
)
.detach_and_log_err(cx);
}
dap_store::DapStoreEvent::SpawnChildSession {
request,
parent_session,
} => {
self.start_child_session(request, parent_session.clone(), window, cx);
}
_ => {}
}
}
@@ -527,7 +395,7 @@ impl DebugPanel {
cwd,
title,
},
task::RevealStrategy::Never,
task::RevealStrategy::Always,
window,
cx,
);
@@ -587,6 +455,8 @@ impl DebugPanel {
let session = this.dap_store().read(cx).session_by_id(session_id);
session.map(|session| !session.read(cx).is_terminated())
})
.ok()
.flatten()
.unwrap_or_default();
cx.spawn_in(window, async move |this, cx| {
@@ -899,6 +769,9 @@ impl DebugPanel {
this.restart_session(cx);
},
))
.disabled(
!capabilities.supports_restart_request.unwrap_or_default(),
)
.tooltip(move |window, cx| {
Tooltip::text("Restart")(window, cx)
}),
@@ -1010,6 +883,7 @@ impl DebugPanel {
impl EventEmitter<PanelEvent> for DebugPanel {}
impl EventEmitter<DebugPanelEvent> for DebugPanel {}
impl EventEmitter<project::Event> for DebugPanel {}
impl Focusable for DebugPanel {
fn focus_handle(&self, _: &App) -> FocusHandle {
@@ -1155,15 +1029,3 @@ impl Render for DebugPanel {
.into_any()
}
}
struct DebuggerProvider(Entity<DebugPanel>);
impl workspace::DebuggerProvider for DebuggerProvider {
fn start_session(&self, definition: DebugTaskDefinition, window: &mut Window, cx: &mut App) {
self.0.update(cx, |_, cx| {
cx.defer_in(window, |this, window, cx| {
this.start_session(definition, window, cx);
})
})
}
}

View File

@@ -16,7 +16,7 @@ mod new_session_modal;
mod persistence;
pub(crate) mod session;
#[cfg(any(test, feature = "test-support"))]
#[cfg(test)]
pub mod tests;
actions!(

View File

@@ -4,14 +4,16 @@ use std::{
path::{Path, PathBuf},
};
use dap::{DapRegistry, DebugRequest};
use anyhow::{Result, anyhow};
use dap::DebugRequestType;
use editor::{Editor, EditorElement, EditorStyle};
use gpui::{
App, AppContext, DismissEvent, Entity, EventEmitter, FocusHandle, Focusable, Render, TextStyle,
WeakEntity,
};
use project::Project;
use settings::Settings;
use task::{DebugTaskDefinition, DebugTaskTemplate, LaunchRequest};
use task::{DebugTaskDefinition, LaunchConfig};
use theme::ThemeSettings;
use ui::{
ActiveTheme, Button, ButtonCommon, ButtonSize, CheckboxWithLabel, Clickable, Color, Context,
@@ -19,6 +21,7 @@ use ui::{
LabelCommon as _, ParentElement, RenderOnce, SharedString, Styled, StyledExt, ToggleButton,
ToggleState, Toggleable, Window, div, h_flex, relative, rems, v_flex,
};
use util::ResultExt;
use workspace::{ModalView, Workspace};
use crate::{attach_modal::AttachModal, debugger_panel::DebugPanel};
@@ -34,9 +37,9 @@ pub(super) struct NewSessionModal {
last_selected_profile_name: Option<SharedString>,
}
fn suggested_label(request: &DebugRequest, debugger: &str) -> String {
fn suggested_label(request: &DebugRequestType, debugger: &str) -> String {
match request {
DebugRequest::Launch(config) => {
DebugRequestType::Launch(config) => {
let last_path_component = Path::new(&config.program)
.file_name()
.map(|name| name.to_string_lossy())
@@ -44,7 +47,7 @@ fn suggested_label(request: &DebugRequest, debugger: &str) -> String {
format!("{} ({debugger})", last_path_component)
}
DebugRequest::Attach(config) => format!(
DebugRequestType::Attach(config) => format!(
"pid: {} ({debugger})",
config.process_id.unwrap_or(u32::MAX)
),
@@ -68,7 +71,7 @@ impl NewSessionModal {
.and_then(|def| def.stop_on_entry);
let launch_config = match past_debug_definition.map(|def| def.request) {
Some(DebugRequest::Launch(launch_config)) => Some(launch_config),
Some(DebugRequestType::Launch(launch_config)) => Some(launch_config),
_ => None,
};
@@ -85,38 +88,39 @@ impl NewSessionModal {
}
}
fn debug_config(&self, cx: &App, debugger: &str) -> DebugTaskDefinition {
fn debug_config(&self, cx: &App) -> Option<DebugTaskDefinition> {
let request = self.mode.debug_task(cx);
DebugTaskDefinition {
adapter: debugger.to_owned(),
label: suggested_label(&request, debugger),
Some(DebugTaskDefinition {
adapter: self.debugger.clone()?.to_string(),
label: suggested_label(&request, self.debugger.as_deref()?),
request,
initialize_args: self.initialize_args.clone(),
tcp_connection: None,
locator: None,
stop_on_entry: match self.stop_on_entry {
ToggleState::Selected => Some(true),
_ => None,
},
}
})
}
fn start_new_session(&self, window: &mut Window, cx: &mut Context<Self>) {
let Some(debugger) = self.debugger.as_ref() else {
// todo: show in UI.
log::error!("No debugger selected");
return;
};
let config = self.debug_config(cx, debugger);
let debug_panel = self.debug_panel.clone();
fn start_new_session(&self, window: &mut Window, cx: &mut Context<Self>) -> Result<()> {
let workspace = self.workspace.clone();
let config = self
.debug_config(cx)
.ok_or_else(|| anyhow!("Failed to create a debug config"))?;
let task_contexts = self
.workspace
let _ = self.debug_panel.update(cx, |panel, _| {
panel.past_debug_definition = Some(config.clone());
});
let task_contexts = workspace
.update(cx, |workspace, cx| {
tasks_ui::task_contexts(workspace, window, cx)
})
.ok();
cx.spawn_in(window, async move |this, cx| {
cx.spawn(async move |this, cx| {
let task_context = if let Some(task) = task_contexts {
task.await
.active_worktree_context
@@ -124,29 +128,39 @@ impl NewSessionModal {
} else {
task::TaskContext::default()
};
let project = workspace.update(cx, |workspace, _| workspace.project().clone())?;
debug_panel.update_in(cx, |debug_panel, window, cx| {
let template = DebugTaskTemplate {
locator: None,
definition: config.clone(),
};
if let Some(debug_config) = template
.to_zed_format()
.resolve_task("debug_task", &task_context)
.and_then(|resolved_task| resolved_task.resolved_debug_adapter_config())
let task = project.update(cx, |this, cx| {
if let Some(debug_config) =
config
.clone()
.to_zed_format()
.ok()
.and_then(|task_template| {
task_template
.resolve_task("debug_task", &task_context)
.and_then(|resolved_task| {
resolved_task.resolved_debug_adapter_config()
})
})
{
debug_panel.start_session(debug_config.definition, window, cx)
this.start_debug_session(debug_config, cx)
} else {
debug_panel.start_session(config, window, cx)
this.start_debug_session(config, cx)
}
})?;
this.update(cx, |_, cx| {
cx.emit(DismissEvent);
})
.ok();
let spawn_result = task.await;
if spawn_result.is_ok() {
this.update(cx, |_, cx| {
cx.emit(DismissEvent);
})
.ok();
}
spawn_result?;
anyhow::Result::<_, anyhow::Error>::Ok(())
})
.detach_and_log_err(cx);
Ok(())
}
fn update_attach_picker(
@@ -200,7 +214,12 @@ impl NewSessionModal {
};
let available_adapters = workspace
.update(cx, |_, cx| DapRegistry::global(cx).enumerate_adapters())
.update(cx, |this, cx| {
this.project()
.read(cx)
.debug_adapters()
.enumerate_adapters()
})
.ok()
.unwrap_or_default();
@@ -232,20 +251,23 @@ impl NewSessionModal {
this.debugger = Some(task.adapter.clone().into());
this.initialize_args = task.initialize_args.clone();
match &task.request {
DebugRequest::Launch(launch_config) => {
DebugRequestType::Launch(launch_config) => {
this.mode = NewSessionMode::launch(
Some(launch_config.clone()),
window,
cx,
);
}
DebugRequest::Attach(_) => {
let Some(workspace) = this.workspace.upgrade() else {
DebugRequestType::Attach(_) => {
let Ok(project) = this
.workspace
.read_with(cx, |this, _| this.project().clone())
else {
return;
};
this.mode = NewSessionMode::attach(
this.debugger.clone(),
workspace,
project,
window,
cx,
);
@@ -263,7 +285,7 @@ impl NewSessionModal {
}
};
let available_adapters: Vec<DebugTaskTemplate> = workspace
let available_adapters: Vec<DebugTaskDefinition> = workspace
.update(cx, |this, cx| {
this.project()
.read(cx)
@@ -281,9 +303,9 @@ impl NewSessionModal {
for debug_definition in available_adapters {
menu = menu.entry(
debug_definition.definition.label.clone(),
debug_definition.label.clone(),
None,
setter_for_name(debug_definition.definition),
setter_for_name(debug_definition),
);
}
menu
@@ -300,7 +322,7 @@ struct LaunchMode {
impl LaunchMode {
fn new(
past_launch_config: Option<LaunchRequest>,
past_launch_config: Option<LaunchConfig>,
window: &mut Window,
cx: &mut App,
) -> Entity<Self> {
@@ -326,9 +348,9 @@ impl LaunchMode {
cx.new(|_| Self { program, cwd })
}
fn debug_task(&self, cx: &App) -> task::LaunchRequest {
fn debug_task(&self, cx: &App) -> task::LaunchConfig {
let path = self.cwd.read(cx).text(cx);
task::LaunchRequest {
task::LaunchConfig {
program: self.program.read(cx).text(cx),
cwd: path.is_empty().not().then(|| PathBuf::from(path)),
args: Default::default(),
@@ -345,20 +367,21 @@ struct AttachMode {
impl AttachMode {
fn new(
debugger: Option<SharedString>,
workspace: Entity<Workspace>,
project: Entity<Project>,
window: &mut Window,
cx: &mut Context<NewSessionModal>,
) -> Entity<Self> {
let debug_definition = DebugTaskDefinition {
label: "Attach New Session Setup".into(),
request: dap::DebugRequest::Attach(task::AttachRequest { process_id: None }),
request: dap::DebugRequestType::Attach(task::AttachConfig { process_id: None }),
tcp_connection: None,
adapter: debugger.clone().unwrap_or_default().into(),
locator: None,
initialize_args: None,
stop_on_entry: Some(false),
};
let attach_picker = cx.new(|cx| {
let modal = AttachModal::new(workspace, debug_definition.clone(), false, window, cx);
let modal = AttachModal::new(project, debug_definition.clone(), false, window, cx);
window.focus(&modal.focus_handle(cx));
modal
@@ -368,8 +391,8 @@ impl AttachMode {
attach_picker,
})
}
fn debug_task(&self) -> task::AttachRequest {
task::AttachRequest { process_id: None }
fn debug_task(&self) -> task::AttachConfig {
task::AttachConfig { process_id: None }
}
}
@@ -383,7 +406,7 @@ enum NewSessionMode {
}
impl NewSessionMode {
fn debug_task(&self, cx: &App) -> DebugRequest {
fn debug_task(&self, cx: &App) -> DebugRequestType {
match self {
NewSessionMode::Launch(entity) => entity.read(cx).debug_task(cx).into(),
NewSessionMode::Attach(entity) => entity.read(cx).debug_task().into(),
@@ -458,14 +481,14 @@ impl RenderOnce for NewSessionMode {
impl NewSessionMode {
fn attach(
debugger: Option<SharedString>,
workspace: Entity<Workspace>,
project: Entity<Project>,
window: &mut Window,
cx: &mut Context<NewSessionModal>,
) -> Self {
Self::Attach(AttachMode::new(debugger, workspace, window, cx))
Self::Attach(AttachMode::new(debugger, project, window, cx))
}
fn launch(
past_launch_config: Option<LaunchRequest>,
past_launch_config: Option<LaunchConfig>,
window: &mut Window,
cx: &mut Context<NewSessionModal>,
) -> Self {
@@ -557,12 +580,15 @@ impl Render for NewSessionModal {
.toggle_state(matches!(self.mode, NewSessionMode::Attach(_)))
.style(ui::ButtonStyle::Subtle)
.on_click(cx.listener(|this, _, window, cx| {
let Some(workspace) = this.workspace.upgrade() else {
let Ok(project) = this
.workspace
.read_with(cx, |this, _| this.project().clone())
else {
return;
};
this.mode = NewSessionMode::attach(
this.debugger.clone(),
workspace,
project,
window,
cx,
);
@@ -616,7 +642,7 @@ impl Render for NewSessionModal {
.child(
Button::new("debugger-spawn", "Start")
.on_click(cx.listener(|this, _, window, cx| {
this.start_new_session(window, cx);
this.start_new_session(window, cx).log_err();
}))
.disabled(self.debugger.is_none()),
),

View File

@@ -88,12 +88,6 @@ impl DebugSession {
}
}
pub fn session(&self, cx: &App) -> Entity<Session> {
match &self.mode {
DebugSessionState::Running(entity) => entity.read(cx).session().clone(),
}
}
pub(crate) fn shutdown(&mut self, cx: &mut Context<Self>) {
match &self.mode {
DebugSessionState::Running(state) => state.update(cx, |state, cx| state.shutdown(cx)),
@@ -121,7 +115,13 @@ impl DebugSession {
};
self.label
.get_or_init(|| session.read(cx).label())
.get_or_init(|| {
session
.read(cx)
.as_local()
.expect("Remote Debug Sessions are not implemented yet")
.label()
})
.to_owned()
}


@@ -418,19 +418,6 @@ impl RunningState {
let threads = this.session.update(cx, |this, cx| this.threads(cx));
this.select_current_thread(&threads, cx);
}
SessionEvent::CapabilitiesLoaded => {
let capabilities = this.capabilities(cx);
if !capabilities.supports_modules_request.unwrap_or(false) {
this.remove_pane_item(DebuggerPaneItem::Modules, window, cx);
}
if !capabilities
.supports_loaded_sources_request
.unwrap_or(false)
{
this.remove_pane_item(DebuggerPaneItem::LoadedSources, window, cx);
}
}
_ => {}
}
cx.notify()
@@ -460,14 +447,12 @@ impl RunningState {
workspace::PaneGroup::with_root(root)
} else {
pane_close_subscriptions.clear();
let root = Self::default_pane_layout(
project,
&workspace,
&stack_frame_list,
&variable_list,
&module_list,
&loaded_source_list,
&console,
&breakpoint_list,
&mut pane_close_subscriptions,
@@ -504,6 +489,11 @@ impl RunningState {
window: &mut Window,
cx: &mut Context<Self>,
) {
debug_assert!(
item_kind.is_supported(self.session.read(cx).capabilities()),
"We should only allow removing supported item kinds"
);
if let Some((pane, item_id)) = self.panes.panes().iter().find_map(|pane| {
Some(pane).zip(
pane.read(cx)
@@ -934,7 +924,6 @@ impl RunningState {
stack_frame_list: &Entity<StackFrameList>,
variable_list: &Entity<VariableList>,
module_list: &Entity<ModuleList>,
loaded_source_list: &Entity<LoadedSourceList>,
console: &Entity<Console>,
breakpoints: &Entity<BreakpointList>,
subscriptions: &mut HashMap<EntityId, Subscription>,
@@ -974,7 +963,6 @@ impl RunningState {
this.activate_item(0, false, false, window, cx);
});
let center_pane = new_debugger_pane(workspace.clone(), project.clone(), window, cx);
center_pane.update(cx, |this, cx| {
this.add_item(
Box::new(SubView::new(
@@ -992,7 +980,7 @@ impl RunningState {
);
this.add_item(
Box::new(SubView::new(
module_list.focus_handle(cx),
this.focus_handle(cx),
module_list.clone().into(),
DebuggerPaneItem::Modules,
None,
@@ -1004,24 +992,8 @@ impl RunningState {
window,
cx,
);
this.add_item(
Box::new(SubView::new(
loaded_source_list.focus_handle(cx),
loaded_source_list.clone().into(),
DebuggerPaneItem::LoadedSources,
None,
cx,
)),
false,
false,
None,
window,
cx,
);
this.activate_item(0, false, false, window, cx);
});
let rightmost_pane = new_debugger_pane(workspace.clone(), project.clone(), window, cx);
rightmost_pane.update(cx, |this, cx| {
let weak_console = console.downgrade();
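The hunks above drop the `CapabilitiesLoaded` handler that removed the Modules and Loaded Sources panes, and instead `debug_assert!` in `remove_pane_item` that only supported item kinds are ever removed. A minimal, self-contained sketch of that capability gating, assuming the mapping implied by the removed handler (treating the console pane as always supported is an assumption here, not something the diff states):

```rust
#[derive(Clone, Copy)]
enum DebuggerPaneItem {
    Modules,
    LoadedSources,
    Console,
}

#[derive(Default)]
struct Capabilities {
    supports_modules_request: Option<bool>,
    supports_loaded_sources_request: Option<bool>,
}

impl DebuggerPaneItem {
    fn is_supported(self, caps: &Capabilities) -> bool {
        match self {
            DebuggerPaneItem::Modules => caps.supports_modules_request.unwrap_or(false),
            DebuggerPaneItem::LoadedSources => {
                caps.supports_loaded_sources_request.unwrap_or(false)
            }
            // Assumption: the console pane is not gated on any capability.
            DebuggerPaneItem::Console => true,
        }
    }
}

fn main() {
    let caps = Capabilities {
        supports_modules_request: Some(true),
        ..Default::default()
    };
    assert!(DebuggerPaneItem::Modules.is_supported(&caps));
    assert!(!DebuggerPaneItem::LoadedSources.is_supported(&caps));
    assert!(DebuggerPaneItem::Console.is_supported(&caps));
}
```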


@@ -1,29 +1,16 @@
use std::sync::Arc;
use anyhow::{Result, anyhow};
use dap::{DebugRequest, client::DebugAdapterClient};
use gpui::{Entity, TestAppContext, WindowHandle};
use project::{Project, debugger::session::Session};
use project::Project;
use settings::SettingsStore;
use task::DebugTaskDefinition;
use terminal_view::terminal_panel::TerminalPanel;
use workspace::Workspace;
use crate::{debugger_panel::DebugPanel, session::DebugSession};
#[cfg(test)]
mod attach_modal;
#[cfg(test)]
mod console;
#[cfg(test)]
mod dap_logger;
#[cfg(test)]
mod debugger_panel;
#[cfg(test)]
mod module_list;
#[cfg(test)]
mod stack_frame_list;
#[cfg(test)]
mod variable_list;
pub fn init_test(cx: &mut gpui::TestAppContext) {
@@ -55,7 +42,7 @@ pub async fn init_test_workspace(
let debugger_panel = workspace_handle
.update(cx, |_, window, cx| {
cx.spawn_in(window, async move |this, cx| {
DebugPanel::load(this, cx).await
DebugPanel::load(this, cx.clone()).await
})
})
.unwrap()
@@ -95,46 +82,3 @@ pub fn active_debug_session_panel(
})
.unwrap()
}
pub fn start_debug_session_with<T: Fn(&Arc<DebugAdapterClient>) + 'static>(
workspace: &WindowHandle<Workspace>,
cx: &mut gpui::TestAppContext,
config: DebugTaskDefinition,
configure: T,
) -> Result<Entity<Session>> {
let _subscription = project::debugger::test::intercept_debug_sessions(cx, configure);
workspace.update(cx, |workspace, window, cx| {
workspace.start_debug_session(config, window, cx)
})?;
cx.run_until_parked();
let session = workspace.read_with(cx, |workspace, cx| {
workspace
.panel::<DebugPanel>(cx)
.and_then(|panel| panel.read(cx).active_session())
.and_then(|session| session.read(cx).mode().as_running().cloned())
.map(|running| running.read(cx).session().clone())
.ok_or_else(|| anyhow!("Failed to get active session"))
})??;
Ok(session)
}
pub fn start_debug_session<T: Fn(&Arc<DebugAdapterClient>) + 'static>(
workspace: &WindowHandle<Workspace>,
cx: &mut gpui::TestAppContext,
configure: T,
) -> Result<Entity<Session>> {
start_debug_session_with(
workspace,
cx,
DebugTaskDefinition {
adapter: "fake-adapter".to_string(),
request: DebugRequest::Launch(Default::default()),
label: "test".to_string(),
initialize_args: None,
tcp_connection: None,
stop_on_entry: None,
},
configure,
)
}


@@ -1,11 +1,11 @@
use crate::{attach_modal::Candidate, tests::start_debug_session_with, *};
use crate::{attach_modal::Candidate, *};
use attach_modal::AttachModal;
use dap::{FakeAdapter, client::SessionId};
use gpui::{BackgroundExecutor, TestAppContext, VisualTestContext};
use menu::Confirm;
use project::{FakeFs, Project};
use serde_json::json;
use task::{AttachRequest, DebugTaskDefinition, TcpArgumentsTemplate};
use task::{AttachConfig, DebugTaskDefinition, TCPHost};
use tests::{init_test, init_test_workspace};
#[gpui::test]
@@ -26,17 +26,18 @@ async fn test_direct_attach_to_process(executor: BackgroundExecutor, cx: &mut Te
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session_with(
&workspace,
let session = debugger::test::start_debug_session_with(
&project,
cx,
DebugTaskDefinition {
adapter: "fake-adapter".to_string(),
request: dap::DebugRequest::Attach(AttachRequest {
request: dap::DebugRequestType::Attach(AttachConfig {
process_id: Some(10),
}),
label: "label".to_string(),
initialize_args: None,
tcp_connection: None,
locator: None,
stop_on_entry: None,
},
|client| {
@@ -47,6 +48,7 @@ async fn test_direct_attach_to_process(executor: BackgroundExecutor, cx: &mut Te
});
},
)
.await
.unwrap();
cx.run_until_parked();
@@ -98,16 +100,16 @@ async fn test_show_attach_modal_and_select_process(
});
let attach_modal = workspace
.update(cx, |workspace, window, cx| {
let workspace_handle = cx.entity();
workspace.toggle_modal(window, cx, |window, cx| {
AttachModal::with_processes(
workspace_handle,
project.clone(),
DebugTaskDefinition {
adapter: FakeAdapter::ADAPTER_NAME.into(),
request: dap::DebugRequest::Attach(AttachRequest::default()),
request: dap::DebugRequestType::Attach(AttachConfig::default()),
label: "attach example".into(),
initialize_args: None,
tcp_connection: Some(TcpArgumentsTemplate::default()),
tcp_connection: Some(TCPHost::default()),
locator: None,
stop_on_entry: None,
},
vec![


@@ -1,7 +1,4 @@
use crate::{
tests::{active_debug_session_panel, start_debug_session},
*,
};
use crate::{tests::active_debug_session_panel, *};
use dap::requests::StackTrace;
use gpui::{BackgroundExecutor, TestAppContext, VisualTestContext};
use project::{FakeFs, Project};
@@ -31,7 +28,9 @@ async fn test_handle_output_event(executor: BackgroundExecutor, cx: &mut TestApp
})
.unwrap();
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
client.on_request::<StackTrace, _>(move |_, _| {


@@ -1,118 +0,0 @@
use crate::tests::{init_test, init_test_workspace, start_debug_session};
use dap::requests::{StackTrace, Threads};
use debugger_tools::LogStore;
use gpui::{BackgroundExecutor, TestAppContext, VisualTestContext};
use project::Project;
use serde_json::json;
use std::cell::OnceCell;
#[gpui::test]
async fn test_dap_logger_captures_all_session_rpc_messages(
executor: BackgroundExecutor,
cx: &mut TestAppContext,
) {
let log_store_cell = std::rc::Rc::new(OnceCell::new());
cx.update(|cx| {
let log_store_cell = log_store_cell.clone();
cx.observe_new::<LogStore>(move |_, _, cx| {
log_store_cell.set(cx.entity()).unwrap();
})
.detach();
debugger_tools::init(cx);
});
init_test(cx);
let log_store = log_store_cell.get().unwrap().clone();
// Create a filesystem with a simple project
let fs = project::FakeFs::new(executor.clone());
fs.insert_tree(
"/project",
json!({
"main.rs": "fn main() {\n println!(\"Hello, world!\");\n}"
}),
)
.await;
assert!(
log_store.read_with(cx, |log_store, _| log_store
.contained_session_ids()
.is_empty()),
"log_store shouldn't contain any session IDs before any sessions were created"
);
let project = Project::test(fs, ["/project".as_ref()], cx).await;
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
// Start a debug session
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session_id = session.read_with(cx, |session, _| session.session_id());
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
assert_eq!(
log_store.read_with(cx, |log_store, _| log_store.contained_session_ids().len()),
1,
);
assert!(
log_store.read_with(cx, |log_store, _| log_store
.contained_session_ids()
.contains(&session_id)),
"log_store should contain the session IDs of the started session"
);
assert!(
!log_store.read_with(cx, |log_store, _| log_store
.rpc_messages_for_session_id(session_id)
.is_empty()),
"We should have the initialization sequence in the log store"
);
// Set up basic responses for common requests
client.on_request::<Threads, _>(move |_, _| {
Ok(dap::ThreadsResponse {
threads: vec![dap::Thread {
id: 1,
name: "Thread 1".into(),
}],
})
});
client.on_request::<StackTrace, _>(move |_, _| {
Ok(dap::StackTraceResponse {
stack_frames: Vec::default(),
total_frames: None,
})
});
// Run until all pending tasks are executed
cx.run_until_parked();
// Simulate a stopped event to generate more DAP messages
client
.fake_event(dap::messages::Events::Stopped(dap::StoppedEvent {
reason: dap::StoppedEventReason::Pause,
description: None,
thread_id: Some(1),
preserve_focus_hint: None,
text: None,
all_threads_stopped: None,
hit_breakpoint_ids: None,
}))
.await;
cx.run_until_parked();
// Shut down the debug session
let shutdown_session = project.update(cx, |project, cx| {
project.dap_store().update(cx, |dap_store, cx| {
dap_store.shutdown_session(session.read(cx).session_id(), cx)
})
});
shutdown_session.await.unwrap();
cx.run_until_parked();
}


@@ -1,4 +1,4 @@
use crate::{tests::start_debug_session, *};
use crate::*;
use dap::{
ErrorResponse, Message, RunInTerminalRequestArguments, SourceBreakpoint,
StartDebuggingRequestArguments, StartDebuggingRequestArgumentsRequest,
@@ -48,7 +48,9 @@ async fn test_basic_show_debug_panel(executor: BackgroundExecutor, cx: &mut Test
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
client.on_request::<Threads, _>(move |_, _| {
@@ -185,7 +187,9 @@ async fn test_we_can_only_have_one_panel_per_debug_session(
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
client.on_request::<Threads, _>(move |_, _| {
@@ -350,7 +354,9 @@ async fn test_handle_successful_run_in_terminal_reverse_request(
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
client
@@ -413,86 +419,6 @@ async fn test_handle_successful_run_in_terminal_reverse_request(
shutdown_session.await.unwrap();
}
#[gpui::test]
async fn test_handle_start_debugging_request(
executor: BackgroundExecutor,
cx: &mut TestAppContext,
) {
init_test(cx);
let fs = FakeFs::new(executor.clone());
fs.insert_tree(
"/project",
json!({
"main.rs": "First line\nSecond line\nThird line\nFourth line",
}),
)
.await;
let project = Project::test(fs, ["/project".as_ref()], cx).await;
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
let fake_config = json!({"one": "two"});
let launched_with = Arc::new(parking_lot::Mutex::new(None));
let _subscription = project::debugger::test::intercept_debug_sessions(cx, {
let launched_with = launched_with.clone();
move |client| {
let launched_with = launched_with.clone();
client.on_request::<dap::requests::Launch, _>(move |_, args| {
launched_with.lock().replace(args.raw);
Ok(())
});
client.on_request::<dap::requests::Attach, _>(move |_, _| {
assert!(false, "should not get attach request");
Ok(())
});
}
});
client
.fake_reverse_request::<StartDebugging>(StartDebuggingRequestArguments {
request: StartDebuggingRequestArgumentsRequest::Launch,
configuration: fake_config.clone(),
})
.await;
cx.run_until_parked();
workspace
.update(cx, |workspace, _window, cx| {
let debug_panel = workspace.panel::<DebugPanel>(cx).unwrap();
let active_session = debug_panel
.read(cx)
.active_session()
.unwrap()
.read(cx)
.session(cx);
let parent_session = active_session.read(cx).parent_session().unwrap();
assert_eq!(
active_session.read(cx).definition(),
parent_session.read(cx).definition()
);
})
.unwrap();
assert_eq!(&fake_config, launched_with.lock().as_ref().unwrap());
let shutdown_session = project.update(cx, |project, cx| {
project.dap_store().update(cx, |dap_store, cx| {
dap_store.shutdown_session(session.read(cx).session_id(), cx)
})
});
shutdown_session.await.unwrap();
}
// Covers that we always send a response back if something went wrong
// while spawning the terminal.
#[gpui::test]
@@ -518,7 +444,9 @@ async fn test_handle_error_run_in_terminal_reverse_request(
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
client
@@ -594,7 +522,9 @@ async fn test_handle_start_debugging_reverse_request(
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
client.on_request::<dap::requests::Threads, _>(move |_, _| {
@@ -699,7 +629,9 @@ async fn test_shutdown_children_when_parent_session_shutdown(
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let parent_session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let parent_session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = parent_session.update(cx, |session, _| session.adapter_client().unwrap());
client.on_request::<dap::requests::Threads, _>(move |_, _| {
@@ -805,7 +737,9 @@ async fn test_shutdown_parent_session_if_all_children_are_shutdown(
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let parent_session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let parent_session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = parent_session.update(cx, |session, _| session.adapter_client().unwrap());
client.on_response::<StartDebugging, _>(move |_| {}).await;
@@ -924,7 +858,7 @@ async fn test_debug_panel_item_thread_status_reset_on_failure(
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |client| {
let session = debugger::test::start_debug_session(&project, cx, |client| {
client.on_request::<dap::requests::Initialize, _>(move |_, _| {
Ok(dap::Capabilities {
supports_step_back: Some(true),
@@ -932,6 +866,7 @@ async fn test_debug_panel_item_thread_status_reset_on_failure(
})
});
})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
@@ -1138,7 +1073,9 @@ async fn test_send_breakpoints_when_editor_has_been_saved(
.update(cx, |_, _, cx| worktree.read(cx).id())
.unwrap();
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
let buffer = project
@@ -1353,7 +1290,9 @@ async fn test_unsetting_breakpoints_on_clear_breakpoint_action(
editor.toggle_breakpoint(&actions::ToggleBreakpoint, window, cx);
});
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
let called_set_breakpoints = Arc::new(AtomicBool::new(false));
@@ -1419,7 +1358,7 @@ async fn test_debug_session_is_shutdown_when_attach_and_launch_request_fails(
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
start_debug_session(&workspace, cx, |client| {
let task = project::debugger::test::start_debug_session(&project, cx, |client| {
client.on_request::<dap::requests::Initialize, _>(|_, _| {
Err(ErrorResponse {
error: Some(Message {
@@ -1433,8 +1372,12 @@ async fn test_debug_session_is_shutdown_when_attach_and_launch_request_fails(
}),
})
});
})
.ok();
});
assert!(
task.await.is_err(),
"Session should failed to start if launch request fails"
);
cx.run_until_parked();


@@ -1,13 +1,16 @@
use crate::{
debugger_panel::DebugPanel,
tests::{active_debug_session_panel, init_test, init_test_workspace, start_debug_session},
tests::{active_debug_session_panel, init_test, init_test_workspace},
};
use dap::{
StoppedEvent,
requests::{Initialize, Modules},
};
use gpui::{BackgroundExecutor, TestAppContext, VisualTestContext};
use project::{FakeFs, Project};
use project::{
FakeFs, Project,
debugger::{self},
};
use std::sync::{
Arc,
atomic::{AtomicBool, AtomicI32, Ordering},
@@ -28,7 +31,7 @@ async fn test_module_list(executor: BackgroundExecutor, cx: &mut TestAppContext)
.unwrap();
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |client| {
let session = debugger::test::start_debug_session(&project, cx, |client| {
client.on_request::<Initialize, _>(move |_, _| {
Ok(dap::Capabilities {
supports_modules_request: Some(true),
@@ -36,6 +39,7 @@ async fn test_module_list(executor: BackgroundExecutor, cx: &mut TestAppContext)
})
});
})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());


@@ -1,7 +1,7 @@
use crate::{
debugger_panel::DebugPanel,
session::running::stack_frame_list::StackFrameEntry,
tests::{active_debug_session_panel, init_test, init_test_workspace, start_debug_session},
tests::{active_debug_session_panel, init_test, init_test_workspace},
};
use dap::{
StackFrame,
@@ -9,7 +9,7 @@ use dap::{
};
use editor::{Editor, ToPoint as _};
use gpui::{BackgroundExecutor, TestAppContext, VisualTestContext};
use project::{FakeFs, Project};
use project::{FakeFs, Project, debugger};
use serde_json::json;
use std::sync::Arc;
use unindent::Unindent as _;
@@ -50,7 +50,9 @@ async fn test_fetch_initial_stack_frames_and_go_to_stack_frame(
let project = Project::test(fs, [path!("/project").as_ref()], cx).await;
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
client.on_request::<Scopes, _>(move |_, _| Ok(dap::ScopesResponse { scopes: vec![] }));
@@ -227,7 +229,9 @@ async fn test_select_stack_frame(executor: BackgroundExecutor, cx: &mut TestAppC
});
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
client.on_request::<Threads, _>(move |_, _| {
@@ -491,7 +495,9 @@ async fn test_collapsed_entries(executor: BackgroundExecutor, cx: &mut TestAppCo
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
client.on_request::<Threads, _>(move |_, _| {


@@ -6,7 +6,7 @@ use std::sync::{
use crate::{
DebugPanel,
session::running::variable_list::{CollapseSelectedEntry, ExpandSelectedEntry},
tests::{active_debug_session_panel, init_test, init_test_workspace, start_debug_session},
tests::{active_debug_session_panel, init_test, init_test_workspace},
};
use collections::HashMap;
use dap::{
@@ -15,7 +15,7 @@ use dap::{
};
use gpui::{BackgroundExecutor, TestAppContext, VisualTestContext};
use menu::{SelectFirst, SelectNext, SelectPrevious};
use project::{FakeFs, Project};
use project::{FakeFs, Project, debugger};
use serde_json::json;
use unindent::Unindent as _;
use util::path;
@@ -54,7 +54,9 @@ async fn test_basic_fetch_initial_scope_and_variables(
})
.unwrap();
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
client.on_request::<dap::requests::Threads, _>(move |_, _| {
@@ -264,7 +266,9 @@ async fn test_fetch_variables_for_multiple_scopes(
.unwrap();
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
client.on_request::<dap::requests::Threads, _>(move |_, _| {
@@ -524,7 +528,9 @@ async fn test_keyboard_navigation(executor: BackgroundExecutor, cx: &mut TestApp
})
.unwrap();
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
client.on_request::<dap::requests::Threads, _>(move |_, _| {
@@ -1307,7 +1313,9 @@ async fn test_variable_list_only_sends_requests_when_rendering(
let workspace = init_test_workspace(&project, cx).await;
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
client.on_request::<dap::requests::Threads, _>(move |_, _| {
@@ -1552,7 +1560,9 @@ async fn test_it_fetches_scopes_variables_when_you_select_a_stack_frame(
.unwrap();
let cx = &mut VisualTestContext::from_window(*workspace, cx);
let session = start_debug_session(&workspace, cx, |_| {}).unwrap();
let session = debugger::test::start_debug_session(&project, cx, |_| {})
.await
.unwrap();
let client = session.update(cx, |session, _| session.adapter_client().unwrap());
client.on_request::<dap::requests::Threads, _>(move |_, _| {


@@ -15,7 +15,7 @@ use text::{AnchorRangeExt, Point};
use theme::ThemeSettings;
use ui::{
ActiveTheme, AnyElement, App, Context, IntoElement, ParentElement, SharedString, Styled,
Window, div,
Window, div, px,
};
use util::maybe;
@@ -166,8 +166,7 @@ impl DiagnosticBlock {
pub fn render_block(&self, editor: WeakEntity<Editor>, bcx: &BlockContext) -> AnyElement {
let cx = &bcx.app;
let status_colors = bcx.app.theme().status();
let max_width = bcx.em_width * 100.;
let max_width = px(600.);
let (background_color, border_color) = match self.severity {
DiagnosticSeverity::ERROR => (status_colors.error_background, status_colors.error),


@@ -760,7 +760,7 @@ async fn test_random_diagnostics_blocks(cx: &mut TestAppContext, mut rng: StdRng
// The mutated view may contain more than the reference view as
// we don't currently shrink excerpts when diagnostics were removed.
let mut ref_iter = reference_excerpts.lines().filter(|line| *line != "§ -----");
let mut ref_iter = reference_excerpts.lines();
let mut next_ref_line = ref_iter.next();
let mut skipped_block = false;
@@ -768,7 +768,7 @@ async fn test_random_diagnostics_blocks(cx: &mut TestAppContext, mut rng: StdRng
if let Some(ref_line) = next_ref_line {
if mut_line == ref_line {
next_ref_line = ref_iter.next();
} else if mut_line.contains('§') && mut_line != "§ -----" {
} else if mut_line.contains('§') {
skipped_block = true;
}
}


@@ -303,8 +303,6 @@ actions!(
GoToPreviousHunk,
GoToImplementation,
GoToImplementationSplit,
GoToNextChange,
GoToPreviousChange,
GoToPreviousDiagnostic,
GoToTypeDefinition,
GoToTypeDefinitionSplit,

File diff suppressed because it is too large.


@@ -27,8 +27,8 @@ use util::ResultExt;
use crate::hover_popover::{hover_markdown_style, open_markdown_url};
use crate::{
CodeActionProvider, CompletionId, CompletionItemKind, CompletionProvider, DisplayRow, Editor,
EditorStyle, ResolvedTasks,
CodeActionProvider, CompletionId, CompletionProvider, DisplayRow, Editor, EditorStyle,
ResolvedTasks,
actions::{ConfirmCodeAction, ConfirmCompletion},
split_words, styled_runs_for_code_label,
};
@@ -657,63 +657,6 @@ impl CompletionsMenu {
)
}
pub fn sort_matches(matches: &mut Vec<SortableMatch<'_>>, query: Option<&str>) {
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord)]
enum MatchTier<'a> {
WordStartMatch {
sort_score_int: Reverse<i32>,
sort_snippet: Reverse<i32>,
sort_text: Option<&'a str>,
sort_key: (usize, &'a str),
},
OtherMatch {
sort_score: Reverse<OrderedFloat<f64>>,
},
}
// Our goal here is to intelligently sort completion suggestions. We want to
// balance the raw fuzzy match score with hints from the language server.
//
// The primary sort uses the fuzzy score to split matches into two buckets,
// a strong one and a weak one. Within each bucket, matches are then compared
// by further criteria such as snippet status, LSP hints, kind, and label text.
//
const FUZZY_THRESHOLD: f64 = 0.1317;
let query_start_lower = query
.and_then(|q| q.chars().next())
.and_then(|c| c.to_lowercase().next());
matches.sort_unstable_by_key(|mat| {
let score = mat.string_match.score;
let is_other_match = query_start_lower
.map(|query_char| {
!split_words(&mat.string_match.string).any(|word| {
word.chars()
.next()
.and_then(|c| c.to_lowercase().next())
.map_or(false, |word_char| word_char == query_char)
})
})
.unwrap_or(false);
if is_other_match {
let sort_score = Reverse(OrderedFloat(score));
MatchTier::OtherMatch { sort_score }
} else {
let sort_score_int = Reverse(if score >= FUZZY_THRESHOLD { 1 } else { 0 });
let sort_snippet = Reverse(if mat.is_snippet { 1 } else { 0 });
MatchTier::WordStartMatch {
sort_score_int,
sort_snippet,
sort_text: mat.sort_text,
sort_key: mat.sort_key,
}
}
});
}
pub async fn filter(&mut self, query: Option<&str>, executor: BackgroundExecutor) {
let mut matches = if let Some(query) = query {
fuzzy::match_strings(
@@ -738,45 +681,85 @@ impl CompletionsMenu {
.collect()
};
let mut additional_matches = Vec::new();
// Deprioritize all candidates where the query's start does not match the start of any word in the candidate
if let Some(query) = query {
if let Some(query_start) = query.chars().next() {
let (primary, secondary) = matches.into_iter().partition(|string_match| {
split_words(&string_match.string).any(|word| {
// Check that the first codepoint of the word, lowercased, matches the
// first codepoint of the query, lowercased.
word.chars()
.flat_map(|codepoint| codepoint.to_lowercase())
.zip(query_start.to_lowercase())
.all(|(word_cp, query_cp)| word_cp == query_cp)
})
});
matches = primary;
additional_matches = secondary;
}
}
let completions = self.completions.borrow_mut();
if self.sort_completions {
let completions = self.completions.borrow();
matches.sort_unstable_by_key(|mat| {
// We want to strike a balance here between what the language server tells us
// to sort by (the sort_text) and what the "obvious" good matches are (e.g. when you type
// `Creat` and there is a local variable called `CreateComponent`).
// To do that, we put all matches into two buckets:
// - Strong matches
// - Weak matches
// Strong matches are the ones with a high fuzzy-matcher score (the "obvious" matches)
// and the weak matches are the rest.
//
// For the strong matches, we sort by our fuzzy-finder score first; for the weak
// matches, we prefer the language server's sort_text first.
//
// The thinking behind that: we want to show strong matches first, in order of relevance (fuzzy score).
// The rest of the matches (weak) can be sorted as the language server expects.
let mut sortable_items: Vec<SortableMatch<'_>> = matches
.into_iter()
.map(|string_match| {
let completion = &completions[string_match.candidate_id];
#[derive(PartialEq, Eq, PartialOrd, Ord)]
enum MatchScore<'a> {
Strong {
score: Reverse<OrderedFloat<f64>>,
sort_text: Option<&'a str>,
sort_key: (usize, &'a str),
},
Weak {
sort_text: Option<&'a str>,
score: Reverse<OrderedFloat<f64>>,
sort_key: (usize, &'a str),
},
}
let is_snippet = matches!(
&completion.source,
CompletionSource::Lsp { lsp_completion, .. }
if lsp_completion.kind == Some(CompletionItemKind::SNIPPET)
);
let completion = &completions[mat.candidate_id];
let sort_key = completion.sort_key();
let sort_text =
if let CompletionSource::Lsp { lsp_completion, .. } = &completion.source {
lsp_completion.sort_text.as_deref()
} else {
None
};
let score = Reverse(OrderedFloat(mat.score));
let sort_text =
if let CompletionSource::Lsp { lsp_completion, .. } = &completion.source {
lsp_completion.sort_text.as_deref()
} else {
None
};
let sort_key = completion.sort_key();
SortableMatch {
string_match,
is_snippet,
if mat.score >= 0.2 {
MatchScore::Strong {
score,
sort_text,
sort_key,
}
})
.collect();
Self::sort_matches(&mut sortable_items, query);
matches = sortable_items
.into_iter()
.map(|sortable| sortable.string_match)
.collect();
} else {
MatchScore::Weak {
sort_text,
score,
sort_key,
}
}
});
}
drop(completions);
matches.extend(additional_matches);
*self.entries.borrow_mut() = matches;
self.selected_item = 0;
@@ -785,14 +768,6 @@ impl CompletionsMenu {
}
}
#[derive(Debug)]
pub struct SortableMatch<'a> {
pub string_match: StringMatch,
pub is_snippet: bool,
pub sort_text: Option<&'a str>,
pub sort_key: (usize, &'a str),
}
#[derive(Clone)]
pub struct AvailableCodeAction {
pub excerpt_id: ExcerptId,
@@ -803,7 +778,7 @@ pub struct AvailableCodeAction {
#[derive(Clone)]
pub struct CodeActionContents {
tasks: Option<Rc<ResolvedTasks>>,
actions: Option<Rc<[AvailableCodeAction]>>,
pub(crate) actions: Option<Rc<[AvailableCodeAction]>>,
}
impl CodeActionContents {

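Taken together, the hunks above fold the standalone `sort_matches` helper back into `filter`: candidates whose words do not start with the query's first character are split off into `additional_matches`, and the remaining matches are sorted into a strong bucket (high fuzzy score, ordered by score) and a weak bucket (ordered by the language server's `sort_text`). A standalone sketch of that flow with plain types in place of `StringMatch` and `Completion` (the 0.2 threshold mirrors the diff; the simplified `split_words`, the field names, and the final label tiebreak are illustrative):

```rust
/// Stand-in for a completion candidate: fuzzy score plus the language
/// server's `sort_text` hint (field names are illustrative).
struct Match<'a> {
    label: &'a str,
    score: f64,
    sort_text: Option<&'a str>,
}

/// Simplified stand-in for the editor's `split_words` helper.
fn split_words(s: &str) -> impl Iterator<Item = &str> {
    s.split(|c: char| !c.is_alphanumeric()).filter(|w| !w.is_empty())
}

/// Move candidates whose words all start differently from the query's first
/// character into a secondary bucket, to be appended after the primary ones.
fn partition_by_word_start<'a>(
    candidates: Vec<Match<'a>>,
    query: &str,
) -> (Vec<Match<'a>>, Vec<Match<'a>>) {
    let Some(query_start) = query.chars().next() else {
        return (candidates, Vec::new());
    };
    candidates.into_iter().partition(|candidate| {
        split_words(candidate.label).any(|word| {
            // First codepoint of the word, lowercased, vs. the query's, lowercased.
            word.chars()
                .flat_map(|c| c.to_lowercase())
                .zip(query_start.to_lowercase())
                .all(|(word_cp, query_cp)| word_cp == query_cp)
        })
    })
}

/// Two-bucket sort: strong matches (score >= threshold) are ordered by fuzzy
/// score, weak matches defer to the server's `sort_text`.
fn sort_two_buckets(matches: &mut [Match<'_>], threshold: f64) {
    matches.sort_by(|a, b| {
        let (a_strong, b_strong) = (a.score >= threshold, b.score >= threshold);
        b_strong
            .cmp(&a_strong) // strong bucket first
            .then_with(|| {
                if a_strong && b_strong {
                    b.score.total_cmp(&a.score) // higher fuzzy score wins
                } else {
                    a.sort_text.cmp(&b.sort_text)
                }
            })
            .then_with(|| a.label.cmp(b.label)) // deterministic tiebreak
    });
}

fn main() {
    let candidates = vec![
        Match { label: "slice", score: 0.05, sort_text: Some("d") },
        Match { label: "Range", score: 0.12, sort_text: Some("a") },
        Match { label: "return", score: 0.30, sort_text: Some("d") },
        Match { label: "r", score: 0.90, sort_text: Some("b") },
    ];

    let (mut primary, additional) = partition_by_word_start(candidates, "r");
    sort_two_buckets(&mut primary, 0.2);

    // Deprioritized candidates go after the sorted primary matches.
    let ordered: Vec<_> = primary
        .iter()
        .chain(additional.iter())
        .map(|m| m.label)
        .collect();
    assert_eq!(ordered, ["r", "return", "Range", "slice"]);
}
```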

@@ -808,14 +808,10 @@ impl DisplaySnapshot {
// used by line_mode selections and tries to match vim behavior
pub fn expand_to_line(&self, range: Range<Point>) -> Range<Point> {
let new_start = MultiBufferPoint::new(range.start.row, 0);
let new_end = if range.end.column > 0 {
MultiBufferPoint::new(
range.end.row,
self.buffer_snapshot.line_len(MultiBufferRow(range.end.row)),
)
} else {
range.end
};
let new_end = MultiBufferPoint::new(
range.end.row,
self.buffer_snapshot.line_len(MultiBufferRow(range.end.row)),
);
new_start..new_end
}
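The new `expand_to_line` always snaps the end of the range to the end of its line, where the old version left a range that already ended at column 0 untouched. A tiny worked example of the new behavior, using a plain row/column pair in place of `MultiBufferPoint` (the names here are illustrative):

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Point {
    row: u32,
    column: u32,
}

/// Expand a range to whole lines: the start snaps to column 0, and the end
/// now always snaps to the end of its line, even when it sits at column 0.
fn expand_to_line(start: Point, end: Point, line_len: impl Fn(u32) -> u32) -> (Point, Point) {
    (
        Point { row: start.row, column: 0 },
        Point { row: end.row, column: line_len(end.row) },
    )
}

fn main() {
    let lines = ["fn main() {", "    println!(\"hi\");", "}"];
    let line_len = |row: u32| lines[row as usize].len() as u32;

    let (start, end) = expand_to_line(
        Point { row: 1, column: 4 },
        Point { row: 2, column: 0 },
        line_len,
    );
    assert_eq!(start, Point { row: 1, column: 0 });
    // The old behavior kept the end at (2, 0); the new one covers line 2 too.
    assert_eq!(end, Point { row: 2, column: 1 });
}
```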


@@ -39,8 +39,6 @@ pub mod scroll;
mod selections_collection;
pub mod tasks;
#[cfg(test)]
mod code_completion_tests;
#[cfg(test)]
mod editor_tests;
#[cfg(test)]
@@ -696,52 +694,6 @@ pub trait Addon: 'static {
fn to_any(&self) -> &dyn std::any::Any;
}
/// A set of caret positions, registered when the editor was edited.
pub struct ChangeList {
changes: Vec<Vec<Anchor>>,
/// Currently "selected" change.
position: Option<usize>,
}
impl ChangeList {
pub fn new() -> Self {
Self {
changes: Vec::new(),
position: None,
}
}
/// Moves to the next change in the list (based on the direction given) and returns the caret positions for the next change.
/// If it reaches the end of the list in that direction, it keeps returning that boundary change until called with a different direction.
pub fn next_change(&mut self, count: usize, direction: Direction) -> Option<&[Anchor]> {
if self.changes.is_empty() {
return None;
}
let prev = self.position.unwrap_or(self.changes.len());
let next = if direction == Direction::Prev {
prev.saturating_sub(count)
} else {
(prev + count).min(self.changes.len() - 1)
};
self.position = Some(next);
self.changes.get(next).map(|anchors| anchors.as_slice())
}
/// Adds a new change to the list, resetting the change list position.
pub fn push_to_change_list(&mut self, pop_state: bool, new_positions: Vec<Anchor>) {
self.position.take();
if pop_state {
self.changes.pop();
}
self.changes.push(new_positions.clone());
}
pub fn last(&self) -> Option<&[Anchor]> {
self.changes.last().map(|anchors| anchors.as_slice())
}
}
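For reference, the `ChangeList` being removed here behaves like a clamped cursor over a stack of recorded caret positions. A generic re-implementation over plain values instead of editor `Anchor`s, mirroring the logic above (illustrative only):

```rust
#[derive(PartialEq)]
enum Direction {
    Prev,
    Next,
}

struct ChangeList<T> {
    changes: Vec<Vec<T>>,
    /// Currently "selected" change.
    position: Option<usize>,
}

impl<T> ChangeList<T> {
    fn new() -> Self {
        Self { changes: Vec::new(), position: None }
    }

    /// Step `count` entries in `direction`; clamps at either end, so repeated
    /// calls keep returning the boundary entry until the direction changes.
    fn next_change(&mut self, count: usize, direction: Direction) -> Option<&[T]> {
        if self.changes.is_empty() {
            return None;
        }
        let prev = self.position.unwrap_or(self.changes.len());
        let next = if direction == Direction::Prev {
            prev.saturating_sub(count)
        } else {
            (prev + count).min(self.changes.len() - 1)
        };
        self.position = Some(next);
        self.changes.get(next).map(Vec::as_slice)
    }

    /// Record a new change, optionally replacing the most recent entry, and
    /// reset the navigation position.
    fn push_to_change_list(&mut self, pop_state: bool, new_positions: Vec<T>) {
        self.position.take();
        if pop_state {
            self.changes.pop();
        }
        self.changes.push(new_positions);
    }
}

fn main() {
    let mut list = ChangeList::new();
    list.push_to_change_list(false, vec![3_usize]);
    list.push_to_change_list(false, vec![7_usize]);

    // Walk backwards through the recorded changes...
    assert_eq!(list.next_change(1, Direction::Prev), Some(&[7_usize][..]));
    assert_eq!(list.next_change(1, Direction::Prev), Some(&[3_usize][..]));
    // ...and clamp at the oldest entry.
    assert_eq!(list.next_change(1, Direction::Prev), Some(&[3_usize][..]));
    assert_eq!(list.next_change(1, Direction::Next), Some(&[7_usize][..]));
}
```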
/// Zed's primary implementation of text input, allowing users to edit a [`MultiBuffer`].
///
/// See the [module level documentation](self) for more information.
@@ -907,7 +859,6 @@ pub struct Editor {
serialize_folds: Task<()>,
mouse_cursor_hidden: bool,
hide_mouse_mode: HideMouseMode,
pub change_list: ChangeList,
}
#[derive(Copy, Clone, Debug, PartialEq, Eq, Default)]
@@ -1700,7 +1651,6 @@ impl Editor {
hide_mouse_mode: EditorSettings::get_global(cx)
.hide_mouse
.unwrap_or_default(),
change_list: ChangeList::new(),
};
if let Some(breakpoints) = this.breakpoint_store.as_ref() {
this._subscriptions
@@ -1714,8 +1664,8 @@ impl Editor {
this._subscriptions.push(cx.subscribe_in(
&cx.entity(),
window,
|editor, _, e: &EditorEvent, window, cx| match e {
EditorEvent::ScrollPositionChanged { local, .. } => {
|editor, _, e: &EditorEvent, window, cx| {
if let EditorEvent::SelectionsChanged { local } = e {
if *local {
let new_anchor = editor.scroll_manager.anchor();
let snapshot = editor.snapshot(window, cx);
@@ -1725,33 +1675,8 @@ impl Editor {
new_anchor.offset,
);
});
editor.hide_signature_help(cx, SignatureHelpHiddenBy::Escape);
}
}
EditorEvent::Edited { .. } => {
if !vim_enabled(cx) {
let (map, selections) = editor.selections.all_adjusted_display(cx);
let pop_state = editor
.change_list
.last()
.map(|previous| {
previous.len() == selections.len()
&& previous.iter().enumerate().all(|(ix, p)| {
p.to_display_point(&map).row()
== selections[ix].head().row()
})
})
.unwrap_or(false);
let new_positions = selections
.into_iter()
.map(|s| map.display_point_to_anchor(s.head(), Bias::Left))
.collect();
editor
.change_list
.push_to_change_list(pop_state, new_positions);
}
}
_ => (),
},
));
@@ -5120,21 +5045,44 @@ impl Editor {
CodeActionsItem::Task(task_source_kind, resolved_task) => {
match resolved_task.task_type() {
task::TaskType::Script => workspace.update(cx, |workspace, cx| {
workspace.schedule_resolved_task(
workspace::tasks::schedule_resolved_task(
workspace,
task_source_kind,
resolved_task,
false,
window,
cx,
);
Some(Task::ready(Ok(())))
}),
task::TaskType::Debug(_) => {
workspace.update(cx, |workspace, cx| {
workspace.schedule_debug_task(resolved_task, window, cx);
});
Some(Task::ready(Ok(())))
task::TaskType::Debug(debug_args) => {
if debug_args.locator.is_some() {
workspace.update(cx, |workspace, cx| {
workspace::tasks::schedule_resolved_task(
workspace,
task_source_kind,
resolved_task,
false,
cx,
);
});
return Some(Task::ready(Ok(())));
}
if let Some(project) = self.project.as_ref() {
project
.update(cx, |project, cx| {
project.start_debug_session(
resolved_task.resolved_debug_adapter_config().unwrap(),
cx,
)
})
.detach_and_log_err(cx);
Some(Task::ready(Ok(())))
} else {
Some(Task::ready(Ok(())))
}
}
}
}
@@ -6831,12 +6779,12 @@ impl Editor {
resolved.reveal = reveal_strategy;
workspace
.update_in(cx, |workspace, window, cx| {
workspace.schedule_resolved_task(
.update(cx, |workspace, cx| {
workspace::tasks::schedule_resolved_task(
workspace,
task_source_kind,
resolved_task,
false,
window,
cx,
);
})
@@ -13387,48 +13335,6 @@ impl Editor {
.or_else(|| snapshot.buffer_snapshot.diff_hunk_before(Point::MAX))
}
fn go_to_next_change(
&mut self,
_: &GoToNextChange,
window: &mut Window,
cx: &mut Context<Self>,
) {
if let Some(selections) = self
.change_list
.next_change(1, Direction::Next)
.map(|s| s.to_vec())
{
self.change_selections(Some(Autoscroll::fit()), window, cx, |s| {
let map = s.display_map();
s.select_display_ranges(selections.iter().map(|a| {
let point = a.to_display_point(&map);
point..point
}))
})
}
}
fn go_to_previous_change(
&mut self,
_: &GoToPreviousChange,
window: &mut Window,
cx: &mut Context<Self>,
) {
if let Some(selections) = self
.change_list
.next_change(1, Direction::Prev)
.map(|s| s.to_vec())
{
self.change_selections(Some(Autoscroll::fit()), window, cx, |s| {
let map = s.display_map();
s.select_display_ranges(selections.iter().map(|a| {
let point = a.to_display_point(&map);
point..point
}))
})
}
}
fn go_to_line<T: 'static>(
&mut self,
position: Anchor,
@@ -17837,7 +17743,11 @@ impl Editor {
.and_then(|e| e.to_str())
.map(|a| a.to_string()));
let vim_mode = vim_enabled(cx);
let vim_mode = cx
.global::<SettingsStore>()
.raw_user_settings()
.get("vim_mode")
== Some(&serde_json::Value::Bool(true));
let edit_predictions_provider = all_language_settings(file, cx).edit_predictions.provider;
let copilot_enabled = edit_predictions_provider
@@ -18283,13 +18193,6 @@ impl Editor {
}
}
fn vim_enabled(cx: &App) -> bool {
cx.global::<SettingsStore>()
.raw_user_settings()
.get("vim_mode")
== Some(&serde_json::Value::Bool(true))
}
// Consider user intent and default settings
fn choose_completion_range(
completion: &Completion,


@@ -12,8 +12,8 @@ use crate::{
use buffer_diff::{BufferDiff, DiffHunkSecondaryStatus, DiffHunkStatus, DiffHunkStatusKind};
use futures::StreamExt;
use gpui::{
BackgroundExecutor, DismissEvent, SemanticVersion, TestAppContext, UpdateGlobal,
VisualTestContext, WindowBounds, WindowOptions, div,
BackgroundExecutor, SemanticVersion, TestAppContext, UpdateGlobal, VisualTestContext,
WindowBounds, WindowOptions, div,
};
use indoc::indoc;
use language::{
@@ -10704,7 +10704,7 @@ async fn test_completion(cx: &mut TestAppContext) {
.confirm_completion(&ConfirmCompletion::default(), window, cx)
.unwrap()
});
cx.assert_editor_state("editor.clobberˇ");
cx.assert_editor_state("editor.closeˇ");
handle_resolve_completion_request(&mut cx, None).await;
apply_additional_edits.await.unwrap();
}
@@ -11266,6 +11266,76 @@ async fn test_completion_page_up_down_keys(cx: &mut TestAppContext) {
});
}
#[gpui::test]
async fn test_completion_sort(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorLspTestContext::new_rust(
lsp::ServerCapabilities {
completion_provider: Some(lsp::CompletionOptions {
trigger_characters: Some(vec![".".to_string()]),
..Default::default()
}),
..Default::default()
},
cx,
)
.await;
cx.lsp
.set_request_handler::<lsp::request::Completion, _, _>(move |_, _| async move {
Ok(Some(lsp::CompletionResponse::Array(vec![
lsp::CompletionItem {
label: "Range".into(),
sort_text: Some("a".into()),
..Default::default()
},
lsp::CompletionItem {
label: "r".into(),
sort_text: Some("b".into()),
..Default::default()
},
lsp::CompletionItem {
label: "ret".into(),
sort_text: Some("c".into()),
..Default::default()
},
lsp::CompletionItem {
label: "return".into(),
sort_text: Some("d".into()),
..Default::default()
},
lsp::CompletionItem {
label: "slice".into(),
sort_text: Some("d".into()),
..Default::default()
},
])))
});
cx.set_state("");
cx.executor().run_until_parked();
cx.update_editor(|editor, window, cx| {
editor.show_completions(
&ShowCompletions {
trigger: Some("r".into()),
},
window,
cx,
);
});
cx.executor().run_until_parked();
cx.update_editor(|editor, _, _| {
if let Some(CodeContextMenu::Completions(menu)) = editor.context_menu.borrow_mut().as_ref()
{
assert_eq!(
completion_menu_entries(&menu),
&["r", "ret", "Range", "return"]
);
} else {
panic!("expected completion menu to be open");
}
});
}
#[gpui::test]
async fn test_as_is_completions(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -13994,7 +14064,7 @@ async fn test_completions_in_languages_with_extra_word_characters(cx: &mut TestA
{
assert_eq!(
completion_menu_entries(&menu),
&["bg-blue", "bg-red", "bg-yellow"]
&["bg-red", "bg-blue", "bg-yellow"]
);
} else {
panic!("expected completion menu to be open");
@@ -19482,64 +19552,6 @@ println!("5");
});
}
#[gpui::test]
async fn test_hide_mouse_context_menu_on_modal_opened(cx: &mut TestAppContext) {
struct EmptyModalView {
focus_handle: gpui::FocusHandle,
}
impl EventEmitter<DismissEvent> for EmptyModalView {}
impl Render for EmptyModalView {
fn render(&mut self, _: &mut Window, _: &mut Context<'_, Self>) -> impl IntoElement {
div()
}
}
impl Focusable for EmptyModalView {
fn focus_handle(&self, _cx: &App) -> gpui::FocusHandle {
self.focus_handle.clone()
}
}
impl workspace::ModalView for EmptyModalView {}
fn new_empty_modal_view(cx: &App) -> EmptyModalView {
EmptyModalView {
focus_handle: cx.focus_handle(),
}
}
init_test(cx, |_| {});
let fs = FakeFs::new(cx.executor());
let project = Project::test(fs, [], cx).await;
let workspace = cx.add_window(|window, cx| Workspace::test_new(project.clone(), window, cx));
let buffer = cx.update(|cx| MultiBuffer::build_simple("hello world!", cx));
let cx = &mut VisualTestContext::from_window(*workspace.deref(), cx);
let editor = cx.new_window_entity(|window, cx| {
Editor::new(
EditorMode::full(),
buffer,
Some(project.clone()),
window,
cx,
)
});
workspace
.update(cx, |workspace, window, cx| {
workspace.add_item_to_active_pane(Box::new(editor.clone()), None, true, window, cx);
})
.unwrap();
editor.update_in(cx, |editor, window, cx| {
editor.open_context_menu(&OpenContextMenu, window, cx);
assert!(editor.mouse_context_menu.is_some());
});
workspace
.update(cx, |workspace, window, cx| {
workspace.toggle_modal(window, cx, |_, cx| new_empty_modal_view(cx));
})
.unwrap();
cx.read(|cx| {
assert!(editor.read(cx).mouse_context_menu.is_none());
});
}
fn empty_range(row: usize, column: usize) -> Range<DisplayPoint> {
let point = DisplayPoint::new(DisplayRow(row as u32), column as u32);
point..point

Some files were not shown because too many files have changed in this diff.