Compare commits

...

144 Commits

Author SHA1 Message Date
Jason Mancuso
a533e622fa clean up documentation, fix links, add to summary 2024-08-20 13:59:41 -04:00
Jason Mancuso
cd7073bd19 docs: add documentation about context servers 2024-08-20 12:35:15 -04:00
Bennet Bo Fenner
0c980cde74 docs: Cleanup assistant configuration documentation (#16526)
Release Notes:

- N/A
2024-08-20 17:40:19 +02:00
Nate Butler
9951df7709 Update some docs keybindings to new format (#16524)
Updates some of the docs pages to the new keybinding format.

Release Notes:

- N/A
2024-08-20 11:23:40 -04:00
Marshall Bowers
936466e02c docs: Reword "Extensibility" section of slash command docs (#16521)
This PR rewords the "Extensibility" section of the slash command docs.

Release Notes:

- N/A
2024-08-20 10:13:53 -04:00
Marshall Bowers
1eb1e16954 docs: Fix possessive "its" typos (#16516)
This PR fixes a number of typos where possessive "its" wasn't being used
properly.

Release Notes:

- N/A
2024-08-20 09:59:29 -04:00
Suhun Han
b67404323c workspace: Improve error handling when dropping a file that cannot be opened into the workspace pane (#15613)
This PR improves the UX when dropping a file that cannot be opened
into the workspace pane. Previously, such an error produced no message
at all, which could be confusing for users. Additionally, the pane was
split even though the file failed to open.

Here's a screen recording demonstrating the previous/updated behavior:


https://github.com/user-attachments/assets/cfdf3488-9464-4568-b16a-9b87718bd729

Changes:

- It now displays an error message if a file cannot be opened.
- Updated the logic to first try to open the file. The pane splits only
if the file opening process is successful.

Release Notes:

- Improved error handling when opening files in the workspace pane. An
error message will now be displayed if the file cannot be opened.
- Fixed an issue where unnecessary pane splitting occurred when a file
failed to open.
2024-08-20 15:05:59 +02:00
Bennet Bo Fenner
c251a50e41 assistant: Update docs (#16515)
- Fix links on assistant page to subpages
- Mention the configuration view in the `configuration.md` and document
more settings

Release Notes:

- N/A

---------

Co-authored-by: Piotr <piotr@zed.dev>
2024-08-20 14:42:10 +02:00
Kirill Bulatov
e482fcde5b Fall back to FindAllReferences if GoToDefinition has not navigated (#16512)
Follow-up of https://github.com/zed-industries/zed/pull/9243 

Release Notes:

- N/A

---------

Co-authored-by: Alex Kladov <aleksey.kladov@gmail.com>
2024-08-20 14:56:19 +03:00
Kyle Kelley
f185269d03 repl: Upgrade runtimelib (#16499)
Upgrades runtimelib to bring in some fixes from
https://github.com/runtimed/runtimed/pull/114 and
https://github.com/runtimed/runtimed/pull/113 that work towards
addressing issues interfacing with the Julia kernel.

Release Notes:

- N/A
2024-08-19 22:39:17 -07:00
Nate Butler
1f0dc8b754 Expand assistant docs (#16501)
This PR significantly expands the assistant documentation, breaking it
out into sections, adding examples and further documenting features.

This PR introduces a convention in docs for swapping keybindings for mac
vs linux:

`<kbd>cmd-enter|ctrl-enter</kbd>`

In the above example, the first will be shown for mac, the second for
linux or windows.
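
The convention could be implemented as a small post-processing step over the rendered docs. A minimal sketch (the function name and exact regex are assumptions for illustration, not the script added in this PR):

```python
import re

def resolve_keybindings(html: str, platform: str) -> str:
    """Replace <kbd>mac-binding|linux-binding</kbd> with the binding
    for the viewer's platform. Single-binding <kbd> tags are untouched."""
    def pick(match: re.Match) -> str:
        mac, other = match.group(1), match.group(2)
        binding = mac if platform == "macos" else other
        return f"<kbd>{binding}</kbd>"
    return re.sub(r"<kbd>([^|<]+)\|([^<]+)</kbd>", pick, html)

source = "Press <kbd>cmd-enter|ctrl-enter</kbd> to submit."
print(resolve_keybindings(source, "macos"))  # Press <kbd>cmd-enter</kbd> to submit.
print(resolve_keybindings(source, "linux"))  # Press <kbd>ctrl-enter</kbd> to submit.
```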

TODO:

- [ ] Fix table style (for `/assistant/configuration`)
- [x] Add script to swap keybindings based on platform
- It should take in this format: [`cmd-n` (mac)|`ctrl-n`(linux)] and
return just the correct binding for the viewer's platform.
- [ ] Add image/video assets (non-blocking)

Release Notes:

- Updated assistant documentation
2024-08-19 23:50:09 -04:00
Marshall Bowers
395a68133d Add Postgrest to Docker Compose (#16498)
This PR adds two Postgrest containers—one for the app database and one
for the LLM database—to the Docker Compose cluster.

Also fixed an issue where `postgres_app.conf` and `postgres_llm.conf`
had been switched.

Release Notes:

- N/A
2024-08-19 20:50:45 -04:00
Marshall Bowers
77c08fade5 elixir: Bump to v0.0.8 (#16495)
This PR bumps the Elixir extension to v0.0.8.

Changes:

- #16382

Release Notes:

- N/A
2024-08-19 19:15:41 -04:00
Marshall Bowers
f7f7cd5bb9 repl: Don't prefix free variables with _ (#16494)
This PR is a small refactor to remove the leading `_` for some free
variables, as this unintentionally marks them as unused to the compiler.

While the fields on the struct _are_ unused, the free variables should
participate in usage tracking, as we want to make sure they get stored
on the struct.

Release Notes:

- N/A
2024-08-19 19:15:27 -04:00
Bennet Bo Fenner
6f5674691c assistant: Set default provider to zed.dev (#16454)
Do NOT merge until tomorrow

Release Notes:

- N/A

---------

Co-authored-by: Thorsten <thorsten@zed.dev>
2024-08-19 19:00:38 -04:00
Stanislav Alekseev
8993a9f2ee elixir: Make two more files required by lexical executable (#16382)
I still haven't fixed building dev extensions with rust managed by nix,
so I'd appreciate testing this for me

Release Notes:

- N/A
2024-08-19 18:48:05 -04:00
Joseph T Lyons
9f66f12f7b v0.151.x dev 2024-08-19 18:40:19 -04:00
Peter Tripp
3eb5488c63 Update Terms and Conditions (#16478)
- Update Zed Terms of Use:
  - Rename from 'EULA' / 'Terms and Conditions'
  - Rename 'Zed Network Based Service' to 'Zed Service'
  - 3.3.2 Usage Data (formerly Telemetry Data)
    - Add examples of 'Usage Data'
    - Add link to https://zed.dev/docs/telemetry - Explain 'telemetry ID' and user linkage
  - 3.3.5 Privacy Policy - Add privacy policy reference - Add link to https://zed.dev/privacy-policy/
  - 5. OWNERSHIP
    - Move "You retain all right, title and interest..." from 3.3 Customer Data
    - Additional note that customers retain Intellectual Property rights
  - 9. Third Party Services - Add link to https://zed.dev/third-party-terms
- Add Privacy Policy
- Add Subprocessors
- Add Third Party Terms
- Update script/terms/terms.rtf for DMG bundle
2024-08-19 17:08:46 -04:00
Max Brunsfeld
30bfa56a24 Avoid double message header in new contexts, don't expand default prompt (#16490)
Follow-up to https://github.com/zed-industries/zed/pull/16471

* Don't expand the default prompt by default, since it looks strange in
the expanded state
* Don't create two `You` headers by default. Just insert a blank line
after the default prompt.

Release Notes:

- N/A
2024-08-19 12:54:03 -07:00
Roy Williams
0042c24d3c Simplify logic & add UI affordances to show model cache status (#16395)
Release Notes:

- Adds UI affordances to the assistant panel to show which messages have
been cached
- Migrate cache invalidation to be based on `has_edits_since_in_range`
to be smarter and more selective about when to invalidate the cache and
when to fetch.

<img width="310" alt="Screenshot 2024-08-16 at 11 19 23 PM"
src="https://github.com/user-attachments/assets/4ee2d111-2f55-4b0e-b944-50c4f78afc42">

<img width="580" alt="Screenshot 2024-08-18 at 10 05 16 PM"
src="https://github.com/user-attachments/assets/17630a60-7b78-421c-ae39-425246638a12">


I had originally added the lightning bolt on every message and only
added the tooltip warning about editing prior messages on the first
anchor, but thought it looked too busy, so I settled on just annotating
the last anchor.
2024-08-19 12:06:14 -07:00
Marshall Bowers
971db5c6f6 ci: Set the ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON for builds (#16486)
This PR updates the various GitHub Actions that build Zed binaries to
set the `ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON` environment variable
from the corresponding secret.

Release Notes:

- N/A
2024-08-19 14:47:20 -04:00
Max Brunsfeld
b5bd8a5c5d Add logic for closed beta LLM models (#16482)
Release Notes:

- N/A

---------

Co-authored-by: Marshall <marshall@zed.dev>
2024-08-19 11:09:52 -07:00
Nate Butler
41fc6d0885 Make providers more clear in model selector (#16480)
Make providers more clear in model selector

Before:

![CleanShot 2024-08-19 at 13 20
36@2x](https://github.com/user-attachments/assets/5b43fa27-4aca-446a-a035-bc8bcb0d9b0e)

After:

![CleanShot 2024-08-19 at 13 20
05@2x](https://github.com/user-attachments/assets/cb961405-b573-42fe-80e1-f3c2ce828ea4)


Release Notes:

- N/A
2024-08-19 13:38:19 -04:00
Bennet Bo Fenner
90897707c3 assistant: Add imports in a single area when using workflows (#16355)
Co-Authored-by: Kirill <kirill@zed.dev>

Release Notes:

- N/A

---------

Co-authored-by: Kirill <kirill@zed.dev>
Co-authored-by: Thorsten <thorsten@zed.dev>
2024-08-19 19:01:45 +02:00
Piotr Osiewicz
7fbea39566 ui: Dismiss popovers when clicking on trigger button (#16476)
Release Notes:

- Clicking on an already-deployed popover menu trigger now hides the
popover menu.
2024-08-19 18:48:57 +02:00
Thorsten Ball
037cf1393c assistant: Undo workflow step when buffer is discarded (#16465)
This fixes a weird bug:

1. Use `/workflow` in assistant
2. Have it generate a step that modifies a file
3. Either (a) select the step in the assistant and have it auto-insert
newlines (b) select "Transform" to have the step applied
4. Close the modified file in the editor ("Discard")
5. Re-open the file
6. BUG: the changes made by assistant are still there!

The reason for the bug is that the assistant keeps references to buffers
and they're not closed/reloaded when closed/reopened.

To fix the bug we now rollback the applied workflow steps when
discarding a buffer.

(This does *not* yet fix the issue where a workflow step inserts a new
buffer into the project/worktree that does not show up on the file
system yet but in `/file` and hangs around until Zed is closed.)


Release Notes:

- N/A

Co-authored-by: Bennet <bennet@zed.dev>
2024-08-19 18:42:49 +02:00
Kirill Bulatov
69aae2037d Display default prompts more elaborately (#16471)
Show them under `User` role instead of a `System` one, and insert them
expanded.

Release Notes:

- N/A
2024-08-19 18:44:52 +03:00
Piotr Osiewicz
bac8e81e73 assistant: Add the "create your command" item (#16467)
This PR adds an extra item to the slash command picker that links users to the doc that teaches how to create a custom one.

Release Notes:

- N/A

---------

Co-authored-by: Danilo Leal <67129314+danilo-leal@users.noreply.github.com>
2024-08-19 12:29:16 -03:00
Marshall Bowers
0bea4d5fa6 theme: Change autocomplete value for ui_font_features and buffer_font_features (#16466)
This PR changes the default value used when autocompleting the
`ui_font_features` and `buffer_font_features` settings from `null` to `{}`.
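
In Zed's `settings.json`, the difference looks roughly like this (a sketch; `calt` is shown only as an example OpenType feature):

```jsonc
{
  // New autocomplete default: an empty feature map rather than null.
  "ui_font_features": {},
  // Individual OpenType features can then be toggled, e.g. disable ligatures:
  "buffer_font_features": { "calt": false }
}
```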

Release Notes:

- N/A
2024-08-19 10:55:25 -04:00
renovate[bot]
4dec7806cb Update Rust crate heed to v0.20.5 (#16464)
[![Mend
Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [heed](https://togithub.com/Kerollmops/heed) | workspace.dependencies
| patch | `0.20.4` -> `0.20.5` |

---

### Release Notes

<details>
<summary>Kerollmops/heed (heed)</summary>

###
[`v0.20.5`](https://togithub.com/meilisearch/heed/releases/tag/v0.20.5):
🛁

[Compare
Source](https://togithub.com/Kerollmops/heed/compare/v0.20.4...v0.20.5)

<p align="center"><img width="280px"
src="https://raw.githubusercontent.com/meilisearch/heed/main/assets/heed-pigeon-logo.png"></a></p>
<h1 align="center" >heed</h1>

##### What's Changed
* fix function docs (clippy warnings) by @antonilol in
https://github.com/meilisearch/heed/pull/273
* fix custom_key_cmp_wrapper being able to unwind to C code (ub) by
@antonilol in https://github.com/meilisearch/heed/pull/275

##### New Contributors
* @antonilol made their first contribution in
https://github.com/meilisearch/heed/pull/273

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-19 10:26:14 -04:00
Marshall Bowers
de41c151c8 collab: Add is_staff to upstream rate limit spans (#16463)
This PR adds the `is_staff` field to the `upstream rate limit` spans.

Since we use different API keys for staff vs non-staff, it will be
useful to break down the rate limits accordingly.

Release Notes:

- N/A
2024-08-19 10:15:25 -04:00
Piotr Osiewicz
56f1ab9459 assistant: Remove "Resolving" text for step resolution and use Transform instead (#16461)
That way, the user can click on "Transform" straight away and get it applied
immediately when it's resolved.



https://github.com/user-attachments/assets/08c99804-3841-4eba-a5eb-7066a9f45b47


TODO:
- [x] Tie "Send" button at the bottom into the same behavior

Release Notes:

- N/A
2024-08-19 15:17:04 +02:00
Piotr Osiewicz
911112d94a assistant: Fix toggling slash command menu from toolbar menu (#16459)
Release Notes:

- N/A
2024-08-19 14:47:05 +02:00
Thorsten Ball
e68b2d5ecc assistant panel: Disable send button on config error (#16455)
Release Notes:

- N/A

Co-authored-by: Bennet <bennet@zed.dev>
2024-08-19 11:44:56 +02:00
Thorsten Ball
f651333896 assistant panel: Show if env var with API key is set (#16453)
This makes it easier to debug why resetting a key doesn't work. We now
show when the key is set via an env var and if so, we disable the
reset-key button and instead give instructions.

![screenshot-2024-08-19-11 22
05@2x](https://github.com/user-attachments/assets/6c75dc82-cb61-4661-9647-f77fca8fdf41)


Release Notes:

- N/A

Co-authored-by: Bennet <bennet@zed.dev>
2024-08-19 11:34:58 +02:00
Bennet Bo Fenner
14fa4abce4 assistant: Fix edge case where "Open new context" button would do nothing (#16452)
Co-Authored-by: Thorsten <thorsten@zed.dev>

Release Notes:

- N/A

Co-authored-by: Thorsten <thorsten@zed.dev>
2024-08-19 11:07:04 +02:00
Ryan Hawkins
8a320668ed Add support for GPT-4o in Copilot Chat (#16446)
Release Notes:
- Added support for GPT-4o for Copilot Chat.
2024-08-19 09:03:06 +02:00
Mikayla Maki
86efde4b76 Fixed bugs in workflow step preview (#16445)
Release Notes:

- N/A
2024-08-18 22:18:04 -07:00
Nathan Sobo
43e13df9f3 Add a /perplexity slash command in an extension (#16438)
Release Notes:

- N/A
2024-08-18 16:34:55 -06:00
Nathan Sobo
b9176fe4bb Add custom icon for Anthropic hosted models (#16436)
This commit adds a custom icon for Anthropic hosted models.


![CleanShot 2024-08-18 at 15 40
38@2x](https://github.com/user-attachments/assets/d467ccab-9628-4258-89fc-782e0d4a48d4)
![CleanShot 2024-08-18 at 15 40
34@2x](https://github.com/user-attachments/assets/7efaff9c-6a58-47ba-87ea-e0fe0586fedc)


- Adding a new SVG icon for Anthropic hosted models.
  - The new icon is located at: `assets/icons/ai_anthropic_hosted.svg`
- Updating the LanguageModel trait to include an optional icon method
- Implementing the icon method for CloudModel to return the custom icon
for Anthropic hosted models
- Updating the UI components to use the model-specific icon when
available
- Adding a new IconName variant for the Anthropic hosted icon

We should change the non-hosted icon in some small way to distinguish it
from the hosted version. I duplicated the path for now so we can
hopefully add it for the next release.

Release Notes:

- N/A
2024-08-18 16:07:15 -06:00
Nathan Sobo
11753914d7 Add a setting to show time to first window draw and frames per second in status bar (#16422)
I want to showcase Zed's performance via videos, and this seemed like a
good way to demonstrate it.


https://github.com/user-attachments/assets/f4a5fabc-efe7-4b48-9ba5-719882fdc856

Release Notes:

- On macOS, you can now set `performance.show_in_status_bar: true` in
your settings to show the time to first window draw on startup, followed
by the current FPS of the containing window's renderer.
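
In `settings.json`, the setting would look roughly like this (nesting inferred from the dotted key in the release note):

```jsonc
{
  "performance": {
    // macOS only: show time-to-first-draw, then the window's current FPS.
    "show_in_status_bar": true
  }
}
```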

---------

Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Kirill Bulatov <kirill@zed.dev>
Co-authored-by: David Soria Parra <167242713+dsp-ant@users.noreply.github.com>
Co-authored-by: Danny Hua <danny.hua@hey.com>
2024-08-18 15:22:19 -06:00
Danny Hua
6f93b42ecb docs: Fix example extension directory structure (#16424)
Add language-specific subdirectory in example directory structure, since
that's the requisite structure - see `extensions/languages.md`

Release Notes:

- N/A
2024-08-18 07:40:08 -04:00
David Soria Parra
10a996cbc4 context_servers: Fix argument handling (#16402) 2024-08-17 20:04:34 -07:00
Kirill Bulatov
5e6e465294 Show correct number of characters selected (#16420) 2024-08-18 02:24:32 +03:00
Max Brunsfeld
8841d6faad Avoid redundant newline insertion after file command (#16419)
Release Notes:

- Fixed an issue where an extra newline was inserted after running a
`/file` command in the assistant.
2024-08-17 15:10:10 -07:00
Nathan Sobo
c9c5eef8f2 Improve dev experience for built-in prompts (#16413)
When launching Zed from the CLI via `cargo run`, we'll always load
prompt templates from the repo.

This restores behavior that I reverted last night in #16403.

Also, I've improved the `script/prompts link/unlink` workflow for
overriding prompts of your production copy of Zed. Zed now detects when
the overrides directory is created or removed, and does the right thing.
You can link and unlink repeatedly without restarting Zed.

Release Notes:

- N/A
2024-08-17 12:28:53 -06:00
Danilo Leal
7c268d0c6d assistant: Remove meta description from quote selection tooltip (#16412)
The original idea was for the keybinding to be within the description, but given it's already inline with the title, I figure we don't need this anymore—cleaning it up a bit!

--- 

Release Notes:

- N/A
2024-08-17 13:30:32 -03:00
Danilo Leal
e4a591dcbd workflow: Add button to open the step view (#16387)
This PR adds an icon button that appears as you hover over the step header, which allows users to visit the step view.

---

Release Notes:

- N/A
2024-08-17 13:06:34 -03:00
Nathan Sobo
07d5e22cbe Revert changes to inline assist indentation logic and prompt (#16403)
This PR reverts #16145 and subsequent changes.

This reverts commit a515442a36.

We still have issues with our approach to indentation in Python
unfortunately, but this feels like a safer equilibrium than where we
were.

Release Notes:

- Returned to our previous prompt for inline assist transformations,
since recent changes were introducing issues.
2024-08-17 02:24:55 -06:00
Joseph T. Lyons
ebecd7e65f Fix issue with fetching users in seed script (#16393)
Release Notes:

- N/A
2024-08-16 21:51:51 -04:00
Joseph T Lyons
18f0626e08 Update assistant docs to mention inline works in the terminal 2024-08-16 21:13:02 -04:00
Marshall Bowers
3d997e5fd6 collab: Add is_staff to spans (#16389)
This PR adds the `is_staff` field to our LLM spans so that we can
distinguish between staff and non-staff traffic.

Release Notes:

- N/A
2024-08-16 18:42:44 -04:00
Max Brunsfeld
1b1070e0f7 Add tracing needed for LLM rate limit dashboards (#16388)
Release Notes:

- N/A

---------

Co-authored-by: Marshall <marshall@zed.dev>
2024-08-16 17:52:31 -04:00
Joseph T. Lyons
9ef3306f55 Add feature flags to seed script (#16385)
Release Notes:

- N/A
2024-08-16 17:08:44 -04:00
Kyle Kelley
0fdc9d0f05 context_servers: Log errors from detached context server tasks (#16377)
Added logging to several detached tasks that previously failed silently
when the context server wasn't in compliance.

Release Notes:

- N/A
2024-08-16 13:50:19 -07:00
Nathan Sobo
907d76208d Allow display name of custom Anthropic models to be customized (#16376)
Also added some docs for our settings.

Release Notes:

- N/A
2024-08-16 14:02:37 -06:00
Kirill Bulatov
ae9e6a9daa Allow rerunning tasks with unknown termination status (#16374) 2024-08-16 23:00:20 +03:00
Danilo Leal
e36e605c96 assistant: Fine-tune error toast design (#16373)
Just some super small padding and absolute-positioning tweaks. 

---

Release Notes:

- N/A
2024-08-16 16:56:52 -03:00
Marshall Bowers
35cd397a40 collab: Allow enabling feature flags for all users (#16372)
This PR adds a new `enabled_for_all` column to the `feature_flags` table
to allow enabling a feature flag for all users.

Release Notes:

- N/A
2024-08-16 15:17:03 -04:00
Danilo Leal
2180dbdb50 assistant: Add action footer and refine slash command popover (#16360)
- [x] Put the slash command popover on the footer
- [x] Refine the popover (change it to a picker)
- [x] Add more options dropdown on the assistant's toolbar
- [x] Add quote selection button on the footer

---

Release Notes:

- N/A

---------

Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
Co-authored-by: Nate Butler <iamnbutler@gmail.com>
Co-authored-by: Kirill Bulatov <mail4score@gmail.com>
2024-08-16 16:07:42 -03:00
Mikayla Maki
23d56a1a84 Add configuration flow for inline assist button (#16369)
This adds a configuration prompt when using the inline assist button in
the editor.

Release Notes:

- N/A
2024-08-16 11:21:30 -07:00
Marshall Bowers
a9441879c3 collab: Fix writing LLM rate limit events to Clickhouse (#16367)
This PR fixes the writing of LLM rate limit events to Clickhouse.

We had a typo in the table name: `llm_rate_limits` instead of
`llm_rate_limit_events`.

I also extracted a helper function to write to Clickhouse so we can use
it anywhere we need to.

Release Notes:

- N/A
2024-08-16 14:03:34 -04:00
Nate Butler
6cfbb54ede Switch icon (#16363)
Updates instances of the `MagicWand` icon to our more recent `Sparkle` /
`ZedAssistant` icon in places where we reference inline assist.

Before:

![CleanShot 2024-08-16 at 13 41
58@2x](https://github.com/user-attachments/assets/67af27a2-a09b-44bb-a8af-2bafcbbd9038)

After:
![CleanShot 2024-08-16 at 13 48
34@2x](https://github.com/user-attachments/assets/229ccc8e-8a93-44c1-abe0-7b6e22ca93e2)


Release Notes:

- Updated inline assist icon in the editor & terminal.
2024-08-16 14:01:56 -04:00
Marshall Bowers
7a5acc0b0c collab: Rework model name checks (#16365)
This PR reworks how we do checks for model names in the LLM service.

We now normalize the model names using the models defined in the
database.

Release Notes:

- N/A
2024-08-16 13:54:28 -04:00
Joseph T Lyons
463ac7f5e4 Correct H1 text for assistant documentation 2024-08-16 13:52:41 -04:00
Joseph T. Lyons
ee27114b35 Remove redundant assistant content (#16364)
Release Notes:

- N/A
2024-08-16 13:42:16 -04:00
Joseph T. Lyons
ebac9a7342 Combine assistant documentation (#16362)
Release Notes:

- N/A
2024-08-16 13:37:54 -04:00
Mikayla Maki
455850505f Fix more bugs in files (#16241)
Fixes:
- [x] an issue where directories would only match by prefix, causing
both a directory and a file to be matched if in the same directory
- [x] An issue where you could not continue a file completion when
selecting a directory, as `tab` on a file would always run the command.
This effectively disabled directory sub queries.
- [x] Inconsistent rendering of files and directories in the slash
command

Release Notes:

- N/A

---------

Co-authored-by: max <max@zed.dev>
2024-08-16 10:09:38 -07:00
jvmncs
a3a6ebcf31 Small fixes to content generation prompt (#16359)
Fixed the output format section of the content_prompt.hbs template
getting rendered away by handlebars. Also fixed a leftover hardcoded
"Rust" in the rewrite section snippet. (follow-up to #16333)

Release Notes:

- N/A
2024-08-16 12:31:38 -04:00
Roy Williams
b4f5f5024e Support 8192 output tokens for Claude Sonnet 3.5 (#16358)
Release Notes:

- Added support for 8192 output tokens from Claude Sonnet 3.5
(https://x.com/alexalbert__/status/1812921642143900036)
2024-08-16 11:47:39 -04:00
Kirill Bulatov
7eab57a264 Add a task for running zed in "local release" mode (#16357)
Release Notes:

- N/A
2024-08-16 17:33:04 +03:00
Kirill Bulatov
f1a2638d29 Do not enable copilot for always read-only editors (#16356)
Release Notes:

- N/A
2024-08-16 17:22:47 +03:00
Kirill Bulatov
0fe22f2a48 After streaming generation is over, show a regular, batch diff in the file altered (#16350)
Release Notes:

- N/A
2024-08-16 15:31:02 +03:00
Kyle Kelley
f7f5a25584 repl: Apply border for error output on left (#16334) 2024-08-16 00:31:46 -05:00
张小白
40d97fd346 windows: Fix missing title bar on prompt library (#16302)
Closes #16297

It seems that we currently can't draw a custom title bar. I have
checked the `title_bar` crate; it seems to be `zed`-only.

Before:

![Screenshot 2024-08-16
004350](https://github.com/user-attachments/assets/e11aa0bb-9d3e-47d5-b488-d3b8220158cc)

After:

![Screenshot 2024-08-16
004235](https://github.com/user-attachments/assets/028b4eb2-c878-4ea7-87e3-22817caefa00)


Release Notes:

- N/A
2024-08-15 22:23:11 -06:00
Nathan Sobo
ad44b459cd Improve content generation prompt to reduce over-generation (#16333)
I focused on cases where we're inserting doc comments or annotations
above symbols.

I added 5 new examples to the content generation prompt, covering
various scenarios:

1. Inserting documentation for a Rust struct
2. Writing docstrings for a Python class
3. Adding comments to a TypeScript method
4. Adding a derive attribute to a Rust struct
5. Adding a decorator to a Python class

These examples demonstrate how to handle different languages and common
tasks like adding documentation, attributes, and decorators.

To improve context integration, I've made the following changes:

1. Added a `transform_context_range` that includes 3 lines before and
after the transform range
2. Introduced `rewrite_section_prefix` and `rewrite_section_suffix` to
provide more context around the section being rewritten
3. Updated the prompt template to include this additional context in a
separate code snippet

Release Notes:

- Reduced instances of over-generation when inserting docs or
annotations above a symbol.
2024-08-15 22:20:11 -06:00
Kyle Kelley
bac39d7743 assistant: Only push text content if not empty with image content (#16270)
If you submit an image with empty space above it and text below, it will
fail with this error:


![image](https://github.com/user-attachments/assets/a4a2265e-815f-48b5-b09e-e178fce82ef7)

Now instead it fails with an error about needing a message.

<img width="640" alt="image"
src="https://github.com/user-attachments/assets/72b267eb-b288-40a5-a829-750121ff16cc">

It will, however, now work with text above and empty text below the image.

Release Notes:

- Improved conformance with Anthropic Images in Chat Completions API
2024-08-15 22:38:52 -05:00
Roy Williams
46fb917e02 Implement Anthropic prompt caching (#16274)
Release Notes:

- Adds support for Prompt Caching in Anthropic. For models that support
it this can dramatically lower cost while improving performance.
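
For reference, Anthropic's prompt caching is opted into per content block via a `cache_control` marker. A sketch of a request (field names per Anthropic's Messages API documentation, not this PR's diff; the model ID is illustrative):

```jsonc
{
  "model": "claude-3-5-sonnet-20240620",
  "max_tokens": 1024,
  "system": [
    {
      "type": "text",
      "text": "Long, stable system prompt or shared context goes here...",
      // Marks this prefix as cacheable across subsequent requests.
      "cache_control": { "type": "ephemeral" }
    }
  ],
  "messages": [{ "role": "user", "content": "Hello" }]
}
```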
2024-08-15 22:21:06 -05:00
Max Brunsfeld
09b6e3f2a6 Improve workflow step view (#16329)
* Improve the tab title: give it an icon, and indicate the step index.
* Display the line number ranges that the symbols resolve to.
* Don't open duplicate tabs for the same step

Release Notes:

- N/A
2024-08-15 17:45:23 -07:00
Max Brunsfeld
364a58a262 Move context tests into their own file (#16327)
This makes it easier to use the outline view to navigate `context.rs`,
and reduces the indentation level of the tests.

Release Notes:

- N/A
2024-08-15 17:14:05 -07:00
Max Brunsfeld
c896ff292c Remove workflow inspector, clean up workflow code (#16325)
Now that there's a dedicated, user-facing view for each workflow step,
we don't need the inspector functionality. This PR also cleans up some
naming around workflow steps and step resolutions.

Release Notes:

- N/A
2024-08-15 16:47:29 -07:00
Kyle Kelley
da2bfbd29f repl: Scale the text_style font_size and line_height (#16308)
Replaces #16273.

Release Notes:

- repl: Fixed scaling of stdout/stderr line heights

---------

Co-authored-by: Mikayla <mikayla@zed.dev>
Co-authored-by: Nate Butler <iamnbutler@gmail.com>
2024-08-15 18:46:36 -05:00
Joseph T. Lyons
0b407164d0 Update assistant docs (#16324)
Release Notes:

- N/A
2024-08-15 18:42:13 -04:00
Marshall Bowers
b151241d84 assistant: Improve the empty state for the prompt library (#16320)
This PR improves the empty state of the prompt library.

The right-hand side of the library is now dedicated to an empty state
that guides the user to create their first prompt.

Additionally, the message in the picker now reads "No prompts." when
there are no prompts.

#### No prompts

<img width="1136" alt="Screenshot 2024-08-15 at 6 20 26 PM"
src="https://github.com/user-attachments/assets/f9af2b5d-c4d3-4e2c-9ba2-f17e89f19bb7">

#### No prompts that match the search

<img width="1136" alt="Screenshot 2024-08-15 at 5 55 07 PM"
src="https://github.com/user-attachments/assets/2cd4ff9b-958d-4bd0-90d3-dca62b1a91a0">

Release Notes:

- N/A
2024-08-15 18:28:17 -04:00
Marshall Bowers
f65b2b9a2d assistant: Fix toggling the model selector via keybind (#16319)
This PR restores the ability to toggle the model selector via a keybind
after it was lost in #15693.

Release Notes:

- Restored the ability to toggle the model selector in the Assistant via
a keybinding (Preview only).
2024-08-15 17:45:25 -04:00
Max Brunsfeld
776442f3ae Add a workflow step resolution view (#16315)
You can now click on a step header (the words `Step 3`, etc) to open a
new tab containing a dedicated view for the resolution of that step.
This view looks similar to a context editor, and has sections for the
step input, the streaming tool output, and the interpreted results.

Hitting `cmd-enter` in this view re-resolves the step.


https://github.com/user-attachments/assets/64d82cdb-e70f-4204-8697-b30df5a645d5



Release Notes:

- N/A

---------

Co-authored-by: Nathan <nathan@zed.dev>
2024-08-15 14:16:58 -07:00
Marshall Bowers
583959f82a collab: Add support for reading boolean values from .env.toml (#16317)
This PR adds support for reading boolean values from `.env.toml`, since
it wasn't supported previously.

Release Notes:

- N/A
2024-08-15 17:07:17 -04:00
Marshall Bowers
9233418cb8 collab: Attach GitHub login to LLM spans (#16316)
This PR updates the LLM service to include the GitHub login on its
spans.

We need to pass this information through on the LLM token, so it will
temporarily be `None` until this change is deployed and new tokens have
been issued.

Release Notes:

- N/A
2024-08-15 17:06:20 -04:00
Marshall Bowers
df20bae80e danger: Don't look for #NNNN, as it's not specific enough (#16313)
This PR updates the regex we use to search for issues to not search for
`#NNNN`, as it's not specific enough.

It currently catches issue numbers from other repos, which are then
linked to random Zed issues/PRs that happen to have the same number:

<img width="935" alt="Screenshot 2024-08-15 at 3 50 29 PM"
src="https://github.com/user-attachments/assets/b779e503-3027-43e2-b355-e81d8d094694">

As well as catching PRs:

<img width="924" alt="Screenshot 2024-08-15 at 3 48 59 PM"
src="https://github.com/user-attachments/assets/6c2f7594-9234-4454-97da-5a33a1844892">

Given that:

1. We can't distinguish any given `#NNNN` as an issue _and_ can't ensure
it belongs to the Zed repo
2. Any issue/PR referenced as `#NNNN` will already create a backlink

It seems that looking for these is causing more noise than signal.
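
The ambiguity can be sketched with a qualified-reference pattern (a hypothetical illustration of the distinction, not Danger's actual regex):

```python
import re

# Bare "#NNNN" is ambiguous: it may refer to an issue or PR in any repo.
# A fully qualified "owner/repo#NNNN" pins the reference to one repository.
QUALIFIED = re.compile(r"\b([\w.-]+)/([\w.-]+)#(\d+)\b")

text = "Closes zed-industries/zed#16297, unrelated to #1234."
print(QUALIFIED.findall(text))  # [('zed-industries', 'zed', '16297')]
```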

Release Notes:

- N/A
2024-08-15 15:57:42 -04:00
Marshall Bowers
cb423bcb6f Remove tooltip_text from extension manifests (#16312)
This PR removes the `tooltip_text` from the extension manifests.

We stopped reading this value in #16306, as it wasn't being used, so we
don't need to include it in the manifest anymore.

Release Notes:

- N/A
2024-08-15 15:54:53 -04:00
Marshall Bowers
5e05821d18 collab: Attach user_id to LLM spans (#16311)
This PR updates the LLM service to attach the user ID to the spans.

Release Notes:

- N/A
2024-08-15 15:49:12 -04:00
Kirill Bulatov
ff83e5b55a Improve workflow suggestion steps and debug info (#16309)
Release Notes:

- N/A

---------

Co-authored-by: Nathan Sobo <nathan@zed.dev>
Co-authored-by: Bennet Bo Fenner <bennet@zed.dev>
2024-08-15 22:46:19 +03:00
Max Brunsfeld
6b7664ef4a Fix bugs preventing non-staff users from using LLM service (#16307)
- db deadlock in GetLlmToken for non-staff users
- typo in allowed model name for non-staff users

Release Notes:

- N/A

---------

Co-authored-by: Marshall <marshall@zed.dev>
Co-authored-by: Joseph <joseph@zed.dev>
2024-08-15 11:21:19 -07:00
Marshall Bowers
931883aca9 extension: Remove tooltip_text from SlashCommandManifestEntry (#16306)
This PR removes the `tooltip_text` field from
`SlashCommandManifestEntry`s.

The `tooltip_text` is currently only used to set the `menu_text` on a
slash command, which is only used for featured slash commands.

Since slash commands from extensions are not currently able to be
featured, we don't need extension authors to provide this field in the
manifest.

This is a backwards-compatible change.

Release Notes:

- N/A
2024-08-15 13:25:55 -04:00
Marshall Bowers
e982ff7b9e zed_extension_api: Start a list of pending changes (#16305)
This PR starts a list of pending changes for the Zed extension API.

We'll want to keep this list updated as we note things that we want to
change in the next version of the extension API. This will help with
batching breaking changes together so that we're not constantly creating
new versions of the extension API for one-off changes.

Release Notes:

- N/A
2024-08-15 13:10:46 -04:00
Bennet Bo Fenner
0b3e5b2649 assistant: Support retrying empty workflow step (#16301)
Co-Authored-by: Nathan <nathan@zed.dev>
Co-Authored-by: Kirill <kirill@zed.dev>

Release Notes:

- N/A

Co-authored-by: Nathan <nathan@zed.dev>
Co-authored-by: Kirill <kirill@zed.dev>
2024-08-15 19:05:30 +02:00
Marshall Bowers
7434b56e68 docs: Link to slash-commands-example extension (#16304)
This PR updates the slash command extension docs to link to the
`slash-commands-example` extension, for a quick start.

Release Notes:

- N/A
2024-08-15 13:00:31 -04:00
Marshall Bowers
b764174e8b docs: Add docs for defining slash commands in extensions (#16303)
This PR adds docs for defining slash commands within extensions.

Release Notes:

- N/A
2024-08-15 12:50:30 -04:00
Kirill Bulatov
c45adce2e3 Run slash commands both on enter and on argument completion that requires it (#16283)
Release Notes:

- N/A
2024-08-15 19:36:30 +03:00
Marshall Bowers
5a30e29848 Add example extension to showcase slash commands (#16300)
This PR adds an example extension to showcase how to write slash
commands in an extension.

Release Notes:

- N/A
2024-08-15 12:26:13 -04:00
Marshall Bowers
03b843ebf3 live_kit_client: Suppress clippy::arc_with_non_send_sync (#16298)
This PR suppresses the
[`clippy::arc_with_non_send_sync`](https://rust-lang.github.io/rust-clippy/master/index.html#/arc_with_non_send_sync),
as there were some warnings that would—only sometimes—show up when
running Clippy.

Release Notes:

- N/A
2024-08-15 11:46:43 -04:00
David Soria Parra
02ea6ac845 context_servers: Add initial implementation (#16103)
This commit proposes the addition of "context servers" and the
underlying protocol (model context protocol). Context servers allow
simple definition of slash commands in another language, running
locally on the user's machine. This aims to make it quick to prototype
new commands, and provides a way to add personal (or company-wide)
customizations to the assistant panel without having to maintain an
extension. We can use this to reuse our existing codebase, with
authenticators, etc., and easily have it provide context into the
assistant panel.

As such it occupies a different design space than extensions, which I
think are more aimed towards long-term, well-maintained pieces of code
that can be easily distributed.

It's implemented as a central crate for easy reusability across the
codebase and to easily hook into the assistant panel at all points.

Design wise there are a few pieces:
1. client.rs: A simple JSON-RPC client talking over stdio to a spawned
server. This is
very close to how LSP work and likely there could be a combined client
down the line.
2. types.rs: Serialization and deserialization client for the underlying
model context protocol.
3. protocol.rs: Handling the session between client and server.
4. manager.rs: Manages settings and adding and deleting servers from a
central pool.
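The stdio transport in piece 1 is easiest to picture with a tiny sketch. Everything below — the method name, the parameter fields, the line-delimited framing — is illustrative only, not the actual model context protocol:

```python
import json

def make_request(request_id, method, params):
    # Frame a JSON-RPC 2.0 request as a single line of JSON,
    # as the client would write it to the spawned server's stdin.
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    })

def handle_request(line):
    # What a toy server's read loop might do for each incoming line:
    # dispatch on the method name and answer with a matching id.
    request = json.loads(line)
    if request["method"] == "run_slash_command":
        result = {"output": request["params"]["input"].upper()}
    else:
        result = {"error": "unknown method"}
    return json.dumps({"jsonrpc": "2.0", "id": request["id"], "result": result})

request = make_request(1, "run_slash_command", {"input": "hello"})
print(json.loads(handle_request(request))["result"]["output"])  # HELLO
```

In the real crate the server side lives in a spawned process and the client reads its stdout, but the request/response shape is the same idea.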

A server can be defined in the settings.json as:

```json
"context_servers": [
  {"id": "test", "executable": "python", "args": ["-m", "context_server"]}
]
```

## Quick Example
A quick example of what a theoretical backend side can look like. With
roughly 100 lines of code (nicely generated by Claude) and a bit of
decorator magic (200 lines in total), one can come up with a framework
that makes it as easy as:

```python
@context_server.slash_command(name="rot13", description="Perform a rot13 transformation")
@context_server.argument(name="input", type=str, help="String to rot13")
async def rot13(input: str) -> str:
    return ''.join(chr((ord(c) - 97 + 13) % 26 + 97) if c.isalpha() else c for c in input.lower())
```

to define a new slash_command.

## Todo:
 - Allow context servers to be defined in workspace settings.
 - Allow passing env variables to context_servers


Release Notes:

- N/A

---------

Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
2024-08-15 10:49:30 -04:00
renovate[bot]
d54818fd9e Update 2428392/gh-truncate-string-action action to v1.4.0 (#16263)
[![Mend
Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
|
[2428392/gh-truncate-string-action](https://togithub.com/2428392/gh-truncate-string-action)
| action | minor | `v1.3.0` -> `v1.4.0` |

---

### Release Notes

<details>
<summary>2428392/gh-truncate-string-action
(2428392/gh-truncate-string-action)</summary>

###
[`v1.4.0`](https://togithub.com/2428392/gh-truncate-string-action/releases/tag/v1.4.0)

[Compare
Source](https://togithub.com/2428392/gh-truncate-string-action/compare/v1.3.0...v1.4.0)

#### What's Changed

- feat: update nodejs to version 20 by
[@&#8203;psilore](https://togithub.com/psilore) in
[https://github.com/2428392/gh-truncate-string-action/pull/11](https://togithub.com/2428392/gh-truncate-string-action/pull/11)

#### New Contributors

- [@&#8203;psilore](https://togithub.com/psilore) made their first
contribution in
[https://github.com/2428392/gh-truncate-string-action/pull/11](https://togithub.com/2428392/gh-truncate-string-action/pull/11)

**Full Changelog**:
https://github.com/2428392/gh-truncate-string-action/compare/v1.3.0...v1.4.0

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-15 10:38:06 -04:00
renovate[bot]
add0f0dbe5 Update Rust crate async-stripe to 0.38 (#16264)
[![Mend
Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [async-stripe](https://togithub.com/arlyon/async-stripe) |
workspace.dependencies | minor | `0.37` -> `0.38` |

---

### Release Notes

<details>
<summary>arlyon/async-stripe (async-stripe)</summary>

###
[`v0.38.1`](https://togithub.com/arlyon/async-stripe/blob/HEAD/CHANGELOG.md#0381-2024-08-06)

[Compare
Source](https://togithub.com/arlyon/async-stripe/compare/v0.38.0...v0.38.1)

##### Bug Fixes

- [#&#8203;578](https://togithub.com/arlyon/async-stripe/issues/578)
allow arbitrary strings for priceId
([a16bc6e](a16bc6e80c))

###
[`v0.38.0`](https://togithub.com/arlyon/async-stripe/blob/HEAD/CHANGELOG.md#0380-2024-07-31)

[Compare
Source](https://togithub.com/arlyon/async-stripe/compare/v0.37.3...v0.38.0)

##### Features

- add support for TestClock operations
([d792798](d792798c3f)),
closes
[#&#8203;574](https://togithub.com/arlyon/async-stripe/issues/574)

####
[0.37.3](https://togithub.com/arlyon/async-stripe/compare/v0.37.2...v0.37.3)
(2024-07-29)

##### Bug Fixes

- linting issue for Rust 1.80
([9232213](9232213c06))

####
[0.37.2](https://togithub.com/arlyon/async-stripe/compare/v0.37.1...v0.37.2)
(2024-07-23)

##### Bug Fixes

- rtx id prefix
([67ea232](67ea2325ba))

####
[0.37.1](https://togithub.com/arlyon/async-stripe/compare/v0.37.0...v0.37.1)
(2024-05-24)

##### Bug Fixes

- Leftover clippy warnings
([888307d](888307d23d))
- Run clippy on openapi generator
([c63c197](c63c197e7c))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-15 09:50:59 -04:00
renovate[bot]
506b7198c2 Update Rust crate serde_json to v1.0.125 (#16253)
[![Mend
Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [serde_json](https://togithub.com/serde-rs/json) | dependencies |
patch | `1.0.122` -> `1.0.125` |
| [serde_json](https://togithub.com/serde-rs/json) |
workspace.dependencies | patch | `1.0.122` -> `1.0.125` |

---

### Release Notes

<details>
<summary>serde-rs/json (serde_json)</summary>

###
[`v1.0.125`](https://togithub.com/serde-rs/json/releases/tag/1.0.125)

[Compare
Source](https://togithub.com/serde-rs/json/compare/v1.0.124...1.0.125)

- Speed up \uXXXX parsing and improve handling of unpaired surrogates
when deserializing to bytes
([#&#8203;1172](https://togithub.com/serde-rs/json/issues/1172),
[#&#8203;1175](https://togithub.com/serde-rs/json/issues/1175), thanks
[@&#8203;purplesyringa](https://togithub.com/purplesyringa))

###
[`v1.0.124`](https://togithub.com/serde-rs/json/releases/tag/v1.0.124)

[Compare
Source](https://togithub.com/serde-rs/json/compare/v1.0.123...v1.0.124)

- Fix a bug in processing string escapes in big-endian architectures
([#&#8203;1173](https://togithub.com/serde-rs/json/issues/1173), thanks
[@&#8203;purplesyringa](https://togithub.com/purplesyringa))

###
[`v1.0.123`](https://togithub.com/serde-rs/json/releases/tag/v1.0.123)

[Compare
Source](https://togithub.com/serde-rs/json/compare/v1.0.122...v1.0.123)

- Optimize string parsing by applying SIMD-within-a-register: 30.3%
improvement on
[twitter.json](https://togithub.com/miloyip/nativejson-benchmark/blob/v1.0.0/data/twitter.json)
from 613 MB/s to 799 MB/s
([#&#8203;1161](https://togithub.com/serde-rs/json/issues/1161), thanks
[@&#8203;purplesyringa](https://togithub.com/purplesyringa))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these
updates again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-15 09:22:05 -04:00
Marshall Bowers
e10f2d8deb danger: Use a regular message for notice about GitHub Issue links (#16287)
This PR makes it so Danger will use a regular message—as opposed to a
warning—for notices about GitHub issue links.

There are still some false-positives getting flagged, so showing a
warning is a bit too aggressive.

Release Notes:

- N/A
2024-08-15 09:21:50 -04:00
renovate[bot]
3c8d890702 Update Rust crate heed to v0.20.4 (#16250)
[![Mend
Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [heed](https://togithub.com/Kerollmops/heed) | workspace.dependencies
| patch | `0.20.3` -> `0.20.4` |

---

### Release Notes

<details>
<summary>Kerollmops/heed (heed)</summary>

###
[`v0.20.4`](https://togithub.com/meilisearch/heed/releases/tag/v0.20.4):
🛁

[Compare
Source](https://togithub.com/Kerollmops/heed/compare/v0.20.3...v0.20.4)

<p align="center"><img width="280px"
src="https://raw.githubusercontent.com/meilisearch/heed/main/assets/heed-pigeon-logo.png"></a></p>
<h1 align="center" >heed</h1>

##### What's Changed

*   Use features that actually exists by @&#8203;Kerollmops in
    [https://github.com/meilisearch/heed/pull/270](https://togithub.com/meilisearch/heed/pull/270)
*   Bump lmdb submodules commit by @&#8203;zanaviska in
    [https://github.com/meilisearch/heed/pull/269](https://togithub.com/meilisearch/heed/pull/269)

##### New Contributors

*   @&#8203;zanaviska made their first contribution in
    [https://github.com/meilisearch/heed/pull/269](https://togithub.com/meilisearch/heed/pull/269)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-15 08:59:10 -04:00
renovate[bot]
05f7583444 Update aws-sdk-rust monorepo (#16257)
[![Mend
Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [aws-config](https://togithub.com/smithy-lang/smithy-rs) |
dependencies | patch | `1.5.4` -> `1.5.5` |
| [aws-sdk-s3](https://togithub.com/awslabs/aws-sdk-rust) | dependencies
| minor | `1.42.0` -> `1.43.0` |

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

👻 **Immortal**: This PR will be recreated if closed unmerged. Get
[config help](https://togithub.com/renovatebot/renovate/discussions) if
that's undesired.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-15 08:56:44 -04:00
renovate[bot]
e6ed97051d Update serde monorepo to v1.0.207 (#16259)
[![Mend
Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [serde](https://serde.rs)
([source](https://togithub.com/serde-rs/serde)) | dependencies | patch |
`1.0.204` -> `1.0.207` |
| [serde](https://serde.rs)
([source](https://togithub.com/serde-rs/serde)) | workspace.dependencies
| patch | `1.0.204` -> `1.0.207` |
| [serde_derive](https://serde.rs)
([source](https://togithub.com/serde-rs/serde)) | workspace.dependencies
| patch | `1.0.204` -> `1.0.207` |

---

### Release Notes

<details>
<summary>serde-rs/serde (serde)</summary>

###
[`v1.0.207`](https://togithub.com/serde-rs/serde/releases/tag/v1.0.207)

[Compare
Source](https://togithub.com/serde-rs/serde/compare/v1.0.206...v1.0.207)

- Improve interactions between `flatten` attribute and
`skip_serializing`/`skip_deserializing`
([#&#8203;2795](https://togithub.com/serde-rs/serde/issues/2795), thanks
[@&#8203;Mingun](https://togithub.com/Mingun))

###
[`v1.0.206`](https://togithub.com/serde-rs/serde/releases/tag/v1.0.206)

[Compare
Source](https://togithub.com/serde-rs/serde/compare/v1.0.205...v1.0.206)

- Improve support for `flatten` attribute inside of enums
([#&#8203;2567](https://togithub.com/serde-rs/serde/issues/2567), thanks
[@&#8203;Mingun](https://togithub.com/Mingun))

###
[`v1.0.205`](https://togithub.com/serde-rs/serde/releases/tag/v1.0.205)

[Compare
Source](https://togithub.com/serde-rs/serde/compare/v1.0.204...v1.0.205)

- Use serialize_entry instead of serialize_key + serialize_value when
serialize flattened newtype enum variants
([#&#8203;2785](https://togithub.com/serde-rs/serde/issues/2785), thanks
[@&#8203;Mingun](https://togithub.com/Mingun))
- Avoid triggering a collection_is_never_read lint in the
deserialization of enums containing flattened fields
([#&#8203;2791](https://togithub.com/serde-rs/serde/issues/2791))

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these
updates again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-15 08:55:33 -04:00
Max Brunsfeld
e0cabbd142 Make WorkflowStepResolution an entity (#16268)
This PR is just a refactor, to pave the way toward adding a view for
workflow step resolution. The entity carries the state of the tool
call's streaming output.

Release Notes:

- N/A
2024-08-14 22:44:44 -07:00
Danilo Leal
102796979b assistant: Fine-tune workflow step header design (#16272)
- Removes layout shift when buttons appear after transformation resolution
- Refines icons and button colors

Release Notes:

- N/A
2024-08-15 01:28:54 -03:00
Max Brunsfeld
4c390b82fb Make LanguageModel::use_any_tool return a stream of chunks (#16262)
This PR is a refactor to pave the way for allowing the user to view and
edit workflow step resolutions. I've made tool calls work more like
normal streaming completions for all providers. The `use_any_tool`
method returns a stream of strings (which contain chunks of JSON). I've
also done some minor cleanup of language model providers in general,
removing the duplication around handling streaming responses.
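The stream-of-JSON-chunks shape described above can be pictured with a short sketch — the chunk contents and the helper name here are illustrative, not Zed's actual API:

```python
import json

def collect_tool_call(chunks):
    # Accumulate streamed string chunks until the buffer parses as a
    # complete JSON document, then return the decoded tool-call arguments.
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        try:
            return json.loads(buffer)  # complete JSON received
        except json.JSONDecodeError:
            continue  # still partial; keep accumulating
    raise ValueError("stream ended before JSON was complete")

# Three chunks that only form valid JSON once concatenated.
call = collect_tool_call(['{"path": "src/m', 'ain.rs", "line": 4', '2}'])
print(call)  # {'path': 'src/main.rs', 'line': 42}
```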

Release Notes:

- N/A
2024-08-14 18:02:46 -07:00
Marshall Bowers
1117d89057 zig: Bump to v0.2.0 (#16261)
This PR bumps the Zig extension to v0.2.0.

Changes:

- https://github.com/zed-industries/zed/pull/16260

Release Notes:

- N/A
2024-08-14 18:57:43 -04:00
Marshall Bowers
0df4d12234 zig: Upgrade zed_extension_api to v0.1.0 (#16260)
This PR updates the Zig extension to use v0.1.0 of the Zed extension
API.

This allows us to pin ZLS to v0.11.0, as the more recent releases of ZLS
don't have `.tar.gz` assets available.

Release Notes:

- N/A
2024-08-14 18:40:27 -04:00
Richard Feldman
796cba9e0e Improve workflow prompt, accept nonexistent directories from workflows (#16251)
Release Notes:

- Workflows can now create new files in folders that didn't exist

---------

Co-authored-by: jvmncs <7891333+jvmncs@users.noreply.github.com>
2024-08-14 16:18:41 -06:00
Marshall Bowers
a6461f90a1 gleam: Bump to v0.2.0 (#16258)
This PR bumps the Gleam extension to v0.2.0.

Changes:

- Added `/gleam-project` slash command
- Added `gleam-hexdocs` provider for the `/docs` slash command
- https://github.com/zed-industries/zed/pull/12221
- https://github.com/zed-industries/zed/pull/15659

Release Notes:

- N/A
2024-08-14 18:14:19 -04:00
Marshall Bowers
2e1750d5e2 gleam: Switch to published version of zed_extension_api (#16256)
This PR updates the Gleam extension to use the published version of the
`zed_extension_api`.

Release Notes:

- N/A
2024-08-14 18:04:05 -04:00
Marshall Bowers
18aff55f34 zed_extension_api: Release v0.1.0 (#16254)
This PR releases v0.1.0 of the Zed extension API.

Release Notes:

- N/A
2024-08-14 17:33:17 -04:00
renovate[bot]
ec062cfe66 Update Rust crate linkme to v0.3.28 (#16252)
[![Mend
Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [linkme](https://togithub.com/dtolnay/linkme) | dependencies | patch |
`0.3.27` -> `0.3.28` |

---

### Release Notes

<details>
<summary>dtolnay/linkme (linkme)</summary>

### [`v0.3.28`](https://togithub.com/dtolnay/linkme/releases/tag/0.3.28)

[Compare
Source](https://togithub.com/dtolnay/linkme/compare/0.3.27...0.3.28)

-   Documentation improvements

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-14 17:17:01 -04:00
renovate[bot]
e39671cad5 Update Rust crate ctrlc to v3.4.5 (#16248)
[![Mend
Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [ctrlc](https://togithub.com/Detegr/rust-ctrlc) | dependencies | patch
| `3.4.4` -> `3.4.5` |

---

### Release Notes

<details>
<summary>Detegr/rust-ctrlc (ctrlc)</summary>

###
[`v3.4.5`](https://togithub.com/Detegr/rust-ctrlc/compare/3.4.4...3.4.5)

[Compare
Source](https://togithub.com/Detegr/rust-ctrlc/compare/3.4.4...3.4.5)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-14 17:05:38 -04:00
renovate[bot]
ffaecbaa0c Update Rust crate core-foundation-sys to v0.8.7 (#16244)
[![Mend
Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [core-foundation-sys](https://togithub.com/servo/core-foundation-rs) |
dependencies | patch | `0.8.6` -> `0.8.7` |
| [core-foundation-sys](https://togithub.com/servo/core-foundation-rs) |
workspace.dependencies | patch | `0.8.6` -> `0.8.7` |

---

### Release Notes

<details>
<summary>servo/core-foundation-rs (core-foundation-sys)</summary>

###
[`v0.8.7`](https://togithub.com/servo/core-foundation-rs/compare/core-foundation-sys-v0.8.6...core-foundation-sys-v0.8.7)

[Compare
Source](https://togithub.com/servo/core-foundation-rs/compare/core-foundation-sys-v0.8.6...core-foundation-sys-v0.8.7)

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about these
updates again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-14 16:57:38 -04:00
renovate[bot]
7a693235a5 Update Rust crate clap to v4.5.15 (#16243)
[![Mend
Renovate](https://app.renovatebot.com/images/banner.svg)](https://renovatebot.com)

This PR contains the following updates:

| Package | Type | Update | Change |
|---|---|---|---|
| [clap](https://togithub.com/clap-rs/clap) | workspace.dependencies |
patch | `4.5.13` -> `4.5.15` |

---

### Release Notes

<details>
<summary>clap-rs/clap (clap)</summary>

###
[`v4.5.15`](https://togithub.com/clap-rs/clap/blob/HEAD/CHANGELOG.md#4515---2024-08-10)

[Compare
Source](https://togithub.com/clap-rs/clap/compare/v4.5.14...v4.5.15)

##### Compatibility

-   *(unstable-ext)* `Arg::remove` changed return types

##### Fixes

-   *(unstable-ext)* Make `Arg::remove` return the removed item

###
[`v4.5.14`](https://togithub.com/clap-rs/clap/blob/HEAD/CHANGELOG.md#4514---2024-08-08)

[Compare
Source](https://togithub.com/clap-rs/clap/compare/v4.5.13...v4.5.14)

##### Features

- *(unstable-ext)* Added `Arg::add` for attaching arbitrary state, like
completion hints, to `Arg` without `Arg` knowing about it

</details>

---

### Configuration

📅 **Schedule**: Branch creation - "after 3pm on Wednesday" in timezone
America/New_York, Automerge - At any time (no schedule defined).

🚦 **Automerge**: Disabled by config. Please merge this manually once you
are satisfied.

♻ **Rebasing**: Whenever PR becomes conflicted, or you tick the
rebase/retry checkbox.

🔕 **Ignore**: Close this PR and you won't be reminded about this update
again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check
this box

---

Release Notes:

- N/A


Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
2024-08-14 16:57:05 -04:00
Nathan Sobo
177aa7d9c0 Revert "Match VSCode behavior for ctrl-a/ctrl-e on MacOS" (#16246)
Reverts zed-industries/zed#15981

Release Notes:

- Restored the behavior of `ctrl-a` until we can separate the behavior
of the command for soft wraps and leading indentation.
2024-08-14 14:51:49 -06:00
Mikayla Maki
271e774713 Fix a bug where directories were not matching in the fuzzy matcher, when query contains the worktree root name (#16242)
Release Notes:

- N/A

Co-authored-by: Max <max@zed.dev>
2024-08-14 12:43:00 -07:00
Kirill Bulatov
e8bae839ed Disable forceful sorting of the slash command argument completions (#16240)
Also bubbles up the active tab's path in the `/tab` argument
completions.

Release Notes:

- N/A
2024-08-14 22:34:25 +03:00
Vitaly Slobodin
b55e8383c8 terminal: Fix Python virtual environment detection (#15989)
A Python virtual environment places a copy of the Python interpreter and
related files into a special directory, such as `.env` or `env`.
Currently, the built-in Zed terminal does not check if any entries
specified in `terminal.detect_venv.directories` are directories. If a
regular file with the same name exists, the terminal incorrectly
attempts to activate it as a virtual environment. The fix is to ensure
that an entry is a directory before attempting to activate the virtual
environment.
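The fix boils down to a single guard. A minimal sketch (the candidate names and helper name are illustrative, mirroring typical `terminal.detect_venv.directories` values):

```python
import os
import tempfile
from pathlib import Path

def find_venv_dir(worktree, candidates=(".env", "env", ".venv")):
    # Only treat an entry as a virtual environment if it is actually
    # a directory; a regular file named ".env" must not match.
    for name in candidates:
        path = Path(worktree) / name
        if path.is_dir():
            return path
    return None

with tempfile.TemporaryDirectory() as worktree:
    (Path(worktree) / ".env").write_text("SECRET=1")  # plain file, not a venv
    print(find_venv_dir(worktree))  # None
    os.mkdir(Path(worktree) / "env")
    print(find_venv_dir(worktree).name)  # env
```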

Here are screenshots of 3 possible scenarios:

# With a regular file `.env` in the worktree

## Before


![before](https://github.com/user-attachments/assets/6237a048-432c-4530-892e-91db16ac71bb)

## After


![after](https://github.com/user-attachments/assets/8268dbf4-7f22-441c-a46d-5df9c38131f9)


# With a directory called `.env` in the worktree


![with_pyenv](https://github.com/user-attachments/assets/8d901874-758d-4473-b35a-9c3db32d3b38)


Release Notes:

- Fixed detection of Python virtual environments
([#15570](https://github.com/zed-industries/zed/issues/15570)).
2024-08-14 21:33:02 +03:00
Bennet Bo Fenner
ccd8f75cff assistant: Adjust terms of service notice (#16235)
Co-Authored-by: Max <max@zed.dev>
Co-Authored-by: Marshall <marshall@zed.dev>
Co-Authored-by: Peter <peter@zed.dev>

<img width="396" alt="image"
src="https://github.com/user-attachments/assets/62282506-c74a-455e-ae4d-0438d47fed96">

Release Notes:

- N/A

Co-authored-by: Max <max@zed.dev>
Co-authored-by: Marshall <marshall@zed.dev>
Co-authored-by: Peter <peter@zed.dev>
2024-08-14 19:21:07 +02:00
Marshall Bowers
66e750eea2 Fix Windows build in CI (#15990)
This PR fixes the Windows build in CI, which was failing due to Clippy
warnings.

Release Notes:

- N/A
2024-08-14 13:17:49 -04:00
Joseph T Lyons
04ee5e3e6e v0.150.x dev 2024-08-14 12:46:00 -04:00
Marshall Bowers
8ad7d69378 indexed_docs: Normalize - in crate names to _ when computing rustdoc output path (#16234)
This PR fixes an issue where crate names that included `-`s would not
work properly when indexing them with rustdoc, due to the output
directories using `_` instead of `-`.

Release Notes:

- N/A
2024-08-14 12:37:02 -04:00
Kirill Bulatov
8df21f7bcd Fix slash command argument completion bugs (#16233)
Release Notes:

- N/A

---------

Co-authored-by: Mikayla Maki <mikayla@zed.dev>
2024-08-14 19:36:55 +03:00
Marshall Bowers
6365000b68 gleam: Clean up slash commands in preparation for release (#16232)
This PR cleans up the slash command functionality in preparation for an
upcoming release:

- Removed arguments to `/gleam-project` that were just used as an
example
- Removed `/gleam-docs` in favor of `/docs` with the `gleam-hexdocs`
provider
- Pulled a list of all Gleam packages to use as suggestions

Release Notes:

- N/A
2024-08-14 12:22:22 -04:00
jvmncs
6713e40875 Revert "Simplify inline assist to avoid spurious xml in completions (… (#16231)
…#16184)"

This reverts commit c3edbd7d9a, which
caused a regression that leaked chatter into inline assist replacements
and <rewrite_this> tags into insertions.

Release Notes:

- N/A
2024-08-14 12:20:22 -04:00
Bennet Bo Fenner
793a90c3e1 assistant: Improve terminal slash command (#16229)
- Fixed the terminal slash command not working when the terminal tab was
placed in the center workspace
- Removed the `--line-count` argument; you can now just pass a number to the
slash command, e.g. `/terminal 10` will show the last 10 lines of the active
terminal
- Increased the default number of context lines to 50
- We will revisit this once we add support for including only the last n
commands that were run

Release Notes:

- N/A
2024-08-14 18:04:36 +02:00
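The argument handling this commit describes can be sketched as follows (a hedged illustration: the function and constant names are invented for this example and are not Zed's actual API):

```rust
/// Default number of terminal lines included when no argument is given.
const DEFAULT_CONTEXT_LINES: usize = 50;

/// Hypothetical parser for the slash command's optional numeric argument,
/// so that `/terminal 10` yields 10 and a bare `/terminal` yields 50.
fn terminal_line_count(argument: Option<&str>) -> usize {
    argument
        .and_then(|arg| arg.trim().parse::<usize>().ok())
        .unwrap_or(DEFAULT_CONTEXT_LINES)
}

fn main() {
    assert_eq!(terminal_line_count(Some("10")), 10);
    assert_eq!(terminal_line_count(None), 50);
    // Non-numeric input falls back to the default.
    assert_eq!(terminal_line_count(Some("not-a-number")), 50);
    println!("ok");
}
```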
Marshall Bowers
340a1d145e haskell: Bump to v0.1.1 (#16228)
This PR bumps the Haskell extension to v0.1.1.

Changes:

- https://github.com/zed-industries/zed/pull/13268
- https://github.com/zed-industries/zed/pull/15998

Release Notes:

- N/A
2024-08-14 11:57:47 -04:00
Bennet Bo Fenner
caf222a71d assistant: Show errors without mouse interaction (#16226)
https://github.com/user-attachments/assets/a8e5e0ce-349d-4836-afe6-fc960a307c9f


Release Notes:

- N/A

---------

Co-authored-by: Thorsten <thorsten@zed.dev>
Co-authored-by: Danilo <danilo@zed.dev>
2024-08-14 17:18:39 +02:00
Thanabodee Charoenpiriyakij
f5532afaa7 erlang: Update tree-sitter grammar and query files (#15973)
Hi, I bumped tree-sitter-erlang to the newest version and synced the
highlight query to fix long-standing issues in the Zed Erlang extension:
incorrect function highlighting, missing support for triple-quoted
strings, and more.

Release Notes:

- N/A

Signed-off-by: Thanabodee Charoenpiriyakij <wingyminus@gmail.com>
2024-08-14 11:07:54 -04:00
Kirill Bulatov
8d9dcf1e78 Stop automatically running /file command after file suggestions (#16222)
https://github.com/user-attachments/assets/c5723950-7628-4073-bf03-f0a7473e984e

Release Notes:

- N/A
2024-08-14 17:47:03 +03:00
Stanislav Alekseev
aa31f9ded0 Fix diagnostic popups not having a max width (#16092)
They were probably broken by #14518.

Release Notes:

- N/A
2024-08-14 16:37:35 +02:00
Marshall Bowers
f8a72b5d0a assistant: Run /docs when completing a suggested (unindexed) package (#16218)
This PR is a follow-up to
https://github.com/zed-industries/zed/pull/16216, as we want to run the
`/docs` command when completing a suggested package that has yet to be
indexed.

Release Notes:

- N/A
2024-08-14 10:29:03 -04:00
Kirill Bulatov
8fe2de1737 Further improve /tabs command and slash arguments completion (#16216)
* renames `/tabs` to `/tab`
* allows inserting multiple tabs when fuzzy matching by name
* improves the slash command completion API, introducing a notion of
multiple arguments
* properly fires off commands on argument completion with
`run_command: true`

Release Notes:

- N/A

---------

Co-authored-by: Marshall Bowers <marshall@zed.dev>
2024-08-14 17:11:51 +03:00
张小白
88a12b60a9 windows: Fix supermaven (#16203)
Closes #16194

This PR introduces the following changes:

1. Updated the download process to retrieve the `.exe` file, as the API
response indicates that the `.exe` file should be downloaded on Windows.
> API response:
"https://supermaven-public.s3.amazonaws.com/sm-agent/26/windows/amd64/sm-agent.exe"
2. Modified the startup behavior of supermaven to prevent the cmd window
from appearing.

Release Notes:

- N/A
2024-08-14 17:01:16 +03:00
Thorsten Ball
0eb96c72e3 context inspector: Log when no suggestions (#16208)
Release Notes:

- N/A
2024-08-14 14:58:32 +02:00
Danilo Leal
55563831c5 assistant: Adjust slash command popover padding (#16181)
I've looked for other instances of the popover component where this change could cause a spacing regression but couldn't find any yet. Let me know if you do! Intuitively, I wouldn't change this padding directly on the component container, but I didn't find any other way to tackle it.

Release Notes:

- N/A
2024-08-14 09:48:48 -03:00
226 changed files with 13253 additions and 5401 deletions

View File

@@ -167,6 +167,7 @@ jobs:
       APPLE_NOTARIZATION_USERNAME: ${{ secrets.APPLE_NOTARIZATION_USERNAME }}
       APPLE_NOTARIZATION_PASSWORD: ${{ secrets.APPLE_NOTARIZATION_PASSWORD }}
       ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
+      ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
       DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
       DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
     steps:
@@ -276,6 +277,7 @@ jobs:
     needs: [linux_tests]
     env:
       ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
+      ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
     steps:
       - name: Add Rust to the PATH
         run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH
@@ -346,6 +348,7 @@ jobs:
     needs: [linux_tests]
     env:
       ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
+      ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
     steps:
       - name: Checkout repo
         uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4

View File

@@ -16,7 +16,7 @@ jobs:
           fi
           echo "::set-output name=URL::$URL"
       - name: Get content
-        uses: 2428392/gh-truncate-string-action@67b1b814955634208b103cff064be3cb1c7a19be # v1.3.0
+        uses: 2428392/gh-truncate-string-action@e6b5885fb83c81ca9a700a91b079baec2133be3e # v1.4.0
         id: get-content
         with:
           stringToTruncate: |

View File

@@ -67,6 +67,7 @@ jobs:
       DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
       DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
       ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
+      ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
     steps:
       - name: Install Node
         uses: actions/setup-node@1e60f620b9541d16bece96c5465dc8ee9832be0b # v4
@@ -106,6 +107,7 @@ jobs:
       DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
       DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
       ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
+      ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
     steps:
       - name: Checkout repo
         uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4
@@ -139,6 +141,7 @@ jobs:
       DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
       DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
       ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
+      ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
     steps:
       - name: Checkout repo
         uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4

View File

@@ -3,5 +3,10 @@
     "label": "clippy",
     "command": "./script/clippy",
     "args": []
+  },
+  {
+    "label": "cargo run --profile release-fast",
+    "command": "cargo",
+    "args": ["run", "--profile", "release-fast"]
   }
 ]

173
Cargo.lock generated
View File

@@ -223,6 +223,7 @@ name = "anthropic"
 version = "0.1.0"
 dependencies = [
  "anyhow",
+ "chrono",
  "futures 0.3.30",
  "http_client",
  "isahc",
@@ -232,6 +233,7 @@ dependencies = [
  "strum",
  "thiserror",
  "tokio",
+ "util",
 ]
 [[package]]
@@ -358,6 +360,7 @@ dependencies = [
  "clock",
  "collections",
  "command_palette_hooks",
+ "context_servers",
  "ctor",
  "db",
  "editor",
@@ -366,6 +369,7 @@ dependencies = [
  "fs",
  "futures 0.3.30",
  "fuzzy",
+ "globset",
  "gpui",
  "handlebars",
  "heed",
@@ -397,6 +401,7 @@ dependencies = [
  "serde_json",
  "serde_json_lenient",
  "settings",
+ "similar",
  "smallvec",
  "smol",
  "telemetry_events",
@@ -791,9 +796,9 @@ dependencies = [
 [[package]]
 name = "async-stripe"
-version = "0.37.3"
+version = "0.38.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e2f14b5943a52cf051bbbbb68538e93a69d1e291934174121e769f4b181113f5"
+checksum = "97ddaa6999d246ba2c6c84d830a1ba0cd16c9234d58701988b3869f0e5bd732d"
 dependencies = [
  "chrono",
  "futures-util",
@@ -983,9 +988,9 @@ dependencies = [
 [[package]]
 name = "aws-config"
-version = "1.5.4"
+version = "1.5.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "caf6cfe2881cb1fcbba9ae946fb9a6480d3b7a714ca84c74925014a89ef3387a"
+checksum = "4e95816a168520d72c0e7680c405a5a8c1fb6a035b4bc4b9d7b0de8e1a941697"
 dependencies = [
  "aws-credential-types",
  "aws-runtime",
@@ -1003,7 +1008,6 @@ dependencies = [
  "fastrand 2.1.0",
  "hex",
  "http 0.2.12",
- "hyper",
  "ring",
  "time",
  "tokio",
@@ -1026,9 +1030,9 @@ dependencies = [
 [[package]]
 name = "aws-runtime"
-version = "1.3.1"
+version = "1.4.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "87c5f920ffd1e0526ec9e70e50bf444db50b204395a0fa7016bbf9e31ea1698f"
+checksum = "f42c2d4218de4dcd890a109461e2f799a1a2ba3bcd2cde9af88360f5df9266c6"
 dependencies = [
  "aws-credential-types",
  "aws-sigv4",
@@ -1042,6 +1046,7 @@ dependencies = [
  "fastrand 2.1.0",
  "http 0.2.12",
  "http-body 0.4.6",
+ "once_cell",
  "percent-encoding",
  "pin-project-lite",
  "tracing",
@@ -1050,9 +1055,9 @@ dependencies = [
 [[package]]
 name = "aws-sdk-s3"
-version = "1.42.0"
+version = "1.43.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "558bbcec8db82a1a8af1610afcb3b10d00652d25ad366a0558eecdff2400a1d1"
+checksum = "9ccda7e730ace3cb8bbd4071bc650c6d294364891f9564bd4e43adfc8dea3177"
 dependencies = [
  "ahash 0.8.11",
  "aws-credential-types",
@@ -1085,9 +1090,9 @@ dependencies = [
 [[package]]
 name = "aws-sdk-sso"
-version = "1.36.0"
+version = "1.37.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6acca681c53374bf1d9af0e317a41d12a44902ca0f2d1e10e5cb5bb98ed74f35"
+checksum = "1074e818fbe4f9169242d78448b15be8916a79daa38ea1231f2e2e10d993fcd2"
 dependencies = [
  "aws-credential-types",
  "aws-runtime",
@@ -1107,9 +1112,9 @@ dependencies = [
 [[package]]
 name = "aws-sdk-ssooidc"
-version = "1.37.0"
+version = "1.38.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b79c6bdfe612503a526059c05c9ccccbf6bd9530b003673cb863e547fd7c0c9a"
+checksum = "29755c51e33fa3f678598f64324a169cf4b7d3c4865d2709d4308f53366a92a4"
 dependencies = [
  "aws-credential-types",
  "aws-runtime",
@@ -1129,9 +1134,9 @@ dependencies = [
 [[package]]
 name = "aws-sdk-sts"
-version = "1.36.0"
+version = "1.37.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "32e6ecdb2bd756f3b2383e6f0588dc10a4e65f5d551e70a56e0bfe0c884673ce"
+checksum = "6e52dc3fd7dfa6c01a69cf3903e00aa467261639138a05b06cd92314d2c8fb07"
 dependencies = [
  "aws-credential-types",
  "aws-runtime",
@@ -1192,9 +1197,9 @@ dependencies = [
 [[package]]
 name = "aws-smithy-checksums"
-version = "0.60.11"
+version = "0.60.12"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "48c4134cf3adaeacff34d588dbe814200357b0c466d730cf1c0d8054384a2de4"
+checksum = "598b1689d001c4d4dc3cb386adb07d37786783aee3ac4b324bcadac116bf3d23"
 dependencies = [
  "aws-smithy-http",
  "aws-smithy-types",
@@ -1291,9 +1296,9 @@ dependencies = [
 [[package]]
 name = "aws-smithy-runtime-api"
-version = "1.7.1"
+version = "1.7.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "30819352ed0a04ecf6a2f3477e344d2d1ba33d43e0f09ad9047c12e0d923616f"
+checksum = "e086682a53d3aa241192aa110fa8dfce98f2f5ac2ead0de84d41582c7e8fdb96"
 dependencies = [
  "aws-smithy-async",
  "aws-smithy-types",
@@ -2206,9 +2211,9 @@ dependencies = [
 [[package]]
 name = "clap"
-version = "4.5.13"
+version = "4.5.15"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0fbb260a053428790f3de475e304ff84cdbc4face759ea7a3e64c1edd938a7fc"
+checksum = "11d8838454fda655dafd3accb2b6e2bea645b9e4078abe84a22ceb947235c5cc"
 dependencies = [
  "clap_builder",
  "clap_derive",
@@ -2216,9 +2221,9 @@ dependencies = [
 [[package]]
 name = "clap_builder"
-version = "4.5.13"
+version = "4.5.15"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "64b17d7ea74e9f833c7dbf2cbe4fb12ff26783eda4782a8975b72f895c9b4d99"
+checksum = "216aec2b177652e3846684cbfe25c9964d18ec45234f0f5da5157b207ed1aab6"
 dependencies = [
  "anstream",
  "anstyle",
@@ -2668,6 +2673,27 @@ dependencies = [
  "tiny-keccak",
 ]
+[[package]]
+name = "context_servers"
+version = "0.1.0"
+dependencies = [
+ "anyhow",
+ "collections",
+ "futures 0.3.30",
+ "gpui",
+ "log",
+ "parking_lot",
+ "postage",
+ "schemars",
+ "serde",
+ "serde_json",
+ "settings",
+ "smol",
+ "url",
+ "util",
+ "workspace",
+]
 [[package]]
 name = "convert_case"
 version = "0.4.0"
@@ -2736,9 +2762,9 @@ dependencies = [
 [[package]]
 name = "core-foundation-sys"
-version = "0.8.6"
+version = "0.8.7"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "06ea2b9bc92be3c2baa9334a323ebca2d6f074ff852cd1d7b11064035cd3868f"
+checksum = "773648b94d0e5d620f64f280777445740e61fe701025087ec8b57f45c791888b"
 [[package]]
 name = "core-graphics"
@@ -3141,12 +3167,12 @@ dependencies = [
 [[package]]
 name = "ctrlc"
-version = "3.4.4"
+version = "3.4.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "672465ae37dc1bc6380a6547a8883d5dd397b0f1faaad4f265726cc7042a5345"
+checksum = "90eeab0aa92f3f9b4e87f258c72b139c207d251f9cbc1080a0086b86a8870dd3"
 dependencies = [
- "nix 0.28.0",
+ "nix 0.29.0",
- "windows-sys 0.52.0",
+ "windows-sys 0.59.0",
 ]
 [[package]]
@@ -5033,9 +5059,9 @@ checksum = "2304e00983f87ffb38b55b444b5e3b60a884b5d30c0fca7d82fe33449bbe55ea"
 [[package]]
 name = "heed"
-version = "0.20.3"
+version = "0.20.5"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2bc30da4a93ff8cb98e535d595d6de42731d4719d707bc1c86f579158751a24e"
+checksum = "7d4f449bab7320c56003d37732a917e18798e2f1709d80263face2b4f9436ddb"
 dependencies = [
  "bitflags 2.6.0",
  "byteorder",
@@ -6220,18 +6246,18 @@ dependencies = [
 [[package]]
 name = "linkme"
-version = "0.3.27"
+version = "0.3.28"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "ccb76662d78edc9f9bf56360d6919bdacc8b7761227727e5082f128eeb90bbf5"
+checksum = "3c943daedff228392b791b33bba32e75737756e80a613e32e246c6ce9cbab20a"
 dependencies = [
  "linkme-impl",
 ]
 [[package]]
 name = "linkme-impl"
-version = "0.3.27"
+version = "0.3.28"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "f8dccda732e04fa3baf2e17cf835bfe2601c7c2edafd64417c627dabae3a8cda"
+checksum = "cb26336e6dc7cc76e7927d2c9e7e3bb376d7af65a6f56a0b16c47d18a9b1abc5"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -6290,9 +6316,9 @@ dependencies = [
 [[package]]
 name = "lmdb-master-sys"
-version = "0.2.2"
+version = "0.2.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "57640c190703d5ccf4a86aff4aeb749b2d287a8cb1723c76b51f39d77ab53b24"
+checksum = "472c3760e2a8d0f61f322fb36788021bb36d573c502b50fa3e2bcaac3ec326c9"
 dependencies = [
  "cc",
  "doxygen-rs",
@@ -7566,6 +7592,29 @@ version = "2.3.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "e3148f5046208a5d56bcfc03053e3ca6334e51da8dfb19b6cdc8b306fae3283e"
+[[package]]
+name = "performance"
+version = "0.1.0"
+dependencies = [
+ "anyhow",
+ "collections",
+ "gpui",
+ "log",
+ "schemars",
+ "serde",
+ "settings",
+ "util",
+ "workspace",
+]
+[[package]]
+name = "perplexity"
+version = "0.1.0"
+dependencies = [
+ "serde",
+ "zed_extension_api 0.1.0",
+]
 [[package]]
 name = "pest"
 version = "2.7.11"
@@ -9009,9 +9058,9 @@ dependencies = [
 [[package]]
 name = "runtimelib"
-version = "0.14.0"
+version = "0.15.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0c3d817764e3971867351e6103955b17d808f5330e9ef63aaaaab55bf8c664c1"
+checksum = "a7d76d28b882a7b889ebb04e79bc2b160b3061821ea596ff0f4a838fc7a76db0"
 dependencies = [
  "anyhow",
  "async-dispatcher",
@@ -9533,18 +9582,18 @@ dependencies = [
 [[package]]
 name = "serde"
-version = "1.0.204"
+version = "1.0.207"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bc76f558e0cbb2a839d37354c575f1dc3fdc6546b5be373ba43d95f231bf7c12"
+checksum = "5665e14a49a4ea1b91029ba7d3bca9f299e1f7cfa194388ccc20f14743e784f2"
 dependencies = [
  "serde_derive",
 ]
 [[package]]
 name = "serde_derive"
-version = "1.0.204"
+version = "1.0.207"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e0cd7e117be63d3c3678776753929474f3b04a43a080c744d6b0ae2a8c28e222"
+checksum = "6aea2634c86b0e8ef2cfdc0c340baede54ec27b1e46febd7f80dffb2aa44a00e"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -9584,9 +9633,9 @@ dependencies = [
 [[package]]
 name = "serde_json"
-version = "1.0.122"
+version = "1.0.125"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "784b6203951c57ff748476b126ccb5e8e2959a5c19e5c617ab1956be3dbc68da"
+checksum = "83c8e735a073ccf5be70aa8066aa984eaf2fa000db6c8d0100ae605b366d31ed"
 dependencies = [
  "indexmap 2.3.0",
  "itoa",
@@ -9920,6 +9969,13 @@ dependencies = [
  "autocfg",
 ]
+[[package]]
+name = "slash_commands_example"
+version = "0.1.0"
+dependencies = [
+ "zed_extension_api 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
+]
 [[package]]
 name = "slice-group-by"
 version = "0.3.1"
@@ -10509,6 +10565,7 @@ dependencies = [
  "theme",
  "ui",
  "util",
+ "windows 0.58.0",
 ]
 [[package]]
@@ -13782,7 +13839,7 @@ dependencies = [
 [[package]]
 name = "zed"
-version = "0.149.0"
+version = "0.151.0"
 dependencies = [
  "activity_indicator",
  "anyhow",
@@ -13843,6 +13900,7 @@ dependencies = [
  "outline_panel",
  "parking_lot",
  "paths",
+ "performance",
  "profiling",
  "project",
  "project_panel",
@@ -13930,7 +13988,7 @@ dependencies = [
 [[package]]
 name = "zed_elixir"
-version = "0.0.7"
+version = "0.0.8"
 dependencies = [
  "zed_extension_api 0.0.6",
 ]
@@ -13976,12 +14034,23 @@ dependencies = [
  "wit-bindgen",
 ]
+[[package]]
+name = "zed_extension_api"
+version = "0.1.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "594fd10dd0f2f853eb243e2425e7c95938cef49adb81d9602921d002c5e6d9d9"
+dependencies = [
+ "serde",
+ "serde_json",
+ "wit-bindgen",
+]
 [[package]]
 name = "zed_gleam"
-version = "0.1.3"
+version = "0.2.0"
 dependencies = [
  "html_to_markdown 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
- "zed_extension_api 0.1.0",
+ "zed_extension_api 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
 [[package]]
@@ -13993,7 +14062,7 @@ dependencies = [
 [[package]]
 name = "zed_haskell"
-version = "0.1.0"
+version = "0.1.1"
 dependencies = [
  "zed_extension_api 0.0.6",
 ]
@@ -14107,9 +14176,9 @@ dependencies = [
 [[package]]
 name = "zed_zig"
-version = "0.1.5"
+version = "0.2.0"
 dependencies = [
- "zed_extension_api 0.0.6",
+ "zed_extension_api 0.1.0 (registry+https://github.com/rust-lang/crates.io-index)",
 ]
 [[package]]

View File

@@ -19,6 +19,7 @@ members = [
     "crates/collections",
     "crates/command_palette",
     "crates/command_palette_hooks",
+    "crates/context_servers",
     "crates/copilot",
     "crates/db",
     "crates/dev_server_projects",
@@ -69,6 +70,7 @@ members = [
     "crates/outline",
     "crates/outline_panel",
     "crates/paths",
+    "crates/performance",
     "crates/picker",
     "crates/prettier",
     "crates/project",
@@ -144,10 +146,12 @@ members = [
     "extensions/lua",
     "extensions/ocaml",
     "extensions/php",
+    "extensions/perplexity",
     "extensions/prisma",
     "extensions/purescript",
     "extensions/ruff",
     "extensions/ruby",
+    "extensions/slash-commands-example",
     "extensions/snippets",
     "extensions/svelte",
     "extensions/terraform",
@@ -189,6 +193,7 @@ collab_ui = { path = "crates/collab_ui" }
 collections = { path = "crates/collections" }
 command_palette = { path = "crates/command_palette" }
 command_palette_hooks = { path = "crates/command_palette_hooks" }
+context_servers = { path = "crates/context_servers" }
 copilot = { path = "crates/copilot" }
 db = { path = "crates/db" }
 dev_server_projects = { path = "crates/dev_server_projects" }
@@ -238,6 +243,7 @@ open_ai = { path = "crates/open_ai" }
 outline = { path = "crates/outline" }
 outline_panel = { path = "crates/outline_panel" }
 paths = { path = "crates/paths" }
+performance = { path = "crates/performance" }
 picker = { path = "crates/picker" }
 plugin = { path = "crates/plugin" }
 plugin_macros = { path = "crates/plugin_macros" }
@@ -377,7 +383,7 @@ rand = "0.8.5"
 regex = "1.5"
 repair_json = "0.1.0"
 rsa = "0.9.6"
-runtimelib = { version = "0.14", default-features = false, features = [
+runtimelib = { version = "0.15", default-features = false, features = [
     "async-dispatcher-runtime",
 ] }
 rusqlite = { version = "0.29.0", features = ["blob", "array", "modern_sqlite"] }
@@ -461,7 +467,7 @@ which = "6.0.0"
 wit-component = "0.201"
 [workspace.dependencies.async-stripe]
-version = "0.37"
+version = "0.38"
 default-features = false
 features = [
     "runtime-tokio-hyper-rustls",

View File

@@ -0,0 +1,12 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="16" height="16" rx="2" fill="black" fill-opacity="0.2"/>
<g clip-path="url(#clip0_1916_18)">
<path d="M10.652 3.79999H8.816L12.164 12.2H14L10.652 3.79999Z" fill="#1F1F1E"/>
<path d="M5.348 3.79999L2 12.2H3.872L4.55672 10.436H8.05927L8.744 12.2H10.616L7.268 3.79999H5.348ZM5.16224 8.87599L6.308 5.92399L7.45374 8.87599H5.16224Z" fill="#1F1F1E"/>
</g>
<defs>
<clipPath id="clip0_1916_18">
<rect width="12" height="8.4" fill="white" transform="translate(2 3.79999)"/>
</clipPath>
</defs>
</svg>


View File

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-database-zap"><ellipse cx="12" cy="5" rx="9" ry="3"/><path d="M3 5V19A9 3 0 0 0 15 21.84"/><path d="M21 5V8"/><path d="M21 12L18 17H22L19 22"/><path d="M3 12A9 3 0 0 0 14.59 14.87"/></svg>


View File

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-ellipsis-vertical"><circle cx="12" cy="12" r="1"/><circle cx="12" cy="5" r="1"/><circle cx="12" cy="19" r="1"/></svg>


View File

@@ -1,10 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M3 13L7.01562 8.98438" stroke="black" stroke-width="1.5" stroke-linecap="round"/>
<path d="M8.6875 7.3125L9.5 6.5" stroke="black" stroke-width="1.5" stroke-linecap="round"/>
<path d="M7 5V3" stroke="black" stroke-width="1.5" stroke-linecap="round"/>
<path d="M12 5V3" stroke="black" stroke-width="1.5" stroke-linecap="round"/>
<path d="M12 10V8" stroke="black" stroke-width="1.5" stroke-linecap="round"/>
<path d="M6 4L8 4" stroke="black" stroke-width="1.5" stroke-linecap="round"/>
<path d="M11 4L13 4" stroke="black" stroke-width="1.5" stroke-linecap="round"/>
<path d="M11 9L13 9" stroke="black" stroke-width="1.5" stroke-linecap="round"/>
</svg>


View File

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-search-code"><path d="m13 13.5 2-2.5-2-2.5"/><path d="m21 21-4.3-4.3"/><path d="M9 8.5 7 11l2 2.5"/><circle cx="11" cy="11" r="8"/></svg>


1
assets/icons/slash.svg Normal file
View File

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-slash"><path d="M22 2 2 22"/></svg>


View File

@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="24" height="24" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-square-slash"><rect width="18" height="18" x="3" y="3" rx="2"/><line x1="9" x2="15" y1="15" y2="9"/></svg>


View File

@@ -89,9 +89,9 @@
   "alt-right": "editor::MoveToNextWordEnd",
   "alt-f": "editor::MoveToNextWordEnd",
   "cmd-left": "editor::MoveToBeginningOfLine",
-  "ctrl-a": ["editor::MoveToBeginningOfLine", { "stop_at_soft_wraps": false }],
+  "ctrl-a": "editor::MoveToBeginningOfLine",
   "cmd-right": "editor::MoveToEndOfLine",
-  "ctrl-e": ["editor::MoveToEndOfLine", { "stop_at_soft_wraps": false }],
+  "ctrl-e": "editor::MoveToEndOfLine",
   "cmd-up": "editor::MoveToBeginning",
   "cmd-down": "editor::MoveToEnd",
   "shift-up": "editor::SelectUp",


@@ -1,5 +1,13 @@
{{#if language_name}} {{#if language_name}}
File language: {{language_name}} Here's a file of {{language_name}} that I'm going to ask you to make an edit to.
{{else}}
Here's a file of text that I'm going to ask you to make an edit to.
{{/if}}
{{#if is_insert}}
The point you'll need to insert at is marked with <insert_here></insert_here>.
{{else}}
The section you'll need to rewrite is marked with <rewrite_this></rewrite_this> tags.
{{/if}} {{/if}}
<document> <document>
@@ -7,35 +15,47 @@ File language: {{language_name}}
</document> </document>
{{#if is_truncated}} {{#if is_truncated}}
Note: Context around the relevant section has been truncated for brevity. The context around the relevant section has been truncated (possibly in the middle of a line) for brevity.
{{/if}} {{/if}}
Editing instructions: {{#if is_insert}}
1. Rewrite the section marked with <rewrite_this></rewrite_this> tags based on this prompt: You can't replace {{content_type}}; your answer will be inserted in place of the `<insert_here></insert_here>` tags. Don't include the insert_here tags in your output.
Generate {{content_type}} based on the following prompt:
<prompt> <prompt>
{{{user_prompt}}} {{{user_prompt}}}
</prompt> </prompt>
2. Within <rewrite_this></rewrite_this>, make changes only in these subsections: Match the indentation in the original file in the inserted {{content_type}}, don't include any indentation on blank lines.
{{#if has_insertion}}
- Insert text where marked with <insert_here></insert_here> tags Immediately start with the following format with no remarks:
{{/if}}
{{#if has_replacement}} ```
- Edit text surrounded by <edit_here></edit_here> tags \{{INSERTED_CODE}}
{{/if}} ```
{{else}}
Edit the section of {{content_type}} in <rewrite_this></rewrite_this> tags based on the following prompt:
<prompt>
{{{user_prompt}}}
</prompt>
{{#if rewrite_section}}
And here's the section to rewrite based on that prompt again for reference:
3. Section to edit:
<rewrite_this> <rewrite_this>
{{{rewrite_section_with_selections}}} {{{rewrite_section}}}
</rewrite_this> </rewrite_this>
{{/if}}
4. Guidelines: Only make changes that are necessary to fulfill the prompt, leave everything else as-is. All surrounding {{content_type}} will be preserved.
- Only make changes necessary to fulfill the prompt
- Preserve all surrounding {{content_type}}
- Maintain the original indentation level
- Rewrite the entire section, even if no changes are needed
- Do not include <rewrite_this>, <insert_here>, or <edit_here> tags in your output
Output format: Start at the indentation level in the original file in the rewritten {{content_type}}. Don't stop until you've rewritten the entire section, even if you have no more changes to make, always write out the whole section with no unnecessary elisions.
Immediately start with the following, ensuring no leading whitespace:
```{{REWRITTEN_CODE}}``` Immediately start with the following format with no remarks:
```
\{{REWRITTEN_CODE}}
```
{{/if}}
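To make the new template's shape concrete, here is roughly how its insertion branch renders (a sketch, assuming `is_insert` is true, `language_name` is "Rust", `content_type` is "code", and `is_truncated` applies; exact whitespace depends on the Handlebars rendering, and the document contents are elided):

```
Here's a file of Rust that I'm going to ask you to make an edit to.
The point you'll need to insert at is marked with <insert_here></insert_here>.

<document>
...
</document>

The context around the relevant section has been truncated (possibly in the middle of a line) for brevity.

You can't replace code, your answer will be inserted in place of the `<insert_here></insert_here>` tags. Don't include the insert_here tags in your output.

Generate code based on the following prompt:
<prompt>
...
</prompt>

Match the indentation in the original file in the inserted code, don't include any indentation on blank lines.
```

This is followed by the fenced `INSERTED_CODE` format instructions; the leading backslash in `\{{INSERTED_CODE}}` escapes the braces so Handlebars emits them literally rather than treating them as a variable.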


@@ -1,87 +1,302 @@
<workflow>
Guide the user through code changes in numbered steps that focus on individual functions, type definitions, etc. Guide the user through code changes in numbered steps that focus on individual functions, type definitions, etc.
Surround each distinct step in a <step></step> XML tag. Surround each distinct step in a <step></step> XML tag. The user will be performing these steps in a code editor
named Zed, which is where they will have entered this prompt and will be seeing the response.
Here's an example of a user-assistant dialog with step numbers and a follow-up question in Rust: <instructions>
- Use the language of the file for code fence blocks unless otherwise specified.
- Include a code or file action in each step.
- Only put code in separate steps if it should either go in separate files, or in different (non-contiguous) places in the same file.
- Provide error handling and input validation where appropriate.
- Adapt explanations based on the user's perceived level of expertise.
- Include comments in code examples to enhance understanding.
- Consider more complex programming scenarios when relevant.
- Avoid using the terminal to perform filesystem operations such as creating, deleting, renaming, and editing files. Assume the user prefers to use Zed for these operations, unless the prompt suggests that the user is expecting terminal commands.
- When creating a new file, Zed will automatically create any necessary directories in that file's path. So when a step will create a new file in Zed, don't mention needing to create the directory that it will go in.
- Assume the user is performing these actions in the Zed code editor, so avoid redundancies like "In your code editor, ..." or "In Zed, ..."
- Be concise without leaving out important information.
</instructions>
1. User: I want to create a User struct with name and age fields, and a method to print the user's information. Can you help me with that? Examples of user-assistant dialogs with step numbers and user follow-up requests:
A: Certainly! Let's create a User struct with the specified fields and a method to print the information:
<step>Define the User struct <example language="rust">
<turn number="1">
<user>Create a Library struct with a vector of Book structs. Include methods to add books and find books by author. Use generics for the book ID.</user>
<assistant>Certainly! Let's create a Library struct with a vector of Book structs, including methods to add books and find books by author, using generics for the book ID:
<step>Define the Book and Library structs
```rust ```rust
struct User { use std::fmt::Display;
name: String,
age: u32, struct Book<T>
where
T: Display + Clone
{
id: T,
title: String,
author: String,
}
struct Library<T>
where
T: Display + Clone
{
books: Vec<Book<T>>,
} }
``` ```
</step> </step>
<step>Implement methods for the User struct <step>Implement methods for the Library struct
```rust ```rust
impl User { impl<T> Library<T>
fn new(name: String, age: u32) -> Self { where
User { name, age } T: Display + Clone
{
fn new() -> Self {
Library { books: Vec::new() }
} }
fn print_info(&self) {
println!("Name: {}, Age: {}", self.name, self.age); fn add_book(&mut self, id: T, title: String, author: String) {
let book = Book { id, title, author };
self.books.push(book);
}
fn find_books_by_author(&self, author: &str) -> Vec<&Book<T>> {
self.books
.iter()
.filter(|book| book.author.to_lowercase() == author.to_lowercase())
.collect()
} }
} }
``` ```
</step> </step>
This code defines a User struct and implements two methods: This code defines:
- `new`: a constructor to create a new User 1. A generic Book struct with an ID of type T, title, and author
- `print_info`: a method to print the user's information 2. A Library struct containing a vector of Books
3. Methods to create a new Library, add books, and find books by author
The use of generics allows flexibility in the type of ID used for books.</assistant>
</turn>
2. User: Great! Now, how can I create a User instance and call the print_info method? <turn number="2">
A: You can create a User instance and call the print_info method like this: <user>How can I use this Library struct in a main function?</user>
<assistant>Here's how you can use the Library struct in a main function:
<step>Create a User instance and call print_info <step>Create a main function to demonstrate Library usage
```rust ```rust
fn main() { fn main() {
let user = User::new(String::from("Alice"), 30); // Create a new Library with string IDs
user.print_info(); let mut library = Library::new();
}
```
</step>
This will output: `Name: Alice, Age: 30` // Add some books
library.add_book(String::from("001"), String::from("The Rust Programming Language"), String::from("Steve Klabnik"));
library.add_book(String::from("002"), String::from("Programming Rust"), String::from("Jim Blandy"));
library.add_book(String::from("003"), String::from("Rust in Action"), String::from("Tim McNamara"));
3. User: Can you add an email field to the User struct and update the print_info method to include it? // Find books by author
A: Certainly! Let's modify the User struct and update the print_info method: let author = "Steve Klabnik";
let books = library.find_books_by_author(author);
<step>Add email field to User struct println!("Books by {}:", author);
```rust for book in books {
struct User { println!("ID: {}, Title: {}", book.id, book.title);
name: String,
age: u32,
email: String,
}
```
</step>
<step>Update the new method and print_info method
```rust
impl User {
fn new(name: String, age: u32, email: String) -> Self {
User { name, age, email }
}
fn print_info(&self) {
println!("Name: {}, Age: {}, Email: {}", self.name, self.age, self.email);
} }
} }
``` ```
</step> </step>
<step>Update the main function This main function demonstrates:
1. Creating a new Library instance
2. Adding books with string IDs
3. Finding and displaying books by a specific author</assistant>
</turn>
<turn number="3">
<user>Can you add a method to remove a book by ID and handle potential errors?</user>
<assistant>Certainly! Let's add a method to remove a book by ID and handle potential errors:
<step>Add a remove_book method to the Library struct
```rust
use std::fmt::Display;
use std::error::Error;
use std::fmt;
#[derive(Debug)]
struct BookNotFoundError;
impl Error for BookNotFoundError {}
impl fmt::Display for BookNotFoundError {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "Book not found")
}
}
impl<T> Library<T>
where
T: Display + Clone + PartialEq
{
// ... (previous methods remain the same)
fn remove_book(&mut self, id: &T) -> Result<Book<T>, BookNotFoundError> {
if let Some(index) = self.books.iter().position(|book| &book.id == id) {
Ok(self.books.remove(index))
} else {
Err(BookNotFoundError)
}
}
}
```
</step>
<step>Update the main function to demonstrate book removal
```rust ```rust
fn main() { fn main() {
let user = User::new(String::from("Alice"), 30, String::from("alice@example.com")); // ... (previous code remains the same)
user.print_info();
// Attempt to remove a book
let id_to_remove = String::from("002");
match library.remove_book(&id_to_remove) {
Ok(removed_book) => println!("Removed book: {} by {}", removed_book.title, removed_book.author),
Err(e) => println!("Error removing book: {}", e),
}
// Attempt to remove a non-existent book
let non_existent_id = String::from("999");
match library.remove_book(&non_existent_id) {
Ok(_) => println!("Book removed successfully"),
Err(e) => println!("Error removing book: {}", e),
}
// Display remaining books
println!("\nRemaining books:");
for book in &library.books {
println!("ID: {}, Title: {} by {}", book.id, book.title, book.author);
}
} }
``` ```
</step> </step>
This will now output: `Name: Alice, Age: 30, Email: alice@example.com` This update includes:
The changes include: 1. A new remove_book method that returns a Result type
1. Adding the `email` field to the User struct 2. A custom BookNotFoundError for error handling
2. Updating the `new` method to accept an email parameter 3. Demonstration of successful book removal and error handling for non-existent books
3. Modifying the `print_info` method to include the email 4. Display of remaining books after removal attempts
4. Updating the main function to provide an email when creating a User instance
The remove_book method uses the PartialEq trait to compare book IDs, allowing for flexible ID types while maintaining type safety.</assistant>
</turn>
</example>
<example language="go">
<turn number="1">
<user>Create a Person struct with name, age, and email fields, and a method to greet. Include error handling for age.</user>
<assistant>Certainly! Let's create a Person struct with the specified fields and a method to greet in Go:
<step>Define the Person struct and NewPerson function with error handling
```go
type Person struct {
Name string
Age int
Email string
}
func NewPerson(name string, age int, email string) (*Person, error) {
if age < 0 || age > 150 {
return nil, fmt.Errorf("invalid age: %d", age)
}
return &Person{Name: name, Age: age, Email: email}, nil
}
```
</step>
<step>Implement the Greet method for the Person struct
```go
func (p *Person) Greet() string {
return fmt.Sprintf("Hello, I'm %s, %d years old. You can reach me at %s.", p.Name, p.Age, p.Email)
}
```
</step>
This code defines a Person struct, a NewPerson function with age validation, and a Greet method.</assistant>
</turn>
<turn number="2">
<user>How can I use these in a main function?</user>
<assistant>Here's how to use the Person struct and its methods in a main function:
<step>Create a main function with error handling
```go
func main() {
// Create a new Person instance
person, err := NewPerson("Alice", 30, "alice@example.com")
if err != nil {
fmt.Println("Error creating person:", err)
return
}
// Use the Greet method
fmt.Println(person.Greet())
// Demonstrate error handling with invalid age
_, err = NewPerson("Bob", 200, "bob@example.com")
if err != nil {
fmt.Println("Error creating person with invalid age:", err)
}
}
```
</step>
This main function demonstrates:
1. Creating a valid Person instance
2. Using the Greet method
3. Handling errors when creating a Person with an invalid age</assistant>
</turn>
<turn number="3">
<user>Can you add a method to update the email with validation?</user>
<assistant>Certainly! Let's add a method to update the email with validation:
<step>Add UpdateEmail method with validation
```go
import (
"fmt"
"regexp"
)
func (p *Person) UpdateEmail(newEmail string) error {
// Simple email validation using regex
emailRegex := regexp.MustCompile(`^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$`)
if !emailRegex.MatchString(newEmail) {
return fmt.Errorf("invalid email format: %s", newEmail)
}
p.Email = newEmail
return nil
}
```
</step>
<step>Update the main function to demonstrate email update
```go
func main() {
// ... (previous code remains the same)
// Demonstrate email update
err = person.UpdateEmail("alice.new@example.com")
if err != nil {
fmt.Println("Error updating email:", err)
} else {
fmt.Println("Updated greeting:", person.Greet())
}
// Demonstrate email update with invalid email
err = person.UpdateEmail("invalid-email")
if err != nil {
fmt.Println("Error updating email:", err)
}
}
```
</step>
This update includes:
1. An UpdateEmail method with email format validation
2. Demonstration of successful email update in the main function
3. Handling of invalid email update attempt</assistant>
</turn>
</example>
</workflow>


@@ -1,22 +1,28 @@
Your task is to map a step from the conversation above to suggestions on symbols inside the provided source files. <overview>
Your task is to map a step from a workflow to locations in source code where code needs to be changed to fulfill that step.
Given a workflow containing background context plus a series of <step> tags, you will resolve *one* of these step tags to one or more locations in the code.
With each location, you will produce a brief, one-line description of the changes to be made.
Guidelines: <guidelines>
- There's no need to describe *what* to do, just *where* to do it. - There's no need to describe *what* to do, just *where* to do it.
- Only reference locations that actually exist (unless you're creating a file).
- If creating a file, assume any subsequent updates are included at the time of creation. - If creating a file, assume any subsequent updates are included at the time of creation.
- Don't create and then update a file. - Don't create and then update a file. Always create new files in one shot.
- We'll create it in one shot.
- Prefer updating symbols lower in the syntax tree if possible. - Prefer updating symbols lower in the syntax tree if possible.
- Never include suggestions on a parent symbol and one of its children in the same suggestions block. - Never include suggestions on a parent symbol and one of its children in the same suggestions block.
- Never nest an operation with another operation or include CDATA or other content. All suggestions are leaf nodes. - Never nest an operation with another operation or include CDATA or other content. All suggestions are leaf nodes.
- Include a description attribute for each operation with a brief, one-line description of the change to perform.
- Descriptions are required for all suggestions except delete. - Descriptions are required for all suggestions except delete.
- When generating multiple suggestions, ensure the descriptions are specific to each individual operation. - When generating multiple suggestions, ensure the descriptions are specific to each individual operation.
- Avoid referring to the location in the description. Focus on the change to be made, not the location where it's made. That's implicit with the symbol you provide. - Avoid referring to the location in the description. Focus on the change to be made, not the location where it's made. That's implicit with the symbol you provide.
- Don't generate multiple suggestions at the same location. Instead, combine them together in a single operation with a succinct combined description. - Don't generate multiple suggestions at the same location. Instead, combine them together in a single operation with a succinct combined description.
- To add imports respond with a suggestion where the `"symbol"` key is set to `"#imports"`
</guidelines>
</overview>
Example 1: <examples>
<example>
User: <workflow_context>
<message role="user">
```rs src/rectangle.rs ```rs src/rectangle.rs
struct Rectangle { struct Rectangle {
width: f64, width: f64,
@@ -30,12 +36,21 @@ impl Rectangle {
} }
``` ```
We need to add methods to calculate the area and perimeter of the rectangle. Can you help with that?
</message>
<message role="assistant">
Sure, I can help with that!
<step>Add new methods 'calculate_area' and 'calculate_perimeter' to the Rectangle struct</step> <step>Add new methods 'calculate_area' and 'calculate_perimeter' to the Rectangle struct</step>
<step>Implement the 'Display' trait for the Rectangle struct</step> <step>Implement the 'Display' trait for the Rectangle struct</step>
</message>
</workflow_context>
What are the suggestions for the step: <step>Add a new method 'calculate_area' to the Rectangle struct</step> <step_to_resolve>
Add new methods 'calculate_area' and 'calculate_perimeter' to the Rectangle struct
</step_to_resolve>
A (wrong): <incorrect_output reason="NEVER append multiple children at the same location.">
{ {
"title": "Add Rectangle methods", "title": "Add Rectangle methods",
"suggestions": [ "suggestions": [
@@ -53,10 +68,9 @@ A (wrong):
} }
] ]
} }
</incorrect_output>
This demonstrates what NOT to do. NEVER append multiple children at the same location. <correct_output>
A (corrected):
{ {
"title": "Add Rectangle methods", "title": "Add Rectangle methods",
"suggestions": [ "suggestions": [
@@ -68,11 +82,13 @@ A (corrected):
} }
] ]
} }
</correct_output>
User: <step_to_resolve>
What are the suggestions for the step: <step>Implement the 'Display' trait for the Rectangle struct</step> Implement the 'Display' trait for the Rectangle struct
</step_to_resolve>
A: <output>
{ {
"title": "Implement Display for Rectangle", "title": "Implement Display for Rectangle",
"suggestions": [ "suggestions": [
@@ -84,10 +100,11 @@ A:
} }
] ]
} }
</output>
Example 2: <example>
<workflow_context>
User: <message role="user">
```rs src/user.rs ```rs src/user.rs
struct User { struct User {
pub name: String, pub name: String,
@@ -105,13 +122,19 @@ impl User {
} }
} }
``` ```
</message>
<message role="assistant">
Certainly!
<step>Update the 'print_info' method to use formatted output</step> <step>Update the 'print_info' method to use formatted output</step>
<step>Remove the 'email' field from the User struct</step> <step>Remove the 'email' field from the User struct</step>
</message>
</workflow_context>
What are the suggestions for the step: <step>Update the 'print_info' method to use formatted output</step> <step_to_resolve>
Update the 'print_info' method to use formatted output
</step_to_resolve>
A: <output>
{ {
"title": "Use formatted output", "title": "Use formatted output",
"suggestions": [ "suggestions": [
@@ -123,11 +146,13 @@ A:
} }
] ]
} }
</output>
User: <step_to_resolve>
What are the suggestions for the step: <step>Remove the 'email' field from the User struct</step> Remove the 'email' field from the User struct
</step_to_resolve>
A: <output>
{ {
"title": "Remove email field", "title": "Remove email field",
"suggestions": [ "suggestions": [
@@ -138,10 +163,12 @@ A:
} }
] ]
} }
</output>
</example>
Example 3: <example>
<workflow_context>
User: <message role="user">
```rs src/vehicle.rs ```rs src/vehicle.rs
struct Vehicle { struct Vehicle {
make: String, make: String,
@@ -159,28 +186,36 @@ impl Vehicle {
} }
} }
``` ```
</message>
<message role="assistant">
<step>Add a 'use std::fmt;' statement at the beginning of the file</step> <step>Add a 'use std::fmt;' statement at the beginning of the file</step>
<step>Add a new method 'start_engine' in the Vehicle impl block</step> <step>Add a new method 'start_engine' in the Vehicle impl block</step>
</message>
</workflow_context>
What are the suggestions for the step: <step>Add a 'use std::fmt;' statement at the beginning of the file</step> <step_to_resolve>
Add a 'use std::fmt;' statement at the beginning of the file
</step_to_resolve>
A: <output>
{ {
"title": "Add use std::fmt statement", "title": "Add use std::fmt statement",
"suggestions": [ "suggestions": [
{ {
"kind": "PrependChild", "kind": "PrependChild",
"path": "src/vehicle.rs", "path": "src/vehicle.rs",
"symbol": "#imports",
"description": "Add 'use std::fmt' statement" "description": "Add 'use std::fmt' statement"
} }
] ]
} }
</output>
User: <step_to_resolve>
What are the suggestions for the step: <step>Add a new method 'start_engine' in the Vehicle impl block</step> Add a new method 'start_engine' in the Vehicle impl block
</step_to_resolve>
A: <output>
{ {
"title": "Add start_engine method", "title": "Add start_engine method",
"suggestions": [ "suggestions": [
@@ -192,10 +227,12 @@ A:
} }
] ]
} }
</output>
</example>
Example 4: <example>
<workflow_context>
User: <message role="user">
```rs src/employee.rs ```rs src/employee.rs
struct Employee { struct Employee {
name: String, name: String,
@@ -219,12 +256,18 @@ impl Employee {
} }
} }
``` ```
</message>
<message role="assistant">
<step>Make salary an f32</step> <step>Make salary an f32</step>
<step>Remove the 'department' field and update the 'print_details' method</step>
</message>
</workflow_context>
What are the suggestions for the step: <step>Make salary an f32</step> <step_to_resolve>
Make salary an f32
</step_to_resolve>
A (wrong): <incorrect_output reason="NEVER include suggestions on a parent symbol and one of its children in the same suggestions block.">
{ {
"title": "Change salary to f32", "title": "Change salary to f32",
"suggestions": [ "suggestions": [
@@ -242,10 +285,9 @@ A (wrong):
} }
] ]
} }
</incorrect_output>
This example demonstrates what not to do. `struct Employee salary` is a child of `struct Employee`. <correct_output>
A (corrected):
{ {
"title": "Change salary to f32", "title": "Change salary to f32",
"suggestions": [ "suggestions": [
@@ -257,11 +299,13 @@ A (corrected):
} }
] ]
} }
</correct_output>
User: <step_to_resolve>
What are the correct suggestions for the step: <step>Remove the 'department' field and update the 'print_details' method</step> Remove the 'department' field and update the 'print_details' method
</step_to_resolve>
A: <output>
{ {
"title": "Remove department", "title": "Remove department",
"suggestions": [ "suggestions": [
@@ -278,10 +322,12 @@ A:
} }
] ]
} }
</output>
</example>
Example 5: <example>
<workflow_context>
User: <message role="user">
```rs src/game.rs ```rs src/game.rs
struct Player { struct Player {
name: String, name: String,
@@ -305,10 +351,17 @@ impl Game {
} }
} }
``` ```
</message>
<message role="assistant">
<step>Add a 'level' field to Player and update the 'new' method</step> <step>Add a 'level' field to Player and update the 'new' method</step>
</message>
</workflow_context>
A: <step_to_resolve>
Add a 'level' field to Player and update the 'new' method
</step_to_resolve>
<output>
{ {
"title": "Add level field to Player", "title": "Add level field to Player",
"suggestions": [ "suggestions": [
@@ -326,10 +379,12 @@ A:
} }
] ]
} }
</output>
</example>
Example 6: <example>
<workflow_context>
User: <message role="user">
```rs src/config.rs ```rs src/config.rs
use std::collections::HashMap; use std::collections::HashMap;
@@ -343,16 +398,24 @@ impl Config {
} }
} }
``` ```
</message>
<message role="assistant">
<step>Add a 'load_from_file' method to Config and import necessary modules</step> <step>Add a 'load_from_file' method to Config and import necessary modules</step>
</message>
</workflow_context>
A: <step_to_resolve>
Add a 'load_from_file' method to Config and import necessary modules
</step_to_resolve>
<output>
{ {
"title": "Add load_from_file method", "title": "Add load_from_file method",
"suggestions": [ "suggestions": [
{ {
"kind": "PrependChild", "kind": "PrependChild",
"path": "src/config.rs", "path": "src/config.rs",
"symbol": "#imports",
"description": "Import std::fs and std::io modules" "description": "Import std::fs and std::io modules"
}, },
{ {
@@ -363,10 +426,12 @@ A:
} }
] ]
} }
</output>
</example>
Example 7: <example>
<workflow_context>
User: <message role="user">
```rs src/database.rs ```rs src/database.rs
pub(crate) struct Database { pub(crate) struct Database {
connection: Connection, connection: Connection,
@@ -383,10 +448,17 @@ impl Database {
} }
} }
``` ```
</message>
<message role="assistant">
<step>Add error handling to the 'query' method and create a custom error type</step> <step>Add error handling to the 'query' method and create a custom error type</step>
</message>
</workflow_context>
A: <step_to_resolve>
Add error handling to the 'query' method and create a custom error type
</step_to_resolve>
<output>
{ {
"title": "Add error handling to query", "title": "Add error handling to query",
"suggestions": [ "suggestions": [
@@ -409,5 +481,16 @@ A:
} }
] ]
} }
</output>
</example>
</examples>
Now generate the suggestions for the following step: Now generate the suggestions for the following step:
<workflow_context>
{{{workflow_context}}}
</workflow_context>
<step_to_resolve>
{{{step_to_resolve}}}
</step_to_resolve>


@@ -395,9 +395,9 @@
// The default model to use when creating new contexts. // The default model to use when creating new contexts.
"default_model": { "default_model": {
// The provider to use. // The provider to use.
"provider": "openai", "provider": "zed.dev",
// The model to use. // The model to use.
"model": "gpt-4o" "model": "claude-3-5-sonnet"
} }
}, },
// The settings for slash commands. // The settings for slash commands.
@@ -1010,5 +1010,16 @@
// ] // ]
// } // }
// ] // ]
"ssh_connections": null "ssh_connections": null,
// Configures the Context Server Protocol binaries
//
// Examples:
// {
// "id": "server-1",
// "executable": "/path",
// "args": ['arg1", "args2"]
// }
"experimental.context_servers": {
"servers": []
}
} }
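For reference, a filled-in `experimental.context_servers` entry following the commented example above might look like the following (a sketch; the `id`, `executable` path, and `args` values are placeholders):

```json
"experimental.context_servers": {
  "servers": [
    {
      "id": "server-1",
      "executable": "/path/to/context-server",
      "args": ["arg1", "arg2"]
    }
  ]
}
```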


@@ -33,5 +33,31 @@ services:
volumes: volumes:
- ./livekit.yaml:/livekit.yaml - ./livekit.yaml:/livekit.yaml
postgrest_app:
image: postgrest/postgrest
container_name: postgrest_app
ports:
- 8081:8081
environment:
PGRST_DB_URI: postgres://postgres@postgres:5432/zed
volumes:
- ./crates/collab/postgrest_app.conf:/etc/postgrest.conf
command: postgrest /etc/postgrest.conf
depends_on:
- postgres
postgrest_llm:
image: postgrest/postgrest
container_name: postgrest_llm
ports:
- 8082:8082
environment:
PGRST_DB_URI: postgres://postgres@postgres:5432/zed_llm
volumes:
- ./crates/collab/postgrest_llm.conf:/etc/postgrest.conf
command: postgrest /etc/postgrest.conf
depends_on:
- postgres
volumes: volumes:
postgres_data: postgres_data:


@@ -17,6 +17,7 @@ path = "src/anthropic.rs"
[dependencies] [dependencies]
anyhow.workspace = true anyhow.workspace = true
chrono.workspace = true
futures.workspace = true futures.workspace = true
http_client.workspace = true http_client.workspace = true
isahc.workspace = true isahc.workspace = true
@@ -25,6 +26,7 @@ serde.workspace = true
serde_json.workspace = true serde_json.workspace = true
strum.workspace = true strum.workspace = true
thiserror.workspace = true thiserror.workspace = true
util.workspace = true
[dev-dependencies] [dev-dependencies]
tokio.workspace = true tokio.workspace = true


@@ -1,19 +1,30 @@
mod supported_countries; mod supported_countries;
use anyhow::{anyhow, Context, Result}; use anyhow::{anyhow, Context, Result};
use chrono::{DateTime, Utc};
use futures::{io::BufReader, stream::BoxStream, AsyncBufReadExt, AsyncReadExt, Stream, StreamExt}; use futures::{io::BufReader, stream::BoxStream, AsyncBufReadExt, AsyncReadExt, Stream, StreamExt};
use http_client::{AsyncBody, HttpClient, Method, Request as HttpRequest}; use http_client::{AsyncBody, HttpClient, Method, Request as HttpRequest};
use isahc::config::Configurable; use isahc::config::Configurable;
use isahc::http::{HeaderMap, HeaderValue};
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use std::str::FromStr;
use std::time::Duration; use std::time::Duration;
use std::{pin::Pin, str::FromStr};
use strum::{EnumIter, EnumString}; use strum::{EnumIter, EnumString};
use thiserror::Error; use thiserror::Error;
use util::ResultExt as _;
pub use supported_countries::*; pub use supported_countries::*;
pub const ANTHROPIC_API_URL: &'static str = "https://api.anthropic.com"; pub const ANTHROPIC_API_URL: &'static str = "https://api.anthropic.com";
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq)]
pub struct AnthropicModelCacheConfiguration {
pub min_total_token: usize,
pub should_speculate: bool,
pub max_cache_anchors: usize,
}
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))] #[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, EnumIter)] #[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, EnumIter)]
pub enum Model { pub enum Model {
@@ -30,8 +41,13 @@ pub enum Model {
Custom { Custom {
name: String, name: String,
max_tokens: usize, max_tokens: usize,
/// The name displayed in the UI, such as in the assistant panel model dropdown menu.
display_name: Option<String>,
/// Override this model with a different Anthropic model for tool calls. /// Override this model with a different Anthropic model for tool calls.
tool_override: Option<String>, tool_override: Option<String>,
/// Indicates whether this custom model supports caching.
cache_configuration: Option<AnthropicModelCacheConfiguration>,
max_output_tokens: Option<u32>,
}, },
} }
@@ -66,7 +82,24 @@ impl Model {
             Self::Claude3Opus => "Claude 3 Opus",
             Self::Claude3Sonnet => "Claude 3 Sonnet",
             Self::Claude3Haiku => "Claude 3 Haiku",
-            Self::Custom { name, .. } => name,
+            Self::Custom {
+                name, display_name, ..
+            } => display_name.as_ref().unwrap_or(name),
+        }
+    }
+
+    pub fn cache_configuration(&self) -> Option<AnthropicModelCacheConfiguration> {
+        match self {
+            Self::Claude3_5Sonnet | Self::Claude3Haiku => Some(AnthropicModelCacheConfiguration {
+                min_total_token: 2_048,
+                should_speculate: true,
+                max_cache_anchors: 4,
+            }),
+            Self::Custom {
+                cache_configuration,
+                ..
+            } => cache_configuration.clone(),
+            _ => None,
         }
     }
@@ -80,6 +113,16 @@ impl Model {
         }
     }

+    pub fn max_output_tokens(&self) -> u32 {
+        match self {
+            Self::Claude3Opus | Self::Claude3Sonnet | Self::Claude3Haiku => 4_096,
+            Self::Claude3_5Sonnet => 8_192,
+            Self::Custom {
+                max_output_tokens, ..
+            } => max_output_tokens.unwrap_or(4_096),
+        }
+    }
+
     pub fn tool_model_id(&self) -> &str {
         if let Self::Custom {
             tool_override: Some(tool_override),
@@ -104,7 +147,10 @@ pub async fn complete(
         .method(Method::POST)
         .uri(uri)
         .header("Anthropic-Version", "2023-06-01")
-        .header("Anthropic-Beta", "tools-2024-04-04")
+        .header(
+            "Anthropic-Beta",
+            "tools-2024-04-04,prompt-caching-2024-07-31,max-tokens-3-5-sonnet-2024-07-15",
+        )
        .header("X-Api-Key", api_key)
        .header("Content-Type", "application/json");
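The new `Model` accessors follow a common pattern: named variants return fixed values, while `Custom` falls back to a default when the optional field is unset. A simplified, self-contained stand-in for that dispatch (names mirror the patch, but this is an illustration, not the crate's actual API):

```rust
// Simplified stand-in for the Model dispatch in the diff: a custom model
// falls back to its identifier when no display name is set, and to a
// default output-token cap when none is configured.
enum Model {
    Claude3_5Sonnet,
    Custom {
        name: String,
        display_name: Option<String>,
        max_output_tokens: Option<u32>,
    },
}

impl Model {
    fn display_name(&self) -> &str {
        match self {
            Model::Claude3_5Sonnet => "Claude 3.5 Sonnet",
            Model::Custom {
                name, display_name, ..
            } => display_name.as_deref().unwrap_or(name.as_str()),
        }
    }

    fn max_output_tokens(&self) -> u32 {
        match self {
            Model::Claude3_5Sonnet => 8_192,
            Model::Custom {
                max_output_tokens, ..
            } => max_output_tokens.unwrap_or(4_096),
        }
    }
}

fn main() {
    let custom = Model::Custom {
        name: "my-model".into(),
        display_name: None,
        max_output_tokens: None,
    };
    // Falls back to the identifier and the default cap.
    println!("{} {}", custom.display_name(), custom.max_output_tokens());
}
```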
@@ -152,6 +198,66 @@ pub async fn stream_completion(
     request: Request,
     low_speed_timeout: Option<Duration>,
 ) -> Result<BoxStream<'static, Result<Event, AnthropicError>>, AnthropicError> {
+    stream_completion_with_rate_limit_info(client, api_url, api_key, request, low_speed_timeout)
+        .await
+        .map(|output| output.0)
+}
+
+/// https://docs.anthropic.com/en/api/rate-limits#response-headers
+#[derive(Debug)]
+pub struct RateLimitInfo {
+    pub requests_limit: usize,
+    pub requests_remaining: usize,
+    pub requests_reset: DateTime<Utc>,
+    pub tokens_limit: usize,
+    pub tokens_remaining: usize,
+    pub tokens_reset: DateTime<Utc>,
+}
+
+impl RateLimitInfo {
+    fn from_headers(headers: &HeaderMap<HeaderValue>) -> Result<Self> {
+        let tokens_limit = get_header("anthropic-ratelimit-tokens-limit", headers)?.parse()?;
+        let requests_limit = get_header("anthropic-ratelimit-requests-limit", headers)?.parse()?;
+
+        let tokens_remaining =
+            get_header("anthropic-ratelimit-tokens-remaining", headers)?.parse()?;
+        let requests_remaining =
+            get_header("anthropic-ratelimit-requests-remaining", headers)?.parse()?;
+
+        let requests_reset = get_header("anthropic-ratelimit-requests-reset", headers)?;
+        let tokens_reset = get_header("anthropic-ratelimit-tokens-reset", headers)?;
+
+        let requests_reset = DateTime::parse_from_rfc3339(requests_reset)?.to_utc();
+        let tokens_reset = DateTime::parse_from_rfc3339(tokens_reset)?.to_utc();
+
+        Ok(Self {
+            requests_limit,
+            tokens_limit,
+            requests_remaining,
+            tokens_remaining,
+            requests_reset,
+            tokens_reset,
+        })
+    }
+}
+
+fn get_header<'a>(key: &str, headers: &'a HeaderMap) -> Result<&'a str, anyhow::Error> {
+    Ok(headers
+        .get(key)
+        .ok_or_else(|| anyhow!("missing header `{key}`"))?
+        .to_str()?)
+}
+
+pub async fn stream_completion_with_rate_limit_info(
+    client: &dyn HttpClient,
+    api_url: &str,
+    api_key: &str,
+    request: Request,
+    low_speed_timeout: Option<Duration>,
+) -> Result<
+    (
+        BoxStream<'static, Result<Event, AnthropicError>>,
+        Option<RateLimitInfo>,
+    ),
+    AnthropicError,
+> {
     let request = StreamingRequest {
         base: request,
         stream: true,
@@ -161,7 +267,10 @@ pub async fn stream_completion(
         .method(Method::POST)
         .uri(uri)
         .header("Anthropic-Version", "2023-06-01")
-        .header("Anthropic-Beta", "tools-2024-04-04")
+        .header(
+            "Anthropic-Beta",
+            "tools-2024-04-04,prompt-caching-2024-07-31,max-tokens-3-5-sonnet-2024-07-15",
+        )
         .header("X-Api-Key", api_key)
         .header("Content-Type", "application/json");

     if let Some(low_speed_timeout) = low_speed_timeout {
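The `RateLimitInfo::from_headers` logic above reads the documented `anthropic-ratelimit-*` response headers and parses their values. A runnable sketch of the same parsing, using a plain `HashMap` in place of `http::HeaderMap` so it stands alone (only the two request-count headers are shown; `parse_limits` is a simplified stand-in, not the crate's function):

```rust
use std::collections::HashMap;

// Mirrors the get_header helper in the diff, but over a HashMap stand-in.
fn get_header<'a>(key: &str, headers: &'a HashMap<String, String>) -> Result<&'a str, String> {
    headers
        .get(key)
        .map(|v| v.as_str())
        .ok_or_else(|| format!("missing header `{key}`"))
}

// Parses (limit, remaining) from the rate-limit response headers.
fn parse_limits(headers: &HashMap<String, String>) -> Result<(usize, usize), String> {
    let limit = get_header("anthropic-ratelimit-requests-limit", headers)?
        .parse()
        .map_err(|e| format!("{e}"))?;
    let remaining = get_header("anthropic-ratelimit-requests-remaining", headers)?
        .parse()
        .map_err(|e| format!("{e}"))?;
    Ok((limit, remaining))
}

fn main() {
    let mut headers = HashMap::new();
    headers.insert(
        "anthropic-ratelimit-requests-limit".to_string(),
        "50".to_string(),
    );
    headers.insert(
        "anthropic-ratelimit-requests-remaining".to_string(),
        "49".to_string(),
    );
    println!("{:?}", parse_limits(&headers)); // Ok((50, 49))
}
```

Note that in the diff a parse failure is only logged (`rate_limits.log_err()`), so a missing header never fails the streaming request itself.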
@@ -178,8 +287,9 @@ pub async fn stream_completion(
         .await
         .context("failed to send request to Anthropic")?;
     if response.status().is_success() {
+        let rate_limits = RateLimitInfo::from_headers(response.headers());
         let reader = BufReader::new(response.into_body());
-        Ok(reader
+        let stream = reader
             .lines()
             .filter_map(|line| async move {
                 match line {
@@ -193,7 +303,8 @@ pub async fn stream_completion(
                     Err(error) => Some(Err(AnthropicError::Other(anyhow!(error)))),
                 }
             })
-            .boxed())
+            .boxed();
+        Ok((stream, rate_limits.log_err()))
     } else {
         let mut body = Vec::new();
         response
@@ -226,7 +337,7 @@ pub fn extract_text_from_events(
         match response {
             Ok(response) => match response {
                 Event::ContentBlockStart { content_block, .. } => match content_block {
-                    Content::Text { text } => Some(Ok(text)),
+                    Content::Text { text, .. } => Some(Ok(text)),
                     _ => None,
                 },
                 Event::ContentBlockDelta { delta, .. } => match delta {
@@ -241,13 +352,69 @@ pub fn extract_text_from_events(
     })
 }
+pub async fn extract_tool_args_from_events(
+    tool_name: String,
+    mut events: Pin<Box<dyn Send + Stream<Item = Result<Event>>>>,
+) -> Result<impl Send + Stream<Item = Result<String>>> {
+    let mut tool_use_index = None;
+    while let Some(event) = events.next().await {
+        if let Event::ContentBlockStart {
+            index,
+            content_block,
+        } = event?
+        {
+            if let Content::ToolUse { name, .. } = content_block {
+                if name == tool_name {
+                    tool_use_index = Some(index);
+                    break;
+                }
+            }
+        }
+    }
+
+    let Some(tool_use_index) = tool_use_index else {
+        return Err(anyhow!("tool not used"));
+    };
+
+    Ok(events.filter_map(move |event| {
+        let result = match event {
+            Err(error) => Some(Err(error)),
+            Ok(Event::ContentBlockDelta { index, delta }) => match delta {
+                ContentDelta::TextDelta { .. } => None,
+                ContentDelta::InputJsonDelta { partial_json } => {
+                    if index == tool_use_index {
+                        Some(Ok(partial_json))
+                    } else {
+                        None
+                    }
+                }
+            },
+            _ => None,
+        };
+        async move { result }
+    }))
+}
+
+#[derive(Debug, Serialize, Deserialize, Copy, Clone)]
+#[serde(rename_all = "lowercase")]
+pub enum CacheControlType {
+    Ephemeral,
+}
+
+#[derive(Debug, Serialize, Deserialize, Copy, Clone)]
+pub struct CacheControl {
+    #[serde(rename = "type")]
+    pub cache_type: CacheControlType,
+}
+
 #[derive(Debug, Serialize, Deserialize)]
 pub struct Message {
     pub role: Role,
     pub content: Vec<Content>,
 }

-#[derive(Debug, Serialize, Deserialize)]
+#[derive(Debug, Serialize, Deserialize, Eq, PartialEq, Hash)]
 #[serde(rename_all = "lowercase")]
 pub enum Role {
     User,
@@ -258,19 +425,31 @@ pub enum Role {
 #[serde(tag = "type")]
 pub enum Content {
     #[serde(rename = "text")]
-    Text { text: String },
+    Text {
+        text: String,
+        #[serde(skip_serializing_if = "Option::is_none")]
+        cache_control: Option<CacheControl>,
+    },
     #[serde(rename = "image")]
-    Image { source: ImageSource },
+    Image {
+        source: ImageSource,
+        #[serde(skip_serializing_if = "Option::is_none")]
+        cache_control: Option<CacheControl>,
+    },
     #[serde(rename = "tool_use")]
     ToolUse {
         id: String,
         name: String,
         input: serde_json::Value,
+        #[serde(skip_serializing_if = "Option::is_none")]
+        cache_control: Option<CacheControl>,
     },
     #[serde(rename = "tool_result")]
     ToolResult {
         tool_use_id: String,
         content: String,
+        #[serde(skip_serializing_if = "Option::is_none")]
+        cache_control: Option<CacheControl>,
     },
 }
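The `extract_tool_args_from_events` function above filters a stream so that, once the tool-use block's index is known, only `InputJsonDelta` events for that index contribute to the accumulated JSON arguments. A synchronous sketch of that filtering over a slice (the `Delta` enum is a simplified stand-in for the crate's streaming types, not its real API):

```rust
// Simplified stand-in for the streaming content deltas in the diff.
#[derive(Debug)]
enum Delta {
    TextDelta(String),
    InputJsonDelta(String),
}

// Keep only JSON fragments belonging to the tool-use content block and
// concatenate them into the tool's argument string.
fn collect_tool_args(events: &[(usize, Delta)], tool_use_index: usize) -> String {
    events
        .iter()
        .filter_map(|(index, delta)| match delta {
            Delta::InputJsonDelta(partial_json) if *index == tool_use_index => {
                Some(partial_json.as_str())
            }
            _ => None,
        })
        .collect()
}

fn main() {
    let events = vec![
        (0, Delta::TextDelta("thinking...".to_string())),
        (1, Delta::InputJsonDelta("{\"city\":".to_string())),
        (0, Delta::TextDelta("more text".to_string())),
        (1, Delta::InputJsonDelta("\"Paris\"}".to_string())),
    ];
    println!("{}", collect_tool_args(&events, 1)); // {"city":"Paris"}
}
```

The async version in the diff does the same thing with `StreamExt::filter_map`, and additionally errors with "tool not used" if no matching `ContentBlockStart` ever arrives.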


@@ -33,11 +33,13 @@ clock.workspace = true
 collections.workspace = true
 command_palette_hooks.workspace = true
 db.workspace = true
+context_servers.workspace = true
 editor.workspace = true
 feature_flags.workspace = true
 fs.workspace = true
 futures.workspace = true
 fuzzy.workspace = true
+globset.workspace = true
 gpui.workspace = true
 handlebars.workspace = true
 heed.workspace = true
@@ -67,6 +69,7 @@ serde.workspace = true
 serde_json.workspace = true
 settings.workspace = true
 smallvec.workspace = true
+similar.workspace = true
 smol.workspace = true
 telemetry_events.workspace = true
 terminal.workspace = true


@@ -3,16 +3,17 @@
 pub mod assistant_panel;
 pub mod assistant_settings;
 mod context;
-pub(crate) mod context_inspector;
 pub mod context_store;
 mod inline_assistant;
 mod model_selector;
 mod prompt_library;
 mod prompts;
 mod slash_command;
+pub(crate) mod slash_command_picker;
 pub mod slash_command_settings;
 mod streaming_diff;
 mod terminal_inline_assistant;
+mod workflow;

 pub use assistant_panel::{AssistantPanel, AssistantPanelEvent};
 use assistant_settings::AssistantSettings;
@@ -20,9 +21,11 @@ use assistant_slash_command::SlashCommandRegistry;
 use client::{proto, Client};
 use command_palette_hooks::CommandPaletteFilter;
 pub use context::*;
+use context_servers::ContextServerRegistry;
 pub use context_store::*;
 use feature_flags::FeatureFlagAppExt;
 use fs::Fs;
+use gpui::Context as _;
 use gpui::{actions, impl_actions, AppContext, Global, SharedString, UpdateGlobal};
 use indexed_docs::IndexedDocsRegistry;
 pub(crate) use inline_assistant::*;
@@ -31,18 +34,19 @@ use language_model::{
 };
 pub(crate) use model_selector::*;
 pub use prompts::PromptBuilder;
-use prompts::PromptOverrideContext;
+use prompts::PromptLoadingParams;
 use semantic_index::{CloudEmbeddingProvider, SemanticIndex};
 use serde::{Deserialize, Serialize};
 use settings::{update_settings_file, Settings, SettingsStore};
 use slash_command::{
-    default_command, diagnostics_command, docs_command, fetch_command, file_command, now_command,
-    project_command, prompt_command, search_command, symbols_command, tabs_command,
-    terminal_command, workflow_command,
+    context_server_command, default_command, diagnostics_command, docs_command, fetch_command,
+    file_command, now_command, project_command, prompt_command, search_command, symbols_command,
+    tab_command, terminal_command, workflow_command,
 };
 use std::sync::Arc;
 pub(crate) use streaming_diff::*;
 use util::ResultExt;
+pub use workflow::*;

 use crate::slash_command_settings::SlashCommandSettings;
@@ -56,16 +60,14 @@ actions!(
         InsertIntoEditor,
         ToggleFocus,
         InsertActivePrompt,
-        ShowConfiguration,
         DeployHistory,
         DeployPromptLibrary,
         ConfirmCommand,
         ToggleModelSelector,
-        DebugWorkflowSteps
     ]
 );

-const DEFAULT_CONTEXT_LINES: usize = 20;
+const DEFAULT_CONTEXT_LINES: usize = 50;

 #[derive(Clone, Default, Deserialize, PartialEq)]
 pub struct InlineAssist {
@@ -182,7 +184,7 @@ impl Assistant {
 pub fn init(
     fs: Arc<dyn Fs>,
     client: Arc<Client>,
-    dev_mode: bool,
+    stdout_is_a_pty: bool,
     cx: &mut AppContext,
 ) -> Arc<PromptBuilder> {
     cx.set_global(Assistant::default());
@@ -219,10 +221,13 @@ pub fn init(
     init_language_model_settings(cx);
     assistant_slash_command::init(cx);
     assistant_panel::init(cx);
+    context_servers::init(cx);

-    let prompt_builder = prompts::PromptBuilder::new(Some(PromptOverrideContext {
-        dev_mode,
+    let prompt_builder = prompts::PromptBuilder::new(Some(PromptLoadingParams {
         fs: fs.clone(),
+        repo_path: stdout_is_a_pty
+            .then(|| std::env::current_dir().log_err())
+            .flatten(),
         cx,
     }))
     .log_err()
@@ -259,9 +264,69 @@ pub fn init(
     })
     .detach();

+    register_context_server_handlers(cx);
+
     prompt_builder
 }
+fn register_context_server_handlers(cx: &mut AppContext) {
+    cx.subscribe(
+        &context_servers::manager::ContextServerManager::global(cx),
+        |manager, event, cx| match event {
+            context_servers::manager::Event::ServerStarted { server_id } => {
+                cx.update_model(
+                    &manager,
+                    |manager: &mut context_servers::manager::ContextServerManager, cx| {
+                        let slash_command_registry = SlashCommandRegistry::global(cx);
+                        let context_server_registry = ContextServerRegistry::global(cx);
+                        if let Some(server) = manager.get_server(server_id) {
+                            cx.spawn(|_, _| async move {
+                                let Some(protocol) = server.client.read().clone() else {
+                                    return;
+                                };
+
+                                if let Some(prompts) = protocol.list_prompts().await.log_err() {
+                                    for prompt in prompts
+                                        .into_iter()
+                                        .filter(context_server_command::acceptable_prompt)
+                                    {
+                                        log::info!(
+                                            "registering context server command: {:?}",
+                                            prompt.name
+                                        );
+                                        context_server_registry.register_command(
+                                            server.id.clone(),
+                                            prompt.name.as_str(),
+                                        );
+                                        slash_command_registry.register_command(
+                                            context_server_command::ContextServerSlashCommand::new(
+                                                &server, prompt,
+                                            ),
+                                            true,
+                                        );
+                                    }
+                                }
+                            })
+                            .detach();
+                        }
+                    },
+                );
+            }
+            context_servers::manager::Event::ServerStopped { server_id } => {
+                let slash_command_registry = SlashCommandRegistry::global(cx);
+                let context_server_registry = ContextServerRegistry::global(cx);
+                if let Some(commands) = context_server_registry.get_commands(server_id) {
+                    for command_name in commands {
+                        slash_command_registry.unregister_command_by_name(&command_name);
+                        context_server_registry.unregister_command(&server_id, &command_name);
+                    }
+                }
+            }
+        },
+    )
+    .detach();
+}
+
 fn init_language_model_settings(cx: &mut AppContext) {
     update_active_language_model_from_settings(cx);
@@ -294,7 +359,7 @@ fn register_slash_commands(prompt_builder: Option<Arc<PromptBuilder>>, cx: &mut
     let slash_command_registry = SlashCommandRegistry::global(cx);
     slash_command_registry.register_command(file_command::FileSlashCommand, true);
     slash_command_registry.register_command(symbols_command::OutlineSlashCommand, true);
-    slash_command_registry.register_command(tabs_command::TabsSlashCommand, true);
+    slash_command_registry.register_command(tab_command::TabSlashCommand, true);
     slash_command_registry.register_command(project_command::ProjectSlashCommand, true);
     slash_command_registry.register_command(prompt_command::PromptSlashCommand, true);
     slash_command_registry.register_command(default_command::DefaultSlashCommand, false);

File diff suppressed because it is too large


@@ -543,8 +543,8 @@ mod tests {
             assert_eq!(
                 AssistantSettings::get_global(cx).default_model,
                 LanguageModelSelection {
-                    provider: "openai".into(),
-                    model: "gpt-4o".into(),
+                    provider: "zed.dev".into(),
+                    model: "claude-3-5-sonnet".into(),
                 }
             );
         });

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,219 +0,0 @@
use std::{ops::Range, sync::Arc};
use collections::{HashMap, HashSet};
use editor::{
display_map::{BlockDisposition, BlockProperties, BlockStyle, CustomBlockId},
Editor,
};
use gpui::{AppContext, Model, View};
use text::{Bias, ToOffset, ToPoint};
use ui::{
div, h_flex, px, Color, Element as _, ParentElement as _, Styled, ViewContext, WindowContext,
};
use crate::{Context, ResolvedWorkflowStep, WorkflowSuggestion};
type StepRange = Range<language::Anchor>;
struct DebugInfo {
range: Range<editor::Anchor>,
block_id: CustomBlockId,
}
pub(crate) struct ContextInspector {
active_debug_views: HashMap<Range<language::Anchor>, DebugInfo>,
context: Model<Context>,
editor: View<Editor>,
}
impl ContextInspector {
pub(crate) fn new(editor: View<Editor>, context: Model<Context>) -> Self {
Self {
editor,
context,
active_debug_views: Default::default(),
}
}
pub(crate) fn is_active(&self, range: &StepRange) -> bool {
self.active_debug_views.contains_key(range)
}
pub(crate) fn refresh(&mut self, range: &StepRange, cx: &mut WindowContext<'_>) {
if self.deactivate_for(range, cx) {
self.activate_for_step(range.clone(), cx);
}
}
fn crease_content(
context: &Model<Context>,
range: StepRange,
cx: &mut AppContext,
) -> Option<Arc<str>> {
use std::fmt::Write;
let step = context.read(cx).workflow_step_for_range(range)?;
let mut output = String::from("\n\n");
match &step.status {
crate::WorkflowStepStatus::Resolved(ResolvedWorkflowStep { title, suggestions }) => {
writeln!(output, "Resolution:").ok()?;
writeln!(output, " {title:?}").ok()?;
for (buffer, suggestion_groups) in suggestions {
let buffer = buffer.read(cx);
let buffer_path = buffer
.file()
.and_then(|file| file.path().to_str())
.unwrap_or("untitled");
let snapshot = buffer.text_snapshot();
writeln!(output, " {buffer_path}:").ok()?;
for group in suggestion_groups {
for suggestion in &group.suggestions {
pretty_print_workflow_suggestion(&mut output, suggestion, &snapshot);
}
}
}
}
crate::WorkflowStepStatus::Pending(_) => {
writeln!(output, "Resolution: Pending").ok()?;
}
crate::WorkflowStepStatus::Error(error) => {
writeln!(output, "Resolution: Error").ok()?;
writeln!(output, "{error:?}").ok()?;
}
}
Some(output.into())
}
pub(crate) fn activate_for_step(&mut self, range: StepRange, cx: &mut WindowContext<'_>) {
let text = Self::crease_content(&self.context, range.clone(), cx)
.unwrap_or_else(|| Arc::from("Error fetching debug info"));
self.editor.update(cx, |editor, cx| {
let buffer = editor.buffer().read(cx).as_singleton()?;
let snapshot = buffer.read(cx).text_snapshot();
let start_offset = range.end.to_offset(&snapshot) + 1;
let start_offset = snapshot.clip_offset(start_offset, Bias::Right);
let text_len = text.len();
buffer.update(cx, |this, cx| {
this.edit([(start_offset..start_offset, text)], None, cx);
});
let end_offset = start_offset + text_len;
let multibuffer_snapshot = editor.buffer().read(cx).snapshot(cx);
let anchor_before = multibuffer_snapshot.anchor_after(start_offset);
let anchor_after = multibuffer_snapshot.anchor_before(end_offset);
let block_id = editor
.insert_blocks(
[BlockProperties {
position: anchor_after,
height: 0,
style: BlockStyle::Sticky,
render: Box::new(move |cx| {
div()
.w_full()
.px(cx.gutter_dimensions.full_width())
.child(h_flex().h(px(1.)).bg(Color::Warning.color(cx)))
.into_any()
}),
disposition: BlockDisposition::Below,
priority: 0,
}],
None,
cx,
)
.into_iter()
.next()?;
let info = DebugInfo {
range: anchor_before..anchor_after,
block_id,
};
self.active_debug_views.insert(range, info);
Some(())
});
}
fn deactivate_impl(editor: &mut Editor, debug_data: DebugInfo, cx: &mut ViewContext<Editor>) {
editor.remove_blocks(HashSet::from_iter([debug_data.block_id]), None, cx);
editor.edit([(debug_data.range, Arc::<str>::default())], cx)
}
pub(crate) fn deactivate_for(&mut self, range: &StepRange, cx: &mut WindowContext<'_>) -> bool {
if let Some(debug_data) = self.active_debug_views.remove(range) {
self.editor.update(cx, |this, cx| {
Self::deactivate_impl(this, debug_data, cx);
});
true
} else {
false
}
}
pub(crate) fn deactivate(&mut self, cx: &mut WindowContext<'_>) {
let steps_to_disable = std::mem::take(&mut self.active_debug_views);
self.editor.update(cx, move |editor, cx| {
for (_, debug_data) in steps_to_disable {
Self::deactivate_impl(editor, debug_data, cx);
}
});
}
}
fn pretty_print_anchor(
out: &mut String,
anchor: &language::Anchor,
snapshot: &text::BufferSnapshot,
) {
use std::fmt::Write;
let point = anchor.to_point(snapshot);
write!(out, "{}:{}", point.row, point.column).ok();
}
fn pretty_print_range(
out: &mut String,
range: &Range<language::Anchor>,
snapshot: &text::BufferSnapshot,
) {
use std::fmt::Write;
write!(out, " Range: ").ok();
pretty_print_anchor(out, &range.start, snapshot);
write!(out, "..").ok();
pretty_print_anchor(out, &range.end, snapshot);
}
fn pretty_print_workflow_suggestion(
out: &mut String,
suggestion: &WorkflowSuggestion,
snapshot: &text::BufferSnapshot,
) {
use std::fmt::Write;
let (range, description, position) = match suggestion {
WorkflowSuggestion::Update { range, description } => (Some(range), Some(description), None),
WorkflowSuggestion::CreateFile { description } => (None, Some(description), None),
WorkflowSuggestion::AppendChild {
position,
description,
}
| WorkflowSuggestion::InsertSiblingBefore {
position,
description,
}
| WorkflowSuggestion::InsertSiblingAfter {
position,
description,
}
| WorkflowSuggestion::PrependChild {
position,
description,
} => (None, Some(description), Some(position)),
WorkflowSuggestion::Delete { range } => (Some(range), None, None),
};
if let Some(description) = description {
writeln!(out, " Description: {description}").ok();
}
if let Some(range) = range {
pretty_print_range(out, range, snapshot);
}
if let Some(position) = position {
write!(out, " Position: ").ok();
pretty_print_anchor(out, position, snapshot);
write!(out, "\n").ok();
}
write!(out, "\n").ok();
}

File diff suppressed because it is too large


@@ -1,15 +1,16 @@
 use feature_flags::ZedPro;
+use gpui::Action;
 use gpui::DismissEvent;
 use language_model::{LanguageModel, LanguageModelAvailability, LanguageModelRegistry};
 use proto::Plan;
+use workspace::ShowConfiguration;

 use std::sync::Arc;
 use ui::ListItemSpacing;

 use crate::assistant_settings::AssistantSettings;
-use crate::ShowConfiguration;
 use fs::Fs;
-use gpui::Action;
 use gpui::SharedString;
 use gpui::Task;
@@ -36,7 +37,7 @@ pub struct ModelPickerDelegate {

 #[derive(Clone)]
 struct ModelInfo {
     model: Arc<dyn LanguageModel>,
-    provider_icon: IconName,
+    icon: IconName,
     availability: LanguageModelAvailability,
     is_selected: bool,
 }
@@ -149,6 +150,8 @@ impl PickerDelegate for ModelPickerDelegate {
         use feature_flags::FeatureFlagAppExt;
         let model_info = self.filtered_models.get(ix)?;
         let show_badges = cx.has_flag::<ZedPro>();
+        let provider_name: String = model_info.model.provider_name().0.into();
+
         Some(
             ListItem::new(ix)
                 .inset(true)
@@ -156,7 +159,7 @@ impl PickerDelegate for ModelPickerDelegate {
                 .selected(selected)
                 .start_slot(
                     div().pr_1().child(
-                        Icon::new(model_info.provider_icon)
+                        Icon::new(model_info.icon)
                             .color(Color::Muted)
                             .size(IconSize::Medium),
                     ),
@@ -166,11 +169,16 @@ impl PickerDelegate for ModelPickerDelegate {
                     .w_full()
                     .justify_between()
                     .font_buffer(cx)
-                    .min_w(px(200.))
+                    .min_w(px(240.))
                     .child(
                         h_flex()
                             .gap_2()
                             .child(Label::new(model_info.model.name().0.clone()))
+                            .child(
+                                Label::new(provider_name)
+                                    .size(LabelSize::XSmall)
+                                    .color(Color::Muted),
+                            )
                             .children(match model_info.availability {
                                 LanguageModelAvailability::Public => None,
                                 LanguageModelAvailability::RequiresPlan(Plan::Free) => None,
@@ -261,16 +269,17 @@ impl<T: PopoverTrigger> RenderOnce for ModelSelector<T> {
                 .iter()
                 .flat_map(|provider| {
                     let provider_id = provider.id();
-                    let provider_icon = provider.icon();
+                    let icon = provider.icon();
                     let selected_model = selected_model.clone();
                     let selected_provider = selected_provider.clone();

                     provider.provided_models(cx).into_iter().map(move |model| {
                         let model = model.clone();
+                        let icon = model.icon().unwrap_or(icon);
                         ModelInfo {
                             model: model.clone(),
-                            provider_icon,
+                            icon,
                             availability: model.availability(),
                             is_selected: selected_model.as_ref() == Some(&model.id())
                                 && selected_provider.as_ref() == Some(&provider_id),
@@ -295,5 +304,6 @@ impl<T: PopoverTrigger> RenderOnce for ModelSelector<T> {
             .menu(move |_cx| Some(picker_view.clone()))
             .trigger(self.trigger)
             .attach(gpui::AnchorCorner::BottomLeft)
+            .when_some(self.handle, |menu, handle| menu.with_handle(handle))
     }
 }


@@ -11,8 +11,8 @@ use futures::{
 };
 use fuzzy::StringMatchCandidate;
 use gpui::{
-    actions, point, size, transparent_black, AppContext, BackgroundExecutor, Bounds, EventEmitter,
-    Global, HighlightStyle, PromptLevel, ReadGlobal, Subscription, Task, TextStyle,
+    actions, point, size, transparent_black, Action, AppContext, BackgroundExecutor, Bounds,
+    EventEmitter, Global, HighlightStyle, PromptLevel, ReadGlobal, Subscription, Task, TextStyle,
     TitlebarOptions, UpdateGlobal, View, WindowBounds, WindowHandle, WindowOptions,
 };
 use heed::{
@@ -38,7 +38,7 @@ use std::{
 use text::LineEnding;
 use theme::ThemeSettings;
 use ui::{
-    div, prelude::*, IconButtonShape, ListItem, ListItemSpacing, ParentElement, Render,
+    div, prelude::*, IconButtonShape, KeyBinding, ListItem, ListItemSpacing, ParentElement, Render,
     SharedString, Styled, Tooltip, ViewContext, VisualContext,
 };
 use util::{ResultExt, TryFutureExt};
@@ -100,7 +100,7 @@ pub fn open_prompt_library(
         WindowOptions {
             titlebar: Some(TitlebarOptions {
                 title: Some("Prompt Library".into()),
-                appears_transparent: true,
+                appears_transparent: !cfg!(windows),
                 traffic_light_position: Some(point(px(9.0), px(9.0))),
             }),
             window_bounds: Some(WindowBounds::Windowed(bounds)),
@@ -155,6 +155,14 @@ impl PickerDelegate for PromptPickerDelegate {
         self.matches.len()
     }

+    fn no_matches_text(&self, _cx: &mut WindowContext) -> SharedString {
+        if self.store.prompt_count() == 0 {
+            "No prompts.".into()
+        } else {
+            "No prompts found matching your search.".into()
+        }
+    }
+
     fn selected_index(&self) -> usize {
         self.selected_index
     }
@@ -486,7 +494,10 @@ impl PromptLibrary {
             let mut editor = Editor::auto_width(cx);
             editor.set_placeholder_text("Untitled", cx);
             editor.set_text(prompt_metadata.title.unwrap_or_default(), cx);
-            editor.set_read_only(prompt_id.is_built_in());
+            if prompt_id.is_built_in() {
+                editor.set_read_only(true);
+                editor.set_show_inline_completions(false);
+            }
             editor
         });
         let body_editor = cx.new_view(|cx| {
@@ -498,7 +509,10 @@ impl PromptLibrary {
             });

             let mut editor = Editor::for_buffer(buffer, None, cx);
-            editor.set_read_only(prompt_id.is_built_in());
+            if prompt_id.is_built_in() {
+                editor.set_read_only(true);
+                editor.set_show_inline_completions(false);
+            }
             editor.set_soft_wrap_mode(SoftWrap::EditorWidth, cx);
             editor.set_show_gutter(false, cx);
             editor.set_show_wrap_guides(false, cx);
@@ -776,6 +790,7 @@ impl PromptLibrary {
                 messages: vec![LanguageModelRequestMessage {
                     role: Role::System,
                     content: vec![body.to_string().into()],
+                    cache: false,
                 }],
                 stop: Vec::new(),
                 temperature: 1.,
@@ -1094,7 +1109,55 @@ impl Render for PromptLibrary {
             .font(ui_font)
             .text_color(theme.colors().text)
             .child(self.render_prompt_list(cx))
-            .child(self.render_active_prompt(cx))
+            .map(|el| {
+                if self.store.prompt_count() == 0 {
+                    el.child(
+                        v_flex()
+                            .w_2_3()
+                            .h_full()
+                            .items_center()
+                            .justify_center()
+                            .gap_4()
+                            .bg(cx.theme().colors().editor_background)
+                            .child(
+                                h_flex()
+                                    .gap_2()
+                                    .child(
+                                        Icon::new(IconName::Book)
+                                            .size(IconSize::Medium)
+                                            .color(Color::Muted),
+                                    )
+                                    .child(
+                                        Label::new("No prompts yet")
+                                            .size(LabelSize::Large)
+                                            .color(Color::Muted),
+                                    ),
+                            )
+                            .child(
+                                h_flex()
+                                    .child(h_flex())
+                                    .child(
+                                        v_flex()
+                                            .gap_1()
+                                            .child(Label::new("Create your first prompt:"))
+                                            .child(
+                                                Button::new("create-prompt", "New Prompt")
+                                                    .full_width()
+                                                    .key_binding(KeyBinding::for_action(
+                                                        &NewPrompt, cx,
+                                                    ))
+                                                    .on_click(|_, cx| {
+                                                        cx.dispatch_action(NewPrompt.boxed_clone())
+                                                    }),
+                                            ),
+                                    )
+                                    .child(h_flex()),
+                            ),
+                    )
+                } else {
+                    el.child(self.render_active_prompt(cx))
+                }
+            })
     }
 }
@@ -1342,6 +1405,11 @@ impl PromptStore {
}) })
} }
/// Returns the number of prompts in the store.
fn prompt_count(&self) -> usize {
self.metadata_cache.read().metadata.len()
}
fn metadata(&self, id: PromptId) -> Option<PromptMetadata> { fn metadata(&self, id: PromptId) -> Option<PromptMetadata> {
self.metadata_cache.read().metadata_by_id.get(&id).cloned() self.metadata_cache.read().metadata_by_id.get(&id).cloned()
} }
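The `.map(|el| ...)` call above is how a gpui element chain branches: the closure receives the builder so far and returns either the empty-state child or the active prompt. A minimal sketch of that conditional-builder pattern on a toy type (`Panel`, `child`, and `map` here are illustrative stand-ins, not gpui APIs):

```rust
// A toy builder mirroring the conditional-rendering pattern above.
// `Panel`, `child`, and `map` are illustrative, not gpui APIs.
#[derive(Debug, Default)]
struct Panel {
    children: Vec<String>,
}

impl Panel {
    fn child(mut self, label: &str) -> Self {
        self.children.push(label.to_string());
        self
    }

    // Hand the builder to a closure so the fluent chain can branch.
    fn map(self, f: impl FnOnce(Self) -> Self) -> Self {
        f(self)
    }
}

fn render(prompt_count: usize) -> Panel {
    Panel::default().child("prompt-list").map(|el| {
        if prompt_count == 0 {
            el.child("empty-state")
        } else {
            el.child("active-prompt")
        }
    })
}

fn main() {
    assert_eq!(render(0).children, vec!["prompt-list", "empty-state"]);
    assert_eq!(render(3).children, vec!["prompt-list", "active-prompt"]);
    println!("ok");
}
```

The advantage over an `if` around the whole chain is that the shared prefix (`.child(self.render_prompt_list(cx))` in the real code) is written once.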

==== next file ====

@@ -1,24 +1,24 @@
+use anyhow::Result;
 use assets::Assets;
 use fs::Fs;
 use futures::StreamExt;
-use handlebars::{Handlebars, RenderError, TemplateError};
+use gpui::AssetSource;
+use handlebars::{Handlebars, RenderError};
 use language::BufferSnapshot;
 use parking_lot::Mutex;
 use serde::Serialize;
-use std::{ops::Range, sync::Arc, time::Duration};
+use std::{ops::Range, path::PathBuf, sync::Arc, time::Duration};
 use util::ResultExt;

 #[derive(Serialize)]
 pub struct ContentPromptContext {
     pub content_type: String,
     pub language_name: Option<String>,
+    pub is_insert: bool,
     pub is_truncated: bool,
     pub document_content: String,
     pub user_prompt: String,
-    pub rewrite_section: String,
-    pub rewrite_section_with_selections: String,
-    pub has_insertion: bool,
-    pub has_replacement: bool,
+    pub rewrite_section: Option<String>,
 }

 #[derive(Serialize)]
@@ -31,128 +31,171 @@ pub struct TerminalAssistantPromptContext {
     pub user_prompt: String,
 }

+/// Context required to generate a workflow step resolution prompt.
+#[derive(Debug, Serialize)]
+pub struct StepResolutionContext {
+    /// The full context, including <step>...</step> tags
+    pub workflow_context: String,
+    /// The text of the specific step from the context to resolve
+    pub step_to_resolve: String,
+}
+
+pub struct PromptLoadingParams<'a> {
+    pub fs: Arc<dyn Fs>,
+    pub repo_path: Option<PathBuf>,
+    pub cx: &'a gpui::AppContext,
+}
+
 pub struct PromptBuilder {
     handlebars: Arc<Mutex<Handlebars<'static>>>,
 }

-pub struct PromptOverrideContext<'a> {
-    pub dev_mode: bool,
-    pub fs: Arc<dyn Fs>,
-    pub cx: &'a mut gpui::AppContext,
-}
-
 impl PromptBuilder {
-    pub fn new(override_cx: Option<PromptOverrideContext>) -> Result<Self, Box<TemplateError>> {
+    pub fn new(loading_params: Option<PromptLoadingParams>) -> Result<Self> {
         let mut handlebars = Handlebars::new();
-        Self::register_templates(&mut handlebars)?;
+        Self::register_built_in_templates(&mut handlebars)?;

         let handlebars = Arc::new(Mutex::new(handlebars));

-        if let Some(override_cx) = override_cx {
-            Self::watch_fs_for_template_overrides(override_cx, handlebars.clone());
+        if let Some(params) = loading_params {
+            Self::watch_fs_for_template_overrides(params, handlebars.clone());
         }

         Ok(Self { handlebars })
     }

+    /// Watches the filesystem for changes to prompt template overrides.
+    ///
+    /// This function sets up a file watcher on the prompt templates directory. It performs
+    /// an initial scan of the directory and registers any existing template overrides.
+    /// Then it continuously monitors for changes, reloading templates as they are
+    /// modified or added.
+    ///
+    /// If the templates directory doesn't exist initially, it waits for it to be created.
+    /// If the directory is removed, it restores the built-in templates and waits for the
+    /// directory to be recreated.
+    ///
+    /// # Arguments
+    ///
+    /// * `params` - A `PromptLoadingParams` struct containing the filesystem, repository path,
+    ///   and application context.
+    /// * `handlebars` - An `Arc<Mutex<Handlebars>>` for registering and updating templates.
     fn watch_fs_for_template_overrides(
-        PromptOverrideContext { dev_mode, fs, cx }: PromptOverrideContext,
+        mut params: PromptLoadingParams,
         handlebars: Arc<Mutex<Handlebars<'static>>>,
     ) {
-        cx.background_executor()
+        params.repo_path = None;
+        let templates_dir = paths::prompt_overrides_dir(params.repo_path.as_deref());
+        params.cx.background_executor()
             .spawn(async move {
-                let templates_dir = if dev_mode {
-                    std::env::current_dir()
-                        .ok()
-                        .and_then(|pwd| {
-                            let pwd_assets_prompts = pwd.join("assets").join("prompts");
-                            pwd_assets_prompts.exists().then_some(pwd_assets_prompts)
-                        })
-                        .unwrap_or_else(|| paths::prompt_overrides_dir().clone())
-                } else {
-                    paths::prompt_overrides_dir().clone()
-                };
-
-                // Create the prompt templates directory if it doesn't exist
-                if !fs.is_dir(&templates_dir).await {
-                    if let Err(e) = fs.create_dir(&templates_dir).await {
-                        log::error!("Failed to create prompt templates directory: {}", e);
-                        return;
-                    }
-                }
+                let Some(parent_dir) = templates_dir.parent() else {
+                    return;
+                };
+
+                let mut found_dir_once = false;
+                loop {
+                    // Check if the templates directory exists and handle its status
+                    // If it exists, log its presence and check if it's a symlink
+                    // If it doesn't exist:
+                    // - Log that we're using built-in prompts
+                    // - Check if it's a broken symlink and log if so
+                    // - Set up a watcher to detect when it's created
+                    // After the first check, set the `found_dir_once` flag
+                    // This allows us to avoid logging when looping back around after deleting the prompt overrides directory.
+                    let dir_status = params.fs.is_dir(&templates_dir).await;
+                    let symlink_status = params.fs.read_link(&templates_dir).await.ok();
+                    if dir_status {
+                        let mut log_message = format!("Prompt template overrides directory found at {}", templates_dir.display());
+                        if let Some(target) = symlink_status {
+                            log_message.push_str(" -> ");
+                            log_message.push_str(&target.display().to_string());
+                        }
+                        log::info!("{}.", log_message);
+                    } else {
+                        if !found_dir_once {
+                            log::info!("No prompt template overrides directory found at {}. Using built-in prompts.", templates_dir.display());
+                            if let Some(target) = symlink_status {
+                                log::info!("Symlink found pointing to {}, but target is invalid.", target.display());
+                            }
+                        }
+
+                        if params.fs.is_dir(parent_dir).await {
+                            let (mut changes, _watcher) = params.fs.watch(parent_dir, Duration::from_secs(1)).await;
+                            while let Some(changed_paths) = changes.next().await {
+                                if changed_paths.iter().any(|p| p == &templates_dir) {
+                                    let mut log_message = format!("Prompt template overrides directory detected at {}", templates_dir.display());
+                                    if let Ok(target) = params.fs.read_link(&templates_dir).await {
+                                        log_message.push_str(" -> ");
+                                        log_message.push_str(&target.display().to_string());
+                                    }
+                                    log::info!("{}.", log_message);
+                                    break;
+                                }
+                            }
+                        } else {
+                            return;
+                        }
+                    }
+
+                    found_dir_once = true;

-                // Initial scan of the prompts directory
-                if let Ok(mut entries) = fs.read_dir(&templates_dir).await {
-                    while let Some(Ok(file_path)) = entries.next().await {
-                        if file_path.to_string_lossy().ends_with(".hbs") {
-                            if let Ok(content) = fs.load(&file_path).await {
-                                let file_name = file_path.file_stem().unwrap().to_string_lossy();
-                                match handlebars.lock().register_template_string(&file_name, content) {
-                                    Ok(_) => {
-                                        log::info!(
-                                            "Successfully registered template override: {} ({})",
-                                            file_name,
-                                            file_path.display()
-                                        );
-                                    },
-                                    Err(e) => {
-                                        log::error!(
-                                            "Failed to register template during initial scan: {} ({})",
-                                            e,
-                                            file_path.display()
-                                        );
-                                    },
-                                }
-                            }
-                        }
-                    }
-                }
+                    // Initial scan of the prompt overrides directory
+                    if let Ok(mut entries) = params.fs.read_dir(&templates_dir).await {
+                        while let Some(Ok(file_path)) = entries.next().await {
+                            if file_path.to_string_lossy().ends_with(".hbs") {
+                                if let Ok(content) = params.fs.load(&file_path).await {
+                                    let file_name = file_path.file_stem().unwrap().to_string_lossy();
+                                    log::info!("Registering prompt template override: {}", file_name);
+                                    handlebars.lock().register_template_string(&file_name, content).log_err();
+                                }
+                            }
+                        }
+                    }

-                // Watch for changes
-                let (mut changes, watcher) = fs.watch(&templates_dir, Duration::from_secs(1)).await;
-                while let Some(changed_paths) = changes.next().await {
-                    for changed_path in changed_paths {
-                        if changed_path.extension().map_or(false, |ext| ext == "hbs") {
-                            log::info!("Reloading template: {}", changed_path.display());
-                            if let Some(content) = fs.load(&changed_path).await.log_err() {
-                                let file_name = changed_path.file_stem().unwrap().to_string_lossy();
-                                let file_path = changed_path.to_string_lossy();
-                                match handlebars.lock().register_template_string(&file_name, content) {
-                                    Ok(_) => log::info!(
-                                        "Successfully reloaded template: {} ({})",
-                                        file_name,
-                                        file_path
-                                    ),
-                                    Err(e) => log::error!(
-                                        "Failed to register template: {} ({})",
-                                        e,
-                                        file_path
-                                    ),
-                                }
-                            }
-                        }
-                    }
-                }
-                drop(watcher);
+                    // Watch both the parent directory and the template overrides directory:
+                    // - Monitor the parent directory to detect if the template overrides directory is deleted.
+                    // - Monitor the template overrides directory to re-register templates when they change.
+                    // Combine both watch streams into a single stream.
+                    let (parent_changes, parent_watcher) = params.fs.watch(parent_dir, Duration::from_secs(1)).await;
+                    let (changes, watcher) = params.fs.watch(&templates_dir, Duration::from_secs(1)).await;
+                    let mut combined_changes = futures::stream::select(changes, parent_changes);
+
+                    while let Some(changed_paths) = combined_changes.next().await {
+                        if changed_paths.iter().any(|p| p == &templates_dir) {
+                            if !params.fs.is_dir(&templates_dir).await {
+                                log::info!("Prompt template overrides directory removed. Restoring built-in prompt templates.");
+                                Self::register_built_in_templates(&mut handlebars.lock()).log_err();
+                                break;
+                            }
+                        }
+                        for changed_path in changed_paths {
+                            if changed_path.starts_with(&templates_dir) && changed_path.extension().map_or(false, |ext| ext == "hbs") {
+                                log::info!("Reloading prompt template override: {}", changed_path.display());
+                                if let Some(content) = params.fs.load(&changed_path).await.log_err() {
+                                    let file_name = changed_path.file_stem().unwrap().to_string_lossy();
+                                    handlebars.lock().register_template_string(&file_name, content).log_err();
+                                }
+                            }
+                        }
+                    }
+
+                    drop(watcher);
+                    drop(parent_watcher);
+                }
             })
             .detach();
     }
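The rewritten watcher above merges two event sources with `futures::stream::select(changes, parent_changes)` so a single loop sees both template edits and deletion of the overrides directory itself. A minimal std-only sketch of that merging idea, with two threads feeding one channel standing in for the two async watch streams:

```rust
use std::sync::mpsc;
use std::thread;

// Conceptual sketch of merging two watch streams into one consumer, as the
// watcher does with `futures::stream::select`. Two threads feeding a single
// std channel stand in for the async fs-watch streams.
fn merged_events() -> Vec<String> {
    let (tx, rx) = mpsc::channel::<String>();
    let parent_tx = tx.clone();

    let parent = thread::spawn(move || {
        parent_tx
            .send("parent: overrides dir created".into())
            .unwrap();
    });
    let dir = thread::spawn(move || {
        tx.send("dir: template.hbs changed".into()).unwrap();
    });
    parent.join().unwrap();
    dir.join().unwrap();

    // Both senders are now dropped, so the receiver's iterator terminates.
    let mut events: Vec<String> = rx.into_iter().collect();
    events.sort();
    events
}

fn main() {
    let events = merged_events();
    assert_eq!(events.len(), 2);
    println!("{events:?}");
}
```

As in the real code, the consumer does not care which source produced an event; it inspects the changed path to decide whether to reload a template or restore the built-ins.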
-    fn register_templates(handlebars: &mut Handlebars) -> Result<(), Box<TemplateError>> {
-        let mut register_template = |id: &str| {
-            let prompt = Assets::get(&format!("prompts/{}.hbs", id))
-                .unwrap_or_else(|| panic!("{} prompt template not found", id))
-                .data;
-            handlebars
-                .register_template_string(id, String::from_utf8_lossy(&prompt))
-                .map_err(Box::new)
-        };
-
-        register_template("content_prompt")?;
-        register_template("terminal_assistant_prompt")?;
-        register_template("edit_workflow")?;
-        register_template("step_resolution")?;
-
+    fn register_built_in_templates(handlebars: &mut Handlebars) -> Result<()> {
+        for path in Assets.list("prompts")? {
+            if let Some(id) = path.split('/').last().and_then(|s| s.strip_suffix(".hbs")) {
+                if let Some(prompt) = Assets.load(path.as_ref()).log_err().flatten() {
+                    log::info!("Registering built-in prompt template: {}", id);
+                    handlebars
+                        .register_template_string(id, String::from_utf8_lossy(prompt.as_ref()))?
+                }
+            }
+        }
+
         Ok(())
     }

@@ -162,8 +205,7 @@ impl PromptBuilder {
         user_prompt: String,
         language_name: Option<&str>,
         buffer: BufferSnapshot,
-        transform_range: Range<usize>,
-        selected_ranges: Vec<Range<usize>>,
+        range: Range<usize>,
     ) -> Result<String, RenderError> {
         let content_type = match language_name {
             None | Some("Markdown" | "Plain Text") => "text",
@@ -171,20 +213,21 @@ impl PromptBuilder {
         };

         const MAX_CTX: usize = 50000;
+        let is_insert = range.is_empty();
         let mut is_truncated = false;

-        let before_range = 0..transform_range.start;
+        let before_range = 0..range.start;
         let truncated_before = if before_range.len() > MAX_CTX {
             is_truncated = true;
-            transform_range.start - MAX_CTX..transform_range.start
+            range.start - MAX_CTX..range.start
         } else {
             before_range
         };

-        let after_range = transform_range.end..buffer.len();
+        let after_range = range.end..buffer.len();
         let truncated_after = if after_range.len() > MAX_CTX {
             is_truncated = true;
-            transform_range.end..transform_range.end + MAX_CTX
+            range.end..range.end + MAX_CTX
         } else {
             after_range
         };
@@ -193,61 +236,37 @@ impl PromptBuilder {
         for chunk in buffer.text_for_range(truncated_before) {
             document_content.push_str(chunk);
         }
-        document_content.push_str("<rewrite_this>\n");
-        for chunk in buffer.text_for_range(transform_range.clone()) {
-            document_content.push_str(chunk);
+        if is_insert {
+            document_content.push_str("<insert_here></insert_here>");
+        } else {
+            document_content.push_str("<rewrite_this>\n");
+            for chunk in buffer.text_for_range(range.clone()) {
+                document_content.push_str(chunk);
+            }
+            document_content.push_str("\n</rewrite_this>");
         }
-        document_content.push_str("\n</rewrite_this>");
         for chunk in buffer.text_for_range(truncated_after) {
             document_content.push_str(chunk);
         }

-        let mut rewrite_section = String::new();
-        for chunk in buffer.text_for_range(transform_range.clone()) {
-            rewrite_section.push_str(chunk);
-        }
-
-        let rewrite_section_with_selections = {
-            let mut section_with_selections = String::new();
-            let mut last_end = 0;
-            for selected_range in &selected_ranges {
-                if selected_range.start > last_end {
-                    section_with_selections.push_str(
-                        &rewrite_section[last_end..selected_range.start - transform_range.start],
-                    );
-                }
-                if selected_range.start == selected_range.end {
-                    section_with_selections.push_str("<insert_here></insert_here>");
-                } else {
-                    section_with_selections.push_str("<edit_here>");
-                    section_with_selections.push_str(
-                        &rewrite_section[selected_range.start - transform_range.start
-                            ..selected_range.end - transform_range.start],
-                    );
-                    section_with_selections.push_str("</edit_here>");
-                }
-                last_end = selected_range.end - transform_range.start;
-            }
-            if last_end < rewrite_section.len() {
-                section_with_selections.push_str(&rewrite_section[last_end..]);
-            }
-            section_with_selections
+        let rewrite_section = if !is_insert {
+            let mut section = String::new();
+            for chunk in buffer.text_for_range(range.clone()) {
+                section.push_str(chunk);
+            }
+            Some(section)
+        } else {
+            None
         };

-        let has_insertion = selected_ranges.iter().any(|range| range.start == range.end);
-        let has_replacement = selected_ranges.iter().any(|range| range.start != range.end);
-
         let context = ContentPromptContext {
             content_type: content_type.to_string(),
             language_name: language_name.map(|s| s.to_string()),
+            is_insert,
             is_truncated,
             document_content,
             user_prompt,
             rewrite_section,
-            rewrite_section_with_selections,
-            has_insertion,
-            has_replacement,
         };

         self.handlebars.lock().render("content_prompt", &context)
@@ -278,7 +297,10 @@ impl PromptBuilder {
         self.handlebars.lock().render("edit_workflow", &())
     }

-    pub fn generate_step_resolution_prompt(&self) -> Result<String, RenderError> {
-        self.handlebars.lock().render("step_resolution", &())
+    pub fn generate_step_resolution_prompt(
+        &self,
+        context: &StepResolutionContext,
+    ) -> Result<String, RenderError> {
+        self.handlebars.lock().render("step_resolution", context)
     }
 }
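The content-prompt change above hinges on one distinction: an empty range means "insert at this point" and gets an `<insert_here></insert_here>` marker, while a non-empty range is wrapped in `<rewrite_this>` tags. A self-contained sketch of that markup step (the real code streams buffer chunks; this version takes a plain `&str` and an assumed byte range):

```rust
use std::ops::Range;

// Simplified sketch of the document markup performed by the content-prompt
// builder above: an empty range becomes an insertion marker, a non-empty
// range is wrapped in <rewrite_this> tags.
fn mark_up(document: &str, range: Range<usize>) -> String {
    let is_insert = range.is_empty();
    let mut out = String::new();
    out.push_str(&document[..range.start]);
    if is_insert {
        out.push_str("<insert_here></insert_here>");
    } else {
        out.push_str("<rewrite_this>\n");
        out.push_str(&document[range.clone()]);
        out.push_str("\n</rewrite_this>");
    }
    out.push_str(&document[range.end..]);
    out
}

fn main() {
    // Empty range: pure insertion point.
    assert_eq!(
        mark_up("fn f() {}", 3..3),
        "fn <insert_here></insert_here>f() {}"
    );
    // Non-empty range: rewrite section.
    assert_eq!(mark_up("abc", 1..2), "a<rewrite_this>\nb\n</rewrite_this>c");
    println!("ok");
}
```

This also explains why `rewrite_section` became `Option<String>` in `ContentPromptContext`: in the insert case there is simply no section to rewrite, and the Handlebars template can branch on `is_insert`.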

==== next file ====

@@ -1,5 +1,6 @@
 use crate::assistant_panel::ContextEditor;
 use anyhow::Result;
+use assistant_slash_command::AfterCompletion;
 pub use assistant_slash_command::{SlashCommand, SlashCommandOutput, SlashCommandRegistry};
 use editor::{CompletionProvider, Editor};
 use fuzzy::{match_strings, StringMatchCandidate};
@@ -18,6 +19,7 @@ use std::{
 use ui::ActiveTheme;
 use workspace::Workspace;

+pub mod context_server_command;
 pub mod default_command;
 pub mod diagnostics_command;
 pub mod docs_command;
@@ -28,7 +30,7 @@ pub mod project_command;
 pub mod prompt_command;
 pub mod search_command;
 pub mod symbols_command;
-pub mod tabs_command;
+pub mod tab_command;
 pub mod terminal_command;
 pub mod workflow_command;
@@ -41,8 +43,8 @@ pub(crate) struct SlashCommandCompletionProvider {
 pub(crate) struct SlashCommandLine {
     /// The range within the line containing the command name.
     pub name: Range<usize>,
-    /// The range within the line containing the command argument.
-    pub argument: Option<Range<usize>>,
+    /// Ranges within the line containing the command arguments.
+    pub arguments: Vec<Range<usize>>,
 }

 impl SlashCommandCompletionProvider {
@@ -96,38 +98,45 @@ impl SlashCommandCompletionProvider {
                 let command = commands.command(&mat.string)?;
                 let mut new_text = mat.string.clone();
                 let requires_argument = command.requires_argument();
-                if requires_argument {
+                let accepts_arguments = command.accepts_arguments();
+                if requires_argument || accepts_arguments {
                     new_text.push(' ');
                 }
-                let confirm = editor.clone().zip(workspace.clone()).and_then(
-                    |(editor, workspace)| {
-                        (!requires_argument).then(|| {
-                            let command_name = mat.string.clone();
-                            let command_range = command_range.clone();
-                            let editor = editor.clone();
-                            let workspace = workspace.clone();
-                            Arc::new(
-                                move |intent: CompletionIntent, cx: &mut WindowContext| {
-                                    if intent.is_complete() {
-                                        editor
-                                            .update(cx, |editor, cx| {
-                                                editor.run_command(
-                                                    command_range.clone(),
-                                                    &command_name,
-                                                    None,
-                                                    true,
-                                                    workspace.clone(),
-                                                    cx,
-                                                );
-                                            })
-                                            .ok();
-                                    }
-                                },
-                            ) as Arc<_>
-                        })
-                    },
-                );
+                let confirm =
+                    editor
+                        .clone()
+                        .zip(workspace.clone())
+                        .map(|(editor, workspace)| {
+                            let command_name = mat.string.clone();
+                            let command_range = command_range.clone();
+                            let editor = editor.clone();
+                            let workspace = workspace.clone();
+                            Arc::new(
+                                move |intent: CompletionIntent, cx: &mut WindowContext| {
+                                    if !requires_argument
+                                        && (!accepts_arguments || intent.is_complete())
+                                    {
+                                        editor
+                                            .update(cx, |editor, cx| {
+                                                editor.run_command(
+                                                    command_range.clone(),
+                                                    &command_name,
+                                                    &[],
+                                                    true,
+                                                    false,
+                                                    workspace.clone(),
+                                                    cx,
+                                                );
+                                            })
+                                            .ok();
+                                        false
+                                    } else {
+                                        requires_argument || accepts_arguments
+                                    }
+                                },
+                            ) as Arc<_>
+                        });

                 Some(project::Completion {
                     old_range: name_range.clone(),
                     documentation: Some(Documentation::SingleLine(command.description())),
@@ -135,7 +144,6 @@ impl SlashCommandCompletionProvider {
                     label: command.label(cx),
                     server_id: LanguageServerId(0),
                     lsp_completion: Default::default(),
-                    show_new_completions_on_confirm: requires_argument,
                     confirm,
                 })
             })
@@ -147,9 +155,10 @@ impl SlashCommandCompletionProvider {
     fn complete_command_argument(
         &self,
         command_name: &str,
-        argument: String,
+        arguments: &[String],
         command_range: Range<Anchor>,
         argument_range: Range<Anchor>,
+        last_argument_range: Range<Anchor>,
         cx: &mut WindowContext,
     ) -> Task<Result<Vec<project::Completion>>> {
         let new_cancel_flag = Arc::new(AtomicBool::new(false));
@@ -159,7 +168,7 @@ impl SlashCommandCompletionProvider {
         let commands = SlashCommandRegistry::global(cx);
         if let Some(command) = commands.command(command_name) {
             let completions = command.complete_argument(
-                argument,
+                arguments,
                 new_cancel_flag.clone(),
                 self.workspace.clone(),
                 cx,
@@ -167,63 +176,76 @@ impl SlashCommandCompletionProvider {
             let command_name: Arc<str> = command_name.into();
             let editor = self.editor.clone();
             let workspace = self.workspace.clone();
+            let arguments = arguments.to_vec();
             cx.background_executor().spawn(async move {
                 Ok(completions
                     .await?
                     .into_iter()
-                    .map(|command_argument| {
-                        let confirm = if command_argument.run_command {
-                            editor
-                                .clone()
-                                .zip(workspace.clone())
-                                .map(|(editor, workspace)| {
-                                    Arc::new({
-                                        let command_range = command_range.clone();
-                                        let command_name = command_name.clone();
-                                        let command_argument = command_argument.new_text.clone();
-                                        move |intent: CompletionIntent, cx: &mut WindowContext| {
-                                            if intent.is_complete() {
-                                                editor
-                                                    .update(cx, |editor, cx| {
-                                                        editor.run_command(
-                                                            command_range.clone(),
-                                                            &command_name,
-                                                            Some(&command_argument),
-                                                            true,
-                                                            workspace.clone(),
-                                                            cx,
-                                                        );
-                                                    })
-                                                    .ok();
-                                            }
-                                        }
-                                    }) as Arc<_>
-                                })
-                        } else {
-                            None
-                        };
+                    .map(|new_argument| {
+                        let confirm =
+                            editor
+                                .clone()
+                                .zip(workspace.clone())
+                                .map(|(editor, workspace)| {
+                                    Arc::new({
+                                        let mut completed_arguments = arguments.clone();
+                                        if new_argument.replace_previous_arguments {
+                                            completed_arguments.clear();
+                                        } else {
+                                            completed_arguments.pop();
+                                        }
+                                        completed_arguments.push(new_argument.new_text.clone());
+
+                                        let command_range = command_range.clone();
+                                        let command_name = command_name.clone();
+                                        move |intent: CompletionIntent, cx: &mut WindowContext| {
+                                            if new_argument.after_completion.run()
+                                                || intent.is_complete()
+                                            {
+                                                editor
+                                                    .update(cx, |editor, cx| {
+                                                        editor.run_command(
+                                                            command_range.clone(),
+                                                            &command_name,
+                                                            &completed_arguments,
+                                                            true,
+                                                            false,
+                                                            workspace.clone(),
+                                                            cx,
+                                                        );
+                                                    })
+                                                    .ok();
+                                                false
+                                            } else {
+                                                !new_argument.after_completion.run()
+                                            }
+                                        }
+                                    }) as Arc<_>
+                                });

-                        let mut new_text = command_argument.new_text.clone();
-                        if !command_argument.run_command {
+                        let mut new_text = new_argument.new_text.clone();
+                        if new_argument.after_completion == AfterCompletion::Continue {
                             new_text.push(' ');
                         }

                         project::Completion {
-                            old_range: argument_range.clone(),
-                            label: command_argument.label,
+                            old_range: if new_argument.replace_previous_arguments {
+                                argument_range.clone()
+                            } else {
+                                last_argument_range.clone()
+                            },
+                            label: new_argument.label,
                             new_text,
                             documentation: None,
                             server_id: LanguageServerId(0),
                             lsp_completion: Default::default(),
-                            show_new_completions_on_confirm: !command_argument.run_command,
                             confirm,
                         }
                     })
                     .collect())
             })
         } else {
-            cx.background_executor()
-                .spawn(async move { Ok(Vec::new()) })
+            Task::ready(Ok(Vec::new()))
         }
     }
 }
@@ -236,7 +258,7 @@ impl CompletionProvider for SlashCommandCompletionProvider {
         _: editor::CompletionContext,
         cx: &mut ViewContext<Editor>,
     ) -> Task<Result<Vec<project::Completion>>> {
-        let Some((name, argument, command_range, argument_range)) =
+        let Some((name, arguments, command_range, last_argument_range)) =
             buffer.update(cx, |buffer, _cx| {
                 let position = buffer_position.to_point(buffer);
                 let line_start = Point::new(position.row, 0);
@@ -247,32 +269,52 @@ impl CompletionProvider for SlashCommandCompletionProvider {
                 let command_range_start = Point::new(position.row, call.name.start as u32 - 1);
                 let command_range_end = Point::new(
                     position.row,
-                    call.argument.as_ref().map_or(call.name.end, |arg| arg.end) as u32,
+                    call.arguments.last().map_or(call.name.end, |arg| arg.end) as u32,
                 );
                 let command_range = buffer.anchor_after(command_range_start)
                     ..buffer.anchor_after(command_range_end);

                 let name = line[call.name.clone()].to_string();
-                Some(if let Some(argument) = call.argument {
-                    let start =
-                        buffer.anchor_after(Point::new(position.row, argument.start as u32));
-                    let argument = line[argument.clone()].to_string();
-                    (name, Some(argument), command_range, start..buffer_position)
+                let (arguments, last_argument_range) = if let Some(argument) = call.arguments.last()
+                {
+                    let last_arg_start =
+                        buffer.anchor_after(Point::new(position.row, argument.start as u32));
+                    let first_arg_start = call.arguments.first().expect("we have the last element");
+                    let first_arg_start =
+                        buffer.anchor_after(Point::new(position.row, first_arg_start.start as u32));
+                    let arguments = call
+                        .arguments
+                        .iter()
+                        .filter_map(|argument| Some(line.get(argument.clone())?.to_string()))
+                        .collect::<Vec<_>>();
+                    let argument_range = first_arg_start..buffer_position;
+                    (
+                        Some((arguments, argument_range)),
+                        last_arg_start..buffer_position,
+                    )
                 } else {
                     let start =
                         buffer.anchor_after(Point::new(position.row, call.name.start as u32));
-                    (name, None, command_range, start..buffer_position)
-                })
+                    (None, start..buffer_position)
+                };
+                Some((name, arguments, command_range, last_argument_range))
             })
         else {
             return Task::ready(Ok(Vec::new()));
         };

-        if let Some(argument) = argument {
-            self.complete_command_argument(&name, argument, command_range, argument_range, cx)
+        if let Some((arguments, argument_range)) = arguments {
+            self.complete_command_argument(
+                &name,
+                &arguments,
+                command_range,
+                argument_range,
+                last_argument_range,
+                cx,
+            )
         } else {
-            self.complete_command_name(&name, command_range, argument_range, cx)
+            self.complete_command_name(&name, command_range, last_argument_range, cx)
         }
     }
@@ -314,6 +356,10 @@ impl CompletionProvider for SlashCommandCompletionProvider {
             false
         }
     }
+
+    fn sort_completions(&self) -> bool {
+        false
+    }
 }

 impl SlashCommandLine {
@@ -325,16 +371,23 @@ impl SlashCommandLine {
             if let Some(call) = &mut call {
                 // The command arguments start at the first non-whitespace character
                 // after the command name, and continue until the end of the line.
-                if let Some(argument) = &mut call.argument {
-                    if (*argument).is_empty() && c.is_whitespace() {
-                        argument.start = next_ix;
+                if let Some(argument) = call.arguments.last_mut() {
+                    if c.is_whitespace() {
+                        if (*argument).is_empty() {
+                            argument.start = next_ix;
+                            argument.end = next_ix;
+                        } else {
+                            argument.end = ix;
+                            call.arguments.push(next_ix..next_ix);
+                        }
+                    } else {
+                        argument.end = next_ix;
                     }
-                    argument.end = next_ix;
                 }
                 // The command name ends at the first whitespace character.
                 else if !call.name.is_empty() {
                     if c.is_whitespace() {
-                        call.argument = Some(next_ix..next_ix);
+                        call.arguments = vec![next_ix..next_ix];
                     } else {
                         call.name.end = next_ix;
                     }
@@ -350,7 +403,7 @@ impl SlashCommandLine {
             else if c == '/' {
                 call = Some(SlashCommandLine {
                     name: next_ix..next_ix,
-                    argument: None,
+                    arguments: Vec::new(),
                 });
             }
             // The line can't contain anything before the slash except for whitespace.
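The parsing change above moves `SlashCommandLine` from a single optional argument to a vector of argument ranges: the command name ends at the first whitespace, and each whitespace-separated token after it becomes its own argument. A simplified standalone sketch of that splitting behavior (the real parser tracks byte ranges and keeps a trailing empty argument while the user is still typing; this version just returns owned strings):

```rust
// Simplified sketch of the multi-argument slash-line parsing above: the
// command name ends at the first whitespace, and each whitespace-separated
// token after it becomes a separate argument.
fn parse_slash_line(line: &str) -> Option<(String, Vec<String>)> {
    let rest = line.trim_start().strip_prefix('/')?;
    let mut parts = rest.split_whitespace();
    let name = parts.next()?.to_string();
    let arguments = parts.map(|s| s.to_string()).collect();
    Some((name, arguments))
}

fn main() {
    let (name, args) = parse_slash_line("/tab foo.rs bar.rs").unwrap();
    assert_eq!(name, "tab");
    assert_eq!(args, vec!["foo.rs", "bar.rs"]);
    assert!(parse_slash_line("no slash here").is_none());
    println!("ok");
}
```

Splitting eagerly like this is what lets the completion provider offer a fresh completion for the last argument (`last_argument_range`) while still passing all earlier arguments through to `run_command`.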

==== next file ====

@@ -0,0 +1,134 @@
use anyhow::{anyhow, Result};
use assistant_slash_command::{
ArgumentCompletion, SlashCommand, SlashCommandOutput, SlashCommandOutputSection,
};
use collections::HashMap;
use context_servers::{
manager::{ContextServer, ContextServerManager},
protocol::PromptInfo,
};
use gpui::{Task, WeakView, WindowContext};
use language::LspAdapterDelegate;
use std::sync::atomic::AtomicBool;
use std::sync::Arc;
use ui::{IconName, SharedString};
use workspace::Workspace;
pub struct ContextServerSlashCommand {
server_id: String,
prompt: PromptInfo,
}
impl ContextServerSlashCommand {
pub fn new(server: &Arc<ContextServer>, prompt: PromptInfo) -> Self {
Self {
server_id: server.id.clone(),
prompt,
}
}
}
impl SlashCommand for ContextServerSlashCommand {
fn name(&self) -> String {
self.prompt.name.clone()
}
fn description(&self) -> String {
format!("Run context server command: {}", self.prompt.name)
}
fn menu_text(&self) -> String {
format!("Run '{}' from {}", self.prompt.name, self.server_id)
}
fn requires_argument(&self) -> bool {
self.prompt
.arguments
.as_ref()
.map_or(false, |args| !args.is_empty())
}
fn complete_argument(
self: Arc<Self>,
_arguments: &[String],
_cancel: Arc<AtomicBool>,
_workspace: Option<WeakView<Workspace>>,
_cx: &mut WindowContext,
) -> Task<Result<Vec<ArgumentCompletion>>> {
Task::ready(Ok(Vec::new()))
}
fn run(
self: Arc<Self>,
arguments: &[String],
_workspace: WeakView<Workspace>,
_delegate: Option<Arc<dyn LspAdapterDelegate>>,
cx: &mut WindowContext,
) -> Task<Result<SlashCommandOutput>> {
let server_id = self.server_id.clone();
let prompt_name = self.prompt.name.clone();
let prompt_args = match prompt_arguments(&self.prompt, arguments) {
Ok(args) => args,
Err(e) => return Task::ready(Err(e)),
};
let manager = ContextServerManager::global(cx);
let manager = manager.read(cx);
if let Some(server) = manager.get_server(&server_id) {
cx.foreground_executor().spawn(async move {
let Some(protocol) = server.client.read().clone() else {
return Err(anyhow!("Context server not initialized"));
};
let result = protocol.run_prompt(&prompt_name, prompt_args).await?;
Ok(SlashCommandOutput {
sections: vec![SlashCommandOutputSection {
range: 0..result.len(),
icon: IconName::ZedAssistant,
label: SharedString::from(format!("Result from {}", prompt_name)),
}],
text: result,
run_commands_in_text: false,
})
})
} else {
Task::ready(Err(anyhow!("Context server not found")))
}
}
}
fn prompt_arguments(prompt: &PromptInfo, arguments: &[String]) -> Result<HashMap<String, String>> {
match &prompt.arguments {
Some(args) if args.len() > 1 => Err(anyhow!(
"Prompt has more than one argument, which is not supported"
)),
Some(args) if args.len() == 1 => {
if !arguments.is_empty() {
let mut map = HashMap::default();
map.insert(args[0].name.clone(), arguments.join(" "));
Ok(map)
} else {
Err(anyhow!("Prompt expects argument but none given"))
}
}
Some(_) | None => {
if arguments.is_empty() {
Ok(HashMap::default())
} else {
Err(anyhow!("Prompt expects no arguments but some were given"))
}
}
}
}
/// MCP servers can return prompts with multiple arguments. Since we only
/// support a single argument, this predicate filters out any prompt that
/// declares more than one.
pub fn acceptable_prompt(prompt: &PromptInfo) -> bool {
match &prompt.arguments {
None => true,
Some(args) if args.len() == 1 => true,
_ => false,
}
}
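Outside of Zed's types, the argument-mapping rules in `prompt_arguments` and `acceptable_prompt` reduce to a small pure function. The sketch below mirrors those cases using a plain slice of declared argument names and `String` errors in place of `PromptInfo` and `anyhow` (those substitutions are assumptions for illustration only):

```rust
use std::collections::HashMap;

// `declared` stands in for the prompt's `PromptInfo::arguments` names.
fn prompt_arguments(
    declared: Option<&[&str]>,
    given: &[String],
) -> Result<HashMap<String, String>, String> {
    match declared {
        Some(args) if args.len() > 1 => {
            Err("Prompt has more than one argument, which is not supported".into())
        }
        Some(args) if args.len() == 1 => {
            if given.is_empty() {
                Err("Prompt expects argument but none given".into())
            } else {
                // Everything the user typed is joined into the one declared argument.
                let mut map = HashMap::new();
                map.insert(args[0].to_string(), given.join(" "));
                Ok(map)
            }
        }
        // Zero declared arguments (or none at all): reject any user input.
        _ => {
            if given.is_empty() {
                Ok(HashMap::new())
            } else {
                Err("Prompt expects no arguments but some were given".into())
            }
        }
    }
}

fn main() {
    let args = vec!["hello".to_string(), "world".to_string()];
    let map = prompt_arguments(Some(&["topic"]), &args).unwrap();
    assert_eq!(map["topic"], "hello world");
    assert!(prompt_arguments(Some(&["a", "b"]), &args).is_err());
    assert!(prompt_arguments(None, &[]).unwrap().is_empty());
}
```

Joining all words into the single declared argument matches how the slash-command parser hands over space-separated input.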


@@ -32,7 +32,7 @@ impl SlashCommand for DefaultSlashCommand
     fn complete_argument(
         self: Arc<Self>,
-        _query: String,
+        _arguments: &[String],
         _cancellation_flag: Arc<AtomicBool>,
         _workspace: Option<WeakView<Workspace>>,
         _cx: &mut WindowContext,
@@ -42,7 +42,7 @@ impl SlashCommand for DefaultSlashCommand
     fn run(
         self: Arc<Self>,
-        _argument: Option<&str>,
+        _arguments: &[String],
         _workspace: WeakView<Workspace>,
         _delegate: Option<Arc<dyn LspAdapterDelegate>>,
         cx: &mut WindowContext,


@@ -103,9 +103,13 @@ impl SlashCommand for DiagnosticsSlashCommand
         false
     }

+    fn accepts_arguments(&self) -> bool {
+        true
+    }
+
     fn complete_argument(
         self: Arc<Self>,
-        query: String,
+        arguments: &[String],
         cancellation_flag: Arc<AtomicBool>,
         workspace: Option<WeakView<Workspace>>,
         cx: &mut WindowContext,
@@ -113,7 +117,7 @@ impl SlashCommand for DiagnosticsSlashCommand
         let Some(workspace) = workspace.and_then(|workspace| workspace.upgrade()) else {
             return Task::ready(Err(anyhow!("workspace was dropped")));
         };
-        let query = query.split_whitespace().last().unwrap_or("").to_string();
+        let query = arguments.last().cloned().unwrap_or_default();
         let paths = self.search_paths(query.clone(), cancellation_flag.clone(), &workspace, cx);
         let executor = cx.background_executor().clone();
@@ -149,7 +153,8 @@ impl SlashCommand for DiagnosticsSlashCommand
                 .map(|completion| ArgumentCompletion {
                     label: completion.clone().into(),
                     new_text: completion,
-                    run_command: true,
+                    after_completion: assistant_slash_command::AfterCompletion::Run,
+                    replace_previous_arguments: false,
                 })
                 .collect())
         })
@@ -157,7 +162,7 @@ impl SlashCommand for DiagnosticsSlashCommand
     fn run(
         self: Arc<Self>,
-        argument: Option<&str>,
+        arguments: &[String],
         workspace: WeakView<Workspace>,
         _delegate: Option<Arc<dyn LspAdapterDelegate>>,
         cx: &mut WindowContext,
@@ -166,7 +171,7 @@ impl SlashCommand for DiagnosticsSlashCommand
             return Task::ready(Err(anyhow!("workspace was dropped")));
         };
-        let options = Options::parse(argument);
+        let options = Options::parse(arguments);
         let task = collect_diagnostics(workspace.read(cx).project().clone(), options, cx);
@@ -244,25 +249,20 @@ struct Options
 const INCLUDE_WARNINGS_ARGUMENT: &str = "--include-warnings";

 impl Options {
-    fn parse(arguments_line: Option<&str>) -> Self {
-        arguments_line
-            .map(|arguments_line| {
-                let args = arguments_line.split_whitespace().collect::<Vec<_>>();
-                let mut include_warnings = false;
-                let mut path_matcher = None;
-                for arg in args {
-                    if arg == INCLUDE_WARNINGS_ARGUMENT {
-                        include_warnings = true;
-                    } else {
-                        path_matcher = PathMatcher::new(&[arg.to_owned()]).log_err();
-                    }
-                }
-                Self {
-                    include_warnings,
-                    path_matcher,
-                }
-            })
-            .unwrap_or_default()
+    fn parse(arguments: &[String]) -> Self {
+        let mut include_warnings = false;
+        let mut path_matcher = None;
+        for arg in arguments {
+            if arg == INCLUDE_WARNINGS_ARGUMENT {
+                include_warnings = true;
+            } else {
+                path_matcher = PathMatcher::new(&[arg.to_owned()]).log_err();
+            }
+        }
+        Self {
+            include_warnings,
+            path_matcher,
+        }
     }

     fn match_candidates_for_args() -> [StringMatchCandidate; 1] {
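Stripped of the editor types, the rewritten `Options::parse` no longer needs the `Option`/`split_whitespace` dance: it just folds over the pre-split argument slice. In this standalone sketch the glob is kept as a plain `String` in place of `PathMatcher` (an assumption for illustration):

```rust
const INCLUDE_WARNINGS_ARGUMENT: &str = "--include-warnings";

#[derive(Default, Debug, PartialEq)]
struct Options {
    include_warnings: bool,
    // Stand-in for `PathMatcher`; the real code builds a glob matcher here.
    path_glob: Option<String>,
}

impl Options {
    fn parse(arguments: &[String]) -> Self {
        let mut include_warnings = false;
        let mut path_glob = None;
        for arg in arguments {
            if arg == INCLUDE_WARNINGS_ARGUMENT {
                include_warnings = true;
            } else {
                // The last non-flag argument wins, as in the original loop.
                path_glob = Some(arg.clone());
            }
        }
        Self {
            include_warnings,
            path_glob,
        }
    }
}

fn main() {
    let opts = Options::parse(&["--include-warnings".to_string(), "src/*.rs".to_string()]);
    assert!(opts.include_warnings);
    assert_eq!(opts.path_glob.as_deref(), Some("src/*.rs"));
    // An empty slice now yields the defaults directly, no `unwrap_or_default` needed.
    assert_eq!(Options::parse(&[]), Options::default());
}
```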


@@ -161,7 +161,7 @@ impl SlashCommand for DocsSlashCommand
     fn complete_argument(
         self: Arc<Self>,
-        query: String,
+        arguments: &[String],
         _cancel: Arc<AtomicBool>,
         workspace: Option<WeakView<Workspace>>,
         cx: &mut WindowContext,
@@ -169,22 +169,20 @@ impl SlashCommand for DocsSlashCommand
         self.ensure_rust_doc_providers_are_registered(workspace, cx);

         let indexed_docs_registry = IndexedDocsRegistry::global(cx);
-        let args = DocsSlashCommandArgs::parse(&query);
+        let args = DocsSlashCommandArgs::parse(arguments);
         let store = args
             .provider()
             .ok_or_else(|| anyhow!("no docs provider specified"))
             .and_then(|provider| IndexedDocsStore::try_global(provider, cx));
         cx.background_executor().spawn(async move {
-            fn build_completions(
-                provider: ProviderId,
-                items: Vec<String>,
-            ) -> Vec<ArgumentCompletion> {
+            fn build_completions(items: Vec<String>) -> Vec<ArgumentCompletion> {
                 items
                     .into_iter()
                     .map(|item| ArgumentCompletion {
                         label: item.clone().into(),
-                        new_text: format!("{provider} {item}"),
-                        run_command: true,
+                        new_text: item.to_string(),
+                        after_completion: assistant_slash_command::AfterCompletion::Run,
+                        replace_previous_arguments: false,
                     })
                     .collect()
             }
@@ -196,7 +194,8 @@ impl SlashCommand for DocsSlashCommand
                         return Ok(vec![ArgumentCompletion {
                             label: "No available docs providers.".into(),
                             new_text: String::new(),
-                            run_command: false,
+                            after_completion: false.into(),
+                            replace_previous_arguments: false,
                         }]);
                     }
@@ -205,7 +204,8 @@ impl SlashCommand for DocsSlashCommand
                         .map(|provider| ArgumentCompletion {
                             label: provider.to_string().into(),
                             new_text: provider.to_string(),
-                            run_command: false,
+                            after_completion: false.into(),
+                            replace_previous_arguments: false,
                         })
                         .collect())
                 }
@@ -225,7 +225,7 @@ impl SlashCommand for DocsSlashCommand
                     let suggested_packages = store.clone().suggest_packages().await?;
                     let search_results = store.search(package).await;

-                    let mut items = build_completions(provider.clone(), search_results);
+                    let mut items = build_completions(search_results);
                     let workspace_crate_completions = suggested_packages
                         .into_iter()
                         .filter(|package_name| {
@@ -235,8 +235,9 @@ impl SlashCommand for DocsSlashCommand
                         })
                         .map(|package_name| ArgumentCompletion {
                             label: format!("{package_name} (unindexed)").into(),
-                            new_text: format!("{provider} {package_name}"),
-                            run_command: true,
+                            new_text: format!("{package_name}"),
+                            after_completion: true.into(),
+                            replace_previous_arguments: false,
                         })
                         .collect::<Vec<_>>();
                     items.extend(workspace_crate_completions);
@@ -249,20 +250,17 @@ impl SlashCommand for DocsSlashCommand
                             )
                             .into(),
                             new_text: provider.to_string(),
-                            run_command: false,
+                            after_completion: false.into(),
+                            replace_previous_arguments: false,
                         }]);
                     }

                     Ok(items)
                 }
-                DocsSlashCommandArgs::SearchItemDocs {
-                    provider,
-                    item_path,
-                    ..
-                } => {
+                DocsSlashCommandArgs::SearchItemDocs { item_path, .. } => {
                     let store = store?;
                     let items = store.search(item_path).await;
-                    Ok(build_completions(provider, items))
+                    Ok(build_completions(items))
                 }
             }
         })
@@ -270,16 +268,16 @@ impl SlashCommand for DocsSlashCommand
     fn run(
         self: Arc<Self>,
-        argument: Option<&str>,
+        arguments: &[String],
         _workspace: WeakView<Workspace>,
         _delegate: Option<Arc<dyn LspAdapterDelegate>>,
         cx: &mut WindowContext,
     ) -> Task<Result<SlashCommandOutput>> {
-        let Some(argument) = argument else {
-            return Task::ready(Err(anyhow!("missing argument")));
-        };
+        if arguments.is_empty() {
+            return Task::ready(Err(anyhow!("missing an argument")));
+        };

-        let args = DocsSlashCommandArgs::parse(argument);
+        let args = DocsSlashCommandArgs::parse(arguments);
         let executor = cx.background_executor().clone();
         let task = cx.background_executor().spawn({
             let store = args
@@ -379,12 +377,18 @@ pub(crate) enum DocsSlashCommandArgs
 }

 impl DocsSlashCommandArgs {
-    pub fn parse(argument: &str) -> Self {
-        let Some((provider, argument)) = argument.split_once(' ') else {
+    pub fn parse(arguments: &[String]) -> Self {
+        let Some(provider) = arguments
+            .get(0)
+            .cloned()
+            .filter(|arg| !arg.trim().is_empty())
+        else {
             return Self::NoProvider;
         };
         let provider = ProviderId(provider.into());
+        let Some(argument) = arguments.get(1) else {
+            return Self::NoProvider;
+        };

         if let Some((package, rest)) = argument.split_once(is_item_path_delimiter) {
             if rest.trim().is_empty() {
@@ -444,16 +448,16 @@ mod tests
     #[test]
     fn test_parse_docs_slash_command_args() {
         assert_eq!(
-            DocsSlashCommandArgs::parse(""),
+            DocsSlashCommandArgs::parse(&["".to_string()]),
             DocsSlashCommandArgs::NoProvider
         );
         assert_eq!(
-            DocsSlashCommandArgs::parse("rustdoc"),
+            DocsSlashCommandArgs::parse(&["rustdoc".to_string()]),
             DocsSlashCommandArgs::NoProvider
         );
         assert_eq!(
-            DocsSlashCommandArgs::parse("rustdoc "),
+            DocsSlashCommandArgs::parse(&["rustdoc".to_string(), "".to_string()]),
             DocsSlashCommandArgs::SearchPackageDocs {
                 provider: ProviderId("rustdoc".into()),
                 package: "".into(),
@@ -461,7 +465,7 @@ mod tests
             }
         );
         assert_eq!(
-            DocsSlashCommandArgs::parse("gleam "),
+            DocsSlashCommandArgs::parse(&["gleam".to_string(), "".to_string()]),
             DocsSlashCommandArgs::SearchPackageDocs {
                 provider: ProviderId("gleam".into()),
                 package: "".into(),
@@ -470,7 +474,7 @@ mod tests
         );
         assert_eq!(
-            DocsSlashCommandArgs::parse("rustdoc gpui"),
+            DocsSlashCommandArgs::parse(&["rustdoc".to_string(), "gpui".to_string()]),
             DocsSlashCommandArgs::SearchPackageDocs {
                 provider: ProviderId("rustdoc".into()),
                 package: "gpui".into(),
@@ -478,7 +482,7 @@ mod tests
             }
         );
         assert_eq!(
-            DocsSlashCommandArgs::parse("gleam gleam_stdlib"),
+            DocsSlashCommandArgs::parse(&["gleam".to_string(), "gleam_stdlib".to_string()]),
             DocsSlashCommandArgs::SearchPackageDocs {
                 provider: ProviderId("gleam".into()),
                 package: "gleam_stdlib".into(),
@@ -488,7 +492,7 @@ mod tests
         // Adding an item path delimiter indicates we can start indexing.
         assert_eq!(
-            DocsSlashCommandArgs::parse("rustdoc gpui:"),
+            DocsSlashCommandArgs::parse(&["rustdoc".to_string(), "gpui:".to_string()]),
             DocsSlashCommandArgs::SearchPackageDocs {
                 provider: ProviderId("rustdoc".into()),
                 package: "gpui".into(),
@@ -496,7 +500,7 @@ mod tests
             }
         );
         assert_eq!(
-            DocsSlashCommandArgs::parse("gleam gleam_stdlib/"),
+            DocsSlashCommandArgs::parse(&["gleam".to_string(), "gleam_stdlib/".to_string()]),
             DocsSlashCommandArgs::SearchPackageDocs {
                 provider: ProviderId("gleam".into()),
                 package: "gleam_stdlib".into(),
@@ -505,7 +509,10 @@ mod tests
         );
         assert_eq!(
-            DocsSlashCommandArgs::parse("rustdoc gpui::foo::bar::Baz"),
+            DocsSlashCommandArgs::parse(&[
+                "rustdoc".to_string(),
+                "gpui::foo::bar::Baz".to_string()
+            ]),
             DocsSlashCommandArgs::SearchItemDocs {
                 provider: ProviderId("rustdoc".into()),
                 package: "gpui".into(),
@@ -513,7 +520,10 @@ mod tests
             }
         );
         assert_eq!(
-            DocsSlashCommandArgs::parse("gleam gleam_stdlib/gleam/int"),
+            DocsSlashCommandArgs::parse(&[
+                "gleam".to_string(),
+                "gleam_stdlib/gleam/int".to_string()
+            ]),
             DocsSlashCommandArgs::SearchItemDocs {
                 provider: ProviderId("gleam".into()),
                 package: "gleam_stdlib".into(),

@@ -117,7 +117,7 @@ impl SlashCommand for FetchSlashCommand
     fn complete_argument(
         self: Arc<Self>,
-        _query: String,
+        _arguments: &[String],
         _cancel: Arc<AtomicBool>,
         _workspace: Option<WeakView<Workspace>>,
         _cx: &mut WindowContext,
@@ -127,12 +127,12 @@ impl SlashCommand for FetchSlashCommand
     fn run(
         self: Arc<Self>,
-        argument: Option<&str>,
+        arguments: &[String],
         workspace: WeakView<Workspace>,
         _delegate: Option<Arc<dyn LspAdapterDelegate>>,
         cx: &mut WindowContext,
     ) -> Task<Result<SlashCommandOutput>> {
-        let Some(argument) = argument else {
+        let Some(argument) = arguments.first() else {
             return Task::ready(Err(anyhow!("missing URL")));
         };
         let Some(workspace) = workspace.upgrade() else {


@@ -1,6 +1,6 @@
 use super::{diagnostics_command::write_single_file_diagnostics, SlashCommand, SlashCommandOutput};
-use anyhow::{anyhow, Result};
+use anyhow::{anyhow, Context as _, Result};
-use assistant_slash_command::{ArgumentCompletion, SlashCommandOutputSection};
+use assistant_slash_command::{AfterCompletion, ArgumentCompletion, SlashCommandOutputSection};
 use fuzzy::PathMatch;
 use gpui::{AppContext, Model, Task, View, WeakView};
 use language::{BufferSnapshot, CodeLabel, HighlightId, LineEnding, LspAdapterDelegate};
@@ -12,7 +12,7 @@ use std::{
     sync::{atomic::AtomicBool, Arc},
 };
 use ui::prelude::*;
-use util::{paths::PathMatcher, ResultExt};
+use util::ResultExt;
 use workspace::Workspace;

 pub(crate) struct FileSlashCommand;
@@ -122,7 +122,7 @@ impl SlashCommand for FileSlashCommand
     fn complete_argument(
         self: Arc<Self>,
-        query: String,
+        arguments: &[String],
         cancellation_flag: Arc<AtomicBool>,
         workspace: Option<WeakView<Workspace>>,
         cx: &mut WindowContext,
@@ -131,7 +131,12 @@ impl SlashCommand for FileSlashCommand
             return Task::ready(Err(anyhow!("workspace was dropped")));
         };

-        let paths = self.search_paths(query, cancellation_flag, &workspace, cx);
+        let paths = self.search_paths(
+            arguments.last().cloned().unwrap_or_default(),
+            cancellation_flag,
+            &workspace,
+            cx,
+        );
         let comment_id = cx.theme().syntax().highlight_id("comment").map(HighlightId);
         cx.background_executor().spawn(async move {
             Ok(paths
@@ -159,7 +164,12 @@ impl SlashCommand for FileSlashCommand
                     Some(ArgumentCompletion {
                         label,
                         new_text: text,
-                        run_command: true,
+                        after_completion: if path_match.is_dir {
+                            AfterCompletion::Compose
+                        } else {
+                            AfterCompletion::Run
+                        },
+                        replace_previous_arguments: false,
                     })
                 })
                 .collect())
@@ -168,7 +178,7 @@ impl SlashCommand for FileSlashCommand
     fn run(
         self: Arc<Self>,
-        argument: Option<&str>,
+        arguments: &[String],
         workspace: WeakView<Workspace>,
         _delegate: Option<Arc<dyn LspAdapterDelegate>>,
         cx: &mut WindowContext,
@@ -177,23 +187,24 @@ impl SlashCommand for FileSlashCommand
             return Task::ready(Err(anyhow!("workspace was dropped")));
         };

-        let Some(argument) = argument else {
+        if arguments.is_empty() {
             return Task::ready(Err(anyhow!("missing path")));
         };

-        let task = collect_files(workspace.read(cx).project().clone(), argument, cx);
+        let task = collect_files(workspace.read(cx).project().clone(), arguments, cx);

         cx.foreground_executor().spawn(async move {
-            let (text, ranges) = task.await?;
+            let output = task.await?;
             Ok(SlashCommandOutput {
-                text,
+                text: output.completion_text,
-                sections: ranges
+                sections: output
+                    .files
                     .into_iter()
-                    .map(|(range, path, entry_type)| {
+                    .map(|file| {
                         build_entry_output_section(
-                            range,
+                            file.range_in_text,
-                            Some(&path),
+                            Some(&file.path),
-                            entry_type == EntryType::Directory,
+                            file.entry_type == EntryType::Directory,
                             None,
                         )
                     })
@@ -204,18 +215,38 @@ impl SlashCommand for FileSlashCommand
     }
 }

-#[derive(Clone, Copy, PartialEq)]
+#[derive(Clone, Copy, PartialEq, Debug)]
 enum EntryType {
     File,
     Directory,
 }

+#[derive(Clone, PartialEq, Debug)]
+struct FileCommandOutput {
+    completion_text: String,
+    files: Vec<OutputFile>,
+}
+
+#[derive(Clone, PartialEq, Debug)]
+struct OutputFile {
+    range_in_text: Range<usize>,
+    path: PathBuf,
+    entry_type: EntryType,
+}
+
 fn collect_files(
     project: Model<Project>,
-    glob_input: &str,
+    glob_inputs: &[String],
     cx: &mut AppContext,
-) -> Task<Result<(String, Vec<(Range<usize>, PathBuf, EntryType)>)>> {
+) -> Task<Result<FileCommandOutput>> {
-    let Ok(matcher) = PathMatcher::new(&[glob_input.to_owned()]) else {
+    let Ok(matchers) = glob_inputs
+        .into_iter()
+        .map(|glob_input| {
+            custom_path_matcher::PathMatcher::new(&[glob_input.to_owned()])
+                .with_context(|| format!("invalid path {glob_input}"))
+        })
+        .collect::<anyhow::Result<Vec<custom_path_matcher::PathMatcher>>>()
+    else {
         return Task::ready(Err(anyhow!("invalid path")));
     };
@@ -225,6 +256,7 @@ fn collect_files(
         .worktrees(cx)
         .map(|worktree| worktree.read(cx).snapshot())
         .collect::<Vec<_>>();
+
     cx.spawn(|mut cx| async move {
         let mut text = String::new();
         let mut ranges = Vec::new();
@@ -233,11 +265,16 @@ fn collect_files(
             let mut directory_stack: Vec<(Arc<Path>, String, usize)> = Vec::new();
             let mut folded_directory_names_stack = Vec::new();
             let mut is_top_level_directory = true;

             for entry in snapshot.entries(false, 0) {
                 let mut path_including_worktree_name = PathBuf::new();
                 path_including_worktree_name.push(snapshot.root_name());
                 path_including_worktree_name.push(&entry.path);
-                if !matcher.is_match(&path_including_worktree_name) {
+
+                if !matchers
+                    .iter()
+                    .any(|matcher| matcher.is_match(&path_including_worktree_name))
+                {
                     continue;
                 }
@@ -246,11 +283,11 @@ fn collect_files(
                         break;
                     }
                     let (_, entry_name, start) = directory_stack.pop().unwrap();
-                    ranges.push((
-                        start..text.len().saturating_sub(1),
-                        PathBuf::from(entry_name),
-                        EntryType::Directory,
-                    ));
+                    ranges.push(OutputFile {
+                        range_in_text: start..text.len().saturating_sub(1),
+                        path: PathBuf::from(entry_name),
+                        entry_type: EntryType::Directory,
+                    });
                 }

                 let filename = entry
@@ -323,24 +360,39 @@ fn collect_files(
                     ) {
                         text.pop();
                     }
-                    ranges.push((
-                        prev_len..text.len(),
-                        path_including_worktree_name,
-                        EntryType::File,
-                    ));
+                    ranges.push(OutputFile {
+                        range_in_text: prev_len..text.len(),
+                        path: path_including_worktree_name,
+                        entry_type: EntryType::File,
+                    });
                     text.push('\n');
                 }
             }

-            while let Some((dir, _, start)) = directory_stack.pop() {
-                let mut root_path = PathBuf::new();
-                root_path.push(snapshot.root_name());
-                root_path.push(&dir);
-                ranges.push((start..text.len(), root_path, EntryType::Directory));
-            }
+            while let Some((dir, entry, start)) = directory_stack.pop() {
+                if directory_stack.is_empty() {
+                    let mut root_path = PathBuf::new();
+                    root_path.push(snapshot.root_name());
+                    root_path.push(&dir);
+                    ranges.push(OutputFile {
+                        range_in_text: start..text.len(),
+                        path: root_path,
+                        entry_type: EntryType::Directory,
+                    });
+                } else {
+                    ranges.push(OutputFile {
+                        range_in_text: start..text.len(),
+                        path: PathBuf::from(entry.as_str()),
+                        entry_type: EntryType::Directory,
+                    });
+                }
+            }
         }

-        Ok((text, ranges))
+        Ok(FileCommandOutput {
+            completion_text: text,
+            files: ranges,
+        })
     })
 }
@@ -408,3 +460,300 @@ pub fn build_entry_output_section(
         label: label.into(),
     }
 }
/// This contains a small fork of the util::paths::PathMatcher, that is stricter about the prefix
/// check. Only subpaths pass the prefix check, rather than any prefix.
mod custom_path_matcher {
use std::{fmt::Debug as _, path::Path};
use globset::{Glob, GlobSet, GlobSetBuilder};
#[derive(Clone, Debug, Default)]
pub struct PathMatcher {
sources: Vec<String>,
sources_with_trailing_slash: Vec<String>,
glob: GlobSet,
}
impl std::fmt::Display for PathMatcher {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
self.sources.fmt(f)
}
}
impl PartialEq for PathMatcher {
fn eq(&self, other: &Self) -> bool {
self.sources.eq(&other.sources)
}
}
impl Eq for PathMatcher {}
impl PathMatcher {
pub fn new(globs: &[String]) -> Result<Self, globset::Error> {
let globs = globs
.into_iter()
.map(|glob| Glob::new(&glob))
.collect::<Result<Vec<_>, _>>()?;
let sources = globs.iter().map(|glob| glob.glob().to_owned()).collect();
let sources_with_trailing_slash = globs
.iter()
.map(|glob| glob.glob().to_string() + std::path::MAIN_SEPARATOR_STR)
.collect();
let mut glob_builder = GlobSetBuilder::new();
for single_glob in globs {
glob_builder.add(single_glob);
}
let glob = glob_builder.build()?;
Ok(PathMatcher {
glob,
sources,
sources_with_trailing_slash,
})
}
pub fn sources(&self) -> &[String] {
&self.sources
}
pub fn is_match<P: AsRef<Path>>(&self, other: P) -> bool {
let other_path = other.as_ref();
self.sources
.iter()
.zip(self.sources_with_trailing_slash.iter())
.any(|(source, with_slash)| {
let as_bytes = other_path.as_os_str().as_encoded_bytes();
let with_slash = if source.ends_with("/") {
source.as_bytes()
} else {
with_slash.as_bytes()
};
as_bytes.starts_with(with_slash) || as_bytes.ends_with(source.as_bytes())
})
|| self.glob.is_match(other_path)
|| self.check_with_end_separator(other_path)
}
fn check_with_end_separator(&self, path: &Path) -> bool {
let path_str = path.to_string_lossy();
let separator = std::path::MAIN_SEPARATOR_STR;
if path_str.ends_with(separator) {
return false;
} else {
self.glob.is_match(path_str.to_string() + separator)
}
}
}
}
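The stricter prefix rule in the forked `PathMatcher` above can be illustrated without `globset`: a source of `root/dir` should match `root/dir/file` (a true subpath) but not `root/dir.rs` (a mere string prefix, which the stock matcher would accept). A minimal sketch of just that check, using `/` in place of `std::path::MAIN_SEPARATOR_STR` for brevity:

```rust
// Match only the exact path or true subpaths of `source`, never arbitrary
// string prefixes like `root/dir.rs` for source `root/dir`.
fn is_subpath_match(source: &str, path: &str) -> bool {
    let with_slash = if source.ends_with('/') {
        source.to_string()
    } else {
        format!("{source}/")
    };
    path.starts_with(&with_slash) || path == source
}

fn main() {
    assert!(is_subpath_match("root/dir", "root/dir/file_1"));
    assert!(is_subpath_match("root/dir", "root/dir"));
    // The stock prefix check would accept this; the stricter one must not.
    assert!(!is_subpath_match("root/dir", "root/dir.rs"));
}
```

This is exactly the distinction exercised by `test_file_exact_matching` below, where `root/dir` excludes `dir.rs` but `root/dir*` includes it.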
#[cfg(test)]
mod test {
use fs::FakeFs;
use gpui::TestAppContext;
use project::Project;
use serde_json::json;
use settings::SettingsStore;
use crate::slash_command::file_command::collect_files;
pub fn init_test(cx: &mut gpui::TestAppContext) {
if std::env::var("RUST_LOG").is_ok() {
env_logger::try_init().ok();
}
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
// release_channel::init(SemanticVersion::default(), cx);
language::init(cx);
Project::init_settings(cx);
});
}
#[gpui::test]
async fn test_file_exact_matching(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(
"/root",
json!({
"dir": {
"subdir": {
"file_0": "0"
},
"file_1": "1",
"file_2": "2",
"file_3": "3",
},
"dir.rs": "4"
}),
)
.await;
let project = Project::test(fs, ["/root".as_ref()], cx).await;
let result_1 = cx
.update(|cx| collect_files(project.clone(), &["root/dir".to_string()], cx))
.await
.unwrap();
assert!(result_1.completion_text.starts_with("root/dir"));
// 4 files + 2 directories
assert_eq!(6, result_1.files.len());
let result_2 = cx
.update(|cx| collect_files(project.clone(), &["root/dir/".to_string()], cx))
.await
.unwrap();
assert_eq!(result_1, result_2);
let result = cx
.update(|cx| collect_files(project.clone(), &["root/dir*".to_string()], cx))
.await
.unwrap();
assert!(result.completion_text.starts_with("root/dir"));
// 5 files + 2 directories
assert_eq!(7, result.files.len());
// Ensure that the project lasts until after the last await
drop(project);
}
#[gpui::test]
async fn test_file_sub_directory_rendering(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(
"/zed",
json!({
"assets": {
"dir1": {
".gitkeep": ""
},
"dir2": {
".gitkeep": ""
},
"themes": {
"ayu": {
"LICENSE": "1",
},
"andromeda": {
"LICENSE": "2",
},
"summercamp": {
"LICENSE": "3",
},
},
},
}),
)
.await;
let project = Project::test(fs, ["/zed".as_ref()], cx).await;
let result = cx
.update(|cx| collect_files(project.clone(), &["zed/assets/themes".to_string()], cx))
.await
.unwrap();
// Sanity check
assert!(result.completion_text.starts_with("zed/assets/themes\n"));
assert_eq!(7, result.files.len());
// Ensure that full file paths are included in the real output
assert!(result
.completion_text
.contains("zed/assets/themes/andromeda/LICENSE"));
assert!(result
.completion_text
.contains("zed/assets/themes/ayu/LICENSE"));
assert!(result
.completion_text
.contains("zed/assets/themes/summercamp/LICENSE"));
assert_eq!("summercamp", result.files[5].path.to_string_lossy());
// Ensure that things are in descending order, with properly relativized paths
assert_eq!(
"zed/assets/themes/andromeda/LICENSE",
result.files[0].path.to_string_lossy()
);
assert_eq!("andromeda", result.files[1].path.to_string_lossy());
assert_eq!(
"zed/assets/themes/ayu/LICENSE",
result.files[2].path.to_string_lossy()
);
assert_eq!("ayu", result.files[3].path.to_string_lossy());
assert_eq!(
"zed/assets/themes/summercamp/LICENSE",
result.files[4].path.to_string_lossy()
);
// Ensure that the project lasts until after the last await
drop(project);
}
#[gpui::test]
async fn test_file_deep_sub_directory_rendering(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(
"/zed",
json!({
"assets": {
"themes": {
"LICENSE": "1",
"summercamp": {
"LICENSE": "1",
"subdir": {
"LICENSE": "1",
"subsubdir": {
"LICENSE": "3",
}
}
},
},
},
}),
)
.await;
let project = Project::test(fs, ["/zed".as_ref()], cx).await;
let result = cx
.update(|cx| collect_files(project.clone(), &["zed/assets/themes".to_string()], cx))
.await
.unwrap();
assert!(result.completion_text.starts_with("zed/assets/themes\n"));
assert_eq!(
"zed/assets/themes/LICENSE",
result.files[0].path.to_string_lossy()
);
assert_eq!(
"zed/assets/themes/summercamp/LICENSE",
result.files[1].path.to_string_lossy()
);
assert_eq!(
"zed/assets/themes/summercamp/subdir/LICENSE",
result.files[2].path.to_string_lossy()
);
assert_eq!(
"zed/assets/themes/summercamp/subdir/subsubdir/LICENSE",
result.files[3].path.to_string_lossy()
);
assert_eq!("subsubdir", result.files[4].path.to_string_lossy());
assert_eq!("subdir", result.files[5].path.to_string_lossy());
assert_eq!("summercamp", result.files[6].path.to_string_lossy());
assert_eq!("zed/assets/themes", result.files[7].path.to_string_lossy());
// Ensure that the project lasts until after the last await
drop(project);
}
}


@@ -32,7 +32,7 @@ impl SlashCommand for NowSlashCommand
     fn complete_argument(
         self: Arc<Self>,
-        _query: String,
+        _arguments: &[String],
         _cancel: Arc<AtomicBool>,
         _workspace: Option<WeakView<Workspace>>,
         _cx: &mut WindowContext,
@@ -42,7 +42,7 @@ impl SlashCommand for NowSlashCommand
     fn run(
         self: Arc<Self>,
-        _argument: Option<&str>,
+        _arguments: &[String],
         _workspace: WeakView<Workspace>,
         _delegate: Option<Arc<dyn LspAdapterDelegate>>,
         _cx: &mut WindowContext,


@@ -103,7 +103,7 @@ impl SlashCommand for ProjectSlashCommand
     fn complete_argument(
         self: Arc<Self>,
-        _query: String,
+        _arguments: &[String],
         _cancel: Arc<AtomicBool>,
         _workspace: Option<WeakView<Workspace>>,
         _cx: &mut WindowContext,
@@ -117,7 +117,7 @@ impl SlashCommand for ProjectSlashCommand
     fn run(
         self: Arc<Self>,
-        _argument: Option<&str>,
+        _arguments: &[String],
         workspace: WeakView<Workspace>,
         _delegate: Option<Arc<dyn LspAdapterDelegate>>,
         cx: &mut WindowContext,


@@ -29,12 +29,13 @@ impl SlashCommand for PromptSlashCommand {
 fn complete_argument(
     self: Arc<Self>,
-    query: String,
+    arguments: &[String],
     _cancellation_flag: Arc<AtomicBool>,
     _workspace: Option<WeakView<Workspace>>,
     cx: &mut WindowContext,
 ) -> Task<Result<Vec<ArgumentCompletion>>> {
     let store = PromptStore::global(cx);
+    let query = arguments.to_owned().join(" ");
     cx.background_executor().spawn(async move {
         let prompts = store.await?.search(query).await;
         Ok(prompts
@@ -44,7 +45,8 @@ impl SlashCommand for PromptSlashCommand {
                 Some(ArgumentCompletion {
                     label: prompt_title.clone().into(),
                     new_text: prompt_title,
-                    run_command: true,
+                    after_completion: true.into(),
+                    replace_previous_arguments: true,
                 })
             })
             .collect())
@@ -53,17 +55,18 @@ impl SlashCommand for PromptSlashCommand {
 fn run(
     self: Arc<Self>,
-    title: Option<&str>,
+    arguments: &[String],
     _workspace: WeakView<Workspace>,
     _delegate: Option<Arc<dyn LspAdapterDelegate>>,
     cx: &mut WindowContext,
 ) -> Task<Result<SlashCommandOutput>> {
-    let Some(title) = title else {
+    let title = arguments.to_owned().join(" ");
+    if title.trim().is_empty() {
         return Task::ready(Err(anyhow!("missing prompt name")));
     };
     let store = PromptStore::global(cx);
-    let title = SharedString::from(title.to_string());
+    let title = SharedString::from(title.clone());
     let prompt = cx.background_executor().spawn({
         let title = title.clone();
         async move {

View File
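The `/prompt` hunk above replaces the single `Option<&str>` argument with an argument slice that is joined into one title and rejected when blank. A minimal standalone sketch of that validation, in plain Rust with a hypothetical helper name and no Zed types:

```rust
// Hypothetical helper mirroring the /prompt change above: join the
// argument slice into one title and reject an empty/whitespace title.
fn prompt_title_from_args(arguments: &[String]) -> Result<String, &'static str> {
    let title = arguments.join(" ");
    if title.trim().is_empty() {
        Err("missing prompt name")
    } else {
        Ok(title)
    }
}

fn main() {
    assert_eq!(
        prompt_title_from_args(&["Fix".into(), "tests".into()]),
        Ok("Fix tests".to_string())
    );
    assert_eq!(prompt_title_from_args(&[]), Err("missing prompt name"));
}
```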

@@ -49,7 +49,7 @@ impl SlashCommand for SearchSlashCommand {
 fn complete_argument(
     self: Arc<Self>,
-    _query: String,
+    _arguments: &[String],
     _cancel: Arc<AtomicBool>,
     _workspace: Option<WeakView<Workspace>>,
     _cx: &mut WindowContext,
@@ -59,7 +59,7 @@ impl SlashCommand for SearchSlashCommand {
 fn run(
     self: Arc<Self>,
-    argument: Option<&str>,
+    arguments: &[String],
     workspace: WeakView<Workspace>,
     _delegate: Option<Arc<dyn LspAdapterDelegate>>,
     cx: &mut WindowContext,
@@ -67,13 +67,13 @@ impl SlashCommand for SearchSlashCommand {
         let Some(workspace) = workspace.upgrade() else {
             return Task::ready(Err(anyhow::anyhow!("workspace was dropped")));
         };
-        let Some(argument) = argument else {
+        if arguments.is_empty() {
             return Task::ready(Err(anyhow::anyhow!("missing search query")));
         };
         let mut limit = None;
         let mut query = String::new();
-        for part in argument.split(' ') {
+        for part in arguments {
             if let Some(parameter) = part.strip_prefix("--") {
                 if let Ok(count) = parameter.parse::<usize>() {
                     limit = Some(count);

View File
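The `/search` hunk above switches from splitting a single `Option<&str>` on spaces to iterating the argument slice directly, where a `--N` token sets a result limit. A standalone sketch of that parsing (plain Rust, hypothetical helper name; the hunk is truncated, so accumulating the remaining words into the query string is an assumption):

```rust
// Sketch of /search argument parsing after the &[String] change:
// tokens of the form `--N` set a result limit; other tokens are
// assumed to accumulate, space-separated, into the query string.
fn parse_search_args(arguments: &[String]) -> (Option<usize>, String) {
    let mut limit = None;
    let mut query = String::new();
    for part in arguments {
        if let Some(parameter) = part.strip_prefix("--") {
            if let Ok(count) = parameter.parse::<usize>() {
                limit = Some(count);
                continue;
            }
        }
        if !query.is_empty() {
            query.push(' ');
        }
        query.push_str(part);
    }
    (limit, query)
}

fn main() {
    assert_eq!(
        parse_search_args(&["--8".into(), "error".into(), "handling".into()]),
        (Some(8), "error handling".to_string())
    );
    assert_eq!(parse_search_args(&[]), (None, String::new()));
}
```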

@@ -26,7 +26,7 @@ impl SlashCommand for OutlineSlashCommand {
 fn complete_argument(
     self: Arc<Self>,
-    _query: String,
+    _arguments: &[String],
     _cancel: Arc<AtomicBool>,
     _workspace: Option<WeakView<Workspace>>,
     _cx: &mut WindowContext,
@@ -40,7 +40,7 @@ impl SlashCommand for OutlineSlashCommand {
 fn run(
     self: Arc<Self>,
-    _argument: Option<&str>,
+    _arguments: &[String],
     workspace: WeakView<Workspace>,
     _delegate: Option<Arc<dyn LspAdapterDelegate>>,
     cx: &mut WindowContext,

View File

@@ -0,0 +1,318 @@
use super::{
diagnostics_command::write_single_file_diagnostics,
file_command::{build_entry_output_section, codeblock_fence_for_path},
SlashCommand, SlashCommandOutput,
};
use anyhow::{Context, Result};
use assistant_slash_command::ArgumentCompletion;
use collections::{HashMap, HashSet};
use editor::Editor;
use futures::future::join_all;
use gpui::{Entity, Task, WeakView};
use language::{BufferSnapshot, LspAdapterDelegate};
use std::{
fmt::Write,
path::PathBuf,
sync::{atomic::AtomicBool, Arc},
};
use ui::WindowContext;
use workspace::Workspace;
pub(crate) struct TabSlashCommand;
const ALL_TABS_COMPLETION_ITEM: &str = "all";
impl SlashCommand for TabSlashCommand {
fn name(&self) -> String {
"tab".into()
}
fn description(&self) -> String {
"insert open tabs (active tab by default)".to_owned()
}
fn menu_text(&self) -> String {
"Insert Open Tabs".to_owned()
}
fn requires_argument(&self) -> bool {
false
}
fn accepts_arguments(&self) -> bool {
true
}
fn complete_argument(
self: Arc<Self>,
arguments: &[String],
cancel: Arc<AtomicBool>,
workspace: Option<WeakView<Workspace>>,
cx: &mut WindowContext,
) -> Task<Result<Vec<ArgumentCompletion>>> {
let mut has_all_tabs_completion_item = false;
let argument_set = arguments
.iter()
.filter(|argument| {
if has_all_tabs_completion_item || ALL_TABS_COMPLETION_ITEM == argument.as_str() {
has_all_tabs_completion_item = true;
false
} else {
true
}
})
.cloned()
.collect::<HashSet<_>>();
if has_all_tabs_completion_item {
return Task::ready(Ok(Vec::new()));
}
let active_item_path = workspace.as_ref().and_then(|workspace| {
workspace
.update(cx, |workspace, cx| {
let snapshot = active_item_buffer(workspace, cx).ok()?;
snapshot.resolve_file_path(cx, true)
})
.ok()
.flatten()
});
let current_query = arguments.last().cloned().unwrap_or_default();
let tab_items_search =
tab_items_for_queries(workspace, &[current_query], cancel, false, cx);
cx.spawn(|_| async move {
let tab_items = tab_items_search.await?;
let run_command = tab_items.len() == 1;
let tab_completion_items = tab_items.into_iter().filter_map(|(path, ..)| {
let path_string = path.as_deref()?.to_string_lossy().to_string();
if argument_set.contains(&path_string) {
return None;
}
if active_item_path.is_some() && active_item_path == path {
return None;
}
Some(ArgumentCompletion {
label: path_string.clone().into(),
new_text: path_string,
replace_previous_arguments: false,
after_completion: run_command.into(),
})
});
let active_item_completion = active_item_path
.as_deref()
.map(|active_item_path| active_item_path.to_string_lossy().to_string())
.filter(|path_string| !argument_set.contains(path_string))
.map(|path_string| ArgumentCompletion {
label: path_string.clone().into(),
new_text: path_string,
replace_previous_arguments: false,
after_completion: run_command.into(),
});
Ok(active_item_completion
.into_iter()
.chain(Some(ArgumentCompletion {
label: ALL_TABS_COMPLETION_ITEM.into(),
new_text: ALL_TABS_COMPLETION_ITEM.to_owned(),
replace_previous_arguments: false,
after_completion: true.into(),
}))
.chain(tab_completion_items)
.collect())
})
}
fn run(
self: Arc<Self>,
arguments: &[String],
workspace: WeakView<Workspace>,
_delegate: Option<Arc<dyn LspAdapterDelegate>>,
cx: &mut WindowContext,
) -> Task<Result<SlashCommandOutput>> {
let tab_items_search = tab_items_for_queries(
Some(workspace),
arguments,
Arc::new(AtomicBool::new(false)),
true,
cx,
);
cx.background_executor().spawn(async move {
let mut sections = Vec::new();
let mut text = String::new();
let mut has_diagnostics = false;
for (full_path, buffer, _) in tab_items_search.await? {
let section_start_ix = text.len();
text.push_str(&codeblock_fence_for_path(full_path.as_deref(), None));
for chunk in buffer.as_rope().chunks() {
text.push_str(chunk);
}
if !text.ends_with('\n') {
text.push('\n');
}
writeln!(text, "```").unwrap();
if write_single_file_diagnostics(&mut text, full_path.as_deref(), &buffer) {
has_diagnostics = true;
}
if !text.ends_with('\n') {
text.push('\n');
}
let section_end_ix = text.len() - 1;
sections.push(build_entry_output_section(
section_start_ix..section_end_ix,
full_path.as_deref(),
false,
None,
));
}
Ok(SlashCommandOutput {
text,
sections,
run_commands_in_text: has_diagnostics,
})
})
}
}
fn tab_items_for_queries(
workspace: Option<WeakView<Workspace>>,
queries: &[String],
cancel: Arc<AtomicBool>,
strict_match: bool,
cx: &mut WindowContext,
) -> Task<anyhow::Result<Vec<(Option<PathBuf>, BufferSnapshot, usize)>>> {
let empty_query = queries.is_empty() || queries.iter().all(|query| query.trim().is_empty());
let queries = queries.to_owned();
cx.spawn(|mut cx| async move {
let mut open_buffers =
workspace
.context("no workspace")?
.update(&mut cx, |workspace, cx| {
if strict_match && empty_query {
let snapshot = active_item_buffer(workspace, cx)?;
let full_path = snapshot.resolve_file_path(cx, true);
return anyhow::Ok(vec![(full_path, snapshot, 0)]);
}
let mut timestamps_by_entity_id = HashMap::default();
let mut open_buffers = Vec::new();
for pane in workspace.panes() {
let pane = pane.read(cx);
for entry in pane.activation_history() {
timestamps_by_entity_id.insert(entry.entity_id, entry.timestamp);
}
}
for editor in workspace.items_of_type::<Editor>(cx) {
if let Some(buffer) = editor.read(cx).buffer().read(cx).as_singleton() {
if let Some(timestamp) =
timestamps_by_entity_id.get(&editor.entity_id())
{
let snapshot = buffer.read(cx).snapshot();
let full_path = snapshot.resolve_file_path(cx, true);
open_buffers.push((full_path, snapshot, *timestamp));
}
}
}
Ok(open_buffers)
})??;
let background_executor = cx.background_executor().clone();
cx.background_executor()
.spawn(async move {
open_buffers.sort_by_key(|(_, _, timestamp)| *timestamp);
if empty_query
|| queries
.iter()
.any(|query| query == ALL_TABS_COMPLETION_ITEM)
{
return Ok(open_buffers);
}
let matched_items = if strict_match {
let match_candidates = open_buffers
.iter()
.enumerate()
.filter_map(|(id, (full_path, ..))| {
let path_string = full_path.as_deref()?.to_string_lossy().to_string();
Some((id, path_string))
})
.fold(HashMap::default(), |mut candidates, (id, path_string)| {
candidates
.entry(path_string)
.or_insert_with(|| Vec::new())
.push(id);
candidates
});
queries
.iter()
.filter_map(|query| match_candidates.get(query))
.flatten()
.copied()
.filter_map(|id| open_buffers.get(id))
.cloned()
.collect()
} else {
let match_candidates = open_buffers
.iter()
.enumerate()
.filter_map(|(id, (full_path, ..))| {
let path_string = full_path.as_deref()?.to_string_lossy().to_string();
Some(fuzzy::StringMatchCandidate {
id,
char_bag: path_string.as_str().into(),
string: path_string,
})
})
.collect::<Vec<_>>();
let mut processed_matches = HashSet::default();
let file_queries = queries.iter().map(|query| {
fuzzy::match_strings(
&match_candidates,
query,
true,
usize::MAX,
&cancel,
background_executor.clone(),
)
});
join_all(file_queries)
.await
.into_iter()
.flatten()
.filter(|string_match| processed_matches.insert(string_match.candidate_id))
.filter_map(|string_match| open_buffers.get(string_match.candidate_id))
.cloned()
.collect()
};
Ok(matched_items)
})
.await
})
}
fn active_item_buffer(
workspace: &mut Workspace,
cx: &mut ui::ViewContext<Workspace>,
) -> anyhow::Result<BufferSnapshot> {
let active_editor = workspace
.active_item(cx)
.context("no active item")?
.downcast::<Editor>()
.context("active item is not an editor")?;
let snapshot = active_editor
.read(cx)
.buffer()
.read(cx)
.as_singleton()
.context("active editor is not a singleton buffer")?
.read(cx)
.snapshot();
Ok(snapshot)
}

View File
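`tab_items_for_queries` in the new `/tab` command above runs one fuzzy search per query and merges the results, using `HashSet::insert` inside a `filter` to drop duplicate candidate ids while preserving first-match order. That dedup idiom in isolation (generic ids, no fuzzy matcher, hypothetical function name):

```rust
use std::collections::HashSet;

// Merge per-query match id lists, keeping each id once, in first-seen order.
fn dedup_match_ids(per_query_matches: Vec<Vec<usize>>) -> Vec<usize> {
    let mut processed = HashSet::new();
    per_query_matches
        .into_iter()
        .flatten()
        // HashSet::insert returns false for ids already seen, filtering repeats
        .filter(|id| processed.insert(*id))
        .collect()
}

fn main() {
    assert_eq!(dedup_match_ids(vec![vec![2, 0], vec![0, 1]]), vec![2, 0, 1]);
    assert_eq!(dedup_match_ids(vec![]), Vec::<usize>::new());
}
```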

@@ -1,227 +0,0 @@
use super::{
diagnostics_command::write_single_file_diagnostics,
file_command::{build_entry_output_section, codeblock_fence_for_path},
SlashCommand, SlashCommandOutput,
};
use anyhow::{Context, Result};
use assistant_slash_command::ArgumentCompletion;
use collections::HashMap;
use editor::Editor;
use gpui::{Entity, Task, WeakView};
use language::{BufferSnapshot, LspAdapterDelegate};
use std::{
fmt::Write,
path::PathBuf,
sync::{atomic::AtomicBool, Arc},
};
use ui::WindowContext;
use workspace::Workspace;
pub(crate) struct TabsSlashCommand;
const ALL_TABS_COMPLETION_ITEM: &str = "all";
impl SlashCommand for TabsSlashCommand {
fn name(&self) -> String {
"tabs".into()
}
fn description(&self) -> String {
"insert open tabs (active tab by default)".to_owned()
}
fn menu_text(&self) -> String {
"Insert Open Tabs".to_owned()
}
fn requires_argument(&self) -> bool {
false
}
fn complete_argument(
self: Arc<Self>,
query: String,
cancel: Arc<AtomicBool>,
workspace: Option<WeakView<Workspace>>,
cx: &mut WindowContext,
) -> Task<Result<Vec<ArgumentCompletion>>> {
let all_tabs_completion_item = if ALL_TABS_COMPLETION_ITEM.contains(&query) {
Some(ArgumentCompletion {
label: ALL_TABS_COMPLETION_ITEM.into(),
new_text: ALL_TABS_COMPLETION_ITEM.to_owned(),
run_command: true,
})
} else {
None
};
let tab_items_search = tab_items_for_query(workspace, query, cancel, false, cx);
cx.spawn(|_| async move {
let tab_completion_items =
tab_items_search
.await?
.into_iter()
.filter_map(|(path, ..)| {
let path_string = path.as_deref()?.to_string_lossy().to_string();
Some(ArgumentCompletion {
label: path_string.clone().into(),
new_text: path_string,
run_command: true,
})
});
Ok(all_tabs_completion_item
.into_iter()
.chain(tab_completion_items)
.collect::<Vec<_>>())
})
}
fn run(
self: Arc<Self>,
argument: Option<&str>,
workspace: WeakView<Workspace>,
_delegate: Option<Arc<dyn LspAdapterDelegate>>,
cx: &mut WindowContext,
) -> Task<Result<SlashCommandOutput>> {
let tab_items_search = tab_items_for_query(
Some(workspace),
argument.map(ToOwned::to_owned).unwrap_or_default(),
Arc::new(AtomicBool::new(false)),
true,
cx,
);
cx.background_executor().spawn(async move {
let mut sections = Vec::new();
let mut text = String::new();
let mut has_diagnostics = false;
for (full_path, buffer, _) in tab_items_search.await? {
let section_start_ix = text.len();
text.push_str(&codeblock_fence_for_path(full_path.as_deref(), None));
for chunk in buffer.as_rope().chunks() {
text.push_str(chunk);
}
if !text.ends_with('\n') {
text.push('\n');
}
writeln!(text, "```").unwrap();
if write_single_file_diagnostics(&mut text, full_path.as_deref(), &buffer) {
has_diagnostics = true;
}
if !text.ends_with('\n') {
text.push('\n');
}
let section_end_ix = text.len() - 1;
sections.push(build_entry_output_section(
section_start_ix..section_end_ix,
full_path.as_deref(),
false,
None,
));
}
Ok(SlashCommandOutput {
text,
sections,
run_commands_in_text: has_diagnostics,
})
})
}
}
fn tab_items_for_query(
workspace: Option<WeakView<Workspace>>,
mut query: String,
cancel: Arc<AtomicBool>,
use_active_tab_for_empty_query: bool,
cx: &mut WindowContext,
) -> Task<anyhow::Result<Vec<(Option<PathBuf>, BufferSnapshot, usize)>>> {
cx.spawn(|mut cx| async move {
query.make_ascii_lowercase();
let mut open_buffers =
workspace
.context("no workspace")?
.update(&mut cx, |workspace, cx| {
if use_active_tab_for_empty_query && query.trim().is_empty() {
let active_editor = workspace
.active_item(cx)
.context("no active item")?
.downcast::<Editor>()
.context("active item is not an editor")?;
let snapshot = active_editor
.read(cx)
.buffer()
.read(cx)
.as_singleton()
.context("active editor is not a singleton buffer")?
.read(cx)
.snapshot();
let full_path = snapshot.resolve_file_path(cx, true);
return anyhow::Ok(vec![(full_path, snapshot, 0)]);
}
let mut timestamps_by_entity_id = HashMap::default();
let mut open_buffers = Vec::new();
for pane in workspace.panes() {
let pane = pane.read(cx);
for entry in pane.activation_history() {
timestamps_by_entity_id.insert(entry.entity_id, entry.timestamp);
}
}
for editor in workspace.items_of_type::<Editor>(cx) {
if let Some(buffer) = editor.read(cx).buffer().read(cx).as_singleton() {
if let Some(timestamp) =
timestamps_by_entity_id.get(&editor.entity_id())
{
let snapshot = buffer.read(cx).snapshot();
let full_path = snapshot.resolve_file_path(cx, true);
open_buffers.push((full_path, snapshot, *timestamp));
}
}
}
Ok(open_buffers)
})??;
let background_executor = cx.background_executor().clone();
cx.background_executor()
.spawn(async move {
open_buffers.sort_by_key(|(_, _, timestamp)| *timestamp);
let query = query.trim();
if query.is_empty() || query == ALL_TABS_COMPLETION_ITEM {
return Ok(open_buffers);
}
let match_candidates = open_buffers
.iter()
.enumerate()
.filter_map(|(id, (full_path, ..))| {
let path_string = full_path.as_deref()?.to_string_lossy().to_string();
Some(fuzzy::StringMatchCandidate {
id,
char_bag: path_string.as_str().into(),
string: path_string,
})
})
.collect::<Vec<_>>();
let string_matches = fuzzy::match_strings(
&match_candidates,
&query,
true,
usize::MAX,
&cancel,
background_executor,
)
.await;
Ok(string_matches
.into_iter()
.filter_map(|string_match| open_buffers.get(string_match.candidate_id))
.cloned()
.collect())
})
.await
})
}

View File

@@ -5,7 +5,7 @@ use anyhow::Result;
 use assistant_slash_command::{
     ArgumentCompletion, SlashCommand, SlashCommandOutput, SlashCommandOutputSection,
 };
-use gpui::{AppContext, Task, WeakView};
+use gpui::{AppContext, Task, View, WeakView};
 use language::{CodeLabel, LspAdapterDelegate};
 use terminal_view::{terminal_panel::TerminalPanel, TerminalView};
 use ui::prelude::*;
@@ -40,23 +40,23 @@ impl SlashCommand for TerminalSlashCommand {
         false
     }
+    fn accepts_arguments(&self) -> bool {
+        true
+    }
     fn complete_argument(
         self: Arc<Self>,
-        _query: String,
+        _arguments: &[String],
         _cancel: Arc<AtomicBool>,
         _workspace: Option<WeakView<Workspace>>,
         _cx: &mut WindowContext,
     ) -> Task<Result<Vec<ArgumentCompletion>>> {
-        Task::ready(Ok(vec![ArgumentCompletion {
-            label: LINE_COUNT_ARG.into(),
-            new_text: LINE_COUNT_ARG.to_string(),
-            run_command: true,
-        }]))
+        Task::ready(Ok(Vec::new()))
     }
 fn run(
     self: Arc<Self>,
-    argument: Option<&str>,
+    arguments: &[String],
     workspace: WeakView<Workspace>,
     _delegate: Option<Arc<dyn LspAdapterDelegate>>,
     cx: &mut WindowContext,
@@ -64,19 +64,14 @@ impl SlashCommand for TerminalSlashCommand {
         let Some(workspace) = workspace.upgrade() else {
             return Task::ready(Err(anyhow::anyhow!("workspace was dropped")));
         };
-        let Some(terminal_panel) = workspace.read(cx).panel::<TerminalPanel>(cx) else {
-            return Task::ready(Err(anyhow::anyhow!("no terminal panel open")));
-        };
-        let Some(active_terminal) = terminal_panel.read(cx).pane().and_then(|pane| {
-            pane.read(cx)
-                .active_item()
-                .and_then(|t| t.downcast::<TerminalView>())
-        }) else {
+        let Some(active_terminal) = resolve_active_terminal(&workspace, cx) else {
             return Task::ready(Err(anyhow::anyhow!("no active terminal")));
         };
-        let line_count = argument
-            .and_then(|a| parse_argument(a))
+        let line_count = arguments
+            .get(0)
+            .and_then(|s| s.parse::<usize>().ok())
             .unwrap_or(DEFAULT_CONTEXT_LINES);
         let lines = active_terminal
@@ -102,12 +97,22 @@ impl SlashCommand for TerminalSlashCommand {
     }
 }
-fn parse_argument(argument: &str) -> Option<usize> {
-    let mut args = argument.split(' ');
-    if args.next() == Some(LINE_COUNT_ARG) {
-        if let Some(line_count) = args.next().and_then(|s| s.parse::<usize>().ok()) {
-            return Some(line_count);
-        }
-    }
-    None
+fn resolve_active_terminal(
+    workspace: &View<Workspace>,
+    cx: &WindowContext,
+) -> Option<View<TerminalView>> {
+    if let Some(terminal_view) = workspace
+        .read(cx)
+        .active_item(cx)
+        .and_then(|item| item.act_as::<TerminalView>(cx))
+    {
+        return Some(terminal_view);
+    }
+    let terminal_panel = workspace.read(cx).panel::<TerminalPanel>(cx)?;
+    terminal_panel.read(cx).pane().and_then(|pane| {
+        pane.read(cx)
+            .active_item()
+            .and_then(|t| t.downcast::<TerminalView>())
+    })
 }

View File
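The `/terminal` hunk drops the `--<line-count>` flag parsing in favor of treating the first argument as a plain number, falling back to a default. A standalone sketch of that fallback (plain Rust; the real `DEFAULT_CONTEXT_LINES` value isn't shown in the diff, so 20 here is an assumed placeholder):

```rust
// Assumed placeholder; the actual constant's value is not in the diff.
const DEFAULT_CONTEXT_LINES: usize = 20;

// First argument parsed as a line count, defaulting when absent or invalid,
// mirroring `arguments.get(0).and_then(...).unwrap_or(...)` above.
fn line_count(arguments: &[String]) -> usize {
    arguments
        .first()
        .and_then(|s| s.parse::<usize>().ok())
        .unwrap_or(DEFAULT_CONTEXT_LINES)
}

fn main() {
    assert_eq!(line_count(&["50".into()]), 50);
    assert_eq!(line_count(&[]), 20);
    assert_eq!(line_count(&["abc".into()]), 20);
}
```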

@@ -42,7 +42,7 @@ impl SlashCommand for WorkflowSlashCommand {
 fn complete_argument(
     self: Arc<Self>,
-    _query: String,
+    _arguments: &[String],
     _cancel: Arc<AtomicBool>,
     _workspace: Option<WeakView<Workspace>>,
     _cx: &mut WindowContext,
@@ -52,7 +52,7 @@ impl SlashCommand for WorkflowSlashCommand {
 fn run(
     self: Arc<Self>,
-    _argument: Option<&str>,
+    _arguments: &[String],
     _workspace: WeakView<Workspace>,
     _delegate: Option<Arc<dyn LspAdapterDelegate>>,
     cx: &mut WindowContext,

View File

@@ -0,0 +1,306 @@
use std::sync::Arc;
use assistant_slash_command::SlashCommandRegistry;
use gpui::AnyElement;
use gpui::DismissEvent;
use gpui::WeakView;
use picker::PickerEditorPosition;
use ui::ListItemSpacing;
use gpui::SharedString;
use gpui::Task;
use picker::{Picker, PickerDelegate};
use ui::{prelude::*, ListItem, PopoverMenu, PopoverTrigger};
use crate::assistant_panel::ContextEditor;
#[derive(IntoElement)]
pub(super) struct SlashCommandSelector<T: PopoverTrigger> {
registry: Arc<SlashCommandRegistry>,
active_context_editor: WeakView<ContextEditor>,
trigger: T,
}
#[derive(Clone)]
struct SlashCommandInfo {
name: SharedString,
description: SharedString,
args: Option<SharedString>,
}
#[derive(Clone)]
enum SlashCommandEntry {
Info(SlashCommandInfo),
Advert {
name: SharedString,
renderer: fn(&mut WindowContext<'_>) -> AnyElement,
on_confirm: fn(&mut WindowContext<'_>),
},
}
impl AsRef<str> for SlashCommandEntry {
fn as_ref(&self) -> &str {
match self {
SlashCommandEntry::Info(SlashCommandInfo { name, .. })
| SlashCommandEntry::Advert { name, .. } => name,
}
}
}
pub(crate) struct SlashCommandDelegate {
all_commands: Vec<SlashCommandEntry>,
filtered_commands: Vec<SlashCommandEntry>,
active_context_editor: WeakView<ContextEditor>,
selected_index: usize,
}
impl<T: PopoverTrigger> SlashCommandSelector<T> {
pub(crate) fn new(
registry: Arc<SlashCommandRegistry>,
active_context_editor: WeakView<ContextEditor>,
trigger: T,
) -> Self {
SlashCommandSelector {
registry,
active_context_editor,
trigger,
}
}
}
impl PickerDelegate for SlashCommandDelegate {
type ListItem = ListItem;
fn match_count(&self) -> usize {
self.filtered_commands.len()
}
fn selected_index(&self) -> usize {
self.selected_index
}
fn set_selected_index(&mut self, ix: usize, cx: &mut ViewContext<Picker<Self>>) {
self.selected_index = ix.min(self.filtered_commands.len().saturating_sub(1));
cx.notify();
}
fn placeholder_text(&self, _cx: &mut WindowContext) -> Arc<str> {
"Select a command...".into()
}
fn update_matches(&mut self, query: String, cx: &mut ViewContext<Picker<Self>>) -> Task<()> {
let all_commands = self.all_commands.clone();
cx.spawn(|this, mut cx| async move {
let filtered_commands = cx
.background_executor()
.spawn(async move {
if query.is_empty() {
all_commands
} else {
all_commands
.into_iter()
.filter(|model_info| {
model_info
.as_ref()
.to_lowercase()
.contains(&query.to_lowercase())
})
.collect()
}
})
.await;
this.update(&mut cx, |this, cx| {
this.delegate.filtered_commands = filtered_commands;
this.delegate.set_selected_index(0, cx);
cx.notify();
})
.ok();
})
}
fn separators_after_indices(&self) -> Vec<usize> {
let mut ret = vec![];
let mut previous_is_advert = false;
for (index, command) in self.filtered_commands.iter().enumerate() {
if previous_is_advert {
if let SlashCommandEntry::Info(_) = command {
previous_is_advert = false;
debug_assert_ne!(
index, 0,
"index cannot be zero, as we can never have a separator at 0th position"
);
ret.push(index - 1);
}
} else {
if let SlashCommandEntry::Advert { .. } = command {
previous_is_advert = true;
if index != 0 {
ret.push(index - 1);
}
}
}
}
ret
}
fn confirm(&mut self, _secondary: bool, cx: &mut ViewContext<Picker<Self>>) {
if let Some(command) = self.filtered_commands.get(self.selected_index) {
if let SlashCommandEntry::Info(info) = command {
self.active_context_editor
.update(cx, |context_editor, cx| {
context_editor.insert_command(&info.name, cx)
})
.ok();
} else if let SlashCommandEntry::Advert { on_confirm, .. } = command {
on_confirm(cx);
}
cx.emit(DismissEvent);
}
}
fn dismissed(&mut self, _cx: &mut ViewContext<Picker<Self>>) {}
fn editor_position(&self) -> PickerEditorPosition {
PickerEditorPosition::End
}
fn render_match(
&self,
ix: usize,
selected: bool,
cx: &mut ViewContext<Picker<Self>>,
) -> Option<Self::ListItem> {
let command_info = self.filtered_commands.get(ix)?;
match command_info {
SlashCommandEntry::Info(info) => Some(
ListItem::new(ix)
.inset(true)
.spacing(ListItemSpacing::Sparse)
.selected(selected)
.child(
h_flex()
.group(format!("command-entry-label-{ix}"))
.w_full()
.min_w(px(220.))
.child(
v_flex()
.child(
h_flex()
.child(div().font_buffer(cx).child({
let mut label = format!("/{}", info.name);
if let Some(args) =
info.args.as_ref().filter(|_| selected)
{
label.push_str(&args);
}
Label::new(label).size(LabelSize::Small)
}))
.children(info.args.clone().filter(|_| !selected).map(
|args| {
div()
.font_buffer(cx)
.child(
Label::new(args).size(LabelSize::Small),
)
.visible_on_hover(format!(
"command-entry-label-{ix}"
))
},
)),
)
.child(
Label::new(info.description.clone())
.size(LabelSize::Small)
.color(Color::Muted),
),
),
),
),
SlashCommandEntry::Advert { renderer, .. } => Some(
ListItem::new(ix)
.inset(true)
.spacing(ListItemSpacing::Sparse)
.selected(selected)
.child(renderer(cx)),
),
}
}
}
impl<T: PopoverTrigger> RenderOnce for SlashCommandSelector<T> {
fn render(self, cx: &mut WindowContext) -> impl IntoElement {
let all_models = self
.registry
.featured_command_names()
.into_iter()
.filter_map(|command_name| {
let command = self.registry.command(&command_name)?;
let menu_text = SharedString::from(Arc::from(command.menu_text()));
let label = command.label(cx);
let args = label.filter_range.end.ne(&label.text.len()).then(|| {
SharedString::from(
label.text[label.filter_range.end..label.text.len()].to_owned(),
)
});
Some(SlashCommandEntry::Info(SlashCommandInfo {
name: command_name.into(),
description: menu_text,
args,
}))
})
.chain([SlashCommandEntry::Advert {
name: "create-your-command".into(),
renderer: |cx| {
v_flex()
.child(
h_flex()
.font_buffer(cx)
.items_center()
.gap_1()
.child(div().font_buffer(cx).child(
Label::new("create-your-command").size(LabelSize::Small),
))
.child(Icon::new(IconName::ArrowUpRight).size(IconSize::XSmall)),
)
.child(
Label::new("Learn how to create a custom command")
.size(LabelSize::Small)
.color(Color::Muted),
)
.into_any_element()
},
on_confirm: |cx| cx.open_url("https://zed.dev/docs/extensions/slash-commands"),
}])
.collect::<Vec<_>>();
let delegate = SlashCommandDelegate {
all_commands: all_models.clone(),
active_context_editor: self.active_context_editor.clone(),
filtered_commands: all_models,
selected_index: 0,
};
let picker_view = cx.new_view(|cx| {
let picker = Picker::uniform_list(delegate, cx).max_height(Some(rems(20.).into()));
picker
});
let handle = self
.active_context_editor
.update(cx, |this, _| this.slash_menu_handle.clone())
.ok();
PopoverMenu::new("model-switcher")
.menu(move |_cx| Some(picker_view.clone()))
.trigger(self.trigger)
.attach(gpui::AnchorCorner::TopLeft)
.anchor(gpui::AnchorCorner::BottomLeft)
.offset(gpui::Point {
x: px(0.0),
y: px(-16.0),
})
.when_some(handle, |this, handle| this.with_handle(handle))
}
}

View File

@@ -277,6 +277,7 @@ impl TerminalInlineAssistant {
         messages.push(LanguageModelRequestMessage {
             role: Role::User,
             content: vec![prompt.into()],
+            cache: false,
         });
         Ok(LanguageModelRequest {

View File

@@ -1,25 +0,0 @@
### Using the Assistant
Once you have configured a provider, you can interact with the provider's language models in a context editor.
To create a new context editor, use the menu in the top right of the assistant panel and the `New Context` option.
In the context editor, select a model from one of the configured providers, type a message in the `You` block, and submit with `cmd-enter` (or `ctrl-enter` on Linux).
### Inline assistant
When you're in a normal editor, you can use `ctrl-enter` to open the inline assistant.
The inline assistant allows you to send the current selection (or the current line) to a language model and modify the selection with the language model's response.
### Adding Prompts
You can customize the default prompts that are used in a new context editor by opening the `Prompt Library`.
Open the `Prompt Library` using either the menu in the top right of the assistant panel and choosing the `Prompt Library` option, or by using the `assistant: deploy prompt library` command when the assistant panel is focused.
### Viewing past contexts
You can view all previous contexts by opening the `History` tab in the assistant panel.
Open the `History` using the menu in the top right of the assistant panel and choosing the `History` option.

View File

@@ -0,0 +1,803 @@
mod step_view;
use crate::{
prompts::StepResolutionContext, AssistantPanel, Context, InlineAssistId, InlineAssistant,
};
use anyhow::{anyhow, Error, Result};
use collections::HashMap;
use editor::Editor;
use futures::future;
use gpui::{
Model, ModelContext, Task, UpdateGlobal as _, View, WeakModel, WeakView, WindowContext,
};
use language::{Anchor, Buffer, BufferSnapshot, SymbolPath};
use language_model::{LanguageModelRegistry, LanguageModelRequestMessage, Role};
use project::Project;
use rope::Point;
use serde::{Deserialize, Serialize};
use smol::stream::StreamExt;
use std::{cmp, fmt::Write, ops::Range, sync::Arc};
use text::{AnchorRangeExt as _, OffsetRangeExt as _};
use util::ResultExt as _;
use workspace::Workspace;
pub use step_view::WorkflowStepView;
const IMPORTS_SYMBOL: &str = "#imports";
pub struct WorkflowStep {
context: WeakModel<Context>,
context_buffer_range: Range<Anchor>,
tool_output: String,
resolve_task: Option<Task<()>>,
pub resolution: Option<Result<WorkflowStepResolution, Arc<Error>>>,
}
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct WorkflowStepResolution {
pub title: String,
pub suggestion_groups: HashMap<Model<Buffer>, Vec<WorkflowSuggestionGroup>>,
}
#[derive(Clone, Debug, Eq, PartialEq)]
pub struct WorkflowSuggestionGroup {
pub context_range: Range<language::Anchor>,
pub suggestions: Vec<WorkflowSuggestion>,
}
#[derive(Clone, Debug, Eq, PartialEq)]
pub enum WorkflowSuggestion {
Update {
symbol_path: SymbolPath,
range: Range<language::Anchor>,
description: String,
},
CreateFile {
description: String,
},
InsertSiblingBefore {
symbol_path: SymbolPath,
position: language::Anchor,
description: String,
},
InsertSiblingAfter {
symbol_path: SymbolPath,
position: language::Anchor,
description: String,
},
PrependChild {
symbol_path: Option<SymbolPath>,
position: language::Anchor,
description: String,
},
AppendChild {
symbol_path: Option<SymbolPath>,
position: language::Anchor,
description: String,
},
Delete {
symbol_path: SymbolPath,
range: Range<language::Anchor>,
},
}
impl WorkflowStep {
pub fn new(range: Range<Anchor>, context: WeakModel<Context>) -> Self {
Self {
context_buffer_range: range,
tool_output: String::new(),
context,
resolution: None,
resolve_task: None,
}
}
pub fn resolve(&mut self, cx: &mut ModelContext<WorkflowStep>) -> Option<()> {
let range = self.context_buffer_range.clone();
let context = self.context.upgrade()?;
let context = context.read(cx);
let project = context.project()?;
let prompt_builder = context.prompt_builder();
let mut request = context.to_completion_request(cx);
let model = LanguageModelRegistry::read_global(cx).active_model();
let context_buffer = context.buffer();
let step_text = context_buffer
.read(cx)
.text_for_range(range.clone())
.collect::<String>();
let mut workflow_context = String::new();
for message in context.messages(cx) {
write!(&mut workflow_context, "<message role={}>", message.role).unwrap();
for chunk in context_buffer.read(cx).text_for_range(message.offset_range) {
write!(&mut workflow_context, "{chunk}").unwrap();
}
write!(&mut workflow_context, "</message>").unwrap();
}
self.resolve_task = Some(cx.spawn(|this, mut cx| async move {
let result = async {
let Some(model) = model else {
return Err(anyhow!("no model selected"));
};
this.update(&mut cx, |this, cx| {
this.tool_output.clear();
this.resolution = None;
this.result_updated(cx);
cx.notify();
})?;
let resolution_context = StepResolutionContext {
workflow_context,
step_to_resolve: step_text.clone(),
};
let mut prompt =
prompt_builder.generate_step_resolution_prompt(&resolution_context)?;
prompt.push_str(&step_text);
request.messages.push(LanguageModelRequestMessage {
role: Role::User,
content: vec![prompt.into()],
cache: false,
});
// Invoke the model to get its edit suggestions for this workflow step.
let mut stream = model
.use_tool_stream::<tool::WorkflowStepResolutionTool>(request, &cx)
.await?;
while let Some(chunk) = stream.next().await {
let chunk = chunk?;
this.update(&mut cx, |this, cx| {
this.tool_output.push_str(&chunk);
cx.notify();
})?;
}
let resolution = this.update(&mut cx, |this, _| {
serde_json::from_str::<tool::WorkflowStepResolutionTool>(&this.tool_output)
})??;
this.update(&mut cx, |this, cx| {
this.tool_output = serde_json::to_string_pretty(&resolution).unwrap();
cx.notify();
})?;
// Translate the parsed suggestions to our internal types, which anchor the suggestions to locations in the code.
let suggestion_tasks: Vec<_> = resolution
.suggestions
.iter()
.map(|suggestion| suggestion.resolve(project.clone(), cx.clone()))
.collect();
// Expand the context ranges of each suggestion and group suggestions with overlapping context ranges.
let suggestions = future::join_all(suggestion_tasks)
.await
.into_iter()
.filter_map(|task| task.log_err())
.collect::<Vec<_>>();
let mut suggestions_by_buffer = HashMap::default();
for (buffer, suggestion) in suggestions {
suggestions_by_buffer
.entry(buffer)
.or_insert_with(Vec::new)
.push(suggestion);
}
let mut suggestion_groups_by_buffer = HashMap::default();
for (buffer, mut suggestions) in suggestions_by_buffer {
let mut suggestion_groups = Vec::<WorkflowSuggestionGroup>::new();
let snapshot = buffer.update(&mut cx, |buffer, _| buffer.snapshot())?;
// Sort suggestions by their range so that earlier, larger ranges come first
suggestions.sort_by(|a, b| a.range().cmp(&b.range(), &snapshot));
// Merge overlapping suggestions
suggestions.dedup_by(|a, b| b.try_merge(a, &snapshot));
// Create context ranges for each suggestion
for suggestion in suggestions {
let context_range = {
let suggestion_point_range = suggestion.range().to_point(&snapshot);
let start_row = suggestion_point_range.start.row.saturating_sub(5);
let end_row = cmp::min(
suggestion_point_range.end.row + 5,
snapshot.max_point().row,
);
let start = snapshot.anchor_before(Point::new(start_row, 0));
let end = snapshot
.anchor_after(Point::new(end_row, snapshot.line_len(end_row)));
start..end
};
if let Some(last_group) = suggestion_groups.last_mut() {
if last_group
.context_range
.end
.cmp(&context_range.start, &snapshot)
.is_ge()
{
// Merge with the previous group if context ranges overlap
last_group.context_range.end = context_range.end;
last_group.suggestions.push(suggestion);
} else {
// Create a new group
suggestion_groups.push(WorkflowSuggestionGroup {
context_range,
suggestions: vec![suggestion],
});
}
} else {
// Create the first group
suggestion_groups.push(WorkflowSuggestionGroup {
context_range,
suggestions: vec![suggestion],
});
}
}
suggestion_groups_by_buffer.insert(buffer, suggestion_groups);
}
Ok((resolution.step_title, suggestion_groups_by_buffer))
};
let result = result.await;
this.update(&mut cx, |this, cx| {
this.resolution = Some(match result {
Ok((title, suggestion_groups)) => Ok(WorkflowStepResolution {
title,
suggestion_groups,
}),
Err(error) => Err(Arc::new(error)),
});
this.context
.update(cx, |context, cx| context.workflow_step_updated(range, cx))
.ok();
cx.notify();
})
.ok();
}));
None
}
fn result_updated(&mut self, cx: &mut ModelContext<Self>) {
self.context
.update(cx, |context, cx| {
context.workflow_step_updated(self.context_buffer_range.clone(), cx)
})
.ok();
}
}
impl WorkflowSuggestion {
pub fn range(&self) -> Range<language::Anchor> {
match self {
Self::Update { range, .. } => range.clone(),
Self::CreateFile { .. } => language::Anchor::MIN..language::Anchor::MAX,
Self::InsertSiblingBefore { position, .. }
| Self::InsertSiblingAfter { position, .. }
| Self::PrependChild { position, .. }
| Self::AppendChild { position, .. } => *position..*position,
Self::Delete { range, .. } => range.clone(),
}
}
pub fn description(&self) -> Option<&str> {
match self {
Self::Update { description, .. }
| Self::CreateFile { description }
| Self::InsertSiblingBefore { description, .. }
| Self::InsertSiblingAfter { description, .. }
| Self::PrependChild { description, .. }
| Self::AppendChild { description, .. } => Some(description),
Self::Delete { .. } => None,
}
}
fn description_mut(&mut self) -> Option<&mut String> {
match self {
Self::Update { description, .. }
| Self::CreateFile { description }
| Self::InsertSiblingBefore { description, .. }
| Self::InsertSiblingAfter { description, .. }
| Self::PrependChild { description, .. }
| Self::AppendChild { description, .. } => Some(description),
Self::Delete { .. } => None,
}
}
fn symbol_path(&self) -> Option<&SymbolPath> {
match self {
Self::Update { symbol_path, .. } => Some(symbol_path),
Self::InsertSiblingBefore { symbol_path, .. } => Some(symbol_path),
Self::InsertSiblingAfter { symbol_path, .. } => Some(symbol_path),
Self::PrependChild { symbol_path, .. } => symbol_path.as_ref(),
Self::AppendChild { symbol_path, .. } => symbol_path.as_ref(),
Self::Delete { symbol_path, .. } => Some(symbol_path),
Self::CreateFile { .. } => None,
}
}
fn kind(&self) -> &str {
match self {
Self::Update { .. } => "Update",
Self::CreateFile { .. } => "CreateFile",
Self::InsertSiblingBefore { .. } => "InsertSiblingBefore",
Self::InsertSiblingAfter { .. } => "InsertSiblingAfter",
Self::PrependChild { .. } => "PrependChild",
Self::AppendChild { .. } => "AppendChild",
Self::Delete { .. } => "Delete",
}
}
fn try_merge(&mut self, other: &Self, buffer: &BufferSnapshot) -> bool {
let range = self.range();
let other_range = other.range();
// Don't merge if we don't contain the other suggestion.
if range.start.cmp(&other_range.start, buffer).is_gt()
|| range.end.cmp(&other_range.end, buffer).is_lt()
{
return false;
}
if let Some(description) = self.description_mut() {
if let Some(other_description) = other.description() {
description.push('\n');
description.push_str(other_description);
}
}
true
}
pub fn show(
&self,
editor: &View<Editor>,
excerpt_id: editor::ExcerptId,
workspace: &WeakView<Workspace>,
assistant_panel: &View<AssistantPanel>,
cx: &mut WindowContext,
) -> Option<InlineAssistId> {
let mut initial_transaction_id = None;
let initial_prompt;
let suggestion_range;
let buffer = editor.read(cx).buffer().clone();
let snapshot = buffer.read(cx).snapshot(cx);
match self {
Self::Update {
range, description, ..
} => {
initial_prompt = description.clone();
suggestion_range = snapshot.anchor_in_excerpt(excerpt_id, range.start)?
..snapshot.anchor_in_excerpt(excerpt_id, range.end)?;
}
Self::CreateFile { description } => {
initial_prompt = description.clone();
suggestion_range = editor::Anchor::min()..editor::Anchor::min();
}
Self::InsertSiblingBefore {
position,
description,
..
} => {
let position = snapshot.anchor_in_excerpt(excerpt_id, *position)?;
initial_prompt = description.clone();
suggestion_range = buffer.update(cx, |buffer, cx| {
buffer.start_transaction(cx);
let line_start = buffer.insert_empty_line(position, true, true, cx);
initial_transaction_id = buffer.end_transaction(cx);
buffer.refresh_preview(cx);
let line_start = buffer.read(cx).anchor_before(line_start);
line_start..line_start
});
}
Self::InsertSiblingAfter {
position,
description,
..
} => {
let position = snapshot.anchor_in_excerpt(excerpt_id, *position)?;
initial_prompt = description.clone();
suggestion_range = buffer.update(cx, |buffer, cx| {
buffer.start_transaction(cx);
let line_start = buffer.insert_empty_line(position, true, true, cx);
initial_transaction_id = buffer.end_transaction(cx);
buffer.refresh_preview(cx);
let line_start = buffer.read(cx).anchor_before(line_start);
line_start..line_start
});
}
Self::PrependChild {
position,
description,
..
} => {
let position = snapshot.anchor_in_excerpt(excerpt_id, *position)?;
initial_prompt = description.clone();
suggestion_range = buffer.update(cx, |buffer, cx| {
buffer.start_transaction(cx);
let line_start = buffer.insert_empty_line(position, false, true, cx);
initial_transaction_id = buffer.end_transaction(cx);
buffer.refresh_preview(cx);
let line_start = buffer.read(cx).anchor_before(line_start);
line_start..line_start
});
}
Self::AppendChild {
position,
description,
..
} => {
let position = snapshot.anchor_in_excerpt(excerpt_id, *position)?;
initial_prompt = description.clone();
suggestion_range = buffer.update(cx, |buffer, cx| {
buffer.start_transaction(cx);
let line_start = buffer.insert_empty_line(position, true, false, cx);
initial_transaction_id = buffer.end_transaction(cx);
buffer.refresh_preview(cx);
let line_start = buffer.read(cx).anchor_before(line_start);
line_start..line_start
});
}
Self::Delete { range, .. } => {
initial_prompt = "Delete".to_string();
suggestion_range = snapshot.anchor_in_excerpt(excerpt_id, range.start)?
..snapshot.anchor_in_excerpt(excerpt_id, range.end)?;
}
}
InlineAssistant::update_global(cx, |inline_assistant, cx| {
Some(inline_assistant.suggest_assist(
editor,
suggestion_range,
initial_prompt,
initial_transaction_id,
Some(workspace.clone()),
Some(assistant_panel),
cx,
))
})
}
}
pub mod tool {
use super::*;
use anyhow::Context as _;
use gpui::AsyncAppContext;
use language::{Outline, OutlineItem, ParseStatus};
use language_model::LanguageModelTool;
use project::ProjectPath;
use schemars::JsonSchema;
use std::path::Path;
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct WorkflowStepResolutionTool {
/// An extremely short title for the edit step represented by these operations.
pub step_title: String,
/// A sequence of operations to apply to the codebase.
/// When multiple operations are required for a step, be sure to include multiple operations in this list.
pub suggestions: Vec<WorkflowSuggestionTool>,
}
impl LanguageModelTool for WorkflowStepResolutionTool {
fn name() -> String {
"edit".into()
}
fn description() -> String {
"suggest edits to one or more locations in the codebase".into()
}
}
/// A description of an operation to apply to one location in the codebase.
///
/// This object represents a single edit operation that can be performed on a specific file
/// in the codebase. It encapsulates both the location (file path) and the nature of the
/// edit to be made.
///
/// # Fields
///
/// * `path`: A string representing the file path where the edit operation should be applied.
/// This path is relative to the root of the project or repository.
///
/// * `kind`: An enum representing the specific type of edit operation to be performed.
///
/// # Usage
///
/// `WorkflowSuggestionTool` is used within a code editor to represent and apply
/// programmatic changes to source code. It provides a structured way to describe
/// edits for features like refactoring tools or AI-assisted coding suggestions.
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize, JsonSchema)]
pub struct WorkflowSuggestionTool {
/// The path to the file containing the relevant operation
pub path: String,
#[serde(flatten)]
pub kind: WorkflowSuggestionToolKind,
}
impl WorkflowSuggestionTool {
pub(super) async fn resolve(
&self,
project: Model<Project>,
mut cx: AsyncAppContext,
) -> Result<(Model<Buffer>, super::WorkflowSuggestion)> {
let path = self.path.clone();
let kind = self.kind.clone();
let buffer = project
.update(&mut cx, |project, cx| {
let project_path = project
.find_project_path(Path::new(&path), cx)
.or_else(|| {
// If we couldn't find a project path for it, put it in the active worktree
// so that when we create the buffer, it can be saved.
let worktree = project
.active_entry()
.and_then(|entry_id| project.worktree_for_entry(entry_id, cx))
.or_else(|| project.worktrees(cx).next())?;
let worktree = worktree.read(cx);
Some(ProjectPath {
worktree_id: worktree.id(),
path: Arc::from(Path::new(&path)),
})
})
.with_context(|| format!("worktree not found for {:?}", path))?;
anyhow::Ok(project.open_buffer(project_path, cx))
})??
.await?;
let mut parse_status = buffer.read_with(&cx, |buffer, _cx| buffer.parse_status())?;
while *parse_status.borrow() != ParseStatus::Idle {
parse_status.changed().await?;
}
let snapshot = buffer.update(&mut cx, |buffer, _| buffer.snapshot())?;
let outline = snapshot.outline(None).context("no outline for buffer")?;
let suggestion = match kind {
WorkflowSuggestionToolKind::Update {
symbol,
description,
} => {
let (symbol_path, symbol) = Self::resolve_symbol(&snapshot, &outline, &symbol)?;
let start = symbol
.annotation_range
.map_or(symbol.range.start, |range| range.start);
let start = Point::new(start.row, 0);
let end = Point::new(
symbol.range.end.row,
snapshot.line_len(symbol.range.end.row),
);
let range = snapshot.anchor_before(start)..snapshot.anchor_after(end);
WorkflowSuggestion::Update {
range,
description,
symbol_path,
}
}
WorkflowSuggestionToolKind::Create { description } => {
WorkflowSuggestion::CreateFile { description }
}
WorkflowSuggestionToolKind::InsertSiblingBefore {
symbol,
description,
} => {
let (symbol_path, symbol) = Self::resolve_symbol(&snapshot, &outline, &symbol)?;
let position = snapshot.anchor_before(
symbol
.annotation_range
.map_or(symbol.range.start, |annotation_range| {
annotation_range.start
}),
);
WorkflowSuggestion::InsertSiblingBefore {
position,
description,
symbol_path,
}
}
WorkflowSuggestionToolKind::InsertSiblingAfter {
symbol,
description,
} => {
let (symbol_path, symbol) = Self::resolve_symbol(&snapshot, &outline, &symbol)?;
let position = snapshot.anchor_after(symbol.range.end);
WorkflowSuggestion::InsertSiblingAfter {
position,
description,
symbol_path,
}
}
WorkflowSuggestionToolKind::PrependChild {
symbol,
description,
} => {
if let Some(symbol) = symbol {
let (symbol_path, symbol) =
Self::resolve_symbol(&snapshot, &outline, &symbol)?;
let position = snapshot.anchor_after(
symbol
.body_range
.map_or(symbol.range.start, |body_range| body_range.start),
);
WorkflowSuggestion::PrependChild {
position,
description,
symbol_path: Some(symbol_path),
}
} else {
WorkflowSuggestion::PrependChild {
position: language::Anchor::MIN,
description,
symbol_path: None,
}
}
}
WorkflowSuggestionToolKind::AppendChild {
symbol,
description,
} => {
if let Some(symbol) = symbol {
let (symbol_path, symbol) =
Self::resolve_symbol(&snapshot, &outline, &symbol)?;
let position = snapshot.anchor_before(
symbol
.body_range
.map_or(symbol.range.end, |body_range| body_range.end),
);
WorkflowSuggestion::AppendChild {
position,
description,
symbol_path: Some(symbol_path),
}
} else {
WorkflowSuggestion::AppendChild {
position: language::Anchor::MAX,
description,
symbol_path: None,
}
}
}
WorkflowSuggestionToolKind::Delete { symbol } => {
let (symbol_path, symbol) = Self::resolve_symbol(&snapshot, &outline, &symbol)?;
let start = symbol
.annotation_range
.map_or(symbol.range.start, |range| range.start);
let start = Point::new(start.row, 0);
let end = Point::new(
symbol.range.end.row,
snapshot.line_len(symbol.range.end.row),
);
let range = snapshot.anchor_before(start)..snapshot.anchor_after(end);
WorkflowSuggestion::Delete { range, symbol_path }
}
};
Ok((buffer, suggestion))
}
fn resolve_symbol(
snapshot: &BufferSnapshot,
outline: &Outline<Anchor>,
symbol: &str,
) -> Result<(SymbolPath, OutlineItem<Point>)> {
if symbol == IMPORTS_SYMBOL {
let target_row = find_first_non_comment_line(snapshot);
Ok((
SymbolPath(IMPORTS_SYMBOL.to_string()),
OutlineItem {
range: Point::new(target_row, 0)..Point::new(target_row + 1, 0),
..Default::default()
},
))
} else {
let (symbol_path, symbol) = outline
.find_most_similar(symbol)
.with_context(|| format!("symbol not found: {symbol}"))?;
Ok((symbol_path, symbol.to_point(snapshot)))
}
}
}
fn find_first_non_comment_line(snapshot: &BufferSnapshot) -> u32 {
let Some(language) = snapshot.language() else {
return 0;
};
let scope = language.default_scope();
let comment_prefixes = scope.line_comment_prefixes();
let mut chunks = snapshot.as_rope().chunks();
let mut target_row = 0;
loop {
let starts_with_comment = chunks
.peek()
.map(|chunk| {
comment_prefixes
.iter()
.any(|s| chunk.starts_with(s.as_ref().trim_end()))
})
.unwrap_or(false);
if !starts_with_comment {
break;
}
target_row += 1;
if !chunks.next_line() {
break;
}
}
target_row
}
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize, JsonSchema)]
#[serde(tag = "kind")]
pub enum WorkflowSuggestionToolKind {
/// Rewrites the specified symbol entirely based on the given description.
/// This operation completely replaces the existing symbol with new content.
Update {
/// A fully-qualified reference to the symbol, e.g. `mod foo impl Bar pub fn baz` instead of just `fn baz`.
/// The path should uniquely identify the symbol within the containing file.
symbol: String,
/// A brief description of the transformation to apply to the symbol.
description: String,
},
/// Creates a new file with the given path based on the provided description.
/// This operation adds a new file to the codebase.
Create {
/// A brief description of the file to be created.
description: String,
},
/// Inserts a new symbol based on the given description before the specified symbol.
/// This operation adds new content immediately preceding an existing symbol.
InsertSiblingBefore {
/// A fully-qualified reference to the symbol, e.g. `mod foo impl Bar pub fn baz` instead of just `fn baz`.
/// The new content will be inserted immediately before this symbol.
symbol: String,
/// A brief description of the new symbol to be inserted.
description: String,
},
/// Inserts a new symbol based on the given description after the specified symbol.
/// This operation adds new content immediately following an existing symbol.
InsertSiblingAfter {
/// A fully-qualified reference to the symbol, e.g. `mod foo impl Bar pub fn baz` instead of just `fn baz`.
/// The new content will be inserted immediately after this symbol.
symbol: String,
/// A brief description of the new symbol to be inserted.
description: String,
},
/// Inserts a new symbol as a child of the specified symbol at the start.
/// This operation adds new content as the first child of an existing symbol (or file if no symbol is provided).
PrependChild {
/// An optional fully-qualified reference to the symbol after the code you want to insert, e.g. `mod foo impl Bar pub fn baz` instead of just `fn baz`.
/// If provided, the new content will be inserted as the first child of this symbol.
/// If not provided, the new content will be inserted at the top of the file.
symbol: Option<String>,
/// A brief description of the new symbol to be inserted.
description: String,
},
/// Inserts a new symbol as a child of the specified symbol at the end.
/// This operation adds new content as the last child of an existing symbol (or file if no symbol is provided).
AppendChild {
/// An optional fully-qualified reference to the symbol before the code you want to insert, e.g. `mod foo impl Bar pub fn baz` instead of just `fn baz`.
/// If provided, the new content will be inserted as the last child of this symbol.
/// If not provided, the new content will be applied at the bottom of the file.
symbol: Option<String>,
/// A brief description of the new symbol to be inserted.
description: String,
},
/// Deletes the specified symbol from the containing file.
Delete {
/// A fully-qualified reference to the symbol to be deleted, e.g. `mod foo impl Bar pub fn baz` instead of just `fn baz`.
symbol: String,
},
}
}
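As a side note on the grouping pass in `WorkflowStep::resolve` above: each suggestion's range is padded by five rows of context, and consecutive groups whose padded ranges touch are merged into one `WorkflowSuggestionGroup`. A minimal sketch of that interval logic, using plain row numbers in place of buffer anchors (the 5-row padding matches the code above; everything else here is illustrative):

```rust
// Sketch of the context-range grouping pass, with (start_row, end_row) pairs
// standing in for buffer anchors. Each range is padded by 5 rows, and a range
// whose padded start falls inside the previous group is merged into it.
fn group_by_context(mut ranges: Vec<(usize, usize)>, max_row: usize) -> Vec<(usize, usize)> {
    // The real code sorts suggestions by anchor position before grouping.
    ranges.sort();
    let mut groups: Vec<(usize, usize)> = Vec::new();
    for (start, end) in ranges {
        let ctx_start = start.saturating_sub(5);
        let ctx_end = (end + 5).min(max_row);
        match groups.last_mut() {
            // Merge when the previous group's context reaches this one's start.
            Some(last) if last.1 >= ctx_start => last.1 = last.1.max(ctx_end),
            _ => groups.push((ctx_start, ctx_end)),
        }
    }
    groups
}

fn main() {
    // Rows 10..12 and 14..16 are within 5 rows of each other, so they merge;
    // rows 40..41 start a new group.
    let groups = group_by_context(vec![(40, 41), (10, 12), (14, 16)], 100);
    println!("{groups:?}"); // prints [(5, 21), (35, 46)]
}
```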


@@ -0,0 +1,315 @@
use super::WorkflowStep;
use crate::{Assist, Context};
use editor::{
display_map::{BlockDisposition, BlockProperties, BlockStyle},
Editor, EditorEvent, ExcerptRange, MultiBuffer,
};
use gpui::{
div, AnyElement, AppContext, Context as _, Empty, EventEmitter, FocusableView, IntoElement,
Model, ParentElement as _, Render, SharedString, Styled as _, View, ViewContext,
VisualContext as _, WeakModel, WindowContext,
};
use language::{language_settings::SoftWrap, Anchor, Buffer, LanguageRegistry};
use std::{ops::DerefMut, sync::Arc};
use text::OffsetRangeExt;
use theme::ActiveTheme as _;
use ui::{
h_flex, v_flex, ButtonCommon as _, ButtonLike, ButtonStyle, Color, Icon, IconName,
InteractiveElement as _, Label, LabelCommon as _,
};
use workspace::{
item::{self, Item},
pane,
searchable::SearchableItemHandle,
};
pub struct WorkflowStepView {
step: WeakModel<WorkflowStep>,
tool_output_buffer: Model<Buffer>,
editor: View<Editor>,
}
impl WorkflowStepView {
pub fn new(
context: Model<Context>,
step: Model<WorkflowStep>,
language_registry: Arc<LanguageRegistry>,
cx: &mut ViewContext<Self>,
) -> Self {
let tool_output_buffer =
cx.new_model(|cx| Buffer::local(step.read(cx).tool_output.clone(), cx));
let buffer = cx.new_model(|cx| {
let mut buffer = MultiBuffer::without_headers(0, language::Capability::ReadWrite);
buffer.push_excerpts(
context.read(cx).buffer().clone(),
[ExcerptRange {
context: step.read(cx).context_buffer_range.clone(),
primary: None,
}],
cx,
);
buffer.push_excerpts(
tool_output_buffer.clone(),
[ExcerptRange {
context: Anchor::MIN..Anchor::MAX,
primary: None,
}],
cx,
);
buffer
});
let buffer_snapshot = buffer.read(cx).snapshot(cx);
let output_excerpt = buffer_snapshot.excerpts().nth(1).unwrap().0;
let input_start_anchor = multi_buffer::Anchor::min();
let output_start_anchor = buffer_snapshot
.anchor_in_excerpt(output_excerpt, Anchor::MIN)
.unwrap();
let output_end_anchor = multi_buffer::Anchor::max();
let handle = cx.view().downgrade();
let editor = cx.new_view(|cx| {
let mut editor = Editor::for_multibuffer(buffer.clone(), None, false, cx);
editor.set_soft_wrap_mode(SoftWrap::EditorWidth, cx);
editor.set_show_line_numbers(false, cx);
editor.set_show_git_diff_gutter(false, cx);
editor.set_show_code_actions(false, cx);
editor.set_show_runnables(false, cx);
editor.set_show_wrap_guides(false, cx);
editor.set_show_indent_guides(false, cx);
editor.set_read_only(true);
editor.set_show_inline_completions(false);
editor.insert_blocks(
[
BlockProperties {
position: input_start_anchor,
height: 1,
style: BlockStyle::Fixed,
render: Box::new(|cx| section_header("Step Input", cx)),
disposition: BlockDisposition::Above,
priority: 0,
},
BlockProperties {
position: output_start_anchor,
height: 1,
style: BlockStyle::Fixed,
render: Box::new(|cx| section_header("Tool Output", cx)),
disposition: BlockDisposition::Above,
priority: 0,
},
BlockProperties {
position: output_end_anchor,
height: 1,
style: BlockStyle::Fixed,
render: Box::new(move |cx| {
if let Some(result) = handle.upgrade().and_then(|this| {
this.update(cx.deref_mut(), |this, cx| this.render_result(cx))
}) {
v_flex()
.child(section_header("Output", cx))
.child(
div().pl(cx.gutter_dimensions.full_width()).child(result),
)
.into_any_element()
} else {
Empty.into_any_element()
}
}),
disposition: BlockDisposition::Below,
priority: 0,
},
],
None,
cx,
);
editor
});
cx.observe(&step, Self::step_updated).detach();
cx.observe_release(&step, Self::step_released).detach();
cx.spawn(|this, mut cx| async move {
if let Ok(language) = language_registry.language_for_name("JSON").await {
this.update(&mut cx, |this, cx| {
this.tool_output_buffer.update(cx, |buffer, cx| {
buffer.set_language(Some(language), cx);
});
})
.ok();
}
})
.detach();
Self {
tool_output_buffer,
step: step.downgrade(),
editor,
}
}
pub fn step(&self) -> &WeakModel<WorkflowStep> {
&self.step
}
fn render_result(&mut self, cx: &mut ViewContext<Self>) -> Option<AnyElement> {
let step = self.step.upgrade()?;
let result = step.read(cx).resolution.as_ref()?;
match result {
Ok(result) => {
Some(
v_flex()
.child(result.title.clone())
.children(result.suggestion_groups.iter().filter_map(
|(buffer, suggestion_groups)| {
let buffer = buffer.read(cx);
let path = buffer.file().map(|f| f.path());
let snapshot = buffer.snapshot();
v_flex()
.mb_2()
.border_b_1()
.children(path.map(|path| format!("path: {}", path.display())))
.children(suggestion_groups.iter().map(|group| {
v_flex().pt_2().pl_2().children(
group.suggestions.iter().map(|suggestion| {
let range = suggestion.range().to_point(&snapshot);
v_flex()
.children(
suggestion.description().map(|desc| {
format!("description: {desc}")
}),
)
.child(format!("kind: {}", suggestion.kind()))
.children(suggestion.symbol_path().map(
|path| format!("symbol path: {}", path.0),
))
.child(format!(
"lines: {} - {}",
range.start.row + 1,
range.end.row + 1
))
}),
)
}))
.into()
},
))
.into_any_element(),
)
}
Err(error) => Some(format!("{:?}", error).into_any_element()),
}
}
fn step_updated(&mut self, step: Model<WorkflowStep>, cx: &mut ViewContext<Self>) {
self.tool_output_buffer.update(cx, |buffer, cx| {
let text = step.read(cx).tool_output.clone();
buffer.set_text(text, cx);
});
cx.notify();
}
fn step_released(&mut self, _: &mut WorkflowStep, cx: &mut ViewContext<Self>) {
cx.emit(EditorEvent::Closed);
}
fn resolve(&mut self, _: &Assist, cx: &mut ViewContext<Self>) {
self.step
.update(cx, |step, cx| {
step.resolve(cx);
})
.ok();
}
}
fn section_header(
name: &'static str,
cx: &mut editor::display_map::BlockContext,
) -> gpui::AnyElement {
h_flex()
.pl(cx.gutter_dimensions.full_width())
.h_11()
.w_full()
.relative()
.gap_1()
.child(
ButtonLike::new("role")
.style(ButtonStyle::Filled)
.child(Label::new(name).color(Color::Default)),
)
.into_any_element()
}
impl Render for WorkflowStepView {
fn render(&mut self, cx: &mut ViewContext<Self>) -> impl IntoElement {
div()
.key_context("ContextEditor")
.on_action(cx.listener(Self::resolve))
.flex_grow()
.bg(cx.theme().colors().editor_background)
.child(self.editor.clone())
}
}
impl EventEmitter<EditorEvent> for WorkflowStepView {}
impl FocusableView for WorkflowStepView {
fn focus_handle(&self, cx: &gpui::AppContext) -> gpui::FocusHandle {
self.editor.read(cx).focus_handle(cx)
}
}
impl Item for WorkflowStepView {
type Event = EditorEvent;
fn tab_content_text(&self, cx: &WindowContext) -> Option<SharedString> {
let step = self.step.upgrade()?.read(cx);
let context = step.context.upgrade()?.read(cx);
let buffer = context.buffer().read(cx);
let index = context
.workflow_step_index_for_range(&step.context_buffer_range, buffer)
.ok()?
+ 1;
Some(format!("Step {index}").into())
}
fn tab_icon(&self, _cx: &WindowContext) -> Option<ui::Icon> {
Some(Icon::new(IconName::SearchCode))
}
fn to_item_events(event: &Self::Event, mut f: impl FnMut(item::ItemEvent)) {
match event {
EditorEvent::Edited { .. } => {
f(item::ItemEvent::Edit);
}
EditorEvent::TitleChanged => {
f(item::ItemEvent::UpdateTab);
}
EditorEvent::Closed => f(item::ItemEvent::CloseItem),
_ => {}
}
}
fn tab_tooltip_text(&self, _cx: &AppContext) -> Option<SharedString> {
None
}
fn as_searchable(&self, _handle: &View<Self>) -> Option<Box<dyn SearchableItemHandle>> {
None
}
fn set_nav_history(&mut self, nav_history: pane::ItemNavHistory, cx: &mut ViewContext<Self>) {
self.editor.update(cx, |editor, cx| {
Item::set_nav_history(editor, nav_history, cx)
})
}
fn navigate(&mut self, data: Box<dyn std::any::Any>, cx: &mut ViewContext<Self>) -> bool {
self.editor
.update(cx, |editor, cx| Item::navigate(editor, data, cx))
}
fn deactivated(&mut self, cx: &mut ViewContext<Self>) {
self.editor
.update(cx, |editor, cx| Item::deactivated(editor, cx))
}
}


@@ -15,6 +15,35 @@ pub fn init(cx: &mut AppContext) {
SlashCommandRegistry::default_global(cx);
}
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum AfterCompletion {
/// Run the command
Run,
/// Continue composing the current argument, doesn't add a space
Compose,
/// Continue the command composition, adds a space
Continue,
}
impl From<bool> for AfterCompletion {
fn from(value: bool) -> Self {
if value {
AfterCompletion::Run
} else {
AfterCompletion::Continue
}
}
}
impl AfterCompletion {
pub fn run(&self) -> bool {
match self {
AfterCompletion::Run => true,
AfterCompletion::Compose | AfterCompletion::Continue => false,
}
}
}
#[derive(Debug)]
pub struct ArgumentCompletion {
/// The label to display for this completion.
@@ -22,7 +51,9 @@ pub struct ArgumentCompletion {
/// The new text that should be inserted into the command when this completion is accepted.
pub new_text: String,
/// Whether the command should be run when accepting this completion.
- pub run_command: bool,
pub after_completion: AfterCompletion,
/// Whether to replace all of the arguments, or to treat this as an independent argument.
pub replace_previous_arguments: bool,
}
pub trait SlashCommand: 'static + Send + Sync {
@@ -34,15 +65,18 @@
fn menu_text(&self) -> String;
fn complete_argument(
self: Arc<Self>,
- query: String,
arguments: &[String],
cancel: Arc<AtomicBool>,
workspace: Option<WeakView<Workspace>>,
cx: &mut WindowContext,
) -> Task<Result<Vec<ArgumentCompletion>>>;
fn requires_argument(&self) -> bool;
fn accepts_arguments(&self) -> bool {
self.requires_argument()
}
fn run(
self: Arc<Self>,
- argument: Option<&str>,
arguments: &[String],
workspace: WeakView<Workspace>,
// TODO: We're just using the `LspAdapterDelegate` here because that is
// what the extension API is already expecting.
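For reference, the `AfterCompletion` enum introduced above replaces the old `run_command: bool`, and the `From<bool>` impl maps the two legacy values onto `Run` and `Continue`. A standalone copy of that mapping, extracted from the diff above (with `matches!` used as a shorthand for the original match):

```rust
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum AfterCompletion {
    /// Run the command immediately.
    Run,
    /// Continue composing the current argument; doesn't add a space.
    Compose,
    /// Continue the command composition; adds a space.
    Continue,
}

impl From<bool> for AfterCompletion {
    // Old `run_command: bool` call sites map onto the two original behaviors;
    // only explicit construction can produce `Compose`.
    fn from(run_command: bool) -> Self {
        if run_command {
            AfterCompletion::Run
        } else {
            AfterCompletion::Continue
        }
    }
}

impl AfterCompletion {
    pub fn run(&self) -> bool {
        matches!(self, AfterCompletion::Run)
    }
}

fn main() {
    assert!(AfterCompletion::from(true).run());
    assert!(!AfterCompletion::from(false).run());
    assert_eq!(AfterCompletion::from(false), AfterCompletion::Continue);
}
```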


@@ -58,10 +58,14 @@ impl SlashCommandRegistry {
/// Unregisters the provided [`SlashCommand`].
pub fn unregister_command(&self, command: impl SlashCommand) {
self.unregister_command_by_name(command.name().as_str())
}
/// Unregisters the command with the given name.
pub fn unregister_command_by_name(&self, command_name: &str) {
let mut state = self.state.write();
- let command_name: Arc<str> = command.name().into();
- state.featured_commands.remove(&command_name);
- state.commands.remove(&command_name);
state.featured_commands.remove(command_name);
state.commands.remove(command_name);
}
/// Returns the names of registered [`SlashCommand`]s.
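The registry change above makes removal-by-name the primitive, with `unregister_command` delegating to it. A simplified stand-in showing the same shape (plain `HashSet`s of names here, whereas the real registry keeps command objects behind locked state; the names and structure below are illustrative):

```rust
use std::collections::HashSet;

// Simplified stand-in for the slash command registry: it tracks only command
// names, while the real implementation stores command objects behind a lock.
#[derive(Default)]
struct Registry {
    featured_commands: HashSet<String>,
    commands: HashSet<String>,
}

impl Registry {
    fn register_command(&mut self, name: &str, featured: bool) {
        if featured {
            self.featured_commands.insert(name.to_string());
        }
        self.commands.insert(name.to_string());
    }

    // Removal by name is the primitive; unregistering a command object
    // delegates to this in the refactored code above.
    fn unregister_command_by_name(&mut self, command_name: &str) {
        self.featured_commands.remove(command_name);
        self.commands.remove(command_name);
    }
}

fn main() {
    let mut registry = Registry::default();
    registry.register_command("file", true);
    registry.register_command("now", false);
    registry.unregister_command_by_name("file");
    assert!(!registry.commands.contains("file"));
    assert!(!registry.featured_commands.contains("file"));
    assert!(registry.commands.contains("now"));
}
```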


@@ -139,6 +139,11 @@ spec:
secretKeyRef:
name: anthropic
key: staff_api_key
- name: LLM_CLOSED_BETA_MODEL_NAME
valueFrom:
secretKeyRef:
name: llm-closed-beta
key: model_name
- name: GOOGLE_AI_API_KEY - name: GOOGLE_AI_API_KEY
valueFrom: valueFrom:
secretKeyRef: secretKeyRef:


@@ -295,7 +295,8 @@ CREATE UNIQUE INDEX "index_channel_buffer_collaborators_on_channel_id_connection
 CREATE TABLE "feature_flags" (
     "id" INTEGER PRIMARY KEY AUTOINCREMENT,
-    "flag" TEXT NOT NULL UNIQUE
+    "flag" TEXT NOT NULL UNIQUE,
+    "enabled_for_all" BOOLEAN NOT NULL DEFAULT false
 );

 CREATE INDEX "index_feature_flags" ON "feature_flags" ("id");


@@ -0,0 +1 @@
+alter table feature_flags add column enabled_for_all boolean not null default false;


@@ -1,4 +1,4 @@
-db-uri = "postgres://postgres@localhost/zed_llm"
-server-port = 8082
+db-uri = "postgres://postgres@localhost/zed"
+server-port = 8081
 jwt-secret = "the-postgrest-jwt-secret-for-authorization"
 log-level = "info"


@@ -1,4 +1,4 @@
-db-uri = "postgres://postgres@localhost/zed"
-server-port = 8081
+db-uri = "postgres://postgres@localhost/zed_llm"
+server-port = 8082
 jwt-secret = "the-postgrest-jwt-secret-for-authorization"
 log-level = "info"


@@ -1,5 +1,6 @@
 use super::ips_file::IpsFile;
 use crate::api::CloudflareIpCountryHeader;
+use crate::clickhouse::write_to_table;
 use crate::{api::slack, AppState, Error, Result};
 use anyhow::{anyhow, Context};
 use aws_sdk_s3::primitives::ByteStream;
@@ -529,12 +530,12 @@ struct ToUpload {
 impl ToUpload {
     pub async fn upload(&self, clickhouse_client: &clickhouse::Client) -> anyhow::Result<()> {
         const EDITOR_EVENTS_TABLE: &str = "editor_events";
-        Self::upload_to_table(EDITOR_EVENTS_TABLE, &self.editor_events, clickhouse_client)
+        write_to_table(EDITOR_EVENTS_TABLE, &self.editor_events, clickhouse_client)
             .await
             .with_context(|| format!("failed to upload to table '{EDITOR_EVENTS_TABLE}'"))?;

         const INLINE_COMPLETION_EVENTS_TABLE: &str = "inline_completion_events";
-        Self::upload_to_table(
+        write_to_table(
             INLINE_COMPLETION_EVENTS_TABLE,
             &self.inline_completion_events,
             clickhouse_client,
@@ -543,7 +544,7 @@ impl ToUpload {
             .with_context(|| format!("failed to upload to table '{INLINE_COMPLETION_EVENTS_TABLE}'"))?;

         const ASSISTANT_EVENTS_TABLE: &str = "assistant_events";
-        Self::upload_to_table(
+        write_to_table(
             ASSISTANT_EVENTS_TABLE,
             &self.assistant_events,
             clickhouse_client,
@@ -552,27 +553,27 @@ impl ToUpload {
             .with_context(|| format!("failed to upload to table '{ASSISTANT_EVENTS_TABLE}'"))?;

         const CALL_EVENTS_TABLE: &str = "call_events";
-        Self::upload_to_table(CALL_EVENTS_TABLE, &self.call_events, clickhouse_client)
+        write_to_table(CALL_EVENTS_TABLE, &self.call_events, clickhouse_client)
             .await
             .with_context(|| format!("failed to upload to table '{CALL_EVENTS_TABLE}'"))?;

         const CPU_EVENTS_TABLE: &str = "cpu_events";
-        Self::upload_to_table(CPU_EVENTS_TABLE, &self.cpu_events, clickhouse_client)
+        write_to_table(CPU_EVENTS_TABLE, &self.cpu_events, clickhouse_client)
             .await
             .with_context(|| format!("failed to upload to table '{CPU_EVENTS_TABLE}'"))?;

         const MEMORY_EVENTS_TABLE: &str = "memory_events";
-        Self::upload_to_table(MEMORY_EVENTS_TABLE, &self.memory_events, clickhouse_client)
+        write_to_table(MEMORY_EVENTS_TABLE, &self.memory_events, clickhouse_client)
             .await
             .with_context(|| format!("failed to upload to table '{MEMORY_EVENTS_TABLE}'"))?;

         const APP_EVENTS_TABLE: &str = "app_events";
-        Self::upload_to_table(APP_EVENTS_TABLE, &self.app_events, clickhouse_client)
+        write_to_table(APP_EVENTS_TABLE, &self.app_events, clickhouse_client)
             .await
             .with_context(|| format!("failed to upload to table '{APP_EVENTS_TABLE}'"))?;

         const SETTING_EVENTS_TABLE: &str = "setting_events";
-        Self::upload_to_table(
+        write_to_table(
             SETTING_EVENTS_TABLE,
             &self.setting_events,
             clickhouse_client,
@@ -581,7 +582,7 @@ impl ToUpload {
             .with_context(|| format!("failed to upload to table '{SETTING_EVENTS_TABLE}'"))?;

         const EXTENSION_EVENTS_TABLE: &str = "extension_events";
-        Self::upload_to_table(
+        write_to_table(
             EXTENSION_EVENTS_TABLE,
             &self.extension_events,
             clickhouse_client,
@@ -590,48 +591,22 @@ impl ToUpload {
             .with_context(|| format!("failed to upload to table '{EXTENSION_EVENTS_TABLE}'"))?;

         const EDIT_EVENTS_TABLE: &str = "edit_events";
-        Self::upload_to_table(EDIT_EVENTS_TABLE, &self.edit_events, clickhouse_client)
+        write_to_table(EDIT_EVENTS_TABLE, &self.edit_events, clickhouse_client)
             .await
             .with_context(|| format!("failed to upload to table '{EDIT_EVENTS_TABLE}'"))?;

         const ACTION_EVENTS_TABLE: &str = "action_events";
-        Self::upload_to_table(ACTION_EVENTS_TABLE, &self.action_events, clickhouse_client)
+        write_to_table(ACTION_EVENTS_TABLE, &self.action_events, clickhouse_client)
             .await
             .with_context(|| format!("failed to upload to table '{ACTION_EVENTS_TABLE}'"))?;

         const REPL_EVENTS_TABLE: &str = "repl_events";
-        Self::upload_to_table(REPL_EVENTS_TABLE, &self.repl_events, clickhouse_client)
+        write_to_table(REPL_EVENTS_TABLE, &self.repl_events, clickhouse_client)
             .await
             .with_context(|| format!("failed to upload to table '{REPL_EVENTS_TABLE}'"))?;

         Ok(())
     }
-
-    async fn upload_to_table<T: clickhouse::Row + Serialize + std::fmt::Debug>(
-        table: &str,
-        rows: &[T],
-        clickhouse_client: &clickhouse::Client,
-    ) -> anyhow::Result<()> {
-        if rows.is_empty() {
-            return Ok(());
-        }
-
-        let mut insert = clickhouse_client.insert(table)?;
-
-        for event in rows {
-            insert.write(event).await?;
-        }
-
-        insert.end().await?;
-
-        let event_count = rows.len();
-        log::info!(
-            "wrote {event_count} {event_specifier} to '{table}'",
-            event_specifier = if event_count == 1 { "event" } else { "events" }
-        );
-
-        Ok(())
-    }
 }

 pub fn serialize_country_code<S>(country_code: &str, serializer: S) -> Result<S::Ok, S::Error>


@@ -0,0 +1,28 @@
+use serde::Serialize;
+
+/// Writes the given rows to the specified Clickhouse table.
+pub async fn write_to_table<T: clickhouse::Row + Serialize + std::fmt::Debug>(
+    table: &str,
+    rows: &[T],
+    clickhouse_client: &clickhouse::Client,
+) -> anyhow::Result<()> {
+    if rows.is_empty() {
+        return Ok(());
+    }
+
+    let mut insert = clickhouse_client.insert(table)?;
+
+    for event in rows {
+        insert.write(event).await?;
+    }
+
+    insert.end().await?;
+
+    let event_count = rows.len();
+    log::info!(
+        "wrote {event_count} {event_specifier} to '{table}'",
+        event_specifier = if event_count == 1 { "event" } else { "events" }
+    );
+
+    Ok(())
+}
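The new `write_to_table` helper centralizes the batching behavior that was previously duplicated: skip empty batches, write every row, and log a correctly pluralized count. A pure-Rust sketch of that behavior, with a hypothetical `Sink` trait standing in for the ClickHouse insert handle (the real code uses the `clickhouse` crate and async I/O):

```rust
// Hypothetical stand-in for the ClickHouse insert handle, so the batching
// logic can be shown without a database.
trait Sink {
    fn write(&mut self, row: &str);
    fn end(self) -> usize; // finalizes the insert, returns rows written
}

struct VecSink(Vec<String>);

impl Sink for VecSink {
    fn write(&mut self, row: &str) {
        self.0.push(row.to_string());
    }
    fn end(self) -> usize {
        self.0.len()
    }
}

fn write_rows<S: Sink>(mut sink: S, rows: &[&str]) -> usize {
    // Mirror of the early return: empty batches never open an insert.
    if rows.is_empty() {
        return 0;
    }
    for row in rows {
        sink.write(row);
    }
    let count = sink.end();
    // Same pluralization trick as the real log line.
    println!(
        "wrote {count} {}",
        if count == 1 { "event" } else { "events" }
    );
    count
}
```

The empty-batch check matters in practice: without it, every 30-second flush cycle would open and close an insert for each of a dozen mostly-empty tables.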


@@ -312,10 +312,11 @@ impl Database {
     }

     /// Creates a new feature flag.
-    pub async fn create_user_flag(&self, flag: &str) -> Result<FlagId> {
+    pub async fn create_user_flag(&self, flag: &str, enabled_for_all: bool) -> Result<FlagId> {
         self.transaction(|tx| async move {
             let flag = feature_flag::Entity::insert(feature_flag::ActiveModel {
                 flag: ActiveValue::set(flag.to_string()),
+                enabled_for_all: ActiveValue::set(enabled_for_all),
                 ..Default::default()
             })
             .exec(&*tx)
@@ -350,7 +351,15 @@ impl Database {
                 Flag,
             }

-            let flags = user::Model {
+            let flags_enabled_for_all = feature_flag::Entity::find()
+                .filter(feature_flag::Column::EnabledForAll.eq(true))
+                .select_only()
+                .column(feature_flag::Column::Flag)
+                .into_values::<_, QueryAs>()
+                .all(&*tx)
+                .await?;
+
+            let flags_enabled_for_user = user::Model {
                 id: user,
                 ..Default::default()
             }
@@ -361,7 +370,10 @@ impl Database {
                 .all(&*tx)
                 .await?;

-            Ok(flags)
+            let mut all_flags = HashSet::from_iter(flags_enabled_for_all);
+            all_flags.extend(flags_enabled_for_user);
+            Ok(all_flags.into_iter().collect())
         })
         .await
     }
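The hunk above unions two flag sets: flags marked `enabled_for_all` and flags granted to the individual user, deduplicated through a `HashSet` so a flag that appears in both lists is returned once. The merge step in isolation (sorting added here only for a deterministic result; the real query returns flags unordered):

```rust
use std::collections::HashSet;

// Union of globally-enabled flags and per-user flags, deduplicated.
fn merge_flags(enabled_for_all: Vec<String>, enabled_for_user: Vec<String>) -> Vec<String> {
    let mut all_flags: HashSet<String> = HashSet::from_iter(enabled_for_all);
    all_flags.extend(enabled_for_user);
    let mut flags: Vec<String> = all_flags.into_iter().collect();
    flags.sort(); // deterministic order for display and tests
    flags
}
```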


@@ -8,6 +8,7 @@ pub struct Model {
     #[sea_orm(primary_key)]
     pub id: FlagId,
     pub flag: String,
+    pub enabled_for_all: bool,
 }

 #[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]


@@ -2,6 +2,7 @@ use crate::{
     db::{Database, NewUserParams},
     test_both_dbs,
 };
+use pretty_assertions::assert_eq;
 use std::sync::Arc;

 test_both_dbs!(
@@ -37,22 +38,27 @@ async fn test_get_user_flags(db: &Arc<Database>) {
         .unwrap()
         .user_id;

-    const CHANNELS_ALPHA: &str = "channels-alpha";
-    const NEW_SEARCH: &str = "new-search";
+    const FEATURE_FLAG_ONE: &str = "brand-new-ux";
+    const FEATURE_FLAG_TWO: &str = "cool-feature";
+    const FEATURE_FLAG_THREE: &str = "feature-enabled-for-everyone";

-    let channels_flag = db.create_user_flag(CHANNELS_ALPHA).await.unwrap();
-    let search_flag = db.create_user_flag(NEW_SEARCH).await.unwrap();
+    let feature_flag_one = db.create_user_flag(FEATURE_FLAG_ONE, false).await.unwrap();
+    let feature_flag_two = db.create_user_flag(FEATURE_FLAG_TWO, false).await.unwrap();
+    db.create_user_flag(FEATURE_FLAG_THREE, true).await.unwrap();

-    db.add_user_flag(user_1, channels_flag).await.unwrap();
-    db.add_user_flag(user_1, search_flag).await.unwrap();
+    db.add_user_flag(user_1, feature_flag_one).await.unwrap();
+    db.add_user_flag(user_1, feature_flag_two).await.unwrap();

-    db.add_user_flag(user_2, channels_flag).await.unwrap();
+    db.add_user_flag(user_2, feature_flag_one).await.unwrap();

     let mut user_1_flags = db.get_user_flags(user_1).await.unwrap();
     user_1_flags.sort();
-    assert_eq!(user_1_flags, &[CHANNELS_ALPHA, NEW_SEARCH]);
+    assert_eq!(
+        user_1_flags,
+        &[FEATURE_FLAG_ONE, FEATURE_FLAG_TWO, FEATURE_FLAG_THREE]
+    );

     let mut user_2_flags = db.get_user_flags(user_2).await.unwrap();
     user_2_flags.sort();
-    assert_eq!(user_2_flags, &[CHANNELS_ALPHA]);
+    assert_eq!(user_2_flags, &[FEATURE_FLAG_ONE, FEATURE_FLAG_THREE]);
 }


@@ -32,6 +32,7 @@ fn add_vars(env_content: String, vars: &mut Vec<(String, String)>) -> Result<()>
             toml::Value::String(value) => value,
             toml::Value::Integer(value) => value.to_string(),
             toml::Value::Float(value) => value.to_string(),
+            toml::Value::Boolean(value) => value.to_string(),
             _ => panic!("unsupported TOML value in .env.toml for key {}", key),
         };
         vars.push((key, value));
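The hunk above adds a `Boolean` arm so `true`/`false` values in `.env.toml` are stringified instead of hitting the catch-all panic. A self-contained sketch with a simplified stand-in enum (the real code matches on `toml::Value` from the `toml` crate):

```rust
// Simplified stand-in for `toml::Value`, just enough to show the match.
enum Value {
    String(String),
    Integer(i64),
    Float(f64),
    Boolean(bool),
    Array(Vec<Value>), // still unsupported, falls through to the panic
}

fn to_env_string(key: &str, value: Value) -> String {
    match value {
        Value::String(value) => value,
        Value::Integer(value) => value.to_string(),
        Value::Float(value) => value.to_string(),
        // The new arm: booleans become "true" / "false".
        Value::Boolean(value) => value.to_string(),
        _ => panic!("unsupported TOML value in .env.toml for key {}", key),
    }
}
```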


@@ -1,5 +1,6 @@
 pub mod api;
 pub mod auth;
+pub mod clickhouse;
 pub mod db;
 pub mod env;
 pub mod executor;
@@ -167,6 +168,7 @@ pub struct Config {
     pub google_ai_api_key: Option<Arc<str>>,
     pub anthropic_api_key: Option<Arc<str>>,
     pub anthropic_staff_api_key: Option<Arc<str>>,
+    pub llm_closed_beta_model_name: Option<Arc<str>>,
     pub qwen2_7b_api_key: Option<Arc<str>>,
     pub qwen2_7b_api_url: Option<Arc<str>>,
     pub zed_client_checksum_seed: Option<String>,
@@ -218,6 +220,7 @@ impl Config {
             google_ai_api_key: None,
             anthropic_api_key: None,
             anthropic_staff_api_key: None,
+            llm_closed_beta_model_name: None,
             clickhouse_url: None,
             clickhouse_user: None,
             clickhouse_password: None,
@@ -267,7 +270,7 @@ pub struct AppState {
     pub stripe_client: Option<Arc<stripe::Client>>,
     pub rate_limiter: Arc<RateLimiter>,
     pub executor: Executor,
-    pub clickhouse_client: Option<clickhouse::Client>,
+    pub clickhouse_client: Option<::clickhouse::Client>,
     pub config: Config,
 }
@@ -358,8 +361,8 @@ async fn build_blob_store_client(config: &Config) -> anyhow::Result<aws_sdk_s3::
     Ok(aws_sdk_s3::Client::new(&s3_config))
 }

-fn build_clickhouse_client(config: &Config) -> anyhow::Result<clickhouse::Client> {
-    Ok(clickhouse::Client::default()
+fn build_clickhouse_client(config: &Config) -> anyhow::Result<::clickhouse::Client> {
+    Ok(::clickhouse::Client::default()
         .with_url(
             config
                 .clickhouse_url


@@ -138,7 +138,11 @@ async fn validate_api_token<B>(mut req: Request<B>, next: Next<B>) -> impl IntoR
         ));
     }

-    tracing::Span::current().record("authn.jti", &claims.jti);
+    tracing::Span::current()
+        .record("user_id", claims.user_id)
+        .record("login", claims.github_user_login.clone())
+        .record("authn.jti", &claims.jti)
+        .record("is_staff", &claims.is_staff);

     req.extensions_mut().insert(claims);
     Ok::<_, Error>(next.run(req).await.into_response())
@@ -166,7 +170,10 @@ async fn perform_completion(
     country_code_header: Option<TypedHeader<CloudflareIpCountryHeader>>,
     Json(params): Json<PerformCompletionParams>,
 ) -> Result<impl IntoResponse> {
-    let model = normalize_model_name(params.provider, params.model);
+    let model = normalize_model_name(
+        state.db.model_names_for_provider(params.provider),
+        params.model,
+    );

     authorize_access_to_language_model(
         &state.config,
@@ -197,17 +204,21 @@ async fn perform_completion(
             let mut request: anthropic::Request =
                 serde_json::from_str(&params.provider_request.get())?;

-            // Parse the model, throw away the version that was included, and then set a specific
-            // version that we control on the server.
+            // Override the model on the request with the latest version of the model that is
+            // known to the server.
+            //
             // Right now, we use the version that's defined in `model.id()`, but we will likely
             // want to change this code once a new version of an Anthropic model is released,
             // so that users can use the new version, without having to update Zed.
-            request.model = match anthropic::Model::from_id(&request.model) {
-                Ok(model) => model.id().to_string(),
-                Err(_) => request.model,
+            request.model = match model.as_str() {
+                "claude-3-5-sonnet" => anthropic::Model::Claude3_5Sonnet.id().to_string(),
+                "claude-3-opus" => anthropic::Model::Claude3Opus.id().to_string(),
+                "claude-3-haiku" => anthropic::Model::Claude3Haiku.id().to_string(),
+                "claude-3-sonnet" => anthropic::Model::Claude3Sonnet.id().to_string(),
+                _ => request.model,
             };

-            let chunks = anthropic::stream_completion(
+            let (chunks, rate_limit_info) = anthropic::stream_completion_with_rate_limit_info(
                 &state.http_client,
                 anthropic::ANTHROPIC_API_URL,
                 api_key,
@@ -235,6 +246,19 @@ async fn perform_completion(
                 anthropic::AnthropicError::Other(err) => Error::Internal(err),
             })?;

+            if let Some(rate_limit_info) = rate_limit_info {
+                tracing::info!(
+                    target: "upstream rate limit",
+                    is_staff = claims.is_staff,
+                    provider = params.provider.to_string(),
+                    model = model,
+                    tokens_remaining = rate_limit_info.tokens_remaining,
+                    requests_remaining = rate_limit_info.requests_remaining,
+                    requests_reset = ?rate_limit_info.requests_reset,
+                    tokens_reset = ?rate_limit_info.tokens_reset,
+                );
+            }
+
             chunks
                 .map(move |event| {
                     let chunk = event?;
@@ -366,31 +390,13 @@ async fn perform_completion(
     })))
 }

-fn normalize_model_name(provider: LanguageModelProvider, name: String) -> String {
-    let prefixes: &[_] = match provider {
-        LanguageModelProvider::Anthropic => &[
-            "claude-3-5-sonnet",
-            "claude-3-haiku",
-            "claude-3-opus",
-            "claude-3-sonnet",
-        ],
-        LanguageModelProvider::OpenAi => &[
-            "gpt-3.5-turbo",
-            "gpt-4-turbo-preview",
-            "gpt-4o-mini",
-            "gpt-4o",
-            "gpt-4",
-        ],
-        LanguageModelProvider::Google => &[],
-        LanguageModelProvider::Zed => &[],
-    };
-    if let Some(prefix) = prefixes
+fn normalize_model_name(known_models: Vec<String>, name: String) -> String {
+    if let Some(known_model_name) = known_models
         .iter()
-        .filter(|&&prefix| name.starts_with(prefix))
-        .max_by_key(|&&prefix| prefix.len())
+        .filter(|known_model_name| name.starts_with(known_model_name.as_str()))
+        .max_by_key(|known_model_name| known_model_name.len())
     {
-        prefix.to_string()
+        known_model_name.to_string()
     } else {
         name
     }
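The refactored function now takes the known model names from the database instead of hardcoded per-provider prefix lists, but the matching rule is unchanged: among all known names that are a prefix of the requested name, pick the longest; otherwise pass the name through. The function in isolation, runnable as-is:

```rust
// Longest-prefix normalization: a versioned name like
// "claude-3-5-sonnet-20240620" collapses to the known base name.
fn normalize_model_name(known_models: Vec<String>, name: String) -> String {
    if let Some(known_model_name) = known_models
        .iter()
        .filter(|known_model_name| name.starts_with(known_model_name.as_str()))
        .max_by_key(|known_model_name| known_model_name.len())
    {
        known_model_name.to_string()
    } else {
        name
    }
}
```

The `max_by_key` on length is what makes overlapping names safe: with both `gpt-4` and `gpt-4o` known, `gpt-4o-mini` normalizes to `gpt-4o`, not `gpt-4`.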
@@ -548,33 +554,75 @@ impl<S> Drop for TokenCountingStream<S> {
                 .await
                 .log_err();

-            if let Some((clickhouse_client, usage)) = state.clickhouse_client.as_ref().zip(usage) {
-                report_llm_usage(
-                    clickhouse_client,
-                    LlmUsageEventRow {
-                        time: Utc::now().timestamp_millis(),
-                        user_id: claims.user_id as i32,
-                        is_staff: claims.is_staff,
-                        plan: match claims.plan {
-                            Plan::Free => "free".to_string(),
-                            Plan::ZedPro => "zed_pro".to_string(),
-                        },
-                        model,
-                        provider: provider.to_string(),
-                        input_token_count: input_token_count as u64,
-                        output_token_count: output_token_count as u64,
-                        requests_this_minute: usage.requests_this_minute as u64,
-                        tokens_this_minute: usage.tokens_this_minute as u64,
-                        tokens_this_day: usage.tokens_this_day as u64,
-                        input_tokens_this_month: usage.input_tokens_this_month as u64,
-                        output_tokens_this_month: usage.output_tokens_this_month as u64,
-                        spending_this_month: usage.spending_this_month as u64,
-                        lifetime_spending: usage.lifetime_spending as u64,
-                    },
-                )
-                .await
-                .log_err();
+            if let Some(usage) = usage {
+                tracing::info!(
+                    target: "user usage",
+                    user_id = claims.user_id,
+                    login = claims.github_user_login,
+                    authn.jti = claims.jti,
+                    is_staff = claims.is_staff,
+                    requests_this_minute = usage.requests_this_minute,
+                    tokens_this_minute = usage.tokens_this_minute,
+                );
+
+                if let Some(clickhouse_client) = state.clickhouse_client.as_ref() {
+                    report_llm_usage(
+                        clickhouse_client,
+                        LlmUsageEventRow {
+                            time: Utc::now().timestamp_millis(),
+                            user_id: claims.user_id as i32,
+                            is_staff: claims.is_staff,
+                            plan: match claims.plan {
+                                Plan::Free => "free".to_string(),
+                                Plan::ZedPro => "zed_pro".to_string(),
+                            },
+                            model,
+                            provider: provider.to_string(),
+                            input_token_count: input_token_count as u64,
+                            output_token_count: output_token_count as u64,
+                            requests_this_minute: usage.requests_this_minute as u64,
+                            tokens_this_minute: usage.tokens_this_minute as u64,
+                            tokens_this_day: usage.tokens_this_day as u64,
+                            input_tokens_this_month: usage.input_tokens_this_month as u64,
+                            output_tokens_this_month: usage.output_tokens_this_month as u64,
+                            spending_this_month: usage.spending_this_month as u64,
+                            lifetime_spending: usage.lifetime_spending as u64,
+                        },
+                    )
+                    .await
+                    .log_err();
+                }
             }
         })
     }
 }
+
+pub fn log_usage_periodically(state: Arc<LlmState>) {
+    state.executor.clone().spawn_detached(async move {
+        loop {
+            state
+                .executor
+                .sleep(std::time::Duration::from_secs(30))
+                .await;
+
+            let Some(usages) = state
+                .db
+                .get_application_wide_usages_by_model(Utc::now())
+                .await
+                .log_err()
+            else {
+                continue;
+            };
+
+            for usage in usages {
+                tracing::info!(
+                    target: "computed usage",
+                    provider = usage.provider.to_string(),
+                    model = usage.model,
+                    requests_this_minute = usage.requests_this_minute,
+                    tokens_this_minute = usage.tokens_this_minute,
+                );
+            }
+        }
+    })
+}


@@ -12,11 +12,12 @@ pub fn authorize_access_to_language_model(
     model: &str,
 ) -> Result<()> {
     authorize_access_for_country(config, country_code, provider)?;
-    authorize_access_to_model(claims, provider, model)?;
+    authorize_access_to_model(config, claims, provider, model)?;
     Ok(())
 }

 fn authorize_access_to_model(
+    config: &Config,
     claims: &LlmTokenClaims,
     provider: LanguageModelProvider,
     model: &str,
@@ -25,15 +26,25 @@ fn authorize_access_to_model(
         return Ok(());
     }

-    match (provider, model) {
-        (LanguageModelProvider::Anthropic, model) if model.starts_with("claude-3.5-sonnet") => {
-            Ok(())
+    match provider {
+        LanguageModelProvider::Anthropic => {
+            if model == "claude-3-5-sonnet" {
+                return Ok(());
+            }
+
+            if claims.has_llm_closed_beta_feature_flag
+                && Some(model) == config.llm_closed_beta_model_name.as_deref()
+            {
+                return Ok(());
+            }
         }
-        _ => Err(Error::http(
-            StatusCode::FORBIDDEN,
-            format!("access to model {model:?} is not included in your plan"),
-        ))?,
+        _ => {}
     }
+
+    Err(Error::http(
+        StatusCode::FORBIDDEN,
+        format!("access to model {model:?} is not included in your plan"),
+    ))
 }

 fn authorize_access_for_country(
@@ -240,14 +251,14 @@ mod tests {
             (
                 Plan::ZedPro,
                 LanguageModelProvider::Anthropic,
-                "claude-3.5-sonnet",
+                "claude-3-5-sonnet",
                 true,
             ),
             // Free plan should have access to claude-3.5-sonnet
             (
                 Plan::Free,
                 LanguageModelProvider::Anthropic,
-                "claude-3.5-sonnet",
+                "claude-3-5-sonnet",
                 true,
             ),
             // Pro plan should NOT have access to other Anthropic models
@@ -303,7 +314,7 @@ mod tests {
             // Staff should have access to all models
             let test_cases = vec![
                 (LanguageModelProvider::Anthropic, "claude-3-5-sonnet"),
-                (LanguageModelProvider::Anthropic, "claude-3.5-sonnet"),
                 (LanguageModelProvider::Anthropic, "claude-2"),
                 (LanguageModelProvider::Anthropic, "claude-123-agi"),
                 (LanguageModelProvider::OpenAi, "gpt-4"),
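The new authorization branch opens `claude-3-5-sonnet` to everyone and additionally admits exactly one server-configured closed-beta model, but only for users whose token claims carry the beta feature flag. A standalone sketch of that decision, with plain `&str`/`String` types standing in for the real `Config`, `LlmTokenClaims`, and `Error` types:

```rust
// Sketch of the Anthropic arm of `authorize_access_to_model`.
// `closed_beta_model_name` stands in for config.llm_closed_beta_model_name.
fn authorize_anthropic_model(
    model: &str,
    has_closed_beta_flag: bool,
    closed_beta_model_name: Option<&str>,
) -> Result<(), String> {
    // The generally available model is open to every plan.
    if model == "claude-3-5-sonnet" {
        return Ok(());
    }
    // `Option::as_deref`-style comparison: only matches when a beta model
    // is configured AND the user carries the flag AND the names agree.
    if has_closed_beta_flag && Some(model) == closed_beta_model_name {
        return Ok(());
    }
    Err(format!(
        "access to model {model:?} is not included in your plan"
    ))
}
```

Comparing `Some(model) == closed_beta_model_name` keeps the "no beta model configured" case safe: `Some(model)` can never equal `None`, so no extra `is_some()` check is needed.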


@@ -67,6 +67,21 @@ impl LlmDatabase {
         Ok(())
     }

+    /// Returns the names of the known models for the given [`LanguageModelProvider`].
+    pub fn model_names_for_provider(&self, provider: LanguageModelProvider) -> Vec<String> {
+        self.models
+            .keys()
+            .filter_map(|(model_provider, model_name)| {
+                if model_provider == &provider {
+                    Some(model_name)
+                } else {
+                    None
+                }
+            })
+            .cloned()
+            .collect::<Vec<_>>()
+    }
+
     pub fn model(&self, provider: LanguageModelProvider, name: &str) -> Result<&model::Model> {
         Ok(self
             .models
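Because the in-memory model table is keyed by `(provider, model_name)` pairs, listing one provider's models is a `filter_map` over the keys. A runnable sketch with a `BTreeMap` and string providers standing in for the real key and enum types:

```rust
use std::collections::BTreeMap;

// The model table keyed by (provider, name); the value type is irrelevant
// here, so a u32 id stands in for the real model record.
fn model_names_for_provider(
    models: &BTreeMap<(&'static str, String), u32>,
    provider: &str,
) -> Vec<String> {
    models
        .keys()
        .filter_map(|(model_provider, model_name)| {
            if *model_provider == provider {
                Some(model_name)
            } else {
                None
            }
        })
        .cloned()
        .collect()
}

// Illustrative fixture exercising the lookup.
fn anthropic_models() -> Vec<String> {
    let mut models = BTreeMap::new();
    models.insert(("anthropic", "claude-3-5-sonnet".to_string()), 1);
    models.insert(("anthropic", "claude-3-opus".to_string()), 2);
    models.insert(("openai", "gpt-4o".to_string()), 3);
    model_names_for_provider(&models, "anthropic")
}
```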


@@ -1,5 +1,6 @@
 use crate::db::UserId;
 use chrono::Duration;
+use futures::StreamExt as _;
 use rpc::LanguageModelProvider;
 use sea_orm::QuerySelect;
 use std::{iter, str::FromStr};
@@ -18,6 +19,14 @@ pub struct Usage {
     pub lifetime_spending: usize,
 }

+#[derive(Debug, PartialEq, Clone)]
+pub struct ApplicationWideUsage {
+    pub provider: LanguageModelProvider,
+    pub model: String,
+    pub requests_this_minute: usize,
+    pub tokens_this_minute: usize,
+}
+
 #[derive(Clone, Copy, Debug, Default)]
 pub struct ActiveUserCount {
     pub users_in_recent_minutes: usize,
@@ -63,6 +72,72 @@ impl LlmDatabase {
         Ok(())
     }

+    pub async fn get_application_wide_usages_by_model(
+        &self,
+        now: DateTimeUtc,
+    ) -> Result<Vec<ApplicationWideUsage>> {
+        self.transaction(|tx| async move {
+            let past_minute = now - Duration::minutes(1);
+            let requests_per_minute = self.usage_measure_ids[&UsageMeasure::RequestsPerMinute];
+            let tokens_per_minute = self.usage_measure_ids[&UsageMeasure::TokensPerMinute];
+
+            let mut results = Vec::new();
+            for ((provider, model_name), model) in self.models.iter() {
+                let mut usages = usage::Entity::find()
+                    .filter(
+                        usage::Column::Timestamp
+                            .gte(past_minute.naive_utc())
+                            .and(usage::Column::IsStaff.eq(false))
+                            .and(usage::Column::ModelId.eq(model.id))
+                            .and(
+                                usage::Column::MeasureId
+                                    .eq(requests_per_minute)
+                                    .or(usage::Column::MeasureId.eq(tokens_per_minute)),
+                            ),
+                    )
+                    .stream(&*tx)
+                    .await?;
+
+                let mut requests_this_minute = 0;
+                let mut tokens_this_minute = 0;
+                while let Some(usage) = usages.next().await {
+                    let usage = usage?;
+                    if usage.measure_id == requests_per_minute {
+                        requests_this_minute += Self::get_live_buckets(
+                            &usage,
+                            now.naive_utc(),
+                            UsageMeasure::RequestsPerMinute,
+                        )
+                        .0
+                        .iter()
+                        .copied()
+                        .sum::<i64>() as usize;
+                    } else if usage.measure_id == tokens_per_minute {
+                        tokens_this_minute += Self::get_live_buckets(
+                            &usage,
+                            now.naive_utc(),
+                            UsageMeasure::TokensPerMinute,
+                        )
+                        .0
+                        .iter()
+                        .copied()
+                        .sum::<i64>() as usize;
+                    }
+                }
+
+                results.push(ApplicationWideUsage {
+                    provider: *provider,
+                    model: model_name.clone(),
+                    requests_this_minute,
+                    tokens_this_minute,
+                })
+            }
+
+            Ok(results)
+        })
+        .await
+    }
+
     pub async fn get_usage(
         &self,
         user_id: UserId,
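The aggregation above streams per-model usage rows and sums only the "live" buckets, i.e. those whose timestamps fall within the trailing minute. A simplified, runnable sketch of that windowed summation, using plain integer seconds in place of `chrono` timestamps and skipping the database entirely:

```rust
// Hypothetical simplified usage row: one count per timestamped bucket.
struct UsageRow {
    model: &'static str,
    bucket_timestamps: Vec<u64>, // seconds, one per bucket
    bucket_counts: Vec<i64>,
}

// Sum the counts of buckets that fall within the trailing 60 seconds
// for the requested model, mirroring the "live buckets" idea.
fn requests_this_minute(rows: &[UsageRow], model: &str, now: u64) -> i64 {
    let past_minute = now.saturating_sub(60);
    rows.iter()
        .filter(|row| row.model == model)
        .flat_map(|row| {
            row.bucket_timestamps
                .iter()
                .zip(&row.bucket_counts)
                .filter(move |(ts, _)| **ts >= past_minute)
                .map(|(_, count)| *count)
        })
        .sum()
}

// Illustrative fixture: three buckets, only the last two are "live" at t=190.
fn sample_count() -> i64 {
    let rows = [UsageRow {
        model: "claude-3-5-sonnet",
        bucket_timestamps: vec![100, 130, 170],
        bucket_counts: vec![5, 3, 2],
    }];
    requests_this_minute(&rows, "claude-3-5-sonnet", 190)
}
```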


@@ -1,6 +1,8 @@
-use anyhow::Result;
+use anyhow::{Context, Result};
 use serde::Serialize;

+use crate::clickhouse::write_to_table;
+
 #[derive(Serialize, Debug, clickhouse::Row)]
 pub struct LlmUsageEventRow {
     pub time: i64,
@@ -40,9 +42,10 @@ pub struct LlmRateLimitEventRow {
 }

 pub async fn report_llm_usage(client: &clickhouse::Client, row: LlmUsageEventRow) -> Result<()> {
-    let mut insert = client.insert("llm_usage_events")?;
-    insert.write(&row).await?;
-    insert.end().await?;
+    const LLM_USAGE_EVENTS_TABLE: &str = "llm_usage_events";
+    write_to_table(LLM_USAGE_EVENTS_TABLE, &[row], client)
+        .await
+        .with_context(|| format!("failed to upload to table '{LLM_USAGE_EVENTS_TABLE}'"))?;
     Ok(())
 }

@@ -50,8 +53,9 @@ pub async fn report_llm_rate_limit(
     client: &clickhouse::Client,
     row: LlmRateLimitEventRow,
 ) -> Result<()> {
-    let mut insert = client.insert("llm_rate_limits")?;
-    insert.write(&row).await?;
-    insert.end().await?;
+    const LLM_RATE_LIMIT_EVENTS_TABLE: &str = "llm_rate_limit_events";
+    write_to_table(LLM_RATE_LIMIT_EVENTS_TABLE, &[row], client)
+        .await
+        .with_context(|| format!("failed to upload to table '{LLM_RATE_LIMIT_EVENTS_TABLE}'"))?;
     Ok(())
 }


@@ -13,7 +13,15 @@ pub struct LlmTokenClaims {
     pub exp: u64,
     pub jti: String,
     pub user_id: u64,
+    // This field is temporarily optional so it can be added
+    // in a backwards-compatible way. We can make it required
+    // once all of the LLM tokens have cycled (~1 hour after
+    // this change has been deployed).
+    #[serde(default)]
+    pub github_user_login: Option<String>,
     pub is_staff: bool,
+    #[serde(default)]
+    pub has_llm_closed_beta_feature_flag: bool,
     pub plan: rpc::proto::Plan,
 }

@@ -22,7 +30,9 @@ const LLM_TOKEN_LIFETIME: Duration = Duration::from_secs(60 * 60);
 impl LlmTokenClaims {
     pub fn create(
         user_id: UserId,
+        github_user_login: String,
         is_staff: bool,
+        has_llm_closed_beta_feature_flag: bool,
         plan: rpc::proto::Plan,
         config: &Config,
     ) -> Result<String> {
@@ -37,7 +47,9 @@ impl LlmTokenClaims {
             exp: (now + LLM_TOKEN_LIFETIME).timestamp() as u64,
             jti: uuid::Uuid::new_v4().to_string(),
             user_id: user_id.to_proto(),
+            github_user_login: Some(github_user_login),
             is_staff,
+            has_llm_closed_beta_feature_flag,
             plan,
         };


@@ -5,7 +5,7 @@ use axum::{
     routing::get,
     Extension, Router,
 };
-use collab::llm::db::LlmDatabase;
+use collab::llm::{db::LlmDatabase, log_usage_periodically};
 use collab::migrations::run_database_migrations;
 use collab::{api::billing::poll_stripe_events_periodically, llm::LlmState, ServiceMode};
 use collab::{
@@ -95,6 +95,8 @@ async fn main() -> Result<()> {
             let state = LlmState::new(config.clone(), Executor::Production).await?;

+            log_usage_periodically(state.clone());
+
             app = app
                 .merge(collab::llm::routes())
                 .layer(Extension(state.clone()));
@@ -150,7 +152,10 @@ async fn main() -> Result<()> {
                     "http_request",
                     method = ?request.method(),
                     matched_path,
-                    authn.jti = tracing::field::Empty
+                    user_id = tracing::field::Empty,
+                    login = tracing::field::Empty,
+                    authn.jti = tracing::field::Empty,
+                    is_staff = tracing::field::Empty
                 )
             })
             .on_response(


@@ -71,7 +71,7 @@ use std::{
     time::{Duration, Instant},
 };
 use time::OffsetDateTime;
-use tokio::sync::{watch, Semaphore};
+use tokio::sync::{watch, MutexGuard, Semaphore};
 use tower::ServiceBuilder;
 use tracing::{
     field::{self},
@@ -192,7 +192,7 @@ impl Session {
} }
} }
pub async fn current_plan(&self) -> anyhow::Result<proto::Plan> { pub async fn current_plan(&self, db: MutexGuard<'_, DbHandle>) -> anyhow::Result<proto::Plan> {
if self.is_staff() { if self.is_staff() {
return Ok(proto::Plan::ZedPro); return Ok(proto::Plan::ZedPro);
} }
@@ -201,7 +201,6 @@ impl Session {
return Ok(proto::Plan::Free); return Ok(proto::Plan::Free);
}; };
let db = self.db().await;
if db.has_active_billing_subscription(user_id).await? { if db.has_active_billing_subscription(user_id).await? {
Ok(proto::Plan::ZedPro) Ok(proto::Plan::ZedPro)
} else { } else {
@@ -3500,7 +3499,7 @@ fn should_auto_subscribe_to_channels(version: ZedVersion) -> bool {
} }
async fn update_user_plan(_user_id: UserId, session: &Session) -> Result<()> { async fn update_user_plan(_user_id: UserId, session: &Session) -> Result<()> {
let plan = session.current_plan().await?; let plan = session.current_plan(session.db().await).await?;
session session
.peer .peer
@@ -4503,7 +4502,7 @@ async fn count_language_model_tokens(
}; };
authorize_access_to_legacy_llm_endpoints(&session).await?; authorize_access_to_legacy_llm_endpoints(&session).await?;
let rate_limit: Box<dyn RateLimit> = match session.current_plan().await? { let rate_limit: Box<dyn RateLimit> = match session.current_plan(session.db().await).await? {
proto::Plan::ZedPro => Box::new(ZedProCountLanguageModelTokensRateLimit), proto::Plan::ZedPro => Box::new(ZedProCountLanguageModelTokensRateLimit),
proto::Plan::Free => Box::new(FreeCountLanguageModelTokensRateLimit), proto::Plan::Free => Box::new(FreeCountLanguageModelTokensRateLimit),
}; };
@@ -4623,7 +4622,7 @@ async fn compute_embeddings(
let api_key = api_key.context("no OpenAI API key configured on the server")?; let api_key = api_key.context("no OpenAI API key configured on the server")?;
authorize_access_to_legacy_llm_endpoints(&session).await?; authorize_access_to_legacy_llm_endpoints(&session).await?;
let rate_limit: Box<dyn RateLimit> = match session.current_plan().await? { let rate_limit: Box<dyn RateLimit> = match session.current_plan(session.db().await).await? {
proto::Plan::ZedPro => Box::new(ZedProComputeEmbeddingsRateLimit), proto::Plan::ZedPro => Box::new(ZedProComputeEmbeddingsRateLimit),
proto::Plan::Free => Box::new(FreeComputeEmbeddingsRateLimit), proto::Plan::Free => Box::new(FreeComputeEmbeddingsRateLimit),
}; };
@@ -4919,7 +4918,10 @@ async fn get_llm_api_token(
let db = session.db().await; let db = session.db().await;
let flags = db.get_user_flags(session.user_id()).await?; let flags = db.get_user_flags(session.user_id()).await?;
if !session.is_staff() && !flags.iter().any(|flag| flag == "language-models") { let has_language_models_feature_flag = flags.iter().any(|flag| flag == "language-models");
let has_llm_closed_beta_feature_flag = flags.iter().any(|flag| flag == "llm-closed-beta");
if !session.is_staff() && !has_language_models_feature_flag {
Err(anyhow!("permission denied"))? Err(anyhow!("permission denied"))?
} }
@@ -4940,11 +4942,12 @@ async fn get_llm_api_token(
if Utc::now().naive_utc() - account_created_at < MIN_ACCOUNT_AGE_FOR_LLM_USE { if Utc::now().naive_utc() - account_created_at < MIN_ACCOUNT_AGE_FOR_LLM_USE {
Err(anyhow!("account too young"))? Err(anyhow!("account too young"))?
} }
let token = LlmTokenClaims::create( let token = LlmTokenClaims::create(
user.id, user.id,
user.github_login.clone(),
session.is_staff(), session.is_staff(),
session.current_plan().await?, has_llm_closed_beta_feature_flag,
session.current_plan(db).await?,
&session.app_state.config, &session.app_state.config,
)?; )?;
response.send(proto::GetLlmTokenResponse { token })?; response.send(proto::GetLlmTokenResponse { token })?;


@@ -1,7 +1,6 @@
 use crate::db::{self, ChannelRole, NewUserParams};
 use anyhow::Context;
-use chrono::{DateTime, Utc};
 use db::Database;
 use serde::{de::DeserializeOwned, Deserialize};
 use std::{fmt::Write, fs, path::Path};
@@ -13,7 +12,6 @@ struct GitHubUser {
     id: i32,
     login: String,
     email: Option<String>,
-    created_at: DateTime<Utc>,
 }
 #[derive(Deserialize)]
@@ -44,6 +42,17 @@ pub async fn seed(config: &Config, db: &Database, force: bool) -> anyhow::Result
     let mut first_user = None;
     let mut others = vec![];
+    let flag_names = ["remoting", "language-models"];
+    let mut flags = Vec::new();
+    for flag_name in flag_names {
+        let flag = db
+            .create_user_flag(flag_name, false)
+            .await
+            .unwrap_or_else(|_| panic!("failed to create flag: '{flag_name}'"));
+        flags.push(flag);
+    }
     for admin_login in seed_config.admins {
         let user = fetch_github::<GitHubUser>(
             &client,
@@ -66,6 +75,15 @@ pub async fn seed(config: &Config, db: &Database, force: bool) -> anyhow::Result
         } else {
             others.push(user.user_id)
         }
+        for flag in &flags {
+            db.add_user_flag(user.user_id, *flag)
+                .await
+                .context(format!(
+                    "Unable to enable flag '{}' for user '{}'",
+                    flag, user.user_id
+                ))?;
+        }
     }
     for channel in seed_config.channels {
@@ -86,6 +104,7 @@ pub async fn seed(config: &Config, db: &Database, force: bool) -> anyhow::Result
         }
     }
+    // TODO: Fix this later
     if let Some(number_of_users) = seed_config.number_of_users {
         // Fetch 100 other random users from GitHub and insert them into the database
         // (for testing autocompleters, etc.)
@@ -105,15 +124,23 @@ pub async fn seed(config: &Config, db: &Database, force: bool) -> anyhow::Result
         for github_user in users {
             last_user_id = Some(github_user.id);
             user_count += 1;
-            db.get_or_create_user_by_github_account(
-                &github_user.login,
-                Some(github_user.id),
-                github_user.email.as_deref(),
-                Some(github_user.created_at),
-                None,
-            )
-            .await
-            .expect("failed to insert user");
+            let user = db
+                .get_or_create_user_by_github_account(
+                    &github_user.login,
+                    Some(github_user.id),
+                    github_user.email.as_deref(),
+                    None,
+                    None,
+                )
+                .await
+                .expect("failed to insert user");
+            for flag in &flags {
+                db.add_user_flag(user.id, *flag).await.context(format!(
+                    "Unable to enable flag '{}' for user '{}'",
+                    flag, user.id
+                ))?;
+            }
         }
     }
 }
@@ -132,9 +159,9 @@ async fn fetch_github<T: DeserializeOwned>(client: &reqwest::Client, url: &str)
         .header("user-agent", "zed")
         .send()
         .await
-        .unwrap_or_else(|_| panic!("failed to fetch '{}'", url));
+        .unwrap_or_else(|error| panic!("failed to fetch '{url}': {error}"));
     response
         .json()
         .await
-        .unwrap_or_else(|_| panic!("failed to deserialize github user from '{}'", url))
+        .unwrap_or_else(|error| panic!("failed to deserialize github user from '{url}': {error}"))
 }


@@ -667,6 +667,7 @@ impl TestServer {
                 google_ai_api_key: None,
                 anthropic_api_key: None,
                 anthropic_staff_api_key: None,
+                llm_closed_beta_model_name: None,
                 clickhouse_url: None,
                 clickhouse_user: None,
                 clickhouse_password: None,


@@ -314,7 +314,6 @@ impl MessageEditor {
                     server_id: LanguageServerId(0), // TODO: Make this optional or something?
                     lsp_completion: Default::default(), // TODO: Make this optional or something?
                     confirm: None,
-                    show_new_completions_on_confirm: false,
                 }
             })
             .collect()


@@ -0,0 +1,29 @@
[package]
name = "context_servers"
version = "0.1.0"
edition = "2021"
publish = false
license = "GPL-3.0-or-later"
[lints]
workspace = true
[lib]
path = "src/context_servers.rs"
[dependencies]
anyhow.workspace = true
collections.workspace = true
futures.workspace = true
gpui.workspace = true
log.workspace = true
parking_lot.workspace = true
postage.workspace = true
schemars.workspace = true
serde.workspace = true
serde_json.workspace = true
settings.workspace = true
smol.workspace = true
url = { workspace = true, features = ["serde"] }
util.workspace = true
workspace.workspace = true


@@ -0,0 +1 @@
../../LICENSE-GPL


@@ -0,0 +1,432 @@
use anyhow::{anyhow, Context, Result};
use collections::HashMap;
use futures::{channel::oneshot, io::BufWriter, select, AsyncRead, AsyncWrite, FutureExt};
use gpui::{AsyncAppContext, BackgroundExecutor, Task};
use parking_lot::Mutex;
use postage::barrier;
use serde::{de::DeserializeOwned, Deserialize, Serialize};
use serde_json::{value::RawValue, Value};
use smol::{
channel,
io::{AsyncBufReadExt, AsyncWriteExt, BufReader},
process::{self, Child},
};
use std::{
fmt,
path::PathBuf,
sync::{
atomic::{AtomicI32, Ordering::SeqCst},
Arc,
},
time::{Duration, Instant},
};
use util::TryFutureExt;
const JSON_RPC_VERSION: &str = "2.0";
const REQUEST_TIMEOUT: Duration = Duration::from_secs(60);
type ResponseHandler = Box<dyn Send + FnOnce(Result<String, Error>)>;
type NotificationHandler = Box<dyn Send + FnMut(RequestId, Value, AsyncAppContext)>;
#[derive(Debug, Clone, Eq, PartialEq, Hash, Serialize, Deserialize)]
#[serde(untagged)]
pub enum RequestId {
Int(i32),
Str(String),
}
pub struct Client {
server_id: ContextServerId,
next_id: AtomicI32,
outbound_tx: channel::Sender<String>,
name: Arc<str>,
notification_handlers: Arc<Mutex<HashMap<&'static str, NotificationHandler>>>,
response_handlers: Arc<Mutex<Option<HashMap<RequestId, ResponseHandler>>>>,
#[allow(clippy::type_complexity)]
#[allow(dead_code)]
io_tasks: Mutex<Option<(Task<Option<()>>, Task<Option<()>>)>>,
#[allow(dead_code)]
output_done_rx: Mutex<Option<barrier::Receiver>>,
executor: BackgroundExecutor,
server: Arc<Mutex<Option<Child>>>,
}
#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
#[repr(transparent)]
pub struct ContextServerId(pub String);
#[derive(Serialize, Deserialize)]
struct Request<'a, T> {
jsonrpc: &'static str,
id: RequestId,
method: &'a str,
params: T,
}
#[derive(Serialize, Deserialize)]
struct AnyResponse<'a> {
jsonrpc: &'a str,
id: RequestId,
#[serde(default)]
error: Option<Error>,
#[serde(borrow)]
result: Option<&'a RawValue>,
}
#[derive(Deserialize)]
#[allow(dead_code)]
struct Response<T> {
jsonrpc: &'static str,
id: RequestId,
#[serde(flatten)]
value: CspResult<T>,
}
#[derive(Deserialize)]
#[serde(rename_all = "snake_case")]
enum CspResult<T> {
#[serde(rename = "result")]
Ok(Option<T>),
#[allow(dead_code)]
Error(Option<Error>),
}
#[derive(Serialize, Deserialize)]
struct Notification<'a, T> {
jsonrpc: &'static str,
id: RequestId,
#[serde(borrow)]
method: &'a str,
params: T,
}
#[derive(Debug, Clone, Deserialize)]
struct AnyNotification<'a> {
jsonrpc: &'a str,
id: RequestId,
method: String,
#[serde(default)]
params: Option<Value>,
}
#[derive(Debug, Serialize, Deserialize)]
struct Error {
message: String,
}
#[derive(Debug, Clone, Deserialize)]
pub struct ModelContextServerBinary {
pub executable: PathBuf,
pub args: Vec<String>,
pub env: Option<HashMap<String, String>>,
}
impl Client {
/// Creates a new Client instance for a context server.
///
/// This function initializes a new Client by spawning a child process for the context server,
/// setting up communication channels, and initializing handlers for input/output operations.
/// It takes a server ID, binary information, and an async app context as input.
pub fn new(
server_id: ContextServerId,
binary: ModelContextServerBinary,
cx: AsyncAppContext,
) -> Result<Self> {
log::info!(
"starting context server (executable={:?}, args={:?})",
binary.executable,
&binary.args
);
let mut command = process::Command::new(&binary.executable);
command
.args(&binary.args)
.envs(binary.env.unwrap_or_default())
.stdin(std::process::Stdio::piped())
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::piped())
.kill_on_drop(true);
let mut server = command.spawn().with_context(|| {
format!(
"failed to spawn command. (path={:?}, args={:?})",
binary.executable, &binary.args
)
})?;
let stdin = server.stdin.take().unwrap();
let stdout = server.stdout.take().unwrap();
let stderr = server.stderr.take().unwrap();
let (outbound_tx, outbound_rx) = channel::unbounded::<String>();
let (output_done_tx, output_done_rx) = barrier::channel();
let notification_handlers =
Arc::new(Mutex::new(HashMap::<_, NotificationHandler>::default()));
let response_handlers =
Arc::new(Mutex::new(Some(HashMap::<_, ResponseHandler>::default())));
let stdout_input_task = cx.spawn({
let notification_handlers = notification_handlers.clone();
let response_handlers = response_handlers.clone();
move |cx| {
Self::handle_input(stdout, notification_handlers, response_handlers, cx).log_err()
}
});
let stderr_input_task = cx.spawn(|_| Self::handle_stderr(stderr).log_err());
let input_task = cx.spawn(|_| async move {
let (stdout, stderr) = futures::join!(stdout_input_task, stderr_input_task);
stdout.or(stderr)
});
let output_task = cx.background_executor().spawn({
Self::handle_output(
stdin,
outbound_rx,
output_done_tx,
response_handlers.clone(),
)
.log_err()
});
let mut context_server = Self {
server_id,
notification_handlers,
response_handlers,
name: "".into(),
next_id: Default::default(),
outbound_tx,
executor: cx.background_executor().clone(),
io_tasks: Mutex::new(Some((input_task, output_task))),
output_done_rx: Mutex::new(Some(output_done_rx)),
server: Arc::new(Mutex::new(Some(server))),
};
if let Some(name) = binary.executable.file_name() {
context_server.name = name.to_string_lossy().into();
}
Ok(context_server)
}
/// Handles input from the server's stdout.
///
/// This function continuously reads lines from the provided stdout stream,
/// parses them as JSON-RPC responses or notifications, and dispatches them
/// to the appropriate handlers. It processes both responses (which are matched
/// to pending requests) and notifications (which trigger registered handlers).
async fn handle_input<Stdout>(
stdout: Stdout,
notification_handlers: Arc<Mutex<HashMap<&'static str, NotificationHandler>>>,
response_handlers: Arc<Mutex<Option<HashMap<RequestId, ResponseHandler>>>>,
cx: AsyncAppContext,
) -> anyhow::Result<()>
where
Stdout: AsyncRead + Unpin + Send + 'static,
{
let mut stdout = BufReader::new(stdout);
let mut buffer = String::new();
loop {
buffer.clear();
if stdout.read_line(&mut buffer).await? == 0 {
return Ok(());
}
let content = buffer.trim();
if !content.is_empty() {
if let Ok(response) = serde_json::from_str::<AnyResponse>(&content) {
if let Some(handlers) = response_handlers.lock().as_mut() {
if let Some(handler) = handlers.remove(&response.id) {
handler(Ok(content.to_string()));
}
}
} else if let Ok(notification) = serde_json::from_str::<AnyNotification>(&content) {
let mut notification_handlers = notification_handlers.lock();
if let Some(handler) =
notification_handlers.get_mut(notification.method.as_str())
{
handler(
notification.id,
notification.params.unwrap_or(Value::Null),
cx.clone(),
);
}
}
}
smol::future::yield_now().await;
}
}
/// Handles the stderr output from the context server.
/// Continuously reads and logs any error messages from the server.
async fn handle_stderr<Stderr>(stderr: Stderr) -> anyhow::Result<()>
where
Stderr: AsyncRead + Unpin + Send + 'static,
{
let mut stderr = BufReader::new(stderr);
let mut buffer = String::new();
loop {
buffer.clear();
if stderr.read_line(&mut buffer).await? == 0 {
return Ok(());
}
log::warn!("context server stderr: {}", buffer.trim());
smol::future::yield_now().await;
}
}
/// Handles the output to the context server's stdin.
/// This function continuously receives messages from the outbound channel,
/// writes them to the server's stdin, and manages the lifecycle of response handlers.
async fn handle_output<Stdin>(
stdin: Stdin,
outbound_rx: channel::Receiver<String>,
output_done_tx: barrier::Sender,
response_handlers: Arc<Mutex<Option<HashMap<RequestId, ResponseHandler>>>>,
) -> anyhow::Result<()>
where
Stdin: AsyncWrite + Unpin + Send + 'static,
{
let mut stdin = BufWriter::new(stdin);
let _clear_response_handlers = util::defer({
let response_handlers = response_handlers.clone();
move || {
response_handlers.lock().take();
}
});
while let Ok(message) = outbound_rx.recv().await {
log::trace!("outgoing message: {}", message);
stdin.write_all(message.as_bytes()).await?;
stdin.write_all(b"\n").await?;
stdin.flush().await?;
}
drop(output_done_tx);
Ok(())
}
/// Sends a JSON-RPC request to the context server and waits for a response.
/// This function handles serialization, deserialization, timeout, and error handling.
pub async fn request<T: DeserializeOwned>(
&self,
method: &str,
params: impl Serialize,
) -> Result<T> {
let id = self.next_id.fetch_add(1, SeqCst);
let request = serde_json::to_string(&Request {
jsonrpc: JSON_RPC_VERSION,
id: RequestId::Int(id),
method,
params,
})
.unwrap();
let (tx, rx) = oneshot::channel();
let handle_response = self
.response_handlers
.lock()
.as_mut()
.ok_or_else(|| anyhow!("server shut down"))
.map(|handlers| {
handlers.insert(
RequestId::Int(id),
Box::new(move |result| {
let _ = tx.send(result);
}),
);
});
let send = self
.outbound_tx
.try_send(request)
.context("failed to write to context server's stdin");
let executor = self.executor.clone();
let started = Instant::now();
handle_response?;
send?;
let mut timeout = executor.timer(REQUEST_TIMEOUT).fuse();
select! {
response = rx.fuse() => {
let elapsed = started.elapsed();
log::trace!("took {elapsed:?} to receive response to {method:?} id {id}");
match response? {
Ok(response) => {
let parsed: AnyResponse = serde_json::from_str(&response)?;
if let Some(error) = parsed.error {
Err(anyhow!(error.message))
} else if let Some(result) = parsed.result {
Ok(serde_json::from_str(result.get())?)
} else {
Err(anyhow!("Invalid response: no result or error"))
}
}
Err(_) => anyhow::bail!("cancelled")
}
}
_ = timeout => {
log::error!("cancelled csp request task for {method:?} id {id} which took over {:?}", REQUEST_TIMEOUT);
anyhow::bail!("Context server request timeout");
}
}
}
/// Sends a notification to the context server without expecting a response.
/// This function serializes the notification and sends it through the outbound channel.
pub fn notify(&self, method: &str, params: impl Serialize) -> Result<()> {
let id = self.next_id.fetch_add(1, SeqCst);
let notification = serde_json::to_string(&Notification {
jsonrpc: JSON_RPC_VERSION,
id: RequestId::Int(id),
method,
params,
})
.unwrap();
self.outbound_tx.try_send(notification)?;
Ok(())
}
pub fn on_notification<F>(&self, method: &'static str, mut f: F)
where
F: 'static + Send + FnMut(Value, AsyncAppContext),
{
self.notification_handlers
.lock()
.insert(method, Box::new(move |_, params, cx| f(params, cx)));
}
pub fn name(&self) -> &str {
&self.name
}
pub fn server_id(&self) -> ContextServerId {
self.server_id.clone()
}
}
impl Drop for Client {
fn drop(&mut self) {
if let Some(mut server) = self.server.lock().take() {
let _ = server.kill();
}
}
}
impl fmt::Display for ContextServerId {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.0.fmt(f)
}
}
impl fmt::Debug for Client {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Context Server Client")
.field("id", &self.server_id.0)
.field("name", &self.name)
.finish_non_exhaustive()
}
}
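The client above speaks newline-delimited JSON-RPC 2.0 over the child process's stdio: `handle_output` writes one message per line to the server's stdin, and `handle_input` parses one message per line from its stdout. A minimal, std-only sketch of that framing (the `frame_request` helper and the in-memory buffer are illustrative, not part of this diff; the method names come from the `initialize` and `prompts/list` mappings in `types.rs`):

```rust
use std::io::{BufRead, BufReader, Write};

// Frame a JSON-RPC 2.0 request as a single newline-terminated line,
// mirroring how `handle_output` writes to the server's stdin.
fn frame_request(id: i32, method: &str) -> String {
    format!("{{\"jsonrpc\":\"2.0\",\"id\":{id},\"method\":\"{method}\",\"params\":null}}\n")
}

fn main() {
    // Mock the server's stdin with an in-memory buffer.
    let mut stdin: Vec<u8> = Vec::new();
    stdin
        .write_all(frame_request(0, "initialize").as_bytes())
        .unwrap();
    stdin
        .write_all(frame_request(1, "prompts/list").as_bytes())
        .unwrap();

    // The reading side (like `handle_input`) consumes exactly one message per line.
    let messages: Vec<String> = BufReader::new(&stdin[..])
        .lines()
        .map(|line| line.unwrap())
        .collect();
    assert_eq!(messages.len(), 2);
    assert!(messages[1].contains("\"method\":\"prompts/list\""));
    println!("framed {} messages", messages.len());
}
```

Newline framing is also what lets `handle_input` treat a zero-byte `read_line` as a clean end-of-stream when the server exits.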


@@ -0,0 +1,38 @@
use gpui::{actions, AppContext, Context, ViewContext};
use log;
use manager::ContextServerManager;
use workspace::Workspace;
pub mod client;
pub mod manager;
pub mod protocol;
mod registry;
pub mod types;
pub use registry::*;
actions!(context_servers, [Restart]);
pub fn init(cx: &mut AppContext) {
log::info!("initializing context server client");
manager::init(cx);
ContextServerRegistry::register(cx);
cx.observe_new_views(
|workspace: &mut Workspace, _cx: &mut ViewContext<Workspace>| {
workspace.register_action(restart_servers);
},
)
.detach();
}
fn restart_servers(_workspace: &mut Workspace, _action: &Restart, cx: &mut ViewContext<Workspace>) {
let model = ContextServerManager::global(&cx);
cx.update_model(&model, |manager, cx| {
for server in manager.servers() {
manager
.restart_server(&server.id, cx)
.detach_and_log_err(cx);
}
});
}


@@ -0,0 +1,278 @@
//! This module implements a context server management system for Zed.
//!
//! It provides functionality to:
//! - Define and load context server settings
//! - Manage individual context servers (start, stop, restart)
//! - Maintain a global manager for all context servers
//!
//! Key components:
//! - `ContextServerSettings`: Defines the structure for server configurations
//! - `ContextServer`: Represents an individual context server
//! - `ContextServerManager`: Manages multiple context servers
//! - `GlobalContextServerManager`: Provides global access to the ContextServerManager
//!
//! The module also includes initialization logic to set up the context server system
//! and react to changes in settings.
use collections::{HashMap, HashSet};
use gpui::{AppContext, AsyncAppContext, Context, EventEmitter, Global, Model, ModelContext, Task};
use log;
use parking_lot::RwLock;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use settings::{Settings, SettingsSources, SettingsStore};
use std::path::Path;
use std::sync::Arc;
use crate::{
client::{self, Client},
types,
};
#[derive(Deserialize, Serialize, Default, Clone, PartialEq, Eq, JsonSchema, Debug)]
pub struct ContextServerSettings {
pub servers: Vec<ServerConfig>,
}
#[derive(Deserialize, Serialize, Clone, PartialEq, Eq, JsonSchema, Debug)]
pub struct ServerConfig {
pub id: String,
pub executable: String,
pub args: Vec<String>,
}
impl Settings for ContextServerSettings {
const KEY: Option<&'static str> = Some("experimental.context_servers");
type FileContent = Self;
fn load(
sources: SettingsSources<Self::FileContent>,
_: &mut gpui::AppContext,
) -> anyhow::Result<Self> {
sources.json_merge()
}
}
pub struct ContextServer {
pub id: String,
pub config: ServerConfig,
pub client: RwLock<Option<Arc<crate::protocol::InitializedContextServerProtocol>>>,
}
impl ContextServer {
fn new(config: ServerConfig) -> Self {
Self {
id: config.id.clone(),
config,
client: RwLock::new(None),
}
}
async fn start(&self, cx: &AsyncAppContext) -> anyhow::Result<()> {
log::info!("starting context server {}", self.config.id);
let client = Client::new(
client::ContextServerId(self.config.id.clone()),
client::ModelContextServerBinary {
executable: Path::new(&self.config.executable).to_path_buf(),
args: self.config.args.clone(),
env: None,
},
cx.clone(),
)?;
let protocol = crate::protocol::ModelContextProtocol::new(client);
let client_info = types::EntityInfo {
name: "Zed".to_string(),
version: env!("CARGO_PKG_VERSION").to_string(),
};
let initialized_protocol = protocol.initialize(client_info).await?;
log::debug!(
"context server {} initialized: {:?}",
self.config.id,
initialized_protocol.initialize,
);
*self.client.write() = Some(Arc::new(initialized_protocol));
Ok(())
}
async fn stop(&self) -> anyhow::Result<()> {
let mut client = self.client.write();
if let Some(protocol) = client.take() {
drop(protocol);
}
Ok(())
}
}
/// A context server manager handles starting and stopping all context
/// servers. To obtain a server to interact with, a crate must go
/// through the `GlobalContextServerManager`, which holds a model of
/// the `ContextServerManager`.
pub struct ContextServerManager {
servers: HashMap<String, Arc<ContextServer>>,
pending_servers: HashSet<String>,
}
pub enum Event {
ServerStarted { server_id: String },
ServerStopped { server_id: String },
}
impl Global for ContextServerManager {}
impl EventEmitter<Event> for ContextServerManager {}
impl ContextServerManager {
pub fn new() -> Self {
Self {
servers: HashMap::default(),
pending_servers: HashSet::default(),
}
}
pub fn global(cx: &AppContext) -> Model<Self> {
cx.global::<GlobalContextServerManager>().0.clone()
}
pub fn add_server(
&mut self,
config: ServerConfig,
cx: &mut ModelContext<Self>,
) -> Task<anyhow::Result<()>> {
let server_id = config.id.clone();
let server_id2 = config.id.clone();
if self.servers.contains_key(&server_id) || self.pending_servers.contains(&server_id) {
return Task::ready(Ok(()));
}
let task = cx.spawn(|this, mut cx| async move {
let server = Arc::new(ContextServer::new(config));
server.start(&cx).await?;
this.update(&mut cx, |this, cx| {
this.servers.insert(server_id.clone(), server);
this.pending_servers.remove(&server_id);
cx.emit(Event::ServerStarted {
server_id: server_id.clone(),
});
})?;
Ok(())
});
self.pending_servers.insert(server_id2);
task
}
pub fn get_server(&self, id: &str) -> Option<Arc<ContextServer>> {
self.servers.get(id).cloned()
}
pub fn remove_server(
&mut self,
id: &str,
cx: &mut ModelContext<Self>,
) -> Task<anyhow::Result<()>> {
let id = id.to_string();
cx.spawn(|this, mut cx| async move {
if let Some(server) = this.update(&mut cx, |this, _cx| this.servers.remove(&id))? {
server.stop().await?;
}
this.update(&mut cx, |this, cx| {
this.pending_servers.remove(&id);
cx.emit(Event::ServerStopped {
server_id: id.clone(),
})
})?;
Ok(())
})
}
pub fn restart_server(
&mut self,
id: &str,
cx: &mut ModelContext<Self>,
) -> Task<anyhow::Result<()>> {
let id = id.to_string();
cx.spawn(|this, mut cx| async move {
if let Some(server) = this.update(&mut cx, |this, _cx| this.servers.remove(&id))? {
server.stop().await?;
let config = server.config.clone();
let new_server = Arc::new(ContextServer::new(config));
new_server.start(&cx).await?;
this.update(&mut cx, |this, cx| {
this.servers.insert(id.clone(), new_server);
cx.emit(Event::ServerStopped {
server_id: id.clone(),
});
cx.emit(Event::ServerStarted {
server_id: id.clone(),
});
})?;
}
Ok(())
})
}
pub fn servers(&self) -> Vec<Arc<ContextServer>> {
self.servers.values().cloned().collect()
}
pub fn model(cx: &mut AppContext) -> Model<Self> {
cx.new_model(|_cx| ContextServerManager::new())
}
}
pub struct GlobalContextServerManager(Model<ContextServerManager>);
impl Global for GlobalContextServerManager {}
impl GlobalContextServerManager {
fn register(cx: &mut AppContext) {
let model = ContextServerManager::model(cx);
cx.set_global(Self(model));
}
}
pub fn init(cx: &mut AppContext) {
ContextServerSettings::register(cx);
GlobalContextServerManager::register(cx);
cx.observe_global::<SettingsStore>(|cx| {
let manager = ContextServerManager::global(cx);
cx.update_model(&manager, |manager, cx| {
let settings = ContextServerSettings::get_global(cx);
let current_servers: HashMap<String, ServerConfig> = manager
.servers()
.into_iter()
.map(|server| (server.id.clone(), server.config.clone()))
.collect();
let new_servers = settings
.servers
.iter()
.map(|config| (config.id.clone(), config.clone()))
.collect::<HashMap<_, _>>();
let servers_to_add = new_servers
.values()
.filter(|config| !current_servers.contains_key(&config.id))
.cloned()
.collect::<Vec<_>>();
let servers_to_remove = current_servers
.keys()
.filter(|id| !new_servers.contains_key(*id))
.cloned()
.collect::<Vec<_>>();
log::trace!("servers_to_add={:?}", servers_to_add);
for config in servers_to_add {
manager.add_server(config, cx).detach_and_log_err(cx);
}
for id in servers_to_remove {
manager.remove_server(&id, cx).detach_and_log_err(cx);
}
})
})
.detach();
}
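Given the `experimental.context_servers` settings key and the `ServerConfig` fields above (`id`, `executable`, `args`), a settings entry enabling one server would plausibly look like the following (the server id and executable path are invented for illustration):

```json
{
  "experimental.context_servers": {
    "servers": [
      {
        "id": "my-context-server",
        "executable": "/path/to/context-server",
        "args": ["--stdio"]
      }
    ]
  }
}
```

On settings changes, the `observe_global::<SettingsStore>` callback above diffs this list against the running servers, starting the ones that were added and stopping the ones that were removed.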


@@ -0,0 +1,140 @@
//! This module implements parts of the Model Context Protocol.
//!
//! It handles the lifecycle messages, and provides a general interface to
//! interacting with an MCP server. It uses the generic JSON-RPC client to
//! read/write messages and the types from types.rs for serialization/deserialization
//! of messages.
use anyhow::Result;
use collections::HashMap;
use crate::client::Client;
use crate::types;
pub use types::PromptInfo;
const PROTOCOL_VERSION: u32 = 1;
pub struct ModelContextProtocol {
inner: Client,
}
impl ModelContextProtocol {
pub fn new(inner: Client) -> Self {
Self { inner }
}
pub async fn initialize(
self,
client_info: types::EntityInfo,
) -> Result<InitializedContextServerProtocol> {
let params = types::InitializeParams {
protocol_version: PROTOCOL_VERSION,
capabilities: types::ClientCapabilities {
experimental: None,
sampling: None,
},
client_info,
};
let response: types::InitializeResponse = self
.inner
.request(types::RequestType::Initialize.as_str(), params)
.await?;
log::trace!("mcp server info {:?}", response.server_info);
self.inner.notify(
types::NotificationType::Initialized.as_str(),
serde_json::json!({}),
)?;
let initialized_protocol = InitializedContextServerProtocol {
inner: self.inner,
initialize: response,
};
Ok(initialized_protocol)
}
}
pub struct InitializedContextServerProtocol {
inner: Client,
pub initialize: types::InitializeResponse,
}
#[derive(Debug, PartialEq, Clone, Copy)]
pub enum ServerCapability {
Experimental,
Logging,
Prompts,
Resources,
Tools,
}
impl InitializedContextServerProtocol {
/// Check if the server supports a specific capability
pub fn capable(&self, capability: ServerCapability) -> bool {
match capability {
ServerCapability::Experimental => self.initialize.capabilities.experimental.is_some(),
ServerCapability::Logging => self.initialize.capabilities.logging.is_some(),
ServerCapability::Prompts => self.initialize.capabilities.prompts.is_some(),
ServerCapability::Resources => self.initialize.capabilities.resources.is_some(),
ServerCapability::Tools => self.initialize.capabilities.tools.is_some(),
}
}
fn check_capability(&self, capability: ServerCapability) -> Result<()> {
if self.capable(capability) {
Ok(())
} else {
Err(anyhow::anyhow!(
"Server does not support {:?} capability",
capability
))
}
}
/// List the MCP prompts.
pub async fn list_prompts(&self) -> Result<Vec<types::PromptInfo>> {
self.check_capability(ServerCapability::Prompts)?;
let response: types::PromptsListResponse = self
.inner
.request(types::RequestType::PromptsList.as_str(), ())
.await?;
Ok(response.prompts)
}
/// Executes a prompt with the given arguments and returns the result.
pub async fn run_prompt<P: AsRef<str>>(
&self,
prompt: P,
arguments: HashMap<String, String>,
) -> Result<String> {
self.check_capability(ServerCapability::Prompts)?;
let params = types::PromptsGetParams {
name: prompt.as_ref().to_string(),
arguments: Some(arguments),
};
let response: types::PromptsGetResponse = self
.inner
.request(types::RequestType::PromptsGet.as_str(), params)
.await?;
Ok(response.prompt)
}
}
impl InitializedContextServerProtocol {
pub async fn request<R: serde::de::DeserializeOwned>(
&self,
method: &str,
params: impl serde::Serialize,
) -> Result<R> {
self.inner.request(method, params).await
}
}
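For reference, the `initialize` request assembled above serializes, per the camelCase renaming on the types in `types.rs`, roughly as the following JSON. The `jsonrpc` value and the `initialize` method name come from the client; the first request id is `0` (the `AtomicI32` counter starts at zero); the version string is illustrative, and the optional capability fields are assumed to serialize as `null`:

```json
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": 1,
    "capabilities": { "experimental": null, "sampling": null },
    "clientInfo": { "name": "Zed", "version": "0.1.0" }
  }
}
```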


@@ -0,0 +1,47 @@
use std::sync::Arc;
use collections::HashMap;
use gpui::{AppContext, Global, ReadGlobal};
use parking_lot::RwLock;
struct GlobalContextServerRegistry(Arc<ContextServerRegistry>);
impl Global for GlobalContextServerRegistry {}
pub struct ContextServerRegistry {
registry: RwLock<HashMap<String, Vec<Arc<str>>>>,
}
impl ContextServerRegistry {
pub fn global(cx: &AppContext) -> Arc<Self> {
GlobalContextServerRegistry::global(cx).0.clone()
}
pub fn register(cx: &mut AppContext) {
cx.set_global(GlobalContextServerRegistry(Arc::new(
ContextServerRegistry {
registry: RwLock::new(HashMap::default()),
},
)))
}
pub fn register_command(&self, server_id: String, command_name: &str) {
let mut registry = self.registry.write();
registry
.entry(server_id)
.or_default()
.push(command_name.into());
}
pub fn unregister_command(&self, server_id: &str, command_name: &str) {
let mut registry = self.registry.write();
if let Some(commands) = registry.get_mut(server_id) {
commands.retain(|name| name.as_ref() != command_name);
}
}
pub fn get_commands(&self, server_id: &str) -> Option<Vec<Arc<str>>> {
let registry = self.registry.read();
registry.get(server_id).cloned()
}
}
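The registry above is a global map from server id to registered command names behind a `RwLock`. Here is a self-contained sketch of the same pattern, using `std::sync::RwLock` in place of `parking_lot::RwLock` (and no `gpui` global) so it runs without external crates:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

struct ContextServerRegistry {
    registry: RwLock<HashMap<String, Vec<Arc<str>>>>,
}

impl ContextServerRegistry {
    fn new() -> Self {
        Self {
            registry: RwLock::new(HashMap::new()),
        }
    }

    fn register_command(&self, server_id: String, command_name: &str) {
        // `entry(..).or_default()` creates the Vec on first registration.
        let mut registry = self.registry.write().unwrap();
        registry.entry(server_id).or_default().push(command_name.into());
    }

    fn unregister_command(&self, server_id: &str, command_name: &str) {
        let mut registry = self.registry.write().unwrap();
        if let Some(commands) = registry.get_mut(server_id) {
            commands.retain(|name| name.as_ref() != command_name);
        }
    }

    fn get_commands(&self, server_id: &str) -> Option<Vec<Arc<str>>> {
        // Readers take the shared lock; the clone keeps the lock scope short.
        self.registry.read().unwrap().get(server_id).cloned()
    }
}

fn main() {
    let registry = ContextServerRegistry::new();
    registry.register_command("server-a".to_string(), "summarize");
    registry.register_command("server-a".to_string(), "search");
    registry.unregister_command("server-a", "summarize");
    let commands = registry.get_commands("server-a").unwrap();
    assert_eq!(commands.len(), 1);
    assert_eq!(commands[0].as_ref(), "search");
}
```

Returning a cloned `Vec<Arc<str>>` from `get_commands` means callers never hold the lock while iterating, at the cost of cloning cheap `Arc` pointers.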

@@ -0,0 +1,234 @@
use collections::HashMap;
use serde::{Deserialize, Serialize};
use url::Url;
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub enum RequestType {
Initialize,
CallTool,
ResourcesUnsubscribe,
ResourcesSubscribe,
ResourcesRead,
ResourcesList,
LoggingSetLevel,
PromptsGet,
PromptsList,
}
impl RequestType {
pub fn as_str(&self) -> &'static str {
match self {
RequestType::Initialize => "initialize",
RequestType::CallTool => "tools/call",
RequestType::ResourcesUnsubscribe => "resources/unsubscribe",
RequestType::ResourcesSubscribe => "resources/subscribe",
RequestType::ResourcesRead => "resources/read",
RequestType::ResourcesList => "resources/list",
RequestType::LoggingSetLevel => "logging/setLevel",
RequestType::PromptsGet => "prompts/get",
RequestType::PromptsList => "prompts/list",
}
}
}
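The `RequestType::as_str` mapping above turns enum variants into JSON-RPC method strings. A small sketch of the same idea, extended with a hypothetical inverse (`from_str`, not part of the original code) to show the mapping round-trips when dispatching incoming messages:

```rust
#[derive(Debug, PartialEq)]
enum RequestType {
    Initialize,
    PromptsGet,
    PromptsList,
}

impl RequestType {
    // Enum variant -> wire method name, as in the code above.
    fn as_str(&self) -> &'static str {
        match self {
            RequestType::Initialize => "initialize",
            RequestType::PromptsGet => "prompts/get",
            RequestType::PromptsList => "prompts/list",
        }
    }

    // Hypothetical inverse for dispatching incoming method strings.
    fn from_str(s: &str) -> Option<RequestType> {
        match s {
            "initialize" => Some(RequestType::Initialize),
            "prompts/get" => Some(RequestType::PromptsGet),
            "prompts/list" => Some(RequestType::PromptsList),
            _ => None,
        }
    }
}

fn main() {
    assert_eq!(RequestType::PromptsGet.as_str(), "prompts/get");
    assert_eq!(RequestType::from_str("prompts/list"), Some(RequestType::PromptsList));
    assert_eq!(RequestType::from_str("bogus"), None);
}
```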
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct InitializeParams {
pub protocol_version: u32,
pub capabilities: ClientCapabilities,
pub client_info: EntityInfo,
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct CallToolParams {
pub name: String,
pub arguments: Option<serde_json::Value>,
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct ResourcesUnsubscribeParams {
pub uri: Url,
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct ResourcesSubscribeParams {
pub uri: Url,
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct ResourcesReadParams {
pub uri: Url,
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct LoggingSetLevelParams {
pub level: LoggingLevel,
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct PromptsGetParams {
pub name: String,
pub arguments: Option<HashMap<String, String>>,
}
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct InitializeResponse {
pub protocol_version: u32,
pub capabilities: ServerCapabilities,
pub server_info: EntityInfo,
}
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ResourcesReadResponse {
pub contents: Vec<ResourceContent>,
}
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ResourcesListResponse {
pub resource_templates: Option<Vec<ResourceTemplate>>,
pub resources: Vec<Resource>,
}
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct PromptsGetResponse {
pub prompt: String,
}
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct PromptsListResponse {
pub prompts: Vec<PromptInfo>,
}
#[derive(Debug, Deserialize, Clone)]
#[serde(rename_all = "camelCase")]
pub struct PromptInfo {
pub name: String,
pub arguments: Option<Vec<PromptArgument>>,
}
#[derive(Debug, Deserialize, Clone)]
#[serde(rename_all = "camelCase")]
pub struct PromptArgument {
pub name: String,
pub description: Option<String>,
pub required: Option<bool>,
}
// Shared Types
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ClientCapabilities {
pub experimental: Option<HashMap<String, serde_json::Value>>,
pub sampling: Option<HashMap<String, serde_json::Value>>,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ServerCapabilities {
pub experimental: Option<HashMap<String, serde_json::Value>>,
pub logging: Option<HashMap<String, serde_json::Value>>,
pub prompts: Option<HashMap<String, serde_json::Value>>,
pub resources: Option<ResourcesCapabilities>,
pub tools: Option<HashMap<String, serde_json::Value>>,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ResourcesCapabilities {
pub subscribe: Option<bool>,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Tool {
pub name: String,
pub description: Option<String>,
pub input_schema: serde_json::Value,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct EntityInfo {
pub name: String,
pub version: String,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Resource {
pub uri: Url,
pub mime_type: Option<String>,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ResourceContent {
pub uri: Url,
pub mime_type: Option<String>,
pub content_type: String,
pub text: Option<String>,
pub data: Option<String>,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct ResourceTemplate {
pub uri_template: String,
pub name: Option<String>,
pub description: Option<String>,
}
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "lowercase")]
pub enum LoggingLevel {
Debug,
Info,
Warning,
Error,
}
// Client Notifications
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub enum NotificationType {
Initialized,
Progress,
}
impl NotificationType {
pub fn as_str(&self) -> &'static str {
match self {
NotificationType::Initialized => "notifications/initialized",
NotificationType::Progress => "notifications/progress",
}
}
}
#[derive(Debug, Serialize)]
#[serde(untagged)]
pub enum ClientNotification {
Initialized,
Progress(ProgressParams),
}
#[derive(Debug, Serialize)]
#[serde(rename_all = "camelCase")]
pub struct ProgressParams {
pub progress_token: String,
pub progress: f64,
pub total: Option<f64>,
}

@@ -31,6 +31,8 @@ pub enum Role {
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, EnumIter)]
pub enum Model {
    #[default]
    #[serde(alias = "gpt-4o", rename = "gpt-4o-2024-05-13")]
    Gpt4o,
    #[serde(alias = "gpt-4", rename = "gpt-4")]
    Gpt4,
    #[serde(alias = "gpt-3.5-turbo", rename = "gpt-3.5-turbo")]
@@ -40,6 +42,7 @@ pub enum Model {
impl Model {
    pub fn from_id(id: &str) -> Result<Self> {
        match id {
            "gpt-4o" => Ok(Self::Gpt4o),
            "gpt-4" => Ok(Self::Gpt4),
            "gpt-3.5-turbo" => Ok(Self::Gpt3_5Turbo),
            _ => Err(anyhow!("Invalid model id: {}", id)),
@@ -50,6 +53,7 @@ impl Model {
        match self {
            Self::Gpt3_5Turbo => "gpt-3.5-turbo",
            Self::Gpt4 => "gpt-4",
            Self::Gpt4o => "gpt-4o",
        }
    }
@@ -57,11 +61,13 @@ impl Model {
        match self {
            Self::Gpt3_5Turbo => "GPT-3.5",
            Self::Gpt4 => "GPT-4",
            Self::Gpt4o => "GPT-4o",
        }
    }

    pub fn max_token_count(&self) -> usize {
        match self {
            Self::Gpt4o => 128000,
            Self::Gpt4 => 8192,
            Self::Gpt3_5Turbo => 16385,
        }

@@ -266,6 +266,22 @@ pub enum Direction {
    Next,
}

#[derive(Debug, Copy, Clone, PartialEq, Eq)]
pub enum Navigated {
    Yes,
    No,
}

impl Navigated {
    pub fn from_bool(yes: bool) -> Navigated {
        if yes {
            Navigated::Yes
        } else {
            Navigated::No
        }
    }
}

pub fn init_settings(cx: &mut AppContext) {
    EditorSettings::register(cx);
}
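The `Navigated` enum introduced here can be exercised standalone. A two-variant enum reads better than a bare `bool` at call sites (`== Navigated::Yes` is self-documenting where a raw `true` is not); this sketch reproduces just the enum and its constructor, detached from the editor code:

```rust
// Standalone copy of the `Navigated` enum and its `from_bool` constructor.
#[derive(Debug, Copy, Clone, PartialEq, Eq)]
enum Navigated {
    Yes,
    No,
}

impl Navigated {
    fn from_bool(yes: bool) -> Navigated {
        if yes { Navigated::Yes } else { Navigated::No }
    }
}

fn main() {
    assert_eq!(Navigated::from_bool(true), Navigated::Yes);
    assert_eq!(Navigated::from_bool(false), Navigated::No);
}
```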
@@ -900,6 +916,7 @@ enum ContextMenuOrigin {
#[derive(Clone)]
struct CompletionsMenu {
    id: CompletionId,
    sort_completions: bool,
    initial_position: Anchor,
    buffer: Model<Buffer>,
    completions: Arc<RwLock<Box<[Completion]>>>,
@@ -1225,55 +1242,57 @@ impl CompletionsMenu {
        }

        let completions = self.completions.read();
        if self.sort_completions {
            matches.sort_unstable_by_key(|mat| {
                // We do want to strike a balance here between what the language server tells us
                // to sort by (the sort_text) and what are "obvious" good matches (i.e. when you type
                // `Creat` and there is a local variable called `CreateComponent`).
                // So what we do is: we bucket all matches into two buckets
                // - Strong matches
                // - Weak matches
                // Strong matches are the ones with a high fuzzy-matcher score (the "obvious" matches)
                // and the Weak matches are the rest.
                //
                // For the strong matches, we sort by the language-servers score first and for the weak
                // matches, we prefer our fuzzy finder first.
                //
                // The thinking behind that: it's useless to take the sort_text the language-server gives
                // us into account when it's obviously a bad match.
                #[derive(PartialEq, Eq, PartialOrd, Ord)]
                enum MatchScore<'a> {
                    Strong {
                        sort_text: Option<&'a str>,
                        score: Reverse<OrderedFloat<f64>>,
                        sort_key: (usize, &'a str),
                    },
                    Weak {
                        score: Reverse<OrderedFloat<f64>>,
                        sort_text: Option<&'a str>,
                        sort_key: (usize, &'a str),
                    },
                }

                let completion = &completions[mat.candidate_id];
                let sort_key = completion.sort_key();
                let sort_text = completion.lsp_completion.sort_text.as_deref();
                let score = Reverse(OrderedFloat(mat.score));

                if mat.score >= 0.2 {
                    MatchScore::Strong {
                        sort_text,
                        score,
                        sort_key,
                    }
                } else {
                    MatchScore::Weak {
                        score,
                        sort_text,
                        sort_key,
                    }
                }
            });
        }

        for mat in &mut matches {
            let completion = &completions[mat.candidate_id];
@@ -1558,6 +1577,7 @@ pub(crate) struct NavigationData {
    scroll_top_row: u32,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum GotoDefinitionKind {
    Symbol,
    Declaration,
@@ -4105,6 +4125,7 @@ impl Editor {
            trigger_kind,
        };
        let completions = provider.completions(&buffer, buffer_position, completion_context, cx);
        let sort_completions = provider.sort_completions();

        let id = post_inc(&mut self.next_completion_id);
        let task = cx.spawn(|this, mut cx| {
@@ -4116,6 +4137,7 @@ impl Editor {
        let menu = if let Some(completions) = completions {
            let mut menu = CompletionsMenu {
                id,
                sort_completions,
                initial_position: position,
                match_candidates: completions
                    .iter()
@@ -4374,11 +4396,11 @@ impl Editor {
            this.refresh_inline_completion(true, cx);
        });

        let show_new_completions_on_confirm = completion
            .confirm
            .as_ref()
            .map_or(false, |confirm| confirm(intent, cx));
        if show_new_completions_on_confirm {
            self.show_completions(&ShowCompletions { trigger: None }, cx);
        }
@@ -4838,7 +4860,7 @@ impl Editor {
        let range = Anchor {
            buffer_id,
            excerpt_id,
            text_anchor: start,
        }..Anchor {
            buffer_id,
@@ -9015,15 +9037,28 @@ impl Editor {
        &mut self,
        _: &GoToDefinition,
        cx: &mut ViewContext<Self>,
    ) -> Task<Result<Navigated>> {
        let definition = self.go_to_definition_of_kind(GotoDefinitionKind::Symbol, false, cx);
        let references = self.find_all_references(&FindAllReferences, cx);
        cx.background_executor().spawn(async move {
            if definition.await? == Navigated::Yes {
                return Ok(Navigated::Yes);
            }
            if let Some(references) = references {
                if references.await? == Navigated::Yes {
                    return Ok(Navigated::Yes);
                }
            }
            Ok(Navigated::No)
        })
    }

    pub fn go_to_declaration(
        &mut self,
        _: &GoToDeclaration,
        cx: &mut ViewContext<Self>,
    ) -> Task<Result<Navigated>> {
        self.go_to_definition_of_kind(GotoDefinitionKind::Declaration, false, cx)
    }

@@ -9031,7 +9066,7 @@ impl Editor {
        &mut self,
        _: &GoToDeclaration,
        cx: &mut ViewContext<Self>,
    ) -> Task<Result<Navigated>> {
        self.go_to_definition_of_kind(GotoDefinitionKind::Declaration, true, cx)
    }

@@ -9039,7 +9074,7 @@ impl Editor {
        &mut self,
        _: &GoToImplementation,
        cx: &mut ViewContext<Self>,
    ) -> Task<Result<Navigated>> {
        self.go_to_definition_of_kind(GotoDefinitionKind::Implementation, false, cx)
    }

@@ -9047,7 +9082,7 @@ impl Editor {
        &mut self,
        _: &GoToImplementationSplit,
        cx: &mut ViewContext<Self>,
    ) -> Task<Result<Navigated>> {
        self.go_to_definition_of_kind(GotoDefinitionKind::Implementation, true, cx)
    }

@@ -9055,7 +9090,7 @@ impl Editor {
        &mut self,
        _: &GoToTypeDefinition,
        cx: &mut ViewContext<Self>,
    ) -> Task<Result<Navigated>> {
        self.go_to_definition_of_kind(GotoDefinitionKind::Type, false, cx)
    }

@@ -9063,7 +9098,7 @@ impl Editor {
        &mut self,
        _: &GoToDefinitionSplit,
        cx: &mut ViewContext<Self>,
    ) -> Task<Result<Navigated>> {
        self.go_to_definition_of_kind(GotoDefinitionKind::Symbol, true, cx)
    }

@@ -9071,7 +9106,7 @@ impl Editor {
        &mut self,
        _: &GoToTypeDefinitionSplit,
        cx: &mut ViewContext<Self>,
    ) -> Task<Result<Navigated>> {
        self.go_to_definition_of_kind(GotoDefinitionKind::Type, true, cx)
    }
@@ -9080,16 +9115,16 @@ impl Editor {
        kind: GotoDefinitionKind,
        split: bool,
        cx: &mut ViewContext<Self>,
    ) -> Task<Result<Navigated>> {
        let Some(workspace) = self.workspace() else {
            return Task::ready(Ok(Navigated::No));
        };
        let buffer = self.buffer.read(cx);
        let head = self.selections.newest::<usize>(cx).head();
        let (buffer, head) = if let Some(text_anchor) = buffer.text_anchor_for_position(head, cx) {
            text_anchor
        } else {
            return Task::ready(Ok(Navigated::No));
        };

        let project = workspace.read(cx).project().clone();
@@ -9148,7 +9183,7 @@ impl Editor {
        mut definitions: Vec<HoverLink>,
        split: bool,
        cx: &mut ViewContext<Editor>,
    ) -> Task<Result<Navigated>> {
        // If there is one definition, just open it directly
        if definitions.len() == 1 {
            let definition = definitions.pop().unwrap();
@@ -9164,77 +9199,61 @@ impl Editor {
            };

            cx.spawn(|editor, mut cx| async move {
                let target = target_task.await.context("target resolution task")?;
                let Some(target) = target else {
                    return Ok(Navigated::No);
                };
                editor.update(&mut cx, |editor, cx| {
                    let Some(workspace) = editor.workspace() else {
                        return Navigated::No;
                    };
                    let pane = workspace.read(cx).active_pane().clone();

                    let range = target.range.to_offset(target.buffer.read(cx));
                    let range = editor.range_for_match(&range);
                    if Some(&target.buffer) == editor.buffer.read(cx).as_singleton().as_ref() {
                        let buffer = target.buffer.read(cx);
                        let range = check_multiline_range(buffer, range);
                        editor.change_selections(Some(Autoscroll::focused()), cx, |s| {
                            s.select_ranges([range]);
                        });
                    } else {
                        cx.window_context().defer(move |cx| {
                            let target_editor: View<Self> =
                                workspace.update(cx, |workspace, cx| {
                                    let pane = if split {
                                        workspace.adjacent_pane(cx)
                                    } else {
                                        workspace.active_pane().clone()
                                    };

                                    workspace.open_project_item(
                                        pane,
                                        target.buffer.clone(),
                                        true,
                                        true,
                                        cx,
                                    )
                                });
                            target_editor.update(cx, |target_editor, cx| {
                                // When selecting a definition in a different buffer, disable the nav history
                                // to avoid creating a history entry at the previous cursor location.
                                pane.update(cx, |pane, _| pane.disable_history());
                                let buffer = target.buffer.read(cx);
                                let range = check_multiline_range(buffer, range);
                                target_editor.change_selections(
                                    Some(Autoscroll::focused()),
                                    cx,
                                    |s| {
                                        s.select_ranges([range]);
                                    },
                                );
                                pane.update(cx, |pane, _| pane.enable_history());
                            });
                        });
                    }
                    Navigated::Yes
                })
            })
        } else if !definitions.is_empty() {
            let replica_id = self.replica_id(cx);
@@ -9284,7 +9303,7 @@ impl Editor {
                    .context("location tasks")?;

                let Some(workspace) = workspace else {
                    return Ok(Navigated::No);
                };
                let opened = workspace
                    .update(&mut cx, |workspace, cx| {
@@ -9294,10 +9313,10 @@ impl Editor {
                    })
                    .ok();

                anyhow::Ok(Navigated::from_bool(opened.is_some()))
            })
        } else {
            Task::ready(Ok(Navigated::No))
        }
    }
@@ -9356,7 +9375,7 @@ impl Editor {
        &mut self,
        _: &FindAllReferences,
        cx: &mut ViewContext<Self>,
    ) -> Option<Task<Result<Navigated>>> {
        let multi_buffer = self.buffer.read(cx);
        let selection = self.selections.newest::<usize>(cx);
        let head = selection.head();
@@ -9411,7 +9430,7 @@ impl Editor {
            let locations = references.await?;
            if locations.is_empty() {
                return anyhow::Ok(Navigated::No);
            }

            workspace.update(&mut cx, |workspace, cx| {
@@ -9431,6 +9450,7 @@ impl Editor {
                Self::open_locations_in_multibuffer(
                    workspace, locations, replica_id, title, false, cx,
                );
                Navigated::Yes
            })
        }))
    }
@@ -11921,6 +11941,12 @@ impl Editor {
        let bounds = self.last_bounds?;
        Some(element::gutter_bounds(bounds, self.gutter_dimensions))
    }

    pub fn has_active_completions_menu(&self) -> bool {
        self.context_menu.read().as_ref().map_or(false, |menu| {
            menu.visible() && matches!(menu, ContextMenu::Completions(_))
        })
    }
}

fn hunks_for_selections(
@@ -12045,6 +12071,10 @@ pub trait CompletionProvider {
        trigger_in_words: bool,
        cx: &mut ViewContext<Editor>,
    ) -> bool;

    fn sort_completions(&self) -> bool {
        true
    }
}

fn snippet_completions(
@@ -12132,7 +12162,6 @@ fn snippet_completions(
                ..Default::default()
            },
            confirm: None,
        })
    })
    .collect()
@@ -13263,3 +13292,13 @@ fn hunk_status(hunk: &DiffHunk<MultiBufferRow>) -> DiffHunkStatus {
        DiffHunkStatus::Modified
    }
}
/// If the selected range spans more than one line,
/// just point the cursor at `range.start`.
fn check_multiline_range(buffer: &Buffer, range: Range<usize>) -> Range<usize> {
    if buffer.offset_to_point(range.start).row == buffer.offset_to_point(range.end).row {
        range
    } else {
        range.start..range.start
    }
}

@@ -13221,6 +13221,127 @@ let foo = 15;"#,
    });
}
#[gpui::test]
async fn test_goto_definition_with_find_all_references_fallback(cx: &mut gpui::TestAppContext) {
init_test(cx, |_| {});
let mut cx = EditorLspTestContext::new_rust(
lsp::ServerCapabilities {
definition_provider: Some(lsp::OneOf::Left(true)),
references_provider: Some(lsp::OneOf::Left(true)),
..lsp::ServerCapabilities::default()
},
cx,
)
.await;
let set_up_lsp_handlers = |empty_go_to_definition: bool, cx: &mut EditorLspTestContext| {
let go_to_definition = cx.lsp.handle_request::<lsp::request::GotoDefinition, _, _>(
move |params, _| async move {
if empty_go_to_definition {
Ok(None)
} else {
Ok(Some(lsp::GotoDefinitionResponse::Scalar(lsp::Location {
uri: params.text_document_position_params.text_document.uri,
range: lsp::Range::new(lsp::Position::new(4, 3), lsp::Position::new(4, 6)),
})))
}
},
);
let references =
cx.lsp
.handle_request::<lsp::request::References, _, _>(move |params, _| async move {
Ok(Some(vec![lsp::Location {
uri: params.text_document_position.text_document.uri,
range: lsp::Range::new(lsp::Position::new(0, 8), lsp::Position::new(0, 11)),
}]))
});
(go_to_definition, references)
};
cx.set_state(
&r#"fn one() {
let mut a = ˇtwo();
}
fn two() {}"#
.unindent(),
);
set_up_lsp_handlers(false, &mut cx);
let navigated = cx
.update_editor(|editor, cx| editor.go_to_definition(&GoToDefinition, cx))
.await
.expect("Failed to navigate to definition");
assert_eq!(
navigated,
Navigated::Yes,
"Should have navigated to definition from the GetDefinition response"
);
cx.assert_editor_state(
&r#"fn one() {
let mut a = two();
}
fn «twoˇ»() {}"#
.unindent(),
);
let editors = cx.update_workspace(|workspace, cx| {
workspace.items_of_type::<Editor>(cx).collect::<Vec<_>>()
});
cx.update_editor(|_, test_editor_cx| {
assert_eq!(
editors.len(),
1,
"Initially, only the one test editor should be open in the workspace"
);
assert_eq!(
test_editor_cx.view(),
editors.last().expect("Asserted len is 1")
);
});
set_up_lsp_handlers(true, &mut cx);
let navigated = cx
.update_editor(|editor, cx| editor.go_to_definition(&GoToDefinition, cx))
.await
.expect("Failed to navigate to lookup references");
assert_eq!(
navigated,
Navigated::Yes,
"Should have navigated to references as a fallback after empty GoToDefinition response"
);
// We should not change the selections in the existing file
// when opening another multi buffer with the references
cx.assert_editor_state(
&r#"fn one() {
let mut a = two();
}
fn «twoˇ»() {}"#
.unindent(),
);
let editors = cx.update_workspace(|workspace, cx| {
workspace.items_of_type::<Editor>(cx).collect::<Vec<_>>()
});
cx.update_editor(|_, test_editor_cx| {
assert_eq!(
editors.len(),
2,
"After falling back to references search, we open a new editor with the results"
);
let references_fallback_text = editors
.into_iter()
.find(|new_editor| new_editor != test_editor_cx.view())
.expect("Should have one non-test editor now")
.read(test_editor_cx)
.text(test_editor_cx);
assert_eq!(
references_fallback_text, "fn one() {\n let mut a = two();\n}",
"Should use the range from the references response and not the GoToDefinition one"
);
});
}
fn empty_range(row: usize, column: usize) -> Range<DisplayPoint> {
    let point = DisplayPoint::new(DisplayRow(row as u32), column as u32);
    point..point

@@ -2,7 +2,7 @@ use crate::{
    hover_popover::{self, InlayHover},
    scroll::ScrollAmount,
    Anchor, Editor, EditorSnapshot, FindAllReferences, GoToDefinition, GoToTypeDefinition, InlayId,
    Navigated, PointForPosition, SelectPhase,
};
use gpui::{px, AppContext, AsyncWindowContext, Model, Modifiers, Task, ViewContext};
use language::{Bias, ToOffset};
@@ -157,10 +157,10 @@ impl Editor {
    ) {
        let reveal_task = self.cmd_click_reveal_task(point, modifiers, cx);
        cx.spawn(|editor, mut cx| async move {
            let definition_revealed = reveal_task.await.log_err().unwrap_or(Navigated::No);
            let find_references = editor
                .update(&mut cx, |editor, cx| {
                    if definition_revealed == Navigated::Yes {
                        return None;
                    }
                    editor.find_all_references(&FindAllReferences, cx)
@@ -194,7 +194,7 @@ impl Editor {
        point: PointForPosition,
        modifiers: Modifiers,
        cx: &mut ViewContext<Editor>,
    ) -> Task<anyhow::Result<Navigated>> {
        if let Some(hovered_link_state) = self.hovered_link_state.take() {
            self.hide_hovered_link(cx);
            if !hovered_link_state.links.is_empty() {
@@ -211,7 +211,7 @@ impl Editor {
                .read(cx)
                .text_anchor_for_position(current_position, cx)
            else {
                return Task::ready(Ok(Navigated::No));
            };
            let links = hovered_link_state
                .links
@@ -247,7 +247,7 @@ impl Editor {
                self.go_to_definition(&GoToDefinition, cx)
            }
        } else {
            Task::ready(Ok(Navigated::No))
        }
    }
}

@@ -733,6 +733,8 @@ impl DiagnosticPopover {
            .id("diagnostic")
            .block()
            .max_h(max_size.height)
            .overflow_y_scroll()
            .max_w(max_size.width)
            .elevation_2_borderless(cx)
            // Don't draw the background color if the theme
            // allows transparent surfaces.

@@ -777,19 +777,18 @@ fn editor_with_deleted_text(
    });
    let mut editor = Editor::for_multibuffer(multi_buffer, None, true, cx);
    editor.set_soft_wrap_mode(language::language_settings::SoftWrap::None, cx);
    editor.set_show_wrap_guides(false, cx);
    editor.set_show_gutter(false, cx);
    editor.scroll_manager.set_forbid_vertical_scroll(true);
    editor.set_read_only(true);
    editor.set_show_inline_completions(false);
    editor.highlight_rows::<DiffRowHighlight>(
        Anchor::min()..=Anchor::max(),
        Some(deleted_color),
        false,
        cx,
    );

    let subscription_editor = parent_editor.clone();
    editor._subscriptions.extend([

@@ -680,6 +680,12 @@ impl Item for Editor {
        self.nav_history = Some(history);
    }

    fn discarded(&self, _project: Model<Project>, cx: &mut ViewContext<Self>) {
        for buffer in self.buffer().clone().read(cx).all_buffers() {
            buffer.update(cx, |buffer, cx| buffer.discarded(cx))
        }
    }

    fn deactivated(&mut self, cx: &mut ViewContext<Self>) {
        let selection = self.selections.newest_anchor();
        self.push_to_nav_history(selection.head(), None, cx);

@@ -137,7 +137,6 @@ impl LanguageServerManifestEntry {
 #[derive(Clone, PartialEq, Eq, Debug, Deserialize, Serialize)]
 pub struct SlashCommandManifestEntry {
     pub description: String,
-    pub tooltip_text: String,
     pub requires_argument: bool,
 }

@@ -39,11 +39,12 @@ impl SlashCommand for ExtensionSlashCommand {
     fn complete_argument(
         self: Arc<Self>,
-        query: String,
+        arguments: &[String],
         _cancel: Arc<AtomicBool>,
         _workspace: Option<WeakView<Workspace>>,
         cx: &mut WindowContext,
     ) -> Task<Result<Vec<ArgumentCompletion>>> {
+        let arguments = arguments.to_owned();
         cx.background_executor().spawn(async move {
             self.extension
                 .call({
@@ -54,7 +55,7 @@ impl SlashCommand for ExtensionSlashCommand {
                         .call_complete_slash_command_argument(
                             store,
                             &this.command,
-                            query.as_ref(),
+                            &arguments,
                         )
                         .await?
                         .map_err(|e| anyhow!("{}", e))?;
@@ -65,7 +66,8 @@ impl SlashCommand for ExtensionSlashCommand {
                         .map(|completion| ArgumentCompletion {
                             label: completion.label.into(),
                             new_text: completion.new_text,
-                            run_command: completion.run_command,
+                            replace_previous_arguments: false,
+                            after_completion: completion.run_command.into(),
                         })
                         .collect(),
                 )
@@ -79,12 +81,12 @@ impl SlashCommand for ExtensionSlashCommand {
     fn run(
         self: Arc<Self>,
-        argument: Option<&str>,
+        arguments: &[String],
         _workspace: WeakView<Workspace>,
         delegate: Option<Arc<dyn LspAdapterDelegate>>,
         cx: &mut WindowContext,
     ) -> Task<Result<SlashCommandOutput>> {
-        let argument = argument.map(|arg| arg.to_string());
+        let arguments = arguments.to_owned();
         let output = cx.background_executor().spawn(async move {
             self.extension
                 .call({
@@ -97,12 +99,7 @@ impl SlashCommand for ExtensionSlashCommand {
                 None
             };
             let output = extension
-                .call_run_slash_command(
-                    store,
-                    &this.command,
-                    argument.as_deref(),
-                    resource,
-                )
+                .call_run_slash_command(store, &this.command, &arguments, resource)
                 .await?
                 .map_err(|e| anyhow!("{}", e))?;

@@ -1216,7 +1216,10 @@ impl ExtensionStore {
                 command: crate::wit::SlashCommand {
                     name: slash_command_name.to_string(),
                     description: slash_command.description.to_string(),
-                    tooltip_text: slash_command.tooltip_text.to_string(),
+                    // We don't currently expose this as a configurable option, as it currently drives
+                    // the `menu_text` on the `SlashCommand` trait, which is not used for slash commands
+                    // defined in extensions, as they are not able to be added to the menu.
+                    tooltip_text: String::new(),
                     requires_argument: slash_command.requires_argument,
                 },
                 extension: wasm_extension.clone(),

@@ -262,11 +262,11 @@ impl Extension {
         &self,
         store: &mut Store<WasmState>,
         command: &SlashCommand,
-        query: &str,
+        arguments: &[String],
     ) -> Result<Result<Vec<SlashCommandArgumentCompletion>, String>> {
         match self {
             Extension::V010(ext) => {
-                ext.call_complete_slash_command_argument(store, command, query)
+                ext.call_complete_slash_command_argument(store, command, arguments)
                     .await
             }
             Extension::V001(_) | Extension::V004(_) | Extension::V006(_) => Ok(Ok(Vec::new())),
@@ -277,12 +277,12 @@ impl Extension {
         &self,
         store: &mut Store<WasmState>,
         command: &SlashCommand,
-        argument: Option<&str>,
+        arguments: &[String],
         resource: Option<Resource<Arc<dyn LspAdapterDelegate>>>,
     ) -> Result<Result<SlashCommandOutput, String>> {
         match self {
             Extension::V010(ext) => {
-                ext.call_run_slash_command(store, command, argument, resource)
+                ext.call_run_slash_command(store, command, arguments, resource)
                     .await
             }
             Extension::V001(_) | Extension::V004(_) | Extension::V006(_) => {
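The dispatch above illustrates how the extension store stays compatible with older extension API versions: only v0.1.0 understands the new multi-argument slash commands, so earlier versions fall back to an empty completion list. A simplified, self-contained sketch of that pattern (the enum variants and function here are illustrative stand-ins, not the real `Extension` type):

```rust
// Illustrative version-dispatch: newer extension API versions get the full
// argument list; older versions, which predate slash-command completions,
// return an empty result instead of calling into the extension.
enum Extension {
    V010,
    V006,
}

fn complete_slash_command_argument(ext: &Extension, arguments: &[String]) -> Vec<String> {
    match ext {
        // v0.1.0: forward every argument to the extension (faked here).
        Extension::V010 => arguments
            .iter()
            .map(|arg| format!("completion for {arg}"))
            .collect(),
        // Older API versions: no completions available.
        Extension::V006 => Vec::new(),
    }
}

fn main() {
    let args = vec!["docs".to_string()];
    assert_eq!(
        complete_slash_command_argument(&Extension::V010, &args),
        vec!["completion for docs".to_string()]
    );
    assert!(complete_slash_command_argument(&Extension::V006, &args).is_empty());
}
```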

@@ -8,9 +8,6 @@ keywords = ["zed", "extension"]
 edition = "2021"
 license = "Apache-2.0"
 
-# We'll publish v0.1.0 after the release on Wednesday (2024-08-14).
-publish = false
-
 [lints]
 workspace = true

@@ -0,0 +1,12 @@
+# Pending Changes
+
+This is a list of pending changes to the Zed extension API that require a breaking change.
+
+This list should be updated as we notice things that should be changed so that we can batch them up in a single release.
+
+## vNext
+
+### Slash Commands
+
+- Rename `SlashCommand.tooltip_text` to `SlashCommand.menu_text`
+  - We may even want to remove it entirely, as right now this is only used for featured slash commands, and slash commands defined by extensions aren't currently able to be featured.

@@ -114,7 +114,7 @@ pub trait Extension: Send + Sync {
     fn complete_slash_command_argument(
         &self,
         _command: SlashCommand,
-        _query: String,
+        _args: Vec<String>,
     ) -> Result<Vec<SlashCommandArgumentCompletion>, String> {
         Ok(Vec::new())
     }
@@ -123,7 +123,7 @@ pub trait Extension: Send + Sync {
     fn run_slash_command(
         &self,
         _command: SlashCommand,
-        _argument: Option<String>,
+        _args: Vec<String>,
         _worktree: Option<&Worktree>,
     ) -> Result<SlashCommandOutput, String> {
         Err("`run_slash_command` not implemented".to_string())
@@ -257,17 +257,17 @@ impl wit::Guest for Component {
     fn complete_slash_command_argument(
         command: SlashCommand,
-        query: String,
+        args: Vec<String>,
     ) -> Result<Vec<SlashCommandArgumentCompletion>, String> {
-        extension().complete_slash_command_argument(command, query)
+        extension().complete_slash_command_argument(command, args)
     }
 
     fn run_slash_command(
         command: SlashCommand,
-        argument: Option<String>,
+        args: Vec<String>,
         worktree: Option<&Worktree>,
     ) -> Result<SlashCommandOutput, String> {
-        extension().run_slash_command(command, argument, worktree)
+        extension().run_slash_command(command, args, worktree)
     }
 
     fn suggest_docs_packages(provider: String) -> Result<Vec<String>, String> {
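The trait change above moves extension slash commands from a single optional `Option<String>` argument to a `Vec<String>` of arguments. A minimal sketch of what an extension-side `run_slash_command` might look like under the new signature; `SlashCommandOutput` here is a simplified stand-in for the real extension API type, and the joining behavior is purely illustrative.

```rust
// Simplified stand-in for the extension API's output type.
struct SlashCommandOutput {
    text: String,
}

// Sketch of a command under the new multi-argument signature: each argument
// arrives as its own string, so no ad-hoc re-parsing of a single string is needed.
fn run_slash_command(args: Vec<String>) -> Result<SlashCommandOutput, String> {
    if args.is_empty() {
        return Err("this command requires at least one argument".to_string());
    }
    Ok(SlashCommandOutput {
        text: args.join("\n"),
    })
}

fn main() {
    let output = run_slash_command(vec!["a".to_string(), "b".to_string()]).unwrap();
    assert_eq!(output.text, "a\nb");
    assert!(run_slash_command(Vec::new()).is_err());
}
```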

Some files were not shown because too many files have changed in this diff.