Compare commits

...

157 Commits

Author SHA1 Message Date
Nathan Sobo
24e29488b9 💄 2024-05-14 16:42:19 -06:00
Nathan Sobo
0a6cb92af9 Start on a domain socket event server
Set event_stream: "/path/to/socket" in settings.
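
A minimal sketch of what such a Unix domain socket event server could look like (illustrative only; the socket path and JSON payload here are made up, not the actual implementation):

```rust
use std::io::Write;
use std::os::unix::net::UnixListener;

fn main() -> std::io::Result<()> {
    // Path that would come from the `event_stream` setting (example value).
    let path = "/tmp/zed-events.sock";
    let _ = std::fs::remove_file(path); // clear a stale socket from a previous run
    let listener = UnixListener::bind(path)?;

    // Push a line-delimited event to every client that connects.
    for stream in listener.incoming() {
        let mut stream = stream?;
        writeln!(stream, r#"{{"event":"example","detail":"hello"}}"#)?;
    }
    Ok(())
}
```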

Co-Authored-By: Matthew Weitzel <mweitzel@gmail.com>
2024-05-14 16:35:05 -06:00
Flafy
c8ddde27e1 Fix reveal_path blocks on linux (#11702)
If you go to the file tree and press "x" (which is
"project_panel::RevealInFinder"), it will open the default file
manager (in my case, Nautilus). But on Linux this makes Zed unresponsive.
This fixes that.

Release Notes:

- Fixed Zed becoming blocked after opening the file manager from the file
tree on Linux.
2024-05-14 11:18:45 -07:00
Mikayla Maki
bfc066a1ec Toss return value (#11815)
Release Notes:

- N/A
2024-05-14 11:17:10 -07:00
CharlesChen0823
fd8336c8cb linux: Handle modification events from file watcher (#11778)
Fixed #11595 


Release Notes:

- N/A
2024-05-14 11:00:26 -07:00
张小白
d0dd8bf059 windows: Improve handling of the WM_SETTINGCHANGE (#11738)
This event is broadcast to all windows, so we can handle this message in
the `WindowsWindow` rather than in `WindowsPlatform`.

Release Notes:

- N/A
2024-05-14 10:57:46 -07:00
张小白
491c04e176 windows: Support multi-monitor (#11699)
Zed now detects monitor connections and disconnections and responds
accordingly. For example, if the current window's display monitor is
disconnected, the window is moved to the primary monitor. Zed also now
always opens on the monitor specified in `WindowParams`.

Release Notes:

- N/A
2024-05-14 10:54:18 -07:00
张小白
5154910c64 windows: Update crate Windows from 0.53 to 0.56 (#11662)
Version 0.56 has fixed many of the previous bugs, one of which prevented
me from implementing some functions.

Release Notes:

- N/A
2024-05-14 09:59:55 -07:00
apricotbucket28
d1ee2d0749 wayland: Window controls and drag (#11525)
Based on https://github.com/zed-industries/zed/pull/11046

- Partially fixes #10346 
- Fixes https://github.com/zed-industries/zed/issues/9964

## Features
Window buttons

![image](https://github.com/zed-industries/zed/assets/71973804/1b7e0504-3925-45ba-90b5-5adb55e0d739)

Window drag

![image](https://github.com/zed-industries/zed/assets/71973804/9c509a37-e5a5-484c-9f80-c722aeee4380)

Native window context menu

![image](https://github.com/zed-industries/zed/assets/71973804/048ecf52-e277-49bb-a106-91cad226fd8a)

### Limitations

- No resizing
- Wayland only (though X11 always has window decorations)

### Technical

This PR adds three APIs to gpui.

1. `show_window_menu`: Triggers the native title bar context menu.
2. `start_system_move`: Tells the compositor to start dragging the
window.
3. `should_render_window_controls`: Reports whether the compositor lacks
support for server-side decorations, in which case the client should
render its own window controls.

These APIs have only been implemented for Wayland, but they should be
portable to other platforms.
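
As a rough illustration of how a client-side title bar could branch on these three calls (the trait below just mirrors the names from this PR; the real gpui signatures differ):

```rust
// Hypothetical trait mirroring the three APIs named above; not the real gpui API.
trait WindowControls {
    fn show_window_menu(&mut self, x: f32, y: f32);
    fn start_system_move(&mut self);
    fn should_render_window_controls(&self) -> bool;
}

// How a client-side title bar might use these capabilities.
fn on_title_bar_event<W: WindowControls>(window: &mut W, right_click: bool, drag_started: bool) {
    if right_click {
        // Ask the compositor for the native window context menu at the pointer.
        window.show_window_menu(0.0, 0.0);
    } else if drag_started {
        // Hand the drag over to the compositor.
        window.start_system_move();
    } else if window.should_render_window_controls() {
        // No server-side decorations: draw close/maximize/minimize buttons ourselves.
        println!("render client-side window controls");
    }
}

fn main() {}
```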

Release Notes:

- N/A

---------

Co-authored-by: Akilan Elango <akilan1997@gmail.com>
2024-05-14 09:44:55 -07:00
Thorsten Ball
db89353193 git: Support git repos with .git folder above project root (#11550)
TODOs:

- [x] Add assertions to the test to ensure that the git status is
propagated
- [x] Get collaboration working
- [x] Test opening a git repository inside another git repository
- [x] Test opening the sub-folder of a repository that itself contains
another git repository in a subfolder

Fixes:
- Fixes https://github.com/zed-industries/zed/issues/10154
- Fixes https://github.com/zed-industries/zed/issues/8418
- Fixes https://github.com/zed-industries/zed/issues/8275
- Fixes https://github.com/zed-industries/zed/issues/7816
- Fixes https://github.com/zed-industries/zed/issues/6762
- Fixes https://github.com/zed-industries/zed/issues/4419
- Fixes https://github.com/zed-industries/zed/issues/4672
- Fixes https://github.com/zed-industries/zed/issues/5161

Release Notes:

- Added support for opening subfolders of git repositories and treating
them as part of a repository (show git status in project panel, show git
diff in gutter, git blame works, ...)
([#4672](https://github.com/zed-industries/zed/issues/4672)).

Demo video:


https://github.com/zed-industries/zed/assets/1185253/afc1cdc3-372c-404e-99ea-15708589251c
2024-05-14 18:34:51 +02:00
Antonio Scandurra
9f0a20241b Report response latency and errors when using (inline) assistant (#11806)
Release Notes:

- N/A

Co-authored-by: Nathan <nathan@zed.dev>
Co-authored-by: David <davidsp@anthropic.com>
2024-05-14 18:18:26 +02:00
Antonio Scandurra
de09409f01 Sanitize messages before sending them to Anthropic (#11810)
Release Notes:

- N/A

Co-authored-by: Nathan <nathan@zed.dev>
Co-authored-by: David <davidsp@anthropic.com>
2024-05-14 17:47:33 +02:00
Conrad Irwin
69f9489aa9 Don't bundle libdl 🤦 (#11809)
Release Notes:

- N/A
2024-05-14 09:41:35 -06:00
Marshall Bowers
652748b0c9 gleam: Bump to v0.1.2 (#11803)
This PR bumps the Gleam extension to v0.1.2.

Changes:

- #11476
- #11801

Release Notes:

- N/A
2024-05-14 10:53:43 -04:00
Marshall Bowers
77f0d35684 gleam: Add gleam test task (#11801)
This PR adds a task for running `gleam test`.

Release Notes:

- N/A
2024-05-14 10:45:14 -04:00
Antonio Scandurra
5944caaa90 Add support for interacting with Claude in the assistant panel (#11798)
Release Notes:

- Added support for interacting with Claude in the assistant panel. You
can enable it by adding the following to your `settings.json`:

    ```json
    "assistant": {
        "version": "1",
        "provider": {
            "name": "anthropic"
        }
    }
    ```
2024-05-14 15:57:52 +02:00
Antonio Scandurra
019d98898e Add support for gpt-4o when using zed.dev as the model provider (#11794)
Release Notes:

- N/A
2024-05-14 13:55:47 +02:00
Antonio Scandurra
a13a92fbbf Introduce recent files ambient context for assistant (#11791)
<img width="1637" alt="image"
src="https://github.com/zed-industries/zed/assets/482957/5aaec657-3499-42c9-9528-c83728f2a7a1">

Release Notes:

- Added a new ambient context feature that allows showing the model up
to three buffers (along with their diagnostics) that the user interacted
with recently.

---------

Co-authored-by: Nathan Sobo <nathan@zed.dev>
2024-05-14 13:48:36 +02:00
Antonio Scandurra
e4c95b25bf Allow using the inline assistant within the assistant panel (#11754)
Release Notes:

- Added the ability to use the inline assistant within the assistant
panel.
2024-05-14 13:42:32 +02:00
Thorsten Ball
4766b41e96 docs: Document how to use custom api_url in Assistant (#11790)
This essentially documents the comment here:
https://github.com/zed-industries/zed/issues/4424#issuecomment-2053646583

Release Notes:


- N/A
2024-05-14 11:39:57 +02:00
Piotr Osiewicz
95e0d5ed74 tasks: Reorganize task modal (#11752)
![image](https://github.com/zed-industries/zed/assets/24362066/bc7cc3d3-d9fc-4be6-b9b6-e3d8edf5b533)

Release Notes:
- Improved tasks modal by highlighting a distinction between a task
template and concrete task instance and surfacing available keybindings
more prominently. Task templates are now always available in the modal,
even if there's already a history entry with the same label.
- Changed default key binding for "picker::UseSelectedQuery" to `opt-e`.
2024-05-14 11:22:09 +02:00
CharlesChen0823
0a096bf531 terminal: Fix Alacritty key bindings (#11782)
Close #10502 

Release Notes:

- Fixed `ctrl-space` not being forwarded correctly in the terminal view.
([#10502](https://github.com/zed-industries/zed/issues/10502))
2024-05-14 11:09:21 +02:00
Thorsten Ball
ec65035659 inline blame: Match icon size to font size in buffer (#11788)
This fixes #11311.


Release Notes:

- Fixed icon in inline git blame entry not changing with the buffer font
size. ([#11311](https://github.com/zed-industries/zed/issues/11311)).

Before:

![screenshot-2024-05-14-10 48
49@2x](https://github.com/zed-industries/zed/assets/1185253/4a288cae-a52b-4bee-8681-f1d9ba3b57f3)

After:

![screenshot-2024-05-14-10 50
06@2x](https://github.com/zed-industries/zed/assets/1185253/f7a6a608-8ecc-4642-adbd-80858dea75e9)
2024-05-14 11:06:16 +02:00
Toon Willems
9b74acc4f5 Add GPT-4o as possible model (#11764)
Resolves: #11766

Release Notes:

- Added GPT-4o support (see https://openai.com/index/hello-gpt-4o/).
GPT-4o is better and faster than 4-turbo, at half the price.
2024-05-14 10:43:24 +02:00
Thorsten Ball
43da37b0ab shell: Load SHELL from passwd entry if launched as desktop app (#11758)
This fixes #8794 and other related problems.

The problem, in short, is this: `$SHELL` might be outdated. This code
ensures that we update `$SHELL` to what we can deem the newest version,
if we're started as a desktop application.

The background is that you can get the user's preferred shell in two
ways:

1. Read the `SHELL` env variable
2. Read the `/etc/passwd` file and check which shell is set

Most applications should and do prefer (1) over (2).

Why is it preferred? Reading `SHELL` means that processes can inherit
the variable from each other. And you can do something like
`SHELL=/bin/cool-shell ./my-cool-app`

But what happens if the application was launched from the desktop? Which
SHELL env does it inherit then?

It inherits the env from the process that launched it, which is
Finder.app or launchd or GNOME or something else — these are all
long-running processes that get their environment when the user logs in.

They do *not* get a new environment unless restarted (either process
restarted or computer restarted)

That means the `SHELL` env variable they have might be outdated.

That's a problem if you, for example, change your shell with `chsh` and
then launch the app from the desktop.

That change of the default shell is not reflected in the app if the app
only reads from SHELL, because that variable hasn't been updated. Instead,
it should read from the passwd file to get the newest value.
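
A minimal sketch of that lookup with a fallback to `$SHELL`, using the `libc` crate (an illustration of the approach, not the code in this PR):

```rust
use std::env;
use std::ffi::CStr;

/// Prefer the passwd entry (the freshest source after `chsh`); fall back to $SHELL.
fn current_shell() -> String {
    unsafe {
        let pw = libc::getpwuid(libc::getuid());
        if !pw.is_null() && !(*pw).pw_shell.is_null() {
            return CStr::from_ptr((*pw).pw_shell).to_string_lossy().into_owned();
        }
    }
    env::var("SHELL").unwrap_or_else(|_| "/bin/sh".to_string())
}

fn main() {
    println!("shell: {}", current_shell());
}
```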



Release Notes:

- Fixed SHELL being outdated if Zed was launched via Finder or Raycast
or other desktop launchers.
([#8794](https://github.com/zed-industries/zed/issues/8794))
2024-05-14 10:16:55 +02:00
Conrad Irwin
15e1895159 Try some more linker magic to get it working on ubuntu 20 (#11784)
Release Notes:

- N/A
2024-05-13 22:08:10 -06:00
Conrad Irwin
aee00d41d8 Fix script/bundle-linux (#11783)
Release Notes:

- N/A
2024-05-13 20:39:27 -06:00
Marshall Bowers
172cb81e82 xtask: Check for licenses that are duplicated instead of being symlinked (#11777)
This PR updates `cargo xtask licenses` to also check for license files
that are not symlinks.

Release Notes:

- N/A
2024-05-13 19:13:09 -04:00
Marshall Bowers
b01878aadf Add xtask for finding crates with missing licenses (#11776)
This PR adds a new `cargo xtask licenses` command for finding crates
with missing license files.

A number of crates were uncovered that were missing a license file, and
have had the appropriate license file added.

Release Notes:

- N/A
2024-05-13 18:52:12 -04:00
Marshall Bowers
ff2eacead7 Add missing LICENSE file to http crate (#11773)
This PR adds a missing LICENSE file to the recently-extracted `http`
crate.

Release Notes:

- N/A
2024-05-13 18:26:12 -04:00
Kirill Bulatov
fcd5fa9257 Remove selection highlights from deleted diff editors on blur (#11772)
Follow-up of https://github.com/zed-industries/zed/pull/11710

Release Notes:

- Removed extra line highlights when deleted diff editors lose focus
2024-05-14 01:15:49 +03:00
Danilo Leal
cb34507ece docs: Fix typos on the Assistant Panel page (#11725)
Fix typos on the Assistant Panel page, also including removal of
unnecessary commas and standardization to US English.

Release Notes:

- N/A

PS: Assuming here US English is the preferred style (e.g., "canceled"
vs. "cancelled"). Happy to revert if that's not the case! :)
2024-05-13 18:03:06 -04:00
Nate Butler
1ab247756a Add Tool Strip (#11756)
Release Notes:

- N/A

---------

Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
2024-05-13 17:58:08 -04:00
Marshall Bowers
335636c42e ruby: Bump to v0.0.2 (#11769)
This PR bumps the Ruby extension to v0.0.2.

Changes:

- #11768

Release Notes:

- N/A
2024-05-13 17:32:43 -04:00
Vitaly Slobodin
24cc4c69f8 ruby: Add ruby-lsp as an experimental language server (#11768)
Adds [ruby-lsp](https://shopify.github.io/ruby-lsp/) as an alternative
language server for Ruby.
While support for a fully functional `ruby-lsp` is limited due to some
limitations (see https://github.com/zed-industries/zed/pull/8613), I
think it's OK to add it but keep it disabled by default. Thanks!

Resolves #4834.

Release Notes:

- N/A

### Some screenshots

Completion support
![CleanShot 2024-05-13 at 22 58
23@2x](https://github.com/zed-industries/zed/assets/1894248/d5047baa-c58f-465d-ae31-a7045aa56adf)

Symbol search
![CleanShot 2024-05-13 at 23 03
59@2x](https://github.com/zed-industries/zed/assets/1894248/0cb6320a-b000-4a0c-85eb-f8d1a8f6936e)

---------

Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
2024-05-13 17:22:01 -04:00
Conrad Irwin
9af1298a7f Bundle linux deps (#11681)
Include Linux deps in the bundle

Release Notes:

- N/A
2024-05-13 14:10:03 -06:00
Andrew Lygin
8e92f19fed editor: Current line highlight options (#11710)
None:

<img width="717" alt="none"
src="https://github.com/zed-industries/zed/assets/2101250/b2a741db-c64a-4275-a612-5a0d15c9cab7">

Gutter:

<img width="715" alt="gutter"
src="https://github.com/zed-industries/zed/assets/2101250/f7a68a6e-6eba-41b4-9042-5a5fe2ee21a4">

Line:

<img width="717" alt="line"
src="https://github.com/zed-industries/zed/assets/2101250/117f5b00-abd7-425b-8047-1a6fab8293a7">

All:

<img width="715" alt="all"
src="https://github.com/zed-industries/zed/assets/2101250/ebccc0da-0fa0-44e5-903c-cc49d975db76">

This PR adds the `current_line_highlight` setting that defines how to
highlight the current line in the editor:

- `none`: Don't highlight the current line.
- `gutter`: Highlight the gutter area only.
- `line`: Highlight the editor area only.
- `all` (default): Highlight the whole line.

The options have been borrowed from VSCode.

Fixes #5222
Part of #4382

Release Notes:

- Added the `current_line_highlight` setting that defines how to
highlight the current line in the editor (#5222).
2024-05-13 22:02:12 +03:00
Conrad Irwin
cf97b995b2 Fix panic when accepting completions (#11762)
Release Notes:

- Fixed a panic caused by missing bounds check in completion handler
2024-05-13 13:24:54 -04:00
sebbeutler
bc292186cd Fix key-bindings.md example (#11415)
Fixed an obsolete command name in the keymap example,
from "zed::OpenKeyMap" to "zed::OpenKeymap".



Release Notes:

- N/A
2024-05-13 09:09:59 -07:00
Danilo Leal
2e8197ce6e docs: Standardize "Note" blockquote usage (#11724)
Standardize the blockquote "Notes" usage, so all places are using the
`>` blockquote notation, as well as a consistent style for the "Note"
word.

PS: Thought that bolding the word "**Note**" would make for a higher
visual distinction, so went for it in all existing cases! No strong
feelings, though; happy to roll back to just "Note:" if that's
preferable!

Release Notes:

- N/A
2024-05-13 09:02:44 -04:00
Marshall Bowers
76139330e3 vue: Bump to v0.0.2 (#11747)
This PR bumps the Vue extension to v0.0.2.

Changes:

- #11743

Release Notes:

- N/A
2024-05-13 08:49:40 -04:00
Thorsten Ball
c769d58b4c vue: Fix Vue.js language server not starting (#11743)
This fixes #10871.

The introduction of #11412 broke Vue.js language support, since it made
Zed rely more heavily on correct language name -> language ID mappings,
which the Vue.js extension didn't have.

Release Notes:

- N/A
2024-05-13 14:38:14 +02:00
Piotr Osiewicz
c90263d59b editor: Use proper rows for fold indicators in the gutter (#11741)
Follow-up to #11656

Release Notes:

- N/A

---------

Co-authored-by: Kirill <kirill@zed.dev>
2024-05-13 12:03:13 +02:00
Thorsten Ball
1afcd12747 snippets: Fix <tab> not working when at end of snippet (#11740)
This fixes #10185 by not keeping snippet state around when the cursor is
already at the end of the snippet and the final tabstop is empty (i.e. it's
not a selection).

The reason for the fix is outlined in the comments of #10185 but to
repeat:

1. `gopls` sends completions with type "snippet" even when suggesting
single word completions that don't contain tabstops
2. We use a default behavior and add an "end tabstop" by default so that
the cursor jumps to the end of the snippet when accepting it.
3. We'd then push the state of the snippet on the stack which is where
it would stay, with the cursor already at the end and the user unable to
get rid of the tabstop state.

This fixes the issue by not pushing snippet state when the tabstop we
accepted is the "end tabstop".

Release Notes:

- Fixed completions inside snippets breaking the jump-to-next-tabstop
behaviour when using Go/`gopls`
([#10185](https://github.com/zed-industries/zed/issues/10185)).

Demo:



https://github.com/zed-industries/zed/assets/1185253/35384e5e-45c6-46ab-870d-b48e56d8786b
2024-05-13 11:39:00 +02:00
Jason Lee
6df1bc85aa Fix runnable, code_actions button can not trigger when editor not focused (#11729)
## Before


https://github.com/zed-industries/zed/assets/5518/546450fc-ad2c-45d0-8bdb-7b15cfebe235

## After


https://github.com/zed-industries/zed/assets/5518/efc4f863-9db1-4846-83ae-c99ae4dcb3ed

Release Notes:

- Fixed code actions/runnable buttons not triggering when editor is not focused.
2024-05-13 11:02:15 +02:00
Thorsten Ball
91b9e4ee9f git blame: add "Open permalink" to right-click menu (#11734)
This adds a new option to the right click menu for git blame entries in
the gutter: "Open permalink". If there is a URL for the code host, then
this will open it.

Release Notes:

- Added "Open permalink" option to right-click menu of git blame entries
in gutter.

Demo:

![screenshot-2024-05-13-09 39
48@2x](https://github.com/zed-industries/zed/assets/1185253/656c177c-79f0-4a40-8838-7e963d099479)
2024-05-13 09:47:03 +02:00
Gu
8dd26553a3 docs: Fix broken link (#11685)
Fixes the configuration link in `javascript.md`.

https://zed.dev/docs/languages/javascript

Release Notes:

- N/A
2024-05-12 23:06:48 -04:00
João Miguel Nogueira
78f1482cd1 Add autosave with delay (#11325)
Implemented autosave with a delay that no longer formats the code when
it triggers; formatting still runs when the user saves manually.
Additionally, the documentation for the `format_on_save` setting has been
enhanced. This resolves the issue where autosave with delay would
inadvertently format the code, disrupting the user experience, as
reported in the corresponding issue.

Release Notes:

- Fixed a bug where autosave after_delay would auto-format the buffer
([#9787](https://github.com/zed-industries/zed/issues/9787)).

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2024-05-12 17:18:30 -04:00
Andrew Lygin
9fdfe5c813 Don't hide last symbol under the scrollbar (#11696)
This PR adds an extra scrollbar-wide margin to the right side of the
editor. This prevents hiding the last character under the scrollbar.

Fixes #7098

Release Notes:

- Fixed hiding of the last character under the scrollbar (#7098).
2024-05-12 22:04:20 +03:00
Danilo Leal
4446c38705 docs: Add link for "Configuring Zed" mention (#11723)
Release Notes:

- N/A
2024-05-12 21:57:38 +03:00
Kirill Bulatov
692afdca27 Remove deploy artifacts after uploads (#11726)
Release Notes:

- N/A
2024-05-12 21:53:27 +03:00
Erik Simmler
6657e301cd Update reference to keymap.json in tasks docs (#11711)
I assume this was an older file name or just a typo as I can't find any
other references to a `keybindings.json` file. Either way it was
confusing for a bit :)

Release Notes:

- N/A
2024-05-12 13:55:19 +02:00
Kyle Kelley
f990f70936 Bring the Tool Calling README up to date (#11683) 2024-05-12 04:47:19 -07:00
Conrad Irwin
f550f23b97 vim test redux (#11709)
This cleans up the neovim-backed vim tests:
- removed exempted tests (we'll rely on bug reports to find missing edge
cases)
- moved all assertions into non-async fn's so that failures are
reported on the right file/line
- removed the NeovimBackedBindingTestContext
- renamed a few things to make them clearer
- reduced the number of permutations tested in some cases to reduce
slowest test from 60s to 5s

Release Notes:

- N/A
2024-05-11 14:04:05 -04:00
Kirill Bulatov
48cba328f2 Revert "Use sha in the names of Linux nightly archives (#11693)"
This reverts commit 6a64360ec8.
2024-05-11 12:04:49 +03:00
Kirill Bulatov
6a64360ec8 Use sha in the names of Linux nightly archives (#11693)
Release Notes:

- N/A
2024-05-11 11:14:49 +03:00
Piotr Osiewicz
fa04f7514e chore: Improve dev build startup time (#11692)
RustEmbed repeatedly compiled regexes for handling of
'include='/'exclude' statements in a hot loop, which caused each call to
Assets::iter() to take 600ms. Since it is being called twice on our
startup path, that alone contributed over a second to startup time in
debug builds. I've filed a PR with them
https://github.com/pyrossh/rust-embed/pull/244 which brings down the
time for a single iter() call to 6ms.
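
The general shape of such a fix is to compile the regex once instead of on every iteration; a generic sketch with the `regex` crate (not the actual rust-embed patch, and the pattern is made up):

```rust
use regex::Regex;
use std::sync::OnceLock;

// Compiled once on first use instead of once per call.
fn include_pattern() -> &'static Regex {
    static RE: OnceLock<Regex> = OnceLock::new();
    RE.get_or_init(|| Regex::new(r"^assets/.*\.(svg|json)$").expect("valid regex"))
}

fn is_included(path: &str) -> bool {
    include_pattern().is_match(path)
}

fn main() {
    // Calling this in a hot loop no longer recompiles the regex each time.
    for path in ["assets/icons/bell.svg", "src/main.rs"] {
        println!("{path}: {}", is_included(path));
    }
}
```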

Release Notes:

- N/A
2024-05-11 10:10:13 +02:00
Maksim Bondarenkov
69676c9d33 update ring dependency (#11689)
This updates the `ring` dependency to 0.17.x, which has Windows on ARM
support.

Release Notes:

- N/A
2024-05-11 09:44:58 +02:00
Kalle Ahlström
b8a83443ac editor: Support walking through overlapping diagnostics (#11139)
While looking into how to implement #4901, noticed that the current
`Goto next/previous diagnostic` behaved a bit weirdly. That is, when
there are multiple errors that have overlapping ranges, only the first
one can be chosen to be active by the `go_to_diagnostic_impl`.

### Previous behavior:


https://github.com/zed-industries/zed/assets/71292737/95897675-f5ee-40e5-869f-0a40066eb8e3

Doesn't go through all the diagnostics, and going backwards and forwards
doesn't show the same diagnostic always.

### New behavior:


https://github.com/zed-industries/zed/assets/71292737/81f7945a-7ad8-4a34-b286-cc2799b10500

Should always go through the diagnostics in a consistent manner.

Release Notes:
* Improved the behavioral consistency of "Go to Next/Previous
Diagnostic"
2024-05-11 00:32:49 +02:00
Kyle Kelley
c71cfd5da4 Change ToolOutput to ToolView (#11682)
Additionally, the internal `ToolView` trait used by the registry is now
called `InternalToolView`.

This should make it a bit easier to understand that the `ToolView` is
intended for a `gpui::View` (implementing `Render`). It does still feel
like more could be merged here but I think the built tools are now a bit
clearer.

Release Notes:

- N/A
2024-05-10 15:22:09 -07:00
Conrad Irwin
5515ba6043 Extract http from util (#11680)
This avoids the CLI linking libssl etc...

Release Notes:

- N/A
2024-05-10 15:50:20 -06:00
Kirill Bulatov
df41435d1a Introduce DisplayRow, MultiBufferRow newtypes and BufferRow type alias (#11656)
Part of https://github.com/zed-industries/zed/issues/8081

To avoid confusion and bugs when converting between various row `u32`'s,
use different types for each.
Further PRs should split `Point` into buffer and multi buffer variants
and make the code more readable.
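
The newtype pattern in question, roughly (the type names come from this PR's title; fields and helpers here are illustrative):

```rust
// Distinct wrappers so a display row can't be passed where a multi-buffer row is expected.
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
struct DisplayRow(u32);

#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
struct MultiBufferRow(u32);

type BufferRow = u32; // plain alias, as described above

fn first_display_row() -> DisplayRow {
    DisplayRow(0)
}

fn main() {
    let row = first_display_row();
    // let multi_buffer_row: MultiBufferRow = row; // would not compile: the types are distinct
    println!("{:?}", row);
}
```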

Release Notes:

- N/A

---------

Co-authored-by: Piotr <piotr@zed.dev>
2024-05-11 00:06:51 +03:00
Kyle Kelley
38f110852f Improve prompts for tools (#11669)
Improves the descriptions for some of the tools. I wish we had metrics
to back up changes in how the model responds to tool schema changes so
anecdotally I'm just going to say this _seems_ improved.

Release Notes:

- N/A
2024-05-10 13:18:05 -07:00
Marshall Bowers
fc584017d1 ruby: Add embedded_template grammar (#11677)
This PR adds the `embedded_template` grammar to the Ruby extension, as
we need it present for ERB.

Release Notes:

- N/A
2024-05-10 16:08:46 -04:00
Conrad Irwin
451727d257 Create release archive in the target dir (#11675)
Release Notes:

- N/A
2024-05-10 13:48:48 -06:00
Conrad Irwin
c73d6502d6 Make block_with_timeout more robust (#11670)
The previous implementation relied on a background thread to wake up the
main thread,
which was prone to priority inversion under heavy load.

In a synthetic test, where we spawn 200 git processes while doing a 5ms
timeout, the old version blocked for 5-80ms, the new version blocks for
5.1-5.4ms.

Release Notes:

- Improved responsiveness of the main thread under high system load
2024-05-10 13:10:02 -06:00
Marshall Bowers
b34ab6f3a1 Remove references to submodules (#11673)
This PR removes the references to initializing Git submodules as part of
building Zed.

These are no longer needed, as our only submodule was removed in #11672.

Release Notes:

- N/A
2024-05-10 14:33:53 -04:00
Marshall Bowers
c9738a233e Vendor LiveKit protocol (#11672)
This PR vendors the protobuf files from the LiveKit protocol so that we
don't need to have that entire LiveKit protocol repo as a submodule.

---

Eventually I would like to replace this with the
[`livekit-protocol`](https://crates.io/crates/livekit-protocol) crate,
but there is some churn that needs to happen for that.

The main problem is that we're currently on a different version of
`prost` used by `livekit-protocol`, and upgrading our version of `prost`
means that we now need to source `protoc` ourselves (since it is no
longer available to be compiled from source as part of `prost-build`).

Release Notes:

- N/A
2024-05-10 14:18:40 -04:00
Robert Falkén
80d3eafa30 Alternate files with ctrl-6 (#11367)
This is my stab at #7709 

I realize the code is flawed. There's no test coverage, I'm using
`clone()` and there are probably better ways to hook into the events.
Also, I didn't know what context to use for the keybinding. But maybe
with some pointers from someone who actually knows what they're doing, I
can get this shippable.

Release Notes:

- vim: Added ctrl-6 for
[alternate-file](https://vimhelp.org/editing.txt.html#CTRL-%5E) to
navigate back and forth between two buffers.



https://github.com/zed-industries/zed/assets/261929/2d10494e-5668-4988-b7b4-417c922d6c61

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2024-05-10 11:40:08 -06:00
Marshall Bowers
0d26beb91b Add configurable low-speed timeout for OpenAI provider (#11668)
This PR adds a setting to allow configuring the low-speed timeout for
the Assistant when using the OpenAI provider.

The `low_speed_timeout_in_seconds` accepts a number of seconds that the
HTTP client can go below a minimum speed limit (currently set to 100
bytes/second) before it times out.

```json
{
  "assistant": {
    "version": "1",
    "provider": { "name": "openai", "low_speed_timeout_in_seconds": 60 }
  },
}
```

This should help the case where the `openai` provider is being used with
a local model that requires higher timeouts.

Issue: https://github.com/zed-industries/zed/issues/9913

Release Notes:

- Added a `low_speed_timeout_in_seconds` setting to the Assistant's
OpenAI provider
([#9913](https://github.com/zed-industries/zed/issues/9913)).
2024-05-10 13:19:21 -04:00
Marshall Bowers
19994fc190 ruby: Move injections to extension (#11664)
This PR moves the Ruby injections added in #8796 to the right location,
since Ruby support was extracted into an extension in #11360.

Release Notes:

- N/A
2024-05-10 12:06:15 -04:00
Ulysse Buonomo
4f256c7577 Add Ruby language injections (#8796)
This adds support for Ruby heredoc syntax highlighting. The injection
was directly taken from the tree-sitter
[documentation](https://tree-sitter.github.io/tree-sitter/syntax-highlighting#language-injection).

It is quite simple, but has the drawback of only showing highlighting
once the heredoc is fully written and the next line is started. This is
because we use the last line of the heredoc to determine the language,
since using the first one would require some cleanup that we cannot do
trivially. (I might not have fully understood the behaviour of the
`#match?` predicate, which could help us.)

Fixes #4473



Release Notes:

- Added Ruby language injections
([#4473](https://github.com/zed-industries/zed/issues/4473)).

<img width="359" alt="image"
src="https://github.com/zed-industries/zed/assets/11378424/5115b875-a32d-4f28-b21f-471495169266">
2024-05-10 09:00:51 -07:00
Vitaly Slobodin
400e938997 Extract Ruby extension (#11360)
This PR extracts Ruby and ERB support into an extension and removes the
built-in Ruby and ERB support from Zed.

As part of this, the new extension is prepared for adding support for
the `Ruby LSP`, which has some blockers (see
https://github.com/zed-industries/zed/pull/8613). I was thinking of adding
initial support for Ruby LSP, but I think it would be better to start
with extracting the Ruby extension for now.

The implementation, as the 1st step, matches the bundled version but
with 3 differences:

1. Added signature output to the completion popup. See my comment below.
![CleanShot 2024-05-04 at 09 17
37@2x](https://github.com/zed-industries/zed/assets/1894248/486b7a48-ea0c-44ce-b0c9-9f8f5d3ad42d)

2. Use the shell environment for starting the `solargraph` executable.
See my comment below.
3. Bumped the tree sitter version for Ruby to the latest available
version.

Additionally, I plan to tweak this extension a bit in the future but I
think we should do this bit by bit. Thanks!

Release Notes:

- Removed built-in support for Ruby, in favor of making it available as
an extension.

---------

Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
2024-05-10 11:53:11 -04:00
Piotr Osiewicz
df00854bbc gpui: Bump taffy to 0.4.3 again (#11655)
We reverted the bump to taffy 0.4.3 following an issue spotted by
@maxdeviant where the chat text input was not being rendered correctly:

![image](https://github.com/zed-industries/zed/assets/24362066/9d7e6444-47b1-4ac2-808f-bf10404377c0)
This was an issue with the previous attempt to upgrade to taffy 0.4.0 as
well. We bail early in `compute_auto_height_layout` due to a missing
width:
df190ea846/crates/editor/src/element.rs (L5266)
The same issue is visible in the story for the auto-height editor (or
rather, the breakage is visible - the editor simply does not render at all
there).

I tracked down the breakage to
https://github.com/DioxusLabs/taffy/pull/573 ; it looks like it
specifically affects editors with auto-height. In taffy <0.4 which we
were using previously, we'd eventually get a proper width for
auto-height EditorElement after having initially computed the size. With
taffy 0.4 however (and specifically that PR mentioned earlier), we're
getting `Size::NONE` in layout phase [^1].
I've noticed though that even with taffy <0.3, the
`known_dimensions.width` was always equal to `available_space.width` in
layout phase. Hence, I went with falling back to `available_space.width`
when it is a definite value and we don't have a
`known_dimensions.width`.
Done this way, both chat input and auto-height story render correctly.
/cc @as-cii 
Related:
https://github.com/zed-industries/zed/pull/11606
https://github.com/zed-industries/zed/pull/11622
https://github.com/zed-industries/zed/pull/7868
https://github.com/zed-industries/zed/pull/7896

[^1]: This could possibly be related to change in what gets passed in
https://github.com/DioxusLabs/taffy/pull/573/files#diff-60c916e9b0c507925f032cecdde6ae163e41b84b8e4bc0a6c04f7d846b0aad9eR133
, though I'm not sure if editor is a leaf node in this case

Release Notes:

- N/A
2024-05-10 15:05:50 +02:00
Thorsten Ball
df190ea846 vcs menu: Use project's repositories, do not open directly (#11652)
I ran into this when trying to get #11550 working: the VCS menu would
open repositories on its own, based on paths, instead of going through
the worktree on which we already store the git repositories.



Release Notes:

- N/A
2024-05-10 11:06:32 +02:00
Piotr Osiewicz
b3dc31d7c9 tasks: Filter out run indicators outside of excerpt bounds instead of using saturating_sub (#11634)
This way we'll display run indicators around excerpt boundaries
correctly.

Release Notes:

- N/A
2024-05-10 10:45:28 +02:00
Antonio Scandurra
358bc2d225 Replace rich_text with markdown in assistant2 (#11650)
We don't implement copy yet but it should be pretty straightforward to
add.


https://github.com/zed-industries/zed/assets/482957/6b4d7c34-de6b-4b07-aed9-608c771bbbdb

/cc: @rgbkrk @maxbrunsfeld @maxdeviant 

Release Notes:

- N/A
2024-05-10 10:22:14 +02:00
Conrad Irwin
0d760d8d19 Clarify key binding documentation (#11644)
Fixes #10762

Release Notes:

- N/A
2024-05-09 22:42:09 -06:00
Conrad Irwin
45f12b9426 vim cl (#11641)
Release Notes:

- vim: Added support for the changelist. `g;` and `g,` to the
previous/next change
- vim: Added support for the `'.` mark
- vim: Added support for `gi` to resume the previous insert
2024-05-09 21:18:56 -06:00
Conrad Irwin
4f9ba28a25 linux cli (#11585)
- [x] Build out cli on linux
- [x] Add support for --dev-server-token sent by the CLI
- [x] Package cli into the .tar.gz
- [x] Link the cli to ~/.local/bin in install.sh

Release Notes:

- linux: Add cli support for managing zed
2024-05-09 21:08:49 -06:00
Conrad Irwin
0c2d71f1ac Remove 'Destructive' prompts (#11631)
While these would match how macOS handles this scenario, they crash on
Catalina, and require mouse clicks to interact.

cc @bennetbo



Release Notes:

- N/A
2024-05-09 18:52:09 -06:00
Zachiah Sawyer
901cb8b3d2 vim: Add basic mark support (#11507)
Release Notes:
- vim: Added support for buffer-local marks (`'a-'z`) and some builtin
marks `'<`,`'>`,`'[`,`']`, `'{`, `'}` and `^`. Global marks (`'A-'Z`),
and other builtin marks (`'0-'9`, `'(`, `')`, `''`, `'.`, `'"`) are not
yet implemented. (#5122)

---------

Co-authored-by: Conrad Irwin <conrad.irwin@gmail.com>
2024-05-09 18:51:19 -06:00
Kyle Kelley
9cef0ac869 Cleanup tool registry API surface (#11637)
Fast followups to #11629 

Release Notes:

- N/A

---------

Co-authored-by: Max <max@zed.dev>
2024-05-09 16:43:27 -07:00
Kirill Bulatov
79b5556267 Remove a stray eprintln (#11635)
Release Notes:

- N/A
2024-05-10 02:27:55 +03:00
Marshall Bowers
dd67bda595 Update .mailmap (#11633)
This PR updates the `.mailmap` file to merge some commit authors using
multiple emails.

Release Notes:

- N/A
2024-05-09 19:03:34 -04:00
Kirill Bulatov
4762e52d31 Properly calculate expanded git diff hunk highlight ranges (#11632)
Closes https://github.com/zed-industries/zed/issues/11576

Release Notes:

- Fixed expanded diff hunks highlighting an extra row as added
([11576](https://github.com/zed-industries/zed/issues/11576))
2024-05-10 02:02:56 +03:00
Kyle Kelley
50c45c7897 Streaming tools (#11629)
Stream characters in for tool calls to allow rendering partial input.


https://github.com/zed-industries/zed/assets/836375/0f023a4b-9c46-4449-ae69-8b6bcab41673

Release Notes:

- N/A

---------

Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Marshall <marshall@zed.dev>
Co-authored-by: Max <max@zed.dev>
2024-05-09 15:57:14 -07:00
Marshall Bowers
27ed0f4273 assistant2: Render saved conversations inline instead of in a modal (#11630)
This PR reworks how saved conversations are rendered in the new
assistant panel.

Instead of rendering them in a modal we now display them in the panel
itself:

<img width="402" alt="Screenshot 2024-05-09 at 6 18 40 PM"
src="https://github.com/zed-industries/zed/assets/1486634/82decc04-cb31-4d83-a942-7e8426e02679">

Release Notes:

- N/A
2024-05-09 18:29:08 -04:00
Conrad Irwin
a3e75540af Reduce serializability of project delete (#11628)
This may reduce locks when deleting projects.

Release Notes:

- N/A
2024-05-09 16:17:13 -06:00
Conrad Irwin
aa5113cd92 vim: Support paste with count (#11621)
Fixes: #10842



Release Notes:

- vim: Fix pasting with a count (#10842)
2024-05-09 16:12:59 -06:00
Kirill Bulatov
bca639bda9 Use larger runners for Linux CI steps (#11574)
To speed up Linux CI builds, use a set of self-hosted Linux machines and
use them to run all slow CI steps for Linux: "tests", bundling and
nightly builds.

Also adds a set of dev icons, as the Linux bundling script requires them
for `run-bundling`-tagged builds from regular PRs.
The same icons as for Preview were used, but, ideally, something different
could be created.

Release Notes:

- N/A
2024-05-10 00:44:31 +03:00
Piotr Osiewicz
bff1d8b142 task: Allow obtaining custom task variables from tree-sitter queries (#11624)
From now on, only top-level captures are treated as runnable tags and
the rest is appended to the task context as custom environment variables
(unless the name is prefixed with _, in which case the capture is
ignored). This is most likely going to help with Pest-like test runners.



Release Notes:

- N/A

---------

Co-authored-by: Remco <djsmits12@gmail.com>
2024-05-09 23:38:18 +02:00
张小白
95e246ac1c windows: Better dispatcher (#11485)
This PR leverages a more modern Windows API to implement
`WindowsDispatcher`, aligning its implementation more closely with that
of the `macOS` platform. The following improvements have been made:

1. Similar to `macOS`, there is no longer a need to use `sender` and
`receiver` to dispatch a `Runnable` on the main thread.
2. There is also no longer a need to use an `Event` for synchronization.
3. Consistent with #7506 and #11269, `Runnable` is now executed with
high priority.

However, this PR raises the minimum Windows version requirement of
`GPUI` to Windows 10, specifically the Windows 10 Fall Creators Update
(10.0.16299). That said, the `alacritty_terminal` dependency in Zed relies
on `conPTY` on Windows, an API introduced in that same update, so the
impact of this PR on Zed should be minimal. I'd like to hear your
thoughts on this PR, especially about the minimum Windows version bump.

Release Notes:

- N/A
2024-05-09 14:24:57 -07:00
Andrew Lygin
ba25e371be Fix scrollbar markers for folded code (#11625)
There are two errors in scrollbar markers in the presence of folded code:

1. Some markers are not displayed (when the marked row numbers are
greater than the total displayed rows count after folding).
2. Code folding / unfolding doesn't trigger marker repainting.

This PR fixes both problems.

Release Notes:

- Fixed scrollbar markers for folded code.

The second problem (markers are repainted after I move the cursor, not
after folding):


https://github.com/zed-industries/zed/assets/2101250/57ed563d-186d-4497-98ab-d4f946416726
2024-05-09 14:23:21 -07:00
Marshall Bowers
c73ef1a5f3 assistant2: List saved conversations from disk (#11627)
This PR updates the saved conversation picker to use a list of
conversations retrieved from disk instead of the static placeholder
values.

Release Notes:

- N/A
2024-05-09 16:17:07 -04:00
Conrad Irwin
8b5a0cff10 vim: Fix e/E with inlay hints (#11616)
Co-Authored-By: Sergey <sergey.b@hey.com>
Fixes: #7046

Release Notes:

- vim: Fixes e/E with inlay hints (#7046)

Co-authored-by: Sergey <sergey.b@hey.com>
2024-05-09 13:45:45 -06:00
Piotr Osiewicz
f0af508ae5 Revert "chore: Bump taffy version to 0.4.3" (#11622)
Reverts zed-industries/zed#11606
2024-05-09 19:11:37 +02:00
Marshall Bowers
5fe4070501 docs: Fix copying code blocks (#11617)
This PR fixes copying code blocks in the docs.

The problem was that some of the elements we removed from the base
mdBook template were causing errors in the script, which prevented the
right event listeners from being registered for the copy button.

To remedy this, the elements have been restored, but are using `display:
none` so that they don't appear in the UI.

Resolves #11592.

Release Notes:

- N/A
2024-05-09 11:52:26 -04:00
Marshall Bowers
981a143e9b copilot: Update root path on Windows (#11613)
This PR updates the root path used by Copilot to be a valid path when
running on Windows.

Release Notes:

- N/A

Co-authored-by: Jason Lee <huacnlee@gmail.com>
2024-05-09 10:14:29 -04:00
Jason Lee
5e06ce4df3 Add zip extract support for Windows (#11156)
Release Notes:

- [x] Fixed install Node.js runtime and NPM lsp installation on Windows.
- [x] Update Node runtime command to execute on Windows with no window
popup.

Ref #9619, #9424

---------

Co-authored-by: Marshall Bowers <elliott.codes@gmail.com>
2024-05-09 09:23:21 -04:00
Piotr Osiewicz
3bd53d0441 chore: Bump taffy version to 0.4.3 (#11606)
Taffy 0.4 was released 2 months ago. We've been using an older
commit from its 0.4 development branch since November.
Compared to the commit we were pinned to, the following relevant changes
have been made:
-
563d5dcee7
-
64f8aa0fb1
-
70b35712a2

![image](https://github.com/zed-industries/zed/assets/24362066/ffdfae03-2743-496f-bb21-7aa38462178f)

Release Notes:

- N/A
2024-05-09 12:51:53 +02:00
Piotr Osiewicz
76535578e9 Task indicators in multibuffers (#11603)
Following #11487 the task indicators would no longer show up in
multibuffers.
Release Notes:

- N/A
2024-05-09 12:22:33 +02:00
Piotr Osiewicz
fdcedf15b7 editor: Do not show test indicators if a line is folded (#11599)
Originally reported by @RemcoSmitsDev



Release Notes:

- N/A
2024-05-09 11:43:50 +02:00
Piotr Osiewicz
bd6d385817 gpui: Pass Style by value to request_layout (#11597)
A minor thing I've spotted and decided to fix on the spot.
It was being cloned twice within the body of that function (one of which
was redundant even without this PR); now in most cases we go down from 2
clones to 0.
Release Notes:

- N/A
2024-05-09 11:38:53 +02:00
Antonio Scandurra
5df1481297 Introduce a new markdown crate (#11556)
This pull request introduces a new `markdown` crate which is capable of
parsing and rendering a Markdown source. One of the key additions is
that it enables text selection within a `Markdown` view. Eventually,
this will replace `RichText` but for now the goal is to use it in the
revamped assistant in the spirit of making progress.

<img width="711" alt="image"
src="https://github.com/zed-industries/zed/assets/482957/b56c777b-e57c-42f9-95c1-3ada22f63a69">

Note that this pull request doesn't yet use the new markdown renderer in
`assistant2`. This is because we need to modify the assistant before
slotting in the new renderer and I wanted to merge this independently of
those changes.

Release Notes:

- N/A

---------

Co-authored-by: Nathan Sobo <nathan@zed.dev>
Co-authored-by: Conrad <conrad@zed.dev>
Co-authored-by: Alp <akeles@umd.edu>
Co-authored-by: Zachiah Sawyer <zachiah@proton.me>
2024-05-09 11:03:33 +02:00
Doy Bachtiar
ddaaaee973 docs: Fix a typo (#11588)
This PR fixes a typo in docs/src/development/debugging-crashes.md.

Release Notes:

- N/A
2024-05-08 20:16:02 -07:00
张小白
9772b7ac33 windows: Fix Zed freezing when resuming from sleep (#11589)
It seems that on the first frame after the system resumes from sleep,
`dcomp_vsync_fn` mistakenly detects the `timer_stop_event` triggering
and exits the loop.

Release Notes:

- N/A
2024-05-08 20:15:32 -07:00
CharlesChen0823
2e0811e113 windows: Improve platform clipboard (#11553)
I thought the platform clipboard should share one context. Also fixed a
crash in Vim mode when reading from the clipboard caused by an `unwrap`.

Release Notes:

- N/A
2024-05-08 16:09:13 -07:00
张小白
1b292d2fb3 Fix crash when the length of a line is greater than 1024 chars (#11536)
Close #11518 

Release Notes:

- N/A
2024-05-08 16:08:39 -07:00
Marshall Bowers
adecbd1815 Make Markdown default to "format_on_save": "off" (#11584)
This PR changes the Markdown language defaults to set `format_on_save`
to be `off`.

Prettier's Markdown formatting is a bit controversial for some people,
so we turn it off by default.

To restore the previous behavior, add the following to your settings:

```json
{
  "languages": {
    "Markdown": {
      "format_on_save": "on"
    }
  }
}
```


Release Notes:

- Changed the default `format_on_save` behavior for Markdown files to be
`off`.
2024-05-08 18:44:21 -04:00
Max Brunsfeld
a7aa2578e1 Implement serialization of assistant conversations, including tool calls and attachments (#11577)
Release Notes:

- N/A

---------

Co-authored-by: Kyle <kylek@zed.dev>
Co-authored-by: Marshall <marshall@zed.dev>
2024-05-08 17:52:15 -04:00
Conrad Irwin
24ffa0fcf3 Don't panic on failure to allocate an AtlasTile (#11579)
Release Notes:

- Fixed a panic in graphics allocation
2024-05-08 15:47:15 -06:00
Conrad Irwin
b0494d1c05 Pass hover position as an anchor (#11578)
It's too easily to accidentally pass a point from one snapshot into
another

Release Notes:

- Fixed a panic in show hover
2024-05-08 15:39:37 -06:00
Dzmitry Malyshau
a89dc8c42e blade: Switch to linear color space (#11534)
Release Notes:

- N/A

## What

Addresses a long-standing issue of doing the blending operations in sRGB
space. Currently, the input HSL colors are provided in sRGB space and
converted to linear in the vertex shader. Conversion back to sRGB, which
is required on most platforms today, happens at the very end of the
pipeline when writing into sRGB render target.
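
For reference, the per-channel sRGB transfer functions involved in such a conversion (a plain Rust sketch of the math, not the shader code from this PR):

```rust
// Standard sRGB EOTF: convert one non-linear sRGB channel (0.0..=1.0) to linear.
fn srgb_to_linear(s: f32) -> f32 {
    if s <= 0.04045 {
        s / 12.92
    } else {
        ((s + 0.055) / 1.055).powf(2.4)
    }
}

// Inverse transform, applied when writing back to an sRGB render target.
fn linear_to_srgb(l: f32) -> f32 {
    if l <= 0.0031308 {
        l * 12.92
    } else {
        1.055 * l.powf(1.0 / 2.4) - 0.055
    }
}

fn main() {
    let mid = srgb_to_linear(0.5);
    println!("sRGB 0.5 -> linear {mid:.4} -> sRGB {:.4}", linear_to_srgb(mid));
}
```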

Note-1: in the future we may consider doing HSL -> sRGB -> Linear
transform on CPU before feeding into shaders. However, I don't expect
any significant difference here given that we are likely bound by fill
rate and pixel shaders, anyway.

Note-2: the graphics stack is programmed to detect if the platform
supports presenting in linear color space and avoids converting to sRGB
at the end in this case. However, on my Z13 laptop this isn't supported
by the RADV driver.

Closes #7684 
Closes #11462
@jansol please confirm if you can!

## Comparison

Screenshot of the Glazier theme before the change:

![glazier-old](https://github.com/zed-industries/zed/assets/107301/6a9552e1-0819-4a4e-8121-8d62ec012bf4)
Same theme after the change:

![glazier-new](https://github.com/zed-industries/zed/assets/107301/4e61c422-4a4b-4c4b-84a3-55680626d681)
2024-05-08 12:47:29 -07:00
Nate Butler
d103903229 Style header for assistant2 (#11570)
Release Notes:

- N/A
2024-05-08 14:17:07 -04:00
CharlesChen0823
ec3aabe2c2 windows: Fix crash when saving files to disk (#11547)
Closes #11544. Sorry for introducing this issue in a previous PR.

Release Notes:

- N/A
2024-05-08 11:12:07 -07:00
张小白
4b98c35d68 windows: Let IME early return in vim mode (#11551)
This PR follows up #11387 , slightly changes the IME window behavior to
match macOS implementation.

Release Notes:

- N/A
2024-05-08 11:01:48 -07:00
张小白
5103995c32 windows: Fix incorrect font rendering (#11545)
Previously, `DirectWrite` had been following the text system
implementation on `macOS`, using the font's postscript name to
differentiate between different font faces. However, I noticed
occasional rendering issues, such as fonts incorrectly rendering as
italics. Later, I discovered that on `Windows`, the postscript name is
**not** unique. Surprisingly, even the same font can have different
postscript names on macOS and Windows! It's hard to believe! The
postscript name of a font face should be obtained from the same font
table. Why would the same font face have different names?

For example, the postscript name of ZedMono on `macOS` is
`Zed-Mono-Bold-Extended-Italic`, while on `Windows`, it is
`Zed-Mono-Extended`, missing weight and style information, leading to
incorrect rendering.

This PR introduces a `FontIdentifier` struct to uniquely identify font
faces.
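
A hypothetical shape for such an identifier, just to illustrate keying font faces on more than the postscript name (the actual fields in this PR may differ):

```rust
// Hypothetical: combine several properties so two distinct faces never collide,
// even when their postscript names do.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
struct FontIdentifier {
    postscript_name: String,
    family: String,
    weight: u16, // e.g. 400 = regular, 700 = bold
    italic: bool,
}

fn main() {
    let regular = FontIdentifier {
        postscript_name: "Zed-Mono-Extended".into(),
        family: "Zed Mono".into(),
        weight: 400,
        italic: false,
    };
    let bold_italic = FontIdentifier { weight: 700, italic: true, ..regular.clone() };
    assert_ne!(regular, bold_italic); // same postscript name, still distinguishable
}
```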

Release Notes:

- N/A
2024-05-08 10:58:31 -07:00
张小白
fb4c6dbaa7 windows: Implement ResizeColumn and ResizeRow cursor style (#11533)
This PR follows up #11406

Release Notes:

- N/A
2024-05-08 10:57:09 -07:00
LoganDark
91c1716858 Fix horizontal scrolling direction on Windows (#11520)
As per Microsoft documentation, positive values scroll right, not left.
GPUI was incorrectly assuming it perfectly mirrored vertical scrolling.

Fixes #11515

Release Notes:

- N/A
2024-05-08 10:56:31 -07:00
Andrew Lygin
0933426e63 Editor tab bar settings (#7356)
This PR is another step toward tabless editing (#6424, #4963). It adds
support for tab bar settings that allow the user to change its placement
or hide it completely.

Configuration:

```json
"tab_bar": {
  "show": true
}
```

Placement options are "top", "bottom" and "no".

This PR intentionally doesn't affect tab bars of other panes (Terminal
for instance) to keep code changes small. I guess we'll do the rest in
separate PRs.

Release Notes:

- Added support for configuring the editor tab bar (part of #6424,
#4963).

---------

Co-authored-by: Mikayla <mikayla@zed.dev>
2024-05-08 10:54:48 -07:00
Kyle Kelley
689e4aef2f Render messages as early as possible to show progress (#11569)
This shows "Researching..." as placeholder text as early as possible so
that the user can see the model is working on reading/researching/etc.

This also adds on an `Option<Value>` to the `render_running` function so
that tools can hopefully render based on partially completed JSON (still
to come).

Release Notes:

- N/A
2024-05-08 10:24:51 -07:00
Thorsten Ball
dbebb40956 linux: Store binary path before restart to handle deleted binary file (#11568)
This fixes restart after updates not working on Linux.

On Linux we can't reliably get the binary path after an update, because
the original binary was deleted and the path will contain ` (deleted)`.

See: https://github.com/rust-lang/rust/issues/69343

We *could* strip ` (deleted)` off, but that feels nasty. So instead we
save the original binary path, before we do the installation, then
restart.

Later on, we can also change this to be a _new_ binary path returned by
the installers, which we then have to start.


Release Notes:

- N/A
2024-05-08 19:13:28 +02:00
Max Brunsfeld
d2cec0221b Run windows CI on our own GH-hosted windows runner (#11567)
It's a 16-core runner.

Release Notes:

- N/A
2024-05-08 10:09:43 -07:00
Joseph T. Lyons
724acaab61 v0.136.x dev 2024-05-08 12:05:45 -04:00
Kirill Bulatov
ffa2d90dc3 Return prettier entry back to LSP logs (#11563)
Fixes prettier entries disappearing after
https://github.com/zed-industries/zed/pull/10788

Release Notes:

- N/A
2024-05-08 18:21:43 +03:00
Marshall Bowers
5c2ec1705d docs: Fix default value for formatter setting (#11560)
This PR fixes the default value for the `formatter` setting listed in
the docs.

The default value is `"auto"`:
194c43b84b/assets/settings/default.json (L367)

Resolves #11468.

Release Notes:

- N/A
2024-05-08 10:42:41 -04:00
Piotr Osiewicz
2671b9c63c Fix alignment of code actions menu with narrow panes 2024-05-08 16:34:56 +02:00
Piotr Osiewicz
68fe2bb776 Do away with display points in toggle_code_actions 2024-05-08 16:34:56 +02:00
Piotr Osiewicz
65f7238777 editor: Fix task indicator layout for wrapped lines 2024-05-08 16:34:56 +02:00
Thorsten Ball
ca680f07f7 branch picker: Always show HEAD first (#11552)
I think this is a lot less confusing than another branch being selected
by default when there's no query.



Release Notes:

- Changed the branch picker to always show the current branch as the
default selected entry.



https://github.com/zed-industries/zed/assets/1185253/18b50656-f6ac-4138-b4e0-9024072e1555
2024-05-08 15:01:36 +02:00
d1y
9d681bda8d go: support highlight regexp (#11538)
Before:
<img width="521" alt="image"
src="https://github.com/zed-industries/zed/assets/45585937/7b87e552-0cf0-4168-933f-21d1ae84bf64">

After:
<img width="499" alt="image"
src="https://github.com/zed-industries/zed/assets/45585937/f1171375-8c22-4186-a5f1-1bdfa56cf01f">

Release Notes:

- Added go regexp highlighting
2024-05-08 12:52:56 +02:00
Piotr Osiewicz
1669ff80df editor: Fix menu::Confirm falling through to editor when confirming task entry in code actions (#11546)
Release Notes:

- N/A
2024-05-08 12:50:08 +02:00
LoganDark
45aca348b8 Take local project settings into account when launching terminals (#11526)
Fixes #7599

Use project level settings if possible, when creating terminals.

Release Notes:

- Fixed terminals ignoring project-specific settings ([7599](https://github.com/zed-industries/zed/issues/7599))
2024-05-08 13:39:47 +03:00
Piotr Osiewicz
07942bbdfe Editor: Do not display code actions in task gutter menu if they belong to different line (#11506)
This doesn't address the focus issues we saw with @maxbrunsfeld yet.

Release Notes:

- N/A
2024-05-08 12:34:47 +02:00
Owen Law
47c12c6563 Make prefer_line the default soft-wrap behavior. (#11542)
Stops lines from clipping at 1024 by default, returning it to the old
default behaviour where it wraps at 512. The `none` mode is only
supposed to be used for git hunks. See
https://github.com/zed-industries/zed/issues/11518#issuecomment-2099836283.

Release Notes:

- N/A
2024-05-08 13:14:48 +03:00
张小白
63a5f46df4 Remember window restore size (#10429)
Now, regardless of how the Zed window is closed, Zed can remember the
window's restore size.

- [x] Windows implementation
- [x] macOS implementation
- [x] Linux implementation (partial)
- [x] update the SQL database (mark column `fullscreen` as deprecated)

The current implementation on Linux is basic, and I'm not sure if it's
correct.

The variable `fullscreen` in SQL can be removed, but I'm unsure how to
do it.
edit: mark `fullscreen` as deprecated

### Case 1

When the window is closed as maximized, reopening it will open in the
maximized state, and returning from maximized state will restore the
position and size it had when it was maximized.



https://github.com/zed-industries/zed/assets/14981363/7207752e-878a-4d43-93a7-41ad1fdb3a06


### Case 2

When the window is closed as fullscreen, reopening it will open in
fullscreen mode, and toggling fullscreen will restore the position and
size it had when it entered fullscreen (note that the fullscreen
application was not recorded in the video, showing a black screen, but
it had actually entered fullscreen mode).



https://github.com/zed-industries/zed/assets/14981363/ea5aa70d-b296-462a-afb3-4c3372883ea3

### What's more

- As English is not my native language, some variable and struct names
may need to be modified to match their actual meaning.
- I am not familiar with the APIs related to macOS and Linux, so
implementation for these two platforms has not been done for now.
- Any suggestions and ideas are welcome.

Release Notes:

- N/A
2024-05-07 23:29:03 -06:00
Kyle Kelley
6ddcff25e3 Show annotations more like the inline assistant (#11530)
* compute gutter width
* show the AI icon
* borders, background, and text color for annotations

<img width="1840" alt="image"
src="https://github.com/zed-industries/zed/assets/836375/93f2e9b8-d7f7-4c25-b3e2-cf77a0c4ca36">

Release Notes:

- N/A
2024-05-07 19:16:35 -07:00
Kyle Kelley
1cf40d77e2 Supermaven enhanced (#11521)
Fixes #11422 by accepting just the start of the line.

Release Notes:

- N/A

---------

Co-authored-by: max <max@zed.dev>
Co-authored-by: jacob <jacob@supermaven.com>
2024-05-07 15:38:03 -07:00
Marshall Bowers
33a72219c0 assistant2: Add new conversation button, that also saves the current conversation (#11522)
This PR updates the new assistant with a button to start a new
conversation.

Clicking on it will reset the chat and put it into a fresh state.

The current conversation will be serialized and written to
`~/.config/zed/conversations`.

Release Notes:

- N/A
2024-05-07 18:16:48 -04:00
Andrei Zvonimir Crnković
c77dd8b9e0 docs: Add linux build command explanation (#11513)
Just adding a short note about `cargo build --release` and the location
of the compiled binary. It helps a lot to streamline usage of Zed day to
day on Linux.

Release Notes:

- N/A
2024-05-07 14:37:34 -07:00
Kirill Bulatov
3d9f0087ff Do not show diffs for files with \r\n contents (#11519) 2024-05-08 00:37:09 +03:00
Marshall Bowers
6a64b732b6 assistant2: Add general structure for conversation history (#11516)
This PR adds the general structure for conversation history to the new
assistant.

Right now we have a placeholder button in the assistant panel that will
toggle a picker containing some placeholder saved conversations.

Release Notes:

- N/A
2024-05-07 17:03:33 -04:00
Nate Butler
47ca343803 Add DecoratedIcon (#11512)
This allows us to create icons with dynamic decorations drawn on top
like these:


![image](https://github.com/zed-industries/zed/assets/1714999/1d1a22df-8f90-47f2-abbd-ed7afa8fc641)

### Examples:

```rust
div()
    .child(DecoratedIcon::new(
        Icon::new(IconName::Bell).color(Color::Muted),
        IconDecoration::IndicatorDot,
    ))
    .child(
        DecoratedIcon::new(Icon::new(IconName::Bell), IconDecoration::IndicatorDot)
            .decoration_color(Color::Accent),
    )
    .child(DecoratedIcon::new(
        Icon::new(IconName::Bell).color(Color::Muted),
        IconDecoration::Strikethrough,
    ))
    .child(
        DecoratedIcon::new(Icon::new(IconName::Bell), IconDecoration::X)
            .decoration_color(Color::Error),
    )
```

Release Notes:

- N/A
2024-05-07 16:36:13 -04:00
Mikayla Maki
768b63a497 Remove windows dependency from all non-windows platforms (#11510)
Whoops

Release Notes:

- N/A
2024-05-07 12:45:23 -07:00
Kyle Kelley
0d2f65ac13 Language Model Tool for commenting in a multibuffer (#11509)
Language Model can now open multibuffers and insert comments as block
decorations in the editor.


![image](https://github.com/zed-industries/zed/assets/836375/f4456ad0-66e7-4ad6-a2b3-63810a3223a5)


Release Notes:

- N/A

---------

Co-authored-by: max <max@zed.dev>
Co-authored-by: marshall <marshall@zed.dev>
Co-authored-by: nate <nate@zed.dev>
2024-05-07 12:35:37 -07:00
Joseph T. Lyons
953acb0f6d Document configuring pyright for pyproject.toml (#11508)
Release Notes:

- N/A
2024-05-07 14:40:34 -04:00
Marshall Bowers
d64106e01b Add development credentials provider (#11505)
This PR adds a new development credentials provider for the purpose of
streamlining local development against production collab.

## Problem

Today if you want to run a development build of Zed against the
production collab server, you need to either:

1. Enter your keychain password every time in order to retrieve your
saved credentials
2. Re-authenticate with zed.dev every time
    - This can get annoying as you need to pop out into a browser window
    - I've also seen cases where if you re-auth too many times in a row
      GitHub will make you confirm the authentication, as it looks suspicious

## Solution

This PR decouples the concept of the credentials provider from the
keychain, and adds a new development credentials provider to address
this specific case.

Now when running a development build of Zed and the
`ZED_DEVELOPMENT_AUTH` environment variable is set to a non-empty value,
the credentials will be saved to disk instead of the system keychain.

While this is not as secure as storing them in the system keychain,
since it is only used for development the tradeoff seems acceptable for
the resulting improvement in UX.
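
A minimal sketch of the decoupling described above, under the assumption of a small provider trait and a file-backed dev implementation (the names and file location are illustrative, not the actual collab client code):

```rust
use std::{env, fs, path::PathBuf};

// Illustrative trait: the client asks a credentials provider for its token
// instead of talking to the keychain directly.
trait CredentialsProvider {
    fn read(&self) -> Option<String>;
    fn write(&self, credentials: &str) -> std::io::Result<()>;
}

// Development provider: stores credentials in a plain file on disk.
struct DevelopmentCredentialsProvider {
    path: PathBuf,
}

impl CredentialsProvider for DevelopmentCredentialsProvider {
    fn read(&self) -> Option<String> {
        fs::read_to_string(&self.path).ok()
    }

    fn write(&self, credentials: &str) -> std::io::Result<()> {
        fs::write(&self.path, credentials)
    }
}

fn development_provider() -> Option<DevelopmentCredentialsProvider> {
    // Assumption: only builds with ZED_DEVELOPMENT_AUTH set to a non-empty value
    // use the file-backed provider; otherwise the keychain-backed one (not shown) is used.
    let enabled = env::var("ZED_DEVELOPMENT_AUTH").map_or(false, |v| !v.is_empty());
    enabled.then(|| DevelopmentCredentialsProvider {
        path: PathBuf::from("target/zed-dev-credentials"),
    })
}
```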

Release Notes:

- N/A
2024-05-07 13:59:18 -04:00
张小白
c260f7d5ac Refactor Windows platform implementation (#11393)
This aligns the Windows platform implementation with a code style
similar to the macOS platform's, eliminating most of the `Cell`s and
`Mutex`es. This adjustment facilitates potential future porting to a
multi-threaded implementation if required.

Overall, this PR made the following changes: it segregated all member
variables in `WindowsPlatform` and `WindowsWindow`, grouping together
variables that remain constant throughout the entire app lifecycle,
while placing variables that may change during app runtime into
`RefCell`.
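
In pattern form, the split looks roughly like this (field names are invented for illustration; the real structs carry many more members):

```rust
use std::cell::RefCell;

// Illustrative only: state that never changes after creation lives directly on
// the struct, while state that mutates during the app's lifetime is grouped
// behind a single RefCell, since the platform layer runs on one thread.
struct WindowsWindowState {
    origin: (i32, i32),
    size: (u32, u32),
    fullscreen: bool,
}

struct WindowsWindow {
    // Constant for the window's whole lifetime.
    hwnd: isize,
    scale_factor: f32,
    // Mutable at runtime, so interior mutability is confined here.
    state: RefCell<WindowsWindowState>,
}

impl WindowsWindow {
    fn toggle_fullscreen(&self) {
        let mut state = self.state.borrow_mut();
        state.fullscreen = !state.fullscreen;
    }
}
```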

Edit: 
During the code refactoring process, a bug was also fixed.

**Before**: 
Closing a window whose file had changed did nothing:


https://github.com/zed-industries/zed/assets/14981363/0bcda7c1-808c-4b36-8953-a3a3365a314e

**After**:
Now the `should_close` callback is properly handled:


https://github.com/zed-industries/zed/assets/14981363/c8887b72-9a0b-42ad-b9ab-7d0775d843f5


Release Notes:

- N/A
2024-05-07 09:49:39 -07:00
Kyle Kelley
b038fb3729 rename attachment_store -> attachment_registry (#11501)
Minor touch-up from #11471

Release Notes:

- N/A
2024-05-07 09:18:18 -07:00
Thorsten Ball
4eedbdedae linux: Use optional compile-time RELEASE_VERSION variable (#11490)
This implements `app_version` on Linux by using an optional,
compile-time `RELEASE_VERSION` env var that can be set.

We settled on `RELEASE_VERSION` as the name, since it's similar to
`RELEASE_CHANNEL`, which we already use in Zed.
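
A sketch of how such an optional compile-time variable can be read (this is the general `option_env!` pattern, not necessarily the exact code that landed):

```rust
// `option_env!` reads the variable at compile time and yields None when it is
// unset, so local builds still compile without RELEASE_VERSION being defined.
const RELEASE_VERSION: Option<&str> = option_env!("RELEASE_VERSION");

fn app_version() -> &'static str {
    // Assumption: fall back to the crate version for builds without an explicit release version.
    RELEASE_VERSION.unwrap_or(env!("CARGO_PKG_VERSION"))
}
```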

cc @ConradIrwin @mikayla-maki 

Release Notes:

- N/A

Co-authored-by: Bennet <bennetbo@gmx.de>
2024-05-07 17:50:19 +02:00
Conrad Irwin
33d4c563fb Add a nohup workaround (#11499)
Release Notes:

- N/A
2024-05-07 09:50:06 -06:00
Conrad Irwin
e2907983d1 don't report hangs on stable (#11494)
Release Notes:

- N/A
2024-05-07 09:35:52 -06:00
Conrad Irwin
ceab446409 Restore vim docs (#11491)
These got reset to the wrong version as part of the docs migration

Release Notes:

- N/A
2024-05-07 09:24:00 -06:00
Kyle Kelley
72c47b7f01 Assistant grouping (#11479)
Groups collections of assistant messages with their tool calls as
children of the assistant message container.


![image](https://github.com/zed-industries/zed/assets/836375/b26b7c90-4c8d-4bbd-972a-1e769d78a455)

Release Notes:

- N/A
2024-05-07 08:21:57 -07:00
Marshall Bowers
c77d2eb73f Increase short SHA length to 7 characters (#11492)
This PR increases the length of a shortened Git SHA from 6 to 7
characters.

This matches what GitHub uses.

I also took the opportunity to factor out a common method for computing
a short SHA so that we have a single source of truth for the length that
we're using.
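
For illustration, that single source of truth could be as small as a helper like this (a sketch, not the exact method added in the PR):

```rust
/// Number of characters to keep when shortening a commit SHA, matching GitHub.
const SHORT_SHA_LENGTH: usize = 7;

fn short_sha(sha: &str) -> &str {
    // SHAs are ASCII, so byte slicing is safe; min() guards against short inputs.
    &sha[..sha.len().min(SHORT_SHA_LENGTH)]
}

fn main() {
    assert_eq!(short_sha("fc4ea55d3c0a6cb92af9"), "fc4ea55");
}
```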

Release Notes:

- Increased the short commit SHA length used by git blame from 6 to 7
characters.
2024-05-07 11:10:44 -04:00
Conrad Irwin
fc4ea55d3c Update pull_request_template.md
Make it easier to edit the template to select the N/A option.
2024-05-07 08:51:38 -06:00
406 changed files with 19069 additions and 13522 deletions


@@ -6,6 +6,8 @@ Release Notes:
Optionally, include screenshots / media showcasing your addition that can be included in the release notes.
**or**
### Or...
Release Notes:
- N/A


@@ -32,7 +32,6 @@ jobs:
uses: actions/checkout@v4
with:
clean: false
submodules: "recursive"
fetch-depth: 0
- name: Remove untracked files
@@ -87,7 +86,6 @@ jobs:
uses: actions/checkout@v4
with:
clean: false
submodules: "recursive"
- name: cargo clippy
run: cargo xtask clippy
@@ -104,22 +102,17 @@ jobs:
# todo(linux): Actually run the tests
linux_tests:
name: (Linux) Run Clippy and tests
runs-on: ubuntu-latest
runs-on:
- self-hosted
- deploy
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH
- name: Checkout repo
uses: actions/checkout@v4
with:
clean: false
submodules: "recursive"
- name: Cache dependencies
uses: swatinem/rust-cache@v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: configure linux
shell: bash -euxo pipefail {0}
run: script/linux
- name: cargo clippy
run: cargo xtask clippy
@@ -130,13 +123,12 @@ jobs:
# todo(windows): Actually run the tests
windows_tests:
name: (Windows) Run Clippy and tests
runs-on: windows-latest
runs-on: hosted-windows-1
steps:
- name: Checkout repo
uses: actions/checkout@v4
with:
clean: false
submodules: "recursive"
- name: Cache dependencies
uses: swatinem/rust-cache@v2
@@ -179,7 +171,6 @@ jobs:
# 25 was chosen arbitrarily.
fetch-depth: 25
clean: false
submodules: "recursive"
- name: Limit target directory size
run: script/clear-target-dir-if-larger-than 100
@@ -262,26 +253,24 @@ jobs:
bundle-linux:
name: Create a Linux bundle
runs-on: ubuntu-22.04 # keep the version fixed to avoid libc and dynamic linked library issues
runs-on:
- self-hosted
- deploy
if: ${{ startsWith(github.ref, 'refs/tags/v') || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
needs: [linux_tests]
env:
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH
- name: Checkout repo
uses: actions/checkout@v4
with:
clean: false
submodules: "recursive"
- name: Cache dependencies
uses: swatinem/rust-cache@v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: Configure linux
shell: bash -euxo pipefail {0}
run: script/linux
- name: Limit target directory size
run: script/clear-target-dir-if-larger-than 100
- name: Determine version and release channel
if: ${{ startsWith(github.ref, 'refs/tags/v') }}


@@ -21,7 +21,6 @@ jobs:
uses: actions/checkout@v4
with:
clean: false
submodules: "recursive"
fetch-depth: 0
- name: Run style checks
@@ -41,7 +40,6 @@ jobs:
uses: actions/checkout@v4
with:
clean: false
submodules: "recursive"
fetch-depth: 0
- name: Install cargo nextest
@@ -76,7 +74,6 @@ jobs:
uses: actions/checkout@v4
with:
clean: false
submodules: "recursive"
- name: Build docker image
run: docker build . --build-arg GITHUB_SHA=$GITHUB_SHA --tag registry.digitalocean.com/zed/collab:$GITHUB_SHA


@@ -19,7 +19,6 @@ jobs:
uses: actions/checkout@v4
with:
clean: false
submodules: "recursive"
- name: Cache dependencies
uses: swatinem/rust-cache@v2


@@ -31,7 +31,6 @@ jobs:
uses: actions/checkout@v4
with:
clean: false
submodules: "recursive"
- name: Run randomized tests
run: script/randomized-test-ci


@@ -25,7 +25,6 @@ jobs:
uses: actions/checkout@v4
with:
clean: false
submodules: "recursive"
fetch-depth: 0
- name: Run style checks
@@ -45,7 +44,6 @@ jobs:
uses: actions/checkout@v4
with:
clean: false
submodules: "recursive"
- name: Run tests
uses: ./.github/actions/run_tests
@@ -75,7 +73,6 @@ jobs:
uses: actions/checkout@v4
with:
clean: false
submodules: "recursive"
- name: Set release channel to nightly
run: |
@@ -96,7 +93,9 @@ jobs:
bundle-deb:
name: Create a Linux *.tar.gz bundle
if: github.repository_owner == 'zed-industries'
runs-on: ubuntu-22.04 # keep the version fixed to avoid libc and dynamic linked library issues
runs-on:
- self-hosted
- deploy
needs: tests
env:
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
@@ -107,16 +106,9 @@ jobs:
uses: actions/checkout@v4
with:
clean: false
submodules: "recursive"
- name: Cache dependencies
uses: swatinem/rust-cache@v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: Configure linux
shell: bash -euxo pipefail {0}
run: script/linux
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH
- name: Set release channel to nightly
run: |

.gitmodules vendored

@@ -1,3 +0,0 @@
[submodule "crates/live_kit_server/protocol"]
path = crates/live_kit_server/protocol
url = https://github.com/livekit/protocol


@@ -15,8 +15,12 @@ Christian Bergschneider <christian.bergschneider@gmx.de>
Christian Bergschneider <christian.bergschneider@gmx.de> <magiclake@gmx.de>
Conrad Irwin <conrad@zed.dev>
Conrad Irwin <conrad@zed.dev> <conrad.irwin@gmail.com>
Fernando Tagawa <tagawafernando@gmail.com>
Fernando Tagawa <tagawafernando@gmail.com> <fernando.tagawa.gamail.com@gmail.com>
Greg Morenz <greg-morenz@droid.cafe>
Greg Morenz <greg-morenz@droid.cafe> <morenzg@gmail.com>
Ivan Žužak <izuzak@gmail.com>
Ivan Žužak <izuzak@gmail.com> <ivan.zuzak@github.com>
Joseph T. Lyons <JosephTLyons@gmail.com>
Joseph T. Lyons <JosephTLyons@gmail.com> <JosephTLyons@users.noreply.github.com>
Julia <floc@unpromptedtirade.com>
@@ -29,6 +33,9 @@ Kirill Bulatov <kirill@zed.dev>
Kirill Bulatov <kirill@zed.dev> <mail4score@gmail.com>
Kyle Caverly <kylebcaverly@gmail.com>
Kyle Caverly <kylebcaverly@gmail.com> <kyle@zed.dev>
LoganDark <contact@logandark.mozmail.com>
LoganDark <contact@logandark.mozmail.com> <git@logandark.mozmail.com>
LoganDark <contact@logandark.mozmail.com> <github@logandark.mozmail.com>
Marshall Bowers <elliott.codes@gmail.com>
Marshall Bowers <elliott.codes@gmail.com> <marshall@zed.dev>
Max Brunsfeld <maxbrunsfeld@gmail.com>
@@ -41,6 +48,8 @@ Nate Butler <iamnbutler@gmail.com> <nate@zed.dev>
Nathan Sobo <nathan@zed.dev>
Nathan Sobo <nathan@zed.dev> <nathan@warp.dev>
Nathan Sobo <nathan@zed.dev> <nathansobo@gmail.com>
Petros Amoiridis <petros@hey.com>
Petros Amoiridis <petros@hey.com> <petros@zed.dev>
Piotr Osiewicz <piotr@zed.dev>
Piotr Osiewicz <piotr@zed.dev> <24362066+osiewicz@users.noreply.github.com>
Robert Clover <git@clo4.net>

Cargo.lock generated

File diff suppressed because it is too large.


@@ -4,8 +4,8 @@ members = [
"crates/anthropic",
"crates/assets",
"crates/assistant",
"crates/assistant_tooling",
"crates/assistant2",
"crates/assistant_tooling",
"crates/audio",
"crates/auto_update",
"crates/breadcrumbs",
@@ -23,6 +23,7 @@ members = [
"crates/db",
"crates/diagnostics",
"crates/editor",
"crates/event_server",
"crates/extension",
"crates/extension_api",
"crates/extension_cli",
@@ -41,6 +42,7 @@ members = [
"crates/gpui",
"crates/gpui_macros",
"crates/headless",
"crates/http",
"crates/image_viewer",
"crates/inline_completion_button",
"crates/install_cli",
@@ -52,6 +54,7 @@ members = [
"crates/live_kit_client",
"crates/live_kit_server",
"crates/lsp",
"crates/markdown",
"crates/markdown_preview",
"crates/media",
"crates/menu",
@@ -126,6 +129,7 @@ members = [
"extensions/php",
"extensions/prisma",
"extensions/purescript",
"extensions/ruby",
"extensions/svelte",
"extensions/terraform",
"extensions/toml",
@@ -165,6 +169,7 @@ copilot = { path = "crates/copilot" }
db = { path = "crates/db" }
diagnostics = { path = "crates/diagnostics" }
editor = { path = "crates/editor" }
event_server = { path = "crates/event_server" }
extension = { path = "crates/extension" }
extensions_ui = { path = "crates/extensions_ui" }
feature_flags = { path = "crates/feature_flags" }
@@ -181,6 +186,7 @@ google_ai = { path = "crates/google_ai" }
gpui = { path = "crates/gpui" }
gpui_macros = { path = "crates/gpui_macros" }
headless = { path = "crates/headless" }
http = { path = "crates/http" }
install_cli = { path = "crates/install_cli" }
image_viewer = { path = "crates/image_viewer" }
inline_completion_button = { path = "crates/inline_completion_button" }
@@ -192,6 +198,7 @@ languages = { path = "crates/languages" }
live_kit_client = { path = "crates/live_kit_client" }
live_kit_server = { path = "crates/live_kit_server" }
lsp = { path = "crates/lsp" }
markdown = { path = "crates/markdown" }
markdown_preview = { path = "crates/markdown_preview" }
media = { path = "crates/media" }
menu = { path = "crates/menu" }
@@ -225,7 +232,7 @@ snippet = { path = "crates/snippet" }
sqlez = { path = "crates/sqlez" }
sqlez_macros = { path = "crates/sqlez_macros" }
supermaven = { path = "crates/supermaven" }
supermaven_api = { path = "crates/supermaven_api"}
supermaven_api = { path = "crates/supermaven_api" }
story = { path = "crates/story" }
storybook = { path = "crates/storybook" }
sum_tree = { path = "crates/sum_tree" }
@@ -255,20 +262,24 @@ async-fs = "1.6"
async-recursion = "1.0.0"
async-tar = "0.4.2"
async-trait = "0.1"
async_zip = { version = "0.0.17", features = ["deflate", "deflate64"] }
bitflags = "2.4.2"
blade-graphics = { git = "https://github.com/kvark/blade", rev = "f5766863de9dcc092e90fdbbc5e0007a99e7f9bf" }
blade-macros = { git = "https://github.com/kvark/blade", rev = "f5766863de9dcc092e90fdbbc5e0007a99e7f9bf" }
blade-graphics = { git = "https://github.com/kvark/blade", rev = "e35b2d41f221a48b75f7cf2e78a81e7ecb7a383c" }
blade-macros = { git = "https://github.com/kvark/blade", rev = "e35b2d41f221a48b75f7cf2e78a81e7ecb7a383c" }
cap-std = "3.0"
cargo_toml = "0.20"
chrono = { version = "0.4", features = ["serde"] }
clap = { version = "4.4", features = ["derive"] }
clickhouse = { version = "0.11.6" }
ctor = "0.2.6"
ctrlc = "3.4.4"
signal-hook = "0.3.17"
core-foundation = { version = "0.9.3" }
core-foundation-sys = "0.8.6"
derive_more = "0.99.17"
emojis = "0.6.1"
env_logger = "0.9"
exec = "0.3.1"
fork = "0.1.23"
futures = "0.3"
futures-batch = "0.6.1"
futures-lite = "1.13"
@@ -287,10 +298,12 @@ isahc = { version = "1.7.2", default-features = false, features = [
] }
itertools = "0.11.0"
lazy_static = "1.4.0"
libc = "0.2"
linkify = "0.10.0"
log = { version = "0.4.16", features = ["kv_unstable_serde"] }
nanoid = "0.4"
nix = "0.28"
once_cell = "1.19.0"
ordered-float = "2.1.1"
palette = { version = "0.7.5", default-features = false, features = ["std"] }
parking_lot = "0.12.1"
@@ -304,8 +317,9 @@ pulldown-cmark = { version = "0.10.0", default-features = false }
rand = "0.8.5"
refineable = { path = "./crates/refineable" }
regex = "1.5"
repair_json = "0.1.0"
rusqlite = { version = "0.29.0", features = ["blob", "array", "modern_sqlite"] }
rust-embed = { version = "8.0", features = ["include-exclude"] }
rust-embed = { version = "8.4", features = ["include-exclude"] }
schemars = "0.8"
semver = "1.0"
serde = { version = "1.0", features = ["derive", "rc"] }
@@ -379,10 +393,12 @@ wit-component = "0.201"
sys-locale = "0.3.1"
[workspace.dependencies.windows]
version = "0.53.0"
version = "0.56.0"
features = [
"implement",
"Foundation_Numerics",
"System",
"System_Threading",
"Wdk_System_SystemServices",
"Win32_Globalization",
"Win32_Graphics_Direct2D",
@@ -406,6 +422,7 @@ features = [
"Win32_System_SystemServices",
"Win32_System_Threading",
"Win32_System_Time",
"Win32_System_WinRT",
"Win32_UI_Controls",
"Win32_UI_HiDpi",
"Win32_UI_Input_Ime",


@@ -0,0 +1 @@
<svg width="15" height="15" viewBox="0 0 15 15" fill="none" xmlns="http://www.w3.org/2000/svg"><path d="M13.15 7.49998C13.15 4.66458 10.9402 1.84998 7.50002 1.84998C4.7217 1.84998 3.34851 3.90636 2.76336 4.99997H4.5C4.77614 4.99997 5 5.22383 5 5.49997C5 5.77611 4.77614 5.99997 4.5 5.99997H1.5C1.22386 5.99997 1 5.77611 1 5.49997V2.49997C1 2.22383 1.22386 1.99997 1.5 1.99997C1.77614 1.99997 2 2.22383 2 2.49997V4.31318C2.70453 3.07126 4.33406 0.849976 7.50002 0.849976C11.5628 0.849976 14.15 4.18537 14.15 7.49998C14.15 10.8146 11.5628 14.15 7.50002 14.15C5.55618 14.15 3.93778 13.3808 2.78548 12.2084C2.16852 11.5806 1.68668 10.839 1.35816 10.0407C1.25306 9.78536 1.37488 9.49315 1.63024 9.38806C1.8856 9.28296 2.17781 9.40478 2.2829 9.66014C2.56374 10.3425 2.97495 10.9745 3.4987 11.5074C4.47052 12.4963 5.83496 13.15 7.50002 13.15C10.9402 13.15 13.15 10.3354 13.15 7.49998ZM7 10V5.00001H8V10H7Z" fill="currentColor" fill-rule="evenodd" clip-rule="evenodd"></path></svg>


@@ -0,0 +1,5 @@
<svg width="12" height="12" viewBox="0 0 12 12" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M1.5 6C1.5 6.89002 1.76392 7.76004 2.25839 8.50007C2.75285 9.24009 3.45566 9.81686 4.27792 10.1575C5.10019 10.4981 6.00499 10.5872 6.87791 10.4135C7.75082 10.2399 8.55264 9.81132 9.18198 9.18198C9.81132 8.55264 10.2399 7.75082 10.4135 6.87791C10.5872 6.00499 10.4981 5.10019 10.1575 4.27792C9.81686 3.45566 9.24009 2.75285 8.50007 2.25839C7.76004 1.76392 6.89002 1.5 6 1.5C4.74198 1.50473 3.53448 1.99561 2.63 2.87L1.5 4" stroke="#919081" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M1.5 1.5V4H4" stroke="#919081" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M6 3.5V6L8 7" stroke="#919081" stroke-linecap="round" stroke-linejoin="round"/>
</svg>


@@ -0,0 +1,3 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M11.5 15C13.433 15 15 13.433 15 11.5C15 9.567 13.433 8 11.5 8C9.567 8 8 9.567 8 11.5C8 13.433 9.567 15 11.5 15Z" fill="black"/>
</svg>


@@ -0,0 +1,3 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path fill-rule="evenodd" clip-rule="evenodd" d="M13.4662 14.9152C13.5801 15.0291 13.7648 15.0291 13.8787 14.9152L14.9145 13.8793C15.0284 13.7654 15.0284 13.5807 14.9145 13.4667L12.9483 11.5004L14.9145 9.53392C15.0285 9.42004 15.0285 9.23533 14.9145 9.12137L13.8787 8.08547C13.7648 7.97154 13.5801 7.97154 13.4662 8.08547L11.5 10.0519L9.53376 8.08545C9.41988 7.97152 9.23517 7.97152 9.12124 8.08545L8.08543 9.12136C7.97152 9.23533 7.97152 9.42004 8.08543 9.53392L10.0517 11.5004L8.08545 13.4667C7.97155 13.5807 7.97155 13.7654 8.08545 13.8793L9.12126 14.9152C9.23517 15.0292 9.41988 15.0292 9.53376 14.9152L11.5 12.9489L13.4662 14.9152Z" fill="black"/>
</svg>


@@ -0,0 +1,3 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M3 4L13 12" stroke="black" stroke-width="2" stroke-linecap="round"/>
</svg>


@@ -19,9 +19,6 @@
"cmd-escape": "menu::Cancel",
"ctrl-escape": "menu::Cancel",
"ctrl-c": "menu::Cancel",
"shift-enter": "picker::UseSelectedQuery",
"alt-enter": ["picker::ConfirmInput", { "secondary": false }],
"cmd-alt-enter": ["picker::ConfirmInput", { "secondary": true }],
"cmd-shift-w": "workspace::CloseWindow",
"shift-escape": "workspace::ToggleZoom",
"cmd-o": "workspace::Open",
@@ -630,6 +627,14 @@
"ctrl-backspace": "tab_switcher::CloseSelectedItem"
}
},
{
"context": "Picker",
"bindings": {
"alt-e": "picker::UseSelectedQuery",
"alt-enter": ["picker::ConfirmInput", { "secondary": false }],
"cmd-alt-enter": ["picker::ConfirmInput", { "secondary": true }]
}
},
{
"context": "Terminal",
"bindings": {


@@ -1,4 +1,10 @@
[
{
"context": "ProjectPanel || Editor",
"bindings": {
"ctrl-6": "pane::AlternateFile"
}
},
{
"context": "Editor && VimControl && !VimWaiting && !menu",
"bindings": {
@@ -117,6 +123,9 @@
}
}
],
"m": ["vim::PushOperator", "Mark"],
"'": ["vim::PushOperator", { "Jump": { "line": true } }],
"`": ["vim::PushOperator", { "Jump": { "line": false } }],
";": "vim::RepeatFind",
",": "vim::RepeatFindReversed",
"ctrl-o": "pane::GoBack",
@@ -237,6 +246,9 @@
],
"g ]": "editor::GoToDiagnostic",
"g [": "editor::GoToPrevDiagnostic",
"g i": ["workspace::SendKeystrokes", "` ^ i"],
"g ,": "vim::ChangeListNewer",
"g ;": "vim::ChangeListOlder",
"shift-h": "vim::WindowTop",
"shift-m": "vim::WindowMiddle",
"shift-l": "vim::WindowBottom",


@@ -73,6 +73,17 @@
"drop_target_size": 0.2,
// Whether the cursor blinks in the editor.
"cursor_blink": true,
// How to highlight the current line in the editor.
//
// 1. Don't highlight the current line:
// "none"
// 2. Highlight the gutter area:
// "gutter"
// 3. Highlight the editor area:
// "line"
// 4. Highlight the full line (default):
// "all"
"current_line_highlight": "all",
// Whether to pop the completions menu while typing in an editor without
// explicitly requesting it.
"show_completions_on_input": true,
@@ -289,7 +300,8 @@
// 1. "gpt-3.5-turbo"
// 2. "gpt-4"
// 3. "gpt-4-turbo-preview"
"default_model": "gpt-4-turbo-preview"
// 4. "gpt-4o"
"default_model": "gpt-4o"
}
},
// Whether the screen sharing icon is shown in the os status bar.
@@ -316,6 +328,8 @@
"autosave": "off",
// Settings related to the editor's tab bar.
"tab_bar": {
// Whether or not to show the tab bar in the editor
"show": true,
// Whether or not to show the navigation history buttons.
"show_nav_history_buttons": true
},
@@ -347,6 +361,8 @@
// when saving it.
"ensure_final_newline_on_save": true,
// Whether or not to perform a buffer format before saving
//
// Keep in mind, if the autosave with delay is enabled, format_on_save will be ignored
"format_on_save": "on",
// How to perform a buffer format. This setting can take 4 values:
//
@@ -370,11 +386,13 @@
//
// 1. Do not soft wrap.
// "soft_wrap": "none",
// 2. Soft wrap lines that overflow the editor:
// 2. Prefer a single line generally, unless an overly long line is encountered.
// "soft_wrap": "prefer_line",
// 3. Soft wrap lines that overflow the editor:
// "soft_wrap": "editor_width",
// 3. Soft wrap lines at the preferred line length
// 4. Soft wrap lines at the preferred line length
// "soft_wrap": "preferred_line_length",
"soft_wrap": "none",
"soft_wrap": "prefer_line",
// The column at which to soft-wrap lines, for buffers where soft-wrap
// is enabled.
"preferred_line_length": 80,
@@ -598,7 +616,12 @@
"format_on_save": "off"
},
"Elixir": {
"language_servers": ["elixir-ls", "!next-ls", "!lexical", "..."]
"language_servers": [
"elixir-ls",
"!next-ls",
"!lexical",
"..."
]
},
"Gleam": {
"tab_size": 2
@@ -609,13 +632,24 @@
}
},
"HEEX": {
"language_servers": ["elixir-ls", "!next-ls", "!lexical", "..."]
"language_servers": [
"elixir-ls",
"!next-ls",
"!lexical",
"..."
]
},
"Make": {
"hard_tabs": true
},
"Markdown": {
"format_on_save": "off"
},
"Prisma": {
"tab_size": 2
},
"Ruby": {
"language_servers": ["solargraph", "!ruby-lsp", "..."]
}
},
// Zed's Prettier integration settings.


@@ -281,11 +281,14 @@ impl ActivityIndicator {
message: "Installing Zed update…".to_string(),
on_click: None,
},
AutoUpdateStatus::Updated => Content {
AutoUpdateStatus::Updated { binary_path } => Content {
icon: None,
message: "Click to restart and update Zed".to_string(),
on_click: Some(Arc::new(|_, cx| {
workspace::restart(&Default::default(), cx)
on_click: Some(Arc::new({
let restart = workspace::Restart {
binary_path: Some(binary_path.clone()),
};
move |_, cx| workspace::restart(&restart, cx)
})),
},
AutoUpdateStatus::Errored => Content {


@@ -5,6 +5,10 @@ edition = "2021"
publish = false
license = "AGPL-3.0-or-later"
[features]
default = []
schemars = ["dep:schemars"]
[lints]
workspace = true
@@ -14,9 +18,11 @@ path = "src/anthropic.rs"
[dependencies]
anyhow.workspace = true
futures.workspace = true
http.workspace = true
isahc.workspace = true
schemars = { workspace = true, optional = true }
serde.workspace = true
serde_json.workspace = true
util.workspace = true
[dev-dependencies]
tokio.workspace = true


@@ -1,17 +1,21 @@
use anyhow::{anyhow, Result};
use futures::{io::BufReader, stream::BoxStream, AsyncBufReadExt, AsyncReadExt, StreamExt};
use http::{AsyncBody, HttpClient, Method, Request as HttpRequest};
use isahc::config::Configurable;
use serde::{Deserialize, Serialize};
use std::{convert::TryFrom, sync::Arc};
use util::http::{AsyncBody, HttpClient, Method, Request as HttpRequest};
use std::{convert::TryFrom, time::Duration};
pub const ANTHROPIC_API_URL: &'static str = "https://api.anthropic.com";
#[cfg_attr(feature = "schemars", derive(schemars::JsonSchema))]
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq)]
pub enum Model {
#[default]
#[serde(rename = "claude-3-opus-20240229")]
#[serde(alias = "claude-3-opus", rename = "claude-3-opus-20240229")]
Claude3Opus,
#[serde(rename = "claude-3-sonnet-20240229")]
#[serde(alias = "claude-3-sonnet", rename = "claude-3-sonnet-20240229")]
Claude3Sonnet,
#[serde(rename = "claude-3-haiku-20240307")]
#[serde(alias = "claude-3-haiku", rename = "claude-3-haiku-20240307")]
Claude3Haiku,
}
@@ -28,6 +32,14 @@ impl Model {
}
}
pub fn id(&self) -> &'static str {
match self {
Model::Claude3Opus => "claude-3-opus-20240229",
Model::Claude3Sonnet => "claude-3-sonnet-20240229",
Model::Claude3Haiku => "claude-3-haiku-20240307",
}
}
pub fn display_name(&self) -> &'static str {
match self {
Self::Claude3Opus => "Claude 3 Opus",
@@ -141,20 +153,24 @@ pub enum TextDelta {
}
pub async fn stream_completion(
client: Arc<dyn HttpClient>,
client: &dyn HttpClient,
api_url: &str,
api_key: &str,
request: Request,
low_speed_timeout: Option<Duration>,
) -> Result<BoxStream<'static, Result<ResponseEvent>>> {
let uri = format!("{api_url}/v1/messages");
let request = HttpRequest::builder()
let mut request_builder = HttpRequest::builder()
.method(Method::POST)
.uri(uri)
.header("Anthropic-Version", "2023-06-01")
.header("Anthropic-Beta", "messages-2023-12-15")
.header("Anthropic-Beta", "tools-2024-04-04")
.header("X-Api-Key", api_key)
.header("Content-Type", "application/json")
.body(AsyncBody::from(serde_json::to_string(&request)?))?;
.header("Content-Type", "application/json");
if let Some(low_speed_timeout) = low_speed_timeout {
request_builder = request_builder.low_speed_timeout(100, low_speed_timeout);
}
let request = request_builder.body(AsyncBody::from(serde_json::to_string(&request)?))?;
let mut response = client.send(request).await?;
if response.status().is_success() {
let reader = BufReader::new(response.into_body());
@@ -196,7 +212,7 @@ pub async fn stream_completion(
// #[cfg(test)]
// mod tests {
// use super::*;
// use util::http::IsahcHttpClient;
// use http::IsahcHttpClient;
// #[tokio::test]
// async fn stream_completion_success() {


@@ -11,6 +11,7 @@ doctest = false
[dependencies]
anyhow.workspace = true
anthropic = { workspace = true, features = ["schemars"] }
chrono.workspace = true
client.workspace = true
collections.workspace = true
@@ -20,6 +21,7 @@ file_icons.workspace = true
fs.workspace = true
futures.workspace = true
gpui.workspace = true
http.workspace = true
indoc.workspace = true
language.workspace = true
log.workspace = true


@@ -6,11 +6,8 @@ mod prompts;
mod saved_conversation;
mod streaming_diff;
mod embedded_scope;
pub use assistant_panel::AssistantPanel;
use assistant_settings::{AssistantSettings, OpenAiModel, ZedDotDevModel};
use chrono::{DateTime, Local};
use assistant_settings::{AnthropicModel, AssistantSettings, OpenAiModel, ZedDotDevModel};
use client::{proto, Client};
use command_palette_hooks::CommandPaletteFilter;
pub(crate) use completion_provider::*;
@@ -26,7 +23,6 @@ use std::{
actions!(
assistant,
[
NewConversation,
Assist,
Split,
CycleMessageRole,
@@ -35,6 +31,7 @@ actions!(
ResetKey,
InlineAssist,
ToggleIncludeConversation,
ToggleHistory,
]
);
@@ -75,6 +72,7 @@ impl Display for Role {
pub enum LanguageModel {
ZedDotDev(ZedDotDevModel),
OpenAi(OpenAiModel),
Anthropic(AnthropicModel),
}
impl Default for LanguageModel {
@@ -87,20 +85,23 @@ impl LanguageModel {
pub fn telemetry_id(&self) -> String {
match self {
LanguageModel::OpenAi(model) => format!("openai/{}", model.id()),
LanguageModel::Anthropic(model) => format!("anthropic/{}", model.id()),
LanguageModel::ZedDotDev(model) => format!("zed.dev/{}", model.id()),
}
}
pub fn display_name(&self) -> String {
match self {
LanguageModel::OpenAi(model) => format!("openai/{}", model.display_name()),
LanguageModel::ZedDotDev(model) => format!("zed.dev/{}", model.display_name()),
LanguageModel::OpenAi(model) => model.display_name().into(),
LanguageModel::Anthropic(model) => model.display_name().into(),
LanguageModel::ZedDotDev(model) => model.display_name().into(),
}
}
pub fn max_token_count(&self) -> usize {
match self {
LanguageModel::OpenAi(model) => model.max_token_count(),
LanguageModel::Anthropic(model) => model.max_token_count(),
LanguageModel::ZedDotDev(model) => model.max_token_count(),
}
}
@@ -108,6 +109,7 @@ impl LanguageModel {
pub fn id(&self) -> &str {
match self {
LanguageModel::OpenAi(model) => model.id(),
LanguageModel::Anthropic(model) => model.id(),
LanguageModel::ZedDotDev(model) => model.id(),
}
}
@@ -178,7 +180,6 @@ pub struct LanguageModelChoiceDelta {
#[derive(Clone, Debug, Serialize, Deserialize)]
struct MessageMetadata {
role: Role,
sent_at: DateTime<Local>,
status: MessageStatus,
}

File diff suppressed because it is too large.

@@ -1,5 +1,6 @@
use std::fmt;
pub use anthropic::Model as AnthropicModel;
use gpui::Pixels;
pub use open_ai::Model as OpenAiModel;
use schemars::{
@@ -16,8 +17,9 @@ use settings::{Settings, SettingsSources};
pub enum ZedDotDevModel {
Gpt3Point5Turbo,
Gpt4,
#[default]
Gpt4Turbo,
#[default]
Gpt4Omni,
Claude3Opus,
Claude3Sonnet,
Claude3Haiku,
@@ -55,6 +57,7 @@ impl<'de> Deserialize<'de> for ZedDotDevModel {
"gpt-3.5-turbo" => Ok(ZedDotDevModel::Gpt3Point5Turbo),
"gpt-4" => Ok(ZedDotDevModel::Gpt4),
"gpt-4-turbo-preview" => Ok(ZedDotDevModel::Gpt4Turbo),
"gpt-4o" => Ok(ZedDotDevModel::Gpt4Omni),
_ => Ok(ZedDotDevModel::Custom(value.to_owned())),
}
}
@@ -74,6 +77,7 @@ impl JsonSchema for ZedDotDevModel {
"gpt-3.5-turbo".to_owned(),
"gpt-4".to_owned(),
"gpt-4-turbo-preview".to_owned(),
"gpt-4o".to_owned(),
];
Schema::Object(SchemaObject {
instance_type: Some(InstanceType::String.into()),
@@ -100,6 +104,7 @@ impl ZedDotDevModel {
Self::Gpt3Point5Turbo => "gpt-3.5-turbo",
Self::Gpt4 => "gpt-4",
Self::Gpt4Turbo => "gpt-4-turbo-preview",
Self::Gpt4Omni => "gpt-4o",
Self::Claude3Opus => "claude-3-opus",
Self::Claude3Sonnet => "claude-3-sonnet",
Self::Claude3Haiku => "claude-3-haiku",
@@ -112,6 +117,7 @@ impl ZedDotDevModel {
Self::Gpt3Point5Turbo => "GPT 3.5 Turbo",
Self::Gpt4 => "GPT 4",
Self::Gpt4Turbo => "GPT 4 Turbo",
Self::Gpt4Omni => "GPT 4 Omni",
Self::Claude3Opus => "Claude 3 Opus",
Self::Claude3Sonnet => "Claude 3 Sonnet",
Self::Claude3Haiku => "Claude 3 Haiku",
@@ -123,7 +129,7 @@ impl ZedDotDevModel {
match self {
Self::Gpt3Point5Turbo => 2048,
Self::Gpt4 => 4096,
Self::Gpt4Turbo => 128000,
Self::Gpt4Turbo | Self::Gpt4Omni => 128000,
Self::Claude3Opus | Self::Claude3Sonnet | Self::Claude3Haiku => 200000,
Self::Custom(_) => 4096, // TODO: Make this configurable
}
@@ -153,6 +159,17 @@ pub enum AssistantProvider {
default_model: OpenAiModel,
#[serde(default = "open_ai_url")]
api_url: String,
#[serde(default)]
low_speed_timeout_in_seconds: Option<u64>,
},
#[serde(rename = "anthropic")]
Anthropic {
#[serde(default)]
default_model: AnthropicModel,
#[serde(default = "anthropic_api_url")]
api_url: String,
#[serde(default)]
low_speed_timeout_in_seconds: Option<u64>,
},
}
@@ -165,7 +182,11 @@ impl Default for AssistantProvider {
}
fn open_ai_url() -> String {
"https://api.openai.com/v1".into()
open_ai::OPEN_AI_API_URL.to_string()
}
fn anthropic_api_url() -> String {
anthropic::ANTHROPIC_API_URL.to_string()
}
#[derive(Default, Debug, Deserialize, Serialize)]
@@ -222,12 +243,14 @@ impl AssistantSettingsContent {
Some(AssistantProvider::OpenAi {
default_model: settings.default_open_ai_model.clone().unwrap_or_default(),
api_url: open_ai_api_url.clone(),
low_speed_timeout_in_seconds: None,
})
} else {
settings.default_open_ai_model.clone().map(|open_ai_model| {
AssistantProvider::OpenAi {
default_model: open_ai_model,
api_url: open_ai_url(),
low_speed_timeout_in_seconds: None,
}
})
},
@@ -364,14 +387,17 @@ impl Settings for AssistantSettings {
AssistantProvider::OpenAi {
default_model,
api_url,
low_speed_timeout_in_seconds,
},
AssistantProvider::OpenAi {
default_model: default_model_override,
api_url: api_url_override,
low_speed_timeout_in_seconds: low_speed_timeout_in_seconds_override,
},
) => {
*default_model = default_model_override;
*api_url = api_url_override;
*low_speed_timeout_in_seconds = low_speed_timeout_in_seconds_override;
}
(merged, provider_override) => {
*merged = provider_override;
@@ -407,8 +433,9 @@ mod tests {
assert_eq!(
AssistantSettings::get_global(cx).provider,
AssistantProvider::OpenAi {
default_model: OpenAiModel::FourTurbo,
api_url: open_ai_url()
default_model: OpenAiModel::FourOmni,
api_url: open_ai_url(),
low_speed_timeout_in_seconds: None,
}
);
@@ -428,8 +455,9 @@ mod tests {
assert_eq!(
AssistantSettings::get_global(cx).provider,
AssistantProvider::OpenAi {
default_model: OpenAiModel::FourTurbo,
api_url: "test-url".into()
default_model: OpenAiModel::FourOmni,
api_url: "test-url".into(),
low_speed_timeout_in_seconds: None,
}
);
cx.update_global::<SettingsStore, _>(|store, cx| {
@@ -448,7 +476,8 @@ mod tests {
AssistantSettings::get_global(cx).provider,
AssistantProvider::OpenAi {
default_model: OpenAiModel::Four,
api_url: open_ai_url()
api_url: open_ai_url(),
low_speed_timeout_in_seconds: None,
}
);


@@ -3,11 +3,13 @@ use crate::{
CompletionProvider, LanguageModelRequest,
};
use anyhow::Result;
use client::telemetry::Telemetry;
use editor::{Anchor, MultiBuffer, MultiBufferSnapshot, ToOffset, ToPoint};
use futures::{channel::mpsc, SinkExt, Stream, StreamExt};
use gpui::{EventEmitter, Model, ModelContext, Task};
use language::{Rope, TransactionId};
use std::{cmp, future, ops::Range};
use multi_buffer::MultiBufferRow;
use std::{cmp, future, ops::Range, sync::Arc, time::Instant};
pub enum Event {
Finished,
@@ -29,13 +31,19 @@ pub struct Codegen {
error: Option<anyhow::Error>,
generation: Task<()>,
idle: bool,
telemetry: Option<Arc<Telemetry>>,
_subscription: gpui::Subscription,
}
impl EventEmitter<Event> for Codegen {}
impl Codegen {
pub fn new(buffer: Model<MultiBuffer>, kind: CodegenKind, cx: &mut ModelContext<Self>) -> Self {
pub fn new(
buffer: Model<MultiBuffer>,
kind: CodegenKind,
telemetry: Option<Arc<Telemetry>>,
cx: &mut ModelContext<Self>,
) -> Self {
let snapshot = buffer.read(cx).snapshot(cx);
Self {
buffer: buffer.clone(),
@@ -46,6 +54,7 @@ impl Codegen {
error: Default::default(),
idle: true,
generation: Task::ready(()),
telemetry,
_subscription: cx.subscribe(&buffer, Self::handle_buffer_event),
}
}
@@ -100,9 +109,11 @@ impl Codegen {
.suggested_indents(selection_start.row..selection_start.row + 1, cx)
.into_values()
.next()
.unwrap_or_else(|| snapshot.indent_size_for_line(selection_start.row));
.unwrap_or_else(|| snapshot.indent_size_for_line(MultiBufferRow(selection_start.row)));
let model_telemetry_id = prompt.model.telemetry_id();
let response = CompletionProvider::global(cx).complete(prompt);
let telemetry = self.telemetry.clone();
self.generation = cx.spawn(|this, mut cx| {
async move {
let generate = async {
@@ -110,68 +121,89 @@ impl Codegen {
let (mut hunks_tx, mut hunks_rx) = mpsc::channel(1);
let diff = cx.background_executor().spawn(async move {
let chunks = strip_invalid_spans_from_codeblock(response.await?);
futures::pin_mut!(chunks);
let mut diff = StreamingDiff::new(selected_text.to_string());
let mut response_latency = None;
let request_start = Instant::now();
let diff = async {
let chunks = strip_invalid_spans_from_codeblock(response.await?);
futures::pin_mut!(chunks);
let mut diff = StreamingDiff::new(selected_text.to_string());
let mut new_text = String::new();
let mut base_indent = None;
let mut line_indent = None;
let mut first_line = true;
let mut new_text = String::new();
let mut base_indent = None;
let mut line_indent = None;
let mut first_line = true;
while let Some(chunk) = chunks.next().await {
let chunk = chunk?;
while let Some(chunk) = chunks.next().await {
if response_latency.is_none() {
response_latency = Some(request_start.elapsed());
}
let chunk = chunk?;
let mut lines = chunk.split('\n').peekable();
while let Some(line) = lines.next() {
new_text.push_str(line);
if line_indent.is_none() {
if let Some(non_whitespace_ch_ix) =
new_text.find(|ch: char| !ch.is_whitespace())
{
line_indent = Some(non_whitespace_ch_ix);
base_indent = base_indent.or(line_indent);
let mut lines = chunk.split('\n').peekable();
while let Some(line) = lines.next() {
new_text.push_str(line);
if line_indent.is_none() {
if let Some(non_whitespace_ch_ix) =
new_text.find(|ch: char| !ch.is_whitespace())
{
line_indent = Some(non_whitespace_ch_ix);
base_indent = base_indent.or(line_indent);
let line_indent = line_indent.unwrap();
let base_indent = base_indent.unwrap();
let indent_delta = line_indent as i32 - base_indent as i32;
let mut corrected_indent_len = cmp::max(
0,
suggested_line_indent.len as i32 + indent_delta,
)
as usize;
if first_line {
corrected_indent_len = corrected_indent_len
.saturating_sub(selection_start.column as usize);
let line_indent = line_indent.unwrap();
let base_indent = base_indent.unwrap();
let indent_delta =
line_indent as i32 - base_indent as i32;
let mut corrected_indent_len = cmp::max(
0,
suggested_line_indent.len as i32 + indent_delta,
)
as usize;
if first_line {
corrected_indent_len = corrected_indent_len
.saturating_sub(
selection_start.column as usize,
);
}
let indent_char = suggested_line_indent.char();
let mut indent_buffer = [0; 4];
let indent_str =
indent_char.encode_utf8(&mut indent_buffer);
new_text.replace_range(
..line_indent,
&indent_str.repeat(corrected_indent_len),
);
}
}
let indent_char = suggested_line_indent.char();
let mut indent_buffer = [0; 4];
let indent_str =
indent_char.encode_utf8(&mut indent_buffer);
new_text.replace_range(
..line_indent,
&indent_str.repeat(corrected_indent_len),
);
if line_indent.is_some() {
hunks_tx.send(diff.push_new(&new_text)).await?;
new_text.clear();
}
if lines.peek().is_some() {
hunks_tx.send(diff.push_new("\n")).await?;
line_indent = None;
first_line = false;
}
}
if line_indent.is_some() {
hunks_tx.send(diff.push_new(&new_text)).await?;
new_text.clear();
}
if lines.peek().is_some() {
hunks_tx.send(diff.push_new("\n")).await?;
line_indent = None;
first_line = false;
}
}
}
hunks_tx.send(diff.push_new(&new_text)).await?;
hunks_tx.send(diff.finish()).await?;
hunks_tx.send(diff.push_new(&new_text)).await?;
hunks_tx.send(diff.finish()).await?;
anyhow::Ok(())
anyhow::Ok(())
};
let error_message = diff.await.err().map(|error| error.to_string());
if let Some(telemetry) = telemetry {
telemetry.report_assistant_event(
None,
telemetry_events::AssistantKind::Inline,
model_telemetry_id,
response_latency,
error_message,
);
}
});
while let Some(hunks) = hunks_rx.next().await {
@@ -234,7 +266,8 @@ impl Codegen {
})?;
}
diff.await?;
diff.await;
anyhow::Ok(())
};
@@ -395,8 +428,9 @@ mod tests {
let snapshot = buffer.snapshot(cx);
snapshot.anchor_before(Point::new(1, 0))..snapshot.anchor_after(Point::new(4, 5))
});
let codegen =
cx.new_model(|cx| Codegen::new(buffer.clone(), CodegenKind::Transform { range }, cx));
let codegen = cx.new_model(|cx| {
Codegen::new(buffer.clone(), CodegenKind::Transform { range }, None, cx)
});
let request = LanguageModelRequest::default();
codegen.update(cx, |codegen, cx| codegen.start(request, cx));
@@ -453,8 +487,9 @@ mod tests {
let snapshot = buffer.snapshot(cx);
snapshot.anchor_before(Point::new(1, 6))
});
let codegen =
cx.new_model(|cx| Codegen::new(buffer.clone(), CodegenKind::Generate { position }, cx));
let codegen = cx.new_model(|cx| {
Codegen::new(buffer.clone(), CodegenKind::Generate { position }, None, cx)
});
let request = LanguageModelRequest::default();
codegen.update(cx, |codegen, cx| codegen.start(request, cx));
@@ -511,8 +546,9 @@ mod tests {
let snapshot = buffer.snapshot(cx);
snapshot.anchor_before(Point::new(1, 2))
});
let codegen =
cx.new_model(|cx| Codegen::new(buffer.clone(), CodegenKind::Generate { position }, cx));
let codegen = cx.new_model(|cx| {
Codegen::new(buffer.clone(), CodegenKind::Generate { position }, None, cx)
});
let request = LanguageModelRequest::default();
codegen.update(cx, |codegen, cx| codegen.start(request, cx));


@@ -1,8 +1,10 @@
mod anthropic;
#[cfg(test)]
mod fake;
mod open_ai;
mod zed;
pub use anthropic::*;
#[cfg(test)]
pub use fake::*;
pub use open_ai::*;
@@ -18,6 +20,7 @@ use futures::{future::BoxFuture, stream::BoxStream};
use gpui::{AnyView, AppContext, BorrowAppContext, Task, WindowContext};
use settings::{Settings, SettingsStore};
use std::sync::Arc;
use std::time::Duration;
pub fn init(client: Arc<Client>, cx: &mut AppContext) {
let mut settings_version = 0;
@@ -33,10 +36,23 @@ pub fn init(client: Arc<Client>, cx: &mut AppContext) {
AssistantProvider::OpenAi {
default_model,
api_url,
low_speed_timeout_in_seconds,
} => CompletionProvider::OpenAi(OpenAiCompletionProvider::new(
default_model.clone(),
api_url.clone(),
client.http_client(),
low_speed_timeout_in_seconds.map(Duration::from_secs),
settings_version,
)),
AssistantProvider::Anthropic {
default_model,
api_url,
low_speed_timeout_in_seconds,
} => CompletionProvider::Anthropic(AnthropicCompletionProvider::new(
default_model.clone(),
api_url.clone(),
client.http_client(),
low_speed_timeout_in_seconds.map(Duration::from_secs),
settings_version,
)),
};
@@ -51,9 +67,30 @@ pub fn init(client: Arc<Client>, cx: &mut AppContext) {
AssistantProvider::OpenAi {
default_model,
api_url,
low_speed_timeout_in_seconds,
},
) => {
provider.update(default_model.clone(), api_url.clone(), settings_version);
provider.update(
default_model.clone(),
api_url.clone(),
low_speed_timeout_in_seconds.map(Duration::from_secs),
settings_version,
);
}
(
CompletionProvider::Anthropic(provider),
AssistantProvider::Anthropic {
default_model,
api_url,
low_speed_timeout_in_seconds,
},
) => {
provider.update(
default_model.clone(),
api_url.clone(),
low_speed_timeout_in_seconds.map(Duration::from_secs),
settings_version,
);
}
(
CompletionProvider::ZedDotDev(provider),
@@ -61,7 +98,7 @@ pub fn init(client: Arc<Client>, cx: &mut AppContext) {
) => {
provider.update(default_model.clone(), settings_version);
}
(CompletionProvider::OpenAi(_), AssistantProvider::ZedDotDev { default_model }) => {
(_, AssistantProvider::ZedDotDev { default_model }) => {
*provider = CompletionProvider::ZedDotDev(ZedDotDevCompletionProvider::new(
default_model.clone(),
client.clone(),
@@ -70,21 +107,37 @@ pub fn init(client: Arc<Client>, cx: &mut AppContext) {
));
}
(
CompletionProvider::ZedDotDev(_),
_,
AssistantProvider::OpenAi {
default_model,
api_url,
low_speed_timeout_in_seconds,
},
) => {
*provider = CompletionProvider::OpenAi(OpenAiCompletionProvider::new(
default_model.clone(),
api_url.clone(),
client.http_client(),
low_speed_timeout_in_seconds.map(Duration::from_secs),
settings_version,
));
}
(
_,
AssistantProvider::Anthropic {
default_model,
api_url,
low_speed_timeout_in_seconds,
},
) => {
*provider = CompletionProvider::Anthropic(AnthropicCompletionProvider::new(
default_model.clone(),
api_url.clone(),
client.http_client(),
low_speed_timeout_in_seconds.map(Duration::from_secs),
settings_version,
));
}
#[cfg(test)]
(CompletionProvider::Fake(_), _) => unimplemented!(),
}
})
})
@@ -93,6 +146,7 @@ pub fn init(client: Arc<Client>, cx: &mut AppContext) {
pub enum CompletionProvider {
OpenAi(OpenAiCompletionProvider),
Anthropic(AnthropicCompletionProvider),
ZedDotDev(ZedDotDevCompletionProvider),
#[cfg(test)]
Fake(FakeCompletionProvider),
@@ -108,6 +162,7 @@ impl CompletionProvider {
pub fn settings_version(&self) -> usize {
match self {
CompletionProvider::OpenAi(provider) => provider.settings_version(),
CompletionProvider::Anthropic(provider) => provider.settings_version(),
CompletionProvider::ZedDotDev(provider) => provider.settings_version(),
#[cfg(test)]
CompletionProvider::Fake(_) => unimplemented!(),
@@ -117,6 +172,7 @@ impl CompletionProvider {
pub fn is_authenticated(&self) -> bool {
match self {
CompletionProvider::OpenAi(provider) => provider.is_authenticated(),
CompletionProvider::Anthropic(provider) => provider.is_authenticated(),
CompletionProvider::ZedDotDev(provider) => provider.is_authenticated(),
#[cfg(test)]
CompletionProvider::Fake(_) => true,
@@ -126,6 +182,7 @@ impl CompletionProvider {
pub fn authenticate(&self, cx: &AppContext) -> Task<Result<()>> {
match self {
CompletionProvider::OpenAi(provider) => provider.authenticate(cx),
CompletionProvider::Anthropic(provider) => provider.authenticate(cx),
CompletionProvider::ZedDotDev(provider) => provider.authenticate(cx),
#[cfg(test)]
CompletionProvider::Fake(_) => Task::ready(Ok(())),
@@ -135,6 +192,7 @@ impl CompletionProvider {
pub fn authentication_prompt(&self, cx: &mut WindowContext) -> AnyView {
match self {
CompletionProvider::OpenAi(provider) => provider.authentication_prompt(cx),
CompletionProvider::Anthropic(provider) => provider.authentication_prompt(cx),
CompletionProvider::ZedDotDev(provider) => provider.authentication_prompt(cx),
#[cfg(test)]
CompletionProvider::Fake(_) => unimplemented!(),
@@ -144,6 +202,7 @@ impl CompletionProvider {
pub fn reset_credentials(&self, cx: &AppContext) -> Task<Result<()>> {
match self {
CompletionProvider::OpenAi(provider) => provider.reset_credentials(cx),
CompletionProvider::Anthropic(provider) => provider.reset_credentials(cx),
CompletionProvider::ZedDotDev(_) => Task::ready(Ok(())),
#[cfg(test)]
CompletionProvider::Fake(_) => Task::ready(Ok(())),
@@ -153,6 +212,9 @@ impl CompletionProvider {
pub fn default_model(&self) -> LanguageModel {
match self {
CompletionProvider::OpenAi(provider) => LanguageModel::OpenAi(provider.default_model()),
CompletionProvider::Anthropic(provider) => {
LanguageModel::Anthropic(provider.default_model())
}
CompletionProvider::ZedDotDev(provider) => {
LanguageModel::ZedDotDev(provider.default_model())
}
@@ -168,6 +230,7 @@ impl CompletionProvider {
) -> BoxFuture<'static, Result<usize>> {
match self {
CompletionProvider::OpenAi(provider) => provider.count_tokens(request, cx),
CompletionProvider::Anthropic(provider) => provider.count_tokens(request, cx),
CompletionProvider::ZedDotDev(provider) => provider.count_tokens(request, cx),
#[cfg(test)]
CompletionProvider::Fake(_) => unimplemented!(),
@@ -180,6 +243,7 @@ impl CompletionProvider {
) -> BoxFuture<'static, Result<BoxStream<'static, Result<String>>>> {
match self {
CompletionProvider::OpenAi(provider) => provider.complete(request),
CompletionProvider::Anthropic(provider) => provider.complete(request),
CompletionProvider::ZedDotDev(provider) => provider.complete(request),
#[cfg(test)]
CompletionProvider::Fake(provider) => provider.complete(),


@@ -0,0 +1,327 @@
use crate::count_open_ai_tokens;
use crate::{
assistant_settings::AnthropicModel, CompletionProvider, LanguageModel, LanguageModelRequest,
Role,
};
use anthropic::{stream_completion, Request, RequestMessage, Role as AnthropicRole};
use anyhow::{anyhow, Result};
use editor::{Editor, EditorElement, EditorStyle};
use futures::{future::BoxFuture, stream::BoxStream, FutureExt, StreamExt};
use gpui::{AnyView, AppContext, FontStyle, FontWeight, Task, TextStyle, View, WhiteSpace};
use http::HttpClient;
use settings::Settings;
use std::time::Duration;
use std::{env, sync::Arc};
use theme::ThemeSettings;
use ui::prelude::*;
use util::ResultExt;
pub struct AnthropicCompletionProvider {
api_key: Option<String>,
api_url: String,
default_model: AnthropicModel,
http_client: Arc<dyn HttpClient>,
low_speed_timeout: Option<Duration>,
settings_version: usize,
}
impl AnthropicCompletionProvider {
pub fn new(
default_model: AnthropicModel,
api_url: String,
http_client: Arc<dyn HttpClient>,
low_speed_timeout: Option<Duration>,
settings_version: usize,
) -> Self {
Self {
api_key: None,
api_url,
default_model,
http_client,
low_speed_timeout,
settings_version,
}
}
pub fn update(
&mut self,
default_model: AnthropicModel,
api_url: String,
low_speed_timeout: Option<Duration>,
settings_version: usize,
) {
self.default_model = default_model;
self.api_url = api_url;
self.low_speed_timeout = low_speed_timeout;
self.settings_version = settings_version;
}
pub fn settings_version(&self) -> usize {
self.settings_version
}
pub fn is_authenticated(&self) -> bool {
self.api_key.is_some()
}
pub fn authenticate(&self, cx: &AppContext) -> Task<Result<()>> {
if self.is_authenticated() {
Task::ready(Ok(()))
} else {
let api_url = self.api_url.clone();
cx.spawn(|mut cx| async move {
let api_key = if let Ok(api_key) = env::var("ANTHROPIC_API_KEY") {
api_key
} else {
let (_, api_key) = cx
.update(|cx| cx.read_credentials(&api_url))?
.await?
.ok_or_else(|| anyhow!("credentials not found"))?;
String::from_utf8(api_key)?
};
cx.update_global::<CompletionProvider, _>(|provider, _cx| {
if let CompletionProvider::Anthropic(provider) = provider {
provider.api_key = Some(api_key);
}
})
})
}
}
pub fn reset_credentials(&self, cx: &AppContext) -> Task<Result<()>> {
let delete_credentials = cx.delete_credentials(&self.api_url);
cx.spawn(|mut cx| async move {
delete_credentials.await.log_err();
cx.update_global::<CompletionProvider, _>(|provider, _cx| {
if let CompletionProvider::Anthropic(provider) = provider {
provider.api_key = None;
}
})
})
}
pub fn authentication_prompt(&self, cx: &mut WindowContext) -> AnyView {
cx.new_view(|cx| AuthenticationPrompt::new(self.api_url.clone(), cx))
.into()
}
pub fn default_model(&self) -> AnthropicModel {
self.default_model.clone()
}
pub fn count_tokens(
&self,
request: LanguageModelRequest,
cx: &AppContext,
) -> BoxFuture<'static, Result<usize>> {
count_open_ai_tokens(request, cx.background_executor())
}
pub fn complete(
&self,
request: LanguageModelRequest,
) -> BoxFuture<'static, Result<BoxStream<'static, Result<String>>>> {
let request = self.to_anthropic_request(request);
let http_client = self.http_client.clone();
let api_key = self.api_key.clone();
let api_url = self.api_url.clone();
let low_speed_timeout = self.low_speed_timeout;
async move {
let api_key = api_key.ok_or_else(|| anyhow!("missing api key"))?;
let request = stream_completion(
http_client.as_ref(),
&api_url,
&api_key,
request,
low_speed_timeout,
);
let response = request.await?;
let stream = response
.filter_map(|response| async move {
match response {
Ok(response) => match response {
anthropic::ResponseEvent::ContentBlockStart {
content_block, ..
} => match content_block {
anthropic::ContentBlock::Text { text } => Some(Ok(text)),
},
anthropic::ResponseEvent::ContentBlockDelta { delta, .. } => {
match delta {
anthropic::TextDelta::TextDelta { text } => Some(Ok(text)),
}
}
_ => None,
},
Err(error) => Some(Err(error)),
}
})
.boxed();
Ok(stream)
}
.boxed()
}
fn to_anthropic_request(&self, request: LanguageModelRequest) -> Request {
let model = match request.model {
LanguageModel::Anthropic(model) => model,
_ => self.default_model(),
};
let mut system_message = String::new();
let mut messages: Vec<RequestMessage> = Vec::new();
for message in request.messages {
if message.content.is_empty() {
continue;
}
match message.role {
Role::User | Role::Assistant => {
let role = match message.role {
Role::User => AnthropicRole::User,
Role::Assistant => AnthropicRole::Assistant,
_ => unreachable!(),
};
if let Some(last_message) = messages.last_mut() {
if last_message.role == role {
last_message.content.push_str("\n\n");
last_message.content.push_str(&message.content);
continue;
}
}
messages.push(RequestMessage {
role,
content: message.content,
});
}
Role::System => {
if !system_message.is_empty() {
system_message.push_str("\n\n");
}
system_message.push_str(&message.content);
}
}
}
Request {
model,
messages,
stream: true,
system: system_message,
max_tokens: 4092,
}
}
}
struct AuthenticationPrompt {
api_key: View<Editor>,
api_url: String,
}
impl AuthenticationPrompt {
fn new(api_url: String, cx: &mut WindowContext) -> Self {
Self {
api_key: cx.new_view(|cx| {
let mut editor = Editor::single_line(cx);
editor.set_placeholder_text(
"sk-000000000000000000000000000000000000000000000000",
cx,
);
editor
}),
api_url,
}
}
fn save_api_key(&mut self, _: &menu::Confirm, cx: &mut ViewContext<Self>) {
let api_key = self.api_key.read(cx).text(cx);
if api_key.is_empty() {
return;
}
let write_credentials = cx.write_credentials(&self.api_url, "Bearer", api_key.as_bytes());
cx.spawn(|_, mut cx| async move {
write_credentials.await?;
cx.update_global::<CompletionProvider, _>(|provider, _cx| {
if let CompletionProvider::Anthropic(provider) = provider {
provider.api_key = Some(api_key);
}
})
})
.detach_and_log_err(cx);
}
fn render_api_key_editor(&self, cx: &mut ViewContext<Self>) -> impl IntoElement {
let settings = ThemeSettings::get_global(cx);
let text_style = TextStyle {
color: cx.theme().colors().text,
font_family: settings.ui_font.family.clone(),
font_features: settings.ui_font.features.clone(),
font_size: rems(0.875).into(),
font_weight: FontWeight::NORMAL,
font_style: FontStyle::Normal,
line_height: relative(1.3),
background_color: None,
underline: None,
strikethrough: None,
white_space: WhiteSpace::Normal,
};
EditorElement::new(
&self.api_key,
EditorStyle {
background: cx.theme().colors().editor_background,
local_player: cx.theme().players().local(),
text: text_style,
..Default::default()
},
)
}
}
impl Render for AuthenticationPrompt {
fn render(&mut self, cx: &mut ViewContext<Self>) -> impl IntoElement {
const INSTRUCTIONS: [&str; 4] = [
"To use the assistant panel or inline assistant, you need to add your Anthropic API key.",
"You can create an API key at: https://console.anthropic.com/settings/keys",
"",
"Paste your Anthropic API key below and hit enter to use the assistant:",
];
v_flex()
.p_4()
.size_full()
.on_action(cx.listener(Self::save_api_key))
.children(
INSTRUCTIONS.map(|instruction| Label::new(instruction).size(LabelSize::Small)),
)
.child(
h_flex()
.w_full()
.my_2()
.px_2()
.py_1()
.bg(cx.theme().colors().editor_background)
.rounded_md()
.child(self.render_api_key_editor(cx)),
)
.child(
Label::new(
"You can also assign the ANTHROPIC_API_KEY environment variable and restart Zed.",
)
.size(LabelSize::Small),
)
.child(
h_flex()
.gap_2()
.child(Label::new("Click on").size(LabelSize::Small))
.child(Icon::new(IconName::Ai).size(IconSize::XSmall))
.child(
Label::new("in the status bar to close this panel.").size(LabelSize::Small),
),
)
.into_any()
}
}


@@ -1,3 +1,4 @@
use crate::assistant_settings::ZedDotDevModel;
use crate::{
assistant_settings::OpenAiModel, CompletionProvider, LanguageModel, LanguageModelRequest, Role,
};
@@ -5,18 +6,21 @@ use anyhow::{anyhow, Result};
use editor::{Editor, EditorElement, EditorStyle};
use futures::{future::BoxFuture, stream::BoxStream, FutureExt, StreamExt};
use gpui::{AnyView, AppContext, FontStyle, FontWeight, Task, TextStyle, View, WhiteSpace};
use http::HttpClient;
use open_ai::{stream_completion, Request, RequestMessage, Role as OpenAiRole};
use settings::Settings;
use std::time::Duration;
use std::{env, sync::Arc};
use theme::ThemeSettings;
use ui::prelude::*;
use util::{http::HttpClient, ResultExt};
use util::ResultExt;
pub struct OpenAiCompletionProvider {
api_key: Option<String>,
api_url: String,
default_model: OpenAiModel,
http_client: Arc<dyn HttpClient>,
low_speed_timeout: Option<Duration>,
settings_version: usize,
}
@@ -25,6 +29,7 @@ impl OpenAiCompletionProvider {
default_model: OpenAiModel,
api_url: String,
http_client: Arc<dyn HttpClient>,
low_speed_timeout: Option<Duration>,
settings_version: usize,
) -> Self {
Self {
@@ -32,13 +37,21 @@ impl OpenAiCompletionProvider {
api_url,
default_model,
http_client,
low_speed_timeout,
settings_version,
}
}
pub fn update(&mut self, default_model: OpenAiModel, api_url: String, settings_version: usize) {
pub fn update(
&mut self,
default_model: OpenAiModel,
api_url: String,
low_speed_timeout: Option<Duration>,
settings_version: usize,
) {
self.default_model = default_model;
self.api_url = api_url;
self.low_speed_timeout = low_speed_timeout;
self.settings_version = settings_version;
}
@@ -112,9 +125,16 @@ impl OpenAiCompletionProvider {
let http_client = self.http_client.clone();
let api_key = self.api_key.clone();
let api_url = self.api_url.clone();
let low_speed_timeout = self.low_speed_timeout;
async move {
let api_key = api_key.ok_or_else(|| anyhow!("missing api key"))?;
let request = stream_completion(http_client.as_ref(), &api_url, &api_key, request);
let request = stream_completion(
http_client.as_ref(),
&api_url,
&api_key,
request,
low_speed_timeout,
);
let response = request.await?;
let stream = response
.filter_map(|response| async move {
@@ -131,8 +151,8 @@ impl OpenAiCompletionProvider {
fn to_open_ai_request(&self, request: LanguageModelRequest) -> Request {
let model = match request.model {
LanguageModel::ZedDotDev(_) => self.default_model(),
LanguageModel::OpenAi(model) => model,
_ => self.default_model(),
};
Request {
@@ -183,7 +203,19 @@ pub fn count_open_ai_tokens(
})
.collect::<Vec<_>>();
tiktoken_rs::num_tokens_from_messages(request.model.id(), &messages)
match request.model {
LanguageModel::OpenAi(OpenAiModel::FourOmni)
| LanguageModel::ZedDotDev(ZedDotDevModel::Gpt4Omni)
| LanguageModel::Anthropic(_)
| LanguageModel::ZedDotDev(ZedDotDevModel::Claude3Opus)
| LanguageModel::ZedDotDev(ZedDotDevModel::Claude3Sonnet)
| LanguageModel::ZedDotDev(ZedDotDevModel::Claude3Haiku) => {
// Tiktoken doesn't yet support these models, so we manually use the
// same tokenizer as GPT-4.
tiktoken_rs::num_tokens_from_messages("gpt-4", &messages)
}
_ => tiktoken_rs::num_tokens_from_messages(request.model.id(), &messages),
}
})
.boxed()
}

View File

@@ -78,9 +78,9 @@ impl ZedDotDevCompletionProvider {
cx: &AppContext,
) -> BoxFuture<'static, Result<usize>> {
match request.model {
LanguageModel::OpenAi(_) => future::ready(Err(anyhow!("invalid model"))).boxed(),
LanguageModel::ZedDotDev(ZedDotDevModel::Gpt4)
| LanguageModel::ZedDotDev(ZedDotDevModel::Gpt4Turbo)
| LanguageModel::ZedDotDev(ZedDotDevModel::Gpt4Omni)
| LanguageModel::ZedDotDev(ZedDotDevModel::Gpt3Point5Turbo) => {
count_open_ai_tokens(request, cx.background_executor())
}
@@ -107,6 +107,7 @@ impl ZedDotDevCompletionProvider {
}
.boxed()
}
_ => future::ready(Err(anyhow!("invalid model"))).boxed(),
}
}

View File

@@ -1,91 +0,0 @@
use editor::MultiBuffer;
use gpui::{AppContext, Model, ModelContext, Subscription};
use crate::{assistant_panel::Conversation, LanguageModelRequestMessage, Role};
#[derive(Default)]
pub struct EmbeddedScope {
active_buffer: Option<Model<MultiBuffer>>,
active_buffer_enabled: bool,
active_buffer_subscription: Option<Subscription>,
}
impl EmbeddedScope {
pub fn new() -> Self {
Self {
active_buffer: None,
active_buffer_enabled: true,
active_buffer_subscription: None,
}
}
pub fn set_active_buffer(
&mut self,
buffer: Option<Model<MultiBuffer>>,
cx: &mut ModelContext<Conversation>,
) {
self.active_buffer_subscription.take();
if let Some(active_buffer) = buffer.clone() {
self.active_buffer_subscription =
Some(cx.subscribe(&active_buffer, |conversation, _, e, cx| {
if let multi_buffer::Event::Edited { .. } = e {
conversation.count_remaining_tokens(cx)
}
}));
}
self.active_buffer = buffer;
}
pub fn active_buffer(&self) -> Option<&Model<MultiBuffer>> {
self.active_buffer.as_ref()
}
pub fn active_buffer_enabled(&self) -> bool {
self.active_buffer_enabled
}
pub fn set_active_buffer_enabled(&mut self, enabled: bool) {
self.active_buffer_enabled = enabled;
}
/// Provide a message for the language model based on the active buffer.
pub fn message(&self, cx: &AppContext) -> Option<LanguageModelRequestMessage> {
if !self.active_buffer_enabled {
return None;
}
let active_buffer = self.active_buffer.as_ref()?;
let buffer = active_buffer.read(cx);
if let Some(singleton) = buffer.as_singleton() {
let singleton = singleton.read(cx);
let filename = singleton
.file()
.map(|file| file.path().to_string_lossy())
.unwrap_or("Untitled".into());
let text = singleton.text();
let language = singleton
.language()
.map(|l| {
let name = l.code_fence_block_name();
name.to_string()
})
.unwrap_or_default();
let markdown =
format!("User's active file `{filename}`:\n\n```{language}\n{text}```\n\n");
return Some(LanguageModelRequestMessage {
role: Role::System,
content: markdown,
});
}
None
}
}

View File

@@ -106,6 +106,11 @@ impl SavedConversationMetadata {
.and_then(|name| name.to_str())
.zip(metadata)
{
// This is used to filter out conversations saved by the new assistant.
if !re.is_match(file_name) {
continue;
}
let title = re.replace(file_name, "");
conversations.push(Self {
title: title.into_owned(),

View File

@@ -19,17 +19,22 @@ stories = ["dep:story"]
anyhow.workspace = true
assistant_tooling.workspace = true
client.workspace = true
chrono.workspace = true
collections.workspace = true
editor.workspace = true
feature_flags.workspace = true
file_icons.workspace = true
fs.workspace = true
futures.workspace = true
fuzzy.workspace = true
gpui.workspace = true
language.workspace = true
log.workspace = true
nanoid.workspace = true
markdown.workspace = true
open_ai.workspace = true
picker.workspace = true
project.workspace = true
rich_text.workspace = true
regex.workspace = true
schemars.workspace = true
semantic_index.workspace = true
serde.workspace = true
@@ -39,6 +44,7 @@ story = { workspace = true, optional = true }
theme.workspace = true
ui.workspace = true
util.workspace = true
unindent.workspace = true
workspace.workspace = true
[dev-dependencies]
@@ -48,6 +54,7 @@ env_logger.workspace = true
gpui = { workspace = true, features = ["test-support"] }
language = { workspace = true, features = ["test-support"] }
languages.workspace = true
markdown = { workspace = true, features = ["test-support"] }
node_runtime.workspace = true
project = { workspace = true, features = ["test-support"] }
rand.workspace = true
@@ -55,4 +62,5 @@ release_channel.workspace = true
settings = { workspace = true, features = ["test-support"] }
theme = { workspace = true, features = ["test-support"] }
util = { workspace = true, features = ["test-support"] }
http = { workspace = true, features = ["test-support"] }
workspace = { workspace = true, features = ["test-support"] }

View File

@@ -1 +1 @@
> Give me a comprehensive list of all the elements define in my project (impl Element for {}, impl<T: 'static> Element for {}, impl IntoElement for {})
> Give me a comprehensive list of all the elements defined in my project using the following query: `impl Element for {}, impl<T: 'static> Element for {}, impl IntoElement for {})`

View File

@@ -0,0 +1,3 @@
Use tools frequently, especially when referring to files and code. The Zed editor we're working in can show me files directly when you add annotations. Be concise in chat, bountiful in tool calling.
Teach me everything you can about how zed loads settings. Please annotate the code inline.

File diff suppressed because it is too large

View File

@@ -1,114 +1,3 @@
pub mod active_file;
mod active_file;
use anyhow::{anyhow, Result};
use assistant_tooling::{LanguageModelAttachment, ProjectContext, ToolOutput};
use editor::Editor;
use gpui::{Render, Task, View, WeakModel, WeakView};
use language::Buffer;
use project::ProjectPath;
use ui::{prelude::*, ButtonLike, Tooltip, WindowContext};
use util::maybe;
use workspace::Workspace;
pub struct ActiveEditorAttachment {
buffer: WeakModel<Buffer>,
path: Option<ProjectPath>,
}
pub struct FileAttachmentView {
output: Result<ActiveEditorAttachment>,
}
impl Render for FileAttachmentView {
fn render(&mut self, cx: &mut ViewContext<Self>) -> impl IntoElement {
match &self.output {
Ok(attachment) => {
let filename: SharedString = attachment
.path
.as_ref()
.and_then(|p| p.path.file_name()?.to_str())
.unwrap_or("Untitled")
.to_string()
.into();
// todo!(): make the button link to the actual file to open
ButtonLike::new("file-attachment")
.child(
h_flex()
.gap_1()
.bg(cx.theme().colors().editor_background)
.rounded_md()
.child(ui::Icon::new(IconName::File))
.child(filename.clone()),
)
.tooltip({
move |cx| Tooltip::with_meta("File Attached", None, filename.clone(), cx)
})
.into_any_element()
}
Err(err) => div().child(err.to_string()).into_any_element(),
}
}
}
impl ToolOutput for FileAttachmentView {
fn generate(&self, project: &mut ProjectContext, cx: &mut WindowContext) -> String {
if let Ok(result) = &self.output {
if let Some(path) = &result.path {
project.add_file(path.clone());
return format!("current file: {}", path.path.display());
} else if let Some(buffer) = result.buffer.upgrade() {
return format!("current untitled buffer text:\n{}", buffer.read(cx).text());
}
}
String::new()
}
}
pub struct ActiveEditorAttachmentTool {
workspace: WeakView<Workspace>,
}
impl ActiveEditorAttachmentTool {
pub fn new(workspace: WeakView<Workspace>, _cx: &mut WindowContext) -> Self {
Self { workspace }
}
}
impl LanguageModelAttachment for ActiveEditorAttachmentTool {
type Output = ActiveEditorAttachment;
type View = FileAttachmentView;
fn run(&self, cx: &mut WindowContext) -> Task<Result<ActiveEditorAttachment>> {
Task::ready(maybe!({
let active_buffer = self
.workspace
.update(cx, |workspace, cx| {
workspace
.active_item(cx)
.and_then(|item| Some(item.act_as::<Editor>(cx)?.read(cx).buffer().clone()))
})?
.ok_or_else(|| anyhow!("no active buffer"))?;
let buffer = active_buffer.read(cx);
if let Some(buffer) = buffer.as_singleton() {
let path =
project::File::from_dyn(buffer.read(cx).file()).map(|file| ProjectPath {
worktree_id: file.worktree_id(cx),
path: file.path.clone(),
});
return Ok(ActiveEditorAttachment {
buffer: buffer.downgrade(),
path,
});
} else {
Err(anyhow!("no active buffer"))
}
}))
}
fn view(output: Result<Self::Output>, cx: &mut WindowContext) -> View<Self::View> {
cx.new_view(|_cx| FileAttachmentView { output })
}
}
pub use active_file::*;

View File

@@ -1 +1,144 @@
use std::{path::PathBuf, sync::Arc};
use anyhow::{anyhow, Result};
use assistant_tooling::{AttachmentOutput, LanguageModelAttachment, ProjectContext};
use editor::Editor;
use gpui::{Render, Task, View, WeakModel, WeakView};
use language::Buffer;
use project::ProjectPath;
use serde::{Deserialize, Serialize};
use ui::{prelude::*, ButtonLike, Tooltip, WindowContext};
use util::maybe;
use workspace::Workspace;
#[derive(Serialize, Deserialize)]
pub struct ActiveEditorAttachment {
#[serde(skip)]
buffer: Option<WeakModel<Buffer>>,
path: Option<PathBuf>,
}
pub struct FileAttachmentView {
project_path: Option<ProjectPath>,
buffer: Option<WeakModel<Buffer>>,
error: Option<anyhow::Error>,
}
impl Render for FileAttachmentView {
fn render(&mut self, cx: &mut ViewContext<Self>) -> impl IntoElement {
if let Some(error) = &self.error {
return div().child(error.to_string()).into_any_element();
}
let filename: SharedString = self
.project_path
.as_ref()
.and_then(|p| p.path.file_name()?.to_str())
.unwrap_or("Untitled")
.to_string()
.into();
ButtonLike::new("file-attachment")
.child(
h_flex()
.gap_1()
.bg(cx.theme().colors().editor_background)
.rounded_md()
.child(ui::Icon::new(IconName::File))
.child(filename.clone()),
)
.tooltip(move |cx| Tooltip::with_meta("File Attached", None, filename.clone(), cx))
.into_any_element()
}
}
impl AttachmentOutput for FileAttachmentView {
fn generate(&self, project: &mut ProjectContext, cx: &mut WindowContext) -> String {
if let Some(path) = &self.project_path {
project.add_file(path.clone());
return format!("current file: {}", path.path.display());
}
if let Some(buffer) = self.buffer.as_ref().and_then(|buffer| buffer.upgrade()) {
return format!("current untitled buffer text:\n{}", buffer.read(cx).text());
}
String::new()
}
}
pub struct ActiveEditorAttachmentTool {
workspace: WeakView<Workspace>,
}
impl ActiveEditorAttachmentTool {
pub fn new(workspace: WeakView<Workspace>, _cx: &mut WindowContext) -> Self {
Self { workspace }
}
}
impl LanguageModelAttachment for ActiveEditorAttachmentTool {
type Output = ActiveEditorAttachment;
type View = FileAttachmentView;
fn name(&self) -> Arc<str> {
"active-editor-attachment".into()
}
fn run(&self, cx: &mut WindowContext) -> Task<Result<ActiveEditorAttachment>> {
Task::ready(maybe!({
let active_buffer = self
.workspace
.update(cx, |workspace, cx| {
workspace
.active_item(cx)
.and_then(|item| Some(item.act_as::<Editor>(cx)?.read(cx).buffer().clone()))
})?
.ok_or_else(|| anyhow!("no active buffer"))?;
let buffer = active_buffer.read(cx);
if let Some(buffer) = buffer.as_singleton() {
let path = project::File::from_dyn(buffer.read(cx).file())
.and_then(|file| file.worktree.read(cx).absolutize(&file.path).ok());
return Ok(ActiveEditorAttachment {
buffer: Some(buffer.downgrade()),
path,
});
} else {
Err(anyhow!("no active buffer"))
}
}))
}
fn view(
&self,
output: Result<ActiveEditorAttachment>,
cx: &mut WindowContext,
) -> View<Self::View> {
let error;
let project_path;
let buffer;
match output {
Ok(output) => {
error = None;
let workspace = self.workspace.upgrade().unwrap();
let project = workspace.read(cx).project();
project_path = output
.path
.and_then(|path| project.read(cx).project_path_for_absolute_path(&path, cx));
buffer = output.buffer;
}
Err(err) => {
error = Some(err);
buffer = None;
project_path = None;
}
}
cx.new_view(|_cx| FileAttachmentView {
project_path,
buffer,
error,
})
}
}

View File

@@ -0,0 +1,90 @@
use std::cmp::Reverse;
use std::ffi::OsStr;
use std::path::PathBuf;
use std::sync::Arc;
use anyhow::Result;
use assistant_tooling::{SavedToolFunctionCall, SavedUserAttachment};
use fs::Fs;
use futures::StreamExt;
use gpui::SharedString;
use regex::Regex;
use serde::{Deserialize, Serialize};
use util::paths::CONVERSATIONS_DIR;
use crate::MessageId;
#[derive(Serialize, Deserialize)]
pub struct SavedConversation {
/// The schema version of the conversation.
pub version: String,
/// The title of the conversation, generated by the Assistant.
pub title: String,
pub messages: Vec<SavedChatMessage>,
}
#[derive(Serialize, Deserialize)]
pub enum SavedChatMessage {
User {
id: MessageId,
body: String,
attachments: Vec<SavedUserAttachment>,
},
Assistant {
id: MessageId,
messages: Vec<SavedAssistantMessagePart>,
error: Option<SharedString>,
},
}
#[derive(Serialize, Deserialize)]
pub struct SavedAssistantMessagePart {
pub body: SharedString,
pub tool_calls: Vec<SavedToolFunctionCall>,
}
pub struct SavedConversationMetadata {
pub title: String,
pub path: PathBuf,
pub mtime: chrono::DateTime<chrono::Local>,
}
impl SavedConversationMetadata {
pub async fn list(fs: Arc<dyn Fs>) -> Result<Vec<Self>> {
fs.create_dir(&CONVERSATIONS_DIR).await?;
let mut paths = fs.read_dir(&CONVERSATIONS_DIR).await?;
let mut conversations = Vec::new();
while let Some(path) = paths.next().await {
let path = path?;
if path.extension() != Some(OsStr::new("json")) {
continue;
}
let pattern = r" - \d+.zed.\d.\d.\d.json$";
let re = Regex::new(pattern).unwrap();
let metadata = fs.metadata(&path).await?;
if let Some((file_name, metadata)) = path
.file_name()
.and_then(|name| name.to_str())
.zip(metadata)
{
// This is used to filter out conversations saved by the old assistant.
if !re.is_match(file_name) {
continue;
}
let title = re.replace(file_name, "");
conversations.push(Self {
title: title.into_owned(),
path,
mtime: metadata.mtime.into(),
});
}
}
conversations.sort_unstable_by_key(|conversation| Reverse(conversation.mtime));
Ok(conversations)
}
}
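As an aside, the file-name filter above behaves like this self-contained sketch (the file name and version suffix below are hypothetical, inferred only from the pattern itself):

```rust
use regex::Regex;

fn main() {
    // Same pattern as in `SavedConversationMetadata::list` above.
    let re = Regex::new(r" - \d+.zed.\d.\d.\d.json$").unwrap();

    // Hypothetical saved-conversation file name, for illustration only.
    let file_name = "Collapsible panel help - 1715700000.zed.0.1.0.json";
    assert!(re.is_match(file_name));

    // Replacing the matched suffix with nothing recovers the conversation title.
    assert_eq!(re.replace(file_name, ""), "Collapsible panel help");
}
```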

View File

@@ -0,0 +1,196 @@
use std::sync::Arc;
use fuzzy::{match_strings, StringMatch, StringMatchCandidate};
use gpui::{AppContext, DismissEvent, EventEmitter, FocusHandle, FocusableView, View, WeakView};
use picker::{Picker, PickerDelegate};
use ui::{prelude::*, HighlightedLabel, ListItem, ListItemSpacing};
use util::ResultExt;
use crate::saved_conversation::SavedConversationMetadata;
pub struct SavedConversations {
focus_handle: FocusHandle,
picker: Option<View<Picker<SavedConversationPickerDelegate>>>,
}
impl EventEmitter<DismissEvent> for SavedConversations {}
impl FocusableView for SavedConversations {
fn focus_handle(&self, cx: &AppContext) -> FocusHandle {
if let Some(picker) = self.picker.as_ref() {
picker.focus_handle(cx)
} else {
self.focus_handle.clone()
}
}
}
impl SavedConversations {
pub fn new(cx: &mut ViewContext<Self>) -> Self {
Self {
focus_handle: cx.focus_handle(),
picker: None,
}
}
pub fn init(
&mut self,
saved_conversations: Vec<SavedConversationMetadata>,
cx: &mut ViewContext<Self>,
) {
let delegate =
SavedConversationPickerDelegate::new(cx.view().downgrade(), saved_conversations);
self.picker = Some(cx.new_view(|cx| Picker::uniform_list(delegate, cx).modal(false)));
}
}
impl Render for SavedConversations {
fn render(&mut self, cx: &mut ViewContext<Self>) -> impl IntoElement {
v_flex()
.w_full()
.bg(cx.theme().colors().panel_background)
.children(self.picker.clone())
}
}
pub struct SavedConversationPickerDelegate {
view: WeakView<SavedConversations>,
saved_conversations: Vec<SavedConversationMetadata>,
selected_index: usize,
matches: Vec<StringMatch>,
}
impl SavedConversationPickerDelegate {
pub fn new(
weak_view: WeakView<SavedConversations>,
saved_conversations: Vec<SavedConversationMetadata>,
) -> Self {
let matches = saved_conversations
.iter()
.map(|conversation| StringMatch {
candidate_id: 0,
score: 0.0,
positions: Default::default(),
string: conversation.title.clone(),
})
.collect();
Self {
view: weak_view,
saved_conversations,
selected_index: 0,
matches,
}
}
}
impl PickerDelegate for SavedConversationPickerDelegate {
type ListItem = ui::ListItem;
fn placeholder_text(&self, _cx: &mut WindowContext) -> Arc<str> {
"Select saved conversation...".into()
}
fn match_count(&self) -> usize {
self.matches.len()
}
fn selected_index(&self) -> usize {
self.selected_index
}
fn set_selected_index(&mut self, ix: usize, _cx: &mut ViewContext<Picker<Self>>) {
self.selected_index = ix;
}
fn update_matches(
&mut self,
query: String,
cx: &mut ViewContext<Picker<Self>>,
) -> gpui::Task<()> {
let background_executor = cx.background_executor().clone();
let candidates = self
.saved_conversations
.iter()
.enumerate()
.map(|(id, conversation)| {
let text = conversation.title.clone();
StringMatchCandidate {
id,
char_bag: text.as_str().into(),
string: text,
}
})
.collect::<Vec<_>>();
cx.spawn(move |this, mut cx| async move {
let matches = if query.is_empty() {
candidates
.into_iter()
.enumerate()
.map(|(index, candidate)| StringMatch {
candidate_id: index,
string: candidate.string,
positions: Vec::new(),
score: 0.0,
})
.collect()
} else {
match_strings(
&candidates,
&query,
false,
100,
&Default::default(),
background_executor,
)
.await
};
this.update(&mut cx, |this, _cx| {
this.delegate.matches = matches;
this.delegate.selected_index = this
.delegate
.selected_index
.min(this.delegate.matches.len().saturating_sub(1));
})
.log_err();
})
}
fn confirm(&mut self, _secondary: bool, cx: &mut ViewContext<Picker<Self>>) {
if self.matches.is_empty() {
self.dismissed(cx);
return;
}
// TODO: Implement selecting a saved conversation.
}
fn dismissed(&mut self, cx: &mut ui::prelude::ViewContext<Picker<Self>>) {
self.view
.update(cx, |_, cx| cx.emit(DismissEvent))
.log_err();
}
fn render_match(
&self,
ix: usize,
selected: bool,
_cx: &mut ViewContext<Picker<Self>>,
) -> Option<Self::ListItem> {
let conversation_match = &self.matches[ix];
let _conversation = &self.saved_conversations[conversation_match.candidate_id];
Some(
ListItem::new(ix)
.spacing(ListItemSpacing::Sparse)
.selected(selected)
.child(HighlightedLabel::new(
conversation_match.string.clone(),
conversation_match.positions.clone(),
)),
)
}
}

View File

@@ -1,5 +1,7 @@
mod annotate_code;
mod create_buffer;
mod project_index;
pub use annotate_code::*;
pub use create_buffer::*;
pub use project_index::*;

View File

@@ -0,0 +1,304 @@
use anyhow::Result;
use assistant_tooling::{LanguageModelTool, ProjectContext, ToolView};
use editor::{
display_map::{BlockContext, BlockDisposition, BlockProperties, BlockStyle},
Editor, MultiBuffer,
};
use futures::{channel::mpsc::UnboundedSender, StreamExt as _};
use gpui::{prelude::*, AnyElement, AsyncWindowContext, Model, Task, View, WeakView};
use language::ToPoint;
use project::{search::SearchQuery, Project, ProjectPath};
use schemars::JsonSchema;
use serde::Deserialize;
use std::path::Path;
use ui::prelude::*;
use util::ResultExt;
use workspace::Workspace;
pub struct AnnotationTool {
workspace: WeakView<Workspace>,
project: Model<Project>,
}
impl AnnotationTool {
pub fn new(workspace: WeakView<Workspace>, project: Model<Project>) -> Self {
Self { workspace, project }
}
}
#[derive(Default, Debug, Deserialize, JsonSchema, Clone)]
pub struct AnnotationInput {
/// Name for this set of annotations
#[serde(default = "default_title")]
title: String,
/// Excerpts from the file to show to the user.
excerpts: Vec<Excerpt>,
}
fn default_title() -> String {
"Untitled".to_string()
}
#[derive(Debug, Deserialize, JsonSchema, Clone)]
struct Excerpt {
/// Path to the file
path: String,
/// A short, distinctive string that appears in the file, used to define a location in the file.
text_passage: String,
/// Text to display above the code excerpt
annotation: String,
}
impl LanguageModelTool for AnnotationTool {
type View = AnnotationResultView;
fn name(&self) -> String {
"annotate_code".to_string()
}
fn description(&self) -> String {
"Dynamically annotate symbols in the current codebase. Opens a buffer in a panel in their editor, to the side of the conversation. The annotations are shown in the editor as a block decoration.".to_string()
}
fn view(&self, cx: &mut WindowContext) -> View<Self::View> {
cx.new_view(|cx| {
let (tx, mut rx) = futures::channel::mpsc::unbounded();
cx.spawn(|view, mut cx| async move {
while let Some(excerpt) = rx.next().await {
AnnotationResultView::add_excerpt(view.clone(), excerpt, &mut cx).await?;
}
anyhow::Ok(())
})
.detach();
AnnotationResultView {
project: self.project.clone(),
workspace: self.workspace.clone(),
tx,
pending_excerpt: None,
added_editor_to_workspace: false,
editor: None,
error: None,
rendered_excerpt_count: 0,
}
})
}
}
pub struct AnnotationResultView {
workspace: WeakView<Workspace>,
project: Model<Project>,
pending_excerpt: Option<Excerpt>,
added_editor_to_workspace: bool,
editor: Option<View<Editor>>,
tx: UnboundedSender<Excerpt>,
error: Option<anyhow::Error>,
rendered_excerpt_count: usize,
}
impl AnnotationResultView {
async fn add_excerpt(
this: WeakView<Self>,
excerpt: Excerpt,
cx: &mut AsyncWindowContext,
) -> Result<()> {
let project = this.update(cx, |this, _cx| this.project.clone())?;
let worktree_id = project.update(cx, |project, cx| {
let worktree = project.worktrees().next()?;
let worktree_id = worktree.read(cx).id();
Some(worktree_id)
})?;
let worktree_id = if let Some(worktree_id) = worktree_id {
worktree_id
} else {
return Err(anyhow::anyhow!("No worktree found"));
};
let buffer_task = project.update(cx, |project, cx| {
project.open_buffer(
ProjectPath {
worktree_id,
path: Path::new(&excerpt.path).into(),
},
cx,
)
})?;
let buffer = match buffer_task.await {
Ok(buffer) => buffer,
Err(error) => {
return this.update(cx, |this, cx| {
this.error = Some(error);
cx.notify();
})
}
};
let snapshot = buffer.update(cx, |buffer, _cx| buffer.snapshot())?;
let query = SearchQuery::text(&excerpt.text_passage, false, false, false, vec![], vec![])?;
let matches = query.search(&snapshot, None).await;
let Some(first_match) = matches.first() else {
log::warn!(
"text {:?} does not appear in '{}'",
excerpt.text_passage,
excerpt.path
);
return Ok(());
};
this.update(cx, |this, cx| {
let mut start = first_match.start.to_point(&snapshot);
start.column = 0;
if let Some(editor) = &this.editor {
editor.update(cx, |editor, cx| {
let ranges = editor.buffer().update(cx, |multibuffer, cx| {
multibuffer.push_excerpts_with_context_lines(
buffer.clone(),
vec![start..start],
5,
cx,
)
});
let annotation = SharedString::from(excerpt.annotation);
editor.insert_blocks(
[BlockProperties {
position: ranges[0].start,
height: annotation.split('\n').count() as u8 + 1,
style: BlockStyle::Fixed,
render: Box::new(move |cx| Self::render_note_block(&annotation, cx)),
disposition: BlockDisposition::Above,
}],
None,
cx,
);
});
if !this.added_editor_to_workspace {
this.added_editor_to_workspace = true;
this.workspace
.update(cx, |workspace, cx| {
workspace.add_item_to_active_pane(Box::new(editor.clone()), None, cx);
})
.log_err();
}
}
})?;
Ok(())
}
fn render_note_block(explanation: &SharedString, cx: &mut BlockContext) -> AnyElement {
let anchor_x = cx.anchor_x;
let gutter_width = cx.gutter_dimensions.width;
h_flex()
.w_full()
.py_2()
.border_y_1()
.border_color(cx.theme().colors().border)
.child(
h_flex()
.justify_center()
.w(gutter_width)
.child(Icon::new(IconName::Ai).color(Color::Hint)),
)
.child(
h_flex()
.w_full()
.ml(anchor_x - gutter_width)
.child(explanation.clone()),
)
.into_any_element()
}
}
impl Render for AnnotationResultView {
fn render(&mut self, _cx: &mut ViewContext<Self>) -> impl IntoElement {
if let Some(error) = &self.error {
ui::Label::new(error.to_string()).into_any_element()
} else {
ui::Label::new(SharedString::from(format!(
"Opened a buffer with {} excerpts",
self.rendered_excerpt_count
)))
.into_any_element()
}
}
}
impl ToolView for AnnotationResultView {
type Input = AnnotationInput;
type SerializedState = Option<String>;
fn generate(&self, _: &mut ProjectContext, _: &mut ViewContext<Self>) -> String {
if let Some(error) = &self.error {
format!("Failed to create buffer: {error:?}")
} else {
format!(
"opened {} excerpts in a buffer",
self.rendered_excerpt_count
)
}
}
fn set_input(&mut self, mut input: Self::Input, cx: &mut ViewContext<Self>) {
let editor = if let Some(editor) = &self.editor {
editor.clone()
} else {
let multibuffer = cx.new_model(|_cx| {
MultiBuffer::new(0, language::Capability::ReadWrite).with_title(String::new())
});
let editor = cx.new_view(|cx| {
Editor::for_multibuffer(multibuffer.clone(), Some(self.project.clone()), cx)
});
self.editor = Some(editor.clone());
editor
};
editor.update(cx, |editor, cx| {
editor.buffer().update(cx, |multibuffer, cx| {
if multibuffer.title(cx) != input.title {
multibuffer.set_title(input.title.clone(), cx);
}
});
self.pending_excerpt = input.excerpts.pop();
for excerpt in input.excerpts.iter().skip(self.rendered_excerpt_count) {
self.tx.unbounded_send(excerpt.clone()).ok();
}
self.rendered_excerpt_count = input.excerpts.len();
});
cx.notify();
}
fn execute(&mut self, _cx: &mut ViewContext<Self>) -> Task<Result<()>> {
if let Some(excerpt) = self.pending_excerpt.take() {
self.rendered_excerpt_count += 1;
self.tx.unbounded_send(excerpt.clone()).ok();
}
self.tx.close_channel();
Task::ready(Ok(()))
}
fn serialize(&self, _cx: &mut ViewContext<Self>) -> Self::SerializedState {
self.error.as_ref().map(|error| error.to_string())
}
fn deserialize(
&mut self,
output: Self::SerializedState,
_cx: &mut ViewContext<Self>,
) -> Result<()> {
if let Some(error_message) = output {
self.error = Some(anyhow::anyhow!("{}", error_message));
}
Ok(())
}
}

View File

@@ -1,5 +1,5 @@
use anyhow::Result;
use assistant_tooling::{LanguageModelTool, ProjectContext, ToolOutput};
use anyhow::{anyhow, Result};
use assistant_tooling::{LanguageModelTool, ProjectContext, ToolView};
use editor::Editor;
use gpui::{prelude::*, Model, Task, View, WeakView};
use project::Project;
@@ -20,7 +20,7 @@ impl CreateBufferTool {
}
}
#[derive(Debug, Deserialize, JsonSchema)]
#[derive(Debug, Clone, Deserialize, JsonSchema)]
pub struct CreateBufferInput {
/// The contents of the buffer.
text: String,
@@ -32,25 +32,70 @@ pub struct CreateBufferInput {
}
impl LanguageModelTool for CreateBufferTool {
type Input = CreateBufferInput;
type Output = ();
type View = CreateBufferView;
fn name(&self) -> String {
"create_buffer".to_string()
"create_file".to_string()
}
fn description(&self) -> String {
"Create a new buffer in the current codebase".to_string()
"Create a new untitled file in the current codebase. Side effect: opens it in a new pane/tab for the user to edit.".to_string()
}
fn execute(&self, input: &Self::Input, cx: &mut WindowContext) -> Task<Result<Self::Output>> {
fn view(&self, cx: &mut WindowContext) -> View<Self::View> {
cx.new_view(|_cx| CreateBufferView {
workspace: self.workspace.clone(),
project: self.project.clone(),
input: None,
error: None,
})
}
}
pub struct CreateBufferView {
workspace: WeakView<Workspace>,
project: Model<Project>,
input: Option<CreateBufferInput>,
error: Option<anyhow::Error>,
}
impl Render for CreateBufferView {
fn render(&mut self, _cx: &mut ViewContext<Self>) -> impl IntoElement {
ui::Label::new("Opening a buffer")
}
}
impl ToolView for CreateBufferView {
type Input = CreateBufferInput;
type SerializedState = ();
fn generate(&self, _project: &mut ProjectContext, _cx: &mut ViewContext<Self>) -> String {
let Some(input) = self.input.as_ref() else {
return "No input".to_string();
};
match &self.error {
None => format!("Created a new {} buffer", input.language),
Some(err) => format!("Failed to create buffer: {err:?}"),
}
}
fn set_input(&mut self, input: Self::Input, cx: &mut ViewContext<Self>) {
self.input = Some(input);
cx.notify();
}
fn execute(&mut self, cx: &mut ViewContext<Self>) -> Task<Result<()>> {
cx.spawn({
let workspace = self.workspace.clone();
let project = self.project.clone();
let text = input.text.clone();
let language_name = input.language.clone();
|mut cx| async move {
let input = self.input.clone();
|_this, mut cx| async move {
let input = input.ok_or_else(|| anyhow!("no input"))?;
let text = input.text.clone();
let language_name = input.language.clone();
let language = cx
.update(|cx| {
project
@@ -86,34 +131,15 @@ impl LanguageModelTool for CreateBufferTool {
})
}
fn output_view(
input: Self::Input,
output: Result<Self::Output>,
cx: &mut WindowContext,
) -> View<Self::View> {
cx.new_view(|_cx| CreateBufferView {
language: input.language,
output,
})
}
}
pub struct CreateBufferView {
language: String,
output: Result<()>,
}
impl Render for CreateBufferView {
fn render(&mut self, _cx: &mut ViewContext<Self>) -> impl IntoElement {
div().child("Opening a buffer")
}
}
impl ToolOutput for CreateBufferView {
fn generate(&self, _: &mut ProjectContext, _: &mut WindowContext) -> String {
match &self.output {
Ok(_) => format!("Created a new {} buffer", self.language),
Err(err) => format!("Failed to create buffer: {err:?}"),
}
fn serialize(&self, _cx: &mut ViewContext<Self>) -> Self::SerializedState {
()
}
fn deserialize(
&mut self,
_output: Self::SerializedState,
_cx: &mut ViewContext<Self>,
) -> Result<()> {
Ok(())
}
}

View File

@@ -1,13 +1,20 @@
use anyhow::Result;
use assistant_tooling::{LanguageModelTool, ToolOutput};
use assistant_tooling::{LanguageModelTool, ToolView};
use collections::BTreeMap;
use gpui::{prelude::*, Model, Task};
use file_icons::FileIcons;
use gpui::{prelude::*, AnyElement, Model, Task};
use project::ProjectPath;
use schemars::JsonSchema;
use semantic_index::{ProjectIndex, Status};
use serde::Deserialize;
use std::{fmt::Write as _, ops::Range};
use ui::{div, prelude::*, CollapsibleContainer, Color, Icon, IconName, Label, WindowContext};
use serde::{Deserialize, Serialize};
use std::{
fmt::Write as _,
ops::Range,
path::{Path, PathBuf},
str::FromStr as _,
sync::Arc,
};
use ui::{prelude::*, CollapsibleContainer, Color, Icon, IconName, Label, WindowContext};
const DEFAULT_SEARCH_LIMIT: usize = 20;
@@ -15,126 +22,373 @@ pub struct ProjectIndexTool {
project_index: Model<ProjectIndex>,
}
// Note: Comments on a `LanguageModelTool::Input` become descriptions on the generated JSON schema as shown to the language model.
// Any changes or deletions to the `CodebaseQuery` comments will change model behavior.
#[derive(Deserialize, JsonSchema)]
pub struct CodebaseQuery {
/// Semantic search query
query: String,
/// Maximum number of results to return, defaults to 20
limit: Option<usize>,
#[derive(Default)]
enum ProjectIndexToolState {
#[default]
CollectingQuery,
Searching,
Error(anyhow::Error),
Finished {
excerpts: BTreeMap<ProjectPath, Vec<Range<usize>>>,
index_status: Status,
},
}
pub struct ProjectIndexView {
project_index: Model<ProjectIndex>,
input: CodebaseQuery,
output: Result<ProjectIndexOutput>,
element_id: ElementId,
expanded_header: bool,
state: ProjectIndexToolState,
}
pub struct ProjectIndexOutput {
status: Status,
excerpts: BTreeMap<ProjectPath, Vec<Range<usize>>>,
#[derive(Default, Deserialize, JsonSchema)]
pub struct CodebaseQuery {
/// Semantic search query
query: String,
/// Criteria to include results
includes: Option<SearchFilter>,
/// Criteria to exclude results
excludes: Option<SearchFilter>,
}
impl ProjectIndexView {
fn new(input: CodebaseQuery, output: Result<ProjectIndexOutput>) -> Self {
let element_id = ElementId::Name(nanoid::nanoid!().into());
#[derive(Deserialize, JsonSchema, Clone, Default)]
pub struct SearchFilter {
/// Filter by file path prefix
prefix_path: Option<String>,
/// Filter by file extension
extension: Option<String>,
// Note: we possibly can't do content filtering very easily given the project context handling
// the final results, so we're leaving out direct string matches for now
}
Self {
input,
output,
element_id,
expanded_header: false,
fn project_starts_with(prefix_path: Option<String>, project_path: ProjectPath) -> bool {
if let Some(path) = &prefix_path {
if let Some(path) = PathBuf::from_str(path).ok() {
return project_path.path.starts_with(path);
}
}
return false;
}
impl SearchFilter {
fn matches(&self, project_path: &ProjectPath) -> bool {
let path_match = project_starts_with(self.prefix_path.clone(), project_path.clone());
path_match
&& (if let Some(extension) = &self.extension {
project_path
.path
.extension()
.and_then(|ext| ext.to_str())
.map(|ext| ext == extension)
.unwrap_or(false)
} else {
true
})
}
}
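As an aside, these doc comments are what the model sees in the tool's JSON schema, and its arguments deserialize straight into `CodebaseQuery`. A minimal sketch (illustrative values; written as if alongside these types, since the fields are private, and assuming `serde_json` is available):

```rust
// Arguments the model might send for `semantic_search_codebase`; a missing
// optional field (here `excludes`) simply deserializes to `None`.
let arguments = r#"{
    "query": "where are worktree git statuses computed",
    "includes": { "prefix_path": "crates/worktree", "extension": "rs" }
}"#;
let query: CodebaseQuery = serde_json::from_str(arguments).unwrap();
assert_eq!(query.query, "where are worktree git statuses computed");
```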
#[derive(Serialize, Deserialize)]
pub struct SerializedState {
index_status: Status,
error_message: Option<String>,
worktrees: BTreeMap<Arc<Path>, WorktreeIndexOutput>,
}
#[derive(Default, Serialize, Deserialize)]
struct WorktreeIndexOutput {
excerpts: BTreeMap<Arc<Path>, Vec<Range<usize>>>,
}
impl ProjectIndexView {
fn toggle_header(&mut self, cx: &mut ViewContext<Self>) {
self.expanded_header = !self.expanded_header;
cx.notify();
}
fn render_filter_section(
&mut self,
heading: &str,
filter: Option<SearchFilter>,
cx: &mut ViewContext<Self>,
) -> Option<AnyElement> {
let filter = match filter {
Some(filter) => filter,
None => return None,
};
// Any of the filter fields can be empty. We'll show nothing if they're all empty.
let path = filter.prefix_path.as_ref().map(|path| {
let icon_path = FileIcons::get_icon(Path::new(path), cx)
.map(SharedString::from)
.unwrap_or_else(|| SharedString::from("icons/file_icons/file.svg"));
h_flex()
.gap_1()
.child("Paths: ")
.child(Icon::from_path(icon_path))
.child(ui::Label::new(path.clone()).color(Color::Muted))
});
let extension = filter.extension.as_ref().map(|extension| {
let icon_path = FileIcons::get_icon(Path::new(extension), cx)
.map(SharedString::from)
.unwrap_or_else(|| SharedString::from("icons/file_icons/file.svg"));
h_flex()
.gap_1()
.child("Extensions: ")
.child(Icon::from_path(icon_path))
.child(ui::Label::new(extension.clone()).color(Color::Muted))
});
if path.is_none() && extension.is_none() {
return None;
}
Some(
v_flex()
.child(ui::Label::new(heading.to_string()))
.gap_1()
.children(path)
.children(extension)
.into_any_element(),
)
}
}
impl Render for ProjectIndexView {
fn render(&mut self, cx: &mut ViewContext<Self>) -> impl IntoElement {
let query = self.input.query.clone();
let result = &self.output;
let output = match result {
Err(err) => {
return div().child(Label::new(format!("Error: {}", err)).color(Color::Error));
let (header_text, content) = match &self.state {
ProjectIndexToolState::Error(error) => {
return format!("failed to search: {error:?}").into_any_element()
}
Ok(output) => output,
};
ProjectIndexToolState::CollectingQuery | ProjectIndexToolState::Searching => {
("Searching...".to_string(), div())
}
ProjectIndexToolState::Finished { excerpts, .. } => {
let file_count = excerpts.len();
let file_count = output.excerpts.len();
if excerpts.is_empty() {
("No results found".to_string(), div())
} else {
let header_text = format!(
"Read {} {}",
file_count,
if file_count == 1 { "file" } else { "files" }
);
let el = v_flex().gap_2().children(excerpts.keys().map(|path| {
h_flex().gap_2().child(Icon::new(IconName::File)).child(
Label::new(path.path.to_string_lossy().to_string()).color(Color::Muted),
)
}));
(header_text, el)
}
}
};
let header = h_flex()
.gap_2()
.child(Icon::new(IconName::File))
.child(format!(
"Read {} {}",
file_count,
if file_count == 1 { "file" } else { "files" }
));
.child(header_text);
v_flex().gap_3().child(
CollapsibleContainer::new(self.element_id.clone(), self.expanded_header)
.start_slot(header)
.on_click(cx.listener(move |this, _, cx| {
this.toggle_header(cx);
}))
.child(
v_flex()
.gap_3()
.p_3()
.child(
h_flex()
.gap_2()
.child(Icon::new(IconName::MagnifyingGlass))
.child(Label::new(format!("`{}`", query)).color(Color::Muted)),
)
.child(
v_flex()
.gap_2()
.children(output.excerpts.keys().map(|path| {
h_flex().gap_2().child(Icon::new(IconName::File)).child(
Label::new(path.path.to_string_lossy().to_string())
.color(Color::Muted),
)
})),
),
),
)
v_flex()
.gap_3()
.child(
CollapsibleContainer::new("collapsible-container", self.expanded_header)
.start_slot(header)
.on_click(cx.listener(move |this, _, cx| {
this.toggle_header(cx);
}))
.child(
v_flex()
.gap_3()
.p_3()
.child(
h_flex()
.gap_2()
.child(Icon::new(IconName::MagnifyingGlass))
.child(Label::new(format!("`{}`", query)).color(Color::Muted)),
)
.children(self.render_filter_section(
"Includes",
self.input.includes.clone(),
cx,
))
.children(self.render_filter_section(
"Excludes",
self.input.excludes.clone(),
cx,
))
.child(content),
),
)
.into_any_element()
}
}
impl ToolOutput for ProjectIndexView {
impl ToolView for ProjectIndexView {
type Input = CodebaseQuery;
type SerializedState = SerializedState;
fn generate(
&self,
context: &mut assistant_tooling::ProjectContext,
_: &mut WindowContext,
_: &mut ViewContext<Self>,
) -> String {
match &self.output {
Ok(output) => {
match &self.state {
ProjectIndexToolState::CollectingQuery => String::new(),
ProjectIndexToolState::Searching => String::new(),
ProjectIndexToolState::Error(error) => format!("failed to search: {error:?}"),
ProjectIndexToolState::Finished {
excerpts,
index_status,
} => {
let mut body = "found results in the following paths:\n".to_string();
for (project_path, ranges) in &output.excerpts {
for (project_path, ranges) in excerpts {
context.add_excerpts(project_path.clone(), ranges);
writeln!(&mut body, "* {}", &project_path.path.display()).unwrap();
}
if output.status != Status::Idle {
if *index_status != Status::Idle {
body.push_str("Still indexing. Results may be incomplete.\n");
}
body
}
Err(err) => format!("Error: {}", err),
}
}
fn set_input(&mut self, input: Self::Input, cx: &mut ViewContext<Self>) {
self.input = input;
cx.notify();
}
fn execute(&mut self, cx: &mut ViewContext<Self>) -> Task<Result<()>> {
self.state = ProjectIndexToolState::Searching;
cx.notify();
let project_index = self.project_index.read(cx);
let index_status = project_index.status();
// TODO: wire the filters into the search here instead of processing after.
// Otherwise we'll get zero results sometimes.
let search = project_index.search(self.input.query.clone(), DEFAULT_SEARCH_LIMIT, cx);
let includes = self.input.includes.clone();
let excludes = self.input.excludes.clone();
cx.spawn(|this, mut cx| async move {
let search_result = search.await;
this.update(&mut cx, |this, cx| {
match search_result {
Ok(search_results) => {
let mut excerpts = BTreeMap::<ProjectPath, Vec<Range<usize>>>::new();
for search_result in search_results {
let project_path = ProjectPath {
worktree_id: search_result.worktree.read(cx).id(),
path: search_result.path,
};
if let Some(includes) = &includes {
if !includes.matches(&project_path) {
continue;
}
} else if let Some(excludes) = &excludes {
if excludes.matches(&project_path) {
continue;
}
}
excerpts
.entry(project_path)
.or_default()
.push(search_result.range);
}
this.state = ProjectIndexToolState::Finished {
excerpts,
index_status,
};
}
Err(error) => {
this.state = ProjectIndexToolState::Error(error);
}
}
cx.notify();
})
})
}
fn serialize(&self, cx: &mut ViewContext<Self>) -> Self::SerializedState {
let mut serialized = SerializedState {
error_message: None,
index_status: Status::Idle,
worktrees: Default::default(),
};
match &self.state {
ProjectIndexToolState::Error(err) => serialized.error_message = Some(err.to_string()),
ProjectIndexToolState::Finished {
excerpts,
index_status,
} => {
serialized.index_status = *index_status;
if let Some(project) = self.project_index.read(cx).project().upgrade() {
let project = project.read(cx);
for (project_path, excerpts) in excerpts {
if let Some(worktree) =
project.worktree_for_id(project_path.worktree_id, cx)
{
let worktree_path = worktree.read(cx).abs_path();
serialized
.worktrees
.entry(worktree_path)
.or_default()
.excerpts
.insert(project_path.path.clone(), excerpts.clone());
}
}
}
}
_ => {}
}
serialized
}
fn deserialize(
&mut self,
serialized: Self::SerializedState,
cx: &mut ViewContext<Self>,
) -> Result<()> {
if !serialized.worktrees.is_empty() {
let mut excerpts = BTreeMap::<ProjectPath, Vec<Range<usize>>>::new();
if let Some(project) = self.project_index.read(cx).project().upgrade() {
let project = project.read(cx);
for (worktree_path, worktree_state) in serialized.worktrees {
if let Some(worktree) = project
.worktrees()
.find(|worktree| worktree.read(cx).abs_path() == worktree_path)
{
let worktree_id = worktree.read(cx).id();
for (path, serialized_excerpts) in worktree_state.excerpts {
excerpts.insert(ProjectPath { worktree_id, path }, serialized_excerpts);
}
}
}
}
self.state = ProjectIndexToolState::Finished {
excerpts,
index_status: serialized.index_status,
};
}
cx.notify();
Ok(())
}
}
impl ProjectIndexTool {
@@ -144,66 +398,31 @@ impl ProjectIndexTool {
}
impl LanguageModelTool for ProjectIndexTool {
type Input = CodebaseQuery;
type Output = ProjectIndexOutput;
type View = ProjectIndexView;
fn name(&self) -> String {
"query_codebase".to_string()
"semantic_search_codebase".to_string()
}
fn description(&self) -> String {
"Semantic search against the user's current codebase, returning excerpts related to the query by computing a dot product against embeddings of code chunks in the code base and an embedding of the query.".to_string()
unindent::unindent(
r#"This search tool uses a semantic index to perform search queries across your codebase, identifying and returning excerpts of text and code possibly related to the query.
Ideal for:
- Discovering implementations of similar logic within the project
- Finding usage examples of functions, classes/structures, libraries, and other code elements
- Developing understanding of the codebase's architecture and design
Note: The search's effectiveness is directly related to the current state of the codebase and the specificity of your query. It is recommended that you use snippets of code that are similar to the code you wish to find."#,
)
}
fn execute(&self, query: &Self::Input, cx: &mut WindowContext) -> Task<Result<Self::Output>> {
let project_index = self.project_index.read(cx);
let status = project_index.status();
let search = project_index.search(
query.query.clone(),
query.limit.unwrap_or(DEFAULT_SEARCH_LIMIT),
cx,
);
cx.spawn(|mut cx| async move {
let search_results = search.await?;
cx.update(|cx| {
let mut output = ProjectIndexOutput {
status,
excerpts: Default::default(),
};
for search_result in search_results {
let path = ProjectPath {
worktree_id: search_result.worktree.read(cx).id(),
path: search_result.path.clone(),
};
let excerpts_for_path = output.excerpts.entry(path).or_default();
let ix = match excerpts_for_path
.binary_search_by_key(&search_result.range.start, |r| r.start)
{
Ok(ix) | Err(ix) => ix,
};
excerpts_for_path.insert(ix, search_result.range);
}
output
})
fn view(&self, cx: &mut WindowContext) -> gpui::View<Self::View> {
cx.new_view(|_| ProjectIndexView {
state: ProjectIndexToolState::CollectingQuery,
input: Default::default(),
expanded_header: false,
project_index: self.project_index.clone(),
})
}
fn output_view(
input: Self::Input,
output: Result<Self::Output>,
cx: &mut WindowContext,
) -> gpui::View<Self::View> {
cx.new_view(|_cx| ProjectIndexView::new(input, output))
}
fn render_running(_: &mut WindowContext) -> impl IntoElement {
CollapsibleContainer::new(ElementId::Name(nanoid::nanoid!().into()), false)
.start_slot("Searching code base")
}
}

View File

@@ -22,7 +22,7 @@ pub struct ActiveFileButton {
impl ActiveFileButton {
pub fn new(
attachment_store: Arc<AttachmentRegistry>,
attachment_registry: Arc<AttachmentRegistry>,
workspace: View<Workspace>,
cx: &mut ViewContext<Self>,
) -> Self {
@@ -31,7 +31,7 @@ impl ActiveFileButton {
cx.defer(move |this, cx| this.update_active_buffer(workspace.clone(), cx));
Self {
attachment_registry: attachment_store,
attachment_registry,
status: Status::NoFile,
workspace_subscription,
}

View File

@@ -15,8 +15,7 @@ pub enum UserOrAssistant {
pub struct ChatMessage {
id: MessageId,
player: UserOrAssistant,
message: Option<AnyElement>,
tools_used: Option<AnyElement>,
messages: Vec<AnyElement>,
selected: bool,
collapsed: bool,
on_collapse_handle_click: Box<dyn Fn(&ClickEvent, &mut WindowContext) + 'static>,
@@ -26,16 +25,14 @@ impl ChatMessage {
pub fn new(
id: MessageId,
player: UserOrAssistant,
message: Option<AnyElement>,
tools_used: Option<AnyElement>,
messages: Vec<AnyElement>,
collapsed: bool,
on_collapse_handle_click: Box<dyn Fn(&ClickEvent, &mut WindowContext) + 'static>,
) -> Self {
Self {
id,
player,
message,
tools_used,
messages,
selected: false,
collapsed,
on_collapse_handle_click,
@@ -117,21 +114,12 @@ impl RenderOnce for ChatMessage {
.icon_color(Color::Muted)
.on_click(self.on_collapse_handle_click)
.tooltip(|cx| Tooltip::text("Collapse Message", cx)),
), // .child(
// IconButton::new("copy-message", IconName::Copy)
// .icon_color(Color::Muted)
// .icon_size(IconSize::XSmall),
// )
// .child(
// IconButton::new("menu", IconName::Ellipsis)
// .icon_color(Color::Muted)
// .icon_size(IconSize::XSmall),
// ),
),
),
)
.when(self.message.is_some() || self.tools_used.is_some(), |el| {
.when(self.messages.len() > 0, |el| {
el.child(
h_flex().child(
h_flex().w_full().child(
v_flex()
.relative()
.overflow_hidden()
@@ -144,8 +132,7 @@ impl RenderOnce for ChatMessage {
this.bg(background_color)
})
.when(self.collapsed, |this| this.h(collapsed_height))
.children(self.message)
.when_some(self.tools_used, |this, tools_used| this.child(tools_used)),
.children(self.messages),
),
)
})

View File

@@ -48,68 +48,73 @@ impl RenderOnce for Composer {
fn render(mut self, cx: &mut WindowContext) -> impl IntoElement {
let font_size = TextSize::Default.rems(cx);
let line_height = font_size.to_pixels(cx.rem_size()) * 1.3;
let mut editor_border = cx.theme().colors().text;
editor_border.fade_out(0.90);
// Remove the extra 1px added by the border
let padding = Spacing::XLarge.rems(cx) - rems_from_px(1.);
h_flex()
.p(Spacing::Small.rems(cx))
.w_full()
.items_start()
.child(
v_flex().size_full().gap_1().child(
v_flex()
.w_full()
.p_3()
.bg(cx.theme().colors().editor_background)
.rounded_lg()
.child(
v_flex()
.justify_between()
.w_full()
.gap_2()
.child({
let settings = ThemeSettings::get_global(cx);
let text_style = TextStyle {
color: cx.theme().colors().editor_foreground,
font_family: settings.buffer_font.family.clone(),
font_features: settings.buffer_font.features.clone(),
font_size: font_size.into(),
font_weight: FontWeight::NORMAL,
font_style: FontStyle::Normal,
line_height: line_height.into(),
background_color: None,
underline: None,
strikethrough: None,
white_space: WhiteSpace::Normal,
};
v_flex()
.w_full()
.rounded_lg()
.p(padding)
.border_1()
.border_color(editor_border)
.bg(cx.theme().colors().editor_background)
.child(
v_flex()
.justify_between()
.w_full()
.gap_2()
.child({
let settings = ThemeSettings::get_global(cx);
let text_style = TextStyle {
color: cx.theme().colors().editor_foreground,
font_family: settings.buffer_font.family.clone(),
font_features: settings.buffer_font.features.clone(),
font_size: font_size.into(),
font_weight: FontWeight::NORMAL,
font_style: FontStyle::Normal,
line_height: line_height.into(),
background_color: None,
underline: None,
strikethrough: None,
white_space: WhiteSpace::Normal,
};
EditorElement::new(
&self.editor,
EditorStyle {
background: cx.theme().colors().editor_background,
local_player: cx.theme().players().local(),
text: text_style,
..Default::default()
},
EditorElement::new(
&self.editor,
EditorStyle {
background: cx.theme().colors().editor_background,
local_player: cx.theme().players().local(),
text: text_style,
..Default::default()
},
)
})
.child(
h_flex()
.flex_none()
.gap_2()
.justify_between()
.w_full()
.child(
h_flex().gap_1().child(
h_flex()
.gap_2()
.child(self.render_tools(cx))
.child(Divider::vertical())
.child(self.render_attachment_tools(cx)),
),
)
})
.child(
h_flex()
.flex_none()
.gap_2()
.justify_between()
.w_full()
.child(
h_flex().gap_1().child(
h_flex()
.gap_2()
.child(self.render_tools(cx))
.child(Divider::vertical())
.child(self.render_attachment_tools(cx)),
),
)
.child(h_flex().gap_1().child(self.model_selector)),
),
),
),
.child(h_flex().gap_1().child(self.model_selector)),
),
),
)
}
}

View File

@@ -28,8 +28,7 @@ impl Render for ChatMessageStory {
ChatMessage::new(
MessageId(0),
UserOrAssistant::User(Some(user_1.clone())),
Some(div().child("What can I do here?").into_any_element()),
None,
vec![div().child("What can I do here?").into_any_element()],
false,
Box::new(|_, _| {}),
),
@@ -39,8 +38,7 @@ impl Render for ChatMessageStory {
ChatMessage::new(
MessageId(0),
UserOrAssistant::User(Some(user_1.clone())),
Some(div().child("What can I do here?").into_any_element()),
None,
vec![div().child("What can I do here?").into_any_element()],
true,
Box::new(|_, _| {}),
),
@@ -53,8 +51,7 @@ impl Render for ChatMessageStory {
ChatMessage::new(
MessageId(0),
UserOrAssistant::Assistant,
Some(div().child("You can talk to me!").into_any_element()),
None,
vec![div().child("You can talk to me!").into_any_element()],
false,
Box::new(|_, _| {}),
),
@@ -64,8 +61,7 @@ impl Render for ChatMessageStory {
ChatMessage::new(
MessageId(0),
UserOrAssistant::Assistant,
Some(div().child(MULTI_LINE_MESSAGE).into_any_element()),
None,
vec![div().child(MULTI_LINE_MESSAGE).into_any_element()],
true,
Box::new(|_, _| {}),
),
@@ -79,24 +75,21 @@ impl Render for ChatMessageStory {
.child(ChatMessage::new(
MessageId(0),
UserOrAssistant::User(Some(user_1.clone())),
Some(div().child("What is Rust??").into_any_element()),
None,
vec![div().child("What is Rust??").into_any_element()],
false,
Box::new(|_, _| {}),
))
.child(ChatMessage::new(
MessageId(0),
UserOrAssistant::Assistant,
Some(div().child("Rust is a multi-paradigm programming language focused on performance and safety").into_any_element()),
None,
vec![div().child("Rust is a multi-paradigm programming language focused on performance and safety").into_any_element()],
false,
Box::new(|_, _| {}),
))
.child(ChatMessage::new(
MessageId(0),
UserOrAssistant::User(Some(user_1)),
Some(div().child("Sounds pretty cool!").into_any_element()),
None,
vec![div().child("Sounds pretty cool!").into_any_element()],
false,
Box::new(|_, _| {}),
)),

View File

@@ -16,11 +16,14 @@ anyhow.workspace = true
collections.workspace = true
futures.workspace = true
gpui.workspace = true
log.workspace = true
project.workspace = true
repair_json.workspace = true
schemars.workspace = true
serde.workspace = true
serde_json.workspace = true
sum_tree.workspace = true
ui.workspace = true
util.workspace = true
[dev-dependencies]

View File

@@ -1,16 +1,16 @@
# Assistant Tooling
Bringing OpenAI compatible tool calling to GPUI.
Bringing Language Model tool calling to GPUI.
This unlocks:
- **Structured Extraction** of model responses
- **Validation** of model inputs
- **Execution** of chosen toolsn
- **Execution** of chosen tools
## Overview
Language Models can produce structured outputs that are perfect for calling functions. The most famous of these is OpenAI's tool calling. When make a chat completion you can pass a list of tools available to the model. The model will choose `0..n` tools to help them complete a user's task. It's up to _you_ to create the tools that the model can call.
Language Models can produce structured outputs that are perfect for calling functions. The most famous of these is OpenAI's tool calling. When making a chat completion, you can pass a list of tools available to the model. The model will choose `0..n` tools to help it complete a user's task. It's up to _you_ to create the tools that the model can call.
> **User**: "Hey I need help with implementing a collapsible panel in GPUI"
>
@@ -22,187 +22,64 @@ Language Models can produce structured outputs that are perfect for calling func
>
> **Assistant**: "Here are some excerpts from the GPUI codebase that might help you."
This library is designed to facilitate this interaction mode by allowing you to go from `struct` to `tool` with a simple trait, `LanguageModelTool`.
This library is designed to facilitate this interaction mode by allowing you to go from `struct` to `tool` with two simple traits, `LanguageModelTool` and `ToolView`.
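To make that concrete, here is a minimal sketch of a tool built against those two traits, using a hypothetical `EchoTool`. The trait shapes follow the tools elsewhere in this change, so treat it as illustrative rather than canonical:

```rust
use anyhow::Result;
use assistant_tooling::{LanguageModelTool, ProjectContext, ToolView};
use gpui::{prelude::*, Render, Task, View, ViewContext, WindowContext};
use schemars::JsonSchema;
use serde::Deserialize;

// Hypothetical input; the doc comment becomes the field's description in the
// JSON schema shown to the model.
#[derive(Debug, Clone, Deserialize, JsonSchema)]
pub struct EchoInput {
    /// Text to echo back into the conversation.
    text: String,
}

pub struct EchoTool;

pub struct EchoToolView {
    input: Option<EchoInput>,
}

impl LanguageModelTool for EchoTool {
    type View = EchoToolView;

    fn name(&self) -> String {
        "echo".to_string()
    }

    fn description(&self) -> String {
        "Echo the provided text back to the model.".to_string()
    }

    fn view(&self, cx: &mut WindowContext) -> View<Self::View> {
        cx.new_view(|_cx| EchoToolView { input: None })
    }
}

impl Render for EchoToolView {
    fn render(&mut self, _cx: &mut ViewContext<Self>) -> impl IntoElement {
        ui::Label::new("Echoing…")
    }
}

impl ToolView for EchoToolView {
    type Input = EchoInput;
    type SerializedState = ();

    fn generate(&self, _project: &mut ProjectContext, _cx: &mut ViewContext<Self>) -> String {
        // Whatever this returns becomes the tool result the model reads.
        self.input
            .as_ref()
            .map(|input| input.text.clone())
            .unwrap_or_default()
    }

    fn set_input(&mut self, input: Self::Input, cx: &mut ViewContext<Self>) {
        // Called repeatedly as the arguments stream in; re-render each time.
        self.input = Some(input);
        cx.notify();
    }

    fn execute(&mut self, _cx: &mut ViewContext<Self>) -> Task<Result<()>> {
        // Nothing asynchronous to do for an echo.
        Task::ready(Ok(()))
    }

    fn serialize(&self, _cx: &mut ViewContext<Self>) -> Self::SerializedState {}

    fn deserialize(&mut self, _state: Self::SerializedState, _cx: &mut ViewContext<Self>) -> Result<()> {
        Ok(())
    }
}
```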
## Example
Let's expose querying a semantic index directly by the model. First, we'll set up some _necessary_ imports
## Using the Tool Registry
```rust
use anyhow::Result;
use assistant_tooling::{LanguageModelTool, ToolRegistry};
use gpui::{App, AppContext, Task};
use schemars::JsonSchema;
use serde::Deserialize;
use serde_json::json;
```
let mut tool_registry = ToolRegistry::new();
tool_registry
.register(WeatherTool { api_client })
.unwrap(); // You can only register one tool per name
Then we'll define the query structure the model must fill in. This _must_ derive `Deserialize` from `serde` and `JsonSchema` from the `schemars` crate.
```rust
#[derive(Deserialize, JsonSchema)]
struct CodebaseQuery {
query: String,
}
```
After that we can define our tool, with the expectation that it will need a `ProjectIndex` to search against. For this example, the index uses the same interface as `semantic_index::ProjectIndex`.
```rust
struct ProjectIndex {}
impl ProjectIndex {
fn new() -> Self {
ProjectIndex {}
}
fn search(&self, _query: &str, _limit: usize, _cx: &AppContext) -> Task<Result<Vec<String>>> {
// Instead of hooking up a real index, we're going to fake it
if _query.contains("gpui") {
return Task::ready(Ok(vec![r#"// crates/gpui/src/gpui.rs
//! # Welcome to GPUI!
//!
//! GPUI is a hybrid immediate and retained mode, GPU accelerated, UI framework
//! for Rust, designed to support a wide variety of applications
"#
.to_string()]));
}
return Task::ready(Ok(vec![]));
}
}
struct ProjectIndexTool {
project_index: ProjectIndex,
}
```
Now we can implement the `LanguageModelTool` trait for our tool by:
- Defining the `Input` from the model, which is `CodebaseQuery`
- Defining the `Output`
- Implementing the `name` and `description` functions to provide the model information when it's choosing a tool
- Implementing the `execute` function to run the tool
```rust
impl LanguageModelTool for ProjectIndexTool {
type Input = CodebaseQuery;
type Output = String;
fn name(&self) -> String {
"query_codebase".to_string()
}
fn description(&self) -> String {
"Executes a query against the codebase, returning excerpts related to the query".to_string()
}
fn execute(&self, query: Self::Input, cx: &AppContext) -> Task<Result<Self::Output>> {
let results = self.project_index.search(query.query.as_str(), 10, cx);
cx.spawn(|_cx| async move {
let results = results.await?;
if !results.is_empty() {
Ok(results.join("\n"))
} else {
Ok("No results".to_string())
}
})
}
}
```
For the sake of this example, let's look at the types that OpenAI will be passing to us
```rust
// OpenAI definitions, shown here for demonstration
#[derive(Deserialize)]
struct FunctionCall {
name: String,
args: String,
}
#[derive(Deserialize, Eq, PartialEq)]
enum ToolCallType {
#[serde(rename = "function")]
Function,
Other,
}
#[derive(Deserialize, Clone, Debug, Eq, PartialEq, Hash, Ord, PartialOrd)]
struct ToolCallId(String);
#[derive(Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum ToolCall {
Function {
#[allow(dead_code)]
id: ToolCallId,
function: FunctionCall,
},
Other {
#[allow(dead_code)]
id: ToolCallId,
},
}
#[derive(Deserialize)]
struct AssistantMessage {
role: String,
content: Option<String>,
tool_calls: Option<Vec<ToolCall>>,
}
```
When the model wants to call tools, it will pass a list of `ToolCall`s. When those are `function`s that we can handle, we'll pass them to our `ToolRegistry` to get a future that we can await.
```rust
// Inside `fn main()`
App::new().run(|cx: &mut AppContext| {
let tool = ProjectIndexTool {
project_index: ProjectIndex::new(),
};
let mut registry = ToolRegistry::new();
let registered = registry.register(tool);
assert!(registered.is_ok());
```
Let's pretend the model sent us back a message requesting
```rust
let model_response = json!({
    "role": "assistant",
    "tool_calls": [
        {
            "id": "call_1",
            "function": {
                "name": "query_codebase",
                "args": r#"{"query":"GPUI Task background_executor"}"#
            },
            "type": "function"
        }
    ]
});

let completion = cx.update(|cx| {
    CompletionProvider::get(cx).complete(
        model_name,
        messages,
        Vec::new(),
        1.0,
        // The definitions get passed directly to OpenAI when you want
        // the model to be able to call your tool
        tool_registry.definitions(),
    )
});
let message: AssistantMessage = serde_json::from_value(model_response).unwrap();
let mut stream = completion?.await?;
// We know there's a tool call, so let's skip straight to it for this example
let tool_calls = message.tool_calls.as_ref().unwrap();
let tool_call = tool_calls.get(0).unwrap();
let mut message = AssistantMessage::new();

while let Some(delta) = stream.next().await {
    // As messages stream in, you'll get both assistant content
    if let Some(content) = &delta.content {
        message
            .body
            .update(cx, |message, cx| message.append(&content, cx));
    }

    // And tool calls!
    for tool_call_delta in delta.tool_calls {
        let index = tool_call_delta.index as usize;
        if index >= message.tool_calls.len() {
            message.tool_calls.resize_with(index + 1, Default::default);
        }
        let tool_call = &mut message.tool_calls[index];

        // Build up an ID
        if let Some(id) = &tool_call_delta.id {
            tool_call.id.push_str(id);
        }

        tool_registry.update_tool_call(
            tool_call,
            tool_call_delta.name.as_deref(),
            tool_call_delta.arguments.as_deref(),
            cx,
        );
    }
}
```
We can now use our registry to call the tool.
Once the stream of tokens is complete, you can execute the tool call by calling `tool_registry.execute_tool_call(tool_call, cx)`, which returns an `Option<Task<Result<()>>>` (`None` if the call isn't in a state that can be executed).
```rust
if let Some(task) = tool_registry.execute_tool_call(tool_call, cx) {
    cx.spawn(|_cx| async move {
        // The tool's view holds the output; we just wait for it to finish running.
        task.await
    })
    .detach();
}
```
As the tokens stream in and tool calls are executed, your `ToolView` will get updates. Render each tool call by passing that `tool_call` into `tool_registry.render_tool_call(tool_call, cx)`. The final message for the model can be pulled by calling `self.tool_registry.content_for_tool_call(tool_call, &mut project_context, cx)`.
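As a minimal sketch, wiring those two calls up might look like the helpers below. The helper functions and their names are hypothetical (and assume the `gpui` types are in scope); only `render_tool_call` and `content_for_tool_call` come from the registry API above.
```rust
// Render whatever the registry can show for each tool call.
fn render_tool_calls(
    tool_registry: &ToolRegistry,
    tool_calls: &[ToolFunctionCall],
    cx: &mut WindowContext,
) -> Vec<AnyElement> {
    tool_calls
        .iter()
        // `render_tool_call` returns `None` while a call is still initializing.
        .filter_map(|tool_call| tool_registry.render_tool_call(tool_call, cx))
        .collect()
}

// Collect the textual content to send back to the model.
fn content_for_model(
    tool_registry: &ToolRegistry,
    tool_calls: &[ToolFunctionCall],
    project_context: &mut ProjectContext,
    cx: &mut WindowContext,
) -> String {
    tool_calls
        .iter()
        .map(|tool_call| tool_registry.content_for_tool_call(tool_call, project_context, cx))
        .collect::<Vec<_>>()
        .join("\n")
}
```
Joining the per-call content with newlines is just one way to fold the results into the next request; how you stitch it into the conversation is up to your assistant code.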

View File

@@ -2,8 +2,12 @@ mod attachment_registry;
mod project_context;
mod tool_registry;
pub use attachment_registry::{AttachmentRegistry, LanguageModelAttachment, UserAttachment};
pub use attachment_registry::{
AttachmentOutput, AttachmentRegistry, LanguageModelAttachment, SavedUserAttachment,
UserAttachment,
};
pub use project_context::ProjectContext;
pub use tool_registry::{
LanguageModelTool, ToolFunctionCall, ToolFunctionDefinition, ToolOutput, ToolRegistry,
LanguageModelTool, SavedToolFunctionCall, ToolFunctionCall, ToolFunctionDefinition,
ToolRegistry, ToolView,
};

View File

@@ -1,8 +1,10 @@
use crate::{ProjectContext, ToolOutput};
use crate::ProjectContext;
use anyhow::{anyhow, Result};
use collections::HashMap;
use futures::future::join_all;
use gpui::{AnyView, Render, Task, View, WindowContext};
use serde::{de::DeserializeOwned, Deserialize, Serialize};
use serde_json::value::RawValue;
use std::{
any::TypeId,
sync::{
@@ -16,25 +18,39 @@ pub struct AttachmentRegistry {
registered_attachments: HashMap<TypeId, RegisteredAttachment>,
}
pub trait AttachmentOutput {
fn generate(&self, project: &mut ProjectContext, cx: &mut WindowContext) -> String;
}
pub trait LanguageModelAttachment {
type Output: 'static;
type View: Render + ToolOutput;
type Output: DeserializeOwned + Serialize + 'static;
type View: Render + AttachmentOutput;
fn name(&self) -> Arc<str>;
fn run(&self, cx: &mut WindowContext) -> Task<Result<Self::Output>>;
fn view(output: Result<Self::Output>, cx: &mut WindowContext) -> View<Self::View>;
fn view(&self, output: Result<Self::Output>, cx: &mut WindowContext) -> View<Self::View>;
}
/// A collected attachment from running an attachment tool
pub struct UserAttachment {
pub view: AnyView,
name: Arc<str>,
serialized_output: Result<Box<RawValue>, String>,
generate_fn: fn(AnyView, &mut ProjectContext, cx: &mut WindowContext) -> String,
}
#[derive(Serialize, Deserialize)]
pub struct SavedUserAttachment {
name: Arc<str>,
serialized_output: Result<Box<RawValue>, String>,
}
/// Internal representation of an attachment tool to allow us to treat them dynamically
struct RegisteredAttachment {
name: Arc<str>,
enabled: AtomicBool,
call: Box<dyn Fn(&mut WindowContext) -> Task<Result<UserAttachment>>>,
deserialize: Box<dyn Fn(&SavedUserAttachment, &mut WindowContext) -> Result<UserAttachment>>,
}
impl AttachmentRegistry {
@@ -45,24 +61,65 @@ impl AttachmentRegistry {
}
pub fn register<A: LanguageModelAttachment + 'static>(&mut self, attachment: A) {
let call = Box::new(move |cx: &mut WindowContext| {
let result = attachment.run(cx);
let attachment = Arc::new(attachment);
cx.spawn(move |mut cx| async move {
let result: Result<A::Output> = result.await;
let view = cx.update(|cx| A::view(result, cx))?;
let call = Box::new({
let attachment = attachment.clone();
move |cx: &mut WindowContext| {
let result = attachment.run(cx);
let attachment = attachment.clone();
cx.spawn(move |mut cx| async move {
let result: Result<A::Output> = result.await;
let serialized_output =
result
.as_ref()
.map_err(ToString::to_string)
.and_then(|output| {
Ok(RawValue::from_string(
serde_json::to_string(output).map_err(|e| e.to_string())?,
)
.unwrap())
});
let view = cx.update(|cx| attachment.view(result, cx))?;
Ok(UserAttachment {
name: attachment.name(),
view: view.into(),
generate_fn: generate::<A>,
serialized_output,
})
})
}
});
let deserialize = Box::new({
let attachment = attachment.clone();
move |saved_attachment: &SavedUserAttachment, cx: &mut WindowContext| {
let serialized_output = saved_attachment.serialized_output.clone();
let output = match &serialized_output {
Ok(serialized_output) => {
Ok(serde_json::from_str::<A::Output>(serialized_output.get())?)
}
Err(error) => Err(anyhow!("{error}")),
};
let view = attachment.view(output, cx).into();
Ok(UserAttachment {
view: view.into(),
name: saved_attachment.name.clone(),
view,
serialized_output,
generate_fn: generate::<A>,
})
})
}
});
self.registered_attachments.insert(
TypeId::of::<A>(),
RegisteredAttachment {
name: attachment.name(),
call,
deserialize,
enabled: AtomicBool::new(true),
},
);
@@ -134,6 +191,35 @@ impl AttachmentRegistry {
.collect())
})
}
pub fn serialize_user_attachment(
&self,
user_attachment: &UserAttachment,
) -> SavedUserAttachment {
SavedUserAttachment {
name: user_attachment.name.clone(),
serialized_output: user_attachment.serialized_output.clone(),
}
}
pub fn deserialize_user_attachment(
&self,
saved_user_attachment: SavedUserAttachment,
cx: &mut WindowContext,
) -> Result<UserAttachment> {
if let Some(registered_attachment) = self
.registered_attachments
.values()
.find(|attachment| attachment.name == saved_user_attachment.name)
{
(registered_attachment.deserialize)(&saved_user_attachment, cx)
} else {
Err(anyhow!(
"no attachment tool for name {}",
saved_user_attachment.name
))
}
}
}
impl UserAttachment {

View File

@@ -1,41 +1,67 @@
use crate::ProjectContext;
use anyhow::{anyhow, Result};
use gpui::{
div, AnyElement, AnyView, IntoElement, ParentElement, Render, Styled, Task, View, WindowContext,
};
use gpui::{AnyElement, AnyView, IntoElement, Render, Task, View, WindowContext};
use repair_json::repair;
use schemars::{schema::RootSchema, schema_for, JsonSchema};
use serde::Deserialize;
use serde::{de::DeserializeOwned, Deserialize, Serialize};
use serde_json::value::RawValue;
use std::{
any::TypeId,
collections::HashMap,
fmt::Display,
mem,
sync::atomic::{AtomicBool, Ordering::SeqCst},
};
use crate::ProjectContext;
use ui::ViewContext;
pub struct ToolRegistry {
registered_tools: HashMap<String, RegisteredTool>,
}
#[derive(Default, Deserialize)]
#[derive(Default)]
pub struct ToolFunctionCall {
pub id: String,
pub name: String,
pub arguments: String,
#[serde(skip)]
pub result: Option<ToolFunctionCallResult>,
state: ToolFunctionCallState,
}
pub enum ToolFunctionCallResult {
#[derive(Default)]
enum ToolFunctionCallState {
#[default]
Initializing,
NoSuchTool,
ParsingFailed,
Finished {
view: AnyView,
generate_fn: fn(AnyView, &mut ProjectContext, &mut WindowContext) -> String,
},
KnownTool(Box<dyn InternalToolView>),
ExecutedTool(Box<dyn InternalToolView>),
}
#[derive(Clone)]
trait InternalToolView {
fn view(&self) -> AnyView;
fn generate(&self, project: &mut ProjectContext, cx: &mut WindowContext) -> String;
fn try_set_input(&self, input: &str, cx: &mut WindowContext);
fn execute(&self, cx: &mut WindowContext) -> Task<Result<()>>;
fn serialize_output(&self, cx: &mut WindowContext) -> Result<Box<RawValue>>;
fn deserialize_output(&self, raw_value: &RawValue, cx: &mut WindowContext) -> Result<()>;
}
#[derive(Default, Serialize, Deserialize)]
pub struct SavedToolFunctionCall {
id: String,
name: String,
arguments: String,
state: SavedToolFunctionCallState,
}
#[derive(Default, Serialize, Deserialize)]
enum SavedToolFunctionCallState {
#[default]
Initializing,
NoSuchTool,
KnownTool,
ExecutedTool(Box<RawValue>),
}
#[derive(Clone, Debug, PartialEq)]
pub struct ToolFunctionDefinition {
pub name: String,
pub description: String,
@@ -43,14 +69,7 @@ pub struct ToolFunctionDefinition {
}
pub trait LanguageModelTool {
/// The input type that will be passed in to `execute` when the tool is called
/// by the language model.
type Input: for<'de> Deserialize<'de> + JsonSchema;
/// The output returned by executing the tool.
type Output: 'static;
type View: Render + ToolOutput;
type View: ToolView;
/// Returns the name of the tool.
///
@@ -66,7 +85,7 @@ pub trait LanguageModelTool {
/// Returns the OpenAI Function definition for the tool, for direct use with OpenAI's API.
fn definition(&self) -> ToolFunctionDefinition {
let root_schema = schema_for!(Self::Input);
let root_schema = schema_for!(<Self::View as ToolView>::Input);
ToolFunctionDefinition {
name: self.name(),
@@ -75,29 +94,34 @@ pub trait LanguageModelTool {
}
}
/// Executes the tool with the given input.
fn execute(&self, input: &Self::Input, cx: &mut WindowContext) -> Task<Result<Self::Output>>;
fn output_view(
input: Self::Input,
output: Result<Self::Output>,
cx: &mut WindowContext,
) -> View<Self::View>;
fn render_running(_cx: &mut WindowContext) -> impl IntoElement {
div()
}
/// A view of the output of running the tool, for displaying to the user.
fn view(&self, cx: &mut WindowContext) -> View<Self::View>;
}
pub trait ToolOutput: Sized {
fn generate(&self, project: &mut ProjectContext, cx: &mut WindowContext) -> String;
pub trait ToolView: Render {
/// The input type that will be passed in to `execute` when the tool is called
/// by the language model.
type Input: DeserializeOwned + JsonSchema;
/// The output returned by executing the tool.
type SerializedState: DeserializeOwned + Serialize;
fn generate(&self, project: &mut ProjectContext, cx: &mut ViewContext<Self>) -> String;
fn set_input(&mut self, input: Self::Input, cx: &mut ViewContext<Self>);
fn execute(&mut self, cx: &mut ViewContext<Self>) -> Task<Result<()>>;
fn serialize(&self, cx: &mut ViewContext<Self>) -> Self::SerializedState;
fn deserialize(
&mut self,
output: Self::SerializedState,
cx: &mut ViewContext<Self>,
) -> Result<()>;
}
struct RegisteredTool {
enabled: AtomicBool,
type_id: TypeId,
call: Box<dyn Fn(&ToolFunctionCall, &mut WindowContext) -> Task<Result<ToolFunctionCall>>>,
render_running: fn(&mut WindowContext) -> gpui::AnyElement,
build_view: Box<dyn Fn(&mut WindowContext) -> Box<dyn InternalToolView>>,
definition: ToolFunctionDefinition,
}
@@ -134,68 +158,141 @@ impl ToolRegistry {
.collect()
}
pub fn render_tool_call(
pub fn update_tool_call(
&self,
tool_call: &ToolFunctionCall,
call: &mut ToolFunctionCall,
name: Option<&str>,
arguments: Option<&str>,
cx: &mut WindowContext,
) -> AnyElement {
match &tool_call.result {
Some(result) => div()
.p_2()
.child(result.into_any_element(&tool_call.name))
.into_any_element(),
None => self
.registered_tools
.get(&tool_call.name)
.map(|tool| (tool.render_running)(cx))
.unwrap_or_else(|| div().into_any_element()),
) {
if let Some(name) = name {
call.name.push_str(name);
}
if let Some(arguments) = arguments {
if call.arguments.is_empty() {
if let Some(tool) = self.registered_tools.get(&call.name) {
let view = (tool.build_view)(cx);
call.state = ToolFunctionCallState::KnownTool(view);
} else {
call.state = ToolFunctionCallState::NoSuchTool;
}
}
call.arguments.push_str(arguments);
if let ToolFunctionCallState::KnownTool(view) = &call.state {
if let Ok(repaired_arguments) = repair(call.arguments.clone()) {
view.try_set_input(&repaired_arguments, cx)
}
}
}
}
pub fn register<T: 'static + LanguageModelTool>(
&mut self,
tool: T,
pub fn execute_tool_call(
&self,
tool_call: &mut ToolFunctionCall,
cx: &mut WindowContext,
) -> Option<Task<Result<()>>> {
if let ToolFunctionCallState::KnownTool(view) = mem::take(&mut tool_call.state) {
let task = view.execute(cx);
tool_call.state = ToolFunctionCallState::ExecutedTool(view);
Some(task)
} else {
None
}
}
pub fn render_tool_call(
&self,
tool_call: &ToolFunctionCall,
_cx: &mut WindowContext,
) -> Result<()> {
) -> Option<AnyElement> {
match &tool_call.state {
ToolFunctionCallState::NoSuchTool => {
Some(ui::Label::new("No such tool").into_any_element())
}
ToolFunctionCallState::Initializing => None,
ToolFunctionCallState::KnownTool(view) | ToolFunctionCallState::ExecutedTool(view) => {
Some(view.view().into_any_element())
}
}
}
pub fn content_for_tool_call(
&self,
tool_call: &ToolFunctionCall,
project_context: &mut ProjectContext,
cx: &mut WindowContext,
) -> String {
match &tool_call.state {
ToolFunctionCallState::Initializing => String::new(),
ToolFunctionCallState::NoSuchTool => {
format!("No such tool: {}", tool_call.name)
}
ToolFunctionCallState::KnownTool(view) | ToolFunctionCallState::ExecutedTool(view) => {
view.generate(project_context, cx)
}
}
}
pub fn serialize_tool_call(
&self,
call: &ToolFunctionCall,
cx: &mut WindowContext,
) -> Result<SavedToolFunctionCall> {
Ok(SavedToolFunctionCall {
id: call.id.clone(),
name: call.name.clone(),
arguments: call.arguments.clone(),
state: match &call.state {
ToolFunctionCallState::Initializing => SavedToolFunctionCallState::Initializing,
ToolFunctionCallState::NoSuchTool => SavedToolFunctionCallState::NoSuchTool,
ToolFunctionCallState::KnownTool(_) => SavedToolFunctionCallState::KnownTool,
ToolFunctionCallState::ExecutedTool(view) => {
SavedToolFunctionCallState::ExecutedTool(view.serialize_output(cx)?)
}
},
})
}
pub fn deserialize_tool_call(
&self,
call: &SavedToolFunctionCall,
cx: &mut WindowContext,
) -> Result<ToolFunctionCall> {
let Some(tool) = self.registered_tools.get(&call.name) else {
return Err(anyhow!("no such tool {}", call.name));
};
Ok(ToolFunctionCall {
id: call.id.clone(),
name: call.name.clone(),
arguments: call.arguments.clone(),
state: match &call.state {
SavedToolFunctionCallState::Initializing => ToolFunctionCallState::Initializing,
SavedToolFunctionCallState::NoSuchTool => ToolFunctionCallState::NoSuchTool,
SavedToolFunctionCallState::KnownTool => {
log::error!("Deserialized tool that had not executed");
let view = (tool.build_view)(cx);
view.try_set_input(&call.arguments, cx);
ToolFunctionCallState::KnownTool(view)
}
SavedToolFunctionCallState::ExecutedTool(output) => {
let view = (tool.build_view)(cx);
view.try_set_input(&call.arguments, cx);
view.deserialize_output(output, cx)?;
ToolFunctionCallState::ExecutedTool(view)
}
},
})
}
pub fn register<T: 'static + LanguageModelTool>(&mut self, tool: T) -> Result<()> {
let name = tool.name();
let registered_tool = RegisteredTool {
type_id: TypeId::of::<T>(),
definition: tool.definition(),
enabled: AtomicBool::new(true),
call: Box::new(
move |tool_call: &ToolFunctionCall, cx: &mut WindowContext| {
let name = tool_call.name.clone();
let arguments = tool_call.arguments.clone();
let id = tool_call.id.clone();
let Ok(input) = serde_json::from_str::<T::Input>(arguments.as_str()) else {
return Task::ready(Ok(ToolFunctionCall {
id,
name: name.clone(),
arguments,
result: Some(ToolFunctionCallResult::ParsingFailed),
}));
};
let result = tool.execute(&input, cx);
cx.spawn(move |mut cx| async move {
let result: Result<T::Output> = result.await;
let view = cx.update(|cx| T::output_view(input, result, cx))?;
Ok(ToolFunctionCall {
id,
name: name.clone(),
arguments,
result: Some(ToolFunctionCallResult::Finished {
view: view.into(),
generate_fn: generate::<T>,
}),
})
})
},
),
render_running: render_running::<T>,
build_view: Box::new(move |cx: &mut WindowContext| Box::new(tool.view(cx))),
};
let previous = self.registered_tools.insert(name.clone(), registered_tool);
@@ -204,77 +301,40 @@ impl ToolRegistry {
}
return Ok(());
fn render_running<T: LanguageModelTool>(cx: &mut WindowContext) -> AnyElement {
T::render_running(cx).into_any_element()
}
fn generate<T: LanguageModelTool>(
view: AnyView,
project: &mut ProjectContext,
cx: &mut WindowContext,
) -> String {
view.downcast::<T::View>()
.unwrap()
.update(cx, |view, cx| T::View::generate(view, project, cx))
}
}
/// Task yields an error if the window for the given WindowContext is closed before the task completes.
pub fn call(
&self,
tool_call: &ToolFunctionCall,
cx: &mut WindowContext,
) -> Task<Result<ToolFunctionCall>> {
let name = tool_call.name.clone();
let arguments = tool_call.arguments.clone();
let id = tool_call.id.clone();
let tool = match self.registered_tools.get(&name) {
Some(tool) => tool,
None => {
let name = name.clone();
return Task::ready(Ok(ToolFunctionCall {
id,
name: name.clone(),
arguments,
result: Some(ToolFunctionCallResult::NoSuchTool),
}));
}
};
(tool.call)(tool_call, cx)
}
}
impl ToolFunctionCallResult {
pub fn generate(
&self,
name: &String,
project: &mut ProjectContext,
cx: &mut WindowContext,
) -> String {
match self {
ToolFunctionCallResult::NoSuchTool => format!("No tool for {name}"),
ToolFunctionCallResult::ParsingFailed => {
format!("Unable to parse arguments for {name}")
}
ToolFunctionCallResult::Finished { generate_fn, view } => {
(generate_fn)(view.clone(), project, cx)
}
impl<T: ToolView> InternalToolView for View<T> {
fn view(&self) -> AnyView {
self.clone().into()
}
fn generate(&self, project: &mut ProjectContext, cx: &mut WindowContext) -> String {
self.update(cx, |view, cx| view.generate(project, cx))
}
fn try_set_input(&self, input: &str, cx: &mut WindowContext) {
if let Ok(input) = serde_json::from_str::<T::Input>(input) {
self.update(cx, |view, cx| {
view.set_input(input, cx);
cx.notify();
});
}
}
fn into_any_element(&self, name: &String) -> AnyElement {
match self {
ToolFunctionCallResult::NoSuchTool => {
format!("Language Model attempted to call {name}").into_any_element()
}
ToolFunctionCallResult::ParsingFailed => {
format!("Language Model called {name} with bad arguments").into_any_element()
}
ToolFunctionCallResult::Finished { view, .. } => view.clone().into_any_element(),
}
fn execute(&self, cx: &mut WindowContext) -> Task<Result<()>> {
self.update(cx, |view, cx| view.execute(cx))
}
fn serialize_output(&self, cx: &mut WindowContext) -> Result<Box<RawValue>> {
let output = self.update(cx, |view, cx| view.serialize(cx));
Ok(RawValue::from_string(serde_json::to_string(&output)?)?)
}
fn deserialize_output(&self, output: &RawValue, cx: &mut WindowContext) -> Result<()> {
let state = serde_json::from_str::<T::SerializedState>(output.get())?;
self.update(cx, |view, cx| view.deserialize(state, cx))?;
Ok(())
}
}
@@ -293,7 +353,6 @@ mod test {
use super::*;
use gpui::{div, prelude::*, Render, TestAppContext};
use gpui::{EmptyView, View};
use schemars::schema_for;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use serde_json::json;
@@ -304,10 +363,6 @@ mod test {
unit: String,
}
struct WeatherTool {
current_weather: WeatherResult,
}
#[derive(Clone, Serialize, Deserialize, PartialEq, Debug)]
struct WeatherResult {
location: String,
@@ -316,24 +371,81 @@ mod test {
}
struct WeatherView {
result: WeatherResult,
input: Option<WeatherQuery>,
result: Option<WeatherResult>,
// Fake API call
current_weather: WeatherResult,
}
#[derive(Clone, Serialize)]
struct WeatherTool {
current_weather: WeatherResult,
}
impl WeatherView {
fn new(current_weather: WeatherResult) -> Self {
Self {
input: None,
result: None,
current_weather,
}
}
}
impl Render for WeatherView {
fn render(&mut self, _cx: &mut gpui::ViewContext<Self>) -> impl IntoElement {
div().child(format!("temperature: {}", self.result.temperature))
match self.result {
Some(ref result) => div()
.child(format!("temperature: {}", result.temperature))
.into_any_element(),
None => div().child("Calculating weather...").into_any_element(),
}
}
}
impl ToolOutput for WeatherView {
fn generate(&self, _output: &mut ProjectContext, _cx: &mut WindowContext) -> String {
impl ToolView for WeatherView {
type Input = WeatherQuery;
type SerializedState = WeatherResult;
fn generate(&self, _output: &mut ProjectContext, _cx: &mut ViewContext<Self>) -> String {
serde_json::to_string(&self.result).unwrap()
}
fn set_input(&mut self, input: Self::Input, cx: &mut ViewContext<Self>) {
self.input = Some(input);
cx.notify();
}
fn execute(&mut self, _cx: &mut ViewContext<Self>) -> Task<Result<()>> {
let input = self.input.as_ref().unwrap();
let _location = input.location.clone();
let _unit = input.unit.clone();
let weather = self.current_weather.clone();
self.result = Some(weather);
Task::ready(Ok(()))
}
fn serialize(&self, _cx: &mut ViewContext<Self>) -> Self::SerializedState {
self.current_weather.clone()
}
fn deserialize(
&mut self,
output: Self::SerializedState,
_cx: &mut ViewContext<Self>,
) -> Result<()> {
self.current_weather = output;
Ok(())
}
}
impl LanguageModelTool for WeatherTool {
type Input = WeatherQuery;
type Output = WeatherResult;
type View = WeatherView;
fn name(&self) -> String {
@@ -344,88 +456,71 @@ mod test {
"Fetches the current weather for a given location.".to_string()
}
fn execute(
&self,
input: &Self::Input,
_cx: &mut WindowContext,
) -> Task<Result<Self::Output>> {
let _location = input.location.clone();
let _unit = input.unit.clone();
let weather = self.current_weather.clone();
Task::ready(Ok(weather))
}
fn output_view(
_input: Self::Input,
result: Result<Self::Output>,
cx: &mut WindowContext,
) -> View<Self::View> {
cx.new_view(|_cx| {
let result = result.unwrap();
WeatherView { result }
})
fn view(&self, cx: &mut WindowContext) -> View<Self::View> {
cx.new_view(|_cx| WeatherView::new(self.current_weather.clone()))
}
}
#[gpui::test]
async fn test_openai_weather_example(cx: &mut TestAppContext) {
cx.background_executor.run_until_parked();
let (_, cx) = cx.add_window_view(|_cx| EmptyView);
let tool = WeatherTool {
current_weather: WeatherResult {
location: "San Francisco".to_string(),
temperature: 21.0,
unit: "Celsius".to_string(),
},
};
let tools = vec![tool.definition()];
assert_eq!(tools.len(), 1);
let expected = ToolFunctionDefinition {
name: "get_current_weather".to_string(),
description: "Fetches the current weather for a given location.".to_string(),
parameters: schema_for!(WeatherQuery),
};
assert_eq!(tools[0].name, expected.name);
assert_eq!(tools[0].description, expected.description);
let expected_schema = serde_json::to_value(&tools[0].parameters).unwrap();
assert_eq!(
expected_schema,
json!({
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "WeatherQuery",
"type": "object",
"properties": {
"location": {
"type": "string"
},
"unit": {
"type": "string"
}
let mut registry = ToolRegistry::new();
registry
.register(WeatherTool {
current_weather: WeatherResult {
location: "San Francisco".to_string(),
temperature: 21.0,
unit: "Celsius".to_string(),
},
"required": ["location", "unit"]
})
.unwrap();
let definitions = registry.definitions();
assert_eq!(
definitions,
[ToolFunctionDefinition {
name: "get_current_weather".to_string(),
description: "Fetches the current weather for a given location.".to_string(),
parameters: serde_json::from_value(json!({
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "WeatherQuery",
"type": "object",
"properties": {
"location": {
"type": "string"
},
"unit": {
"type": "string"
}
},
"required": ["location", "unit"]
}))
.unwrap(),
}]
);
let args = json!({
"location": "San Francisco",
"unit": "Celsius"
let mut call = ToolFunctionCall {
id: "the-id".to_string(),
name: "get_cur".to_string(),
..Default::default()
};
let task = cx.update(|cx| {
registry.update_tool_call(
&mut call,
Some("rent_weather"),
Some(r#"{"location": "San Francisco","#),
cx,
);
registry.update_tool_call(&mut call, None, Some(r#" "unit": "Celsius"}"#), cx);
registry.execute_tool_call(&mut call, cx).unwrap()
});
task.await.unwrap();
let query: WeatherQuery = serde_json::from_value(args).unwrap();
let result = cx.update(|cx| tool.execute(&query, cx)).await;
assert!(result.is_ok());
let result = result.unwrap();
assert_eq!(result, tool.current_weather);
match &call.state {
ToolFunctionCallState::ExecutedTool(_view) => {}
_ => panic!(),
}
}
}

View File

@@ -18,6 +18,7 @@ client.workspace = true
db.workspace = true
editor.workspace = true
gpui.workspace = true
http.workspace = true
isahc.workspace = true
log.workspace = true
markdown_preview.workspace = true

View File

@@ -20,6 +20,7 @@ use smol::{fs, io::AsyncReadExt};
use settings::{Settings, SettingsSources, SettingsStore};
use smol::{fs::File, process::Command};
use http::{HttpClient, HttpClientWithUrl};
use release_channel::{AppCommitSha, AppVersion, ReleaseChannel};
use std::{
env::consts::{ARCH, OS},
@@ -29,10 +30,7 @@ use std::{
time::Duration,
};
use update_notification::UpdateNotification;
use util::{
http::{HttpClient, HttpClientWithUrl},
ResultExt,
};
use util::ResultExt;
use workspace::notifications::NotificationId;
use workspace::Workspace;
@@ -56,16 +54,22 @@ struct UpdateRequestBody {
telemetry: bool,
}
#[derive(Clone, Copy, PartialEq, Eq)]
#[derive(Clone, PartialEq, Eq)]
pub enum AutoUpdateStatus {
Idle,
Checking,
Downloading,
Installing,
Updated,
Updated { binary_path: PathBuf },
Errored,
}
impl AutoUpdateStatus {
pub fn is_updated(&self) -> bool {
matches!(self, Self::Updated { .. })
}
}
pub struct AutoUpdater {
status: AutoUpdateStatus,
current_version: SemanticVersion,
@@ -306,7 +310,7 @@ impl AutoUpdater {
}
pub fn poll(&mut self, cx: &mut ModelContext<Self>) {
if self.pending_poll.is_some() || self.status == AutoUpdateStatus::Updated {
if self.pending_poll.is_some() || self.status.is_updated() {
return;
}
@@ -328,7 +332,7 @@ impl AutoUpdater {
}
pub fn status(&self) -> AutoUpdateStatus {
self.status
self.status.clone()
}
pub fn dismiss_error(&mut self, cx: &mut ModelContext<Self>) {
@@ -404,6 +408,11 @@ impl AutoUpdater {
cx.notify();
})?;
// We store the path of our current binary, before we install, since installation might
// delete it. Once deleted, it's hard to get the path to our binary on Linux.
// So we cache it here, which allows us to then restart later on.
let binary_path = cx.update(|cx| cx.app_path())??;
match OS {
"macos" => install_release_macos(&temp_dir, downloaded_asset, &cx).await,
"linux" => install_release_linux(&temp_dir, downloaded_asset, &cx).await,
@@ -413,7 +422,7 @@ impl AutoUpdater {
this.update(&mut cx, |this, cx| {
this.set_should_show_update_notification(true, cx)
.detach_and_log_err(cx);
this.status = AutoUpdateStatus::Updated;
this.status = AutoUpdateStatus::Updated { binary_path };
cx.notify();
})?;

View File

@@ -50,3 +50,4 @@ language = { workspace = true, features = ["test-support"] }
live_kit_client = { workspace = true, features = ["test-support"] }
project = { workspace = true, features = ["test-support"] }
util = { workspace = true, features = ["test-support"] }
http = { workspace = true, features = ["test-support"] }

View File

@@ -40,3 +40,4 @@ rpc = { workspace = true, features = ["test-support"] }
client = { workspace = true, features = ["test-support"] }
settings = { workspace = true, features = ["test-support"] }
util = { workspace = true, features = ["test-support"] }
http = { workspace = true, features = ["test-support"] }

View File

@@ -4,9 +4,9 @@ use super::*;
use client::{test::FakeServer, Client, UserStore};
use clock::FakeSystemClock;
use gpui::{AppContext, Context, Model, TestAppContext};
use http::FakeHttpClient;
use rpc::proto::{self};
use settings::SettingsStore;
use util::http::FakeHttpClient;
#[gpui::test]
fn test_update_channels(cx: &mut AppContext) {

View File

@@ -19,11 +19,17 @@ path = "src/main.rs"
[dependencies]
anyhow.workspace = true
clap.workspace = true
libc.workspace = true
ipc-channel = "0.18"
once_cell.workspace = true
release_channel.workspace = true
serde.workspace = true
util.workspace = true
[target.'cfg(target_os = "linux")'.dependencies]
exec.workspace = true
fork.workspace = true
[target.'cfg(target_os = "macos")'.dependencies]
core-foundation.workspace = true
core-services = "0.2"

View File

@@ -13,6 +13,7 @@ pub enum CliRequest {
paths: Vec<String>,
wait: bool,
open_new_workspace: Option<bool>,
dev_server_token: Option<String>,
},
}

View File

@@ -1,17 +1,21 @@
#![cfg_attr(any(target_os = "linux", target_os = "windows"), allow(dead_code))]
use anyhow::{anyhow, Context, Result};
use anyhow::{Context, Result};
use clap::Parser;
use cli::{CliRequest, CliResponse};
use serde::Deserialize;
use cli::{ipc::IpcOneShotServer, CliRequest, CliResponse, IpcHandshake};
use std::{
env,
ffi::OsStr,
fs,
env, fs,
path::{Path, PathBuf},
};
use util::paths::PathLikeWithPosition;
struct Detect;
trait InstalledApp {
fn zed_version_string(&self) -> String;
fn launch(&self, ipc_url: String) -> anyhow::Result<()>;
}
#[derive(Parser, Debug)]
#[command(name = "zed", disable_version_flag = true)]
struct Args {
@@ -33,9 +37,9 @@ struct Args {
/// Print Zed's version and the app path.
#[arg(short, long)]
version: bool,
/// Custom Zed.app path
#[arg(short, long)]
bundle_path: Option<PathBuf>,
/// Custom path to Zed.app or the zed binary
#[arg(long)]
zed: Option<PathBuf>,
/// Run zed in dev-server mode
#[arg(long)]
dev_server_token: Option<String>,
@@ -49,12 +53,6 @@ fn parse_path_with_position(
})
}
#[derive(Debug, Deserialize)]
struct InfoPlist {
#[serde(rename = "CFBundleShortVersionString")]
bundle_short_version_string: String,
}
fn main() -> Result<()> {
// Intercept version designators
#[cfg(target_os = "macos")]
@@ -68,14 +66,10 @@ fn main() -> Result<()> {
}
let args = Args::parse();
let bundle = Bundle::detect(args.bundle_path.as_deref()).context("Bundle detection")?;
if let Some(dev_server_token) = args.dev_server_token {
return bundle.spawn(vec!["--dev-server-token".into(), dev_server_token]);
}
let app = Detect::detect(args.zed.as_deref()).context("Bundle detection")?;
if args.version {
println!("{}", bundle.zed_version_string());
println!("{}", app.zed_version_string());
return Ok(());
}
@@ -101,7 +95,14 @@ fn main() -> Result<()> {
paths.push(canonicalized.to_string(|path| path.display().to_string()))
}
let (tx, rx) = bundle.launch()?;
let (server, server_name) =
IpcOneShotServer::<IpcHandshake>::new().context("Handshake before Zed spawn")?;
let url = format!("zed-cli://{server_name}");
app.launch(url)?;
let (_, handshake) = server.accept().context("Handshake after Zed spawn")?;
let (tx, rx) = (handshake.requests, handshake.responses);
let open_new_workspace = if args.new {
Some(true)
} else if args.add {
@@ -114,6 +115,7 @@ fn main() -> Result<()> {
paths,
wait: args.wait,
open_new_workspace,
dev_server_token: args.dev_server_token,
})?;
while let Ok(response) = rx.recv() {
@@ -128,60 +130,125 @@ fn main() -> Result<()> {
Ok(())
}
enum Bundle {
App {
app_bundle: PathBuf,
plist: InfoPlist,
},
LocalPath {
executable: PathBuf,
plist: InfoPlist,
},
}
fn locate_bundle() -> Result<PathBuf> {
let cli_path = std::env::current_exe()?.canonicalize()?;
let mut app_path = cli_path.clone();
while app_path.extension() != Some(OsStr::new("app")) {
if !app_path.pop() {
return Err(anyhow!("cannot find app bundle containing {:?}", cli_path));
}
}
Ok(app_path)
}
#[cfg(target_os = "linux")]
mod linux {
use std::path::Path;
use std::{
env,
ffi::OsString,
io,
os::{
linux::net::SocketAddrExt,
unix::net::{SocketAddr, UnixDatagram},
},
path::{Path, PathBuf},
process, thread,
time::Duration,
};
use cli::{CliRequest, CliResponse};
use ipc_channel::ipc::{IpcReceiver, IpcSender};
use anyhow::anyhow;
use cli::FORCE_CLI_MODE_ENV_VAR_NAME;
use fork::Fork;
use once_cell::sync::Lazy;
use crate::{Bundle, InfoPlist};
use crate::{Detect, InstalledApp};
impl Bundle {
pub fn detect(_args_bundle_path: Option<&Path>) -> anyhow::Result<Self> {
unimplemented!()
static RELEASE_CHANNEL: Lazy<String> =
Lazy::new(|| include_str!("../../zed/RELEASE_CHANNEL").trim().to_string());
struct App(PathBuf);
impl Detect {
pub fn detect(path: Option<&Path>) -> anyhow::Result<impl InstalledApp> {
let path = if let Some(path) = path {
path.to_path_buf().canonicalize()
} else {
let cli = env::current_exe()?;
let dir = cli
.parent()
.ok_or_else(|| anyhow!("no parent path for cli"))?;
match dir.join("zed").canonicalize() {
Ok(path) => Ok(path),
// development builds have Zed capitalized
Err(e) => match dir.join("Zed").canonicalize() {
Ok(path) => Ok(path),
Err(_) => Err(e),
},
}
}?;
Ok(App(path))
}
}
impl InstalledApp for App {
fn zed_version_string(&self) -> String {
format!(
"Zed {}{} {}",
if *RELEASE_CHANNEL == "stable" {
"".to_string()
} else {
format!(" {} ", *RELEASE_CHANNEL)
},
option_env!("RELEASE_VERSION").unwrap_or_default(),
self.0.display(),
)
}
pub fn plist(&self) -> &InfoPlist {
unimplemented!()
fn launch(&self, ipc_url: String) -> anyhow::Result<()> {
let uid: u32 = unsafe { libc::getuid() };
let sock_addr =
SocketAddr::from_abstract_name(format!("zed-{}-{}", *RELEASE_CHANNEL, uid))?;
let sock = UnixDatagram::unbound()?;
if sock.connect_addr(&sock_addr).is_err() {
self.boot_background(ipc_url)?;
} else {
sock.send(ipc_url.as_bytes())?;
}
Ok(())
}
}
impl App {
fn boot_background(&self, ipc_url: String) -> anyhow::Result<()> {
let path = &self.0;
match fork::fork() {
Ok(Fork::Parent(_)) => Ok(()),
Ok(Fork::Child) => {
std::env::set_var(FORCE_CLI_MODE_ENV_VAR_NAME, "");
if let Err(_) = fork::setsid() {
eprintln!("failed to setsid: {}", std::io::Error::last_os_error());
process::exit(1);
}
if std::env::var("ZED_KEEP_FD").is_err() {
if let Err(_) = fork::close_fd() {
eprintln!("failed to close_fd: {}", std::io::Error::last_os_error());
}
}
let error =
exec::execvp(path.clone(), &[path.as_os_str(), &OsString::from(ipc_url)]);
// if exec succeeded, we never get here.
eprintln!("failed to exec {:?}: {}", path, error);
process::exit(1)
}
Err(_) => Err(anyhow!(io::Error::last_os_error())),
}
}
pub fn path(&self) -> &Path {
unimplemented!()
}
pub fn launch(&self) -> anyhow::Result<(IpcSender<CliRequest>, IpcReceiver<CliResponse>)> {
unimplemented!()
}
pub fn spawn(&self, _args: Vec<String>) -> anyhow::Result<()> {
unimplemented!()
}
pub fn zed_version_string(&self) -> String {
unimplemented!()
fn wait_for_socket(
&self,
sock_addr: &SocketAddr,
sock: &mut UnixDatagram,
) -> Result<(), std::io::Error> {
for _ in 0..100 {
thread::sleep(Duration::from_millis(10));
if sock.connect_addr(&sock_addr).is_ok() {
return Ok(());
}
}
sock.connect_addr(&sock_addr)
}
}
}
@@ -189,59 +256,79 @@ mod linux {
// todo("windows")
#[cfg(target_os = "windows")]
mod windows {
use crate::{Detect, InstalledApp};
use std::path::Path;
use cli::{CliRequest, CliResponse};
use ipc_channel::ipc::{IpcReceiver, IpcSender};
use crate::{Bundle, InfoPlist};
impl Bundle {
pub fn detect(_args_bundle_path: Option<&Path>) -> anyhow::Result<Self> {
struct App;
impl InstalledApp for App {
fn zed_version_string(&self) -> String {
unimplemented!()
}
pub fn plist(&self) -> &InfoPlist {
fn launch(&self, _ipc_url: String) -> anyhow::Result<()> {
unimplemented!()
}
}
pub fn path(&self) -> &Path {
unimplemented!()
}
pub fn launch(&self) -> anyhow::Result<(IpcSender<CliRequest>, IpcReceiver<CliResponse>)> {
unimplemented!()
}
pub fn spawn(&self, _args: Vec<String>) -> anyhow::Result<()> {
unimplemented!()
}
pub fn zed_version_string(&self) -> String {
unimplemented!()
impl Detect {
pub fn detect(_path: Option<&Path>) -> anyhow::Result<impl InstalledApp> {
Ok(App)
}
}
}
#[cfg(target_os = "macos")]
mod mac_os {
use anyhow::{Context, Result};
use anyhow::{anyhow, Context, Result};
use core_foundation::{
array::{CFArray, CFIndex},
string::kCFStringEncodingUTF8,
url::{CFURLCreateWithBytes, CFURL},
};
use core_services::{kLSLaunchDefaults, LSLaunchURLSpec, LSOpenFromURLSpec, TCFType};
use std::{fs, path::Path, process::Command, ptr};
use serde::Deserialize;
use std::{
ffi::OsStr,
fs,
path::{Path, PathBuf},
process::Command,
ptr,
};
use cli::{CliRequest, CliResponse, IpcHandshake, FORCE_CLI_MODE_ENV_VAR_NAME};
use ipc_channel::ipc::{IpcOneShotServer, IpcReceiver, IpcSender};
use cli::FORCE_CLI_MODE_ENV_VAR_NAME;
use crate::{locate_bundle, Bundle, InfoPlist};
use crate::{Detect, InstalledApp};
impl Bundle {
pub fn detect(args_bundle_path: Option<&Path>) -> anyhow::Result<Self> {
let bundle_path = if let Some(bundle_path) = args_bundle_path {
#[derive(Debug, Deserialize)]
struct InfoPlist {
#[serde(rename = "CFBundleShortVersionString")]
bundle_short_version_string: String,
}
enum Bundle {
App {
app_bundle: PathBuf,
plist: InfoPlist,
},
LocalPath {
executable: PathBuf,
plist: InfoPlist,
},
}
fn locate_bundle() -> Result<PathBuf> {
let cli_path = std::env::current_exe()?.canonicalize()?;
let mut app_path = cli_path.clone();
while app_path.extension() != Some(OsStr::new("app")) {
if !app_path.pop() {
return Err(anyhow!("cannot find app bundle containing {:?}", cli_path));
}
}
Ok(app_path)
}
impl Detect {
pub fn detect(path: Option<&Path>) -> anyhow::Result<impl InstalledApp> {
let bundle_path = if let Some(bundle_path) = path {
bundle_path
.canonicalize()
.with_context(|| format!("Args bundle path {bundle_path:?} canonicalization"))?
@@ -256,7 +343,7 @@ mod mac_os {
plist::from_file::<_, InfoPlist>(&plist_path).with_context(|| {
format!("Reading *.app bundle plist file at {plist_path:?}")
})?;
Ok(Self::App {
Ok(Bundle::App {
app_bundle: bundle_path,
plist,
})
@@ -271,42 +358,27 @@ mod mac_os {
plist::from_file::<_, InfoPlist>(&plist_path).with_context(|| {
format!("Reading dev bundle plist file at {plist_path:?}")
})?;
Ok(Self::LocalPath {
Ok(Bundle::LocalPath {
executable: bundle_path,
plist,
})
}
}
}
}
fn plist(&self) -> &InfoPlist {
match self {
Self::App { plist, .. } => plist,
Self::LocalPath { plist, .. } => plist,
}
impl InstalledApp for Bundle {
fn zed_version_string(&self) -> String {
let is_dev = matches!(self, Self::LocalPath { .. });
format!(
"Zed {}{} {}",
self.plist().bundle_short_version_string,
if is_dev { " (dev)" } else { "" },
self.path().display(),
)
}
fn path(&self) -> &Path {
match self {
Self::App { app_bundle, .. } => app_bundle,
Self::LocalPath { executable, .. } => executable,
}
}
pub fn spawn(&self, args: Vec<String>) -> Result<()> {
let path = match self {
Self::App { app_bundle, .. } => app_bundle.join("Contents/MacOS/zed"),
Self::LocalPath { executable, .. } => executable.clone(),
};
Command::new(path).args(args).status()?;
Ok(())
}
pub fn launch(&self) -> anyhow::Result<(IpcSender<CliRequest>, IpcReceiver<CliResponse>)> {
let (server, server_name) =
IpcOneShotServer::<IpcHandshake>::new().context("Handshake before Zed spawn")?;
let url = format!("zed-cli://{server_name}");
fn launch(&self, url: String) -> anyhow::Result<()> {
match self {
Self::App { app_bundle, .. } => {
let app_path = app_bundle;
@@ -368,18 +440,23 @@ mod mac_os {
}
}
let (_, handshake) = server.accept().context("Handshake after Zed spawn")?;
Ok((handshake.requests, handshake.responses))
Ok(())
}
}
impl Bundle {
fn plist(&self) -> &InfoPlist {
match self {
Self::App { plist, .. } => plist,
Self::LocalPath { plist, .. } => plist,
}
}
pub fn zed_version_string(&self) -> String {
let is_dev = matches!(self, Self::LocalPath { .. });
format!(
"Zed {}{} {}",
self.plist().bundle_short_version_string,
if is_dev { " (dev)" } else { "" },
self.path().display(),
)
fn path(&self) -> &Path {
match self {
Self::App { app_bundle, .. } => app_bundle,
Self::LocalPath { executable, .. } => executable,
}
}
}

View File

@@ -16,39 +16,40 @@ doctest = false
test-support = ["clock/test-support", "collections/test-support", "gpui/test-support", "rpc/test-support"]
[dependencies]
chrono = { workspace = true, features = ["serde"] }
clock.workspace = true
collections.workspace = true
gpui.workspace = true
util.workspace = true
release_channel.workspace = true
rpc.workspace = true
text.workspace = true
settings.workspace = true
feature_flags.workspace = true
anyhow.workspace = true
async-recursion = "0.3"
async-tungstenite = { version = "0.16", features = ["async-std", "async-native-tls"] }
async-native-tls = { version = "0.5.0", features = ["vendored"] }
chrono = { workspace = true, features = ["serde"] }
clock.workspace = true
collections.workspace = true
feature_flags.workspace = true
futures.workspace = true
gpui.workspace = true
http.workspace = true
lazy_static.workspace = true
log.workspace = true
once_cell = "1.19.0"
once_cell.workspace = true
parking_lot.workspace = true
postage.workspace = true
rand.workspace = true
release_channel.workspace = true
rpc.workspace = true
schemars.workspace = true
serde.workspace = true
serde_json.workspace = true
settings.workspace = true
sha2.workspace = true
smol.workspace = true
sysinfo.workspace = true
telemetry_events.workspace = true
tempfile.workspace = true
text.workspace = true
thiserror.workspace = true
time.workspace = true
tiny_http = "0.8"
url.workspace = true
util.workspace = true
[dev-dependencies]
clock = { workspace = true, features = ["test-support"] }
@@ -57,3 +58,11 @@ gpui = { workspace = true, features = ["test-support"] }
rpc = { workspace = true, features = ["test-support"] }
settings = { workspace = true, features = ["test-support"] }
util = { workspace = true, features = ["test-support"] }
http = { workspace = true, features = ["test-support"] }
[target.'cfg(target_os = "linux")'.dependencies]
async-native-tls = {"version" = "0.5.0", features = ["vendored"]}
# This is an indirect dependency of async-tungstenite that is included
# here so we can vendor libssl with the feature flag.
[package.metadata.cargo-machete]
ignored = ["async-native-tls"]

View File

@@ -20,6 +20,7 @@ use gpui::{
actions, AnyModel, AnyWeakModel, AppContext, AsyncAppContext, BorrowAppContext, Global, Model,
Task, WeakModel,
};
use http::{HttpClient, HttpClientWithUrl};
use lazy_static::lazy_static;
use parking_lot::RwLock;
use postage::watch;
@@ -30,6 +31,7 @@ use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use settings::{Settings, SettingsSources, SettingsStore};
use std::fmt;
use std::pin::Pin;
use std::{
any::TypeId,
convert::TryFrom,
@@ -46,7 +48,6 @@ use std::{
use telemetry::Telemetry;
use thiserror::Error;
use url::Url;
use util::http::{HttpClient, HttpClientWithUrl};
use util::{ResultExt, TryFutureExt};
pub use rpc::*;
@@ -65,6 +66,13 @@ impl fmt::Display for DevServerToken {
lazy_static! {
static ref ZED_SERVER_URL: Option<String> = std::env::var("ZED_SERVER_URL").ok();
static ref ZED_RPC_URL: Option<String> = std::env::var("ZED_RPC_URL").ok();
/// An environment variable whose presence indicates that the development auth
/// provider should be used.
///
/// Only works in development. Setting this environment variable in other release
/// channels is a no-op.
pub static ref ZED_DEVELOPMENT_AUTH: bool =
std::env::var("ZED_DEVELOPMENT_AUTH").map_or(false, |value| !value.is_empty());
pub static ref IMPERSONATE_LOGIN: Option<String> = std::env::var("ZED_IMPERSONATE")
.ok()
.and_then(|s| if s.is_empty() { None } else { Some(s) });
@@ -161,6 +169,7 @@ pub struct Client {
peer: Arc<Peer>,
http: Arc<HttpClientWithUrl>,
telemetry: Arc<Telemetry>,
credentials_provider: Arc<dyn CredentialsProvider + Send + Sync + 'static>,
state: RwLock<ClientState>,
#[allow(clippy::type_complexity)]
@@ -195,7 +204,7 @@ pub enum EstablishConnectionError {
#[error("{0}")]
Other(#[from] anyhow::Error),
#[error("{0}")]
Http(#[from] util::http::Error),
Http(#[from] http::Error),
#[error("{0}")]
Io(#[from] std::io::Error),
#[error("{0}")]
@@ -298,6 +307,32 @@ impl Credentials {
}
}
/// A provider for [`Credentials`].
///
/// Used to abstract over reading and writing credentials to some form of
/// persistence (like the system keychain).
trait CredentialsProvider {
/// Reads the credentials from the provider.
fn read_credentials<'a>(
&'a self,
cx: &'a AsyncAppContext,
) -> Pin<Box<dyn Future<Output = Option<Credentials>> + 'a>>;
/// Writes the credentials to the provider.
fn write_credentials<'a>(
&'a self,
user_id: u64,
access_token: String,
cx: &'a AsyncAppContext,
) -> Pin<Box<dyn Future<Output = Result<()>> + 'a>>;
/// Deletes the credentials from the provider.
fn delete_credentials<'a>(
&'a self,
cx: &'a AsyncAppContext,
) -> Pin<Box<dyn Future<Output = Result<()>> + 'a>>;
}
impl Default for ClientState {
fn default() -> Self {
Self {
@@ -443,11 +478,27 @@ impl Client {
http: Arc<HttpClientWithUrl>,
cx: &mut AppContext,
) -> Arc<Self> {
let use_zed_development_auth = match ReleaseChannel::try_global(cx) {
Some(ReleaseChannel::Dev) => *ZED_DEVELOPMENT_AUTH,
Some(ReleaseChannel::Nightly | ReleaseChannel::Preview | ReleaseChannel::Stable)
| None => false,
};
let credentials_provider: Arc<dyn CredentialsProvider + Send + Sync + 'static> =
if use_zed_development_auth {
Arc::new(DevelopmentCredentialsProvider {
path: util::paths::CONFIG_DIR.join("development_auth"),
})
} else {
Arc::new(KeychainCredentialsProvider)
};
Arc::new(Self {
id: AtomicU64::new(0),
peer: Peer::new(0),
telemetry: Telemetry::new(clock, http.clone(), cx),
http,
credentials_provider,
state: Default::default(),
#[cfg(any(test, feature = "test-support"))]
@@ -763,8 +814,11 @@ impl Client {
}
}
pub async fn has_keychain_credentials(&self, cx: &AsyncAppContext) -> bool {
read_credentials_from_keychain(cx).await.is_some()
pub async fn has_credentials(&self, cx: &AsyncAppContext) -> bool {
self.credentials_provider
.read_credentials(cx)
.await
.is_some()
}
pub fn set_dev_server_token(&self, token: DevServerToken) -> &Self {
@@ -775,7 +829,7 @@ impl Client {
#[async_recursion(?Send)]
pub async fn authenticate_and_connect(
self: &Arc<Self>,
try_keychain: bool,
try_provider: bool,
cx: &AsyncAppContext,
) -> anyhow::Result<()> {
let was_disconnected = match *self.status().borrow() {
@@ -796,12 +850,13 @@ impl Client {
self.set_status(Status::Reauthenticating, cx)
}
let mut read_from_keychain = false;
let mut read_from_provider = false;
let mut credentials = self.state.read().credentials.clone();
if credentials.is_none() && try_keychain {
credentials = read_credentials_from_keychain(cx).await;
read_from_keychain = credentials.is_some();
if credentials.is_none() && try_provider {
credentials = self.credentials_provider.read_credentials(cx).await;
read_from_provider = credentials.is_some();
}
if credentials.is_none() {
let mut status_rx = self.status();
let _ = status_rx.next().await;
@@ -838,9 +893,9 @@ impl Client {
match connection {
Ok(conn) => {
self.state.write().credentials = Some(credentials.clone());
if !read_from_keychain && IMPERSONATE_LOGIN.is_none() {
if !read_from_provider && IMPERSONATE_LOGIN.is_none() {
if let Credentials::User{user_id, access_token} = credentials {
write_credentials_to_keychain(user_id, access_token, cx).await.log_err();
self.credentials_provider.write_credentials(user_id, access_token, cx).await.log_err();
}
}
@@ -854,8 +909,8 @@ impl Client {
}
Err(EstablishConnectionError::Unauthorized) => {
self.state.write().credentials.take();
if read_from_keychain {
delete_credentials_from_keychain(cx).await.log_err();
if read_from_provider {
self.credentials_provider.delete_credentials(cx).await.log_err();
self.set_status(Status::SignedOut, cx);
self.authenticate_and_connect(false, cx).await
} else {
@@ -1264,8 +1319,11 @@ impl Client {
self.state.write().credentials = None;
self.disconnect(&cx);
if self.has_keychain_credentials(cx).await {
delete_credentials_from_keychain(cx).await.log_err();
if self.has_credentials(cx).await {
self.credentials_provider
.delete_credentials(cx)
.await
.log_err();
}
}
@@ -1465,41 +1523,128 @@ impl Client {
}
}
async fn read_credentials_from_keychain(cx: &AsyncAppContext) -> Option<Credentials> {
if IMPERSONATE_LOGIN.is_some() {
return None;
}
let (user_id, access_token) = cx
.update(|cx| cx.read_credentials(&ClientSettings::get_global(cx).server_url))
.log_err()?
.await
.log_err()??;
Some(Credentials::User {
user_id: user_id.parse().ok()?,
access_token: String::from_utf8(access_token).ok()?,
})
}
async fn write_credentials_to_keychain(
#[derive(Serialize, Deserialize)]
struct DevelopmentCredentials {
user_id: u64,
access_token: String,
cx: &AsyncAppContext,
) -> Result<()> {
cx.update(move |cx| {
cx.write_credentials(
&ClientSettings::get_global(cx).server_url,
&user_id.to_string(),
access_token.as_bytes(),
)
})?
.await
}
async fn delete_credentials_from_keychain(cx: &AsyncAppContext) -> Result<()> {
cx.update(move |cx| cx.delete_credentials(&ClientSettings::get_global(cx).server_url))?
.await
/// A credentials provider that stores credentials in a local file.
///
/// This MUST only be used in development, as this is not a secure way of storing
/// credentials on user machines.
///
/// Its existence is purely to work around the annoyance of having to constantly
/// re-allow access to the system keychain when developing Zed.
struct DevelopmentCredentialsProvider {
path: PathBuf,
}
impl CredentialsProvider for DevelopmentCredentialsProvider {
fn read_credentials<'a>(
&'a self,
_cx: &'a AsyncAppContext,
) -> Pin<Box<dyn Future<Output = Option<Credentials>> + 'a>> {
async move {
if IMPERSONATE_LOGIN.is_some() {
return None;
}
let json = std::fs::read(&self.path).log_err()?;
let credentials: DevelopmentCredentials = serde_json::from_slice(&json).log_err()?;
Some(Credentials::User {
user_id: credentials.user_id,
access_token: credentials.access_token,
})
}
.boxed_local()
}
fn write_credentials<'a>(
&'a self,
user_id: u64,
access_token: String,
_cx: &'a AsyncAppContext,
) -> Pin<Box<dyn Future<Output = Result<()>> + 'a>> {
async move {
let json = serde_json::to_string(&DevelopmentCredentials {
user_id,
access_token,
})?;
std::fs::write(&self.path, json)?;
Ok(())
}
.boxed_local()
}
fn delete_credentials<'a>(
&'a self,
_cx: &'a AsyncAppContext,
) -> Pin<Box<dyn Future<Output = Result<()>> + 'a>> {
async move { Ok(std::fs::remove_file(&self.path)?) }.boxed_local()
}
}
/// A credentials provider that stores credentials in the system keychain.
struct KeychainCredentialsProvider;
impl CredentialsProvider for KeychainCredentialsProvider {
fn read_credentials<'a>(
&'a self,
cx: &'a AsyncAppContext,
) -> Pin<Box<dyn Future<Output = Option<Credentials>> + 'a>> {
async move {
if IMPERSONATE_LOGIN.is_some() {
return None;
}
let (user_id, access_token) = cx
.update(|cx| cx.read_credentials(&ClientSettings::get_global(cx).server_url))
.log_err()?
.await
.log_err()??;
Some(Credentials::User {
user_id: user_id.parse().ok()?,
access_token: String::from_utf8(access_token).ok()?,
})
}
.boxed_local()
}
fn write_credentials<'a>(
&'a self,
user_id: u64,
access_token: String,
cx: &'a AsyncAppContext,
) -> Pin<Box<dyn Future<Output = Result<()>> + 'a>> {
async move {
cx.update(move |cx| {
cx.write_credentials(
&ClientSettings::get_global(cx).server_url,
&user_id.to_string(),
access_token.as_bytes(),
)
})?
.await
}
.boxed_local()
}
fn delete_credentials<'a>(
&'a self,
cx: &'a AsyncAppContext,
) -> Pin<Box<dyn Future<Output = Result<()>> + 'a>> {
async move {
cx.update(move |cx| cx.delete_credentials(&ClientSettings::get_global(cx).server_url))?
.await
}
.boxed_local()
}
}
/// prefix for the zed:// url scheme
@@ -1534,10 +1679,10 @@ mod tests {
use clock::FakeSystemClock;
use gpui::{BackgroundExecutor, Context, TestAppContext};
use http::FakeHttpClient;
use parking_lot::Mutex;
use settings::SettingsStore;
use std::future;
use util::http::FakeHttpClient;
#[gpui::test(iterations = 10)]
async fn test_reconnection(cx: &mut TestAppContext) {

View File

@@ -0,0 +1 @@

View File

@@ -5,6 +5,7 @@ use chrono::{DateTime, Utc};
use clock::SystemClock;
use futures::Future;
use gpui::{AppContext, AppMetadata, BackgroundExecutor, Task};
use http::{self, HttpClient, HttpClientWithUrl, Method};
use once_cell::sync::Lazy;
use parking_lot::Mutex;
use release_channel::ReleaseChannel;
@@ -19,7 +20,6 @@ use telemetry_events::{
SettingEvent,
};
use tempfile::NamedTempFile;
use util::http::{self, HttpClient, HttpClientWithUrl, Method};
#[cfg(not(debug_assertions))]
use util::ResultExt;
use util::TryFutureExt;
@@ -261,11 +261,15 @@ impl Telemetry {
conversation_id: Option<String>,
kind: AssistantKind,
model: String,
response_latency: Option<Duration>,
error_message: Option<String>,
) {
let event = Event::Assistant(AssistantEvent {
conversation_id,
kind,
model: model.to_string(),
response_latency,
error_message,
});
self.report_event(event)
@@ -497,7 +501,7 @@ mod tests {
use chrono::TimeZone;
use clock::FakeSystemClock;
use gpui::TestAppContext;
use util::http::FakeHttpClient;
use http::FakeHttpClient;
#[gpui::test]
fn test_telemetry_flush_on_max_queue_size(cx: &mut TestAppContext) {

View File

@@ -35,6 +35,7 @@ envy = "0.4.2"
futures.workspace = true
google_ai.workspace = true
hex.workspace = true
http.workspace = true
live_kit_server.workspace = true
log.workspace = true
nanoid.workspace = true
@@ -90,6 +91,7 @@ language = { workspace = true, features = ["test-support"] }
live_kit_client = { workspace = true, features = ["test-support"] }
lsp = { workspace = true, features = ["test-support"] }
menu.workspace = true
multi_buffer = { workspace = true, features = ["test-support"] }
node_runtime.workspace = true
notifications = { workspace = true, features = ["test-support"] }
pretty_assertions.workspace = true

View File

@@ -785,6 +785,8 @@ pub struct AssistantEventRow {
conversation_id: String,
kind: String,
model: String,
response_latency_in_ms: Option<i64>,
error_message: Option<String>,
}
impl AssistantEventRow {
@@ -811,6 +813,10 @@ impl AssistantEventRow {
conversation_id: event.conversation_id.unwrap_or_default(),
kind: event.kind.to_string(),
model: event.model,
response_latency_in_ms: event
.response_latency
.map(|latency| latency.as_millis() as i64),
error_message: event.error_message,
}
}
}

View File

@@ -415,7 +415,7 @@ impl Database {
if is_serialization_error(error) && prev_attempt_count < SLEEPS.len() {
let base_delay = SLEEPS[prev_attempt_count];
let randomized_delay = base_delay * self.rng.lock().await.gen_range(0.5..=2.0);
log::info!(
log::warn!(
"retrying transaction after serialization error. delay: {} ms.",
randomized_delay
);

View File

@@ -130,13 +130,21 @@ impl Database {
.await
}
pub async fn delete_project(&self, project_id: ProjectId) -> Result<()> {
self.weak_transaction(|tx| async move {
project::Entity::delete_by_id(project_id).exec(&*tx).await?;
Ok(())
})
.await
}
/// Unshares the given project.
pub async fn unshare_project(
&self,
project_id: ProjectId,
connection: ConnectionId,
user_id: Option<UserId>,
) -> Result<TransactionGuard<(Option<proto::Room>, Vec<ConnectionId>)>> {
) -> Result<TransactionGuard<(bool, Option<proto::Room>, Vec<ConnectionId>)>> {
self.project_transaction(project_id, |tx| async move {
let guest_connection_ids = self.project_guest_connection_ids(project_id, &tx).await?;
let project = project::Entity::find_by_id(project_id)
@@ -149,10 +157,7 @@ impl Database {
None
};
if project.host_connection()? == connection {
project::Entity::delete(project.into_active_model())
.exec(&*tx)
.await?;
return Ok((room, guest_connection_ids));
return Ok((true, room, guest_connection_ids));
}
if let Some(dev_server_project_id) = project.dev_server_project_id {
if let Some(user_id) = user_id {
@@ -169,7 +174,7 @@ impl Database {
})
.exec(&*tx)
.await?;
return Ok((room, guest_connection_ids));
return Ok((false, room, guest_connection_ids));
}
}

View File

@@ -42,6 +42,7 @@ use futures::{
stream::FuturesUnordered,
FutureExt, SinkExt, StreamExt, TryStreamExt,
};
use http::IsahcHttpClient;
use prometheus::{register_int_gauge, IntGauge};
use rpc::{
proto::{
@@ -73,7 +74,6 @@ use tracing::{
field::{self},
info_span, instrument, Instrument,
};
use util::http::IsahcHttpClient;
use self::connection_pool::VersionedMessage;
@@ -2032,23 +2032,34 @@ async fn unshare_project_internal(
user_id: Option<UserId>,
session: &Session,
) -> Result<()> {
let (room, guest_connection_ids) = &*session
.db()
.await
.unshare_project(project_id, connection_id, user_id)
.await?;
let delete = {
let room_guard = session
.db()
.await
.unshare_project(project_id, connection_id, user_id)
.await?;
let message = proto::UnshareProject {
project_id: project_id.to_proto(),
let (delete, room, guest_connection_ids) = &*room_guard;
let message = proto::UnshareProject {
project_id: project_id.to_proto(),
};
broadcast(
Some(connection_id),
guest_connection_ids.iter().copied(),
|conn_id| session.peer.send(conn_id, message.clone()),
);
if let Some(room) = room {
room_updated(room, &session.peer);
}
*delete
};
broadcast(
Some(connection_id),
guest_connection_ids.iter().copied(),
|conn_id| session.peer.send(conn_id, message.clone()),
);
if let Some(room) = room {
room_updated(room, &session.peer);
if delete {
let db = session.db().await;
db.delete_project(project_id).await?;
}
Ok(())
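
The restructuring above computes the delete flag inside an inner block, so the TransactionGuard returned by unshare_project has been dropped by the time delete_project runs in its own transaction. A minimal sketch of that scoping pattern, with a hypothetical guard type standing in for the real one:

    // Hypothetical stand-in for TransactionGuard: dropping it releases the transaction.
    struct Guard {
        delete: bool,
        guest_connection_ids: Vec<u32>,
    }

    impl Drop for Guard {
        fn drop(&mut self) {
            println!("transaction released");
        }
    }

    fn unshare(acquire_guard: impl FnOnce() -> Guard) {
        let delete = {
            let guard = acquire_guard();
            // Broadcast to guests while the guard (and its transaction) is still alive.
            for connection_id in &guard.guest_connection_ids {
                println!("notify connection {connection_id}");
            }
            guard.delete
        }; // `guard` is dropped here, ending the first transaction.
        if delete {
            // Now a follow-up operation can run in a separate transaction.
            println!("deleting project");
        }
    }
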
@@ -4333,6 +4344,7 @@ async fn complete_with_open_ai(
OPEN_AI_API_URL,
&api_key,
crate::ai::language_model_request_to_open_ai(request)?,
None,
)
.await
.context("open_ai::stream_completion request failed within collab")?;
@@ -4477,8 +4489,8 @@ async fn complete_with_anthropic(
.collect();
let mut stream = anthropic::stream_completion(
session.http_client.clone(),
"https://api.anthropic.com",
session.http_client.as_ref(),
anthropic::ANTHROPIC_API_URL,
&api_key,
anthropic::Request {
model,
@@ -4487,6 +4499,7 @@ async fn complete_with_anthropic(
system: system_message,
max_tokens: 4092,
},
None,
)
.await?;

View File

@@ -9,6 +9,7 @@ use editor::{
ConfirmCodeAction, ConfirmCompletion, ConfirmRename, ContextMenuFirst, Redo, Rename,
RevertSelectedHunks, ToggleCodeActions, Undo,
},
display_map::DisplayRow,
test::{
editor_hunks,
editor_test_context::{AssertionContextManager, EditorTestContext},
@@ -24,6 +25,7 @@ use language::{
language_settings::{AllLanguageSettings, InlayHintSettings},
FakeLspAdapter,
};
use multi_buffer::MultiBufferRow;
use project::{
project_settings::{InlineBlameSettings, ProjectSettings},
SERVER_PROGRESS_DEBOUNCE_TIMEOUT,
@@ -2110,21 +2112,34 @@ struct Row10;"#};
let snapshot = editor.snapshot(cx);
let all_hunks = editor_hunks(editor, &snapshot, cx);
let all_expanded_hunks = expanded_hunks(&editor, &snapshot, cx);
assert_eq!(
expanded_hunks_background_highlights(editor, &snapshot),
Vec::new(),
);
assert_eq!(expanded_hunks_background_highlights(editor, cx), Vec::new());
assert_eq!(
all_hunks,
vec![
("".to_string(), DiffHunkStatus::Added, 1..3),
("struct Row2;\n".to_string(), DiffHunkStatus::Removed, 4..4),
("struct Row5;\n".to_string(), DiffHunkStatus::Modified, 6..7),
("struct Row8;\n".to_string(), DiffHunkStatus::Removed, 9..9),
(
"".to_string(),
DiffHunkStatus::Added,
DisplayRow(1)..DisplayRow(3)
),
(
"struct Row2;\n".to_string(),
DiffHunkStatus::Removed,
DisplayRow(4)..DisplayRow(4)
),
(
"struct Row5;\n".to_string(),
DiffHunkStatus::Modified,
DisplayRow(6)..DisplayRow(7)
),
(
"struct Row8;\n".to_string(),
DiffHunkStatus::Removed,
DisplayRow(9)..DisplayRow(9)
),
(
"struct Row10;".to_string(),
DiffHunkStatus::Modified,
10..10,
DisplayRow(10)..DisplayRow(10),
),
]
);
@@ -2135,24 +2150,36 @@ struct Row10;"#};
let all_hunks = editor_hunks(editor, &snapshot, cx);
let all_expanded_hunks = expanded_hunks(&editor, &snapshot, cx);
assert_eq!(
expanded_hunks_background_highlights(editor, &snapshot),
vec![1..3, 8..9],
expanded_hunks_background_highlights(editor, cx),
vec![DisplayRow(1)..=DisplayRow(2), DisplayRow(8)..=DisplayRow(8)],
);
assert_eq!(
all_hunks,
vec![
("".to_string(), DiffHunkStatus::Added, 1..3),
("struct Row2;\n".to_string(), DiffHunkStatus::Removed, 5..5),
("struct Row5;\n".to_string(), DiffHunkStatus::Modified, 8..9),
(
"".to_string(),
DiffHunkStatus::Added,
DisplayRow(1)..DisplayRow(3)
),
(
"struct Row2;\n".to_string(),
DiffHunkStatus::Removed,
DisplayRow(5)..DisplayRow(5)
),
(
"struct Row5;\n".to_string(),
DiffHunkStatus::Modified,
DisplayRow(8)..DisplayRow(9)
),
(
"struct Row8;\n".to_string(),
DiffHunkStatus::Removed,
12..12
DisplayRow(12)..DisplayRow(12)
),
(
"struct Row10;".to_string(),
DiffHunkStatus::Modified,
13..13,
DisplayRow(13)..DisplayRow(13),
),
]
);
@@ -2170,16 +2197,13 @@ struct Row10;"#};
let snapshot = editor.snapshot(cx);
let all_hunks = editor_hunks(editor, &snapshot, cx);
let all_expanded_hunks = expanded_hunks(&editor, &snapshot, cx);
assert_eq!(
expanded_hunks_background_highlights(editor, &snapshot),
Vec::new(),
);
assert_eq!(expanded_hunks_background_highlights(editor, cx), Vec::new());
assert_eq!(
all_hunks,
vec![(
"struct Row10;".to_string(),
DiffHunkStatus::Modified,
10..10,
DisplayRow(10)..DisplayRow(10),
)]
);
assert_eq!(all_expanded_hunks, Vec::new());
@@ -2189,15 +2213,15 @@ struct Row10;"#};
let all_hunks = editor_hunks(editor, &snapshot, cx);
let all_expanded_hunks = expanded_hunks(&editor, &snapshot, cx);
assert_eq!(
expanded_hunks_background_highlights(editor, &snapshot),
Vec::new(),
expanded_hunks_background_highlights(editor, cx),
vec![DisplayRow(5)..=DisplayRow(5)]
);
assert_eq!(
all_hunks,
vec![(
"struct Row10;".to_string(),
DiffHunkStatus::Modified,
10..10,
DisplayRow(10)..DisplayRow(10),
)]
);
assert_eq!(all_expanded_hunks, Vec::new());
@@ -2336,7 +2360,7 @@ async fn test_git_blame_is_forwarded(cx_a: &mut TestAppContext, cx_b: &mut TestA
let blame = editor_b.blame().expect("editor_b should have blame now");
let entries = blame.update(cx, |blame, cx| {
blame
.blame_for_rows((0..4).map(Some), cx)
.blame_for_rows((0..4).map(MultiBufferRow).map(Some), cx)
.collect::<Vec<_>>()
});
@@ -2375,7 +2399,7 @@ async fn test_git_blame_is_forwarded(cx_a: &mut TestAppContext, cx_b: &mut TestA
let blame = editor_b.blame().expect("editor_b should have blame now");
let entries = blame.update(cx, |blame, cx| {
blame
.blame_for_rows((0..4).map(Some), cx)
.blame_for_rows((0..4).map(MultiBufferRow).map(Some), cx)
.collect::<Vec<_>>()
});
@@ -2402,7 +2426,7 @@ async fn test_git_blame_is_forwarded(cx_a: &mut TestAppContext, cx_b: &mut TestA
let blame = editor_b.blame().expect("editor_b should have blame now");
let entries = blame.update(cx, |blame, cx| {
blame
.blame_for_rows((0..4).map(Some), cx)
.blame_for_rows((0..4).map(MultiBufferRow).map(Some), cx)
.collect::<Vec<_>>()
});

View File

@@ -19,6 +19,7 @@ use fs::FakeFs;
use futures::{channel::oneshot, StreamExt as _};
use git::GitHostingProviderRegistry;
use gpui::{BackgroundExecutor, Context, Model, Task, TestAppContext, View, VisualTestContext};
use http::FakeHttpClient;
use language::LanguageRegistry;
use node_runtime::FakeNodeRuntime;
use notifications::NotificationStore;
@@ -41,7 +42,6 @@ use std::{
Arc,
},
};
use util::http::FakeHttpClient;
use workspace::{Workspace, WorkspaceId, WorkspaceStore};
pub struct TestServer {
@@ -428,8 +428,10 @@ impl TestServer {
node_runtime: app_state.node_runtime.clone(),
},
cx,
);
});
)
})
.await
.unwrap();
TestClient {
app_state,

View File

@@ -25,6 +25,7 @@ test-support = [
"settings/test-support",
"util/test-support",
"workspace/test-support",
"http/test-support",
]
[dependencies]
@@ -84,4 +85,5 @@ rpc = { workspace = true, features = ["test-support"] }
settings = { workspace = true, features = ["test-support"] }
tree-sitter-markdown.workspace = true
util = { workspace = true, features = ["test-support"] }
http = { workspace = true, features = ["test-support"] }
workspace = { workspace = true, features = ["test-support"] }

View File

@@ -557,11 +557,12 @@ mod tests {
use client::{Client, User, UserStore};
use clock::FakeSystemClock;
use gpui::TestAppContext;
use http::FakeHttpClient;
use language::{Language, LanguageConfig};
use project::Project;
use rpc::proto;
use settings::SettingsStore;
use util::{http::FakeHttpClient, test::marked_text_ranges};
use util::test::marked_text_ranges;
#[gpui::test]
async fn test_message_editor(cx: &mut TestAppContext) {

View File

@@ -58,7 +58,7 @@ impl Render for CollabTitlebarItem {
let project_id = self.project.read(cx).remote_id();
let workspace = self.workspace.upgrade();
TitleBar::new("collab-titlebar")
TitleBar::new("collab-titlebar", Box::new(workspace::CloseWindow))
// note: on windows titlebar behaviour is handled by the platform implementation
.when(cfg!(not(windows)), |this| {
this.on_click(|event, cx| {
@@ -73,7 +73,8 @@ impl Render for CollabTitlebarItem {
.gap_1()
.children(self.render_project_host(cx))
.child(self.render_project_name(cx))
.children(self.render_project_branch(cx)),
.children(self.render_project_branch(cx))
.on_mouse_move(|_, cx| cx.stop_propagation()),
)
.child(
h_flex()
@@ -105,6 +106,7 @@ impl Render for CollabTitlebarItem {
this.children(current_user_face_pile.map(|face_pile| {
v_flex()
.on_mouse_move(|_, cx| cx.stop_propagation())
.child(face_pile)
.child(render_color_ribbon(player_colors.local().cursor))
}))
@@ -167,6 +169,7 @@ impl Render for CollabTitlebarItem {
h_flex()
.gap_1()
.pr_1()
.on_mouse_move(|_, cx| cx.stop_propagation())
.when_some(room, |this, room| {
let room = room.read(cx);
let project = self.project.read(cx);
@@ -677,7 +680,7 @@ impl CollabTitlebarItem {
client::Status::UpgradeRequired => {
let auto_updater = auto_update::AutoUpdater::get(cx);
let label = match auto_updater.map(|auto_update| auto_update.read(cx).status()) {
Some(AutoUpdateStatus::Updated) => "Please restart Zed to Collaborate",
Some(AutoUpdateStatus::Updated { .. }) => "Please restart Zed to Collaborate",
Some(AutoUpdateStatus::Installing)
| Some(AutoUpdateStatus::Downloading)
| Some(AutoUpdateStatus::Checking) => "Updating...",
@@ -691,7 +694,7 @@ impl CollabTitlebarItem {
.label_size(LabelSize::Small)
.on_click(|_, cx| {
if let Some(auto_updater) = auto_update::AutoUpdater::get(cx) {
if auto_updater.read(cx).status() == AutoUpdateStatus::Updated {
if auto_updater.read(cx).status().is_updated() {
workspace::restart(&Default::default(), cx);
return;
}

View File

@@ -14,7 +14,7 @@ pub use collab_panel::CollabPanel;
pub use collab_titlebar_item::CollabTitlebarItem;
use gpui::{
actions, point, AppContext, DevicePixels, Pixels, PlatformDisplay, Size, Task,
WindowBackgroundAppearance, WindowContext, WindowKind, WindowOptions,
WindowBackgroundAppearance, WindowBounds, WindowContext, WindowKind, WindowOptions,
};
use panel_settings::MessageEditorSettings;
pub use panel_settings::{
@@ -117,14 +117,13 @@ fn notification_window_options(
let app_id = ReleaseChannel::global(cx).app_id();
WindowOptions {
bounds: Some(bounds),
window_bounds: Some(WindowBounds::Windowed(bounds)),
titlebar: None,
focus: false,
show: true,
kind: WindowKind::PopUp,
is_movable: false,
display_id: Some(screen.id()),
fullscreen: false,
window_background: WindowBackgroundAppearance::default(),
app_id: Some(app_id.to_owned()),
}

View File

@@ -26,7 +26,7 @@ impl CollabNotification {
}
impl ParentElement for CollabNotification {
fn extend(&mut self, elements: impl Iterator<Item = AnyElement>) {
fn extend(&mut self, elements: impl IntoIterator<Item = AnyElement>) {
self.children.extend(elements)
}
}
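
Widening the parameter from impl Iterator to impl IntoIterator lets callers pass collections such as a Vec directly, while iterator chains keep working because every Iterator is also an IntoIterator. A small sketch of the difference, using String children instead of AnyElement:

    #[derive(Default)]
    struct Container {
        children: Vec<String>,
    }

    impl Container {
        // Accepts a Vec, an array, an iterator chain, and so on.
        fn extend(&mut self, elements: impl IntoIterator<Item = String>) {
            self.children.extend(elements)
        }
    }

    fn main() {
        let mut container = Container::default();
        // A Vec works directly; with `impl Iterator` this would need `.into_iter()`.
        container.extend(vec!["a".to_string(), "b".to_string()]);
        // An adapter chain still works, because iterators implement IntoIterator too.
        container.extend(["c", "d"].iter().map(|s| s.to_string()));
    }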

View File

@@ -32,6 +32,7 @@ command_palette_hooks.workspace = true
editor.workspace = true
futures.workspace = true
gpui.workspace = true
http.workspace = true
language.workspace = true
lsp.workspace = true
menu.workspace = true
@@ -63,3 +64,4 @@ rpc = { workspace = true, features = ["test-support"] }
settings = { workspace = true, features = ["test-support"] }
theme = { workspace = true, features = ["test-support"] }
util = { workspace = true, features = ["test-support"] }
http = { workspace = true, features = ["test-support"] }

View File

@@ -12,6 +12,8 @@ use gpui::{
actions, AppContext, AsyncAppContext, Context, Entity, EntityId, EventEmitter, Global, Model,
ModelContext, Task, WeakModel,
};
use http::github::latest_github_release;
use http::HttpClient;
use language::{
language_settings::{all_language_settings, language_settings, InlineCompletionProvider},
point_from_lsp, point_to_lsp, Anchor, Bias, Buffer, BufferSnapshot, Language, PointUtf16,
@@ -31,9 +33,7 @@ use std::{
path::{Path, PathBuf},
sync::Arc,
};
use util::{
fs::remove_matching, github::latest_github_release, http::HttpClient, maybe, paths, ResultExt,
};
use util::{fs::remove_matching, maybe, paths, ResultExt};
pub use copilot_completion_provider::CopilotCompletionProvider;
pub use sign_in::CopilotCodeVerification;
@@ -393,7 +393,7 @@ impl Copilot {
Default::default(),
cx.to_async(),
);
let http = util::http::FakeHttpClient::create(|_| async { unreachable!() });
let http = http::FakeHttpClient::create(|_| async { unreachable!() });
let node_runtime = FakeNodeRuntime::new();
let this = cx.new_model(|cx| Self {
server_id: LanguageServerId(0),
@@ -429,11 +429,17 @@ impl Copilot {
env: None,
};
let root_path = if cfg!(target_os = "windows") {
Path::new("C:/")
} else {
Path::new("/")
};
let server = LanguageServer::new(
Arc::new(Mutex::new(None)),
new_server_id,
binary,
Path::new("/"),
root_path,
None,
cx.clone(),
)?;
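
The change above picks a platform-appropriate root path for the fake language server instead of hard-coding "/", presumably because "/" is not a usable root path on Windows. A tiny sketch of the same cfg! conditional (the paths are illustrative only):

    use std::path::Path;

    fn fake_server_root() -> &'static Path {
        // `cfg!` expands to a boolean constant: both branches must compile,
        // but only the one matching the target OS is ever taken.
        if cfg!(target_os = "windows") {
            Path::new("C:/")
        } else {
            Path::new("/")
        }
    }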

View File

@@ -215,12 +215,12 @@ impl InlineCompletionProvider for CopilotCompletionProvider {
}
}
fn active_completion_text(
&self,
fn active_completion_text<'a>(
&'a self,
buffer: &Model<Buffer>,
cursor_position: language::Anchor,
cx: &AppContext,
) -> Option<&str> {
cx: &'a AppContext,
) -> Option<&'a str> {
let buffer_id = buffer.entity_id();
let buffer = buffer.read(cx);
let completion = self.active_completion()?;
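
The signature change above names the lifetime 'a and applies it to both &self and cx. With the elided form, Option<&str> could only borrow from &self (the elision rules tie the output lifetime to the &self parameter), so text reached through cx could not be returned. A small, self-contained illustration of the same idea with hypothetical stand-in types:

    struct Registry {
        texts: Vec<String>,
    }

    struct Provider {
        active_index: Option<usize>,
    }

    impl Provider {
        // With `fn active_text(&self, registry: &Registry) -> Option<&str>`, elision
        // would tie the return value to `&self`, and borrowing from `registry` would
        // not compile; the explicit `'a` allows either source.
        fn active_text<'a>(&'a self, registry: &'a Registry) -> Option<&'a str> {
            let index = self.active_index?;
            registry.texts.get(index).map(String::as_str)
        }
    }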

View File

@@ -0,0 +1 @@
../../LICENSE-GPL

View File

@@ -1,7 +1,7 @@
use super::*;
use collections::HashMap;
use editor::{
display_map::{BlockContext, TransformBlock},
display_map::{BlockContext, DisplayRow, TransformBlock},
DisplayPoint, GutterDimensions,
};
use gpui::{px, AvailableSpace, Stateful, TestAppContext, VisualTestContext};
@@ -158,11 +158,11 @@ async fn test_diagnostics(cx: &mut TestAppContext) {
assert_eq!(
editor_blocks(&editor, cx),
[
(0, "path header block".into()),
(2, "diagnostic header".into()),
(15, "collapsed context".into()),
(16, "diagnostic header".into()),
(25, "collapsed context".into()),
(DisplayRow(0), "path header block".into()),
(DisplayRow(2), "diagnostic header".into()),
(DisplayRow(15), "collapsed context".into()),
(DisplayRow(16), "diagnostic header".into()),
(DisplayRow(25), "collapsed context".into()),
]
);
assert_eq!(
@@ -210,7 +210,7 @@ async fn test_diagnostics(cx: &mut TestAppContext) {
editor.update(cx, |editor, cx| {
assert_eq!(
editor.selections.display_ranges(cx),
[DisplayPoint::new(12, 6)..DisplayPoint::new(12, 6)]
[DisplayPoint::new(DisplayRow(12), 6)..DisplayPoint::new(DisplayRow(12), 6)]
);
});
@@ -243,13 +243,13 @@ async fn test_diagnostics(cx: &mut TestAppContext) {
assert_eq!(
editor_blocks(&editor, cx),
[
(0, "path header block".into()),
(2, "diagnostic header".into()),
(7, "path header block".into()),
(9, "diagnostic header".into()),
(22, "collapsed context".into()),
(23, "diagnostic header".into()),
(32, "collapsed context".into()),
(DisplayRow(0), "path header block".into()),
(DisplayRow(2), "diagnostic header".into()),
(DisplayRow(7), "path header block".into()),
(DisplayRow(9), "diagnostic header".into()),
(DisplayRow(22), "collapsed context".into()),
(DisplayRow(23), "diagnostic header".into()),
(DisplayRow(32), "collapsed context".into()),
]
);
@@ -309,7 +309,7 @@ async fn test_diagnostics(cx: &mut TestAppContext) {
editor.update(cx, |editor, cx| {
assert_eq!(
editor.selections.display_ranges(cx),
[DisplayPoint::new(19, 6)..DisplayPoint::new(19, 6)]
[DisplayPoint::new(DisplayRow(19), 6)..DisplayPoint::new(DisplayRow(19), 6)]
);
});
@@ -355,15 +355,15 @@ async fn test_diagnostics(cx: &mut TestAppContext) {
assert_eq!(
editor_blocks(&editor, cx),
[
(0, "path header block".into()),
(2, "diagnostic header".into()),
(7, "collapsed context".into()),
(8, "diagnostic header".into()),
(13, "path header block".into()),
(15, "diagnostic header".into()),
(28, "collapsed context".into()),
(29, "diagnostic header".into()),
(38, "collapsed context".into()),
(DisplayRow(0), "path header block".into()),
(DisplayRow(2), "diagnostic header".into()),
(DisplayRow(7), "collapsed context".into()),
(DisplayRow(8), "diagnostic header".into()),
(DisplayRow(13), "path header block".into()),
(DisplayRow(15), "diagnostic header".into()),
(DisplayRow(28), "collapsed context".into()),
(DisplayRow(29), "diagnostic header".into()),
(DisplayRow(38), "collapsed context".into()),
]
);
@@ -493,8 +493,8 @@ async fn test_diagnostics_multiple_servers(cx: &mut TestAppContext) {
assert_eq!(
editor_blocks(&editor, cx),
[
(0, "path header block".into()),
(2, "diagnostic header".into()),
(DisplayRow(0), "path header block".into()),
(DisplayRow(2), "diagnostic header".into()),
]
);
assert_eq!(
@@ -539,10 +539,10 @@ async fn test_diagnostics_multiple_servers(cx: &mut TestAppContext) {
assert_eq!(
editor_blocks(&editor, cx),
[
(0, "path header block".into()),
(2, "diagnostic header".into()),
(6, "collapsed context".into()),
(7, "diagnostic header".into()),
(DisplayRow(0), "path header block".into()),
(DisplayRow(2), "diagnostic header".into()),
(DisplayRow(6), "collapsed context".into()),
(DisplayRow(7), "diagnostic header".into()),
]
);
assert_eq!(
@@ -605,10 +605,10 @@ async fn test_diagnostics_multiple_servers(cx: &mut TestAppContext) {
assert_eq!(
editor_blocks(&editor, cx),
[
(0, "path header block".into()),
(2, "diagnostic header".into()),
(7, "collapsed context".into()),
(8, "diagnostic header".into()),
(DisplayRow(0), "path header block".into()),
(DisplayRow(2), "diagnostic header".into()),
(DisplayRow(7), "collapsed context".into()),
(DisplayRow(8), "diagnostic header".into()),
]
);
assert_eq!(
@@ -661,10 +661,10 @@ async fn test_diagnostics_multiple_servers(cx: &mut TestAppContext) {
assert_eq!(
editor_blocks(&editor, cx),
[
(0, "path header block".into()),
(2, "diagnostic header".into()),
(7, "collapsed context".into()),
(8, "diagnostic header".into()),
(DisplayRow(0), "path header block".into()),
(DisplayRow(2), "diagnostic header".into()),
(DisplayRow(7), "collapsed context".into()),
(DisplayRow(8), "diagnostic header".into()),
]
);
assert_eq!(
@@ -958,14 +958,17 @@ fn random_diagnostic(
}
}
fn editor_blocks(editor: &View<Editor>, cx: &mut VisualTestContext) -> Vec<(u32, SharedString)> {
fn editor_blocks(
editor: &View<Editor>,
cx: &mut VisualTestContext,
) -> Vec<(DisplayRow, SharedString)> {
let mut blocks = Vec::new();
cx.draw(gpui::Point::default(), AvailableSpace::min_size(), |cx| {
editor.update(cx, |editor, cx| {
let snapshot = editor.snapshot(cx);
blocks.extend(
snapshot
.blocks_in_range(0..snapshot.max_point().row())
.blocks_in_range(DisplayRow(0)..snapshot.max_point().row())
.enumerate()
.filter_map(|(ix, (row, block))| {
let name: SharedString = match block {

View File

@@ -39,6 +39,7 @@ futures.workspace = true
fuzzy.workspace = true
git.workspace = true
gpui.workspace = true
http.workspace = true
indoc.workspace = true
itertools.workspace = true
language.workspace = true
@@ -91,3 +92,4 @@ tree-sitter-typescript.workspace = true
unindent.workspace = true
util = { workspace = true, features = ["test-support"] }
workspace = { workspace = true, features = ["test-support"] }
http = { workspace = true, features = ["test-support"] }

View File

@@ -52,14 +52,9 @@ pub struct SelectToEndOfLine {
#[derive(PartialEq, Clone, Deserialize, Default)]
pub struct ToggleCodeActions {
// Display row from which the action was deployed.
#[serde(default)]
pub deployed_from_indicator: Option<u32>,
}
#[derive(PartialEq, Clone, Deserialize, Default)]
pub struct ToggleTestRunner {
#[serde(default)]
pub deployed_from_row: Option<u32>,
pub deployed_from_indicator: Option<DisplayRow>,
}
#[derive(PartialEq, Clone, Deserialize, Default)]
@@ -82,12 +77,12 @@ pub struct ToggleComments {
#[derive(PartialEq, Clone, Deserialize, Default)]
pub struct FoldAt {
pub buffer_row: u32,
pub buffer_row: MultiBufferRow,
}
#[derive(PartialEq, Clone, Deserialize, Default)]
pub struct UnfoldAt {
pub buffer_row: u32,
pub buffer_row: MultiBufferRow,
}
#[derive(PartialEq, Clone, Deserialize, Default)]

View File

@@ -128,8 +128,7 @@ impl Render for BlameEntryTooltip {
let author_email = self.blame_entry.author_mail.clone();
let pretty_commit_id = format!("{}", self.blame_entry.sha);
let short_commit_id = pretty_commit_id.chars().take(6).collect::<String>();
let short_commit_id = self.blame_entry.sha.display_short();
let absolute_timestamp = blame_entry_absolute_timestamp(&self.blame_entry, cx);
let message = self

View File

@@ -23,8 +23,8 @@ mod inlay_map;
mod tab_map;
mod wrap_map;
use crate::EditorStyle;
use crate::{hover_links::InlayHighlight, movement::TextLayoutDetails, InlayId};
use crate::{EditorStyle, RowExt};
pub use block_map::{BlockMap, BlockPoint};
use collections::{HashMap, HashSet};
use fold_map::FoldMap;
@@ -34,7 +34,11 @@ use language::{
language_settings::language_settings, OffsetUtf16, Point, Subscription as BufferSubscription,
};
use lsp::DiagnosticSeverity;
use multi_buffer::{Anchor, AnchorRangeExt, MultiBuffer, MultiBufferSnapshot, ToOffset, ToPoint};
use multi_buffer::{
Anchor, AnchorRangeExt, MultiBuffer, MultiBufferPoint, MultiBufferRow, MultiBufferSnapshot,
ToOffset, ToPoint,
};
use serde::Deserialize;
use std::{any::TypeId, borrow::Cow, fmt::Debug, num::NonZeroU32, ops::Range, sync::Arc};
use sum_tree::{Bias, TreeMap};
use tab_map::TabMap;
@@ -42,10 +46,11 @@ use tab_map::TabMap;
use wrap_map::WrapMap;
pub use block_map::{
BlockBufferRows as DisplayBufferRows, BlockChunks as DisplayChunks, BlockContext,
BlockDisposition, BlockId, BlockProperties, BlockStyle, RenderBlock, TransformBlock,
BlockBufferRows, BlockChunks as DisplayChunks, BlockContext, BlockDisposition, BlockId,
BlockProperties, BlockStyle, RenderBlock, TransformBlock,
};
use self::block_map::BlockRow;
pub use self::fold_map::{Fold, FoldId, FoldPoint};
pub use self::inlay_map::{InlayOffset, InlayPoint};
pub(crate) use inlay_map::Inlay;
@@ -382,15 +387,20 @@ impl DisplaySnapshot {
self.buffer_snapshot.len() == 0
}
pub fn buffer_rows(&self, start_row: u32) -> DisplayBufferRows {
self.block_snapshot.buffer_rows(start_row)
pub fn buffer_rows(
&self,
start_row: DisplayRow,
) -> impl Iterator<Item = Option<MultiBufferRow>> + '_ {
self.block_snapshot
.buffer_rows(BlockRow(start_row.0))
.map(|row| row.map(|row| MultiBufferRow(row.0)))
}
pub fn max_buffer_row(&self) -> u32 {
pub fn max_buffer_row(&self) -> MultiBufferRow {
self.buffer_snapshot.max_buffer_row()
}
pub fn prev_line_boundary(&self, mut point: Point) -> (Point, DisplayPoint) {
pub fn prev_line_boundary(&self, mut point: MultiBufferPoint) -> (Point, DisplayPoint) {
loop {
let mut inlay_point = self.inlay_snapshot.to_inlay_point(point);
let mut fold_point = self.fold_snapshot.to_fold_point(inlay_point, Bias::Left);
@@ -408,7 +418,7 @@ impl DisplaySnapshot {
}
}
pub fn next_line_boundary(&self, mut point: Point) -> (Point, DisplayPoint) {
pub fn next_line_boundary(&self, mut point: MultiBufferPoint) -> (Point, DisplayPoint) {
loop {
let mut inlay_point = self.inlay_snapshot.to_inlay_point(point);
let mut fold_point = self.fold_snapshot.to_fold_point(inlay_point, Bias::Right);
@@ -429,13 +439,14 @@ impl DisplaySnapshot {
// used by line_mode selections and tries to match vim behaviour
pub fn expand_to_line(&self, range: Range<Point>) -> Range<Point> {
let new_start = if range.start.row == 0 {
Point::new(0, 0)
} else if range.start.row == self.max_buffer_row()
|| (range.end.column > 0 && range.end.row == self.max_buffer_row())
MultiBufferPoint::new(0, 0)
} else if range.start.row == self.max_buffer_row().0
|| (range.end.column > 0 && range.end.row == self.max_buffer_row().0)
{
Point::new(
MultiBufferPoint::new(
range.start.row - 1,
self.buffer_snapshot.line_len(range.start.row - 1),
self.buffer_snapshot
.line_len(MultiBufferRow(range.start.row - 1)),
)
} else {
self.prev_line_boundary(range.start).0
@@ -443,9 +454,9 @@ impl DisplaySnapshot {
let new_end = if range.end.column == 0 {
range.end
} else if range.end.row < self.max_buffer_row() {
} else if range.end.row < self.max_buffer_row().0 {
self.buffer_snapshot
.clip_point(Point::new(range.end.row + 1, 0), Bias::Left)
.clip_point(MultiBufferPoint::new(range.end.row + 1, 0), Bias::Left)
} else {
self.buffer_snapshot.max_point()
};
@@ -453,7 +464,7 @@ impl DisplaySnapshot {
new_start..new_end
}
fn point_to_display_point(&self, point: Point, bias: Bias) -> DisplayPoint {
fn point_to_display_point(&self, point: MultiBufferPoint, bias: Bias) -> DisplayPoint {
let inlay_point = self.inlay_snapshot.to_inlay_point(point);
let fold_point = self.fold_snapshot.to_fold_point(inlay_point, bias);
let tab_point = self.tab_snapshot.to_tab_point(fold_point);
@@ -477,6 +488,11 @@ impl DisplaySnapshot {
.to_inlay_offset(anchor.to_offset(&self.buffer_snapshot))
}
pub fn display_point_to_anchor(&self, point: DisplayPoint, bias: Bias) -> Anchor {
self.buffer_snapshot
.anchor_at(point.to_offset(&self, bias), bias)
}
fn display_point_to_inlay_point(&self, point: DisplayPoint, bias: Bias) -> InlayPoint {
let block_point = point.0;
let wrap_point = self.block_snapshot.to_wrap_point(block_point);
@@ -504,10 +520,10 @@ impl DisplaySnapshot {
}
/// Returns text chunks starting at the given display row until the end of the file
pub fn text_chunks(&self, display_row: u32) -> impl Iterator<Item = &str> {
pub fn text_chunks(&self, display_row: DisplayRow) -> impl Iterator<Item = &str> {
self.block_snapshot
.chunks(
display_row..self.max_point().row() + 1,
display_row.0..self.max_point().row().next_row().0,
false,
Highlights::default(),
)
@@ -515,8 +531,8 @@ impl DisplaySnapshot {
}
/// Returns text chunks starting at the end of the given display row in reverse until the start of the file
pub fn reverse_text_chunks(&self, display_row: u32) -> impl Iterator<Item = &str> {
(0..=display_row).rev().flat_map(|row| {
pub fn reverse_text_chunks(&self, display_row: DisplayRow) -> impl Iterator<Item = &str> {
(0..=display_row.0).rev().flat_map(|row| {
self.block_snapshot
.chunks(row..row + 1, false, Highlights::default())
.map(|h| h.text)
@@ -528,12 +544,12 @@ impl DisplaySnapshot {
pub fn chunks(
&self,
display_rows: Range<u32>,
display_rows: Range<DisplayRow>,
language_aware: bool,
highlight_styles: HighlightStyles,
) -> DisplayChunks<'_> {
self.block_snapshot.chunks(
display_rows,
display_rows.start.0..display_rows.end.0,
language_aware,
Highlights {
text_highlights: Some(&self.text_highlights),
@@ -545,7 +561,7 @@ impl DisplaySnapshot {
pub fn highlighted_chunks<'a>(
&'a self,
display_rows: Range<u32>,
display_rows: Range<DisplayRow>,
language_aware: bool,
editor_style: &'a EditorStyle,
) -> impl Iterator<Item = HighlightedChunk<'a>> {
@@ -605,7 +621,7 @@ impl DisplaySnapshot {
pub fn layout_row(
&self,
display_row: u32,
display_row: DisplayRow,
TextLayoutDetails {
text_system,
editor_style,
@@ -618,7 +634,7 @@ impl DisplaySnapshot {
let mut runs = Vec::new();
let mut line = String::new();
let range = display_row..display_row + 1;
let range = display_row..display_row.next_row();
for chunk in self.highlighted_chunks(range, false, &editor_style) {
line.push_str(chunk.chunk);
@@ -658,7 +674,7 @@ impl DisplaySnapshot {
pub fn display_column_for_x(
&self,
display_row: u32,
display_row: DisplayRow,
x: Pixels,
details: &TextLayoutDetails,
) -> u32 {
@@ -721,9 +737,13 @@ impl DisplaySnapshot {
DisplayPoint(clipped)
}
pub fn clip_ignoring_line_ends(&self, point: DisplayPoint, bias: Bias) -> DisplayPoint {
DisplayPoint(self.block_snapshot.clip_point(point.0, bias))
}
pub fn clip_at_line_end(&self, point: DisplayPoint) -> DisplayPoint {
let mut point = point.0;
if point.column == self.line_len(point.row) {
if point.column == self.line_len(DisplayRow(point.row)) {
point.column = point.column.saturating_sub(1);
point = self.block_snapshot.clip_point(point, Bias::Left);
}
@@ -739,36 +759,38 @@ impl DisplaySnapshot {
pub fn blocks_in_range(
&self,
rows: Range<u32>,
) -> impl Iterator<Item = (u32, &TransformBlock)> {
self.block_snapshot.blocks_in_range(rows)
rows: Range<DisplayRow>,
) -> impl Iterator<Item = (DisplayRow, &TransformBlock)> {
self.block_snapshot
.blocks_in_range(rows.start.0..rows.end.0)
.map(|(row, block)| (DisplayRow(row), block))
}
pub fn intersects_fold<T: ToOffset>(&self, offset: T) -> bool {
self.fold_snapshot.intersects_fold(offset)
}
pub fn is_line_folded(&self, buffer_row: u32) -> bool {
pub fn is_line_folded(&self, buffer_row: MultiBufferRow) -> bool {
self.fold_snapshot.is_line_folded(buffer_row)
}
pub fn is_block_line(&self, display_row: u32) -> bool {
self.block_snapshot.is_block_line(display_row)
pub fn is_block_line(&self, display_row: DisplayRow) -> bool {
self.block_snapshot.is_block_line(BlockRow(display_row.0))
}
pub fn soft_wrap_indent(&self, display_row: u32) -> Option<u32> {
pub fn soft_wrap_indent(&self, display_row: DisplayRow) -> Option<u32> {
let wrap_row = self
.block_snapshot
.to_wrap_point(BlockPoint::new(display_row, 0))
.to_wrap_point(BlockPoint::new(display_row.0, 0))
.row();
self.wrap_snapshot.soft_wrap_indent(wrap_row)
}
pub fn text(&self) -> String {
self.text_chunks(0).collect()
self.text_chunks(DisplayRow(0)).collect()
}
pub fn line(&self, display_row: u32) -> String {
pub fn line(&self, display_row: DisplayRow) -> String {
let mut result = String::new();
for chunk in self.text_chunks(display_row) {
if let Some(ix) = chunk.find('\n') {
@@ -781,7 +803,7 @@ impl DisplaySnapshot {
result
}
pub fn line_indent_for_buffer_row(&self, buffer_row: u32) -> (u32, bool) {
pub fn line_indent_for_buffer_row(&self, buffer_row: MultiBufferRow) -> (u32, bool) {
let (buffer, range) = self
.buffer_snapshot
.buffer_line_for_row(buffer_row)
@@ -803,15 +825,15 @@ impl DisplaySnapshot {
(indent_size, is_blank)
}
pub fn line_len(&self, row: u32) -> u32 {
self.block_snapshot.line_len(row)
pub fn line_len(&self, row: DisplayRow) -> u32 {
self.block_snapshot.line_len(BlockRow(row.0))
}
pub fn longest_row(&self) -> u32 {
self.block_snapshot.longest_row()
pub fn longest_row(&self) -> DisplayRow {
DisplayRow(self.block_snapshot.longest_row())
}
pub fn fold_for_line(&self, buffer_row: u32) -> Option<FoldStatus> {
pub fn fold_for_line(&self, buffer_row: MultiBufferRow) -> Option<FoldStatus> {
if self.is_line_folded(buffer_row) {
Some(FoldStatus::Folded)
} else if self.is_foldable(buffer_row) {
@@ -821,7 +843,7 @@ impl DisplaySnapshot {
}
}
pub fn is_foldable(&self, buffer_row: u32) -> bool {
pub fn is_foldable(&self, buffer_row: MultiBufferRow) -> bool {
let max_row = self.buffer_snapshot.max_buffer_row();
if buffer_row >= max_row {
return false;
@@ -832,8 +854,9 @@ impl DisplaySnapshot {
return false;
}
for next_row in (buffer_row + 1)..=max_row {
let (next_indent_size, next_line_is_blank) = self.line_indent_for_buffer_row(next_row);
for next_row in (buffer_row.0 + 1)..=max_row.0 {
let (next_indent_size, next_line_is_blank) =
self.line_indent_for_buffer_row(MultiBufferRow(next_row));
if next_indent_size > indent_size {
return true;
} else if !next_line_is_blank {
@@ -844,20 +867,22 @@ impl DisplaySnapshot {
false
}
pub fn foldable_range(&self, buffer_row: u32) -> Option<Range<Point>> {
let start = Point::new(buffer_row, self.buffer_snapshot.line_len(buffer_row));
if self.is_foldable(start.row) && !self.is_line_folded(start.row) {
pub fn foldable_range(&self, buffer_row: MultiBufferRow) -> Option<Range<Point>> {
let start = MultiBufferPoint::new(buffer_row.0, self.buffer_snapshot.line_len(buffer_row));
if self.is_foldable(MultiBufferRow(start.row))
&& !self.is_line_folded(MultiBufferRow(start.row))
{
let (start_indent, _) = self.line_indent_for_buffer_row(buffer_row);
let max_point = self.buffer_snapshot.max_point();
let mut end = None;
for row in (buffer_row + 1)..=max_point.row {
let (indent, is_blank) = self.line_indent_for_buffer_row(row);
for row in (buffer_row.0 + 1)..=max_point.row {
let (indent, is_blank) = self.line_indent_for_buffer_row(MultiBufferRow(row));
if !is_blank && indent <= start_indent {
let prev_row = row - 1;
end = Some(Point::new(
prev_row,
self.buffer_snapshot.line_len(prev_row),
self.buffer_snapshot.line_len(MultiBufferRow(prev_row)),
));
break;
}
@@ -894,27 +919,31 @@ impl Debug for DisplayPoint {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.write_fmt(format_args!(
"DisplayPoint({}, {})",
self.row(),
self.row().0,
self.column()
))
}
}
#[derive(Debug, Copy, Clone, Default, Eq, Ord, PartialOrd, PartialEq, Deserialize, Hash)]
#[serde(transparent)]
pub struct DisplayRow(pub u32);
impl DisplayPoint {
pub fn new(row: u32, column: u32) -> Self {
Self(BlockPoint(Point::new(row, column)))
pub fn new(row: DisplayRow, column: u32) -> Self {
Self(BlockPoint(Point::new(row.0, column)))
}
pub fn zero() -> Self {
Self::new(0, 0)
Self::new(DisplayRow(0), 0)
}
pub fn is_zero(&self) -> bool {
self.0.is_zero()
}
pub fn row(self) -> u32 {
self.0.row
pub fn row(self) -> DisplayRow {
DisplayRow(self.0.row)
}
pub fn column(self) -> u32 {
@@ -1162,7 +1191,7 @@ pub mod tests {
let buffer = &snapshot.buffer_snapshot;
for _ in 0..5 {
let row = rng.gen_range(0..=buffer.max_point().row);
let column = rng.gen_range(0..=buffer.line_len(row));
let column = rng.gen_range(0..=buffer.line_len(MultiBufferRow(row)));
let point = buffer.clip_point(Point::new(row, column), Left);
let (prev_buffer_bound, prev_display_bound) = snapshot.prev_line_boundary(point);
@@ -1207,12 +1236,12 @@ pub mod tests {
}
// Movement
let min_point = snapshot.clip_point(DisplayPoint::new(0, 0), Left);
let min_point = snapshot.clip_point(DisplayPoint::new(DisplayRow(0), 0), Left);
let max_point = snapshot.clip_point(snapshot.max_point(), Right);
for _ in 0..5 {
let row = rng.gen_range(0..=snapshot.max_point().row());
let column = rng.gen_range(0..=snapshot.line_len(row));
let point = snapshot.clip_point(DisplayPoint::new(row, column), Left);
let row = rng.gen_range(0..=snapshot.max_point().row().0);
let column = rng.gen_range(0..=snapshot.line_len(DisplayRow(row)));
let point = snapshot.clip_point(DisplayPoint::new(DisplayRow(row), column), Left);
log::info!("Moving from point {:?}", point);
@@ -1279,63 +1308,64 @@ pub mod tests {
let snapshot = map.update(cx, |map, cx| map.snapshot(cx));
assert_eq!(
snapshot.text_chunks(0).collect::<String>(),
snapshot.text_chunks(DisplayRow(0)).collect::<String>(),
"one two \nthree four \nfive\nsix seven \neight"
);
assert_eq!(
snapshot.clip_point(DisplayPoint::new(0, 8), Bias::Left),
DisplayPoint::new(0, 7)
snapshot.clip_point(DisplayPoint::new(DisplayRow(0), 8), Bias::Left),
DisplayPoint::new(DisplayRow(0), 7)
);
assert_eq!(
snapshot.clip_point(DisplayPoint::new(0, 8), Bias::Right),
DisplayPoint::new(1, 0)
snapshot.clip_point(DisplayPoint::new(DisplayRow(0), 8), Bias::Right),
DisplayPoint::new(DisplayRow(1), 0)
);
assert_eq!(
movement::right(&snapshot, DisplayPoint::new(0, 7)),
DisplayPoint::new(1, 0)
movement::right(&snapshot, DisplayPoint::new(DisplayRow(0), 7)),
DisplayPoint::new(DisplayRow(1), 0)
);
assert_eq!(
movement::left(&snapshot, DisplayPoint::new(1, 0)),
DisplayPoint::new(0, 7)
movement::left(&snapshot, DisplayPoint::new(DisplayRow(1), 0)),
DisplayPoint::new(DisplayRow(0), 7)
);
let x = snapshot.x_for_display_point(DisplayPoint::new(1, 10), &text_layout_details);
let x = snapshot
.x_for_display_point(DisplayPoint::new(DisplayRow(1), 10), &text_layout_details);
assert_eq!(
movement::up(
&snapshot,
DisplayPoint::new(1, 10),
DisplayPoint::new(DisplayRow(1), 10),
SelectionGoal::None,
false,
&text_layout_details,
),
(
DisplayPoint::new(0, 7),
DisplayPoint::new(DisplayRow(0), 7),
SelectionGoal::HorizontalPosition(x.0)
)
);
assert_eq!(
movement::down(
&snapshot,
DisplayPoint::new(0, 7),
DisplayPoint::new(DisplayRow(0), 7),
SelectionGoal::HorizontalPosition(x.0),
false,
&text_layout_details
),
(
DisplayPoint::new(1, 10),
DisplayPoint::new(DisplayRow(1), 10),
SelectionGoal::HorizontalPosition(x.0)
)
);
assert_eq!(
movement::down(
&snapshot,
DisplayPoint::new(1, 10),
DisplayPoint::new(DisplayRow(1), 10),
SelectionGoal::HorizontalPosition(x.0),
false,
&text_layout_details
),
(
DisplayPoint::new(2, 4),
DisplayPoint::new(DisplayRow(2), 4),
SelectionGoal::HorizontalPosition(x.0)
)
);
@@ -1347,7 +1377,7 @@ pub mod tests {
let snapshot = map.update(cx, |map, cx| map.snapshot(cx));
assert_eq!(
snapshot.text_chunks(1).collect::<String>(),
snapshot.text_chunks(DisplayRow(1)).collect::<String>(),
"three four \nfive\nsix and \nseven eight"
);
@@ -1358,7 +1388,7 @@ pub mod tests {
let snapshot = map.update(cx, |map, cx| map.snapshot(cx));
assert_eq!(
snapshot.text_chunks(1).collect::<String>(),
snapshot.text_chunks(DisplayRow(1)).collect::<String>(),
"three \nfour five\nsix and \nseven \neight"
)
});
@@ -1379,9 +1409,18 @@ pub mod tests {
buffer.update(cx, |buffer, cx| {
buffer.edit(
vec![
(Point::new(1, 0)..Point::new(1, 0), "\t"),
(Point::new(1, 1)..Point::new(1, 1), "\t"),
(Point::new(2, 1)..Point::new(2, 1), "\t"),
(
MultiBufferPoint::new(1, 0)..MultiBufferPoint::new(1, 0),
"\t",
),
(
MultiBufferPoint::new(1, 1)..MultiBufferPoint::new(1, 1),
"\t",
),
(
MultiBufferPoint::new(2, 1)..MultiBufferPoint::new(2, 1),
"\t",
),
],
None,
cx,
@@ -1390,7 +1429,7 @@ pub mod tests {
assert_eq!(
map.update(cx, |map, cx| map.snapshot(cx))
.text_chunks(1)
.text_chunks(DisplayRow(1))
.collect::<String>()
.lines()
.next(),
@@ -1398,7 +1437,7 @@ pub mod tests {
);
assert_eq!(
map.update(cx, |map, cx| map.snapshot(cx))
.text_chunks(2)
.text_chunks(DisplayRow(2))
.collect::<String>()
.lines()
.next(),
@@ -1453,7 +1492,7 @@ pub mod tests {
let map = cx
.new_model(|cx| DisplayMap::new(buffer, font("Helvetica"), font_size, None, 1, 1, cx));
assert_eq!(
cx.update(|cx| syntax_chunks(0..5, &map, &theme, cx)),
cx.update(|cx| syntax_chunks(DisplayRow(0)..DisplayRow(5), &map, &theme, cx)),
vec![
("fn ".to_string(), None),
("outer".to_string(), Some(Hsla::blue())),
@@ -1464,7 +1503,7 @@ pub mod tests {
]
);
assert_eq!(
cx.update(|cx| syntax_chunks(3..5, &map, &theme, cx)),
cx.update(|cx| syntax_chunks(DisplayRow(3)..DisplayRow(5), &map, &theme, cx)),
vec![
(" fn ".to_string(), Some(Hsla::red())),
("inner".to_string(), Some(Hsla::blue())),
@@ -1473,10 +1512,13 @@ pub mod tests {
);
map.update(cx, |map, cx| {
map.fold(vec![Point::new(0, 6)..Point::new(3, 2)], cx)
map.fold(
vec![MultiBufferPoint::new(0, 6)..MultiBufferPoint::new(3, 2)],
cx,
)
});
assert_eq!(
cx.update(|cx| syntax_chunks(0..2, &map, &theme, cx)),
cx.update(|cx| syntax_chunks(DisplayRow(0)..DisplayRow(2), &map, &theme, cx)),
vec![
("fn ".to_string(), None),
("out".to_string(), Some(Hsla::blue())),
@@ -1539,7 +1581,7 @@ pub mod tests {
DisplayMap::new(buffer, font("Courier"), font_size, Some(px(40.0)), 1, 1, cx)
});
assert_eq!(
cx.update(|cx| syntax_chunks(0..5, &map, &theme, cx)),
cx.update(|cx| syntax_chunks(DisplayRow(0)..DisplayRow(5), &map, &theme, cx)),
[
("fn \n".to_string(), None),
("oute\nr".to_string(), Some(Hsla::blue())),
@@ -1547,15 +1589,18 @@ pub mod tests {
]
);
assert_eq!(
cx.update(|cx| syntax_chunks(3..5, &map, &theme, cx)),
cx.update(|cx| syntax_chunks(DisplayRow(3)..DisplayRow(5), &map, &theme, cx)),
[("{}\n\n".to_string(), None)]
);
map.update(cx, |map, cx| {
map.fold(vec![Point::new(0, 6)..Point::new(3, 2)], cx)
map.fold(
vec![MultiBufferPoint::new(0, 6)..MultiBufferPoint::new(3, 2)],
cx,
)
});
assert_eq!(
cx.update(|cx| syntax_chunks(1..4, &map, &theme, cx)),
cx.update(|cx| syntax_chunks(DisplayRow(1)..DisplayRow(4), &map, &theme, cx)),
[
("out".to_string(), Some(Hsla::blue())),
("\n".to_string(), None),
@@ -1627,7 +1672,7 @@ pub mod tests {
});
assert_eq!(
cx.update(|cx| chunks(0..10, &map, &theme, cx)),
cx.update(|cx| chunks(DisplayRow(0)..DisplayRow(10), &map, &theme, cx)),
[
("const ".to_string(), None, None),
("a".to_string(), None, Some(Hsla::blue())),
@@ -1723,45 +1768,57 @@ pub mod tests {
let map = map.update(cx, |map, cx| map.snapshot(cx));
assert_eq!(map.text(), "α\nβ \n🏀β γ");
assert_eq!(
map.text_chunks(0).collect::<String>(),
map.text_chunks(DisplayRow(0)).collect::<String>(),
"α\nβ \n🏀β γ"
);
assert_eq!(map.text_chunks(1).collect::<String>(), "β \n🏀β γ");
assert_eq!(map.text_chunks(2).collect::<String>(), "🏀β γ");
assert_eq!(
map.text_chunks(DisplayRow(1)).collect::<String>(),
"β \n🏀β γ"
);
assert_eq!(
map.text_chunks(DisplayRow(2)).collect::<String>(),
"🏀β γ"
);
let point = Point::new(0, "\t\t".len() as u32);
let display_point = DisplayPoint::new(0, "".len() as u32);
let point = MultiBufferPoint::new(0, "\t\t".len() as u32);
let display_point = DisplayPoint::new(DisplayRow(0), "".len() as u32);
assert_eq!(point.to_display_point(&map), display_point);
assert_eq!(display_point.to_point(&map), point);
let point = Point::new(1, "β\t".len() as u32);
let display_point = DisplayPoint::new(1, "β ".len() as u32);
let point = MultiBufferPoint::new(1, "β\t".len() as u32);
let display_point = DisplayPoint::new(DisplayRow(1), "β ".len() as u32);
assert_eq!(point.to_display_point(&map), display_point);
assert_eq!(display_point.to_point(&map), point,);
let point = Point::new(2, "🏀β\t\t".len() as u32);
let display_point = DisplayPoint::new(2, "🏀β ".len() as u32);
let point = MultiBufferPoint::new(2, "🏀β\t\t".len() as u32);
let display_point = DisplayPoint::new(DisplayRow(2), "🏀β ".len() as u32);
assert_eq!(point.to_display_point(&map), display_point);
assert_eq!(display_point.to_point(&map), point,);
// Display points inside of expanded tabs
assert_eq!(
DisplayPoint::new(0, "".len() as u32).to_point(&map),
Point::new(0, "\t".len() as u32),
DisplayPoint::new(DisplayRow(0), "".len() as u32).to_point(&map),
MultiBufferPoint::new(0, "\t".len() as u32),
);
assert_eq!(
DisplayPoint::new(0, "".len() as u32).to_point(&map),
Point::new(0, "".len() as u32),
DisplayPoint::new(DisplayRow(0), "".len() as u32).to_point(&map),
MultiBufferPoint::new(0, "".len() as u32),
);
// Clipping display points inside of multi-byte characters
assert_eq!(
map.clip_point(DisplayPoint::new(0, "".len() as u32 - 1), Left),
DisplayPoint::new(0, 0)
map.clip_point(
DisplayPoint::new(DisplayRow(0), "".len() as u32 - 1),
Left
),
DisplayPoint::new(DisplayRow(0), 0)
);
assert_eq!(
map.clip_point(DisplayPoint::new(0, "".len() as u32 - 1), Bias::Right),
DisplayPoint::new(0, "".len() as u32)
map.clip_point(
DisplayPoint::new(DisplayRow(0), "".len() as u32 - 1),
Bias::Right
),
DisplayPoint::new(DisplayRow(0), "".len() as u32)
);
}
@@ -1776,12 +1833,12 @@ pub mod tests {
});
assert_eq!(
map.update(cx, |map, cx| map.snapshot(cx)).max_point(),
DisplayPoint::new(1, 11)
DisplayPoint::new(DisplayRow(1), 11)
)
}
fn syntax_chunks(
rows: Range<u32>,
rows: Range<DisplayRow>,
map: &Model<DisplayMap>,
theme: &SyntaxTheme,
cx: &mut AppContext,
@@ -1793,7 +1850,7 @@ pub mod tests {
}
fn chunks(
rows: Range<u32>,
rows: Range<DisplayRow>,
map: &Model<DisplayMap>,
theme: &SyntaxTheme,
cx: &mut AppContext,
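
Most of the churn in this file replaces bare u32 rows with the DisplayRow and MultiBufferRow newtypes, so display-space and buffer-space row indices can no longer be mixed up silently. A minimal sketch of why the newtype pattern turns that mix-up into a compile error (the fold logic here is a toy, not the real display map):

    #[derive(Debug, Copy, Clone, PartialEq, Eq, PartialOrd, Ord)]
    struct DisplayRow(pub u32);

    #[derive(Debug, Copy, Clone, PartialEq, Eq, PartialOrd, Ord)]
    struct MultiBufferRow(pub u32);

    // A toy "display map" that hides a single folded buffer row.
    fn to_display_row(row: MultiBufferRow, first_folded_row: MultiBufferRow) -> DisplayRow {
        if row.0 > first_folded_row.0 {
            DisplayRow(row.0 - 1)
        } else {
            DisplayRow(row.0)
        }
    }

    fn line_len(row: DisplayRow) -> u32 {
        // Pretend every display line is 80 columns wide.
        let _ = row;
        80
    }

    fn main() {
        let buffer_row = MultiBufferRow(10);
        let display_row = to_display_row(buffer_row, MultiBufferRow(3));
        // `line_len(buffer_row)` would be a type error; only display rows are accepted.
        println!("{}", line_len(display_row));
    }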

View File

@@ -6,7 +6,7 @@ use crate::{EditorStyle, GutterDimensions};
use collections::{Bound, HashMap, HashSet};
use gpui::{AnyElement, Pixels, WindowContext};
use language::{BufferSnapshot, Chunk, Patch, Point};
use multi_buffer::{Anchor, ExcerptId, ExcerptRange, ToPoint as _};
use multi_buffer::{Anchor, ExcerptId, ExcerptRange, MultiBufferRow, ToPoint as _};
use parking_lot::Mutex;
use std::{
cell::RefCell,
@@ -50,7 +50,7 @@ pub struct BlockId(usize);
pub struct BlockPoint(pub Point);
#[derive(Copy, Clone, Debug, Default, Eq, Ord, PartialOrd, PartialEq)]
struct BlockRow(u32);
pub struct BlockRow(pub(super) u32);
#[derive(Copy, Clone, Debug, Default, Eq, Ord, PartialOrd, PartialEq)]
struct WrapRow(u32);
@@ -163,7 +163,7 @@ pub struct BlockChunks<'a> {
pub struct BlockBufferRows<'a> {
transforms: sum_tree::Cursor<'a, Transform, (BlockRow, WrapRow)>,
input_buffer_rows: wrap_map::WrapBufferRows<'a>,
output_row: u32,
output_row: BlockRow,
started: bool,
}
@@ -357,7 +357,7 @@ impl BlockMap {
match block.disposition {
BlockDisposition::Above => position.column = 0,
BlockDisposition::Below => {
position.column = buffer.line_len(position.row)
position.column = buffer.line_len(MultiBufferRow(position.row))
}
}
let position = wrap_snapshot.make_wrap_point(position, Bias::Left);
@@ -372,7 +372,7 @@ impl BlockMap {
(
wrap_snapshot
.make_wrap_point(
Point::new(excerpt_boundary.row, 0),
Point::new(excerpt_boundary.row.0, 0),
Bias::Left,
)
.row(),
@@ -633,12 +633,12 @@ impl BlockSnapshot {
}
}
pub fn buffer_rows(&self, start_row: u32) -> BlockBufferRows {
pub(super) fn buffer_rows(&self, start_row: BlockRow) -> BlockBufferRows {
let mut cursor = self.transforms.cursor::<(BlockRow, WrapRow)>();
cursor.seek(&BlockRow(start_row), Bias::Right, &());
cursor.seek(&start_row, Bias::Right, &());
let (output_start, input_start) = cursor.start();
let overshoot = if cursor.item().map_or(false, |t| t.is_isomorphic()) {
start_row - output_start.0
start_row.0 - output_start.0
} else {
0
};
@@ -676,7 +676,7 @@ impl BlockSnapshot {
pub fn max_point(&self) -> BlockPoint {
let row = self.transforms.summary().output_rows - 1;
BlockPoint::new(row, self.line_len(row))
BlockPoint::new(row, self.line_len(BlockRow(row)))
}
pub fn longest_row(&self) -> u32 {
@@ -684,12 +684,12 @@ impl BlockSnapshot {
self.to_block_point(WrapPoint::new(input_row, 0)).row
}
pub fn line_len(&self, row: u32) -> u32 {
pub(super) fn line_len(&self, row: BlockRow) -> u32 {
let mut cursor = self.transforms.cursor::<(BlockRow, WrapRow)>();
cursor.seek(&BlockRow(row), Bias::Right, &());
cursor.seek(&BlockRow(row.0), Bias::Right, &());
if let Some(transform) = cursor.item() {
let (output_start, input_start) = cursor.start();
let overshoot = row - output_start.0;
let overshoot = row.0 - output_start.0;
if transform.block.is_some() {
0
} else {
@@ -700,9 +700,9 @@ impl BlockSnapshot {
}
}
pub fn is_block_line(&self, row: u32) -> bool {
pub(super) fn is_block_line(&self, row: BlockRow) -> bool {
let mut cursor = self.transforms.cursor::<(BlockRow, WrapRow)>();
cursor.seek(&BlockRow(row), Bias::Right, &());
cursor.seek(&row, Bias::Right, &());
cursor.item().map_or(false, |t| t.block.is_some())
}
@@ -881,16 +881,16 @@ impl<'a> Iterator for BlockChunks<'a> {
}
impl<'a> Iterator for BlockBufferRows<'a> {
type Item = Option<u32>;
type Item = Option<BlockRow>;
fn next(&mut self) -> Option<Self::Item> {
if self.started {
self.output_row += 1;
self.output_row.0 += 1;
} else {
self.started = true;
}
if self.output_row >= self.transforms.end(&()).0 .0 {
if self.output_row.0 >= self.transforms.end(&()).0 .0 {
self.transforms.next(&());
}
@@ -898,7 +898,7 @@ impl<'a> Iterator for BlockBufferRows<'a> {
if transform.block.is_some() {
Some(None)
} else {
Some(self.input_buffer_rows.next().unwrap())
Some(self.input_buffer_rows.next().unwrap().map(BlockRow))
}
}
}
@@ -1154,7 +1154,10 @@ mod tests {
);
assert_eq!(
snapshot.buffer_rows(0).collect::<Vec<_>>(),
snapshot
.buffer_rows(BlockRow(0))
.map(|row| row.map(|r| r.0))
.collect::<Vec<_>>(),
&[
Some(0),
None,
@@ -1394,7 +1397,7 @@ mod tests {
position.column = 0;
}
BlockDisposition::Below => {
position.column = buffer_snapshot.line_len(position.row);
position.column = buffer_snapshot.line_len(MultiBufferRow(position.row));
}
};
let row = wraps_snapshot.make_wrap_point(position, Bias::Left).row();
@@ -1410,7 +1413,7 @@ mod tests {
expected_blocks.extend(buffer_snapshot.excerpt_boundaries_in_range(0..).map(
|boundary| {
let position =
wraps_snapshot.make_wrap_point(Point::new(boundary.row, 0), Bias::Left);
wraps_snapshot.make_wrap_point(Point::new(boundary.row.0, 0), Bias::Left);
(
position.row(),
ExpectedBlock::ExcerptHeader {
@@ -1427,7 +1430,9 @@ mod tests {
expected_blocks.sort_unstable();
let mut sorted_blocks_iter = expected_blocks.into_iter().peekable();
let input_buffer_rows = buffer_snapshot.buffer_rows(0).collect::<Vec<_>>();
let input_buffer_rows = buffer_snapshot
.buffer_rows(MultiBufferRow(0))
.collect::<Vec<_>>();
let mut expected_buffer_rows = Vec::new();
let mut expected_text = String::new();
let mut expected_block_positions = Vec::new();
@@ -1498,7 +1503,8 @@ mod tests {
);
assert_eq!(
blocks_snapshot
.buffer_rows(start_row as u32)
.buffer_rows(BlockRow(start_row as u32))
.map(|row| row.map(|r| r.0))
.collect::<Vec<_>>(),
&expected_buffer_rows[start_row..]
);
@@ -1518,7 +1524,7 @@ mod tests {
let row = row as u32;
assert_eq!(
blocks_snapshot.line_len(row),
blocks_snapshot.line_len(BlockRow(row)),
line.len() as u32,
"invalid line len for row {}",
row

View File

@@ -4,7 +4,7 @@ use super::{
};
use gpui::{ElementId, HighlightStyle, Hsla};
use language::{Chunk, Edit, Point, TextSummary};
use multi_buffer::{Anchor, AnchorRangeExt, MultiBufferSnapshot, ToOffset};
use multi_buffer::{Anchor, AnchorRangeExt, MultiBufferRow, MultiBufferSnapshot, ToOffset};
use std::{
cmp::{self, Ordering},
iter,
@@ -629,17 +629,17 @@ impl FoldSnapshot {
cursor.item().map_or(false, |t| t.output_text.is_some())
}
pub fn is_line_folded(&self, buffer_row: u32) -> bool {
pub fn is_line_folded(&self, buffer_row: MultiBufferRow) -> bool {
let mut inlay_point = self
.inlay_snapshot
.to_inlay_point(Point::new(buffer_row, 0));
.to_inlay_point(Point::new(buffer_row.0, 0));
let mut cursor = self.transforms.cursor::<InlayPoint>();
cursor.seek(&inlay_point, Bias::Right, &());
loop {
match cursor.item() {
Some(transform) => {
let buffer_point = self.inlay_snapshot.to_buffer_point(inlay_point);
if buffer_point.row != buffer_row {
if buffer_point.row != buffer_row.0 {
return false;
} else if transform.output_text.is_some() {
return true;
@@ -1549,7 +1549,7 @@ mod tests {
.collect::<HashSet<_>>();
for row in 0..=buffer_snapshot.max_point().row {
assert_eq!(
snapshot.is_line_folded(row),
snapshot.is_line_folded(MultiBufferRow(row)),
folded_buffer_rows.contains(&row),
"expected buffer row {}{} to be folded",
row,

View File

@@ -2,7 +2,9 @@ use crate::{HighlightStyles, InlayId};
use collections::{BTreeMap, BTreeSet};
use gpui::HighlightStyle;
use language::{Chunk, Edit, Point, TextSummary};
use multi_buffer::{Anchor, MultiBufferChunks, MultiBufferRows, MultiBufferSnapshot, ToOffset};
use multi_buffer::{
Anchor, MultiBufferChunks, MultiBufferRow, MultiBufferRows, MultiBufferSnapshot, ToOffset,
};
use std::{
any::TypeId,
cmp,
@@ -182,7 +184,7 @@ pub struct InlayBufferRows<'a> {
transforms: Cursor<'a, Transform, (InlayPoint, Point)>,
buffer_rows: MultiBufferRows<'a>,
inlay_row: u32,
max_buffer_row: u32,
max_buffer_row: MultiBufferRow,
}
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
@@ -375,7 +377,7 @@ impl<'a> InlayBufferRows<'a> {
self.transforms.seek(&inlay_point, Bias::Left, &());
let mut buffer_point = self.transforms.start().1;
let buffer_row = if row == 0 {
let buffer_row = MultiBufferRow(if row == 0 {
0
} else {
match self.transforms.item() {
@@ -383,9 +385,9 @@ impl<'a> InlayBufferRows<'a> {
buffer_point += inlay_point.0 - self.transforms.start().0 .0;
buffer_point.row
}
_ => cmp::min(buffer_point.row + 1, self.max_buffer_row),
_ => cmp::min(buffer_point.row + 1, self.max_buffer_row.0),
}
};
});
self.inlay_row = inlay_point.row();
self.buffer_rows.seek(buffer_row);
}
@@ -986,17 +988,17 @@ impl InlaySnapshot {
let inlay_point = InlayPoint::new(row, 0);
cursor.seek(&inlay_point, Bias::Left, &());
let max_buffer_row = self.buffer.max_point().row;
let max_buffer_row = MultiBufferRow(self.buffer.max_point().row);
let mut buffer_point = cursor.start().1;
let buffer_row = if row == 0 {
0
MultiBufferRow(0)
} else {
match cursor.item() {
Some(Transform::Isomorphic(_)) => {
buffer_point += inlay_point.0 - cursor.start().0 .0;
buffer_point.row
MultiBufferRow(buffer_point.row)
}
_ => cmp::min(buffer_point.row + 1, max_buffer_row),
_ => cmp::min(MultiBufferRow(buffer_point.row + 1), max_buffer_row),
}
};

File diff suppressed because it is too large.

View File

@@ -6,6 +6,7 @@ use settings::{Settings, SettingsSources};
#[derive(Deserialize, Clone)]
pub struct EditorSettings {
pub cursor_blink: bool,
pub current_line_highlight: CurrentLineHighlight,
pub hover_popover_enabled: bool,
pub show_completions_on_input: bool,
pub show_completion_documentation: bool,
@@ -24,6 +25,19 @@ pub struct EditorSettings {
pub double_click_in_multibuffer: DoubleClickInMultibuffer,
}
#[derive(Copy, Clone, Debug, Serialize, Deserialize, PartialEq, Eq, JsonSchema)]
#[serde(rename_all = "snake_case")]
pub enum CurrentLineHighlight {
// Don't highlight the current line.
None,
// Highlight the gutter area.
Gutter,
// Highlight the editor area.
Line,
// Highlight the full line.
All,
}
/// When to populate a new search's query based on the text under the cursor.
#[derive(Copy, Clone, Debug, Serialize, Deserialize, PartialEq, Eq, JsonSchema)]
#[serde(rename_all = "snake_case")]
@@ -105,6 +119,10 @@ pub struct EditorSettingsContent {
///
/// Default: true
pub cursor_blink: Option<bool>,
/// How to highlight the current line in the editor.
///
/// Default: all
pub current_line_highlight: Option<CurrentLineHighlight>,
/// Whether to show the informational hover box when moving the mouse
/// over symbols in the editor.
///
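
Because of #[serde(rename_all = "snake_case")], the variants are written in lower case in settings, for example "current_line_highlight": "gutter". A minimal sketch of how such an enum round-trips through serde, assuming the serde and serde_json crates:

    use serde::{Deserialize, Serialize};

    #[derive(Copy, Clone, Debug, Serialize, Deserialize, PartialEq, Eq)]
    #[serde(rename_all = "snake_case")]
    enum CurrentLineHighlight {
        None,
        Gutter,
        Line,
        All,
    }

    fn main() {
        // "gutter" in the settings file maps to CurrentLineHighlight::Gutter.
        let value: CurrentLineHighlight = serde_json::from_str("\"gutter\"").unwrap();
        assert_eq!(value, CurrentLineHighlight::Gutter);
        // Serializing goes back to the snake_case string.
        assert_eq!(serde_json::to_string(&value).unwrap(), "\"gutter\"");
    }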

File diff suppressed because it is too large.

File diff suppressed because it is too large.

View File

@@ -4,29 +4,29 @@ use std::ops::Range;
use git::diff::{DiffHunk, DiffHunkStatus};
use language::Point;
use multi_buffer::Anchor;
use multi_buffer::{Anchor, MultiBufferRow};
use crate::{
display_map::{DisplaySnapshot, ToDisplayPoint},
AnchorRangeExt,
hunk_status, AnchorRangeExt, DisplayRow,
};
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum DisplayDiffHunk {
Folded {
display_row: u32,
display_row: DisplayRow,
},
Unfolded {
diff_base_byte_range: Range<usize>,
display_row_range: Range<u32>,
display_row_range: Range<DisplayRow>,
multi_buffer_range: Range<Anchor>,
status: DiffHunkStatus,
},
}
impl DisplayDiffHunk {
pub fn start_display_row(&self) -> u32 {
pub fn start_display_row(&self) -> DisplayRow {
match self {
&DisplayDiffHunk::Folded { display_row } => display_row,
DisplayDiffHunk::Unfolded {
@@ -35,7 +35,7 @@ impl DisplayDiffHunk {
}
}
pub fn contains_display_row(&self, display_row: u32) -> bool {
pub fn contains_display_row(&self, display_row: DisplayRow) -> bool {
let range = match self {
&DisplayDiffHunk::Folded { display_row } => display_row..=display_row,
@@ -48,21 +48,26 @@ impl DisplayDiffHunk {
}
}
pub fn diff_hunk_to_display(hunk: &DiffHunk<u32>, snapshot: &DisplaySnapshot) -> DisplayDiffHunk {
let hunk_start_point = Point::new(hunk.associated_range.start, 0);
let hunk_start_point_sub = Point::new(hunk.associated_range.start.saturating_sub(1), 0);
pub fn diff_hunk_to_display(
hunk: &DiffHunk<MultiBufferRow>,
snapshot: &DisplaySnapshot,
) -> DisplayDiffHunk {
let hunk_start_point = Point::new(hunk.associated_range.start.0, 0);
let hunk_start_point_sub = Point::new(hunk.associated_range.start.0.saturating_sub(1), 0);
let hunk_end_point_sub = Point::new(
hunk.associated_range
.end
.0
.saturating_sub(1)
.max(hunk.associated_range.start),
.max(hunk.associated_range.start.0),
0,
);
let is_removal = hunk.status() == DiffHunkStatus::Removed;
let status = hunk_status(hunk);
let is_removal = status == DiffHunkStatus::Removed;
let folds_start = Point::new(hunk.associated_range.start.saturating_sub(2), 0);
let folds_end = Point::new(hunk.associated_range.end + 2, 0);
let folds_start = Point::new(hunk.associated_range.start.0.saturating_sub(2), 0);
let folds_end = Point::new(hunk.associated_range.end.0 + 2, 0);
let folds_range = folds_start..folds_end;
let containing_fold = snapshot.folds_in_range(folds_range).find(|fold| {
@@ -83,7 +88,7 @@ pub fn diff_hunk_to_display(hunk: &DiffHunk<u32>, snapshot: &DisplaySnapshot) ->
let start = hunk_start_point.to_display_point(snapshot).row();
let hunk_end_row = hunk.associated_range.end.max(hunk.associated_range.start);
let hunk_end_point = Point::new(hunk_end_row, 0);
let hunk_end_point = Point::new(hunk_end_row.0, 0);
let multi_buffer_start = snapshot.buffer_snapshot.anchor_after(hunk_start_point);
let multi_buffer_end = snapshot.buffer_snapshot.anchor_before(hunk_end_point);
@@ -92,7 +97,7 @@ pub fn diff_hunk_to_display(hunk: &DiffHunk<u32>, snapshot: &DisplaySnapshot) ->
DisplayDiffHunk::Unfolded {
display_row_range: start..end,
multi_buffer_range: multi_buffer_start..multi_buffer_end,
status: hunk.status(),
status,
diff_base_byte_range: hunk.diff_base_byte_range.clone(),
}
}
@@ -100,11 +105,11 @@ pub fn diff_hunk_to_display(hunk: &DiffHunk<u32>, snapshot: &DisplaySnapshot) ->
#[cfg(test)]
mod tests {
use crate::editor_tests::init_test;
use crate::Point;
use crate::{editor_tests::init_test, hunk_status};
use gpui::{Context, TestAppContext};
use language::Capability::ReadWrite;
use multi_buffer::{ExcerptRange, MultiBuffer};
use multi_buffer::{ExcerptRange, MultiBuffer, MultiBufferRow};
use project::{FakeFs, Project};
use unindent::Unindent;
#[gpui::test]
@@ -145,8 +150,7 @@ mod tests {
1.five
1.six
"
.unindent()
.into(),
.unindent(),
),
cx,
);
@@ -182,8 +186,7 @@ mod tests {
2.four
2.six
"
.unindent()
.into(),
.unindent(),
),
cx,
);
@@ -259,26 +262,41 @@ mod tests {
);
let expected = [
(DiffHunkStatus::Modified, 1..2),
(DiffHunkStatus::Modified, 2..3),
(
DiffHunkStatus::Modified,
MultiBufferRow(1)..MultiBufferRow(2),
),
(
DiffHunkStatus::Modified,
MultiBufferRow(2)..MultiBufferRow(3),
),
//TODO: Define better when and where removed hunks show up at range extremities
(DiffHunkStatus::Removed, 6..6),
(DiffHunkStatus::Removed, 8..8),
(DiffHunkStatus::Added, 10..11),
(
DiffHunkStatus::Removed,
MultiBufferRow(6)..MultiBufferRow(6),
),
(
DiffHunkStatus::Removed,
MultiBufferRow(8)..MultiBufferRow(8),
),
(
DiffHunkStatus::Added,
MultiBufferRow(10)..MultiBufferRow(11),
),
];
assert_eq!(
snapshot
.git_diff_hunks_in_range(0..12)
.map(|hunk| (hunk.status(), hunk.associated_range))
.git_diff_hunks_in_range(MultiBufferRow(0)..MultiBufferRow(12))
.map(|hunk| (hunk_status(&hunk), hunk.associated_range))
.collect::<Vec<_>>(),
&expected,
);
assert_eq!(
snapshot
.git_diff_hunks_in_range_rev(0..12)
.map(|hunk| (hunk.status(), hunk.associated_range))
.git_diff_hunks_in_range_rev(MultiBufferRow(0)..MultiBufferRow(12))
.map(|hunk| (hunk_status(&hunk), hunk.associated_range))
.collect::<Vec<_>>(),
expected
.iter()

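Much of the diff above swaps raw `u32` rows for the `MultiBufferRow` newtype, unwrapping with `.0` wherever a plain row number is still needed. A minimal sketch of that pattern, using a stand-in type rather than the real one from the `multi_buffer` crate:

```rust
// Stand-in newtype with an assumed shape; the real `MultiBufferRow`
// lives in the `multi_buffer` crate and carries more API.
#[derive(Copy, Clone, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct MultiBufferRow(u32);

impl MultiBufferRow {
    const MIN: Self = Self(0);
    const MAX: Self = Self(u32::MAX);
}

fn main() {
    let hunk_rows = MultiBufferRow(6)..MultiBufferRow(8);
    // Call sites unwrap with `.0` when a raw u32 row is still needed,
    // e.g. to build a `Point::new(row.0, 0)` as the diff does.
    let raw_start = hunk_rows.start.0;
    let full_range = MultiBufferRow::MIN..MultiBufferRow::MAX;
    assert!(raw_start < hunk_rows.end.0);
    assert!(full_range.start < full_range.end);
    println!("{hunk_rows:?} within {full_range:?}");
}
```

Wrapping rows in a distinct type keeps multi-buffer rows from being confused with display rows or byte offsets at compile time, at the cost of the explicit `.0` unwraps visible throughout the diff.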

@@ -7,12 +7,13 @@ use git::{
parse_git_remote_url, GitHostingProvider, GitHostingProviderRegistry, Oid, PullRequest,
};
use gpui::{Model, ModelContext, Subscription, Task};
use http::HttpClient;
use language::{markdown, Bias, Buffer, BufferSnapshot, Edit, LanguageRegistry, ParsedMarkdown};
use multi_buffer::MultiBufferRow;
use project::{Item, Project};
use smallvec::SmallVec;
use sum_tree::SumTree;
use url::Url;
use util::http::HttpClient;
#[derive(Clone, Debug, Default)]
pub struct GitBlameEntry {
@@ -185,7 +186,7 @@ impl GitBlame {
pub fn blame_for_rows<'a>(
&'a mut self,
rows: impl 'a + IntoIterator<Item = Option<u32>>,
rows: impl 'a + IntoIterator<Item = Option<MultiBufferRow>>,
cx: &mut ModelContext<Self>,
) -> impl 'a + Iterator<Item = Option<BlameEntry>> {
self.sync(cx);
@@ -193,7 +194,7 @@ impl GitBlame {
let mut cursor = self.entries.cursor::<u32>();
rows.into_iter().map(move |row| {
let row = row?;
cursor.seek_forward(&row, Bias::Right, &());
cursor.seek_forward(&row.0, Bias::Right, &());
cursor.item()?.blame.clone()
})
}
@@ -532,7 +533,7 @@ mod tests {
($blame:expr, $rows:expr, $expected:expr, $cx:expr) => {
assert_eq!(
$blame
.blame_for_rows($rows.map(Some), $cx)
.blame_for_rows($rows.map(MultiBufferRow).map(Some), $cx)
.collect::<Vec<_>>(),
$expected
);
@@ -597,7 +598,7 @@ mod tests {
blame.update(cx, |blame, cx| {
assert_eq!(
blame
.blame_for_rows((0..1).map(Some), cx)
.blame_for_rows((0..1).map(MultiBufferRow).map(Some), cx)
.collect::<Vec<_>>(),
vec![None]
);
@@ -661,7 +662,7 @@ mod tests {
// All lines
assert_eq!(
blame
.blame_for_rows((0..8).map(Some), cx)
.blame_for_rows((0..8).map(MultiBufferRow).map(Some), cx)
.collect::<Vec<_>>(),
vec![
Some(blame_entry("1b1b1b", 0..1)),
@@ -677,7 +678,7 @@ mod tests {
// Subset of lines
assert_eq!(
blame
.blame_for_rows((1..4).map(Some), cx)
.blame_for_rows((1..4).map(MultiBufferRow).map(Some), cx)
.collect::<Vec<_>>(),
vec![
Some(blame_entry("0d0d0d", 1..2)),
@@ -688,7 +689,7 @@ mod tests {
// Subset of lines, with some not displayed
assert_eq!(
blame
.blame_for_rows(vec![Some(1), None, None], cx)
.blame_for_rows(vec![Some(MultiBufferRow(1)), None, None], cx)
.collect::<Vec<_>>(),
vec![Some(blame_entry("0d0d0d", 1..2)), None, None]
);

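The blame changes above alter `blame_for_rows` to accept `Option<MultiBufferRow>` items instead of `Option<u32>`, so call sites map rows through the newtype before wrapping them in `Some`. A rough sketch of that calling convention, with the blame lookup stubbed out and the real `ModelContext`/`BlameEntry` types omitted:

```rust
#[derive(Copy, Clone, Debug)]
struct MultiBufferRow(u32);

// Stub for `blame_for_rows`: rows passed as `None` (not displayed) stay
// `None`; displayed rows get a fake blame value derived from their index.
fn blame_for_rows(
    rows: impl IntoIterator<Item = Option<MultiBufferRow>>,
) -> Vec<Option<String>> {
    rows.into_iter()
        .map(|row| row.map(|r| format!("commit-for-row-{}", r.0)))
        .collect()
}

fn main() {
    // Mirrors the updated test call sites in the diff above.
    let all_rows = blame_for_rows((0..3).map(MultiBufferRow).map(Some));
    let sparse = blame_for_rows(vec![Some(MultiBufferRow(1)), None, None]);
    println!("{all_rows:?}");
    println!("{sparse:?}");
}
```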

@@ -1,8 +1,8 @@
use crate::{
display_map::{InlayOffset, ToDisplayPoint},
hover_links::{InlayHighlight, RangeInEditor},
Anchor, AnchorRangeExt, DisplayPoint, Editor, EditorSettings, EditorSnapshot, EditorStyle,
ExcerptId, Hover, RangeToAnchorExt,
Anchor, AnchorRangeExt, DisplayPoint, DisplayRow, Editor, EditorSettings, EditorSnapshot,
EditorStyle, ExcerptId, Hover, RangeToAnchorExt,
};
use futures::{stream::FuturesUnordered, FutureExt};
use gpui::{
@@ -10,9 +10,10 @@ use gpui::{
ParentElement, Pixels, SharedString, Size, StatefulInteractiveElement, Styled, Task,
ViewContext, WeakView,
};
use language::{markdown, Bias, DiagnosticEntry, Language, LanguageRegistry, ParsedMarkdown};
use language::{markdown, DiagnosticEntry, Language, LanguageRegistry, ParsedMarkdown};
use lsp::DiagnosticSeverity;
use multi_buffer::ToOffset;
use project::{HoverBlock, HoverBlockKind, InlayHintLabelPart};
use settings::Settings;
use smol::stream::StreamExt;
@@ -30,21 +31,16 @@ pub const HOVER_POPOVER_GAP: Pixels = px(10.);
/// Bindable action which uses the most recent selection head to trigger a hover
pub fn hover(editor: &mut Editor, _: &Hover, cx: &mut ViewContext<Editor>) {
let head = editor.selections.newest_display(cx).head();
let snapshot = editor.snapshot(cx);
show_hover(editor, head, &snapshot, true, cx);
let head = editor.selections.newest_anchor().head();
show_hover(editor, head, true, cx);
}
/// The internal hover action dispatches between `show_hover` and `hide_hover`
/// depending on whether a point to hover over is provided.
pub fn hover_at(
editor: &mut Editor,
point: Option<(DisplayPoint, &EditorSnapshot)>,
cx: &mut ViewContext<Editor>,
) {
pub fn hover_at(editor: &mut Editor, anchor: Option<Anchor>, cx: &mut ViewContext<Editor>) {
if EditorSettings::get_global(cx).hover_popover_enabled {
if let Some((point, snapshot)) = point {
show_hover(editor, point, snapshot, false, cx);
if let Some(anchor) = anchor {
show_hover(editor, anchor, false, cx);
} else {
hide_hover(editor, cx);
}
@@ -164,8 +160,7 @@ pub fn hide_hover(editor: &mut Editor, cx: &mut ViewContext<Editor>) -> bool {
/// Triggered by the `Hover` action when the cursor may be over a symbol.
fn show_hover(
editor: &mut Editor,
point: DisplayPoint,
snapshot: &EditorSnapshot,
anchor: Anchor,
ignore_timeout: bool,
cx: &mut ViewContext<Editor>,
) {
@@ -173,27 +168,21 @@ fn show_hover(
return;
}
let multibuffer_offset = point.to_offset(&snapshot.display_snapshot, Bias::Left);
let snapshot = editor.snapshot(cx);
let (buffer, buffer_position) = if let Some(output) = editor
.buffer
.read(cx)
.text_anchor_for_position(multibuffer_offset, cx)
{
output
} else {
return;
};
let (buffer, buffer_position) =
if let Some(output) = editor.buffer.read(cx).text_anchor_for_position(anchor, cx) {
output
} else {
return;
};
let excerpt_id = if let Some((excerpt_id, _, _)) = editor
.buffer()
.read(cx)
.excerpt_containing(multibuffer_offset, cx)
{
excerpt_id
} else {
return;
};
let excerpt_id =
if let Some((excerpt_id, _, _)) = editor.buffer().read(cx).excerpt_containing(anchor, cx) {
excerpt_id
} else {
return;
};
let project = if let Some(project) = editor.project.clone() {
project
@@ -211,9 +200,10 @@ fn show_hover(
.as_text_range()
.map(|range| {
let hover_range = range.to_offset(&snapshot.buffer_snapshot);
let offset = anchor.to_offset(&snapshot.buffer_snapshot);
// LSP returns a hover result for the end index of ranges that should be hovered, so we need to
// use an inclusive range here to check if we should dismiss the popover
(hover_range.start..=hover_range.end).contains(&multibuffer_offset)
(hover_range.start..=hover_range.end).contains(&offset)
})
.unwrap_or(false)
})
@@ -225,11 +215,6 @@ fn show_hover(
}
}
// Get input anchor
let anchor = snapshot
.buffer_snapshot
.anchor_at(multibuffer_offset, Bias::Left);
// Don't request again if the location is the same as the previous request
if let Some(triggered_from) = &editor.hover_state.triggered_from {
if triggered_from
@@ -239,7 +224,6 @@ fn show_hover(
return;
}
}
let snapshot = snapshot.clone();
let task = cx.spawn(|this, mut cx| {
async move {
@@ -273,7 +257,7 @@ fn show_hover(
// If there's a diagnostic, assign it on the hover state and notify
let local_diagnostic = snapshot
.buffer_snapshot
.diagnostics_in_range::<_, usize>(multibuffer_offset..multibuffer_offset, false)
.diagnostics_in_range::<_, usize>(anchor..anchor, false)
// Find the entry with the most specific range
.min_by_key(|entry| entry.range.end - entry.range.start)
.map(|entry| DiagnosticEntry {
@@ -456,7 +440,7 @@ impl HoverState {
&mut self,
snapshot: &EditorSnapshot,
style: &EditorStyle,
visible_rows: Range<u32>,
visible_rows: Range<DisplayRow>,
max_size: Size<Pixels>,
workspace: Option<WeakView<Workspace>>,
cx: &mut ViewContext<Editor>,
@@ -641,6 +625,7 @@ mod tests {
use lsp::LanguageServerId;
use project::{HoverBlock, HoverBlockKind};
use smol::stream::StreamExt;
use text::Bias;
use unindent::Unindent;
use util::test::marked_text_ranges;
@@ -667,7 +652,10 @@ mod tests {
cx.update_editor(|editor, cx| {
let snapshot = editor.snapshot(cx);
hover_at(editor, Some((hover_point, &snapshot)), cx)
let anchor = snapshot
.buffer_snapshot
.anchor_before(hover_point.to_offset(&snapshot, Bias::Left));
hover_at(editor, Some(anchor), cx)
});
assert!(!cx.editor(|editor, _| editor.hover_state.visible()));
@@ -716,7 +704,10 @@ mod tests {
.handle_request::<lsp::request::HoverRequest, _, _>(|_, _| async move { Ok(None) });
cx.update_editor(|editor, cx| {
let snapshot = editor.snapshot(cx);
hover_at(editor, Some((hover_point, &snapshot)), cx)
let anchor = snapshot
.buffer_snapshot
.anchor_before(hover_point.to_offset(&snapshot, Bias::Left));
hover_at(editor, Some(anchor), cx)
});
cx.background_executor
.advance_clock(Duration::from_millis(HOVER_DELAY_MILLIS + 100));

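The hover changes above move `hover_at` and `show_hover` from a `(DisplayPoint, &EditorSnapshot)` pair to a buffer `Anchor`, with the snapshot taken inside `show_hover` instead. A toy sketch of the new call shape, using made-up minimal types in place of the real editor ones:

```rust
// Simplified stand-ins; the real `Anchor`, snapshot, and context types
// are far richer than this.
#[derive(Copy, Clone, Debug)]
struct Anchor(usize);

struct Snapshot {
    len: usize,
}

impl Snapshot {
    fn anchor_before(&self, offset: usize) -> Anchor {
        Anchor(offset.min(self.len))
    }
}

fn hover_at(anchor: Option<Anchor>) {
    match anchor {
        // With an anchor, show (or refresh) the hover popover there.
        Some(anchor) => println!("show hover at {anchor:?}"),
        // Without one, hide any visible hover.
        None => println!("hide hover"),
    }
}

fn main() {
    let snapshot = Snapshot { len: 100 };
    // Callers now resolve an anchor up front, as the updated tests do,
    // instead of handing `hover_at` a display point plus the snapshot.
    hover_at(Some(snapshot.anchor_before(42)));
    hover_at(None);
}
```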

@@ -1,10 +1,16 @@
use std::{ops::Range, sync::Arc};
use std::{
ops::{Range, RangeInclusive},
sync::Arc,
};
use collections::{hash_map, HashMap, HashSet};
use git::diff::{DiffHunk, DiffHunkStatus};
use gpui::{AppContext, Hsla, Model, Task, View};
use language::Buffer;
use multi_buffer::{Anchor, ExcerptRange, MultiBuffer, MultiBufferSnapshot, ToPoint};
use multi_buffer::{
Anchor, ExcerptRange, MultiBuffer, MultiBufferRow, MultiBufferSnapshot, ToPoint,
};
use settings::{Settings, SettingsStore};
use text::{BufferId, Point};
use ui::{
div, ActiveTheme, Context as _, IntoElement, ParentElement, Styled, ViewContext, VisualContext,
@@ -12,10 +18,11 @@ use ui::{
use util::{debug_panic, RangeExt};
use crate::{
editor_settings::CurrentLineHighlight,
git::{diff_hunk_to_display, DisplayDiffHunk},
hunks_for_selections, BlockDisposition, BlockId, BlockProperties, BlockStyle, DiffRowHighlight,
Editor, ExpandAllHunkDiffs, RangeToAnchorExt, RevertSelectedHunks, ToDisplayPoint,
ToggleHunkDiff,
hunk_status, hunks_for_selections, BlockDisposition, BlockId, BlockProperties, BlockStyle,
DiffRowHighlight, Editor, EditorSettings, EditorSnapshot, ExpandAllHunkDiffs, RangeToAnchorExt,
RevertSelectedHunks, ToDisplayPoint, ToggleHunkDiff,
};
#[derive(Debug, Clone)]
@@ -87,11 +94,11 @@ impl Editor {
let hunks = snapshot
.display_snapshot
.buffer_snapshot
.git_diff_hunks_in_range(0..u32::MAX)
.git_diff_hunks_in_range(MultiBufferRow::MIN..MultiBufferRow::MAX)
.filter(|hunk| {
let hunk_display_row_range = Point::new(hunk.associated_range.start, 0)
let hunk_display_row_range = Point::new(hunk.associated_range.start.0, 0)
.to_display_point(&snapshot.display_snapshot)
..Point::new(hunk.associated_range.end, 0)
..Point::new(hunk.associated_range.end.0, 0)
.to_display_point(&snapshot.display_snapshot);
let row_range_end =
display_rows_with_expanded_hunks.get(&hunk_display_row_range.start.row());
@@ -102,7 +109,7 @@ impl Editor {
fn toggle_hunks_expanded(
&mut self,
hunks_to_toggle: Vec<DiffHunk<u32>>,
hunks_to_toggle: Vec<DiffHunk<MultiBufferRow>>,
cx: &mut ViewContext<Self>,
) {
let previous_toggle_task = self.expanded_hunks.hunk_update_tasks.remove(&None);
@@ -173,10 +180,10 @@ impl Editor {
});
for remaining_hunk in hunks_to_toggle {
let remaining_hunk_point_range =
Point::new(remaining_hunk.associated_range.start, 0)
..Point::new(remaining_hunk.associated_range.end, 0);
Point::new(remaining_hunk.associated_range.start.0, 0)
..Point::new(remaining_hunk.associated_range.end.0, 0);
hunks_to_expand.push(HunkToExpand {
status: remaining_hunk.status(),
status: hunk_status(&remaining_hunk),
multi_buffer_range: remaining_hunk_point_range
.to_anchors(&snapshot.buffer_snapshot),
diff_base_byte_range: remaining_hunk.diff_base_byte_range.clone(),
@@ -184,7 +191,11 @@ impl Editor {
}
for removed_rows in highlights_to_remove {
editor.highlight_rows::<DiffRowHighlight>(removed_rows, None, cx);
editor.highlight_rows::<DiffRowHighlight>(
to_inclusive_row_range(removed_rows, &snapshot),
None,
cx,
);
}
editor.remove_blocks(blocks_to_remove, None, cx);
for hunk in hunks_to_expand {
@@ -216,9 +227,9 @@ impl Editor {
let hunk_end = hunk.multi_buffer_range.end;
let buffer = self.buffer().clone();
let snapshot = self.snapshot(cx);
let (diff_base_buffer, deleted_text_lines) = buffer.update(cx, |buffer, cx| {
let snapshot = buffer.snapshot(cx);
let hunk = buffer_diff_hunk(&snapshot, multi_buffer_row_range.clone())?;
let hunk = buffer_diff_hunk(&snapshot.buffer_snapshot, multi_buffer_row_range.clone())?;
let mut buffer_ranges = buffer.range_to_buffer_ranges(multi_buffer_row_range, cx);
if buffer_ranges.len() == 1 {
let (buffer, _, _) = buffer_ranges.pop()?;
@@ -256,7 +267,7 @@ impl Editor {
}
DiffHunkStatus::Added => {
self.highlight_rows::<DiffRowHighlight>(
hunk_start..hunk_end,
to_inclusive_row_range(hunk_start..hunk_end, &snapshot),
Some(added_hunk_color(cx)),
cx,
);
@@ -264,7 +275,7 @@ impl Editor {
}
DiffHunkStatus::Modified => {
self.highlight_rows::<DiffRowHighlight>(
hunk_start..hunk_end,
to_inclusive_row_range(hunk_start..hunk_end, &snapshot),
Some(added_hunk_color(cx)),
cx,
);
@@ -367,9 +378,10 @@ impl Editor {
}
let snapshot = editor.snapshot(cx);
let buffer_snapshot = buffer.read(cx).snapshot();
let mut recalculated_hunks = buffer_snapshot
.git_diff_hunks_in_row_range(0..u32::MAX)
let mut recalculated_hunks = snapshot
.buffer_snapshot
.git_diff_hunks_in_range(MultiBufferRow::MIN..MultiBufferRow::MAX)
.filter(|hunk| hunk.buffer_id == buffer_id)
.fuse()
.peekable();
let mut highlights_to_remove =
@@ -395,7 +407,7 @@ impl Editor {
.to_display_point(&snapshot)
.row();
while let Some(buffer_hunk) = recalculated_hunks.peek() {
match diff_hunk_to_display(buffer_hunk, &snapshot) {
match diff_hunk_to_display(&buffer_hunk, &snapshot) {
DisplayDiffHunk::Folded { display_row } => {
recalculated_hunks.next();
if !expanded_hunk.folded
@@ -434,7 +446,7 @@ impl Editor {
} else {
if !expanded_hunk.folded
&& expanded_hunk_display_range == hunk_display_range
&& expanded_hunk.status == buffer_hunk.status()
&& expanded_hunk.status == hunk_status(buffer_hunk)
&& expanded_hunk.diff_base_byte_range
== buffer_hunk.diff_base_byte_range
{
@@ -461,7 +473,11 @@ impl Editor {
});
for removed_rows in highlights_to_remove {
editor.highlight_rows::<DiffRowHighlight>(removed_rows, None, cx);
editor.highlight_rows::<DiffRowHighlight>(
to_inclusive_row_range(removed_rows, &snapshot),
None,
cx,
);
}
editor.remove_blocks(blocks_to_remove, None, cx);
@@ -565,13 +581,33 @@ fn editor_with_deleted_text(
.buffer_snapshot
.anchor_after(editor.buffer.read(cx).len(cx));
editor.highlight_rows::<DiffRowHighlight>(start..end, Some(deleted_color), cx);
let hunk_related_subscription = cx.on_blur(&editor.focus_handle, |editor, cx| {
editor.change_selections(None, cx, |s| {
s.try_cancel();
});
});
editor._subscriptions.push(hunk_related_subscription);
editor.highlight_rows::<DiffRowHighlight>(start..=end, Some(deleted_color), cx);
let subscription_editor = parent_editor.clone();
editor._subscriptions.extend([
cx.on_blur(&editor.focus_handle, |editor, cx| {
editor.set_current_line_highlight(CurrentLineHighlight::None);
editor.change_selections(None, cx, |s| {
s.try_cancel();
});
cx.notify();
}),
cx.on_focus(&editor.focus_handle, move |editor, cx| {
let restored_highlight = if let Some(parent_editor) = subscription_editor.upgrade()
{
parent_editor.read(cx).current_line_highlight
} else {
EditorSettings::get_global(cx).current_line_highlight
};
editor.set_current_line_highlight(restored_highlight);
cx.notify();
}),
cx.observe_global::<SettingsStore>(|editor, cx| {
if !editor.is_focused(cx) {
editor.set_current_line_highlight(CurrentLineHighlight::None);
}
}),
]);
let original_multi_buffer_range = hunk.multi_buffer_range.clone();
let diff_base_range = hunk.diff_base_byte_range.clone();
editor.register_action::<RevertSelectedHunks>(move |_, cx| {
@@ -603,15 +639,17 @@ fn editor_with_deleted_text(
editor
});
let editor_height = editor.update(cx, |editor, cx| editor.max_point(cx).row() as u8);
let editor_height = editor.update(cx, |editor, cx| editor.max_point(cx).row().0 as u8);
(editor_height, editor)
}
fn buffer_diff_hunk(
buffer_snapshot: &MultiBufferSnapshot,
row_range: Range<Point>,
) -> Option<DiffHunk<u32>> {
let mut hunks = buffer_snapshot.git_diff_hunks_in_range(row_range.start.row..row_range.end.row);
) -> Option<DiffHunk<MultiBufferRow>> {
let mut hunks = buffer_snapshot.git_diff_hunks_in_range(
MultiBufferRow(row_range.start.row)..MultiBufferRow(row_range.end.row),
);
let hunk = hunks.next()?;
let second_hunk = hunks.next();
if second_hunk.is_none() {
@@ -619,3 +657,18 @@ fn buffer_diff_hunk(
}
None
}
fn to_inclusive_row_range(
row_range: Range<Anchor>,
snapshot: &EditorSnapshot,
) -> RangeInclusive<Anchor> {
let mut display_row_range =
row_range.start.to_display_point(snapshot)..row_range.end.to_display_point(snapshot);
if display_row_range.end.row() > display_row_range.start.row() {
*display_row_range.end.row_mut() -= 1;
}
let point_range = display_row_range.start.to_point(&snapshot.display_snapshot)
..display_row_range.end.to_point(&snapshot.display_snapshot);
let new_range = point_range.to_anchors(&snapshot.buffer_snapshot);
new_range.start..=new_range.end
}

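Because `highlight_rows` now takes an inclusive anchor range, the new `to_inclusive_row_range` helper above pulls the exclusive end back by one display row (when the range spans more than one row) before converting back to anchors. The same end adjustment on plain integer rows, as a small sketch:

```rust
use std::ops::{Range, RangeInclusive};

// The off-by-one handling of `to_inclusive_row_range`, reduced to plain
// row numbers instead of anchors resolved through a snapshot.
fn to_inclusive(rows: Range<u32>) -> RangeInclusive<u32> {
    // A half-open end past the start points one row beyond the last row
    // to highlight, so pull it back by one; a degenerate range stays put.
    let end = if rows.end > rows.start {
        rows.end - 1
    } else {
        rows.end
    };
    rows.start..=end
}

fn main() {
    assert_eq!(to_inclusive(3..6), 3..=5);
    assert_eq!(to_inclusive(4..4), 4..=4);
    println!("ok");
}
```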

@@ -30,7 +30,7 @@ pub trait InlineCompletionProvider: 'static + Sized {
buffer: &Model<Buffer>,
cursor_position: language::Anchor,
cx: &'a AppContext,
) -> Option<&str>;
) -> Option<&'a str>;
}
pub trait InlineCompletionProviderHandle {

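The one-line provider change above ties the returned `Option<&str>` to the `'a` of the borrowed `AppContext`. A small self-contained illustration of why that annotation matters; the trait, method name, and context type here are simplified stand-ins, not the real `InlineCompletionProvider` API:

```rust
struct AppContext {
    active_text: String,
}

// Stand-in trait: the returned `&str` borrows from `cx`, so the return
// type must carry the same `'a` as the `cx` parameter.
trait TextProvider {
    fn active_completion_text<'a>(&self, cx: &'a AppContext) -> Option<&'a str>;
}

struct Provider;

impl TextProvider for Provider {
    fn active_completion_text<'a>(&self, cx: &'a AppContext) -> Option<&'a str> {
        // With an elided `-> Option<&str>`, the output lifetime would be
        // tied to `&self` here, and this borrow of `cx` would not compile.
        Some(cx.active_text.as_str())
    }
}

fn main() {
    let cx = AppContext {
        active_text: "let x = 42;".to_owned(),
    };
    println!("{:?}", Provider.active_completion_text(&cx));
}
```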

@@ -2,10 +2,12 @@
//! in the editor for a given motion (e.g. it handles converting a "move left" command into coordinates in the editor). It is exposed mostly for use by the vim crate.
use super::{Bias, DisplayPoint, DisplaySnapshot, SelectionGoal, ToDisplayPoint};
use crate::{char_kind, scroll::ScrollAnchor, CharKind, EditorStyle, ToOffset, ToPoint};
use crate::{
char_kind, scroll::ScrollAnchor, CharKind, DisplayRow, EditorStyle, RowExt, ToOffset, ToPoint,
};
use gpui::{px, Pixels, WindowTextSystem};
use language::Point;
use multi_buffer::MultiBufferSnapshot;
use multi_buffer::{MultiBufferRow, MultiBufferSnapshot};
use serde::Deserialize;
use std::{ops::Range, sync::Arc};
@@ -35,7 +37,7 @@ pub struct TextLayoutDetails {
pub fn left(map: &DisplaySnapshot, mut point: DisplayPoint) -> DisplayPoint {
if point.column() > 0 {
*point.column_mut() -= 1;
} else if point.row() > 0 {
} else if point.row().0 > 0 {
*point.row_mut() -= 1;
*point.column_mut() = map.line_len(point.row());
}
@@ -127,7 +129,7 @@ pub(crate) fn up_by_rows(
_ => map.x_for_display_point(start, text_layout_details),
};
let prev_row = start.row().saturating_sub(row_count);
let prev_row = DisplayRow(start.row().0.saturating_sub(row_count));
let mut point = map.clip_point(
DisplayPoint::new(prev_row, map.line_len(prev_row)),
Bias::Left,
@@ -137,7 +139,7 @@ pub(crate) fn up_by_rows(
} else if preserve_column_at_start {
return (start, goal);
} else {
point = DisplayPoint::new(0, 0);
point = DisplayPoint::new(DisplayRow(0), 0);
goal_x = px(0.);
}
@@ -166,7 +168,7 @@ pub(crate) fn down_by_rows(
_ => map.x_for_display_point(start, text_layout_details),
};
let new_row = start.row() + row_count;
let new_row = DisplayRow(start.row().0 + row_count);
let mut point = map.clip_point(DisplayPoint::new(new_row, 0), Bias::Right);
if point.row() > start.row() {
*point.column_mut() = map.display_column_for_x(point.row(), goal_x, text_layout_details)
@@ -220,7 +222,9 @@ pub fn indented_line_beginning(
let soft_line_start = map.clip_point(DisplayPoint::new(display_point.row(), 0), Bias::Right);
let indent_start = Point::new(
point.row,
map.buffer_snapshot.indent_size_for_line(point.row).len,
map.buffer_snapshot
.indent_size_for_line(MultiBufferRow(point.row))
.len,
)
.to_display_point(map);
let line_start = map.prev_line_boundary(point).1;
@@ -326,7 +330,7 @@ pub fn start_of_paragraph(
let mut found_non_blank_line = false;
for row in (0..point.row + 1).rev() {
let blank = map.buffer_snapshot.is_line_blank(row);
let blank = map.buffer_snapshot.is_line_blank(MultiBufferRow(row));
if found_non_blank_line && blank {
if count <= 1 {
return Point::new(row, 0).to_display_point(map);
@@ -349,13 +353,13 @@ pub fn end_of_paragraph(
mut count: usize,
) -> DisplayPoint {
let point = display_point.to_point(map);
if point.row == map.max_buffer_row() {
if point.row == map.max_buffer_row().0 {
return map.max_point();
}
let mut found_non_blank_line = false;
for row in point.row..map.max_buffer_row() + 1 {
let blank = map.buffer_snapshot.is_line_blank(row);
for row in point.row..map.max_buffer_row().next_row().0 {
let blank = map.buffer_snapshot.is_line_blank(MultiBufferRow(row));
if found_non_blank_line && blank {
if count <= 1 {
return Point::new(row, 0).to_display_point(map);
@@ -549,13 +553,16 @@ pub fn split_display_range_by_lines(
let mut start = range.start;
// Loop over all the covered rows until the one containing the range end
for row in range.start.row()..range.end.row() {
let row_end_column = map.line_len(row);
let end = map.clip_point(DisplayPoint::new(row, row_end_column), Bias::Left);
for row in range.start.row().0..range.end.row().0 {
let row_end_column = map.line_len(DisplayRow(row));
let end = map.clip_point(
DisplayPoint::new(DisplayRow(row), row_end_column),
Bias::Left,
);
if start != end {
result.push(start..end);
}
start = map.clip_point(DisplayPoint::new(row + 1, 0), Bias::Left);
start = map.clip_point(DisplayPoint::new(DisplayRow(row + 1), 0), Bias::Left);
}
// Add the final range from the start of the last end to the original range end.
@@ -570,7 +577,7 @@ mod tests {
use crate::{
display_map::Inlay,
test::{editor_test_context::EditorTestContext, marked_display_snapshot},
Buffer, DisplayMap, ExcerptRange, InlayId, MultiBuffer,
Buffer, DisplayMap, DisplayRow, ExcerptRange, InlayId, MultiBuffer,
};
use gpui::{font, Context as _};
use language::Capability;
@@ -900,126 +907,126 @@ mod tests {
assert_eq!(snapshot.text(), "\n\nabc\ndefg\n\n\nhijkl\nmn");
let col_2_x =
snapshot.x_for_display_point(DisplayPoint::new(2, 2), &text_layout_details);
let col_2_x = snapshot
.x_for_display_point(DisplayPoint::new(DisplayRow(2), 2), &text_layout_details);
// Can't move up into the first excerpt's header
assert_eq!(
up(
&snapshot,
DisplayPoint::new(2, 2),
DisplayPoint::new(DisplayRow(2), 2),
SelectionGoal::HorizontalPosition(col_2_x.0),
false,
&text_layout_details
),
(
DisplayPoint::new(2, 0),
DisplayPoint::new(DisplayRow(2), 0),
SelectionGoal::HorizontalPosition(0.0)
),
);
assert_eq!(
up(
&snapshot,
DisplayPoint::new(2, 0),
DisplayPoint::new(DisplayRow(2), 0),
SelectionGoal::None,
false,
&text_layout_details
),
(
DisplayPoint::new(2, 0),
DisplayPoint::new(DisplayRow(2), 0),
SelectionGoal::HorizontalPosition(0.0)
),
);
let col_4_x =
snapshot.x_for_display_point(DisplayPoint::new(3, 4), &text_layout_details);
let col_4_x = snapshot
.x_for_display_point(DisplayPoint::new(DisplayRow(3), 4), &text_layout_details);
// Move up and down within first excerpt
assert_eq!(
up(
&snapshot,
DisplayPoint::new(3, 4),
DisplayPoint::new(DisplayRow(3), 4),
SelectionGoal::HorizontalPosition(col_4_x.0),
false,
&text_layout_details
),
(
DisplayPoint::new(2, 3),
DisplayPoint::new(DisplayRow(2), 3),
SelectionGoal::HorizontalPosition(col_4_x.0)
),
);
assert_eq!(
down(
&snapshot,
DisplayPoint::new(2, 3),
DisplayPoint::new(DisplayRow(2), 3),
SelectionGoal::HorizontalPosition(col_4_x.0),
false,
&text_layout_details
),
(
DisplayPoint::new(3, 4),
DisplayPoint::new(DisplayRow(3), 4),
SelectionGoal::HorizontalPosition(col_4_x.0)
),
);
let col_5_x =
snapshot.x_for_display_point(DisplayPoint::new(6, 5), &text_layout_details);
let col_5_x = snapshot
.x_for_display_point(DisplayPoint::new(DisplayRow(6), 5), &text_layout_details);
// Move up and down across second excerpt's header
assert_eq!(
up(
&snapshot,
DisplayPoint::new(6, 5),
DisplayPoint::new(DisplayRow(6), 5),
SelectionGoal::HorizontalPosition(col_5_x.0),
false,
&text_layout_details
),
(
DisplayPoint::new(3, 4),
DisplayPoint::new(DisplayRow(3), 4),
SelectionGoal::HorizontalPosition(col_5_x.0)
),
);
assert_eq!(
down(
&snapshot,
DisplayPoint::new(3, 4),
DisplayPoint::new(DisplayRow(3), 4),
SelectionGoal::HorizontalPosition(col_5_x.0),
false,
&text_layout_details
),
(
DisplayPoint::new(6, 5),
DisplayPoint::new(DisplayRow(6), 5),
SelectionGoal::HorizontalPosition(col_5_x.0)
),
);
let max_point_x =
snapshot.x_for_display_point(DisplayPoint::new(7, 2), &text_layout_details);
let max_point_x = snapshot
.x_for_display_point(DisplayPoint::new(DisplayRow(7), 2), &text_layout_details);
// Can't move down off the end
assert_eq!(
down(
&snapshot,
DisplayPoint::new(7, 0),
DisplayPoint::new(DisplayRow(7), 0),
SelectionGoal::HorizontalPosition(0.0),
false,
&text_layout_details
),
(
DisplayPoint::new(7, 2),
DisplayPoint::new(DisplayRow(7), 2),
SelectionGoal::HorizontalPosition(max_point_x.0)
),
);
assert_eq!(
down(
&snapshot,
DisplayPoint::new(7, 2),
DisplayPoint::new(DisplayRow(7), 2),
SelectionGoal::HorizontalPosition(max_point_x.0),
false,
&text_layout_details
),
(
DisplayPoint::new(7, 2),
DisplayPoint::new(DisplayRow(7), 2),
SelectionGoal::HorizontalPosition(max_point_x.0)
),
);

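The movement changes above thread the `DisplayRow` newtype through `up_by_rows`, `down_by_rows`, and the tests, wrapping with `DisplayRow(...)` and unwrapping with `.0` around the row arithmetic. A compact sketch of that arithmetic under a stand-in newtype, with the display-map clipping reduced to a plain clamp:

```rust
#[derive(Copy, Clone, Debug, PartialEq)]
struct DisplayRow(u32);

fn up_by(row: DisplayRow, count: u32) -> DisplayRow {
    // Saturate at row zero instead of underflowing, as the diff does with
    // `start.row().0.saturating_sub(row_count)`.
    DisplayRow(row.0.saturating_sub(count))
}

fn down_by(row: DisplayRow, count: u32, max_row: DisplayRow) -> DisplayRow {
    // The real code clips through the display map; a max-row clamp stands
    // in for that here.
    DisplayRow((row.0 + count).min(max_row.0))
}

fn main() {
    assert_eq!(up_by(DisplayRow(1), 3), DisplayRow(0));
    assert_eq!(down_by(DisplayRow(6), 4, DisplayRow(7)), DisplayRow(7));
    println!("ok");
}
```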

@@ -6,8 +6,8 @@ use crate::{
display_map::{DisplaySnapshot, ToDisplayPoint},
hover_popover::hide_hover,
persistence::DB,
Anchor, DisplayPoint, Editor, EditorEvent, EditorMode, EditorSettings, InlayHintRefreshReason,
MultiBufferSnapshot, ToPoint,
Anchor, DisplayPoint, DisplayRow, Editor, EditorEvent, EditorMode, EditorSettings,
InlayHintRefreshReason, MultiBufferSnapshot, RowExt, ToPoint,
};
pub use autoscroll::{Autoscroll, AutoscrollStrategy};
use gpui::{point, px, AppContext, Entity, Global, Pixels, Task, ViewContext, WindowContext};
@@ -48,7 +48,7 @@ impl ScrollAnchor {
if self.anchor == Anchor::min() {
scroll_position.y = 0.;
} else {
let scroll_top = self.anchor.to_display_point(snapshot).row() as f32;
let scroll_top = self.anchor.to_display_point(snapshot).row().as_f32();
scroll_position.y = scroll_top + scroll_position.y;
}
scroll_position
@@ -200,7 +200,7 @@ impl ScrollManager {
)
} else {
let scroll_top_buffer_point =
DisplayPoint::new(scroll_position.y as u32, 0).to_point(&map);
DisplayPoint::new(DisplayRow(scroll_position.y as u32), 0).to_point(&map);
let top_anchor = map
.buffer_snapshot
.anchor_at(scroll_top_buffer_point, Bias::Right);
@@ -210,7 +210,7 @@ impl ScrollManager {
anchor: top_anchor,
offset: point(
scroll_position.x.max(0.),
scroll_position.y - top_anchor.to_display_point(&map).row() as f32,
scroll_position.y - top_anchor.to_display_point(&map).row().as_f32(),
),
},
scroll_top_buffer_point.row,
@@ -472,7 +472,7 @@ impl Editor {
}
if let Some(visible_lines) = self.visible_line_count() {
if newest_head.row() < screen_top.row() + visible_lines as u32 {
if newest_head.row() < DisplayRow(screen_top.row().0 + visible_lines as u32) {
return Ordering::Equal;
}
}

Some files were not shown because too many files have changed in this diff.