Compare commits

...

23 Commits

Author SHA1 Message Date
Richard Feldman
0e1fe66a19 Show Lua script with syntax highlighting in assistant2 2025-03-10 11:26:12 -04:00
Marshall Bowers
ed52e759d7 docs: Fix language links (#26368)
This PR fixes some language links in the docs.

The Shell Script page wasn't being linked from `SUMMARY.md`, so no page
was being generated.

There were also some differences in the language lists in the sidebar
and on the top-level languages page.

Release Notes:

- N/A
2025-03-10 15:22:51 +00:00
Richard Feldman
6da099a9d7 Unsandbox Lua scripts (#26365)
Per a conversation with @nathansobo, Lua scripts run unsandboxed for now
(while this feature is behind the staff feature flag).

Release Notes:

- N/A
2025-03-10 11:04:37 -04:00
Smit Barmase
5f159bc95e go_to_line: Fix goto line + mouse click jumps to previous scroll position (#26362)
Closes #20658

Now, when the "Go to Line" palette is open:  
- Clicking on the editor will dismiss the palette without changing the
scroll position. (PR change)
- Pressing Enter will jump to the line number entered in the palette.
(Unchanged)
- Pressing Escape will jump back to the previous cursor location.
(Unchanged)

Release Notes:

- Fixed an issue where clicking the editor with the mouse while the "Go
to Line" palette is open would cause it to jump to the previous scroll
position.
2025-03-10 20:33:07 +05:30
Marshall Bowers
a4462577bf Sort Cargo.tomls (#26367)
This PR sorts some `Cargo.toml`s that had become unsorted.

Release Notes:

- N/A
2025-03-10 14:48:21 +00:00
João Marcos
c147b58558 Remove redundant checks in do_stage_or_unstage_and_next (#26364)
Release Notes:

- N/A
2025-03-10 14:23:17 +00:00
Devzeth
84fe1bfe9b Recognize ixx as part of the cpp suffix (#26333)
Adds "ixx" as path suffix to be recognized for c++. 

> ixx documentation
https://learn.microsoft.com/en-us/cpp/cpp/modules-cpp?view=msvc-

I've also added it to the icon file. 

Release Notes:

- N/A
2025-03-10 09:10:29 -05:00
Danilo Leal
657d7a911d Add logo for wgsl (WebGPU Shading Language) (#26360)
I was dabbling with shaders these past few days and felt like we could
have the WGSL logo. This is based on the logo found on the GPU Web
repository: https://github.com/gpuweb/gpuweb/tree/main/logo

Release Notes:

- N/A
2025-03-10 09:44:19 -03:00
Danilo Leal
ee05cc3ad9 Add a line numbers toggle to the editor controls menu (#26318)
Closes https://github.com/zed-industries/zed/issues/26305

<img
src="https://github.com/user-attachments/assets/795029ad-128a-471f-9adf-c0ef26319bbf"
width="400px" />

Release Notes:

- N/A
2025-03-10 09:28:46 -03:00
Smit Barmase
5ed144f9d2 macOS: Add support for external file managers to open directory in Zed (#26357)
Closes #25421

This PR adds support for external file managers to show Zed as an option
in the "Open With" context menu for directories on macOS.

<img width="350" alt="image"
src="https://github.com/user-attachments/assets/c52acd48-73c4-47be-8683-6950e0371b73"
/>


Release Notes:

- Added support for opening folders in Zed from third-party macOS file
managers like Path Finder and Super Charge through their "Open With"
menu.
2025-03-10 15:21:39 +05:30
Julia Ryan
2a862b3c54 nix: Disable checks and remove crane workaround (#26356)
The checkPhase was failing for me on darwin, so I turned it off. I think
eventually we'll want to use a separate derivation for tests (which
crane has a helper for).

Crane also solved our issue with spaces in paths, so I bumped the flake
to pick up that fix and removed our workaround: ipetkov/crane#808.

Release Notes:

- N/A
2025-03-10 02:00:57 -07:00
Julia Ryan
4a7c84f490 Fix nix build (#26270)
This PR includes lots of small fixes to get our `build.nix` and
`shell.nix` back to a working state.

I've tested this by running `cargo run` (inside the devshell) and `nix
run` on x86 nixos and arm64 darwin machines. I'd appreciate it if others
could test building inside the devshell to double-check that it's not
just working because I happen to have some system-level packages
installed, as well as seeing if it works on other platforms (non-nixos
linux, arm linux, x86 darwin).

I couldn't get the full test suite (`cargo nextest run --workspace`)
passing in the devshell on darwin, but the tests _are_ all passing on nixos.
nixpkgs [disables some of our
tests](92d11f06d5/pkgs/by-name/ze/zed-editor/package.nix (L226-L234))
that apparently fail or are flaky on hydra, but they don't know why.
I'm going to punt on debugging those for now, especially given that they
seem to be working for me. I'm also unsure whether we actually want
the nix checkPhase to run the full test suite (it's currently not
passing `--workspace`), given that we have separate CI that should
enforce that those pass on all PRs.

Here's an overview of the changes made:
- Fix our `generate-licenses` script
- Relaxes the `cargo-about` version requirement slightly so it doesn't
try to install an older binary when the nixpkgs one is newer than our
requirement
- Add a workaround for [this cargo-about
issue](https://github.com/zed-industries/zed/issues/19971) obviating the
need for the patching done in the nixpkgs package
- Set the new `--frozen` flag to avoid network access/mutating the
lockfile
- Use dynamic webrtc lib from nixpkgs, and fix up the build script in
webrtc-sys that hardcodes it to be statically linked.
- Use `inputsFrom` in `shell.nix` and avoid duplicating everything from
`build.nix`
- Add a temporary workaround for an [upstream crane
bug](https://github.com/ipetkov/crane/issues/808).
- Fix shebangs in our `script` dir to not hard-code `/bin/bash`

There are still a bunch of issues that aren't resolved here; I'll make a
tracking issue for those and try to land this first, just to get back to
an unbroken state. Eventually, among other things, I'd like to use a
`libgit2` from `staticPkgs` and musl cross compilation to build the
remote server under nix, then add that as a separate flake output
and include it in the shell's `inputsFrom` list.

Thanks @niklaskorz, @GaetanLepage, @bbigras and all the other nixpkgs
maintainers who have kept the `zed-editor` package working and up to
date! I seriously considered just making our flake `overrideAttrs` the
package in nixpkgs given how well maintained it is.

Thanks @WeetHet for your volunteer maintenance of this flake. I
referenced #24953 while working on these fixes, and I'd love to
collaborate on adding some of those pieces like treefmt and a github
action. If you're interested I'd really appreciate some help debugging
why crane's `buildDepsOnly` isn't working for us. I'm assuming it'd make
our `nix build` times go way down from the improved dep caching if we
could get it working.

Thanks @rrbutani for all the help on this PR 💙.

Release Notes:

- N/A

---------

Co-authored-by: Rahul Butani <rrbutani@users.noreply.github.com>
Co-authored-by: Rahul Butani <rr.butani@gmail.com>
2025-03-10 01:06:11 -07:00
Mikayla Maki
230e2e4107 Restore git panel header (#26354)
Let's play around with it. This should not be added to tomorrow's
preview.

Release Notes:

- Git Beta: Added a panel header with an open diff and stage/unstage all
buttons.
2025-03-10 07:08:10 +00:00
Richard Hao
d732b8ba0f git: Disable commit message generation when commit not possible (#26329)
## Issue:

- `Generate Commit Message` will generate a random message if there are
no changes.
<img width="614" alt="image"
src="https://github.com/user-attachments/assets/c16cadac-01af-47c0-a2db-a5bbf62f84bb"
/>


## After the fix:

- `Generate Commit Message` will be disabled if a commit is not possible
<img width="610" alt="image"
src="https://github.com/user-attachments/assets/5ea9ca70-6fa3-4144-ab4e-be7a986d5496"
/>


## Release Notes:

- Fixed: Disable commit message generation when commit is not possible
2025-03-09 23:45:25 -07:00
Michael Sloan
7c3eecc9c7 Add support for querying file outline in assistant script (#26351)
Release Notes:

- N/A
2025-03-10 05:26:17 +00:00
Max Brunsfeld
fff37ab823 Follow-up fixes for recent multi buffer optimizations (#26345)
I realized that the optimization broke multi buffer syncing after buffer
reparses.

Release Notes:

- N/A
2025-03-09 22:15:38 -07:00
Kirill Bulatov
8a7a78fafb Avoid modifying the LSP message before resolving it (#26347)
Closes https://github.com/zed-industries/zed/issues/21277

On the left is current Zed; on the right is the improved version.
The 3rd message, sent from Zed to resolve the item, does not have a
`textEdit` on the right side, but has one on the left.
This doesn't seem to influence the end result, but at least Zed behaves
more appropriately now.

<img width="1727" alt="image"
src="https://github.com/user-attachments/assets/ca1236fd-9ce2-41ba-88fe-1f3178cdcbde"
/>


Instead of modifying the original LSP completion item, store completion
list defaults and apply them when the item is requested (except `data`
defaults, needed for resolve).

Now, the only place that can modify the completion items is this method,
and the Python implementation seems to be the one doing it:


ca9c3af56f/crates/languages/src/python.rs (L182-L204)

It seems OK to leave that untouched for now.
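
For illustration only, here's a minimal sketch of that approach with made-up types (neither Zed's nor `lsp-types`' real structures): the list-level defaults are stored alongside the unmodified items and merged in only when an item is handed out, while `data` is deliberately left alone so the resolve request still carries the server's original payload.

```rust
#[derive(Clone, Default)]
struct ListDefaults {
    commit_characters: Option<Vec<String>>,
    insert_text_format: Option<String>,
}

#[derive(Clone, Default)]
struct CompletionItem {
    label: String,
    commit_characters: Option<Vec<String>>,
    insert_text_format: Option<String>,
    /// Opaque payload (JSON in the real protocol) the server needs for resolve.
    data: Option<String>,
}

struct CompletionList {
    defaults: ListDefaults,
    /// Items stored exactly as the server sent them.
    items: Vec<CompletionItem>,
}

impl CompletionList {
    /// Returns a copy of the item with the list defaults applied, leaving the
    /// stored item untouched so a later resolve request sees the original.
    fn item_for_use(&self, ix: usize) -> CompletionItem {
        let mut item = self.items[ix].clone();
        if item.commit_characters.is_none() {
            item.commit_characters = self.defaults.commit_characters.clone();
        }
        if item.insert_text_format.is_none() {
            item.insert_text_format = self.defaults.insert_text_format.clone();
        }
        // Intentionally no default applied to `data`: it must stay as-is for resolve.
        item
    }
}
```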

Release Notes:

- Fixed LSP completion items modified before resolve request
2025-03-10 00:12:53 +02:00
Ben Kunkle
6de3ac3e17 Revert "Highlight super and this as keywords in JS/TS/TSX" (#26342)
Reverts zed-industries/zed#25135

This approach was not the best, as explained in the response to the
original PR. A better approach is likely to create a new, more specific
scope for these kinds of variables under the `@variable` prefix so that
themes can control these pseudo-keywords specifically.
2025-03-09 16:11:37 +00:00
Smit Barmase
5aae3bdc69 copilot: Fix missing sign-out button when Zed is the edit prediction provider (#26340)
Closes #25884

Added a sign-out button for Copilot in Assistant settings, allowing
sign-out even when Copilot is disabled.

<img width="500" alt="image"
src="https://github.com/user-attachments/assets/43fc97ad-f73c-49e1-a7b6-a3910434d661"
/>



Release Notes:

- Added a sign-out button for Copilot in Assistant settings.
2025-03-09 21:39:14 +05:30
Agus Zubiaga
e298301b40 assistant: Make scripting a first-class concept instead of a tool (#26338)
This PR refactors the scripting functionality into a first-class concept
of the assistant instead of a generic tool, which will allow us to build
a more customized experience.

- The tool prompt has been slightly tweaked and is now included as a
system message in all conversations. I'm getting decent results, but now
that it isn't in the tools framework, it will probably require more
refining.

- The model will now include an `<eval ...>` tag at the end of the
message with the script. We parse this tag incrementally as it streams
in, so that we can indicate that a script is being generated before we see
the closing `</eval>` tag. Later, this will also help us interpret the
script as it arrives (see the sketch after this list).

- Threads now hold a `ScriptSession` entity which manages the state of
all scripts (from parsing to exited) in a centralized way, and will
later collect all script operations so they can be displayed in the UI.

- `script_tool` has been renamed to `assistant_scripting` 

- Script source now opens in a regular read-only buffer  

Note: We still need to handle persistence properly
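
To make the streaming behavior concrete, here is a small usage sketch of the `ScriptTagParser` added in this PR (its API appears in the tag parser diff further down): `parse_chunk` strips the `<eval>` tag from each chunk, `found_script` turns true as soon as the opening tag is seen, and the full script source is returned once the closing tag arrives. The chunk contents below are made up for illustration.

```rust
use assistant_scripting::ScriptTagParser;

fn main() {
    let mut parser = ScriptTagParser::new();

    // Chunks as they might stream in from the model.
    let chunks = [
        "Let me check that file.\n<eval type=\"lua\">",
        "print(outline(\"src/main.rs\"))",
        "</eval>",
    ];

    for chunk in chunks {
        let output = parser.parse_chunk(chunk);

        // `content` is the chunk with the script tag stripped out; it is what
        // gets appended to the rendered assistant message.
        print!("{}", output.content);

        if parser.found_script() {
            // The opening tag has been seen, so the UI can already show a
            // "Generating script" state before the closing tag arrives.
        }

        // Once the closing `</eval>` arrives, the full script source is available.
        if let Some(source) = output.script_source {
            assert_eq!(source, "print(outline(\"src/main.rs\"))");
        }
    }
}
```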

Release Notes:

- N/A

---------

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-03-09 09:01:49 +00:00
brian tan
ed6bf7f161 diagnostics: Fix losing focus when activating from diagnostics view (#25517)
Closes #25509

Changes:
- If the active item is already the diagnostics view, don't try to focus it again.

Instead of not focusing, should it just not activate at all? Something
like:

            if !workspace
                .active_item(cx)
                .map(|item| item.item_id() == existing.item_id())
                .unwrap_or(false)
            {
                workspace.activate_item(&existing, true, true, window, cx);
            }


Release Notes:

- N/A
2025-03-08 22:17:20 +00:00
Smit Barmase
f14d6670ba copilot: Fix onboarding into Copilot requires Zed restart (#26330)
Closes #25594

This PR fixes an issue where signing into Copilot required restarting
Zed.

Copilot depends on an OAuth token that comes from either `hosts.json` or
`apps.json`. Initially, neither file exists. If neither file is
found, we fall back to watching `hosts.json` for updates. However, if the
auth process creates `apps.json`, we won't receive updates from it,
causing the UI to remain outdated.

This PR fixes that by watching the parent `github-copilot` directory
instead, which will always contain one of those files along with an
additional version file.
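
A rough sketch of that idea, using the `notify` crate for illustration rather than Zed's own file-watching machinery (the config path below is also an assumption): watching the parent directory means a `hosts.json` or `apps.json` that is created only after the watch starts still produces an event.

```rust
use notify::{recommended_watcher, RecursiveMode, Watcher};
use std::{path::PathBuf, sync::mpsc};

fn main() -> notify::Result<()> {
    // Assumed location of the Copilot config directory.
    let copilot_dir = PathBuf::from(std::env::var("HOME").expect("HOME not set"))
        .join(".config")
        .join("github-copilot");

    let (tx, rx) = mpsc::channel();
    let mut watcher = recommended_watcher(tx)?;

    // Watch the directory itself rather than `hosts.json`, so that a newly
    // created `apps.json` (or either file appearing later) still triggers an event.
    watcher.watch(&copilot_dir, RecursiveMode::NonRecursive)?;

    for event in rx {
        let event = event?;
        let credentials_changed = event.paths.iter().any(|path| {
            matches!(
                path.file_name().and_then(|name| name.to_str()),
                Some("hosts.json") | Some("apps.json")
            )
        });
        if credentials_changed {
            // Re-read the OAuth token and refresh the sign-in status here.
            println!("Copilot credentials changed: {:?}", event.paths);
        }
    }
    Ok(())
}
```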

I have tested this on macOS and Linux Wayland.

Release Notes:

- Fixed an issue where signing into Copilot required restarting Zed.
2025-03-09 03:19:09 +05:30
Joseph T. Lyons
22d9b5d8ca Update key binding documentation (#26321)
Release Notes:

- N/A
2025-03-08 01:03:52 -05:00
80 changed files with 1865 additions and 806 deletions

45
Cargo.lock generated
View File

@@ -84,7 +84,7 @@ dependencies = [
[[package]]
name = "alacritty_terminal"
version = "0.25.1-dev"
source = "git+https://github.com/zed-industries/alacritty.git?rev=03c2907b44b4189aac5fdeaea331f5aab5c7072e#03c2907b44b4189aac5fdeaea331f5aab5c7072e"
source = "git+https://github.com/zed-industries/alacritty.git?branch=add-hush-login-flag#828457c9ff1f7ea0a0469337cc8a37ee3a1b0590"
dependencies = [
"base64 0.22.1",
"bitflags 2.8.0",
@@ -450,6 +450,7 @@ version = "0.1.0"
dependencies = [
"anyhow",
"assistant_context_editor",
"assistant_scripting",
"assistant_settings",
"assistant_slash_command",
"assistant_tool",
@@ -489,6 +490,7 @@ dependencies = [
"prompt_store",
"proto",
"rand 0.8.5",
"rich_text",
"rope",
"serde",
"serde_json",
@@ -563,6 +565,26 @@ dependencies = [
"workspace",
]
[[package]]
name = "assistant_scripting"
version = "0.1.0"
dependencies = [
"anyhow",
"collections",
"futures 0.3.31",
"gpui",
"log",
"mlua",
"parking_lot",
"project",
"rand 0.8.5",
"regex",
"serde",
"serde_json",
"settings",
"util",
]
[[package]]
name = "assistant_settings"
version = "0.1.0"
@@ -11910,26 +11932,6 @@ version = "1.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a3cf7c11c38cb994f3d40e8a8cde3bbd1f72a435e4c49e85d6553d8312306152"
[[package]]
name = "scripting_tool"
version = "0.1.0"
dependencies = [
"anyhow",
"assistant_tool",
"collections",
"futures 0.3.31",
"gpui",
"mlua",
"parking_lot",
"project",
"regex",
"schemars",
"serde",
"serde_json",
"settings",
"util",
]
[[package]]
name = "scrypt"
version = "0.11.0"
@@ -16984,7 +16986,6 @@ dependencies = [
"repl",
"reqwest_client",
"rope",
"scripting_tool",
"search",
"serde",
"serde_json",

View File

@@ -8,6 +8,7 @@ members = [
"crates/assistant",
"crates/assistant2",
"crates/assistant_context_editor",
"crates/assistant_scripting",
"crates/assistant_settings",
"crates/assistant_slash_command",
"crates/assistant_slash_commands",
@@ -118,7 +119,6 @@ members = [
"crates/rope",
"crates/rpc",
"crates/schema_generator",
"crates/scripting_tool",
"crates/search",
"crates/semantic_index",
"crates/semantic_version",
@@ -318,7 +318,7 @@ reqwest_client = { path = "crates/reqwest_client" }
rich_text = { path = "crates/rich_text" }
rope = { path = "crates/rope" }
rpc = { path = "crates/rpc" }
scripting_tool = { path = "crates/scripting_tool" }
assistant_scripting = { path = "crates/assistant_scripting" }
search = { path = "crates/search" }
semantic_index = { path = "crates/semantic_index" }
semantic_version = { path = "crates/semantic_version" }
@@ -370,7 +370,7 @@ zeta = { path = "crates/zeta" }
#
aho-corasick = "1.1"
alacritty_terminal = { git = "https://github.com/zed-industries/alacritty.git", rev = "03c2907b44b4189aac5fdeaea331f5aab5c7072e" }
alacritty_terminal = { git = "https://github.com/zed-industries/alacritty.git", branch = "add-hush-login-flag" }
any_vec = "0.14"
anyhow = "1.0.86"
arrayvec = { version = "0.7.4", features = ["serde"] }

View File

@@ -0,0 +1,7 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M6.5 13L1.5 5H11.5L6.5 13Z" fill="black"/>
<path d="M14 9H9L11.5 5L14 9Z" fill="black" fill-opacity="0.75"/>
<path d="M9 9L14 9L11.5 13L9 9Z" fill="black" fill-opacity="0.65"/>
<path d="M14 5L15.25 7L12.75 7L14 5Z" fill="black" fill-opacity="0.5"/>
<path d="M14 9L12.75 7H15.25L14 9Z" fill="black" fill-opacity="0.55"/>
</svg>


View File

@@ -21,6 +21,7 @@ test-support = [
[dependencies]
anyhow.workspace = true
assistant_context_editor.workspace = true
assistant_scripting.workspace = true
assistant_settings.workspace = true
assistant_slash_command.workspace = true
assistant_tool.workspace = true
@@ -63,6 +64,7 @@ serde.workspace = true
serde_json.workspace = true
settings.workspace = true
smol.workspace = true
rich_text.workspace = true
streaming_diff.workspace = true
telemetry_events.workspace = true
terminal.workspace = true
@@ -81,8 +83,8 @@ zed_actions.workspace = true
[dev-dependencies]
editor = { workspace = true, features = ["test-support"] }
gpui = { workspace = true, "features" = ["test-support"] }
indoc.workspace = true
language = { workspace = true, "features" = ["test-support"] }
language_model = { workspace = true, "features" = ["test-support"] }
project = { workspace = true, features = ["test-support"] }
rand.workspace = true
indoc.workspace = true

View File

@@ -1,26 +1,32 @@
use std::sync::Arc;
use collections::HashMap;
use assistant_scripting::{ScriptId, ScriptState};
use collections::{HashMap, HashSet};
use editor::{Editor, MultiBuffer};
use futures::FutureExt;
use gpui::{
list, AbsoluteLength, AnyElement, App, ClickEvent, DefiniteLength, EdgesRefinement, Empty,
Entity, Focusable, Length, ListAlignment, ListOffset, ListState, StyleRefinement, Subscription,
Task, TextStyleRefinement, UnderlineStyle,
Task, TextStyleRefinement, UnderlineStyle, WeakEntity,
};
use language::{Buffer, LanguageRegistry};
use language_model::{LanguageModelRegistry, LanguageModelToolUseId, Role};
use markdown::{Markdown, MarkdownStyle};
use settings::Settings as _;
use std::ops::Range;
use std::sync::Arc;
use theme::ThemeSettings;
use ui::{prelude::*, Disclosure, KeyBinding};
use util::ResultExt as _;
use workspace::Workspace;
use crate::thread::{MessageId, RequestKind, Thread, ThreadError, ThreadEvent};
use crate::thread_store::ThreadStore;
use crate::tool_use::{ToolUse, ToolUseStatus};
use crate::ui::ContextPill;
use gpui::{HighlightStyle, StyledText};
use rich_text::{self, Highlight};
pub struct ActiveThread {
workspace: WeakEntity<Workspace>,
language_registry: Arc<LanguageRegistry>,
thread_store: Entity<ThreadStore>,
thread: Entity<Thread>,
@@ -30,6 +36,7 @@ pub struct ActiveThread {
rendered_messages_by_id: HashMap<MessageId, Entity<Markdown>>,
editing_message: Option<(MessageId, EditMessageState)>,
expanded_tool_uses: HashMap<LanguageModelToolUseId, bool>,
expanded_scripts: HashSet<ScriptId>,
last_error: Option<ThreadError>,
_subscriptions: Vec<Subscription>,
}
@@ -40,6 +47,7 @@ struct EditMessageState {
impl ActiveThread {
pub fn new(
workspace: WeakEntity<Workspace>,
thread: Entity<Thread>,
thread_store: Entity<ThreadStore>,
language_registry: Arc<LanguageRegistry>,
@@ -52,6 +60,7 @@ impl ActiveThread {
];
let mut this = Self {
workspace,
language_registry,
thread_store,
thread: thread.clone(),
@@ -59,6 +68,7 @@ impl ActiveThread {
messages: Vec::new(),
rendered_messages_by_id: HashMap::default(),
expanded_tool_uses: HashMap::default(),
expanded_scripts: HashSet::default(),
list_state: ListState::new(0, ListAlignment::Bottom, px(1024.), {
let this = cx.entity().downgrade();
move |ix, window: &mut Window, cx: &mut App| {
@@ -241,7 +251,7 @@ impl ActiveThread {
fn handle_thread_event(
&mut self,
_: &Entity<Thread>,
_thread: &Entity<Thread>,
event: &ThreadEvent,
window: &mut Window,
cx: &mut Context<Self>,
@@ -306,6 +316,14 @@ impl ActiveThread {
}
}
}
ThreadEvent::ScriptFinished => {
let model_registry = LanguageModelRegistry::read_global(cx);
if let Some(model) = model_registry.active_model() {
self.thread.update(cx, |thread, cx| {
thread.send_to_model(model, RequestKind::Chat, false, cx);
});
}
}
}
}
@@ -445,12 +463,16 @@ impl ActiveThread {
return Empty.into_any();
};
let context = self.thread.read(cx).context_for_message(message_id);
let tool_uses = self.thread.read(cx).tool_uses_for_message(message_id);
let colors = cx.theme().colors();
let thread = self.thread.read(cx);
let context = thread.context_for_message(message_id);
let tool_uses = thread.tool_uses_for_message(message_id);
// Don't render user messages that are just there for returning tool results.
if message.role == Role::User && self.thread.read(cx).message_has_tool_results(message_id) {
if message.role == Role::User
&& (thread.message_has_tool_results(message_id)
|| thread.message_has_script_output(message_id))
{
return Empty.into_any();
}
@@ -463,6 +485,8 @@ impl ActiveThread {
.filter(|(id, _)| *id == message_id)
.map(|(_, state)| state.editor.clone());
let colors = cx.theme().colors();
let message_content = v_flex()
.child(
if let Some(edit_message_editor) = edit_message_editor.clone() {
@@ -597,6 +621,7 @@ impl ActiveThread {
Role::Assistant => div()
.id(("message-container", ix))
.child(message_content)
.children(self.render_script(message_id, cx))
.map(|parent| {
if tool_uses.is_empty() {
return parent;
@@ -716,6 +741,191 @@ impl ActiveThread {
}),
)
}
fn render_script(&self, message_id: MessageId, cx: &mut Context<Self>) -> Option<AnyElement> {
let script = self.thread.read(cx).script_for_message(message_id, cx)?;
let is_open = self.expanded_scripts.contains(&script.id);
let colors = cx.theme().colors();
let element = div().px_2p5().child(
v_flex()
.gap_1()
.rounded_lg()
.border_1()
.border_color(colors.border)
.child(
h_flex()
.justify_between()
.py_0p5()
.pl_1()
.pr_2()
.bg(colors.editor_foreground.opacity(0.02))
.when(is_open, |element| element.border_b_1().rounded_t(px(6.)))
.when(!is_open, |element| element.rounded_md())
.border_color(colors.border)
.child(
h_flex()
.gap_1()
.child(Disclosure::new("script-disclosure", is_open).on_click(
cx.listener({
let script_id = script.id;
move |this, _event, _window, _cx| {
if this.expanded_scripts.contains(&script_id) {
this.expanded_scripts.remove(&script_id);
} else {
this.expanded_scripts.insert(script_id);
}
}
}),
))
// TODO: Generate script description
.child(Label::new("Script")),
)
.child(
h_flex()
.gap_1()
.child(
Label::new(match script.state {
ScriptState::Generating => "Generating",
ScriptState::Running { .. } => "Running",
ScriptState::Succeeded { .. } => "Finished",
ScriptState::Failed { .. } => "Error",
})
.size(LabelSize::XSmall)
.buffer_font(cx),
)
.child(
IconButton::new("view-source", IconName::Eye)
.icon_color(Color::Muted)
.disabled(matches!(script.state, ScriptState::Generating))
.on_click(cx.listener({
let source = script.source.clone();
move |this, _event, window, cx| {
this.open_script_source(source.clone(), window, cx);
}
})),
),
),
)
.when(is_open, |parent| {
let stdout = script.stdout_snapshot();
let error = script.error();
let lua_language =
async { self.language_registry.language_for_name("Lua").await.ok() }
.now_or_never()
.flatten();
let source_display = if let Some(lua_language) = &lua_language {
let mut highlights = Vec::new();
let mut buf = String::new();
rich_text::render_code(
&mut buf,
&mut highlights,
&script.source,
lua_language,
);
let theme = cx.theme();
let gpui_highlights: Vec<(Range<usize>, HighlightStyle)> = highlights
.iter()
.map(|(range, highlight)| {
let style = match highlight {
Highlight::Code => Default::default(),
Highlight::Id(id) => {
id.style(theme.syntax()).unwrap_or_default()
}
Highlight::InlineCode(_) => Default::default(),
Highlight::Highlight(highlight) => *highlight,
_ => HighlightStyle::default(),
};
(range.clone(), style)
})
.collect();
StyledText::new(buf)
.with_highlights(gpui_highlights)
.into_any_element()
} else {
Label::new(script.source.clone())
.size(LabelSize::Small)
.buffer_font(cx)
.into_any_element()
};
parent.child(
v_flex()
.p_2()
.bg(colors.editor_background)
.gap_2()
.child(
div()
.border_1()
.border_color(colors.border)
.p_2()
.bg(colors.editor_foreground.opacity(0.025))
.rounded_md()
.child(source_display),
)
.child(if stdout.is_empty() && error.is_none() {
Label::new("No output yet")
.size(LabelSize::Small)
.color(Color::Muted)
} else {
Label::new(stdout).size(LabelSize::Small).buffer_font(cx)
})
.children(script.error().map(|err| {
Label::new(err.to_string())
.size(LabelSize::Small)
.color(Color::Error)
})),
)
}),
);
Some(element.into_any())
}
fn open_script_source(
&mut self,
source: SharedString,
window: &mut Window,
cx: &mut Context<'_, ActiveThread>,
) {
let language_registry = self.language_registry.clone();
let workspace = self.workspace.clone();
let source = source.clone();
cx.spawn_in(window, |_, mut cx| async move {
let lua = language_registry.language_for_name("Lua").await.log_err();
workspace.update_in(&mut cx, |workspace, window, cx| {
let project = workspace.project().clone();
let buffer = project.update(cx, |project, cx| {
project.create_local_buffer(&source.trim(), lua, cx)
});
let buffer = cx.new(|cx| {
MultiBuffer::singleton(buffer, cx)
// TODO: Generate script description
.with_title("Assistant script".into())
});
let editor = cx.new(|cx| {
let mut editor =
Editor::for_multibuffer(buffer, Some(project), true, window, cx);
editor.set_read_only(true);
editor
});
workspace.add_item_to_active_pane(Box::new(editor), None, true, window, cx);
})
})
.detach_and_log_err(cx);
}
}
impl Render for ActiveThread {

View File

@@ -166,22 +166,25 @@ impl AssistantPanel {
let history_store =
cx.new(|cx| HistoryStore::new(thread_store.clone(), context_store.clone(), cx));
let thread = cx.new(|cx| {
ActiveThread::new(
workspace.clone(),
thread.clone(),
thread_store.clone(),
language_registry.clone(),
window,
cx,
)
});
Self {
active_view: ActiveView::Thread,
workspace,
project: project.clone(),
fs: fs.clone(),
language_registry: language_registry.clone(),
language_registry,
thread_store: thread_store.clone(),
thread: cx.new(|cx| {
ActiveThread::new(
thread.clone(),
thread_store.clone(),
language_registry,
window,
cx,
)
}),
thread,
message_editor,
context_store,
context_editor: None,
@@ -239,6 +242,7 @@ impl AssistantPanel {
self.active_view = ActiveView::Thread;
self.thread = cx.new(|cx| {
ActiveThread::new(
self.workspace.clone(),
thread.clone(),
self.thread_store.clone(),
self.language_registry.clone(),
@@ -372,6 +376,7 @@ impl AssistantPanel {
this.active_view = ActiveView::Thread;
this.thread = cx.new(|cx| {
ActiveThread::new(
this.workspace.clone(),
thread.clone(),
this.thread_store.clone(),
this.language_registry.clone(),

View File

@@ -1,11 +1,14 @@
use std::sync::Arc;
use anyhow::Result;
use assistant_scripting::{
Script, ScriptEvent, ScriptId, ScriptSession, ScriptTagParser, SCRIPTING_PROMPT,
};
use assistant_tool::ToolWorkingSet;
use chrono::{DateTime, Utc};
use collections::{BTreeMap, HashMap, HashSet};
use futures::StreamExt as _;
use gpui::{App, Context, Entity, EventEmitter, SharedString, Task};
use gpui::{App, AppContext, Context, Entity, EventEmitter, SharedString, Subscription, Task};
use language_model::{
LanguageModel, LanguageModelCompletionEvent, LanguageModelRegistry, LanguageModelRequest,
LanguageModelRequestMessage, LanguageModelRequestTool, LanguageModelToolResult,
@@ -75,14 +78,21 @@ pub struct Thread {
project: Entity<Project>,
tools: Arc<ToolWorkingSet>,
tool_use: ToolUseState,
scripts_by_assistant_message: HashMap<MessageId, ScriptId>,
script_output_messages: HashSet<MessageId>,
script_session: Entity<ScriptSession>,
_script_session_subscription: Subscription,
}
impl Thread {
pub fn new(
project: Entity<Project>,
tools: Arc<ToolWorkingSet>,
_cx: &mut Context<Self>,
cx: &mut Context<Self>,
) -> Self {
let script_session = cx.new(|cx| ScriptSession::new(project.clone(), cx));
let script_session_subscription = cx.subscribe(&script_session, Self::handle_script_event);
Self {
id: ThreadId::new(),
updated_at: Utc::now(),
@@ -97,6 +107,10 @@ impl Thread {
project,
tools,
tool_use: ToolUseState::new(),
scripts_by_assistant_message: HashMap::default(),
script_output_messages: HashSet::default(),
script_session,
_script_session_subscription: script_session_subscription,
}
}
@@ -105,7 +119,7 @@ impl Thread {
saved: SavedThread,
project: Entity<Project>,
tools: Arc<ToolWorkingSet>,
_cx: &mut Context<Self>,
cx: &mut Context<Self>,
) -> Self {
let next_message_id = MessageId(
saved
@@ -115,6 +129,8 @@ impl Thread {
.unwrap_or(0),
);
let tool_use = ToolUseState::from_saved_messages(&saved.messages);
let script_session = cx.new(|cx| ScriptSession::new(project.clone(), cx));
let script_session_subscription = cx.subscribe(&script_session, Self::handle_script_event);
Self {
id,
@@ -138,6 +154,10 @@ impl Thread {
project,
tools,
tool_use,
scripts_by_assistant_message: HashMap::default(),
script_output_messages: HashSet::default(),
script_session,
_script_session_subscription: script_session_subscription,
}
}
@@ -223,17 +243,22 @@ impl Thread {
self.tool_use.message_has_tool_results(message_id)
}
pub fn message_has_script_output(&self, message_id: MessageId) -> bool {
self.script_output_messages.contains(&message_id)
}
pub fn insert_user_message(
&mut self,
text: impl Into<String>,
context: Vec<ContextSnapshot>,
cx: &mut Context<Self>,
) {
) -> MessageId {
let message_id = self.insert_message(Role::User, text, cx);
let context_ids = context.iter().map(|context| context.id).collect::<Vec<_>>();
self.context
.extend(context.into_iter().map(|context| (context.id, context)));
self.context_by_message.insert(message_id, context_ids);
message_id
}
pub fn insert_message(
@@ -302,6 +327,39 @@ impl Thread {
text
}
pub fn script_for_message<'a>(
&'a self,
message_id: MessageId,
cx: &'a App,
) -> Option<&'a Script> {
self.scripts_by_assistant_message
.get(&message_id)
.map(|script_id| self.script_session.read(cx).get(*script_id))
}
fn handle_script_event(
&mut self,
_script_session: Entity<ScriptSession>,
event: &ScriptEvent,
cx: &mut Context<Self>,
) {
match event {
ScriptEvent::Spawned(_) => {}
ScriptEvent::Exited(script_id) => {
if let Some(output_message) = self
.script_session
.read(cx)
.get(*script_id)
.output_message_for_llm()
{
let message_id = self.insert_user_message(output_message, vec![], cx);
self.script_output_messages.insert(message_id);
cx.emit(ThreadEvent::ScriptFinished)
}
}
}
}
pub fn send_to_model(
&mut self,
model: Arc<dyn LanguageModel>,
@@ -330,7 +388,7 @@ impl Thread {
pub fn to_completion_request(
&self,
request_kind: RequestKind,
_cx: &App,
cx: &App,
) -> LanguageModelRequest {
let mut request = LanguageModelRequest {
messages: vec![],
@@ -339,6 +397,12 @@ impl Thread {
temperature: None,
};
request.messages.push(LanguageModelRequestMessage {
role: Role::System,
content: vec![SCRIPTING_PROMPT.to_string().into()],
cache: true,
});
let mut referenced_context_ids = HashSet::default();
for message in &self.messages {
@@ -351,6 +415,7 @@ impl Thread {
content: Vec::new(),
cache: false,
};
match request_kind {
RequestKind::Chat => {
self.tool_use
@@ -371,11 +436,20 @@ impl Thread {
RequestKind::Chat => {
self.tool_use
.attach_tool_uses(message.id, &mut request_message);
if matches!(message.role, Role::Assistant) {
if let Some(script_id) = self.scripts_by_assistant_message.get(&message.id)
{
let script = self.script_session.read(cx).get(*script_id);
request_message.content.push(script.source_tag().into());
}
}
}
RequestKind::Summarize => {
// We don't care about tool use during summarization.
}
}
};
request.messages.push(request_message);
}
@@ -412,6 +486,8 @@ impl Thread {
let stream_completion = async {
let mut events = stream.await?;
let mut stop_reason = StopReason::EndTurn;
let mut script_tag_parser = ScriptTagParser::new();
let mut script_id = None;
while let Some(event) = events.next().await {
let event = event?;
@@ -426,19 +502,43 @@ impl Thread {
}
LanguageModelCompletionEvent::Text(chunk) => {
if let Some(last_message) = thread.messages.last_mut() {
if last_message.role == Role::Assistant {
last_message.text.push_str(&chunk);
let chunk = script_tag_parser.parse_chunk(&chunk);
let message_id = if last_message.role == Role::Assistant {
last_message.text.push_str(&chunk.content);
cx.emit(ThreadEvent::StreamedAssistantText(
last_message.id,
chunk,
chunk.content,
));
last_message.id
} else {
// If we won't have an Assistant message yet, assume this chunk marks the beginning
// of a new Assistant response.
//
// Importantly: We do *not* want to emit a `StreamedAssistantText` event here, as it
// will result in duplicating the text of the chunk in the rendered Markdown.
thread.insert_message(Role::Assistant, chunk, cx);
thread.insert_message(Role::Assistant, chunk.content, cx)
};
if script_id.is_none() && script_tag_parser.found_script() {
let id = thread
.script_session
.update(cx, |session, _cx| session.new_script());
thread.scripts_by_assistant_message.insert(message_id, id);
script_id = Some(id);
}
if let (Some(script_source), Some(script_id)) =
(chunk.script_source, script_id)
{
// TODO: move buffer to script and run as it streams
thread
.script_session
.update(cx, |this, cx| {
this.run_script(script_id, script_source, cx)
})
.detach_and_log_err(cx);
}
}
}
@@ -661,6 +761,7 @@ pub enum ThreadEvent {
#[allow(unused)]
tool_use_id: LanguageModelToolUseId,
},
ScriptFinished,
}
impl EventEmitter<ThreadEvent> for Thread {}

View File

@@ -1,5 +1,5 @@
[package]
name = "scripting_tool"
name = "assistant_scripting"
version = "0.1.0"
edition.workspace = true
publish.workspace = true
@@ -9,20 +9,19 @@ license = "GPL-3.0-or-later"
workspace = true
[lib]
path = "src/scripting_tool.rs"
path = "src/assistant_scripting.rs"
doctest = false
[dependencies]
anyhow.workspace = true
assistant_tool.workspace = true
collections.workspace = true
futures.workspace = true
gpui.workspace = true
log.workspace = true
mlua.workspace = true
parking_lot.workspace = true
project.workspace = true
regex.workspace = true
schemars.workspace = true
serde.workspace = true
serde_json.workspace = true
settings.workspace = true
@@ -32,4 +31,5 @@ util.workspace = true
collections = { workspace = true, features = ["test-support"] }
gpui = { workspace = true, features = ["test-support"] }
project = { workspace = true, features = ["test-support"] }
rand.workspace = true
settings = { workspace = true, features = ["test-support"] }

View File

@@ -0,0 +1,7 @@
mod session;
mod tag;
pub use session::*;
pub use tag::*;
pub const SCRIPTING_PROMPT: &str = include_str!("./system_prompt.txt");

View File

@@ -3,6 +3,16 @@
-- Create a sandbox environment
local sandbox = {}
-- For now, add all globals to `sandbox` (so there effectively is no sandbox).
-- We still need the logic below so that we can do things like overriding print() to write
-- to our in-memory log rather than to stdout, we will delete this loop (and re-enable
-- the I/O module being sandboxed below) to have things be sandboxed again.
for k, v in pairs(_G) do
if sandbox[k] == nil then
sandbox[k] = v
end
end
-- Allow access to standard libraries (safe subset)
sandbox.string = string
sandbox.table = table
@@ -13,7 +23,10 @@ sandbox.tostring = tostring
sandbox.tonumber = tonumber
sandbox.pairs = pairs
sandbox.ipairs = ipairs
-- Access to custom functions
sandbox.search = search
sandbox.outline = outline
-- Create a sandboxed version of LuaFileIO
local io = {}
@@ -22,8 +35,7 @@ local io = {}
io.open = sb_io_open
-- Add the sandboxed io library to the sandbox environment
sandbox.io = io
-- sandbox.io = io -- Uncomment this line to re-enable sandboxed file I/O.
-- Load the script with the sandbox environment
local user_script_fn, err = load(user_script, nil, "t", sandbox)

View File

@@ -1,11 +1,11 @@
use anyhow::Result;
use anyhow::anyhow;
use collections::{HashMap, HashSet};
use futures::{
channel::{mpsc, oneshot},
pin_mut, SinkExt, StreamExt,
};
use gpui::{AppContext, AsyncApp, Context, Entity, Task, WeakEntity};
use mlua::{Lua, MultiValue, Table, UserData, UserDataMethods};
use gpui::{AppContext, AsyncApp, Context, Entity, EventEmitter, SharedString, Task, WeakEntity};
use mlua::{ExternalResult, Lua, MultiValue, Table, UserData, UserDataMethods};
use parking_lot::Mutex;
use project::{search::SearchQuery, Fs, Project};
use regex::Regex;
@@ -16,24 +16,23 @@ use std::{
};
use util::{paths::PathMatcher, ResultExt};
pub struct ScriptOutput {
pub stdout: String,
}
use crate::{SCRIPT_END_TAG, SCRIPT_START_TAG};
struct ForegroundFn(Box<dyn FnOnce(WeakEntity<Session>, AsyncApp) + Send>);
struct ForegroundFn(Box<dyn FnOnce(WeakEntity<ScriptSession>, AsyncApp) + Send>);
pub struct Session {
pub struct ScriptSession {
project: Entity<Project>,
// TODO Remove this
fs_changes: Arc<Mutex<HashMap<PathBuf, Vec<u8>>>>,
foreground_fns_tx: mpsc::Sender<ForegroundFn>,
_invoke_foreground_fns: Task<()>,
scripts: Vec<Script>,
}
impl Session {
impl ScriptSession {
pub fn new(project: Entity<Project>, cx: &mut Context<Self>) -> Self {
let (foreground_fns_tx, mut foreground_fns_rx) = mpsc::channel(128);
Session {
ScriptSession {
project,
fs_changes: Arc::new(Mutex::new(HashMap::default())),
foreground_fns_tx,
@@ -42,15 +41,62 @@ impl Session {
foreground_fn.0(this.clone(), cx.clone());
}
}),
scripts: Vec::new(),
}
}
/// Runs a Lua script in a sandboxed environment and returns the printed lines
pub fn new_script(&mut self) -> ScriptId {
let id = ScriptId(self.scripts.len() as u32);
let script = Script {
id,
state: ScriptState::Generating,
source: SharedString::new_static(""),
};
self.scripts.push(script);
id
}
pub fn run_script(
&mut self,
script: String,
script_id: ScriptId,
script_src: String,
cx: &mut Context<Self>,
) -> Task<Result<ScriptOutput>> {
) -> Task<anyhow::Result<()>> {
let script = self.get_mut(script_id);
let stdout = Arc::new(Mutex::new(String::new()));
script.source = script_src.clone().into();
script.state = ScriptState::Running {
stdout: stdout.clone(),
};
let task = self.run_lua(script_src, stdout, cx);
cx.emit(ScriptEvent::Spawned(script_id));
cx.spawn(|session, mut cx| async move {
let result = task.await;
session.update(&mut cx, |session, cx| {
let script = session.get_mut(script_id);
let stdout = script.stdout_snapshot();
script.state = match result {
Ok(()) => ScriptState::Succeeded { stdout },
Err(error) => ScriptState::Failed { stdout, error },
};
cx.emit(ScriptEvent::Exited(script_id))
})
})
}
fn run_lua(
&mut self,
script: String,
stdout: Arc<Mutex<String>>,
cx: &mut Context<Self>,
) -> Task<anyhow::Result<()>> {
const SANDBOX_PREAMBLE: &str = include_str!("sandbox_preamble.lua");
// TODO Remove fs_changes
@@ -62,52 +108,92 @@ impl Session {
.visible_worktrees(cx)
.next()
.map(|worktree| worktree.read(cx).abs_path());
let fs = self.project.read(cx).fs().clone();
let foreground_fns_tx = self.foreground_fns_tx.clone();
cx.background_spawn(async move {
let lua = Lua::new();
lua.set_memory_limit(2 * 1024 * 1024 * 1024)?; // 2 GB
let globals = lua.globals();
let stdout = Arc::new(Mutex::new(String::new()));
globals.set(
"sb_print",
lua.create_function({
let stdout = stdout.clone();
move |_, args: MultiValue| Self::print(args, &stdout)
})?,
)?;
globals.set(
"search",
lua.create_async_function({
let foreground_fns_tx = foreground_fns_tx.clone();
let fs = fs.clone();
move |lua, regex| {
Self::search(lua, foreground_fns_tx.clone(), fs.clone(), regex)
let task = cx.background_spawn({
let stdout = stdout.clone();
async move {
let lua = Lua::new();
lua.set_memory_limit(2 * 1024 * 1024 * 1024)?; // 2 GB
let globals = lua.globals();
// Use the project root dir as the script's current working dir.
if let Some(root_dir) = &root_dir {
if let Some(root_dir) = root_dir.to_str() {
globals.set("cwd", root_dir)?;
}
})?,
)?;
globals.set(
"sb_io_open",
lua.create_function({
let fs_changes = fs_changes.clone();
let root_dir = root_dir.clone();
move |lua, (path_str, mode)| {
Self::io_open(&lua, &fs_changes, root_dir.as_ref(), path_str, mode)
}
})?,
)?;
globals.set("user_script", script)?;
}
lua.load(SANDBOX_PREAMBLE).exec_async().await?;
globals.set(
"sb_print",
lua.create_function({
let stdout = stdout.clone();
move |_, args: MultiValue| Self::print(args, &stdout)
})?,
)?;
globals.set(
"search",
lua.create_async_function({
let foreground_fns_tx = foreground_fns_tx.clone();
move |lua, regex| {
let mut foreground_fns_tx = foreground_fns_tx.clone();
let fs = fs.clone();
async move {
Self::search(&lua, &mut foreground_fns_tx, fs, regex)
.await
.into_lua_err()
}
}
})?,
)?;
globals.set(
"outline",
lua.create_async_function({
let root_dir = root_dir.clone();
move |_lua, path| {
let mut foreground_fns_tx = foreground_fns_tx.clone();
let root_dir = root_dir.clone();
async move {
Self::outline(root_dir, &mut foreground_fns_tx, path)
.await
.into_lua_err()
}
}
})?,
)?;
globals.set(
"sb_io_open",
lua.create_function({
let fs_changes = fs_changes.clone();
let root_dir = root_dir.clone();
move |lua, (path_str, mode)| {
Self::io_open(&lua, &fs_changes, root_dir.as_ref(), path_str, mode)
}
})?,
)?;
globals.set("user_script", script)?;
// Drop Lua instance to decrement reference count.
drop(lua);
lua.load(SANDBOX_PREAMBLE).exec_async().await?;
let stdout = Arc::try_unwrap(stdout)
.expect("no more references to stdout")
.into_inner();
Ok(ScriptOutput { stdout })
})
// Drop Lua instance to decrement reference count.
drop(lua);
anyhow::Ok(())
}
});
task
}
pub fn get(&self, script_id: ScriptId) -> &Script {
&self.scripts[script_id.0 as usize]
}
fn get_mut(&mut self, script_id: ScriptId) -> &mut Script {
&mut self.scripts[script_id.0 as usize]
}
/// Sandboxed print() function in Lua.
@@ -154,27 +240,9 @@ impl Session {
file.set("__read_perm", read_perm)?;
file.set("__write_perm", write_perm)?;
// Sandbox the path; it must be within root_dir
let path: PathBuf = {
let rust_path = Path::new(&path_str);
// Get absolute path
if rust_path.is_absolute() {
// Check if path starts with root_dir prefix without resolving symlinks
if !rust_path.starts_with(&root_dir) {
return Ok((
None,
format!(
"Error: Absolute path {} is outside the current working directory",
path_str
),
));
}
rust_path.to_path_buf()
} else {
// Make relative path absolute relative to cwd
root_dir.join(rust_path)
}
let path = match Self::parse_abs_path_in_root_dir(&root_dir, &path_str) {
Ok(path) => path,
Err(err) => return Ok((None, format!("{err}"))),
};
// close method
@@ -510,11 +578,11 @@ impl Session {
}
async fn search(
lua: Lua,
mut foreground_tx: mpsc::Sender<ForegroundFn>,
lua: &Lua,
foreground_tx: &mut mpsc::Sender<ForegroundFn>,
fs: Arc<dyn Fs>,
regex: String,
) -> mlua::Result<Table> {
) -> anyhow::Result<Table> {
// TODO: Allow specification of these options.
let search_query = SearchQuery::regex(
&regex,
@@ -527,18 +595,17 @@ impl Session {
);
let search_query = match search_query {
Ok(query) => query,
Err(e) => return Err(mlua::Error::runtime(format!("Invalid search query: {}", e))),
Err(e) => return Err(anyhow!("Invalid search query: {}", e)),
};
// TODO: Should use `search_query.regex`. The tool description should also be updated,
// as it specifies standard regex.
let search_regex = match Regex::new(&regex) {
Ok(re) => re,
Err(e) => return Err(mlua::Error::runtime(format!("Invalid regex: {}", e))),
Err(e) => return Err(anyhow!("Invalid regex: {}", e)),
};
let mut abs_paths_rx =
Self::find_search_candidates(search_query, &mut foreground_tx).await?;
let mut abs_paths_rx = Self::find_search_candidates(search_query, foreground_tx).await?;
let mut search_results: Vec<Table> = Vec::new();
while let Some(path) = abs_paths_rx.next().await {
@@ -586,7 +653,7 @@ impl Session {
async fn find_search_candidates(
search_query: SearchQuery,
foreground_tx: &mut mpsc::Sender<ForegroundFn>,
) -> mlua::Result<mpsc::UnboundedReceiver<PathBuf>> {
) -> anyhow::Result<mpsc::UnboundedReceiver<PathBuf>> {
Self::run_foreground_fn(
"finding search file candidates",
foreground_tx,
@@ -636,14 +703,62 @@ impl Session {
})
}),
)
.await
.await?
}
async fn outline(
root_dir: Option<Arc<Path>>,
foreground_tx: &mut mpsc::Sender<ForegroundFn>,
path_str: String,
) -> anyhow::Result<String> {
let root_dir = root_dir
.ok_or_else(|| mlua::Error::runtime("cannot get outline without a root directory"))?;
let path = Self::parse_abs_path_in_root_dir(&root_dir, &path_str)?;
let outline = Self::run_foreground_fn(
"getting code outline",
foreground_tx,
Box::new(move |session, cx| {
cx.spawn(move |mut cx| async move {
// TODO: This will not use file content from `fs_changes`. It will also reflect
// user changes that have not been saved.
let buffer = session
.update(&mut cx, |session, cx| {
session
.project
.update(cx, |project, cx| project.open_local_buffer(&path, cx))
})?
.await?;
buffer.update(&mut cx, |buffer, _cx| {
if let Some(outline) = buffer.snapshot().outline(None) {
Ok(outline)
} else {
Err(anyhow!("No outline for file {path_str}"))
}
})
})
}),
)
.await?
.await??;
Ok(outline
.items
.into_iter()
.map(|item| {
if item.text.contains('\n') {
log::error!("Outline item unexpectedly contains newline");
}
format!("{}{}", " ".repeat(item.depth), item.text)
})
.collect::<Vec<String>>()
.join("\n"))
}
async fn run_foreground_fn<R: Send + 'static>(
description: &str,
foreground_tx: &mut mpsc::Sender<ForegroundFn>,
function: Box<dyn FnOnce(WeakEntity<Self>, AsyncApp) -> anyhow::Result<R> + Send>,
) -> mlua::Result<R> {
function: Box<dyn FnOnce(WeakEntity<Self>, AsyncApp) -> R + Send>,
) -> anyhow::Result<R> {
let (response_tx, response_rx) = oneshot::channel();
let send_result = foreground_tx
.send(ForegroundFn(Box::new(move |this, cx| {
@@ -653,19 +768,34 @@ impl Session {
match send_result {
Ok(()) => (),
Err(err) => {
return Err(mlua::Error::runtime(format!(
"Internal error while enqueuing work for {description}: {err}"
)))
return Err(anyhow::Error::new(err).context(format!(
"Internal error while enqueuing work for {description}"
)));
}
}
match response_rx.await {
Ok(Ok(result)) => Ok(result),
Ok(Err(err)) => Err(mlua::Error::runtime(format!(
"Error while {description}: {err}"
))),
Err(oneshot::Canceled) => Err(mlua::Error::runtime(format!(
Ok(result) => Ok(result),
Err(oneshot::Canceled) => Err(anyhow!(
"Internal error: response oneshot was canceled while {description}."
))),
)),
}
}
fn parse_abs_path_in_root_dir(root_dir: &Path, path_str: &str) -> anyhow::Result<PathBuf> {
let path = Path::new(&path_str);
if path.is_absolute() {
// Check if path starts with root_dir prefix without resolving symlinks
if path.starts_with(&root_dir) {
Ok(path.to_path_buf())
} else {
Err(anyhow!(
"Error: Absolute path {} is outside the current working directory",
path_str
))
}
} else {
// TODO: Does use of `../` break sandbox - is path canonicalization needed?
Ok(root_dir.join(path))
}
}
}
@@ -678,6 +808,79 @@ impl UserData for FileContent {
}
}
#[derive(Debug)]
pub enum ScriptEvent {
Spawned(ScriptId),
Exited(ScriptId),
}
impl EventEmitter<ScriptEvent> for ScriptSession {}
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
pub struct ScriptId(u32);
pub struct Script {
pub id: ScriptId,
pub state: ScriptState,
pub source: SharedString,
}
pub enum ScriptState {
Generating,
Running {
stdout: Arc<Mutex<String>>,
},
Succeeded {
stdout: String,
},
Failed {
stdout: String,
error: anyhow::Error,
},
}
impl Script {
pub fn source_tag(&self) -> String {
format!("{}{}{}", SCRIPT_START_TAG, self.source, SCRIPT_END_TAG)
}
/// If exited, returns a message with the output for the LLM
pub fn output_message_for_llm(&self) -> Option<String> {
match &self.state {
ScriptState::Generating { .. } => None,
ScriptState::Running { .. } => None,
ScriptState::Succeeded { stdout } => {
format!("Here's the script output:\n{}", stdout).into()
}
ScriptState::Failed { stdout, error } => format!(
"The script failed with:\n{}\n\nHere's the output it managed to print:\n{}",
error, stdout
)
.into(),
}
}
/// Get a snapshot of the script's stdout
pub fn stdout_snapshot(&self) -> String {
match &self.state {
ScriptState::Generating { .. } => String::new(),
ScriptState::Running { stdout } => stdout.lock().clone(),
ScriptState::Succeeded { stdout } => stdout.clone(),
ScriptState::Failed { stdout, .. } => stdout.clone(),
}
}
/// Returns the error if the script failed, otherwise None
pub fn error(&self) -> Option<&anyhow::Error> {
match &self.state {
ScriptState::Generating { .. } => None,
ScriptState::Running { .. } => None,
ScriptState::Succeeded { .. } => None,
ScriptState::Failed { error, .. } => Some(error),
}
}
}
#[cfg(test)]
mod tests {
use gpui::TestAppContext;
@@ -689,35 +892,17 @@ mod tests {
#[gpui::test]
async fn test_print(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
let project = Project::test(fs, [], cx).await;
let session = cx.new(|cx| Session::new(project, cx));
let script = r#"
print("Hello", "world!")
print("Goodbye", "moon!")
"#;
let output = session
.update(cx, |session, cx| session.run_script(script.to_string(), cx))
.await
.unwrap();
assert_eq!(output.stdout, "Hello\tworld!\nGoodbye\tmoon!\n");
let output = test_script(script, cx).await.unwrap();
assert_eq!(output, "Hello\tworld!\nGoodbye\tmoon!\n");
}
#[gpui::test]
async fn test_search(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(
"/",
json!({
"file1.txt": "Hello world!",
"file2.txt": "Goodbye moon!"
}),
)
.await;
let project = Project::test(fs, [Path::new("/")], cx).await;
let session = cx.new(|cx| Session::new(project, cx));
let script = r#"
local results = search("world")
for i, result in ipairs(results) do
@@ -728,11 +913,36 @@ mod tests {
end
end
"#;
let output = session
.update(cx, |session, cx| session.run_script(script.to_string(), cx))
.await
.unwrap();
assert_eq!(output.stdout, "File: /file1.txt\nMatches:\n world\n");
let output = test_script(script, cx).await.unwrap();
assert_eq!(output, "File: /file1.txt\nMatches:\n world\n");
}
async fn test_script(source: &str, cx: &mut TestAppContext) -> anyhow::Result<String> {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(
"/",
json!({
"file1.txt": "Hello world!",
"file2.txt": "Goodbye moon!"
}),
)
.await;
let project = Project::test(fs, [Path::new("/")], cx).await;
let session = cx.new(|cx| ScriptSession::new(project, cx));
let (script_id, task) = session.update(cx, |session, cx| {
let script_id = session.new_script();
let task = session.run_script(script_id, source.to_string(), cx);
(script_id, task)
});
task.await?;
Ok(session.read_with(cx, |session, _cx| session.get(script_id).stdout_snapshot()))
}
fn init_test(cx: &mut TestAppContext) {

View File

@@ -3,6 +3,12 @@ output was, including both stdout as well as the git diff of changes it made to
the filesystem. That way, you can get more information about the code base, or
make changes to the code base directly.
Put the Lua script inside of an `<eval>` tag like so:
<eval type="lua">
print("Hello, world!")
</eval>
The Lua script will have access to `io` and it will run with the current working
directory being in the root of the code base, so you can use it to explore,
search, make changes, etc. You can also have the script print things, and I'll
@@ -10,13 +16,21 @@ tell you what the output was. Note that `io` only has `open`, and then the file
it returns only has the methods read, write, and close - it doesn't have popen
or anything else.
Also, I'm going to be putting this Lua script into JSON, so please don't use
Lua's double quote syntax for string literals - use one of Lua's other syntaxes
for string literals, so I don't have to escape the double quotes.
There will be a global called `search` which accepts a regex (it's implemented
There is a function called `search` which accepts a regex (it's implemented
using Rust's regex crate, so use that regex syntax) and runs that regex on the
contents of every file in the code base (aside from gitignored files), then
returns an array of tables with two fields: "path" (the path to the file that
had the matches) and "matches" (an array of strings, with each string being a
match that was found within the file).
There is a function called `outline` which accepts the path to a source file,
and returns a string where each line is a declaration. These lines are indented
with 2 spaces to indicate when a declaration is inside another.
When I send you the script output, do not thank me for running it,
act as if you ran it yourself.
IMPORTANT!
Only include a maximum of one Lua script at the very end of your message
DO NOT WRITE ANYTHING ELSE AFTER THE SCRIPT. Wait for my response with the script
output to continue.

View File

@@ -0,0 +1,260 @@
pub const SCRIPT_START_TAG: &str = "<eval type=\"lua\">";
pub const SCRIPT_END_TAG: &str = "</eval>";
const START_TAG: &[u8] = SCRIPT_START_TAG.as_bytes();
const END_TAG: &[u8] = SCRIPT_END_TAG.as_bytes();
/// Parses a script tag in an assistant message as it is being streamed.
pub struct ScriptTagParser {
state: State,
buffer: Vec<u8>,
tag_match_ix: usize,
}
enum State {
Unstarted,
Streaming,
Ended,
}
#[derive(Debug, PartialEq)]
pub struct ChunkOutput {
/// The chunk with script tags removed.
pub content: String,
/// The full script tag content. `None` until closed.
pub script_source: Option<String>,
}
impl ScriptTagParser {
/// Create a new script tag parser.
pub fn new() -> Self {
Self {
state: State::Unstarted,
buffer: Vec::new(),
tag_match_ix: 0,
}
}
/// Returns true if the parser has found a script tag.
pub fn found_script(&self) -> bool {
match self.state {
State::Unstarted => false,
State::Streaming | State::Ended => true,
}
}
/// Process a new chunk of input, splitting it into surrounding content and script source.
pub fn parse_chunk(&mut self, input: &str) -> ChunkOutput {
let mut content = Vec::with_capacity(input.len());
for byte in input.bytes() {
match self.state {
State::Unstarted => {
if collect_until_tag(byte, START_TAG, &mut self.tag_match_ix, &mut content) {
self.state = State::Streaming;
self.buffer = Vec::with_capacity(1024);
self.tag_match_ix = 0;
}
}
State::Streaming => {
if collect_until_tag(byte, END_TAG, &mut self.tag_match_ix, &mut self.buffer) {
self.state = State::Ended;
}
}
State::Ended => content.push(byte),
}
}
let content = unsafe { String::from_utf8_unchecked(content) };
let script_source = if matches!(self.state, State::Ended) && !self.buffer.is_empty() {
let source = unsafe { String::from_utf8_unchecked(std::mem::take(&mut self.buffer)) };
Some(source)
} else {
None
};
ChunkOutput {
content,
script_source,
}
}
}
fn collect_until_tag(byte: u8, tag: &[u8], tag_match_ix: &mut usize, buffer: &mut Vec<u8>) -> bool {
// this can't be a method because it'd require a mutable borrow on both self and self.buffer
if match_tag_byte(byte, tag, tag_match_ix) {
*tag_match_ix >= tag.len()
} else {
if *tag_match_ix > 0 {
// push the partially matched tag to the buffer
buffer.extend_from_slice(&tag[..*tag_match_ix]);
*tag_match_ix = 0;
// the tag might start to match again
if match_tag_byte(byte, tag, tag_match_ix) {
return *tag_match_ix >= tag.len();
}
}
buffer.push(byte);
false
}
}
fn match_tag_byte(byte: u8, tag: &[u8], tag_match_ix: &mut usize) -> bool {
if byte == tag[*tag_match_ix] {
*tag_match_ix += 1;
true
} else {
false
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_complete_tag() {
let mut parser = ScriptTagParser::new();
let input = "<eval type=\"lua\">print(\"Hello, World!\")</eval>";
let result = parser.parse_chunk(input);
assert_eq!(result.content, "");
assert_eq!(
result.script_source,
Some("print(\"Hello, World!\")".to_string())
);
}
#[test]
fn test_no_tag() {
let mut parser = ScriptTagParser::new();
let input = "No tags here, just plain text";
let result = parser.parse_chunk(input);
assert_eq!(result.content, "No tags here, just plain text");
assert_eq!(result.script_source, None);
}
#[test]
fn test_partial_end_tag() {
let mut parser = ScriptTagParser::new();
// Start the tag
let result = parser.parse_chunk("<eval type=\"lua\">let x = '</e");
assert_eq!(result.content, "");
assert_eq!(result.script_source, None);
// Finish with the rest
let result = parser.parse_chunk("val' + 'not the end';</eval>");
assert_eq!(result.content, "");
assert_eq!(
result.script_source,
Some("let x = '</eval' + 'not the end';".to_string())
);
}
#[test]
fn test_text_before_and_after_tag() {
let mut parser = ScriptTagParser::new();
let input = "Before tag <eval type=\"lua\">print(\"Hello\")</eval> After tag";
let result = parser.parse_chunk(input);
assert_eq!(result.content, "Before tag After tag");
assert_eq!(result.script_source, Some("print(\"Hello\")".to_string()));
}
#[test]
fn test_multiple_chunks_with_surrounding_text() {
let mut parser = ScriptTagParser::new();
// First chunk with text before
let result = parser.parse_chunk("Before script <eval type=\"lua\">local x = 10");
assert_eq!(result.content, "Before script ");
assert_eq!(result.script_source, None);
// Second chunk with script content
let result = parser.parse_chunk("\nlocal y = 20");
assert_eq!(result.content, "");
assert_eq!(result.script_source, None);
// Last chunk with text after
let result = parser.parse_chunk("\nprint(x + y)</eval> After script");
assert_eq!(result.content, " After script");
assert_eq!(
result.script_source,
Some("local x = 10\nlocal y = 20\nprint(x + y)".to_string())
);
let result = parser.parse_chunk(" there's more text");
assert_eq!(result.content, " there's more text");
assert_eq!(result.script_source, None);
}
#[test]
fn test_partial_start_tag_matching() {
let mut parser = ScriptTagParser::new();
// partial match of start tag...
let result = parser.parse_chunk("<ev");
assert_eq!(result.content, "");
// ...that's abandoned when the < of a real tag is encountered
let result = parser.parse_chunk("<eval type=\"lua\">script content</eval>");
// ...so it gets pushed to content
assert_eq!(result.content, "<ev");
// ...and the real tag is parsed correctly
assert_eq!(result.script_source, Some("script content".to_string()));
}
#[test]
fn test_random_chunked_parsing() {
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};
use std::time::{SystemTime, UNIX_EPOCH};
let test_inputs = [
"Before <eval type=\"lua\">print(\"Hello\")</eval> After",
"No tags here at all",
"<eval type=\"lua\">local x = 10\nlocal y = 20\nprint(x + y)</eval>",
"Text <eval type=\"lua\">if true then\nprint(\"nested </e\")\nend</eval> more",
];
let seed = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs();
eprintln!("Using random seed: {}", seed);
let mut rng = StdRng::seed_from_u64(seed);
for test_input in &test_inputs {
let mut reference_parser = ScriptTagParser::new();
let expected = reference_parser.parse_chunk(test_input);
let mut chunked_parser = ScriptTagParser::new();
let mut remaining = test_input.as_bytes();
let mut actual_content = String::new();
let mut actual_script = None;
while !remaining.is_empty() {
let chunk_size = rng.gen_range(1..=remaining.len().min(5));
let (chunk, rest) = remaining.split_at(chunk_size);
remaining = rest;
let chunk_str = std::str::from_utf8(chunk).unwrap();
let result = chunked_parser.parse_chunk(chunk_str);
actual_content.push_str(&result.content);
if result.script_source.is_some() {
actual_script = result.script_source;
}
}
assert_eq!(actual_content, expected.content);
assert_eq!(actual_script, expected.script_source);
}
}
}
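
A minimal usage sketch of the streaming parser above, assuming only the `ScriptTagParser`/`ChunkOutput` API shown in this diff; the chunk boundaries (and the chunk contents) are arbitrary, which is the point of the incremental design:

// Hedged sketch: drive the parser over streamed chunks, accumulating the
// surrounding prose separately from the extracted script source.
fn split_streamed_output(chunks: &[&str]) -> (String, Option<String>) {
    let mut parser = ScriptTagParser::new();
    let mut surrounding = String::new();
    let mut script_source = None;
    for chunk in chunks {
        let out = parser.parse_chunk(chunk);
        surrounding.push_str(&out.content);
        if out.script_source.is_some() {
            // Emitted exactly once, when the end tag is fully matched.
            script_source = out.script_source;
        }
    }
    (surrounding, script_source)
}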

View File

@@ -623,16 +623,21 @@ impl Copilot {
pub fn sign_out(&mut self, cx: &mut Context<Self>) -> Task<Result<()>> {
self.update_sign_in_status(request::SignInStatus::NotSignedIn, cx);
if let CopilotServer::Running(RunningCopilotServer { lsp: server, .. }) = &self.server {
let server = server.clone();
cx.background_spawn(async move {
server
.request::<request::SignOut>(request::SignOutParams {})
.await?;
match &self.server {
CopilotServer::Running(RunningCopilotServer { lsp: server, .. }) => {
let server = server.clone();
cx.background_spawn(async move {
server
.request::<request::SignOut>(request::SignOutParams {})
.await?;
anyhow::Ok(())
})
}
CopilotServer::Disabled => cx.background_spawn(async move {
clear_copilot_config_dir().await;
anyhow::Ok(())
})
} else {
Task::ready(Err(anyhow!("copilot hasn't started yet")))
}),
_ => Task::ready(Err(anyhow!("copilot hasn't started yet"))),
}
}
@@ -1016,6 +1021,10 @@ async fn clear_copilot_dir() {
remove_matching(paths::copilot_dir(), |_| true).await
}
async fn clear_copilot_config_dir() {
remove_matching(copilot_chat::copilot_chat_config_dir(), |_| true).await
}
async fn get_copilot_lsp(http: Arc<dyn HttpClient>) -> anyhow::Result<PathBuf> {
const SERVER_PATH: &str = "dist/language-server.js";

View File

@@ -4,13 +4,14 @@ use std::sync::OnceLock;
use anyhow::{anyhow, Result};
use chrono::DateTime;
use collections::HashSet;
use fs::Fs;
use futures::{io::BufReader, stream::BoxStream, AsyncBufReadExt, AsyncReadExt, StreamExt};
use gpui::{prelude::*, App, AsyncApp, Global};
use http_client::{AsyncBody, HttpClient, Method, Request as HttpRequest};
use paths::home_dir;
use serde::{Deserialize, Serialize};
use settings::watch_config_file;
use settings::watch_config_dir;
use strum::EnumIter;
pub const COPILOT_CHAT_COMPLETION_URL: &str = "https://api.githubcopilot.com/chat/completions";
@@ -212,7 +213,7 @@ pub fn init(fs: Arc<dyn Fs>, client: Arc<dyn HttpClient>, cx: &mut App) {
cx.set_global(GlobalCopilotChat(copilot_chat));
}
fn copilot_chat_config_dir() -> &'static PathBuf {
pub fn copilot_chat_config_dir() -> &'static PathBuf {
static COPILOT_CHAT_CONFIG_DIR: OnceLock<PathBuf> = OnceLock::new();
COPILOT_CHAT_CONFIG_DIR.get_or_init(|| {
@@ -237,27 +238,18 @@ impl CopilotChat {
}
pub fn new(fs: Arc<dyn Fs>, client: Arc<dyn HttpClient>, cx: &App) -> Self {
let config_paths = copilot_chat_config_paths();
let resolve_config_path = {
let fs = fs.clone();
async move {
for config_path in config_paths.iter() {
if fs.metadata(config_path).await.is_ok_and(|v| v.is_some()) {
return config_path.clone();
}
}
config_paths[0].clone()
}
};
let config_paths: HashSet<PathBuf> = copilot_chat_config_paths().into_iter().collect();
let dir_path = copilot_chat_config_dir();
cx.spawn(|cx| async move {
let config_file = resolve_config_path.await;
let mut config_file_rx = watch_config_file(cx.background_executor(), fs, config_file);
while let Some(contents) = config_file_rx.next().await {
let mut parent_watch_rx = watch_config_dir(
cx.background_executor(),
fs.clone(),
dir_path.clone(),
config_paths,
);
while let Some(contents) = parent_watch_rx.next().await {
let oauth_token = extract_oauth_token(contents);
cx.update(|cx| {
if let Some(this) = Self::global(cx).as_ref() {
this.update(cx, |this, cx| {

View File

@@ -311,7 +311,10 @@ impl ProjectDiagnosticsEditor {
cx: &mut Context<Workspace>,
) {
if let Some(existing) = workspace.item_of_type::<ProjectDiagnosticsEditor>(cx) {
workspace.activate_item(&existing, true, true, window, cx);
let is_active = workspace
.active_item(cx)
.is_some_and(|item| item.item_id() == existing.item_id());
workspace.activate_item(&existing, true, !is_active, window, cx);
} else {
let workspace_handle = cx.entity().downgrade();

View File

@@ -500,7 +500,7 @@ impl CompletionsMenu {
highlight.font_weight = None;
if completion
.source
.lsp_completion()
.lsp_completion(false)
.and_then(|lsp_completion| lsp_completion.deprecated)
.unwrap_or(false)
{
@@ -711,10 +711,12 @@ impl CompletionsMenu {
let completion = &completions[mat.candidate_id];
let sort_key = completion.sort_key();
let sort_text = completion
.source
.lsp_completion()
.and_then(|lsp_completion| lsp_completion.sort_text.as_deref());
let sort_text =
if let CompletionSource::Lsp { lsp_completion, .. } = &completion.source {
lsp_completion.sort_text.as_deref()
} else {
None
};
let score = Reverse(OrderedFloat(mat.score));
if mat.score >= 0.2 {

View File

@@ -11639,7 +11639,7 @@ impl Editor {
fn go_to_next_hunk(&mut self, _: &GoToHunk, window: &mut Window, cx: &mut Context<Self>) {
let snapshot = self.snapshot(window, cx);
let selection = self.selections.newest::<Point>(cx);
self.go_to_hunk_after_or_before_position(
self.go_to_hunk_before_or_after_position(
&snapshot,
selection.head(),
Direction::Next,
@@ -11648,7 +11648,7 @@ impl Editor {
);
}
fn go_to_hunk_after_or_before_position(
fn go_to_hunk_before_or_after_position(
&mut self,
snapshot: &EditorSnapshot,
position: Point,
@@ -11699,7 +11699,7 @@ impl Editor {
) {
let snapshot = self.snapshot(window, cx);
let selection = self.selections.newest::<Point>(cx);
self.go_to_hunk_after_or_before_position(
self.go_to_hunk_before_or_after_position(
&snapshot,
selection.head(),
Direction::Prev,
@@ -13861,21 +13861,6 @@ impl Editor {
return;
}
let snapshot = self.snapshot(window, cx);
let newest_range = self.selections.newest::<Point>(cx).range();
let run_twice = snapshot
.hunks_for_ranges([newest_range])
.first()
.is_some_and(|hunk| {
let next_line = Point::new(hunk.row_range.end.0 + 1, 0);
self.hunk_after_position(&snapshot, next_line)
.is_some_and(|other| other.row_range == hunk.row_range)
});
if run_twice {
self.go_to_next_hunk(&GoToHunk, window, cx);
}
self.stage_or_unstage_diff_hunks(stage, ranges, cx);
self.go_to_next_hunk(&GoToHunk, window, cx);
}
@@ -14303,6 +14288,13 @@ impl Editor {
EditorSettings::override_global(editor_settings, cx);
}
pub fn line_numbers_enabled(&self, cx: &App) -> bool {
if let Some(show_line_numbers) = self.show_line_numbers {
return show_line_numbers;
}
EditorSettings::get_global(cx).gutter.line_numbers
}
pub fn should_use_relative_line_numbers(&self, cx: &mut App) -> bool {
self.use_relative_line_numbers
.unwrap_or(EditorSettings::get_global(cx).relative_line_numbers)
@@ -17017,6 +17009,7 @@ fn snippet_completions(
sort_text: Some(char::MAX.to_string()),
..lsp::CompletionItem::default()
}),
lsp_defaults: None,
},
label: CodeLabel {
text: matching_prefix.clone(),

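The `line_numbers_enabled` method added above follows the same pattern as `should_use_relative_line_numbers`: a per-editor `Option<bool>` override that falls back to the global setting. A standalone sketch of that pattern, with invented stand-in types (not Zed's real `EditorSettings`):

// Invented, simplified stand-ins illustrating "local override, global fallback".
struct GutterSettings {
    line_numbers: bool,
}

struct EditorState {
    // `None` means "defer to the global setting".
    show_line_numbers: Option<bool>,
}

impl EditorState {
    fn line_numbers_enabled(&self, global: &GutterSettings) -> bool {
        self.show_line_numbers.unwrap_or(global.line_numbers)
    }
}

fn main() {
    let global = GutterSettings { line_numbers: true };
    let overridden = EditorState { show_line_numbers: Some(false) };
    let inherited = EditorState { show_line_numbers: None };
    assert!(!overridden.line_numbers_enabled(&global));
    assert!(inherited.line_numbers_enabled(&global));
}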
View File

@@ -12334,24 +12334,6 @@ async fn test_completions_default_resolve_data_handling(cx: &mut TestAppContext)
},
};
let item_0_out = lsp::CompletionItem {
commit_characters: Some(default_commit_characters.clone()),
insert_text_format: Some(default_insert_text_format),
..item_0
};
let items_out = iter::once(item_0_out)
.chain(items[1..].iter().map(|item| lsp::CompletionItem {
commit_characters: Some(default_commit_characters.clone()),
data: Some(default_data.clone()),
insert_text_mode: Some(default_insert_text_mode),
text_edit: Some(lsp::CompletionTextEdit::Edit(lsp::TextEdit {
range: default_edit_range,
new_text: item.label.clone(),
})),
..item.clone()
}))
.collect::<Vec<lsp::CompletionItem>>();
let mut cx = EditorLspTestContext::new_rust(
lsp::ServerCapabilities {
completion_provider: Some(lsp::CompletionOptions {
@@ -12370,10 +12352,11 @@ async fn test_completions_default_resolve_data_handling(cx: &mut TestAppContext)
let completion_data = default_data.clone();
let completion_characters = default_commit_characters.clone();
let completion_items = items.clone();
cx.handle_request::<lsp::request::Completion, _, _>(move |_, _, _| {
let default_data = completion_data.clone();
let default_commit_characters = completion_characters.clone();
let items = items.clone();
let items = completion_items.clone();
async move {
Ok(Some(lsp::CompletionResponse::List(lsp::CompletionList {
items,
@@ -12422,7 +12405,7 @@ async fn test_completions_default_resolve_data_handling(cx: &mut TestAppContext)
.iter()
.map(|mat| mat.string.clone())
.collect::<Vec<String>>(),
items_out
items
.iter()
.map(|completion| completion.label.clone())
.collect::<Vec<String>>()
@@ -12435,14 +12418,18 @@ async fn test_completions_default_resolve_data_handling(cx: &mut TestAppContext)
// with 4 from the end.
assert_eq!(
*resolved_items.lock(),
[
&items_out[0..16],
&items_out[items_out.len() - 4..items_out.len()]
]
.concat()
.iter()
.cloned()
.collect::<Vec<lsp::CompletionItem>>()
[&items[0..16], &items[items.len() - 4..items.len()]]
.concat()
.iter()
.cloned()
.map(|mut item| {
if item.data.is_none() {
item.data = Some(default_data.clone());
}
item
})
.collect::<Vec<lsp::CompletionItem>>(),
"Items sent for resolve should be unchanged modulo resolve `data` filled with default if missing"
);
resolved_items.lock().clear();
@@ -12453,9 +12440,15 @@ async fn test_completions_default_resolve_data_handling(cx: &mut TestAppContext)
// Completions that have already been resolved are skipped.
assert_eq!(
*resolved_items.lock(),
items_out[items_out.len() - 16..items_out.len() - 4]
items[items.len() - 16..items.len() - 4]
.iter()
.cloned()
.map(|mut item| {
if item.data.is_none() {
item.data = Some(default_data.clone());
}
item
})
.collect::<Vec<lsp::CompletionItem>>()
);
resolved_items.lock().clear();

View File

@@ -9014,7 +9014,7 @@ fn diff_hunk_controls(
let snapshot = editor.snapshot(window, cx);
let position =
hunk_range.end.to_point(&snapshot.buffer_snapshot);
editor.go_to_hunk_after_or_before_position(
editor.go_to_hunk_before_or_after_position(
&snapshot,
position,
Direction::Next,
@@ -9050,7 +9050,7 @@ fn diff_hunk_controls(
let snapshot = editor.snapshot(window, cx);
let point =
hunk_range.start.to_point(&snapshot.buffer_snapshot);
editor.go_to_hunk_after_or_before_position(
editor.go_to_hunk_before_or_after_position(
&snapshot,
point,
Direction::Prev,

View File

@@ -1,6 +1,7 @@
use crate::askpass_modal::AskPassModal;
use crate::commit_modal::CommitModal;
use crate::git_panel_settings::StatusStyle;
use crate::project_diff::Diff;
use crate::remote_output_toast::{RemoteAction, RemoteOutputToast};
use crate::repository_selector::filtered_repository_entries;
use crate::{branch_picker, render_remote_button};
@@ -231,6 +232,7 @@ pub struct GitPanel {
fs: Arc<dyn Fs>,
hide_scrollbar_task: Option<Task<()>>,
new_count: usize,
entry_count: usize,
new_staged_count: usize,
pending: Vec<PendingOperation>,
pending_commit: Option<Task<()>>,
@@ -381,6 +383,7 @@ impl GitPanel {
context_menu: None,
workspace,
modal_open: false,
entry_count: 0,
};
git_panel.schedule_update(false, window, cx);
git_panel.show_scrollbar = git_panel.should_show_scrollbar(cx);
@@ -1078,7 +1081,7 @@ impl GitPanel {
});
}
fn stage_all(&mut self, _: &StageAll, _window: &mut Window, cx: &mut Context<Self>) {
pub fn stage_all(&mut self, _: &StageAll, _window: &mut Window, cx: &mut Context<Self>) {
let entries = self
.entries
.iter()
@@ -1089,7 +1092,7 @@ impl GitPanel {
self.change_file_stage(true, entries, cx);
}
fn unstage_all(&mut self, _: &UnstageAll, _window: &mut Window, cx: &mut Context<Self>) {
pub fn unstage_all(&mut self, _: &UnstageAll, _window: &mut Window, cx: &mut Context<Self>) {
let entries = self
.entries
.iter()
@@ -1453,6 +1456,10 @@ impl GitPanel {
/// Generates a commit message using an LLM.
pub fn generate_commit_message(&mut self, cx: &mut Context<Self>) {
if !self.can_commit() {
return;
}
let model = match current_language_model(cx) {
Some(value) => value,
None => return,
@@ -2133,10 +2140,12 @@ impl GitPanel {
self.tracked_count = 0;
self.new_staged_count = 0;
self.tracked_staged_count = 0;
self.entry_count = 0;
for entry in &self.entries {
let Some(status_entry) = entry.status_entry() else {
continue;
};
self.entry_count += 1;
if repo.has_conflict(&status_entry.repo_path) {
self.conflicted_count += 1;
if self.entry_staging(status_entry).has_staged() {
@@ -2291,14 +2300,28 @@ impl GitPanel {
.into_any_element();
}
let can_commit = self.can_commit();
let editor_focus_handle = self.commit_editor.focus_handle(cx);
IconButton::new("generate-commit-message", IconName::AiEdit)
.shape(ui::IconButtonShape::Square)
.icon_color(Color::Muted)
.tooltip(Tooltip::for_action_title_in(
"Generate Commit Message",
&git::GenerateCommitMessage,
&self.commit_editor.focus_handle(cx),
))
.tooltip(move |window, cx| {
if can_commit {
Tooltip::for_action_in(
"Generate Commit Message",
&git::GenerateCommitMessage,
&editor_focus_handle,
window,
cx,
)
} else {
Tooltip::simple(
"You must have either staged changes or tracked files to generate a commit message",
cx,
)
}
})
.disabled(!can_commit)
.on_click(cx.listener(move |this, _event, _window, cx| {
this.generate_commit_message(cx);
}))
@@ -2384,6 +2407,43 @@ impl GitPanel {
})
}
fn render_panel_header(&self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
let text;
let action;
let tooltip;
if self.total_staged_count() == self.entry_count {
text = "Unstage All";
action = git::UnstageAll.boxed_clone();
tooltip = "git reset";
} else {
text = "Stage All";
action = git::StageAll.boxed_clone();
tooltip = "git add --all ."
}
self.panel_header_container(window, cx)
.child(
Button::new("diff", "Open diff")
.tooltip(Tooltip::for_action_title("Open diff", &Diff))
.on_click(|_, _, cx| {
cx.defer(|cx| {
cx.dispatch_action(&Diff);
})
}),
)
.child(div().flex_grow()) // spacer
.child(
Button::new("stage-unstage-all", text)
.tooltip(Tooltip::for_action_title(tooltip, action.as_ref()))
.on_click(move |_, _, cx| {
let action = action.boxed_clone();
cx.defer(move |cx| {
cx.dispatch_action(action.as_ref());
})
}),
)
}
pub fn render_footer(
&self,
window: &mut Window,
@@ -3148,6 +3208,7 @@ impl Render for GitPanel {
.child(
v_flex()
.size_full()
.child(self.render_panel_header(window, cx))
.map(|this| {
if has_entries {
this.child(self.render_entries(has_write_access, window, cx))

View File

@@ -64,6 +64,22 @@ pub fn init(cx: &mut App) {
panel.pull(window, cx);
});
});
workspace.register_action(|workspace, action: &git::StageAll, window, cx| {
let Some(panel) = workspace.panel::<git_panel::GitPanel>(cx) else {
return;
};
panel.update(cx, |panel, cx| {
panel.stage_all(action, window, cx);
});
});
workspace.register_action(|workspace, action: &git::UnstageAll, window, cx| {
let Some(panel) = workspace.panel::<git_panel::GitPanel>(cx) else {
return;
};
panel.update(cx, |panel, cx| {
panel.unstage_all(action, window, cx);
});
});
})
.detach();
}

View File

@@ -152,7 +152,10 @@ impl GoToLine {
cx: &mut Context<Self>,
) {
match event {
editor::EditorEvent::Blurred => cx.emit(DismissEvent),
editor::EditorEvent::Blurred => {
self.prev_scroll_position.take();
cx.emit(DismissEvent)
}
editor::EditorEvent::BufferEdited { .. } => self.highlight_current_line(cx),
_ => {}
}

View File

@@ -1527,6 +1527,7 @@ impl Buffer {
}
fn did_finish_parsing(&mut self, syntax_snapshot: SyntaxSnapshot, cx: &mut Context<Self>) {
self.was_changed();
self.non_text_state_update_count += 1;
self.syntax_map.lock().did_parse(syntax_snapshot);
self.request_autoindent(cx);
@@ -1968,7 +1969,12 @@ impl Buffer {
/// This allows downstream code to check if the buffer's text has changed without
/// waiting for an effect cycle, which would be required if using events.
pub fn record_changes(&mut self, bit: rc::Weak<Cell<bool>>) {
self.change_bits.push(bit);
if let Err(ix) = self
.change_bits
.binary_search_by_key(&rc::Weak::as_ptr(&bit), rc::Weak::as_ptr)
{
self.change_bits.insert(ix, bit);
}
}
fn was_changed(&mut self) {
@@ -2273,12 +2279,13 @@ impl Buffer {
}
fn did_edit(&mut self, old_version: &clock::Global, was_dirty: bool, cx: &mut Context<Self>) {
self.was_changed();
if self.edits_since::<usize>(old_version).next().is_none() {
return;
}
self.reparse(cx);
cx.emit(BufferEvent::Edited);
if was_dirty != self.is_dirty() {
cx.emit(BufferEvent::DirtyChanged);
@@ -2390,7 +2397,6 @@ impl Buffer {
}
self.text.apply_ops(buffer_ops);
self.deferred_ops.insert(deferred_ops);
self.was_changed();
self.flush_deferred_ops(cx);
self.did_edit(&old_version, was_dirty, cx);
// Notify independently of whether the buffer was edited as the operations could include a

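The `record_changes` change above also suggests a small self-contained sketch of the change-bit pattern it implements: an observer keeps an `Rc<Cell<bool>>`, registers a `Weak` handle with the buffer, and the buffer flips every still-alive bit when it changes. The `Buffer` below is a stand-in, not Zed's; only the std types and the binary-search insert mirrored from the diff are real.

use std::cell::Cell;
use std::rc::{Rc, Weak};

struct Buffer {
    // Weak change bits, kept sorted by pointer so duplicate registrations
    // are skipped, mirroring the binary-search insert above.
    change_bits: Vec<Weak<Cell<bool>>>,
}

impl Buffer {
    fn record_changes(&mut self, bit: Weak<Cell<bool>>) {
        if let Err(ix) = self
            .change_bits
            .binary_search_by_key(&Weak::as_ptr(&bit), Weak::as_ptr)
        {
            self.change_bits.insert(ix, bit);
        }
    }

    fn was_changed(&mut self) {
        // Set every bit whose owner is still alive; drop the dead ones.
        self.change_bits.retain(|bit| match bit.upgrade() {
            Some(bit) => {
                bit.set(true);
                true
            }
            None => false,
        });
    }
}

fn main() {
    let mut buffer = Buffer { change_bits: Vec::new() };
    let changed = Rc::new(Cell::new(false));
    buffer.record_changes(Rc::downgrade(&changed));

    buffer.was_changed();
    // The observer polls the flag synchronously, with no effect cycle needed.
    assert!(changed.get());
}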
View File

@@ -11,8 +11,8 @@ use futures::future::BoxFuture;
use futures::stream::BoxStream;
use futures::{FutureExt, StreamExt};
use gpui::{
percentage, svg, Animation, AnimationExt, AnyView, App, AsyncApp, Entity, Render, Subscription,
Task, Transformation,
percentage, svg, Action, Animation, AnimationExt, AnyView, App, AsyncApp, Entity, Render,
Subscription, Task, Transformation,
};
use language_model::{
AuthenticateError, LanguageModel, LanguageModelCompletionEvent, LanguageModelId,
@@ -337,9 +337,20 @@ impl Render for ConfigurationView {
if self.state.read(cx).is_authenticated(cx) {
const LABEL: &str = "Authorized.";
h_flex()
.gap_1()
.child(Icon::new(IconName::Check).color(Color::Success))
.child(Label::new(LABEL))
.justify_between()
.child(
h_flex()
.gap_1()
.child(Icon::new(IconName::Check).color(Color::Success))
.child(Label::new(LABEL)),
)
.child(
Button::new("sign_out", "Sign Out")
.style(ui::ButtonStyle::Filled)
.on_click(|_, window, cx| {
window.dispatch_action(copilot::SignOut.boxed_clone(), cx);
}),
)
} else {
let loading_icon = svg()
.size_8()

View File

@@ -1,6 +1,6 @@
name = "C++"
grammar = "cpp"
path_suffixes = ["cc", "hh", "cpp", "h", "hpp", "cxx", "hxx", "c++", "ipp", "inl", "cu", "cuh", "C", "H"]
path_suffixes = ["cc", "hh", "cpp", "h", "hpp", "cxx", "hxx", "c++", "ipp", "inl", "ixx", "cu", "cuh", "C", "H"]
line_comments = ["// ", "/// ", "//! "]
autoclose_before = ";:.,=}])>"
brackets = [

View File

@@ -62,8 +62,8 @@
; Literals
(this) @keyword
(super) @keyword
(this) @variable.special
(super) @variable.special
[
(null)

View File

@@ -62,8 +62,8 @@
; Literals
(this) @keyword
(super) @keyword
(this) @variable.special
(super) @variable.special
[
(null)

View File

@@ -80,8 +80,8 @@
; Literals
(this) @keyword
(super) @keyword
(this) @variable.special
(super) @variable.special
[
(null)

View File

@@ -1847,7 +1847,6 @@ impl LspCommand for GetCompletions {
let mut completions = if let Some(completions) = completions {
match completions {
lsp::CompletionResponse::Array(completions) => completions,
lsp::CompletionResponse::List(mut list) => {
let items = std::mem::take(&mut list.items);
response_list = Some(list);
@@ -1855,74 +1854,19 @@ impl LspCommand for GetCompletions {
}
}
} else {
Default::default()
Vec::new()
};
let language_server_adapter = lsp_store
.update(&mut cx, |lsp_store, _| {
lsp_store.language_server_adapter_for_id(server_id)
})?
.ok_or_else(|| anyhow!("no such language server"))?;
.with_context(|| format!("no language server with id {server_id}"))?;
let item_defaults = response_list
let lsp_defaults = response_list
.as_ref()
.and_then(|list| list.item_defaults.as_ref());
if let Some(item_defaults) = item_defaults {
let default_data = item_defaults.data.as_ref();
let default_commit_characters = item_defaults.commit_characters.as_ref();
let default_edit_range = item_defaults.edit_range.as_ref();
let default_insert_text_format = item_defaults.insert_text_format.as_ref();
let default_insert_text_mode = item_defaults.insert_text_mode.as_ref();
if default_data.is_some()
|| default_commit_characters.is_some()
|| default_edit_range.is_some()
|| default_insert_text_format.is_some()
|| default_insert_text_mode.is_some()
{
for item in completions.iter_mut() {
if item.data.is_none() && default_data.is_some() {
item.data = default_data.cloned()
}
if item.commit_characters.is_none() && default_commit_characters.is_some() {
item.commit_characters = default_commit_characters.cloned()
}
if item.text_edit.is_none() {
if let Some(default_edit_range) = default_edit_range {
match default_edit_range {
CompletionListItemDefaultsEditRange::Range(range) => {
item.text_edit =
Some(lsp::CompletionTextEdit::Edit(lsp::TextEdit {
range: *range,
new_text: item.label.clone(),
}))
}
CompletionListItemDefaultsEditRange::InsertAndReplace {
insert,
replace,
} => {
item.text_edit =
Some(lsp::CompletionTextEdit::InsertAndReplace(
lsp::InsertReplaceEdit {
new_text: item.label.clone(),
insert: *insert,
replace: *replace,
},
))
}
}
}
}
if item.insert_text_format.is_none() && default_insert_text_format.is_some() {
item.insert_text_format = default_insert_text_format.cloned()
}
if item.insert_text_mode.is_none() && default_insert_text_mode.is_some() {
item.insert_text_mode = default_insert_text_mode.cloned()
}
}
}
}
.and_then(|list| list.item_defaults.clone())
.map(Arc::new);
let mut completion_edits = Vec::new();
buffer.update(&mut cx, |buffer, _cx| {
@@ -1930,12 +1874,34 @@ impl LspCommand for GetCompletions {
let clipped_position = buffer.clip_point_utf16(Unclipped(self.position), Bias::Left);
let mut range_for_token = None;
completions.retain_mut(|lsp_completion| {
let edit = match lsp_completion.text_edit.as_ref() {
completions.retain(|lsp_completion| {
let lsp_edit = lsp_completion.text_edit.clone().or_else(|| {
let default_text_edit = lsp_defaults.as_deref()?.edit_range.as_ref()?;
match default_text_edit {
CompletionListItemDefaultsEditRange::Range(range) => {
Some(lsp::CompletionTextEdit::Edit(lsp::TextEdit {
range: *range,
new_text: lsp_completion.label.clone(),
}))
}
CompletionListItemDefaultsEditRange::InsertAndReplace {
insert,
replace,
} => Some(lsp::CompletionTextEdit::InsertAndReplace(
lsp::InsertReplaceEdit {
new_text: lsp_completion.label.clone(),
insert: *insert,
replace: *replace,
},
)),
}
});
let edit = match lsp_edit {
// If the language server provides a range to overwrite, then
// check that the range is valid.
Some(completion_text_edit) => {
match parse_completion_text_edit(completion_text_edit, &snapshot) {
match parse_completion_text_edit(&completion_text_edit, &snapshot) {
Some(edit) => edit,
None => return false,
}
@@ -1949,14 +1915,15 @@ impl LspCommand for GetCompletions {
return false;
}
let default_edit_range = response_list
.as_ref()
.and_then(|list| list.item_defaults.as_ref())
.and_then(|defaults| defaults.edit_range.as_ref())
.and_then(|range| match range {
CompletionListItemDefaultsEditRange::Range(r) => Some(r),
_ => None,
});
let default_edit_range = lsp_defaults.as_ref().and_then(|lsp_defaults| {
lsp_defaults
.edit_range
.as_ref()
.and_then(|range| match range {
CompletionListItemDefaultsEditRange::Range(r) => Some(r),
_ => None,
})
});
let range = if let Some(range) = default_edit_range {
let range = range_from_lsp(*range);
@@ -2006,14 +1973,25 @@ impl LspCommand for GetCompletions {
Ok(completions
.into_iter()
.zip(completion_edits)
.map(|(lsp_completion, (old_range, mut new_text))| {
.map(|(mut lsp_completion, (old_range, mut new_text))| {
LineEnding::normalize(&mut new_text);
if lsp_completion.data.is_none() {
if let Some(default_data) = lsp_defaults
.as_ref()
.and_then(|item_defaults| item_defaults.data.clone())
{
// Servers (e.g. JDTLS) prefer unchanged completions when resolving the items later,
// so we do not insert the defaults here; `data`, however, is needed for resolving, so it is an exception.
lsp_completion.data = Some(default_data);
}
}
CoreCompletion {
old_range,
new_text,
source: CompletionSource::Lsp {
server_id,
lsp_completion: Box::new(lsp_completion),
lsp_defaults: lsp_defaults.clone(),
resolved: false,
},
}

View File

@@ -49,10 +49,9 @@ use lsp::{
notification::DidRenameFiles, CodeActionKind, CompletionContext, DiagnosticSeverity,
DiagnosticTag, DidChangeWatchedFilesRegistrationOptions, Edit, FileOperationFilter,
FileOperationPatternKind, FileOperationRegistrationOptions, FileRename, FileSystemWatcher,
InsertTextFormat, LanguageServer, LanguageServerBinary, LanguageServerBinaryOptions,
LanguageServerId, LanguageServerName, LspRequestFuture, MessageActionItem, MessageType, OneOf,
RenameFilesParams, SymbolKind, TextEdit, WillRenameFiles, WorkDoneProgressCancelParams,
WorkspaceFolder,
LanguageServer, LanguageServerBinary, LanguageServerBinaryOptions, LanguageServerId,
LanguageServerName, LspRequestFuture, MessageActionItem, MessageType, OneOf, RenameFilesParams,
SymbolKind, TextEdit, WillRenameFiles, WorkDoneProgressCancelParams, WorkspaceFolder,
};
use node_runtime::read_package_installed_version;
use parking_lot::Mutex;
@@ -70,6 +69,7 @@ use smol::channel::Sender;
use snippet::Snippet;
use std::{
any::Any,
borrow::Cow,
cell::RefCell,
cmp::Ordering,
convert::TryInto,
@@ -4475,6 +4475,7 @@ impl LspStore {
completions: Rc<RefCell<Box<[Completion]>>>,
completion_index: usize,
) -> Result<()> {
let server_id = server.server_id();
let can_resolve = server
.capabilities()
.completion_provider
@@ -4491,19 +4492,24 @@ impl LspStore {
CompletionSource::Lsp {
lsp_completion,
resolved,
server_id: completion_server_id,
..
} => {
if *resolved {
return Ok(());
}
anyhow::ensure!(
server_id == *completion_server_id,
"server_id mismatch, querying completion resolve for {server_id} but completion server id is {completion_server_id}"
);
server.request::<lsp::request::ResolveCompletionItem>(*lsp_completion.clone())
}
CompletionSource::Custom => return Ok(()),
}
};
let completion_item = request.await?;
let resolved_completion = request.await?;
if let Some(text_edit) = completion_item.text_edit.as_ref() {
if let Some(text_edit) = resolved_completion.text_edit.as_ref() {
// Technically we don't have to parse the whole `text_edit`, since the only
// language server we currently use that does update `text_edit` in `completionItem/resolve`
// is `typescript-language-server`, and it only updates `text_edit.new_text`.
@@ -4520,24 +4526,26 @@ impl LspStore {
completion.old_range = old_range;
}
}
if completion_item.insert_text_format == Some(InsertTextFormat::SNIPPET) {
// vtsls might change the type of completion after resolution.
let mut completions = completions.borrow_mut();
let completion = &mut completions[completion_index];
if let Some(lsp_completion) = completion.source.lsp_completion_mut() {
if completion_item.insert_text_format != lsp_completion.insert_text_format {
lsp_completion.insert_text_format = completion_item.insert_text_format;
}
}
}
let mut completions = completions.borrow_mut();
let completion = &mut completions[completion_index];
completion.source = CompletionSource::Lsp {
lsp_completion: Box::new(completion_item),
resolved: true,
server_id: server.server_id(),
};
if let CompletionSource::Lsp {
lsp_completion,
resolved,
server_id: completion_server_id,
..
} = &mut completion.source
{
if *resolved {
return Ok(());
}
anyhow::ensure!(
server_id == *completion_server_id,
"server_id mismatch, applying completion resolve for {server_id} but completion server id is {completion_server_id}"
);
*lsp_completion = Box::new(resolved_completion);
*resolved = true;
}
Ok(())
}
@@ -4549,8 +4557,8 @@ impl LspStore {
) -> Result<()> {
let completion_item = completions.borrow()[completion_index]
.source
.lsp_completion()
.cloned();
.lsp_completion(true)
.map(Cow::into_owned);
if let Some(lsp_documentation) = completion_item
.as_ref()
.and_then(|completion_item| completion_item.documentation.clone())
@@ -4626,8 +4634,13 @@ impl LspStore {
CompletionSource::Lsp {
lsp_completion,
resolved,
server_id: completion_server_id,
..
} => {
anyhow::ensure!(
server_id == *completion_server_id,
"remote server_id mismatch, querying completion resolve for {server_id} but completion server id is {completion_server_id}"
);
if *resolved {
return Ok(());
}
@@ -4647,7 +4660,7 @@ impl LspStore {
.request(request)
.await
.context("completion documentation resolve proto request")?;
let lsp_completion = serde_json::from_slice(&response.lsp_completion)?;
let resolved_lsp_completion = serde_json::from_slice(&response.lsp_completion)?;
let documentation = if response.documentation.is_empty() {
CompletionDocumentation::Undocumented
@@ -4662,11 +4675,23 @@ impl LspStore {
let mut completions = completions.borrow_mut();
let completion = &mut completions[completion_index];
completion.documentation = Some(documentation);
completion.source = CompletionSource::Lsp {
server_id,
if let CompletionSource::Lsp {
lsp_completion,
resolved: true,
};
resolved,
server_id: completion_server_id,
lsp_defaults: _,
} = &mut completion.source
{
if *resolved {
return Ok(());
}
anyhow::ensure!(
server_id == *completion_server_id,
"remote server_id mismatch, applying completion resolve for {server_id} but completion server id is {completion_server_id}"
);
*lsp_completion = Box::new(resolved_lsp_completion);
*resolved = true;
}
let old_range = response
.old_start
@@ -4750,7 +4775,7 @@ impl LspStore {
let completion = completions.borrow()[completion_index].clone();
let additional_text_edits = completion
.source
.lsp_completion()
.lsp_completion(true)
.as_ref()
.and_then(|lsp_completion| lsp_completion.additional_text_edits.clone());
if let Some(edits) = additional_text_edits {
@@ -8153,21 +8178,26 @@ impl LspStore {
}
pub(crate) fn serialize_completion(completion: &CoreCompletion) -> proto::Completion {
let (source, server_id, lsp_completion, resolved) = match &completion.source {
let (source, server_id, lsp_completion, lsp_defaults, resolved) = match &completion.source {
CompletionSource::Lsp {
server_id,
lsp_completion,
lsp_defaults,
resolved,
} => (
proto::completion::Source::Lsp as i32,
server_id.0 as u64,
serde_json::to_vec(lsp_completion).unwrap(),
lsp_defaults
.as_deref()
.map(|lsp_defaults| serde_json::to_vec(lsp_defaults).unwrap()),
*resolved,
),
CompletionSource::Custom => (
proto::completion::Source::Custom as i32,
0,
Vec::new(),
None,
true,
),
};
@@ -8178,6 +8208,7 @@ impl LspStore {
new_text: completion.new_text.clone(),
server_id,
lsp_completion,
lsp_defaults,
resolved,
source,
}
@@ -8200,6 +8231,11 @@ impl LspStore {
Some(proto::completion::Source::Lsp) => CompletionSource::Lsp {
server_id: LanguageServerId::from_proto(completion.server_id),
lsp_completion: serde_json::from_slice(&completion.lsp_completion)?,
lsp_defaults: completion
.lsp_defaults
.as_deref()
.map(serde_json::from_slice)
.transpose()?,
resolved: completion.resolved,
},
_ => anyhow::bail!("Unexpected completion source {}", completion.source),
@@ -8288,8 +8324,8 @@ async fn populate_labels_for_completions(
let lsp_completions = new_completions
.iter()
.filter_map(|new_completion| {
if let CompletionSource::Lsp { lsp_completion, .. } = &new_completion.source {
Some(*lsp_completion.clone())
if let Some(lsp_completion) = new_completion.source.lsp_completion(true) {
Some(lsp_completion.into_owned())
} else {
None
}
@@ -8309,8 +8345,8 @@ async fn populate_labels_for_completions(
.fuse();
for completion in new_completions {
match &completion.source {
CompletionSource::Lsp { lsp_completion, .. } => {
match completion.source.lsp_completion(true) {
Some(lsp_completion) => {
let documentation = if let Some(docs) = lsp_completion.documentation.clone() {
Some(docs.into())
} else {
@@ -8328,9 +8364,9 @@ async fn populate_labels_for_completions(
new_text: completion.new_text,
source: completion.source,
confirm: None,
})
});
}
CompletionSource::Custom => {
None => {
let mut label = CodeLabel::plain(completion.new_text.clone(), None);
ensure_uniform_list_compatible_label(&mut label);
completions.push(Completion {
@@ -8340,7 +8376,7 @@ async fn populate_labels_for_completions(
new_text: completion.new_text,
source: completion.source,
confirm: None,
})
});
}
}
}

View File

@@ -382,6 +382,8 @@ pub enum CompletionSource {
server_id: LanguageServerId,
/// The raw completion provided by the language server.
lsp_completion: Box<lsp::CompletionItem>,
/// A set of defaults for this completion item.
lsp_defaults: Option<Arc<lsp::CompletionListItemDefaults>>,
/// Whether this completion has been resolved, to ensure it happens once per completion.
resolved: bool,
},
@@ -397,17 +399,76 @@ impl CompletionSource {
}
}
pub fn lsp_completion(&self) -> Option<&lsp::CompletionItem> {
if let Self::Lsp { lsp_completion, .. } = self {
Some(lsp_completion)
} else {
None
}
}
pub fn lsp_completion(&self, apply_defaults: bool) -> Option<Cow<lsp::CompletionItem>> {
if let Self::Lsp {
lsp_completion,
lsp_defaults,
..
} = self
{
if apply_defaults {
if let Some(lsp_defaults) = lsp_defaults {
let mut completion_with_defaults = *lsp_completion.clone();
let default_commit_characters = lsp_defaults.commit_characters.as_ref();
let default_edit_range = lsp_defaults.edit_range.as_ref();
let default_insert_text_format = lsp_defaults.insert_text_format.as_ref();
let default_insert_text_mode = lsp_defaults.insert_text_mode.as_ref();
fn lsp_completion_mut(&mut self) -> Option<&mut lsp::CompletionItem> {
if let Self::Lsp { lsp_completion, .. } = self {
Some(lsp_completion)
if default_commit_characters.is_some()
|| default_edit_range.is_some()
|| default_insert_text_format.is_some()
|| default_insert_text_mode.is_some()
{
if completion_with_defaults.commit_characters.is_none()
&& default_commit_characters.is_some()
{
completion_with_defaults.commit_characters =
default_commit_characters.cloned()
}
if completion_with_defaults.text_edit.is_none() {
match default_edit_range {
Some(lsp::CompletionListItemDefaultsEditRange::Range(range)) => {
completion_with_defaults.text_edit =
Some(lsp::CompletionTextEdit::Edit(lsp::TextEdit {
range: *range,
new_text: completion_with_defaults.label.clone(),
}))
}
Some(
lsp::CompletionListItemDefaultsEditRange::InsertAndReplace {
insert,
replace,
},
) => {
completion_with_defaults.text_edit =
Some(lsp::CompletionTextEdit::InsertAndReplace(
lsp::InsertReplaceEdit {
new_text: completion_with_defaults.label.clone(),
insert: *insert,
replace: *replace,
},
))
}
None => {}
}
}
if completion_with_defaults.insert_text_format.is_none()
&& default_insert_text_format.is_some()
{
completion_with_defaults.insert_text_format =
default_insert_text_format.cloned()
}
if completion_with_defaults.insert_text_mode.is_none()
&& default_insert_text_mode.is_some()
{
completion_with_defaults.insert_text_mode =
default_insert_text_mode.cloned()
}
}
return Some(Cow::Owned(completion_with_defaults));
}
}
Some(Cow::Borrowed(lsp_completion))
} else {
None
}
@@ -4640,7 +4701,8 @@ impl Completion {
const DEFAULT_KIND_KEY: usize = 2;
let kind_key = self
.source
.lsp_completion()
// `lsp::CompletionListItemDefaults` has no `kind` field
.lsp_completion(false)
.and_then(|lsp_completion| lsp_completion.kind)
.and_then(|lsp_completion_kind| match lsp_completion_kind {
lsp::CompletionItemKind::KEYWORD => Some(0),
@@ -4654,7 +4716,8 @@ impl Completion {
/// Whether this completion is a snippet.
pub fn is_snippet(&self) -> bool {
self.source
.lsp_completion()
// `lsp::CompletionListItemDefaults` has an `insert_text_format` field
.lsp_completion(true)
.map_or(false, |lsp_completion| {
lsp_completion.insert_text_format == Some(lsp::InsertTextFormat::SNIPPET)
})
@@ -4664,9 +4727,10 @@ impl Completion {
///
/// Will return `None` if this completion's kind is not [`CompletionItemKind::COLOR`].
pub fn color(&self) -> Option<Hsla> {
let lsp_completion = self.source.lsp_completion()?;
// `lsp::CompletionListItemDefaults` has no `kind` field
let lsp_completion = self.source.lsp_completion(false)?;
if lsp_completion.kind? == CompletionItemKind::COLOR {
return color_extractor::extract_color(lsp_completion);
return color_extractor::extract_color(&lsp_completion);
}
None
}
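
As a concrete illustration of the `apply_defaults` flag above, here is a small, hedged sketch of the merge it performs, using only `lsp` types and fields already referenced in this diff (and assuming they derive `Default`, as lsp-types structs generally do); it is illustrative, not a drop-in test:

// Hedged sketch: per-item values win, list-level defaults only fill the gaps,
// which is what `lsp_completion(true)` does above.
fn merge_defaults_example() -> lsp::CompletionItem {
    let item = lsp::CompletionItem {
        label: "push".to_string(),
        ..lsp::CompletionItem::default()
    };
    let defaults = lsp::CompletionListItemDefaults {
        commit_characters: Some(vec![".".to_string(), "(".to_string()]),
        insert_text_format: Some(lsp::InsertTextFormat::SNIPPET),
        ..lsp::CompletionListItemDefaults::default()
    };

    let mut merged = item;
    if merged.commit_characters.is_none() {
        merged.commit_characters = defaults.commit_characters;
    }
    if merged.insert_text_format.is_none() {
        merged.insert_text_format = defaults.insert_text_format;
    }
    merged
}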

View File

@@ -1000,6 +1000,7 @@ message Completion {
bytes lsp_completion = 5;
bool resolved = 6;
Source source = 7;
optional bytes lsp_defaults = 8;
enum Source {
Custom = 0;

View File

@@ -1,58 +0,0 @@
mod session;
use project::Project;
pub(crate) use session::*;
use assistant_tool::{Tool, ToolRegistry};
use gpui::{App, AppContext as _, Entity, Task};
use schemars::JsonSchema;
use serde::Deserialize;
use std::sync::Arc;
pub fn init(cx: &App) {
let registry = ToolRegistry::global(cx);
registry.register_tool(ScriptingTool);
}
#[derive(Debug, Deserialize, JsonSchema)]
struct ScriptingToolInput {
lua_script: String,
}
struct ScriptingTool;
impl Tool for ScriptingTool {
fn name(&self) -> String {
"lua-interpreter".into()
}
fn description(&self) -> String {
include_str!("scripting_tool_description.txt").into()
}
fn input_schema(&self) -> serde_json::Value {
let schema = schemars::schema_for!(ScriptingToolInput);
serde_json::to_value(&schema).unwrap()
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
project: Entity<Project>,
cx: &mut App,
) -> Task<anyhow::Result<String>> {
let input = match serde_json::from_value::<ScriptingToolInput>(input) {
Err(err) => return Task::ready(Err(err.into())),
Ok(input) => input,
};
let session = cx.new(|cx| Session::new(project, cx));
let lua_script = input.lua_script;
let script = session.update(cx, |session, cx| session.run_script(lua_script, cx));
cx.spawn(|_cx| async move {
let output = script.await?.stdout;
drop(session);
Ok(format!("The script output the following:\n{output}"))
})
}
}

View File

@@ -1,5 +1,6 @@
use crate::{settings_store::SettingsStore, Settings};
use fs::Fs;
use collections::HashSet;
use fs::{Fs, PathEventKind};
use futures::{channel::mpsc, StreamExt};
use gpui::{App, BackgroundExecutor, ReadGlobal};
use std::{path::PathBuf, sync::Arc, time::Duration};
@@ -78,6 +79,55 @@ pub fn watch_config_file(
rx
}
pub fn watch_config_dir(
executor: &BackgroundExecutor,
fs: Arc<dyn Fs>,
dir_path: PathBuf,
config_paths: HashSet<PathBuf>,
) -> mpsc::UnboundedReceiver<String> {
let (tx, rx) = mpsc::unbounded();
executor
.spawn(async move {
for file_path in &config_paths {
if fs.metadata(file_path).await.is_ok_and(|v| v.is_some()) {
if let Ok(contents) = fs.load(file_path).await {
if tx.unbounded_send(contents).is_err() {
return;
}
}
}
}
let (events, _) = fs.watch(&dir_path, Duration::from_millis(100)).await;
futures::pin_mut!(events);
while let Some(event_batch) = events.next().await {
for event in event_batch {
if config_paths.contains(&event.path) {
match event.kind {
Some(PathEventKind::Removed) => {
if tx.unbounded_send(String::new()).is_err() {
return;
}
}
Some(PathEventKind::Created) | Some(PathEventKind::Changed) => {
if let Ok(contents) = fs.load(&event.path).await {
if tx.unbounded_send(contents).is_err() {
return;
}
}
}
_ => {}
}
}
}
}
})
.detach();
rx
}
pub fn update_settings_file<T: Settings>(
fs: Arc<dyn Fs>,
cx: &App,

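A hedged sketch of how a caller might consume the receiver returned by `watch_config_dir` above; the executor, `Fs` handle, directory, and path set are assumed to come from the surrounding app context, as in the Copilot Chat change earlier in this diff:

// Illustrative consumer: each message is the full contents of a watched file,
// or an empty string when a watched file is removed.
async fn follow_config(
    executor: &gpui::BackgroundExecutor,
    fs: std::sync::Arc<dyn fs::Fs>,
    dir_path: std::path::PathBuf,
    config_paths: collections::HashSet<std::path::PathBuf>,
) {
    use futures::StreamExt as _;

    let mut contents_rx = watch_config_dir(executor, fs, dir_path, config_paths);
    while let Some(contents) = contents_rx.next().await {
        if contents.is_empty() {
            // A watched file was removed (or emptied): clear any cached state.
        } else {
            // Re-parse `contents` and update the cached state.
        }
    }
}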
View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# Tom Hale, 2016. MIT Licence.
# Print out 256 colours, with each number printed in its corresponding colour

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# Copied from: https://unix.stackexchange.com/a/696756
# Based on: https://gist.github.com/XVilka/8346728 and https://unix.stackexchange.com/a/404415/395213

View File

@@ -85,7 +85,7 @@ const FILE_SUFFIXES_BY_ICON_KEY: &[(&str, &[&str])] = &[
("coffeescript", &["coffee"]),
(
"cpp",
&["c++", "cc", "cpp", "cxx", "hh", "hpp", "hxx", "inl"],
&["c++", "cc", "cpp", "cxx", "hh", "hpp", "hxx", "inl", "ixx"],
),
("crystal", &["cr", "ecr"]),
("csharp", &["cs"]),
@@ -264,6 +264,7 @@ const FILE_SUFFIXES_BY_ICON_KEY: &[(&str, &[&str])] = &[
("vs_sln", &["sln"]),
("vs_suo", &["suo"]),
("vue", &["vue"]),
("wgsl", &["wgsl"]),
("zig", &["zig"]),
];
@@ -348,6 +349,7 @@ const FILE_ICONS: &[(&str, &str)] = &[
("vs_sln", "icons/file_icons/file.svg"),
("vs_suo", "icons/file_icons/file.svg"),
("vue", "icons/file_icons/vue.svg"),
("wgsl", "icons/file_icons/wgsl.svg"),
("zig", "icons/file_icons/zig.svg"),
];

View File

@@ -98,7 +98,6 @@ remote.workspace = true
repl.workspace = true
reqwest_client.workspace = true
rope.workspace = true
scripting_tool.workspace = true
search.workspace = true
serde.workspace = true
serde_json.workspace = true

View File

@@ -9,8 +9,9 @@
<string>Alternate</string>
<key>LSItemContentTypes</key>
<array>
<string>public.text</string>
<string>public.folder</string>
<string>public.plain-text</string>
<string>public.text</string>
<string>public.utf8-plain-text</string>
</array>
</dict>

View File

@@ -476,7 +476,6 @@ fn main() {
cx,
);
assistant_tools::init(cx);
scripting_tool::init(cx);
repl::init(app_state.fs.clone(), cx);
extension_host::init(
extension_host_proxy,

View File

@@ -96,6 +96,7 @@ impl Render for QuickActionBar {
let git_blame_inline_enabled = editor_value.git_blame_inline_enabled();
let show_git_blame_gutter = editor_value.show_git_blame_gutter();
let auto_signature_help_enabled = editor_value.auto_signature_help_enabled(cx);
let show_line_numbers = editor_value.line_numbers_enabled(cx);
let has_edit_prediction_provider = editor_value.edit_prediction_provider().is_some();
let show_edit_predictions = editor_value.edit_predictions_enabled();
let edit_predictions_enabled_at_cursor =
@@ -261,6 +262,58 @@ impl Render for QuickActionBar {
);
}
if has_edit_prediction_provider {
let mut inline_completion_entry = ContextMenuEntry::new("Edit Predictions")
.toggleable(IconPosition::Start, edit_predictions_enabled_at_cursor && show_edit_predictions)
.disabled(!edit_predictions_enabled_at_cursor)
.action(
editor::actions::ToggleEditPrediction.boxed_clone(),
).handler({
let editor = editor.clone();
move |window, cx| {
editor
.update(cx, |editor, cx| {
editor.toggle_edit_predictions(
&editor::actions::ToggleEditPrediction,
window,
cx,
);
})
.ok();
}
});
if !edit_predictions_enabled_at_cursor {
inline_completion_entry = inline_completion_entry.documentation_aside(|_| {
Label::new("You can't toggle edit predictions for this file as it is within the excluded files list.").into_any_element()
});
}
menu = menu.item(inline_completion_entry);
}
menu = menu.separator();
menu = menu.toggleable_entry(
"Line Numbers",
show_line_numbers,
IconPosition::Start,
Some(editor::actions::ToggleLineNumbers.boxed_clone()),
{
let editor = editor.clone();
move |window, cx| {
editor
.update(cx, |editor, cx| {
editor.toggle_line_numbers(
&editor::actions::ToggleLineNumbers,
window,
cx,
);
})
.ok();
}
},
);
menu = menu.toggleable_entry(
"Selection Menu",
selection_menu_enabled,
@@ -303,35 +356,6 @@ impl Render for QuickActionBar {
},
);
if has_edit_prediction_provider {
let mut inline_completion_entry = ContextMenuEntry::new("Edit Predictions")
.toggleable(IconPosition::Start, edit_predictions_enabled_at_cursor && show_edit_predictions)
.disabled(!edit_predictions_enabled_at_cursor)
.action(
editor::actions::ToggleEditPrediction.boxed_clone(),
).handler({
let editor = editor.clone();
move |window, cx| {
editor
.update(cx, |editor, cx| {
editor.toggle_edit_predictions(
&editor::actions::ToggleEditPrediction,
window,
cx,
);
})
.ok();
}
});
if !edit_predictions_enabled_at_cursor {
inline_completion_entry = inline_completion_entry.documentation_aside(|_| {
Label::new("You can't toggle edit predictions for this file as it is within the excluded files list.").into_any_element()
});
}
menu = menu.item(inline_completion_entry);
}
menu = menu.separator();
menu = menu.toggleable_entry(

View File

@@ -1,14 +1,11 @@
(
import
(
let
lock = builtins.fromJSON (builtins.readFile ./flake.lock);
in
fetchTarball {
url = lock.nodes.flake-compat.locked.url or "https://github.com/edolstra/flake-compat/archive/${lock.nodes.flake-compat.locked.rev}.tar.gz";
sha256 = lock.nodes.flake-compat.locked.narHash;
}
)
{src = ./.;}
)
.defaultNix
(import (
let
lock = builtins.fromJSON (builtins.readFile ./flake.lock);
in
fetchTarball {
url =
lock.nodes.flake-compat.locked.url
or "https://github.com/edolstra/flake-compat/archive/${lock.nodes.flake-compat.locked.rev}.tar.gz";
sha256 = lock.nodes.flake-compat.locked.narHash;
}
) { src = ./.; }).defaultNix

View File

@@ -115,6 +115,7 @@
- [Rust](./languages/rust.md)
- [Scala](./languages/scala.md)
- [Scheme](./languages/scheme.md)
- [Shell Script](./languages/sh.md)
- [Svelte](./languages/svelte.md)
- [Swift](./languages/swift.md)
- [Tailwind CSS](./languages/tailwindcss.md)

View File

@@ -8,6 +8,7 @@ If you're used to a specific editor's defaults you can set a `base_keymap` in yo
- VSCode (default)
- Atom
- Emacs (Beta)
- JetBrains
- SublimeText
- TextMate
@@ -52,7 +53,7 @@ If you want to debug problems with custom keymaps you can use `debug: Open Key C
Zed has the ability to match against not just a single keypress, but a sequence of keys typed in order. Each key in the `"bindings"` map is a sequence of keypresses separated with a space.
Each key press is a sequence of modifiers followed by a key. The modifiers are:
Each keypress is a sequence of modifiers followed by a key. The modifiers are:
- `ctrl-` The control key
- `cmd-`, `win-` or `super-` for the platform modifier (Command on macOS, Windows key on Windows, and the Super key on Linux).
@@ -77,7 +78,7 @@ The `shift-` modifier can only be used in combination with a letter to indicate
The `alt-` modifier can be used on many layouts to generate a different key. For example, on a macOS US keyboard the combination `alt-c` types `ç`. You can match against either in your keymap file, though by convention Zed spells this combination as `alt-c`.
It is possible to match against typing a modifier key on its own. For example `shift shift` can be used to implement JetBrains search everywhere shortcut. In this case the binding happens on key release instead of key press.
It is possible to match against typing a modifier key on its own. For example `shift shift` can be used to implement JetBrains search everywhere shortcut. In this case the binding happens on key release instead of keypress.
### Contexts
@@ -138,13 +139,13 @@ As of Zed 0.162.0, Zed has some support for non-QWERTY keyboards on macOS. Bette
There are roughly three categories of keyboard to consider:
Keyboards that support full ASCII (QWERTY, DVORAK, COLEMAK, etc.). On these keyboards bindings are resolved based on the character that would be generated by the key. So to type `cmd-[`, find the key labelled `[` and press it with command.
Keyboards that support full ASCII (QWERTY, DVORAK, COLEMAK, etc.). On these keyboards bindings are resolved based on the character that would be generated by the key. So to type `cmd-[`, find the key labeled `[` and press it with command.
Keyboards that are mostly non-ASCII, but support full ASCII when the command key is pressed. For example, Cyrillic, Armenian, and Hebrew keyboards. On these keyboards bindings are resolved based on the character that would be generated by typing the key with command pressed. So to type `ctrl-a`, find the key that generates `cmd-a`. For these keyboards, keyboard shortcuts are displayed in the app using their ASCII equivalents. If the ASCII equivalents are not printed on your keyboard, you can use the macOS keyboard viewer and hold down the `cmd` key to find things (though often the ASCII equivalents are in a QWERTY layout).
Finally keyboards that support extended Latin alphabets (usually ISO keyboards) require the most support. For example French AZERTY, German QWERTZ, etc. On these keyboards it is often not possible to type the entire ASCII range without option. To ensure that shortcuts _can_ be typed without option, keyboard shortcuts are mapped to "key equivalents" in the same way as [macOS](). This mapping is defined per layout, and is a compromise between leaving keyboard shortcuts triggered by the same character they are defined with, keeping shortcuts in the same place as a QWERTY layout, and moving shortcuts out of the way of system shortcuts.
For example on a German QWERTZ keyboard, the `cmd->` shortcut is moved to `cmd-:` because `cmd->` is the system window switcher and this is where that shortcut is typed on a QWERTY keyboard. `cmd-+` stays the same because + is still typable without option, and as a result, `cmd-[` and `cmd-]` become `cmd-ö` and `cmd-ä`, moving out of the way of the `+` key.
For example on a German QWERTZ keyboard, the `cmd->` shortcut is moved to `cmd-:` because `cmd->` is the system window switcher and this is where that shortcut is typed on a QWERTY keyboard. `cmd-+` stays the same because + is still typeable without option, and as a result, `cmd-[` and `cmd-]` become `cmd-ö` and `cmd-ä`, moving out of the way of the `+` key.
If you are defining shortcuts in your personal keymap, you can opt into the key equivalent mapping by setting `use_key_equivalents` to `true` in your keymap:
@@ -208,7 +209,7 @@ There are some limitations to this, notably:
The argument to `SendKeystrokes` is a space-separated list of keystrokes (using the same syntax as above). Due to the way that keystrokes are parsed, any segment that is not recognized as a keypress will be sent verbatim to the currently focused input field.
If the argument to `SendKeystrokes` contains the binding used to trigger it, it will use the next-highest-precedence definition of that binding. This allows you to extend the default behaviour of a key binding.
If the argument to `SendKeystrokes` contains the binding used to trigger it, it will use the next-highest-precedence definition of that binding. This allows you to extend the default behavior of a key binding.
### Forward keys to terminal

View File

@@ -28,6 +28,7 @@ Zed supports hundreds of programming languages and text formats. Some work out-o
- [Go](./languages/go.md)
- [Groovy](./languages/groovy.md)
- [Haskell](./languages/haskell.md)
- [Helm](./languages/helm.md)
- [HTML](./languages/html.md)
- [Java](./languages/java.md)
- [JavaScript](./languages/javascript.md)
@@ -58,7 +59,7 @@ Zed supports hundreds of programming languages and text formats. Some work out-o
- [Shell Script](./languages/sh.md)
- [Svelte](./languages/svelte.md)
- [Swift](./languages/swift.md)
- [TailwindCSS](./languages/tailwindcss.md)
- [Tailwind CSS](./languages/tailwindcss.md)
- [Terraform](./languages/terraform.md)
- [TOML](./languages/toml.md)
- [TypeScript](./languages/typescript.md)

flake.lock generated
View File

@@ -2,11 +2,11 @@
"nodes": {
"crane": {
"locked": {
"lastModified": 1739936662,
"narHash": "sha256-x4syUjNUuRblR07nDPeLDP7DpphaBVbUaSoeZkFbGSk=",
"lastModified": 1741481578,
"narHash": "sha256-JBTSyJFQdO3V8cgcL08VaBUByEU6P5kXbTJN6R0PFQo=",
"owner": "ipetkov",
"repo": "crane",
"rev": "19de14aaeb869287647d9461cbd389187d8ecdb7",
"rev": "bb1c9567c43e4434f54e9481eb4b8e8e0d50f0b5",
"type": "github"
},
"original": {
@@ -32,11 +32,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1740695751,
"narHash": "sha256-D+R+kFxy1KsheiIzkkx/6L63wEHBYX21OIwlFV8JvDs=",
"lastModified": 1741379970,
"narHash": "sha256-Wh7esNh7G24qYleLvgOSY/7HlDUzWaL/n4qzlBePpiw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "6313551cd05425cd5b3e63fe47dbc324eabb15e4",
"rev": "36fd87baa9083f34f7f5027900b62ee6d09b1f2f",
"type": "github"
},
"original": {
@@ -61,11 +61,11 @@
]
},
"locked": {
"lastModified": 1740882709,
"narHash": "sha256-VC+8GxWK4p08jjIbmsNfeFQajW2lsiOR/XQiOOvqgvs=",
"lastModified": 1741573199,
"narHash": "sha256-A2sln1GdCf+uZ8yrERSCZUCqZ3JUlOv1WE2VFqqfaLQ=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "f4d5a693c18b389f0d58f55b6f7be6ef85af186f",
"rev": "c777dc8a1e35407b0e80ec89817fe69970f4e81a",
"type": "github"
},
"original": {

View File

@@ -50,12 +50,11 @@
in
{
packages = forAllSystems (pkgs: {
zed-editor = pkgs.zed-editor;
default = pkgs.zed-editor;
});
devShells = forAllSystems (pkgs: {
default = import ./nix/shell.nix { inherit pkgs; };
default = pkgs.callPackage ./nix/shell.nix { };
});
formatter = forAllSystems (pkgs: pkgs.nixfmt-rfc-style);

View File

@@ -2,11 +2,12 @@
lib,
crane,
rustToolchain,
fetchpatch,
clang,
rustPlatform,
cmake,
copyDesktopItems,
fetchFromGitHub,
curl,
clang,
perl,
pkg-config,
protobuf,
@@ -29,11 +30,12 @@
cargo-about,
cargo-bundle,
git,
livekit-libwebrtc,
apple-sdk_15,
darwin,
darwinMinVersionHook,
makeWrapper,
nodejs_22,
nix-gitignore,
withGLES ? false,
}:
@@ -41,176 +43,205 @@
assert withGLES -> stdenv.hostPlatform.isLinux;
let
includeFilter =
path: type:
mkIncludeFilter =
root': path: type:
let
baseName = baseNameOf (toString path);
parentDir = dirOf path;
inRootDir = type == "directory" && parentDir == ../.;
# note: under lazy-trees this introduces an extra copy
root = toString root' + "/";
relPath = lib.removePrefix root path;
topLevelIncludes = [
"crates"
"assets"
"extensions"
"script"
"tooling"
"Cargo.toml"
".config" # nextest?
];
firstComp = builtins.head (lib.path.subpath.components relPath);
in
!(
inRootDir
&& (baseName == "docs" || baseName == ".github" || baseName == ".git" || baseName == "target")
);
builtins.elem firstComp topLevelIncludes;
craneLib = crane.overrideToolchain rustToolchain;
commonSrc = lib.cleanSourceWith {
src = nix-gitignore.gitignoreSource [ ] ../.;
filter = includeFilter;
name = "source";
};
commonArgs = rec {
pname = "zed-editor";
version = "nightly";
src = commonSrc;
nativeBuildInputs =
[
clang
cmake
copyDesktopItems
curl
perl
pkg-config
protobuf
cargo-about
]
++ lib.optionals stdenv.hostPlatform.isLinux [ makeWrapper ]
++ lib.optionals stdenv.hostPlatform.isDarwin [ cargo-bundle ];
buildInputs =
[
curl
fontconfig
freetype
libgit2
openssl
sqlite
zlib
zstd
]
++ lib.optionals stdenv.hostPlatform.isLinux [
alsa-lib
libxkbcommon
wayland
xorg.libxcb
]
++ lib.optionals stdenv.hostPlatform.isDarwin [
apple-sdk_15
(darwinMinVersionHook "10.15")
];
env = {
ZSTD_SYS_USE_PKG_CONFIG = true;
FONTCONFIG_FILE = makeFontsConf {
fontDirectories = [
"${src}/assets/fonts/plex-mono"
"${src}/assets/fonts/plex-sans"
];
gpu-lib = if withGLES then libglvnd else vulkan-loader;
commonArgs =
let
zedCargoLock = builtins.fromTOML (builtins.readFile ../crates/zed/Cargo.toml);
in
rec {
pname = "zed-editor";
version = zedCargoLock.package.version + "-nightly";
src = builtins.path {
path = ../.;
filter = mkIncludeFilter ../.;
name = "source";
};
ZED_UPDATE_EXPLANATION = "Zed has been installed using Nix. Auto-updates have thus been disabled.";
RELEASE_VERSION = version;
};
};
cargoArtifacts = craneLib.buildDepsOnly commonArgs;
in
craneLib.buildPackage (
commonArgs
// rec {
inherit cargoArtifacts;
patches =
[
# Zed uses cargo-install to install cargo-about during the script execution.
# We provide cargo-about ourselves and can skip this step.
# Until https://github.com/zed-industries/zed/issues/19971 is fixed,
# we also skip any crate for which the license cannot be determined.
(fetchpatch {
url = "https://raw.githubusercontent.com/NixOS/nixpkgs/1fd02d90c6c097f91349df35da62d36c19359ba7/pkgs/by-name/ze/zed-editor/0001-generate-licenses.patch";
hash = "sha256-cLgqLDXW1JtQ2OQFLd5UolAjfy7bMoTw40lEx2jA2pk=";
})
]
++ lib.optionals stdenv.hostPlatform.isDarwin [
# Livekit requires Swift 6
# We need this until livekit-rust sdk is used
(fetchpatch {
url = "https://raw.githubusercontent.com/NixOS/nixpkgs/1fd02d90c6c097f91349df35da62d36c19359ba7/pkgs/by-name/ze/zed-editor/0002-disable-livekit-darwin.patch";
hash = "sha256-whZ7RaXv8hrVzWAveU3qiBnZSrvGNEHTuyNhxgMIo5w=";
})
];
cargoLock = ../Cargo.lock;
cargoExtraArgs = "--package=zed --package=cli --features=gpui/runtime_shaders";
dontUseCmakeConfigure = true;
preBuild = ''
bash script/generate-licenses
'';
postFixup = lib.optionalString stdenv.hostPlatform.isLinux ''
patchelf --add-rpath ${gpu-lib}/lib $out/libexec/*
patchelf --add-rpath ${wayland}/lib $out/libexec/*
wrapProgram $out/libexec/zed-editor --suffix PATH : ${lib.makeBinPath [ nodejs_22 ]}
'';
RUSTFLAGS = if withGLES then "--cfg gles" else "";
gpu-lib = if withGLES then libglvnd else vulkan-loader;
preCheck = ''
export HOME=$(mktemp -d);
'';
cargoTestExtraArgs =
"-- "
+ lib.concatStringsSep " " (
nativeBuildInputs =
[
# Flaky: unreliably fails on certain hosts (including Hydra)
"--skip=zed::tests::test_window_edit_state_restoring_enabled"
clang # TODO: use pkgs.clangStdenv or ignore cargo config?
cmake
copyDesktopItems
curl
perl
pkg-config
protobuf
cargo-about
rustPlatform.bindgenHook
]
++ lib.optionals stdenv.hostPlatform.isLinux [ makeWrapper ]
++ lib.optionals stdenv.hostPlatform.isDarwin [
# TODO: move to overlay so it's usable in the shell
(cargo-bundle.overrideAttrs (old: {
version = "0.6.0-zed";
src = fetchFromGitHub {
owner = "zed-industries";
repo = "cargo-bundle";
rev = "zed-deploy";
hash = "sha256-OxYdTSiR9ueCvtt7Y2OJkvzwxxnxu453cMS+l/Bi5hM=";
};
}))
];
buildInputs =
[
curl
fontconfig
freetype
# TODO: need staticlib of this for linking the musl remote server.
# should make it a separate derivation/flake output
# see https://crane.dev/examples/cross-musl.html
libgit2
openssl
sqlite
zlib
zstd
]
++ lib.optionals stdenv.hostPlatform.isLinux [
# Fails on certain hosts (including Hydra) for unclear reason
"--skip=test_open_paths_action"
alsa-lib
libxkbcommon
wayland
gpu-lib
xorg.libxcb
]
);
++ lib.optionals stdenv.hostPlatform.isDarwin [
apple-sdk_15
darwin.apple_sdk.frameworks.System
(darwinMinVersionHook "10.15")
];
cargoExtraArgs = "--package=zed --package=cli --features=gpui/runtime_shaders";
env = {
ZSTD_SYS_USE_PKG_CONFIG = true;
FONTCONFIG_FILE = makeFontsConf {
fontDirectories = [
../assets/fonts/plex-mono
../assets/fonts/plex-sans
];
};
ZED_UPDATE_EXPLANATION = "Zed has been installed using Nix. Auto-updates have thus been disabled.";
RELEASE_VERSION = version;
RUSTFLAGS = if withGLES then "--cfg gles" else "";
# TODO: why are these not handled by the linker given that they're in buildInputs?
NIX_LDFLAGS = "-rpath ${
lib.makeLibraryPath [
gpu-lib
wayland
]
}";
LK_CUSTOM_WEBRTC = livekit-libwebrtc;
};
cargoVendorDir = craneLib.vendorCargoDeps {
inherit src cargoLock;
overrideVendorGitCheckout =
let
hasWebRtcSys = builtins.any (crate: crate.name == "webrtc-sys");
# `webrtc-sys` expects a staticlib; nixpkgs' `livekit-webrtc` has been patched to
# produce a `dylib`... patching `webrtc-sys`'s build script is the easier option
# TODO: send livekit sdk a PR to make this configurable
postPatch = ''
substituteInPlace webrtc-sys/build.rs --replace-fail \
"cargo:rustc-link-lib=static=webrtc" "cargo:rustc-link-lib=dylib=webrtc"
'';
in
crates: drv:
if hasWebRtcSys crates then
drv.overrideAttrs (o: {
postPatch = (o.postPatch or "") + postPatch;
})
else
drv;
};
};
cargoArtifacts = craneLib.buildDepsOnly (
commonArgs
// {
# TODO: figure out why the main derivation is still rebuilding deps...
# disable pre-building the deps for now
buildPhaseCargoCommand = "true";
# forcibly inhibit `doInstallCargoArtifacts`...
# https://github.com/ipetkov/crane/blob/1d19e2ec7a29dcc25845eec5f1527aaf275ec23e/lib/setupHooks/installCargoArtifactsHook.sh#L111
#
# it is, unfortunately, not overridable in `buildDepsOnly`:
# https://github.com/ipetkov/crane/blob/1d19e2ec7a29dcc25845eec5f1527aaf275ec23e/lib/buildDepsOnly.nix#L85
preBuild = "postInstallHooks=()";
doCheck = false;
}
);
in
craneLib.buildPackage (
lib.recursiveUpdate commonArgs {
inherit cargoArtifacts;
patches = lib.optionals stdenv.hostPlatform.isDarwin [
# Livekit requires Swift 6
# We need this until livekit-rust sdk is used
../script/patches/use-cross-platform-livekit.patch
];
dontUseCmakeConfigure = true;
# without the env var generate-licenses fails due to crane's fetchCargoVendor, see:
# https://github.com/zed-industries/zed/issues/19971#issuecomment-2688455390
preBuild = ''
ALLOW_MISSING_LICENSES=yes bash script/generate-licenses
echo nightly > crates/zed/RELEASE_CHANNEL
'';
# TODO: try craneLib.cargoNextest separate output
# for now we're not worried about running our test suite in the nix sandbox
doCheck = false;
installPhase =
if stdenv.hostPlatform.isDarwin then
''
runHook preInstall
# cargo-bundle expects the binary in target/release
mv target/release/zed target/release/zed
pushd crates/zed
# Note that this is GNU sed, while Zed's bundle-mac uses BSD sed
sed -i "s/package.metadata.bundle-stable/package.metadata.bundle/" Cargo.toml
sed -i "s/package.metadata.bundle-nightly/package.metadata.bundle/" Cargo.toml
export CARGO_BUNDLE_SKIP_BUILD=true
app_path=$(cargo bundle --release | xargs)
# We're not using the fork of cargo-bundle, so we must manually append plist extensions
# Remove closing tags from Info.plist (last two lines)
head -n -2 $app_path/Contents/Info.plist > Info.plist
# Append extensions
cat resources/info/*.plist >> Info.plist
# Add closing tags
printf "</dict>\n</plist>\n" >> Info.plist
mv Info.plist $app_path/Contents/Info.plist
app_path="$(cargo bundle --release | xargs)"
popd
mkdir -p $out/Applications $out/bin
# Zed expects git next to its own binary
ln -s ${git}/bin/git $app_path/Contents/MacOS/git
mv target/release/cli $app_path/Contents/MacOS/cli
mv $app_path $out/Applications/
ln -s ${git}/bin/git "$app_path/Contents/MacOS/git"
mv target/release/cli "$app_path/Contents/MacOS/cli"
mv "$app_path" $out/Applications/
# Physical location of the CLI must be inside the app bundle as this is used
# to determine which app to start
ln -s $out/Applications/Zed.app/Contents/MacOS/cli $out/bin/zed
ln -s "$out/Applications/Zed Nightly.app/Contents/MacOS/cli" $out/bin/zed
runHook postInstall
''
else
# TODO: icons should probably be named "zed-nightly". fix bundle-linux first
''
runHook preInstall
@@ -218,24 +249,31 @@ craneLib.buildPackage (
cp target/release/zed $out/libexec/zed-editor
cp target/release/cli $out/bin/zed
install -D ${commonSrc}/crates/zed/resources/app-icon@2x.png $out/share/icons/hicolor/1024x1024@2x/apps/zed.png
install -D ${commonSrc}/crates/zed/resources/app-icon.png $out/share/icons/hicolor/512x512/apps/zed.png
install -D "crates/zed/resources/app-icon-nightly@2x.png" \
"$out/share/icons/hicolor/1024x1024@2x/apps/zed.png"
install -D crates/zed/resources/app-icon-nightly.png \
$out/share/icons/hicolor/512x512/apps/zed.png
# extracted from https://github.com/zed-industries/zed/blob/v0.141.2/script/bundle-linux (envsubst)
# and https://github.com/zed-industries/zed/blob/v0.141.2/script/install.sh (final desktop file name)
# extracted from ../script/bundle-linux (envsubst) and
# ../script/install.sh (final desktop file name)
(
export DO_STARTUP_NOTIFY="true"
export APP_CLI="zed"
export APP_ICON="zed"
export APP_NAME="Zed"
export APP_NAME="Zed Nightly"
export APP_ARGS="%U"
mkdir -p "$out/share/applications"
${lib.getExe envsubst} < "crates/zed/resources/zed.desktop.in" > "$out/share/applications/dev.zed.Zed.desktop"
${lib.getExe envsubst} < "crates/zed/resources/zed.desktop.in" > "$out/share/applications/dev.zed.Zed-Nightly.desktop"
)
runHook postInstall
'';
# TODO: why isn't this also done on macOS?
postFixup = lib.optionalString stdenv.hostPlatform.isLinux ''
wrapProgram $out/libexec/zed-editor --suffix PATH : ${lib.makeBinPath [ nodejs_22 ]}
'';
meta = {
description = "High-performance, multiplayer code editor from the creators of Atom and Tree-sitter";
homepage = "https://zed.dev";

View File

@@ -1,65 +1,62 @@
{
pkgs ? import <nixpkgs> { },
lib,
mkShell,
stdenv,
stdenvAdapters,
makeFontsConf,
zed-editor,
rust-analyzer,
cargo-nextest,
nixfmt-rfc-style,
protobuf,
nodejs_22,
}:
let
inherit (pkgs) lib;
moldStdenv = stdenvAdapters.useMoldLinker stdenv;
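# on Linux the shell swaps in a stdenv that links with mold for faster link times;
# other platforms keep the stock mkShell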
mkShell' =
if stdenv.hostPlatform.isLinux then mkShell.override { stdenv = moldStdenv; } else mkShell;
in
pkgs.mkShell rec {
packages =
[
pkgs.clang
pkgs.curl
pkgs.cmake
pkgs.perl
pkgs.pkg-config
pkgs.protobuf
pkgs.rustPlatform.bindgenHook
pkgs.rust-analyzer
]
++ lib.optionals pkgs.stdenv.hostPlatform.isLinux [
pkgs.mold
];
mkShell' {
inputsFrom = [ zed-editor ];
packages = [
rust-analyzer
cargo-nextest
nixfmt-rfc-style
# TODO: package protobuf-language-server for editing zed.proto
# TODO: add other tools used in our scripts
buildInputs =
[
pkgs.bzip2
pkgs.curl
pkgs.fontconfig
pkgs.freetype
pkgs.libgit2
pkgs.openssl
pkgs.sqlite
pkgs.stdenv.cc.cc
pkgs.zlib
pkgs.zstd
pkgs.rustToolchain
]
++ lib.optionals pkgs.stdenv.hostPlatform.isLinux [
pkgs.alsa-lib
pkgs.libxkbcommon
pkgs.wayland
pkgs.xorg.libxcb
pkgs.vulkan-loader
]
++ lib.optional pkgs.stdenv.hostPlatform.isDarwin pkgs.apple-sdk_15;
# `build.nix` adds this to the `zed-editor` wrapper (see `postFixup`)
# we'll just put it on `$PATH`:
nodejs_22
];
LD_LIBRARY_PATH = lib.makeLibraryPath buildInputs;
# We set SDKROOT and DEVELOPER_DIR to the Xcode ones instead of the nixpkgs ones, because
# we need Swift 6.0 and nixpkgs doesn't have it
shellHook = lib.optionalString stdenv.hostPlatform.isDarwin ''
export SDKROOT="/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk";
export DEVELOPER_DIR="/Applications/Xcode.app/Contents/Developer";
'';
PROTOC="${pkgs.protobuf}/bin/protoc";
# We set SDKROOT and DEVELOPER_DIR to the Xcode ones instead of the nixpkgs ones,
# because we need Swift 6.0 and nixpkgs doesn't have it.
# Xcode is required for development anyway
shellHook = lib.optionalString pkgs.stdenv.hostPlatform.isDarwin ''
export SDKROOT="/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk";
export DEVELOPER_DIR="/Applications/Xcode.app/Contents/Developer";
'';
FONTCONFIG_FILE = pkgs.makeFontsConf {
fontDirectories = [
"./assets/fonts/zed-mono"
"./assets/fonts/zed-sans"
];
};
ZSTD_SYS_USE_PKG_CONFIG = true;
env =
let
baseEnvs =
(zed-editor.overrideAttrs (attrs: {
passthru = { inherit (attrs) env; };
})).env; # exfil `env`; it's not in drvAttrs
in
# unsetting this var so we download the staticlib during the build
(removeAttrs baseEnvs [ "LK_CUSTOM_WEBRTC" ])
// {
# note: unlike `$FONTCONFIG_FILE` in `build.nix`, this refers to relative paths
# outside the nix store rather than to `$src`
FONTCONFIG_FILE = makeFontsConf {
fontDirectories = [
"./assets/fonts/plex-mono"
"./assets/fonts/plex-sans"
];
};
PROTOC = "${protobuf}/bin/protoc";
};
}

View File

@@ -2,4 +2,11 @@
channel = "1.85"
profile = "minimal"
components = [ "rustfmt", "clippy" ]
targets = [ "x86_64-apple-darwin", "aarch64-apple-darwin", "x86_64-unknown-linux-gnu", "wasm32-wasip1", "x86_64-pc-windows-msvc" ]
targets = [
"x86_64-apple-darwin",
"aarch64-apple-darwin",
"x86_64-unknown-linux-gnu",
"x86_64-pc-windows-msvc",
"wasm32-wasip1", # extensions
"x86_64-unknown-linux-musl", # remote server
]

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -e

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -e

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
channel=$(cat crates/zed/RELEASE_CHANNEL)

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -exuo pipefail

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# Notes for fixing this script if it's broken:
# - if you see an error about "can't find perf_6.1" you need to install `linux-perf` from the

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -e

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu
source script/lib/deploy-helpers.sh

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu
source script/lib/deploy-helpers.sh

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
databases=$(psql --tuples-only --command "
SELECT

View File

@@ -2,11 +2,11 @@
set -euo pipefail
CARGO_ABOUT_VERSION="0.6.6"
CARGO_ABOUT_VERSION="0.6"
OUTPUT_FILE="${1:-$(pwd)/assets/licenses.md}"
TEMPLATE_FILE="script/licenses/template.md.hbs"
echo -n "" > "$OUTPUT_FILE"
echo -n "" >"$OUTPUT_FILE"
{
echo -e "# ###### THEME LICENSES ######\n"
@@ -16,21 +16,24 @@ echo -n "" > "$OUTPUT_FILE"
cat assets/icons/LICENSES
echo -e "\n# ###### CODE LICENSES ######\n"
} >> "$OUTPUT_FILE"
} >>"$OUTPUT_FILE"
if ! cargo install --list | grep "cargo-about v$CARGO_ABOUT_VERSION" > /dev/null; then
echo "Installing cargo-about@$CARGO_ABOUT_VERSION..."
cargo install "cargo-about@$CARGO_ABOUT_VERSION"
if ! cargo about --version | grep "cargo-about $CARGO_ABOUT_VERSION" >/dev/null; then
echo "Installing cargo-about@^$CARGO_ABOUT_VERSION..."
cargo install "cargo-about@^$CARGO_ABOUT_VERSION"
else
echo "cargo-about@$CARGO_ABOUT_VERSION is already installed."
echo "cargo-about@^$CARGO_ABOUT_VERSION is already installed."
fi
echo "Generating cargo licenses"
if [ -z "${ALLOW_MISSING_LICENSES-}" ]; then FAIL_FLAG=--fail; else FAIL_FLAG=""; fi
set -x
cargo about generate \
--fail \
$FAIL_FLAG \
--frozen \
-c script/licenses/zed-licenses.toml \
"$TEMPLATE_FILE" >> "$OUTPUT_FILE"
"$TEMPLATE_FILE" >>"$OUTPUT_FILE"
set +x
sed -i.bak 's/&quot;/"/g' "$OUTPUT_FILE"
sed -i.bak 's/&#x27;/'\''/g' "$OUTPUT_FILE" # The ` '\'' ` thing ends the string, appends a single quote, and re-opens the string

View File

@@ -2,24 +2,26 @@
set -euo pipefail
CARGO_ABOUT_VERSION="0.6.6"
CARGO_ABOUT_VERSION="0.6"
OUTPUT_FILE="${1:-$(pwd)/assets/licenses.csv}"
TEMPLATE_FILE="script/licenses/template.csv.hbs"
if ! cargo install --list | grep "cargo-about v$CARGO_ABOUT_VERSION" > /dev/null; then
echo "Installing cargo-about@$CARGO_ABOUT_VERSION..."
cargo install "cargo-about@$CARGO_ABOUT_VERSION"
if ! cargo about --version | grep "cargo-about $CARGO_ABOUT_VERSION" > /dev/null; then
echo "Installing cargo-about@^$CARGO_ABOUT_VERSION..."
cargo install "cargo-about@^$CARGO_ABOUT_VERSION"
else
echo "cargo-about@$CARGO_ABOUT_VERSION is already installed."
echo "cargo-about@^$CARGO_ABOUT_VERSION is already installed."
fi
echo "Generating cargo licenses"
set -x
cargo about generate \
--fail \
--frozen \
-c script/licenses/zed-licenses.toml \
"$TEMPLATE_FILE" \
$TEMPLATE_FILE \
| awk 'NR==1{print;next} NF{print | "sort"}' \
> "$OUTPUT_FILE"
set +x
echo "generate-licenses-csv completed. See $OUTPUT_FILE"

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -e

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
channel="$1"

View File

@@ -1,3 +1,3 @@
#!/bin/bash
#!/usr/bin/env bash
cargo run -p theme_importer -- "$@"

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
if [[ $# -ne 1 ]]; then
echo "Usage: $0 [production|staging|...]"

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
export GPUTOOLS_LOAD_GTMTLCAPTURE=1
export DYLD_LIBRARY_PATH="/usr/lib/system/introspection"

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# Try to make sure we are in the zed repo root
if [ ! -d "crates" ] || [ ! -d "script" ]; then

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# This script manages prompt overrides for the Zed editor.
#

View File

@@ -1,4 +1,6 @@
#!/bin/bash -e
#!/usr/bin/env bash
set -e
which minio > /dev/null || (echo "installing minio..."; brew install minio/stable/minio)
mkdir -p .blob_store/the-extensions-bucket

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -e
cargo run -p collab migrate

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
if [ -z "$1" ]; then
cargo run -p storybook

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -euo pipefail
which gh >/dev/null || brew install gh

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu
source script/lib/deploy-helpers.sh

View File

@@ -1,14 +1,11 @@
(
import
(
let
lock = builtins.fromJSON (builtins.readFile ./flake.lock);
in
fetchTarball {
url = lock.nodes.flake-compat.locked.url or "https://github.com/edolstra/flake-compat/archive/${lock.nodes.flake-compat.locked.rev}.tar.gz";
sha256 = lock.nodes.flake-compat.locked.narHash;
}
)
{src = ./.;}
)
.shellNix
(import (
let
lock = builtins.fromJSON (builtins.readFile ./flake.lock);
in
fetchTarball {
url =
lock.nodes.flake-compat.locked.url
or "https://github.com/edolstra/flake-compat/archive/${lock.nodes.flake-compat.locked.rev}.tar.gz";
sha256 = lock.nodes.flake-compat.locked.narHash;
}
) { src = ./.; }).shellNix