Compare commits

..

29 Commits

Author SHA1 Message Date
Nate Butler
d95267b88a sfsa
Co-authored-by: Cole Miller <m@cole-miller.net>
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
2025-03-10 15:38:03 -04:00
Nate Butler
c7641b31e0 wip hscroll
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
Co-authored-by: Cole Miller <m@cole-miller.net>
2025-03-10 15:35:09 -04:00
Nate Butler
18c41e79d8 Remove unused imports 2025-03-10 12:16:24 -04:00
Nate Butler
2367ffdd49 Remove version control background colors from theme system 2025-03-10 12:12:17 -04:00
Nate Butler
64552b964c Use disabled color for branch icon when in a single repo 2025-03-10 12:03:40 -04:00
Nate Butler
5779773232 Fix vertically clipped labels on buttons 2025-03-10 11:59:42 -04:00
Nate Butler
8afa3638a4 Truncate last commit message less aggressively in panel 2025-03-10 11:57:35 -04:00
Nate Butler
66bb55087e Trying a really long commit message to see if the previous commit message ui is truncating correctly now. 2025-03-10 11:49:46 -04:00
Nate Butler
1918c72d83 Some header tidying 2025-03-10 11:47:40 -04:00
Nate Butler
8e890c8520 Fix alignment of checkboxes with section titles 2025-03-10 11:32:08 -04:00
Nate Butler
c741298fff Use a component for GitStatusIcon, add preview 2025-03-10 11:17:16 -04:00
Nate Butler
446e64fde7 Remove log, update spacing 2025-03-10 11:16:58 -04:00
Nate Butler
c01a039853 WIP: Allow staging a section when shift+clicking any checkbox in the list 2025-03-10 09:48:08 -04:00
Nate Butler
980f78524e Ensure checkbox placeholder is centered, minor checkbox refinements 2025-03-10 09:30:32 -04:00
Danilo Leal
ee05cc3ad9 Add a line numbers toggle to the editor controls menu (#26318)
Closes https://github.com/zed-industries/zed/issues/26305

<img
src="https://github.com/user-attachments/assets/795029ad-128a-471f-9adf-c0ef26319bbf"
width="400px" />

Release Notes:

- N/A
2025-03-10 09:28:46 -03:00
Smit Barmase
5ed144f9d2 macOS: Add support for external file managers to open directory in Zed (#26357)
Closes #25421

This PR adds support for external file managers to show Zed as an option
in the "Open With" context menu for directories on macOS.

<img width="350" alt="image"
src="https://github.com/user-attachments/assets/c52acd48-73c4-47be-8683-6950e0371b73"
/>


Release Notes:

- Added support for opening folders in Zed from third-party macOS file
managers like Path Finder and Super Charge through their "Open With"
menu.
2025-03-10 15:21:39 +05:30
Julia Ryan
2a862b3c54 nix: Disable checks and remove crane workaround (#26356)
The checkPhase was failing for me on darwin, so I turned it off. I think
eventually we'll want to use a separate derivation for tests (which
crane has a helper for).

Crane also solved our issue with spaces in paths so I bumped the flake
to pick up that fix and removed our workaround: ipetkov/crane#808.

Release Notes:

- N/A
2025-03-10 02:00:57 -07:00
Julia Ryan
4a7c84f490 Fix nix build (#26270)
This PR includes lots of small fixes to get our `build.nix` and
`shell.nix` back to a working state.

I've tested this by running `cargo run` (inside the devshell) and `nix
run` on x86 nixos and arm64 darwin machines. I'd appreciate it if others
could test building inside the devshell to double-check that it's not
just working because I happen to have some system-level packages
installed, as well as seeing if it works on other platforms (non-nixos
linux, arm linux, x86 darwin).

I couldn't get the full test suite (`cargo nextest run --workspace`)
passing in the devshell on darwin, but the tests _are_ all passing on nixos.
nixpkgs [disables some of our
tests](92d11f06d5/pkgs/by-name/ze/zed-editor/package.nix (L226-L234))
that apparently fail or are flaky on hydra, but they don't know why.
I'm going to punt on debugging those for now, especially given that they
seem to be working for me. I'm also unsure of whether we actually want
the nix checkPhase to run the full test suite (it's currently not
passing `--workspace`) given that we have separate CI that should
enforce that those pass on all PRs.

Here's an overview of the changes made:
- Fix our `generate-licenses` script
- Relaxes the `cargo-about` version requirement slightly so it doesn't
try to install an older binary when the nixpkgs one is newer than our
requirement
- Add a workaround for [this cargo-about
issue](https://github.com/zed-industries/zed/issues/19971) obviating the
need for the patching done in the nixpkgs package
- Set the new `--frozen` flag to avoid network access/mutating the
lockfile
- Use the dynamic webrtc lib from nixpkgs, and fix up the build script in
webrtc-sys that hardcodes it to be statically linked.
- Use `inputsFrom` in `shell.nix` and avoid duplicating everything from
`build.nix`
- Add a temporary workaround for an [upstream crane
bug](https://github.com/ipetkov/crane/issues/808).
- Fix shebangs in our `script` dir to not hard-code `/bin/bash`

There are still a bunch of issues that aren't resolved here; I'll make a
tracking issue for those and try to land this first just to get back to
an unbroken state. Eventually, among other things, I'd like to use a
`libgit2` from `staticPkgs` and musl cross compilation to build the
remote server under nix, and then add that as a separate flake output
and include it in the shell's `inputsFrom` list.

Thanks @niklaskorz, @GaetanLepage, @bbigras and all the other nixpkgs
maintainers that have kept the `zed-editor` package working and up to
date! I seriously considered just making our flake `overrideAttrs` the
package in nixpkgs given how well maintained it is.

Thanks @WeetHet for your volunteer maintenance of this flake. I
referenced #24953 while working on these fixes, and I'd love to
collaborate on adding some of those pieces like treefmt and a github
action. If you're interested I'd really appreciate some help debugging
why crane's `buildDepsOnly` isn't working for us. I'm assuming it'd make
our `nix build` times go way down from the improved dep caching if we
could get it working.

Thanks @rrbutani for all the help on this PR 💙.

Release Notes:

- N/A

---------

Co-authored-by: Rahul Butani <rrbutani@users.noreply.github.com>
Co-authored-by: Rahul Butani <rr.butani@gmail.com>
2025-03-10 01:06:11 -07:00
Mikayla Maki
230e2e4107 Restore git panel header (#26354)
Let's play around with it. This should not be added to tomorrow's
preview.

Release Notes:

- Git Beta: Added a panel header with an open diff and stage/unstage all
buttons.
2025-03-10 07:08:10 +00:00
Richard Hao
d732b8ba0f git: Disable commit message generation when commit not possible (#26329)
## Issue:

- `Generate Commit Message` will generate a random message if there are
no changes.
<img width="614" alt="image"
src="https://github.com/user-attachments/assets/c16cadac-01af-47c0-a2db-a5bbf62f84bb"
/>


## After the fix:

- `Generate Commit Message` will be disabled if a commit is not possible
<img width="610" alt="image"
src="https://github.com/user-attachments/assets/5ea9ca70-6fa3-4144-ab4e-be7a986d5496"
/>


## Release Notes:

- Fixed: Disabled commit message generation when a commit is not possible
2025-03-09 23:45:25 -07:00
Michael Sloan
7c3eecc9c7 Add support for querying file outline in assistant script (#26351)
Release Notes:

- N/A
2025-03-10 05:26:17 +00:00
Max Brunsfeld
fff37ab823 Follow-up fixes for recent multi buffer optimizations (#26345)
I realized that the optimization broke multi buffer syncing after buffer
reparses.

Release Notes:

- N/A
2025-03-09 22:15:38 -07:00
Kirill Bulatov
8a7a78fafb Avoid modifying the LSP message before resolving it (#26347)
Closes https://github.com/zed-industries/zed/issues/21277

On the left is current Zed; on the right is the improved version.
The 3rd message, sent by Zed to resolve the item, has no `textEdit` on the
right side, but does have one on the left.
This doesn't seem to influence the end result, but at least Zed now behaves
more appropriately.

<img width="1727" alt="image"
src="https://github.com/user-attachments/assets/ca1236fd-9ce2-41ba-88fe-1f3178cdcbde"
/>


Instead of modifying the original LSP completion item, store completion
list defaults and apply them when the item is requested (except `data`
defaults, needed for resolve).
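
As a rough illustration of that approach (hypothetical minimal types, not Zed's or the LSP crate's actual structs), the defaults can be merged into a copy of the item right when the editor needs it, leaving the stored item untouched for the later resolve request:

```rust
#[derive(Clone, Default)]
struct CompletionItem {
    insert_text_format: Option<String>,
    commit_characters: Option<Vec<String>>,
}

#[derive(Clone, Default)]
struct CompletionListDefaults {
    insert_text_format: Option<String>,
    commit_characters: Option<Vec<String>>,
}

/// Merge list-level defaults into a copy of the item at the moment the editor
/// needs it; the stored item stays untouched, so `completionItem/resolve`
/// receives exactly what the server originally sent.
fn apply_defaults(item: &CompletionItem, defaults: &CompletionListDefaults) -> CompletionItem {
    let mut merged = item.clone();
    if merged.insert_text_format.is_none() {
        merged.insert_text_format = defaults.insert_text_format.clone();
    }
    if merged.commit_characters.is_none() {
        merged.commit_characters = defaults.commit_characters.clone();
    }
    // `data` defaults are deliberately not merged here; per the description
    // above, they are handled separately because the resolve request needs them.
    merged
}
```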

Now, the only place that can modify the completion items is this method,
and the Python implementation seems to be the one doing it:


ca9c3af56f/crates/languages/src/python.rs (L182-L204)

Seems ok to leave untouched for now.

Release Notes:

- Fixed LSP completion items modified before resolve request
2025-03-10 00:12:53 +02:00
Ben Kunkle
6de3ac3e17 Revert "Highlight super and this as keywords in JS/TS/TSX" (#26342)
Reverts zed-industries/zed#25135

This approach was not the best, as explained in the response to the
original PR. Likely, the better approach is to create a new, more specific
scope for these kinds of variables under the `@variable` prefix so that
themes can control these pseudo-keywords specifically.
2025-03-09 16:11:37 +00:00
Smit Barmase
5aae3bdc69 copilot: Fix missing sign-out button when Zed is the edit prediction provider (#26340)
Closes #25884

Added a sign-out button for Copilot in Assistant settings, allowing
sign-out even when Copilot is disabled.

<img width="500" alt="image"
src="https://github.com/user-attachments/assets/43fc97ad-f73c-49e1-a7b6-a3910434d661"
/>



Release Notes:

- Added a sign-out button for Copilot in Assistant settings.
2025-03-09 21:39:14 +05:30
Agus Zubiaga
e298301b40 assistant: Make scripting a first-class concept instead of a tool (#26338)
This PR refactors the scripting functionality into a first-class concept
of the assistant instead of a generic tool, which will allow us to build a
more customized experience.

- The tool prompt has been slightly tweaked and is now included as a
system message in all conversations. I'm getting decent results, but now
that it isn't in the tools framework, it will probably require more
refining.

- The model will now include an `<eval ...>` tag at the end of the
message with the script. We parse this tag incrementally as it streams
in so that we can indicate that we are generating a script before we see
the closing `</eval>` tag. Later, this will also help us interpret the
script as it arrives (see the sketch after this list).

- Threads now hold a `ScriptSession` entity which manages the state of
all scripts (from parsing to exited) in a centralized way, and will
later collect all script operations so they can be displayed in the UI.

- `script_tool` has been renamed to `assistant_scripting` 

- Script source now opens in a regular read-only buffer  
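
As a rough sketch of that incremental detection (not the actual `ScriptTagParser`, whose full API isn't shown here), something like the following buffers streamed chunks and switches into script mode as soon as the opening tag is complete:

```rust
/// Rough sketch only: buffers streamed chunks and flips into "script" mode as
/// soon as the opening `<eval ...>` tag is complete, even if the tag was split
/// across chunks. Stripping the trailing `</eval>` is omitted for brevity.
struct EvalTagScanner {
    pending: String,       // prose seen before the opening tag completes
    script_source: String, // script text accumulated after `<eval ...>`
    in_script: bool,
}

impl EvalTagScanner {
    fn new() -> Self {
        Self {
            pending: String::new(),
            script_source: String::new(),
            in_script: false,
        }
    }

    fn push(&mut self, chunk: &str) {
        if self.in_script {
            self.script_source.push_str(chunk);
            return;
        }
        self.pending.push_str(chunk);
        if let Some(start) = self.pending.find("<eval") {
            // Only switch modes once the tag's closing `>` has arrived.
            if let Some(end) = self.pending[start..].find('>') {
                self.script_source.push_str(&self.pending[start + end + 1..]);
                self.pending.truncate(start);
                self.in_script = true;
            }
        }
    }

    /// True before `</eval>` arrives, so the UI can already show "Generating".
    fn found_script(&self) -> bool {
        self.in_script
    }
}
```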

Note: We still need to handle persistence properly

Release Notes:

- N/A

---------

Co-authored-by: Marshall Bowers <git@maxdeviant.com>
2025-03-09 09:01:49 +00:00
brian tan
ed6bf7f161 diagnostics: Fix losing focus when activating from diagnostics view (#25517)
Closes #25509

Changes:
- If the active item is already the diagnostics view, don't try to focus it again.

Instead of skipping the focus, should it just not activate at all? Something
like:

    if !workspace
        .active_item(cx)
        .map(|item| item.item_id() == existing.item_id())
        .unwrap_or(false)
    {
        workspace.activate_item(&existing, true, true, window, cx);
    }


Release Notes:

- N/A
2025-03-08 22:17:20 +00:00
Smit Barmase
f14d6670ba copilot: Fix onboarding into Copilot requires Zed restart (#26330)
Closes #25594

This PR fixes an issue where signing into Copilot required restarting
Zed.

Copilot depends on an OAuth token that comes from either `hosts.json` or
`apps.json`. Initially, neither file exists. If neither file is
found, we fall back to watching `hosts.json` for updates. However, if the
auth process creates `apps.json`, we won't receive updates from it,
causing the UI to remain outdated.

This PR fixes that by watching the parent `github-copilot` directory
instead, which will always contain one of those files along with an
additional version file.

I have tested this on macOS and Linux Wayland.
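
To illustrate why directory-level watching fixes this, here's a minimal sketch using the `notify` crate (an assumption for the example; it's not necessarily the watcher Zed uses internally):

```rust
use std::path::Path;

use notify::{recommended_watcher, Event, RecommendedWatcher, RecursiveMode, Watcher};

// Watch the parent `github-copilot` directory rather than a specific file, so
// we get an event no matter which of `hosts.json` or `apps.json` the auth
// flow ends up creating.
fn watch_copilot_config(config_dir: &Path) -> notify::Result<RecommendedWatcher> {
    let mut watcher = recommended_watcher(|res: notify::Result<Event>| {
        if let Ok(event) = res {
            let relevant = event.paths.iter().any(|path| {
                matches!(
                    path.file_name().and_then(|name| name.to_str()),
                    Some("hosts.json") | Some("apps.json")
                )
            });
            if relevant {
                // Re-read the OAuth token and refresh the sign-in UI here.
                println!("copilot auth file changed: {:?}", event.paths);
            }
        }
    })?;
    // Watch the directory itself, which (per the description above) always
    // exists, unlike the individual json files.
    watcher.watch(config_dir, RecursiveMode::NonRecursive)?;
    Ok(watcher)
}
```

The returned watcher has to be kept alive by the caller for events to keep arriving.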

Release Notes:

- Fixed an issue where signing into Copilot required restarting Zed.
2025-03-09 03:19:09 +05:30
Joseph T. Lyons
22d9b5d8ca Update key binding documentation (#26321)
Release Notes:

- N/A
2025-03-08 01:03:52 -05:00
81 changed files with 2084 additions and 2068 deletions

Cargo.lock (generated)

@@ -84,7 +84,7 @@ dependencies = [
[[package]]
name = "alacritty_terminal"
version = "0.25.1-dev"
source = "git+https://github.com/zed-industries/alacritty.git?rev=03c2907b44b4189aac5fdeaea331f5aab5c7072e#03c2907b44b4189aac5fdeaea331f5aab5c7072e"
source = "git+https://github.com/zed-industries/alacritty.git?branch=add-hush-login-flag#828457c9ff1f7ea0a0469337cc8a37ee3a1b0590"
dependencies = [
"base64 0.22.1",
"bitflags 2.8.0",
@@ -450,6 +450,7 @@ version = "0.1.0"
dependencies = [
"anyhow",
"assistant_context_editor",
"assistant_scripting",
"assistant_settings",
"assistant_slash_command",
"assistant_tool",
@@ -563,6 +564,26 @@ dependencies = [
"workspace",
]
[[package]]
name = "assistant_scripting"
version = "0.1.0"
dependencies = [
"anyhow",
"collections",
"futures 0.3.31",
"gpui",
"log",
"mlua",
"parking_lot",
"project",
"rand 0.8.5",
"regex",
"serde",
"serde_json",
"settings",
"util",
]
[[package]]
name = "assistant_settings"
version = "0.1.0"
@@ -2300,7 +2321,7 @@ dependencies = [
"cap-primitives",
"cap-std",
"io-lifetimes",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -2328,7 +2349,7 @@ dependencies = [
"ipnet",
"maybe-owned",
"rustix",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
"winx",
]
@@ -4402,7 +4423,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "33d852cb9b869c2a9b3df2f71a3074817f01e1844f839a144f5fcef059a4eb5d"
dependencies = [
"libc",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -5064,7 +5085,7 @@ checksum = "5e2e6123af26f0f2c51cc66869137080199406754903cc926a7690401ce09cb4"
dependencies = [
"io-lifetimes",
"rustix",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -6709,7 +6730,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2285ddfe3054097ef4b2fe909ef8c3bcd1ea52a8f0d274416caebeef39f04a65"
dependencies = [
"io-lifetimes",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -10734,7 +10755,7 @@ dependencies = [
"once_cell",
"socket2",
"tracing",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -11656,7 +11677,7 @@ dependencies = [
"libc",
"linux-raw-sys",
"once_cell",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -11910,27 +11931,6 @@ version = "1.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a3cf7c11c38cb994f3d40e8a8cde3bbd1f72a435e4c49e85d6553d8312306152"
[[package]]
name = "scripting_tool"
version = "0.1.0"
dependencies = [
"anyhow",
"assistant_tool",
"collections",
"futures 0.3.31",
"gpui",
"mlua",
"parking_lot",
"project",
"regex",
"schemars",
"serde",
"serde_json",
"settings",
"shlex",
"util",
]
[[package]]
name = "scrypt"
version = "0.11.0"
@@ -13427,7 +13427,7 @@ dependencies = [
"fd-lock",
"io-lifetimes",
"rustix",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
"winx",
]
@@ -13567,7 +13567,7 @@ dependencies = [
"getrandom 0.3.1",
"once_cell",
"rustix",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -15913,7 +15913,7 @@ version = "0.1.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb"
dependencies = [
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -16378,7 +16378,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3f3fd376f71958b862e7afb20cfe5a22830e1963462f3a17f49d82a6c1d1f42d"
dependencies = [
"bitflags 2.8.0",
"windows-sys 0.59.0",
"windows-sys 0.52.0",
]
[[package]]
@@ -16985,7 +16985,6 @@ dependencies = [
"repl",
"reqwest_client",
"rope",
"scripting_tool",
"search",
"serde",
"serde_json",


@@ -118,7 +118,7 @@ members = [
"crates/rope",
"crates/rpc",
"crates/schema_generator",
"crates/scripting_tool",
"crates/assistant_scripting",
"crates/search",
"crates/semantic_index",
"crates/semantic_version",
@@ -318,7 +318,7 @@ reqwest_client = { path = "crates/reqwest_client" }
rich_text = { path = "crates/rich_text" }
rope = { path = "crates/rope" }
rpc = { path = "crates/rpc" }
scripting_tool = { path = "crates/scripting_tool" }
assistant_scripting = { path = "crates/assistant_scripting" }
search = { path = "crates/search" }
semantic_index = { path = "crates/semantic_index" }
semantic_version = { path = "crates/semantic_version" }
@@ -370,7 +370,7 @@ zeta = { path = "crates/zeta" }
#
aho-corasick = "1.1"
alacritty_terminal = { git = "https://github.com/zed-industries/alacritty.git", rev = "03c2907b44b4189aac5fdeaea331f5aab5c7072e" }
alacritty_terminal = { git = "https://github.com/zed-industries/alacritty.git", branch = "add-hush-login-flag" }
any_vec = "0.14"
anyhow = "1.0.86"
arrayvec = { version = "0.7.4", features = ["serde"] }


@@ -63,6 +63,7 @@ serde.workspace = true
serde_json.workspace = true
settings.workspace = true
smol.workspace = true
assistant_scripting.workspace = true
streaming_diff.workspace = true
telemetry_events.workspace = true
terminal.workspace = true


@@ -1,11 +1,12 @@
use std::sync::Arc;
use collections::HashMap;
use assistant_scripting::{ScriptId, ScriptState};
use collections::{HashMap, HashSet};
use editor::{Editor, MultiBuffer};
use gpui::{
list, AbsoluteLength, AnyElement, App, ClickEvent, DefiniteLength, EdgesRefinement, Empty,
Entity, Focusable, Length, ListAlignment, ListOffset, ListState, StyleRefinement, Subscription,
Task, TextStyleRefinement, UnderlineStyle,
Task, TextStyleRefinement, UnderlineStyle, WeakEntity,
};
use language::{Buffer, LanguageRegistry};
use language_model::{LanguageModelRegistry, LanguageModelToolUseId, Role};
@@ -14,6 +15,7 @@ use settings::Settings as _;
use theme::ThemeSettings;
use ui::{prelude::*, Disclosure, KeyBinding};
use util::ResultExt as _;
use workspace::Workspace;
use crate::thread::{MessageId, RequestKind, Thread, ThreadError, ThreadEvent};
use crate::thread_store::ThreadStore;
@@ -21,6 +23,7 @@ use crate::tool_use::{ToolUse, ToolUseStatus};
use crate::ui::ContextPill;
pub struct ActiveThread {
workspace: WeakEntity<Workspace>,
language_registry: Arc<LanguageRegistry>,
thread_store: Entity<ThreadStore>,
thread: Entity<Thread>,
@@ -30,6 +33,7 @@ pub struct ActiveThread {
rendered_messages_by_id: HashMap<MessageId, Entity<Markdown>>,
editing_message: Option<(MessageId, EditMessageState)>,
expanded_tool_uses: HashMap<LanguageModelToolUseId, bool>,
expanded_scripts: HashSet<ScriptId>,
last_error: Option<ThreadError>,
_subscriptions: Vec<Subscription>,
}
@@ -40,6 +44,7 @@ struct EditMessageState {
impl ActiveThread {
pub fn new(
workspace: WeakEntity<Workspace>,
thread: Entity<Thread>,
thread_store: Entity<ThreadStore>,
language_registry: Arc<LanguageRegistry>,
@@ -52,6 +57,7 @@ impl ActiveThread {
];
let mut this = Self {
workspace,
language_registry,
thread_store,
thread: thread.clone(),
@@ -59,6 +65,7 @@ impl ActiveThread {
messages: Vec::new(),
rendered_messages_by_id: HashMap::default(),
expanded_tool_uses: HashMap::default(),
expanded_scripts: HashSet::default(),
list_state: ListState::new(0, ListAlignment::Bottom, px(1024.), {
let this = cx.entity().downgrade();
move |ix, window: &mut Window, cx: &mut App| {
@@ -241,7 +248,7 @@ impl ActiveThread {
fn handle_thread_event(
&mut self,
_: &Entity<Thread>,
_thread: &Entity<Thread>,
event: &ThreadEvent,
window: &mut Window,
cx: &mut Context<Self>,
@@ -306,6 +313,14 @@ impl ActiveThread {
}
}
}
ThreadEvent::ScriptFinished => {
let model_registry = LanguageModelRegistry::read_global(cx);
if let Some(model) = model_registry.active_model() {
self.thread.update(cx, |thread, cx| {
thread.send_to_model(model, RequestKind::Chat, false, cx);
});
}
}
}
}
@@ -445,12 +460,16 @@ impl ActiveThread {
return Empty.into_any();
};
let context = self.thread.read(cx).context_for_message(message_id);
let tool_uses = self.thread.read(cx).tool_uses_for_message(message_id);
let colors = cx.theme().colors();
let thread = self.thread.read(cx);
let context = thread.context_for_message(message_id);
let tool_uses = thread.tool_uses_for_message(message_id);
// Don't render user messages that are just there for returning tool results.
if message.role == Role::User && self.thread.read(cx).message_has_tool_results(message_id) {
if message.role == Role::User
&& (thread.message_has_tool_results(message_id)
|| thread.message_has_script_output(message_id))
{
return Empty.into_any();
}
@@ -463,6 +482,8 @@ impl ActiveThread {
.filter(|(id, _)| *id == message_id)
.map(|(_, state)| state.editor.clone());
let colors = cx.theme().colors();
let message_content = v_flex()
.child(
if let Some(edit_message_editor) = edit_message_editor.clone() {
@@ -597,6 +618,7 @@ impl ActiveThread {
Role::Assistant => div()
.id(("message-container", ix))
.child(message_content)
.children(self.render_script(message_id, cx))
.map(|parent| {
if tool_uses.is_empty() {
return parent;
@@ -716,6 +738,139 @@ impl ActiveThread {
}),
)
}
fn render_script(&self, message_id: MessageId, cx: &mut Context<Self>) -> Option<AnyElement> {
let script = self.thread.read(cx).script_for_message(message_id, cx)?;
let is_open = self.expanded_scripts.contains(&script.id);
let colors = cx.theme().colors();
let element = div().px_2p5().child(
v_flex()
.gap_1()
.rounded_lg()
.border_1()
.border_color(colors.border)
.child(
h_flex()
.justify_between()
.py_0p5()
.pl_1()
.pr_2()
.bg(colors.editor_foreground.opacity(0.02))
.when(is_open, |element| element.border_b_1().rounded_t(px(6.)))
.when(!is_open, |element| element.rounded_md())
.border_color(colors.border)
.child(
h_flex()
.gap_1()
.child(Disclosure::new("script-disclosure", is_open).on_click(
cx.listener({
let script_id = script.id;
move |this, _event, _window, _cx| {
if this.expanded_scripts.contains(&script_id) {
this.expanded_scripts.remove(&script_id);
} else {
this.expanded_scripts.insert(script_id);
}
}
}),
))
// TODO: Generate script description
.child(Label::new("Script")),
)
.child(
h_flex()
.gap_1()
.child(
Label::new(match script.state {
ScriptState::Generating => "Generating",
ScriptState::Running { .. } => "Running",
ScriptState::Succeeded { .. } => "Finished",
ScriptState::Failed { .. } => "Error",
})
.size(LabelSize::XSmall)
.buffer_font(cx),
)
.child(
IconButton::new("view-source", IconName::Eye)
.icon_color(Color::Muted)
.disabled(matches!(script.state, ScriptState::Generating))
.on_click(cx.listener({
let source = script.source.clone();
move |this, _event, window, cx| {
this.open_script_source(source.clone(), window, cx);
}
})),
),
),
)
.when(is_open, |parent| {
let stdout = script.stdout_snapshot();
let error = script.error();
parent.child(
v_flex()
.p_2()
.bg(colors.editor_background)
.gap_2()
.child(if stdout.is_empty() && error.is_none() {
Label::new("No output yet")
.size(LabelSize::Small)
.color(Color::Muted)
} else {
Label::new(stdout).size(LabelSize::Small).buffer_font(cx)
})
.children(script.error().map(|err| {
Label::new(err.to_string())
.size(LabelSize::Small)
.color(Color::Error)
})),
)
}),
);
Some(element.into_any())
}
fn open_script_source(
&mut self,
source: SharedString,
window: &mut Window,
cx: &mut Context<'_, ActiveThread>,
) {
let language_registry = self.language_registry.clone();
let workspace = self.workspace.clone();
let source = source.clone();
cx.spawn_in(window, |_, mut cx| async move {
let lua = language_registry.language_for_name("Lua").await.log_err();
workspace.update_in(&mut cx, |workspace, window, cx| {
let project = workspace.project().clone();
let buffer = project.update(cx, |project, cx| {
project.create_local_buffer(&source.trim(), lua, cx)
});
let buffer = cx.new(|cx| {
MultiBuffer::singleton(buffer, cx)
// TODO: Generate script description
.with_title("Assistant script".into())
});
let editor = cx.new(|cx| {
let mut editor =
Editor::for_multibuffer(buffer, Some(project), true, window, cx);
editor.set_read_only(true);
editor
});
workspace.add_item_to_active_pane(Box::new(editor), None, true, window, cx);
})
})
.detach_and_log_err(cx);
}
}
impl Render for ActiveThread {


@@ -166,22 +166,25 @@ impl AssistantPanel {
let history_store =
cx.new(|cx| HistoryStore::new(thread_store.clone(), context_store.clone(), cx));
let thread = cx.new(|cx| {
ActiveThread::new(
workspace.clone(),
thread.clone(),
thread_store.clone(),
language_registry.clone(),
window,
cx,
)
});
Self {
active_view: ActiveView::Thread,
workspace,
project: project.clone(),
fs: fs.clone(),
language_registry: language_registry.clone(),
language_registry,
thread_store: thread_store.clone(),
thread: cx.new(|cx| {
ActiveThread::new(
thread.clone(),
thread_store.clone(),
language_registry,
window,
cx,
)
}),
thread,
message_editor,
context_store,
context_editor: None,
@@ -239,6 +242,7 @@ impl AssistantPanel {
self.active_view = ActiveView::Thread;
self.thread = cx.new(|cx| {
ActiveThread::new(
self.workspace.clone(),
thread.clone(),
self.thread_store.clone(),
self.language_registry.clone(),
@@ -372,6 +376,7 @@ impl AssistantPanel {
this.active_view = ActiveView::Thread;
this.thread = cx.new(|cx| {
ActiveThread::new(
this.workspace.clone(),
thread.clone(),
this.thread_store.clone(),
this.language_registry.clone(),


@@ -1,11 +1,14 @@
use std::sync::Arc;
use anyhow::Result;
use assistant_scripting::{
Script, ScriptEvent, ScriptId, ScriptSession, ScriptTagParser, SCRIPTING_PROMPT,
};
use assistant_tool::ToolWorkingSet;
use chrono::{DateTime, Utc};
use collections::{BTreeMap, HashMap, HashSet};
use futures::StreamExt as _;
use gpui::{App, Context, Entity, EventEmitter, SharedString, Task};
use gpui::{App, AppContext, Context, Entity, EventEmitter, SharedString, Subscription, Task};
use language_model::{
LanguageModel, LanguageModelCompletionEvent, LanguageModelRegistry, LanguageModelRequest,
LanguageModelRequestMessage, LanguageModelRequestTool, LanguageModelToolResult,
@@ -75,14 +78,21 @@ pub struct Thread {
project: Entity<Project>,
tools: Arc<ToolWorkingSet>,
tool_use: ToolUseState,
scripts_by_assistant_message: HashMap<MessageId, ScriptId>,
script_output_messages: HashSet<MessageId>,
script_session: Entity<ScriptSession>,
_script_session_subscription: Subscription,
}
impl Thread {
pub fn new(
project: Entity<Project>,
tools: Arc<ToolWorkingSet>,
_cx: &mut Context<Self>,
cx: &mut Context<Self>,
) -> Self {
let script_session = cx.new(|cx| ScriptSession::new(project.clone(), cx));
let script_session_subscription = cx.subscribe(&script_session, Self::handle_script_event);
Self {
id: ThreadId::new(),
updated_at: Utc::now(),
@@ -97,6 +107,10 @@ impl Thread {
project,
tools,
tool_use: ToolUseState::new(),
scripts_by_assistant_message: HashMap::default(),
script_output_messages: HashSet::default(),
script_session,
_script_session_subscription: script_session_subscription,
}
}
@@ -105,7 +119,7 @@ impl Thread {
saved: SavedThread,
project: Entity<Project>,
tools: Arc<ToolWorkingSet>,
_cx: &mut Context<Self>,
cx: &mut Context<Self>,
) -> Self {
let next_message_id = MessageId(
saved
@@ -115,6 +129,8 @@ impl Thread {
.unwrap_or(0),
);
let tool_use = ToolUseState::from_saved_messages(&saved.messages);
let script_session = cx.new(|cx| ScriptSession::new(project.clone(), cx));
let script_session_subscription = cx.subscribe(&script_session, Self::handle_script_event);
Self {
id,
@@ -138,6 +154,10 @@ impl Thread {
project,
tools,
tool_use,
scripts_by_assistant_message: HashMap::default(),
script_output_messages: HashSet::default(),
script_session,
_script_session_subscription: script_session_subscription,
}
}
@@ -223,17 +243,22 @@ impl Thread {
self.tool_use.message_has_tool_results(message_id)
}
pub fn message_has_script_output(&self, message_id: MessageId) -> bool {
self.script_output_messages.contains(&message_id)
}
pub fn insert_user_message(
&mut self,
text: impl Into<String>,
context: Vec<ContextSnapshot>,
cx: &mut Context<Self>,
) {
) -> MessageId {
let message_id = self.insert_message(Role::User, text, cx);
let context_ids = context.iter().map(|context| context.id).collect::<Vec<_>>();
self.context
.extend(context.into_iter().map(|context| (context.id, context)));
self.context_by_message.insert(message_id, context_ids);
message_id
}
pub fn insert_message(
@@ -302,6 +327,39 @@ impl Thread {
text
}
pub fn script_for_message<'a>(
&'a self,
message_id: MessageId,
cx: &'a App,
) -> Option<&'a Script> {
self.scripts_by_assistant_message
.get(&message_id)
.map(|script_id| self.script_session.read(cx).get(*script_id))
}
fn handle_script_event(
&mut self,
_script_session: Entity<ScriptSession>,
event: &ScriptEvent,
cx: &mut Context<Self>,
) {
match event {
ScriptEvent::Spawned(_) => {}
ScriptEvent::Exited(script_id) => {
if let Some(output_message) = self
.script_session
.read(cx)
.get(*script_id)
.output_message_for_llm()
{
let message_id = self.insert_user_message(output_message, vec![], cx);
self.script_output_messages.insert(message_id);
cx.emit(ThreadEvent::ScriptFinished)
}
}
}
}
pub fn send_to_model(
&mut self,
model: Arc<dyn LanguageModel>,
@@ -330,7 +388,7 @@ impl Thread {
pub fn to_completion_request(
&self,
request_kind: RequestKind,
_cx: &App,
cx: &App,
) -> LanguageModelRequest {
let mut request = LanguageModelRequest {
messages: vec![],
@@ -339,6 +397,12 @@ impl Thread {
temperature: None,
};
request.messages.push(LanguageModelRequestMessage {
role: Role::System,
content: vec![SCRIPTING_PROMPT.to_string().into()],
cache: true,
});
let mut referenced_context_ids = HashSet::default();
for message in &self.messages {
@@ -351,6 +415,7 @@ impl Thread {
content: Vec::new(),
cache: false,
};
match request_kind {
RequestKind::Chat => {
self.tool_use
@@ -371,11 +436,20 @@ impl Thread {
RequestKind::Chat => {
self.tool_use
.attach_tool_uses(message.id, &mut request_message);
if matches!(message.role, Role::Assistant) {
if let Some(script_id) = self.scripts_by_assistant_message.get(&message.id)
{
let script = self.script_session.read(cx).get(*script_id);
request_message.content.push(script.source_tag().into());
}
}
}
RequestKind::Summarize => {
// We don't care about tool use during summarization.
}
}
};
request.messages.push(request_message);
}
@@ -412,6 +486,8 @@ impl Thread {
let stream_completion = async {
let mut events = stream.await?;
let mut stop_reason = StopReason::EndTurn;
let mut script_tag_parser = ScriptTagParser::new();
let mut script_id = None;
while let Some(event) = events.next().await {
let event = event?;
@@ -426,19 +502,43 @@ impl Thread {
}
LanguageModelCompletionEvent::Text(chunk) => {
if let Some(last_message) = thread.messages.last_mut() {
if last_message.role == Role::Assistant {
last_message.text.push_str(&chunk);
let chunk = script_tag_parser.parse_chunk(&chunk);
let message_id = if last_message.role == Role::Assistant {
last_message.text.push_str(&chunk.content);
cx.emit(ThreadEvent::StreamedAssistantText(
last_message.id,
chunk,
chunk.content,
));
last_message.id
} else {
// If we won't have an Assistant message yet, assume this chunk marks the beginning
// of a new Assistant response.
//
// Importantly: We do *not* want to emit a `StreamedAssistantText` event here, as it
// will result in duplicating the text of the chunk in the rendered Markdown.
thread.insert_message(Role::Assistant, chunk, cx);
thread.insert_message(Role::Assistant, chunk.content, cx)
};
if script_id.is_none() && script_tag_parser.found_script() {
let id = thread
.script_session
.update(cx, |session, _cx| session.new_script());
thread.scripts_by_assistant_message.insert(message_id, id);
script_id = Some(id);
}
if let (Some(script_source), Some(script_id)) =
(chunk.script_source, script_id)
{
// TODO: move buffer to script and run as it streams
thread
.script_session
.update(cx, |this, cx| {
this.run_script(script_id, script_source, cx)
})
.detach_and_log_err(cx);
}
}
}
@@ -661,6 +761,7 @@ pub enum ThreadEvent {
#[allow(unused)]
tool_use_id: LanguageModelToolUseId,
},
ScriptFinished,
}
impl EventEmitter<ThreadEvent> for Thread {}


@@ -1,5 +1,5 @@
[package]
name = "scripting_tool"
name = "assistant_scripting"
version = "0.1.0"
edition.workspace = true
publish.workspace = true
@@ -9,21 +9,19 @@ license = "GPL-3.0-or-later"
workspace = true
[lib]
path = "src/scripting_tool.rs"
path = "src/assistant_scripting.rs"
doctest = false
[dependencies]
anyhow.workspace = true
assistant_tool.workspace = true
collections.workspace = true
shlex.workspace = true
futures.workspace = true
gpui.workspace = true
log.workspace = true
mlua.workspace = true
parking_lot.workspace = true
project.workspace = true
regex.workspace = true
schemars.workspace = true
serde.workspace = true
serde_json.workspace = true
settings.workspace = true
@@ -33,4 +31,5 @@ util.workspace = true
collections = { workspace = true, features = ["test-support"] }
gpui = { workspace = true, features = ["test-support"] }
project = { workspace = true, features = ["test-support"] }
rand.workspace = true
settings = { workspace = true, features = ["test-support"] }


@@ -0,0 +1,7 @@
mod session;
mod tag;
pub use session::*;
pub use tag::*;
pub const SCRIPTING_PROMPT: &str = include_str!("./system_prompt.txt");


@@ -1,7 +1,9 @@
---@diagnostic disable: undefined-global
-- Create a sandbox environment
local sandbox = {}
-- Allow access to standard libraries (safe subset)
sandbox.string = string
sandbox.table = table
sandbox.math = math
@@ -11,21 +13,29 @@ sandbox.tostring = tostring
sandbox.tonumber = tonumber
sandbox.pairs = pairs
sandbox.ipairs = ipairs
sandbox.search = search
-- Access to custom functions
sandbox.search = search
sandbox.outline = outline
-- Create a sandboxed version of LuaFileIO
local io = {}
-- File functions
io.open = sb_io_open
io.popen = sb_io_popen
-- Add the sandboxed io library to the sandbox environment
sandbox.io = io
-- Load the script with the sandbox environment
local user_script_fn, err = load(user_script, nil, "t", sandbox)
if not user_script_fn then
error("Failed to load user script: " .. tostring(err))
end
-- Execute the user script within the sandbox
local success, result = pcall(user_script_fn)
if not success then


@@ -1,43 +1,38 @@
use anyhow::Result;
use anyhow::anyhow;
use collections::{HashMap, HashSet};
use futures::{
channel::{mpsc, oneshot},
pin_mut, SinkExt, StreamExt,
};
use gpui::{AppContext, AsyncApp, Context, Entity, Task, WeakEntity};
use mlua::{Lua, MultiValue, Table, UserData, UserDataMethods};
use gpui::{AppContext, AsyncApp, Context, Entity, EventEmitter, SharedString, Task, WeakEntity};
use mlua::{ExternalResult, Lua, MultiValue, Table, UserData, UserDataMethods};
use parking_lot::Mutex;
use project::{search::SearchQuery, Fs, Project};
use regex::Regex;
use std::{
cell::RefCell,
fs::File,
path::{Path, PathBuf},
process::{Command, Stdio},
sync::Arc,
};
use util::{paths::PathMatcher, ResultExt};
use crate::sandboxed_shell::{Operator, ShellAst, ShellCmd};
use crate::{SCRIPT_END_TAG, SCRIPT_START_TAG};
pub struct ScriptOutput {
pub stdout: String,
}
struct ForegroundFn(Box<dyn FnOnce(WeakEntity<ScriptSession>, AsyncApp) + Send>);
struct ForegroundFn(Box<dyn FnOnce(WeakEntity<Session>, AsyncApp) + Send>);
pub struct Session {
pub struct ScriptSession {
project: Entity<Project>,
// TODO Remove this
fs_changes: Arc<Mutex<HashMap<PathBuf, Vec<u8>>>>,
foreground_fns_tx: mpsc::Sender<ForegroundFn>,
_invoke_foreground_fns: Task<()>,
scripts: Vec<Script>,
}
impl Session {
impl ScriptSession {
pub fn new(project: Entity<Project>, cx: &mut Context<Self>) -> Self {
let (foreground_fns_tx, mut foreground_fns_rx) = mpsc::channel(128);
Session {
ScriptSession {
project,
fs_changes: Arc::new(Mutex::new(HashMap::default())),
foreground_fns_tx,
@@ -46,15 +41,62 @@ impl Session {
foreground_fn.0(this.clone(), cx.clone());
}
}),
scripts: Vec::new(),
}
}
/// Runs a Lua script in a sandboxed environment and returns the printed lines
pub fn new_script(&mut self) -> ScriptId {
let id = ScriptId(self.scripts.len() as u32);
let script = Script {
id,
state: ScriptState::Generating,
source: SharedString::new_static(""),
};
self.scripts.push(script);
id
}
pub fn run_script(
&mut self,
script: String,
script_id: ScriptId,
script_src: String,
cx: &mut Context<Self>,
) -> Task<Result<ScriptOutput>> {
) -> Task<anyhow::Result<()>> {
let script = self.get_mut(script_id);
let stdout = Arc::new(Mutex::new(String::new()));
script.source = script_src.clone().into();
script.state = ScriptState::Running {
stdout: stdout.clone(),
};
let task = self.run_lua(script_src, stdout, cx);
cx.emit(ScriptEvent::Spawned(script_id));
cx.spawn(|session, mut cx| async move {
let result = task.await;
session.update(&mut cx, |session, cx| {
let script = session.get_mut(script_id);
let stdout = script.stdout_snapshot();
script.state = match result {
Ok(()) => ScriptState::Succeeded { stdout },
Err(error) => ScriptState::Failed { stdout, error },
};
cx.emit(ScriptEvent::Exited(script_id))
})
})
}
fn run_lua(
&mut self,
script: String,
stdout: Arc<Mutex<String>>,
cx: &mut Context<Self>,
) -> Task<anyhow::Result<()>> {
const SANDBOX_PREAMBLE: &str = include_str!("sandbox_preamble.lua");
// TODO Remove fs_changes
@@ -66,62 +108,84 @@ impl Session {
.visible_worktrees(cx)
.next()
.map(|worktree| worktree.read(cx).abs_path());
let fs = self.project.read(cx).fs().clone();
let foreground_fns_tx = self.foreground_fns_tx.clone();
cx.background_spawn(async move {
let lua = Lua::new();
lua.set_memory_limit(2 * 1024 * 1024 * 1024)?; // 2 GB
let globals = lua.globals();
let stdout = Arc::new(Mutex::new(String::new()));
globals.set(
"sb_print",
lua.create_function({
let stdout = stdout.clone();
move |_, args: MultiValue| Self::print(args, &stdout)
})?,
)?;
globals.set(
"search",
lua.create_async_function({
let foreground_fns_tx = foreground_fns_tx.clone();
let fs = fs.clone();
move |lua, regex| {
Self::search(lua, foreground_fns_tx.clone(), fs.clone(), regex)
}
})?,
)?;
globals.set(
"sb_io_open",
lua.create_function({
let fs_changes = fs_changes.clone();
let root_dir = root_dir.clone();
move |lua, (path_str, mode)| {
Self::io_open(&lua, &fs_changes, root_dir.as_ref(), path_str, mode)
}
})?,
)?;
globals.set(
"sb_io_popen",
lua.create_function({
move |lua, shell_str| {
let mut allowed_commands = HashMap::default(); // TODO persist this
Self::io_popen(&lua, root_dir.as_ref(), shell_str, &mut allowed_commands)
}
})?,
)?;
globals.set("user_script", script)?;
let task = cx.background_spawn({
let stdout = stdout.clone();
lua.load(SANDBOX_PREAMBLE).exec_async().await?;
async move {
let lua = Lua::new();
lua.set_memory_limit(2 * 1024 * 1024 * 1024)?; // 2 GB
let globals = lua.globals();
globals.set(
"sb_print",
lua.create_function({
let stdout = stdout.clone();
move |_, args: MultiValue| Self::print(args, &stdout)
})?,
)?;
globals.set(
"search",
lua.create_async_function({
let foreground_fns_tx = foreground_fns_tx.clone();
move |lua, regex| {
let mut foreground_fns_tx = foreground_fns_tx.clone();
let fs = fs.clone();
async move {
Self::search(&lua, &mut foreground_fns_tx, fs, regex)
.await
.into_lua_err()
}
}
})?,
)?;
globals.set(
"outline",
lua.create_async_function({
let root_dir = root_dir.clone();
move |_lua, path| {
let mut foreground_fns_tx = foreground_fns_tx.clone();
let root_dir = root_dir.clone();
async move {
Self::outline(root_dir, &mut foreground_fns_tx, path)
.await
.into_lua_err()
}
}
})?,
)?;
globals.set(
"sb_io_open",
lua.create_function({
let fs_changes = fs_changes.clone();
let root_dir = root_dir.clone();
move |lua, (path_str, mode)| {
Self::io_open(&lua, &fs_changes, root_dir.as_ref(), path_str, mode)
}
})?,
)?;
globals.set("user_script", script)?;
// Drop Lua instance to decrement reference count.
drop(lua);
lua.load(SANDBOX_PREAMBLE).exec_async().await?;
let stdout = Arc::try_unwrap(stdout)
.expect("no more references to stdout")
.into_inner();
Ok(ScriptOutput { stdout })
})
// Drop Lua instance to decrement reference count.
drop(lua);
anyhow::Ok(())
}
});
task
}
pub fn get(&self, script_id: ScriptId) -> &Script {
&self.scripts[script_id.0 as usize]
}
fn get_mut(&mut self, script_id: ScriptId) -> &mut Script {
&mut self.scripts[script_id.0 as usize]
}
/// Sandboxed print() function in Lua.
@@ -140,399 +204,6 @@ impl Session {
Ok(())
}
/// Sandboxed io.popen() function in Lua.
fn io_popen(
lua: &Lua,
root_dir: Option<&Arc<Path>>,
shell_str: mlua::String,
allowed_commands: &mut HashMap<String, bool>,
) -> mlua::Result<(Option<Table>, String)> {
let root_dir = root_dir.ok_or_else(|| {
mlua::Error::runtime("cannot execute command without a root directory")
})?;
// Parse the shell command into our AST
let ast = ShellAst::parse(shell_str.to_str()?)?;
// Create a lua file handle for the command output
let file = lua.create_table()?;
// Create a buffer to store the command output
let output_buffer = Arc::new(Mutex::new(String::new()));
// Execute the shell command based on the parsed AST
match ast {
ShellAst::Command(shell_cmd) => {
let result = Self::execute_command(&shell_cmd, root_dir, allowed_commands)?;
output_buffer.lock().push_str(&result);
}
ShellAst::Operation {
operator,
left,
right,
} => {
// Handle compound operations by recursively executing them
let left_output = Self::execute_ast_node(*left, root_dir, allowed_commands)?;
match operator {
Operator::Pipe => {
// For pipe, use left output as input to right command
let right_output = Self::execute_ast_node_with_input(
*right,
&left_output,
root_dir,
allowed_commands,
)?;
output_buffer.lock().push_str(&right_output);
}
Operator::And => {
// For AND, only execute right if left was successful (non-empty output as success indicator)
if !left_output.trim().is_empty() {
let right_output =
Self::execute_ast_node(*right, root_dir, allowed_commands)?;
output_buffer.lock().push_str(&right_output);
} else {
output_buffer.lock().push_str(&left_output);
}
}
Operator::Semicolon => {
// For semicolon, execute both regardless of result
output_buffer.lock().push_str(&left_output);
let right_output =
Self::execute_ast_node(*right, root_dir, allowed_commands)?;
output_buffer.lock().push_str(&right_output);
}
}
}
}
// Set up the file's content
file.set(
"__content",
lua.create_userdata(FileContent(RefCell::new(
output_buffer.lock().as_bytes().to_vec(),
)))?,
)?;
file.set("__position", 0usize)?;
file.set("__read_perm", true)?;
file.set("__write_perm", false)?;
// Implement the read method for the file
let read_fn = {
lua.create_function(
move |_lua, (file_userdata, format): (mlua::Table, Option<mlua::Value>)| {
let content = file_userdata.get::<mlua::AnyUserData>("__content")?;
let mut position = file_userdata.get::<usize>("__position")?;
let content_ref = content.borrow::<FileContent>()?;
let content_vec = content_ref.0.borrow();
if position >= content_vec.len() {
return Ok(None); // EOF
}
match format {
Some(mlua::Value::String(s)) => {
let format_str = s.to_string_lossy();
// Handle different read formats
if format_str.starts_with("*a") {
// Read all
let result =
String::from_utf8_lossy(&content_vec[position..]).to_string();
position = content_vec.len();
file_userdata.set("__position", position)?;
Ok(Some(result))
} else if format_str.starts_with("*l") {
// Read line
let mut line = Vec::new();
let mut found_newline = false;
while position < content_vec.len() {
let byte = content_vec[position];
position += 1;
if byte == b'\n' {
found_newline = true;
break;
}
// Handle \r\n sequence
if byte == b'\r'
&& position < content_vec.len()
&& content_vec[position] == b'\n'
{
position += 1;
found_newline = true;
break;
}
line.push(byte);
}
file_userdata.set("__position", position)?;
if !found_newline
&& line.is_empty()
&& position >= content_vec.len()
{
return Ok(None); // EOF
}
let result = String::from_utf8_lossy(&line).to_string();
Ok(Some(result))
} else {
Err(mlua::Error::runtime(format!(
"Unsupported read format: {}",
format_str
)))
}
}
Some(_) => Err(mlua::Error::runtime("Invalid format")),
None => {
// Default is to read a line
let mut line = Vec::new();
let mut found_newline = false;
while position < content_vec.len() {
let byte = content_vec[position];
position += 1;
if byte == b'\n' {
found_newline = true;
break;
}
if byte == b'\r'
&& position < content_vec.len()
&& content_vec[position] == b'\n'
{
position += 1;
found_newline = true;
break;
}
line.push(byte);
}
file_userdata.set("__position", position)?;
if !found_newline && line.is_empty() && position >= content_vec.len() {
return Ok(None); // EOF
}
let result = String::from_utf8_lossy(&line).to_string();
Ok(Some(result))
}
}
},
)?
};
file.set("read", read_fn)?;
// Implement close method
let close_fn = lua.create_function(|_lua, _: mlua::Table| Ok(true))?;
file.set("close", close_fn)?;
Ok((Some(file), String::new()))
}
// Helper function to execute a single command
fn execute_command(
cmd: &ShellCmd,
root_dir: &Arc<Path>,
allowed_commands: &mut HashMap<String, bool>,
) -> mlua::Result<String> {
// Check if command is allowed
if !allowed_commands.contains_key(&cmd.command) {
// If it's the first time we see this command, ask for permission
// In a real application, this would prompt the user, but for simplicity
// we'll just allow all commands in this sample implementation
allowed_commands.insert(cmd.command.clone(), true);
}
if !allowed_commands[&cmd.command] {
return Err(mlua::Error::runtime(format!(
"Command '{}' is not allowed in this sandbox",
cmd.command
)));
}
// Execute the command
let mut command = Command::new(&cmd.command);
// Set the current directory
command.current_dir(root_dir);
// Add arguments
command.args(&cmd.args);
// Configure stdio
command.stdin(Stdio::piped());
command.stdout(Stdio::piped());
command.stderr(Stdio::piped());
// Execute the command
let output = command
.output()
.map_err(|e| mlua::Error::runtime(format!("Failed to execute command: {}", e)))?;
let mut result = String::new();
// Handle stdout
if cmd.stdout_redirect.is_none() {
result.push_str(&String::from_utf8_lossy(&output.stdout));
} else {
// Handle file redirection
let redirect_path = root_dir.join(cmd.stdout_redirect.as_ref().unwrap());
Self::write_to_file(&redirect_path, &output.stdout)
.map_err(|e| mlua::Error::runtime(format!("Failed to redirect stdout: {}", e)))?;
}
// Handle stderr
if cmd.stderr_redirect.is_none() {
// If stderr is not redirected, append it to the result
result.push_str(&String::from_utf8_lossy(&output.stderr));
} else {
// Handle file redirection
let redirect_path = root_dir.join(cmd.stderr_redirect.as_ref().unwrap());
Self::write_to_file(&redirect_path, &output.stderr)
.map_err(|e| mlua::Error::runtime(format!("Failed to redirect stderr: {}", e)))?;
}
Ok(result)
}
// Helper function to write data to a file
fn write_to_file(path: &Path, data: &[u8]) -> std::io::Result<()> {
let mut file = File::create(path)?;
std::io::Write::write_all(&mut file, data)?;
Ok(())
}
// Helper function to execute an AST node
fn execute_ast_node(
node: ShellAst,
root_dir: &Arc<Path>,
allowed_commands: &mut HashMap<String, bool>,
) -> mlua::Result<String> {
match node {
ShellAst::Command(cmd) => Self::execute_command(&cmd, root_dir, allowed_commands),
ShellAst::Operation {
operator,
left,
right,
} => {
let left_output = Self::execute_ast_node(*left, root_dir, allowed_commands)?;
match operator {
Operator::Pipe => Self::execute_ast_node_with_input(
*right,
&left_output,
root_dir,
allowed_commands,
),
Operator::And => {
if !left_output.trim().is_empty() {
Self::execute_ast_node(*right, root_dir, allowed_commands)
} else {
Ok(left_output)
}
}
Operator::Semicolon => {
let mut result = left_output;
let right_output =
Self::execute_ast_node(*right, root_dir, allowed_commands)?;
result.push_str(&right_output);
Ok(result)
}
}
}
}
}
// Helper function to execute an AST node with input from a previous command
fn execute_ast_node_with_input(
node: ShellAst,
input: &str,
root_dir: &Arc<Path>,
allowed_commands: &mut HashMap<String, bool>,
) -> mlua::Result<String> {
match node {
ShellAst::Command(cmd) => {
// Check if command is allowed
if !allowed_commands.contains_key(&cmd.command) {
allowed_commands.insert(cmd.command.clone(), true);
}
if !allowed_commands[&cmd.command] {
return Err(mlua::Error::runtime(format!(
"Command '{}' is not allowed in this sandbox",
cmd.command
)));
}
// Execute the command with input
let mut command = Command::new(&cmd.command);
command.current_dir(root_dir);
command.args(&cmd.args);
// Configure stdio
command.stdin(Stdio::piped());
command.stdout(Stdio::piped());
command.stderr(Stdio::piped());
let mut child = command.spawn().map_err(|e| {
mlua::Error::runtime(format!("Failed to execute command: {}", e))
})?;
// Write input to stdin
if let Some(mut stdin) = child.stdin.take() {
std::io::Write::write_all(&mut stdin, input.as_bytes()).map_err(|e| {
mlua::Error::runtime(format!("Failed to write to stdin: {}", e))
})?;
// Stdin is closed when it goes out of scope
}
let output = child.wait_with_output().map_err(|e| {
mlua::Error::runtime(format!("Failed to wait for command: {}", e))
})?;
let mut result = String::new();
// Handle stdout
if cmd.stdout_redirect.is_none() {
result.push_str(&String::from_utf8_lossy(&output.stdout));
} else {
// Handle file redirection
let redirect_path = root_dir.join(cmd.stdout_redirect.as_ref().unwrap());
Self::write_to_file(&redirect_path, &output.stdout).map_err(|e| {
mlua::Error::runtime(format!("Failed to redirect stdout: {}", e))
})?;
}
// Handle stderr
if cmd.stderr_redirect.is_none() {
result.push_str(&String::from_utf8_lossy(&output.stderr));
} else {
// Handle file redirection
let redirect_path = root_dir.join(cmd.stderr_redirect.as_ref().unwrap());
Self::write_to_file(&redirect_path, &output.stderr).map_err(|e| {
mlua::Error::runtime(format!("Failed to redirect stderr: {}", e))
})?;
}
Ok(result)
}
ShellAst::Operation { .. } => {
// For complex operations, we'd need to create temporary files for intermediate results
// For simplicity, we'll return an error for now
Err(mlua::Error::runtime(
"Nested operations in pipes are not supported",
))
}
}
}
/// Sandboxed io.open() function in Lua.
fn io_open(
lua: &Lua,
@@ -561,27 +232,9 @@ impl Session {
file.set("__read_perm", read_perm)?;
file.set("__write_perm", write_perm)?;
// Sandbox the path; it must be within root_dir
let path: PathBuf = {
let rust_path = Path::new(&path_str);
// Get absolute path
if rust_path.is_absolute() {
// Check if path starts with root_dir prefix without resolving symlinks
if !rust_path.starts_with(&root_dir) {
return Ok((
None,
format!(
"Error: Absolute path {} is outside the current working directory",
path_str
),
));
}
rust_path.to_path_buf()
} else {
// Make relative path absolute relative to cwd
root_dir.join(rust_path)
}
let path = match Self::parse_abs_path_in_root_dir(&root_dir, &path_str) {
Ok(path) => path,
Err(err) => return Ok((None, format!("{err}"))),
};
// close method
@@ -917,11 +570,11 @@ impl Session {
}
async fn search(
lua: Lua,
mut foreground_tx: mpsc::Sender<ForegroundFn>,
lua: &Lua,
foreground_tx: &mut mpsc::Sender<ForegroundFn>,
fs: Arc<dyn Fs>,
regex: String,
) -> mlua::Result<Table> {
) -> anyhow::Result<Table> {
// TODO: Allow specification of these options.
let search_query = SearchQuery::regex(
&regex,
@@ -934,18 +587,17 @@ impl Session {
);
let search_query = match search_query {
Ok(query) => query,
Err(e) => return Err(mlua::Error::runtime(format!("Invalid search query: {}", e))),
Err(e) => return Err(anyhow!("Invalid search query: {}", e)),
};
// TODO: Should use `search_query.regex`. The tool description should also be updated,
// as it specifies standard regex.
let search_regex = match Regex::new(&regex) {
Ok(re) => re,
Err(e) => return Err(mlua::Error::runtime(format!("Invalid regex: {}", e))),
Err(e) => return Err(anyhow!("Invalid regex: {}", e)),
};
let mut abs_paths_rx =
Self::find_search_candidates(search_query, &mut foreground_tx).await?;
let mut abs_paths_rx = Self::find_search_candidates(search_query, foreground_tx).await?;
let mut search_results: Vec<Table> = Vec::new();
while let Some(path) = abs_paths_rx.next().await {
@@ -993,7 +645,7 @@ impl Session {
async fn find_search_candidates(
search_query: SearchQuery,
foreground_tx: &mut mpsc::Sender<ForegroundFn>,
) -> mlua::Result<mpsc::UnboundedReceiver<PathBuf>> {
) -> anyhow::Result<mpsc::UnboundedReceiver<PathBuf>> {
Self::run_foreground_fn(
"finding search file candidates",
foreground_tx,
@@ -1043,14 +695,62 @@ impl Session {
})
}),
)
.await
.await?
}
async fn outline(
root_dir: Option<Arc<Path>>,
foreground_tx: &mut mpsc::Sender<ForegroundFn>,
path_str: String,
) -> anyhow::Result<String> {
let root_dir = root_dir
.ok_or_else(|| mlua::Error::runtime("cannot get outline without a root directory"))?;
let path = Self::parse_abs_path_in_root_dir(&root_dir, &path_str)?;
let outline = Self::run_foreground_fn(
"getting code outline",
foreground_tx,
Box::new(move |session, cx| {
cx.spawn(move |mut cx| async move {
// TODO: This will not use file content from `fs_changes`. It will also reflect
// user changes that have not been saved.
let buffer = session
.update(&mut cx, |session, cx| {
session
.project
.update(cx, |project, cx| project.open_local_buffer(&path, cx))
})?
.await?;
buffer.update(&mut cx, |buffer, _cx| {
if let Some(outline) = buffer.snapshot().outline(None) {
Ok(outline)
} else {
Err(anyhow!("No outline for file {path_str}"))
}
})
})
}),
)
.await?
.await??;
Ok(outline
.items
.into_iter()
.map(|item| {
if item.text.contains('\n') {
log::error!("Outline item unexpectedly contains newline");
}
format!("{}{}", " ".repeat(item.depth), item.text)
})
.collect::<Vec<String>>()
.join("\n"))
}
async fn run_foreground_fn<R: Send + 'static>(
description: &str,
foreground_tx: &mut mpsc::Sender<ForegroundFn>,
function: Box<dyn FnOnce(WeakEntity<Self>, AsyncApp) -> anyhow::Result<R> + Send>,
) -> mlua::Result<R> {
function: Box<dyn FnOnce(WeakEntity<Self>, AsyncApp) -> R + Send>,
) -> anyhow::Result<R> {
let (response_tx, response_rx) = oneshot::channel();
let send_result = foreground_tx
.send(ForegroundFn(Box::new(move |this, cx| {
@@ -1060,19 +760,34 @@ impl Session {
match send_result {
Ok(()) => (),
Err(err) => {
return Err(mlua::Error::runtime(format!(
"Internal error while enqueuing work for {description}: {err}"
)))
return Err(anyhow::Error::new(err).context(format!(
"Internal error while enqueuing work for {description}"
)));
}
}
match response_rx.await {
Ok(Ok(result)) => Ok(result),
Ok(Err(err)) => Err(mlua::Error::runtime(format!(
"Error while {description}: {err}"
))),
Err(oneshot::Canceled) => Err(mlua::Error::runtime(format!(
Ok(result) => Ok(result),
Err(oneshot::Canceled) => Err(anyhow!(
"Internal error: response oneshot was canceled while {description}."
))),
)),
}
}
fn parse_abs_path_in_root_dir(root_dir: &Path, path_str: &str) -> anyhow::Result<PathBuf> {
let path = Path::new(&path_str);
if path.is_absolute() {
// Check if path starts with root_dir prefix without resolving symlinks
if path.starts_with(&root_dir) {
Ok(path.to_path_buf())
} else {
Err(anyhow!(
"Error: Absolute path {} is outside the current working directory",
path_str
))
}
} else {
// TODO: Does use of `../` break sandbox - is path canonicalization needed?
Ok(root_dir.join(path))
}
}
}
@@ -1085,6 +800,79 @@ impl UserData for FileContent {
}
}
#[derive(Debug)]
pub enum ScriptEvent {
Spawned(ScriptId),
Exited(ScriptId),
}
impl EventEmitter<ScriptEvent> for ScriptSession {}
#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
pub struct ScriptId(u32);
pub struct Script {
pub id: ScriptId,
pub state: ScriptState,
pub source: SharedString,
}
pub enum ScriptState {
Generating,
Running {
stdout: Arc<Mutex<String>>,
},
Succeeded {
stdout: String,
},
Failed {
stdout: String,
error: anyhow::Error,
},
}
impl Script {
pub fn source_tag(&self) -> String {
format!("{}{}{}", SCRIPT_START_TAG, self.source, SCRIPT_END_TAG)
}
/// If exited, returns a message with the output for the LLM
pub fn output_message_for_llm(&self) -> Option<String> {
match &self.state {
ScriptState::Generating { .. } => None,
ScriptState::Running { .. } => None,
ScriptState::Succeeded { stdout } => {
format!("Here's the script output:\n{}", stdout).into()
}
ScriptState::Failed { stdout, error } => format!(
"The script failed with:\n{}\n\nHere's the output it managed to print:\n{}",
error, stdout
)
.into(),
}
}
/// Get a snapshot of the script's stdout
pub fn stdout_snapshot(&self) -> String {
match &self.state {
ScriptState::Generating { .. } => String::new(),
ScriptState::Running { stdout } => stdout.lock().clone(),
ScriptState::Succeeded { stdout } => stdout.clone(),
ScriptState::Failed { stdout, .. } => stdout.clone(),
}
}
/// Returns the error if the script failed, otherwise None
pub fn error(&self) -> Option<&anyhow::Error> {
match &self.state {
ScriptState::Generating { .. } => None,
ScriptState::Running { .. } => None,
ScriptState::Succeeded { .. } => None,
ScriptState::Failed { error, .. } => Some(error),
}
}
}
#[cfg(test)]
mod tests {
use gpui::TestAppContext;
@@ -1096,35 +884,17 @@ mod tests {
#[gpui::test]
async fn test_print(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
let project = Project::test(fs, [], cx).await;
let session = cx.new(|cx| Session::new(project, cx));
let script = r#"
print("Hello", "world!")
print("Goodbye", "moon!")
"#;
let output = session
.update(cx, |session, cx| session.run_script(script.to_string(), cx))
.await
.unwrap();
assert_eq!(output.stdout, "Hello\tworld!\nGoodbye\tmoon!\n");
let output = test_script(script, cx).await.unwrap();
assert_eq!(output, "Hello\tworld!\nGoodbye\tmoon!\n");
}
#[gpui::test]
async fn test_search(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(
"/",
json!({
"file1.txt": "Hello world!",
"file2.txt": "Goodbye moon!"
}),
)
.await;
let project = Project::test(fs, [Path::new("/")], cx).await;
let session = cx.new(|cx| Session::new(project, cx));
let script = r#"
local results = search("world")
for i, result in ipairs(results) do
@@ -1135,11 +905,36 @@ mod tests {
end
end
"#;
let output = session
.update(cx, |session, cx| session.run_script(script.to_string(), cx))
.await
.unwrap();
assert_eq!(output.stdout, "File: /file1.txt\nMatches:\n world\n");
let output = test_script(script, cx).await.unwrap();
assert_eq!(output, "File: /file1.txt\nMatches:\n world\n");
}
async fn test_script(source: &str, cx: &mut TestAppContext) -> anyhow::Result<String> {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(
"/",
json!({
"file1.txt": "Hello world!",
"file2.txt": "Goodbye moon!"
}),
)
.await;
let project = Project::test(fs, [Path::new("/")], cx).await;
let session = cx.new(|cx| ScriptSession::new(project, cx));
let (script_id, task) = session.update(cx, |session, cx| {
let script_id = session.new_script();
let task = session.run_script(script_id, source.to_string(), cx);
(script_id, task)
});
task.await?;
Ok(session.read_with(cx, |session, _cx| session.get(script_id).stdout_snapshot()))
}
fn init_test(cx: &mut TestAppContext) {

View File

@@ -3,6 +3,12 @@ output was, including both stdout as well as the git diff of changes it made to
the filesystem. That way, you can get more information about the code base, or
make changes to the code base directly.
Put the Lua script inside of an `<eval>` tag like so:
<eval type="lua">
print("Hello, world!")
</eval>
The Lua script will have access to `io` and it will run with the current working
directory being in the root of the code base, so you can use it to explore,
search, make changes, etc. You can also have the script print things, and I'll
@@ -10,13 +16,21 @@ tell you what the output was. Note that `io` only has `open`, and then the file
it returns only has the methods read, write, and close - it doesn't have popen
or anything else.
Also, I'm going to be putting this Lua script into JSON, so please don't use
Lua's double quote syntax for string literals - use one of Lua's other syntaxes
for string literals, so I don't have to escape the double quotes.
There will be a global called `search` which accepts a regex (it's implemented
There is a function called `search` which accepts a regex (it's implemented
using Rust's regex crate, so use that regex syntax) and runs that regex on the
contents of every file in the code base (aside from gitignored files), then
returns an array of tables with two fields: "path" (the path to the file that
had the matches) and "matches" (an array of strings, with each string being a
match that was found within the file).
There is a function called `outline` which accepts the path to a source file,
and returns a string where each line is a declaration. These lines are indented
with 2 spaces to indicate when a declaration is inside another.
When I send you the script output, do not thank me for running it,
act as if you ran it yourself.
IMPORTANT!
Only include a maximum of one Lua script at the very end of your message.
DO NOT WRITE ANYTHING ELSE AFTER THE SCRIPT. Wait for my response with the script
output to continue.

View File

@@ -0,0 +1,260 @@
pub const SCRIPT_START_TAG: &str = "<eval type=\"lua\">";
pub const SCRIPT_END_TAG: &str = "</eval>";
const START_TAG: &[u8] = SCRIPT_START_TAG.as_bytes();
const END_TAG: &[u8] = SCRIPT_END_TAG.as_bytes();
/// Parses a script tag in an assistant message as it is being streamed.
pub struct ScriptTagParser {
state: State,
buffer: Vec<u8>,
tag_match_ix: usize,
}
enum State {
Unstarted,
Streaming,
Ended,
}
#[derive(Debug, PartialEq)]
pub struct ChunkOutput {
/// The chunk with script tags removed.
pub content: String,
/// The full script tag content. `None` until closed.
pub script_source: Option<String>,
}
impl ScriptTagParser {
/// Create a new script tag parser.
pub fn new() -> Self {
Self {
state: State::Unstarted,
buffer: Vec::new(),
tag_match_ix: 0,
}
}
/// Returns true if the parser has found a script tag.
pub fn found_script(&self) -> bool {
match self.state {
State::Unstarted => false,
State::Streaming | State::Ended => true,
}
}
/// Process a new chunk of input, splitting it into surrounding content and script source.
pub fn parse_chunk(&mut self, input: &str) -> ChunkOutput {
let mut content = Vec::with_capacity(input.len());
for byte in input.bytes() {
match self.state {
State::Unstarted => {
if collect_until_tag(byte, START_TAG, &mut self.tag_match_ix, &mut content) {
self.state = State::Streaming;
self.buffer = Vec::with_capacity(1024);
self.tag_match_ix = 0;
}
}
State::Streaming => {
if collect_until_tag(byte, END_TAG, &mut self.tag_match_ix, &mut self.buffer) {
self.state = State::Ended;
}
}
State::Ended => content.push(byte),
}
}
let content = unsafe { String::from_utf8_unchecked(content) };
let script_source = if matches!(self.state, State::Ended) && !self.buffer.is_empty() {
let source = unsafe { String::from_utf8_unchecked(std::mem::take(&mut self.buffer)) };
Some(source)
} else {
None
};
ChunkOutput {
content,
script_source,
}
}
}
fn collect_until_tag(byte: u8, tag: &[u8], tag_match_ix: &mut usize, buffer: &mut Vec<u8>) -> bool {
// this can't be a method because it'd require a mutable borrow on both self and self.buffer
if match_tag_byte(byte, tag, tag_match_ix) {
*tag_match_ix >= tag.len()
} else {
if *tag_match_ix > 0 {
// push the partially matched tag to the buffer
buffer.extend_from_slice(&tag[..*tag_match_ix]);
*tag_match_ix = 0;
// the tag might start to match again
if match_tag_byte(byte, tag, tag_match_ix) {
return *tag_match_ix >= tag.len();
}
}
buffer.push(byte);
false
}
}
fn match_tag_byte(byte: u8, tag: &[u8], tag_match_ix: &mut usize) -> bool {
if byte == tag[*tag_match_ix] {
*tag_match_ix += 1;
true
} else {
false
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_complete_tag() {
let mut parser = ScriptTagParser::new();
let input = "<eval type=\"lua\">print(\"Hello, World!\")</eval>";
let result = parser.parse_chunk(input);
assert_eq!(result.content, "");
assert_eq!(
result.script_source,
Some("print(\"Hello, World!\")".to_string())
);
}
#[test]
fn test_no_tag() {
let mut parser = ScriptTagParser::new();
let input = "No tags here, just plain text";
let result = parser.parse_chunk(input);
assert_eq!(result.content, "No tags here, just plain text");
assert_eq!(result.script_source, None);
}
#[test]
fn test_partial_end_tag() {
let mut parser = ScriptTagParser::new();
// Start the tag
let result = parser.parse_chunk("<eval type=\"lua\">let x = '</e");
assert_eq!(result.content, "");
assert_eq!(result.script_source, None);
// Finish with the rest
let result = parser.parse_chunk("val' + 'not the end';</eval>");
assert_eq!(result.content, "");
assert_eq!(
result.script_source,
Some("let x = '</eval' + 'not the end';".to_string())
);
}
#[test]
fn test_text_before_and_after_tag() {
let mut parser = ScriptTagParser::new();
let input = "Before tag <eval type=\"lua\">print(\"Hello\")</eval> After tag";
let result = parser.parse_chunk(input);
assert_eq!(result.content, "Before tag After tag");
assert_eq!(result.script_source, Some("print(\"Hello\")".to_string()));
}
#[test]
fn test_multiple_chunks_with_surrounding_text() {
let mut parser = ScriptTagParser::new();
// First chunk with text before
let result = parser.parse_chunk("Before script <eval type=\"lua\">local x = 10");
assert_eq!(result.content, "Before script ");
assert_eq!(result.script_source, None);
// Second chunk with script content
let result = parser.parse_chunk("\nlocal y = 20");
assert_eq!(result.content, "");
assert_eq!(result.script_source, None);
// Last chunk with text after
let result = parser.parse_chunk("\nprint(x + y)</eval> After script");
assert_eq!(result.content, " After script");
assert_eq!(
result.script_source,
Some("local x = 10\nlocal y = 20\nprint(x + y)".to_string())
);
let result = parser.parse_chunk(" there's more text");
assert_eq!(result.content, " there's more text");
assert_eq!(result.script_source, None);
}
#[test]
fn test_partial_start_tag_matching() {
let mut parser = ScriptTagParser::new();
// partial match of start tag...
let result = parser.parse_chunk("<ev");
assert_eq!(result.content, "");
// ...that's abandoned when the < of a real tag is encountered
let result = parser.parse_chunk("<eval type=\"lua\">script content</eval>");
// ...so it gets pushed to content
assert_eq!(result.content, "<ev");
// ...and the real tag is parsed correctly
assert_eq!(result.script_source, Some("script content".to_string()));
}
#[test]
fn test_random_chunked_parsing() {
use rand::rngs::StdRng;
use rand::{Rng, SeedableRng};
use std::time::{SystemTime, UNIX_EPOCH};
let test_inputs = [
"Before <eval type=\"lua\">print(\"Hello\")</eval> After",
"No tags here at all",
"<eval type=\"lua\">local x = 10\nlocal y = 20\nprint(x + y)</eval>",
"Text <eval type=\"lua\">if true then\nprint(\"nested </e\")\nend</eval> more",
];
let seed = SystemTime::now()
.duration_since(UNIX_EPOCH)
.unwrap()
.as_secs();
eprintln!("Using random seed: {}", seed);
let mut rng = StdRng::seed_from_u64(seed);
for test_input in &test_inputs {
let mut reference_parser = ScriptTagParser::new();
let expected = reference_parser.parse_chunk(test_input);
let mut chunked_parser = ScriptTagParser::new();
let mut remaining = test_input.as_bytes();
let mut actual_content = String::new();
let mut actual_script = None;
while !remaining.is_empty() {
let chunk_size = rng.gen_range(1..=remaining.len().min(5));
let (chunk, rest) = remaining.split_at(chunk_size);
remaining = rest;
let chunk_str = std::str::from_utf8(chunk).unwrap();
let result = chunked_parser.parse_chunk(chunk_str);
actual_content.push_str(&result.content);
if result.script_source.is_some() {
actual_script = result.script_source;
}
}
assert_eq!(actual_content, expected.content);
assert_eq!(actual_script, expected.script_source);
}
}
}

View File

@@ -623,16 +623,21 @@ impl Copilot {
pub fn sign_out(&mut self, cx: &mut Context<Self>) -> Task<Result<()>> {
self.update_sign_in_status(request::SignInStatus::NotSignedIn, cx);
if let CopilotServer::Running(RunningCopilotServer { lsp: server, .. }) = &self.server {
let server = server.clone();
cx.background_spawn(async move {
server
.request::<request::SignOut>(request::SignOutParams {})
.await?;
match &self.server {
CopilotServer::Running(RunningCopilotServer { lsp: server, .. }) => {
let server = server.clone();
cx.background_spawn(async move {
server
.request::<request::SignOut>(request::SignOutParams {})
.await?;
anyhow::Ok(())
})
}
CopilotServer::Disabled => cx.background_spawn(async move {
clear_copilot_config_dir().await;
anyhow::Ok(())
})
} else {
Task::ready(Err(anyhow!("copilot hasn't started yet")))
}),
_ => Task::ready(Err(anyhow!("copilot hasn't started yet"))),
}
}
@@ -1016,6 +1021,10 @@ async fn clear_copilot_dir() {
remove_matching(paths::copilot_dir(), |_| true).await
}
async fn clear_copilot_config_dir() {
remove_matching(copilot_chat::copilot_chat_config_dir(), |_| true).await
}
async fn get_copilot_lsp(http: Arc<dyn HttpClient>) -> anyhow::Result<PathBuf> {
const SERVER_PATH: &str = "dist/language-server.js";

View File

@@ -4,13 +4,14 @@ use std::sync::OnceLock;
use anyhow::{anyhow, Result};
use chrono::DateTime;
use collections::HashSet;
use fs::Fs;
use futures::{io::BufReader, stream::BoxStream, AsyncBufReadExt, AsyncReadExt, StreamExt};
use gpui::{prelude::*, App, AsyncApp, Global};
use http_client::{AsyncBody, HttpClient, Method, Request as HttpRequest};
use paths::home_dir;
use serde::{Deserialize, Serialize};
use settings::watch_config_file;
use settings::watch_config_dir;
use strum::EnumIter;
pub const COPILOT_CHAT_COMPLETION_URL: &str = "https://api.githubcopilot.com/chat/completions";
@@ -212,7 +213,7 @@ pub fn init(fs: Arc<dyn Fs>, client: Arc<dyn HttpClient>, cx: &mut App) {
cx.set_global(GlobalCopilotChat(copilot_chat));
}
fn copilot_chat_config_dir() -> &'static PathBuf {
pub fn copilot_chat_config_dir() -> &'static PathBuf {
static COPILOT_CHAT_CONFIG_DIR: OnceLock<PathBuf> = OnceLock::new();
COPILOT_CHAT_CONFIG_DIR.get_or_init(|| {
@@ -237,27 +238,18 @@ impl CopilotChat {
}
pub fn new(fs: Arc<dyn Fs>, client: Arc<dyn HttpClient>, cx: &App) -> Self {
let config_paths = copilot_chat_config_paths();
let resolve_config_path = {
let fs = fs.clone();
async move {
for config_path in config_paths.iter() {
if fs.metadata(config_path).await.is_ok_and(|v| v.is_some()) {
return config_path.clone();
}
}
config_paths[0].clone()
}
};
let config_paths: HashSet<PathBuf> = copilot_chat_config_paths().into_iter().collect();
let dir_path = copilot_chat_config_dir();
cx.spawn(|cx| async move {
let config_file = resolve_config_path.await;
let mut config_file_rx = watch_config_file(cx.background_executor(), fs, config_file);
while let Some(contents) = config_file_rx.next().await {
let mut parent_watch_rx = watch_config_dir(
cx.background_executor(),
fs.clone(),
dir_path.clone(),
config_paths,
);
while let Some(contents) = parent_watch_rx.next().await {
let oauth_token = extract_oauth_token(contents);
cx.update(|cx| {
if let Some(this) = Self::global(cx).as_ref() {
this.update(cx, |this, cx| {

View File

@@ -311,7 +311,10 @@ impl ProjectDiagnosticsEditor {
cx: &mut Context<Workspace>,
) {
if let Some(existing) = workspace.item_of_type::<ProjectDiagnosticsEditor>(cx) {
workspace.activate_item(&existing, true, true, window, cx);
let is_active = workspace
.active_item(cx)
.is_some_and(|item| item.item_id() == existing.item_id());
workspace.activate_item(&existing, true, !is_active, window, cx);
} else {
let workspace_handle = cx.entity().downgrade();

View File

@@ -500,7 +500,7 @@ impl CompletionsMenu {
highlight.font_weight = None;
if completion
.source
.lsp_completion()
.lsp_completion(false)
.and_then(|lsp_completion| lsp_completion.deprecated)
.unwrap_or(false)
{
@@ -711,10 +711,12 @@ impl CompletionsMenu {
let completion = &completions[mat.candidate_id];
let sort_key = completion.sort_key();
let sort_text = completion
.source
.lsp_completion()
.and_then(|lsp_completion| lsp_completion.sort_text.as_deref());
let sort_text =
if let CompletionSource::Lsp { lsp_completion, .. } = &completion.source {
lsp_completion.sort_text.as_deref()
} else {
None
};
let score = Reverse(OrderedFloat(mat.score));
if mat.score >= 0.2 {

View File

@@ -14303,6 +14303,13 @@ impl Editor {
EditorSettings::override_global(editor_settings, cx);
}
pub fn line_numbers_enabled(&self, cx: &App) -> bool {
if let Some(show_line_numbers) = self.show_line_numbers {
return show_line_numbers;
}
EditorSettings::get_global(cx).gutter.line_numbers
}
pub fn should_use_relative_line_numbers(&self, cx: &mut App) -> bool {
self.use_relative_line_numbers
.unwrap_or(EditorSettings::get_global(cx).relative_line_numbers)
@@ -17017,6 +17024,7 @@ fn snippet_completions(
sort_text: Some(char::MAX.to_string()),
..lsp::CompletionItem::default()
}),
lsp_defaults: None,
},
label: CodeLabel {
text: matching_prefix.clone(),

View File

@@ -12334,24 +12334,6 @@ async fn test_completions_default_resolve_data_handling(cx: &mut TestAppContext)
},
};
let item_0_out = lsp::CompletionItem {
commit_characters: Some(default_commit_characters.clone()),
insert_text_format: Some(default_insert_text_format),
..item_0
};
let items_out = iter::once(item_0_out)
.chain(items[1..].iter().map(|item| lsp::CompletionItem {
commit_characters: Some(default_commit_characters.clone()),
data: Some(default_data.clone()),
insert_text_mode: Some(default_insert_text_mode),
text_edit: Some(lsp::CompletionTextEdit::Edit(lsp::TextEdit {
range: default_edit_range,
new_text: item.label.clone(),
})),
..item.clone()
}))
.collect::<Vec<lsp::CompletionItem>>();
let mut cx = EditorLspTestContext::new_rust(
lsp::ServerCapabilities {
completion_provider: Some(lsp::CompletionOptions {
@@ -12370,10 +12352,11 @@ async fn test_completions_default_resolve_data_handling(cx: &mut TestAppContext)
let completion_data = default_data.clone();
let completion_characters = default_commit_characters.clone();
let completion_items = items.clone();
cx.handle_request::<lsp::request::Completion, _, _>(move |_, _, _| {
let default_data = completion_data.clone();
let default_commit_characters = completion_characters.clone();
let items = items.clone();
let items = completion_items.clone();
async move {
Ok(Some(lsp::CompletionResponse::List(lsp::CompletionList {
items,
@@ -12422,7 +12405,7 @@ async fn test_completions_default_resolve_data_handling(cx: &mut TestAppContext)
.iter()
.map(|mat| mat.string.clone())
.collect::<Vec<String>>(),
items_out
items
.iter()
.map(|completion| completion.label.clone())
.collect::<Vec<String>>()
@@ -12435,14 +12418,18 @@ async fn test_completions_default_resolve_data_handling(cx: &mut TestAppContext)
// with 4 from the end.
assert_eq!(
*resolved_items.lock(),
[
&items_out[0..16],
&items_out[items_out.len() - 4..items_out.len()]
]
.concat()
.iter()
.cloned()
.collect::<Vec<lsp::CompletionItem>>()
[&items[0..16], &items[items.len() - 4..items.len()]]
.concat()
.iter()
.cloned()
.map(|mut item| {
if item.data.is_none() {
item.data = Some(default_data.clone());
}
item
})
.collect::<Vec<lsp::CompletionItem>>(),
"Items sent for resolve should be unchanged modulo resolve `data` filled with default if missing"
);
resolved_items.lock().clear();
@@ -12453,9 +12440,15 @@ async fn test_completions_default_resolve_data_handling(cx: &mut TestAppContext)
// Completions that have already been resolved are skipped.
assert_eq!(
*resolved_items.lock(),
items_out[items_out.len() - 16..items_out.len() - 4]
items[items.len() - 16..items.len() - 4]
.iter()
.cloned()
.map(|mut item| {
if item.data.is_none() {
item.data = Some(default_data.clone());
}
item
})
.collect::<Vec<lsp::CompletionItem>>()
);
resolved_items.lock().clear();

View File

@@ -1,6 +1,7 @@
use crate::askpass_modal::AskPassModal;
use crate::commit_modal::CommitModal;
use crate::git_panel_settings::StatusStyle;
use crate::project_diff::Diff;
use crate::remote_output_toast::{RemoteAction, RemoteOutputToast};
use crate::repository_selector::filtered_repository_entries;
use crate::{branch_picker, render_remote_button};
@@ -40,7 +41,8 @@ use language_model::{
use menu::{Confirm, SecondaryConfirm, SelectFirst, SelectLast, SelectNext, SelectPrevious};
use multi_buffer::ExcerptInfo;
use panel::{
panel_editor_container, panel_editor_style, panel_filled_button, panel_icon_button, PanelHeader,
panel_button, panel_editor_container, panel_editor_style, panel_filled_button,
panel_icon_button, PanelHeader,
};
use project::{
git::{GitEvent, Repository},
@@ -197,6 +199,21 @@ pub struct GitStatusEntry {
pub(crate) staging: StageStatus,
}
impl GitStatusEntry {
fn display_name(&self) -> String {
self.worktree_path
.file_name()
.map(|name| name.to_string_lossy().into_owned())
.unwrap_or_else(|| self.worktree_path.to_string_lossy().into_owned())
}
fn parent_dir(&self) -> Option<String> {
self.worktree_path
.parent()
.map(|parent| parent.to_string_lossy().into_owned())
}
}
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum TargetStatus {
Staged,
@@ -231,13 +248,16 @@ pub struct GitPanel {
fs: Arc<dyn Fs>,
hide_scrollbar_task: Option<Task<()>>,
new_count: usize,
entry_count: usize,
new_staged_count: usize,
pending: Vec<PendingOperation>,
pending_commit: Option<Task<()>>,
pending_serialization: Task<Option<()>>,
pub(crate) project: Entity<Project>,
scroll_handle: UniformListScrollHandle,
scrollbar_state: ScrollbarState,
vertical_scrollbar_state: ScrollbarState,
horizontal_scrollbar_state: ScrollbarState,
max_width_item_index: Option<usize>,
selected_entry: Option<usize>,
marked_entries: Vec<usize>,
show_scrollbar: bool,
@@ -344,7 +364,9 @@ impl GitPanel {
)
.detach();
let scrollbar_state =
let vertical_scrollbar_state =
ScrollbarState::new(scroll_handle.clone()).parent_entity(&cx.entity());
let horizontal_scrollbar_state =
ScrollbarState::new(scroll_handle.clone()).parent_entity(&cx.entity());
let mut git_panel = Self {
@@ -370,7 +392,9 @@ impl GitPanel {
single_tracked_entry: None,
project,
scroll_handle,
scrollbar_state,
vertical_scrollbar_state,
horizontal_scrollbar_state,
max_width_item_index: None,
selected_entry: None,
marked_entries: Vec::new(),
show_scrollbar: false,
@@ -381,6 +405,7 @@ impl GitPanel {
context_menu: None,
workspace,
modal_open: false,
entry_count: 0,
};
git_panel.schedule_update(false, window, cx);
git_panel.show_scrollbar = git_panel.should_show_scrollbar(cx);
@@ -1078,7 +1103,7 @@ impl GitPanel {
});
}
fn stage_all(&mut self, _: &StageAll, _window: &mut Window, cx: &mut Context<Self>) {
pub fn stage_all(&mut self, _: &StageAll, _window: &mut Window, cx: &mut Context<Self>) {
let entries = self
.entries
.iter()
@@ -1089,7 +1114,7 @@ impl GitPanel {
self.change_file_stage(true, entries, cx);
}
fn unstage_all(&mut self, _: &UnstageAll, _window: &mut Window, cx: &mut Context<Self>) {
pub fn unstage_all(&mut self, _: &UnstageAll, _window: &mut Window, cx: &mut Context<Self>) {
let entries = self
.entries
.iter()
@@ -1453,6 +1478,10 @@ impl GitPanel {
/// Generates a commit message using an LLM.
pub fn generate_commit_message(&mut self, cx: &mut Context<Self>) {
if !self.can_commit() {
return;
}
let model = match current_language_model(cx) {
Some(value) => value,
None => return,
@@ -1987,6 +2016,7 @@ impl GitPanel {
let mut conflict_entries = Vec::new();
let mut last_staged = None;
let mut staged_count = 0;
let mut max_width_item: Option<(RepoPath, usize)> = None;
let Some(repo) = self.active_repository.as_ref() else {
// Just clear entries if no repository is active.
@@ -2032,6 +2062,21 @@ impl GitPanel {
last_staged = Some(entry.clone());
}
let width_estimate = Self::item_width_estimate(
entry.parent_dir().map(|s| s.len()).unwrap_or(0),
entry.display_name().len(),
);
match max_width_item.as_mut() {
Some((repo_path, estimate)) => {
if width_estimate > *estimate {
*repo_path = entry.repo_path.clone();
*estimate = width_estimate;
}
}
None => max_width_item = Some((entry.repo_path.clone(), width_estimate)),
}
if is_conflict {
conflict_entries.push(entry);
} else if is_new {
@@ -2104,6 +2149,15 @@ impl GitPanel {
.extend(new_entries.into_iter().map(GitListEntry::GitStatusEntry));
}
if let Some((repo_path, _)) = max_width_item {
self.max_width_item_index = self.entries.iter().position(|entry| match entry {
GitListEntry::GitStatusEntry(git_status_entry) => {
git_status_entry.repo_path == repo_path
}
GitListEntry::Header(_) => false,
});
}
self.update_counts(repo);
self.select_first_entry_if_none(cx);
@@ -2133,10 +2187,12 @@ impl GitPanel {
self.tracked_count = 0;
self.new_staged_count = 0;
self.tracked_staged_count = 0;
self.entry_count = 0;
for entry in &self.entries {
let Some(status_entry) = entry.status_entry() else {
continue;
};
self.entry_count += 1;
if repo.has_conflict(&status_entry.repo_path) {
self.conflicted_count += 1;
if self.entry_staging(status_entry).has_staged() {
@@ -2263,6 +2319,12 @@ impl GitPanel {
self.has_staged_changes()
}
// eventually we'll need to take depth into account here
// if we add a tree view
fn item_width_estimate(path: usize, file_name: usize) -> usize {
path + file_name
}
pub(crate) fn render_generate_commit_message_button(
&self,
cx: &Context<Self>,
@@ -2291,14 +2353,28 @@ impl GitPanel {
.into_any_element();
}
let can_commit = self.can_commit();
let editor_focus_handle = self.commit_editor.focus_handle(cx);
IconButton::new("generate-commit-message", IconName::AiEdit)
.shape(ui::IconButtonShape::Square)
.icon_color(Color::Muted)
.tooltip(Tooltip::for_action_title_in(
"Generate Commit Message",
&git::GenerateCommitMessage,
&self.commit_editor.focus_handle(cx),
))
.tooltip(move |window, cx| {
if can_commit {
Tooltip::for_action_in(
"Generate Commit Message",
&git::GenerateCommitMessage,
&editor_focus_handle,
window,
cx,
)
} else {
Tooltip::simple(
"You must have either staged changes or tracked files to generate a commit message",
cx,
)
}
})
.disabled(!can_commit)
.on_click(cx.listener(move |this, _event, _window, cx| {
this.generate_commit_message(cx);
}))
@@ -2384,6 +2460,45 @@ impl GitPanel {
})
}
fn render_panel_header(&self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
let text;
let action;
let tooltip;
if self.total_staged_count() == self.entry_count {
text = "Unstage All";
action = git::UnstageAll.boxed_clone();
tooltip = "git reset";
} else {
text = "Stage All";
action = git::StageAll.boxed_clone();
tooltip = "git add --all ."
}
self.panel_header_container(window, cx)
.px_2()
.child(
panel_button("Open Diff")
.color(Color::Muted)
.tooltip(Tooltip::for_action_title("Open diff", &Diff))
.on_click(|_, _, cx| {
cx.defer(|cx| {
cx.dispatch_action(&Diff);
})
}),
)
.child(div().flex_grow()) // spacer
.child(
panel_filled_button(text)
.tooltip(Tooltip::for_action_title(tooltip, action.as_ref()))
.on_click(move |_, _, cx| {
let action = action.boxed_clone();
cx.defer(move |cx| {
cx.dispatch_action(action.as_ref());
})
}),
)
}
pub fn render_footer(
&self,
window: &mut Window,
@@ -2526,15 +2641,14 @@ impl GitPanel {
.items_center()
.py_2()
.px(px(8.))
// .bg(cx.theme().colors().background)
// .border_t_1()
.border_color(cx.theme().colors().border)
.gap_1p5()
.child(
div()
.flex_grow()
.overflow_hidden()
.max_w(relative(0.6))
.items_center()
.max_w(relative(0.85))
.h_full()
.child(
Label::new(commit.subject.clone())
@@ -2598,12 +2712,12 @@ impl GitPanel {
)
}
fn render_scrollbar(&self, cx: &mut Context<Self>) -> Option<Stateful<Div>> {
fn render_vertical_scrollbar(&self, cx: &mut Context<Self>) -> Option<Stateful<Div>> {
let scroll_bar_style = self.show_scrollbar(cx);
let show_container = matches!(scroll_bar_style, ShowScrollbar::Always);
if !self.should_show_scrollbar(cx)
|| !(self.show_scrollbar || self.scrollbar_state.is_dragging())
|| !(self.show_scrollbar || self.vertical_scrollbar_state.is_dragging())
{
return None;
}
@@ -2632,7 +2746,7 @@ impl GitPanel {
.on_mouse_up(
MouseButton::Left,
cx.listener(|this, _, window, cx| {
if !this.scrollbar_state.is_dragging()
if !this.vertical_scrollbar_state.is_dragging()
&& !this.focus_handle.contains_focused(window, cx)
{
this.hide_scrollbar(window, cx);
@@ -2647,7 +2761,79 @@ impl GitPanel {
}))
.children(Scrollbar::vertical(
// percentage as f32..end_offset as f32,
self.scrollbar_state.clone(),
self.vertical_scrollbar_state.clone(),
)),
)
}
fn render_horizontal_scrollbar(&self, cx: &mut Context<Self>) -> Option<Stateful<Div>> {
let scroll_bar_style = self.show_scrollbar(cx);
let show_container = matches!(scroll_bar_style, ShowScrollbar::Always);
// if !self.should_show_scrollbar(cx)
// || !(self.show_scrollbar || self.horizontal_scrollbar_state.is_dragging())
// {
// return None;
// }
let scroll_handle = self.scroll_handle.0.borrow();
dbg!(scroll_handle.last_item_size);
// let longest_item_width = dbg!(scroll_handle.last_item_size)
// .filter(|size| dbg!(px(10.) * size.contents.width > size.item.width))?
// .contents
// .width
// .0 as f64;
// println!("Longest item width: {}", longest_item_width);
// if longest_item_width < scroll_handle.base_handle.bounds().size.width.0 as f64 {
// return None;
// }
Some(
div()
.id("git-panel-horizontal-scroll")
.occlude()
.flex_none()
.w_full()
.cursor_default()
.absolute()
.bottom_1()
.left_1()
.right_1()
.h(px(32.))
// .when(show_container, |this| this.pt_1().pb_1p5())
// .when(!show_container, |this| {
// this.bottom_1().left_1().right_1().h(px(32.))
// })
.on_mouse_move(cx.listener(|_, _, _, cx| {
cx.notify();
cx.stop_propagation()
}))
.on_hover(|_, _, cx| {
cx.stop_propagation();
})
.on_any_mouse_down(|_, _, cx| {
cx.stop_propagation();
})
.on_mouse_up(
MouseButton::Left,
cx.listener(|this, _, window, cx| {
if !this.horizontal_scrollbar_state.is_dragging()
&& !this.focus_handle.contains_focused(window, cx)
{
this.hide_scrollbar(window, cx);
cx.notify();
}
cx.stop_propagation();
}),
)
.on_scroll_wheel(cx.listener(|_, _, _, cx| {
cx.notify();
}))
.children(Scrollbar::horizontal(
// percentage as f32..end_offset as f32,
self.horizontal_scrollbar_state.clone(),
)),
)
}
@@ -2704,8 +2890,10 @@ impl GitPanel {
let entry_count = self.entries.len();
h_flex()
// .debug_below()
.flex_1()
.size_full()
.flex_grow()
.relative()
.overflow_hidden()
.child(
uniform_list(cx.entity().clone(), "entries", entry_count, {
@@ -2740,8 +2928,10 @@ impl GitPanel {
}
})
.size_full()
.flex_grow()
.with_sizing_behavior(ListSizingBehavior::Auto)
.with_horizontal_sizing_behavior(ListHorizontalSizingBehavior::Unconstrained)
.with_width_from_item(self.max_width_item_index)
.track_scroll(self.scroll_handle.clone()),
)
.on_mouse_down(
@@ -2750,7 +2940,8 @@ impl GitPanel {
this.deploy_panel_context_menu(event.position, window, cx)
}),
)
.children(self.render_scrollbar(cx))
// .children(self.render_vertical_scrollbar(cx))
.children(self.render_horizontal_scrollbar(cx))
}
fn entry_label(&self, label: impl Into<SharedString>, color: Color) -> Label {
@@ -2876,17 +3067,16 @@ impl GitPanel {
window: &Window,
cx: &Context<Self>,
) -> AnyElement {
let display_name = entry
.worktree_path
.file_name()
.map(|name| name.to_string_lossy().into_owned())
.unwrap_or_else(|| entry.worktree_path.to_string_lossy().into_owned());
let display_name = entry.display_name();
let worktree_path = entry.worktree_path.clone();
let selected = self.selected_entry == Some(ix);
let marked = self.marked_entries.contains(&ix);
let status_style = GitPanelSettings::get_global(cx).status_style;
let status = entry.status;
let modifiers = self.current_modifiers;
let shift_held = modifiers.shift;
let has_conflict = status.is_conflicted();
let is_modified = status.is_modified();
let is_deleted = status.is_deleted();
@@ -2970,9 +3160,8 @@ impl GitPanel {
el.border_color(cx.theme().colors().border_focused)
})
.px(rems(0.75)) // ~12px
.overflow_hidden()
.flex_none()
.gap(DynamicSpacing::Base04.rems(cx))
// .flex_none()
.gap_1p5()
.bg(base_bg)
.hover(|this| this.bg(hover_bg))
.active(|this| this.bg(active_bg))
@@ -3017,6 +3206,7 @@ impl GitPanel {
.flex_none()
.occlude()
.cursor_pointer()
.ml_neg_0p5()
.child(
Checkbox::new(checkbox_id, is_staged)
.disabled(!has_write_access)
@@ -3038,26 +3228,44 @@ impl GitPanel {
})
})
.tooltip(move |window, cx| {
let tooltip_name = if entry_staging.is_fully_staged() {
"Unstage"
let is_staged = entry_staging.is_fully_staged();
let action = if is_staged { "Unstage" } else { "Stage" };
let tooltip_name = if shift_held {
format!("{} section", action)
} else {
"Stage"
action.to_string()
};
Tooltip::for_action(tooltip_name, &ToggleStaged, window, cx)
let meta = if shift_held {
format!(
"Release shift to {} single entry",
action.to_lowercase()
)
} else {
format!("Shift click to {} section", action.to_lowercase())
};
Tooltip::with_meta(
tooltip_name,
Some(&ToggleStaged),
meta,
window,
cx,
)
}),
),
)
.child(git_status_icon(status, cx))
.child(git_status_icon(status))
.child(
h_flex()
.items_center()
.overflow_hidden()
.when_some(worktree_path.parent(), |this, parent| {
let parent_str = parent.to_string_lossy();
if !parent_str.is_empty() {
.flex_1()
// .overflow_hidden()
.when_some(entry.parent_dir(), |this, parent| {
if !parent.is_empty() {
this.child(
self.entry_label(format!("{}/", parent_str), path_color)
self.entry_label(format!("{}/", parent), path_color)
.when(status.is_deleted(), |this| this.strikethrough()),
)
} else {
@@ -3148,6 +3356,7 @@ impl Render for GitPanel {
.child(
v_flex()
.size_full()
.child(self.render_panel_header(window, cx))
.map(|this| {
if has_entries {
this.child(self.render_entries(has_write_access, window, cx))
@@ -3455,7 +3664,11 @@ impl RenderOnce for PanelRepoFooter {
div().child(
Icon::new(IconName::GitBranchSmall)
.size(IconSize::Small)
.color(Color::Muted),
.color(if single_repo {
Color::Disabled
} else {
Color::Muted
}),
),
)
.child(repo_selector)

View File

@@ -1,13 +1,13 @@
use ::settings::Settings;
use git::{
repository::{Branch, Upstream, UpstreamTracking, UpstreamTrackingStatus},
status::FileStatus,
status::{FileStatus, StatusCode, UnmergedStatus, UnmergedStatusCode},
};
use git_panel_settings::GitPanelSettings;
use gpui::{App, Entity, FocusHandle};
use project::Project;
use project_diff::ProjectDiff;
use ui::{ActiveTheme, Color, Icon, IconName, IntoElement, SharedString};
use ui::prelude::*;
use workspace::Workspace;
mod askpass_modal;
@@ -64,34 +64,28 @@ pub fn init(cx: &mut App) {
panel.pull(window, cx);
});
});
workspace.register_action(|workspace, action: &git::StageAll, window, cx| {
let Some(panel) = workspace.panel::<git_panel::GitPanel>(cx) else {
return;
};
panel.update(cx, |panel, cx| {
panel.stage_all(action, window, cx);
});
});
workspace.register_action(|workspace, action: &git::UnstageAll, window, cx| {
let Some(panel) = workspace.panel::<git_panel::GitPanel>(cx) else {
return;
};
panel.update(cx, |panel, cx| {
panel.unstage_all(action, window, cx);
});
});
})
.detach();
}
// TODO: Add updated status colors to theme
pub fn git_status_icon(status: FileStatus, cx: &App) -> impl IntoElement {
let (icon_name, color) = if status.is_conflicted() {
(
IconName::Warning,
cx.theme().colors().version_control_conflict,
)
} else if status.is_deleted() {
(
IconName::SquareMinus,
cx.theme().colors().version_control_deleted,
)
} else if status.is_modified() {
(
IconName::SquareDot,
cx.theme().colors().version_control_modified,
)
} else {
(
IconName::SquarePlus,
cx.theme().colors().version_control_added,
)
};
Icon::new(icon_name).color(Color::Custom(color))
pub fn git_status_icon(status: FileStatus) -> impl IntoElement {
GitStatusIcon::new(status)
}
fn can_push_and_pull(project: &Entity<Project>, cx: &App) -> bool {
@@ -433,3 +427,79 @@ mod remote_button {
}
}
}
#[derive(IntoElement, IntoComponent)]
#[component(scope = "Version Control")]
pub struct GitStatusIcon {
status: FileStatus,
}
impl GitStatusIcon {
pub fn new(status: FileStatus) -> Self {
Self { status }
}
}
impl RenderOnce for GitStatusIcon {
fn render(self, _window: &mut ui::Window, cx: &mut App) -> impl IntoElement {
let status = self.status;
let (icon_name, color) = if status.is_conflicted() {
(
IconName::Warning,
cx.theme().colors().version_control_conflict,
)
} else if status.is_deleted() {
(
IconName::SquareMinus,
cx.theme().colors().version_control_deleted,
)
} else if status.is_modified() {
(
IconName::SquareDot,
cx.theme().colors().version_control_modified,
)
} else {
(
IconName::SquarePlus,
cx.theme().colors().version_control_added,
)
};
Icon::new(icon_name).color(Color::Custom(color))
}
}
// View this component preview using `workspace: open component-preview`
impl ComponentPreview for GitStatusIcon {
fn preview(_window: &mut Window, _cx: &mut App) -> AnyElement {
fn tracked_file_status(code: StatusCode) -> FileStatus {
FileStatus::Tracked(git::status::TrackedStatus {
index_status: code,
worktree_status: code,
})
}
let modified = tracked_file_status(StatusCode::Modified);
let added = tracked_file_status(StatusCode::Added);
let deleted = tracked_file_status(StatusCode::Deleted);
let conflict = UnmergedStatus {
first_head: UnmergedStatusCode::Updated,
second_head: UnmergedStatusCode::Updated,
}
.into();
v_flex()
.gap_6()
.children(vec![example_group(vec![
single_example("Modified", GitStatusIcon::new(modified).into_any_element()),
single_example("Added", GitStatusIcon::new(added).into_any_element()),
single_example("Deleted", GitStatusIcon::new(deleted).into_any_element()),
single_example(
"Conflicted",
GitStatusIcon::new(conflict).into_any_element(),
),
])])
.into_any_element()
}
}

View File

@@ -1527,6 +1527,7 @@ impl Buffer {
}
fn did_finish_parsing(&mut self, syntax_snapshot: SyntaxSnapshot, cx: &mut Context<Self>) {
self.was_changed();
self.non_text_state_update_count += 1;
self.syntax_map.lock().did_parse(syntax_snapshot);
self.request_autoindent(cx);
@@ -1968,7 +1969,12 @@ impl Buffer {
/// This allows downstream code to check if the buffer's text has changed without
/// waiting for an effect cycle, which would be required if using events.
pub fn record_changes(&mut self, bit: rc::Weak<Cell<bool>>) {
self.change_bits.push(bit);
if let Err(ix) = self
.change_bits
.binary_search_by_key(&rc::Weak::as_ptr(&bit), rc::Weak::as_ptr)
{
self.change_bits.insert(ix, bit);
}
}
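For context, a minimal sketch of how a caller might use this change-bit API (illustrative only; the `Entity<Buffer>`/`TestAppContext` plumbing is assumed from the test patterns elsewhere in this changeset, not from this hunk):
use std::cell::Cell;
use std::rc::Rc;
fn watch_for_edits(buffer: &gpui::Entity<language::Buffer>, cx: &mut gpui::TestAppContext) -> Rc<Cell<bool>> {
    let text_changed = Rc::new(Cell::new(false));
    buffer.update(cx, |buffer, _cx| {
        // The buffer flips this bit once its text changes, so callers can poll
        // it without waiting for an effect cycle or subscribing to events.
        buffer.record_changes(Rc::downgrade(&text_changed));
    });
    text_changed
}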
fn was_changed(&mut self) {
@@ -2273,12 +2279,13 @@ impl Buffer {
}
fn did_edit(&mut self, old_version: &clock::Global, was_dirty: bool, cx: &mut Context<Self>) {
self.was_changed();
if self.edits_since::<usize>(old_version).next().is_none() {
return;
}
self.reparse(cx);
cx.emit(BufferEvent::Edited);
if was_dirty != self.is_dirty() {
cx.emit(BufferEvent::DirtyChanged);
@@ -2390,7 +2397,6 @@ impl Buffer {
}
self.text.apply_ops(buffer_ops);
self.deferred_ops.insert(deferred_ops);
self.was_changed();
self.flush_deferred_ops(cx);
self.did_edit(&old_version, was_dirty, cx);
// Notify independently of whether the buffer was edited as the operations could include a

View File

@@ -11,8 +11,8 @@ use futures::future::BoxFuture;
use futures::stream::BoxStream;
use futures::{FutureExt, StreamExt};
use gpui::{
percentage, svg, Animation, AnimationExt, AnyView, App, AsyncApp, Entity, Render, Subscription,
Task, Transformation,
percentage, svg, Action, Animation, AnimationExt, AnyView, App, AsyncApp, Entity, Render,
Subscription, Task, Transformation,
};
use language_model::{
AuthenticateError, LanguageModel, LanguageModelCompletionEvent, LanguageModelId,
@@ -337,9 +337,20 @@ impl Render for ConfigurationView {
if self.state.read(cx).is_authenticated(cx) {
const LABEL: &str = "Authorized.";
h_flex()
.gap_1()
.child(Icon::new(IconName::Check).color(Color::Success))
.child(Label::new(LABEL))
.justify_between()
.child(
h_flex()
.gap_1()
.child(Icon::new(IconName::Check).color(Color::Success))
.child(Label::new(LABEL)),
)
.child(
Button::new("sign_out", "Sign Out")
.style(ui::ButtonStyle::Filled)
.on_click(|_, window, cx| {
window.dispatch_action(copilot::SignOut.boxed_clone(), cx);
}),
)
} else {
let loading_icon = svg()
.size_8()

View File

@@ -62,8 +62,8 @@
; Literals
(this) @keyword
(super) @keyword
(this) @variable.special
(super) @variable.special
[
(null)

View File

@@ -62,8 +62,8 @@
; Literals
(this) @keyword
(super) @keyword
(this) @variable.special
(super) @variable.special
[
(null)

View File

@@ -80,8 +80,8 @@
; Literals
(this) @keyword
(super) @keyword
(this) @variable.special
(super) @variable.special
[
(null)

View File

@@ -18,8 +18,6 @@ pub trait PanelHeader: workspace::Panel {
.w_full()
.px_1()
.flex_none()
.border_b_1()
.border_color(cx.theme().colors().border)
}
}

View File

@@ -1847,7 +1847,6 @@ impl LspCommand for GetCompletions {
let mut completions = if let Some(completions) = completions {
match completions {
lsp::CompletionResponse::Array(completions) => completions,
lsp::CompletionResponse::List(mut list) => {
let items = std::mem::take(&mut list.items);
response_list = Some(list);
@@ -1855,74 +1854,19 @@ impl LspCommand for GetCompletions {
}
}
} else {
Default::default()
Vec::new()
};
let language_server_adapter = lsp_store
.update(&mut cx, |lsp_store, _| {
lsp_store.language_server_adapter_for_id(server_id)
})?
.ok_or_else(|| anyhow!("no such language server"))?;
.with_context(|| format!("no language server with id {server_id}"))?;
let item_defaults = response_list
let lsp_defaults = response_list
.as_ref()
.and_then(|list| list.item_defaults.as_ref());
if let Some(item_defaults) = item_defaults {
let default_data = item_defaults.data.as_ref();
let default_commit_characters = item_defaults.commit_characters.as_ref();
let default_edit_range = item_defaults.edit_range.as_ref();
let default_insert_text_format = item_defaults.insert_text_format.as_ref();
let default_insert_text_mode = item_defaults.insert_text_mode.as_ref();
if default_data.is_some()
|| default_commit_characters.is_some()
|| default_edit_range.is_some()
|| default_insert_text_format.is_some()
|| default_insert_text_mode.is_some()
{
for item in completions.iter_mut() {
if item.data.is_none() && default_data.is_some() {
item.data = default_data.cloned()
}
if item.commit_characters.is_none() && default_commit_characters.is_some() {
item.commit_characters = default_commit_characters.cloned()
}
if item.text_edit.is_none() {
if let Some(default_edit_range) = default_edit_range {
match default_edit_range {
CompletionListItemDefaultsEditRange::Range(range) => {
item.text_edit =
Some(lsp::CompletionTextEdit::Edit(lsp::TextEdit {
range: *range,
new_text: item.label.clone(),
}))
}
CompletionListItemDefaultsEditRange::InsertAndReplace {
insert,
replace,
} => {
item.text_edit =
Some(lsp::CompletionTextEdit::InsertAndReplace(
lsp::InsertReplaceEdit {
new_text: item.label.clone(),
insert: *insert,
replace: *replace,
},
))
}
}
}
}
if item.insert_text_format.is_none() && default_insert_text_format.is_some() {
item.insert_text_format = default_insert_text_format.cloned()
}
if item.insert_text_mode.is_none() && default_insert_text_mode.is_some() {
item.insert_text_mode = default_insert_text_mode.cloned()
}
}
}
}
.and_then(|list| list.item_defaults.clone())
.map(Arc::new);
let mut completion_edits = Vec::new();
buffer.update(&mut cx, |buffer, _cx| {
@@ -1930,12 +1874,34 @@ impl LspCommand for GetCompletions {
let clipped_position = buffer.clip_point_utf16(Unclipped(self.position), Bias::Left);
let mut range_for_token = None;
completions.retain_mut(|lsp_completion| {
let edit = match lsp_completion.text_edit.as_ref() {
completions.retain(|lsp_completion| {
let lsp_edit = lsp_completion.text_edit.clone().or_else(|| {
let default_text_edit = lsp_defaults.as_deref()?.edit_range.as_ref()?;
match default_text_edit {
CompletionListItemDefaultsEditRange::Range(range) => {
Some(lsp::CompletionTextEdit::Edit(lsp::TextEdit {
range: *range,
new_text: lsp_completion.label.clone(),
}))
}
CompletionListItemDefaultsEditRange::InsertAndReplace {
insert,
replace,
} => Some(lsp::CompletionTextEdit::InsertAndReplace(
lsp::InsertReplaceEdit {
new_text: lsp_completion.label.clone(),
insert: *insert,
replace: *replace,
},
)),
}
});
let edit = match lsp_edit {
// If the language server provides a range to overwrite, then
// check that the range is valid.
Some(completion_text_edit) => {
match parse_completion_text_edit(completion_text_edit, &snapshot) {
match parse_completion_text_edit(&completion_text_edit, &snapshot) {
Some(edit) => edit,
None => return false,
}
@@ -1949,14 +1915,15 @@ impl LspCommand for GetCompletions {
return false;
}
let default_edit_range = response_list
.as_ref()
.and_then(|list| list.item_defaults.as_ref())
.and_then(|defaults| defaults.edit_range.as_ref())
.and_then(|range| match range {
CompletionListItemDefaultsEditRange::Range(r) => Some(r),
_ => None,
});
let default_edit_range = lsp_defaults.as_ref().and_then(|lsp_defaults| {
lsp_defaults
.edit_range
.as_ref()
.and_then(|range| match range {
CompletionListItemDefaultsEditRange::Range(r) => Some(r),
_ => None,
})
});
let range = if let Some(range) = default_edit_range {
let range = range_from_lsp(*range);
@@ -2006,14 +1973,25 @@ impl LspCommand for GetCompletions {
Ok(completions
.into_iter()
.zip(completion_edits)
.map(|(lsp_completion, (old_range, mut new_text))| {
.map(|(mut lsp_completion, (old_range, mut new_text))| {
LineEnding::normalize(&mut new_text);
if lsp_completion.data.is_none() {
if let Some(default_data) = lsp_defaults
.as_ref()
.and_then(|item_defaults| item_defaults.data.clone())
{
// Servers (e.g. JDTLS) prefer unchanged completions when resolving the items later,
// so we do not insert the defaults here; `data` is needed for resolving, though, so it is an exception.
lsp_completion.data = Some(default_data);
}
}
CoreCompletion {
old_range,
new_text,
source: CompletionSource::Lsp {
server_id,
lsp_completion: Box::new(lsp_completion),
lsp_defaults: lsp_defaults.clone(),
resolved: false,
},
}

View File

@@ -49,10 +49,9 @@ use lsp::{
notification::DidRenameFiles, CodeActionKind, CompletionContext, DiagnosticSeverity,
DiagnosticTag, DidChangeWatchedFilesRegistrationOptions, Edit, FileOperationFilter,
FileOperationPatternKind, FileOperationRegistrationOptions, FileRename, FileSystemWatcher,
InsertTextFormat, LanguageServer, LanguageServerBinary, LanguageServerBinaryOptions,
LanguageServerId, LanguageServerName, LspRequestFuture, MessageActionItem, MessageType, OneOf,
RenameFilesParams, SymbolKind, TextEdit, WillRenameFiles, WorkDoneProgressCancelParams,
WorkspaceFolder,
LanguageServer, LanguageServerBinary, LanguageServerBinaryOptions, LanguageServerId,
LanguageServerName, LspRequestFuture, MessageActionItem, MessageType, OneOf, RenameFilesParams,
SymbolKind, TextEdit, WillRenameFiles, WorkDoneProgressCancelParams, WorkspaceFolder,
};
use node_runtime::read_package_installed_version;
use parking_lot::Mutex;
@@ -70,6 +69,7 @@ use smol::channel::Sender;
use snippet::Snippet;
use std::{
any::Any,
borrow::Cow,
cell::RefCell,
cmp::Ordering,
convert::TryInto,
@@ -4475,6 +4475,7 @@ impl LspStore {
completions: Rc<RefCell<Box<[Completion]>>>,
completion_index: usize,
) -> Result<()> {
let server_id = server.server_id();
let can_resolve = server
.capabilities()
.completion_provider
@@ -4491,19 +4492,24 @@ impl LspStore {
CompletionSource::Lsp {
lsp_completion,
resolved,
server_id: completion_server_id,
..
} => {
if *resolved {
return Ok(());
}
anyhow::ensure!(
server_id == *completion_server_id,
"server_id mismatch, querying completion resolve for {server_id} but completion server id is {completion_server_id}"
);
server.request::<lsp::request::ResolveCompletionItem>(*lsp_completion.clone())
}
CompletionSource::Custom => return Ok(()),
}
};
let completion_item = request.await?;
let resolved_completion = request.await?;
if let Some(text_edit) = completion_item.text_edit.as_ref() {
if let Some(text_edit) = resolved_completion.text_edit.as_ref() {
// Technically we don't have to parse the whole `text_edit`, since the only
// language server we currently use that does update `text_edit` in `completionItem/resolve`
// is `typescript-language-server` and they only update `text_edit.new_text`.
@@ -4520,24 +4526,26 @@ impl LspStore {
completion.old_range = old_range;
}
}
if completion_item.insert_text_format == Some(InsertTextFormat::SNIPPET) {
// vtsls might change the type of completion after resolution.
let mut completions = completions.borrow_mut();
let completion = &mut completions[completion_index];
if let Some(lsp_completion) = completion.source.lsp_completion_mut() {
if completion_item.insert_text_format != lsp_completion.insert_text_format {
lsp_completion.insert_text_format = completion_item.insert_text_format;
}
}
}
let mut completions = completions.borrow_mut();
let completion = &mut completions[completion_index];
completion.source = CompletionSource::Lsp {
lsp_completion: Box::new(completion_item),
resolved: true,
server_id: server.server_id(),
};
if let CompletionSource::Lsp {
lsp_completion,
resolved,
server_id: completion_server_id,
..
} = &mut completion.source
{
if *resolved {
return Ok(());
}
anyhow::ensure!(
server_id == *completion_server_id,
"server_id mismatch, applying completion resolve for {server_id} but completion server id is {completion_server_id}"
);
*lsp_completion = Box::new(resolved_completion);
*resolved = true;
}
Ok(())
}
@@ -4549,8 +4557,8 @@ impl LspStore {
) -> Result<()> {
let completion_item = completions.borrow()[completion_index]
.source
.lsp_completion()
.cloned();
.lsp_completion(true)
.map(Cow::into_owned);
if let Some(lsp_documentation) = completion_item
.as_ref()
.and_then(|completion_item| completion_item.documentation.clone())
@@ -4626,8 +4634,13 @@ impl LspStore {
CompletionSource::Lsp {
lsp_completion,
resolved,
server_id: completion_server_id,
..
} => {
anyhow::ensure!(
server_id == *completion_server_id,
"remote server_id mismatch, querying completion resolve for {server_id} but completion server id is {completion_server_id}"
);
if *resolved {
return Ok(());
}
@@ -4647,7 +4660,7 @@ impl LspStore {
.request(request)
.await
.context("completion documentation resolve proto request")?;
let lsp_completion = serde_json::from_slice(&response.lsp_completion)?;
let resolved_lsp_completion = serde_json::from_slice(&response.lsp_completion)?;
let documentation = if response.documentation.is_empty() {
CompletionDocumentation::Undocumented
@@ -4662,11 +4675,23 @@ impl LspStore {
let mut completions = completions.borrow_mut();
let completion = &mut completions[completion_index];
completion.documentation = Some(documentation);
completion.source = CompletionSource::Lsp {
server_id,
if let CompletionSource::Lsp {
lsp_completion,
resolved: true,
};
resolved,
server_id: completion_server_id,
lsp_defaults: _,
} = &mut completion.source
{
if *resolved {
return Ok(());
}
anyhow::ensure!(
server_id == *completion_server_id,
"remote server_id mismatch, applying completion resolve for {server_id} but completion server id is {completion_server_id}"
);
*lsp_completion = Box::new(resolved_lsp_completion);
*resolved = true;
}
let old_range = response
.old_start
@@ -4750,7 +4775,7 @@ impl LspStore {
let completion = completions.borrow()[completion_index].clone();
let additional_text_edits = completion
.source
.lsp_completion()
.lsp_completion(true)
.as_ref()
.and_then(|lsp_completion| lsp_completion.additional_text_edits.clone());
if let Some(edits) = additional_text_edits {
@@ -8153,21 +8178,26 @@ impl LspStore {
}
pub(crate) fn serialize_completion(completion: &CoreCompletion) -> proto::Completion {
let (source, server_id, lsp_completion, resolved) = match &completion.source {
let (source, server_id, lsp_completion, lsp_defaults, resolved) = match &completion.source {
CompletionSource::Lsp {
server_id,
lsp_completion,
lsp_defaults,
resolved,
} => (
proto::completion::Source::Lsp as i32,
server_id.0 as u64,
serde_json::to_vec(lsp_completion).unwrap(),
lsp_defaults
.as_deref()
.map(|lsp_defaults| serde_json::to_vec(lsp_defaults).unwrap()),
*resolved,
),
CompletionSource::Custom => (
proto::completion::Source::Custom as i32,
0,
Vec::new(),
None,
true,
),
};
@@ -8178,6 +8208,7 @@ impl LspStore {
new_text: completion.new_text.clone(),
server_id,
lsp_completion,
lsp_defaults,
resolved,
source,
}
@@ -8200,6 +8231,11 @@ impl LspStore {
Some(proto::completion::Source::Lsp) => CompletionSource::Lsp {
server_id: LanguageServerId::from_proto(completion.server_id),
lsp_completion: serde_json::from_slice(&completion.lsp_completion)?,
lsp_defaults: completion
.lsp_defaults
.as_deref()
.map(serde_json::from_slice)
.transpose()?,
resolved: completion.resolved,
},
_ => anyhow::bail!("Unexpected completion source {}", completion.source),
@@ -8288,8 +8324,8 @@ async fn populate_labels_for_completions(
let lsp_completions = new_completions
.iter()
.filter_map(|new_completion| {
if let CompletionSource::Lsp { lsp_completion, .. } = &new_completion.source {
Some(*lsp_completion.clone())
if let Some(lsp_completion) = new_completion.source.lsp_completion(true) {
Some(lsp_completion.into_owned())
} else {
None
}
@@ -8309,8 +8345,8 @@ async fn populate_labels_for_completions(
.fuse();
for completion in new_completions {
match &completion.source {
CompletionSource::Lsp { lsp_completion, .. } => {
match completion.source.lsp_completion(true) {
Some(lsp_completion) => {
let documentation = if let Some(docs) = lsp_completion.documentation.clone() {
Some(docs.into())
} else {
@@ -8328,9 +8364,9 @@ async fn populate_labels_for_completions(
new_text: completion.new_text,
source: completion.source,
confirm: None,
})
});
}
CompletionSource::Custom => {
None => {
let mut label = CodeLabel::plain(completion.new_text.clone(), None);
ensure_uniform_list_compatible_label(&mut label);
completions.push(Completion {
@@ -8340,7 +8376,7 @@ async fn populate_labels_for_completions(
new_text: completion.new_text,
source: completion.source,
confirm: None,
})
});
}
}
}

View File

@@ -382,6 +382,8 @@ pub enum CompletionSource {
server_id: LanguageServerId,
/// The raw completion provided by the language server.
lsp_completion: Box<lsp::CompletionItem>,
/// A set of defaults for this completion item.
lsp_defaults: Option<Arc<lsp::CompletionListItemDefaults>>,
/// Whether this completion has been resolved, to ensure it happens once per completion.
resolved: bool,
},
@@ -397,17 +399,76 @@ impl CompletionSource {
}
}
pub fn lsp_completion(&self) -> Option<&lsp::CompletionItem> {
if let Self::Lsp { lsp_completion, .. } = self {
Some(lsp_completion)
} else {
None
}
}
pub fn lsp_completion(&self, apply_defaults: bool) -> Option<Cow<lsp::CompletionItem>> {
if let Self::Lsp {
lsp_completion,
lsp_defaults,
..
} = self
{
if apply_defaults {
if let Some(lsp_defaults) = lsp_defaults {
let mut completion_with_defaults = *lsp_completion.clone();
let default_commit_characters = lsp_defaults.commit_characters.as_ref();
let default_edit_range = lsp_defaults.edit_range.as_ref();
let default_insert_text_format = lsp_defaults.insert_text_format.as_ref();
let default_insert_text_mode = lsp_defaults.insert_text_mode.as_ref();
fn lsp_completion_mut(&mut self) -> Option<&mut lsp::CompletionItem> {
if let Self::Lsp { lsp_completion, .. } = self {
Some(lsp_completion)
if default_commit_characters.is_some()
|| default_edit_range.is_some()
|| default_insert_text_format.is_some()
|| default_insert_text_mode.is_some()
{
if completion_with_defaults.commit_characters.is_none()
&& default_commit_characters.is_some()
{
completion_with_defaults.commit_characters =
default_commit_characters.cloned()
}
if completion_with_defaults.text_edit.is_none() {
match default_edit_range {
Some(lsp::CompletionListItemDefaultsEditRange::Range(range)) => {
completion_with_defaults.text_edit =
Some(lsp::CompletionTextEdit::Edit(lsp::TextEdit {
range: *range,
new_text: completion_with_defaults.label.clone(),
}))
}
Some(
lsp::CompletionListItemDefaultsEditRange::InsertAndReplace {
insert,
replace,
},
) => {
completion_with_defaults.text_edit =
Some(lsp::CompletionTextEdit::InsertAndReplace(
lsp::InsertReplaceEdit {
new_text: completion_with_defaults.label.clone(),
insert: *insert,
replace: *replace,
},
))
}
None => {}
}
}
if completion_with_defaults.insert_text_format.is_none()
&& default_insert_text_format.is_some()
{
completion_with_defaults.insert_text_format =
default_insert_text_format.cloned()
}
if completion_with_defaults.insert_text_mode.is_none()
&& default_insert_text_mode.is_some()
{
completion_with_defaults.insert_text_mode =
default_insert_text_mode.cloned()
}
}
return Some(Cow::Owned(completion_with_defaults));
}
}
Some(Cow::Borrowed(lsp_completion))
} else {
None
}
@@ -4640,7 +4701,8 @@ impl Completion {
const DEFAULT_KIND_KEY: usize = 2;
let kind_key = self
.source
.lsp_completion()
// `lsp::CompletionListItemDefaults` has no `kind` field
.lsp_completion(false)
.and_then(|lsp_completion| lsp_completion.kind)
.and_then(|lsp_completion_kind| match lsp_completion_kind {
lsp::CompletionItemKind::KEYWORD => Some(0),
@@ -4654,7 +4716,8 @@ impl Completion {
/// Whether this completion is a snippet.
pub fn is_snippet(&self) -> bool {
self.source
.lsp_completion()
// `lsp::CompletionListItemDefaults` has an `insert_text_format` field
.lsp_completion(true)
.map_or(false, |lsp_completion| {
lsp_completion.insert_text_format == Some(lsp::InsertTextFormat::SNIPPET)
})
@@ -4664,9 +4727,10 @@ impl Completion {
///
/// Will return `None` if this completion's kind is not [`CompletionItemKind::COLOR`].
pub fn color(&self) -> Option<Hsla> {
let lsp_completion = self.source.lsp_completion()?;
// `lsp::CompletionListItemDefaults` has no `kind` field
let lsp_completion = self.source.lsp_completion(false)?;
if lsp_completion.kind? == CompletionItemKind::COLOR {
return color_extractor::extract_color(lsp_completion);
return color_extractor::extract_color(&lsp_completion);
}
None
}

View File

@@ -1000,6 +1000,7 @@ message Completion {
bytes lsp_completion = 5;
bool resolved = 6;
Source source = 7;
optional bytes lsp_defaults = 8;
enum Source {
Custom = 0;

View File

@@ -1,713 +0,0 @@
/// Models will commonly generate POSIX shell one-liner commands which
/// they run via io.popen() in Lua. Instead of giving those shell command
/// strings to the operating system - which is a security risk, and
/// which can easily fail on Windows, since Windows doesn't do POSIX - we
/// parse the shell command ourselves and translate it into a sequence of
/// commands in our normal sandbox. Essentially, this is an extremely
/// minimalistic shell in which Lua popen() commands can execute.
///
/// Our shell supports:
/// - Basic commands and args
/// - The operators `|`, `&&`, `;`, `>`, `1>`, `2>`, `&>`, `>&`
///
/// The operators currently have to have whitespace around them because the
/// `shlex` crate we use to tokenize the strings does not treat operators
/// as word boundaries, even though shells do. Fortunately, LLMs consistently
/// generate spaces around these operators anyway.
use mlua::{Error, Result};
#[derive(Debug, Clone, PartialEq, Default)]
pub struct ShellCmd {
pub command: String,
pub args: Vec<String>,
pub stdout_redirect: Option<String>,
pub stderr_redirect: Option<String>,
}
#[derive(Debug, Clone, PartialEq)]
pub enum Operator {
/// The `|` shell operator (highest precedence)
Pipe,
/// The `&&` shell operator (medium precedence)
And,
/// The `;` shell operator (lowest precedence)
Semicolon,
}
impl Operator {
fn precedence(&self) -> u8 {
match self {
Operator::Pipe => 3,
Operator::And => 2,
Operator::Semicolon => 1,
}
}
}
#[derive(Debug, Clone, PartialEq)]
pub enum ShellAst {
Command(ShellCmd),
Operation {
operator: Operator,
left: Box<ShellAst>,
right: Box<ShellAst>,
},
}
impl ShellAst {
/// Parse a shell string and build an abstract syntax tree.
pub fn parse(string: impl AsRef<str>) -> Result<Self> {
let string = string.as_ref();
// Check for unsupported shell features
if string.contains('$')
|| string.contains('`')
|| string.contains('(')
|| string.contains(')')
|| string.contains('{')
|| string.contains('}')
{
return Err(Error::RuntimeError(
"Complex shell features (subshells, variables, backgrounding, etc.) are not available in this shell."
.to_string(),
));
}
let mut parser = ShellParser::new(string);
parser.parse_expression(0)
}
}
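// A minimal usage sketch, not part of the original module: it assumes only the
// `ShellAst`, `ShellCmd`, and `Operator` types defined above and shows the tree
// that `ShellAst::parse` produces for a pipeline with a stdout redirection.
#[allow(dead_code)]
fn example_parse_usage() -> Result<()> {
    let ast = ShellAst::parse("cat notes.txt | grep todo > hits.txt")?;
    if let ShellAst::Operation {
        operator,
        left,
        right,
    } = ast
    {
        assert_eq!(operator, Operator::Pipe);
        if let ShellAst::Command(cmd) = *left {
            // Left side of the pipe: `cat notes.txt`
            assert_eq!(cmd.command, "cat");
            assert_eq!(cmd.args, vec!["notes.txt".to_string()]);
        }
        if let ShellAst::Command(cmd) = *right {
            // Right side of the pipe, with its stdout redirected to `hits.txt`
            assert_eq!(cmd.command, "grep");
            assert_eq!(cmd.args, vec!["todo".to_string()]);
            assert_eq!(cmd.stdout_redirect, Some("hits.txt".to_string()));
        }
    }
    Ok(())
}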
enum Redirect {
Stdout,
Stderr,
Both,
}
struct ShellParser<'a> {
lexer: shlex::Shlex<'a>,
current_token: Option<String>,
}
impl<'a> ShellParser<'a> {
fn new(input: &'a str) -> Self {
let mut lexer = shlex::Shlex::new(input);
let current_token = lexer.next();
Self {
lexer,
current_token,
}
}
fn advance(&mut self) {
self.current_token = self.lexer.next();
}
fn peek(&self) -> Option<&str> {
self.current_token.as_deref()
}
fn parse_expression(&mut self, min_precedence: u8) -> Result<ShellAst> {
// Parse the first command or atom
let mut left = ShellAst::Command(self.parse_command()?);
// While we have operators with sufficient precedence, keep building the tree
loop {
let op = match self.parse_operator() {
Some(op) if op.precedence() >= min_precedence => op,
_ => break,
};
// Consume the operator token
self.advance();
// Special case for trailing semicolons - if we have no more tokens,
// we don't need to parse another command
if op == Operator::Semicolon && self.peek().is_none() {
break;
}
// Parse the right side with higher precedence
// For left-associative operators, we use op.precedence() + 1
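// Worked example: "a | b && c ; d" parses as Semicolon(And(Pipe(a, b), c), d),
// because `|` (3) binds tighter than `&&` (2), which binds tighter than `;` (1).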
let right = self.parse_expression(op.precedence() + 1)?;
// Build the operation node
left = ShellAst::Operation {
operator: op,
left: Box::new(left),
right: Box::new(right),
};
}
Ok(left)
}
fn parse_operator(&self) -> Option<Operator> {
match self.peek()? {
"|" => Some(Operator::Pipe),
"&&" => Some(Operator::And),
";" => Some(Operator::Semicolon),
_ => None,
}
}
fn handle_redirection(&mut self, cmd: &mut ShellCmd, redirect: Redirect) -> Result<()> {
self.advance(); // consume the redirection operator
let target = self.peek().ok_or_else(|| {
Error::RuntimeError("Missing redirection target in shell".to_string())
})?;
match redirect {
Redirect::Stdout => {
cmd.stdout_redirect = Some(target.to_string());
}
Redirect::Stderr => {
cmd.stderr_redirect = Some(target.to_string());
}
Redirect::Both => {
cmd.stdout_redirect = Some(target.to_string());
cmd.stderr_redirect = Some(target.to_string());
}
}
self.advance(); // consume the target
Ok(())
}
fn parse_command(&mut self) -> Result<ShellCmd> {
let mut cmd = ShellCmd::default();
// Process tokens until we hit an operator or end of input
loop {
let redirect;
match self.peek() {
Some(token) => {
match token {
"|" | "&&" | ";" => break, // These are operators, not part of the command
">" | "1>" => {
redirect = Some(Redirect::Stdout);
}
"2>" => {
redirect = Some(Redirect::Stderr);
}
"&>" | ">&" => {
redirect = Some(Redirect::Both);
}
"&" => {
// Reject ampersand as it's used for backgrounding processes
return Err(Error::RuntimeError(
"Background processes (using &) are not available in this shell."
.to_string(),
));
}
_ => {
redirect = None;
}
}
}
None => {
break; // We ran out of tokens; exit the loop.
}
}
// We do this separate conditional after the borrow from the peek()
// has expired, to avoid a borrow checker error.
match redirect {
Some(redirect) => {
self.handle_redirection(&mut cmd, redirect)?;
}
None => {
// It's either the command name or an argument
let mut token = self.current_token.take().unwrap();
self.advance();
// Handle trailing semicolons
let original_token_len = token.len();
while token.ends_with(';') {
token.pop();
}
let had_semicolon = token.len() != original_token_len;
if cmd.command.is_empty() {
cmd.command = token;
} else {
cmd.args.push(token);
}
if had_semicolon {
// Put the semicolon back as the next token, so after we break we parse it.
self.current_token = Some(";".to_string());
break;
}
}
}
}
if cmd.command.is_empty() {
return Err(Error::RuntimeError(
"Missing command to run in shell".to_string(),
));
}
Ok(cmd)
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_simple_command() {
// Basic command with no args or operators
let cmd = "ls";
let ast = ShellAst::parse(cmd).expect("parsing failed for {cmd:?}");
if let ShellAst::Command(shell_cmd) = ast {
assert_eq!(shell_cmd.command, "ls");
assert!(shell_cmd.args.is_empty());
assert_eq!(shell_cmd.stdout_redirect, None);
assert_eq!(shell_cmd.stderr_redirect, None);
} else {
panic!("Expected Command node");
}
}
#[test]
fn test_command_with_args() {
// Command with arguments
let cmd = "ls -la /home";
let ast = ShellAst::parse(cmd).expect("parsing failed for {cmd:?}");
if let ShellAst::Command(shell_cmd) = ast {
assert_eq!(shell_cmd.command, "ls");
assert_eq!(shell_cmd.args, vec!["-la".to_string(), "/home".to_string()]);
assert_eq!(shell_cmd.stdout_redirect, None);
assert_eq!(shell_cmd.stderr_redirect, None);
} else {
panic!("Expected Command node");
}
}
#[test]
fn test_simple_pipe() {
// Test pipe operator
let cmd = "ls -l | grep txt";
let ast = ShellAst::parse(cmd).expect("parsing failed for {cmd:?}");
if let ShellAst::Operation {
operator,
left,
right,
} = ast
{
assert_eq!(operator, Operator::Pipe);
if let ShellAst::Command(left_cmd) = *left {
assert_eq!(left_cmd.command, "ls");
assert_eq!(left_cmd.args, vec!["-l".to_string()]);
} else {
panic!("Expected Command node for left side");
}
if let ShellAst::Command(right_cmd) = *right {
assert_eq!(right_cmd.command, "grep");
assert_eq!(right_cmd.args, vec!["txt".to_string()]);
} else {
panic!("Expected Command node for right side");
}
} else {
panic!("Expected Operation node");
}
}
#[test]
fn test_simple_and() {
// Test && operator
let cmd = "mkdir test && cd test";
let ast = ShellAst::parse(cmd).expect("parsing failed for {cmd:?}");
if let ShellAst::Operation {
operator,
left,
right,
} = ast
{
assert_eq!(operator, Operator::And);
if let ShellAst::Command(left_cmd) = *left {
assert_eq!(left_cmd.command, "mkdir");
assert_eq!(left_cmd.args, vec!["test".to_string()]);
} else {
panic!("Expected Command node for left side");
}
if let ShellAst::Command(right_cmd) = *right {
assert_eq!(right_cmd.command, "cd");
assert_eq!(right_cmd.args, vec!["test".to_string()]);
} else {
panic!("Expected Command node for right side");
}
} else {
panic!("Expected Operation node");
}
}
#[test]
fn test_complex_chain_with_precedence() {
// Test a more complex chain with different precedence levels
let cmd = "echo hello | grep e && ls -l ; echo done";
let ast = ShellAst::parse(cmd).expect("parsing failed for {cmd:?}");
// The tree should be structured with precedence:
// - Pipe has highest precedence
// - Then And
// - Then Semicolon (lowest)
if let ShellAst::Operation {
operator,
left,
right,
} = &ast
{
assert_eq!(*operator, Operator::Semicolon);
if let ShellAst::Operation {
operator,
left: inner_left,
right: inner_right,
} = &**left
{
assert_eq!(*operator, Operator::And);
if let ShellAst::Operation {
operator,
left: pipe_left,
right: pipe_right,
} = &**inner_left
{
assert_eq!(*operator, Operator::Pipe);
if let ShellAst::Command(cmd) = &**pipe_left {
assert_eq!(cmd.command, "echo");
assert_eq!(cmd.args, vec!["hello".to_string()]);
} else {
panic!("Expected Command node for pipe left branch");
}
if let ShellAst::Command(cmd) = &**pipe_right {
assert_eq!(cmd.command, "grep");
assert_eq!(cmd.args, vec!["e".to_string()]);
} else {
panic!("Expected Command node for pipe right branch");
}
} else {
panic!("Expected Pipe operation node");
}
if let ShellAst::Command(cmd) = &**inner_right {
assert_eq!(cmd.command, "ls");
assert_eq!(cmd.args, vec!["-l".to_string()]);
} else {
panic!("Expected Command node for and right branch");
}
} else {
panic!("Expected And operation node");
}
if let ShellAst::Command(cmd) = &**right {
assert_eq!(cmd.command, "echo");
assert_eq!(cmd.args, vec!["done".to_string()]);
} else {
panic!("Expected Command node for semicolon right branch");
}
} else {
panic!("Expected Semicolon operation node");
}
}
#[test]
fn test_stdout_redirection() {
// Test stdout redirection
let cmd = "echo hello > output.txt";
let ast = ShellAst::parse(cmd).expect("parsing failed for {cmd:?}");
if let ShellAst::Command(shell_cmd) = ast {
assert_eq!(shell_cmd.command, "echo");
assert_eq!(shell_cmd.args, vec!["hello".to_string()]);
assert_eq!(shell_cmd.stdout_redirect, Some("output.txt".to_string()));
assert_eq!(shell_cmd.stderr_redirect, None);
} else {
panic!("Expected Command node");
}
}
#[test]
fn test_stderr_redirection() {
// Test stderr redirection
let cmd = "find / -name test 2> errors.log";
let ast = ShellAst::parse(cmd).expect("parsing failed for {cmd:?}");
if let ShellAst::Command(shell_cmd) = ast {
assert_eq!(shell_cmd.command, "find");
assert_eq!(
shell_cmd.args,
vec!["/".to_string(), "-name".to_string(), "test".to_string()]
);
assert_eq!(shell_cmd.stdout_redirect, None);
assert_eq!(shell_cmd.stderr_redirect, Some("errors.log".to_string()));
} else {
panic!("Expected Command node");
}
}
#[test]
fn test_both_redirections() {
// Test both stdout and stderr redirection
let cmd = "make &> build.log";
let ast = ShellAst::parse(cmd).expect("parsing failed for {cmd:?}");
if let ShellAst::Command(shell_cmd) = ast {
assert_eq!(shell_cmd.command, "make");
assert!(shell_cmd.args.is_empty());
assert_eq!(shell_cmd.stdout_redirect, Some("build.log".to_string()));
assert_eq!(shell_cmd.stderr_redirect, Some("build.log".to_string()));
} else {
panic!("Expected Command node");
}
// Test alternative syntax
let cmd = "make >& build.log";
let ast = ShellAst::parse(cmd).expect("parsing failed for {cmd:?}");
if let ShellAst::Command(shell_cmd) = ast {
assert_eq!(shell_cmd.command, "make");
assert!(shell_cmd.args.is_empty());
assert_eq!(shell_cmd.stdout_redirect, Some("build.log".to_string()));
assert_eq!(shell_cmd.stderr_redirect, Some("build.log".to_string()));
} else {
panic!("Expected Command node");
}
}
#[test]
fn test_multiple_operators() {
// Test multiple operators in a single command
let cmd =
"find . -name \"*.rs\" | grep impl && echo \"Found implementations\" ; echo \"Done\"";
// Verify the AST structure
let ast = ShellAst::parse(cmd).expect("parsing failed for {cmd:?}");
if let ShellAst::Operation {
operator: semicolon_op,
left: semicolon_left,
right: semicolon_right,
} = ast
{
assert_eq!(semicolon_op, Operator::Semicolon);
if let ShellAst::Operation {
operator: and_op,
left: and_left,
right: and_right,
} = *semicolon_left
{
assert_eq!(and_op, Operator::And);
if let ShellAst::Operation {
operator: pipe_op,
left: pipe_left,
right: pipe_right,
} = *and_left
{
assert_eq!(pipe_op, Operator::Pipe);
if let ShellAst::Command(cmd) = *pipe_left {
assert_eq!(cmd.command, "find");
assert_eq!(
cmd.args,
vec![".".to_string(), "-name".to_string(), "*.rs".to_string()]
);
} else {
panic!("Expected Command node for pipe left");
}
if let ShellAst::Command(cmd) = *pipe_right {
assert_eq!(cmd.command, "grep");
assert_eq!(cmd.args, vec!["impl".to_string()]);
} else {
panic!("Expected Command node for pipe right");
}
} else {
panic!("Expected Pipe operation");
}
if let ShellAst::Command(cmd) = *and_right {
assert_eq!(cmd.command, "echo");
assert_eq!(cmd.args, vec!["Found implementations".to_string()]);
} else {
panic!("Expected Command node for and right");
}
} else {
panic!("Expected And operation");
}
if let ShellAst::Command(cmd) = *semicolon_right {
assert_eq!(cmd.command, "echo");
assert_eq!(cmd.args, vec!["Done".to_string()]);
} else {
panic!("Expected Command node for semicolon right");
}
} else {
panic!("Expected Semicolon operation at root");
}
}
#[test]
fn test_pipe_with_redirections() {
// Test pipe with redirections
let cmd = "cat file.txt | grep error > results.txt 2> errors.log";
let ast = ShellAst::parse(cmd).expect("parsing failed for {cmd:?}");
if let ShellAst::Operation {
operator,
left,
right,
} = ast
{
assert_eq!(operator, Operator::Pipe);
if let ShellAst::Command(left_cmd) = *left {
assert_eq!(left_cmd.command, "cat");
assert_eq!(left_cmd.args, vec!["file.txt".to_string()]);
assert_eq!(left_cmd.stdout_redirect, None);
assert_eq!(left_cmd.stderr_redirect, None);
} else {
panic!("Expected Command node for left side");
}
if let ShellAst::Command(right_cmd) = *right {
assert_eq!(right_cmd.command, "grep");
assert_eq!(right_cmd.args, vec!["error".to_string()]);
assert_eq!(right_cmd.stdout_redirect, Some("results.txt".to_string()));
assert_eq!(right_cmd.stderr_redirect, Some("errors.log".to_string()));
} else {
panic!("Expected Command node for right side");
}
} else {
panic!("Expected Operation node");
}
}
#[test]
fn test_quoted_arguments() {
// Test quoted arguments
let cmd = "echo \"hello world\" | grep \"o w\"";
let ast = ShellAst::parse(cmd).expect("parsing failed for {cmd:?}");
if let ShellAst::Operation {
operator,
left,
right,
} = ast
{
assert_eq!(operator, Operator::Pipe);
if let ShellAst::Command(left_cmd) = *left {
assert_eq!(left_cmd.command, "echo");
assert_eq!(left_cmd.args, vec!["hello world".to_string()]);
} else {
panic!("Expected Command node for left side");
}
if let ShellAst::Command(right_cmd) = *right {
assert_eq!(right_cmd.command, "grep");
assert_eq!(right_cmd.args, vec!["o w".to_string()]);
} else {
panic!("Expected Command node for right side");
}
} else {
panic!("Expected Operation node");
}
}
#[test]
fn test_unsupported_features() {
// Test unsupported shell features
let result = ShellAst::parse("echo $HOME");
assert!(result.is_err());
let result = ShellAst::parse("echo `date`");
assert!(result.is_err());
let result = ShellAst::parse("echo $(date)");
assert!(result.is_err());
let result = ShellAst::parse("for i in {1..5}; do echo $i; done");
assert!(result.is_err());
}
#[test]
fn test_complex_command() {
let cmd = "find /path/to/dir -type f -name \"*.txt\" -exec grep \"pattern with spaces\";";
let ast = ShellAst::parse(cmd).expect("parsing failed for {cmd:?}");
if let ShellAst::Command(shell_cmd) = ast {
assert_eq!(shell_cmd.command, "find");
assert_eq!(
shell_cmd.args,
vec![
"/path/to/dir".to_string(),
"-type".to_string(),
"f".to_string(),
"-name".to_string(),
"*.txt".to_string(),
"-exec".to_string(),
"grep".to_string(),
"pattern with spaces".to_string(),
]
);
assert_eq!(shell_cmd.stdout_redirect, None);
assert_eq!(shell_cmd.stderr_redirect, None);
} else {
panic!("Expected Command node");
}
}
#[test]
fn test_empty_command() {
// Test empty command
let result = ShellAst::parse("");
assert!(result.is_err());
}
#[test]
fn test_missing_redirection_target() {
// Test missing redirection target
let result = ShellAst::parse("echo hello >");
assert!(result.is_err());
let result = ShellAst::parse("ls 2>");
assert!(result.is_err());
}
#[test]
fn test_ampersand_as_argument() {
// Test & as a background operator is not allowed
let result = ShellAst::parse("grep & file.txt");
assert!(result.is_err());
// Verify the error message mentions background processes
if let Err(Error::RuntimeError(msg)) = ShellAst::parse("grep & file.txt") {
assert!(msg.contains("Background processes"));
} else {
panic!("Expected RuntimeError about background processes");
}
}
}

View File

@@ -1,59 +0,0 @@
mod sandboxed_shell;
mod session;
use project::Project;
pub(crate) use session::*;
use assistant_tool::{Tool, ToolRegistry};
use gpui::{App, AppContext as _, Entity, Task};
use schemars::JsonSchema;
use serde::Deserialize;
use std::sync::Arc;
pub fn init(cx: &App) {
let registry = ToolRegistry::global(cx);
registry.register_tool(ScriptingTool);
}
#[derive(Debug, Deserialize, JsonSchema)]
struct ScriptingToolInput {
lua_script: String,
}
struct ScriptingTool;
impl Tool for ScriptingTool {
fn name(&self) -> String {
"lua-interpreter".into()
}
fn description(&self) -> String {
include_str!("scripting_tool_description.txt").into()
}
fn input_schema(&self) -> serde_json::Value {
let schema = schemars::schema_for!(ScriptingToolInput);
serde_json::to_value(&schema).unwrap()
}
fn run(
self: Arc<Self>,
input: serde_json::Value,
project: Entity<Project>,
cx: &mut App,
) -> Task<anyhow::Result<String>> {
let input = match serde_json::from_value::<ScriptingToolInput>(input) {
Err(err) => return Task::ready(Err(err.into())),
Ok(input) => input,
};
let session = cx.new(|cx| Session::new(project, cx));
let lua_script = input.lua_script;
let script = session.update(cx, |session, cx| session.run_script(lua_script, cx));
cx.spawn(|_cx| async move {
let output = script.await?.stdout;
drop(session);
Ok(format!("The script output the following:\n{output}"))
})
}
}

View File

@@ -1,5 +1,6 @@
use crate::{settings_store::SettingsStore, Settings};
use fs::Fs;
use collections::HashSet;
use fs::{Fs, PathEventKind};
use futures::{channel::mpsc, StreamExt};
use gpui::{App, BackgroundExecutor, ReadGlobal};
use std::{path::PathBuf, sync::Arc, time::Duration};
@@ -78,6 +79,55 @@ pub fn watch_config_file(
rx
}
pub fn watch_config_dir(
executor: &BackgroundExecutor,
fs: Arc<dyn Fs>,
dir_path: PathBuf,
config_paths: HashSet<PathBuf>,
) -> mpsc::UnboundedReceiver<String> {
let (tx, rx) = mpsc::unbounded();
executor
.spawn(async move {
for file_path in &config_paths {
if fs.metadata(file_path).await.is_ok_and(|v| v.is_some()) {
if let Ok(contents) = fs.load(file_path).await {
if tx.unbounded_send(contents).is_err() {
return;
}
}
}
}
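// The loop above emitted the current contents of each existing config file once;
// from here on, watch the directory and stream updates: new contents on
// create/change, and an empty string when a watched file is removed.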
let (events, _) = fs.watch(&dir_path, Duration::from_millis(100)).await;
futures::pin_mut!(events);
while let Some(event_batch) = events.next().await {
for event in event_batch {
if config_paths.contains(&event.path) {
match event.kind {
Some(PathEventKind::Removed) => {
if tx.unbounded_send(String::new()).is_err() {
return;
}
}
Some(PathEventKind::Created) | Some(PathEventKind::Changed) => {
if let Ok(contents) = fs.load(&event.path).await {
if tx.unbounded_send(contents).is_err() {
return;
}
}
}
_ => {}
}
}
}
}
})
.detach();
rx
}
pub fn update_settings_file<T: Settings>(
fs: Arc<dyn Fs>,
cx: &App,

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# Tom Hale, 2016. MIT Licence.
# Print out 256 colours, with each number printed in its corresponding colour

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# Copied from: https://unix.stackexchange.com/a/696756
# Based on: https://gist.github.com/XVilka/8346728 and https://unix.stackexchange.com/a/404415/395213

View File

@@ -136,14 +136,10 @@ impl ThemeColors {
terminal_ansi_dim_white: neutral().light().step_11(),
link_text_hover: orange().light().step_10(),
version_control_added: ADDED_COLOR,
version_control_added_background: ADDED_COLOR.opacity(0.1),
version_control_deleted: REMOVED_COLOR,
version_control_deleted_background: REMOVED_COLOR.opacity(0.1),
version_control_modified: MODIFIED_COLOR,
version_control_modified_background: MODIFIED_COLOR.opacity(0.1),
version_control_renamed: MODIFIED_COLOR,
version_control_conflict: orange().light().step_12(),
version_control_conflict_background: orange().light().step_12().opacity(0.1),
version_control_ignored: gray().light().step_12(),
}
}
@@ -253,14 +249,10 @@ impl ThemeColors {
terminal_ansi_dim_white: neutral().dark().step_10(),
link_text_hover: orange().dark().step_10(),
version_control_added: ADDED_COLOR,
version_control_added_background: ADDED_COLOR.opacity(0.1),
version_control_deleted: REMOVED_COLOR,
version_control_deleted_background: REMOVED_COLOR.opacity(0.1),
version_control_modified: MODIFIED_COLOR,
version_control_modified_background: MODIFIED_COLOR.opacity(0.1),
version_control_renamed: MODIFIED_COLOR,
version_control_conflict: orange().dark().step_12(),
version_control_conflict_background: orange().dark().step_12().opacity(0.1),
version_control_ignored: gray().dark().step_12(),
}
}

View File

@@ -190,14 +190,10 @@ pub(crate) fn zed_default_dark() -> Theme {
editor_foreground: hsla(218. / 360., 14. / 100., 71. / 100., 1.),
link_text_hover: blue,
version_control_added: ADDED_COLOR,
version_control_added_background: ADDED_COLOR.opacity(0.1),
version_control_deleted: REMOVED_COLOR,
version_control_deleted_background: REMOVED_COLOR.opacity(0.1),
version_control_modified: MODIFIED_COLOR,
version_control_modified_background: MODIFIED_COLOR.opacity(0.1),
version_control_renamed: MODIFIED_COLOR,
version_control_conflict: crate::orange().light().step_12(),
version_control_conflict_background: crate::orange().light().step_12().opacity(0.1),
version_control_ignored: crate::gray().light().step_12(),
},
status: StatusColors {

View File

@@ -557,26 +557,14 @@ pub struct ThemeColorsContent {
#[serde(rename = "version_control.added")]
pub version_control_added: Option<String>,
/// Added version control background color.
#[serde(rename = "version_control.added_background")]
pub version_control_added_background: Option<String>,
/// Deleted version control color.
#[serde(rename = "version_control.deleted")]
pub version_control_deleted: Option<String>,
/// Deleted version control background color.
#[serde(rename = "version_control.deleted_background")]
pub version_control_deleted_background: Option<String>,
/// Modified version control color.
#[serde(rename = "version_control.modified")]
pub version_control_modified: Option<String>,
/// Modified version control background color.
#[serde(rename = "version_control.modified_background")]
pub version_control_modified_background: Option<String>,
/// Renamed version control color.
#[serde(rename = "version_control.renamed")]
pub version_control_renamed: Option<String>,
@@ -585,10 +573,6 @@ pub struct ThemeColorsContent {
#[serde(rename = "version_control.conflict")]
pub version_control_conflict: Option<String>,
/// Conflict version control background color.
#[serde(rename = "version_control.conflict_background")]
pub version_control_conflict_background: Option<String>,
/// Ignored version control color.
#[serde(rename = "version_control.ignored")]
pub version_control_ignored: Option<String>,
@@ -1000,26 +984,14 @@ impl ThemeColorsContent {
.version_control_added
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
version_control_added_background: self
.version_control_added_background
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
version_control_deleted: self
.version_control_deleted
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
version_control_deleted_background: self
.version_control_deleted_background
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
version_control_modified: self
.version_control_modified
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
version_control_modified_background: self
.version_control_modified_background
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
version_control_renamed: self
.version_control_renamed
.as_ref()
@@ -1028,10 +1000,6 @@ impl ThemeColorsContent {
.version_control_conflict
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
version_control_conflict_background: self
.version_control_conflict_background
.as_ref()
.and_then(|color| try_parse_color(color).ok()),
version_control_ignored: self
.version_control_ignored
.as_ref()

View File

@@ -246,22 +246,14 @@ pub struct ThemeColors {
/// Represents an added entry or hunk in vcs, like git.
pub version_control_added: Hsla,
/// Represents the line background of an added entry or hunk in vcs, like git.
pub version_control_added_background: Hsla,
/// Represents a deleted entry in version control systems.
pub version_control_deleted: Hsla,
/// Represents the background color for deleted entries in version control systems.
pub version_control_deleted_background: Hsla,
/// Represents a modified entry in version control systems.
pub version_control_modified: Hsla,
/// Represents the background color for modified entries in version control systems.
pub version_control_modified_background: Hsla,
/// Represents a renamed entry in version control systems.
pub version_control_renamed: Hsla,
/// Represents a conflicting entry in version control systems.
pub version_control_conflict: Hsla,
/// Represents the background color for conflicting entries in version control systems.
pub version_control_conflict_background: Hsla,
/// Represents an ignored entry in version control systems.
pub version_control_ignored: Hsla,
}
@@ -366,14 +358,10 @@ pub enum ThemeColorField {
TerminalAnsiDimWhite,
LinkTextHover,
VersionControlAdded,
VersionControlAddedBackground,
VersionControlDeleted,
VersionControlDeletedBackground,
VersionControlModified,
VersionControlModifiedBackground,
VersionControlRenamed,
VersionControlConflict,
VersionControlConflictBackground,
VersionControlIgnored,
}
@@ -485,20 +473,10 @@ impl ThemeColors {
ThemeColorField::TerminalAnsiDimWhite => self.terminal_ansi_dim_white,
ThemeColorField::LinkTextHover => self.link_text_hover,
ThemeColorField::VersionControlAdded => self.version_control_added,
ThemeColorField::VersionControlAddedBackground => self.version_control_added_background,
ThemeColorField::VersionControlDeleted => self.version_control_deleted,
ThemeColorField::VersionControlDeletedBackground => {
self.version_control_deleted_background
}
ThemeColorField::VersionControlModified => self.version_control_modified,
ThemeColorField::VersionControlModifiedBackground => {
self.version_control_modified_background
}
ThemeColorField::VersionControlRenamed => self.version_control_renamed,
ThemeColorField::VersionControlConflict => self.version_control_conflict,
ThemeColorField::VersionControlConflictBackground => {
self.version_control_conflict_background
}
ThemeColorField::VersionControlIgnored => self.version_control_ignored,
}
}

View File

@@ -6,9 +6,7 @@ use crate::{
prelude::*, Color, DynamicSpacing, ElevationIndex, IconPosition, KeyBinding,
KeybindingPosition, TintColor,
};
use crate::{
ButtonCommon, ButtonLike, ButtonSize, ButtonStyle, IconName, IconSize, Label, LineHeightStyle,
};
use crate::{ButtonCommon, ButtonLike, ButtonSize, ButtonStyle, IconName, IconSize, Label};
use super::button_icon::ButtonIcon;
@@ -448,7 +446,6 @@ impl RenderOnce for Button {
.color(label_color)
.size(self.label_size.unwrap_or_default())
.when_some(self.alpha, |this, alpha| this.alpha(alpha))
.line_height_style(LineHeightStyle::UiLabel)
.when(self.truncate, |this| this.truncate()),
)
.children(self.key_binding),

View File

@@ -1,6 +1,5 @@
use gpui::{
div, hsla, prelude::*, AnyElement, AnyView, CursorStyle, ElementId, Hsla, IntoElement, Styled,
Window,
div, hsla, prelude::*, AnyElement, AnyView, ElementId, Hsla, IntoElement, Styled, Window,
};
use std::sync::Arc;
@@ -141,14 +140,14 @@ impl Checkbox {
match self.style.clone() {
ToggleStyle::Ghost => cx.theme().colors().border,
ToggleStyle::ElevationBased(elevation) => elevation.on_elevation_bg(cx),
ToggleStyle::ElevationBased(_) => cx.theme().colors().border,
ToggleStyle::Custom(color) => color.opacity(0.3),
}
}
/// container size
pub fn container_size(cx: &App) -> Rems {
DynamicSpacing::Base20.rems(cx)
pub fn container_size() -> Pixels {
px(20.0)
}
}
@@ -157,21 +156,21 @@ impl RenderOnce for Checkbox {
let group_id = format!("checkbox_group_{:?}", self.id);
let color = if self.disabled {
Color::Disabled
} else if self.placeholder {
Color::Placeholder
} else {
Color::Selected
};
let icon = match self.toggle_state {
ToggleState::Selected => Some(if self.placeholder {
Icon::new(IconName::Circle)
.size(IconSize::XSmall)
.color(color)
} else {
Icon::new(IconName::Check)
.size(IconSize::Small)
.color(color)
}),
ToggleState::Selected => {
if self.placeholder {
None
} else {
Some(
Icon::new(IconName::Check)
.size(IconSize::Small)
.color(color),
)
}
}
ToggleState::Indeterminate => {
Some(Icon::new(IconName::Dash).size(IconSize::Small).color(color))
}
@@ -180,8 +179,9 @@ impl RenderOnce for Checkbox {
let bg_color = self.bg_color(cx);
let border_color = self.border_color(cx);
let hover_border_color = border_color.alpha(0.7);
let size = Self::container_size(cx);
let size = Self::container_size();
let checkbox = h_flex()
.id(self.id.clone())
@@ -195,22 +195,27 @@ impl RenderOnce for Checkbox {
.flex_none()
.justify_center()
.items_center()
.m(DynamicSpacing::Base04.px(cx))
.size(DynamicSpacing::Base16.rems(cx))
.m_1()
.size_4()
.rounded_xs()
.bg(bg_color)
.border_1()
.border_color(border_color)
.when(self.disabled, |this| {
this.cursor(CursorStyle::OperationNotAllowed)
})
.when(self.disabled, |this| this.cursor_not_allowed())
.when(self.disabled, |this| {
this.bg(cx.theme().colors().element_disabled.opacity(0.6))
})
.when(!self.disabled, |this| {
this.group_hover(group_id.clone(), |el| {
el.bg(cx.theme().colors().element_hover)
})
this.group_hover(group_id.clone(), |el| el.border_color(hover_border_color))
})
.when(self.placeholder, |this| {
this.child(
div()
.flex_none()
.rounded_full()
.bg(color.color(cx).alpha(0.5))
.size(px(4.)),
)
})
.children(icon),
);
@@ -522,6 +527,12 @@ impl ComponentPreview for Checkbox {
Checkbox::new("checkbox_unselected", ToggleState::Unselected)
.into_any_element(),
),
single_example(
"Placeholder",
Checkbox::new("checkbox_indeterminate", ToggleState::Selected)
.placeholder(true)
.into_any_element(),
),
single_example(
"Indeterminate",
Checkbox::new("checkbox_indeterminate", ToggleState::Indeterminate)

View File

@@ -98,7 +98,6 @@ remote.workspace = true
repl.workspace = true
reqwest_client.workspace = true
rope.workspace = true
scripting_tool.workspace = true
search.workspace = true
serde.workspace = true
serde_json.workspace = true

View File

@@ -9,8 +9,9 @@
<string>Alternate</string>
<key>LSItemContentTypes</key>
<array>
<string>public.text</string>
<string>public.folder</string>
<string>public.plain-text</string>
<string>public.text</string>
<string>public.utf8-plain-text</string>
</array>
</dict>

View File

@@ -476,7 +476,6 @@ fn main() {
cx,
);
assistant_tools::init(cx);
scripting_tool::init(cx);
repl::init(app_state.fs.clone(), cx);
extension_host::init(
extension_host_proxy,

View File

@@ -96,6 +96,7 @@ impl Render for QuickActionBar {
let git_blame_inline_enabled = editor_value.git_blame_inline_enabled();
let show_git_blame_gutter = editor_value.show_git_blame_gutter();
let auto_signature_help_enabled = editor_value.auto_signature_help_enabled(cx);
let show_line_numbers = editor_value.line_numbers_enabled(cx);
let has_edit_prediction_provider = editor_value.edit_prediction_provider().is_some();
let show_edit_predictions = editor_value.edit_predictions_enabled();
let edit_predictions_enabled_at_cursor =
@@ -261,6 +262,58 @@ impl Render for QuickActionBar {
);
}
if has_edit_prediction_provider {
let mut inline_completion_entry = ContextMenuEntry::new("Edit Predictions")
.toggleable(IconPosition::Start, edit_predictions_enabled_at_cursor && show_edit_predictions)
.disabled(!edit_predictions_enabled_at_cursor)
.action(
editor::actions::ToggleEditPrediction.boxed_clone(),
).handler({
let editor = editor.clone();
move |window, cx| {
editor
.update(cx, |editor, cx| {
editor.toggle_edit_predictions(
&editor::actions::ToggleEditPrediction,
window,
cx,
);
})
.ok();
}
});
if !edit_predictions_enabled_at_cursor {
inline_completion_entry = inline_completion_entry.documentation_aside(|_| {
Label::new("You can't toggle edit predictions for this file as it is within the excluded files list.").into_any_element()
});
}
menu = menu.item(inline_completion_entry);
}
menu = menu.separator();
menu = menu.toggleable_entry(
"Line Numbers",
show_line_numbers,
IconPosition::Start,
Some(editor::actions::ToggleLineNumbers.boxed_clone()),
{
let editor = editor.clone();
move |window, cx| {
editor
.update(cx, |editor, cx| {
editor.toggle_line_numbers(
&editor::actions::ToggleLineNumbers,
window,
cx,
);
})
.ok();
}
},
);
menu = menu.toggleable_entry(
"Selection Menu",
selection_menu_enabled,
@@ -303,35 +356,6 @@ impl Render for QuickActionBar {
},
);
if has_edit_prediction_provider {
let mut inline_completion_entry = ContextMenuEntry::new("Edit Predictions")
.toggleable(IconPosition::Start, edit_predictions_enabled_at_cursor && show_edit_predictions)
.disabled(!edit_predictions_enabled_at_cursor)
.action(
editor::actions::ToggleEditPrediction.boxed_clone(),
).handler({
let editor = editor.clone();
move |window, cx| {
editor
.update(cx, |editor, cx| {
editor.toggle_edit_predictions(
&editor::actions::ToggleEditPrediction,
window,
cx,
);
})
.ok();
}
});
if !edit_predictions_enabled_at_cursor {
inline_completion_entry = inline_completion_entry.documentation_aside(|_| {
Label::new("You can't toggle edit predictions for this file as it is within the excluded files list.").into_any_element()
});
}
menu = menu.item(inline_completion_entry);
}
menu = menu.separator();
menu = menu.toggleable_entry(

View File

@@ -1,14 +1,11 @@
(
import
(
let
lock = builtins.fromJSON (builtins.readFile ./flake.lock);
in
fetchTarball {
url = lock.nodes.flake-compat.locked.url or "https://github.com/edolstra/flake-compat/archive/${lock.nodes.flake-compat.locked.rev}.tar.gz";
sha256 = lock.nodes.flake-compat.locked.narHash;
}
)
{src = ./.;}
)
.defaultNix
(import (
let
lock = builtins.fromJSON (builtins.readFile ./flake.lock);
in
fetchTarball {
url =
lock.nodes.flake-compat.locked.url
or "https://github.com/edolstra/flake-compat/archive/${lock.nodes.flake-compat.locked.rev}.tar.gz";
sha256 = lock.nodes.flake-compat.locked.narHash;
}
) { src = ./.; }).defaultNix

View File

@@ -8,6 +8,7 @@ If you're used to a specific editor's defaults you can set a `base_keymap` in yo
- VSCode (default)
- Atom
- Emacs (Beta)
- JetBrains
- SublimeText
- TextMate
@@ -52,7 +53,7 @@ If you want to debug problems with custom keymaps you can use `debug: Open Key C
Zed has the ability to match against not just a single keypress, but a sequence of keys typed in order. Each key in the `"bindings"` map is a sequence of keypresses separated with a space.
Each key press is a sequence of modifiers followed by a key. The modifiers are:
Each keypress is a sequence of modifiers followed by a key. The modifiers are:
- `ctrl-` The control key
- `cmd-`, `win-` or `super-` for the platform modifier (Command on macOS, Windows key on Windows, and the Super key on Linux).
@@ -77,7 +78,7 @@ The `shift-` modifier can only be used in combination with a letter to indicate
The `alt-` modifier can be used on many layouts to generate a different key. For example, on a macOS US keyboard the combination `alt-c` types `ç`. You can match against either in your keymap file, though by convention Zed spells this combination as `alt-c`.
It is possible to match against typing a modifier key on its own. For example `shift shift` can be used to implement JetBrains search everywhere shortcut. In this case the binding happens on key release instead of key press.
It is possible to match against typing a modifier key on its own. For example `shift shift` can be used to implement the JetBrains search-everywhere shortcut. In this case the binding happens on key release instead of keypress.
### Contexts
@@ -138,13 +139,13 @@ As of Zed 0.162.0, Zed has some support for non-QWERTY keyboards on macOS. Bette
There are roughly three categories of keyboard to consider:
Keyboards that support full ASCII (QWERTY, DVORAK, COLEMAK, etc.). On these keyboards bindings are resolved based on the character that would be generated by the key. So to type `cmd-[`, find the key labelled `[` and press it with command.
Keyboards that support full ASCII (QWERTY, DVORAK, COLEMAK, etc.). On these keyboards bindings are resolved based on the character that would be generated by the key. So to type `cmd-[`, find the key labeled `[` and press it with command.
Keyboards that are mostly non-ASCII, but support full ASCII when the command key is pressed. For example Cyrillic keyboards, Armenian, Hebrew, etc. On these keyboards bindings are resolved based on the character that would be generated by typing the key with command pressed. So to type `ctrl-a`, find the key that generates `cmd-a`. For these keyboards, keyboard shortcuts are displayed in the app using their ASCII equivalents. If the ASCII equivalents are not printed on your keyboard, you can use the macOS keyboard viewer while holding down the `cmd` key to find them (though often the ASCII equivalents are in a QWERTY layout).
Finally keyboards that support extended Latin alphabets (usually ISO keyboards) require the most support. For example French AZERTY, German QWERTZ, etc. On these keyboards it is often not possible to type the entire ASCII range without option. To ensure that shortcuts _can_ be typed without option, keyboard shortcuts are mapped to "key equivalents" in the same way as [macOS](). This mapping is defined per layout, and is a compromise between leaving keyboard shortcuts triggered by the same character they are defined with, keeping shortcuts in the same place as a QWERTY layout, and moving shortcuts out of the way of system shortcuts.
For example on a German QWERTZ keyboard, the `cmd->` shortcut is moved to `cmd-:` because `cmd->` is the system window switcher and this is where that shortcut is typed on a QWERTY keyboard. `cmd-+` stays the same because + is still typable without option, and as a result, `cmd-[` and `cmd-]` become `cmd-ö` and `cmd-ä`, moving out of the way of the `+` key.
For example on a German QWERTZ keyboard, the `cmd->` shortcut is moved to `cmd-:` because `cmd->` is the system window switcher and this is where that shortcut is typed on a QWERTY keyboard. `cmd-+` stays the same because + is still typeable without option, and as a result, `cmd-[` and `cmd-]` become `cmd-ö` and `cmd-ä`, moving out of the way of the `+` key.
If you are defining shortcuts in your personal keymap, you can opt into the key equivalent mapping by setting `use_key_equivalents` to `true` in your keymap:
@@ -208,7 +209,7 @@ There are some limitations to this, notably:
The argument to `SendKeystrokes` is a space-separated list of keystrokes (using the same syntax as above). Due to the way that keystrokes are parsed, any segment that is not recognized as a keypress will be sent verbatim to the currently focused input field.
If the argument to `SendKeystrokes` contains the binding used to trigger it, it will use the next-highest-precedence definition of that binding. This allows you to extend the default behaviour of a key binding.
If the argument to `SendKeystrokes` contains the binding used to trigger it, it will use the next-highest-precedence definition of that binding. This allows you to extend the default behavior of a key binding.
### Forward keys to terminal

flake.lock generated
View File

@@ -2,11 +2,11 @@
"nodes": {
"crane": {
"locked": {
"lastModified": 1739936662,
"narHash": "sha256-x4syUjNUuRblR07nDPeLDP7DpphaBVbUaSoeZkFbGSk=",
"lastModified": 1741481578,
"narHash": "sha256-JBTSyJFQdO3V8cgcL08VaBUByEU6P5kXbTJN6R0PFQo=",
"owner": "ipetkov",
"repo": "crane",
"rev": "19de14aaeb869287647d9461cbd389187d8ecdb7",
"rev": "bb1c9567c43e4434f54e9481eb4b8e8e0d50f0b5",
"type": "github"
},
"original": {
@@ -32,11 +32,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1740695751,
"narHash": "sha256-D+R+kFxy1KsheiIzkkx/6L63wEHBYX21OIwlFV8JvDs=",
"lastModified": 1741379970,
"narHash": "sha256-Wh7esNh7G24qYleLvgOSY/7HlDUzWaL/n4qzlBePpiw=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "6313551cd05425cd5b3e63fe47dbc324eabb15e4",
"rev": "36fd87baa9083f34f7f5027900b62ee6d09b1f2f",
"type": "github"
},
"original": {
@@ -61,11 +61,11 @@
]
},
"locked": {
"lastModified": 1740882709,
"narHash": "sha256-VC+8GxWK4p08jjIbmsNfeFQajW2lsiOR/XQiOOvqgvs=",
"lastModified": 1741573199,
"narHash": "sha256-A2sln1GdCf+uZ8yrERSCZUCqZ3JUlOv1WE2VFqqfaLQ=",
"owner": "oxalica",
"repo": "rust-overlay",
"rev": "f4d5a693c18b389f0d58f55b6f7be6ef85af186f",
"rev": "c777dc8a1e35407b0e80ec89817fe69970f4e81a",
"type": "github"
},
"original": {

View File

@@ -50,12 +50,11 @@
in
{
packages = forAllSystems (pkgs: {
zed-editor = pkgs.zed-editor;
default = pkgs.zed-editor;
});
devShells = forAllSystems (pkgs: {
default = import ./nix/shell.nix { inherit pkgs; };
default = pkgs.callPackage ./nix/shell.nix { };
});
formatter = forAllSystems (pkgs: pkgs.nixfmt-rfc-style);

View File

@@ -2,11 +2,12 @@
lib,
crane,
rustToolchain,
fetchpatch,
clang,
rustPlatform,
cmake,
copyDesktopItems,
fetchFromGitHub,
curl,
clang,
perl,
pkg-config,
protobuf,
@@ -29,11 +30,12 @@
cargo-about,
cargo-bundle,
git,
livekit-libwebrtc,
apple-sdk_15,
darwin,
darwinMinVersionHook,
makeWrapper,
nodejs_22,
nix-gitignore,
withGLES ? false,
}:
@@ -41,176 +43,205 @@
assert withGLES -> stdenv.hostPlatform.isLinux;
let
includeFilter =
path: type:
mkIncludeFilter =
root': path: type:
let
baseName = baseNameOf (toString path);
parentDir = dirOf path;
inRootDir = type == "directory" && parentDir == ../.;
# note: under lazy-trees this introduces an extra copy
root = toString root' + "/";
relPath = lib.removePrefix root path;
topLevelIncludes = [
"crates"
"assets"
"extensions"
"script"
"tooling"
"Cargo.toml"
".config" # nextest?
];
firstComp = builtins.head (lib.path.subpath.components relPath);
in
!(
inRootDir
&& (baseName == "docs" || baseName == ".github" || baseName == ".git" || baseName == "target")
);
builtins.elem firstComp topLevelIncludes;
craneLib = crane.overrideToolchain rustToolchain;
commonSrc = lib.cleanSourceWith {
src = nix-gitignore.gitignoreSource [ ] ../.;
filter = includeFilter;
name = "source";
};
commonArgs = rec {
pname = "zed-editor";
version = "nightly";
src = commonSrc;
nativeBuildInputs =
[
clang
cmake
copyDesktopItems
curl
perl
pkg-config
protobuf
cargo-about
]
++ lib.optionals stdenv.hostPlatform.isLinux [ makeWrapper ]
++ lib.optionals stdenv.hostPlatform.isDarwin [ cargo-bundle ];
buildInputs =
[
curl
fontconfig
freetype
libgit2
openssl
sqlite
zlib
zstd
]
++ lib.optionals stdenv.hostPlatform.isLinux [
alsa-lib
libxkbcommon
wayland
xorg.libxcb
]
++ lib.optionals stdenv.hostPlatform.isDarwin [
apple-sdk_15
(darwinMinVersionHook "10.15")
];
env = {
ZSTD_SYS_USE_PKG_CONFIG = true;
FONTCONFIG_FILE = makeFontsConf {
fontDirectories = [
"${src}/assets/fonts/plex-mono"
"${src}/assets/fonts/plex-sans"
];
gpu-lib = if withGLES then libglvnd else vulkan-loader;
commonArgs =
let
zedCargoLock = builtins.fromTOML (builtins.readFile ../crates/zed/Cargo.toml);
in
rec {
pname = "zed-editor";
version = zedCargoLock.package.version + "-nightly";
src = builtins.path {
path = ../.;
filter = mkIncludeFilter ../.;
name = "source";
};
ZED_UPDATE_EXPLANATION = "Zed has been installed using Nix. Auto-updates have thus been disabled.";
RELEASE_VERSION = version;
};
};
cargoArtifacts = craneLib.buildDepsOnly commonArgs;
in
craneLib.buildPackage (
commonArgs
// rec {
inherit cargoArtifacts;
patches =
[
# Zed uses cargo-install to install cargo-about during the script execution.
# We provide cargo-about ourselves and can skip this step.
# Until https://github.com/zed-industries/zed/issues/19971 is fixed,
# we also skip any crate for which the license cannot be determined.
(fetchpatch {
url = "https://raw.githubusercontent.com/NixOS/nixpkgs/1fd02d90c6c097f91349df35da62d36c19359ba7/pkgs/by-name/ze/zed-editor/0001-generate-licenses.patch";
hash = "sha256-cLgqLDXW1JtQ2OQFLd5UolAjfy7bMoTw40lEx2jA2pk=";
})
]
++ lib.optionals stdenv.hostPlatform.isDarwin [
# Livekit requires Swift 6
# We need this until livekit-rust sdk is used
(fetchpatch {
url = "https://raw.githubusercontent.com/NixOS/nixpkgs/1fd02d90c6c097f91349df35da62d36c19359ba7/pkgs/by-name/ze/zed-editor/0002-disable-livekit-darwin.patch";
hash = "sha256-whZ7RaXv8hrVzWAveU3qiBnZSrvGNEHTuyNhxgMIo5w=";
})
];
cargoLock = ../Cargo.lock;
cargoExtraArgs = "--package=zed --package=cli --features=gpui/runtime_shaders";
dontUseCmakeConfigure = true;
preBuild = ''
bash script/generate-licenses
'';
postFixup = lib.optionalString stdenv.hostPlatform.isLinux ''
patchelf --add-rpath ${gpu-lib}/lib $out/libexec/*
patchelf --add-rpath ${wayland}/lib $out/libexec/*
wrapProgram $out/libexec/zed-editor --suffix PATH : ${lib.makeBinPath [ nodejs_22 ]}
'';
RUSTFLAGS = if withGLES then "--cfg gles" else "";
gpu-lib = if withGLES then libglvnd else vulkan-loader;
preCheck = ''
export HOME=$(mktemp -d);
'';
cargoTestExtraArgs =
"-- "
+ lib.concatStringsSep " " (
nativeBuildInputs =
[
# Flaky: unreliably fails on certain hosts (including Hydra)
"--skip=zed::tests::test_window_edit_state_restoring_enabled"
clang # TODO: use pkgs.clangStdenv or ignore cargo config?
cmake
copyDesktopItems
curl
perl
pkg-config
protobuf
cargo-about
rustPlatform.bindgenHook
]
++ lib.optionals stdenv.hostPlatform.isLinux [ makeWrapper ]
++ lib.optionals stdenv.hostPlatform.isDarwin [
# TODO: move to overlay so it's usable in the shell
(cargo-bundle.overrideAttrs (old: {
version = "0.6.0-zed";
src = fetchFromGitHub {
owner = "zed-industries";
repo = "cargo-bundle";
rev = "zed-deploy";
hash = "sha256-OxYdTSiR9ueCvtt7Y2OJkvzwxxnxu453cMS+l/Bi5hM=";
};
}))
];
buildInputs =
[
curl
fontconfig
freetype
# TODO: need staticlib of this for linking the musl remote server.
# should make it a separate derivation/flake output
# see https://crane.dev/examples/cross-musl.html
libgit2
openssl
sqlite
zlib
zstd
]
++ lib.optionals stdenv.hostPlatform.isLinux [
# Fails on certain hosts (including Hydra) for unclear reason
"--skip=test_open_paths_action"
alsa-lib
libxkbcommon
wayland
gpu-lib
xorg.libxcb
]
);
++ lib.optionals stdenv.hostPlatform.isDarwin [
apple-sdk_15
darwin.apple_sdk.frameworks.System
(darwinMinVersionHook "10.15")
];
cargoExtraArgs = "--package=zed --package=cli --features=gpui/runtime_shaders";
env = {
ZSTD_SYS_USE_PKG_CONFIG = true;
FONTCONFIG_FILE = makeFontsConf {
fontDirectories = [
../assets/fonts/plex-mono
../assets/fonts/plex-sans
];
};
ZED_UPDATE_EXPLANATION = "Zed has been installed using Nix. Auto-updates have thus been disabled.";
RELEASE_VERSION = version;
RUSTFLAGS = if withGLES then "--cfg gles" else "";
# TODO: why are these not handled by the linker given that they're in buildInputs?
NIX_LDFLAGS = "-rpath ${
lib.makeLibraryPath [
gpu-lib
wayland
]
}";
LK_CUSTOM_WEBRTC = livekit-libwebrtc;
};
cargoVendorDir = craneLib.vendorCargoDeps {
inherit src cargoLock;
overrideVendorGitCheckout =
let
hasWebRtcSys = builtins.any (crate: crate.name == "webrtc-sys");
# `webrtc-sys` expects a staticlib; nixpkgs' `livekit-webrtc` has been patched to
# produce a `dylib`... patching `webrtc-sys`'s build script is the easier option
# TODO: send livekit sdk a PR to make this configurable
postPatch = ''
substituteInPlace webrtc-sys/build.rs --replace-fail \
"cargo:rustc-link-lib=static=webrtc" "cargo:rustc-link-lib=dylib=webrtc"
'';
in
crates: drv:
if hasWebRtcSys crates then
drv.overrideAttrs (o: {
postPatch = (o.postPatch or "") + postPatch;
})
else
drv;
};
};
cargoArtifacts = craneLib.buildDepsOnly (
commonArgs
// {
# TODO: figure out why the main derivation is still rebuilding deps...
# disable pre-building the deps for now
buildPhaseCargoCommand = "true";
# forcibly inhibit `doInstallCargoArtifacts`...
# https://github.com/ipetkov/crane/blob/1d19e2ec7a29dcc25845eec5f1527aaf275ec23e/lib/setupHooks/installCargoArtifactsHook.sh#L111
#
# it is, unfortunately, not overridable in `buildDepsOnly`:
# https://github.com/ipetkov/crane/blob/1d19e2ec7a29dcc25845eec5f1527aaf275ec23e/lib/buildDepsOnly.nix#L85
preBuild = "postInstallHooks=()";
doCheck = false;
}
);
in
craneLib.buildPackage (
lib.recursiveUpdate commonArgs {
inherit cargoArtifacts;
patches = lib.optionals stdenv.hostPlatform.isDarwin [
# Livekit requires Swift 6
# We need this until livekit-rust sdk is used
../script/patches/use-cross-platform-livekit.patch
];
dontUseCmakeConfigure = true;
# without the env var generate-licenses fails due to crane's fetchCargoVendor, see:
# https://github.com/zed-industries/zed/issues/19971#issuecomment-2688455390
preBuild = ''
ALLOW_MISSING_LICENSES=yes bash script/generate-licenses
echo nightly > crates/zed/RELEASE_CHANNEL
'';
# TODO: try craneLib.cargoNextest separate output
# for now we're not worried about running our test suite in the nix sandbox
doCheck = false;
installPhase =
if stdenv.hostPlatform.isDarwin then
''
runHook preInstall
# cargo-bundle expects the binary in target/release
mv target/release/zed target/release/zed
pushd crates/zed
# Note that this is GNU sed, while Zed's bundle-mac uses BSD sed
sed -i "s/package.metadata.bundle-stable/package.metadata.bundle/" Cargo.toml
sed -i "s/package.metadata.bundle-nightly/package.metadata.bundle/" Cargo.toml
export CARGO_BUNDLE_SKIP_BUILD=true
app_path=$(cargo bundle --release | xargs)
# We're not using the fork of cargo-bundle, so we must manually append plist extensions
# Remove closing tags from Info.plist (last two lines)
head -n -2 $app_path/Contents/Info.plist > Info.plist
# Append extensions
cat resources/info/*.plist >> Info.plist
# Add closing tags
printf "</dict>\n</plist>\n" >> Info.plist
mv Info.plist $app_path/Contents/Info.plist
app_path="$(cargo bundle --release | xargs)"
popd
mkdir -p $out/Applications $out/bin
# Zed expects git next to its own binary
ln -s ${git}/bin/git $app_path/Contents/MacOS/git
mv target/release/cli $app_path/Contents/MacOS/cli
mv $app_path $out/Applications/
ln -s ${git}/bin/git "$app_path/Contents/MacOS/git"
mv target/release/cli "$app_path/Contents/MacOS/cli"
mv "$app_path" $out/Applications/
# Physical location of the CLI must be inside the app bundle as this is used
# to determine which app to start
ln -s $out/Applications/Zed.app/Contents/MacOS/cli $out/bin/zed
ln -s "$out/Applications/Zed Nightly.app/Contents/MacOS/cli" $out/bin/zed
runHook postInstall
''
else
# TODO: icons should probably be named "zed-nightly". fix bundle-linux first
''
runHook preInstall
@@ -218,24 +249,31 @@ craneLib.buildPackage (
cp target/release/zed $out/libexec/zed-editor
cp target/release/cli $out/bin/zed
install -D ${commonSrc}/crates/zed/resources/app-icon@2x.png $out/share/icons/hicolor/1024x1024@2x/apps/zed.png
install -D ${commonSrc}/crates/zed/resources/app-icon.png $out/share/icons/hicolor/512x512/apps/zed.png
install -D "crates/zed/resources/app-icon-nightly@2x.png" \
"$out/share/icons/hicolor/1024x1024@2x/apps/zed.png"
install -D crates/zed/resources/app-icon-nightly.png \
$out/share/icons/hicolor/512x512/apps/zed.png
# extracted from https://github.com/zed-industries/zed/blob/v0.141.2/script/bundle-linux (envsubst)
# and https://github.com/zed-industries/zed/blob/v0.141.2/script/install.sh (final desktop file name)
# extracted from ../script/bundle-linux (envsubst) and
# ../script/install.sh (final desktop file name)
(
export DO_STARTUP_NOTIFY="true"
export APP_CLI="zed"
export APP_ICON="zed"
export APP_NAME="Zed"
export APP_NAME="Zed Nightly"
export APP_ARGS="%U"
mkdir -p "$out/share/applications"
${lib.getExe envsubst} < "crates/zed/resources/zed.desktop.in" > "$out/share/applications/dev.zed.Zed.desktop"
${lib.getExe envsubst} < "crates/zed/resources/zed.desktop.in" > "$out/share/applications/dev.zed.Zed-Nightly.desktop"
)
runHook postInstall
'';
# TODO: why isn't this also done on macOS?
postFixup = lib.optionalString stdenv.hostPlatform.isLinux ''
wrapProgram $out/libexec/zed-editor --suffix PATH : ${lib.makeBinPath [ nodejs_22 ]}
'';
meta = {
description = "High-performance, multiplayer code editor from the creators of Atom and Tree-sitter";
homepage = "https://zed.dev";

View File

@@ -1,65 +1,62 @@
{
pkgs ? import <nixpkgs> { },
lib,
mkShell,
stdenv,
stdenvAdapters,
makeFontsConf,
zed-editor,
rust-analyzer,
cargo-nextest,
nixfmt-rfc-style,
protobuf,
nodejs_22,
}:
let
inherit (pkgs) lib;
moldStdenv = stdenvAdapters.useMoldLinker stdenv;
mkShell' =
if stdenv.hostPlatform.isLinux then mkShell.override { stdenv = moldStdenv; } else mkShell;
in
pkgs.mkShell rec {
packages =
[
pkgs.clang
pkgs.curl
pkgs.cmake
pkgs.perl
pkgs.pkg-config
pkgs.protobuf
pkgs.rustPlatform.bindgenHook
pkgs.rust-analyzer
]
++ lib.optionals pkgs.stdenv.hostPlatform.isLinux [
pkgs.mold
];
mkShell' {
inputsFrom = [ zed-editor ];
packages = [
rust-analyzer
cargo-nextest
nixfmt-rfc-style
# TODO: package protobuf-language-server for editing zed.proto
# TODO: add other tools used in our scripts
buildInputs =
[
pkgs.bzip2
pkgs.curl
pkgs.fontconfig
pkgs.freetype
pkgs.libgit2
pkgs.openssl
pkgs.sqlite
pkgs.stdenv.cc.cc
pkgs.zlib
pkgs.zstd
pkgs.rustToolchain
]
++ lib.optionals pkgs.stdenv.hostPlatform.isLinux [
pkgs.alsa-lib
pkgs.libxkbcommon
pkgs.wayland
pkgs.xorg.libxcb
pkgs.vulkan-loader
]
++ lib.optional pkgs.stdenv.hostPlatform.isDarwin pkgs.apple-sdk_15;
# `build.nix` adds this to the `zed-editor` wrapper (see `postFixup`)
# we'll just put it on `$PATH`:
nodejs_22
];
LD_LIBRARY_PATH = lib.makeLibraryPath buildInputs;
# We set SDKROOT and DEVELOPER_DIR to the Xcode ones instead of the nixpkgs ones, because
# we need Swift 6.0 and nixpkgs doesn't have it
shellHook = lib.optionalString stdenv.hostPlatform.isDarwin ''
export SDKROOT="/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk";
export DEVELOPER_DIR="/Applications/Xcode.app/Contents/Developer";
'';
PROTOC="${pkgs.protobuf}/bin/protoc";
# We set SDKROOT and DEVELOPER_DIR to the Xcode ones instead of the nixpkgs ones,
# because we need Swift 6.0 and nixpkgs doesn't have it.
# Xcode is required for development anyways
shellHook = lib.optionalString pkgs.stdenv.hostPlatform.isDarwin ''
export SDKROOT="/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk";
export DEVELOPER_DIR="/Applications/Xcode.app/Contents/Developer";
'';
FONTCONFIG_FILE = pkgs.makeFontsConf {
fontDirectories = [
"./assets/fonts/zed-mono"
"./assets/fonts/zed-sans"
];
};
ZSTD_SYS_USE_PKG_CONFIG = true;
env =
let
baseEnvs =
(zed-editor.overrideAttrs (attrs: {
passthru = { inherit (attrs) env; };
})).env; # exfil `env`; it's not in drvAttrs
in
# unsetting this var so we download the staticlib during the build
(removeAttrs baseEnvs [ "LK_CUSTOM_WEBRTC" ])
// {
# note: unlike `$FONTCONFIG_FILE` in `build.nix`, this refers to relative paths
# outside the nix store instead of to `$src`
FONTCONFIG_FILE = makeFontsConf {
fontDirectories = [
"./assets/fonts/plex-mono"
"./assets/fonts/plex-sans"
];
};
PROTOC = "${protobuf}/bin/protoc";
};
}

View File

@@ -2,4 +2,11 @@
channel = "1.85"
profile = "minimal"
components = [ "rustfmt", "clippy" ]
targets = [ "x86_64-apple-darwin", "aarch64-apple-darwin", "x86_64-unknown-linux-gnu", "wasm32-wasip1", "x86_64-pc-windows-msvc" ]
targets = [
"x86_64-apple-darwin",
"aarch64-apple-darwin",
"x86_64-unknown-linux-gnu",
"x86_64-pc-windows-msvc",
"wasm32-wasip1", # extensions
"x86_64-unknown-linux-musl", # remote server
]

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -e

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -e

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
channel=$(cat crates/zed/RELEASE_CHANNEL)

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -exuo pipefail

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# Notes for fixing this script if it's broken:
# - if you see an error about "can't find perf_6.1" you need to install `linux-perf` from the

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -e

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu
source script/lib/deploy-helpers.sh

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu
source script/lib/deploy-helpers.sh

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
databases=$(psql --tuples-only --command "
SELECT

View File

@@ -2,11 +2,11 @@
set -euo pipefail
CARGO_ABOUT_VERSION="0.6.6"
CARGO_ABOUT_VERSION="0.6"
OUTPUT_FILE="${1:-$(pwd)/assets/licenses.md}"
TEMPLATE_FILE="script/licenses/template.md.hbs"
echo -n "" > "$OUTPUT_FILE"
echo -n "" >"$OUTPUT_FILE"
{
echo -e "# ###### THEME LICENSES ######\n"
@@ -16,21 +16,24 @@ echo -n "" > "$OUTPUT_FILE"
cat assets/icons/LICENSES
echo -e "\n# ###### CODE LICENSES ######\n"
} >> "$OUTPUT_FILE"
} >>"$OUTPUT_FILE"
if ! cargo install --list | grep "cargo-about v$CARGO_ABOUT_VERSION" > /dev/null; then
echo "Installing cargo-about@$CARGO_ABOUT_VERSION..."
cargo install "cargo-about@$CARGO_ABOUT_VERSION"
if ! cargo about --version | grep "cargo-about $CARGO_ABOUT_VERSION" >/dev/null; then
echo "Installing cargo-about@^$CARGO_ABOUT_VERSION..."
cargo install "cargo-about@^$CARGO_ABOUT_VERSION"
else
echo "cargo-about@$CARGO_ABOUT_VERSION is already installed."
echo "cargo-about@^$CARGO_ABOUT_VERSION is already installed."
fi
echo "Generating cargo licenses"
if [ -z "${ALLOW_MISSING_LICENSES-}" ]; then FAIL_FLAG=--fail; else FAIL_FLAG=""; fi
set -x
cargo about generate \
--fail \
$FAIL_FLAG \
--frozen \
-c script/licenses/zed-licenses.toml \
"$TEMPLATE_FILE" >> "$OUTPUT_FILE"
"$TEMPLATE_FILE" >>"$OUTPUT_FILE"
set +x
sed -i.bak 's/&quot;/"/g' "$OUTPUT_FILE"
sed -i.bak 's/&#x27;/'\''/g' "$OUTPUT_FILE" # The ` '\'' ` thing ends the string, appends a single quote, and re-opens the string

View File

@@ -2,24 +2,26 @@
set -euo pipefail
CARGO_ABOUT_VERSION="0.6.6"
CARGO_ABOUT_VERSION="0.6"
OUTPUT_FILE="${1:-$(pwd)/assets/licenses.csv}"
TEMPLATE_FILE="script/licenses/template.csv.hbs"
if ! cargo install --list | grep "cargo-about v$CARGO_ABOUT_VERSION" > /dev/null; then
echo "Installing cargo-about@$CARGO_ABOUT_VERSION..."
cargo install "cargo-about@$CARGO_ABOUT_VERSION"
if ! cargo about --version | grep "cargo-about $CARGO_ABOUT_VERSION" > /dev/null; then
echo "Installing cargo-about@^$CARGO_ABOUT_VERSION..."
cargo install "cargo-about@^$CARGO_ABOUT_VERSION"
else
echo "cargo-about@$CARGO_ABOUT_VERSION is already installed."
echo "cargo-about@^$CARGO_ABOUT_VERSION is already installed."
fi
echo "Generating cargo licenses"
set -x
cargo about generate \
--fail \
--frozen \
-c script/licenses/zed-licenses.toml \
"$TEMPLATE_FILE" \
$TEMPLATE_FILE \
| awk 'NR==1{print;next} NF{print | "sort"}' \
> "$OUTPUT_FILE"
set +x
echo "generate-licenses-csv completed. See $OUTPUT_FILE"

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -e

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
channel="$1"

View File

@@ -1,3 +1,3 @@
#!/bin/bash
#!/usr/bin/env bash
cargo run -p theme_importer -- "$@"

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
if [[ $# -ne 1 ]]; then
echo "Usage: $0 [production|staging|...]"

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
export GPUTOOLS_LOAD_GTMTLCAPTURE=1
export DYLD_LIBRARY_PATH="/usr/lib/system/introspection"

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# Try to make sure we are in the zed repo root
if [ ! -d "crates" ] || [ ! -d "script" ]; then

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# This script manages prompt overrides for the Zed editor.
#

View File

@@ -1,4 +1,6 @@
#!/bin/bash -e
#!/usr/bin/env bash
set -e
which minio > /dev/null || (echo "installing minio..."; brew install minio/stable/minio)
mkdir -p .blob_store/the-extensions-bucket

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -e
cargo run -p collab migrate

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
if [ -z "$1" ]; then
cargo run -p storybook

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -euo pipefail
which gh >/dev/null || brew install gh

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -eu
source script/lib/deploy-helpers.sh

View File

@@ -1,14 +1,11 @@
(
import
(
let
lock = builtins.fromJSON (builtins.readFile ./flake.lock);
in
fetchTarball {
url = lock.nodes.flake-compat.locked.url or "https://github.com/edolstra/flake-compat/archive/${lock.nodes.flake-compat.locked.rev}.tar.gz";
sha256 = lock.nodes.flake-compat.locked.narHash;
}
)
{src = ./.;}
)
.shellNix
(import (
let
lock = builtins.fromJSON (builtins.readFile ./flake.lock);
in
fetchTarball {
url =
lock.nodes.flake-compat.locked.url
or "https://github.com/edolstra/flake-compat/archive/${lock.nodes.flake-compat.locked.rev}.tar.gz";
sha256 = lock.nodes.flake-compat.locked.narHash;
}
) { src = ./.; }).shellNix
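
flake-compat exposes both `shellNix` (selected above for `shell.nix`) and `defaultNix`. For completeness, a sketch of the corresponding non-flake build entry point, assuming a `default.nix` shim that is not part of this diff, differs only in the final attribute:

# default.nix (assumed companion shim): lets plain `nix-build` evaluate the
# flake's default package without flakes enabled.
(import (
  let
    lock = builtins.fromJSON (builtins.readFile ./flake.lock);
  in
  fetchTarball {
    url =
      lock.nodes.flake-compat.locked.url
      or "https://github.com/edolstra/flake-compat/archive/${lock.nodes.flake-compat.locked.rev}.tar.gz";
    sha256 = lock.nodes.flake-compat.locked.narHash;
  }
) { src = ./.; }).defaultNix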