Compare commits


37 Commits

Author SHA1 Message Date
Zed Bot
c6ee7bc5a4 Bump to 0.206.7 for @maxdeviant 2025-10-06 17:51:28 +00:00
Piotr Osiewicz
b444b7f567 python: Fix user settings not getting passed on for Ty (#39174)
Closes #39144

Release Notes:

- python: Fixed user settings not being respected with Ty language
server.
2025-10-06 19:23:42 +02:00
Marshall Bowers
598ba645d7 x_ai: Add support for Grok 4 Fast (#39492)
This PR adds support for Grok 4 Fast.

Release Notes:

- Added support for Grok 4 Fast models.

Co-authored-by: David Kleingeld <davidsk@zed.dev>
2025-10-06 12:57:42 -04:00
Jowell Young
667ff5b8fa x_ai: Add support for tools and images with custom models (#38792)
After this change, custom models can declare `supports_images`,
`supports_tools`, and `parallel_tool_calls` properties. Our
`settings.json` will be as follows:
```json
  "language_models": {
     "x_ai": {
       "api_url": "https://api.x.ai/v1",
       "available_models": [
         {
           "name": "grok-4-fast-reasoning",
           "display_name": "Grok 4 Fast Reasoning",
           "max_tokens": 2000000,
           "max_output_tokens": 64000,
           "supports_tools": true,
           "parallel_tool_calls": true
         },
         {
           "name": "grok-4-fast-non-reasoning",
           "display_name": "Grok 4 Fast Non-Reasoning",
           "max_tokens": 2000000,
           "max_output_tokens": 64000,
           "supports_images": true,
         }
       ]
     }
   }

```

Closes https://github.com/zed-industries/zed/issues/38752

Release Notes:

- xAI: Added support for configuring tool and image support for custom
models
2025-10-06 12:57:37 -04:00
Marshall Bowers
c97a7506ef language_models: Send a header indicating that the client supports xAI models (#38931)
This PR adds an `x-zed-client-supports-x-ai` header to the `GET /models`
request sent to Cloud to indicate that the client supports xAI models.

Release Notes:

- N/A
2025-10-06 12:56:38 -04:00
Marshall Bowers
67566ad381 x_ai: Fix Model::from_id for Grok 4 (#38930)
This PR fixes `x_ai::Model::from_id`, which was not properly handling
`grok-4`.

Release Notes:

- N/A
2025-10-06 12:56:14 -04:00
Marshall Bowers
8afabc18a7 language_models: Add xAI support to Zed Cloud provider (#38928)
This PR adds xAI support to the Zed Cloud provider.

Release Notes:

- N/A
2025-10-06 12:56:08 -04:00
Joseph T. Lyons
f6695400b1 v0.206.x stable 2025-10-01 11:27:38 -04:00
Smit Barmase
da48536e7d copilot: Ensure minimum Node version (#38945)
Closes #38918

Release Notes:

- N/A
2025-10-01 15:59:27 +05:30
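The version gate this commit describes can be sketched in plain Rust. This is illustrative only — Zed's actual implementation shells out to `node --version` and parses the result with the `semver` crate; the hand-rolled tuple parser below is an assumption made to keep the sketch dependency-free:

```rust
// Parse a `node --version` string (e.g. "v20.8.0") into (major, minor, patch).
fn parse_node_version(raw: &str) -> Option<(u64, u64, u64)> {
    let mut parts = raw.trim().trim_start_matches('v').splitn(3, '.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next()?.parse().ok()?;
    let patch = parts.next()?.parse().ok()?;
    Some((major, minor, patch))
}

// Tuples compare lexicographically, which matches semantic-version ordering
// for plain major.minor.patch versions.
fn meets_minimum(raw: &str, min: (u64, u64, u64)) -> bool {
    parse_node_version(raw).is_some_and(|v| v >= min)
}
```

A failed parse counts as "does not meet the minimum", so a garbled `node --version` output fails closed rather than launching the language server on an unknown runtime.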
Bennet Bo Fenner
7eacefd8ac worktree: Remove unwrap in BackgroundScanner::update_ignore_status (#39191)
We've seen this panic come up in the last two weeks; it might be caused
by #33592. However, we are not sure which paths can cause this
`unwrap()` to fail, so we are adding some logging around it so that the
next time someone opens a bug report we can diagnose the issue further.

Fixes ZED-1F6

Release Notes:

- Fixed an issue where Zed could crash when specific paths were included
in a global `.gitignore` file
2025-10-01 11:54:33 +02:00
Ben Kunkle
cd9eb77ace Fix bug in code action formatter handling (#39246)
Closes #39112

Release Notes:

- Fixed an issue with code actions on format where specifying multiple
code actions in the same code-actions block that resolved to code
actions from different language servers could result in conflicting
edits being applied and mangled buffer text.
2025-09-30 22:08:58 -04:00
Joseph T. Lyons
62f2df90ed zed 0.206.6 2025-09-30 15:17:53 -04:00
Anthony Eid
6412621855 debugger: Fix python debug scenario not showing up in code actions (#39224)
The bug happened because the Python locator was checking for a quote
before the ZED task variable. Removing that part of the check fixed the
issue.

Closes #39179 

Release Notes:

- Fixed Python debug tasks not showing up in code actions or the debug picker
2025-09-30 13:01:10 -06:00
Lukas Wirth
bc42327f8e terminals: Remove (now) incorrect alacritty workaround for task spawning 2025-09-30 20:32:24 +02:00
Sergei Zharinov
fa163e7049 gpui: Respect font smoothing on macOS (#39197)
- Closes #38847
- See also: #37622 and #38467

Release Notes:

- Fonts are now rendered in accordance with the `AppleFontSmoothing`
setting.
2025-09-30 18:31:32 +03:00
Ben Brandt
5041b51d49 acp: Notify of latest agent version only after successful download (#39201)
Before, we would notify the user even if the download failed. We were
also overwriting the directory, which meant a user could get stuck in a
loop if a previous download had failed.

Release Notes:

- acp: Fixed the user seeing the update prompt in a loop because of a
previous failed download
2025-09-30 15:54:20 +02:00
Conrad Irwin
b801ab8c9c Fix panic in UnwrapSyntaxNode (#39139)
Closes #39139
Fixes ZED-1HY

Release Notes:

- Fixed a panic in UnwrapSyntaxNode in multi-buffers
2025-09-29 15:13:46 -06:00
Joseph T. Lyons
e2b5fbc4fd zed 0.206.5 2025-09-29 17:02:11 -04:00
Richard Feldman
8e75ee673f Default to Sonnet 4.5 in BYOK (#39132)
<img width="381" height="204" alt="Screenshot 2025-09-29 at 2 29 58 PM"
src="https://github.com/user-attachments/assets/c7aaf0b0-b09b-4ed9-8113-8d7b18eefc2f"
/>


Release Notes:

- Claude Sonnet 4.5 and 4.5 Thinking are now the recommended Anthropic
models
2025-09-29 16:55:37 -04:00
Richard Feldman
3dcae67cb5 Add Sonnet 4.5 support (#39127)
Release Notes:

- Added support for Claude Sonnet 4.5 for Bring-Your-Own-Key (BYOK)
2025-09-29 16:55:37 -04:00
Zed Bot
e1876a1486 Bump to 0.206.4 for @ConradIrwin 2025-09-29 17:54:16 +00:00
Ben Brandt
c81b4c8f7e acp: Add NO_PROXY if not set otherwise to not proxy localhost urls (#39100)
Since we might run MCP servers locally for an agent, we don't want to
use the proxy for those.
We set this only if the user has set a proxy but not a custom NO_PROXY
env var.

Closes #38839

Release Notes:

- acp: Don't route local MCP servers through the proxy, if one is set
2025-09-29 13:47:11 +02:00
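The fallback described above can be sketched as follows (function and parameter names are illustrative, not Zed's actual API): a user-provided NO_PROXY always wins; otherwise, when a proxy is configured, NO_PROXY defaults to localhost so local MCP servers are reached directly:

```rust
use std::collections::HashMap;

// Insert NO_PROXY into the child-process environment: keep the user's own
// value if present, otherwise default to localhost when a proxy is set.
fn apply_no_proxy(
    env: &mut HashMap<String, String>,
    user_no_proxy: Option<String>,
    proxy_url: Option<&str>,
) {
    if let Some(no_proxy) = user_no_proxy {
        env.insert("NO_PROXY".to_owned(), no_proxy);
    } else if proxy_url.is_some() {
        // We sometimes need local MCP servers that we don't want to proxy.
        env.insert("NO_PROXY".to_owned(), "localhost,127.0.0.1".to_owned());
    }
}
```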
Lukas Wirth
d54783bccd acp_thread: Fix terminal tool incorrectly redirecting stdin to /dev/null 2025-09-29 11:39:44 +02:00
Lukas Wirth
8baaea6ae3 auto_update: Unmount update disk image in the background (#38867)
Release Notes:

- Fixed potentially temporarily hanging on macOS when updating the app
2025-09-27 19:55:45 +02:00
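The pattern behind this fix — moving slow cleanup out of `Drop` so the caller isn't blocked — can be sketched with a plain background thread. This is illustrative only: Zed's version spawns the work on its background executor and runs `hdiutil detach` there, and the channel below exists purely so the sketch can be observed:

```rust
use std::sync::mpsc;
use std::thread;

struct Unmounter {
    mount_path: String,
    done_tx: mpsc::Sender<String>,
}

impl Drop for Unmounter {
    fn drop(&mut self) {
        // Take ownership of the path so the background task can use it,
        // then return from Drop immediately instead of blocking on cleanup.
        let path = std::mem::take(&mut self.mount_path);
        let tx = self.done_tx.clone();
        thread::spawn(move || {
            // The slow work (e.g. unmounting a disk image) happens here.
            tx.send(path).ok();
        });
    }
}
```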
Joseph T. Lyons
3ff1eef029 zed 0.206.3 2025-09-26 14:23:08 -04:00
Conrad Irwin
2828705f39 Revert "Fix arrow function detection in TypeScript/JavaScript outline (#38411)" (#38982)
This reverts commit 1bbf98aea6.

We found that #38411 caused problems where anonymous functions are
included too many times in the outline. We'd like to figure out a better
fix before shipping this to stable.

Fixes #38956

Release Notes:

- (preview only) Reverted changes to the outline view
2025-09-26 13:55:06 -04:00
Smit Barmase
a3f2838380 editor: Fix predict edit at cursor action when show_edit_predictions is false (#38821)
Closes #37601 

Regressed in https://github.com/zed-industries/zed/pull/36469. 

Edit: Original issue https://github.com/zed-industries/zed/issues/25744
is fixed for Zeta in this PR. For Copilot, it will be covered in a
follow-up: even after discarding, Copilot still produces a prediction on
suggest, which is a bug.

Release Notes:

- Fixed an issue where predict edit at cursor didn't work when
`show_edit_predictions` is `false`.
2025-09-26 06:23:27 -04:00
Derek Nguyen
aeddd518a6 python: Fix ty archive extraction on Linux (#38917)
Closes #38553

Release Notes:

- Fixed the wrong AssetKind being specified on Linux for ty

As discussed in the linked issue, all of the non-Windows assets for ty
are `tar.gz` files. This change applies that fix.
2025-09-25 19:31:55 -04:00
Joseph T. Lyons
5e4d3970d1 zed 0.206.2 2025-09-25 11:49:48 -04:00
Ben Brandt
0f0f9c93ce acp: Use ACP error types in read_text_file (#38863)
- Map path lookup and internal failures to acp::Error 
- Return INVALID_PARAMS for reads beyond EOF

Release Notes:

- acp: Return more informative error types from `read_text_file` to
agents
2025-09-25 14:09:43 +02:00
Ben Brandt
4ee165dd60 acp: Fix read_text_file erroring on empty files (#38856)
The previous validation was too strict and didn't permit reading empty
files.

Addresses: https://github.com/google-gemini/gemini-cli/issues/9280

Release Notes:

- acp: Fix `read_text_file` returning errors for empty files
2025-09-25 12:13:29 +02:00
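The off-by-one can be sketched with a minimal `Point` type (mirroring the row/column point compared in the diff below). An empty buffer's max point is row 0, column 0; the old row-only check `line >= max_point.row` rejects a read starting at row 0, while the new point comparison `start_position > max_point` accepts it:

```rust
// Derived PartialOrd compares fields in order: row first, then column —
// the same lexicographic order a text coordinate needs.
#[derive(PartialEq, PartialOrd)]
struct Point {
    row: u32,
    column: u32,
}

// Old check: rejects row 0 even for an empty file, since 0 >= 0.
fn old_check_rejects(start_row: u32, max: &Point) -> bool {
    start_row >= max.row
}

// New check: (0, 0) > (0, 0) is false, so reading an empty file succeeds.
fn new_check_rejects(start_row: u32, max: &Point) -> bool {
    Point { row: start_row, column: 0 } > *max
}
```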
Zed Bot
94d19baf4e Bump to 0.206.1 for @ConradIrwin 2025-09-25 03:12:47 +00:00
Conrad Irwin
70cb2f5d55 Whitespace map more (#38827)
Release Notes:

- N/A
2025-09-24 21:09:20 -06:00
Conrad Irwin
15baa8f5ed Only allow single chars for whitespace map (#38825)
Release Notes:

- Only allow single characters in the whitespace map
2025-09-24 16:30:50 -06:00
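A minimal sketch of the single-character validation (Zed's actual settings code may differ). The subtlety is counting characters rather than bytes: `"·"` is one character but two UTF-8 bytes, so `len() == 1` would wrongly reject it:

```rust
// True iff the string contains exactly one Unicode scalar value.
// Checking the iterator avoids an O(n) count on pathological inputs.
fn is_single_char(s: &str) -> bool {
    let mut chars = s.chars();
    chars.next().is_some() && chars.next().is_none()
}
```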
Lukas Wirth
c68345f502 editor: Fix invalid anchors in hover_links::surrounding_filename (#38766)
Fixes ZED-1K3

Release Notes:

- N/A
2025-09-24 13:09:55 +02:00
Lukas Wirth
72007a075a editor: Prevent panics in BlockChunks if the block spans more than 128 lines (#38763)
Not an ideal fix, but a proper one will require restructuring the
iterator state (which would be easier if Rust had first-class
generators).
Fixes ZED-1MB

Release Notes:

- N/A
2025-09-24 13:09:55 +02:00
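The panic being guarded against comes from building a per-line bitmask with a plain shift: `1u128 << n` overflows once `n` reaches 128, which is why the diff below clamps the line count to `u128::BITS` and switches to `unbounded_shl`. A sketch of the same mask using `checked_shl` (an assumption for this sketch, chosen because it works on older toolchains too):

```rust
// Mask with the low `line_count` bits set, valid for 0..=128 lines.
// 1 << 128 would overflow a u128; checked_shl returns None there, and the
// mask for a full 128 lines is simply all bits set.
fn line_mask(line_count: u32) -> u128 {
    1u128.checked_shl(line_count).map_or(u128::MAX, |v| v - 1)
}
```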
Joseph T. Lyons
c68602612f v0.206.x preview 2025-09-23 16:14:50 -04:00
43 changed files with 989 additions and 627 deletions

Cargo.lock generated

@@ -195,9 +195,9 @@ dependencies = [
[[package]]
name = "agent-client-protocol"
version = "0.4.2"
version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "00e33b9f4bd34d342b6f80b7156d3a37a04aeec16313f264001e52d6a9118600"
checksum = "3aaa2bd05a2401887945f8bfd70026e90bc3cf96c62ab9eba2779835bf21dc60"
dependencies = [
"anyhow",
"async-broadcast",
@@ -3722,6 +3722,7 @@ dependencies = [
"paths",
"project",
"rpc",
"semver",
"serde",
"serde_json",
"settings",
@@ -21217,7 +21218,7 @@ dependencies = [
[[package]]
name = "zed"
version = "0.206.0"
version = "0.206.7"
dependencies = [
"acp_tools",
"activity_indicator",


@@ -443,7 +443,7 @@ zlog_settings = { path = "crates/zlog_settings" }
# External crates
#
agent-client-protocol = { version = "0.4.2", features = ["unstable"] }
agent-client-protocol = { version = "0.4.3", features = ["unstable"] }
aho-corasick = "1.1"
alacritty_terminal = "0.25.1-rc1"
any_vec = "0.14"


@@ -1780,20 +1780,26 @@ impl AcpThread {
limit: Option<u32>,
reuse_shared_snapshot: bool,
cx: &mut Context<Self>,
) -> Task<Result<String>> {
) -> Task<Result<String, acp::Error>> {
// Args are 1-based, move to 0-based
let line = line.unwrap_or_default().saturating_sub(1);
let limit = limit.unwrap_or(u32::MAX);
let project = self.project.clone();
let action_log = self.action_log.clone();
cx.spawn(async move |this, cx| {
let load = project.update(cx, |project, cx| {
let path = project
.project_path_for_absolute_path(&path, cx)
.context("invalid path")?;
anyhow::Ok(project.open_buffer(path, cx))
});
let buffer = load??.await?;
let load = project
.update(cx, |project, cx| {
let path = project
.project_path_for_absolute_path(&path, cx)
.ok_or_else(|| {
acp::Error::resource_not_found(Some(path.display().to_string()))
})?;
Ok(project.open_buffer(path, cx))
})
.map_err(|e| acp::Error::internal_error().with_data(e.to_string()))
.flatten()?;
let buffer = load.await?;
let snapshot = if reuse_shared_snapshot {
this.read_with(cx, |this, _| {
@@ -1820,15 +1826,17 @@ impl AcpThread {
};
let max_point = snapshot.max_point();
if line >= max_point.row {
anyhow::bail!(
let start_position = Point::new(line, 0);
if start_position > max_point {
return Err(acp::Error::invalid_params().with_data(format!(
"Attempting to read beyond the end of the file, line {}:{}",
max_point.row + 1,
max_point.column
);
)));
}
let start = snapshot.anchor_before(Point::new(line, 0));
let start = snapshot.anchor_before(start_position);
let end = snapshot.anchor_before(Point::new(line.saturating_add(limit), 0));
project.update(cx, |project, cx| {
@@ -1977,7 +1985,7 @@ impl AcpThread {
let terminal_id = terminal_id.clone();
async move |_this, cx| {
let env = env.await;
let (command, args) = ShellBuilder::new(
let (task_command, task_args) = ShellBuilder::new(
project
.update(cx, |project, cx| {
project
@@ -1988,13 +1996,13 @@ impl AcpThread {
&Shell::Program(get_default_system_shell()),
)
.redirect_stdin_to_dev_null()
.build(Some(command), &args);
.build(Some(command.clone()), &args);
let terminal = project
.update(cx, |project, cx| {
project.create_terminal_task(
task::SpawnInTerminal {
command: Some(command.clone()),
args: args.clone(),
command: Some(task_command),
args: task_args,
cwd: cwd.clone(),
env,
..Default::default()
@@ -2449,6 +2457,81 @@ mod tests {
assert_eq!(content, "two\nthree\n");
// Invalid
let err = thread
.update(cx, |thread, cx| {
thread.read_text_file(path!("/tmp/foo").into(), Some(6), Some(2), false, cx)
})
.await
.unwrap_err();
assert_eq!(
err.to_string(),
"Invalid params: \"Attempting to read beyond the end of the file, line 5:0\""
);
}
#[gpui::test]
async fn test_reading_empty_file(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(path!("/tmp"), json!({"foo": ""})).await;
let project = Project::test(fs.clone(), [], cx).await;
project
.update(cx, |project, cx| {
project.find_or_create_worktree(path!("/tmp/foo"), true, cx)
})
.await
.unwrap();
let connection = Rc::new(FakeAgentConnection::new());
let thread = cx
.update(|cx| connection.new_thread(project, Path::new(path!("/tmp")), cx))
.await
.unwrap();
// Whole file
let content = thread
.update(cx, |thread, cx| {
thread.read_text_file(path!("/tmp/foo").into(), None, None, false, cx)
})
.await
.unwrap();
assert_eq!(content, "");
// Only start line
let content = thread
.update(cx, |thread, cx| {
thread.read_text_file(path!("/tmp/foo").into(), Some(1), None, false, cx)
})
.await
.unwrap();
assert_eq!(content, "");
// Only limit
let content = thread
.update(cx, |thread, cx| {
thread.read_text_file(path!("/tmp/foo").into(), None, Some(2), false, cx)
})
.await
.unwrap();
assert_eq!(content, "");
// Range
let content = thread
.update(cx, |thread, cx| {
thread.read_text_file(path!("/tmp/foo").into(), Some(1), Some(1), false, cx)
})
.await
.unwrap();
assert_eq!(content, "");
// Invalid
let err = thread
.update(cx, |thread, cx| {
@@ -2459,9 +2542,40 @@ mod tests {
assert_eq!(
err.to_string(),
"Attempting to read beyond the end of the file, line 5:0"
"Invalid params: \"Attempting to read beyond the end of the file, line 1:0\""
);
}
#[gpui::test]
async fn test_reading_non_existing_file(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(path!("/tmp"), json!({})).await;
let project = Project::test(fs.clone(), [], cx).await;
project
.update(cx, |project, cx| {
project.find_or_create_worktree(path!("/tmp"), true, cx)
})
.await
.unwrap();
let connection = Rc::new(FakeAgentConnection::new());
let thread = cx
.update(|cx| connection.new_thread(project, Path::new(path!("/tmp")), cx))
.await
.unwrap();
// Out of project file
let err = thread
.update(cx, |thread, cx| {
thread.read_text_file(path!("/foo").into(), None, None, false, cx)
})
.await
.unwrap_err();
assert_eq!(err.code, acp::ErrorCode::RESOURCE_NOT_FOUND.code);
}
#[gpui::test]
async fn test_succeeding_canceled_toolcall(cx: &mut TestAppContext) {


@@ -99,6 +99,9 @@ pub fn load_proxy_env(cx: &mut App) -> HashMap<String, String> {
if let Some(no_proxy) = read_no_proxy_from_env() {
env.insert("NO_PROXY".to_owned(), no_proxy);
} else if proxy_url.is_some() {
// We sometimes need local MCP servers that we don't want to proxy
env.insert("NO_PROXY".to_owned(), "localhost,127.0.0.1".to_owned());
}
env


@@ -67,7 +67,6 @@ pub enum Model {
alias = "claude-opus-4-1-thinking-latest"
)]
ClaudeOpus4_1Thinking,
#[default]
#[serde(rename = "claude-sonnet-4", alias = "claude-sonnet-4-latest")]
ClaudeSonnet4,
#[serde(
@@ -75,6 +74,14 @@ pub enum Model {
alias = "claude-sonnet-4-thinking-latest"
)]
ClaudeSonnet4Thinking,
#[default]
#[serde(rename = "claude-sonnet-4-5", alias = "claude-sonnet-4-5-latest")]
ClaudeSonnet4_5,
#[serde(
rename = "claude-sonnet-4-5-thinking",
alias = "claude-sonnet-4-5-thinking-latest"
)]
ClaudeSonnet4_5Thinking,
#[serde(rename = "claude-3-7-sonnet", alias = "claude-3-7-sonnet-latest")]
Claude3_7Sonnet,
#[serde(
@@ -133,6 +140,14 @@ impl Model {
return Ok(Self::ClaudeOpus4);
}
if id.starts_with("claude-sonnet-4-5-thinking") {
return Ok(Self::ClaudeSonnet4_5Thinking);
}
if id.starts_with("claude-sonnet-4-5") {
return Ok(Self::ClaudeSonnet4_5);
}
if id.starts_with("claude-sonnet-4-thinking") {
return Ok(Self::ClaudeSonnet4Thinking);
}
@@ -180,6 +195,8 @@ impl Model {
Self::ClaudeOpus4_1Thinking => "claude-opus-4-1-thinking-latest",
Self::ClaudeSonnet4 => "claude-sonnet-4-latest",
Self::ClaudeSonnet4Thinking => "claude-sonnet-4-thinking-latest",
Self::ClaudeSonnet4_5 => "claude-sonnet-4-5-latest",
Self::ClaudeSonnet4_5Thinking => "claude-sonnet-4-5-thinking-latest",
Self::Claude3_5Sonnet => "claude-3-5-sonnet-latest",
Self::Claude3_7Sonnet => "claude-3-7-sonnet-latest",
Self::Claude3_7SonnetThinking => "claude-3-7-sonnet-thinking-latest",
@@ -197,6 +214,7 @@ impl Model {
Self::ClaudeOpus4 | Self::ClaudeOpus4Thinking => "claude-opus-4-20250514",
Self::ClaudeOpus4_1 | Self::ClaudeOpus4_1Thinking => "claude-opus-4-1-20250805",
Self::ClaudeSonnet4 | Self::ClaudeSonnet4Thinking => "claude-sonnet-4-20250514",
Self::ClaudeSonnet4_5 | Self::ClaudeSonnet4_5Thinking => "claude-sonnet-4-5-20250929",
Self::Claude3_5Sonnet => "claude-3-5-sonnet-latest",
Self::Claude3_7Sonnet | Self::Claude3_7SonnetThinking => "claude-3-7-sonnet-latest",
Self::Claude3_5Haiku => "claude-3-5-haiku-latest",
@@ -215,6 +233,8 @@ impl Model {
Self::ClaudeOpus4_1Thinking => "Claude Opus 4.1 Thinking",
Self::ClaudeSonnet4 => "Claude Sonnet 4",
Self::ClaudeSonnet4Thinking => "Claude Sonnet 4 Thinking",
Self::ClaudeSonnet4_5 => "Claude Sonnet 4.5",
Self::ClaudeSonnet4_5Thinking => "Claude Sonnet 4.5 Thinking",
Self::Claude3_7Sonnet => "Claude 3.7 Sonnet",
Self::Claude3_5Sonnet => "Claude 3.5 Sonnet",
Self::Claude3_7SonnetThinking => "Claude 3.7 Sonnet Thinking",
@@ -236,6 +256,8 @@ impl Model {
| Self::ClaudeOpus4_1Thinking
| Self::ClaudeSonnet4
| Self::ClaudeSonnet4Thinking
| Self::ClaudeSonnet4_5
| Self::ClaudeSonnet4_5Thinking
| Self::Claude3_5Sonnet
| Self::Claude3_5Haiku
| Self::Claude3_7Sonnet
@@ -261,6 +283,8 @@ impl Model {
| Self::ClaudeOpus4_1Thinking
| Self::ClaudeSonnet4
| Self::ClaudeSonnet4Thinking
| Self::ClaudeSonnet4_5
| Self::ClaudeSonnet4_5Thinking
| Self::Claude3_5Sonnet
| Self::Claude3_5Haiku
| Self::Claude3_7Sonnet
@@ -280,6 +304,8 @@ impl Model {
| Self::ClaudeOpus4_1Thinking
| Self::ClaudeSonnet4
| Self::ClaudeSonnet4Thinking
| Self::ClaudeSonnet4_5
| Self::ClaudeSonnet4_5Thinking
| Self::Claude3_5Sonnet
| Self::Claude3_7Sonnet
| Self::Claude3_7SonnetThinking
@@ -299,6 +325,8 @@ impl Model {
| Self::ClaudeOpus4_1Thinking
| Self::ClaudeSonnet4
| Self::ClaudeSonnet4Thinking
| Self::ClaudeSonnet4_5
| Self::ClaudeSonnet4_5Thinking
| Self::Claude3_5Sonnet
| Self::Claude3_7Sonnet
| Self::Claude3_7SonnetThinking
@@ -318,6 +346,7 @@ impl Model {
Self::ClaudeOpus4
| Self::ClaudeOpus4_1
| Self::ClaudeSonnet4
| Self::ClaudeSonnet4_5
| Self::Claude3_5Sonnet
| Self::Claude3_7Sonnet
| Self::Claude3_5Haiku
@@ -327,6 +356,7 @@ impl Model {
Self::ClaudeOpus4Thinking
| Self::ClaudeOpus4_1Thinking
| Self::ClaudeSonnet4Thinking
| Self::ClaudeSonnet4_5Thinking
| Self::Claude3_7SonnetThinking => AnthropicModelMode::Thinking {
budget_tokens: Some(4_096),
},


@@ -139,18 +139,25 @@ impl Tool for TerminalTool {
env
});
let build_cmd = {
let input_command = input.command.clone();
move || {
ShellBuilder::new(
remote_shell.as_deref(),
&Shell::Program(get_default_system_shell()),
)
.redirect_stdin_to_dev_null()
.build(Some(input_command.clone()), &[])
}
};
let Some(window) = window else {
// Headless setup, a test or eval. Our terminal subsystem requires a workspace,
// so bypass it and provide a convincing imitation using a pty.
let task = cx.background_spawn(async move {
let env = env.await;
let pty_system = native_pty_system();
let (command, args) = ShellBuilder::new(
remote_shell.as_deref(),
&Shell::Program(get_default_system_shell()),
)
.redirect_stdin_to_dev_null()
.build(Some(input.command.clone()), &[]);
let (command, args) = build_cmd();
let mut cmd = CommandBuilder::new(command);
cmd.args(args);
for (k, v) in env {
@@ -187,16 +194,10 @@ impl Tool for TerminalTool {
};
};
let command = input.command.clone();
let terminal = cx.spawn({
let project = project.downgrade();
async move |cx| {
let (command, args) = ShellBuilder::new(
remote_shell.as_deref(),
&Shell::Program(get_default_system_shell()),
)
.redirect_stdin_to_dev_null()
.build(Some(input.command), &[]);
let (command, args) = build_cmd();
let env = env.await;
project
.update(cx, |project, cx| {
@@ -215,18 +216,18 @@ impl Tool for TerminalTool {
}
});
let command_markdown =
cx.new(|cx| Markdown::new(format!("```bash\n{}\n```", command).into(), None, None, cx));
let card = cx.new(|cx| {
TerminalToolCard::new(
command_markdown.clone(),
working_dir.clone(),
cx.entity_id(),
let command_markdown = cx.new(|cx| {
Markdown::new(
format!("```bash\n{}\n```", input.command).into(),
None,
None,
cx,
)
});
let card =
cx.new(|cx| TerminalToolCard::new(command_markdown, working_dir, cx.entity_id(), cx));
let output = cx.spawn({
let card = card.clone();
async move |cx| {
@@ -267,7 +268,7 @@ impl Tool for TerminalTool {
let previous_len = content.len();
let (processed_content, finished_with_empty_output) = process_content(
&content,
&command,
&input.command,
exit_status.map(portable_pty::ExitStatus::from),
);


@@ -3,7 +3,8 @@ use client::{Client, TelemetrySettings};
use db::RELEASE_CHANNEL;
use db::kvp::KEY_VALUE_STORE;
use gpui::{
App, AppContext as _, AsyncApp, Context, Entity, Global, SemanticVersion, Task, Window, actions,
App, AppContext as _, AsyncApp, BackgroundExecutor, Context, Entity, Global, SemanticVersion,
Task, Window, actions,
};
use http_client::{AsyncBody, HttpClient, HttpClientWithUrl};
use paths::remote_servers_dir;
@@ -12,6 +13,7 @@ use serde::{Deserialize, Serialize};
use settings::{Settings, SettingsStore};
use smol::{fs, io::AsyncReadExt};
use smol::{fs::File, process::Command};
use std::mem;
use std::{
env::{
self,
@@ -84,31 +86,37 @@ pub struct JsonRelease {
pub url: String,
}
struct MacOsUnmounter {
struct MacOsUnmounter<'a> {
mount_path: PathBuf,
background_executor: &'a BackgroundExecutor,
}
impl Drop for MacOsUnmounter {
impl Drop for MacOsUnmounter<'_> {
fn drop(&mut self) {
let unmount_output = std::process::Command::new("hdiutil")
.args(["detach", "-force"])
.arg(&self.mount_path)
.output();
match unmount_output {
Ok(output) if output.status.success() => {
log::info!("Successfully unmounted the disk image");
}
Ok(output) => {
log::error!(
"Failed to unmount disk image: {:?}",
String::from_utf8_lossy(&output.stderr)
);
}
Err(error) => {
log::error!("Error while trying to unmount disk image: {:?}", error);
}
}
let mount_path = mem::take(&mut self.mount_path);
self.background_executor
.spawn(async move {
let unmount_output = Command::new("hdiutil")
.args(["detach", "-force"])
.arg(&mount_path)
.output()
.await;
match unmount_output {
Ok(output) if output.status.success() => {
log::info!("Successfully unmounted the disk image");
}
Ok(output) => {
log::error!(
"Failed to unmount disk image: {:?}",
String::from_utf8_lossy(&output.stderr)
);
}
Err(error) => {
log::error!("Error while trying to unmount disk image: {:?}", error);
}
}
})
.detach();
}
}
@@ -896,6 +904,7 @@ async fn install_release_macos(
// Create an MacOsUnmounter that will be dropped (and thus unmount the disk) when this function exits
let _unmounter = MacOsUnmounter {
mount_path: mount_path.clone(),
background_executor: cx.background_executor(),
};
let output = Command::new("rsync")


@@ -22,7 +22,6 @@ pub struct BedrockModelCacheConfiguration {
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq, EnumIter)]
pub enum Model {
// Anthropic models (already included)
#[default]
#[serde(rename = "claude-sonnet-4", alias = "claude-sonnet-4-latest")]
ClaudeSonnet4,
#[serde(
@@ -30,6 +29,14 @@ pub enum Model {
alias = "claude-sonnet-4-thinking-latest"
)]
ClaudeSonnet4Thinking,
#[default]
#[serde(rename = "claude-sonnet-4-5", alias = "claude-sonnet-4-5-latest")]
ClaudeSonnet4_5,
#[serde(
rename = "claude-sonnet-4-5-thinking",
alias = "claude-sonnet-4-5-thinking-latest"
)]
ClaudeSonnet4_5Thinking,
#[serde(rename = "claude-opus-4", alias = "claude-opus-4-latest")]
ClaudeOpus4,
#[serde(rename = "claude-opus-4-1", alias = "claude-opus-4-1-latest")]
@@ -144,6 +151,14 @@ impl Model {
Ok(Self::Claude3_7Sonnet)
} else if id.starts_with("claude-3-7-sonnet-thinking") {
Ok(Self::Claude3_7SonnetThinking)
} else if id.starts_with("claude-sonnet-4-5-thinking") {
Ok(Self::ClaudeSonnet4_5Thinking)
} else if id.starts_with("claude-sonnet-4-5") {
Ok(Self::ClaudeSonnet4_5)
} else if id.starts_with("claude-sonnet-4-thinking") {
Ok(Self::ClaudeSonnet4Thinking)
} else if id.starts_with("claude-sonnet-4") {
Ok(Self::ClaudeSonnet4)
} else {
anyhow::bail!("invalid model id {id}");
}
@@ -153,6 +168,8 @@ impl Model {
match self {
Model::ClaudeSonnet4 => "claude-sonnet-4",
Model::ClaudeSonnet4Thinking => "claude-sonnet-4-thinking",
Model::ClaudeSonnet4_5 => "claude-sonnet-4-5",
Model::ClaudeSonnet4_5Thinking => "claude-sonnet-4-5-thinking",
Model::ClaudeOpus4 => "claude-opus-4",
Model::ClaudeOpus4_1 => "claude-opus-4-1",
Model::ClaudeOpus4Thinking => "claude-opus-4-thinking",
@@ -214,6 +231,9 @@ impl Model {
Model::ClaudeSonnet4 | Model::ClaudeSonnet4Thinking => {
"anthropic.claude-sonnet-4-20250514-v1:0"
}
Model::ClaudeSonnet4_5 | Model::ClaudeSonnet4_5Thinking => {
"anthropic.claude-sonnet-4-5-20250929-v1:0"
}
Model::ClaudeOpus4 | Model::ClaudeOpus4Thinking => {
"anthropic.claude-opus-4-20250514-v1:0"
}
@@ -277,6 +297,8 @@ impl Model {
match self {
Self::ClaudeSonnet4 => "Claude Sonnet 4",
Self::ClaudeSonnet4Thinking => "Claude Sonnet 4 Thinking",
Self::ClaudeSonnet4_5 => "Claude Sonnet 4.5",
Self::ClaudeSonnet4_5Thinking => "Claude Sonnet 4.5 Thinking",
Self::ClaudeOpus4 => "Claude Opus 4",
Self::ClaudeOpus4_1 => "Claude Opus 4.1",
Self::ClaudeOpus4Thinking => "Claude Opus 4 Thinking",
@@ -346,6 +368,8 @@ impl Model {
| Self::ClaudeOpus4
| Self::ClaudeOpus4_1
| Self::ClaudeSonnet4Thinking
| Self::ClaudeSonnet4_5
| Self::ClaudeSonnet4_5Thinking
| Self::ClaudeOpus4Thinking
| Self::ClaudeOpus4_1Thinking => 200_000,
Self::AmazonNovaPremier => 1_000_000,
@@ -361,6 +385,7 @@ impl Model {
Self::Claude3Opus | Self::Claude3Sonnet | Self::Claude3_5Haiku => 4_096,
Self::Claude3_7Sonnet | Self::Claude3_7SonnetThinking => 128_000,
Self::ClaudeSonnet4 | Self::ClaudeSonnet4Thinking => 64_000,
Self::ClaudeSonnet4_5 | Self::ClaudeSonnet4_5Thinking => 64_000,
Self::ClaudeOpus4
| Self::ClaudeOpus4Thinking
| Self::ClaudeOpus4_1
@@ -385,7 +410,9 @@ impl Model {
| Self::ClaudeOpus4_1
| Self::ClaudeOpus4_1Thinking
| Self::ClaudeSonnet4
| Self::ClaudeSonnet4Thinking => 1.0,
| Self::ClaudeSonnet4Thinking
| Self::ClaudeSonnet4_5
| Self::ClaudeSonnet4_5Thinking => 1.0,
Self::Custom {
default_temperature,
..
@@ -409,6 +436,8 @@ impl Model {
| Self::ClaudeOpus4_1Thinking
| Self::ClaudeSonnet4
| Self::ClaudeSonnet4Thinking
| Self::ClaudeSonnet4_5
| Self::ClaudeSonnet4_5Thinking
| Self::Claude3_5Haiku => true,
// Amazon Nova models (all support tool use)
@@ -439,6 +468,8 @@ impl Model {
| Self::Claude3_7SonnetThinking
| Self::ClaudeSonnet4
| Self::ClaudeSonnet4Thinking
| Self::ClaudeSonnet4_5
| Self::ClaudeSonnet4_5Thinking
| Self::ClaudeOpus4
| Self::ClaudeOpus4Thinking
| Self::ClaudeOpus4_1
@@ -488,9 +519,11 @@ impl Model {
Model::Claude3_7SonnetThinking => BedrockModelMode::Thinking {
budget_tokens: Some(4096),
},
Model::ClaudeSonnet4Thinking => BedrockModelMode::Thinking {
budget_tokens: Some(4096),
},
Model::ClaudeSonnet4Thinking | Model::ClaudeSonnet4_5Thinking => {
BedrockModelMode::Thinking {
budget_tokens: Some(4096),
}
}
Model::ClaudeOpus4Thinking | Model::ClaudeOpus4_1Thinking => {
BedrockModelMode::Thinking {
budget_tokens: Some(4096),
@@ -542,6 +575,8 @@ impl Model {
| Model::Claude3_7SonnetThinking
| Model::ClaudeSonnet4
| Model::ClaudeSonnet4Thinking
| Model::ClaudeSonnet4_5
| Model::ClaudeSonnet4_5Thinking
| Model::ClaudeOpus4
| Model::ClaudeOpus4Thinking
| Model::ClaudeOpus4_1
@@ -575,6 +610,8 @@ impl Model {
| Model::Claude3_7SonnetThinking
| Model::ClaudeSonnet4
| Model::ClaudeSonnet4Thinking
| Model::ClaudeSonnet4_5
| Model::ClaudeSonnet4_5Thinking
| Model::Claude3Haiku
| Model::Claude3Sonnet
| Model::MetaLlama321BInstructV1
@@ -592,7 +629,9 @@ impl Model {
| Model::Claude3_7Sonnet
| Model::Claude3_7SonnetThinking
| Model::ClaudeSonnet4
| Model::ClaudeSonnet4Thinking,
| Model::ClaudeSonnet4Thinking
| Model::ClaudeSonnet4_5
| Model::ClaudeSonnet4_5Thinking,
"apac",
) => Ok(format!("{}.{}", region_group, model_id)),
@@ -631,6 +670,10 @@ mod tests {
Model::ClaudeSonnet4.cross_region_inference_id("eu-west-1")?,
"eu.anthropic.claude-sonnet-4-20250514-v1:0"
);
assert_eq!(
Model::ClaudeSonnet4_5.cross_region_inference_id("eu-west-1")?,
"eu.anthropic.claude-sonnet-4-5-20250929-v1:0"
);
assert_eq!(
Model::Claude3Sonnet.cross_region_inference_id("eu-west-1")?,
"eu.anthropic.claude-3-sonnet-20240229-v1:0"


@@ -55,6 +55,9 @@ pub const CLIENT_SUPPORTS_STATUS_MESSAGES_HEADER_NAME: &str =
pub const SERVER_SUPPORTS_STATUS_MESSAGES_HEADER_NAME: &str =
"x-zed-server-supports-status-messages";
/// The name of the header used by the client to indicate that it supports receiving xAI models.
pub const CLIENT_SUPPORTS_X_AI_HEADER_NAME: &str = "x-zed-client-supports-x-ai";
#[derive(Debug, PartialEq, Clone, Copy, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum UsageLimit {
@@ -144,6 +147,7 @@ pub enum LanguageModelProvider {
Anthropic,
OpenAi,
Google,
XAi,
}
#[derive(Debug, Clone, Serialize, Deserialize)]


@@ -43,6 +43,7 @@ node_runtime.workspace = true
parking_lot.workspace = true
paths.workspace = true
project.workspace = true
semver.workspace = true
serde.workspace = true
serde_json.workspace = true
settings.workspace = true


@@ -25,6 +25,7 @@ use node_runtime::{NodeRuntime, VersionStrategy};
use parking_lot::Mutex;
use project::DisableAiSettings;
use request::StatusNotification;
use semver::Version;
use serde_json::json;
use settings::Settings;
use settings::SettingsStore;
@@ -484,6 +485,8 @@ impl Copilot {
let start_language_server = async {
let server_path = get_copilot_lsp(fs, node_runtime.clone()).await?;
let node_path = node_runtime.binary_path().await?;
ensure_node_version_for_copilot(&node_path).await?;
let arguments: Vec<OsString> = vec![server_path.into(), "--stdio".into()];
let binary = LanguageServerBinary {
path: node_path,
@@ -1161,6 +1164,44 @@ async fn clear_copilot_config_dir() {
remove_matching(copilot_chat::copilot_chat_config_dir(), |_| true).await
}
async fn ensure_node_version_for_copilot(node_path: &Path) -> anyhow::Result<()> {
const MIN_COPILOT_NODE_VERSION: Version = Version::new(20, 8, 0);
log::info!("Checking Node.js version for Copilot at: {:?}", node_path);
let output = util::command::new_smol_command(node_path)
.arg("--version")
.output()
.await
.with_context(|| format!("checking Node.js version at {:?}", node_path))?;
if !output.status.success() {
anyhow::bail!(
"failed to run node --version for Copilot. stdout: {}, stderr: {}",
String::from_utf8_lossy(&output.stdout),
String::from_utf8_lossy(&output.stderr),
);
}
let version_str = String::from_utf8_lossy(&output.stdout);
let version = Version::parse(version_str.trim().trim_start_matches('v'))
.with_context(|| format!("parsing Node.js version from '{}'", version_str.trim()))?;
if version < MIN_COPILOT_NODE_VERSION {
anyhow::bail!(
"GitHub Copilot language server requires Node.js {MIN_COPILOT_NODE_VERSION} or later, but found {version}. \
Please update your Node.js version or configure a different Node.js path in settings."
);
}
log::info!(
"Node.js version {} meets Copilot requirements (>= {})",
version,
MIN_COPILOT_NODE_VERSION
);
Ok(())
}
async fn get_copilot_lsp(fs: Arc<dyn Fs>, node_runtime: NodeRuntime) -> anyhow::Result<PathBuf> {
const PACKAGE_NAME: &str = "@github/copilot-language-server";
const SERVER_PATH: &str =

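The version gate above uses the `semver` crate to parse `node --version` output. A std-only sketch of the same parse-and-compare logic (the constant and helper names here are illustrative, not Zed's):

```rust
// Minimal std-only sketch of the Copilot Node.js version gate above.
// The real code uses semver::Version; this parses "vMAJOR.MINOR.PATCH"
// into a tuple, which compares lexicographically.
const MIN_NODE: (u64, u64, u64) = (20, 8, 0);

fn parse_node_version(output: &str) -> Option<(u64, u64, u64)> {
    // `node --version` prints e.g. "v20.8.1\n"; strip the prefix and whitespace.
    let mut parts = output.trim().trim_start_matches('v').splitn(3, '.');
    let major = parts.next()?.parse().ok()?;
    let minor = parts.next()?.parse().ok()?;
    let patch = parts.next()?.parse().ok()?;
    Some((major, minor, patch))
}

fn node_is_new_enough(output: &str) -> bool {
    parse_node_version(output).is_some_and(|v| v >= MIN_NODE)
}

fn main() {
    assert!(node_is_new_enough("v20.8.0\n"));
    assert!(!node_is_new_enough("v18.19.0\n"));
    println!("ok");
}
```

Tuple comparison is lexicographic, so `(20, 8, 1) >= (20, 8, 0)` holds without any per-field logic.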
View File

@@ -776,6 +776,8 @@ actions!(
UniqueLinesCaseInsensitive,
/// Removes duplicate lines (case-sensitive).
UniqueLinesCaseSensitive,
/// Removes the surrounding syntax node (for example, brackets or closures)
/// from the current selections.
UnwrapSyntaxNode,
/// Wraps selections in tag specified by language.
WrapSelectionsInTag

View File

@@ -26,7 +26,7 @@ use sum_tree::{Bias, Dimensions, SumTree, Summary, TreeMap};
use text::{BufferId, Edit};
use ui::ElementId;
const NEWLINES: &[u8] = &[b'\n'; u8::MAX as usize];
const NEWLINES: &[u8] = &[b'\n'; u128::BITS as usize];
const BULLETS: &str = "********************************************************************************************************************************";
/// Tracks custom blocks such as diagnostics that should be displayed within buffer.
@@ -1726,12 +1726,13 @@ impl<'a> Iterator for BlockChunks<'a> {
let start_in_block = self.output_row - block_start;
let end_in_block = cmp::min(self.max_output_row, block_end) - block_start;
let line_count = end_in_block - start_in_block;
// todo: We need to split the chunk here?
let line_count = cmp::min(end_in_block - start_in_block, u128::BITS);
self.output_row += line_count;
return Some(Chunk {
text: unsafe { std::str::from_utf8_unchecked(&NEWLINES[..line_count as usize]) },
chars: (1 << line_count) - 1,
chars: 1u128.unbounded_shl(line_count) - 1,
..Default::default()
});
}
@@ -1746,6 +1747,7 @@ impl<'a> Iterator for BlockChunks<'a> {
if self.transforms.item().is_some() {
return Some(Chunk {
text: "\n",
chars: 1,
..Default::default()
});
}
@@ -1773,7 +1775,7 @@ impl<'a> Iterator for BlockChunks<'a> {
let chars_count = prefix.chars().count();
let bullet_len = chars_count;
prefix = &BULLETS[..bullet_len];
chars = (1 << bullet_len) - 1;
chars = 1u128.unbounded_shl(bullet_len as u32) - 1;
tabs = 0;
}

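The `(1 << n) - 1` → `1u128.unbounded_shl(n) - 1` changes above guard the bitmask construction when `n` can reach 128: a plain `1u128 << 128` overflows (and panics in debug builds). A self-contained sketch of the all-ones mask; the explicit branch here behaves like `1u128.unbounded_shl(n).wrapping_sub(1)` on toolchains where `unbounded_shl` is stable:

```rust
// Build a mask with the low `n` bits set, for 0 <= n <= 128.
// `unbounded_shl` returns 0 when the shift amount is >= the bit width,
// instead of overflowing the way `<<` does.
fn low_bits_mask(n: u32) -> u128 {
    if n >= u128::BITS {
        u128::MAX
    } else {
        (1u128 << n) - 1
    }
}

fn main() {
    assert_eq!(low_bits_mask(0), 0);
    assert_eq!(low_bits_mask(3), 0b111);
    assert_eq!(low_bits_mask(128), u128::MAX);
    println!("ok");
}
```

In the block map above, such a mask marks which of a chunk's output characters are synthetic newline bytes, so it must remain correct at exactly 128 lines.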
View File

@@ -551,7 +551,7 @@ impl TabChunks<'_> {
self.chunk = Chunk {
text: &SPACES[0..(to_next_stop as usize)],
is_tab: true,
chars: (1u128 << to_next_stop) - 1,
chars: 1u128.unbounded_shl(to_next_stop) - 1,
..Default::default()
};
self.inside_leading_tab = to_next_stop > 0;
@@ -623,7 +623,7 @@ impl<'a> Iterator for TabChunks<'a> {
return Some(Chunk {
text: &SPACES[..len as usize],
is_tab: true,
chars: (1 << len) - 1,
chars: 1u128.unbounded_shl(len) - 1,
tabs: 0,
..self.chunk.clone()
});

View File

@@ -7,9 +7,7 @@ use std::ops::Range;
use text::{Point, ToOffset};
use crate::{
EditPrediction,
editor_tests::{init_test, update_test_language_settings},
test::editor_test_context::EditorTestContext,
EditPrediction, editor_tests::init_test, test::editor_test_context::EditorTestContext,
};
#[gpui::test]
@@ -273,44 +271,6 @@ async fn test_edit_prediction_jump_disabled_for_non_zed_providers(cx: &mut gpui:
});
}
#[gpui::test]
async fn test_edit_predictions_disabled_in_scope(cx: &mut gpui::TestAppContext) {
init_test(cx, |_| {});
update_test_language_settings(cx, |settings| {
settings.defaults.edit_predictions_disabled_in = Some(vec!["string".to_string()]);
});
let mut cx = EditorTestContext::new(cx).await;
let provider = cx.new(|_| FakeEditPredictionProvider::default());
assign_editor_completion_provider(provider.clone(), &mut cx);
let language = languages::language("javascript", tree_sitter_typescript::LANGUAGE_TSX.into());
cx.update_buffer(|buffer, cx| buffer.set_language(Some(language), cx));
// Test disabled inside of string
cx.set_state("const x = \"hello ˇworld\";");
propose_edits(&provider, vec![(17..17, "beautiful ")], &mut cx);
cx.update_editor(|editor, window, cx| editor.update_visible_edit_prediction(window, cx));
cx.editor(|editor, _, _| {
assert!(
editor.active_edit_prediction.is_none(),
"Edit predictions should be disabled in string scopes when configured in edit_predictions_disabled_in"
);
});
// Test enabled outside of string
cx.set_state("const x = \"hello world\"; ˇ");
propose_edits(&provider, vec![(24..24, "// comment")], &mut cx);
cx.update_editor(|editor, window, cx| editor.update_visible_edit_prediction(window, cx));
cx.editor(|editor, _, _| {
assert!(
editor.active_edit_prediction.is_some(),
"Edit predictions should work outside of disabled scopes"
);
});
}
fn assert_editor_active_edit_completion(
cx: &mut EditorTestContext,
assert: impl FnOnce(MultiBufferSnapshot, &Vec<(Range<Anchor>, String)>),

View File

@@ -142,7 +142,7 @@ use mouse_context_menu::MouseContextMenu;
use movement::TextLayoutDetails;
use multi_buffer::{
ExcerptInfo, ExpandExcerptDirection, MultiBufferDiffHunk, MultiBufferPoint, MultiBufferRow,
MultiOrSingleBufferOffsetRange, ToOffsetUtf16,
ToOffsetUtf16,
};
use parking_lot::Mutex;
use persistence::DB;
@@ -7152,6 +7152,8 @@ impl Editor {
return None;
}
self.update_visible_edit_prediction(window, cx);
if !user_requested
&& (!self.should_show_edit_predictions()
|| !self.is_focused(window)
@@ -7161,7 +7163,6 @@ impl Editor {
return None;
}
self.update_visible_edit_prediction(window, cx);
provider.refresh(
self.project.clone(),
buffer,
@@ -7854,11 +7855,6 @@ impl Editor {
self.edit_prediction_settings =
self.edit_prediction_settings_at_position(&buffer, cursor_buffer_position, cx);
if let EditPredictionSettings::Disabled = self.edit_prediction_settings {
self.discard_edit_prediction(false, cx);
return None;
};
self.edit_prediction_indent_conflict = multibuffer.is_line_whitespace_upto(cursor);
if self.edit_prediction_indent_conflict {
@@ -15054,12 +15050,8 @@ impl Editor {
}
let mut new_range = old_range.clone();
while let Some((node, containing_range)) = buffer.syntax_ancestor(new_range.clone())
{
new_range = match containing_range {
MultiOrSingleBufferOffsetRange::Single(_) => break,
MultiOrSingleBufferOffsetRange::Multi(range) => range,
};
while let Some((node, range)) = buffer.syntax_ancestor(new_range.clone()) {
new_range = range;
if !node.is_named() {
continue;
}
@@ -15189,20 +15181,14 @@ impl Editor {
&& let Some((_, ancestor_range)) =
buffer.syntax_ancestor(selection.start..selection.end)
{
match ancestor_range {
MultiOrSingleBufferOffsetRange::Single(range) => range,
MultiOrSingleBufferOffsetRange::Multi(range) => range,
}
ancestor_range
} else {
selection.range()
};
let mut parent = child.clone();
while let Some((_, ancestor_range)) = buffer.syntax_ancestor(parent.clone()) {
parent = match ancestor_range {
MultiOrSingleBufferOffsetRange::Single(range) => range,
MultiOrSingleBufferOffsetRange::Multi(range) => range,
};
parent = ancestor_range;
if parent.start < child.start || parent.end > child.end {
break;
}

View File

@@ -9152,6 +9152,64 @@ async fn test_unwrap_syntax_nodes(cx: &mut gpui::TestAppContext) {
});
cx.assert_editor_state(indoc! { r#"use mod1::{mod2::«mod3ˇ», mod5::«mod7ˇ»};"# });
cx.set_state(indoc! { r#"fn a() {
// what
// a
// ˇlong
// method
// I
// sure
// hope
// it
// works
}"# });
let buffer = cx.update_multibuffer(|multibuffer, _| multibuffer.as_singleton().unwrap());
let multi_buffer = cx.new(|_| MultiBuffer::new(Capability::ReadWrite));
cx.update(|_, cx| {
multi_buffer.update(cx, |multi_buffer, cx| {
multi_buffer.set_excerpts_for_path(
PathKey::for_buffer(&buffer, cx),
buffer,
[Point::new(1, 0)..Point::new(1, 0)],
3,
cx,
);
});
});
let editor2 = cx.new_window_entity(|window, cx| {
Editor::new(EditorMode::full(), multi_buffer, None, window, cx)
});
let mut cx = EditorTestContext::for_editor_in(editor2, &mut cx).await;
cx.update_editor(|editor, window, cx| {
editor.change_selections(SelectionEffects::default(), window, cx, |s| {
s.select_ranges([Point::new(3, 0)..Point::new(3, 0)]);
})
});
cx.assert_editor_state(indoc! { "
fn a() {
// what
// a
ˇ // long
// method"});
cx.update_editor(|editor, window, cx| {
editor.unwrap_syntax_node(&UnwrapSyntaxNode, window, cx);
});
// Although we could potentially make the action work when the syntax node
// is half-hidden, it seems a bit dangerous as you can't easily tell what it
// did. Maybe we could also expand the excerpt to contain the range?
cx.assert_editor_state(indoc! { "
fn a() {
// what
// a
ˇ // long
// method"});
}
#[gpui::test]
@@ -11867,17 +11925,16 @@ async fn test_multiple_formatters(cx: &mut TestAppContext) {
);
fake_server.set_request_handler::<lsp::request::CodeActionRequest, _, _>(
move |params, _| async move {
assert_eq!(
params.context.only,
Some(vec!["code-action-1".into(), "code-action-2".into()])
);
let requested_code_actions = params.context.only.expect("Expected code action request");
assert_eq!(requested_code_actions.len(), 1);
let uri = lsp::Uri::from_file_path(path!("/file.rs")).unwrap();
Ok(Some(vec![
lsp::CodeActionOrCommand::CodeAction(lsp::CodeAction {
let code_action = match requested_code_actions[0].as_str() {
"code-action-1" => lsp::CodeAction {
kind: Some("code-action-1".into()),
edit: Some(lsp::WorkspaceEdit::new(
[(
uri.clone(),
uri,
vec![lsp::TextEdit::new(
lsp::Range::new(lsp::Position::new(0, 0), lsp::Position::new(0, 0)),
"applied-code-action-1-edit\n".to_string(),
@@ -11891,8 +11948,8 @@ async fn test_multiple_formatters(cx: &mut TestAppContext) {
..Default::default()
}),
..Default::default()
}),
lsp::CodeActionOrCommand::CodeAction(lsp::CodeAction {
},
"code-action-2" => lsp::CodeAction {
kind: Some("code-action-2".into()),
edit: Some(lsp::WorkspaceEdit::new(
[(
@@ -11906,8 +11963,12 @@ async fn test_multiple_formatters(cx: &mut TestAppContext) {
.collect(),
)),
..Default::default()
}),
]))
},
req => panic!("Unexpected code action request: {:?}", req),
};
Ok(Some(vec![lsp::CodeActionOrCommand::CodeAction(
code_action,
)]))
},
);

View File

@@ -9323,7 +9323,7 @@ impl Element for EditorElement {
.language_settings(cx)
.whitespace_map;
let tab_char = whitespace_map.tab();
let tab_char = whitespace_map.tab.clone();
let tab_len = tab_char.len();
let tab_invisible = window.text_system().shape_line(
tab_char,
@@ -9339,7 +9339,7 @@ impl Element for EditorElement {
None,
);
let space_char = whitespace_map.space();
let space_char = whitespace_map.space.clone();
let space_len = space_char.len();
let space_invisible = window.text_system().shape_line(
space_char,

View File

@@ -898,6 +898,7 @@ fn surrounding_filename(
} else {
// Otherwise, we skip the quote
inside_quotes = true;
token_end += ch.len_utf8();
continue;
}
}
@@ -1545,6 +1546,10 @@ mod tests {
("'fˇile.txt'", Some("file.txt")),
("ˇ'file.txt'", Some("file.txt")),
("ˇ'fi\\ le.txt'", Some("fi le.txt")),
// Quoted multibyte characters
(" ˇ\"\"", Some("")),
(" \"ˇ常\"", Some("")),
("ˇ\"\"", Some("")),
];
for (input, expected) in test_cases {

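The `token_end += ch.len_utf8()` fix above matters because these offsets index bytes, not chars: advancing past a quote by a fixed 1 goes wrong as soon as the surrounding text contains multibyte characters such as `常`. A small sketch of byte-width accounting:

```rust
// Byte offsets must advance by each char's UTF-8 width, not by 1.
fn byte_offsets(s: &str) -> Vec<usize> {
    let mut offsets = Vec::new();
    let mut end = 0;
    for ch in s.chars() {
        end += ch.len_utf8(); // '常' advances by 3 bytes, '"' by 1
        offsets.push(end);
    }
    offsets
}

fn main() {
    assert_eq!('常'.len_utf8(), 3);
    assert_eq!(byte_offsets("\"常\""), vec![1, 4, 5]);
    println!("ok");
}
```

Slicing a `&str` at an offset that falls inside a multibyte sequence panics in Rust, which is why the off-by-`len_utf8` bug surfaces as more than a cosmetic issue.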
View File

@@ -16,7 +16,7 @@ use itertools::Itertools;
use language::{DiagnosticEntry, Language, LanguageRegistry};
use lsp::DiagnosticSeverity;
use markdown::{Markdown, MarkdownElement, MarkdownStyle};
use multi_buffer::{MultiOrSingleBufferOffsetRange, ToOffset, ToPoint};
use multi_buffer::{ToOffset, ToPoint};
use project::{HoverBlock, HoverBlockKind, InlayHintLabelPart};
use settings::Settings;
use std::{borrow::Cow, cell::RefCell};
@@ -477,13 +477,8 @@ fn show_hover(
})
.or_else(|| {
let snapshot = &snapshot.buffer_snapshot;
match snapshot.syntax_ancestor(anchor..anchor)?.1 {
MultiOrSingleBufferOffsetRange::Multi(range) => Some(
snapshot.anchor_before(range.start)
..snapshot.anchor_after(range.end),
),
MultiOrSingleBufferOffsetRange::Single(_) => None,
}
let range = snapshot.syntax_ancestor(anchor..anchor)?.1;
Some(snapshot.anchor_before(range.start)..snapshot.anchor_after(range.end))
})
.unwrap_or_else(|| anchor..anchor);

View File

@@ -396,7 +396,6 @@ impl MacTextSystemState {
let subpixel_shift = params
.subpixel_variant
.map(|v| v as f32 / SUBPIXEL_VARIANTS_X as f32);
cx.set_allows_font_smoothing(true);
cx.set_text_drawing_mode(CGTextDrawingMode::CGTextFill);
cx.set_gray_fill_color(0.0, 1.0);
cx.set_allows_antialiasing(true);

View File

@@ -7,7 +7,7 @@ use ec4rs::{
property::{FinalNewline, IndentSize, IndentStyle, MaxLineLen, TabWidth, TrimTrailingWs},
};
use globset::{Glob, GlobMatcher, GlobSet, GlobSetBuilder};
use gpui::{App, Modifiers};
use gpui::{App, Modifiers, SharedString};
use itertools::{Either, Itertools};
pub use settings::{
@@ -59,6 +59,12 @@ pub struct AllLanguageSettings {
pub(crate) file_types: FxHashMap<Arc<str>, GlobSet>,
}
#[derive(Debug, Clone)]
pub struct WhitespaceMap {
pub space: SharedString,
pub tab: SharedString,
}
/// The settings for a particular language.
#[derive(Debug, Clone)]
pub struct LanguageSettings {
@@ -118,7 +124,7 @@ pub struct LanguageSettings {
/// Whether to show tabs and spaces in the editor.
pub show_whitespaces: settings::ShowWhitespaceSetting,
/// Visible characters used to render whitespace when show_whitespaces is enabled.
pub whitespace_map: settings::WhitespaceMap,
pub whitespace_map: WhitespaceMap,
/// Whether to start a new line with a comment when a previous line is a comment as well.
pub extend_comment_on_newline: bool,
/// Inlay hint related settings.
@@ -503,6 +509,8 @@ impl settings::Settings for AllLanguageSettings {
let prettier = settings.prettier.unwrap();
let indent_guides = settings.indent_guides.unwrap();
let tasks = settings.tasks.unwrap();
let whitespace_map = settings.whitespace_map.unwrap();
LanguageSettings {
tab_size: settings.tab_size.unwrap(),
hard_tabs: settings.hard_tabs.unwrap(),
@@ -536,7 +544,10 @@ impl settings::Settings for AllLanguageSettings {
show_edit_predictions: settings.show_edit_predictions.unwrap(),
edit_predictions_disabled_in: settings.edit_predictions_disabled_in.unwrap(),
show_whitespaces: settings.show_whitespaces.unwrap(),
whitespace_map: settings.whitespace_map.unwrap(),
whitespace_map: WhitespaceMap {
space: SharedString::new(whitespace_map.space.unwrap().to_string()),
tab: SharedString::new(whitespace_map.tab.unwrap().to_string()),
},
extend_comment_on_newline: settings.extend_comment_on_newline.unwrap(),
inlay_hints: InlayHintSettings {
enabled: inlay_hints.enabled.unwrap(),

View File

@@ -50,6 +50,9 @@ pub const OPEN_AI_PROVIDER_ID: LanguageModelProviderId = LanguageModelProviderId
pub const OPEN_AI_PROVIDER_NAME: LanguageModelProviderName =
LanguageModelProviderName::new("OpenAI");
pub const X_AI_PROVIDER_ID: LanguageModelProviderId = LanguageModelProviderId::new("x_ai");
pub const X_AI_PROVIDER_NAME: LanguageModelProviderName = LanguageModelProviderName::new("xAI");
pub const ZED_CLOUD_PROVIDER_ID: LanguageModelProviderId = LanguageModelProviderId::new("zed.dev");
pub const ZED_CLOUD_PROVIDER_NAME: LanguageModelProviderName =
LanguageModelProviderName::new("Zed");

View File

@@ -151,8 +151,8 @@ impl LanguageModelProvider for AnthropicLanguageModelProvider {
fn recommended_models(&self, _cx: &App) -> Vec<Arc<dyn LanguageModel>> {
[
anthropic::Model::ClaudeSonnet4,
anthropic::Model::ClaudeSonnet4Thinking,
anthropic::Model::ClaudeSonnet4_5,
anthropic::Model::ClaudeSonnet4_5Thinking,
]
.into_iter()
.map(|model| self.create_language_model(model))

View File

@@ -4,12 +4,12 @@ use anyhow::{Context as _, Result, anyhow};
use chrono::{DateTime, Utc};
use client::{Client, ModelRequestUsage, UserStore, zed_urls};
use cloud_llm_client::{
CLIENT_SUPPORTS_STATUS_MESSAGES_HEADER_NAME, CURRENT_PLAN_HEADER_NAME, CompletionBody,
CompletionEvent, CompletionRequestStatus, CountTokensBody, CountTokensResponse,
EXPIRED_LLM_TOKEN_HEADER_NAME, ListModelsResponse, MODEL_REQUESTS_RESOURCE_HEADER_VALUE, Plan,
PlanV1, PlanV2, SERVER_SUPPORTS_STATUS_MESSAGES_HEADER_NAME,
SUBSCRIPTION_LIMIT_RESOURCE_HEADER_NAME, TOOL_USE_LIMIT_REACHED_HEADER_NAME,
ZED_VERSION_HEADER_NAME,
CLIENT_SUPPORTS_STATUS_MESSAGES_HEADER_NAME, CLIENT_SUPPORTS_X_AI_HEADER_NAME,
CURRENT_PLAN_HEADER_NAME, CompletionBody, CompletionEvent, CompletionRequestStatus,
CountTokensBody, CountTokensResponse, EXPIRED_LLM_TOKEN_HEADER_NAME, ListModelsResponse,
MODEL_REQUESTS_RESOURCE_HEADER_VALUE, Plan, PlanV1, PlanV2,
SERVER_SUPPORTS_STATUS_MESSAGES_HEADER_NAME, SUBSCRIPTION_LIMIT_RESOURCE_HEADER_NAME,
TOOL_USE_LIMIT_REACHED_HEADER_NAME, ZED_VERSION_HEADER_NAME,
};
use feature_flags::{BillingV2FeatureFlag, FeatureFlagAppExt};
use futures::{
@@ -47,6 +47,7 @@ use util::{ResultExt as _, maybe};
use crate::provider::anthropic::{AnthropicEventMapper, count_anthropic_tokens, into_anthropic};
use crate::provider::google::{GoogleEventMapper, into_google};
use crate::provider::open_ai::{OpenAiEventMapper, count_open_ai_tokens, into_open_ai};
use crate::provider::x_ai::count_xai_tokens;
const PROVIDER_ID: LanguageModelProviderId = language_model::ZED_CLOUD_PROVIDER_ID;
const PROVIDER_NAME: LanguageModelProviderName = language_model::ZED_CLOUD_PROVIDER_NAME;
@@ -217,6 +218,7 @@ impl State {
let request = http_client::Request::builder()
.method(Method::GET)
.header(CLIENT_SUPPORTS_X_AI_HEADER_NAME, "true")
.uri(http_client.build_zed_llm_url("/models", &[])?.as_ref())
.header("Authorization", format!("Bearer {token}"))
.body(AsyncBody::empty())?;
@@ -580,6 +582,7 @@ impl LanguageModel for CloudLanguageModel {
Anthropic => language_model::ANTHROPIC_PROVIDER_ID,
OpenAi => language_model::OPEN_AI_PROVIDER_ID,
Google => language_model::GOOGLE_PROVIDER_ID,
XAi => language_model::X_AI_PROVIDER_ID,
}
}
@@ -589,6 +592,7 @@ impl LanguageModel for CloudLanguageModel {
Anthropic => language_model::ANTHROPIC_PROVIDER_NAME,
OpenAi => language_model::OPEN_AI_PROVIDER_NAME,
Google => language_model::GOOGLE_PROVIDER_NAME,
XAi => language_model::X_AI_PROVIDER_NAME,
}
}
@@ -619,7 +623,8 @@ impl LanguageModel for CloudLanguageModel {
fn tool_input_format(&self) -> LanguageModelToolSchemaFormat {
match self.model.provider {
cloud_llm_client::LanguageModelProvider::Anthropic
| cloud_llm_client::LanguageModelProvider::OpenAi => {
| cloud_llm_client::LanguageModelProvider::OpenAi
| cloud_llm_client::LanguageModelProvider::XAi => {
LanguageModelToolSchemaFormat::JsonSchema
}
cloud_llm_client::LanguageModelProvider::Google => {
@@ -649,6 +654,7 @@ impl LanguageModel for CloudLanguageModel {
})
}
cloud_llm_client::LanguageModelProvider::OpenAi
| cloud_llm_client::LanguageModelProvider::XAi
| cloud_llm_client::LanguageModelProvider::Google => None,
}
}
@@ -669,6 +675,13 @@ impl LanguageModel for CloudLanguageModel {
};
count_open_ai_tokens(request, model, cx)
}
cloud_llm_client::LanguageModelProvider::XAi => {
let model = match x_ai::Model::from_id(&self.model.id.0) {
Ok(model) => model,
Err(err) => return async move { Err(anyhow!(err)) }.boxed(),
};
count_xai_tokens(request, model, cx)
}
cloud_llm_client::LanguageModelProvider::Google => {
let client = self.client.clone();
let llm_api_token = self.llm_api_token.clone();
@@ -846,6 +859,56 @@ impl LanguageModel for CloudLanguageModel {
});
async move { Ok(future.await?.boxed()) }.boxed()
}
cloud_llm_client::LanguageModelProvider::XAi => {
let client = self.client.clone();
let model = match x_ai::Model::from_id(&self.model.id.0) {
Ok(model) => model,
Err(err) => return async move { Err(anyhow!(err).into()) }.boxed(),
};
let request = into_open_ai(
request,
model.id(),
model.supports_parallel_tool_calls(),
model.supports_prompt_cache_key(),
None,
None,
);
let llm_api_token = self.llm_api_token.clone();
let future = self.request_limiter.stream(async move {
let PerformLlmCompletionResponse {
response,
usage,
includes_status_messages,
tool_use_limit_reached,
} = Self::perform_llm_completion(
client.clone(),
llm_api_token,
app_version,
CompletionBody {
thread_id,
prompt_id,
intent,
mode,
provider: cloud_llm_client::LanguageModelProvider::XAi,
model: request.model.clone(),
provider_request: serde_json::to_value(&request)
.map_err(|e| anyhow!(e))?,
},
)
.await?;
let mut mapper = OpenAiEventMapper::new();
Ok(map_cloud_completion_events(
Box::pin(
response_lines(response, includes_status_messages)
.chain(usage_updated_event(usage))
.chain(tool_use_limit_reached_event(tool_use_limit_reached)),
),
move |event| mapper.map_event(event),
))
});
async move { Ok(future.await?.boxed()) }.boxed()
}
cloud_llm_client::LanguageModelProvider::Google => {
let client = self.client.clone();
let request =

View File

@@ -158,6 +158,9 @@ impl LanguageModelProvider for XAiLanguageModelProvider {
max_tokens: model.max_tokens,
max_output_tokens: model.max_output_tokens,
max_completion_tokens: model.max_completion_tokens,
supports_images: model.supports_images,
supports_tools: model.supports_tools,
parallel_tool_calls: model.parallel_tool_calls,
},
);
}

View File

@@ -116,26 +116,4 @@
)
) @item
; Arrow functions in variable declarations (anywhere in the tree, including nested in functions)
(lexical_declaration
["let" "const"] @context
(variable_declarator
name: (_) @name
value: (arrow_function)) @item)
; Async arrow functions in variable declarations
(lexical_declaration
["let" "const"] @context
(variable_declarator
name: (_) @name
value: (arrow_function
"async" @context)) @item)
; Named function expressions in variable declarations
(lexical_declaration
["let" "const"] @context
(variable_declarator
name: (_) @name
value: (function_expression)) @item)
(comment) @annotation

View File

@@ -106,13 +106,13 @@ impl TyLspAdapter {
#[cfg(target_os = "linux")]
impl TyLspAdapter {
const GITHUB_ASSET_KIND: AssetKind = AssetKind::Gz;
const GITHUB_ASSET_KIND: AssetKind = AssetKind::TarGz;
const ARCH_SERVER_NAME: &str = "unknown-linux-gnu";
}
#[cfg(target_os = "freebsd")]
impl TyLspAdapter {
const GITHUB_ASSET_KIND: AssetKind = AssetKind::Gz;
const GITHUB_ASSET_KIND: AssetKind = AssetKind::TarGz;
const ARCH_SERVER_NAME: &str = "unknown-freebsd";
}
@@ -153,11 +153,16 @@ impl LspAdapter for TyLspAdapter {
async fn workspace_configuration(
self: Arc<Self>,
_: &Arc<dyn LspAdapterDelegate>,
delegate: &Arc<dyn LspAdapterDelegate>,
toolchain: Option<Toolchain>,
_cx: &mut AsyncApp,
cx: &mut AsyncApp,
) -> Result<Value> {
let mut ret = json!({});
let mut ret = cx
.update(|cx| {
language_server_settings(delegate.as_ref(), &self.name(), cx)
.and_then(|s| s.settings.clone())
})?
.unwrap_or_else(|| json!({}));
if let Some(toolchain) = toolchain.and_then(|toolchain| {
serde_json::from_value::<PythonEnvironment>(toolchain.as_json).ok()
}) {
@@ -170,10 +175,9 @@ impl LspAdapter for TyLspAdapter {
"sysPrefix": sys_prefix
}
});
ret.as_object_mut()?.insert(
"pythonExtension".into(),
json!({ "activeEnvironment": environment }),
);
ret.as_object_mut()?
.entry("pythonExtension")
.or_insert_with(|| json!({ "activeEnvironment": environment }));
Some(())
});
}
@@ -462,7 +466,6 @@ impl LspAdapter for PyrightLspAdapter {
async fn workspace_configuration(
self: Arc<Self>,
adapter: &Arc<dyn LspAdapterDelegate>,
toolchain: Option<Toolchain>,
cx: &mut AsyncApp,

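The Ty fix above seeds the workspace configuration from the user's language-server settings and then switches from `insert` to `entry(..).or_insert_with(..)`, so a user-provided `pythonExtension` key is no longer clobbered by the derived toolchain value. The same pattern with a plain `HashMap` (`serde_json`'s `Map` exposes an equivalent `entry` API):

```rust
use std::collections::HashMap;

// Merge a derived default under a key without overwriting what the user set.
fn seed_default(map: &mut HashMap<String, String>, key: &str, default: &str) {
    map.entry(key.to_string())
        .or_insert_with(|| default.to_string());
}

fn main() {
    let mut settings = HashMap::new();
    settings.insert("pythonExtension".to_string(), "user-value".to_string());
    seed_default(&mut settings, "pythonExtension", "derived-from-toolchain");
    // The user's value survives; a missing key would get the default.
    assert_eq!(settings["pythonExtension"], "user-value");
    seed_default(&mut settings, "other", "derived-default");
    assert_eq!(settings["other"], "derived-default");
    println!("ok");
}
```

`or_insert_with` also only evaluates its closure when the key is absent, which avoids building the default value on the hot path.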
View File

@@ -124,26 +124,4 @@
)
) @item
; Arrow functions in variable declarations (anywhere in the tree, including nested in functions)
(lexical_declaration
["let" "const"] @context
(variable_declarator
name: (_) @name
value: (arrow_function)) @item)
; Async arrow functions in variable declarations
(lexical_declaration
["let" "const"] @context
(variable_declarator
name: (_) @name
value: (arrow_function
"async" @context)) @item)
; Named function expressions in variable declarations
(lexical_declaration
["let" "const"] @context
(variable_declarator
name: (_) @name
value: (function_expression)) @item)
(comment) @annotation

View File

@@ -124,26 +124,4 @@
)
) @item
; Arrow functions in variable declarations (anywhere in the tree, including nested in functions)
(lexical_declaration
["let" "const"] @context
(variable_declarator
name: (_) @name
value: (arrow_function)) @item)
; Async arrow functions in variable declarations
(lexical_declaration
["let" "const"] @context
(variable_declarator
name: (_) @name
value: (arrow_function
"async" @context)) @item)
; Named function expressions in variable declarations
(lexical_declaration
["let" "const"] @context
(variable_declarator
name: (_) @name
value: (function_expression)) @item)
(comment) @annotation

View File

@@ -80,12 +80,6 @@ pub struct MultiBuffer {
buffer_changed_since_sync: Rc<Cell<bool>>,
}
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum MultiOrSingleBufferOffsetRange {
Single(Range<usize>),
Multi(Range<usize>),
}
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum Event {
ExcerptsAdded {
@@ -6078,19 +6072,17 @@ impl MultiBufferSnapshot {
pub fn syntax_ancestor<T: ToOffset>(
&self,
range: Range<T>,
) -> Option<(tree_sitter::Node<'_>, MultiOrSingleBufferOffsetRange)> {
) -> Option<(tree_sitter::Node<'_>, Range<usize>)> {
let range = range.start.to_offset(self)..range.end.to_offset(self);
let mut excerpt = self.excerpt_containing(range.clone())?;
let node = excerpt
.buffer()
.syntax_ancestor(excerpt.map_range_to_buffer(range))?;
let node_range = node.byte_range();
let range = if excerpt.contains_buffer_range(node_range.clone()) {
MultiOrSingleBufferOffsetRange::Multi(excerpt.map_range_from_buffer(node_range))
} else {
MultiOrSingleBufferOffsetRange::Single(node_range)
if !excerpt.contains_buffer_range(node_range.clone()) {
return None;
};
Some((node, range))
Some((node, excerpt.map_range_from_buffer(node_range)))
}
pub fn syntax_next_sibling<T: ToOffset>(

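The simplified `syntax_ancestor` above drops the `MultiOrSingleBufferOffsetRange` enum: it now returns `None` when the ancestor node's range escapes the containing excerpt, and a plain multibuffer `Range<usize>` otherwise. The containment-then-map step reduces to a range comparison, sketched here with illustrative names (the real excerpt mapping is more involved than simple rebasing):

```rust
use std::ops::Range;

// Map a buffer-relative range into excerpt-local coordinates only if it
// fits entirely inside the excerpt; mirrors the new contract, where
// out-of-excerpt ancestors yield None instead of a Single(..) variant.
fn map_if_contained(excerpt: &Range<usize>, node: &Range<usize>) -> Option<Range<usize>> {
    if node.start < excerpt.start || node.end > excerpt.end {
        return None;
    }
    Some(node.start - excerpt.start..node.end - excerpt.start)
}

fn main() {
    assert_eq!(map_if_contained(&(10..50), &(12..20)), Some(2..10));
    assert_eq!(map_if_contained(&(10..50), &(5..20)), None);
    println!("ok");
}
```

Collapsing the enum to `Option<Range<usize>>` lets the hover and unwrap-syntax-node call sites above delete their `match` arms entirely.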
View File

@@ -612,7 +612,7 @@ fn get_or_npm_install_builtin_agent(
if let Ok(latest_version) = latest_version
&& &latest_version != &file_name.to_string_lossy()
{
download_latest_version(
let download_result = download_latest_version(
fs,
dir.clone(),
node_runtime,
@@ -620,7 +620,9 @@ fn get_or_npm_install_builtin_agent(
)
.await
.log_err();
if let Some(mut new_version_available) = new_version_available {
if let Some(mut new_version_available) = new_version_available
&& download_result.is_some()
{
new_version_available.send(Some(latest_version)).ok();
}
}
@@ -705,7 +707,7 @@ async fn download_latest_version(
&dir.join(&version),
RenameOptions {
ignore_if_exists: true,
overwrite: false,
overwrite: true,
},
)
.await?;

View File

@@ -25,7 +25,7 @@ impl DapLocator for PythonLocator {
if adapter.0.as_ref() != "Debugpy" {
return None;
}
let valid_program = build_config.command.starts_with("\"$ZED_")
let valid_program = build_config.command.starts_with("$ZED_")
|| Path::new(&build_config.command)
.file_name()
.is_some_and(|name| name.to_str().is_some_and(|path| path.starts_with("python")));
@@ -94,3 +94,53 @@ impl DapLocator for PythonLocator {
bail!("Python locator should not require DapLocator::run to be ran");
}
}
#[cfg(test)]
mod test {
use serde_json::json;
use super::*;
#[gpui::test]
async fn test_python_locator() {
let adapter = DebugAdapterName("Debugpy".into());
let build_task = TaskTemplate {
label: "run module '$ZED_FILE'".into(),
command: "$ZED_CUSTOM_PYTHON_ACTIVE_ZED_TOOLCHAIN".into(),
args: vec!["-m".into(), "$ZED_CUSTOM_PYTHON_MODULE_NAME".into()],
env: Default::default(),
cwd: Some("$ZED_WORKTREE_ROOT".into()),
use_new_terminal: false,
allow_concurrent_runs: false,
reveal: task::RevealStrategy::Always,
reveal_target: task::RevealTarget::Dock,
hide: task::HideStrategy::Never,
tags: vec!["python-module-main-method".into()],
shell: task::Shell::System,
show_summary: false,
show_command: false,
};
let expected_scenario = DebugScenario {
adapter: "Debugpy".into(),
label: "run module 'main.py'".into(),
build: None,
config: json!({
"request": "launch",
"python": "$ZED_CUSTOM_PYTHON_ACTIVE_ZED_TOOLCHAIN",
"args": [],
"cwd": "$ZED_WORKTREE_ROOT",
"module": "$ZED_CUSTOM_PYTHON_MODULE_NAME",
}),
tcp_connection: None,
};
assert_eq!(
PythonLocator
.create_scenario(&build_task, "run module 'main.py'", &adapter)
.await
.expect("Failed to create a scenario"),
expected_scenario
);
}
}

View File

@@ -1521,347 +1521,369 @@ impl LocalLspStore {
zlog::warn!(logger => "Cannot format buffer that is not backed by a file on disk using code actions. Skipping");
continue;
};
let code_action_kinds = code_actions
.iter()
.filter_map(|(action_kind, enabled)| {
enabled.then_some(action_kind.clone().into())
})
.collect::<Vec<_>>();
if code_action_kinds.is_empty() {
if code_actions.iter().filter(|(_, enabled)| **enabled).count() == 0 {
zlog::trace!(logger => "No code action kinds enabled, skipping");
continue;
}
zlog::trace!(logger => "Attempting to resolve code actions {:?}", &code_action_kinds);
let mut actions_and_servers = Vec::new();
for (index, (_, language_server)) in adapters_and_servers.iter().enumerate() {
let actions_result = Self::get_server_code_actions_from_action_kinds(
&lsp_store,
language_server.server_id(),
code_action_kinds.clone(),
&buffer.handle,
cx,
)
.await
.with_context(
|| format!("Failed to resolve code actions with kinds {:?} for language server {}",
code_action_kinds.iter().map(|kind| kind.as_str()).join(", "),
language_server.name())
);
let Ok(actions) = actions_result else {
// note: it may be better to set result to the error and break formatters here
// but for now we try to execute the actions that we can resolve and skip the rest
zlog::error!(
logger =>
"Failed to resolve code actions with kinds {:?} with language server {}",
code_action_kinds.iter().map(|kind| kind.as_str()).join(", "),
language_server.name()
);
// Note: this loop exists despite `Self::get_server_code_actions_from_action_kinds` taking a `Vec<CodeActionKind>`
// because code actions can resolve edits when `Self::get_server_code_actions_from_action_kinds` is called, which
// are not updated to reflect edits from previous actions or actions from other servers when `resolve_code_actions` is called later
// which can result in conflicting edits and mangled buffer state
for (code_action_name, enabled) in code_actions {
if !enabled {
continue;
};
for action in actions {
actions_and_servers.push((action, index));
}
}
let code_action_kind: CodeActionKind = code_action_name.clone().into();
zlog::trace!(logger => "Attempting to resolve code actions {:?}", &code_action_kind);
if actions_and_servers.is_empty() {
zlog::warn!(logger => "No code actions were resolved, continuing");
continue;
}
let mut actions_and_servers = Vec::new();
for (index, (_, language_server)) in adapters_and_servers.iter().enumerate() {
    let actions_result = Self::get_server_code_actions_from_action_kinds(
        &lsp_store,
        language_server.server_id(),
        vec![code_action_kind.clone()],
        &buffer.handle,
        cx,
    )
    .await
    .with_context(|| {
        format!(
            "Failed to resolve code action {:?} with language server {}",
            code_action_kind,
            language_server.name()
        )
    });
    let Ok(actions) = actions_result else {
        // note: it may be better to set result to the error and break formatters here
        // but for now we try to execute the actions that we can resolve and skip the rest
        zlog::error!(
            logger =>
            "Failed to resolve code action {:?} with language server {}",
            code_action_kind,
            language_server.name()
        );
        continue;
    };
    for action in actions {
        actions_and_servers.push((action, index));
    }
}
if actions_and_servers.is_empty() {
    zlog::warn!(logger => "No code actions were resolved, continuing");
    continue;
}
'actions: for (mut action, server_index) in actions_and_servers {
    let server = &adapters_and_servers[server_index].1;
    let describe_code_action = |action: &CodeAction| {
        format!(
            "code action '{}' with title \"{}\" on server {}",
            action
                .lsp_action
                .action_kind()
                .unwrap_or("unknown".into())
                .as_str(),
            action.lsp_action.title(),
            server.name(),
        )
    };
    zlog::trace!(logger => "Executing {}", describe_code_action(&action));
    if let Err(err) = Self::try_resolve_code_action(server, &mut action).await {
        zlog::error!(
            logger =>
            "Failed to resolve {}. Error: {}",
            describe_code_action(&action),
            err
        );
        continue;
    }
    if let Some(edit) = action.lsp_action.edit().cloned() {
        // NOTE: code below duplicated from `Self::deserialize_workspace_edit`
        // but filters out and logs warnings for code actions that require unreasonably
        // difficult handling on our part, such as:
        // - applying edits that call commands
        //   which can result in arbitrary workspace edits being sent from the server that
        //   have no way of being tied back to the command that initiated them (i.e. we
        //   can't know which edits are part of the format request, or if the server is done sending
        //   actions in response to the command)
        // - actions that create/delete/modify/rename files other than the one we are formatting
        //   as we then would need to handle such changes correctly in the local history as well
        //   as the remote history through the ProjectTransaction
        // - actions with snippet edits, as these simply don't make sense in the context of a format request
        // Supporting these actions is not impossible, but not supported as of yet.
        if edit.changes.is_none() && edit.document_changes.is_none() {
            zlog::trace!(
                logger =>
                "No changes for code action. Skipping {}",
                describe_code_action(&action),
            );
            continue;
        }
        let mut operations = Vec::new();
        if let Some(document_changes) = edit.document_changes {
            match document_changes {
                lsp::DocumentChanges::Edits(edits) => operations
                    .extend(edits.into_iter().map(lsp::DocumentChangeOperation::Edit)),
                lsp::DocumentChanges::Operations(ops) => operations = ops,
            }
        } else if let Some(changes) = edit.changes {
            operations.extend(changes.into_iter().map(|(uri, edits)| {
                lsp::DocumentChangeOperation::Edit(lsp::TextDocumentEdit {
                    text_document: lsp::OptionalVersionedTextDocumentIdentifier {
                        uri,
                        version: None,
                    },
                    edits: edits.into_iter().map(Edit::Plain).collect(),
                })
            }));
        }
        let mut edits = Vec::with_capacity(operations.len());
        if operations.is_empty() {
            zlog::trace!(
                logger =>
                "No changes for code action. Skipping {}",
                describe_code_action(&action),
            );
            continue;
        }
        for operation in operations {
            let op = match operation {
                lsp::DocumentChangeOperation::Edit(op) => op,
                lsp::DocumentChangeOperation::Op(_) => {
                    zlog::warn!(
                        logger =>
                        "Code actions which create, delete, or rename files are not supported on format. Skipping {}",
                        describe_code_action(&action),
                    );
                    continue 'actions;
                }
            };
            let Ok(file_path) = op.text_document.uri.to_file_path() else {
                zlog::warn!(
                    logger =>
                    "Failed to convert URI '{:?}' to file path. Skipping {}",
                    &op.text_document.uri,
                    describe_code_action(&action),
                );
                continue 'actions;
            };
            if &file_path != buffer_path_abs {
                zlog::warn!(
                    logger =>
                    "File path '{:?}' does not match buffer path '{:?}'. Skipping {}",
                    file_path,
                    buffer_path_abs,
                    describe_code_action(&action),
                );
                continue 'actions;
            }
            let mut lsp_edits = Vec::new();
            for edit in op.edits {
                match edit {
                    Edit::Plain(edit) => {
                        if !lsp_edits.contains(&edit) {
                            lsp_edits.push(edit);
                        }
                    }
                    Edit::Annotated(edit) => {
                        if !lsp_edits.contains(&edit.text_edit) {
                            lsp_edits.push(edit.text_edit);
                        }
                    }
                    Edit::Snippet(_) => {
                        zlog::warn!(
                            logger =>
                            "Code actions which produce snippet edits are not supported during formatting. Skipping {}",
                            describe_code_action(&action),
                        );
                        continue 'actions;
                    }
                }
            }
            let edits_result = lsp_store
                .update(cx, |lsp_store, cx| {
                    lsp_store.as_local_mut().unwrap().edits_from_lsp(
                        &buffer.handle,
                        lsp_edits,
                        server.server_id(),
                        op.text_document.version,
                        cx,
                    )
                })?
                .await;
            let Ok(resolved_edits) = edits_result else {
                zlog::warn!(
                    logger =>
                    "Failed to resolve edits from LSP for buffer {:?} while handling {}",
                    buffer_path_abs.as_path(),
                    describe_code_action(&action),
                );
                continue 'actions;
            };
            edits.extend(resolved_edits);
        }
        if edits.is_empty() {
            zlog::warn!(logger => "No edits resolved from LSP");
            continue;
        }
        extend_formatting_transaction(
            buffer,
            formatting_transaction_id,
            cx,
            |buffer, cx| {
                buffer.edit(edits, None, cx);
            },
        )?;
    }
    if let Some(command) = action.lsp_action.command() {
        zlog::warn!(
            logger =>
            "Executing code action command '{}'. This may cause formatting to abort unnecessarily as well as splitting formatting into two entries in the undo history",
            &command.command,
        );
        // bail early if command is invalid
        let server_capabilities = server.capabilities();
        let available_commands = server_capabilities
            .execute_command_provider
            .as_ref()
            .map(|options| options.commands.as_slice())
            .unwrap_or_default();
        if !available_commands.contains(&command.command) {
            zlog::warn!(
                logger =>
                "Cannot execute a command {} not listed in the language server capabilities of server {}",
                command.command,
                server.name(),
            );
            continue;
        }
        // noop so we just ensure buffer hasn't been edited since resolving code actions
        extend_formatting_transaction(
            buffer,
            formatting_transaction_id,
            cx,
            |_, _| {},
        )?;
        zlog::info!(logger => "Executing command {}", &command.command);
        lsp_store.update(cx, |this, _| {
            this.as_local_mut()
                .unwrap()
                .last_workspace_edits_by_language_server
                .remove(&server.server_id());
        })?;
        let execute_command_result = server
            .request::<lsp::request::ExecuteCommand>(lsp::ExecuteCommandParams {
                command: command.command.clone(),
                arguments: command.arguments.clone().unwrap_or_default(),
                ..Default::default()
            })
            .await
            .into_response();
        if execute_command_result.is_err() {
            zlog::error!(
                logger =>
                "Failed to execute command '{}' as part of {}",
                &command.command,
                describe_code_action(&action),
            );
            continue 'actions;
        }
        let mut project_transaction_command = lsp_store.update(cx, |this, _| {
            this.as_local_mut()
                .unwrap()
                .last_workspace_edits_by_language_server
                .remove(&server.server_id())
                .unwrap_or_default()
        })?;
        if let Some(transaction) = project_transaction_command.0.remove(&buffer.handle) {
            zlog::trace!(
                logger =>
                "Successfully captured {} edits that resulted from command {}",
                transaction.edit_ids.len(),
                &command.command,
            );
            let transaction_id_project_transaction = transaction.id;
            buffer.handle.update(cx, |buffer, _| {
                // it may have been removed from history if push_to_history was
                // false in deserialize_workspace_edit. If so push it so we
                // can merge it with the format transaction
                // and pop the combined transaction off the history stack
                // later if push_to_history is false
                if buffer.get_transaction(transaction.id).is_none() {
                    buffer.push_transaction(transaction, Instant::now());
                }
                buffer.merge_transactions(
                    transaction_id_project_transaction,
                    formatting_transaction_id,
                );
            })?;
        }
        if !project_transaction_command.0.is_empty() {
            let extra_buffers = project_transaction_command
                .0
                .keys()
                .filter_map(|buffer_handle| {
                    buffer_handle
                        .read_with(cx, |b, cx| b.project_path(cx))
                        .ok()
                        .flatten()
                })
                .map(|p| p.path.to_sanitized_string())
                .join(", ");
            zlog::warn!(
                logger =>
                "Unexpected edits to buffers other than the buffer actively being formatted due to command {}. Impacted buffers: [{}].",
                &command.command,
                extra_buffers,
            );
            // NOTE: if this case is hit, the proper thing to do is, for each buffer, to merge
            // the extra transaction into the existing transaction in project_transaction if
            // there is one; if there isn't one in project_transaction, add it so it's included,
            // and merge it into the format transaction when it's created later
        }
    }
}
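For context, the normalization step above flattens both `WorkspaceEdit` encodings — the legacy `changes` URI-to-edits map and the ordered `documentChanges` list — into one operation list before filtering. A hypothetical, self-contained sketch of that shape conversion, using simplified stand-in types rather than the real `lsp-types` definitions:

```rust
// Simplified stand-ins for the two WorkspaceEdit encodings (hypothetical types).
#[derive(Debug, PartialEq)]
struct TextEdit {
    uri: String,
    new_text: String,
}

enum WorkspaceEdit {
    // `changes`: a map from document URI to that document's edits.
    Changes(Vec<(String, Vec<String>)>),
    // `documentChanges`: an already-ordered flat list of per-document edits.
    DocumentChanges(Vec<TextEdit>),
}

// Normalize both shapes into one flat operation list, as the formatting code
// does before rejecting unsupported operations.
fn normalize(edit: WorkspaceEdit) -> Vec<TextEdit> {
    match edit {
        WorkspaceEdit::DocumentChanges(ops) => ops,
        WorkspaceEdit::Changes(changes) => changes
            .into_iter()
            .flat_map(|(uri, edits)| {
                edits.into_iter().map(move |new_text| TextEdit {
                    uri: uri.clone(),
                    new_text,
                })
            })
            .collect(),
    }
}

fn main() {
    let legacy = WorkspaceEdit::Changes(vec![(
        "file:///a.rs".into(),
        vec!["fn a() {}".into()],
    )]);
    let ops = normalize(legacy);
    assert_eq!(ops.len(), 1);
    assert_eq!(ops[0].uri, "file:///a.rs");
    println!("normalized {} operation(s)", ops.len());
}
```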

View File

@@ -211,12 +211,6 @@ impl Project {
let activation_script = activation_script.join("; ");
let to_run = format_to_run();
// todo(lw): Alacritty uses `CreateProcessW` on windows with the entire command and arg sequence merged into a single string,
// without quoting the arguments
#[cfg(windows)]
let arg =
quote_arg(&format!("{activation_script}; {to_run}"), true);
#[cfg(not(windows))]
let arg = format!("{activation_script}; {to_run}");
(
@@ -515,37 +509,6 @@ impl Project {
}
}
/// We're not using shlex for windows as it is overly eager with escaping some of the special characters (^) we need for nu. Hence, we took
/// that quote impl straight from Rust stdlib (Command API).
#[cfg(windows)]
fn quote_arg(argument: &str, quote: bool) -> String {
let mut arg = String::new();
if quote {
arg.push('"');
}
let mut backslashes: usize = 0;
for x in argument.chars() {
if x == '\\' {
backslashes += 1;
} else {
if x == '"' {
// Add n+1 backslashes to total 2n+1 before internal '"'.
arg.extend((0..=backslashes).map(|_| '\\'));
}
backslashes = 0;
}
arg.push(x);
}
if quote {
// Add n backslashes to total 2n before ending '"'.
arg.extend((0..backslashes).map(|_| '\\'));
arg.push('"');
}
arg
}
fn create_remote_shell(
spawn_command: Option<(&String, &Vec<String>)>,
mut env: HashMap<String, String>,
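The removed `quote_arg` helper implements the MSVC-style quoting rule the comment describes: embedded quotes get `2n+1` backslashes, and trailing backslashes are doubled so they cannot escape the closing quote. A standalone sketch of its behavior (the function body copied verbatim for illustration):

```rust
// Standalone copy of the removed `quote_arg`, for illustration only.
fn quote_arg(argument: &str, quote: bool) -> String {
    let mut arg = String::new();
    if quote {
        arg.push('"');
    }
    let mut backslashes: usize = 0;
    for x in argument.chars() {
        if x == '\\' {
            backslashes += 1;
        } else {
            if x == '"' {
                // Add n+1 backslashes to total 2n+1 before internal '"'.
                arg.extend((0..=backslashes).map(|_| '\\'));
            }
            backslashes = 0;
        }
        arg.push(x);
    }
    if quote {
        // Add n backslashes to total 2n before ending '"'.
        arg.extend((0..backslashes).map(|_| '\\'));
        arg.push('"');
    }
    arg
}

fn main() {
    // Embedded quotes are backslash-escaped inside the wrapping quotes.
    assert_eq!(quote_arg(r#"say "hi""#, true), r#""say \"hi\"""#);
    // A trailing backslash is doubled so it cannot escape the closing quote.
    assert_eq!(quote_arg(r"C:\dir\", true), r#""C:\dir\\""#);
    // Without quoting, the argument passes through unchanged.
    assert_eq!(quote_arg("plain", false), "plain");
    println!("ok");
}
```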

View File

@@ -49,6 +49,7 @@ merge_from_overwrites!(
bool,
f64,
f32,
char,
std::num::NonZeroUsize,
std::num::NonZeroU32,
String,

View File

@@ -261,7 +261,7 @@ pub struct LanguageSettingsContent {
/// Visible characters used to render whitespace when show_whitespaces is enabled.
///
/// Default: "•" for spaces, "→" for tabs.
pub whitespace_map: Option<WhitespaceMap>,
pub whitespace_map: Option<WhitespaceMapContent>,
/// Whether to start a new line with a comment when a previous line is a comment as well.
///
/// Default: true
@@ -354,23 +354,9 @@ pub enum ShowWhitespaceSetting {
#[skip_serializing_none]
#[derive(Clone, Debug, Default, Serialize, Deserialize, JsonSchema, MergeFrom, PartialEq)]
pub struct WhitespaceMap {
pub space: Option<String>,
pub tab: Option<String>,
}
impl WhitespaceMap {
pub fn space(&self) -> SharedString {
self.space
.as_ref()
.map_or_else(|| SharedString::from(""), |s| SharedString::from(s))
}
pub fn tab(&self) -> SharedString {
self.tab
.as_ref()
.map_or_else(|| SharedString::from(""), |s| SharedString::from(s))
}
pub struct WhitespaceMapContent {
pub space: Option<char>,
pub tab: Option<char>,
}
/// The behavior of `editor::Rewrap`.

View File

@@ -301,6 +301,9 @@ pub struct XaiAvailableModel {
pub max_tokens: u64,
pub max_output_tokens: Option<u64>,
pub max_completion_tokens: Option<u64>,
pub supports_images: Option<bool>,
pub supports_tools: Option<bool>,
pub parallel_tool_calls: Option<bool>,
}
#[skip_serializing_none]

View File

@@ -210,6 +210,7 @@ pub struct ShellBuilder {
program: String,
args: Vec<String>,
interactive: bool,
/// Whether to redirect stdin to /dev/null for the spawned command by wrapping it in a subshell.
redirect_stdin: bool,
kind: ShellKind,
}
@@ -283,10 +284,12 @@ impl ShellBuilder {
if self.redirect_stdin {
match self.kind {
ShellKind::Posix | ShellKind::Nushell | ShellKind::Fish | ShellKind::Csh => {
combined_command.push_str(" </dev/null");
combined_command.insert(0, '(');
combined_command.push_str(") </dev/null");
}
ShellKind::PowerShell => {
combined_command.insert_str(0, "$null | ");
combined_command.insert_str(0, "$null | {");
combined_command.push_str("}");
}
ShellKind::Cmd => {
combined_command.push_str("< NUL");
@@ -333,4 +336,17 @@ mod test {
]
);
}
#[test]
fn redirect_stdin_to_dev_null_precedence() {
let shell = Shell::Program("nu".to_owned());
let shell_builder = ShellBuilder::new(None, &shell);
let (program, args) = shell_builder
.redirect_stdin_to_dev_null()
.build(Some("echo".into()), &["nothing".to_string()]);
assert_eq!(program, "nu");
assert_eq!(args, vec!["-i", "-c", "(echo nothing) </dev/null"]);
}
}
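The grouping change above exists because a bare trailing redirection may bind only to the last command in some shells (nushell in particular); wrapping the sequence in a subshell applies one redirection to every command inside it. A plain POSIX-sh sketch of that effect (hypothetical snippet, not the nushell case itself):

```shell
# With the subshell parentheses, a single trailing </dev/null covers both
# `read` calls, so neither can consume the caller's real stdin.
out=$( ( read -r a || true; read -r b || true; echo "a=${a:-EOF} b=${b:-EOF}" ) </dev/null )
echo "$out"
```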

View File

@@ -4841,10 +4841,20 @@ impl BackgroundScanner {
let mut entries_by_id_edits = Vec::new();
let mut entries_by_path_edits = Vec::new();
let path = job
let Some(path) = job
.abs_path
.strip_prefix(snapshot.abs_path.as_path())
.unwrap();
.map_err(|_| {
anyhow::anyhow!(
"Failed to strip prefix '{}' from path '{}'",
snapshot.abs_path.as_path().display(),
job.abs_path.display()
)
})
.log_err()
else {
return;
};
if let Ok(Some(metadata)) = smol::block_on(self.fs.metadata(&job.abs_path.join(*DOT_GIT)))
&& metadata.is_dir
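The hunk above replaces a panicking `.unwrap()` with a logged early return. The underlying `std::path::Path::strip_prefix` call already returns a `Result`, so the change is only in how the error case is surfaced:

```rust
use std::path::Path;

fn main() {
    let root = Path::new("/work/tree");
    // Matching prefix: yields the worktree-relative path.
    let rel = Path::new("/work/tree/src/lib.rs")
        .strip_prefix(root)
        .unwrap();
    assert_eq!(rel, Path::new("src/lib.rs"));
    // Non-matching prefix: an Err, which the scanner now logs and skips
    // instead of panicking on.
    assert!(Path::new("/elsewhere/file").strip_prefix(root).is_err());
    println!("ok");
}
```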

View File

@@ -18,9 +18,19 @@ pub enum Model {
Grok3Fast,
#[serde(rename = "grok-3-mini-fast-latest")]
Grok3MiniFast,
#[serde(rename = "grok-4-latest")]
#[serde(rename = "grok-4", alias = "grok-4-latest")]
Grok4,
#[serde(rename = "grok-code-fast-1")]
#[serde(
rename = "grok-4-fast-reasoning",
alias = "grok-4-fast-reasoning-latest"
)]
Grok4FastReasoning,
#[serde(
rename = "grok-4-fast-non-reasoning",
alias = "grok-4-fast-non-reasoning-latest"
)]
Grok4FastNonReasoning,
#[serde(rename = "grok-code-fast-1", alias = "grok-code-fast-1-0825")]
GrokCodeFast1,
#[serde(rename = "custom")]
Custom {
@@ -30,6 +40,9 @@ pub enum Model {
max_tokens: u64,
max_output_tokens: Option<u64>,
max_completion_tokens: Option<u64>,
supports_images: Option<bool>,
supports_tools: Option<bool>,
parallel_tool_calls: Option<bool>,
},
}
@@ -40,6 +53,9 @@ impl Model {
pub fn from_id(id: &str) -> Result<Self> {
match id {
"grok-4" => Ok(Self::Grok4),
"grok-4-fast-reasoning" => Ok(Self::Grok4FastReasoning),
"grok-4-fast-non-reasoning" => Ok(Self::Grok4FastNonReasoning),
"grok-2-vision" => Ok(Self::Grok2Vision),
"grok-3" => Ok(Self::Grok3),
"grok-3-mini" => Ok(Self::Grok3Mini),
@@ -58,6 +74,8 @@ impl Model {
Self::Grok3Fast => "grok-3-fast",
Self::Grok3MiniFast => "grok-3-mini-fast",
Self::Grok4 => "grok-4",
Self::Grok4FastReasoning => "grok-4-fast-reasoning",
Self::Grok4FastNonReasoning => "grok-4-fast-non-reasoning",
Self::GrokCodeFast1 => "grok-code-fast-1",
Self::Custom { name, .. } => name,
}
@@ -71,6 +89,8 @@ impl Model {
Self::Grok3Fast => "Grok 3 Fast",
Self::Grok3MiniFast => "Grok 3 Mini Fast",
Self::Grok4 => "Grok 4",
Self::Grok4FastReasoning => "Grok 4 Fast",
Self::Grok4FastNonReasoning => "Grok 4 Fast (Non-Reasoning)",
Self::GrokCodeFast1 => "Grok Code Fast 1",
Self::Custom {
name, display_name, ..
@@ -82,6 +102,7 @@ impl Model {
match self {
Self::Grok3 | Self::Grok3Mini | Self::Grok3Fast | Self::Grok3MiniFast => 131_072,
Self::Grok4 | Self::GrokCodeFast1 => 256_000,
Self::Grok4FastReasoning | Self::Grok4FastNonReasoning => 128_000,
Self::Grok2Vision => 8_192,
Self::Custom { max_tokens, .. } => *max_tokens,
}
@@ -90,7 +111,10 @@ impl Model {
pub fn max_output_tokens(&self) -> Option<u64> {
match self {
Self::Grok3 | Self::Grok3Mini | Self::Grok3Fast | Self::Grok3MiniFast => Some(8_192),
Self::Grok4 | Self::GrokCodeFast1 => Some(64_000),
Self::Grok4
| Self::Grok4FastReasoning
| Self::Grok4FastNonReasoning
| Self::GrokCodeFast1 => Some(64_000),
Self::Grok2Vision => Some(4_096),
Self::Custom {
max_output_tokens, ..
@@ -105,7 +129,13 @@ impl Model {
| Self::Grok3Mini
| Self::Grok3Fast
| Self::Grok3MiniFast
| Self::Grok4 => true,
| Self::Grok4
| Self::Grok4FastReasoning
| Self::Grok4FastNonReasoning => true,
Self::Custom {
parallel_tool_calls: Some(support),
..
} => *support,
Self::GrokCodeFast1 | Model::Custom { .. } => false,
}
}
@@ -122,12 +152,25 @@ impl Model {
| Self::Grok3Fast
| Self::Grok3MiniFast
| Self::Grok4
| Self::Grok4FastReasoning
| Self::Grok4FastNonReasoning
| Self::GrokCodeFast1 => true,
Self::Custom {
supports_tools: Some(support),
..
} => *support,
Model::Custom { .. } => false,
}
}
pub fn supports_images(&self) -> bool {
matches!(self, Self::Grok2Vision)
match self {
Self::Grok2Vision => true,
Self::Custom {
supports_images: Some(support),
..
} => *support,
_ => false,
}
}
}
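The `Option<bool>` pattern in the diff above lets custom model configurations opt in to a capability explicitly, while an unset flag falls back to `false`. A minimal hypothetical sketch of that pattern (not the real `Model` enum):

```rust
// Hypothetical, trimmed-down mirror of the capability-flag pattern.
enum Model {
    Grok2Vision,
    Custom { supports_images: Option<bool> },
}

impl Model {
    fn supports_images(&self) -> bool {
        match self {
            // Built-in models have the capability hardcoded.
            Model::Grok2Vision => true,
            // Custom models opt in via settings; an explicit flag wins.
            Model::Custom { supports_images: Some(flag) } => *flag,
            // Anything unset defaults to unsupported.
            _ => false,
        }
    }
}

fn main() {
    assert!(Model::Grok2Vision.supports_images());
    assert!(Model::Custom { supports_images: Some(true) }.supports_images());
    assert!(!Model::Custom { supports_images: None }.supports_images());
    println!("ok");
}
```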

View File

@@ -2,7 +2,7 @@
description = "The fast, collaborative code editor."
edition.workspace = true
name = "zed"
version = "0.206.0"
version = "0.206.7"
publish.workspace = true
license = "GPL-3.0-or-later"
authors = ["Zed Team <hi@zed.dev>"]

View File

@@ -1 +1 @@
dev
stable