Commit Graph

77 Commits

Author SHA1 Message Date
Richard Feldman
b5a0a3322d Add GPT-5.2 support (#44656)
<img width="429" height="188" alt="Screenshot 2025-12-11 at 3 45 26 PM"
src="https://github.com/user-attachments/assets/fe9f1b86-7268-4c63-a8c2-75ac671012c9"
/>


Release Notes:

- Added GPT-5.2 support when using your own OpenAI key
2025-12-11 15:49:10 -05:00
Agus Zubiaga
f08fd732a7 Add experimental mercury edit prediction provider (#44256)
Release Notes:

- N/A

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Max Brunsfeld <maxbrunsfeld@gmail.com>
2025-12-06 10:08:44 +00:00
Mikayla Maki
53eb35f5b2 Add GPT 5.1 to Zed BYOK (#43492)
Release Notes:

- Added support for OpenAI's GPT 5.1 model to BYOK
2025-11-25 14:17:27 -08:00
Tim McLean
fb90b12073 Add retry support for OpenAI-compatible LLM providers (#37891)
Automatically retry the agent's LLM completion requests when the
provider returns 429 Too Many Requests. Uses the Retry-After header to
determine the retry delay if it is available.

Many providers are frequently overloaded or have low rate limits. These
providers are essentially unusable without automatic retries.
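
The mechanism is roughly the following (a minimal sketch assuming a tokio-style executor; the `Response` type and `with_retries` helper are illustrative stand-ins, not the actual Zed code):

```rust
use std::time::Duration;

// Hypothetical response type standing in for the real HTTP client's.
struct Response {
    status: u16,
    retry_after_secs: Option<u64>, // parsed from the Retry-After header
}

// Retry `send` on 429 Too Many Requests, honoring Retry-After when present.
async fn with_retries<F, Fut>(mut send: F, max_retries: u32) -> Response
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Response>,
{
    for attempt in 0..=max_retries {
        let response = send().await;
        if response.status != 429 || attempt == max_retries {
            return response;
        }
        // Use the server-provided Retry-After delay, else a fallback.
        let delay = response.retry_after_secs.unwrap_or(2);
        tokio::time::sleep(Duration::from_secs(delay)).await;
    }
    unreachable!("the loop always returns on the final attempt")
}
```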

Tested with Cerebras configured via openai_compatible.

Related: #31531 

Release Notes:

- Added automatic retries for OpenAI-compatible LLM providers

---------

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-11-13 14:15:46 +00:00
Max Brunsfeld
784fdcaee3 zeta2: Build edit prediction prompt and process model output in client (#41870)
Release Notes:

- N/A

---------

Co-authored-by: Agus Zubiaga <agus@zed.dev>
Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Piotr Osiewicz <24362066+osiewicz@users.noreply.github.com>
2025-11-06 18:36:58 -05:00
Techy
27a18843d4 open_ai: Make the deltas optional (#39142)
I am using an Azure OpenAI instance, since that is what is provided at
work, and with how they have it set up, not all responses contain a
delta, which led to errors and truncated responses. This is related to
how they filter potentially offensive requests and responses. I don't
believe this filter was made in-house; I believe it is provided by
Microsoft/Azure, so I suspect this fix may help other users.
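
The fix amounts to making the field optional on the deserialization side; a sketch, with names mirroring the OpenAI wire format rather than the exact Zed structs:

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct StreamChoice {
    index: u32,
    // Azure's content filter can emit chunks without a `delta` at all,
    // so the field is an Option: a missing key deserializes as `None`
    // instead of failing the whole response.
    delta: Option<serde_json::Value>,
    finish_reason: Option<String>,
}
```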

Release Notes:

- N/A

Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-11-05 13:47:14 +01:00
Julia Ryan
ef5b8c6fed Remove workspace-hack (#40216)
We've been considering removing workspace-hack for a couple reasons:
- Lukas ran into a situation where its build script seemed to be causing
spurious rebuilds. This seems more likely to be a cargo bug than an
issue with workspace-hack itself (given that it has an empty build
script), but we don't necessarily want to take the time to hunt that
down right now.
- Marshall mentioned hakari interacts poorly with automated crate
updates (in our case provided by Renovate), because you'd need to run
`cargo hakari generate && cargo hakari manage-deps` after their changes,
and we prefer not to have actions that make commits.

Currently removing workspace-hack causes our workspace to grow from
~1700 to ~2000 crates being built (depending on platform), which is
mainly a problem when you're building the whole workspace or running
tests across the normal and remote binaries (which is where
feature-unification nets us the most sharing). It doesn't impact
incremental times noticeably when you're just iterating on `-p zed`, and
we'll hopefully get these savings back in the future when
rust-lang/cargo#14774 (which re-implements the functionality of hakari)
is finished.

Release Notes:

- N/A
2025-10-17 18:58:14 +00:00
Conrad Irwin
fcdab160f9 Settings refactor (#38367)
Co-Authored-By: Ben K <ben@zed.dev>
Co-Authored-By: Anthony <anthony@zed.dev>
Co-Authored-By: Mikayla <mikayla@zed.dev>

Release Notes:

- settings: Major internal changes to settings. The primary user-facing
effect is that some settings which did not make sense in project
settings files are no longer read from there (for example, the inline
blame settings).

---------

Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
Co-authored-by: Anthony <anthony@zed.dev>
2025-09-18 16:47:23 +00:00
ZhangJun
7091c70a1e open_ai: Trim newline before "data:" prefix and account for the possibility of no space after ":" (#37644)
I'm using an OpenAI-compatible model, but got nothing in the agent
thread panel, and the Zed log had a "Model generated an empty summary"
line.

I added a log statement to open_ai.rs:
<img width="2454" height="626" alt="图片"
src="https://github.com/user-attachments/assets/85354c7d-a0cc-4bba-86fd-2a640038a13e"
/>

and got:

<img width="3456" height="278" alt="图片"
src="https://github.com/user-attachments/assets/7746aedd-5d76-44b5-90f2-e129a1507178"
/>

It appears that `let line = line.strip_prefix("data: ")?;` cannot
handle these inputs correctly.
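
A more lenient parser along the lines of the fix (a sketch, not the exact patch; `strip_data_prefix` is an illustrative name):

```rust
// Tolerate a leading newline left over from the previous SSE event,
// and the absence of a space after the "data:" colon.
fn strip_data_prefix(line: &str) -> Option<&str> {
    let line = line.trim_start_matches(|c| c == '\r' || c == '\n');
    let rest = line.strip_prefix("data:")?;
    Some(rest.strip_prefix(' ').unwrap_or(rest))
}

#[test]
fn lenient_prefixes() {
    assert_eq!(strip_data_prefix("data: {}"), Some("{}"));
    assert_eq!(strip_data_prefix("\ndata:{}"), Some("{}"));
}
```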

Release Notes:

- N/A

---------

Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
2025-09-08 22:01:55 +02:00
Umesh Yadav
9f749881b3 language_models: Fix tool_choice null issue for other providers (#34554)
Follow up: #34532

Closes #35434 

Mostly fixes an issue where, when the `tool_choice` is none, it was
getting serialized as `null`. This was fixed for OpenRouter; this PR
follows up and cleans up other providers which might have the same
issue, since sending `null` is against the spec.
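
The usual serde fix is to omit the key entirely when the option is `None` instead of emitting `null`; a minimal sketch:

```rust
use serde::Serialize;

#[derive(Serialize)]
struct CompletionRequest {
    model: String,
    // Omit the key entirely when unset: serializing `"tool_choice": null`
    // is against the spec and breaks some providers.
    #[serde(skip_serializing_if = "Option::is_none")]
    tool_choice: Option<String>,
}
```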

Release Notes:

- N/A
2025-09-03 01:22:57 +02:00
Antonio Scandurra
39d86eeb7f Trim API key when submitting requests to LLM providers (#37082)
This prevents the common footgun of copy/pasting an API key
starting/ending with extra newlines, which would lead to a "bad request"
error.
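
The fix is as simple as it sounds; a sketch (the helper name is illustrative):

```rust
// Normalize a pasted key before it goes into the Authorization header.
fn sanitize_api_key(key: &str) -> &str {
    key.trim() // drops the leading/trailing newlines from copy/paste
}
```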

Closes #37038 

Release Notes:

- agent: Support pasting language model API keys that contain newlines.
2025-08-28 12:00:44 +00:00
Michael Sloan
0470baca50 open_ai: Remove model field from ResponseStreamEvent (#36902)
Closes #36901

Release Notes:

- Fixed use of Open WebUI as an LLM provider.
2025-08-25 19:50:08 +00:00
Piotr Osiewicz
05fc0c432c Fix a bunch of other low-hanging style lints (#36498)
- **Fix a bunch of low hanging style lints like unnecessary-return**
- **Fix single worktree violation**
- **And the rest**

Release Notes:

- N/A
2025-08-19 21:26:17 +02:00
Oleksiy Syvokon
42ffa8900a open_ai: Fix error response parsing (#36390)
Closes #35925

Release Notes:

- Fixed OpenAI error response parsing in some cases
2025-08-18 08:54:31 +00:00
Oleksiy Syvokon
2a57b160b0 openai: Don't send prompt_cache_key for OpenAI-compatible models (#36231)
Some APIs fail when they get this parameter

Closes #36215

Release Notes:

- Fixed OpenAI-compatible providers that don't support prompt caching
and/or reasoning
2025-08-15 13:54:24 +03:00
Oleksiy Syvokon
a3dcc76687 openai: Don't send reasoning_effort if it's not set (#36228)
Release Notes:

- N/A
2025-08-15 09:12:18 +00:00
Cretezy
8ff2e3e195 language_models: Add reasoning_effort for custom models (#35929)
Release Notes:

- Added `reasoning_effort` support to custom models

Tested using the following config:
```json5
  "language_models": {
    "openai": {
      "available_models": [
        {
          "name": "gpt-5-mini",
          "display_name": "GPT 5 Mini (custom reasoning)",
          "max_output_tokens": 128000,
          "max_tokens": 272000,
          "reasoning_effort": "high" // Can be minimal, low, medium (default), and high
        }
      ],
      "version": "1"
    }
  }
```

Docs:
https://platform.openai.com/docs/api-reference/chat/create#chat_create-reasoning_effort

This work could be used to split the GPT 5/5-mini/5-nano models into
each of their reasoning-effort variants, e.g. `gpt-5`, `gpt-5 low`,
`gpt-5 minimal`, `gpt-5 high`, and the same for mini/nano.

2025-08-13 06:09:16 +00:00
Oleksiy Syvokon
7167f193c0 open_ai: Send prompt_cache_key to improve caching (#36065)
Release Notes:

- N/A

Co-authored-by: Michael Sloan <mgsloan@gmail.com>
2025-08-12 21:51:23 +03:00
Oleksiy Syvokon
7ff0f1525e open_ai: Log inputs that caused parsing errors (#36063)
Release Notes:

- N/A

Co-authored-by: Michael Sloan <mgsloan@gmail.com>
2025-08-12 21:49:19 +03:00
Richard Feldman
7d4d8b8398 Add GPT-5 support through OpenAI API (#35822)
(This PR does not add GPT-5 to Zed Pro, but rather adds access if you're
using your own OpenAI API key.)

<img width="772" height="333" alt="Screenshot 2025-08-07 at 2 23 18 PM"
src="https://github.com/user-attachments/assets/42e75082-118a-4737-89b6-a740ae33b169"
/>

---

**NOTE:** If your API key is not through a verified organization, you
may see this error:

<img width="549" height="253" alt="Screenshot 2025-08-07 at 2 04 54 PM"
src="https://github.com/user-attachments/assets/d0b6d739-9c39-4af3-88d7-0c9609b0e6ba"
/>

Even if your org is verified, you still may not have access to GPT-5, in
which case you could see this error:

<img width="543" height="98" alt="Screenshot 2025-08-07 at 2 09 18 PM"
src="https://github.com/user-attachments/assets/e3ed31e3-2a11-4f07-8f3c-5b410fbe4540"
/>

One way to test if you're in this situation is to visit
https://platform.openai.com/chat/edit?models=gpt-5 and see if you get
the same "you don't have access to GPT-5" error on OpenAI's official
playground. It looks like this:

<img width="581" height="196" alt="Screenshot 2025-08-07 at 2 15 25 PM"
src="https://github.com/user-attachments/assets/ea1454ca-3c10-4703-8126-c02cb92a34f2"
/>

Release Notes:

- Added GPT-5, as well as its mini and nano variants. To use this, you
need to have an OpenAI API key configured via the `OPENAI_API_KEY`
environment variable.
2025-08-07 23:35:41 +00:00
Umesh Yadav
3f4098e87b open_ai: Make OpenAI error message generic (#33383)
Context: in https://github.com/zed-industries/zed/pull/33362 we started
using the underlying open_ai crate to make API calls for Vercel as
well. Now whenever we get an error, it looks like the one below: part
of the message mentions OpenAI, while the rest is the actual error
returned by the provider. This PR makes the error message generic for
now, so that people don't get confused seeing OpenAI in their v0
integration.

```
Error interacting with language model
Failed to connect to OpenAI API: 403 Forbidden {"success":false,"error":"Premium or Team plan required to access the v0 API: https://v0.dev/chat/settings/billing"}
```

Release Notes:

- N/A
2025-06-28 14:38:27 +02:00
Umesh Yadav
108162423d language_models: Emit UsageUpdate events for token usage in DeepSeek and OpenAI (#33242)

Release Notes:

- N/A
2025-06-25 09:42:30 +02:00
Bennet Bo Fenner
c34b24b5fb open_ai: Fix issues with OpenAI compatible APIs (#32982)
Ran into this while adding support for Vercel v0's models:
- The timestamp seems to be returned in milliseconds instead of
seconds, which breaks the bounds of `created: u32`. We did not use this
field anywhere, so we just decided to remove it.
- Sometimes the `choices` field can be empty when the last chunk comes
in, because it only contains `usage`.
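
A sketch of the resulting event shape (names approximate the wire format; the real struct lives in the open_ai crate):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct ResponseStreamEvent {
    // `created` was removed entirely: some providers return milliseconds,
    // which overflows a `u32` meant to hold seconds, and we never used it.
    #[serde(default)]
    choices: Vec<serde_json::Value>, // may be empty on the final usage-only chunk
    usage: Option<serde_json::Value>,
}
```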

Release Notes:

- N/A
2025-06-18 21:51:51 +00:00
Richard Feldman
5405c2c2d3 Standardize on u64 for token counts (#32869)
Previously we were using a mix of `u32` and `usize`, e.g. `max_tokens:
usize, max_output_tokens: Option<u32>` in the same `struct`.

Although [tiktoken](https://github.com/openai/tiktoken) uses `usize`,
token counts should be consistent across targets (e.g. the same model
doesn't suddenly get a smaller context window if you're compiling for
wasm32), and these token counts could end up getting serialized using a
binary protocol, so `usize` is not the right choice for token counts.

I chose to standardize on `u64` over `u32` because we don't store many
of them (so the extra size should be insignificant) and future models
may exceed `u32::MAX` tokens.
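
In practice the change looks roughly like this (the struct name is illustrative):

```rust
// Before: mixed widths in the same struct
//     max_tokens: usize,
//     max_output_tokens: Option<u32>,
//
// After: one width, stable across targets and binary wire formats.
pub struct ModelTokenLimits {
    pub max_tokens: u64,
    pub max_output_tokens: Option<u64>,
}
```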

Release Notes:

- N/A
2025-06-17 10:43:07 -04:00
Ben Brandt
2d4e427b45 OpenAI cleanups (#32597)
Release Notes:

- openai: Remove support for deprecated o1-preview and o1-mini models 
- openai: Support streaming for o1 model
2025-06-12 08:55:48 +00:00
Ben Brandt
8cc5b04045 open_ai: Remove redundant serde aliases and add model limits (#32572)
Remove unnecessary alias attributes from Model enum variants and add
max_output_tokens limits for all OpenAI models. Also fix
supports_system_messages to explicitly handle all model variants.

Release Notes:

- N/A
2025-06-11 22:51:41 +02:00
Adrian Furo
e6f51966a1 open_ai: Fix parallel tools issue (#30467)
There is no issue opened on this topic.

Release Notes:

- N/A

---------

Co-authored-by: Peter Tripp <peter@zed.dev>
Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
2025-05-26 11:46:35 +00:00
Ben Brandt
ef0e1cb2ba open_ai: Make Assistant message content optional (#31418)
Fixes regression caused by:
https://github.com/zed-industries/zed/pull/30639

Assistant messages can come back with no content, and we no longer
allowed that in the deserialization.
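
A sketch of the tolerant shape (names approximate the wire format):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct AssistantMessage {
    role: String,
    // May be absent, e.g. when the assistant responds only with tool
    // calls, so deserialization must not require the field.
    #[serde(default, skip_serializing_if = "Option::is_none")]
    content: Option<String>,
}
```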

Release Notes:

- open_ai: fixed deserialization issue if assistant content was empty
2025-05-26 09:59:39 +00:00
Kirill Bulatov
16366cf9f2 Use anyhow more idiomatically (#31052)
https://github.com/zed-industries/zed/issues/30972 brought up another
case where our error context is not enough to track down the actual
source of an issue: we get a general top-level error without an inner
error.

The reason for this was `.ok_or_else(|| anyhow!("failed to read HEAD
SHA"))?;` at the top level.

This PR finally reworks the way we use anyhow to reduce such issues (or
at least make it simpler to bubble them up later in a fix). On top of
that, it uses a few more anyhow methods for better readability:

* `.ok_or_else(|| anyhow!("..."))`, `map_err`, and other similar error
conversion/option reporting cases are replaced with `context` and
`with_context` calls
* in addition, various `anyhow!("failed to do ...")` messages are
replaced with `.context("Doing ...")` to remove the parasitic
`failed to` text
* `anyhow::ensure!` is used instead of `if ... { return Err(...); }`
* `anyhow::bail!` is used instead of `return Err(anyhow!(...));`
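
A sketch of these transformations, using the HEAD SHA example above:

```rust
use anyhow::{ensure, Context, Result};

fn head_sha(raw: Option<String>) -> Result<String> {
    // was: raw.ok_or_else(|| anyhow!("failed to read HEAD SHA"))?
    let sha = raw.context("reading HEAD SHA")?;
    // was: if sha.is_empty() { return Err(anyhow!("HEAD SHA is empty")); }
    ensure!(!sha.is_empty(), "HEAD SHA is empty");
    Ok(sha)
}
```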

Release Notes:

- N/A
2025-05-20 23:06:07 +00:00
Agus Zubiaga
dd6594621f Add image input support for OpenAI models (#30639)
Release Notes:

- Added input image support for OpenAI models
2025-05-13 17:32:42 +02:00
Marshall Bowers
b54bbebc03 open_ai: Remove list of supported countries (#29347)
This PR removes the list of supported countries from the `open_ai`
crate, as it is no longer referenced in this repo.

Release Notes:

- N/A
2025-04-24 14:55:37 +00:00
Michael Sloan
fbf7caf93e Default to fast model for thread summaries and titles + don't include system prompt / context / thinking segments (#29102)
* Adds a fast/cheaper model to providers and defaults thread
summarization to this model. The initial motivation was that
https://github.com/zed-industries/zed/pull/29099 would cause these
requests to fail when used with a thinking model, and it doesn't seem
correct to use a thinking model for summarization anyway.

* Skips system prompt, context, and thinking segments.

* If tool use is happening, allows 2 tool uses + one more agent response
before summarizing.

The downside is that there was some potential for prefix cache reuse
before, especially for title summarization (thread summarization
omitted tool results and so would not share a prefix for those). This
seems fine, as these requests should typically be fairly small. Even
for full thread summarization, skipping all tool use / context should
greatly reduce the token use.

Release Notes:

- N/A
2025-04-19 23:26:29 +00:00
Umesh Yadav
8117940aca Add support for OpenAI o3 and o4-mini models (#28881)
Release Notes:

- Add support for OpenAI o3 and o4-mini models via OpenAI API and
Copilot Chat providers.

---------

Co-authored-by: Peter Tripp <peter@zed.dev>
2025-04-17 10:58:41 -04:00
Umesh Yadav
84aa480344 Add support for OpenAI GPT-4.1 models (#28708)
Release Notes:

- Add support for OpenAI GPT-4.1 via Copilot Chat and OpenAI API

---------

Co-authored-by: Danilo Leal <daniloleal09@gmail.com>
Co-authored-by: Bennet Bo Fenner <bennetbo@gmx.de>
2025-04-14 16:15:59 -03:00
Marshall Bowers
819bb8fffb open_ai: Disable parallel_tool_calls (#28056)
This PR disables `parallel_tool_calls` for the models that support it,
as the Agent currently expects at most one tool use per turn.

It was a bit of trial and error to figure this out. OpenAI's API
annoyingly returns an error if you pass `parallel_tool_calls` to a
model that doesn't support it.
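
A sketch of the guard (the types and the support table here are illustrative stubs, not the crate's actual definitions):

```rust
struct CompletionRequest {
    parallel_tool_calls: Option<bool>,
}

enum Model {
    GptFourPointOne,
    O3Mini,
}

impl Model {
    fn supports_parallel_tool_calls(&self) -> bool {
        matches!(self, Model::GptFourPointOne) // illustrative support table
    }
}

// Only attach the flag for models that accept it; OpenAI returns an
// error if `parallel_tool_calls` is sent to a model that doesn't.
fn apply_tool_call_settings(model: &Model, request: &mut CompletionRequest) {
    if model.supports_parallel_tool_calls() {
        // The agent currently expects at most one tool use per turn.
        request.parallel_tool_calls = Some(false);
    }
}
```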

Release Notes:

- N/A
2025-04-03 22:07:37 +00:00
Marshall Bowers
7492ec3f67 Add tool use support for OpenAI models (#28051)
This PR adds support for using tools to the OpenAI models.

Release Notes:

- agent: Added support for tool use with OpenAI models (Preview only).
2025-04-03 20:55:11 +00:00
Marshall Bowers
e5b347b03a Remove unused extract_tool_args_from_events functions (#28038)
This PR removes the unused `extract_tool_args_from_events` functions
that were defined in some of the LLM provider crates.

Release Notes:

- N/A
2025-04-03 18:38:35 +00:00
Julia Ryan
01ec6e0f77 Add workspace-hack (#27277)
This adds a "workspace-hack" crate, see
[mozilla's](https://hg.mozilla.org/mozilla-central/file/3a265fdc9f33e5946f0ca0a04af73acd7e6d1a39/build/workspace-hack/Cargo.toml#l7)
for a concise explanation of why this is useful. For us in practice this
means that if I were to run all the tests (`cargo nextest r
--workspace`) and then `cargo r`, all the deps from the previous cargo
command will be reused. Before this PR it would rebuild many deps due to
resolving different sets of features for them. For me this frequently
caused long rebuilds when things "should" already be cached.

To avoid manually maintaining our workspace-hack crate, we will use
[cargo hakari](https://docs.rs/cargo-hakari) to update the build files
when there's a necessary change. I've added a step to CI that checks
whether the workspace-hack crate is up to date, and instructs you to
re-run `script/update-workspace-hack` when it fails.

Finally, to make sure that people can still depend on crates in our
workspace without pulling in all the workspace deps, we use a `[patch]`
section following [hakari's
instructions](https://docs.rs/cargo-hakari/0.9.36/cargo_hakari/patch_directive/index.html).

One possible follow-up task would be making guppy use our
`rust-toolchain.toml` instead of having to duplicate that list in its
config; I opened an issue for that upstream: guppy-rs/guppy#481.

TODO:
- [x] Fix the extension test failure
- [x] Ensure the dev dependencies aren't being unified by Hakari into
the main dependencies
- [x] Ensure that the remote-server binary continues to not depend on
LibSSL

Release Notes:

- N/A

---------

Co-authored-by: Mikayla <mikayla@zed.dev>
Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
2025-04-02 13:26:34 -07:00
Piotr Osiewicz
dc64ec9cc8 chore: Bump Rust edition to 2024 (#27800)
Follow-up to https://github.com/zed-industries/zed/pull/27791

Release Notes:

- N/A
2025-03-31 20:55:27 +02:00
Peter Tripp
d1af7b1322 Update Assistant context limits (#25087)
- Update GitHub Copilot Chat context limits
- Add decimal separators for consistency
2025-02-19 11:06:20 -05:00
Patrick Detlefsen
c0dd7e8367 open_ai: Include o3-mini in Model::from_id (#24261)
2025-02-05 16:45:38 -05:00
João Marcos
5bd7eaa173 Solve 50+ cargo doc warnings (#24071)
Release Notes:

- N/A
2025-02-01 06:19:29 +00:00
Peter Tripp
d04800c329 Add OpenAI o3-mini support (#24044)
Release Notes:

- Add support for OpenAI o3-mini
2025-01-31 15:48:55 -05:00
Peter Tripp
f6d286c7db openai: Add back O1-Preview (#23715)
Follow-up to: https://github.com/zed-industries/zed/pull/23425
2025-01-27 14:44:12 +00:00
Nathan Sobo
6fca1d2b0b Eliminate GPUI View, ViewContext, and WindowContext types (#22632)
There's still a bit more work to do on this, but this PR is compiling
(with warnings) after eliminating the key types. When the tasks below
are complete, this will be the new narrative for GPUI:

- `Entity<T>` - This replaces `View<T>`/`Model<T>`. It represents a unit
of state, and if `T` implements `Render`, then `Entity<T>` implements
`Element`.
- `&mut App` This replaces `AppContext` and represents the app.
- `&mut Context<T>` This replaces `ModelContext` and derefs to `App`. It
is provided by the framework when updating an entity.
- `&mut Window` Broken out of `&mut WindowContext` which no longer
exists. Every method that once took `&mut WindowContext` now takes `&mut
Window, &mut App` and every method that took `&mut ViewContext<T>` now
takes `&mut Window, &mut Context<T>`
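
Concretely, a render-style method migrates roughly like this (signatures sketched from the description above):

```rust
// Before: one context type carried both window and app state.
//     fn render(&mut self, cx: &mut ViewContext<Self>) -> impl IntoElement
//
// After: the window and the app are passed as separate arguments.
//     fn render(&mut self, window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement
```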

Not pictured here are the two other failed attempts. It's been quite a
month!

Tasks:

- [x] Remove `View`, `ViewContext`, `WindowContext` and thread through
`Window`
- [x] [@cole-miller @mikayla-maki] Redraw window when entities change
- [x] [@cole-miller @mikayla-maki] Get examples and Zed running
- [x] [@cole-miller @mikayla-maki] Fix Zed rendering
- [x] [@mikayla-maki] Fix todo! macros and comments
- [x] Fix a bug where the editor would not be redrawn because of view
caching
- [x] remove the publicness of `window.notify()` and replace it with
`AppContext::notify`
- [x] remove `observe_new_window_models`, replacing it with
`observe_new_models` taking an optional window
- [x] Fix a bug where the project panel would not be redrawn because of
the wrong refresh() call being used
- [x] Fix the tests
- [x] Fix warnings by eliminating `Window` params or using `_`
- [x] Fix conflicts
- [x] Simplify generic code where possible
- [x] Rename types
- [ ] Update docs

### issues post merge

- [x] Issues switching between normal and insert mode
- [x] Assistant re-rendering failure
- [x] Vim test failures
- [x] Mac build issue



Release Notes:

- N/A

---------

Co-authored-by: Antonio Scandurra <me@as-cii.com>
Co-authored-by: Cole Miller <cole@zed.dev>
Co-authored-by: Mikayla <mikayla@zed.dev>
Co-authored-by: Joseph <joseph@zed.dev>
Co-authored-by: max <max@zed.dev>
Co-authored-by: Michael Sloan <michael@zed.dev>
Co-authored-by: Mikayla Maki <mikaylamaki@Mikaylas-MacBook-Pro.local>
Co-authored-by: Mikayla <mikayla.c.maki@gmail.com>
Co-authored-by: joão <joao@zed.dev>
2025-01-26 03:02:45 +00:00
Peter Tripp
c450cd51ea open_ai: Move from o1-preview to o1 for OpenAI Assistant provider (#23425)
- Closes: https://github.com/zed-industries/zed/issues/22521
- Follow-up to: https://github.com/zed-industries/zed/pull/22376
2025-01-21 15:05:21 -05:00
Piotr Osiewicz
c9534e8025 chore: Use workspace fields for edition and publish (#23291)
This prepares us for an upcoming bump to Rust 2024 edition.

Release Notes:

- N/A
2025-01-17 17:39:22 +01:00
Antonio Scandurra
77b8296fbb Introduce staff-only inline completion provider (#21739)
Release Notes:

- N/A

---------

Co-authored-by: Thorsten Ball <mrnugget@gmail.com>
Co-authored-by: Bennet <bennet@zed.dev>
Co-authored-by: Thorsten <thorsten@zed.dev>
2024-12-09 14:26:36 +01:00
Thorsten Ball
aee01f2c50 assistant: Remove low_speed_timeout (#20681)
This removes the `low_speed_timeout` setting from all providers as a
response to issue #19509.

The reason is that the original `low_speed_timeout` was only added as
part of #9913 because users wanted to _get rid of_ timeouts: they
wanted to bump the default timeout from 5sec to a lot more.

Then, in #19055, the meaning of `low_speed_timeout` changed: it became
a normal `timeout`, which is a different thing and breaks slower LLMs
that don't reply with a complete response within the configured time.

So we figured: let's remove the whole thing and replace it with a
default _connect_ timeout, to make sure that we can connect to a server
in 10s but then give the server as long as it wants to complete its
response.
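
The distinction, sketched with std's TCP API (illustrative only; Zed's HTTP client wires this up differently):

```rust
use std::net::{SocketAddr, TcpStream};
use std::time::Duration;

fn connect(addr: SocketAddr) -> std::io::Result<TcpStream> {
    // Bound only connection establishment to 10 seconds...
    let stream = TcpStream::connect_timeout(&addr, Duration::from_secs(10))?;
    // ...then let the server take as long as it needs to respond.
    stream.set_read_timeout(None)?;
    Ok(stream)
}
```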

Closes #19509

Release Notes:

- Removed the `low_speed_timeout` setting from LLM provider settings,
since it was only used to _increase_ the timeout to give LLMs more time,
but since we don't have any other use for it, we simply remove the
setting to give LLMs as long as they need.

---------

Co-authored-by: Antonio <antonio@zed.dev>
Co-authored-by: Peter Tripp <peter@zed.dev>
2024-11-15 07:37:31 +01:00
Conrad Irwin
e28496d4e2 Stop leaking isahc assumption (#18408)
Users of our http_client crate knew they were interacting with isahc,
since they set its extensions on the request. This change adds our own
equivalents for those APIs in preparation for changing the default HTTP
client.

Release Notes:

- N/A
2024-09-26 14:01:05 -06:00