We've been considering removing workspace-hack for a couple of reasons:
- Lukas ran into a situation where its build script seemed to be causing
spurious rebuilds. This seems more likely to be a cargo bug than an
issue with workspace-hack itself (given that it has an empty build
script), but we don't necessarily want to take the time to hunt that
down right now.
- Marshall mentioned that hakari interacts poorly with automated crate
updates (in our case provided by Renovate), because you'd need to run
`cargo hakari generate && cargo hakari manage-deps` after its changes,
and we prefer not to have automated actions that make commits.
Currently removing workspace-hack causes our workspace to grow from
~1700 to ~2000 crates being built (depending on platform), which is
mainly a problem when you're building the whole workspace or running
tests across the normal and remote binaries (which is where
feature-unification nets us the most sharing). It doesn't impact
incremental times noticeably when you're just iterating on `-p zed`, and
we'll hopefully get these savings back in the future when
rust-lang/cargo#14774 (which re-implements the functionality of hakari)
is finished.
Release Notes:
- N/A
Co-Authored-By: Ben K <ben@zed.dev>
Co-Authored-By: Anthony <anthony@zed.dev>
Co-Authored-By: Mikayla <mikayla@zed.dev>
Release Notes:
- settings: Major internal changes to settings. The primary user-facing
effect is that some settings which did not make sense in project
settings files are no longer read from there (for example, the inline
blame settings).
---------
Co-authored-by: Ben Kunkle <ben@zed.dev>
Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>
Co-authored-by: Anthony <anthony@zed.dev>
Supersedes: #34500
This will also allow fixing #35386 without the UX changes, since
providers can now be controlled through settings within Zed as well.
Rebased onto the latest main and added docs. Added @AurelienTollard as a
co-author, since he started this work; everything else remains the same
as the original PR.
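For reference, here is a hypothetical sketch of what such a settings entry could look like; the key names are illustrative and should be checked against the docs added in this PR, while the `provider` object mirrors OpenRouter's documented provider-routing request fields:

```json
{
  "language_models": {
    "open_router": {
      "available_models": [
        {
          "name": "mistralai/mistral-large",
          "display_name": "Mistral Large",
          "max_tokens": 128000,
          "provider": {
            "order": ["mistral", "azure"],
            "allow_fallbacks": false
          }
        }
      ]
    }
  }
}
```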
Release Notes:
- Added ability to control Provider Routing for OpenRouter models from
settings.
Co-authored-by: Aurelien Tollard <tollard.aurelien1999@gmail.com>
This PR switches the OpenRouter integration from fetching all models to
fetching only the models specified in the user's account preferences.
This should improve the model selection experience.
**The Problem**
The previous implementation used the `/models` endpoint, which returned
an exhaustive list of all models supported by OpenRouter. This resulted
in a long and cluttered model selection dropdown in Zed, making it
difficult for users to find the models they actually use.
**The Solution**
We now use the `/models/user` endpoint. This API call returns a curated
list based on the models and providers the user has selected in their
[OpenRouter dashboard](https://openrouter.ai/models).
Ref: [OpenRouter API Docs for User-Filtered
Models](https://openrouter.ai/docs/api-reference/list-models-filtered-by-user-provider-preferences)
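A minimal sketch of the switch (using `reqwest` and illustrative types rather than Zed's actual HTTP client and model structs):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct ModelList {
    data: Vec<Model>,
}

#[derive(Deserialize)]
struct Model {
    id: String,
    name: String,
}

async fn list_user_models(api_key: &str) -> reqwest::Result<Vec<Model>> {
    // `/models/user` returns only the models enabled in the user's
    // OpenRouter account preferences, unlike `/models`, which returns
    // every model OpenRouter supports.
    let list: ModelList = reqwest::Client::new()
        .get("https://openrouter.ai/api/v1/models/user")
        .bearer_auth(api_key) // this endpoint requires the user's key
        .send()
        .await?
        .error_for_status()?
        .json()
        .await?;
    Ok(list.data)
}
```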
Release Notes:
- language_models: Support OpenRouter user preferences for available
models
Improves the error handling for OpenRouter and adds automatic retries,
like the Anthropic provider, for a few of the status codes.
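Schematically, the retry decision looks something like this (a simplified sketch; the exact status codes and backoff are illustrative of the approach, not the actual implementation):

```rust
use std::time::Duration;

/// Decide whether a failed request should be retried, and after how long.
fn retry_after(status: u16, attempt: u32, max_attempts: u32) -> Option<Duration> {
    if attempt >= max_attempts {
        return None;
    }
    match status {
        // 429 Too Many Requests and 5xx server errors are transient,
        // so back off exponentially and try again.
        429 | 500..=599 => Some(Duration::from_secs(2u64.pow(attempt))),
        // Other errors (e.g. 400/401) won't succeed on retry.
        _ => None,
    }
}
```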
Release Notes:
- Improves error messages for the OpenRouter provider
- Automatically retries when OpenRouter returns a rate-limit or server error
This prevents the common footgun of copy/pasting an API key
starting/ending with extra newlines, which would lead to a "bad request"
error.
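The fix amounts to sanitizing the pasted value before storing it; schematically (Zed's actual code path differs):

```rust
/// Strip surrounding whitespace (including the stray newlines that often
/// come along when copying a key) before storing it.
fn sanitize_api_key(pasted: &str) -> &str {
    pasted.trim()
}

#[test]
fn strips_newlines() {
    assert_eq!(sanitize_api_key("\nsk-or-abc123\n"), "sk-or-abc123");
}
```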
Closes #37038
Release Notes:
- agent: Support pasting language model API keys that contain newlines.
Closes #34314
This PR resolves an issue where `serde(untagged)` caused Rust `None`
values to serialize as `null`, which OpenRouter's Mistral API (when
`tool_choice` is present) incorrectly interprets as a defined value,
leading to a 400 error. By replacing `serde(untagged)` with
`serde(rename_all = "snake_case")`, `None` values are now correctly
omitted from the serialized JSON, fixing the problem.
P.S. A separate PR will address `serde(untagged)` usage for other
providers, as `null` is not expected for them either.
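A condensed sketch of the shape of the fix (illustrative, not the exact Zed types; the real `ToolChoice` has more variants):

```rust
use serde::Serialize;

// A plain snake_case enum serializes a set value as a string
// ("auto", "none", "required") rather than via untagged inference.
#[derive(Serialize)]
#[serde(rename_all = "snake_case")]
enum ToolChoice {
    Auto,
    None,
    Required,
}

#[derive(Serialize)]
struct CompletionRequest {
    model: String,
    // When no choice is set, drop the key entirely instead of
    // emitting `"tool_choice": null`, which Mistral rejects.
    #[serde(skip_serializing_if = "Option::is_none")]
    tool_choice: Option<ToolChoice>,
}
```

With this shape, serializing a request whose `tool_choice` is `Option::None` produces a JSON body with no `tool_choice` key at all.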
Release Notes:
- Fix `ToolChoice` getting serialized to `null` on OpenRouter
Also did a bit of cleanup of the code that loads models from settings,
as it's no longer required: we now fetch all models from OpenRouter, and
it's better to maintain one source of truth.
Release Notes:
- Add thinking support to OpenRouter provider
Previously we were using a mix of `u32` and `usize`, e.g. `max_tokens:
usize, max_output_tokens: Option<u32>` in the same `struct`.
Although [tiktoken](https://github.com/openai/tiktoken) uses `usize`,
token counts should be consistent across targets (e.g. the same model
doesn't suddenly get a smaller context window if you're compiling for
wasm32), and these token counts could end up getting serialized using a
binary protocol, so `usize` is not the right choice for token counts.
I chose to standardize on `u64` over `u32` because we don't store many
of them (so the extra size should be insignificant) and future models
may exceed `u32::MAX` tokens.
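Schematically (field names are illustrative; the point is the integer width, not the exact struct):

```rust
// Before: mixed integer widths in one struct meant the token counts
// depended on the compilation target.
mod before {
    pub struct ModelLimits {
        pub max_tokens: usize,              // 32 bits on wasm32, 64 elsewhere
        pub max_output_tokens: Option<u32>, // different width again
    }
}

// After: one fixed-width type, stable across targets and safe to
// serialize in a binary protocol.
mod after {
    pub struct ModelLimits {
        pub max_tokens: u64,
        pub max_output_tokens: Option<u64>,
    }
}
```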
Release Notes:
- N/A
We push the usage data whenever we receive it from the provider to make
sure the counting is correct after the turn has ended. Providers updated
(a sketch of the pattern follows the list):
- [x] Ollama
- [x] Copilot
- [x] Mistral
- [x] OpenRouter
- [x] LMStudio
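Here's a rough sketch of the pattern (illustrative types, not Zed's actual event model):

```rust
struct Usage {
    input_tokens: u64,
    output_tokens: u64,
}

struct Chunk {
    text: Option<String>,
    usage: Option<Usage>,
}

enum StreamEvent {
    Text(String),
    Usage(Usage),
}

fn events_for_chunk(chunk: Chunk, out: &mut Vec<StreamEvent>) {
    if let Some(text) = chunk.text {
        out.push(StreamEvent::Text(text));
    }
    if let Some(usage) = chunk.usage {
        // Forward usage as soon as the provider reports it, rather than
        // waiting for the end of the turn; this keeps the count correct
        // even if the stream is cut off before a final summary arrives.
        out.push(StreamEvent::Usage(usage));
    }
}
```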
I've put all the changes into a single PR, but I'm open to moving them
into separate PRs if that makes review and testing easier.
Release Notes:
- N/A
- [x] Manual testing (tested with Qwen2.5 VL 32B Instruct (free), Llama
4 Scout (free), and Llama 4 Maverick (free)). Llama models have some
issues in the write profile due to one of the built-in tools' schemas,
so I tested them with the minimal profile.
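For context, OpenRouter accepts images in the OpenAI-style content-parts format; a minimal sketch of an image-bearing message (built with `serde_json` for illustration):

```rust
use serde_json::json;

fn main() {
    // The image travels as a data URL inside an `image_url` content part,
    // alongside the ordinary text part.
    let message = json!({
        "role": "user",
        "content": [
            { "type": "text", "text": "What is in this image?" },
            {
                "type": "image_url",
                "image_url": { "url": "data:image/png;base64,iVBORw0..." }
            }
        ]
    });
    println!("{message}");
}
```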
Release Notes:
- Add image support to OpenRouter models
---------
Signed-off-by: Umesh Yadav <umesh4257@gmail.com>
Co-authored-by: Ben Brandt <benjamin.j.brandt@gmail.com>
This pull request adds full integration with OpenRouter, allowing users
to access a wide variety of language models through a single API key.
**Implementation Details:**
* **Provider Registration:** Registers OpenRouter as a new language
model provider within the application's model registry. This includes UI
for API key authentication, token counting, streaming completions, and
tool-call handling.
* **Dedicated Crate:** Adds a new `open_router` crate to manage
interactions with the OpenRouter HTTP API, including model discovery and
streaming helpers.
* **UI & Configuration:** Extends workspace manifests, the settings
schema, icons, and default configurations to surface the OpenRouter
provider and its settings within the UI.
* **Readability:** Reformats JSON arrays within the settings files for
improved readability.
**Design Decisions & Discussion Points:**
* **Code Reuse:** I leveraged much of the existing logic from the
`openai` provider integration due to the significant similarities
between the OpenAI and OpenRouter API specifications.
* **Default Model:** I set the default model to `openrouter/auto`. This
model automatically routes user prompts to the most suitable underlying
model on OpenRouter, providing a convenient starting point (see the
request sketch after this list).
* **Model Population Strategy:**
* <strike>I've implemented dynamic population of available models by
querying the OpenRouter API upon initialization.
* Currently, this involves three separate API calls: one for all models,
one for tool-use models, and one for models good at programming.
* The data from the tool-use API call sets a `tool_use` flag for
relevant models.
* The data from the programming models API call is used to sort the
list, prioritizing coding-focused models in the dropdown.</strike>
* <strike>**Feedback Welcome:** I acknowledge this multi-call approach
is API-intensive. I am open to feedback and alternative implementation
suggestions if the team believes this can be optimized.</strike>
* **Update: this has now been simplified to a single API call.**
* **UI/UX Considerations:**
* <strike>Authentication Method: Currently, I've implemented the
standard API key input in settings, similar to other providers like
OpenAI/Anthropic. However, OpenRouter also supports OAuth 2.0 with PKCE.
This could offer a potentially smoother, more integrated setup
experience for users (e.g., clicking a button to authorize instead of
copy-pasting a key). Should we prioritize implementing OAuth PKCE now,
or perhaps add it as an alternative option later?</strike> (PKCE is not
straightforward, so I'm skipping it for now; we can add support for it
later.)
* <strike>To visually distinguish models better suited for programming,
I've considered adding a marker (e.g., `</>` or `🧠`) next to their
names. Thoughts on this proposal?</strike> (This would require changes
and discussion across model providers, which doesn't fall under the
scope of the current PR.)
* OpenRouter offers 300+ models. The current implementation loads all of
them. **Feedback Needed:** Should we refine this list or implement more
sophisticated filtering/categorization for better usability?
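As a reference for the default-model choice above, a minimal sketch of a chat-completion request body for OpenRouter's OpenAI-compatible endpoint (illustrative, built with `serde_json`):

```rust
use serde_json::json;

fn main() {
    // POST body for https://openrouter.ai/api/v1/chat/completions.
    // `openrouter/auto` lets OpenRouter pick a suitable underlying model.
    let body = json!({
        "model": "openrouter/auto",
        "messages": [{ "role": "user", "content": "Hello!" }],
        "stream": true
    });
    println!("{body}");
}
```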
**Motivation:**
This integration directly addresses one of the most highly upvoted
feature requests/discussions within the Zed community. Adding OpenRouter
support significantly expands the range of AI models accessible to
users.
I welcome feedback from the Zed team on this implementation and the
design choices made. I am eager to refine this feature and make it
available to users.
ISSUES: https://github.com/zed-industries/zed/discussions/16576
Release Notes:
- Added support for OpenRouter as a language model provider.
---------
Signed-off-by: Umesh Yadav <umesh4257@gmail.com>
Co-authored-by: Marshall Bowers <git@maxdeviant.com>