Compare commits

3 commits on `agent-docs`:

- 1e809aa190
- 6866528943
- d156fd4c09
@@ -4,12 +4,14 @@ Welcome to Zed's documentation.

 This is built on push to `main` and published automatically to [https://zed.dev/docs](https://zed.dev/docs).

-To preview the docs locally you will need to install [mdBook](https://rust-lang.github.io/mdBook/) (`cargo install mdbook`) and then run:
+To preview the docs locally you will need to install [mdBook](https://rust-lang.github.io/mdBook/) (`cargo install mdbook@0.4.40`) and then run:

 ```sh
 mdbook serve docs
 ```

+It's important to note the version number above. For an unknown reason, as of 2025-04-23, running 0.4.48 will cause odd URL behavior that breaks docs.
+
 Before committing, verify that the docs are formatted in the way prettier expects with:

 ```
@@ -21,8 +21,8 @@ enable = false
 "/ruby.html" = "/docs/languages/ruby.html"
 "/python.html" = "/docs/languages/python.html"
 "/adding-new-languages.html" = "/docs/extensions/languages.html"
-"/language-model-integration.html" = "/docs/assistant/assistant.html"
-"/assistant.html" = "/docs/assistant/assistant.html"
+# "/language-model-integration.html" = "/docs/agent/assistant.html"
+# "/assistant.html" = "/docs/agent/assistant.html"
 "/developing-zed.html" = "/docs/development.html"
 "/conversations.html" = "/community-links"
@@ -37,18 +37,26 @@
 - [Environment Variables](./environment.md)
 - [REPL](./repl.md)

-# Assistant
+# Agent

-- [Overview](./assistant/assistant.md)
-- [Configuration](./assistant/configuration.md)
-- [Assistant Panel](./assistant/assistant-panel.md)
-- [Contexts](./assistant/contexts.md)
-- [Inline Assistant](./assistant/inline-assistant.md)
-- [Commands](./assistant/commands.md)
-- [Prompts](./assistant/prompting.md)
-- [Context Servers](./assistant/context-servers.md)
-- [Model Context Protocol](./assistant/model-context-protocol.md)
-- [Model Improvement](./model-improvement.md)
+- [Overview](./agent/assistant.md)
+- [Subscription](./agent/subscription.md)
+- [Plans and Usage](./agent/plans-and-usage.md)
+- [Billing](./agent/billing.md)
+- [Models](./agent/models.md)
+- [Configuration](./agent/configuration.md)
+- [Custom API Keys](./agent/custom-api-keys.md)
+- [Product](./agent/product.md)
+- [Assistant Panel](./agent/assistant-panel.md)
+- [Inline Assistant](./agent/inline-assist.md)
+- [Contexts](./agent/contexts.md)
+- [Commands](./agent/commands.md)
+- [Prompts](./agent/prompting.md)
+- [Enhancing the Agent](./agent/enhancing.md)
+- [Context Servers](./agent/context-servers.md)
+- [Model Context Protocol](./agent/model-context-protocol.md)
+- [Privacy and Security](./agent/privacy-and-security.md)
+- [Model Improvement](./agent/model-improvement.md)

 # Extensions
@@ -6,7 +6,7 @@ This section covers various aspects of the Assistant:

 - [Assistant Panel](./assistant-panel.md): Create and collaboratively edit new chats, and manage interactions with language models.
-- [Inline Assistant](./inline-assistant.md): Discover how to use the Assistant to power inline transformations directly within your code editor and terminal.
+- [Inline Assist](./inline-assist.md): Discover how to use the Inline Assist feature to power inline transformations directly within your code editor and terminal.
 - [Providers & Configuration](./configuration.md): Configure the Assistant, and set up different language model providers like Anthropic, OpenAI, Ollama, LM Studio, Google Gemini, and GitHub Copilot Chat.
docs/src/agent/billing.md (new file, 35 lines)

@@ -0,0 +1,35 @@
# Billing

We use Stripe as our billing and payments provider. All Pro plans require payment via credit card. For invoice-based billing, a Business plan is required. Contact sales@zed.dev for more details.

## Settings {#settings}

You can access billing settings at /account. Clicking [button] will navigate you to Stripe's secure portal, where you can update all billing-related settings and configuration.

## Billing Cycles {#billing-cycles}

Zed is billed on a monthly basis, based on the date you initially subscribe. We'll also bill for additional prompts used beyond your plan's prompt limit, if usage exceeds $20 before month end. See [usage-based pricing](./plans-and-usage.md#ubp) for more.

## Invoice History {#invoice-history}

You can access your invoice history by navigating to /account and clicking [button]. From Stripe's secure portal, you can download all current and historical invoices.

## Updating Billing Information {#updating-billing-info}

You can update your payment method, company name, address, and tax information through the billing portal. We use Stripe as our payment processor to ensure secure transactions. Please note that changes to billing information will **only** affect future invoices - **we cannot modify historical invoices**.

## Cancellation and Refunds {#cancel-refund}

You can cancel your subscription directly through the billing portal using the "Cancel subscription" button. Your access will continue until the end of your current billing period.

You can self-serve a refund by going to the billing portal and clicking the "Cancel subscription" button. Our self-serve refund policy is as follows:

**EU, UK, or Turkey customers**

Eligible for a refund if you cancel your subscription within 14 days of purchase.

**All other customers (US + rest of world)**

Refundable within 24 hours after purchase.

If you're not in the window of self-serve refunds, reach out at billing-support@zed.dev and we'll be happy to assist you.
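The self-serve refund windows described in the billing page can be summarized in a short sketch (illustrative only; this is not Zed's billing code, and the region codes are assumptions):

```python
from datetime import datetime, timedelta

# Refund windows from the policy above: 14 days for EU/UK/Turkey
# customers, 24 hours everywhere else. Region handling here is a
# deliberate simplification.
EXTENDED_WINDOW_REGIONS = {"EU", "UK", "TR"}

def refund_window(region: str) -> timedelta:
    if region in EXTENDED_WINDOW_REGIONS:
        return timedelta(days=14)
    return timedelta(hours=24)

def self_serve_refund_eligible(region: str, purchased_at: datetime,
                               now: datetime) -> bool:
    # Eligible while "now" is still inside the window after purchase.
    return now - purchased_at <= refund_window(region)
```

Outside these windows, the page directs customers to billing-support@zed.dev instead.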
docs/src/agent/configuration.md (new file, 165 lines)

@@ -0,0 +1,165 @@
# Configuring the Assistant

Here's a bird's-eye view of all the configuration options available in Zed's Assistant:

- Configure Custom API Keys for LLM Providers
  - [Custom API Keys](./custom-api-keys.md)
- Advanced configuration options
  - [Configuring Endpoints](#custom-endpoint)
  - [Configuring Timeouts](#provider-timeout)
  - [Configuring Models](#default-model)
  - [Configuring Feature-specific Models](#feature-specific-models)
  - [Configuring Alternative Models for Inline Assists](#alternative-assists)
  - [Common Panel Settings](#common-panel-settings)
  - [General Configuration Example](#general-example)

## Configure Custom API Keys for LLM Providers {#configure-custom-api-keys}

See [Configuring Custom API Keys](./custom-api-keys.md).

## Advanced Configuration {#advanced-configuration}

### Custom Endpoints {#custom-endpoint}

You can use a custom API endpoint for different providers, as long as it's compatible with the provider's API structure.

To do so, add the following to your Zed `settings.json`:

```json
{
  "language_models": {
    "some-provider": {
      "api_url": "http://localhost:11434"
    }
  }
}
```

Where `some-provider` can be any of the following values: `anthropic`, `google`, `ollama`, `openai`.

### Configuring Models {#default-model}

Zed's hosted LLM service sets `claude-3-7-sonnet-latest` as the default model. However, you can change it either via the model dropdown in the Assistant Panel's bottom-left corner or by manually editing the `default_model` object in your settings:

```json
{
  "assistant": {
    "version": "2",
    "default_model": {
      "provider": "zed.dev",
      "model": "gpt-4o"
    }
  }
}
```

#### Feature-specific Models {#feature-specific-models}

> Currently only available in [Preview](https://zed.dev/releases/preview).

Zed allows you to configure different models for specific features. This provides flexibility to use more powerful models for certain tasks while using faster or more efficient models for others.

If a feature-specific model is not set, it will fall back to the default model, which is the one you set in the Agent Panel.

You can configure the following feature-specific models:

- Thread summary model: used for generating thread summaries
- Inline Assist model: used for the Inline Assist feature
- Commit message model: used for generating Git commit messages

Example configuration:

```json
{
  "assistant": {
    "version": "2",
    "default_model": {
      "provider": "zed.dev",
      "model": "claude-3-7-sonnet"
    },
    "inline_assistant_model": {
      "provider": "anthropic",
      "model": "claude-3-5-sonnet"
    },
    "commit_message_model": {
      "provider": "openai",
      "model": "gpt-4o-mini"
    },
    "thread_summary_model": {
      "provider": "google",
      "model": "gemini-2.0-flash"
    }
  }
}
```

### Configuring Alternative Models for Inline Assists {#alternative-assists}

You can configure additional models that will be used to perform inline assists in parallel. When you do this, the Inline Assist UI will surface controls to cycle between the alternatives generated by each model. The models you specify here are always used in _addition_ to your default model. For example, the following configuration will generate two outputs for every assist: one with Claude 3.5 Sonnet, and one with GPT-4o.

```json
{
  "assistant": {
    "default_model": {
      "provider": "zed.dev",
      "model": "claude-3-5-sonnet"
    },
    "inline_alternatives": [
      {
        "provider": "zed.dev",
        "model": "gpt-4o"
      }
    ],
    "version": "2"
  }
}
```

## Common Panel Settings {#common-panel-settings}

| key            | type    | default | description                                                                            |
| -------------- | ------- | ------- | -------------------------------------------------------------------------------------- |
| enabled        | boolean | true    | Setting this to `false` will completely disable the assistant                           |
| button         | boolean | true    | Show the assistant icon in the status bar                                               |
| dock           | string  | "right" | The default dock position for the assistant panel. Can be "left", "right", or "bottom"  |
| default_height | number  | null    | The pixel height of the assistant panel when docked to the bottom                       |
| default_width  | number  | null    | The pixel width of the assistant panel when docked to the left or right                 |

## General Configuration Example {#general-example}

```json
{
  "assistant": {
    "enabled": true,
    "default_model": {
      "provider": "zed.dev",
      "model": "claude-3-7-sonnet"
    },
    "editor_model": {
      "provider": "openai",
      "model": "gpt-4o"
    },
    "inline_assistant_model": {
      "provider": "anthropic",
      "model": "claude-3-5-sonnet"
    },
    "commit_message_model": {
      "provider": "openai",
      "model": "gpt-4o-mini"
    },
    "thread_summary_model": {
      "provider": "google",
      "model": "gemini-1.5-flash"
    },
    "version": "2",
    "button": true,
    "default_width": 480,
    "dock": "right"
  }
}
```
@@ -2,7 +2,7 @@

 Contexts are like conversations in most assistant-like tools. A context is a collaborative tool for sharing information between you, your project, and the assistant/model.

-The model can reference content from your active context in the assistant panel, but also elsewhere, like the inline assistant.
+The model can reference content from your active context in the assistant panel, but also elsewhere, like Inline Assist.

 ### Saving and Loading Contexts
@@ -1,24 +1,16 @@
-# Configuring the Assistant
+# Configuring Custom API Keys

-Here's a bird's-eye view of all the configuration options available in Zed's Assistant:
+While Zed offers hosted versions of models through our various plans, we're always happy to support users wanting to supply their own API keys for LLM providers.

-- Configure LLM Providers
-  - [Zed AI (Configured by default when signed in)](#zed-ai)
 - Supported LLM Providers
   - [Anthropic](#anthropic)
   - [GitHub Copilot Chat](#github-copilot-chat)
   - [Google AI](#google-ai)
   - [Ollama](#ollama)
   - [OpenAI](#openai)
   - [DeepSeek](#deepseek)
   - [OpenAI API Compatible](#openai-api-compatible)
   - [LM Studio](#lmstudio)
-- Advanced configuration options
-  - [Configuring Endpoints](#custom-endpoint)
-  - [Configuring Timeouts](#provider-timeout)
-  - [Configuring Models](#default-model)
-  - [Configuring Feature-specific Models](#feature-specific-models)
-  - [Configuring Alternative Models for Inline Assists](#alternative-assists)
-  - [Common Panel Settings](#common-panel-settings)
-  - [General Configuration Example](#general-example)

 ## Providers {#providers}
@@ -26,13 +18,9 @@ To access the Assistant configuration view, run `assistant: show configuration`

 Below you can find all the supported providers available so far.

-### Zed AI {#zed-ai}
-
-A hosted service providing convenient and performant support for AI-enabled coding in Zed, powered by Anthropic's Claude 3.5 Sonnet and accessible just by signing in.
-
 ### Anthropic {#anthropic}

-You can use Claude 3.5 Sonnet via [Zed AI](#zed-ai) for free. To use other Anthropic models you will need to configure it by providing your own API key.
+You can use Anthropic models with the Zed assistant by choosing them via the model dropdown in the assistant panel.

 1. Sign up for Anthropic and [create an API key](https://console.anthropic.com/settings/keys)
 2. Make sure that your Anthropic account has credits
@@ -251,7 +239,7 @@ The Zed Assistant comes pre-configured to use the latest version for common mode

 Custom models will be listed in the model dropdown in the assistant panel. You can also modify the `api_url` to use a custom endpoint if needed.

-### OpenAI API Compatible
+### OpenAI API Compatible {#openai-api-compatible}

 Zed supports using OpenAI-compatible APIs by specifying a custom `endpoint` and `available_models` for the OpenAI provider.
@@ -293,150 +281,3 @@ Example configuration for using X.ai Grok with Zed:
 ```

 Tip: Set [LM Studio as a login item](https://lmstudio.ai/docs/advanced/headless#run-the-llm-service-on-machine-login) to automate running the LM Studio server.
(removed lines: the "## Advanced Configuration", "## Common Panel Settings", and "## General Configuration Example" sections, which now live in docs/src/agent/configuration.md above)
docs/src/agent/enhancing.md (new file, 1 line)

@@ -0,0 +1 @@
# Enhancing the Agent
docs/src/agent/inline-assist.md (new file, 44 lines)

@@ -0,0 +1,44 @@
# Inline Assist

## Using Inline Assist

You can use `ctrl-enter` to open Inline Assist nearly anywhere you can enter text: editors, the assistant panel, the prompt library, channel notes, and even within the terminal panel.

Inline Assist allows you to send the current selection (or the current line) to a model and modify the selection with the model's response.

You can also perform multiple generation requests in parallel by pressing `ctrl-enter` with multiple cursors, or by pressing `ctrl-enter` with a selection that spans multiple excerpts in a multibuffer.

Inline Assist pulls its context from the assistant panel, allowing you to provide additional instructions or rules for code transformations.

> **Note**: Inline Assist sees the entire active context from the assistant panel. This means the assistant panel's context editor becomes one of the most powerful tools for shaping the results of Inline Assist.

## Using Prompts & Commands

While you can't directly use slash commands (and by extension, the `/prompt` command to include prompts) with Inline Assist, you can use them in the active context in the assistant panel.

A common workflow when using Inline Assist is to create a context in the assistant panel, add the desired context through text, prompts, and commands, and then use Inline Assist to generate and apply transformations.

### Example Recipe: Fixing Errors with Inline Assist

1. Create a new chat in the assistant panel.
2. Use the `/diagnostic` command to add current diagnostics to the context, or use the `/terminal` command to add the current terminal output to the context (maybe a panic, error, or log).
3. Use Inline Assist to generate a fix for the error.

## Prefilling Prompts

To create a custom keybinding that prefills a prompt, add a binding like the following to your keymap:

```json
[
  {
    "context": "Editor && mode == full",
    "bindings": {
      "ctrl-shift-enter": [
        "assistant::InlineAssist",
        { "prompt": "Build a snake game" }
      ]
    }
  }
]
```
docs/src/agent/models.md (new file, 32 lines)

@@ -0,0 +1,32 @@
# Models

Zed's plans offer hosted versions of major LLMs, generally with higher rate limits than individual API keys. We're working hard to expand the models supported by Zed's subscription offerings, so please check back often.

| Model             | Provider  | Max Mode | Context Window | Price per Prompt | Price per Request |
| ----------------- | --------- | -------- | -------------- | ---------------- | ----------------- |
| Claude 3.5 Sonnet | Anthropic | ❌       | 120k           | $0.04            | N/A               |
| Claude 3.7 Sonnet | Anthropic | ❌       | 120k           | $0.04            | N/A               |
| Claude 3.7 Sonnet | Anthropic | ✅       | 200k           | N/A              | $0.05             |

## Usage {#usage}

The models above can be used with the prompts included in your plan. For models not marked with ["Max Mode"](#max-mode), each prompt is counted against the monthly limit of your plan. If you've exceeded your limit for the month and are on a paid plan, you can enable usage-based pricing to continue using models for the rest of the month. See [Plans and Usage](./plans-and-usage.md) for more information.

## Max Mode {#max-mode}

In Max Mode, we enable models to use [large context windows](#context-windows), unlimited tool calls, and other capabilities for expanded reasoning, allowing an unfettered agentic experience. Because of the increased cost to Zed, each subsequent request beyond the initial user prompt in Max Mode models is counted as a prompt for metering. In addition, usage-based pricing per request is slightly more expensive for Max Mode models than usage-based pricing per prompt for regular models.

## Context Windows {#context-windows}

A context window is the maximum span of text and code an LLM can consider at once, including both the input prompt and the output generated by the model.

In [Max Mode](#max-mode), we increase the context window size to give models enhanced reasoning capabilities.

Each Agent thread in Zed maintains its own context window. The more prompts, attached files, and responses included in a session, the larger the context window grows.

For best results, it's recommended you take a purpose-based approach to Agent thread management, starting a new thread for each unique task.

## Tool Calls {#tool-calls}

Models can use tools to interface with your code. In [Max Mode](#max-mode), models can use an unlimited number of tools per prompt, with each tool call counting as a prompt for metering purposes. For non-Max Mode models, you'll need to interact with the model every 25 tool calls to continue, at which point a new prompt is counted against your plan limit.

*We need a list of tools here for when we launch, with a summary of what they do. Maybe in a new page.*
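The metering rules described on the models page can be sketched roughly as follows. This is an illustration of the stated policy under a simplified counting model, not Zed's actual metering code:

```python
# Count billable prompts for a single user prompt, per the rules above:
# - regular models: the user prompt counts once, plus one continuation
#   prompt for each block of 25 tool calls;
# - Max Mode models: the initial user prompt counts once, and each
#   subsequent request also counts as a prompt.
def billable_prompts(max_mode: bool, requests: int, tool_calls: int) -> int:
    if max_mode:
        # Initial prompt plus each subsequent request.
        return 1 + max(requests - 1, 0)
    # Initial prompt plus one continuation per 25 tool calls.
    return 1 + tool_calls // 25
```

For example, a regular-model prompt that triggers 25 tool calls meters as 2 prompts, while a Max Mode prompt that spawns 5 requests meters as 5.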
docs/src/agent/plans-and-usage.md (new file, 32 lines)

@@ -0,0 +1,32 @@
# Plans and Usage

To view your current usage, you can visit your account at zed.dev/account. You'll also see some usage meters in-product when you're nearing the limit for your plan or trial.

## Available Plans {#plans}

- Personal (details below)
- Trial (details below)
- Pro (details below)
- Business (details below)

For costs and more information on pricing, visit Zed's pricing page. Please note that if you're interested in just using Zed as the world's fastest editor, with no AI or subscription features, you can always do so for free, without [authentication](link to Joseph's auth page).

## Usage {#usage}

A `prompt` in Zed is an input from the user, initiated on pressing enter, composed of one or many `requests`. A `prompt` can be initiated from the Agent panel, or via Inline Assist.

A `request` in Zed is a response to a `prompt`, plus any tool calls that are initiated as part of that response. There may be one `request` per `prompt`, or many.

Most models offered by Zed are metered per prompt. Some models that use large context windows and unlimited tool calls (["Max Mode"](./models.md#max-mode)) count each individual request within a prompt against your prompt limit, since the agentic work spawned by the prompt is expensive to support. See [Models](./models.md) for a list of which models are metered by request.

Plans come with a set amount of prompts included, with the number varying depending on the plan you've selected.

## Usage-Based Pricing {#ubp}

You may opt in to usage-based pricing for prompts that exceed what is included in your paid plan from [your account page](/account).

Usage-based pricing is only available with a paid plan, and is exclusively opt-in. From the dashboard, you can toggle usage-based pricing for usage exceeding your paid plan. You can also configure a spend limit in USD. Once the spend limit is hit, we'll stop any further usage until your prompt limit resets.

We will bill for additional prompts when you've made prompts totaling $20, or when your billing date occurs, whichever comes first.

Cost per request for each model can be found on the [Models](./models.md) page.

## Business Usage {#business-usage}

Email sales@zed.dev with any questions on business plans, metering, and usage-based pricing.
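The usage-based pricing gating described above (paid plan required, explicit opt-in, hard stop at the spend limit) reduces to a small check. This is a sketch of the stated policy with hypothetical names, not Zed's implementation:

```python
# Decide whether an overage prompt may proceed, per the rules above:
# usage-based pricing requires a paid plan and explicit opt-in, and
# further usage stops once the configured spend limit is reached.
def overage_allowed(paid_plan: bool, opted_in: bool,
                    spend_usd: float, spend_limit_usd: float) -> bool:
    if not (paid_plan and opted_in):
        return False
    return spend_usd < spend_limit_usd
```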
docs/src/agent/privacy-and-security.md (new file, 3 lines)

@@ -0,0 +1,3 @@
# Privacy and Security

*To be completed*
docs/src/agent/product.md (new file, 1 line)

@@ -0,0 +1 @@
# Product
@@ -19,7 +19,7 @@ Here are some tips for using prompts effectively:

 The Prompt Library is an interface for writing and managing prompts. Like other text-driven UIs in Zed, it is a full editor with syntax highlighting, keyboard shortcuts, etc.

-You can use the inline assistant right in the prompt editor, allowing you to automate and rewrite prompts.
+You can use Inline Assist right in the prompt editor, allowing you to automate and rewrite prompts.

 ### Opening the Prompt Library
@@ -131,7 +131,7 @@ By using nested prompts, you can create modular and reusable prompt components t

 ### Prompt Templates

-Zed uses prompt templates to power internal assistant features, like the terminal assistant, or the content prompt used in the inline assistant.
+Zed uses prompt templates to power internal assistant features, like the terminal assistant, or the content prompt used in Inline Assist.

 Zed has the following internal prompt templates:
docs/src/agent/subscription.md (new file, 1 line)

@@ -0,0 +1 @@
# Subscription
@@ -1,44 +0,0 @@
(deleted file: the former "# Inline Assistant" page; its content now lives in docs/src/agent/inline-assist.md above, with "the inline assistant" renamed to "Inline Assist")
@@ -298,4 +298,4 @@ You should be able to sign-in to Supermaven by clicking on the Supermaven icon i

 ## See also

-You may also use the Assistant Panel or the Inline Assistant to interact with language models; see [the assistant documentation](assistant/assistant.md) for more information.
+You may also use the Assistant Panel or Inline Assist to interact with language models; see [the assistant documentation](assistant/assistant.md) for more information.
@@ -146,7 +146,7 @@ These commands open new panes or jump to specific panes.

 ### In insert mode

-The following commands help you bring up Zed's completion menu, request a suggestion from GitHub Copilot, or open the inline AI assistant without leaving insert mode.
+The following commands help you bring up Zed's completion menu, request a suggestion from GitHub Copilot, or open [Inline Assist](./agent/inline-assist.md) without leaving insert mode.

 | Command | Default Shortcut |
 | ---------------------------------------------------------------------------- | ---------------- |