Compare commits


2 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Jakub Konka | 370a418fb7 | git: Dedup paths needing status update in the job | 2025-11-26 13:28:57 +01:00 |
| Jakub Konka | 78879876c1 | git: Log job queue | 2025-11-26 13:28:05 +01:00 |
1178 changed files with 43697 additions and 84740 deletions


@@ -1,55 +0,0 @@
# Phase 2: Explore Repository
You are analyzing a codebase to understand its structure before reviewing documentation impact.
## Objective
Produce a structured overview of the repository to inform subsequent documentation analysis.
## Instructions
1. **Identify Primary Languages and Frameworks**
- Scan for Cargo.toml, package.json, or other manifest files
- Note the primary language(s) and key dependencies
2. **Map Documentation Structure**
- This project uses **mdBook** (https://rust-lang.github.io/mdBook/)
- Documentation is in `docs/src/`
- Table of contents: `docs/src/SUMMARY.md` (mdBook format: https://rust-lang.github.io/mdBook/format/summary.html)
- Style guide: `docs/.rules`
- Agent guidelines: `docs/AGENTS.md`
- Formatting: Prettier (config in `docs/.prettierrc`)
3. **Identify Build and Tooling**
- Note build systems (cargo, npm, etc.)
- Identify documentation tooling (mdbook, etc.)
4. **Output Format**
Produce a JSON summary:
```json
{
"primary_language": "Rust",
"frameworks": ["GPUI"],
"documentation": {
"system": "mdBook",
"location": "docs/src/",
"toc_file": "docs/src/SUMMARY.md",
"toc_format": "https://rust-lang.github.io/mdBook/format/summary.html",
"style_guide": "docs/.rules",
"agent_guidelines": "docs/AGENTS.md",
"formatter": "prettier",
"formatter_config": "docs/.prettierrc",
"custom_preprocessor": "docs_preprocessor (handles {#kb action::Name} syntax)"
},
"key_directories": {
"source": "crates/",
"docs": "docs/src/",
"extensions": "extensions/"
}
}
```
## Constraints
- Read-only: Do not modify any files
- Focus on structure, not content details
- Complete within 2 minutes
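
As a rough illustration of the scan in steps 1 and 3, a minimal shell sketch (the command choice, search depth, and excluded paths are assumptions, not part of the prompt):

```sh
# Locate language manifests near the repository root, skipping build output.
find . -maxdepth 3 \( -name Cargo.toml -o -name package.json \) \
  -not -path '*/node_modules/*' -not -path '*/target/*'

# Confirm the documentation files listed in step 2 are present.
ls docs/src/SUMMARY.md docs/.rules docs/AGENTS.md docs/.prettierrc 2>/dev/null
```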


@@ -1,57 +0,0 @@
# Phase 3: Analyze Changes
You are analyzing code changes to understand their nature and scope.
## Objective
Produce a clear, neutral summary of what changed in the codebase.
## Input
You will receive:
- List of changed files from the triggering commit/PR
- Repository structure from Phase 2
## Instructions
1. **Categorize Changed Files**
- Source code (which crates/modules)
- Configuration
- Tests
- Documentation (already existing)
- Other
2. **Analyze Each Change**
- Review diffs for files likely to impact documentation
- Focus on: public APIs, settings, keybindings, commands, user-visible behavior
3. **Identify What Did NOT Change**
- Note stable interfaces or behaviors
- Important for avoiding unnecessary documentation updates
4. **Output Format**
Produce a markdown summary:
```markdown
## Change Analysis
### Changed Files Summary
| Category | Files | Impact Level |
| --- | --- | --- |
| Source - [crate] | file1.rs, file2.rs | High/Medium/Low |
| Settings | settings.json | Medium |
| Tests | test_*.rs | None |
### Behavioral Changes
- **[Feature/Area]**: Description of what changed from user perspective
- **[Feature/Area]**: Description...
### Unchanged Areas
- [Area]: Confirmed no changes to [specific behavior]
### Files Requiring Deeper Review
- `path/to/file.rs`: Reason for deeper review
```
## Constraints
- Read-only: Do not modify any files
- Neutral tone: Describe what changed, not whether it's good/bad
- Do not propose documentation changes yet
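
A minimal shell sketch of the categorization in step 1, assuming the changed-file list arrives as a plain text file (the file name and the exact buckets here are illustrative):

```sh
# Bucket each changed path into the categories used by the summary table.
while read -r path; do
  case "$path" in
    docs/src/*)                            echo "docs:    $path" ;;
    crates/*/tests/*|*_test.rs|*tests.rs)  echo "tests:   $path" ;;
    crates/*)                              echo "source:  $path" ;;
    *.json|*.toml|*.yml)                   echo "config:  $path" ;;
    *)                                     echo "other:   $path" ;;
  esac
done < changed_files.txt
```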


@@ -1,76 +0,0 @@
# Phase 4: Plan Documentation Impact
You are determining whether and how documentation should be updated based on code changes.
## Objective
Produce a structured documentation plan that will guide Phase 5 execution.
## Documentation System
This is an **mdBook** site (https://rust-lang.github.io/mdBook/):
- `docs/src/SUMMARY.md` defines book structure per https://rust-lang.github.io/mdBook/format/summary.html
- If adding new pages, they MUST be added to SUMMARY.md
- Use `{#kb action::ActionName}` syntax for keybindings (custom preprocessor expands these)
- Prettier formatting (80 char width) will be applied automatically
## Input
You will receive:
- Change analysis from Phase 3
- Repository structure from Phase 2
- Documentation guidelines from `docs/AGENTS.md`
## Instructions
1. **Review AGENTS.md**
- Load and apply all rules from `docs/AGENTS.md`
- Respect scope boundaries (in-scope vs out-of-scope)
2. **Evaluate Documentation Impact**
For each behavioral change from Phase 3:
- Does existing documentation cover this area?
- Is the documentation now inaccurate or incomplete?
- Classify per AGENTS.md "Change Classification" section
3. **Identify Specific Updates**
For each required update:
- Exact file path
- Specific section or heading
- Type of change (update existing, add new, deprecate)
- Description of the change
4. **Flag Uncertainty**
Explicitly mark:
- Assumptions you're making
- Areas where human confirmation is needed
- Ambiguous requirements
5. **Output Format**
Use the exact format specified in `docs/AGENTS.md` Phase 4 section:
```markdown
## Documentation Impact Assessment
### Summary
Brief description of code changes analyzed.
### Documentation Updates Required: [Yes/No]
### Planned Changes
#### 1. [File Path]
- **Section**: [Section name or "New section"]
- **Change Type**: [Update/Add/Deprecate]
- **Reason**: Why this change is needed
- **Description**: What will be added/modified
### Uncertainty Flags
- [ ] [Description of any assumptions or areas needing confirmation]
### No Changes Needed
- [List files reviewed but not requiring updates, with brief reason]
```
## Constraints
- Read-only: Do not modify any files
- Conservative: When uncertain, flag for human review rather than planning changes
- Scoped: Only plan changes that trace directly to code changes from Phase 3
- No scope expansion: Do not plan "improvements" unrelated to triggering changes
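
Because new pages must be registered in SUMMARY.md, a planned "Add" entry can be sanity-checked with a one-liner like the following (the page name is a hypothetical example):

```sh
# Flag plans that add a page without also updating the mdBook table of contents.
grep -qF 'visual-customization.md' docs/src/SUMMARY.md \
  || echo "Plan must include a SUMMARY.md entry for the new page"
```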


@@ -1,67 +0,0 @@
# Phase 5: Apply Documentation Plan
You are executing a pre-approved documentation plan for an **mdBook** documentation site.
## Objective
Implement exactly the changes specified in the documentation plan from Phase 4.
## Documentation System
- **mdBook**: https://rust-lang.github.io/mdBook/
- **SUMMARY.md**: Follows mdBook format (https://rust-lang.github.io/mdBook/format/summary.html)
- **Prettier**: Will be run automatically after this phase (80 char line width)
- **Custom preprocessor**: Use `{#kb action::ActionName}` for keybindings instead of hardcoding
## Input
You will receive:
- Documentation plan from Phase 4
- Documentation guidelines from `docs/AGENTS.md`
- Style rules from `docs/.rules`
## Instructions
1. **Validate Plan**
- Confirm all planned files are within scope per AGENTS.md
- Verify no out-of-scope files are targeted
2. **Execute Each Planned Change**
For each item in "Planned Changes":
- Navigate to the specified file
- Locate the specified section
- Apply the described change
- Follow style rules from `docs/.rules`
3. **Style Compliance**
Every edit must follow `docs/.rules`:
- Second person, present tense
- No hedging words ("simply", "just", "easily")
- Proper keybinding format (`Cmd+Shift+P`)
- Settings Editor first, JSON second
- Correct terminology (folder not directory, etc.)
4. **Preserve Context**
- Maintain surrounding content structure
- Keep consistent heading levels
- Preserve existing cross-references
## Constraints
- Execute ONLY changes listed in the plan
- Do not discover new documentation targets
- Do not make stylistic improvements outside planned sections
- Do not expand scope beyond what Phase 4 specified
- If a planned change cannot be applied (file missing, section not found), skip and note it
## Output
After applying changes, output a summary:
```markdown
## Applied Changes
### Successfully Applied
- `path/to/file.md`: [Brief description of change]
### Skipped (Could Not Apply)
- `path/to/file.md`: [Reason - e.g., "Section not found"]
### Warnings
- [Any issues encountered during application]
```
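
The automatic Prettier pass that follows this phase can be approximated locally with something like the following (invoking Prettier through npx is an assumption; CI installs it globally):

```sh
# Format the mdBook sources, then verify nothing is left unformatted.
cd docs
npx prettier --write src/
npx prettier --check src/
```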


@@ -1,54 +0,0 @@
# Phase 6: Summarize Changes
You are generating a summary of documentation updates for PR review.
## Objective
Create a clear, reviewable summary of all documentation changes made.
## Input
You will receive:
- Applied changes report from Phase 5
- Original change analysis from Phase 3
- Git diff of documentation changes
## Instructions
1. **Gather Change Information**
- List all modified documentation files
- Identify the corresponding code changes that triggered each update
2. **Generate Summary**
Use the format specified in `docs/AGENTS.md` Phase 6 section:
```markdown
## Documentation Update Summary
### Changes Made
| File | Change | Related Code |
| --- | --- | --- |
| docs/src/path.md | Brief description | PR #123 or commit SHA |
### Rationale
Brief explanation of why these updates were made, linking back to the triggering code changes.
### Review Notes
- Items reviewers should pay special attention to
- Any uncertainty flags from Phase 4 that were addressed
- Assumptions made during documentation
```
3. **Add Context for Reviewers**
- Highlight any changes that might be controversial
- Note if any planned changes were skipped and why
- Flag areas where reviewer expertise is especially needed
## Output Format
The summary should be suitable for:
- PR description body
- Commit message (condensed version)
- Team communication
## Constraints
- Read-only (documentation changes already applied in Phase 5)
- Factual: Describe what was done, not justify why it's good
- Complete: Account for all changes, including skipped items


@@ -1,67 +0,0 @@
# Phase 7: Commit and Open PR
You are creating a git branch, committing documentation changes, and opening a PR.
## Objective
Package documentation updates into a reviewable pull request.
## Input
You will receive:
- Summary from Phase 6
- List of modified files
## Instructions
1. **Create Branch**
```sh
git checkout -b docs/auto-update-{date}
```
Use format: `docs/auto-update-YYYY-MM-DD` or `docs/auto-update-{short-sha}`
2. **Stage and Commit**
- Stage only documentation files in `docs/src/`
- Do not stage any other files
Commit message format:
```
docs: auto-update documentation for [brief description]
[Summary from Phase 6, condensed]
Triggered by: [commit SHA or PR reference]
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
```
3. **Push Branch**
```sh
git push -u origin docs/auto-update-{date}
```
4. **Create Pull Request**
Use the Phase 6 summary as the PR body.
PR Title: `docs: [Brief description of documentation updates]`
Labels (if available): `documentation`, `automated`
Base branch: `main`
## Constraints
- Do NOT auto-merge
- Do NOT request specific reviewers (let CODEOWNERS handle it)
- Do NOT modify files outside `docs/src/`
- If no changes to commit, exit gracefully with message "No documentation changes to commit"
## Output
```markdown
## PR Created
- **Branch**: docs/auto-update-{date}
- **PR URL**: https://github.com/zed-industries/zed/pull/XXXX
- **Status**: Ready for review
### Commit
- SHA: {commit-sha}
- Files: {count} documentation files modified
```
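
Taken together, the steps above amount to roughly the following sequence (the summary file path and the bracketed placeholders are assumptions to be filled in at run time):

```sh
BRANCH="docs/auto-update-$(date +%Y-%m-%d)"
git checkout -b "$BRANCH"

# Stage only documentation sources, per the constraints above.
git add docs/src/
git commit \
  -m "docs: auto-update documentation for [brief description]" \
  -m "[Summary from Phase 6, condensed]" \
  -m "Triggered by: [commit SHA or PR reference]" \
  -m "Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>"

git push -u origin "$BRANCH"

# Open the PR with the Phase 6 summary as its body; do not auto-merge.
gh pr create --base main \
  --title "docs: [Brief description of documentation updates]" \
  --body-file phase6-summary.md
```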

.github/ISSUE_TEMPLATE/1.bug-report.yml

@@ -0,0 +1,70 @@
name: Report an issue
description: Report an issue with Zed.
type: Bug
body:
- type: markdown
attributes:
value: |
Feature requests should be opened in [discussions](https://github.com/zed-industries/zed/discussions/new/choose).
Before opening a new issue, please do a [search](https://github.com/zed-industries/zed/issues) of existing issues and :+1: upvote the existing issue instead. This will help us maintain a proper signal-to-noise ratio.
If you need help with your own project, you can ask a question in our [Discord Support Forums](https://discord.com/invite/zedindustries).
- type: textarea
attributes:
label: Reproduction steps
description: A step-by-step description of how to reproduce the issue from a **clean Zed install**. Any code must be sufficient to reproduce (make sure to include context!). Include code as text, not just as a screenshot. **Issues with insufficient detail may be summarily closed**.
placeholder: |
1. Start Zed
2. Click X
3. Y will happen
validations:
required: true
- type: textarea
attributes:
label: Current vs. Expected behavior
description: |
A clear and concise description of the current behavior (screenshots, videos) vs. what you expected the behavior to be.
**Skipping this/failure to provide complete information will result in the issue being closed.**
placeholder: "Based on my reproduction steps above, when I click X, I expect this to happen, but instead Y happens."
validations:
required: true
- type: textarea
attributes:
label: If applicable, attach your Zed log file to this issue.
description: |
Open the command palette in Zed, then type `zed: open log` to see the last 1000 lines. Or type `zed: reveal log in file manager` in the command palette to reveal the log file itself.
value: |
<details><summary>Zed.log</summary>
<!-- Paste your log inside the code block. -->
```log
```
</details>
validations:
required: false
- type: textarea
attributes:
label: If applicable, provide details about your model provider
placeholder: |
- Provider: (Anthropic via ZedPro, Anthropic via API key, Copilot Chat, Mistral, OpenAI, etc.)
- Model Name: (Claude Sonnet 4.5, Gemini 3 Pro, GPT-5)
- Mode: (Agent Panel, Inline Assistant, Terminal Assistant or Text Threads)
- Other details (ACPs, MCPs, other settings, etc.):
validations:
required: false
- type: textarea
attributes:
label: Zed version and system specs
description: |
Open the command palette in Zed, then type “zed: copy system specs into clipboard”. **Skipping this/failure to provide complete information will result in the issue being closed**.
placeholder: |
Zed: v0.215.0 (Zed Nightly bfe141ea79aa4984028934067ba75c48d99136ae)
OS: macOS 15.1
Memory: 36 GiB
Architecture: aarch64
validations:
required: true


@@ -1,115 +0,0 @@
name: Report a bug
description: Report a problem with Zed.
type: Bug
labels: "state:needs triage"
body:
- type: markdown
attributes:
value: |
Is this bug already reported? Upvote to get it noticed faster. [Here's the search](https://github.com/zed-industries/zed/issues). Upvote means giving it a :+1: reaction.
Feature request? Please open in [discussions](https://github.com/zed-industries/zed/discussions/new/choose) instead.
Just have a question or need support? Welcome to [Discord Support Forums](https://discord.com/invite/zedindustries).
- type: textarea
attributes:
label: Reproduction steps
description: A step-by-step description of how to reproduce the bug from a **clean Zed install**. The more context you provide, the easier it is to find and fix the problem fast.
placeholder: |
1. Start Zed
2. Click X
validations:
required: true
- type: textarea
attributes:
label: Current vs. Expected behavior
description: |
Current behavior (screenshots, videos, etc. are appreciated), vs. what you expected the behavior to be.
placeholder: |
Current behavior: <screenshot with an arrow> The icon is blue. Expected behavior: The icon should be red because this is what the setting is documented to do.
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed version and system specs
description: |
Open the command palette in Zed, then type “zed: copy system specs into clipboard”.
placeholder: |
Zed: v0.215.0 (Zed Nightly bfe141ea79aa4984028934067ba75c48d99136ae)
OS: macOS 15.1
Memory: 36 GiB
Architecture: aarch64
validations:
required: true
- type: textarea
attributes:
label: Attach Zed log file
description: |
Open the command palette in Zed, then type `zed: open log` to see the last 1000 lines. Or type `zed: reveal log in file manager` in the command palette to reveal the log file itself.
value: |
<details><summary>Zed.log</summary>
<!-- Paste your log inside the code block. -->
```log
```
</details>
validations:
required: false
- type: textarea
attributes:
label: Relevant Zed settings
description: |
Open the command palette in Zed, then type “zed: open settings file” and copy/paste any relevant (e.g., LSP-specific) settings.
value: |
<details><summary>settings.json</summary>
<!-- Paste your settings inside the code block. -->
```json
```
</details>
validations:
required: false
- type: textarea
attributes:
label: Relevant Keymap
description: |
Open the command palette in Zed, then type “zed: open keymap file” and copy/paste the file's contents.
value: |
<details><summary>keymap.json</summary>
<!-- Paste your keymap file inside the code block. -->
```json
```
</details>
validations:
required: false
- type: textarea
attributes:
label: (for AI issues) Model provider details
placeholder: |
- Provider: (Anthropic via ZedPro, Anthropic via API key, Copilot Chat, Mistral, OpenAI, etc.)
- Model Name: (Claude Sonnet 4.5, Gemini 3 Pro, GPT-5)
- Mode: (Agent Panel, Inline Assistant, Terminal Assistant or Text Threads)
- Other details (ACPs, MCPs, other settings, etc.):
validations:
required: false
- type: dropdown
attributes:
label: If you are using WSL on Windows, what flavor of Linux are you using?
multiple: false
options:
- Arch Linux
- Ubuntu
- Fedora
- Mint
- Pop!_OS
- NixOS
- Other


@@ -1,23 +1,32 @@
name: Report a crash
description: Zed is crashing or freezing or hanging.
type: Crash
labels: "state:needs triage"
body:
- type: textarea
attributes:
label: Reproduction steps
description: A step-by-step description of how to reproduce the crash from a **clean Zed install**. The more context you provide, the easier it is to find and fix the problem fast.
description: A step-by-step description of how to reproduce the crash from a **clean Zed install**. **Be verbose**. **Issues with insufficient detail may be summarily closed**.
placeholder: |
1. Start Zed
2. Perform an action
3. Zed crashes
validations:
required: true
- type: textarea
attributes:
label: Current vs. Expected behavior
description: |
Go into depth about what actions you're performing in Zed to trigger the crash. If Zed crashes before it loads any windows, make sure to mention that. Again, **be verbose**.
**Skipping this/failure to provide complete information will result in the issue being closed.**
placeholder: "Based on my reproduction steps above, when I perform said action, I expect this to happen, but instead Zed crashes."
validations:
required: true
- type: textarea
attributes:
label: Zed version and system specs
description: |
Open the command palette in Zed, then type “zed: copy system specs into clipboard”.
Open the command palette in Zed, then type “zed: copy system specs into clipboard”. **Skipping this/failure to provide complete information will result in the issue being closed**.
placeholder: |
Zed: v0.215.0 (Zed Nightly bfe141ea79aa4984028934067ba75c48d99136ae)
OS: macOS 15.1
@@ -27,7 +36,7 @@ body:
required: true
- type: textarea
attributes:
label: Attach Zed log file
label: If applicable, attach your Zed log file to this issue
description: |
Open the command palette in Zed, then type `zed: open log` to see the last 1000 lines. Or type `zed: reveal log in file manager` in the command palette to reveal the log file itself.
value: |

.github/ISSUE_TEMPLATE/99_other.yml

@@ -0,0 +1,19 @@
name: Other [Staff Only]
description: Zed Staff Only
body:
- type: textarea
attributes:
label: Summary
value: |
<!-- Please insert a one line summary of the issue below -->
SUMMARY_SENTENCE_HERE
### Description
IF YOU DO NOT WORK FOR ZED INDUSTRIES DO NOT CREATE ISSUES WITH THIS TEMPLATE.
THEY WILL BE AUTO-CLOSED AND MAY RESULT IN YOU BEING BANNED FROM THE ZED ISSUE TRACKER.
FEATURE REQUESTS / SUPPORT REQUESTS SHOULD BE OPENED AS DISCUSSIONS:
https://github.com/zed-industries/zed/discussions/new/choose
validations:
required: true


@@ -19,18 +19,6 @@ runs:
shell: bash -euxo pipefail {0}
run: ./script/linux
- name: Install mold linker
shell: bash -euxo pipefail {0}
run: ./script/install-mold
- name: Download WASI SDK
shell: bash -euxo pipefail {0}
run: ./script/download-wasi-sdk
- name: Generate action metadata
shell: bash -euxo pipefail {0}
run: ./script/generate-action-metadata
- name: Check for broken links (in MD)
uses: lycheeverse/lychee-action@82202e5e9c2f4ef1a55a3d02563e1cb6041e5332 # v2.4.1
with:


@@ -5,27 +5,13 @@ on:
release:
types:
- published
workflow_dispatch:
inputs:
tag_name:
description: tag_name
required: true
type: string
prerelease:
description: prerelease
required: true
type: boolean
body:
description: body
type: string
default: ''
jobs:
rebuild_releases_page:
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions')
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: after_release::rebuild_releases_page::refresh_cloud_releases
run: curl -fX POST https://cloud.zed.dev/releases/refresh?expect_tag=${{ github.event.release.tag_name || inputs.tag_name }}
run: curl -fX POST https://cloud.zed.dev/releases/refresh?expect_tag=${{ github.event.release.tag_name }}
shell: bash -euxo pipefail {0}
- name: after_release::rebuild_releases_page::redeploy_zed_dev
run: npm exec --yes -- vercel@37 --token="$VERCEL_TOKEN" --scope zed-industries redeploy https://zed.dev
@@ -41,7 +27,7 @@ jobs:
- id: get-release-url
name: after_release::post_to_discord::get_release_url
run: |
if [ "${{ github.event.release.prerelease || inputs.prerelease }}" == "true" ]; then
if [ "${{ github.event.release.prerelease }}" == "true" ]; then
URL="https://zed.dev/releases/preview"
else
URL="https://zed.dev/releases/stable"
@@ -54,9 +40,9 @@ jobs:
uses: 2428392/gh-truncate-string-action@b3ff790d21cf42af3ca7579146eedb93c8fb0757
with:
stringToTruncate: |
📣 Zed [${{ github.event.release.tag_name || inputs.tag_name }}](<${{ steps.get-release-url.outputs.URL }}>) was just released!
📣 Zed [${{ github.event.release.tag_name }}](<${{ steps.get-release-url.outputs.URL }}>) was just released!
${{ github.event.release.body || inputs.body }}
${{ github.event.release.body }}
maxLength: 2000
truncationSymbol: '...'
- name: after_release::post_to_discord::discord_webhook_action
@@ -70,7 +56,7 @@ jobs:
- id: set-package-name
name: after_release::publish_winget::set_package_name
run: |
if ("${{ github.event.release.prerelease || inputs.prerelease }}" -eq "true") {
if ("${{ github.event.release.prerelease }}" -eq "true") {
$PACKAGE_NAME = "ZedIndustries.Zed.Preview"
} else {
$PACKAGE_NAME = "ZedIndustries.Zed"
@@ -82,7 +68,6 @@ jobs:
uses: vedantmgoyal9/winget-releaser@19e706d4c9121098010096f9c495a70a7518b30f
with:
identifier: ${{ steps.set-package-name.outputs.PACKAGE_NAME }}
release-tag: ${{ github.event.release.tag_name || inputs.tag_name }}
max-versions-to-keep: 5
token: ${{ secrets.WINGET_TOKEN }}
create_sentry_release:


@@ -1,132 +0,0 @@
# Generated from xtask::workflows::autofix_pr
# Rebuild with `cargo xtask workflows`.
name: autofix_pr
run-name: 'autofix PR #${{ inputs.pr_number }}'
on:
workflow_dispatch:
inputs:
pr_number:
description: pr_number
required: true
type: string
run_clippy:
description: run_clippy
type: boolean
default: 'true'
jobs:
run_autofix:
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: autofix_pr::run_autofix::checkout_pr
run: gh pr checkout ${{ inputs.pr_number }}
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::cache_rust_dependencies_namespace
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: steps::download_wasi_sdk
run: ./script/download-wasi-sdk
shell: bash -euxo pipefail {0}
- name: steps::setup_pnpm
uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2
with:
version: '9'
- name: autofix_pr::run_autofix::run_prettier_fix
run: ./script/prettier --write
shell: bash -euxo pipefail {0}
- name: autofix_pr::run_autofix::run_cargo_fmt
run: cargo fmt --all
shell: bash -euxo pipefail {0}
- name: autofix_pr::run_autofix::run_cargo_fix
if: ${{ inputs.run_clippy }}
run: cargo fix --workspace --release --all-targets --all-features --allow-dirty --allow-staged
shell: bash -euxo pipefail {0}
- name: autofix_pr::run_autofix::run_clippy_fix
if: ${{ inputs.run_clippy }}
run: cargo clippy --workspace --release --all-targets --all-features --fix --allow-dirty --allow-staged
shell: bash -euxo pipefail {0}
- id: create-patch
name: autofix_pr::run_autofix::create_patch
run: |
if git diff --quiet; then
echo "No changes to commit"
echo "has_changes=false" >> "$GITHUB_OUTPUT"
else
git diff > autofix.patch
echo "has_changes=true" >> "$GITHUB_OUTPUT"
fi
shell: bash -euxo pipefail {0}
- name: upload artifact autofix-patch
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: autofix-patch
path: autofix.patch
if-no-files-found: ignore
retention-days: '1'
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
outputs:
has_changes: ${{ steps.create-patch.outputs.has_changes }}
commit_changes:
needs:
- run_autofix
if: needs.run_autofix.outputs.has_changes == 'true'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- id: get-app-token
name: steps::authenticate_as_zippy
uses: actions/create-github-app-token@bef1eaf1c0ac2b148ee2a0a74c65fbe6db0631f1
with:
app-id: ${{ secrets.ZED_ZIPPY_APP_ID }}
private-key: ${{ secrets.ZED_ZIPPY_APP_PRIVATE_KEY }}
- name: steps::checkout_repo_with_token
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
token: ${{ steps.get-app-token.outputs.token }}
- name: autofix_pr::commit_changes::checkout_pr
run: gh pr checkout ${{ inputs.pr_number }}
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ steps.get-app-token.outputs.token }}
- name: autofix_pr::download_patch_artifact
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53
with:
name: autofix-patch
- name: autofix_pr::commit_changes::apply_patch
run: git apply autofix.patch
shell: bash -euxo pipefail {0}
- name: autofix_pr::commit_changes::commit_and_push
run: |
git commit -am "Autofix"
git push
shell: bash -euxo pipefail {0}
env:
GIT_COMMITTER_NAME: Zed Zippy
GIT_COMMITTER_EMAIL: 234243425+zed-zippy[bot]@users.noreply.github.com
GIT_AUTHOR_NAME: Zed Zippy
GIT_AUTHOR_EMAIL: 234243425+zed-zippy[bot]@users.noreply.github.com
GITHUB_TOKEN: ${{ steps.get-app-token.outputs.token }}
concurrency:
group: ${{ github.workflow }}-${{ inputs.pr_number }}
cancel-in-progress: true


@@ -30,7 +30,7 @@ jobs:
with:
clean: false
- id: get-app-token
name: steps::authenticate_as_zippy
name: cherry_pick::run_cherry_pick::authenticate_as_zippy
uses: actions/create-github-app-token@bef1eaf1c0ac2b148ee2a0a74c65fbe6db0631f1
with:
app-id: ${{ secrets.ZED_ZIPPY_APP_ID }}


@@ -23,25 +23,20 @@ jobs:
AidanV
akbxr
AlvaroParker
amtoaer
artemevsevev
bajrangCoder
bcomnes
Be-ing
blopker
bnjjj
bobbymannino
CharlesChen0823
chbk
cppcoffee
davidbarsky
davewa
ddoemonn
djsauble
errmayank
fantacell
findrakecil
FloppyDisco
gko
huacnlee
imumesh18
@@ -56,7 +51,6 @@ jobs:
marius851000
mikebronner
ognevny
playdohface
RemcoSmitsDev
romaninsh
Simek
@@ -64,14 +58,12 @@ jobs:
sourcefrog
suxiaoshao
Takk8IS
thedadams
tidely
timvermeulen
valentinegb
versecafe
vitallium
warrenjokinen
WhySoBad
ya7010
Zertsov
with:


@@ -1,40 +1,29 @@
name: "Close Stale Issues"
on:
schedule:
- cron: "0 2 * * 5"
- cron: "0 8 31 DEC *"
workflow_dispatch:
inputs:
debug-only:
description: "Run in dry-run mode (no changes made)"
type: boolean
default: false
operations-per-run:
description: "Max number of issues to process (default: 1000)"
type: number
default: 1000
jobs:
stale:
if: github.repository_owner == 'zed-industries'
runs-on: ubuntu-latest
steps:
- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: >
Hi there!
Zed development moves fast and a significant number of bugs become outdated.
If you can reproduce this bug on the latest stable Zed, please let us know by leaving a comment with the Zed version.
If the bug doesn't appear for you anymore, feel free to close the issue yourself; otherwise, the bot will close it in a couple of weeks.
Hi there! 👋
We're working to clean up our issue tracker by closing older bugs that might not be relevant anymore. If you are able to reproduce this issue in the latest version of Zed, please let us know by commenting on this issue, and it will be kept open. If you can't reproduce it, feel free to close the issue yourself. Otherwise, it will close automatically in 14 days.
Thanks for your help!
close-issue-message: "This issue was closed due to inactivity. If you're still experiencing this problem, please leave a comment with your Zed version so that we can reopen the issue."
close-issue-message: "This issue was closed due to inactivity. If you're still experiencing this problem, please open a new issue with a link to this issue."
days-before-stale: 60
days-before-close: 14
only-issue-types: "Bug,Crash"
operations-per-run: ${{ inputs.operations-per-run || 1000 }}
operations-per-run: 1000
ascending: true
enable-statistics: true
debug-only: ${{ inputs.debug-only }}
stale-issue-label: "stale"
exempt-issue-labels: "never stale"


@@ -1,264 +0,0 @@
name: Documentation Automation
on:
# push:
# branches: [main]
# paths:
# - 'crates/**'
# - 'extensions/**'
workflow_dispatch:
inputs:
pr_number:
description: 'PR number to analyze (gets full PR diff)'
required: false
type: string
trigger_sha:
description: 'Commit SHA to analyze (ignored if pr_number is set)'
required: false
type: string
permissions:
contents: write
pull-requests: write
env:
FACTORY_API_KEY: ${{ secrets.FACTORY_API_KEY }}
DROID_MODEL: claude-opus-4-5-20251101
jobs:
docs-automation:
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Install Droid CLI
id: install-droid
run: |
curl -fsSL https://app.factory.ai/cli | sh
echo "${HOME}/.local/bin" >> "$GITHUB_PATH"
echo "DROID_BIN=${HOME}/.local/bin/droid" >> "$GITHUB_ENV"
# Verify installation
"${HOME}/.local/bin/droid" --version
- name: Setup Node.js (for Prettier)
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install Prettier
run: npm install -g prettier
- name: Get changed files
id: changed
run: |
if [ -n "${{ inputs.pr_number }}" ]; then
# Get full PR diff
echo "Analyzing PR #${{ inputs.pr_number }}"
echo "source=pr" >> "$GITHUB_OUTPUT"
echo "ref=${{ inputs.pr_number }}" >> "$GITHUB_OUTPUT"
gh pr diff "${{ inputs.pr_number }}" --name-only > /tmp/changed_files.txt
elif [ -n "${{ inputs.trigger_sha }}" ]; then
# Get single commit diff
SHA="${{ inputs.trigger_sha }}"
echo "Analyzing commit $SHA"
echo "source=commit" >> "$GITHUB_OUTPUT"
echo "ref=$SHA" >> "$GITHUB_OUTPUT"
git diff --name-only "${SHA}^" "$SHA" > /tmp/changed_files.txt
else
# Default to current commit
SHA="${{ github.sha }}"
echo "Analyzing commit $SHA"
echo "source=commit" >> "$GITHUB_OUTPUT"
echo "ref=$SHA" >> "$GITHUB_OUTPUT"
git diff --name-only "${SHA}^" "$SHA" > /tmp/changed_files.txt || git diff --name-only HEAD~1 HEAD > /tmp/changed_files.txt
fi
echo "Changed files:"
cat /tmp/changed_files.txt
env:
GH_TOKEN: ${{ github.token }}
# Phase 0: Guardrails are loaded via AGENTS.md in each phase
# Phase 2: Explore Repository (Read-Only - default)
- name: "Phase 2: Explore Repository"
id: phase2
run: |
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
-f .factory/prompts/docs-automation/phase2-explore.md \
> /tmp/phase2-output.txt 2>&1 || true
echo "Repository exploration complete"
cat /tmp/phase2-output.txt
# Phase 3: Analyze Changes (Read-Only - default)
- name: "Phase 3: Analyze Changes"
id: phase3
run: |
CHANGED_FILES=$(tr '\n' ' ' < /tmp/changed_files.txt)
echo "Analyzing changes in: $CHANGED_FILES"
# Build prompt with context
cat > /tmp/phase3-prompt.md << 'EOF'
$(cat .factory/prompts/docs-automation/phase3-analyze.md)
## Context
### Changed Files
$CHANGED_FILES
### Phase 2 Output
$(cat /tmp/phase2-output.txt)
EOF
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
"$(cat .factory/prompts/docs-automation/phase3-analyze.md)
Changed files: $CHANGED_FILES" \
> /tmp/phase3-output.md 2>&1 || true
echo "Change analysis complete"
cat /tmp/phase3-output.md
# Phase 4: Plan Documentation Impact (Read-Only - default)
- name: "Phase 4: Plan Documentation Impact"
id: phase4
run: |
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
-f .factory/prompts/docs-automation/phase4-plan.md \
> /tmp/phase4-plan.md 2>&1 || true
echo "Documentation plan complete"
cat /tmp/phase4-plan.md
# Check if updates are required
if grep -q "NO_UPDATES_REQUIRED" /tmp/phase4-plan.md; then
echo "updates_required=false" >> "$GITHUB_OUTPUT"
else
echo "updates_required=true" >> "$GITHUB_OUTPUT"
fi
# Phase 5: Apply Plan (Write-Enabled with --auto medium)
- name: "Phase 5: Apply Documentation Plan"
id: phase5
if: steps.phase4.outputs.updates_required == 'true'
run: |
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
--auto medium \
-f .factory/prompts/docs-automation/phase5-apply.md \
> /tmp/phase5-report.md 2>&1 || true
echo "Documentation updates applied"
cat /tmp/phase5-report.md
# Phase 5b: Format with Prettier
- name: "Phase 5b: Format with Prettier"
id: phase5b
if: steps.phase4.outputs.updates_required == 'true'
run: |
echo "Formatting documentation with Prettier..."
cd docs && prettier --write src/
echo "Verifying Prettier formatting passes..."
cd docs && prettier --check src/
echo "Prettier formatting complete"
# Phase 6: Summarize Changes (Read-Only - default)
- name: "Phase 6: Summarize Changes"
id: phase6
if: steps.phase4.outputs.updates_required == 'true'
run: |
# Get git diff of docs
git diff docs/src/ > /tmp/docs-diff.txt || true
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
-f .factory/prompts/docs-automation/phase6-summarize.md \
> /tmp/phase6-summary.md 2>&1 || true
echo "Summary generated"
cat /tmp/phase6-summary.md
# Phase 7: Commit and Open PR
- name: "Phase 7: Create PR"
id: phase7
if: steps.phase4.outputs.updates_required == 'true'
run: |
# Check if there are actual changes
if git diff --quiet docs/src/; then
echo "No documentation changes detected"
exit 0
fi
# Configure git
git config user.name "factory-droid[bot]"
git config user.email "138933559+factory-droid[bot]@users.noreply.github.com"
# Daily batch branch - one branch per day, multiple commits accumulate
BRANCH_NAME="docs/auto-update-$(date +%Y-%m-%d)"
# Stash local changes from phase 5
git stash push -m "docs-automation-changes" -- docs/src/
# Check if branch already exists on remote
if git ls-remote --exit-code --heads origin "$BRANCH_NAME" > /dev/null 2>&1; then
echo "Branch $BRANCH_NAME exists, checking out and updating..."
git fetch origin "$BRANCH_NAME"
git checkout -B "$BRANCH_NAME" "origin/$BRANCH_NAME"
else
echo "Creating new branch $BRANCH_NAME..."
git checkout -b "$BRANCH_NAME"
fi
# Apply stashed changes
git stash pop || true
# Stage and commit
git add docs/src/
SUMMARY=$(head -50 < /tmp/phase6-summary.md)
git commit -m "docs: auto-update documentation
${SUMMARY}
Triggered by: ${{ steps.changed.outputs.source }} ${{ steps.changed.outputs.ref }}
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>"
# Push
git push -u origin "$BRANCH_NAME"
# Check if PR already exists for this branch
EXISTING_PR=$(gh pr list --head "$BRANCH_NAME" --json number --jq '.[0].number' || echo "")
if [ -n "$EXISTING_PR" ]; then
echo "PR #$EXISTING_PR already exists for branch $BRANCH_NAME, updated with new commit"
else
# Create new PR
gh pr create \
--title "docs: automated documentation update ($(date +%Y-%m-%d))" \
--body-file /tmp/phase6-summary.md \
--base main || true
echo "PR created on branch: $BRANCH_NAME"
fi
env:
GH_TOKEN: ${{ github.token }}
# Summary output
- name: "Summary"
if: always()
run: |
echo "## Documentation Automation Summary" >> "$GITHUB_STEP_SUMMARY"
echo "" >> "$GITHUB_STEP_SUMMARY"
if [ "${{ steps.phase4.outputs.updates_required }}" == "false" ]; then
echo "No documentation updates required for this change." >> "$GITHUB_STEP_SUMMARY"
elif [ -f /tmp/phase6-summary.md ]; then
cat /tmp/phase6-summary.md >> "$GITHUB_STEP_SUMMARY"
else
echo "Workflow completed. Check individual phase outputs for details." >> "$GITHUB_STEP_SUMMARY"
fi


@@ -1,148 +0,0 @@
# Generated from xtask::workflows::extension_bump
# Rebuild with `cargo xtask workflows`.
name: extension_bump
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: '1'
CARGO_INCREMENTAL: '0'
ZED_EXTENSION_CLI_SHA: 7cfce605704d41ca247e3f84804bf323f6c6caaf
on:
workflow_call:
inputs:
bump-type:
description: bump-type
type: string
default: patch
force-bump:
description: force-bump
required: true
type: boolean
secrets:
app-id:
description: The app ID used to create the PR
required: true
app-secret:
description: The app secret for the corresponding app ID
required: true
jobs:
check_bump_needed:
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions')
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
fetch-depth: 0
- id: compare-versions-check
name: extension_bump::compare_versions
run: |
CURRENT_VERSION="$(sed -n 's/version = \"\(.*\)\"/\1/p' < extension.toml)"
PR_PARENT_SHA="${{ github.event.pull_request.head.sha }}"
if [[ -n "$PR_PARENT_SHA" ]]; then
git checkout "$PR_PARENT_SHA"
elif BRANCH_PARENT_SHA="$(git merge-base origin/main origin/zed-zippy-autobump)"; then
git checkout "$BRANCH_PARENT_SHA"
else
git checkout "$(git log -1 --format=%H)"~1
fi
PARENT_COMMIT_VERSION="$(sed -n 's/version = \"\(.*\)\"/\1/p' < extension.toml)"
[[ "$CURRENT_VERSION" == "$PARENT_COMMIT_VERSION" ]] && \
echo "needs_bump=true" >> "$GITHUB_OUTPUT" || \
echo "needs_bump=false" >> "$GITHUB_OUTPUT"
echo "current_version=${CURRENT_VERSION}" >> "$GITHUB_OUTPUT"
shell: bash -euxo pipefail {0}
outputs:
needs_bump: ${{ steps.compare-versions-check.outputs.needs_bump }}
current_version: ${{ steps.compare-versions-check.outputs.current_version }}
timeout-minutes: 1
bump_extension_version:
needs:
- check_bump_needed
if: |-
(github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions') &&
(inputs.force-bump == 'true' || needs.check_bump_needed.outputs.needs_bump == 'true')
runs-on: namespace-profile-8x16-ubuntu-2204
steps:
- id: generate-token
name: extension_bump::generate_token
uses: actions/create-github-app-token@v2
with:
app-id: ${{ secrets.app-id }}
private-key: ${{ secrets.app-secret }}
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: extension_bump::install_bump_2_version
run: pip install bump2version
shell: bash -euxo pipefail {0}
- id: bump-version
name: extension_bump::bump_version
run: |
OLD_VERSION="${{ needs.check_bump_needed.outputs.current_version }}"
BUMP_FILES=("extension.toml")
if [[ -f "Cargo.toml" ]]; then
BUMP_FILES+=("Cargo.toml")
fi
bump2version --verbose --current-version "$OLD_VERSION" --no-configured-files ${{ inputs.bump-type }} "${BUMP_FILES[@]}"
if [[ -f "Cargo.toml" ]]; then
cargo update --workspace
fi
NEW_VERSION="$(sed -n 's/version = \"\(.*\)\"/\1/p' < extension.toml)"
echo "new_version=${NEW_VERSION}" >> "$GITHUB_OUTPUT"
shell: bash -euxo pipefail {0}
- name: extension_bump::create_pull_request
uses: peter-evans/create-pull-request@v7
with:
title: Bump version to ${{ steps.bump-version.outputs.new_version }}
body: This PR bumps the version of this extension to v${{ steps.bump-version.outputs.new_version }}
commit-message: Bump version to v${{ steps.bump-version.outputs.new_version }}
branch: zed-zippy-autobump
committer: zed-zippy[bot] <234243425+zed-zippy[bot]@users.noreply.github.com>
base: main
delete-branch: true
token: ${{ steps.generate-token.outputs.token }}
sign-commits: true
assignees: ${{ github.actor }}
timeout-minutes: 1
create_version_label:
needs:
- check_bump_needed
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions') && github.event_name == 'push' && github.ref == 'refs/heads/main' && needs.check_bump_needed.outputs.needs_bump == 'false'
runs-on: namespace-profile-8x16-ubuntu-2204
steps:
- id: generate-token
name: extension_bump::generate_token
uses: actions/create-github-app-token@v2
with:
app-id: ${{ secrets.app-id }}
private-key: ${{ secrets.app-secret }}
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: extension_bump::create_version_tag
uses: actions/github-script@v7
with:
script: |-
github.rest.git.createRef({
owner: context.repo.owner,
repo: context.repo.repo,
ref: 'refs/tags/v${{ needs.check_bump_needed.outputs.current_version }}',
sha: context.sha
})
github-token: ${{ steps.generate-token.outputs.token }}
timeout-minutes: 1
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true


@@ -1,43 +0,0 @@
# Generated from xtask::workflows::extension_release
# Rebuild with `cargo xtask workflows`.
name: extension_release
on:
workflow_call:
secrets:
app-id:
description: The app ID used to create the PR
required: true
app-secret:
description: The app secret for the corresponding app ID
required: true
jobs:
create_release:
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions')
runs-on: namespace-profile-8x16-ubuntu-2204
steps:
- id: generate-token
name: extension_bump::generate_token
uses: actions/create-github-app-token@v2
with:
app-id: ${{ secrets.app-id }}
private-key: ${{ secrets.app-secret }}
owner: zed-industries
repositories: extensions
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- id: get-extension-id
name: extension_release::get_extension_id
run: |
EXTENSION_ID="$(sed -n 's/id = \"\(.*\)\"/\1/p' < extension.toml)"
echo "extension_id=${EXTENSION_ID}" >> "$GITHUB_OUTPUT"
shell: bash -euxo pipefail {0}
- name: extension_release::release_action
uses: huacnlee/zed-extension-action@v2
with:
extension-name: ${{ steps.get-extension-id.outputs.extension_id }}
push-to: zed-industries/extensions
env:
COMMITTER_TOKEN: ${{ steps.generate-token.outputs.token }}


@@ -7,7 +7,12 @@ env:
CARGO_INCREMENTAL: '0'
ZED_EXTENSION_CLI_SHA: 7cfce605704d41ca247e3f84804bf323f6c6caaf
on:
workflow_call: {}
workflow_call:
inputs:
run_tests:
description: Whether the workflow should run rust tests
required: true
type: boolean
jobs:
orchestrate:
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions')
@@ -68,12 +73,12 @@ jobs:
run: cargo clippy --release --all-targets --all-features -- --deny warnings
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
if: inputs.run_tests
uses: taiki-e/install-action@nextest
- name: steps::cargo_nextest
if: inputs.run_tests
run: cargo nextest run --workspace --no-fail-fast
shell: bash -euxo pipefail {0}
env:
NEXTEST_NO_TESTS: warn
timeout-minutes: 3
check_extension:
needs:
@@ -103,7 +108,7 @@ jobs:
mkdir -p /tmp/ext-output
./zed-extension --source-dir . --scratch-dir /tmp/ext-scratch --output-dir /tmp/ext-output
shell: bash -euxo pipefail {0}
timeout-minutes: 2
timeout-minutes: 1
tests_pass:
needs:
- orchestrate


@@ -472,17 +472,11 @@ jobs:
if: startsWith(github.ref, 'refs/tags/v') && endsWith(github.ref, '-pre') && !endsWith(github.ref, '.0-pre')
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- id: get-app-token
name: steps::authenticate_as_zippy
uses: actions/create-github-app-token@bef1eaf1c0ac2b148ee2a0a74c65fbe6db0631f1
with:
app-id: ${{ secrets.ZED_ZIPPY_APP_ID }}
private-key: ${{ secrets.ZED_ZIPPY_APP_PRIVATE_KEY }}
- name: gh release edit "$GITHUB_REF_NAME" --repo=zed-industries/zed --draft=false
run: gh release edit "$GITHUB_REF_NAME" --repo=zed-industries/zed --draft=false
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ steps.get-app-token.outputs.token }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
notify_on_failure:
needs:
- upload_release_assets


@@ -13,7 +13,7 @@ jobs:
bundle_linux_aarch64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: namespace-profile-8x32-ubuntu-2004-arm-m4
env:
CARGO_INCREMENTAL: 0
@@ -56,7 +56,7 @@ jobs:
bundle_linux_x86_64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: namespace-profile-32x64-ubuntu-2004
env:
CARGO_INCREMENTAL: 0
@@ -99,7 +99,7 @@ jobs:
bundle_mac_aarch64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: self-mini-macos
env:
CARGO_INCREMENTAL: 0
@@ -145,7 +145,7 @@ jobs:
bundle_mac_x86_64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: self-mini-macos
env:
CARGO_INCREMENTAL: 0
@@ -191,7 +191,7 @@ jobs:
bundle_windows_aarch64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: self-32vcpu-windows-2022
env:
CARGO_INCREMENTAL: 0
@@ -229,7 +229,7 @@ jobs:
bundle_windows_x86_64:
if: |-
(github.event.action == 'labeled' && github.event.label.name == 'run-bundling') ||
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
(github.event.action == 'synchronize' && contains(github.event.pull_request.labels.*.name, 'run-bundling'))
runs-on: self-32vcpu-windows-2022
env:
CARGO_INCREMENTAL: 0


@@ -13,14 +13,6 @@ on:
jobs:
cron_unit_evals:
runs-on: namespace-profile-16x32-ubuntu-2204
strategy:
matrix:
model:
- anthropic/claude-sonnet-4-5-latest
- anthropic/claude-opus-4-5-latest
- google/gemini-3-pro
- openai/gpt-5
fail-fast: false
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -57,7 +49,6 @@ jobs:
OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
GOOGLE_AI_API_KEY: ${{ secrets.GOOGLE_AI_API_KEY }}
GOOGLE_CLOUD_PROJECT: ${{ secrets.GOOGLE_CLOUD_PROJECT }}
ZED_AGENT_MODEL: ${{ matrix.model }}
- name: steps::cleanup_cargo_config
if: always()
run: |


@@ -74,12 +74,9 @@ jobs:
uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2
with:
version: '9'
- name: steps::prettier
- name: ./script/prettier
run: ./script/prettier
shell: bash -euxo pipefail {0}
- name: steps::cargo_fmt
run: cargo fmt --all -- --check
shell: bash -euxo pipefail {0}
- name: ./script/check-todos
run: ./script/check-todos
shell: bash -euxo pipefail {0}
@@ -87,9 +84,12 @@ jobs:
run: ./script/check-keymaps
shell: bash -euxo pipefail {0}
- name: run_tests::check_style::check_for_typos
uses: crate-ci/typos@2d0ce569feab1f8752f1dde43cc2f2aa53236e06
uses: crate-ci/typos@80c8a4945eec0f6d464eaf9e65ed98ef085283d1
with:
config: ./typos.toml
- name: steps::cargo_fmt
run: cargo fmt --all -- --check
shell: bash -euxo pipefail {0}
timeout-minutes: 60
run_tests_windows:
needs:
@@ -353,9 +353,6 @@ jobs:
- name: steps::download_wasi_sdk
run: ./script/download-wasi-sdk
shell: bash -euxo pipefail {0}
- name: ./script/generate-action-metadata
run: ./script/generate-action-metadata
shell: bash -euxo pipefail {0}
- name: run_tests::check_docs::install_mdbook
uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08
with:
@@ -500,8 +497,6 @@ jobs:
env:
GIT_AUTHOR_NAME: Protobuf Action
GIT_AUTHOR_EMAIL: ci@zed.dev
GIT_COMMITTER_NAME: Protobuf Action
GIT_COMMITTER_EMAIL: ci@zed.dev
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -525,7 +520,6 @@ jobs:
uses: bufbuild/buf-setup-action@v1
with:
version: v1.29.0
github_token: ${{ secrets.GITHUB_TOKEN }}
- name: run_tests::check_postgres_and_protobuf_migrations::bufbuild_breaking_action
uses: bufbuild/buf-breaking-action@v1
with:

.gitignore

@@ -8,7 +8,6 @@
.DS_Store
.blob_store
.build
.claude/settings.local.json
.envrc
.flatpak-builder
.idea
@@ -36,11 +35,7 @@
DerivedData/
Packages
xcuserdata/
crates/docs_preprocessor/actions.json
# Don't commit any secrets to the repo.
.env
.env.secret.toml
# `nix build` output
/result


@@ -141,9 +141,6 @@ Uladzislau Kaminski <i@uladkaminski.com>
Uladzislau Kaminski <i@uladkaminski.com> <uladzislau_kaminski@epam.com>
Vitaly Slobodin <vitaliy.slobodin@gmail.com>
Vitaly Slobodin <vitaliy.slobodin@gmail.com> <vitaly_slobodin@fastmail.com>
Yara <davidsk@zed.dev>
Yara <git@davidsk.dev>
Yara <git@yara.blue>
Will Bradley <williambbradley@gmail.com>
Will Bradley <williambbradley@gmail.com> <will@zed.dev>
WindSoilder <WindSoilder@outlook.com>

.rules

@@ -26,12 +26,6 @@
});
```
# Timers in tests
* In GPUI tests, prefer GPUI executor timers over `smol::Timer::after(...)` when you need timeouts, delays, or to drive `run_until_parked()`:
- Use `cx.background_executor().timer(duration).await` (or `cx.background_executor.timer(duration).await` in `TestAppContext`) so the work is scheduled on GPUI's dispatcher.
- Avoid `smol::Timer::after(...)` for test timeouts when you rely on `run_until_parked()`, because it may not be tracked by GPUI's scheduler and can lead to "nothing left to run" when pumping.
# GPUI
GPUI is a UI framework which also provides primitives for state and concurrency management.


@@ -15,17 +15,15 @@ with the community to improve the product in ways we haven't thought of (or had
In particular we love PRs that are:
- Fixing or extending the docs.
- Fixing bugs.
- Small enhancements to existing features to make them work for more people (making things work on more platforms/modes/whatever).
- Fixes to existing bugs and issues.
- Small enhancements to existing features, particularly to make them work for more people.
- Small extra features, like keybindings or actions you miss from other editors or extensions.
- Part of a Community Program like [Let's Git Together](https://github.com/zed-industries/zed/issues/41541).
- Work towards shipping larger features on our roadmap.
If you're looking for concrete ideas:
- [Curated board of issues](https://github.com/orgs/zed-industries/projects/69) suitable for everyone from first-time contributors to seasoned community champions.
- [Triaged bugs with confirmed steps to reproduce](https://github.com/zed-industries/zed/issues?q=is%3Aissue%20state%3Aopen%20type%3ABug%20label%3Astate%3Areproducible).
- [Area labels](https://github.com/zed-industries/zed/labels?q=area%3A*) to browse bugs in a specific part of the product you care about (after clicking on an area label, add type:Bug to the search).
- Our [top-ranking issues](https://github.com/zed-industries/zed/issues/5393) based on votes by the community.
- Our [public roadmap](https://zed.dev/roadmap) contains a rough outline of our near-term priorities for Zed.
## Sending changes
@@ -39,17 +37,9 @@ like, sorry).
Although we will take a look, we tend to only merge about half the PRs that are
submitted. If you'd like your PR to have the best chance of being merged:
- Make sure the change is **desired**: we're always happy to accept bugfixes,
but features should be confirmed with us first if you aim to avoid wasted
effort. If there isn't already a GitHub issue for your feature with staff
confirmation that we want it, start with a GitHub discussion rather than a PR.
- Include a clear description of **what you're solving**, and why it's important.
- Include **tests**.
- If it changes the UI, attach **screenshots** or screen recordings.
- Make the PR about **one thing only**, e.g. if it's a bugfix, don't add two
features and a refactoring on top of that.
- Keep AI assistance under your judgement and responsibility: it's unlikely
we'll merge a vibe-coded PR that the author doesn't understand.
- Include a clear description of what you're solving, and why it's important to you.
- Include tests.
- If it changes the UI, attach screenshots or screen recordings.
The internal advice for reviewers is as follows:
@@ -60,9 +50,10 @@ The internal advice for reviewers is as follows:
If you need more feedback from us: the best way is to be responsive to
Github comments, or to offer up time to pair with us.
If you need help deciding how to fix a bug, or finish implementing a feature
that we've agreed we want, please open a PR early so we can discuss how to make
the change with code in hand.
If you are making a larger change, or need advice on how to finish the change
you're making, please open the PR early. We would love to help you get
things right, and it's often easier to see how to solve a problem before the
diff gets too big.
## Things we will (probably) not merge
@@ -70,11 +61,11 @@ Although there are few hard and fast rules, typically we don't merge:
- Anything that can be provided by an extension. For example a new language, or theme. For adding themes or support for a new language to Zed, check out our [docs on developing extensions](https://zed.dev/docs/extensions/developing-extensions).
- New file icons. Zed's default icon theme consists of icons that are hand-designed to fit together in a cohesive manner, please don't submit PRs with off-the-shelf SVGs.
- Features where (in our subjective opinion) the extra complexity isn't worth it for the number of people who will benefit.
- Giant refactorings.
- Non-trivial changes with no tests.
- Stylistic code changes that do not alter any app logic. Reducing allocations, removing `.unwrap()`s, fixing typos is great; making code "more readable" — maybe not so much.
- Anything that seems AI-generated without understanding the output.
- Features where (in our subjective opinion) the extra complexity isn't worth it for the number of people who will benefit.
- Anything that seems completely AI generated.
## Bird's-eye view of Zed

Cargo.lock (generated)

File diff suppressed because it is too large.


@@ -9,7 +9,6 @@ members = [
"crates/agent_servers",
"crates/agent_settings",
"crates/agent_ui",
"crates/agent_ui_v2",
"crates/ai_onboarding",
"crates/anthropic",
"crates/askpass",
@@ -33,13 +32,13 @@ members = [
"crates/cloud_api_client",
"crates/cloud_api_types",
"crates/cloud_llm_client",
"crates/cloud_zeta2_prompt",
"crates/collab",
"crates/collab_ui",
"crates/collections",
"crates/command_palette",
"crates/command_palette_hooks",
"crates/component",
"crates/component_preview",
"crates/context_server",
"crates/copilot",
"crates/crashes",
@@ -55,12 +54,11 @@ members = [
"crates/diagnostics",
"crates/docs_preprocessor",
"crates/edit_prediction",
"crates/edit_prediction_types",
"crates/edit_prediction_ui",
"crates/edit_prediction_button",
"crates/edit_prediction_context",
"crates/zeta2_tools",
"crates/editor",
"crates/eval",
"crates/eval_utils",
"crates/explorer_command_injector",
"crates/extension",
"crates/extension_api",
@@ -193,23 +191,19 @@ members = [
"crates/vercel",
"crates/vim",
"crates/vim_mode_setting",
"crates/which_key",
"crates/watch",
"crates/web_search",
"crates/web_search_providers",
"crates/workspace",
"crates/worktree",
"crates/worktree_benchmarks",
"crates/x_ai",
"crates/zed",
"crates/zed_actions",
"crates/zed_env_vars",
"crates/edit_prediction_cli",
"crates/zeta_prompt",
"crates/zeta",
"crates/zeta_cli",
"crates/zlog",
"crates/zlog_settings",
"crates/ztracing",
"crates/ztracing_macro",
#
# Extensions
@@ -246,9 +240,9 @@ action_log = { path = "crates/action_log" }
agent = { path = "crates/agent" }
activity_indicator = { path = "crates/activity_indicator" }
agent_ui = { path = "crates/agent_ui" }
agent_ui_v2 = { path = "crates/agent_ui_v2" }
agent_settings = { path = "crates/agent_settings" }
agent_servers = { path = "crates/agent_servers" }
ai = { path = "crates/ai" }
ai_onboarding = { path = "crates/ai_onboarding" }
anthropic = { path = "crates/anthropic" }
askpass = { path = "crates/askpass" }
@@ -258,6 +252,7 @@ assistant_slash_command = { path = "crates/assistant_slash_command" }
assistant_slash_commands = { path = "crates/assistant_slash_commands" }
audio = { path = "crates/audio" }
auto_update = { path = "crates/auto_update" }
auto_update_helper = { path = "crates/auto_update_helper" }
auto_update_ui = { path = "crates/auto_update_ui" }
aws_http_client = { path = "crates/aws_http_client" }
bedrock = { path = "crates/bedrock" }
@@ -271,12 +266,13 @@ clock = { path = "crates/clock" }
cloud_api_client = { path = "crates/cloud_api_client" }
cloud_api_types = { path = "crates/cloud_api_types" }
cloud_llm_client = { path = "crates/cloud_llm_client" }
cloud_zeta2_prompt = { path = "crates/cloud_zeta2_prompt" }
collab = { path = "crates/collab" }
collab_ui = { path = "crates/collab_ui" }
collections = { path = "crates/collections", version = "0.1.0" }
command_palette = { path = "crates/command_palette" }
command_palette_hooks = { path = "crates/command_palette_hooks" }
component = { path = "crates/component" }
component_preview = { path = "crates/component_preview" }
context_server = { path = "crates/context_server" }
copilot = { path = "crates/copilot" }
crashes = { path = "crates/crashes" }
@@ -292,7 +288,6 @@ deepseek = { path = "crates/deepseek" }
derive_refineable = { path = "crates/refineable/derive_refineable" }
diagnostics = { path = "crates/diagnostics" }
editor = { path = "crates/editor" }
eval_utils = { path = "crates/eval_utils" }
extension = { path = "crates/extension" }
extension_host = { path = "crates/extension_host" }
extensions_ui = { path = "crates/extensions_ui" }
@@ -316,9 +311,10 @@ http_client = { path = "crates/http_client" }
http_client_tls = { path = "crates/http_client_tls" }
icons = { path = "crates/icons" }
image_viewer = { path = "crates/image_viewer" }
edit_prediction_types = { path = "crates/edit_prediction_types" }
edit_prediction_ui = { path = "crates/edit_prediction_ui" }
edit_prediction = { path = "crates/edit_prediction" }
edit_prediction_button = { path = "crates/edit_prediction_button" }
edit_prediction_context = { path = "crates/edit_prediction_context" }
zeta2_tools = { path = "crates/zeta2_tools" }
inspector_ui = { path = "crates/inspector_ui" }
install_cli = { path = "crates/install_cli" }
journal = { path = "crates/journal" }
@@ -360,6 +356,8 @@ panel = { path = "crates/panel" }
paths = { path = "crates/paths" }
perf = { path = "tooling/perf" }
picker = { path = "crates/picker" }
plugin = { path = "crates/plugin" }
plugin_macros = { path = "crates/plugin_macros" }
prettier = { path = "crates/prettier" }
settings_profile_selector = { path = "crates/settings_profile_selector" }
project = { path = "crates/project" }
@@ -370,10 +368,12 @@ proto = { path = "crates/proto" }
recent_projects = { path = "crates/recent_projects" }
refineable = { path = "crates/refineable" }
release_channel = { path = "crates/release_channel" }
scheduler = { path = "crates/scheduler" }
remote = { path = "crates/remote" }
remote_server = { path = "crates/remote_server" }
repl = { path = "crates/repl" }
reqwest_client = { path = "crates/reqwest_client" }
rich_text = { path = "crates/rich_text" }
rodio = { git = "https://github.com/RustAudio/rodio", rev ="e2074c6c2acf07b57cf717e076bdda7a9ac6e70b", features = ["wav", "playback", "wav_output", "recording"] }
rope = { path = "crates/rope" }
rpc = { path = "crates/rpc" }
@@ -390,6 +390,7 @@ snippets_ui = { path = "crates/snippets_ui" }
sqlez = { path = "crates/sqlez" }
sqlez_macros = { path = "crates/sqlez_macros" }
story = { path = "crates/story" }
storybook = { path = "crates/storybook" }
streaming_diff = { path = "crates/streaming_diff" }
sum_tree = { path = "crates/sum_tree" }
supermaven = { path = "crates/supermaven" }
@@ -406,6 +407,7 @@ terminal_view = { path = "crates/terminal_view" }
text = { path = "crates/text" }
theme = { path = "crates/theme" }
theme_extension = { path = "crates/theme_extension" }
theme_importer = { path = "crates/theme_importer" }
theme_selector = { path = "crates/theme_selector" }
time_format = { path = "crates/time_format" }
title_bar = { path = "crates/title_bar" }
@@ -419,7 +421,6 @@ util_macros = { path = "crates/util_macros" }
vercel = { path = "crates/vercel" }
vim = { path = "crates/vim" }
vim_mode_setting = { path = "crates/vim_mode_setting" }
which_key = { path = "crates/which_key" }
watch = { path = "crates/watch" }
web_search = { path = "crates/web_search" }
@@ -430,18 +431,15 @@ x_ai = { path = "crates/x_ai" }
zed = { path = "crates/zed" }
zed_actions = { path = "crates/zed_actions" }
zed_env_vars = { path = "crates/zed_env_vars" }
edit_prediction = { path = "crates/edit_prediction" }
zeta_prompt = { path = "crates/zeta_prompt" }
zeta = { path = "crates/zeta" }
zlog = { path = "crates/zlog" }
zlog_settings = { path = "crates/zlog_settings" }
ztracing = { path = "crates/ztracing" }
ztracing_macro = { path = "crates/ztracing_macro" }
#
# External crates
#
agent-client-protocol = { version = "=0.9.2", features = ["unstable"] }
agent-client-protocol = { version = "0.7.0", features = ["unstable"] }
aho-corasick = "1.1"
alacritty_terminal = "0.25.1-rc1"
any_vec = "0.14"
@@ -460,15 +458,15 @@ async-task = "4.7"
async-trait = "0.1"
async-tungstenite = "0.31.0"
async_zip = { version = "0.0.18", features = ["deflate", "deflate64"] }
aws-config = { version = "1.8.10", features = ["behavior-version-latest"] }
aws-credential-types = { version = "1.2.8", features = [
aws-config = { version = "1.6.1", features = ["behavior-version-latest"] }
aws-credential-types = { version = "1.2.2", features = [
"hardcoded-credentials",
] }
aws-sdk-bedrockruntime = { version = "1.112.0", features = [
aws-sdk-bedrockruntime = { version = "1.80.0", features = [
"behavior-version-latest",
] }
aws-smithy-runtime-api = { version = "1.9.2", features = ["http-1x", "client"] }
aws-smithy-types = { version = "1.3.4", features = ["http-body-1-x"] }
aws-smithy-runtime-api = { version = "1.7.4", features = ["http-1x", "client"] }
aws-smithy-types = { version = "1.3.0", features = ["http-body-1-x"] }
backtrace = "0.3"
base64 = "0.22"
bincode = "1.2.1"
@@ -481,7 +479,6 @@ bytes = "1.0"
cargo_metadata = "0.19"
cargo_toml = "0.21"
cfg-if = "1.0.3"
chardetng = "0.1"
chrono = { version = "0.4", features = ["serde"] }
ciborium = "0.2"
circular-buffer = "1.0"
@@ -505,16 +502,17 @@ dotenvy = "0.15.0"
ec4rs = "1.1"
emojis = "0.6.1"
env_logger = "0.11"
encoding_rs = "0.8"
exec = "0.3.1"
fancy-regex = "0.16.0"
fancy-regex = "0.14.0"
fork = "0.4.0"
futures = "0.3"
futures-batch = "0.6.1"
futures-lite = "1.13"
gh-workflow = { git = "https://github.com/zed-industries/gh-workflow", rev = "09acfdf2bd5c1d6254abefd609c808ff73547b2c" }
gh-workflow = { git = "https://github.com/zed-industries/gh-workflow", rev = "3eaa84abca0778eb54272f45a312cb24f9a0b435" }
git2 = { version = "0.20.1", default-features = false }
globset = "0.4"
handlebars = "4.3"
hashbrown = "0.15.3"
heck = "0.5"
heed = { version = "0.21.0", features = ["read-txn-no-tls"] }
hex = "0.4.3"
@@ -531,7 +529,7 @@ indoc = "2"
inventory = "0.3.19"
itertools = "0.14.0"
json_dotpath = "1.1"
jsonschema = "0.37.0"
jsonschema = "0.30.0"
jsonwebtoken = "9.3"
jupyter-protocol = "0.10.0"
jupyter-websocket-client = "0.15.0"
@@ -550,6 +548,7 @@ nanoid = "0.4"
nbformat = "0.15.0"
nix = "0.29"
num-format = "0.4.4"
num-traits = "0.2"
objc = "0.2"
objc2-foundation = { version = "=0.3.1", default-features = false, features = [
"NSArray",
@@ -588,6 +587,7 @@ pet = { git = "https://github.com/microsoft/python-environment-tools.git", rev =
pet-conda = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-core = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-fs = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-pixi = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-poetry = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-reporter = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-virtualenv = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
@@ -627,6 +627,7 @@ scap = { git = "https://github.com/zed-industries/scap", rev = "4afea48c3b002197
schemars = { version = "1.0", features = ["indexmap2"] }
semver = { version = "1.0", features = ["serde"] }
serde = { version = "1.0.221", features = ["derive", "rc"] }
serde_derive = "1.0.221"
serde_json = { version = "1.0.144", features = ["preserve_order", "raw_value"] }
serde_json_lenient = { version = "0.2", features = [
"preserve_order",
@@ -640,7 +641,7 @@ shellexpand = "2.1.0"
shlex = "1.3.0"
simplelog = "0.12.2"
slotmap = "1.0.6"
smallvec = { version = "1.6", features = ["union", "const_new"] }
smallvec = { version = "1.6", features = ["union"] }
smol = "2.0"
sqlformat = "0.2"
stacksafe = "0.1"
@@ -654,7 +655,7 @@ sysinfo = "0.37.0"
take-until = "0.2.0"
tempfile = "3.20.0"
thiserror = "2.0.12"
tiktoken-rs = { git = "https://github.com/zed-industries/tiktoken-rs", rev = "2570c4387a8505fb8f1d3f3557454b474f1e8271" }
tiktoken-rs = { git = "https://github.com/zed-industries/tiktoken-rs", rev = "7249f999c5fdf9bf3cc5c288c964454e4dac0c00" }
time = { version = "0.3", features = [
"macros",
"parsing",
@@ -666,12 +667,11 @@ time = { version = "0.3", features = [
tiny_http = "0.8"
tokio = { version = "1" }
tokio-tungstenite = { version = "0.26", features = ["__rustls-tls"] }
tokio-socks = { version = "0.5.2", default-features = false, features = ["futures-io", "tokio"] }
toml = "0.8"
toml_edit = { version = "0.22", default-features = false, features = ["display", "parse", "serde"] }
tower-http = "0.4.4"
tree-sitter = { version = "0.26", features = ["wasm"] }
tree-sitter-bash = "0.25.1"
tree-sitter = { version = "0.25.10", features = ["wasm"] }
tree-sitter-bash = "0.25.0"
tree-sitter-c = "0.23"
tree-sitter-cpp = { git = "https://github.com/tree-sitter/tree-sitter-cpp", rev = "5cb9b693cfd7bfacab1d9ff4acac1a4150700609" }
tree-sitter-css = "0.23"
@@ -693,7 +693,6 @@ tree-sitter-ruby = "0.23"
tree-sitter-rust = "0.24"
tree-sitter-typescript = { git = "https://github.com/zed-industries/tree-sitter-typescript", rev = "e2c53597d6a5d9cf7bbe8dccde576fe1e46c5899" } # https://github.com/tree-sitter/tree-sitter-typescript/pull/347
tree-sitter-yaml = { git = "https://github.com/zed-industries/tree-sitter-yaml", rev = "baff0b51c64ef6a1fb1f8390f3ad6015b83ec13a" }
tracing = "0.1.40"
unicase = "2.6"
unicode-script = "0.5.7"
unicode-segmentation = "1.10"
@@ -704,7 +703,7 @@ uuid = { version = "1.1.2", features = ["v4", "v5", "v7", "serde"] }
walkdir = "2.5"
wasm-encoder = "0.221"
wasmparser = "0.221"
wasmtime = { version = "33", default-features = false, features = [
wasmtime = { version = "29", default-features = false, features = [
"async",
"demangle",
"runtime",
@@ -713,10 +712,11 @@ wasmtime = { version = "33", default-features = false, features = [
"incremental-cache",
"parallel-compilation",
] }
wasmtime-wasi = "33"
wasmtime-wasi = "29"
wax = "0.6"
which = "6.0.0"
windows-core = "0.61"
wit-component = "0.221"
yawc = "0.2.5"
zeroize = "1.8"
zstd = "0.11"
@@ -798,13 +798,20 @@ settings_macros = { opt-level = 3 }
sqlez_macros = { opt-level = 3, codegen-units = 1 }
ui_macros = { opt-level = 3 }
util_macros = { opt-level = 3 }
serde_derive = { opt-level = 3 }
quote = { opt-level = 3 }
syn = { opt-level = 3 }
proc-macro2 = { opt-level = 3 }
# proc-macros end
taffy = { opt-level = 3 }
cranelift-codegen = { opt-level = 3 }
cranelift-codegen-meta = { opt-level = 3 }
cranelift-codegen-shared = { opt-level = 3 }
resvg = { opt-level = 3 }
rustybuzz = { opt-level = 3 }
ttf-parser = { opt-level = 3 }
wasmtime-cranelift = { opt-level = 3 }
wasmtime = { opt-level = 3 }
# Build single-source-file crates with cg=1 as it helps make `cargo build` of a whole workspace a bit faster
activity_indicator = { codegen-units = 1 }
@@ -813,11 +820,12 @@ breadcrumbs = { codegen-units = 1 }
collections = { codegen-units = 1 }
command_palette = { codegen-units = 1 }
command_palette_hooks = { codegen-units = 1 }
extension_cli = { codegen-units = 1 }
feature_flags = { codegen-units = 1 }
file_icons = { codegen-units = 1 }
fsevent = { codegen-units = 1 }
image_viewer = { codegen-units = 1 }
edit_prediction_ui = { codegen-units = 1 }
edit_prediction_button = { codegen-units = 1 }
install_cli = { codegen-units = 1 }
journal = { codegen-units = 1 }
json_schema_store = { codegen-units = 1 }
@@ -832,6 +840,7 @@ project_symbols = { codegen-units = 1 }
refineable = { codegen-units = 1 }
release_channel = { codegen-units = 1 }
reqwest_client = { codegen-units = 1 }
rich_text = { codegen-units = 1 }
session = { codegen-units = 1 }
snippet = { codegen-units = 1 }
snippets_ui = { codegen-units = 1 }
@@ -864,6 +873,8 @@ unexpected_cfgs = { level = "allow" }
dbg_macro = "deny"
todo = "deny"
# This is not a style lint, see https://github.com/rust-lang/rust-clippy/pull/15454
# Remove when the lint gets promoted to `suspicious`.
declare_interior_mutable_const = "deny"
redundant_clone = "deny"


@@ -1,6 +1,6 @@
# syntax = docker/dockerfile:1.2
FROM rust:1.92-bookworm as builder
FROM rust:1.91.1-bookworm as builder
WORKDIR app
COPY . .
@@ -34,4 +34,8 @@ RUN apt-get update; \
linux-perf binutils
WORKDIR app
COPY --from=builder /app/collab /app/collab
COPY --from=builder /app/crates/collab/migrations /app/migrations
COPY --from=builder /app/crates/collab/migrations_llm /app/migrations_llm
ENV MIGRATIONS_PATH=/app/migrations
ENV LLM_DATABASE_MIGRATIONS_PATH=/app/migrations_llm
ENTRYPOINT ["/app/collab"]


@@ -9,7 +9,7 @@ Welcome to Zed, a high-performance, multiplayer code editor from the creators of
### Installation
On macOS, Linux, and Windows you can [download Zed directly](https://zed.dev/download) or install Zed via your local package manager ([macOS](https://zed.dev/docs/installation#macos)/[Linux](https://zed.dev/docs/linux#installing-via-a-package-manager)/[Windows](https://zed.dev/docs/windows#package-managers)).
On macOS, Linux, and Windows you can [download Zed directly](https://zed.dev/download) or [install Zed via your local package manager](https://zed.dev/docs/linux#installing-via-a-package-manager).
Other platforms are not yet available:
@@ -20,6 +20,7 @@ Other platforms are not yet available:
- [Building Zed for macOS](./docs/src/development/macos.md)
- [Building Zed for Linux](./docs/src/development/linux.md)
- [Building Zed for Windows](./docs/src/development/windows.md)
- [Running Collaboration Locally](./docs/src/development/local-collaboration.md)
### Contributing


@@ -28,7 +28,7 @@ ai
= @rtfeldman
audio
= @yara-blue
= @dvdsk
crashes
= @p1n3appl3
@@ -43,9 +43,8 @@ design
= @danilo-leal
docs
= @miguelraz
= @probably-neb
= @yeskunall
= @miguelraz
extension
= @kubkon
@@ -53,10 +52,6 @@ extension
git
= @cole-miller
= @danilo-leal
= @yara-blue
= @kubkon
= @Anthony-Eid
= @cameron1024
gpui
= @Anthony-Eid
@@ -76,7 +71,7 @@ languages
linux
= @cole-miller
= @yara-blue
= @dvdsk
= @p1n3appl3
= @probably-neb
= @smitbarmase
@@ -92,7 +87,7 @@ multi_buffer
= @SomeoneToIgnore
pickers
= @yara-blue
= @dvdsk
= @p1n3appl3
= @SomeoneToIgnore
@@ -104,9 +99,6 @@ settings_ui
= @danilo-leal
= @probably-neb
sum_tree
= @Veykril
support
= @miguelraz
@@ -118,9 +110,6 @@ terminal
= @kubkon
= @Veykril
text
= @Veykril
vim
= @ConradIrwin
= @dinocosta
@@ -130,4 +119,3 @@ vim
windows
= @localcc
= @reflectronic
= @Veykril


@@ -1,5 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M13.3996 5.59852C13.3994 5.3881 13.3439 5.18144 13.2386 4.99926C13.1333 4.81709 12.9819 4.66581 12.7997 4.56059L8.59996 2.16076C8.41755 2.05544 8.21063 2 8 2C7.78937 2 7.58246 2.05544 7.40004 2.16076L3.20033 4.56059C3.0181 4.66581 2.86674 4.81709 2.76144 4.99926C2.65613 5.18144 2.60059 5.3881 2.60037 5.59852V10.3982C2.60059 10.6086 2.65613 10.8153 2.76144 10.9975C2.86674 11.1796 3.0181 11.3309 3.20033 11.4361L7.40004 13.836C7.58246 13.9413 7.78937 13.9967 8 13.9967C8.21063 13.9967 8.41755 13.9413 8.59996 13.836L12.7997 11.4361C12.9819 11.3309 13.1333 11.1796 13.2386 10.9975C13.3439 10.8153 13.3994 10.6086 13.3996 10.3982V5.59852Z" stroke="white" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M2.78033 4.99857L7.99998 7.99836L13.2196 4.99857" stroke="white" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M8 13.9979V7.99829" stroke="white" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

Before: 1.1 KiB


@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="none"><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M14 11.333A6 6 0 0 0 4 6.867l-1 .9"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.333" d="M2 4.667v4h4"/><path fill="#000" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M8 12a.667.667 0 1 0 0-1.333A.667.667 0 0 0 8 12Z"/></svg>

After: 467 B


@@ -1,5 +1 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M2 13H5" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M11 13H14" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M11.5 8.5L8 12M8 12L4.5 8.5M8 12L8 3" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="none"><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M3.333 10 8 14.667 12.667 10M8 5.333v9.334"/><path fill="#000" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M8 2.667a.667.667 0 1 0 0-1.334.667.667 0 0 0 0 1.334Z"/></svg>

Before: 443 B
After: 374 B


@@ -1,5 +1 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M4.5 6.5L8 3M8 3L11.5 6.5M8 3V12" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M2 13H5" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M11 13H14" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="none"><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M3.333 6 8 1.333 12.667 6M8 10.667V1.333"/><path fill="#000" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M8 13.333a.667.667 0 1 1 0 1.334.667.667 0 0 1 0-1.334Z"/></svg>

Before: 439 B
After: 373 B


@@ -1,5 +1 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M2 11.333C2.00118 10.1752 2.33729 9.04258 2.96777 8.07159C3.59826 7.10059 4.49621 6.33274 5.55331 5.86064C6.61041 5.38853 7.78152 5.23235 8.9254 5.41091C10.0693 5.58947 11.1371 6.09516 12 6.86698L13 7.76698" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M14 4.66699V8.66699H10" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M7 13H10" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" fill="none"><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M2 11.333a6 6 0 0 1 10-4.466l1 .9"/><path stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.333" d="M14 4.667v4h-4"/><path fill="#000" stroke="#000" stroke-linecap="round" stroke-linejoin="round" stroke-width="1.2" d="M8 12a.667.667 0 1 1 0-1.333A.667.667 0 0 1 8 12Z"/></svg>

Before: 627 B
After: 468 B


@@ -1,10 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_1_2)">
<path fill-rule="evenodd" clip-rule="evenodd" d="M4.58747 12.9359C4.35741 12.778 4.17558 12.625 4.17558 12.625L10.092 2.37749C10.092 2.37749 10.3355 2.46782 10.5367 2.56426C10.7903 2.6858 11.0003 2.80429 11.0003 2.80429C13.8681 4.46005 14.8523 8.13267 13.1965 11.0005C11.5407 13.8684 7.8681 14.8525 5.00023 13.1967C5.00023 13.1967 4.79936 13.0812 4.58747 12.9359ZM10.5003 3.67032L5.50023 12.3307C7.89013 13.7105 10.9506 12.8904 12.3305 10.5006C13.7102 8.1106 12.8902 5.05015 10.5003 3.67032ZM3.07664 11.4314C2.87558 11.1403 2.804 11.0006 2.804 11.0006C1.77036 9.20524 1.69456 6.92215 2.80404 5.00046C3.91353 3.07877 5.92859 2.00291 8.0003 2.00036C8.0003 2.00036 8.28 1.99964 8.51289 2.02194C8.86375 2.05556 9.09702 2.10083 9.09702 2.10083L3.43905 11.9007C3.43905 11.9007 3.30482 11.7618 3.07664 11.4314ZM7.40178 3.03702C5.89399 3.22027 4.48727 4.08506 3.67008 5.50052C2.85288 6.9159 2.80733 8.56653 3.40252 9.96401L7.40178 3.03702Z" fill="black" stroke="black" stroke-width="0.1"/>
</g>
<defs>
<clipPath id="clip0_1_2">
<rect width="16" height="16" fill="white"/>
</clipPath>
</defs>
</svg>

Before: 1.2 KiB


@@ -1,8 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M4 2V10" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M12 6C12.5304 6 13.0391 5.78929 13.4142 5.41421C13.7893 5.03914 14 4.53043 14 4C14 3.46957 13.7893 2.96086 13.4142 2.58579C13.0391 2.21071 12.5304 2 12 2C11.4696 2 10.9609 2.21071 10.5858 2.58579C10.2107 2.96086 10 3.46957 10 4C10 4.53043 10.2107 5.03914 10.5858 5.41421C10.9609 5.78929 11.4696 6 12 6Z" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M4 14C4.53043 14 5.03914 13.7893 5.41421 13.4142C5.78929 13.0391 6 12.5304 6 12C6 11.4696 5.78929 10.9609 5.41421 10.5858C5.03914 10.2107 4.53043 10 4 10C3.46957 10 2.96086 10.2107 2.58579 10.5858C2.21071 10.9609 2 11.4696 2 12C2 12.5304 2.21071 13.0391 2.58579 13.4142C2.96086 13.7893 3.46957 14 4 14Z" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M10 4C8.4087 4 6.88258 4.63214 5.75736 5.75736C4.63214 6.88258 4 8.4087 4 10" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M12 10V14" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M14 12H10" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

Before: 1.4 KiB


@@ -1,11 +0,0 @@
<svg width="28" height="28" viewBox="0 0 28 28" fill="none" id="svg1378540956_510">
<g clip-path="url(#svg1378540956_510_clip0_1_1506)" transform="translate(4, 4) scale(0.857)">
<path d="M17.0547 0.372066H8.52652L-0.00165176 8.90024V17.4284H8.52652V8.90024H17.0547V0.372066Z" fill="#1A1C20"></path>
<path d="M10.1992 27.6279H18.7274L27.2556 19.0998V10.5716H18.7274V19.0998H10.1992V27.6279Z" fill="#1A1C20"></path>
</g>
<defs>
<clipPath id="svg1378540956_510_clip0_1_1506">
<rect width="27.2559" height="27.2559" fill="white" transform="translate(0 0.37207)"></rect>
</clipPath>
</defs>
</svg>

Before: 593 B


@@ -1,5 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M6.2224 1.32129L5.2036 4.41875C5.15145 4.57727 5.06282 4.72134 4.94481 4.83934C4.82681 4.95735 4.68274 5.04598 4.52422 5.09813L1.42676 6.11693L4.52422 7.13574C4.68274 7.18788 4.82681 7.27652 4.94481 7.39453C5.06282 7.51253 5.15145 7.6566 5.2036 7.81512L6.2224 10.9126L7.24121 7.81512C7.29335 7.6566 7.38199 7.51253 7.5 7.39453C7.618 7.27652 7.76207 7.18788 7.9206 7.13574L11.018 6.11693L7.9206 5.09813C7.76207 5.04598 7.618 4.95735 7.5 4.83934C7.38199 4.72134 7.29335 4.57727 7.24121 4.41875L6.2224 1.32129Z" fill="black" fill-opacity="0.15" stroke="black" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M9.76681 13.9373C9.76681 13.6048 9.95997 13.3083 10.5126 12.7917L11.8872 11.4978C12.3545 11.0575 12.5612 10.77 12.5612 10.4735C12.5612 10.1411 12.3185 9.91643 11.9681 9.91643C11.6986 9.91643 11.5054 10.0242 11.2673 10.3208C10.9933 10.6622 10.7956 10.779 10.4946 10.779C10.0633 10.779 9.75781 10.4915 9.75781 10.0916C9.75781 9.21559 10.8136 8.44287 12.067 8.44287C13.3743 8.44287 14.3492 9.22907 14.3492 10.2848C14.3492 10.9452 13.9988 11.5742 13.2845 12.2077L12.2242 13.1511V13.223H13.7292C14.2503 13.223 14.5738 13.5015 14.5738 13.9552C14.5738 14.4089 14.2593 14.6785 13.7292 14.6785H10.5979C10.1037 14.6785 9.76681 14.3775 9.76681 13.9373Z" fill="black"/>
<path d="M12.8994 1.32129V4.00482M11.5576 2.66302H14.2412" stroke="black" stroke-opacity="0.75" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

Before: 1.5 KiB

File diff suppressed because it is too large.

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -10,12 +10,12 @@
"context": "Workspace",
"bindings": {
// "shift shift": "file_finder::Toggle"
},
}
},
{
"context": "Editor && vim_mode == insert",
"bindings": {
// "j k": "vim::NormalBefore"
},
},
}
}
]
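
The keymap hunks above and below mostly differ in trailing commas and closing-brace placement; the underlying files all share one shape: a JSON array of objects, each with an optional "context" predicate and a "bindings" map from a keystroke sequence to either an action name or an [action, arguments] pair. A minimal sketch of that structure, using action names that appear in this diff but with arbitrary, illustrative keystrokes:

```json
[
  {
    // No "context": these bindings apply anywhere unless a more specific entry overrides them.
    "bindings": {
      "ctrl-t": "file_finder::Toggle"
    }
  },
  {
    // Scoped to a full-size editor; the array form passes arguments to the action.
    "context": "Editor && mode == full",
    "bindings": {
      "ctrl-1": ["pane::ActivateItem", 0],
      "ctrl-r": ["buffer_search::Deploy", { "replace_enabled": true }]
    }
  }
]
```

Since the files carry `//` comments (and, on one side of these hunks, trailing commas), they are evidently read as lenient JSON rather than strict JSON.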


@@ -4,15 +4,15 @@
"bindings": {
"ctrl-shift-f5": "workspace::Reload", // window:reload
"ctrl-k ctrl-n": "workspace::ActivatePreviousPane", // window:focus-next-pane
"ctrl-k ctrl-p": "workspace::ActivateNextPane", // window:focus-previous-pane
},
"ctrl-k ctrl-p": "workspace::ActivateNextPane" // window:focus-previous-pane
}
},
{
"context": "Editor",
"bindings": {
"ctrl-k ctrl-u": "editor::ConvertToUpperCase", // editor:upper-case
"ctrl-k ctrl-l": "editor::ConvertToLowerCase", // editor:lower-case
},
"ctrl-k ctrl-l": "editor::ConvertToLowerCase" // editor:lower-case
}
},
{
"context": "Editor && mode == full",
@@ -32,8 +32,8 @@
"ctrl-down": "editor::MoveLineDown", // editor:move-line-down
"ctrl-\\": "workspace::ToggleLeftDock", // tree-view:toggle
"ctrl-shift-m": "markdown::OpenPreviewToTheSide", // markdown-preview:toggle
"ctrl-r": "outline::Toggle", // symbols-view:toggle-project-symbols
},
"ctrl-r": "outline::Toggle" // symbols-view:toggle-project-symbols
}
},
{
"context": "BufferSearchBar",
@@ -41,8 +41,8 @@
"f3": ["editor::SelectNext", { "replace_newest": true }], // find-and-replace:find-next
"shift-f3": ["editor::SelectPrevious", { "replace_newest": true }], //find-and-replace:find-previous
"ctrl-f3": "search::SelectNextMatch", // find-and-replace:find-next-selected
"ctrl-shift-f3": "search::SelectPreviousMatch", // find-and-replace:find-previous-selected
},
"ctrl-shift-f3": "search::SelectPreviousMatch" // find-and-replace:find-previous-selected
}
},
{
"context": "Workspace",
@@ -50,8 +50,8 @@
"ctrl-\\": "workspace::ToggleLeftDock", // tree-view:toggle
"ctrl-k ctrl-b": "workspace::ToggleLeftDock", // tree-view:toggle
"ctrl-t": "file_finder::Toggle", // fuzzy-finder:toggle-file-finder
"ctrl-r": "project_symbols::Toggle", // symbols-view:toggle-project-symbols
},
"ctrl-r": "project_symbols::Toggle" // symbols-view:toggle-project-symbols
}
},
{
"context": "Pane",
@@ -65,8 +65,8 @@
"ctrl-6": ["pane::ActivateItem", 5], // tree-view:open-selected-entry-in-pane-6
"ctrl-7": ["pane::ActivateItem", 6], // tree-view:open-selected-entry-in-pane-7
"ctrl-8": ["pane::ActivateItem", 7], // tree-view:open-selected-entry-in-pane-8
"ctrl-9": ["pane::ActivateItem", 8], // tree-view:open-selected-entry-in-pane-9
},
"ctrl-9": ["pane::ActivateItem", 8] // tree-view:open-selected-entry-in-pane-9
}
},
{
"context": "ProjectPanel",
@@ -75,8 +75,8 @@
"backspace": ["project_panel::Trash", { "skip_prompt": false }],
"ctrl-x": "project_panel::Cut", // tree-view:cut
"ctrl-c": "project_panel::Copy", // tree-view:copy
"ctrl-v": "project_panel::Paste", // tree-view:paste
},
"ctrl-v": "project_panel::Paste" // tree-view:paste
}
},
{
"context": "ProjectPanel && not_editing",
@@ -90,7 +90,7 @@
"d": "project_panel::Duplicate", // tree-view:duplicate
"home": "menu::SelectFirst", // core:move-to-top
"end": "menu::SelectLast", // core:move-to-bottom
"shift-a": "project_panel::NewDirectory", // tree-view:add-folder
},
},
"shift-a": "project_panel::NewDirectory" // tree-view:add-folder
}
}
]


@@ -8,8 +8,8 @@
"ctrl-shift-i": "agent::ToggleFocus",
"ctrl-l": "agent::ToggleFocus",
"ctrl-shift-l": "agent::ToggleFocus",
"ctrl-shift-j": "agent::OpenSettings",
},
"ctrl-shift-j": "agent::OpenSettings"
}
},
{
"context": "Editor && mode == full",
@@ -20,18 +20,18 @@
"ctrl-shift-l": "agent::AddSelectionToThread", // In cursor uses "Ask" mode
"ctrl-l": "agent::AddSelectionToThread", // In cursor uses "Agent" mode
"ctrl-k": "assistant::InlineAssist",
"ctrl-shift-k": "assistant::InsertIntoEditor",
},
"ctrl-shift-k": "assistant::InsertIntoEditor"
}
},
{
"context": "InlineAssistEditor",
"use_key_equivalents": true,
"bindings": {
"ctrl-shift-backspace": "editor::Cancel",
"ctrl-shift-backspace": "editor::Cancel"
// "alt-enter": // Quick Question
// "ctrl-shift-enter": // Full File Context
// "ctrl-shift-k": // Toggle input focus (editor <> inline assist)
},
}
},
{
"context": "AgentPanel || ContextEditor || (MessageEditor > Editor)",
@@ -47,7 +47,7 @@
"ctrl-shift-backspace": "editor::Cancel",
"ctrl-r": "agent::NewThread",
"ctrl-shift-v": "editor::Paste",
"ctrl-shift-k": "assistant::InsertIntoEditor",
"ctrl-shift-k": "assistant::InsertIntoEditor"
// "escape": "agent::ToggleFocus"
///// Enable when Zed supports multiple thread tabs
// "ctrl-t": // new thread tab
@@ -56,29 +56,28 @@
///// Enable if Zed adds support for keyboard navigation of thread elements
// "tab": // cycle to next message
// "shift-tab": // cycle to previous message
},
}
},
{
"context": "Editor && editor_agent_diff",
"use_key_equivalents": true,
"bindings": {
"ctrl-enter": "agent::KeepAll",
"ctrl-backspace": "agent::RejectAll",
},
"ctrl-backspace": "agent::RejectAll"
}
},
{
"context": "Editor && mode == full && edit_prediction",
"use_key_equivalents": true,
"bindings": {
"ctrl-right": "editor::AcceptNextWordEditPrediction",
"ctrl-down": "editor::AcceptNextLineEditPrediction",
},
"ctrl-right": "editor::AcceptPartialEditPrediction"
}
},
{
"context": "Terminal",
"use_key_equivalents": true,
"bindings": {
"ctrl-k": "assistant::InlineAssist",
},
},
"ctrl-k": "assistant::InlineAssist"
}
}
]


@@ -5,8 +5,8 @@
[
{
"bindings": {
"ctrl-g": "menu::Cancel",
},
"ctrl-g": "menu::Cancel"
}
},
{
// Workaround to avoid falling back to default bindings.
@@ -18,8 +18,8 @@
"ctrl-g": null, // currently activates `go_to_line::Toggle` when there is nothing to cancel
"ctrl-x": null, // currently activates `editor::Cut` if no following key is pressed for 1 second
"ctrl-p": null, // currently activates `file_finder::Toggle` when the cursor is on the first character of the buffer
"ctrl-n": null, // currently activates `workspace::NewFile` when the cursor is on the last character of the buffer
},
"ctrl-n": null // currently activates `workspace::NewFile` when the cursor is on the last character of the buffer
}
},
{
"context": "Editor",
@@ -82,8 +82,8 @@
"ctrl-s": "buffer_search::Deploy", // isearch-forward
"ctrl-r": "buffer_search::Deploy", // isearch-backward
"alt-^": "editor::JoinLines", // join-line
"alt-q": "editor::Rewrap", // fill-paragraph
},
"alt-q": "editor::Rewrap" // fill-paragraph
}
},
{
"context": "Editor && selection_mode", // region selection
@@ -119,22 +119,22 @@
"alt->": "editor::SelectToEnd",
"ctrl-home": "editor::SelectToBeginning",
"ctrl-end": "editor::SelectToEnd",
"ctrl-g": "editor::Cancel",
},
"ctrl-g": "editor::Cancel"
}
},
{
"context": "Editor && (showing_code_actions || showing_completions)",
"bindings": {
"ctrl-p": "editor::ContextMenuPrevious",
"ctrl-n": "editor::ContextMenuNext",
},
"ctrl-n": "editor::ContextMenuNext"
}
},
{
"context": "Editor && showing_signature_help && !showing_completions",
"bindings": {
"ctrl-p": "editor::SignatureHelpPrevious",
"ctrl-n": "editor::SignatureHelpNext",
},
"ctrl-n": "editor::SignatureHelpNext"
}
},
// Example setting for using emacs-style tab
// (i.e. indent the current line / selection or perform symbol completion depending on context)
@@ -164,8 +164,8 @@
"ctrl-x ctrl-f": "file_finder::Toggle", // find-file
"ctrl-x ctrl-s": "workspace::Save", // save-buffer
"ctrl-x ctrl-w": "workspace::SaveAs", // write-file
"ctrl-x s": "workspace::SaveAll", // save-some-buffers
},
"ctrl-x s": "workspace::SaveAll" // save-some-buffers
}
},
{
// Workaround to enable using native emacs from the Zed terminal.
@@ -185,22 +185,22 @@
"ctrl-x ctrl-f": null, // find-file
"ctrl-x ctrl-s": null, // save-buffer
"ctrl-x ctrl-w": null, // write-file
"ctrl-x s": null, // save-some-buffers
},
"ctrl-x s": null // save-some-buffers
}
},
{
"context": "BufferSearchBar > Editor",
"bindings": {
"ctrl-s": "search::SelectNextMatch",
"ctrl-r": "search::SelectPreviousMatch",
"ctrl-g": "buffer_search::Dismiss",
},
"ctrl-g": "buffer_search::Dismiss"
}
},
{
"context": "Pane",
"bindings": {
"ctrl-alt-left": "pane::GoBack",
"ctrl-alt-right": "pane::GoForward",
},
},
"ctrl-alt-right": "pane::GoForward"
}
}
]
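
A pattern that recurs in the Emacs and Cursor base keymaps above is neutralizing a default binding with null in a broad context so the key can be reused as a prefix (or passed through to the terminal) in a narrower one. A small sketch of that workaround, with keys and contexts borrowed from the hunks above:

```json
[
  {
    // Detach ctrl-x from its default action so it can serve as a prefix key.
    "context": "Editor",
    "bindings": {
      "ctrl-x": null
    }
  },
  {
    // The prefix is then bound explicitly where it is wanted.
    "context": "Workspace",
    "bindings": {
      "ctrl-x ctrl-s": "workspace::Save",
      "ctrl-x ctrl-f": "file_finder::Toggle"
    }
  }
]
```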


@@ -13,8 +13,8 @@
"shift-f8": "debugger::StepOut",
"f9": "debugger::Continue",
"shift-f9": "debugger::Start",
"alt-shift-f9": "debugger::Start",
},
"alt-shift-f9": "debugger::Start"
}
},
{
"context": "Editor",
@@ -62,30 +62,28 @@
"ctrl-shift-end": "editor::SelectToEnd",
"ctrl-f8": "editor::ToggleBreakpoint",
"ctrl-shift-f8": "editor::EditLogBreakpoint",
"ctrl-shift-u": "editor::ToggleCase",
},
"ctrl-shift-u": "editor::ToggleCase"
}
},
{
"context": "Editor && mode == full",
"bindings": {
"ctrl-f12": "outline::Toggle",
"ctrl-r": ["buffer_search::Deploy", { "replace_enabled": true }],
"ctrl-e": "file_finder::Toggle",
"ctrl-shift-n": "file_finder::Toggle",
"ctrl-alt-n": "file_finder::Toggle",
"ctrl-g": "go_to_line::Toggle",
"alt-enter": "editor::ToggleCodeActions",
"ctrl-space": "editor::ShowCompletions",
"ctrl-q": "editor::Hover",
"ctrl-p": "editor::ShowSignatureHelp",
"ctrl-\\": "assistant::InlineAssist",
},
"ctrl-\\": "assistant::InlineAssist"
}
},
{
"context": "BufferSearchBar",
"bindings": {
"shift-enter": "search::SelectPreviousMatch",
},
"shift-enter": "search::SelectPreviousMatch"
}
},
{
"context": "BufferSearchBar || ProjectSearchBar",
@@ -93,8 +91,8 @@
"alt-c": "search::ToggleCaseSensitive",
"alt-e": "search::ToggleSelection",
"alt-x": "search::ToggleRegex",
"alt-w": "search::ToggleWholeWord",
},
"alt-w": "search::ToggleWholeWord"
}
},
{
"context": "Workspace",
@@ -107,8 +105,8 @@
"ctrl-e": "file_finder::Toggle",
"ctrl-k": "git_panel::ToggleFocus", // bug: This should also focus commit editor
"ctrl-shift-n": "file_finder::Toggle",
"ctrl-alt-n": "file_finder::Toggle",
"ctrl-n": "project_symbols::Toggle",
"ctrl-alt-n": "file_finder::Toggle",
"ctrl-shift-a": "command_palette::Toggle",
"shift shift": "command_palette::Toggle",
"ctrl-alt-shift-n": "project_symbols::Toggle",
@@ -116,8 +114,8 @@
"alt-1": "project_panel::ToggleFocus",
"alt-5": "debug_panel::ToggleFocus",
"alt-6": "diagnostics::Deploy",
"alt-7": "outline_panel::ToggleFocus",
},
"alt-7": "outline_panel::ToggleFocus"
}
},
{
"context": "Pane", // this is to override the default Pane mappings to switch tabs
@@ -131,15 +129,15 @@
"alt-7": "outline_panel::ToggleFocus",
"alt-8": null, // Services (bottom dock)
"alt-9": null, // Git History (bottom dock)
"alt-0": "git_panel::ToggleFocus",
},
"alt-0": "git_panel::ToggleFocus"
}
},
{
"context": "Workspace || Editor",
"bindings": {
"alt-f12": "terminal_panel::Toggle",
"ctrl-shift-k": "git::Push",
},
"ctrl-shift-k": "git::Push"
}
},
{
"context": "Pane",
@@ -147,8 +145,8 @@
"ctrl-alt-left": "pane::GoBack",
"ctrl-alt-right": "pane::GoForward",
"alt-left": "pane::ActivatePreviousItem",
"alt-right": "pane::ActivateNextItem",
},
"alt-right": "pane::ActivateNextItem"
}
},
{
"context": "ProjectPanel",
@@ -158,8 +156,8 @@
"backspace": ["project_panel::Trash", { "skip_prompt": false }],
"delete": ["project_panel::Trash", { "skip_prompt": false }],
"shift-delete": ["project_panel::Delete", { "skip_prompt": false }],
"shift-f6": "project_panel::Rename",
},
"shift-f6": "project_panel::Rename"
}
},
{
"context": "Terminal",
@@ -169,8 +167,8 @@
"ctrl-up": "terminal::ScrollLineUp",
"ctrl-down": "terminal::ScrollLineDown",
"shift-pageup": "terminal::ScrollPageUp",
"shift-pagedown": "terminal::ScrollPageDown",
},
"shift-pagedown": "terminal::ScrollPageDown"
}
},
{ "context": "GitPanel", "bindings": { "alt-0": "workspace::CloseActiveDock" } },
{ "context": "ProjectPanel", "bindings": { "alt-1": "workspace::CloseActiveDock" } },
@@ -181,7 +179,7 @@
"context": "Dock || Workspace || OutlinePanel || ProjectPanel || CollabPanel || (Editor && mode == auto_height)",
"bindings": {
"escape": "editor::ToggleFocus",
"shift-escape": "workspace::CloseActiveDock",
},
},
"shift-escape": "workspace::CloseActiveDock"
}
}
]


@@ -22,8 +22,8 @@
"ctrl-^": ["workspace::MoveItemToPane", { "destination": 5 }],
"ctrl-&": ["workspace::MoveItemToPane", { "destination": 6 }],
"ctrl-*": ["workspace::MoveItemToPane", { "destination": 7 }],
"ctrl-(": ["workspace::MoveItemToPane", { "destination": 8 }],
},
"ctrl-(": ["workspace::MoveItemToPane", { "destination": 8 }]
}
},
{
"context": "Editor",
@@ -55,20 +55,20 @@
"alt-right": "editor::MoveToNextSubwordEnd",
"alt-left": "editor::MoveToPreviousSubwordStart",
"alt-shift-right": "editor::SelectToNextSubwordEnd",
"alt-shift-left": "editor::SelectToPreviousSubwordStart",
},
"alt-shift-left": "editor::SelectToPreviousSubwordStart"
}
},
{
"context": "Editor && mode == full",
"bindings": {
"ctrl-r": "outline::Toggle",
},
"ctrl-r": "outline::Toggle"
}
},
{
"context": "Editor && !agent_diff",
"bindings": {
"ctrl-k ctrl-z": "git::Restore",
},
"ctrl-k ctrl-z": "git::Restore"
}
},
{
"context": "Pane",
@@ -83,15 +83,15 @@
"alt-6": ["pane::ActivateItem", 5],
"alt-7": ["pane::ActivateItem", 6],
"alt-8": ["pane::ActivateItem", 7],
"alt-9": "pane::ActivateLastItem",
},
"alt-9": "pane::ActivateLastItem"
}
},
{
"context": "Workspace",
"bindings": {
"ctrl-k ctrl-b": "workspace::ToggleLeftDock",
// "ctrl-0": "project_panel::ToggleFocus", // normally resets zoom
"shift-ctrl-r": "project_symbols::Toggle",
},
},
"shift-ctrl-r": "project_symbols::Toggle"
}
}
]


@@ -4,16 +4,16 @@
"bindings": {
"ctrl-alt-cmd-l": "workspace::Reload",
"cmd-k cmd-p": "workspace::ActivatePreviousPane",
"cmd-k cmd-n": "workspace::ActivateNextPane",
},
"cmd-k cmd-n": "workspace::ActivateNextPane"
}
},
{
"context": "Editor",
"bindings": {
"cmd-shift-backspace": "editor::DeleteToBeginningOfLine",
"cmd-k cmd-u": "editor::ConvertToUpperCase",
"cmd-k cmd-l": "editor::ConvertToLowerCase",
},
"cmd-k cmd-l": "editor::ConvertToLowerCase"
}
},
{
"context": "Editor && mode == full",
@@ -33,8 +33,8 @@
"ctrl-cmd-down": "editor::MoveLineDown",
"cmd-\\": "workspace::ToggleLeftDock",
"ctrl-shift-m": "markdown::OpenPreviewToTheSide",
"cmd-r": "outline::Toggle",
},
"cmd-r": "outline::Toggle"
}
},
{
"context": "BufferSearchBar",
@@ -42,8 +42,8 @@
"cmd-g": ["editor::SelectNext", { "replace_newest": true }],
"cmd-shift-g": ["editor::SelectPrevious", { "replace_newest": true }],
"cmd-f3": "search::SelectNextMatch",
"cmd-shift-f3": "search::SelectPreviousMatch",
},
"cmd-shift-f3": "search::SelectPreviousMatch"
}
},
{
"context": "Workspace",
@@ -51,8 +51,8 @@
"cmd-\\": "workspace::ToggleLeftDock",
"cmd-k cmd-b": "workspace::ToggleLeftDock",
"cmd-t": "file_finder::Toggle",
"cmd-shift-r": "project_symbols::Toggle",
},
"cmd-shift-r": "project_symbols::Toggle"
}
},
{
"context": "Pane",
@@ -67,8 +67,8 @@
"cmd-6": ["pane::ActivateItem", 5],
"cmd-7": ["pane::ActivateItem", 6],
"cmd-8": ["pane::ActivateItem", 7],
"cmd-9": "pane::ActivateLastItem",
},
"cmd-9": "pane::ActivateLastItem"
}
},
{
"context": "ProjectPanel",
@@ -77,8 +77,8 @@
"backspace": ["project_panel::Trash", { "skip_prompt": false }],
"cmd-x": "project_panel::Cut",
"cmd-c": "project_panel::Copy",
"cmd-v": "project_panel::Paste",
},
"cmd-v": "project_panel::Paste"
}
},
{
"context": "ProjectPanel && not_editing",
@@ -92,7 +92,7 @@
"d": "project_panel::Duplicate",
"home": "menu::SelectFirst",
"end": "menu::SelectLast",
"shift-a": "project_panel::NewDirectory",
},
},
"shift-a": "project_panel::NewDirectory"
}
}
]


@@ -8,8 +8,8 @@
"cmd-shift-i": "agent::ToggleFocus",
"cmd-l": "agent::ToggleFocus",
"cmd-shift-l": "agent::ToggleFocus",
"cmd-shift-j": "agent::OpenSettings",
},
"cmd-shift-j": "agent::OpenSettings"
}
},
{
"context": "Editor && mode == full",
@@ -20,19 +20,19 @@
"cmd-shift-l": "agent::AddSelectionToThread", // In cursor uses "Ask" mode
"cmd-l": "agent::AddSelectionToThread", // In cursor uses "Agent" mode
"cmd-k": "assistant::InlineAssist",
"cmd-shift-k": "assistant::InsertIntoEditor",
},
"cmd-shift-k": "assistant::InsertIntoEditor"
}
},
{
"context": "InlineAssistEditor",
"use_key_equivalents": true,
"bindings": {
"cmd-shift-backspace": "editor::Cancel",
"cmd-enter": "menu::Confirm",
"cmd-enter": "menu::Confirm"
// "alt-enter": // Quick Question
// "cmd-shift-enter": // Full File Context
// "cmd-shift-k": // Toggle input focus (editor <> inline assist)
},
}
},
{
"context": "AgentPanel || ContextEditor || (MessageEditor > Editor)",
@@ -48,7 +48,7 @@
"cmd-shift-backspace": "editor::Cancel",
"cmd-r": "agent::NewThread",
"cmd-shift-v": "editor::Paste",
"cmd-shift-k": "assistant::InsertIntoEditor",
"cmd-shift-k": "assistant::InsertIntoEditor"
// "escape": "agent::ToggleFocus"
///// Enable when Zed supports multiple thread tabs
// "cmd-t": // new thread tab
@@ -57,29 +57,28 @@
///// Enable if Zed adds support for keyboard navigation of thread elements
// "tab": // cycle to next message
// "shift-tab": // cycle to previous message
},
}
},
{
"context": "Editor && editor_agent_diff",
"use_key_equivalents": true,
"bindings": {
"cmd-enter": "agent::KeepAll",
"cmd-backspace": "agent::RejectAll",
},
"cmd-backspace": "agent::RejectAll"
}
},
{
"context": "Editor && mode == full && edit_prediction",
"use_key_equivalents": true,
"bindings": {
"cmd-right": "editor::AcceptNextWordEditPrediction",
"cmd-down": "editor::AcceptNextLineEditPrediction",
},
"cmd-right": "editor::AcceptPartialEditPrediction"
}
},
{
"context": "Terminal",
"use_key_equivalents": true,
"bindings": {
"cmd-k": "assistant::InlineAssist",
},
},
"cmd-k": "assistant::InlineAssist"
}
}
]


@@ -6,8 +6,8 @@
{
"context": "!GitPanel",
"bindings": {
"ctrl-g": "menu::Cancel",
},
"ctrl-g": "menu::Cancel"
}
},
{
// Workaround to avoid falling back to default bindings.
@@ -15,8 +15,8 @@
// NOTE: must be declared before the `Editor` override.
"context": "Editor",
"bindings": {
"ctrl-g": null, // currently activates `go_to_line::Toggle` when there is nothing to cancel
},
"ctrl-g": null // currently activates `go_to_line::Toggle` when there is nothing to cancel
}
},
{
"context": "Editor",
@@ -79,8 +79,8 @@
"ctrl-s": "buffer_search::Deploy", // isearch-forward
"ctrl-r": "buffer_search::Deploy", // isearch-backward
"alt-^": "editor::JoinLines", // join-line
"alt-q": "editor::Rewrap", // fill-paragraph
},
"alt-q": "editor::Rewrap" // fill-paragraph
}
},
{
"context": "Editor && selection_mode", // region selection
@@ -116,22 +116,22 @@
"alt->": "editor::SelectToEnd",
"ctrl-home": "editor::SelectToBeginning",
"ctrl-end": "editor::SelectToEnd",
"ctrl-g": "editor::Cancel",
},
"ctrl-g": "editor::Cancel"
}
},
{
"context": "Editor && (showing_code_actions || showing_completions)",
"bindings": {
"ctrl-p": "editor::ContextMenuPrevious",
"ctrl-n": "editor::ContextMenuNext",
},
"ctrl-n": "editor::ContextMenuNext"
}
},
{
"context": "Editor && showing_signature_help && !showing_completions",
"bindings": {
"ctrl-p": "editor::SignatureHelpPrevious",
"ctrl-n": "editor::SignatureHelpNext",
},
"ctrl-n": "editor::SignatureHelpNext"
}
},
// Example setting for using emacs-style tab
// (i.e. indent the current line / selection or perform symbol completion depending on context)
@@ -161,8 +161,8 @@
"ctrl-x ctrl-f": "file_finder::Toggle", // find-file
"ctrl-x ctrl-s": "workspace::Save", // save-buffer
"ctrl-x ctrl-w": "workspace::SaveAs", // write-file
"ctrl-x s": "workspace::SaveAll", // save-some-buffers
},
"ctrl-x s": "workspace::SaveAll" // save-some-buffers
}
},
{
// Workaround to enable using native emacs from the Zed terminal.
@@ -182,22 +182,22 @@
"ctrl-x ctrl-f": null, // find-file
"ctrl-x ctrl-s": null, // save-buffer
"ctrl-x ctrl-w": null, // write-file
"ctrl-x s": null, // save-some-buffers
},
"ctrl-x s": null // save-some-buffers
}
},
{
"context": "BufferSearchBar > Editor",
"bindings": {
"ctrl-s": "search::SelectNextMatch",
"ctrl-r": "search::SelectPreviousMatch",
"ctrl-g": "buffer_search::Dismiss",
},
"ctrl-g": "buffer_search::Dismiss"
}
},
{
"context": "Pane",
"bindings": {
"ctrl-alt-left": "pane::GoBack",
"ctrl-alt-right": "pane::GoForward",
},
},
"ctrl-alt-right": "pane::GoForward"
}
}
]


@@ -13,8 +13,8 @@
"shift-f8": "debugger::StepOut",
"f9": "debugger::Continue",
"shift-f9": "debugger::Start",
"alt-shift-f9": "debugger::Start",
},
"alt-shift-f9": "debugger::Start"
}
},
{
"context": "Editor",
@@ -60,30 +60,28 @@
"cmd-shift-end": "editor::SelectToEnd",
"ctrl-f8": "editor::ToggleBreakpoint",
"ctrl-shift-f8": "editor::EditLogBreakpoint",
"cmd-shift-u": "editor::ToggleCase",
},
"cmd-shift-u": "editor::ToggleCase"
}
},
{
"context": "Editor && mode == full",
"bindings": {
"cmd-f12": "outline::Toggle",
"cmd-r": ["buffer_search::Deploy", { "replace_enabled": true }],
"cmd-l": "go_to_line::Toggle",
"cmd-e": "file_finder::Toggle",
"cmd-shift-o": "file_finder::Toggle",
"cmd-shift-n": "file_finder::Toggle",
"cmd-l": "go_to_line::Toggle",
"alt-enter": "editor::ToggleCodeActions",
"ctrl-space": "editor::ShowCompletions",
"cmd-j": "editor::Hover",
"cmd-p": "editor::ShowSignatureHelp",
"cmd-\\": "assistant::InlineAssist",
},
"cmd-\\": "assistant::InlineAssist"
}
},
{
"context": "BufferSearchBar",
"bindings": {
"shift-enter": "search::SelectPreviousMatch",
},
"shift-enter": "search::SelectPreviousMatch"
}
},
{
"context": "BufferSearchBar || ProjectSearchBar",
@@ -95,8 +93,8 @@
"ctrl-alt-c": "search::ToggleCaseSensitive",
"ctrl-alt-e": "search::ToggleSelection",
"ctrl-alt-w": "search::ToggleWholeWord",
"ctrl-alt-x": "search::ToggleRegex",
},
"ctrl-alt-x": "search::ToggleRegex"
}
},
{
"context": "Workspace",
@@ -118,8 +116,8 @@
"cmd-1": "project_panel::ToggleFocus",
"cmd-5": "debug_panel::ToggleFocus",
"cmd-6": "diagnostics::Deploy",
"cmd-7": "outline_panel::ToggleFocus",
},
"cmd-7": "outline_panel::ToggleFocus"
}
},
{
"context": "Pane", // this is to override the default Pane mappings to switch tabs
@@ -133,15 +131,15 @@
"cmd-7": "outline_panel::ToggleFocus",
"cmd-8": null, // Services (bottom dock)
"cmd-9": null, // Git History (bottom dock)
"cmd-0": "git_panel::ToggleFocus",
},
"cmd-0": "git_panel::ToggleFocus"
}
},
{
"context": "Workspace || Editor",
"bindings": {
"alt-f12": "terminal_panel::Toggle",
"cmd-shift-k": "git::Push",
},
"cmd-shift-k": "git::Push"
}
},
{
"context": "Pane",
@@ -149,8 +147,8 @@
"cmd-alt-left": "pane::GoBack",
"cmd-alt-right": "pane::GoForward",
"alt-left": "pane::ActivatePreviousItem",
"alt-right": "pane::ActivateNextItem",
},
"alt-right": "pane::ActivateNextItem"
}
},
{
"context": "ProjectPanel",
@@ -161,8 +159,8 @@
"backspace": ["project_panel::Trash", { "skip_prompt": false }],
"delete": ["project_panel::Trash", { "skip_prompt": false }],
"shift-delete": ["project_panel::Delete", { "skip_prompt": false }],
"shift-f6": "project_panel::Rename",
},
"shift-f6": "project_panel::Rename"
}
},
{
"context": "Terminal",
@@ -172,8 +170,8 @@
"cmd-up": "terminal::ScrollLineUp",
"cmd-down": "terminal::ScrollLineDown",
"shift-pageup": "terminal::ScrollPageUp",
"shift-pagedown": "terminal::ScrollPageDown",
},
"shift-pagedown": "terminal::ScrollPageDown"
}
},
{ "context": "GitPanel", "bindings": { "cmd-0": "workspace::CloseActiveDock" } },
{ "context": "ProjectPanel", "bindings": { "cmd-1": "workspace::CloseActiveDock" } },
@@ -184,7 +182,7 @@
"context": "Dock || Workspace || OutlinePanel || ProjectPanel || CollabPanel || (Editor && mode == auto_height)",
"bindings": {
"escape": "editor::ToggleFocus",
"shift-escape": "workspace::CloseActiveDock",
},
},
"shift-escape": "workspace::CloseActiveDock"
}
}
]
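
The "context" strings throughout these keymaps form a small boolean expression language over UI state: view and mode identifiers, key == value comparisons, !, &&, ||, and parentheses (for example "Editor && mode == full", or the long dock expression above). A hedged sketch that only combines forms appearing elsewhere in this diff:

```json
[
  {
    // A view name refined by a state flag.
    "context": "ProjectPanel && not_editing",
    "bindings": {
      "shift-a": "project_panel::NewDirectory"
    }
  },
  {
    // Alternation plus a parenthesized comparison, as in the dock-dismiss bindings.
    "context": "Dock || Workspace || (Editor && mode == auto_height)",
    "bindings": {
      "shift-escape": "workspace::CloseActiveDock"
    }
  }
]
```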


@@ -22,8 +22,8 @@
"ctrl-^": ["workspace::MoveItemToPane", { "destination": 5 }],
"ctrl-&": ["workspace::MoveItemToPane", { "destination": 6 }],
"ctrl-*": ["workspace::MoveItemToPane", { "destination": 7 }],
"ctrl-(": ["workspace::MoveItemToPane", { "destination": 8 }],
},
"ctrl-(": ["workspace::MoveItemToPane", { "destination": 8 }]
}
},
{
"context": "Editor",
@@ -57,20 +57,20 @@
"ctrl-right": "editor::MoveToNextSubwordEnd",
"ctrl-left": "editor::MoveToPreviousSubwordStart",
"ctrl-shift-right": "editor::SelectToNextSubwordEnd",
"ctrl-shift-left": "editor::SelectToPreviousSubwordStart",
},
"ctrl-shift-left": "editor::SelectToPreviousSubwordStart"
}
},
{
"context": "Editor && mode == full",
"bindings": {
"cmd-r": "outline::Toggle",
},
"cmd-r": "outline::Toggle"
}
},
{
"context": "Editor && !agent_diff",
"bindings": {
"cmd-k cmd-z": "git::Restore",
},
"cmd-k cmd-z": "git::Restore"
}
},
{
"context": "Pane",
@@ -85,8 +85,8 @@
"cmd-6": ["pane::ActivateItem", 5],
"cmd-7": ["pane::ActivateItem", 6],
"cmd-8": ["pane::ActivateItem", 7],
"cmd-9": "pane::ActivateLastItem",
},
"cmd-9": "pane::ActivateLastItem"
}
},
{
"context": "Workspace",
@@ -95,7 +95,7 @@
"cmd-t": "file_finder::Toggle",
"shift-cmd-r": "project_symbols::Toggle",
// Currently busted: https://github.com/zed-industries/feedback/issues/898
"ctrl-0": "project_panel::ToggleFocus",
},
},
"ctrl-0": "project_panel::ToggleFocus"
}
}
]


@@ -2,8 +2,8 @@
{
"bindings": {
"cmd-shift-o": "projects::OpenRecent",
"cmd-alt-tab": "project_panel::ToggleFocus",
},
"cmd-alt-tab": "project_panel::ToggleFocus"
}
},
{
"context": "Editor && mode == full",
@@ -15,8 +15,8 @@
"cmd-enter": "editor::NewlineBelow",
"cmd-alt-enter": "editor::NewlineAbove",
"cmd-shift-l": "editor::SelectLine",
"cmd-shift-t": "outline::Toggle",
},
"cmd-shift-t": "outline::Toggle"
}
},
{
"context": "Editor",
@@ -41,30 +41,30 @@
"ctrl-u": "editor::ConvertToUpperCase",
"ctrl-shift-u": "editor::ConvertToLowerCase",
"ctrl-alt-u": "editor::ConvertToUpperCamelCase",
"ctrl-_": "editor::ConvertToSnakeCase",
},
"ctrl-_": "editor::ConvertToSnakeCase"
}
},
{
"context": "BufferSearchBar",
"bindings": {
"ctrl-s": "search::SelectNextMatch",
"ctrl-shift-s": "search::SelectPreviousMatch",
},
"ctrl-shift-s": "search::SelectPreviousMatch"
}
},
{
"context": "Workspace",
"bindings": {
"cmd-alt-ctrl-d": "workspace::ToggleLeftDock",
"cmd-t": "file_finder::Toggle",
"cmd-shift-t": "project_symbols::Toggle",
},
"cmd-shift-t": "project_symbols::Toggle"
}
},
{
"context": "Pane",
"bindings": {
"alt-cmd-r": "search::ToggleRegex",
"ctrl-tab": "project_panel::ToggleFocus",
},
"ctrl-tab": "project_panel::ToggleFocus"
}
},
{
"context": "ProjectPanel",
@@ -75,11 +75,11 @@
"return": "project_panel::Rename",
"cmd-c": "project_panel::Copy",
"cmd-v": "project_panel::Paste",
"cmd-alt-c": "project_panel::CopyPath",
},
"cmd-alt-c": "project_panel::CopyPath"
}
},
{
"context": "Dock",
"bindings": {},
},
"bindings": {}
}
]


@@ -27,7 +27,7 @@
"backspace": "editor::Backspace",
"delete": "editor::Delete",
"left": "editor::MoveLeft",
"right": "editor::MoveRight",
},
},
"right": "editor::MoveRight"
}
}
]


@@ -180,9 +180,10 @@
"ctrl-w g shift-d": "editor::GoToTypeDefinitionSplit",
"ctrl-w space": "editor::OpenExcerptsSplit",
"ctrl-w g space": "editor::OpenExcerptsSplit",
"ctrl-6": "pane::AlternateFile",
"ctrl-^": "pane::AlternateFile",
".": "vim::Repeat",
},
".": "vim::Repeat"
}
},
{
"context": "vim_mode == normal || vim_mode == visual || vim_mode == operator",
@@ -223,8 +224,8 @@
"] r": "vim::GoToNextReference",
// tree-sitter related commands
"[ x": "vim::SelectLargerSyntaxNode",
"] x": "vim::SelectSmallerSyntaxNode",
},
"] x": "vim::SelectSmallerSyntaxNode"
}
},
{
"context": "vim_mode == normal",
@@ -261,16 +262,16 @@
"[ d": "editor::GoToPreviousDiagnostic",
"] c": "editor::GoToHunk",
"[ c": "editor::GoToPreviousHunk",
"g c": "vim::PushToggleComments",
},
"g c": "vim::PushToggleComments"
}
},
{
"context": "VimControl && VimCount",
"bindings": {
"0": ["vim::Number", 0],
":": "vim::CountCommand",
"%": "vim::GoToPercentage",
},
"%": "vim::GoToPercentage"
}
},
{
"context": "vim_mode == visual",
@@ -322,8 +323,8 @@
"g w": "vim::Rewrap",
"g ?": "vim::ConvertToRot13",
// "g ?": "vim::ConvertToRot47",
"\"": "vim::PushRegister",
},
"\"": "vim::PushRegister"
}
},
{
"context": "vim_mode == helix_select",
@@ -343,8 +344,8 @@
"ctrl-pageup": "pane::ActivatePreviousItem",
"ctrl-pagedown": "pane::ActivateNextItem",
".": "vim::Repeat",
"alt-.": "vim::RepeatFind",
},
"alt-.": "vim::RepeatFind"
}
},
{
"context": "vim_mode == insert",
@@ -374,8 +375,8 @@
"ctrl-r": "vim::PushRegister",
"insert": "vim::ToggleReplace",
"ctrl-o": "vim::TemporaryNormal",
"ctrl-s": "editor::ShowSignatureHelp",
},
"ctrl-s": "editor::ShowSignatureHelp"
}
},
{
"context": "showing_completions",
@@ -383,8 +384,8 @@
"ctrl-d": "vim::ScrollDown",
"ctrl-u": "vim::ScrollUp",
"ctrl-e": "vim::LineDown",
"ctrl-y": "vim::LineUp",
},
"ctrl-y": "vim::LineUp"
}
},
{
"context": "(vim_mode == normal || vim_mode == helix_normal) && !menu",
@@ -409,31 +410,23 @@
"shift-s": "vim::SubstituteLine",
"\"": "vim::PushRegister",
"ctrl-pagedown": "pane::ActivateNextItem",
"ctrl-pageup": "pane::ActivatePreviousItem",
},
"ctrl-pageup": "pane::ActivatePreviousItem"
}
},
{
"context": "VimControl && vim_mode == helix_normal && !menu",
"bindings": {
"j": ["vim::Down", { "display_lines": true }],
"down": ["vim::Down", { "display_lines": true }],
"k": ["vim::Up", { "display_lines": true }],
"up": ["vim::Up", { "display_lines": true }],
"g j": "vim::Down",
"g down": "vim::Down",
"g k": "vim::Up",
"g up": "vim::Up",
"escape": "vim::SwitchToHelixNormalMode",
"i": "vim::HelixInsert",
"a": "vim::HelixAppend",
"ctrl-[": "editor::Cancel",
},
"ctrl-[": "editor::Cancel"
}
},
{
"context": "vim_mode == helix_select && !menu",
"bindings": {
"escape": "vim::SwitchToHelixNormalMode",
},
"escape": "vim::SwitchToHelixNormalMode"
}
},
{
"context": "(vim_mode == helix_normal || vim_mode == helix_select) && !menu",
@@ -453,9 +446,9 @@
"shift-r": "editor::Paste",
"`": "vim::ConvertToLowerCase",
"alt-`": "vim::ConvertToUpperCase",
"insert": "vim::InsertBefore", // not a helix default
"insert": "vim::InsertBefore",
"shift-u": "editor::Redo",
"ctrl-r": "vim::Redo", // not a helix default
"ctrl-r": "vim::Redo",
"y": "vim::HelixYank",
"p": "vim::HelixPaste",
"shift-p": ["vim::HelixPaste", { "before": true }],
@@ -484,7 +477,6 @@
"alt-p": "editor::SelectPreviousSyntaxNode",
"alt-n": "editor::SelectNextSyntaxNode",
// Search
"n": "vim::HelixSelectNext",
"shift-n": "vim::HelixSelectPrevious",
@@ -492,32 +484,27 @@
"g e": "vim::EndOfDocument",
"g h": "vim::StartOfLine",
"g l": "vim::EndOfLine",
"g s": "vim::FirstNonWhitespace",
"g s": "vim::FirstNonWhitespace", // "g s" default behavior is "space s"
"g t": "vim::WindowTop",
"g c": "vim::WindowMiddle",
"g b": "vim::WindowBottom",
"g r": "editor::FindAllReferences",
"g r": "editor::FindAllReferences", // zed specific
"g n": "pane::ActivateNextItem",
"shift-l": "pane::ActivateNextItem", // not a helix default
"shift-l": "pane::ActivateNextItem",
"g p": "pane::ActivatePreviousItem",
"shift-h": "pane::ActivatePreviousItem", // not a helix default
"g .": "vim::HelixGotoLastModification",
"g o": "editor::ToggleSelectedDiffHunks", // Zed specific
"g shift-o": "git::ToggleStaged", // Zed specific
"g shift-r": "git::Restore", // Zed specific
"g u": "git::StageAndNext", // Zed specific
"g shift-u": "git::UnstageAndNext", // Zed specific
"shift-h": "pane::ActivatePreviousItem",
"g .": "vim::HelixGotoLastModification", // go to last modification
// Window mode
"space w v": "pane::SplitDown",
"space w s": "pane::SplitRight",
"space w h": "workspace::ActivatePaneLeft",
"space w j": "workspace::ActivatePaneDown",
"space w k": "workspace::ActivatePaneUp",
"space w l": "workspace::ActivatePaneRight",
"space w k": "workspace::ActivatePaneUp",
"space w j": "workspace::ActivatePaneDown",
"space w q": "pane::CloseActiveItem",
"space w r": "pane::SplitRight", // not a helix default
"space w d": "pane::SplitDown", // not a helix default
"space w s": "pane::SplitRight",
"space w r": "pane::SplitRight",
"space w v": "pane::SplitDown",
"space w d": "pane::SplitDown",
// Space mode
"space f": "file_finder::Toggle",
@@ -531,7 +518,6 @@
"space c": "editor::ToggleComments",
"space p": "editor::Paste",
"space y": "editor::Copy",
"space /": "pane::DeploySearch",
// Other
":": "command_palette::Toggle",
@@ -539,22 +525,24 @@
"]": ["vim::PushHelixNext", { "around": true }],
"[": ["vim::PushHelixPrevious", { "around": true }],
"g q": "vim::PushRewrap",
"g w": "vim::PushRewrap", // not a helix default & clashes with helix `goto_word`
},
"g w": "vim::PushRewrap"
// "tab": "pane::ActivateNextItem",
// "shift-tab": "pane::ActivatePrevItem",
}
},
{
"context": "vim_mode == insert && !(showing_code_actions || showing_completions)",
"bindings": {
"ctrl-p": "editor::ShowWordCompletions",
"ctrl-n": "editor::ShowWordCompletions",
},
"ctrl-n": "editor::ShowWordCompletions"
}
},
{
"context": "(vim_mode == insert || vim_mode == normal) && showing_signature_help && !showing_completions",
"bindings": {
"ctrl-p": "editor::SignatureHelpPrevious",
"ctrl-n": "editor::SignatureHelpNext",
},
"ctrl-n": "editor::SignatureHelpNext"
}
},
{
"context": "vim_mode == replace",
@@ -570,8 +558,8 @@
"backspace": "vim::UndoReplace",
"tab": "vim::Tab",
"enter": "vim::Enter",
"insert": "vim::InsertBefore",
},
"insert": "vim::InsertBefore"
}
},
{
"context": "vim_mode == waiting",
@@ -583,14 +571,14 @@
"escape": "vim::ClearOperators",
"ctrl-k": ["vim::PushDigraph", {}],
"ctrl-v": ["vim::PushLiteral", {}],
"ctrl-q": ["vim::PushLiteral", {}],
},
"ctrl-q": ["vim::PushLiteral", {}]
}
},
{
"context": "Editor && vim_mode == waiting && (vim_operator == ys || vim_operator == cs)",
"bindings": {
"escape": "vim::SwitchToNormalMode",
},
"escape": "vim::SwitchToNormalMode"
}
},
{
"context": "vim_mode == operator",
@@ -598,8 +586,8 @@
"ctrl-c": "vim::ClearOperators",
"ctrl-[": "vim::ClearOperators",
"escape": "vim::ClearOperators",
"g c": "vim::Comment",
},
"g c": "vim::Comment"
}
},
{
"context": "vim_operator == a || vim_operator == i || vim_operator == cs || vim_operator == helix_next || vim_operator == helix_previous",
@@ -636,14 +624,14 @@
"shift-i": ["vim::IndentObj", { "include_below": true }],
"f": "vim::Method",
"c": "vim::Class",
"e": "vim::EntireFile",
},
"e": "vim::EntireFile"
}
},
{
"context": "vim_operator == helix_m",
"bindings": {
"m": "vim::Matching",
},
"m": "vim::Matching"
}
},
{
"context": "vim_operator == helix_next",
@@ -660,8 +648,8 @@
"x": "editor::SelectSmallerSyntaxNode",
"d": "editor::GoToDiagnostic",
"c": "editor::GoToHunk",
"space": "vim::InsertEmptyLineBelow",
},
"space": "vim::InsertEmptyLineBelow"
}
},
{
"context": "vim_operator == helix_previous",
@@ -678,8 +666,8 @@
"x": "editor::SelectLargerSyntaxNode",
"d": "editor::GoToPreviousDiagnostic",
"c": "editor::GoToPreviousHunk",
"space": "vim::InsertEmptyLineAbove",
},
"space": "vim::InsertEmptyLineAbove"
}
},
{
"context": "vim_operator == c",
@@ -687,8 +675,8 @@
"c": "vim::CurrentLine",
"x": "vim::Exchange",
"d": "editor::Rename", // zed specific
"s": ["vim::PushChangeSurrounds", {}],
},
"s": ["vim::PushChangeSurrounds", {}]
}
},
{
"context": "vim_operator == d",
@@ -700,36 +688,36 @@
"shift-o": "git::ToggleStaged",
"p": "git::Restore", // "d p"
"u": "git::StageAndNext", // "d u"
"shift-u": "git::UnstageAndNext", // "d shift-u"
},
"shift-u": "git::UnstageAndNext" // "d shift-u"
}
},
{
"context": "vim_operator == gu",
"bindings": {
"g u": "vim::CurrentLine",
"u": "vim::CurrentLine",
},
"u": "vim::CurrentLine"
}
},
{
"context": "vim_operator == gU",
"bindings": {
"g shift-u": "vim::CurrentLine",
"shift-u": "vim::CurrentLine",
},
"shift-u": "vim::CurrentLine"
}
},
{
"context": "vim_operator == g~",
"bindings": {
"g ~": "vim::CurrentLine",
"~": "vim::CurrentLine",
},
"~": "vim::CurrentLine"
}
},
{
"context": "vim_operator == g?",
"bindings": {
"g ?": "vim::CurrentLine",
"?": "vim::CurrentLine",
},
"?": "vim::CurrentLine"
}
},
{
"context": "vim_operator == gq",
@@ -737,66 +725,66 @@
"g q": "vim::CurrentLine",
"q": "vim::CurrentLine",
"g w": "vim::CurrentLine",
"w": "vim::CurrentLine",
},
"w": "vim::CurrentLine"
}
},
{
"context": "vim_operator == y",
"bindings": {
"y": "vim::CurrentLine",
"v": "vim::PushForcedMotion",
"s": ["vim::PushAddSurrounds", {}],
},
"s": ["vim::PushAddSurrounds", {}]
}
},
{
"context": "vim_operator == ys",
"bindings": {
"s": "vim::CurrentLine",
},
"s": "vim::CurrentLine"
}
},
{
"context": "vim_operator == >",
"bindings": {
">": "vim::CurrentLine",
},
">": "vim::CurrentLine"
}
},
{
"context": "vim_operator == <",
"bindings": {
"<": "vim::CurrentLine",
},
"<": "vim::CurrentLine"
}
},
{
"context": "vim_operator == eq",
"bindings": {
"=": "vim::CurrentLine",
},
"=": "vim::CurrentLine"
}
},
{
"context": "vim_operator == sh",
"bindings": {
"!": "vim::CurrentLine",
},
"!": "vim::CurrentLine"
}
},
{
"context": "vim_operator == gc",
"bindings": {
"c": "vim::CurrentLine",
},
"c": "vim::CurrentLine"
}
},
{
"context": "vim_operator == gR",
"bindings": {
"r": "vim::CurrentLine",
"shift-r": "vim::CurrentLine",
},
"shift-r": "vim::CurrentLine"
}
},
{
"context": "vim_operator == cx",
"bindings": {
"x": "vim::CurrentLine",
"c": "vim::ClearExchange",
},
"c": "vim::ClearExchange"
}
},
{
"context": "vim_mode == literal",
@@ -838,15 +826,15 @@
"tab": ["vim::Literal", ["tab", "\u0009"]],
// zed extensions:
"backspace": ["vim::Literal", ["backspace", "\u0008"]],
"delete": ["vim::Literal", ["delete", "\u007F"]],
},
"delete": ["vim::Literal", ["delete", "\u007F"]]
}
},
{
"context": "BufferSearchBar && !in_replace",
"bindings": {
"enter": "vim::SearchSubmit",
"escape": "buffer_search::Dismiss",
},
"escape": "buffer_search::Dismiss"
}
},
{
"context": "VimControl && !menu || !Editor && !Terminal",
@@ -869,8 +857,6 @@
"ctrl-w shift-right": "workspace::SwapPaneRight",
"ctrl-w shift-up": "workspace::SwapPaneUp",
"ctrl-w shift-down": "workspace::SwapPaneDown",
"ctrl-w x": "workspace::SwapPaneAdjacent",
"ctrl-w ctrl-x": "workspace::SwapPaneAdjacent",
"ctrl-w shift-h": "workspace::MovePaneLeft",
"ctrl-w shift-l": "workspace::MovePaneRight",
"ctrl-w shift-k": "workspace::MovePaneUp",
@@ -907,19 +893,15 @@
"ctrl-w ctrl-n": "workspace::NewFileSplitHorizontal",
"ctrl-w n": "workspace::NewFileSplitHorizontal",
"g t": "vim::GoToTab",
"g shift-t": "vim::GoToPreviousTab",
},
"g shift-t": "vim::GoToPreviousTab"
}
},
{
"context": "!Editor && !Terminal",
"bindings": {
":": "command_palette::Toggle",
"g /": "pane::DeploySearch",
"] b": "pane::ActivateNextItem",
"[ b": "pane::ActivatePreviousItem",
"] shift-b": "pane::ActivateLastItem",
"[ shift-b": ["pane::ActivateItem", 0],
},
"g /": "pane::DeploySearch"
}
},
{
// netrw compatibility
@@ -969,45 +951,17 @@
"6": ["vim::Number", 6],
"7": ["vim::Number", 7],
"8": ["vim::Number", 8],
"9": ["vim::Number", 9],
},
"9": ["vim::Number", 9]
}
},
{
"context": "OutlinePanel && not_editing",
"bindings": {
"h": "outline_panel::CollapseSelectedEntry",
"j": "vim::MenuSelectNext",
"k": "vim::MenuSelectPrevious",
"down": "vim::MenuSelectNext",
"up": "vim::MenuSelectPrevious",
"l": "outline_panel::ExpandSelectedEntry",
"j": "menu::SelectNext",
"k": "menu::SelectPrevious",
"shift-g": "menu::SelectLast",
"g g": "menu::SelectFirst",
"-": "outline_panel::SelectParent",
"enter": "editor::ToggleFocus",
"/": "menu::Cancel",
"ctrl-u": "outline_panel::ScrollUp",
"ctrl-d": "outline_panel::ScrollDown",
"z t": "outline_panel::ScrollCursorTop",
"z z": "outline_panel::ScrollCursorCenter",
"z b": "outline_panel::ScrollCursorBottom",
"0": ["vim::Number", 0],
"1": ["vim::Number", 1],
"2": ["vim::Number", 2],
"3": ["vim::Number", 3],
"4": ["vim::Number", 4],
"5": ["vim::Number", 5],
"6": ["vim::Number", 6],
"7": ["vim::Number", 7],
"8": ["vim::Number", 8],
"9": ["vim::Number", 9],
},
},
{
"context": "OutlinePanel && editing",
"bindings": {
"enter": "menu::Cancel",
},
"g g": "menu::SelectFirst"
}
},
{
"context": "GitPanel && ChangesList",
@@ -1022,8 +976,8 @@
"x": "git::ToggleStaged",
"shift-x": "git::StageAll",
"g x": "git::StageRange",
"shift-u": "git::UnstageAll",
},
"shift-u": "git::UnstageAll"
}
},
{
"context": "Editor && mode == auto_height && VimControl",
@@ -1034,8 +988,8 @@
"#": null,
"*": null,
"n": null,
"shift-n": null,
},
"shift-n": null
}
},
{
"context": "Picker > Editor",
@@ -1044,29 +998,29 @@
"ctrl-u": "editor::DeleteToBeginningOfLine",
"ctrl-w": "editor::DeleteToPreviousWordStart",
"ctrl-p": "menu::SelectPrevious",
"ctrl-n": "menu::SelectNext",
},
"ctrl-n": "menu::SelectNext"
}
},
{
"context": "GitCommit > Editor && VimControl && vim_mode == normal",
"bindings": {
"ctrl-c": "menu::Cancel",
"escape": "menu::Cancel",
},
"escape": "menu::Cancel"
}
},
{
"context": "Editor && edit_prediction",
"bindings": {
// This is identical to the binding in the base keymap, but the vim bindings above to
// "vim::Tab" shadow it, so it needs to be bound again.
"tab": "editor::AcceptEditPrediction",
},
"tab": "editor::AcceptEditPrediction"
}
},
{
"context": "MessageEditor > Editor && VimControl",
"bindings": {
"enter": "agent::Chat",
},
"enter": "agent::Chat"
}
},
{
"context": "os != macos && Editor && edit_prediction_conflict",
@@ -1074,8 +1028,8 @@
// alt-l is provided as an alternative to tab/alt-tab. and will be displayed in the UI. This
// is because alt-tab may not be available, as it is often used for window switching on Linux
// and Windows.
"alt-l": "editor::AcceptEditPrediction",
},
"alt-l": "editor::AcceptEditPrediction"
}
},
{
"context": "SettingsWindow > NavigationMenu && !search",
@@ -1085,16 +1039,7 @@
"k": "settings_editor::FocusPreviousNavEntry",
"j": "settings_editor::FocusNextNavEntry",
"g g": "settings_editor::FocusFirstNavEntry",
"shift-g": "settings_editor::FocusLastNavEntry",
},
},
{
"context": "MarkdownPreview",
"bindings": {
"ctrl-u": "markdown::ScrollPageUp",
"ctrl-d": "markdown::ScrollPageDown",
"ctrl-y": "markdown::ScrollUp",
"ctrl-e": "markdown::ScrollDown",
},
},
"shift-g": "settings_editor::FocusLastNavEntry"
}
}
]

View File

@@ -1,40 +0,0 @@
{{#if language_name}}
Here's a file of {{language_name}} that the user is going to ask you to make an edit to.
{{else}}
Here's a file of text that the user is going to ask you to make an edit to.
{{/if}}
The section you'll need to rewrite is marked with <rewrite_this></rewrite_this> tags.
<document>
{{{document_content}}}
</document>
{{#if is_truncated}}
The context around the relevant section has been truncated (possibly in the middle of a line) for brevity.
{{/if}}
And here's the section to rewrite based on that prompt again for reference:
<rewrite_this>
{{{rewrite_section}}}
</rewrite_this>
{{#if diagnostic_errors}}
Below are the diagnostic errors visible to the user. If the user requests problems to be fixed, use this information, but do not try to fix these errors if the user hasn't asked you to.
{{#each diagnostic_errors}}
<diagnostic_error>
<line_number>{{line_number}}</line_number>
<error_message>{{error_message}}</error_message>
<code_content>{{code_content}}</code_content>
</diagnostic_error>
{{/each}}
{{/if}}
Only make changes that are necessary to fulfill the prompt, leave everything else as-is. All surrounding {{content_type}} will be preserved.
Start at the indentation level in the original file in the rewritten {{content_type}}.
IMPORTANT: You MUST use one of the provided tools to make the rewrite or to provide an explanation as to why the user's request cannot be fulfilled. You MUST NOT send back unstructured text. If you need to make a statement or ask a question you MUST use one of the tools to do so.
It is an error if you try to make a change that cannot be made simply by editing the rewrite_section.

File diff suppressed because it is too large

View File

@@ -8,7 +8,7 @@
"adapter": "Debugpy",
"program": "$ZED_FILE",
"request": "launch",
"cwd": "$ZED_WORKTREE_ROOT",
"cwd": "$ZED_WORKTREE_ROOT"
},
{
"label": "Debug active JavaScript file",
@@ -16,7 +16,7 @@
"program": "$ZED_FILE",
"request": "launch",
"cwd": "$ZED_WORKTREE_ROOT",
"type": "pwa-node",
"type": "pwa-node"
},
{
"label": "JavaScript debug terminal",
@@ -24,6 +24,6 @@
"request": "launch",
"cwd": "$ZED_WORKTREE_ROOT",
"console": "integratedTerminal",
"type": "pwa-node",
},
"type": "pwa-node"
}
]

View File

@@ -3,5 +3,5 @@
// For a full list of overridable settings, and general information on settings,
// see the documentation: https://zed.dev/docs/configuring-zed#settings-files
{
"lsp": {},
"lsp": {}
}

View File

@@ -47,8 +47,8 @@
// Whether to show the task line in the output of the spawned task, defaults to `true`.
"show_summary": true,
// Whether to show the command line in the output of the spawned task, defaults to `true`.
"show_command": true,
"show_command": true
// Represents the tags for inline runnable indicators, or spawning multiple tasks at once.
// "tags": []
},
}
]

View File

@@ -12,6 +12,6 @@
"theme": {
"mode": "system",
"light": "One Light",
"dark": "One Dark",
},
"dark": "One Dark"
}
}

View File

@@ -45,7 +45,6 @@
"tab.inactive_background": "#1f2127ff",
"tab.active_background": "#0d1016ff",
"search.match_background": "#5ac2fe66",
"search.active_match_background": "#ea570166",
"panel.background": "#1f2127ff",
"panel.focused_border": "#5ac1feff",
"pane.focused_border": null,
@@ -437,7 +436,6 @@
"tab.inactive_background": "#ececedff",
"tab.active_background": "#fcfcfcff",
"search.match_background": "#3b9ee566",
"search.active_match_background": "#f88b3666",
"panel.background": "#ececedff",
"panel.focused_border": "#3b9ee5ff",
"pane.focused_border": null,
@@ -829,7 +827,6 @@
"tab.inactive_background": "#353944ff",
"tab.active_background": "#242835ff",
"search.match_background": "#73cffe66",
"search.active_match_background": "#fd722b66",
"panel.background": "#353944ff",
"panel.focused_border": null,
"pane.focused_border": null,

View File

@@ -46,7 +46,6 @@
"tab.inactive_background": "#3a3735ff",
"tab.active_background": "#282828ff",
"search.match_background": "#83a59866",
"search.active_match_background": "#c09f3f66",
"panel.background": "#3a3735ff",
"panel.focused_border": "#83a598ff",
"pane.focused_border": null,
@@ -71,33 +70,33 @@
"editor.document_highlight.read_background": "#83a5981a",
"editor.document_highlight.write_background": "#92847466",
"terminal.background": "#282828ff",
"terminal.foreground": "#ebdbb2ff",
"terminal.foreground": "#fbf1c7ff",
"terminal.bright_foreground": "#fbf1c7ff",
"terminal.dim_foreground": "#766b5dff",
"terminal.dim_foreground": "#282828ff",
"terminal.ansi.black": "#282828ff",
"terminal.ansi.bright_black": "#928374ff",
"terminal.ansi.bright_black": "#73675eff",
"terminal.ansi.dim_black": "#fbf1c7ff",
"terminal.ansi.red": "#cc241dff",
"terminal.ansi.bright_red": "#fb4934ff",
"terminal.ansi.dim_red": "#8e1814ff",
"terminal.ansi.green": "#98971aff",
"terminal.ansi.bright_green": "#b8bb26ff",
"terminal.ansi.dim_green": "#6a6912ff",
"terminal.ansi.yellow": "#d79921ff",
"terminal.ansi.bright_yellow": "#fabd2fff",
"terminal.ansi.dim_yellow": "#966a17ff",
"terminal.ansi.blue": "#458588ff",
"terminal.ansi.bright_blue": "#83a598ff",
"terminal.ansi.dim_blue": "#305d5fff",
"terminal.ansi.magenta": "#b16286ff",
"terminal.ansi.bright_magenta": "#d3869bff",
"terminal.ansi.dim_magenta": "#7c455eff",
"terminal.ansi.cyan": "#689d6aff",
"terminal.ansi.bright_cyan": "#8ec07cff",
"terminal.ansi.dim_cyan": "#496e4aff",
"terminal.ansi.white": "#a89984ff",
"terminal.ansi.bright_white": "#fbf1c7ff",
"terminal.ansi.dim_white": "#766b5dff",
"terminal.ansi.red": "#fb4a35ff",
"terminal.ansi.bright_red": "#93201dff",
"terminal.ansi.dim_red": "#ffaa95ff",
"terminal.ansi.green": "#b7bb26ff",
"terminal.ansi.bright_green": "#605c1bff",
"terminal.ansi.dim_green": "#e0dc98ff",
"terminal.ansi.yellow": "#f9bd2fff",
"terminal.ansi.bright_yellow": "#91611bff",
"terminal.ansi.dim_yellow": "#fedc9bff",
"terminal.ansi.blue": "#83a598ff",
"terminal.ansi.bright_blue": "#414f4aff",
"terminal.ansi.dim_blue": "#c0d2cbff",
"terminal.ansi.magenta": "#d3869bff",
"terminal.ansi.bright_magenta": "#8e5868ff",
"terminal.ansi.dim_magenta": "#ff9ebbff",
"terminal.ansi.cyan": "#8ec07cff",
"terminal.ansi.bright_cyan": "#45603eff",
"terminal.ansi.dim_cyan": "#c7dfbdff",
"terminal.ansi.white": "#fbf1c7ff",
"terminal.ansi.bright_white": "#ffffffff",
"terminal.ansi.dim_white": "#b0a189ff",
"link_text.hover": "#83a598ff",
"version_control.added": "#b7bb26ff",
"version_control.modified": "#f9bd2fff",
@@ -453,7 +452,6 @@
"tab.inactive_background": "#393634ff",
"tab.active_background": "#1d2021ff",
"search.match_background": "#83a59866",
"search.active_match_background": "#c9653666",
"panel.background": "#393634ff",
"panel.focused_border": "#83a598ff",
"pane.focused_border": null,
@@ -478,33 +476,33 @@
"editor.document_highlight.read_background": "#83a5981a",
"editor.document_highlight.write_background": "#92847466",
"terminal.background": "#1d2021ff",
"terminal.foreground": "#ebdbb2ff",
"terminal.foreground": "#fbf1c7ff",
"terminal.bright_foreground": "#fbf1c7ff",
"terminal.dim_foreground": "#766b5dff",
"terminal.ansi.black": "#282828ff",
"terminal.ansi.bright_black": "#928374ff",
"terminal.dim_foreground": "#1d2021ff",
"terminal.ansi.black": "#1d2021ff",
"terminal.ansi.bright_black": "#73675eff",
"terminal.ansi.dim_black": "#fbf1c7ff",
"terminal.ansi.red": "#cc241dff",
"terminal.ansi.bright_red": "#fb4934ff",
"terminal.ansi.dim_red": "#8e1814ff",
"terminal.ansi.green": "#98971aff",
"terminal.ansi.bright_green": "#b8bb26ff",
"terminal.ansi.dim_green": "#6a6912ff",
"terminal.ansi.yellow": "#d79921ff",
"terminal.ansi.bright_yellow": "#fabd2fff",
"terminal.ansi.dim_yellow": "#966a17ff",
"terminal.ansi.blue": "#458588ff",
"terminal.ansi.bright_blue": "#83a598ff",
"terminal.ansi.dim_blue": "#305d5fff",
"terminal.ansi.magenta": "#b16286ff",
"terminal.ansi.bright_magenta": "#d3869bff",
"terminal.ansi.dim_magenta": "#7c455eff",
"terminal.ansi.cyan": "#689d6aff",
"terminal.ansi.bright_cyan": "#8ec07cff",
"terminal.ansi.dim_cyan": "#496e4aff",
"terminal.ansi.white": "#a89984ff",
"terminal.ansi.bright_white": "#fbf1c7ff",
"terminal.ansi.dim_white": "#766b5dff",
"terminal.ansi.red": "#fb4a35ff",
"terminal.ansi.bright_red": "#93201dff",
"terminal.ansi.dim_red": "#ffaa95ff",
"terminal.ansi.green": "#b7bb26ff",
"terminal.ansi.bright_green": "#605c1bff",
"terminal.ansi.dim_green": "#e0dc98ff",
"terminal.ansi.yellow": "#f9bd2fff",
"terminal.ansi.bright_yellow": "#91611bff",
"terminal.ansi.dim_yellow": "#fedc9bff",
"terminal.ansi.blue": "#83a598ff",
"terminal.ansi.bright_blue": "#414f4aff",
"terminal.ansi.dim_blue": "#c0d2cbff",
"terminal.ansi.magenta": "#d3869bff",
"terminal.ansi.bright_magenta": "#8e5868ff",
"terminal.ansi.dim_magenta": "#ff9ebbff",
"terminal.ansi.cyan": "#8ec07cff",
"terminal.ansi.bright_cyan": "#45603eff",
"terminal.ansi.dim_cyan": "#c7dfbdff",
"terminal.ansi.white": "#fbf1c7ff",
"terminal.ansi.bright_white": "#ffffffff",
"terminal.ansi.dim_white": "#b0a189ff",
"link_text.hover": "#83a598ff",
"version_control.added": "#b7bb26ff",
"version_control.modified": "#f9bd2fff",
@@ -860,7 +858,6 @@
"tab.inactive_background": "#3b3735ff",
"tab.active_background": "#32302fff",
"search.match_background": "#83a59866",
"search.active_match_background": "#aea85166",
"panel.background": "#3b3735ff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -885,33 +882,33 @@
"editor.document_highlight.read_background": "#83a5981a",
"editor.document_highlight.write_background": "#92847466",
"terminal.background": "#32302fff",
"terminal.foreground": "#ebdbb2ff",
"terminal.foreground": "#fbf1c7ff",
"terminal.bright_foreground": "#fbf1c7ff",
"terminal.dim_foreground": "#766b5dff",
"terminal.ansi.black": "#282828ff",
"terminal.ansi.bright_black": "#928374ff",
"terminal.dim_foreground": "#32302fff",
"terminal.ansi.black": "#32302fff",
"terminal.ansi.bright_black": "#73675eff",
"terminal.ansi.dim_black": "#fbf1c7ff",
"terminal.ansi.red": "#cc241dff",
"terminal.ansi.bright_red": "#fb4934ff",
"terminal.ansi.dim_red": "#8e1814ff",
"terminal.ansi.green": "#98971aff",
"terminal.ansi.bright_green": "#b8bb26ff",
"terminal.ansi.dim_green": "#6a6912ff",
"terminal.ansi.yellow": "#d79921ff",
"terminal.ansi.bright_yellow": "#fabd2fff",
"terminal.ansi.dim_yellow": "#966a17ff",
"terminal.ansi.blue": "#458588ff",
"terminal.ansi.bright_blue": "#83a598ff",
"terminal.ansi.dim_blue": "#305d5fff",
"terminal.ansi.magenta": "#b16286ff",
"terminal.ansi.bright_magenta": "#d3869bff",
"terminal.ansi.dim_magenta": "#7c455eff",
"terminal.ansi.cyan": "#689d6aff",
"terminal.ansi.bright_cyan": "#8ec07cff",
"terminal.ansi.dim_cyan": "#496e4aff",
"terminal.ansi.white": "#a89984ff",
"terminal.ansi.bright_white": "#fbf1c7ff",
"terminal.ansi.dim_white": "#766b5dff",
"terminal.ansi.red": "#fb4a35ff",
"terminal.ansi.bright_red": "#93201dff",
"terminal.ansi.dim_red": "#ffaa95ff",
"terminal.ansi.green": "#b7bb26ff",
"terminal.ansi.bright_green": "#605c1bff",
"terminal.ansi.dim_green": "#e0dc98ff",
"terminal.ansi.yellow": "#f9bd2fff",
"terminal.ansi.bright_yellow": "#91611bff",
"terminal.ansi.dim_yellow": "#fedc9bff",
"terminal.ansi.blue": "#83a598ff",
"terminal.ansi.bright_blue": "#414f4aff",
"terminal.ansi.dim_blue": "#c0d2cbff",
"terminal.ansi.magenta": "#d3869bff",
"terminal.ansi.bright_magenta": "#8e5868ff",
"terminal.ansi.dim_magenta": "#ff9ebbff",
"terminal.ansi.cyan": "#8ec07cff",
"terminal.ansi.bright_cyan": "#45603eff",
"terminal.ansi.dim_cyan": "#c7dfbdff",
"terminal.ansi.white": "#fbf1c7ff",
"terminal.ansi.bright_white": "#ffffffff",
"terminal.ansi.dim_white": "#b0a189ff",
"link_text.hover": "#83a598ff",
"version_control.added": "#b7bb26ff",
"version_control.modified": "#f9bd2fff",
@@ -1267,7 +1264,6 @@
"tab.inactive_background": "#ecddb4ff",
"tab.active_background": "#fbf1c7ff",
"search.match_background": "#0b667866",
"search.active_match_background": "#ba2d1166",
"panel.background": "#ecddb4ff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -1295,30 +1291,30 @@
"terminal.foreground": "#282828ff",
"terminal.bright_foreground": "#282828ff",
"terminal.dim_foreground": "#fbf1c7ff",
"terminal.ansi.black": "#fbf1c7ff",
"terminal.ansi.bright_black": "#928374ff",
"terminal.ansi.dim_black": "#7c6f64ff",
"terminal.ansi.red": "#cc241dff",
"terminal.ansi.bright_red": "#9d0006ff",
"terminal.ansi.dim_red": "#c31c16ff",
"terminal.ansi.green": "#98971aff",
"terminal.ansi.bright_green": "#79740eff",
"terminal.ansi.dim_green": "#929015ff",
"terminal.ansi.yellow": "#d79921ff",
"terminal.ansi.bright_yellow": "#b57614ff",
"terminal.ansi.dim_yellow": "#cf8e1aff",
"terminal.ansi.blue": "#458588ff",
"terminal.ansi.bright_blue": "#076678ff",
"terminal.ansi.dim_blue": "#356f77ff",
"terminal.ansi.magenta": "#b16286ff",
"terminal.ansi.bright_magenta": "#8f3f71ff",
"terminal.ansi.dim_magenta": "#a85580ff",
"terminal.ansi.cyan": "#689d6aff",
"terminal.ansi.bright_cyan": "#427b58ff",
"terminal.ansi.dim_cyan": "#5f9166ff",
"terminal.ansi.white": "#7c6f64ff",
"terminal.ansi.bright_white": "#282828ff",
"terminal.ansi.dim_white": "#282828ff",
"terminal.ansi.black": "#282828ff",
"terminal.ansi.bright_black": "#0b6678ff",
"terminal.ansi.dim_black": "#5f5650ff",
"terminal.ansi.red": "#9d0308ff",
"terminal.ansi.bright_red": "#db8b7aff",
"terminal.ansi.dim_red": "#4e1207ff",
"terminal.ansi.green": "#797410ff",
"terminal.ansi.bright_green": "#bfb787ff",
"terminal.ansi.dim_green": "#3e3a11ff",
"terminal.ansi.yellow": "#b57615ff",
"terminal.ansi.bright_yellow": "#e2b88bff",
"terminal.ansi.dim_yellow": "#5c3a12ff",
"terminal.ansi.blue": "#0b6678ff",
"terminal.ansi.bright_blue": "#8fb0baff",
"terminal.ansi.dim_blue": "#14333bff",
"terminal.ansi.magenta": "#8f3e71ff",
"terminal.ansi.bright_magenta": "#c76da0ff",
"terminal.ansi.dim_magenta": "#5c2848ff",
"terminal.ansi.cyan": "#437b59ff",
"terminal.ansi.bright_cyan": "#9fbca8ff",
"terminal.ansi.dim_cyan": "#253e2eff",
"terminal.ansi.white": "#fbf1c7ff",
"terminal.ansi.bright_white": "#ffffffff",
"terminal.ansi.dim_white": "#b0a189ff",
"link_text.hover": "#0b6678ff",
"version_control.added": "#797410ff",
"version_control.modified": "#b57615ff",
@@ -1674,7 +1670,6 @@
"tab.inactive_background": "#ecddb5ff",
"tab.active_background": "#f9f5d7ff",
"search.match_background": "#0b667866",
"search.active_match_background": "#dc351466",
"panel.background": "#ecddb5ff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -1702,30 +1697,30 @@
"terminal.foreground": "#282828ff",
"terminal.bright_foreground": "#282828ff",
"terminal.dim_foreground": "#f9f5d7ff",
"terminal.ansi.black": "#fbf1c7ff",
"terminal.ansi.bright_black": "#928374ff",
"terminal.ansi.dim_black": "#7c6f64ff",
"terminal.ansi.red": "#cc241dff",
"terminal.ansi.bright_red": "#9d0006ff",
"terminal.ansi.dim_red": "#c31c16ff",
"terminal.ansi.green": "#98971aff",
"terminal.ansi.bright_green": "#79740eff",
"terminal.ansi.dim_green": "#929015ff",
"terminal.ansi.yellow": "#d79921ff",
"terminal.ansi.bright_yellow": "#b57614ff",
"terminal.ansi.dim_yellow": "#cf8e1aff",
"terminal.ansi.blue": "#458588ff",
"terminal.ansi.bright_blue": "#076678ff",
"terminal.ansi.dim_blue": "#356f77ff",
"terminal.ansi.magenta": "#b16286ff",
"terminal.ansi.bright_magenta": "#8f3f71ff",
"terminal.ansi.dim_magenta": "#a85580ff",
"terminal.ansi.cyan": "#689d6aff",
"terminal.ansi.bright_cyan": "#427b58ff",
"terminal.ansi.dim_cyan": "#5f9166ff",
"terminal.ansi.white": "#7c6f64ff",
"terminal.ansi.bright_white": "#282828ff",
"terminal.ansi.dim_white": "#282828ff",
"terminal.ansi.black": "#282828ff",
"terminal.ansi.bright_black": "#73675eff",
"terminal.ansi.dim_black": "#f9f5d7ff",
"terminal.ansi.red": "#9d0308ff",
"terminal.ansi.bright_red": "#db8b7aff",
"terminal.ansi.dim_red": "#4e1207ff",
"terminal.ansi.green": "#797410ff",
"terminal.ansi.bright_green": "#bfb787ff",
"terminal.ansi.dim_green": "#3e3a11ff",
"terminal.ansi.yellow": "#b57615ff",
"terminal.ansi.bright_yellow": "#e2b88bff",
"terminal.ansi.dim_yellow": "#5c3a12ff",
"terminal.ansi.blue": "#0b6678ff",
"terminal.ansi.bright_blue": "#8fb0baff",
"terminal.ansi.dim_blue": "#14333bff",
"terminal.ansi.magenta": "#8f3e71ff",
"terminal.ansi.bright_magenta": "#c76da0ff",
"terminal.ansi.dim_magenta": "#5c2848ff",
"terminal.ansi.cyan": "#437b59ff",
"terminal.ansi.bright_cyan": "#9fbca8ff",
"terminal.ansi.dim_cyan": "#253e2eff",
"terminal.ansi.white": "#f9f5d7ff",
"terminal.ansi.bright_white": "#ffffffff",
"terminal.ansi.dim_white": "#b0a189ff",
"link_text.hover": "#0b6678ff",
"version_control.added": "#797410ff",
"version_control.modified": "#b57615ff",
@@ -2081,7 +2076,6 @@
"tab.inactive_background": "#ecdcb3ff",
"tab.active_background": "#f2e5bcff",
"search.match_background": "#0b667866",
"search.active_match_background": "#d7331466",
"panel.background": "#ecdcb3ff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -2109,30 +2103,30 @@
"terminal.foreground": "#282828ff",
"terminal.bright_foreground": "#282828ff",
"terminal.dim_foreground": "#f2e5bcff",
"terminal.ansi.black": "#fbf1c7ff",
"terminal.ansi.bright_black": "#928374ff",
"terminal.ansi.dim_black": "#7c6f64ff",
"terminal.ansi.red": "#cc241dff",
"terminal.ansi.bright_red": "#9d0006ff",
"terminal.ansi.dim_red": "#c31c16ff",
"terminal.ansi.green": "#98971aff",
"terminal.ansi.bright_green": "#79740eff",
"terminal.ansi.dim_green": "#929015ff",
"terminal.ansi.yellow": "#d79921ff",
"terminal.ansi.bright_yellow": "#b57614ff",
"terminal.ansi.dim_yellow": "#cf8e1aff",
"terminal.ansi.blue": "#458588ff",
"terminal.ansi.bright_blue": "#076678ff",
"terminal.ansi.dim_blue": "#356f77ff",
"terminal.ansi.magenta": "#b16286ff",
"terminal.ansi.bright_magenta": "#8f3f71ff",
"terminal.ansi.dim_magenta": "#a85580ff",
"terminal.ansi.cyan": "#689d6aff",
"terminal.ansi.bright_cyan": "#427b58ff",
"terminal.ansi.dim_cyan": "#5f9166ff",
"terminal.ansi.white": "#7c6f64ff",
"terminal.ansi.bright_white": "#282828ff",
"terminal.ansi.dim_white": "#282828ff",
"terminal.ansi.black": "#282828ff",
"terminal.ansi.bright_black": "#73675eff",
"terminal.ansi.dim_black": "#f2e5bcff",
"terminal.ansi.red": "#9d0308ff",
"terminal.ansi.bright_red": "#db8b7aff",
"terminal.ansi.dim_red": "#4e1207ff",
"terminal.ansi.green": "#797410ff",
"terminal.ansi.bright_green": "#bfb787ff",
"terminal.ansi.dim_green": "#3e3a11ff",
"terminal.ansi.yellow": "#b57615ff",
"terminal.ansi.bright_yellow": "#e2b88bff",
"terminal.ansi.dim_yellow": "#5c3a12ff",
"terminal.ansi.blue": "#0b6678ff",
"terminal.ansi.bright_blue": "#8fb0baff",
"terminal.ansi.dim_blue": "#14333bff",
"terminal.ansi.magenta": "#8f3e71ff",
"terminal.ansi.bright_magenta": "#c76da0ff",
"terminal.ansi.dim_magenta": "#5c2848ff",
"terminal.ansi.cyan": "#437b59ff",
"terminal.ansi.bright_cyan": "#9fbca8ff",
"terminal.ansi.dim_cyan": "#253e2eff",
"terminal.ansi.white": "#f2e5bcff",
"terminal.ansi.bright_white": "#ffffffff",
"terminal.ansi.dim_white": "#b0a189ff",
"link_text.hover": "#0b6678ff",
"version_control.added": "#797410ff",
"version_control.modified": "#b57615ff",

View File

@@ -45,7 +45,6 @@
"tab.inactive_background": "#2f343eff",
"tab.active_background": "#282c33ff",
"search.match_background": "#74ade866",
"search.active_match_background": "#e8af7466",
"panel.background": "#2f343eff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -68,39 +67,37 @@
"editor.active_wrap_guide": "#c8ccd41a",
"editor.document_highlight.read_background": "#74ade81a",
"editor.document_highlight.write_background": "#555a6366",
"terminal.background": "#282c34ff",
"terminal.foreground": "#abb2bfff",
"terminal.background": "#282c33ff",
"terminal.foreground": "#dce0e5ff",
"terminal.bright_foreground": "#dce0e5ff",
"terminal.dim_foreground": "#636d83ff",
"terminal.ansi.black": "#282c34ff",
"terminal.ansi.bright_black": "#636d83ff",
"terminal.ansi.dim_black": "#3b3f4aff",
"terminal.ansi.red": "#e06c75ff",
"terminal.ansi.bright_red": "#EA858Bff",
"terminal.ansi.dim_red": "#a7545aff",
"terminal.ansi.green": "#98c379ff",
"terminal.ansi.bright_green": "#AAD581ff",
"terminal.ansi.dim_green": "#6d8f59ff",
"terminal.ansi.yellow": "#e5c07bff",
"terminal.ansi.bright_yellow": "#FFD885ff",
"terminal.ansi.dim_yellow": "#b8985bff",
"terminal.ansi.blue": "#61afefff",
"terminal.ansi.bright_blue": "#85C1FFff",
"terminal.ansi.dim_blue": "#457cadff",
"terminal.ansi.magenta": "#c678ddff",
"terminal.ansi.bright_magenta": "#D398EBff",
"terminal.ansi.dim_magenta": "#8d54a0ff",
"terminal.ansi.cyan": "#56b6c2ff",
"terminal.ansi.bright_cyan": "#6ED5DEff",
"terminal.ansi.dim_cyan": "#3c818aff",
"terminal.ansi.white": "#abb2bfff",
"terminal.dim_foreground": "#282c33ff",
"terminal.ansi.black": "#282c33ff",
"terminal.ansi.bright_black": "#525561ff",
"terminal.ansi.dim_black": "#dce0e5ff",
"terminal.ansi.red": "#d07277ff",
"terminal.ansi.bright_red": "#673a3cff",
"terminal.ansi.dim_red": "#eab7b9ff",
"terminal.ansi.green": "#a1c181ff",
"terminal.ansi.bright_green": "#4d6140ff",
"terminal.ansi.dim_green": "#d1e0bfff",
"terminal.ansi.yellow": "#dec184ff",
"terminal.ansi.bright_yellow": "#e5c07bff",
"terminal.ansi.dim_yellow": "#f1dfc1ff",
"terminal.ansi.blue": "#74ade8ff",
"terminal.ansi.bright_blue": "#385378ff",
"terminal.ansi.dim_blue": "#bed5f4ff",
"terminal.ansi.magenta": "#b477cfff",
"terminal.ansi.bright_magenta": "#d6b4e4ff",
"terminal.ansi.dim_magenta": "#612a79ff",
"terminal.ansi.cyan": "#6eb4bfff",
"terminal.ansi.bright_cyan": "#3a565bff",
"terminal.ansi.dim_cyan": "#b9d9dfff",
"terminal.ansi.white": "#dce0e5ff",
"terminal.ansi.bright_white": "#fafafaff",
"terminal.ansi.dim_white": "#8f969bff",
"terminal.ansi.dim_white": "#575d65ff",
"link_text.hover": "#74ade8ff",
"version_control.added": "#27a657ff",
"version_control.modified": "#d3b020ff",
"version_control.word_added": "#2EA04859",
"version_control.word_deleted": "#78081BCC",
"version_control.deleted": "#e06c76ff",
"version_control.conflict_marker.ours": "#a1c1811a",
"version_control.conflict_marker.theirs": "#74ade81a",
@@ -449,7 +446,6 @@
"tab.inactive_background": "#ebebecff",
"tab.active_background": "#fafafaff",
"search.match_background": "#5c79e266",
"search.active_match_background": "#d0a92366",
"panel.background": "#ebebecff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -473,38 +469,36 @@
"editor.document_highlight.read_background": "#5c78e225",
"editor.document_highlight.write_background": "#a3a3a466",
"terminal.background": "#fafafaff",
"terminal.foreground": "#2a2c33ff",
"terminal.bright_foreground": "#2a2c33ff",
"terminal.dim_foreground": "#bbbbbbff",
"terminal.ansi.black": "#000000ff",
"terminal.ansi.bright_black": "#000000ff",
"terminal.ansi.dim_black": "#555555ff",
"terminal.ansi.red": "#de3e35ff",
"terminal.ansi.bright_red": "#de3e35ff",
"terminal.ansi.dim_red": "#9c2b26ff",
"terminal.ansi.green": "#3f953aff",
"terminal.ansi.bright_green": "#3f953aff",
"terminal.ansi.dim_green": "#2b6927ff",
"terminal.ansi.yellow": "#d2b67cff",
"terminal.ansi.bright_yellow": "#d2b67cff",
"terminal.ansi.dim_yellow": "#a48c5aff",
"terminal.ansi.blue": "#2f5af3ff",
"terminal.ansi.bright_blue": "#2f5af3ff",
"terminal.ansi.dim_blue": "#2140abff",
"terminal.ansi.magenta": "#950095ff",
"terminal.ansi.bright_magenta": "#a00095ff",
"terminal.ansi.dim_magenta": "#6a006aff",
"terminal.ansi.cyan": "#3f953aff",
"terminal.ansi.bright_cyan": "#3f953aff",
"terminal.ansi.dim_cyan": "#2b6927ff",
"terminal.ansi.white": "#bbbbbbff",
"terminal.foreground": "#242529ff",
"terminal.bright_foreground": "#242529ff",
"terminal.dim_foreground": "#fafafaff",
"terminal.ansi.black": "#242529ff",
"terminal.ansi.bright_black": "#747579ff",
"terminal.ansi.dim_black": "#97979aff",
"terminal.ansi.red": "#d36151ff",
"terminal.ansi.bright_red": "#f0b0a4ff",
"terminal.ansi.dim_red": "#6f312aff",
"terminal.ansi.green": "#669f59ff",
"terminal.ansi.bright_green": "#b2cfa9ff",
"terminal.ansi.dim_green": "#354d2eff",
"terminal.ansi.yellow": "#dec184ff",
"terminal.ansi.bright_yellow": "#826221ff",
"terminal.ansi.dim_yellow": "#786441ff",
"terminal.ansi.blue": "#5c78e2ff",
"terminal.ansi.bright_blue": "#b5baf2ff",
"terminal.ansi.dim_blue": "#2d3d75ff",
"terminal.ansi.magenta": "#984ea5ff",
"terminal.ansi.bright_magenta": "#cea6d3ff",
"terminal.ansi.dim_magenta": "#4b2a50ff",
"terminal.ansi.cyan": "#3a82b7ff",
"terminal.ansi.bright_cyan": "#a3bedaff",
"terminal.ansi.dim_cyan": "#254058ff",
"terminal.ansi.white": "#fafafaff",
"terminal.ansi.bright_white": "#ffffffff",
"terminal.ansi.dim_white": "#888888ff",
"terminal.ansi.dim_white": "#aaaaaaff",
"link_text.hover": "#5c78e2ff",
"version_control.added": "#27a657ff",
"version_control.modified": "#d3b020ff",
"version_control.word_added": "#2EA04859",
"version_control.word_deleted": "#F85149CC",
"version_control.deleted": "#e06c76ff",
"conflict": "#a48819ff",
"conflict.background": "#faf2e6ff",

View File

@@ -14,7 +14,6 @@ disallowed-methods = [
{ path = "std::process::Command::stderr", reason = "`smol::process::Command::from()` does not preserve stdio configuration", replacement = "smol::process::Command::stderr" },
{ path = "serde_json::from_reader", reason = "Parsing from a buffer is much slower than first reading the buffer into a Vec/String, see https://github.com/serde-rs/json/issues/160#issuecomment-253446892. Use `serde_json::from_slice` instead." },
{ path = "serde_json_lenient::from_reader", reason = "Parsing from a buffer is much slower than first reading the buffer into a Vec/String, see https://github.com/serde-rs/json/issues/160#issuecomment-253446892, Use `serde_json_lenient::from_slice` instead." },
{ path = "cocoa::foundation::NSString::alloc", reason = "NSString must be autoreleased to avoid memory leaks. Use `ns_string()` helper instead." },
]
disallowed-types = [
# { path = "std::collections::HashMap", replacement = "collections::HashMap" },

View File

@@ -46,7 +46,6 @@ url.workspace = true
util.workspace = true
uuid.workspace = true
watch.workspace = true
urlencoding.workspace = true
[dev-dependencies]
env_logger.workspace = true

File diff suppressed because it is too large

View File

@@ -20,7 +20,7 @@ impl UserMessageId {
}
pub trait AgentConnection {
fn telemetry_id(&self) -> SharedString;
fn telemetry_id(&self) -> &'static str;
fn new_thread(
self: Rc<Self>,
@@ -202,21 +202,6 @@ pub trait AgentModelSelector: 'static {
fn should_render_footer(&self) -> bool {
false
}
/// Whether this selector supports the favorites feature.
/// Only the native agent uses the model ID format that maps to settings.
fn supports_favorites(&self) -> bool {
false
}
}
/// Icon for a model in the model selector.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum AgentModelIcon {
/// A built-in icon from Zed's icon set.
Named(IconName),
/// Path to a custom SVG icon file.
Path(SharedString),
}
#[derive(Debug, Clone, PartialEq, Eq)]
@@ -224,7 +209,7 @@ pub struct AgentModelInfo {
pub id: acp::ModelId,
pub name: SharedString,
pub description: Option<SharedString>,
pub icon: Option<AgentModelIcon>,
pub icon: Option<IconName>,
}
impl From<acp::ModelInfo> for AgentModelInfo {
@@ -254,10 +239,6 @@ impl AgentModelList {
AgentModelList::Grouped(groups) => groups.is_empty(),
}
}
pub fn is_flat(&self) -> bool {
matches!(self, AgentModelList::Flat(_))
}
}
#[cfg(feature = "test-support")]
@@ -341,8 +322,8 @@ mod test_support {
}
impl AgentConnection for StubAgentConnection {
fn telemetry_id(&self) -> SharedString {
"stub".into()
fn telemetry_id(&self) -> &'static str {
"stub"
}
fn auth_methods(&self) -> &[acp::AuthMethod] {
@@ -355,7 +336,7 @@ mod test_support {
_cwd: &Path,
cx: &mut gpui::App,
) -> Task<gpui::Result<Entity<AcpThread>>> {
let session_id = acp::SessionId::new(self.sessions.lock().len().to_string());
let session_id = acp::SessionId(self.sessions.lock().len().to_string().into());
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let thread = cx.new(|cx| {
AcpThread::new(
@@ -364,12 +345,12 @@ mod test_support {
project,
action_log,
session_id.clone(),
watch::Receiver::constant(
acp::PromptCapabilities::new()
.image(true)
.audio(true)
.embedded_context(true),
),
watch::Receiver::constant(acp::PromptCapabilities {
image: true,
audio: true,
embedded_context: true,
meta: None,
}),
cx,
)
});
@@ -408,7 +389,10 @@ mod test_support {
response_tx.replace(tx);
cx.spawn(async move |_| {
let stop_reason = rx.await?;
Ok(acp::PromptResponse::new(stop_reason))
Ok(acp::PromptResponse {
stop_reason,
meta: None,
})
})
} else {
for update in self.next_prompt_updates.lock().drain(..) {
@@ -416,7 +400,7 @@ mod test_support {
let update = update.clone();
let permission_request = if let acp::SessionUpdate::ToolCall(tool_call) =
&update
&& let Some(options) = self.permission_requests.get(&tool_call.tool_call_id)
&& let Some(options) = self.permission_requests.get(&tool_call.id)
{
Some((tool_call.clone(), options.clone()))
} else {
@@ -445,7 +429,10 @@ mod test_support {
cx.spawn(async move |_| {
try_join_all(tasks).await?;
Ok(acp::PromptResponse::new(acp::StopReason::EndTurn))
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
})
})
}
}

View File

@@ -166,7 +166,7 @@ impl Diff {
}
pub fn has_revealed_range(&self, cx: &App) -> bool {
self.multibuffer().read(cx).paths().next().is_some()
self.multibuffer().read(cx).excerpt_paths().next().is_some()
}
pub fn needs_update(&self, old_text: &str, new_text: &str, cx: &App) -> bool {

View File

@@ -4,14 +4,12 @@ use file_icons::FileIcons;
use prompt_store::{PromptId, UserPromptId};
use serde::{Deserialize, Serialize};
use std::{
borrow::Cow,
fmt,
ops::RangeInclusive,
path::{Path, PathBuf},
};
use ui::{App, IconName, SharedString};
use url::Url;
use urlencoding::decode;
use util::paths::PathStyle;
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize, Hash)]
@@ -76,13 +74,11 @@ impl MentionUri {
let path = url.path();
match url.scheme() {
"file" => {
let normalized = if path_style.is_windows() {
let path = if path_style.is_windows() {
path.trim_start_matches("/")
} else {
path
};
let decoded = decode(normalized).unwrap_or(Cow::Borrowed(normalized));
let path = decoded.as_ref();
if let Some(fragment) = url.fragment() {
let line_range = parse_line_range(fragment)?;
@@ -112,7 +108,7 @@ impl MentionUri {
if let Some(thread_id) = path.strip_prefix("/agent/thread/") {
let name = single_query_param(&url, "name")?.context("Missing thread name")?;
Ok(Self::Thread {
id: acp::SessionId::new(thread_id),
id: acp::SessionId(thread_id.into()),
name,
})
} else if let Some(path) = path.strip_prefix("/agent/text-thread/") {
@@ -410,19 +406,6 @@ mod tests {
assert_eq!(parsed.to_uri().to_string(), selection_uri);
}
#[test]
fn test_parse_file_uri_with_non_ascii() {
let file_uri = uri!("file:///path/to/%E6%97%A5%E6%9C%AC%E8%AA%9E.txt");
let parsed = MentionUri::parse(file_uri, PathStyle::local()).unwrap();
match &parsed {
MentionUri::File { abs_path } => {
assert_eq!(abs_path, Path::new(path!("/path/to/日本語.txt")));
}
_ => panic!("Expected File variant"),
}
assert_eq!(parsed.to_uri().to_string(), file_uri);
}
#[test]
fn test_parse_untitled_selection_uri() {
let selection_uri = uri!("zed:///agent/untitled-buffer#L1:10");

View File

@@ -75,9 +75,11 @@ impl Terminal {
let exit_status = exit_status.map(portable_pty::ExitStatus::from);
acp::TerminalExitStatus::new()
.exit_code(exit_status.as_ref().map(|e| e.exit_code()))
.signal(exit_status.and_then(|e| e.signal().map(ToOwned::to_owned)))
acp::TerminalExitStatus {
exit_code: exit_status.as_ref().map(|e| e.exit_code()),
signal: exit_status.and_then(|e| e.signal().map(Into::into)),
meta: None,
}
})
.shared(),
}
@@ -101,19 +103,25 @@ impl Terminal {
if let Some(output) = self.output.as_ref() {
let exit_status = output.exit_status.map(portable_pty::ExitStatus::from);
acp::TerminalOutputResponse::new(
output.content.clone(),
output.original_content_len > output.content.len(),
)
.exit_status(
acp::TerminalExitStatus::new()
.exit_code(exit_status.as_ref().map(|e| e.exit_code()))
.signal(exit_status.and_then(|e| e.signal().map(ToOwned::to_owned))),
)
acp::TerminalOutputResponse {
output: output.content.clone(),
truncated: output.original_content_len > output.content.len(),
exit_status: Some(acp::TerminalExitStatus {
exit_code: exit_status.as_ref().map(|e| e.exit_code()),
signal: exit_status.and_then(|e| e.signal().map(Into::into)),
meta: None,
}),
meta: None,
}
} else {
let (current_content, original_len) = self.truncated_output(cx);
let truncated = current_content.len() < original_len;
acp::TerminalOutputResponse::new(current_content, truncated)
acp::TerminalOutputResponse {
truncated: current_content.len() < original_len,
output: current_content,
exit_status: None,
meta: None,
}
}
}
@@ -187,10 +195,8 @@ pub async fn create_terminal_entity(
Default::default()
};
// Disable pagers so agent/terminal commands don't hang behind interactive UIs
// Disables paging for `git` and hopefully other commands
env.insert("PAGER".into(), "".into());
// Override user core.pager (e.g. delta) which Git prefers over PAGER
env.insert("GIT_PAGER".into(), "cat".into());
env.extend(env_vars);
// Use remote shell or default system shell, as appropriate

View File

@@ -371,13 +371,13 @@ impl AcpTools {
syntax: cx.theme().syntax().clone(),
code_block_overflow_x_scroll: true,
code_block: StyleRefinement {
text: TextStyleRefinement {
text: Some(TextStyleRefinement {
font_family: Some(
theme_settings.buffer_font.family.clone(),
),
font_size: Some((base_size * 0.8).into()),
..Default::default()
},
}),
..Default::default()
},
..Default::default()
@@ -528,7 +528,7 @@ impl Render for AcpTools {
.with_sizing_behavior(gpui::ListSizingBehavior::Auto)
.size_full(),
)
.vertical_scrollbar_for(&connection.list_state, window, cx)
.vertical_scrollbar_for(connection.list_state.clone(), window, cx)
.into_any()
}
}

View File

@@ -6,7 +6,7 @@ use futures::{FutureExt, StreamExt, channel::mpsc};
use gpui::{
App, AppContext, AsyncApp, Context, Entity, SharedString, Subscription, Task, WeakEntity,
};
use language::{Anchor, Buffer, BufferEvent, Point, ToPoint};
use language::{Anchor, Buffer, BufferEvent, DiskState, Point, ToPoint};
use project::{Project, ProjectItem, lsp_store::OpenLspBufferHandle};
use std::{cmp, ops::Range, sync::Arc};
use text::{Edit, Patch, Rope};
@@ -150,7 +150,7 @@ impl ActionLog {
if buffer
.read(cx)
.file()
.is_some_and(|file| file.disk_state().is_deleted())
.is_some_and(|file| file.disk_state() == DiskState::Deleted)
{
// If the buffer had been edited by a tool, but it got
// deleted externally, we want to stop tracking it.
@@ -162,7 +162,7 @@ impl ActionLog {
if buffer
.read(cx)
.file()
.is_some_and(|file| !file.disk_state().is_deleted())
.is_some_and(|file| file.disk_state() != DiskState::Deleted)
{
// If the buffer had been deleted by a tool, but it got
// resurrected externally, we want to clear the edits we
@@ -769,7 +769,7 @@ impl ActionLog {
tracked.version != buffer.version
&& buffer
.file()
.is_some_and(|file| !file.disk_state().is_deleted())
.is_some_and(|file| file.disk_state() != DiskState::Deleted)
})
.map(|(buffer, _)| buffer)
}
@@ -777,7 +777,7 @@ impl ActionLog {
#[derive(Clone)]
pub struct ActionLogTelemetry {
pub agent_telemetry_id: SharedString,
pub agent_telemetry_id: &'static str,
pub session_id: Arc<str>,
}

View File

@@ -83,7 +83,6 @@ ctor.workspace = true
db = { workspace = true, "features" = ["test-support"] }
editor = { workspace = true, "features" = ["test-support"] }
env_logger.workspace = true
eval_utils.workspace = true
fs = { workspace = true, "features" = ["test-support"] }
git = { workspace = true, "features" = ["test-support"] }
gpui = { workspace = true, "features" = ["test-support"] }

View File

@@ -5,12 +5,12 @@ mod legacy_thread;
mod native_agent_server;
pub mod outline;
mod templates;
#[cfg(test)]
mod tests;
mod thread;
mod tools;
use context_server::ContextServerId;
#[cfg(test)]
mod tests;
pub use db::*;
pub use history_store::*;
pub use native_agent_server::NativeAgentServer;
@@ -18,11 +18,11 @@ pub use templates::*;
pub use thread::*;
pub use tools::*;
use acp_thread::{AcpThread, AgentModelSelector, UserMessageId};
use acp_thread::{AcpThread, AgentModelSelector};
use agent_client_protocol as acp;
use anyhow::{Context as _, Result, anyhow};
use chrono::{DateTime, Utc};
use collections::{HashMap, HashSet, IndexMap};
use collections::{HashSet, IndexMap};
use fs::Fs;
use futures::channel::{mpsc, oneshot};
use futures::future::Shared;
@@ -30,15 +30,15 @@ use futures::{StreamExt, future};
use gpui::{
App, AppContext, AsyncApp, Context, Entity, SharedString, Subscription, Task, WeakEntity,
};
use language_model::{IconOrSvg, LanguageModel, LanguageModelProvider, LanguageModelRegistry};
use language_model::{LanguageModel, LanguageModelProvider, LanguageModelRegistry};
use project::{Project, ProjectItem, ProjectPath, Worktree};
use prompt_store::{
ProjectContext, PromptStore, RULES_FILE_NAMES, RulesFileContext, UserRulesContext,
WorktreeContext,
ProjectContext, PromptStore, RulesFileContext, UserRulesContext, WorktreeContext,
};
use serde::{Deserialize, Serialize};
use settings::{LanguageModelSelection, update_settings_file};
use std::any::Any;
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::rc::Rc;
use std::sync::Arc;
@@ -51,6 +51,18 @@ pub struct ProjectSnapshot {
pub timestamp: DateTime<Utc>,
}
const RULES_FILE_NAMES: [&str; 9] = [
".rules",
".cursorrules",
".windsurfrules",
".clinerules",
".github/copilot-instructions.md",
"CLAUDE.md",
"AGENT.md",
"AGENTS.md",
"GEMINI.md",
];
pub struct RulesLoadingError {
pub message: SharedString,
}
@@ -93,7 +105,7 @@ impl LanguageModels {
fn refresh_list(&mut self, cx: &App) {
let providers = LanguageModelRegistry::global(cx)
.read(cx)
.visible_providers()
.providers()
.into_iter()
.filter(|provider| provider.is_authenticated(cx))
.collect::<Vec<_>>();
@@ -153,21 +165,18 @@ impl LanguageModels {
id: Self::model_id(model),
name: model.name().0,
description: None,
icon: Some(match provider.icon() {
IconOrSvg::Svg(path) => acp_thread::AgentModelIcon::Path(path),
IconOrSvg::Icon(name) => acp_thread::AgentModelIcon::Named(name),
}),
icon: Some(provider.icon()),
}
}
fn model_id(model: &Arc<dyn LanguageModel>) -> acp::ModelId {
acp::ModelId::new(format!("{}/{}", model.provider_id().0, model.id().0))
acp::ModelId(format!("{}/{}", model.provider_id().0, model.id().0).into())
}
fn authenticate_all_language_model_providers(cx: &mut App) -> Task<()> {
let authenticate_all_providers = LanguageModelRegistry::global(cx)
.read(cx)
.visible_providers()
.providers()
.iter()
.map(|provider| (provider.id(), provider.name(), provider.authenticate(cx)))
.collect::<Vec<_>>();
@@ -254,24 +263,12 @@ impl NativeAgent {
.await;
cx.new(|cx| {
let context_server_store = project.read(cx).context_server_store();
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(context_server_store.clone(), cx));
let mut subscriptions = vec![
cx.subscribe(&project, Self::handle_project_event),
cx.subscribe(
&LanguageModelRegistry::global(cx),
Self::handle_models_updated_event,
),
cx.subscribe(
&context_server_store,
Self::handle_context_server_store_updated,
),
cx.subscribe(
&context_server_registry,
Self::handle_context_server_registry_event,
),
];
if let Some(prompt_store) = prompt_store.as_ref() {
subscriptions.push(cx.subscribe(prompt_store, Self::handle_prompts_updated_event))
@@ -280,14 +277,16 @@ impl NativeAgent {
let (project_context_needs_refresh_tx, project_context_needs_refresh_rx) =
watch::channel(());
Self {
sessions: HashMap::default(),
sessions: HashMap::new(),
history,
project_context: cx.new(|_| project_context),
project_context_needs_refresh: project_context_needs_refresh_tx,
_maintain_project_context: cx.spawn(async move |this, cx| {
Self::maintain_project_context(this, project_context_needs_refresh_rx, cx).await
}),
context_server_registry,
context_server_registry: cx.new(|cx| {
ContextServerRegistry::new(project.read(cx).context_server_store(), cx)
}),
templates,
models: LanguageModels::new(cx),
project,
@@ -356,9 +355,6 @@ impl NativeAgent {
pending_save: Task::ready(()),
},
);
self.update_available_commands(cx);
acp_thread
}
@@ -429,7 +425,10 @@ impl NativeAgent {
.into_iter()
.flat_map(|(contents, prompt_metadata)| match contents {
Ok(contents) => Some(UserRulesContext {
uuid: prompt_metadata.id.as_user()?,
uuid: match prompt_metadata.id {
prompt_store::PromptId::User { uuid } => uuid,
prompt_store::PromptId::EditWorkflow => return None,
},
title: prompt_metadata.title.map(|title| title.to_string()),
contents,
}),
@@ -623,99 +622,6 @@ impl NativeAgent {
}
}
fn handle_context_server_store_updated(
&mut self,
_store: Entity<project::context_server_store::ContextServerStore>,
_event: &project::context_server_store::Event,
cx: &mut Context<Self>,
) {
self.update_available_commands(cx);
}
fn handle_context_server_registry_event(
&mut self,
_registry: Entity<ContextServerRegistry>,
event: &ContextServerRegistryEvent,
cx: &mut Context<Self>,
) {
match event {
ContextServerRegistryEvent::ToolsChanged => {}
ContextServerRegistryEvent::PromptsChanged => {
self.update_available_commands(cx);
}
}
}
fn update_available_commands(&self, cx: &mut Context<Self>) {
let available_commands = self.build_available_commands(cx);
for session in self.sessions.values() {
if let Some(acp_thread) = session.acp_thread.upgrade() {
acp_thread.update(cx, |thread, cx| {
thread
.handle_session_update(
acp::SessionUpdate::AvailableCommandsUpdate(
acp::AvailableCommandsUpdate::new(available_commands.clone()),
),
cx,
)
.log_err();
});
}
}
}
fn build_available_commands(&self, cx: &App) -> Vec<acp::AvailableCommand> {
let registry = self.context_server_registry.read(cx);
let mut prompt_name_counts: HashMap<&str, usize> = HashMap::default();
for context_server_prompt in registry.prompts() {
*prompt_name_counts
.entry(context_server_prompt.prompt.name.as_str())
.or_insert(0) += 1;
}
registry
.prompts()
.flat_map(|context_server_prompt| {
let prompt = &context_server_prompt.prompt;
let should_prefix = prompt_name_counts
.get(prompt.name.as_str())
.copied()
.unwrap_or(0)
> 1;
let name = if should_prefix {
format!("{}.{}", context_server_prompt.server_id, prompt.name)
} else {
prompt.name.clone()
};
let mut command = acp::AvailableCommand::new(
name,
prompt.description.clone().unwrap_or_default(),
);
match prompt.arguments.as_deref() {
Some([arg]) => {
let hint = format!("<{}>", arg.name);
command = command.input(acp::AvailableCommandInput::Unstructured(
acp::UnstructuredCommandInput::new(hint),
));
}
Some([]) | None => {}
Some(_) => {
// skip >1 argument commands since we don't support them yet
return None;
}
}
Some(command)
})
.collect()
}
pub fn load_thread(
&mut self,
id: acp::SessionId,
@@ -814,102 +720,6 @@ impl NativeAgent {
history.update(cx, |history, cx| history.reload(cx)).ok();
});
}
fn send_mcp_prompt(
&self,
message_id: UserMessageId,
session_id: agent_client_protocol::SessionId,
prompt_name: String,
server_id: ContextServerId,
arguments: HashMap<String, String>,
original_content: Vec<acp::ContentBlock>,
cx: &mut Context<Self>,
) -> Task<Result<acp::PromptResponse>> {
let server_store = self.context_server_registry.read(cx).server_store().clone();
let path_style = self.project.read(cx).path_style(cx);
cx.spawn(async move |this, cx| {
let prompt =
crate::get_prompt(&server_store, &server_id, &prompt_name, arguments, cx).await?;
let (acp_thread, thread) = this.update(cx, |this, _cx| {
let session = this
.sessions
.get(&session_id)
.context("Failed to get session")?;
anyhow::Ok((session.acp_thread.clone(), session.thread.clone()))
})??;
let mut last_is_user = true;
thread.update(cx, |thread, cx| {
thread.push_acp_user_block(
message_id,
original_content.into_iter().skip(1),
path_style,
cx,
);
})?;
for message in prompt.messages {
let context_server::types::PromptMessage { role, content } = message;
let block = mcp_message_content_to_acp_content_block(content);
match role {
context_server::types::Role::User => {
let id = acp_thread::UserMessageId::new();
acp_thread.update(cx, |acp_thread, cx| {
acp_thread.push_user_content_block_with_indent(
Some(id.clone()),
block.clone(),
true,
cx,
);
anyhow::Ok(())
})??;
thread.update(cx, |thread, cx| {
thread.push_acp_user_block(id, [block], path_style, cx);
anyhow::Ok(())
})??;
}
context_server::types::Role::Assistant => {
acp_thread.update(cx, |acp_thread, cx| {
acp_thread.push_assistant_content_block_with_indent(
block.clone(),
false,
true,
cx,
);
anyhow::Ok(())
})??;
thread.update(cx, |thread, cx| {
thread.push_acp_agent_block(block, cx);
anyhow::Ok(())
})??;
}
}
last_is_user = role == context_server::types::Role::User;
}
let response_stream = thread.update(cx, |thread, cx| {
if last_is_user {
thread.send_existing(cx)
} else {
// Resume if MCP prompt did not end with a user message
thread.resume(cx)
}
})??;
cx.update(|cx| {
NativeAgentConnection::handle_thread_events(response_stream, acp_thread, cx)
})?
.await
})
}
}
/// Wrapper struct that implements the AgentConnection trait
@@ -979,12 +789,28 @@ impl NativeAgentConnection {
}
ThreadEvent::AgentText(text) => {
acp_thread.update(cx, |thread, cx| {
thread.push_assistant_content_block(text.into(), false, cx)
thread.push_assistant_content_block(
acp::ContentBlock::Text(acp::TextContent {
text,
annotations: None,
meta: None,
}),
false,
cx,
)
})?;
}
ThreadEvent::AgentThinking(text) => {
acp_thread.update(cx, |thread, cx| {
thread.push_assistant_content_block(text.into(), true, cx)
thread.push_assistant_content_block(
acp::ContentBlock::Text(acp::TextContent {
text,
annotations: None,
meta: None,
}),
true,
cx,
)
})?;
}
ThreadEvent::ToolCallAuthorization(ToolCallAuthorization {
@@ -998,9 +824,8 @@ impl NativeAgentConnection {
)
})??;
cx.background_spawn(async move {
if let acp::RequestPermissionOutcome::Selected(
acp::SelectedPermissionOutcome { option_id, .. },
) = outcome_task.await
if let acp::RequestPermissionOutcome::Selected { option_id } =
outcome_task.await
{
response
.send(option_id)
@@ -1027,7 +852,10 @@ impl NativeAgentConnection {
}
ThreadEvent::Stop(stop_reason) => {
log::debug!("Assistant message complete: {:?}", stop_reason);
return Ok(acp::PromptResponse::new(stop_reason));
return Ok(acp::PromptResponse {
stop_reason,
meta: None,
});
}
}
}
@@ -1039,44 +867,14 @@ impl NativeAgentConnection {
}
log::debug!("Response stream completed");
anyhow::Ok(acp::PromptResponse::new(acp::StopReason::EndTurn))
anyhow::Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
})
})
}
}
struct Command<'a> {
prompt_name: &'a str,
arg_value: &'a str,
explicit_server_id: Option<&'a str>,
}
impl<'a> Command<'a> {
fn parse(prompt: &'a [acp::ContentBlock]) -> Option<Self> {
let acp::ContentBlock::Text(text_content) = prompt.first()? else {
return None;
};
let text = text_content.text.trim();
let command = text.strip_prefix('/')?;
let (command, arg_value) = command
.split_once(char::is_whitespace)
.unwrap_or((command, ""));
if let Some((server_id, prompt_name)) = command.split_once('.') {
Some(Self {
prompt_name,
arg_value,
explicit_server_id: Some(server_id),
})
} else {
Some(Self {
prompt_name: command,
arg_value,
explicit_server_id: None,
})
}
}
}
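For readers skimming the hunk above: the removed `Command` parser accepted slash-command input of the form `/prompt arg…` or `/server.prompt arg…`. A minimal standalone sketch of the same parsing rule (toy types only, not the crate's actual `Command` or `acp` API) looks like this:

```rust
// Hypothetical re-creation of the slash-command parsing rule shown in the hunk above;
// names and types are illustrative, not the repository's actual API.
#[derive(Debug, PartialEq)]
struct ParsedCommand<'a> {
    prompt_name: &'a str,
    arg_value: &'a str,
    explicit_server_id: Option<&'a str>,
}

fn parse_slash_command(text: &str) -> Option<ParsedCommand<'_>> {
    let command = text.trim().strip_prefix('/')?;
    // Everything after the first whitespace is kept as a single positional argument.
    let (command, arg_value) = command
        .split_once(char::is_whitespace)
        .unwrap_or((command, ""));
    // An optional "server." prefix selects a specific context server explicitly.
    match command.split_once('.') {
        Some((server_id, prompt_name)) => Some(ParsedCommand {
            prompt_name,
            arg_value,
            explicit_server_id: Some(server_id),
        }),
        None => Some(ParsedCommand {
            prompt_name: command,
            arg_value,
            explicit_server_id: None,
        }),
    }
}

fn main() {
    assert_eq!(
        parse_slash_command("/docs.search async traits"),
        Some(ParsedCommand {
            prompt_name: "search",
            arg_value: "async traits",
            explicit_server_id: Some("docs"),
        })
    );
    assert_eq!(
        parse_slash_command("/summarize"),
        Some(ParsedCommand {
            prompt_name: "summarize",
            arg_value: "",
            explicit_server_id: None,
        })
    );
    assert_eq!(parse_slash_command("not a command"), None);
}
```

In the removed `prompt()` path further down, that single `arg_value` was bound to the first declared argument of the matched MCP prompt, so multi-argument prompts only ever received one positional value.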
struct NativeAgentModelSelector {
session_id: acp::SessionId,
connection: NativeAgentConnection,
@@ -1167,15 +965,11 @@ impl acp_thread::AgentModelSelector for NativeAgentModelSelector {
fn should_render_footer(&self) -> bool {
true
}
fn supports_favorites(&self) -> bool {
true
}
}
impl acp_thread::AgentConnection for NativeAgentConnection {
fn telemetry_id(&self) -> SharedString {
"zed".into()
fn telemetry_id(&self) -> &'static str {
"zed"
}
fn new_thread(
@@ -1246,47 +1040,6 @@ impl acp_thread::AgentConnection for NativeAgentConnection {
let session_id = params.session_id.clone();
log::info!("Received prompt request for session: {}", session_id);
log::debug!("Prompt blocks count: {}", params.prompt.len());
if let Some(parsed_command) = Command::parse(&params.prompt) {
let registry = self.0.read(cx).context_server_registry.read(cx);
let explicit_server_id = parsed_command
.explicit_server_id
.map(|server_id| ContextServerId(server_id.into()));
if let Some(prompt) =
registry.find_prompt(explicit_server_id.as_ref(), parsed_command.prompt_name)
{
let arguments = if !parsed_command.arg_value.is_empty()
&& let Some(arg_name) = prompt
.prompt
.arguments
.as_ref()
.and_then(|args| args.first())
.map(|arg| arg.name.clone())
{
HashMap::from_iter([(arg_name, parsed_command.arg_value.to_string())])
} else {
Default::default()
};
let prompt_name = prompt.prompt.name.clone();
let server_id = prompt.server_id.clone();
return self.0.update(cx, |agent, cx| {
agent.send_mcp_prompt(
id,
session_id.clone(),
prompt_name,
server_id,
arguments,
params.prompt,
cx,
)
});
};
};
let path_style = self.0.read(cx).project.read(cx).path_style(cx);
self.run_turn(session_id, cx, move |thread, cx| {
@@ -1487,15 +1240,6 @@ impl TerminalHandle for AcpTerminalHandle {
self.terminal
.read_with(cx, |term, cx| term.current_output(cx))
}
fn kill(&self, cx: &AsyncApp) -> Result<()> {
cx.update(|cx| {
self.terminal.update(cx, |terminal, cx| {
terminal.kill(cx);
});
})?;
Ok(())
}
}
#[cfg(test)]
@@ -1630,12 +1374,10 @@ mod internal_tests {
IndexMap::from_iter([(
AgentModelGroupName("Fake".into()),
vec![AgentModelInfo {
id: acp::ModelId::new("fake/fake"),
id: acp::ModelId("fake/fake".into()),
name: "Fake".into(),
description: None,
icon: Some(acp_thread::AgentModelIcon::Named(
ui::IconName::ZedAssistant
)),
icon: Some(ui::IconName::ZedAssistant),
}]
)])
);
@@ -1693,7 +1435,7 @@ mod internal_tests {
// Select a model
let selector = connection.model_selector(&session_id).unwrap();
let model_id = acp::ModelId::new("fake/fake");
let model_id = acp::ModelId("fake/fake".into());
cx.update(|cx| selector.select_model(model_id.clone(), cx))
.await
.unwrap();
@@ -1779,14 +1521,20 @@ mod internal_tests {
thread.send(
vec![
"What does ".into(),
acp::ContentBlock::ResourceLink(acp::ResourceLink::new(
"b.md",
MentionUri::File {
acp::ContentBlock::ResourceLink(acp::ResourceLink {
name: "b.md".into(),
uri: MentionUri::File {
abs_path: path!("/a/b.md").into(),
}
.to_uri()
.to_string(),
)),
annotations: None,
description: None,
mime_type: None,
size: None,
title: None,
meta: None,
}),
" mean?".into(),
],
cx,
@@ -1885,35 +1633,3 @@ mod internal_tests {
});
}
}
fn mcp_message_content_to_acp_content_block(
content: context_server::types::MessageContent,
) -> acp::ContentBlock {
match content {
context_server::types::MessageContent::Text {
text,
annotations: _,
} => text.into(),
context_server::types::MessageContent::Image {
data,
mime_type,
annotations: _,
} => acp::ContentBlock::Image(acp::ImageContent::new(data, mime_type)),
context_server::types::MessageContent::Audio {
data,
mime_type,
annotations: _,
} => acp::ContentBlock::Audio(acp::AudioContent::new(data, mime_type)),
context_server::types::MessageContent::Resource {
resource,
annotations: _,
} => {
let mut link =
acp::ResourceLink::new(resource.uri.to_string(), resource.uri.to_string());
if let Some(mime_type) = resource.mime_type {
link = link.mime_type(mime_type);
}
acp::ContentBlock::ResourceLink(link)
}
}
}


@@ -366,7 +366,7 @@ impl ThreadsDatabase {
for (id, summary, updated_at) in rows {
threads.push(DbThreadMetadata {
id: acp::SessionId::new(id),
id: acp::SessionId(id),
title: summary.into(),
updated_at: DateTime::parse_from_rfc3339(&updated_at)?.with_timezone(&Utc),
});
@@ -424,20 +424,4 @@ impl ThreadsDatabase {
Ok(())
})
}
pub fn delete_threads(&self) -> Task<Result<()>> {
let connection = self.connection.clone();
self.executor.spawn(async move {
let connection = connection.lock();
let mut delete = connection.exec_bound::<()>(indoc! {"
DELETE FROM threads
"})?;
delete(())?;
Ok(())
})
}
}


@@ -4,7 +4,7 @@ use crate::{
};
use Role::*;
use client::{Client, UserStore};
use eval_utils::{EvalOutput, EvalOutputProcessor, OutcomeKind};
use collections::HashMap;
use fs::FakeFs;
use futures::{FutureExt, future::LocalBoxFuture};
use gpui::{AppContext, TestAppContext, Timer};
@@ -20,62 +20,16 @@ use rand::prelude::*;
use reqwest_client::ReqwestClient;
use serde_json::json;
use std::{
cmp::Reverse,
fmt::{self, Display},
io::Write as _,
path::Path,
str::FromStr,
sync::mpsc,
time::Duration,
};
use util::path;
#[derive(Default, Clone, Debug)]
struct EditAgentOutputProcessor {
mismatched_tag_threshold: f32,
cumulative_tags: usize,
cumulative_mismatched_tags: usize,
eval_outputs: Vec<EvalOutput<EditEvalMetadata>>,
}
fn mismatched_tag_threshold(mismatched_tag_threshold: f32) -> EditAgentOutputProcessor {
EditAgentOutputProcessor {
mismatched_tag_threshold,
cumulative_tags: 0,
cumulative_mismatched_tags: 0,
eval_outputs: Vec::new(),
}
}
#[derive(Clone, Debug)]
struct EditEvalMetadata {
tags: usize,
mismatched_tags: usize,
}
impl EvalOutputProcessor for EditAgentOutputProcessor {
type Metadata = EditEvalMetadata;
fn process(&mut self, output: &EvalOutput<Self::Metadata>) {
if matches!(output.outcome, OutcomeKind::Passed | OutcomeKind::Failed) {
self.cumulative_mismatched_tags += output.metadata.mismatched_tags;
self.cumulative_tags += output.metadata.tags;
self.eval_outputs.push(output.clone());
}
}
fn assert(&mut self) {
let mismatched_tag_ratio =
self.cumulative_mismatched_tags as f32 / self.cumulative_tags as f32;
if mismatched_tag_ratio > self.mismatched_tag_threshold {
for eval_output in &self.eval_outputs {
println!("{}", eval_output.data);
}
panic!(
"Too many mismatched tags: {:?}",
self.cumulative_mismatched_tags
);
}
}
}
#[test]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_extract_handle_command_output() {
@@ -101,19 +55,22 @@ fn eval_extract_handle_command_output() {
include_str!("evals/fixtures/extract_handle_command_output/possible-07.diff"),
];
let edit_description = "Extract `handle_command_output` method from `run_git_blame`.";
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.95,
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the `{input_file_path}` file and extract a method in
the final stanza of `run_git_blame` to deal with command failures,
call it `handle_command_output` and take the std::process::Output as the only parameter.
Do not document the method and do not add any comments.
Read the `{input_file_path}` file and extract a method in
the final stanza of `run_git_blame` to deal with command failures,
call it `handle_command_output` and take the std::process::Output as the only parameter.
Do not document the method and do not add any comments.
Add it right next to `run_git_blame` and copy it verbatim from `run_git_blame`.
"})],
Add it right next to `run_git_blame` and copy it verbatim from `run_git_blame`.
"})],
),
message(
Assistant,
@@ -145,9 +102,9 @@ fn eval_extract_handle_command_output() {
),
],
Some(input_file_content.into()),
EvalAssertion::assert_diff_any(possible_diffs.clone()),
))
});
EvalAssertion::assert_diff_any(possible_diffs),
),
);
}
#[test]
@@ -165,16 +122,18 @@ fn eval_delete_run_git_blame() {
let input_file_content = include_str!("evals/fixtures/delete_run_git_blame/before.rs");
let output_file_content = include_str!("evals/fixtures/delete_run_git_blame/after.rs");
let edit_description = "Delete the `run_git_blame` function.";
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.95,
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the `{input_file_path}` file and delete `run_git_blame`. Just that
one function, not its usages.
"})],
Read the `{input_file_path}` file and delete `run_git_blame`. Just that
one function, not its usages.
"})],
),
message(
Assistant,
@@ -207,8 +166,8 @@ fn eval_delete_run_git_blame() {
],
Some(input_file_content.into()),
EvalAssertion::assert_eq(output_file_content),
))
});
),
);
}
#[test]
@@ -226,16 +185,18 @@ fn eval_translate_doc_comments() {
let input_file_path = "root/canvas.rs";
let input_file_content = include_str!("evals/fixtures/translate_doc_comments/before.rs");
let edit_description = "Translate all doc comments to Italian";
eval_utils::eval(200, 1., mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
200,
1.,
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the {input_file_path} file and edit it (without overwriting it),
translating all the doc comments to italian.
"})],
Read the {input_file_path} file and edit it (without overwriting it),
translating all the doc comments to italian.
"})],
),
message(
Assistant,
@@ -268,8 +229,8 @@ fn eval_translate_doc_comments() {
],
Some(input_file_content.into()),
EvalAssertion::judge_diff("Doc comments were translated to Italian"),
))
});
),
);
}
#[test]
@@ -288,31 +249,33 @@ fn eval_use_wasi_sdk_in_compile_parser_to_wasm() {
let input_file_content =
include_str!("evals/fixtures/use_wasi_sdk_in_compile_parser_to_wasm/before.rs");
let edit_description = "Update compile_parser_to_wasm to use wasi-sdk instead of emscripten";
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.95,
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the `{input_file_path}` file and change `compile_parser_to_wasm` to use `wasi-sdk` instead of emscripten.
Use `ureq` to download the SDK for the current platform and architecture.
Extract the archive into a sibling of `lib` inside the `tree-sitter` directory in the cache_dir.
Compile the parser to wasm using the `bin/clang` executable (or `bin/clang.exe` on windows)
that's inside of the archive.
Don't re-download the SDK if that executable already exists.
Read the `{input_file_path}` file and change `compile_parser_to_wasm` to use `wasi-sdk` instead of emscripten.
Use `ureq` to download the SDK for the current platform and architecture.
Extract the archive into a sibling of `lib` inside the `tree-sitter` directory in the cache_dir.
Compile the parser to wasm using the `bin/clang` executable (or `bin/clang.exe` on windows)
that's inside of the archive.
Don't re-download the SDK if that executable already exists.
Use these clang flags: -fPIC -shared -Os -Wl,--export=tree_sitter_{{language_name}}
Use these clang flags: -fPIC -shared -Os -Wl,--export=tree_sitter_{{language_name}}
Here are the available wasi-sdk assets:
- wasi-sdk-25.0-x86_64-macos.tar.gz
- wasi-sdk-25.0-arm64-macos.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-windows.tar.gz
"})],
Here are the available wasi-sdk assets:
- wasi-sdk-25.0-x86_64-macos.tar.gz
- wasi-sdk-25.0-arm64-macos.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-windows.tar.gz
"})],
),
message(
Assistant,
@@ -389,11 +352,11 @@ fn eval_use_wasi_sdk_in_compile_parser_to_wasm() {
],
Some(input_file_content.into()),
EvalAssertion::judge_diff(indoc! {"
- The compile_parser_to_wasm method has been changed to use wasi-sdk
- ureq is used to download the SDK for current platform and architecture
"}),
))
});
- The compile_parser_to_wasm method has been changed to use wasi-sdk
- ureq is used to download the SDK for current platform and architecture
"}),
),
);
}
#[test]
@@ -417,8 +380,11 @@ fn eval_disable_cursor_blinking() {
include_str!("evals/fixtures/disable_cursor_blinking/possible-03.diff"),
include_str!("evals/fixtures/disable_cursor_blinking/possible-04.diff"),
];
eval_utils::eval(100, 0.51, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.51,
0.05,
EvalInput::from_conversation(
vec![
message(User, [text("Let's research how to cursor blinking works.")]),
message(
@@ -455,10 +421,10 @@ fn eval_disable_cursor_blinking() {
message(
User,
[text(indoc! {"
Comment out the lines that interact with the BlinkManager.
Keep the outer `update` blocks, but comments everything that's inside (including if statements).
Don't add additional comments.
"})],
Comment out the lines that interact with the BlinkManager.
Keep the outer `update` blocks, but comments everything that's inside (including if statements).
Don't add additional comments.
"})],
),
message(
Assistant,
@@ -474,9 +440,9 @@ fn eval_disable_cursor_blinking() {
),
],
Some(input_file_content.into()),
EvalAssertion::assert_diff_any(possible_diffs.clone()),
))
});
EvalAssertion::assert_diff_any(possible_diffs),
),
);
}
#[test]
@@ -501,16 +467,20 @@ fn eval_from_pixels_constructor() {
let input_file_path = "root/canvas.rs";
let input_file_content = include_str!("evals/fixtures/from_pixels_constructor/before.rs");
let edit_description = "Implement from_pixels constructor and add tests.";
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.25), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.95,
// For whatever reason, this eval produces more mismatched tags.
// Increasing for now, let's see if we can bring this down.
0.25,
EvalInput::from_conversation(
vec![
message(
User,
[text(indoc! {"
Introduce a new `from_pixels` constructor in Canvas and
also add tests for it in the same file.
"})],
Introduce a new `from_pixels` constructor in Canvas and
also add tests for it in the same file.
"})],
),
message(
Assistant,
@@ -575,92 +545,92 @@ fn eval_from_pixels_constructor() {
"tool_4",
"grep",
indoc! {"
Found 6 matches:
Found 6 matches:
## Matches in font-kit/src/loaders/core_text.rs
## Matches in font-kit/src/loaders/core_text.rs
### mod test L926-936
```
mod test {
use super::Font;
use crate::properties::{Stretch, Weight};
### mod test L926-936
```
mod test {
use super::Font;
use crate::properties::{Stretch, Weight};
#[cfg(feature = \"source\")]
use crate::source::SystemSource;
#[cfg(feature = \"source\")]
use crate::source::SystemSource;
static TEST_FONT_POSTSCRIPT_NAME: &'static str = \"ArialMT\";
static TEST_FONT_POSTSCRIPT_NAME: &'static str = \"ArialMT\";
#[cfg(feature = \"source\")]
#[test]
```
#[cfg(feature = \"source\")]
#[test]
```
55 lines remaining in ancestor node. Read the file to see all.
55 lines remaining in ancestor node. Read the file to see all.
### mod test L947-951
```
}
### mod test L947-951
```
}
#[test]
fn test_core_text_to_css_font_weight() {
// Exact matches
```
#[test]
fn test_core_text_to_css_font_weight() {
// Exact matches
```
### mod test L959-963
```
}
### mod test L959-963
```
}
#[test]
fn test_core_text_to_css_font_stretch() {
// Exact matches
```
#[test]
fn test_core_text_to_css_font_stretch() {
// Exact matches
```
## Matches in font-kit/src/loaders/freetype.rs
## Matches in font-kit/src/loaders/freetype.rs
### mod test L1238-1248
```
mod test {
use crate::loaders::freetype::Font;
### mod test L1238-1248
```
mod test {
use crate::loaders::freetype::Font;
static PCF_FONT_PATH: &str = \"resources/tests/times-roman-pcf/timR12.pcf\";
static PCF_FONT_POSTSCRIPT_NAME: &str = \"Times-Roman\";
static PCF_FONT_PATH: &str = \"resources/tests/times-roman-pcf/timR12.pcf\";
static PCF_FONT_POSTSCRIPT_NAME: &str = \"Times-Roman\";
#[test]
fn get_pcf_postscript_name() {
let font = Font::from_path(PCF_FONT_PATH, 0).unwrap();
assert_eq!(font.postscript_name().unwrap(), PCF_FONT_POSTSCRIPT_NAME);
}
```
#[test]
fn get_pcf_postscript_name() {
let font = Font::from_path(PCF_FONT_PATH, 0).unwrap();
assert_eq!(font.postscript_name().unwrap(), PCF_FONT_POSTSCRIPT_NAME);
}
```
1 lines remaining in ancestor node. Read the file to see all.
1 lines remaining in ancestor node. Read the file to see all.
## Matches in font-kit/src/sources/core_text.rs
## Matches in font-kit/src/sources/core_text.rs
### mod test L265-275
```
mod test {
use crate::properties::{Stretch, Weight};
### mod test L265-275
```
mod test {
use crate::properties::{Stretch, Weight};
#[test]
fn test_css_to_core_text_font_weight() {
// Exact matches
assert_eq!(super::css_to_core_text_font_weight(Weight(100.0)), -0.7);
assert_eq!(super::css_to_core_text_font_weight(Weight(400.0)), 0.0);
assert_eq!(super::css_to_core_text_font_weight(Weight(700.0)), 0.4);
assert_eq!(super::css_to_core_text_font_weight(Weight(900.0)), 0.8);
#[test]
fn test_css_to_core_text_font_weight() {
// Exact matches
assert_eq!(super::css_to_core_text_font_weight(Weight(100.0)), -0.7);
assert_eq!(super::css_to_core_text_font_weight(Weight(400.0)), 0.0);
assert_eq!(super::css_to_core_text_font_weight(Weight(700.0)), 0.4);
assert_eq!(super::css_to_core_text_font_weight(Weight(900.0)), 0.8);
```
```
27 lines remaining in ancestor node. Read the file to see all.
27 lines remaining in ancestor node. Read the file to see all.
### mod test L278-282
```
}
### mod test L278-282
```
}
#[test]
fn test_css_to_core_text_font_stretch() {
// Exact matches
```
"},
#[test]
fn test_css_to_core_text_font_stretch() {
// Exact matches
```
"},
)],
),
message(
@@ -678,11 +648,11 @@ fn eval_from_pixels_constructor() {
],
Some(input_file_content.into()),
EvalAssertion::judge_diff(indoc! {"
- The diff contains a new `from_pixels` constructor
- The diff contains new tests for the `from_pixels` constructor
"}),
))
});
- The diff contains a new `from_pixels` constructor
- The diff contains new tests for the `from_pixels` constructor
"}),
),
);
}
#[test]
@@ -700,9 +670,11 @@ fn eval_zode() {
let input_file_path = "root/zode.py";
let input_content = None;
let edit_description = "Create the main Zode CLI script";
eval_utils::eval(50, 1., mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
50,
1.,
0.05,
EvalInput::from_conversation(
vec![
message(User, [text(include_str!("evals/fixtures/zode/prompt.md"))]),
message(
@@ -761,7 +733,7 @@ fn eval_zode() {
],
),
],
input_content.clone(),
input_content,
EvalAssertion::new(async move |sample, _, _cx| {
let invalid_starts = [' ', '`', '\n'];
let mut message = String::new();
@@ -786,8 +758,8 @@ fn eval_zode() {
})
}
}),
))
});
),
);
}
#[test]
@@ -805,17 +777,19 @@ fn eval_add_overwrite_test() {
let input_file_path = "root/action_log.rs";
let input_file_content = include_str!("evals/fixtures/add_overwrite_test/before.rs");
let edit_description = "Add a new test for overwriting a file in action_log.rs";
eval_utils::eval(200, 0.5, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
200,
0.5, // TODO: make this eval better
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(indoc! {"
Introduce a new test in `action_log.rs` to test overwriting a file.
That is, a file already exists, but we call `buffer_created` as if the file were new.
Take inspiration from all the other tests in the file.
"})],
Introduce a new test in `action_log.rs` to test overwriting a file.
That is, a file already exists, but we call `buffer_created` as if the file were new.
Take inspiration from all the other tests in the file.
"})],
),
message(
Assistant,
@@ -835,81 +809,81 @@ fn eval_add_overwrite_test() {
"tool_1",
"read_file",
indoc! {"
pub struct ActionLog [L13-20]
tracked_buffers [L15]
edited_since_project_diagnostics_check [L17]
project [L19]
impl ActionLog [L22-498]
pub fn new [L24-30]
pub fn project [L32-34]
pub fn checked_project_diagnostics [L37-39]
pub fn has_edited_files_since_project_diagnostics_check [L42-44]
fn track_buffer_internal [L46-101]
fn handle_buffer_event [L103-116]
fn handle_buffer_edited [L118-123]
fn handle_buffer_file_changed [L125-158]
async fn maintain_diff [L160-264]
pub fn buffer_read [L267-269]
pub fn buffer_created [L272-276]
pub fn buffer_edited [L279-287]
pub fn will_delete_buffer [L289-304]
pub fn keep_edits_in_range [L306-364]
pub fn reject_edits_in_ranges [L366-459]
pub fn keep_all_edits [L461-473]
pub fn changed_buffers [L476-482]
pub fn stale_buffers [L485-497]
fn apply_non_conflicting_edits [L500-561]
fn diff_snapshots [L563-585]
fn point_to_row_edit [L587-614]
enum ChangeAuthor [L617-620]
User [L618]
Agent [L619]
enum TrackedBufferStatus [L623-627]
Created [L624]
Modified [L625]
Deleted [L626]
struct TrackedBuffer [L629-641]
buffer [L630]
base_text [L631]
unreviewed_changes [L632]
status [L633]
version [L634]
diff [L635]
snapshot [L636]
diff_update [L637]
_open_lsp_handle [L638]
_maintain_diff [L639]
_subscription [L640]
impl TrackedBuffer [L643-657]
fn has_changes [L644-650]
fn schedule_diff_update [L652-656]
pub struct ChangedBuffer [L659-661]
pub diff [L660]
mod tests [L664-1574]
fn init_logger [L678-682]
fn init_test [L684-691]
async fn test_keep_edits [L694-769]
async fn test_deletions [L772-854]
async fn test_overlapping_user_edits [L857-951]
async fn test_creating_files [L954-1010]
async fn test_deleting_files [L1013-1120]
async fn test_reject_edits [L1123-1255]
async fn test_reject_multiple_edits [L1258-1331]
async fn test_reject_deleted_file [L1334-1388]
async fn test_reject_created_file [L1391-1443]
async fn test_random_diffs [L1446-1535]
fn quiesce [L1510-1534]
struct HunkStatus [L1538-1542]
range [L1539]
diff_status [L1540]
old_text [L1541]
fn unreviewed_hunks [L1544-1573]
pub struct ActionLog [L13-20]
tracked_buffers [L15]
edited_since_project_diagnostics_check [L17]
project [L19]
impl ActionLog [L22-498]
pub fn new [L24-30]
pub fn project [L32-34]
pub fn checked_project_diagnostics [L37-39]
pub fn has_edited_files_since_project_diagnostics_check [L42-44]
fn track_buffer_internal [L46-101]
fn handle_buffer_event [L103-116]
fn handle_buffer_edited [L118-123]
fn handle_buffer_file_changed [L125-158]
async fn maintain_diff [L160-264]
pub fn buffer_read [L267-269]
pub fn buffer_created [L272-276]
pub fn buffer_edited [L279-287]
pub fn will_delete_buffer [L289-304]
pub fn keep_edits_in_range [L306-364]
pub fn reject_edits_in_ranges [L366-459]
pub fn keep_all_edits [L461-473]
pub fn changed_buffers [L476-482]
pub fn stale_buffers [L485-497]
fn apply_non_conflicting_edits [L500-561]
fn diff_snapshots [L563-585]
fn point_to_row_edit [L587-614]
enum ChangeAuthor [L617-620]
User [L618]
Agent [L619]
enum TrackedBufferStatus [L623-627]
Created [L624]
Modified [L625]
Deleted [L626]
struct TrackedBuffer [L629-641]
buffer [L630]
base_text [L631]
unreviewed_changes [L632]
status [L633]
version [L634]
diff [L635]
snapshot [L636]
diff_update [L637]
_open_lsp_handle [L638]
_maintain_diff [L639]
_subscription [L640]
impl TrackedBuffer [L643-657]
fn has_changes [L644-650]
fn schedule_diff_update [L652-656]
pub struct ChangedBuffer [L659-661]
pub diff [L660]
mod tests [L664-1574]
fn init_logger [L678-682]
fn init_test [L684-691]
async fn test_keep_edits [L694-769]
async fn test_deletions [L772-854]
async fn test_overlapping_user_edits [L857-951]
async fn test_creating_files [L954-1010]
async fn test_deleting_files [L1013-1120]
async fn test_reject_edits [L1123-1255]
async fn test_reject_multiple_edits [L1258-1331]
async fn test_reject_deleted_file [L1334-1388]
async fn test_reject_created_file [L1391-1443]
async fn test_random_diffs [L1446-1535]
fn quiesce [L1510-1534]
struct HunkStatus [L1538-1542]
range [L1539]
diff_status [L1540]
old_text [L1541]
fn unreviewed_hunks [L1544-1573]
Showing symbols 1-69 (total symbols: 69)
Showing symbols 1-69 (total symbols: 69)
Using the line numbers in this outline, you can call this tool again while specifying
the start_line and end_line fields to see the implementations of symbols in the outline.
"},
Using the line numbers in this outline, you can call this tool again while specifying
the start_line and end_line fields to see the implementations of symbols in the outline.
"},
)],
),
message(
@@ -989,17 +963,17 @@ fn eval_add_overwrite_test() {
Assistant,
[
text(indoc! {"
Based on my understanding of the existing tests and code, I'll now create a new test for overwriting a file. In this test, we'll:
Based on my understanding of the existing tests and code, I'll now create a new test for overwriting a file. In this test, we'll:
1. Create a file
2. Open a buffer for it
3. Track it in the action log
4. Modify the buffer
5. Call `buffer_created` again as if the file were new
6. Check that the changes are properly tracked
1. Create a file
2. Open a buffer for it
3. Track it in the action log
4. Modify the buffer
5. Call `buffer_created` again as if the file were new
6. Check that the changes are properly tracked
Let's write the test:
"}),
Let's write the test:
"}),
tool_use(
"tool_5",
"edit_file",
@@ -1016,8 +990,8 @@ fn eval_add_overwrite_test() {
EvalAssertion::judge_diff(
"A new test for overwritten files was created, without changing any previous test",
),
))
});
),
);
}
#[test]
@@ -1042,18 +1016,20 @@ fn eval_create_empty_file() {
let input_file_content = None;
let expected_output_content = String::new();
eval_utils::eval(100, 0.99, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.99,
0.05,
EvalInput::from_conversation(
vec![
message(User, [text("Create a second empty todo file ")]),
message(
Assistant,
[
text(formatdoc! {"
I'll help you create a second empty todo file.
First, let me examine the project structure to see if there's already a todo file, which will help me determine the appropriate name and location for the second one.
"}),
I'll help you create a second empty todo file.
First, let me examine the project structure to see if there's already a todo file, which will help me determine the appropriate name and location for the second one.
"}),
tool_use(
"toolu_01GAF8TtsgpjKxCr8fgQLDgR",
"list_directory",
@@ -1075,8 +1051,8 @@ fn eval_create_empty_file() {
Assistant,
[
text(formatdoc! {"
I can see there's already a `TODO` file in the `root` directory. Let me create a second empty todo file called `TODO3` in the same directory:
"}),
I can see there's already a `TODO` file in the `root` directory. Let me create a second empty todo file called `TODO3` in the same directory:
"}),
tool_use(
"toolu_01Tb3iQ9griqSYMmVuykQPWU",
"edit_file",
@@ -1089,12 +1065,12 @@ fn eval_create_empty_file() {
],
),
],
input_file_content.clone(),
input_file_content,
// Bad behavior is to write something like
// "I'll create an empty TODO3 file as requested."
EvalAssertion::assert_eq(expected_output_content.clone()),
))
});
EvalAssertion::assert_eq(expected_output_content),
),
);
}
fn message(
@@ -1336,45 +1312,115 @@ impl EvalAssertion {
}
}
fn run_eval(eval: EvalInput) -> eval_utils::EvalOutput<EditEvalMetadata> {
let dispatcher = gpui::TestDispatcher::new(StdRng::from_os_rng());
let mut cx = TestAppContext::build(dispatcher, None);
let result = cx.executor().block_test(async {
let test = EditAgentTest::new(&mut cx).await;
test.eval(eval, &mut cx).await
});
cx.quit();
match result {
Ok(output) => eval_utils::EvalOutput {
data: output.to_string(),
outcome: if output.assertion.score < 80 {
eval_utils::OutcomeKind::Failed
} else {
eval_utils::OutcomeKind::Passed
},
metadata: EditEvalMetadata {
tags: output.sample.edit_output.parser_metrics.tags,
mismatched_tags: output.sample.edit_output.parser_metrics.mismatched_tags,
},
},
Err(e) => eval_utils::EvalOutput {
data: format!("{e:?}"),
outcome: eval_utils::OutcomeKind::Error,
metadata: EditEvalMetadata {
tags: 0,
mismatched_tags: 0,
},
},
fn eval(
iterations: usize,
expected_pass_ratio: f32,
mismatched_tag_threshold: f32,
mut eval: EvalInput,
) {
let mut evaluated_count = 0;
let mut failed_count = 0;
report_progress(evaluated_count, failed_count, iterations);
let (tx, rx) = mpsc::channel();
// Cache the last message in the conversation, and run one instance of the eval so that
// all the next ones are cached.
eval.conversation.last_mut().unwrap().cache = true;
run_eval(eval.clone(), tx.clone());
let executor = gpui::background_executor();
let semaphore = Arc::new(smol::lock::Semaphore::new(32));
for _ in 1..iterations {
let eval = eval.clone();
let tx = tx.clone();
let semaphore = semaphore.clone();
executor
.spawn(async move {
let _guard = semaphore.acquire().await;
run_eval(eval, tx)
})
.detach();
}
drop(tx);
let mut failed_evals = HashMap::default();
let mut errored_evals = HashMap::default();
let mut eval_outputs = Vec::new();
let mut cumulative_parser_metrics = EditParserMetrics::default();
while let Ok(output) = rx.recv() {
match output {
Ok(output) => {
cumulative_parser_metrics += output.sample.edit_output.parser_metrics.clone();
eval_outputs.push(output.clone());
if output.assertion.score < 80 {
failed_count += 1;
failed_evals
.entry(output.sample.text_after.clone())
.or_insert(Vec::new())
.push(output);
}
}
Err(error) => {
failed_count += 1;
*errored_evals.entry(format!("{:?}", error)).or_insert(0) += 1;
}
}
evaluated_count += 1;
report_progress(evaluated_count, failed_count, iterations);
}
let actual_pass_ratio = (iterations - failed_count) as f32 / iterations as f32;
println!("Actual pass ratio: {}\n", actual_pass_ratio);
if actual_pass_ratio < expected_pass_ratio {
let mut errored_evals = errored_evals.into_iter().collect::<Vec<_>>();
errored_evals.sort_by_key(|(_, count)| Reverse(*count));
for (error, count) in errored_evals {
println!("Eval errored {} times. Error: {}", count, error);
}
let mut failed_evals = failed_evals.into_iter().collect::<Vec<_>>();
failed_evals.sort_by_key(|(_, evals)| Reverse(evals.len()));
for (_buffer_output, failed_evals) in failed_evals {
let eval_output = failed_evals.first().unwrap();
println!("Eval failed {} times", failed_evals.len());
println!("{}", eval_output);
}
panic!(
"Actual pass ratio: {}\nExpected pass ratio: {}",
actual_pass_ratio, expected_pass_ratio
);
}
let mismatched_tag_ratio =
cumulative_parser_metrics.mismatched_tags as f32 / cumulative_parser_metrics.tags as f32;
if mismatched_tag_ratio > mismatched_tag_threshold {
for eval_output in eval_outputs {
println!("{}", eval_output);
}
panic!("Too many mismatched tags: {:?}", cumulative_parser_metrics);
}
}
fn run_eval(eval: EvalInput, tx: mpsc::Sender<Result<EvalOutput>>) {
let dispatcher = gpui::TestDispatcher::new(StdRng::from_os_rng());
let mut cx = TestAppContext::build(dispatcher, None);
let output = cx.executor().block_test(async {
let test = EditAgentTest::new(&mut cx).await;
test.eval(eval, &mut cx).await
});
tx.send(output).unwrap();
}
#[derive(Clone)]
struct EditEvalOutput {
struct EvalOutput {
sample: EvalSample,
assertion: EvalAssertionOutcome,
}
impl Display for EditEvalOutput {
impl Display for EvalOutput {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
writeln!(f, "Score: {:?}", self.assertion.score)?;
if let Some(message) = self.assertion.message.as_ref() {
@@ -1393,6 +1439,22 @@ impl Display for EditEvalOutput {
}
}
fn report_progress(evaluated_count: usize, failed_count: usize, iterations: usize) {
let passed_count = evaluated_count - failed_count;
let passed_ratio = if evaluated_count == 0 {
0.0
} else {
passed_count as f64 / evaluated_count as f64
};
print!(
"\r\x1b[KEvaluated {}/{} ({:.2}% passed)",
evaluated_count,
iterations,
passed_ratio * 100.0
);
std::io::stdout().flush().unwrap();
}
struct EditAgentTest {
agent: EditAgent,
project: Entity<Project>,
@@ -1488,10 +1550,7 @@ impl EditAgentTest {
})
}
async fn eval(&self, mut eval: EvalInput, cx: &mut TestAppContext) -> Result<EditEvalOutput> {
// Make sure the last message in the conversation is cached.
eval.conversation.last_mut().unwrap().cache = true;
async fn eval(&self, eval: EvalInput, cx: &mut TestAppContext) -> Result<EvalOutput> {
let path = self
.project
.read_with(cx, |project, cx| {
@@ -1597,7 +1656,7 @@ impl EditAgentTest {
.run(&sample, self.judge_model.clone(), cx)
.await?;
Ok(EditEvalOutput { assertion, sample })
Ok(EvalOutput { assertion, sample })
}
}
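The inlined `eval` harness above fans iterations out on a background executor, bounds concurrency with a semaphore of 32, aggregates outcomes over an `mpsc` channel, and asserts a pass ratio at the end. A self-contained sketch of that shape, using plain `std` threads in place of the GPUI executor and a toy outcome type (thresholds and failure pattern are arbitrary, not the project's):

```rust
use std::io::Write as _;
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::{Arc, mpsc};
use std::thread;

#[derive(Clone, Debug)]
enum Outcome {
    Passed,
    Failed(String),
}

fn run_one_trial(i: usize) -> Outcome {
    // Stand-in for a single eval run; fails deterministically on every tenth trial.
    if i % 10 == 9 {
        Outcome::Failed(format!("trial {i} scored below threshold"))
    } else {
        Outcome::Passed
    }
}

fn eval(iterations: usize, expected_pass_ratio: f32, max_concurrency: usize) {
    let (tx, rx) = mpsc::channel();
    // A fixed pool of workers pulling indices from a shared counter bounds concurrency,
    // playing the role of the semaphore in the harness above.
    let next = Arc::new(AtomicUsize::new(0));
    let mut workers = Vec::new();
    for _ in 0..max_concurrency.min(iterations) {
        let tx = tx.clone();
        let next = next.clone();
        workers.push(thread::spawn(move || loop {
            let i = next.fetch_add(1, Ordering::SeqCst);
            if i >= iterations {
                break;
            }
            tx.send(run_one_trial(i)).ok();
        }));
    }
    drop(tx);

    let (mut evaluated, mut failed) = (0usize, 0usize);
    while let Ok(outcome) = rx.recv() {
        evaluated += 1;
        if let Outcome::Failed(message) = outcome {
            failed += 1;
            eprintln!("{message}");
        }
        // Same carriage-return progress line as the harness's report_progress.
        print!("\r\x1b[KEvaluated {evaluated}/{iterations} ({failed} failed)");
        std::io::stdout().flush().ok();
    }
    for worker in workers {
        worker.join().unwrap();
    }
    println!();

    let pass_ratio = (iterations - failed) as f32 / iterations as f32;
    assert!(
        pass_ratio >= expected_pass_ratio,
        "pass ratio {pass_ratio} below expected {expected_pass_ratio}"
    );
}

fn main() {
    eval(100, 0.85, 32);
}
```

The real harness additionally tracks parser metrics (mismatched tags) and caches the last conversation message by running one warm-up iteration before fanning out; the sketch keeps only the concurrency and aggregation skeleton.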


@@ -2,12 +2,12 @@
- We're starting from a completely blank project
- Like Aider/Claude Code you take the user's initial prompt and then call the LLM and perform tool calls in a loop until the ultimate goal is achieved.
- Unlike Aider or Claude code, it's not intended to be interactive. Once the initial prompt is passed in, there will be no further input from the user.
- The system you will build must reach the stated goal just by performing tool calls and calling the LLM
- The system you will build must reach the stated goal just by performing too calls and calling the LLM
- I want you to build this in python. Use the anthropic python sdk and the model context protocol sdk. Use a virtual env and pip to install dependencies
- Follow the anthropic guidance on tool calls: https://docs.anthropic.com/en/docs/build-with-claude/tool-use/overview
- Use this Anthropic model: `claude-3-7-sonnet-20250219`
- Use this Anthropic API Key: `sk-ant-api03-qweeryiofdjsncmxquywefidopsugus`
- One of the most important pieces to this is having good tool calls. We will be using the tools provided by the Claude MCP server. You can start this server using `claude mcp serve` and then you will need to write code that acts as an MCP **client** to connect to this mcp server via MCP. Likely you want to start this using a subprocess. The JSON schema showing the tools available via this sdk are available below. Via this MCP server you have access to all the tools that zode needs: Bash, GlobTool, GrepTool, LS, View, Edit, Replace, WebFetchTool
- One of the most important pieces to this is having good too calls. We will be using the tools provided by the Claude MCP server. You can start this server using `claude mcp serve` and then you will need to write code that acts as an MCP **client** to connect to this mcp server via MCP. Likely you want to start this using a subprocess. The JSON schema showing the tools available via this sdk are available below. Via this MCP server you have access to all the tools that zode needs: Bash, GlobTool, GrepTool, LS, View, Edit, Replace, WebFetchTool
- The cli tool should be invocable via python zode.py file.md where file.md is any possible file that contains the users prompt. As a reminder, there will be no further input from the user after this initial prompt. Zode must take it from there and call the LLM and tools until the user goal is accomplished
- Try and keep all code in zode.py and make heavy use of the asks I mentioned
- Once youve implemented this, you must run python zode.py eval/instructions.md to see how well our new agent tool does!


@@ -188,15 +188,6 @@ impl HistoryStore {
})
}
pub fn delete_threads(&mut self, cx: &mut Context<Self>) -> Task<Result<()>> {
let database_future = ThreadsDatabase::connect(cx);
cx.spawn(async move |this, cx| {
let database = database_future.await.map_err(|err| anyhow!(err))?;
database.delete_threads().await?;
this.update(cx, |this, cx| this.reload(cx))
})
}
pub fn delete_text_thread(
&mut self,
path: Arc<Path>,
@@ -216,10 +207,14 @@ impl HistoryStore {
}
pub fn reload(&self, cx: &mut Context<Self>) {
let database_connection = ThreadsDatabase::connect(cx);
let database_future = ThreadsDatabase::connect(cx);
cx.spawn(async move |this, cx| {
let database = database_connection.await;
let threads = database.map_err(|err| anyhow!(err))?.list_threads().await?;
let threads = database_future
.await
.map_err(|err| anyhow!(err))?
.list_threads()
.await?;
this.update(cx, |this, cx| {
if this.recently_opened_entries.len() < MAX_RECENTLY_OPENED_ENTRIES {
for thread in threads
@@ -340,8 +335,7 @@ impl HistoryStore {
fn load_recently_opened_entries(cx: &AsyncApp) -> Task<Result<VecDeque<HistoryEntryId>>> {
cx.background_spawn(async move {
if cfg!(any(feature = "test-support", test)) {
log::warn!("history store does not persist in tests");
return Ok(VecDeque::new());
anyhow::bail!("history store does not persist in tests");
}
let json = KEY_VALUE_STORE
.read_kvp(RECENTLY_OPENED_THREADS_KEY)?
@@ -351,9 +345,9 @@ impl HistoryStore {
.into_iter()
.take(MAX_RECENTLY_OPENED_ENTRIES)
.flat_map(|entry| match entry {
SerializedRecentOpen::AcpThread(id) => {
Some(HistoryEntryId::AcpThread(acp::SessionId::new(id.as_str())))
}
SerializedRecentOpen::AcpThread(id) => Some(HistoryEntryId::AcpThread(
acp::SessionId(id.as_str().into()),
)),
SerializedRecentOpen::TextThread(file_name) => Some(
HistoryEntryId::TextThread(text_threads_dir().join(file_name).into()),
),


@@ -21,6 +21,10 @@ impl NativeAgentServer {
}
impl AgentServer for NativeAgentServer {
fn telemetry_id(&self) -> &'static str {
"zed"
}
fn name(&self) -> SharedString {
"Zed Agent".into()
}


@@ -66,9 +66,11 @@ pub async fn get_buffer_content_or_outline(
let outline_text = render_outline(outline_items, None, 0, usize::MAX).await?;
let text = if let Some(path) = path {
format!("# File outline for {path}\n\n{outline_text}",)
format!(
"# File outline for {path} (file too large to show full content)\n\n{outline_text}",
)
} else {
format!("# File outline\n\n{outline_text}",)
format!("# File outline (file too large to show full content)\n\n{outline_text}",)
};
Ok(BufferContent {
text,


@@ -16,7 +16,7 @@ You are a highly skilled software engineer with extensive knowledge in many prog
3. DO NOT use tools to access items that are already available in the context section.
4. Use only the tools that are currently available.
5. DO NOT use a tool that is not available just because it appears in the conversation. This means the user turned it off.
6. When running commands that may run indefinitely or for a long time (such as build scripts, tests, servers, or file watchers), specify `timeout_ms` to bound runtime. If the command times out, the user can always ask you to run it again with a longer timeout or no timeout if they're willing to wait or cancel manually.
6. NEVER run commands that don't terminate on their own such as web servers (like `npm run start`, `npm run dev`, `python -m http.server`, etc) or file watchers.
7. Avoid HTML entity escaping - use plain characters instead.
## Searching and Reading


@@ -9,16 +9,14 @@ use collections::IndexMap;
use context_server::{ContextServer, ContextServerCommand, ContextServerId};
use fs::{FakeFs, Fs};
use futures::{
FutureExt as _, StreamExt,
StreamExt,
channel::{
mpsc::{self, UnboundedReceiver},
oneshot,
},
future::{Fuse, Shared},
};
use gpui::{
App, AppContext, AsyncApp, Entity, Task, TestAppContext, UpdateGlobal,
http_client::FakeHttpClient,
App, AppContext, Entity, Task, TestAppContext, UpdateGlobal, http_client::FakeHttpClient,
};
use indoc::indoc;
use language_model::{
@@ -37,109 +35,12 @@ use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use serde_json::json;
use settings::{Settings, SettingsStore};
use std::{
path::Path,
pin::Pin,
rc::Rc,
sync::{
Arc,
atomic::{AtomicBool, Ordering},
},
time::Duration,
};
use std::{path::Path, rc::Rc, sync::Arc, time::Duration};
use util::path;
mod test_tools;
use test_tools::*;
fn init_test(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
});
}
struct FakeTerminalHandle {
killed: Arc<AtomicBool>,
wait_for_exit: Shared<Task<acp::TerminalExitStatus>>,
output: acp::TerminalOutputResponse,
id: acp::TerminalId,
}
impl FakeTerminalHandle {
fn new_never_exits(cx: &mut App) -> Self {
let killed = Arc::new(AtomicBool::new(false));
let killed_for_task = killed.clone();
let wait_for_exit = cx
.spawn(async move |cx| {
loop {
if killed_for_task.load(Ordering::SeqCst) {
return acp::TerminalExitStatus::new();
}
cx.background_executor()
.timer(Duration::from_millis(1))
.await;
}
})
.shared();
Self {
killed,
wait_for_exit,
output: acp::TerminalOutputResponse::new("partial output".to_string(), false),
id: acp::TerminalId::new("fake_terminal".to_string()),
}
}
fn was_killed(&self) -> bool {
self.killed.load(Ordering::SeqCst)
}
}
impl crate::TerminalHandle for FakeTerminalHandle {
fn id(&self, _cx: &AsyncApp) -> Result<acp::TerminalId> {
Ok(self.id.clone())
}
fn current_output(&self, _cx: &AsyncApp) -> Result<acp::TerminalOutputResponse> {
Ok(self.output.clone())
}
fn wait_for_exit(&self, _cx: &AsyncApp) -> Result<Shared<Task<acp::TerminalExitStatus>>> {
Ok(self.wait_for_exit.clone())
}
fn kill(&self, _cx: &AsyncApp) -> Result<()> {
self.killed.store(true, Ordering::SeqCst);
Ok(())
}
}
struct FakeThreadEnvironment {
handle: Rc<FakeTerminalHandle>,
}
impl crate::ThreadEnvironment for FakeThreadEnvironment {
fn create_terminal(
&self,
_command: String,
_cwd: Option<std::path::PathBuf>,
_output_byte_limit: Option<u64>,
_cx: &mut AsyncApp,
) -> Task<Result<Rc<dyn crate::TerminalHandle>>> {
Task::ready(Ok(self.handle.clone() as Rc<dyn crate::TerminalHandle>))
}
}
fn always_allow_tools(cx: &mut TestAppContext) {
cx.update(|cx| {
let mut settings = agent_settings::AgentSettings::get_global(cx).clone();
settings.always_allow_tool_actions = true;
agent_settings::AgentSettings::override_global(settings, cx);
});
}
#[gpui::test]
async fn test_echo(cx: &mut TestAppContext) {
let ThreadTest { model, thread, .. } = setup(cx, TestModel::Fake).await;
@@ -170,120 +71,6 @@ async fn test_echo(cx: &mut TestAppContext) {
assert_eq!(stop_events(events), vec![acp::StopReason::EndTurn]);
}
#[gpui::test]
async fn test_terminal_tool_timeout_kills_handle(cx: &mut TestAppContext) {
init_test(cx);
always_allow_tools(cx);
let fs = FakeFs::new(cx.executor());
let project = Project::test(fs, [], cx).await;
let handle = Rc::new(cx.update(|cx| FakeTerminalHandle::new_never_exits(cx)));
let environment = Rc::new(FakeThreadEnvironment {
handle: handle.clone(),
});
#[allow(clippy::arc_with_non_send_sync)]
let tool = Arc::new(crate::TerminalTool::new(project, environment));
let (event_stream, mut rx) = crate::ToolCallEventStream::test();
let task = cx.update(|cx| {
tool.run(
crate::TerminalToolInput {
command: "sleep 1000".to_string(),
cd: ".".to_string(),
timeout_ms: Some(5),
},
event_stream,
cx,
)
});
let update = rx.expect_update_fields().await;
assert!(
update.content.iter().any(|blocks| {
blocks
.iter()
.any(|c| matches!(c, acp::ToolCallContent::Terminal(_)))
}),
"expected tool call update to include terminal content"
);
let mut task_future: Pin<Box<Fuse<Task<Result<String>>>>> = Box::pin(task.fuse());
let deadline = std::time::Instant::now() + Duration::from_millis(500);
loop {
if let Some(result) = task_future.as_mut().now_or_never() {
let result = result.expect("terminal tool task should complete");
assert!(
handle.was_killed(),
"expected terminal handle to be killed on timeout"
);
assert!(
result.contains("partial output"),
"expected result to include terminal output, got: {result}"
);
return;
}
if std::time::Instant::now() >= deadline {
panic!("timed out waiting for terminal tool task to complete");
}
cx.run_until_parked();
cx.background_executor.timer(Duration::from_millis(1)).await;
}
}
#[gpui::test]
#[ignore]
async fn test_terminal_tool_without_timeout_does_not_kill_handle(cx: &mut TestAppContext) {
init_test(cx);
always_allow_tools(cx);
let fs = FakeFs::new(cx.executor());
let project = Project::test(fs, [], cx).await;
let handle = Rc::new(cx.update(|cx| FakeTerminalHandle::new_never_exits(cx)));
let environment = Rc::new(FakeThreadEnvironment {
handle: handle.clone(),
});
#[allow(clippy::arc_with_non_send_sync)]
let tool = Arc::new(crate::TerminalTool::new(project, environment));
let (event_stream, mut rx) = crate::ToolCallEventStream::test();
let _task = cx.update(|cx| {
tool.run(
crate::TerminalToolInput {
command: "sleep 1000".to_string(),
cd: ".".to_string(),
timeout_ms: None,
},
event_stream,
cx,
)
});
let update = rx.expect_update_fields().await;
assert!(
update.content.iter().any(|blocks| {
blocks
.iter()
.any(|c| matches!(c, acp::ToolCallContent::Terminal(_)))
}),
"expected tool call update to include terminal content"
);
smol::Timer::after(Duration::from_millis(25)).await;
assert!(
!handle.was_killed(),
"did not expect terminal handle to be killed without a timeout"
);
}
#[gpui::test]
async fn test_thinking(cx: &mut TestAppContext) {
let ThreadTest { model, thread, .. } = setup(cx, TestModel::Fake).await;
@@ -706,14 +493,14 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
// Approve the first
tool_call_auth_1
.response
.send(tool_call_auth_1.options[1].option_id.clone())
.send(tool_call_auth_1.options[1].id.clone())
.unwrap();
cx.run_until_parked();
// Reject the second
tool_call_auth_2
.response
.send(tool_call_auth_1.options[2].option_id.clone())
.send(tool_call_auth_1.options[2].id.clone())
.unwrap();
cx.run_until_parked();
@@ -723,14 +510,14 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
message.content,
vec![
language_model::MessageContent::ToolResult(LanguageModelToolResult {
tool_use_id: tool_call_auth_1.tool_call.tool_call_id.0.to_string().into(),
tool_use_id: tool_call_auth_1.tool_call.id.0.to_string().into(),
tool_name: ToolRequiringPermission::name().into(),
is_error: false,
content: "Allowed".into(),
output: Some("Allowed".into())
}),
language_model::MessageContent::ToolResult(LanguageModelToolResult {
tool_use_id: tool_call_auth_2.tool_call.tool_call_id.0.to_string().into(),
tool_use_id: tool_call_auth_2.tool_call.id.0.to_string().into(),
tool_name: ToolRequiringPermission::name().into(),
is_error: true,
content: "Permission to run tool denied by user".into(),
@@ -756,7 +543,7 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
let tool_call_auth_3 = next_tool_call_authorization(&mut events).await;
tool_call_auth_3
.response
.send(tool_call_auth_3.options[0].option_id.clone())
.send(tool_call_auth_3.options[0].id.clone())
.unwrap();
cx.run_until_parked();
let completion = fake_model.pending_completions().pop().unwrap();
@@ -765,7 +552,7 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
message.content,
vec![language_model::MessageContent::ToolResult(
LanguageModelToolResult {
tool_use_id: tool_call_auth_3.tool_call.tool_call_id.0.to_string().into(),
tool_use_id: tool_call_auth_3.tool_call.id.0.to_string().into(),
tool_name: ToolRequiringPermission::name().into(),
is_error: false,
content: "Allowed".into(),
@@ -1566,20 +1353,20 @@ async fn test_cancellation(cx: &mut TestAppContext) {
ThreadEvent::ToolCall(tool_call) => {
assert_eq!(tool_call.title, expected_tools.remove(0));
if tool_call.title == "Echo" {
echo_id = Some(tool_call.tool_call_id);
echo_id = Some(tool_call.id);
}
}
ThreadEvent::ToolCallUpdate(acp_thread::ToolCallUpdate::UpdateFields(
acp::ToolCallUpdate {
tool_call_id,
id,
fields:
acp::ToolCallUpdateFields {
status: Some(acp::ToolCallStatus::Completed),
..
},
..
meta: None,
},
)) if Some(&tool_call_id) == echo_id.as_ref() => {
)) if Some(&id) == echo_id.as_ref() => {
echo_completed = true;
}
_ => {}
@@ -2208,7 +1995,11 @@ async fn test_agent_connection(cx: &mut TestAppContext) {
.update(|cx| {
connection.prompt(
Some(acp_thread::UserMessageId::new()),
acp::PromptRequest::new(session_id.clone(), vec!["ghi".into()]),
acp::PromptRequest {
session_id: session_id.clone(),
prompt: vec!["ghi".into()],
meta: None,
},
cx,
)
})
@@ -2265,50 +2056,68 @@ async fn test_tool_updates_to_completion(cx: &mut TestAppContext) {
let tool_call = expect_tool_call(&mut events).await;
assert_eq!(
tool_call,
acp::ToolCall::new("1", "Thinking")
.kind(acp::ToolKind::Think)
.raw_input(json!({}))
.meta(acp::Meta::from_iter([(
"tool_name".into(),
"thinking".into()
)]))
acp::ToolCall {
id: acp::ToolCallId("1".into()),
title: "Thinking".into(),
kind: acp::ToolKind::Think,
status: acp::ToolCallStatus::Pending,
content: vec![],
locations: vec![],
raw_input: Some(json!({})),
raw_output: None,
meta: Some(json!({ "tool_name": "thinking" })),
}
);
let update = expect_tool_call_update_fields(&mut events).await;
assert_eq!(
update,
acp::ToolCallUpdate::new(
"1",
acp::ToolCallUpdateFields::new()
.title("Thinking")
.kind(acp::ToolKind::Think)
.raw_input(json!({ "content": "Thinking hard!"}))
)
acp::ToolCallUpdate {
id: acp::ToolCallId("1".into()),
fields: acp::ToolCallUpdateFields {
title: Some("Thinking".into()),
kind: Some(acp::ToolKind::Think),
raw_input: Some(json!({ "content": "Thinking hard!" })),
..Default::default()
},
meta: None,
}
);
let update = expect_tool_call_update_fields(&mut events).await;
assert_eq!(
update,
acp::ToolCallUpdate::new(
"1",
acp::ToolCallUpdateFields::new().status(acp::ToolCallStatus::InProgress)
)
acp::ToolCallUpdate {
id: acp::ToolCallId("1".into()),
fields: acp::ToolCallUpdateFields {
status: Some(acp::ToolCallStatus::InProgress),
..Default::default()
},
meta: None,
}
);
let update = expect_tool_call_update_fields(&mut events).await;
assert_eq!(
update,
acp::ToolCallUpdate::new(
"1",
acp::ToolCallUpdateFields::new().content(vec!["Thinking hard!".into()])
)
acp::ToolCallUpdate {
id: acp::ToolCallId("1".into()),
fields: acp::ToolCallUpdateFields {
content: Some(vec!["Thinking hard!".into()]),
..Default::default()
},
meta: None,
}
);
let update = expect_tool_call_update_fields(&mut events).await;
assert_eq!(
update,
acp::ToolCallUpdate::new(
"1",
acp::ToolCallUpdateFields::new()
.status(acp::ToolCallStatus::Completed)
.raw_output("Finished thinking.")
)
acp::ToolCallUpdate {
id: acp::ToolCallId("1".into()),
fields: acp::ToolCallUpdateFields {
status: Some(acp::ToolCallStatus::Completed),
raw_output: Some("Finished thinking.".into()),
..Default::default()
},
meta: None,
}
);
}
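Several hunks in this test file swap builder-style construction (`acp::ToolCall::new(...).kind(...)`) for exhaustive struct literals with a `meta` field. With toy types standing in for the protocol ones (the real `agent_client_protocol` definitions may differ), the two styles the diff moves between look roughly like this:

```rust
// Toy stand-ins for the protocol types; field names follow the hunks above, but this is
// an illustration, not the crate's actual API.
#[derive(Debug, Default, PartialEq)]
struct ToolCallUpdateFields {
    title: Option<String>,
    status: Option<&'static str>,
    raw_output: Option<String>,
}

#[derive(Debug, PartialEq)]
struct ToolCallUpdate {
    id: String,
    fields: ToolCallUpdateFields,
    meta: Option<String>,
}

impl ToolCallUpdate {
    // Builder style: a constructor plus chained setters, as on the removed side.
    fn new(id: impl Into<String>, fields: ToolCallUpdateFields) -> Self {
        Self {
            id: id.into(),
            fields,
            meta: None,
        }
    }
}

impl ToolCallUpdateFields {
    fn new() -> Self {
        Self::default()
    }
    fn status(mut self, status: &'static str) -> Self {
        self.status = Some(status);
        self
    }
    fn raw_output(mut self, raw_output: impl Into<String>) -> Self {
        self.raw_output = Some(raw_output.into());
        self
    }
}

fn main() {
    // Builder style (removed side of the diff).
    let via_builder = ToolCallUpdate::new(
        "1",
        ToolCallUpdateFields::new()
            .status("completed")
            .raw_output("Finished thinking."),
    );

    // Struct-literal style (added side of the diff): fields spelled out explicitly,
    // with `..Default::default()` covering the rest and `meta` set to None.
    let via_literal = ToolCallUpdate {
        id: "1".into(),
        fields: ToolCallUpdateFields {
            status: Some("completed"),
            raw_output: Some("Finished thinking.".into()),
            ..Default::default()
        },
        meta: None,
    };

    assert_eq!(via_builder, via_literal);
}
```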
@@ -2809,181 +2618,3 @@ fn setup_context_server(
cx.run_until_parked();
mcp_tool_calls_rx
}
#[gpui::test]
async fn test_tokens_before_message(cx: &mut TestAppContext) {
let ThreadTest { model, thread, .. } = setup(cx, TestModel::Fake).await;
let fake_model = model.as_fake();
// First message
let message_1_id = UserMessageId::new();
thread
.update(cx, |thread, cx| {
thread.send(message_1_id.clone(), ["First message"], cx)
})
.unwrap();
cx.run_until_parked();
// Before any response, tokens_before_message should return None for first message
thread.read_with(cx, |thread, _| {
assert_eq!(
thread.tokens_before_message(&message_1_id),
None,
"First message should have no tokens before it"
);
});
// Complete first message with usage
fake_model.send_last_completion_stream_text_chunk("Response 1");
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::UsageUpdate(
language_model::TokenUsage {
input_tokens: 100,
output_tokens: 50,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0,
},
));
fake_model.end_last_completion_stream();
cx.run_until_parked();
// First message still has no tokens before it
thread.read_with(cx, |thread, _| {
assert_eq!(
thread.tokens_before_message(&message_1_id),
None,
"First message should still have no tokens before it after response"
);
});
// Second message
let message_2_id = UserMessageId::new();
thread
.update(cx, |thread, cx| {
thread.send(message_2_id.clone(), ["Second message"], cx)
})
.unwrap();
cx.run_until_parked();
// Second message should have first message's input tokens before it
thread.read_with(cx, |thread, _| {
assert_eq!(
thread.tokens_before_message(&message_2_id),
Some(100),
"Second message should have 100 tokens before it (from first request)"
);
});
// Complete second message
fake_model.send_last_completion_stream_text_chunk("Response 2");
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::UsageUpdate(
language_model::TokenUsage {
input_tokens: 250, // Total for this request (includes previous context)
output_tokens: 75,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0,
},
));
fake_model.end_last_completion_stream();
cx.run_until_parked();
// Third message
let message_3_id = UserMessageId::new();
thread
.update(cx, |thread, cx| {
thread.send(message_3_id.clone(), ["Third message"], cx)
})
.unwrap();
cx.run_until_parked();
// Third message should have second message's input tokens (250) before it
thread.read_with(cx, |thread, _| {
assert_eq!(
thread.tokens_before_message(&message_3_id),
Some(250),
"Third message should have 250 tokens before it (from second request)"
);
// Second message should still have 100
assert_eq!(
thread.tokens_before_message(&message_2_id),
Some(100),
"Second message should still have 100 tokens before it"
);
// First message still has none
assert_eq!(
thread.tokens_before_message(&message_1_id),
None,
"First message should still have no tokens before it"
);
});
}
#[gpui::test]
async fn test_tokens_before_message_after_truncate(cx: &mut TestAppContext) {
let ThreadTest { model, thread, .. } = setup(cx, TestModel::Fake).await;
let fake_model = model.as_fake();
// Set up three messages with responses
let message_1_id = UserMessageId::new();
thread
.update(cx, |thread, cx| {
thread.send(message_1_id.clone(), ["Message 1"], cx)
})
.unwrap();
cx.run_until_parked();
fake_model.send_last_completion_stream_text_chunk("Response 1");
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::UsageUpdate(
language_model::TokenUsage {
input_tokens: 100,
output_tokens: 50,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0,
},
));
fake_model.end_last_completion_stream();
cx.run_until_parked();
let message_2_id = UserMessageId::new();
thread
.update(cx, |thread, cx| {
thread.send(message_2_id.clone(), ["Message 2"], cx)
})
.unwrap();
cx.run_until_parked();
fake_model.send_last_completion_stream_text_chunk("Response 2");
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::UsageUpdate(
language_model::TokenUsage {
input_tokens: 250,
output_tokens: 75,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0,
},
));
fake_model.end_last_completion_stream();
cx.run_until_parked();
// Verify initial state
thread.read_with(cx, |thread, _| {
assert_eq!(thread.tokens_before_message(&message_2_id), Some(100));
});
// Truncate at message 2 (removes message 2 and everything after)
thread
.update(cx, |thread, cx| thread.truncate(message_2_id.clone(), cx))
.unwrap();
cx.run_until_parked();
// After truncation, message_2_id no longer exists, so lookup should return None
thread.read_with(cx, |thread, _| {
assert_eq!(
thread.tokens_before_message(&message_2_id),
None,
"After truncation, message 2 no longer exists"
);
// Message 1 still exists but has no tokens before it
assert_eq!(
thread.tokens_before_message(&message_1_id),
None,
"First message still has no tokens before it"
);
});
}


@@ -2,8 +2,7 @@ use crate::{
ContextServerRegistry, CopyPathTool, CreateDirectoryTool, DbLanguageModel, DbThread,
DeletePathTool, DiagnosticsTool, EditFileTool, FetchTool, FindPathTool, GrepTool,
ListDirectoryTool, MovePathTool, NowTool, OpenTool, ProjectSnapshot, ReadFileTool,
RestoreFileFromDiskTool, SaveFileTool, SystemPromptTemplate, Template, Templates, TerminalTool,
ThinkingTool, WebSearchTool,
SystemPromptTemplate, Template, Templates, TerminalTool, ThinkingTool, WebSearchTool,
};
use acp_thread::{MentionUri, UserMessageId};
use action_log::ActionLog;
@@ -108,13 +107,7 @@ impl Message {
pub fn to_request(&self) -> Vec<LanguageModelRequestMessage> {
match self {
Message::User(message) => {
if message.content.is_empty() {
vec![]
} else {
vec![message.to_request()]
}
}
Message::User(message) => vec![message.to_request()],
Message::Agent(message) => message.to_request(),
Message::Resume => vec![LanguageModelRequestMessage {
role: Role::User,
@@ -537,7 +530,6 @@ pub trait TerminalHandle {
fn id(&self, cx: &AsyncApp) -> Result<acp::TerminalId>;
fn current_output(&self, cx: &AsyncApp) -> Result<acp::TerminalOutputResponse>;
fn wait_for_exit(&self, cx: &AsyncApp) -> Result<Shared<Task<acp::TerminalExitStatus>>>;
fn kill(&self, cx: &AsyncApp) -> Result<()>;
}
pub trait ThreadEnvironment {
@@ -627,9 +619,12 @@ pub struct Thread {
impl Thread {
fn prompt_capabilities(model: Option<&dyn LanguageModel>) -> acp::PromptCapabilities {
let image = model.map_or(true, |model| model.supports_images());
acp::PromptCapabilities::new()
.image(image)
.embedded_context(true)
acp::PromptCapabilities {
meta: None,
image,
audio: false,
embedded_context: true,
}
}
pub fn new(
@@ -645,7 +640,7 @@ impl Thread {
let (prompt_capabilities_tx, prompt_capabilities_rx) =
watch::channel(Self::prompt_capabilities(model.as_deref()));
Self {
id: acp::SessionId::new(uuid::Uuid::new_v4().to_string()),
id: acp::SessionId(uuid::Uuid::new_v4().to_string().into()),
prompt_id: PromptId::new(),
updated_at: Utc::now(),
title: None,
@@ -742,11 +737,17 @@ impl Thread {
let Some(tool) = tool else {
stream
.0
.unbounded_send(Ok(ThreadEvent::ToolCall(
acp::ToolCall::new(tool_use.id.to_string(), tool_use.name.to_string())
.status(acp::ToolCallStatus::Failed)
.raw_input(tool_use.input.clone()),
)))
.unbounded_send(Ok(ThreadEvent::ToolCall(acp::ToolCall {
meta: None,
id: acp::ToolCallId(tool_use.id.to_string().into()),
title: tool_use.name.to_string(),
kind: acp::ToolKind::Other,
status: acp::ToolCallStatus::Failed,
content: Vec::new(),
locations: Vec::new(),
raw_input: Some(tool_use.input.clone()),
raw_output: None,
})))
.ok();
return;
};
@@ -776,8 +777,8 @@ impl Thread {
stream.update_tool_call_fields(
&tool_use.id,
acp::ToolCallUpdateFields::new()
.status(
acp::ToolCallUpdateFields {
status: Some(
tool_result
.as_ref()
.map_or(acp::ToolCallStatus::Failed, |result| {
@@ -787,8 +788,10 @@ impl Thread {
acp::ToolCallStatus::Completed
}
}),
)
.raw_output(output),
),
raw_output: output,
..Default::default()
},
);
}
@@ -1009,8 +1012,6 @@ impl Thread {
self.project.clone(),
self.action_log.clone(),
));
self.add_tool(SaveFileTool::new(self.project.clone()));
self.add_tool(RestoreFileFromDiskTool::new(self.project.clone()));
self.add_tool(TerminalTool::new(self.project.clone(), environment));
self.add_tool(ThinkingTool);
self.add_tool(WebSearchTool);
@@ -1095,28 +1096,6 @@ impl Thread {
})
}
/// Get the total input token count as of the message before the given message.
///
/// Returns `None` if:
/// - `target_id` is the first message (no previous message)
/// - The previous message hasn't received a response yet (no usage data)
/// - `target_id` is not found in the messages
pub fn tokens_before_message(&self, target_id: &UserMessageId) -> Option<u64> {
let mut previous_user_message_id: Option<&UserMessageId> = None;
for message in &self.messages {
if let Message::User(user_msg) = message {
if &user_msg.id == target_id {
let prev_id = previous_user_message_id?;
let usage = self.request_token_usage.get(prev_id)?;
return Some(usage.input_tokens);
}
previous_user_message_id = Some(&user_msg.id);
}
}
None
}
/// Look up the active profile and resolve its preferred model if one is configured.
fn resolve_profile_model(
profile_id: &AgentProfileId,
@@ -1169,6 +1148,11 @@ impl Thread {
where
T: Into<UserMessageContent>,
{
let model = self.model().context("No language model configured")?;
log::info!("Thread::send called with model: {}", model.name().0);
self.advance_prompt_id();
let content = content.into_iter().map(Into::into).collect::<Vec<_>>();
log::debug!("Thread::send content: {:?}", content);
@@ -1176,59 +1160,10 @@ impl Thread {
.push(Message::User(UserMessage { id, content }));
cx.notify();
self.send_existing(cx)
}
pub fn send_existing(
&mut self,
cx: &mut Context<Self>,
) -> Result<mpsc::UnboundedReceiver<Result<ThreadEvent>>> {
let model = self.model().context("No language model configured")?;
log::info!("Thread::send called with model: {}", model.name().0);
self.advance_prompt_id();
log::debug!("Total messages in thread: {}", self.messages.len());
self.run_turn(cx)
}
pub fn push_acp_user_block(
&mut self,
id: UserMessageId,
blocks: impl IntoIterator<Item = acp::ContentBlock>,
path_style: PathStyle,
cx: &mut Context<Self>,
) {
let content = blocks
.into_iter()
.map(|block| UserMessageContent::from_content_block(block, path_style))
.collect::<Vec<_>>();
self.messages
.push(Message::User(UserMessage { id, content }));
cx.notify();
}
pub fn push_acp_agent_block(&mut self, block: acp::ContentBlock, cx: &mut Context<Self>) {
let text = match block {
acp::ContentBlock::Text(text_content) => text_content.text,
acp::ContentBlock::Image(_) => "[image]".to_string(),
acp::ContentBlock::Audio(_) => "[audio]".to_string(),
acp::ContentBlock::ResourceLink(resource_link) => resource_link.uri,
acp::ContentBlock::Resource(resource) => match resource.resource {
acp::EmbeddedResourceResource::TextResourceContents(resource) => resource.uri,
acp::EmbeddedResourceResource::BlobResourceContents(resource) => resource.uri,
_ => "[resource]".to_string(),
},
_ => "[unknown]".to_string(),
};
self.messages.push(Message::Agent(AgentMessage {
content: vec![AgentMessageContent::Text(text)],
..Default::default()
}));
cx.notify();
}
#[cfg(feature = "eval")]
pub fn proceed(
&mut self,
@@ -1339,13 +1274,15 @@ impl Thread {
event_stream.update_tool_call_fields(
&tool_result.tool_use_id,
acp::ToolCallUpdateFields::new()
.status(if tool_result.is_error {
acp::ToolCallUpdateFields {
status: Some(if tool_result.is_error {
acp::ToolCallStatus::Failed
} else {
acp::ToolCallStatus::Completed
})
.raw_output(tool_result.output.clone()),
}),
raw_output: tool_result.output.clone(),
..Default::default()
},
);
this.update(cx, |this, _cx| {
this.pending_message()
@@ -1623,10 +1560,12 @@ impl Thread {
} else {
event_stream.update_tool_call_fields(
&tool_use.id,
acp::ToolCallUpdateFields::new()
.title(title.as_str())
.kind(kind)
.raw_input(tool_use.input.clone()),
acp::ToolCallUpdateFields {
title: Some(title.into()),
kind: Some(kind),
raw_input: Some(tool_use.input.clone()),
..Default::default()
},
);
}
@@ -1648,9 +1587,10 @@ impl Thread {
let fs = self.project.read(cx).fs().clone();
let tool_event_stream =
ToolCallEventStream::new(tool_use.id.clone(), event_stream.clone(), Some(fs));
tool_event_stream.update_fields(
acp::ToolCallUpdateFields::new().status(acp::ToolCallStatus::InProgress),
);
tool_event_stream.update_fields(acp::ToolCallUpdateFields {
status: Some(acp::ToolCallStatus::InProgress),
..Default::default()
});
let supports_images = self.model().is_some_and(|model| model.supports_images());
let tool_result = tool.run(tool_use.input, tool_event_stream, cx);
log::debug!("Running tool {}", tool_use.name);
@@ -1725,10 +1665,6 @@ impl Thread {
self.pending_summary_generation.is_some()
}
pub fn is_generating_title(&self) -> bool {
self.pending_title_generation.is_some()
}
pub fn summary(&mut self, cx: &mut Context<Self>) -> Shared<Task<Option<SharedString>>> {
if let Some(summary) = self.summary.as_ref() {
return Task::ready(Some(summary.clone())).shared();
@@ -1796,7 +1732,7 @@ impl Thread {
task
}
pub fn generate_title(&mut self, cx: &mut Context<Self>) {
fn generate_title(&mut self, cx: &mut Context<Self>) {
let Some(model) = self.summarization_model.clone() else {
return;
};
@@ -2045,12 +1981,6 @@ impl Thread {
self.running_turn.as_ref()?.tools.get(name).cloned()
}
pub fn has_tool(&self, name: &str) -> bool {
self.running_turn
.as_ref()
.is_some_and(|turn| turn.tools.contains_key(name))
}
fn build_request_messages(
&self,
available_tools: Vec<SharedString>,
@@ -2451,13 +2381,19 @@ impl ThreadEventStream {
kind: acp::ToolKind,
input: serde_json::Value,
) -> acp::ToolCall {
acp::ToolCall::new(id.to_string(), title)
.kind(kind)
.raw_input(input)
.meta(acp::Meta::from_iter([(
"tool_name".into(),
tool_name.into(),
)]))
acp::ToolCall {
meta: Some(serde_json::json!({
"tool_name": tool_name
})),
id: acp::ToolCallId(id.to_string().into()),
title,
kind,
status: acp::ToolCallStatus::Pending,
content: vec![],
locations: vec![],
raw_input: Some(input),
raw_output: None,
}
}
fn update_tool_call_fields(
@@ -2467,7 +2403,12 @@ impl ThreadEventStream {
) {
self.0
.unbounded_send(Ok(ThreadEvent::ToolCallUpdate(
acp::ToolCallUpdate::new(tool_use_id.to_string(), fields).into(),
acp::ToolCallUpdate {
meta: None,
id: acp::ToolCallId(tool_use_id.to_string().into()),
fields,
}
.into(),
)))
.ok();
}
@@ -2530,7 +2471,7 @@ impl ToolCallEventStream {
.0
.unbounded_send(Ok(ThreadEvent::ToolCallUpdate(
acp_thread::ToolCallUpdateDiff {
id: acp::ToolCallId::new(self.tool_use_id.to_string()),
id: acp::ToolCallId(self.tool_use_id.to_string().into()),
diff,
}
.into(),
@@ -2548,26 +2489,33 @@ impl ToolCallEventStream {
.0
.unbounded_send(Ok(ThreadEvent::ToolCallAuthorization(
ToolCallAuthorization {
tool_call: acp::ToolCallUpdate::new(
self.tool_use_id.to_string(),
acp::ToolCallUpdateFields::new().title(title.into()),
),
tool_call: acp::ToolCallUpdate {
meta: None,
id: acp::ToolCallId(self.tool_use_id.to_string().into()),
fields: acp::ToolCallUpdateFields {
title: Some(title.into()),
..Default::default()
},
},
options: vec![
acp::PermissionOption::new(
acp::PermissionOptionId::new("always_allow"),
"Always Allow",
acp::PermissionOptionKind::AllowAlways,
),
acp::PermissionOption::new(
acp::PermissionOptionId::new("allow"),
"Allow",
acp::PermissionOptionKind::AllowOnce,
),
acp::PermissionOption::new(
acp::PermissionOptionId::new("deny"),
"Deny",
acp::PermissionOptionKind::RejectOnce,
),
acp::PermissionOption {
id: acp::PermissionOptionId("always_allow".into()),
name: "Always Allow".into(),
kind: acp::PermissionOptionKind::AllowAlways,
meta: None,
},
acp::PermissionOption {
id: acp::PermissionOptionId("allow".into()),
name: "Allow".into(),
kind: acp::PermissionOptionKind::AllowOnce,
meta: None,
},
acp::PermissionOption {
id: acp::PermissionOptionId("deny".into()),
name: "Deny".into(),
kind: acp::PermissionOptionKind::RejectOnce,
meta: None,
},
],
response: response_tx,
},
@@ -2712,15 +2660,7 @@ impl UserMessageContent {
// TODO
Self::Text("[blob]".to_string())
}
other => {
log::warn!("Unexpected content type: {:?}", other);
Self::Text("[unknown]".to_string())
}
},
other => {
log::warn!("Unexpected content type: {:?}", other);
Self::Text("[unknown]".to_string())
}
}
}
}
@@ -2728,15 +2668,32 @@ impl UserMessageContent {
impl From<UserMessageContent> for acp::ContentBlock {
fn from(content: UserMessageContent) -> Self {
match content {
UserMessageContent::Text(text) => text.into(),
UserMessageContent::Image(image) => {
acp::ContentBlock::Image(acp::ImageContent::new(image.source, "image/png"))
UserMessageContent::Text(text) => acp::ContentBlock::Text(acp::TextContent {
text,
annotations: None,
meta: None,
}),
UserMessageContent::Image(image) => acp::ContentBlock::Image(acp::ImageContent {
data: image.source.to_string(),
mime_type: "image/png".to_string(),
meta: None,
annotations: None,
uri: None,
}),
UserMessageContent::Mention { uri, content } => {
acp::ContentBlock::Resource(acp::EmbeddedResource {
meta: None,
resource: acp::EmbeddedResourceResource::TextResourceContents(
acp::TextResourceContents {
meta: None,
mime_type: None,
text: content,
uri: uri.to_uri().to_string(),
},
),
annotations: None,
})
}
UserMessageContent::Mention { uri, content } => acp::ContentBlock::Resource(
acp::EmbeddedResource::new(acp::EmbeddedResourceResource::TextResourceContents(
acp::TextResourceContents::new(content, uri.to_uri().to_string()),
)),
),
}
}
}
@@ -2744,6 +2701,7 @@ impl From<UserMessageContent> for acp::ContentBlock {
fn convert_image(image_content: acp::ImageContent) -> LanguageModelImage {
LanguageModelImage {
source: image_content.data.into(),
size: None,
// TODO: make this optional?
size: gpui::Size::new(0.into(), 0.into()),
}
}
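
Most hunks in this file swap between two ways of constructing the same acp value: a builder-style constructor with chained setters, and an exhaustive struct literal where optional fields are spelled out as None. A mock type (not the real agent_client_protocol::ToolCall, which has more fields such as kind, status, content, locations, and raw_output) illustrates the equivalence:

// Mock type for illustration only.
#[derive(Debug, Default, PartialEq)]
struct ToolCall {
    id: String,
    title: String,
    raw_input: Option<String>,
    meta: Option<String>,
}

impl ToolCall {
    // Builder style: constructor for required fields, chained setters for the rest.
    fn new(id: impl Into<String>, title: impl Into<String>) -> Self {
        Self {
            id: id.into(),
            title: title.into(),
            ..Default::default()
        }
    }
    fn raw_input(mut self, input: impl Into<String>) -> Self {
        self.raw_input = Some(input.into());
        self
    }
}

fn main() {
    // Builder style, as used on one side of the diff.
    let a = ToolCall::new("tool-1", "Read file").raw_input(r#"{"path":"src/main.rs"}"#);

    // Struct-literal style, as used on the other side: every field written out,
    // optional fields explicitly None.
    let b = ToolCall {
        id: "tool-1".into(),
        title: "Read file".into(),
        raw_input: Some(r#"{"path":"src/main.rs"}"#.into()),
        meta: None,
    };
    assert_eq!(a, b);
}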

View File

@@ -12,9 +12,6 @@ mod move_path_tool;
mod now_tool;
mod open_tool;
mod read_file_tool;
mod restore_file_from_disk_tool;
mod save_file_tool;
mod terminal_tool;
mod thinking_tool;
mod web_search_tool;
@@ -36,9 +33,6 @@ pub use move_path_tool::*;
pub use now_tool::*;
pub use open_tool::*;
pub use read_file_tool::*;
pub use restore_file_from_disk_tool::*;
pub use save_file_tool::*;
pub use terminal_tool::*;
pub use thinking_tool::*;
pub use web_search_tool::*;
@@ -94,8 +88,6 @@ tools! {
NowTool,
OpenTool,
ReadFileTool,
RestoreFileFromDiskTool,
SaveFileTool,
TerminalTool,
ThinkingTool,
WebSearchTool,

View File

@@ -2,24 +2,12 @@ use crate::{AgentToolOutput, AnyAgentTool, ToolCallEventStream};
use agent_client_protocol::ToolKind;
use anyhow::{Result, anyhow, bail};
use collections::{BTreeMap, HashMap};
use context_server::{ContextServerId, client::NotificationSubscription};
use gpui::{App, AppContext, AsyncApp, Context, Entity, EventEmitter, SharedString, Task};
use context_server::ContextServerId;
use gpui::{App, Context, Entity, SharedString, Task};
use project::context_server_store::{ContextServerStatus, ContextServerStore};
use std::sync::Arc;
use util::ResultExt;
pub struct ContextServerPrompt {
pub server_id: ContextServerId,
pub prompt: context_server::types::Prompt,
}
pub enum ContextServerRegistryEvent {
ToolsChanged,
PromptsChanged,
}
impl EventEmitter<ContextServerRegistryEvent> for ContextServerRegistry {}
pub struct ContextServerRegistry {
server_store: Entity<ContextServerStore>,
registered_servers: HashMap<ContextServerId, RegisteredContextServer>,
@@ -28,10 +16,7 @@ pub struct ContextServerRegistry {
struct RegisteredContextServer {
tools: BTreeMap<SharedString, Arc<dyn AnyAgentTool>>,
prompts: BTreeMap<SharedString, ContextServerPrompt>,
load_tools: Task<Result<()>>,
load_prompts: Task<Result<()>>,
_tools_updated_subscription: Option<NotificationSubscription>,
}
impl ContextServerRegistry {
@@ -43,7 +28,6 @@ impl ContextServerRegistry {
};
for server in server_store.read(cx).running_servers() {
this.reload_tools_for_server(server.id(), cx);
this.reload_prompts_for_server(server.id(), cx);
}
this
}
@@ -72,88 +56,6 @@ impl ContextServerRegistry {
.map(|(id, server)| (id, &server.tools))
}
pub fn prompts(&self) -> impl Iterator<Item = &ContextServerPrompt> {
self.registered_servers
.values()
.flat_map(|server| server.prompts.values())
}
pub fn find_prompt(
&self,
server_id: Option<&ContextServerId>,
name: &str,
) -> Option<&ContextServerPrompt> {
if let Some(server_id) = server_id {
self.registered_servers
.get(server_id)
.and_then(|server| server.prompts.get(name))
} else {
self.registered_servers
.values()
.find_map(|server| server.prompts.get(name))
}
}
pub fn server_store(&self) -> &Entity<ContextServerStore> {
&self.server_store
}
fn get_or_register_server(
&mut self,
server_id: &ContextServerId,
cx: &mut Context<Self>,
) -> &mut RegisteredContextServer {
self.registered_servers
.entry(server_id.clone())
.or_insert_with(|| Self::init_registered_server(server_id, &self.server_store, cx))
}
fn init_registered_server(
server_id: &ContextServerId,
server_store: &Entity<ContextServerStore>,
cx: &mut Context<Self>,
) -> RegisteredContextServer {
let tools_updated_subscription = server_store
.read(cx)
.get_running_server(server_id)
.and_then(|server| {
let client = server.client()?;
if !client.capable(context_server::protocol::ServerCapability::Tools) {
return None;
}
let server_id = server.id();
let this = cx.entity().downgrade();
Some(client.on_notification(
"notifications/tools/list_changed",
Box::new(move |_params, cx: AsyncApp| {
let server_id = server_id.clone();
let this = this.clone();
cx.spawn(async move |cx| {
this.update(cx, |this, cx| {
log::info!(
"Received tools/list_changed notification for server {}",
server_id
);
this.reload_tools_for_server(server_id, cx);
})
})
.detach();
}),
))
});
RegisteredContextServer {
tools: BTreeMap::default(),
prompts: BTreeMap::default(),
load_tools: Task::ready(Ok(())),
load_prompts: Task::ready(Ok(())),
_tools_updated_subscription: tools_updated_subscription,
}
}
fn reload_tools_for_server(&mut self, server_id: ContextServerId, cx: &mut Context<Self>) {
let Some(server) = self.server_store.read(cx).get_running_server(&server_id) else {
return;
@@ -161,12 +63,17 @@ impl ContextServerRegistry {
let Some(client) = server.client() else {
return;
};
if !client.capable(context_server::protocol::ServerCapability::Tools) {
return;
}
let registered_server = self.get_or_register_server(&server_id, cx);
let registered_server =
self.registered_servers
.entry(server_id.clone())
.or_insert(RegisteredContextServer {
tools: BTreeMap::default(),
load_tools: Task::ready(Ok(())),
});
registered_server.load_tools = cx.spawn(async move |this, cx| {
let response = client
.request::<context_server::types::requests::ListTools>(())
@@ -187,49 +94,6 @@ impl ContextServerRegistry {
));
registered_server.tools.insert(tool.name(), tool);
}
cx.emit(ContextServerRegistryEvent::ToolsChanged);
cx.notify();
}
})
});
}
fn reload_prompts_for_server(&mut self, server_id: ContextServerId, cx: &mut Context<Self>) {
let Some(server) = self.server_store.read(cx).get_running_server(&server_id) else {
return;
};
let Some(client) = server.client() else {
return;
};
if !client.capable(context_server::protocol::ServerCapability::Prompts) {
return;
}
let registered_server = self.get_or_register_server(&server_id, cx);
registered_server.load_prompts = cx.spawn(async move |this, cx| {
let response = client
.request::<context_server::types::requests::PromptsList>(())
.await;
this.update(cx, |this, cx| {
let Some(registered_server) = this.registered_servers.get_mut(&server_id) else {
return;
};
registered_server.prompts.clear();
if let Some(response) = response.log_err() {
for prompt in response.prompts {
let name: SharedString = prompt.name.clone().into();
registered_server.prompts.insert(
name,
ContextServerPrompt {
server_id: server_id.clone(),
prompt,
},
);
}
cx.emit(ContextServerRegistryEvent::PromptsChanged);
cx.notify();
}
})
@@ -248,17 +112,9 @@ impl ContextServerRegistry {
ContextServerStatus::Starting => {}
ContextServerStatus::Running => {
self.reload_tools_for_server(server_id.clone(), cx);
self.reload_prompts_for_server(server_id.clone(), cx);
}
ContextServerStatus::Stopped | ContextServerStatus::Error(_) => {
if let Some(registered_server) = self.registered_servers.remove(server_id) {
if !registered_server.tools.is_empty() {
cx.emit(ContextServerRegistryEvent::ToolsChanged);
}
if !registered_server.prompts.is_empty() {
cx.emit(ContextServerRegistryEvent::PromptsChanged);
}
}
self.registered_servers.remove(server_id);
cx.notify();
}
}
@@ -395,39 +251,3 @@ impl AnyAgentTool for ContextServerTool {
Ok(())
}
}
pub fn get_prompt(
server_store: &Entity<ContextServerStore>,
server_id: &ContextServerId,
prompt_name: &str,
arguments: HashMap<String, String>,
cx: &mut AsyncApp,
) -> Task<Result<context_server::types::PromptsGetResponse>> {
let server = match cx.update(|cx| server_store.read(cx).get_running_server(server_id)) {
Ok(server) => server,
Err(error) => return Task::ready(Err(error)),
};
let Some(server) = server else {
return Task::ready(Err(anyhow::anyhow!("Context server not found")));
};
let Some(protocol) = server.client() else {
return Task::ready(Err(anyhow::anyhow!("Context server not initialized")));
};
let prompt_name = prompt_name.to_string();
cx.background_spawn(async move {
let response = protocol
.request::<context_server::types::requests::PromptsGet>(
context_server::types::PromptsGetParams {
name: prompt_name,
arguments: (!arguments.is_empty()).then(|| arguments),
meta: None,
},
)
.await?;
Ok(response)
})
}
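
The removed registration code above wires a "notifications/tools/list_changed" callback to a tool reload. Stripped of the gpui entities and async tasks, the subscription pattern reduces to roughly the following sketch (mock Client type; everything except the notification name is illustrative):

use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;

type Callback = Box<dyn Fn()>;

#[derive(Default)]
struct Client {
    // One callback per notification name.
    subscriptions: RefCell<HashMap<&'static str, Callback>>,
}

impl Client {
    fn on_notification(&self, name: &'static str, callback: Callback) {
        self.subscriptions.borrow_mut().insert(name, callback);
    }
    fn notify(&self, name: &str) {
        if let Some(callback) = self.subscriptions.borrow().get(name) {
            callback();
        }
    }
}

fn main() {
    let client = Rc::new(Client::default());
    let reload_count = Rc::new(RefCell::new(0));

    // Analogous to the registry reloading its tool list when the server
    // announces that its tool list changed.
    let count = reload_count.clone();
    client.on_notification(
        "notifications/tools/list_changed",
        Box::new(move || *count.borrow_mut() += 1),
    );

    client.notify("notifications/tools/list_changed");
    client.notify("notifications/tools/list_changed");
    assert_eq!(*reload_count.borrow(), 2);
}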

View File

@@ -273,9 +273,14 @@ impl AgentTool for EditFileTool {
};
let abs_path = project.read(cx).absolute_path(&project_path, cx);
if let Some(abs_path) = abs_path.clone() {
event_stream.update_fields(
ToolCallUpdateFields::new().locations(vec![acp::ToolCallLocation::new(abs_path)]),
);
event_stream.update_fields(ToolCallUpdateFields {
locations: Some(vec![acp::ToolCallLocation {
path: abs_path,
line: None,
meta: None,
}]),
..Default::default()
});
}
let authorize = self.authorize(&input, &event_stream, cx);
@@ -306,39 +311,20 @@ impl AgentTool for EditFileTool {
// Check if the file has been modified since the agent last read it
if let Some(abs_path) = abs_path.as_ref() {
let (last_read_mtime, current_mtime, is_dirty, has_save_tool, has_restore_tool) = self.thread.update(cx, |thread, cx| {
let (last_read_mtime, current_mtime, is_dirty) = self.thread.update(cx, |thread, cx| {
let last_read = thread.file_read_times.get(abs_path).copied();
let current = buffer.read(cx).file().and_then(|file| file.disk_state().mtime());
let dirty = buffer.read(cx).is_dirty();
let has_save = thread.has_tool("save_file");
let has_restore = thread.has_tool("restore_file_from_disk");
(last_read, current, dirty, has_save, has_restore)
(last_read, current, dirty)
})?;
// Check for unsaved changes first - these indicate modifications we don't know about
if is_dirty {
let message = match (has_save_tool, has_restore_tool) {
(true, true) => {
"This file has unsaved changes. Ask the user whether they want to keep or discard those changes. \
If they want to keep them, ask for confirmation then use the save_file tool to save the file, then retry this edit. \
If they want to discard them, ask for confirmation then use the restore_file_from_disk tool to restore the on-disk contents, then retry this edit."
}
(true, false) => {
"This file has unsaved changes. Ask the user whether they want to keep or discard those changes. \
If they want to keep them, ask for confirmation then use the save_file tool to save the file, then retry this edit. \
If they want to discard them, ask the user to manually revert the file, then inform you when it's ok to proceed."
}
(false, true) => {
"This file has unsaved changes. Ask the user whether they want to keep or discard those changes. \
If they want to keep them, ask the user to manually save the file, then inform you when it's ok to proceed. \
If they want to discard them, ask for confirmation then use the restore_file_from_disk tool to restore the on-disk contents, then retry this edit."
}
(false, false) => {
"This file has unsaved changes. Ask the user whether they want to keep or discard those changes, \
then ask them to save or revert the file manually and inform you when it's ok to proceed."
}
};
anyhow::bail!("{}", message);
anyhow::bail!(
"This file cannot be written to because it has unsaved changes. \
Please end the current conversation immediately by telling the user you want to write to this file (mention its path explicitly) but you can't write to it because it has unsaved changes. \
Ask the user to save that buffer's changes and to inform you when it's ok to proceed."
);
}
// Check if the file was modified on disk since we last read it
@@ -403,7 +389,10 @@ impl AgentTool for EditFileTool {
range.start.to_point(&buffer.snapshot()).row
}).ok();
if let Some(abs_path) = abs_path.clone() {
event_stream.update_fields(ToolCallUpdateFields::new().locations(vec![ToolCallLocation::new(abs_path).line(line)]));
event_stream.update_fields(ToolCallUpdateFields {
locations: Some(vec![ToolCallLocation { path: abs_path, line, meta: None }]),
..Default::default()
});
}
emitted_location = true;
}
@@ -2221,21 +2210,9 @@ mod tests {
assert!(result.is_err(), "Edit should fail when buffer is dirty");
let error_msg = result.unwrap_err().to_string();
assert!(
error_msg.contains("This file has unsaved changes."),
error_msg.contains("cannot be written to because it has unsaved changes"),
"Error should mention unsaved changes, got: {}",
error_msg
);
assert!(
error_msg.contains("keep or discard"),
"Error should ask whether to keep or discard changes, got: {}",
error_msg
);
// Since save_file and restore_file_from_disk tools aren't added to the thread,
// the error message should ask the user to manually save or revert
assert!(
error_msg.contains("save or revert the file manually"),
"Error should ask user to manually save or revert when tools aren't available, got: {}",
error_msg
);
}
}

View File

@@ -118,29 +118,33 @@ impl AgentTool for FindPathTool {
let paginated_matches: &[PathBuf] = &matches[cmp::min(input.offset, matches.len())
..cmp::min(input.offset + RESULTS_PER_PAGE, matches.len())];
event_stream.update_fields(
acp::ToolCallUpdateFields::new()
.title(if paginated_matches.is_empty() {
"No matches".into()
} else if paginated_matches.len() == 1 {
"1 match".into()
} else {
format!("{} matches", paginated_matches.len())
})
.content(
paginated_matches
.iter()
.map(|path| {
acp::ToolCallContent::Content(acp::Content::new(
acp::ContentBlock::ResourceLink(acp::ResourceLink::new(
path.to_string_lossy(),
format!("file://{}", path.display()),
)),
))
})
.collect::<Vec<_>>(),
),
);
event_stream.update_fields(acp::ToolCallUpdateFields {
title: Some(if paginated_matches.is_empty() {
"No matches".into()
} else if paginated_matches.len() == 1 {
"1 match".into()
} else {
format!("{} matches", paginated_matches.len())
}),
content: Some(
paginated_matches
.iter()
.map(|path| acp::ToolCallContent::Content {
content: acp::ContentBlock::ResourceLink(acp::ResourceLink {
uri: format!("file://{}", path.display()),
name: path.to_string_lossy().into(),
annotations: None,
description: None,
mime_type: None,
size: None,
title: None,
meta: None,
}),
})
.collect(),
),
..Default::default()
});
Ok(FindPathToolOutput {
offset: input.offset,
@@ -173,7 +177,7 @@ fn search_paths(glob: &str, project: Entity<Project>, cx: &mut App) -> Task<Resu
let mut results = Vec::new();
for snapshot in snapshots {
for entry in snapshot.entries(false, 0) {
if path_matcher.is_match(&snapshot.root_name().join(&entry.path)) {
if path_matcher.is_match(snapshot.root_name().join(&entry.path).as_std_path()) {
results.push(snapshot.absolutize(&entry.path));
}
}
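
The paginated slice near the top of this file's hunks clamps both the offset and the page end to the number of matches, so an out-of-range offset yields an empty page rather than a panic. A standalone sketch of that arithmetic (the page-size constant here is illustrative, not the real RESULTS_PER_PAGE value):

use std::cmp;

const RESULTS_PER_PAGE: usize = 3;

// Clamp both endpoints so an offset past the end returns an empty slice.
fn page<'a>(matches: &'a [&'a str], offset: usize) -> &'a [&'a str] {
    &matches[cmp::min(offset, matches.len())..cmp::min(offset + RESULTS_PER_PAGE, matches.len())]
}

fn main() {
    let matches = ["a.rs", "b.rs", "c.rs", "d.rs"];
    assert_eq!(page(&matches, 0), ["a.rs", "b.rs", "c.rs"]);
    assert_eq!(page(&matches, 3), ["d.rs"]);
    assert!(page(&matches, 10).is_empty());
}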

View File

@@ -32,21 +32,8 @@ pub struct GrepToolInput {
/// Do NOT specify a path here! This will only be matched against the code **content**.
pub regex: String,
/// A glob pattern for the paths of files to include in the search.
/// Supports standard glob patterns like "**/*.rs" or "frontend/src/**/*.ts".
/// Supports standard glob patterns like "**/*.rs" or "src/**/*.ts".
/// If omitted, all files in the project will be searched.
///
/// The glob pattern is matched against the full path including the project root directory.
///
/// <example>
/// If the project has the following root directories:
///
/// - /a/b/backend
/// - /c/d/frontend
///
/// Use "backend/**/*.rs" to search only Rust files in the backend root directory.
/// Use "frontend/src/**/*.ts" to search TypeScript files only in the frontend root directory (sub-directory "src").
/// Use "**/*.rs" to search Rust files across all root directories.
/// </example>
pub include_pattern: Option<String>,
/// Optional starting position for paginated results (0-based).
/// When not provided, starts from the beginning.
@@ -145,7 +132,8 @@ impl AgentTool for GrepTool {
let exclude_patterns = global_settings
.file_scan_exclusions
.sources()
.chain(global_settings.private_files.sources());
.iter()
.chain(global_settings.private_files.sources().iter());
match PathMatcher::new(exclude_patterns, path_style) {
Ok(matcher) => matcher,
@@ -322,6 +310,7 @@ mod tests {
use super::*;
use gpui::{TestAppContext, UpdateGlobal};
use language::{Language, LanguageConfig, LanguageMatcher};
use project::{FakeFs, Project};
use serde_json::json;
use settings::SettingsStore;
@@ -563,7 +552,7 @@ mod tests {
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
project.update(cx, |project, _cx| {
project.languages().add(language::rust_lang())
project.languages().add(rust_lang().into())
});
project
@@ -792,6 +781,22 @@ mod tests {
});
}
fn rust_lang() -> Language {
Language::new(
LanguageConfig {
name: "Rust".into(),
matcher: LanguageMatcher {
path_suffixes: vec!["rs".to_string()],
..Default::default()
},
..Default::default()
},
Some(tree_sitter_rust::LANGUAGE.into()),
)
.with_outline_query(include_str!("../../../languages/src/rust/outline.scm"))
.unwrap()
}
#[gpui::test]
async fn test_grep_security_boundaries(cx: &mut TestAppContext) {
init_test(cx);

View File

@@ -17,9 +17,6 @@ use crate::{AgentTool, Thread, ToolCallEventStream, outline};
/// Reads the content of the given file in the project.
///
/// - Never attempt to read a path that hasn't been previously mentioned.
/// - For large files, this tool returns a file outline with symbol names and line numbers instead of the full content.
/// This outline IS a successful response - use the line numbers to read specific sections with start_line/end_line.
/// Do NOT retry reading the same file without line numbers if you receive an outline.
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct ReadFileToolInput {
/// The relative path of the file to read.
@@ -153,10 +150,14 @@ impl AgentTool for ReadFileTool {
let file_path = input.path.clone();
event_stream.update_fields(ToolCallUpdateFields::new().locations(vec![
acp::ToolCallLocation::new(&abs_path)
.line(input.start_line.map(|line| line.saturating_sub(1))),
]));
event_stream.update_fields(ToolCallUpdateFields {
locations: Some(vec![acp::ToolCallLocation {
path: abs_path.clone(),
line: input.start_line.map(|line| line.saturating_sub(1)),
meta: None,
}]),
..Default::default()
});
if image_store::is_image_file(&self.project, &project_path, cx) {
return cx.spawn(async move |cx| {
@@ -253,15 +254,16 @@ impl AgentTool for ReadFileTool {
if buffer_content.is_outline {
Ok(formatdoc! {"
SUCCESS: File outline retrieved. This file is too large to read all at once, so the outline below shows the file's structure with line numbers.
IMPORTANT: Do NOT retry this call without line numbers - you will get the same outline.
Instead, use the line numbers below to read specific sections by calling this tool again with start_line and end_line parameters.
This file was too big to read all at once.
{}
NEXT STEPS: To read a specific symbol's implementation, call read_file with the same path plus start_line and end_line from the outline above.
For example, to read a function shown as [L100-150], use start_line: 100 and end_line: 150.", buffer_content.text
Using the line numbers in this outline, you can call this tool again
while specifying the start_line and end_line fields to see the
implementations of symbols in the outline.
Alternatively, you can fall back to the `grep` tool (if available)
to search the file for specific content.", buffer_content.text
}
.into())
} else {
@@ -285,9 +287,12 @@ impl AgentTool for ReadFileTool {
text,
}
.to_string();
event_stream.update_fields(ToolCallUpdateFields::new().content(vec![
acp::ToolCallContent::Content(acp::Content::new(markdown)),
]));
event_stream.update_fields(ToolCallUpdateFields {
content: Some(vec![acp::ToolCallContent::Content {
content: markdown.into(),
}]),
..Default::default()
})
}
})?;
@@ -301,6 +306,7 @@ mod test {
use super::*;
use crate::{ContextServerRegistry, Templates, Thread};
use gpui::{AppContext, TestAppContext, UpdateGlobal as _};
use language::{Language, LanguageConfig, LanguageMatcher, tree_sitter_rust};
use language_model::fake_provider::FakeLanguageModel;
use project::{FakeFs, Project};
use prompt_store::ProjectContext;
@@ -404,7 +410,7 @@ mod test {
.await;
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let language_registry = project.read_with(cx, |project, _| project.languages().clone());
language_registry.add(language::rust_lang());
language_registry.add(Arc::new(rust_lang()));
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(project.read(cx).context_server_store(), cx));
@@ -434,7 +440,7 @@ mod test {
let content = result.to_str().unwrap();
assert_eq!(
content.lines().skip(7).take(6).collect::<Vec<_>>(),
content.lines().skip(4).take(6).collect::<Vec<_>>(),
vec![
"struct Test0 [L1-4]",
" a [L2]",
@@ -469,7 +475,7 @@ mod test {
pretty_assertions::assert_eq!(
content
.lines()
.skip(7)
.skip(4)
.take(expected_content.len())
.collect::<Vec<_>>(),
expected_content
@@ -594,6 +600,49 @@ mod test {
});
}
fn rust_lang() -> Language {
Language::new(
LanguageConfig {
name: "Rust".into(),
matcher: LanguageMatcher {
path_suffixes: vec!["rs".to_string()],
..Default::default()
},
..Default::default()
},
Some(tree_sitter_rust::LANGUAGE.into()),
)
.with_outline_query(
r#"
(line_comment) @annotation
(struct_item
"struct" @context
name: (_) @name) @item
(enum_item
"enum" @context
name: (_) @name) @item
(enum_variant
name: (_) @name) @item
(field_declaration
name: (_) @name) @item
(impl_item
"impl" @context
trait: (_)? @name
"for"? @context
type: (_) @name
body: (_ "{" (_)* "}")) @item
(function_item
"fn" @context
name: (_) @name) @item
(mod_item
"mod" @context
name: (_) @name) @item
"#,
)
.unwrap()
}
#[gpui::test]
async fn test_read_file_security(cx: &mut TestAppContext) {
init_test(cx);
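
The rewritten outline message above tells the model to issue a second read with an explicit line range instead of retrying the unranged read. With a simplified stand-in for the input type (the real ReadFileToolInput also derives the serde/JsonSchema traits, and its exact integer types may differ), the two-step flow looks roughly like this:

// Stand-in struct; field names mirror the ones used above.
#[derive(Debug, Clone)]
struct ReadFileToolInput {
    path: String,
    start_line: Option<u32>,
    end_line: Option<u32>,
}

fn main() {
    // First call: no range. For a large file the tool answers with an outline
    // whose entries carry ranges such as "fn parse [L100-150]".
    let first = ReadFileToolInput {
        path: "src/parser.rs".into(),
        start_line: None,
        end_line: None,
    };

    // Follow-up call: read only the symbol of interest using the outline's range.
    let follow_up = ReadFileToolInput {
        start_line: Some(100),
        end_line: Some(150),
        ..first.clone()
    };
    assert_eq!(follow_up.path, first.path);
    println!("{follow_up:?}");
}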

View File

@@ -1,352 +0,0 @@
use agent_client_protocol as acp;
use anyhow::Result;
use collections::FxHashSet;
use gpui::{App, Entity, SharedString, Task};
use language::Buffer;
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
use std::sync::Arc;
use crate::{AgentTool, ToolCallEventStream};
/// Discards unsaved changes in open buffers by reloading file contents from disk.
///
/// Use this tool when:
/// - You attempted to edit files but they have unsaved changes the user does not want to keep.
/// - You want to reset files to the on-disk state before retrying an edit.
///
/// Only use this tool after asking the user for permission, because it will discard unsaved changes.
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct RestoreFileFromDiskToolInput {
/// The paths of the files to restore from disk.
pub paths: Vec<PathBuf>,
}
pub struct RestoreFileFromDiskTool {
project: Entity<Project>,
}
impl RestoreFileFromDiskTool {
pub fn new(project: Entity<Project>) -> Self {
Self { project }
}
}
impl AgentTool for RestoreFileFromDiskTool {
type Input = RestoreFileFromDiskToolInput;
type Output = String;
fn name() -> &'static str {
"restore_file_from_disk"
}
fn kind() -> acp::ToolKind {
acp::ToolKind::Other
}
fn initial_title(
&self,
input: Result<Self::Input, serde_json::Value>,
_cx: &mut App,
) -> SharedString {
match input {
Ok(input) if input.paths.len() == 1 => "Restore file from disk".into(),
Ok(input) => format!("Restore {} files from disk", input.paths.len()).into(),
Err(_) => "Restore files from disk".into(),
}
}
fn run(
self: Arc<Self>,
input: Self::Input,
_event_stream: ToolCallEventStream,
cx: &mut App,
) -> Task<Result<String>> {
let project = self.project.clone();
let input_paths = input.paths;
cx.spawn(async move |cx| {
let mut buffers_to_reload: FxHashSet<Entity<Buffer>> = FxHashSet::default();
let mut restored_paths: Vec<PathBuf> = Vec::new();
let mut clean_paths: Vec<PathBuf> = Vec::new();
let mut not_found_paths: Vec<PathBuf> = Vec::new();
let mut open_errors: Vec<(PathBuf, String)> = Vec::new();
let mut dirty_check_errors: Vec<(PathBuf, String)> = Vec::new();
let mut reload_errors: Vec<String> = Vec::new();
for path in input_paths {
let project_path =
project.read_with(cx, |project, cx| project.find_project_path(&path, cx));
let project_path = match project_path {
Ok(Some(project_path)) => project_path,
Ok(None) => {
not_found_paths.push(path);
continue;
}
Err(error) => {
open_errors.push((path, error.to_string()));
continue;
}
};
let open_buffer_task =
project.update(cx, |project, cx| project.open_buffer(project_path, cx));
let buffer = match open_buffer_task {
Ok(task) => match task.await {
Ok(buffer) => buffer,
Err(error) => {
open_errors.push((path, error.to_string()));
continue;
}
},
Err(error) => {
open_errors.push((path, error.to_string()));
continue;
}
};
let is_dirty = match buffer.read_with(cx, |buffer, _| buffer.is_dirty()) {
Ok(is_dirty) => is_dirty,
Err(error) => {
dirty_check_errors.push((path, error.to_string()));
continue;
}
};
if is_dirty {
buffers_to_reload.insert(buffer);
restored_paths.push(path);
} else {
clean_paths.push(path);
}
}
if !buffers_to_reload.is_empty() {
let reload_task = project.update(cx, |project, cx| {
project.reload_buffers(buffers_to_reload, true, cx)
});
match reload_task {
Ok(task) => {
if let Err(error) = task.await {
reload_errors.push(error.to_string());
}
}
Err(error) => {
reload_errors.push(error.to_string());
}
}
}
let mut lines: Vec<String> = Vec::new();
if !restored_paths.is_empty() {
lines.push(format!("Restored {} file(s).", restored_paths.len()));
}
if !clean_paths.is_empty() {
lines.push(format!("{} clean.", clean_paths.len()));
}
if !not_found_paths.is_empty() {
lines.push(format!("Not found ({}):", not_found_paths.len()));
for path in &not_found_paths {
lines.push(format!("- {}", path.display()));
}
}
if !open_errors.is_empty() {
lines.push(format!("Open failed ({}):", open_errors.len()));
for (path, error) in &open_errors {
lines.push(format!("- {}: {}", path.display(), error));
}
}
if !dirty_check_errors.is_empty() {
lines.push(format!(
"Dirty check failed ({}):",
dirty_check_errors.len()
));
for (path, error) in &dirty_check_errors {
lines.push(format!("- {}: {}", path.display(), error));
}
}
if !reload_errors.is_empty() {
lines.push(format!("Reload failed ({}):", reload_errors.len()));
for error in &reload_errors {
lines.push(format!("- {}", error));
}
}
if lines.is_empty() {
Ok("No paths provided.".to_string())
} else {
Ok(lines.join("\n"))
}
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use fs::Fs;
use gpui::TestAppContext;
use language::LineEnding;
use project::FakeFs;
use serde_json::json;
use settings::SettingsStore;
use util::path;
fn init_test(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
});
}
#[gpui::test]
async fn test_restore_file_from_disk_output_and_effects(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(
"/root",
json!({
"dirty.txt": "on disk: dirty\n",
"clean.txt": "on disk: clean\n",
}),
)
.await;
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let tool = Arc::new(RestoreFileFromDiskTool::new(project.clone()));
// Make dirty.txt dirty in-memory by saving different content into the buffer without saving to disk.
let dirty_project_path = project.read_with(cx, |project, cx| {
project
.find_project_path("root/dirty.txt", cx)
.expect("dirty.txt should exist in project")
});
let dirty_buffer = project
.update(cx, |project, cx| {
project.open_buffer(dirty_project_path, cx)
})
.await
.unwrap();
dirty_buffer.update(cx, |buffer, cx| {
buffer.edit([(0..buffer.len(), "in memory: dirty\n")], None, cx);
});
assert!(
dirty_buffer.read_with(cx, |buffer, _| buffer.is_dirty()),
"dirty.txt buffer should be dirty before restore"
);
// Ensure clean.txt is opened but remains clean.
let clean_project_path = project.read_with(cx, |project, cx| {
project
.find_project_path("root/clean.txt", cx)
.expect("clean.txt should exist in project")
});
let clean_buffer = project
.update(cx, |project, cx| {
project.open_buffer(clean_project_path, cx)
})
.await
.unwrap();
assert!(
!clean_buffer.read_with(cx, |buffer, _| buffer.is_dirty()),
"clean.txt buffer should start clean"
);
let output = cx
.update(|cx| {
tool.clone().run(
RestoreFileFromDiskToolInput {
paths: vec![
PathBuf::from("root/dirty.txt"),
PathBuf::from("root/clean.txt"),
],
},
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
// Output should mention restored + clean.
assert!(
output.contains("Restored 1 file(s)."),
"expected restored count line, got:\n{output}"
);
assert!(
output.contains("1 clean."),
"expected clean count line, got:\n{output}"
);
// Effect: dirty buffer should be restored back to disk content and become clean.
let dirty_text = dirty_buffer.read_with(cx, |buffer, _| buffer.text());
assert_eq!(
dirty_text, "on disk: dirty\n",
"dirty.txt buffer should be restored to disk contents"
);
assert!(
!dirty_buffer.read_with(cx, |buffer, _| buffer.is_dirty()),
"dirty.txt buffer should not be dirty after restore"
);
// Disk contents should be unchanged (restore-from-disk should not write).
let disk_dirty = fs.load(path!("/root/dirty.txt").as_ref()).await.unwrap();
assert_eq!(disk_dirty, "on disk: dirty\n");
// Sanity: clean buffer should remain clean and unchanged.
let clean_text = clean_buffer.read_with(cx, |buffer, _| buffer.text());
assert_eq!(clean_text, "on disk: clean\n");
assert!(
!clean_buffer.read_with(cx, |buffer, _| buffer.is_dirty()),
"clean.txt buffer should remain clean"
);
// Test empty paths case.
let output = cx
.update(|cx| {
tool.clone().run(
RestoreFileFromDiskToolInput { paths: vec![] },
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
assert_eq!(output, "No paths provided.");
// Test not-found path case (path outside the project root).
let output = cx
.update(|cx| {
tool.clone().run(
RestoreFileFromDiskToolInput {
paths: vec![PathBuf::from("nonexistent/path.txt")],
},
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
assert!(
output.contains("Not found (1):"),
"expected not-found header line, got:\n{output}"
);
assert!(
output.contains("- nonexistent/path.txt"),
"expected not-found path bullet, got:\n{output}"
);
let _ = LineEnding::Unix; // keep import used if the buffer edit API changes
}
}

View File

@@ -1,351 +0,0 @@
use agent_client_protocol as acp;
use anyhow::Result;
use collections::FxHashSet;
use gpui::{App, Entity, SharedString, Task};
use language::Buffer;
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
use std::sync::Arc;
use crate::{AgentTool, ToolCallEventStream};
/// Saves files that have unsaved changes.
///
/// Use this tool when you need to edit files but they have unsaved changes that must be saved first.
/// Only use this tool after asking the user for permission to save their unsaved changes.
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct SaveFileToolInput {
/// The paths of the files to save.
pub paths: Vec<PathBuf>,
}
pub struct SaveFileTool {
project: Entity<Project>,
}
impl SaveFileTool {
pub fn new(project: Entity<Project>) -> Self {
Self { project }
}
}
impl AgentTool for SaveFileTool {
type Input = SaveFileToolInput;
type Output = String;
fn name() -> &'static str {
"save_file"
}
fn kind() -> acp::ToolKind {
acp::ToolKind::Other
}
fn initial_title(
&self,
input: Result<Self::Input, serde_json::Value>,
_cx: &mut App,
) -> SharedString {
match input {
Ok(input) if input.paths.len() == 1 => "Save file".into(),
Ok(input) => format!("Save {} files", input.paths.len()).into(),
Err(_) => "Save files".into(),
}
}
fn run(
self: Arc<Self>,
input: Self::Input,
_event_stream: ToolCallEventStream,
cx: &mut App,
) -> Task<Result<String>> {
let project = self.project.clone();
let input_paths = input.paths;
cx.spawn(async move |cx| {
let mut buffers_to_save: FxHashSet<Entity<Buffer>> = FxHashSet::default();
let mut saved_paths: Vec<PathBuf> = Vec::new();
let mut clean_paths: Vec<PathBuf> = Vec::new();
let mut not_found_paths: Vec<PathBuf> = Vec::new();
let mut open_errors: Vec<(PathBuf, String)> = Vec::new();
let mut dirty_check_errors: Vec<(PathBuf, String)> = Vec::new();
let mut save_errors: Vec<(String, String)> = Vec::new();
for path in input_paths {
let project_path =
project.read_with(cx, |project, cx| project.find_project_path(&path, cx));
let project_path = match project_path {
Ok(Some(project_path)) => project_path,
Ok(None) => {
not_found_paths.push(path);
continue;
}
Err(error) => {
open_errors.push((path, error.to_string()));
continue;
}
};
let open_buffer_task =
project.update(cx, |project, cx| project.open_buffer(project_path, cx));
let buffer = match open_buffer_task {
Ok(task) => match task.await {
Ok(buffer) => buffer,
Err(error) => {
open_errors.push((path, error.to_string()));
continue;
}
},
Err(error) => {
open_errors.push((path, error.to_string()));
continue;
}
};
let is_dirty = match buffer.read_with(cx, |buffer, _| buffer.is_dirty()) {
Ok(is_dirty) => is_dirty,
Err(error) => {
dirty_check_errors.push((path, error.to_string()));
continue;
}
};
if is_dirty {
buffers_to_save.insert(buffer);
saved_paths.push(path);
} else {
clean_paths.push(path);
}
}
// Save each buffer individually since there's no batch save API.
for buffer in buffers_to_save {
let path_for_buffer = match buffer.read_with(cx, |buffer, _| {
buffer
.file()
.map(|file| file.path().to_rel_path_buf())
.map(|path| path.as_rel_path().as_unix_str().to_owned())
}) {
Ok(path) => path.unwrap_or_else(|| "<unknown>".to_string()),
Err(error) => {
save_errors.push(("<unknown>".to_string(), error.to_string()));
continue;
}
};
let save_task = project.update(cx, |project, cx| project.save_buffer(buffer, cx));
match save_task {
Ok(task) => {
if let Err(error) = task.await {
save_errors.push((path_for_buffer, error.to_string()));
}
}
Err(error) => {
save_errors.push((path_for_buffer, error.to_string()));
}
}
}
let mut lines: Vec<String> = Vec::new();
if !saved_paths.is_empty() {
lines.push(format!("Saved {} file(s).", saved_paths.len()));
}
if !clean_paths.is_empty() {
lines.push(format!("{} clean.", clean_paths.len()));
}
if !not_found_paths.is_empty() {
lines.push(format!("Not found ({}):", not_found_paths.len()));
for path in &not_found_paths {
lines.push(format!("- {}", path.display()));
}
}
if !open_errors.is_empty() {
lines.push(format!("Open failed ({}):", open_errors.len()));
for (path, error) in &open_errors {
lines.push(format!("- {}: {}", path.display(), error));
}
}
if !dirty_check_errors.is_empty() {
lines.push(format!(
"Dirty check failed ({}):",
dirty_check_errors.len()
));
for (path, error) in &dirty_check_errors {
lines.push(format!("- {}: {}", path.display(), error));
}
}
if !save_errors.is_empty() {
lines.push(format!("Save failed ({}):", save_errors.len()));
for (path, error) in &save_errors {
lines.push(format!("- {}: {}", path, error));
}
}
if lines.is_empty() {
Ok("No paths provided.".to_string())
} else {
Ok(lines.join("\n"))
}
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use fs::Fs;
use gpui::TestAppContext;
use project::FakeFs;
use serde_json::json;
use settings::SettingsStore;
use util::path;
fn init_test(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
});
}
#[gpui::test]
async fn test_save_file_output_and_effects(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(
"/root",
json!({
"dirty.txt": "on disk: dirty\n",
"clean.txt": "on disk: clean\n",
}),
)
.await;
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let tool = Arc::new(SaveFileTool::new(project.clone()));
// Make dirty.txt dirty in-memory.
let dirty_project_path = project.read_with(cx, |project, cx| {
project
.find_project_path("root/dirty.txt", cx)
.expect("dirty.txt should exist in project")
});
let dirty_buffer = project
.update(cx, |project, cx| {
project.open_buffer(dirty_project_path, cx)
})
.await
.unwrap();
dirty_buffer.update(cx, |buffer, cx| {
buffer.edit([(0..buffer.len(), "in memory: dirty\n")], None, cx);
});
assert!(
dirty_buffer.read_with(cx, |buffer, _| buffer.is_dirty()),
"dirty.txt buffer should be dirty before save"
);
// Ensure clean.txt is opened but remains clean.
let clean_project_path = project.read_with(cx, |project, cx| {
project
.find_project_path("root/clean.txt", cx)
.expect("clean.txt should exist in project")
});
let clean_buffer = project
.update(cx, |project, cx| {
project.open_buffer(clean_project_path, cx)
})
.await
.unwrap();
assert!(
!clean_buffer.read_with(cx, |buffer, _| buffer.is_dirty()),
"clean.txt buffer should start clean"
);
let output = cx
.update(|cx| {
tool.clone().run(
SaveFileToolInput {
paths: vec![
PathBuf::from("root/dirty.txt"),
PathBuf::from("root/clean.txt"),
],
},
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
// Output should mention saved + clean.
assert!(
output.contains("Saved 1 file(s)."),
"expected saved count line, got:\n{output}"
);
assert!(
output.contains("1 clean."),
"expected clean count line, got:\n{output}"
);
// Effect: dirty buffer should now be clean and disk should have new content.
assert!(
!dirty_buffer.read_with(cx, |buffer, _| buffer.is_dirty()),
"dirty.txt buffer should not be dirty after save"
);
let disk_dirty = fs.load(path!("/root/dirty.txt").as_ref()).await.unwrap();
assert_eq!(
disk_dirty, "in memory: dirty\n",
"dirty.txt disk content should be updated"
);
// Sanity: clean buffer should remain clean and disk unchanged.
let disk_clean = fs.load(path!("/root/clean.txt").as_ref()).await.unwrap();
assert_eq!(disk_clean, "on disk: clean\n");
// Test empty paths case.
let output = cx
.update(|cx| {
tool.clone().run(
SaveFileToolInput { paths: vec![] },
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
assert_eq!(output, "No paths provided.");
// Test not-found path case.
let output = cx
.update(|cx| {
tool.clone().run(
SaveFileToolInput {
paths: vec![PathBuf::from("nonexistent/path.txt")],
},
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
assert!(
output.contains("Not found (1):"),
"expected not-found header line, got:\n{output}"
);
assert!(
output.contains("- nonexistent/path.txt"),
"expected not-found path bullet, got:\n{output}"
);
}
}

View File

@@ -1,7 +1,6 @@
use agent_client_protocol as acp;
use anyhow::Result;
use futures::FutureExt as _;
use gpui::{App, AppContext, Entity, SharedString, Task};
use gpui::{App, Entity, SharedString, Task};
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
@@ -9,7 +8,6 @@ use std::{
path::{Path, PathBuf},
rc::Rc,
sync::Arc,
time::Duration,
};
use util::markdown::MarkdownInlineCode;
@@ -27,17 +25,13 @@ const COMMAND_OUTPUT_LIMIT: u64 = 16 * 1024;
///
/// Do not use this tool for commands that run indefinitely, such as servers (like `npm run start`, `npm run dev`, `python -m http.server`, etc) or file watchers that don't terminate on their own.
///
/// For potentially long-running commands, prefer specifying `timeout_ms` to bound runtime and prevent indefinite hangs.
///
/// Remember that each invocation of this tool will spawn a new shell process, so you can't rely on any state from previous invocations.
#[derive(Clone, Debug, Serialize, Deserialize, JsonSchema)]
pub struct TerminalToolInput {
/// The one-liner command to execute.
pub command: String,
command: String,
/// Working directory for the command. This must be one of the root directories of the project.
pub cd: String,
/// Optional maximum runtime (in milliseconds). If exceeded, the running terminal task is killed.
pub timeout_ms: Option<u64>,
cd: String,
}
pub struct TerminalTool {
@@ -118,30 +112,12 @@ impl AgentTool for TerminalTool {
.await?;
let terminal_id = terminal.id(cx)?;
event_stream.update_fields(acp::ToolCallUpdateFields::new().content(vec![
acp::ToolCallContent::Terminal(acp::Terminal::new(terminal_id)),
]));
let timeout = input.timeout_ms.map(Duration::from_millis);
let exit_status = match timeout {
Some(timeout) => {
let wait_for_exit = terminal.wait_for_exit(cx)?;
let timeout_task = cx.background_spawn(async move {
smol::Timer::after(timeout).await;
});
futures::select! {
status = wait_for_exit.clone().fuse() => status,
_ = timeout_task.fuse() => {
terminal.kill(cx)?;
wait_for_exit.await
}
}
}
None => terminal.wait_for_exit(cx)?.await,
};
event_stream.update_fields(acp::ToolCallUpdateFields {
content: Some(vec![acp::ToolCallContent::Terminal { terminal_id }]),
..Default::default()
});
let exit_status = terminal.wait_for_exit(cx)?.await;
let output = terminal.current_output(cx)?;
Ok(process_content(output, &input.command, exit_status))
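
The removed `timeout_ms` branch raced the terminal's exit future against a timer and killed the task when the deadline won. A self-contained sketch of that race, using `futures::future::select` and plain `smol` in place of the gpui task and terminal handles (assumes the `futures` and `smol` crates; the real code uses the `futures::select!` macro):

use futures::future::{self, Either};
use std::time::Duration;

fn main() {
    smol::block_on(async {
        // Stand-in for `terminal.wait_for_exit(cx)`: resolves to an exit code
        // once the "command" finishes.
        let wait_for_exit = Box::pin(async {
            smol::Timer::after(Duration::from_millis(50)).await;
            0i32
        });
        // Stand-in for the timeout task derived from `input.timeout_ms`.
        let timeout = Box::pin(smol::Timer::after(Duration::from_millis(500)));

        let exit_status = match future::select(wait_for_exit, timeout).await {
            // The command exited before the deadline.
            Either::Left((status, _timeout)) => Some(status),
            // Deadline hit first: the real tool kills the terminal here and then
            // awaits the (now forced) exit status.
            Either::Right((_deadline, _pending_exit)) => None,
        };
        assert_eq!(exit_status, Some(0));
    });
}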

View File

@@ -43,8 +43,10 @@ impl AgentTool for ThinkingTool {
event_stream: ToolCallEventStream,
_cx: &mut App,
) -> Task<Result<String>> {
event_stream
.update_fields(acp::ToolCallUpdateFields::new().content(vec![input.content.into()]));
event_stream.update_fields(acp::ToolCallUpdateFields {
content: Some(vec![input.content.into()]),
..Default::default()
});
Task::ready(Ok("Finished thinking.".to_string()))
}
}

View File

@@ -76,8 +76,10 @@ impl AgentTool for WebSearchTool {
let response = match search_task.await {
Ok(response) => response,
Err(err) => {
event_stream
.update_fields(acp::ToolCallUpdateFields::new().title("Web Search Failed"));
event_stream.update_fields(acp::ToolCallUpdateFields {
title: Some("Web Search Failed".to_string()),
..Default::default()
});
return Err(err);
}
};
@@ -105,23 +107,26 @@ fn emit_update(response: &WebSearchResponse, event_stream: &ToolCallEventStream)
} else {
format!("{} results", response.results.len())
};
event_stream.update_fields(
acp::ToolCallUpdateFields::new()
.title(format!("Searched the web: {result_text}"))
.content(
response
.results
.iter()
.map(|result| {
acp::ToolCallContent::Content(acp::Content::new(
acp::ContentBlock::ResourceLink(
acp::ResourceLink::new(result.title.clone(), result.url.clone())
.title(result.title.clone())
.description(result.text.clone()),
),
))
})
.collect::<Vec<_>>(),
),
);
event_stream.update_fields(acp::ToolCallUpdateFields {
title: Some(format!("Searched the web: {result_text}")),
content: Some(
response
.results
.iter()
.map(|result| acp::ToolCallContent::Content {
content: acp::ContentBlock::ResourceLink(acp::ResourceLink {
name: result.title.clone(),
uri: result.url.clone(),
title: Some(result.title.clone()),
description: Some(result.text.clone()),
mime_type: None,
annotations: None,
size: None,
meta: None,
}),
})
.collect(),
),
..Default::default()
});
}

View File

@@ -9,8 +9,6 @@ use futures::io::BufReader;
use project::Project;
use project::agent_server_store::AgentServerCommand;
use serde::Deserialize;
use settings::Settings as _;
use task::ShellBuilder;
use util::ResultExt as _;
use std::path::PathBuf;
@@ -23,7 +21,7 @@ use gpui::{App, AppContext as _, AsyncApp, Entity, SharedString, Task, WeakEntit
use acp_thread::{AcpThread, AuthRequired, LoadError, TerminalProviderEvent};
use terminal::TerminalBuilder;
use terminal::terminal_settings::{AlternateScroll, CursorShape, TerminalSettings};
use terminal::terminal_settings::{AlternateScroll, CursorShape};
#[derive(Debug, Error)]
#[error("Unsupported version")]
@@ -31,7 +29,7 @@ pub struct UnsupportedVersion;
pub struct AcpConnection {
server_name: SharedString,
telemetry_id: SharedString,
telemetry_id: &'static str,
connection: Rc<acp::ClientSideConnection>,
sessions: Rc<RefCell<HashMap<acp::SessionId, AcpSession>>>,
auth_methods: Vec<acp::AuthMethod>,
@@ -56,6 +54,7 @@ pub struct AcpSession {
pub async fn connect(
server_name: SharedString,
telemetry_id: &'static str,
command: AgentServerCommand,
root_dir: &Path,
default_mode: Option<acp::SessionModeId>,
@@ -65,6 +64,7 @@ pub async fn connect(
) -> Result<Rc<dyn AgentConnection>> {
let conn = AcpConnection::stdio(
server_name,
telemetry_id,
command.clone(),
root_dir,
default_mode,
@@ -76,11 +76,12 @@ pub async fn connect(
Ok(Rc::new(conn) as _)
}
const MINIMUM_SUPPORTED_VERSION: acp::ProtocolVersion = acp::ProtocolVersion::V1;
const MINIMUM_SUPPORTED_VERSION: acp::ProtocolVersion = acp::V1;
impl AcpConnection {
pub async fn stdio(
server_name: SharedString,
telemetry_id: &'static str,
command: AgentServerCommand,
root_dir: &Path,
default_mode: Option<acp::SessionModeId>,
@@ -88,11 +89,9 @@ impl AcpConnection {
is_remote: bool,
cx: &mut AsyncApp,
) -> Result<Self> {
let shell = cx.update(|cx| TerminalSettings::get(None, cx).shell.clone())?;
let builder = ShellBuilder::new(&shell, cfg!(windows)).non_interactive();
let mut child =
builder.build_command(Some(command.path.display().to_string()), &command.args);
let mut child = util::command::new_smol_command(&command.path);
child
.args(command.args.iter().map(|arg| arg.as_str()))
.envs(command.env.iter().flatten())
.stdin(std::process::Stdio::piped())
.stdout(std::process::Stdio::piped())
@@ -175,38 +174,34 @@ impl AcpConnection {
})?;
let response = connection
.initialize(
acp::InitializeRequest::new(acp::ProtocolVersion::V1)
.client_capabilities(
acp::ClientCapabilities::new()
.fs(acp::FileSystemCapability::new()
.read_text_file(true)
.write_text_file(true))
.terminal(true)
// Experimental: Allow for rendering terminal output from the agents
.meta(acp::Meta::from_iter([
("terminal_output".into(), true.into()),
("terminal-auth".into(), true.into()),
])),
)
.client_info(
acp::Implementation::new("zed", version)
.title(release_channel.map(ToOwned::to_owned)),
),
)
.initialize(acp::InitializeRequest {
protocol_version: acp::VERSION,
client_capabilities: acp::ClientCapabilities {
fs: acp::FileSystemCapability {
read_text_file: true,
write_text_file: true,
meta: None,
},
terminal: true,
meta: Some(serde_json::json!({
// Experimental: Allow for rendering terminal output from the agents
"terminal_output": true,
"terminal-auth": true,
})),
},
client_info: Some(acp::Implementation {
name: "zed".to_owned(),
title: release_channel.map(|c| c.to_owned()),
version,
}),
meta: None,
})
.await?;
if response.protocol_version < MINIMUM_SUPPORTED_VERSION {
return Err(UnsupportedVersion.into());
}
let telemetry_id = response
.agent_info
// Use the one the agent provides if we have one
.map(|info| info.name.into())
// Otherwise, just use the name
.unwrap_or_else(|| server_name.clone());
Ok(Self {
auth_methods: response.auth_methods,
root_dir: root_dir.to_owned(),
@@ -241,8 +236,8 @@ impl Drop for AcpConnection {
}
impl AgentConnection for AcpConnection {
fn telemetry_id(&self) -> SharedString {
self.telemetry_id.clone()
fn telemetry_id(&self) -> &'static str {
self.telemetry_id
}
fn new_thread(
@@ -258,13 +253,14 @@ impl AgentConnection for AcpConnection {
let default_model = self.default_model.clone();
let cwd = cwd.to_path_buf();
let context_server_store = project.read(cx).context_server_store().read(cx);
let mcp_servers = if project.read(cx).is_local() {
context_server_store
.configured_server_ids()
.iter()
.filter_map(|id| {
let configuration = context_server_store.configuration_for_server(id)?;
match &*configuration {
let mcp_servers =
if project.read(cx).is_local() {
context_server_store
.configured_server_ids()
.iter()
.filter_map(|id| {
let configuration = context_server_store.configuration_for_server(id)?;
match &*configuration {
project::context_server_store::ContextServerConfiguration::Custom {
command,
..
@@ -272,47 +268,53 @@ impl AgentConnection for AcpConnection {
| project::context_server_store::ContextServerConfiguration::Extension {
command,
..
} => Some(acp::McpServer::Stdio(
acp::McpServerStdio::new(id.0.to_string(), &command.path)
.args(command.args.clone())
.env(if let Some(env) = command.env.as_ref() {
env.iter()
.map(|(name, value)| acp::EnvVariable::new(name, value))
.collect()
} else {
vec![]
}),
)),
} => Some(acp::McpServer::Stdio {
name: id.0.to_string(),
command: command.path.clone(),
args: command.args.clone(),
env: if let Some(env) = command.env.as_ref() {
env.iter()
.map(|(name, value)| acp::EnvVariable {
name: name.clone(),
value: value.clone(),
meta: None,
})
.collect()
} else {
vec![]
},
}),
project::context_server_store::ContextServerConfiguration::Http {
url,
headers,
} => Some(acp::McpServer::Http(
acp::McpServerHttp::new(id.0.to_string(), url.to_string()).headers(
headers
.iter()
.map(|(name, value)| acp::HttpHeader::new(name, value))
.collect(),
),
)),
} => Some(acp::McpServer::Http {
name: id.0.to_string(),
url: url.to_string(),
headers: headers.iter().map(|(name, value)| acp::HttpHeader {
name: name.clone(),
value: value.clone(),
meta: None,
}).collect(),
}),
}
})
.collect()
} else {
// In SSH projects, the external agent is running on the remote
// machine, and currently we only run MCP servers on the local
// machine. So don't pass any MCP servers to the agent in that case.
Vec::new()
};
})
.collect()
} else {
// In SSH projects, the external agent is running on the remote
// machine, and currently we only run MCP servers on the local
// machine. So don't pass any MCP servers to the agent in that case.
Vec::new()
};
cx.spawn(async move |cx| {
let response = conn
.new_session(acp::NewSessionRequest::new(cwd).mcp_servers(mcp_servers))
.new_session(acp::NewSessionRequest { mcp_servers, cwd, meta: None })
.await
.map_err(|err| {
if err.code == acp::ErrorCode::AuthRequired {
if err.code == acp::ErrorCode::AUTH_REQUIRED.code {
let mut error = AuthRequired::new();
if err.message != acp::ErrorCode::AuthRequired.to_string() {
if err.message != acp::ErrorCode::AUTH_REQUIRED.message {
error = error.with_description(err.message);
}
@@ -339,7 +341,11 @@ impl AgentConnection for AcpConnection {
let modes = modes.clone();
let conn = conn.clone();
async move |_| {
let result = conn.set_session_mode(acp::SetSessionModeRequest::new(session_id, default_mode))
let result = conn.set_session_mode(acp::SetSessionModeRequest {
session_id,
mode_id: default_mode,
meta: None,
})
.await.log_err();
if result.is_none() {
@@ -382,7 +388,11 @@ impl AgentConnection for AcpConnection {
let models = models.clone();
let conn = conn.clone();
async move |_| {
let result = conn.set_session_model(acp::SetSessionModelRequest::new(session_id, default_model))
let result = conn.set_session_model(acp::SetSessionModelRequest {
session_id,
model_id: default_model,
meta: None,
})
.await.log_err();
if result.is_none() {
@@ -446,8 +456,12 @@ impl AgentConnection for AcpConnection {
fn authenticate(&self, method_id: acp::AuthMethodId, cx: &mut App) -> Task<Result<()>> {
let conn = self.connection.clone();
cx.foreground_executor().spawn(async move {
conn.authenticate(acp::AuthenticateRequest::new(method_id))
.await?;
conn.authenticate(acp::AuthenticateRequest {
method_id: method_id.clone(),
meta: None,
})
.await?;
Ok(())
})
}
@@ -474,11 +488,11 @@ impl AgentConnection for AcpConnection {
match result {
Ok(response) => Ok(response),
Err(err) => {
if err.code == acp::ErrorCode::AuthRequired {
if err.code == acp::ErrorCode::AUTH_REQUIRED.code {
return Err(anyhow!(acp::Error::auth_required()));
}
if err.code != ErrorCode::InternalError {
if err.code != ErrorCode::INTERNAL_ERROR.code {
anyhow::bail!(err)
}
@@ -501,7 +515,10 @@ impl AgentConnection for AcpConnection {
&& (details.contains("This operation was aborted")
|| details.contains("The user aborted a request"))
{
Ok(acp::PromptResponse::new(acp::StopReason::Cancelled))
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::Cancelled,
meta: None,
})
} else {
Err(anyhow!(details))
}
@@ -518,7 +535,10 @@ impl AgentConnection for AcpConnection {
session.suppress_abort_err = true;
}
let conn = self.connection.clone();
let params = acp::CancelNotification::new(session_id.clone());
let params = acp::CancelNotification {
session_id: session_id.clone(),
meta: None,
};
cx.foreground_executor()
.spawn(async move { conn.cancel(params).await })
.detach();
@@ -599,7 +619,11 @@ impl acp_thread::AgentSessionModes for AcpSessionModes {
let state = self.state.clone();
cx.foreground_executor().spawn(async move {
let result = connection
.set_session_mode(acp::SetSessionModeRequest::new(session_id, mode_id))
.set_session_mode(acp::SetSessionModeRequest {
session_id,
mode_id,
meta: None,
})
.await;
if result.is_err() {
@@ -658,7 +682,11 @@ impl acp_thread::AgentModelSelector for AcpModelSelector {
let state = self.state.clone();
cx.foreground_executor().spawn(async move {
let result = connection
.set_session_model(acp::SetSessionModelRequest::new(session_id, model_id))
.set_session_model(acp::SetSessionModelRequest {
session_id,
model_id,
meta: None,
})
.await;
if result.is_err() {
@@ -720,7 +748,10 @@ impl acp::Client for ClientDelegate {
let outcome = task.await;
Ok(acp::RequestPermissionResponse::new(outcome))
Ok(acp::RequestPermissionResponse {
outcome,
meta: None,
})
}
async fn write_text_file(
@@ -752,7 +783,10 @@ impl acp::Client for ClientDelegate {
let content = task.await?;
Ok(acp::ReadTextFileResponse::new(content))
Ok(acp::ReadTextFileResponse {
content,
meta: None,
})
}
async fn session_notification(
@@ -787,7 +821,7 @@ impl acp::Client for ClientDelegate {
if let Some(terminal_info) = meta.get("terminal_info") {
if let Some(id_str) = terminal_info.get("terminal_id").and_then(|v| v.as_str())
{
let terminal_id = acp::TerminalId::new(id_str);
let terminal_id = acp::TerminalId(id_str.into());
let cwd = terminal_info
.get("cwd")
.and_then(|v| v.as_str().map(PathBuf::from));
@@ -803,7 +837,7 @@ impl acp::Client for ClientDelegate {
let lower = cx.new(|cx| builder.subscribe(cx));
thread.on_terminal_provider_event(
TerminalProviderEvent::Created {
terminal_id,
terminal_id: terminal_id.clone(),
label: tc.title.clone(),
cwd,
output_byte_limit: None,
@@ -828,12 +862,15 @@ impl acp::Client for ClientDelegate {
if let Some(meta) = &tcu.meta {
if let Some(term_out) = meta.get("terminal_output") {
if let Some(id_str) = term_out.get("terminal_id").and_then(|v| v.as_str()) {
let terminal_id = acp::TerminalId::new(id_str);
let terminal_id = acp::TerminalId(id_str.into());
if let Some(s) = term_out.get("data").and_then(|v| v.as_str()) {
let data = s.as_bytes().to_vec();
let _ = session.thread.update(&mut self.cx.clone(), |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Output { terminal_id, data },
TerminalProviderEvent::Output {
terminal_id: terminal_id.clone(),
data,
},
cx,
);
});
@@ -844,24 +881,21 @@ impl acp::Client for ClientDelegate {
// terminal_exit
if let Some(term_exit) = meta.get("terminal_exit") {
if let Some(id_str) = term_exit.get("terminal_id").and_then(|v| v.as_str()) {
let terminal_id = acp::TerminalId::new(id_str);
let status = acp::TerminalExitStatus::new()
.exit_code(
term_exit
.get("exit_code")
.and_then(|v| v.as_u64())
.map(|i| i as u32),
)
.signal(
term_exit
.get("signal")
.and_then(|v| v.as_str().map(|s| s.to_string())),
);
let terminal_id = acp::TerminalId(id_str.into());
let status = acp::TerminalExitStatus {
exit_code: term_exit
.get("exit_code")
.and_then(|v| v.as_u64())
.map(|i| i as u32),
signal: term_exit
.get("signal")
.and_then(|v| v.as_str().map(|s| s.to_string())),
meta: None,
};
let _ = session.thread.update(&mut self.cx.clone(), |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Exit {
terminal_id,
terminal_id: terminal_id.clone(),
status,
},
cx,
@@ -898,7 +932,7 @@ impl acp::Client for ClientDelegate {
// Register with renderer
let terminal_entity = thread.update(&mut self.cx.clone(), |thread, cx| {
thread.register_terminal_created(
acp::TerminalId::new(uuid::Uuid::new_v4().to_string()),
acp::TerminalId(uuid::Uuid::new_v4().to_string().into()),
format!("{} {}", args.command, args.args.join(" ")),
args.cwd.clone(),
args.output_byte_limit,
@@ -908,7 +942,10 @@ impl acp::Client for ClientDelegate {
})?;
let terminal_id =
terminal_entity.read_with(&self.cx, |terminal, _| terminal.id().clone())?;
Ok(acp::CreateTerminalResponse::new(terminal_id))
Ok(acp::CreateTerminalResponse {
terminal_id,
meta: None,
})
}
async fn kill_terminal_command(
@@ -969,7 +1006,10 @@ impl acp::Client for ClientDelegate {
})??
.await;
Ok(acp::WaitForTerminalExitResponse::new(exit_status))
Ok(acp::WaitForTerminalExitResponse {
exit_status,
meta: None,
})
}
}

Some files were not shown because too many files have changed in this diff.