Compare commits

13 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Michael Benfield | e8d44ce267 | comments | 2025-12-03 10:09:22 -08:00 |
| Michael Benfield | 08d89254d1 | select better and use tools for inline assistant | 2025-12-03 09:51:16 -08:00 |
| Mikayla Maki | d7cc949e61 | fix license | 2025-12-02 17:13:52 -08:00 |
| Mikayla Maki | a3f9dffb04 | Clippy | 2025-12-02 17:13:52 -08:00 |
| Mikayla Maki | 78b32840c3 | Feature gate eval | 2025-12-02 17:13:52 -08:00 |
| Mikayla Maki | a636b59d72 | Clean up PR for merging | 2025-12-02 17:13:52 -08:00 |
| Mikayla Maki | 983c3a02a8 | Clean up dbg | 2025-12-02 17:13:52 -08:00 |
| Mikayla Maki | 5bc7b11a6a | Fix a miscompilation | 2025-12-02 17:13:52 -08:00 |
| Michael Benfield | 2a5e8c62a7 | Add eval utils to workspace and inline assistant test (Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>) | 2025-12-02 17:13:52 -08:00 |
| Michael Benfield | 4efeef564c | evals uses eval_utils | 2025-12-02 17:13:52 -08:00 |
| Michael Benfield | 514cb933d1 | eval_utils crate | 2025-12-02 17:13:52 -08:00 |
| Michael Benfield | 06f65b29ab | working on thread_store thing | 2025-12-02 17:13:52 -08:00 |
| Michael Benfield | cd9242c544 | Progress towards an eval for inline assistants. (Co-authored-by: Mikayla Maki <mikayla.c.maki@gmail.com>) | 2025-12-02 17:13:52 -08:00 |
1125 changed files with 44390 additions and 80003 deletions

View File

@@ -1,55 +0,0 @@
# Phase 2: Explore Repository
You are analyzing a codebase to understand its structure before reviewing documentation impact.
## Objective
Produce a structured overview of the repository to inform subsequent documentation analysis.
## Instructions
1. **Identify Primary Languages and Frameworks**
- Scan for Cargo.toml, package.json, or other manifest files
- Note the primary language(s) and key dependencies
2. **Map Documentation Structure**
- This project uses **mdBook** (https://rust-lang.github.io/mdBook/)
- Documentation is in `docs/src/`
- Table of contents: `docs/src/SUMMARY.md` (mdBook format: https://rust-lang.github.io/mdBook/format/summary.html)
- Style guide: `docs/.rules`
- Agent guidelines: `docs/AGENTS.md`
- Formatting: Prettier (config in `docs/.prettierrc`)
3. **Identify Build and Tooling**
- Note build systems (cargo, npm, etc.)
- Identify documentation tooling (mdbook, etc.)
4. **Output Format**
Produce a JSON summary:
```json
{
"primary_language": "Rust",
"frameworks": ["GPUI"],
"documentation": {
"system": "mdBook",
"location": "docs/src/",
"toc_file": "docs/src/SUMMARY.md",
"toc_format": "https://rust-lang.github.io/mdBook/format/summary.html",
"style_guide": "docs/.rules",
"agent_guidelines": "docs/AGENTS.md",
"formatter": "prettier",
"formatter_config": "docs/.prettierrc",
"custom_preprocessor": "docs_preprocessor (handles {#kb action::Name} syntax)"
},
"key_directories": {
"source": "crates/",
"docs": "docs/src/",
"extensions": "extensions/"
}
}
```
## Constraints
- Read-only: Do not modify any files
- Focus on structure, not content details
- Complete within 2 minutes

View File

@@ -1,57 +0,0 @@
# Phase 3: Analyze Changes
You are analyzing code changes to understand their nature and scope.
## Objective
Produce a clear, neutral summary of what changed in the codebase.
## Input
You will receive:
- List of changed files from the triggering commit/PR
- Repository structure from Phase 2
## Instructions
1. **Categorize Changed Files**
- Source code (which crates/modules)
- Configuration
- Tests
- Documentation (already existing)
- Other
2. **Analyze Each Change**
- Review diffs for files likely to impact documentation
- Focus on: public APIs, settings, keybindings, commands, user-visible behavior
3. **Identify What Did NOT Change**
- Note stable interfaces or behaviors
- Important for avoiding unnecessary documentation updates
4. **Output Format**
Produce a markdown summary:
```markdown
## Change Analysis
### Changed Files Summary
| Category | Files | Impact Level |
| --- | --- | --- |
| Source - [crate] | file1.rs, file2.rs | High/Medium/Low |
| Settings | settings.json | Medium |
| Tests | test_*.rs | None |
### Behavioral Changes
- **[Feature/Area]**: Description of what changed from user perspective
- **[Feature/Area]**: Description...
### Unchanged Areas
- [Area]: Confirmed no changes to [specific behavior]
### Files Requiring Deeper Review
- `path/to/file.rs`: Reason for deeper review
```
## Constraints
- Read-only: Do not modify any files
- Neutral tone: Describe what changed, not whether it's good/bad
- Do not propose documentation changes yet

View File

@@ -1,76 +0,0 @@
# Phase 4: Plan Documentation Impact
You are determining whether and how documentation should be updated based on code changes.
## Objective
Produce a structured documentation plan that will guide Phase 5 execution.
## Documentation System
This is an **mdBook** site (https://rust-lang.github.io/mdBook/):
- `docs/src/SUMMARY.md` defines book structure per https://rust-lang.github.io/mdBook/format/summary.html
- If adding new pages, they MUST be added to SUMMARY.md
- Use `{#kb action::ActionName}` syntax for keybindings (custom preprocessor expands these)
- Prettier formatting (80 char width) will be applied automatically
## Input
You will receive:
- Change analysis from Phase 3
- Repository structure from Phase 2
- Documentation guidelines from `docs/AGENTS.md`
## Instructions
1. **Review AGENTS.md**
- Load and apply all rules from `docs/AGENTS.md`
- Respect scope boundaries (in-scope vs out-of-scope)
2. **Evaluate Documentation Impact**
For each behavioral change from Phase 3:
- Does existing documentation cover this area?
- Is the documentation now inaccurate or incomplete?
- Classify per AGENTS.md "Change Classification" section
3. **Identify Specific Updates**
For each required update:
- Exact file path
- Specific section or heading
- Type of change (update existing, add new, deprecate)
- Description of the change
4. **Flag Uncertainty**
Explicitly mark:
- Assumptions you're making
- Areas where human confirmation is needed
- Ambiguous requirements
5. **Output Format**
Use the exact format specified in `docs/AGENTS.md` Phase 4 section:
```markdown
## Documentation Impact Assessment
### Summary
Brief description of code changes analyzed.
### Documentation Updates Required: [Yes/No]
### Planned Changes
#### 1. [File Path]
- **Section**: [Section name or "New section"]
- **Change Type**: [Update/Add/Deprecate]
- **Reason**: Why this change is needed
- **Description**: What will be added/modified
### Uncertainty Flags
- [ ] [Description of any assumptions or areas needing confirmation]
### No Changes Needed
- [List files reviewed but not requiring updates, with brief reason]
```
## Constraints
- Read-only: Do not modify any files
- Conservative: When uncertain, flag for human review rather than planning changes
- Scoped: Only plan changes that trace directly to code changes from Phase 3
- No scope expansion: Do not plan "improvements" unrelated to triggering changes

View File

@@ -1,67 +0,0 @@
# Phase 5: Apply Documentation Plan
You are executing a pre-approved documentation plan for an **mdBook** documentation site.
## Objective
Implement exactly the changes specified in the documentation plan from Phase 4.
## Documentation System
- **mdBook**: https://rust-lang.github.io/mdBook/
- **SUMMARY.md**: Follows mdBook format (https://rust-lang.github.io/mdBook/format/summary.html)
- **Prettier**: Will be run automatically after this phase (80 char line width)
- **Custom preprocessor**: Use `{#kb action::ActionName}` for keybindings instead of hardcoding
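For example, a doc line might use the placeholder rather than a literal key combination (the action name below is illustrative; use whichever action the plan refers to):
```markdown
Use {#kb editor::Format} to format the current buffer.
```
The docs_preprocessor expands the placeholder at build time, so the rendered docs pick up the current keybinding for that action instead of a hardcoded one.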
## Input
You will receive:
- Documentation plan from Phase 4
- Documentation guidelines from `docs/AGENTS.md`
- Style rules from `docs/.rules`
## Instructions
1. **Validate Plan**
- Confirm all planned files are within scope per AGENTS.md
- Verify no out-of-scope files are targeted
2. **Execute Each Planned Change**
For each item in "Planned Changes":
- Navigate to the specified file
- Locate the specified section
- Apply the described change
- Follow style rules from `docs/.rules`
3. **Style Compliance**
Every edit must follow `docs/.rules`:
- Second person, present tense
- No hedging words ("simply", "just", "easily")
- Proper keybinding format (`Cmd+Shift+P`)
- Settings Editor first, JSON second
- Correct terminology (folder not directory, etc.)
4. **Preserve Context**
- Maintain surrounding content structure
- Keep consistent heading levels
- Preserve existing cross-references
## Constraints
- Execute ONLY changes listed in the plan
- Do not discover new documentation targets
- Do not make stylistic improvements outside planned sections
- Do not expand scope beyond what Phase 4 specified
- If a planned change cannot be applied (file missing, section not found), skip and note it
## Output
After applying changes, output a summary:
```markdown
## Applied Changes
### Successfully Applied
- `path/to/file.md`: [Brief description of change]
### Skipped (Could Not Apply)
- `path/to/file.md`: [Reason - e.g., "Section not found"]
### Warnings
- [Any issues encountered during application]
```

View File

@@ -1,54 +0,0 @@
# Phase 6: Summarize Changes
You are generating a summary of documentation updates for PR review.
## Objective
Create a clear, reviewable summary of all documentation changes made.
## Input
You will receive:
- Applied changes report from Phase 5
- Original change analysis from Phase 3
- Git diff of documentation changes
## Instructions
1. **Gather Change Information**
- List all modified documentation files
- Identify the corresponding code changes that triggered each update
2. **Generate Summary**
Use the format specified in `docs/AGENTS.md` Phase 6 section:
```markdown
## Documentation Update Summary
### Changes Made
| File | Change | Related Code |
| --- | --- | --- |
| docs/src/path.md | Brief description | PR #123 or commit SHA |
### Rationale
Brief explanation of why these updates were made, linking back to the triggering code changes.
### Review Notes
- Items reviewers should pay special attention to
- Any uncertainty flags from Phase 4 that were addressed
- Assumptions made during documentation
```
3. **Add Context for Reviewers**
- Highlight any changes that might be controversial
- Note if any planned changes were skipped and why
- Flag areas where reviewer expertise is especially needed
## Output Format
The summary should be suitable for:
- PR description body
- Commit message (condensed version)
- Team communication
## Constraints
- Read-only (documentation changes already applied in Phase 5)
- Factual: Describe what was done; do not justify why it's good
- Complete: Account for all changes, including skipped items

View File

@@ -1,67 +0,0 @@
# Phase 7: Commit and Open PR
You are creating a git branch, committing documentation changes, and opening a PR.
## Objective
Package documentation updates into a reviewable pull request.
## Input
You will receive:
- Summary from Phase 6
- List of modified files
## Instructions
1. **Create Branch**
```sh
git checkout -b docs/auto-update-{date}
```
Use format: `docs/auto-update-YYYY-MM-DD` or `docs/auto-update-{short-sha}`
2. **Stage and Commit**
- Stage only documentation files in `docs/src/`
- Do not stage any other files
Commit message format:
```
docs: auto-update documentation for [brief description]
[Summary from Phase 6, condensed]
Triggered by: [commit SHA or PR reference]
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
```
3. **Push Branch**
```sh
git push -u origin docs/auto-update-{date}
```
4. **Create Pull Request**
Use the Phase 6 summary as the PR body.
PR Title: `docs: [Brief description of documentation updates]`
Labels (if available): `documentation`, `automated`
Base branch: `main`
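A sketch using the GitHub CLI, assuming `gh` is authenticated in the environment and the Phase 6 summary has been written to a file (drop the `--label` flags if those labels don't exist in the repo):
```sh
gh pr create \
  --base main \
  --title "docs: [Brief description of documentation updates]" \
  --body-file /tmp/phase6-summary.md \
  --label documentation \
  --label automated
```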
## Constraints
- Do NOT auto-merge
- Do NOT request specific reviewers (let CODEOWNERS handle it)
- Do NOT modify files outside `docs/src/`
- If no changes to commit, exit gracefully with message "No documentation changes to commit"
## Output
```markdown
## PR Created
- **Branch**: docs/auto-update-{date}
- **PR URL**: https://github.com/zed-industries/zed/pull/XXXX
- **Status**: Ready for review
### Commit
- SHA: {commit-sha}
- Files: {count} documentation files modified
```

View File

@@ -75,22 +75,6 @@ body:
</details>
validations:
required: false
- type: textarea
attributes:
label: Relevant Keymap
description: |
Open the command palette in Zed, then type “zed: open keymap file” and copy/paste the file's contents.
value: |
<details><summary>keymap.json</summary>
<!-- Paste your keymap file inside the code block. -->
```json
```
</details>
validations:
required: false
- type: textarea
attributes:
label: (for AI issues) Model provider details

View File

@@ -25,7 +25,6 @@ self-hosted-runner:
- namespace-profile-32x64-ubuntu-2204
# Namespace Ubuntu 24.04 (like ubuntu-latest)
- namespace-profile-2x4-ubuntu-2404
- namespace-profile-8x32-ubuntu-2404
# Namespace Limited Preview
- namespace-profile-8x16-ubuntu-2004-arm-m4
- namespace-profile-8x32-ubuntu-2004-arm-m4

View File

@@ -19,18 +19,6 @@ runs:
shell: bash -euxo pipefail {0}
run: ./script/linux
- name: Install mold linker
shell: bash -euxo pipefail {0}
run: ./script/install-mold
- name: Download WASI SDK
shell: bash -euxo pipefail {0}
run: ./script/download-wasi-sdk
- name: Generate action metadata
shell: bash -euxo pipefail {0}
run: ./script/generate-action-metadata
- name: Check for broken links (in MD)
uses: lycheeverse/lychee-action@82202e5e9c2f4ef1a55a3d02563e1cb6041e5332 # v2.4.1
with:

View File

@@ -5,27 +5,13 @@ on:
release:
types:
- published
workflow_dispatch:
inputs:
tag_name:
description: tag_name
required: true
type: string
prerelease:
description: prerelease
required: true
type: boolean
body:
description: body
type: string
default: ''
jobs:
rebuild_releases_page:
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions')
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: after_release::rebuild_releases_page::refresh_cloud_releases
run: curl -fX POST https://cloud.zed.dev/releases/refresh?expect_tag=${{ github.event.release.tag_name || inputs.tag_name }}
run: curl -fX POST https://cloud.zed.dev/releases/refresh?expect_tag=${{ github.event.release.tag_name }}
shell: bash -euxo pipefail {0}
- name: after_release::rebuild_releases_page::redeploy_zed_dev
run: npm exec --yes -- vercel@37 --token="$VERCEL_TOKEN" --scope zed-industries redeploy https://zed.dev
@@ -41,7 +27,7 @@ jobs:
- id: get-release-url
name: after_release::post_to_discord::get_release_url
run: |
if [ "${{ github.event.release.prerelease || inputs.prerelease }}" == "true" ]; then
if [ "${{ github.event.release.prerelease }}" == "true" ]; then
URL="https://zed.dev/releases/preview"
else
URL="https://zed.dev/releases/stable"
@@ -54,9 +40,9 @@ jobs:
uses: 2428392/gh-truncate-string-action@b3ff790d21cf42af3ca7579146eedb93c8fb0757
with:
stringToTruncate: |
📣 Zed [${{ github.event.release.tag_name || inputs.tag_name }}](<${{ steps.get-release-url.outputs.URL }}>) was just released!
📣 Zed [${{ github.event.release.tag_name }}](<${{ steps.get-release-url.outputs.URL }}>) was just released!
${{ github.event.release.body || inputs.body }}
${{ github.event.release.body }}
maxLength: 2000
truncationSymbol: '...'
- name: after_release::post_to_discord::discord_webhook_action
@@ -70,7 +56,7 @@ jobs:
- id: set-package-name
name: after_release::publish_winget::set_package_name
run: |
if ("${{ github.event.release.prerelease || inputs.prerelease }}" -eq "true") {
if ("${{ github.event.release.prerelease }}" -eq "true") {
$PACKAGE_NAME = "ZedIndustries.Zed.Preview"
} else {
$PACKAGE_NAME = "ZedIndustries.Zed"
@@ -82,7 +68,6 @@ jobs:
uses: vedantmgoyal9/winget-releaser@19e706d4c9121098010096f9c495a70a7518b30f
with:
identifier: ${{ steps.set-package-name.outputs.PACKAGE_NAME }}
release-tag: ${{ github.event.release.tag_name || inputs.tag_name }}
max-versions-to-keep: 5
token: ${{ secrets.WINGET_TOKEN }}
create_sentry_release:

View File

@@ -1,132 +0,0 @@
# Generated from xtask::workflows::autofix_pr
# Rebuild with `cargo xtask workflows`.
name: autofix_pr
run-name: 'autofix PR #${{ inputs.pr_number }}'
on:
workflow_dispatch:
inputs:
pr_number:
description: pr_number
required: true
type: string
run_clippy:
description: run_clippy
type: boolean
default: 'true'
jobs:
run_autofix:
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- name: autofix_pr::run_autofix::checkout_pr
run: gh pr checkout ${{ inputs.pr_number }}
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
- name: steps::setup_cargo_config
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
shell: bash -euxo pipefail {0}
- name: steps::cache_rust_dependencies_namespace
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: steps::setup_linux
run: ./script/linux
shell: bash -euxo pipefail {0}
- name: steps::install_mold
run: ./script/install-mold
shell: bash -euxo pipefail {0}
- name: steps::download_wasi_sdk
run: ./script/download-wasi-sdk
shell: bash -euxo pipefail {0}
- name: steps::setup_pnpm
uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2
with:
version: '9'
- name: autofix_pr::run_autofix::run_prettier_fix
run: ./script/prettier --write
shell: bash -euxo pipefail {0}
- name: autofix_pr::run_autofix::run_cargo_fmt
run: cargo fmt --all
shell: bash -euxo pipefail {0}
- name: autofix_pr::run_autofix::run_cargo_fix
if: ${{ inputs.run_clippy }}
run: cargo fix --workspace --release --all-targets --all-features --allow-dirty --allow-staged
shell: bash -euxo pipefail {0}
- name: autofix_pr::run_autofix::run_clippy_fix
if: ${{ inputs.run_clippy }}
run: cargo clippy --workspace --release --all-targets --all-features --fix --allow-dirty --allow-staged
shell: bash -euxo pipefail {0}
- id: create-patch
name: autofix_pr::run_autofix::create_patch
run: |
if git diff --quiet; then
echo "No changes to commit"
echo "has_changes=false" >> "$GITHUB_OUTPUT"
else
git diff > autofix.patch
echo "has_changes=true" >> "$GITHUB_OUTPUT"
fi
shell: bash -euxo pipefail {0}
- name: upload artifact autofix-patch
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4
with:
name: autofix-patch
path: autofix.patch
if-no-files-found: ignore
retention-days: '1'
- name: steps::cleanup_cargo_config
if: always()
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
outputs:
has_changes: ${{ steps.create-patch.outputs.has_changes }}
commit_changes:
needs:
- run_autofix
if: needs.run_autofix.outputs.has_changes == 'true'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- id: get-app-token
name: steps::authenticate_as_zippy
uses: actions/create-github-app-token@bef1eaf1c0ac2b148ee2a0a74c65fbe6db0631f1
with:
app-id: ${{ secrets.ZED_ZIPPY_APP_ID }}
private-key: ${{ secrets.ZED_ZIPPY_APP_PRIVATE_KEY }}
- name: steps::checkout_repo_with_token
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
token: ${{ steps.get-app-token.outputs.token }}
- name: autofix_pr::commit_changes::checkout_pr
run: gh pr checkout ${{ inputs.pr_number }}
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ steps.get-app-token.outputs.token }}
- name: autofix_pr::download_patch_artifact
uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53
with:
name: autofix-patch
- name: autofix_pr::commit_changes::apply_patch
run: git apply autofix.patch
shell: bash -euxo pipefail {0}
- name: autofix_pr::commit_changes::commit_and_push
run: |
git commit -am "Autofix"
git push
shell: bash -euxo pipefail {0}
env:
GIT_COMMITTER_NAME: Zed Zippy
GIT_COMMITTER_EMAIL: 234243425+zed-zippy[bot]@users.noreply.github.com
GIT_AUTHOR_NAME: Zed Zippy
GIT_AUTHOR_EMAIL: 234243425+zed-zippy[bot]@users.noreply.github.com
GITHUB_TOKEN: ${{ steps.get-app-token.outputs.token }}
concurrency:
group: ${{ github.workflow }}-${{ inputs.pr_number }}
cancel-in-progress: true

View File

@@ -30,7 +30,7 @@ jobs:
with:
clean: false
- id: get-app-token
name: steps::authenticate_as_zippy
name: cherry_pick::run_cherry_pick::authenticate_as_zippy
uses: actions/create-github-app-token@bef1eaf1c0ac2b148ee2a0a74c65fbe6db0631f1
with:
app-id: ${{ secrets.ZED_ZIPPY_APP_ID }}

View File

@@ -34,7 +34,6 @@ jobs:
CharlesChen0823
chbk
cppcoffee
davidbarsky
davewa
ddoemonn
djsauble

View File

@@ -1,40 +1,29 @@
name: "Close Stale Issues"
on:
schedule:
- cron: "0 2 * * 5"
- cron: "0 8 31 DEC *"
workflow_dispatch:
inputs:
debug-only:
description: "Run in dry-run mode (no changes made)"
type: boolean
default: false
operations-per-run:
description: "Max number of issues to process (default: 1000)"
type: number
default: 1000
jobs:
stale:
if: github.repository_owner == 'zed-industries'
runs-on: ubuntu-latest
steps:
- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: >
Hi there!
Zed development moves fast and a significant number of bugs become outdated.
If you can reproduce this bug on the latest stable Zed, please let us know by leaving a comment with the Zed version.
If the bug doesn't appear for you anymore, feel free to close the issue yourself; otherwise, the bot will close it in a couple of weeks.
Hi there! 👋
We're working to clean up our issue tracker by closing older bugs that might not be relevant anymore. If you are able to reproduce this issue in the latest version of Zed, please let us know by commenting on this issue, and it will be kept open. If you can't reproduce it, feel free to close the issue yourself. Otherwise, it will close automatically in 14 days.
Thanks for your help!
close-issue-message: "This issue was closed due to inactivity. If you're still experiencing this problem, please leave a comment with your Zed version so that we can reopen the issue."
close-issue-message: "This issue was closed due to inactivity. If you're still experiencing this problem, please open a new issue with a link to this issue."
days-before-stale: 60
days-before-close: 14
only-issue-types: "Bug,Crash"
operations-per-run: ${{ inputs.operations-per-run || 1000 }}
operations-per-run: 1000
ascending: true
enable-statistics: true
debug-only: ${{ inputs.debug-only }}
stale-issue-label: "stale"
exempt-issue-labels: "never stale"

View File

@@ -1,264 +0,0 @@
name: Documentation Automation
on:
# push:
# branches: [main]
# paths:
# - 'crates/**'
# - 'extensions/**'
workflow_dispatch:
inputs:
pr_number:
description: 'PR number to analyze (gets full PR diff)'
required: false
type: string
trigger_sha:
description: 'Commit SHA to analyze (ignored if pr_number is set)'
required: false
type: string
permissions:
contents: write
pull-requests: write
env:
FACTORY_API_KEY: ${{ secrets.FACTORY_API_KEY }}
DROID_MODEL: claude-opus-4-5-20251101
jobs:
docs-automation:
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Install Droid CLI
id: install-droid
run: |
curl -fsSL https://app.factory.ai/cli | sh
echo "${HOME}/.local/bin" >> "$GITHUB_PATH"
echo "DROID_BIN=${HOME}/.local/bin/droid" >> "$GITHUB_ENV"
# Verify installation
"${HOME}/.local/bin/droid" --version
- name: Setup Node.js (for Prettier)
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install Prettier
run: npm install -g prettier
- name: Get changed files
id: changed
run: |
if [ -n "${{ inputs.pr_number }}" ]; then
# Get full PR diff
echo "Analyzing PR #${{ inputs.pr_number }}"
echo "source=pr" >> "$GITHUB_OUTPUT"
echo "ref=${{ inputs.pr_number }}" >> "$GITHUB_OUTPUT"
gh pr diff "${{ inputs.pr_number }}" --name-only > /tmp/changed_files.txt
elif [ -n "${{ inputs.trigger_sha }}" ]; then
# Get single commit diff
SHA="${{ inputs.trigger_sha }}"
echo "Analyzing commit $SHA"
echo "source=commit" >> "$GITHUB_OUTPUT"
echo "ref=$SHA" >> "$GITHUB_OUTPUT"
git diff --name-only "${SHA}^" "$SHA" > /tmp/changed_files.txt
else
# Default to current commit
SHA="${{ github.sha }}"
echo "Analyzing commit $SHA"
echo "source=commit" >> "$GITHUB_OUTPUT"
echo "ref=$SHA" >> "$GITHUB_OUTPUT"
git diff --name-only "${SHA}^" "$SHA" > /tmp/changed_files.txt || git diff --name-only HEAD~1 HEAD > /tmp/changed_files.txt
fi
echo "Changed files:"
cat /tmp/changed_files.txt
env:
GH_TOKEN: ${{ github.token }}
# Phase 0: Guardrails are loaded via AGENTS.md in each phase
# Phase 2: Explore Repository (Read-Only - default)
- name: "Phase 2: Explore Repository"
id: phase2
run: |
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
-f .factory/prompts/docs-automation/phase2-explore.md \
> /tmp/phase2-output.txt 2>&1 || true
echo "Repository exploration complete"
cat /tmp/phase2-output.txt
# Phase 3: Analyze Changes (Read-Only - default)
- name: "Phase 3: Analyze Changes"
id: phase3
run: |
CHANGED_FILES=$(tr '\n' ' ' < /tmp/changed_files.txt)
echo "Analyzing changes in: $CHANGED_FILES"
# Build prompt with context
cat > /tmp/phase3-prompt.md << 'EOF'
$(cat .factory/prompts/docs-automation/phase3-analyze.md)
## Context
### Changed Files
$CHANGED_FILES
### Phase 2 Output
$(cat /tmp/phase2-output.txt)
EOF
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
"$(cat .factory/prompts/docs-automation/phase3-analyze.md)
Changed files: $CHANGED_FILES" \
> /tmp/phase3-output.md 2>&1 || true
echo "Change analysis complete"
cat /tmp/phase3-output.md
# Phase 4: Plan Documentation Impact (Read-Only - default)
- name: "Phase 4: Plan Documentation Impact"
id: phase4
run: |
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
-f .factory/prompts/docs-automation/phase4-plan.md \
> /tmp/phase4-plan.md 2>&1 || true
echo "Documentation plan complete"
cat /tmp/phase4-plan.md
# Check if updates are required
if grep -q "NO_UPDATES_REQUIRED" /tmp/phase4-plan.md; then
echo "updates_required=false" >> "$GITHUB_OUTPUT"
else
echo "updates_required=true" >> "$GITHUB_OUTPUT"
fi
# Phase 5: Apply Plan (Write-Enabled with --auto medium)
- name: "Phase 5: Apply Documentation Plan"
id: phase5
if: steps.phase4.outputs.updates_required == 'true'
run: |
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
--auto medium \
-f .factory/prompts/docs-automation/phase5-apply.md \
> /tmp/phase5-report.md 2>&1 || true
echo "Documentation updates applied"
cat /tmp/phase5-report.md
# Phase 5b: Format with Prettier
- name: "Phase 5b: Format with Prettier"
id: phase5b
if: steps.phase4.outputs.updates_required == 'true'
run: |
echo "Formatting documentation with Prettier..."
cd docs && prettier --write src/
echo "Verifying Prettier formatting passes..."
cd docs && prettier --check src/
echo "Prettier formatting complete"
# Phase 6: Summarize Changes (Read-Only - default)
- name: "Phase 6: Summarize Changes"
id: phase6
if: steps.phase4.outputs.updates_required == 'true'
run: |
# Get git diff of docs
git diff docs/src/ > /tmp/docs-diff.txt || true
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
-f .factory/prompts/docs-automation/phase6-summarize.md \
> /tmp/phase6-summary.md 2>&1 || true
echo "Summary generated"
cat /tmp/phase6-summary.md
# Phase 7: Commit and Open PR
- name: "Phase 7: Create PR"
id: phase7
if: steps.phase4.outputs.updates_required == 'true'
run: |
# Check if there are actual changes
if git diff --quiet docs/src/; then
echo "No documentation changes detected"
exit 0
fi
# Configure git
git config user.name "factory-droid[bot]"
git config user.email "138933559+factory-droid[bot]@users.noreply.github.com"
# Daily batch branch - one branch per day, multiple commits accumulate
BRANCH_NAME="docs/auto-update-$(date +%Y-%m-%d)"
# Stash local changes from phase 5
git stash push -m "docs-automation-changes" -- docs/src/
# Check if branch already exists on remote
if git ls-remote --exit-code --heads origin "$BRANCH_NAME" > /dev/null 2>&1; then
echo "Branch $BRANCH_NAME exists, checking out and updating..."
git fetch origin "$BRANCH_NAME"
git checkout -B "$BRANCH_NAME" "origin/$BRANCH_NAME"
else
echo "Creating new branch $BRANCH_NAME..."
git checkout -b "$BRANCH_NAME"
fi
# Apply stashed changes
git stash pop || true
# Stage and commit
git add docs/src/
SUMMARY=$(head -50 < /tmp/phase6-summary.md)
git commit -m "docs: auto-update documentation
${SUMMARY}
Triggered by: ${{ steps.changed.outputs.source }} ${{ steps.changed.outputs.ref }}
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>"
# Push
git push -u origin "$BRANCH_NAME"
# Check if PR already exists for this branch
EXISTING_PR=$(gh pr list --head "$BRANCH_NAME" --json number --jq '.[0].number' || echo "")
if [ -n "$EXISTING_PR" ]; then
echo "PR #$EXISTING_PR already exists for branch $BRANCH_NAME, updated with new commit"
else
# Create new PR
gh pr create \
--title "docs: automated documentation update ($(date +%Y-%m-%d))" \
--body-file /tmp/phase6-summary.md \
--base main || true
echo "PR created on branch: $BRANCH_NAME"
fi
env:
GH_TOKEN: ${{ github.token }}
# Summary output
- name: "Summary"
if: always()
run: |
echo "## Documentation Automation Summary" >> "$GITHUB_STEP_SUMMARY"
echo "" >> "$GITHUB_STEP_SUMMARY"
if [ "${{ steps.phase4.outputs.updates_required }}" == "false" ]; then
echo "No documentation updates required for this change." >> "$GITHUB_STEP_SUMMARY"
elif [ -f /tmp/phase6-summary.md ]; then
cat /tmp/phase6-summary.md >> "$GITHUB_STEP_SUMMARY"
else
echo "Workflow completed. Check individual phase outputs for details." >> "$GITHUB_STEP_SUMMARY"
fi

View File

@@ -25,6 +25,33 @@ on:
description: The app secret for the corresponding app ID
required: true
jobs:
check_extension:
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions')
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
- id: cache-zed-extension-cli
name: extension_tests::cache_zed_extension_cli
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830
with:
path: zed-extension
key: zed-extension-${{ env.ZED_EXTENSION_CLI_SHA }}
- name: extension_tests::download_zed_extension_cli
if: steps.cache-zed-extension-cli.outputs.cache-hit != 'true'
run: |
wget --quiet "https://zed-extension-cli.nyc3.digitaloceanspaces.com/$ZED_EXTENSION_CLI_SHA/x86_64-unknown-linux-gnu/zed-extension"
chmod +x zed-extension
shell: bash -euxo pipefail {0}
- name: extension_tests::check
run: |
mkdir -p /tmp/ext-scratch
mkdir -p /tmp/ext-output
./zed-extension --source-dir . --scratch-dir /tmp/ext-scratch --output-dir /tmp/ext-output
shell: bash -euxo pipefail {0}
timeout-minutes: 2
check_bump_needed:
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions')
runs-on: namespace-profile-2x4-ubuntu-2404
@@ -62,11 +89,12 @@ jobs:
timeout-minutes: 1
bump_extension_version:
needs:
- check_extension
- check_bump_needed
if: |-
(github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions') &&
(inputs.force-bump == 'true' || needs.check_bump_needed.outputs.needs_bump == 'true')
runs-on: namespace-profile-2x4-ubuntu-2404
runs-on: namespace-profile-8x16-ubuntu-2204
steps:
- id: generate-token
name: extension_bump::generate_token
@@ -79,7 +107,7 @@ jobs:
with:
clean: false
- name: extension_bump::install_bump_2_version
run: pip install bump2version --break-system-packages
run: pip install bump2version
shell: bash -euxo pipefail {0}
- id: bump-version
name: extension_bump::bump_version
@@ -113,13 +141,13 @@ jobs:
delete-branch: true
token: ${{ steps.generate-token.outputs.token }}
sign-commits: true
assignees: ${{ github.actor }}
timeout-minutes: 1
create_version_label:
needs:
- check_extension
- check_bump_needed
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions') && github.event_name == 'push' && github.ref == 'refs/heads/main' && needs.check_bump_needed.outputs.needs_bump == 'false'
runs-on: namespace-profile-2x4-ubuntu-2404
runs-on: namespace-profile-8x16-ubuntu-2204
steps:
- id: generate-token
name: extension_bump::generate_token

View File

@@ -13,7 +13,7 @@ on:
jobs:
create_release:
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions')
runs-on: namespace-profile-2x4-ubuntu-2404
runs-on: namespace-profile-8x16-ubuntu-2204
steps:
- id: generate-token
name: extension_bump::generate_token

View File

@@ -51,7 +51,7 @@ jobs:
needs:
- orchestrate
if: needs.orchestrate.outputs.check_rust == 'true'
runs-on: namespace-profile-4x8-ubuntu-2204
runs-on: namespace-profile-16x32-ubuntu-2204
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
@@ -79,7 +79,7 @@ jobs:
needs:
- orchestrate
if: needs.orchestrate.outputs.check_extension == 'true'
runs-on: namespace-profile-8x32-ubuntu-2404
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683

View File

@@ -1,106 +0,0 @@
# Generated from xtask::workflows::extension_workflow_rollout
# Rebuild with `cargo xtask workflows`.
name: extension_workflow_rollout
env:
CARGO_TERM_COLOR: always
on:
workflow_dispatch: {}
jobs:
fetch_extension_repos:
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- id: list-repos
name: extension_workflow_rollout::fetch_extension_repos::get_repositories
uses: actions/github-script@v7
with:
script: |
const repos = await github.paginate(github.rest.repos.listForOrg, {
org: 'zed-extensions',
type: 'public',
per_page: 100,
});
const filteredRepos = repos
.filter(repo => !repo.archived)
.filter(repo => repo.name !== 'workflows' && repo.name !== 'material-icon-theme')
.map(repo => repo.name);
console.log(`Found ${filteredRepos.length} extension repos`);
return filteredRepos;
result-encoding: json
outputs:
repos: ${{ steps.list-repos.outputs.result }}
timeout-minutes: 5
rollout_workflows_to_extension:
needs:
- fetch_extension_repos
if: needs.fetch_extension_repos.outputs.repos != '[]'
runs-on: namespace-profile-2x4-ubuntu-2404
strategy:
matrix:
repo: ${{ fromJson(needs.fetch_extension_repos.outputs.repos) }}
fail-fast: false
max-parallel: 5
steps:
- id: generate-token
name: extension_bump::generate_token
uses: actions/create-github-app-token@v2
with:
app-id: ${{ secrets.ZED_ZIPPY_APP_ID }}
private-key: ${{ secrets.ZED_ZIPPY_APP_PRIVATE_KEY }}
owner: zed-extensions
repositories: ${{ matrix.repo }}
permission-pull-requests: write
permission-contents: write
permission-workflows: write
- name: checkout_zed_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
path: zed
- name: steps::checkout_repo_with_token
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683
with:
clean: false
token: ${{ steps.generate-token.outputs.token }}
repository: zed-extensions/${{ matrix.repo }}
path: extension
- name: extension_workflow_rollout::rollout_workflows_to_extension::copy_workflow_files
run: |
mkdir -p extension/.github/workflows
cp zed/extensions/workflows/shared/*.yml extension/.github/workflows/
shell: bash -euxo pipefail {0}
- id: short-sha
name: extension_workflow_rollout::rollout_workflows_to_extension::get_short_sha
run: |
echo "sha_short=$(git rev-parse --short HEAD)" >> "$GITHUB_OUTPUT"
shell: bash -euxo pipefail {0}
working-directory: zed
- id: create-pr
name: extension_workflow_rollout::rollout_workflows_to_extension::create_pull_request
uses: peter-evans/create-pull-request@v7
with:
path: extension
title: Update CI workflows to `zed@${{ steps.short-sha.outputs.sha_short }}`
body: |
This PR updates the CI workflow files from the main Zed repository
based on the commit zed-industries/zed@${{ github.sha }}
commit-message: Update CI workflows to `zed@${{ steps.short-sha.outputs.sha_short }}`
branch: update-workflows
committer: zed-zippy[bot] <234243425+zed-zippy[bot]@users.noreply.github.com>
author: zed-zippy[bot] <234243425+zed-zippy[bot]@users.noreply.github.com>
base: main
delete-branch: true
token: ${{ steps.generate-token.outputs.token }}
sign-commits: true
- name: extension_workflow_rollout::rollout_workflows_to_extension::enable_auto_merge
run: |
PR_NUMBER="${{ steps.create-pr.outputs.pull-request-number }}"
if [ -n "$PR_NUMBER" ]; then
cd extension
gh pr merge "$PR_NUMBER" --auto --squash
fi
shell: bash -euxo pipefail {0}
env:
GH_TOKEN: ${{ steps.generate-token.outputs.token }}
timeout-minutes: 10

View File

@@ -6,7 +6,7 @@ on:
jobs:
handle-good-first-issue:
if: github.event.label.name == '.contrib/good first issue' && github.repository_owner == 'zed-industries'
if: github.event.label.name == 'good first issue' && github.repository_owner == 'zed-industries'
runs-on: ubuntu-latest
steps:

View File

@@ -472,17 +472,11 @@ jobs:
if: startsWith(github.ref, 'refs/tags/v') && endsWith(github.ref, '-pre') && !endsWith(github.ref, '.0-pre')
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- id: get-app-token
name: steps::authenticate_as_zippy
uses: actions/create-github-app-token@bef1eaf1c0ac2b148ee2a0a74c65fbe6db0631f1
with:
app-id: ${{ secrets.ZED_ZIPPY_APP_ID }}
private-key: ${{ secrets.ZED_ZIPPY_APP_PRIVATE_KEY }}
- name: gh release edit "$GITHUB_REF_NAME" --repo=zed-industries/zed --draft=false
run: gh release edit "$GITHUB_REF_NAME" --repo=zed-industries/zed --draft=false
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ steps.get-app-token.outputs.token }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
notify_on_failure:
needs:
- upload_release_assets

View File

@@ -74,12 +74,9 @@ jobs:
uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2
with:
version: '9'
- name: steps::prettier
- name: ./script/prettier
run: ./script/prettier
shell: bash -euxo pipefail {0}
- name: steps::cargo_fmt
run: cargo fmt --all -- --check
shell: bash -euxo pipefail {0}
- name: ./script/check-todos
run: ./script/check-todos
shell: bash -euxo pipefail {0}
@@ -90,6 +87,9 @@ jobs:
uses: crate-ci/typos@2d0ce569feab1f8752f1dde43cc2f2aa53236e06
with:
config: ./typos.toml
- name: steps::cargo_fmt
run: cargo fmt --all -- --check
shell: bash -euxo pipefail {0}
timeout-minutes: 60
run_tests_windows:
needs:
@@ -353,9 +353,6 @@ jobs:
- name: steps::download_wasi_sdk
run: ./script/download-wasi-sdk
shell: bash -euxo pipefail {0}
- name: ./script/generate-action-metadata
run: ./script/generate-action-metadata
shell: bash -euxo pipefail {0}
- name: run_tests::check_docs::install_mdbook
uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08
with:
@@ -500,8 +497,6 @@ jobs:
env:
GIT_AUTHOR_NAME: Protobuf Action
GIT_AUTHOR_EMAIL: ci@zed.dev
GIT_COMMITTER_NAME: Protobuf Action
GIT_COMMITTER_EMAIL: ci@zed.dev
steps:
- name: steps::checkout_repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683

.gitignore (vendored): 5 changes

View File

@@ -8,7 +8,6 @@
.DS_Store
.blob_store
.build
.claude/settings.local.json
.envrc
.flatpak-builder
.idea
@@ -36,11 +35,7 @@
DerivedData/
Packages
xcuserdata/
crates/docs_preprocessor/actions.json
# Don't commit any secrets to the repo.
.env
.env.secret.toml
# `nix build` output
/result

View File

@@ -141,9 +141,6 @@ Uladzislau Kaminski <i@uladkaminski.com>
Uladzislau Kaminski <i@uladkaminski.com> <uladzislau_kaminski@epam.com>
Vitaly Slobodin <vitaliy.slobodin@gmail.com>
Vitaly Slobodin <vitaliy.slobodin@gmail.com> <vitaly_slobodin@fastmail.com>
Yara <davidsk@zed.dev>
Yara <git@davidsk.dev>
Yara <git@yara.blue>
Will Bradley <williambbradley@gmail.com>
Will Bradley <williambbradley@gmail.com> <will@zed.dev>
WindSoilder <WindSoilder@outlook.com>

.rules: 6 changes

View File

@@ -26,12 +26,6 @@
});
```
# Timers in tests
* In GPUI tests, prefer GPUI executor timers over `smol::Timer::after(...)` when you need timeouts, delays, or to drive `run_until_parked()`:
- Use `cx.background_executor().timer(duration).await` (or `cx.background_executor.timer(duration).await` in `TestAppContext`) so the work is scheduled on GPUI's dispatcher.
- Avoid `smol::Timer::after(...)` for test timeouts when you rely on `run_until_parked()`, because it may not be tracked by GPUI's scheduler and can lead to "nothing left to run" when pumping.
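A minimal sketch of the preferred pattern, assuming a `TestAppContext`-based GPUI test (the test body and duration are illustrative):
```rust
use std::time::Duration;

#[gpui::test]
async fn waits_on_gpui_timer(cx: &mut gpui::TestAppContext) {
    // Schedule the delay on GPUI's dispatcher so the test scheduler tracks it.
    cx.background_executor.timer(Duration::from_millis(50)).await;

    // Avoid smol::Timer::after(...) here: it is not tracked by GPUI's scheduler,
    // so pumping with run_until_parked() can hit "nothing left to run".
}
```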
# GPUI
GPUI is a UI framework which also provides primitives for state and concurrency management.

View File

@@ -15,16 +15,15 @@ with the community to improve the product in ways we haven't thought of (or had
In particular we love PRs that are:
- Fixing or extending the docs.
- Fixing bugs.
- Small enhancements to existing features to make them work for more people (making things work on more platforms/modes/whatever).
- Fixes to existing bugs and issues.
- Small enhancements to existing features, particularly to make them work for more people.
- Small extra features, like keybindings or actions you miss from other editors or extensions.
- Part of a Community Program like [Let's Git Together](https://github.com/zed-industries/zed/issues/41541).
- Work towards shipping larger features on our roadmap.
If you're looking for concrete ideas:
- [Triaged bugs with confirmed steps to reproduce](https://github.com/zed-industries/zed/issues?q=is%3Aissue%20state%3Aopen%20type%3ABug%20label%3Astate%3Areproducible).
- [Area labels](https://github.com/zed-industries/zed/labels?q=area%3A*) to browse bugs in a specific part of the product you care about (after clicking on an area label, add type:Bug to the search).
- Our [top-ranking issues](https://github.com/zed-industries/zed/issues/5393) based on votes by the community.
- Our [public roadmap](https://zed.dev/roadmap) contains a rough outline of our near-term priorities for Zed.
## Sending changes
@@ -38,17 +37,9 @@ like, sorry).
Although we will take a look, we tend to only merge about half the PRs that are
submitted. If you'd like your PR to have the best chance of being merged:
- Make sure the change is **desired**: we're always happy to accept bugfixes,
but features should be confirmed with us first if you aim to avoid wasted
effort. If there isn't already a GitHub issue for your feature with staff
confirmation that we want it, start with a GitHub discussion rather than a PR.
- Include a clear description of **what you're solving**, and why it's important.
- Include **tests**.
- If it changes the UI, attach **screenshots** or screen recordings.
- Make the PR about **one thing only**, e.g. if it's a bugfix, don't add two
features and a refactoring on top of that.
- Keep AI assistance under your judgement and responsibility: it's unlikely
we'll merge a vibe-coded PR that the author doesn't understand.
- Include a clear description of what you're solving, and why it's important to you.
- Include tests.
- If it changes the UI, attach screenshots or screen recordings.
The internal advice for reviewers is as follows:
@@ -59,9 +50,10 @@ The internal advice for reviewers is as follows:
If you need more feedback from us: the best way is to be responsive to
Github comments, or to offer up time to pair with us.
If you need help deciding how to fix a bug, or finish implementing a feature
that we've agreed we want, please open a PR early so we can discuss how to make
the change with code in hand.
If you are making a larger change, or need advice on how to finish the change
you're making, please open the PR early. We would love to help you get
things right, and it's often easier to see how to solve a problem before the
diff gets too big.
## Things we will (probably) not merge
@@ -69,11 +61,11 @@ Although there are few hard and fast rules, typically we don't merge:
- Anything that can be provided by an extension. For example a new language, or theme. For adding themes or support for a new language to Zed, check out our [docs on developing extensions](https://zed.dev/docs/extensions/developing-extensions).
- New file icons. Zed's default icon theme consists of icons that are hand-designed to fit together in a cohesive manner, please don't submit PRs with off-the-shelf SVGs.
- Features where (in our subjective opinion) the extra complexity isn't worth it for the number of people who will benefit.
- Giant refactorings.
- Non-trivial changes with no tests.
- Stylistic code changes that do not alter any app logic. Reducing allocations, removing `.unwrap()`s, fixing typos is great; making code "more readable" — maybe not so much.
- Anything that seems AI-generated without understanding the output.
- Features where (in our subjective opinion) the extra complexity isn't worth it for the number of people who will benefit.
- Anything that seems completely AI generated.
## Bird's-eye view of Zed

Cargo.lock (generated): 2090 changes

File diff suppressed because it is too large.

View File

@@ -9,7 +9,6 @@ members = [
"crates/agent_servers",
"crates/agent_settings",
"crates/agent_ui",
"crates/agent_ui_v2",
"crates/ai_onboarding",
"crates/anthropic",
"crates/askpass",
@@ -33,13 +32,13 @@ members = [
"crates/cloud_api_client",
"crates/cloud_api_types",
"crates/cloud_llm_client",
"crates/cloud_zeta2_prompt",
"crates/collab",
"crates/collab_ui",
"crates/collections",
"crates/command_palette",
"crates/command_palette_hooks",
"crates/component",
"crates/component_preview",
"crates/context_server",
"crates/copilot",
"crates/crashes",
@@ -55,9 +54,9 @@ members = [
"crates/diagnostics",
"crates/docs_preprocessor",
"crates/edit_prediction",
"crates/edit_prediction_types",
"crates/edit_prediction_ui",
"crates/edit_prediction_button",
"crates/edit_prediction_context",
"crates/zeta2_tools",
"crates/editor",
"crates/eval",
"crates/eval_utils",
@@ -193,23 +192,19 @@ members = [
"crates/vercel",
"crates/vim",
"crates/vim_mode_setting",
"crates/which_key",
"crates/watch",
"crates/web_search",
"crates/web_search_providers",
"crates/workspace",
"crates/worktree",
"crates/worktree_benchmarks",
"crates/x_ai",
"crates/zed",
"crates/zed_actions",
"crates/zed_env_vars",
"crates/edit_prediction_cli",
"crates/zeta_prompt",
"crates/zeta",
"crates/zeta_cli",
"crates/zlog",
"crates/zlog_settings",
"crates/ztracing",
"crates/ztracing_macro",
#
# Extensions
@@ -246,9 +241,9 @@ action_log = { path = "crates/action_log" }
agent = { path = "crates/agent" }
activity_indicator = { path = "crates/activity_indicator" }
agent_ui = { path = "crates/agent_ui" }
agent_ui_v2 = { path = "crates/agent_ui_v2" }
agent_settings = { path = "crates/agent_settings" }
agent_servers = { path = "crates/agent_servers" }
ai = { path = "crates/ai" }
ai_onboarding = { path = "crates/ai_onboarding" }
anthropic = { path = "crates/anthropic" }
askpass = { path = "crates/askpass" }
@@ -258,6 +253,7 @@ assistant_slash_command = { path = "crates/assistant_slash_command" }
assistant_slash_commands = { path = "crates/assistant_slash_commands" }
audio = { path = "crates/audio" }
auto_update = { path = "crates/auto_update" }
auto_update_helper = { path = "crates/auto_update_helper" }
auto_update_ui = { path = "crates/auto_update_ui" }
aws_http_client = { path = "crates/aws_http_client" }
bedrock = { path = "crates/bedrock" }
@@ -271,12 +267,13 @@ clock = { path = "crates/clock" }
cloud_api_client = { path = "crates/cloud_api_client" }
cloud_api_types = { path = "crates/cloud_api_types" }
cloud_llm_client = { path = "crates/cloud_llm_client" }
cloud_zeta2_prompt = { path = "crates/cloud_zeta2_prompt" }
collab = { path = "crates/collab" }
collab_ui = { path = "crates/collab_ui" }
collections = { path = "crates/collections", version = "0.1.0" }
command_palette = { path = "crates/command_palette" }
command_palette_hooks = { path = "crates/command_palette_hooks" }
component = { path = "crates/component" }
component_preview = { path = "crates/component_preview" }
context_server = { path = "crates/context_server" }
copilot = { path = "crates/copilot" }
crashes = { path = "crates/crashes" }
@@ -316,9 +313,10 @@ http_client = { path = "crates/http_client" }
http_client_tls = { path = "crates/http_client_tls" }
icons = { path = "crates/icons" }
image_viewer = { path = "crates/image_viewer" }
edit_prediction_types = { path = "crates/edit_prediction_types" }
edit_prediction_ui = { path = "crates/edit_prediction_ui" }
edit_prediction = { path = "crates/edit_prediction" }
edit_prediction_button = { path = "crates/edit_prediction_button" }
edit_prediction_context = { path = "crates/edit_prediction_context" }
zeta2_tools = { path = "crates/zeta2_tools" }
inspector_ui = { path = "crates/inspector_ui" }
install_cli = { path = "crates/install_cli" }
journal = { path = "crates/journal" }
@@ -360,6 +358,8 @@ panel = { path = "crates/panel" }
paths = { path = "crates/paths" }
perf = { path = "tooling/perf" }
picker = { path = "crates/picker" }
plugin = { path = "crates/plugin" }
plugin_macros = { path = "crates/plugin_macros" }
prettier = { path = "crates/prettier" }
settings_profile_selector = { path = "crates/settings_profile_selector" }
project = { path = "crates/project" }
@@ -370,10 +370,12 @@ proto = { path = "crates/proto" }
recent_projects = { path = "crates/recent_projects" }
refineable = { path = "crates/refineable" }
release_channel = { path = "crates/release_channel" }
scheduler = { path = "crates/scheduler" }
remote = { path = "crates/remote" }
remote_server = { path = "crates/remote_server" }
repl = { path = "crates/repl" }
reqwest_client = { path = "crates/reqwest_client" }
rich_text = { path = "crates/rich_text" }
rodio = { git = "https://github.com/RustAudio/rodio", rev ="e2074c6c2acf07b57cf717e076bdda7a9ac6e70b", features = ["wav", "playback", "wav_output", "recording"] }
rope = { path = "crates/rope" }
rpc = { path = "crates/rpc" }
@@ -390,6 +392,7 @@ snippets_ui = { path = "crates/snippets_ui" }
sqlez = { path = "crates/sqlez" }
sqlez_macros = { path = "crates/sqlez_macros" }
story = { path = "crates/story" }
storybook = { path = "crates/storybook" }
streaming_diff = { path = "crates/streaming_diff" }
sum_tree = { path = "crates/sum_tree" }
supermaven = { path = "crates/supermaven" }
@@ -406,6 +409,7 @@ terminal_view = { path = "crates/terminal_view" }
text = { path = "crates/text" }
theme = { path = "crates/theme" }
theme_extension = { path = "crates/theme_extension" }
theme_importer = { path = "crates/theme_importer" }
theme_selector = { path = "crates/theme_selector" }
time_format = { path = "crates/time_format" }
title_bar = { path = "crates/title_bar" }
@@ -419,7 +423,6 @@ util_macros = { path = "crates/util_macros" }
vercel = { path = "crates/vercel" }
vim = { path = "crates/vim" }
vim_mode_setting = { path = "crates/vim_mode_setting" }
which_key = { path = "crates/which_key" }
watch = { path = "crates/watch" }
web_search = { path = "crates/web_search" }
@@ -430,18 +433,15 @@ x_ai = { path = "crates/x_ai" }
zed = { path = "crates/zed" }
zed_actions = { path = "crates/zed_actions" }
zed_env_vars = { path = "crates/zed_env_vars" }
edit_prediction = { path = "crates/edit_prediction" }
zeta_prompt = { path = "crates/zeta_prompt" }
zeta = { path = "crates/zeta" }
zlog = { path = "crates/zlog" }
zlog_settings = { path = "crates/zlog_settings" }
ztracing = { path = "crates/ztracing" }
ztracing_macro = { path = "crates/ztracing_macro" }
#
# External crates
#
agent-client-protocol = { version = "=0.9.2", features = ["unstable"] }
agent-client-protocol = { version = "0.7.0", features = ["unstable"] }
aho-corasick = "1.1"
alacritty_terminal = "0.25.1-rc1"
any_vec = "0.14"
@@ -460,15 +460,15 @@ async-task = "4.7"
async-trait = "0.1"
async-tungstenite = "0.31.0"
async_zip = { version = "0.0.18", features = ["deflate", "deflate64"] }
aws-config = { version = "1.8.10", features = ["behavior-version-latest"] }
aws-credential-types = { version = "1.2.8", features = [
aws-config = { version = "1.6.1", features = ["behavior-version-latest"] }
aws-credential-types = { version = "1.2.2", features = [
"hardcoded-credentials",
] }
aws-sdk-bedrockruntime = { version = "1.112.0", features = [
aws-sdk-bedrockruntime = { version = "1.80.0", features = [
"behavior-version-latest",
] }
aws-smithy-runtime-api = { version = "1.9.2", features = ["http-1x", "client"] }
aws-smithy-types = { version = "1.3.4", features = ["http-body-1-x"] }
aws-smithy-runtime-api = { version = "1.7.4", features = ["http-1x", "client"] }
aws-smithy-types = { version = "1.3.0", features = ["http-body-1-x"] }
backtrace = "0.3"
base64 = "0.22"
bincode = "1.2.1"
@@ -481,7 +481,6 @@ bytes = "1.0"
cargo_metadata = "0.19"
cargo_toml = "0.21"
cfg-if = "1.0.3"
chardetng = "0.1"
chrono = { version = "0.4", features = ["serde"] }
ciborium = "0.2"
circular-buffer = "1.0"
@@ -505,16 +504,17 @@ dotenvy = "0.15.0"
ec4rs = "1.1"
emojis = "0.6.1"
env_logger = "0.11"
encoding_rs = "0.8"
exec = "0.3.1"
fancy-regex = "0.16.0"
fancy-regex = "0.14.0"
fork = "0.4.0"
futures = "0.3"
futures-batch = "0.6.1"
futures-lite = "1.13"
gh-workflow = { git = "https://github.com/zed-industries/gh-workflow", rev = "09acfdf2bd5c1d6254abefd609c808ff73547b2c" }
gh-workflow = { git = "https://github.com/zed-industries/gh-workflow", rev = "e5f883040530b4df36437f140084ee5cc7c1c9be" }
git2 = { version = "0.20.1", default-features = false }
globset = "0.4"
handlebars = "4.3"
hashbrown = "0.15.3"
heck = "0.5"
heed = { version = "0.21.0", features = ["read-txn-no-tls"] }
hex = "0.4.3"
@@ -531,7 +531,7 @@ indoc = "2"
inventory = "0.3.19"
itertools = "0.14.0"
json_dotpath = "1.1"
jsonschema = "0.37.0"
jsonschema = "0.30.0"
jsonwebtoken = "9.3"
jupyter-protocol = "0.10.0"
jupyter-websocket-client = "0.15.0"
@@ -550,6 +550,7 @@ nanoid = "0.4"
nbformat = "0.15.0"
nix = "0.29"
num-format = "0.4.4"
num-traits = "0.2"
objc = "0.2"
objc2-foundation = { version = "=0.3.1", default-features = false, features = [
"NSArray",
@@ -588,6 +589,7 @@ pet = { git = "https://github.com/microsoft/python-environment-tools.git", rev =
pet-conda = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-core = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-fs = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-pixi = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-poetry = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-reporter = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
pet-virtualenv = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "1e86914c3ce2f3a08c0cedbcb0615a7f9fa7a5da" }
@@ -627,6 +629,7 @@ scap = { git = "https://github.com/zed-industries/scap", rev = "4afea48c3b002197
schemars = { version = "1.0", features = ["indexmap2"] }
semver = { version = "1.0", features = ["serde"] }
serde = { version = "1.0.221", features = ["derive", "rc"] }
serde_derive = "1.0.221"
serde_json = { version = "1.0.144", features = ["preserve_order", "raw_value"] }
serde_json_lenient = { version = "0.2", features = [
"preserve_order",
@@ -638,9 +641,10 @@ serde_urlencoded = "0.7"
sha2 = "0.10"
shellexpand = "2.1.0"
shlex = "1.3.0"
similar = "2.6"
simplelog = "0.12.2"
slotmap = "1.0.6"
smallvec = { version = "1.6", features = ["union", "const_new"] }
smallvec = { version = "1.6", features = ["union"] }
smol = "2.0"
sqlformat = "0.2"
stacksafe = "0.1"
@@ -654,7 +658,7 @@ sysinfo = "0.37.0"
take-until = "0.2.0"
tempfile = "3.20.0"
thiserror = "2.0.12"
tiktoken-rs = { git = "https://github.com/zed-industries/tiktoken-rs", rev = "2570c4387a8505fb8f1d3f3557454b474f1e8271" }
tiktoken-rs = { git = "https://github.com/zed-industries/tiktoken-rs", rev = "7249f999c5fdf9bf3cc5c288c964454e4dac0c00" }
time = { version = "0.3", features = [
"macros",
"parsing",
@@ -666,11 +670,10 @@ time = { version = "0.3", features = [
tiny_http = "0.8"
tokio = { version = "1" }
tokio-tungstenite = { version = "0.26", features = ["__rustls-tls"] }
tokio-socks = { version = "0.5.2", default-features = false, features = ["futures-io", "tokio"] }
toml = "0.8"
toml_edit = { version = "0.22", default-features = false, features = ["display", "parse", "serde"] }
tower-http = "0.4.4"
tree-sitter = { version = "0.26", features = ["wasm"] }
tree-sitter = { version = "0.25.10", features = ["wasm"] }
tree-sitter-bash = "0.25.1"
tree-sitter-c = "0.23"
tree-sitter-cpp = { git = "https://github.com/tree-sitter/tree-sitter-cpp", rev = "5cb9b693cfd7bfacab1d9ff4acac1a4150700609" }
@@ -693,7 +696,6 @@ tree-sitter-ruby = "0.23"
tree-sitter-rust = "0.24"
tree-sitter-typescript = { git = "https://github.com/zed-industries/tree-sitter-typescript", rev = "e2c53597d6a5d9cf7bbe8dccde576fe1e46c5899" } # https://github.com/tree-sitter/tree-sitter-typescript/pull/347
tree-sitter-yaml = { git = "https://github.com/zed-industries/tree-sitter-yaml", rev = "baff0b51c64ef6a1fb1f8390f3ad6015b83ec13a" }
tracing = "0.1.40"
unicase = "2.6"
unicode-script = "0.5.7"
unicode-segmentation = "1.10"
@@ -704,7 +706,7 @@ uuid = { version = "1.1.2", features = ["v4", "v5", "v7", "serde"] }
walkdir = "2.5"
wasm-encoder = "0.221"
wasmparser = "0.221"
wasmtime = { version = "33", default-features = false, features = [
wasmtime = { version = "29", default-features = false, features = [
"async",
"demangle",
"runtime",
@@ -713,10 +715,11 @@ wasmtime = { version = "33", default-features = false, features = [
"incremental-cache",
"parallel-compilation",
] }
wasmtime-wasi = "33"
wasmtime-wasi = "29"
wax = "0.6"
which = "6.0.0"
windows-core = "0.61"
wit-component = "0.221"
yawc = "0.2.5"
zeroize = "1.8"
zstd = "0.11"
@@ -798,13 +801,20 @@ settings_macros = { opt-level = 3 }
sqlez_macros = { opt-level = 3, codegen-units = 1 }
ui_macros = { opt-level = 3 }
util_macros = { opt-level = 3 }
serde_derive = { opt-level = 3 }
quote = { opt-level = 3 }
syn = { opt-level = 3 }
proc-macro2 = { opt-level = 3 }
# proc-macros end
taffy = { opt-level = 3 }
cranelift-codegen = { opt-level = 3 }
cranelift-codegen-meta = { opt-level = 3 }
cranelift-codegen-shared = { opt-level = 3 }
resvg = { opt-level = 3 }
rustybuzz = { opt-level = 3 }
ttf-parser = { opt-level = 3 }
wasmtime-cranelift = { opt-level = 3 }
wasmtime = { opt-level = 3 }
# Build single-source-file crates with cg=1 as it helps make `cargo build` of a whole workspace a bit faster
activity_indicator = { codegen-units = 1 }
@@ -813,11 +823,12 @@ breadcrumbs = { codegen-units = 1 }
collections = { codegen-units = 1 }
command_palette = { codegen-units = 1 }
command_palette_hooks = { codegen-units = 1 }
extension_cli = { codegen-units = 1 }
feature_flags = { codegen-units = 1 }
file_icons = { codegen-units = 1 }
fsevent = { codegen-units = 1 }
image_viewer = { codegen-units = 1 }
edit_prediction_ui = { codegen-units = 1 }
edit_prediction_button = { codegen-units = 1 }
install_cli = { codegen-units = 1 }
journal = { codegen-units = 1 }
json_schema_store = { codegen-units = 1 }
@@ -832,6 +843,7 @@ project_symbols = { codegen-units = 1 }
refineable = { codegen-units = 1 }
release_channel = { codegen-units = 1 }
reqwest_client = { codegen-units = 1 }
rich_text = { codegen-units = 1 }
session = { codegen-units = 1 }
snippet = { codegen-units = 1 }
snippets_ui = { codegen-units = 1 }
@@ -864,6 +876,8 @@ unexpected_cfgs = { level = "allow" }
dbg_macro = "deny"
todo = "deny"
# This is not a style lint, see https://github.com/rust-lang/rust-clippy/pull/15454
# Remove when the lint gets promoted to `suspicious`.
declare_interior_mutable_const = "deny"
redundant_clone = "deny"


@@ -1,6 +1,6 @@
# syntax = docker/dockerfile:1.2
FROM rust:1.92-bookworm as builder
FROM rust:1.91.1-bookworm as builder
WORKDIR app
COPY . .
@@ -34,4 +34,8 @@ RUN apt-get update; \
linux-perf binutils
WORKDIR app
COPY --from=builder /app/collab /app/collab
COPY --from=builder /app/crates/collab/migrations /app/migrations
COPY --from=builder /app/crates/collab/migrations_llm /app/migrations_llm
ENV MIGRATIONS_PATH=/app/migrations
ENV LLM_DATABASE_MIGRATIONS_PATH=/app/migrations_llm
ENTRYPOINT ["/app/collab"]


@@ -9,7 +9,7 @@ Welcome to Zed, a high-performance, multiplayer code editor from the creators of
### Installation
On macOS, Linux, and Windows you can [download Zed directly](https://zed.dev/download) or install Zed via your local package manager ([macOS](https://zed.dev/docs/installation#macos)/[Linux](https://zed.dev/docs/linux#installing-via-a-package-manager)/[Windows](https://zed.dev/docs/windows#package-managers)).
On macOS, Linux, and Windows you can [download Zed directly](https://zed.dev/download) or [install Zed via your local package manager](https://zed.dev/docs/linux#installing-via-a-package-manager).
Other platforms are not yet available:
@@ -20,6 +20,7 @@ Other platforms are not yet available:
- [Building Zed for macOS](./docs/src/development/macos.md)
- [Building Zed for Linux](./docs/src/development/linux.md)
- [Building Zed for Windows](./docs/src/development/windows.md)
- [Running Collaboration Locally](./docs/src/development/local-collaboration.md)
### Contributing


@@ -28,7 +28,7 @@ ai
= @rtfeldman
audio
= @yara-blue
= @dvdsk
crashes
= @p1n3appl3
@@ -53,7 +53,7 @@ extension
git
= @cole-miller
= @danilo-leal
= @yara-blue
= @dvdsk
= @kubkon
= @Anthony-Eid
= @cameron1024
@@ -76,7 +76,7 @@ languages
linux
= @cole-miller
= @yara-blue
= @dvdsk
= @p1n3appl3
= @probably-neb
= @smitbarmase
@@ -92,7 +92,7 @@ multi_buffer
= @SomeoneToIgnore
pickers
= @yara-blue
= @dvdsk
= @p1n3appl3
= @SomeoneToIgnore


@@ -1,5 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M13.3996 5.59852C13.3994 5.3881 13.3439 5.18144 13.2386 4.99926C13.1333 4.81709 12.9819 4.66581 12.7997 4.56059L8.59996 2.16076C8.41755 2.05544 8.21063 2 8 2C7.78937 2 7.58246 2.05544 7.40004 2.16076L3.20033 4.56059C3.0181 4.66581 2.86674 4.81709 2.76144 4.99926C2.65613 5.18144 2.60059 5.3881 2.60037 5.59852V10.3982C2.60059 10.6086 2.65613 10.8153 2.76144 10.9975C2.86674 11.1796 3.0181 11.3309 3.20033 11.4361L7.40004 13.836C7.58246 13.9413 7.78937 13.9967 8 13.9967C8.21063 13.9967 8.41755 13.9413 8.59996 13.836L12.7997 11.4361C12.9819 11.3309 13.1333 11.1796 13.2386 10.9975C13.3439 10.8153 13.3994 10.6086 13.3996 10.3982V5.59852Z" stroke="white" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M2.78033 4.99857L7.99998 7.99836L13.2196 4.99857" stroke="white" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M8 13.9979V7.99829" stroke="white" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>


@@ -1,8 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M4 2V10" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M12 6C12.5304 6 13.0391 5.78929 13.4142 5.41421C13.7893 5.03914 14 4.53043 14 4C14 3.46957 13.7893 2.96086 13.4142 2.58579C13.0391 2.21071 12.5304 2 12 2C11.4696 2 10.9609 2.21071 10.5858 2.58579C10.2107 2.96086 10 3.46957 10 4C10 4.53043 10.2107 5.03914 10.5858 5.41421C10.9609 5.78929 11.4696 6 12 6Z" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M4 14C4.53043 14 5.03914 13.7893 5.41421 13.4142C5.78929 13.0391 6 12.5304 6 12C6 11.4696 5.78929 10.9609 5.41421 10.5858C5.03914 10.2107 4.53043 10 4 10C3.46957 10 2.96086 10.2107 2.58579 10.5858C2.21071 10.9609 2 11.4696 2 12C2 12.5304 2.21071 13.0391 2.58579 13.4142C2.96086 13.7893 3.46957 14 4 14Z" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M10 4C8.4087 4 6.88258 4.63214 5.75736 5.75736C4.63214 6.88258 4 8.4087 4 10" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M12 10V14" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M14 12H10" stroke="#C6CAD0" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>


@@ -1,11 +0,0 @@
<svg width="28" height="28" viewBox="0 0 28 28" fill="none" id="svg1378540956_510">
<g clip-path="url(#svg1378540956_510_clip0_1_1506)" transform="translate(4, 4) scale(0.857)">
<path d="M17.0547 0.372066H8.52652L-0.00165176 8.90024V17.4284H8.52652V8.90024H17.0547V0.372066Z" fill="#1A1C20"></path>
<path d="M10.1992 27.6279H18.7274L27.2556 19.0998V10.5716H18.7274V19.0998H10.1992V27.6279Z" fill="#1A1C20"></path>
</g>
<defs>
<clipPath id="svg1378540956_510_clip0_1_1506">
<rect width="27.2559" height="27.2559" fill="white" transform="translate(0 0.37207)"></rect>
</clipPath>
</defs>
</svg>


@@ -1,5 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M6.2224 1.32129L5.2036 4.41875C5.15145 4.57727 5.06282 4.72134 4.94481 4.83934C4.82681 4.95735 4.68274 5.04598 4.52422 5.09813L1.42676 6.11693L4.52422 7.13574C4.68274 7.18788 4.82681 7.27652 4.94481 7.39453C5.06282 7.51253 5.15145 7.6566 5.2036 7.81512L6.2224 10.9126L7.24121 7.81512C7.29335 7.6566 7.38199 7.51253 7.5 7.39453C7.618 7.27652 7.76207 7.18788 7.9206 7.13574L11.018 6.11693L7.9206 5.09813C7.76207 5.04598 7.618 4.95735 7.5 4.83934C7.38199 4.72134 7.29335 4.57727 7.24121 4.41875L6.2224 1.32129Z" fill="black" fill-opacity="0.15" stroke="black" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
<path d="M9.76681 13.9373C9.76681 13.6048 9.95997 13.3083 10.5126 12.7917L11.8872 11.4978C12.3545 11.0575 12.5612 10.77 12.5612 10.4735C12.5612 10.1411 12.3185 9.91643 11.9681 9.91643C11.6986 9.91643 11.5054 10.0242 11.2673 10.3208C10.9933 10.6622 10.7956 10.779 10.4946 10.779C10.0633 10.779 9.75781 10.4915 9.75781 10.0916C9.75781 9.21559 10.8136 8.44287 12.067 8.44287C13.3743 8.44287 14.3492 9.22907 14.3492 10.2848C14.3492 10.9452 13.9988 11.5742 13.2845 12.2077L12.2242 13.1511V13.223H13.7292C14.2503 13.223 14.5738 13.5015 14.5738 13.9552C14.5738 14.4089 14.2593 14.6785 13.7292 14.6785H10.5979C10.1037 14.6785 9.76681 14.3775 9.76681 13.9373Z" fill="black"/>
<path d="M12.8994 1.32129V4.00482M11.5576 2.66302H14.2412" stroke="black" stroke-opacity="0.75" stroke-width="1.2" stroke-linecap="round" stroke-linejoin="round"/>
</svg>

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -10,12 +10,12 @@
"context": "Workspace",
"bindings": {
// "shift shift": "file_finder::Toggle"
},
}
},
{
"context": "Editor && vim_mode == insert",
"bindings": {
// "j k": "vim::NormalBefore"
},
},
}
}
]


@@ -4,15 +4,15 @@
"bindings": {
"ctrl-shift-f5": "workspace::Reload", // window:reload
"ctrl-k ctrl-n": "workspace::ActivatePreviousPane", // window:focus-next-pane
"ctrl-k ctrl-p": "workspace::ActivateNextPane", // window:focus-previous-pane
},
"ctrl-k ctrl-p": "workspace::ActivateNextPane" // window:focus-previous-pane
}
},
{
"context": "Editor",
"bindings": {
"ctrl-k ctrl-u": "editor::ConvertToUpperCase", // editor:upper-case
"ctrl-k ctrl-l": "editor::ConvertToLowerCase", // editor:lower-case
},
"ctrl-k ctrl-l": "editor::ConvertToLowerCase" // editor:lower-case
}
},
{
"context": "Editor && mode == full",
@@ -32,8 +32,8 @@
"ctrl-down": "editor::MoveLineDown", // editor:move-line-down
"ctrl-\\": "workspace::ToggleLeftDock", // tree-view:toggle
"ctrl-shift-m": "markdown::OpenPreviewToTheSide", // markdown-preview:toggle
"ctrl-r": "outline::Toggle", // symbols-view:toggle-project-symbols
},
"ctrl-r": "outline::Toggle" // symbols-view:toggle-project-symbols
}
},
{
"context": "BufferSearchBar",
@@ -41,8 +41,8 @@
"f3": ["editor::SelectNext", { "replace_newest": true }], // find-and-replace:find-next
"shift-f3": ["editor::SelectPrevious", { "replace_newest": true }], //find-and-replace:find-previous
"ctrl-f3": "search::SelectNextMatch", // find-and-replace:find-next-selected
"ctrl-shift-f3": "search::SelectPreviousMatch", // find-and-replace:find-previous-selected
},
"ctrl-shift-f3": "search::SelectPreviousMatch" // find-and-replace:find-previous-selected
}
},
{
"context": "Workspace",
@@ -50,8 +50,8 @@
"ctrl-\\": "workspace::ToggleLeftDock", // tree-view:toggle
"ctrl-k ctrl-b": "workspace::ToggleLeftDock", // tree-view:toggle
"ctrl-t": "file_finder::Toggle", // fuzzy-finder:toggle-file-finder
"ctrl-r": "project_symbols::Toggle", // symbols-view:toggle-project-symbols
},
"ctrl-r": "project_symbols::Toggle" // symbols-view:toggle-project-symbols
}
},
{
"context": "Pane",
@@ -65,8 +65,8 @@
"ctrl-6": ["pane::ActivateItem", 5], // tree-view:open-selected-entry-in-pane-6
"ctrl-7": ["pane::ActivateItem", 6], // tree-view:open-selected-entry-in-pane-7
"ctrl-8": ["pane::ActivateItem", 7], // tree-view:open-selected-entry-in-pane-8
"ctrl-9": ["pane::ActivateItem", 8], // tree-view:open-selected-entry-in-pane-9
},
"ctrl-9": ["pane::ActivateItem", 8] // tree-view:open-selected-entry-in-pane-9
}
},
{
"context": "ProjectPanel",
@@ -75,8 +75,8 @@
"backspace": ["project_panel::Trash", { "skip_prompt": false }],
"ctrl-x": "project_panel::Cut", // tree-view:cut
"ctrl-c": "project_panel::Copy", // tree-view:copy
"ctrl-v": "project_panel::Paste", // tree-view:paste
},
"ctrl-v": "project_panel::Paste" // tree-view:paste
}
},
{
"context": "ProjectPanel && not_editing",
@@ -90,7 +90,7 @@
"d": "project_panel::Duplicate", // tree-view:duplicate
"home": "menu::SelectFirst", // core:move-to-top
"end": "menu::SelectLast", // core:move-to-bottom
"shift-a": "project_panel::NewDirectory", // tree-view:add-folder
},
},
"shift-a": "project_panel::NewDirectory" // tree-view:add-folder
}
}
]


@@ -8,8 +8,8 @@
"ctrl-shift-i": "agent::ToggleFocus",
"ctrl-l": "agent::ToggleFocus",
"ctrl-shift-l": "agent::ToggleFocus",
"ctrl-shift-j": "agent::OpenSettings",
},
"ctrl-shift-j": "agent::OpenSettings"
}
},
{
"context": "Editor && mode == full",
@@ -20,18 +20,18 @@
"ctrl-shift-l": "agent::AddSelectionToThread", // In cursor uses "Ask" mode
"ctrl-l": "agent::AddSelectionToThread", // In cursor uses "Agent" mode
"ctrl-k": "assistant::InlineAssist",
"ctrl-shift-k": "assistant::InsertIntoEditor",
},
"ctrl-shift-k": "assistant::InsertIntoEditor"
}
},
{
"context": "InlineAssistant > Editor",
"context": "InlineAssistEditor",
"use_key_equivalents": true,
"bindings": {
"ctrl-shift-backspace": "editor::Cancel",
"ctrl-shift-backspace": "editor::Cancel"
// "alt-enter": // Quick Question
// "ctrl-shift-enter": // Full File Context
// "ctrl-shift-k": // Toggle input focus (editor <> inline assist)
},
}
},
{
"context": "AgentPanel || ContextEditor || (MessageEditor > Editor)",
@@ -47,7 +47,7 @@
"ctrl-shift-backspace": "editor::Cancel",
"ctrl-r": "agent::NewThread",
"ctrl-shift-v": "editor::Paste",
"ctrl-shift-k": "assistant::InsertIntoEditor",
"ctrl-shift-k": "assistant::InsertIntoEditor"
// "escape": "agent::ToggleFocus"
///// Enable when Zed supports multiple thread tabs
// "ctrl-t": // new thread tab
@@ -56,29 +56,28 @@
///// Enable if Zed adds support for keyboard navigation of thread elements
// "tab": // cycle to next message
// "shift-tab": // cycle to previous message
},
}
},
{
"context": "Editor && editor_agent_diff",
"use_key_equivalents": true,
"bindings": {
"ctrl-enter": "agent::KeepAll",
"ctrl-backspace": "agent::RejectAll",
},
"ctrl-backspace": "agent::RejectAll"
}
},
{
"context": "Editor && mode == full && edit_prediction",
"use_key_equivalents": true,
"bindings": {
"ctrl-right": "editor::AcceptNextWordEditPrediction",
"ctrl-down": "editor::AcceptNextLineEditPrediction",
},
"ctrl-right": "editor::AcceptPartialEditPrediction"
}
},
{
"context": "Terminal",
"use_key_equivalents": true,
"bindings": {
"ctrl-k": "assistant::InlineAssist",
},
},
"ctrl-k": "assistant::InlineAssist"
}
}
]


@@ -5,8 +5,8 @@
[
{
"bindings": {
"ctrl-g": "menu::Cancel",
},
"ctrl-g": "menu::Cancel"
}
},
{
// Workaround to avoid falling back to default bindings.
@@ -18,8 +18,8 @@
"ctrl-g": null, // currently activates `go_to_line::Toggle` when there is nothing to cancel
"ctrl-x": null, // currently activates `editor::Cut` if no following key is pressed for 1 second
"ctrl-p": null, // currently activates `file_finder::Toggle` when the cursor is on the first character of the buffer
"ctrl-n": null, // currently activates `workspace::NewFile` when the cursor is on the last character of the buffer
},
"ctrl-n": null // currently activates `workspace::NewFile` when the cursor is on the last character of the buffer
}
},
{
"context": "Editor",
@@ -82,8 +82,8 @@
"ctrl-s": "buffer_search::Deploy", // isearch-forward
"ctrl-r": "buffer_search::Deploy", // isearch-backward
"alt-^": "editor::JoinLines", // join-line
"alt-q": "editor::Rewrap", // fill-paragraph
},
"alt-q": "editor::Rewrap" // fill-paragraph
}
},
{
"context": "Editor && selection_mode", // region selection
@@ -119,22 +119,22 @@
"alt->": "editor::SelectToEnd",
"ctrl-home": "editor::SelectToBeginning",
"ctrl-end": "editor::SelectToEnd",
"ctrl-g": "editor::Cancel",
},
"ctrl-g": "editor::Cancel"
}
},
{
"context": "Editor && (showing_code_actions || showing_completions)",
"bindings": {
"ctrl-p": "editor::ContextMenuPrevious",
"ctrl-n": "editor::ContextMenuNext",
},
"ctrl-n": "editor::ContextMenuNext"
}
},
{
"context": "Editor && showing_signature_help && !showing_completions",
"bindings": {
"ctrl-p": "editor::SignatureHelpPrevious",
"ctrl-n": "editor::SignatureHelpNext",
},
"ctrl-n": "editor::SignatureHelpNext"
}
},
// Example setting for using emacs-style tab
// (i.e. indent the current line / selection or perform symbol completion depending on context)
@@ -164,8 +164,8 @@
"ctrl-x ctrl-f": "file_finder::Toggle", // find-file
"ctrl-x ctrl-s": "workspace::Save", // save-buffer
"ctrl-x ctrl-w": "workspace::SaveAs", // write-file
"ctrl-x s": "workspace::SaveAll", // save-some-buffers
},
"ctrl-x s": "workspace::SaveAll" // save-some-buffers
}
},
{
// Workaround to enable using native emacs from the Zed terminal.
@@ -185,22 +185,22 @@
"ctrl-x ctrl-f": null, // find-file
"ctrl-x ctrl-s": null, // save-buffer
"ctrl-x ctrl-w": null, // write-file
"ctrl-x s": null, // save-some-buffers
},
"ctrl-x s": null // save-some-buffers
}
},
{
"context": "BufferSearchBar > Editor",
"bindings": {
"ctrl-s": "search::SelectNextMatch",
"ctrl-r": "search::SelectPreviousMatch",
"ctrl-g": "buffer_search::Dismiss",
},
"ctrl-g": "buffer_search::Dismiss"
}
},
{
"context": "Pane",
"bindings": {
"ctrl-alt-left": "pane::GoBack",
"ctrl-alt-right": "pane::GoForward",
},
},
"ctrl-alt-right": "pane::GoForward"
}
}
]


@@ -13,8 +13,8 @@
"shift-f8": "debugger::StepOut",
"f9": "debugger::Continue",
"shift-f9": "debugger::Start",
"alt-shift-f9": "debugger::Start",
},
"alt-shift-f9": "debugger::Start"
}
},
{
"context": "Editor",
@@ -62,30 +62,28 @@
"ctrl-shift-end": "editor::SelectToEnd",
"ctrl-f8": "editor::ToggleBreakpoint",
"ctrl-shift-f8": "editor::EditLogBreakpoint",
"ctrl-shift-u": "editor::ToggleCase",
},
"ctrl-shift-u": "editor::ToggleCase"
}
},
{
"context": "Editor && mode == full",
"bindings": {
"ctrl-f12": "outline::Toggle",
"ctrl-r": ["buffer_search::Deploy", { "replace_enabled": true }],
"ctrl-e": "file_finder::Toggle",
"ctrl-shift-n": "file_finder::Toggle",
"ctrl-alt-n": "file_finder::Toggle",
"ctrl-g": "go_to_line::Toggle",
"alt-enter": "editor::ToggleCodeActions",
"ctrl-space": "editor::ShowCompletions",
"ctrl-q": "editor::Hover",
"ctrl-p": "editor::ShowSignatureHelp",
"ctrl-\\": "assistant::InlineAssist",
},
"ctrl-\\": "assistant::InlineAssist"
}
},
{
"context": "BufferSearchBar",
"bindings": {
"shift-enter": "search::SelectPreviousMatch",
},
"shift-enter": "search::SelectPreviousMatch"
}
},
{
"context": "BufferSearchBar || ProjectSearchBar",
@@ -93,8 +91,8 @@
"alt-c": "search::ToggleCaseSensitive",
"alt-e": "search::ToggleSelection",
"alt-x": "search::ToggleRegex",
"alt-w": "search::ToggleWholeWord",
},
"alt-w": "search::ToggleWholeWord"
}
},
{
"context": "Workspace",
@@ -107,8 +105,8 @@
"ctrl-e": "file_finder::Toggle",
"ctrl-k": "git_panel::ToggleFocus", // bug: This should also focus commit editor
"ctrl-shift-n": "file_finder::Toggle",
"ctrl-alt-n": "file_finder::Toggle",
"ctrl-n": "project_symbols::Toggle",
"ctrl-alt-n": "file_finder::Toggle",
"ctrl-shift-a": "command_palette::Toggle",
"shift shift": "command_palette::Toggle",
"ctrl-alt-shift-n": "project_symbols::Toggle",
@@ -116,8 +114,8 @@
"alt-1": "project_panel::ToggleFocus",
"alt-5": "debug_panel::ToggleFocus",
"alt-6": "diagnostics::Deploy",
"alt-7": "outline_panel::ToggleFocus",
},
"alt-7": "outline_panel::ToggleFocus"
}
},
{
"context": "Pane", // this is to override the default Pane mappings to switch tabs
@@ -131,15 +129,15 @@
"alt-7": "outline_panel::ToggleFocus",
"alt-8": null, // Services (bottom dock)
"alt-9": null, // Git History (bottom dock)
"alt-0": "git_panel::ToggleFocus",
},
"alt-0": "git_panel::ToggleFocus"
}
},
{
"context": "Workspace || Editor",
"bindings": {
"alt-f12": "terminal_panel::Toggle",
"ctrl-shift-k": "git::Push",
},
"ctrl-shift-k": "git::Push"
}
},
{
"context": "Pane",
@@ -147,8 +145,8 @@
"ctrl-alt-left": "pane::GoBack",
"ctrl-alt-right": "pane::GoForward",
"alt-left": "pane::ActivatePreviousItem",
"alt-right": "pane::ActivateNextItem",
},
"alt-right": "pane::ActivateNextItem"
}
},
{
"context": "ProjectPanel",
@@ -158,8 +156,8 @@
"backspace": ["project_panel::Trash", { "skip_prompt": false }],
"delete": ["project_panel::Trash", { "skip_prompt": false }],
"shift-delete": ["project_panel::Delete", { "skip_prompt": false }],
"shift-f6": "project_panel::Rename",
},
"shift-f6": "project_panel::Rename"
}
},
{
"context": "Terminal",
@@ -169,8 +167,8 @@
"ctrl-up": "terminal::ScrollLineUp",
"ctrl-down": "terminal::ScrollLineDown",
"shift-pageup": "terminal::ScrollPageUp",
"shift-pagedown": "terminal::ScrollPageDown",
},
"shift-pagedown": "terminal::ScrollPageDown"
}
},
{ "context": "GitPanel", "bindings": { "alt-0": "workspace::CloseActiveDock" } },
{ "context": "ProjectPanel", "bindings": { "alt-1": "workspace::CloseActiveDock" } },
@@ -181,7 +179,7 @@
"context": "Dock || Workspace || OutlinePanel || ProjectPanel || CollabPanel || (Editor && mode == auto_height)",
"bindings": {
"escape": "editor::ToggleFocus",
"shift-escape": "workspace::CloseActiveDock",
},
},
"shift-escape": "workspace::CloseActiveDock"
}
}
]


@@ -22,8 +22,8 @@
"ctrl-^": ["workspace::MoveItemToPane", { "destination": 5 }],
"ctrl-&": ["workspace::MoveItemToPane", { "destination": 6 }],
"ctrl-*": ["workspace::MoveItemToPane", { "destination": 7 }],
"ctrl-(": ["workspace::MoveItemToPane", { "destination": 8 }],
},
"ctrl-(": ["workspace::MoveItemToPane", { "destination": 8 }]
}
},
{
"context": "Editor",
@@ -55,20 +55,20 @@
"alt-right": "editor::MoveToNextSubwordEnd",
"alt-left": "editor::MoveToPreviousSubwordStart",
"alt-shift-right": "editor::SelectToNextSubwordEnd",
"alt-shift-left": "editor::SelectToPreviousSubwordStart",
},
"alt-shift-left": "editor::SelectToPreviousSubwordStart"
}
},
{
"context": "Editor && mode == full",
"bindings": {
"ctrl-r": "outline::Toggle",
},
"ctrl-r": "outline::Toggle"
}
},
{
"context": "Editor && !agent_diff",
"bindings": {
"ctrl-k ctrl-z": "git::Restore",
},
"ctrl-k ctrl-z": "git::Restore"
}
},
{
"context": "Pane",
@@ -83,15 +83,15 @@
"alt-6": ["pane::ActivateItem", 5],
"alt-7": ["pane::ActivateItem", 6],
"alt-8": ["pane::ActivateItem", 7],
"alt-9": "pane::ActivateLastItem",
},
"alt-9": "pane::ActivateLastItem"
}
},
{
"context": "Workspace",
"bindings": {
"ctrl-k ctrl-b": "workspace::ToggleLeftDock",
// "ctrl-0": "project_panel::ToggleFocus", // normally resets zoom
"shift-ctrl-r": "project_symbols::Toggle",
},
},
"shift-ctrl-r": "project_symbols::Toggle"
}
}
]


@@ -4,16 +4,16 @@
"bindings": {
"ctrl-alt-cmd-l": "workspace::Reload",
"cmd-k cmd-p": "workspace::ActivatePreviousPane",
"cmd-k cmd-n": "workspace::ActivateNextPane",
},
"cmd-k cmd-n": "workspace::ActivateNextPane"
}
},
{
"context": "Editor",
"bindings": {
"cmd-shift-backspace": "editor::DeleteToBeginningOfLine",
"cmd-k cmd-u": "editor::ConvertToUpperCase",
"cmd-k cmd-l": "editor::ConvertToLowerCase",
},
"cmd-k cmd-l": "editor::ConvertToLowerCase"
}
},
{
"context": "Editor && mode == full",
@@ -33,8 +33,8 @@
"ctrl-cmd-down": "editor::MoveLineDown",
"cmd-\\": "workspace::ToggleLeftDock",
"ctrl-shift-m": "markdown::OpenPreviewToTheSide",
"cmd-r": "outline::Toggle",
},
"cmd-r": "outline::Toggle"
}
},
{
"context": "BufferSearchBar",
@@ -42,8 +42,8 @@
"cmd-g": ["editor::SelectNext", { "replace_newest": true }],
"cmd-shift-g": ["editor::SelectPrevious", { "replace_newest": true }],
"cmd-f3": "search::SelectNextMatch",
"cmd-shift-f3": "search::SelectPreviousMatch",
},
"cmd-shift-f3": "search::SelectPreviousMatch"
}
},
{
"context": "Workspace",
@@ -51,8 +51,8 @@
"cmd-\\": "workspace::ToggleLeftDock",
"cmd-k cmd-b": "workspace::ToggleLeftDock",
"cmd-t": "file_finder::Toggle",
"cmd-shift-r": "project_symbols::Toggle",
},
"cmd-shift-r": "project_symbols::Toggle"
}
},
{
"context": "Pane",
@@ -67,8 +67,8 @@
"cmd-6": ["pane::ActivateItem", 5],
"cmd-7": ["pane::ActivateItem", 6],
"cmd-8": ["pane::ActivateItem", 7],
"cmd-9": "pane::ActivateLastItem",
},
"cmd-9": "pane::ActivateLastItem"
}
},
{
"context": "ProjectPanel",
@@ -77,8 +77,8 @@
"backspace": ["project_panel::Trash", { "skip_prompt": false }],
"cmd-x": "project_panel::Cut",
"cmd-c": "project_panel::Copy",
"cmd-v": "project_panel::Paste",
},
"cmd-v": "project_panel::Paste"
}
},
{
"context": "ProjectPanel && not_editing",
@@ -92,7 +92,7 @@
"d": "project_panel::Duplicate",
"home": "menu::SelectFirst",
"end": "menu::SelectLast",
"shift-a": "project_panel::NewDirectory",
},
},
"shift-a": "project_panel::NewDirectory"
}
}
]


@@ -8,8 +8,8 @@
"cmd-shift-i": "agent::ToggleFocus",
"cmd-l": "agent::ToggleFocus",
"cmd-shift-l": "agent::ToggleFocus",
"cmd-shift-j": "agent::OpenSettings",
},
"cmd-shift-j": "agent::OpenSettings"
}
},
{
"context": "Editor && mode == full",
@@ -20,19 +20,19 @@
"cmd-shift-l": "agent::AddSelectionToThread", // In cursor uses "Ask" mode
"cmd-l": "agent::AddSelectionToThread", // In cursor uses "Agent" mode
"cmd-k": "assistant::InlineAssist",
"cmd-shift-k": "assistant::InsertIntoEditor",
},
"cmd-shift-k": "assistant::InsertIntoEditor"
}
},
{
"context": "InlineAssistant > Editor",
"context": "InlineAssistEditor",
"use_key_equivalents": true,
"bindings": {
"cmd-shift-backspace": "editor::Cancel",
"cmd-enter": "menu::Confirm",
"cmd-enter": "menu::Confirm"
// "alt-enter": // Quick Question
// "cmd-shift-enter": // Full File Context
// "cmd-shift-k": // Toggle input focus (editor <> inline assist)
},
}
},
{
"context": "AgentPanel || ContextEditor || (MessageEditor > Editor)",
@@ -48,7 +48,7 @@
"cmd-shift-backspace": "editor::Cancel",
"cmd-r": "agent::NewThread",
"cmd-shift-v": "editor::Paste",
"cmd-shift-k": "assistant::InsertIntoEditor",
"cmd-shift-k": "assistant::InsertIntoEditor"
// "escape": "agent::ToggleFocus"
///// Enable when Zed supports multiple thread tabs
// "cmd-t": // new thread tab
@@ -57,29 +57,28 @@
///// Enable if Zed adds support for keyboard navigation of thread elements
// "tab": // cycle to next message
// "shift-tab": // cycle to previous message
},
}
},
{
"context": "Editor && editor_agent_diff",
"use_key_equivalents": true,
"bindings": {
"cmd-enter": "agent::KeepAll",
"cmd-backspace": "agent::RejectAll",
},
"cmd-backspace": "agent::RejectAll"
}
},
{
"context": "Editor && mode == full && edit_prediction",
"use_key_equivalents": true,
"bindings": {
"cmd-right": "editor::AcceptNextWordEditPrediction",
"cmd-down": "editor::AcceptNextLineEditPrediction",
},
"cmd-right": "editor::AcceptPartialEditPrediction"
}
},
{
"context": "Terminal",
"use_key_equivalents": true,
"bindings": {
"cmd-k": "assistant::InlineAssist",
},
},
"cmd-k": "assistant::InlineAssist"
}
}
]


@@ -6,8 +6,8 @@
{
"context": "!GitPanel",
"bindings": {
"ctrl-g": "menu::Cancel",
},
"ctrl-g": "menu::Cancel"
}
},
{
// Workaround to avoid falling back to default bindings.
@@ -15,8 +15,8 @@
// NOTE: must be declared before the `Editor` override.
"context": "Editor",
"bindings": {
"ctrl-g": null, // currently activates `go_to_line::Toggle` when there is nothing to cancel
},
"ctrl-g": null // currently activates `go_to_line::Toggle` when there is nothing to cancel
}
},
{
"context": "Editor",
@@ -79,8 +79,8 @@
"ctrl-s": "buffer_search::Deploy", // isearch-forward
"ctrl-r": "buffer_search::Deploy", // isearch-backward
"alt-^": "editor::JoinLines", // join-line
"alt-q": "editor::Rewrap", // fill-paragraph
},
"alt-q": "editor::Rewrap" // fill-paragraph
}
},
{
"context": "Editor && selection_mode", // region selection
@@ -116,22 +116,22 @@
"alt->": "editor::SelectToEnd",
"ctrl-home": "editor::SelectToBeginning",
"ctrl-end": "editor::SelectToEnd",
"ctrl-g": "editor::Cancel",
},
"ctrl-g": "editor::Cancel"
}
},
{
"context": "Editor && (showing_code_actions || showing_completions)",
"bindings": {
"ctrl-p": "editor::ContextMenuPrevious",
"ctrl-n": "editor::ContextMenuNext",
},
"ctrl-n": "editor::ContextMenuNext"
}
},
{
"context": "Editor && showing_signature_help && !showing_completions",
"bindings": {
"ctrl-p": "editor::SignatureHelpPrevious",
"ctrl-n": "editor::SignatureHelpNext",
},
"ctrl-n": "editor::SignatureHelpNext"
}
},
// Example setting for using emacs-style tab
// (i.e. indent the current line / selection or perform symbol completion depending on context)
@@ -161,8 +161,8 @@
"ctrl-x ctrl-f": "file_finder::Toggle", // find-file
"ctrl-x ctrl-s": "workspace::Save", // save-buffer
"ctrl-x ctrl-w": "workspace::SaveAs", // write-file
"ctrl-x s": "workspace::SaveAll", // save-some-buffers
},
"ctrl-x s": "workspace::SaveAll" // save-some-buffers
}
},
{
// Workaround to enable using native emacs from the Zed terminal.
@@ -182,22 +182,22 @@
"ctrl-x ctrl-f": null, // find-file
"ctrl-x ctrl-s": null, // save-buffer
"ctrl-x ctrl-w": null, // write-file
"ctrl-x s": null, // save-some-buffers
},
"ctrl-x s": null // save-some-buffers
}
},
{
"context": "BufferSearchBar > Editor",
"bindings": {
"ctrl-s": "search::SelectNextMatch",
"ctrl-r": "search::SelectPreviousMatch",
"ctrl-g": "buffer_search::Dismiss",
},
"ctrl-g": "buffer_search::Dismiss"
}
},
{
"context": "Pane",
"bindings": {
"ctrl-alt-left": "pane::GoBack",
"ctrl-alt-right": "pane::GoForward",
},
},
"ctrl-alt-right": "pane::GoForward"
}
}
]


@@ -13,8 +13,8 @@
"shift-f8": "debugger::StepOut",
"f9": "debugger::Continue",
"shift-f9": "debugger::Start",
"alt-shift-f9": "debugger::Start",
},
"alt-shift-f9": "debugger::Start"
}
},
{
"context": "Editor",
@@ -60,30 +60,28 @@
"cmd-shift-end": "editor::SelectToEnd",
"ctrl-f8": "editor::ToggleBreakpoint",
"ctrl-shift-f8": "editor::EditLogBreakpoint",
"cmd-shift-u": "editor::ToggleCase",
},
"cmd-shift-u": "editor::ToggleCase"
}
},
{
"context": "Editor && mode == full",
"bindings": {
"cmd-f12": "outline::Toggle",
"cmd-r": ["buffer_search::Deploy", { "replace_enabled": true }],
"cmd-l": "go_to_line::Toggle",
"cmd-e": "file_finder::Toggle",
"cmd-shift-o": "file_finder::Toggle",
"cmd-shift-n": "file_finder::Toggle",
"cmd-l": "go_to_line::Toggle",
"alt-enter": "editor::ToggleCodeActions",
"ctrl-space": "editor::ShowCompletions",
"cmd-j": "editor::Hover",
"cmd-p": "editor::ShowSignatureHelp",
"cmd-\\": "assistant::InlineAssist",
},
"cmd-\\": "assistant::InlineAssist"
}
},
{
"context": "BufferSearchBar",
"bindings": {
"shift-enter": "search::SelectPreviousMatch",
},
"shift-enter": "search::SelectPreviousMatch"
}
},
{
"context": "BufferSearchBar || ProjectSearchBar",
@@ -95,8 +93,8 @@
"ctrl-alt-c": "search::ToggleCaseSensitive",
"ctrl-alt-e": "search::ToggleSelection",
"ctrl-alt-w": "search::ToggleWholeWord",
"ctrl-alt-x": "search::ToggleRegex",
},
"ctrl-alt-x": "search::ToggleRegex"
}
},
{
"context": "Workspace",
@@ -118,8 +116,8 @@
"cmd-1": "project_panel::ToggleFocus",
"cmd-5": "debug_panel::ToggleFocus",
"cmd-6": "diagnostics::Deploy",
"cmd-7": "outline_panel::ToggleFocus",
},
"cmd-7": "outline_panel::ToggleFocus"
}
},
{
"context": "Pane", // this is to override the default Pane mappings to switch tabs
@@ -133,15 +131,15 @@
"cmd-7": "outline_panel::ToggleFocus",
"cmd-8": null, // Services (bottom dock)
"cmd-9": null, // Git History (bottom dock)
"cmd-0": "git_panel::ToggleFocus",
},
"cmd-0": "git_panel::ToggleFocus"
}
},
{
"context": "Workspace || Editor",
"bindings": {
"alt-f12": "terminal_panel::Toggle",
"cmd-shift-k": "git::Push",
},
"cmd-shift-k": "git::Push"
}
},
{
"context": "Pane",
@@ -149,8 +147,8 @@
"cmd-alt-left": "pane::GoBack",
"cmd-alt-right": "pane::GoForward",
"alt-left": "pane::ActivatePreviousItem",
"alt-right": "pane::ActivateNextItem",
},
"alt-right": "pane::ActivateNextItem"
}
},
{
"context": "ProjectPanel",
@@ -161,8 +159,8 @@
"backspace": ["project_panel::Trash", { "skip_prompt": false }],
"delete": ["project_panel::Trash", { "skip_prompt": false }],
"shift-delete": ["project_panel::Delete", { "skip_prompt": false }],
"shift-f6": "project_panel::Rename",
},
"shift-f6": "project_panel::Rename"
}
},
{
"context": "Terminal",
@@ -172,8 +170,8 @@
"cmd-up": "terminal::ScrollLineUp",
"cmd-down": "terminal::ScrollLineDown",
"shift-pageup": "terminal::ScrollPageUp",
"shift-pagedown": "terminal::ScrollPageDown",
},
"shift-pagedown": "terminal::ScrollPageDown"
}
},
{ "context": "GitPanel", "bindings": { "cmd-0": "workspace::CloseActiveDock" } },
{ "context": "ProjectPanel", "bindings": { "cmd-1": "workspace::CloseActiveDock" } },
@@ -184,7 +182,7 @@
"context": "Dock || Workspace || OutlinePanel || ProjectPanel || CollabPanel || (Editor && mode == auto_height)",
"bindings": {
"escape": "editor::ToggleFocus",
"shift-escape": "workspace::CloseActiveDock",
},
},
"shift-escape": "workspace::CloseActiveDock"
}
}
]


@@ -22,8 +22,8 @@
"ctrl-^": ["workspace::MoveItemToPane", { "destination": 5 }],
"ctrl-&": ["workspace::MoveItemToPane", { "destination": 6 }],
"ctrl-*": ["workspace::MoveItemToPane", { "destination": 7 }],
"ctrl-(": ["workspace::MoveItemToPane", { "destination": 8 }],
},
"ctrl-(": ["workspace::MoveItemToPane", { "destination": 8 }]
}
},
{
"context": "Editor",
@@ -57,20 +57,20 @@
"ctrl-right": "editor::MoveToNextSubwordEnd",
"ctrl-left": "editor::MoveToPreviousSubwordStart",
"ctrl-shift-right": "editor::SelectToNextSubwordEnd",
"ctrl-shift-left": "editor::SelectToPreviousSubwordStart",
},
"ctrl-shift-left": "editor::SelectToPreviousSubwordStart"
}
},
{
"context": "Editor && mode == full",
"bindings": {
"cmd-r": "outline::Toggle",
},
"cmd-r": "outline::Toggle"
}
},
{
"context": "Editor && !agent_diff",
"bindings": {
"cmd-k cmd-z": "git::Restore",
},
"cmd-k cmd-z": "git::Restore"
}
},
{
"context": "Pane",
@@ -85,8 +85,8 @@
"cmd-6": ["pane::ActivateItem", 5],
"cmd-7": ["pane::ActivateItem", 6],
"cmd-8": ["pane::ActivateItem", 7],
"cmd-9": "pane::ActivateLastItem",
},
"cmd-9": "pane::ActivateLastItem"
}
},
{
"context": "Workspace",
@@ -95,7 +95,7 @@
"cmd-t": "file_finder::Toggle",
"shift-cmd-r": "project_symbols::Toggle",
// Currently busted: https://github.com/zed-industries/feedback/issues/898
"ctrl-0": "project_panel::ToggleFocus",
},
},
"ctrl-0": "project_panel::ToggleFocus"
}
}
]


@@ -2,8 +2,8 @@
{
"bindings": {
"cmd-shift-o": "projects::OpenRecent",
"cmd-alt-tab": "project_panel::ToggleFocus",
},
"cmd-alt-tab": "project_panel::ToggleFocus"
}
},
{
"context": "Editor && mode == full",
@@ -15,8 +15,8 @@
"cmd-enter": "editor::NewlineBelow",
"cmd-alt-enter": "editor::NewlineAbove",
"cmd-shift-l": "editor::SelectLine",
"cmd-shift-t": "outline::Toggle",
},
"cmd-shift-t": "outline::Toggle"
}
},
{
"context": "Editor",
@@ -41,30 +41,30 @@
"ctrl-u": "editor::ConvertToUpperCase",
"ctrl-shift-u": "editor::ConvertToLowerCase",
"ctrl-alt-u": "editor::ConvertToUpperCamelCase",
"ctrl-_": "editor::ConvertToSnakeCase",
},
"ctrl-_": "editor::ConvertToSnakeCase"
}
},
{
"context": "BufferSearchBar",
"bindings": {
"ctrl-s": "search::SelectNextMatch",
"ctrl-shift-s": "search::SelectPreviousMatch",
},
"ctrl-shift-s": "search::SelectPreviousMatch"
}
},
{
"context": "Workspace",
"bindings": {
"cmd-alt-ctrl-d": "workspace::ToggleLeftDock",
"cmd-t": "file_finder::Toggle",
"cmd-shift-t": "project_symbols::Toggle",
},
"cmd-shift-t": "project_symbols::Toggle"
}
},
{
"context": "Pane",
"bindings": {
"alt-cmd-r": "search::ToggleRegex",
"ctrl-tab": "project_panel::ToggleFocus",
},
"ctrl-tab": "project_panel::ToggleFocus"
}
},
{
"context": "ProjectPanel",
@@ -75,11 +75,11 @@
"return": "project_panel::Rename",
"cmd-c": "project_panel::Copy",
"cmd-v": "project_panel::Paste",
"cmd-alt-c": "project_panel::CopyPath",
},
"cmd-alt-c": "project_panel::CopyPath"
}
},
{
"context": "Dock",
"bindings": {},
},
"bindings": {}
}
]


@@ -27,7 +27,7 @@
"backspace": "editor::Backspace",
"delete": "editor::Delete",
"left": "editor::MoveLeft",
"right": "editor::MoveRight",
},
},
"right": "editor::MoveRight"
}
}
]


@@ -180,9 +180,10 @@
"ctrl-w g shift-d": "editor::GoToTypeDefinitionSplit",
"ctrl-w space": "editor::OpenExcerptsSplit",
"ctrl-w g space": "editor::OpenExcerptsSplit",
"ctrl-6": "pane::AlternateFile",
"ctrl-^": "pane::AlternateFile",
".": "vim::Repeat",
},
".": "vim::Repeat"
}
},
{
"context": "vim_mode == normal || vim_mode == visual || vim_mode == operator",
@@ -223,8 +224,8 @@
"] r": "vim::GoToNextReference",
// tree-sitter related commands
"[ x": "vim::SelectLargerSyntaxNode",
"] x": "vim::SelectSmallerSyntaxNode",
},
"] x": "vim::SelectSmallerSyntaxNode"
}
},
{
"context": "vim_mode == normal",
@@ -261,16 +262,16 @@
"[ d": "editor::GoToPreviousDiagnostic",
"] c": "editor::GoToHunk",
"[ c": "editor::GoToPreviousHunk",
"g c": "vim::PushToggleComments",
},
"g c": "vim::PushToggleComments"
}
},
{
"context": "VimControl && VimCount",
"bindings": {
"0": ["vim::Number", 0],
":": "vim::CountCommand",
"%": "vim::GoToPercentage",
},
"%": "vim::GoToPercentage"
}
},
{
"context": "vim_mode == visual",
@@ -322,8 +323,8 @@
"g w": "vim::Rewrap",
"g ?": "vim::ConvertToRot13",
// "g ?": "vim::ConvertToRot47",
"\"": "vim::PushRegister",
},
"\"": "vim::PushRegister"
}
},
{
"context": "vim_mode == helix_select",
@@ -343,8 +344,8 @@
"ctrl-pageup": "pane::ActivatePreviousItem",
"ctrl-pagedown": "pane::ActivateNextItem",
".": "vim::Repeat",
"alt-.": "vim::RepeatFind",
},
"alt-.": "vim::RepeatFind"
}
},
{
"context": "vim_mode == insert",
@@ -374,8 +375,8 @@
"ctrl-r": "vim::PushRegister",
"insert": "vim::ToggleReplace",
"ctrl-o": "vim::TemporaryNormal",
"ctrl-s": "editor::ShowSignatureHelp",
},
"ctrl-s": "editor::ShowSignatureHelp"
}
},
{
"context": "showing_completions",
@@ -383,8 +384,8 @@
"ctrl-d": "vim::ScrollDown",
"ctrl-u": "vim::ScrollUp",
"ctrl-e": "vim::LineDown",
"ctrl-y": "vim::LineUp",
},
"ctrl-y": "vim::LineUp"
}
},
{
"context": "(vim_mode == normal || vim_mode == helix_normal) && !menu",
@@ -409,31 +410,23 @@
"shift-s": "vim::SubstituteLine",
"\"": "vim::PushRegister",
"ctrl-pagedown": "pane::ActivateNextItem",
"ctrl-pageup": "pane::ActivatePreviousItem",
},
"ctrl-pageup": "pane::ActivatePreviousItem"
}
},
{
"context": "VimControl && vim_mode == helix_normal && !menu",
"bindings": {
"j": ["vim::Down", { "display_lines": true }],
"down": ["vim::Down", { "display_lines": true }],
"k": ["vim::Up", { "display_lines": true }],
"up": ["vim::Up", { "display_lines": true }],
"g j": "vim::Down",
"g down": "vim::Down",
"g k": "vim::Up",
"g up": "vim::Up",
"escape": "vim::SwitchToHelixNormalMode",
"i": "vim::HelixInsert",
"a": "vim::HelixAppend",
"ctrl-[": "editor::Cancel",
},
"ctrl-[": "editor::Cancel"
}
},
{
"context": "vim_mode == helix_select && !menu",
"bindings": {
"escape": "vim::SwitchToHelixNormalMode",
},
"escape": "vim::SwitchToHelixNormalMode"
}
},
{
"context": "(vim_mode == helix_normal || vim_mode == helix_select) && !menu",
@@ -453,9 +446,9 @@
"shift-r": "editor::Paste",
"`": "vim::ConvertToLowerCase",
"alt-`": "vim::ConvertToUpperCase",
"insert": "vim::InsertBefore", // not a helix default
"insert": "vim::InsertBefore",
"shift-u": "editor::Redo",
"ctrl-r": "vim::Redo", // not a helix default
"ctrl-r": "vim::Redo",
"y": "vim::HelixYank",
"p": "vim::HelixPaste",
"shift-p": ["vim::HelixPaste", { "before": true }],
@@ -484,7 +477,6 @@
"alt-p": "editor::SelectPreviousSyntaxNode",
"alt-n": "editor::SelectNextSyntaxNode",
// Search
"n": "vim::HelixSelectNext",
"shift-n": "vim::HelixSelectPrevious",
@@ -492,32 +484,27 @@
"g e": "vim::EndOfDocument",
"g h": "vim::StartOfLine",
"g l": "vim::EndOfLine",
"g s": "vim::FirstNonWhitespace",
"g s": "vim::FirstNonWhitespace", // "g s" default behavior is "space s"
"g t": "vim::WindowTop",
"g c": "vim::WindowMiddle",
"g b": "vim::WindowBottom",
"g r": "editor::FindAllReferences",
"g r": "editor::FindAllReferences", // zed specific
"g n": "pane::ActivateNextItem",
"shift-l": "pane::ActivateNextItem", // not a helix default
"shift-l": "pane::ActivateNextItem",
"g p": "pane::ActivatePreviousItem",
"shift-h": "pane::ActivatePreviousItem", // not a helix default
"g .": "vim::HelixGotoLastModification",
"g o": "editor::ToggleSelectedDiffHunks", // Zed specific
"g shift-o": "git::ToggleStaged", // Zed specific
"g shift-r": "git::Restore", // Zed specific
"g u": "git::StageAndNext", // Zed specific
"g shift-u": "git::UnstageAndNext", // Zed specific
"shift-h": "pane::ActivatePreviousItem",
"g .": "vim::HelixGotoLastModification", // go to last modification
// Window mode
"space w v": "pane::SplitDown",
"space w s": "pane::SplitRight",
"space w h": "workspace::ActivatePaneLeft",
"space w j": "workspace::ActivatePaneDown",
"space w k": "workspace::ActivatePaneUp",
"space w l": "workspace::ActivatePaneRight",
"space w k": "workspace::ActivatePaneUp",
"space w j": "workspace::ActivatePaneDown",
"space w q": "pane::CloseActiveItem",
"space w r": "pane::SplitRight", // not a helix default
"space w d": "pane::SplitDown", // not a helix default
"space w s": "pane::SplitRight",
"space w r": "pane::SplitRight",
"space w v": "pane::SplitDown",
"space w d": "pane::SplitDown",
// Space mode
"space f": "file_finder::Toggle",
@@ -531,7 +518,6 @@
"space c": "editor::ToggleComments",
"space p": "editor::Paste",
"space y": "editor::Copy",
"space /": "pane::DeploySearch",
// Other
":": "command_palette::Toggle",
@@ -539,22 +525,24 @@
"]": ["vim::PushHelixNext", { "around": true }],
"[": ["vim::PushHelixPrevious", { "around": true }],
"g q": "vim::PushRewrap",
"g w": "vim::PushRewrap", // not a helix default & clashes with helix `goto_word`
},
"g w": "vim::PushRewrap"
// "tab": "pane::ActivateNextItem",
// "shift-tab": "pane::ActivatePrevItem",
}
},
{
"context": "vim_mode == insert && !(showing_code_actions || showing_completions)",
"bindings": {
"ctrl-p": "editor::ShowWordCompletions",
"ctrl-n": "editor::ShowWordCompletions",
},
"ctrl-n": "editor::ShowWordCompletions"
}
},
{
"context": "(vim_mode == insert || vim_mode == normal) && showing_signature_help && !showing_completions",
"bindings": {
"ctrl-p": "editor::SignatureHelpPrevious",
"ctrl-n": "editor::SignatureHelpNext",
},
"ctrl-n": "editor::SignatureHelpNext"
}
},
{
"context": "vim_mode == replace",
@@ -570,8 +558,8 @@
"backspace": "vim::UndoReplace",
"tab": "vim::Tab",
"enter": "vim::Enter",
"insert": "vim::InsertBefore",
},
"insert": "vim::InsertBefore"
}
},
{
"context": "vim_mode == waiting",
@@ -583,14 +571,14 @@
"escape": "vim::ClearOperators",
"ctrl-k": ["vim::PushDigraph", {}],
"ctrl-v": ["vim::PushLiteral", {}],
"ctrl-q": ["vim::PushLiteral", {}],
},
"ctrl-q": ["vim::PushLiteral", {}]
}
},
{
"context": "Editor && vim_mode == waiting && (vim_operator == ys || vim_operator == cs)",
"bindings": {
"escape": "vim::SwitchToNormalMode",
},
"escape": "vim::SwitchToNormalMode"
}
},
{
"context": "vim_mode == operator",
@@ -598,8 +586,8 @@
"ctrl-c": "vim::ClearOperators",
"ctrl-[": "vim::ClearOperators",
"escape": "vim::ClearOperators",
"g c": "vim::Comment",
},
"g c": "vim::Comment"
}
},
{
"context": "vim_operator == a || vim_operator == i || vim_operator == cs || vim_operator == helix_next || vim_operator == helix_previous",
@@ -636,14 +624,14 @@
"shift-i": ["vim::IndentObj", { "include_below": true }],
"f": "vim::Method",
"c": "vim::Class",
"e": "vim::EntireFile",
},
"e": "vim::EntireFile"
}
},
{
"context": "vim_operator == helix_m",
"bindings": {
"m": "vim::Matching",
},
"m": "vim::Matching"
}
},
{
"context": "vim_operator == helix_next",
@@ -660,8 +648,8 @@
"x": "editor::SelectSmallerSyntaxNode",
"d": "editor::GoToDiagnostic",
"c": "editor::GoToHunk",
"space": "vim::InsertEmptyLineBelow",
},
"space": "vim::InsertEmptyLineBelow"
}
},
{
"context": "vim_operator == helix_previous",
@@ -678,8 +666,8 @@
"x": "editor::SelectLargerSyntaxNode",
"d": "editor::GoToPreviousDiagnostic",
"c": "editor::GoToPreviousHunk",
"space": "vim::InsertEmptyLineAbove",
},
"space": "vim::InsertEmptyLineAbove"
}
},
{
"context": "vim_operator == c",
@@ -687,8 +675,8 @@
"c": "vim::CurrentLine",
"x": "vim::Exchange",
"d": "editor::Rename", // zed specific
"s": ["vim::PushChangeSurrounds", {}],
},
"s": ["vim::PushChangeSurrounds", {}]
}
},
{
"context": "vim_operator == d",
@@ -700,36 +688,36 @@
"shift-o": "git::ToggleStaged",
"p": "git::Restore", // "d p"
"u": "git::StageAndNext", // "d u"
"shift-u": "git::UnstageAndNext", // "d shift-u"
},
"shift-u": "git::UnstageAndNext" // "d shift-u"
}
},
{
"context": "vim_operator == gu",
"bindings": {
"g u": "vim::CurrentLine",
"u": "vim::CurrentLine",
},
"u": "vim::CurrentLine"
}
},
{
"context": "vim_operator == gU",
"bindings": {
"g shift-u": "vim::CurrentLine",
"shift-u": "vim::CurrentLine",
},
"shift-u": "vim::CurrentLine"
}
},
{
"context": "vim_operator == g~",
"bindings": {
"g ~": "vim::CurrentLine",
"~": "vim::CurrentLine",
},
"~": "vim::CurrentLine"
}
},
{
"context": "vim_operator == g?",
"bindings": {
"g ?": "vim::CurrentLine",
"?": "vim::CurrentLine",
},
"?": "vim::CurrentLine"
}
},
{
"context": "vim_operator == gq",
@@ -737,66 +725,66 @@
"g q": "vim::CurrentLine",
"q": "vim::CurrentLine",
"g w": "vim::CurrentLine",
"w": "vim::CurrentLine",
},
"w": "vim::CurrentLine"
}
},
{
"context": "vim_operator == y",
"bindings": {
"y": "vim::CurrentLine",
"v": "vim::PushForcedMotion",
"s": ["vim::PushAddSurrounds", {}],
},
"s": ["vim::PushAddSurrounds", {}]
}
},
{
"context": "vim_operator == ys",
"bindings": {
"s": "vim::CurrentLine",
},
"s": "vim::CurrentLine"
}
},
{
"context": "vim_operator == >",
"bindings": {
">": "vim::CurrentLine",
},
">": "vim::CurrentLine"
}
},
{
"context": "vim_operator == <",
"bindings": {
"<": "vim::CurrentLine",
},
"<": "vim::CurrentLine"
}
},
{
"context": "vim_operator == eq",
"bindings": {
"=": "vim::CurrentLine",
},
"=": "vim::CurrentLine"
}
},
{
"context": "vim_operator == sh",
"bindings": {
"!": "vim::CurrentLine",
},
"!": "vim::CurrentLine"
}
},
{
"context": "vim_operator == gc",
"bindings": {
"c": "vim::CurrentLine",
},
"c": "vim::CurrentLine"
}
},
{
"context": "vim_operator == gR",
"bindings": {
"r": "vim::CurrentLine",
"shift-r": "vim::CurrentLine",
},
"shift-r": "vim::CurrentLine"
}
},
{
"context": "vim_operator == cx",
"bindings": {
"x": "vim::CurrentLine",
"c": "vim::ClearExchange",
},
"c": "vim::ClearExchange"
}
},
{
"context": "vim_mode == literal",
@@ -838,15 +826,15 @@
"tab": ["vim::Literal", ["tab", "\u0009"]],
// zed extensions:
"backspace": ["vim::Literal", ["backspace", "\u0008"]],
"delete": ["vim::Literal", ["delete", "\u007F"]],
},
"delete": ["vim::Literal", ["delete", "\u007F"]]
}
},
{
"context": "BufferSearchBar && !in_replace",
"bindings": {
"enter": "vim::SearchSubmit",
"escape": "buffer_search::Dismiss",
},
"escape": "buffer_search::Dismiss"
}
},
{
"context": "VimControl && !menu || !Editor && !Terminal",
@@ -907,19 +895,15 @@
"ctrl-w ctrl-n": "workspace::NewFileSplitHorizontal",
"ctrl-w n": "workspace::NewFileSplitHorizontal",
"g t": "vim::GoToTab",
"g shift-t": "vim::GoToPreviousTab",
},
"g shift-t": "vim::GoToPreviousTab"
}
},
{
"context": "!Editor && !Terminal",
"bindings": {
":": "command_palette::Toggle",
"g /": "pane::DeploySearch",
"] b": "pane::ActivateNextItem",
"[ b": "pane::ActivatePreviousItem",
"] shift-b": "pane::ActivateLastItem",
"[ shift-b": ["pane::ActivateItem", 0],
},
"g /": "pane::DeploySearch"
}
},
{
// netrw compatibility
@@ -969,45 +953,17 @@
"6": ["vim::Number", 6],
"7": ["vim::Number", 7],
"8": ["vim::Number", 8],
"9": ["vim::Number", 9],
},
"9": ["vim::Number", 9]
}
},
{
"context": "OutlinePanel && not_editing",
"bindings": {
"h": "outline_panel::CollapseSelectedEntry",
"j": "vim::MenuSelectNext",
"k": "vim::MenuSelectPrevious",
"down": "vim::MenuSelectNext",
"up": "vim::MenuSelectPrevious",
"l": "outline_panel::ExpandSelectedEntry",
"j": "menu::SelectNext",
"k": "menu::SelectPrevious",
"shift-g": "menu::SelectLast",
"g g": "menu::SelectFirst",
"-": "outline_panel::SelectParent",
"enter": "editor::ToggleFocus",
"/": "menu::Cancel",
"ctrl-u": "outline_panel::ScrollUp",
"ctrl-d": "outline_panel::ScrollDown",
"z t": "outline_panel::ScrollCursorTop",
"z z": "outline_panel::ScrollCursorCenter",
"z b": "outline_panel::ScrollCursorBottom",
"0": ["vim::Number", 0],
"1": ["vim::Number", 1],
"2": ["vim::Number", 2],
"3": ["vim::Number", 3],
"4": ["vim::Number", 4],
"5": ["vim::Number", 5],
"6": ["vim::Number", 6],
"7": ["vim::Number", 7],
"8": ["vim::Number", 8],
"9": ["vim::Number", 9],
},
},
{
"context": "OutlinePanel && editing",
"bindings": {
"enter": "menu::Cancel",
},
"g g": "menu::SelectFirst"
}
},
{
"context": "GitPanel && ChangesList",
@@ -1022,8 +978,8 @@
"x": "git::ToggleStaged",
"shift-x": "git::StageAll",
"g x": "git::StageRange",
"shift-u": "git::UnstageAll",
},
"shift-u": "git::UnstageAll"
}
},
{
"context": "Editor && mode == auto_height && VimControl",
@@ -1034,8 +990,8 @@
"#": null,
"*": null,
"n": null,
"shift-n": null,
},
"shift-n": null
}
},
{
"context": "Picker > Editor",
@@ -1044,29 +1000,29 @@
"ctrl-u": "editor::DeleteToBeginningOfLine",
"ctrl-w": "editor::DeleteToPreviousWordStart",
"ctrl-p": "menu::SelectPrevious",
"ctrl-n": "menu::SelectNext",
},
"ctrl-n": "menu::SelectNext"
}
},
{
"context": "GitCommit > Editor && VimControl && vim_mode == normal",
"bindings": {
"ctrl-c": "menu::Cancel",
"escape": "menu::Cancel",
},
"escape": "menu::Cancel"
}
},
{
"context": "Editor && edit_prediction",
"bindings": {
// This is identical to the binding in the base keymap, but the vim bindings above to
// "vim::Tab" shadow it, so it needs to be bound again.
"tab": "editor::AcceptEditPrediction",
},
"tab": "editor::AcceptEditPrediction"
}
},
{
"context": "MessageEditor > Editor && VimControl",
"bindings": {
"enter": "agent::Chat",
},
"enter": "agent::Chat"
}
},
{
"context": "os != macos && Editor && edit_prediction_conflict",
@@ -1074,8 +1030,8 @@
// alt-l is provided as an alternative to tab/alt-tab. and will be displayed in the UI. This
// is because alt-tab may not be available, as it is often used for window switching on Linux
// and Windows.
"alt-l": "editor::AcceptEditPrediction",
},
"alt-l": "editor::AcceptEditPrediction"
}
},
{
"context": "SettingsWindow > NavigationMenu && !search",
@@ -1085,16 +1041,7 @@
"k": "settings_editor::FocusPreviousNavEntry",
"j": "settings_editor::FocusNextNavEntry",
"g g": "settings_editor::FocusFirstNavEntry",
"shift-g": "settings_editor::FocusLastNavEntry",
},
},
{
"context": "MarkdownPreview",
"bindings": {
"ctrl-u": "markdown::ScrollPageUp",
"ctrl-d": "markdown::ScrollPageDown",
"ctrl-y": "markdown::ScrollUp",
"ctrl-e": "markdown::ScrollDown",
},
},
"shift-g": "settings_editor::FocusLastNavEntry"
}
}
]


@@ -14,6 +14,7 @@ The section you'll need to rewrite is marked with <rewrite_this></rewrite_this>
The context around the relevant section has been truncated (possibly in the middle of a line) for brevity.
{{/if}}
{{#if rewrite_section}}
And here's the section to rewrite based on that prompt again for reference:
<rewrite_this>
@@ -32,9 +33,10 @@ Below are the diagnostic errors visible to the user. If the user requests probl
{{/each}}
{{/if}}
{{/if}}
Only make changes that are necessary to fulfill the prompt, leave everything else as-is. All surrounding {{content_type}} will be preserved.
Start at the indentation level in the original file in the rewritten {{content_type}}.
IMPORTANT: You MUST use one of the provided tools to make the rewrite or to provide an explanation as to why the user's request cannot be fulfilled. You MUST NOT send back unstructured text. If you need to make a statement or ask a question you MUST use one of the tools to do so.
It is an error if you try to make a change that cannot be made simply by editing the rewrite_section.
You must use one of the provided tools to make the rewrite or to provide an explanation as to why the user's request cannot be fulfilled.

File diff suppressed because it is too large


@@ -8,7 +8,7 @@
"adapter": "Debugpy",
"program": "$ZED_FILE",
"request": "launch",
"cwd": "$ZED_WORKTREE_ROOT",
"cwd": "$ZED_WORKTREE_ROOT"
},
{
"label": "Debug active JavaScript file",
@@ -16,7 +16,7 @@
"program": "$ZED_FILE",
"request": "launch",
"cwd": "$ZED_WORKTREE_ROOT",
"type": "pwa-node",
"type": "pwa-node"
},
{
"label": "JavaScript debug terminal",
@@ -24,6 +24,6 @@
"request": "launch",
"cwd": "$ZED_WORKTREE_ROOT",
"console": "integratedTerminal",
"type": "pwa-node",
},
"type": "pwa-node"
}
]


@@ -3,5 +3,5 @@
// For a full list of overridable settings, and general information on settings,
// see the documentation: https://zed.dev/docs/configuring-zed#settings-files
{
"lsp": {},
"lsp": {}
}


@@ -47,8 +47,8 @@
// Whether to show the task line in the output of the spawned task, defaults to `true`.
"show_summary": true,
// Whether to show the command line in the output of the spawned task, defaults to `true`.
"show_command": true,
"show_command": true
// Represents the tags for inline runnable indicators, or spawning multiple tasks at once.
// "tags": []
},
}
]


@@ -12,6 +12,6 @@
"theme": {
"mode": "system",
"light": "One Light",
"dark": "One Dark",
},
"dark": "One Dark"
}
}


@@ -45,7 +45,6 @@
"tab.inactive_background": "#1f2127ff",
"tab.active_background": "#0d1016ff",
"search.match_background": "#5ac2fe66",
"search.active_match_background": "#ea570166",
"panel.background": "#1f2127ff",
"panel.focused_border": "#5ac1feff",
"pane.focused_border": null,
@@ -437,7 +436,6 @@
"tab.inactive_background": "#ececedff",
"tab.active_background": "#fcfcfcff",
"search.match_background": "#3b9ee566",
"search.active_match_background": "#f88b3666",
"panel.background": "#ececedff",
"panel.focused_border": "#3b9ee5ff",
"pane.focused_border": null,
@@ -829,7 +827,6 @@
"tab.inactive_background": "#353944ff",
"tab.active_background": "#242835ff",
"search.match_background": "#73cffe66",
"search.active_match_background": "#fd722b66",
"panel.background": "#353944ff",
"panel.focused_border": null,
"pane.focused_border": null,


@@ -46,7 +46,6 @@
"tab.inactive_background": "#3a3735ff",
"tab.active_background": "#282828ff",
"search.match_background": "#83a59866",
"search.active_match_background": "#c09f3f66",
"panel.background": "#3a3735ff",
"panel.focused_border": "#83a598ff",
"pane.focused_border": null,
@@ -71,33 +70,33 @@
"editor.document_highlight.read_background": "#83a5981a",
"editor.document_highlight.write_background": "#92847466",
"terminal.background": "#282828ff",
"terminal.foreground": "#ebdbb2ff",
"terminal.foreground": "#fbf1c7ff",
"terminal.bright_foreground": "#fbf1c7ff",
"terminal.dim_foreground": "#766b5dff",
"terminal.dim_foreground": "#282828ff",
"terminal.ansi.black": "#282828ff",
"terminal.ansi.bright_black": "#928374ff",
"terminal.ansi.bright_black": "#73675eff",
"terminal.ansi.dim_black": "#fbf1c7ff",
"terminal.ansi.red": "#cc241dff",
"terminal.ansi.bright_red": "#fb4934ff",
"terminal.ansi.dim_red": "#8e1814ff",
"terminal.ansi.green": "#98971aff",
"terminal.ansi.bright_green": "#b8bb26ff",
"terminal.ansi.dim_green": "#6a6912ff",
"terminal.ansi.yellow": "#d79921ff",
"terminal.ansi.bright_yellow": "#fabd2fff",
"terminal.ansi.dim_yellow": "#966a17ff",
"terminal.ansi.blue": "#458588ff",
"terminal.ansi.bright_blue": "#83a598ff",
"terminal.ansi.dim_blue": "#305d5fff",
"terminal.ansi.magenta": "#b16286ff",
"terminal.ansi.bright_magenta": "#d3869bff",
"terminal.ansi.dim_magenta": "#7c455eff",
"terminal.ansi.cyan": "#689d6aff",
"terminal.ansi.bright_cyan": "#8ec07cff",
"terminal.ansi.dim_cyan": "#496e4aff",
"terminal.ansi.white": "#a89984ff",
"terminal.ansi.bright_white": "#fbf1c7ff",
"terminal.ansi.dim_white": "#766b5dff",
"terminal.ansi.red": "#fb4a35ff",
"terminal.ansi.bright_red": "#93201dff",
"terminal.ansi.dim_red": "#ffaa95ff",
"terminal.ansi.green": "#b7bb26ff",
"terminal.ansi.bright_green": "#605c1bff",
"terminal.ansi.dim_green": "#e0dc98ff",
"terminal.ansi.yellow": "#f9bd2fff",
"terminal.ansi.bright_yellow": "#91611bff",
"terminal.ansi.dim_yellow": "#fedc9bff",
"terminal.ansi.blue": "#83a598ff",
"terminal.ansi.bright_blue": "#414f4aff",
"terminal.ansi.dim_blue": "#c0d2cbff",
"terminal.ansi.magenta": "#d3869bff",
"terminal.ansi.bright_magenta": "#8e5868ff",
"terminal.ansi.dim_magenta": "#ff9ebbff",
"terminal.ansi.cyan": "#8ec07cff",
"terminal.ansi.bright_cyan": "#45603eff",
"terminal.ansi.dim_cyan": "#c7dfbdff",
"terminal.ansi.white": "#fbf1c7ff",
"terminal.ansi.bright_white": "#ffffffff",
"terminal.ansi.dim_white": "#b0a189ff",
"link_text.hover": "#83a598ff",
"version_control.added": "#b7bb26ff",
"version_control.modified": "#f9bd2fff",
@@ -453,7 +452,6 @@
"tab.inactive_background": "#393634ff",
"tab.active_background": "#1d2021ff",
"search.match_background": "#83a59866",
"search.active_match_background": "#c9653666",
"panel.background": "#393634ff",
"panel.focused_border": "#83a598ff",
"pane.focused_border": null,
@@ -478,33 +476,33 @@
"editor.document_highlight.read_background": "#83a5981a",
"editor.document_highlight.write_background": "#92847466",
"terminal.background": "#1d2021ff",
"terminal.foreground": "#ebdbb2ff",
"terminal.foreground": "#fbf1c7ff",
"terminal.bright_foreground": "#fbf1c7ff",
"terminal.dim_foreground": "#766b5dff",
"terminal.ansi.black": "#282828ff",
"terminal.ansi.bright_black": "#928374ff",
"terminal.dim_foreground": "#1d2021ff",
"terminal.ansi.black": "#1d2021ff",
"terminal.ansi.bright_black": "#73675eff",
"terminal.ansi.dim_black": "#fbf1c7ff",
"terminal.ansi.red": "#cc241dff",
"terminal.ansi.bright_red": "#fb4934ff",
"terminal.ansi.dim_red": "#8e1814ff",
"terminal.ansi.green": "#98971aff",
"terminal.ansi.bright_green": "#b8bb26ff",
"terminal.ansi.dim_green": "#6a6912ff",
"terminal.ansi.yellow": "#d79921ff",
"terminal.ansi.bright_yellow": "#fabd2fff",
"terminal.ansi.dim_yellow": "#966a17ff",
"terminal.ansi.blue": "#458588ff",
"terminal.ansi.bright_blue": "#83a598ff",
"terminal.ansi.dim_blue": "#305d5fff",
"terminal.ansi.magenta": "#b16286ff",
"terminal.ansi.bright_magenta": "#d3869bff",
"terminal.ansi.dim_magenta": "#7c455eff",
"terminal.ansi.cyan": "#689d6aff",
"terminal.ansi.bright_cyan": "#8ec07cff",
"terminal.ansi.dim_cyan": "#496e4aff",
"terminal.ansi.white": "#a89984ff",
"terminal.ansi.bright_white": "#fbf1c7ff",
"terminal.ansi.dim_white": "#766b5dff",
"terminal.ansi.red": "#fb4a35ff",
"terminal.ansi.bright_red": "#93201dff",
"terminal.ansi.dim_red": "#ffaa95ff",
"terminal.ansi.green": "#b7bb26ff",
"terminal.ansi.bright_green": "#605c1bff",
"terminal.ansi.dim_green": "#e0dc98ff",
"terminal.ansi.yellow": "#f9bd2fff",
"terminal.ansi.bright_yellow": "#91611bff",
"terminal.ansi.dim_yellow": "#fedc9bff",
"terminal.ansi.blue": "#83a598ff",
"terminal.ansi.bright_blue": "#414f4aff",
"terminal.ansi.dim_blue": "#c0d2cbff",
"terminal.ansi.magenta": "#d3869bff",
"terminal.ansi.bright_magenta": "#8e5868ff",
"terminal.ansi.dim_magenta": "#ff9ebbff",
"terminal.ansi.cyan": "#8ec07cff",
"terminal.ansi.bright_cyan": "#45603eff",
"terminal.ansi.dim_cyan": "#c7dfbdff",
"terminal.ansi.white": "#fbf1c7ff",
"terminal.ansi.bright_white": "#ffffffff",
"terminal.ansi.dim_white": "#b0a189ff",
"link_text.hover": "#83a598ff",
"version_control.added": "#b7bb26ff",
"version_control.modified": "#f9bd2fff",
@@ -860,7 +858,6 @@
"tab.inactive_background": "#3b3735ff",
"tab.active_background": "#32302fff",
"search.match_background": "#83a59866",
"search.active_match_background": "#aea85166",
"panel.background": "#3b3735ff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -885,33 +882,33 @@
"editor.document_highlight.read_background": "#83a5981a",
"editor.document_highlight.write_background": "#92847466",
"terminal.background": "#32302fff",
"terminal.foreground": "#ebdbb2ff",
"terminal.foreground": "#fbf1c7ff",
"terminal.bright_foreground": "#fbf1c7ff",
"terminal.dim_foreground": "#766b5dff",
"terminal.ansi.black": "#282828ff",
"terminal.ansi.bright_black": "#928374ff",
"terminal.dim_foreground": "#32302fff",
"terminal.ansi.black": "#32302fff",
"terminal.ansi.bright_black": "#73675eff",
"terminal.ansi.dim_black": "#fbf1c7ff",
"terminal.ansi.red": "#cc241dff",
"terminal.ansi.bright_red": "#fb4934ff",
"terminal.ansi.dim_red": "#8e1814ff",
"terminal.ansi.green": "#98971aff",
"terminal.ansi.bright_green": "#b8bb26ff",
"terminal.ansi.dim_green": "#6a6912ff",
"terminal.ansi.yellow": "#d79921ff",
"terminal.ansi.bright_yellow": "#fabd2fff",
"terminal.ansi.dim_yellow": "#966a17ff",
"terminal.ansi.blue": "#458588ff",
"terminal.ansi.bright_blue": "#83a598ff",
"terminal.ansi.dim_blue": "#305d5fff",
"terminal.ansi.magenta": "#b16286ff",
"terminal.ansi.bright_magenta": "#d3869bff",
"terminal.ansi.dim_magenta": "#7c455eff",
"terminal.ansi.cyan": "#689d6aff",
"terminal.ansi.bright_cyan": "#8ec07cff",
"terminal.ansi.dim_cyan": "#496e4aff",
"terminal.ansi.white": "#a89984ff",
"terminal.ansi.bright_white": "#fbf1c7ff",
"terminal.ansi.dim_white": "#766b5dff",
"terminal.ansi.red": "#fb4a35ff",
"terminal.ansi.bright_red": "#93201dff",
"terminal.ansi.dim_red": "#ffaa95ff",
"terminal.ansi.green": "#b7bb26ff",
"terminal.ansi.bright_green": "#605c1bff",
"terminal.ansi.dim_green": "#e0dc98ff",
"terminal.ansi.yellow": "#f9bd2fff",
"terminal.ansi.bright_yellow": "#91611bff",
"terminal.ansi.dim_yellow": "#fedc9bff",
"terminal.ansi.blue": "#83a598ff",
"terminal.ansi.bright_blue": "#414f4aff",
"terminal.ansi.dim_blue": "#c0d2cbff",
"terminal.ansi.magenta": "#d3869bff",
"terminal.ansi.bright_magenta": "#8e5868ff",
"terminal.ansi.dim_magenta": "#ff9ebbff",
"terminal.ansi.cyan": "#8ec07cff",
"terminal.ansi.bright_cyan": "#45603eff",
"terminal.ansi.dim_cyan": "#c7dfbdff",
"terminal.ansi.white": "#fbf1c7ff",
"terminal.ansi.bright_white": "#ffffffff",
"terminal.ansi.dim_white": "#b0a189ff",
"link_text.hover": "#83a598ff",
"version_control.added": "#b7bb26ff",
"version_control.modified": "#f9bd2fff",
@@ -1267,7 +1264,6 @@
"tab.inactive_background": "#ecddb4ff",
"tab.active_background": "#fbf1c7ff",
"search.match_background": "#0b667866",
"search.active_match_background": "#ba2d1166",
"panel.background": "#ecddb4ff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -1295,30 +1291,30 @@
"terminal.foreground": "#282828ff",
"terminal.bright_foreground": "#282828ff",
"terminal.dim_foreground": "#fbf1c7ff",
"terminal.ansi.black": "#fbf1c7ff",
"terminal.ansi.bright_black": "#928374ff",
"terminal.ansi.dim_black": "#7c6f64ff",
"terminal.ansi.red": "#cc241dff",
"terminal.ansi.bright_red": "#9d0006ff",
"terminal.ansi.dim_red": "#c31c16ff",
"terminal.ansi.green": "#98971aff",
"terminal.ansi.bright_green": "#79740eff",
"terminal.ansi.dim_green": "#929015ff",
"terminal.ansi.yellow": "#d79921ff",
"terminal.ansi.bright_yellow": "#b57614ff",
"terminal.ansi.dim_yellow": "#cf8e1aff",
"terminal.ansi.blue": "#458588ff",
"terminal.ansi.bright_blue": "#076678ff",
"terminal.ansi.dim_blue": "#356f77ff",
"terminal.ansi.magenta": "#b16286ff",
"terminal.ansi.bright_magenta": "#8f3f71ff",
"terminal.ansi.dim_magenta": "#a85580ff",
"terminal.ansi.cyan": "#689d6aff",
"terminal.ansi.bright_cyan": "#427b58ff",
"terminal.ansi.dim_cyan": "#5f9166ff",
"terminal.ansi.white": "#7c6f64ff",
"terminal.ansi.bright_white": "#282828ff",
"terminal.ansi.dim_white": "#282828ff",
"terminal.ansi.black": "#282828ff",
"terminal.ansi.bright_black": "#0b6678ff",
"terminal.ansi.dim_black": "#5f5650ff",
"terminal.ansi.red": "#9d0308ff",
"terminal.ansi.bright_red": "#db8b7aff",
"terminal.ansi.dim_red": "#4e1207ff",
"terminal.ansi.green": "#797410ff",
"terminal.ansi.bright_green": "#bfb787ff",
"terminal.ansi.dim_green": "#3e3a11ff",
"terminal.ansi.yellow": "#b57615ff",
"terminal.ansi.bright_yellow": "#e2b88bff",
"terminal.ansi.dim_yellow": "#5c3a12ff",
"terminal.ansi.blue": "#0b6678ff",
"terminal.ansi.bright_blue": "#8fb0baff",
"terminal.ansi.dim_blue": "#14333bff",
"terminal.ansi.magenta": "#8f3e71ff",
"terminal.ansi.bright_magenta": "#c76da0ff",
"terminal.ansi.dim_magenta": "#5c2848ff",
"terminal.ansi.cyan": "#437b59ff",
"terminal.ansi.bright_cyan": "#9fbca8ff",
"terminal.ansi.dim_cyan": "#253e2eff",
"terminal.ansi.white": "#fbf1c7ff",
"terminal.ansi.bright_white": "#ffffffff",
"terminal.ansi.dim_white": "#b0a189ff",
"link_text.hover": "#0b6678ff",
"version_control.added": "#797410ff",
"version_control.modified": "#b57615ff",
@@ -1674,7 +1670,6 @@
"tab.inactive_background": "#ecddb5ff",
"tab.active_background": "#f9f5d7ff",
"search.match_background": "#0b667866",
"search.active_match_background": "#dc351466",
"panel.background": "#ecddb5ff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -1702,30 +1697,30 @@
"terminal.foreground": "#282828ff",
"terminal.bright_foreground": "#282828ff",
"terminal.dim_foreground": "#f9f5d7ff",
"terminal.ansi.black": "#fbf1c7ff",
"terminal.ansi.bright_black": "#928374ff",
"terminal.ansi.dim_black": "#7c6f64ff",
"terminal.ansi.red": "#cc241dff",
"terminal.ansi.bright_red": "#9d0006ff",
"terminal.ansi.dim_red": "#c31c16ff",
"terminal.ansi.green": "#98971aff",
"terminal.ansi.bright_green": "#79740eff",
"terminal.ansi.dim_green": "#929015ff",
"terminal.ansi.yellow": "#d79921ff",
"terminal.ansi.bright_yellow": "#b57614ff",
"terminal.ansi.dim_yellow": "#cf8e1aff",
"terminal.ansi.blue": "#458588ff",
"terminal.ansi.bright_blue": "#076678ff",
"terminal.ansi.dim_blue": "#356f77ff",
"terminal.ansi.magenta": "#b16286ff",
"terminal.ansi.bright_magenta": "#8f3f71ff",
"terminal.ansi.dim_magenta": "#a85580ff",
"terminal.ansi.cyan": "#689d6aff",
"terminal.ansi.bright_cyan": "#427b58ff",
"terminal.ansi.dim_cyan": "#5f9166ff",
"terminal.ansi.white": "#7c6f64ff",
"terminal.ansi.bright_white": "#282828ff",
"terminal.ansi.dim_white": "#282828ff",
"terminal.ansi.black": "#282828ff",
"terminal.ansi.bright_black": "#73675eff",
"terminal.ansi.dim_black": "#f9f5d7ff",
"terminal.ansi.red": "#9d0308ff",
"terminal.ansi.bright_red": "#db8b7aff",
"terminal.ansi.dim_red": "#4e1207ff",
"terminal.ansi.green": "#797410ff",
"terminal.ansi.bright_green": "#bfb787ff",
"terminal.ansi.dim_green": "#3e3a11ff",
"terminal.ansi.yellow": "#b57615ff",
"terminal.ansi.bright_yellow": "#e2b88bff",
"terminal.ansi.dim_yellow": "#5c3a12ff",
"terminal.ansi.blue": "#0b6678ff",
"terminal.ansi.bright_blue": "#8fb0baff",
"terminal.ansi.dim_blue": "#14333bff",
"terminal.ansi.magenta": "#8f3e71ff",
"terminal.ansi.bright_magenta": "#c76da0ff",
"terminal.ansi.dim_magenta": "#5c2848ff",
"terminal.ansi.cyan": "#437b59ff",
"terminal.ansi.bright_cyan": "#9fbca8ff",
"terminal.ansi.dim_cyan": "#253e2eff",
"terminal.ansi.white": "#f9f5d7ff",
"terminal.ansi.bright_white": "#ffffffff",
"terminal.ansi.dim_white": "#b0a189ff",
"link_text.hover": "#0b6678ff",
"version_control.added": "#797410ff",
"version_control.modified": "#b57615ff",
@@ -2081,7 +2076,6 @@
"tab.inactive_background": "#ecdcb3ff",
"tab.active_background": "#f2e5bcff",
"search.match_background": "#0b667866",
"search.active_match_background": "#d7331466",
"panel.background": "#ecdcb3ff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -2109,30 +2103,30 @@
"terminal.foreground": "#282828ff",
"terminal.bright_foreground": "#282828ff",
"terminal.dim_foreground": "#f2e5bcff",
"terminal.ansi.black": "#fbf1c7ff",
"terminal.ansi.bright_black": "#928374ff",
"terminal.ansi.dim_black": "#7c6f64ff",
"terminal.ansi.red": "#cc241dff",
"terminal.ansi.bright_red": "#9d0006ff",
"terminal.ansi.dim_red": "#c31c16ff",
"terminal.ansi.green": "#98971aff",
"terminal.ansi.bright_green": "#79740eff",
"terminal.ansi.dim_green": "#929015ff",
"terminal.ansi.yellow": "#d79921ff",
"terminal.ansi.bright_yellow": "#b57614ff",
"terminal.ansi.dim_yellow": "#cf8e1aff",
"terminal.ansi.blue": "#458588ff",
"terminal.ansi.bright_blue": "#076678ff",
"terminal.ansi.dim_blue": "#356f77ff",
"terminal.ansi.magenta": "#b16286ff",
"terminal.ansi.bright_magenta": "#8f3f71ff",
"terminal.ansi.dim_magenta": "#a85580ff",
"terminal.ansi.cyan": "#689d6aff",
"terminal.ansi.bright_cyan": "#427b58ff",
"terminal.ansi.dim_cyan": "#5f9166ff",
"terminal.ansi.white": "#7c6f64ff",
"terminal.ansi.bright_white": "#282828ff",
"terminal.ansi.dim_white": "#282828ff",
"terminal.ansi.black": "#282828ff",
"terminal.ansi.bright_black": "#73675eff",
"terminal.ansi.dim_black": "#f2e5bcff",
"terminal.ansi.red": "#9d0308ff",
"terminal.ansi.bright_red": "#db8b7aff",
"terminal.ansi.dim_red": "#4e1207ff",
"terminal.ansi.green": "#797410ff",
"terminal.ansi.bright_green": "#bfb787ff",
"terminal.ansi.dim_green": "#3e3a11ff",
"terminal.ansi.yellow": "#b57615ff",
"terminal.ansi.bright_yellow": "#e2b88bff",
"terminal.ansi.dim_yellow": "#5c3a12ff",
"terminal.ansi.blue": "#0b6678ff",
"terminal.ansi.bright_blue": "#8fb0baff",
"terminal.ansi.dim_blue": "#14333bff",
"terminal.ansi.magenta": "#8f3e71ff",
"terminal.ansi.bright_magenta": "#c76da0ff",
"terminal.ansi.dim_magenta": "#5c2848ff",
"terminal.ansi.cyan": "#437b59ff",
"terminal.ansi.bright_cyan": "#9fbca8ff",
"terminal.ansi.dim_cyan": "#253e2eff",
"terminal.ansi.white": "#f2e5bcff",
"terminal.ansi.bright_white": "#ffffffff",
"terminal.ansi.dim_white": "#b0a189ff",
"link_text.hover": "#0b6678ff",
"version_control.added": "#797410ff",
"version_control.modified": "#b57615ff",

View File

@@ -45,7 +45,6 @@
"tab.inactive_background": "#2f343eff",
"tab.active_background": "#282c33ff",
"search.match_background": "#74ade866",
"search.active_match_background": "#e8af7466",
"panel.background": "#2f343eff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -68,34 +67,34 @@
"editor.active_wrap_guide": "#c8ccd41a",
"editor.document_highlight.read_background": "#74ade81a",
"editor.document_highlight.write_background": "#555a6366",
"terminal.background": "#282c34ff",
"terminal.foreground": "#abb2bfff",
"terminal.background": "#282c33ff",
"terminal.foreground": "#dce0e5ff",
"terminal.bright_foreground": "#dce0e5ff",
"terminal.dim_foreground": "#636d83ff",
"terminal.ansi.black": "#282c34ff",
"terminal.ansi.bright_black": "#636d83ff",
"terminal.ansi.dim_black": "#3b3f4aff",
"terminal.ansi.red": "#e06c75ff",
"terminal.ansi.bright_red": "#EA858Bff",
"terminal.ansi.dim_red": "#a7545aff",
"terminal.ansi.green": "#98c379ff",
"terminal.ansi.bright_green": "#AAD581ff",
"terminal.ansi.dim_green": "#6d8f59ff",
"terminal.ansi.yellow": "#e5c07bff",
"terminal.ansi.bright_yellow": "#FFD885ff",
"terminal.ansi.dim_yellow": "#b8985bff",
"terminal.ansi.blue": "#61afefff",
"terminal.ansi.bright_blue": "#85C1FFff",
"terminal.ansi.dim_blue": "#457cadff",
"terminal.ansi.magenta": "#c678ddff",
"terminal.ansi.bright_magenta": "#D398EBff",
"terminal.ansi.dim_magenta": "#8d54a0ff",
"terminal.ansi.cyan": "#56b6c2ff",
"terminal.ansi.bright_cyan": "#6ED5DEff",
"terminal.ansi.dim_cyan": "#3c818aff",
"terminal.ansi.white": "#abb2bfff",
"terminal.dim_foreground": "#282c33ff",
"terminal.ansi.black": "#282c33ff",
"terminal.ansi.bright_black": "#525561ff",
"terminal.ansi.dim_black": "#dce0e5ff",
"terminal.ansi.red": "#d07277ff",
"terminal.ansi.bright_red": "#673a3cff",
"terminal.ansi.dim_red": "#eab7b9ff",
"terminal.ansi.green": "#a1c181ff",
"terminal.ansi.bright_green": "#4d6140ff",
"terminal.ansi.dim_green": "#d1e0bfff",
"terminal.ansi.yellow": "#dec184ff",
"terminal.ansi.bright_yellow": "#e5c07bff",
"terminal.ansi.dim_yellow": "#f1dfc1ff",
"terminal.ansi.blue": "#74ade8ff",
"terminal.ansi.bright_blue": "#385378ff",
"terminal.ansi.dim_blue": "#bed5f4ff",
"terminal.ansi.magenta": "#b477cfff",
"terminal.ansi.bright_magenta": "#d6b4e4ff",
"terminal.ansi.dim_magenta": "#612a79ff",
"terminal.ansi.cyan": "#6eb4bfff",
"terminal.ansi.bright_cyan": "#3a565bff",
"terminal.ansi.dim_cyan": "#b9d9dfff",
"terminal.ansi.white": "#dce0e5ff",
"terminal.ansi.bright_white": "#fafafaff",
"terminal.ansi.dim_white": "#8f969bff",
"terminal.ansi.dim_white": "#575d65ff",
"link_text.hover": "#74ade8ff",
"version_control.added": "#27a657ff",
"version_control.modified": "#d3b020ff",
@@ -449,7 +448,6 @@
"tab.inactive_background": "#ebebecff",
"tab.active_background": "#fafafaff",
"search.match_background": "#5c79e266",
"search.active_match_background": "#d0a92366",
"panel.background": "#ebebecff",
"panel.focused_border": null,
"pane.focused_border": null,
@@ -473,33 +471,33 @@
"editor.document_highlight.read_background": "#5c78e225",
"editor.document_highlight.write_background": "#a3a3a466",
"terminal.background": "#fafafaff",
"terminal.foreground": "#2a2c33ff",
"terminal.bright_foreground": "#2a2c33ff",
"terminal.dim_foreground": "#bbbbbbff",
"terminal.ansi.black": "#000000ff",
"terminal.ansi.bright_black": "#000000ff",
"terminal.ansi.dim_black": "#555555ff",
"terminal.ansi.red": "#de3e35ff",
"terminal.ansi.bright_red": "#de3e35ff",
"terminal.ansi.dim_red": "#9c2b26ff",
"terminal.ansi.green": "#3f953aff",
"terminal.ansi.bright_green": "#3f953aff",
"terminal.ansi.dim_green": "#2b6927ff",
"terminal.ansi.yellow": "#d2b67cff",
"terminal.ansi.bright_yellow": "#d2b67cff",
"terminal.ansi.dim_yellow": "#a48c5aff",
"terminal.ansi.blue": "#2f5af3ff",
"terminal.ansi.bright_blue": "#2f5af3ff",
"terminal.ansi.dim_blue": "#2140abff",
"terminal.ansi.magenta": "#950095ff",
"terminal.ansi.bright_magenta": "#a00095ff",
"terminal.ansi.dim_magenta": "#6a006aff",
"terminal.ansi.cyan": "#3f953aff",
"terminal.ansi.bright_cyan": "#3f953aff",
"terminal.ansi.dim_cyan": "#2b6927ff",
"terminal.ansi.white": "#bbbbbbff",
"terminal.foreground": "#242529ff",
"terminal.bright_foreground": "#242529ff",
"terminal.dim_foreground": "#fafafaff",
"terminal.ansi.black": "#242529ff",
"terminal.ansi.bright_black": "#747579ff",
"terminal.ansi.dim_black": "#97979aff",
"terminal.ansi.red": "#d36151ff",
"terminal.ansi.bright_red": "#f0b0a4ff",
"terminal.ansi.dim_red": "#6f312aff",
"terminal.ansi.green": "#669f59ff",
"terminal.ansi.bright_green": "#b2cfa9ff",
"terminal.ansi.dim_green": "#354d2eff",
"terminal.ansi.yellow": "#dec184ff",
"terminal.ansi.bright_yellow": "#826221ff",
"terminal.ansi.dim_yellow": "#786441ff",
"terminal.ansi.blue": "#5c78e2ff",
"terminal.ansi.bright_blue": "#b5baf2ff",
"terminal.ansi.dim_blue": "#2d3d75ff",
"terminal.ansi.magenta": "#984ea5ff",
"terminal.ansi.bright_magenta": "#cea6d3ff",
"terminal.ansi.dim_magenta": "#4b2a50ff",
"terminal.ansi.cyan": "#3a82b7ff",
"terminal.ansi.bright_cyan": "#a3bedaff",
"terminal.ansi.dim_cyan": "#254058ff",
"terminal.ansi.white": "#fafafaff",
"terminal.ansi.bright_white": "#ffffffff",
"terminal.ansi.dim_white": "#888888ff",
"terminal.ansi.dim_white": "#aaaaaaff",
"link_text.hover": "#5c78e2ff",
"version_control.added": "#27a657ff",
"version_control.modified": "#d3b020ff",

View File

@@ -14,7 +14,6 @@ disallowed-methods = [
{ path = "std::process::Command::stderr", reason = "`smol::process::Command::from()` does not preserve stdio configuration", replacement = "smol::process::Command::stderr" },
{ path = "serde_json::from_reader", reason = "Parsing from a buffer is much slower than first reading the buffer into a Vec/String, see https://github.com/serde-rs/json/issues/160#issuecomment-253446892. Use `serde_json::from_slice` instead." },
{ path = "serde_json_lenient::from_reader", reason = "Parsing from a buffer is much slower than first reading the buffer into a Vec/String, see https://github.com/serde-rs/json/issues/160#issuecomment-253446892, Use `serde_json_lenient::from_slice` instead." },
{ path = "cocoa::foundation::NSString::alloc", reason = "NSString must be autoreleased to avoid memory leaks. Use `ns_string()` helper instead." },
]
disallowed-types = [
# { path = "std::collections::HashMap", replacement = "collections::HashMap" },

View File

@@ -46,7 +46,6 @@ url.workspace = true
util.workspace = true
uuid.workspace = true
watch.workspace = true
urlencoding.workspace = true
[dev-dependencies]
env_logger.workspace = true

File diff suppressed because it is too large

View File

@@ -20,7 +20,7 @@ impl UserMessageId {
}
pub trait AgentConnection {
fn telemetry_id(&self) -> SharedString;
fn telemetry_id(&self) -> &'static str;
fn new_thread(
self: Rc<Self>,
@@ -86,14 +86,6 @@ pub trait AgentConnection {
None
}
fn session_config_options(
&self,
_session_id: &acp::SessionId,
_cx: &App,
) -> Option<Rc<dyn AgentSessionConfigOptions>> {
None
}
fn into_any(self: Rc<Self>) -> Rc<dyn Any>;
}
@@ -133,26 +125,6 @@ pub trait AgentSessionModes {
fn set_mode(&self, mode: acp::SessionModeId, cx: &mut App) -> Task<Result<()>>;
}
pub trait AgentSessionConfigOptions {
/// Get all current config options with their state
fn config_options(&self) -> Vec<acp::SessionConfigOption>;
/// Set a config option value
/// Returns the full updated list of config options
fn set_config_option(
&self,
config_id: acp::SessionConfigId,
value: acp::SessionConfigValueId,
cx: &mut App,
) -> Task<Result<Vec<acp::SessionConfigOption>>>;
/// Whenever the config options are updated the receiver will be notified.
/// Optional for agents that don't update their config options dynamically.
fn watch(&self, _cx: &mut App) -> Option<watch::Receiver<()>> {
None
}
}
#[derive(Debug)]
pub struct AuthRequired {
pub description: Option<String>,
@@ -232,21 +204,12 @@ pub trait AgentModelSelector: 'static {
}
}
/// Icon for a model in the model selector.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum AgentModelIcon {
/// A built-in icon from Zed's icon set.
Named(IconName),
/// Path to a custom SVG icon file.
Path(SharedString),
}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct AgentModelInfo {
pub id: acp::ModelId,
pub name: SharedString,
pub description: Option<SharedString>,
pub icon: Option<AgentModelIcon>,
pub icon: Option<IconName>,
}
impl From<acp::ModelInfo> for AgentModelInfo {
@@ -276,10 +239,6 @@ impl AgentModelList {
AgentModelList::Grouped(groups) => groups.is_empty(),
}
}
pub fn is_flat(&self) -> bool {
matches!(self, AgentModelList::Flat(_))
}
}
#[cfg(feature = "test-support")]
@@ -363,8 +322,8 @@ mod test_support {
}
impl AgentConnection for StubAgentConnection {
fn telemetry_id(&self) -> SharedString {
"stub".into()
fn telemetry_id(&self) -> &'static str {
"stub"
}
fn auth_methods(&self) -> &[acp::AuthMethod] {
@@ -377,7 +336,7 @@ mod test_support {
_cwd: &Path,
cx: &mut gpui::App,
) -> Task<gpui::Result<Entity<AcpThread>>> {
let session_id = acp::SessionId::new(self.sessions.lock().len().to_string());
let session_id = acp::SessionId(self.sessions.lock().len().to_string().into());
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let thread = cx.new(|cx| {
AcpThread::new(
@@ -386,12 +345,12 @@ mod test_support {
project,
action_log,
session_id.clone(),
watch::Receiver::constant(
acp::PromptCapabilities::new()
.image(true)
.audio(true)
.embedded_context(true),
),
watch::Receiver::constant(acp::PromptCapabilities {
image: true,
audio: true,
embedded_context: true,
meta: None,
}),
cx,
)
});
@@ -430,7 +389,10 @@ mod test_support {
response_tx.replace(tx);
cx.spawn(async move |_| {
let stop_reason = rx.await?;
Ok(acp::PromptResponse::new(stop_reason))
Ok(acp::PromptResponse {
stop_reason,
meta: None,
})
})
} else {
for update in self.next_prompt_updates.lock().drain(..) {
@@ -438,7 +400,7 @@ mod test_support {
let update = update.clone();
let permission_request = if let acp::SessionUpdate::ToolCall(tool_call) =
&update
&& let Some(options) = self.permission_requests.get(&tool_call.tool_call_id)
&& let Some(options) = self.permission_requests.get(&tool_call.id)
{
Some((tool_call.clone(), options.clone()))
} else {
@@ -467,7 +429,10 @@ mod test_support {
cx.spawn(async move |_| {
try_join_all(tasks).await?;
Ok(acp::PromptResponse::new(acp::StopReason::EndTurn))
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
})
})
}
}
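Illustrative sketch (not part of this diff): two construction shapes that recur in the hunks above — a `telemetry_id` returning `&'static str`, and an `acp::PromptResponse` built as a struct literal with an explicit `meta: None`. Type and field names come from this file's hunks; the wrapper functions are assumptions for illustration only.

```rust
use agent_client_protocol as acp;

// The trait method hands back a borrowed static string rather than an owned SharedString.
fn stub_telemetry_id() -> &'static str {
    "stub"
}

// Prompt responses are constructed directly, with `meta` set to `None`.
fn end_turn_response() -> acp::PromptResponse {
    acp::PromptResponse {
        stop_reason: acp::StopReason::EndTurn,
        meta: None,
    }
}
```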

View File

@@ -1,10 +1,10 @@
use anyhow::Result;
use buffer_diff::BufferDiff;
use buffer_diff::{BufferDiff, BufferDiffSnapshot};
use editor::{MultiBuffer, PathKey, multibuffer_context_lines};
use gpui::{App, AppContext, AsyncApp, Context, Entity, Subscription, Task};
use itertools::Itertools;
use language::{
Anchor, Buffer, Capability, LanguageRegistry, OffsetRangeExt as _, Point, TextBuffer,
Anchor, Buffer, Capability, LanguageRegistry, OffsetRangeExt as _, Point, Rope, TextBuffer,
};
use std::{cmp::Reverse, ops::Range, path::Path, sync::Arc};
use util::ResultExt;
@@ -49,15 +49,15 @@ impl Diff {
.update(cx, |multibuffer, cx| {
let hunk_ranges = {
let buffer = buffer.read(cx);
diff.read(cx)
.snapshot(cx)
.hunks_intersecting_range(
Anchor::min_for_buffer(buffer.remote_id())
..Anchor::max_for_buffer(buffer.remote_id()),
buffer,
)
.map(|diff_hunk| diff_hunk.buffer_range.to_point(buffer))
.collect::<Vec<_>>()
let diff = diff.read(cx);
diff.hunks_intersecting_range(
Anchor::min_for_buffer(buffer.remote_id())
..Anchor::max_for_buffer(buffer.remote_id()),
buffer,
cx,
)
.map(|diff_hunk| diff_hunk.buffer_range.to_point(buffer))
.collect::<Vec<_>>()
};
multibuffer.set_excerpts_for_path(
@@ -86,9 +86,17 @@ impl Diff {
pub fn new(buffer: Entity<Buffer>, cx: &mut Context<Self>) -> Self {
let buffer_text_snapshot = buffer.read(cx).text_snapshot();
let base_text_snapshot = buffer.read(cx).snapshot();
let base_text = base_text_snapshot.text();
debug_assert_eq!(buffer_text_snapshot.text(), base_text);
let buffer_diff = cx.new(|cx| {
let mut diff = BufferDiff::new_unchanged(&buffer_text_snapshot, cx);
let secondary_diff = cx.new(|cx| BufferDiff::new_unchanged(&buffer_text_snapshot, cx));
let mut diff = BufferDiff::new_unchanged(&buffer_text_snapshot, base_text_snapshot);
let snapshot = diff.snapshot(cx);
let secondary_diff = cx.new(|cx| {
let mut diff = BufferDiff::new(&buffer_text_snapshot, cx);
diff.set_snapshot(snapshot, &buffer_text_snapshot, cx);
diff
});
diff.set_secondary_diff(secondary_diff);
diff
});
@@ -101,7 +109,7 @@ impl Diff {
Self::Pending(PendingDiff {
multibuffer,
base_text: Arc::from(buffer_text_snapshot.text().as_str()),
base_text: Arc::new(base_text),
_subscription: cx.observe(&buffer, |this, _, cx| {
if let Diff::Pending(diff) = this {
diff.update(cx);
@@ -158,7 +166,7 @@ impl Diff {
}
pub fn has_revealed_range(&self, cx: &App) -> bool {
self.multibuffer().read(cx).paths().next().is_some()
self.multibuffer().read(cx).excerpt_paths().next().is_some()
}
pub fn needs_update(&self, old_text: &str, new_text: &str, cx: &App) -> bool {
@@ -168,7 +176,7 @@ impl Diff {
new_buffer,
..
}) => {
base_text.as_ref() != old_text
base_text.as_str() != old_text
|| !new_buffer.read(cx).as_rope().chunks().equals_str(new_text)
}
Diff::Finalized(FinalizedDiff {
@@ -176,7 +184,7 @@ impl Diff {
new_buffer,
..
}) => {
base_text.as_ref() != old_text
base_text.as_str() != old_text
|| !new_buffer.read(cx).as_rope().chunks().equals_str(new_text)
}
}
@@ -185,7 +193,7 @@ impl Diff {
pub struct PendingDiff {
multibuffer: Entity<MultiBuffer>,
base_text: Arc<str>,
base_text: Arc<String>,
new_buffer: Entity<Buffer>,
diff: Entity<BufferDiff>,
revealed_ranges: Vec<Range<Anchor>>,
@@ -200,22 +208,21 @@ impl PendingDiff {
let base_text = self.base_text.clone();
self.update_diff = cx.spawn(async move |diff, cx| {
let text_snapshot = buffer.read_with(cx, |buffer, _| buffer.text_snapshot())?;
let language = buffer.read_with(cx, |buffer, _| buffer.language().cloned())?;
let update = buffer_diff
.update(cx, |diff, cx| {
diff.update_diff(
text_snapshot.clone(),
Some(base_text.clone()),
false,
language,
cx,
)
})?
.await;
let diff_snapshot = BufferDiff::update_diff(
buffer_diff.clone(),
text_snapshot.clone(),
Some(base_text),
false,
false,
None,
None,
cx,
)
.await?;
buffer_diff.update(cx, |diff, cx| {
diff.set_snapshot(update.clone(), &text_snapshot, cx);
diff.set_snapshot(diff_snapshot.clone(), &text_snapshot, cx);
diff.secondary_diff().unwrap().update(cx, |diff, cx| {
diff.set_snapshot(update, &text_snapshot, cx);
diff.set_snapshot(diff_snapshot.clone(), &text_snapshot, cx);
});
})?;
diff.update(cx, |diff, cx| {
@@ -312,14 +319,13 @@ impl PendingDiff {
fn excerpt_ranges(&self, cx: &App) -> Vec<Range<Point>> {
let buffer = self.new_buffer.read(cx);
let mut ranges = self
.diff
.read(cx)
.snapshot(cx)
let diff = self.diff.read(cx);
let mut ranges = diff
.hunks_intersecting_range(
Anchor::min_for_buffer(buffer.remote_id())
..Anchor::max_for_buffer(buffer.remote_id()),
buffer,
cx,
)
.map(|diff_hunk| diff_hunk.buffer_range.to_point(buffer))
.collect::<Vec<_>>();
@@ -351,47 +357,60 @@ impl PendingDiff {
pub struct FinalizedDiff {
path: String,
base_text: Arc<str>,
base_text: Arc<String>,
new_buffer: Entity<Buffer>,
multibuffer: Entity<MultiBuffer>,
_update_diff: Task<Result<()>>,
}
async fn build_buffer_diff(
old_text: Arc<str>,
old_text: Arc<String>,
buffer: &Entity<Buffer>,
language_registry: Option<Arc<LanguageRegistry>>,
cx: &mut AsyncApp,
) -> Result<Entity<BufferDiff>> {
let language = cx.update(|cx| buffer.read(cx).language().cloned())?;
let buffer = cx.update(|cx| buffer.read(cx).snapshot())?;
let secondary_diff = cx.new(|cx| BufferDiff::new(&buffer, cx))?;
let update = secondary_diff
.update(cx, |secondary_diff, cx| {
secondary_diff.update_diff(
buffer.text.clone(),
Some(old_text),
true,
language.clone(),
let old_text_rope = cx
.background_spawn({
let old_text = old_text.clone();
async move { Rope::from(old_text.as_str()) }
})
.await;
let base_buffer = cx
.update(|cx| {
Buffer::build_snapshot(
old_text_rope,
buffer.language().cloned(),
language_registry,
cx,
)
})?
.await;
secondary_diff.update(cx, |secondary_diff, cx| {
secondary_diff.language_changed(language.clone(), language_registry.clone(), cx);
secondary_diff.set_snapshot(update.clone(), &buffer, cx);
let diff_snapshot = cx
.update(|cx| {
BufferDiffSnapshot::new_with_base_buffer(
buffer.text.clone(),
Some(old_text),
base_buffer,
cx,
)
})?
.await;
let secondary_diff = cx.new(|cx| {
let mut diff = BufferDiff::new(&buffer, cx);
diff.set_snapshot(diff_snapshot.clone(), &buffer, cx);
diff
})?;
let diff = cx.new(|cx| BufferDiff::new(&buffer, cx))?;
diff.update(cx, |diff, cx| {
diff.language_changed(language, language_registry, cx);
diff.set_snapshot(update.clone(), &buffer, cx);
cx.new(|cx| {
let mut diff = BufferDiff::new(&buffer.text, cx);
diff.set_snapshot(diff_snapshot, &buffer, cx);
diff.set_secondary_diff(secondary_diff);
})?;
Ok(diff)
diff
})
}
#[cfg(test)]
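Illustrative sketch (not part of this diff): the background construction of the base-text `Rope` performed in `build_buffer_diff` above. The `AsyncApp` handle, `background_spawn`, and `Rope::from` usages mirror this file's hunks; the standalone function wrapper is an assumption for illustration.

```rust
use gpui::AsyncApp;
use language::Rope;
use std::sync::Arc;

// Convert the old file contents into a Rope off the main thread before diffing.
async fn base_text_rope(old_text: Arc<String>, cx: &mut AsyncApp) -> Rope {
    cx.background_spawn(async move { Rope::from(old_text.as_str()) })
        .await
}
```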

View File

@@ -4,14 +4,12 @@ use file_icons::FileIcons;
use prompt_store::{PromptId, UserPromptId};
use serde::{Deserialize, Serialize};
use std::{
borrow::Cow,
fmt,
ops::RangeInclusive,
path::{Path, PathBuf},
};
use ui::{App, IconName, SharedString};
use url::Url;
use urlencoding::decode;
use util::paths::PathStyle;
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize, Hash)]
@@ -76,13 +74,11 @@ impl MentionUri {
let path = url.path();
match url.scheme() {
"file" => {
let normalized = if path_style.is_windows() {
let path = if path_style.is_windows() {
path.trim_start_matches("/")
} else {
path
};
let decoded = decode(normalized).unwrap_or(Cow::Borrowed(normalized));
let path = decoded.as_ref();
if let Some(fragment) = url.fragment() {
let line_range = parse_line_range(fragment)?;
@@ -112,7 +108,7 @@ impl MentionUri {
if let Some(thread_id) = path.strip_prefix("/agent/thread/") {
let name = single_query_param(&url, "name")?.context("Missing thread name")?;
Ok(Self::Thread {
id: acp::SessionId::new(thread_id),
id: acp::SessionId(thread_id.into()),
name,
})
} else if let Some(path) = path.strip_prefix("/agent/text-thread/") {
@@ -410,19 +406,6 @@ mod tests {
assert_eq!(parsed.to_uri().to_string(), selection_uri);
}
#[test]
fn test_parse_file_uri_with_non_ascii() {
let file_uri = uri!("file:///path/to/%E6%97%A5%E6%9C%AC%E8%AA%9E.txt");
let parsed = MentionUri::parse(file_uri, PathStyle::local()).unwrap();
match &parsed {
MentionUri::File { abs_path } => {
assert_eq!(abs_path, Path::new(path!("/path/to/日本語.txt")));
}
_ => panic!("Expected File variant"),
}
assert_eq!(parsed.to_uri().to_string(), file_uri);
}
#[test]
fn test_parse_untitled_selection_uri() {
let selection_uri = uri!("zed:///agent/untitled-buffer#L1:10");
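Illustrative sketch (not part of this diff): the `urlencoding::decode` call appearing in the hunks above, which percent-decodes `file://` paths and falls back to the raw path when the input is not valid percent-encoding. The helper function is an assumption for illustration.

```rust
use std::borrow::Cow;
use urlencoding::decode;

// Decode "%E6%97%A5..."-style escapes; keep the original path on error.
fn decode_file_path(path: &str) -> String {
    decode(path).unwrap_or(Cow::Borrowed(path)).into_owned()
}
```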

View File

@@ -75,9 +75,11 @@ impl Terminal {
let exit_status = exit_status.map(portable_pty::ExitStatus::from);
acp::TerminalExitStatus::new()
.exit_code(exit_status.as_ref().map(|e| e.exit_code()))
.signal(exit_status.and_then(|e| e.signal().map(ToOwned::to_owned)))
acp::TerminalExitStatus {
exit_code: exit_status.as_ref().map(|e| e.exit_code()),
signal: exit_status.and_then(|e| e.signal().map(Into::into)),
meta: None,
}
})
.shared(),
}
@@ -101,19 +103,25 @@ impl Terminal {
if let Some(output) = self.output.as_ref() {
let exit_status = output.exit_status.map(portable_pty::ExitStatus::from);
acp::TerminalOutputResponse::new(
output.content.clone(),
output.original_content_len > output.content.len(),
)
.exit_status(
acp::TerminalExitStatus::new()
.exit_code(exit_status.as_ref().map(|e| e.exit_code()))
.signal(exit_status.and_then(|e| e.signal().map(ToOwned::to_owned))),
)
acp::TerminalOutputResponse {
output: output.content.clone(),
truncated: output.original_content_len > output.content.len(),
exit_status: Some(acp::TerminalExitStatus {
exit_code: exit_status.as_ref().map(|e| e.exit_code()),
signal: exit_status.and_then(|e| e.signal().map(Into::into)),
meta: None,
}),
meta: None,
}
} else {
let (current_content, original_len) = self.truncated_output(cx);
let truncated = current_content.len() < original_len;
acp::TerminalOutputResponse::new(current_content, truncated)
acp::TerminalOutputResponse {
truncated: current_content.len() < original_len,
output: current_content,
exit_status: None,
meta: None,
}
}
}
@@ -187,10 +195,8 @@ pub async fn create_terminal_entity(
Default::default()
};
// Disable pagers so agent/terminal commands don't hang behind interactive UIs
// Disables paging for `git` and hopefully other commands
env.insert("PAGER".into(), "".into());
// Override user core.pager (e.g. delta) which Git prefers over PAGER
env.insert("GIT_PAGER".into(), "cat".into());
env.extend(env_vars);
// Use remote shell or default system shell, as appropriate

View File

@@ -4,20 +4,22 @@ use std::{
fmt::Display,
rc::{Rc, Weak},
sync::Arc,
time::Duration,
};
use agent_client_protocol as acp;
use collections::HashMap;
use gpui::{
App, Empty, Entity, EventEmitter, FocusHandle, Focusable, Global, ListAlignment, ListState,
StyleRefinement, Subscription, Task, TextStyleRefinement, Window, actions, list, prelude::*,
App, ClipboardItem, Empty, Entity, EventEmitter, FocusHandle, Focusable, Global, ListAlignment,
ListState, StyleRefinement, Subscription, Task, TextStyleRefinement, Window, actions, list,
prelude::*,
};
use language::LanguageRegistry;
use markdown::{CodeBlockRenderer, Markdown, MarkdownElement, MarkdownStyle};
use project::Project;
use settings::Settings;
use theme::ThemeSettings;
use ui::{CopyButton, Tooltip, WithScrollbar, prelude::*};
use ui::{Tooltip, WithScrollbar, prelude::*};
use util::ResultExt as _;
use workspace::{
Item, ItemHandle, ToolbarItemEvent, ToolbarItemLocation, ToolbarItemView, Workspace,
@@ -369,13 +371,13 @@ impl AcpTools {
syntax: cx.theme().syntax().clone(),
code_block_overflow_x_scroll: true,
code_block: StyleRefinement {
text: TextStyleRefinement {
text: Some(TextStyleRefinement {
font_family: Some(
theme_settings.buffer_font.family.clone(),
),
font_size: Some((base_size * 0.8).into()),
..Default::default()
},
}),
..Default::default()
},
..Default::default()
@@ -542,11 +544,15 @@ impl Render for AcpTools {
pub struct AcpToolsToolbarItemView {
acp_tools: Option<Entity<AcpTools>>,
just_copied: bool,
}
impl AcpToolsToolbarItemView {
pub fn new() -> Self {
Self { acp_tools: None }
Self {
acp_tools: None,
just_copied: false,
}
}
}
@@ -566,14 +572,37 @@ impl Render for AcpToolsToolbarItemView {
h_flex()
.gap_2()
.child({
let message = acp_tools
.read(cx)
.serialize_observed_messages()
.unwrap_or_default();
let acp_tools = acp_tools.clone();
IconButton::new(
"copy_all_messages",
if self.just_copied {
IconName::Check
} else {
IconName::Copy
},
)
.icon_size(IconSize::Small)
.tooltip(Tooltip::text(if self.just_copied {
"Copied!"
} else {
"Copy All Messages"
}))
.disabled(!has_messages)
.on_click(cx.listener(move |this, _, _window, cx| {
if let Some(content) = acp_tools.read(cx).serialize_observed_messages() {
cx.write_to_clipboard(ClipboardItem::new_string(content));
CopyButton::new(message)
.tooltip_label("Copy All Messages")
.disabled(!has_messages)
this.just_copied = true;
cx.spawn(async move |this, cx| {
cx.background_executor().timer(Duration::from_secs(2)).await;
this.update(cx, |this, cx| {
this.just_copied = false;
cx.notify();
})
})
.detach();
}
}))
})
.child(
IconButton::new("clear_messages", IconName::Trash)
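Illustrative sketch (not part of this diff): the transient "Copied!" feedback that appears in the hunks above, where a `just_copied` flag flips on click and a spawned timer resets it after two seconds. The GPUI calls mirror the listener body in this hunk; the free-standing function wrapper and the extra `cx.notify()` after setting the flag are assumptions for illustration.

```rust
use gpui::Context;
use std::time::Duration;

fn flash_copied_state(
    this: &mut AcpToolsToolbarItemView,
    cx: &mut Context<AcpToolsToolbarItemView>,
) {
    this.just_copied = true;
    cx.notify();
    cx.spawn(async move |this, cx| {
        // Revert the icon and tooltip once the delay elapses.
        cx.background_executor().timer(Duration::from_secs(2)).await;
        this.update(cx, |this, cx| {
            this.just_copied = false;
            cx.notify();
        })
    })
    .detach();
}
```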

View File

@@ -6,7 +6,7 @@ use futures::{FutureExt, StreamExt, channel::mpsc};
use gpui::{
App, AppContext, AsyncApp, Context, Entity, SharedString, Subscription, Task, WeakEntity,
};
use language::{Anchor, Buffer, BufferEvent, Point, ToPoint};
use language::{Anchor, Buffer, BufferEvent, DiskState, Point, ToPoint};
use project::{Project, ProjectItem, lsp_store::OpenLspBufferHandle};
use std::{cmp, ops::Range, sync::Arc};
use text::{Edit, Patch, Rope};
@@ -150,7 +150,7 @@ impl ActionLog {
if buffer
.read(cx)
.file()
.is_some_and(|file| file.disk_state().is_deleted())
.is_some_and(|file| file.disk_state() == DiskState::Deleted)
{
// If the buffer had been edited by a tool, but it got
// deleted externally, we want to stop tracking it.
@@ -162,7 +162,7 @@ impl ActionLog {
if buffer
.read(cx)
.file()
.is_some_and(|file| !file.disk_state().is_deleted())
.is_some_and(|file| file.disk_state() != DiskState::Deleted)
{
// If the buffer had been deleted by a tool, but it got
// resurrected externally, we want to clear the edits we
@@ -262,7 +262,7 @@ impl ActionLog {
);
}
(Arc::from(base_text.to_string().as_str()), base_text)
(Arc::new(base_text.to_string()), base_text)
}
});
@@ -302,7 +302,7 @@ impl ActionLog {
.context("buffer not tracked")?;
let old_unreviewed_edits = tracked_buffer.unreviewed_edits.clone();
let agent_diff_base = tracked_buffer.diff_base.clone();
let git_diff_base = git_diff.read(cx).base_text(cx).as_rope().clone();
let git_diff_base = git_diff.read(cx).base_text().as_rope().clone();
let buffer_text = tracked_buffer.snapshot.as_rope().clone();
anyhow::Ok(cx.background_spawn(async move {
let mut old_unreviewed_edits = old_unreviewed_edits.into_iter().peekable();
@@ -352,7 +352,7 @@ impl ActionLog {
}
(
Arc::from(new_agent_diff_base.to_string().as_str()),
Arc::new(new_agent_diff_base.to_string()),
new_agent_diff_base,
)
}))
@@ -374,11 +374,11 @@ impl ActionLog {
this: &WeakEntity<ActionLog>,
buffer: &Entity<Buffer>,
buffer_snapshot: text::BufferSnapshot,
new_base_text: Arc<str>,
new_base_text: Arc<String>,
new_diff_base: Rope,
cx: &mut AsyncApp,
) -> Result<()> {
let (diff, language) = this.read_with(cx, |this, cx| {
let (diff, language, language_registry) = this.read_with(cx, |this, cx| {
let tracked_buffer = this
.tracked_buffers
.get(buffer)
@@ -386,28 +386,25 @@ impl ActionLog {
anyhow::Ok((
tracked_buffer.diff.clone(),
buffer.read(cx).language().cloned(),
buffer.read(cx).language_registry(),
))
})??;
let update = diff.update(cx, |diff, cx| {
diff.update_diff(
buffer_snapshot.clone(),
Some(new_base_text),
true,
language,
cx,
)
});
let diff_snapshot = BufferDiff::update_diff(
diff.clone(),
buffer_snapshot.clone(),
Some(new_base_text),
true,
false,
language,
language_registry,
cx,
)
.await;
let mut unreviewed_edits = Patch::default();
if let Ok(update) = update {
let update = update.await;
let diff_snapshot = diff.update(cx, |diff, cx| {
diff.set_snapshot(update.clone(), &buffer_snapshot, cx);
diff.snapshot(cx)
})?;
if let Ok(diff_snapshot) = diff_snapshot {
unreviewed_edits = cx
.background_spawn({
let diff_snapshot = diff_snapshot.clone();
let buffer_snapshot = buffer_snapshot.clone();
let new_diff_base = new_diff_base.clone();
async move {
@@ -434,6 +431,10 @@ impl ActionLog {
}
})
.await;
diff.update(cx, |diff, cx| {
diff.set_snapshot(diff_snapshot, &buffer_snapshot, cx);
})?;
}
this.update(cx, |this, cx| {
let tracked_buffer = this
@@ -768,7 +769,7 @@ impl ActionLog {
tracked.version != buffer.version
&& buffer
.file()
.is_some_and(|file| !file.disk_state().is_deleted())
.is_some_and(|file| file.disk_state() != DiskState::Deleted)
})
.map(|(buffer, _)| buffer)
}
@@ -776,7 +777,7 @@ impl ActionLog {
#[derive(Clone)]
pub struct ActionLogTelemetry {
pub agent_telemetry_id: SharedString,
pub agent_telemetry_id: &'static str,
pub session_id: Arc<str>,
}
@@ -974,8 +975,7 @@ impl TrackedBuffer {
fn has_edits(&self, cx: &App) -> bool {
self.diff
.read(cx)
.snapshot(cx)
.hunks(self.buffer.read(cx))
.hunks(self.buffer.read(cx), cx)
.next()
.is_some()
}
@@ -2388,14 +2388,13 @@ mod tests {
(
buffer,
diff.read(cx)
.snapshot(cx)
.hunks(&snapshot)
.hunks(&snapshot, cx)
.map(|hunk| HunkStatus {
diff_status: hunk.status().kind,
range: hunk.range,
old_text: diff
.read(cx)
.base_text(cx)
.base_text()
.text_for_range(hunk.diff_base_byte_range)
.collect(),
})
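Illustrative sketch (not part of this diff): the explicit `DiskState::Deleted` comparison seen in the hunks above. `Buffer`, `DiskState`, and the `file()`/`disk_state()` accessors are taken from this file's imports and hunks; the helper function is an assumption for illustration.

```rust
use language::{Buffer, DiskState};

// True when the buffer's backing file has been removed from disk.
fn buffer_is_deleted(buffer: &Buffer) -> bool {
    buffer
        .file()
        .is_some_and(|file| file.disk_state() == DiskState::Deleted)
}
```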

View File

@@ -5,12 +5,12 @@ mod legacy_thread;
mod native_agent_server;
pub mod outline;
mod templates;
#[cfg(test)]
mod tests;
mod thread;
mod tools;
use context_server::ContextServerId;
#[cfg(test)]
mod tests;
pub use db::*;
pub use history_store::*;
pub use native_agent_server::NativeAgentServer;
@@ -18,11 +18,11 @@ pub use templates::*;
pub use thread::*;
pub use tools::*;
use acp_thread::{AcpThread, AgentModelSelector, UserMessageId};
use acp_thread::{AcpThread, AgentModelSelector};
use agent_client_protocol as acp;
use anyhow::{Context as _, Result, anyhow};
use chrono::{DateTime, Utc};
use collections::{HashMap, HashSet, IndexMap};
use collections::{HashSet, IndexMap};
use fs::Fs;
use futures::channel::{mpsc, oneshot};
use futures::future::Shared;
@@ -30,15 +30,15 @@ use futures::{StreamExt, future};
use gpui::{
App, AppContext, AsyncApp, Context, Entity, SharedString, Subscription, Task, WeakEntity,
};
use language_model::{IconOrSvg, LanguageModel, LanguageModelProvider, LanguageModelRegistry};
use language_model::{LanguageModel, LanguageModelProvider, LanguageModelRegistry};
use project::{Project, ProjectItem, ProjectPath, Worktree};
use prompt_store::{
ProjectContext, PromptStore, RULES_FILE_NAMES, RulesFileContext, UserRulesContext,
WorktreeContext,
ProjectContext, PromptStore, RulesFileContext, UserRulesContext, WorktreeContext,
};
use serde::{Deserialize, Serialize};
use settings::{LanguageModelSelection, update_settings_file};
use std::any::Any;
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::rc::Rc;
use std::sync::Arc;
@@ -51,6 +51,18 @@ pub struct ProjectSnapshot {
pub timestamp: DateTime<Utc>,
}
const RULES_FILE_NAMES: [&str; 9] = [
".rules",
".cursorrules",
".windsurfrules",
".clinerules",
".github/copilot-instructions.md",
"CLAUDE.md",
"AGENT.md",
"AGENTS.md",
"GEMINI.md",
];
pub struct RulesLoadingError {
pub message: SharedString,
}
@@ -93,7 +105,7 @@ impl LanguageModels {
fn refresh_list(&mut self, cx: &App) {
let providers = LanguageModelRegistry::global(cx)
.read(cx)
.visible_providers()
.providers()
.into_iter()
.filter(|provider| provider.is_authenticated(cx))
.collect::<Vec<_>>();
@@ -153,21 +165,18 @@ impl LanguageModels {
id: Self::model_id(model),
name: model.name().0,
description: None,
icon: Some(match provider.icon() {
IconOrSvg::Svg(path) => acp_thread::AgentModelIcon::Path(path),
IconOrSvg::Icon(name) => acp_thread::AgentModelIcon::Named(name),
}),
icon: Some(provider.icon()),
}
}
fn model_id(model: &Arc<dyn LanguageModel>) -> acp::ModelId {
acp::ModelId::new(format!("{}/{}", model.provider_id().0, model.id().0))
acp::ModelId(format!("{}/{}", model.provider_id().0, model.id().0).into())
}
fn authenticate_all_language_model_providers(cx: &mut App) -> Task<()> {
let authenticate_all_providers = LanguageModelRegistry::global(cx)
.read(cx)
.visible_providers()
.providers()
.iter()
.map(|provider| (provider.id(), provider.name(), provider.authenticate(cx)))
.collect::<Vec<_>>();
@@ -254,24 +263,12 @@ impl NativeAgent {
.await;
cx.new(|cx| {
let context_server_store = project.read(cx).context_server_store();
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(context_server_store.clone(), cx));
let mut subscriptions = vec![
cx.subscribe(&project, Self::handle_project_event),
cx.subscribe(
&LanguageModelRegistry::global(cx),
Self::handle_models_updated_event,
),
cx.subscribe(
&context_server_store,
Self::handle_context_server_store_updated,
),
cx.subscribe(
&context_server_registry,
Self::handle_context_server_registry_event,
),
];
if let Some(prompt_store) = prompt_store.as_ref() {
subscriptions.push(cx.subscribe(prompt_store, Self::handle_prompts_updated_event))
@@ -280,14 +277,16 @@ impl NativeAgent {
let (project_context_needs_refresh_tx, project_context_needs_refresh_rx) =
watch::channel(());
Self {
sessions: HashMap::default(),
sessions: HashMap::new(),
history,
project_context: cx.new(|_| project_context),
project_context_needs_refresh: project_context_needs_refresh_tx,
_maintain_project_context: cx.spawn(async move |this, cx| {
Self::maintain_project_context(this, project_context_needs_refresh_rx, cx).await
}),
context_server_registry,
context_server_registry: cx.new(|cx| {
ContextServerRegistry::new(project.read(cx).context_server_store(), cx)
}),
templates,
models: LanguageModels::new(cx),
project,
@@ -356,9 +355,6 @@ impl NativeAgent {
pending_save: Task::ready(()),
},
);
self.update_available_commands(cx);
acp_thread
}
@@ -429,7 +425,10 @@ impl NativeAgent {
.into_iter()
.flat_map(|(contents, prompt_metadata)| match contents {
Ok(contents) => Some(UserRulesContext {
uuid: prompt_metadata.id.as_user()?,
uuid: match prompt_metadata.id {
prompt_store::PromptId::User { uuid } => uuid,
prompt_store::PromptId::EditWorkflow => return None,
},
title: prompt_metadata.title.map(|title| title.to_string()),
contents,
}),
@@ -623,99 +622,6 @@ impl NativeAgent {
}
}
fn handle_context_server_store_updated(
&mut self,
_store: Entity<project::context_server_store::ContextServerStore>,
_event: &project::context_server_store::Event,
cx: &mut Context<Self>,
) {
self.update_available_commands(cx);
}
fn handle_context_server_registry_event(
&mut self,
_registry: Entity<ContextServerRegistry>,
event: &ContextServerRegistryEvent,
cx: &mut Context<Self>,
) {
match event {
ContextServerRegistryEvent::ToolsChanged => {}
ContextServerRegistryEvent::PromptsChanged => {
self.update_available_commands(cx);
}
}
}
fn update_available_commands(&self, cx: &mut Context<Self>) {
let available_commands = self.build_available_commands(cx);
for session in self.sessions.values() {
if let Some(acp_thread) = session.acp_thread.upgrade() {
acp_thread.update(cx, |thread, cx| {
thread
.handle_session_update(
acp::SessionUpdate::AvailableCommandsUpdate(
acp::AvailableCommandsUpdate::new(available_commands.clone()),
),
cx,
)
.log_err();
});
}
}
}
fn build_available_commands(&self, cx: &App) -> Vec<acp::AvailableCommand> {
let registry = self.context_server_registry.read(cx);
let mut prompt_name_counts: HashMap<&str, usize> = HashMap::default();
for context_server_prompt in registry.prompts() {
*prompt_name_counts
.entry(context_server_prompt.prompt.name.as_str())
.or_insert(0) += 1;
}
registry
.prompts()
.flat_map(|context_server_prompt| {
let prompt = &context_server_prompt.prompt;
let should_prefix = prompt_name_counts
.get(prompt.name.as_str())
.copied()
.unwrap_or(0)
> 1;
let name = if should_prefix {
format!("{}.{}", context_server_prompt.server_id, prompt.name)
} else {
prompt.name.clone()
};
let mut command = acp::AvailableCommand::new(
name,
prompt.description.clone().unwrap_or_default(),
);
match prompt.arguments.as_deref() {
Some([arg]) => {
let hint = format!("<{}>", arg.name);
command = command.input(acp::AvailableCommandInput::Unstructured(
acp::UnstructuredCommandInput::new(hint),
));
}
Some([]) | None => {}
Some(_) => {
// skip >1 argument commands since we don't support them yet
return None;
}
}
Some(command)
})
.collect()
}
pub fn load_thread(
&mut self,
id: acp::SessionId,
@@ -814,102 +720,6 @@ impl NativeAgent {
history.update(cx, |history, cx| history.reload(cx)).ok();
});
}
fn send_mcp_prompt(
&self,
message_id: UserMessageId,
session_id: agent_client_protocol::SessionId,
prompt_name: String,
server_id: ContextServerId,
arguments: HashMap<String, String>,
original_content: Vec<acp::ContentBlock>,
cx: &mut Context<Self>,
) -> Task<Result<acp::PromptResponse>> {
let server_store = self.context_server_registry.read(cx).server_store().clone();
let path_style = self.project.read(cx).path_style(cx);
cx.spawn(async move |this, cx| {
let prompt =
crate::get_prompt(&server_store, &server_id, &prompt_name, arguments, cx).await?;
let (acp_thread, thread) = this.update(cx, |this, _cx| {
let session = this
.sessions
.get(&session_id)
.context("Failed to get session")?;
anyhow::Ok((session.acp_thread.clone(), session.thread.clone()))
})??;
let mut last_is_user = true;
thread.update(cx, |thread, cx| {
thread.push_acp_user_block(
message_id,
original_content.into_iter().skip(1),
path_style,
cx,
);
})?;
for message in prompt.messages {
let context_server::types::PromptMessage { role, content } = message;
let block = mcp_message_content_to_acp_content_block(content);
match role {
context_server::types::Role::User => {
let id = acp_thread::UserMessageId::new();
acp_thread.update(cx, |acp_thread, cx| {
acp_thread.push_user_content_block_with_indent(
Some(id.clone()),
block.clone(),
true,
cx,
);
anyhow::Ok(())
})??;
thread.update(cx, |thread, cx| {
thread.push_acp_user_block(id, [block], path_style, cx);
anyhow::Ok(())
})??;
}
context_server::types::Role::Assistant => {
acp_thread.update(cx, |acp_thread, cx| {
acp_thread.push_assistant_content_block_with_indent(
block.clone(),
false,
true,
cx,
);
anyhow::Ok(())
})??;
thread.update(cx, |thread, cx| {
thread.push_acp_agent_block(block, cx);
anyhow::Ok(())
})??;
}
}
last_is_user = role == context_server::types::Role::User;
}
let response_stream = thread.update(cx, |thread, cx| {
if last_is_user {
thread.send_existing(cx)
} else {
// Resume if MCP prompt did not end with a user message
thread.resume(cx)
}
})??;
cx.update(|cx| {
NativeAgentConnection::handle_thread_events(response_stream, acp_thread, cx)
})?
.await
})
}
}
/// Wrapper struct that implements the AgentConnection trait
@@ -979,12 +789,28 @@ impl NativeAgentConnection {
}
ThreadEvent::AgentText(text) => {
acp_thread.update(cx, |thread, cx| {
thread.push_assistant_content_block(text.into(), false, cx)
thread.push_assistant_content_block(
acp::ContentBlock::Text(acp::TextContent {
text,
annotations: None,
meta: None,
}),
false,
cx,
)
})?;
}
ThreadEvent::AgentThinking(text) => {
acp_thread.update(cx, |thread, cx| {
thread.push_assistant_content_block(text.into(), true, cx)
thread.push_assistant_content_block(
acp::ContentBlock::Text(acp::TextContent {
text,
annotations: None,
meta: None,
}),
true,
cx,
)
})?;
}
ThreadEvent::ToolCallAuthorization(ToolCallAuthorization {
@@ -998,9 +824,8 @@ impl NativeAgentConnection {
)
})??;
cx.background_spawn(async move {
if let acp::RequestPermissionOutcome::Selected(
acp::SelectedPermissionOutcome { option_id, .. },
) = outcome_task.await
if let acp::RequestPermissionOutcome::Selected { option_id } =
outcome_task.await
{
response
.send(option_id)
@@ -1027,7 +852,10 @@ impl NativeAgentConnection {
}
ThreadEvent::Stop(stop_reason) => {
log::debug!("Assistant message complete: {:?}", stop_reason);
return Ok(acp::PromptResponse::new(stop_reason));
return Ok(acp::PromptResponse {
stop_reason,
meta: None,
});
}
}
}
@@ -1039,44 +867,14 @@ impl NativeAgentConnection {
}
log::debug!("Response stream completed");
anyhow::Ok(acp::PromptResponse::new(acp::StopReason::EndTurn))
anyhow::Ok(acp::PromptResponse {
stop_reason: acp::StopReason::EndTurn,
meta: None,
})
})
}
}
struct Command<'a> {
prompt_name: &'a str,
arg_value: &'a str,
explicit_server_id: Option<&'a str>,
}
impl<'a> Command<'a> {
fn parse(prompt: &'a [acp::ContentBlock]) -> Option<Self> {
let acp::ContentBlock::Text(text_content) = prompt.first()? else {
return None;
};
let text = text_content.text.trim();
let command = text.strip_prefix('/')?;
let (command, arg_value) = command
.split_once(char::is_whitespace)
.unwrap_or((command, ""));
if let Some((server_id, prompt_name)) = command.split_once('.') {
Some(Self {
prompt_name,
arg_value,
explicit_server_id: Some(server_id),
})
} else {
Some(Self {
prompt_name: command,
arg_value,
explicit_server_id: None,
})
}
}
}
struct NativeAgentModelSelector {
session_id: acp::SessionId,
connection: NativeAgentConnection,
@@ -1170,8 +968,8 @@ impl acp_thread::AgentModelSelector for NativeAgentModelSelector {
}
impl acp_thread::AgentConnection for NativeAgentConnection {
fn telemetry_id(&self) -> SharedString {
"zed".into()
fn telemetry_id(&self) -> &'static str {
"zed"
}
fn new_thread(
@@ -1242,47 +1040,6 @@ impl acp_thread::AgentConnection for NativeAgentConnection {
let session_id = params.session_id.clone();
log::info!("Received prompt request for session: {}", session_id);
log::debug!("Prompt blocks count: {}", params.prompt.len());
if let Some(parsed_command) = Command::parse(&params.prompt) {
let registry = self.0.read(cx).context_server_registry.read(cx);
let explicit_server_id = parsed_command
.explicit_server_id
.map(|server_id| ContextServerId(server_id.into()));
if let Some(prompt) =
registry.find_prompt(explicit_server_id.as_ref(), parsed_command.prompt_name)
{
let arguments = if !parsed_command.arg_value.is_empty()
&& let Some(arg_name) = prompt
.prompt
.arguments
.as_ref()
.and_then(|args| args.first())
.map(|arg| arg.name.clone())
{
HashMap::from_iter([(arg_name, parsed_command.arg_value.to_string())])
} else {
Default::default()
};
let prompt_name = prompt.prompt.name.clone();
let server_id = prompt.server_id.clone();
return self.0.update(cx, |agent, cx| {
agent.send_mcp_prompt(
id,
session_id.clone(),
prompt_name,
server_id,
arguments,
params.prompt,
cx,
)
});
};
};
let path_style = self.0.read(cx).project.read(cx).path_style(cx);
self.run_turn(session_id, cx, move |thread, cx| {
@@ -1483,15 +1240,6 @@ impl TerminalHandle for AcpTerminalHandle {
self.terminal
.read_with(cx, |term, cx| term.current_output(cx))
}
fn kill(&self, cx: &AsyncApp) -> Result<()> {
cx.update(|cx| {
self.terminal.update(cx, |terminal, cx| {
terminal.kill(cx);
});
})?;
Ok(())
}
}
#[cfg(test)]
@@ -1626,12 +1374,10 @@ mod internal_tests {
IndexMap::from_iter([(
AgentModelGroupName("Fake".into()),
vec![AgentModelInfo {
id: acp::ModelId::new("fake/fake"),
id: acp::ModelId("fake/fake".into()),
name: "Fake".into(),
description: None,
icon: Some(acp_thread::AgentModelIcon::Named(
ui::IconName::ZedAssistant
)),
icon: Some(ui::IconName::ZedAssistant),
}]
)])
);
@@ -1689,7 +1435,7 @@ mod internal_tests {
// Select a model
let selector = connection.model_selector(&session_id).unwrap();
let model_id = acp::ModelId::new("fake/fake");
let model_id = acp::ModelId("fake/fake".into());
cx.update(|cx| selector.select_model(model_id.clone(), cx))
.await
.unwrap();
@@ -1775,14 +1521,20 @@ mod internal_tests {
thread.send(
vec![
"What does ".into(),
acp::ContentBlock::ResourceLink(acp::ResourceLink::new(
"b.md",
MentionUri::File {
acp::ContentBlock::ResourceLink(acp::ResourceLink {
name: "b.md".into(),
uri: MentionUri::File {
abs_path: path!("/a/b.md").into(),
}
.to_uri()
.to_string(),
)),
annotations: None,
description: None,
mime_type: None,
size: None,
title: None,
meta: None,
}),
" mean?".into(),
],
cx,
@@ -1881,35 +1633,3 @@ mod internal_tests {
});
}
}
fn mcp_message_content_to_acp_content_block(
content: context_server::types::MessageContent,
) -> acp::ContentBlock {
match content {
context_server::types::MessageContent::Text {
text,
annotations: _,
} => text.into(),
context_server::types::MessageContent::Image {
data,
mime_type,
annotations: _,
} => acp::ContentBlock::Image(acp::ImageContent::new(data, mime_type)),
context_server::types::MessageContent::Audio {
data,
mime_type,
annotations: _,
} => acp::ContentBlock::Audio(acp::AudioContent::new(data, mime_type)),
context_server::types::MessageContent::Resource {
resource,
annotations: _,
} => {
let mut link =
acp::ResourceLink::new(resource.uri.to_string(), resource.uri.to_string());
if let Some(mime_type) = resource.mime_type {
link = link.mime_type(mime_type);
}
acp::ContentBlock::ResourceLink(link)
}
}
}
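Illustrative sketch (not part of this diff): the slash-command parsing performed by `Command::parse` in the hunks above, condensed into a self-contained function. "/server.prompt arg" yields an explicit server id, while "/prompt arg" leaves it unset; the tuple return type is an assumption for illustration.

```rust
/// Returns (explicit_server_id, prompt_name, arg_value) for a leading slash command.
fn parse_slash_command(text: &str) -> Option<(Option<&str>, &str, &str)> {
    let command = text.trim().strip_prefix('/')?;
    let (command, arg_value) = command
        .split_once(char::is_whitespace)
        .unwrap_or((command, ""));
    match command.split_once('.') {
        Some((server_id, prompt_name)) => Some((Some(server_id), prompt_name, arg_value)),
        None => Some((None, command, arg_value)),
    }
}
```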

View File

@@ -366,7 +366,7 @@ impl ThreadsDatabase {
for (id, summary, updated_at) in rows {
threads.push(DbThreadMetadata {
id: acp::SessionId::new(id),
id: acp::SessionId(id),
title: summary.into(),
updated_at: DateTime::parse_from_rfc3339(&updated_at)?.with_timezone(&Utc),
});

View File

@@ -4,7 +4,6 @@ use crate::{
};
use Role::*;
use client::{Client, UserStore};
use eval_utils::{EvalOutput, EvalOutputProcessor, OutcomeKind};
use fs::FakeFs;
use futures::{FutureExt, future::LocalBoxFuture};
use gpui::{AppContext, TestAppContext, Timer};
@@ -23,59 +22,11 @@ use std::{
fmt::{self, Display},
path::Path,
str::FromStr,
sync::mpsc,
time::Duration,
};
use util::path;
#[derive(Default, Clone, Debug)]
struct EditAgentOutputProcessor {
mismatched_tag_threshold: f32,
cumulative_tags: usize,
cumulative_mismatched_tags: usize,
eval_outputs: Vec<EvalOutput<EditEvalMetadata>>,
}
fn mismatched_tag_threshold(mismatched_tag_threshold: f32) -> EditAgentOutputProcessor {
EditAgentOutputProcessor {
mismatched_tag_threshold,
cumulative_tags: 0,
cumulative_mismatched_tags: 0,
eval_outputs: Vec::new(),
}
}
#[derive(Clone, Debug)]
struct EditEvalMetadata {
tags: usize,
mismatched_tags: usize,
}
impl EvalOutputProcessor for EditAgentOutputProcessor {
type Metadata = EditEvalMetadata;
fn process(&mut self, output: &EvalOutput<Self::Metadata>) {
if matches!(output.outcome, OutcomeKind::Passed | OutcomeKind::Failed) {
self.cumulative_mismatched_tags += output.metadata.mismatched_tags;
self.cumulative_tags += output.metadata.tags;
self.eval_outputs.push(output.clone());
}
}
fn assert(&mut self) {
let mismatched_tag_ratio =
self.cumulative_mismatched_tags as f32 / self.cumulative_tags as f32;
if mismatched_tag_ratio > self.mismatched_tag_threshold {
for eval_output in &self.eval_outputs {
println!("{}", eval_output.data);
}
panic!(
"Too many mismatched tags: {:?}",
self.cumulative_mismatched_tags
);
}
}
}
#[test]
#[cfg_attr(not(feature = "unit-eval"), ignore)]
fn eval_extract_handle_command_output() {
@@ -101,19 +52,22 @@ fn eval_extract_handle_command_output() {
include_str!("evals/fixtures/extract_handle_command_output/possible-07.diff"),
];
let edit_description = "Extract `handle_command_output` method from `run_git_blame`.";
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.95,
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the `{input_file_path}` file and extract a method in
the final stanza of `run_git_blame` to deal with command failures,
call it `handle_command_output` and take the std::process::Output as the only parameter.
Do not document the method and do not add any comments.
Read the `{input_file_path}` file and extract a method in
the final stanza of `run_git_blame` to deal with command failures,
call it `handle_command_output` and take the std::process::Output as the only parameter.
Do not document the method and do not add any comments.
Add it right next to `run_git_blame` and copy it verbatim from `run_git_blame`.
"})],
Add it right next to `run_git_blame` and copy it verbatim from `run_git_blame`.
"})],
),
message(
Assistant,
@@ -145,9 +99,9 @@ fn eval_extract_handle_command_output() {
),
],
Some(input_file_content.into()),
EvalAssertion::assert_diff_any(possible_diffs.clone()),
))
});
EvalAssertion::assert_diff_any(possible_diffs),
),
);
}
#[test]
@@ -165,16 +119,18 @@ fn eval_delete_run_git_blame() {
let input_file_content = include_str!("evals/fixtures/delete_run_git_blame/before.rs");
let output_file_content = include_str!("evals/fixtures/delete_run_git_blame/after.rs");
let edit_description = "Delete the `run_git_blame` function.";
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.95,
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the `{input_file_path}` file and delete `run_git_blame`. Just that
one function, not its usages.
"})],
Read the `{input_file_path}` file and delete `run_git_blame`. Just that
one function, not its usages.
"})],
),
message(
Assistant,
@@ -207,8 +163,8 @@ fn eval_delete_run_git_blame() {
],
Some(input_file_content.into()),
EvalAssertion::assert_eq(output_file_content),
))
});
),
);
}
#[test]
@@ -226,16 +182,18 @@ fn eval_translate_doc_comments() {
let input_file_path = "root/canvas.rs";
let input_file_content = include_str!("evals/fixtures/translate_doc_comments/before.rs");
let edit_description = "Translate all doc comments to Italian";
eval_utils::eval(200, 1., mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
200,
1.,
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the {input_file_path} file and edit it (without overwriting it),
translating all the doc comments to italian.
"})],
Read the {input_file_path} file and edit it (without overwriting it),
translating all the doc comments to italian.
"})],
),
message(
Assistant,
@@ -268,8 +226,8 @@ fn eval_translate_doc_comments() {
],
Some(input_file_content.into()),
EvalAssertion::judge_diff("Doc comments were translated to Italian"),
))
});
),
);
}
#[test]
@@ -288,31 +246,33 @@ fn eval_use_wasi_sdk_in_compile_parser_to_wasm() {
let input_file_content =
include_str!("evals/fixtures/use_wasi_sdk_in_compile_parser_to_wasm/before.rs");
let edit_description = "Update compile_parser_to_wasm to use wasi-sdk instead of emscripten";
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.95,
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(formatdoc! {"
Read the `{input_file_path}` file and change `compile_parser_to_wasm` to use `wasi-sdk` instead of emscripten.
Use `ureq` to download the SDK for the current platform and architecture.
Extract the archive into a sibling of `lib` inside the `tree-sitter` directory in the cache_dir.
Compile the parser to wasm using the `bin/clang` executable (or `bin/clang.exe` on windows)
that's inside of the archive.
Don't re-download the SDK if that executable already exists.
Read the `{input_file_path}` file and change `compile_parser_to_wasm` to use `wasi-sdk` instead of emscripten.
Use `ureq` to download the SDK for the current platform and architecture.
Extract the archive into a sibling of `lib` inside the `tree-sitter` directory in the cache_dir.
Compile the parser to wasm using the `bin/clang` executable (or `bin/clang.exe` on windows)
that's inside of the archive.
Don't re-download the SDK if that executable already exists.
Use these clang flags: -fPIC -shared -Os -Wl,--export=tree_sitter_{{language_name}}
Use these clang flags: -fPIC -shared -Os -Wl,--export=tree_sitter_{{language_name}}
Here are the available wasi-sdk assets:
- wasi-sdk-25.0-x86_64-macos.tar.gz
- wasi-sdk-25.0-arm64-macos.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-windows.tar.gz
"})],
Here are the available wasi-sdk assets:
- wasi-sdk-25.0-x86_64-macos.tar.gz
- wasi-sdk-25.0-arm64-macos.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-linux.tar.gz
- wasi-sdk-25.0-arm64-linux.tar.gz
- wasi-sdk-25.0-x86_64-windows.tar.gz
"})],
),
message(
Assistant,
@@ -389,11 +349,11 @@ fn eval_use_wasi_sdk_in_compile_parser_to_wasm() {
],
Some(input_file_content.into()),
EvalAssertion::judge_diff(indoc! {"
- The compile_parser_to_wasm method has been changed to use wasi-sdk
- ureq is used to download the SDK for current platform and architecture
"}),
))
});
- The compile_parser_to_wasm method has been changed to use wasi-sdk
- ureq is used to download the SDK for current platform and architecture
"}),
),
);
}
#[test]
@@ -417,8 +377,11 @@ fn eval_disable_cursor_blinking() {
include_str!("evals/fixtures/disable_cursor_blinking/possible-03.diff"),
include_str!("evals/fixtures/disable_cursor_blinking/possible-04.diff"),
];
eval_utils::eval(100, 0.51, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.51,
0.05,
EvalInput::from_conversation(
vec![
message(User, [text("Let's research how to cursor blinking works.")]),
message(
@@ -455,10 +418,10 @@ fn eval_disable_cursor_blinking() {
message(
User,
[text(indoc! {"
Comment out the lines that interact with the BlinkManager.
Keep the outer `update` blocks, but comments everything that's inside (including if statements).
Don't add additional comments.
"})],
Comment out the lines that interact with the BlinkManager.
Keep the outer `update` blocks, but comments everything that's inside (including if statements).
Don't add additional comments.
"})],
),
message(
Assistant,
@@ -474,9 +437,9 @@ fn eval_disable_cursor_blinking() {
),
],
Some(input_file_content.into()),
EvalAssertion::assert_diff_any(possible_diffs.clone()),
))
});
EvalAssertion::assert_diff_any(possible_diffs),
),
);
}
#[test]
@@ -501,16 +464,20 @@ fn eval_from_pixels_constructor() {
let input_file_path = "root/canvas.rs";
let input_file_content = include_str!("evals/fixtures/from_pixels_constructor/before.rs");
let edit_description = "Implement from_pixels constructor and add tests.";
eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.25), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.95,
// For whatever reason, this eval produces more mismatched tags.
// Increasing for now, let's see if we can bring this down.
0.25,
EvalInput::from_conversation(
vec![
message(
User,
[text(indoc! {"
Introduce a new `from_pixels` constructor in Canvas and
also add tests for it in the same file.
"})],
Introduce a new `from_pixels` constructor in Canvas and
also add tests for it in the same file.
"})],
),
message(
Assistant,
@@ -575,92 +542,92 @@ fn eval_from_pixels_constructor() {
"tool_4",
"grep",
indoc! {"
Found 6 matches:
Found 6 matches:
## Matches in font-kit/src/loaders/core_text.rs
## Matches in font-kit/src/loaders/core_text.rs
### mod test L926-936
```
mod test {
use super::Font;
use crate::properties::{Stretch, Weight};
### mod test L926-936
```
mod test {
use super::Font;
use crate::properties::{Stretch, Weight};
#[cfg(feature = \"source\")]
use crate::source::SystemSource;
#[cfg(feature = \"source\")]
use crate::source::SystemSource;
static TEST_FONT_POSTSCRIPT_NAME: &'static str = \"ArialMT\";
static TEST_FONT_POSTSCRIPT_NAME: &'static str = \"ArialMT\";
#[cfg(feature = \"source\")]
#[test]
```
#[cfg(feature = \"source\")]
#[test]
```
55 lines remaining in ancestor node. Read the file to see all.
55 lines remaining in ancestor node. Read the file to see all.
### mod test L947-951
```
}
### mod test L947-951
```
}
#[test]
fn test_core_text_to_css_font_weight() {
// Exact matches
```
#[test]
fn test_core_text_to_css_font_weight() {
// Exact matches
```
### mod test L959-963
```
}
### mod test L959-963
```
}
#[test]
fn test_core_text_to_css_font_stretch() {
// Exact matches
```
#[test]
fn test_core_text_to_css_font_stretch() {
// Exact matches
```
## Matches in font-kit/src/loaders/freetype.rs
## Matches in font-kit/src/loaders/freetype.rs
### mod test L1238-1248
```
mod test {
use crate::loaders::freetype::Font;
### mod test L1238-1248
```
mod test {
use crate::loaders::freetype::Font;
static PCF_FONT_PATH: &str = \"resources/tests/times-roman-pcf/timR12.pcf\";
static PCF_FONT_POSTSCRIPT_NAME: &str = \"Times-Roman\";
static PCF_FONT_PATH: &str = \"resources/tests/times-roman-pcf/timR12.pcf\";
static PCF_FONT_POSTSCRIPT_NAME: &str = \"Times-Roman\";
#[test]
fn get_pcf_postscript_name() {
let font = Font::from_path(PCF_FONT_PATH, 0).unwrap();
assert_eq!(font.postscript_name().unwrap(), PCF_FONT_POSTSCRIPT_NAME);
}
```
#[test]
fn get_pcf_postscript_name() {
let font = Font::from_path(PCF_FONT_PATH, 0).unwrap();
assert_eq!(font.postscript_name().unwrap(), PCF_FONT_POSTSCRIPT_NAME);
}
```
1 lines remaining in ancestor node. Read the file to see all.
1 lines remaining in ancestor node. Read the file to see all.
## Matches in font-kit/src/sources/core_text.rs
## Matches in font-kit/src/sources/core_text.rs
### mod test L265-275
```
mod test {
use crate::properties::{Stretch, Weight};
### mod test L265-275
```
mod test {
use crate::properties::{Stretch, Weight};
#[test]
fn test_css_to_core_text_font_weight() {
// Exact matches
assert_eq!(super::css_to_core_text_font_weight(Weight(100.0)), -0.7);
assert_eq!(super::css_to_core_text_font_weight(Weight(400.0)), 0.0);
assert_eq!(super::css_to_core_text_font_weight(Weight(700.0)), 0.4);
assert_eq!(super::css_to_core_text_font_weight(Weight(900.0)), 0.8);
#[test]
fn test_css_to_core_text_font_weight() {
// Exact matches
assert_eq!(super::css_to_core_text_font_weight(Weight(100.0)), -0.7);
assert_eq!(super::css_to_core_text_font_weight(Weight(400.0)), 0.0);
assert_eq!(super::css_to_core_text_font_weight(Weight(700.0)), 0.4);
assert_eq!(super::css_to_core_text_font_weight(Weight(900.0)), 0.8);
```
```
27 lines remaining in ancestor node. Read the file to see all.
27 lines remaining in ancestor node. Read the file to see all.
### mod test L278-282
```
}
### mod test L278-282
```
}
#[test]
fn test_css_to_core_text_font_stretch() {
// Exact matches
```
"},
#[test]
fn test_css_to_core_text_font_stretch() {
// Exact matches
```
"},
)],
),
message(
@@ -678,11 +645,11 @@ fn eval_from_pixels_constructor() {
],
Some(input_file_content.into()),
EvalAssertion::judge_diff(indoc! {"
- The diff contains a new `from_pixels` constructor
- The diff contains new tests for the `from_pixels` constructor
"}),
))
});
- The diff contains a new `from_pixels` constructor
- The diff contains new tests for the `from_pixels` constructor
"}),
),
);
}
#[test]
@@ -700,9 +667,11 @@ fn eval_zode() {
let input_file_path = "root/zode.py";
let input_content = None;
let edit_description = "Create the main Zode CLI script";
eval_utils::eval(50, 1., mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
50,
1.,
0.05,
EvalInput::from_conversation(
vec![
message(User, [text(include_str!("evals/fixtures/zode/prompt.md"))]),
message(
@@ -761,7 +730,7 @@ fn eval_zode() {
],
),
],
input_content.clone(),
input_content,
EvalAssertion::new(async move |sample, _, _cx| {
let invalid_starts = [' ', '`', '\n'];
let mut message = String::new();
@@ -786,8 +755,8 @@ fn eval_zode() {
})
}
}),
))
});
),
);
}
#[test]
@@ -805,17 +774,19 @@ fn eval_add_overwrite_test() {
let input_file_path = "root/action_log.rs";
let input_file_content = include_str!("evals/fixtures/add_overwrite_test/before.rs");
let edit_description = "Add a new test for overwriting a file in action_log.rs";
eval_utils::eval(200, 0.5, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
200,
0.5, // TODO: make this eval better
0.05,
EvalInput::from_conversation(
vec![
message(
User,
[text(indoc! {"
Introduce a new test in `action_log.rs` to test overwriting a file.
That is, a file already exists, but we call `buffer_created` as if the file were new.
Take inspiration from all the other tests in the file.
"})],
Introduce a new test in `action_log.rs` to test overwriting a file.
That is, a file already exists, but we call `buffer_created` as if the file were new.
Take inspiration from all the other tests in the file.
"})],
),
message(
Assistant,
@@ -835,81 +806,81 @@ fn eval_add_overwrite_test() {
"tool_1",
"read_file",
indoc! {"
pub struct ActionLog [L13-20]
tracked_buffers [L15]
edited_since_project_diagnostics_check [L17]
project [L19]
impl ActionLog [L22-498]
pub fn new [L24-30]
pub fn project [L32-34]
pub fn checked_project_diagnostics [L37-39]
pub fn has_edited_files_since_project_diagnostics_check [L42-44]
fn track_buffer_internal [L46-101]
fn handle_buffer_event [L103-116]
fn handle_buffer_edited [L118-123]
fn handle_buffer_file_changed [L125-158]
async fn maintain_diff [L160-264]
pub fn buffer_read [L267-269]
pub fn buffer_created [L272-276]
pub fn buffer_edited [L279-287]
pub fn will_delete_buffer [L289-304]
pub fn keep_edits_in_range [L306-364]
pub fn reject_edits_in_ranges [L366-459]
pub fn keep_all_edits [L461-473]
pub fn changed_buffers [L476-482]
pub fn stale_buffers [L485-497]
fn apply_non_conflicting_edits [L500-561]
fn diff_snapshots [L563-585]
fn point_to_row_edit [L587-614]
enum ChangeAuthor [L617-620]
User [L618]
Agent [L619]
enum TrackedBufferStatus [L623-627]
Created [L624]
Modified [L625]
Deleted [L626]
struct TrackedBuffer [L629-641]
buffer [L630]
base_text [L631]
unreviewed_changes [L632]
status [L633]
version [L634]
diff [L635]
snapshot [L636]
diff_update [L637]
_open_lsp_handle [L638]
_maintain_diff [L639]
_subscription [L640]
impl TrackedBuffer [L643-657]
fn has_changes [L644-650]
fn schedule_diff_update [L652-656]
pub struct ChangedBuffer [L659-661]
pub diff [L660]
mod tests [L664-1574]
fn init_logger [L678-682]
fn init_test [L684-691]
async fn test_keep_edits [L694-769]
async fn test_deletions [L772-854]
async fn test_overlapping_user_edits [L857-951]
async fn test_creating_files [L954-1010]
async fn test_deleting_files [L1013-1120]
async fn test_reject_edits [L1123-1255]
async fn test_reject_multiple_edits [L1258-1331]
async fn test_reject_deleted_file [L1334-1388]
async fn test_reject_created_file [L1391-1443]
async fn test_random_diffs [L1446-1535]
fn quiesce [L1510-1534]
struct HunkStatus [L1538-1542]
range [L1539]
diff_status [L1540]
old_text [L1541]
fn unreviewed_hunks [L1544-1573]
pub struct ActionLog [L13-20]
tracked_buffers [L15]
edited_since_project_diagnostics_check [L17]
project [L19]
impl ActionLog [L22-498]
pub fn new [L24-30]
pub fn project [L32-34]
pub fn checked_project_diagnostics [L37-39]
pub fn has_edited_files_since_project_diagnostics_check [L42-44]
fn track_buffer_internal [L46-101]
fn handle_buffer_event [L103-116]
fn handle_buffer_edited [L118-123]
fn handle_buffer_file_changed [L125-158]
async fn maintain_diff [L160-264]
pub fn buffer_read [L267-269]
pub fn buffer_created [L272-276]
pub fn buffer_edited [L279-287]
pub fn will_delete_buffer [L289-304]
pub fn keep_edits_in_range [L306-364]
pub fn reject_edits_in_ranges [L366-459]
pub fn keep_all_edits [L461-473]
pub fn changed_buffers [L476-482]
pub fn stale_buffers [L485-497]
fn apply_non_conflicting_edits [L500-561]
fn diff_snapshots [L563-585]
fn point_to_row_edit [L587-614]
enum ChangeAuthor [L617-620]
User [L618]
Agent [L619]
enum TrackedBufferStatus [L623-627]
Created [L624]
Modified [L625]
Deleted [L626]
struct TrackedBuffer [L629-641]
buffer [L630]
base_text [L631]
unreviewed_changes [L632]
status [L633]
version [L634]
diff [L635]
snapshot [L636]
diff_update [L637]
_open_lsp_handle [L638]
_maintain_diff [L639]
_subscription [L640]
impl TrackedBuffer [L643-657]
fn has_changes [L644-650]
fn schedule_diff_update [L652-656]
pub struct ChangedBuffer [L659-661]
pub diff [L660]
mod tests [L664-1574]
fn init_logger [L678-682]
fn init_test [L684-691]
async fn test_keep_edits [L694-769]
async fn test_deletions [L772-854]
async fn test_overlapping_user_edits [L857-951]
async fn test_creating_files [L954-1010]
async fn test_deleting_files [L1013-1120]
async fn test_reject_edits [L1123-1255]
async fn test_reject_multiple_edits [L1258-1331]
async fn test_reject_deleted_file [L1334-1388]
async fn test_reject_created_file [L1391-1443]
async fn test_random_diffs [L1446-1535]
fn quiesce [L1510-1534]
struct HunkStatus [L1538-1542]
range [L1539]
diff_status [L1540]
old_text [L1541]
fn unreviewed_hunks [L1544-1573]
Showing symbols 1-69 (total symbols: 69)
Showing symbols 1-69 (total symbols: 69)
Using the line numbers in this outline, you can call this tool again while specifying
the start_line and end_line fields to see the implementations of symbols in the outline.
"},
Using the line numbers in this outline, you can call this tool again while specifying
the start_line and end_line fields to see the implementations of symbols in the outline.
"},
)],
),
message(
@@ -989,17 +960,17 @@ fn eval_add_overwrite_test() {
Assistant,
[
text(indoc! {"
Based on my understanding of the existing tests and code, I'll now create a new test for overwriting a file. In this test, we'll:
Based on my understanding of the existing tests and code, I'll now create a new test for overwriting a file. In this test, we'll:
1. Create a file
2. Open a buffer for it
3. Track it in the action log
4. Modify the buffer
5. Call `buffer_created` again as if the file were new
6. Check that the changes are properly tracked
1. Create a file
2. Open a buffer for it
3. Track it in the action log
4. Modify the buffer
5. Call `buffer_created` again as if the file were new
6. Check that the changes are properly tracked
Let's write the test:
"}),
Let's write the test:
"}),
tool_use(
"tool_5",
"edit_file",
@@ -1016,8 +987,8 @@ fn eval_add_overwrite_test() {
EvalAssertion::judge_diff(
"A new test for overwritten files was created, without changing any previous test",
),
))
});
),
);
}
#[test]
@@ -1042,18 +1013,20 @@ fn eval_create_empty_file() {
let input_file_content = None;
let expected_output_content = String::new();
eval_utils::eval(100, 0.99, mismatched_tag_threshold(0.05), move || {
run_eval(EvalInput::from_conversation(
eval(
100,
0.99,
0.05,
EvalInput::from_conversation(
vec![
message(User, [text("Create a second empty todo file ")]),
message(
Assistant,
[
text(formatdoc! {"
I'll help you create a second empty todo file.
First, let me examine the project structure to see if there's already a todo file, which will help me determine the appropriate name and location for the second one.
"}),
I'll help you create a second empty todo file.
First, let me examine the project structure to see if there's already a todo file, which will help me determine the appropriate name and location for the second one.
"}),
tool_use(
"toolu_01GAF8TtsgpjKxCr8fgQLDgR",
"list_directory",
@@ -1075,8 +1048,8 @@ fn eval_create_empty_file() {
Assistant,
[
text(formatdoc! {"
I can see there's already a `TODO` file in the `root` directory. Let me create a second empty todo file called `TODO3` in the same directory:
"}),
I can see there's already a `TODO` file in the `root` directory. Let me create a second empty todo file called `TODO3` in the same directory:
"}),
tool_use(
"toolu_01Tb3iQ9griqSYMmVuykQPWU",
"edit_file",
@@ -1089,12 +1062,12 @@ fn eval_create_empty_file() {
],
),
],
input_file_content.clone(),
input_file_content,
// Bad behavior is to write something like
// "I'll create an empty TODO3 file as requested."
EvalAssertion::assert_eq(expected_output_content.clone()),
))
});
EvalAssertion::assert_eq(expected_output_content),
),
);
}
fn message(
@@ -1336,45 +1309,55 @@ impl EvalAssertion {
}
}
fn run_eval(eval: EvalInput) -> eval_utils::EvalOutput<EditEvalMetadata> {
fn eval(
iterations: usize,
expected_pass_ratio: f32,
mismatched_tag_threshold: f32,
eval: EvalInput,
) {
eval_utils::eval(
iterations,
expected_pass_ratio,
mismatched_tag_threshold,
Arc::new(move |tx| run_eval(eval.clone(), tx)),
);
}
fn run_eval(eval: EvalInput, tx: mpsc::Sender<eval_utils::EvalOutput>) {
let dispatcher = gpui::TestDispatcher::new(StdRng::from_os_rng());
let mut cx = TestAppContext::build(dispatcher, None);
let result = cx.executor().block_test(async {
let test = EditAgentTest::new(&mut cx).await;
test.eval(eval, &mut cx).await
});
cx.quit();
match result {
let output = match result {
Ok(output) => eval_utils::EvalOutput {
data: output.to_string(),
outcome: if output.assertion.score < 80 {
mismatched_tags: output.sample.edit_output.parser_metrics.mismatched_tags,
tags: output.sample.edit_output.parser_metrics.tags,
outcome_kind: if output.assertion.score < 80 {
eval_utils::OutcomeKind::Failed
} else {
eval_utils::OutcomeKind::Passed
},
metadata: EditEvalMetadata {
tags: output.sample.edit_output.parser_metrics.tags,
mismatched_tags: output.sample.edit_output.parser_metrics.mismatched_tags,
},
},
Err(e) => eval_utils::EvalOutput {
data: format!("{e:?}"),
outcome: eval_utils::OutcomeKind::Error,
metadata: EditEvalMetadata {
tags: 0,
mismatched_tags: 0,
},
mismatched_tags: 0,
tags: 0,
outcome_kind: eval_utils::OutcomeKind::Error,
},
}
};
tx.send(output).unwrap();
}
#[derive(Clone)]
struct EditEvalOutput {
struct EvalOutput {
sample: EvalSample,
assertion: EvalAssertionOutcome,
}
impl Display for EditEvalOutput {
impl Display for EvalOutput {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
writeln!(f, "Score: {:?}", self.assertion.score)?;
if let Some(message) = self.assertion.message.as_ref() {
@@ -1488,7 +1471,7 @@ impl EditAgentTest {
})
}
async fn eval(&self, mut eval: EvalInput, cx: &mut TestAppContext) -> Result<EditEvalOutput> {
async fn eval(&self, mut eval: EvalInput, cx: &mut TestAppContext) -> Result<EvalOutput> {
// Make sure the last message in the conversation is cached.
eval.conversation.last_mut().unwrap().cache = true;
@@ -1597,7 +1580,7 @@ impl EditAgentTest {
.run(&sample, self.judge_model.clone(), cx)
.await?;
Ok(EditEvalOutput { assertion, sample })
Ok(EvalOutput { assertion, sample })
}
}
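The hunk above leaves the eval entry point in two shapes: a processor form (`eval_utils::eval(100, 0.95, mismatched_tag_threshold(0.05), move || run_eval(..))`) and a channel form in which `run_eval` reports each sample over an `mpsc::Sender`. As a rough, simplified model of the channel-based pattern (local stand-in types only, not the actual `eval_utils` crate used here), a minimal harness could look like this:

```rust
// Simplified model of the channel-based harness shape visible in this hunk.
// All names are illustrative stand-ins; the real eval_utils crate may differ.
use std::sync::mpsc;

#[allow(dead_code)]
#[derive(Clone, Debug)]
enum OutcomeKind {
    Passed,
    Failed,
    Error,
}

#[derive(Clone, Debug)]
struct EvalOutput {
    data: String,
    outcome_kind: OutcomeKind,
}

fn eval(iterations: usize, expected_pass_ratio: f32, run: impl Fn(mpsc::Sender<EvalOutput>)) {
    let (tx, rx) = mpsc::channel();
    for _ in 0..iterations {
        // Each invocation runs one sample and reports it over the channel.
        run(tx.clone());
    }
    drop(tx);
    let outputs: Vec<EvalOutput> = rx.into_iter().collect();
    let passed = outputs
        .iter()
        .filter(|o| matches!(o.outcome_kind, OutcomeKind::Passed))
        .count();
    let ratio = passed as f32 / iterations as f32;
    if ratio < expected_pass_ratio {
        for output in &outputs {
            eprintln!("{}", output.data);
        }
        panic!("pass ratio {ratio} below expected {expected_pass_ratio}");
    }
}

fn main() {
    eval(10, 0.5, |tx| {
        tx.send(EvalOutput {
            data: "sample transcript".to_string(),
            outcome_kind: OutcomeKind::Passed,
        })
        .unwrap();
    });
}
```

Reporting each sample over a channel keeps sample execution separate from aggregation, which is presumably what lets the edit-agent evals and the thread evals share one harness.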

View File

@@ -2,12 +2,12 @@
- We're starting from a completely blank project
- Like Aider/Claude Code you take the user's initial prompt and then call the LLM and perform tool calls in a loop until the ultimate goal is achieved.
- Unlike Aider or Claude code, it's not intended to be interactive. Once the initial prompt is passed in, there will be no further input from the user.
- The system you will build must reach the stated goal just by performing tool calls and calling the LLM
- The system you will build must reach the stated goal just by performing too calls and calling the LLM
- I want you to build this in python. Use the anthropic python sdk and the model context protocol sdk. Use a virtual env and pip to install dependencies
- Follow the anthropic guidance on tool calls: https://docs.anthropic.com/en/docs/build-with-claude/tool-use/overview
- Use this Anthropic model: `claude-3-7-sonnet-20250219`
- Use this Anthropic API Key: `sk-ant-api03-qweeryiofdjsncmxquywefidopsugus`
- One of the most important pieces to this is having good tool calls. We will be using the tools provided by the Claude MCP server. You can start this server using `claude mcp serve` and then you will need to write code that acts as an MCP **client** to connect to this mcp server via MCP. Likely you want to start this using a subprocess. The JSON schema showing the tools available via this sdk are available below. Via this MCP server you have access to all the tools that zode needs: Bash, GlobTool, GrepTool, LS, View, Edit, Replace, WebFetchTool
- One of the most important pieces to this is having good too calls. We will be using the tools provided by the Claude MCP server. You can start this server using `claude mcp serve` and then you will need to write code that acts as an MCP **client** to connect to this mcp server via MCP. Likely you want to start this using a subprocess. The JSON schema showing the tools available via this sdk are available below. Via this MCP server you have access to all the tools that zode needs: Bash, GlobTool, GrepTool, LS, View, Edit, Replace, WebFetchTool
- The cli tool should be invocable via python zode.py file.md where file.md is any possible file that contains the user's prompt. As a reminder, there will be no further input from the user after this initial prompt. Zode must take it from there and call the LLM and tools until the user goal is accomplished
- Try and keep all code in zode.py and make heavy use of the asks I mentioned
- Once you've implemented this, you must run python zode.py eval/instructions.md to see how well our new agent tool does!
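For orientation only: the fixture above asks for a Python CLI, but the control flow it describes (take an initial prompt, then alternate LLM completions and MCP tool calls until the goal is reached, with no further user input) is language-agnostic. The sketch below models that loop with entirely hypothetical Rust traits; none of these names come from this repository or from the Anthropic/MCP SDKs the fixture mentions.

```rust
// Hypothetical sketch of the prompt -> LLM -> tool-call loop the fixture
// describes. Every type and trait here is an illustrative stand-in.
#[derive(Debug)]
enum ModelTurn {
    Text(String),
    ToolCall { name: String, arguments: String },
}

trait Model {
    fn complete(&mut self, transcript: &[String]) -> ModelTurn;
}

trait ToolHost {
    fn call(&mut self, name: &str, arguments: &str) -> String;
}

fn run_agent(prompt: &str, model: &mut dyn Model, tools: &mut dyn ToolHost) -> String {
    let mut transcript = vec![prompt.to_string()];
    loop {
        match model.complete(&transcript) {
            // Plain text is treated as the final answer: the goal is reached.
            ModelTurn::Text(answer) => return answer,
            // Tool calls are executed and their results fed back to the model.
            ModelTurn::ToolCall { name, arguments } => {
                let result = tools.call(&name, &arguments);
                transcript.push(format!("tool {name} -> {result}"));
            }
        }
    }
}

struct EchoTools;
impl ToolHost for EchoTools {
    fn call(&mut self, name: &str, arguments: &str) -> String {
        format!("{name}({arguments}) ok")
    }
}

struct ScriptedModel {
    turns: Vec<ModelTurn>,
}
impl Model for ScriptedModel {
    fn complete(&mut self, _transcript: &[String]) -> ModelTurn {
        self.turns.remove(0)
    }
}

fn main() {
    let mut model = ScriptedModel {
        turns: vec![
            ModelTurn::ToolCall {
                name: "View".into(),
                arguments: "zode.py".into(),
            },
            ModelTurn::Text("done".into()),
        ],
    };
    let answer = run_agent("Build zode", &mut model, &mut EchoTools);
    assert_eq!(answer, "done");
}
```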

View File

@@ -216,10 +216,14 @@ impl HistoryStore {
}
pub fn reload(&self, cx: &mut Context<Self>) {
let database_connection = ThreadsDatabase::connect(cx);
let database_future = ThreadsDatabase::connect(cx);
cx.spawn(async move |this, cx| {
let database = database_connection.await;
let threads = database.map_err(|err| anyhow!(err))?.list_threads().await?;
let threads = database_future
.await
.map_err(|err| anyhow!(err))?
.list_threads()
.await?;
this.update(cx, |this, cx| {
if this.recently_opened_entries.len() < MAX_RECENTLY_OPENED_ENTRIES {
for thread in threads
@@ -340,8 +344,7 @@ impl HistoryStore {
fn load_recently_opened_entries(cx: &AsyncApp) -> Task<Result<VecDeque<HistoryEntryId>>> {
cx.background_spawn(async move {
if cfg!(any(feature = "test-support", test)) {
log::warn!("history store does not persist in tests");
return Ok(VecDeque::new());
anyhow::bail!("history store does not persist in tests");
}
let json = KEY_VALUE_STORE
.read_kvp(RECENTLY_OPENED_THREADS_KEY)?
@@ -351,9 +354,9 @@ impl HistoryStore {
.into_iter()
.take(MAX_RECENTLY_OPENED_ENTRIES)
.flat_map(|entry| match entry {
SerializedRecentOpen::AcpThread(id) => {
Some(HistoryEntryId::AcpThread(acp::SessionId::new(id.as_str())))
}
SerializedRecentOpen::AcpThread(id) => Some(HistoryEntryId::AcpThread(
acp::SessionId(id.as_str().into()),
)),
SerializedRecentOpen::TextThread(file_name) => Some(
HistoryEntryId::TextThread(text_threads_dir().join(file_name).into()),
),

View File

@@ -1,14 +1,10 @@
use std::{any::Any, path::Path, rc::Rc, sync::Arc};
use agent_client_protocol as acp;
use agent_servers::{AgentServer, AgentServerDelegate};
use agent_settings::AgentSettings;
use anyhow::Result;
use collections::HashSet;
use fs::Fs;
use gpui::{App, Entity, SharedString, Task};
use prompt_store::PromptStore;
use settings::{LanguageModelSelection, Settings as _, update_settings_file};
use crate::{HistoryStore, NativeAgent, NativeAgentConnection, templates::Templates};
@@ -25,6 +21,10 @@ impl NativeAgentServer {
}
impl AgentServer for NativeAgentServer {
fn telemetry_id(&self) -> &'static str {
"zed"
}
fn name(&self) -> SharedString {
"Zed Agent".into()
}
@@ -75,38 +75,6 @@ impl AgentServer for NativeAgentServer {
fn into_any(self: Rc<Self>) -> Rc<dyn Any> {
self
}
fn favorite_model_ids(&self, cx: &mut App) -> HashSet<acp::ModelId> {
AgentSettings::get_global(cx).favorite_model_ids()
}
fn toggle_favorite_model(
&self,
model_id: acp::ModelId,
should_be_favorite: bool,
fs: Arc<dyn Fs>,
cx: &App,
) {
let selection = model_id_to_selection(&model_id);
update_settings_file(fs, cx, move |settings, _| {
let agent = settings.agent.get_or_insert_default();
if should_be_favorite {
agent.add_favorite_model(selection.clone());
} else {
agent.remove_favorite_model(&selection);
}
});
}
}
/// Convert a ModelId (e.g. "anthropic/claude-3-5-sonnet") to a LanguageModelSelection.
fn model_id_to_selection(model_id: &acp::ModelId) -> LanguageModelSelection {
let id = model_id.0.as_ref();
let (provider, model) = id.split_once('/').unwrap_or(("", id));
LanguageModelSelection {
provider: provider.to_owned().into(),
model: model.to_owned(),
}
}
#[cfg(test)]
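The `model_id_to_selection` helper shown above documents its own contract: split a model id such as "anthropic/claude-3-5-sonnet" on the first `/`, and fall back to an empty provider when there is no slash. A self-contained sketch of just that split, using plain strings instead of `LanguageModelSelection`:

```rust
// Self-contained sketch of the ModelId -> (provider, model) split used above;
// plain strings stand in for settings::LanguageModelSelection.
fn split_model_id(id: &str) -> (String, String) {
    // "anthropic/claude-3-5-sonnet" -> ("anthropic", "claude-3-5-sonnet");
    // an id with no '/' keeps the whole id as the model name.
    let (provider, model) = id.split_once('/').unwrap_or(("", id));
    (provider.to_owned(), model.to_owned())
}

fn main() {
    assert_eq!(
        split_model_id("anthropic/claude-3-5-sonnet"),
        ("anthropic".to_string(), "claude-3-5-sonnet".to_string())
    );
    assert_eq!(split_model_id("fake"), (String::new(), "fake".to_string()));
}
```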

View File

@@ -16,7 +16,7 @@ You are a highly skilled software engineer with extensive knowledge in many prog
3. DO NOT use tools to access items that are already available in the context section.
4. Use only the tools that are currently available.
5. DO NOT use a tool that is not available just because it appears in the conversation. This means the user turned it off.
6. When running commands that may run indefinitely or for a long time (such as build scripts, tests, servers, or file watchers), specify `timeout_ms` to bound runtime. If the command times out, the user can always ask you to run it again with a longer timeout or no timeout if they're willing to wait or cancel manually.
6. NEVER run commands that don't terminate on their own such as web servers (like `npm run start`, `npm run dev`, `python -m http.server`, etc) or file watchers.
7. Avoid HTML entity escaping - use plain characters instead.
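One side of this hunk rewrites rule 6 around an explicit `timeout_ms` bound rather than forbidding non-terminating commands outright. The struct below is a local stand-in that mirrors the `TerminalToolInput { command, cd, timeout_ms }` shape appearing in test hunks later in this comparison; it only illustrates how that bound is expressed and is not the crate's real type.

```rust
// Local stand-in mirroring the TerminalToolInput shape used in the tests below;
// field types are assumptions for illustration, not the real definitions.
#[allow(dead_code)]
struct TerminalToolInput {
    command: String,
    cd: String,
    timeout_ms: Option<u64>,
}

fn main() {
    // A potentially long-running command gets an explicit runtime bound...
    let bounded = TerminalToolInput {
        command: "cargo test".into(),
        cd: ".".into(),
        timeout_ms: Some(120_000),
    };
    // ...while a quick command can simply omit it.
    let quick = TerminalToolInput {
        command: "ls".into(),
        cd: ".".into(),
        timeout_ms: None,
    };
    assert!(bounded.timeout_ms.is_some() && quick.timeout_ms.is_none());
}
```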
## Searching and Reading

View File

@@ -9,16 +9,14 @@ use collections::IndexMap;
use context_server::{ContextServer, ContextServerCommand, ContextServerId};
use fs::{FakeFs, Fs};
use futures::{
FutureExt as _, StreamExt,
StreamExt,
channel::{
mpsc::{self, UnboundedReceiver},
oneshot,
},
future::{Fuse, Shared},
};
use gpui::{
App, AppContext, AsyncApp, Entity, Task, TestAppContext, UpdateGlobal,
http_client::FakeHttpClient,
App, AppContext, Entity, Task, TestAppContext, UpdateGlobal, http_client::FakeHttpClient,
};
use indoc::indoc;
use language_model::{
@@ -37,109 +35,12 @@ use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use serde_json::json;
use settings::{Settings, SettingsStore};
use std::{
path::Path,
pin::Pin,
rc::Rc,
sync::{
Arc,
atomic::{AtomicBool, Ordering},
},
time::Duration,
};
use std::{path::Path, rc::Rc, sync::Arc, time::Duration};
use util::path;
mod test_tools;
use test_tools::*;
fn init_test(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
});
}
struct FakeTerminalHandle {
killed: Arc<AtomicBool>,
wait_for_exit: Shared<Task<acp::TerminalExitStatus>>,
output: acp::TerminalOutputResponse,
id: acp::TerminalId,
}
impl FakeTerminalHandle {
fn new_never_exits(cx: &mut App) -> Self {
let killed = Arc::new(AtomicBool::new(false));
let killed_for_task = killed.clone();
let wait_for_exit = cx
.spawn(async move |cx| {
loop {
if killed_for_task.load(Ordering::SeqCst) {
return acp::TerminalExitStatus::new();
}
cx.background_executor()
.timer(Duration::from_millis(1))
.await;
}
})
.shared();
Self {
killed,
wait_for_exit,
output: acp::TerminalOutputResponse::new("partial output".to_string(), false),
id: acp::TerminalId::new("fake_terminal".to_string()),
}
}
fn was_killed(&self) -> bool {
self.killed.load(Ordering::SeqCst)
}
}
impl crate::TerminalHandle for FakeTerminalHandle {
fn id(&self, _cx: &AsyncApp) -> Result<acp::TerminalId> {
Ok(self.id.clone())
}
fn current_output(&self, _cx: &AsyncApp) -> Result<acp::TerminalOutputResponse> {
Ok(self.output.clone())
}
fn wait_for_exit(&self, _cx: &AsyncApp) -> Result<Shared<Task<acp::TerminalExitStatus>>> {
Ok(self.wait_for_exit.clone())
}
fn kill(&self, _cx: &AsyncApp) -> Result<()> {
self.killed.store(true, Ordering::SeqCst);
Ok(())
}
}
struct FakeThreadEnvironment {
handle: Rc<FakeTerminalHandle>,
}
impl crate::ThreadEnvironment for FakeThreadEnvironment {
fn create_terminal(
&self,
_command: String,
_cwd: Option<std::path::PathBuf>,
_output_byte_limit: Option<u64>,
_cx: &mut AsyncApp,
) -> Task<Result<Rc<dyn crate::TerminalHandle>>> {
Task::ready(Ok(self.handle.clone() as Rc<dyn crate::TerminalHandle>))
}
}
fn always_allow_tools(cx: &mut TestAppContext) {
cx.update(|cx| {
let mut settings = agent_settings::AgentSettings::get_global(cx).clone();
settings.always_allow_tool_actions = true;
agent_settings::AgentSettings::override_global(settings, cx);
});
}
#[gpui::test]
async fn test_echo(cx: &mut TestAppContext) {
let ThreadTest { model, thread, .. } = setup(cx, TestModel::Fake).await;
@@ -170,120 +71,6 @@ async fn test_echo(cx: &mut TestAppContext) {
assert_eq!(stop_events(events), vec![acp::StopReason::EndTurn]);
}
#[gpui::test]
async fn test_terminal_tool_timeout_kills_handle(cx: &mut TestAppContext) {
init_test(cx);
always_allow_tools(cx);
let fs = FakeFs::new(cx.executor());
let project = Project::test(fs, [], cx).await;
let handle = Rc::new(cx.update(|cx| FakeTerminalHandle::new_never_exits(cx)));
let environment = Rc::new(FakeThreadEnvironment {
handle: handle.clone(),
});
#[allow(clippy::arc_with_non_send_sync)]
let tool = Arc::new(crate::TerminalTool::new(project, environment));
let (event_stream, mut rx) = crate::ToolCallEventStream::test();
let task = cx.update(|cx| {
tool.run(
crate::TerminalToolInput {
command: "sleep 1000".to_string(),
cd: ".".to_string(),
timeout_ms: Some(5),
},
event_stream,
cx,
)
});
let update = rx.expect_update_fields().await;
assert!(
update.content.iter().any(|blocks| {
blocks
.iter()
.any(|c| matches!(c, acp::ToolCallContent::Terminal(_)))
}),
"expected tool call update to include terminal content"
);
let mut task_future: Pin<Box<Fuse<Task<Result<String>>>>> = Box::pin(task.fuse());
let deadline = std::time::Instant::now() + Duration::from_millis(500);
loop {
if let Some(result) = task_future.as_mut().now_or_never() {
let result = result.expect("terminal tool task should complete");
assert!(
handle.was_killed(),
"expected terminal handle to be killed on timeout"
);
assert!(
result.contains("partial output"),
"expected result to include terminal output, got: {result}"
);
return;
}
if std::time::Instant::now() >= deadline {
panic!("timed out waiting for terminal tool task to complete");
}
cx.run_until_parked();
cx.background_executor.timer(Duration::from_millis(1)).await;
}
}
#[gpui::test]
#[ignore]
async fn test_terminal_tool_without_timeout_does_not_kill_handle(cx: &mut TestAppContext) {
init_test(cx);
always_allow_tools(cx);
let fs = FakeFs::new(cx.executor());
let project = Project::test(fs, [], cx).await;
let handle = Rc::new(cx.update(|cx| FakeTerminalHandle::new_never_exits(cx)));
let environment = Rc::new(FakeThreadEnvironment {
handle: handle.clone(),
});
#[allow(clippy::arc_with_non_send_sync)]
let tool = Arc::new(crate::TerminalTool::new(project, environment));
let (event_stream, mut rx) = crate::ToolCallEventStream::test();
let _task = cx.update(|cx| {
tool.run(
crate::TerminalToolInput {
command: "sleep 1000".to_string(),
cd: ".".to_string(),
timeout_ms: None,
},
event_stream,
cx,
)
});
let update = rx.expect_update_fields().await;
assert!(
update.content.iter().any(|blocks| {
blocks
.iter()
.any(|c| matches!(c, acp::ToolCallContent::Terminal(_)))
}),
"expected tool call update to include terminal content"
);
smol::Timer::after(Duration::from_millis(25)).await;
assert!(
!handle.was_killed(),
"did not expect terminal handle to be killed without a timeout"
);
}
#[gpui::test]
async fn test_thinking(cx: &mut TestAppContext) {
let ThreadTest { model, thread, .. } = setup(cx, TestModel::Fake).await;
@@ -706,14 +493,14 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
// Approve the first
tool_call_auth_1
.response
.send(tool_call_auth_1.options[1].option_id.clone())
.send(tool_call_auth_1.options[1].id.clone())
.unwrap();
cx.run_until_parked();
// Reject the second
tool_call_auth_2
.response
.send(tool_call_auth_1.options[2].option_id.clone())
.send(tool_call_auth_1.options[2].id.clone())
.unwrap();
cx.run_until_parked();
@@ -723,14 +510,14 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
message.content,
vec![
language_model::MessageContent::ToolResult(LanguageModelToolResult {
tool_use_id: tool_call_auth_1.tool_call.tool_call_id.0.to_string().into(),
tool_use_id: tool_call_auth_1.tool_call.id.0.to_string().into(),
tool_name: ToolRequiringPermission::name().into(),
is_error: false,
content: "Allowed".into(),
output: Some("Allowed".into())
}),
language_model::MessageContent::ToolResult(LanguageModelToolResult {
tool_use_id: tool_call_auth_2.tool_call.tool_call_id.0.to_string().into(),
tool_use_id: tool_call_auth_2.tool_call.id.0.to_string().into(),
tool_name: ToolRequiringPermission::name().into(),
is_error: true,
content: "Permission to run tool denied by user".into(),
@@ -756,7 +543,7 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
let tool_call_auth_3 = next_tool_call_authorization(&mut events).await;
tool_call_auth_3
.response
.send(tool_call_auth_3.options[0].option_id.clone())
.send(tool_call_auth_3.options[0].id.clone())
.unwrap();
cx.run_until_parked();
let completion = fake_model.pending_completions().pop().unwrap();
@@ -765,7 +552,7 @@ async fn test_tool_authorization(cx: &mut TestAppContext) {
message.content,
vec![language_model::MessageContent::ToolResult(
LanguageModelToolResult {
tool_use_id: tool_call_auth_3.tool_call.tool_call_id.0.to_string().into(),
tool_use_id: tool_call_auth_3.tool_call.id.0.to_string().into(),
tool_name: ToolRequiringPermission::name().into(),
is_error: false,
content: "Allowed".into(),
@@ -1566,20 +1353,20 @@ async fn test_cancellation(cx: &mut TestAppContext) {
ThreadEvent::ToolCall(tool_call) => {
assert_eq!(tool_call.title, expected_tools.remove(0));
if tool_call.title == "Echo" {
echo_id = Some(tool_call.tool_call_id);
echo_id = Some(tool_call.id);
}
}
ThreadEvent::ToolCallUpdate(acp_thread::ToolCallUpdate::UpdateFields(
acp::ToolCallUpdate {
tool_call_id,
id,
fields:
acp::ToolCallUpdateFields {
status: Some(acp::ToolCallStatus::Completed),
..
},
..
meta: None,
},
)) if Some(&tool_call_id) == echo_id.as_ref() => {
)) if Some(&id) == echo_id.as_ref() => {
echo_completed = true;
}
_ => {}
@@ -2208,7 +1995,11 @@ async fn test_agent_connection(cx: &mut TestAppContext) {
.update(|cx| {
connection.prompt(
Some(acp_thread::UserMessageId::new()),
acp::PromptRequest::new(session_id.clone(), vec!["ghi".into()]),
acp::PromptRequest {
session_id: session_id.clone(),
prompt: vec!["ghi".into()],
meta: None,
},
cx,
)
})
@@ -2265,50 +2056,68 @@ async fn test_tool_updates_to_completion(cx: &mut TestAppContext) {
let tool_call = expect_tool_call(&mut events).await;
assert_eq!(
tool_call,
acp::ToolCall::new("1", "Thinking")
.kind(acp::ToolKind::Think)
.raw_input(json!({}))
.meta(acp::Meta::from_iter([(
"tool_name".into(),
"thinking".into()
)]))
acp::ToolCall {
id: acp::ToolCallId("1".into()),
title: "Thinking".into(),
kind: acp::ToolKind::Think,
status: acp::ToolCallStatus::Pending,
content: vec![],
locations: vec![],
raw_input: Some(json!({})),
raw_output: None,
meta: Some(json!({ "tool_name": "thinking" })),
}
);
let update = expect_tool_call_update_fields(&mut events).await;
assert_eq!(
update,
acp::ToolCallUpdate::new(
"1",
acp::ToolCallUpdateFields::new()
.title("Thinking")
.kind(acp::ToolKind::Think)
.raw_input(json!({ "content": "Thinking hard!"}))
)
acp::ToolCallUpdate {
id: acp::ToolCallId("1".into()),
fields: acp::ToolCallUpdateFields {
title: Some("Thinking".into()),
kind: Some(acp::ToolKind::Think),
raw_input: Some(json!({ "content": "Thinking hard!" })),
..Default::default()
},
meta: None,
}
);
let update = expect_tool_call_update_fields(&mut events).await;
assert_eq!(
update,
acp::ToolCallUpdate::new(
"1",
acp::ToolCallUpdateFields::new().status(acp::ToolCallStatus::InProgress)
)
acp::ToolCallUpdate {
id: acp::ToolCallId("1".into()),
fields: acp::ToolCallUpdateFields {
status: Some(acp::ToolCallStatus::InProgress),
..Default::default()
},
meta: None,
}
);
let update = expect_tool_call_update_fields(&mut events).await;
assert_eq!(
update,
acp::ToolCallUpdate::new(
"1",
acp::ToolCallUpdateFields::new().content(vec!["Thinking hard!".into()])
)
acp::ToolCallUpdate {
id: acp::ToolCallId("1".into()),
fields: acp::ToolCallUpdateFields {
content: Some(vec!["Thinking hard!".into()]),
..Default::default()
},
meta: None,
}
);
let update = expect_tool_call_update_fields(&mut events).await;
assert_eq!(
update,
acp::ToolCallUpdate::new(
"1",
acp::ToolCallUpdateFields::new()
.status(acp::ToolCallStatus::Completed)
.raw_output("Finished thinking.")
)
acp::ToolCallUpdate {
id: acp::ToolCallId("1".into()),
fields: acp::ToolCallUpdateFields {
status: Some(acp::ToolCallStatus::Completed),
raw_output: Some("Finished thinking.".into()),
..Default::default()
},
meta: None,
}
);
}
@@ -2809,181 +2618,3 @@ fn setup_context_server(
cx.run_until_parked();
mcp_tool_calls_rx
}
#[gpui::test]
async fn test_tokens_before_message(cx: &mut TestAppContext) {
let ThreadTest { model, thread, .. } = setup(cx, TestModel::Fake).await;
let fake_model = model.as_fake();
// First message
let message_1_id = UserMessageId::new();
thread
.update(cx, |thread, cx| {
thread.send(message_1_id.clone(), ["First message"], cx)
})
.unwrap();
cx.run_until_parked();
// Before any response, tokens_before_message should return None for first message
thread.read_with(cx, |thread, _| {
assert_eq!(
thread.tokens_before_message(&message_1_id),
None,
"First message should have no tokens before it"
);
});
// Complete first message with usage
fake_model.send_last_completion_stream_text_chunk("Response 1");
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::UsageUpdate(
language_model::TokenUsage {
input_tokens: 100,
output_tokens: 50,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0,
},
));
fake_model.end_last_completion_stream();
cx.run_until_parked();
// First message still has no tokens before it
thread.read_with(cx, |thread, _| {
assert_eq!(
thread.tokens_before_message(&message_1_id),
None,
"First message should still have no tokens before it after response"
);
});
// Second message
let message_2_id = UserMessageId::new();
thread
.update(cx, |thread, cx| {
thread.send(message_2_id.clone(), ["Second message"], cx)
})
.unwrap();
cx.run_until_parked();
// Second message should have first message's input tokens before it
thread.read_with(cx, |thread, _| {
assert_eq!(
thread.tokens_before_message(&message_2_id),
Some(100),
"Second message should have 100 tokens before it (from first request)"
);
});
// Complete second message
fake_model.send_last_completion_stream_text_chunk("Response 2");
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::UsageUpdate(
language_model::TokenUsage {
input_tokens: 250, // Total for this request (includes previous context)
output_tokens: 75,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0,
},
));
fake_model.end_last_completion_stream();
cx.run_until_parked();
// Third message
let message_3_id = UserMessageId::new();
thread
.update(cx, |thread, cx| {
thread.send(message_3_id.clone(), ["Third message"], cx)
})
.unwrap();
cx.run_until_parked();
// Third message should have second message's input tokens (250) before it
thread.read_with(cx, |thread, _| {
assert_eq!(
thread.tokens_before_message(&message_3_id),
Some(250),
"Third message should have 250 tokens before it (from second request)"
);
// Second message should still have 100
assert_eq!(
thread.tokens_before_message(&message_2_id),
Some(100),
"Second message should still have 100 tokens before it"
);
// First message still has none
assert_eq!(
thread.tokens_before_message(&message_1_id),
None,
"First message should still have no tokens before it"
);
});
}
#[gpui::test]
async fn test_tokens_before_message_after_truncate(cx: &mut TestAppContext) {
let ThreadTest { model, thread, .. } = setup(cx, TestModel::Fake).await;
let fake_model = model.as_fake();
// Set up three messages with responses
let message_1_id = UserMessageId::new();
thread
.update(cx, |thread, cx| {
thread.send(message_1_id.clone(), ["Message 1"], cx)
})
.unwrap();
cx.run_until_parked();
fake_model.send_last_completion_stream_text_chunk("Response 1");
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::UsageUpdate(
language_model::TokenUsage {
input_tokens: 100,
output_tokens: 50,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0,
},
));
fake_model.end_last_completion_stream();
cx.run_until_parked();
let message_2_id = UserMessageId::new();
thread
.update(cx, |thread, cx| {
thread.send(message_2_id.clone(), ["Message 2"], cx)
})
.unwrap();
cx.run_until_parked();
fake_model.send_last_completion_stream_text_chunk("Response 2");
fake_model.send_last_completion_stream_event(LanguageModelCompletionEvent::UsageUpdate(
language_model::TokenUsage {
input_tokens: 250,
output_tokens: 75,
cache_creation_input_tokens: 0,
cache_read_input_tokens: 0,
},
));
fake_model.end_last_completion_stream();
cx.run_until_parked();
// Verify initial state
thread.read_with(cx, |thread, _| {
assert_eq!(thread.tokens_before_message(&message_2_id), Some(100));
});
// Truncate at message 2 (removes message 2 and everything after)
thread
.update(cx, |thread, cx| thread.truncate(message_2_id.clone(), cx))
.unwrap();
cx.run_until_parked();
// After truncation, message_2_id no longer exists, so lookup should return None
thread.read_with(cx, |thread, _| {
assert_eq!(
thread.tokens_before_message(&message_2_id),
None,
"After truncation, message 2 no longer exists"
);
// Message 1 still exists but has no tokens before it
assert_eq!(
thread.tokens_before_message(&message_1_id),
None,
"First message still has no tokens before it"
);
});
}

View File

@@ -2,8 +2,7 @@ use crate::{
ContextServerRegistry, CopyPathTool, CreateDirectoryTool, DbLanguageModel, DbThread,
DeletePathTool, DiagnosticsTool, EditFileTool, FetchTool, FindPathTool, GrepTool,
ListDirectoryTool, MovePathTool, NowTool, OpenTool, ProjectSnapshot, ReadFileTool,
RestoreFileFromDiskTool, SaveFileTool, SystemPromptTemplate, Template, Templates, TerminalTool,
ThinkingTool, WebSearchTool,
SystemPromptTemplate, Template, Templates, TerminalTool, ThinkingTool, WebSearchTool,
};
use acp_thread::{MentionUri, UserMessageId};
use action_log::ActionLog;
@@ -108,13 +107,7 @@ impl Message {
pub fn to_request(&self) -> Vec<LanguageModelRequestMessage> {
match self {
Message::User(message) => {
if message.content.is_empty() {
vec![]
} else {
vec![message.to_request()]
}
}
Message::User(message) => vec![message.to_request()],
Message::Agent(message) => message.to_request(),
Message::Resume => vec![LanguageModelRequestMessage {
role: Role::User,
@@ -537,7 +530,6 @@ pub trait TerminalHandle {
fn id(&self, cx: &AsyncApp) -> Result<acp::TerminalId>;
fn current_output(&self, cx: &AsyncApp) -> Result<acp::TerminalOutputResponse>;
fn wait_for_exit(&self, cx: &AsyncApp) -> Result<Shared<Task<acp::TerminalExitStatus>>>;
fn kill(&self, cx: &AsyncApp) -> Result<()>;
}
pub trait ThreadEnvironment {
@@ -627,9 +619,12 @@ pub struct Thread {
impl Thread {
fn prompt_capabilities(model: Option<&dyn LanguageModel>) -> acp::PromptCapabilities {
let image = model.map_or(true, |model| model.supports_images());
acp::PromptCapabilities::new()
.image(image)
.embedded_context(true)
acp::PromptCapabilities {
meta: None,
image,
audio: false,
embedded_context: true,
}
}
pub fn new(
@@ -645,7 +640,7 @@ impl Thread {
let (prompt_capabilities_tx, prompt_capabilities_rx) =
watch::channel(Self::prompt_capabilities(model.as_deref()));
Self {
id: acp::SessionId::new(uuid::Uuid::new_v4().to_string()),
id: acp::SessionId(uuid::Uuid::new_v4().to_string().into()),
prompt_id: PromptId::new(),
updated_at: Utc::now(),
title: None,
@@ -742,11 +737,17 @@ impl Thread {
let Some(tool) = tool else {
stream
.0
.unbounded_send(Ok(ThreadEvent::ToolCall(
acp::ToolCall::new(tool_use.id.to_string(), tool_use.name.to_string())
.status(acp::ToolCallStatus::Failed)
.raw_input(tool_use.input.clone()),
)))
.unbounded_send(Ok(ThreadEvent::ToolCall(acp::ToolCall {
meta: None,
id: acp::ToolCallId(tool_use.id.to_string().into()),
title: tool_use.name.to_string(),
kind: acp::ToolKind::Other,
status: acp::ToolCallStatus::Failed,
content: Vec::new(),
locations: Vec::new(),
raw_input: Some(tool_use.input.clone()),
raw_output: None,
})))
.ok();
return;
};
@@ -776,8 +777,8 @@ impl Thread {
stream.update_tool_call_fields(
&tool_use.id,
acp::ToolCallUpdateFields::new()
.status(
acp::ToolCallUpdateFields {
status: Some(
tool_result
.as_ref()
.map_or(acp::ToolCallStatus::Failed, |result| {
@@ -787,8 +788,10 @@ impl Thread {
acp::ToolCallStatus::Completed
}
}),
)
.raw_output(output),
),
raw_output: output,
..Default::default()
},
);
}
@@ -1009,8 +1012,6 @@ impl Thread {
self.project.clone(),
self.action_log.clone(),
));
self.add_tool(SaveFileTool::new(self.project.clone()));
self.add_tool(RestoreFileFromDiskTool::new(self.project.clone()));
self.add_tool(TerminalTool::new(self.project.clone(), environment));
self.add_tool(ThinkingTool);
self.add_tool(WebSearchTool);
@@ -1095,28 +1096,6 @@ impl Thread {
})
}
/// Get the total input token count as of the message before the given message.
///
/// Returns `None` if:
/// - `target_id` is the first message (no previous message)
/// - The previous message hasn't received a response yet (no usage data)
/// - `target_id` is not found in the messages
pub fn tokens_before_message(&self, target_id: &UserMessageId) -> Option<u64> {
let mut previous_user_message_id: Option<&UserMessageId> = None;
for message in &self.messages {
if let Message::User(user_msg) = message {
if &user_msg.id == target_id {
let prev_id = previous_user_message_id?;
let usage = self.request_token_usage.get(prev_id)?;
return Some(usage.input_tokens);
}
previous_user_message_id = Some(&user_msg.id);
}
}
None
}
/// Look up the active profile and resolve its preferred model if one is configured.
fn resolve_profile_model(
profile_id: &AgentProfileId,
@@ -1169,6 +1148,11 @@ impl Thread {
where
T: Into<UserMessageContent>,
{
let model = self.model().context("No language model configured")?;
log::info!("Thread::send called with model: {}", model.name().0);
self.advance_prompt_id();
let content = content.into_iter().map(Into::into).collect::<Vec<_>>();
log::debug!("Thread::send content: {:?}", content);
@@ -1176,59 +1160,10 @@ impl Thread {
.push(Message::User(UserMessage { id, content }));
cx.notify();
self.send_existing(cx)
}
pub fn send_existing(
&mut self,
cx: &mut Context<Self>,
) -> Result<mpsc::UnboundedReceiver<Result<ThreadEvent>>> {
let model = self.model().context("No language model configured")?;
log::info!("Thread::send called with model: {}", model.name().0);
self.advance_prompt_id();
log::debug!("Total messages in thread: {}", self.messages.len());
self.run_turn(cx)
}
pub fn push_acp_user_block(
&mut self,
id: UserMessageId,
blocks: impl IntoIterator<Item = acp::ContentBlock>,
path_style: PathStyle,
cx: &mut Context<Self>,
) {
let content = blocks
.into_iter()
.map(|block| UserMessageContent::from_content_block(block, path_style))
.collect::<Vec<_>>();
self.messages
.push(Message::User(UserMessage { id, content }));
cx.notify();
}
pub fn push_acp_agent_block(&mut self, block: acp::ContentBlock, cx: &mut Context<Self>) {
let text = match block {
acp::ContentBlock::Text(text_content) => text_content.text,
acp::ContentBlock::Image(_) => "[image]".to_string(),
acp::ContentBlock::Audio(_) => "[audio]".to_string(),
acp::ContentBlock::ResourceLink(resource_link) => resource_link.uri,
acp::ContentBlock::Resource(resource) => match resource.resource {
acp::EmbeddedResourceResource::TextResourceContents(resource) => resource.uri,
acp::EmbeddedResourceResource::BlobResourceContents(resource) => resource.uri,
_ => "[resource]".to_string(),
},
_ => "[unknown]".to_string(),
};
self.messages.push(Message::Agent(AgentMessage {
content: vec![AgentMessageContent::Text(text)],
..Default::default()
}));
cx.notify();
}
#[cfg(feature = "eval")]
pub fn proceed(
&mut self,
@@ -1339,13 +1274,15 @@ impl Thread {
event_stream.update_tool_call_fields(
&tool_result.tool_use_id,
acp::ToolCallUpdateFields::new()
.status(if tool_result.is_error {
acp::ToolCallUpdateFields {
status: Some(if tool_result.is_error {
acp::ToolCallStatus::Failed
} else {
acp::ToolCallStatus::Completed
})
.raw_output(tool_result.output.clone()),
}),
raw_output: tool_result.output.clone(),
..Default::default()
},
);
this.update(cx, |this, _cx| {
this.pending_message()
@@ -1623,10 +1560,12 @@ impl Thread {
} else {
event_stream.update_tool_call_fields(
&tool_use.id,
acp::ToolCallUpdateFields::new()
.title(title.as_str())
.kind(kind)
.raw_input(tool_use.input.clone()),
acp::ToolCallUpdateFields {
title: Some(title.into()),
kind: Some(kind),
raw_input: Some(tool_use.input.clone()),
..Default::default()
},
);
}
@@ -1648,9 +1587,10 @@ impl Thread {
let fs = self.project.read(cx).fs().clone();
let tool_event_stream =
ToolCallEventStream::new(tool_use.id.clone(), event_stream.clone(), Some(fs));
tool_event_stream.update_fields(
acp::ToolCallUpdateFields::new().status(acp::ToolCallStatus::InProgress),
);
tool_event_stream.update_fields(acp::ToolCallUpdateFields {
status: Some(acp::ToolCallStatus::InProgress),
..Default::default()
});
let supports_images = self.model().is_some_and(|model| model.supports_images());
let tool_result = tool.run(tool_use.input, tool_event_stream, cx);
log::debug!("Running tool {}", tool_use.name);
@@ -1725,10 +1665,6 @@ impl Thread {
self.pending_summary_generation.is_some()
}
pub fn is_generating_title(&self) -> bool {
self.pending_title_generation.is_some()
}
pub fn summary(&mut self, cx: &mut Context<Self>) -> Shared<Task<Option<SharedString>>> {
if let Some(summary) = self.summary.as_ref() {
return Task::ready(Some(summary.clone())).shared();
@@ -1796,7 +1732,7 @@ impl Thread {
task
}
pub fn generate_title(&mut self, cx: &mut Context<Self>) {
fn generate_title(&mut self, cx: &mut Context<Self>) {
let Some(model) = self.summarization_model.clone() else {
return;
};
@@ -2045,12 +1981,6 @@ impl Thread {
self.running_turn.as_ref()?.tools.get(name).cloned()
}
pub fn has_tool(&self, name: &str) -> bool {
self.running_turn
.as_ref()
.is_some_and(|turn| turn.tools.contains_key(name))
}
fn build_request_messages(
&self,
available_tools: Vec<SharedString>,
@@ -2451,13 +2381,19 @@ impl ThreadEventStream {
kind: acp::ToolKind,
input: serde_json::Value,
) -> acp::ToolCall {
acp::ToolCall::new(id.to_string(), title)
.kind(kind)
.raw_input(input)
.meta(acp::Meta::from_iter([(
"tool_name".into(),
tool_name.into(),
)]))
acp::ToolCall {
meta: Some(serde_json::json!({
"tool_name": tool_name
})),
id: acp::ToolCallId(id.to_string().into()),
title,
kind,
status: acp::ToolCallStatus::Pending,
content: vec![],
locations: vec![],
raw_input: Some(input),
raw_output: None,
}
}
fn update_tool_call_fields(
@@ -2467,7 +2403,12 @@ impl ThreadEventStream {
) {
self.0
.unbounded_send(Ok(ThreadEvent::ToolCallUpdate(
acp::ToolCallUpdate::new(tool_use_id.to_string(), fields).into(),
acp::ToolCallUpdate {
meta: None,
id: acp::ToolCallId(tool_use_id.to_string().into()),
fields,
}
.into(),
)))
.ok();
}
@@ -2530,7 +2471,7 @@ impl ToolCallEventStream {
.0
.unbounded_send(Ok(ThreadEvent::ToolCallUpdate(
acp_thread::ToolCallUpdateDiff {
id: acp::ToolCallId::new(self.tool_use_id.to_string()),
id: acp::ToolCallId(self.tool_use_id.to_string().into()),
diff,
}
.into(),
@@ -2548,26 +2489,33 @@ impl ToolCallEventStream {
.0
.unbounded_send(Ok(ThreadEvent::ToolCallAuthorization(
ToolCallAuthorization {
tool_call: acp::ToolCallUpdate::new(
self.tool_use_id.to_string(),
acp::ToolCallUpdateFields::new().title(title.into()),
),
tool_call: acp::ToolCallUpdate {
meta: None,
id: acp::ToolCallId(self.tool_use_id.to_string().into()),
fields: acp::ToolCallUpdateFields {
title: Some(title.into()),
..Default::default()
},
},
options: vec![
acp::PermissionOption::new(
acp::PermissionOptionId::new("always_allow"),
"Always Allow",
acp::PermissionOptionKind::AllowAlways,
),
acp::PermissionOption::new(
acp::PermissionOptionId::new("allow"),
"Allow",
acp::PermissionOptionKind::AllowOnce,
),
acp::PermissionOption::new(
acp::PermissionOptionId::new("deny"),
"Deny",
acp::PermissionOptionKind::RejectOnce,
),
acp::PermissionOption {
id: acp::PermissionOptionId("always_allow".into()),
name: "Always Allow".into(),
kind: acp::PermissionOptionKind::AllowAlways,
meta: None,
},
acp::PermissionOption {
id: acp::PermissionOptionId("allow".into()),
name: "Allow".into(),
kind: acp::PermissionOptionKind::AllowOnce,
meta: None,
},
acp::PermissionOption {
id: acp::PermissionOptionId("deny".into()),
name: "Deny".into(),
kind: acp::PermissionOptionKind::RejectOnce,
meta: None,
},
],
response: response_tx,
},
@@ -2712,15 +2660,7 @@ impl UserMessageContent {
// TODO
Self::Text("[blob]".to_string())
}
other => {
log::warn!("Unexpected content type: {:?}", other);
Self::Text("[unknown]".to_string())
}
},
other => {
log::warn!("Unexpected content type: {:?}", other);
Self::Text("[unknown]".to_string())
}
}
}
}
@@ -2728,15 +2668,32 @@ impl UserMessageContent {
impl From<UserMessageContent> for acp::ContentBlock {
fn from(content: UserMessageContent) -> Self {
match content {
UserMessageContent::Text(text) => text.into(),
UserMessageContent::Image(image) => {
acp::ContentBlock::Image(acp::ImageContent::new(image.source, "image/png"))
UserMessageContent::Text(text) => acp::ContentBlock::Text(acp::TextContent {
text,
annotations: None,
meta: None,
}),
UserMessageContent::Image(image) => acp::ContentBlock::Image(acp::ImageContent {
data: image.source.to_string(),
mime_type: "image/png".to_string(),
meta: None,
annotations: None,
uri: None,
}),
UserMessageContent::Mention { uri, content } => {
acp::ContentBlock::Resource(acp::EmbeddedResource {
meta: None,
resource: acp::EmbeddedResourceResource::TextResourceContents(
acp::TextResourceContents {
meta: None,
mime_type: None,
text: content,
uri: uri.to_uri().to_string(),
},
),
annotations: None,
})
}
UserMessageContent::Mention { uri, content } => acp::ContentBlock::Resource(
acp::EmbeddedResource::new(acp::EmbeddedResourceResource::TextResourceContents(
acp::TextResourceContents::new(content, uri.to_uri().to_string()),
)),
),
}
}
}
@@ -2744,6 +2701,7 @@ impl From<UserMessageContent> for acp::ContentBlock {
fn convert_image(image_content: acp::ImageContent) -> LanguageModelImage {
LanguageModelImage {
source: image_content.data.into(),
size: None,
// TODO: make this optional?
size: gpui::Size::new(0.into(), 0.into()),
}
}
View File
@@ -4,6 +4,7 @@ mod create_directory_tool;
mod delete_path_tool;
mod diagnostics_tool;
mod edit_file_tool;
mod failure_message_tool;
mod fetch_tool;
mod find_path_tool;
mod grep_tool;
@@ -12,9 +13,7 @@ mod move_path_tool;
mod now_tool;
mod open_tool;
mod read_file_tool;
mod restore_file_from_disk_tool;
mod save_file_tool;
mod rewrite_section_tool;
mod terminal_tool;
mod thinking_tool;
mod web_search_tool;
@@ -28,6 +27,7 @@ pub use create_directory_tool::*;
pub use delete_path_tool::*;
pub use diagnostics_tool::*;
pub use edit_file_tool::*;
pub use failure_message_tool::*;
pub use fetch_tool::*;
pub use find_path_tool::*;
pub use grep_tool::*;
@@ -36,9 +36,7 @@ pub use move_path_tool::*;
pub use now_tool::*;
pub use open_tool::*;
pub use read_file_tool::*;
pub use restore_file_from_disk_tool::*;
pub use save_file_tool::*;
pub use rewrite_section_tool::*;
pub use terminal_tool::*;
pub use thinking_tool::*;
pub use web_search_tool::*;
@@ -94,8 +92,6 @@ tools! {
NowTool,
OpenTool,
ReadFileTool,
RestoreFileFromDiskTool,
SaveFileTool,
TerminalTool,
ThinkingTool,
WebSearchTool,
View File
@@ -2,24 +2,12 @@ use crate::{AgentToolOutput, AnyAgentTool, ToolCallEventStream};
use agent_client_protocol::ToolKind;
use anyhow::{Result, anyhow, bail};
use collections::{BTreeMap, HashMap};
use context_server::{ContextServerId, client::NotificationSubscription};
use gpui::{App, AppContext, AsyncApp, Context, Entity, EventEmitter, SharedString, Task};
use context_server::ContextServerId;
use gpui::{App, Context, Entity, SharedString, Task};
use project::context_server_store::{ContextServerStatus, ContextServerStore};
use std::sync::Arc;
use util::ResultExt;
pub struct ContextServerPrompt {
pub server_id: ContextServerId,
pub prompt: context_server::types::Prompt,
}
pub enum ContextServerRegistryEvent {
ToolsChanged,
PromptsChanged,
}
impl EventEmitter<ContextServerRegistryEvent> for ContextServerRegistry {}
pub struct ContextServerRegistry {
server_store: Entity<ContextServerStore>,
registered_servers: HashMap<ContextServerId, RegisteredContextServer>,
@@ -28,10 +16,7 @@ pub struct ContextServerRegistry {
struct RegisteredContextServer {
tools: BTreeMap<SharedString, Arc<dyn AnyAgentTool>>,
prompts: BTreeMap<SharedString, ContextServerPrompt>,
load_tools: Task<Result<()>>,
load_prompts: Task<Result<()>>,
_tools_updated_subscription: Option<NotificationSubscription>,
}
impl ContextServerRegistry {
@@ -43,7 +28,6 @@ impl ContextServerRegistry {
};
for server in server_store.read(cx).running_servers() {
this.reload_tools_for_server(server.id(), cx);
this.reload_prompts_for_server(server.id(), cx);
}
this
}
@@ -72,88 +56,6 @@ impl ContextServerRegistry {
.map(|(id, server)| (id, &server.tools))
}
pub fn prompts(&self) -> impl Iterator<Item = &ContextServerPrompt> {
self.registered_servers
.values()
.flat_map(|server| server.prompts.values())
}
pub fn find_prompt(
&self,
server_id: Option<&ContextServerId>,
name: &str,
) -> Option<&ContextServerPrompt> {
if let Some(server_id) = server_id {
self.registered_servers
.get(server_id)
.and_then(|server| server.prompts.get(name))
} else {
self.registered_servers
.values()
.find_map(|server| server.prompts.get(name))
}
}
pub fn server_store(&self) -> &Entity<ContextServerStore> {
&self.server_store
}
fn get_or_register_server(
&mut self,
server_id: &ContextServerId,
cx: &mut Context<Self>,
) -> &mut RegisteredContextServer {
self.registered_servers
.entry(server_id.clone())
.or_insert_with(|| Self::init_registered_server(server_id, &self.server_store, cx))
}
fn init_registered_server(
server_id: &ContextServerId,
server_store: &Entity<ContextServerStore>,
cx: &mut Context<Self>,
) -> RegisteredContextServer {
let tools_updated_subscription = server_store
.read(cx)
.get_running_server(server_id)
.and_then(|server| {
let client = server.client()?;
if !client.capable(context_server::protocol::ServerCapability::Tools) {
return None;
}
let server_id = server.id();
let this = cx.entity().downgrade();
Some(client.on_notification(
"notifications/tools/list_changed",
Box::new(move |_params, cx: AsyncApp| {
let server_id = server_id.clone();
let this = this.clone();
cx.spawn(async move |cx| {
this.update(cx, |this, cx| {
log::info!(
"Received tools/list_changed notification for server {}",
server_id
);
this.reload_tools_for_server(server_id, cx);
})
})
.detach();
}),
))
});
RegisteredContextServer {
tools: BTreeMap::default(),
prompts: BTreeMap::default(),
load_tools: Task::ready(Ok(())),
load_prompts: Task::ready(Ok(())),
_tools_updated_subscription: tools_updated_subscription,
}
}
fn reload_tools_for_server(&mut self, server_id: ContextServerId, cx: &mut Context<Self>) {
let Some(server) = self.server_store.read(cx).get_running_server(&server_id) else {
return;
@@ -161,12 +63,17 @@ impl ContextServerRegistry {
let Some(client) = server.client() else {
return;
};
if !client.capable(context_server::protocol::ServerCapability::Tools) {
return;
}
let registered_server = self.get_or_register_server(&server_id, cx);
let registered_server =
self.registered_servers
.entry(server_id.clone())
.or_insert(RegisteredContextServer {
tools: BTreeMap::default(),
load_tools: Task::ready(Ok(())),
});
registered_server.load_tools = cx.spawn(async move |this, cx| {
let response = client
.request::<context_server::types::requests::ListTools>(())
@@ -187,49 +94,6 @@ impl ContextServerRegistry {
));
registered_server.tools.insert(tool.name(), tool);
}
cx.emit(ContextServerRegistryEvent::ToolsChanged);
cx.notify();
}
})
});
}
fn reload_prompts_for_server(&mut self, server_id: ContextServerId, cx: &mut Context<Self>) {
let Some(server) = self.server_store.read(cx).get_running_server(&server_id) else {
return;
};
let Some(client) = server.client() else {
return;
};
if !client.capable(context_server::protocol::ServerCapability::Prompts) {
return;
}
let registered_server = self.get_or_register_server(&server_id, cx);
registered_server.load_prompts = cx.spawn(async move |this, cx| {
let response = client
.request::<context_server::types::requests::PromptsList>(())
.await;
this.update(cx, |this, cx| {
let Some(registered_server) = this.registered_servers.get_mut(&server_id) else {
return;
};
registered_server.prompts.clear();
if let Some(response) = response.log_err() {
for prompt in response.prompts {
let name: SharedString = prompt.name.clone().into();
registered_server.prompts.insert(
name,
ContextServerPrompt {
server_id: server_id.clone(),
prompt,
},
);
}
cx.emit(ContextServerRegistryEvent::PromptsChanged);
cx.notify();
}
})
@@ -248,17 +112,9 @@ impl ContextServerRegistry {
ContextServerStatus::Starting => {}
ContextServerStatus::Running => {
self.reload_tools_for_server(server_id.clone(), cx);
self.reload_prompts_for_server(server_id.clone(), cx);
}
ContextServerStatus::Stopped | ContextServerStatus::Error(_) => {
if let Some(registered_server) = self.registered_servers.remove(server_id) {
if !registered_server.tools.is_empty() {
cx.emit(ContextServerRegistryEvent::ToolsChanged);
}
if !registered_server.prompts.is_empty() {
cx.emit(ContextServerRegistryEvent::PromptsChanged);
}
}
self.registered_servers.remove(server_id);
cx.notify();
}
}
@@ -395,39 +251,3 @@ impl AnyAgentTool for ContextServerTool {
Ok(())
}
}
pub fn get_prompt(
server_store: &Entity<ContextServerStore>,
server_id: &ContextServerId,
prompt_name: &str,
arguments: HashMap<String, String>,
cx: &mut AsyncApp,
) -> Task<Result<context_server::types::PromptsGetResponse>> {
let server = match cx.update(|cx| server_store.read(cx).get_running_server(server_id)) {
Ok(server) => server,
Err(error) => return Task::ready(Err(error)),
};
let Some(server) = server else {
return Task::ready(Err(anyhow::anyhow!("Context server not found")));
};
let Some(protocol) = server.client() else {
return Task::ready(Err(anyhow::anyhow!("Context server not initialized")));
};
let prompt_name = prompt_name.to_string();
cx.background_spawn(async move {
let response = protocol
.request::<context_server::types::requests::PromptsGet>(
context_server::types::PromptsGetParams {
name: prompt_name,
arguments: (!arguments.is_empty()).then(|| arguments),
meta: None,
},
)
.await?;
Ok(response)
})
}
View File
@@ -273,9 +273,14 @@ impl AgentTool for EditFileTool {
};
let abs_path = project.read(cx).absolute_path(&project_path, cx);
if let Some(abs_path) = abs_path.clone() {
event_stream.update_fields(
ToolCallUpdateFields::new().locations(vec![acp::ToolCallLocation::new(abs_path)]),
);
event_stream.update_fields(ToolCallUpdateFields {
locations: Some(vec![acp::ToolCallLocation {
path: abs_path,
line: None,
meta: None,
}]),
..Default::default()
});
}
let authorize = self.authorize(&input, &event_stream, cx);
@@ -306,39 +311,20 @@ impl AgentTool for EditFileTool {
// Check if the file has been modified since the agent last read it
if let Some(abs_path) = abs_path.as_ref() {
let (last_read_mtime, current_mtime, is_dirty, has_save_tool, has_restore_tool) = self.thread.update(cx, |thread, cx| {
let (last_read_mtime, current_mtime, is_dirty) = self.thread.update(cx, |thread, cx| {
let last_read = thread.file_read_times.get(abs_path).copied();
let current = buffer.read(cx).file().and_then(|file| file.disk_state().mtime());
let dirty = buffer.read(cx).is_dirty();
let has_save = thread.has_tool("save_file");
let has_restore = thread.has_tool("restore_file_from_disk");
(last_read, current, dirty, has_save, has_restore)
(last_read, current, dirty)
})?;
// Check for unsaved changes first - these indicate modifications we don't know about
if is_dirty {
let message = match (has_save_tool, has_restore_tool) {
(true, true) => {
"This file has unsaved changes. Ask the user whether they want to keep or discard those changes. \
If they want to keep them, ask for confirmation then use the save_file tool to save the file, then retry this edit. \
If they want to discard them, ask for confirmation then use the restore_file_from_disk tool to restore the on-disk contents, then retry this edit."
}
(true, false) => {
"This file has unsaved changes. Ask the user whether they want to keep or discard those changes. \
If they want to keep them, ask for confirmation then use the save_file tool to save the file, then retry this edit. \
If they want to discard them, ask the user to manually revert the file, then inform you when it's ok to proceed."
}
(false, true) => {
"This file has unsaved changes. Ask the user whether they want to keep or discard those changes. \
If they want to keep them, ask the user to manually save the file, then inform you when it's ok to proceed. \
If they want to discard them, ask for confirmation then use the restore_file_from_disk tool to restore the on-disk contents, then retry this edit."
}
(false, false) => {
"This file has unsaved changes. Ask the user whether they want to keep or discard those changes, \
then ask them to save or revert the file manually and inform you when it's ok to proceed."
}
};
anyhow::bail!("{}", message);
anyhow::bail!(
"This file cannot be written to because it has unsaved changes. \
Please end the current conversation immediately by telling the user you want to write to this file (mention its path explicitly) but you can't write to it because it has unsaved changes. \
Ask the user to save that buffer's changes and to inform you when it's ok to proceed."
);
}
// Check if the file was modified on disk since we last read it
@@ -403,7 +389,10 @@ impl AgentTool for EditFileTool {
range.start.to_point(&buffer.snapshot()).row
}).ok();
if let Some(abs_path) = abs_path.clone() {
event_stream.update_fields(ToolCallUpdateFields::new().locations(vec![ToolCallLocation::new(abs_path).line(line)]));
event_stream.update_fields(ToolCallUpdateFields {
locations: Some(vec![ToolCallLocation { path: abs_path, line, meta: None }]),
..Default::default()
});
}
emitted_location = true;
}
@@ -2221,21 +2210,9 @@ mod tests {
assert!(result.is_err(), "Edit should fail when buffer is dirty");
let error_msg = result.unwrap_err().to_string();
assert!(
error_msg.contains("This file has unsaved changes."),
error_msg.contains("cannot be written to because it has unsaved changes"),
"Error should mention unsaved changes, got: {}",
error_msg
);
assert!(
error_msg.contains("keep or discard"),
"Error should ask whether to keep or discard changes, got: {}",
error_msg
);
// Since save_file and restore_file_from_disk tools aren't added to the thread,
// the error message should ask the user to manually save or revert
assert!(
error_msg.contains("save or revert the file manually"),
"Error should ask user to manually save or revert when tools aren't available, got: {}",
error_msg
);
}
}
View File
@@ -0,0 +1,50 @@
//! This tool is intended for use with the inline assistant, not the agent panel.
use std::sync::Arc;
use agent_client_protocol as acp;
use anyhow::Result;
use gpui::{App, SharedString, Task};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use crate::{AgentTool, ToolCallEventStream};
/// Use this tool to provide a message to the user when you're unable to complete a task.
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct FailureMessageInput {
/// A brief message to the user explaining why you're unable to fulfill the request.
pub message: String,
}
pub struct FailureMessageTool;
impl AgentTool for FailureMessageTool {
type Input = FailureMessageInput;
type Output = String;
fn name() -> &'static str {
"failure_message"
}
fn kind() -> acp::ToolKind {
acp::ToolKind::Think
}
fn initial_title(
&self,
_input: Result<Self::Input, serde_json::Value>,
_cx: &mut App,
) -> SharedString {
"".into()
}
fn run(
self: Arc<Self>,
_input: Self::Input,
_event_stream: ToolCallEventStream,
_cx: &mut App,
) -> Task<Result<String>> {
unimplemented!()
}
}
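Aside: a minimal, self-contained sketch of the input shape the new failure_message tool accepts. The struct is mirrored locally for illustration and only the serde/serde_json crates are assumed; nothing below is part of this diff.
use serde::{Deserialize, Serialize};
// Local stand-in mirroring FailureMessageInput above (illustration only).
#[derive(Debug, Serialize, Deserialize)]
struct FailureMessageInput {
    message: String,
}
fn main() {
    // The model invokes the tool with a JSON object shaped like this.
    let raw = r#"{ "message": "I can't complete this edit: the requested symbol does not exist." }"#;
    let input: FailureMessageInput =
        serde_json::from_str(raw).expect("input matches the schema");
    println!("failure message: {}", input.message);
}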
View File
@@ -118,29 +118,33 @@ impl AgentTool for FindPathTool {
let paginated_matches: &[PathBuf] = &matches[cmp::min(input.offset, matches.len())
..cmp::min(input.offset + RESULTS_PER_PAGE, matches.len())];
event_stream.update_fields(
acp::ToolCallUpdateFields::new()
.title(if paginated_matches.is_empty() {
"No matches".into()
} else if paginated_matches.len() == 1 {
"1 match".into()
} else {
format!("{} matches", paginated_matches.len())
})
.content(
paginated_matches
.iter()
.map(|path| {
acp::ToolCallContent::Content(acp::Content::new(
acp::ContentBlock::ResourceLink(acp::ResourceLink::new(
path.to_string_lossy(),
format!("file://{}", path.display()),
)),
))
})
.collect::<Vec<_>>(),
),
);
event_stream.update_fields(acp::ToolCallUpdateFields {
title: Some(if paginated_matches.is_empty() {
"No matches".into()
} else if paginated_matches.len() == 1 {
"1 match".into()
} else {
format!("{} matches", paginated_matches.len())
}),
content: Some(
paginated_matches
.iter()
.map(|path| acp::ToolCallContent::Content {
content: acp::ContentBlock::ResourceLink(acp::ResourceLink {
uri: format!("file://{}", path.display()),
name: path.to_string_lossy().into(),
annotations: None,
description: None,
mime_type: None,
size: None,
title: None,
meta: None,
}),
})
.collect(),
),
..Default::default()
});
Ok(FindPathToolOutput {
offset: input.offset,
View File
@@ -5,7 +5,7 @@ use futures::StreamExt;
use gpui::{App, Entity, SharedString, Task};
use language::{OffsetRangeExt, ParseStatus, Point};
use project::{
Project, SearchResults, WorktreeSettings,
Project, WorktreeSettings,
search::{SearchQuery, SearchResult},
};
use schemars::JsonSchema;
@@ -176,17 +176,14 @@ impl AgentTool for GrepTool {
let project = self.project.downgrade();
cx.spawn(async move |cx| {
// Keep the search alive for the duration of result iteration. Dropping this task is the
// cancellation mechanism; we intentionally do not detach it.
let SearchResults {rx, _task_handle} = results;
futures::pin_mut!(rx);
futures::pin_mut!(results);
let mut output = String::new();
let mut skips_remaining = input.offset;
let mut matches_found = 0;
let mut has_more_matches = false;
'outer: while let Some(SearchResult::Buffer { buffer, ranges }) = rx.next().await {
'outer: while let Some(SearchResult::Buffer { buffer, ranges }) = results.next().await {
if ranges.is_empty() {
continue;
}
@@ -325,6 +322,7 @@ mod tests {
use super::*;
use gpui::{TestAppContext, UpdateGlobal};
use language::{Language, LanguageConfig, LanguageMatcher};
use project::{FakeFs, Project};
use serde_json::json;
use settings::SettingsStore;
@@ -566,7 +564,7 @@ mod tests {
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
project.update(cx, |project, _cx| {
project.languages().add(language::rust_lang())
project.languages().add(rust_lang().into())
});
project
@@ -795,6 +793,22 @@ mod tests {
});
}
fn rust_lang() -> Language {
Language::new(
LanguageConfig {
name: "Rust".into(),
matcher: LanguageMatcher {
path_suffixes: vec!["rs".to_string()],
..Default::default()
},
..Default::default()
},
Some(tree_sitter_rust::LANGUAGE.into()),
)
.with_outline_query(include_str!("../../../languages/src/rust/outline.scm"))
.unwrap()
}
#[gpui::test]
async fn test_grep_security_boundaries(cx: &mut TestAppContext) {
init_test(cx);
View File
@@ -153,10 +153,14 @@ impl AgentTool for ReadFileTool {
let file_path = input.path.clone();
event_stream.update_fields(ToolCallUpdateFields::new().locations(vec![
acp::ToolCallLocation::new(&abs_path)
.line(input.start_line.map(|line| line.saturating_sub(1))),
]));
event_stream.update_fields(ToolCallUpdateFields {
locations: Some(vec![acp::ToolCallLocation {
path: abs_path.clone(),
line: input.start_line.map(|line| line.saturating_sub(1)),
meta: None,
}]),
..Default::default()
});
if image_store::is_image_file(&self.project, &project_path, cx) {
return cx.spawn(async move |cx| {
@@ -285,9 +289,12 @@ impl AgentTool for ReadFileTool {
text,
}
.to_string();
event_stream.update_fields(ToolCallUpdateFields::new().content(vec![
acp::ToolCallContent::Content(acp::Content::new(markdown)),
]));
event_stream.update_fields(ToolCallUpdateFields {
content: Some(vec![acp::ToolCallContent::Content {
content: markdown.into(),
}]),
..Default::default()
})
}
})?;
@@ -301,6 +308,7 @@ mod test {
use super::*;
use crate::{ContextServerRegistry, Templates, Thread};
use gpui::{AppContext, TestAppContext, UpdateGlobal as _};
use language::{Language, LanguageConfig, LanguageMatcher, tree_sitter_rust};
use language_model::fake_provider::FakeLanguageModel;
use project::{FakeFs, Project};
use prompt_store::ProjectContext;
@@ -404,7 +412,7 @@ mod test {
.await;
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let language_registry = project.read_with(cx, |project, _| project.languages().clone());
language_registry.add(language::rust_lang());
language_registry.add(Arc::new(rust_lang()));
let action_log = cx.new(|_| ActionLog::new(project.clone()));
let context_server_registry =
cx.new(|cx| ContextServerRegistry::new(project.read(cx).context_server_store(), cx));
@@ -594,6 +602,49 @@ mod test {
});
}
fn rust_lang() -> Language {
Language::new(
LanguageConfig {
name: "Rust".into(),
matcher: LanguageMatcher {
path_suffixes: vec!["rs".to_string()],
..Default::default()
},
..Default::default()
},
Some(tree_sitter_rust::LANGUAGE.into()),
)
.with_outline_query(
r#"
(line_comment) @annotation
(struct_item
"struct" @context
name: (_) @name) @item
(enum_item
"enum" @context
name: (_) @name) @item
(enum_variant
name: (_) @name) @item
(field_declaration
name: (_) @name) @item
(impl_item
"impl" @context
trait: (_)? @name
"for"? @context
type: (_) @name
body: (_ "{" (_)* "}")) @item
(function_item
"fn" @context
name: (_) @name) @item
(mod_item
"mod" @context
name: (_) @name) @item
"#,
)
.unwrap()
}
#[gpui::test]
async fn test_read_file_security(cx: &mut TestAppContext) {
init_test(cx);
View File
@@ -1,352 +0,0 @@
use agent_client_protocol as acp;
use anyhow::Result;
use collections::FxHashSet;
use gpui::{App, Entity, SharedString, Task};
use language::Buffer;
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
use std::sync::Arc;
use crate::{AgentTool, ToolCallEventStream};
/// Discards unsaved changes in open buffers by reloading file contents from disk.
///
/// Use this tool when:
/// - You attempted to edit files but they have unsaved changes the user does not want to keep.
/// - You want to reset files to the on-disk state before retrying an edit.
///
/// Only use this tool after asking the user for permission, because it will discard unsaved changes.
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct RestoreFileFromDiskToolInput {
/// The paths of the files to restore from disk.
pub paths: Vec<PathBuf>,
}
pub struct RestoreFileFromDiskTool {
project: Entity<Project>,
}
impl RestoreFileFromDiskTool {
pub fn new(project: Entity<Project>) -> Self {
Self { project }
}
}
impl AgentTool for RestoreFileFromDiskTool {
type Input = RestoreFileFromDiskToolInput;
type Output = String;
fn name() -> &'static str {
"restore_file_from_disk"
}
fn kind() -> acp::ToolKind {
acp::ToolKind::Other
}
fn initial_title(
&self,
input: Result<Self::Input, serde_json::Value>,
_cx: &mut App,
) -> SharedString {
match input {
Ok(input) if input.paths.len() == 1 => "Restore file from disk".into(),
Ok(input) => format!("Restore {} files from disk", input.paths.len()).into(),
Err(_) => "Restore files from disk".into(),
}
}
fn run(
self: Arc<Self>,
input: Self::Input,
_event_stream: ToolCallEventStream,
cx: &mut App,
) -> Task<Result<String>> {
let project = self.project.clone();
let input_paths = input.paths;
cx.spawn(async move |cx| {
let mut buffers_to_reload: FxHashSet<Entity<Buffer>> = FxHashSet::default();
let mut restored_paths: Vec<PathBuf> = Vec::new();
let mut clean_paths: Vec<PathBuf> = Vec::new();
let mut not_found_paths: Vec<PathBuf> = Vec::new();
let mut open_errors: Vec<(PathBuf, String)> = Vec::new();
let mut dirty_check_errors: Vec<(PathBuf, String)> = Vec::new();
let mut reload_errors: Vec<String> = Vec::new();
for path in input_paths {
let project_path =
project.read_with(cx, |project, cx| project.find_project_path(&path, cx));
let project_path = match project_path {
Ok(Some(project_path)) => project_path,
Ok(None) => {
not_found_paths.push(path);
continue;
}
Err(error) => {
open_errors.push((path, error.to_string()));
continue;
}
};
let open_buffer_task =
project.update(cx, |project, cx| project.open_buffer(project_path, cx));
let buffer = match open_buffer_task {
Ok(task) => match task.await {
Ok(buffer) => buffer,
Err(error) => {
open_errors.push((path, error.to_string()));
continue;
}
},
Err(error) => {
open_errors.push((path, error.to_string()));
continue;
}
};
let is_dirty = match buffer.read_with(cx, |buffer, _| buffer.is_dirty()) {
Ok(is_dirty) => is_dirty,
Err(error) => {
dirty_check_errors.push((path, error.to_string()));
continue;
}
};
if is_dirty {
buffers_to_reload.insert(buffer);
restored_paths.push(path);
} else {
clean_paths.push(path);
}
}
if !buffers_to_reload.is_empty() {
let reload_task = project.update(cx, |project, cx| {
project.reload_buffers(buffers_to_reload, true, cx)
});
match reload_task {
Ok(task) => {
if let Err(error) = task.await {
reload_errors.push(error.to_string());
}
}
Err(error) => {
reload_errors.push(error.to_string());
}
}
}
let mut lines: Vec<String> = Vec::new();
if !restored_paths.is_empty() {
lines.push(format!("Restored {} file(s).", restored_paths.len()));
}
if !clean_paths.is_empty() {
lines.push(format!("{} clean.", clean_paths.len()));
}
if !not_found_paths.is_empty() {
lines.push(format!("Not found ({}):", not_found_paths.len()));
for path in &not_found_paths {
lines.push(format!("- {}", path.display()));
}
}
if !open_errors.is_empty() {
lines.push(format!("Open failed ({}):", open_errors.len()));
for (path, error) in &open_errors {
lines.push(format!("- {}: {}", path.display(), error));
}
}
if !dirty_check_errors.is_empty() {
lines.push(format!(
"Dirty check failed ({}):",
dirty_check_errors.len()
));
for (path, error) in &dirty_check_errors {
lines.push(format!("- {}: {}", path.display(), error));
}
}
if !reload_errors.is_empty() {
lines.push(format!("Reload failed ({}):", reload_errors.len()));
for error in &reload_errors {
lines.push(format!("- {}", error));
}
}
if lines.is_empty() {
Ok("No paths provided.".to_string())
} else {
Ok(lines.join("\n"))
}
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use fs::Fs;
use gpui::TestAppContext;
use language::LineEnding;
use project::FakeFs;
use serde_json::json;
use settings::SettingsStore;
use util::path;
fn init_test(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
});
}
#[gpui::test]
async fn test_restore_file_from_disk_output_and_effects(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(
"/root",
json!({
"dirty.txt": "on disk: dirty\n",
"clean.txt": "on disk: clean\n",
}),
)
.await;
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let tool = Arc::new(RestoreFileFromDiskTool::new(project.clone()));
// Make dirty.txt dirty in-memory by editing the buffer with different content, without saving to disk.
let dirty_project_path = project.read_with(cx, |project, cx| {
project
.find_project_path("root/dirty.txt", cx)
.expect("dirty.txt should exist in project")
});
let dirty_buffer = project
.update(cx, |project, cx| {
project.open_buffer(dirty_project_path, cx)
})
.await
.unwrap();
dirty_buffer.update(cx, |buffer, cx| {
buffer.edit([(0..buffer.len(), "in memory: dirty\n")], None, cx);
});
assert!(
dirty_buffer.read_with(cx, |buffer, _| buffer.is_dirty()),
"dirty.txt buffer should be dirty before restore"
);
// Ensure clean.txt is opened but remains clean.
let clean_project_path = project.read_with(cx, |project, cx| {
project
.find_project_path("root/clean.txt", cx)
.expect("clean.txt should exist in project")
});
let clean_buffer = project
.update(cx, |project, cx| {
project.open_buffer(clean_project_path, cx)
})
.await
.unwrap();
assert!(
!clean_buffer.read_with(cx, |buffer, _| buffer.is_dirty()),
"clean.txt buffer should start clean"
);
let output = cx
.update(|cx| {
tool.clone().run(
RestoreFileFromDiskToolInput {
paths: vec![
PathBuf::from("root/dirty.txt"),
PathBuf::from("root/clean.txt"),
],
},
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
// Output should mention restored + clean.
assert!(
output.contains("Restored 1 file(s)."),
"expected restored count line, got:\n{output}"
);
assert!(
output.contains("1 clean."),
"expected clean count line, got:\n{output}"
);
// Effect: dirty buffer should be restored back to disk content and become clean.
let dirty_text = dirty_buffer.read_with(cx, |buffer, _| buffer.text());
assert_eq!(
dirty_text, "on disk: dirty\n",
"dirty.txt buffer should be restored to disk contents"
);
assert!(
!dirty_buffer.read_with(cx, |buffer, _| buffer.is_dirty()),
"dirty.txt buffer should not be dirty after restore"
);
// Disk contents should be unchanged (restore-from-disk should not write).
let disk_dirty = fs.load(path!("/root/dirty.txt").as_ref()).await.unwrap();
assert_eq!(disk_dirty, "on disk: dirty\n");
// Sanity: clean buffer should remain clean and unchanged.
let clean_text = clean_buffer.read_with(cx, |buffer, _| buffer.text());
assert_eq!(clean_text, "on disk: clean\n");
assert!(
!clean_buffer.read_with(cx, |buffer, _| buffer.is_dirty()),
"clean.txt buffer should remain clean"
);
// Test empty paths case.
let output = cx
.update(|cx| {
tool.clone().run(
RestoreFileFromDiskToolInput { paths: vec![] },
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
assert_eq!(output, "No paths provided.");
// Test not-found path case (path outside the project root).
let output = cx
.update(|cx| {
tool.clone().run(
RestoreFileFromDiskToolInput {
paths: vec![PathBuf::from("nonexistent/path.txt")],
},
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
assert!(
output.contains("Not found (1):"),
"expected not-found header line, got:\n{output}"
);
assert!(
output.contains("- nonexistent/path.txt"),
"expected not-found path bullet, got:\n{output}"
);
let _ = LineEnding::Unix; // keep import used if the buffer edit API changes
}
}
View File
@@ -0,0 +1,55 @@
//! This tool is intended for use with the inline assistant, not the agent panel.
use std::sync::Arc;
use agent_client_protocol as acp;
use anyhow::Result;
use gpui::{App, SharedString, Task};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use crate::{AgentTool, ToolCallEventStream};
/// Replaces text in <rewrite_this></rewrite_this> tags with your replacement_text.
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct RewriteSectionInput {
/// A brief description of the edit you have made.
///
/// This is optional - if the edit is simple or obvious, you should leave it empty.
pub description: String,
/// The text to replace the section with.
pub replacement_text: String,
}
pub struct RewriteSectionTool;
impl AgentTool for RewriteSectionTool {
type Input = RewriteSectionInput;
type Output = String;
fn name() -> &'static str {
"rewrite_section"
}
fn kind() -> acp::ToolKind {
acp::ToolKind::Edit
}
fn initial_title(
&self,
_input: Result<Self::Input, serde_json::Value>,
_cx: &mut App,
) -> SharedString {
"".into()
}
fn run(
self: Arc<Self>,
_input: Self::Input,
_event_stream: ToolCallEventStream,
_cx: &mut App,
) -> Task<Result<String>> {
unimplemented!()
}
}
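Aside: a rough sketch of how an inline assistant could apply this tool's input by splicing replacement_text over the tagged section. The input struct is mirrored locally and apply_rewrite is a hypothetical helper written with plain std string handling; none of it comes from this diff.
// Local stand-in mirroring RewriteSectionInput above (illustration only).
struct RewriteSectionInput {
    description: String,
    replacement_text: String,
}
// Hypothetical helper: replace the text between the rewrite tags, keeping the tags.
fn apply_rewrite(buffer: &str, input: &RewriteSectionInput) -> Option<String> {
    let start_tag = "<rewrite_this>";
    let end_tag = "</rewrite_this>";
    let start = buffer.find(start_tag)? + start_tag.len();
    let end = buffer[start..].find(end_tag)? + start;
    let mut out = String::with_capacity(buffer.len());
    out.push_str(&buffer[..start]);
    out.push_str(&input.replacement_text);
    out.push_str(&buffer[end..]);
    Some(out)
}
fn main() {
    let buffer = "fn main() {\n<rewrite_this>    println!(\"old\");\n</rewrite_this>}\n";
    let input = RewriteSectionInput {
        description: String::new(),
        replacement_text: "    println!(\"new\");\n".to_string(),
    };
    if !input.description.is_empty() {
        println!("description: {}", input.description);
    }
    println!("{}", apply_rewrite(buffer, &input).expect("tags present"));
}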
View File
@@ -1,351 +0,0 @@
use agent_client_protocol as acp;
use anyhow::Result;
use collections::FxHashSet;
use gpui::{App, Entity, SharedString, Task};
use language::Buffer;
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
use std::sync::Arc;
use crate::{AgentTool, ToolCallEventStream};
/// Saves files that have unsaved changes.
///
/// Use this tool when you need to edit files but they have unsaved changes that must be saved first.
/// Only use this tool after asking the user for permission to save their unsaved changes.
#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct SaveFileToolInput {
/// The paths of the files to save.
pub paths: Vec<PathBuf>,
}
pub struct SaveFileTool {
project: Entity<Project>,
}
impl SaveFileTool {
pub fn new(project: Entity<Project>) -> Self {
Self { project }
}
}
impl AgentTool for SaveFileTool {
type Input = SaveFileToolInput;
type Output = String;
fn name() -> &'static str {
"save_file"
}
fn kind() -> acp::ToolKind {
acp::ToolKind::Other
}
fn initial_title(
&self,
input: Result<Self::Input, serde_json::Value>,
_cx: &mut App,
) -> SharedString {
match input {
Ok(input) if input.paths.len() == 1 => "Save file".into(),
Ok(input) => format!("Save {} files", input.paths.len()).into(),
Err(_) => "Save files".into(),
}
}
fn run(
self: Arc<Self>,
input: Self::Input,
_event_stream: ToolCallEventStream,
cx: &mut App,
) -> Task<Result<String>> {
let project = self.project.clone();
let input_paths = input.paths;
cx.spawn(async move |cx| {
let mut buffers_to_save: FxHashSet<Entity<Buffer>> = FxHashSet::default();
let mut saved_paths: Vec<PathBuf> = Vec::new();
let mut clean_paths: Vec<PathBuf> = Vec::new();
let mut not_found_paths: Vec<PathBuf> = Vec::new();
let mut open_errors: Vec<(PathBuf, String)> = Vec::new();
let mut dirty_check_errors: Vec<(PathBuf, String)> = Vec::new();
let mut save_errors: Vec<(String, String)> = Vec::new();
for path in input_paths {
let project_path =
project.read_with(cx, |project, cx| project.find_project_path(&path, cx));
let project_path = match project_path {
Ok(Some(project_path)) => project_path,
Ok(None) => {
not_found_paths.push(path);
continue;
}
Err(error) => {
open_errors.push((path, error.to_string()));
continue;
}
};
let open_buffer_task =
project.update(cx, |project, cx| project.open_buffer(project_path, cx));
let buffer = match open_buffer_task {
Ok(task) => match task.await {
Ok(buffer) => buffer,
Err(error) => {
open_errors.push((path, error.to_string()));
continue;
}
},
Err(error) => {
open_errors.push((path, error.to_string()));
continue;
}
};
let is_dirty = match buffer.read_with(cx, |buffer, _| buffer.is_dirty()) {
Ok(is_dirty) => is_dirty,
Err(error) => {
dirty_check_errors.push((path, error.to_string()));
continue;
}
};
if is_dirty {
buffers_to_save.insert(buffer);
saved_paths.push(path);
} else {
clean_paths.push(path);
}
}
// Save each buffer individually since there's no batch save API.
for buffer in buffers_to_save {
let path_for_buffer = match buffer.read_with(cx, |buffer, _| {
buffer
.file()
.map(|file| file.path().to_rel_path_buf())
.map(|path| path.as_rel_path().as_unix_str().to_owned())
}) {
Ok(path) => path.unwrap_or_else(|| "<unknown>".to_string()),
Err(error) => {
save_errors.push(("<unknown>".to_string(), error.to_string()));
continue;
}
};
let save_task = project.update(cx, |project, cx| project.save_buffer(buffer, cx));
match save_task {
Ok(task) => {
if let Err(error) = task.await {
save_errors.push((path_for_buffer, error.to_string()));
}
}
Err(error) => {
save_errors.push((path_for_buffer, error.to_string()));
}
}
}
let mut lines: Vec<String> = Vec::new();
if !saved_paths.is_empty() {
lines.push(format!("Saved {} file(s).", saved_paths.len()));
}
if !clean_paths.is_empty() {
lines.push(format!("{} clean.", clean_paths.len()));
}
if !not_found_paths.is_empty() {
lines.push(format!("Not found ({}):", not_found_paths.len()));
for path in &not_found_paths {
lines.push(format!("- {}", path.display()));
}
}
if !open_errors.is_empty() {
lines.push(format!("Open failed ({}):", open_errors.len()));
for (path, error) in &open_errors {
lines.push(format!("- {}: {}", path.display(), error));
}
}
if !dirty_check_errors.is_empty() {
lines.push(format!(
"Dirty check failed ({}):",
dirty_check_errors.len()
));
for (path, error) in &dirty_check_errors {
lines.push(format!("- {}: {}", path.display(), error));
}
}
if !save_errors.is_empty() {
lines.push(format!("Save failed ({}):", save_errors.len()));
for (path, error) in &save_errors {
lines.push(format!("- {}: {}", path, error));
}
}
if lines.is_empty() {
Ok("No paths provided.".to_string())
} else {
Ok(lines.join("\n"))
}
})
}
}
#[cfg(test)]
mod tests {
use super::*;
use fs::Fs;
use gpui::TestAppContext;
use project::FakeFs;
use serde_json::json;
use settings::SettingsStore;
use util::path;
fn init_test(cx: &mut TestAppContext) {
cx.update(|cx| {
let settings_store = SettingsStore::test(cx);
cx.set_global(settings_store);
});
}
#[gpui::test]
async fn test_save_file_output_and_effects(cx: &mut TestAppContext) {
init_test(cx);
let fs = FakeFs::new(cx.executor());
fs.insert_tree(
"/root",
json!({
"dirty.txt": "on disk: dirty\n",
"clean.txt": "on disk: clean\n",
}),
)
.await;
let project = Project::test(fs.clone(), [path!("/root").as_ref()], cx).await;
let tool = Arc::new(SaveFileTool::new(project.clone()));
// Make dirty.txt dirty in-memory.
let dirty_project_path = project.read_with(cx, |project, cx| {
project
.find_project_path("root/dirty.txt", cx)
.expect("dirty.txt should exist in project")
});
let dirty_buffer = project
.update(cx, |project, cx| {
project.open_buffer(dirty_project_path, cx)
})
.await
.unwrap();
dirty_buffer.update(cx, |buffer, cx| {
buffer.edit([(0..buffer.len(), "in memory: dirty\n")], None, cx);
});
assert!(
dirty_buffer.read_with(cx, |buffer, _| buffer.is_dirty()),
"dirty.txt buffer should be dirty before save"
);
// Ensure clean.txt is opened but remains clean.
let clean_project_path = project.read_with(cx, |project, cx| {
project
.find_project_path("root/clean.txt", cx)
.expect("clean.txt should exist in project")
});
let clean_buffer = project
.update(cx, |project, cx| {
project.open_buffer(clean_project_path, cx)
})
.await
.unwrap();
assert!(
!clean_buffer.read_with(cx, |buffer, _| buffer.is_dirty()),
"clean.txt buffer should start clean"
);
let output = cx
.update(|cx| {
tool.clone().run(
SaveFileToolInput {
paths: vec![
PathBuf::from("root/dirty.txt"),
PathBuf::from("root/clean.txt"),
],
},
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
// Output should mention saved + clean.
assert!(
output.contains("Saved 1 file(s)."),
"expected saved count line, got:\n{output}"
);
assert!(
output.contains("1 clean."),
"expected clean count line, got:\n{output}"
);
// Effect: dirty buffer should now be clean and disk should have new content.
assert!(
!dirty_buffer.read_with(cx, |buffer, _| buffer.is_dirty()),
"dirty.txt buffer should not be dirty after save"
);
let disk_dirty = fs.load(path!("/root/dirty.txt").as_ref()).await.unwrap();
assert_eq!(
disk_dirty, "in memory: dirty\n",
"dirty.txt disk content should be updated"
);
// Sanity: clean buffer should remain clean and disk unchanged.
let disk_clean = fs.load(path!("/root/clean.txt").as_ref()).await.unwrap();
assert_eq!(disk_clean, "on disk: clean\n");
// Test empty paths case.
let output = cx
.update(|cx| {
tool.clone().run(
SaveFileToolInput { paths: vec![] },
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
assert_eq!(output, "No paths provided.");
// Test not-found path case.
let output = cx
.update(|cx| {
tool.clone().run(
SaveFileToolInput {
paths: vec![PathBuf::from("nonexistent/path.txt")],
},
ToolCallEventStream::test().0,
cx,
)
})
.await
.unwrap();
assert!(
output.contains("Not found (1):"),
"expected not-found header line, got:\n{output}"
);
assert!(
output.contains("- nonexistent/path.txt"),
"expected not-found path bullet, got:\n{output}"
);
}
}
View File
@@ -1,7 +1,6 @@
use agent_client_protocol as acp;
use anyhow::Result;
use futures::FutureExt as _;
use gpui::{App, AppContext, Entity, SharedString, Task};
use gpui::{App, Entity, SharedString, Task};
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
@@ -9,7 +8,6 @@ use std::{
path::{Path, PathBuf},
rc::Rc,
sync::Arc,
time::Duration,
};
use util::markdown::MarkdownInlineCode;
@@ -27,17 +25,13 @@ const COMMAND_OUTPUT_LIMIT: u64 = 16 * 1024;
///
/// Do not use this tool for commands that run indefinitely, such as servers (like `npm run start`, `npm run dev`, `python -m http.server`, etc) or file watchers that don't terminate on their own.
///
/// For potentially long-running commands, prefer specifying `timeout_ms` to bound runtime and prevent indefinite hangs.
///
/// Remember that each invocation of this tool will spawn a new shell process, so you can't rely on any state from previous invocations.
#[derive(Clone, Debug, Serialize, Deserialize, JsonSchema)]
pub struct TerminalToolInput {
/// The one-liner command to execute.
pub command: String,
command: String,
/// Working directory for the command. This must be one of the root directories of the project.
pub cd: String,
/// Optional maximum runtime (in milliseconds). If exceeded, the running terminal task is killed.
pub timeout_ms: Option<u64>,
cd: String,
}
pub struct TerminalTool {
@@ -118,30 +112,12 @@ impl AgentTool for TerminalTool {
.await?;
let terminal_id = terminal.id(cx)?;
event_stream.update_fields(acp::ToolCallUpdateFields::new().content(vec![
acp::ToolCallContent::Terminal(acp::Terminal::new(terminal_id)),
]));
let timeout = input.timeout_ms.map(Duration::from_millis);
let exit_status = match timeout {
Some(timeout) => {
let wait_for_exit = terminal.wait_for_exit(cx)?;
let timeout_task = cx.background_spawn(async move {
smol::Timer::after(timeout).await;
});
futures::select! {
status = wait_for_exit.clone().fuse() => status,
_ = timeout_task.fuse() => {
terminal.kill(cx)?;
wait_for_exit.await
}
}
}
None => terminal.wait_for_exit(cx)?.await,
};
event_stream.update_fields(acp::ToolCallUpdateFields {
content: Some(vec![acp::ToolCallContent::Terminal { terminal_id }]),
..Default::default()
});
let exit_status = terminal.wait_for_exit(cx)?.await;
let output = terminal.current_output(cx)?;
Ok(process_content(output, &input.command, exit_status))
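Aside: this hunk removes the `timeout_ms` support from the terminal tool. For reference, a standalone sketch of the pattern the removed code used, racing the command's exit future against a timer; it assumes only the smol and futures crates, and the names here are illustrative rather than Zed APIs.
use std::time::Duration;
use futures::FutureExt as _;
fn main() {
    smol::block_on(async {
        // Stand-in for waiting on the terminal command to exit.
        let work = async {
            smol::Timer::after(Duration::from_millis(50)).await;
            "exit status 0"
        }
        .fuse();
        // Stand-in for the removed `timeout_ms` bound.
        let timeout = smol::Timer::after(Duration::from_millis(200)).fuse();
        futures::pin_mut!(work, timeout);
        futures::select! {
            status = work => println!("command finished: {status}"),
            _ = timeout => println!("timed out; the real code killed the terminal task here"),
        }
    });
}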
View File
@@ -43,8 +43,10 @@ impl AgentTool for ThinkingTool {
event_stream: ToolCallEventStream,
_cx: &mut App,
) -> Task<Result<String>> {
event_stream
.update_fields(acp::ToolCallUpdateFields::new().content(vec![input.content.into()]));
event_stream.update_fields(acp::ToolCallUpdateFields {
content: Some(vec![input.content.into()]),
..Default::default()
});
Task::ready(Ok("Finished thinking.".to_string()))
}
}
View File
@@ -76,8 +76,10 @@ impl AgentTool for WebSearchTool {
let response = match search_task.await {
Ok(response) => response,
Err(err) => {
event_stream
.update_fields(acp::ToolCallUpdateFields::new().title("Web Search Failed"));
event_stream.update_fields(acp::ToolCallUpdateFields {
title: Some("Web Search Failed".to_string()),
..Default::default()
});
return Err(err);
}
};
@@ -105,23 +107,26 @@ fn emit_update(response: &WebSearchResponse, event_stream: &ToolCallEventStream)
} else {
format!("{} results", response.results.len())
};
event_stream.update_fields(
acp::ToolCallUpdateFields::new()
.title(format!("Searched the web: {result_text}"))
.content(
response
.results
.iter()
.map(|result| {
acp::ToolCallContent::Content(acp::Content::new(
acp::ContentBlock::ResourceLink(
acp::ResourceLink::new(result.title.clone(), result.url.clone())
.title(result.title.clone())
.description(result.text.clone()),
),
))
})
.collect::<Vec<_>>(),
),
);
event_stream.update_fields(acp::ToolCallUpdateFields {
title: Some(format!("Searched the web: {result_text}")),
content: Some(
response
.results
.iter()
.map(|result| acp::ToolCallContent::Content {
content: acp::ContentBlock::ResourceLink(acp::ResourceLink {
name: result.title.clone(),
uri: result.url.clone(),
title: Some(result.title.clone()),
description: Some(result.text.clone()),
mime_type: None,
annotations: None,
size: None,
meta: None,
}),
})
.collect(),
),
..Default::default()
});
}
View File
@@ -21,7 +21,6 @@ acp_tools.workspace = true
acp_thread.workspace = true
action_log.workspace = true
agent-client-protocol.workspace = true
feature_flags.workspace = true
anyhow.workspace = true
async-trait.workspace = true
client.workspace = true
View File
@@ -4,19 +4,14 @@ use action_log::ActionLog;
use agent_client_protocol::{self as acp, Agent as _, ErrorCode};
use anyhow::anyhow;
use collections::HashMap;
use feature_flags::{AcpBetaFeatureFlag, FeatureFlagAppExt as _};
use futures::AsyncBufReadExt as _;
use futures::io::BufReader;
use project::Project;
use project::agent_server_store::AgentServerCommand;
use serde::Deserialize;
use settings::Settings as _;
use task::ShellBuilder;
use util::ResultExt as _;
use util::process::Child;
use std::path::PathBuf;
use std::process::Stdio;
use std::{any::Any, cell::RefCell};
use std::{path::Path, rc::Rc};
use thiserror::Error;
@@ -26,7 +21,7 @@ use gpui::{App, AppContext as _, AsyncApp, Entity, SharedString, Task, WeakEntit
use acp_thread::{AcpThread, AuthRequired, LoadError, TerminalProviderEvent};
use terminal::TerminalBuilder;
use terminal::terminal_settings::{AlternateScroll, CursorShape, TerminalSettings};
use terminal::terminal_settings::{AlternateScroll, CursorShape};
#[derive(Debug, Error)]
#[error("Unsupported version")]
@@ -34,63 +29,46 @@ pub struct UnsupportedVersion;
pub struct AcpConnection {
server_name: SharedString,
telemetry_id: SharedString,
telemetry_id: &'static str,
connection: Rc<acp::ClientSideConnection>,
sessions: Rc<RefCell<HashMap<acp::SessionId, AcpSession>>>,
auth_methods: Vec<acp::AuthMethod>,
agent_capabilities: acp::AgentCapabilities,
default_mode: Option<acp::SessionModeId>,
default_model: Option<acp::ModelId>,
default_config_options: HashMap<String, String>,
root_dir: PathBuf,
child: Child,
// NB: Don't move this into the wait_task, since we need to ensure the process is
// killed on drop (setting kill_on_drop on the command seems to not always work).
child: smol::process::Child,
_io_task: Task<Result<(), acp::Error>>,
_wait_task: Task<Result<()>>,
_stderr_task: Task<Result<()>>,
}
struct ConfigOptions {
config_options: Rc<RefCell<Vec<acp::SessionConfigOption>>>,
tx: Rc<RefCell<watch::Sender<()>>>,
rx: watch::Receiver<()>,
}
impl ConfigOptions {
fn new(config_options: Rc<RefCell<Vec<acp::SessionConfigOption>>>) -> Self {
let (tx, rx) = watch::channel(());
Self {
config_options,
tx: Rc::new(RefCell::new(tx)),
rx,
}
}
}
pub struct AcpSession {
thread: WeakEntity<AcpThread>,
suppress_abort_err: bool,
models: Option<Rc<RefCell<acp::SessionModelState>>>,
session_modes: Option<Rc<RefCell<acp::SessionModeState>>>,
config_options: Option<ConfigOptions>,
}
pub async fn connect(
server_name: SharedString,
telemetry_id: &'static str,
command: AgentServerCommand,
root_dir: &Path,
default_mode: Option<acp::SessionModeId>,
default_model: Option<acp::ModelId>,
default_config_options: HashMap<String, String>,
is_remote: bool,
cx: &mut AsyncApp,
) -> Result<Rc<dyn AgentConnection>> {
let conn = AcpConnection::stdio(
server_name,
telemetry_id,
command.clone(),
root_dir,
default_mode,
default_model,
default_config_options,
is_remote,
cx,
)
@@ -98,28 +76,30 @@ pub async fn connect(
Ok(Rc::new(conn) as _)
}
const MINIMUM_SUPPORTED_VERSION: acp::ProtocolVersion = acp::ProtocolVersion::V1;
const MINIMUM_SUPPORTED_VERSION: acp::ProtocolVersion = acp::V1;
impl AcpConnection {
pub async fn stdio(
server_name: SharedString,
telemetry_id: &'static str,
command: AgentServerCommand,
root_dir: &Path,
default_mode: Option<acp::SessionModeId>,
default_model: Option<acp::ModelId>,
default_config_options: HashMap<String, String>,
is_remote: bool,
cx: &mut AsyncApp,
) -> Result<Self> {
let shell = cx.update(|cx| TerminalSettings::get(None, cx).shell.clone())?;
let builder = ShellBuilder::new(&shell, cfg!(windows)).non_interactive();
let mut child =
builder.build_std_command(Some(command.path.display().to_string()), &command.args);
child.envs(command.env.iter().flatten());
let mut child = util::command::new_smol_command(&command.path);
child
.args(command.args.iter().map(|arg| arg.as_str()))
.envs(command.env.iter().flatten())
.stdin(std::process::Stdio::piped())
.stdout(std::process::Stdio::piped())
.stderr(std::process::Stdio::piped());
if !is_remote {
child.current_dir(root_dir);
}
let mut child = Child::spawn(child, Stdio::piped(), Stdio::piped(), Stdio::piped())?;
let mut child = child.spawn()?;
let stdout = child.stdout.take().context("Failed to take stdout")?;
let stdin = child.stdin.take().context("Failed to take stdin")?;
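Aside: the comment on the `child` field above notes the agent process must be killed on drop rather than inside the wait task. A minimal std-only sketch of that spawn-with-piped-stdio plus kill-on-drop pattern, with illustrative names that are not Zed APIs:
use std::process::{Child, Command, Stdio};
struct AgentProcess {
    child: Child,
}
impl AgentProcess {
    fn spawn(program: &str, args: &[&str]) -> std::io::Result<Self> {
        let child = Command::new(program)
            .args(args)
            .stdin(Stdio::piped())
            .stdout(Stdio::piped())
            .stderr(Stdio::piped())
            .spawn()?;
        Ok(Self { child })
    }
}
impl Drop for AgentProcess {
    fn drop(&mut self) {
        // Best-effort kill; ignore the error if the process already exited.
        let _ = self.child.kill();
    }
}
fn main() -> std::io::Result<()> {
    // "cat" is only a placeholder command for the sketch.
    let _agent = AgentProcess::spawn("cat", &[])?;
    // Dropping `_agent` here kills the child if it is still running.
    Ok(())
}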
@@ -194,38 +174,34 @@ impl AcpConnection {
})?;
let response = connection
.initialize(
acp::InitializeRequest::new(acp::ProtocolVersion::V1)
.client_capabilities(
acp::ClientCapabilities::new()
.fs(acp::FileSystemCapability::new()
.read_text_file(true)
.write_text_file(true))
.terminal(true)
// Experimental: Allow for rendering terminal output from the agents
.meta(acp::Meta::from_iter([
("terminal_output".into(), true.into()),
("terminal-auth".into(), true.into()),
])),
)
.client_info(
acp::Implementation::new("zed", version)
.title(release_channel.map(ToOwned::to_owned)),
),
)
.initialize(acp::InitializeRequest {
protocol_version: acp::VERSION,
client_capabilities: acp::ClientCapabilities {
fs: acp::FileSystemCapability {
read_text_file: true,
write_text_file: true,
meta: None,
},
terminal: true,
meta: Some(serde_json::json!({
// Experimental: Allow for rendering terminal output from the agents
"terminal_output": true,
"terminal-auth": true,
})),
},
client_info: Some(acp::Implementation {
name: "zed".to_owned(),
title: release_channel.map(|c| c.to_owned()),
version,
}),
meta: None,
})
.await?;
if response.protocol_version < MINIMUM_SUPPORTED_VERSION {
return Err(UnsupportedVersion.into());
}
let telemetry_id = response
.agent_info
// Use the one the agent provides if we have one
.map(|info| info.name.into())
// Otherwise, just use the name
.unwrap_or_else(|| server_name.clone());
Ok(Self {
auth_methods: response.auth_methods,
root_dir: root_dir.to_owned(),
@@ -236,7 +212,6 @@ impl AcpConnection {
agent_capabilities: response.agent_capabilities,
default_mode,
default_model,
default_config_options,
_io_task: io_task,
_wait_task: wait_task,
_stderr_task: stderr_task,
@@ -255,13 +230,14 @@ impl AcpConnection {
impl Drop for AcpConnection {
fn drop(&mut self) {
// See the comment on the child field.
self.child.kill().log_err();
}
}
impl AgentConnection for AcpConnection {
fn telemetry_id(&self) -> SharedString {
self.telemetry_id.clone()
fn telemetry_id(&self) -> &'static str {
self.telemetry_id
}
fn new_thread(
@@ -275,16 +251,16 @@ impl AgentConnection for AcpConnection {
let sessions = self.sessions.clone();
let default_mode = self.default_mode.clone();
let default_model = self.default_model.clone();
let default_config_options = self.default_config_options.clone();
let cwd = cwd.to_path_buf();
let context_server_store = project.read(cx).context_server_store().read(cx);
let mcp_servers = if project.read(cx).is_local() {
context_server_store
.configured_server_ids()
.iter()
.filter_map(|id| {
let configuration = context_server_store.configuration_for_server(id)?;
match &*configuration {
let mcp_servers =
if project.read(cx).is_local() {
context_server_store
.configured_server_ids()
.iter()
.filter_map(|id| {
let configuration = context_server_store.configuration_for_server(id)?;
match &*configuration {
project::context_server_store::ContextServerConfiguration::Custom {
command,
..
@@ -292,47 +268,53 @@ impl AgentConnection for AcpConnection {
| project::context_server_store::ContextServerConfiguration::Extension {
command,
..
} => Some(acp::McpServer::Stdio(
acp::McpServerStdio::new(id.0.to_string(), &command.path)
.args(command.args.clone())
.env(if let Some(env) = command.env.as_ref() {
env.iter()
.map(|(name, value)| acp::EnvVariable::new(name, value))
.collect()
} else {
vec![]
}),
)),
} => Some(acp::McpServer::Stdio {
name: id.0.to_string(),
command: command.path.clone(),
args: command.args.clone(),
env: if let Some(env) = command.env.as_ref() {
env.iter()
.map(|(name, value)| acp::EnvVariable {
name: name.clone(),
value: value.clone(),
meta: None,
})
.collect()
} else {
vec![]
},
}),
project::context_server_store::ContextServerConfiguration::Http {
url,
headers,
} => Some(acp::McpServer::Http(
acp::McpServerHttp::new(id.0.to_string(), url.to_string()).headers(
headers
.iter()
.map(|(name, value)| acp::HttpHeader::new(name, value))
.collect(),
),
)),
} => Some(acp::McpServer::Http {
name: id.0.to_string(),
url: url.to_string(),
headers: headers.iter().map(|(name, value)| acp::HttpHeader {
name: name.clone(),
value: value.clone(),
meta: None,
}).collect(),
}),
}
})
.collect()
} else {
// In SSH projects, the external agent is running on the remote
// machine, and currently we only run MCP servers on the local
// machine. So don't pass any MCP servers to the agent in that case.
Vec::new()
};
})
.collect()
} else {
// In SSH projects, the external agent is running on the remote
// machine, and currently we only run MCP servers on the local
// machine. So don't pass any MCP servers to the agent in that case.
Vec::new()
};
cx.spawn(async move |cx| {
let response = conn
.new_session(acp::NewSessionRequest::new(cwd).mcp_servers(mcp_servers))
.new_session(acp::NewSessionRequest { mcp_servers, cwd, meta: None })
.await
.map_err(|err| {
if err.code == acp::ErrorCode::AuthRequired {
if err.code == acp::ErrorCode::AUTH_REQUIRED.code {
let mut error = AuthRequired::new();
if err.message != acp::ErrorCode::AuthRequired.to_string() {
if err.message != acp::ErrorCode::AUTH_REQUIRED.message {
error = error.with_description(err.message);
}
@@ -342,21 +324,8 @@ impl AgentConnection for AcpConnection {
}
})?;
let use_config_options = cx.update(|cx| cx.has_flag::<AcpBetaFeatureFlag>())?;
// Config options take precedence over legacy modes/models
let (modes, models, config_options) = if use_config_options && let Some(opts) = response.config_options {
(
None,
None,
Some(Rc::new(RefCell::new(opts))),
)
} else {
// Fall back to legacy modes/models
let modes = response.modes.map(|modes| Rc::new(RefCell::new(modes)));
let models = response.models.map(|models| Rc::new(RefCell::new(models)));
(modes, models, None)
};
let modes = response.modes.map(|modes| Rc::new(RefCell::new(modes)));
let models = response.models.map(|models| Rc::new(RefCell::new(models)));
if let Some(default_mode) = default_mode {
if let Some(modes) = modes.as_ref() {
@@ -372,7 +341,11 @@ impl AgentConnection for AcpConnection {
let modes = modes.clone();
let conn = conn.clone();
async move |_| {
let result = conn.set_session_mode(acp::SetSessionModeRequest::new(session_id, default_mode))
let result = conn.set_session_mode(acp::SetSessionModeRequest {
session_id,
mode_id: default_mode,
meta: None,
})
.await.log_err();
if result.is_none() {
@@ -415,7 +388,11 @@ impl AgentConnection for AcpConnection {
let models = models.clone();
let conn = conn.clone();
async move |_| {
let result = conn.set_session_model(acp::SetSessionModelRequest::new(session_id, default_model))
let result = conn.set_session_model(acp::SetSessionModelRequest {
session_id,
model_id: default_model,
meta: None,
})
.await.log_err();
if result.is_none() {
@@ -444,92 +421,6 @@ impl AgentConnection for AcpConnection {
}
}
if let Some(config_opts) = config_options.as_ref() {
let defaults_to_apply: Vec<_> = {
let config_opts_ref = config_opts.borrow();
config_opts_ref
.iter()
.filter_map(|config_option| {
let default_value = default_config_options.get(&*config_option.id.0)?;
let is_valid = match &config_option.kind {
acp::SessionConfigKind::Select(select) => match &select.options {
acp::SessionConfigSelectOptions::Ungrouped(options) => {
options.iter().any(|opt| &*opt.value.0 == default_value.as_str())
}
acp::SessionConfigSelectOptions::Grouped(groups) => groups
.iter()
.any(|g| g.options.iter().any(|opt| &*opt.value.0 == default_value.as_str())),
_ => false,
},
_ => false,
};
if is_valid {
let initial_value = match &config_option.kind {
acp::SessionConfigKind::Select(select) => {
Some(select.current_value.clone())
}
_ => None,
};
Some((config_option.id.clone(), default_value.clone(), initial_value))
} else {
log::warn!(
"`{}` is not a valid value for config option `{}` in {}",
default_value,
config_option.id.0,
name
);
None
}
})
.collect()
};
for (config_id, default_value, initial_value) in defaults_to_apply {
cx.spawn({
let default_value_id = acp::SessionConfigValueId::new(default_value.clone());
let session_id = response.session_id.clone();
let config_id_clone = config_id.clone();
let config_opts = config_opts.clone();
let conn = conn.clone();
async move |_| {
let result = conn
.set_session_config_option(
acp::SetSessionConfigOptionRequest::new(
session_id,
config_id_clone.clone(),
default_value_id,
),
)
.await
.log_err();
if result.is_none() {
if let Some(initial) = initial_value {
let mut opts = config_opts.borrow_mut();
if let Some(opt) = opts.iter_mut().find(|o| o.id == config_id_clone) {
if let acp::SessionConfigKind::Select(select) =
&mut opt.kind
{
select.current_value = initial;
}
}
}
}
}
})
.detach();
let mut opts = config_opts.borrow_mut();
if let Some(opt) = opts.iter_mut().find(|o| o.id == config_id) {
if let acp::SessionConfigKind::Select(select) = &mut opt.kind {
select.current_value = acp::SessionConfigValueId::new(default_value);
}
}
}
}
let session_id = response.session_id;
let action_log = cx.new(|_| ActionLog::new(project.clone()))?;
let thread = cx.new(|cx| {
@@ -551,7 +442,6 @@ impl AgentConnection for AcpConnection {
suppress_abort_err: false,
session_modes: modes,
models,
config_options: config_options.map(|opts| ConfigOptions::new(opts))
};
sessions.borrow_mut().insert(session_id, session);
@@ -566,8 +456,12 @@ impl AgentConnection for AcpConnection {
fn authenticate(&self, method_id: acp::AuthMethodId, cx: &mut App) -> Task<Result<()>> {
let conn = self.connection.clone();
cx.foreground_executor().spawn(async move {
conn.authenticate(acp::AuthenticateRequest::new(method_id))
.await?;
conn.authenticate(acp::AuthenticateRequest {
method_id: method_id.clone(),
meta: None,
})
.await?;
Ok(())
})
}
@@ -594,11 +488,11 @@ impl AgentConnection for AcpConnection {
match result {
Ok(response) => Ok(response),
Err(err) => {
if err.code == acp::ErrorCode::AuthRequired {
if err.code == acp::ErrorCode::AUTH_REQUIRED.code {
return Err(anyhow!(acp::Error::auth_required()));
}
if err.code != ErrorCode::InternalError {
if err.code != ErrorCode::INTERNAL_ERROR.code {
anyhow::bail!(err)
}
@@ -621,7 +515,10 @@ impl AgentConnection for AcpConnection {
&& (details.contains("This operation was aborted")
|| details.contains("The user aborted a request"))
{
Ok(acp::PromptResponse::new(acp::StopReason::Cancelled))
Ok(acp::PromptResponse {
stop_reason: acp::StopReason::Cancelled,
meta: None,
})
} else {
Err(anyhow!(details))
}
@@ -638,7 +535,10 @@ impl AgentConnection for AcpConnection {
session.suppress_abort_err = true;
}
let conn = self.connection.clone();
let params = acp::CancelNotification::new(session_id.clone());
let params = acp::CancelNotification {
session_id: session_id.clone(),
meta: None,
};
cx.foreground_executor()
.spawn(async move { conn.cancel(params).await })
.detach();
@@ -687,25 +587,6 @@ impl AgentConnection for AcpConnection {
}
}
fn session_config_options(
&self,
session_id: &acp::SessionId,
_cx: &App,
) -> Option<Rc<dyn acp_thread::AgentSessionConfigOptions>> {
let sessions = self.sessions.borrow();
let session = sessions.get(session_id)?;
let config_opts = session.config_options.as_ref()?;
Some(Rc::new(AcpSessionConfigOptions {
session_id: session_id.clone(),
connection: self.connection.clone(),
state: config_opts.config_options.clone(),
watch_tx: config_opts.tx.clone(),
watch_rx: config_opts.rx.clone(),
}) as _)
}
fn into_any(self: Rc<Self>) -> Rc<dyn Any> {
self
}
@@ -738,7 +619,11 @@ impl acp_thread::AgentSessionModes for AcpSessionModes {
let state = self.state.clone();
cx.foreground_executor().spawn(async move {
let result = connection
.set_session_mode(acp::SetSessionModeRequest::new(session_id, mode_id))
.set_session_mode(acp::SetSessionModeRequest {
session_id,
mode_id,
meta: None,
})
.await;
if result.is_err() {
@@ -797,7 +682,11 @@ impl acp_thread::AgentModelSelector for AcpModelSelector {
let state = self.state.clone();
cx.foreground_executor().spawn(async move {
let result = connection
.set_session_model(acp::SetSessionModelRequest::new(session_id, model_id))
.set_session_model(acp::SetSessionModelRequest {
session_id,
model_id,
meta: None,
})
.await;
if result.is_err() {
@@ -824,49 +713,6 @@ impl acp_thread::AgentModelSelector for AcpModelSelector {
}
}
struct AcpSessionConfigOptions {
session_id: acp::SessionId,
connection: Rc<acp::ClientSideConnection>,
state: Rc<RefCell<Vec<acp::SessionConfigOption>>>,
watch_tx: Rc<RefCell<watch::Sender<()>>>,
watch_rx: watch::Receiver<()>,
}
impl acp_thread::AgentSessionConfigOptions for AcpSessionConfigOptions {
fn config_options(&self) -> Vec<acp::SessionConfigOption> {
self.state.borrow().clone()
}
fn set_config_option(
&self,
config_id: acp::SessionConfigId,
value: acp::SessionConfigValueId,
cx: &mut App,
) -> Task<Result<Vec<acp::SessionConfigOption>>> {
let connection = self.connection.clone();
let session_id = self.session_id.clone();
let state = self.state.clone();
let watch_tx = self.watch_tx.clone();
cx.foreground_executor().spawn(async move {
let response = connection
.set_session_config_option(acp::SetSessionConfigOptionRequest::new(
session_id, config_id, value,
))
.await?;
*state.borrow_mut() = response.config_options.clone();
watch_tx.borrow_mut().send(()).ok();
Ok(response.config_options)
})
}
fn watch(&self, _cx: &mut App) -> Option<watch::Receiver<()>> {
Some(self.watch_rx.clone())
}
}
struct ClientDelegate {
sessions: Rc<RefCell<HashMap<acp::SessionId, AcpSession>>>,
cx: AsyncApp,
@@ -902,7 +748,10 @@ impl acp::Client for ClientDelegate {
let outcome = task.await;
Ok(acp::RequestPermissionResponse::new(outcome))
Ok(acp::RequestPermissionResponse {
outcome,
meta: None,
})
}
async fn write_text_file(
@@ -934,7 +783,10 @@ impl acp::Client for ClientDelegate {
let content = task.await?;
Ok(acp::ReadTextFileResponse::new(content))
Ok(acp::ReadTextFileResponse {
content,
meta: None,
})
}
async fn session_notification(
@@ -960,21 +812,6 @@ impl acp::Client for ClientDelegate {
}
}
if let acp::SessionUpdate::ConfigOptionUpdate(acp::ConfigOptionUpdate {
config_options,
..
}) = &notification.update
{
if let Some(opts) = &session.config_options {
*opts.config_options.borrow_mut() = config_options.clone();
opts.tx.borrow_mut().send(()).ok();
} else {
log::error!(
"Got a `ConfigOptionUpdate` notification, but the agent didn't specify `config_options` during session setup."
);
}
}
// Clone so we can inspect meta both before and after handing off to the thread
let update_clone = notification.update.clone();
@@ -984,7 +821,7 @@ impl acp::Client for ClientDelegate {
if let Some(terminal_info) = meta.get("terminal_info") {
if let Some(id_str) = terminal_info.get("terminal_id").and_then(|v| v.as_str())
{
let terminal_id = acp::TerminalId::new(id_str);
let terminal_id = acp::TerminalId(id_str.into());
let cwd = terminal_info
.get("cwd")
.and_then(|v| v.as_str().map(PathBuf::from));
@@ -1000,7 +837,7 @@ impl acp::Client for ClientDelegate {
let lower = cx.new(|cx| builder.subscribe(cx));
thread.on_terminal_provider_event(
TerminalProviderEvent::Created {
terminal_id,
terminal_id: terminal_id.clone(),
label: tc.title.clone(),
cwd,
output_byte_limit: None,
@@ -1025,12 +862,15 @@ impl acp::Client for ClientDelegate {
if let Some(meta) = &tcu.meta {
if let Some(term_out) = meta.get("terminal_output") {
if let Some(id_str) = term_out.get("terminal_id").and_then(|v| v.as_str()) {
let terminal_id = acp::TerminalId::new(id_str);
let terminal_id = acp::TerminalId(id_str.into());
if let Some(s) = term_out.get("data").and_then(|v| v.as_str()) {
let data = s.as_bytes().to_vec();
let _ = session.thread.update(&mut self.cx.clone(), |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Output { terminal_id, data },
TerminalProviderEvent::Output {
terminal_id: terminal_id.clone(),
data,
},
cx,
);
});
@@ -1041,24 +881,21 @@ impl acp::Client for ClientDelegate {
// terminal_exit
if let Some(term_exit) = meta.get("terminal_exit") {
if let Some(id_str) = term_exit.get("terminal_id").and_then(|v| v.as_str()) {
let terminal_id = acp::TerminalId::new(id_str);
let status = acp::TerminalExitStatus::new()
.exit_code(
term_exit
.get("exit_code")
.and_then(|v| v.as_u64())
.map(|i| i as u32),
)
.signal(
term_exit
.get("signal")
.and_then(|v| v.as_str().map(|s| s.to_string())),
);
let terminal_id = acp::TerminalId(id_str.into());
let status = acp::TerminalExitStatus {
exit_code: term_exit
.get("exit_code")
.and_then(|v| v.as_u64())
.map(|i| i as u32),
signal: term_exit
.get("signal")
.and_then(|v| v.as_str().map(|s| s.to_string())),
meta: None,
};
let _ = session.thread.update(&mut self.cx.clone(), |thread, cx| {
thread.on_terminal_provider_event(
TerminalProviderEvent::Exit {
terminal_id,
terminal_id: terminal_id.clone(),
status,
},
cx,
@@ -1095,7 +932,7 @@ impl acp::Client for ClientDelegate {
// Register with renderer
let terminal_entity = thread.update(&mut self.cx.clone(), |thread, cx| {
thread.register_terminal_created(
acp::TerminalId::new(uuid::Uuid::new_v4().to_string()),
acp::TerminalId(uuid::Uuid::new_v4().to_string().into()),
format!("{} {}", args.command, args.args.join(" ")),
args.cwd.clone(),
args.output_byte_limit,
@@ -1105,7 +942,10 @@ impl acp::Client for ClientDelegate {
})?;
let terminal_id =
terminal_entity.read_with(&self.cx, |terminal, _| terminal.id().clone())?;
Ok(acp::CreateTerminalResponse::new(terminal_id))
Ok(acp::CreateTerminalResponse {
terminal_id,
meta: None,
})
}
async fn kill_terminal_command(
@@ -1166,7 +1006,10 @@ impl acp::Client for ClientDelegate {
})??
.await;
Ok(acp::WaitForTerminalExitResponse::new(exit_status))
Ok(acp::WaitForTerminalExitResponse {
exit_status,
meta: None,
})
}
}


@@ -10,7 +10,7 @@ pub mod e2e_tests;
pub use claude::*;
use client::ProxySettings;
pub use codex::*;
use collections::{HashMap, HashSet};
use collections::HashMap;
pub use custom::*;
use fs::Fs;
pub use gemini::*;
@@ -56,19 +56,10 @@ impl AgentServerDelegate {
pub trait AgentServer: Send {
fn logo(&self) -> ui::IconName;
fn name(&self) -> SharedString;
fn connect(
&self,
root_dir: Option<&Path>,
delegate: AgentServerDelegate,
cx: &mut App,
) -> Task<Result<(Rc<dyn AgentConnection>, Option<task::SpawnInTerminal>)>>;
fn into_any(self: Rc<Self>) -> Rc<dyn Any>;
fn default_mode(&self, _cx: &App) -> Option<agent_client_protocol::SessionModeId> {
fn telemetry_id(&self) -> &'static str;
fn default_mode(&self, _cx: &mut App) -> Option<agent_client_protocol::SessionModeId> {
None
}
fn set_default_mode(
&self,
_mode_id: Option<agent_client_protocol::SessionModeId>,
@@ -77,7 +68,7 @@ pub trait AgentServer: Send {
) {
}
fn default_model(&self, _cx: &App) -> Option<agent_client_protocol::ModelId> {
fn default_model(&self, _cx: &mut App) -> Option<agent_client_protocol::ModelId> {
None
}
@@ -89,49 +80,14 @@ pub trait AgentServer: Send {
) {
}
fn favorite_model_ids(&self, _cx: &mut App) -> HashSet<agent_client_protocol::ModelId> {
HashSet::default()
}
fn default_config_option(&self, _config_id: &str, _cx: &App) -> Option<String> {
None
}
fn set_default_config_option(
fn connect(
&self,
_config_id: &str,
_value_id: Option<&str>,
_fs: Arc<dyn Fs>,
_cx: &mut App,
) {
}
root_dir: Option<&Path>,
delegate: AgentServerDelegate,
cx: &mut App,
) -> Task<Result<(Rc<dyn AgentConnection>, Option<task::SpawnInTerminal>)>>;
fn favorite_config_option_value_ids(
&self,
_config_id: &agent_client_protocol::SessionConfigId,
_cx: &mut App,
) -> HashSet<agent_client_protocol::SessionConfigValueId> {
HashSet::default()
}
fn toggle_favorite_config_option_value(
&self,
_config_id: agent_client_protocol::SessionConfigId,
_value_id: agent_client_protocol::SessionConfigValueId,
_should_be_favorite: bool,
_fs: Arc<dyn Fs>,
_cx: &App,
) {
}
fn toggle_favorite_model(
&self,
_model_id: agent_client_protocol::ModelId,
_should_be_favorite: bool,
_fs: Arc<dyn Fs>,
_cx: &App,
) {
}
fn into_any(self: Rc<Self>) -> Rc<dyn Any>;
}
impl dyn AgentServer {


@@ -1,5 +1,4 @@
use agent_client_protocol as acp;
use collections::HashSet;
use fs::Fs;
use settings::{SettingsStore, update_settings_file};
use std::path::Path;
@@ -23,6 +22,10 @@ pub struct AgentServerLoginCommand {
}
impl AgentServer for ClaudeCode {
fn telemetry_id(&self) -> &'static str {
"claude-code"
}
fn name(&self) -> SharedString {
"Claude Code".into()
}
@@ -31,14 +34,14 @@ impl AgentServer for ClaudeCode {
ui::IconName::AiClaude
}
fn default_mode(&self, cx: &App) -> Option<acp::SessionModeId> {
fn default_mode(&self, cx: &mut App) -> Option<acp::SessionModeId> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings.get::<AllAgentServersSettings>(None).claude.clone()
});
settings
.as_ref()
.and_then(|s| s.default_mode.clone().map(acp::SessionModeId::new))
.and_then(|s| s.default_mode.clone().map(|m| acp::SessionModeId(m.into())))
}
fn set_default_mode(&self, mode_id: Option<acp::SessionModeId>, fs: Arc<dyn Fs>, cx: &mut App) {
@@ -52,14 +55,14 @@ impl AgentServer for ClaudeCode {
});
}
fn default_model(&self, cx: &App) -> Option<acp::ModelId> {
fn default_model(&self, cx: &mut App) -> Option<acp::ModelId> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings.get::<AllAgentServersSettings>(None).claude.clone()
});
settings
.as_ref()
.and_then(|s| s.default_model.clone().map(acp::ModelId::new))
.and_then(|s| s.default_model.clone().map(|m| acp::ModelId(m.into())))
}
fn set_default_model(&self, model_id: Option<acp::ModelId>, fs: Arc<dyn Fs>, cx: &mut App) {
@@ -73,139 +76,6 @@ impl AgentServer for ClaudeCode {
});
}
fn favorite_model_ids(&self, cx: &mut App) -> HashSet<acp::ModelId> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings.get::<AllAgentServersSettings>(None).claude.clone()
});
settings
.as_ref()
.map(|s| {
s.favorite_models
.iter()
.map(|id| acp::ModelId::new(id.clone()))
.collect()
})
.unwrap_or_default()
}
fn toggle_favorite_model(
&self,
model_id: acp::ModelId,
should_be_favorite: bool,
fs: Arc<dyn Fs>,
cx: &App,
) {
update_settings_file(fs, cx, move |settings, _| {
let favorite_models = &mut settings
.agent_servers
.get_or_insert_default()
.claude
.get_or_insert_default()
.favorite_models;
let model_id_str = model_id.to_string();
if should_be_favorite {
if !favorite_models.contains(&model_id_str) {
favorite_models.push(model_id_str);
}
} else {
favorite_models.retain(|id| id != &model_id_str);
}
});
}
fn default_config_option(&self, config_id: &str, cx: &App) -> Option<String> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings.get::<AllAgentServersSettings>(None).claude.clone()
});
settings
.as_ref()
.and_then(|s| s.default_config_options.get(config_id).cloned())
}
fn set_default_config_option(
&self,
config_id: &str,
value_id: Option<&str>,
fs: Arc<dyn Fs>,
cx: &mut App,
) {
let config_id = config_id.to_string();
let value_id = value_id.map(|s| s.to_string());
update_settings_file(fs, cx, move |settings, _| {
let config_options = &mut settings
.agent_servers
.get_or_insert_default()
.claude
.get_or_insert_default()
.default_config_options;
if let Some(value) = value_id.clone() {
config_options.insert(config_id.clone(), value);
} else {
config_options.remove(&config_id);
}
});
}
fn favorite_config_option_value_ids(
&self,
config_id: &acp::SessionConfigId,
cx: &mut App,
) -> HashSet<acp::SessionConfigValueId> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings.get::<AllAgentServersSettings>(None).claude.clone()
});
settings
.as_ref()
.and_then(|s| s.favorite_config_option_values.get(config_id.0.as_ref()))
.map(|values| {
values
.iter()
.cloned()
.map(acp::SessionConfigValueId::new)
.collect()
})
.unwrap_or_default()
}
fn toggle_favorite_config_option_value(
&self,
config_id: acp::SessionConfigId,
value_id: acp::SessionConfigValueId,
should_be_favorite: bool,
fs: Arc<dyn Fs>,
cx: &App,
) {
let config_id = config_id.to_string();
let value_id = value_id.to_string();
update_settings_file(fs, cx, move |settings, _| {
let favorites = &mut settings
.agent_servers
.get_or_insert_default()
.claude
.get_or_insert_default()
.favorite_config_option_values;
let entry = favorites.entry(config_id.clone()).or_insert_with(Vec::new);
if should_be_favorite {
if !entry.iter().any(|v| v == &value_id) {
entry.push(value_id.clone());
}
} else {
entry.retain(|v| v != &value_id);
if entry.is_empty() {
favorites.remove(&config_id);
}
}
});
}
fn connect(
&self,
root_dir: Option<&Path>,
@@ -213,20 +83,13 @@ impl AgentServer for ClaudeCode {
cx: &mut App,
) -> Task<Result<(Rc<dyn AgentConnection>, Option<task::SpawnInTerminal>)>> {
let name = self.name();
let telemetry_id = self.telemetry_id();
let root_dir = root_dir.map(|root_dir| root_dir.to_string_lossy().into_owned());
let is_remote = delegate.project.read(cx).is_via_remote_server();
let store = delegate.store.downgrade();
let extra_env = load_proxy_env(cx);
let default_mode = self.default_mode(cx);
let default_model = self.default_model(cx);
let default_config_options = cx.read_global(|settings: &SettingsStore, _| {
settings
.get::<AllAgentServersSettings>(None)
.claude
.as_ref()
.map(|s| s.default_config_options.clone())
.unwrap_or_default()
});
cx.spawn(async move |cx| {
let (command, root_dir, login) = store
@@ -245,11 +108,11 @@ impl AgentServer for ClaudeCode {
.await?;
let connection = crate::acp::connect(
name,
telemetry_id,
command,
root_dir.as_ref(),
default_mode,
default_model,
default_config_options,
is_remote,
cx,
)


@@ -5,7 +5,6 @@ use std::{any::Any, path::Path};
use acp_thread::AgentConnection;
use agent_client_protocol as acp;
use anyhow::{Context as _, Result};
use collections::HashSet;
use fs::Fs;
use gpui::{App, AppContext as _, SharedString, Task};
use project::agent_server_store::{AllAgentServersSettings, CODEX_NAME};
@@ -24,6 +23,10 @@ pub(crate) mod tests {
}
impl AgentServer for Codex {
fn telemetry_id(&self) -> &'static str {
"codex"
}
fn name(&self) -> SharedString {
"Codex".into()
}
@@ -32,14 +35,14 @@ impl AgentServer for Codex {
ui::IconName::AiOpenAi
}
fn default_mode(&self, cx: &App) -> Option<acp::SessionModeId> {
fn default_mode(&self, cx: &mut App) -> Option<acp::SessionModeId> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings.get::<AllAgentServersSettings>(None).codex.clone()
});
settings
.as_ref()
.and_then(|s| s.default_mode.clone().map(acp::SessionModeId::new))
.and_then(|s| s.default_mode.clone().map(|m| acp::SessionModeId(m.into())))
}
fn set_default_mode(&self, mode_id: Option<acp::SessionModeId>, fs: Arc<dyn Fs>, cx: &mut App) {
@@ -53,14 +56,14 @@ impl AgentServer for Codex {
});
}
fn default_model(&self, cx: &App) -> Option<acp::ModelId> {
fn default_model(&self, cx: &mut App) -> Option<acp::ModelId> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings.get::<AllAgentServersSettings>(None).codex.clone()
});
settings
.as_ref()
.and_then(|s| s.default_model.clone().map(acp::ModelId::new))
.and_then(|s| s.default_model.clone().map(|m| acp::ModelId(m.into())))
}
fn set_default_model(&self, model_id: Option<acp::ModelId>, fs: Arc<dyn Fs>, cx: &mut App) {
@@ -74,139 +77,6 @@ impl AgentServer for Codex {
});
}
fn favorite_model_ids(&self, cx: &mut App) -> HashSet<acp::ModelId> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings.get::<AllAgentServersSettings>(None).codex.clone()
});
settings
.as_ref()
.map(|s| {
s.favorite_models
.iter()
.map(|id| acp::ModelId::new(id.clone()))
.collect()
})
.unwrap_or_default()
}
fn toggle_favorite_model(
&self,
model_id: acp::ModelId,
should_be_favorite: bool,
fs: Arc<dyn Fs>,
cx: &App,
) {
update_settings_file(fs, cx, move |settings, _| {
let favorite_models = &mut settings
.agent_servers
.get_or_insert_default()
.codex
.get_or_insert_default()
.favorite_models;
let model_id_str = model_id.to_string();
if should_be_favorite {
if !favorite_models.contains(&model_id_str) {
favorite_models.push(model_id_str);
}
} else {
favorite_models.retain(|id| id != &model_id_str);
}
});
}
fn default_config_option(&self, config_id: &str, cx: &App) -> Option<String> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings.get::<AllAgentServersSettings>(None).codex.clone()
});
settings
.as_ref()
.and_then(|s| s.default_config_options.get(config_id).cloned())
}
fn set_default_config_option(
&self,
config_id: &str,
value_id: Option<&str>,
fs: Arc<dyn Fs>,
cx: &mut App,
) {
let config_id = config_id.to_string();
let value_id = value_id.map(|s| s.to_string());
update_settings_file(fs, cx, move |settings, _| {
let config_options = &mut settings
.agent_servers
.get_or_insert_default()
.codex
.get_or_insert_default()
.default_config_options;
if let Some(value) = value_id.clone() {
config_options.insert(config_id.clone(), value);
} else {
config_options.remove(&config_id);
}
});
}
fn favorite_config_option_value_ids(
&self,
config_id: &acp::SessionConfigId,
cx: &mut App,
) -> HashSet<acp::SessionConfigValueId> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings.get::<AllAgentServersSettings>(None).codex.clone()
});
settings
.as_ref()
.and_then(|s| s.favorite_config_option_values.get(config_id.0.as_ref()))
.map(|values| {
values
.iter()
.cloned()
.map(acp::SessionConfigValueId::new)
.collect()
})
.unwrap_or_default()
}
fn toggle_favorite_config_option_value(
&self,
config_id: acp::SessionConfigId,
value_id: acp::SessionConfigValueId,
should_be_favorite: bool,
fs: Arc<dyn Fs>,
cx: &App,
) {
let config_id = config_id.to_string();
let value_id = value_id.to_string();
update_settings_file(fs, cx, move |settings, _| {
let favorites = &mut settings
.agent_servers
.get_or_insert_default()
.codex
.get_or_insert_default()
.favorite_config_option_values;
let entry = favorites.entry(config_id.clone()).or_insert_with(Vec::new);
if should_be_favorite {
if !entry.iter().any(|v| v == &value_id) {
entry.push(value_id.clone());
}
} else {
entry.retain(|v| v != &value_id);
if entry.is_empty() {
favorites.remove(&config_id);
}
}
});
}
fn connect(
&self,
root_dir: Option<&Path>,
@@ -214,20 +84,13 @@ impl AgentServer for Codex {
cx: &mut App,
) -> Task<Result<(Rc<dyn AgentConnection>, Option<task::SpawnInTerminal>)>> {
let name = self.name();
let telemetry_id = self.telemetry_id();
let root_dir = root_dir.map(|root_dir| root_dir.to_string_lossy().into_owned());
let is_remote = delegate.project.read(cx).is_via_remote_server();
let store = delegate.store.downgrade();
let extra_env = load_proxy_env(cx);
let default_mode = self.default_mode(cx);
let default_model = self.default_model(cx);
let default_config_options = cx.read_global(|settings: &SettingsStore, _| {
settings
.get::<AllAgentServersSettings>(None)
.codex
.as_ref()
.map(|s| s.default_config_options.clone())
.unwrap_or_default()
});
cx.spawn(async move |cx| {
let (command, root_dir, login) = store
@@ -247,11 +110,11 @@ impl AgentServer for Codex {
let connection = crate::acp::connect(
name,
telemetry_id,
command,
root_dir.as_ref(),
default_mode,
default_model,
default_config_options,
is_remote,
cx,
)


@@ -1,8 +1,7 @@
use crate::{AgentServer, AgentServerDelegate, load_proxy_env};
use crate::{AgentServerDelegate, load_proxy_env};
use acp_thread::AgentConnection;
use agent_client_protocol as acp;
use anyhow::{Context as _, Result};
use collections::HashSet;
use fs::Fs;
use gpui::{App, AppContext as _, SharedString, Task};
use project::agent_server_store::{AllAgentServersSettings, ExternalAgentServerName};
@@ -21,7 +20,11 @@ impl CustomAgentServer {
}
}
impl AgentServer for CustomAgentServer {
impl crate::AgentServer for CustomAgentServer {
fn telemetry_id(&self) -> &'static str {
"custom"
}
fn name(&self) -> SharedString {
self.name.clone()
}
@@ -30,7 +33,7 @@ impl AgentServer for CustomAgentServer {
IconName::Terminal
}
fn default_mode(&self, cx: &App) -> Option<acp::SessionModeId> {
fn default_mode(&self, cx: &mut App) -> Option<acp::SessionModeId> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings
.get::<AllAgentServersSettings>(None)
@@ -41,87 +44,7 @@ impl AgentServer for CustomAgentServer {
settings
.as_ref()
.and_then(|s| s.default_mode().map(acp::SessionModeId::new))
}
fn favorite_config_option_value_ids(
&self,
config_id: &acp::SessionConfigId,
cx: &mut App,
) -> HashSet<acp::SessionConfigValueId> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings
.get::<AllAgentServersSettings>(None)
.custom
.get(&self.name())
.cloned()
});
settings
.as_ref()
.and_then(|s| s.favorite_config_option_values(config_id.0.as_ref()))
.map(|values| {
values
.iter()
.cloned()
.map(acp::SessionConfigValueId::new)
.collect()
})
.unwrap_or_default()
}
fn toggle_favorite_config_option_value(
&self,
config_id: acp::SessionConfigId,
value_id: acp::SessionConfigValueId,
should_be_favorite: bool,
fs: Arc<dyn Fs>,
cx: &App,
) {
let name = self.name();
let config_id = config_id.to_string();
let value_id = value_id.to_string();
update_settings_file(fs, cx, move |settings, _| {
let settings = settings
.agent_servers
.get_or_insert_default()
.custom
.entry(name.clone())
.or_insert_with(|| settings::CustomAgentServerSettings::Extension {
default_model: None,
default_mode: None,
favorite_models: Vec::new(),
default_config_options: Default::default(),
favorite_config_option_values: Default::default(),
});
match settings {
settings::CustomAgentServerSettings::Custom {
favorite_config_option_values,
..
}
| settings::CustomAgentServerSettings::Extension {
favorite_config_option_values,
..
} => {
let entry = favorite_config_option_values
.entry(config_id.clone())
.or_insert_with(Vec::new);
if should_be_favorite {
if !entry.iter().any(|v| v == &value_id) {
entry.push(value_id.clone());
}
} else {
entry.retain(|v| v != &value_id);
if entry.is_empty() {
favorite_config_option_values.remove(&config_id);
}
}
}
}
});
.and_then(|s| s.default_mode().map(|m| acp::SessionModeId(m.into())))
}
fn set_default_mode(&self, mode_id: Option<acp::SessionModeId>, fs: Arc<dyn Fs>, cx: &mut App) {
@@ -135,9 +58,6 @@ impl AgentServer for CustomAgentServer {
.or_insert_with(|| settings::CustomAgentServerSettings::Extension {
default_model: None,
default_mode: None,
favorite_models: Vec::new(),
default_config_options: Default::default(),
favorite_config_option_values: Default::default(),
});
match settings {
@@ -149,7 +69,7 @@ impl AgentServer for CustomAgentServer {
});
}
fn default_model(&self, cx: &App) -> Option<acp::ModelId> {
fn default_model(&self, cx: &mut App) -> Option<acp::ModelId> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings
.get::<AllAgentServersSettings>(None)
@@ -160,7 +80,7 @@ impl AgentServer for CustomAgentServer {
settings
.as_ref()
.and_then(|s| s.default_model().map(acp::ModelId::new))
.and_then(|s| s.default_model().map(|m| acp::ModelId(m.into())))
}
fn set_default_model(&self, model_id: Option<acp::ModelId>, fs: Arc<dyn Fs>, cx: &mut App) {
@@ -174,9 +94,6 @@ impl AgentServer for CustomAgentServer {
.or_insert_with(|| settings::CustomAgentServerSettings::Extension {
default_model: None,
default_mode: None,
favorite_models: Vec::new(),
default_config_options: Default::default(),
favorite_config_option_values: Default::default(),
});
match settings {
@@ -188,125 +105,6 @@ impl AgentServer for CustomAgentServer {
});
}
fn favorite_model_ids(&self, cx: &mut App) -> HashSet<acp::ModelId> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings
.get::<AllAgentServersSettings>(None)
.custom
.get(&self.name())
.cloned()
});
settings
.as_ref()
.map(|s| {
s.favorite_models()
.iter()
.map(|id| acp::ModelId::new(id.clone()))
.collect()
})
.unwrap_or_default()
}
fn toggle_favorite_model(
&self,
model_id: acp::ModelId,
should_be_favorite: bool,
fs: Arc<dyn Fs>,
cx: &App,
) {
let name = self.name();
update_settings_file(fs, cx, move |settings, _| {
let settings = settings
.agent_servers
.get_or_insert_default()
.custom
.entry(name.clone())
.or_insert_with(|| settings::CustomAgentServerSettings::Extension {
default_model: None,
default_mode: None,
favorite_models: Vec::new(),
default_config_options: Default::default(),
favorite_config_option_values: Default::default(),
});
let favorite_models = match settings {
settings::CustomAgentServerSettings::Custom {
favorite_models, ..
}
| settings::CustomAgentServerSettings::Extension {
favorite_models, ..
} => favorite_models,
};
let model_id_str = model_id.to_string();
if should_be_favorite {
if !favorite_models.contains(&model_id_str) {
favorite_models.push(model_id_str);
}
} else {
favorite_models.retain(|id| id != &model_id_str);
}
});
}
fn default_config_option(&self, config_id: &str, cx: &App) -> Option<String> {
let settings = cx.read_global(|settings: &SettingsStore, _| {
settings
.get::<AllAgentServersSettings>(None)
.custom
.get(&self.name())
.cloned()
});
settings
.as_ref()
.and_then(|s| s.default_config_option(config_id).map(|s| s.to_string()))
}
fn set_default_config_option(
&self,
config_id: &str,
value_id: Option<&str>,
fs: Arc<dyn Fs>,
cx: &mut App,
) {
let name = self.name();
let config_id = config_id.to_string();
let value_id = value_id.map(|s| s.to_string());
update_settings_file(fs, cx, move |settings, _| {
let settings = settings
.agent_servers
.get_or_insert_default()
.custom
.entry(name.clone())
.or_insert_with(|| settings::CustomAgentServerSettings::Extension {
default_model: None,
default_mode: None,
favorite_models: Vec::new(),
default_config_options: Default::default(),
favorite_config_option_values: Default::default(),
});
match settings {
settings::CustomAgentServerSettings::Custom {
default_config_options,
..
}
| settings::CustomAgentServerSettings::Extension {
default_config_options,
..
} => {
if let Some(value) = value_id.clone() {
default_config_options.insert(config_id.clone(), value);
} else {
default_config_options.remove(&config_id);
}
}
}
});
}
fn connect(
&self,
root_dir: Option<&Path>,
@@ -314,29 +112,14 @@ impl AgentServer for CustomAgentServer {
cx: &mut App,
) -> Task<Result<(Rc<dyn AgentConnection>, Option<task::SpawnInTerminal>)>> {
let name = self.name();
let telemetry_id = self.telemetry_id();
let root_dir = root_dir.map(|root_dir| root_dir.to_string_lossy().into_owned());
let is_remote = delegate.project.read(cx).is_via_remote_server();
let default_mode = self.default_mode(cx);
let default_model = self.default_model(cx);
let default_config_options = cx.read_global(|settings: &SettingsStore, _| {
settings
.get::<AllAgentServersSettings>(None)
.custom
.get(&self.name())
.map(|s| match s {
project::agent_server_store::CustomAgentServerSettings::Custom {
default_config_options,
..
}
| project::agent_server_store::CustomAgentServerSettings::Extension {
default_config_options,
..
} => default_config_options.clone(),
})
.unwrap_or_default()
});
let store = delegate.store.downgrade();
let extra_env = load_proxy_env(cx);
cx.spawn(async move |cx| {
let (command, root_dir, login) = store
.update(cx, |store, cx| {
@@ -356,11 +139,11 @@ impl AgentServer for CustomAgentServer {
.await?;
let connection = crate::acp::connect(
name,
telemetry_id,
command,
root_dir.as_ref(),
default_mode,
default_model,
default_config_options,
is_remote,
cx,
)


@@ -82,9 +82,26 @@ where
.update(cx, |thread, cx| {
thread.send(
vec![
"Read the file ".into(),
acp::ContentBlock::ResourceLink(acp::ResourceLink::new("foo.rs", "foo.rs")),
" and tell me what the content of the println! is".into(),
acp::ContentBlock::Text(acp::TextContent {
text: "Read the file ".into(),
annotations: None,
meta: None,
}),
acp::ContentBlock::ResourceLink(acp::ResourceLink {
uri: "foo.rs".into(),
name: "foo.rs".into(),
annotations: None,
description: None,
mime_type: None,
size: None,
title: None,
meta: None,
}),
acp::ContentBlock::Text(acp::TextContent {
text: " and tell me what the content of the println! is".into(),
annotations: None,
meta: None,
}),
],
cx,
)
@@ -412,7 +429,7 @@ macro_rules! common_e2e_tests {
async fn tool_call_with_permission(cx: &mut ::gpui::TestAppContext) {
$crate::e2e_tests::test_tool_call_with_permission(
$server,
::agent_client_protocol::PermissionOptionId::new($allow_option_id),
::agent_client_protocol::PermissionOptionId($allow_option_id.into()),
cx,
)
.await;
@@ -455,12 +472,20 @@ pub async fn init_test(cx: &mut TestAppContext) -> Arc<FakeFs> {
project::agent_server_store::AllAgentServersSettings {
claude: Some(BuiltinAgentServerSettings {
path: Some("claude-code-acp".into()),
..Default::default()
args: None,
env: None,
ignore_system_version: None,
default_mode: None,
default_model: None,
}),
gemini: Some(crate::gemini::tests::local_command().into()),
codex: Some(BuiltinAgentServerSettings {
path: Some("codex-acp".into()),
..Default::default()
args: None,
env: None,
ignore_system_version: None,
default_mode: None,
default_model: None,
}),
custom: collections::HashMap::default(),
},


@@ -4,15 +4,18 @@ use std::{any::Any, path::Path};
use crate::{AgentServer, AgentServerDelegate, load_proxy_env};
use acp_thread::AgentConnection;
use anyhow::{Context as _, Result};
use gpui::{App, AppContext as _, SharedString, Task};
use gpui::{App, SharedString, Task};
use language_models::provider::google::GoogleLanguageModelProvider;
use project::agent_server_store::{AllAgentServersSettings, GEMINI_NAME};
use settings::SettingsStore;
use project::agent_server_store::GEMINI_NAME;
#[derive(Clone)]
pub struct Gemini;
impl AgentServer for Gemini {
fn telemetry_id(&self) -> &'static str {
"gemini-cli"
}
fn name(&self) -> SharedString {
"Gemini CLI".into()
}
@@ -28,20 +31,13 @@ impl AgentServer for Gemini {
cx: &mut App,
) -> Task<Result<(Rc<dyn AgentConnection>, Option<task::SpawnInTerminal>)>> {
let name = self.name();
let telemetry_id = self.telemetry_id();
let root_dir = root_dir.map(|root_dir| root_dir.to_string_lossy().into_owned());
let is_remote = delegate.project.read(cx).is_via_remote_server();
let store = delegate.store.downgrade();
let mut extra_env = load_proxy_env(cx);
let default_mode = self.default_mode(cx);
let default_model = self.default_model(cx);
let default_config_options = cx.read_global(|settings: &SettingsStore, _| {
settings
.get::<AllAgentServersSettings>(None)
.gemini
.as_ref()
.map(|s| s.default_config_options.clone())
.unwrap_or_default()
});
cx.spawn(async move |cx| {
extra_env.insert("SURFACE".to_owned(), "zed".to_owned());
@@ -70,11 +66,11 @@ impl AgentServer for Gemini {
let connection = crate::acp::connect(
name,
telemetry_id,
command,
root_dir.as_ref(),
default_mode,
default_model,
default_config_options,
is_remote,
cx,
)

Some files were not shown because too many files have changed in this diff.