Compare commits

..

8 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Oleksiy Syvokon | 3274a6abb0 | More FIM templates | 2025-12-19 15:09:18 +02:00 |
| Oleksiy Syvokon | e3ffc53c6a | Fix panicing when completions point to outdated snapshot | 2025-12-19 14:42:52 +02:00 |
| Oleksiy Syvokon | f5cafe5b95 | Change rules for the default Ollama models: use qwen2.5-coder only if it's already downloaded; otherwise, show a warning to configure the model | 2025-12-19 14:22:12 +02:00 |
| Oleksiy Syvokon | 2369fc91b4 | Settings UI | 2025-12-18 21:45:18 +02:00 |
| Oleksiy Syvokon | cc2d1f935f | Display configure providers for Ollama | 2025-12-18 21:23:51 +02:00 |
| Oleksiy Syvokon | e39dee27cb | Enable Ollama provider only if Ollama service is running | 2025-12-18 21:05:16 +02:00 |
| Oleksiy Syvokon | d3ebd02828 | Fix default model | 2025-12-18 20:38:34 +02:00 |
| Oleksiy Syvokon | 8062ee53a6 | WIP: Initial Ollama edit prediction provider implementation | 2025-12-18 20:20:53 +02:00 |
234 changed files with 3508 additions and 15608 deletions


@@ -1,55 +0,0 @@
# Phase 2: Explore Repository
You are analyzing a codebase to understand its structure before reviewing documentation impact.
## Objective
Produce a structured overview of the repository to inform subsequent documentation analysis.
## Instructions
1. **Identify Primary Languages and Frameworks**
- Scan for Cargo.toml, package.json, or other manifest files
- Note the primary language(s) and key dependencies
2. **Map Documentation Structure**
- This project uses **mdBook** (https://rust-lang.github.io/mdBook/)
- Documentation is in `docs/src/`
- Table of contents: `docs/src/SUMMARY.md` (mdBook format: https://rust-lang.github.io/mdBook/format/summary.html)
- Style guide: `docs/.rules`
- Agent guidelines: `docs/AGENTS.md`
- Formatting: Prettier (config in `docs/.prettierrc`)
3. **Identify Build and Tooling**
- Note build systems (cargo, npm, etc.)
- Identify documentation tooling (mdbook, etc.)
4. **Output Format**
Produce a JSON summary:
```json
{
"primary_language": "Rust",
"frameworks": ["GPUI"],
"documentation": {
"system": "mdBook",
"location": "docs/src/",
"toc_file": "docs/src/SUMMARY.md",
"toc_format": "https://rust-lang.github.io/mdBook/format/summary.html",
"style_guide": "docs/.rules",
"agent_guidelines": "docs/AGENTS.md",
"formatter": "prettier",
"formatter_config": "docs/.prettierrc",
"custom_preprocessor": "docs_preprocessor (handles {#kb action::Name} syntax)"
},
"key_directories": {
"source": "crates/",
"docs": "docs/src/",
"extensions": "extensions/"
}
}
```
## Constraints
- Read-only: Do not modify any files
- Focus on structure, not content details
- Complete within 2 minutes
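As a sketch, the manifest scan in step 1 amounts to mapping manifest file names to languages; the mapping below is an illustrative assumption, not an exhaustive list:

```sh
# Map a manifest file name to a primary language (illustrative mapping only)
detect_language() {
  case "$1" in
    Cargo.toml)   echo "Rust" ;;
    package.json) echo "JavaScript/TypeScript" ;;
    go.mod)       echo "Go" ;;
    *)            echo "unknown" ;;
  esac
}

primary=$(detect_language Cargo.toml)
echo "primary_language: $primary"
```

An agent would apply this to whichever manifest files it actually finds at the repository root.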


@@ -1,57 +0,0 @@
# Phase 3: Analyze Changes
You are analyzing code changes to understand their nature and scope.
## Objective
Produce a clear, neutral summary of what changed in the codebase.
## Input
You will receive:
- List of changed files from the triggering commit/PR
- Repository structure from Phase 2
## Instructions
1. **Categorize Changed Files**
- Source code (which crates/modules)
- Configuration
- Tests
- Documentation (already existing)
- Other
2. **Analyze Each Change**
- Review diffs for files likely to impact documentation
- Focus on: public APIs, settings, keybindings, commands, user-visible behavior
3. **Identify What Did NOT Change**
- Note stable interfaces or behaviors
- Important for avoiding unnecessary documentation updates
4. **Output Format**
Produce a markdown summary:
```markdown
## Change Analysis
### Changed Files Summary
| Category | Files | Impact Level |
| --- | --- | --- |
| Source - [crate] | file1.rs, file2.rs | High/Medium/Low |
| Settings | settings.json | Medium |
| Tests | test_*.rs | None |
### Behavioral Changes
- **[Feature/Area]**: Description of what changed from user perspective
- **[Feature/Area]**: Description...
### Unchanged Areas
- [Area]: Confirmed no changes to [specific behavior]
### Files Requiring Deeper Review
- `path/to/file.rs`: Reason for deeper review
```
## Constraints
- Read-only: Do not modify any files
- Neutral tone: Describe what changed, not whether it's good/bad
- Do not propose documentation changes yet
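The categorization in step 1 can be sketched as path-prefix matching; the prefixes and example paths below are assumptions based on the repository layout described in Phase 2:

```sh
# Classify a changed file by its path (prefixes assumed from the repo layout)
categorize() {
  case "$1" in
    docs/src/*)    echo "documentation" ;;
    crates/*)      echo "source" ;;
    *.json|*.toml) echo "configuration" ;;
    *)             echo "other" ;;
  esac
}

for f in crates/ollama/src/ollama.rs docs/src/ai/edit-prediction.md Cargo.toml; do
  echo "$f -> $(categorize "$f")"
done
```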


@@ -1,76 +0,0 @@
# Phase 4: Plan Documentation Impact
You are determining whether and how documentation should be updated based on code changes.
## Objective
Produce a structured documentation plan that will guide Phase 5 execution.
## Documentation System
This is an **mdBook** site (https://rust-lang.github.io/mdBook/):
- `docs/src/SUMMARY.md` defines book structure per https://rust-lang.github.io/mdBook/format/summary.html
- If adding new pages, they MUST be added to SUMMARY.md
- Use `{#kb action::ActionName}` syntax for keybindings (custom preprocessor expands these)
- Prettier formatting (80 char width) will be applied automatically
## Input
You will receive:
- Change analysis from Phase 3
- Repository structure from Phase 2
- Documentation guidelines from `docs/AGENTS.md`
## Instructions
1. **Review AGENTS.md**
- Load and apply all rules from `docs/AGENTS.md`
- Respect scope boundaries (in-scope vs out-of-scope)
2. **Evaluate Documentation Impact**
For each behavioral change from Phase 3:
- Does existing documentation cover this area?
- Is the documentation now inaccurate or incomplete?
- Classify per AGENTS.md "Change Classification" section
3. **Identify Specific Updates**
For each required update:
- Exact file path
- Specific section or heading
- Type of change (update existing, add new, deprecate)
- Description of the change
4. **Flag Uncertainty**
Explicitly mark:
- Assumptions you're making
- Areas where human confirmation is needed
- Ambiguous requirements
5. **Output Format**
Use the exact format specified in `docs/AGENTS.md` Phase 4 section:
```markdown
## Documentation Impact Assessment
### Summary
Brief description of code changes analyzed.
### Documentation Updates Required: [Yes/No]
### Planned Changes
#### 1. [File Path]
- **Section**: [Section name or "New section"]
- **Change Type**: [Update/Add/Deprecate]
- **Reason**: Why this change is needed
- **Description**: What will be added/modified
### Uncertainty Flags
- [ ] [Description of any assumptions or areas needing confirmation]
### No Changes Needed
- [List files reviewed but not requiring updates, with brief reason]
```
## Constraints
- Read-only: Do not modify any files
- Conservative: When uncertain, flag for human review rather than planning changes
- Scoped: Only plan changes that trace directly to code changes from Phase 3
- No scope expansion: Do not plan "improvements" unrelated to triggering changes
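The "does existing documentation cover this area?" check in step 2 reduces to searching doc pages for the changed identifier; the page text and identifier below are made-up examples:

```sh
# Check whether an existing docs page mentions a changed identifier
# (page content and identifier are hypothetical)
page='Ollama edit predictions use the qwen2.5-coder model by default.'
changed_symbol="qwen2.5-coder"
if printf '%s' "$page" | grep -q "$changed_symbol"; then
  status="covered"
else
  status="gap"
fi
echo "$changed_symbol: $status"
```

A "gap" result would feed the plan's "Add" change type; "covered" suggests "Update".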


@@ -1,67 +0,0 @@
# Phase 5: Apply Documentation Plan
You are executing a pre-approved documentation plan for an **mdBook** documentation site.
## Objective
Implement exactly the changes specified in the documentation plan from Phase 4.
## Documentation System
- **mdBook**: https://rust-lang.github.io/mdBook/
- **SUMMARY.md**: Follows mdBook format (https://rust-lang.github.io/mdBook/format/summary.html)
- **Prettier**: Will be run automatically after this phase (80 char line width)
- **Custom preprocessor**: Use `{#kb action::ActionName}` for keybindings instead of hardcoding
## Input
You will receive:
- Documentation plan from Phase 4
- Documentation guidelines from `docs/AGENTS.md`
- Style rules from `docs/.rules`
## Instructions
1. **Validate Plan**
- Confirm all planned files are within scope per AGENTS.md
- Verify no out-of-scope files are targeted
2. **Execute Each Planned Change**
For each item in "Planned Changes":
- Navigate to the specified file
- Locate the specified section
- Apply the described change
- Follow style rules from `docs/.rules`
3. **Style Compliance**
Every edit must follow `docs/.rules`:
- Second person, present tense
- No minimizing words ("simply", "just", "easily")
- Proper keybinding format (`Cmd+Shift+P`)
- Settings Editor first, JSON second
- Correct terminology (folder not directory, etc.)
4. **Preserve Context**
- Maintain surrounding content structure
- Keep consistent heading levels
- Preserve existing cross-references
## Constraints
- Execute ONLY changes listed in the plan
- Do not discover new documentation targets
- Do not make stylistic improvements outside planned sections
- Do not expand scope beyond what Phase 4 specified
- If a planned change cannot be applied (file missing, section not found), skip and note it
## Output
After applying changes, output a summary:
```markdown
## Applied Changes
### Successfully Applied
- `path/to/file.md`: [Brief description of change]
### Skipped (Could Not Apply)
- `path/to/file.md`: [Reason - e.g., "Section not found"]
### Warnings
- [Any issues encountered during application]
```
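The scope validation in step 1 can be sketched as a path check; treating `docs/src/` as the only in-scope prefix is an assumption drawn from the constraints above, and the example paths are hypothetical:

```sh
# Return success only for paths inside the documentation tree
in_scope() {
  case "$1" in
    docs/src/*) return 0 ;;
    *)          return 1 ;;
  esac
}

for f in docs/src/ai/edit-prediction.md crates/ollama/src/lib.rs; do
  if in_scope "$f"; then echo "apply: $f"; else echo "skip (out of scope): $f"; fi
done
```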


@@ -1,54 +0,0 @@
# Phase 6: Summarize Changes
You are generating a summary of documentation updates for PR review.
## Objective
Create a clear, reviewable summary of all documentation changes made.
## Input
You will receive:
- Applied changes report from Phase 5
- Original change analysis from Phase 3
- Git diff of documentation changes
## Instructions
1. **Gather Change Information**
- List all modified documentation files
- Identify the corresponding code changes that triggered each update
2. **Generate Summary**
Use the format specified in `docs/AGENTS.md` Phase 6 section:
```markdown
## Documentation Update Summary
### Changes Made
| File | Change | Related Code |
| --- | --- | --- |
| docs/src/path.md | Brief description | PR #123 or commit SHA |
### Rationale
Brief explanation of why these updates were made, linking back to the triggering code changes.
### Review Notes
- Items reviewers should pay special attention to
- Any uncertainty flags from Phase 4 that were addressed
- Assumptions made during documentation
```
3. **Add Context for Reviewers**
- Highlight any changes that might be controversial
- Note if any planned changes were skipped and why
- Flag areas where reviewer expertise is especially needed
## Output Format
The summary should be suitable for:
- PR description body
- Commit message (condensed version)
- Team communication
## Constraints
- Read-only (documentation changes already applied in Phase 5)
- Factual: Describe what was done, not justify why it's good
- Complete: Account for all changes, including skipped items
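One way the condensed commit-message version could be derived is by keeping only the data rows of the "Changes Made" table; the file path, description, and SHA below are placeholders:

```sh
# Keep only data rows of the "Changes Made" table as the condensed form
summary='## Documentation Update Summary
### Changes Made
| File | Change | Related Code |
| --- | --- | --- |
| docs/src/ai/edit-prediction.md | Describe Ollama provider | 8062ee53a6 |'

condensed=$(printf '%s\n' "$summary" | grep '^| docs/')
echo "$condensed"
```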


@@ -1,67 +0,0 @@
# Phase 7: Commit and Open PR
You are creating a git branch, committing documentation changes, and opening a PR.
## Objective
Package documentation updates into a reviewable pull request.
## Input
You will receive:
- Summary from Phase 6
- List of modified files
## Instructions
1. **Create Branch**
```sh
git checkout -b docs/auto-update-{date}
```
Use format: `docs/auto-update-YYYY-MM-DD` or `docs/auto-update-{short-sha}`
2. **Stage and Commit**
- Stage only documentation files in `docs/src/`
- Do not stage any other files
Commit message format:
```
docs: auto-update documentation for [brief description]
[Summary from Phase 6, condensed]
Triggered by: [commit SHA or PR reference]
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
```
3. **Push Branch**
```sh
git push -u origin docs/auto-update-{date}
```
4. **Create Pull Request**
Use the Phase 6 summary as the PR body.
PR Title: `docs: [Brief description of documentation updates]`
Labels (if available): `documentation`, `automated`
Base branch: `main`
## Constraints
- Do NOT auto-merge
- Do NOT request specific reviewers (let CODEOWNERS handle it)
- Do NOT modify files outside `docs/src/`
- If no changes to commit, exit gracefully with message "No documentation changes to commit"
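The graceful-exit constraint can be sketched as the guard below; the demo builds a throwaway repository and assumes `git` is on `PATH`:

```sh
# Demonstrate the no-changes guard in a scratch repository
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
mkdir -p docs/src
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m init
# Both the working tree and the index are clean for docs/src/, so the guard fires
if git diff --quiet -- docs/src/ && git diff --cached --quiet -- docs/src/; then
  msg="No documentation changes to commit"
fi
echo "$msg"
```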
## Output
```markdown
## PR Created
- **Branch**: docs/auto-update-{date}
- **PR URL**: https://github.com/zed-industries/zed/pull/XXXX
- **Status**: Ready for review
### Commit
- SHA: {commit-sha}
- Files: {count} documentation files modified
```


@@ -19,18 +19,6 @@ runs:
shell: bash -euxo pipefail {0}
run: ./script/linux
- name: Install mold linker
shell: bash -euxo pipefail {0}
run: ./script/install-mold
- name: Download WASI SDK
shell: bash -euxo pipefail {0}
run: ./script/download-wasi-sdk
- name: Generate action metadata
shell: bash -euxo pipefail {0}
run: ./script/generate-action-metadata
- name: Check for broken links (in MD)
uses: lycheeverse/lychee-action@82202e5e9c2f4ef1a55a3d02563e1cb6041e5332 # v2.4.1
with:


@@ -1,40 +1,29 @@
name: "Close Stale Issues"
on:
schedule:
- cron: "0 2 * * 5"
- cron: "0 8 31 DEC *"
workflow_dispatch:
inputs:
debug-only:
description: "Run in dry-run mode (no changes made)"
type: boolean
default: false
operations-per-run:
description: "Max number of issues to process (default: 1000)"
type: number
default: 1000
jobs:
stale:
if: github.repository_owner == 'zed-industries'
runs-on: ubuntu-latest
steps:
- uses: actions/stale@997185467fa4f803885201cee163a9f38240193d # v10
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: >
Hi there!
Zed development moves fast and a significant number of bugs become outdated.
If you can reproduce this bug on the latest stable Zed, please let us know by leaving a comment with the Zed version.
If the bug doesn't appear for you anymore, feel free to close the issue yourself; otherwise, the bot will close it in a couple of weeks.
Hi there! 👋
We're working to clean up our issue tracker by closing older bugs that might not be relevant anymore. If you are able to reproduce this issue in the latest version of Zed, please let us know by commenting on this issue, and it will be kept open. If you can't reproduce it, feel free to close the issue yourself. Otherwise, it will close automatically in 14 days.
Thanks for your help!
close-issue-message: "This issue was closed due to inactivity. If you're still experiencing this problem, please leave a comment with your Zed version so that we can reopen the issue."
close-issue-message: "This issue was closed due to inactivity. If you're still experiencing this problem, please open a new issue with a link to this issue."
days-before-stale: 60
days-before-close: 14
only-issue-types: "Bug,Crash"
operations-per-run: ${{ inputs.operations-per-run || 1000 }}
operations-per-run: 1000
ascending: true
enable-statistics: true
debug-only: ${{ inputs.debug-only }}
stale-issue-label: "stale"
exempt-issue-labels: "never stale"


@@ -1,264 +0,0 @@
name: Documentation Automation
on:
push:
branches: [main]
paths:
- 'crates/**'
- 'extensions/**'
workflow_dispatch:
inputs:
pr_number:
description: 'PR number to analyze (gets full PR diff)'
required: false
type: string
trigger_sha:
description: 'Commit SHA to analyze (ignored if pr_number is set)'
required: false
type: string
permissions:
contents: write
pull-requests: write
env:
FACTORY_API_KEY: ${{ secrets.FACTORY_API_KEY }}
DROID_MODEL: claude-opus-4-5-20251101
jobs:
docs-automation:
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Install Droid CLI
id: install-droid
run: |
curl -fsSL https://app.factory.ai/cli | sh
echo "${HOME}/.local/bin" >> "$GITHUB_PATH"
echo "DROID_BIN=${HOME}/.local/bin/droid" >> "$GITHUB_ENV"
# Verify installation
"${HOME}/.local/bin/droid" --version
- name: Setup Node.js (for Prettier)
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install Prettier
run: npm install -g prettier
- name: Get changed files
id: changed
run: |
if [ -n "${{ inputs.pr_number }}" ]; then
# Get full PR diff
echo "Analyzing PR #${{ inputs.pr_number }}"
echo "source=pr" >> "$GITHUB_OUTPUT"
echo "ref=${{ inputs.pr_number }}" >> "$GITHUB_OUTPUT"
gh pr diff "${{ inputs.pr_number }}" --name-only > /tmp/changed_files.txt
elif [ -n "${{ inputs.trigger_sha }}" ]; then
# Get single commit diff
SHA="${{ inputs.trigger_sha }}"
echo "Analyzing commit $SHA"
echo "source=commit" >> "$GITHUB_OUTPUT"
echo "ref=$SHA" >> "$GITHUB_OUTPUT"
git diff --name-only "${SHA}^" "$SHA" > /tmp/changed_files.txt
else
# Default to current commit
SHA="${{ github.sha }}"
echo "Analyzing commit $SHA"
echo "source=commit" >> "$GITHUB_OUTPUT"
echo "ref=$SHA" >> "$GITHUB_OUTPUT"
git diff --name-only "${SHA}^" "$SHA" > /tmp/changed_files.txt || git diff --name-only HEAD~1 HEAD > /tmp/changed_files.txt
fi
echo "Changed files:"
cat /tmp/changed_files.txt
env:
GH_TOKEN: ${{ github.token }}
# Phase 0: Guardrails are loaded via AGENTS.md in each phase
# Phase 2: Explore Repository (Read-Only - default)
- name: "Phase 2: Explore Repository"
id: phase2
run: |
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
-f .factory/prompts/docs-automation/phase2-explore.md \
> /tmp/phase2-output.txt 2>&1 || true
echo "Repository exploration complete"
cat /tmp/phase2-output.txt
# Phase 3: Analyze Changes (Read-Only - default)
- name: "Phase 3: Analyze Changes"
id: phase3
run: |
CHANGED_FILES=$(tr '\n' ' ' < /tmp/changed_files.txt)
echo "Analyzing changes in: $CHANGED_FILES"
# Build prompt with context (unquoted delimiter so command and variable substitutions expand)
cat > /tmp/phase3-prompt.md << EOF
$(cat .factory/prompts/docs-automation/phase3-analyze.md)
## Context
### Changed Files
$CHANGED_FILES
### Phase 2 Output
$(cat /tmp/phase2-output.txt)
EOF
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
-f /tmp/phase3-prompt.md \
> /tmp/phase3-output.md 2>&1 || true
echo "Change analysis complete"
cat /tmp/phase3-output.md
# Phase 4: Plan Documentation Impact (Read-Only - default)
- name: "Phase 4: Plan Documentation Impact"
id: phase4
run: |
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
-f .factory/prompts/docs-automation/phase4-plan.md \
> /tmp/phase4-plan.md 2>&1 || true
echo "Documentation plan complete"
cat /tmp/phase4-plan.md
# Check if updates are required (match both the sentinel and the Phase 4 output format)
if grep -q "NO_UPDATES_REQUIRED" /tmp/phase4-plan.md || grep -qiE "Documentation Updates Required:? *No\b" /tmp/phase4-plan.md; then
echo "updates_required=false" >> "$GITHUB_OUTPUT"
else
echo "updates_required=true" >> "$GITHUB_OUTPUT"
fi
# Phase 5: Apply Plan (Write-Enabled with --auto medium)
- name: "Phase 5: Apply Documentation Plan"
id: phase5
if: steps.phase4.outputs.updates_required == 'true'
run: |
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
--auto medium \
-f .factory/prompts/docs-automation/phase5-apply.md \
> /tmp/phase5-report.md 2>&1 || true
echo "Documentation updates applied"
cat /tmp/phase5-report.md
# Phase 5b: Format with Prettier
- name: "Phase 5b: Format with Prettier"
id: phase5b
if: steps.phase4.outputs.updates_required == 'true'
run: |
echo "Formatting documentation with Prettier..."
(cd docs && prettier --write src/)
echo "Verifying Prettier formatting passes..."
(cd docs && prettier --check src/)
echo "Prettier formatting complete"
# Phase 6: Summarize Changes (Read-Only - default)
- name: "Phase 6: Summarize Changes"
id: phase6
if: steps.phase4.outputs.updates_required == 'true'
run: |
# Get git diff of docs and include it in the prompt so Phase 6 actually receives it
git diff docs/src/ > /tmp/docs-diff.txt || true
{
cat .factory/prompts/docs-automation/phase6-summarize.md
echo "## Context: documentation diff"
cat /tmp/docs-diff.txt
} > /tmp/phase6-prompt.md
"$DROID_BIN" exec \
-m "$DROID_MODEL" \
-f /tmp/phase6-prompt.md \
> /tmp/phase6-summary.md 2>&1 || true
echo "Summary generated"
cat /tmp/phase6-summary.md
# Phase 7: Commit and Open PR
- name: "Phase 7: Create PR"
id: phase7
if: steps.phase4.outputs.updates_required == 'true'
run: |
# Check if there are actual changes
if git diff --quiet docs/src/; then
echo "No documentation changes detected"
exit 0
fi
# Configure git
git config user.name "factory-droid[bot]"
git config user.email "138933559+factory-droid[bot]@users.noreply.github.com"
# Daily batch branch - one branch per day, multiple commits accumulate
BRANCH_NAME="docs/auto-update-$(date +%Y-%m-%d)"
# Stash local changes from phase 5
git stash push -m "docs-automation-changes" -- docs/src/
# Check if branch already exists on remote
if git ls-remote --exit-code --heads origin "$BRANCH_NAME" > /dev/null 2>&1; then
echo "Branch $BRANCH_NAME exists, checking out and updating..."
git fetch origin "$BRANCH_NAME"
git checkout -B "$BRANCH_NAME" "origin/$BRANCH_NAME"
else
echo "Creating new branch $BRANCH_NAME..."
git checkout -b "$BRANCH_NAME"
fi
# Apply stashed changes; on conflict the stash is preserved for manual recovery
git stash pop || true
# Stage and commit
git add docs/src/
SUMMARY=$(head -50 < /tmp/phase6-summary.md)
git commit -m "docs: auto-update documentation
${SUMMARY}
Triggered by: ${{ steps.changed.outputs.source }} ${{ steps.changed.outputs.ref }}
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>"
# Push
git push -u origin "$BRANCH_NAME"
# Check if a PR already exists for this branch ('// empty' prevents a literal "null" result)
EXISTING_PR=$(gh pr list --head "$BRANCH_NAME" --json number --jq '.[0].number // empty' || echo "")
if [ -n "$EXISTING_PR" ]; then
echo "PR #$EXISTING_PR already exists for branch $BRANCH_NAME, updated with new commit"
else
# Create new PR
gh pr create \
--title "docs: automated documentation update ($(date +%Y-%m-%d))" \
--body-file /tmp/phase6-summary.md \
--base main || true
echo "PR created on branch: $BRANCH_NAME"
fi
env:
GH_TOKEN: ${{ github.token }}
# Summary output
- name: "Summary"
if: always()
run: |
echo "## Documentation Automation Summary" >> "$GITHUB_STEP_SUMMARY"
echo "" >> "$GITHUB_STEP_SUMMARY"
if [ "${{ steps.phase4.outputs.updates_required }}" == "false" ]; then
echo "No documentation updates required for this change." >> "$GITHUB_STEP_SUMMARY"
elif [ -f /tmp/phase6-summary.md ]; then
cat /tmp/phase6-summary.md >> "$GITHUB_STEP_SUMMARY"
else
echo "Workflow completed. Check individual phase outputs for details." >> "$GITHUB_STEP_SUMMARY"
fi


@@ -61,7 +61,8 @@ jobs:
uses: namespacelabs/nscloud-cache-action@v1
with:
cache: rust
- name: steps::cargo_fmt
- id: cargo_fmt
name: steps::cargo_fmt
run: cargo fmt --all -- --check
shell: bash -euxo pipefail {0}
- name: extension_tests::run_clippy


@@ -26,7 +26,8 @@ jobs:
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
- id: clippy
name: steps::clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::clear_target_dir_if_large
@@ -71,9 +72,15 @@ jobs:
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
- id: clippy
name: steps::clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
- id: record_clippy_failure
name: steps::record_clippy_failure
if: always()
run: echo "failed=${{ steps.clippy.outcome == 'failure' }}" >> "$GITHUB_OUTPUT"
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
uses: taiki-e/install-action@nextest
- name: steps::clear_target_dir_if_large
@@ -87,6 +94,8 @@ jobs:
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
outputs:
clippy_failed: ${{ steps.record_clippy_failure.outputs.failed == 'true' }}
timeout-minutes: 60
run_tests_windows:
if: (github.repository_owner == 'zed-industries' || github.repository_owner == 'zed-extensions')
@@ -105,7 +114,8 @@ jobs:
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
- id: clippy
name: steps::clippy
run: ./script/clippy.ps1
shell: pwsh
- name: steps::clear_target_dir_if_large


@@ -20,7 +20,8 @@ jobs:
with:
clean: false
fetch-depth: 0
- name: steps::cargo_fmt
- id: cargo_fmt
name: steps::cargo_fmt
run: cargo fmt --all -- --check
shell: bash -euxo pipefail {0}
- name: ./script/clippy
@@ -44,7 +45,8 @@ jobs:
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
- id: clippy
name: steps::clippy
run: ./script/clippy.ps1
shell: pwsh
- name: steps::clear_target_dir_if_large


@@ -74,12 +74,19 @@ jobs:
uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2
with:
version: '9'
- name: steps::prettier
- id: prettier
name: steps::prettier
run: ./script/prettier
shell: bash -euxo pipefail {0}
- name: steps::cargo_fmt
- id: cargo_fmt
name: steps::cargo_fmt
run: cargo fmt --all -- --check
shell: bash -euxo pipefail {0}
- id: record_style_failure
name: steps::record_style_failure
if: always()
run: echo "failed=${{ steps.prettier.outcome == 'failure' || steps.cargo_fmt.outcome == 'failure' }}" >> "$GITHUB_OUTPUT"
shell: bash -euxo pipefail {0}
- name: ./script/check-todos
run: ./script/check-todos
shell: bash -euxo pipefail {0}
@@ -90,6 +97,8 @@ jobs:
uses: crate-ci/typos@2d0ce569feab1f8752f1dde43cc2f2aa53236e06
with:
config: ./typos.toml
outputs:
style_failed: ${{ steps.record_style_failure.outputs.failed == 'true' }}
timeout-minutes: 60
run_tests_windows:
needs:
@@ -110,7 +119,8 @@ jobs:
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
- id: clippy
name: steps::clippy
run: ./script/clippy.ps1
shell: pwsh
- name: steps::clear_target_dir_if_large
@@ -157,9 +167,15 @@ jobs:
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
- id: clippy
name: steps::clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
- id: record_clippy_failure
name: steps::record_clippy_failure
if: always()
run: echo "failed=${{ steps.clippy.outcome == 'failure' }}" >> "$GITHUB_OUTPUT"
shell: bash -euxo pipefail {0}
- name: steps::cargo_install_nextest
uses: taiki-e/install-action@nextest
- name: steps::clear_target_dir_if_large
@@ -173,6 +189,8 @@ jobs:
run: |
rm -rf ./../.cargo
shell: bash -euxo pipefail {0}
outputs:
clippy_failed: ${{ steps.record_clippy_failure.outputs.failed == 'true' }}
timeout-minutes: 60
run_tests_mac:
needs:
@@ -193,7 +211,8 @@ jobs:
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020
with:
node-version: '20'
- name: steps::clippy
- id: clippy
name: steps::clippy
run: ./script/clippy
shell: bash -euxo pipefail {0}
- name: steps::clear_target_dir_if_large
@@ -353,9 +372,6 @@ jobs:
- name: steps::download_wasi_sdk
run: ./script/download-wasi-sdk
shell: bash -euxo pipefail {0}
- name: ./script/generate-action-metadata
run: ./script/generate-action-metadata
shell: bash -euxo pipefail {0}
- name: run_tests::check_docs::install_mdbook
uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08
with:
@@ -576,6 +592,24 @@ jobs:
exit $EXIT_CODE
shell: bash -euxo pipefail {0}
call_autofix:
needs:
- check_style
- run_tests_linux
if: always() && (needs.check_style.outputs.style_failed == 'true' || needs.run_tests_linux.outputs.clippy_failed == 'true') && github.event_name == 'pull_request' && github.actor != 'zed-zippy[bot]'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- id: get-app-token
name: steps::authenticate_as_zippy
uses: actions/create-github-app-token@bef1eaf1c0ac2b148ee2a0a74c65fbe6db0631f1
with:
app-id: ${{ secrets.ZED_ZIPPY_APP_ID }}
private-key: ${{ secrets.ZED_ZIPPY_APP_PRIVATE_KEY }}
- name: run_tests::call_autofix::dispatch_autofix
run: gh workflow run autofix_pr.yml -f pr_number=${{ github.event.pull_request.number }} -f run_clippy=${{ needs.run_tests_linux.outputs.clippy_failed == 'true' }}
shell: bash -euxo pipefail {0}
env:
GITHUB_TOKEN: ${{ steps.get-app-token.outputs.token }}
concurrency:
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true

.gitignore (vendored): 1 line changed

@@ -36,7 +36,6 @@
DerivedData/
Packages
xcuserdata/
crates/docs_preprocessor/actions.json
# Don't commit any secrets to the repo.
.env

Cargo.lock (generated): 72 lines changed

@@ -226,9 +226,9 @@ dependencies = [
[[package]]
name = "agent-client-protocol"
version = "0.9.2"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d3e527d7dfe0f334313d42d1d9318f0a79665f6f21c440d0798f230a77a7ed6c"
checksum = "c2ffe7d502c1e451aafc5aff655000f84d09c9af681354ac0012527009b1af13"
dependencies = [
"agent-client-protocol-schema",
"anyhow",
@@ -243,9 +243,9 @@ dependencies = [
[[package]]
name = "agent-client-protocol-schema"
version = "0.10.5"
version = "0.10.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6903a00e8ac822f9bacac59a1932754d7387c72ebb7c9c7439ad021505591da4"
checksum = "8af81cc2d5c3f9c04f73db452efd058333735ba9d51c2cf7ef33c9fee038e7e6"
dependencies = [
"anyhow",
"derive_more 2.0.1",
@@ -793,7 +793,7 @@ dependencies = [
"url",
"wayland-backend",
"wayland-client",
"wayland-protocols",
"wayland-protocols 0.32.9",
"zbus",
]
@@ -5021,6 +5021,8 @@ name = "docs_preprocessor"
version = "0.1.0"
dependencies = [
"anyhow",
"command_palette",
"gpui",
"mdbook",
"regex",
"serde",
@@ -5029,6 +5031,7 @@ dependencies = [
"task",
"theme",
"util",
"zed",
"zlog",
]
@@ -5317,6 +5320,7 @@ dependencies = [
"client",
"gpui",
"language",
"log",
"text",
]
@@ -5341,11 +5345,13 @@ dependencies = [
"gpui",
"indoc",
"language",
"language_model",
"log",
"lsp",
"markdown",
"menu",
"multi_buffer",
"ollama",
"paths",
"project",
"regex",
@@ -5924,11 +5930,9 @@ dependencies = [
"async-trait",
"client",
"collections",
"credentials_provider",
"criterion",
"ctor",
"dap",
"dirs 4.0.0",
"extension",
"fs",
"futures 0.3.31",
@@ -5937,11 +5941,8 @@ dependencies = [
"http_client",
"language",
"language_extension",
"language_model",
"log",
"lsp",
"markdown",
"menu",
"moka",
"node_runtime",
"parking_lot",
@@ -5956,21 +5957,17 @@ dependencies = [
"serde_json",
"serde_json_lenient",
"settings",
"smol",
"task",
"telemetry",
"tempfile",
"theme",
"theme_extension",
"toml 0.8.23",
"ui",
"ui_input",
"url",
"util",
"wasmparser 0.221.3",
"wasmtime",
"wasmtime-wasi",
"workspace",
"zlog",
]
@@ -7376,7 +7373,7 @@ dependencies = [
"wayland-backend",
"wayland-client",
"wayland-cursor",
"wayland-protocols",
"wayland-protocols 0.31.2",
"wayland-protocols-plasma",
"wayland-protocols-wlr",
"windows 0.61.3",
@@ -8938,8 +8935,6 @@ dependencies = [
"credentials_provider",
"deepseek",
"editor",
"extension",
"extension_host",
"fs",
"futures 0.3.31",
"google_ai",
@@ -10888,12 +10883,19 @@ name = "ollama"
version = "0.1.0"
dependencies = [
"anyhow",
"edit_prediction_context",
"edit_prediction_types",
"futures 0.3.31",
"gpui",
"http_client",
"language",
"language_model",
"log",
"schemars",
"serde",
"serde_json",
"settings",
"text",
]
[[package]]
@@ -12579,7 +12581,6 @@ dependencies = [
"gpui",
"language",
"menu",
"notifications",
"pretty_assertions",
"project",
"rayon",
@@ -12657,8 +12658,6 @@ dependencies = [
"paths",
"rope",
"serde",
"strum 0.27.2",
"tempfile",
"text",
"util",
"uuid",
@@ -14882,7 +14881,6 @@ dependencies = [
"copilot",
"edit_prediction",
"editor",
"extension_host",
"feature_flags",
"fs",
"futures 0.3.31",
@@ -14890,7 +14888,6 @@ dependencies = [
"gpui",
"heck 0.5.0",
"language",
"language_model",
"language_models",
"log",
"menu",
@@ -18940,6 +18937,18 @@ dependencies = [
"xcursor",
]
[[package]]
name = "wayland-protocols"
version = "0.31.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8f81f365b8b4a97f422ac0e8737c438024b5951734506b0e1d775c73030561f4"
dependencies = [
"bitflags 2.9.4",
"wayland-backend",
"wayland-client",
"wayland-scanner",
]
[[package]]
name = "wayland-protocols"
version = "0.32.9"
@@ -18954,14 +18963,14 @@ dependencies = [
[[package]]
name = "wayland-protocols-plasma"
version = "0.3.9"
version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a07a14257c077ab3279987c4f8bb987851bf57081b93710381daea94f2c2c032"
checksum = "23803551115ff9ea9bce586860c5c5a971e360825a0309264102a9495a5ff479"
dependencies = [
"bitflags 2.9.4",
"wayland-backend",
"wayland-client",
"wayland-protocols",
"wayland-protocols 0.31.2",
"wayland-scanner",
]
@@ -18974,7 +18983,7 @@ dependencies = [
"bitflags 2.9.4",
"wayland-backend",
"wayland-client",
"wayland-protocols",
"wayland-protocols 0.32.9",
"wayland-scanner",
]
@@ -20276,16 +20285,6 @@ dependencies = [
"zlog",
]
[[package]]
name = "worktree_benchmarks"
version = "0.1.0"
dependencies = [
"fs",
"gpui",
"settings",
"worktree",
]
[[package]]
name = "writeable"
version = "0.6.1"
@@ -20708,6 +20707,7 @@ dependencies = [
"nc",
"node_runtime",
"notifications",
"ollama",
"onboarding",
"outline",
"outline_panel",


@@ -198,7 +198,6 @@ members = [
"crates/web_search_providers",
"crates/workspace",
"crates/worktree",
"crates/worktree_benchmarks",
"crates/x_ai",
"crates/zed",
"crates/zed_actions",
@@ -439,7 +438,7 @@ ztracing_macro = { path = "crates/ztracing_macro" }
# External crates
#
agent-client-protocol = { version = "=0.9.2", features = ["unstable"] }
agent-client-protocol = { version = "=0.9.0", features = ["unstable"] }
aho-corasick = "1.1"
alacritty_terminal = "0.25.1-rc1"
any_vec = "0.14"


@@ -20,6 +20,7 @@ Other platforms are not yet available:
- [Building Zed for macOS](./docs/src/development/macos.md)
- [Building Zed for Linux](./docs/src/development/linux.md)
- [Building Zed for Windows](./docs/src/development/windows.md)
- [Running Collaboration Locally](./docs/src/development/local-collaboration.md)
### Contributing


@@ -227,7 +227,6 @@
"ctrl-g": "search::SelectNextMatch",
"ctrl-shift-g": "search::SelectPreviousMatch",
"ctrl-k l": "agent::OpenRulesLibrary",
"ctrl-shift-v": "agent::PasteRaw",
},
},
{
@@ -294,7 +293,6 @@
"shift-ctrl-r": "agent::OpenAgentDiff",
"ctrl-shift-y": "agent::KeepAll",
"ctrl-shift-n": "agent::RejectAll",
"ctrl-shift-v": "agent::PasteRaw",
},
},
{
@@ -306,7 +304,6 @@
"shift-ctrl-r": "agent::OpenAgentDiff",
"ctrl-shift-y": "agent::KeepAll",
"ctrl-shift-n": "agent::RejectAll",
"ctrl-shift-v": "agent::PasteRaw",
},
},
{


@@ -267,7 +267,6 @@
"cmd-shift-g": "search::SelectPreviousMatch",
"cmd-k l": "agent::OpenRulesLibrary",
"alt-tab": "agent::CycleFavoriteModels",
"cmd-shift-v": "agent::PasteRaw",
},
},
{
@@ -336,7 +335,6 @@
"shift-ctrl-r": "agent::OpenAgentDiff",
"cmd-shift-y": "agent::KeepAll",
"cmd-shift-n": "agent::RejectAll",
"cmd-shift-v": "agent::PasteRaw",
},
},
{
@@ -349,7 +347,6 @@
"shift-ctrl-r": "agent::OpenAgentDiff",
"cmd-shift-y": "agent::KeepAll",
"cmd-shift-n": "agent::RejectAll",
"cmd-shift-v": "agent::PasteRaw",
},
},
{


@@ -227,7 +227,6 @@
"ctrl-g": "search::SelectNextMatch",
"ctrl-shift-g": "search::SelectPreviousMatch",
"ctrl-k l": "agent::OpenRulesLibrary",
"ctrl-shift-v": "agent::PasteRaw",
},
},
{
@@ -297,7 +296,6 @@
"ctrl-shift-r": "agent::OpenAgentDiff",
"ctrl-shift-y": "agent::KeepAll",
"ctrl-shift-n": "agent::RejectAll",
"ctrl-shift-v": "agent::PasteRaw",
},
},
{
@@ -310,7 +308,6 @@
"ctrl-shift-r": "agent::OpenAgentDiff",
"ctrl-shift-y": "agent::KeepAll",
"ctrl-shift-n": "agent::RejectAll",
"ctrl-shift-v": "agent::PasteRaw",
},
},
{


@@ -1178,10 +1178,6 @@
"remove_trailing_whitespace_on_save": true,
// Whether to start a new line with a comment when a previous line is a comment as well.
"extend_comment_on_newline": true,
// Whether to continue markdown lists when pressing enter.
"extend_list_on_newline": true,
// Whether to indent list items when pressing tab after a list marker.
"indent_list_on_tab": true,
// Removes any lines containing only whitespace at the end of the file and
// ensures just one newline at the end.
"ensure_final_newline_on_save": true,
@@ -1325,14 +1321,6 @@
"hidden_files": ["**/.*"],
// Git gutter behavior configuration.
"git": {
// Global switch to enable or disable all git integration features.
// If set to true, disables all git integration features.
// If set to false, individual git integration features below will be independently enabled or disabled.
"disable_git": false,
// Whether to enable git status tracking.
"enable_status": true,
// Whether to enable git diff display.
"enable_diff": true,
// Control whether the git gutter is shown. May take 2 values:
// 1. Show the gutter
// "git_gutter": "tracked_files"
@@ -1434,6 +1422,10 @@
"model": "codestral-latest",
"max_tokens": 150,
},
"ollama": {
"api_url": "http://localhost:11434",
"model": "qwen2.5-coder:3b-base",
},
// Whether edit predictions are enabled when editing text threads in the agent panel.
// This setting has no effect if globally disabled.
"enabled_in_text_threads": true,


@@ -3,11 +3,11 @@ use agent_client_protocol::{self as acp};
use anyhow::Result;
use collections::IndexMap;
use gpui::{Entity, SharedString, Task};
use language_model::{IconOrSvg, LanguageModelProviderId};
use language_model::LanguageModelProviderId;
use project::Project;
use serde::{Deserialize, Serialize};
use std::{any::Any, error::Error, fmt, path::Path, rc::Rc, sync::Arc};
use ui::App;
use ui::{App, IconName};
use uuid::Uuid;
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize, Hash)]
@@ -215,7 +215,7 @@ pub struct AgentModelInfo {
pub id: acp::ModelId,
pub name: SharedString,
pub description: Option<SharedString>,
pub icon: Option<IconOrSvg>,
pub icon: Option<IconName>,
}
impl From<acp::ModelInfo> for AgentModelInfo {


@@ -739,7 +739,7 @@ impl ActivityIndicator {
extension_store.outstanding_operations().iter().next()
{
let (message, icon, rotate) = match operation {
ExtensionOperation::Install | ExtensionOperation::AutoInstall => (
ExtensionOperation::Install => (
format!("Installing {extension_id} extension…"),
IconName::LoadCircle,
true,


@@ -93,7 +93,7 @@ impl LanguageModels {
fn refresh_list(&mut self, cx: &App) {
let providers = LanguageModelRegistry::global(cx)
.read(cx)
.visible_providers()
.providers()
.into_iter()
.filter(|provider| provider.is_authenticated(cx))
.collect::<Vec<_>>();
@@ -164,7 +164,7 @@ impl LanguageModels {
fn authenticate_all_language_model_providers(cx: &mut App) -> Task<()> {
let authenticate_all_providers = LanguageModelRegistry::global(cx)
.read(cx)
.visible_providers()
.providers()
.iter()
.map(|provider| (provider.id(), provider.name(), provider.authenticate(cx)))
.collect::<Vec<_>>();
@@ -426,7 +426,7 @@ impl NativeAgent {
.into_iter()
.flat_map(|(contents, prompt_metadata)| match contents {
Ok(contents) => Some(UserRulesContext {
uuid: prompt_metadata.id.as_user()?,
uuid: prompt_metadata.id.user_id()?,
title: prompt_metadata.title.map(|title| title.to_string()),
contents,
}),
@@ -1630,7 +1630,7 @@ mod internal_tests {
id: acp::ModelId::new("fake/fake"),
name: "Fake".into(),
description: None,
icon: Some(language_model::IconOrSvg::Icon(ui::IconName::ZedAssistant)),
icon: Some(ui::IconName::ZedAssistant),
}]
)])
);


@@ -5,7 +5,7 @@ use crate::{AgentServer, AgentServerDelegate, load_proxy_env};
use acp_thread::AgentConnection;
use anyhow::{Context as _, Result};
use gpui::{App, SharedString, Task};
use language_models::api_key_for_gemini_cli;
use language_models::provider::google::GoogleLanguageModelProvider;
use project::agent_server_store::GEMINI_NAME;
#[derive(Clone)]
@@ -37,7 +37,11 @@ impl AgentServer for Gemini {
cx.spawn(async move |cx| {
extra_env.insert("SURFACE".to_owned(), "zed".to_owned());
if let Some(api_key) = cx.update(api_key_for_gemini_cli)?.await.ok() {
if let Some(api_key) = cx
.update(GoogleLanguageModelProvider::api_key_for_gemini_cli)?
.await
.ok()
{
extra_env.insert("GEMINI_API_KEY".into(), api_key);
}
let (command, root_dir, login) = store


@@ -34,7 +34,7 @@ use theme::ThemeSettings;
use ui::prelude::*;
use util::{ResultExt, debug_panic};
use workspace::{CollaboratorId, Workspace};
use zed_actions::agent::{Chat, PasteRaw};
use zed_actions::agent::Chat;
pub struct MessageEditor {
mention_set: Entity<MentionSet>,
@@ -543,9 +543,6 @@ impl MessageEditor {
}
fn paste(&mut self, _: &Paste, window: &mut Window, cx: &mut Context<Self>) {
let Some(workspace) = self.workspace.upgrade() else {
return;
};
let editor_clipboard_selections = cx
.read_from_clipboard()
.and_then(|item| item.entries().first().cloned())
@@ -556,127 +553,133 @@ impl MessageEditor {
_ => None,
});
// Insert creases for pasted clipboard selections that:
// 1. Contain exactly one selection
// 2. Have an associated file path
// 3. Span multiple lines (not single-line selections)
// 4. Belong to a file that exists in the current project
let should_insert_creases = util::maybe!({
let selections = editor_clipboard_selections.as_ref()?;
if selections.len() > 1 {
return Some(false);
}
let selection = selections.first()?;
let file_path = selection.file_path.as_ref()?;
let line_range = selection.line_range.as_ref()?;
let has_file_context = editor_clipboard_selections
.as_ref()
.is_some_and(|selections| {
selections
.iter()
.any(|sel| sel.file_path.is_some() && sel.line_range.is_some())
});
if line_range.start() == line_range.end() {
return Some(false);
}
Some(
workspace
.read(cx)
.project()
.read(cx)
.project_path_for_absolute_path(file_path, cx)
.is_some(),
)
})
.unwrap_or(false);
if should_insert_creases && let Some(selections) = editor_clipboard_selections {
cx.stop_propagation();
let insertion_target = self
.editor
.read(cx)
.selections
.newest_anchor()
.start
.text_anchor;
let project = workspace.read(cx).project().clone();
for selection in selections {
if let (Some(file_path), Some(line_range)) =
(selection.file_path, selection.line_range)
{
let crease_text =
acp_thread::selection_name(Some(file_path.as_ref()), &line_range);
let mention_uri = MentionUri::Selection {
abs_path: Some(file_path.clone()),
line_range: line_range.clone(),
};
let mention_text = mention_uri.as_link().to_string();
let (excerpt_id, text_anchor, content_len) =
self.editor.update(cx, |editor, cx| {
let buffer = editor.buffer().read(cx);
let snapshot = buffer.snapshot(cx);
let (excerpt_id, _, buffer_snapshot) = snapshot.as_singleton().unwrap();
let text_anchor = insertion_target.bias_left(&buffer_snapshot);
editor.insert(&mention_text, window, cx);
editor.insert(" ", window, cx);
(*excerpt_id, text_anchor, mention_text.len())
});
let Some((crease_id, tx)) = insert_crease_for_mention(
excerpt_id,
text_anchor,
content_len,
crease_text.into(),
mention_uri.icon_path(cx),
None,
self.editor.clone(),
window,
cx,
) else {
continue;
};
drop(tx);
let mention_task = cx
.spawn({
let project = project.clone();
async move |_, cx| {
let project_path = project
.update(cx, |project, cx| {
project.project_path_for_absolute_path(&file_path, cx)
})
.map_err(|e| e.to_string())?
.ok_or_else(|| "project path not found".to_string())?;
let buffer = project
.update(cx, |project, cx| project.open_buffer(project_path, cx))
.map_err(|e| e.to_string())?
.await
.map_err(|e| e.to_string())?;
buffer
.update(cx, |buffer, cx| {
let start = Point::new(*line_range.start(), 0)
.min(buffer.max_point());
let end = Point::new(*line_range.end() + 1, 0)
.min(buffer.max_point());
let content = buffer.text_for_range(start..end).collect();
Mention::Text {
content,
tracked_buffers: vec![cx.entity()],
}
})
.map_err(|e| e.to_string())
}
})
.shared();
self.mention_set.update(cx, |mention_set, _cx| {
mention_set.insert_mention(crease_id, mention_uri.clone(), mention_task)
});
if has_file_context {
if let Some((workspace, selections)) =
self.workspace.upgrade().zip(editor_clipboard_selections)
{
let Some(first_selection) = selections.first() else {
return;
};
if let Some(file_path) = &first_selection.file_path {
// In case someone pastes selections from another window
// with a different project, we don't want to insert the
// crease (containing the absolute path) since the agent
// cannot access files outside the project.
let is_in_project = workspace
.read(cx)
.project()
.read(cx)
.project_path_for_absolute_path(file_path, cx)
.is_some();
if !is_in_project {
return;
}
}
cx.stop_propagation();
let insertion_target = self
.editor
.read(cx)
.selections
.newest_anchor()
.start
.text_anchor;
let project = workspace.read(cx).project().clone();
for selection in selections {
if let (Some(file_path), Some(line_range)) =
(selection.file_path, selection.line_range)
{
let crease_text =
acp_thread::selection_name(Some(file_path.as_ref()), &line_range);
let mention_uri = MentionUri::Selection {
abs_path: Some(file_path.clone()),
line_range: line_range.clone(),
};
let mention_text = mention_uri.as_link().to_string();
let (excerpt_id, text_anchor, content_len) =
self.editor.update(cx, |editor, cx| {
let buffer = editor.buffer().read(cx);
let snapshot = buffer.snapshot(cx);
let (excerpt_id, _, buffer_snapshot) =
snapshot.as_singleton().unwrap();
let text_anchor = insertion_target.bias_left(&buffer_snapshot);
editor.insert(&mention_text, window, cx);
editor.insert(" ", window, cx);
(*excerpt_id, text_anchor, mention_text.len())
});
let Some((crease_id, tx)) = insert_crease_for_mention(
excerpt_id,
text_anchor,
content_len,
crease_text.into(),
mention_uri.icon_path(cx),
None,
self.editor.clone(),
window,
cx,
) else {
continue;
};
drop(tx);
let mention_task = cx
.spawn({
let project = project.clone();
async move |_, cx| {
let project_path = project
.update(cx, |project, cx| {
project.project_path_for_absolute_path(&file_path, cx)
})
.map_err(|e| e.to_string())?
.ok_or_else(|| "project path not found".to_string())?;
let buffer = project
.update(cx, |project, cx| {
project.open_buffer(project_path, cx)
})
.map_err(|e| e.to_string())?
.await
.map_err(|e| e.to_string())?;
buffer
.update(cx, |buffer, cx| {
let start = Point::new(*line_range.start(), 0)
.min(buffer.max_point());
let end = Point::new(*line_range.end() + 1, 0)
.min(buffer.max_point());
let content =
buffer.text_for_range(start..end).collect();
Mention::Text {
content,
tracked_buffers: vec![cx.entity()],
}
})
.map_err(|e| e.to_string())
}
})
.shared();
self.mention_set.update(cx, |mention_set, _cx| {
mention_set.insert_mention(crease_id, mention_uri.clone(), mention_task)
});
}
}
return;
}
return;
}
if self.prompt_capabilities.borrow().image
@@ -687,13 +690,6 @@ impl MessageEditor {
}
}
fn paste_raw(&mut self, _: &PasteRaw, window: &mut Window, cx: &mut Context<Self>) {
let editor = self.editor.clone();
window.defer(cx, move |window, cx| {
editor.update(cx, |editor, cx| editor.paste(&Paste, window, cx));
});
}
pub fn insert_dragged_files(
&mut self,
paths: Vec<project::ProjectPath>,
@@ -971,7 +967,6 @@ impl Render for MessageEditor {
.on_action(cx.listener(Self::chat))
.on_action(cx.listener(Self::chat_with_follow))
.on_action(cx.listener(Self::cancel))
.on_action(cx.listener(Self::paste_raw))
.capture_action(cx.listener(Self::paste))
.flex_1()
.child({


@@ -186,17 +186,6 @@ impl Render for ModeSelector {
move |_window, cx| {
v_flex()
.gap_1()
.child(
h_flex()
.gap_2()
.justify_between()
.child(Label::new("Toggle Mode Menu"))
.child(KeyBinding::for_action_in(
&ToggleProfileSelector,
&focus_handle,
cx,
)),
)
.child(
h_flex()
.pb_1()
@@ -211,6 +200,17 @@ impl Render for ModeSelector {
cx,
)),
)
.child(
h_flex()
.gap_2()
.justify_between()
.child(Label::new("Toggle Mode Menu"))
.child(KeyBinding::for_action_in(
&ToggleProfileSelector,
&focus_handle,
cx,
)),
)
.into_any()
}
}),


@@ -13,7 +13,6 @@ use gpui::{
Action, AsyncWindowContext, BackgroundExecutor, DismissEvent, FocusHandle, Task, WeakEntity,
};
use itertools::Itertools;
use language_model::IconOrSvg;
use ordered_float::OrderedFloat;
use picker::{Picker, PickerDelegate};
use settings::Settings;
@@ -351,11 +350,7 @@ impl PickerDelegate for AcpModelPickerDelegate {
})
.child(
ModelSelectorListItem::new(ix, model_info.name.clone())
.map(|this| match &model_info.icon {
Some(IconOrSvg::Svg(path)) => this.icon_path(path.clone()),
Some(IconOrSvg::Icon(icon)) => this.icon(*icon),
None => this,
})
.when_some(model_info.icon, |this, icon| this.icon(icon))
.is_selected(is_selected)
.is_focused(selected)
.when(supports_favorites, |this| {


@@ -6,7 +6,6 @@ use agent_servers::AgentServer;
use agent_settings::AgentSettings;
use fs::Fs;
use gpui::{Entity, FocusHandle};
use language_model::IconOrSvg;
use picker::popover_menu::PickerPopoverMenu;
use settings::Settings as _;
use ui::{ButtonLike, KeyBinding, PopoverMenuHandle, TintColor, Tooltip, prelude::*};
@@ -71,7 +70,7 @@ impl Render for AcpModelSelectorPopover {
.map(|model| model.name.clone())
.unwrap_or_else(|| SharedString::from("Select a Model"));
let model_icon = model.as_ref().and_then(|model| model.icon.clone());
let model_icon = model.as_ref().and_then(|model| model.icon);
let focus_handle = self.focus_handle.clone();
@@ -126,14 +125,7 @@ impl Render for AcpModelSelectorPopover {
ButtonLike::new("active-model")
.selected_style(ButtonStyle::Tinted(TintColor::Accent))
.when_some(model_icon, |this, icon| {
this.child(
match icon {
IconOrSvg::Svg(path) => Icon::from_external_svg(path),
IconOrSvg::Icon(icon_name) => Icon::new(icon_name),
}
.color(color)
.size(IconSize::XSmall),
)
this.child(Icon::new(icon).color(color).size(IconSize::XSmall))
})
.child(
Label::new(model_name)


@@ -1,7 +1,7 @@
use crate::acp::AcpThreadView;
use crate::{AgentPanel, RemoveHistory, RemoveSelectedThread};
use agent::{HistoryEntry, HistoryStore};
use chrono::{Datelike as _, Local, NaiveDate, TimeDelta, Utc};
use chrono::{Datelike as _, Local, NaiveDate, TimeDelta};
use editor::{Editor, EditorEvent};
use fuzzy::StringMatchCandidate;
use gpui::{
@@ -402,22 +402,7 @@ impl AcpThreadHistory {
let selected = ix == self.selected_index;
let hovered = Some(ix) == self.hovered_index;
let timestamp = entry.updated_at().timestamp();
let display_text = match format {
EntryTimeFormat::DateAndTime => {
let entry_time = entry.updated_at();
let now = Utc::now();
let duration = now.signed_duration_since(entry_time);
let days = duration.num_days();
format!("{}d", days)
}
EntryTimeFormat::TimeOnly => format.format_timestamp(timestamp, self.local_timezone),
};
let title = entry.title().clone();
let full_date =
EntryTimeFormat::DateAndTime.format_timestamp(timestamp, self.local_timezone);
let thread_timestamp = format.format_timestamp(timestamp, self.local_timezone);
h_flex()
.w_full()
@@ -438,14 +423,11 @@ impl AcpThreadHistory {
.truncate(),
)
.child(
Label::new(display_text)
Label::new(thread_timestamp)
.color(Color::Muted)
.size(LabelSize::XSmall),
),
)
.tooltip(move |_, cx| {
Tooltip::with_meta(title.clone(), None, full_date.clone(), cx)
})
.on_hover(cx.listener(move |this, is_hovered, _window, cx| {
if *is_hovered {
this.hovered_index = Some(ix);


@@ -338,13 +338,7 @@ impl AcpThreadView {
let prompt_capabilities = Rc::new(RefCell::new(acp::PromptCapabilities::default()));
let available_commands = Rc::new(RefCell::new(vec![]));
let agent_server_store = project.read(cx).agent_server_store().clone();
let agent_display_name = agent_server_store
.read(cx)
.agent_display_name(&ExternalAgentServerName(agent.name()))
.unwrap_or_else(|| agent.name());
let placeholder = placeholder_text(agent_display_name.as_ref(), false);
let placeholder = placeholder_text(agent.name().as_ref(), false);
let message_editor = cx.new(|cx| {
let mut editor = MessageEditor::new(
@@ -383,6 +377,7 @@ impl AcpThreadView {
)
});
let agent_server_store = project.read(cx).agent_server_store().clone();
let subscriptions = [
cx.observe_global_in::<SettingsStore>(window, Self::agent_ui_font_size_changed),
cx.observe_global_in::<AgentFontSize>(window, Self::agent_ui_font_size_changed),
@@ -1503,13 +1498,7 @@ impl AcpThreadView {
let has_commands = !available_commands.is_empty();
self.available_commands.replace(available_commands);
let agent_display_name = self
.agent_server_store
.read(cx)
.agent_display_name(&ExternalAgentServerName(self.agent.name()))
.unwrap_or_else(|| self.agent.name());
let new_placeholder = placeholder_text(agent_display_name.as_ref(), has_commands);
let new_placeholder = placeholder_text(self.agent.name().as_ref(), has_commands);
self.message_editor.update(cx, |editor, cx| {
editor.set_placeholder_text(&new_placeholder, window, cx);
@@ -2718,7 +2707,7 @@ impl AcpThreadView {
..default_markdown_style(false, true, window, cx)
},
))
.tooltip(Tooltip::text("Go to File"))
.tooltip(Tooltip::text("Jump to File"))
.on_click(cx.listener(move |this, _, window, cx| {
this.open_tool_call_location(entry_ix, 0, window, cx);
}))


@@ -22,8 +22,7 @@ use gpui::{
};
use language::LanguageRegistry;
use language_model::{
IconOrSvg, LanguageModelProvider, LanguageModelProviderId, LanguageModelRegistry,
ZED_CLOUD_PROVIDER_ID,
LanguageModelProvider, LanguageModelProviderId, LanguageModelRegistry, ZED_CLOUD_PROVIDER_ID,
};
use language_models::AllLanguageModelSettings;
use notifications::status_toast::{StatusToast, ToastIcon};
@@ -118,7 +117,7 @@ impl AgentConfiguration {
}
fn build_provider_configuration_views(&mut self, window: &mut Window, cx: &mut Context<Self>) {
let providers = LanguageModelRegistry::read_global(cx).visible_providers();
let providers = LanguageModelRegistry::read_global(cx).providers();
for provider in providers {
self.add_provider_configuration_view(&provider, window, cx);
}
@@ -262,12 +261,9 @@ impl AgentConfiguration {
.w_full()
.gap_1p5()
.child(
match provider.icon() {
IconOrSvg::Svg(path) => Icon::from_external_svg(path),
IconOrSvg::Icon(name) => Icon::new(name),
}
.size(IconSize::Small)
.color(Color::Muted),
Icon::new(provider.icon())
.size(IconSize::Small)
.color(Color::Muted),
)
.child(
h_flex()
@@ -420,7 +416,7 @@ impl AgentConfiguration {
&mut self,
cx: &mut Context<Self>,
) -> impl IntoElement {
let providers = LanguageModelRegistry::read_global(cx).visible_providers();
let providers = LanguageModelRegistry::read_global(cx).providers();
let popover_menu = PopoverMenu::new("add-provider-popover")
.trigger(


@@ -4,7 +4,6 @@ use crate::{
};
use fs::Fs;
use gpui::{Entity, FocusHandle, SharedString};
use language_model::IconOrSvg;
use picker::popover_menu::PickerPopoverMenu;
use settings::update_settings_file;
use std::sync::Arc;
@@ -104,14 +103,7 @@ impl Render for AgentModelSelector {
self.selector.clone(),
ButtonLike::new("active-model")
.when_some(provider_icon, |this, icon| {
this.child(
match icon {
IconOrSvg::Svg(path) => Icon::from_external_svg(path),
IconOrSvg::Icon(name) => Icon::new(name),
}
.color(color)
.size(IconSize::XSmall),
)
this.child(Icon::new(icon).color(color).size(IconSize::XSmall))
})
.selected_style(ButtonStyle::Tinted(TintColor::Accent))
.child(
@@ -123,7 +115,7 @@ impl Render for AgentModelSelector {
.child(
Icon::new(IconName::ChevronDown)
.color(color)
.size(IconSize::XSmall),
.size(IconSize::Small),
),
move |_window, cx| {
Tooltip::for_action_in("Change Model", &ToggleModelSelector, &focus_handle, cx)


@@ -2428,7 +2428,7 @@ impl AgentPanel {
let history_is_empty = self.history_store.read(cx).is_empty(cx);
let has_configured_non_zed_providers = LanguageModelRegistry::read_global(cx)
.visible_providers()
.providers()
.iter()
.any(|provider| {
provider.is_authenticated(cx)


@@ -321,6 +321,7 @@ fn update_command_palette_filter(cx: &mut App) {
}
EditPredictionProvider::Zed
| EditPredictionProvider::Codestral
| EditPredictionProvider::Ollama
| EditPredictionProvider::Experimental(_) => {
filter.show_namespace("edit_prediction");
filter.hide_namespace("copilot");
@@ -348,8 +349,7 @@ fn init_language_model_settings(cx: &mut App) {
|_, event: &language_model::Event, cx| match event {
language_model::Event::ProviderStateChanged(_)
| language_model::Event::AddedProvider(_)
| language_model::Event::RemovedProvider(_)
| language_model::Event::ProvidersChanged => {
| language_model::Event::RemovedProvider(_) => {
update_active_language_model_from_settings(cx);
}
_ => {}
@@ -368,49 +368,26 @@ fn update_active_language_model_from_settings(cx: &mut App) {
}
}
// Filter out models from providers that are not authenticated
fn is_provider_authenticated(
selection: &LanguageModelSelection,
registry: &LanguageModelRegistry,
cx: &App,
) -> bool {
let provider_id = LanguageModelProviderId::from(selection.provider.0.clone());
registry
.provider(&provider_id)
.map_or(false, |provider| provider.is_authenticated(cx))
}
let registry = LanguageModelRegistry::global(cx);
let registry_ref = registry.read(cx);
let default = settings
.default_model
.as_ref()
.filter(|s| is_provider_authenticated(s, registry_ref, cx))
.map(to_selected_model);
let default = settings.default_model.as_ref().map(to_selected_model);
let inline_assistant = settings
.inline_assistant_model
.as_ref()
.filter(|s| is_provider_authenticated(s, registry_ref, cx))
.map(to_selected_model);
let commit_message = settings
.commit_message_model
.as_ref()
.filter(|s| is_provider_authenticated(s, registry_ref, cx))
.map(to_selected_model);
let thread_summary = settings
.thread_summary_model
.as_ref()
.filter(|s| is_provider_authenticated(s, registry_ref, cx))
.map(to_selected_model);
let inline_alternatives = settings
.inline_alternatives
.iter()
.filter(|s| is_provider_authenticated(s, registry_ref, cx))
.map(to_selected_model)
.collect::<Vec<_>>();
registry.update(cx, |registry, cx| {
LanguageModelRegistry::global(cx).update(cx, |registry, cx| {
registry.select_default_model(default.as_ref(), cx);
registry.select_inline_assistant_model(inline_assistant.as_ref(), cx);
registry.select_commit_message_model(commit_message.as_ref(), cx);


@@ -1586,7 +1586,7 @@ pub(crate) fn search_rules(
None
} else {
Some(RulesContextEntry {
prompt_id: metadata.id.as_user()?,
prompt_id: metadata.id.user_id()?,
title: metadata.title?,
})
}


@@ -1259,26 +1259,28 @@ impl InlineAssistant {
let bottom = top + 1.0;
(top, bottom)
});
let height_in_lines = editor.visible_line_count().unwrap_or(0.);
let vertical_scroll_margin = editor.vertical_scroll_margin() as ScrollOffset;
let scroll_target_top = (scroll_target_range.0 - vertical_scroll_margin)
// Don't scroll up too far in the case of a large vertical_scroll_margin.
.max(scroll_target_range.0 - height_in_lines / 2.0);
let scroll_target_bottom = (scroll_target_range.1 + vertical_scroll_margin)
// Don't scroll down past where the top would still be visible.
.min(scroll_target_top + height_in_lines);
let mut scroll_target_top = scroll_target_range.0;
let mut scroll_target_bottom = scroll_target_range.1;
scroll_target_top -= editor.vertical_scroll_margin() as ScrollOffset;
scroll_target_bottom += editor.vertical_scroll_margin() as ScrollOffset;
let height_in_lines = editor.visible_line_count().unwrap_or(0.);
let scroll_top = editor.scroll_position(cx).y;
let scroll_bottom = scroll_top + height_in_lines;
if scroll_target_top < scroll_top {
editor.set_scroll_position(point(0., scroll_target_top), window, cx);
} else if scroll_target_bottom > scroll_bottom {
editor.set_scroll_position(
point(0., scroll_target_bottom - height_in_lines),
window,
cx,
);
if (scroll_target_bottom - scroll_target_top) <= height_in_lines {
editor.set_scroll_position(
point(0., scroll_target_bottom - height_in_lines),
window,
cx,
);
} else {
editor.set_scroll_position(point(0., scroll_target_top), window, cx);
}
}
});
}


@@ -2,12 +2,13 @@ use std::{cmp::Reverse, sync::Arc};
use agent_settings::AgentSettings;
use collections::{HashMap, HashSet, IndexMap};
use futures::{StreamExt, channel::mpsc};
use fuzzy::{StringMatch, StringMatchCandidate, match_strings};
use gpui::{Action, AnyElement, App, BackgroundExecutor, DismissEvent, FocusHandle, Task};
use gpui::{
Action, AnyElement, App, BackgroundExecutor, DismissEvent, FocusHandle, Subscription, Task,
};
use language_model::{
AuthenticateError, ConfiguredModel, IconOrSvg, LanguageModel, LanguageModelId,
LanguageModelProvider, LanguageModelProviderId, LanguageModelRegistry,
AuthenticateError, ConfiguredModel, LanguageModel, LanguageModelId, LanguageModelProvider,
LanguageModelProviderId, LanguageModelRegistry,
};
use ordered_float::OrderedFloat;
use picker::{Picker, PickerDelegate};
@@ -54,7 +55,7 @@ pub fn language_model_selector(
fn all_models(cx: &App) -> GroupedModels {
let lm_registry = LanguageModelRegistry::global(cx).read(cx);
let providers = lm_registry.visible_providers();
let providers = lm_registry.providers();
let mut favorites_index = FavoritesIndex::default();
@@ -75,7 +76,7 @@ fn all_models(cx: &App) -> GroupedModels {
})
.collect();
let all: Vec<ModelInfo> = providers
let all = providers
.iter()
.flat_map(|provider| {
provider
@@ -93,7 +94,7 @@ type FavoritesIndex = HashMap<LanguageModelProviderId, HashSet<LanguageModelId>>
#[derive(Clone)]
struct ModelInfo {
model: Arc<dyn LanguageModel>,
icon: IconOrSvg,
icon: IconName,
is_favorite: bool,
}
@@ -123,7 +124,7 @@ pub struct LanguageModelPickerDelegate {
filtered_entries: Vec<LanguageModelPickerEntry>,
selected_index: usize,
_authenticate_all_providers_task: Task<()>,
_refresh_models_task: Task<()>,
_subscriptions: Vec<Subscription>,
popover_styles: bool,
focus_handle: FocusHandle,
}
@@ -150,42 +151,24 @@ impl LanguageModelPickerDelegate {
get_active_model: Arc::new(get_active_model),
on_toggle_favorite: Arc::new(on_toggle_favorite),
_authenticate_all_providers_task: Self::authenticate_all_providers(cx),
_refresh_models_task: {
// Create a channel to signal when models need refreshing
let (refresh_tx, mut refresh_rx) = mpsc::unbounded::<()>();
// Subscribe to registry events and send refresh signals through the channel
let registry = LanguageModelRegistry::global(cx);
cx.subscribe(&registry, move |_picker, _, event, _cx| match event {
language_model::Event::ProviderStateChanged(_)
| language_model::Event::AddedProvider(_)
| language_model::Event::RemovedProvider(_)
| language_model::Event::ProvidersChanged => {
refresh_tx.unbounded_send(()).ok();
}
language_model::Event::DefaultModelChanged
| language_model::Event::InlineAssistantModelChanged
| language_model::Event::CommitMessageModelChanged
| language_model::Event::ThreadSummaryModelChanged => {}
})
.detach();
// Spawn a task that listens for refresh signals and updates the picker
cx.spawn_in(window, async move |this, cx| {
while let Some(()) = refresh_rx.next().await {
if this
.update_in(cx, |picker, window, cx| {
picker.delegate.all_models = Arc::new(all_models(cx));
picker.refresh(window, cx);
})
.is_err()
{
// Picker was dropped, exit the loop
break;
_subscriptions: vec![cx.subscribe_in(
&LanguageModelRegistry::global(cx),
window,
|picker, _, event, window, cx| {
match event {
language_model::Event::ProviderStateChanged(_)
| language_model::Event::AddedProvider(_)
| language_model::Event::RemovedProvider(_) => {
let query = picker.query(cx);
picker.delegate.all_models = Arc::new(all_models(cx));
// Update matches will automatically drop the previous task
// if we get a provider event again
picker.update_matches(query, window, cx)
}
_ => {}
}
})
},
},
)],
popover_styles,
focus_handle,
}
@@ -220,7 +203,7 @@ impl LanguageModelPickerDelegate {
fn authenticate_all_providers(cx: &mut App) -> Task<()> {
let authenticate_all_providers = LanguageModelRegistry::global(cx)
.read(cx)
.visible_providers()
.providers()
.iter()
.map(|provider| (provider.id(), provider.name(), provider.authenticate(cx)))
.collect::<Vec<_>>();
@@ -491,7 +474,7 @@ impl PickerDelegate for LanguageModelPickerDelegate {
let configured_providers = language_model_registry
.read(cx)
.visible_providers()
.providers()
.into_iter()
.filter(|provider| provider.is_authenticated(cx))
.collect::<Vec<_>>();
@@ -583,10 +566,7 @@ impl PickerDelegate for LanguageModelPickerDelegate {
Some(
ModelSelectorListItem::new(ix, model_info.model.name().0)
.map(|this| match &model_info.icon {
IconOrSvg::Icon(icon_name) => this.icon(*icon_name),
IconOrSvg::Svg(icon_path) => this.icon_path(icon_path.clone()),
})
.icon(model_info.icon)
.is_selected(is_selected)
.is_focused(selected)
.is_favorite(is_favorite)
@@ -722,7 +702,7 @@ mod tests {
.any(|(fav_provider, fav_name)| *fav_provider == provider && *fav_name == name);
ModelInfo {
model: Arc::new(TestLanguageModel::new(name, provider)),
icon: IconOrSvg::Icon(IconName::Ai),
icon: IconName::Ai,
is_favorite,
}
})


@@ -191,9 +191,6 @@ impl Render for ProfileSelector {
let container = || h_flex().gap_1().justify_between();
v_flex()
.gap_1()
.child(container().child(Label::new("Toggle Profile Menu")).child(
KeyBinding::for_action_in(&ToggleProfileSelector, &focus_handle, cx),
))
.child(
container()
.pb_1()
@@ -206,6 +203,9 @@ impl Render for ProfileSelector {
cx,
)),
)
.child(container().child(Label::new("Toggle Profile Menu")).child(
KeyBinding::for_action_in(&ToggleProfileSelector, &focus_handle, cx),
))
.into_any()
}
}),


@@ -33,8 +33,7 @@ use language::{
language_settings::{SoftWrap, all_language_settings},
};
use language_model::{
ConfigurationError, IconOrSvg, LanguageModelExt, LanguageModelImage, LanguageModelRegistry,
Role,
ConfigurationError, LanguageModelExt, LanguageModelImage, LanguageModelRegistry, Role,
};
use multi_buffer::MultiBufferRow;
use picker::{Picker, popover_menu::PickerPopoverMenu};
@@ -72,7 +71,7 @@ use workspace::{
pane,
searchable::{SearchEvent, SearchableItem},
};
use zed_actions::agent::{AddSelectionToThread, PasteRaw, ToggleModelSelector};
use zed_actions::agent::{AddSelectionToThread, ToggleModelSelector};
use crate::CycleFavoriteModels;
@@ -1699,9 +1698,6 @@ impl TextThreadEditor {
window: &mut Window,
cx: &mut Context<Self>,
) {
let Some(workspace) = self.workspace.upgrade() else {
return;
};
let editor_clipboard_selections = cx
.read_from_clipboard()
.and_then(|item| item.entries().first().cloned())
@@ -1712,101 +1708,84 @@ impl TextThreadEditor {
_ => None,
});
// Insert creases for pasted clipboard selections that:
// 1. Contain exactly one selection
// 2. Have an associated file path
// 3. Span multiple lines (not single-line selections)
// 4. Belong to a file that exists in the current project
let should_insert_creases = util::maybe!({
let selections = editor_clipboard_selections.as_ref()?;
if selections.len() > 1 {
return Some(false);
}
let selection = selections.first()?;
let file_path = selection.file_path.as_ref()?;
let line_range = selection.line_range.as_ref()?;
let has_file_context = editor_clipboard_selections
.as_ref()
.is_some_and(|selections| {
selections
.iter()
.any(|sel| sel.file_path.is_some() && sel.line_range.is_some())
});
if line_range.start() == line_range.end() {
return Some(false);
}
if has_file_context {
if let Some(clipboard_item) = cx.read_from_clipboard() {
if let Some(ClipboardEntry::String(clipboard_text)) =
clipboard_item.entries().first()
{
if let Some(selections) = editor_clipboard_selections {
cx.stop_propagation();
Some(
workspace
.read(cx)
.project()
.read(cx)
.project_path_for_absolute_path(file_path, cx)
.is_some(),
)
})
.unwrap_or(false);
let text = clipboard_text.text();
self.editor.update(cx, |editor, cx| {
let mut current_offset = 0;
let weak_editor = cx.entity().downgrade();
if should_insert_creases && let Some(clipboard_item) = cx.read_from_clipboard() {
if let Some(ClipboardEntry::String(clipboard_text)) = clipboard_item.entries().first() {
if let Some(selections) = editor_clipboard_selections {
cx.stop_propagation();
for selection in selections {
if let (Some(file_path), Some(line_range)) =
(selection.file_path, selection.line_range)
{
let selected_text =
&text[current_offset..current_offset + selection.len];
let fence = assistant_slash_commands::codeblock_fence_for_path(
file_path.to_str(),
Some(line_range.clone()),
);
let formatted_text = format!("{fence}{selected_text}\n```");
let text = clipboard_text.text();
self.editor.update(cx, |editor, cx| {
let mut current_offset = 0;
let weak_editor = cx.entity().downgrade();
let insert_point = editor
.selections
.newest::<Point>(&editor.display_snapshot(cx))
.head();
let start_row = MultiBufferRow(insert_point.row);
for selection in selections {
if let (Some(file_path), Some(line_range)) =
(selection.file_path, selection.line_range)
{
let selected_text =
&text[current_offset..current_offset + selection.len];
let fence = assistant_slash_commands::codeblock_fence_for_path(
file_path.to_str(),
Some(line_range.clone()),
);
let formatted_text = format!("{fence}{selected_text}\n```");
editor.insert(&formatted_text, window, cx);
let insert_point = editor
.selections
.newest::<Point>(&editor.display_snapshot(cx))
.head();
let start_row = MultiBufferRow(insert_point.row);
let snapshot = editor.buffer().read(cx).snapshot(cx);
let anchor_before = snapshot.anchor_after(insert_point);
let anchor_after = editor
.selections
.newest_anchor()
.head()
.bias_left(&snapshot);
editor.insert(&formatted_text, window, cx);
editor.insert("\n", window, cx);
let snapshot = editor.buffer().read(cx).snapshot(cx);
let anchor_before = snapshot.anchor_after(insert_point);
let anchor_after = editor
.selections
.newest_anchor()
.head()
.bias_left(&snapshot);
let crease_text = acp_thread::selection_name(
Some(file_path.as_ref()),
&line_range,
);
editor.insert("\n", window, cx);
let fold_placeholder = quote_selection_fold_placeholder(
crease_text,
weak_editor.clone(),
);
let crease = Crease::inline(
anchor_before..anchor_after,
fold_placeholder,
render_quote_selection_output_toggle,
|_, _, _, _| Empty.into_any(),
);
editor.insert_creases(vec![crease], cx);
editor.fold_at(start_row, window, cx);
let crease_text = acp_thread::selection_name(
Some(file_path.as_ref()),
&line_range,
);
let fold_placeholder = quote_selection_fold_placeholder(
crease_text,
weak_editor.clone(),
);
let crease = Crease::inline(
anchor_before..anchor_after,
fold_placeholder,
render_quote_selection_output_toggle,
|_, _, _, _| Empty.into_any(),
);
editor.insert_creases(vec![crease], cx);
editor.fold_at(start_row, window, cx);
current_offset += selection.len;
if !selection.is_entire_line && current_offset < text.len() {
current_offset += 1;
current_offset += selection.len;
if !selection.is_entire_line && current_offset < text.len() {
current_offset += 1;
}
}
}
}
});
return;
});
return;
}
}
}
}
@@ -1965,12 +1944,6 @@ impl TextThreadEditor {
}
}
fn paste_raw(&mut self, _: &PasteRaw, window: &mut Window, cx: &mut Context<Self>) {
self.editor.update(cx, |editor, cx| {
editor.paste(&editor::actions::Paste, window, cx);
});
}
fn update_image_blocks(&mut self, cx: &mut Context<Self>) {
self.editor.update(cx, |editor, cx| {
let buffer = editor.buffer().read(cx).snapshot(cx);
@@ -2232,10 +2205,10 @@ impl TextThreadEditor {
.default_model()
.map(|default| default.provider);
let provider_icon = active_provider
.as_ref()
.map(|p| p.icon())
.unwrap_or(IconOrSvg::Icon(IconName::Ai));
let provider_icon = match active_provider {
Some(provider) => provider.icon(),
None => IconName::Ai,
};
let focus_handle = self.editor().focus_handle(cx);
@@ -2245,13 +2218,6 @@ impl TextThreadEditor {
(Color::Muted, IconName::ChevronDown)
};
let provider_icon_element = match provider_icon {
IconOrSvg::Svg(path) => Icon::from_external_svg(path),
IconOrSvg::Icon(name) => Icon::new(name),
}
.color(color)
.size(IconSize::XSmall);
let tooltip = Tooltip::element({
move |_, cx| {
let focus_handle = focus_handle.clone();
@@ -2299,7 +2265,7 @@ impl TextThreadEditor {
.child(
h_flex()
.gap_0p5()
.child(provider_icon_element)
.child(Icon::new(provider_icon).color(color).size(IconSize::XSmall))
.child(
Label::new(model_name)
.color(color)
@@ -2661,7 +2627,6 @@ impl Render for TextThreadEditor {
.capture_action(cx.listener(TextThreadEditor::copy))
.capture_action(cx.listener(TextThreadEditor::cut))
.capture_action(cx.listener(TextThreadEditor::paste))
.on_action(cx.listener(TextThreadEditor::paste_raw))
.capture_action(cx.listener(TextThreadEditor::cycle_message_role))
.capture_action(cx.listener(TextThreadEditor::confirm_command))
.on_action(cx.listener(TextThreadEditor::assist))

View File

@@ -35,16 +35,11 @@ impl RenderOnce for ModelSelectorHeader {
}
}
enum ModelIcon {
Name(IconName),
Path(SharedString),
}
#[derive(IntoElement)]
pub struct ModelSelectorListItem {
index: usize,
title: SharedString,
icon: Option<ModelIcon>,
icon: Option<IconName>,
is_selected: bool,
is_focused: bool,
is_favorite: bool,
@@ -65,12 +60,7 @@ impl ModelSelectorListItem {
}
pub fn icon(mut self, icon: IconName) -> Self {
self.icon = Some(ModelIcon::Name(icon));
self
}
pub fn icon_path(mut self, path: SharedString) -> Self {
self.icon = Some(ModelIcon::Path(path));
self.icon = Some(icon);
self
}
@@ -115,12 +105,9 @@ impl RenderOnce for ModelSelectorListItem {
.gap_1p5()
.when_some(self.icon, |this, icon| {
this.child(
match icon {
ModelIcon::Name(icon_name) => Icon::new(icon_name),
ModelIcon::Path(icon_path) => Icon::from_external_svg(icon_path),
}
.color(model_icon_color)
.size(IconSize::Small),
Icon::new(icon)
.color(model_icon_color)
.size(IconSize::Small),
)
})
.child(Label::new(self.title).truncate()),

View File

@@ -1,5 +1,5 @@
use agent::{HistoryEntry, HistoryStore};
use chrono::{Datelike as _, Local, NaiveDate, TimeDelta, Utc};
use chrono::{Datelike as _, Local, NaiveDate, TimeDelta};
use editor::{Editor, EditorEvent};
use fuzzy::StringMatchCandidate;
use gpui::{
@@ -411,22 +411,7 @@ impl AcpThreadHistory {
let selected = ix == self.selected_index;
let hovered = Some(ix) == self.hovered_index;
let timestamp = entry.updated_at().timestamp();
let display_text = match format {
EntryTimeFormat::DateAndTime => {
let entry_time = entry.updated_at();
let now = Utc::now();
let duration = now.signed_duration_since(entry_time);
let days = duration.num_days();
format!("{}d", days)
}
EntryTimeFormat::TimeOnly => format.format_timestamp(timestamp, self.local_timezone),
};
let title = entry.title().clone();
let full_date =
EntryTimeFormat::DateAndTime.format_timestamp(timestamp, self.local_timezone);
let thread_timestamp = format.format_timestamp(timestamp, self.local_timezone);
h_flex()
.w_full()
@@ -447,14 +432,11 @@ impl AcpThreadHistory {
.truncate(),
)
.child(
Label::new(display_text)
Label::new(thread_timestamp)
.color(Color::Muted)
.size(LabelSize::XSmall),
),
)
.tooltip(move |_, cx| {
Tooltip::with_meta(title.clone(), None, full_date.clone(), cx)
})
.on_hover(cx.listener(move |this, is_hovered, _window, cx| {
if *is_hovered {
this.hovered_index = Some(ix);

View File

@@ -1,9 +1,9 @@
use gpui::{Action, IntoElement, ParentElement, RenderOnce, point};
use language_model::{IconOrSvg, LanguageModelRegistry, ZED_CLOUD_PROVIDER_ID};
use language_model::{LanguageModelRegistry, ZED_CLOUD_PROVIDER_ID};
use ui::{Divider, List, ListBulletItem, prelude::*};
pub struct ApiKeysWithProviders {
configured_providers: Vec<(IconOrSvg, SharedString)>,
configured_providers: Vec<(IconName, SharedString)>,
}
impl ApiKeysWithProviders {
@@ -13,8 +13,7 @@ impl ApiKeysWithProviders {
|this: &mut Self, _registry, event: &language_model::Event, cx| match event {
language_model::Event::ProviderStateChanged(_)
| language_model::Event::AddedProvider(_)
| language_model::Event::RemovedProvider(_)
| language_model::Event::ProvidersChanged => {
| language_model::Event::RemovedProvider(_) => {
this.configured_providers = Self::compute_configured_providers(cx)
}
_ => {}
@@ -27,9 +26,9 @@ impl ApiKeysWithProviders {
}
}
fn compute_configured_providers(cx: &App) -> Vec<(IconOrSvg, SharedString)> {
fn compute_configured_providers(cx: &App) -> Vec<(IconName, SharedString)> {
LanguageModelRegistry::read_global(cx)
.visible_providers()
.providers()
.iter()
.filter(|provider| {
provider.is_authenticated(cx) && provider.id() != ZED_CLOUD_PROVIDER_ID
@@ -48,14 +47,7 @@ impl Render for ApiKeysWithProviders {
.map(|(icon, name)| {
h_flex()
.gap_1p5()
.child(
match icon {
IconOrSvg::Icon(icon_name) => Icon::new(icon_name),
IconOrSvg::Svg(icon_path) => Icon::from_external_svg(icon_path),
}
.size(IconSize::XSmall)
.color(Color::Muted),
)
.child(Icon::new(icon).size(IconSize::XSmall).color(Color::Muted))
.child(Label::new(name))
});
div()

View File

@@ -11,7 +11,7 @@ use crate::{AgentPanelOnboardingCard, ApiKeysWithoutProviders, ZedAiOnboarding};
pub struct AgentPanelOnboarding {
user_store: Entity<UserStore>,
client: Arc<Client>,
has_configured_providers: bool,
configured_providers: Vec<(IconName, SharedString)>,
continue_with_zed_ai: Arc<dyn Fn(&mut Window, &mut App)>,
}
@@ -27,9 +27,8 @@ impl AgentPanelOnboarding {
|this: &mut Self, _registry, event: &language_model::Event, cx| match event {
language_model::Event::ProviderStateChanged(_)
| language_model::Event::AddedProvider(_)
| language_model::Event::RemovedProvider(_)
| language_model::Event::ProvidersChanged => {
this.has_configured_providers = Self::has_configured_providers(cx)
| language_model::Event::RemovedProvider(_) => {
this.configured_providers = Self::compute_available_providers(cx)
}
_ => {}
},
@@ -39,16 +38,20 @@ impl AgentPanelOnboarding {
Self {
user_store,
client,
has_configured_providers: Self::has_configured_providers(cx),
configured_providers: Self::compute_available_providers(cx),
continue_with_zed_ai: Arc::new(continue_with_zed_ai),
}
}
fn has_configured_providers(cx: &App) -> bool {
fn compute_available_providers(cx: &App) -> Vec<(IconName, SharedString)> {
LanguageModelRegistry::read_global(cx)
.visible_providers()
.providers()
.iter()
.any(|provider| provider.is_authenticated(cx) && provider.id() != ZED_CLOUD_PROVIDER_ID)
.filter(|provider| {
provider.is_authenticated(cx) && provider.id() != ZED_CLOUD_PROVIDER_ID
})
.map(|provider| (provider.icon(), provider.name().0))
.collect()
}
}
@@ -78,7 +81,7 @@ impl Render for AgentPanelOnboarding {
}),
)
.map(|this| {
if enrolled_in_trial || is_pro_user || self.has_configured_providers {
if enrolled_in_trial || is_pro_user || !self.configured_providers.is_empty() {
this
} else {
this.child(ApiKeysWithoutProviders::new())

View File

@@ -8,7 +8,7 @@ use futures::{AsyncBufReadExt, AsyncReadExt, StreamExt, io::BufReader, stream::B
use http_client::http::{self, HeaderMap, HeaderValue};
use http_client::{AsyncBody, HttpClient, Method, Request as HttpRequest, StatusCode};
use serde::{Deserialize, Serialize};
pub use settings::ModelMode;
pub use settings::{AnthropicAvailableModel as AvailableModel, ModelMode};
use strum::{EnumIter, EnumString};
use thiserror::Error;

View File

@@ -1159,34 +1159,6 @@ impl BufferDiff {
new_index_text
}
pub fn stage_or_unstage_all_hunks(
&mut self,
stage: bool,
buffer: &text::BufferSnapshot,
file_exists: bool,
cx: &mut Context<Self>,
) {
let hunks = self
.hunks_intersecting_range(Anchor::MIN..Anchor::MAX, buffer, cx)
.collect::<Vec<_>>();
let Some(secondary) = self.secondary_diff.as_ref() else {
return;
};
self.inner.stage_or_unstage_hunks_impl(
&secondary.read(cx).inner,
stage,
&hunks,
buffer,
file_exists,
);
if let Some((first, last)) = hunks.first().zip(hunks.last()) {
let changed_range = first.buffer_range.start..last.buffer_range.end;
cx.emit(BufferDiffEvent::DiffChanged {
changed_range: Some(changed_range),
});
}
}
pub fn range_to_hunk_range(
&self,
range: Range<Anchor>,

View File

@@ -7,6 +7,8 @@ license = "GPL-3.0-or-later"
[dependencies]
anyhow.workspace = true
command_palette.workspace = true
gpui.workspace = true
# We are specifically pinning this version of mdbook, as later versions introduce issues with double-nested subdirectories.
# Ask @maxdeviant about this before bumping.
mdbook = "= 0.4.40"
@@ -15,6 +17,7 @@ serde.workspace = true
serde_json.workspace = true
settings.workspace = true
util.workspace = true
zed.workspace = true
zlog.workspace = true
task.workspace = true
theme.workspace = true
@@ -24,4 +27,4 @@ workspace = true
[[bin]]
name = "docs_preprocessor"
path = "src/main.rs"
path = "src/main.rs"

View File

@@ -22,13 +22,16 @@ static KEYMAP_WINDOWS: LazyLock<KeymapFile> = LazyLock::new(|| {
load_keymap("keymaps/default-windows.json").expect("Failed to load Windows keymap")
});
static ALL_ACTIONS: LazyLock<Vec<ActionDef>> = LazyLock::new(load_all_actions);
static ALL_ACTIONS: LazyLock<Vec<ActionDef>> = LazyLock::new(dump_all_gpui_actions);
const FRONT_MATTER_COMMENT: &str = "<!-- ZED_META {} -->";
fn main() -> Result<()> {
zlog::init();
zlog::init_output_stderr();
// call a zed:: function so everything in `zed` crate is linked and
// all actions in the actual app are registered
zed::stdout_is_a_pty();
let args = std::env::args().skip(1).collect::<Vec<_>>();
match args.get(0).map(String::as_str) {
@@ -69,8 +72,8 @@ enum PreprocessorError {
impl PreprocessorError {
fn new_for_not_found_action(action_name: String) -> Self {
for action in &*ALL_ACTIONS {
for alias in &action.deprecated_aliases {
if alias == action_name.as_str() {
for alias in action.deprecated_aliases {
if alias == &action_name {
return PreprocessorError::DeprecatedActionUsed {
used: action_name,
should_be: action.name.to_string(),
@@ -211,7 +214,7 @@ fn template_and_validate_keybindings(book: &mut Book, errors: &mut HashSet<Prepr
chapter.content = regex
.replace_all(&chapter.content, |caps: &regex::Captures| {
let action = caps[1].trim();
if is_missing_action(action) {
if find_action_by_name(action).is_none() {
errors.insert(PreprocessorError::new_for_not_found_action(
action.to_string(),
));
@@ -241,12 +244,10 @@ fn template_and_validate_actions(book: &mut Book, errors: &mut HashSet<Preproces
.replace_all(&chapter.content, |caps: &regex::Captures| {
let name = caps[1].trim();
let Some(action) = find_action_by_name(name) else {
if actions_available() {
errors.insert(PreprocessorError::new_for_not_found_action(
name.to_string(),
));
}
return format!("<code class=\"hljs\">{}</code>", name);
errors.insert(PreprocessorError::new_for_not_found_action(
name.to_string(),
));
return String::new();
};
format!("<code class=\"hljs\">{}</code>", &action.human_name)
})
@@ -256,19 +257,11 @@ fn template_and_validate_actions(book: &mut Book, errors: &mut HashSet<Preproces
fn find_action_by_name(name: &str) -> Option<&ActionDef> {
ALL_ACTIONS
.binary_search_by(|action| action.name.as_str().cmp(name))
.binary_search_by(|action| action.name.cmp(name))
.ok()
.map(|index| &ALL_ACTIONS[index])
}
fn actions_available() -> bool {
!ALL_ACTIONS.is_empty()
}
fn is_missing_action(name: &str) -> bool {
actions_available() && find_action_by_name(name).is_none()
}
fn find_binding(os: &str, action: &str) -> Option<String> {
let keymap = match os {
"macos" => &KEYMAP_MACOS,
@@ -391,13 +384,18 @@ fn template_and_validate_json_snippets(book: &mut Book, errors: &mut HashSet<Pre
let keymap = settings::KeymapFile::parse(&snippet_json_fixed)
.context("Failed to parse keymap JSON")?;
for section in keymap.sections() {
for (_keystrokes, action) in section.bindings() {
for (keystrokes, action) in section.bindings() {
keystrokes
.split_whitespace()
.map(|source| gpui::Keystroke::parse(source))
.collect::<std::result::Result<Vec<_>, _>>()
.context("Failed to parse keystroke")?;
if let Some((action_name, _)) = settings::KeymapFile::parse_action(action)
.map_err(|err| anyhow::format_err!(err))
.context("Failed to parse action")?
{
anyhow::ensure!(
!is_missing_action(action_name),
find_action_by_name(action_name).is_some(),
"Action not found: {}",
action_name
);
@@ -493,35 +491,27 @@ where
});
}
#[derive(Debug, serde::Serialize, serde::Deserialize)]
#[derive(Debug, serde::Serialize)]
struct ActionDef {
name: String,
name: &'static str,
human_name: String,
deprecated_aliases: Vec<String>,
#[serde(rename = "documentation")]
docs: Option<String>,
deprecated_aliases: &'static [&'static str],
docs: Option<&'static str>,
}
fn load_all_actions() -> Vec<ActionDef> {
let asset_path = concat!(env!("CARGO_MANIFEST_DIR"), "/actions.json");
match std::fs::read_to_string(asset_path) {
Ok(content) => {
let mut actions: Vec<ActionDef> =
serde_json::from_str(&content).expect("Failed to parse actions.json");
actions.sort_by(|a, b| a.name.cmp(&b.name));
actions
}
Err(err) => {
if std::env::var("CI").is_ok() {
panic!("actions.json not found at {}: {}", asset_path, err);
}
eprintln!(
"Warning: actions.json not found, action validation will be skipped: {}",
err
);
Vec::new()
}
}
fn dump_all_gpui_actions() -> Vec<ActionDef> {
let mut actions = gpui::generate_list_of_all_registered_actions()
.map(|action| ActionDef {
name: action.name,
human_name: command_palette::humanize_action_name(action.name),
deprecated_aliases: action.deprecated_aliases,
docs: action.documentation,
})
.collect::<Vec<ActionDef>>();
actions.sort_by_key(|a| a.name);
actions
}
fn handle_postprocessing() -> Result<()> {
@@ -657,7 +647,7 @@ fn generate_big_table_of_actions() -> String {
let mut output = String::new();
let mut actions_sorted = actions.iter().collect::<Vec<_>>();
actions_sorted.sort_by_key(|a| a.name.as_str());
actions_sorted.sort_by_key(|a| a.name);
// Start the definition list with custom styling for better spacing
output.push_str("<dl style=\"line-height: 1.8;\">\n");
@@ -674,7 +664,7 @@ fn generate_big_table_of_actions() -> String {
output.push_str("<dd style=\"margin-left: 2em; margin-bottom: 1em;\">\n");
// Add the description, escaping HTML if needed
if let Some(description) = action.docs.as_ref() {
if let Some(description) = action.docs {
output.push_str(
&description
.replace("&", "&amp;")
@@ -684,7 +674,7 @@ fn generate_big_table_of_actions() -> String {
output.push_str("<br>\n");
}
output.push_str("Keymap Name: <code>");
output.push_str(&action.name);
output.push_str(action.name);
output.push_str("</code><br>\n");
if !action.deprecated_aliases.is_empty() {
output.push_str("Deprecated Alias(es): ");

View File

@@ -16,3 +16,4 @@ client.workspace = true
gpui.workspace = true
language.workspace = true
text.workspace = true
log.workspace = true

View File

@@ -231,6 +231,10 @@ pub enum EditPredictionGranularity {
}
/// Returns edits updated based on user edits since the old snapshot. None is returned if any user
/// edit is not a prefix of a predicted insertion.
///
/// This function is intentionally defensive: edit prediction providers may hold onto anchors from
/// an older snapshot. Converting those anchors to offsets can panic if the buffer version no longer
/// observes the anchor's timestamp. In that case, we treat the prediction as stale and return None.
pub fn interpolate_edits(
old_snapshot: &text::BufferSnapshot,
new_snapshot: &text::BufferSnapshot,
@@ -241,8 +245,12 @@ pub fn interpolate_edits(
let mut model_edits = current_edits.iter().peekable();
for user_edit in new_snapshot.edits_since::<usize>(&old_snapshot.version) {
while let Some((model_old_range, _)) = model_edits.peek() {
let model_old_range = model_old_range.to_offset(old_snapshot);
if model_old_range.end < user_edit.old.start {
let Some(model_old_offset_range) = safe_to_offset_range(old_snapshot, model_old_range)
else {
return None;
};
if model_old_offset_range.end < user_edit.old.start {
let (model_old_range, model_new_text) = model_edits.next().unwrap();
edits.push((model_old_range.clone(), model_new_text.clone()));
} else {
@@ -251,7 +259,11 @@ pub fn interpolate_edits(
}
if let Some((model_old_range, model_new_text)) = model_edits.peek() {
let model_old_offset_range = model_old_range.to_offset(old_snapshot);
let Some(model_old_offset_range) = safe_to_offset_range(old_snapshot, model_old_range)
else {
return None;
};
if user_edit.old == model_old_offset_range {
let user_new_text = new_snapshot
.text_for_range(user_edit.new.clone())
@@ -272,7 +284,38 @@ pub fn interpolate_edits(
return None;
}
// If any remaining edit ranges can't be converted safely, treat the prediction as stale.
if model_edits
.clone()
.any(|(range, _)| safe_to_offset_range(old_snapshot, range).is_none())
{
return None;
}
edits.extend(model_edits.cloned());
if edits.is_empty() { None } else { Some(edits) }
}
fn safe_to_offset_range(
snapshot: &text::BufferSnapshot,
range: &Range<Anchor>,
) -> Option<std::ops::Range<usize>> {
// Min/max anchors are always safe to convert.
let start_ok = range.start.is_min()
|| range.start.is_max()
|| snapshot.version.observed(range.start.timestamp);
let end_ok =
range.end.is_min() || range.end.is_max() || snapshot.version.observed(range.end.timestamp);
if start_ok && end_ok {
Some(range.to_offset(snapshot))
} else {
log::debug!(
"Dropping stale edit prediction range because anchor timestamps are not observed by snapshot version (start_ok: {}, end_ok: {})",
start_ok,
end_ok
);
None
}
}
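The staleness guard introduced in this hunk can be illustrated standalone. This is a hypothetical sketch, not Zed's actual `text` crate types: a snapshot's version vector records which edit timestamps it has observed, and an anchor created at an unobserved timestamp is refused rather than converted (which, in the real buffer, would panic):

```rust
// Hypothetical model of the check above (VersionVector/Anchor are
// illustrative stand-ins, not Zed's real types).
use std::collections::HashSet;

#[derive(Default)]
struct VersionVector {
    observed: HashSet<u64>, // timestamps of edits this snapshot has seen
}

impl VersionVector {
    fn observe(&mut self, timestamp: u64) {
        self.observed.insert(timestamp);
    }
    fn observed(&self, timestamp: u64) -> bool {
        self.observed.contains(&timestamp)
    }
}

struct Anchor {
    timestamp: u64, // the edit that created this anchor
    offset: usize,
}

/// Returns Some(offset) only when the snapshot has observed the anchor's
/// timestamp; otherwise the anchor is stale and conversion is refused,
/// mirroring how `safe_to_offset_range` treats the prediction as stale.
fn safe_to_offset(snapshot: &VersionVector, anchor: &Anchor) -> Option<usize> {
    if snapshot.observed(anchor.timestamp) {
        Some(anchor.offset)
    } else {
        None
    }
}

fn main() {
    let mut snapshot = VersionVector::default();
    snapshot.observe(1);
    let fresh = Anchor { timestamp: 1, offset: 10 };
    let stale = Anchor { timestamp: 2, offset: 4 };
    assert_eq!(safe_to_offset(&snapshot, &fresh), Some(10));
    assert_eq!(safe_to_offset(&snapshot, &stale), None);
    println!("ok");
}
```

Returning `None` and discarding the prediction is the conservative choice: a dropped completion is recoverable, a panic in the editor is not.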

View File

@@ -32,9 +32,11 @@ futures.workspace = true
gpui.workspace = true
indoc.workspace = true
language.workspace = true
language_model.workspace = true
markdown.workspace = true
menu.workspace = true
multi_buffer.workspace = true
ollama.workspace = true
paths.workspace = true
project.workspace = true
regex.workspace = true

View File

@@ -22,6 +22,7 @@ use language::{
EditPredictionsMode, File, Language,
language_settings::{self, AllLanguageSettings, EditPredictionProvider, all_language_settings},
};
use ollama::OllamaEditPredictionDelegate;
use project::DisableAiSettings;
use regex::Regex;
use settings::{
@@ -91,9 +92,9 @@ impl Render for EditPredictionButton {
return div().hidden();
}
let all_language_settings = all_language_settings(None, cx);
let language_settings = all_language_settings(None, cx);
match all_language_settings.edit_predictions.provider {
match language_settings.edit_predictions.provider {
EditPredictionProvider::Copilot => {
let Some(copilot) = Copilot::global(cx) else {
return div().hidden();
@@ -293,6 +294,60 @@ impl Render for EditPredictionButton {
.with_handle(self.popover_menu_handle.clone()),
)
}
EditPredictionProvider::Ollama => {
let enabled = self.editor_enabled.unwrap_or(true);
let this = cx.weak_entity();
div().child(
PopoverMenu::new("ollama")
.menu(move |window, cx| {
this.update(cx, |this, cx| {
this.build_edit_prediction_context_menu(
EditPredictionProvider::Ollama,
window,
cx,
)
})
.ok()
})
.anchor(Corner::BottomRight)
.trigger_with_tooltip(
IconButton::new("ollama-icon", IconName::AiOllama)
.shape(IconButtonShape::Square)
.when(!enabled, |this| {
this.indicator(Indicator::dot().color(Color::Ignored))
.indicator_border_color(Some(
cx.theme().colors().status_bar_background,
))
}),
move |_window, cx| {
let settings = all_language_settings(None, cx);
let tooltip_meta = match settings
.edit_predictions
.ollama
.model
.as_deref()
{
Some(model) if !model.trim().is_empty() => {
format!("Powered by Ollama ({model})")
}
_ => {
"Ollama model not configured — configure a model before use"
.to_string()
}
};
Tooltip::with_meta(
"Edit Prediction",
Some(&ToggleMenu),
tooltip_meta,
cx,
)
},
)
.with_handle(self.popover_menu_handle.clone()),
)
}
provider @ (EditPredictionProvider::Experimental(_) | EditPredictionProvider::Zed) => {
let enabled = self.editor_enabled.unwrap_or(true);
@@ -547,6 +602,10 @@ impl EditPredictionButton {
providers.push(EditPredictionProvider::Codestral);
}
if OllamaEditPredictionDelegate::is_available(cx) {
providers.push(EditPredictionProvider::Ollama);
}
if cx.has_flag::<SweepFeatureFlag>()
&& edit_prediction::sweep_ai::sweep_api_token(cx)
.read(cx)
@@ -595,6 +654,7 @@ impl EditPredictionButton {
EditPredictionProvider::Copilot => "GitHub Copilot",
EditPredictionProvider::Supermaven => "Supermaven",
EditPredictionProvider::Codestral => "Codestral",
EditPredictionProvider::Ollama => "Ollama",
EditPredictionProvider::Experimental(
EXPERIMENTAL_SWEEP_EDIT_PREDICTION_PROVIDER_NAME,
) => "Sweep",
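The gating added in this file — offering Ollama in the provider menu only when `OllamaEditPredictionDelegate::is_available(cx)` reports the service as reachable — follows a general pattern that can be sketched as follows. All names here (`EditPredictionDelegate`, the concrete provider structs) are illustrative assumptions, not Zed's actual API:

```rust
// Hypothetical sketch: each provider delegate reports availability, and only
// available providers are offered in the menu.
trait EditPredictionDelegate {
    fn name(&self) -> &'static str;
    fn is_available(&self) -> bool;
}

struct Ollama {
    // In the real implementation this would be probed (e.g. by checking
    // whether the local Ollama service responds), not stored as a flag.
    service_running: bool,
}

impl EditPredictionDelegate for Ollama {
    fn name(&self) -> &'static str {
        "Ollama"
    }
    fn is_available(&self) -> bool {
        self.service_running
    }
}

struct Codestral; // stand-in for a provider that is always offered

impl EditPredictionDelegate for Codestral {
    fn name(&self) -> &'static str {
        "Codestral"
    }
    fn is_available(&self) -> bool {
        true
    }
}

fn available_providers(candidates: &[&dyn EditPredictionDelegate]) -> Vec<&'static str> {
    candidates
        .iter()
        .filter(|p| p.is_available())
        .map(|p| p.name())
        .collect()
}

fn main() {
    let ollama = Ollama { service_running: false };
    let codestral = Codestral;
    let offered = available_providers(&[&ollama, &codestral]);
    assert_eq!(offered, vec!["Codestral"]);
    println!("offered: {:?}", offered);
}
```

Keeping the availability probe in the delegate means the menu-building code stays a simple filter, and a provider whose backing service goes down simply disappears from the list instead of failing at request time.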

View File

@@ -51,8 +51,6 @@ pub const MENU_ASIDE_MIN_WIDTH: Pixels = px(260.);
pub const MENU_ASIDE_MAX_WIDTH: Pixels = px(500.);
pub const COMPLETION_MENU_MIN_WIDTH: Pixels = px(280.);
pub const COMPLETION_MENU_MAX_WIDTH: Pixels = px(540.);
pub const CODE_ACTION_MENU_MIN_WIDTH: Pixels = px(220.);
pub const CODE_ACTION_MENU_MAX_WIDTH: Pixels = px(540.);
// Constants for the markdown cache. The purpose of this cache is to reduce flickering due to
// documentation not yet being parsed.
@@ -181,7 +179,7 @@ impl CodeContextMenu {
) -> Option<AnyElement> {
match self {
CodeContextMenu::Completions(menu) => menu.render_aside(max_size, window, cx),
CodeContextMenu::CodeActions(menu) => menu.render_aside(max_size, window, cx),
CodeContextMenu::CodeActions(_) => None,
}
}
@@ -893,7 +891,7 @@ impl CompletionsMenu {
None
} else {
Some(
Label::new(text.trim().to_string())
Label::new(text.clone())
.ml_4()
.size(LabelSize::Small)
.color(Color::Muted),
@@ -1421,6 +1419,26 @@ pub enum CodeActionsItem {
}
impl CodeActionsItem {
fn as_task(&self) -> Option<&ResolvedTask> {
let Self::Task(_, task) = self else {
return None;
};
Some(task)
}
fn as_code_action(&self) -> Option<&CodeAction> {
let Self::CodeAction { action, .. } = self else {
return None;
};
Some(action)
}
fn as_debug_scenario(&self) -> Option<&DebugScenario> {
let Self::DebugScenario(scenario) = self else {
return None;
};
Some(scenario)
}
pub fn label(&self) -> String {
match self {
Self::CodeAction { action, .. } => action.lsp_action.title().to_owned(),
@@ -1428,14 +1446,6 @@ impl CodeActionsItem {
Self::DebugScenario(scenario) => scenario.label.to_string(),
}
}
pub fn menu_label(&self) -> String {
match self {
Self::CodeAction { action, .. } => action.lsp_action.title().replace("\n", ""),
Self::Task(_, task) => task.resolved_label.replace("\n", ""),
Self::DebugScenario(scenario) => format!("debug: {}", scenario.label),
}
}
}
pub struct CodeActionsMenu {
@@ -1545,33 +1555,60 @@ impl CodeActionsMenu {
let item_ix = range.start + ix;
let selected = item_ix == selected_item;
let colors = cx.theme().colors();
ListItem::new(item_ix)
.inset(true)
.toggle_state(selected)
.overflow_x()
.child(
div()
.min_w(CODE_ACTION_MENU_MIN_WIDTH)
.max_w(CODE_ACTION_MENU_MAX_WIDTH)
.overflow_hidden()
.text_ellipsis()
.when(is_quick_action_bar, |this| this.text_ui(cx))
.when(selected, |this| this.text_color(colors.text_accent))
.child(action.menu_label()),
)
.on_click(cx.listener(move |editor, _, window, cx| {
cx.stop_propagation();
if let Some(task) = editor.confirm_code_action(
&ConfirmCodeAction {
item_ix: Some(item_ix),
},
window,
cx,
) {
task.detach_and_log_err(cx)
}
}))
div().min_w(px(220.)).max_w(px(540.)).child(
ListItem::new(item_ix)
.inset(true)
.toggle_state(selected)
.when_some(action.as_code_action(), |this, action| {
this.child(
h_flex()
.overflow_hidden()
.when(is_quick_action_bar, |this| this.text_ui(cx))
.child(
// TASK: It would be good to make lsp_action.title a SharedString to avoid allocating here.
action.lsp_action.title().replace("\n", ""),
)
.when(selected, |this| {
this.text_color(colors.text_accent)
}),
)
})
.when_some(action.as_task(), |this, task| {
this.child(
h_flex()
.overflow_hidden()
.when(is_quick_action_bar, |this| this.text_ui(cx))
.child(task.resolved_label.replace("\n", ""))
.when(selected, |this| {
this.text_color(colors.text_accent)
}),
)
})
.when_some(action.as_debug_scenario(), |this, scenario| {
this.child(
h_flex()
.overflow_hidden()
.when(is_quick_action_bar, |this| this.text_ui(cx))
.child("debug: ")
.child(scenario.label.clone())
.when(selected, |this| {
this.text_color(colors.text_accent)
}),
)
})
.on_click(cx.listener(move |editor, _, window, cx| {
cx.stop_propagation();
if let Some(task) = editor.confirm_code_action(
&ConfirmCodeAction {
item_ix: Some(item_ix),
},
window,
cx,
) {
task.detach_and_log_err(cx)
}
})),
)
})
.collect()
}),
@@ -1598,46 +1635,4 @@ impl CodeActionsMenu {
Popover::new().child(list).into_any_element()
}
fn render_aside(
&mut self,
max_size: Size<Pixels>,
window: &mut Window,
_cx: &mut Context<Editor>,
) -> Option<AnyElement> {
let Some(action) = self.actions.get(self.selected_item) else {
return None;
};
let label = action.menu_label();
let text_system = window.text_system();
let mut line_wrapper = text_system.line_wrapper(
window.text_style().font(),
window.text_style().font_size.to_pixels(window.rem_size()),
);
let is_truncated = line_wrapper.should_truncate_line(
&label,
CODE_ACTION_MENU_MAX_WIDTH,
"",
gpui::TruncateFrom::End,
);
if is_truncated.is_none() {
return None;
}
Some(
Popover::new()
.child(
div()
.child(label)
.id("code_actions_menu_extended")
.px(MENU_ASIDE_X_PADDING / 2.)
.max_w(max_size.width)
.max_h(max_size.height)
.occlude(),
)
.into_any_element(),
)
}
}

View File

@@ -163,7 +163,6 @@ use project::{
project_settings::{DiagnosticSeverity, GoToDiagnosticSeverityFilter, ProjectSettings},
};
use rand::seq::SliceRandom;
use regex::Regex;
use rpc::{ErrorCode, ErrorExt, proto::PeerId};
use scroll::{Autoscroll, OngoingScroll, ScrollAnchor, ScrollManager};
use selections_collection::{MutableSelectionsCollection, SelectionsCollection};
@@ -4788,146 +4787,82 @@ impl Editor {
let end = selection.end;
let selection_is_empty = start == end;
let language_scope = buffer.language_scope_at(start);
let (delimiter, newline_config) = if let Some(language) = &language_scope {
let needs_extra_newline = NewlineConfig::insert_extra_newline_brackets(
&buffer,
start..end,
language,
)
|| NewlineConfig::insert_extra_newline_tree_sitter(
&buffer,
start..end,
);
let (comment_delimiter, doc_delimiter, newline_formatting) =
if let Some(language) = &language_scope {
let mut newline_formatting =
NewlineFormatting::new(&buffer, start..end, language);
let mut newline_config = NewlineConfig::Newline {
additional_indent: IndentSize::spaces(0),
extra_line_additional_indent: if needs_extra_newline {
Some(IndentSize::spaces(0))
} else {
None
},
prevent_auto_indent: false,
// Comment extension on newline is allowed only for cursor selections
let comment_delimiter = maybe!({
if !selection_is_empty {
return None;
}
if !multi_buffer.language_settings(cx).extend_comment_on_newline
{
return None;
}
return comment_delimiter_for_newline(
&start_point,
&buffer,
language,
);
});
let doc_delimiter = maybe!({
if !selection_is_empty {
return None;
}
if !multi_buffer.language_settings(cx).extend_comment_on_newline
{
return None;
}
return documentation_delimiter_for_newline(
&start_point,
&buffer,
language,
&mut newline_formatting,
);
});
(comment_delimiter, doc_delimiter, newline_formatting)
} else {
(None, None, NewlineFormatting::default())
};
let comment_delimiter = maybe!({
if !selection_is_empty {
return None;
}
let prevent_auto_indent = doc_delimiter.is_some();
let delimiter = comment_delimiter.or(doc_delimiter);
if !multi_buffer.language_settings(cx).extend_comment_on_newline {
return None;
}
let capacity_for_delimiter =
delimiter.as_deref().map(str::len).unwrap_or_default();
let mut new_text = String::with_capacity(
1 + capacity_for_delimiter
+ existing_indent.len as usize
+ newline_formatting.indent_on_newline.len as usize
+ newline_formatting.indent_on_extra_newline.len as usize,
);
new_text.push('\n');
new_text.extend(existing_indent.chars());
new_text.extend(newline_formatting.indent_on_newline.chars());
return comment_delimiter_for_newline(
&start_point,
&buffer,
language,
);
});
if let Some(delimiter) = &delimiter {
new_text.push_str(delimiter);
}
let doc_delimiter = maybe!({
if !selection_is_empty {
return None;
}
if !multi_buffer.language_settings(cx).extend_comment_on_newline {
return None;
}
return documentation_delimiter_for_newline(
&start_point,
&buffer,
language,
&mut newline_config,
);
});
let list_delimiter = maybe!({
if !selection_is_empty {
return None;
}
if !multi_buffer.language_settings(cx).extend_list_on_newline {
return None;
}
return list_delimiter_for_newline(
&start_point,
&buffer,
language,
&mut newline_config,
);
});
(
comment_delimiter.or(doc_delimiter).or(list_delimiter),
newline_config,
)
} else {
(
None,
NewlineConfig::Newline {
additional_indent: IndentSize::spaces(0),
extra_line_additional_indent: None,
prevent_auto_indent: false,
},
)
};
let (edit_start, new_text, prevent_auto_indent) = match &newline_config {
NewlineConfig::ClearCurrentLine => {
let row_start =
buffer.point_to_offset(Point::new(start_point.row, 0));
(row_start, String::new(), false)
}
NewlineConfig::UnindentCurrentLine { continuation } => {
let row_start =
buffer.point_to_offset(Point::new(start_point.row, 0));
let tab_size = buffer.language_settings_at(start, cx).tab_size;
let tab_size_indent = IndentSize::spaces(tab_size.get());
let reduced_indent =
existing_indent.with_delta(Ordering::Less, tab_size_indent);
let mut new_text = String::new();
new_text.extend(reduced_indent.chars());
new_text.push_str(continuation);
(row_start, new_text, true)
}
NewlineConfig::Newline {
additional_indent,
extra_line_additional_indent,
prevent_auto_indent,
} => {
let capacity_for_delimiter =
delimiter.as_deref().map(str::len).unwrap_or_default();
let extra_line_len = extra_line_additional_indent
.map(|i| 1 + existing_indent.len as usize + i.len as usize)
.unwrap_or(0);
let mut new_text = String::with_capacity(
1 + capacity_for_delimiter
+ existing_indent.len as usize
+ additional_indent.len as usize
+ extra_line_len,
);
new_text.push('\n');
new_text.extend(existing_indent.chars());
new_text.extend(additional_indent.chars());
if let Some(delimiter) = &delimiter {
new_text.push_str(delimiter);
}
if let Some(extra_indent) = extra_line_additional_indent {
new_text.push('\n');
new_text.extend(existing_indent.chars());
new_text.extend(extra_indent.chars());
}
(start, new_text, *prevent_auto_indent)
}
};
if newline_formatting.insert_extra_newline {
new_text.push('\n');
new_text.extend(existing_indent.chars());
new_text.extend(newline_formatting.indent_on_extra_newline.chars());
}
let anchor = buffer.anchor_after(end);
let new_selection = selection.map(|_| anchor);
(
((edit_start..end, new_text), prevent_auto_indent),
(newline_config.has_extra_line(), new_selection),
((start..end, new_text), prevent_auto_indent),
(newline_formatting.insert_extra_newline, new_selection),
)
})
.unzip()
@@ -10452,22 +10387,6 @@ impl Editor {
}
prev_edited_row = selection.end.row;
// If cursor is after a list prefix, make selection non-empty to trigger line indent
if selection.is_empty() {
let cursor = selection.head();
let settings = buffer.language_settings_at(cursor, cx);
if settings.indent_list_on_tab {
if let Some(language) = snapshot.language_scope_at(Point::new(cursor.row, 0)) {
if is_list_prefix_row(MultiBufferRow(cursor.row), &snapshot, &language) {
row_delta = Self::indent_selection(
buffer, &snapshot, selection, &mut edits, row_delta, cx,
);
continue;
}
}
}
}
// If the selection is non-empty, then increase the indentation of the selected lines.
if !selection.is_empty() {
row_delta =
@@ -20475,7 +20394,7 @@ impl Editor {
EditorSettings::get_global(cx).gutter.line_numbers
}
pub fn relative_line_numbers(&self, cx: &App) -> RelativeLineNumbers {
pub fn relative_line_numbers(&self, cx: &mut App) -> RelativeLineNumbers {
match (
self.use_relative_line_numbers,
EditorSettings::get_global(cx).relative_line_numbers,
@@ -23436,7 +23355,7 @@ fn documentation_delimiter_for_newline(
start_point: &Point,
buffer: &MultiBufferSnapshot,
language: &LanguageScope,
newline_config: &mut NewlineConfig,
newline_formatting: &mut NewlineFormatting,
) -> Option<Arc<str>> {
let BlockCommentConfig {
start: start_tag,
@@ -23488,9 +23407,6 @@ fn documentation_delimiter_for_newline(
}
};
let mut needs_extra_line = false;
let mut extra_line_additional_indent = IndentSize::spaces(0);
let cursor_is_before_end_tag_if_exists = {
let mut char_position = 0u32;
let mut end_tag_offset = None;
@@ -23508,11 +23424,11 @@ fn documentation_delimiter_for_newline(
let cursor_is_before_end_tag = column <= end_tag_offset;
if cursor_is_after_start_tag {
if cursor_is_before_end_tag {
needs_extra_line = true;
newline_formatting.insert_extra_newline = true;
}
let cursor_is_at_start_of_end_tag = column == end_tag_offset;
if cursor_is_at_start_of_end_tag {
extra_line_additional_indent.len = *len;
newline_formatting.indent_on_extra_newline.len = *len;
}
}
cursor_is_before_end_tag
@@ -23524,240 +23440,39 @@ fn documentation_delimiter_for_newline(
if (cursor_is_after_start_tag || cursor_is_after_delimiter)
&& cursor_is_before_end_tag_if_exists
{
let additional_indent = if cursor_is_after_start_tag {
IndentSize::spaces(*len)
} else {
IndentSize::spaces(0)
};
*newline_config = NewlineConfig::Newline {
additional_indent,
extra_line_additional_indent: if needs_extra_line {
Some(extra_line_additional_indent)
} else {
None
},
prevent_auto_indent: true,
};
if cursor_is_after_start_tag {
newline_formatting.indent_on_newline.len = *len;
}
Some(delimiter.clone())
} else {
None
}
}
const ORDERED_LIST_MAX_MARKER_LEN: usize = 16;
fn list_delimiter_for_newline(
start_point: &Point,
buffer: &MultiBufferSnapshot,
language: &LanguageScope,
newline_config: &mut NewlineConfig,
) -> Option<Arc<str>> {
let (snapshot, range) = buffer.buffer_line_for_row(MultiBufferRow(start_point.row))?;
let num_of_whitespaces = snapshot
.chars_for_range(range.clone())
.take_while(|c| c.is_whitespace())
.count();
let task_list_entries: Vec<_> = language
.task_list()
.into_iter()
.flat_map(|config| {
config
.prefixes
.iter()
.map(|prefix| (prefix.as_ref(), config.continuation.as_ref()))
})
.collect();
let unordered_list_entries: Vec<_> = language
.unordered_list()
.iter()
.map(|marker| (marker.as_ref(), marker.as_ref()))
.collect();
let all_entries: Vec<_> = task_list_entries
.into_iter()
.chain(unordered_list_entries)
.collect();
if let Some(max_prefix_len) = all_entries.iter().map(|(p, _)| p.len()).max() {
let candidate: String = snapshot
.chars_for_range(range.clone())
.skip(num_of_whitespaces)
.take(max_prefix_len)
.collect();
if let Some((prefix, continuation)) = all_entries
.iter()
.filter(|(prefix, _)| candidate.starts_with(*prefix))
.max_by_key(|(prefix, _)| prefix.len())
{
let end_of_prefix = num_of_whitespaces + prefix.len();
let cursor_is_after_prefix = end_of_prefix <= start_point.column as usize;
let has_content_after_marker = snapshot
.chars_for_range(range)
.skip(end_of_prefix)
.any(|c| !c.is_whitespace());
if has_content_after_marker && cursor_is_after_prefix {
return Some((*continuation).into());
}
if start_point.column as usize == end_of_prefix {
if num_of_whitespaces == 0 {
*newline_config = NewlineConfig::ClearCurrentLine;
} else {
*newline_config = NewlineConfig::UnindentCurrentLine {
continuation: (*continuation).into(),
};
}
}
return None;
}
}
let candidate: String = snapshot
.chars_for_range(range.clone())
.skip(num_of_whitespaces)
.take(ORDERED_LIST_MAX_MARKER_LEN)
.collect();
for ordered_config in language.ordered_list() {
let regex = match Regex::new(&ordered_config.pattern) {
Ok(r) => r,
Err(_) => continue,
};
if let Some(captures) = regex.captures(&candidate) {
let full_match = captures.get(0)?;
let marker_len = full_match.len();
let end_of_prefix = num_of_whitespaces + marker_len;
let cursor_is_after_prefix = end_of_prefix <= start_point.column as usize;
let has_content_after_marker = snapshot
.chars_for_range(range)
.skip(end_of_prefix)
.any(|c| !c.is_whitespace());
if has_content_after_marker && cursor_is_after_prefix {
let number: u32 = captures.get(1)?.as_str().parse().ok()?;
let continuation = ordered_config
.format
.replace("{1}", &(number + 1).to_string());
return Some(continuation.into());
}
if start_point.column as usize == end_of_prefix {
let continuation = ordered_config.format.replace("{1}", "1");
if num_of_whitespaces == 0 {
*newline_config = NewlineConfig::ClearCurrentLine;
} else {
*newline_config = NewlineConfig::UnindentCurrentLine {
continuation: continuation.into(),
};
}
}
return None;
}
}
None
#[derive(Debug, Default)]
struct NewlineFormatting {
insert_extra_newline: bool,
indent_on_newline: IndentSize,
indent_on_extra_newline: IndentSize,
}
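The ordered-list branch removed above derives the next marker by parsing the leading number of the current line and requiring real content after the marker. A self-contained sketch of that idea (not Zed's implementation, which matches a language-configured regex and substitutes into a `{1}` format string; this hard-codes the common `N. ` Markdown style):

```rust
// Minimal sketch of ordered-list continuation: given a line such as
// "10. tenth item", produce the marker for the next line ("11. ").
fn next_ordered_marker(line: &str) -> Option<String> {
    let trimmed = line.trim_start();
    let digits: String = trimmed.chars().take_while(|c| c.is_ascii_digit()).collect();
    let rest = &trimmed[digits.len()..];
    // Require "N. " followed by actual content, mirroring the
    // has_content_after_marker check in the diff above.
    if digits.is_empty() || !rest.starts_with(". ") || rest[2..].trim().is_empty() {
        return None;
    }
    let number: u32 = digits.parse().ok()?;
    Some(format!("{}. ", number + 1))
}

fn main() {
    assert_eq!(next_ordered_marker("10. tenth item"), Some("11. ".to_string()));
    assert_eq!(next_ordered_marker("1. "), None); // empty item: no continuation
    println!("ok");
}
```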
fn is_list_prefix_row(
row: MultiBufferRow,
buffer: &MultiBufferSnapshot,
language: &LanguageScope,
) -> bool {
let Some((snapshot, range)) = buffer.buffer_line_for_row(row) else {
return false;
};
let num_of_whitespaces = snapshot
.chars_for_range(range.clone())
.take_while(|c| c.is_whitespace())
.count();
let task_list_prefixes: Vec<_> = language
.task_list()
.into_iter()
.flat_map(|config| {
config
.prefixes
.iter()
.map(|p| p.as_ref())
.collect::<Vec<_>>()
})
.collect();
let unordered_list_markers: Vec<_> = language
.unordered_list()
.iter()
.map(|marker| marker.as_ref())
.collect();
let all_prefixes: Vec<_> = task_list_prefixes
.into_iter()
.chain(unordered_list_markers)
.collect();
if let Some(max_prefix_len) = all_prefixes.iter().map(|p| p.len()).max() {
let candidate: String = snapshot
.chars_for_range(range.clone())
.skip(num_of_whitespaces)
.take(max_prefix_len)
.collect();
if all_prefixes
.iter()
.any(|prefix| candidate.starts_with(*prefix))
{
return true;
impl NewlineFormatting {
fn new(
buffer: &MultiBufferSnapshot,
range: Range<MultiBufferOffset>,
language: &LanguageScope,
) -> Self {
Self {
insert_extra_newline: Self::insert_extra_newline_brackets(
buffer,
range.clone(),
language,
) || Self::insert_extra_newline_tree_sitter(buffer, range),
indent_on_newline: IndentSize::spaces(0),
indent_on_extra_newline: IndentSize::spaces(0),
}
}
let ordered_list_candidate: String = snapshot
.chars_for_range(range)
.skip(num_of_whitespaces)
.take(ORDERED_LIST_MAX_MARKER_LEN)
.collect();
for ordered_config in language.ordered_list() {
let regex = match Regex::new(&ordered_config.pattern) {
Ok(r) => r,
Err(_) => continue,
};
if let Some(captures) = regex.captures(&ordered_list_candidate) {
return captures.get(0).is_some();
}
}
false
}
#[derive(Debug)]
enum NewlineConfig {
/// Insert newline with optional additional indent and optional extra blank line
Newline {
additional_indent: IndentSize,
extra_line_additional_indent: Option<IndentSize>,
prevent_auto_indent: bool,
},
/// Clear the current line
ClearCurrentLine,
/// Unindent the current line and add continuation
UnindentCurrentLine { continuation: Arc<str> },
}
impl NewlineConfig {
fn has_extra_line(&self) -> bool {
matches!(
self,
Self::Newline {
extra_line_additional_indent: Some(_),
..
}
)
}
fn insert_extra_newline_brackets(
buffer: &MultiBufferSnapshot,
range: Range<MultiBufferOffset>,
@@ -25294,70 +25009,6 @@ impl EditorSnapshot {
let digit_count = self.widest_line_number().ilog10() + 1;
column_pixels(style, digit_count as usize, window)
}
/// Returns the line delta from `base` to `line` in the multibuffer, ignoring wrapped lines.
///
/// This is positive if `base` is before `line`.
fn relative_line_delta(&self, base: DisplayRow, line: DisplayRow) -> i64 {
let point = DisplayPoint::new(line, 0).to_point(self);
self.relative_line_delta_to_point(base, point)
}
/// Returns the line delta from `base` to `point` in the multibuffer, ignoring wrapped lines.
///
/// This is positive if `base` is before `point`.
pub fn relative_line_delta_to_point(&self, base: DisplayRow, point: Point) -> i64 {
let base_point = DisplayPoint::new(base, 0).to_point(self);
point.row as i64 - base_point.row as i64
}
/// Returns the line delta from `base` to `line` in the multibuffer, counting wrapped lines.
///
/// This is positive if `base` is before `line`.
fn relative_wrapped_line_delta(&self, base: DisplayRow, line: DisplayRow) -> i64 {
let point = DisplayPoint::new(line, 0).to_point(self);
self.relative_wrapped_line_delta_to_point(base, point)
}
/// Returns the line delta from `base` to `point` in the multibuffer, counting wrapped lines.
///
/// This is positive if `base` is before `point`.
pub fn relative_wrapped_line_delta_to_point(&self, base: DisplayRow, point: Point) -> i64 {
let base_point = DisplayPoint::new(base, 0).to_point(self);
let wrap_snapshot = self.wrap_snapshot();
let base_wrap_row = wrap_snapshot.make_wrap_point(base_point, Bias::Left).row();
let wrap_row = wrap_snapshot.make_wrap_point(point, Bias::Left).row();
wrap_row.0 as i64 - base_wrap_row.0 as i64
}
/// Returns the unsigned relative line number to display for each row in `rows`.
///
    /// Wrapped rows are excluded from the hashmap if `count_wrapped_lines` is `false`.
pub fn calculate_relative_line_numbers(
&self,
rows: &Range<DisplayRow>,
relative_to: DisplayRow,
count_wrapped_lines: bool,
) -> HashMap<DisplayRow, u32> {
let initial_offset = if count_wrapped_lines {
self.relative_wrapped_line_delta(relative_to, rows.start)
} else {
self.relative_line_delta(relative_to, rows.start)
};
let display_row_infos = self
.row_infos(rows.start)
.take(rows.len())
.enumerate()
.map(|(i, row_info)| (DisplayRow(rows.start.0 + i as u32), row_info));
display_row_infos
.filter(|(_row, row_info)| {
row_info.buffer_row.is_some()
|| (count_wrapped_lines && row_info.wrapped_buffer_row.is_some())
})
.enumerate()
.map(|(i, (row, _row_info))| (row, (initial_offset + i as i64).unsigned_abs() as u32))
.collect()
}
}
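The removed `calculate_relative_line_numbers` assigns each visible row its unsigned distance from the cursor row. A self-contained sketch of the core idea, ignoring soft-wrap and multibuffer excerpts (both of which the deleted code handled via the `relative_line_delta` variants):

```rust
use std::collections::HashMap;

// Sketch of relative line numbering: each row in `rows` gets the unsigned
// distance to `base`, the row the cursor is on. The real implementation
// additionally skips wrapped rows and walks multibuffer row infos.
fn relative_numbers(rows: std::ops::Range<u32>, base: u32) -> HashMap<u32, u32> {
    rows.map(|row| (row, row.abs_diff(base))).collect()
}

fn main() {
    let numbers = relative_numbers(0..5, 2);
    assert_eq!(numbers[&0], 2);
    assert_eq!(numbers[&2], 0); // the cursor row displays 0
    assert_eq!(numbers[&4], 2);
    println!("ok");
}
```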
pub fn column_pixels(style: &EditorStyle, column: usize, window: &Window) -> Pixels {


@@ -215,8 +215,7 @@ impl Settings for EditorSettings {
},
scrollbar: Scrollbar {
show: scrollbar.show.map(Into::into).unwrap(),
git_diff: scrollbar.git_diff.unwrap()
&& content.git.unwrap().enabled.unwrap().is_git_diff_enabled(),
git_diff: scrollbar.git_diff.unwrap(),
selected_text: scrollbar.selected_text.unwrap(),
selected_symbol: scrollbar.selected_symbol.unwrap(),
search_results: scrollbar.search_results.unwrap(),


@@ -36,8 +36,7 @@ use languages::markdown_lang;
use languages::rust_lang;
use lsp::CompletionParams;
use multi_buffer::{
ExcerptRange, IndentGuide, MultiBuffer, MultiBufferFilterMode, MultiBufferOffset,
MultiBufferOffsetUtf16, PathKey,
IndentGuide, MultiBufferFilterMode, MultiBufferOffset, MultiBufferOffsetUtf16, PathKey,
};
use parking_lot::Mutex;
use pretty_assertions::{assert_eq, assert_ne};
@@ -20881,36 +20880,6 @@ async fn test_toggling_adjacent_diff_hunks(cx: &mut TestAppContext) {
.to_string(),
);
cx.update_editor(|editor, window, cx| {
editor.move_up(&MoveUp, window, cx);
editor.toggle_selected_diff_hunks(&Default::default(), window, cx);
});
cx.assert_state_with_diff(
indoc! { "
ˇone
- two
three
five
"}
.to_string(),
);
cx.update_editor(|editor, window, cx| {
editor.move_down(&MoveDown, window, cx);
editor.move_down(&MoveDown, window, cx);
editor.toggle_selected_diff_hunks(&Default::default(), window, cx);
});
cx.assert_state_with_diff(
indoc! { "
one
- two
ˇthree
- four
five
"}
.to_string(),
);
cx.set_state(indoc! { "
one
ˇTWO
@@ -20950,66 +20919,6 @@ async fn test_toggling_adjacent_diff_hunks(cx: &mut TestAppContext) {
);
}
#[gpui::test]
async fn test_toggling_adjacent_diff_hunks_2(
executor: BackgroundExecutor,
cx: &mut TestAppContext,
) {
init_test(cx, |_| {});
let mut cx = EditorTestContext::new(cx).await;
let diff_base = r#"
lineA
lineB
lineC
lineD
"#
.unindent();
cx.set_state(
&r#"
ˇlineA1
lineB
lineD
"#
.unindent(),
);
cx.set_head_text(&diff_base);
executor.run_until_parked();
cx.update_editor(|editor, window, cx| {
editor.toggle_selected_diff_hunks(&ToggleSelectedDiffHunks, window, cx);
});
executor.run_until_parked();
cx.assert_state_with_diff(
r#"
- lineA
+ ˇlineA1
lineB
lineD
"#
.unindent(),
);
cx.update_editor(|editor, window, cx| {
editor.move_down(&MoveDown, window, cx);
editor.move_right(&MoveRight, window, cx);
editor.toggle_selected_diff_hunks(&ToggleSelectedDiffHunks, window, cx);
});
executor.run_until_parked();
cx.assert_state_with_diff(
r#"
- lineA
+ lineA1
lˇineB
- lineC
lineD
"#
.unindent(),
);
}
#[gpui::test]
async fn test_edits_around_expanded_deletion_hunks(
executor: BackgroundExecutor,
@@ -28022,7 +27931,7 @@ async fn test_markdown_indents(cx: &mut gpui::TestAppContext) {
"
});
// Case 2: Test adding new line after nested list continues the list with unchecked task
// Case 2: Test adding new line after nested list preserves indent of previous line
cx.set_state(&indoc! {"
- [ ] Item 1
- [ ] Item 1.a
@@ -28039,12 +27948,20 @@ async fn test_markdown_indents(cx: &mut gpui::TestAppContext) {
- [x] Item 2
- [x] Item 2.a
- [x] Item 2.b
- [ ] ˇ"
ˇ"
});
// Case 3: Test adding content to continued list item
// Case 3: Test adding a new nested list item preserves indent
cx.set_state(&indoc! {"
- [ ] Item 1
- [ ] Item 1.a
- [x] Item 2
- [x] Item 2.a
- [x] Item 2.b
ˇ"
});
cx.update_editor(|editor, window, cx| {
editor.handle_input("Item 2.c", window, cx);
editor.handle_input("-", window, cx);
});
cx.run_until_parked();
cx.assert_editor_state(indoc! {"
@@ -28053,10 +27970,22 @@ async fn test_markdown_indents(cx: &mut gpui::TestAppContext) {
- [x] Item 2
- [x] Item 2.a
- [x] Item 2.b
- [ ] Item 2.cˇ"
"
});
cx.update_editor(|editor, window, cx| {
editor.handle_input(" [x] Item 2.c", window, cx);
});
cx.run_until_parked();
cx.assert_editor_state(indoc! {"
- [ ] Item 1
- [ ] Item 1.a
- [x] Item 2
- [x] Item 2.a
- [x] Item 2.b
- [x] Item 2.cˇ"
});
// Case 4: Test adding new line after nested ordered list continues with next number
// Case 4: Test adding new line after nested ordered list preserves indent of previous line
cx.set_state(indoc! {"
1. Item 1
1. Item 1.a
@@ -28073,12 +28002,44 @@ async fn test_markdown_indents(cx: &mut gpui::TestAppContext) {
2. Item 2
1. Item 2.a
2. Item 2.b
3. ˇ"
ˇ"
});
// Case 5: Adding content to continued ordered list item
// Case 5: Adding new ordered list item preserves indent
cx.set_state(indoc! {"
1. Item 1
1. Item 1.a
2. Item 2
1. Item 2.a
2. Item 2.b
ˇ"
});
cx.update_editor(|editor, window, cx| {
editor.handle_input("Item 2.c", window, cx);
editor.handle_input("3", window, cx);
});
cx.run_until_parked();
cx.assert_editor_state(indoc! {"
1. Item 1
1. Item 1.a
2. Item 2
1. Item 2.a
2. Item 2.b
"
});
cx.update_editor(|editor, window, cx| {
editor.handle_input(".", window, cx);
});
cx.run_until_parked();
cx.assert_editor_state(indoc! {"
1. Item 1
1. Item 1.a
2. Item 2
1. Item 2.a
2. Item 2.b
3.ˇ"
});
cx.update_editor(|editor, window, cx| {
editor.handle_input(" Item 2.c", window, cx);
});
cx.run_until_parked();
cx.assert_editor_state(indoc! {"
@@ -28634,130 +28595,6 @@ async fn test_sticky_scroll(cx: &mut TestAppContext) {
assert_eq!(sticky_headers(10.0), vec![]);
}
#[gpui::test]
fn test_relative_line_numbers(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let buffer_1 = cx.new(|cx| Buffer::local("aaaaaaaaaa\nbbb\n", cx));
let buffer_2 = cx.new(|cx| Buffer::local("cccccccccc\nddd\n", cx));
let buffer_3 = cx.new(|cx| Buffer::local("eee\nffffffffff\n", cx));
let multibuffer = cx.new(|cx| {
let mut multibuffer = MultiBuffer::new(ReadWrite);
multibuffer.push_excerpts(
buffer_1.clone(),
[ExcerptRange::new(Point::new(0, 0)..Point::new(2, 0))],
cx,
);
multibuffer.push_excerpts(
buffer_2.clone(),
[ExcerptRange::new(Point::new(0, 0)..Point::new(2, 0))],
cx,
);
multibuffer.push_excerpts(
buffer_3.clone(),
[ExcerptRange::new(Point::new(0, 0)..Point::new(2, 0))],
cx,
);
multibuffer
});
// wrapped contents of multibuffer:
// aaa
// aaa
// aaa
// a
// bbb
//
// ccc
// ccc
// ccc
// c
// ddd
//
// eee
// fff
// fff
// fff
// f
let (editor, cx) = cx.add_window_view(|window, cx| build_editor(multibuffer, window, cx));
editor.update_in(cx, |editor, window, cx| {
editor.set_wrap_width(Some(30.0.into()), cx); // every 3 characters
// includes trailing newlines.
let expected_line_numbers = [2, 6, 7, 10, 14, 15, 18, 19, 23];
let expected_wrapped_line_numbers = [
2, 3, 4, 5, 6, 7, 10, 11, 12, 13, 14, 15, 18, 19, 20, 21, 22, 23,
];
editor.change_selections(SelectionEffects::no_scroll(), window, cx, |s| {
s.select_ranges([
Point::new(7, 0)..Point::new(7, 1), // second row of `ccc`
]);
});
let snapshot = editor.snapshot(window, cx);
// these are all 0-indexed
let base_display_row = DisplayRow(11);
let base_row = 3;
let wrapped_base_row = 7;
// test not counting wrapped lines
let expected_relative_numbers = expected_line_numbers
.into_iter()
.enumerate()
.map(|(i, row)| (DisplayRow(row), i.abs_diff(base_row) as u32))
.collect_vec();
let actual_relative_numbers = snapshot
.calculate_relative_line_numbers(
&(DisplayRow(0)..DisplayRow(24)),
base_display_row,
false,
)
.into_iter()
.sorted()
.collect_vec();
assert_eq!(expected_relative_numbers, actual_relative_numbers);
// check `calculate_relative_line_numbers()` against `relative_line_delta()` for each line
for (display_row, relative_number) in expected_relative_numbers {
assert_eq!(
relative_number,
snapshot
.relative_line_delta(display_row, base_display_row)
.unsigned_abs() as u32,
);
}
// test counting wrapped lines
let expected_wrapped_relative_numbers = expected_wrapped_line_numbers
.into_iter()
.enumerate()
.map(|(i, row)| (DisplayRow(row), i.abs_diff(wrapped_base_row) as u32))
.collect_vec();
let actual_relative_numbers = snapshot
.calculate_relative_line_numbers(
&(DisplayRow(0)..DisplayRow(24)),
base_display_row,
true,
)
.into_iter()
.sorted()
.collect_vec();
assert_eq!(expected_wrapped_relative_numbers, actual_relative_numbers);
// check `calculate_relative_line_numbers()` against `relative_wrapped_line_delta()` for each line
for (display_row, relative_number) in expected_wrapped_relative_numbers {
assert_eq!(
relative_number,
snapshot
.relative_wrapped_line_delta(display_row, base_display_row)
.unsigned_abs() as u32,
);
}
});
}
#[gpui::test]
async fn test_scroll_by_clicking_sticky_header(cx: &mut TestAppContext) {
init_test(cx, |_| {});
@@ -29570,524 +29407,6 @@ async fn test_find_references_single_case(cx: &mut TestAppContext) {
cx.assert_editor_state(after);
}
#[gpui::test]
async fn test_newline_task_list_continuation(cx: &mut TestAppContext) {
init_test(cx, |settings| {
settings.defaults.tab_size = Some(2.try_into().unwrap());
});
let markdown_language = languages::language("markdown", tree_sitter_md::LANGUAGE.into());
let mut cx = EditorTestContext::new(cx).await;
cx.update_buffer(|buffer, cx| buffer.set_language(Some(markdown_language), cx));
// Case 1: Adding newline after (whitespace + prefix + any non-whitespace) adds marker
cx.set_state(indoc! {"
- [ ] taskˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
- [ ] task
- [ ] ˇ
"});
// Case 2: Works with checked task items too
cx.set_state(indoc! {"
- [x] completed taskˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
- [x] completed task
- [ ] ˇ
"});
// Case 3: Cursor position doesn't matter - content after marker is what counts
cx.set_state(indoc! {"
- [ ] taˇsk
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
- [ ] ta
- [ ] ˇsk
"});
// Case 4: Adding newline after (whitespace + prefix + some whitespace) does NOT add marker
cx.set_state(indoc! {"
- [ ] ˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(
indoc! {"
- [ ]$$
ˇ
"}
.replace("$", " ")
.as_str(),
);
// Case 5: Adding newline with content adds marker preserving indentation
cx.set_state(indoc! {"
- [ ] task
- [ ] indentedˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
- [ ] task
- [ ] indented
- [ ] ˇ
"});
// Case 6: Adding newline with cursor right after prefix, unindents
cx.set_state(indoc! {"
- [ ] task
- [ ] sub task
- [ ] ˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
- [ ] task
- [ ] sub task
- [ ] ˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
// Case 7: Adding newline with cursor right after prefix, removes marker
cx.assert_editor_state(indoc! {"
- [ ] task
- [ ] sub task
- [ ] ˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
- [ ] task
- [ ] sub task
ˇ
"});
// Case 8: Cursor before or inside prefix does not add marker
cx.set_state(indoc! {"
ˇ- [ ] task
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
ˇ- [ ] task
"});
cx.set_state(indoc! {"
- [ˇ ] task
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
- [
ˇ
] task
"});
}
#[gpui::test]
async fn test_newline_unordered_list_continuation(cx: &mut TestAppContext) {
init_test(cx, |settings| {
settings.defaults.tab_size = Some(2.try_into().unwrap());
});
let markdown_language = languages::language("markdown", tree_sitter_md::LANGUAGE.into());
let mut cx = EditorTestContext::new(cx).await;
cx.update_buffer(|buffer, cx| buffer.set_language(Some(markdown_language), cx));
// Case 1: Adding newline after (whitespace + marker + any non-whitespace) adds marker
cx.set_state(indoc! {"
- itemˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
- item
- ˇ
"});
// Case 2: Works with different markers
cx.set_state(indoc! {"
* starred itemˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
* starred item
* ˇ
"});
cx.set_state(indoc! {"
+ plus itemˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
+ plus item
+ ˇ
"});
// Case 3: Cursor position doesn't matter - content after marker is what counts
cx.set_state(indoc! {"
- itˇem
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
- it
- ˇem
"});
// Case 4: Adding newline after (whitespace + marker + some whitespace) does NOT add marker
cx.set_state(indoc! {"
- ˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(
indoc! {"
- $
ˇ
"}
.replace("$", " ")
.as_str(),
);
// Case 5: Adding newline with content adds marker preserving indentation
cx.set_state(indoc! {"
- item
- indentedˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
- item
- indented
- ˇ
"});
// Case 6: Adding newline with cursor right after marker, unindents
cx.set_state(indoc! {"
- item
- sub item
- ˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
- item
- sub item
- ˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
// Case 7: Adding newline with cursor right after marker, removes marker
cx.assert_editor_state(indoc! {"
- item
- sub item
- ˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
- item
- sub item
ˇ
"});
// Case 8: Cursor before or inside prefix does not add marker
cx.set_state(indoc! {"
ˇ- item
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
ˇ- item
"});
cx.set_state(indoc! {"
-ˇ item
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
-
ˇitem
"});
}
#[gpui::test]
async fn test_newline_ordered_list_continuation(cx: &mut TestAppContext) {
init_test(cx, |settings| {
settings.defaults.tab_size = Some(2.try_into().unwrap());
});
let markdown_language = languages::language("markdown", tree_sitter_md::LANGUAGE.into());
let mut cx = EditorTestContext::new(cx).await;
cx.update_buffer(|buffer, cx| buffer.set_language(Some(markdown_language), cx));
// Case 1: Adding newline after (whitespace + marker + any non-whitespace) increments number
cx.set_state(indoc! {"
1. first itemˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
1. first item
2. ˇ
"});
// Case 2: Works with larger numbers
cx.set_state(indoc! {"
10. tenth itemˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
10. tenth item
11. ˇ
"});
// Case 3: Cursor position doesn't matter - content after marker is what counts
cx.set_state(indoc! {"
1. itˇem
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
1. it
2. ˇem
"});
// Case 4: Adding newline after (whitespace + marker + some whitespace) does NOT add marker
cx.set_state(indoc! {"
1. ˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(
indoc! {"
1. $
ˇ
"}
.replace("$", " ")
.as_str(),
);
// Case 5: Adding newline with content adds marker preserving indentation
cx.set_state(indoc! {"
1. item
2. indentedˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
1. item
2. indented
3. ˇ
"});
// Case 6: Adding newline with cursor right after marker, unindents
cx.set_state(indoc! {"
1. item
2. sub item
3. ˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
1. item
2. sub item
1. ˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
// Case 7: Adding newline with cursor right after marker, removes marker
cx.assert_editor_state(indoc! {"
1. item
2. sub item
1. ˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
1. item
2. sub item
ˇ
"});
// Case 8: Cursor before or inside prefix does not add marker
cx.set_state(indoc! {"
ˇ1. item
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
ˇ1. item
"});
cx.set_state(indoc! {"
1ˇ. item
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
1
ˇ. item
"});
}
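The marker-continuation rules exercised above (Cases 1, 2, 4, and 5) can be sketched as a small standalone helper. This is an illustrative model only, not Zed's implementation: given the previous line, it returns the marker for the continuation line, or `None` when the previous item is empty.

```rust
// Illustrative sketch: compute the next ordered-list marker from the previous
// line. Mirrors the tested behavior: "N. content" yields "N+1. " with the same
// indentation; "N. " with no content yields no marker.
fn next_list_marker(prev_line: &str) -> Option<String> {
    let trimmed = prev_line.trim_start();
    let indent = &prev_line[..prev_line.len() - trimmed.len()];
    let digits: String = trimmed
        .chars()
        .take_while(|c| c.is_ascii_digit())
        .collect();
    let rest = &trimmed[digits.len()..];
    // Require "N. " followed by actual content (Cases 1 and 4 above).
    if digits.is_empty() || !rest.starts_with(". ") {
        return None;
    }
    if rest[2..].trim().is_empty() {
        return None; // empty item: newline should not add a marker
    }
    let n: u64 = digits.parse().ok()?;
    Some(format!("{indent}{}. ", n + 1))
}
```

The indentation prefix is carried over unchanged, which is what Case 5 asserts for nested items.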
#[gpui::test]
async fn test_newline_should_not_autoindent_ordered_list(cx: &mut TestAppContext) {
init_test(cx, |settings| {
settings.defaults.tab_size = Some(2.try_into().unwrap());
});
let markdown_language = languages::language("markdown", tree_sitter_md::LANGUAGE.into());
let mut cx = EditorTestContext::new(cx).await;
cx.update_buffer(|buffer, cx| buffer.set_language(Some(markdown_language), cx));
// Case 1: Adding newline after (whitespace + marker + any non-whitespace) increments number
cx.set_state(indoc! {"
1. first item
1. sub first item
2. sub second item
3. ˇ
"});
cx.update_editor(|e, window, cx| e.newline(&Newline, window, cx));
cx.wait_for_autoindent_applied().await;
cx.assert_editor_state(indoc! {"
1. first item
1. sub first item
2. sub second item
1. ˇ
"});
}
#[gpui::test]
async fn test_tab_list_indent(cx: &mut TestAppContext) {
init_test(cx, |settings| {
settings.defaults.tab_size = Some(2.try_into().unwrap());
});
let markdown_language = languages::language("markdown", tree_sitter_md::LANGUAGE.into());
let mut cx = EditorTestContext::new(cx).await;
cx.update_buffer(|buffer, cx| buffer.set_language(Some(markdown_language), cx));
// Case 1: Unordered list - cursor after prefix, adds indent before prefix
cx.set_state(indoc! {"
- ˇitem
"});
cx.update_editor(|e, window, cx| e.tab(&Tab, window, cx));
cx.wait_for_autoindent_applied().await;
let expected = indoc! {"
$$- ˇitem
"};
cx.assert_editor_state(expected.replace("$", " ").as_str());
// Case 2: Task list - cursor after prefix
cx.set_state(indoc! {"
- [ ] ˇtask
"});
cx.update_editor(|e, window, cx| e.tab(&Tab, window, cx));
cx.wait_for_autoindent_applied().await;
let expected = indoc! {"
$$- [ ] ˇtask
"};
cx.assert_editor_state(expected.replace("$", " ").as_str());
// Case 3: Ordered list - cursor after prefix
cx.set_state(indoc! {"
1. ˇfirst
"});
cx.update_editor(|e, window, cx| e.tab(&Tab, window, cx));
cx.wait_for_autoindent_applied().await;
let expected = indoc! {"
$$1. ˇfirst
"};
cx.assert_editor_state(expected.replace("$", " ").as_str());
// Case 4: With existing indentation - adds more indent
let initial = indoc! {"
$$- ˇitem
"};
cx.set_state(initial.replace("$", " ").as_str());
cx.update_editor(|e, window, cx| e.tab(&Tab, window, cx));
cx.wait_for_autoindent_applied().await;
let expected = indoc! {"
$$$$- ˇitem
"};
cx.assert_editor_state(expected.replace("$", " ").as_str());
// Case 5: Empty list item
cx.set_state(indoc! {"
- ˇ
"});
cx.update_editor(|e, window, cx| e.tab(&Tab, window, cx));
cx.wait_for_autoindent_applied().await;
let expected = indoc! {"
$$- ˇ
"};
cx.assert_editor_state(expected.replace("$", " ").as_str());
// Case 6: Cursor at end of line with content
cx.set_state(indoc! {"
- itemˇ
"});
cx.update_editor(|e, window, cx| e.tab(&Tab, window, cx));
cx.wait_for_autoindent_applied().await;
let expected = indoc! {"
$$- itemˇ
"};
cx.assert_editor_state(expected.replace("$", " ").as_str());
// Case 7: Cursor at start of list item, indents it
cx.set_state(indoc! {"
- item
ˇ - sub item
"});
cx.update_editor(|e, window, cx| e.tab(&Tab, window, cx));
cx.wait_for_autoindent_applied().await;
let expected = indoc! {"
- item
ˇ - sub item
"};
cx.assert_editor_state(expected);
// Case 8: Cursor at start of list item, moves the cursor when "indent_list_on_tab" is false
cx.update_editor(|_, _, cx| {
SettingsStore::update_global(cx, |store, cx| {
store.update_user_settings(cx, |settings| {
settings.project.all_languages.defaults.indent_list_on_tab = Some(false);
});
});
});
cx.set_state(indoc! {"
- item
ˇ - sub item
"});
cx.update_editor(|e, window, cx| e.tab(&Tab, window, cx));
cx.wait_for_autoindent_applied().await;
let expected = indoc! {"
- item
ˇ- sub item
"};
cx.assert_editor_state(expected);
}
#[gpui::test]
async fn test_local_worktree_trust(cx: &mut TestAppContext) {
init_test(cx, |_| {});

View File

@@ -66,7 +66,7 @@ use project::{
};
use settings::{
GitGutterSetting, GitHunkStyleSetting, IndentGuideBackgroundColoring, IndentGuideColoring,
RelativeLineNumbers, Settings,
Settings,
};
use smallvec::{SmallVec, smallvec};
use std::{
@@ -194,6 +194,8 @@ pub struct EditorElement {
style: EditorStyle,
}
type DisplayRowDelta = u32;
impl EditorElement {
pub(crate) const SCROLLBAR_WIDTH: Pixels = px(15.);
@@ -3223,6 +3225,64 @@ impl EditorElement {
.collect()
}
fn calculate_relative_line_numbers(
&self,
snapshot: &EditorSnapshot,
rows: &Range<DisplayRow>,
relative_to: Option<DisplayRow>,
count_wrapped_lines: bool,
) -> HashMap<DisplayRow, DisplayRowDelta> {
let mut relative_rows: HashMap<DisplayRow, DisplayRowDelta> = Default::default();
let Some(relative_to) = relative_to else {
return relative_rows;
};
let start = rows.start.min(relative_to);
let end = rows.end.max(relative_to);
let buffer_rows = snapshot
.row_infos(start)
.take(1 + end.minus(start) as usize)
.collect::<Vec<_>>();
let head_idx = relative_to.minus(start);
let mut delta = 1;
let mut i = head_idx + 1;
let should_count_line = |row_info: &RowInfo| {
if count_wrapped_lines {
row_info.buffer_row.is_some() || row_info.wrapped_buffer_row.is_some()
} else {
row_info.buffer_row.is_some()
}
};
while i < buffer_rows.len() as u32 {
if should_count_line(&buffer_rows[i as usize]) {
if rows.contains(&DisplayRow(i + start.0)) {
relative_rows.insert(DisplayRow(i + start.0), delta);
}
delta += 1;
}
i += 1;
}
delta = 1;
i = head_idx.min(buffer_rows.len().saturating_sub(1) as u32);
while i > 0 && buffer_rows[i as usize].buffer_row.is_none() && !count_wrapped_lines {
i -= 1;
}
while i > 0 {
i -= 1;
if should_count_line(&buffer_rows[i as usize]) {
if rows.contains(&DisplayRow(i + start.0)) {
relative_rows.insert(DisplayRow(i + start.0), delta);
}
delta += 1;
}
}
relative_rows
}
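The traversal above walks outward from the cursor row in both directions, counting only rows that map to real buffer lines. Ignoring the wrapped/folded rows that the real implementation must skip, the core relationship it produces can be sketched as a plain distance map (an assumption-level model, not the editor's data structures):

```rust
// Minimal sketch of relative line numbering: every visible row other than the
// cursor row is labeled with its distance from the cursor.
use std::collections::HashMap;

fn relative_line_numbers(rows: std::ops::Range<u32>, cursor: u32) -> HashMap<u32, u32> {
    rows.filter(|&row| row != cursor)
        .map(|row| (row, row.abs_diff(cursor)))
        .collect()
}
```

For rows `0..6` with the cursor on row 3 this yields `{0: 3, 1: 2, 2: 1, 4: 1, 5: 2}`, matching the expectations in the tests further down.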
fn layout_line_numbers(
&self,
gutter_hitbox: Option<&Hitbox>,
@@ -3232,7 +3292,7 @@ impl EditorElement {
rows: Range<DisplayRow>,
buffer_rows: &[RowInfo],
active_rows: &BTreeMap<DisplayRow, LineHighlightSpec>,
relative_line_base: Option<DisplayRow>,
newest_selection_head: Option<DisplayPoint>,
snapshot: &EditorSnapshot,
window: &mut Window,
cx: &mut App,
@@ -3244,16 +3304,32 @@ impl EditorElement {
return Arc::default();
}
let relative = self.editor.read(cx).relative_line_numbers(cx);
let (newest_selection_head, relative) = self.editor.update(cx, |editor, cx| {
let newest_selection_head = newest_selection_head.unwrap_or_else(|| {
let newest = editor
.selections
.newest::<Point>(&editor.display_snapshot(cx));
SelectionLayout::new(
newest,
editor.selections.line_mode(),
editor.cursor_offset_on_selection,
editor.cursor_shape,
&snapshot.display_snapshot,
true,
true,
None,
)
.head
});
let relative = editor.relative_line_numbers(cx);
(newest_selection_head, relative)
});
let relative_line_numbers_enabled = relative.enabled();
let relative_rows = if relative_line_numbers_enabled && let Some(base) = relative_line_base
{
snapshot.calculate_relative_line_numbers(&rows, base, relative.wrapped())
} else {
Default::default()
};
let relative_to = relative_line_numbers_enabled.then(|| newest_selection_head.row());
let relative_rows =
self.calculate_relative_line_numbers(snapshot, &rows, relative_to, relative.wrapped());
let mut line_number = String::new();
let segments = buffer_rows.iter().enumerate().flat_map(|(ix, row_info)| {
let display_row = DisplayRow(rows.start.0 + ix as u32);
@@ -4576,8 +4652,6 @@ impl EditorElement {
gutter_hitbox: &Hitbox,
text_hitbox: &Hitbox,
style: &EditorStyle,
relative_line_numbers: RelativeLineNumbers,
relative_to: Option<DisplayRow>,
window: &mut Window,
cx: &mut App,
) -> Option<StickyHeaders> {
@@ -4607,21 +4681,9 @@ impl EditorElement {
);
let line_number = show_line_numbers.then(|| {
let relative_number = relative_to.and_then(|base| match relative_line_numbers {
RelativeLineNumbers::Disabled => None,
RelativeLineNumbers::Enabled => {
Some(snapshot.relative_line_delta_to_point(base, start_point))
}
RelativeLineNumbers::Wrapped => {
Some(snapshot.relative_wrapped_line_delta_to_point(base, start_point))
}
});
let number = relative_number
.filter(|&delta| delta != 0)
.map(|delta| delta.unsigned_abs() as u32)
.unwrap_or(start_point.row + 1);
let number = (start_point.row + 1).to_string();
let color = cx.theme().colors().editor_line_number;
self.shape_line_number(SharedString::from(number.to_string()), color, window)
self.shape_line_number(SharedString::from(number), color, window)
});
lines.push(StickyHeaderLine::new(
@@ -5355,12 +5417,6 @@ impl EditorElement {
.max(MIN_POPOVER_LINE_HEIGHT * line_height), // Apply minimum height of 4 lines
);
// Don't show hover popovers when context menu is open to avoid overlap
let has_context_menu = self.editor.read(cx).mouse_context_menu.is_some();
if has_context_menu {
return;
}
let hover_popovers = self.editor.update(cx, |editor, cx| {
editor.hover_state.render(
snapshot,
@@ -9374,28 +9430,6 @@ impl Element for EditorElement {
window,
cx,
);
// relative rows are based on newest selection, even outside the visible area
let relative_row_base = self.editor.update(cx, |editor, cx| {
if editor.selections.count() == 0 {
return None;
}
let newest = editor
.selections
.newest::<Point>(&editor.display_snapshot(cx));
Some(SelectionLayout::new(
newest,
editor.selections.line_mode(),
editor.cursor_offset_on_selection,
editor.cursor_shape,
&snapshot.display_snapshot,
true,
true,
None,
)
.head.row())
});
let mut breakpoint_rows = self.editor.update(cx, |editor, cx| {
editor.active_breakpoints(start_row..end_row, window, cx)
});
@@ -9413,7 +9447,7 @@ impl Element for EditorElement {
start_row..end_row,
&row_infos,
&active_rows,
relative_row_base,
newest_selection_head,
&snapshot,
window,
cx,
@@ -9733,7 +9767,6 @@ impl Element for EditorElement {
&& is_singleton
&& EditorSettings::get_global(cx).sticky_scroll.enabled
{
let relative = self.editor.read(cx).relative_line_numbers(cx);
self.layout_sticky_headers(
&snapshot,
editor_width,
@@ -9745,8 +9778,6 @@ impl Element for EditorElement {
&gutter_hitbox,
&text_hitbox,
&style,
relative,
relative_row_base,
window,
cx,
)
@@ -11594,7 +11625,7 @@ mod tests {
}
#[gpui::test]
fn test_layout_line_numbers(cx: &mut TestAppContext) {
fn test_shape_line_numbers(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let window = cx.add_window(|window, cx| {
let buffer = MultiBuffer::build_simple(&sample_text(6, 6, 'a'), cx);
@@ -11634,7 +11665,7 @@ mod tests {
})
.collect::<Vec<_>>(),
&BTreeMap::default(),
Some(DisplayRow(0)),
Some(DisplayPoint::new(DisplayRow(0), 0)),
&snapshot,
window,
cx,
@@ -11646,9 +11677,10 @@ mod tests {
let relative_rows = window
.update(cx, |editor, window, cx| {
let snapshot = editor.snapshot(window, cx);
snapshot.calculate_relative_line_numbers(
element.calculate_relative_line_numbers(
&snapshot,
&(DisplayRow(0)..DisplayRow(6)),
DisplayRow(3),
Some(DisplayRow(3)),
false,
)
})
@@ -11664,9 +11696,10 @@ mod tests {
let relative_rows = window
.update(cx, |editor, window, cx| {
let snapshot = editor.snapshot(window, cx);
snapshot.calculate_relative_line_numbers(
element.calculate_relative_line_numbers(
&snapshot,
&(DisplayRow(3)..DisplayRow(6)),
DisplayRow(1),
Some(DisplayRow(1)),
false,
)
})
@@ -11680,9 +11713,10 @@ mod tests {
let relative_rows = window
.update(cx, |editor, window, cx| {
let snapshot = editor.snapshot(window, cx);
snapshot.calculate_relative_line_numbers(
element.calculate_relative_line_numbers(
&snapshot,
&(DisplayRow(0)..DisplayRow(3)),
DisplayRow(6),
Some(DisplayRow(6)),
false,
)
})
@@ -11719,7 +11753,7 @@ mod tests {
})
.collect::<Vec<_>>(),
&BTreeMap::default(),
Some(DisplayRow(0)),
Some(DisplayPoint::new(DisplayRow(0), 0)),
&snapshot,
window,
cx,
@@ -11734,7 +11768,7 @@ mod tests {
}
#[gpui::test]
fn test_layout_line_numbers_wrapping(cx: &mut TestAppContext) {
fn test_shape_line_numbers_wrapping(cx: &mut TestAppContext) {
init_test(cx, |_| {});
let window = cx.add_window(|window, cx| {
let buffer = MultiBuffer::build_simple(&sample_text(6, 6, 'a'), cx);
@@ -11779,7 +11813,7 @@ mod tests {
})
.collect::<Vec<_>>(),
&BTreeMap::default(),
Some(DisplayRow(0)),
Some(DisplayPoint::new(DisplayRow(0), 0)),
&snapshot,
window,
cx,
@@ -11791,9 +11825,10 @@ mod tests {
let relative_rows = window
.update(cx, |editor, window, cx| {
let snapshot = editor.snapshot(window, cx);
snapshot.calculate_relative_line_numbers(
element.calculate_relative_line_numbers(
&snapshot,
&(DisplayRow(0)..DisplayRow(6)),
DisplayRow(3),
Some(DisplayRow(3)),
true,
)
})
@@ -11830,7 +11865,7 @@ mod tests {
})
.collect::<Vec<_>>(),
&BTreeMap::from_iter([(DisplayRow(0), LineHighlightSpec::default())]),
Some(DisplayRow(0)),
Some(DisplayPoint::new(DisplayRow(0), 0)),
&snapshot,
window,
cx,
@@ -11845,9 +11880,10 @@ mod tests {
let relative_rows = window
.update(cx, |editor, window, cx| {
let snapshot = editor.snapshot(window, cx);
snapshot.calculate_relative_line_numbers(
element.calculate_relative_line_numbers(
&snapshot,
&(DisplayRow(0)..DisplayRow(6)),
DisplayRow(3),
Some(DisplayRow(3)),
true,
)
})

View File

@@ -5,7 +5,7 @@ use crate::{
};
use gpui::{Bounds, Context, Pixels, Window};
use language::Point;
use multi_buffer::{Anchor, ToPoint};
use multi_buffer::Anchor;
use std::cmp;
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
@@ -186,19 +186,6 @@ impl Editor {
}
}
let style = self.style(cx).clone();
let sticky_headers = self.sticky_headers(&style, cx).unwrap_or_default();
let visible_sticky_headers = sticky_headers
.iter()
.filter(|h| {
let buffer_snapshot = display_map.buffer_snapshot();
let buffer_range =
h.range.start.to_point(buffer_snapshot)..h.range.end.to_point(buffer_snapshot);
buffer_range.contains(&Point::new(target_top as u32, 0))
})
.count();
let margin = if matches!(self.mode, EditorMode::AutoHeight { .. }) {
0.
} else {
@@ -231,7 +218,7 @@ impl Editor {
let was_autoscrolled = match strategy {
AutoscrollStrategy::Fit | AutoscrollStrategy::Newest => {
let margin = margin.min(self.scroll_manager.vertical_scroll_margin);
let target_top = (target_top - margin - visible_sticky_headers as f64).max(0.0);
let target_top = (target_top - margin).max(0.0);
let target_bottom = target_bottom + margin;
let start_row = scroll_position.y;
let end_row = start_row + visible_lines;
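After this change, the `Fit`/`Newest` autoscroll target is padded by the scroll margin alone, with the sticky-header compensation removed. The clamp can be sketched as (illustrative, detached from the editor types):

```rust
// Sketch of the autoscroll clamp above: pad the target range by the vertical
// scroll margin, never scrolling the top target above row 0.
fn fit_target_rows(target_top: f64, target_bottom: f64, margin: f64) -> (f64, f64) {
    ((target_top - margin).max(0.0), target_bottom + margin)
}
```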

View File

@@ -205,49 +205,6 @@ impl EditorLspTestContext {
(_ "{" "}" @end) @indent
(_ "(" ")" @end) @indent
"#})),
text_objects: Some(Cow::from(indoc! {r#"
(function_declaration
body: (_
"{"
(_)* @function.inside
"}")) @function.around
(method_definition
body: (_
"{"
(_)* @function.inside
"}")) @function.around
; Arrow function in variable declaration - capture the full declaration
([
(lexical_declaration
(variable_declarator
value: (arrow_function
body: (statement_block
"{"
(_)* @function.inside
"}"))))
(variable_declaration
(variable_declarator
value: (arrow_function
body: (statement_block
"{"
(_)* @function.inside
"}"))))
]) @function.around
([
(lexical_declaration
(variable_declarator
value: (arrow_function)))
(variable_declaration
(variable_declarator
value: (arrow_function)))
]) @function.around
; Catch-all for arrow functions in other contexts (callbacks, etc.)
((arrow_function) @function.around (#not-has-parent? @function.around variable_declarator))
"#})),
..Default::default()
})
.expect("Could not parse queries");
@@ -319,49 +276,6 @@ impl EditorLspTestContext {
(jsx_opening_element) @start
(jsx_closing_element)? @end) @indent
"#})),
text_objects: Some(Cow::from(indoc! {r#"
(function_declaration
body: (_
"{"
(_)* @function.inside
"}")) @function.around
(method_definition
body: (_
"{"
(_)* @function.inside
"}")) @function.around
; Arrow function in variable declaration - capture the full declaration
([
(lexical_declaration
(variable_declarator
value: (arrow_function
body: (statement_block
"{"
(_)* @function.inside
"}"))))
(variable_declaration
(variable_declarator
value: (arrow_function
body: (statement_block
"{"
(_)* @function.inside
"}"))))
]) @function.around
([
(lexical_declaration
(variable_declarator
value: (arrow_function)))
(variable_declaration
(variable_declarator
value: (arrow_function)))
]) @function.around
; Catch-all for arrow functions in other contexts (callbacks, etc.)
((arrow_function) @function.around (#not-has-parent? @function.around variable_declarator))
"#})),
..Default::default()
})
.expect("Could not parse queries");

View File

@@ -1 +1 @@
../../LICENSE-GPL
LICENSE-GPL

View File

@@ -19,9 +19,6 @@ impl Global for GlobalExtensionHostProxy {}
///
/// This object implements each of the individual proxy types so that their
/// methods can be called directly on it.
/// Registration function for language model providers.
pub type LanguageModelProviderRegistration = Box<dyn FnOnce(&mut App) + Send>;
#[derive(Default)]
pub struct ExtensionHostProxy {
theme_proxy: RwLock<Option<Arc<dyn ExtensionThemeProxy>>>,
@@ -32,7 +29,6 @@ pub struct ExtensionHostProxy {
slash_command_proxy: RwLock<Option<Arc<dyn ExtensionSlashCommandProxy>>>,
context_server_proxy: RwLock<Option<Arc<dyn ExtensionContextServerProxy>>>,
debug_adapter_provider_proxy: RwLock<Option<Arc<dyn ExtensionDebugAdapterProviderProxy>>>,
language_model_provider_proxy: RwLock<Option<Arc<dyn ExtensionLanguageModelProviderProxy>>>,
}
impl ExtensionHostProxy {
@@ -58,7 +54,6 @@ impl ExtensionHostProxy {
slash_command_proxy: RwLock::default(),
context_server_proxy: RwLock::default(),
debug_adapter_provider_proxy: RwLock::default(),
language_model_provider_proxy: RwLock::default(),
}
}
@@ -95,15 +90,6 @@ impl ExtensionHostProxy {
.write()
.replace(Arc::new(proxy));
}
pub fn register_language_model_provider_proxy(
&self,
proxy: impl ExtensionLanguageModelProviderProxy,
) {
self.language_model_provider_proxy
.write()
.replace(Arc::new(proxy));
}
}
pub trait ExtensionThemeProxy: Send + Sync + 'static {
@@ -389,49 +375,6 @@ pub trait ExtensionContextServerProxy: Send + Sync + 'static {
fn unregister_context_server(&self, server_id: Arc<str>, cx: &mut App);
}
/// A function that registers a language model provider with the registry.
/// This allows extension_host to create the provider (which requires WasmExtension)
/// and pass a registration closure to the language_models crate.
pub type LanguageModelProviderRegistration = Box<dyn FnOnce(&mut App) + Send + Sync + 'static>;
pub trait ExtensionLanguageModelProviderProxy: Send + Sync + 'static {
/// Register an LLM provider from an extension.
/// The `register_fn` closure will be called with the App context and should
/// register the provider with the LanguageModelRegistry.
fn register_language_model_provider(
&self,
provider_id: Arc<str>,
register_fn: LanguageModelProviderRegistration,
cx: &mut App,
);
/// Unregister an LLM provider when an extension is unloaded.
fn unregister_language_model_provider(&self, provider_id: Arc<str>, cx: &mut App);
}
impl ExtensionLanguageModelProviderProxy for ExtensionHostProxy {
fn register_language_model_provider(
&self,
provider_id: Arc<str>,
register_fn: LanguageModelProviderRegistration,
cx: &mut App,
) {
let Some(proxy) = self.language_model_provider_proxy.read().clone() else {
return;
};
proxy.register_language_model_provider(provider_id, register_fn, cx)
}
fn unregister_language_model_provider(&self, provider_id: Arc<str>, cx: &mut App) {
let Some(proxy) = self.language_model_provider_proxy.read().clone() else {
return;
};
proxy.unregister_language_model_provider(provider_id, cx)
}
}
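The proxy shape being removed here follows the same pattern the remaining proxies use: the host holds an optional trait object behind a lock, and forwarding methods silently no-op until a proxy is registered. A self-contained sketch of that pattern (names are illustrative, not Zed's):

```rust
// Sketch of the ExtensionHostProxy forwarding pattern: an optional trait
// object behind a RwLock, with forwarding that bails out when unset.
use std::sync::{Arc, RwLock};

trait ProviderProxy: Send + Sync {
    fn register(&self, provider_id: &str) -> bool;
}

#[derive(Default)]
struct HostProxy {
    provider_proxy: RwLock<Option<Arc<dyn ProviderProxy>>>,
}

impl HostProxy {
    fn register_provider_proxy(&self, proxy: impl ProviderProxy + 'static) {
        self.provider_proxy.write().unwrap().replace(Arc::new(proxy));
    }

    // Forwarding method: mirrors the `let Some(proxy) ... else { return }`
    // shape above, doing nothing when no proxy is installed.
    fn register_provider(&self, provider_id: &str) -> bool {
        let Some(proxy) = self.provider_proxy.read().unwrap().clone() else {
            return false;
        };
        proxy.register(provider_id)
    }
}
```

Cloning the `Arc` out of the read guard keeps the lock held only briefly, so the proxied call itself runs without holding the `RwLock`.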
impl ExtensionContextServerProxy for ExtensionHostProxy {
fn register_context_server(
&self,
@@ -503,37 +446,3 @@ impl ExtensionDebugAdapterProviderProxy for ExtensionHostProxy {
proxy.unregister_debug_locator(locator_name)
}
}
pub trait ExtensionLanguageModelProviderProxy: Send + Sync + 'static {
fn register_language_model_provider(
&self,
provider_id: Arc<str>,
register_fn: LanguageModelProviderRegistration,
cx: &mut App,
);
fn unregister_language_model_provider(&self, provider_id: Arc<str>, cx: &mut App);
}
impl ExtensionLanguageModelProviderProxy for ExtensionHostProxy {
fn register_language_model_provider(
&self,
provider_id: Arc<str>,
register_fn: LanguageModelProviderRegistration,
cx: &mut App,
) {
let Some(proxy) = self.language_model_provider_proxy.read().clone() else {
return;
};
proxy.register_language_model_provider(provider_id, register_fn, cx)
}
fn unregister_language_model_provider(&self, provider_id: Arc<str>, cx: &mut App) {
let Some(proxy) = self.language_model_provider_proxy.read().clone() else {
return;
};
proxy.unregister_language_model_provider(provider_id, cx)
}
}

View File

@@ -93,8 +93,6 @@ pub struct ExtensionManifest {
pub debug_adapters: BTreeMap<Arc<str>, DebugAdapterManifestEntry>,
#[serde(default, skip_serializing_if = "BTreeMap::is_empty")]
pub debug_locators: BTreeMap<Arc<str>, DebugLocatorManifestEntry>,
#[serde(default, skip_serializing_if = "BTreeMap::is_empty")]
pub language_model_providers: BTreeMap<Arc<str>, LanguageModelProviderManifestEntry>,
}
impl ExtensionManifest {
@@ -290,68 +288,6 @@ pub struct DebugAdapterManifestEntry {
#[derive(Clone, PartialEq, Eq, Debug, Deserialize, Serialize)]
pub struct DebugLocatorManifestEntry {}
/// Manifest entry for a language model provider.
#[derive(Clone, PartialEq, Eq, Debug, Deserialize, Serialize)]
pub struct LanguageModelProviderManifestEntry {
/// Display name for the provider.
pub name: String,
/// Path to an SVG icon file relative to the extension root (e.g., "icons/provider.svg").
#[serde(default)]
pub icon: Option<String>,
/// Hardcoded models to always show (as opposed to a model list loaded over the network).
#[serde(default)]
pub models: Vec<LanguageModelManifestEntry>,
/// Authentication configuration.
#[serde(default)]
pub auth: Option<LanguageModelAuthConfig>,
}
/// Manifest entry for a language model.
#[derive(Clone, PartialEq, Eq, Debug, Deserialize, Serialize)]
pub struct LanguageModelManifestEntry {
/// Unique identifier for the model.
pub id: String,
/// Display name for the model.
pub name: String,
/// Maximum input token count.
pub max_token_count: u64,
/// Maximum output tokens (optional).
pub max_output_tokens: Option<u64>,
/// Whether the model supports image inputs.
pub supports_images: bool,
/// Whether the model supports tool/function calling.
pub supports_tools: bool,
/// Whether the model supports extended thinking/reasoning.
pub supports_thinking: bool,
}
/// Authentication configuration for a language model provider.
#[derive(Clone, PartialEq, Eq, Debug, Deserialize, Serialize)]
pub struct LanguageModelAuthConfig {
/// Human-readable name for the credential shown in the UI input field (e.g. "API Key", "Access Token").
pub credential_label: Option<String>,
/// Environment variable names for the API key (if env var auth supported).
/// Multiple env vars can be specified; they will be checked in order.
#[serde(default)]
pub env_vars: Option<Vec<String>>,
/// OAuth configuration for web-based authentication flows.
#[serde(default)]
pub oauth: Option<OAuthConfig>,
}
/// OAuth configuration for web-based authentication.
#[derive(Clone, PartialEq, Eq, Debug, Deserialize, Serialize)]
pub struct OAuthConfig {
/// The text to display on the sign-in button (e.g. "Sign in with GitHub").
pub sign_in_button_label: Option<String>,
/// The Zed icon path to display on the sign-in button (e.g. "github").
#[serde(default)]
pub sign_in_button_icon: Option<String>,
/// The description text shown next to the sign-in button in edit prediction settings.
#[serde(default)]
pub sign_in_description: Option<String>,
}
impl ExtensionManifest {
pub async fn load(fs: Arc<dyn Fs>, extension_dir: &Path) -> Result<Self> {
let extension_name = extension_dir
@@ -422,7 +358,6 @@ fn manifest_from_old_manifest(
capabilities: Vec::new(),
debug_adapters: Default::default(),
debug_locators: Default::default(),
language_model_providers: Default::default(),
}
}
@@ -456,7 +391,6 @@ mod tests {
capabilities: vec![],
debug_adapters: Default::default(),
debug_locators: Default::default(),
language_model_providers: BTreeMap::default(),
}
}

View File

@@ -29,26 +29,6 @@ pub use wit::{
GithubRelease, GithubReleaseAsset, GithubReleaseOptions, github_release_by_tag_name,
latest_github_release,
},
zed::extension::llm_provider::{
CacheConfiguration as LlmCacheConfiguration, CompletionEvent as LlmCompletionEvent,
CompletionRequest as LlmCompletionRequest, CustomModelConfig as LlmCustomModelConfig,
DeviceFlowPromptInfo as LlmDeviceFlowPromptInfo, ImageData as LlmImageData,
MessageContent as LlmMessageContent, MessageRole as LlmMessageRole,
ModelCapabilities as LlmModelCapabilities, ModelInfo as LlmModelInfo,
OauthWebAuthConfig as LlmOauthWebAuthConfig, OauthWebAuthResult as LlmOauthWebAuthResult,
ProviderInfo as LlmProviderInfo, ProviderSettings as LlmProviderSettings,
RequestMessage as LlmRequestMessage, StopReason as LlmStopReason,
ThinkingContent as LlmThinkingContent, TokenUsage as LlmTokenUsage,
ToolChoice as LlmToolChoice, ToolDefinition as LlmToolDefinition,
ToolInputFormat as LlmToolInputFormat, ToolResult as LlmToolResult,
ToolResultContent as LlmToolResultContent, ToolUse as LlmToolUse,
ToolUseJsonParseError as LlmToolUseJsonParseError,
delete_credential as llm_delete_credential, get_credential as llm_get_credential,
get_env_var as llm_get_env_var, get_provider_settings as llm_get_provider_settings,
oauth_open_browser as llm_oauth_open_browser,
oauth_send_http_request as llm_oauth_send_http_request,
oauth_start_web_auth as llm_oauth_start_web_auth, store_credential as llm_store_credential,
},
zed::extension::nodejs::{
node_binary_path, npm_install_package, npm_package_installed_version,
npm_package_latest_version,
@@ -279,93 +259,6 @@ pub trait Extension: Send + Sync {
) -> Result<DebugRequest, String> {
Err("`run_dap_locator` not implemented".to_string())
}
/// Returns information about language model providers offered by this extension.
fn llm_providers(&self) -> Vec<LlmProviderInfo> {
Vec::new()
}
/// Returns the models available for a provider.
fn llm_provider_models(&self, _provider_id: &str) -> Result<Vec<LlmModelInfo>, String> {
Ok(Vec::new())
}
/// Returns markdown content to display in the provider's settings UI.
/// This can include setup instructions, links to documentation, etc.
fn llm_provider_settings_markdown(&self, _provider_id: &str) -> Option<String> {
None
}
/// Check if the provider is authenticated.
fn llm_provider_is_authenticated(&self, _provider_id: &str) -> bool {
false
}
/// Start an OAuth device flow sign-in.
/// This is called when the user explicitly clicks "Sign in with GitHub" or similar.
/// Returns information needed to display the device flow prompt modal to the user.
fn llm_provider_start_device_flow_sign_in(
&mut self,
_provider_id: &str,
) -> Result<LlmDeviceFlowPromptInfo, String> {
Err("`llm_provider_start_device_flow_sign_in` not implemented".to_string())
}
/// Poll for device flow sign-in completion.
/// This is called after llm_provider_start_device_flow_sign_in returns the user code.
/// The extension should poll the OAuth provider until the user authorizes or the flow times out.
fn llm_provider_poll_device_flow_sign_in(&mut self, _provider_id: &str) -> Result<(), String> {
Err("`llm_provider_poll_device_flow_sign_in` not implemented".to_string())
}
/// Reset credentials for the provider.
fn llm_provider_reset_credentials(&mut self, _provider_id: &str) -> Result<(), String> {
Err("`llm_provider_reset_credentials` not implemented".to_string())
}
/// Count tokens for a request.
fn llm_count_tokens(
&self,
_provider_id: &str,
_model_id: &str,
_request: &LlmCompletionRequest,
) -> Result<u64, String> {
Err("`llm_count_tokens` not implemented".to_string())
}
/// Start streaming a completion from the model.
/// Returns a stream ID that can be used with `llm_stream_completion_next` and `llm_stream_completion_close`.
fn llm_stream_completion_start(
&mut self,
_provider_id: &str,
_model_id: &str,
_request: &LlmCompletionRequest,
) -> Result<String, String> {
Err("`llm_stream_completion_start` not implemented".to_string())
}
/// Get the next event from a completion stream.
/// Returns `Ok(None)` when the stream is complete.
fn llm_stream_completion_next(
&mut self,
_stream_id: &str,
) -> Result<Option<LlmCompletionEvent>, String> {
Err("`llm_stream_completion_next` not implemented".to_string())
}
/// Close a completion stream and release its resources.
fn llm_stream_completion_close(&mut self, _stream_id: &str) {
// Default implementation does nothing
}
/// Get cache configuration for a model (if prompt caching is supported).
fn llm_cache_configuration(
&self,
_provider_id: &str,
_model_id: &str,
) -> Option<LlmCacheConfiguration> {
None
}
}
/// Registers the provided type as a Zed extension.
@@ -624,67 +517,6 @@ impl wit::Guest for Component {
) -> Result<DebugRequest, String> {
extension().run_dap_locator(locator_name, build_task)
}
fn llm_providers() -> Vec<LlmProviderInfo> {
extension().llm_providers()
}
fn llm_provider_models(provider_id: String) -> Result<Vec<LlmModelInfo>, String> {
extension().llm_provider_models(&provider_id)
}
fn llm_provider_settings_markdown(provider_id: String) -> Option<String> {
extension().llm_provider_settings_markdown(&provider_id)
}
fn llm_provider_is_authenticated(provider_id: String) -> bool {
extension().llm_provider_is_authenticated(&provider_id)
}
fn llm_provider_start_device_flow_sign_in(
provider_id: String,
) -> Result<LlmDeviceFlowPromptInfo, String> {
extension().llm_provider_start_device_flow_sign_in(&provider_id)
}
fn llm_provider_poll_device_flow_sign_in(provider_id: String) -> Result<(), String> {
extension().llm_provider_poll_device_flow_sign_in(&provider_id)
}
fn llm_provider_reset_credentials(provider_id: String) -> Result<(), String> {
extension().llm_provider_reset_credentials(&provider_id)
}
fn llm_count_tokens(
provider_id: String,
model_id: String,
request: LlmCompletionRequest,
) -> Result<u64, String> {
extension().llm_count_tokens(&provider_id, &model_id, &request)
}
fn llm_stream_completion_start(
provider_id: String,
model_id: String,
request: LlmCompletionRequest,
) -> Result<String, String> {
extension().llm_stream_completion_start(&provider_id, &model_id, &request)
}
fn llm_stream_completion_next(stream_id: String) -> Result<Option<LlmCompletionEvent>, String> {
extension().llm_stream_completion_next(&stream_id)
}
fn llm_stream_completion_close(stream_id: String) {
extension().llm_stream_completion_close(&stream_id)
}
fn llm_cache_configuration(
provider_id: String,
model_id: String,
) -> Option<LlmCacheConfiguration> {
extension().llm_cache_configuration(&provider_id, &model_id)
}
}
/// The ID of a language server.

View File

@@ -1,8 +1,7 @@
//! An HTTP client.
pub use crate::wit::zed::extension::http_client::{
HttpMethod, HttpRequest, HttpResponse, HttpResponseStream, HttpResponseWithStatus,
RedirectPolicy, fetch, fetch_fallible, fetch_stream,
HttpMethod, HttpRequest, HttpResponse, HttpResponseStream, RedirectPolicy, fetch, fetch_stream,
};
impl HttpRequest {
@@ -16,11 +15,6 @@ impl HttpRequest {
fetch(self)
}
/// Like [`fetch`], except it doesn't treat any status codes as errors.
pub fn fetch_fallible(&self) -> Result<HttpResponseWithStatus, String> {
fetch_fallible(self)
}
/// Executes the [`HttpRequest`] with [`fetch_stream`].
pub fn fetch_stream(&self) -> Result<HttpResponseStream, String> {
fetch_stream(self)

View File

@@ -8,7 +8,6 @@ world extension {
import platform;
import process;
import nodejs;
import llm-provider;
use common.{env-vars, range};
use context-server.{context-server-configuration};
@@ -16,11 +15,6 @@ world extension {
use lsp.{completion, symbol};
use process.{command};
use slash-command.{slash-command, slash-command-argument-completion, slash-command-output};
use llm-provider.{
provider-info, model-info, completion-request,
cache-configuration, completion-event, token-usage,
device-flow-prompt-info
};
/// Initializes the extension.
export init-extension: func();
@@ -170,73 +164,4 @@ world extension {
export dap-config-to-scenario: func(config: debug-config) -> result<debug-scenario, string>;
export dap-locator-create-scenario: func(locator-name: string, build-config-template: build-task-template, resolved-label: string, debug-adapter-name: string) -> option<debug-scenario>;
export run-dap-locator: func(locator-name: string, config: resolved-task) -> result<debug-request, string>;
/// Returns information about language model providers offered by this extension.
export llm-providers: func() -> list<provider-info>;
/// Returns the models available for a provider.
export llm-provider-models: func(provider-id: string) -> result<list<model-info>, string>;
/// Returns markdown content to display in the provider's settings UI.
/// This can include setup instructions, links to documentation, etc.
export llm-provider-settings-markdown: func(provider-id: string) -> option<string>;
/// Check if the provider is authenticated.
export llm-provider-is-authenticated: func(provider-id: string) -> bool;
/// Start an OAuth device flow sign-in.
/// This is called when the user explicitly clicks "Sign in with GitHub" or similar.
///
/// The device flow works as follows:
/// 1. Extension requests a device code from the OAuth provider
/// 2. Extension returns prompt info including user code and verification URL
/// 3. Host displays a modal with the prompt info
/// 4. Host calls llm-provider-poll-device-flow-sign-in
/// 5. Extension polls for the access token while user authorizes in browser
/// 6. Once authorized, extension stores the credential and returns success
///
/// Returns information needed to display the device flow prompt modal.
export llm-provider-start-device-flow-sign-in: func(provider-id: string) -> result<device-flow-prompt-info, string>;
/// Poll for device flow sign-in completion.
/// This is called after llm-provider-start-device-flow-sign-in returns the user code.
/// The extension should poll the OAuth provider until the user authorizes or the flow times out.
/// Returns Ok(()) on successful authentication, or an error message on failure.
export llm-provider-poll-device-flow-sign-in: func(provider-id: string) -> result<_, string>;
/// Reset credentials for the provider.
export llm-provider-reset-credentials: func(provider-id: string) -> result<_, string>;
/// Count tokens for a request.
export llm-count-tokens: func(
provider-id: string,
model-id: string,
request: completion-request
) -> result<u64, string>;
    /// Start streaming a completion from the model.
    /// Returns a stream ID that can be used with llm-stream-completion-next and llm-stream-completion-close.
export llm-stream-completion-start: func(
provider-id: string,
model-id: string,
request: completion-request
) -> result<string, string>;
/// Get the next event from a completion stream.
/// Returns None when the stream is complete.
export llm-stream-completion-next: func(
stream-id: string
) -> result<option<completion-event>, string>;
/// Close a completion stream and release its resources.
export llm-stream-completion-close: func(
stream-id: string
);
/// Get cache configuration for a model (if prompt caching is supported).
export llm-cache-configuration: func(
provider-id: string,
model-id: string
) -> option<cache-configuration>;
}
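The streaming exports removed above (`llm-stream-completion-start` / `-next` / `-close`) form a pull-based iterator: the host holds an opaque stream ID and polls until `next` yields nothing. A minimal in-memory sketch of that lifecycle, with hypothetical names (`StreamRegistry`, a simplified `Event` standing in for `completion-event`) that are not part of the actual API:

```rust
use std::collections::{HashMap, VecDeque};

// Simplified stand-in for the `completion-event` variant.
#[derive(Debug, Clone, PartialEq)]
enum Event {
    Started,
    Text(String),
    Stop,
}

#[derive(Default)]
struct StreamRegistry {
    next_id: u64,
    streams: HashMap<String, VecDeque<Event>>,
}

impl StreamRegistry {
    // Mirrors `llm-stream-completion-start`: returns an opaque stream ID.
    fn start(&mut self, events: Vec<Event>) -> String {
        let id = format!("stream-{}", self.next_id);
        self.next_id += 1;
        self.streams.insert(id.clone(), events.into());
        id
    }

    // Mirrors `llm-stream-completion-next`: `None` signals the stream is done.
    fn next(&mut self, id: &str) -> Option<Event> {
        self.streams.get_mut(id)?.pop_front()
    }

    // Mirrors `llm-stream-completion-close`: releases the stream's resources.
    fn close(&mut self, id: &str) {
        self.streams.remove(id);
    }
}

fn main() {
    let mut registry = StreamRegistry::default();
    let id = registry.start(vec![
        Event::Started,
        Event::Text("hello".into()),
        Event::Stop,
    ]);
    while let Some(event) = registry.next(&id) {
        println!("{:?}", event);
    }
    registry.close(&id);
}
```

In the real interface the extension would produce events lazily from an HTTP response stream rather than a pre-filled queue; the queue here only illustrates the ID-based handle and the `Option`-terminated polling contract.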

View File

@@ -51,26 +51,9 @@ interface http-client {
body: list<u8>,
}
/// An HTTP response that includes the status code.
///
/// Used by `fetch-fallible` which returns responses for all status codes
/// rather than treating some status codes as errors.
record http-response-with-status {
/// The HTTP status code.
status: u16,
/// The response headers.
headers: list<tuple<string, string>>,
/// The response body.
body: list<u8>,
}
/// Performs an HTTP request and returns the response.
/// Returns an error if the response status is 4xx or 5xx.
fetch: func(req: http-request) -> result<http-response, string>;
/// Performs an HTTP request and returns the response regardless of its status code.
fetch-fallible: func(req: http-request) -> result<http-response-with-status, string>;
/// An HTTP response stream.
resource http-response-stream {
/// Retrieves the next chunk of data from the response stream.

View File

@@ -1,362 +0,0 @@
interface llm-provider {
use http-client.{http-request, http-response-with-status};
/// Information about a language model provider.
record provider-info {
/// Unique identifier for the provider (e.g. "my-extension.my-provider").
id: string,
/// Display name for the provider.
name: string,
/// Path to an SVG icon file relative to the extension root (e.g. "icons/provider.svg").
icon: option<string>,
}
/// Capabilities of a language model.
record model-capabilities {
/// Whether the model supports image inputs.
supports-images: bool,
/// Whether the model supports tool/function calling.
supports-tools: bool,
/// Whether the model supports the "auto" tool choice.
supports-tool-choice-auto: bool,
/// Whether the model supports the "any" tool choice.
supports-tool-choice-any: bool,
/// Whether the model supports the "none" tool choice.
supports-tool-choice-none: bool,
/// Whether the model supports extended thinking/reasoning.
supports-thinking: bool,
/// The format for tool input schemas.
tool-input-format: tool-input-format,
}
/// Format for tool input schemas.
enum tool-input-format {
/// Standard JSON Schema format.
json-schema,
/// A subset of JSON Schema supported by Google AI.
/// See https://ai.google.dev/api/caching#Schema
json-schema-subset,
/// Simplified schema format for certain providers.
simplified,
}
/// Information about a specific model.
record model-info {
/// Unique identifier for the model.
id: string,
/// Display name for the model.
name: string,
/// Maximum input token count.
max-token-count: u64,
/// Maximum output tokens (optional).
max-output-tokens: option<u64>,
/// Model capabilities.
capabilities: model-capabilities,
/// Whether this is the default model for the provider.
is-default: bool,
/// Whether this is the default fast model.
is-default-fast: bool,
}
/// The role of a message participant.
enum message-role {
/// User message.
user,
/// Assistant message.
assistant,
/// System message.
system,
}
/// A message in a completion request.
record request-message {
/// The role of the message sender.
role: message-role,
/// The content of the message.
content: list<message-content>,
/// Whether to cache this message for prompt caching.
cache: bool,
}
/// Content within a message.
variant message-content {
/// Plain text content.
text(string),
/// Image content.
image(image-data),
/// A tool use request from the assistant.
tool-use(tool-use),
/// A tool result from the user.
tool-result(tool-result),
/// Thinking/reasoning content.
thinking(thinking-content),
/// Redacted/encrypted thinking content.
redacted-thinking(string),
}
/// Image data for vision models.
record image-data {
/// Base64-encoded image data.
source: string,
/// Image width in pixels (optional).
width: option<u32>,
/// Image height in pixels (optional).
height: option<u32>,
}
/// A tool use request from the model.
record tool-use {
/// Unique identifier for this tool use.
id: string,
/// The name of the tool being used.
name: string,
/// JSON string of the tool input arguments.
input: string,
/// Whether the input JSON is complete (false while streaming, true when done).
is-input-complete: bool,
/// Thought signature for providers that support it (e.g., Anthropic).
thought-signature: option<string>,
}
/// A tool result to send back to the model.
record tool-result {
/// The ID of the tool use this is a result for.
tool-use-id: string,
/// The name of the tool.
tool-name: string,
/// Whether this result represents an error.
is-error: bool,
/// The content of the result.
content: tool-result-content,
}
/// Content of a tool result.
variant tool-result-content {
/// Text result.
text(string),
/// Image result.
image(image-data),
}
/// Thinking/reasoning content from models that support extended thinking.
record thinking-content {
/// The thinking text.
text: string,
/// Signature for the thinking block (provider-specific).
signature: option<string>,
}
/// A tool definition for function calling.
record tool-definition {
/// The name of the tool.
name: string,
/// Description of what the tool does.
description: string,
/// JSON Schema for input parameters.
input-schema: string,
}
/// Tool choice preference for the model.
enum tool-choice {
/// Let the model decide whether to use tools.
auto,
/// Force the model to use at least one tool.
any,
/// Prevent the model from using tools.
none,
}
/// A completion request to send to the model.
record completion-request {
/// The messages in the conversation.
messages: list<request-message>,
/// Available tools for the model to use.
tools: list<tool-definition>,
/// Tool choice preference.
tool-choice: option<tool-choice>,
/// Stop sequences to end generation.
stop-sequences: list<string>,
/// Temperature for sampling (0.0-1.0).
temperature: option<f32>,
/// Whether thinking/reasoning is allowed.
thinking-allowed: bool,
/// Maximum tokens to generate.
max-tokens: option<u64>,
}
/// Events emitted during completion streaming.
variant completion-event {
/// Completion has started.
started,
/// Text content chunk.
text(string),
/// Thinking/reasoning content chunk.
thinking(thinking-content),
/// Redacted thinking (encrypted) chunk.
redacted-thinking(string),
/// Tool use request from the model.
tool-use(tool-use),
/// JSON parse error when parsing tool input.
tool-use-json-parse-error(tool-use-json-parse-error),
/// Completion stopped.
stop(stop-reason),
/// Token usage update.
usage(token-usage),
/// Reasoning details (provider-specific JSON).
reasoning-details(string),
}
/// Error information when tool use JSON parsing fails.
record tool-use-json-parse-error {
/// The tool use ID.
id: string,
/// The tool name.
tool-name: string,
/// The raw input that failed to parse.
raw-input: string,
/// The parse error message.
error: string,
}
/// Reason the completion stopped.
enum stop-reason {
/// The model finished generating.
end-turn,
/// Maximum tokens reached.
max-tokens,
/// The model wants to use a tool.
tool-use,
/// The model refused to respond.
refusal,
}
/// Token usage statistics.
record token-usage {
/// Number of input tokens used.
input-tokens: u64,
/// Number of output tokens generated.
output-tokens: u64,
/// Tokens used for cache creation (if supported).
cache-creation-input-tokens: option<u64>,
/// Tokens read from cache (if supported).
cache-read-input-tokens: option<u64>,
}
/// Cache configuration for prompt caching.
record cache-configuration {
/// Maximum number of cache anchors.
max-cache-anchors: u32,
/// Whether caching should be applied to tool definitions.
should-cache-tool-definitions: bool,
/// Minimum token count for a message to be cached.
min-total-token-count: u64,
}
/// Configuration for starting an OAuth web authentication flow.
record oauth-web-auth-config {
/// The URL to open in the user's browser to start authentication.
/// This should include client_id, redirect_uri, scope, state, etc.
/// Use `{port}` as a placeholder in the URL - it will be replaced with
/// the actual localhost port before opening the browser.
/// Example: "https://example.com/oauth?redirect_uri=http://127.0.0.1:{port}/callback"
auth-url: string,
/// The path to listen on for the OAuth callback (e.g., "/callback").
/// A localhost server will be started to receive the redirect.
callback-path: string,
/// Timeout in seconds to wait for the callback (default: 300 = 5 minutes).
timeout-secs: option<u32>,
}
/// Result of an OAuth web authentication flow.
record oauth-web-auth-result {
/// The full callback URL that was received, including query parameters.
/// The extension is responsible for parsing the code, state, etc.
callback-url: string,
/// The port that was used for the localhost callback server.
port: u32,
}
/// Get a stored credential for this provider.
get-credential: func(provider-id: string) -> option<string>;
/// Store a credential for this provider.
store-credential: func(provider-id: string, value: string) -> result<_, string>;
/// Delete a stored credential for this provider.
delete-credential: func(provider-id: string) -> result<_, string>;
/// Read an environment variable.
get-env-var: func(name: string) -> option<string>;
/// Start an OAuth web authentication flow.
///
/// This will:
/// 1. Start a localhost server to receive the OAuth callback
/// 2. Open the auth URL in the user's default browser
/// 3. Wait for the callback (up to the timeout)
/// 4. Return the callback URL with query parameters
///
/// The extension is responsible for:
/// - Constructing the auth URL with client_id, redirect_uri, scope, state, etc.
/// - Parsing the callback URL to extract the authorization code
/// - Exchanging the code for tokens using fetch-fallible from http-client
oauth-start-web-auth: func(config: oauth-web-auth-config) -> result<oauth-web-auth-result, string>;
/// Make an HTTP request for OAuth token exchange.
///
/// This is a convenience wrapper around http-client's fetch-fallible for OAuth flows.
/// Unlike the standard fetch, this does not treat non-2xx responses as errors,
/// allowing proper handling of OAuth error responses.
oauth-send-http-request: func(request: http-request) -> result<http-response-with-status, string>;
/// Open a URL in the user's default browser.
///
/// Useful for OAuth flows that need to open a browser but handle the
/// callback differently (e.g., polling-based flows).
oauth-open-browser: func(url: string) -> result<_, string>;
/// Provider settings from user configuration.
/// Extensions can use this to allow custom API URLs, custom models, etc.
record provider-settings {
/// Custom API URL override (if configured by the user).
api-url: option<string>,
/// Custom models configured by the user.
available-models: list<custom-model-config>,
}
/// Configuration for a custom model defined by the user.
record custom-model-config {
/// The model's API identifier.
name: string,
/// Display name for the UI.
display-name: option<string>,
/// Maximum input token count.
max-tokens: u64,
/// Maximum output tokens (optional).
max-output-tokens: option<u64>,
/// Thinking budget for models that support extended thinking (None = auto).
thinking-budget: option<u32>,
}
/// Get provider-specific settings configured by the user.
/// Returns settings like custom API URLs and custom model configurations.
get-provider-settings: func(provider-id: string) -> option<provider-settings>;
/// Information needed to display the device flow prompt modal to the user.
record device-flow-prompt-info {
/// The user code to display (e.g., "ABC-123").
user-code: string,
/// The URL the user needs to visit to authorize (for the "Connect" button).
verification-url: string,
/// The headline text for the modal (e.g., "Use GitHub Copilot in Zed.").
headline: string,
/// A description to show below the headline (e.g., "Using Copilot requires an active subscription on GitHub.").
description: string,
/// Label for the connect button (e.g., "Connect to GitHub").
connect-button-label: string,
/// Success headline shown when authorization completes.
success-headline: string,
/// Success message shown when authorization completes.
success-message: string,
}
}
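Per the `oauth-web-auth-config` and `oauth-web-auth-result` docs above, the extension substitutes the `{port}` placeholder into the auth URL and later parses `code`/`state` out of the callback URL itself. A minimal sketch of those two responsibilities, assuming no external crates (a real extension might reach for the `url` crate instead of the hand-rolled query parser):

```rust
// Replace the `{port}` placeholder with the localhost callback port,
// as described for `oauth-web-auth-config.auth-url`.
fn substitute_port(auth_url: &str, port: u32) -> String {
    auth_url.replace("{port}", &port.to_string())
}

// Minimal query-string lookup for parsing `oauth-web-auth-result.callback-url`.
fn query_param(callback_url: &str, name: &str) -> Option<String> {
    let query = callback_url.split_once('?')?.1;
    query.split('&').find_map(|pair| {
        let (key, value) = pair.split_once('=')?;
        (key == name).then(|| value.to_string())
    })
}

fn main() {
    let url = substitute_port(
        "https://example.com/oauth?redirect_uri=http://127.0.0.1:{port}/callback",
        49152,
    );
    println!("{url}");

    let callback = "http://127.0.0.1:49152/callback?code=abc123&state=xyz";
    println!("{:?}", query_param(callback, "code"));
    println!("{:?}", query_param(callback, "state"));
}
```

Note this sketch skips percent-decoding and state validation, both of which a production flow would need before exchanging the code for tokens.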

View File

@@ -255,21 +255,6 @@ async fn copy_extension_resources(
}
}
for (_, provider_entry) in &manifest.language_model_providers {
if let Some(icon_path) = &provider_entry.icon {
let source_icon = extension_path.join(icon_path);
let dest_icon = output_dir.join(icon_path);
// Create parent directory if needed
if let Some(parent) = dest_icon.parent() {
fs::create_dir_all(parent)?;
}
fs::copy(&source_icon, &dest_icon)
.with_context(|| format!("failed to copy LLM provider icon '{}'", icon_path))?;
}
}
if !manifest.languages.is_empty() {
let output_languages_dir = output_dir.join("languages");
fs::create_dir_all(&output_languages_dir)?;

View File

@@ -22,9 +22,7 @@ async-tar.workspace = true
async-trait.workspace = true
client.workspace = true
collections.workspace = true
credentials_provider.workspace = true
dap.workspace = true
dirs.workspace = true
extension.workspace = true
fs.workspace = true
futures.workspace = true
@@ -32,11 +30,8 @@ gpui.workspace = true
gpui_tokio.workspace = true
http_client.workspace = true
language.workspace = true
language_model.workspace = true
log.workspace = true
markdown.workspace = true
lsp.workspace = true
menu.workspace = true
moka.workspace = true
node_runtime.workspace = true
paths.workspace = true
@@ -48,16 +43,11 @@ serde.workspace = true
serde_json.workspace = true
serde_json_lenient.workspace = true
settings.workspace = true
smol.workspace = true
task.workspace = true
telemetry.workspace = true
tempfile.workspace = true
theme.workspace = true
toml.workspace = true
ui.workspace = true
ui_input.workspace = true
url.workspace = true
workspace.workspace = true
util.workspace = true
wasmparser.workspace = true
wasmtime-wasi.workspace = true

View File

@@ -148,7 +148,6 @@ fn manifest() -> ExtensionManifest {
)],
debug_adapters: Default::default(),
debug_locators: Default::default(),
language_model_providers: BTreeMap::default(),
}
}

View File

@@ -1,124 +0,0 @@
use credentials_provider::CredentialsProvider;
use gpui::App;
const ANTHROPIC_EXTENSION_ID: &str = "anthropic";
const ANTHROPIC_PROVIDER_ID: &str = "anthropic";
const ANTHROPIC_DEFAULT_API_URL: &str = "https://api.anthropic.com";
/// Migrates Anthropic API credentials from the old built-in provider location
/// to the new extension-based location.
///
/// This should only be called during auto-install of the extension.
pub fn migrate_anthropic_credentials_if_needed(extension_id: &str, cx: &mut App) {
if extension_id != ANTHROPIC_EXTENSION_ID {
return;
}
let extension_credential_key = format!(
"extension-llm-{}:{}",
ANTHROPIC_EXTENSION_ID, ANTHROPIC_PROVIDER_ID
);
let credentials_provider = <dyn CredentialsProvider>::global(cx);
cx.spawn(async move |cx| {
// Read from old location
let old_credential = credentials_provider
.read_credentials(ANTHROPIC_DEFAULT_API_URL, &cx)
.await
.ok()
.flatten();
let api_key = match old_credential {
Some((_, key_bytes)) => match String::from_utf8(key_bytes) {
Ok(key) if !key.is_empty() => key,
Ok(_) => {
log::debug!("Existing Anthropic API key is empty, nothing to migrate");
return;
}
Err(_) => {
log::error!("Failed to decode Anthropic API key as UTF-8");
return;
}
},
None => {
log::debug!("No existing Anthropic API key found to migrate");
return;
}
};
log::info!("Migrating existing Anthropic API key to Anthropic extension");
match credentials_provider
.write_credentials(&extension_credential_key, "Bearer", api_key.as_bytes(), &cx)
.await
{
Ok(()) => {
log::info!("Successfully migrated Anthropic API key to extension");
}
Err(err) => {
log::error!("Failed to migrate Anthropic API key: {}", err);
}
}
})
.detach();
}
#[cfg(test)]
mod tests {
use super::*;
use gpui::TestAppContext;
#[gpui::test]
async fn test_migrates_credentials_from_old_location(cx: &mut TestAppContext) {
let api_key = "sk-ant-test-key-12345";
cx.write_credentials(ANTHROPIC_DEFAULT_API_URL, "Bearer", api_key.as_bytes());
cx.update(|cx| {
migrate_anthropic_credentials_if_needed(ANTHROPIC_EXTENSION_ID, cx);
});
cx.run_until_parked();
let migrated = cx.read_credentials("extension-llm-anthropic:anthropic");
assert!(migrated.is_some(), "Credentials should have been migrated");
let (username, password) = migrated.unwrap();
assert_eq!(username, "Bearer");
assert_eq!(String::from_utf8(password).unwrap(), api_key);
}
#[gpui::test]
async fn test_no_migration_if_no_old_credentials(cx: &mut TestAppContext) {
cx.update(|cx| {
migrate_anthropic_credentials_if_needed(ANTHROPIC_EXTENSION_ID, cx);
});
cx.run_until_parked();
let credentials = cx.read_credentials("extension-llm-anthropic:anthropic");
assert!(
credentials.is_none(),
"Should not create credentials if none existed"
);
}
#[gpui::test]
async fn test_skips_migration_for_other_extensions(cx: &mut TestAppContext) {
let api_key = "sk-ant-test-key";
cx.write_credentials(ANTHROPIC_DEFAULT_API_URL, "Bearer", api_key.as_bytes());
cx.update(|cx| {
migrate_anthropic_credentials_if_needed("some-other-extension", cx);
});
cx.run_until_parked();
let credentials = cx.read_credentials("extension-llm-anthropic:anthropic");
assert!(
credentials.is_none(),
"Should not migrate for other extensions"
);
}
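The migration above rekeys credentials from the provider's API URL (the old built-in location) to an `extension-llm-{extension}:{provider}` key. A tiny sketch of that key scheme, matching the format string used in the migration:

```rust
// Builds the extension-based credential key used as the migration target.
fn extension_credential_key(extension_id: &str, provider_id: &str) -> String {
    format!("extension-llm-{}:{}", extension_id, provider_id)
}

fn main() {
    // Old location: the provider's API URL, e.g. "https://api.anthropic.com".
    // New location:
    println!("{}", extension_credential_key("anthropic", "anthropic"));
}
```

The same scheme appears in the Copilot migration below ("extension-llm-copilot-chat:copilot-chat"), so each extension/provider pair gets a distinct keychain entry.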
}

View File

@@ -113,7 +113,6 @@ mod tests {
capabilities: vec![],
debug_adapters: Default::default(),
debug_locators: Default::default(),
language_model_providers: BTreeMap::default(),
}
}

View File

@@ -1,216 +0,0 @@
use credentials_provider::CredentialsProvider;
use gpui::App;
use std::path::PathBuf;
const COPILOT_CHAT_EXTENSION_ID: &str = "copilot-chat";
const COPILOT_CHAT_PROVIDER_ID: &str = "copilot-chat";
/// Migrates Copilot OAuth credentials from the GitHub Copilot config files
/// to the new extension-based credential location.
///
/// This should only be called during auto-install of the extension.
pub fn migrate_copilot_credentials_if_needed(extension_id: &str, cx: &mut App) {
if extension_id != COPILOT_CHAT_EXTENSION_ID {
return;
}
let credential_key = format!(
"extension-llm-{}:{}",
COPILOT_CHAT_EXTENSION_ID, COPILOT_CHAT_PROVIDER_ID
);
let credentials_provider = <dyn CredentialsProvider>::global(cx);
cx.spawn(async move |_cx| {
// Read from copilot config files
let oauth_token = match read_copilot_oauth_token().await {
Some(token) if !token.is_empty() => token,
_ => {
log::debug!("No existing Copilot OAuth token found to migrate");
return;
}
};
log::info!("Migrating existing Copilot OAuth token to Copilot Chat extension");
match credentials_provider
.write_credentials(&credential_key, "api_key", oauth_token.as_bytes(), &_cx)
.await
{
Ok(()) => {
log::info!("Successfully migrated Copilot OAuth token to Copilot Chat extension");
}
Err(err) => {
log::error!("Failed to migrate Copilot OAuth token: {}", err);
}
}
})
.detach();
}
async fn read_copilot_oauth_token() -> Option<String> {
let config_paths = copilot_config_paths();
for path in config_paths {
if let Some(token) = read_oauth_token_from_file(&path).await {
return Some(token);
}
}
None
}
fn copilot_config_paths() -> Vec<PathBuf> {
let config_dir = if cfg!(target_os = "windows") {
dirs::data_local_dir()
} else {
std::env::var("XDG_CONFIG_HOME")
.map(PathBuf::from)
.ok()
.or_else(|| dirs::home_dir().map(|h| h.join(".config")))
};
let Some(config_dir) = config_dir else {
return Vec::new();
};
let copilot_dir = config_dir.join("github-copilot");
vec![
copilot_dir.join("hosts.json"),
copilot_dir.join("apps.json"),
]
}
async fn read_oauth_token_from_file(path: &PathBuf) -> Option<String> {
let contents = match smol::fs::read_to_string(path).await {
Ok(contents) => contents,
Err(_) => return None,
};
extract_oauth_token(&contents, "github.com")
}
fn extract_oauth_token(contents: &str, domain: &str) -> Option<String> {
let value: serde_json::Value = serde_json::from_str(contents).ok()?;
let obj = value.as_object()?;
for (key, value) in obj.iter() {
if key.starts_with(domain) {
if let Some(token) = value.get("oauth_token").and_then(|v| v.as_str()) {
return Some(token.to_string());
}
}
}
None
}
#[cfg(test)]
mod tests {
use super::*;
use gpui::TestAppContext;
#[test]
fn test_extract_oauth_token_from_hosts_json() {
let contents = r#"{
"github.com": {
"oauth_token": "ghu_test_token_12345"
}
}"#;
let token = extract_oauth_token(contents, "github.com");
assert_eq!(token, Some("ghu_test_token_12345".to_string()));
}
#[test]
fn test_extract_oauth_token_with_user_suffix() {
let contents = r#"{
"github.com:user": {
"oauth_token": "ghu_another_token"
}
}"#;
let token = extract_oauth_token(contents, "github.com");
assert_eq!(token, Some("ghu_another_token".to_string()));
}
#[test]
fn test_extract_oauth_token_wrong_domain() {
let contents = r#"{
"gitlab.com": {
"oauth_token": "some_token"
}
}"#;
let token = extract_oauth_token(contents, "github.com");
assert_eq!(token, None);
}
#[test]
fn test_extract_oauth_token_invalid_json() {
let contents = "not valid json";
let token = extract_oauth_token(contents, "github.com");
assert_eq!(token, None);
}
#[test]
fn test_extract_oauth_token_missing_oauth_token_field() {
let contents = r#"{
"github.com": {
"user": "testuser"
}
}"#;
let token = extract_oauth_token(contents, "github.com");
assert_eq!(token, None);
}
#[test]
fn test_extract_oauth_token_multiple_entries_picks_first_match() {
let contents = r#"{
"gitlab.com": {
"oauth_token": "gitlab_token"
},
"github.com": {
"oauth_token": "github_token"
}
}"#;
let token = extract_oauth_token(contents, "github.com");
assert_eq!(token, Some("github_token".to_string()));
}
#[gpui::test]
async fn test_skips_migration_for_other_extensions(cx: &mut TestAppContext) {
cx.update(|cx| {
migrate_copilot_credentials_if_needed("some-other-extension", cx);
});
cx.run_until_parked();
let credentials = cx.read_credentials("extension-llm-copilot-chat:copilot-chat");
assert!(
credentials.is_none(),
"Should not create credentials for other extensions"
);
}
// Note: Unlike the other migrations, copilot migration reads from the filesystem
// (copilot config files), not from the credentials provider. In tests, these files
// don't exist, so no migration occurs.
#[gpui::test]
async fn test_no_credentials_when_no_copilot_config_exists(cx: &mut TestAppContext) {
cx.update(|cx| {
migrate_copilot_credentials_if_needed(COPILOT_CHAT_EXTENSION_ID, cx);
});
cx.run_until_parked();
let credentials = cx.read_credentials("extension-llm-copilot-chat:copilot-chat");
assert!(
credentials.is_none(),
"No credentials should be written when copilot config doesn't exist"
);
}
}

View File

@@ -1,11 +1,6 @@
mod anthropic_migration;
mod capability_granter;
mod copilot_migration;
pub mod extension_settings;
mod google_ai_migration;
pub mod headless_host;
mod open_router_migration;
mod openai_migration;
pub mod wasm_host;
#[cfg(test)]
@@ -17,14 +12,13 @@ use async_tar::Archive;
use client::ExtensionProvides;
use client::{Client, ExtensionMetadata, GetExtensionsResponse, proto, telemetry::Telemetry};
use collections::{BTreeMap, BTreeSet, HashSet, btree_map};
pub use extension::ExtensionManifest;
use extension::extension_builder::{CompileExtensionOptions, ExtensionBuilder};
use extension::{
ExtensionContextServerProxy, ExtensionDebugAdapterProviderProxy, ExtensionEvents,
ExtensionGrammarProxy, ExtensionHostProxy, ExtensionLanguageModelProviderProxy,
ExtensionLanguageProxy, ExtensionLanguageServerProxy, ExtensionSlashCommandProxy,
ExtensionSnippetProxy, ExtensionThemeProxy,
ExtensionGrammarProxy, ExtensionHostProxy, ExtensionLanguageProxy,
ExtensionLanguageServerProxy, ExtensionSlashCommandProxy, ExtensionSnippetProxy,
ExtensionThemeProxy,
};
use fs::{Fs, RemoveOptions};
use futures::future::join_all;
@@ -38,8 +32,8 @@ use futures::{
select_biased,
};
use gpui::{
App, AppContext as _, AsyncApp, Context, Entity, EventEmitter, Global, SharedString, Task,
WeakEntity, actions,
App, AppContext as _, AsyncApp, Context, Entity, EventEmitter, Global, Task, WeakEntity,
actions,
};
use http_client::{AsyncBody, HttpClient, HttpClientWithUrl};
use language::{
@@ -59,28 +53,15 @@ use std::{
cmp::Ordering,
path::{self, Path, PathBuf},
sync::Arc,
time::Duration,
time::{Duration, Instant},
};
use url::Url;
use util::{ResultExt, paths::RemotePathBuf};
use wasm_host::llm_provider::ExtensionLanguageModelProvider;
use wasm_host::{
WasmExtension, WasmHost,
wit::{
LlmCacheConfiguration, LlmModelInfo, LlmProviderInfo, is_supported_wasm_api_version,
wasm_api_version_range,
},
wit::{is_supported_wasm_api_version, wasm_api_version_range},
};
struct LlmProviderWithModels {
provider_info: LlmProviderInfo,
models: Vec<LlmModelInfo>,
cache_configs: collections::HashMap<String, LlmCacheConfiguration>,
is_authenticated: bool,
icon_path: Option<SharedString>,
auth_config: Option<extension::LanguageModelAuthConfig>,
}
pub use extension::{
ExtensionLibraryKind, GrammarManifestEntry, OldExtensionManifest, SchemaVersion,
};
@@ -89,82 +70,6 @@ pub use extension_settings::ExtensionSettings;
pub const RELOAD_DEBOUNCE_DURATION: Duration = Duration::from_millis(200);
const FS_WATCH_LATENCY: Duration = Duration::from_millis(100);
/// Extension IDs that are being migrated from hardcoded LLM providers.
/// For backwards compatibility, if the user has the corresponding env var set,
/// we automatically enable env var reading for these extensions on first install.
pub const LEGACY_LLM_EXTENSION_IDS: &[&str] = &[
"anthropic",
"copilot-chat",
"google-ai",
"openrouter",
"openai",
];
/// Migrates legacy LLM provider extensions by auto-enabling env var reading
/// if the env var is currently present in the environment.
///
/// This is idempotent: if the env var is already in `allowed_env_vars`,
/// we skip. This means if a user explicitly removes it, it will be re-added on
/// next launch if the env var is still set - but that's predictable behavior.
fn migrate_legacy_llm_provider_env_var(manifest: &ExtensionManifest, cx: &mut App) {
// Only apply migration to known legacy LLM extensions
if !LEGACY_LLM_EXTENSION_IDS.contains(&manifest.id.as_ref()) {
return;
}
// Check each provider in the manifest
for (provider_id, provider_entry) in &manifest.language_model_providers {
let Some(auth_config) = &provider_entry.auth else {
continue;
};
let Some(env_vars) = &auth_config.env_vars else {
continue;
};
let full_provider_id = format!("{}:{}", manifest.id, provider_id);
// For each env var, check if it's set and enable it if so
for env_var_name in env_vars {
let env_var_is_set = std::env::var(env_var_name)
.map(|v| !v.is_empty())
.unwrap_or(false);
if !env_var_is_set {
continue;
}
let settings_key: Arc<str> = format!("{}:{}", full_provider_id, env_var_name).into();
// Check if already enabled in settings
let already_enabled = ExtensionSettings::get_global(cx)
.allowed_env_var_providers
.contains(settings_key.as_ref());
if already_enabled {
continue;
}
// Enable env var reading since the env var is set
settings::update_settings_file(<dyn fs::Fs>::global(cx), cx, {
let settings_key = settings_key.clone();
move |settings, _| {
let allowed = settings
.extension
.allowed_env_var_providers
.get_or_insert_with(Vec::new);
if !allowed
.iter()
.any(|id| id.as_ref() == settings_key.as_ref())
{
allowed.push(settings_key);
}
}
});
}
}
}
/// The current extension [`SchemaVersion`] supported by Zed.
const CURRENT_SCHEMA_VERSION: SchemaVersion = SchemaVersion(1);
@@ -226,8 +131,6 @@ pub struct ExtensionStore {
pub enum ExtensionOperation {
Upgrade,
Install,
/// Auto-install from settings - triggers legacy LLM provider migrations
AutoInstall,
Remove,
}
@@ -710,60 +613,8 @@ impl ExtensionStore {
cx.spawn(async move |this, cx| {
for extension_id in extensions_to_install {
// When enabled, this checks if an extension exists locally in the repo's extensions/
// directory and installs it as a dev extension instead of fetching from the registry.
// This is useful for testing auto-installed extensions before they've been published.
// Set to `true` only during local development/testing of new auto-install extensions.
#[cfg(debug_assertions)]
const DEBUG_ALLOW_UNPUBLISHED_AUTO_EXTENSIONS: bool = false;
#[cfg(debug_assertions)]
if DEBUG_ALLOW_UNPUBLISHED_AUTO_EXTENSIONS {
let local_extension_path = std::path::PathBuf::from(env!("CARGO_MANIFEST_DIR"))
.parent()
.unwrap()
.parent()
.unwrap()
.join("extensions")
.join(extension_id.as_ref());
if local_extension_path.exists() {
// Force-remove existing extension directory if it exists and isn't a symlink
// This handles the case where the extension was previously installed from the registry
if let Some(installed_dir) = this
.update(cx, |this, _cx| this.installed_dir.clone())
.ok()
{
let existing_path = installed_dir.join(extension_id.as_ref());
if existing_path.exists() {
let metadata = std::fs::symlink_metadata(&existing_path);
let is_symlink = metadata.map(|m| m.is_symlink()).unwrap_or(false);
if !is_symlink {
if let Err(e) = std::fs::remove_dir_all(&existing_path) {
log::error!(
"Failed to remove existing extension directory {:?}: {}",
existing_path,
e
);
}
}
}
}
if let Some(task) = this
.update(cx, |this, cx| {
this.install_dev_extension(local_extension_path, cx)
})
.ok()
{
task.await.log_err();
}
continue;
}
}
this.update(cx, |this, cx| {
this.auto_install_latest_extension(extension_id.clone(), cx);
this.install_latest_extension(extension_id.clone(), cx);
})
.ok();
}
@@ -918,10 +769,7 @@ impl ExtensionStore {
this.update(cx, |this, cx| this.reload(Some(extension_id.clone()), cx))?
.await;
if matches!(
operation,
ExtensionOperation::Install | ExtensionOperation::AutoInstall
) {
if let ExtensionOperation::Install = operation {
this.update(cx, |this, cx| {
cx.emit(Event::ExtensionInstalled(extension_id.clone()));
if let Some(events) = ExtensionEvents::try_global(cx)
@@ -931,27 +779,6 @@ impl ExtensionStore {
this.emit(extension::Event::ExtensionInstalled(manifest.clone()), cx)
});
}
// Run legacy LLM provider migrations only for auto-installed extensions
if matches!(operation, ExtensionOperation::AutoInstall) {
if let Some(manifest) = this.extension_manifest_for_id(&extension_id) {
migrate_legacy_llm_provider_env_var(&manifest, cx);
}
copilot_migration::migrate_copilot_credentials_if_needed(&extension_id, cx);
anthropic_migration::migrate_anthropic_credentials_if_needed(
&extension_id,
cx,
);
google_ai_migration::migrate_google_ai_credentials_if_needed(
&extension_id,
cx,
);
openai_migration::migrate_openai_credentials_if_needed(&extension_id, cx);
open_router_migration::migrate_open_router_credentials_if_needed(
&extension_id,
cx,
);
}
})
.ok();
}
@@ -961,24 +788,8 @@ impl ExtensionStore {
}
pub fn install_latest_extension(&mut self, extension_id: Arc<str>, cx: &mut Context<Self>) {
self.install_latest_extension_with_operation(extension_id, ExtensionOperation::Install, cx);
}
log::info!("installing extension {extension_id} latest version");
/// Auto-install an extension, triggering legacy LLM provider migrations.
fn auto_install_latest_extension(&mut self, extension_id: Arc<str>, cx: &mut Context<Self>) {
self.install_latest_extension_with_operation(
extension_id,
ExtensionOperation::AutoInstall,
cx,
);
}
fn install_latest_extension_with_operation(
&mut self,
extension_id: Arc<str>,
operation: ExtensionOperation,
cx: &mut Context<Self>,
) {
let schema_versions = schema_version_range();
let wasm_api_versions = wasm_api_version_range(ReleaseChannel::global(cx));
@@ -1001,8 +812,13 @@ impl ExtensionStore {
return;
};
self.install_or_upgrade_extension_at_endpoint(extension_id, url, operation, cx)
.detach_and_log_err(cx);
self.install_or_upgrade_extension_at_endpoint(
extension_id,
url,
ExtensionOperation::Install,
cx,
)
.detach_and_log_err(cx);
}
pub fn upgrade_extension(
@@ -1021,6 +837,7 @@ impl ExtensionStore {
operation: ExtensionOperation,
cx: &mut Context<Self>,
) -> Task<Result<()>> {
log::info!("installing extension {extension_id} {version}");
let Some(url) = self
.http_client
.build_zed_api_url(
@@ -1196,37 +1013,9 @@ impl ExtensionStore {
}
}
fs.create_symlink(output_path, extension_source_path.clone())
fs.create_symlink(output_path, extension_source_path)
.await?;
// Re-load manifest and run migrations before reload so settings are updated before providers are registered
let manifest_for_migration =
ExtensionManifest::load(fs.clone(), &extension_source_path).await?;
this.update(cx, |_this, cx| {
migrate_legacy_llm_provider_env_var(&manifest_for_migration, cx);
// Also run credential migrations for dev extensions
copilot_migration::migrate_copilot_credentials_if_needed(
manifest_for_migration.id.as_ref(),
cx,
);
anthropic_migration::migrate_anthropic_credentials_if_needed(
manifest_for_migration.id.as_ref(),
cx,
);
google_ai_migration::migrate_google_ai_credentials_if_needed(
manifest_for_migration.id.as_ref(),
cx,
);
openai_migration::migrate_openai_credentials_if_needed(
manifest_for_migration.id.as_ref(),
cx,
);
open_router_migration::migrate_open_router_credentials_if_needed(
manifest_for_migration.id.as_ref(),
cx,
);
})?;
this.update(cx, |this, cx| this.reload(None, cx))?.await;
this.update(cx, |this, cx| {
cx.emit(Event::ExtensionInstalled(extension_id.clone()));
@@ -1345,6 +1134,18 @@ impl ExtensionStore {
return Task::ready(());
}
let reload_count = extensions_to_unload
.iter()
.filter(|id| extensions_to_load.contains(id))
.count();
log::info!(
"extensions updated. loading {}, reloading {}, unloading {}",
extensions_to_load.len() - reload_count,
reload_count,
extensions_to_unload.len() - reload_count
);
let extension_ids = extensions_to_load
.iter()
.filter_map(|id| {
@@ -1419,11 +1220,6 @@ impl ExtensionStore {
for command_name in extension.manifest.slash_commands.keys() {
self.proxy.unregister_slash_command(command_name.clone());
}
for provider_id in extension.manifest.language_model_providers.keys() {
let full_provider_id: Arc<str> = format!("{}:{}", extension_id, provider_id).into();
self.proxy
.unregister_language_model_provider(full_provider_id, cx);
}
}
self.wasm_extensions
@@ -1562,11 +1358,7 @@ impl ExtensionStore {
})
.await;
let mut wasm_extensions: Vec<(
Arc<ExtensionManifest>,
WasmExtension,
Vec<LlmProviderWithModels>,
)> = Vec::new();
let mut wasm_extensions = Vec::new();
for extension in extension_entries {
if extension.manifest.lib.kind.is_none() {
continue;
@@ -1584,149 +1376,7 @@ impl ExtensionStore {
match wasm_extension {
Ok(wasm_extension) => {
// Query for LLM providers if the manifest declares any
let mut llm_providers_with_models = Vec::new();
if !extension.manifest.language_model_providers.is_empty() {
let providers_result = wasm_extension
.call(|ext, store| {
async move { ext.call_llm_providers(store).await }.boxed()
})
.await;
if let Ok(Ok(providers)) = providers_result {
for provider_info in providers {
let models_result = wasm_extension
.call({
let provider_id = provider_info.id.clone();
|ext, store| {
async move {
ext.call_llm_provider_models(store, &provider_id)
.await
}
.boxed()
}
})
.await;
let models: Vec<LlmModelInfo> = match models_result {
Ok(Ok(Ok(models))) => models,
Ok(Ok(Err(e))) => {
log::error!(
"Failed to get models for LLM provider {} in extension {}: {}",
provider_info.id,
extension.manifest.id,
e
);
Vec::new()
}
Ok(Err(e)) => {
log::error!(
"Wasm error calling llm_provider_models for {} in extension {}: {:?}",
provider_info.id,
extension.manifest.id,
e
);
Vec::new()
}
Err(e) => {
log::error!(
"Extension call failed for llm_provider_models {} in extension {}: {:?}",
provider_info.id,
extension.manifest.id,
e
);
Vec::new()
}
};
// Query cache configurations for each model
let mut cache_configs = collections::HashMap::default();
for model in &models {
let cache_config_result = wasm_extension
.call({
let provider_id = provider_info.id.clone();
let model_id = model.id.clone();
|ext, store| {
async move {
ext.call_llm_cache_configuration(
store,
&provider_id,
&model_id,
)
.await
}
.boxed()
}
})
.await;
if let Ok(Ok(Some(config))) = cache_config_result {
cache_configs.insert(model.id.clone(), config);
}
}
// Query initial authentication state
let is_authenticated = wasm_extension
.call({
let provider_id = provider_info.id.clone();
|ext, store| {
async move {
ext.call_llm_provider_is_authenticated(
store,
&provider_id,
)
.await
}
.boxed()
}
})
.await
.unwrap_or(Ok(false))
.unwrap_or(false);
// Resolve icon path if provided
let icon_path = provider_info.icon.as_ref().map(|icon| {
let icon_file_path = extension_path.join(icon);
// Canonicalize to resolve symlinks (dev extensions are symlinked)
let absolute_icon_path = icon_file_path
.canonicalize()
.unwrap_or(icon_file_path)
.to_string_lossy()
.to_string();
SharedString::from(absolute_icon_path)
});
let provider_id_arc: Arc<str> =
provider_info.id.as_str().into();
let auth_config = extension
.manifest
.language_model_providers
.get(&provider_id_arc)
.and_then(|entry| entry.auth.clone());
llm_providers_with_models.push(LlmProviderWithModels {
provider_info,
models,
cache_configs,
is_authenticated,
icon_path,
auth_config,
});
}
} else {
log::error!(
"Failed to get LLM providers from extension {}: {:?}",
extension.manifest.id,
providers_result
);
}
}
wasm_extensions.push((
extension.manifest.clone(),
wasm_extension,
llm_providers_with_models,
))
wasm_extensions.push((extension.manifest.clone(), wasm_extension))
}
Err(e) => {
log::error!(
@@ -1745,7 +1395,7 @@ impl ExtensionStore {
this.update(cx, |this, cx| {
this.reload_complete_senders.clear();
for (manifest, wasm_extension, llm_providers_with_models) in &wasm_extensions {
for (manifest, wasm_extension) in &wasm_extensions {
let extension = Arc::new(wasm_extension.clone());
for (language_server_id, language_server_config) in &manifest.language_servers {
@@ -1799,42 +1449,9 @@ impl ExtensionStore {
this.proxy
.register_debug_locator(extension.clone(), debug_adapter.clone());
}
// Register LLM providers
for llm_provider in llm_providers_with_models {
let provider_id: Arc<str> =
format!("{}:{}", manifest.id, llm_provider.provider_info.id).into();
let wasm_ext = extension.as_ref().clone();
let pinfo = llm_provider.provider_info.clone();
let mods = llm_provider.models.clone();
let cache_cfgs = llm_provider.cache_configs.clone();
let auth = llm_provider.is_authenticated;
let icon = llm_provider.icon_path.clone();
let auth_config = llm_provider.auth_config.clone();
this.proxy.register_language_model_provider(
provider_id.clone(),
Box::new(move |cx: &mut App| {
let provider = Arc::new(ExtensionLanguageModelProvider::new(
wasm_ext, pinfo, mods, cache_cfgs, auth, icon, auth_config, cx,
));
language_model::LanguageModelRegistry::global(cx).update(
cx,
|registry, cx| {
registry.register_provider(provider, cx);
},
);
}),
cx,
);
}
}
let wasm_extensions_without_llm: Vec<_> = wasm_extensions
.into_iter()
.map(|(manifest, ext, _)| (manifest, ext))
.collect();
this.wasm_extensions.extend(wasm_extensions_without_llm);
this.wasm_extensions.extend(wasm_extensions);
this.proxy.set_extensions_loaded();
this.proxy.reload_current_theme(cx);
this.proxy.reload_current_icon_theme(cx);
@@ -1856,6 +1473,7 @@ impl ExtensionStore {
let index_path = self.index_path.clone();
let proxy = self.proxy.clone();
cx.background_spawn(async move {
let start_time = Instant::now();
let mut index = ExtensionIndex::default();
fs.create_dir(&work_dir).await.log_err();
@@ -1893,6 +1511,7 @@ impl ExtensionStore {
.log_err();
}
log::info!("rebuilt extension index in {:?}", start_time.elapsed());
index
})
}
@@ -2166,6 +1785,11 @@ impl ExtensionStore {
})?,
path_style,
);
log::info!(
"Uploading extension {} to {:?}",
missing_extension.clone().id,
dest_dir
);
client
.update(cx, |client, cx| {
@@ -2173,6 +1797,11 @@ impl ExtensionStore {
})?
.await?;
log::info!(
"Finished uploading extension {}",
missing_extension.clone().id
);
let result = client
.update(cx, |client, _cx| {
client.proto_client().request(proto::InstallExtension {


@@ -1,4 +1,4 @@
use collections::{HashMap, HashSet};
use collections::HashMap;
use extension::{
DownloadFileCapability, ExtensionCapability, NpmInstallPackageCapability, ProcessExecCapability,
};
@@ -16,10 +16,6 @@ pub struct ExtensionSettings {
pub auto_install_extensions: HashMap<Arc<str>, bool>,
pub auto_update_extensions: HashMap<Arc<str>, bool>,
pub granted_capabilities: Vec<ExtensionCapability>,
/// The extension language model providers that are allowed to read API keys
/// from environment variables. Each entry is in the format
/// "extension_id:provider_id:ENV_VAR_NAME".
pub allowed_env_var_providers: HashSet<Arc<str>>,
}
impl ExtensionSettings {
@@ -64,13 +60,6 @@ impl Settings for ExtensionSettings {
}
})
.collect(),
allowed_env_var_providers: content
.extension
.allowed_env_var_providers
.clone()
.unwrap_or_default()
.into_iter()
.collect(),
}
}
}


@@ -165,7 +165,6 @@ async fn test_extension_store(cx: &mut TestAppContext) {
capabilities: Vec::new(),
debug_adapters: Default::default(),
debug_locators: Default::default(),
language_model_providers: BTreeMap::default(),
}),
dev: false,
},
@@ -197,7 +196,6 @@ async fn test_extension_store(cx: &mut TestAppContext) {
capabilities: Vec::new(),
debug_adapters: Default::default(),
debug_locators: Default::default(),
language_model_providers: BTreeMap::default(),
}),
dev: false,
},
@@ -378,7 +376,6 @@ async fn test_extension_store(cx: &mut TestAppContext) {
capabilities: Vec::new(),
debug_adapters: Default::default(),
debug_locators: Default::default(),
language_model_providers: BTreeMap::default(),
}),
dev: false,
},


@@ -1,124 +0,0 @@
use credentials_provider::CredentialsProvider;
use gpui::App;
const GOOGLE_AI_EXTENSION_ID: &str = "google-ai";
const GOOGLE_AI_PROVIDER_ID: &str = "google-ai";
const GOOGLE_AI_DEFAULT_API_URL: &str = "https://generativelanguage.googleapis.com";
/// Migrates Google AI API credentials from the old built-in provider location
/// to the new extension-based location.
///
/// This should only be called during auto-install of the extension.
pub fn migrate_google_ai_credentials_if_needed(extension_id: &str, cx: &mut App) {
if extension_id != GOOGLE_AI_EXTENSION_ID {
return;
}
let extension_credential_key = format!(
"extension-llm-{}:{}",
GOOGLE_AI_EXTENSION_ID, GOOGLE_AI_PROVIDER_ID
);
let credentials_provider = <dyn CredentialsProvider>::global(cx);
cx.spawn(async move |cx| {
// Read from old location
let old_credential = credentials_provider
.read_credentials(GOOGLE_AI_DEFAULT_API_URL, &cx)
.await
.ok()
.flatten();
let api_key = match old_credential {
Some((_, key_bytes)) => match String::from_utf8(key_bytes) {
Ok(key) if !key.is_empty() => key,
Ok(_) => {
log::debug!("Existing Google AI API key is empty, nothing to migrate");
return;
}
Err(_) => {
log::error!("Failed to decode Google AI API key as UTF-8");
return;
}
},
None => {
log::debug!("No existing Google AI API key found to migrate");
return;
}
};
log::info!("Migrating existing Google AI API key to Google AI extension");
match credentials_provider
.write_credentials(&extension_credential_key, "Bearer", api_key.as_bytes(), &cx)
.await
{
Ok(()) => {
log::info!("Successfully migrated Google AI API key to extension");
}
Err(err) => {
log::error!("Failed to migrate Google AI API key: {}", err);
}
}
})
.detach();
}
#[cfg(test)]
mod tests {
use super::*;
use gpui::TestAppContext;
#[gpui::test]
async fn test_migrates_credentials_from_old_location(cx: &mut TestAppContext) {
let api_key = "AIzaSy-test-key-12345";
cx.write_credentials(GOOGLE_AI_DEFAULT_API_URL, "Bearer", api_key.as_bytes());
cx.update(|cx| {
migrate_google_ai_credentials_if_needed(GOOGLE_AI_EXTENSION_ID, cx);
});
cx.run_until_parked();
let migrated = cx.read_credentials("extension-llm-google-ai:google-ai");
assert!(migrated.is_some(), "Credentials should have been migrated");
let (username, password) = migrated.unwrap();
assert_eq!(username, "Bearer");
assert_eq!(String::from_utf8(password).unwrap(), api_key);
}
#[gpui::test]
async fn test_no_migration_if_no_old_credentials(cx: &mut TestAppContext) {
cx.update(|cx| {
migrate_google_ai_credentials_if_needed(GOOGLE_AI_EXTENSION_ID, cx);
});
cx.run_until_parked();
let credentials = cx.read_credentials("extension-llm-google-ai:google-ai");
assert!(
credentials.is_none(),
"Should not create credentials if none existed"
);
}
#[gpui::test]
async fn test_skips_migration_for_other_extensions(cx: &mut TestAppContext) {
let api_key = "AIzaSy-test-key";
cx.write_credentials(GOOGLE_AI_DEFAULT_API_URL, "Bearer", api_key.as_bytes());
cx.update(|cx| {
migrate_google_ai_credentials_if_needed("some-other-extension", cx);
});
cx.run_until_parked();
let credentials = cx.read_credentials("extension-llm-google-ai:google-ai");
assert!(
credentials.is_none(),
"Should not migrate for other extensions"
);
}
}


@@ -1,124 +0,0 @@
use credentials_provider::CredentialsProvider;
use gpui::App;
const OPEN_ROUTER_EXTENSION_ID: &str = "openrouter";
const OPEN_ROUTER_PROVIDER_ID: &str = "openrouter";
const OPEN_ROUTER_DEFAULT_API_URL: &str = "https://openrouter.ai/api/v1";
/// Migrates OpenRouter API credentials from the old built-in provider location
/// to the new extension-based location.
///
/// This should only be called during auto-install of the extension.
pub fn migrate_open_router_credentials_if_needed(extension_id: &str, cx: &mut App) {
if extension_id != OPEN_ROUTER_EXTENSION_ID {
return;
}
let extension_credential_key = format!(
"extension-llm-{}:{}",
OPEN_ROUTER_EXTENSION_ID, OPEN_ROUTER_PROVIDER_ID
);
let credentials_provider = <dyn CredentialsProvider>::global(cx);
cx.spawn(async move |cx| {
// Read from old location
let old_credential = credentials_provider
.read_credentials(OPEN_ROUTER_DEFAULT_API_URL, &cx)
.await
.ok()
.flatten();
let api_key = match old_credential {
Some((_, key_bytes)) => match String::from_utf8(key_bytes) {
Ok(key) if !key.is_empty() => key,
Ok(_) => {
log::debug!("Existing OpenRouter API key is empty, nothing to migrate");
return;
}
Err(_) => {
log::error!("Failed to decode OpenRouter API key as UTF-8");
return;
}
},
None => {
log::debug!("No existing OpenRouter API key found to migrate");
return;
}
};
log::info!("Migrating existing OpenRouter API key to OpenRouter extension");
match credentials_provider
.write_credentials(&extension_credential_key, "Bearer", api_key.as_bytes(), &cx)
.await
{
Ok(()) => {
log::info!("Successfully migrated OpenRouter API key to extension");
}
Err(err) => {
log::error!("Failed to migrate OpenRouter API key: {}", err);
}
}
})
.detach();
}
#[cfg(test)]
mod tests {
use super::*;
use gpui::TestAppContext;
#[gpui::test]
async fn test_migrates_credentials_from_old_location(cx: &mut TestAppContext) {
let api_key = "sk-or-test-key-12345";
cx.write_credentials(OPEN_ROUTER_DEFAULT_API_URL, "Bearer", api_key.as_bytes());
cx.update(|cx| {
migrate_open_router_credentials_if_needed(OPEN_ROUTER_EXTENSION_ID, cx);
});
cx.run_until_parked();
let migrated = cx.read_credentials("extension-llm-openrouter:openrouter");
assert!(migrated.is_some(), "Credentials should have been migrated");
let (username, password) = migrated.unwrap();
assert_eq!(username, "Bearer");
assert_eq!(String::from_utf8(password).unwrap(), api_key);
}
#[gpui::test]
async fn test_no_migration_if_no_old_credentials(cx: &mut TestAppContext) {
cx.update(|cx| {
migrate_open_router_credentials_if_needed(OPEN_ROUTER_EXTENSION_ID, cx);
});
cx.run_until_parked();
let credentials = cx.read_credentials("extension-llm-openrouter:openrouter");
assert!(
credentials.is_none(),
"Should not create credentials if none existed"
);
}
#[gpui::test]
async fn test_skips_migration_for_other_extensions(cx: &mut TestAppContext) {
let api_key = "sk-or-test-key";
cx.write_credentials(OPEN_ROUTER_DEFAULT_API_URL, "Bearer", api_key.as_bytes());
cx.update(|cx| {
migrate_open_router_credentials_if_needed("some-other-extension", cx);
});
cx.run_until_parked();
let credentials = cx.read_credentials("extension-llm-openrouter:openrouter");
assert!(
credentials.is_none(),
"Should not migrate for other extensions"
);
}
}


@@ -1,124 +0,0 @@
use credentials_provider::CredentialsProvider;
use gpui::App;
const OPENAI_EXTENSION_ID: &str = "openai";
const OPENAI_PROVIDER_ID: &str = "openai";
const OPENAI_DEFAULT_API_URL: &str = "https://api.openai.com/v1";
/// Migrates OpenAI API credentials from the old built-in provider location
/// to the new extension-based location.
///
/// This should only be called during auto-install of the extension.
pub fn migrate_openai_credentials_if_needed(extension_id: &str, cx: &mut App) {
if extension_id != OPENAI_EXTENSION_ID {
return;
}
let extension_credential_key = format!(
"extension-llm-{}:{}",
OPENAI_EXTENSION_ID, OPENAI_PROVIDER_ID
);
let credentials_provider = <dyn CredentialsProvider>::global(cx);
cx.spawn(async move |cx| {
// Read from old location
let old_credential = credentials_provider
.read_credentials(OPENAI_DEFAULT_API_URL, &cx)
.await
.ok()
.flatten();
let api_key = match old_credential {
Some((_, key_bytes)) => match String::from_utf8(key_bytes) {
Ok(key) if !key.is_empty() => key,
Ok(_) => {
log::debug!("Existing OpenAI API key is empty, nothing to migrate");
return;
}
Err(_) => {
log::error!("Failed to decode OpenAI API key as UTF-8");
return;
}
},
None => {
log::debug!("No existing OpenAI API key found to migrate");
return;
}
};
log::info!("Migrating existing OpenAI API key to OpenAI extension");
match credentials_provider
.write_credentials(&extension_credential_key, "Bearer", api_key.as_bytes(), &cx)
.await
{
Ok(()) => {
log::info!("Successfully migrated OpenAI API key to extension");
}
Err(err) => {
log::error!("Failed to migrate OpenAI API key: {}", err);
}
}
})
.detach();
}
#[cfg(test)]
mod tests {
use super::*;
use gpui::TestAppContext;
#[gpui::test]
async fn test_migrates_credentials_from_old_location(cx: &mut TestAppContext) {
let api_key = "sk-test-key-12345";
cx.write_credentials(OPENAI_DEFAULT_API_URL, "Bearer", api_key.as_bytes());
cx.update(|cx| {
migrate_openai_credentials_if_needed(OPENAI_EXTENSION_ID, cx);
});
cx.run_until_parked();
let migrated = cx.read_credentials("extension-llm-openai:openai");
assert!(migrated.is_some(), "Credentials should have been migrated");
let (username, password) = migrated.unwrap();
assert_eq!(username, "Bearer");
assert_eq!(String::from_utf8(password).unwrap(), api_key);
}
#[gpui::test]
async fn test_no_migration_if_no_old_credentials(cx: &mut TestAppContext) {
cx.update(|cx| {
migrate_openai_credentials_if_needed(OPENAI_EXTENSION_ID, cx);
});
cx.run_until_parked();
let credentials = cx.read_credentials("extension-llm-openai:openai");
assert!(
credentials.is_none(),
"Should not create credentials if none existed"
);
}
#[gpui::test]
async fn test_skips_migration_for_other_extensions(cx: &mut TestAppContext) {
let api_key = "sk-test-key";
cx.write_credentials(OPENAI_DEFAULT_API_URL, "Bearer", api_key.as_bytes());
cx.update(|cx| {
migrate_openai_credentials_if_needed("some-other-extension", cx);
});
cx.run_until_parked();
let credentials = cx.read_credentials("extension-llm-openai:openai");
assert!(
credentials.is_none(),
"Should not migrate for other extensions"
);
}
}


@@ -1,11 +1,9 @@
pub mod llm_provider;
pub mod wit;
use crate::capability_granter::CapabilityGranter;
use crate::{ExtensionManifest, ExtensionSettings};
use anyhow::{Context as _, Result, anyhow, bail};
use async_trait::async_trait;
use dap::{DebugRequest, StartDebuggingRequestArgumentsRequest};
use extension::{
CodeLabel, Command, Completion, ContextServerConfiguration, DebugAdapterBinary,
@@ -66,7 +64,7 @@ pub struct WasmHost {
#[derive(Clone, Debug)]
pub struct WasmExtension {
tx: Arc<UnboundedSender<ExtensionCall>>,
tx: UnboundedSender<ExtensionCall>,
pub manifest: Arc<ExtensionManifest>,
pub work_dir: Arc<Path>,
#[allow(unused)]
@@ -76,10 +74,7 @@ pub struct WasmExtension {
impl Drop for WasmExtension {
fn drop(&mut self) {
// Only close the channel when this is the last clone holding the sender
if Arc::strong_count(&self.tx) == 1 {
self.tx.close_channel();
}
self.tx.close_channel();
}
}
@@ -676,7 +671,7 @@ impl WasmHost {
Ok(WasmExtension {
manifest,
work_dir,
tx: Arc::new(tx),
tx,
zed_api_version,
_task: task,
})

File diff suppressed because it is too large


@@ -1,12 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<title>Authentication Complete</title>
</head>
<body style="font-family: system-ui, sans-serif; display: flex; justify-content: center; align-items: center; height: 100vh; margin: 0;">
<div style="text-align: center;">
<h1>Authentication Complete</h1>
<p>You can close this window and return to Zed.</p>
</div>
</body>
</html>


@@ -16,7 +16,7 @@ use lsp::LanguageServerName;
use release_channel::ReleaseChannel;
use task::{DebugScenario, SpawnInTerminal, TaskTemplate, ZedDebugConfig};
use crate::wasm_host::wit::since_v0_8_0::dap::StartDebuggingRequestArgumentsRequest;
use crate::wasm_host::wit::since_v0_6_0::dap::StartDebuggingRequestArgumentsRequest;
use super::{WasmState, wasm_engine};
use anyhow::{Context as _, Result, anyhow};
@@ -33,19 +33,6 @@ pub use latest::CodeLabelSpanLiteral;
pub use latest::{
CodeLabel, CodeLabelSpan, Command, DebugAdapterBinary, ExtensionProject, Range, SlashCommand,
zed::extension::context_server::ContextServerConfiguration,
zed::extension::llm_provider::{
CacheConfiguration as LlmCacheConfiguration, CompletionEvent as LlmCompletionEvent,
CompletionRequest as LlmCompletionRequest, DeviceFlowPromptInfo as LlmDeviceFlowPromptInfo,
ImageData as LlmImageData, MessageContent as LlmMessageContent,
MessageRole as LlmMessageRole, ModelCapabilities as LlmModelCapabilities,
ModelInfo as LlmModelInfo, ProviderInfo as LlmProviderInfo,
RequestMessage as LlmRequestMessage, StopReason as LlmStopReason,
ThinkingContent as LlmThinkingContent, TokenUsage as LlmTokenUsage,
ToolChoice as LlmToolChoice, ToolDefinition as LlmToolDefinition,
ToolInputFormat as LlmToolInputFormat, ToolResult as LlmToolResult,
ToolResultContent as LlmToolResultContent, ToolUse as LlmToolUse,
ToolUseJsonParseError as LlmToolUseJsonParseError,
},
zed::extension::lsp::{
Completion, CompletionKind, CompletionLabelDetails, InsertTextFormat, Symbol, SymbolKind,
},
@@ -1020,20 +1007,6 @@ impl Extension {
resource: Resource<Arc<dyn WorktreeDelegate>>,
) -> Result<Result<DebugAdapterBinary, String>> {
match self {
Extension::V0_8_0(ext) => {
let dap_binary = ext
.call_get_dap_binary(
store,
&adapter_name,
&task.try_into()?,
user_installed_path.as_ref().and_then(|p| p.to_str()),
resource,
)
.await?
.map_err(|e| anyhow!("{e:?}"))?;
Ok(Ok(dap_binary))
}
Extension::V0_6_0(ext) => {
let dap_binary = ext
.call_get_dap_binary(
@@ -1059,16 +1032,6 @@ impl Extension {
config: serde_json::Value,
) -> Result<Result<StartDebuggingRequestArgumentsRequest, String>> {
match self {
Extension::V0_8_0(ext) => {
let config =
serde_json::to_string(&config).context("Adapter config is not a valid JSON")?;
let result = ext
.call_dap_request_kind(store, &adapter_name, &config)
.await?
.map_err(|e| anyhow!("{e:?}"))?;
Ok(Ok(result))
}
Extension::V0_6_0(ext) => {
let config =
serde_json::to_string(&config).context("Adapter config is not a valid JSON")?;
@@ -1089,15 +1052,6 @@ impl Extension {
config: ZedDebugConfig,
) -> Result<Result<DebugScenario, String>> {
match self {
Extension::V0_8_0(ext) => {
let config = config.into();
let result = ext
.call_dap_config_to_scenario(store, &config)
.await?
.map_err(|e| anyhow!("{e:?}"))?;
Ok(Ok(result.try_into()?))
}
Extension::V0_6_0(ext) => {
let config = config.into();
let dap_binary = ext
@@ -1120,20 +1074,6 @@ impl Extension {
debug_adapter_name: String,
) -> Result<Option<DebugScenario>> {
match self {
Extension::V0_8_0(ext) => {
let build_config_template = build_config_template.into();
let result = ext
.call_dap_locator_create_scenario(
store,
&locator_name,
&build_config_template,
&resolved_label,
&debug_adapter_name,
)
.await?;
Ok(result.map(TryInto::try_into).transpose()?)
}
Extension::V0_6_0(ext) => {
let build_config_template = build_config_template.into();
let dap_binary = ext
@@ -1159,15 +1099,6 @@ impl Extension {
resolved_build_task: SpawnInTerminal,
) -> Result<Result<DebugRequest, String>> {
match self {
Extension::V0_8_0(ext) => {
let build_config_template = resolved_build_task.try_into()?;
let dap_request = ext
.call_run_dap_locator(store, &locator_name, &build_config_template)
.await?
.map_err(|e| anyhow!("{e:?}"))?;
Ok(Ok(dap_request.into()))
}
Extension::V0_6_0(ext) => {
let build_config_template = resolved_build_task.try_into()?;
let dap_request = ext
@@ -1180,174 +1111,6 @@ impl Extension {
            _ => anyhow::bail!("`run_dap_locator` not available prior to v0.6.0"),
}
}
pub async fn call_llm_providers(
&self,
store: &mut Store<WasmState>,
) -> Result<Vec<latest::llm_provider::ProviderInfo>> {
match self {
Extension::V0_8_0(ext) => ext.call_llm_providers(store).await,
_ => Ok(Vec::new()),
}
}
pub async fn call_llm_provider_models(
&self,
store: &mut Store<WasmState>,
provider_id: &str,
) -> Result<Result<Vec<latest::llm_provider::ModelInfo>, String>> {
match self {
Extension::V0_8_0(ext) => ext.call_llm_provider_models(store, provider_id).await,
_ => anyhow::bail!("`llm_provider_models` not available prior to v0.8.0"),
}
}
pub async fn call_llm_provider_settings_markdown(
&self,
store: &mut Store<WasmState>,
provider_id: &str,
) -> Result<Option<String>> {
match self {
Extension::V0_8_0(ext) => {
ext.call_llm_provider_settings_markdown(store, provider_id)
.await
}
_ => Ok(None),
}
}
pub async fn call_llm_provider_is_authenticated(
&self,
store: &mut Store<WasmState>,
provider_id: &str,
) -> Result<bool> {
match self {
Extension::V0_8_0(ext) => {
ext.call_llm_provider_is_authenticated(store, provider_id)
.await
}
_ => Ok(false),
}
}
pub async fn call_llm_provider_start_device_flow_sign_in(
&self,
store: &mut Store<WasmState>,
provider_id: &str,
) -> Result<Result<LlmDeviceFlowPromptInfo, String>> {
match self {
Extension::V0_8_0(ext) => {
ext.call_llm_provider_start_device_flow_sign_in(store, provider_id)
.await
}
_ => {
anyhow::bail!(
"`llm_provider_start_device_flow_sign_in` not available prior to v0.8.0"
)
}
}
}
pub async fn call_llm_provider_poll_device_flow_sign_in(
&self,
store: &mut Store<WasmState>,
provider_id: &str,
) -> Result<Result<(), String>> {
match self {
Extension::V0_8_0(ext) => {
ext.call_llm_provider_poll_device_flow_sign_in(store, provider_id)
.await
}
_ => {
anyhow::bail!(
"`llm_provider_poll_device_flow_sign_in` not available prior to v0.8.0"
)
}
}
}
pub async fn call_llm_provider_reset_credentials(
&self,
store: &mut Store<WasmState>,
provider_id: &str,
) -> Result<Result<(), String>> {
match self {
Extension::V0_8_0(ext) => {
ext.call_llm_provider_reset_credentials(store, provider_id)
.await
}
_ => anyhow::bail!("`llm_provider_reset_credentials` not available prior to v0.8.0"),
}
}
pub async fn call_llm_count_tokens(
&self,
store: &mut Store<WasmState>,
provider_id: &str,
model_id: &str,
request: &latest::llm_provider::CompletionRequest,
) -> Result<Result<u64, String>> {
match self {
Extension::V0_8_0(ext) => {
ext.call_llm_count_tokens(store, provider_id, model_id, request)
.await
}
_ => anyhow::bail!("`llm_count_tokens` not available prior to v0.8.0"),
}
}
pub async fn call_llm_stream_completion_start(
&self,
store: &mut Store<WasmState>,
provider_id: &str,
model_id: &str,
request: &latest::llm_provider::CompletionRequest,
) -> Result<Result<String, String>> {
match self {
Extension::V0_8_0(ext) => {
ext.call_llm_stream_completion_start(store, provider_id, model_id, request)
.await
}
_ => anyhow::bail!("`llm_stream_completion_start` not available prior to v0.8.0"),
}
}
pub async fn call_llm_stream_completion_next(
&self,
store: &mut Store<WasmState>,
stream_id: &str,
) -> Result<Result<Option<latest::llm_provider::CompletionEvent>, String>> {
match self {
Extension::V0_8_0(ext) => ext.call_llm_stream_completion_next(store, stream_id).await,
_ => anyhow::bail!("`llm_stream_completion_next` not available prior to v0.8.0"),
}
}
pub async fn call_llm_stream_completion_close(
&self,
store: &mut Store<WasmState>,
stream_id: &str,
) -> Result<()> {
match self {
Extension::V0_8_0(ext) => ext.call_llm_stream_completion_close(store, stream_id).await,
_ => anyhow::bail!("`llm_stream_completion_close` not available prior to v0.8.0"),
}
}
pub async fn call_llm_cache_configuration(
&self,
store: &mut Store<WasmState>,
provider_id: &str,
model_id: &str,
) -> Result<Option<latest::llm_provider::CacheConfiguration>> {
match self {
Extension::V0_8_0(ext) => {
ext.call_llm_cache_configuration(store, provider_id, model_id)
.await
}
_ => Ok(None),
}
}
}
trait ToWasmtimeResult<T> {

View File

@@ -32,6 +32,8 @@ wasmtime::component::bindgen!({
},
});
pub use self::zed::extension::*;
mod settings {
#![allow(dead_code)]
include!(concat!(env!("OUT_DIR"), "/since_v0.6.0/settings.rs"));

View File

@@ -1,19 +1,18 @@
use crate::wasm_host::wit::since_v0_8_0::{
use crate::wasm_host::wit::since_v0_6_0::{
dap::{
BuildTaskDefinition, BuildTaskDefinitionTemplatePayload, StartDebuggingRequestArguments,
TcpArguments, TcpArgumentsTemplate,
},
lsp::{CompletionKind, CompletionLabelDetails, InsertTextFormat, SymbolKind},
slash_command::SlashCommandOutputSection,
};
use crate::wasm_host::wit::{CompletionKind, CompletionLabelDetails, InsertTextFormat, SymbolKind};
use crate::wasm_host::{WasmState, wit::ToWasmtimeResult};
use ::http_client::{AsyncBody, HttpRequestExt};
use ::settings::{ModelMode, Settings, SettingsStore, WorktreeId};
use ::settings::{Settings, WorktreeId};
use anyhow::{Context as _, Result, bail};
use async_compression::futures::bufread::GzipDecoder;
use async_tar::Archive;
use async_trait::async_trait;
use credentials_provider::CredentialsProvider;
use extension::{
ExtensionLanguageServerProxy, KeyValueStoreDelegate, ProjectDelegate, WorktreeDelegate,
};
@@ -23,14 +22,12 @@ use gpui::{BackgroundExecutor, SharedString};
use language::{BinaryStatus, LanguageName, language_settings::AllLanguageSettings};
use project::project_settings::ProjectSettings;
use semver::Version;
use smol::net::TcpListener;
use std::{
env,
net::Ipv4Addr,
path::{Path, PathBuf},
str::FromStr,
sync::{Arc, OnceLock},
time::Duration,
};
use task::{SpawnInTerminal, ZedDebugConfig};
use url::Url;
@@ -618,19 +615,6 @@ impl http_client::Host for WasmState {
.to_wasmtime_result()
}
async fn fetch_fallible(
&mut self,
request: http_client::HttpRequest,
) -> wasmtime::Result<Result<http_client::HttpResponseWithStatus, String>> {
maybe!(async {
let request = convert_request(&request)?;
let mut response = self.host.http_client.send(request).await?;
convert_response_with_status(&mut response).await
})
.await
.to_wasmtime_result()
}
async fn fetch_stream(
&mut self,
request: http_client::HttpRequest,
@@ -734,26 +718,6 @@ async fn convert_response(
Ok(extension_response)
}
async fn convert_response_with_status(
response: &mut ::http_client::Response<AsyncBody>,
) -> anyhow::Result<http_client::HttpResponseWithStatus> {
let status = response.status().as_u16();
let headers: Vec<(String, String)> = response
.headers()
.iter()
.map(|(k, v)| (k.to_string(), v.to_str().unwrap_or("").to_string()))
.collect();
let mut body = Vec::new();
response.body_mut().read_to_end(&mut body).await?;
Ok(http_client::HttpResponseWithStatus {
status,
headers,
body,
})
}
impl nodejs::Host for WasmState {
async fn node_binary_path(&mut self) -> wasmtime::Result<Result<String, String>> {
self.host
@@ -1145,369 +1109,3 @@ impl ExtensionImports for WasmState {
.to_wasmtime_result()
}
}
impl llm_provider::Host for WasmState {
async fn get_credential(&mut self, provider_id: String) -> wasmtime::Result<Option<String>> {
let extension_id = self.manifest.id.clone();
let is_legacy_extension = crate::LEGACY_LLM_EXTENSION_IDS.contains(&extension_id.as_ref());
// Check if this provider has env vars configured and if the user has allowed any of them
let env_vars = self
.manifest
.language_model_providers
.get(&Arc::<str>::from(provider_id.as_str()))
.and_then(|entry| entry.auth.as_ref())
.and_then(|auth| auth.env_vars.clone());
if let Some(env_vars) = env_vars {
let full_provider_id = format!("{}:{}", extension_id, provider_id);
// Check each env var to see if it's allowed and set
for env_var_name in &env_vars {
let settings_key: Arc<str> =
format!("{}:{}", full_provider_id, env_var_name).into();
// For legacy extensions, auto-allow if env var is set
let env_var_is_set = env::var(env_var_name)
.map(|v| !v.is_empty())
.unwrap_or(false);
let is_allowed = self
.on_main_thread({
let settings_key = settings_key.clone();
move |cx| {
async move {
cx.update(|cx| {
crate::extension_settings::ExtensionSettings::get_global(cx)
.allowed_env_var_providers
.contains(&settings_key)
})
}
.boxed_local()
}
})
.await
.unwrap_or(false);
if is_allowed || (is_legacy_extension && env_var_is_set) {
if let Ok(value) = env::var(env_var_name) {
if !value.is_empty() {
return Ok(Some(value));
}
}
}
}
}
// Fall back to credential store
let credential_key = format!("extension-llm-{}:{}", extension_id, provider_id);
self.on_main_thread(move |cx| {
async move {
let credentials_provider = cx.update(|cx| <dyn CredentialsProvider>::global(cx))?;
let result = credentials_provider
.read_credentials(&credential_key, cx)
.await
.ok()
.flatten();
Ok(result.map(|(_, password)| String::from_utf8_lossy(&password).to_string()))
}
.boxed_local()
})
.await
}
async fn store_credential(
&mut self,
provider_id: String,
value: String,
) -> wasmtime::Result<Result<(), String>> {
let extension_id = self.manifest.id.clone();
let credential_key = format!("extension-llm-{}:{}", extension_id, provider_id);
self.on_main_thread(move |cx| {
async move {
let credentials_provider = cx.update(|cx| <dyn CredentialsProvider>::global(cx))?;
credentials_provider
.write_credentials(&credential_key, "api_key", value.as_bytes(), cx)
.await
.map_err(|e| anyhow::anyhow!("{}", e))
}
.boxed_local()
})
.await
.to_wasmtime_result()
}
async fn delete_credential(
&mut self,
provider_id: String,
) -> wasmtime::Result<Result<(), String>> {
let extension_id = self.manifest.id.clone();
let credential_key = format!("extension-llm-{}:{}", extension_id, provider_id);
self.on_main_thread(move |cx| {
async move {
let credentials_provider = cx.update(|cx| <dyn CredentialsProvider>::global(cx))?;
credentials_provider
.delete_credentials(&credential_key, cx)
.await
.map_err(|e| anyhow::anyhow!("{}", e))
}
.boxed_local()
})
.await
.to_wasmtime_result()
}
async fn get_env_var(&mut self, name: String) -> wasmtime::Result<Option<String>> {
let extension_id = self.manifest.id.clone();
// Find which provider (if any) declares this env var in its auth config
let mut allowed_provider_id: Option<Arc<str>> = None;
for (provider_id, provider_entry) in &self.manifest.language_model_providers {
if let Some(auth_config) = &provider_entry.auth {
if let Some(env_vars) = &auth_config.env_vars {
if env_vars.iter().any(|v| v == &name) {
allowed_provider_id = Some(provider_id.clone());
break;
}
}
}
}
// If no provider declares this env var, deny access
let Some(provider_id) = allowed_provider_id else {
log::warn!(
"Extension {} attempted to read env var {} which is not declared in any provider auth config",
extension_id,
name
);
return Ok(None);
};
// Check if the user has allowed this specific env var
let settings_key: Arc<str> = format!("{}:{}:{}", extension_id, provider_id, name).into();
let is_legacy_extension = crate::LEGACY_LLM_EXTENSION_IDS.contains(&extension_id.as_ref());
// For legacy extensions, auto-allow if env var is set
let env_var_is_set = env::var(&name).map(|v| !v.is_empty()).unwrap_or(false);
let is_allowed = self
.on_main_thread({
let settings_key = settings_key.clone();
move |cx| {
async move {
cx.update(|cx| {
crate::extension_settings::ExtensionSettings::get_global(cx)
.allowed_env_var_providers
.contains(&settings_key)
})
}
.boxed_local()
}
})
.await
.unwrap_or(false);
if !is_allowed && !(is_legacy_extension && env_var_is_set) {
log::debug!(
"Extension {} provider {} is not allowed to read env var {}",
extension_id,
provider_id,
name
);
return Ok(None);
}
Ok(env::var(&name).ok())
}
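The permission gate in `get_env_var` above boils down to two small rules, sketched here as standalone helpers (hypothetical names; the real check runs through `ExtensionSettings` on the main thread): a variable is readable only if some provider in the manifest declares it, and either the user has allowed the `extension:provider:var` settings key or the extension is a legacy one with the variable already set.

```rust
// Hypothetical helpers mirroring the env-var gate in `get_env_var`.

// The settings key under which a user allow-listing is stored.
fn settings_key(extension_id: &str, provider_id: &str, var_name: &str) -> String {
    format!("{extension_id}:{provider_id}:{var_name}")
}

// Access requires declaration plus either an explicit user allow or the
// legacy grandfathering (legacy extension and the variable is already set).
fn env_var_allowed(declared: bool, user_allowed: bool, is_legacy: bool, var_is_set: bool) -> bool {
    declared && (user_allowed || (is_legacy && var_is_set))
}
```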
async fn oauth_start_web_auth(
&mut self,
config: llm_provider::OauthWebAuthConfig,
) -> wasmtime::Result<Result<llm_provider::OauthWebAuthResult, String>> {
let auth_url = config.auth_url;
let callback_path = config.callback_path;
let timeout_secs = config.timeout_secs.unwrap_or(300);
self.on_main_thread(move |cx| {
async move {
// Bind to port 0 to let the OS assign an available port, then substitute
// it into the auth URL's {port} placeholder for the OAuth callback.
let listener = TcpListener::bind("127.0.0.1:0")
.await
.map_err(|e| anyhow::anyhow!("Failed to bind localhost server: {}", e))?;
let port = listener
.local_addr()
.map_err(|e| anyhow::anyhow!("Failed to get local address: {}", e))?
.port();
let auth_url_with_port = auth_url.replace("{port}", &port.to_string());
cx.update(|cx| {
cx.open_url(&auth_url_with_port);
})?;
let accept_future = async {
let (mut stream, _) = listener
.accept()
.await
.map_err(|e| anyhow::anyhow!("Failed to accept connection: {}", e))?;
let mut request_line = String::new();
{
let mut reader = smol::io::BufReader::new(&mut stream);
smol::io::AsyncBufReadExt::read_line(&mut reader, &mut request_line)
.await
.map_err(|e| anyhow::anyhow!("Failed to read request: {}", e))?;
}
let path = request_line
.split_whitespace()
.nth(1)
.ok_or_else(|| anyhow::anyhow!("Malformed HTTP request"))?;
let callback_url = if path.starts_with(&callback_path)
|| path.starts_with(&format!("/{}", callback_path.trim_start_matches('/')))
{
format!("http://localhost:{}{}", port, path)
} else {
return Err(anyhow::anyhow!("Unexpected callback path: {}", path));
};
let response = format!(
"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nConnection: close\r\n\r\n{}",
include_str!("../oauth_callback_response.html")
);
smol::io::AsyncWriteExt::write_all(&mut stream, response.as_bytes())
.await
.ok();
smol::io::AsyncWriteExt::flush(&mut stream).await.ok();
Ok(callback_url)
};
let timeout_duration = Duration::from_secs(timeout_secs as u64);
let callback_url = smol::future::or(accept_future, async {
smol::Timer::after(timeout_duration).await;
Err(anyhow::anyhow!(
"OAuth callback timed out after {} seconds",
timeout_secs
))
})
.await?;
Ok(llm_provider::OauthWebAuthResult {
callback_url,
port: port as u32,
})
}
.boxed_local()
})
.await
.to_wasmtime_result()
}
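Two pure pieces of the loopback OAuth flow above can be isolated for illustration (assumed helper names, not part of the extension API): splicing the OS-assigned port into the `{port}` placeholder of the auth URL, and pulling the request path out of the first HTTP request line of the callback connection.

```rust
// Substitute the OS-assigned port into the auth URL's `{port}` placeholder.
fn auth_url_with_port(auth_url: &str, port: u16) -> String {
    auth_url.replace("{port}", &port.to_string())
}

// Extract the path from an HTTP request line,
// e.g. "GET /callback?code=abc HTTP/1.1" -> "/callback?code=abc".
fn callback_path_from_request_line(request_line: &str) -> Option<&str> {
    request_line.split_whitespace().nth(1)
}
```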
async fn oauth_send_http_request(
&mut self,
request: http_client::HttpRequest,
) -> wasmtime::Result<Result<http_client::HttpResponseWithStatus, String>> {
maybe!(async {
let request = convert_request(&request)?;
let mut response = self.host.http_client.send(request).await?;
convert_response_with_status(&mut response).await
})
.await
.to_wasmtime_result()
}
async fn oauth_open_browser(&mut self, url: String) -> wasmtime::Result<Result<(), String>> {
self.on_main_thread(move |cx| {
async move {
cx.update(|cx| {
cx.open_url(&url);
})?;
Ok(())
}
.boxed_local()
})
.await
.to_wasmtime_result()
}
async fn get_provider_settings(
&mut self,
provider_id: String,
) -> wasmtime::Result<Option<llm_provider::ProviderSettings>> {
let extension_id = self.manifest.id.clone();
let result = self
.on_main_thread(move |cx| {
async move {
cx.update(|cx| {
let settings_store = cx.global::<SettingsStore>();
let user_settings = settings_store.raw_user_settings();
let language_models =
user_settings.and_then(|s| s.content.language_models.as_ref());
// Map provider IDs to their settings
// The provider_id from the extension is just the provider part (e.g., "google-ai")
// We need to match this to the appropriate settings
match provider_id.as_str() {
"google-ai" => {
let google = language_models.and_then(|lm| lm.google.as_ref());
let google = google?;
let api_url = google.api_url.clone().filter(|s| !s.is_empty());
let available_models = google
.available_models
.as_ref()
.map(|models| {
models
.iter()
.map(|m| {
let thinking_budget = match &m.mode {
Some(ModelMode::Thinking { budget_tokens }) => {
*budget_tokens
}
_ => None,
};
llm_provider::CustomModelConfig {
name: m.name.clone(),
display_name: m.display_name.clone(),
max_tokens: m.max_tokens,
max_output_tokens: None,
thinking_budget,
}
})
.collect()
})
.unwrap_or_default();
Some(llm_provider::ProviderSettings {
api_url,
available_models,
})
}
_ => {
log::debug!(
"Extension {} requested settings for unknown provider: {}",
extension_id,
provider_id
);
None
}
}
})
.ok()
.flatten()
}
.boxed_local()
})
.await;
Ok(result)
}
}

View File

@@ -442,9 +442,7 @@ impl ExtensionsPage {
let extension_store = ExtensionStore::global(cx).read(cx);
match extension_store.outstanding_operations().get(extension_id) {
Some(ExtensionOperation::Install) | Some(ExtensionOperation::AutoInstall) => {
ExtensionStatus::Installing
}
Some(ExtensionOperation::Install) => ExtensionStatus::Installing,
Some(ExtensionOperation::Remove) => ExtensionStatus::Removing,
Some(ExtensionOperation::Upgrade) => ExtensionStatus::Upgrading,
None => match extension_store.installed_extensions().get(extension_id) {

View File

@@ -1,155 +0,0 @@
use gpui::{App, Context, WeakEntity, Window};
use notifications::status_toast::{StatusToast, ToastIcon};
use std::sync::Arc;
use ui::{Color, IconName, SharedString};
use util::ResultExt;
use workspace::{self, Workspace};
pub fn clone_and_open(
repo_url: SharedString,
workspace: WeakEntity<Workspace>,
window: &mut Window,
cx: &mut App,
on_success: Arc<
dyn Fn(&mut Workspace, &mut Window, &mut Context<Workspace>) + Send + Sync + 'static,
>,
) {
let destination_prompt = cx.prompt_for_paths(gpui::PathPromptOptions {
files: false,
directories: true,
multiple: false,
prompt: Some("Select as Repository Destination".into()),
});
window
.spawn(cx, async move |cx| {
let mut paths = destination_prompt.await.ok()?.ok()??;
let mut destination_dir = paths.pop()?;
let repo_name = repo_url
.split('/')
.next_back()
.map(|name| name.strip_suffix(".git").unwrap_or(name))
.unwrap_or("repository")
.to_owned();
let clone_task = workspace
.update(cx, |workspace, cx| {
let fs = workspace.app_state().fs.clone();
let destination_dir = destination_dir.clone();
let repo_url = repo_url.clone();
cx.spawn(async move |_workspace, _cx| {
fs.git_clone(&repo_url, destination_dir.as_path()).await
})
})
.ok()?;
if let Err(error) = clone_task.await {
workspace
.update(cx, |workspace, cx| {
let toast = StatusToast::new(error.to_string(), cx, |this, _| {
this.icon(ToastIcon::new(IconName::XCircle).color(Color::Error))
.dismiss_button(true)
});
workspace.toggle_status_toast(toast, cx);
})
.log_err();
return None;
}
let has_worktrees = workspace
.read_with(cx, |workspace, cx| {
workspace.project().read(cx).worktrees(cx).next().is_some()
})
.ok()?;
let prompt_answer = if has_worktrees {
cx.update(|window, cx| {
window.prompt(
gpui::PromptLevel::Info,
&format!("Git Clone: {}", repo_name),
None,
&["Add repo to project", "Open repo in new project"],
cx,
)
})
.ok()?
.await
.ok()?
} else {
// Don't ask if project is empty
0
};
destination_dir.push(&repo_name);
match prompt_answer {
0 => {
workspace
.update_in(cx, |workspace, window, cx| {
let create_task = workspace.project().update(cx, |project, cx| {
project.create_worktree(destination_dir.as_path(), true, cx)
});
let workspace_weak = cx.weak_entity();
let on_success = on_success.clone();
cx.spawn_in(window, async move |_window, cx| {
if create_task.await.log_err().is_some() {
workspace_weak
.update_in(cx, |workspace, window, cx| {
(on_success)(workspace, window, cx);
})
.ok();
}
})
.detach();
})
.ok()?;
}
1 => {
workspace
.update(cx, move |workspace, cx| {
let app_state = workspace.app_state().clone();
let destination_path = destination_dir.clone();
let on_success = on_success.clone();
workspace::open_new(
Default::default(),
app_state,
cx,
move |workspace, window, cx| {
cx.activate(true);
let create_task =
workspace.project().update(cx, |project, cx| {
project.create_worktree(
destination_path.as_path(),
true,
cx,
)
});
let workspace_weak = cx.weak_entity();
cx.spawn_in(window, async move |_window, cx| {
if create_task.await.log_err().is_some() {
workspace_weak
.update_in(cx, |workspace, window, cx| {
(on_success)(workspace, window, cx);
})
.ok();
}
})
.detach();
},
)
.detach();
})
.ok();
}
_ => {}
}
Some(())
})
.detach();
}
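The repository-name derivation used by `clone_and_open` above (last URL path segment, `.git` suffix stripped, with a fallback) can be written as a standalone helper for clarity; this is a sketch, not an exported function of the crate.

```rust
// Derive a directory name from a clone URL, matching the logic above:
// take the last '/'-separated segment and strip a trailing ".git".
fn repo_name_from_url(repo_url: &str) -> String {
    repo_url
        .split('/')
        .next_back()
        .map(|name| name.strip_suffix(".git").unwrap_or(name))
        .unwrap_or("repository")
        .to_owned()
}
```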

View File

@@ -58,7 +58,7 @@ use project::{
git_store::{GitStoreEvent, Repository, RepositoryEvent, RepositoryId, pending_op},
project_settings::{GitPathStyle, ProjectSettings},
};
use prompt_store::{BuiltInPrompt, PromptId, PromptStore, RULES_FILE_NAMES};
use prompt_store::{PromptId, PromptStore, RULES_FILE_NAMES};
use serde::{Deserialize, Serialize};
use settings::{Settings, SettingsStore, StatusStyle};
use std::future::Future;
@@ -2579,26 +2579,25 @@ impl GitPanel {
is_using_legacy_zed_pro: bool,
cx: &mut AsyncApp,
) -> String {
const DEFAULT_PROMPT: &str = include_str!("commit_message_prompt.txt");
// Remove this once we stop supporting legacy Zed Pro
// In legacy Zed Pro, Git commit summary generation did not count as a
// prompt. If the user changes the prompt, our classification will fail,
// meaning that users will be charged for generating commit messages.
if is_using_legacy_zed_pro {
return BuiltInPrompt::CommitMessage.default_content().to_string();
return DEFAULT_PROMPT.to_string();
}
let load = async {
let store = cx.update(|cx| PromptStore::global(cx)).ok()?.await.ok()?;
store
.update(cx, |s, cx| {
s.load(PromptId::BuiltIn(BuiltInPrompt::CommitMessage), cx)
})
.update(cx, |s, cx| s.load(PromptId::CommitMessage, cx))
.ok()?
.await
.ok()
};
load.await
.unwrap_or_else(|| BuiltInPrompt::CommitMessage.default_content().to_string())
load.await.unwrap_or_else(|| DEFAULT_PROMPT.to_string())
}
/// Generates a commit message using an LLM.
@@ -2849,15 +2848,93 @@ impl GitPanel {
}
pub(crate) fn git_clone(&mut self, repo: String, window: &mut Window, cx: &mut Context<Self>) {
let path = cx.prompt_for_paths(gpui::PathPromptOptions {
files: false,
directories: true,
multiple: false,
prompt: Some("Select as Repository Destination".into()),
});
let workspace = self.workspace.clone();
crate::clone::clone_and_open(
repo.into(),
workspace,
window,
cx,
Arc::new(|_workspace: &mut workspace::Workspace, _window, _cx| {}),
);
cx.spawn_in(window, async move |this, cx| {
let mut paths = path.await.ok()?.ok()??;
let mut path = paths.pop()?;
let repo_name = repo.split("/").last()?.strip_suffix(".git")?.to_owned();
let fs = this.read_with(cx, |this, _| this.fs.clone()).ok()?;
let prompt_answer = match fs.git_clone(&repo, path.as_path()).await {
Ok(_) => cx.update(|window, cx| {
window.prompt(
PromptLevel::Info,
&format!("Git Clone: {}", repo_name),
None,
&["Add repo to project", "Open repo in new project"],
cx,
)
}),
Err(e) => {
this.update(cx, |this: &mut GitPanel, cx| {
let toast = StatusToast::new(e.to_string(), cx, |this, _| {
this.icon(ToastIcon::new(IconName::XCircle).color(Color::Error))
.dismiss_button(true)
});
this.workspace
.update(cx, |workspace, cx| {
workspace.toggle_status_toast(toast, cx);
})
.ok();
})
.ok()?;
return None;
}
}
.ok()?;
path.push(repo_name);
match prompt_answer.await.ok()? {
0 => {
workspace
.update(cx, |workspace, cx| {
workspace
.project()
.update(cx, |project, cx| {
project.create_worktree(path.as_path(), true, cx)
})
.detach();
})
.ok();
}
1 => {
workspace
.update(cx, move |workspace, cx| {
workspace::open_new(
Default::default(),
workspace.app_state().clone(),
cx,
move |workspace, _, cx| {
cx.activate(true);
workspace
.project()
.update(cx, |project, cx| {
project.create_worktree(&path, true, cx)
})
.detach();
},
)
.detach();
})
.ok();
}
_ => {}
}
Some(())
})
.detach();
}
pub(crate) fn git_init(&mut self, window: &mut Window, cx: &mut Context<Self>) {
@@ -5203,7 +5280,7 @@ impl GitPanel {
this.child(
self.entry_label(path_name, path_color)
.truncate_start()
.truncate()
.when(strikethrough, Label::strikethrough),
)
})

View File

@@ -10,7 +10,6 @@ use ui::{
};
mod blame_ui;
pub mod clone;
use git::{
repository::{Branch, Upstream, UpstreamTracking, UpstreamTrackingStatus},

View File

@@ -198,14 +198,14 @@ wayland-backend = { version = "0.3.3", features = [
"client_system",
"dlopen",
], optional = true }
wayland-client = { version = "0.31.11", optional = true }
wayland-cursor = { version = "0.31.11", optional = true }
wayland-protocols = { version = "0.32.9", features = [
wayland-client = { version = "0.31.2", optional = true }
wayland-cursor = { version = "0.31.1", optional = true }
wayland-protocols = { version = "0.31.2", features = [
"client",
"staging",
"unstable",
], optional = true }
wayland-protocols-plasma = { version = "0.3.9", features = [
wayland-protocols-plasma = { version = "0.2.0", features = [
"client",
], optional = true }
wayland-protocols-wlr = { version = "0.3.9", features = [

View File

@@ -5,7 +5,6 @@ use gpui::{
struct SubWindow {
custom_titlebar: bool,
is_dialog: bool,
}
fn button(text: &str, on_click: impl Fn(&mut Window, &mut App) + 'static) -> impl IntoElement {
@@ -24,10 +23,7 @@ fn button(text: &str, on_click: impl Fn(&mut Window, &mut App) + 'static) -> imp
}
impl Render for SubWindow {
fn render(&mut self, _window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
let window_bounds =
WindowBounds::Windowed(Bounds::centered(None, size(px(250.0), px(200.0)), cx));
fn render(&mut self, _window: &mut Window, _: &mut Context<Self>) -> impl IntoElement {
div()
.flex()
.flex_col()
@@ -56,28 +52,8 @@ impl Render for SubWindow {
.child(
div()
.p_8()
.flex()
.flex_col()
.gap_2()
.child("SubWindow")
.when(self.is_dialog, |div| {
div.child(button("Open Nested Dialog", move |_, cx| {
cx.open_window(
WindowOptions {
window_bounds: Some(window_bounds),
kind: WindowKind::Dialog,
..Default::default()
},
|_, cx| {
cx.new(|_| SubWindow {
custom_titlebar: false,
is_dialog: true,
})
},
)
.unwrap();
}))
})
.child(button("Close", |window, _| {
window.remove_window();
})),
@@ -110,7 +86,6 @@ impl Render for WindowDemo {
|_, cx| {
cx.new(|_| SubWindow {
custom_titlebar: false,
is_dialog: false,
})
},
)
@@ -126,39 +101,6 @@ impl Render for WindowDemo {
|_, cx| {
cx.new(|_| SubWindow {
custom_titlebar: false,
is_dialog: false,
})
},
)
.unwrap();
}))
.child(button("Floating", move |_, cx| {
cx.open_window(
WindowOptions {
window_bounds: Some(window_bounds),
kind: WindowKind::Floating,
..Default::default()
},
|_, cx| {
cx.new(|_| SubWindow {
custom_titlebar: false,
is_dialog: false,
})
},
)
.unwrap();
}))
.child(button("Dialog", move |_, cx| {
cx.open_window(
WindowOptions {
window_bounds: Some(window_bounds),
kind: WindowKind::Dialog,
..Default::default()
},
|_, cx| {
cx.new(|_| SubWindow {
custom_titlebar: false,
is_dialog: true,
})
},
)
@@ -174,7 +116,6 @@ impl Render for WindowDemo {
|_, cx| {
cx.new(|_| SubWindow {
custom_titlebar: true,
is_dialog: false,
})
},
)
@@ -190,7 +131,6 @@ impl Render for WindowDemo {
|_, cx| {
cx.new(|_| SubWindow {
custom_titlebar: false,
is_dialog: false,
})
},
)
@@ -207,7 +147,6 @@ impl Render for WindowDemo {
|_, cx| {
cx.new(|_| SubWindow {
custom_titlebar: false,
is_dialog: false,
})
},
)
@@ -223,7 +162,6 @@ impl Render for WindowDemo {
|_, cx| {
cx.new(|_| SubWindow {
custom_titlebar: false,
is_dialog: false,
})
},
)
@@ -239,7 +177,6 @@ impl Render for WindowDemo {
|_, cx| {
cx.new(|_| SubWindow {
custom_titlebar: false,
is_dialog: false,
})
},
)

View File

@@ -316,7 +316,6 @@ impl SystemWindowTabController {
.find_map(|(group, tabs)| tabs.iter().find(|tab| tab.id == id).map(|_| group));
let current_group = current_group?;
// TODO: `.keys()` returns arbitrary order, what does "next" mean?
let mut group_ids: Vec<_> = controller.tab_groups.keys().collect();
let idx = group_ids.iter().position(|g| *g == current_group)?;
let next_idx = (idx + 1) % group_ids.len();
@@ -341,7 +340,6 @@ impl SystemWindowTabController {
.find_map(|(group, tabs)| tabs.iter().find(|tab| tab.id == id).map(|_| group));
let current_group = current_group?;
// TODO: `.keys()` returns arbitrary order, what does "previous" mean?
let mut group_ids: Vec<_> = controller.tab_groups.keys().collect();
let idx = group_ids.iter().position(|g| *g == current_group)?;
let prev_idx = if idx == 0 {
@@ -363,9 +361,12 @@ impl SystemWindowTabController {
/// Get all tabs in the same window.
pub fn tabs(&self, id: WindowId) -> Option<&Vec<SystemWindowTab>> {
self.tab_groups
.values()
.find(|tabs| tabs.iter().any(|tab| tab.id == id))
let tab_group = self
.tab_groups
.iter()
.find_map(|(group, tabs)| tabs.iter().find(|tab| tab.id == id).map(|_| *group))?;
self.tab_groups.get(&tab_group)
}
/// Initialize the visibility of the system window tab controller.
@@ -440,7 +441,7 @@ impl SystemWindowTabController {
/// Insert a tab into a tab group.
pub fn add_tab(cx: &mut App, id: WindowId, tabs: Vec<SystemWindowTab>) {
let mut controller = cx.global_mut::<SystemWindowTabController>();
let Some(tab) = tabs.iter().find(|tab| tab.id == id).cloned() else {
let Some(tab) = tabs.clone().into_iter().find(|tab| tab.id == id) else {
return;
};
@@ -503,14 +504,16 @@ impl SystemWindowTabController {
return;
};
let initial_tabs_len = initial_tabs.len();
let mut all_tabs = initial_tabs.clone();
for (_, mut tabs) in controller.tab_groups.drain() {
tabs.retain(|tab| !all_tabs[..initial_tabs_len].contains(tab));
all_tabs.extend(tabs);
for tabs in controller.tab_groups.values() {
all_tabs.extend(
tabs.iter()
.filter(|tab| !initial_tabs.contains(tab))
.cloned(),
);
}
controller.tab_groups.clear();
controller.tab_groups.insert(0, all_tabs);
}
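The `drain`/`retain` merge introduced above folds every tab group into group 0 while keeping the initiating window's tabs first and skipping duplicates of them. A minimal sketch with assumed types (plain `u32` tab ids instead of `SystemWindowTab`):

```rust
use std::collections::HashMap;

// Fold all tab groups into group 0: the initiating window's tabs come
// first, and copies of them found in other groups are dropped.
fn merge_all_groups(groups: &mut HashMap<usize, Vec<u32>>, initial: Vec<u32>) {
    let initial_len = initial.len();
    let mut all = initial;
    for (_, mut tabs) in groups.drain() {
        // Drop tabs that duplicate the initiating window's tabs.
        tabs.retain(|tab| !all[..initial_len].contains(tab));
        all.extend(tabs);
    }
    groups.insert(0, all);
}
```

Note the result is order-stable with respect to the initial tabs even though `drain` yields groups in arbitrary order, because duplicates are filtered against the fixed `initial` prefix only.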
@@ -1077,9 +1080,11 @@ impl App {
self.platform.window_appearance()
}
/// Reads data from the platform clipboard.
pub fn read_from_clipboard(&self) -> Option<ClipboardItem> {
self.platform.read_from_clipboard()
/// Writes data to the primary selection buffer.
/// Only available on Linux.
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
pub fn write_to_primary(&self, item: ClipboardItem) {
self.platform.write_to_primary(item)
}
/// Writes data to the platform clipboard.
@@ -1094,31 +1099,9 @@ impl App {
self.platform.read_from_primary()
}
/// Writes data to the primary selection buffer.
/// Only available on Linux.
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
pub fn write_to_primary(&self, item: ClipboardItem) {
self.platform.write_to_primary(item)
}
/// Reads data from macOS's "Find" pasteboard.
///
/// Used to share the current search string between apps.
///
/// https://developer.apple.com/documentation/appkit/nspasteboard/name-swift.struct/find
#[cfg(target_os = "macos")]
pub fn read_from_find_pasteboard(&self) -> Option<ClipboardItem> {
self.platform.read_from_find_pasteboard()
}
/// Writes data to macOS's "Find" pasteboard.
///
/// Used to share the current search string between apps.
///
/// https://developer.apple.com/documentation/appkit/nspasteboard/name-swift.struct/find
#[cfg(target_os = "macos")]
pub fn write_to_find_pasteboard(&self, item: ClipboardItem) {
self.platform.write_to_find_pasteboard(item)
/// Reads data from the platform clipboard.
pub fn read_from_clipboard(&self) -> Option<ClipboardItem> {
self.platform.read_from_clipboard()
}
/// Writes credentials to the platform keychain.

View File

@@ -296,20 +296,6 @@ impl TestAppContext {
&self.text_system
}
/// Simulates writing credentials to the platform keychain.
pub fn write_credentials(&self, url: &str, username: &str, password: &[u8]) {
let _ = self
.test_platform
.write_credentials(url, username, password);
}
/// Simulates reading credentials from the platform keychain.
pub fn read_credentials(&self, url: &str) -> Option<(String, Vec<u8>)> {
smol::block_on(self.test_platform.read_credentials(url))
.ok()
.flatten()
}
/// Simulates writing to the platform clipboard
pub fn write_to_clipboard(&self, item: ClipboardItem) {
self.test_platform.write_to_clipboard(item)

View File

@@ -2,8 +2,8 @@ use crate::{
ActiveTooltip, AnyView, App, Bounds, DispatchPhase, Element, ElementId, GlobalElementId,
HighlightStyle, Hitbox, HitboxBehavior, InspectorElementId, IntoElement, LayoutId,
MouseDownEvent, MouseMoveEvent, MouseUpEvent, Pixels, Point, SharedString, Size, TextOverflow,
TextRun, TextStyle, TooltipId, TruncateFrom, WhiteSpace, Window, WrappedLine,
WrappedLineLayout, register_tooltip_mouse_handlers, set_tooltip_on_window,
TextRun, TextStyle, TooltipId, WhiteSpace, Window, WrappedLine, WrappedLineLayout,
register_tooltip_mouse_handlers, set_tooltip_on_window,
};
use anyhow::Context as _;
use itertools::Itertools;
@@ -354,7 +354,7 @@ impl TextLayout {
None
};
let (truncate_width, truncation_affix, truncate_from) =
let (truncate_width, truncation_suffix) =
if let Some(text_overflow) = text_style.text_overflow.clone() {
let width = known_dimensions.width.or(match available_space.width {
crate::AvailableSpace::Definite(x) => match text_style.line_clamp {
@@ -365,24 +365,17 @@ impl TextLayout {
});
match text_overflow {
TextOverflow::Truncate(s) => (width, s, TruncateFrom::End),
TextOverflow::TruncateStart(s) => (width, s, TruncateFrom::Start),
TextOverflow::Truncate(s) => (width, s),
}
} else {
(None, "".into(), TruncateFrom::End)
(None, "".into())
};
// Only use cached layout if:
// 1. We have a cached size
// 2. wrap_width matches (or both are None)
// 3. truncate_width is None (if truncate_width is Some, we need to re-layout
// because the previous layout may have been computed without truncation)
if let Some(text_layout) = element_state.0.borrow().as_ref()
&& let Some(size) = text_layout.size
&& text_layout.size.is_some()
&& (wrap_width.is_none() || wrap_width == text_layout.wrap_width)
&& truncate_width.is_none()
{
return size;
return text_layout.size.unwrap();
}
let mut line_wrapper = cx.text_system().line_wrapper(text_style.font(), font_size);
@@ -390,9 +383,8 @@ impl TextLayout {
line_wrapper.truncate_line(
text.clone(),
truncate_width,
&truncation_affix,
&truncation_suffix,
&runs,
truncate_from,
)
} else {
(text.clone(), Cow::Borrowed(&*runs))
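The `TruncateFrom` distinction the diff above threads through `truncate_line` (truncate from the end with a suffix, or from the start with a prefix, as used by the git panel's `truncate_start` labels) can be illustrated with a character-based sketch; the real implementation works on measured pixel widths, not character counts.

```rust
// Character-count approximation of the two truncation directions: drop
// from the end (classic ellipsis) or from the start (keeps the tail of
// long paths visible), inserting the affix on the cut side.
enum TruncateFrom {
    Start,
    End,
}

fn truncate_with_affix(text: &str, max_chars: usize, affix: &str, from: TruncateFrom) -> String {
    let total = text.chars().count();
    if total <= max_chars {
        return text.to_string();
    }
    let keep = max_chars.saturating_sub(affix.chars().count());
    match from {
        TruncateFrom::End => {
            let head: String = text.chars().take(keep).collect();
            format!("{head}{affix}")
        }
        TruncateFrom::Start => {
            let tail: String = text.chars().skip(total - keep).collect();
            format!("{affix}{tail}")
        }
    }
}
```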

View File

@@ -262,18 +262,12 @@ pub(crate) trait Platform: 'static {
fn set_cursor_style(&self, style: CursorStyle);
fn should_auto_hide_scrollbars(&self) -> bool;
fn read_from_clipboard(&self) -> Option<ClipboardItem>;
fn write_to_clipboard(&self, item: ClipboardItem);
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
fn read_from_primary(&self) -> Option<ClipboardItem>;
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
fn write_to_primary(&self, item: ClipboardItem);
#[cfg(target_os = "macos")]
fn read_from_find_pasteboard(&self) -> Option<ClipboardItem>;
#[cfg(target_os = "macos")]
fn write_to_find_pasteboard(&self, item: ClipboardItem);
fn write_to_clipboard(&self, item: ClipboardItem);
#[cfg(any(target_os = "linux", target_os = "freebsd"))]
fn read_from_primary(&self) -> Option<ClipboardItem>;
fn read_from_clipboard(&self) -> Option<ClipboardItem>;
fn write_credentials(&self, url: &str, username: &str, password: &[u8]) -> Task<Result<()>>;
fn read_credentials(&self, url: &str) -> Task<Result<Option<(String, Vec<u8>)>>>;
@@ -1354,10 +1348,6 @@ pub enum WindowKind {
/// docks, notifications or wallpapers.
#[cfg(all(target_os = "linux", feature = "wayland"))]
LayerShell(layer_shell::LayerShellOptions),
/// A window that appears on top of its parent window and blocks interaction with it
/// until the modal window is closed
Dialog,
}
/// The appearance of the window, as defined by the operating system.

View File

@@ -36,6 +36,12 @@ use wayland_client::{
wl_shm_pool, wl_surface,
},
};
use wayland_protocols::wp::cursor_shape::v1::client::{
wp_cursor_shape_device_v1, wp_cursor_shape_manager_v1,
};
use wayland_protocols::wp::fractional_scale::v1::client::{
wp_fractional_scale_manager_v1, wp_fractional_scale_v1,
};
use wayland_protocols::wp::primary_selection::zv1::client::zwp_primary_selection_offer_v1::{
self, ZwpPrimarySelectionOfferV1,
};
@@ -55,14 +61,6 @@ use wayland_protocols::xdg::decoration::zv1::client::{
zxdg_decoration_manager_v1, zxdg_toplevel_decoration_v1,
};
use wayland_protocols::xdg::shell::client::{xdg_surface, xdg_toplevel, xdg_wm_base};
use wayland_protocols::{
wp::cursor_shape::v1::client::{wp_cursor_shape_device_v1, wp_cursor_shape_manager_v1},
xdg::dialog::v1::client::xdg_wm_dialog_v1::{self, XdgWmDialogV1},
};
use wayland_protocols::{
wp::fractional_scale::v1::client::{wp_fractional_scale_manager_v1, wp_fractional_scale_v1},
xdg::dialog::v1::client::xdg_dialog_v1::XdgDialogV1,
};
use wayland_protocols_plasma::blur::client::{org_kde_kwin_blur, org_kde_kwin_blur_manager};
use wayland_protocols_wlr::layer_shell::v1::client::{zwlr_layer_shell_v1, zwlr_layer_surface_v1};
use xkbcommon::xkb::ffi::XKB_KEYMAP_FORMAT_TEXT_V1;
@@ -124,7 +122,6 @@ pub struct Globals {
pub layer_shell: Option<zwlr_layer_shell_v1::ZwlrLayerShellV1>,
pub blur_manager: Option<org_kde_kwin_blur_manager::OrgKdeKwinBlurManager>,
pub text_input_manager: Option<zwp_text_input_manager_v3::ZwpTextInputManagerV3>,
pub dialog: Option<xdg_wm_dialog_v1::XdgWmDialogV1>,
pub executor: ForegroundExecutor,
}
@@ -135,7 +132,6 @@ impl Globals {
qh: QueueHandle<WaylandClientStatePtr>,
seat: wl_seat::WlSeat,
) -> Self {
let dialog_v = XdgWmDialogV1::interface().version;
Globals {
activation: globals.bind(&qh, 1..=1, ()).ok(),
compositor: globals
@@ -164,7 +160,6 @@ impl Globals {
layer_shell: globals.bind(&qh, 1..=5, ()).ok(),
blur_manager: globals.bind(&qh, 1..=1, ()).ok(),
text_input_manager: globals.bind(&qh, 1..=1, ()).ok(),
dialog: globals.bind(&qh, dialog_v..=dialog_v, ()).ok(),
executor,
qh,
}
@@ -734,7 +729,10 @@ impl LinuxClient for WaylandClient {
) -> anyhow::Result<Box<dyn PlatformWindow>> {
let mut state = self.0.borrow_mut();
let parent = state.keyboard_focused_window.clone();
let parent = state
.keyboard_focused_window
.as_ref()
.and_then(|w| w.toplevel());
let (window, surface_id) = WaylandWindow::new(
handle,
@@ -753,12 +751,7 @@ impl LinuxClient for WaylandClient {
fn set_cursor_style(&self, style: CursorStyle) {
let mut state = self.0.borrow_mut();
let need_update = state.cursor_style != Some(style)
&& (state.mouse_focused_window.is_none()
|| state
.mouse_focused_window
.as_ref()
.is_some_and(|w| !w.is_blocked()));
let need_update = state.cursor_style != Some(style);
if need_update {
let serial = state.serial_tracker.get(SerialKind::MouseEnter);
@@ -1018,7 +1011,7 @@ impl Dispatch<WlCallback, ObjectId> for WaylandClientStatePtr {
}
}
pub(crate) fn get_window(
fn get_window(
mut state: &mut RefMut<WaylandClientState>,
surface_id: &ObjectId,
) -> Option<WaylandWindowStatePtr> {
@@ -1661,30 +1654,6 @@ impl Dispatch<wl_pointer::WlPointer, ()> for WaylandClientStatePtr {
state.mouse_location = Some(point(px(surface_x as f32), px(surface_y as f32)));
if let Some(window) = state.mouse_focused_window.clone() {
if window.is_blocked() {
let default_style = CursorStyle::Arrow;
if state.cursor_style != Some(default_style) {
let serial = state.serial_tracker.get(SerialKind::MouseEnter);
state.cursor_style = Some(default_style);
if let Some(cursor_shape_device) = &state.cursor_shape_device {
cursor_shape_device.set_shape(serial, default_style.to_shape());
} else {
// cursor-shape-v1 isn't supported, set the cursor using a surface.
let wl_pointer = state
.wl_pointer
.clone()
.expect("window is focused by pointer");
let scale = window.primary_output_scale();
state.cursor.set_icon(
&wl_pointer,
serial,
default_style.to_icon_names(),
scale,
);
}
}
}
if state
.keyboard_focused_window
.as_ref()
@@ -2256,27 +2225,3 @@ impl Dispatch<zwp_primary_selection_source_v1::ZwpPrimarySelectionSourceV1, ()>
}
}
}
impl Dispatch<XdgWmDialogV1, ()> for WaylandClientStatePtr {
fn event(
_: &mut Self,
_: &XdgWmDialogV1,
_: <XdgWmDialogV1 as Proxy>::Event,
_: &(),
_: &Connection,
_: &QueueHandle<Self>,
) {
}
}
impl Dispatch<XdgDialogV1, ()> for WaylandClientStatePtr {
fn event(
_state: &mut Self,
_proxy: &XdgDialogV1,
_event: <XdgDialogV1 as Proxy>::Event,
_data: &(),
_conn: &Connection,
_qhandle: &QueueHandle<Self>,
) {
}
}


@@ -7,7 +7,7 @@ use std::{
};
use blade_graphics as gpu;
use collections::{FxHashSet, HashMap};
use collections::HashMap;
use futures::channel::oneshot::Receiver;
use raw_window_handle as rwh;
@@ -20,7 +20,7 @@ use wayland_protocols::xdg::shell::client::xdg_surface;
use wayland_protocols::xdg::shell::client::xdg_toplevel::{self};
use wayland_protocols::{
wp::fractional_scale::v1::client::wp_fractional_scale_v1,
xdg::dialog::v1::client::xdg_dialog_v1::XdgDialogV1,
xdg::shell::client::xdg_toplevel::XdgToplevel,
};
use wayland_protocols_plasma::blur::client::org_kde_kwin_blur;
use wayland_protocols_wlr::layer_shell::v1::client::zwlr_layer_surface_v1;
@@ -29,7 +29,7 @@ use crate::{
AnyWindowHandle, Bounds, Decorations, Globals, GpuSpecs, Modifiers, Output, Pixels,
PlatformDisplay, PlatformInput, Point, PromptButton, PromptLevel, RequestFrameOptions,
ResizeEdge, Size, Tiling, WaylandClientStatePtr, WindowAppearance, WindowBackgroundAppearance,
WindowBounds, WindowControlArea, WindowControls, WindowDecorations, WindowParams, get_window,
WindowBounds, WindowControlArea, WindowControls, WindowDecorations, WindowParams,
layer_shell::LayerShellNotSupportedError, px, size,
};
use crate::{
@@ -87,8 +87,6 @@ struct InProgressConfigure {
pub struct WaylandWindowState {
surface_state: WaylandSurfaceState,
acknowledged_first_configure: bool,
parent: Option<WaylandWindowStatePtr>,
children: FxHashSet<ObjectId>,
pub surface: wl_surface::WlSurface,
app_id: Option<String>,
appearance: WindowAppearance,
@@ -128,7 +126,7 @@ impl WaylandSurfaceState {
surface: &wl_surface::WlSurface,
globals: &Globals,
params: &WindowParams,
parent: Option<WaylandWindowStatePtr>,
parent: Option<XdgToplevel>,
) -> anyhow::Result<Self> {
// For layer_shell windows, create a layer surface instead of an xdg surface
if let WindowKind::LayerShell(options) = &params.kind {
@@ -180,28 +178,10 @@ impl WaylandSurfaceState {
.get_xdg_surface(&surface, &globals.qh, surface.id());
let toplevel = xdg_surface.get_toplevel(&globals.qh, surface.id());
let xdg_parent = parent.as_ref().and_then(|w| w.toplevel());
if params.kind == WindowKind::Floating || params.kind == WindowKind::Dialog {
toplevel.set_parent(xdg_parent.as_ref());
if params.kind == WindowKind::Floating {
toplevel.set_parent(parent.as_ref());
}
let dialog = if params.kind == WindowKind::Dialog {
let dialog = globals.dialog.as_ref().map(|dialog| {
let xdg_dialog = dialog.get_xdg_dialog(&toplevel, &globals.qh, ());
xdg_dialog.set_modal();
xdg_dialog
});
if let Some(parent) = parent.as_ref() {
parent.add_child(surface.id());
}
dialog
} else {
None
};
if let Some(size) = params.window_min_size {
toplevel.set_min_size(size.width.0 as i32, size.height.0 as i32);
}
@@ -218,7 +198,6 @@ impl WaylandSurfaceState {
xdg_surface,
toplevel,
decoration,
dialog,
}))
}
}
@@ -227,7 +206,6 @@ pub struct WaylandXdgSurfaceState {
xdg_surface: xdg_surface::XdgSurface,
toplevel: xdg_toplevel::XdgToplevel,
decoration: Option<zxdg_toplevel_decoration_v1::ZxdgToplevelDecorationV1>,
dialog: Option<XdgDialogV1>,
}
pub struct WaylandLayerSurfaceState {
@@ -280,13 +258,7 @@ impl WaylandSurfaceState {
xdg_surface,
toplevel,
decoration: _decoration,
dialog,
}) => {
// Drop the dialog before the toplevel so the compositor can explicitly unapply its effects
if let Some(dialog) = dialog {
dialog.destroy();
}
// The role object (toplevel) must always be destroyed before the xdg_surface.
// See https://wayland.app/protocols/xdg-shell#xdg_surface:request:destroy
toplevel.destroy();
@@ -316,7 +288,6 @@ impl WaylandWindowState {
globals: Globals,
gpu_context: &BladeContext,
options: WindowParams,
parent: Option<WaylandWindowStatePtr>,
) -> anyhow::Result<Self> {
let renderer = {
let raw_window = RawWindow {
@@ -348,8 +319,6 @@ impl WaylandWindowState {
Ok(Self {
surface_state,
acknowledged_first_configure: false,
parent,
children: FxHashSet::default(),
surface,
app_id: None,
blur: None,
@@ -422,10 +391,6 @@ impl Drop for WaylandWindow {
fn drop(&mut self) {
let mut state = self.0.state.borrow_mut();
let surface_id = state.surface.id();
if let Some(parent) = state.parent.as_ref() {
parent.state.borrow_mut().children.remove(&surface_id);
}
let client = state.client.clone();
state.renderer.destroy();
@@ -483,10 +448,10 @@ impl WaylandWindow {
client: WaylandClientStatePtr,
params: WindowParams,
appearance: WindowAppearance,
parent: Option<WaylandWindowStatePtr>,
parent: Option<XdgToplevel>,
) -> anyhow::Result<(Self, ObjectId)> {
let surface = globals.compositor.create_surface(&globals.qh, ());
let surface_state = WaylandSurfaceState::new(&surface, &globals, &params, parent.clone())?;
let surface_state = WaylandSurfaceState::new(&surface, &globals, &params, parent)?;
if let Some(fractional_scale_manager) = globals.fractional_scale_manager.as_ref() {
fractional_scale_manager.get_fractional_scale(&surface, &globals.qh, surface.id());
@@ -508,7 +473,6 @@ impl WaylandWindow {
globals,
gpu_context,
params,
parent,
)?)),
callbacks: Rc::new(RefCell::new(Callbacks::default())),
});
@@ -537,16 +501,6 @@ impl WaylandWindowStatePtr {
Rc::ptr_eq(&self.state, &other.state)
}
pub fn add_child(&self, child: ObjectId) {
let mut state = self.state.borrow_mut();
state.children.insert(child);
}
pub fn is_blocked(&self) -> bool {
let state = self.state.borrow();
!state.children.is_empty()
}
pub fn frame(&self) {
let mut state = self.state.borrow_mut();
state.surface.frame(&state.globals.qh, state.surface.id());
@@ -864,9 +818,6 @@ impl WaylandWindowStatePtr {
}
pub fn handle_ime(&self, ime: ImeInput) {
if self.is_blocked() {
return;
}
let mut state = self.state.borrow_mut();
if let Some(mut input_handler) = state.input_handler.take() {
drop(state);
@@ -943,21 +894,6 @@ impl WaylandWindowStatePtr {
}
pub fn close(&self) {
let state = self.state.borrow();
let client = state.client.get_client();
#[allow(clippy::mutable_key_type)]
let children = state.children.clone();
drop(state);
for child in children {
let mut client_state = client.borrow_mut();
let window = get_window(&mut client_state, &child);
drop(client_state);
if let Some(child) = window {
child.close();
}
}
let mut callbacks = self.callbacks.borrow_mut();
if let Some(fun) = callbacks.close.take() {
fun()
@@ -965,9 +901,6 @@ impl WaylandWindowStatePtr {
}
pub fn handle_input(&self, input: PlatformInput) {
if self.is_blocked() {
return;
}
if let Some(ref mut fun) = self.callbacks.borrow_mut().input
&& !fun(input.clone()).propagate
{


@@ -222,7 +222,7 @@ pub struct X11ClientState {
pub struct X11ClientStatePtr(pub Weak<RefCell<X11ClientState>>);
impl X11ClientStatePtr {
pub fn get_client(&self) -> Option<X11Client> {
fn get_client(&self) -> Option<X11Client> {
self.0.upgrade().map(X11Client)
}
@@ -752,7 +752,7 @@ impl X11Client {
}
}
pub(crate) fn get_window(&self, win: xproto::Window) -> Option<X11WindowStatePtr> {
fn get_window(&self, win: xproto::Window) -> Option<X11WindowStatePtr> {
let state = self.0.borrow();
state
.windows
@@ -789,12 +789,12 @@ impl X11Client {
let [atom, arg1, arg2, arg3, arg4] = event.data.as_data32();
let mut state = self.0.borrow_mut();
if atom == state.atoms.WM_DELETE_WINDOW && window.should_close() {
if atom == state.atoms.WM_DELETE_WINDOW {
// window "x" button clicked by user
// Rest of the close logic is handled in drop_window()
drop(state);
window.close();
state = self.0.borrow_mut();
if window.should_close() {
// Rest of the close logic is handled in drop_window()
window.close();
}
} else if atom == state.atoms._NET_WM_SYNC_REQUEST {
window.state.borrow_mut().last_sync_counter =
Some(x11rb::protocol::sync::Int64 {
@@ -1216,33 +1216,6 @@ impl X11Client {
Event::XinputMotion(event) => {
let window = self.get_window(event.event)?;
let mut state = self.0.borrow_mut();
if window.is_blocked() {
// We want to set the cursor to the default arrow
// when the window is blocked
let style = CursorStyle::Arrow;
let current_style = state
.cursor_styles
.get(&window.x_window)
.unwrap_or(&CursorStyle::Arrow);
if *current_style != style
&& let Some(cursor) = state.get_cursor_icon(style)
{
state.cursor_styles.insert(window.x_window, style);
check_reply(
|| "Failed to set cursor style",
state.xcb_connection.change_window_attributes(
window.x_window,
&ChangeWindowAttributesAux {
cursor: Some(cursor),
..Default::default()
},
),
)
.log_err();
state.xcb_connection.flush().log_err();
};
}
let pressed_button = pressed_button_from_mask(event.button_mask[0]);
let position = point(
px(event.event_x as f32 / u16::MAX as f32 / state.scale_factor),
@@ -1516,7 +1489,7 @@ impl LinuxClient for X11Client {
let parent_window = state
.keyboard_focused_window
.and_then(|focused_window| state.windows.get(&focused_window))
.map(|w| w.window.clone());
.map(|window| window.window.x_window);
let x_window = state
.xcb_connection
.generate_id()
@@ -1571,15 +1544,7 @@ impl LinuxClient for X11Client {
.cursor_styles
.get(&focused_window)
.unwrap_or(&CursorStyle::Arrow);
let window = state
.mouse_focused_window
.and_then(|w| state.windows.get(&w));
let should_change = *current_style != style
&& (window.is_none() || window.is_some_and(|w| !w.is_blocked()));
if !should_change {
if *current_style == style {
return;
}

View File

@@ -11,7 +11,6 @@ use crate::{
};
use blade_graphics as gpu;
use collections::FxHashSet;
use raw_window_handle as rwh;
use util::{ResultExt, maybe};
use x11rb::{
@@ -75,7 +74,6 @@ x11rb::atom_manager! {
_NET_WM_WINDOW_TYPE,
_NET_WM_WINDOW_TYPE_NOTIFICATION,
_NET_WM_WINDOW_TYPE_DIALOG,
_NET_WM_STATE_MODAL,
_NET_WM_SYNC,
_NET_SUPPORTED,
_MOTIF_WM_HINTS,
@@ -251,8 +249,6 @@ pub struct Callbacks {
pub struct X11WindowState {
pub destroyed: bool,
parent: Option<X11WindowStatePtr>,
children: FxHashSet<xproto::Window>,
client: X11ClientStatePtr,
executor: ForegroundExecutor,
atoms: XcbAtoms,
@@ -398,7 +394,7 @@ impl X11WindowState {
atoms: &XcbAtoms,
scale_factor: f32,
appearance: WindowAppearance,
parent_window: Option<X11WindowStatePtr>,
parent_window: Option<xproto::Window>,
) -> anyhow::Result<Self> {
let x_screen_index = params
.display_id
@@ -550,8 +546,8 @@ impl X11WindowState {
)?;
}
if params.kind == WindowKind::Floating || params.kind == WindowKind::Dialog {
if let Some(parent_window) = parent_window.as_ref().map(|w| w.x_window) {
if params.kind == WindowKind::Floating {
if let Some(parent_window) = parent_window {
// WM_TRANSIENT_FOR hint indicating the main application window. For floating windows, we set
// a parent window (WM_TRANSIENT_FOR) such that the window manager knows where to
// place the floating window in relation to the main window.
@@ -567,23 +563,11 @@ impl X11WindowState {
),
)?;
}
}
let parent = if params.kind == WindowKind::Dialog
&& let Some(parent) = parent_window
{
parent.add_child(x_window);
Some(parent)
} else {
None
};
if params.kind == WindowKind::Dialog {
// _NET_WM_WINDOW_TYPE_DIALOG indicates that this is a dialog (floating) window
// https://specifications.freedesktop.org/wm-spec/1.4/ar01s05.html
check_reply(
|| "X11 ChangeProperty32 setting window type for dialog window failed.",
|| "X11 ChangeProperty32 setting window type for floating window failed.",
xcb.change_property32(
xproto::PropMode::REPLACE,
x_window,
@@ -592,20 +576,6 @@ impl X11WindowState {
&[atoms._NET_WM_WINDOW_TYPE_DIALOG],
),
)?;
// We set the modal state for dialog windows, so that the window manager
// can handle it appropriately (e.g., prevent interaction with the parent window
// while the dialog is open).
check_reply(
|| "X11 ChangeProperty32 setting modal state for dialog window failed.",
xcb.change_property32(
xproto::PropMode::REPLACE,
x_window,
atoms._NET_WM_STATE,
xproto::AtomEnum::ATOM,
&[atoms._NET_WM_STATE_MODAL],
),
)?;
}
check_reply(
@@ -697,8 +667,6 @@ impl X11WindowState {
let display = Rc::new(X11Display::new(xcb, scale_factor, x_screen_index)?);
Ok(Self {
parent,
children: FxHashSet::default(),
client,
executor,
display,
@@ -752,11 +720,6 @@ pub(crate) struct X11Window(pub X11WindowStatePtr);
impl Drop for X11Window {
fn drop(&mut self) {
let mut state = self.0.state.borrow_mut();
if let Some(parent) = state.parent.as_ref() {
parent.state.borrow_mut().children.remove(&self.0.x_window);
}
state.renderer.destroy();
let destroy_x_window = maybe!({
@@ -771,6 +734,8 @@ impl Drop for X11Window {
.log_err();
if destroy_x_window.is_some() {
// Mark window as destroyed so that we can filter out when X11 events
// for it still come in.
state.destroyed = true;
let this_ptr = self.0.clone();
@@ -808,7 +773,7 @@ impl X11Window {
atoms: &XcbAtoms,
scale_factor: f32,
appearance: WindowAppearance,
parent_window: Option<X11WindowStatePtr>,
parent_window: Option<xproto::Window>,
) -> anyhow::Result<Self> {
let ptr = X11WindowStatePtr {
state: Rc::new(RefCell::new(X11WindowState::new(
@@ -1014,31 +979,7 @@ impl X11WindowStatePtr {
Ok(())
}
pub fn add_child(&self, child: xproto::Window) {
let mut state = self.state.borrow_mut();
state.children.insert(child);
}
pub fn is_blocked(&self) -> bool {
let state = self.state.borrow();
!state.children.is_empty()
}
pub fn close(&self) {
let state = self.state.borrow();
let client = state.client.clone();
#[allow(clippy::mutable_key_type)]
let children = state.children.clone();
drop(state);
if let Some(client) = client.get_client() {
for child in children {
if let Some(child_window) = client.get_window(child) {
child_window.close();
}
}
}
let mut callbacks = self.callbacks.borrow_mut();
if let Some(fun) = callbacks.close.take() {
fun()
@@ -1053,9 +994,6 @@ impl X11WindowStatePtr {
}
pub fn handle_input(&self, input: PlatformInput) {
if self.is_blocked() {
return;
}
if let Some(ref mut fun) = self.callbacks.borrow_mut().input
&& !fun(input.clone()).propagate
{
@@ -1078,9 +1016,6 @@ impl X11WindowStatePtr {
}
pub fn handle_ime_commit(&self, text: String) {
if self.is_blocked() {
return;
}
let mut state = self.state.borrow_mut();
if let Some(mut input_handler) = state.input_handler.take() {
drop(state);
@@ -1091,9 +1026,6 @@ impl X11WindowStatePtr {
}
pub fn handle_ime_preedit(&self, text: String) {
if self.is_blocked() {
return;
}
let mut state = self.state.borrow_mut();
if let Some(mut input_handler) = state.input_handler.take() {
drop(state);
@@ -1104,9 +1036,6 @@ impl X11WindowStatePtr {
}
pub fn handle_ime_unmark(&self) {
if self.is_blocked() {
return;
}
let mut state = self.state.borrow_mut();
if let Some(mut input_handler) = state.input_handler.take() {
drop(state);
@@ -1117,9 +1046,6 @@ impl X11WindowStatePtr {
}
pub fn handle_ime_delete(&self) {
if self.is_blocked() {
return;
}
let mut state = self.state.borrow_mut();
if let Some(mut input_handler) = state.input_handler.take() {
drop(state);


@@ -5,7 +5,6 @@ mod display;
mod display_link;
mod events;
mod keyboard;
mod pasteboard;
#[cfg(feature = "screen-capture")]
mod screen_capture;
@@ -22,6 +21,8 @@ use metal_renderer as renderer;
#[cfg(feature = "macos-blade")]
use crate::platform::blade as renderer;
mod attributed_string;
#[cfg(feature = "font-kit")]
mod open_type;

Some files were not shown because too many files have changed in this diff.