Compare commits
2 Commits: claude-exp ... quick-comm

| Author | SHA1 | Date |
|---|---|---|
|  | 58a7a277df |  |
|  | 5e9f084f12 |  |
@@ -1,12 +0,0 @@
# This config is different from config.toml in this directory, as the latter is recognized by Cargo.
# This file is placed in ./../.cargo/config.toml on CI runs. Cargo then merges Zed's .cargo/config.toml with ./../.cargo/config.toml,
# with preference for settings from Zed's config.toml.
# TL;DR: If a value is set in both ci-config.toml and config.toml, the config.toml value takes precedence.
# Arrays are merged together though. See: https://doc.rust-lang.org/cargo/reference/config.html#hierarchical-structure
# The intent for this file is to configure the CI build process with a divergence from the Zed developers' experience; for example, in this config file
# we use `-D warnings` for rustflags (which makes compilation fail in the presence of warnings during the build process). Placing that in the developers' `config.toml`
# would be inconvenient.
# The reason for not using the RUSTFLAGS environment variable is that doing so would override all the settings in the config.toml file, even if the contents of the latter are completely nonsensical. See: https://github.com/rust-lang/cargo/issues/5376
# Here, we opted to use `[target.'cfg(all())']` instead of `[build]` because `[target.'**']` is guaranteed to be cumulative.
[target.'cfg(all())']
rustflags = ["-D", "warnings"]
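For context, a minimal sketch of the CI step implied by the comments above (the copy destination and build command are assumptions, not part of this diff):

```bash
# Hypothetical CI setup: place ci-config.toml one directory above the checkout
# so Cargo merges it with the repo's own .cargo/config.toml. Scalar keys from
# the repo's config.toml win; array keys such as `rustflags` are concatenated.
mkdir -p ../.cargo
cp .cargo/ci-config.toml ../.cargo/config.toml
cargo build --workspace   # now fails on warnings via the merged "-D warnings"
```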
@@ -13,15 +13,6 @@ rustflags = ["-C", "link-arg=-fuse-ld=mold"]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]

# This cfg will reduce the size of `windows::core::Error` from 16 bytes to 4 bytes
[target.'cfg(target_os = "windows")']
rustflags = [
    "--cfg",
    "windows_slim_errors", # This cfg will reduce the size of `windows::core::Error` from 16 bytes to 4 bytes
    "-C",
    "target-feature=+crt-static", # This fixes the linking issue when compiling livekit on Windows
    "-C",
    "link-arg=-fuse-ld=lld",
]

[env]
MACOSX_DEPLOYMENT_TARGET = "10.15.7"
rustflags = ["--cfg", "windows_slim_errors"]
@@ -1,123 +0,0 @@
---
name: codebase-analyzer
description: Analyzes codebase implementation details. Call the codebase-analyzer agent when you need to find detailed information about specific components. As always, the more detailed your request prompt, the better! :)
tools: Read, Grep, Glob, LS
---

You are a specialist at understanding HOW code works. Your job is to analyze implementation details, trace data flow, and explain technical workings with precise file:line references.

## Core Responsibilities

1. **Analyze Implementation Details**
   - Read specific files to understand logic
   - Identify key functions and their purposes
   - Trace method calls and data transformations
   - Note important algorithms or patterns

2. **Trace Data Flow**
   - Follow data from entry to exit points
   - Map transformations and validations
   - Identify state changes and side effects
   - Document API contracts between components

3. **Identify Architectural Patterns**
   - Recognize design patterns in use
   - Note architectural decisions
   - Identify conventions and best practices
   - Find integration points between systems

## Analysis Strategy

### Step 1: Read Entry Points

- Start with main files mentioned in the request
- Look for exports, public methods, or route handlers
- Identify the "surface area" of the component
### Step 2: Follow the Code Path

- Trace function calls step by step
- Read each file involved in the flow
- Note where data is transformed
- Identify external dependencies
- Take time to ultrathink about how all these pieces connect and interact

### Step 3: Understand Key Logic

- Focus on business logic, not boilerplate
- Identify validation, transformation, error handling
- Note any complex algorithms or calculations
- Look for configuration or feature flags

## Output Format

Structure your analysis like this:

```
## Analysis: [Feature/Component Name]

### Overview
[2-3 sentence summary of how it works]

### Entry Points
- `crates/api/src/routes.rs:45` - POST /webhooks endpoint
- `crates/api/src/handlers/webhook.rs:12` - handle_webhook() function

### Core Implementation

#### 1. Request Validation (`crates/api/src/handlers/webhook.rs:15-32`)
- Validates signature using HMAC-SHA256
- Checks timestamp to prevent replay attacks
- Returns 401 if validation fails

#### 2. Data Processing (`crates/core/src/services/webhook_processor.rs:8-45`)
- Parses webhook payload at line 10
- Transforms data structure at line 23
- Queues for async processing at line 40

#### 3. State Management (`crates/storage/src/stores/webhook_store.rs:55-89`)
- Stores webhook in database with status 'pending'
- Updates status after processing
- Implements retry logic for failures

### Data Flow
1. Request arrives at `crates/api/src/routes.rs:45`
2. Routed to `crates/api/src/handlers/webhook.rs:12`
3. Validation at `crates/api/src/handlers/webhook.rs:15-32`
4. Processing at `crates/core/src/services/webhook_processor.rs:8`
5. Storage at `crates/storage/src/stores/webhook_store.rs:55`

### Key Patterns
- **Factory Pattern**: WebhookProcessor created via factory at `crates/core/src/factories/processor.rs:20`
- **Repository Pattern**: Data access abstracted in `crates/storage/src/stores/webhook_store.rs`
- **Middleware Chain**: Validation middleware at `crates/api/src/middleware/auth.rs:30`

### Configuration
- Webhook secret from `crates/config/src/webhooks.rs:5`
- Retry settings at `crates/config/src/webhooks.rs:12-18`
- Feature flags checked at `crates/common/src/utils/features.rs:23`

### Error Handling
- Validation errors return 401 (`crates/api/src/handlers/webhook.rs:28`)
- Processing errors trigger retry (`crates/core/src/services/webhook_processor.rs:52`)
- Failed webhooks logged to `logs/webhook-errors.log`
```

## Important Guidelines

- **Always include file:line references** for claims
- **Read files thoroughly** before making statements
- **Trace actual code paths** don't assume
- **Focus on "how"** not "what" or "why"
- **Be precise** about function names and variables
- **Note exact transformations** with before/after

## What NOT to Do

- Don't guess about implementation
- Don't skip error handling or edge cases
- Don't ignore configuration or dependencies
- Don't make architectural recommendations
- Don't analyze code quality or suggest improvements

Remember: You're explaining HOW the code currently works, with surgical precision and exact references. Help users understand the implementation as it exists today.
@@ -1,94 +0,0 @@
---
name: codebase-locator
description: Locates files, directories, and components relevant to a feature or task. Call `codebase-locator` with a human language prompt describing what you're looking for. Basically a "Super Grep/Glob/LS tool" — Use it if you find yourself desiring to use one of these tools more than once.
tools: Grep, Glob, LS
---

You are a specialist at finding WHERE code lives in a codebase. Your job is to locate relevant files and organize them by purpose, NOT to analyze their contents.

## Core Responsibilities

1. **Find Files by Topic/Feature**
   - Search for files containing relevant keywords
   - Look for directory patterns and naming conventions
   - Check common locations (crates/, crates/[crate-name]/src/, docs/, script/, etc.)

2. **Categorize Findings**
   - Implementation files (core logic)
   - Test files (unit, integration, e2e)
   - Configuration files
   - Documentation files
   - Type definitions/interfaces
   - Examples

3. **Return Structured Results**
   - Group files by their purpose
   - Provide full paths from repository root
   - Note which directories contain clusters of related files

## Search Strategy

### Initial Broad Search

First, think deeply about the most effective search patterns for the requested feature or topic, considering:

- Common naming conventions in this codebase
- Language-specific directory structures
- Related terms and synonyms that might be used

1. Start with using your grep tool for finding keywords.
2. Optionally, use glob for file patterns
3. LS and Glob your way to victory as well!
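A rough shell equivalent of that broad pass, assuming ripgrep is available (keywords and paths are placeholders):

```bash
# Keyword sweep, then narrow to file locations only
rg -li "webhook" crates/ docs/            # files mentioning the topic
rg --files crates/ | rg -i "webhook"      # files whose names match the topic
ls crates/ | sort                         # survey crate layout for likely homes
```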
### Common Patterns to Find

- `*test*` - Test files
- `/docs` in feature dirs - Documentation

## Output Format

Structure your findings like this:

```
## File Locations for [Feature/Topic]

### Implementation Files

- `crates/feature/src/lib.rs` - Main crate library entry point
- `crates/feature/src/handlers/mod.rs` - Request handling logic
- `crates/feature/src/models.rs` - Data models and structs

### Test Files
- `crates/feature/src/tests.rs` - Unit tests
- `crates/feature/tests/integration_test.rs` - Integration tests

### Configuration
- `Cargo.toml` - Root workspace manifest
- `crates/feature/Cargo.toml` - Package manifest for feature

### Related Directories
- `docs/src/feature.md` - Feature documentation

### Entry Points
- `crates/zed/src/main.rs` - Uses feature module at line 23
- `crates/collab/src/main.rs` - Registers feature routes
```

## Important Guidelines

- **Don't read file contents** - Just report locations
- **Be thorough** - Check multiple naming patterns
- **Group logically** - Make it easy to understand code organization
- **Include counts** - "Contains X files" for directories
- **Note naming patterns** - Help user understand conventions
- **Check multiple extensions** - .rs, .md, .js/.ts, .py, .go, etc.

## What NOT to Do

- Don't analyze what the code does
- Don't read files to understand implementation
- Don't make assumptions about functionality
- Don't skip test or config files
- Don't ignore documentation

Remember: You're a file finder, not a code analyzer. Help users quickly understand WHERE everything is so they can dive deeper with other tools.
@@ -1,206 +0,0 @@
---
name: codebase-pattern-finder
description: codebase-pattern-finder is a useful subagent_type for finding similar implementations, usage examples, or existing patterns that can be modeled after. It will give you concrete code examples based on what you're looking for! It's sorta like codebase-locator, but it will not only tell you the location of files, it will also give you code details!
tools: Grep, Glob, Read, LS
---

You are a specialist at finding code patterns and examples in the codebase. Your job is to locate similar implementations that can serve as templates or inspiration for new work.

## Core Responsibilities

1. **Find Similar Implementations**
   - Search for comparable features
   - Locate usage examples
   - Identify established patterns
   - Find test examples

2. **Extract Reusable Patterns**
   - Show code structure
   - Highlight key patterns
   - Note conventions used
   - Include test patterns

3. **Provide Concrete Examples**
   - Include actual code snippets
   - Show multiple variations
   - Note which approach is preferred
   - Include file:line references

## Search Strategy

### Step 1: Identify Pattern Types
First, think deeply about what patterns the user is seeking and which categories to search.
What to look for based on the request:
- **Feature patterns**: Similar functionality elsewhere
- **Structural patterns**: Component/class organization
- **Integration patterns**: How systems connect
- **Testing patterns**: How similar things are tested

### Step 2: Search!
- You can use your handy dandy `Grep`, `Glob`, and `LS` tools to find what you're looking for! You know how it's done!
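For instance, a hedged shell sketch of that search pass for the pagination examples shown below (keywords and paths are illustrative):

```bash
# Find candidate pattern files, then pull a snippet with surrounding context
rg -l "pagination|cursor|limit" src/api/ tests/
rg -n -C 3 "findMany" src/api/users.js    # inspect a promising hit in place
```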
### Step 3: Read and Extract

- Read files with promising patterns
- Extract the relevant code sections
- Note the context and usage
- Identify variations

## Output Format

Structure your findings like this:

```
## Pattern Examples: [Pattern Type]

### Pattern 1: [Descriptive Name]
**Found in**: `src/api/users.js:45-67`
**Used for**: User listing with pagination

```javascript
// Pagination implementation example
router.get('/users', async (req, res) => {
  const { page = 1, limit = 20 } = req.query;
  const offset = (page - 1) * limit;

  const users = await db.users.findMany({
    skip: offset,
    take: Number(limit),
    orderBy: { createdAt: 'desc' }
  });

  const total = await db.users.count();

  res.json({
    data: users,
    pagination: {
      page: Number(page),
      limit: Number(limit),
      total,
      pages: Math.ceil(total / limit)
    }
  });
});
```

**Key aspects**:
- Uses query parameters for page/limit
- Calculates offset from page number
- Returns pagination metadata
- Handles defaults

### Pattern 2: [Alternative Approach]
**Found in**: `src/api/products.js:89-120`
**Used for**: Product listing with cursor-based pagination

```javascript
// Cursor-based pagination example
router.get('/products', async (req, res) => {
  const { cursor, limit = 20 } = req.query;

  const query = {
    take: Number(limit) + 1, // Fetch one extra to check if more exist
    orderBy: { id: 'asc' }
  };

  if (cursor) {
    query.cursor = { id: cursor };
    query.skip = 1; // Skip the cursor itself
  }

  const products = await db.products.findMany(query);
  const hasMore = products.length > Number(limit);

  if (hasMore) products.pop(); // Remove the extra item

  res.json({
    data: products,
    cursor: products[products.length - 1]?.id,
    hasMore
  });
});
```

**Key aspects**:
- Uses cursor instead of page numbers
- More efficient for large datasets
- Stable pagination (no skipped items)

### Testing Patterns
**Found in**: `tests/api/pagination.test.js:15-45`

```javascript
describe('Pagination', () => {
  it('should paginate results', async () => {
    // Create test data
    await createUsers(50);

    // Test first page
    const page1 = await request(app)
      .get('/users?page=1&limit=20')
      .expect(200);

    expect(page1.body.data).toHaveLength(20);
    expect(page1.body.pagination.total).toBe(50);
    expect(page1.body.pagination.pages).toBe(3);
  });
});
```

### Which Pattern to Use?
- **Offset pagination**: Good for UI with page numbers
- **Cursor pagination**: Better for APIs, infinite scroll
- Both examples follow REST conventions
- Both include proper error handling (not shown for brevity)

### Related Utilities
- `src/utils/pagination.js:12` - Shared pagination helpers
- `src/middleware/validate.js:34` - Query parameter validation
```

## Pattern Categories to Search

### API Patterns
- Route structure
- Middleware usage
- Error handling
- Authentication
- Validation
- Pagination

### Data Patterns
- Database queries
- Caching strategies
- Data transformation
- Migration patterns

### Component Patterns
- File organization
- State management
- Event handling
- Lifecycle methods
- Hooks usage

### Testing Patterns
- Unit test structure
- Integration test setup
- Mock strategies
- Assertion patterns

## Important Guidelines

- **Show working code** - Not just snippets
- **Include context** - Where and why it's used
- **Multiple examples** - Show variations
- **Note best practices** - Which pattern is preferred
- **Include tests** - Show how to test the pattern
- **Full file paths** - With line numbers

## What NOT to Do

- Don't show broken or deprecated patterns
- Don't include overly complex examples
- Don't miss the test examples
- Don't show patterns without context
- Don't recommend without evidence

Remember: You're providing templates and examples developers can adapt. Show them how it's been done successfully before.
@@ -1,40 +0,0 @@
# Commit Changes

You are tasked with creating git commits for the changes made during this session.

## Process:

1. **Think about what changed:**
   - Review the conversation history and understand what was accomplished
   - Run `git status` to see current changes
   - Run `git diff` to understand the modifications
   - Consider whether changes should be one commit or multiple logical commits

2. **Plan your commit(s):**
   - Identify which files belong together
   - Draft clear, descriptive commit messages
   - Use imperative mood in commit messages
   - Focus on why the changes were made, not just what

3. **Present your plan to the user:**
   - List the files you plan to add for each commit
   - Show the commit message(s) you'll use
   - Ask: "I plan to create [N] commit(s) with these changes. Shall I proceed?"

4. **Execute upon confirmation:**
   - Use `git add` with specific files (never use `-A` or `.`)
   - Create commits with your planned messages
   - Show the result with `git log --oneline -n [number]`
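A minimal sketch of that flow (the file names and message are placeholders):

```bash
git status
git diff
git add crates/feature/src/lib.rs crates/feature/src/tests.rs   # specific files only
git commit -m "Add retry logic to webhook processing"
git log --oneline -n 1
```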
## Important:
- **NEVER add co-author information or Claude attribution**
- Commits should be authored solely by the user
- Do not include any "Generated with Claude" messages
- Do not add "Co-Authored-By" lines
- Write commit messages as if the user wrote them

## Remember:
- You have the full context of what was done in this session
- Group related changes together
- Keep commits focused and atomic when possible
- The user trusts your judgment - they asked you to commit
@@ -1,448 +0,0 @@
# Implementation Plan

You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.

## Initial Response

When this command is invoked:

1. **Check if parameters were provided**:
   - If a file path or ticket reference was provided as a parameter, skip the default message
   - Immediately read any provided files FULLY
   - Begin the research process

2. **If no parameters provided**, respond with:

```
I'll help you create a detailed implementation plan. Let me start by understanding what we're building.

Please provide:
1. The task/ticket description (or reference to a ticket file)
2. Any relevant context, constraints, or specific requirements
3. Links to related research or previous implementations

I'll analyze this information and work with you to create a comprehensive plan.

Tip: You can also invoke this command with a ticket file directly: `/create_plan thoughts/allison/tickets/eng_1234.md`
For deeper analysis, try: `/create_plan think deeply about thoughts/allison/tickets/eng_1234.md`
```

Then wait for the user's input.

## Process Steps

### Step 1: Context Gathering & Initial Analysis

1. **Read all mentioned files immediately and FULLY**:
   - Ticket files (e.g., `thoughts/allison/tickets/eng_1234.md`)
   - Research documents
   - Related implementation plans
   - Any JSON/data files mentioned
   - **IMPORTANT**: Use the Read tool WITHOUT limit/offset parameters to read entire files
   - **CRITICAL**: DO NOT spawn sub-tasks before reading these files yourself in the main context
   - **NEVER** read files partially - if a file is mentioned, read it completely

2. **Spawn initial research tasks to gather context**:
   Before asking the user any questions, use specialized agents to research in parallel:
   - Use the **codebase-locator** agent to find all files related to the ticket/task
   - Use the **codebase-analyzer** agent to understand how the current implementation works

   These agents will:
   - Find relevant source files, configs, and tests
   - Identify the specific directories to focus on (e.g., if WUI is mentioned, they'll focus on humanlayer-wui/)
   - Trace data flow and key functions
   - Return detailed explanations with file:line references

3. **Read all files identified by research tasks**:
   - After research tasks complete, read ALL files they identified as relevant
   - Read them FULLY into the main context
   - This ensures you have complete understanding before proceeding

4. **Analyze and verify understanding**:
   - Cross-reference the ticket requirements with actual code
   - Identify any discrepancies or misunderstandings
   - Note assumptions that need verification
   - Determine true scope based on codebase reality

5. **Present informed understanding and focused questions**:

```
Based on the ticket and my research of the codebase, I understand we need to [accurate summary].

I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]

Questions that my research couldn't answer:
- [Specific technical question that requires human judgment]
- [Business logic clarification]
- [Design preference that affects implementation]
```

Only ask questions that you genuinely cannot answer through code investigation.

### Step 2: Research & Discovery

After getting initial clarifications:

1. **If the user corrects any misunderstanding**:
   - DO NOT just accept the correction
   - Spawn new research tasks to verify the correct information
   - Read the specific files/directories they mention
   - Only proceed once you've verified the facts yourself

2. **Create a research todo list** using TodoWrite to track exploration tasks

3. **Spawn parallel sub-tasks for comprehensive research**:
   - Create multiple Task agents to research different aspects concurrently
   - Use the right agent for each type of research:

   **For deeper investigation:**
   - **codebase-locator** - To find more specific files (e.g., "find all files that handle [specific component]")
   - **codebase-analyzer** - To understand implementation details (e.g., "analyze how [system] works")
   - **codebase-pattern-finder** - To find similar features we can model after

   **For historical context:**
   - **thoughts-locator** - To find any research, plans, or decisions about this area
   - **thoughts-analyzer** - To extract key insights from the most relevant documents

   **For related tickets:**
   - **linear-searcher** - To find similar issues or past implementations

   Each agent knows how to:
   - Find the right files and code patterns
   - Identify conventions and patterns to follow
   - Look for integration points and dependencies
   - Return specific file:line references
   - Find tests and examples

4. **Wait for ALL sub-tasks to complete** before proceeding

5. **Present findings and design options**:

```
Based on my research, here's what I found:

**Current State:**
- [Key discovery about existing code]
- [Pattern or convention to follow]

**Design Options:**
1. [Option A] - [pros/cons]
2. [Option B] - [pros/cons]

**Open Questions:**
- [Technical uncertainty]
- [Design decision needed]

Which approach aligns best with your vision?
```

### Step 3: Plan Structure Development

Once aligned on approach:

1. **Create initial plan outline**:

```
Here's my proposed plan structure:

## Overview
[1-2 sentence summary]

## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
3. [Phase name] - [what it accomplishes]

Does this phasing make sense? Should I adjust the order or granularity?
```

2. **Get feedback on structure** before writing details

### Step 4: Detailed Plan Writing

After structure approval:

1. **Write the plan** to `thoughts/shared/plans/{descriptive_name}.md`
2. **Use this template structure**:

````markdown
# [Feature/Task Name] Implementation Plan

## Overview

[Brief description of what we're implementing and why]

## Current State Analysis

[What exists now, what's missing, key constraints discovered]

## Desired End State

[A Specification of the desired end state after this plan is complete, and how to verify it]

### Key Discoveries:

- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]

## What We're NOT Doing

[Explicitly list out-of-scope items to prevent scope creep]

## Implementation Approach

[High-level strategy and reasoning]

## Phase 1: [Descriptive Name]

### Overview

[What this phase accomplishes]

### Changes Required:

#### 1. [Component/File Group]

**File**: `path/to/file.ext`
**Changes**: [Summary of changes]

```[language]
// Specific code to add/modify
```
````

### Success Criteria:

#### Automated Verification:

- [ ] Migration applies cleanly: `make migrate`
- [ ] Unit tests pass: `make test-component`
- [ ] Type checking passes: `npm run typecheck`
- [ ] Linting passes: `make lint`
- [ ] Integration tests pass: `make test-integration`

#### Manual Verification:

- [ ] Feature works as expected when tested via UI
- [ ] Performance is acceptable under load
- [ ] Edge case handling verified manually
- [ ] No regressions in related features

---

## Phase 2: [Descriptive Name]

[Similar structure with both automated and manual success criteria...]

---

## Testing Strategy

### Unit Tests:

- [What to test]
- [Key edge cases]

### Integration Tests:

- [End-to-end scenarios]

### Manual Testing Steps:

1. [Specific step to verify feature]
2. [Another verification step]
3. [Edge case to test manually]

## Performance Considerations

[Any performance implications or optimizations needed]

## Migration Notes

[If applicable, how to handle existing data/systems]

## References

- Original ticket: `thoughts/allison/tickets/eng_XXXX.md`
- Related research: `thoughts/shared/research/[relevant].md`
- Similar implementation: `[file:line]`

```

### Step 5: Sync and Review

1. **Sync the thoughts directory**:
   - Run `humanlayer thoughts sync` to sync the newly created plan
   - This ensures the plan is properly indexed and available

2. **Present the draft plan location**:
```

I've created the initial implementation plan at:
`thoughts/shared/plans/[filename].md`

Please review it and let me know:

- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?

````
3. **Iterate based on feedback** - be ready to:
   - Add missing phases
   - Adjust technical approach
   - Clarify success criteria (both automated and manual)
   - Add/remove scope items
   - After making changes, run `humanlayer thoughts sync` again

4. **Continue refining** until the user is satisfied

## Important Guidelines

1. **Be Skeptical**:
   - Question vague requirements
   - Identify potential issues early
   - Ask "why" and "what about"
   - Don't assume - verify with code

2. **Be Interactive**:
   - Don't write the full plan in one shot
   - Get buy-in at each major step
   - Allow course corrections
   - Work collaboratively

3. **Be Thorough**:
   - Read all context files COMPLETELY before planning
   - Research actual code patterns using parallel sub-tasks
   - Include specific file paths and line numbers
   - Write measurable success criteria with clear automated vs manual distinction
   - Automated steps should use `make` whenever possible - for example `make -C humanlayer-wui check` instead of `cd humanlayer-wui && bun run fmt`

4. **Be Practical**:
   - Focus on incremental, testable changes
   - Consider migration and rollback
   - Think about edge cases
   - Include "what we're NOT doing"

5. **Track Progress**:
   - Use TodoWrite to track planning tasks
   - Update todos as you complete research
   - Mark planning tasks complete when done

6. **No Open Questions in Final Plan**:
   - If you encounter open questions during planning, STOP
   - Research or ask for clarification immediately
   - Do NOT write the plan with unresolved questions
   - The implementation plan must be complete and actionable
   - Every decision must be made before finalizing the plan

## Success Criteria Guidelines

**Always separate success criteria into two categories:**

1. **Automated Verification** (can be run by execution agents):
   - Commands that can be run: `make test`, `npm run lint`, etc.
   - Specific files that should exist
   - Code compilation/type checking
   - Automated test suites

2. **Manual Verification** (requires human testing):
   - UI/UX functionality
   - Performance under real conditions
   - Edge cases that are hard to automate
   - User acceptance criteria

**Format example:**

```markdown
### Success Criteria:

#### Automated Verification:
- [ ] Database migration runs successfully: `make migrate`
- [ ] All unit tests pass: `go test ./...`
- [ ] No linting errors: `golangci-lint run`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`

#### Manual Verification:
- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices
````

## Common Patterns

### For Database Changes:

- Start with schema/migration
- Add store methods
- Update business logic
- Expose via API
- Update clients

### For New Features:

- Research existing patterns first
- Start with data model
- Build backend logic
- Add API endpoints
- Implement UI last

### For Refactoring:

- Document current behavior
- Plan incremental changes
- Maintain backwards compatibility
- Include migration strategy

## Sub-task Spawning Best Practices

When spawning research sub-tasks:

1. **Spawn multiple tasks in parallel** for efficiency
2. **Each task should be focused** on a specific area
3. **Provide detailed instructions** including:
   - Exactly what to search for
   - Which directories to focus on
   - What information to extract
   - Expected output format
4. **Specify read-only tools** to use
5. **Request specific file:line references** in responses
6. **Wait for all tasks to complete** before synthesizing
7. **Verify sub-task results**:
   - If a sub-task returns unexpected results, spawn follow-up tasks
   - Cross-check findings against the actual codebase
   - Don't accept results that seem incorrect

Example of spawning multiple tasks:

```python
# Spawn these tasks concurrently:
tasks = [
    Task("Research database schema", db_research_prompt),
    Task("Find API patterns", api_research_prompt),
    Task("Investigate UI components", ui_research_prompt),
    Task("Check test patterns", test_research_prompt)
]
```

## Example Interaction Flow

```
User: /implementation_plan
Assistant: I'll help you create a detailed implementation plan...

User: We need to add parent-child tracking for Claude sub-tasks. See thoughts/allison/tickets/eng_1478.md
Assistant: Let me read that ticket file completely first...

[Reads file fully]

Based on the ticket, I understand we need to track parent-child relationships for Claude sub-task events in the hld daemon. Before I start planning, I have some questions...

[Interactive process continues...]
```
@@ -1,37 +0,0 @@

2. set up worktree for implementation:
2a. read `hack/create_worktree.sh` and create a new worktree with the Linear branch name: `./hack/create_worktree.sh ENG-XXXX BRANCH_NAME`

3. determine required data:

branch name
path to plan file (use relative path only)
launch prompt
command to run

**IMPORTANT PATH USAGE:**
- The thoughts/ directory is synced between the main repo and worktrees
- Always use ONLY the relative path starting with `thoughts/shared/...` without any directory prefix
- Example: `thoughts/shared/plans/fix-mcp-keepalive-proper.md` (not the full absolute path)
- This works because thoughts are synced and accessible from the worktree

3a. confirm with the user by sending a message to the Human

```
based on the input, I plan to create a worktree with the following details:

worktree path: ~/wt/humanlayer/ENG-XXXX
branch name: BRANCH_NAME
path to plan file: $FILEPATH
launch prompt:

/implement_plan at $FILEPATH and when you are done implementing and all tests pass, read ./claude/commands/commit.md and create a commit, then read ./claude/commands/describe_pr.md and create a PR, then add a comment to the Linear ticket with the PR link

command to run:

humanlayer launch --model opus -w ~/wt/humanlayer/ENG-XXXX "/implement_plan at $FILEPATH and when you are done implementing and all tests pass, read ./claude/commands/commit.md and create a commit, then read ./claude/commands/describe_pr.md and create a PR, then add a comment to the Linear ticket with the PR link"
```

incorporate any user feedback then:

4. launch implementation session: `humanlayer launch --model opus -w ~/wt/humanlayer/ENG-XXXX "/implement_plan at $FILEPATH and when you are done implementing and all tests pass, read ./claude/commands/commit.md and create a commit, then read ./claude/commands/describe_pr.md and create a PR, then add a comment to the Linear ticket with the PR link"`
@@ -1,196 +0,0 @@
# Debug

You are tasked with helping debug issues during manual testing or implementation. This command allows you to investigate problems by examining logs, database state, and git history without editing files. Think of this as a way to bootstrap a debugging session without using the primary window's context.

## Initial Response

When invoked WITH a plan/ticket file:
```
I'll help debug issues with [file name]. Let me understand the current state.

What specific problem are you encountering?
- What were you trying to test/implement?
- What went wrong?
- Any error messages?

I'll investigate the logs, database, and git state to help figure out what's happening.
```

When invoked WITHOUT parameters:
```
I'll help debug your current issue.

Please describe what's going wrong:
- What are you working on?
- What specific problem occurred?
- When did it last work?

I can investigate logs, database state, and recent changes to help identify the issue.
```

## Environment Information

You have access to these key locations and tools:

**Logs** (automatically created by `make daemon` and `make wui`):
- MCP logs: `~/.humanlayer/logs/mcp-claude-approvals-*.log`
- Combined WUI/Daemon logs: `~/.humanlayer/logs/wui-${BRANCH_NAME}/codelayer.log`
- First line shows: `[timestamp] starting [service] in [directory]`

**Database**:
- Location: `~/.humanlayer/daemon-{BRANCH_NAME}.db`
- SQLite database with sessions, events, approvals, etc.
- Can query directly with `sqlite3`

**Git State**:
- Check current branch, recent commits, uncommitted changes
- Similar to how `commit` and `describe_pr` commands work

**Service Status**:
- Check if daemon is running: `ps aux | grep hld`
- Check if WUI is running: `ps aux | grep wui`
- Socket exists: `~/.humanlayer/daemon.sock`

## Process Steps

### Step 1: Understand the Problem

After the user describes the issue:

1. **Read any provided context** (plan or ticket file):
   - Understand what they're implementing/testing
   - Note which phase or step they're on
   - Identify expected vs actual behavior

2. **Quick state check**:
   - Current git branch and recent commits
   - Any uncommitted changes
   - When the issue started occurring

### Step 2: Investigate the Issue

Spawn parallel Task agents for efficient investigation:

```
Task 1 - Check Recent Logs:
Find and analyze the most recent logs for errors:
1. Find latest daemon log: ls -t ~/.humanlayer/logs/daemon-*.log | head -1
2. Find latest WUI log: ls -t ~/.humanlayer/logs/wui-*.log | head -1
3. Search for errors, warnings, or issues around the problem timeframe
4. Note the working directory (first line of log)
5. Look for stack traces or repeated errors
Return: Key errors/warnings with timestamps
```
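A hedged sketch of what that log pass can look like in the shell (the error pattern is an assumption to adjust per issue):

```bash
LOG=$(ls -t ~/.humanlayer/logs/daemon-*.log | head -1)   # latest daemon log
head -1 "$LOG"                                           # working directory line
grep -inE "error|warn|panic" "$LOG" | tail -20           # most recent problems
```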
```
Task 2 - Database State:
Check the current database state:
1. Connect to database: sqlite3 ~/.humanlayer/daemon.db
2. Check schema: .tables and .schema for relevant tables
3. Query recent data:
   - SELECT * FROM sessions ORDER BY created_at DESC LIMIT 5;
   - SELECT * FROM conversation_events WHERE created_at > datetime('now', '-1 hour');
   - Other queries based on the issue
4. Look for stuck states or anomalies
Return: Relevant database findings
```

```
Task 3 - Git and File State:
Understand what changed recently:
1. Check git status and current branch
2. Look at recent commits: git log --oneline -10
3. Check uncommitted changes: git diff
4. Verify expected files exist
5. Look for any file permission issues
Return: Git state and any file issues
```

### Step 3: Present Findings

Based on the investigation, present a focused debug report:

```markdown
## Debug Report

### What's Wrong
[Clear statement of the issue based on evidence]

### Evidence Found

**From Logs** (`~/.humanlayer/logs/`):
- [Error/warning with timestamp]
- [Pattern or repeated issue]

**From Database**:
```sql
-- Relevant query and result
[Finding from database]
```

**From Git/Files**:
- [Recent changes that might be related]
- [File state issues]

### Root Cause
[Most likely explanation based on evidence]

### Next Steps

1. **Try This First**:
   ```bash
   [Specific command or action]
   ```

2. **If That Doesn't Work**:
   - Restart services: `make daemon` and `make wui`
   - Check browser console for WUI errors
   - Run with debug: `HUMANLAYER_DEBUG=true make daemon`

### Can't Access?
Some issues might be outside my reach:
- Browser console errors (F12 in browser)
- MCP server internal state
- System-level issues

Would you like me to investigate something specific further?
```

## Important Notes

- **Focus on manual testing scenarios** - This is for debugging during implementation
- **Always require problem description** - Can't debug without knowing what's wrong
- **Read files completely** - No limit/offset when reading context
- **Think like `commit` or `describe_pr`** - Understand git state and changes
- **Guide back to user** - Some issues (browser console, MCP internals) are outside reach
- **No file editing** - Pure investigation only

## Quick Reference

**Find Latest Logs**:
```bash
ls -t ~/.humanlayer/logs/daemon-*.log | head -1
ls -t ~/.humanlayer/logs/wui-*.log | head -1
```

**Database Queries**:
```bash
sqlite3 ~/.humanlayer/daemon.db ".tables"
sqlite3 ~/.humanlayer/daemon.db ".schema sessions"
sqlite3 ~/.humanlayer/daemon.db "SELECT * FROM sessions ORDER BY created_at DESC LIMIT 5;"
```

**Service Check**:
```bash
ps aux | grep hld  # Is daemon running?
ps aux | grep wui  # Is WUI running?
```

**Git State**:
```bash
git status
git log --oneline -10
git diff
```

Remember: This command helps you investigate without burning the primary window's context. Perfect for when you hit an issue during manual testing and need to dig into logs, database, or git state.
@@ -1,71 +0,0 @@
# Generate PR Description

You are tasked with generating a comprehensive pull request description following the repository's standard template.

## Steps to follow:

1. **Read the PR description template:**
   - First, check if `thoughts/shared/pr_description.md` exists
   - If it doesn't exist, inform the user that their `humanlayer thoughts` setup is incomplete and they need to create a PR description template at `thoughts/shared/pr_description.md`
   - Read the template carefully to understand all sections and requirements

2. **Identify the PR to describe:**
   - Check if the current branch has an associated PR: `gh pr view --json url,number,title,state 2>/dev/null`
   - If no PR exists for the current branch, or if on main/master, list open PRs: `gh pr list --limit 10 --json number,title,headRefName,author`
   - Ask the user which PR they want to describe

3. **Check for existing description:**
   - Check if `thoughts/shared/prs/{number}_description.md` already exists
   - If it exists, read it and inform the user you'll be updating it
   - Consider what has changed since the last description was written

4. **Gather comprehensive PR information** (see the sketch after these steps):
   - Get the full PR diff: `gh pr diff {number}`
   - If you get an error about no default remote repository, instruct the user to run `gh repo set-default` and select the appropriate repository
   - Get commit history: `gh pr view {number} --json commits`
   - Review the base branch: `gh pr view {number} --json baseRefName`
   - Get PR metadata: `gh pr view {number} --json url,title,number,state`

5. **Analyze the changes thoroughly:** (ultrathink about the code changes, their architectural implications, and potential impacts)
   - Read through the entire diff carefully
   - For context, read any files that are referenced but not shown in the diff
   - Understand the purpose and impact of each change
   - Identify user-facing changes vs internal implementation details
   - Look for breaking changes or migration requirements

6. **Handle verification requirements:**
   - Look for any checklist items in the "How to verify it" section of the template
   - For each verification step:
     - If it's a command you can run (like `make check test`, `npm test`, etc.), run it
     - If it passes, mark the checkbox as checked: `- [x]`
     - If it fails, keep it unchecked and note what failed: `- [ ]` with explanation
     - If it requires manual testing (UI interactions, external services), leave unchecked and note for user
   - Document any verification steps you couldn't complete

7. **Generate the description:**
   - Fill out each section from the template thoroughly:
     - Answer each question/section based on your analysis
     - Be specific about problems solved and changes made
     - Focus on user impact where relevant
     - Include technical details in appropriate sections
   - Write a concise changelog entry
   - Ensure all checklist items are addressed (checked or explained)

8. **Save and sync the description:**
   - Write the completed description to `thoughts/shared/prs/{number}_description.md`
   - Run `humanlayer thoughts sync` to sync the thoughts directory
   - Show the user the generated description

9. **Update the PR:**
   - Update the PR description directly: `gh pr edit {number} --body-file thoughts/shared/prs/{number}_description.md`
   - Confirm the update was successful
   - If any verification steps remain unchecked, remind the user to complete them before merging
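A minimal sketch of the gather step above, assuming PR number 123 (substitute the real number):

```bash
gh pr view 123 --json url,title,number,state,baseRefName   # metadata and base branch
gh pr view 123 --json commits                               # commit history
gh pr diff 123                                              # full diff
```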
## Important notes:
- This command works across different repositories - always read the local template
- Be thorough but concise - descriptions should be scannable
- Focus on the "why" as much as the "what"
- Include any breaking changes or migration notes prominently
- If the PR touches multiple components, organize the description accordingly
- Always attempt to run verification commands when possible
- Clearly communicate which verification steps need manual testing
@@ -1,65 +0,0 @@
# Implement Plan

You are tasked with implementing an approved technical plan from `thoughts/shared/plans/`. These plans contain phases with specific changes and success criteria.

## Getting Started

When given a plan path:
- Read the plan completely and check for any existing checkmarks (- [x])
- Read the original ticket and all files mentioned in the plan
- **Read files fully** - never use limit/offset parameters, you need complete context
- Think deeply about how the pieces fit together
- Create a todo list to track your progress
- Start implementing if you understand what needs to be done

If no plan path provided, ask for one.

## Implementation Philosophy

Plans are carefully designed, but reality can be messy. Your job is to:
- Follow the plan's intent while adapting to what you find
- Implement each phase fully before moving to the next
- Verify your work makes sense in the broader codebase context
- Update checkboxes in the plan as you complete sections

When things don't match the plan exactly, think about why and communicate clearly. The plan is your guide, but your judgment matters too.

If you encounter a mismatch:
- STOP and think deeply about why the plan can't be followed
- Present the issue clearly:
  ```
  Issue in Phase [N]:
  Expected: [what the plan says]
  Found: [actual situation]
  Why this matters: [explanation]

  How should I proceed?
  ```

## Verification Approach

After implementing a phase:
- Run the success criteria checks (usually `cargo test -p [crate_name]` covers everything)
- Fix any issues before proceeding
- Update your progress in both the plan and your todos
- Check off completed items in the plan file itself using Edit

Don't let verification interrupt your flow - batch it at natural stopping points.
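A typical verification batch at such a stopping point might look like the following (the crate name is a placeholder, and the clippy/fmt checks are optional extras rather than something the plan format requires):

```bash
cargo test -p feature_crate                    # run the phase's success criteria
cargo clippy -p feature_crate -- -D warnings   # optional: catch new lint warnings
cargo fmt --all -- --check                     # optional: confirm formatting
```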
## If You Get Stuck

When something isn't working as expected:
- First, make sure you've read and understood all the relevant code
- Consider if the codebase has evolved since the plan was written
- Present the mismatch clearly and ask for guidance

Use sub-tasks sparingly - mainly for targeted debugging or exploring unfamiliar territory.

## Resuming Work

If the plan has existing checkmarks:
- Trust that completed work is done
- Pick up from the first unchecked item
- Verify previous work only if something seems off

Remember: You're implementing a solution, not just checking boxes. Keep the end goal in mind and maintain forward momentum.
@@ -1,44 +0,0 @@
# Local Review

You are tasked with setting up a local review environment for a colleague's branch. This involves creating a worktree, setting up dependencies, and launching a new Claude Code session.

## Process

When invoked with a parameter like `gh_username:branchName`:

1. **Parse the input**:
   - Extract GitHub username and branch name from the format `username:branchname`
   - If no parameter provided, ask for it in the format: `gh_username:branchName`

2. **Extract ticket information**:
   - Look for ticket numbers in the branch name (e.g., `eng-1696`, `ENG-1696`)
   - Use this to create a short worktree directory name
   - If no ticket found, use a sanitized version of the branch name

3. **Set up the remote and worktree**:
   - Check if the remote already exists using `git remote -v`
   - If not, add it: `git remote add USERNAME git@github.com:USERNAME/humanlayer`
   - Fetch from the remote: `git fetch USERNAME`
   - Create worktree: `git worktree add -b BRANCHNAME ~/wt/humanlayer/SHORT_NAME USERNAME/BRANCHNAME`

4. **Configure the worktree**:
   - Copy Claude settings: `cp .claude/settings.local.json WORKTREE/.claude/`
   - Run setup: `make -C WORKTREE setup`
   - Initialize thoughts: `cd WORKTREE && npx humanlayer thoughts init --directory humanlayer`

## Error Handling

- If worktree already exists, inform the user they need to remove it first
- If remote fetch fails, check if the username/repo exists
- If setup fails, provide the error but continue with the launch

## Example Usage

```
/local_review samdickson22:sam/eng-1696-hotkey-for-yolo-mode
```

This will:
- Add 'samdickson22' as a remote
- Create worktree at `~/wt/humanlayer/eng-1696`
- Set up the environment
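Concretely, for that example input the steps above expand to roughly the following shell session (a sketch assembled from the commands listed in the Process section):

```bash
git remote add samdickson22 git@github.com:samdickson22/humanlayer
git fetch samdickson22
git worktree add -b sam/eng-1696-hotkey-for-yolo-mode ~/wt/humanlayer/eng-1696 samdickson22/sam/eng-1696-hotkey-for-yolo-mode
cp .claude/settings.local.json ~/wt/humanlayer/eng-1696/.claude/
make -C ~/wt/humanlayer/eng-1696 setup
cd ~/wt/humanlayer/eng-1696 && npx humanlayer thoughts init --directory humanlayer
```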
@@ -1,28 +0,0 @@
## PART I - IF A TICKET IS MENTIONED

0c. use `linear` cli to fetch the selected item into thoughts with the ticket number - ./thoughts/shared/tickets/ENG-xxxx.md
0d. read the ticket and all comments to understand the implementation plan and any concerns

## PART I - IF NO TICKET IS MENTIONED

0. read .claude/commands/linear.md
0a. fetch the top 10 priority items from linear in status "ready for dev" using the MCP tools, noting all items in the `links` section
0b. select the highest priority SMALL or XS issue from the list (if no SMALL or XS issues exist, EXIT IMMEDIATELY and inform the user)
0c. use `linear` cli to fetch the selected item into thoughts with the ticket number - ./thoughts/shared/tickets/ENG-xxxx.md
0d. read the ticket and all comments to understand the implementation plan and any concerns

## PART II - NEXT STEPS

think deeply

1. move the item to "in dev" using the MCP tools
1a. identify the linked implementation plan document from the `links` section
1b. if no plan exists, move the ticket back to "ready for spec" and EXIT with an explanation

think deeply about the implementation

2. set up worktree for implementation:
2a. read `hack/create_worktree.sh` and create a new worktree with the Linear branch name: `./hack/create_worktree.sh ENG-XXXX BRANCH_NAME`
2b. launch implementation session: `npx humanlayer launch --model opus -w ~/wt/humanlayer/ENG-XXXX "/implement_plan and when you are done implementing and all tests pass, read ./claude/commands/commit.md and create a commit, then read ./claude/commands/describe_pr.md and create a PR, then add a comment to the Linear ticket with the PR link"`

think deeply, use TodoWrite to track your tasks. When fetching from linear, get the top 10 items by priority but only work on ONE item - specifically the highest priority SMALL or XS sized issue.
@@ -1,30 +0,0 @@
## PART I - IF A TICKET IS MENTIONED

0c. use `linear` cli to fetch the selected item into thoughts with the ticket number - ./thoughts/shared/tickets/ENG-xxxx.md
0d. read the ticket and all comments to learn about past implementations and research, and any questions or concerns about them

### PART I - IF NO TICKET IS MENTIONED

0. read .claude/commands/linear.md
0a. fetch the top 10 priority items from linear in status "ready for spec" using the MCP tools, noting all items in the `links` section
0b. select the highest priority SMALL or XS issue from the list (if no SMALL or XS issues exist, EXIT IMMEDIATELY and inform the user)
0c. use `linear` cli to fetch the selected item into thoughts with the ticket number - ./thoughts/shared/tickets/ENG-xxxx.md
0d. read the ticket and all comments to learn about past implementations and research, and any questions or concerns about them

### PART II - NEXT STEPS

think deeply

1. move the item to "plan in progress" using the MCP tools
1a. read ./claude/commands/create_plan.md
1b. determine if the item has a linked implementation plan document based on the `links` section
1d. if the plan exists, you're done, respond with a link to the ticket
1e. if the research is insufficient or has unanswered questions, create a new plan document following the instructions in ./claude/commands/create_plan.md

think deeply

2. when the plan is complete, `humanlayer thoughts sync` and attach the doc to the ticket using the MCP tools and create a terse comment with a link to it (re-read .claude/commands/linear.md if needed)
2a. move the item to "plan in review" using the MCP tools

think deeply, use TodoWrite to track your tasks. When fetching from linear, get the top 10 items by priority but only work on ONE item - specifically the highest priority SMALL or XS sized issue.
@@ -1,46 +0,0 @@
## PART I - IF A LINEAR TICKET IS MENTIONED

0c. use `linear` cli to fetch the selected item into thoughts with the ticket number - ./thoughts/shared/tickets/ENG-xxxx.md
0d. read the ticket and all comments to understand what research is needed and any previous attempts

## PART I - IF NO TICKET IS MENTIONED

0. read .claude/commands/linear.md
0a. fetch the top 10 priority items from linear in status "research needed" using the MCP tools, noting all items in the `links` section
0b. select the highest priority SMALL or XS issue from the list (if no SMALL or XS issues exist, EXIT IMMEDIATELY and inform the user)
0c. use `linear` cli to fetch the selected item into thoughts with the ticket number - ./thoughts/shared/tickets/ENG-xxxx.md
0d. read the ticket and all comments to understand what research is needed and any previous attempts

## PART II - NEXT STEPS

think deeply

1. move the item to "research in progress" using the MCP tools
1a. read any linked documents in the `links` section to understand context
1b. if insufficient information to conduct research, add a comment asking for clarification and move back to "research needed"

think deeply about the research needs

2. conduct the research:
2a. read .claude/commands/research_codebase.md for guidance on effective codebase research
2b. if the linear comments suggest web research is needed, use WebSearch to research external solutions, APIs, or best practices
2c. search the codebase for relevant implementations and patterns
2d. examine existing similar features or related code
2e. identify technical constraints and opportunities
2f. Be unbiased - don't think too much about an ideal implementation plan, just document all related files and how the systems work today
2g. document findings in a new thoughts document: `thoughts/shared/research/ENG-XXXX_research.md`

think deeply about the findings

3. synthesize research into actionable insights:
3a. summarize key findings and technical decisions
3b. identify potential implementation approaches
3c. note any risks or concerns discovered
3d. run `humanlayer thoughts sync` to save the research

4. update the ticket:
4a. attach the research document to the ticket using the MCP tools with proper link formatting
4b. add a comment summarizing the research outcomes
4c. move the item to "research in review" using the MCP tools

think deeply, use TodoWrite to track your tasks. When fetching from linear, get the top 10 items by priority but only work on ONE item - specifically the highest priority issue.
@@ -1,172 +0,0 @@
# Research Codebase

You are tasked with conducting comprehensive research across the codebase to answer user questions by spawning parallel sub-agents and synthesizing their findings.

## Initial Setup:

When this command is invoked, respond with:

```
I'm ready to research the codebase. Please provide your research question or area of interest, and I'll analyze it thoroughly by exploring relevant components and connections.
```

Then wait for the user's research query.

## Steps to follow after receiving the research query:

1. **Read any directly mentioned files first:**
   - If the user mentions specific files (crates, docs, JSON), read them FULLY first
   - **IMPORTANT**: Use the Read tool WITHOUT limit/offset parameters to read entire files
   - **CRITICAL**: Read these files yourself in the main context before spawning any sub-tasks
   - This ensures you have full context before decomposing the research

2. **Analyze and decompose the research question:**
   - Break down the user's query into composable research areas
   - Take time to ultrathink about the underlying patterns, connections, and architectural implications the user might be seeking
   - Identify specific components, patterns, or concepts to investigate
   - Create a research plan using TodoWrite to track all subtasks
   - Consider which directories, files, or architectural patterns are relevant

3. **Spawn parallel sub-agent tasks for comprehensive research:**
   - Create multiple Task agents to research different aspects concurrently
   - We now have specialized agents that know how to do specific research tasks:

   **For codebase research:**
   - Use the **codebase-locator** agent to find WHERE files and components live
   - Use the **codebase-analyzer** agent to understand HOW specific code works

   The key is to use these agents intelligently:
   - Start with locator agents to find what exists
   - Then use analyzer agents on the most promising findings
   - Run multiple agents in parallel when they're searching for different things
   - Each agent knows its job - just tell it what you're looking for
   - Don't write detailed prompts about HOW to search - the agents already know

4. **Wait for all sub-agents to complete and synthesize findings:**
   - IMPORTANT: Wait for ALL sub-agent tasks to complete before proceeding
   - Compile all sub-agent results (both codebase and thoughts findings)
   - Prioritize live codebase findings as the primary source of truth
   - Use thoughts/ findings as supplementary historical context
   - Connect findings across different components
   - Include specific file paths and line numbers for reference
   - Verify all thoughts/ paths are correct (e.g., thoughts/allison/ not thoughts/shared/ for personal files)
   - Highlight patterns, connections, and architectural decisions
   - Answer the user's specific questions with concrete evidence

5. **Gather metadata for the research document:**
   - Run the `zed/script/spec_metadata.sh` script to generate all relevant metadata
   - Filename: `thoughts/shared/research/YYYY-MM-DD_HH-MM-SS_topic.md`

6. **Generate research document:**
   - Use the metadata gathered in step 5
   - Structure the document with YAML frontmatter followed by content:

```markdown
---
date: [Current date and time with timezone in ISO format]
researcher: [Researcher name from thoughts status]
git_commit: [Current commit hash]
branch: [Current branch name]
repository: [Repository name]
topic: "[User's Question/Topic]"
tags: [research, codebase, relevant-component-names]
status: complete
last_updated: [Current date in YYYY-MM-DD format]
last_updated_by: [Researcher name]
---

# Research: [User's Question/Topic]

**Date**: [Current date and time with timezone from step 5]
**Researcher**: [Researcher name from thoughts status]
**Git Commit**: [Current commit hash from step 5]
**Branch**: [Current branch name from step 5]
**Repository**: [Repository name]

## Research Question

[Original user query]

## Summary

[High-level findings answering the user's question]

## Detailed Findings

### [Component/Area 1]

- Finding with reference ([file.ext:line](link))
- Connection to other components
- Implementation details

### [Component/Area 2]

...

## Code References

- `path/to/file.py:123` - Description of what's there
- `another/file.ts:45-67` - Description of the code block

## Architecture Insights

[Patterns, conventions, and design decisions discovered]

## Historical Context (from thoughts/)

[Relevant insights from thoughts/ directory with references]

- `thoughts/shared/something.md` - Historical decision about X
- `thoughts/local/notes.md` - Past exploration of Y

Note: Paths exclude "searchable/" even if found there

## Related Research

[Links to other research documents in thoughts/shared/research/]

## Open Questions

[Any areas that need further investigation]
```

7. **Add GitHub permalinks (if applicable):**
   - Check if on main branch or if the commit is pushed: `git branch --show-current` and `git status`
   - If on main/master or pushed, generate GitHub permalinks:
     - Get repo info: `gh repo view --json owner,name`
     - Create permalinks: `https://github.com/{owner}/{repo}/blob/{commit}/{file}#L{line}`
   - Replace local file references with permalinks in the document
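As a concrete illustration of step 7, the permalink construction can be scripted roughly as follows (a sketch, not part of the original command; `FILE` and `LINE` are placeholders, and it assumes the `gh` CLI is installed and authenticated):

```bash
#!/usr/bin/env bash
set -euo pipefail

FILE="path/to/file.rs"   # placeholder: the file being referenced
LINE=123                 # placeholder: the line being referenced

commit="$(git rev-parse HEAD)"
branch="$(git branch --show-current)"

# Only emit a permalink if the commit is reachable from the remote branch
if [ "$branch" = "main" ] || git merge-base --is-ancestor "$commit" "origin/$branch" 2>/dev/null; then
  owner="$(gh repo view --json owner --jq '.owner.login')"
  repo="$(gh repo view --json name --jq '.name')"
  echo "https://github.com/$owner/$repo/blob/$commit/$FILE#L$LINE"
fi
```
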
8. **Handle follow-up questions:**
   - If the user has follow-up questions, append to the same research document
   - Update the frontmatter fields `last_updated` and `last_updated_by` to reflect the update
   - Add `last_updated_note: "Added follow-up research for [brief description]"` to frontmatter
   - Add a new section: `## Follow-up Research [timestamp]`
   - Spawn new sub-agents as needed for additional investigation
   - Continue updating the document and syncing

## Important notes:

- Always use parallel Task agents to maximize efficiency and minimize context usage
- Always run fresh codebase research - never rely solely on existing research documents
- The thoughts/ directory provides historical context to supplement live findings
- Focus on finding concrete file paths and line numbers for developer reference
- Research documents should be self-contained with all necessary context
- Each sub-agent prompt should be specific and focused on read-only operations
- Consider cross-component connections and architectural patterns
- Include temporal context (when the research was conducted)
- Link to GitHub when possible for permanent references
- Keep the main agent focused on synthesis, not deep file reading
- Encourage sub-agents to find examples and usage patterns, not just definitions
- Explore all of thoughts/ directory, not just research subdirectory
- **File reading**: Always read mentioned files FULLY (no limit/offset) before spawning sub-tasks
- **Critical ordering**: Follow the numbered steps exactly
  - ALWAYS read mentioned files first before spawning sub-tasks (step 1)
  - ALWAYS wait for all sub-agents to complete before synthesizing (step 4)
  - ALWAYS gather metadata before writing the document (step 5 before step 6)
  - NEVER write the research document with placeholder values
- **Frontmatter consistency**:
  - Always include frontmatter at the beginning of research documents
  - Keep frontmatter fields consistent across all research documents
  - Update frontmatter when adding follow-up research
  - Use snake_case for multi-word field names (e.g., `last_updated`, `git_commit`)
  - Tags should be relevant to the research topic and components studied
@@ -1,162 +0,0 @@
# Validate Plan

You are tasked with validating that an implementation plan was correctly executed, verifying all success criteria and identifying any deviations or issues.

## Initial Setup

When invoked:

1. **Determine context** - Are you in an existing conversation or starting fresh?
   - If existing: Review what was implemented in this session
   - If fresh: Need to discover what was done through git and codebase analysis

2. **Locate the plan**:
   - If plan path provided, use it
   - Otherwise, search recent commits for plan references or ask user

3. **Gather implementation evidence**:

   ```bash
   # Check recent commits
   git log --oneline -n 20
   git diff HEAD~N..HEAD # Where N covers implementation commits

   # Run comprehensive checks
   cd $(git rev-parse --show-toplevel) && make check test
   ```

## Validation Process

### Step 1: Context Discovery

If starting fresh or need more context:

1. **Read the implementation plan** completely
2. **Identify what should have changed**:
   - List all files that should be modified
   - Note all success criteria (automated and manual)
   - Identify key functionality to verify

3. **Spawn parallel research tasks** to discover implementation:

   ```
   Task 1 - Verify database changes:
   Research if migration [N] was added and schema changes match plan.
   Check: migration files, schema version, table structure
   Return: What was implemented vs what plan specified

   Task 2 - Verify code changes:
   Find all modified files related to [feature].
   Compare actual changes to plan specifications.
   Return: File-by-file comparison of planned vs actual

   Task 3 - Verify test coverage:
   Check if tests were added/modified as specified.
   Run test commands and capture results.
   Return: Test status and any missing coverage
   ```

### Step 2: Systematic Validation

For each phase in the plan:

1. **Check completion status**:
   - Look for checkmarks in the plan (- [x])
   - Verify the actual code matches claimed completion

2. **Run automated verification**:
   - Execute each command from "Automated Verification"
   - Document pass/fail status
   - If failures, investigate root cause

3. **Assess manual criteria**:
   - List what needs manual testing
   - Provide clear steps for user verification

4. **Think deeply about edge cases**:
   - Were error conditions handled?
   - Are there missing validations?
   - Could the implementation break existing functionality?
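A rough sketch of how the automated portion of Step 2 can be driven from the shell (illustrative only; `PLAN.md` is a placeholder path and the `make` targets are the ones used as examples in this document):

```bash
#!/usr/bin/env bash

# Count phases the plan claims are complete
grep -c '^- \[x\]' PLAN.md || true

# Run each "Automated Verification" command and record pass/fail
for cmd in "make build" "make test" "make lint"; do
  if $cmd > /dev/null 2>&1; then
    echo "PASS: $cmd"
  else
    echo "FAIL: $cmd"
  fi
done
```
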
### Step 3: Generate Validation Report

Create comprehensive validation summary:

```markdown
## Validation Report: [Plan Name]

### Implementation Status
✓ Phase 1: [Name] - Fully implemented
✓ Phase 2: [Name] - Fully implemented
⚠️ Phase 3: [Name] - Partially implemented (see issues)

### Automated Verification Results
✓ Build passes: `make build`
✓ Tests pass: `make test`
✗ Linting issues: `make lint` (3 warnings)

### Code Review Findings

#### Matches Plan:
- Database migration correctly adds [table]
- API endpoints implement specified methods
- Error handling follows plan

#### Deviations from Plan:
- Used different variable names in [file:line]
- Added extra validation in [file:line] (improvement)

#### Potential Issues:
- Missing index on foreign key could impact performance
- No rollback handling in migration

### Manual Testing Required:
1. UI functionality:
   - [ ] Verify [feature] appears correctly
   - [ ] Test error states with invalid input

2. Integration:
   - [ ] Confirm works with existing [component]
   - [ ] Check performance with large datasets

### Recommendations:
- Address linting warnings before merge
- Consider adding integration test for [scenario]
- Document new API endpoints
```

## Working with Existing Context

If you were part of the implementation:
- Review the conversation history
- Check your todo list for what was completed
- Focus validation on work done in this session
- Be honest about any shortcuts or incomplete items

## Important Guidelines

1. **Be thorough but practical** - Focus on what matters
2. **Run all automated checks** - Don't skip verification commands
3. **Document everything** - Both successes and issues
4. **Think critically** - Question if the implementation truly solves the problem
5. **Consider maintenance** - Will this be maintainable long-term?

## Validation Checklist

Always verify:
- [ ] All phases marked complete are actually done
- [ ] Automated tests pass
- [ ] Code follows existing patterns
- [ ] No regressions introduced
- [ ] Error handling is robust
- [ ] Documentation updated if needed
- [ ] Manual test steps are clear

## Relationship to Other Commands

Recommended workflow:
1. `/implement_plan` - Execute the implementation
2. `/commit` - Create atomic commits for changes
3. `/validate_plan` - Verify implementation correctness
4. `/describe_pr` - Generate PR description

The validation works best after commits are made, as it can analyze the git history to understand what was implemented.

Remember: Good validation catches issues before they reach production. Be constructive but thorough in identifying gaps or improvements.
@@ -1,10 +0,0 @@
{
  "permissions": {
    "allow": [
      // "Bash(./hack/spec_metadata.sh)",
      // "Bash(hack/spec_metadata.sh)",
      // "Bash(bash hack/spec_metadata.sh)"
    ]
  },
  "enableAllProjectMcpServers": false
}
@@ -1,31 +0,0 @@
{
  "permissions": {
    "allow": [
      "Read(/Users/mikaylamaki/projects/zed-work/zed-monorepo-real/**)",
      "Read(/Users/nathan/src/agent-client-protocol/rust/**)",
      "Read(/Users/nathan/src/agent-client-protocol/rust/**)",
      "Read(/Users/nathan/src/agent-client-protocol/rust/**)",
      "Read(/Users/nathan/src/agent-client-protocol/rust/**)",
      "Bash(git add:*)",
      "Read(/Users/nathan/src/agent-client-protocol/rust/**)",
      "Bash(./script/spec_metadata.sh:*)",
      "Bash(npm run generate:*)",
      "Bash(npm run typecheck:*)",
      "Bash(npm run:*)",
      "Bash(npm install)",
      "Bash(grep:*)",
      "Bash(find:*)",
      "Bash(node:*)",
      "Bash(cargo check:*)",
      "Bash(cargo test)",
      "Bash(npx tsc:*)"
    ],
    "additionalDirectories": [
      "/Users/mikaylamaki/projects/zed-work/zed-monorepo-real/claude-code-acp/",
      "/Users/mikaylamaki/projects/zed-work/zed-monorepo-real/agentic-coding-protocol/",
      "/Users/nathan/src/agent",
      "/Users/nathan/src/agent-client-protocol/",
      "/Users/nathan/src/claude-code-acp"
    ]
  }
}
@@ -1 +0,0 @@
.rules
@@ -3,6 +3,15 @@ export default {
    const url = new URL(request.url);
    url.hostname = "docs-anw.pages.dev";

    // These pages were removed, but may still be served due to Cloudflare's
    // [asset retention](https://developers.cloudflare.com/pages/configuration/serving-pages/#asset-retention).
    if (
      url.pathname === "/docs/assistant/context-servers" ||
      url.pathname === "/docs/assistant/model-context-protocol"
    ) {
      return await fetch("https://zed.dev/404");
    }

    let res = await fetch(url, request);

    if (res.status === 404) {
@@ -1,45 +0,0 @@
# This file contains settings for `cargo hakari`.
# See https://docs.rs/cargo-hakari/latest/cargo_hakari/config for a full list of options.

hakari-package = "workspace-hack"

resolver = "2"
dep-format-version = "4"
workspace-hack-line-style = "workspace-dotted"

# this should be the same list as "targets" in ../rust-toolchain.toml
platforms = [
    "x86_64-apple-darwin",
    "aarch64-apple-darwin",
    "x86_64-unknown-linux-gnu",
    "aarch64-unknown-linux-gnu",
    "x86_64-pc-windows-msvc",
    "x86_64-unknown-linux-musl", # remote server
]

[traversal-excludes]
workspace-members = [
    "remote_server",
]
third-party = [
    { name = "reqwest", version = "0.11.27" },
    # build of remote_server should not include scap / its x11 dependency
    { name = "scap", git = "https://github.com/zed-industries/scap", rev = "808aa5c45b41e8f44729d02e38fd00a2fe2722e7" },
    # build of remote_server should not need to depend on libalsa through rodio
    { name = "rodio" },
]

[final-excludes]
workspace-members = [
    "zed_extension_api",

    # exclude all extensions
    "zed_glsl",
    "zed_html",
    "zed_proto",
    "zed_ruff",
    "slash_commands_example",
    "zed_snippets",
    "zed_test_extension",
    "zed_toml",
]
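To check this configuration locally, the usual cargo-hakari invocations look roughly like this (a sketch; the same commands appear in the CI workflow further below):

```bash
# Install the tool (CI pins cargo-hakari@0.9.35)
cargo install cargo-hakari --locked

# Verify that the generated workspace-hack Cargo.toml is up to date
cargo hakari generate --diff

# Verify that every workspace crate depends on workspace-hack
cargo hakari manage-deps --dry-run
```
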
@@ -1 +0,0 @@
.rules
@@ -19,10 +19,6 @@
# https://github.com/zed-industries/zed/pull/2394
eca93c124a488b4e538946cd2d313bd571aa2b86

# 2024-02-15 Format YAML files
# https://github.com/zed-industries/zed/pull/7887
a161a7d0c95ca7505bf9218bfae640ee5444c88b

# 2024-02-25 Format JSON files in assets/
# https://github.com/zed-industries/zed/pull/8405
ffdda588b41f7d9d270ffe76cab116f828ad545e
@@ -30,7 +26,3 @@ ffdda588b41f7d9d270ffe76cab116f828ad545e
# 2024-07-05 Improved formatting of default keymaps (single line per bind)
# https://github.com/zed-industries/zed/pull/13887
813cc3f5e537372fc86720b5e71b6e1c815440ab

# 2024-07-24 docs: Format docs
# https://github.com/zed-industries/zed/pull/15352
3a44a59f8ec114ac1ba22f7da1652717ef7e4e5c
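These commits are formatting-only changes that `git blame` should skip; locally that typically requires pointing git at the ignore file, e.g. (a sketch; the file name is assumed, since the diff header above does not show it):

```bash
git config blame.ignoreRevsFile .git-blame-ignore-revs
git blame crates/zed/src/main.rs   # example path: blame now skips the commits listed above
```
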
41 .github/ISSUE_TEMPLATE/01_bug_ai.yml (vendored)
@@ -1,41 +0,0 @@
|
||||
name: Bug Report (AI)
|
||||
description: Zed Agent Panel Bugs
|
||||
type: "Bug"
|
||||
labels: ["ai"]
|
||||
title: "AI: <a short description of the AI Related bug>"
|
||||
body:
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: Summary
|
||||
description: Describe the bug with a one line summary, and provide detailed reproduction steps
|
||||
value: |
|
||||
<!-- Please insert a one line summary of the issue below -->
|
||||
SUMMARY_SENTENCE_HERE
|
||||
|
||||
### Description
|
||||
<!-- Describe with sufficient detail to reproduce from a clean Zed install. -->
|
||||
Steps to trigger the problem:
|
||||
1.
|
||||
2.
|
||||
3.
|
||||
|
||||
**Expected Behavior**:
|
||||
**Actual Behavior**:
|
||||
|
||||
### Model Provider Details
|
||||
- Provider: (Anthropic via ZedPro, Anthropic via API key, Copilot Chat, Mistral, OpenAI, etc)
|
||||
- Model Name:
|
||||
- Mode: (Agent Panel, Inline Assistant, Terminal Assistant or Text Threads)
|
||||
- Other Details (MCPs, other settings, etc):
|
||||
validations:
|
||||
required: true
|
||||
|
||||
- type: textarea
|
||||
id: environment
|
||||
attributes:
|
||||
label: Zed Version and System Specs
|
||||
description: 'Open Zed, and in the command palette select "zed: copy system specs into clipboard"'
|
||||
placeholder: |
|
||||
Output of "zed: copy system specs into clipboard"
|
||||
validations:
|
||||
required: true
|
||||
35 .github/ISSUE_TEMPLATE/04_bug_debugger.yml (vendored)
@@ -1,35 +0,0 @@
|
||||
name: Bug Report (Debugger)
|
||||
description: Zed Debugger-Related Bugs
|
||||
type: "Bug"
|
||||
labels: ["debugger"]
|
||||
title: "Debugger: <a short description of the Debugger bug>"
|
||||
body:
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: Summary
|
||||
description: Describe the bug with a one line summary, and provide detailed reproduction steps
|
||||
value: |
|
||||
<!-- Please insert a one line summary of the issue below -->
|
||||
SUMMARY_SENTENCE_HERE
|
||||
|
||||
### Description
|
||||
<!-- Describe with sufficient detail to reproduce from a clean Zed install. -->
|
||||
Steps to trigger the problem:
|
||||
1.
|
||||
2.
|
||||
3.
|
||||
|
||||
**Expected Behavior**:
|
||||
**Actual Behavior**:
|
||||
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: environment
|
||||
attributes:
|
||||
label: Zed Version and System Specs
|
||||
description: 'Open Zed, and in the command palette select "zed: copy system specs into clipboard"'
|
||||
placeholder: |
|
||||
Output of "zed: copy system specs into clipboard"
|
||||
validations:
|
||||
required: true
|
||||
35 .github/ISSUE_TEMPLATE/07_bug_windows_alpha.yml (vendored)
@@ -1,35 +0,0 @@
|
||||
name: Bug Report (Windows Alpha)
|
||||
description: Zed Windows Alpha Related Bugs
|
||||
type: "Bug"
|
||||
labels: ["windows"]
|
||||
title: "Windows Alpha: <a short description of the Windows bug>"
|
||||
body:
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: Summary
|
||||
description: Describe the bug with a one-line summary, and provide detailed reproduction steps
|
||||
value: |
|
||||
<!-- Please insert a one-line summary of the issue below -->
|
||||
SUMMARY_SENTENCE_HERE
|
||||
|
||||
### Description
|
||||
<!-- Describe with sufficient detail to reproduce from a clean Zed install. -->
|
||||
Steps to trigger the problem:
|
||||
1.
|
||||
2.
|
||||
3.
|
||||
|
||||
**Expected Behavior**:
|
||||
**Actual Behavior**:
|
||||
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: environment
|
||||
attributes:
|
||||
label: Zed Version and System Specs
|
||||
description: 'Open Zed, and in the command palette select "zed: copy system specs into clipboard"'
|
||||
placeholder: |
|
||||
Output of "zed: copy system specs into clipboard"
|
||||
validations:
|
||||
required: true
|
||||
24 .github/ISSUE_TEMPLATE/0_feature_request.yml (vendored, new file)
@@ -0,0 +1,24 @@
|
||||
name: Feature Request
|
||||
description: "Tip: open this issue template from within Zed with the `request feature` command palette action"
|
||||
labels: ["admin read", "triage", "enhancement"]
|
||||
body:
|
||||
- type: checkboxes
|
||||
attributes:
|
||||
label: Check for existing issues
|
||||
description: Check the backlog of issues to reduce the chances of creating duplicates; if an issue already exists, place a `+1` (👍) on it.
|
||||
options:
|
||||
- label: Completed
|
||||
required: true
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: Describe the feature
|
||||
description: A clear and concise description of what you want to happen.
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: |
|
||||
If applicable, add mockups / screenshots to help present your vision of the feature
|
||||
description: Drag images into the text input below
|
||||
validations:
|
||||
required: false
|
||||
58 .github/ISSUE_TEMPLATE/10_bug_report.yml (vendored)
@@ -1,58 +0,0 @@
|
||||
name: Bug Report (Other)
|
||||
description: |
|
||||
Something else is broken in Zed (exclude crashing).
|
||||
type: "Bug"
|
||||
body:
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: Summary
|
||||
description: Provide a one sentence summary and detailed reproduction steps
|
||||
value: |
|
||||
<!-- Begin your issue with a one sentence summary -->
|
||||
SUMMARY_SENTENCE_HERE
|
||||
|
||||
### Description
|
||||
<!-- Describe with sufficient detail to reproduce from a clean Zed install.
|
||||
- Any code must be sufficient to reproduce (include context!)
|
||||
- Include code as text, not just as a screenshot.
|
||||
- Issues with insufficient detail may be summarily closed.
|
||||
-->
|
||||
|
||||
DESCRIPTION_HERE
|
||||
|
||||
Steps to reproduce:
|
||||
1.
|
||||
2.
|
||||
3.
|
||||
4.
|
||||
|
||||
**Expected Behavior**:
|
||||
**Actual Behavior**:
|
||||
|
||||
<!-- Before Submitting, did you:
|
||||
1. Include settings.json, keymap.json, .editorconfig if relevant?
|
||||
2. Check your Zed.log for relevant errors? (please include!)
|
||||
3. Click Preview to ensure everything looks right?
|
||||
4. Hide videos, large images and logs in ``` inside collapsible blocks:
|
||||
|
||||
<details><summary>click to expand</summary>
|
||||
|
||||
```json
|
||||
|
||||
```
|
||||
</details>
|
||||
-->
|
||||
|
||||
validations:
|
||||
required: true
|
||||
|
||||
- type: textarea
|
||||
id: environment
|
||||
attributes:
|
||||
label: Zed Version and System Specs
|
||||
description: |
|
||||
Open Zed, from the command palette select "zed: copy system specs into clipboard"
|
||||
placeholder: |
|
||||
Output of "zed: copy system specs into clipboard"
|
||||
validations:
|
||||
required: true
|
||||
51 .github/ISSUE_TEMPLATE/11_crash_report.yml (vendored)
@@ -1,51 +0,0 @@
|
||||
name: Crash Report
|
||||
description: Zed is Crashing or Hanging
|
||||
type: "Crash"
|
||||
body:
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: Summary
|
||||
description: Summarize the issue with detailed reproduction steps
|
||||
value: |
|
||||
<!-- Begin your issue with a one sentence summary -->
|
||||
SUMMARY_SENTENCE_HERE
|
||||
|
||||
### Description
|
||||
<!-- Include all steps necessary to reproduce from a clean Zed installation. Be verbose -->
|
||||
Steps to trigger the problem:
|
||||
1.
|
||||
2.
|
||||
3.
|
||||
|
||||
Actual Behavior:
|
||||
Expected Behavior:
|
||||
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: environment
|
||||
attributes:
|
||||
label: Zed Version and System Specs
|
||||
description: 'Open Zed, and in the command palette select "zed: copy system specs into clipboard"'
|
||||
placeholder: |
|
||||
Output of "zed: copy system specs into clipboard"
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
|
||||
description: |
|
||||
macOS: `~/Library/Logs/Zed/Zed.log`
|
||||
Linux: `~/.local/share/zed/logs/Zed.log` or $XDG_DATA_HOME
|
||||
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
|
||||
value: |
|
||||
<details><summary>Zed.log</summary>
|
||||
|
||||
<!-- Paste your log inside the code block. -->
|
||||
```log
|
||||
|
||||
```
|
||||
|
||||
</details>
|
||||
validations:
|
||||
required: false
|
||||
46 .github/ISSUE_TEMPLATE/1_bug_report.yml (vendored, new file)
@@ -0,0 +1,46 @@
|
||||
name: Bug Report
|
||||
description: |
|
||||
Use this template for **non-crash-related** bug reports.
|
||||
Tip: open this issue template from within Zed with the `file bug report` command palette action.
|
||||
labels: ["admin read", "triage", "defect"]
|
||||
body:
|
||||
- type: checkboxes
|
||||
attributes:
|
||||
label: Check for existing issues
|
||||
description: Check the backlog of issues to reduce the chances of creating duplicates; if an issue already exists, place a `+1` (👍) on it.
|
||||
options:
|
||||
- label: Completed
|
||||
required: true
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: Describe the bug / provide steps to reproduce it
|
||||
description: A clear and concise description of what the bug is.
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: environment
|
||||
attributes:
|
||||
label: Environment
|
||||
description: Run the `copy system specs into clipboard` command palette action and paste the output in the field below.
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: If applicable, add mockups / screenshots to help present your vision of the feature
|
||||
description: Drag images into the text input below
|
||||
validations:
|
||||
required: false
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: If applicable, attach your Zed.log file to this issue.
|
||||
description: |
|
||||
macOS: `~/Library/Logs/Zed/Zed.log`
|
||||
Linux: `~/.local/share/zed/logs/Zed.log` or $XDG_DATA_HOME
|
||||
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
|
||||
value: |
|
||||
<details><summary>Zed.log</summary><pre>
|
||||
<!-- Click below this line and paste or drag-and-drop your log-->
|
||||
|
||||
<!-- Click above this line and paste or drag-and-drop your log--></pre></details>
|
||||
validations:
|
||||
required: false
|
||||
39 .github/ISSUE_TEMPLATE/2_crash_report.yml (vendored, new file)
@@ -0,0 +1,39 @@
|
||||
name: Crash Report
|
||||
description: |
|
||||
Use this template for crash reports.
|
||||
labels: ["admin read", "triage", "defect", "panic / crash"]
|
||||
body:
|
||||
- type: checkboxes
|
||||
attributes:
|
||||
label: Check for existing issues
|
||||
description: Check the backlog of issues to reduce the chances of creating duplicates; if an issue already exists, place a `+1` (👍) on it.
|
||||
options:
|
||||
- label: Completed
|
||||
required: true
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: Describe the bug / provide steps to reproduce it
|
||||
description: A clear and concise description of what the bug is.
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
id: environment
|
||||
attributes:
|
||||
label: Environment
|
||||
description: Run the `copy system specs into clipboard` command palette action and paste the output in the field below.
|
||||
validations:
|
||||
required: true
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
|
||||
description: |
|
||||
macOS: `~/Library/Logs/Zed/Zed.log`
|
||||
Linux: `~/.local/share/zed/logs/Zed.log` or $XDG_DATA_HOME
|
||||
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
|
||||
value: |
|
||||
<details><summary>Zed.log</summary><pre>
|
||||
<!-- Click below this line and paste or drag-and-drop your log-->
|
||||
|
||||
<!-- Click above this line and paste or drag-and-drop your log--></pre></details>
|
||||
validations:
|
||||
required: false
|
||||
19 .github/ISSUE_TEMPLATE/99_other.yml (vendored)
@@ -1,19 +0,0 @@
|
||||
name: Other [Staff Only]
|
||||
description: Zed Staff Only
|
||||
body:
|
||||
- type: textarea
|
||||
attributes:
|
||||
label: Summary
|
||||
value: |
|
||||
<!-- Please insert a one line summary of the issue below -->
|
||||
SUMMARY_SENTENCE_HERE
|
||||
|
||||
### Description
|
||||
|
||||
IF YOU DO NOT WORK FOR ZED INDUSTRIES DO NOT CREATE ISSUES WITH THIS TEMPLATE.
|
||||
THEY WILL BE AUTO-CLOSED AND MAY RESULT IN YOU BEING BANNED FROM THE ZED ISSUE TRACKER.
|
||||
|
||||
FEATURE REQUESTS / SUPPORT REQUESTS SHOULD BE OPENED AS DISCUSSIONS:
|
||||
https://github.com/zed-industries/zed/discussions/new/choose
|
||||
validations:
|
||||
required: true
|
||||
22 .github/ISSUE_TEMPLATE/config.yml (vendored)
@@ -1,9 +1,17 @@
|
||||
# yaml-language-server: $schema=https://json.schemastore.org/github-issue-config.json
|
||||
blank_issues_enabled: false
|
||||
contact_links:
|
||||
- name: Feature Request
|
||||
url: https://github.com/zed-industries/zed/discussions/new/choose
|
||||
about: To request a feature, open a new Discussion in one of the appropriate Discussion categories
|
||||
- name: "Zed Discord"
|
||||
url: https://zed.dev/community-links
|
||||
about: Real-time discussion and user support
|
||||
- name: Language Request
|
||||
url: https://github.com/zed-industries/extensions/issues/new?assignees=&labels=language&projects=&template=1_language_request.yml&title=%3Cname_of_language%3E
|
||||
about: Request a language in the extensions repository
|
||||
- name: Theme Request
|
||||
url: https://github.com/zed-industries/extensions/issues/new?assignees=&labels=theme&projects=&template=0_theme_request.yml&title=%3Cname_of_theme%3E+theme
|
||||
about: Request a theme in the extensions repository
|
||||
- name: Top-Ranking Issues
|
||||
url: https://github.com/zed-industries/zed/issues/5393
|
||||
about: See an overview of the most popular Zed issues
|
||||
- name: Platform Support
|
||||
url: https://github.com/zed-industries/zed/issues/5391
|
||||
about: A quick note on platform support
|
||||
- name: Positive Feedback
|
||||
url: https://github.com/zed-industries/zed/discussions/5397
|
||||
about: A central location for kind words about Zed
|
||||
|
||||
45 .github/actionlint.yml (vendored)
@@ -1,45 +0,0 @@
|
||||
# Configuration related to self-hosted runner.
|
||||
self-hosted-runner:
|
||||
# Labels of self-hosted runner in array of strings.
|
||||
labels:
|
||||
# GitHub-hosted Runners
|
||||
- github-8vcpu-ubuntu-2404
|
||||
- github-16vcpu-ubuntu-2404
|
||||
- github-32vcpu-ubuntu-2404
|
||||
- github-8vcpu-ubuntu-2204
|
||||
- github-16vcpu-ubuntu-2204
|
||||
- github-32vcpu-ubuntu-2204
|
||||
- github-16vcpu-ubuntu-2204-arm
|
||||
- windows-2025-16
|
||||
- windows-2025-32
|
||||
- windows-2025-64
|
||||
# Namespace Ubuntu 20.04 (Release builds)
|
||||
- namespace-profile-16x32-ubuntu-2004
|
||||
- namespace-profile-32x64-ubuntu-2004
|
||||
- namespace-profile-16x32-ubuntu-2004-arm
|
||||
- namespace-profile-32x64-ubuntu-2004-arm
|
||||
# Namespace Ubuntu 22.04 (Everything else)
|
||||
- namespace-profile-4x8-ubuntu-2204
|
||||
- namespace-profile-8x16-ubuntu-2204
|
||||
- namespace-profile-16x32-ubuntu-2204
|
||||
- namespace-profile-32x64-ubuntu-2204
|
||||
# Namespace Ubuntu 24.04 (like ubuntu-latest)
|
||||
- namespace-profile-2x4-ubuntu-2404
|
||||
# Namespace Limited Preview
|
||||
- namespace-profile-8x16-ubuntu-2004-arm-m4
|
||||
- namespace-profile-8x32-ubuntu-2004-arm-m4
|
||||
# Self Hosted Runners
|
||||
- self-mini-macos
|
||||
- self-32vcpu-windows-2022
|
||||
|
||||
# Disable shellcheck because it doesn't like powershell
|
||||
# This should have been triggered with initial rollout of actionlint
|
||||
# but https://github.com/zed-industries/zed/pull/36693
|
||||
# somehow caused actionlint to actually check those windows jobs
|
||||
# where previously they were being skipped. Likely caused by an
|
||||
# unknown bug in actionlint where parsing of `runs-on: [ ]`
|
||||
# breaks something else. (yuck)
|
||||
paths:
|
||||
.github/workflows/{ci,release_nightly}.yml:
|
||||
ignore:
|
||||
- "shellcheck"
|
||||
38 .github/actions/build_docs/action.yml (vendored)
@@ -1,38 +0,0 @@
|
||||
name: "Build docs"
|
||||
description: "Build the docs"
|
||||
|
||||
runs:
|
||||
using: "composite"
|
||||
steps:
|
||||
- name: Setup mdBook
|
||||
uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08 # v2
|
||||
with:
|
||||
mdbook-version: "0.4.37"
|
||||
|
||||
- name: Cache dependencies
|
||||
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
|
||||
with:
|
||||
save-if: ${{ github.ref == 'refs/heads/main' }}
|
||||
# cache-provider: "buildjet"
|
||||
|
||||
- name: Install Linux dependencies
|
||||
shell: bash -euxo pipefail {0}
|
||||
run: ./script/linux
|
||||
|
||||
- name: Check for broken links (in MD)
|
||||
uses: lycheeverse/lychee-action@82202e5e9c2f4ef1a55a3d02563e1cb6041e5332 # v2.4.1
|
||||
with:
|
||||
args: --no-progress --exclude '^http' './docs/src/**/*'
|
||||
fail: true
|
||||
|
||||
- name: Build book
|
||||
shell: bash -euxo pipefail {0}
|
||||
run: |
|
||||
mkdir -p target/deploy
|
||||
mdbook build ./docs --dest-dir=../target/deploy/docs/
|
||||
|
||||
- name: Check for broken links (in HTML)
|
||||
uses: lycheeverse/lychee-action@82202e5e9c2f4ef1a55a3d02563e1cb6041e5332 # v2.4.1
|
||||
with:
|
||||
args: --no-progress --exclude '^http' 'target/deploy/docs/'
|
||||
fail: true
|
||||
4 .github/actions/run_tests/action.yml (vendored)
@@ -7,10 +7,10 @@ runs:
|
||||
- name: Install Rust
|
||||
shell: bash -euxo pipefail {0}
|
||||
run: |
|
||||
cargo install cargo-nextest --locked
|
||||
cargo install cargo-nextest
|
||||
|
||||
- name: Install Node
|
||||
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
|
||||
uses: actions/setup-node@0a44ba7841725637a19e28fa30b79a866c81b0a6 # v4
|
||||
with:
|
||||
node-version: "18"
|
||||
|
||||
|
||||
186 .github/actions/run_tests_windows/action.yml (vendored)
@@ -1,186 +0,0 @@
|
||||
name: "Run tests on Windows"
|
||||
description: "Runs the tests on Windows"
|
||||
|
||||
inputs:
|
||||
working-directory:
|
||||
description: "The working directory"
|
||||
required: true
|
||||
default: "."
|
||||
|
||||
runs:
|
||||
using: "composite"
|
||||
steps:
|
||||
- name: Install test runner
|
||||
shell: powershell
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
run: cargo install cargo-nextest --locked
|
||||
|
||||
- name: Install Node
|
||||
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
|
||||
with:
|
||||
node-version: "18"
|
||||
|
||||
- name: Configure crash dumps
|
||||
shell: powershell
|
||||
run: |
|
||||
# Record the start time for this CI run
|
||||
$runStartTime = Get-Date
|
||||
$runStartTimeStr = $runStartTime.ToString("yyyy-MM-dd HH:mm:ss")
|
||||
Write-Host "CI run started at: $runStartTimeStr"
|
||||
|
||||
# Save the timestamp for later use
|
||||
echo "CI_RUN_START_TIME=$($runStartTime.Ticks)" >> $env:GITHUB_ENV
|
||||
|
||||
# Create crash dump directory in workspace (non-persistent)
|
||||
$dumpPath = "$env:GITHUB_WORKSPACE\crash_dumps"
|
||||
New-Item -ItemType Directory -Force -Path $dumpPath | Out-Null
|
||||
|
||||
Write-Host "Setting up crash dump detection..."
|
||||
Write-Host "Workspace dump path: $dumpPath"
|
||||
|
||||
# Note: We're NOT modifying registry on stateful runners
|
||||
# Instead, we'll check default Windows crash locations after tests
|
||||
|
||||
- name: Run tests
|
||||
shell: powershell
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
run: |
|
||||
$env:RUST_BACKTRACE = "full"
|
||||
|
||||
# Enable Windows debugging features
|
||||
$env:_NT_SYMBOL_PATH = "srv*https://msdl.microsoft.com/download/symbols"
|
||||
|
||||
# .NET crash dump environment variables (ephemeral)
|
||||
$env:COMPlus_DbgEnableMiniDump = "1"
|
||||
$env:COMPlus_DbgMiniDumpType = "4"
|
||||
$env:COMPlus_CreateDumpDiagnostics = "1"
|
||||
|
||||
cargo nextest run --workspace --no-fail-fast
|
||||
|
||||
- name: Analyze crash dumps
|
||||
if: always()
|
||||
shell: powershell
|
||||
run: |
|
||||
Write-Host "Checking for crash dumps..."
|
||||
|
||||
# Get the CI run start time from the environment
|
||||
$runStartTime = [DateTime]::new([long]$env:CI_RUN_START_TIME)
|
||||
Write-Host "Only analyzing dumps created after: $($runStartTime.ToString('yyyy-MM-dd HH:mm:ss'))"
|
||||
|
||||
# Check all possible crash dump locations
|
||||
$searchPaths = @(
|
||||
"$env:GITHUB_WORKSPACE\crash_dumps",
|
||||
"$env:LOCALAPPDATA\CrashDumps",
|
||||
"$env:TEMP",
|
||||
"$env:GITHUB_WORKSPACE",
|
||||
"$env:USERPROFILE\AppData\Local\CrashDumps",
|
||||
"C:\Windows\System32\config\systemprofile\AppData\Local\CrashDumps"
|
||||
)
|
||||
|
||||
$dumps = @()
|
||||
foreach ($path in $searchPaths) {
|
||||
if (Test-Path $path) {
|
||||
Write-Host "Searching in: $path"
|
||||
$found = Get-ChildItem "$path\*.dmp" -ErrorAction SilentlyContinue | Where-Object {
|
||||
$_.CreationTime -gt $runStartTime
|
||||
}
|
||||
if ($found) {
|
||||
$dumps += $found
|
||||
Write-Host " Found $($found.Count) dump(s) from this CI run"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if ($dumps) {
|
||||
Write-Host "Found $($dumps.Count) crash dump(s)"
|
||||
|
||||
# Install debugging tools if not present
|
||||
$cdbPath = "C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\cdb.exe"
|
||||
if (-not (Test-Path $cdbPath)) {
|
||||
Write-Host "Installing Windows Debugging Tools..."
|
||||
$url = "https://go.microsoft.com/fwlink/?linkid=2237387"
|
||||
Invoke-WebRequest -Uri $url -OutFile winsdksetup.exe
|
||||
Start-Process -Wait winsdksetup.exe -ArgumentList "/features OptionId.WindowsDesktopDebuggers /quiet"
|
||||
}
|
||||
|
||||
foreach ($dump in $dumps) {
|
||||
Write-Host "`n=================================="
|
||||
Write-Host "Analyzing crash dump: $($dump.Name)"
|
||||
Write-Host "Size: $([math]::Round($dump.Length / 1MB, 2)) MB"
|
||||
Write-Host "Time: $($dump.CreationTime)"
|
||||
Write-Host "=================================="
|
||||
|
||||
# Set symbol path
|
||||
$env:_NT_SYMBOL_PATH = "srv*C:\symbols*https://msdl.microsoft.com/download/symbols"
|
||||
|
||||
# Run analysis
|
||||
$analysisOutput = & $cdbPath -z $dump.FullName -c "!analyze -v; ~*k; lm; q" 2>&1 | Out-String
|
||||
|
||||
# Extract key information
|
||||
if ($analysisOutput -match "ExceptionCode:\s*([\w]+)") {
|
||||
Write-Host "Exception Code: $($Matches[1])"
|
||||
if ($Matches[1] -eq "c0000005") {
|
||||
Write-Host "Exception Type: ACCESS VIOLATION"
|
||||
}
|
||||
}
|
||||
|
||||
if ($analysisOutput -match "EXCEPTION_RECORD:\s*(.+)") {
|
||||
Write-Host "Exception Record: $($Matches[1])"
|
||||
}
|
||||
|
||||
if ($analysisOutput -match "FAULTING_IP:\s*\n(.+)") {
|
||||
Write-Host "Faulting Instruction: $($Matches[1])"
|
||||
}
|
||||
|
||||
# Save full analysis
|
||||
$analysisFile = "$($dump.FullName).analysis.txt"
|
||||
$analysisOutput | Out-File -FilePath $analysisFile
|
||||
Write-Host "`nFull analysis saved to: $analysisFile"
|
||||
|
||||
# Print stack trace section
|
||||
Write-Host "`n--- Stack Trace Preview ---"
|
||||
$stackSection = $analysisOutput -split "STACK_TEXT:" | Select-Object -Last 1
|
||||
$stackLines = $stackSection -split "`n" | Select-Object -First 20
|
||||
$stackLines | ForEach-Object { Write-Host $_ }
|
||||
Write-Host "--- End Stack Trace Preview ---"
|
||||
}
|
||||
|
||||
Write-Host "`n⚠️ Crash dumps detected! Download the 'crash-dumps' artifact for detailed analysis."
|
||||
|
||||
# Copy dumps to workspace for artifact upload
|
||||
$artifactPath = "$env:GITHUB_WORKSPACE\crash_dumps_collected"
|
||||
New-Item -ItemType Directory -Force -Path $artifactPath | Out-Null
|
||||
|
||||
foreach ($dump in $dumps) {
|
||||
$destName = "$($dump.Directory.Name)_$($dump.Name)"
|
||||
Copy-Item $dump.FullName -Destination "$artifactPath\$destName"
|
||||
if (Test-Path "$($dump.FullName).analysis.txt") {
|
||||
Copy-Item "$($dump.FullName).analysis.txt" -Destination "$artifactPath\$destName.analysis.txt"
|
||||
}
|
||||
}
|
||||
|
||||
Write-Host "Copied $($dumps.Count) dump(s) to artifact directory"
|
||||
} else {
|
||||
Write-Host "No crash dumps from this CI run found"
|
||||
}
|
||||
|
||||
- name: Upload crash dumps
|
||||
if: always()
|
||||
uses: actions/upload-artifact@v4
|
||||
with:
|
||||
name: crash-dumps-${{ github.run_id }}-${{ github.run_attempt }}
|
||||
path: |
|
||||
crash_dumps_collected/*.dmp
|
||||
crash_dumps_collected/*.txt
|
||||
if-no-files-found: ignore
|
||||
retention-days: 7
|
||||
|
||||
- name: Check test results
|
||||
shell: powershell
|
||||
working-directory: ${{ inputs.working-directory }}
|
||||
run: |
|
||||
# Re-check test results to fail the job if tests failed
|
||||
if ($LASTEXITCODE -ne 0) {
|
||||
Write-Host "Tests failed with exit code: $LASTEXITCODE"
|
||||
exit $LASTEXITCODE
|
||||
}
|
||||
4 .github/workflows/bump_collab_staging.yml (vendored)
@@ -8,10 +8,10 @@ on:
|
||||
jobs:
|
||||
update-collab-staging-tag:
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
runs-on: namespace-profile-2x4-ubuntu-2404
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
|
||||
15 .github/workflows/bump_patch_version.yml (vendored)
@@ -14,12 +14,11 @@ concurrency:
|
||||
|
||||
jobs:
|
||||
bump_patch_version:
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
runs-on:
|
||||
- namespace-profile-16x32-ubuntu-2204
|
||||
- buildjet-16vcpu-ubuntu-2204
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
|
||||
with:
|
||||
ref: ${{ github.event.inputs.branch }}
|
||||
ssh-key: ${{ secrets.ZED_BOT_DEPLOY_KEY }}
|
||||
@@ -28,7 +27,7 @@ jobs:
|
||||
run: |
|
||||
set -eux
|
||||
|
||||
channel="$(cat crates/zed/RELEASE_CHANNEL)"
|
||||
channel=$(cat crates/zed/RELEASE_CHANNEL)
|
||||
|
||||
tag_suffix=""
|
||||
case $channel in
|
||||
@@ -43,9 +42,7 @@ jobs:
|
||||
;;
|
||||
esac
|
||||
which cargo-set-version > /dev/null || cargo install cargo-edit
|
||||
output="$(cargo set-version -p zed --bump patch 2>&1 | sed 's/.* //')"
|
||||
export GIT_COMMITTER_NAME="Zed Bot"
|
||||
export GIT_COMMITTER_EMAIL="hi@zed.dev"
|
||||
output=$(cargo set-version -p zed --bump patch 2>&1 | sed 's/.* //')
|
||||
git commit -am "Bump to $output for @$GITHUB_ACTOR" --author "Zed Bot <hi@zed.dev>"
|
||||
git tag "v${output}${tag_suffix}"
|
||||
git push origin HEAD "v${output}${tag_suffix}"
|
||||
git tag v${output}${tag_suffix}
|
||||
git push origin HEAD v${output}${tag_suffix}
|
||||
|
||||
670 .github/workflows/ci.yml (vendored)
@@ -7,10 +7,14 @@ on:
|
||||
- "v[0-9]+.[0-9]+.x"
|
||||
tags:
|
||||
- "v*"
|
||||
|
||||
paths-ignore:
|
||||
- "docs/**"
|
||||
pull_request:
|
||||
branches:
|
||||
- "**"
|
||||
paths-ignore:
|
||||
- "docs/**"
|
||||
- ".github/workflows/community_*"
|
||||
|
||||
concurrency:
|
||||
# Allow only one workflow per any non-`main` branch.
|
||||
@@ -21,81 +25,18 @@ env:
|
||||
CARGO_TERM_COLOR: always
|
||||
CARGO_INCREMENTAL: 0
|
||||
RUST_BACKTRACE: 1
|
||||
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
|
||||
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
|
||||
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
|
||||
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
|
||||
|
||||
jobs:
|
||||
job_spec:
|
||||
name: Decide which jobs to run
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
outputs:
|
||||
run_tests: ${{ steps.filter.outputs.run_tests }}
|
||||
run_license: ${{ steps.filter.outputs.run_license }}
|
||||
run_docs: ${{ steps.filter.outputs.run_docs }}
|
||||
run_nix: ${{ steps.filter.outputs.run_nix }}
|
||||
run_actionlint: ${{ steps.filter.outputs.run_actionlint }}
|
||||
runs-on:
|
||||
- namespace-profile-2x4-ubuntu-2404
|
||||
steps:
|
||||
- name: Checkout repo
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
with:
|
||||
# 350 is arbitrary; ~10days of history on main (5secs); full history is ~25secs
|
||||
fetch-depth: ${{ github.ref == 'refs/heads/main' && 2 || 350 }}
|
||||
- name: Fetch git history and generate output filters
|
||||
id: filter
|
||||
run: |
|
||||
if [ -z "$GITHUB_BASE_REF" ]; then
|
||||
echo "Not in a PR context (i.e., push to main/stable/preview)"
|
||||
COMPARE_REV="$(git rev-parse HEAD~1)"
|
||||
else
|
||||
echo "In a PR context comparing to pull_request.base.ref"
|
||||
git fetch origin "$GITHUB_BASE_REF" --depth=350
|
||||
COMPARE_REV="$(git merge-base "origin/${GITHUB_BASE_REF}" HEAD)"
|
||||
fi
|
||||
CHANGED_FILES="$(git diff --name-only "$COMPARE_REV" ${{ github.sha }})"
|
||||
|
||||
# Specify anything which should potentially skip full test suite in this regex:
|
||||
# - docs/
|
||||
# - script/update_top_ranking_issues/
|
||||
# - .github/ISSUE_TEMPLATE/
|
||||
# - .github/workflows/ (except .github/workflows/ci.yml)
|
||||
SKIP_REGEX='^(docs/|script/update_top_ranking_issues/|\.github/(ISSUE_TEMPLATE|workflows/(?!ci)))'
|
||||
|
||||
echo "$CHANGED_FILES" | grep -qvP "$SKIP_REGEX" && \
|
||||
echo "run_tests=true" >> "$GITHUB_OUTPUT" || \
|
||||
echo "run_tests=false" >> "$GITHUB_OUTPUT"
|
||||
|
||||
echo "$CHANGED_FILES" | grep -qP '^docs/' && \
|
||||
echo "run_docs=true" >> "$GITHUB_OUTPUT" || \
|
||||
echo "run_docs=false" >> "$GITHUB_OUTPUT"
|
||||
|
||||
echo "$CHANGED_FILES" | grep -qP '^\.github/(workflows/|actions/|actionlint.yml)' && \
|
||||
echo "run_actionlint=true" >> "$GITHUB_OUTPUT" || \
|
||||
echo "run_actionlint=false" >> "$GITHUB_OUTPUT"
|
||||
|
||||
echo "$CHANGED_FILES" | grep -qP '^(Cargo.lock|script/.*licenses)' && \
|
||||
echo "run_license=true" >> "$GITHUB_OUTPUT" || \
|
||||
echo "run_license=false" >> "$GITHUB_OUTPUT"
|
||||
|
||||
echo "$CHANGED_FILES" | grep -qP '^(nix/|flake\.|Cargo\.|rust-toolchain.toml|\.cargo/config.toml)' && \
|
||||
echo "run_nix=true" >> "$GITHUB_OUTPUT" || \
|
||||
echo "run_nix=false" >> "$GITHUB_OUTPUT"
|
||||
|
||||
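The path filter above can be sanity-checked locally against the same skip regex the job uses (a sketch; requires GNU grep for `-P`, which is what the Ubuntu runners provide):

```bash
SKIP_REGEX='^(docs/|script/update_top_ranking_issues/|\.github/(ISSUE_TEMPLATE|workflows/(?!ci)))'

# A docs-only change should not trigger the full test suite...
echo "docs/src/configuring-zed.md" | grep -qvP "$SKIP_REGEX" && echo "run_tests=true" || echo "run_tests=false"

# ...while a source change should.
echo "crates/editor/src/editor.rs" | grep -qvP "$SKIP_REGEX" && echo "run_tests=true" || echo "run_tests=false"
```
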
migration_checks:
|
||||
name: Check Postgres and Protobuf migrations, mergeability
|
||||
needs: [job_spec]
|
||||
if: |
|
||||
github.repository_owner == 'zed-industries' &&
|
||||
needs.job_spec.outputs.run_tests == 'true'
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
timeout-minutes: 60
|
||||
runs-on:
|
||||
- self-mini-macos
|
||||
- self-hosted
|
||||
- test
|
||||
steps:
|
||||
- name: Checkout repo
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
|
||||
with:
|
||||
clean: false
|
||||
fetch-depth: 0 # fetch full history
|
||||
@@ -114,11 +55,11 @@ jobs:
|
||||
run: |
|
||||
if [ -z "$GITHUB_BASE_REF" ];
|
||||
then
|
||||
echo "BUF_BASE_BRANCH=$(git merge-base origin/main HEAD)" >> "$GITHUB_ENV"
|
||||
echo "BUF_BASE_BRANCH=$(git merge-base origin/main HEAD)" >> $GITHUB_ENV
|
||||
else
|
||||
git checkout -B temp
|
||||
git merge -q "origin/$GITHUB_BASE_REF" -m "merge main into temp"
|
||||
echo "BUF_BASE_BRANCH=$GITHUB_BASE_REF" >> "$GITHUB_ENV"
|
||||
git merge -q origin/$GITHUB_BASE_REF -m "merge main into temp"
|
||||
echo "BUF_BASE_BRANCH=$GITHUB_BASE_REF" >> $GITHUB_ENV
|
||||
fi
|
||||
|
||||
- uses: bufbuild/buf-setup-action@v1
|
||||
@@ -129,410 +70,166 @@ jobs:
|
||||
input: "crates/proto/proto/"
|
||||
against: "https://github.com/${GITHUB_REPOSITORY}.git#branch=${BUF_BASE_BRANCH},subdir=crates/proto/proto/"
|
||||
|
||||
workspace_hack:
timeout-minutes: 60
name: Check workspace-hack crate
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on:
- namespace-profile-8x16-ubuntu-2204
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
- name: Install cargo-hakari
uses: clechasseur/rs-cargo@8435b10f6e71c2e3d4d3b7573003a8ce4bfc6386 # v2
with:
command: install
args: cargo-hakari@0.9.35

- name: Check workspace-hack Cargo.toml is up-to-date
run: |
cargo hakari generate --diff || {
echo "To fix, run script/update-workspace-hack or script/update-workspace-hack.ps1";
false
}
- name: Check all crates depend on workspace-hack
run: |
cargo hakari manage-deps --dry-run || {
echo "To fix, run script/update-workspace-hack or script/update-workspace-hack.ps1"
false
}

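# Editorial note (sketch, not part of the workflow): the two cargo-hakari checks above
# can be reproduced locally with the same commands the job runs, assuming
# cargo-hakari 0.9.x is installed:
#   cargo hakari generate --diff        # fails if workspace-hack/Cargo.toml is stale
#   cargo hakari manage-deps --dry-run  # fails if a crate is missing the workspace-hack dep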
style:
timeout-minutes: 60
name: Check formatting and spelling
needs: [job_spec]
if: github.repository_owner == 'zed-industries'
runs-on:
- namespace-profile-4x8-ubuntu-2204
- buildjet-8vcpu-ubuntu-2204
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4

- uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2 # v4.0.0
with:
version: 9

- name: Prettier Check on /docs
working-directory: ./docs
run: |
pnpm dlx "prettier@${PRETTIER_VERSION}" . --check || {
echo "To fix, run from the root of the Zed repo:"
echo " cd docs && pnpm dlx prettier@${PRETTIER_VERSION} . --write && cd .."
false
}
env:
PRETTIER_VERSION: 3.5.0

- name: Prettier Check on default.json
run: |
pnpm dlx "prettier@${PRETTIER_VERSION}" assets/settings/default.json --check || {
echo "To fix, run from the root of the Zed repo:"
echo " pnpm dlx prettier@${PRETTIER_VERSION} assets/settings/default.json --write"
false
}
env:
PRETTIER_VERSION: 3.5.0

# Fail CI on todo! and FIXME comments so that they are certainly revisited.
- name: Check for todo! and FIXME comments
run: script/check-todos

- name: Check modifier use in keymaps
run: script/check-keymaps
uses: actions/checkout@692973e3d937129bcbf40652eb9f2f61becf3332 # v4

- name: Run style checks
uses: ./.github/actions/check_style

- name: Check for typos
uses: crate-ci/typos@8e6a4285bcbde632c5d79900a7779746e8b7ea3f # v1.24.6
uses: crate-ci/typos@v1.24.6
with:
config: ./typos.toml

check_docs:
timeout-minutes: 60
name: Check docs
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
(needs.job_spec.outputs.run_tests == 'true' || needs.job_spec.outputs.run_docs == 'true')
runs-on:
- namespace-profile-8x16-ubuntu-2204
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false

- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml

- name: Build docs
uses: ./.github/actions/build_docs

actionlint:
runs-on: namespace-profile-2x4-ubuntu-2404
if: github.repository_owner == 'zed-industries' && needs.job_spec.outputs.run_actionlint == 'true'
needs: [job_spec]
steps:
- uses: actions/checkout@v4
- name: Download actionlint
id: get_actionlint
run: bash <(curl https://raw.githubusercontent.com/rhysd/actionlint/main/scripts/download-actionlint.bash)
shell: bash
- name: Check workflow files
run: ${{ steps.get_actionlint.outputs.executable }} -color
shell: bash

macos_tests:
timeout-minutes: 60
name: (macOS) Run Clippy and tests
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on:
- self-mini-macos
- self-hosted
- test
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
with:
clean: false

- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml

- name: Check that Cargo.lock is up to date
run: |
cargo update --locked --workspace

- name: cargo clippy
run: ./script/clippy

- name: Install cargo-machete
uses: clechasseur/rs-cargo@8435b10f6e71c2e3d4d3b7573003a8ce4bfc6386 # v2
with:
command: install
args: cargo-machete@0.7.0

- name: Check unused dependencies
uses: clechasseur/rs-cargo@8435b10f6e71c2e3d4d3b7573003a8ce4bfc6386 # v2
with:
command: machete
uses: bnjbvr/cargo-machete@main

- name: Check licenses
run: |
script/check-licenses
if [[ "${{ needs.job_spec.outputs.run_license }}" == "true" ]]; then
script/generate-licenses /tmp/zed_licenses_output
fi

- name: Check for new vulnerable dependencies
if: github.event_name == 'pull_request'
uses: actions/dependency-review-action@67d4f4bd7a9b17a0db54d2a7519187c65e339de8 # v4
with:
license-check: false
script/generate-licenses /tmp/zed_licenses_output

- name: Run tests
uses: ./.github/actions/run_tests

- name: Build collab
run: cargo build -p collab
run: RUSTFLAGS="-D warnings" cargo build -p collab

- name: Build other binaries and features
run: |
cargo build --workspace --bins --all-features
RUSTFLAGS="-D warnings" cargo build --workspace --bins --all-features
cargo check -p gpui --features "macos-blade"
cargo check -p workspace
cargo build -p remote_server
cargo check -p gpui --examples

# Since the macOS runners are stateful, we need to remove the config file to prevent potential bugs.
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo
RUSTFLAGS="-D warnings" cargo build -p remote_server

linux_tests:
timeout-minutes: 60
name: (Linux) Run Clippy and tests
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on:
- namespace-profile-16x32-ubuntu-2204
- buildjet-16vcpu-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH

- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
with:
clean: false

- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
uses: swatinem/rust-cache@82a92a6e8fbeee089604da2575dc567ae9ddeaab # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"
cache-provider: "buildjet"

- name: Install Linux dependencies
run: ./script/linux

- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml

- name: cargo clippy
run: ./script/clippy

- name: Run tests
uses: ./.github/actions/run_tests

- name: Build other binaries and features
run: |
cargo build -p zed
cargo check -p workspace
cargo check -p gpui --examples

# Even though the Linux runner is not stateful, and in theory there is no need to do this cleanup,
# we do it anyway to avoid potential issues in the future: if we switch to a stateful Linux runner
# and forget to add code to clean up the config file, this precaution already covers it.
# While it's not strictly necessary at the moment, it's better to err on the side of caution.
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo
- name: Build Zed
run: RUSTFLAGS="-D warnings" cargo build -p zed

build_remote_server:
timeout-minutes: 60
name: (Linux) Build Remote Server
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on:
- namespace-profile-16x32-ubuntu-2204
- buildjet-16vcpu-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH

- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
with:
clean: false

- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
uses: swatinem/rust-cache@82a92a6e8fbeee089604da2575dc567ae9ddeaab # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"
cache-provider: "buildjet"

- name: Install Clang & Mold
run: ./script/remote-server && ./script/install-mold 2.34.0

- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml

- name: Build Remote Server
run: cargo build -p remote_server

- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo
run: RUSTFLAGS="-D warnings" cargo build -p remote_server

# todo(windows): Actually run the tests
windows_tests:
timeout-minutes: 60
name: (Windows) Run Clippy and tests
needs: [job_spec]
if: |
github.repository_owner == 'zed-industries' &&
needs.job_spec.outputs.run_tests == 'true'
runs-on: [self-32vcpu-windows-2022]
runs-on: hosted-windows-1
steps:
- name: Environment Setup
run: |
$RunnerDir = Split-Path -Parent $env:RUNNER_WORKSPACE
Write-Output `
"RUSTUP_HOME=$RunnerDir\.rustup" `
"CARGO_HOME=$RunnerDir\.cargo" `
"PATH=$RunnerDir\.cargo\bin;$env:PATH" `
>> $env:GITHUB_ENV

- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
with:
clean: false

- name: Configure CI
run: |
New-Item -ItemType Directory -Path "./../.cargo" -Force
Copy-Item -Path "./.cargo/ci-config.toml" -Destination "./../.cargo/config.toml"
- name: Cache dependencies
uses: swatinem/rust-cache@82a92a6e8fbeee089604da2575dc567ae9ddeaab # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
cache-provider: "github"

- name: cargo clippy
run: |
.\script\clippy.ps1

- name: Run tests
uses: ./.github/actions/run_tests_windows
# Windows can't run shell scripts, so we need to use `cargo xtask`.
run: cargo xtask clippy

- name: Build Zed
run: cargo build

- name: Limit target directory size
run: ./script/clear-target-dir-if-larger-than.ps1 250

- name: Clean CI config file
if: always()
run: Remove-Item -Recurse -Path "./../.cargo" -Force -ErrorAction SilentlyContinue

tests_pass:
name: Tests Pass
runs-on: namespace-profile-2x4-ubuntu-2404
needs:
- job_spec
- style
- check_docs
- actionlint
- migration_checks
# run_tests: If adding required tests, add them here and to script below.
- workspace_hack
- linux_tests
- build_remote_server
- macos_tests
- windows_tests
if: |
github.repository_owner == 'zed-industries' &&
always()
steps:
- name: Check all tests passed
run: |
# Check dependent jobs...
RET_CODE=0
# Always check style
[[ "${{ needs.style.result }}" != 'success' ]] && { RET_CODE=1; echo "style tests failed"; }

if [[ "${{ needs.job_spec.outputs.run_docs }}" == "true" ]]; then
[[ "${{ needs.check_docs.result }}" != 'success' ]] && { RET_CODE=1; echo "docs checks failed"; }
fi

if [[ "${{ needs.job_spec.outputs.run_actionlint }}" == "true" ]]; then
[[ "${{ needs.actionlint.result }}" != 'success' ]] && { RET_CODE=1; echo "actionlint checks failed"; }
fi

# Only check test jobs if they were supposed to run
if [[ "${{ needs.job_spec.outputs.run_tests }}" == "true" ]]; then
[[ "${{ needs.workspace_hack.result }}" != 'success' ]] && { RET_CODE=1; echo "Workspace Hack failed"; }
[[ "${{ needs.macos_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "macOS tests failed"; }
[[ "${{ needs.linux_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "Linux tests failed"; }
[[ "${{ needs.windows_tests.result }}" != 'success' ]] && { RET_CODE=1; echo "Windows tests failed"; }
[[ "${{ needs.build_remote_server.result }}" != 'success' ]] && { RET_CODE=1; echo "Remote server build failed"; }
# This check is intentionally disabled. See: https://github.com/zed-industries/zed/pull/28431
# [[ "${{ needs.migration_checks.result }}" != 'success' ]] && { RET_CODE=1; echo "Migration Checks failed"; }
fi
if [[ "$RET_CODE" -eq 0 ]]; then
echo "All tests passed successfully!"
fi
exit $RET_CODE
run: $env:RUSTFLAGS="-D warnings"; cargo build

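# Editorial note (sketch, not part of the workflow): the tests_pass gate above follows a
# fixed pattern. To make a hypothetical new job `foo` required, it would be listed under
# `needs:` and gated inside the run_tests branch of the script, e.g.:
#   [[ "${{ needs.foo.result }}" != 'success' ]] && { RET_CODE=1; echo "foo failed"; }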
bundle-mac:
timeout-minutes: 120
timeout-minutes: 60
name: Create a macOS bundle
runs-on:
- self-mini-macos
if: |
( startsWith(github.ref, 'refs/tags/v')
|| contains(github.event.pull_request.labels.*.name, 'run-bundling') )
- self-hosted
- bundle
if: ${{ startsWith(github.ref, 'refs/tags/v') || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
needs: [macos_tests]
env:
MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}
MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}
APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}
APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
APPLE_NOTARIZATION_USERNAME: ${{ secrets.APPLE_NOTARIZATION_USERNAME }}
APPLE_NOTARIZATION_PASSWORD: ${{ secrets.APPLE_NOTARIZATION_PASSWORD }}
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
steps:
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
uses: actions/setup-node@0a44ba7841725637a19e28fa30b79a866c81b0a6 # v4
with:
node-version: "18"

- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}

- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
with:
# We need to fetch more than one commit so that `script/draft-release-notes`
# is able to diff between the current and previous tag.
@@ -540,7 +237,6 @@ jobs:
# 25 was chosen arbitrarily.
fetch-depth: 25
clean: false
ref: ${{ github.ref }}

- name: Limit target directory size
run: script/clear-target-dir-if-larger-than 100
@@ -556,29 +252,35 @@ jobs:
run: |
mkdir -p target/
# Ignore any errors that occur while drafting release notes so they do not fail the build.
script/draft-release-notes "$RELEASE_VERSION" "$RELEASE_CHANNEL" > target/release-notes.md || true
script/create-draft-release target/release-notes.md
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
script/draft-release-notes "$version" "$channel" > target/release-notes.md || true

- name: Generate license file
run: script/generate-licenses

- name: Create macOS app bundle
run: script/bundle-mac

- name: Rename binaries
- name: Rename single-architecture binaries
if: ${{ github.ref == 'refs/heads/main' }} || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
run: |
mv target/aarch64-apple-darwin/release/Zed.dmg target/aarch64-apple-darwin/release/Zed-aarch64.dmg
mv target/x86_64-apple-darwin/release/Zed.dmg target/x86_64-apple-darwin/release/Zed-x86_64.dmg

- name: Upload app bundle (universal) to workflow run if main branch or specific label
uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4
if: ${{ github.ref == 'refs/heads/main' }} || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}.dmg
path: target/release/Zed.dmg
- name: Upload app bundle (aarch64) to workflow run if main branch or specific label
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4
if: ${{ github.ref == 'refs/heads/main' }} || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-aarch64.dmg
path: target/aarch64-apple-darwin/release/Zed-aarch64.dmg

- name: Upload app bundle (x86_64) to workflow run if main branch or specific label
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4
if: ${{ github.ref == 'refs/heads/main' }} || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
with:
name: Zed_${{ github.event.pull_request.head.sha || github.sha }}-x86_64.dmg
@@ -595,34 +297,32 @@ jobs:
target/zed-remote-server-macos-aarch64.gz
target/aarch64-apple-darwin/release/Zed-aarch64.dmg
target/x86_64-apple-darwin/release/Zed-x86_64.dmg
target/release/Zed.dmg
body_path: target/release-notes.md
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

bundle-linux-x86_x64:
bundle-linux:
timeout-minutes: 60
name: Linux x86_x64 release bundle
name: Create a Linux bundle
runs-on:
- namespace-profile-16x32-ubuntu-2004 # ubuntu 20.04 for minimal glibc
if: |
( startsWith(github.ref, 'refs/tags/v')
|| contains(github.event.pull_request.labels.*.name, 'run-bundling') )
- buildjet-16vcpu-ubuntu-2004
if: ${{ startsWith(github.ref, 'refs/tags/v') || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
needs: [linux_tests]
env:
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
with:
clean: false

- name: Install Linux dependencies
run: ./script/linux && ./script/install-mold 2.34.0

- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}

- name: Determine version and release channel
if: startsWith(github.ref, 'refs/tags/v')
if: ${{ startsWith(github.ref, 'refs/tags/v') }}
run: |
# This exports RELEASE_CHANNEL into env (GITHUB_ENV)
script/determine-release-channel
@@ -630,233 +330,69 @@ jobs:
- name: Create Linux .tar.gz bundle
run: script/bundle-linux

- name: Upload Artifact to Workflow - zed (run-bundling)
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
- name: Upload Linux bundle to workflow run if main branch or specific label
uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4
if: ${{ github.ref == 'refs/heads/main' }} || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
with:
name: zed-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.tar.gz
path: target/release/zed-*.tar.gz

- name: Upload Artifact to Workflow - zed-remote-server (run-bundling)
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-linux-gnu.gz
path: target/zed-remote-server-linux-x86_64.gz

- name: Upload Artifacts to release
- name: Upload app bundle to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) }}
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: |
target/zed-remote-server-linux-x86_64.gz
target/release/zed-linux-x86_64.tar.gz
body: ""
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

bundle-linux-aarch64: # this runs on ubuntu22.04
timeout-minutes: 60
name: Linux arm64 release bundle
name: Create arm64 Linux bundle
runs-on:
- namespace-profile-8x32-ubuntu-2004-arm-m4 # ubuntu 20.04 for minimal glibc
if: |
startsWith(github.ref, 'refs/tags/v')
|| contains(github.event.pull_request.labels.*.name, 'run-bundling')
- buildjet-16vcpu-ubuntu-2204-arm
if: ${{ startsWith(github.ref, 'refs/tags/v') || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
needs: [linux_tests]
env:
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
with:
clean: false

- name: Install Linux dependencies
run: ./script/linux

- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}

- name: Determine version and release channel
if: startsWith(github.ref, 'refs/tags/v')
if: ${{ startsWith(github.ref, 'refs/tags/v') }}
run: |
# This exports RELEASE_CHANNEL into env (GITHUB_ENV)
script/determine-release-channel

- name: Create and upload Linux .tar.gz bundles
- name: Create and upload Linux .tar.gz bundle
run: script/bundle-linux

- name: Upload Artifact to Workflow - zed (run-bundling)
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
- name: Upload Linux bundle to workflow run if main branch or specific label
uses: actions/upload-artifact@b4b15b8c7c6ac21ea08fcf65892d2ee8f75cf882 # v4
if: ${{ github.ref == 'refs/heads/main' }} || contains(github.event.pull_request.labels.*.name, 'run-bundling') }}
with:
name: zed-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.tar.gz
path: target/release/zed-*.tar.gz

- name: Upload Artifact to Workflow - zed-remote-server (run-bundling)
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-aarch64-unknown-linux-gnu.gz
path: target/zed-remote-server-linux-aarch64.gz

- name: Upload Artifacts to release
- name: Upload app bundle to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) }}
if: ${{ env.RELEASE_CHANNEL == 'preview' || env.RELEASE_CHANNEL == 'stable' }}
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: |
target/zed-remote-server-linux-aarch64.gz
target/release/zed-linux-aarch64.tar.gz
body: ""
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

freebsd:
timeout-minutes: 60
runs-on: github-8vcpu-ubuntu-2404
if: |
false && ( startsWith(github.ref, 'refs/tags/v')
|| contains(github.event.pull_request.labels.*.name, 'run-bundling') )
needs: [linux_tests]
name: Build Zed on FreeBSD
steps:
- uses: actions/checkout@v4
- name: Build FreeBSD remote-server
id: freebsd-build
uses: vmactions/freebsd-vm@c3ae29a132c8ef1924775414107a97cac042aad5 # v1.2.0
with:
usesh: true
release: 13.5
copyback: true
prepare: |
pkg install -y \
bash curl jq git \
rustup-init cmake-core llvm-devel-lite pkgconf protobuf # ibx11 alsa-lib rust-bindgen-cli
run: |
freebsd-version
sysctl hw.model
sysctl hw.ncpu
sysctl hw.physmem
sysctl hw.usermem
git config --global --add safe.directory /home/runner/work/zed/zed
rustup-init --profile minimal --default-toolchain none -y
. "$HOME/.cargo/env"
./script/bundle-freebsd
mkdir -p out/
mv "target/zed-remote-server-freebsd-x86_64.gz" out/
rm -rf target/
cargo clean

- name: Upload Artifact to Workflow - zed-remote-server (run-bundling)
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
with:
name: zed-remote-server-${{ github.event.pull_request.head.sha || github.sha }}-x86_64-unknown-freebsd.gz
path: out/zed-remote-server-freebsd-x86_64.gz

- name: Upload Artifacts to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) }}
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: |
out/zed-remote-server-freebsd-x86_64.gz
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

nix-build:
name: Build with Nix
uses: ./.github/workflows/nix.yml
needs: [job_spec]
if: github.repository_owner == 'zed-industries' &&
(contains(github.event.pull_request.labels.*.name, 'run-nix') ||
needs.job_spec.outputs.run_nix == 'true')
secrets: inherit
with:
flake-output: debug
# excludes the final package to only cache dependencies
cachix-filter: "-zed-editor-[0-9.]*-nightly"

bundle-windows-x64:
timeout-minutes: 120
name: Create a Windows installer
runs-on: [self-32vcpu-windows-2022]
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
# if: (startsWith(github.ref, 'refs/tags/v') || contains(github.event.pull_request.labels.*.name, 'run-bundling'))
needs: [windows_tests]
env:
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: "http://timestamp.acs.microsoft.com"
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false

- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}

- name: Determine version and release channel
working-directory: ${{ env.ZED_WORKSPACE }}
if: ${{ startsWith(github.ref, 'refs/tags/v') }}
run: |
# This exports RELEASE_CHANNEL into env (GITHUB_ENV)
script/determine-release-channel.ps1

- name: Build Zed installer
working-directory: ${{ env.ZED_WORKSPACE }}
run: script/bundle-windows.ps1

- name: Upload installer (x86_64) to Workflow - zed (run-bundling)
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: contains(github.event.pull_request.labels.*.name, 'run-bundling')
with:
name: ZedEditorUserSetup-x64-${{ github.event.pull_request.head.sha || github.sha }}.exe
path: ${{ env.SETUP_PATH }}

- name: Upload Artifacts to release
uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v1
# Re-enable when we are ready to publish windows preview releases
if: ${{ !(contains(github.event.pull_request.labels.*.name, 'run-bundling')) && env.RELEASE_CHANNEL == 'preview' }} # upload only preview
with:
draft: true
prerelease: ${{ env.RELEASE_CHANNEL == 'preview' }}
files: ${{ env.SETUP_PATH }}
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

auto-release-preview:
name: Auto release preview
if: |
startsWith(github.ref, 'refs/tags/v')
&& endsWith(github.ref, '-pre') && !endsWith(github.ref, '.0-pre')
needs: [bundle-mac, bundle-linux-x86_x64, bundle-linux-aarch64, bundle-windows-x64]
runs-on:
- self-mini-macos
steps:
- name: gh release
run: gh release edit "$GITHUB_REF_NAME" --draft=false
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

- name: Create Sentry release
uses: getsentry/action-release@526942b68292201ac6bbb99b9a0747d4abee354c # v3
env:
SENTRY_ORG: zed-dev
SENTRY_PROJECT: zed
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
with:
environment: production

@@ -1,27 +1,30 @@
name: "Close Stale Issues"
on:
schedule:
- cron: "0 7,9,11 * * 3"
- cron: "0 11 * * 2"
workflow_dispatch:

jobs:
stale:
if: github.repository_owner == 'zed-industries'
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9
- uses: actions/stale@28ca1036281a5e5922ead5184a1bbf96e5fc984e # v9
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: >
Hi there! 👋

We're working to clean up our issue tracker by closing older issues that might not be relevant anymore. If you are able to reproduce this issue in the latest version of Zed, please let us know by commenting on this issue, and we will keep it open. If you can't reproduce it, feel free to close the issue yourself. Otherwise, we'll close it in 7 days.
We're working to clean up our issue tracker by closing older issues that might not be relevant anymore. Are you able to reproduce this issue in the latest version of Zed? If so, please let us know by commenting on this issue and we will keep it open; otherwise, we'll close it in 7 days. Feel free to open a new issue if you're seeing this message after the issue has been closed.

Thanks for your help!
close-issue-message: "This issue was closed due to inactivity. If you're still experiencing this problem, please open a new issue with a link to this issue."
days-before-stale: 120
close-issue-message: "This issue was closed due to inactivity. If you're still experiencing this problem, please open a new issue with a link to this issue."
# We will increase `days-before-stale` to 365 on or after Jan 24th,
# 2024. This date marks one year since migrating issues from
# 'community' to 'zed' repository. The migration added activity to all
# issues, preventing 365 days from working until then.
days-before-stale: 180
days-before-close: 7
any-of-issue-labels: "bug,panic / crash"
any-of-issue-labels: "defect,panic / crash"
operations-per-run: 1000
ascending: true
enable-statistics: true

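# Editorial note (not part of the workflow): with this configuration, actions/stale marks a
# matching issue as stale after `days-before-stale` days without activity, and then closes it
# `days-before-close` (7) days later unless there is new activity in the meantime.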
33 .github/workflows/community_delete_comments.yml vendored Normal file
@@ -0,0 +1,33 @@
name: Delete Mediafire Comments

on:
issue_comment:
types: [created]

permissions:
issues: write

jobs:
delete_comment:
runs-on: ubuntu-latest
steps:
- name: Check for specific strings in comment
id: check_comment
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7
with:
script: |
const comment = context.payload.comment.body;
const triggerStrings = ['www.mediafire.com'];
return triggerStrings.some(triggerString => comment.includes(triggerString));

- name: Delete comment if it contains any of the specific strings
if: steps.check_comment.outputs.result == 'true'
uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7
with:
script: |
const commentId = context.payload.comment.id;
await github.rest.issues.deleteComment({
owner: context.repo.owner,
repo: context.repo.repo,
comment_id: commentId
});
44 .github/workflows/community_release_actions.yml vendored
@@ -6,21 +6,19 @@ on:

jobs:
discord_release:
if: github.repository_owner == 'zed-industries'
runs-on: ubuntu-latest
steps:
- name: Get release URL
id: get-release-url
run: |
if [ "${{ github.event.release.prerelease }}" == "true" ]; then
URL="https://zed.dev/releases/preview/latest"
URL="https://zed.dev/releases/preview/latest"
else
URL="https://zed.dev/releases/stable/latest"
URL="https://zed.dev/releases/stable/latest"
fi

echo "URL=$URL" >> "$GITHUB_OUTPUT"
echo "::set-output name=URL::$URL"
- name: Get content
uses: 2428392/gh-truncate-string-action@b3ff790d21cf42af3ca7579146eedb93c8fb0757 # v1.4.1
uses: 2428392/gh-truncate-string-action@e6b5885fb83c81ca9a700a91b079baec2133be3e # v1.4.0
id: get-content
with:
stringToTruncate: |
@@ -34,37 +32,3 @@ jobs:
with:
webhook-url: ${{ secrets.DISCORD_WEBHOOK_URL }}
content: ${{ steps.get-content.outputs.string }}

send_release_notes_email:
if: github.repository_owner == 'zed-industries' && !github.event.release.prerelease
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
fetch-depth: 0

- name: Check if release was promoted from preview
id: check-promotion-from-preview
run: |
VERSION="${{ github.event.release.tag_name }}"
PREVIEW_TAG="${VERSION}-pre"

if git rev-parse "$PREVIEW_TAG" > /dev/null 2>&1; then
echo "was_promoted_from_preview=true" >> "$GITHUB_OUTPUT"
else
echo "was_promoted_from_preview=false" >> "$GITHUB_OUTPUT"
fi

- name: Send release notes email
if: steps.check-promotion-from-preview.outputs.was_promoted_from_preview == 'true'
run: |
TAG="${{ github.event.release.tag_name }}"
cat << 'EOF' > release_body.txt
${{ github.event.release.body }}
EOF
jq -n --arg tag "$TAG" --rawfile body release_body.txt '{version: $tag, markdown_body: $body}' \
> release_data.json
curl -X POST "https://zed.dev/api/send_release_notes_email" \
-H "Authorization: Bearer ${{ secrets.RELEASE_NOTES_API_TOKEN }}" \
-H "Content-Type: application/json" \
-d @release_data.json

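# Editorial note (sketch, not part of the workflow): the jq invocation above builds the
# request body sent to zed.dev. For an illustrative tag "v0.150.2" it produces JSON
# shaped like:
#   {"version": "v0.150.2", "markdown_body": "<release notes markdown...>"}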
@@ -8,11 +8,11 @@ on:
jobs:
update_top_ranking_issues:
runs-on: ubuntu-latest
if: github.repository == 'zed-industries/zed'
if: github.repository_owner == 'zed-industries'
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
- name: Set up uv
uses: astral-sh/setup-uv@caf0cab7a618c569241d31dcd442f54681755d39 # v3
uses: astral-sh/setup-uv@f3bcaebff5eace81a1c062af9f9011aae482ca9d # v3
with:
version: "latest"
enable-cache: true

@@ -8,11 +8,11 @@ on:
jobs:
update_top_ranking_issues:
runs-on: ubuntu-latest
if: github.repository == 'zed-industries/zed'
if: github.repository_owner == 'zed-industries'
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
- name: Set up uv
uses: astral-sh/setup-uv@caf0cab7a618c569241d31dcd442f54681755d39 # v3
uses: astral-sh/setup-uv@f3bcaebff5eace81a1c062af9f9011aae482ca9d # v3
with:
version: "latest"
enable-cache: true

7 .github/workflows/danger.yml vendored
@@ -11,18 +11,17 @@ on:

jobs:
danger:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4

- uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2 # v4.0.0
with:
version: 9

- name: Setup Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
uses: actions/setup-node@0a44ba7841725637a19e28fa30b79a866c81b0a6 # v4
with:
node-version: "20"
cache: "pnpm"

36 .github/workflows/deploy_cloudflare.yml vendored
@@ -9,51 +9,57 @@ jobs:
deploy-docs:
name: Deploy Docs
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-16x32-ubuntu-2204
runs-on: ubuntu-latest

steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
with:
clean: false

- name: Setup mdBook
uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08 # v2
with:
mdbook-version: "0.4.37"

- name: Set up default .cargo/config.toml
run: cp ./.cargo/collab-config.toml ./.cargo/config.toml

- name: Build docs
uses: ./.github/actions/build_docs
- name: Install system dependencies
run: |
sudo apt-get update
sudo apt-get install libxkbcommon-dev libxkbcommon-x11-dev

- name: Build book
run: |
set -euo pipefail
mkdir -p target/deploy
mdbook build ./docs --dest-dir=../target/deploy/docs/

- name: Deploy Docs
uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3
uses: cloudflare/wrangler-action@9681c2997648301493e78cacbfb790a9f19c833f # v3
with:
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
command: pages deploy target/deploy --project-name=docs

- name: Deploy Install
uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3
uses: cloudflare/wrangler-action@9681c2997648301493e78cacbfb790a9f19c833f # v3
with:
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
command: r2 object put -f script/install.sh zed-open-source-website-assets/install.sh

- name: Deploy Docs Workers
uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3
uses: cloudflare/wrangler-action@9681c2997648301493e78cacbfb790a9f19c833f # v3
with:
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
command: deploy .cloudflare/docs-proxy/src/worker.js

- name: Deploy Install Workers
uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3
uses: cloudflare/wrangler-action@9681c2997648301493e78cacbfb790a9f19c833f # v3
with:
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
command: deploy .cloudflare/docs-proxy/src/worker.js

- name: Preserve Wrangler logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: wrangler_logs
path: /home/runner/.config/.wrangler/logs/

38 .github/workflows/deploy_collab.yml vendored
@@ -12,13 +12,12 @@ env:
jobs:
style:
name: Check formatting and Clippy lints
if: github.repository_owner == 'zed-industries'
runs-on:
- self-hosted
- macOS
- test
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
with:
clean: false
fetch-depth: 0
@@ -33,11 +32,11 @@ jobs:
name: Run tests
runs-on:
- self-hosted
- macOS
- test
needs: style
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
with:
clean: false
fetch-depth: 0
@@ -45,7 +44,7 @@ jobs:
- name: Install cargo nextest
shell: bash -euxo pipefail {0}
run: |
cargo install cargo-nextest --locked
cargo install cargo-nextest

- name: Limit target directory size
shell: bash -euxo pipefail {0}
@@ -61,7 +60,7 @@ jobs:
- style
- tests
runs-on:
- namespace-profile-16x32-ubuntu-2204
- buildjet-16vcpu-ubuntu-2204
steps:
- name: Install doctl
uses: digitalocean/action-doctl@v2
@@ -72,19 +71,19 @@ jobs:
run: doctl registry login

- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
with:
clean: false

- name: Build docker image
run: |
docker build -f Dockerfile-collab \
--build-arg "GITHUB_SHA=$GITHUB_SHA" \
--tag "registry.digitalocean.com/zed/collab:$GITHUB_SHA" \
--build-arg GITHUB_SHA=$GITHUB_SHA \
--tag registry.digitalocean.com/zed/collab:$GITHUB_SHA \
.

- name: Publish docker image
run: docker push "registry.digitalocean.com/zed/collab:${GITHUB_SHA}"
run: docker push registry.digitalocean.com/zed/collab:${GITHUB_SHA}

- name: Prune Docker system
run: docker system prune --filter 'until=72h' -f
@@ -94,11 +93,11 @@ jobs:
needs:
- publish
runs-on:
- namespace-profile-16x32-ubuntu-2204
- buildjet-16vcpu-ubuntu-2204

steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
with:
clean: false

@@ -117,10 +116,12 @@ jobs:
export ZED_KUBE_NAMESPACE=production
export ZED_COLLAB_LOAD_BALANCER_SIZE_UNIT=10
export ZED_API_LOAD_BALANCER_SIZE_UNIT=2
export ZED_LLM_LOAD_BALANCER_SIZE_UNIT=2
elif [[ $GITHUB_REF_NAME = "collab-staging" ]]; then
export ZED_KUBE_NAMESPACE=staging
export ZED_COLLAB_LOAD_BALANCER_SIZE_UNIT=1
export ZED_API_LOAD_BALANCER_SIZE_UNIT=1
export ZED_LLM_LOAD_BALANCER_SIZE_UNIT=1
else
echo "cowardly refusing to deploy from an unknown branch"
exit 1
@@ -131,20 +132,23 @@ jobs:
source script/lib/deploy-helpers.sh
export_vars_for_environment $ZED_KUBE_NAMESPACE

ZED_DO_CERTIFICATE_ID="$(doctl compute certificate list --format ID --no-header)"
export ZED_DO_CERTIFICATE_ID
export ZED_DO_CERTIFICATE_ID=$(doctl compute certificate list --format ID --no-header)
export ZED_IMAGE_ID="registry.digitalocean.com/zed/collab:${GITHUB_SHA}"

export ZED_SERVICE_NAME=collab
export ZED_LOAD_BALANCER_SIZE_UNIT=$ZED_COLLAB_LOAD_BALANCER_SIZE_UNIT
export DATABASE_MAX_CONNECTIONS=850
envsubst < crates/collab/k8s/collab.template.yml | kubectl apply -f -
kubectl -n "$ZED_KUBE_NAMESPACE" rollout status deployment/$ZED_SERVICE_NAME --watch
echo "deployed ${ZED_SERVICE_NAME} to ${ZED_KUBE_NAMESPACE}"

export ZED_SERVICE_NAME=api
export ZED_LOAD_BALANCER_SIZE_UNIT=$ZED_API_LOAD_BALANCER_SIZE_UNIT
export DATABASE_MAX_CONNECTIONS=60
envsubst < crates/collab/k8s/collab.template.yml | kubectl apply -f -
kubectl -n "$ZED_KUBE_NAMESPACE" rollout status deployment/$ZED_SERVICE_NAME --watch
echo "deployed ${ZED_SERVICE_NAME} to ${ZED_KUBE_NAMESPACE}"

export ZED_SERVICE_NAME=llm
export ZED_LOAD_BALANCER_SIZE_UNIT=$ZED_LLM_LOAD_BALANCER_SIZE_UNIT
envsubst < crates/collab/k8s/collab.template.yml | kubectl apply -f -
kubectl -n "$ZED_KUBE_NAMESPACE" rollout status deployment/$ZED_SERVICE_NAME --watch
echo "deployed ${ZED_SERVICE_NAME} to ${ZED_KUBE_NAMESPACE}"

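# Editorial note (sketch, not part of the workflow): the three blocks above repeat one deploy
# pattern per service (collab, api, llm): export ZED_SERVICE_NAME and its sizing variables,
# render the shared k8s template with envsubst, apply it, then block on `kubectl rollout status`
# so a failed rollout fails the job. Illustrative shape:
#   export ZED_SERVICE_NAME=<service>
#   envsubst < crates/collab/k8s/collab.template.yml | kubectl apply -f -
#   kubectl -n "$ZED_KUBE_NAMESPACE" rollout status deployment/$ZED_SERVICE_NAME --watch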
37 .github/workflows/docs.yml vendored Normal file
@@ -0,0 +1,37 @@
name: Docs

on:
pull_request:
paths:
- "docs/**"
push:
branches:
- main

jobs:
check_formatting:
name: "Check formatting"
if: github.repository_owner == 'zed-industries'
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4

- uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2 # v4.0.0
with:
version: 9

- name: Prettier Check on /docs
working-directory: ./docs
run: |
pnpm dlx prettier . --check || {
echo "To fix, run from the root of the zed repo:"
echo " cd docs && pnpm dlx prettier . --write && cd .."
false
}

- name: Check for Typos with Typos-CLI
uses: crate-ci/typos@v1.24.6
with:
config: ./typos.toml
files: ./docs/
71 .github/workflows/eval.yml vendored
@@ -1,71 +0,0 @@
name: Run Agent Eval

on:
schedule:
- cron: "0 0 * * *"

pull_request:
branches:
- "**"
types: [synchronize, reopened, labeled]

workflow_dispatch:

concurrency:
# Allow only one workflow per any non-`main` branch.
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true

env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_EVAL_TELEMETRY: 1

jobs:
run_eval:
timeout-minutes: 60
name: Run Agent Eval
if: >
github.repository_owner == 'zed-industries' &&
(github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'run-eval'))
runs-on:
- namespace-profile-16x32-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"

- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false

- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"

- name: Install Linux dependencies
run: ./script/linux

- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml

- name: Compile eval
run: cargo build --package=eval

- name: Run eval
run: cargo run --package=eval -- --repetitions=8 --concurrency=1

# Even though the Linux runner is not stateful, and in theory there is no need to do this cleanup,
# we do it anyway to avoid potential issues in the future: if we switch to a stateful Linux runner
# and forget to add code to clean up the config file, this precaution already covers it.
# While it's not strictly necessary at the moment, it's better to err on the side of caution.
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo
33 .github/workflows/issue_response.yml vendored
@@ -1,33 +0,0 @@
name: Issue Response

on:
schedule:
- cron: "0 12 * * 2"
workflow_dispatch:

jobs:
issue-response:
if: github.repository_owner == 'zed-industries'
runs-on: ubuntu-latest

steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4

- uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2 # v4.0.0
with:
version: 9

- name: Setup Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
with:
node-version: "20"
cache: "pnpm"
cache-dependency-path: "script/issue_response/pnpm-lock.yaml"

- run: pnpm install --dir script/issue_response

- name: Run Issue Response
run: pnpm run --dir script/issue_response start
env:
ISSUE_RESPONSE_GITHUB_TOKEN: ${{ secrets.ISSUE_RESPONSE_GITHUB_TOKEN }}
SLACK_ISSUE_RESPONSE_WEBHOOK_URL: ${{ secrets.SLACK_ISSUE_RESPONSE_WEBHOOK_URL }}
69 .github/workflows/nix.yml vendored
@@ -1,69 +0,0 @@
name: "Nix build"
|
||||
|
||||
on:
|
||||
workflow_call:
|
||||
inputs:
|
||||
flake-output:
|
||||
type: string
|
||||
default: "default"
|
||||
cachix-filter:
|
||||
type: string
|
||||
default: ""
|
||||
|
||||
jobs:
|
||||
nix-build:
|
||||
timeout-minutes: 60
|
||||
name: (${{ matrix.system.os }}) Nix Build
|
||||
continue-on-error: true # TODO: remove when we want this to start blocking CI
|
||||
strategy:
|
||||
fail-fast: false
|
||||
matrix:
|
||||
system:
|
||||
- os: x86 Linux
|
||||
runner: namespace-profile-16x32-ubuntu-2204
|
||||
install_nix: true
|
||||
- os: arm Mac
|
||||
runner: [macOS, ARM64, test]
|
||||
install_nix: false
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
runs-on: ${{ matrix.system.runner }}
|
||||
env:
|
||||
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
|
||||
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
|
||||
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
|
||||
GIT_LFS_SKIP_SMUDGE: 1 # breaks the livekit rust sdk examples which we don't actually depend on
|
||||
steps:
|
||||
- name: Checkout repo
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
with:
|
||||
clean: false
|
||||
|
||||
# on our macs we manually install nix. for some reason the cachix action is running
|
||||
# under a non-login /bin/bash shell which doesn't source the proper script to add the
|
||||
# nix profile to PATH, so we manually add them here
|
||||
- name: Set path
|
||||
if: ${{ ! matrix.system.install_nix }}
|
||||
run: |
|
||||
echo "/nix/var/nix/profiles/default/bin" >> "$GITHUB_PATH"
|
||||
echo "/Users/administrator/.nix-profile/bin" >> "$GITHUB_PATH"
|
||||
|
||||
- uses: cachix/install-nix-action@02a151ada4993995686f9ed4f1be7cfbb229e56f # v31
|
||||
if: ${{ matrix.system.install_nix }}
|
||||
with:
|
||||
github_access_token: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
- uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad # v16
|
||||
with:
|
||||
name: zed
|
||||
authToken: "${{ secrets.CACHIX_AUTH_TOKEN }}"
|
||||
pushFilter: "${{ inputs.cachix-filter }}"
|
||||
cachixArgs: "-v"
|
||||
|
||||
- run: nix build .#${{ inputs.flake-output }} -L --accept-flake-config
|
||||
|
||||
- name: Limit /nix/store to 50GB on macs
|
||||
if: ${{ ! matrix.system.install_nix }}
|
||||
run: |
|
||||
if [ "$(du -sm /nix/store | cut -f1)" -gt 50000 ]; then
|
||||
nix-collect-garbage -d || true
|
||||
fi
|
||||
5 .github/workflows/publish_extension_cli.yml vendored
@@ -12,17 +12,16 @@ env:
jobs:
publish:
name: Publish zed-extension CLI
if: github.repository_owner == 'zed-industries'
runs-on:
- ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
with:
clean: false

- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
uses: swatinem/rust-cache@82a92a6e8fbeee089604da2575dc567ae9ddeaab # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
cache-provider: "github"

7 .github/workflows/randomized_tests.yml vendored
@@ -18,17 +18,16 @@ env:
jobs:
|
||||
tests:
|
||||
name: Run randomized tests
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
runs-on:
|
||||
- namespace-profile-16x32-ubuntu-2204
|
||||
- buildjet-16vcpu-ubuntu-2204
|
||||
steps:
|
||||
- name: Install Node
|
||||
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
|
||||
uses: actions/setup-node@0a44ba7841725637a19e28fa30b79a866c81b0a6 # v4
|
||||
with:
|
||||
node-version: "18"
|
||||
|
||||
- name: Checkout repo
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
|
||||
with:
|
||||
clean: false
|
||||
|
||||
|
||||
191
.github/workflows/release_nightly.yml
vendored
@@ -12,10 +12,6 @@ env:
|
||||
CARGO_TERM_COLOR: always
|
||||
CARGO_INCREMENTAL: 0
|
||||
RUST_BACKTRACE: 1
|
||||
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
|
||||
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
|
||||
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
|
||||
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
|
||||
|
||||
jobs:
|
||||
style:
|
||||
@@ -24,10 +20,10 @@ jobs:
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
runs-on:
|
||||
- self-hosted
|
||||
- macOS
|
||||
- test
|
||||
steps:
|
||||
- name: Checkout repo
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
|
||||
with:
|
||||
clean: false
|
||||
fetch-depth: 0
|
||||
@@ -44,64 +40,42 @@ jobs:
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
runs-on:
|
||||
- self-hosted
|
||||
- macOS
|
||||
- test
|
||||
needs: style
|
||||
steps:
|
||||
- name: Checkout repo
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
|
||||
with:
|
||||
clean: false
|
||||
|
||||
- name: Run tests
|
||||
uses: ./.github/actions/run_tests
|
||||
|
||||
windows-tests:
|
||||
timeout-minutes: 60
|
||||
name: Run tests on Windows
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
runs-on: [self-32vcpu-windows-2022]
|
||||
steps:
|
||||
- name: Checkout repo
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
with:
|
||||
clean: false
|
||||
|
||||
- name: Configure CI
|
||||
run: |
|
||||
New-Item -ItemType Directory -Path "./../.cargo" -Force
|
||||
Copy-Item -Path "./.cargo/ci-config.toml" -Destination "./../.cargo/config.toml"
|
||||
|
||||
- name: Run tests
|
||||
uses: ./.github/actions/run_tests_windows
|
||||
|
||||
- name: Limit target directory size
|
||||
run: ./script/clear-target-dir-if-larger-than.ps1 1024
|
||||
|
||||
- name: Clean CI config file
|
||||
if: always()
|
||||
run: Remove-Item -Recurse -Path "./../.cargo" -Force -ErrorAction SilentlyContinue
|
||||
|
||||
bundle-mac:
|
||||
timeout-minutes: 60
|
||||
name: Create a macOS bundle
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
runs-on:
|
||||
- self-mini-macos
|
||||
- self-hosted
|
||||
- bundle
|
||||
needs: tests
|
||||
env:
|
||||
MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}
|
||||
MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}
|
||||
APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}
|
||||
APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
|
||||
APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
|
||||
APPLE_NOTARIZATION_USERNAME: ${{ secrets.APPLE_NOTARIZATION_USERNAME }}
|
||||
APPLE_NOTARIZATION_PASSWORD: ${{ secrets.APPLE_NOTARIZATION_PASSWORD }}
|
||||
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
|
||||
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
|
||||
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
|
||||
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
|
||||
steps:
|
||||
- name: Install Node
|
||||
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
|
||||
uses: actions/setup-node@0a44ba7841725637a19e28fa30b79a866c81b0a6 # v4
|
||||
with:
|
||||
node-version: "18"
|
||||
|
||||
- name: Checkout repo
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
|
||||
with:
|
||||
clean: false
|
||||
|
||||
@@ -112,10 +86,8 @@ jobs:
|
||||
echo "Publishing version: ${version} on release channel nightly"
|
||||
echo "nightly" > crates/zed/RELEASE_CHANNEL
|
||||
|
||||
- name: Setup Sentry CLI
|
||||
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
|
||||
with:
|
||||
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
|
||||
- name: Generate license file
|
||||
run: script/generate-licenses
|
||||
|
||||
- name: Create macOS app bundle
|
||||
run: script/bundle-mac
|
||||
@@ -128,25 +100,25 @@ jobs:
|
||||
name: Create a Linux *.tar.gz bundle for x86
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
runs-on:
|
||||
- namespace-profile-16x32-ubuntu-2004 # ubuntu 20.04 for minimal glibc
|
||||
- buildjet-16vcpu-ubuntu-2004
|
||||
needs: tests
|
||||
env:
|
||||
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
|
||||
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
|
||||
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
|
||||
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
|
||||
steps:
|
||||
- name: Checkout repo
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
|
||||
with:
|
||||
clean: false
|
||||
|
||||
- name: Add Rust to the PATH
|
||||
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
|
||||
run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH
|
||||
|
||||
- name: Install Linux dependencies
|
||||
run: ./script/linux && ./script/install-mold 2.34.0
|
||||
|
||||
- name: Setup Sentry CLI
|
||||
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
|
||||
with:
|
||||
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
|
||||
|
||||
- name: Limit target directory size
|
||||
run: script/clear-target-dir-if-larger-than 100
|
||||
|
||||
@@ -168,22 +140,22 @@ jobs:
|
||||
name: Create a Linux *.tar.gz bundle for ARM
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
runs-on:
|
||||
- namespace-profile-8x32-ubuntu-2004-arm-m4 # ubuntu 20.04 for minimal glibc
|
||||
- hosted-linux-arm-1
|
||||
needs: tests
|
||||
env:
|
||||
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
|
||||
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
|
||||
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
|
||||
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
|
||||
steps:
|
||||
- name: Checkout repo
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
|
||||
with:
|
||||
clean: false
|
||||
|
||||
- name: Install Linux dependencies
|
||||
run: ./script/linux
|
||||
|
||||
- name: Setup Sentry CLI
|
||||
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
|
||||
with:
|
||||
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
|
||||
|
||||
- name: Limit target directory size
|
||||
run: script/clear-target-dir-if-larger-than 100
|
||||
|
||||
@@ -200,105 +172,17 @@ jobs:
|
||||
- name: Upload Zed Nightly
|
||||
run: script/upload-nightly linux-targz
|
||||
|
||||
freebsd:
|
||||
timeout-minutes: 60
|
||||
if: false && github.repository_owner == 'zed-industries'
|
||||
runs-on: github-8vcpu-ubuntu-2404
|
||||
needs: tests
|
||||
name: Build Zed on FreeBSD
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- name: Build FreeBSD remote-server
|
||||
id: freebsd-build
|
||||
uses: vmactions/freebsd-vm@c3ae29a132c8ef1924775414107a97cac042aad5 # v1.2.0
|
||||
with:
|
||||
# envs: "MYTOKEN MYTOKEN2"
|
||||
usesh: true
|
||||
release: 13.5
|
||||
copyback: true
|
||||
prepare: |
|
||||
pkg install -y \
|
||||
bash curl jq git \
|
||||
rustup-init cmake-core llvm-devel-lite pkgconf protobuf # ibx11 alsa-lib rust-bindgen-cli
|
||||
run: |
|
||||
freebsd-version
|
||||
sysctl hw.model
|
||||
sysctl hw.ncpu
|
||||
sysctl hw.physmem
|
||||
sysctl hw.usermem
|
||||
git config --global --add safe.directory /home/runner/work/zed/zed
|
||||
rustup-init --profile minimal --default-toolchain none -y
|
||||
. "$HOME/.cargo/env"
|
||||
./script/bundle-freebsd
|
||||
mkdir -p out/
|
||||
mv "target/zed-remote-server-freebsd-x86_64.gz" out/
|
||||
rm -rf target/
|
||||
cargo clean
|
||||
|
||||
- name: Upload Zed Nightly
|
||||
run: script/upload-nightly freebsd
|
||||
|
||||
bundle-nix:
|
||||
name: Build and cache Nix package
|
||||
needs: tests
|
||||
secrets: inherit
|
||||
uses: ./.github/workflows/nix.yml
|
||||
|
||||
bundle-windows-x64:
|
||||
timeout-minutes: 60
|
||||
name: Create a Windows installer
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
runs-on: [self-32vcpu-windows-2022]
|
||||
needs: windows-tests
|
||||
env:
|
||||
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
|
||||
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
|
||||
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
|
||||
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
|
||||
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
|
||||
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
|
||||
FILE_DIGEST: SHA256
|
||||
TIMESTAMP_DIGEST: SHA256
|
||||
TIMESTAMP_SERVER: "http://timestamp.acs.microsoft.com"
|
||||
steps:
|
||||
- name: Checkout repo
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
with:
|
||||
clean: false
|
||||
|
||||
- name: Set release channel to nightly
|
||||
working-directory: ${{ env.ZED_WORKSPACE }}
|
||||
run: |
|
||||
$ErrorActionPreference = "Stop"
|
||||
$version = git rev-parse --short HEAD
|
||||
Write-Host "Publishing version: $version on release channel nightly"
|
||||
"nightly" | Set-Content -Path "crates/zed/RELEASE_CHANNEL"
|
||||
|
||||
- name: Setup Sentry CLI
|
||||
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
|
||||
with:
|
||||
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
|
||||
|
||||
- name: Build Zed installer
|
||||
working-directory: ${{ env.ZED_WORKSPACE }}
|
||||
run: script/bundle-windows.ps1
|
||||
|
||||
- name: Upload Zed Nightly
|
||||
working-directory: ${{ env.ZED_WORKSPACE }}
|
||||
run: script/upload-nightly.ps1 windows
|
||||
|
||||
update-nightly-tag:
|
||||
name: Update nightly tag
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
runs-on: namespace-profile-2x4-ubuntu-2404
|
||||
runs-on: ubuntu-latest
|
||||
needs:
|
||||
- bundle-mac
|
||||
- bundle-linux-x86
|
||||
- bundle-linux-arm
|
||||
- bundle-windows-x64
|
||||
steps:
|
||||
- name: Checkout repo
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
uses: actions/checkout@eef61447b9ff4aafe5dcd4e0bbf5d482be7e7871 # v4
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
@@ -312,12 +196,3 @@ jobs:
|
||||
git config user.email github-actions@github.com
|
||||
git tag -f nightly
|
||||
git push origin nightly --force
|
||||
|
||||
- name: Create Sentry release
|
||||
uses: getsentry/action-release@526942b68292201ac6bbb99b9a0747d4abee354c # v3
|
||||
env:
|
||||
SENTRY_ORG: zed-dev
|
||||
SENTRY_PROJECT: zed
|
||||
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
|
||||
with:
|
||||
environment: production
|
||||
|
||||
21
.github/workflows/script_checks.yml
vendored
@@ -1,21 +0,0 @@
|
||||
name: Script
|
||||
|
||||
on:
|
||||
pull_request:
|
||||
paths:
|
||||
- "script/**"
|
||||
push:
|
||||
branches:
|
||||
- main
|
||||
|
||||
jobs:
|
||||
shellcheck:
|
||||
name: "ShellCheck Scripts"
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
runs-on: namespace-profile-2x4-ubuntu-2404
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
- name: Shellcheck ./scripts
|
||||
run: |
|
||||
./script/shellcheck-scripts error
|
||||
86
.github/workflows/unit_evals.yml
vendored
@@ -1,86 +0,0 @@
|
||||
name: Run Unit Evals
|
||||
|
||||
on:
|
||||
schedule:
|
||||
# GitHub might drop jobs at busy times, so we choose a random time in the middle of the night.
|
||||
- cron: "47 1 * * 2"
|
||||
workflow_dispatch:
|
||||
|
||||
concurrency:
|
||||
# Allow only one workflow per any non-`main` branch.
|
||||
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
|
||||
cancel-in-progress: true
|
||||
|
||||
env:
|
||||
CARGO_TERM_COLOR: always
|
||||
CARGO_INCREMENTAL: 0
|
||||
RUST_BACKTRACE: 1
|
||||
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
|
||||
|
||||
jobs:
|
||||
unit_evals:
|
||||
if: github.repository_owner == 'zed-industries'
|
||||
timeout-minutes: 60
|
||||
name: Run unit evals
|
||||
runs-on:
|
||||
- namespace-profile-16x32-ubuntu-2204
|
||||
steps:
|
||||
- name: Add Rust to the PATH
|
||||
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
|
||||
|
||||
- name: Checkout repo
|
||||
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
|
||||
with:
|
||||
clean: false
|
||||
|
||||
- name: Cache dependencies
|
||||
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
|
||||
with:
|
||||
save-if: ${{ github.ref == 'refs/heads/main' }}
|
||||
# cache-provider: "buildjet"
|
||||
|
||||
- name: Install Linux dependencies
|
||||
run: ./script/linux
|
||||
|
||||
- name: Configure CI
|
||||
run: |
|
||||
mkdir -p ./../.cargo
|
||||
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
|
||||
|
||||
- name: Install Rust
|
||||
shell: bash -euxo pipefail {0}
|
||||
run: |
|
||||
cargo install cargo-nextest --locked
|
||||
|
||||
- name: Install Node
|
||||
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
|
||||
with:
|
||||
node-version: "18"
|
||||
|
||||
- name: Limit target directory size
|
||||
shell: bash -euxo pipefail {0}
|
||||
run: script/clear-target-dir-if-larger-than 100
|
||||
|
||||
- name: Run unit evals
|
||||
shell: bash -euxo pipefail {0}
|
||||
run: cargo nextest run --workspace --no-fail-fast --features eval --no-capture -E 'test(::eval_)'
|
||||
env:
|
||||
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
|
||||
- name: Send failure message to Slack channel if needed
|
||||
if: ${{ failure() }}
|
||||
uses: slackapi/slack-github-action@b0fa283ad8fea605de13dc3f449259339835fc52
|
||||
with:
|
||||
method: chat.postMessage
|
||||
token: ${{ secrets.SLACK_APP_ZED_UNIT_EVALS_BOT_TOKEN }}
|
||||
payload: |
|
||||
channel: C04UDRNNJFQ
|
||||
text: "Unit Evals Failed: https://github.com/zed-industries/zed/actions/runs/${{ github.run_id }}"
|
||||
|
||||
# Since the Linux runner is not stateful, in theory there is no need to do this cleanup.
# But, to avoid potential issues in the future if we choose to use a stateful Linux runner and forget to add code
# to clean up the config file, I've included the cleanup code here as a precaution.
# While it's not strictly necessary at this moment, I believe it's better to err on the side of caution.
|
||||
- name: Clean CI config file
|
||||
if: always()
|
||||
run: rm -rf ./../.cargo
|
||||
57
.gitignore
vendored
@@ -1,39 +1,34 @@
|
||||
**/*.db
|
||||
**/cargo-target
|
||||
**/target
|
||||
**/venv
|
||||
**/.direnv
|
||||
*.wasm
|
||||
*.xcodeproj
|
||||
.DS_Store
|
||||
.blob_store
|
||||
.build
|
||||
.envrc
|
||||
.flatpak-builder
|
||||
/.direnv
|
||||
.idea
|
||||
.netrc
|
||||
*.pyc
|
||||
.pytest_cache
|
||||
.swiftpm
|
||||
.swiftpm/config/registries.json
|
||||
.swiftpm/xcode/package.xcworkspace/contents.xcworkspacedata
|
||||
.venv
|
||||
.vscode
|
||||
.wrangler
|
||||
/assets/*licenses.*
|
||||
/crates/collab/seed.json
|
||||
/crates/theme/schemas/theme.json
|
||||
/crates/zed/resources/flatpak/flatpak-cargo-sources.json
|
||||
/dev.zed.Zed*.json
|
||||
/node_modules/
|
||||
**/target
|
||||
**/cargo-target
|
||||
/zed.xcworkspace
|
||||
.DS_Store
|
||||
/plugins/bin
|
||||
/script/node_modules
|
||||
/snap
|
||||
/zed.xcworkspace
|
||||
DerivedData/
|
||||
/crates/theme/schemas/theme.json
|
||||
/crates/collab/seed.json
|
||||
/crates/zed/resources/flatpak/flatpak-cargo-sources.json
|
||||
/dev.zed.Zed*.json
|
||||
/assets/*licenses.*
|
||||
**/venv
|
||||
.build
|
||||
*.wasm
|
||||
Packages
|
||||
*.xcodeproj
|
||||
xcuserdata/
|
||||
DerivedData/
|
||||
.swiftpm/config/registries.json
|
||||
.swiftpm/xcode/package.xcworkspace/contents.xcworkspacedata
|
||||
.netrc
|
||||
.swiftpm
|
||||
**/*.db
|
||||
.pytest_cache
|
||||
.venv
|
||||
.blob_store
|
||||
.vscode
|
||||
.wrangler
|
||||
.flatpak-builder
|
||||
|
||||
# Don't commit any secrets to the repo.
|
||||
.env
|
||||
.env.secret.toml
|
||||
|
||||
55
.mailmap
@@ -9,8 +9,6 @@
|
||||
# Keep these entries sorted alphabetically.
|
||||
# In Zed: `editor: sort lines case insensitive`
|
||||
|
||||
Agus Zubiaga <agus@zed.dev>
|
||||
Agus Zubiaga <agus@zed.dev> <hi@aguz.me>
|
||||
Alex Viscreanu <alexviscreanu@gmail.com>
|
||||
Alex Viscreanu <alexviscreanu@gmail.com> <alexandru.viscreanu@kiwi.com>
|
||||
Alexander Mankuta <alex@pointless.one>
|
||||
@@ -19,41 +17,24 @@ amtoaer <amtoaer@gmail.com>
|
||||
amtoaer <amtoaer@gmail.com> <amtoaer@outlook.com>
|
||||
Andrei Zvonimir Crnković <andrei@0x7f.dev>
|
||||
Andrei Zvonimir Crnković <andrei@0x7f.dev> <andreicek@0x7f.dev>
|
||||
Angelk90 <angelo.k90@hotmail.it>
|
||||
Angelk90 <angelo.k90@hotmail.it> <20476002+Angelk90@users.noreply.github.com>
|
||||
Antonio Scandurra <me@as-cii.com>
|
||||
Antonio Scandurra <me@as-cii.com> <antonio@zed.dev>
|
||||
Ben Kunkle <ben@zed.dev>
|
||||
Ben Kunkle <ben@zed.dev> <ben.kunkle@gmail.com>
|
||||
Bennet Bo Fenner <bennet@zed.dev>
|
||||
Bennet Bo Fenner <bennet@zed.dev> <53836821+bennetbo@users.noreply.github.com>
|
||||
Bennet Bo Fenner <bennet@zed.dev> <bennetbo@gmx.de>
|
||||
Boris Cherny <boris@anthropic.com>
|
||||
Boris Cherny <boris@anthropic.com> <boris@performancejs.com>
|
||||
Brian Tan <brian.tan88@gmail.com>
|
||||
Chris Hayes <chris+git@hayes.software>
|
||||
Christian Bergschneider <christian.bergschneider@gmx.de>
|
||||
Christian Bergschneider <christian.bergschneider@gmx.de> <magiclake@gmx.de>
|
||||
Conrad Irwin <conrad@zed.dev>
|
||||
Conrad Irwin <conrad@zed.dev> <conrad.irwin@gmail.com>
|
||||
Dairon Medina <dairon.medina@gmail.com>
|
||||
Danilo Leal <danilo@zed.dev>
|
||||
Danilo Leal <danilo@zed.dev> <67129314+danilo-leal@users.noreply.github.com>
|
||||
Edwin Aronsson <75266237+4teapo@users.noreply.github.com>
|
||||
Elvis Pranskevichus <elvis@geldata.com>
|
||||
Elvis Pranskevichus <elvis@geldata.com> <elvis@magic.io>
|
||||
Evren Sen <nervenes@icloud.com>
|
||||
Evren Sen <nervenes@icloud.com> <146845123+evrensen467@users.noreply.github.com>
|
||||
Evren Sen <nervenes@icloud.com> <146845123+evrsen@users.noreply.github.com>
|
||||
Fernando Tagawa <tagawafernando@gmail.com>
|
||||
Fernando Tagawa <tagawafernando@gmail.com> <fernando.tagawa.gamail.com@gmail.com>
|
||||
Finn Evers <dev@bahn.sh>
|
||||
Finn Evers <dev@bahn.sh> <75036051+MrSubidubi@users.noreply.github.com>
|
||||
Finn Evers <dev@bahn.sh> <finn.evers@outlook.de>
|
||||
Gowtham K <73059450+dovakin0007@users.noreply.github.com>
|
||||
Greg Morenz <greg-morenz@droid.cafe>
|
||||
Greg Morenz <greg-morenz@droid.cafe> <morenzg@gmail.com>
|
||||
Ihnat Aŭtuška <autushka.ihnat@gmail.com>
|
||||
Ivan Žužak <izuzak@gmail.com>
|
||||
Ivan Žužak <izuzak@gmail.com> <ivan.zuzak@github.com>
|
||||
Joseph T. Lyons <JosephTLyons@gmail.com>
|
||||
@@ -68,31 +49,20 @@ Kirill Bulatov <kirill@zed.dev>
|
||||
Kirill Bulatov <kirill@zed.dev> <mail4score@gmail.com>
|
||||
Kyle Caverly <kylebcaverly@gmail.com>
|
||||
Kyle Caverly <kylebcaverly@gmail.com> <kyle@zed.dev>
|
||||
Lilith Iris <itslirissama@gmail.com>
|
||||
Lilith Iris <itslirissama@gmail.com> <83819417+Irilith@users.noreply.github.com>
|
||||
LoganDark <contact@logandark.mozmail.com>
|
||||
LoganDark <contact@logandark.mozmail.com> <git@logandark.mozmail.com>
|
||||
LoganDark <contact@logandark.mozmail.com> <github@logandark.mozmail.com>
|
||||
Marko Kungla <marko.kungla@gmail.com>
|
||||
Marko Kungla <marko.kungla@gmail.com> <marko@mkungla.dev>
|
||||
Marshall Bowers <git@maxdeviant.com>
|
||||
Marshall Bowers <git@maxdeviant.com> <elliott.codes@gmail.com>
|
||||
Marshall Bowers <git@maxdeviant.com> <marshall@zed.dev>
|
||||
Marshall Bowers <elliott.codes@gmail.com>
|
||||
Marshall Bowers <elliott.codes@gmail.com> <marshall@zed.dev>
|
||||
Matt Fellenz <matt@felle.nz>
|
||||
Matt Fellenz <matt@felle.nz> <matt+github@felle.nz>
|
||||
Max Brunsfeld <maxbrunsfeld@gmail.com>
|
||||
Max Brunsfeld <maxbrunsfeld@gmail.com> <max@zed.dev>
|
||||
Max Linke <maxlinke88@gmail.com>
|
||||
Max Linke <maxlinke88@gmail.com> <kain88-de@users.noreply.github.com>
|
||||
Michael Sloan <michael@zed.dev>
|
||||
Michael Sloan <michael@zed.dev> <mgsloan@gmail.com>
|
||||
Michael Sloan <michael@zed.dev> <mgsloan@google.com>
|
||||
Mikayla Maki <mikayla@zed.dev>
|
||||
Mikayla Maki <mikayla@zed.dev> <mikayla.c.maki@gmail.com>
|
||||
Mikayla Maki <mikayla@zed.dev> <mikayla.c.maki@icloud.com>
|
||||
Morgan Krey <morgan@zed.dev>
|
||||
Muhammad Talal Anwar <mail@talal.io>
|
||||
Muhammad Talal Anwar <mail@talal.io> <talalanwar@outlook.com>
|
||||
Nate Butler <iamnbutler@gmail.com>
|
||||
Nate Butler <iamnbutler@gmail.com> <nate@zed.dev>
|
||||
Nathan Sobo <nathan@zed.dev>
|
||||
@@ -116,32 +86,17 @@ Robert Clover <git@clo4.net>
|
||||
Robert Clover <git@clo4.net> <robert@clover.gdn>
|
||||
Roy Williams <roy.williams.iii@gmail.com>
|
||||
Roy Williams <roy.williams.iii@gmail.com> <roy@anthropic.com>
|
||||
Sebastijan Kelnerič <sebastijan.kelneric@sebba.dev>
|
||||
Sebastijan Kelnerič <sebastijan.kelneric@sebba.dev> <sebastijan.kelneric@vichava.com>
|
||||
Sergey Onufrienko <sergey@onufrienko.com>
|
||||
Shish <webmaster@shishnet.org>
|
||||
Shish <webmaster@shishnet.org> <shish@shishnet.org>
|
||||
Smit Barmase <0xtimsb@gmail.com>
|
||||
Smit Barmase <0xtimsb@gmail.com> <smit@zed.dev>
|
||||
Thomas <github.thomaub@gmail.com>
|
||||
Thomas <github.thomaub@gmail.com> <thomas.aubry94@gmail.com>
|
||||
Thomas <github.thomaub@gmail.com> <thomas.aubry@paylead.fr>
|
||||
Thomas Heartman <thomasheartman+github@gmail.com>
|
||||
Thomas Heartman <thomasheartman+github@gmail.com> <thomas@getunleash.io>
|
||||
Thomas Mickley-Doyle <tmickleydoyle@gmail.com>
|
||||
Thomas Mickley-Doyle <tmickleydoyle@gmail.com> <thomas@zed.dev>
|
||||
Thorben Kröger <dev@thorben.net>
|
||||
Thorben Kröger <dev@thorben.net> <thorben.kroeger@hexagon.com>
|
||||
Thorsten Ball <mrnugget@gmail.com>
|
||||
Thorsten Ball <mrnugget@gmail.com> <me@thorstenball.com>
|
||||
Thorsten Ball <mrnugget@gmail.com> <thorsten@zed.dev>
|
||||
Thorsten Ball <thorsten@zed.dev>
|
||||
Thorsten Ball <thorsten@zed.dev> <me@thorstenball.com>
|
||||
Thorsten Ball <thorsten@zed.dev> <mrnugget@gmail.com>
|
||||
Tristan Hume <tris.hume@gmail.com>
|
||||
Tristan Hume <tris.hume@gmail.com> <tristan@anthropic.com>
|
||||
Uladzislau Kaminski <i@uladkaminski.com>
|
||||
Uladzislau Kaminski <i@uladkaminski.com> <uladzislau_kaminski@epam.com>
|
||||
Vitaly Slobodin <vitaliy.slobodin@gmail.com>
|
||||
Vitaly Slobodin <vitaliy.slobodin@gmail.com> <vitaly_slobodin@fastmail.com>
|
||||
Will Bradley <williambbradley@gmail.com>
|
||||
Will Bradley <williambbradley@gmail.com> <will@zed.dev>
|
||||
WindSoilder <WindSoilder@outlook.com>
|
||||
张小白 <364772080@qq.com>
|
||||
|
||||
@@ -1,3 +0,0 @@
|
||||
{
|
||||
"printWidth": 120
|
||||
}
|
||||
130
.rules
@@ -1,130 +0,0 @@
# Rust coding guidelines

* Prioritize code correctness and clarity. Speed and efficiency are secondary priorities unless otherwise specified.
* Do not write organizational comments or comments that summarize the code. Comments should only be written to explain "why" the code is written in some way when the reason is tricky or non-obvious.
* Prefer implementing functionality in existing files unless it is a new logical component. Avoid creating many small files.
* Avoid using functions that panic like `unwrap()`; instead, use mechanisms like `?` to propagate errors.
* Be careful with operations like indexing, which may panic if the indexes are out of bounds.
* Never silently discard errors with `let _ =` on fallible operations. Always handle errors appropriately (see the sketch after this list):
  - Propagate errors with `?` when the calling function should handle them
  - Use `.log_err()` or similar when you need to ignore errors but want visibility
  - Use explicit error handling with `match` or `if let Err(...)` when you need custom logic
  - Example: avoid `let _ = client.request(...).await?;` - use `client.request(...).await?;` instead
* When implementing async operations that may fail, ensure errors propagate to the UI layer so users get meaningful feedback.
* Never create files with `mod.rs` paths - prefer `src/some_module.rs` instead of `src/some_module/mod.rs`.
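
A minimal sketch of the error-handling rules above, using only `anyhow` (already a workspace dependency); `send_request` is a hypothetical helper invented for illustration, not code from this repository:

```
use anyhow::{Context as _, Result};

// Hypothetical fallible operation, used only for illustration.
async fn send_request(body: &str) -> Result<String> {
    Ok(format!("echo: {body}"))
}

// Preferred: propagate the error with `?` so the caller decides what to do.
async fn save_settings(settings_json: &str) -> Result<()> {
    let _response = send_request(settings_json)
        .await
        .context("failed to save settings")?;
    Ok(())
}

// When custom logic is needed, handle the error explicitly instead of `let _ = ...`.
async fn save_settings_with_fallback(settings_json: &str) -> String {
    match send_request(settings_json).await {
        Ok(response) => response,
        Err(error) => format!("save failed, keeping previous settings: {error:#}"),
    }
}
```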

# GPUI

GPUI is a UI framework which also provides primitives for state and concurrency management.

## Context

Context types allow interaction with global state, windows, entities, and system services. They are typically passed to functions as the argument named `cx`. When a function takes callbacks they come after the `cx` parameter.

* `App` is the root context type, providing access to global state and read and update of entities.
* `Context<T>` is provided when updating an `Entity<T>`. This context dereferences into `App`, so functions which take `&App` can also take `&Context<T>`.
* `AsyncApp` and `AsyncWindowContext` are provided by `cx.spawn` and `cx.spawn_in`. These can be held across await points.
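
A rough sketch of how these context types relate, assuming the `gpui` API surface described in this section (the `Counter` entity is invented for illustration):

```
use gpui::{App, Context};

struct Counter {
    count: usize,
}

// A helper that only needs the root context type.
fn log_count(count: usize, _cx: &App) {
    println!("count is now {count}");
}

impl Counter {
    // `Context<Counter>` dereferences into `App`, so it can be passed
    // wherever `&App` is expected.
    fn increment(&mut self, cx: &mut Context<Self>) {
        self.count += 1;
        log_count(self.count, cx);
        cx.notify(); // schedule a re-render of any view backed by this entity
    }
}
```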

## `Window`

`Window` provides access to the state of an application window. It is passed to functions as an argument named `window` and comes before `cx` when present. It is used for managing focus, dispatching actions, directly drawing, getting user input state, etc.

## Entities

An `Entity<T>` is a handle to state of type `T`. With `thing: Entity<T>`:

* `thing.entity_id()` returns `EntityId`
* `thing.downgrade()` returns `WeakEntity<T>`
* `thing.read(cx: &App)` returns `&T`.
* `thing.read_with(cx, |thing: &T, cx: &App| ...)` returns the closure's return value.
* `thing.update(cx, |thing: &mut T, cx: &mut Context<T>| ...)` allows the closure to mutate the state, and provides a `Context<T>` for interacting with the entity. It returns the closure's return value.
* `thing.update_in(cx, |thing: &mut T, window: &mut Window, cx: &mut Context<T>| ...)` takes an `AsyncWindowContext` or `VisualTestContext`. It's the same as `update` while also providing the `Window`.

Within the closures, the inner `cx` provided to the closure must be used instead of the outer `cx` to avoid issues with multiple borrows.

Trying to update an entity while it's already being updated must be avoided as this will cause a panic.

When `read_with`, `update`, or `update_in` are used with an async context, the closure's return value is wrapped in an `anyhow::Result`.

`WeakEntity<T>` is a weak handle. It has `read_with`, `update`, and `update_in` methods that work the same, but always return an `anyhow::Result` so that they can fail if the entity no longer exists. This can be useful to avoid memory leaks - if entities have mutually recursive handles to each other they will never be dropped.
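
A small sketch of reading and updating an entity through its handle, based on the calls listed above (the `Counter` entity is again hypothetical):

```
use gpui::{App, Entity};

struct Counter {
    count: usize,
}

fn bump(counter: &Entity<Counter>, cx: &mut App) {
    // Read-only access through the handle.
    let before = counter.read(cx).count;

    // Mutate through `update`; the closure receives `&mut Counter` and a `Context<Counter>`,
    // and `update` returns whatever the closure returns.
    let after = counter.update(cx, |counter, cx| {
        counter.count += 1;
        cx.notify();
        counter.count
    });

    assert_eq!(after, before + 1);
}
```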

## Concurrency

All use of entities and UI rendering occurs on a single foreground thread.

`cx.spawn(async move |cx| ...)` runs an async closure on the foreground thread. Within the closure, `cx` is an async context like `AsyncApp` or `AsyncWindowContext`.

When the outer cx is a `Context<T>`, the use of `spawn` instead looks like `cx.spawn(async move |handle, cx| ...)`, where `handle: WeakEntity<T>`.

To do work on other threads, `cx.background_spawn(async move { ... })` is used. Often this background task is awaited on by a foreground task which uses the results to update state.

Both `cx.spawn` and `cx.background_spawn` return a `Task<R>`, which is a future that can be awaited upon. If this task is dropped, then its work is cancelled. To prevent this one of the following must be done:

* Awaiting the task in some other async context.
* Detaching the task via `task.detach()` or `task.detach_and_log_err(cx)`, allowing it to run indefinitely.
* Storing the task in a field, if the work should be halted when the struct is dropped.

A task which doesn't do anything but provide a value can be created with `Task::ready(value)`.
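
A sketch of the foreground/background split described above, assuming the spawn APIs quoted in this section; the entity and the "expensive" computation are placeholders:

```
use gpui::{Context, Task};

struct WordCounter {
    word_count: Option<usize>,
    // Holding the task in a field means the work is cancelled if the entity is dropped.
    _recount: Option<Task<()>>,
}

impl WordCounter {
    fn recount(&mut self, text: String, cx: &mut Context<Self>) {
        // Heavy work happens off the foreground thread.
        let counting = cx.background_spawn(async move { text.split_whitespace().count() });

        // A foreground task awaits the result and writes it back into the entity.
        self._recount = Some(cx.spawn(async move |this, cx| {
            let count = counting.await;
            // `WeakEntity::update` fails if the entity was dropped; there is nothing
            // left to update in that case, so the error is deliberately ignored.
            this.update(cx, |this, cx| {
                this.word_count = Some(count);
                cx.notify();
            })
            .ok();
        }));
    }
}
```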

## Elements

The `Render` trait is used to render some state into an element tree that is laid out using flexbox layout. An `Entity<T>` where `T` implements `Render` is sometimes called a "view".

Example:

```
struct TextWithBorder(SharedString);

impl Render for TextWithBorder {
    fn render(&mut self, _window: &mut Window, _cx: &mut Context<Self>) -> impl IntoElement {
        div().border_1().child(self.0.clone())
    }
}
```

Since `impl IntoElement for SharedString` exists, it can be used as an argument to `child`. `SharedString` is used to avoid copying strings, and is either an `&'static str` or `Arc<str>`.

UI components that are constructed just to be turned into elements can instead implement the `RenderOnce` trait, which is similar to `Render`, but its `render` method takes ownership of `self`. Types that implement this trait can use `#[derive(IntoElement)]` to use them directly as children.

The style methods on elements are similar to those used by Tailwind CSS.

If some attributes or children of an element tree are conditional, `.when(condition, |this| ...)` can be used to run the closure only when `condition` is true. Similarly, `.when_some(option, |this, value| ...)` runs the closure when the `Option` has a value.

## Input events

Input event handlers can be registered on an element via methods like `.on_click(|event, window, cx: &mut App| ...)`.

Often event handlers will want to update the entity that's in the current `Context<T>`. The `cx.listener` method provides this - its use looks like `.on_click(cx.listener(|this: &mut T, event, window, cx: &mut Context<T>| ...))`.
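
A sketch of a view whose click handler updates the owning entity via `cx.listener`, assuming gpui's `div()` builder, `ClickEvent` type, and the `.id`/`.when` helpers (all names here are illustrative):

```
use gpui::{div, prelude::*, ClickEvent, Context, Render, Window};

struct ClickCounter {
    clicks: usize,
}

impl Render for ClickCounter {
    fn render(&mut self, _window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
        div()
            // Click handlers need a stateful element, hence the id.
            .id("click-counter")
            .child(format!("clicked {} times", self.clicks))
            // Conditionally add a child only after the first click.
            .when(self.clicks > 0, |this| this.child("thanks for clicking"))
            // `cx.listener` adapts a closure over `&mut ClickCounter` into an event handler.
            .on_click(cx.listener(|this, _event: &ClickEvent, _window, cx| {
                this.clicks += 1;
                cx.notify();
            }))
    }
}
```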

## Actions

Actions are dispatched via user keyboard interaction or in code via `window.dispatch_action(SomeAction.boxed_clone(), cx)` or `focus_handle.dispatch_action(&SomeAction, window, cx)`.

Actions with no data are defined with the `actions!(some_namespace, [SomeAction, AnotherAction])` macro call. Otherwise the `Action` derive macro is used. Doc comments on actions are displayed to the user.

Action handlers can be registered on an element via the event handler `.on_action(|action, window, cx| ...)`. Like other event handlers, this is often used with `cx.listener`.
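
A sketch of defining a data-free action and handling it on an element, following the `actions!` and `.on_action` usage described above (the namespace and view are made up for illustration; in a real view the element would also need focus or a key context for keyboard dispatch to reach it):

```
use gpui::{actions, div, prelude::*, Context, Render, Window};

// A unit action in the `counter` namespace.
actions!(counter, [ResetCount]);

struct Counter {
    count: usize,
}

impl Render for Counter {
    fn render(&mut self, _window: &mut Window, cx: &mut Context<Self>) -> impl IntoElement {
        div()
            .child(format!("count: {}", self.count))
            // Runs when a `ResetCount` action is dispatched to this part of the element tree.
            .on_action(cx.listener(|this, _action: &ResetCount, _window, cx| {
                this.count = 0;
                cx.notify();
            }))
    }
}
```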

## Notify

When a view's state has changed in a way that may affect its rendering, it should call `cx.notify()`. This will cause the view to be rerendered. It will also cause any observe callbacks registered for the entity with `cx.observe` to be called.

## Entity events

While updating an entity (`cx: Context<T>`), it can emit an event using `cx.emit(event)`. Entities register which events they can emit by declaring `impl EventEmitter<EventType> for EntityType {}`.

Other entities can then register a callback to handle these events by doing `cx.subscribe(other_entity, |this, other_entity, event, cx| ...)`. This will return a `Subscription` which deregisters the callback when dropped. Typically `cx.subscribe` happens when creating a new entity and the subscriptions are stored in a `_subscriptions: Vec<Subscription>` field.
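
A sketch of one entity emitting events and another subscribing to them, following the `EventEmitter`/`cx.subscribe` pattern above (the types and event are invented for illustration):

```
use gpui::{Context, Entity, EventEmitter, Subscription};

struct Download;

enum DownloadEvent {
    Finished,
}

impl EventEmitter<DownloadEvent> for Download {}

impl Download {
    fn finish(&mut self, cx: &mut Context<Self>) {
        cx.emit(DownloadEvent::Finished);
    }
}

struct StatusBar {
    message: String,
    // Dropping the subscription deregisters the callback.
    _subscriptions: Vec<Subscription>,
}

impl StatusBar {
    fn new(download: &Entity<Download>, cx: &mut Context<Self>) -> Self {
        let subscription = cx.subscribe(download, |this, _download, event, cx| match event {
            DownloadEvent::Finished => {
                this.message = "download finished".into();
                cx.notify();
            }
        });
        Self {
            message: String::new(),
            _subscriptions: vec![subscription],
        }
    }
}
```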

## Recent API changes

GPUI has had some changes to its APIs. Always write code using the new APIs:

* `spawn` methods now take async closures (`AsyncFn`), and so should be called like `cx.spawn(async move |cx| ...)`.
* Use `Entity<T>`. This replaces `Model<T>` and `View<T>` which no longer exist and should NEVER be used.
* Use `App` references. This replaces `AppContext` which no longer exists and should NEVER be used.
* Use `Context<T>` references. This replaces `ModelContext<T>` which no longer exists and should NEVER be used.
* `Window` is now passed around explicitly. The new interface adds a `Window` reference parameter to some methods, and adds some new "*_in" methods for plumbing `Window`. The old types `WindowContext` and `ViewContext<T>` should NEVER be used.

## General guidelines

- Use `./script/clippy` instead of `cargo clippy`
@@ -1 +0,0 @@
|
||||
.rules
|
||||
@@ -1,20 +0,0 @@
|
||||
[
|
||||
{
|
||||
"label": "Debug Zed (CodeLLDB)",
|
||||
"adapter": "CodeLLDB",
|
||||
"build": {
|
||||
"label": "Build Zed",
|
||||
"command": "cargo",
|
||||
"args": ["build"]
|
||||
}
|
||||
},
|
||||
{
|
||||
"label": "Debug Zed (GDB)",
|
||||
"adapter": "GDB",
|
||||
"build": {
|
||||
"label": "Build Zed",
|
||||
"command": "cargo",
|
||||
"args": ["build"]
|
||||
}
|
||||
}
|
||||
]
|
||||
@@ -14,12 +14,12 @@
|
||||
},
|
||||
"JSON": {
|
||||
"tab_size": 2,
|
||||
"preferred_line_length": 120,
|
||||
"preferred_line_length": 100,
|
||||
"formatter": "prettier"
|
||||
},
|
||||
"JSONC": {
|
||||
"tab_size": 2,
|
||||
"preferred_line_length": 120,
|
||||
"preferred_line_length": 100,
|
||||
"formatter": "prettier"
|
||||
},
|
||||
"JavaScript": {
|
||||
@@ -40,25 +40,10 @@
|
||||
},
|
||||
"file_types": {
|
||||
"Dockerfile": ["Dockerfile*[!dockerignore]"],
|
||||
"JSONC": ["**/assets/**/*.json", "renovate.json"],
|
||||
"Git Ignore": ["dockerignore"]
|
||||
},
|
||||
"hard_tabs": false,
|
||||
"formatter": "auto",
|
||||
"remove_trailing_whitespace_on_save": true,
|
||||
"ensure_final_newline_on_save": true,
|
||||
"file_scan_exclusions": [
|
||||
"crates/assistant_tools/src/edit_agent/evals/fixtures",
|
||||
"crates/eval/worktrees/",
|
||||
"crates/eval/repos/",
|
||||
"**/.git",
|
||||
"**/.svn",
|
||||
"**/.hg",
|
||||
"**/.jj",
|
||||
"**/CVS",
|
||||
"**/.DS_Store",
|
||||
"**/Thumbs.db",
|
||||
"**/.classpath",
|
||||
"**/.settings"
|
||||
]
|
||||
"ensure_final_newline_on_save": true
|
||||
}
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
# Code of Conduct
|
||||
|
||||
The Code of Conduct for this repository can be found online at [zed.dev/code-of-conduct](https://zed.dev/code-of-conduct).
|
||||
The Code of Conduct for this repository can be found online at [zed.dev/docs/code-of-conduct](https://zed.dev/docs/code-of-conduct).
|
||||
|
||||
@@ -2,7 +2,7 @@
|
||||
|
||||
Thanks for your interest in contributing to Zed, the collaborative platform that is also a code editor!
|
||||
|
||||
All activity in Zed forums is subject to our [Code of Conduct](https://zed.dev/code-of-conduct). Additionally, contributors must sign our [Contributor License Agreement](https://zed.dev/cla) before their contributions can be merged.
|
||||
All activity in Zed forums is subject to our [Code of Conduct](https://zed.dev/docs/code-of-conduct). Additionally, contributors must sign our [Contributor License Agreement](https://zed.dev/cla) before their contributions can be merged.
|
||||
|
||||
## Contribution ideas
|
||||
|
||||
@@ -37,16 +37,6 @@ We plan to set aside time each week to pair program with contributors on promisi
|
||||
- Pair with us and watch us code to learn the codebase
|
||||
- Low effort PRs, such as those that just re-arrange syntax, won't be merged without a compelling justification
|
||||
|
||||
## File icons
|
||||
|
||||
Zed's default icon theme consists of icons that are hand-designed to fit together in a cohesive manner.
|
||||
|
||||
We do not accept PRs for file icons that are just an off-the-shelf SVG taken from somewhere else.
|
||||
|
||||
### Adding new icons to the Zed icon theme
|
||||
|
||||
If you would like to add a new icon to the Zed icon theme, [open a Discussion](https://github.com/zed-industries/zed/discussions/new?category=ux-and-design) and we can work with you on getting an icon designed and added to Zed.
|
||||
|
||||
## Bird's-eye view of Zed
|
||||
|
||||
Zed is made up of several smaller crates - let's go over those you're most likely to interact with:
|
||||
@@ -62,9 +52,3 @@ Zed is made up of several smaller crates - let's go over those you're most likel
|
||||
- [`rpc`](/crates/rpc) defines messages to be exchanged with collaboration server.
|
||||
- [`theme`](/crates/theme) defines the theme system and provides a default theme.
|
||||
- [`ui`](/crates/ui) is a collection of UI components and common patterns used throughout Zed.
|
||||
- [`cli`](/crates/cli) is the CLI crate which invokes the Zed binary.
|
||||
- [`zed`](/crates/zed) is where all things come together, and the `main` entry point for Zed.
|
||||
|
||||
## Packaging Zed
|
||||
|
||||
Check our [notes for packaging Zed](https://zed.dev/docs/development/linux#notes-for-packaging-zed).
|
||||
|
||||
11820
Cargo.lock
generated
File diff suppressed because it is too large
538
Cargo.toml
@@ -1,66 +1,36 @@
|
||||
[workspace]
|
||||
resolver = "2"
|
||||
members = [
|
||||
"crates/acp_tools",
|
||||
"crates/acp_thread",
|
||||
"crates/action_log",
|
||||
"crates/activity_indicator",
|
||||
"crates/agent",
|
||||
"crates/agent2",
|
||||
"crates/agent_servers",
|
||||
"crates/agent_settings",
|
||||
"crates/agent_ui",
|
||||
"crates/ai_onboarding",
|
||||
"crates/anthropic",
|
||||
"crates/askpass",
|
||||
"crates/assets",
|
||||
"crates/assistant_context",
|
||||
"crates/assistant",
|
||||
"crates/assistant_slash_command",
|
||||
"crates/assistant_slash_commands",
|
||||
"crates/assistant_tool",
|
||||
"crates/assistant_tools",
|
||||
"crates/audio",
|
||||
"crates/auto_update",
|
||||
"crates/auto_update_helper",
|
||||
"crates/auto_update_ui",
|
||||
"crates/aws_http_client",
|
||||
"crates/bedrock",
|
||||
"crates/breadcrumbs",
|
||||
"crates/buffer_diff",
|
||||
"crates/call",
|
||||
"crates/channel",
|
||||
"crates/cli",
|
||||
"crates/client",
|
||||
"crates/clock",
|
||||
"crates/cloud_api_client",
|
||||
"crates/cloud_api_types",
|
||||
"crates/cloud_llm_client",
|
||||
"crates/collab",
|
||||
"crates/collab_ui",
|
||||
"crates/collections",
|
||||
"crates/command_palette",
|
||||
"crates/command_palette_hooks",
|
||||
"crates/component",
|
||||
"crates/context_server",
|
||||
"crates/context_servers",
|
||||
"crates/copilot",
|
||||
"crates/crashes",
|
||||
"crates/credentials_provider",
|
||||
"crates/dap",
|
||||
"crates/dap_adapters",
|
||||
"crates/db",
|
||||
"crates/debug_adapter_extension",
|
||||
"crates/debugger_tools",
|
||||
"crates/debugger_ui",
|
||||
"crates/deepseek",
|
||||
"crates/dev_server_projects",
|
||||
"crates/diagnostics",
|
||||
"crates/docs_preprocessor",
|
||||
"crates/editor",
|
||||
"crates/eval",
|
||||
"crates/explorer_command_injector",
|
||||
"crates/evals",
|
||||
"crates/extension",
|
||||
"crates/extension_api",
|
||||
"crates/extension_cli",
|
||||
"crates/extension_host",
|
||||
"crates/extensions_ui",
|
||||
"crates/feature_flags",
|
||||
"crates/feedback",
|
||||
@@ -71,61 +41,45 @@ members = [
|
||||
"crates/fuzzy",
|
||||
"crates/git",
|
||||
"crates/git_hosting_providers",
|
||||
"crates/git_ui",
|
||||
"crates/go_to_line",
|
||||
"crates/google_ai",
|
||||
"crates/gpui",
|
||||
"crates/gpui_macros",
|
||||
"crates/gpui_tokio",
|
||||
"crates/headless",
|
||||
"crates/html_to_markdown",
|
||||
"crates/http_client",
|
||||
"crates/http_client_tls",
|
||||
"crates/icons",
|
||||
"crates/image_viewer",
|
||||
"crates/edit_prediction",
|
||||
"crates/edit_prediction_button",
|
||||
"crates/inspector_ui",
|
||||
"crates/indexed_docs",
|
||||
"crates/inline_completion_button",
|
||||
"crates/install_cli",
|
||||
"crates/jj",
|
||||
"crates/jj_ui",
|
||||
"crates/journal",
|
||||
"crates/language",
|
||||
"crates/language_extension",
|
||||
"crates/language_model",
|
||||
"crates/language_models",
|
||||
"crates/language_selector",
|
||||
"crates/language_tools",
|
||||
"crates/languages",
|
||||
"crates/livekit_api",
|
||||
"crates/livekit_client",
|
||||
"crates/lmstudio",
|
||||
"crates/live_kit_client",
|
||||
"crates/live_kit_server",
|
||||
"crates/lsp",
|
||||
"crates/markdown",
|
||||
"crates/markdown_preview",
|
||||
"crates/media",
|
||||
"crates/menu",
|
||||
"crates/migrator",
|
||||
"crates/mistral",
|
||||
"crates/multi_buffer",
|
||||
"crates/nc",
|
||||
"crates/net",
|
||||
"crates/node_runtime",
|
||||
"crates/notifications",
|
||||
"crates/ollama",
|
||||
"crates/onboarding",
|
||||
"crates/open_ai",
|
||||
"crates/open_router",
|
||||
"crates/outline",
|
||||
"crates/outline_panel",
|
||||
"crates/panel",
|
||||
"crates/paths",
|
||||
"crates/picker",
|
||||
"crates/prettier",
|
||||
"crates/project",
|
||||
"crates/project_panel",
|
||||
"crates/project_symbols",
|
||||
"crates/prompt_store",
|
||||
"crates/proto",
|
||||
"crates/quick_action_bar",
|
||||
"crates/recent_projects",
|
||||
"crates/refineable",
|
||||
"crates/refineable/derive_refineable",
|
||||
@@ -137,14 +91,11 @@ members = [
|
||||
"crates/rich_text",
|
||||
"crates/rope",
|
||||
"crates/rpc",
|
||||
"crates/rules_library",
|
||||
"crates/schema_generator",
|
||||
"crates/search",
|
||||
"crates/semantic_index",
|
||||
"crates/semantic_version",
|
||||
"crates/session",
|
||||
"crates/settings",
|
||||
"crates/settings_profile_selector",
|
||||
"crates/settings_ui",
|
||||
"crates/snippet",
|
||||
"crates/snippet_provider",
|
||||
@@ -153,137 +104,107 @@ members = [
|
||||
"crates/sqlez_macros",
|
||||
"crates/story",
|
||||
"crates/storybook",
|
||||
"crates/streaming_diff",
|
||||
"crates/sum_tree",
|
||||
"crates/supermaven",
|
||||
"crates/system_specs",
|
||||
"crates/supermaven_api",
|
||||
"crates/svg_preview",
|
||||
"crates/tab_switcher",
|
||||
"crates/task",
|
||||
"crates/tasks_ui",
|
||||
"crates/telemetry",
|
||||
"crates/telemetry_events",
|
||||
"crates/terminal",
|
||||
"crates/terminal_view",
|
||||
"crates/text",
|
||||
"crates/theme",
|
||||
"crates/theme_extension",
|
||||
"crates/theme_importer",
|
||||
"crates/theme_selector",
|
||||
"crates/time_format",
|
||||
"crates/title_bar",
|
||||
"crates/toolchain_selector",
|
||||
"crates/ui",
|
||||
"crates/ui_input",
|
||||
"crates/ui_macros",
|
||||
"crates/ui_prompt",
|
||||
"crates/reqwest_client",
|
||||
"crates/util",
|
||||
"crates/util_macros",
|
||||
"crates/vercel",
|
||||
"crates/vcs_menu",
|
||||
"crates/vim",
|
||||
"crates/vim_mode_setting",
|
||||
"crates/watch",
|
||||
"crates/web_search",
|
||||
"crates/web_search_providers",
|
||||
"crates/welcome",
|
||||
"crates/workspace",
|
||||
"crates/worktree",
|
||||
"crates/x_ai",
|
||||
"crates/zed",
|
||||
"crates/zed_actions",
|
||||
"crates/zeta",
|
||||
"crates/zeta_cli",
|
||||
"crates/zlog",
|
||||
"crates/zlog_settings",
|
||||
|
||||
#
|
||||
# Extensions
|
||||
#
|
||||
|
||||
"extensions/astro",
|
||||
"extensions/clojure",
|
||||
"extensions/csharp",
|
||||
"extensions/dart",
|
||||
"extensions/deno",
|
||||
"extensions/elixir",
|
||||
"extensions/elm",
|
||||
"extensions/emmet",
|
||||
"extensions/erlang",
|
||||
"extensions/glsl",
|
||||
"extensions/haskell",
|
||||
"extensions/html",
|
||||
"extensions/lua",
|
||||
"extensions/ocaml",
|
||||
"extensions/php",
|
||||
"extensions/perplexity",
|
||||
"extensions/prisma",
|
||||
"extensions/proto",
|
||||
"extensions/purescript",
|
||||
"extensions/ruff",
|
||||
"extensions/slash-commands-example",
|
||||
"extensions/snippets",
|
||||
"extensions/terraform",
|
||||
"extensions/test-extension",
|
||||
"extensions/toml",
|
||||
"extensions/uiua",
|
||||
"extensions/zig",
|
||||
|
||||
#
|
||||
# Tooling
|
||||
#
|
||||
|
||||
"tooling/workspace-hack",
|
||||
"tooling/xtask",
|
||||
]
|
||||
default-members = ["crates/zed"]
|
||||
|
||||
[workspace.package]
|
||||
publish = false
|
||||
edition = "2024"
|
||||
|
||||
[workspace.dependencies]
|
||||
|
||||
#
|
||||
# Workspace member crates
|
||||
#
|
||||
|
||||
acp_tools = { path = "crates/acp_tools" }
|
||||
acp_thread = { path = "crates/acp_thread" }
|
||||
action_log = { path = "crates/action_log" }
|
||||
agent = { path = "crates/agent" }
|
||||
agent2 = { path = "crates/agent2" }
|
||||
activity_indicator = { path = "crates/activity_indicator" }
|
||||
agent_ui = { path = "crates/agent_ui" }
|
||||
agent_settings = { path = "crates/agent_settings" }
|
||||
agent_servers = { path = "crates/agent_servers" }
|
||||
ai = { path = "crates/ai" }
|
||||
ai_onboarding = { path = "crates/ai_onboarding" }
|
||||
anthropic = { path = "crates/anthropic" }
|
||||
askpass = { path = "crates/askpass" }
|
||||
assets = { path = "crates/assets" }
|
||||
assistant_context = { path = "crates/assistant_context" }
|
||||
assistant = { path = "crates/assistant" }
|
||||
assistant_slash_command = { path = "crates/assistant_slash_command" }
|
||||
assistant_slash_commands = { path = "crates/assistant_slash_commands" }
|
||||
assistant_tool = { path = "crates/assistant_tool" }
|
||||
assistant_tools = { path = "crates/assistant_tools" }
|
||||
audio = { path = "crates/audio" }
|
||||
auto_update = { path = "crates/auto_update" }
|
||||
auto_update_helper = { path = "crates/auto_update_helper" }
|
||||
auto_update_ui = { path = "crates/auto_update_ui" }
|
||||
aws_http_client = { path = "crates/aws_http_client" }
|
||||
bedrock = { path = "crates/bedrock" }
|
||||
breadcrumbs = { path = "crates/breadcrumbs" }
|
||||
buffer_diff = { path = "crates/buffer_diff" }
|
||||
call = { path = "crates/call" }
|
||||
channel = { path = "crates/channel" }
|
||||
cli = { path = "crates/cli" }
|
||||
client = { path = "crates/client" }
|
||||
clock = { path = "crates/clock" }
|
||||
cloud_api_client = { path = "crates/cloud_api_client" }
|
||||
cloud_api_types = { path = "crates/cloud_api_types" }
|
||||
cloud_llm_client = { path = "crates/cloud_llm_client" }
|
||||
collab = { path = "crates/collab" }
|
||||
collab_ui = { path = "crates/collab_ui" }
|
||||
collections = { path = "crates/collections" }
|
||||
command_palette = { path = "crates/command_palette" }
|
||||
command_palette_hooks = { path = "crates/command_palette_hooks" }
|
||||
component = { path = "crates/component" }
|
||||
context_server = { path = "crates/context_server" }
|
||||
context_servers = { path = "crates/context_servers" }
|
||||
copilot = { path = "crates/copilot" }
|
||||
crashes = { path = "crates/crashes" }
|
||||
credentials_provider = { path = "crates/credentials_provider" }
|
||||
dap = { path = "crates/dap" }
|
||||
dap_adapters = { path = "crates/dap_adapters" }
|
||||
db = { path = "crates/db" }
|
||||
debug_adapter_extension = { path = "crates/debug_adapter_extension" }
|
||||
debugger_tools = { path = "crates/debugger_tools" }
|
||||
debugger_ui = { path = "crates/debugger_ui" }
|
||||
deepseek = { path = "crates/deepseek" }
|
||||
dev_server_projects = { path = "crates/dev_server_projects" }
|
||||
diagnostics = { path = "crates/diagnostics" }
|
||||
editor = { path = "crates/editor" }
|
||||
extension = { path = "crates/extension" }
|
||||
extension_host = { path = "crates/extension_host" }
|
||||
extensions_ui = { path = "crates/extensions_ui" }
|
||||
feature_flags = { path = "crates/feature_flags" }
|
||||
feedback = { path = "crates/feedback" }
|
||||
@@ -294,67 +215,47 @@ fsevent = { path = "crates/fsevent" }
|
||||
fuzzy = { path = "crates/fuzzy" }
|
||||
git = { path = "crates/git" }
|
||||
git_hosting_providers = { path = "crates/git_hosting_providers" }
|
||||
git_ui = { path = "crates/git_ui" }
|
||||
go_to_line = { path = "crates/go_to_line" }
|
||||
google_ai = { path = "crates/google_ai" }
|
||||
gpui = { path = "crates/gpui", default-features = false, features = [
|
||||
"http_client",
|
||||
] }
|
||||
gpui = { path = "crates/gpui", default-features = false, features = ["http_client"]}
|
||||
gpui_macros = { path = "crates/gpui_macros" }
|
||||
gpui_tokio = { path = "crates/gpui_tokio" }
|
||||
headless = { path = "crates/headless" }
|
||||
html_to_markdown = { path = "crates/html_to_markdown" }
|
||||
http_client = { path = "crates/http_client" }
|
||||
http_client_tls = { path = "crates/http_client_tls" }
|
||||
icons = { path = "crates/icons" }
|
||||
image_viewer = { path = "crates/image_viewer" }
|
||||
edit_prediction = { path = "crates/edit_prediction" }
|
||||
edit_prediction_button = { path = "crates/edit_prediction_button" }
|
||||
inspector_ui = { path = "crates/inspector_ui" }
|
||||
indexed_docs = { path = "crates/indexed_docs" }
|
||||
inline_completion_button = { path = "crates/inline_completion_button" }
|
||||
install_cli = { path = "crates/install_cli" }
|
||||
jj = { path = "crates/jj" }
|
||||
jj_ui = { path = "crates/jj_ui" }
|
||||
journal = { path = "crates/journal" }
|
||||
language = { path = "crates/language" }
|
||||
language_extension = { path = "crates/language_extension" }
|
||||
language_model = { path = "crates/language_model" }
|
||||
language_models = { path = "crates/language_models" }
|
||||
language_selector = { path = "crates/language_selector" }
|
||||
language_tools = { path = "crates/language_tools" }
|
||||
languages = { path = "crates/languages" }
|
||||
livekit_api = { path = "crates/livekit_api" }
|
||||
livekit_client = { path = "crates/livekit_client" }
|
||||
lmstudio = { path = "crates/lmstudio" }
|
||||
live_kit_client = { path = "crates/live_kit_client" }
|
||||
live_kit_server = { path = "crates/live_kit_server" }
|
||||
lsp = { path = "crates/lsp" }
|
||||
markdown = { path = "crates/markdown" }
|
||||
markdown_preview = { path = "crates/markdown_preview" }
|
||||
svg_preview = { path = "crates/svg_preview" }
|
||||
media = { path = "crates/media" }
|
||||
menu = { path = "crates/menu" }
|
||||
migrator = { path = "crates/migrator" }
|
||||
mistral = { path = "crates/mistral" }
|
||||
multi_buffer = { path = "crates/multi_buffer" }
|
||||
nc = { path = "crates/nc" }
|
||||
net = { path = "crates/net" }
|
||||
node_runtime = { path = "crates/node_runtime" }
|
||||
notifications = { path = "crates/notifications" }
|
||||
ollama = { path = "crates/ollama" }
|
||||
onboarding = { path = "crates/onboarding" }
|
||||
open_ai = { path = "crates/open_ai" }
|
||||
open_router = { path = "crates/open_router", features = ["schemars"] }
|
||||
outline = { path = "crates/outline" }
|
||||
outline_panel = { path = "crates/outline_panel" }
|
||||
panel = { path = "crates/panel" }
|
||||
paths = { path = "crates/paths" }
|
||||
picker = { path = "crates/picker" }
|
||||
plugin = { path = "crates/plugin" }
|
||||
plugin_macros = { path = "crates/plugin_macros" }
|
||||
prettier = { path = "crates/prettier" }
|
||||
settings_profile_selector = { path = "crates/settings_profile_selector" }
|
||||
project = { path = "crates/project" }
|
||||
project_panel = { path = "crates/project_panel" }
|
||||
project_symbols = { path = "crates/project_symbols" }
|
||||
prompt_store = { path = "crates/prompt_store" }
|
||||
proto = { path = "crates/proto" }
|
||||
quick_action_bar = { path = "crates/quick_action_bar" }
|
||||
recent_projects = { path = "crates/recent_projects" }
|
||||
refineable = { path = "crates/refineable" }
|
||||
release_channel = { path = "crates/release_channel" }
|
||||
@@ -363,10 +264,8 @@ remote_server = { path = "crates/remote_server" }
|
||||
repl = { path = "crates/repl" }
|
||||
reqwest_client = { path = "crates/reqwest_client" }
|
||||
rich_text = { path = "crates/rich_text" }
|
||||
rodio = { version = "0.21.1", default-features = false }
|
||||
rope = { path = "crates/rope" }
|
||||
rpc = { path = "crates/rpc" }
|
||||
rules_library = { path = "crates/rules_library" }
|
||||
search = { path = "crates/search" }
|
||||
semantic_index = { path = "crates/semantic_index" }
|
||||
semantic_version = { path = "crates/semantic_version" }
|
||||
@@ -380,206 +279,134 @@ sqlez = { path = "crates/sqlez" }
|
||||
sqlez_macros = { path = "crates/sqlez_macros" }
|
||||
story = { path = "crates/story" }
|
||||
storybook = { path = "crates/storybook" }
|
||||
streaming_diff = { path = "crates/streaming_diff" }
|
||||
sum_tree = { path = "crates/sum_tree" }
|
||||
supermaven = { path = "crates/supermaven" }
|
||||
supermaven_api = { path = "crates/supermaven_api" }
|
||||
system_specs = { path = "crates/system_specs" }
|
||||
tab_switcher = { path = "crates/tab_switcher" }
|
||||
task = { path = "crates/task" }
|
||||
tasks_ui = { path = "crates/tasks_ui" }
|
||||
telemetry = { path = "crates/telemetry" }
|
||||
telemetry_events = { path = "crates/telemetry_events" }
|
||||
terminal = { path = "crates/terminal" }
|
||||
terminal_view = { path = "crates/terminal_view" }
|
||||
text = { path = "crates/text" }
|
||||
theme = { path = "crates/theme" }
|
||||
theme_extension = { path = "crates/theme_extension" }
|
||||
theme_importer = { path = "crates/theme_importer" }
|
||||
theme_selector = { path = "crates/theme_selector" }
|
||||
time_format = { path = "crates/time_format" }
|
||||
title_bar = { path = "crates/title_bar" }
|
||||
toolchain_selector = { path = "crates/toolchain_selector" }
|
||||
ui = { path = "crates/ui" }
|
||||
ui_input = { path = "crates/ui_input" }
|
||||
ui_macros = { path = "crates/ui_macros" }
|
||||
ui_prompt = { path = "crates/ui_prompt" }
|
||||
util = { path = "crates/util" }
|
||||
util_macros = { path = "crates/util_macros" }
|
||||
vercel = { path = "crates/vercel" }
|
||||
vcs_menu = { path = "crates/vcs_menu" }
|
||||
vim = { path = "crates/vim" }
|
||||
vim_mode_setting = { path = "crates/vim_mode_setting" }
|
||||
|
||||
watch = { path = "crates/watch" }
|
||||
web_search = { path = "crates/web_search" }
|
||||
web_search_providers = { path = "crates/web_search_providers" }
|
||||
welcome = { path = "crates/welcome" }
|
||||
workspace = { path = "crates/workspace" }
|
||||
worktree = { path = "crates/worktree" }
|
||||
x_ai = { path = "crates/x_ai" }
|
||||
zed = { path = "crates/zed" }
|
||||
zed_actions = { path = "crates/zed_actions" }
|
||||
zeta = { path = "crates/zeta" }
|
||||
zlog = { path = "crates/zlog" }
|
||||
zlog_settings = { path = "crates/zlog_settings" }
|
||||
|
||||
#
|
||||
# External crates
|
||||
#
|
||||
|
||||
agent-client-protocol = { path = "../agent-client-protocol" }
aho-corasick = "1.1"
alacritty_terminal = { git = "https://github.com/zed-industries/alacritty.git", branch = "add-hush-login-flag" }
alacritty_terminal = { git = "https://github.com/alacritty/alacritty", rev = "91d034ff8b53867143c005acfaa14609147c9a2c" }
any_vec = "0.14"
anyhow = "1.0.86"
arrayvec = { version = "0.7.4", features = ["serde"] }
ashpd = { version = "0.11", default-features = false, features = ["async-std"] }
ashpd = "0.9.1"
async-compat = "0.2.1"
async-compression = { version = "0.4", features = ["gzip", "futures-io"] }
async-dispatcher = "0.1"
async-fs = "2.1"
async-fs = "1.6"
async-pipe = { git = "https://github.com/zed-industries/async-pipe-rs", rev = "82d00a04211cf4e1236029aa03e6b6ce2a74c553" }
async-recursion = "1.0.0"
async-tar = "0.5.0"
async-trait = "0.1"
async-tungstenite = "0.29.1"
async-tungstenite = "0.24"
async-watch = "0.3.1"
async_zip = { version = "0.0.17", features = ["deflate", "deflate64"] }
aws-config = { version = "1.6.1", features = ["behavior-version-latest"] }
aws-credential-types = { version = "1.2.2", features = [
"hardcoded-credentials",
] }
aws-sdk-bedrockruntime = { version = "1.80.0", features = [
"behavior-version-latest",
] }
aws-smithy-runtime-api = { version = "1.7.4", features = ["http-1x", "client"] }
aws-smithy-types = { version = "1.3.0", features = ["http-body-1-x"] }
base64 = "0.22"
bincode = "1.2.1"
bitflags = "2.6.0"
blade-graphics = { git = "https://github.com/kvark/blade", rev = "e0ec4e720957edd51b945b64dd85605ea54bcfe5" }
blade-macros = { git = "https://github.com/kvark/blade", rev = "e0ec4e720957edd51b945b64dd85605ea54bcfe5" }
blade-util = { git = "https://github.com/kvark/blade", rev = "e0ec4e720957edd51b945b64dd85605ea54bcfe5" }
blade-graphics = { git = "https://github.com/kvark/blade", rev = "e142a3a5e678eb6a13e642ad8401b1f3aa38e969" }
blade-macros = { git = "https://github.com/kvark/blade", rev = "e142a3a5e678eb6a13e642ad8401b1f3aa38e969" }
blade-util = { git = "https://github.com/kvark/blade", rev = "e142a3a5e678eb6a13e642ad8401b1f3aa38e969" }
blake3 = "1.5.3"
bytes = "1.0"
cargo_metadata = "0.19"
cargo_toml = "0.21"
cargo_metadata = "0.18"
cargo_toml = "0.20"
chrono = { version = "0.4", features = ["serde"] }
ciborium = "0.2"
circular-buffer = "1.0"
clap = { version = "4.4", features = ["derive"] }
clickhouse = "0.11.6"
cocoa = "0.26"
cocoa-foundation = "0.2.0"
convert_case = "0.8.0"
core-foundation = "0.10.0"
convert_case = "0.6.0"
core-foundation = "0.9.3"
core-foundation-sys = "0.8.6"
core-video = { version = "0.4.3", features = ["metal"] }
cpal = "0.16"
crash-handler = "0.6"
criterion = { version = "0.5", features = ["html_reports"] }
ctor = "0.4.0"
dap-types = { git = "https://github.com/zed-industries/dap-types", rev = "1b461b310481d01e02b2603c16d7144b926339f8" }
ctor = "0.2.6"
dashmap = "6.0"
derive_more = "0.99.17"
dirs = "4.0"
documented = "0.9.1"
dotenvy = "0.15.0"
ec4rs = "1.1"
emojis = "0.6.1"
env_logger = "0.11"
exec = "0.3.1"
fancy-regex = "0.14.0"
fork = "0.2.0"
futures = "0.3"
futures-batch = "0.6.1"
futures-lite = "1.13"
git2 = { version = "0.20.1", default-features = false }
git2 = { version = "0.19", default-features = false }
globset = "0.4"
handlebars = "4.3"
heck = "0.5"
heed = { version = "0.21.0", features = ["read-txn-no-tls"] }
heed = { version = "0.20.1", features = ["read-txn-no-tls"] }
hex = "0.4.3"
human_bytes = "0.4.1"
html5ever = "0.27.0"
http = "1.1"
http-body = "1.0"
hyper = "0.14"
ignore = "0.4.22"
image = "0.25.1"
imara-diff = "0.1.8"
indexmap = { version = "2.7.0", features = ["serde"] }
indexmap = { version = "1.6.2", features = ["serde"] }
indoc = "2"
inventory = "0.3.19"
itertools = "0.14.0"
jj-lib = { git = "https://github.com/jj-vcs/jj", rev = "e18eb8e05efaa153fad5ef46576af145bba1807f" }
json_dotpath = "1.1"
jsonschema = "0.30.0"
itertools = "0.13.0"
jsonwebtoken = "9.3"
jupyter-protocol = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
jupyter-websocket-client = { git = "https://github.com/ConradIrwin/runtimed" ,rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
libc = "0.2"
libsqlite3-sys = { version = "0.30.1", features = ["bundled"] }
linkify = "0.10.0"
log = { version = "0.4.16", features = ["kv_unstable_serde", "serde"] }
lsp-types = { git = "https://github.com/zed-industries/lsp-types", rev = "39f629bdd03d59abd786ed9fc27e8bca02c0c0ec" }
mach2 = "0.5"
markup5ever_rcdom = "0.3.0"
metal = "0.29"
minidumper = "0.8"
moka = { version = "0.12.10", features = ["sync"] }
naga = { version = "25.0", features = ["wgsl-in"] }
nanoid = "0.4"
nbformat = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734" }
nix = "0.29"
num-format = "0.4.4"
objc = "0.2"
open = "5.0.0"
once_cell = "1.19.0"
ordered-float = "2.1.1"
palette = { version = "0.7.5", default-features = false, features = ["std"] }
parking_lot = "0.12.1"
partial-json-fixer = "0.5.3"
parse_int = "0.9"
pciid-parser = "0.8.0"
pathdiff = "0.2"
pet = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
pet-conda = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
pet-core = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
pet-fs = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
pet-pixi = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
pet-poetry = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
pet-reporter = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
portable-pty = "0.9.0"
postage = { version = "0.5", features = ["futures-traits"] }
pretty_assertions = { version = "1.3.0", features = ["unstable"] }
proc-macro2 = "1.0.93"
pretty_assertions = "1.3.0"
profiling = "1"
prost = "0.9"
prost-build = "0.9"
prost-types = "0.9"
pulldown-cmark = { version = "0.12.0", default-features = false }
quote = "1.0.9"
rand = "0.8.5"
rayon = "1.8"
ref-cast = "1.0.24"
regex = "1.5"
reqwest = { git = "https://github.com/zed-industries/reqwest.git", rev = "951c770a32f1998d6e999cef3e59e0013e6c4415", default-features = false, features = [
repair_json = "0.1.0"
reqwest = { git = "https://github.com/zed-industries/reqwest.git", rev = "fd110f6998da16bbca97b6dddda9be7827c50e29", default-features = false, features = [
"charset",
"http2",
"macos-system-configuration",
"multipart",
"rustls-tls-native-roots",
"socks",
"stream",
] }
rsa = "0.9.6"
runtimelib = { git = "https://github.com/ConradIrwin/runtimed", rev = "7130c804216b6914355d15d0b91ea91f6babd734", default-features = false, features = [
runtimelib = { version = "0.15", default-features = false, features = [
"async-dispatcher-runtime",
] }
rust-embed = { version = "8.4", features = ["include-exclude"] }
rustc-demangle = "0.1.23"
rustc-hash = "2.1.0"
rustls = { version = "0.23.26" }
rustls-platform-verifier = "0.5.0"
scap = { git = "https://github.com/zed-industries/scap", rev = "808aa5c45b41e8f44729d02e38fd00a2fe2722e7", default-features = false }
schemars = { version = "1.0", features = ["indexmap2"] }
rust-embed = { version = "8.4", features = ["include-exclude"] }
rustls = "0.20.3"
rustls-native-certs = "0.8.0"
schemars = { version = "0.8", features = ["impl_json_schema"] }
semver = "1.0"
serde = { version = "1.0", features = ["derive", "rc"] }
serde_derive = { version = "1.0", features = ["deserialize_in_place"] }
@@ -589,26 +416,23 @@ serde_json_lenient = { version = "0.2", features = [
"raw_value",
] }
serde_repr = "0.1"
serde_urlencoded = "0.7"
sha2 = "0.10"
shellexpand = "2.1.0"
shlex = "1.3.0"
signal-hook = "0.3.17"
similar = "1.3"
simplelog = "0.12.2"
smallvec = { version = "1.6", features = ["union"] }
smol = "2.0"
smol = "1.2"
sqlformat = "0.2"
stacksafe = "0.1"
streaming-iterator = "0.1"
strsim = "0.11"
strum = { version = "0.27.0", features = ["derive"] }
strum = { version = "0.25.0", features = ["derive"] }
subtle = "2.5.0"
syn = { version = "2.0.101", features = ["full", "extra-traits"] }
sys-locale = "0.3.1"
sysinfo = "0.31.0"
take-until = "0.2.0"
tempfile = "3.20.0"
thiserror = "2.0.12"
tiktoken-rs = { git = "https://github.com/zed-industries/tiktoken-rs", rev = "30c32a4522751699adeda0d5840c71c3b75ae73d" }
tempfile = "3.9.0"
thiserror = "1.0.29"
tiktoken-rs = "0.5.9"
time = { version = "0.3", features = [
"macros",
"parsing",
@@ -617,98 +441,94 @@ time = { version = "0.3", features = [
"formatting",
] }
tiny_http = "0.8"
tokio = { version = "1" }
tokio-tungstenite = { version = "0.26", features = ["__rustls-tls"] }
toml = "0.8"
tokio = { version = "1" }
tower-http = "0.4.4"
tree-sitter = { version = "0.25.6", features = ["wasm"] }
tree-sitter-bash = "0.25.0"
tree-sitter = { version = "0.23", features = ["wasm"] }
tree-sitter-bash = "0.23"
tree-sitter-c = "0.23"
tree-sitter-cpp = { git = "https://github.com/tree-sitter/tree-sitter-cpp", rev = "5cb9b693cfd7bfacab1d9ff4acac1a4150700609" }
tree-sitter-cpp = "0.23"
tree-sitter-css = "0.23"
tree-sitter-diff = "0.1.0"
tree-sitter-elixir = "0.3"
tree-sitter-embedded-template = "0.23.0"
tree-sitter-gitcommit = { git = "https://github.com/zed-industries/tree-sitter-git-commit", rev = "88309716a69dd13ab83443721ba6e0b491d37ee9" }
tree-sitter-go = "0.23"
tree-sitter-go-mod = { git = "https://github.com/camdencheek/tree-sitter-go-mod", rev = "6efb59652d30e0e9cd5f3b3a669afd6f1a926d3c", package = "tree-sitter-gomod" }
tree-sitter-go-mod = { git = "https://github.com/zed-industries/tree-sitter-go-mod", rev = "a9aea5e358cde4d0f8ff20b7bc4fa311e359c7ca", package = "tree-sitter-gomod" }
tree-sitter-gowork = { git = "https://github.com/zed-industries/tree-sitter-go-work", rev = "acb0617bf7f4fda02c6217676cc64acb89536dc7" }
tree-sitter-heex = { git = "https://github.com/zed-industries/tree-sitter-heex", rev = "1dd45142fbb05562e35b2040c6129c9bca346592" }
tree-sitter-html = "0.23"
tree-sitter-diff = "0.1.0"
tree-sitter-html = "0.20"
tree-sitter-jsdoc = "0.23"
tree-sitter-json = "0.24"
tree-sitter-md = { git = "https://github.com/tree-sitter-grammars/tree-sitter-markdown", rev = "9a23c1a96c0513d8fc6520972beedd419a973539" }
tree-sitter-python = { git = "https://github.com/zed-industries/tree-sitter-python", rev = "218fcbf3fda3d029225f3dec005cb497d111b35e" }
tree-sitter-regex = "0.24"
tree-sitter-json = "0.23"
tree-sitter-md = { git = "https://github.com/zed-industries/tree-sitter-markdown", rev = "4cfa6aad6b75052a5077c80fd934757d9267d81b" }
tree-sitter-python = "0.23"
tree-sitter-regex = "0.23"
tree-sitter-ruby = "0.23"
tree-sitter-rust = "0.24"
tree-sitter-rust = "0.23"
tree-sitter-typescript = "0.23"
tree-sitter-yaml = { git = "https://github.com/zed-industries/tree-sitter-yaml", rev = "baff0b51c64ef6a1fb1f8390f3ad6015b83ec13a" }
unicase = "2.6"
unicode-script = "0.5.7"
unindent = "0.1.7"
unicode-segmentation = "1.10"
unindent = "0.2.0"
url = "2.2"
urlencoding = "2.1.2"
uuid = { version = "1.1.2", features = ["v4", "v5", "v7", "serde"] }
walkdir = "2.5"
wasm-encoder = "0.221"
wasmparser = "0.221"
wasmtime = { version = "29", default-features = false, features = [
uuid = { version = "1.1.2", features = ["v4", "v5", "serde"] }
wasmparser = "0.215"
wasm-encoder = "0.215"
wasmtime = { version = "24", default-features = false, features = [
"async",
"demangle",
"runtime",
"cranelift",
"component-model",
"incremental-cache",
"parallel-compilation",
] }
wasmtime-wasi = "29"
wasmtime-wasi = "24"
which = "6.0.0"
windows-core = "0.61"
wit-component = "0.221"
workspace-hack = "0.1.0"
yawc = "0.2.5"
wit-component = "0.201"
zstd = "0.11"

[workspace.dependencies.windows]
version = "0.61"
[workspace.dependencies.async-stripe]
git = "https://github.com/zed-industries/async-stripe"
rev = "3672dd4efb7181aa597bf580bf5a2f5d23db6735"
default-features = false
features = [
"runtime-tokio-hyper-rustls",
"billing",
"checkout",
"events",
# The features below are only enabled to get the `events` feature to build.
"chrono",
"connect",
]

[workspace.dependencies.windows]
version = "0.58"
features = [
"implement",
"Foundation_Numerics",
"Storage_Search",
"Storage_Streams",
"Storage",
"System_Threading",
"UI_ViewManagement",
"Wdk_System_SystemServices",
"Win32_Globalization",
"Win32_Graphics_Direct3D",
"Win32_Graphics_Direct3D11",
"Win32_Graphics_Direct3D_Fxc",
"Win32_Graphics_DirectComposition",
"Win32_Graphics_Direct2D",
"Win32_Graphics_Direct2D_Common",
"Win32_Graphics_DirectWrite",
"Win32_Graphics_Dwm",
"Win32_Graphics_Dxgi",
"Win32_Graphics_Dxgi_Common",
"Win32_Graphics_Gdi",
"Win32_Graphics_Imaging",
"Win32_Networking_WinSock",
"Win32_Graphics_Imaging_D2D",
"Win32_Security",
"Win32_Security_Credentials",
"Win32_Storage_FileSystem",
"Win32_System_Com",
"Win32_System_Com_StructuredStorage",
"Win32_System_Console",
"Win32_System_DataExchange",
"Win32_System_IO",
"Win32_System_LibraryLoader",
"Win32_System_Memory",
"Win32_System_Ole",
"Win32_System_Performance",
"Win32_System_Pipes",
"Win32_System_SystemInformation",
"Win32_System_SystemServices",
"Win32_System_Threading",
"Win32_System_Variant",
"Win32_System_WinRT",
"Win32_UI_Controls",
"Win32_UI_HiDpi",
@@ -716,76 +536,22 @@ features = [
"Win32_UI_Input_KeyboardAndMouse",
"Win32_UI_Shell",
"Win32_UI_Shell_Common",
"Win32_UI_Shell_PropertiesSystem",
"Win32_UI_WindowsAndMessaging",
]

[patch.crates-io]
notify = { git = "https://github.com/zed-industries/notify.git", rev = "bbb9ea5ae52b253e095737847e367c30653a2e96" }
notify-types = { git = "https://github.com/zed-industries/notify.git", rev = "bbb9ea5ae52b253e095737847e367c30653a2e96" }
windows-capture = { git = "https://github.com/zed-industries/windows-capture.git", rev = "f0d6c1b6691db75461b732f6d5ff56eed002eeb9" }

# Makes the workspace hack crate refer to the local one, but only when you're building locally
workspace-hack = { path = "tooling/workspace-hack" }

[profile.dev]
split-debuginfo = "unpacked"
codegen-units = 16

# mirror configuration for crates compiled for the build platform
# (without this cargo will compile ~400 crates twice)
[profile.dev.build-override]
debug = "limited"
codegen-units = 16

[profile.dev.package]
taffy = { opt-level = 3 }
cranelift-codegen = { opt-level = 3 }
cranelift-codegen-meta = { opt-level = 3 }
cranelift-codegen-shared = { opt-level = 3 }
resvg = { opt-level = 3 }
rustybuzz = { opt-level = 3 }
ttf-parser = { opt-level = 3 }
wasmtime-cranelift = { opt-level = 3 }
wasmtime = { opt-level = 3 }
# Build single-source-file crates with cg=1 as it helps make `cargo build` of a whole workspace a bit faster
activity_indicator = { codegen-units = 1 }
assets = { codegen-units = 1 }
breadcrumbs = { codegen-units = 1 }
collections = { codegen-units = 1 }
command_palette = { codegen-units = 1 }
command_palette_hooks = { codegen-units = 1 }
extension_cli = { codegen-units = 1 }
feature_flags = { codegen-units = 1 }
file_icons = { codegen-units = 1 }
fsevent = { codegen-units = 1 }
image_viewer = { codegen-units = 1 }
edit_prediction_button = { codegen-units = 1 }
install_cli = { codegen-units = 1 }
journal = { codegen-units = 1 }
lmstudio = { codegen-units = 1 }
menu = { codegen-units = 1 }
notifications = { codegen-units = 1 }
ollama = { codegen-units = 1 }
outline = { codegen-units = 1 }
paths = { codegen-units = 1 }
prettier = { codegen-units = 1 }
project_symbols = { codegen-units = 1 }
refineable = { codegen-units = 1 }
release_channel = { codegen-units = 1 }
reqwest_client = { codegen-units = 1 }
rich_text = { codegen-units = 1 }
semantic_version = { codegen-units = 1 }
session = { codegen-units = 1 }
snippet = { codegen-units = 1 }
snippets_ui = { codegen-units = 1 }
sqlez_macros = { codegen-units = 1 }
story = { codegen-units = 1 }
supermaven_api = { codegen-units = 1 }
telemetry_events = { codegen-units = 1 }
theme_selector = { codegen-units = 1 }
time_format = { codegen-units = 1 }
ui_input = { codegen-units = 1 }
zed_actions = { codegen-units = 1 }

[profile.release]
debug = "limited"
@@ -801,54 +567,40 @@ debug = "full"
lto = false
codegen-units = 16

[workspace.lints.rust]
unexpected_cfgs = { level = "allow" }

[workspace.lints.clippy]
dbg_macro = "deny"
todo = "deny"

# This is not a style lint, see https://github.com/rust-lang/rust-clippy/pull/15454
# Remove when the lint gets promoted to `suspicious`.
declare_interior_mutable_const = "deny"

redundant_clone = "deny"

# We currently do not restrict any style rules
# as it slows down shipping code to Zed.
#
# Running ./script/clippy can take several minutes, and so it's
# common to skip that step and let CI do it. Any unexpected failures
# (which also take minutes to discover) thus require switching back
# to an old branch, manual fixing, and re-pushing.
#
# In the future we could improve this by either making sure
# Zed can surface clippy errors in diagnostics (in addition to the
# rust-analyzer errors), or by having CI fix style nits automatically.
style = { level = "allow", priority = -1 }

# Individual rules that have violations in the codebase:
type_complexity = "allow"
let_underscore_future = "allow"

# Motivation: We use `vec![a..b]` a lot when dealing with ranges in text, so
# warning on this rule produces a lot of noise.
single_range_in_vec_init = "allow"

# in Rust it can be very tedious to reduce argument count without
# running afoul of the borrow checker.
too_many_arguments = "allow"
# These are all of the rules that currently have violations in the Zed
# codebase.
#
# We'll want to drive this list down by either:
# 1. fixing violations of the rule and begin enforcing it
# 2. deciding we want to allow the rule permanently, at which point
# we should codify that separately above.
#
# This list shouldn't be added to; it should only get shorter.
# =============================================================================

# We often have large enum variants yet we rarely actually bother with splitting them up.
large_enum_variant = "allow"
# There are a bunch of rules currently failing in the `style` group, so
# allow all of those, for now.
style = { level = "allow", priority = -1 }

# Temporary list of style lints that we've fixed so far.
module_inception = { level = "deny" }
question_mark = { level = "deny" }
redundant_closure = { level = "deny" }
# Individual rules that have violations in the codebase:
type_complexity = "allow"
# We often return trait objects from `new` functions.
new_ret_no_self = { level = "allow" }
# We have a few `next` functions that differ in lifetimes
# compared to Iterator::next. Yet, clippy complains about those.
should_implement_trait = { level = "allow" }

[workspace.metadata.cargo-machete]
ignored = [
"bindgen",
"cbindgen",
"prost_build",
"serde",
"component",
"documented",
"workspace-hack",
]
ignored = ["bindgen", "cbindgen", "prost_build", "serde"]

@@ -1,6 +1,6 @@
# syntax = docker/dockerfile:1.2

FROM rust:1.89-bookworm as builder
FROM rust:1.81-bookworm as builder
WORKDIR app
COPY . .

@@ -1,4 +1,4 @@
Copyright 2022 - 2025 Zed Industries, Inc.
Copyright 2022 - 2024 Zed Industries, Inc.

@@ -1,4 +1,4 @@
Copyright 2022 - 2025 Zed Industries, Inc.
Copyright 2022 - 2024 Zed Industries, Inc.

Licensed under the Apache License, Version 2.0 (the "License");

1  Procfile
@@ -1,4 +1,3 @@
collab: RUST_LOG=${RUST_LOG:-info} cargo run --package=collab serve all
cloud: cd ../cloud; cargo make dev
livekit: livekit-server --dev
blob_store: ./script/run-local-minio

@@ -1,2 +0,0 @@
postgrest_llm: postgrest crates/collab/postgrest_llm.conf
website: cd ../zed.dev; npm run dev -- --port=3000
@@ -1,6 +1,5 @@
# Zed

[](https://zed.dev)
[](https://github.com/zed-industries/zed/actions/workflows/ci.yml)

Welcome to Zed, a high-performance, multiplayer code editor from the creators of [Atom](https://github.com/atom/atom) and [Tree-sitter](https://github.com/tree-sitter/tree-sitter).
@@ -9,6 +8,10 @@ Welcome to Zed, a high-performance, multiplayer code editor from the creators of

### Installation

<a href="https://repology.org/project/zed-editor/versions">
<img src="https://repology.org/badge/vertical-allrepos/zed-editor.svg?minversion=0.143.5" alt="Packaging status" align="right">
</a>

On macOS and Linux you can [download Zed directly](https://zed.dev/download) or [install Zed via your local package manager](https://zed.dev/docs/linux#installing-via-a-package-manager).

Other platforms are not yet available:

@@ -1,8 +0,0 @@
{
"label": "",
"message": "Zed",
"logoSvg": "<svg xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 96 96\"><rect width=\"96\" height=\"96\" fill=\"#000\"/><path fill-rule=\"evenodd\" clip-rule=\"evenodd\" d=\"M9 6C7.34315 6 6 7.34315 6 9V75H0V9C0 4.02944 4.02944 0 9 0H89.3787C93.3878 0 95.3955 4.84715 92.5607 7.68198L43.0551 57.1875H57V51H63V58.6875C63 61.1728 60.9853 63.1875 58.5 63.1875H37.0551L26.7426 73.5H73.5V36H79.5V73.5C79.5 76.8137 76.8137 79.5 73.5 79.5H20.7426L10.2426 90H87C88.6569 90 90 88.6569 90 87V21H96V87C96 91.9706 91.9706 96 87 96H6.62132C2.61224 96 0.604504 91.1529 3.43934 88.318L52.7574 39H39V45H33V37.5C33 35.0147 35.0147 33 37.5 33H58.7574L69.2574 22.5H22.5V60H16.5V22.5C16.5 19.1863 19.1863 16.5 22.5 16.5H75.2574L85.7574 6H9Z\" fill=\"#fff\"/></svg>",
"logoWidth": 16,
"labelColor": "black",
"color": "white"
}
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
BIN  assets/fonts/plex-mono/ZedPlexMono-Bold.ttf  Normal file  Binary file not shown.
BIN  assets/fonts/plex-mono/ZedPlexMono-BoldItalic.ttf  Normal file  Binary file not shown.
BIN  assets/fonts/plex-mono/ZedPlexMono-Italic.ttf  Normal file  Binary file not shown.
BIN  assets/fonts/plex-mono/ZedPlexMono-Regular.ttf  Normal file  Binary file not shown.
BIN  assets/fonts/plex-sans/ZedPlexSans-Bold.ttf  Normal file  Binary file not shown.
BIN  assets/fonts/plex-sans/ZedPlexSans-BoldItalic.ttf  Normal file  Binary file not shown.
BIN  assets/fonts/plex-sans/ZedPlexSans-Italic.ttf  Normal file  Binary file not shown.
BIN  assets/fonts/plex-sans/ZedPlexSans-Regular.ttf  Normal file  Binary file not shown.
@@ -1,9 +1,8 @@
Copyright 2019 The Lilex Project Authors (https://github.com/mishamyrt/Lilex)
Copyright © 2017 IBM Corp. with Reserved Font Name "Plex"

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://scripts.sil.org/OFL

http://scripts.sil.org/OFL

-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
@@ -90,4 +89,4 @@ COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
OTHER DEALINGS IN THE FONT SOFTWARE.
@@ -1,5 +1,5 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M10.5 8.75V10.5C8.43097 10.5 7.56903 10.5 5.5 10.5V10L10.5 6V5.5H5.5V7.25" stroke="black" stroke-width="1.2"/>
<path d="M10.5 8.75V10.5C8.43097 10.5 7.56903 10.5 5.5 10.5V10L10.5 6V5.5H5.5V7.25" stroke="black" stroke-width="1.5"/>
<path d="M1.5 8.5C1.77614 8.5 2 8.27614 2 8C2 7.72386 1.77614 7.5 1.5 7.5C1.22386 7.5 1 7.72386 1 8C1 8.27614 1.22386 8.5 1.5 8.5Z" fill="black"/>
<path d="M2.49976 6.33002C2.7759 6.33002 2.99976 6.10616 2.99976 5.83002C2.99976 5.55387 2.7759 5.33002 2.49976 5.33002C2.22361 5.33002 1.99976 5.55387 1.99976 5.83002C1.99976 6.10616 2.22361 6.33002 2.49976 6.33002Z" fill="black"/>
<path d="M2.49976 10.66C2.7759 10.66 2.99976 10.4361 2.99976 10.16C2.99976 9.88383 2.7759 9.65997 2.49976 9.65997C2.22361 9.65997 1.99976 9.88383 1.99976 10.16C1.99976 10.4361 2.22361 10.66 2.49976 10.66Z" fill="black"/>
12  assets/icons/ai_anthropic_hosted.svg  Normal file
@@ -0,0 +1,12 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<rect width="16" height="16" rx="2" fill="black" fill-opacity="0.2"/>
<g clip-path="url(#clip0_1916_18)">
<path d="M10.652 3.79999H8.816L12.164 12.2H14L10.652 3.79999Z" fill="#1F1F1E"/>
<path d="M5.348 3.79999L2 12.2H3.872L4.55672 10.436H8.05927L8.744 12.2H10.616L7.268 3.79999H5.348ZM5.16224 8.87599L6.308 5.92399L7.45374 8.87599H5.16224Z" fill="#1F1F1E"/>
</g>
<defs>
<clipPath id="clip0_1916_18">
<rect width="12" height="8.4" fill="white" transform="translate(2 3.79999)"/>
</clipPath>
</defs>
</svg>
@@ -1,8 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<mask id="path-1-outside-1_2722_10821" maskUnits="userSpaceOnUse" x="1.00002" y="0.500015" width="15" height="15" fill="black">
<rect fill="white" x="1.00002" y="0.500015" width="15" height="15"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M14.0714 7.76784C13.8154 7.76784 13.6071 7.55961 13.6071 7.30355C13.6071 7.0475 13.8154 6.83927 14.0714 6.83927C14.3275 6.83927 14.5357 7.0475 14.5357 7.30355C14.5357 7.55961 14.3275 7.76784 14.0714 7.76784ZM6.66911 14.0143L5.9151 13.5747L7.46234 12.6075L7.21627 12.2138L5.46196 13.3102L4.0893 12.5096V10.4456L5.37885 9.58598L5.12117 9.19969L3.84765 10.0486L2.4643 9.25819V8.13462L3.97231 7.27291L3.74202 6.86991L2.4643 7.6V6.74177L3.85716 5.94598L5.25001 6.74177V7.63645L4.2019 8.26532L4.441 8.66321L5.48215 8.03852L6.52332 8.66321L6.76242 8.26532L5.7143 7.63645V6.73132L7.00385 5.8717C7.06838 5.82852 7.10715 5.75609 7.10715 5.67855V4.05356H6.64287V5.55436L5.47265 6.33436L4.0893 5.54391V3.49038L5.25001 2.81345V4.74998H5.7143V2.54254L6.66911 1.98563L8.50001 2.59594V9.26145L5.13047 11.2832L5.36957 11.6811L8.50001 9.8028V13.404L6.66911 14.0143ZM13.6071 10.3214C13.6071 10.5775 13.3989 10.7857 13.1429 10.7857C12.8868 10.7857 12.6786 10.5775 12.6786 10.3214C12.6786 10.0654 12.8868 9.85712 13.1429 9.85712C13.3989 9.85712 13.6071 10.0654 13.6071 10.3214ZM11.2857 12.6428C11.2857 12.8989 11.0775 13.1071 10.8214 13.1071C10.5654 13.1071 10.3571 12.8989 10.3571 12.6428C10.3571 12.3868 10.5654 12.1785 10.8214 12.1785C11.0775 12.1785 11.2857 12.3868 11.2857 12.6428ZM11.0536 3.35713C11.0536 3.10108 11.2618 2.89284 11.5179 2.89284C11.7739 2.89284 11.9821 3.10108 11.9821 3.35713C11.9821 3.61318 11.7739 3.82141 11.5179 3.82141C11.2618 3.82141 11.0536 3.61318 11.0536 3.35713ZM14.0714 6.37498C13.6399 6.37498 13.2796 6.67212 13.1758 7.07141H8.96429V5.9107H11.5179C11.6462 5.9107 11.75 5.8067 11.75 5.67855V4.25274C12.1493 4.14897 12.4464 3.78845 12.4464 3.35713C12.4464 2.84502 12.03 2.42856 11.5179 2.42856C11.0058 2.42856 10.5893 2.84502 10.5893 3.35713C10.5893 3.78845 10.8864 4.14897 11.2857 4.25274V5.44641H8.96429V2.42856C8.96429 2.32851 8.90046 2.24006 8.80552 2.20826L6.71623 1.51183C6.65263 1.49094 6.58345 1.4979 6.52587 1.53156L3.74016 3.15656C3.66866 3.19811 3.62501 3.27472 3.62501 3.35713V5.54391L2.11702 6.40563C2.04459 6.44695 2.00002 6.52379 2.00002 6.60713V9.39284C2.00002 9.47618 2.04459 9.55301 2.11702 9.59434L3.62501 10.456V12.6428C3.62501 12.7252 3.66866 12.8018 3.74016 12.8434L6.52587 14.4684C6.56162 14.4893 6.60224 14.5 6.64287 14.5C6.66747 14.5 6.69232 14.496 6.71623 14.4881L8.80552 13.7917C8.90046 13.7599 8.96429 13.6715 8.96429 13.5714V10.7857H10.5893V11.7472C10.19 11.851 9.89286 12.2115 9.89286 12.6428C9.89286 13.1549 10.3093 13.5714 10.8214 13.5714C11.3335 13.5714 11.75 13.1549 11.75 12.6428C11.75 12.2115 11.4529 11.851 11.0536 11.7472V10.5535C11.0536 10.4254 10.9498 10.3214 10.8214 10.3214H8.96429V9.16069H11.8661L12.3624 9.8223C12.2698 9.96669 12.2143 10.1373 12.2143 10.3214C12.2143 10.8335 12.6308 11.25 13.1429 11.25C13.655 11.25 14.0714 10.8335 14.0714 10.3214C14.0714 9.8093 13.655 9.39284 13.1429 9.39284C12.9841 9.39284 12.8369 9.43648 12.7062 9.50705L12.1679 8.78927C12.124 8.73077 12.055 8.69641 11.9821 8.69641H8.96429V7.5357H13.1758C13.2796 7.93498 13.6399 8.23212 14.0714 8.23212C14.5835 8.23212 15 7.81566 15 7.30355C15 6.79145 14.5835 6.37498 14.0714 6.37498Z"/>
</mask>
<path fill-rule="evenodd" clip-rule="evenodd" d="M14.0714 7.76784C13.8154 7.76784 13.6071 7.55961 13.6071 7.30355C13.6071 7.0475 13.8154 6.83927 14.0714 6.83927C14.3275 6.83927 14.5357 7.0475 14.5357 7.30355C14.5357 7.55961 14.3275 7.76784 14.0714 7.76784ZM6.66911 14.0143L5.9151 13.5747L7.46234 12.6075L7.21627 12.2138L5.46196 13.3102L4.0893 12.5096V10.4456L5.37885 9.58598L5.12117 9.19969L3.84765 10.0486L2.4643 9.25819V8.13462L3.97231 7.27291L3.74202 6.86991L2.4643 7.6V6.74177L3.85716 5.94598L5.25001 6.74177V7.63645L4.2019 8.26532L4.441 8.66321L5.48215 8.03852L6.52332 8.66321L6.76242 8.26532L5.7143 7.63645V6.73132L7.00385 5.8717C7.06838 5.82852 7.10715 5.75609 7.10715 5.67855V4.05356H6.64287V5.55436L5.47265 6.33436L4.0893 5.54391V3.49038L5.25001 2.81345V4.74998H5.7143V2.54254L6.66911 1.98563L8.50001 2.59594V9.26145L5.13047 11.2832L5.36957 11.6811L8.50001 9.8028V13.404L6.66911 14.0143ZM13.6071 10.3214C13.6071 10.5775 13.3989 10.7857 13.1429 10.7857C12.8868 10.7857 12.6786 10.5775 12.6786 10.3214C12.6786 10.0654 12.8868 9.85712 13.1429 9.85712C13.3989 9.85712 13.6071 10.0654 13.6071 10.3214ZM11.2857 12.6428C11.2857 12.8989 11.0775 13.1071 10.8214 13.1071C10.5654 13.1071 10.3571 12.8989 10.3571 12.6428C10.3571 12.3868 10.5654 12.1785 10.8214 12.1785C11.0775 12.1785 11.2857 12.3868 11.2857 12.6428ZM11.0536 3.35713C11.0536 3.10108 11.2618 2.89284 11.5179 2.89284C11.7739 2.89284 11.9821 3.10108 11.9821 3.35713C11.9821 3.61318 11.7739 3.82141 11.5179 3.82141C11.2618 3.82141 11.0536 3.61318 11.0536 3.35713ZM14.0714 6.37498C13.6399 6.37498 13.2796 6.67212 13.1758 7.07141H8.96429V5.9107H11.5179C11.6462 5.9107 11.75 5.8067 11.75 5.67855V4.25274C12.1493 4.14897 12.4464 3.78845 12.4464 3.35713C12.4464 2.84502 12.03 2.42856 11.5179 2.42856C11.0058 2.42856 10.5893 2.84502 10.5893 3.35713C10.5893 3.78845 10.8864 4.14897 11.2857 4.25274V5.44641H8.96429V2.42856C8.96429 2.32851 8.90046 2.24006 8.80552 2.20826L6.71623 1.51183C6.65263 1.49094 6.58345 1.4979 6.52587 1.53156L3.74016 3.15656C3.66866 3.19811 3.62501 3.27472 3.62501 3.35713V5.54391L2.11702 6.40563C2.04459 6.44695 2.00002 6.52379 2.00002 6.60713V9.39284C2.00002 9.47618 2.04459 9.55301 2.11702 9.59434L3.62501 10.456V12.6428C3.62501 12.7252 3.66866 12.8018 3.74016 12.8434L6.52587 14.4684C6.56162 14.4893 6.60224 14.5 6.64287 14.5C6.66747 14.5 6.69232 14.496 6.71623 14.4881L8.80552 13.7917C8.90046 13.7599 8.96429 13.6715 8.96429 13.5714V10.7857H10.5893V11.7472C10.19 11.851 9.89286 12.2115 9.89286 12.6428C9.89286 13.1549 10.3093 13.5714 10.8214 13.5714C11.3335 13.5714 11.75 13.1549 11.75 12.6428C11.75 12.2115 11.4529 11.851 11.0536 11.7472V10.5535C11.0536 10.4254 10.9498 10.3214 10.8214 10.3214H8.96429V9.16069H11.8661L12.3624 9.8223C12.2698 9.96669 12.2143 10.1373 12.2143 10.3214C12.2143 10.8335 12.6308 11.25 13.1429 11.25C13.655 11.25 14.0714 10.8335 14.0714 10.3214C14.0714 9.8093 13.655 9.39284 13.1429 9.39284C12.9841 9.39284 12.8369 9.43648 12.7062 9.50705L12.1679 8.78927C12.124 8.73077 12.055 8.69641 11.9821 8.69641H8.96429V7.5357H13.1758C13.2796 7.93498 13.6399 8.23212 14.0714 8.23212C14.5835 8.23212 15 7.81566 15 7.30355C15 6.79145 14.5835 6.37498 14.0714 6.37498Z" fill="black"/>
<path fill-rule="evenodd" clip-rule="evenodd" d="M14.0714 7.76784C13.8154 7.76784 13.6071 7.55961 13.6071 7.30355C13.6071 7.0475 13.8154 6.83927 14.0714 6.83927C14.3275 6.83927 14.5357 7.0475 14.5357 7.30355C14.5357 7.55961 14.3275 7.76784 14.0714 7.76784ZM6.66911 14.0143L5.9151 13.5747L7.46234 12.6075L7.21627 12.2138L5.46196 13.3102L4.0893 12.5096V10.4456L5.37885 9.58598L5.12117 9.19969L3.84765 10.0486L2.4643 9.25819V8.13462L3.97231 7.27291L3.74202 6.86991L2.4643 7.6V6.74177L3.85716 5.94598L5.25001 6.74177V7.63645L4.2019 8.26532L4.441 8.66321L5.48215 8.03852L6.52332 8.66321L6.76242 8.26532L5.7143 7.63645V6.73132L7.00385 5.8717C7.06838 5.82852 7.10715 5.75609 7.10715 5.67855V4.05356H6.64287V5.55436L5.47265 6.33436L4.0893 5.54391V3.49038L5.25001 2.81345V4.74998H5.7143V2.54254L6.66911 1.98563L8.50001 2.59594V9.26145L5.13047 11.2832L5.36957 11.6811L8.50001 9.8028V13.404L6.66911 14.0143ZM13.6071 10.3214C13.6071 10.5775 13.3989 10.7857 13.1429 10.7857C12.8868 10.7857 12.6786 10.5775 12.6786 10.3214C12.6786 10.0654 12.8868 9.85712 13.1429 9.85712C13.3989 9.85712 13.6071 10.0654 13.6071 10.3214ZM11.2857 12.6428C11.2857 12.8989 11.0775 13.1071 10.8214 13.1071C10.5654 13.1071 10.3571 12.8989 10.3571 12.6428C10.3571 12.3868 10.5654 12.1785 10.8214 12.1785C11.0775 12.1785 11.2857 12.3868 11.2857 12.6428ZM11.0536 3.35713C11.0536 3.10108 11.2618 2.89284 11.5179 2.89284C11.7739 2.89284 11.9821 3.10108 11.9821 3.35713C11.9821 3.61318 11.7739 3.82141 11.5179 3.82141C11.2618 3.82141 11.0536 3.61318 11.0536 3.35713ZM14.0714 6.37498C13.6399 6.37498 13.2796 6.67212 13.1758 7.07141H8.96429V5.9107H11.5179C11.6462 5.9107 11.75 5.8067 11.75 5.67855V4.25274C12.1493 4.14897 12.4464 3.78845 12.4464 3.35713C12.4464 2.84502 12.03 2.42856 11.5179 2.42856C11.0058 2.42856 10.5893 2.84502 10.5893 3.35713C10.5893 3.78845 10.8864 4.14897 11.2857 4.25274V5.44641H8.96429V2.42856C8.96429 2.32851 8.90046 2.24006 8.80552 2.20826L6.71623 1.51183C6.65263 1.49094 6.58345 1.4979 6.52587 1.53156L3.74016 3.15656C3.66866 3.19811 3.62501 3.27472 3.62501 3.35713V5.54391L2.11702 6.40563C2.04459 6.44695 2.00002 6.52379 2.00002 6.60713V9.39284C2.00002 9.47618 2.04459 9.55301 2.11702 9.59434L3.62501 10.456V12.6428C3.62501 12.7252 3.66866 12.8018 3.74016 12.8434L6.52587 14.4684C6.56162 14.4893 6.60224 14.5 6.64287 14.5C6.66747 14.5 6.69232 14.496 6.71623 14.4881L8.80552 13.7917C8.90046 13.7599 8.96429 13.6715 8.96429 13.5714V10.7857H10.5893V11.7472C10.19 11.851 9.89286 12.2115 9.89286 12.6428C9.89286 13.1549 10.3093 13.5714 10.8214 13.5714C11.3335 13.5714 11.75 13.1549 11.75 12.6428C11.75 12.2115 11.4529 11.851 11.0536 11.7472V10.5535C11.0536 10.4254 10.9498 10.3214 10.8214 10.3214H8.96429V9.16069H11.8661L12.3624 9.8223C12.2698 9.96669 12.2143 10.1373 12.2143 10.3214C12.2143 10.8335 12.6308 11.25 13.1429 11.25C13.655 11.25 14.0714 10.8335 14.0714 10.3214C14.0714 9.8093 13.655 9.39284 13.1429 9.39284C12.9841 9.39284 12.8369 9.43648 12.7062 9.50705L12.1679 8.78927C12.124 8.73077 12.055 8.69641 11.9821 8.69641H8.96429V7.5357H13.1758C13.2796 7.93498 13.6399 8.23212 14.0714 8.23212C14.5835 8.23212 15 7.81566 15 7.30355C15 6.79145 14.5835 6.37498 14.0714 6.37498Z" stroke="black" stroke-width="0.4" mask="url(#path-1-outside-1_2722_10821)"/>
</svg>
@@ -1,3 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M4.35443 9.97775L6.71495 8.65418L6.75443 8.53883L6.71495 8.47508H6.59948L6.20456 8.45081L4.8557 8.41436L3.68608 8.36579L2.55291 8.30507L2.26734 8.24438L2 7.89224L2.02734 7.71617L2.26734 7.55528L2.61063 7.58564L3.37013 7.63724L4.50937 7.71617L5.3357 7.76474L6.56 7.89224H6.75443L6.78176 7.81331L6.71495 7.76474L6.66329 7.71617L5.48456 6.91778L4.20861 6.07388L3.54025 5.58815L3.17873 5.34227L2.99646 5.11157L2.91747 4.60764L3.24557 4.24639L3.68608 4.27675L3.79848 4.30711L4.24506 4.65014L5.19899 5.38781L6.44456 6.30458L6.62684 6.45635L6.69974 6.40475L6.70886 6.36833L6.62684 6.23171L5.94938 5.00834L5.22632 3.76372L4.9043 3.24766L4.81924 2.93802C4.78886 2.81053 4.7676 2.70427 4.7676 2.57374L5.14127 2.06678L5.34785 2L5.84609 2.06678L6.0557 2.24893L6.36557 2.95624L6.86684 4.07033L7.64456 5.58512L7.87241 6.0344L7.99391 6.45029L8.03948 6.57779H8.11847V6.50492L8.18228 5.6519L8.30075 4.6046L8.41619 3.25677L8.4557 2.87731L8.64404 2.42196L9.01772 2.17607L9.30938 2.31571L9.54938 2.65874L9.51596 2.88034L9.37316 3.80622L9.09368 5.25728L8.9114 6.22868H9.01772L9.13925 6.10727L9.6314 5.45459L10.4577 4.42246L10.8223 4.01265L11.2476 3.56033L11.521 3.3448H12.0375L12.4172 3.90944L12.2471 4.49228L11.7154 5.1662L11.275 5.73692L10.643 6.58691L10.2481 7.26689L10.2846 7.32152L10.3787 7.31243L11.8066 7.00886L12.5782 6.86921L13.4987 6.71135L13.915 6.90563L13.9605 7.10297L13.7965 7.50671L12.8122 7.74956L11.6577 7.98026L9.93824 8.38706L9.91697 8.40224L9.94127 8.43257L10.716 8.50544L11.0471 8.52365H11.8582L13.3681 8.63597L13.763 8.89703L14 9.21578L13.9605 9.45863L13.3529 9.76829L12.5327 9.57398L10.6187 9.11864L9.96254 8.95472H9.8714V9.00935L10.4182 9.54365L11.4208 10.4483L12.6754 11.614L12.7393 11.9023L12.5782 12.13L12.4081 12.1057L11.3053 11.277L10.88 10.9036L9.91697 10.0931H9.85316V10.1781L10.075 10.5029L11.2476 12.2636L11.3083 12.804L11.2233 12.98L10.9195 13.0863L10.5853 13.0255L9.89873 12.0632L9.19088 10.9795L8.61974 10.0081L8.54987 10.0476L8.21267 13.6752L8.05469 13.8604L7.69013 14L7.38632 13.7693L7.22531 13.3959L7.38632 12.6582L7.58075 11.6959L7.73873 10.9309L7.88153 9.98078L7.96658 9.66506L7.96052 9.64382L7.89062 9.65291L7.17368 10.6365L6.08303 12.1088L5.22026 13.0316L5.01368 13.1136L4.65519 12.9284L4.68861 12.5975L4.88911 12.303L6.08303 10.7852L6.80303 9.84416L7.26785 9.30077L7.26482 9.22187H7.23746L4.06582 11.2801L3.50076 11.3529L3.25772 11.1252L3.2881 10.7518L3.40354 10.6304L4.35747 9.97469L4.35443 9.97775Z" fill="black"/>
</svg>
@@ -1,3 +0,0 @@
<svg width="16" height="16" viewBox="0 0 16 16" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M15.143 3.82006C15.0024 3.75143 14.9415 3.8826 14.8596 3.94956C14.8314 3.97115 14.8076 3.99937 14.7838 4.02483C14.5779 4.24454 14.3377 4.38844 14.0239 4.37128C13.5651 4.34582 13.1733 4.48972 12.8268 4.84059C12.7532 4.40781 12.5086 4.14991 12.1367 3.98387C11.9419 3.89754 11.7449 3.81176 11.6082 3.62414C11.513 3.49076 11.487 3.34189 11.4394 3.19578C11.4089 3.10723 11.3785 3.01702 11.2772 3.00208C11.1665 2.98492 11.1234 3.07735 11.0802 3.15483C10.907 3.47139 10.84 3.82006 10.8466 4.17315C10.8616 4.96788 11.197 5.60101 11.8639 6.05096C11.9397 6.10243 11.9591 6.15445 11.9353 6.22972C11.8899 6.38468 11.8356 6.53521 11.788 6.69073C11.7576 6.7898 11.7122 6.81083 11.606 6.76821C11.2469 6.61391 10.9208 6.39223 10.6452 6.11516C10.1709 5.65691 9.74254 5.15107 9.20792 4.75481C9.08405 4.66328 8.95686 4.57633 8.82661 4.49414C8.28147 3.9645 8.89855 3.5295 9.04134 3.47803C9.19077 3.4238 9.09281 3.23895 8.61021 3.24116C8.12762 3.24338 7.68598 3.40443 7.12313 3.61971C7.03949 3.65176 6.95344 3.67712 6.86578 3.69553C6.33981 3.59643 5.80188 3.5774 5.27023 3.63908C4.227 3.75531 3.39408 4.24897 2.78142 5.09075C2.04535 6.10243 1.87213 7.25247 2.08409 8.45121C2.30713 9.71526 2.95244 10.7618 3.94364 11.5798C4.97192 12.4282 6.15572 12.8438 7.50666 12.7641C8.32685 12.7171 9.24058 12.607 10.2705 11.7347C10.5306 11.8643 10.8029 11.9157 11.2556 11.9545C11.6043 11.9871 11.9397 11.9379 12.1992 11.8836C12.606 11.7973 12.5778 11.4204 12.4311 11.3518C11.2385 10.7961 11.5003 11.0225 11.2617 10.8393C11.8683 10.122 12.7815 9.37711 13.139 6.96357C13.1667 6.77153 13.1429 6.65088 13.139 6.49592C13.1368 6.40184 13.1584 6.36476 13.2663 6.35424C13.5657 6.32318 13.8562 6.23388 14.1213 6.09136C14.8939 5.66909 15.2061 4.97619 15.2797 4.14492C15.2907 4.01763 15.2775 3.88702 15.143 3.82006ZM8.40932 11.3014C7.25319 10.3927 6.69256 10.0933 6.46122 10.106C6.24427 10.1193 6.28357 10.3667 6.33116 10.5283C6.38097 10.6876 6.44572 10.7972 6.53649 10.9372C6.59958 11.0297 6.64275 11.1675 6.47395 11.271C6.10149 11.5012 5.45452 11.1935 5.42408 11.1785C4.67085 10.7347 4.04049 10.1492 3.59719 9.34833C3.16883 8.57739 2.91978 7.75056 2.87883 6.86783C2.86776 6.6542 2.9303 6.57894 3.14282 6.5402C3.42181 6.48681 3.70767 6.47951 3.98902 6.51861C5.16895 6.69128 6.17288 7.21871 7.01521 8.05384C7.49559 8.5298 7.8592 9.09818 8.23388 9.65383C8.63235 10.2438 9.06071 10.8061 9.6064 11.2665C9.79899 11.4281 9.9523 11.551 10.0995 11.6412C9.65565 11.691 8.91516 11.7021 8.40932 11.3014ZM8.96275 7.73728C8.96266 7.70979 8.96925 7.68268 8.98198 7.65831C8.9947 7.63394 9.01316 7.61303 9.03578 7.5974C9.05839 7.58176 9.08447 7.57186 9.11176 7.56856C9.13905 7.56526 9.16674 7.56865 9.19243 7.57844C9.22517 7.59019 9.25343 7.61186 9.27327 7.64043C9.29311 7.669 9.30355 7.70305 9.30312 7.73783C9.30319 7.76031 9.29879 7.78257 9.29018 7.80333C9.28156 7.82408 9.2689 7.84292 9.25293 7.85873C9.23696 7.87454 9.21801 7.88702 9.19717 7.89544C9.17633 7.90385 9.15402 7.90803 9.13155 7.90774C9.10925 7.90781 9.08715 7.90344 9.06656 7.89487C9.04597 7.88631 9.02729 7.87372 9.01163 7.85784C8.99596 7.84197 8.98362 7.82313 8.97532 7.80243C8.96702 7.78173 8.96238 7.75958 8.96275 7.73728ZM10.6839 8.62056C10.5733 8.66539 10.4631 8.70413 10.3574 8.70911C10.1984 8.71466 10.0423 8.66499 9.91577 8.56854C9.76413 8.44125 9.65565 8.37041 9.61027 8.14903C9.59463 8.04085 9.59762 7.93079 9.61913 7.82361C9.65787 7.64264 9.6147 7.52642 9.48686 7.42127C9.38336 7.33493 9.25109 7.31113 9.10609 7.31113C9.05645 7.30825 9.00823 7.29344 8.96552 7.26796C8.90464 7.23808 8.85483 7.16281 8.90243 7.06983C8.91792 
7.03995 8.99098 6.9669 9.00869 6.95361C9.20571 6.84182 9.43317 6.87835 9.64293 6.96247C9.83774 7.04216 9.98495 7.18827 10.1969 7.39526C10.4133 7.64485 10.4526 7.71403 10.576 7.9011C10.6734 8.04776 10.762 8.19829 10.8223 8.37041C10.8594 8.47833 10.8118 8.56633 10.6839 8.62056Z" fill="black"/>
</svg>
Some files were not shown because too many files have changed in this diff.