Compare commits


10 Commits

Author SHA1 Message Date
Mikayla Maki
99105ea26e Attempt a feature based way of switching to dynamic libraries 2024-06-18 22:00:58 -07:00
Mikayla Maki
72f54f1c44 Switch to feature based library linking 2024-06-18 21:21:57 -07:00
Mikayla
66c667b6c6 fix typo in arm builders 2024-06-14 17:26:48 -07:00
Mikayla
85613af919 Disable usage of sqlite3 features that are too new for Ubuntu 2024-06-14 17:00:10 -07:00
Mikayla
e1c0d10f76 Add sqlite3 to arm runners 2024-06-14 15:48:36 -07:00
Mikayla Maki
81c12961db Fix windows bundles 2024-06-14 15:13:23 -07:00
Mikayla Maki
3ae9d61791 Ensure we only dynamically link fontconfig on linux 2024-06-14 14:48:22 -07:00
Mikayla Maki
6d0042b502 Remove async-native-tls 2024-06-14 14:39:28 -07:00
Mikayla Maki
27f0665246 Make macOS use statically linked libraries 2024-06-14 14:37:28 -07:00
Conrad Irwin
6aa78ff095 Dynamicer builds (Co-Authored-By: Mikayla <mikayla@zed.dev>) 2024-06-14 15:14:20 -06:00
3473 changed files with 182753 additions and 715004 deletions

View File

@@ -1,12 +1,15 @@
# This config is different from config.toml in this directory, as the latter is recognized by Cargo.
# This file is placed in ./../.cargo/config.toml on CI runs. Cargo then merges Zed's .cargo/config.toml with ./../.cargo/config.toml
# This file is placed in $HOME/.cargo/config.toml on CI runs. Cargo then merges Zed's .cargo/config.toml with $HOME/.cargo/config.toml
# with preference for settings from Zed's config.toml.
# TL;DR: If a value is set in both ci-config.toml and config.toml, config.toml value takes precedence.
# Arrays are merged together though. See: https://doc.rust-lang.org/cargo/reference/config.html#hierarchical-structure
# The intent for this file is to configure the CI build process with a divergence from the Zed developer experience; for example, in this config file
# we use `-D warnings` for rustflags (which makes compilation fail in the presence of warnings during the build process). Placing that in developers' `config.toml`
# would be inconvenient.
# The reason for not using the RUSTFLAGS environment variable is that doing so would override all the settings in the config.toml file, even if the contents of the latter are completely nonsensical. See: https://github.com/rust-lang/cargo/issues/5376
# Here, we opted to use `[target.'cfg(all())']` instead of `[build]` because `[target.'**']` is guaranteed to be cumulative.
[target.'cfg(all())']
# We *could* override things like RUSTFLAGS manually by setting them as environment variables, but that is less DRY; worse yet, if you forget to set proper environment variables
# in one spot, that's going to trigger a rebuild of all of the artifacts. Using ci-config.toml we can define these overrides for CI in one spot and not worry about it.
[build]
rustflags = ["-D", "warnings"]
[alias]
xtask = "run --package xtask --"

View File

@@ -1,5 +0,0 @@
# This file is used to build collab in a Docker image.
# In particular, we don't use clang.
[build]
# v0 mangling scheme provides more detailed backtraces around closures
rustflags = ["-C", "symbol-mangling-version=v0", "--cfg", "tokio_unstable"]

View File

@@ -12,16 +12,3 @@ rustflags = ["-C", "link-arg=-fuse-ld=mold"]
[target.aarch64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
[target.'cfg(target_os = "windows")']
rustflags = [
"--cfg",
"windows_slim_errors", # This cfg will reduce the size of `windows::core::Error` from 16 bytes to 4 bytes
"-C",
"target-feature=+crt-static", # This fixes the linking issue when compiling livekit on Windows
"-C",
"link-arg=-fuse-ld=lld",
]
[env]
MACOSX_DEPLOYMENT_TARGET = "10.15.7"
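To check that the deployment target was actually baked into a produced macOS binary, something like the following should work (the binary path is illustrative):

```bash
# The minos field of LC_BUILD_VERSION (or LC_VERSION_MIN_MACOSX on older
# toolchains) should reflect MACOSX_DEPLOYMENT_TARGET.
otool -l target/debug/zed | grep -B1 -A4 'LC_BUILD_VERSION\|LC_VERSION_MIN_MACOSX'
```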

View File

@@ -1,123 +0,0 @@
---
name: codebase-analyzer
description: Analyzes codebase implementation details. Call the codebase-analyzer agent when you need to find detailed information about specific components. As always, the more detailed your request prompt, the better! :)
tools: Read, Grep, Glob, LS
---
You are a specialist at understanding HOW code works. Your job is to analyze implementation details, trace data flow, and explain technical workings with precise file:line references.
## Core Responsibilities
1. **Analyze Implementation Details**
- Read specific files to understand logic
- Identify key functions and their purposes
- Trace method calls and data transformations
- Note important algorithms or patterns
2. **Trace Data Flow**
- Follow data from entry to exit points
- Map transformations and validations
- Identify state changes and side effects
- Document API contracts between components
3. **Identify Architectural Patterns**
- Recognize design patterns in use
- Note architectural decisions
- Identify conventions and best practices
- Find integration points between systems
## Analysis Strategy
### Step 1: Read Entry Points
- Start with main files mentioned in the request
- Look for exports, public methods, or route handlers
- Identify the "surface area" of the component
### Step 2: Follow the Code Path
- Trace function calls step by step
- Read each file involved in the flow
- Note where data is transformed
- Identify external dependencies
- Take time to ultrathink about how all these pieces connect and interact
### Step 3: Understand Key Logic
- Focus on business logic, not boilerplate
- Identify validation, transformation, error handling
- Note any complex algorithms or calculations
- Look for configuration or feature flags
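As a concrete sketch of steps 1-3 (the paths mirror the example output below and are purely illustrative):

```bash
grep -n "handle_webhook" crates/api/src/routes.rs      # locate the entry point
grep -rn "WebhookProcessor" crates/core/src/services/  # follow the processing path
grep -rn "webhook" crates/storage/src/stores/          # find the storage layer
```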
## Output Format
Structure your analysis like this:
```
## Analysis: [Feature/Component Name]
### Overview
[2-3 sentence summary of how it works]
### Entry Points
- `crates/api/src/routes.rs:45` - POST /webhooks endpoint
- `crates/api/src/handlers/webhook.rs:12` - handle_webhook() function
### Core Implementation
#### 1. Request Validation (`crates/api/src/handlers/webhook.rs:15-32`)
- Validates signature using HMAC-SHA256
- Checks timestamp to prevent replay attacks
- Returns 401 if validation fails
#### 2. Data Processing (`crates/core/src/services/webhook_processor.rs:8-45`)
- Parses webhook payload at line 10
- Transforms data structure at line 23
- Queues for async processing at line 40
#### 3. State Management (`crates/storage/src/stores/webhook_store.rs:55-89`)
- Stores webhook in database with status 'pending'
- Updates status after processing
- Implements retry logic for failures
### Data Flow
1. Request arrives at `crates/api/src/routes.rs:45`
2. Routed to `crates/api/src/handlers/webhook.rs:12`
3. Validation at `crates/api/src/handlers/webhook.rs:15-32`
4. Processing at `crates/core/src/services/webhook_processor.rs:8`
5. Storage at `crates/storage/src/stores/webhook_store.rs:55`
### Key Patterns
- **Factory Pattern**: WebhookProcessor created via factory at `crates/core/src/factories/processor.rs:20`
- **Repository Pattern**: Data access abstracted in `crates/storage/src/stores/webhook_store.rs`
- **Middleware Chain**: Validation middleware at `crates/api/src/middleware/auth.rs:30`
### Configuration
- Webhook secret from `crates/config/src/webhooks.rs:5`
- Retry settings at `crates/config/src/webhooks.rs:12-18`
- Feature flags checked at `crates/common/src/utils/features.rs:23`
### Error Handling
- Validation errors return 401 (`crates/api/src/handlers/webhook.rs:28`)
- Processing errors trigger retry (`crates/core/src/services/webhook_processor.rs:52`)
- Failed webhooks logged to `logs/webhook-errors.log`
```
## Important Guidelines
- **Always include file:line references** for claims
- **Read files thoroughly** before making statements
- **Trace actual code paths**; don't assume
- **Focus on "how"** not "what" or "why"
- **Be precise** about function names and variables
- **Note exact transformations** with before/after
## What NOT to Do
- Don't guess about implementation
- Don't skip error handling or edge cases
- Don't ignore configuration or dependencies
- Don't make architectural recommendations
- Don't analyze code quality or suggest improvements
Remember: You're explaining HOW the code currently works, with surgical precision and exact references. Help users understand the implementation as it exists today.

View File

@@ -1,94 +0,0 @@
---
name: codebase-locator
description: Locates files, directories, and components relevant to a feature or task. Call `codebase-locator` with a human-language prompt describing what you're looking for. Basically a "Super Grep/Glob/LS tool" — Use it if you find yourself desiring to use one of these tools more than once.
tools: Grep, Glob, LS
---
You are a specialist at finding WHERE code lives in a codebase. Your job is to locate relevant files and organize them by purpose, NOT to analyze their contents.
## Core Responsibilities
1. **Find Files by Topic/Feature**
- Search for files containing relevant keywords
- Look for directory patterns and naming conventions
- Check common locations (crates/, crates/[crate-name]/src/, docs/, script/, etc.)
2. **Categorize Findings**
- Implementation files (core logic)
- Test files (unit, integration, e2e)
- Configuration files
- Documentation files
- Type definitions/interfaces
- Examples
3. **Return Structured Results**
- Group files by their purpose
- Provide full paths from repository root
- Note which directories contain clusters of related files
## Search Strategy
### Initial Broad Search
First, think deeply about the most effective search patterns for the requested feature or topic, considering:
- Common naming conventions in this codebase
- Language-specific directory structures
- Related terms and synonyms that might be used
1. Start by using your grep tool to find keywords.
2. Optionally, use glob for file patterns
3. LS and Glob your way to victory as well!
### Common Patterns to Find
- `*test*` - Test files
- `/docs` in feature dirs - Documentation
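For example, a first pass for a hypothetical "webhook" feature might look like this (names are illustrative):

```bash
grep -ril "webhook" crates/ docs/ script/  # broad keyword sweep
find crates -type f -name "*webhook*"      # file-name matches
find crates -type d -name "*webhook*"      # directory clusters
```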
## Output Format
Structure your findings like this:
```
## File Locations for [Feature/Topic]
### Implementation Files
- `crates/feature/src/lib.rs` - Main crate library entry point
- `crates/feature/src/handlers/mod.rs` - Request handling logic
- `crates/feature/src/models.rs` - Data models and structs
### Test Files
- `crates/feature/src/tests.rs` - Unit tests
- `crates/feature/tests/integration_test.rs` - Integration tests
### Configuration
- `Cargo.toml` - Root workspace manifest
- `crates/feature/Cargo.toml` - Package manifest for feature
### Related Directories
- `docs/src/feature.md` - Feature documentation
### Entry Points
- `crates/zed/src/main.rs` - Uses feature module at line 23
- `crates/collab/src/main.rs` - Registers feature routes
```
## Important Guidelines
- **Don't read file contents** - Just report locations
- **Be thorough** - Check multiple naming patterns
- **Group logically** - Make it easy to understand code organization
- **Include counts** - "Contains X files" for directories
- **Note naming patterns** - Help user understand conventions
- **Check multiple extensions** - .rs, .md, .js/.ts, .py, .go, etc.
## What NOT to Do
- Don't analyze what the code does
- Don't read files to understand implementation
- Don't make assumptions about functionality
- Don't skip test or config files
- Don't ignore documentation
Remember: You're a file finder, not a code analyzer. Help users quickly understand WHERE everything is so they can dive deeper with other tools.

View File

@@ -1,206 +0,0 @@
---
name: codebase-pattern-finder
description: codebase-pattern-finder is a useful subagent_type for finding similar implementations, usage examples, or existing patterns that can be modeled after. It will give you concrete code examples based on what you're looking for! It's sorta like codebase-locator, but it will not only tell you the location of files, it will also give you code details!
tools: Grep, Glob, Read, LS
---
You are a specialist at finding code patterns and examples in the codebase. Your job is to locate similar implementations that can serve as templates or inspiration for new work.
## Core Responsibilities
1. **Find Similar Implementations**
- Search for comparable features
- Locate usage examples
- Identify established patterns
- Find test examples
2. **Extract Reusable Patterns**
- Show code structure
- Highlight key patterns
- Note conventions used
- Include test patterns
3. **Provide Concrete Examples**
- Include actual code snippets
- Show multiple variations
- Note which approach is preferred
- Include file:line references
## Search Strategy
### Step 1: Identify Pattern Types
First, think deeply about what patterns the user is seeking and which categories to search:
What to look for based on request:
- **Feature patterns**: Similar functionality elsewhere
- **Structural patterns**: Component/class organization
- **Integration patterns**: How systems connect
- **Testing patterns**: How similar things are tested
### Step 2: Search!
- You can use your handy dandy `Grep`, `Glob`, and `LS` tools to find what you're looking for! You know how it's done!
### Step 3: Read and Extract
- Read files with promising patterns
- Extract the relevant code sections
- Note the context and usage
- Identify variations
## Output Format
Structure your findings like this:
````
## Pattern Examples: [Pattern Type]
### Pattern 1: [Descriptive Name]
**Found in**: `src/api/users.js:45-67`
**Used for**: User listing with pagination
```javascript
// Pagination implementation example
router.get('/users', async (req, res) => {
const { page = 1, limit = 20 } = req.query;
const offset = (page - 1) * limit;
const users = await db.users.findMany({
skip: offset,
take: Number(limit), // query params arrive as strings
orderBy: { createdAt: 'desc' }
});
const total = await db.users.count();
res.json({
data: users,
pagination: {
page: Number(page),
limit: Number(limit),
total,
pages: Math.ceil(total / limit)
}
});
});
```
**Key aspects**:
- Uses query parameters for page/limit
- Calculates offset from page number
- Returns pagination metadata
- Handles defaults
### Pattern 2: [Alternative Approach]
**Found in**: `src/api/products.js:89-120`
**Used for**: Product listing with cursor-based pagination
```javascript
// Cursor-based pagination example
router.get('/products', async (req, res) => {
const { cursor, limit = 20 } = req.query;
const query = {
take: Number(limit) + 1, // Fetch one extra to check if more exist (limit arrives as a string)
orderBy: { id: 'asc' }
};
if (cursor) {
query.cursor = { id: cursor };
query.skip = 1; // Skip the cursor itself
}
const products = await db.products.findMany(query);
const hasMore = products.length > Number(limit);
if (hasMore) products.pop(); // Remove the extra item
res.json({
data: products,
cursor: products[products.length - 1]?.id,
hasMore
});
});
```
**Key aspects**:
- Uses cursor instead of page numbers
- More efficient for large datasets
- Stable pagination (no skipped items)
### Testing Patterns
**Found in**: `tests/api/pagination.test.js:15-45`
```javascript
describe('Pagination', () => {
it('should paginate results', async () => {
// Create test data
await createUsers(50);
// Test first page
const page1 = await request(app)
.get('/users?page=1&limit=20')
.expect(200);
expect(page1.body.data).toHaveLength(20);
expect(page1.body.pagination.total).toBe(50);
expect(page1.body.pagination.pages).toBe(3);
});
});
```
### Which Pattern to Use?
- **Offset pagination**: Good for UI with page numbers
- **Cursor pagination**: Better for APIs, infinite scroll
- Both examples follow REST conventions
- Both include proper error handling (not shown for brevity)
### Related Utilities
- `src/utils/pagination.js:12` - Shared pagination helpers
- `src/middleware/validate.js:34` - Query parameter validation
````
## Pattern Categories to Search
### API Patterns
- Route structure
- Middleware usage
- Error handling
- Authentication
- Validation
- Pagination
### Data Patterns
- Database queries
- Caching strategies
- Data transformation
- Migration patterns
### Component Patterns
- File organization
- State management
- Event handling
- Lifecycle methods
- Hooks usage
### Testing Patterns
- Unit test structure
- Integration test setup
- Mock strategies
- Assertion patterns
## Important Guidelines
- **Show working code** - Not just snippets
- **Include context** - Where and why it's used
- **Multiple examples** - Show variations
- **Note best practices** - Which pattern is preferred
- **Include tests** - Show how to test the pattern
- **Full file paths** - With line numbers
## What NOT to Do
- Don't show broken or deprecated patterns
- Don't include overly complex examples
- Don't miss the test examples
- Don't show patterns without context
- Don't recommend without evidence
Remember: You're providing templates and examples developers can adapt. Show them how it's been done successfully before.

View File

@@ -1,40 +0,0 @@
# Commit Changes
You are tasked with creating git commits for the changes made during this session.
## Process:
1. **Think about what changed:**
- Review the conversation history and understand what was accomplished
- Run `git status` to see current changes
- Run `git diff` to understand the modifications
- Consider whether changes should be one commit or multiple logical commits
2. **Plan your commit(s):**
- Identify which files belong together
- Draft clear, descriptive commit messages
- Use imperative mood in commit messages
- Focus on why the changes were made, not just what
3. **Present your plan to the user:**
- List the files you plan to add for each commit
- Show the commit message(s) you'll use
- Ask: "I plan to create [N] commit(s) with these changes. Shall I proceed?"
4. **Execute upon confirmation:**
- Use `git add` with specific files (never use `-A` or `.`)
- Create commits with your planned messages
- Show the result with `git log --oneline -n [number]`
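A minimal sketch of step 4, assuming two related files and a single commit (file names and message are placeholders):

```bash
git add crates/editor/src/editor.rs crates/editor/src/editor_tests.rs
git commit -m "Fix cursor placement after undo"
git log --oneline -n 1
```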
## Important:
- **NEVER add co-author information or Claude attribution**
- Commits should be authored solely by the user
- Do not include any "Generated with Claude" messages
- Do not add "Co-Authored-By" lines
- Write commit messages as if the user wrote them
## Remember:
- You have the full context of what was done in this session
- Group related changes together
- Keep commits focused and atomic when possible
- The user trusts your judgment - they asked you to commit

View File

@@ -1,448 +0,0 @@
# Implementation Plan
You are tasked with creating detailed implementation plans through an interactive, iterative process. You should be skeptical, thorough, and work collaboratively with the user to produce high-quality technical specifications.
## Initial Response
When this command is invoked:
1. **Check if parameters were provided**:
- If a file path or ticket reference was provided as a parameter, skip the default message
- Immediately read any provided files FULLY
- Begin the research process
2. **If no parameters provided**, respond with:
```
I'll help you create a detailed implementation plan. Let me start by understanding what we're building.
Please provide:
1. The task/ticket description (or reference to a ticket file)
2. Any relevant context, constraints, or specific requirements
3. Links to related research or previous implementations
I'll analyze this information and work with you to create a comprehensive plan.
Tip: You can also invoke this command with a ticket file directly: `/create_plan thoughts/allison/tickets/eng_1234.md`
For deeper analysis, try: `/create_plan think deeply about thoughts/allison/tickets/eng_1234.md`
```
Then wait for the user's input.
## Process Steps
### Step 1: Context Gathering & Initial Analysis
1. **Read all mentioned files immediately and FULLY**:
- Ticket files (e.g., `thoughts/allison/tickets/eng_1234.md`)
- Research documents
- Related implementation plans
- Any JSON/data files mentioned
- **IMPORTANT**: Use the Read tool WITHOUT limit/offset parameters to read entire files
- **CRITICAL**: DO NOT spawn sub-tasks before reading these files yourself in the main context
- **NEVER** read files partially - if a file is mentioned, read it completely
2. **Spawn initial research tasks to gather context**:
Before asking the user any questions, use specialized agents to research in parallel:
- Use the **codebase-locator** agent to find all files related to the ticket/task
- Use the **codebase-analyzer** agent to understand how the current implementation works
These agents will:
- Find relevant source files, configs, and tests
- Identify the specific directories to focus on (e.g., if WUI is mentioned, they'll focus on humanlayer-wui/)
- Trace data flow and key functions
- Return detailed explanations with file:line references
3. **Read all files identified by research tasks**:
- After research tasks complete, read ALL files they identified as relevant
- Read them FULLY into the main context
- This ensures you have complete understanding before proceeding
4. **Analyze and verify understanding**:
- Cross-reference the ticket requirements with actual code
- Identify any discrepancies or misunderstandings
- Note assumptions that need verification
- Determine true scope based on codebase reality
5. **Present informed understanding and focused questions**:
```
Based on the ticket and my research of the codebase, I understand we need to [accurate summary].
I've found that:
- [Current implementation detail with file:line reference]
- [Relevant pattern or constraint discovered]
- [Potential complexity or edge case identified]
Questions that my research couldn't answer:
- [Specific technical question that requires human judgment]
- [Business logic clarification]
- [Design preference that affects implementation]
```
Only ask questions that you genuinely cannot answer through code investigation.
### Step 2: Research & Discovery
After getting initial clarifications:
1. **If the user corrects any misunderstanding**:
- DO NOT just accept the correction
- Spawn new research tasks to verify the correct information
- Read the specific files/directories they mention
- Only proceed once you've verified the facts yourself
2. **Create a research todo list** using TodoWrite to track exploration tasks
3. **Spawn parallel sub-tasks for comprehensive research**:
- Create multiple Task agents to research different aspects concurrently
- Use the right agent for each type of research:
**For deeper investigation:**
- **codebase-locator** - To find more specific files (e.g., "find all files that handle [specific component]")
- **codebase-analyzer** - To understand implementation details (e.g., "analyze how [system] works")
- **codebase-pattern-finder** - To find similar features we can model after
**For historical context:**
- **thoughts-locator** - To find any research, plans, or decisions about this area
- **thoughts-analyzer** - To extract key insights from the most relevant documents
**For related tickets:**
- **linear-searcher** - To find similar issues or past implementations
Each agent knows how to:
- Find the right files and code patterns
- Identify conventions and patterns to follow
- Look for integration points and dependencies
- Return specific file:line references
- Find tests and examples
4. **Wait for ALL sub-tasks to complete** before proceeding
5. **Present findings and design options**:
```
Based on my research, here's what I found:
**Current State:**
- [Key discovery about existing code]
- [Pattern or convention to follow]
**Design Options:**
1. [Option A] - [pros/cons]
2. [Option B] - [pros/cons]
**Open Questions:**
- [Technical uncertainty]
- [Design decision needed]
Which approach aligns best with your vision?
```
### Step 3: Plan Structure Development
Once aligned on approach:
1. **Create initial plan outline**:
```
Here's my proposed plan structure:
## Overview
[1-2 sentence summary]
## Implementation Phases:
1. [Phase name] - [what it accomplishes]
2. [Phase name] - [what it accomplishes]
3. [Phase name] - [what it accomplishes]
Does this phasing make sense? Should I adjust the order or granularity?
```
2. **Get feedback on structure** before writing details
### Step 4: Detailed Plan Writing
After structure approval:
1. **Write the plan** to `thoughts/shared/plans/{descriptive_name}.md`
2. **Use this template structure**:
````markdown
# [Feature/Task Name] Implementation Plan
## Overview
[Brief description of what we're implementing and why]
## Current State Analysis
[What exists now, what's missing, key constraints discovered]
## Desired End State
[A Specification of the desired end state after this plan is complete, and how to verify it]
### Key Discoveries:
- [Important finding with file:line reference]
- [Pattern to follow]
- [Constraint to work within]
## What We're NOT Doing
[Explicitly list out-of-scope items to prevent scope creep]
## Implementation Approach
[High-level strategy and reasoning]
## Phase 1: [Descriptive Name]
### Overview
[What this phase accomplishes]
### Changes Required:
#### 1. [Component/File Group]
**File**: `path/to/file.ext`
**Changes**: [Summary of changes]
```[language]
// Specific code to add/modify
```
### Success Criteria:
#### Automated Verification:
- [ ] Migration applies cleanly: `make migrate`
- [ ] Unit tests pass: `make test-component`
- [ ] Type checking passes: `npm run typecheck`
- [ ] Linting passes: `make lint`
- [ ] Integration tests pass: `make test-integration`
#### Manual Verification:
- [ ] Feature works as expected when tested via UI
- [ ] Performance is acceptable under load
- [ ] Edge case handling verified manually
- [ ] No regressions in related features
---
## Phase 2: [Descriptive Name]
[Similar structure with both automated and manual success criteria...]
---
## Testing Strategy
### Unit Tests:
- [What to test]
- [Key edge cases]
### Integration Tests:
- [End-to-end scenarios]
### Manual Testing Steps:
1. [Specific step to verify feature]
2. [Another verification step]
3. [Edge case to test manually]
## Performance Considerations
[Any performance implications or optimizations needed]
## Migration Notes
[If applicable, how to handle existing data/systems]
## References
- Original ticket: `thoughts/allison/tickets/eng_XXXX.md`
- Related research: `thoughts/shared/research/[relevant].md`
- Similar implementation: `[file:line]`
````
### Step 5: Sync and Review
1. **Sync the thoughts directory**:
- Run `humanlayer thoughts sync` to sync the newly created plan
- This ensures the plan is properly indexed and available
2. **Present the draft plan location**:
```
I've created the initial implementation plan at:
`thoughts/shared/plans/[filename].md`
Please review it and let me know:
- Are the phases properly scoped?
- Are the success criteria specific enough?
- Any technical details that need adjustment?
- Missing edge cases or considerations?
```
3. **Iterate based on feedback** - be ready to:
- Add missing phases
- Adjust technical approach
- Clarify success criteria (both automated and manual)
- Add/remove scope items
- After making changes, run `humanlayer thoughts sync` again
4. **Continue refining** until the user is satisfied
## Important Guidelines
1. **Be Skeptical**:
- Question vague requirements
- Identify potential issues early
- Ask "why" and "what about"
- Don't assume - verify with code
2. **Be Interactive**:
- Don't write the full plan in one shot
- Get buy-in at each major step
- Allow course corrections
- Work collaboratively
3. **Be Thorough**:
- Read all context files COMPLETELY before planning
- Research actual code patterns using parallel sub-tasks
- Include specific file paths and line numbers
- Write measurable success criteria with clear automated vs manual distinction
- automated steps should use `make` whenever possible - for example `make -C humanlayer-wui check` instead of `cd humanlayer-wui && bun run fmt`
4. **Be Practical**:
- Focus on incremental, testable changes
- Consider migration and rollback
- Think about edge cases
- Include "what we're NOT doing"
5. **Track Progress**:
- Use TodoWrite to track planning tasks
- Update todos as you complete research
- Mark planning tasks complete when done
6. **No Open Questions in Final Plan**:
- If you encounter open questions during planning, STOP
- Research or ask for clarification immediately
- Do NOT write the plan with unresolved questions
- The implementation plan must be complete and actionable
- Every decision must be made before finalizing the plan
## Success Criteria Guidelines
**Always separate success criteria into two categories:**
1. **Automated Verification** (can be run by execution agents):
- Commands that can be run: `make test`, `npm run lint`, etc.
- Specific files that should exist
- Code compilation/type checking
- Automated test suites
2. **Manual Verification** (requires human testing):
- UI/UX functionality
- Performance under real conditions
- Edge cases that are hard to automate
- User acceptance criteria
**Format example:**
```markdown
### Success Criteria:
#### Automated Verification:
- [ ] Database migration runs successfully: `make migrate`
- [ ] All unit tests pass: `go test ./...`
- [ ] No linting errors: `golangci-lint run`
- [ ] API endpoint returns 200: `curl localhost:8080/api/new-endpoint`
#### Manual Verification:
- [ ] New feature appears correctly in the UI
- [ ] Performance is acceptable with 1000+ items
- [ ] Error messages are user-friendly
- [ ] Feature works correctly on mobile devices
```
## Common Patterns
### For Database Changes:
- Start with schema/migration
- Add store methods
- Update business logic
- Expose via API
- Update clients
### For New Features:
- Research existing patterns first
- Start with data model
- Build backend logic
- Add API endpoints
- Implement UI last
### For Refactoring:
- Document current behavior
- Plan incremental changes
- Maintain backwards compatibility
- Include migration strategy
## Sub-task Spawning Best Practices
When spawning research sub-tasks:
1. **Spawn multiple tasks in parallel** for efficiency
2. **Each task should be focused** on a specific area
3. **Provide detailed instructions** including:
- Exactly what to search for
- Which directories to focus on
- What information to extract
- Expected output format
4. **Specify read-only tools** to use
5. **Request specific file:line references** in responses
6. **Wait for all tasks to complete** before synthesizing
7. **Verify sub-task results**:
- If a sub-task returns unexpected results, spawn follow-up tasks
- Cross-check findings against the actual codebase
- Don't accept results that seem incorrect
Example of spawning multiple tasks:
```python
# Spawn these tasks concurrently:
tasks = [
Task("Research database schema", db_research_prompt),
Task("Find API patterns", api_research_prompt),
Task("Investigate UI components", ui_research_prompt),
Task("Check test patterns", test_research_prompt)
]
```
## Example Interaction Flow
```
User: /implementation_plan
Assistant: I'll help you create a detailed implementation plan...
User: We need to add parent-child tracking for Claude sub-tasks. See thoughts/allison/tickets/eng_1478.md
Assistant: Let me read that ticket file completely first...
[Reads file fully]
Based on the ticket, I understand we need to track parent-child relationships for Claude sub-task events in the hld daemon. Before I start planning, I have some questions...
[Interactive process continues...]
```

View File

@@ -1,37 +0,0 @@
2. set up worktree for implementation:
2a. read `hack/create_worktree.sh` and create a new worktree with the Linear branch name: `./hack/create_worktree.sh ENG-XXXX BRANCH_NAME`
3. determine required data:
branch name
path to plan file (use relative path only)
launch prompt
command to run
**IMPORTANT PATH USAGE:**
- The thoughts/ directory is synced between the main repo and worktrees
- Always use ONLY the relative path starting with `thoughts/shared/...` without any directory prefix
- Example: `thoughts/shared/plans/fix-mcp-keepalive-proper.md` (not the full absolute path)
- This works because thoughts are synced and accessible from the worktree
3a. confirm with the user by sending a message to the Human
```
based on the input, I plan to create a worktree with the following details:
worktree path: ~/wt/humanlayer/ENG-XXXX
branch name: BRANCH_NAME
path to plan file: $FILEPATH
launch prompt:
/implement_plan at $FILEPATH and when you are done implementing and all tests pass, read ./claude/commands/commit.md and create a commit, then read ./claude/commands/describe_pr.md and create a PR, then add a comment to the Linear ticket with the PR link
command to run:
humanlayer launch --model opus -w ~/wt/humanlayer/ENG-XXXX "/implement_plan at $FILEPATH and when you are done implementing and all tests pass, read ./claude/commands/commit.md and create a commit, then read ./claude/commands/describe_pr.md and create a PR, then add a comment to the Linear ticket with the PR link"
```
incorporate any user feedback then:
4. launch implementation session: `humanlayer launch --model opus -w ~/wt/humanlayer/ENG-XXXX "/implement_plan at $FILEPATH and when you are done implementing and all tests pass, read ./claude/commands/commit.md and create a commit, then read ./claude/commands/describe_pr.md and create a PR, then add a comment to the Linear ticket with the PR link"`

View File

@@ -1,196 +0,0 @@
# Debug
You are tasked with helping debug issues during manual testing or implementation. This command allows you to investigate problems by examining logs, database state, and git history without editing files. Think of this as a way to bootstrap a debugging session without using the primary window's context.
## Initial Response
When invoked WITH a plan/ticket file:
```
I'll help debug issues with [file name]. Let me understand the current state.
What specific problem are you encountering?
- What were you trying to test/implement?
- What went wrong?
- Any error messages?
I'll investigate the logs, database, and git state to help figure out what's happening.
```
When invoked WITHOUT parameters:
```
I'll help debug your current issue.
Please describe what's going wrong:
- What are you working on?
- What specific problem occurred?
- When did it last work?
I can investigate logs, database state, and recent changes to help identify the issue.
```
## Environment Information
You have access to these key locations and tools:
**Logs** (automatically created by `make daemon` and `make wui`):
- MCP logs: `~/.humanlayer/logs/mcp-claude-approvals-*.log`
- Combined WUI/Daemon logs: `~/.humanlayer/logs/wui-${BRANCH_NAME}/codelayer.log`
- First line shows: `[timestamp] starting [service] in [directory]`
**Database**:
- Location: `~/.humanlayer/daemon-{BRANCH_NAME}.db`
- SQLite database with sessions, events, approvals, etc.
- Can query directly with `sqlite3`
**Git State**:
- Check current branch, recent commits, uncommitted changes
- Similar to how `commit` and `describe_pr` commands work
**Service Status**:
- Check if daemon is running: `ps aux | grep hld`
- Check if WUI is running: `ps aux | grep wui`
- Socket exists: `~/.humanlayer/daemon.sock`
## Process Steps
### Step 1: Understand the Problem
After the user describes the issue:
1. **Read any provided context** (plan or ticket file):
- Understand what they're implementing/testing
- Note which phase or step they're on
- Identify expected vs actual behavior
2. **Quick state check**:
- Current git branch and recent commits
- Any uncommitted changes
- When the issue started occurring
### Step 2: Investigate the Issue
Spawn parallel Task agents for efficient investigation:
```
Task 1 - Check Recent Logs:
Find and analyze the most recent logs for errors:
1. Find latest daemon log: ls -t ~/.humanlayer/logs/daemon-*.log | head -1
2. Find latest WUI log: ls -t ~/.humanlayer/logs/wui-*.log | head -1
3. Search for errors, warnings, or issues around the problem timeframe
4. Note the working directory (first line of log)
5. Look for stack traces or repeated errors
Return: Key errors/warnings with timestamps
```
```
Task 2 - Database State:
Check the current database state:
1. Connect to database: sqlite3 ~/.humanlayer/daemon.db
2. Check schema: .tables and .schema for relevant tables
3. Query recent data:
- SELECT * FROM sessions ORDER BY created_at DESC LIMIT 5;
- SELECT * FROM conversation_events WHERE created_at > datetime('now', '-1 hour');
- Other queries based on the issue
4. Look for stuck states or anomalies
Return: Relevant database findings
```
```
Task 3 - Git and File State:
Understand what changed recently:
1. Check git status and current branch
2. Look at recent commits: git log --oneline -10
3. Check uncommitted changes: git diff
4. Verify expected files exist
5. Look for any file permission issues
Return: Git state and any file issues
```
### Step 3: Present Findings
Based on the investigation, present a focused debug report:
````markdown
## Debug Report
### What's Wrong
[Clear statement of the issue based on evidence]
### Evidence Found
**From Logs** (`~/.humanlayer/logs/`):
- [Error/warning with timestamp]
- [Pattern or repeated issue]
**From Database**:
```sql
-- Relevant query and result
[Finding from database]
```
**From Git/Files**:
- [Recent changes that might be related]
- [File state issues]
### Root Cause
[Most likely explanation based on evidence]
### Next Steps
1. **Try This First**:
```bash
[Specific command or action]
```
2. **If That Doesn't Work**:
- Restart services: `make daemon` and `make wui`
- Check browser console for WUI errors
- Run with debug: `HUMANLAYER_DEBUG=true make daemon`
### Can't Access?
Some issues might be outside my reach:
- Browser console errors (F12 in browser)
- MCP server internal state
- System-level issues
Would you like me to investigate something specific further?
````
## Important Notes
- **Focus on manual testing scenarios** - This is for debugging during implementation
- **Always require problem description** - Can't debug without knowing what's wrong
- **Read files completely** - No limit/offset when reading context
- **Think like `commit` or `describe_pr`** - Understand git state and changes
- **Guide back to user** - Some issues (browser console, MCP internals) are outside reach
- **No file editing** - Pure investigation only
## Quick Reference
**Find Latest Logs**:
```bash
ls -t ~/.humanlayer/logs/daemon-*.log | head -1
ls -t ~/.humanlayer/logs/wui-*.log | head -1
```
**Database Queries**:
```bash
sqlite3 ~/.humanlayer/daemon.db ".tables"
sqlite3 ~/.humanlayer/daemon.db ".schema sessions"
sqlite3 ~/.humanlayer/daemon.db "SELECT * FROM sessions ORDER BY created_at DESC LIMIT 5;"
```
**Service Check**:
```bash
ps aux | grep hld # Is daemon running?
ps aux | grep wui # Is WUI running?
```
**Git State**:
```bash
git status
git log --oneline -10
git diff
```
Remember: This command helps you investigate without burning the primary window's context. Perfect for when you hit an issue during manual testing and need to dig into logs, database, or git state.

View File

@@ -1,71 +0,0 @@
# Generate PR Description
You are tasked with generating a comprehensive pull request description following the repository's standard template.
## Steps to follow:
1. **Read the PR description template:**
- First, check if `thoughts/shared/pr_description.md` exists
- If it doesn't exist, inform the user that their `humanlayer thoughts` setup is incomplete and they need to create a PR description template at `thoughts/shared/pr_description.md`
- Read the template carefully to understand all sections and requirements
2. **Identify the PR to describe:**
- Check if the current branch has an associated PR: `gh pr view --json url,number,title,state 2>/dev/null`
- If no PR exists for the current branch, or if on main/master, list open PRs: `gh pr list --limit 10 --json number,title,headRefName,author`
- Ask the user which PR they want to describe
3. **Check for existing description:**
- Check if `thoughts/shared/prs/{number}_description.md` already exists
- If it exists, read it and inform the user you'll be updating it
- Consider what has changed since the last description was written
4. **Gather comprehensive PR information:**
- Get the full PR diff: `gh pr diff {number}`
- If you get an error about no default remote repository, instruct the user to run `gh repo set-default` and select the appropriate repository
- Get commit history: `gh pr view {number} --json commits`
- Review the base branch: `gh pr view {number} --json baseRefName`
- Get PR metadata: `gh pr view {number} --json url,title,number,state`
5. **Analyze the changes thoroughly:** (ultrathink about the code changes, their architectural implications, and potential impacts)
- Read through the entire diff carefully
- For context, read any files that are referenced but not shown in the diff
- Understand the purpose and impact of each change
- Identify user-facing changes vs internal implementation details
- Look for breaking changes or migration requirements
6. **Handle verification requirements:**
- Look for any checklist items in the "How to verify it" section of the template
- For each verification step:
- If it's a command you can run (like `make check test`, `npm test`, etc.), run it
- If it passes, mark the checkbox as checked: `- [x]`
- If it fails, keep it unchecked and note what failed: `- [ ]` with explanation
- If it requires manual testing (UI interactions, external services), leave unchecked and note for user
- Document any verification steps you couldn't complete
7. **Generate the description:**
- Fill out each section from the template thoroughly:
- Answer each question/section based on your analysis
- Be specific about problems solved and changes made
- Focus on user impact where relevant
- Include technical details in appropriate sections
- Write a concise changelog entry
- Ensure all checklist items are addressed (checked or explained)
8. **Save and sync the description:**
- Write the completed description to `thoughts/shared/prs/{number}_description.md`
- Run `humanlayer thoughts sync` to sync the thoughts directory
- Show the user the generated description
9. **Update the PR:**
- Update the PR description directly: `gh pr edit {number} --body-file thoughts/shared/prs/{number}_description.md`
- Confirm the update was successful
- If any verification steps remain unchecked, remind the user to complete them before merging
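Condensed, the gather-and-update loop from steps 4 and 9 amounts to something like this (PR number 123 is a placeholder):

```bash
gh pr diff 123                                             # full diff (step 4)
gh pr view 123 --json commits,baseRefName,url,title,state  # metadata (step 4)
gh pr edit 123 --body-file thoughts/shared/prs/123_description.md  # update (step 9)
```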
## Important notes:
- This command works across different repositories - always read the local template
- Be thorough but concise - descriptions should be scannable
- Focus on the "why" as much as the "what"
- Include any breaking changes or migration notes prominently
- If the PR touches multiple components, organize the description accordingly
- Always attempt to run verification commands when possible
- Clearly communicate which verification steps need manual testing

View File

@@ -1,65 +0,0 @@
# Implement Plan
You are tasked with implementing an approved technical plan from `thoughts/shared/plans/`. These plans contain phases with specific changes and success criteria.
## Getting Started
When given a plan path:
- Read the plan completely and check for any existing checkmarks (- [x])
- Read the original ticket and all files mentioned in the plan
- **Read files fully** - never use limit/offset parameters, you need complete context
- Think deeply about how the pieces fit together
- Create a todo list to track your progress
- Start implementing if you understand what needs to be done
If no plan path provided, ask for one.
## Implementation Philosophy
Plans are carefully designed, but reality can be messy. Your job is to:
- Follow the plan's intent while adapting to what you find
- Implement each phase fully before moving to the next
- Verify your work makes sense in the broader codebase context
- Update checkboxes in the plan as you complete sections
When things don't match the plan exactly, think about why and communicate clearly. The plan is your guide, but your judgment matters too.
If you encounter a mismatch:
- STOP and think deeply about why the plan can't be followed
- Present the issue clearly:
```
Issue in Phase [N]:
Expected: [what the plan says]
Found: [actual situation]
Why this matters: [explanation]
How should I proceed?
```
## Verification Approach
After implementing a phase:
- Run the success criteria checks (usually `cargo test -p [crate_name]` covers everything)
- Fix any issues before proceeding
- Update your progress in both the plan and your todos
- Check off completed items in the plan file itself using Edit
Don't let verification interrupt your flow - batch it at natural stopping points.
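A typical batch at one of those stopping points might look like this (crate name is a placeholder):

```bash
cargo test -p editor                   # phase success criteria
cargo clippy -p editor -- -D warnings  # catch lint regressions before moving on
```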
## If You Get Stuck
When something isn't working as expected:
- First, make sure you've read and understood all the relevant code
- Consider if the codebase has evolved since the plan was written
- Present the mismatch clearly and ask for guidance
Use sub-tasks sparingly - mainly for targeted debugging or exploring unfamiliar territory.
## Resuming Work
If the plan has existing checkmarks:
- Trust that completed work is done
- Pick up from the first unchecked item
- Verify previous work only if something seems off
Remember: You're implementing a solution, not just checking boxes. Keep the end goal in mind and maintain forward momentum.

View File

@@ -1,44 +0,0 @@
# Local Review
You are tasked with setting up a local review environment for a colleague's branch. This involves creating a worktree, setting up dependencies, and launching a new Claude Code session.
## Process
When invoked with a parameter like `gh_username:branchName`:
1. **Parse the input**:
- Extract GitHub username and branch name from the format `username:branchname`
- If no parameter provided, ask for it in the format: `gh_username:branchName`
2. **Extract ticket information**:
- Look for ticket numbers in the branch name (e.g., `eng-1696`, `ENG-1696`)
- Use this to create a short worktree directory name
- If no ticket found, use a sanitized version of the branch name
3. **Set up the remote and worktree**:
- Check if the remote already exists using `git remote -v`
- If not, add it: `git remote add USERNAME git@github.com:USERNAME/humanlayer`
- Fetch from the remote: `git fetch USERNAME`
- Create worktree: `git worktree add -b BRANCHNAME ~/wt/humanlayer/SHORT_NAME USERNAME/BRANCHNAME`
4. **Configure the worktree**:
- Copy Claude settings: `cp .claude/settings.local.json WORKTREE/.claude/`
- Run setup: `make -C WORKTREE setup`
- Initialize thoughts: `cd WORKTREE && npx humanlayer thoughts init --directory humanlayer`
## Error Handling
- If worktree already exists, inform the user they need to remove it first
- If remote fetch fails, check if the username/repo exists
- If setup fails, provide the error but continue with the launch
## Example Usage
```
/local_review samdickson22:sam/eng-1696-hotkey-for-yolo-mode
```
This will:
- Add 'samdickson22' as a remote
- Create worktree at `~/wt/humanlayer/eng-1696`
- Set up the environment
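Expanded, that invocation runs roughly these commands (derived from the steps above; the username, branch, and short name come from the parameter):

```bash
git remote add samdickson22 git@github.com:samdickson22/humanlayer
git fetch samdickson22
git worktree add -b sam/eng-1696-hotkey-for-yolo-mode \
  ~/wt/humanlayer/eng-1696 samdickson22/sam/eng-1696-hotkey-for-yolo-mode
cp .claude/settings.local.json ~/wt/humanlayer/eng-1696/.claude/
make -C ~/wt/humanlayer/eng-1696 setup
```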

View File

@@ -1,28 +0,0 @@
## PART I - IF A TICKET IS MENTIONED
0c. use `linear` cli to fetch the selected item into thoughts with the ticket number - ./thoughts/shared/tickets/ENG-xxxx.md
0d. read the ticket and all comments to understand the implementation plan and any concerns
## PART I - IF NO TICKET IS MENTIONED
0. read .claude/commands/linear.md
0a. fetch the top 10 priority items from linear in status "ready for dev" using the MCP tools, noting all items in the `links` section
0b. select the highest priority SMALL or XS issue from the list (if no SMALL or XS issues exist, EXIT IMMEDIATELY and inform the user)
0c. use `linear` cli to fetch the selected item into thoughts with the ticket number - ./thoughts/shared/tickets/ENG-xxxx.md
0d. read the ticket and all comments to understand the implementation plan and any concerns
## PART II - NEXT STEPS
think deeply
1. move the item to "in dev" using the MCP tools
1a. identify the linked implementation plan document from the `links` section
1b. if no plan exists, move the ticket back to "ready for spec" and EXIT with an explanation
think deeply about the implementation
2. set up worktree for implementation:
2a. read `hack/create_worktree.sh` and create a new worktree with the Linear branch name: `./hack/create_worktree.sh ENG-XXXX BRANCH_NAME`
2b. launch implementation session: `npx humanlayer launch --model opus -w ~/wt/humanlayer/ENG-XXXX "/implement_plan and when you are done implementing and all tests pass, read ./claude/commands/commit.md and create a commit, then read ./claude/commands/describe_pr.md and create a PR, then add a comment to the Linear ticket with the PR link"`
think deeply, use TodoWrite to track your tasks. When fetching from linear, get the top 10 items by priority but only work on ONE item - specifically the highest priority SMALL or XS sized issue.

View File

@@ -1,30 +0,0 @@
## PART I - IF A TICKET IS MENTIONED
0c. use `linear` cli to fetch the selected item into thoughts with the ticket number - ./thoughts/shared/tickets/ENG-xxxx.md
0d. read the ticket and all comments to learn about past implementations and research, and any questions or concerns about them
## PART I - IF NO TICKET IS MENTIONED
0. read .claude/commands/linear.md
0a. fetch the top 10 priority items from linear in status "ready for spec" using the MCP tools, noting all items in the `links` section
0b. select the highest priority SMALL or XS issue from the list (if no SMALL or XS issues exist, EXIT IMMEDIATELY and inform the user)
0c. use `linear` cli to fetch the selected item into thoughts with the ticket number - ./thoughts/shared/tickets/ENG-xxxx.md
0d. read the ticket and all comments to learn about past implementations and research, and any questions or concerns about them
## PART II - NEXT STEPS
think deeply
1. move the item to "plan in progress" using the MCP tools
1a. read ./claude/commands/create_plan.md
1b. determine if the item has a linked implementation plan document based on the `links` section
1c. if the plan exists, you're done, respond with a link to the ticket
1d. if the research is insufficient or has unanswered questions, create a new plan document following the instructions in ./claude/commands/create_plan.md
think deeply
2. when the plan is complete, `humanlayer thoughts sync` and attach the doc to the ticket using the MCP tools and create a terse comment with a link to it (re-read .claude/commands/linear.md if needed)
2a. move the item to "plan in review" using the MCP tools
think deeply, use TodoWrite to track your tasks. When fetching from linear, get the top 10 items by priority but only work on ONE item - specifically the highest priority SMALL or XS sized issue.

View File

@@ -1,46 +0,0 @@
## PART I - IF A LINEAR TICKET IS MENTIONED
0c. use `linear` cli to fetch the selected item into thoughts with the ticket number - ./thoughts/shared/tickets/ENG-xxxx.md
0d. read the ticket and all comments to understand what research is needed and any previous attempts
## PART I - IF NO TICKET IS MENTIONED
0. read .claude/commands/linear.md
0a. fetch the top 10 priority items from linear in status "research needed" using the MCP tools, noting all items in the `links` section
0b. select the highest priority SMALL or XS issue from the list (if no SMALL or XS issues exist, EXIT IMMEDIATELY and inform the user)
0c. use `linear` cli to fetch the selected item into thoughts with the ticket number - ./thoughts/shared/tickets/ENG-xxxx.md
0d. read the ticket and all comments to understand what research is needed and any previous attempts
## PART II - NEXT STEPS
think deeply
1. move the item to "research in progress" using the MCP tools
1a. read any linked documents in the `links` section to understand context
1b. if insufficient information to conduct research, add a comment asking for clarification and move back to "research needed"
think deeply about the research needs
2. conduct the research:
2a. read .claude/commands/research_codebase.md for guidance on effective codebase research
2b. if the linear comments suggest web research is needed, use WebSearch to research external solutions, APIs, or best practices
2c. search the codebase for relevant implementations and patterns
2d. examine existing similar features or related code
2e. identify technical constraints and opportunities
2f. Be unbiased - don't think too much about an ideal implementation plan, just document all related files and how the systems work today
2g. document findings in a new thoughts document: `thoughts/shared/research/ENG-XXXX_research.md`
think deeply about the findings
3. synthesize research into actionable insights:
3a. summarize key findings and technical decisions
3b. identify potential implementation approaches
3c. note any risks or concerns discovered
3d. run `humanlayer thoughts sync` to save the research
4. update the ticket:
4a. attach the research document to the ticket using the MCP tools with proper link formatting
4b. add a comment summarizing the research outcomes
4c. move the item to "research in review" using the MCP tools
think deeply, use TodoWrite to track your tasks. When fetching from linear, get the top 10 items by priority but only work on ONE item - specifically the highest priority issue.

View File

@@ -1,172 +0,0 @@
# Research Codebase
You are tasked with conducting comprehensive research across the codebase to answer user questions by spawning parallel sub-agents and synthesizing their findings.
## Initial Setup:
When this command is invoked, respond with:
```
I'm ready to research the codebase. Please provide your research question or area of interest, and I'll analyze it thoroughly by exploring relevant components and connections.
```
Then wait for the user's research query.
## Steps to follow after receiving the research query:
1. **Read any directly mentioned files first:**
- If the user mentions specific files (crates, docs, JSON), read them FULLY first
- **IMPORTANT**: Use the Read tool WITHOUT limit/offset parameters to read entire files
- **CRITICAL**: Read these files yourself in the main context before spawning any sub-tasks
- This ensures you have full context before decomposing the research
2. **Analyze and decompose the research question:**
- Break down the user's query into composable research areas
- Take time to ultrathink about the underlying patterns, connections, and architectural implications the user might be seeking
- Identify specific components, patterns, or concepts to investigate
- Create a research plan using TodoWrite to track all subtasks
- Consider which directories, files, or architectural patterns are relevant
3. **Spawn parallel sub-agent tasks for comprehensive research:**
- Create multiple Task agents to research different aspects concurrently
- We now have specialized agents that know how to do specific research tasks:
**For codebase research:**
- Use the **codebase-locator** agent to find WHERE files and components live
- Use the **codebase-analyzer** agent to understand HOW specific code works
The key is to use these agents intelligently:
- Start with locator agents to find what exists
- Then use analyzer agents on the most promising findings
- Run multiple agents in parallel when they're searching for different things
- Each agent knows its job - just tell it what you're looking for
- Don't write detailed prompts about HOW to search - the agents already know
4. **Wait for all sub-agents to complete and synthesize findings:**
- IMPORTANT: Wait for ALL sub-agent tasks to complete before proceeding
- Compile all sub-agent results (both codebase and thoughts findings)
- Prioritize live codebase findings as primary source of truth
- Use thoughts/ findings as supplementary historical context
- Connect findings across different components
- Include specific file paths and line numbers for reference
- Verify all thoughts/ paths are correct (e.g., thoughts/allison/ not thoughts/shared/ for personal files)
- Highlight patterns, connections, and architectural decisions
- Answer the user's specific questions with concrete evidence
5. **Gather metadata for the research document:**
- Run the `zed/script/spec_metadata.sh` script to generate all relevant metadata
- Filename: `thoughts/shared/research/YYYY-MM-DD_HH-MM-SS_topic.md`
6. **Generate research document:**
- Use the metadata gathered in step 5
- Structure the document with YAML frontmatter followed by content:
```markdown
---
date: [Current date and time with timezone in ISO format]
researcher: [Researcher name from thoughts status]
git_commit: [Current commit hash]
branch: [Current branch name]
repository: [Repository name]
topic: "[User's Question/Topic]"
tags: [research, codebase, relevant-component-names]
status: complete
last_updated: [Current date in YYYY-MM-DD format]
last_updated_by: [Researcher name]
---
# Research: [User's Question/Topic]
**Date**: [Current date and time with timezone from step 5]
**Researcher**: [Researcher name from thoughts status]
**Git Commit**: [Current commit hash from step 5]
**Branch**: [Current branch name from step 5]
**Repository**: [Repository name]
## Research Question
[Original user query]
## Summary
[High-level findings answering the user's question]
## Detailed Findings
### [Component/Area 1]
- Finding with reference ([file.ext:line](link))
- Connection to other components
- Implementation details
### [Component/Area 2]
...
## Code References
- `path/to/file.py:123` - Description of what's there
- `another/file.ts:45-67` - Description of the code block
## Architecture Insights
[Patterns, conventions, and design decisions discovered]
## Historical Context (from thoughts/)
[Relevant insights from thoughts/ directory with references]
- `thoughts/shared/something.md` - Historical decision about X
- `thoughts/local/notes.md` - Past exploration of Y
Note: Paths exclude "searchable/" even if found there
## Related Research
[Links to other research documents in thoughts/shared/research/]
## Open Questions
[Any areas that need further investigation]
```
7. **Add GitHub permalinks (if applicable):**
- Check if on main branch or if commit is pushed: `git branch --show-current` and `git status`
- If on main/master or pushed, generate GitHub permalinks:
- Get repo info: `gh repo view --json owner,name`
- Create permalinks: `https://github.com/{owner}/{repo}/blob/{commit}/{file}#L{line}`
   - Replace local file references with permalinks in the document (see the sketch after this list)
8. **Handle follow-up questions:**
- If the user has follow-up questions, append to the same research document
- Update the frontmatter fields `last_updated` and `last_updated_by` to reflect the update
- Add `last_updated_note: "Added follow-up research for [brief description]"` to frontmatter
- Add a new section: `## Follow-up Research [timestamp]`
- Spawn new sub-agents as needed for additional investigation
- Continue updating the document and syncing
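Below is a minimal shell sketch of steps 5 and 7 combined. The `topic`, `file`, and `line` values are placeholders, and it assumes `git`, `gh`, and `date` are available on PATH; adapt the pieces to the actual research document:
```bash
# Step 5 (sketch): derive the research document filename from the current time.
topic="topic-name"                     # placeholder
doc="thoughts/shared/research/$(date +%Y-%m-%d_%H-%M-%S)_${topic}.md"
echo "$doc"

# Step 7 (sketch): turn a local file reference into a GitHub permalink.
commit="$(git rev-parse HEAD)"
owner="$(gh repo view --json owner --jq '.owner.login')"
repo="$(gh repo view --json name --jq '.name')"
file="path/to/file.py"; line=123       # placeholder reference
echo "https://github.com/${owner}/${repo}/blob/${commit}/${file}#L${line}"
```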
## Important notes:
- Always use parallel Task agents to maximize efficiency and minimize context usage
- Always run fresh codebase research - never rely solely on existing research documents
- The thoughts/ directory provides historical context to supplement live findings
- Focus on finding concrete file paths and line numbers for developer reference
- Research documents should be self-contained with all necessary context
- Each sub-agent prompt should be specific and focused on read-only operations
- Consider cross-component connections and architectural patterns
- Include temporal context (when the research was conducted)
- Link to GitHub when possible for permanent references
- Keep the main agent focused on synthesis, not deep file reading
- Encourage sub-agents to find examples and usage patterns, not just definitions
- Explore all of thoughts/ directory, not just research subdirectory
- **File reading**: Always read mentioned files FULLY (no limit/offset) before spawning sub-tasks
- **Critical ordering**: Follow the numbered steps exactly
- ALWAYS read mentioned files first before spawning sub-tasks (step 1)
- ALWAYS wait for all sub-agents to complete before synthesizing (step 4)
- ALWAYS gather metadata before writing the document (step 5 before step 6)
- NEVER write the research document with placeholder values
- **Frontmatter consistency**:
- Always include frontmatter at the beginning of research documents
- Keep frontmatter fields consistent across all research documents
- Update frontmatter when adding follow-up research
- Use snake_case for multi-word field names (e.g., `last_updated`, `git_commit`)
- Tags should be relevant to the research topic and components studied

View File

@@ -1,162 +0,0 @@
# Validate Plan
You are tasked with validating that an implementation plan was correctly executed, verifying all success criteria and identifying any deviations or issues.
## Initial Setup
When invoked:
1. **Determine context** - Are you in an existing conversation or starting fresh?
- If existing: Review what was implemented in this session
- If fresh: Need to discover what was done through git and codebase analysis
2. **Locate the plan**:
- If plan path provided, use it
- Otherwise, search recent commits for plan references or ask user
3. **Gather implementation evidence**:
```bash
# Check recent commits
git log --oneline -n 20
git diff HEAD~N..HEAD # Where N covers implementation commits
# Run comprehensive checks
cd $(git rev-parse --show-toplevel) && make check test
```
## Validation Process
### Step 1: Context Discovery
If starting fresh or need more context:
1. **Read the implementation plan** completely
2. **Identify what should have changed**:
- List all files that should be modified
- Note all success criteria (automated and manual)
- Identify key functionality to verify
3. **Spawn parallel research tasks** to discover implementation:
```
Task 1 - Verify database changes:
Research if migration [N] was added and schema changes match plan.
Check: migration files, schema version, table structure
Return: What was implemented vs what plan specified
Task 2 - Verify code changes:
Find all modified files related to [feature].
Compare actual changes to plan specifications.
Return: File-by-file comparison of planned vs actual
Task 3 - Verify test coverage:
Check if tests were added/modified as specified.
Run test commands and capture results.
Return: Test status and any missing coverage
```
### Step 2: Systematic Validation
For each phase in the plan:
1. **Check completion status**:
- Look for checkmarks in the plan (- [x])
- Verify the actual code matches claimed completion
2. **Run automated verification**:
   - Execute each command from "Automated Verification" (see the sketch after this list)
- Document pass/fail status
- If failures, investigate root cause
3. **Assess manual criteria**:
- List what needs manual testing
- Provide clear steps for user verification
4. **Think deeply about edge cases**:
- Were error conditions handled?
- Are there missing validations?
- Could the implementation break existing functionality?
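A minimal sketch of the automated-verification pass, assuming the `make` targets referenced above; substitute the plan's actual commands:
```bash
# Run every verification command, recording pass/fail without stopping early.
cd "$(git rev-parse --show-toplevel)"
status=0
for cmd in "make build" "make test" "make lint"; do
  log="/tmp/$(echo "$cmd" | tr ' /' '__').log"
  if $cmd > "$log" 2>&1; then
    echo "PASS: $cmd"
  else
    echo "FAIL: $cmd (see $log)"
    status=1
  fi
done
exit $status
```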
### Step 3: Generate Validation Report
Create comprehensive validation summary:
```markdown
## Validation Report: [Plan Name]
### Implementation Status
✓ Phase 1: [Name] - Fully implemented
✓ Phase 2: [Name] - Fully implemented
⚠️ Phase 3: [Name] - Partially implemented (see issues)
### Automated Verification Results
✓ Build passes: `make build`
✓ Tests pass: `make test`
✗ Linting issues: `make lint` (3 warnings)
### Code Review Findings
#### Matches Plan:
- Database migration correctly adds [table]
- API endpoints implement specified methods
- Error handling follows plan
#### Deviations from Plan:
- Used different variable names in [file:line]
- Added extra validation in [file:line] (improvement)
#### Potential Issues:
- Missing index on foreign key could impact performance
- No rollback handling in migration
### Manual Testing Required:
1. UI functionality:
- [ ] Verify [feature] appears correctly
- [ ] Test error states with invalid input
2. Integration:
- [ ] Confirm works with existing [component]
- [ ] Check performance with large datasets
### Recommendations:
- Address linting warnings before merge
- Consider adding integration test for [scenario]
- Document new API endpoints
```
## Working with Existing Context
If you were part of the implementation:
- Review the conversation history
- Check your todo list for what was completed
- Focus validation on work done in this session
- Be honest about any shortcuts or incomplete items
## Important Guidelines
1. **Be thorough but practical** - Focus on what matters
2. **Run all automated checks** - Don't skip verification commands
3. **Document everything** - Both successes and issues
4. **Think critically** - Question if the implementation truly solves the problem
5. **Consider maintenance** - Will this be maintainable long-term?
## Validation Checklist
Always verify:
- [ ] All phases marked complete are actually done
- [ ] Automated tests pass
- [ ] Code follows existing patterns
- [ ] No regressions introduced
- [ ] Error handling is robust
- [ ] Documentation updated if needed
- [ ] Manual test steps are clear
## Relationship to Other Commands
Recommended workflow:
1. `/implement_plan` - Execute the implementation
2. `/commit` - Create atomic commits for changes
3. `/validate_plan` - Verify implementation correctness
4. `/describe_pr` - Generate PR description
The validation works best after commits are made, as it can analyze the git history to understand what was implemented.
Remember: Good validation catches issues before they reach production. Be constructive but thorough in identifying gaps or improvements.

View File

@@ -1,10 +0,0 @@
{
"permissions": {
"allow": [
// "Bash(./hack/spec_metadata.sh)",
// "Bash(hack/spec_metadata.sh)",
// "Bash(bash hack/spec_metadata.sh)"
]
},
"enableAllProjectMcpServers": false
}

View File

@@ -1,31 +0,0 @@
{
"permissions": {
"allow": [
"Read(/Users/mikaylamaki/projects/zed-work/zed-monorepo-real/**)",
"Read(/Users/nathan/src/agent-client-protocol/rust/**)",
"Read(/Users/nathan/src/agent-client-protocol/rust/**)",
"Read(/Users/nathan/src/agent-client-protocol/rust/**)",
"Read(/Users/nathan/src/agent-client-protocol/rust/**)",
"Bash(git add:*)",
"Read(/Users/nathan/src/agent-client-protocol/rust/**)",
"Bash(./script/spec_metadata.sh:*)",
"Bash(npm run generate:*)",
"Bash(npm run typecheck:*)",
"Bash(npm run:*)",
"Bash(npm install)",
"Bash(grep:*)",
"Bash(find:*)",
"Bash(node:*)",
"Bash(cargo check:*)",
"Bash(cargo test)",
"Bash(npx tsc:*)"
],
"additionalDirectories": [
"/Users/mikaylamaki/projects/zed-work/zed-monorepo-real/claude-code-acp/",
"/Users/mikaylamaki/projects/zed-work/zed-monorepo-real/agentic-coding-protocol/",
"/Users/nathan/src/agent",
"/Users/nathan/src/agent-client-protocol/",
"/Users/nathan/src/claude-code-acp"
]
}
}

View File

@@ -1 +0,0 @@
.rules

View File

@@ -1,8 +1,8 @@
We have two cloudflare workers that let us serve some assets of this repo
from Cloudflare.
- `open-source-website-assets` is used for `install.sh`
- `docs-proxy` is used for `https://zed.dev/docs`
* `open-source-website-assets` is used for `install.sh`
* `docs-proxy` is used for `https://zed.dev/docs`
On push to `main`, both of these (and the files they depend on) are uploaded to Cloudflare.
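The upload happens in CI via wrangler; as a rough manual equivalent, mirroring the deploy workflow elsewhere in this repo (project and bucket names are taken from that workflow and may drift):
```bash
# Publish the docs site, the install script, and the docs-proxy worker (sketch).
wrangler pages deploy target/deploy --project-name=docs
wrangler r2 object put -f script/install.sh zed-open-source-website-assets/install.sh
wrangler deploy .cloudflare/docs-proxy/src/worker.js
```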

View File

@@ -1,45 +0,0 @@
# This file contains settings for `cargo hakari`.
# See https://docs.rs/cargo-hakari/latest/cargo_hakari/config for a full list of options.
hakari-package = "workspace-hack"
resolver = "2"
dep-format-version = "4"
workspace-hack-line-style = "workspace-dotted"
# this should be the same list as "targets" in ../rust-toolchain.toml
platforms = [
"x86_64-apple-darwin",
"aarch64-apple-darwin",
"x86_64-unknown-linux-gnu",
"aarch64-unknown-linux-gnu",
"x86_64-pc-windows-msvc",
"x86_64-unknown-linux-musl", # remote server
]
[traversal-excludes]
workspace-members = [
"remote_server",
]
third-party = [
{ name = "reqwest", version = "0.11.27" },
# build of remote_server should not include scap / its x11 dependency
{ name = "scap", git = "https://github.com/zed-industries/scap", rev = "808aa5c45b41e8f44729d02e38fd00a2fe2722e7" },
# build of remote_server should not need to include on libalsa through rodio
{ name = "rodio" },
]
[final-excludes]
workspace-members = [
"zed_extension_api",
# exclude all extensions
"zed_glsl",
"zed_html",
"zed_proto",
"zed_ruff",
"slash_commands_example",
"zed_snippets",
"zed_test_extension",
"zed_toml",
]
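As a usage sketch (assuming `cargo-hakari` is installed), the config above is typically exercised with:
```bash
cargo install cargo-hakari --locked   # one-time setup
cargo hakari generate                 # regenerate the workspace-hack crate
cargo hakari verify                   # fail if it is out of date (handy in CI)
```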

View File

@@ -1 +0,0 @@
.rules

View File

@@ -1,13 +1,6 @@
.git
.github
**/.gitignore
**/.gitkeep
.gitattributes
.mailmap
**/target
zed.xcworkspace
.DS_Store
compose.yml
plugins/bin
script/node_modules
styles/node_modules

View File

@@ -1,36 +0,0 @@
# .git-blame-ignore-revs
#
# This file consists of a list of commits that should be ignored for
# `git blame` purposes. This is useful for ignoring commits that only
# changed whitespace / indentation / formatting, but did not change
# the underlying syntax tree.
#
# GitHub will pick this up automatically for blame views:
# https://docs.github.com/en/repositories/working-with-files/using-files/viewing-a-file#ignore-commits-in-the-blame-view
# To use this file locally, run:
# git blame --ignore-revs-file .git-blame-ignore-revs
# To always use this file by default, run:
# git config --local blame.ignoreRevsFile .git-blame-ignore-revs
# To disable this functionality, run:
# git config --local blame.ignoreRevsFile ""
# Comments are optional, but may provide helpful context.
# 2023-04-20 Set default tab_size for JSON to 2 and apply new formatting
# https://github.com/zed-industries/zed/pull/2394
eca93c124a488b4e538946cd2d313bd571aa2b86
# 2024-02-15 Format YAML files
# https://github.com/zed-industries/zed/pull/7887
a161a7d0c95ca7505bf9218bfae640ee5444c88b
# 2024-02-25 Format JSON files in assets/
# https://github.com/zed-industries/zed/pull/8405
ffdda588b41f7d9d270ffe76cab116f828ad545e
# 2024-07-05 Improved formatting of default keymaps (single line per bind)
# https://github.com/zed-industries/zed/pull/13887
813cc3f5e537372fc86720b5e71b6e1c815440ab
# 2024-07-24 docs: Format docs
# https://github.com/zed-industries/zed/pull/15352
3a44a59f8ec114ac1ba22f7da1652717ef7e4e5c

View File

@@ -1,41 +0,0 @@
name: Bug Report (AI)
description: Zed Agent Panel Bugs
type: "Bug"
labels: ["ai"]
title: "AI: <a short description of the AI Related bug>"
body:
- type: textarea
attributes:
label: Summary
description: Describe the bug with a one line summary, and provide detailed reproduction steps
value: |
<!-- Please insert a one line summary of the issue below -->
SUMMARY_SENTENCE_HERE
### Description
<!-- Describe with sufficient detail to reproduce from a clean Zed install. -->
Steps to trigger the problem:
1.
2.
3.
**Expected Behavior**:
**Actual Behavior**:
### Model Provider Details
- Provider: (Anthropic via ZedPro, Anthropic via API key, Copilot Chat, Mistral, OpenAI, etc)
- Model Name:
- Mode: (Agent Panel, Inline Assistant, Terminal Assistant or Text Threads)
- Other Details (MCPs, other settings, etc):
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed Version and System Specs
description: 'Open Zed, and in the command palette select "zed: copy system specs into clipboard"'
placeholder: |
Output of "zed: copy system specs into clipboard"
validations:
required: true

View File

@@ -1,35 +0,0 @@
name: Bug Report (Debugger)
description: Zed Debugger-Related Bugs
type: "Bug"
labels: ["debugger"]
title: "Debugger: <a short description of the Debugger bug>"
body:
- type: textarea
attributes:
label: Summary
description: Describe the bug with a one line summary, and provide detailed reproduction steps
value: |
<!-- Please insert a one line summary of the issue below -->
SUMMARY_SENTENCE_HERE
### Description
<!-- Describe with sufficient detail to reproduce from a clean Zed install. -->
Steps to trigger the problem:
1.
2.
3.
**Expected Behavior**:
**Actual Behavior**:
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed Version and System Specs
description: 'Open Zed, and in the command palette select "zed: copy system specs into clipboard"'
placeholder: |
Output of "zed: copy system specs into clipboard"
validations:
required: true

View File

@@ -1,35 +0,0 @@
name: Bug Report (Windows Alpha)
description: Zed Windows Alpha Related Bugs
type: "Bug"
labels: ["windows"]
title: "Windows Alpha: <a short description of the Windows bug>"
body:
- type: textarea
attributes:
label: Summary
description: Describe the bug with a one-line summary, and provide detailed reproduction steps
value: |
<!-- Please insert a one-line summary of the issue below -->
SUMMARY_SENTENCE_HERE
### Description
<!-- Describe with sufficient detail to reproduce from a clean Zed install. -->
Steps to trigger the problem:
1.
2.
3.
**Expected Behavior**:
**Actual Behavior**:
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed Version and System Specs
description: 'Open Zed, and in the command palette select "zed: copy system specs into clipboard"'
placeholder: |
Output of "zed: copy system specs into clipboard"
validations:
required: true

View File

@@ -0,0 +1,24 @@
name: Feature Request
description: "Tip: open this issue template from within Zed with the `request feature` command palette action"
labels: ["admin read", "triage", "enhancement"]
body:
- type: checkboxes
attributes:
label: Check for existing issues
description: Check the backlog of issues to reduce the chances of creating duplicates; if an issue already exists, place a `+1` (👍) on it.
options:
- label: Completed
required: true
- type: textarea
attributes:
label: Describe the feature
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: textarea
attributes:
label: |
If applicable, add mockups / screenshots to help present your vision of the feature
description: Drag images into the text input below
validations:
required: false

View File

@@ -1,58 +0,0 @@
name: Bug Report (Other)
description: |
Something else is broken in Zed (excluding crashes).
type: "Bug"
body:
- type: textarea
attributes:
label: Summary
description: Provide a one sentence summary and detailed reproduction steps
value: |
<!-- Begin your issue with a one sentence summary -->
SUMMARY_SENTENCE_HERE
### Description
<!-- Describe with sufficient detail to reproduce from a clean Zed install.
- Any code must be sufficient to reproduce (include context!)
- Include code as text, not just as a screenshot.
- Issues with insufficient detail may be summarily closed.
-->
DESCRIPTION_HERE
Steps to reproduce:
1.
2.
3.
4.
**Expected Behavior**:
**Actual Behavior**:
<!-- Before Submitting, did you:
1. Include settings.json, keymap.json, .editorconfig if relevant?
2. Check your Zed.log for relevant errors? (please include!)
3. Click Preview to ensure everything looks right?
4. Hide videos, large images and logs in ``` inside collapsible blocks:
<details><summary>click to expand</summary>
```json
```
</details>
-->
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed Version and System Specs
description: |
Open Zed, from the command palette select "zed: copy system specs into clipboard"
placeholder: |
Output of "zed: copy system specs into clipboard"
validations:
required: true

View File

@@ -1,51 +0,0 @@
name: Crash Report
description: Zed is Crashing or Hanging
type: "Crash"
body:
- type: textarea
attributes:
label: Summary
description: Summarize the issue with detailed reproduction steps
value: |
<!-- Begin your issue with a one sentence summary -->
SUMMARY_SENTENCE_HERE
### Description
<!-- Include all steps necessary to reproduce from a clean Zed installation. Be verbose -->
Steps to trigger the problem:
1.
2.
3.
Actual Behavior:
Expected Behavior:
validations:
required: true
- type: textarea
id: environment
attributes:
label: Zed Version and System Specs
description: 'Open Zed, and in the command palette select "zed: copy system specs into clipboard"'
placeholder: |
Output of "zed: copy system specs into clipboard"
validations:
required: true
- type: textarea
attributes:
label: If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
description: |
macOS: `~/Library/Logs/Zed/Zed.log`
Linux: `~/.local/share/zed/logs/Zed.log` or $XDG_DATA_HOME
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
value: |
<details><summary>Zed.log</summary>
<!-- Paste your log inside the code block. -->
```log
```
</details>
validations:
required: false

.github/ISSUE_TEMPLATE/1_bug_report.yml
View File

@@ -0,0 +1,40 @@
name: Bug Report
description: |
Use this template for **non-crash-related** bug reports.
Tip: open this issue template from within Zed with the `file bug report` command palette action.
labels: ["admin read", "triage", "defect"]
body:
- type: checkboxes
attributes:
label: Check for existing issues
description: Check the backlog of issues to reduce the chances of creating duplicates; if an issue already exists, place a `+1` (👍) on it.
options:
- label: Completed
required: true
- type: textarea
attributes:
label: Describe the bug / provide steps to reproduce it
description: A clear and concise description of what the bug is.
validations:
required: true
- type: textarea
id: environment
attributes:
label: Environment
description: Run the `copy system specs into clipboard` command palette action and paste the output in the field below.
validations:
required: true
- type: textarea
attributes:
label: If applicable, add mockups / screenshots to help explain the problem
description: Drag images into the text input below
validations:
required: false
- type: textarea
attributes:
label: If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
description: |
Drag Zed.log into the text input below.
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
validations:
required: false

View File

@@ -0,0 +1,33 @@
name: Crash Report
description: |
Use this template for crash reports.
labels: ["admin read", "triage", "defect", "panic / crash"]
body:
- type: checkboxes
attributes:
label: Check for existing issues
description: Check the backlog of issues to reduce the chances of creating duplicates; if an issue already exists, place a `+1` (👍) on it.
options:
- label: Completed
required: true
- type: textarea
attributes:
label: Describe the bug / provide steps to reproduce it
description: A clear and concise description of what the bug is.
validations:
required: true
- type: textarea
id: environment
attributes:
label: Environment
description: Run the `copy system specs into clipboard` command palette action and paste the output in the field below.
validations:
required: true
- type: textarea
attributes:
label: If applicable, attach your `~/Library/Logs/Zed/Zed.log` file to this issue.
description: |
Drag Zed.log into the text input below.
If you only need the most recent lines, you can run the `zed: open log` command palette action to see the last 1000.
validations:
required: false

View File

@@ -1,19 +0,0 @@
name: Other [Staff Only]
description: Zed Staff Only
body:
- type: textarea
attributes:
label: Summary
value: |
<!-- Please insert a one line summary of the issue below -->
SUMMARY_SENTENCE_HERE
### Description
IF YOU DO NOT WORK FOR ZED INDUSTRIES DO NOT CREATE ISSUES WITH THIS TEMPLATE.
THEY WILL BE AUTO-CLOSED AND MAY RESULT IN YOU BEING BANNED FROM THE ZED ISSUE TRACKER.
FEATURE REQUESTS / SUPPORT REQUESTS SHOULD BE OPENED AS DISCUSSIONS:
https://github.com/zed-industries/zed/discussions/new/choose
validations:
required: true

View File

@@ -1,9 +1,17 @@
# yaml-language-server: $schema=https://json.schemastore.org/github-issue-config.json
blank_issues_enabled: false
contact_links:
- name: Feature Request
url: https://github.com/zed-industries/zed/discussions/new/choose
about: To request a feature, open a new Discussion in one of the appropriate Discussion categories
- name: "Zed Discord"
url: https://zed.dev/community-links
about: Real-time discussion and user support
- name: Language Request
url: https://github.com/zed-industries/extensions/issues/new?assignees=&labels=language&projects=&template=1_language_request.yml&title=%3Cname_of_language%3E
about: Request a language in the extensions repository
- name: Theme Request
url: https://github.com/zed-industries/extensions/issues/new?assignees=&labels=theme&projects=&template=0_theme_request.yml&title=%3Cname_of_theme%3E+theme
about: Request a theme in the extensions repository
- name: Top-Ranking Issues
url: https://github.com/zed-industries/zed/issues/5393
about: See an overview of the most popular Zed issues
- name: Platform Support
url: https://github.com/zed-industries/zed/issues/5391
about: A quick note on platform support
- name: Positive Feedback
url: https://github.com/zed-industries/zed/discussions/5397
about: A central location for kind words about Zed

View File

@@ -1,45 +0,0 @@
# Configuration related to self-hosted runner.
self-hosted-runner:
# Labels of self-hosted runner in array of strings.
labels:
# GitHub-hosted Runners
- github-8vcpu-ubuntu-2404
- github-16vcpu-ubuntu-2404
- github-32vcpu-ubuntu-2404
- github-8vcpu-ubuntu-2204
- github-16vcpu-ubuntu-2204
- github-32vcpu-ubuntu-2204
- github-16vcpu-ubuntu-2204-arm
- windows-2025-16
- windows-2025-32
- windows-2025-64
# Namespace Ubuntu 20.04 (Release builds)
- namespace-profile-16x32-ubuntu-2004
- namespace-profile-32x64-ubuntu-2004
- namespace-profile-16x32-ubuntu-2004-arm
- namespace-profile-32x64-ubuntu-2004-arm
# Namespace Ubuntu 22.04 (Everything else)
- namespace-profile-4x8-ubuntu-2204
- namespace-profile-8x16-ubuntu-2204
- namespace-profile-16x32-ubuntu-2204
- namespace-profile-32x64-ubuntu-2204
# Namespace Ubuntu 24.04 (like ubuntu-latest)
- namespace-profile-2x4-ubuntu-2404
# Namespace Limited Preview
- namespace-profile-8x16-ubuntu-2004-arm-m4
- namespace-profile-8x32-ubuntu-2004-arm-m4
# Self Hosted Runners
- self-mini-macos
- self-32vcpu-windows-2022
# Disable shellcheck because it doesn't like powershell.
# This should have been triggered by the initial rollout of actionlint,
# but https://github.com/zed-industries/zed/pull/36693
# somehow caused actionlint to actually start checking those windows jobs
# where previously they were being skipped. This is likely caused by an
# unknown bug in actionlint where parsing of `runs-on: [ ]`
# breaks something else. (yuck)
paths:
.github/workflows/{ci,release_nightly}.yml:
ignore:
- "shellcheck"

View File

@@ -1,38 +0,0 @@
name: "Build docs"
description: "Build the docs"
runs:
using: "composite"
steps:
- name: Setup mdBook
uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08 # v2
with:
mdbook-version: "0.4.37"
- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"
- name: Install Linux dependencies
shell: bash -euxo pipefail {0}
run: ./script/linux
- name: Check for broken links (in MD)
uses: lycheeverse/lychee-action@82202e5e9c2f4ef1a55a3d02563e1cb6041e5332 # v2.4.1
with:
args: --no-progress --exclude '^http' './docs/src/**/*'
fail: true
- name: Build book
shell: bash -euxo pipefail {0}
run: |
mkdir -p target/deploy
mdbook build ./docs --dest-dir=../target/deploy/docs/
- name: Check for broken links (in HTML)
uses: lycheeverse/lychee-action@82202e5e9c2f4ef1a55a3d02563e1cb6041e5332 # v2.4.1
with:
args: --no-progress --exclude '^http' 'target/deploy/docs/'
fail: true

View File

@@ -7,3 +7,9 @@ runs:
- name: cargo fmt
shell: bash -euxo pipefail {0}
run: cargo fmt --all -- --check
- name: Find modified migrations
shell: bash -euxo pipefail {0}
run: |
export SQUAWK_GITHUB_TOKEN=${{ github.token }}
. ./script/squawk

View File

@@ -7,10 +7,10 @@ runs:
- name: Install Rust
shell: bash -euxo pipefail {0}
run: |
cargo install cargo-nextest --locked
cargo install cargo-nextest
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
uses: actions/setup-node@v4
with:
node-version: "18"

View File

@@ -1,186 +0,0 @@
name: "Run tests on Windows"
description: "Runs the tests on Windows"
inputs:
working-directory:
description: "The working directory"
required: true
default: "."
runs:
using: "composite"
steps:
- name: Install test runner
shell: powershell
working-directory: ${{ inputs.working-directory }}
run: cargo install cargo-nextest --locked
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
with:
node-version: "18"
- name: Configure crash dumps
shell: powershell
run: |
# Record the start time for this CI run
$runStartTime = Get-Date
$runStartTimeStr = $runStartTime.ToString("yyyy-MM-dd HH:mm:ss")
Write-Host "CI run started at: $runStartTimeStr"
# Save the timestamp for later use
echo "CI_RUN_START_TIME=$($runStartTime.Ticks)" >> $env:GITHUB_ENV
# Create crash dump directory in workspace (non-persistent)
$dumpPath = "$env:GITHUB_WORKSPACE\crash_dumps"
New-Item -ItemType Directory -Force -Path $dumpPath | Out-Null
Write-Host "Setting up crash dump detection..."
Write-Host "Workspace dump path: $dumpPath"
# Note: We're NOT modifying registry on stateful runners
# Instead, we'll check default Windows crash locations after tests
- name: Run tests
shell: powershell
working-directory: ${{ inputs.working-directory }}
run: |
$env:RUST_BACKTRACE = "full"
# Enable Windows debugging features
$env:_NT_SYMBOL_PATH = "srv*https://msdl.microsoft.com/download/symbols"
# .NET crash dump environment variables (ephemeral)
$env:COMPlus_DbgEnableMiniDump = "1"
$env:COMPlus_DbgMiniDumpType = "4"
$env:COMPlus_CreateDumpDiagnostics = "1"
cargo nextest run --workspace --no-fail-fast
- name: Analyze crash dumps
if: always()
shell: powershell
run: |
Write-Host "Checking for crash dumps..."
# Get the CI run start time from the environment
$runStartTime = [DateTime]::new([long]$env:CI_RUN_START_TIME)
Write-Host "Only analyzing dumps created after: $($runStartTime.ToString('yyyy-MM-dd HH:mm:ss'))"
# Check all possible crash dump locations
$searchPaths = @(
"$env:GITHUB_WORKSPACE\crash_dumps",
"$env:LOCALAPPDATA\CrashDumps",
"$env:TEMP",
"$env:GITHUB_WORKSPACE",
"$env:USERPROFILE\AppData\Local\CrashDumps",
"C:\Windows\System32\config\systemprofile\AppData\Local\CrashDumps"
)
$dumps = @()
foreach ($path in $searchPaths) {
if (Test-Path $path) {
Write-Host "Searching in: $path"
$found = Get-ChildItem "$path\*.dmp" -ErrorAction SilentlyContinue | Where-Object {
$_.CreationTime -gt $runStartTime
}
if ($found) {
$dumps += $found
Write-Host " Found $($found.Count) dump(s) from this CI run"
}
}
}
if ($dumps) {
Write-Host "Found $($dumps.Count) crash dump(s)"
# Install debugging tools if not present
$cdbPath = "C:\Program Files (x86)\Windows Kits\10\Debuggers\x64\cdb.exe"
if (-not (Test-Path $cdbPath)) {
Write-Host "Installing Windows Debugging Tools..."
$url = "https://go.microsoft.com/fwlink/?linkid=2237387"
Invoke-WebRequest -Uri $url -OutFile winsdksetup.exe
Start-Process -Wait winsdksetup.exe -ArgumentList "/features OptionId.WindowsDesktopDebuggers /quiet"
}
foreach ($dump in $dumps) {
Write-Host "`n=================================="
Write-Host "Analyzing crash dump: $($dump.Name)"
Write-Host "Size: $([math]::Round($dump.Length / 1MB, 2)) MB"
Write-Host "Time: $($dump.CreationTime)"
Write-Host "=================================="
# Set symbol path
$env:_NT_SYMBOL_PATH = "srv*C:\symbols*https://msdl.microsoft.com/download/symbols"
# Run analysis
$analysisOutput = & $cdbPath -z $dump.FullName -c "!analyze -v; ~*k; lm; q" 2>&1 | Out-String
# Extract key information
if ($analysisOutput -match "ExceptionCode:\s*([\w]+)") {
Write-Host "Exception Code: $($Matches[1])"
if ($Matches[1] -eq "c0000005") {
Write-Host "Exception Type: ACCESS VIOLATION"
}
}
if ($analysisOutput -match "EXCEPTION_RECORD:\s*(.+)") {
Write-Host "Exception Record: $($Matches[1])"
}
if ($analysisOutput -match "FAULTING_IP:\s*\n(.+)") {
Write-Host "Faulting Instruction: $($Matches[1])"
}
# Save full analysis
$analysisFile = "$($dump.FullName).analysis.txt"
$analysisOutput | Out-File -FilePath $analysisFile
Write-Host "`nFull analysis saved to: $analysisFile"
# Print stack trace section
Write-Host "`n--- Stack Trace Preview ---"
$stackSection = $analysisOutput -split "STACK_TEXT:" | Select-Object -Last 1
$stackLines = $stackSection -split "`n" | Select-Object -First 20
$stackLines | ForEach-Object { Write-Host $_ }
Write-Host "--- End Stack Trace Preview ---"
}
Write-Host "`n⚠ Crash dumps detected! Download the 'crash-dumps' artifact for detailed analysis."
# Copy dumps to workspace for artifact upload
$artifactPath = "$env:GITHUB_WORKSPACE\crash_dumps_collected"
New-Item -ItemType Directory -Force -Path $artifactPath | Out-Null
foreach ($dump in $dumps) {
$destName = "$($dump.Directory.Name)_$($dump.Name)"
Copy-Item $dump.FullName -Destination "$artifactPath\$destName"
if (Test-Path "$($dump.FullName).analysis.txt") {
Copy-Item "$($dump.FullName).analysis.txt" -Destination "$artifactPath\$destName.analysis.txt"
}
}
Write-Host "Copied $($dumps.Count) dump(s) to artifact directory"
} else {
Write-Host "No crash dumps from this CI run found"
}
- name: Upload crash dumps
if: always()
uses: actions/upload-artifact@v4
with:
name: crash-dumps-${{ github.run_id }}-${{ github.run_attempt }}
path: |
crash_dumps_collected/*.dmp
crash_dumps_collected/*.txt
if-no-files-found: ignore
retention-days: 7
- name: Check test results
shell: powershell
working-directory: ${{ inputs.working-directory }}
run: |
# Note: $LASTEXITCODE does not persist across steps; a failing
# `cargo nextest run` already fails the "Run tests" step above, so this
# re-check only fires if a command in this step itself fails.
if ($LASTEXITCODE -ne 0) {
Write-Host "Tests failed with exit code: $LASTEXITCODE"
exit $LASTEXITCODE
}

View File

@@ -1,5 +1,13 @@
Closes #ISSUE
Release Notes:
- N/A *or* Added/Fixed/Improved ...
- Added/Fixed/Improved ... ([#<public_issue_number_if_exists>](https://github.com/zed-industries/zed/issues/<public_issue_number_if_exists>)).
Optionally, include screenshots / media showcasing your addition that can be included in the release notes.
### Or...
Release Notes:
- N/A

View File

@@ -1,23 +0,0 @@
name: Bump collab-staging Tag
on:
schedule:
# Fire every day at 16:00 UTC (At the start of the US workday)
- cron: "0 16 * * *"
jobs:
update-collab-staging-tag:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
fetch-depth: 0
- name: Update collab-staging tag
run: |
git config user.name github-actions
git config user.email github-actions@github.com
git tag -f collab-staging
git push origin collab-staging --force

View File

@@ -14,12 +14,12 @@ concurrency:
jobs:
bump_patch_version:
if: github.repository_owner == 'zed-industries'
runs-on:
- namespace-profile-16x32-ubuntu-2204
- self-hosted
- test
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@v2
with:
ref: ${{ github.event.inputs.branch }}
ssh-key: ${{ secrets.ZED_BOT_DEPLOY_KEY }}
@@ -28,7 +28,7 @@ jobs:
run: |
set -eux
channel="$(cat crates/zed/RELEASE_CHANNEL)"
channel=$(cat crates/zed/RELEASE_CHANNEL)
tag_suffix=""
case $channel in
@@ -42,10 +42,8 @@ jobs:
exit 1
;;
esac
which cargo-set-version > /dev/null || cargo install cargo-edit
output="$(cargo set-version -p zed --bump patch 2>&1 | sed 's/.* //')"
export GIT_COMMITTER_NAME="Zed Bot"
export GIT_COMMITTER_EMAIL="hi@zed.dev"
which cargo-set-version > /dev/null || cargo install cargo-edit --features vendored-openssl
output=$(cargo set-version -p zed --bump patch 2>&1 | sed 's/.* //')
git commit -am "Bump to $output for @$GITHUB_ACTOR" --author "Zed Bot <hi@zed.dev>"
git tag "v${output}${tag_suffix}"
git push origin HEAD "v${output}${tag_suffix}"
git tag v${output}${tag_suffix}
git push origin HEAD v${output}${tag_suffix}

File diff suppressed because it is too large

View File

@@ -1,28 +0,0 @@
name: "Close Stale Issues"
on:
schedule:
- cron: "0 7,9,11 * * 3"
workflow_dispatch:
jobs:
stale:
if: github.repository_owner == 'zed-industries'
runs-on: ubuntu-latest
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
stale-issue-message: >
Hi there! 👋
We're working to clean up our issue tracker by closing older issues that might not be relevant anymore. If you are able to reproduce this issue in the latest version of Zed, please let us know by commenting on this issue, and we will keep it open. If you can't reproduce it, feel free to close the issue yourself. Otherwise, we'll close it in 7 days.
Thanks for your help!
close-issue-message: "This issue was closed due to inactivity. If you're still experiencing this problem, please open a new issue with a link to this issue."
days-before-stale: 120
days-before-close: 7
any-of-issue-labels: "bug,panic / crash"
operations-per-run: 1000
ascending: true
enable-statistics: true
stale-issue-label: "stale"

View File

@@ -1,70 +0,0 @@
name: Release Actions
on:
release:
types: [published]
jobs:
discord_release:
if: github.repository_owner == 'zed-industries'
runs-on: ubuntu-latest
steps:
- name: Get release URL
id: get-release-url
run: |
if [ "${{ github.event.release.prerelease }}" == "true" ]; then
URL="https://zed.dev/releases/preview/latest"
else
URL="https://zed.dev/releases/stable/latest"
fi
echo "URL=$URL" >> "$GITHUB_OUTPUT"
- name: Get content
uses: 2428392/gh-truncate-string-action@b3ff790d21cf42af3ca7579146eedb93c8fb0757 # v1.4.1
id: get-content
with:
stringToTruncate: |
📣 Zed [${{ github.event.release.tag_name }}](<${{ steps.get-release-url.outputs.URL }}>) was just released!
${{ github.event.release.body }}
maxLength: 2000
truncationSymbol: "..."
- name: Discord Webhook Action
uses: tsickert/discord-webhook@c840d45a03a323fbc3f7507ac7769dbd91bfb164 # v5.3.0
with:
webhook-url: ${{ secrets.DISCORD_WEBHOOK_URL }}
content: ${{ steps.get-content.outputs.string }}
send_release_notes_email:
if: github.repository_owner == 'zed-industries' && !github.event.release.prerelease
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
fetch-depth: 0
- name: Check if release was promoted from preview
id: check-promotion-from-preview
run: |
VERSION="${{ github.event.release.tag_name }}"
PREVIEW_TAG="${VERSION}-pre"
if git rev-parse "$PREVIEW_TAG" > /dev/null 2>&1; then
echo "was_promoted_from_preview=true" >> "$GITHUB_OUTPUT"
else
echo "was_promoted_from_preview=false" >> "$GITHUB_OUTPUT"
fi
- name: Send release notes email
if: steps.check-promotion-from-preview.outputs.was_promoted_from_preview == 'true'
run: |
TAG="${{ github.event.release.tag_name }}"
cat << 'EOF' > release_body.txt
${{ github.event.release.body }}
EOF
jq -n --arg tag "$TAG" --rawfile body release_body.txt '{version: $tag, markdown_body: $body}' \
> release_data.json
curl -X POST "https://zed.dev/api/send_release_notes_email" \
-H "Authorization: Bearer ${{ secrets.RELEASE_NOTES_API_TOKEN }}" \
-H "Content-Type: application/json" \
-d @release_data.json

View File

@@ -1,25 +0,0 @@
name: Update All Top Ranking Issues
on:
schedule:
- cron: "0 */12 * * *"
workflow_dispatch:
jobs:
update_top_ranking_issues:
runs-on: ubuntu-latest
if: github.repository == 'zed-industries/zed'
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- name: Set up uv
uses: astral-sh/setup-uv@caf0cab7a618c569241d31dcd442f54681755d39 # v3
with:
version: "latest"
enable-cache: true
cache-dependency-glob: "script/update_top_ranking_issues/pyproject.toml"
- name: Install Python 3.13
run: uv python install 3.13
- name: Install dependencies
run: uv sync --project script/update_top_ranking_issues -p 3.13
- name: Run script
run: uv run --project script/update_top_ranking_issues script/update_top_ranking_issues/main.py --github-token ${{ secrets.GITHUB_TOKEN }} --issue-reference-number 5393

View File

@@ -1,25 +0,0 @@
name: Update Weekly Top Ranking Issues
on:
schedule:
- cron: "0 15 * * *"
workflow_dispatch:
jobs:
update_top_ranking_issues:
runs-on: ubuntu-latest
if: github.repository == 'zed-industries/zed'
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- name: Set up uv
uses: astral-sh/setup-uv@caf0cab7a618c569241d31dcd442f54681755d39 # v3
with:
version: "latest"
enable-cache: true
cache-dependency-glob: "script/update_top_ranking_issues/pyproject.toml"
- name: Install Python 3.13
run: uv python install 3.13
- name: Install dependencies
run: uv sync --project script/update_top_ranking_issues -p 3.13
- name: Run script
run: uv run --project script/update_top_ranking_issues script/update_top_ranking_issues/main.py --github-token ${{ secrets.GITHUB_TOKEN }} --issue-reference-number 6952 --query-day-interval 7

View File

@@ -11,18 +11,17 @@ on:
jobs:
danger:
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- uses: actions/checkout@v4
- uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2 # v4.0.0
- uses: pnpm/action-setup@v3
with:
version: 9
- name: Setup Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
uses: actions/setup-node@v4
with:
node-version: "20"
cache: "pnpm"

View File

@@ -8,52 +8,49 @@ on:
jobs:
deploy-docs:
name: Deploy Docs
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-16x32-ubuntu-2204
runs-on: ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@v4
with:
clean: false
- name: Set up default .cargo/config.toml
run: cp ./.cargo/collab-config.toml ./.cargo/config.toml
- name: Setup mdBook
uses: peaceiris/actions-mdbook@v2
with:
mdbook-version: "0.4.37"
- name: Build docs
uses: ./.github/actions/build_docs
- name: Build book
run: |
set -euo pipefail
mkdir -p target/deploy
mdbook build ./docs --dest-dir=../target/deploy/docs/
- name: Deploy Docs
uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3
uses: cloudflare/wrangler-action@v3
with:
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
command: pages deploy target/deploy --project-name=docs
- name: Deploy Install
uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3
uses: cloudflare/wrangler-action@v3
with:
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
command: r2 object put -f script/install.sh zed-open-source-website-assets/install.sh
- name: Deploy Docs Workers
uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3
uses: cloudflare/wrangler-action@v3
with:
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
command: deploy .cloudflare/docs-proxy/src/worker.js
- name: Deploy Install Workers
uses: cloudflare/wrangler-action@da0e0dfe58b7a431659754fdf3f186c529afbe65 # v3
uses: cloudflare/wrangler-action@v3
with:
apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}
accountId: ${{ secrets.CLOUDFLARE_ACCOUNT_ID }}
command: deploy .cloudflare/docs-proxy/src/worker.js
- name: Preserve Wrangler logs
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4
if: always()
with:
name: wrangler_logs
path: /home/runner/.config/.wrangler/logs/

View File

@@ -8,17 +8,17 @@ on:
env:
DOCKER_BUILDKIT: 1
DIGITALOCEAN_ACCESS_TOKEN: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
jobs:
style:
name: Check formatting and Clippy lints
if: github.repository_owner == 'zed-industries'
runs-on:
- self-hosted
- macOS
- test
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@v4
with:
clean: false
fetch-depth: 0
@@ -27,17 +27,17 @@ jobs:
uses: ./.github/actions/check_style
- name: Run clippy
run: ./script/clippy
run: cargo xtask clippy
tests:
name: Run tests
runs-on:
- self-hosted
- macOS
- test
needs: style
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@v4
with:
clean: false
fetch-depth: 0
@@ -45,7 +45,7 @@ jobs:
- name: Install cargo nextest
shell: bash -euxo pipefail {0}
run: |
cargo install cargo-nextest --locked
cargo install cargo-nextest
- name: Limit target directory size
shell: bash -euxo pipefail {0}
@@ -61,30 +61,25 @@ jobs:
- style
- tests
runs-on:
- namespace-profile-16x32-ubuntu-2204
- self-hosted
- deploy
steps:
- name: Install doctl
uses: digitalocean/action-doctl@v2
with:
token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH
- name: Sign into DigitalOcean docker registry
run: doctl registry login
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@v4
with:
clean: false
- name: Build docker image
run: |
docker build -f Dockerfile-collab \
--build-arg "GITHUB_SHA=$GITHUB_SHA" \
--tag "registry.digitalocean.com/zed/collab:$GITHUB_SHA" \
.
run: docker build . --build-arg GITHUB_SHA=$GITHUB_SHA --tag registry.digitalocean.com/zed/collab:$GITHUB_SHA
- name: Publish docker image
run: docker push "registry.digitalocean.com/zed/collab:${GITHUB_SHA}"
run: docker push registry.digitalocean.com/zed/collab:${GITHUB_SHA}
- name: Prune Docker system
run: docker system prune --filter 'until=72h' -f
@@ -94,19 +89,10 @@ jobs:
needs:
- publish
runs-on:
- namespace-profile-16x32-ubuntu-2204
- self-hosted
- deploy
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Install doctl
uses: digitalocean/action-doctl@v2
with:
token: ${{ secrets.DIGITALOCEAN_ACCESS_TOKEN }}
- name: Sign into Kubernetes
run: doctl kubernetes cluster kubeconfig save --expiry-seconds 600 ${{ secrets.CLUSTER_NAME }}
@@ -131,20 +117,17 @@ jobs:
source script/lib/deploy-helpers.sh
export_vars_for_environment $ZED_KUBE_NAMESPACE
ZED_DO_CERTIFICATE_ID="$(doctl compute certificate list --format ID --no-header)"
export ZED_DO_CERTIFICATE_ID
export ZED_DO_CERTIFICATE_ID=$(doctl compute certificate list --format ID --no-header)
export ZED_IMAGE_ID="registry.digitalocean.com/zed/collab:${GITHUB_SHA}"
export ZED_SERVICE_NAME=collab
export ZED_LOAD_BALANCER_SIZE_UNIT=$ZED_COLLAB_LOAD_BALANCER_SIZE_UNIT
export DATABASE_MAX_CONNECTIONS=850
envsubst < crates/collab/k8s/collab.template.yml | kubectl apply -f -
kubectl -n "$ZED_KUBE_NAMESPACE" rollout status deployment/$ZED_SERVICE_NAME --watch
echo "deployed ${ZED_SERVICE_NAME} to ${ZED_KUBE_NAMESPACE}"
export ZED_SERVICE_NAME=api
export ZED_LOAD_BALANCER_SIZE_UNIT=$ZED_API_LOAD_BALANCER_SIZE_UNIT
export DATABASE_MAX_CONNECTIONS=60
envsubst < crates/collab/k8s/collab.template.yml | kubectl apply -f -
kubectl -n "$ZED_KUBE_NAMESPACE" rollout status deployment/$ZED_SERVICE_NAME --watch
echo "deployed ${ZED_SERVICE_NAME} to ${ZED_KUBE_NAMESPACE}"

View File

@@ -1,71 +0,0 @@
name: Run Agent Eval
on:
schedule:
- cron: "0 0 * * *"
pull_request:
branches:
- "**"
types: [synchronize, reopened, labeled]
workflow_dispatch:
concurrency:
# Allow only one workflow per any non-`main` branch.
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_EVAL_TELEMETRY: 1
jobs:
run_eval:
timeout-minutes: 60
name: Run Agent Eval
if: >
github.repository_owner == 'zed-industries' &&
(github.event_name != 'pull_request' || contains(github.event.pull_request.labels.*.name, 'run-eval'))
runs-on:
- namespace-profile-16x32-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"
- name: Install Linux dependencies
run: ./script/linux
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: Compile eval
run: cargo build --package=eval
- name: Run eval
run: cargo run --package=eval -- --repetitions=8 --concurrency=1
# Since the Linux runner is not stateful, in theory there is no need to do this cleanup.
# But, to avoid potential issues in the future if we choose to use a stateful Linux runner and forget to add code
# to clean up the config file, I've included the cleanup code here as a precaution.
# While it's not strictly necessary at this moment, I believe it's better to err on the side of caution.
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo

View File

@@ -1,33 +0,0 @@
name: Issue Response
on:
schedule:
- cron: "0 12 * * 2"
workflow_dispatch:
jobs:
issue-response:
if: github.repository_owner == 'zed-industries'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- uses: pnpm/action-setup@fe02b34f77f8bc703788d5817da081398fad5dd2 # v4.0.0
with:
version: 9
- name: Setup Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
with:
node-version: "20"
cache: "pnpm"
cache-dependency-path: "script/issue_response/pnpm-lock.yaml"
- run: pnpm install --dir script/issue_response
- name: Run Issue Response
run: pnpm run --dir script/issue_response start
env:
ISSUE_RESPONSE_GITHUB_TOKEN: ${{ secrets.ISSUE_RESPONSE_GITHUB_TOKEN }}
SLACK_ISSUE_RESPONSE_WEBHOOK_URL: ${{ secrets.SLACK_ISSUE_RESPONSE_WEBHOOK_URL }}

View File

@@ -1,69 +0,0 @@
name: "Nix build"
on:
workflow_call:
inputs:
flake-output:
type: string
default: "default"
cachix-filter:
type: string
default: ""
jobs:
nix-build:
timeout-minutes: 60
name: (${{ matrix.system.os }}) Nix Build
continue-on-error: true # TODO: remove when we want this to start blocking CI
strategy:
fail-fast: false
matrix:
system:
- os: x86 Linux
runner: namespace-profile-16x32-ubuntu-2204
install_nix: true
- os: arm Mac
runner: [macOS, ARM64, test]
install_nix: false
if: github.repository_owner == 'zed-industries'
runs-on: ${{ matrix.system.runner }}
env:
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON: ${{ secrets.ZED_CLOUD_PROVIDER_ADDITIONAL_MODELS_JSON }}
GIT_LFS_SKIP_SMUDGE: 1 # breaks the livekit rust sdk examples which we don't actually depend on
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
# on our macs we manually install nix. for some reason the cachix action is running
# under a non-login /bin/bash shell which doesn't source the proper script to add the
# nix profile to PATH, so we manually add them here
- name: Set path
if: ${{ ! matrix.system.install_nix }}
run: |
echo "/nix/var/nix/profiles/default/bin" >> "$GITHUB_PATH"
echo "/Users/administrator/.nix-profile/bin" >> "$GITHUB_PATH"
- uses: cachix/install-nix-action@02a151ada4993995686f9ed4f1be7cfbb229e56f # v31
if: ${{ matrix.system.install_nix }}
with:
github_access_token: ${{ secrets.GITHUB_TOKEN }}
- uses: cachix/cachix-action@0fc020193b5a1fa3ac4575aa3a7d3aa6a35435ad # v16
with:
name: zed
authToken: "${{ secrets.CACHIX_AUTH_TOKEN }}"
pushFilter: "${{ inputs.cachix-filter }}"
cachixArgs: "-v"
- run: nix build .#${{ inputs.flake-output }} -L --accept-flake-config
- name: Limit /nix/store to 50GB on macs
if: ${{ ! matrix.system.install_nix }}
run: |
if [ "$(du -sm /nix/store | cut -f1)" -gt 50000 ]; then
nix-collect-garbage -d || true
fi

View File

@@ -12,20 +12,18 @@ env:
jobs:
publish:
name: Publish zed-extension CLI
if: github.repository_owner == 'zed-industries'
runs-on:
- ubuntu-latest
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@v4
with:
clean: false
- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
uses: swatinem/rust-cache@v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
cache-provider: "github"
- name: Configure linux
shell: bash -euxo pipefail {0}

View File

@@ -18,17 +18,17 @@ env:
jobs:
tests:
name: Run randomized tests
if: github.repository_owner == 'zed-industries'
runs-on:
- namespace-profile-16x32-ubuntu-2204
- self-hosted
- randomized-tests
steps:
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
uses: actions/setup-node@v4
with:
node-version: "18"
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@v4
with:
clean: false

.github/workflows/release_actions.yml
View File

@@ -0,0 +1,32 @@
on:
release:
types: [published]
jobs:
discord_release:
runs-on: ubuntu-latest
steps:
- name: Get release URL
id: get-release-url
run: |
if [ "${{ github.event.release.prerelease }}" == "true" ]; then
URL="https://zed.dev/releases/preview/latest"
else
URL="https://zed.dev/releases/stable/latest"
fi
echo "::set-output name=URL::$URL"
- name: Get content
uses: 2428392/gh-truncate-string-action@v1.3.0
id: get-content
with:
stringToTruncate: |
📣 Zed [${{ github.event.release.tag_name }}](<${{ steps.get-release-url.outputs.URL }}>) was just released!
${{ github.event.release.body }}
maxLength: 2000
truncationSymbol: "..."
- name: Discord Webhook Action
uses: tsickert/discord-webhook@v5.3.0
with:
webhook-url: ${{ secrets.DISCORD_WEBHOOK_URL }}
content: ${{ steps.get-content.outputs.string }}

View File

@@ -12,10 +12,6 @@ env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
ZED_MINIDUMP_ENDPOINT: ${{ secrets.ZED_SENTRY_MINIDUMP_ENDPOINT }}
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
jobs:
style:
@@ -24,10 +20,10 @@ jobs:
if: github.repository_owner == 'zed-industries'
runs-on:
- self-hosted
- macOS
- test
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@v4
with:
clean: false
fetch-depth: 0
@@ -36,72 +32,48 @@ jobs:
uses: ./.github/actions/check_style
- name: Run clippy
run: ./script/clippy
run: cargo xtask clippy
tests:
timeout-minutes: 60
name: Run tests
if: github.repository_owner == 'zed-industries'
runs-on:
- self-hosted
- macOS
- test
needs: style
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@v4
with:
clean: false
- name: Run tests
uses: ./.github/actions/run_tests
windows-tests:
timeout-minutes: 60
name: Run tests on Windows
if: github.repository_owner == 'zed-industries'
runs-on: [self-32vcpu-windows-2022]
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Configure CI
run: |
New-Item -ItemType Directory -Path "./../.cargo" -Force
Copy-Item -Path "./.cargo/ci-config.toml" -Destination "./../.cargo/config.toml"
- name: Run tests
uses: ./.github/actions/run_tests_windows
- name: Limit target directory size
run: ./script/clear-target-dir-if-larger-than.ps1 1024
- name: Clean CI config file
if: always()
run: Remove-Item -Recurse -Path "./../.cargo" -Force -ErrorAction SilentlyContinue
bundle-mac:
timeout-minutes: 60
name: Create a macOS bundle
if: github.repository_owner == 'zed-industries'
runs-on:
- self-mini-macos
- self-hosted
- bundle
needs: tests
env:
MACOS_CERTIFICATE: ${{ secrets.MACOS_CERTIFICATE }}
MACOS_CERTIFICATE_PASSWORD: ${{ secrets.MACOS_CERTIFICATE_PASSWORD }}
APPLE_NOTARIZATION_KEY: ${{ secrets.APPLE_NOTARIZATION_KEY }}
APPLE_NOTARIZATION_KEY_ID: ${{ secrets.APPLE_NOTARIZATION_KEY_ID }}
APPLE_NOTARIZATION_ISSUER_ID: ${{ secrets.APPLE_NOTARIZATION_ISSUER_ID }}
APPLE_NOTARIZATION_USERNAME: ${{ secrets.APPLE_NOTARIZATION_USERNAME }}
APPLE_NOTARIZATION_PASSWORD: ${{ secrets.APPLE_NOTARIZATION_PASSWORD }}
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
steps:
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
uses: actions/setup-node@v4
with:
node-version: "18"
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@v4
with:
clean: false
@@ -112,10 +84,8 @@ jobs:
echo "Publishing version: ${version} on release channel nightly"
echo "nightly" > crates/zed/RELEASE_CHANNEL
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ SECRETS.SENTRY_AUTH_TOKEN }}
- name: Generate license file
run: script/generate-licenses
- name: Create macOS app bundle
run: script/bundle-mac
@@ -123,32 +93,26 @@ jobs:
- name: Upload Zed Nightly
run: script/upload-nightly macos
bundle-linux-x86:
bundle-deb:
timeout-minutes: 60
name: Create a Linux *.tar.gz bundle for x86
name: Create a Linux *.tar.gz bundle
if: github.repository_owner == 'zed-industries'
runs-on:
- namespace-profile-16x32-ubuntu-2004 # ubuntu 20.04 for minimal glibc
- self-hosted
- deploy
needs: tests
env:
DIGITALOCEAN_SPACES_ACCESS_KEY: ${{ secrets.DIGITALOCEAN_SPACES_ACCESS_KEY }}
DIGITALOCEAN_SPACES_SECRET_KEY: ${{ secrets.DIGITALOCEAN_SPACES_SECRET_KEY }}
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
uses: actions/checkout@v4
with:
clean: false
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
- name: Install Linux dependencies
run: ./script/linux && ./script/install-mold 2.34.0
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: Limit target directory size
run: script/clear-target-dir-if-larger-than 100
run: echo "$HOME/.cargo/bin" >> $GITHUB_PATH
- name: Set release channel to nightly
run: |
@@ -157,167 +121,11 @@ jobs:
echo "Publishing version: ${version} on release channel nightly"
echo "nightly" > crates/zed/RELEASE_CHANNEL
- name: Create Linux .tar.gz bundle
run: script/bundle-linux
- name: Upload Zed Nightly
run: script/upload-nightly linux-targz
bundle-linux-arm:
timeout-minutes: 60
name: Create a Linux *.tar.gz bundle for ARM
if: github.repository_owner == 'zed-industries'
runs-on:
- namespace-profile-8x32-ubuntu-2004-arm-m4 # ubuntu 20.04 for minimal glibc
needs: tests
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Install Linux dependencies
run: ./script/linux
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: Limit target directory size
run: script/clear-target-dir-if-larger-than 100
- name: Set release channel to nightly
run: |
set -euo pipefail
version=$(git rev-parse --short HEAD)
echo "Publishing version: ${version} on release channel nightly"
echo "nightly" > crates/zed/RELEASE_CHANNEL
- name: Generate license file
run: script/generate-licenses
- name: Create Linux .tar.gz bundle
run: script/bundle-linux
- name: Upload Zed Nightly
run: script/upload-nightly linux-targz
freebsd:
timeout-minutes: 60
if: false && github.repository_owner == 'zed-industries'
runs-on: github-8vcpu-ubuntu-2404
needs: tests
name: Build Zed on FreeBSD
steps:
- uses: actions/checkout@v4
- name: Build FreeBSD remote-server
id: freebsd-build
uses: vmactions/freebsd-vm@c3ae29a132c8ef1924775414107a97cac042aad5 # v1.2.0
with:
# envs: "MYTOKEN MYTOKEN2"
usesh: true
release: 13.5
copyback: true
prepare: |
pkg install -y \
bash curl jq git \
rustup-init cmake-core llvm-devel-lite pkgconf protobuf # libx11 alsa-lib rust-bindgen-cli
run: |
freebsd-version
sysctl hw.model
sysctl hw.ncpu
sysctl hw.physmem
sysctl hw.usermem
git config --global --add safe.directory /home/runner/work/zed/zed
rustup-init --profile minimal --default-toolchain none -y
. "$HOME/.cargo/env"
./script/bundle-freebsd
mkdir -p out/
mv "target/zed-remote-server-freebsd-x86_64.gz" out/
rm -rf target/
cargo clean
- name: Upload Zed Nightly
run: script/upload-nightly freebsd
bundle-nix:
name: Build and cache Nix package
needs: tests
secrets: inherit
uses: ./.github/workflows/nix.yml
bundle-windows-x64:
timeout-minutes: 60
name: Create a Windows installer
if: github.repository_owner == 'zed-industries'
runs-on: [self-32vcpu-windows-2022]
needs: windows-tests
env:
AZURE_TENANT_ID: ${{ secrets.AZURE_SIGNING_TENANT_ID }}
AZURE_CLIENT_ID: ${{ secrets.AZURE_SIGNING_CLIENT_ID }}
AZURE_CLIENT_SECRET: ${{ secrets.AZURE_SIGNING_CLIENT_SECRET }}
ACCOUNT_NAME: ${{ vars.AZURE_SIGNING_ACCOUNT_NAME }}
CERT_PROFILE_NAME: ${{ vars.AZURE_SIGNING_CERT_PROFILE_NAME }}
ENDPOINT: ${{ vars.AZURE_SIGNING_ENDPOINT }}
FILE_DIGEST: SHA256
TIMESTAMP_DIGEST: SHA256
TIMESTAMP_SERVER: "http://timestamp.acs.microsoft.com"
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Set release channel to nightly
working-directory: ${{ env.ZED_WORKSPACE }}
run: |
$ErrorActionPreference = "Stop"
$version = git rev-parse --short HEAD
Write-Host "Publishing version: $version on release channel nightly"
"nightly" | Set-Content -Path "crates/zed/RELEASE_CHANNEL"
- name: Setup Sentry CLI
uses: matbour/setup-sentry-cli@3e938c54b3018bdd019973689ef984e033b0454b #v2
with:
token: ${{ secrets.SENTRY_AUTH_TOKEN }}
- name: Build Zed installer
working-directory: ${{ env.ZED_WORKSPACE }}
run: script/bundle-windows.ps1
- name: Upload Zed Nightly
working-directory: ${{ env.ZED_WORKSPACE }}
run: script/upload-nightly.ps1 windows
update-nightly-tag:
name: Update nightly tag
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
needs:
- bundle-mac
- bundle-linux-x86
- bundle-linux-arm
- bundle-windows-x64
steps:
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
fetch-depth: 0
- name: Update nightly tag
run: |
if [ "$(git rev-parse nightly)" = "$(git rev-parse HEAD)" ]; then
echo "Nightly tag already points to current commit. Skipping tagging."
exit 0
fi
git config user.name github-actions
git config user.email github-actions@github.com
git tag -f nightly
git push origin nightly --force
- name: Create Sentry release
uses: getsentry/action-release@526942b68292201ac6bbb99b9a0747d4abee354c # v3
env:
SENTRY_ORG: zed-dev
SENTRY_PROJECT: zed
SENTRY_AUTH_TOKEN: ${{ secrets.SENTRY_AUTH_TOKEN }}
with:
environment: production


@@ -1,21 +0,0 @@
name: Script
on:
pull_request:
paths:
- "script/**"
push:
branches:
- main
jobs:
shellcheck:
name: "ShellCheck Scripts"
if: github.repository_owner == 'zed-industries'
runs-on: namespace-profile-2x4-ubuntu-2404
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- name: Shellcheck ./scripts
run: |
./script/shellcheck-scripts error


@@ -1,86 +0,0 @@
name: Run Unit Evals
on:
schedule:
# GitHub might drop jobs at busy times, so we choose a random time in the middle of the night.
- cron: "47 1 * * 2"
workflow_dispatch:
concurrency:
# Allow only one workflow per any non-`main` branch.
group: ${{ github.workflow }}-${{ github.ref_name }}-${{ github.ref_name == 'main' && github.sha || 'anysha' }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
CARGO_INCREMENTAL: 0
RUST_BACKTRACE: 1
ZED_CLIENT_CHECKSUM_SEED: ${{ secrets.ZED_CLIENT_CHECKSUM_SEED }}
jobs:
unit_evals:
if: github.repository_owner == 'zed-industries'
timeout-minutes: 60
name: Run unit evals
runs-on:
- namespace-profile-16x32-ubuntu-2204
steps:
- name: Add Rust to the PATH
run: echo "$HOME/.cargo/bin" >> "$GITHUB_PATH"
- name: Checkout repo
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
clean: false
- name: Cache dependencies
uses: swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2
with:
save-if: ${{ github.ref == 'refs/heads/main' }}
# cache-provider: "buildjet"
- name: Install Linux dependencies
run: ./script/linux
- name: Configure CI
run: |
mkdir -p ./../.cargo
cp ./.cargo/ci-config.toml ./../.cargo/config.toml
- name: Install Rust
shell: bash -euxo pipefail {0}
run: |
cargo install cargo-nextest --locked
- name: Install Node
uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4
with:
node-version: "18"
- name: Limit target directory size
shell: bash -euxo pipefail {0}
run: script/clear-target-dir-if-larger-than 100
- name: Run unit evals
shell: bash -euxo pipefail {0}
run: cargo nextest run --workspace --no-fail-fast --features eval --no-capture -E 'test(::eval_)'
env:
ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
- name: Send failure message to Slack channel if needed
if: ${{ failure() }}
uses: slackapi/slack-github-action@b0fa283ad8fea605de13dc3f449259339835fc52
with:
method: chat.postMessage
token: ${{ secrets.SLACK_APP_ZED_UNIT_EVALS_BOT_TOKEN }}
payload: |
channel: C04UDRNNJFQ
text: "Unit Evals Failed: https://github.com/zed-industries/zed/actions/runs/${{ github.run_id }}"
# Since the Linux runner is not stateful, in theory there is no need to do this cleanup.
# But, to avoid potential issues in the future if we choose to use a stateful Linux runner and forget to add code
# to clean up the config file, I've included the cleanup code here as a precaution.
# While it's not strictly necessary at this moment, I believe it's better to err on the side of caution.
- name: Clean CI config file
if: always()
run: rm -rf ./../.cargo


@@ -0,0 +1,18 @@
on:
schedule:
- cron: "0 */12 * * *"
workflow_dispatch:
jobs:
update_top_ranking_issues:
runs-on: ubuntu-latest
if: github.repository_owner == 'zed-industries'
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.11"
architecture: "x64"
cache: "pip"
- run: pip install -r script/update_top_ranking_issues/requirements.txt
- run: python script/update_top_ranking_issues/main.py --github-token ${{ secrets.GITHUB_TOKEN }} --issue-reference-number 5393


@@ -0,0 +1,18 @@
on:
schedule:
- cron: "0 15 * * *"
workflow_dispatch:
jobs:
update_top_ranking_issues:
runs-on: ubuntu-latest
if: github.repository_owner == 'zed-industries'
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.11"
architecture: "x64"
cache: "pip"
- run: pip install -r script/update_top_ranking_issues/requirements.txt
- run: python script/update_top_ranking_issues/main.py --github-token ${{ secrets.GITHUB_TOKEN }} --issue-reference-number 6952 --query-day-interval 7

.gitignore

@@ -1,39 +1,30 @@
**/*.db
**/cargo-target
**/target
**/venv
**/.direnv
*.wasm
*.xcodeproj
.DS_Store
.blob_store
.build
.envrc
.flatpak-builder
.idea
.netrc
*.pyc
.pytest_cache
.swiftpm
.swiftpm/config/registries.json
.swiftpm/xcode/package.xcworkspace/contents.xcworkspacedata
.venv
.vscode
.wrangler
/assets/*licenses.*
/crates/collab/seed.json
/crates/theme/schemas/theme.json
/crates/zed/resources/flatpak/flatpak-cargo-sources.json
/dev.zed.Zed*.json
/node_modules/
**/target
**/cargo-target
/zed.xcworkspace
.DS_Store
/plugins/bin
/script/node_modules
/snap
/zed.xcworkspace
DerivedData/
/crates/theme/schemas/theme.json
/crates/collab/seed.json
/crates/zed/resources/flatpak/flatpak-cargo-sources.json
/dev.zed.Zed*.json
/assets/*licenses.md
**/venv
.build
*.wasm
Packages
*.xcodeproj
xcuserdata/
# Don't commit any secrets to the repo.
.env
.env.secret.toml
DerivedData/
.swiftpm/config/registries.json
.swiftpm/xcode/package.xcworkspace/contents.xcworkspacedata
.netrc
.swiftpm
**/*.db
.pytest_cache
.venv
.blob_store
.vscode
.wrangler
.flatpak-builder

.mailmap

@@ -7,53 +7,18 @@
# Reference: https://git-scm.com/docs/gitmailmap
# Keep these entries sorted alphabetically.
# In Zed: `editor: sort lines case insensitive`
# In Zed: `editor: sort lines case sensitive`
Agus Zubiaga <agus@zed.dev>
Agus Zubiaga <agus@zed.dev> <hi@aguz.me>
Alex Viscreanu <alexviscreanu@gmail.com>
Alex Viscreanu <alexviscreanu@gmail.com> <alexandru.viscreanu@kiwi.com>
Alexander Mankuta <alex@pointless.one>
Alexander Mankuta <alex@pointless.one> <alex+github@pointless.one>
amtoaer <amtoaer@gmail.com>
amtoaer <amtoaer@gmail.com> <amtoaer@outlook.com>
Andrei Zvonimir Crnković <andrei@0x7f.dev>
Andrei Zvonimir Crnković <andrei@0x7f.dev> <andreicek@0x7f.dev>
Angelk90 <angelo.k90@hotmail.it>
Angelk90 <angelo.k90@hotmail.it> <20476002+Angelk90@users.noreply.github.com>
Antonio Scandurra <me@as-cii.com>
Antonio Scandurra <me@as-cii.com> <antonio@zed.dev>
Ben Kunkle <ben@zed.dev>
Ben Kunkle <ben@zed.dev> <ben.kunkle@gmail.com>
Bennet Bo Fenner <bennet@zed.dev>
Bennet Bo Fenner <bennet@zed.dev> <53836821+bennetbo@users.noreply.github.com>
Bennet Bo Fenner <bennet@zed.dev> <bennetbo@gmx.de>
Boris Cherny <boris@anthropic.com>
Boris Cherny <boris@anthropic.com> <boris@performancejs.com>
Brian Tan <brian.tan88@gmail.com>
Chris Hayes <chris+git@hayes.software>
Christian Bergschneider <christian.bergschneider@gmx.de>
Christian Bergschneider <christian.bergschneider@gmx.de> <magiclake@gmx.de>
Conrad Irwin <conrad@zed.dev>
Conrad Irwin <conrad@zed.dev> <conrad.irwin@gmail.com>
Dairon Medina <dairon.medina@gmail.com>
Danilo Leal <danilo@zed.dev>
Danilo Leal <danilo@zed.dev> <67129314+danilo-leal@users.noreply.github.com>
Edwin Aronsson <75266237+4teapo@users.noreply.github.com>
Elvis Pranskevichus <elvis@geldata.com>
Elvis Pranskevichus <elvis@geldata.com> <elvis@magic.io>
Evren Sen <nervenes@icloud.com>
Evren Sen <nervenes@icloud.com> <146845123+evrensen467@users.noreply.github.com>
Evren Sen <nervenes@icloud.com> <146845123+evrsen@users.noreply.github.com>
Fernando Tagawa <tagawafernando@gmail.com>
Fernando Tagawa <tagawafernando@gmail.com> <fernando.tagawa.gamail.com@gmail.com>
Finn Evers <dev@bahn.sh>
Finn Evers <dev@bahn.sh> <75036051+MrSubidubi@users.noreply.github.com>
Finn Evers <dev@bahn.sh> <finn.evers@outlook.de>
Gowtham K <73059450+dovakin0007@users.noreply.github.com>
Greg Morenz <greg-morenz@droid.cafe>
Greg Morenz <greg-morenz@droid.cafe> <morenzg@gmail.com>
Ihnat Aŭtuška <autushka.ihnat@gmail.com>
Ivan Žužak <izuzak@gmail.com>
Ivan Žužak <izuzak@gmail.com> <ivan.zuzak@github.com>
Joseph T. Lyons <JosephTLyons@gmail.com>
@@ -68,80 +33,27 @@ Kirill Bulatov <kirill@zed.dev>
Kirill Bulatov <kirill@zed.dev> <mail4score@gmail.com>
Kyle Caverly <kylebcaverly@gmail.com>
Kyle Caverly <kylebcaverly@gmail.com> <kyle@zed.dev>
Lilith Iris <itslirissama@gmail.com>
Lilith Iris <itslirissama@gmail.com> <83819417+Irilith@users.noreply.github.com>
LoganDark <contact@logandark.mozmail.com>
LoganDark <contact@logandark.mozmail.com> <git@logandark.mozmail.com>
LoganDark <contact@logandark.mozmail.com> <github@logandark.mozmail.com>
Marko Kungla <marko.kungla@gmail.com>
Marko Kungla <marko.kungla@gmail.com> <marko@mkungla.dev>
Marshall Bowers <git@maxdeviant.com>
Marshall Bowers <git@maxdeviant.com> <elliott.codes@gmail.com>
Marshall Bowers <git@maxdeviant.com> <marshall@zed.dev>
Matt Fellenz <matt@felle.nz>
Matt Fellenz <matt@felle.nz> <matt+github@felle.nz>
Marshall Bowers <elliott.codes@gmail.com>
Marshall Bowers <elliott.codes@gmail.com> <marshall@zed.dev>
Max Brunsfeld <maxbrunsfeld@gmail.com>
Max Brunsfeld <maxbrunsfeld@gmail.com> <max@zed.dev>
Max Linke <maxlinke88@gmail.com>
Max Linke <maxlinke88@gmail.com> <kain88-de@users.noreply.github.com>
Michael Sloan <michael@zed.dev>
Michael Sloan <michael@zed.dev> <mgsloan@gmail.com>
Michael Sloan <michael@zed.dev> <mgsloan@google.com>
Mikayla Maki <mikayla@zed.dev>
Mikayla Maki <mikayla@zed.dev> <mikayla.c.maki@gmail.com>
Mikayla Maki <mikayla@zed.dev> <mikayla.c.maki@icloud.com>
Morgan Krey <morgan@zed.dev>
Muhammad Talal Anwar <mail@talal.io>
Muhammad Talal Anwar <mail@talal.io> <talalanwar@outlook.com>
Nate Butler <iamnbutler@gmail.com>
Nate Butler <iamnbutler@gmail.com> <nate@zed.dev>
Nathan Sobo <nathan@zed.dev>
Nathan Sobo <nathan@zed.dev> <nathan@warp.dev>
Nathan Sobo <nathan@zed.dev> <nathansobo@gmail.com>
Nigel Jose <nigelmjose@gmail.com>
Nigel Jose <nigelmjose@gmail.com> <nigel.jose@student.manchester.ac.uk>
Peter Tripp <peter@zed.dev>
Peter Tripp <peter@zed.dev> <petertripp@gmail.com>
Petros Amoiridis <petros@hey.com>
Petros Amoiridis <petros@hey.com> <petros@zed.dev>
Piotr Osiewicz <piotr@zed.dev>
Piotr Osiewicz <piotr@zed.dev> <24362066+osiewicz@users.noreply.github.com>
Pocæus <github@pocaeus.com>
Pocæus <github@pocaeus.com> <pseudomata@proton.me>
Rashid Almheiri <r.muhairi@pm.me>
Rashid Almheiri <r.muhairi@pm.me> <69181766+huwaireb@users.noreply.github.com>
Richard Feldman <oss@rtfeldman.com>
Richard Feldman <oss@rtfeldman.com> <richard@zed.dev>
Robert Clover <git@clo4.net>
Robert Clover <git@clo4.net> <robert@clover.gdn>
Roy Williams <roy.williams.iii@gmail.com>
Roy Williams <roy.williams.iii@gmail.com> <roy@anthropic.com>
Sebastijan Kelnerič <sebastijan.kelneric@sebba.dev>
Sebastijan Kelnerič <sebastijan.kelneric@sebba.dev> <sebastijan.kelneric@vichava.com>
Sergey Onufrienko <sergey@onufrienko.com>
Shish <webmaster@shishnet.org>
Shish <webmaster@shishnet.org> <shish@shishnet.org>
Smit Barmase <0xtimsb@gmail.com>
Smit Barmase <0xtimsb@gmail.com> <smit@zed.dev>
Thomas <github.thomaub@gmail.com>
Thomas <github.thomaub@gmail.com> <thomas.aubry94@gmail.com>
Thomas <github.thomaub@gmail.com> <thomas.aubry@paylead.fr>
Thomas Heartman <thomasheartman+github@gmail.com>
Thomas Heartman <thomasheartman+github@gmail.com> <thomas@getunleash.io>
Thomas Mickley-Doyle <tmickleydoyle@gmail.com>
Thomas Mickley-Doyle <tmickleydoyle@gmail.com> <thomas@zed.dev>
Thorben Kröger <dev@thorben.net>
Thorben Kröger <dev@thorben.net> <thorben.kroeger@hexagon.com>
Thorsten Ball <mrnugget@gmail.com>
Thorsten Ball <mrnugget@gmail.com> <me@thorstenball.com>
Thorsten Ball <mrnugget@gmail.com> <thorsten@zed.dev>
Tristan Hume <tris.hume@gmail.com>
Tristan Hume <tris.hume@gmail.com> <tristan@anthropic.com>
Uladzislau Kaminski <i@uladkaminski.com>
Uladzislau Kaminski <i@uladkaminski.com> <uladzislau_kaminski@epam.com>
Vitaly Slobodin <vitaliy.slobodin@gmail.com>
Vitaly Slobodin <vitaliy.slobodin@gmail.com> <vitaly_slobodin@fastmail.com>
Will Bradley <williambbradley@gmail.com>
Will Bradley <williambbradley@gmail.com> <will@zed.dev>
WindSoilder <WindSoilder@outlook.com>
张小白 <364772080@qq.com>
Thorsten Ball <thorsten@zed.dev>
Thorsten Ball <thorsten@zed.dev> <me@thorstenball.com>
Thorsten Ball <thorsten@zed.dev> <mrnugget@gmail.com>


@@ -1,3 +0,0 @@
{
"printWidth": 120
}

.rules

@@ -1,130 +0,0 @@
# Rust coding guidelines
* Prioritize code correctness and clarity. Speed and efficiency are secondary priorities unless otherwise specified.
* Do not write organizational comments or comments that summarize the code. Comments should only be written to explain *why* the code is written the way it is, in cases where the reason is tricky or non-obvious.
* Prefer implementing functionality in existing files unless it is a new logical component. Avoid creating many small files.
* Avoid using functions that panic, like `unwrap()`; instead, use mechanisms like `?` to propagate errors.
* Be careful with operations like indexing which may panic if the indexes are out of bounds.
* Never silently discard errors with `let _ =` on fallible operations. Always handle errors appropriately (see the sketch after this list):
- Propagate errors with `?` when the calling function should handle them
- Use `.log_err()` or similar when you need to ignore errors but want visibility
- Use explicit error handling with `match` or `if let Err(...)` when you need custom logic
- Example: avoid `let _ = client.request(...).await?;` - use `client.request(...).await?;` instead
* When implementing async operations that may fail, ensure errors propagate to the UI layer so users get meaningful feedback.
* Never create files with `mod.rs` paths - prefer `src/some_module.rs` instead of `src/some_module/mod.rs`.
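A minimal sketch of the error-handling guidance above (`fetch_thing` and `use_thing` are hypothetical names):
```
use anyhow::Result;

// Hypothetical fallible operation.
fn fetch_thing() -> Result<String> {
    Ok("thing".into())
}

fn use_thing() -> Result<()> {
    // Propagate with `?` when the caller should handle the error.
    let thing = fetch_thing()?;
    // Use explicit handling when custom logic is needed.
    match fetch_thing() {
        Ok(other) => println!("{thing} and {other}"),
        Err(error) => eprintln!("fetching failed: {error}"),
    }
    Ok(())
}
```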
# GPUI
GPUI is a UI framework which also provides primitives for state and concurrency management.
## Context
Context types allow interaction with global state, windows, entities, and system services. They are typically passed to functions as the argument named `cx`. When a function takes callbacks, they come after the `cx` parameter.
* `App` is the root context type, providing access to global state and read and update of entities.
* `Context<T>` is provided when updating an `Entity<T>`. This context dereferences into `App`, so functions which take `&App` can also take `&Context<T>`.
* `AsyncApp` and `AsyncWindowContext` are provided by `cx.spawn` and `cx.spawn_in`. These can be held across await points.
## `Window`
`Window` provides access to the state of an application window. It is passed to functions as an argument named `window` and comes before `cx` when present. It is used for managing focus, dispatching actions, directly drawing, getting user input state, etc.
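A sketch of these conventions (`Thing` and `announce` are hypothetical): `window` comes before `cx`, callbacks come after `cx`, and a `Context<T>` can be passed where `&mut App` is expected because it dereferences into `App`.
```
use gpui::{App, Context, Window};

struct Thing;

fn announce(
    window: &mut Window,
    cx: &mut Context<Thing>,
    on_done: impl FnOnce(&mut Window, &mut App),
) {
    cx.notify();
    // `Context<Thing>` dereferences into `App`, so it coerces here.
    on_done(window, cx);
}
```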
## Entities
An `Entity<T>` is a handle to state of type `T`. With `thing: Entity<T>`:
* `thing.entity_id()` returns `EntityId`
* `thing.downgrade()` returns `WeakEntity<T>`
* `thing.read(cx: &App)` returns `&T`.
* `thing.read_with(cx, |thing: &T, cx: &App| ...)` returns the closure's return value.
* `thing.update(cx, |thing: &mut T, cx: &mut Context<T>| ...)` allows the closure to mutate the state, and provides a `Context<T>` for interacting with the entity. It returns the closure's return value.
* `thing.update_in(cx, |thing: &mut T, window: &mut Window, cx: &mut Context<T>| ...)` takes an `AsyncWindowContext` or `VisualTestContext`. It's the same as `update` while also providing the `Window`.
Within the closures, the inner `cx` provided to the closure must be used instead of the outer `cx` to avoid issues with multiple borrows.
Trying to update an entity while it's already being updated must be avoided as this will cause a panic.
When `read_with`, `update`, or `update_in` are used with an async context, the closure's return value is wrapped in an `anyhow::Result`.
`WeakEntity<T>` is a weak handle. It has `read_with`, `update`, and `update_in` methods that work the same, but always return an `anyhow::Result` so that they can fail if the entity no longer exists. This can be useful to avoid memory leaks - if entities have mutually recursive handles to each other they will never be dropped.
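A sketch of reading and updating an entity (`Counter` and `bump` are hypothetical):
```
use gpui::{App, Context, Entity};

struct Counter {
    count: usize,
}

fn bump(counter: &Entity<Counter>, cx: &mut App) -> usize {
    // Read a field through a shared reference to the state.
    let before = counter.read(cx).count;
    // Mutate the state; `update` returns the closure's return value.
    counter.update(cx, |counter: &mut Counter, cx: &mut Context<Counter>| {
        counter.count = before + 1;
        cx.notify();
        counter.count
    })
}
```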
## Concurrency
All use of entities and UI rendering occurs on a single foreground thread.
`cx.spawn(async move |cx| ...)` runs an async closure on the foreground thread. Within the closure, `cx` is an async context like `AsyncApp` or `AsyncWindowContext`.
When the outer cx is a `Context<T>`, the use of `spawn` instead looks like `cx.spawn(async move |handle, cx| ...)`, where `handle: WeakEntity<T>`.
To do work on other threads, `cx.background_spawn(async move { ... })` is used. Often this background task is awaited on by a foreground task which uses the results to update state.
Both `cx.spawn` and `cx.background_spawn` return a `Task<R>`, which is a future that can be awaited upon. If this task is dropped, then its work is cancelled. To prevent this one of the following must be done:
* Awaiting the task in some other async context.
* Detaching the task via `task.detach()` or `task.detach_and_log_err(cx)`, allowing it to run indefinitely.
* Storing the task in a field, if the work should be halted when the struct is dropped.
A task which doesn't do anything but provide a value can be created with `Task::ready(value)`.
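A sketch combining these pieces (`Search` and `expensive_scan` are hypothetical): a foreground task awaits a background computation, then updates the entity through its weak handle.
```
use gpui::Context;

struct Search {
    results: Vec<String>,
}

// Stand-in for real work that should run off the foreground thread.
fn expensive_scan() -> Vec<String> {
    Vec::new()
}

impl Search {
    fn refresh(&mut self, cx: &mut Context<Self>) {
        cx.spawn(async move |this, cx| {
            let results = cx
                .background_spawn(async move { expensive_scan() })
                .await;
            // `this` is a `WeakEntity<Search>`, so `update` returns a Result.
            this.update(cx, |search, cx| {
                search.results = results;
                cx.notify();
            })
        })
        .detach_and_log_err(cx);
    }
}
```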
## Elements
The `Render` trait is used to render some state into an element tree that is laid out using flexbox layout. An `Entity<T>` where `T` implements `Render` is sometimes called a "view".
Example:
```
struct TextWithBorder(SharedString);
impl Render for TextWithBorder {
fn render(&mut self, _window: &mut Window, _cx: &mut Context<Self>) -> impl IntoElement {
div().border_1().child(self.0.clone())
}
}
```
Since `impl IntoElement for SharedString` exists, it can be used as an argument to `child`. `SharedString` is used to avoid copying strings, and is either an `&'static str` or `Arc<str>`.
UI components that are constructed just to be turned into elements can instead implement the `RenderOnce` trait, which is similar to `Render`, but its `render` method takes ownership of `self`. Types that implement this trait can use `#[derive(IntoElement)]` to use them directly as children.
The style methods on elements are similar to those used by Tailwind CSS.
If some attributes or children of an element tree are conditional, `.when(condition, |this| ...)` can be used to run the closure only when `condition` is true. Similarly, `.when_some(option, |this, value| ...)` runs the closure when the `Option` has a value.
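A sketch of conditional styling using `.when` (`StatusLine` is a hypothetical view):
```
use gpui::prelude::*;
use gpui::{div, Context, IntoElement, Render, SharedString, Window};

struct StatusLine {
    message: SharedString,
    error: bool,
}

impl Render for StatusLine {
    fn render(&mut self, _window: &mut Window, _cx: &mut Context<Self>) -> impl IntoElement {
        div()
            // Only draw the border when the message is an error.
            .when(self.error, |this| this.border_1())
            .child(self.message.clone())
    }
}
```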
## Input events
Input event handlers can be registered on an element via methods like `.on_click(|event, window, cx: &mut App| ...)`.
Often event handlers will want to update the entity that's in the current `Context<T>`. The `cx.listener` method provides this - its use looks like `.on_click(cx.listener(|this: &mut T, event, window, cx: &mut Context<T>| ...))`.
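A sketch of the listener pattern, reusing the hypothetical `StatusLine` view from the previous sketch (the `.id(...)` call makes the element stateful so it can receive clicks):
```
use gpui::prelude::*;
use gpui::{div, Context, IntoElement};

impl StatusLine {
    fn render_dismiss_button(&self, cx: &mut Context<Self>) -> impl IntoElement {
        div()
            .id("dismiss")
            .on_click(cx.listener(|this: &mut StatusLine, _event, _window, cx| {
                // Mutate the view's state, then request a rerender.
                this.error = false;
                cx.notify();
            }))
            .child("Dismiss")
    }
}
```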
## Actions
Actions are dispatched via user keyboard interaction or in code via `window.dispatch_action(SomeAction.boxed_clone(), cx)` or `focus_handle.dispatch_action(&SomeAction, window, cx)`.
Actions with no data are defined with the `actions!(some_namespace, [SomeAction, AnotherAction])` macro call. Otherwise the `Action` derive macro is used. Doc comments on actions are displayed to the user.
Action handlers can be registered on an element via the event handler `.on_action(|action, window, cx| ...)`. Like other event handlers, this is often used with `cx.listener`.
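A sketch of a data-free action and its handler, again reusing the hypothetical `StatusLine` (the `status_line` namespace and `ClearStatus` action are made up):
```
use gpui::prelude::*;
use gpui::{actions, div, Context, IntoElement};

actions!(status_line, [ClearStatus]);

impl StatusLine {
    fn render_body(&self, cx: &mut Context<Self>) -> impl IntoElement {
        div()
            .key_context("StatusLine")
            .on_action(cx.listener(|this: &mut StatusLine, _: &ClearStatus, _window, cx| {
                this.error = false;
                cx.notify();
            }))
    }
}
```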
## Notify
When a view's state has changed in a way that may affect its rendering, it should call `cx.notify()`. This will cause the view to be rerendered. It will also cause any observe callbacks registered for the entity with `cx.observe` to be called.
## Entity events
While updating an entity (`cx: Context<T>`), it can emit an event using `cx.emit(event)`. Entities register which events they can emit by declaring `impl EventEmitter<EventType> for EntityType {}`.
Other entities can then register a callback to handle these events by doing `cx.subscribe(other_entity, |this, other_entity, event, cx| ...)`. This will return a `Subscription` which deregisters the callback when dropped. Typically `cx.subscribe` happens when creating a new entity and the subscriptions are stored in a `_subscriptions: Vec<Subscription>` field.
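A sketch of emitting and subscribing to entity events (`Download`, `DownloadEvent`, and `DownloadList` are hypothetical):
```
use gpui::{Context, Entity, EventEmitter, Subscription};

struct Download;

enum DownloadEvent {
    Finished,
}

// `Download` may now call `cx.emit(DownloadEvent::Finished)` during updates.
impl EventEmitter<DownloadEvent> for Download {}

struct DownloadList {
    _subscriptions: Vec<Subscription>,
}

impl DownloadList {
    fn new(download: Entity<Download>, cx: &mut Context<Self>) -> Self {
        // Keep the `Subscription` in a field so the callback stays registered.
        let subscription = cx.subscribe(&download, |_this, _download, event, cx| match event {
            DownloadEvent::Finished => cx.notify(),
        });
        Self {
            _subscriptions: vec![subscription],
        }
    }
}
```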
## Recent API changes
GPUI has had some changes to its APIs. Always write code using the new APIs:
* `spawn` methods now take async closures (`AsyncFn`), and so should be called like `cx.spawn(async move |cx| ...)`.
* Use `Entity<T>`. This replaces `Model<T>` and `View<T>` which no longer exist and should NEVER be used.
* Use `App` references. This replaces `AppContext` which no longer exists and should NEVER be used.
* Use `Context<T>` references. This replaces `ModelContext<T>` which no longer exists and should NEVER be used.
* `Window` is now passed around explicitly. The new interface adds a `Window` reference parameter to some methods, and adds some new "*_in" methods for plumbing `Window`. The old types `WindowContext` and `ViewContext<T>` should NEVER be used.
## General guidelines
- Use `./script/clippy` instead of `cargo clippy`


@@ -1 +0,0 @@
.rules


@@ -1,20 +0,0 @@
[
{
"label": "Debug Zed (CodeLLDB)",
"adapter": "CodeLLDB",
"build": {
"label": "Build Zed",
"command": "cargo",
"args": ["build"]
}
},
{
"label": "Debug Zed (GDB)",
"adapter": "GDB",
"build": {
"label": "Build Zed",
"command": "cargo",
"args": ["build"]
}
}
]


@@ -14,51 +14,14 @@
},
"JSON": {
"tab_size": 2,
"preferred_line_length": 120,
"formatter": "prettier"
},
"JSONC": {
"tab_size": 2,
"preferred_line_length": 120,
"formatter": "prettier"
},
"JavaScript": {
"tab_size": 2,
"formatter": "prettier"
},
"CSS": {
"tab_size": 2,
"formatter": "prettier"
},
"Rust": {
"tasks": {
"variables": {
"RUST_DEFAULT_PACKAGE_RUN": "zed"
}
}
}
},
"file_types": {
"Dockerfile": ["Dockerfile*[!dockerignore]"],
"JSONC": ["**/assets/**/*.json", "renovate.json"],
"Git Ignore": ["dockerignore"]
},
"hard_tabs": false,
"formatter": "auto",
"remove_trailing_whitespace_on_save": true,
"ensure_final_newline_on_save": true,
"file_scan_exclusions": [
"crates/assistant_tools/src/edit_agent/evals/fixtures",
"crates/eval/worktrees/",
"crates/eval/repos/",
"**/.git",
"**/.svn",
"**/.hg",
"**/.jj",
"**/CVS",
"**/.DS_Store",
"**/Thumbs.db",
"**/.classpath",
"**/.settings"
]
"ensure_final_newline_on_save": true
}


@@ -1,16 +1,7 @@
[
{
"label": "clippy",
"command": "./script/clippy",
"args": [],
"allow_concurrent_runs": true,
"use_new_terminal": false
},
{
"label": "cargo run --profile release-fast",
"command": "cargo",
"args": ["run", "--profile", "release-fast"],
"allow_concurrent_runs": true,
"use_new_terminal": false
"args": ["xtask", "clippy"]
}
]


@@ -1 +0,0 @@
.rules


@@ -1,3 +1,3 @@
# Code of Conduct
The Code of Conduct for this repository can be found online at [zed.dev/code-of-conduct](https://zed.dev/code-of-conduct).
The Code of Conduct for this repository can be found online at [zed.dev/docs/code-of-conduct](https://zed.dev/docs/code-of-conduct).


@@ -2,7 +2,7 @@
Thanks for your interest in contributing to Zed, the collaborative platform that is also a code editor!
All activity in Zed forums is subject to our [Code of Conduct](https://zed.dev/code-of-conduct). Additionally, contributors must sign our [Contributor License Agreement](https://zed.dev/cla) before their contributions can be merged.
All activity in Zed forums is subject to our [Code of Conduct](https://zed.dev/docs/code-of-conduct). Additionally, contributors must sign our [Contributor License Agreement](https://zed.dev/cla) before their contributions can be merged.
## Contribution ideas
@@ -11,7 +11,7 @@ If you're looking for ideas about what to work on, check out:
- Our [public roadmap](https://zed.dev/roadmap) contains a rough outline of our near-term priorities for Zed.
- Our [top-ranking issues](https://github.com/zed-industries/zed/issues/5393) based on votes by the community.
For adding themes or support for a new language to Zed, check out our [docs on developing extensions](https://zed.dev/docs/extensions/developing-extensions).
For adding themes or support for a new language to Zed, check out our [extension docs](https://github.com/zed-industries/extensions/blob/main/AUTHORING_EXTENSIONS.md).
## Proposing changes
@@ -37,21 +37,11 @@ We plan to set aside time each week to pair program with contributors on promisi
- Pair with us and watch us code to learn the codebase
- Low effort PRs, such as those that just re-arrange syntax, won't be merged without a compelling justification
## File icons
Zed's default icon theme consists of icons that are hand-designed to fit together in a cohesive manner.
We do not accept PRs for file icons that are just an off-the-shelf SVG taken from somewhere else.
### Adding new icons to the Zed icon theme
If you would like to add a new icon to the Zed icon theme, [open a Discussion](https://github.com/zed-industries/zed/discussions/new?category=ux-and-design) and we can work with you on getting an icon designed and added to Zed.
## Bird's-eye view of Zed
Zed is made up of several smaller crates - let's go over those you're most likely to interact with:
- [`gpui`](/crates/gpui) is a GPU-accelerated UI framework which provides all of the building blocks for Zed. **We recommend familiarizing yourself with the root level GPUI documentation.**
- [`gpui`](/crates/gpui) is a GPU-accelerated UI framework which provides all of the building blocks for Zed. **We recommend familiarizing yourself with the root level GPUI documentation**
- [`editor`](/crates/editor) contains the core `Editor` type that drives both the code editor and all various input fields within Zed. It also handles a display layer for LSP features such as Inlay Hints or code completions.
- [`project`](/crates/project) manages files and navigation within the filetree. It is also Zed's side of communication with LSP.
- [`workspace`](/crates/workspace) handles local state serialization and groups projects together.
@@ -62,9 +52,3 @@ Zed is made up of several smaller crates - let's go over those you're most likel
- [`rpc`](/crates/rpc) defines messages to be exchanged with the collaboration server.
- [`theme`](/crates/theme) defines the theme system and provides a default theme.
- [`ui`](/crates/ui) is a collection of UI components and common patterns used throughout Zed.
- [`cli`](/crates/cli) is the CLI crate which invokes the Zed binary.
- [`zed`](/crates/zed) is where all things come together, and the `main` entry point for Zed.
## Packaging Zed
Check our [notes for packaging Zed](https://zed.dev/docs/development/linux#notes-for-packaging-zed).

Cargo.lock

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -1,2 +0,0 @@
[build]
dockerfile = "Dockerfile-cross"

View File

@@ -1,22 +1,14 @@
# syntax = docker/dockerfile:1.2
FROM rust:1.89-bookworm as builder
FROM rust:1.79-bookworm as builder
WORKDIR app
COPY . .
# Replace the Cargo configuration with the one used by collab.
COPY ./.cargo/collab-config.toml ./.cargo/config.toml
# Compile collab server
ARG CARGO_PROFILE_RELEASE_PANIC=abort
ARG GITHUB_SHA
ENV GITHUB_SHA=$GITHUB_SHA
# Also add `cmake`, since we need it to build `wasmtime`.
RUN apt-get update; \
apt-get install -y --no-install-recommends cmake
RUN --mount=type=cache,target=./script/node_modules \
--mount=type=cache,target=/usr/local/cargo/registry \
--mount=type=cache,target=/usr/local/cargo/git \
@@ -35,7 +27,5 @@ RUN apt-get update; \
WORKDIR app
COPY --from=builder /app/collab /app/collab
COPY --from=builder /app/crates/collab/migrations /app/migrations
COPY --from=builder /app/crates/collab/migrations_llm /app/migrations_llm
ENV MIGRATIONS_PATH=/app/migrations
ENV LLM_DATABASE_MIGRATIONS_PATH=/app/migrations_llm
ENTRYPOINT ["/app/collab"]


@@ -1,17 +0,0 @@
# syntax=docker/dockerfile:1
ARG CROSS_BASE_IMAGE
FROM ${CROSS_BASE_IMAGE}
WORKDIR /app
ARG TZ=Etc/UTC \
LANG=C.UTF-8 \
LC_ALL=C.UTF-8 \
DEBIAN_FRONTEND=noninteractive
ENV CARGO_TERM_COLOR=always
COPY script/install-mold script/
RUN ./script/install-mold "2.34.0"
COPY script/remote-server script/
RUN ./script/remote-server
COPY . .


@@ -1,16 +0,0 @@
.git
.github
**/.gitignore
**/.gitkeep
.gitattributes
.mailmap
**/target
zed.xcworkspace
.DS_Store
compose.yml
plugins/bin
script/node_modules
styles/node_modules
crates/collab/static/styles.css
vendor/bin
assets/themes/


@@ -1,26 +0,0 @@
# syntax=docker/dockerfile:1
ARG BASE_IMAGE
FROM ${BASE_IMAGE}
WORKDIR /app
ARG TZ=Etc/UTC \
LANG=C.UTF-8 \
LC_ALL=C.UTF-8 \
DEBIAN_FRONTEND=noninteractive
ENV CARGO_TERM_COLOR=always
COPY script/linux script/
RUN ./script/linux
COPY script/install-mold script/install-cmake script/
RUN ./script/install-mold "2.34.0"
RUN ./script/install-cmake "3.30.4"
COPY . .
# When debugging, make these into individual RUN statements.
# Cleanup to avoid saving big layers we aren't going to use.
RUN . "$HOME/.cargo/env" \
&& cargo fetch \
&& cargo build \
&& cargo run -- --help \
&& cargo clean --quiet


@@ -1,2 +0,0 @@
**/target
**/node_modules


@@ -1,4 +1,4 @@
Copyright 2022 - 2025 Zed Industries, Inc.
Copyright 2022 - 2024 Zed Industries, Inc.


@@ -1,4 +1,4 @@
Copyright 2022 - 2025 Zed Industries, Inc.
Copyright 2022 - 2024 Zed Industries, Inc.
Licensed under the Apache License, Version 2.0 (the "License");


@@ -1,4 +1,3 @@
collab: RUST_LOG=${RUST_LOG:-info} cargo run --package=collab serve all
cloud: cd ../cloud; cargo make dev
collab: RUST_LOG=${RUST_LOG:-info} cargo run --package=collab serve
livekit: livekit-server --dev
blob_store: ./script/run-local-minio


@@ -1,2 +0,0 @@
app: postgrest crates/collab/postgrest_app.conf
llm: postgrest crates/collab/postgrest_llm.conf


@@ -1,2 +0,0 @@
postgrest_llm: postgrest crates/collab/postgrest_llm.conf
website: cd ../zed.dev; npm run dev -- --port=3000


@@ -1,35 +1,45 @@
# Zed
[![Zed](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/zed-industries/zed/main/assets/badge/v0.json)](https://zed.dev)
[![CI](https://github.com/zed-industries/zed/actions/workflows/ci.yml/badge.svg)](https://github.com/zed-industries/zed/actions/workflows/ci.yml)
Welcome to Zed, a high-performance, multiplayer code editor from the creators of [Atom](https://github.com/atom/atom) and [Tree-sitter](https://github.com/tree-sitter/tree-sitter).
---
## Installation
### Installation
You can [download](https://zed.dev/download) Zed today for macOS (v10.15+).
On macOS and Linux you can [download Zed directly](https://zed.dev/download) or [install Zed via your local package manager](https://zed.dev/docs/linux#installing-via-a-package-manager).
Other platforms are not yet available:
Support for additional platforms is on our [roadmap](https://zed.dev/roadmap):
- Linux ([tracking issue](https://github.com/zed-industries/zed/issues/7015))
- Windows ([tracking issue](https://github.com/zed-industries/zed/issues/5394))
- Web ([tracking issue](https://github.com/zed-industries/zed/issues/5396))
### Developing Zed
For macOS users, you can also install Zed using [Homebrew](https://brew.sh/):
```sh
brew install --cask zed
```
Alternatively, to install the Preview release:
```sh
brew install --cask zed@preview
```
## Developing Zed
- [Building Zed for macOS](./docs/src/development/macos.md)
- [Building Zed for Linux](./docs/src/development/linux.md)
- [Building Zed for Windows](./docs/src/development/windows.md)
- [Running Collaboration Locally](./docs/src/development/local-collaboration.md)
### Contributing
## Contributing
See [CONTRIBUTING.md](./CONTRIBUTING.md) for ways you can contribute to Zed.
Also... we're hiring! Check out our [jobs](https://zed.dev/jobs) page for open roles.
### Licensing
## Licensing
License information for third party dependencies must be correctly provided for CI to pass.


@@ -1,8 +0,0 @@
{
"label": "",
"message": "Zed",
"logoSvg": "<svg xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 96 96\"><rect width=\"96\" height=\"96\" fill=\"#000\"/><path fill-rule=\"evenodd\" clip-rule=\"evenodd\" d=\"M9 6C7.34315 6 6 7.34315 6 9V75H0V9C0 4.02944 4.02944 0 9 0H89.3787C93.3878 0 95.3955 4.84715 92.5607 7.68198L43.0551 57.1875H57V51H63V58.6875C63 61.1728 60.9853 63.1875 58.5 63.1875H37.0551L26.7426 73.5H73.5V36H79.5V73.5C79.5 76.8137 76.8137 79.5 73.5 79.5H20.7426L10.2426 90H87C88.6569 90 90 88.6569 90 87V21H96V87C96 91.9706 91.9706 96 87 96H6.62132C2.61224 96 0.604504 91.1529 3.43934 88.318L52.7574 39H39V45H33V37.5C33 35.0147 35.0147 33 37.5 33H58.7574L69.2574 22.5H22.5V60H16.5V22.5C16.5 19.1863 19.1863 16.5 22.5 16.5H75.2574L85.7574 6H9Z\" fill=\"#fff\"/></svg>",
"logoWidth": 16,
"labelColor": "black",
"color": "white"
}


@@ -1,92 +0,0 @@
Copyright © 2017 IBM Corp. with Reserved Font Name "Plex"
This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
http://scripts.sil.org/OFL
-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------
PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.
The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.
DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.
"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).
"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).
"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.
"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.
PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:
1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.
2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.
3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.
4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.
5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.
TERMINATION
This license becomes null and void if any of the above conditions are
not met.
DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.

Binary file not shown.

Binary file not shown.

Binary file not shown.


@@ -1,93 +0,0 @@
Copyright 2019 The Lilex Project Authors (https://github.com/mishamyrt/Lilex)
This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://scripts.sil.org/OFL
-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------
PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.
The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.
DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.
"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).
"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).
"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.
"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.
PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:
1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.
2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.
3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.
4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.
5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.
TERMINATION
This license becomes null and void if any of the above conditions are
not met.
DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.

Binary file not shown.

Some files were not shown because too many files have changed in this diff.