Nathan Sobo 1ae326432e Extract a scheduler crate from GPUI to enable unified integration testing of client and server code (#37326)
Extracts and cleans up GPUI's scheduler code into a new `scheduler`
crate that external runtimes can plug into. This will enable
deterministic integration testing with cloud components by providing a
unified test scheduler across Zed and backend code. In Zed, it will
replace the existing GPUI scheduler, giving consistent async task
management across platforms.

## Changes

- **Core Implementation**: `TestScheduler` with seed-based
randomization, session tracking (`SessionId`), and foreground/background
task separation for reproducible testing.
- **Executors**: `ForegroundExecutor` (!Send, thread-local) and
`BackgroundExecutor` (Send, with blocking/timeout support) as
GPUI-compatible wrappers.
- **Clock and Timer**: Controllable `TestClock` and future-based `Timer`
for time-sensitive tests.
- **Testing APIs**: `once()`, `with_seed()`, and `many()` methods for
configurable test runs (see the sketch after this list).
- **Dependencies**: Added `async-task`, `chrono`, `futures`, etc., with
updates to `Cargo.toml` and lock file.
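
As a rough illustration of how these pieces compose, here's a minimal sketch of a deterministic test. The type names (`TestScheduler`, `ForegroundExecutor`, `BackgroundExecutor`, `with_seed()`) come from the list above; the module path, constructors, and the clock/drain calls are assumptions for illustration, not the crate's confirmed API.

```rust
use std::{sync::Arc, time::Duration};

use scheduler::{BackgroundExecutor, ForegroundExecutor, TestScheduler}; // hypothetical paths

fn main() {
    // A fixed seed makes the task interleaving reproducible run-to-run.
    let scheduler = Arc::new(TestScheduler::with_seed(42));
    let background = BackgroundExecutor::new(scheduler.clone()); // Send tasks
    let foreground = ForegroundExecutor::new(scheduler.clone()); // !Send, thread-local tasks

    let sum = background.spawn(async { 2 + 2 });
    foreground
        .spawn(async {
            // Futures polled here may hold !Send state.
        })
        .detach();

    // Hypothetical controls: advance the test clock past pending timers
    // instead of sleeping, then drain all queued work deterministically.
    scheduler.advance_clock(Duration::from_secs(1));
    scheduler.run_until_idle();
    assert_eq!(background.block_on(sum), 4);
}
```

A `many()`-style entry point can then re-run the same body across a range of seeds to flush out ordering-dependent bugs that a single interleaving would miss.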

## Benefits

- **Integration Testing**: Facilitates reliable async tests involving
cloud sessions, reducing flakiness via deterministic execution.
- **Pluggability**: Trait-based design (`Scheduler`) allows easy
integration into non-GPUI runtimes while maintaining GPUI compatibility
(sketched below).
- **Cleanup**: Refactors GPUI scheduler logic for clarity, correctness
(no `unwrap()`, proper error handling), and extensibility.
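
The pluggability is easiest to picture as a trait boundary. Below is a sketch of what that boundary might look like; only the `Scheduler` trait name and the `SessionId`/`Timer` types are taken from this PR, and every method shown is an assumption about its shape.

```rust
use std::time::Duration;

use async_task::Runnable; // `async-task` is among the new dependencies listed above
use scheduler::{SessionId, Timer}; // hypothetical paths for the PR's types

// Illustrative shape only; the crate's actual methods may differ.
pub trait Scheduler: Send + Sync {
    /// Schedule Send work that may run on any thread.
    fn schedule_background(&self, runnable: Runnable);
    /// Schedule work pinned to the thread that owns `session`.
    fn schedule_foreground(&self, session: SessionId, runnable: Runnable);
    /// Create a timer future; a production implementation sleeps in real
    /// time, while `TestScheduler` can resolve it when the test clock advances.
    fn timer(&self, duration: Duration) -> Timer;
}
```

With a boundary like this, Zed and the collab server can share the same executors and swap only the `Scheduler` implementation: the seeded `TestScheduler` in tests, a real-time scheduler in production.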

Follows Rust guidelines; run `./script/clippy` for verification.

- [x] Define and test a core scheduler that we think can power our cloud
code and GPUI
- [ ] Replace GPUI's scheduler


Release Notes:

- N/A

---------

Co-authored-by: Antonio Scandurra <me@as-cii.com>

Zed Server

This crate is what we run at https://collab.zed.dev.

It contains our back-end logic for collaboration. The Zed client connects to it over a websocket after authenticating via https://zed.dev, which lives in a separate repo and runs on Vercel.

Local Development

Database setup

Before you can run the collab server locally, you'll need to set up a zed Postgres database. Follow these steps in order:

  1. Ensure Postgres is installed. If not, install it with brew install postgresql@15.
  2. Follow the steps listed in the Homebrew formula's output and verify that your $PATH contains /opt/homebrew/opt/postgresql@15/bin.
  3. If you haven't done so already, create the postgres user with createuser -s postgres.
  4. You are now ready to run the bootstrap script:
script/bootstrap

This script will set up the zed Postgres database and populate it with some users. It requires internet access because it fetches user data from the GitHub API.

The script will create several admin users, whom you'll sign in as by default when developing locally. The GitHub logins for the default users are specified in the seed.default.json file.

To use a different set of admin users, create crates/collab/seed.json.

{
  "admins": ["yourgithubhere"],
  "channels": ["zed"]
}

Testing collaborative features locally

In one terminal, run Zed's collaboration server and the LiveKit dev server:

foreman start

In a second terminal, run two or more instances of Zed:

script/zed-local -2

This script starts one to four instances of Zed, depending on the -2, -3, or -4 flag. Each instance will be connected to the local collab server, signed in as a different user from seed.json or seed.default.json.

Deployment

We run two instances of collab: staging and production.

Both of these run on a Kubernetes cluster hosted in DigitalOcean.

Deployment is triggered by pushing the collab-staging (or collab-production) tag to GitHub. The best way to do this is with the deploy script:

  • ./script/deploy-collab staging
  • ./script/deploy-collab production

You can tell what is currently deployed with ./script/what-is-deployed.

Database Migrations

To create a new migration:

./script/create-migration <name>

Migrations run automatically on service start, so run foreman start again to apply them. The service will crash if a migration fails.

When you create a new migration, you also need to update the SQLite schema that is used for testing.