Compare commits

53 commits

additional...scheduler-
| Author | SHA1 | Date |
|---|---|---|
| | 56429f37c7 | |
| | a5f39ec024 | |
| | c1cea51e2a | |
| | b245db699a | |
| | 96996950f4 | |
| | dcc9988844 | |
| | d8ebd8101f | |
| | 02230acf19 | |
| | 849266d949 | |
| | 075a318a89 | |
| | ebb6533612 | |
| | 9bc8ffd96b | |
| | 9d4f73cc2f | |
| | 8f1cc9275e | |
| | b68973ed25 | |
| | 5d59ace54e | |
| | f7e89a3442 | |
| | f70960b3ce | |
| | 5633366cc5 | |
| | f9d4ac0b4f | |
| | 9a2a95e321 | |
| | 3d387f4bcf | |
| | 99eb28785f | |
| | e007b3227f | |
| | 550da8756f | |
| | 1acd3a72c4 | |
| | 064ec8c52c | |
| | 280cf662a1 | |
| | 76e8715434 | |
| | be0444e039 | |
| | aa2ecef1cf | |
| | a22bc65573 | |
| | 328b5cb969 | |
| | d02575cf2b | |
| | fc620bba43 | |
| | a788c91de9 | |
| | 7676004a96 | |
| | e248cb14e5 | |
| | 7b08b685a2 | |
| | 1fe806a34c | |
| | 8d4ac155e2 | |
| | 31270d41c8 | |
| | f0e1a76877 | |
| | 56493370a4 | |
| | 5e41d48512 | |
| | 5c3b5a1d60 | |
| | 02925409f0 | |
| | c39581d52e | |
| | 52e2650270 | |
| | d5e1509b53 | |
| | 5b07e2b242 | |
| | 9272b90672 | |
| | 8d714f0bc4 | |
.rules (143 lines)

@@ -1,143 +0,0 @@
# Rust coding guidelines

* Prioritize code correctness and clarity. Speed and efficiency are secondary priorities unless otherwise specified.
* Do not write organizational comments or comments that summarize the code. Comments should only be written to explain "why" the code is written in some way, in the case there is a reason that is tricky / non-obvious.
* Prefer implementing functionality in existing files unless it is a new logical component. Avoid creating many small files.
* Avoid using functions that panic like `unwrap()`; instead use mechanisms like `?` to propagate errors.
* Be careful with operations like indexing which may panic if the indexes are out of bounds.
* Never silently discard errors with `let _ =` on fallible operations. Always handle errors appropriately:
  - Propagate errors with `?` when the calling function should handle them
  - Use `.log_err()` or similar when you need to ignore errors but want visibility
  - Use explicit error handling with `match` or `if let Err(...)` when you need custom logic
  - Example: avoid `let _ = client.request(...).await?;` - use `client.request(...).await?;` instead
* When implementing async operations that may fail, ensure errors propagate to the UI layer so users get meaningful feedback.
* Never create files with `mod.rs` paths - prefer `src/some_module.rs` instead of `src/some_module/mod.rs`.
* When creating new crates, prefer specifying the library root path in `Cargo.toml` using `[lib] path = "...rs"` instead of the default `lib.rs`, to maintain consistent and descriptive naming (e.g., `gpui.rs` or `main.rs`).
* Avoid creative additions unless explicitly requested.
* Use full words for variable names (no abbreviations like "q" for "queue").
* Use variable shadowing to scope clones in async contexts for clarity, minimizing the lifetime of borrowed references.

  Example:

  ```rust
  executor.spawn({
      let task_ran = task_ran.clone();
      async move {
          *task_ran.borrow_mut() = true;
      }
  });
  ```
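The error-propagation guideline above can be illustrated with plain standard-library types. The `parse_port` helper below is hypothetical, not part of the codebase; it shows `?` forwarding a failure instead of an `unwrap()` or a discarded `let _ =`:

```rust
use std::num::ParseIntError;

// Hypothetical helper: the parse error is propagated with `?` rather than
// discarded with `let _ =` or turned into a panic with `unwrap()`.
fn parse_port(input: &str) -> Result<u16, ParseIntError> {
    let port = input.trim().parse::<u16>()?;
    Ok(port)
}

fn main() {
    assert_eq!(parse_port(" 8080 "), Ok(8080));
    assert!(parse_port("not-a-port").is_err());
}
```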
# GPUI

GPUI is a UI framework which also provides primitives for state and concurrency management.

## Context

Context types allow interaction with global state, windows, entities, and system services. They are typically passed to functions as the argument named `cx`. When a function takes callbacks, they come after the `cx` parameter.

* `App` is the root context type, providing access to global state and read and update of entities.
* `Context<T>` is provided when updating an `Entity<T>`. This context dereferences into `App`, so functions which take `&App` can also take `&Context<T>`.
* `AsyncApp` and `AsyncWindowContext` are provided by `cx.spawn` and `cx.spawn_in`. These can be held across await points.

## `Window`

`Window` provides access to the state of an application window. It is passed to functions as an argument named `window` and comes before `cx` when present. It is used for managing focus, dispatching actions, directly drawing, getting user input state, etc.

## Entities

An `Entity<T>` is a handle to state of type `T`. With `thing: Entity<T>`:

* `thing.entity_id()` returns `EntityId`.
* `thing.downgrade()` returns `WeakEntity<T>`.
* `thing.read(cx: &App)` returns `&T`.
* `thing.read_with(cx, |thing: &T, cx: &App| ...)` returns the closure's return value.
* `thing.update(cx, |thing: &mut T, cx: &mut Context<T>| ...)` allows the closure to mutate the state, and provides a `Context<T>` for interacting with the entity. It returns the closure's return value.
* `thing.update_in(cx, |thing: &mut T, window: &mut Window, cx: &mut Context<T>| ...)` takes an `AsyncWindowContext` or `VisualTestContext`. It's the same as `update` while also providing the `Window`.

Within the closures, the inner `cx` provided to the closure must be used instead of the outer `cx` to avoid issues with multiple borrows.

Trying to update an entity while it's already being updated must be avoided, as this will cause a panic.

When `read_with`, `update`, or `update_in` are used with an async context, the closure's return value is wrapped in an `anyhow::Result`.

`WeakEntity<T>` is a weak handle. It has `read_with`, `update`, and `update_in` methods that work the same, but always return an `anyhow::Result` so that they can fail if the entity no longer exists. This can be useful to avoid memory leaks - if entities have mutually recursive handles to each other they will never be dropped.

## Concurrency

All use of entities and UI rendering occurs on a single foreground thread.

`cx.spawn(async move |cx| ...)` runs an async closure on the foreground thread. Within the closure, `cx` is an async context like `AsyncApp` or `AsyncWindowContext`.

When the outer `cx` is a `Context<T>`, the use of `spawn` instead looks like `cx.spawn(async move |handle, cx| ...)`, where `handle: WeakEntity<T>`.

To do work on other threads, `cx.background_spawn(async move { ... })` is used. Often this background task is awaited on by a foreground task which uses the results to update state.

Both `cx.spawn` and `cx.background_spawn` return a `Task<R>`, which is a future that can be awaited upon. If this task is dropped, its work is cancelled. To prevent this, one of the following must be done:

* Awaiting the task in some other async context.
* Detaching the task via `task.detach()` or `task.detach_and_log_err(cx)`, allowing it to run indefinitely.
* Storing the task in a field, if the work should be halted when the struct is dropped.

A task which doesn't do anything but provide a value can be created with `Task::ready(value)`.

## Elements

The `Render` trait is used to render some state into an element tree that is laid out using flexbox layout. An `Entity<T>` where `T` implements `Render` is sometimes called a "view".

Example:

```rust
struct TextWithBorder(SharedString);

impl Render for TextWithBorder {
    fn render(&mut self, _window: &mut Window, _cx: &mut Context<Self>) -> impl IntoElement {
        div().border_1().child(self.0.clone())
    }
}
```

Since `impl IntoElement for SharedString` exists, it can be used as an argument to `child`. `SharedString` is used to avoid copying strings, and is either an `&'static str` or `Arc<str>`.

UI components that are constructed just to be turned into elements can instead implement the `RenderOnce` trait, which is similar to `Render`, but its `render` method takes ownership of `self`. Types that implement this trait can use `#[derive(IntoElement)]` to use them directly as children.

The style methods on elements are similar to those used by Tailwind CSS.

If some attributes or children of an element tree are conditional, `.when(condition, |this| ...)` can be used to run the closure only when `condition` is true. Similarly, `.when_some(option, |this, value| ...)` runs the closure when the `Option` has a value.

## Input events

Input event handlers can be registered on an element via methods like `.on_click(|event, window, cx: &mut App| ...)`.

Often event handlers will want to update the entity that's in the current `Context<T>`. The `cx.listener` method provides this - its use looks like `.on_click(cx.listener(|this: &mut T, event, window, cx: &mut Context<T>| ...))`.

## Actions

Actions are dispatched via user keyboard interaction or in code via `window.dispatch_action(SomeAction.boxed_clone(), cx)` or `focus_handle.dispatch_action(&SomeAction, window, cx)`.

Actions with no data are defined with the `actions!(some_namespace, [SomeAction, AnotherAction])` macro call. Otherwise, the `Action` derive macro is used. Doc comments on actions are displayed to the user.

Action handlers can be registered on an element via the event handler `.on_action(|action, window, cx| ...)`. Like other event handlers, this is often used with `cx.listener`.

## Notify

When a view's state has changed in a way that may affect its rendering, it should call `cx.notify()`. This will cause the view to be rerendered. It will also cause any observe callbacks registered for the entity with `cx.observe` to be called.

## Entity events

While updating an entity (`cx: Context<T>`), it can emit an event using `cx.emit(event)`. Entities register which events they can emit by declaring `impl EventEmitter<EventType> for EntityType {}`.

Other entities can then register a callback to handle these events by doing `cx.subscribe(other_entity, |this, other_entity, event, cx| ...)`. This will return a `Subscription` which deregisters the callback when dropped. Typically `cx.subscribe` happens when creating a new entity and the subscriptions are stored in a `_subscriptions: Vec<Subscription>` field.

## Recent API changes

GPUI has had some changes to its APIs. Always write code using the new APIs:

* `spawn` methods now take async closures (`AsyncFn`), and so should be called like `cx.spawn(async move |cx| ...)`.
* Use `Entity<T>`. This replaces `Model<T>` and `View<T>`, which no longer exist and should NEVER be used.
* Use `App` references. This replaces `AppContext`, which no longer exists and should NEVER be used.
* Use `Context<T>` references. This replaces `ModelContext<T>`, which no longer exists and should NEVER be used.
* `Window` is now passed around explicitly. The new interface adds a `Window` reference parameter to some methods, and adds some new "*_in" methods for plumbing `Window`. The old types `WindowContext` and `ViewContext<T>` should NEVER be used.

## General guidelines

- Use `./script/clippy` instead of `cargo clippy`
Cargo.lock (generated, 3 lines)
@@ -7243,6 +7243,7 @@ dependencies = [
 "calloop",
 "calloop-wayland-source",
 "cbindgen",
 "chrono",
 "circular-buffer",
 "cocoa 0.26.0",
 "cocoa-foundation 0.2.0",
@@ -7292,6 +7293,7 @@ dependencies = [
 "refineable",
 "reqwest_client",
 "resvg",
 "scheduler",
 "schemars",
 "seahash",
 "semver",
@@ -13471,6 +13473,7 @@ dependencies = [
 "alacritty_terminal",
 "anyhow",
 "async-dispatcher",
 "async-task",
 "async-tungstenite",
 "base64 0.22.1",
 "client",
@@ -374,6 +374,7 @@ rodio = { git = "https://github.com/RustAudio/rodio", rev ="e2074c6c2acf07b57cf7
 rope = { path = "crates/rope" }
 rpc = { path = "crates/rpc" }
 rules_library = { path = "crates/rules_library" }
 scheduler = { path = "crates/scheduler" }
 search = { path = "crates/search" }
 session = { path = "crates/session" }
 settings = { path = "crates/settings" }
STATUS.md (new file, 144 lines)

@@ -0,0 +1,144 @@
# Scheduler Integration - Debugging Status

## Problem

PR #44810 causes Zed to hang on startup on Linux/Windows, but works fine on Mac.

From PR comment by @yara-blue and @localcc:

> "With it applied zed hangs without ever responding when you open it"

## What Was Cleaned Up

Removed unrelated changes that were accidentally committed:

- `ConfiguredApiCard`, `InstructionListItem`, `api_key.rs` (UI components)
- Debug instrumentation in `terminal_tool.rs`, `capability_granter.rs`, `wasm_host/wit/since_v0_8_0.rs`, `lsp_store.rs`
- Planning docs (`.rules`, `PLAN.md`, old `STATUS.md`)

Kept `language_registry.rs` changes as they may be relevant to debugging.

## Analysis So Far

### Code paths verified as correct:

1. **Priority queue algorithm** - The weighted random selection in `crates/gpui/src/queue.rs` is mathematically sound. When the last non-empty queue is checked, the probability is always 100%.
2. **`async_task::Task` implements `Unpin`** - So `Pin::new(task).poll(cx)` is valid.

3. **`parking::Parker` semantics** - If `unpark()` is called before `park()`, the `park()` returns immediately. This is correct.
4. **Waker creation** - `waker_fn` with `Unparker` (which is `Clone + Send + Sync`) should work correctly.

5. **`PlatformScheduler::block` implementation** - Identical logic to the old `block_internal` for production builds.
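The soundness claim in item 1 can be sanity-checked with a std-only sketch. `pick_queue` below is a hypothetical stand-in for the real selection in `crates/gpui/src/queue.rs` (which may differ in detail); `rand01` stands in for an RNG producing values in `[0, 1)`:

```rust
// Hypothetical sketch of weighted random selection over non-empty queues.
// The last non-empty queue is chosen with probability weight / weight = 1.
fn pick_queue(queue_lens: &[usize], weights: &[f64], mut rand01: impl FnMut() -> f64) -> Option<usize> {
    let mut remaining: f64 = queue_lens
        .iter()
        .zip(weights)
        .filter(|(len, _)| **len > 0)
        .map(|(_, weight)| *weight)
        .sum();
    for (index, (&len, &weight)) in queue_lens.iter().zip(weights).enumerate() {
        if len == 0 {
            continue;
        }
        // Probability of choosing this queue among those not yet considered.
        if rand01() < weight / remaining {
            return Some(index);
        }
        remaining -= weight;
    }
    None
}

fn main() {
    // Only the last queue is non-empty: it must be chosen even for the
    // largest possible random draw, matching the "always 100%" claim.
    assert_eq!(pick_queue(&[0, 0, 3], &[1.0, 1.0, 1.0], || 0.999), Some(2));
    // All queues empty: nothing to pick.
    assert_eq!(pick_queue(&[0, 0], &[1.0, 1.0], || 0.0), None);
}
```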
### The blocking flow:

1. Task spawned → runnable scheduled → sent to priority queue
2. Background thread waiting on condvar in `PriorityQueueReceiver::recv()`
3. `send()` pushes to queue and calls `condvar.notify_one()`
4. Background thread wakes, pops item, runs runnable
5. When task completes, `async_task` wakes the registered waker
6. Waker calls `unparker.unpark()`
7. `parker.park()` returns
8. Future is polled again, returns `Ready`
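Steps 5-8 can be reproduced in miniature with only the standard library. This is an illustrative sketch, not GPUI's actual `block()`: the condvar-based `Parker` plays the role of `parking::Parker`, and std's `Wake` trait replaces `waker_fn`:

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// Minimal parker: wake() stores a token and notifies the condvar.
struct Parker {
    woken: Mutex<bool>,
    condvar: Condvar,
}

impl Wake for Parker {
    fn wake(self: Arc<Self>) {
        *self.woken.lock().unwrap() = true;
        self.condvar.notify_one();
    }
}

fn block_on<F: Future>(future: F) -> F::Output {
    let parker = Arc::new(Parker {
        woken: Mutex::new(false),
        condvar: Condvar::new(),
    });
    let waker = Waker::from(parker.clone());
    let mut context = Context::from_waker(&waker);
    let mut future = pin!(future);
    loop {
        if let Poll::Ready(output) = future.as_mut().poll(&mut context) {
            return output;
        }
        // A wake that lands before this point is not lost: the token is
        // checked under the lock before sleeping, mirroring how
        // unpark-before-park must behave in step 6/7.
        let mut woken = parker.woken.lock().unwrap();
        while !*woken {
            woken = parker.condvar.wait(woken).unwrap();
        }
        *woken = false;
    }
}

fn main() {
    assert_eq!(block_on(async { 41 + 1 }), 42);
}
```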
### Files involved:

- `crates/gpui/src/platform_scheduler.rs` - `PlatformScheduler::block()` implementation
- `crates/gpui/src/executor.rs` - `BackgroundExecutor::block()` wraps futures
- `crates/gpui/src/queue.rs` - Priority queue with `parking_lot::Condvar`
- `crates/gpui/src/platform/linux/dispatcher.rs` - Background thread pool
- `crates/scheduler/src/executor.rs` - `scheduler::BackgroundExecutor::spawn_with_priority`

## What to investigate next

### 1. Verify background threads are actually running

Add logging at the start of background worker threads in `LinuxDispatcher::new()`:

```rust
.spawn(move || {
    log::info!("[LinuxDispatcher] background worker {} started", i);
    for runnable in receiver.iter() {
        // ...
    }
})
```

### 2. Verify tasks are being dispatched

Add logging in `PlatformScheduler::schedule_background_with_priority`:

```rust
fn schedule_background_with_priority(&self, runnable: Runnable<RunnableMeta>, priority: Priority) {
    log::info!("[PlatformScheduler] dispatching task priority={:?}", priority);
    self.dispatcher.dispatch(runnable, priority);
}
```

### 3. Verify the priority queue send/receive

In `crates/gpui/src/queue.rs`, add logging to `send()` and `recv()`:

```rust
fn send(&self, priority: Priority, item: T) -> Result<(), SendError<T>> {
    // ...
    self.condvar.notify_one();
    log::debug!("[PriorityQueue] sent item, notified condvar");
    Ok(())
}

fn recv(&self) -> Result<...> {
    log::debug!("[PriorityQueue] recv() waiting...");
    while queues.is_empty() {
        self.condvar.wait(&mut queues);
    }
    log::debug!("[PriorityQueue] recv() got item");
    // ...
}
```

### 4. Check timing of dispatcher creation vs task spawning

Trace when `LinuxDispatcher::new()` is called vs when the first `spawn()` happens. If tasks are spawned before background threads are ready, they might be lost.
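Whether "spawned before ready" can lose tasks depends on whether the queue buffers. Assuming the priority queue buffers like a channel does (an assumption about the real code), a late-starting worker still drains everything queued before it existed; std's mpsc shows the shape:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Items queued before any worker thread exists are not lost, because the
// channel buffers them until a receiver drains the queue.
fn late_worker_drains_early_sends() -> i32 {
    let (sender, receiver) = mpsc::channel::<i32>();
    sender.send(1).unwrap();
    sender.send(2).unwrap();
    // The worker starts well after the sends.
    thread::sleep(Duration::from_millis(10));
    let worker = thread::spawn(move || receiver.iter().take(2).sum());
    worker.join().unwrap()
}

fn main() {
    assert_eq!(late_worker_drains_early_sends(), 3);
}
```

If the real queue has this property, hypothesis-style task loss would require a bug in the queue itself rather than in thread startup timing.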
### 5. Check for platform-specific differences in `parking` or `parking_lot`

The `parking` crate (used for `Parker`/`Unparker`) and `parking_lot` (used for `Condvar` in the priority queue) may have platform-specific behavior. Check their GitHub issues for Linux-specific bugs.
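As a baseline, std's built-in thread parker has the same one-token semantics the analysis relies on (unpark-before-park makes the next park return immediately), which can be checked directly:

```rust
use std::thread;
use std::time::{Duration, Instant};

// Store the token first: a park() after unpark() must return immediately
// rather than block. A timeout bounds the damage if the semantics were
// ever different on some platform.
fn unpark_then_park() -> Duration {
    thread::current().unpark();
    let start = Instant::now();
    thread::park_timeout(Duration::from_secs(5));
    start.elapsed()
}

fn main() {
    assert!(unpark_then_park() < Duration::from_secs(1));
}
```

This only exercises std, not the `parking` crate itself, so it rules the OS primitives in or out rather than the crate.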
### 6. Verify the startup sequence

The hang happens during startup. Key calls in `crates/zed/src/main.rs`:

```rust
// Line ~292: Tasks spawned BEFORE app.run()
let system_id = app.background_executor().spawn(system_id());
let installation_id = app.background_executor().spawn(installation_id());
let session = app.background_executor().spawn(Session::new(session_id.clone()));

// Line ~513-515: Inside app.run() callback, these BLOCK waiting for the tasks
let system_id = cx.background_executor().block(system_id).ok();
let installation_id = cx.background_executor().block(installation_id).ok();
let session = cx.background_executor().block(session);
```

If background threads aren't running yet when `block()` is called, or if the tasks never got dispatched, it will hang forever.
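The failure mode being described - `block()` on a task that was never dispatched - reduces to waiting on a result nobody will ever produce. A toy reproduction with a channel makes the would-be hang observable via a timeout instead of fatal:

```rust
use std::sync::mpsc;
use std::time::Duration;

// A queued-but-never-executed task looks exactly like a receive on a
// channel that has no active sender traffic: the blocking receive waits
// until the timeout fires.
fn blocked_forever() -> bool {
    let (_sender, receiver) = mpsc::channel::<u32>();
    receiver.recv_timeout(Duration::from_millis(50)).is_err()
}

fn main() {
    assert!(blocked_forever());
}
```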
## Hypotheses to test

1. **Background threads not started yet** - Race condition where tasks are dispatched before threads are listening on the queue.

2. **Condvar notification lost** - `notify_one()` called but no thread was waiting yet, and subsequent waits miss it.

3. **Platform-specific parking behavior** - `parking::Parker` or `parking_lot::Condvar` behaves differently on Linux.

4. **Priority queue never releases items** - Something in the weighted random selection is wrong on Linux (different RNG behavior?).
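Hypothesis 2 is worth narrowing with a std `Condvar` sketch: a notification sent before any thread waits is indeed dropped, but the receiver still cannot miss the item as long as it re-checks the predicate under the lock before sleeping. So this hypothesis can only hold if `recv()` checks emptiness outside the lock:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn recv_after_early_notify() -> i32 {
    let queue = Arc::new((Mutex::new(Vec::<i32>::new()), Condvar::new()));

    // Notify BEFORE any thread is waiting: the notification itself is
    // dropped, which is exactly the scenario in hypothesis 2.
    {
        let (lock, condvar) = &*queue;
        lock.lock().unwrap().push(7);
        condvar.notify_one();
    }

    // The receiver re-checks the predicate under the lock before ever
    // sleeping, so the queued item is still found.
    let receiver = {
        let queue = Arc::clone(&queue);
        thread::spawn(move || {
            let (lock, condvar) = &*queue;
            let mut items = lock.lock().unwrap();
            while items.is_empty() {
                items = condvar.wait(items).unwrap();
            }
            items.pop().unwrap()
        })
    };
    receiver.join().unwrap()
}

fn main() {
    assert_eq!(recv_after_early_notify(), 7);
}
```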
## Running tests

To get logs, set `RUST_LOG=info` or `RUST_LOG=debug` when running Zed.

For the extension_host test hang (separate issue):

```bash
cargo test -p extension_host extension_store_test::test_extension_store_with_test_extension -- --nocapture
```

## Key commits

- `5b07e2b242` - "WIP: scheduler integration debugging" - This accidentally added unrelated UI components
- `d8ebd8101f` - "WIP: scheduler integration debugging + agent terminal diagnostics" - Added debug instrumentation (now removed)
@@ -1337,7 +1337,7 @@ impl EvalAssertion {
}

fn run_eval(eval: EvalInput) -> eval_utils::EvalOutput<EditEvalMetadata> {
    let dispatcher = gpui::TestDispatcher::new(StdRng::from_os_rng());
    let dispatcher = gpui::TestDispatcher::new(rand::random());
    let mut cx = TestAppContext::build(dispatcher, None);
    let result = cx.executor().block_test(async {
        let test = EditAgentTest::new(&mut cx).await;

@@ -2210,7 +2210,7 @@ pub mod evals {
use eval_utils::{EvalOutput, NoProcessor};
use gpui::TestAppContext;
use language_model::{LanguageModelRegistry, SelectedModel};
use rand::{SeedableRng as _, rngs::StdRng};

use crate::inline_assistant::test::{InlineAssistantOutput, run_inline_assistant_test};

@@ -2282,7 +2282,7 @@ pub mod evals {
let prompt = prompt.into();

eval_utils::eval(iterations, expected_pass_ratio, NoProcessor, move || {
    let dispatcher = gpui::TestDispatcher::new(StdRng::from_os_rng());
    let dispatcher = gpui::TestDispatcher::new(rand::random());
    let mut cx = TestAppContext::build(dispatcher, None);
    cx.skip_drawing();
@@ -1,23 +1,16 @@
use futures::channel::oneshot;
use git2::{DiffLineType as GitDiffLineType, DiffOptions as GitOptions, Patch as GitPatch};
use gpui::{App, AppContext as _, AsyncApp, Context, Entity, EventEmitter, Task, TaskLabel};
use gpui::{App, AppContext as _, AsyncApp, Context, Entity, EventEmitter, Task};
use language::{
    BufferRow, DiffOptions, File, Language, LanguageName, LanguageRegistry,
    language_settings::language_settings, word_diff_ranges,
};
use rope::Rope;
use std::{
    cmp::Ordering,
    future::Future,
    iter,
    ops::Range,
    sync::{Arc, LazyLock},
};
use std::{cmp::Ordering, future::Future, iter, ops::Range, sync::Arc};
use sum_tree::SumTree;
use text::{Anchor, Bias, BufferId, OffsetRangeExt, Point, ToOffset as _, ToPoint as _};
use util::ResultExt;

pub static CALCULATE_DIFF_TASK: LazyLock<TaskLabel> = LazyLock::new(TaskLabel::new);
pub const MAX_WORD_DIFF_LINE_COUNT: usize = 5;

pub struct BufferDiff {
@@ -247,12 +240,10 @@ impl BufferDiffSnapshot {
    base_text_exists = false;
};

let hunks = cx
    .background_executor()
    .spawn_labeled(*CALCULATE_DIFF_TASK, {
        let buffer = buffer.clone();
        async move { compute_hunks(base_text_pair, buffer, diff_options) }
    });
let hunks = cx.background_executor().spawn({
    let buffer = buffer.clone();
    async move { compute_hunks(base_text_pair, buffer, diff_options) }
});

async move {
    let (base_text, hunks) = futures::join!(base_text_snapshot, hunks);
@@ -285,18 +276,17 @@ impl BufferDiffSnapshot {
    debug_assert_eq!(&*text, &base_text_snapshot.text());
    (text, base_text_snapshot.as_rope().clone())
});
cx.background_executor()
    .spawn_labeled(*CALCULATE_DIFF_TASK, async move {
        Self {
            inner: BufferDiffInner {
                base_text: base_text_snapshot,
                pending_hunks: SumTree::new(&buffer),
                hunks: compute_hunks(base_text_pair, buffer, diff_options),
                base_text_exists,
            },
            secondary_diff: None,
        }
    })
cx.background_executor().spawn(async move {
    Self {
        inner: BufferDiffInner {
            base_text: base_text_snapshot,
            pending_hunks: SumTree::new(&buffer),
            hunks: compute_hunks(base_text_pair, buffer, diff_options),
            base_text_exists,
        },
        secondary_diff: None,
    }
})
}

#[cfg(test)]
@@ -3,7 +3,7 @@ use anyhow::{Context as _, Result, anyhow};
use cloud_api_client::{AuthenticatedUser, GetAuthenticatedUserResponse, PlanInfo};
use cloud_llm_client::{CurrentUsage, PlanV1, UsageData, UsageLimit};
use futures::{StreamExt, stream::BoxStream};
use gpui::{AppContext as _, BackgroundExecutor, Entity, TestAppContext};
use gpui::{AppContext as _, Entity, TestAppContext};
use http_client::{AsyncBody, Method, Request, http};
use parking_lot::Mutex;
use rpc::{ConnectionId, Peer, Receipt, TypedEnvelope, proto};
@@ -13,7 +13,6 @@ pub struct FakeServer {
    peer: Arc<Peer>,
    state: Arc<Mutex<FakeServerState>>,
    user_id: u64,
    executor: BackgroundExecutor,
}

#[derive(Default)]
@@ -35,7 +34,6 @@ impl FakeServer {
    peer: Peer::new(0),
    state: Default::default(),
    user_id: client_user_id,
    executor: cx.executor(),
};

client.http_client().as_fake().replace_handler({
@@ -181,8 +179,6 @@ impl FakeServer {

#[allow(clippy::await_holding_lock)]
pub async fn receive<M: proto::EnvelopedMessage>(&self) -> Result<TypedEnvelope<M>> {
    self.executor.start_waiting();

    let message = self
        .state
        .lock()
@@ -192,7 +188,6 @@ impl FakeServer {
        .next()
        .await
        .context("other half hung up")?;
    self.executor.finish_waiting();
    let type_name = message.payload_type_name();
    let message = message.into_any();
@@ -251,8 +251,6 @@ impl Database {
{
    #[cfg(test)]
    {
        use rand::prelude::*;

        let test_options = self.test_options.as_ref().unwrap();
        test_options.executor.simulate_random_delay().await;
        let fail_probability = *test_options.query_failure_probability.lock();
@@ -254,7 +254,6 @@ async fn test_channel_notes_participant_indices(
let (workspace_b, cx_b) = client_b.build_workspace(&project_b, cx_b);

// Clients A and B open the same file.
executor.start_waiting();
let editor_a = workspace_a
    .update_in(cx_a, |workspace, window, cx| {
        workspace.open_path(
@@ -269,7 +268,6 @@ async fn test_channel_notes_participant_indices(
    .unwrap()
    .downcast::<Editor>()
    .unwrap();
executor.start_waiting();
let editor_b = workspace_b
    .update_in(cx_b, |workspace, window, cx| {
        workspace.open_path(
@@ -151,7 +151,7 @@ async fn test_host_disconnect(

// Allow client A to reconnect to the server.
server.allow_connections();
cx_a.background_executor.advance_clock(RECEIVE_TIMEOUT);
cx_a.background_executor.advance_clock(RECONNECT_TIMEOUT);

// Client B calls client A again after they reconnected.
let active_call_b = cx_b.read(ActiveCall::global);
@@ -427,6 +427,51 @@ async fn test_collaborating_with_completion(cx_a: &mut TestAppContext, cx_b: &mu
    assert!(!buffer.completion_triggers().is_empty())
});

// Set up the completion request handlers BEFORE typing the trigger character.
// This is critical - the handlers must be in place when the request arrives,
// otherwise the requests will time out waiting for a response.
let mut first_completion_request = fake_language_server
    .set_request_handler::<lsp::request::Completion, _, _>(|params, _| async move {
        assert_eq!(
            params.text_document_position.text_document.uri,
            lsp::Uri::from_file_path(path!("/a/main.rs")).unwrap(),
        );
        assert_eq!(
            params.text_document_position.position,
            lsp::Position::new(0, 14),
        );

        Ok(Some(lsp::CompletionResponse::Array(vec![
            lsp::CompletionItem {
                label: "first_method(…)".into(),
                detail: Some("fn(&mut self, B) -> C".into()),
                text_edit: Some(lsp::CompletionTextEdit::Edit(lsp::TextEdit {
                    new_text: "first_method($1)".to_string(),
                    range: lsp::Range::new(
                        lsp::Position::new(0, 14),
                        lsp::Position::new(0, 14),
                    ),
                })),
                insert_text_format: Some(lsp::InsertTextFormat::SNIPPET),
                ..Default::default()
            },
            lsp::CompletionItem {
                label: "second_method(…)".into(),
                detail: Some("fn(&mut self, C) -> D<E>".into()),
                text_edit: Some(lsp::CompletionTextEdit::Edit(lsp::TextEdit {
                    new_text: "second_method()".to_string(),
                    range: lsp::Range::new(
                        lsp::Position::new(0, 14),
                        lsp::Position::new(0, 14),
                    ),
                })),
                insert_text_format: Some(lsp::InsertTextFormat::SNIPPET),
                ..Default::default()
            },
        ])))
    });
let mut second_completion_request = second_fake_language_server
    .set_request_handler::<lsp::request::Completion, _, _>(|_, _| async move { Ok(None) });
// Type a completion trigger character as the guest.
editor_b.update_in(cx_b, |editor, window, cx| {
    editor.change_selections(SelectionEffects::no_scroll(), window, cx, |s| {
@@ -440,6 +485,10 @@ async fn test_collaborating_with_completion(cx_a: &mut TestAppContext, cx_b: &mu
cx_b.background_executor.run_until_parked();
cx_a.background_executor.run_until_parked();

// Wait for the completion requests to be received by the fake language servers.
first_completion_request.next().await.unwrap();
second_completion_request.next().await.unwrap();

// Open the buffer on the host.
let buffer_a = project_a
    .update(cx_a, |p, cx| {
@@ -1840,7 +1889,6 @@ async fn test_on_input_format_from_guest_to_host(

// Receive an OnTypeFormatting request as the host's language server.
// Return some formatting from the host's language server.
executor.start_waiting();
fake_language_server
    .set_request_handler::<lsp::request::OnTypeFormatting, _, _>(|params, _| async move {
        assert_eq!(
@@ -1860,7 +1908,6 @@ async fn test_on_input_format_from_guest_to_host(
    .next()
    .await
    .unwrap();
executor.finish_waiting();

// Open the buffer on the host and see that the formatting worked
let buffer_a = project_a
@@ -2236,8 +2283,6 @@ async fn test_inlay_hint_refresh_is_forwarded(
let (workspace_a, cx_a) = client_a.build_workspace(&project_a, cx_a);
let (workspace_b, cx_b) = client_b.build_workspace(&project_b, cx_b);

cx_a.background_executor.start_waiting();

let editor_a = workspace_a
    .update_in(cx_a, |workspace, window, cx| {
        workspace.open_path((worktree_id, rel_path("main.rs")), None, true, window, cx)
@@ -2301,7 +2346,6 @@ async fn test_inlay_hint_refresh_is_forwarded(
    .next()
    .await
    .unwrap();
executor.finish_waiting();

executor.run_until_parked();
editor_a.update(cx_a, |editor, cx| {
@@ -2913,7 +2957,6 @@ async fn test_lsp_pull_diagnostics(
    .unwrap();

let (workspace_a, cx_a) = client_a.build_workspace(&project_a, cx_a);
executor.start_waiting();

// The host opens a rust file.
let _buffer_a = project_a
@@ -2051,6 +2051,9 @@ async fn test_following_to_channel_notes_without_a_shared_project(
    });
});

// Ensure client A's edits are synced to the server before client B starts following.
deterministic.run_until_parked();

// Client B follows client A.
workspace_b
    .update_in(cx_b, |workspace, window, cx| {
@@ -1111,7 +1111,8 @@ impl RandomizedTest for ProjectCollaborationTest {
let fs = fs.clone();
move |_, cx| {
    let background = cx.background_executor();
    let mut rng = background.rng();
    let rng = background.rng();
    let mut rng = rng.lock();
    let count = rng.random_range::<usize, _>(1..3);
    let files = fs.as_fake().files();
    let files = (0..count)
@@ -1137,7 +1138,8 @@ impl RandomizedTest for ProjectCollaborationTest {
move |_, cx| {
    let mut highlights = Vec::new();
    let background = cx.background_executor();
    let mut rng = background.rng();
    let rng = background.rng();
    let mut rng = rng.lock();

    let highlight_count = rng.random_range(1..=5);
    for _ in 0..highlight_count {
@@ -174,9 +174,7 @@ pub async fn run_randomized_test<T: RandomizedTest>(
}

drop(operation_channels);
executor.start_waiting();
futures::future::join_all(client_tasks).await;
executor.finish_waiting();

executor.run_until_parked();
T::on_quiesce(&mut server, &mut clients).await;
@@ -524,10 +522,8 @@ impl<T: RandomizedTest> TestPlan<T> {
server.forbid_connections();
server.disconnect_client(removed_peer_id);
deterministic.advance_clock(RECEIVE_TIMEOUT + RECONNECT_TIMEOUT);
deterministic.start_waiting();
log::info!("waiting for user {} to exit...", removed_user_id);
client_task.await;
deterministic.finish_waiting();
server.allow_connections();

for project in client.dev_server_projects().iter() {
@@ -579,17 +579,21 @@ async fn test_remote_server_debugger(
server_cx: &mut TestAppContext,
executor: BackgroundExecutor,
) {
eprintln!("[DEBUG] test_remote_server_debugger: START");
cx_a.update(|cx| {
release_channel::init(semver::Version::new(0, 0, 0), cx);
command_palette_hooks::init(cx);
zlog::init_test();
dap_adapters::init(cx);
});
eprintln!("[DEBUG] test_remote_server_debugger: cx_a init done");
server_cx.update(|cx| {
release_channel::init(semver::Version::new(0, 0, 0), cx);
dap_adapters::init(cx);
});
eprintln!("[DEBUG] test_remote_server_debugger: server_cx init done");
let (opts, server_ssh) = RemoteClient::fake_server(cx_a, server_cx);
eprintln!("[DEBUG] test_remote_server_debugger: fake_server created");
let remote_fs = FakeFs::new(server_cx.executor());
remote_fs
.insert_tree(
@@ -599,6 +603,7 @@ async fn test_remote_server_debugger(
}),
)
.await;
eprintln!("[DEBUG] test_remote_server_debugger: insert_tree done");

// User A connects to the remote project via SSH.
server_cx.update(HeadlessProject::init);
@@ -618,32 +623,43 @@ async fn test_remote_server_debugger(
cx,
)
});
eprintln!("[DEBUG] test_remote_server_debugger: headless_project created");

let client_ssh = RemoteClient::fake_client(opts, cx_a).await;
eprintln!("[DEBUG] test_remote_server_debugger: fake_client created");
eprintln!("[DEBUG] test_remote_server_debugger: starting TestServer");
let mut server = TestServer::start(server_cx.executor()).await;
eprintln!("[DEBUG] test_remote_server_debugger: TestServer started");
let client_a = server.create_client(cx_a, "user_a").await;
eprintln!("[DEBUG] test_remote_server_debugger: client_a created");
cx_a.update(|cx| {
debugger_ui::init(cx);
command_palette_hooks::init(cx);
});
eprintln!("[DEBUG] test_remote_server_debugger: building ssh project");
let (project_a, _) = client_a
.build_ssh_project(path!("/code"), client_ssh.clone(), cx_a)
.await;
eprintln!("[DEBUG] test_remote_server_debugger: ssh project built");

let (workspace, cx_a) = client_a.build_workspace(&project_a, cx_a);

eprintln!("[DEBUG] test_remote_server_debugger: loading debugger panel");
let debugger_panel = workspace
.update_in(cx_a, |_workspace, window, cx| {
cx.spawn_in(window, DebugPanel::load)
})
.await
.unwrap();
eprintln!("[DEBUG] test_remote_server_debugger: debugger panel loaded");

workspace.update_in(cx_a, |workspace, window, cx| {
workspace.add_panel(debugger_panel, window, cx);
});

eprintln!("[DEBUG] test_remote_server_debugger: calling run_until_parked (1)");
cx_a.run_until_parked();
eprintln!("[DEBUG] test_remote_server_debugger: run_until_parked (1) done");
let debug_panel = workspace
.update(cx_a, |workspace, cx| workspace.panel::<DebugPanel>(cx))
.unwrap();
@@ -653,8 +669,13 @@ async fn test_remote_server_debugger(
.downcast::<workspace::Workspace>()
.unwrap();

eprintln!("[DEBUG] test_remote_server_debugger: calling start_debug_session");
let session = debugger_ui::tests::start_debug_session(&workspace_window, cx_a, |_| {}).unwrap();
eprintln!(
"[DEBUG] test_remote_server_debugger: start_debug_session returned, calling run_until_parked (2)"
);
cx_a.run_until_parked();
eprintln!("[DEBUG] test_remote_server_debugger: run_until_parked (2) done");
debug_panel.update(cx_a, |debug_panel, cx| {
assert_eq!(
debug_panel.active_session().unwrap().read(cx).session(cx),
@@ -666,6 +687,7 @@ async fn test_remote_server_debugger(
assert_eq!(session.binary().unwrap().command.as_deref(), Some("ssh"));
});

eprintln!("[DEBUG] test_remote_server_debugger: shutting down session");
let shutdown_session = workspace.update(cx_a, |workspace, cx| {
workspace.project().update(cx, |project, cx| {
project.dap_store().update(cx, |dap_store, cx| {
@@ -678,7 +700,9 @@ async fn test_remote_server_debugger(
a.shutdown_processes(Some(proto::ShutdownRemoteServer {}), executor)
});

eprintln!("[DEBUG] test_remote_server_debugger: awaiting shutdown_session");
shutdown_session.await.unwrap();
eprintln!("[DEBUG] test_remote_server_debugger: DONE");
}

#[gpui::test]

@@ -9,8 +9,7 @@ use text::Bias;
use util::RandomCharIter;

fn to_tab_point_benchmark(c: &mut Criterion) {
let rng = StdRng::seed_from_u64(1);
let dispatcher = TestDispatcher::new(rng);
let dispatcher = TestDispatcher::new(1);
let cx = gpui::TestAppContext::build(dispatcher, None);

let create_tab_map = |length: usize| {
@@ -55,8 +54,7 @@ fn to_tab_point_benchmark(c: &mut Criterion) {
}

fn to_fold_point_benchmark(c: &mut Criterion) {
let rng = StdRng::seed_from_u64(1);
let dispatcher = TestDispatcher::new(rng);
let dispatcher = TestDispatcher::new(1);
let cx = gpui::TestAppContext::build(dispatcher, None);

let create_tab_map = |length: usize| {

@@ -116,7 +116,7 @@ fn editor_render(bencher: &mut Bencher<'_>, cx: &TestAppContext) {
}

pub fn benches() {
let dispatcher = TestDispatcher::new(StdRng::seed_from_u64(1));
let dispatcher = TestDispatcher::new(1);
let cx = gpui::TestAppContext::build(dispatcher, None);
cx.update(|cx| {
let store = SettingsStore::test(cx);

@@ -11705,7 +11705,6 @@ async fn test_document_format_during_save(cx: &mut TestAppContext) {
});
assert!(cx.read(|cx| editor.is_dirty(cx)));

cx.executor().start_waiting();
let fake_server = fake_servers.next().await.unwrap();

{
@@ -11735,7 +11734,6 @@ async fn test_document_format_during_save(cx: &mut TestAppContext) {
)
})
.unwrap();
cx.executor().start_waiting();
save.await;

assert_eq!(
@@ -11776,7 +11774,6 @@ async fn test_document_format_during_save(cx: &mut TestAppContext) {
})
.unwrap();
cx.executor().advance_clock(super::FORMAT_TIMEOUT);
cx.executor().start_waiting();
save.await;
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
@@ -11822,7 +11819,6 @@ async fn test_document_format_during_save(cx: &mut TestAppContext) {
)
})
.unwrap();
cx.executor().start_waiting();
save.await;
}
}
@@ -11889,7 +11885,6 @@ async fn test_redo_after_noop_format(cx: &mut TestAppContext) {
)
})
.unwrap();
cx.executor().start_waiting();
save.await;
assert!(!cx.read(|cx| editor.is_dirty(cx)));
}
@@ -12058,7 +12053,6 @@ async fn test_multibuffer_format_during_save(cx: &mut TestAppContext) {
});
cx.executor().run_until_parked();

cx.executor().start_waiting();
let save = multi_buffer_editor
.update_in(cx, |editor, window, cx| {
editor.save(
@@ -12320,7 +12314,6 @@ async fn setup_range_format_test(
build_editor_with_project(project.clone(), buffer, window, cx)
});

cx.executor().start_waiting();
let fake_server = fake_servers.next().await.unwrap();

(project, editor, cx, fake_server)
@@ -12362,7 +12355,6 @@ async fn test_range_format_on_save_success(cx: &mut TestAppContext) {
})
.next()
.await;
cx.executor().start_waiting();
save.await;
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
@@ -12405,7 +12397,6 @@ async fn test_range_format_on_save_timeout(cx: &mut TestAppContext) {
})
.unwrap();
cx.executor().advance_clock(super::FORMAT_TIMEOUT);
cx.executor().start_waiting();
save.await;
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
@@ -12437,7 +12428,6 @@ async fn test_range_format_not_called_for_clean_buffer(cx: &mut TestAppContext)
panic!("Should not be invoked");
})
.next();
cx.executor().start_waiting();
save.await;
cx.run_until_parked();
}
@@ -12544,7 +12534,6 @@ async fn test_document_format_manual_trigger(cx: &mut TestAppContext) {
editor.set_text("one\ntwo\nthree\n", window, cx)
});

cx.executor().start_waiting();
let fake_server = fake_servers.next().await.unwrap();

let format = editor
@@ -12572,7 +12561,6 @@ async fn test_document_format_manual_trigger(cx: &mut TestAppContext) {
})
.next()
.await;
cx.executor().start_waiting();
format.await;
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
@@ -12605,7 +12593,6 @@ async fn test_document_format_manual_trigger(cx: &mut TestAppContext) {
})
.unwrap();
cx.executor().advance_clock(super::FORMAT_TIMEOUT);
cx.executor().start_waiting();
format.await;
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
@@ -12660,8 +12647,6 @@ async fn test_multiple_formatters(cx: &mut TestAppContext) {
build_editor_with_project(project.clone(), buffer, window, cx)
});

cx.executor().start_waiting();

let fake_server = fake_servers.next().await.unwrap();
fake_server.set_request_handler::<lsp::request::Formatting, _, _>(
move |_params, _| async move {
@@ -12763,7 +12748,6 @@ async fn test_multiple_formatters(cx: &mut TestAppContext) {
}
});

cx.executor().start_waiting();
editor
.update_in(cx, |editor, window, cx| {
editor.perform_format(
@@ -12931,7 +12915,6 @@ async fn test_organize_imports_manual_trigger(cx: &mut TestAppContext) {
)
});

cx.executor().start_waiting();
let fake_server = fake_servers.next().await.unwrap();

let format = editor
@@ -12977,7 +12960,6 @@ async fn test_organize_imports_manual_trigger(cx: &mut TestAppContext) {
})
.next()
.await;
cx.executor().start_waiting();
format.await;
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
@@ -13013,7 +12995,6 @@ async fn test_organize_imports_manual_trigger(cx: &mut TestAppContext) {
})
.unwrap();
cx.executor().advance_clock(super::CODE_ACTION_TIMEOUT);
cx.executor().start_waiting();
format.await;
assert_eq!(
editor.update(cx, |editor, cx| editor.text(cx)),
@@ -13066,9 +13047,7 @@ async fn test_concurrent_format_requests(cx: &mut TestAppContext) {

// Wait for both format requests to complete
cx.executor().advance_clock(Duration::from_millis(200));
cx.executor().start_waiting();
format_1.await.unwrap();
cx.executor().start_waiting();
format_2.await.unwrap();

// The formatting edits only happens once.
@@ -18068,7 +18047,6 @@ async fn test_on_type_formatting_not_triggered(cx: &mut TestAppContext) {
.downcast::<Editor>()
.unwrap();

cx.executor().start_waiting();
let fake_server = fake_servers.next().await.unwrap();

fake_server.set_request_handler::<lsp::request::OnTypeFormatting, _, _>(

@@ -292,6 +292,7 @@ impl Editor {
};

let mut visible_excerpts = self.visible_excerpts(true, cx);

let mut invalidate_hints_for_buffers = HashSet::default();
let ignore_previous_fetches = match reason {
InlayHintRefreshReason::ModifiersChanged(_)
@@ -348,6 +349,7 @@ impl Editor {
let mut buffers_to_query = HashMap::default();
for (_, (buffer, buffer_version, visible_range)) in visible_excerpts {
let buffer_id = buffer.read(cx).remote_id();

if !self.registered_buffers.contains_key(&buffer_id) {
continue;
}
@@ -3656,35 +3658,49 @@ let c = 3;"#
})
.await
.unwrap();
let editor =
cx.add_window(|window, cx| Editor::for_buffer(buffer, Some(project), window, cx));

// Use a VisualTestContext and explicitly establish a viewport on the editor (the production
// trigger for `NewLinesShown` / inlay hint refresh) by setting visible line/column counts.
let (editor_entity, cx) =
cx.add_window_view(|window, cx| Editor::for_buffer(buffer, Some(project), window, cx));

editor_entity.update_in(cx, |editor, window, cx| {
// Establish a viewport. The exact values are not important for this test; we just need
// the editor to consider itself visible so the refresh pipeline runs.
editor.set_visible_line_count(50.0, window, cx);
editor.set_visible_column_count(120.0);

// Explicitly trigger a refresh now that the viewport exists.
editor.refresh_inlay_hints(InlayHintRefreshReason::NewLinesShown, cx);
});
cx.executor().run_until_parked();
editor
.update(cx, |editor, window, cx| {
editor.change_selections(SelectionEffects::no_scroll(), window, cx, |s| {
s.select_ranges([Point::new(10, 0)..Point::new(10, 0)])
})
})
.unwrap();

editor_entity.update_in(cx, |editor, window, cx| {
editor.change_selections(SelectionEffects::no_scroll(), window, cx, |s| {
s.select_ranges([Point::new(10, 0)..Point::new(10, 0)])
});
});
cx.executor().run_until_parked();
editor
.update(cx, |editor, _window, cx| {
let expected_hints = vec![
"move".to_string(),
"(".to_string(),
"&x".to_string(),
") ".to_string(),
") ".to_string(),
];
assert_eq!(
expected_hints,
cached_hint_labels(editor, cx),
"Editor inlay hints should repeat server's order when placed at the same spot"
);
assert_eq!(expected_hints, visible_hint_labels(editor, cx));
})
.unwrap();

// Allow any async inlay hint request/response work to complete.
cx.executor().advance_clock(Duration::from_millis(100));
cx.executor().run_until_parked();

editor_entity.update(cx, |editor, cx| {
let expected_hints = vec![
"move".to_string(),
"(".to_string(),
"&x".to_string(),
") ".to_string(),
") ".to_string(),
];
assert_eq!(
expected_hints,
cached_hint_labels(editor, cx),
"Editor inlay hints should repeat server's order when placed at the same spot"
);
assert_eq!(expected_hints, visible_hint_labels(editor, cx));
});
}

#[gpui::test]

@@ -11,7 +11,7 @@ use fs::{Fs, RealFs};
use gpui::{TestAppContext, TestDispatcher};
use http_client::{FakeHttpClient, Response};
use node_runtime::NodeRuntime;
use rand::{SeedableRng, rngs::StdRng};

use reqwest_client::ReqwestClient;
use serde_json::json;
use settings::SettingsStore;
@@ -52,7 +52,7 @@ fn extension_benchmarks(c: &mut Criterion) {

fn init() -> TestAppContext {
const SEED: u64 = 9999;
let dispatcher = TestDispatcher::new(StdRng::seed_from_u64(SEED));
let dispatcher = TestDispatcher::new(SEED);
let cx = TestAppContext::build(dispatcher, None);
cx.executor().allow_parking();
cx.update(|cx| {

@@ -534,6 +534,29 @@ async fn test_extension_store_with_test_extension(cx: &mut TestAppContext) {
log::info!("Initializing test");
init_test(cx);
cx.executor().allow_parking();
log::info!("[test_extension_store_with_test_extension] after init_test + allow_parking");

fn panic_timeout<T>(what: &str, seconds: u64) -> T {
panic!(
"[test_extension_store_with_test_extension] timed out after {seconds}s while {what}"
);
}

async fn await_or_timeout<T>(
what: &'static str,
seconds: u64,
future: impl std::future::Future<Output = T>,
) -> T {
use futures::FutureExt as _;
use gpui::Timer;

let timeout = Timer::after(std::time::Duration::from_secs(seconds));

futures::select! {
output = future.fuse() => output,
_ = futures::FutureExt::fuse(timeout) => panic_timeout(what, seconds),
}
}

let root_dir = Path::new(env!("CARGO_MANIFEST_DIR"))
.parent()
@@ -558,7 +581,14 @@ async fn test_extension_store_with_test_extension(cx: &mut TestAppContext) {

log::info!("Setting up test");

let project = Project::test(fs.clone(), [project_dir.as_path()], cx).await;
log::info!("[test_extension_store_with_test_extension] creating Project::test");
let project = await_or_timeout(
"awaiting Project::test",
5,
Project::test(fs.clone(), [project_dir.as_path()], cx),
)
.await;
log::info!("[test_extension_store_with_test_extension] created Project::test");

let proxy = Arc::new(ExtensionHostProxy::new());
let theme_registry = Arc::new(ThemeRegistry::new(Box::new(())));
@@ -684,6 +714,10 @@ async fn test_extension_store_with_test_extension(cx: &mut TestAppContext) {
let _task = cx.executor().spawn(async move {
while let Some(event) = events.next().await {
if let Event::StartedReloading = event {
log::info!(
"[test_extension_store_with_test_extension] saw Event::StartedReloading; advancing clock by {:?}",
RELOAD_DEBOUNCE_DURATION
);
executor.advance_clock(RELOAD_DEBOUNCE_DURATION);
}
}
@@ -698,13 +732,22 @@ async fn test_extension_store_with_test_extension(cx: &mut TestAppContext) {
.detach();
});

extension_store
.update(cx, |store, cx| {
log::info!(
"[test_extension_store_with_test_extension] install_dev_extension starting: {:?}",
test_extension_dir
);
await_or_timeout(
"awaiting install_dev_extension",
5,
extension_store.update(cx, |store, cx| {
store.install_dev_extension(test_extension_dir.clone(), cx)
})
.await
.unwrap();
}),
)
.await
.unwrap();
log::info!("[test_extension_store_with_test_extension] install_dev_extension completed");

log::info!("[test_extension_store_with_test_extension] registering fake LSP server: gleam");
let mut fake_servers = language_registry.register_fake_lsp_server(
LanguageServerName("gleam".into()),
lsp::ServerCapabilities {
@@ -713,15 +756,36 @@ async fn test_extension_store_with_test_extension(cx: &mut TestAppContext) {
},
None,
);
log::info!("[test_extension_store_with_test_extension] registered fake LSP server: gleam");
log::warn!("[test_extension_store_with_test_extension] after register_fake_lsp_server; running until parked");
cx.executor().run_until_parked();
log::warn!("[test_extension_store_with_test_extension] after run_until_parked (post fake server registration)");

let (buffer, _handle) = project
.update(cx, |project, cx| {
log::warn!("[test_extension_store_with_test_extension] about to open buffer with LSP");
log::info!("[test_extension_store_with_test_extension] opening buffer with LSP");
let (buffer, _handle) = await_or_timeout(
"awaiting open_local_buffer_with_lsp",
5,
project.update(cx, |project, cx| {
project.open_local_buffer_with_lsp(project_dir.join("test.gleam"), cx)
})
}),
)
.await
.unwrap();
log::info!("[test_extension_store_with_test_extension] opened buffer with LSP");
log::warn!("[test_extension_store_with_test_extension] opened buffer; running until parked before awaiting fake server");
cx.executor().run_until_parked();
log::warn!("[test_extension_store_with_test_extension] done run_until_parked; awaiting first fake LSP server spawn");

log::info!("[test_extension_store_with_test_extension] awaiting first fake LSP server spawn");
let fake_server = await_or_timeout("awaiting first fake server spawn", 10, fake_servers.next())
.await
.unwrap();

let fake_server = fake_servers.next().await.unwrap();
log::info!(
"[test_extension_store_with_test_extension] got first fake LSP server spawn (binary={:?}, args={:?})",
fake_server.binary.path,
fake_server.binary.arguments
);
let work_dir = extensions_dir.join(format!("work/{test_extension_id}"));
let expected_server_path = work_dir.join("gleam-v1.2.3/gleam");
let expected_binary_contents = language_server_version.lock().binary_contents.clone();
@@ -735,16 +799,30 @@ async fn test_extension_store_with_test_extension(cx: &mut TestAppContext) {
assert_eq!(fake_server.binary.path, expected_server_path);
assert_eq!(fake_server.binary.arguments, [OsString::from("lsp")]);
assert_eq!(
fs.load(&expected_server_path).await.unwrap(),
await_or_timeout(
"awaiting fs.load(expected_server_path)",
5,
fs.load(&expected_server_path)
)
.await
.unwrap(),
expected_binary_contents
);
assert_eq!(language_server_version.lock().http_request_count, 2);
assert_eq!(
[
status_updates.next().await.unwrap(),
status_updates.next().await.unwrap(),
status_updates.next().await.unwrap(),
status_updates.next().await.unwrap(),
await_or_timeout("awaiting status_updates #1", 5, status_updates.next())
.await
.unwrap(),
await_or_timeout("awaiting status_updates #2", 5, status_updates.next())
.await
.unwrap(),
await_or_timeout("awaiting status_updates #3", 5, status_updates.next())
.await
.unwrap(),
await_or_timeout("awaiting status_updates #4", 5, status_updates.next())
.await
.unwrap(),
],
[
(
@@ -793,16 +871,19 @@ async fn test_extension_store_with_test_extension(cx: &mut TestAppContext) {
])))
});

let completion_labels = project
.update(cx, |project, cx| {
let completion_labels = await_or_timeout(
"awaiting completions",
5,
project.update(cx, |project, cx| {
project.completions(&buffer, 0, DEFAULT_COMPLETION_CONTEXT, cx)
})
.await
.unwrap()
.into_iter()
.flat_map(|response| response.completions)
.map(|c| c.label.text)
.collect::<Vec<_>>();
}),
)
.await
.unwrap()
.into_iter()
.flat_map(|response| response.completions)
.map(|c| c.label.text)
.collect::<Vec<_>>();
assert_eq!(
completion_labels,
[
@@ -826,40 +907,67 @@ async fn test_extension_store_with_test_extension(cx: &mut TestAppContext) {

// The extension has cached the binary path, and does not attempt
// to reinstall it.
let fake_server = fake_servers.next().await.unwrap();
let fake_server =
await_or_timeout("awaiting second fake server spawn", 5, fake_servers.next())
.await
.unwrap();
assert_eq!(fake_server.binary.path, expected_server_path);
assert_eq!(
fs.load(&expected_server_path).await.unwrap(),
await_or_timeout(
"awaiting fs.load(expected_server_path) after restart",
5,
fs.load(&expected_server_path)
)
.await
.unwrap(),
expected_binary_contents
);
assert_eq!(language_server_version.lock().http_request_count, 0);

// Reload the extension, clearing its cache.
// Start a new instance of the language server.
extension_store
.update(cx, |store, cx| {
await_or_timeout(
"awaiting extension_store.reload(test-extension)",
5,
extension_store.update(cx, |store, cx| {
store.reload(Some("test-extension".into()), cx)
})
.await;
}),
)
.await;
cx.executor().run_until_parked();
project.update(cx, |project, cx| {
project.restart_language_servers_for_buffers(vec![buffer.clone()], HashSet::default(), cx)
});

// The extension re-fetches the latest version of the language server.
let fake_server = fake_servers.next().await.unwrap();
let fake_server = await_or_timeout("awaiting third fake server spawn", 5, fake_servers.next())
.await
.unwrap();
let new_expected_server_path =
extensions_dir.join(format!("work/{test_extension_id}/gleam-v2.0.0/gleam"));
let expected_binary_contents = language_server_version.lock().binary_contents.clone();
assert_eq!(fake_server.binary.path, new_expected_server_path);
assert_eq!(fake_server.binary.arguments, [OsString::from("lsp")]);
assert_eq!(
fs.load(&new_expected_server_path).await.unwrap(),
await_or_timeout(
"awaiting fs.load(new_expected_server_path)",
5,
fs.load(&new_expected_server_path)
)
.await
.unwrap(),
expected_binary_contents
);

// The old language server directory has been cleaned up.
assert!(fs.metadata(&expected_server_path).await.unwrap().is_none());
assert!(await_or_timeout(
"awaiting fs.metadata(expected_server_path)",
5,
fs.metadata(&expected_server_path)
)
.await
.unwrap()
.is_none());
}

fn init_test(cx: &mut TestAppContext) {

@@ -14,21 +14,15 @@ use git::{
UnmergedStatus,
},
};
use gpui::{AsyncApp, BackgroundExecutor, SharedString, Task, TaskLabel};
use gpui::{AsyncApp, BackgroundExecutor, SharedString, Task};
use ignore::gitignore::GitignoreBuilder;
use parking_lot::Mutex;
use rope::Rope;
use smol::future::FutureExt as _;
use std::{
path::PathBuf,
sync::{Arc, LazyLock},
};
use std::{path::PathBuf, sync::Arc};
use text::LineEnding;
use util::{paths::PathStyle, rel_path::RelPath};

pub static LOAD_INDEX_TEXT_TASK: LazyLock<TaskLabel> = LazyLock::new(TaskLabel::new);
pub static LOAD_HEAD_TEXT_TASK: LazyLock<TaskLabel> = LazyLock::new(TaskLabel::new);

#[derive(Clone)]
pub struct FakeGitRepository {
pub(crate) fs: Arc<FakeFs>,
@@ -104,9 +98,7 @@ impl GitRepository for FakeGitRepository {
.context("not present in index")
.cloned()
});
self.executor
.spawn_labeled(*LOAD_INDEX_TEXT_TASK, async move { fut.await.ok() })
.boxed()
self.executor.spawn(async move { fut.await.ok() }).boxed()
}

fn load_committed_text(&self, path: RepoPath) -> BoxFuture<'_, Option<String>> {
@@ -117,9 +109,7 @@ impl GitRepository for FakeGitRepository {
.context("not present in HEAD")
.cloned()
});
self.executor
.spawn_labeled(*LOAD_HEAD_TEXT_TASK, async move { fut.await.ok() })
.boxed()
self.executor.spawn(async move { fut.await.ok() }).boxed()
}

fn load_blob_content(&self, oid: git::Oid) -> BoxFuture<'_, Result<String>> {
@@ -656,7 +646,7 @@ impl GitRepository for FakeGitRepository {
let repository_dir_path = self.repository_dir_path.parent().unwrap().to_path_buf();
async move {
executor.simulate_random_delay().await;
let oid = git::Oid::random(&mut executor.rng());
let oid = git::Oid::random(&mut *executor.rng().lock());
let entry = fs.entry(&repository_dir_path)?;
checkpoints.lock().insert(oid, entry);
Ok(GitRepositoryCheckpoint { commit_sha: oid })

@@ -63,9 +63,6 @@ use smol::io::AsyncReadExt;
#[cfg(any(test, feature = "test-support"))]
use std::ffi::OsStr;

#[cfg(any(test, feature = "test-support"))]
pub use fake_git_repo::{LOAD_HEAD_TEXT_TASK, LOAD_INDEX_TEXT_TASK};

pub trait Watcher: Send + Sync {
fn add(&self, path: &Path) -> Result<()>;
fn remove(&self, path: &Path) -> Result<()>;

@@ -107,10 +107,12 @@ num_cpus = "1.13"
parking = "2.0.0"
parking_lot.workspace = true
postage.workspace = true
chrono.workspace = true
profiling.workspace = true
rand.workspace = true
raw-window-handle = "0.6"
refineable.workspace = true
scheduler.workspace = true
resvg = { version = "0.45.0", default-features = false, features = [
"text",
"system-fonts",
@@ -251,6 +253,7 @@ lyon = { version = "1.0", features = ["extra"] }
pretty_assertions.workspace = true
rand.workspace = true
reqwest_client = { workspace = true, features = ["test-support"] }
scheduler = { workspace = true, features = ["test-support"] }
unicode-segmentation.workspace = true
util = { workspace = true, features = ["test-support"] }

@@ -34,11 +34,11 @@ use util::{ResultExt, debug_panic};
#[cfg(any(feature = "inspector", debug_assertions))]
use crate::InspectorElementRegistry;
use crate::{
Action, ActionBuildError, ActionRegistry, Any, AnyView, AnyWindowHandle, AppContext, Asset,
AssetSource, BackgroundExecutor, Bounds, ClipboardItem, CursorStyle, DispatchPhase, DisplayId,
EventEmitter, FocusHandle, FocusMap, ForegroundExecutor, Global, KeyBinding, KeyContext,
Keymap, Keystroke, LayoutId, Menu, MenuItem, OwnedMenu, PathPromptOptions, Pixels, Platform,
PlatformDisplay, PlatformKeyboardLayout, PlatformKeyboardMapper, Point, Priority,
Action, ActionBuildError, ActionRegistry, Any, AnyView, AnyWindowHandle, AppContext, Arena,
Asset, AssetSource, BackgroundExecutor, Bounds, ClipboardItem, CursorStyle, DispatchPhase,
DisplayId, EventEmitter, FocusHandle, FocusMap, ForegroundExecutor, Global, KeyBinding,
KeyContext, Keymap, Keystroke, LayoutId, Menu, MenuItem, OwnedMenu, PathPromptOptions, Pixels,
Platform, PlatformDisplay, PlatformKeyboardLayout, PlatformKeyboardMapper, Point, Priority,
PromptBuilder, PromptButton, PromptHandle, PromptLevel, Render, RenderImage,
RenderablePromptHandle, Reservation, ScreenCaptureSource, SharedString, SubscriberSet,
Subscription, SvgRenderer, Task, TextSystem, Window, WindowAppearance, WindowHandle, WindowId,
@@ -637,6 +637,9 @@ pub struct App {
pub(crate) name: Option<&'static str>,
quit_mode: QuitMode,
quitting: bool,
/// Per-App element arena. This isolates element allocations between different
/// App instances (important for tests where multiple Apps run concurrently).
pub(crate) element_arena: RefCell<Arena>,
}

impl App {
@@ -713,6 +716,7 @@ impl App {

#[cfg(any(test, feature = "test-support", debug_assertions))]
name: None,
element_arena: RefCell::new(Arena::new(1024 * 1024)),
}),
});

@@ -9,7 +9,7 @@ use crate::{
 };
 use anyhow::{anyhow, bail};
 use futures::{Stream, StreamExt, channel::oneshot};
-use rand::{SeedableRng, rngs::StdRng};
 use std::{
     cell::RefCell, future::Future, ops::Deref, path::PathBuf, rc::Rc, sync::Arc, time::Duration,
 };
@@ -154,7 +154,7 @@ impl TestAppContext {

     /// Create a single TestAppContext, for non-multi-client tests
     pub fn single() -> Self {
-        let dispatcher = TestDispatcher::new(StdRng::seed_from_u64(0));
+        let dispatcher = TestDispatcher::new(0);
         Self::build(dispatcher, None)
     }
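The hunk above swaps an explicit `StdRng::seed_from_u64(0)` for a bare seed passed to `TestDispatcher::new`. Either way, the point is reproducibility: the same seed must yield the same task interleaving. A minimal, self-contained sketch of that property, using a toy linear congruential generator as a stand-in for the real seeded RNG (the `Lcg` and `scheduled_order` names are illustrative, not part of GPUI):

```rust
// Toy linear congruential generator standing in for the seeded RNG a test
// dispatcher uses; same seed => same sequence => same task order.
struct Lcg(u64);

impl Lcg {
    fn new(seed: u64) -> Self {
        Lcg(seed ^ 0x9E3779B97F4A7C15)
    }
    fn next(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
}

// Shuffle task ids with the seeded generator (Fisher-Yates), simulating the
// "deterministic but arbitrary" order a seeded test scheduler produces.
fn scheduled_order(seed: u64, num_tasks: usize) -> Vec<usize> {
    let mut rng = Lcg::new(seed);
    let mut order: Vec<usize> = (0..num_tasks).collect();
    for i in (1..order.len()).rev() {
        let j = (rng.next() % (i as u64 + 1)) as usize;
        order.swap(i, j);
    }
    order
}

fn main() {
    // Reproducibility: identical seeds produce identical interleavings.
    assert_eq!(scheduled_order(0, 8), scheduled_order(0, 8));
    println!("{:?}", scheduled_order(0, 8));
}
```

This is why a failing seed can be replayed: rerunning the test with the same seed replays the exact same interleaving.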

@@ -663,11 +663,9 @@ impl<V> Entity<V> {
                 }
             }

-            cx.borrow().background_executor().start_waiting();
             rx.recv()
                 .await
                 .expect("view dropped with pending condition");
-            cx.borrow().background_executor().finish_waiting();
         }
     })
     .await
@@ -32,9 +32,9 @@
 //! your own custom layout algorithm or rendering a code editor.

 use crate::{
-    App, ArenaBox, AvailableSpace, Bounds, Context, DispatchNodeId, ELEMENT_ARENA, ElementId,
-    FocusHandle, InspectorElementId, LayoutId, Pixels, Point, Size, Style, Window,
-    util::FluentBuilder,
+    App, ArenaBox, AvailableSpace, Bounds, Context, DispatchNodeId, ElementId, FocusHandle,
+    InspectorElementId, LayoutId, Pixels, Point, Size, Style, Window, util::FluentBuilder,
+    window::with_element_arena,
 };
 use derive_more::{Deref, DerefMut};
 use std::{
@@ -579,8 +579,7 @@ impl AnyElement {
         E: 'static + Element,
         E::RequestLayoutState: Any,
     {
-        let element = ELEMENT_ARENA
-            .with_borrow_mut(|arena| arena.alloc(|| Drawable::new(element)))
+        let element = with_element_arena(|arena| arena.alloc(|| Drawable::new(element)))
             .map(|element| element as &mut dyn ElementObject);
         AnyElement(element)
     }
@@ -1,98 +1,38 @@
-use crate::{App, PlatformDispatcher, RunnableMeta, RunnableVariant, TaskTiming, profiler};
-use async_task::Runnable;
+use crate::{App, PlatformDispatcher, PlatformScheduler};
 use futures::channel::mpsc;
+use parking_lot::{Condvar, Mutex};
+use scheduler::Scheduler;
 use smol::prelude::*;
 use std::{
     fmt::Debug,
     future::Future,
     marker::PhantomData,
-    mem::{self, ManuallyDrop},
-    num::NonZeroUsize,
-    panic::Location,
+    mem,
     pin::Pin,
     rc::Rc,
-    sync::{
-        Arc,
-        atomic::{AtomicUsize, Ordering},
-    },
-    task::{Context, Poll},
-    thread::{self, ThreadId},
+    sync::Arc,
     time::{Duration, Instant},
 };
 use util::TryFutureExt;
 use waker_fn::waker_fn;

-#[cfg(any(test, feature = "test-support"))]
-use rand::rngs::StdRng;
+pub use scheduler::Priority;
 /// A pointer to the executor that is currently running,
 /// for spawning background tasks.
 #[derive(Clone)]
 pub struct BackgroundExecutor {
-    #[doc(hidden)]
-    pub dispatcher: Arc<dyn PlatformDispatcher>,
+    scheduler: Arc<dyn Scheduler>,
+    dispatcher: Arc<dyn PlatformDispatcher>,
 }

 /// A pointer to the executor that is currently running,
 /// for spawning tasks on the main thread.
 ///
 /// This is intentionally `!Send` via the `not_send` marker field. This is because
 /// `ForegroundExecutor::spawn` does not require `Send` but checks at runtime that the future is
 /// only polled from the same thread it was spawned from. These checks would fail when spawning
 /// foreground tasks from background threads.
 #[derive(Clone)]
 pub struct ForegroundExecutor {
-    #[doc(hidden)]
-    pub dispatcher: Arc<dyn PlatformDispatcher>,
+    inner: scheduler::ForegroundExecutor,
+    dispatcher: Arc<dyn PlatformDispatcher>,
     not_send: PhantomData<Rc<()>>,
 }
-/// Realtime task priority
-#[derive(Clone, Copy, Debug, Default, PartialEq, Eq)]
-#[repr(u8)]
-pub enum RealtimePriority {
-    /// Audio task
-    Audio,
-    /// Other realtime task
-    #[default]
-    Other,
-}
-
-/// Task priority
-#[derive(Clone, Copy, Debug, Default, PartialEq, Eq)]
-#[repr(u8)]
-pub enum Priority {
-    /// Realtime priority
-    ///
-    /// Spawning a task with this priority will spin it off on a separate thread dedicated just to that task.
-    Realtime(RealtimePriority),
-    /// High priority
-    ///
-    /// Only use for tasks that are critical to the user experience / responsiveness of the editor.
-    High,
-    /// Medium priority, probably suits most of your use cases.
-    #[default]
-    Medium,
-    /// Low priority
-    ///
-    /// Prioritize this for background work that can come in large quantities
-    /// to not starve the executor of resources for high priority tasks
-    Low,
-}
-
-impl Priority {
-    #[allow(dead_code)]
-    pub(crate) const fn probability(&self) -> u32 {
-        match self {
-            // realtime priorities are not considered for probability scheduling
-            Priority::Realtime(_) => 0,
-            Priority::High => 60,
-            Priority::Medium => 30,
-            Priority::Low => 10,
-        }
-    }
-}
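The removed `probability` table above fed a weighted random draw: high-priority work was picked more often than medium and low, without ever starving the lower tiers entirely. A std-only sketch of that selection, assuming the 60/30/10 weights and a pre-drawn random number (names are illustrative; the real logic now lives in the `scheduler` crate):

```rust
// Priorities and the weights from the removed probability table above.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
enum Priority {
    High,
    Medium,
    Low,
}

const WEIGHTED: [(Priority, u32); 3] = [
    (Priority::High, 60),
    (Priority::Medium, 30),
    (Priority::Low, 10),
];

// Map a draw in 0..total_weight to a priority by walking cumulative weights:
// 0..60 -> High, 60..90 -> Medium, 90..100 -> Low.
fn pick(draw: u32) -> Priority {
    let total: u32 = WEIGHTED.iter().map(|(_, w)| w).sum();
    let mut r = draw % total;
    for (priority, weight) in WEIGHTED {
        if r < weight {
            return priority;
        }
        r -= weight;
    }
    unreachable!("draw % total always falls inside the cumulative weights")
}

fn main() {
    assert_eq!(pick(0), Priority::High);
    assert_eq!(pick(60), Priority::Medium);
    assert_eq!(pick(95), Priority::Low);
}
```

Low-priority work keeps a 10% share of draws, which is what "do not starve the executor" means in practice.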

 /// Task is a primitive that allows work to happen in the background.
 ///
 /// It implements [`Future`] so you can `.await` on it.
@@ -101,39 +41,36 @@ impl Priority {
 /// the task to continue running, but with no way to return a value.
 #[must_use]
 #[derive(Debug)]
-pub struct Task<T>(TaskState<T>);
-
-#[derive(Debug)]
-enum TaskState<T> {
-    /// A task that is ready to return a value
-    Ready(Option<T>),
-
-    /// A task that is currently running.
-    Spawned(async_task::Task<T, RunnableMeta>),
-}
+pub struct Task<T>(scheduler::Task<T>);

 impl<T> Task<T> {
-    /// Creates a new task that will resolve with the value
+    /// Creates a new task that will resolve with the value.
     pub fn ready(val: T) -> Self {
-        Task(TaskState::Ready(Some(val)))
+        Task(scheduler::Task::ready(val))
     }

+    /// Returns true if the task has completed or was created with `Task::ready`.
+    pub fn is_ready(&self) -> bool {
+        self.0.is_ready()
+    }
+
-    /// Detaching a task runs it to completion in the background
+    /// Detaching a task runs it to completion in the background.
     pub fn detach(self) {
-        match self {
-            Task(TaskState::Ready(_)) => {}
-            Task(TaskState::Spawned(task)) => task.detach(),
-        }
+        self.0.detach()
     }
+
+    /// Wraps a scheduler::Task.
+    pub fn from_scheduler(task: scheduler::Task<T>) -> Self {
+        Task(task)
+    }
 }

-impl<E, T> Task<Result<T, E>>
+impl<T, E> Task<Result<T, E>>
 where
     T: 'static,
     E: 'static + Debug,
 {
-    /// Run the task to completion in the background and log any
-    /// errors that occur.
+    /// Run the task to completion in the background and log any errors that occur.
     #[track_caller]
     pub fn detach_and_log_err(self, cx: &App) {
         let location = core::panic::Location::caller();
@@ -143,53 +80,37 @@ where
     }
 }

-impl<T> Future for Task<T> {
+impl<T> std::future::Future for Task<T> {
     type Output = T;

-    fn poll(self: Pin<&mut Self>, cx: &mut Context) -> Poll<Self::Output> {
-        match unsafe { self.get_unchecked_mut() } {
-            Task(TaskState::Ready(val)) => Poll::Ready(val.take().unwrap()),
-            Task(TaskState::Spawned(task)) => task.poll(cx),
-        }
+    fn poll(
+        self: std::pin::Pin<&mut Self>,
+        cx: &mut std::task::Context<'_>,
+    ) -> std::task::Poll<Self::Output> {
+        // SAFETY: Task is a repr(transparent) wrapper around scheduler::Task,
+        // and we're just projecting the pin through to the inner task.
+        let inner = unsafe { self.map_unchecked_mut(|t| &mut t.0) };
+        inner.poll(cx)
     }
 }
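The new `poll` forwards to the wrapped `scheduler::Task` by projecting the `Pin` through to field 0 with `map_unchecked_mut`. A self-contained sketch of that pin-projection pattern for a generic single-field wrapper, polled once with a hand-rolled no-op waker (all names here are illustrative, not GPUI APIs):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A newtype that forwards `poll` to its inner future, mirroring how the
// gpui `Task<T>` above delegates to `scheduler::Task<T>`.
struct Wrapper<F>(F);

impl<F: Future> Future for Wrapper<F> {
    type Output = F::Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // SAFETY: the inner future is never moved out of the wrapper after it
        // has been pinned, so projecting the pin to field 0 is sound.
        let inner = unsafe { self.map_unchecked_mut(|w| &mut w.0) };
        inner.poll(cx)
    }
}

// A no-op waker so we can poll a future without a real executor.
fn noop_waker() -> Waker {
    fn clone_raw(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn no_op(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone_raw, no_op, no_op, no_op);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn poll_once<F: Future>(future: F) -> Poll<F::Output> {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut pinned = Box::pin(future);
    pinned.as_mut().poll(&mut cx)
}

fn main() {
    // An already-ready future resolves on the first poll through the wrapper.
    assert_eq!(poll_once(Wrapper(async { 42 })), Poll::Ready(42));
}
```

The `unsafe` is confined to the projection; everything else is ordinary `Future` plumbing.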

-/// A task label is an opaque identifier that you can use to
-/// refer to a task in tests.
-#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
-pub struct TaskLabel(NonZeroUsize);
-
-impl Default for TaskLabel {
-    fn default() -> Self {
-        Self::new()
-    }
-}
-
-impl TaskLabel {
-    /// Construct a new task label.
-    pub fn new() -> Self {
-        static NEXT_TASK_LABEL: AtomicUsize = AtomicUsize::new(1);
-        Self(
-            NEXT_TASK_LABEL
-                .fetch_add(1, Ordering::SeqCst)
-                .try_into()
-                .unwrap(),
-        )
-    }
-}
-
-type AnyLocalFuture<R> = Pin<Box<dyn 'static + Future<Output = R>>>;
-
-type AnyFuture<R> = Pin<Box<dyn 'static + Send + Future<Output = R>>>;

 /// BackgroundExecutor lets you run things on background threads.
 /// In production this is a thread pool with no ordering guarantees.
 /// In tests this is simulated by running tasks one by one in a deterministic
 /// (but arbitrary) order controlled by the `SEED` environment variable.
 impl BackgroundExecutor {
-    #[doc(hidden)]
+    /// Creates a new BackgroundExecutor from the given PlatformDispatcher.
     pub fn new(dispatcher: Arc<dyn PlatformDispatcher>) -> Self {
-        Self { dispatcher }
+        #[cfg(any(test, feature = "test-support"))]
+        let scheduler: Arc<dyn Scheduler> = if let Some(test_dispatcher) = dispatcher.as_test() {
+            test_dispatcher.scheduler().clone()
+        } else {
+            Arc::new(PlatformScheduler::new(dispatcher.clone()))
+        };
+
+        #[cfg(not(any(test, feature = "test-support")))]
+        let scheduler: Arc<dyn Scheduler> = Arc::new(PlatformScheduler::new(dispatcher.clone()));
+
+        Self {
+            scheduler,
+            dispatcher,
+        }
     }

     /// Enqueues the given future to be run to completion on a background thread.
@@ -201,7 +122,15 @@ impl BackgroundExecutor {
         self.spawn_with_priority(Priority::default(), future)
     }

-    /// Enqueues the given future to be run to completion on a background thread.
+    /// Enqueues the given future to be run to completion on a background thread with the given priority.
+    ///
+    /// `Priority::Realtime` is currently treated as `Priority::High`.
+    ///
+    /// This is intentionally *not* a "downgrade" feature: realtime execution is effectively
+    /// disabled until we have an in-tree use case and are confident about the semantics and
+    /// failure modes (especially around channel backpressure and the risk of blocking
+    /// latency-sensitive threads). It should be straightforward to add a true realtime
+    /// implementation back once those constraints are well-defined.
     #[track_caller]
     pub fn spawn_with_priority<R>(
         &self,
@@ -211,7 +140,8 @@ impl BackgroundExecutor {
     where
         R: Send + 'static,
     {
-        self.spawn_internal::<R>(Box::pin(future), None, priority)
+        let inner = scheduler::BackgroundExecutor::new(self.scheduler.clone());
+        Task::from_scheduler(inner.spawn_with_priority(priority, future))
     }

     /// Enqueues the given future to be run to completion on a background thread and blocking the current task on it.
@@ -222,8 +152,9 @@ impl BackgroundExecutor {
     where
         R: Send,
     {
         // We need to ensure that cancellation of the parent task does not drop the environment
         // before the our own task has completed or got cancelled.
+        use crate::RunnableMeta;
         use parking_lot::{Condvar, Mutex};

         struct NotifyOnDrop<'a>(&'a (Condvar, Mutex<bool>));

         impl Drop for NotifyOnDrop<'_> {
@@ -259,11 +190,7 @@ impl BackgroundExecutor {
                 future.await
             },
             move |runnable| {
-                dispatcher.dispatch(
-                    RunnableVariant::Meta(runnable),
-                    None,
-                    Priority::default(),
-                )
+                dispatcher.dispatch(runnable, Priority::default());
             },
         )
     };
@@ -271,238 +198,84 @@ impl BackgroundExecutor {
         task.await
     }

-    /// Enqueues the given future to be run to completion on a background thread.
-    /// The given label can be used to control the priority of the task in tests.
-    #[track_caller]
-    pub fn spawn_labeled<R>(
-        &self,
-        label: TaskLabel,
-        future: impl Future<Output = R> + Send + 'static,
-    ) -> Task<R>
-    where
-        R: Send + 'static,
-    {
-        self.spawn_internal::<R>(Box::pin(future), Some(label), Priority::default())
-    }
-
-    #[track_caller]
-    fn spawn_internal<R: Send + 'static>(
-        &self,
-        future: AnyFuture<R>,
-        label: Option<TaskLabel>,
-        priority: Priority,
-    ) -> Task<R> {
-        let dispatcher = self.dispatcher.clone();
-        let (runnable, task) = if let Priority::Realtime(realtime) = priority {
-            let location = core::panic::Location::caller();
-            let (mut tx, rx) = flume::bounded::<Runnable<RunnableMeta>>(1);
-
-            dispatcher.spawn_realtime(
-                realtime,
-                Box::new(move || {
-                    while let Ok(runnable) = rx.recv() {
-                        let start = Instant::now();
-                        let location = runnable.metadata().location;
-                        let mut timing = TaskTiming {
-                            location,
-                            start,
-                            end: None,
-                        };
-                        profiler::add_task_timing(timing);
-
-                        runnable.run();
-
-                        let end = Instant::now();
-                        timing.end = Some(end);
-                        profiler::add_task_timing(timing);
-                    }
-                }),
-            );
-
-            async_task::Builder::new()
-                .metadata(RunnableMeta { location })
-                .spawn(
-                    move |_| future,
-                    move |runnable| {
-                        let _ = tx.send(runnable);
-                    },
-                )
-        } else {
-            let location = core::panic::Location::caller();
-            async_task::Builder::new()
-                .metadata(RunnableMeta { location })
-                .spawn(
-                    move |_| future,
-                    move |runnable| {
-                        dispatcher.dispatch(RunnableVariant::Meta(runnable), label, priority)
-                    },
-                )
-        };
-
-        runnable.schedule();
-        Task(TaskState::Spawned(task))
-    }

     /// Used by the test harness to run an async test in a synchronous fashion.
     #[cfg(any(test, feature = "test-support"))]
     #[track_caller]
     pub fn block_test<R>(&self, future: impl Future<Output = R>) -> R {
-        if let Ok(value) = self.block_internal(false, future, None) {
-            value
-        } else {
-            unreachable!()
-        }
+        use std::cell::Cell;
+
+        let test_dispatcher = self
+            .dispatcher
+            .as_test()
+            .expect("block_test requires a test dispatcher");
+        let scheduler = test_dispatcher.scheduler();
+
+        let output = Cell::new(None);
+        let future = async {
+            output.set(Some(future.await));
+        };
+        let mut future = std::pin::pin!(future);
+
+        // In async GPUI tests, we must allow foreground tasks scheduled by the test itself
+        // (which are associated with the test session) to make progress while we block.
+        // Otherwise, awaiting futures that depend on same-session foreground work can deadlock.
+        scheduler.block(None, future.as_mut(), None);
+
+        output.take().expect("block_test future did not complete")
     }

     /// Block the current thread until the given future resolves.
     /// Consider using `block_with_timeout` instead.
     pub fn block<R>(&self, future: impl Future<Output = R>) -> R {
         if let Ok(value) = self.block_internal(true, future, None) {
             value
         } else {
             unreachable!()
         }
     }

     #[cfg(not(any(test, feature = "test-support")))]
     pub(crate) fn block_internal<Fut: Future>(
         &self,
         _background_only: bool,
         future: Fut,
         timeout: Option<Duration>,
     ) -> Result<Fut::Output, impl Future<Output = Fut::Output> + use<Fut>> {
         use std::time::Instant;

         let mut future = Box::pin(future);
         if timeout == Some(Duration::ZERO) {
             return Err(future);
         }
         let deadline = timeout.map(|timeout| Instant::now() + timeout);

         let parker = parking::Parker::new();
         let unparker = parker.unparker();
         let waker = waker_fn(move || {
             unparker.unpark();
         });
         let mut cx = std::task::Context::from_waker(&waker);

         loop {
             match future.as_mut().poll(&mut cx) {
                 Poll::Ready(result) => return Ok(result),
                 Poll::Pending => {
                     let timeout =
                         deadline.map(|deadline| deadline.saturating_duration_since(Instant::now()));
                     if let Some(timeout) = timeout {
                         if !parker.park_timeout(timeout)
                             && deadline.is_some_and(|deadline| deadline < Instant::now())
                         {
                             return Err(future);
                         }
                     } else {
                         parker.park();
                     }
                 }
             }
         }
     }
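The production `block_internal` above parks the calling thread whenever the future is pending and relies on the waker to unpark it. A compact std-only sketch of that same park/unpark loop, using `std::task::Wake` and `thread::park` in place of the `parking` and `waker_fn` crates (no timeout handling, to keep the shape clear):

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};
use std::thread::{self, Thread};

// Waker that unparks the blocked thread, mirroring the parker/unparker pair
// in the production code above.
struct ThreadWaker(Thread);

impl Wake for ThreadWaker {
    fn wake(self: Arc<Self>) {
        self.0.unpark();
    }
}

fn block_on<F: Future>(future: F) -> F::Output {
    let mut future = pin!(future);
    let waker = Waker::from(Arc::new(ThreadWaker(thread::current())));
    let mut cx = Context::from_waker(&waker);
    loop {
        match future.as_mut().poll(&mut cx) {
            Poll::Ready(output) => return output,
            // Sleep until some waker (possibly on another thread) fires;
            // spurious unparks are harmless because we simply poll again.
            Poll::Pending => thread::park(),
        }
    }
}

fn main() {
    assert_eq!(block_on(async { 2 + 2 }), 4);
}
```

The timeout variant above is the same loop with `park_timeout` and a deadline check, returning `Err(future)` instead of looping forever.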

     #[cfg(any(test, feature = "test-support"))]
     #[track_caller]
     pub(crate) fn block_internal<Fut: Future>(
         &self,
         background_only: bool,
         future: Fut,
         timeout: Option<Duration>,
     ) -> Result<Fut::Output, impl Future<Output = Fut::Output> + use<Fut>> {
-        use std::sync::atomic::AtomicBool;
-
-        use parking::Parker;
+        use std::cell::Cell;

         let mut future = Box::pin(future);
         if timeout == Some(Duration::ZERO) {
             return Err(future);
         }
-        let Some(dispatcher) = self.dispatcher.as_test() else {
-            return Err(future);
-        };
+        let output = Cell::new(None);
+        let future = async {
+            output.set(Some(future.await));
+        };
+        let mut future = std::pin::pin!(future);

-        let mut max_ticks = if timeout.is_some() {
-            dispatcher.gen_block_on_ticks()
-        } else {
-            usize::MAX
-        };
+        #[cfg(any(test, feature = "test-support"))]
+        let session_id = self.dispatcher.as_test().map(|t| t.session_id());
+        #[cfg(not(any(test, feature = "test-support")))]
+        let session_id = None;

-        let parker = Parker::new();
-        let unparker = parker.unparker();
+        self.scheduler.block(session_id, future.as_mut(), None);

-        let awoken = Arc::new(AtomicBool::new(false));
-        let waker = waker_fn({
-            let awoken = awoken.clone();
-            let unparker = unparker.clone();
-            move || {
-                awoken.store(true, Ordering::SeqCst);
-                unparker.unpark();
-            }
-        });
-        let mut cx = std::task::Context::from_waker(&waker);
-
-        let duration = Duration::from_secs(
-            option_env!("GPUI_TEST_TIMEOUT")
-                .and_then(|s| s.parse::<u64>().ok())
-                .unwrap_or(180),
-        );
-        let mut test_should_end_by = Instant::now() + duration;
-
-        loop {
-            match future.as_mut().poll(&mut cx) {
-                Poll::Ready(result) => return Ok(result),
-                Poll::Pending => {
-                    if max_ticks == 0 {
-                        return Err(future);
-                    }
-                    max_ticks -= 1;
-
-                    if !dispatcher.tick(background_only) {
-                        if awoken.swap(false, Ordering::SeqCst) {
-                            continue;
-                        }
-
-                        if !dispatcher.parking_allowed() {
-                            if dispatcher.advance_clock_to_next_delayed() {
-                                continue;
-                            }
-                            let mut backtrace_message = String::new();
-                            let mut waiting_message = String::new();
-                            if let Some(backtrace) = dispatcher.waiting_backtrace() {
-                                backtrace_message =
-                                    format!("\nbacktrace of waiting future:\n{:?}", backtrace);
-                            }
-                            if let Some(waiting_hint) = dispatcher.waiting_hint() {
-                                waiting_message = format!("\n  waiting on: {}\n", waiting_hint);
-                            }
-                            panic!(
-                                "parked with nothing left to run{waiting_message}{backtrace_message}",
-                            )
-                        }
-                        dispatcher.push_unparker(unparker.clone());
-                        parker.park_timeout(Duration::from_millis(1));
-                        if Instant::now() > test_should_end_by {
-                            panic!("test timed out after {duration:?} with allow_parking")
-                        }
-                    }
-                }
-            }
-        }
+        output.take().expect("block future did not complete")
     }

-    /// Block the current thread until the given future resolves
-    /// or `duration` has elapsed.
-    pub fn block_with_timeout<Fut: Future>(
+    /// Block the current thread until the given future resolves or the timeout elapses.
+    pub fn block_with_timeout<R, Fut: Future<Output = R>>(
         &self,
         duration: Duration,
         future: Fut,
-    ) -> Result<Fut::Output, impl Future<Output = Fut::Output> + use<Fut>> {
-        self.block_internal(true, future, Some(duration))
+    ) -> Result<R, impl Future<Output = R> + use<R, Fut>> {
+        use std::cell::Cell;
+
+        let output = Cell::new(None);
+        let mut future = Box::pin(future);
+
+        {
+            let future_ref = &mut future;
+            let wrapper = async {
+                output.set(Some(future_ref.await));
+            };
+            let mut wrapper = std::pin::pin!(wrapper);
+
+            #[cfg(any(test, feature = "test-support"))]
+            let session_id = self.dispatcher.as_test().map(|t| t.session_id());
+            #[cfg(not(any(test, feature = "test-support")))]
+            let session_id = None;
+
+            self.scheduler
+                .block(session_id, wrapper.as_mut(), Some(duration));
+        }
+
+        match output.take() {
+            Some(value) => Ok(value),
+            None => Err(future),
+        }
     }
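The new `block_with_timeout` drives the future for a bounded time, then hands the still-pinned future back as the `Err` variant so the caller can keep awaiting it. A sketch of that contract using a poll-count budget instead of wall-clock time, with the same `Cell`-capture trick: the wrapper future borrows the boxed future and deposits its output, so ownership of the original future survives a timeout (names are illustrative):

```rust
use std::cell::Cell;
use std::future::Future;
use std::pin::{Pin, pin};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A no-op waker so we can poll without a real executor.
fn noop_waker() -> Waker {
    fn clone_raw(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn no_op(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone_raw, no_op, no_op, no_op);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Poll the future at most `max_polls` times; on success return its output,
// on "timeout" return the boxed future so the caller can continue awaiting.
fn block_with_budget<F: Future>(future: F, max_polls: usize) -> Result<F::Output, Pin<Box<F>>> {
    let mut future = Box::pin(future);
    let output = Cell::new(None);
    {
        // Borrow the boxed future; the wrapper deposits the output into the
        // Cell, leaving ownership of `future` with us for the Err case.
        let future_ref = &mut future;
        let wrapper = async {
            output.set(Some(future_ref.await));
        };
        let mut wrapper = pin!(wrapper);
        let waker = noop_waker();
        let mut cx = Context::from_waker(&waker);
        for _ in 0..max_polls {
            if wrapper.as_mut().poll(&mut cx).is_ready() {
                break;
            }
        }
    }
    match output.take() {
        Some(value) => Ok(value),
        None => Err(future),
    }
}

fn main() {
    assert!(matches!(block_with_budget(async { 7 }, 1), Ok(7)));
    assert!(block_with_budget(std::future::pending::<i32>(), 3).is_err());
}
```

Returning the future on timeout (rather than dropping it) is the design choice that lets callers retry or fall back without losing in-flight work.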

     /// Scoped lets you start a number of tasks and waits
@@ -544,7 +317,7 @@ impl BackgroundExecutor {
     /// Calling this instead of `std::time::Instant::now` allows the use
     /// of fake timers in tests.
     pub fn now(&self) -> Instant {
-        self.dispatcher.now()
+        self.scheduler.clock().now()
     }

     /// Returns a task that will complete after the given duration.
@@ -554,93 +327,150 @@ impl BackgroundExecutor {
         if duration.is_zero() {
             return Task::ready(());
         }
-        let location = core::panic::Location::caller();
-        let (runnable, task) = async_task::Builder::new()
-            .metadata(RunnableMeta { location })
-            .spawn(move |_| async move {}, {
-                let dispatcher = self.dispatcher.clone();
-                move |runnable| dispatcher.dispatch_after(duration, RunnableVariant::Meta(runnable))
-            });
-        runnable.schedule();
-        Task(TaskState::Spawned(task))
+        self.spawn(self.scheduler.timer(duration))
     }

-    /// in tests, start_waiting lets you indicate which task is waiting (for debugging only)
-    #[cfg(any(test, feature = "test-support"))]
-    pub fn start_waiting(&self) {
-        self.dispatcher.as_test().unwrap().start_waiting();
-    }
-
-    /// in tests, removes the debugging data added by start_waiting
-    #[cfg(any(test, feature = "test-support"))]
-    pub fn finish_waiting(&self) {
-        self.dispatcher.as_test().unwrap().finish_waiting();
-    }
-
-    /// in tests, run an arbitrary number of tasks (determined by the SEED environment variable)
+    /// In tests, run an arbitrary number of tasks (determined by the SEED environment variable)
     #[cfg(any(test, feature = "test-support"))]
     pub fn simulate_random_delay(&self) -> impl Future<Output = ()> + use<> {
         self.dispatcher.as_test().unwrap().simulate_random_delay()
     }

-    /// in tests, indicate that a given task from `spawn_labeled` should run after everything else
-    #[cfg(any(test, feature = "test-support"))]
-    pub fn deprioritize(&self, task_label: TaskLabel) {
-        self.dispatcher.as_test().unwrap().deprioritize(task_label)
-    }
-
-    /// in tests, move time forward. This does not run any tasks, but does make `timer`s ready.
+    /// In tests, move time forward. This does not run any tasks, but does make `timer`s ready.
     #[cfg(any(test, feature = "test-support"))]
     pub fn advance_clock(&self, duration: Duration) {
         self.dispatcher.as_test().unwrap().advance_clock(duration)
     }

-    /// in tests, run one task.
+    /// In tests, run one task.
    #[cfg(any(test, feature = "test-support"))]
     pub fn tick(&self) -> bool {
-        self.dispatcher.as_test().unwrap().tick(false)
+        self.dispatcher.as_test().unwrap().scheduler().tick()
     }

-    /// in tests, run all tasks that are ready to run. If after doing so
-    /// the test still has outstanding tasks, this will panic. (See also [`Self::allow_parking`])
+    /// In tests, run tasks until the scheduler would park.
+    ///
+    /// Under the scheduler-backed test dispatcher, `tick()` will not advance the clock, so a pending
+    /// timer can keep `has_pending_tasks()` true even after all currently-runnable tasks have been
+    /// drained. To preserve the historical semantics that tests relied on (drain all work that can
+    /// make progress), we advance the clock to the next timer when no runnable tasks remain.
     #[cfg(any(test, feature = "test-support"))]
     pub fn run_until_parked(&self) {
-        self.dispatcher.as_test().unwrap().run_until_parked()
+        let dispatcher = self.dispatcher.as_test().unwrap();
+        let scheduler = dispatcher.scheduler();
+
+        let log_enabled = std::env::var("GPUI_RUN_UNTIL_PARKED_LOG")
+            .ok()
+            .as_deref()
+            == Some("1");
+
+        if log_enabled {
+            let (foreground_len, background_len) = scheduler.pending_task_counts();
+            let has_pending = scheduler.has_pending_tasks();
+            log::warn!(
+                "[gpui::executor] run_until_parked: begin pending foreground={} background={} has_pending_tasks={}",
+                foreground_len,
+                background_len,
+                has_pending
+            );
+        }
+
+        let mut ticks = 0usize;
+        let mut advanced_timers = 0usize;
+
+        loop {
+            let mut did_work = false;
+            while scheduler.tick() {
+                did_work = true;
+                ticks += 1;
+
+                if log_enabled && ticks.is_multiple_of(100) {
+                    let (foreground_len, background_len) = scheduler.pending_task_counts();
+                    let has_pending = scheduler.has_pending_tasks();
+                    log::warn!(
+                        "[gpui::executor] run_until_parked: progressed ticks={} pending foreground={} background={} has_pending_tasks={}",
+                        ticks,
+                        foreground_len,
+                        background_len,
+                        has_pending
+                    );
+                }
+            }
+
+            if did_work {
+                continue;
+            }
+
+            // No runnable tasks; if a timer is pending, advance time so it can become runnable.
+            if dispatcher.advance_clock_to_next_timer() {
+                advanced_timers += 1;
+                continue;
+            }
+
+            break;
+        }
+
+        if log_enabled {
+            let (foreground_len, background_len) = scheduler.pending_task_counts();
+            let has_pending = scheduler.has_pending_tasks();
+            log::warn!(
+                "[gpui::executor] run_until_parked: end ticks={} advanced_timers={} pending foreground={} background={} has_pending_tasks={}",
+                ticks,
+                advanced_timers,
+                foreground_len,
+                background_len,
+                has_pending
+            );
+        }
     }
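The loop above drains every runnable task and, once none remain, advances the fake clock to the next pending timer before trying again, parking only when both are exhausted. A toy deterministic model of that drain loop (the `ToyScheduler` API is illustrative, not the real `scheduler` crate):

```rust
use std::collections::VecDeque;

// Toy model of run_until_parked: `tick` runs one queued task; when the queue
// is empty, advance the fake clock to the earliest pending timer so its task
// becomes runnable; stop ("park") only when both are empty.
struct ToyScheduler {
    now: u64,
    runnable: VecDeque<u64>, // task ids ready to run
    timers: Vec<(u64, u64)>, // (deadline, task id)
}

impl ToyScheduler {
    fn tick(&mut self, log: &mut Vec<u64>) -> bool {
        match self.runnable.pop_front() {
            Some(task) => {
                log.push(task);
                true
            }
            None => false,
        }
    }

    fn advance_clock_to_next_timer(&mut self) -> bool {
        let Some(&(deadline, _)) = self.timers.iter().min_by_key(|(d, _)| *d) else {
            return false;
        };
        self.now = deadline;
        // Every timer at or before the new clock becomes runnable.
        let (due, pending): (Vec<_>, Vec<_>) =
            self.timers.drain(..).partition(|&(d, _)| d <= deadline);
        self.timers = pending;
        self.runnable.extend(due.into_iter().map(|(_, task)| task));
        true
    }

    fn run_until_parked(&mut self, log: &mut Vec<u64>) {
        loop {
            while self.tick(log) {}
            if !self.advance_clock_to_next_timer() {
                break; // parked: nothing runnable and no timers pending
            }
        }
    }
}

fn main() {
    let mut scheduler = ToyScheduler {
        now: 0,
        runnable: VecDeque::from([1, 2]),
        timers: vec![(50, 4), (10, 3)],
    };
    let mut log = Vec::new();
    scheduler.run_until_parked(&mut log);
    assert_eq!(log, vec![1, 2, 3, 4]); // queued tasks first, then timers by deadline
    assert_eq!(scheduler.now, 50);
}
```

This is the semantic the doc comment above preserves: timers alone never keep a test "pending" forever, because the clock jumps forward whenever the run queue empties.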

-    /// in tests, prevents `run_until_parked` from panicking if there are outstanding tasks.
-    /// This is useful when you are integrating other (non-GPUI) futures, like disk access, that
-    /// do take real async time to run.
+    /// In tests, prevents `run_until_parked` from panicking if there are outstanding tasks.
     #[cfg(any(test, feature = "test-support"))]
     pub fn allow_parking(&self) {
-        self.dispatcher.as_test().unwrap().allow_parking();
+        self.dispatcher
+            .as_test()
+            .unwrap()
+            .scheduler()
+            .allow_parking();
+
+        if std::env::var("GPUI_RUN_UNTIL_PARKED_LOG").ok().as_deref() == Some("1") {
+            log::warn!("[gpui::executor] allow_parking: enabled");
+        }
     }

-    /// undoes the effect of [`Self::allow_parking`].
+    /// Sets the range of ticks to run before timing out in block_on.
     #[cfg(any(test, feature = "test-support"))]
+    pub fn set_block_on_ticks(&self, range: std::ops::RangeInclusive<usize>) {
+        self.dispatcher
+            .as_test()
+            .unwrap()
+            .scheduler()
+            .set_timeout_ticks(range);
+    }
+
+    /// Undoes the effect of [`Self::allow_parking`].
+    #[cfg(any(test, feature = "test-support"))]
     pub fn forbid_parking(&self) {
-        self.dispatcher.as_test().unwrap().forbid_parking();
+        self.dispatcher
+            .as_test()
+            .unwrap()
+            .scheduler()
+            .forbid_parking();
     }

-    /// adds detail to the "parked with nothing let to run" message.
-    #[cfg(any(test, feature = "test-support"))]
-    pub fn set_waiting_hint(&self, msg: Option<String>) {
-        self.dispatcher.as_test().unwrap().set_waiting_hint(msg);
-    }
-
-    /// in tests, returns the rng used by the dispatcher and seeded by the `SEED` environment variable
+    /// In tests, returns the rng used by the dispatcher.
     #[cfg(any(test, feature = "test-support"))]
-    pub fn rng(&self) -> StdRng {
-        self.dispatcher.as_test().unwrap().rng()
+    pub fn rng(&self) -> scheduler::SharedRng {
+        self.dispatcher.as_test().unwrap().scheduler().rng()
     }

     /// How many CPUs are available to the dispatcher.
     pub fn num_cpus(&self) -> usize {
-        #[cfg(any(test, feature = "test-support"))]
-        return 4;
-
-        #[cfg(not(any(test, feature = "test-support")))]
-        return num_cpus::get();
+        if self.dispatcher.as_test().is_some() {
+            return 4;
+        }
+        num_cpus::get()
     }

     /// Whether we're on the main thread.
@@ -648,132 +478,70 @@ impl BackgroundExecutor {
         self.dispatcher.is_main_thread()
     }

-    #[cfg(any(test, feature = "test-support"))]
-    /// in tests, control the number of ticks that `block_with_timeout` will run before timing out.
-    pub fn set_block_on_ticks(&self, range: std::ops::RangeInclusive<usize>) {
-        self.dispatcher.as_test().unwrap().set_block_on_ticks(range);
+    #[doc(hidden)]
+    pub fn dispatcher(&self) -> &Arc<dyn PlatformDispatcher> {
+        &self.dispatcher
     }
 }

 /// ForegroundExecutor runs things on the main thread.
 impl ForegroundExecutor {
     /// Creates a new ForegroundExecutor from the given PlatformDispatcher.
     pub fn new(dispatcher: Arc<dyn PlatformDispatcher>) -> Self {
+        #[cfg(any(test, feature = "test-support"))]
+        let (scheduler, session_id): (Arc<dyn Scheduler>, _) =
+            if let Some(test_dispatcher) = dispatcher.as_test() {
+                (
+                    test_dispatcher.scheduler().clone(),
+                    test_dispatcher.session_id(),
+                )
+            } else {
+                let platform_scheduler = Arc::new(PlatformScheduler::new(dispatcher.clone()));
+                let session_id = platform_scheduler.allocate_session_id();
+                (platform_scheduler, session_id)
+            };
+
+        #[cfg(not(any(test, feature = "test-support")))]
+        let (scheduler, session_id): (Arc<dyn Scheduler>, _) = {
+            let platform_scheduler = Arc::new(PlatformScheduler::new(dispatcher.clone()));
+            let session_id = platform_scheduler.allocate_session_id();
+            (platform_scheduler, session_id)
+        };
+
+        let inner = scheduler::ForegroundExecutor::new(session_id, scheduler);
+
         Self {
+            inner,
             dispatcher,
             not_send: PhantomData,
         }
     }

-    /// Enqueues the given Task to run on the main thread at some point in the future.
+    /// Enqueues the given Task to run on the main thread.
     #[track_caller]
     pub fn spawn<R>(&self, future: impl Future<Output = R> + 'static) -> Task<R>
     where
         R: 'static,
     {
-        self.spawn_with_priority(Priority::default(), future)
+        Task::from_scheduler(self.inner.spawn(future))
     }

-    /// Enqueues the given Task to run on the main thread at some point in the future.
+    /// Enqueues the given Task to run on the main thread with the given priority.
     #[track_caller]
     pub fn spawn_with_priority<R>(
         &self,
-        priority: Priority,
+        _priority: Priority,
         future: impl Future<Output = R> + 'static,
     ) -> Task<R>
     where
         R: 'static,
     {
-        let dispatcher = self.dispatcher.clone();
-        let location = core::panic::Location::caller();
-
-        #[track_caller]
-        fn inner<R: 'static>(
-            dispatcher: Arc<dyn PlatformDispatcher>,
-            future: AnyLocalFuture<R>,
-            location: &'static core::panic::Location<'static>,
-            priority: Priority,
-        ) -> Task<R> {
-            let (runnable, task) = spawn_local_with_source_location(
-                future,
-                move |runnable| {
-                    dispatcher.dispatch_on_main_thread(RunnableVariant::Meta(runnable), priority)
|
||||
},
|
||||
RunnableMeta { location },
|
||||
);
|
||||
runnable.schedule();
|
||||
Task(TaskState::Spawned(task))
|
||||
}
|
||||
inner::<R>(dispatcher, Box::pin(future), location, priority)
|
||||
}
|
||||
}
|
||||
|
||||
/// Variant of `async_task::spawn_local` that includes the source location of the spawn in panics.
|
||||
///
|
||||
/// Copy-modified from:
|
||||
/// <https://github.com/smol-rs/async-task/blob/ca9dbe1db9c422fd765847fa91306e30a6bb58a9/src/runnable.rs#L405>
|
||||
#[track_caller]
|
||||
fn spawn_local_with_source_location<Fut, S, M>(
|
||||
future: Fut,
|
||||
schedule: S,
|
||||
metadata: M,
|
||||
) -> (Runnable<M>, async_task::Task<Fut::Output, M>)
|
||||
where
|
||||
Fut: Future + 'static,
|
||||
Fut::Output: 'static,
|
||||
S: async_task::Schedule<M> + Send + Sync + 'static,
|
||||
M: 'static,
|
||||
{
|
||||
#[inline]
|
||||
fn thread_id() -> ThreadId {
|
||||
std::thread_local! {
|
||||
static ID: ThreadId = thread::current().id();
|
||||
}
|
||||
ID.try_with(|id| *id)
|
||||
.unwrap_or_else(|_| thread::current().id())
|
||||
// Priority is ignored for foreground tasks - they run in order on the main thread
|
||||
Task::from_scheduler(self.inner.spawn(future))
|
||||
}
|
||||
|
||||
struct Checked<F> {
|
||||
id: ThreadId,
|
||||
inner: ManuallyDrop<F>,
|
||||
location: &'static Location<'static>,
|
||||
}
|
||||
|
||||
impl<F> Drop for Checked<F> {
|
||||
fn drop(&mut self) {
|
||||
assert!(
|
||||
self.id == thread_id(),
|
||||
"local task dropped by a thread that didn't spawn it. Task spawned at {}",
|
||||
self.location
|
||||
);
|
||||
unsafe { ManuallyDrop::drop(&mut self.inner) };
|
||||
}
|
||||
}
|
||||
|
||||
impl<F: Future> Future for Checked<F> {
|
||||
type Output = F::Output;
|
||||
|
||||
fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
|
||||
assert!(
|
||||
self.id == thread_id(),
|
||||
"local task polled by a thread that didn't spawn it. Task spawned at {}",
|
||||
self.location
|
||||
);
|
||||
unsafe { self.map_unchecked_mut(|c| &mut *c.inner).poll(cx) }
|
||||
}
|
||||
}
|
||||
|
||||
// Wrap the future into one that checks which thread it's on.
|
||||
let future = Checked {
|
||||
id: thread_id(),
|
||||
inner: ManuallyDrop::new(future),
|
||||
location: Location::caller(),
|
||||
};
|
||||
|
||||
unsafe {
|
||||
async_task::Builder::new()
|
||||
.metadata(metadata)
|
||||
.spawn_unchecked(move |_| future, schedule)
|
||||
#[doc(hidden)]
|
||||
pub fn dispatcher(&self) -> &Arc<dyn PlatformDispatcher> {
|
||||
&self.dispatcher
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
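The removed `Checked` wrapper above enforces that a thread-local task is polled and dropped only on the thread that spawned it (this is what makes `spawn_local` sound). A stdlib-only sketch of the same invariant, using a hypothetical `ThreadBound` type rather than the crate's actual wrapper:

```rust
use std::thread::{self, ThreadId};

// Hypothetical stand-in for `Checked`: a value that asserts it is only
// accessed and dropped on the thread that created it.
struct ThreadBound<T> {
    home: ThreadId,
    value: T,
}

impl<T> ThreadBound<T> {
    fn new(value: T) -> Self {
        ThreadBound {
            home: thread::current().id(),
            value,
        }
    }

    fn get(&self) -> &T {
        // Mirrors the poll-time assertion in `Checked::poll`.
        assert!(
            self.home == thread::current().id(),
            "local value accessed off its home thread"
        );
        &self.value
    }
}

impl<T> Drop for ThreadBound<T> {
    fn drop(&mut self) {
        // Mirrors the drop-time assertion in `Checked::drop`.
        assert!(
            self.home == thread::current().id(),
            "local value dropped off its home thread"
        );
    }
}

fn main() {
    let local = ThreadBound::new(42);
    assert_eq!(*local.get(), 42);
    // Moving `local` into another thread and dropping it there would trip
    // the drop assertion - the invariant `Checked` enforced for local futures.
}
```

Note the check lives in `Drop` as well as the accessor: a `!Send` future can be dropped by whichever executor last held its `Runnable`, so drop-site checking is the part that catches real bugs.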
@@ -20,6 +20,8 @@ pub mod colors;
 mod element;
 mod elements;
 mod executor;
+mod platform_scheduler;
+pub(crate) use platform_scheduler::PlatformScheduler;
 mod geometry;
 mod global;
 mod input;
||||
@@ -39,10 +39,9 @@ use crate::{
     Action, AnyWindowHandle, App, AsyncWindowContext, BackgroundExecutor, Bounds,
     DEFAULT_WINDOW_SIZE, DevicePixels, DispatchEventResult, Font, FontId, FontMetrics, FontRun,
     ForegroundExecutor, GlyphId, GpuSpecs, ImageSource, Keymap, LineLayout, Pixels, PlatformInput,
-    Point, Priority, RealtimePriority, RenderGlyphParams, RenderImage, RenderImageParams,
-    RenderSvgParams, Scene, ShapedGlyph, ShapedRun, SharedString, Size, SvgRenderer,
-    SystemWindowTab, Task, TaskLabel, TaskTiming, ThreadTaskTimings, Window, WindowControlArea,
-    hash, point, px, size,
+    Point, Priority, RenderGlyphParams, RenderImage, RenderImageParams, RenderSvgParams, Scene,
+    ShapedGlyph, ShapedRun, SharedString, Size, SvgRenderer, SystemWindowTab, Task, TaskTiming,
+    ThreadTaskTimings, Window, WindowControlArea, hash, point, px, size,
 };
 use anyhow::Result;
 use async_task::Runnable;
@@ -50,6 +49,7 @@ use futures::channel::oneshot;
 use image::codecs::gif::GifDecoder;
 use image::{AnimationDecoder as _, Frame};
 use raw_window_handle::{HasDisplayHandle, HasWindowHandle};
+pub use scheduler::RunnableMeta;
 use schemars::JsonSchema;
 use seahash::SeaHasher;
 use serde::{Deserialize, Serialize};
@@ -566,20 +566,10 @@ pub(crate) trait PlatformWindow: HasWindowHandle + HasDisplayHandle {
     }
 }

-/// This type is public so that our test macro can generate and use it, but it should not
-/// be considered part of our public API.
+/// Type alias for runnables with metadata.
+/// Previously an enum with a single variant, now simplified to a direct type alias.
 #[doc(hidden)]
-#[derive(Debug)]
-pub struct RunnableMeta {
-    /// Location of the runnable
-    pub location: &'static core::panic::Location<'static>,
-}
-
-#[doc(hidden)]
-pub enum RunnableVariant {
-    Meta(Runnable<RunnableMeta>),
-    Compat(Runnable),
-}
+pub type RunnableVariant = Runnable<RunnableMeta>;

 /// This type is public so that our test macro can generate and use it, but it should not
 /// be considered part of our public API.
@@ -588,10 +578,9 @@ pub trait PlatformDispatcher: Send + Sync {
     fn get_all_timings(&self) -> Vec<ThreadTaskTimings>;
     fn get_current_thread_timings(&self) -> Vec<TaskTiming>;
     fn is_main_thread(&self) -> bool;
-    fn dispatch(&self, runnable: RunnableVariant, label: Option<TaskLabel>, priority: Priority);
+    fn dispatch(&self, runnable: RunnableVariant, priority: Priority);
     fn dispatch_on_main_thread(&self, runnable: RunnableVariant, priority: Priority);
     fn dispatch_after(&self, duration: Duration, runnable: RunnableVariant);
-    fn spawn_realtime(&self, priority: RealtimePriority, f: Box<dyn FnOnce() + Send>);

     fn now(&self) -> Instant {
         Instant::now()
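The `PlatformDispatcher` trait above is the seam between the executor and each platform's task queues: `dispatch` hands a runnable to a background queue, `dispatch_on_main_thread` to the UI thread. A minimal, stdlib-only sketch of the background half (names and the boxed-closure queue are illustrative; the real trait moves `RunnableVariant`s and carries a `Priority`):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // A queue of jobs standing in for the dispatcher's background channel.
    let (queue, jobs) = mpsc::channel::<Box<dyn FnOnce() + Send>>();

    // One background "worker" thread draining the queue in order,
    // like the per-thread loops in the platform dispatchers below.
    let worker = thread::spawn(move || {
        for job in jobs {
            job();
        }
    });

    let (done_tx, done_rx) = mpsc::channel();
    queue
        .send(Box::new(move || {
            done_tx.send(2 + 2).unwrap();
        }))
        .unwrap();
    drop(queue); // closing the sender lets the worker loop exit

    assert_eq!(done_rx.recv().unwrap(), 4);
    worker.join().unwrap();
}
```

The real implementations differ mainly in what the queue is: a priority channel on Linux, a libdispatch global queue on macOS, and a deterministic in-memory scheduler in tests.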
@@ -42,43 +42,27 @@ impl LinuxDispatcher
         // executor
         let mut background_threads = (0..thread_count)
             .map(|i| {
-                let mut receiver = background_receiver.clone();
+                let mut receiver: PriorityQueueReceiver<RunnableVariant> =
+                    background_receiver.clone();
                 std::thread::Builder::new()
                     .name(format!("Worker-{i}"))
                     .spawn(move || {
                         for runnable in receiver.iter() {
                             let start = Instant::now();

-                            let mut location = match runnable {
-                                RunnableVariant::Meta(runnable) => {
-                                    let location = runnable.metadata().location;
-                                    let timing = TaskTiming {
-                                        location,
-                                        start,
-                                        end: None,
-                                    };
-                                    profiler::add_task_timing(timing);
-
-                                    runnable.run();
-                                    timing
-                                }
-                                RunnableVariant::Compat(runnable) => {
-                                    let location = core::panic::Location::caller();
-                                    let timing = TaskTiming {
-                                        location,
-                                        start,
-                                        end: None,
-                                    };
-                                    profiler::add_task_timing(timing);
-
-                                    runnable.run();
-                                    timing
-                                }
-                            };
+                            let location = runnable.metadata().location;
+                            let mut timing = TaskTiming {
+                                location,
+                                start,
+                                end: None,
+                            };
+                            profiler::add_task_timing(timing);
+
+                            runnable.run();

                             let end = Instant::now();
-                            location.end = Some(end);
-                            profiler::add_task_timing(location);
+                            timing.end = Some(end);
+                            profiler::add_task_timing(timing);

                             log::trace!(
                                 "background thread {}: ran runnable. took: {:?}",
@@ -111,31 +95,15 @@ impl LinuxDispatcher
             move |_, _, _| {
                 if let Some(runnable) = runnable.take() {
                     let start = Instant::now();
-                    let mut timing = match runnable {
-                        RunnableVariant::Meta(runnable) => {
-                            let location = runnable.metadata().location;
-                            let timing = TaskTiming {
-                                location,
-                                start,
-                                end: None,
-                            };
-                            profiler::add_task_timing(timing);
-
-                            runnable.run();
-                            timing
-                        }
-                        RunnableVariant::Compat(runnable) => {
-                            let timing = TaskTiming {
-                                location: core::panic::Location::caller(),
-                                start,
-                                end: None,
-                            };
-                            profiler::add_task_timing(timing);
-
-                            runnable.run();
-                            timing
-                        }
-                    };
+                    let location = runnable.metadata().location;
+                    let mut timing = TaskTiming {
+                        location,
+                        start,
+                        end: None,
+                    };
+                    profiler::add_task_timing(timing);
+
+                    runnable.run();
                     let end = Instant::now();

                     timing.end = Some(end);
@@ -189,7 +157,7 @@ impl PlatformDispatcher for LinuxDispatcher
         thread::current().id() == self.main_thread_id
     }

-    fn dispatch(&self, runnable: RunnableVariant, _: Option<TaskLabel>, priority: Priority) {
+    fn dispatch(&self, runnable: RunnableVariant, priority: Priority) {
         self.background_sender
             .send(priority, runnable)
             .unwrap_or_else(|_| panic!("blocking sender returned without value"));
@@ -31,10 +31,7 @@ impl HeadlessClient
         handle
             .insert_source(main_receiver, |event, _, _: &mut HeadlessClient| {
                 if let calloop::channel::Event::Msg(runnable) = event {
-                    match runnable {
-                        crate::RunnableVariant::Meta(runnable) => runnable.run(),
-                        crate::RunnableVariant::Compat(runnable) => runnable.run(),
-                    };
+                    runnable.run();
                 }
             })
             .ok();
@@ -79,10 +79,6 @@ use crate::{
     PlatformInput, PlatformKeyboardLayout, Point, ResultExt as _, SCROLL_LINES, ScrollDelta,
     ScrollWheelEvent, Size, TouchPhase, WindowParams, point, profiler, px, size,
 };
-use crate::{
-    RunnableVariant, TaskTiming,
-    platform::{PlatformWindow, blade::BladeContext},
-};
 use crate::{
     SharedString,
     platform::linux::{
@@ -97,6 +93,10 @@ use crate::{
         xdg_desktop_portal::{Event as XDPEvent, XDPEventSource},
     },
 };
+use crate::{
+    TaskTiming,
+    platform::{PlatformWindow, blade::BladeContext},
+};

 /// Used to convert evdev scancode to xkb scancode
 const MIN_KEYCODE: u32 = 8;
@@ -495,32 +495,15 @@ impl WaylandClient
             if let calloop::channel::Event::Msg(runnable) = event {
                 handle.insert_idle(|_| {
                     let start = Instant::now();
-                    let mut timing = match runnable {
-                        RunnableVariant::Meta(runnable) => {
-                            let location = runnable.metadata().location;
-                            let timing = TaskTiming {
-                                location,
-                                start,
-                                end: None,
-                            };
-                            profiler::add_task_timing(timing);
-
-                            runnable.run();
-                            timing
-                        }
-                        RunnableVariant::Compat(runnable) => {
-                            let location = core::panic::Location::caller();
-                            let timing = TaskTiming {
-                                location,
-                                start,
-                                end: None,
-                            };
-                            profiler::add_task_timing(timing);
-
-                            runnable.run();
-                            timing
-                        }
-                    };
+                    let location = runnable.metadata().location;
+                    let mut timing = TaskTiming {
+                        location,
+                        start,
+                        end: None,
+                    };
+                    profiler::add_task_timing(timing);
+
+                    runnable.run();

                     let end = Instant::now();
                     timing.end = Some(end);
@@ -1,4 +1,4 @@
-use crate::{Capslock, ResultExt as _, RunnableVariant, TaskTiming, profiler, xcb_flush};
+use crate::{Capslock, ResultExt as _, TaskTiming, profiler, xcb_flush};
 use anyhow::{Context as _, anyhow};
 use ashpd::WindowIdentifier;
 use calloop::{
@@ -314,32 +314,15 @@ impl X11Client
             // callbacks.
             handle.insert_idle(|_| {
                 let start = Instant::now();
-                let mut timing = match runnable {
-                    RunnableVariant::Meta(runnable) => {
-                        let location = runnable.metadata().location;
-                        let timing = TaskTiming {
-                            location,
-                            start,
-                            end: None,
-                        };
-                        profiler::add_task_timing(timing);
-
-                        runnable.run();
-                        timing
-                    }
-                    RunnableVariant::Compat(runnable) => {
-                        let location = core::panic::Location::caller();
-                        let timing = TaskTiming {
-                            location,
-                            start,
-                            end: None,
-                        };
-                        profiler::add_task_timing(timing);
-
-                        runnable.run();
-                        timing
-                    }
-                };
+                let location = runnable.metadata().location;
+                let mut timing = TaskTiming {
+                    location,
+                    start,
+                    end: None,
+                };
+                profiler::add_task_timing(timing);
+
+                runnable.run();

                 let end = Instant::now();
                 timing.end = Some(end);
@@ -3,22 +3,11 @@
 #![allow(non_snake_case)]

 use crate::{
-    GLOBAL_THREAD_TIMINGS, PlatformDispatcher, Priority, RealtimePriority, RunnableMeta,
-    RunnableVariant, THREAD_TIMINGS, TaskLabel, TaskTiming, ThreadTaskTimings,
+    GLOBAL_THREAD_TIMINGS, PlatformDispatcher, Priority, RunnableMeta, RunnableVariant,
+    THREAD_TIMINGS, TaskTiming, ThreadTaskTimings,
 };

 use anyhow::Context;
-use async_task::Runnable;
 use mach2::{
     kern_return::KERN_SUCCESS,
     mach_time::mach_timebase_info_data_t,
     thread_policy::{
         THREAD_EXTENDED_POLICY, THREAD_EXTENDED_POLICY_COUNT, THREAD_PRECEDENCE_POLICY,
         THREAD_PRECEDENCE_POLICY_COUNT, THREAD_TIME_CONSTRAINT_POLICY,
         THREAD_TIME_CONSTRAINT_POLICY_COUNT, thread_extended_policy_data_t,
         thread_precedence_policy_data_t, thread_time_constraint_policy_data_t,
     },
 };
 use objc::{
     class, msg_send,
     runtime::{BOOL, YES},
@@ -26,11 +15,9 @@ use objc::{
 };
 use std::{
     ffi::c_void,
     mem::MaybeUninit,
     ptr::{NonNull, addr_of},
     time::{Duration, Instant},
 };
 use util::ResultExt;

 /// All items in the generated file are marked as pub, so we're gonna wrap it in a separate mod to prevent
 /// these pub items from leaking into public API.
@@ -69,20 +56,11 @@ impl PlatformDispatcher for MacDispatcher
         is_main_thread == YES
     }

-    fn dispatch(&self, runnable: RunnableVariant, _: Option<TaskLabel>, priority: Priority) {
-        let (context, trampoline) = match runnable {
-            RunnableVariant::Meta(runnable) => (
-                runnable.into_raw().as_ptr() as *mut c_void,
-                Some(trampoline as unsafe extern "C" fn(*mut c_void)),
-            ),
-            RunnableVariant::Compat(runnable) => (
-                runnable.into_raw().as_ptr() as *mut c_void,
-                Some(trampoline_compat as unsafe extern "C" fn(*mut c_void)),
-            ),
-        };
+    fn dispatch(&self, runnable: RunnableVariant, priority: Priority) {
+        let context = runnable.into_raw().as_ptr() as *mut c_void;
+        let trampoline = Some(trampoline as unsafe extern "C" fn(*mut c_void));

         let queue_priority = match priority {
             Priority::Realtime(_) => unreachable!(),
             Priority::High => DISPATCH_QUEUE_PRIORITY_HIGH as isize,
             Priority::Medium => DISPATCH_QUEUE_PRIORITY_DEFAULT as isize,
             Priority::Low => DISPATCH_QUEUE_PRIORITY_LOW as isize,
@@ -98,32 +76,16 @@ impl PlatformDispatcher for MacDispatcher
     }

     fn dispatch_on_main_thread(&self, runnable: RunnableVariant, _priority: Priority) {
-        let (context, trampoline) = match runnable {
-            RunnableVariant::Meta(runnable) => (
-                runnable.into_raw().as_ptr() as *mut c_void,
-                Some(trampoline as unsafe extern "C" fn(*mut c_void)),
-            ),
-            RunnableVariant::Compat(runnable) => (
-                runnable.into_raw().as_ptr() as *mut c_void,
-                Some(trampoline_compat as unsafe extern "C" fn(*mut c_void)),
-            ),
-        };
+        let context = runnable.into_raw().as_ptr() as *mut c_void;
+        let trampoline = Some(trampoline as unsafe extern "C" fn(*mut c_void));
         unsafe {
             dispatch_async_f(dispatch_get_main_queue(), context, trampoline);
         }
     }

     fn dispatch_after(&self, duration: Duration, runnable: RunnableVariant) {
-        let (context, trampoline) = match runnable {
-            RunnableVariant::Meta(runnable) => (
-                runnable.into_raw().as_ptr() as *mut c_void,
-                Some(trampoline as unsafe extern "C" fn(*mut c_void)),
-            ),
-            RunnableVariant::Compat(runnable) => (
-                runnable.into_raw().as_ptr() as *mut c_void,
-                Some(trampoline_compat as unsafe extern "C" fn(*mut c_void)),
-            ),
-        };
+        let context = runnable.into_raw().as_ptr() as *mut c_void;
+        let trampoline = Some(trampoline as unsafe extern "C" fn(*mut c_void));
         unsafe {
             let queue =
                 dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH.try_into().unwrap(), 0);
@@ -131,120 +93,6 @@ impl PlatformDispatcher for MacDispatcher
             dispatch_after_f(when, queue, context, trampoline);
         }
     }
-
-    fn spawn_realtime(&self, priority: RealtimePriority, f: Box<dyn FnOnce() + Send>) {
-        std::thread::spawn(move || {
-            match priority {
-                RealtimePriority::Audio => set_audio_thread_priority(),
-                RealtimePriority::Other => set_high_thread_priority(),
-            }
-            .context(format!("for priority {:?}", priority))
-            .log_err();
-
-            f();
-        });
-    }
-}
-
-fn set_high_thread_priority() -> anyhow::Result<()> {
-    // SAFETY: always safe to call
-    let thread_id = unsafe { libc::pthread_self() };
-
-    // SAFETY: all sched_param members are valid when initialized to zero.
-    let mut sched_param = unsafe { MaybeUninit::<libc::sched_param>::zeroed().assume_init() };
-    sched_param.sched_priority = 45;
-
-    let result = unsafe { libc::pthread_setschedparam(thread_id, libc::SCHED_FIFO, &sched_param) };
-    if result != 0 {
-        anyhow::bail!("failed to set realtime thread priority")
-    }
-
-    Ok(())
-}
-
-fn set_audio_thread_priority() -> anyhow::Result<()> {
-    // https://chromium.googlesource.com/chromium/chromium/+/master/base/threading/platform_thread_mac.mm#93
-
-    // SAFETY: always safe to call
-    let thread_id = unsafe { libc::pthread_self() };
-
-    // SAFETY: thread_id is a valid thread id
-    let thread_id = unsafe { libc::pthread_mach_thread_np(thread_id) };
-
-    // Fixed priority thread
-    let mut policy = thread_extended_policy_data_t { timeshare: 0 };
-
-    // SAFETY: thread_id is a valid thread id
-    // SAFETY: thread_extended_policy_data_t is passed as THREAD_EXTENDED_POLICY
-    let result = unsafe {
-        mach2::thread_policy::thread_policy_set(
-            thread_id,
-            THREAD_EXTENDED_POLICY,
-            &mut policy as *mut _ as *mut _,
-            THREAD_EXTENDED_POLICY_COUNT,
-        )
-    };
-
-    if result != KERN_SUCCESS {
-        anyhow::bail!("failed to set thread extended policy");
-    }
-
-    // relatively high priority
-    let mut precedence = thread_precedence_policy_data_t { importance: 63 };
-
-    // SAFETY: thread_id is a valid thread id
-    // SAFETY: thread_precedence_policy_data_t is passed as THREAD_PRECEDENCE_POLICY
-    let result = unsafe {
-        mach2::thread_policy::thread_policy_set(
-            thread_id,
-            THREAD_PRECEDENCE_POLICY,
-            &mut precedence as *mut _ as *mut _,
-            THREAD_PRECEDENCE_POLICY_COUNT,
-        )
-    };
-
-    if result != KERN_SUCCESS {
-        anyhow::bail!("failed to set thread precedence policy");
-    }
-
-    const GUARANTEED_AUDIO_DUTY_CYCLE: f32 = 0.75;
-    const MAX_AUDIO_DUTY_CYCLE: f32 = 0.85;
-
-    // ~128 frames @ 44.1KHz
-    const TIME_QUANTUM: f32 = 2.9;
-
-    const AUDIO_TIME_NEEDED: f32 = GUARANTEED_AUDIO_DUTY_CYCLE * TIME_QUANTUM;
-    const MAX_TIME_ALLOWED: f32 = MAX_AUDIO_DUTY_CYCLE * TIME_QUANTUM;
-
-    let mut timebase_info = mach_timebase_info_data_t { numer: 0, denom: 0 };
-    // SAFETY: timebase_info is a valid pointer to a mach_timebase_info_data_t struct
-    unsafe { mach2::mach_time::mach_timebase_info(&mut timebase_info) };
-
-    let ms_to_abs_time = ((timebase_info.denom as f32) / (timebase_info.numer as f32)) * 1000000f32;
-
-    let mut time_constraints = thread_time_constraint_policy_data_t {
-        period: (TIME_QUANTUM * ms_to_abs_time) as u32,
-        computation: (AUDIO_TIME_NEEDED * ms_to_abs_time) as u32,
-        constraint: (MAX_TIME_ALLOWED * ms_to_abs_time) as u32,
-        preemptible: 0,
-    };
-
-    // SAFETY: thread_id is a valid thread id
-    // SAFETY: thread_time_constraint_policy_data_t is passed as THREAD_TIME_CONSTRAINT_POLICY
-    let result = unsafe {
-        mach2::thread_policy::thread_policy_set(
-            thread_id,
-            THREAD_TIME_CONSTRAINT_POLICY,
-            &mut time_constraints as *mut _ as *mut _,
-            THREAD_TIME_CONSTRAINT_POLICY_COUNT,
-        )
-    };
-
-    if result != KERN_SUCCESS {
-        anyhow::bail!("failed to set thread time constraint policy");
-    }
-
-    Ok(())
-}

 extern "C" fn trampoline(runnable: *mut c_void) {
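The time-constraint policy removed above sizes its parameters from a duty cycle over a ~2.9 ms quantum, converted from milliseconds to mach absolute-time units. A sketch of that arithmetic with an assumed 1:1 timebase (`numer == denom`) purely for illustration; real code must read the ratio from `mach_timebase_info`:

```rust
fn main() {
    // Constants from `set_audio_thread_priority` above.
    const GUARANTEED_AUDIO_DUTY_CYCLE: f32 = 0.75;
    const MAX_AUDIO_DUTY_CYCLE: f32 = 0.85;
    const TIME_QUANTUM_MS: f32 = 2.9; // ~128 frames @ 44.1kHz

    // Assumed timebase for illustration only.
    let (numer, denom) = (1f32, 1f32);
    let ms_to_abs_time = (denom / numer) * 1_000_000f32;

    let period = (TIME_QUANTUM_MS * ms_to_abs_time) as u32;
    let computation = (GUARANTEED_AUDIO_DUTY_CYCLE * TIME_QUANTUM_MS * ms_to_abs_time) as u32;
    let constraint = (MAX_AUDIO_DUTY_CYCLE * TIME_QUANTUM_MS * ms_to_abs_time) as u32;

    // The kernel expects computation <= constraint <= period:
    // "need 75% of each quantum, never more than 85%".
    assert!(computation <= constraint && constraint <= period);
}
```

Keeping `constraint` strictly below `period` is what leaves headroom for the rest of the system; a thread that claims the full period would starve everything else at this priority band.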
@@ -284,39 +132,3 @@ extern "C" fn trampoline(runnable: *mut c_void)
         last_timing.end = Some(end);
     });
 }
-
-extern "C" fn trampoline_compat(runnable: *mut c_void) {
-    let task = unsafe { Runnable::<()>::from_raw(NonNull::new_unchecked(runnable as *mut ())) };
-
-    let location = core::panic::Location::caller();
-
-    let start = Instant::now();
-    let timing = TaskTiming {
-        location,
-        start,
-        end: None,
-    };
-    THREAD_TIMINGS.with(|timings| {
-        let mut timings = timings.lock();
-        let timings = &mut timings.timings;
-        if let Some(last_timing) = timings.iter_mut().rev().next() {
-            if last_timing.location == timing.location {
-                return;
-            }
-        }
-
-        timings.push_back(timing);
-    });
-
-    task.run();
-    let end = Instant::now();
-
-    THREAD_TIMINGS.with(|timings| {
-        let mut timings = timings.lock();
-        let timings = &mut timings.timings;
-        let Some(last_timing) = timings.iter_mut().rev().next() else {
-            return;
-        };
-        last_timing.end = Some(end);
-    });
-}
@@ -1,267 +1,78 @@
-use crate::{PlatformDispatcher, Priority, RunnableVariant, TaskLabel};
-use backtrace::Backtrace;
-use collections::{HashMap, HashSet, VecDeque};
-use parking::Unparker;
-use parking_lot::Mutex;
-use rand::prelude::*;
+use crate::{PlatformDispatcher, Priority, RunnableVariant};
+use scheduler::{Clock, Scheduler, SessionId, TestScheduler, TestSchedulerConfig, Yield};
 use std::{
-    future::Future,
-    ops::RangeInclusive,
-    pin::Pin,
     sync::Arc,
-    task::{Context, Poll},
     time::{Duration, Instant},
 };
-use util::post_inc;

 #[derive(Copy, Clone, PartialEq, Eq, Hash)]
 struct TestDispatcherId(usize);

+/// TestDispatcher provides deterministic async execution for tests.
+///
+/// This implementation delegates task scheduling to the scheduler crate's `TestScheduler`.
+/// Access the scheduler directly via `scheduler()` for clock, rng, and parking control.
 #[doc(hidden)]
 pub struct TestDispatcher {
     id: TestDispatcherId,
-    state: Arc<Mutex<TestDispatcherState>>,
-}
-
-struct TestDispatcherState {
-    random: StdRng,
-    foreground: HashMap<TestDispatcherId, VecDeque<RunnableVariant>>,
-    background: Vec<RunnableVariant>,
-    deprioritized_background: Vec<RunnableVariant>,
-    delayed: Vec<(Duration, RunnableVariant)>,
-    start_time: Instant,
-    time: Duration,
-    is_main_thread: bool,
-    next_id: TestDispatcherId,
-    allow_parking: bool,
-    waiting_hint: Option<String>,
-    waiting_backtrace: Option<Backtrace>,
-    deprioritized_task_labels: HashSet<TaskLabel>,
-    block_on_ticks: RangeInclusive<usize>,
-    unparkers: Vec<Unparker>,
+    session_id: SessionId,
+    scheduler: Arc<TestScheduler>,
 }

 impl TestDispatcher {
-    pub fn new(random: StdRng) -> Self {
-        let state = TestDispatcherState {
-            random,
-            foreground: HashMap::default(),
-            background: Vec::new(),
-            deprioritized_background: Vec::new(),
-            delayed: Vec::new(),
-            time: Duration::ZERO,
-            start_time: Instant::now(),
-            is_main_thread: true,
-            next_id: TestDispatcherId(1),
-            allow_parking: false,
-            waiting_hint: None,
-            waiting_backtrace: None,
-            deprioritized_task_labels: Default::default(),
-            block_on_ticks: 0..=1000,
-            unparkers: Default::default(),
-        };
+    pub fn new(seed: u64) -> Self {
+        let scheduler = Arc::new(TestScheduler::new(TestSchedulerConfig {
+            seed,
+            randomize_order: true,
+            capture_pending_traces: std::env::var("PENDING_TRACES")
+                .map_or(false, |var| var == "1" || var == "true"),
+            timeout_ticks: 0..=1000,
+        }));
+
+        let session_id = scheduler.allocate_session_id();

         TestDispatcher {
             id: TestDispatcherId(0),
-            state: Arc::new(Mutex::new(state)),
+            session_id,
+            scheduler,
         }
     }

+    pub fn scheduler(&self) -> &Arc<TestScheduler> {
+        &self.scheduler
+    }
+
+    pub fn session_id(&self) -> SessionId {
+        self.session_id
+    }
+
     pub fn advance_clock(&self, by: Duration) {
-        let new_now = self.state.lock().time + by;
-        loop {
-            self.run_until_parked();
-            let state = self.state.lock();
-            let next_due_time = state.delayed.first().map(|(time, _)| *time);
-            drop(state);
-            if let Some(due_time) = next_due_time
-                && due_time <= new_now
-            {
-                self.state.lock().time = due_time;
-                continue;
-            }
-            break;
-        }
-        self.state.lock().time = new_now;
+        self.scheduler.advance_clock(by);
     }

-    pub fn advance_clock_to_next_delayed(&self) -> bool {
-        let next_due_time = self.state.lock().delayed.first().map(|(time, _)| *time);
-        if let Some(next_due_time) = next_due_time {
-            self.state.lock().time = next_due_time;
-            return true;
-        }
-        false
+    pub fn advance_clock_to_next_timer(&self) -> bool {
+        self.scheduler.advance_clock_to_next_timer()
     }

-    pub fn simulate_random_delay(&self) -> impl 'static + Send + Future<Output = ()> + use<> {
-        struct YieldNow {
-            pub(crate) count: usize,
-        }
-
-        impl Future for YieldNow {
-            type Output = ();
-
-            fn poll(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Self::Output> {
-                if self.count > 0 {
-                    self.count -= 1;
-                    cx.waker().wake_by_ref();
-                    Poll::Pending
-                } else {
-                    Poll::Ready(())
-                }
-            }
-        }
-
-        YieldNow {
-            count: self.state.lock().random.random_range(0..10),
-        }
+    pub fn simulate_random_delay(&self) -> Yield {
+        self.scheduler.yield_random()
     }

     pub fn tick(&self, background_only: bool) -> bool {
-        let mut state = self.state.lock();
-
-        while let Some((deadline, _)) = state.delayed.first() {
-            if *deadline > state.time {
-                break;
-            }
-            let (_, runnable) = state.delayed.remove(0);
-            state.background.push(runnable);
-        }
-
-        let foreground_len: usize = if background_only {
-            0
-        } else {
-            state
-                .foreground
-                .values()
-                .map(|runnables| runnables.len())
-                .sum()
-        };
-        let background_len = state.background.len();
-
-        let runnable;
-        let main_thread;
-        if foreground_len == 0 && background_len == 0 {
-            let deprioritized_background_len = state.deprioritized_background.len();
-            if deprioritized_background_len == 0 {
-                return false;
-            }
-            let ix = state.random.random_range(0..deprioritized_background_len);
-            main_thread = false;
-            runnable = state.deprioritized_background.swap_remove(ix);
-        } else {
-            main_thread = state.random.random_ratio(
-                foreground_len as u32,
-                (foreground_len + background_len) as u32,
-            );
-            if main_thread {
-                let state = &mut *state;
-                runnable = state
-                    .foreground
-                    .values_mut()
-                    .filter(|runnables| !runnables.is_empty())
-                    .choose(&mut state.random)
-                    .unwrap()
-                    .pop_front()
-                    .unwrap();
-            } else {
-                let ix = state.random.random_range(0..background_len);
-                runnable = state.background.swap_remove(ix);
-            };
-        };
-
-        let was_main_thread = state.is_main_thread;
-        state.is_main_thread = main_thread;
-        drop(state);
-
-        // todo(localcc): add timings to tests
-        match runnable {
-            RunnableVariant::Meta(runnable) => runnable.run(),
-            RunnableVariant::Compat(runnable) => runnable.run(),
-        };
-
-        self.state.lock().is_main_thread = was_main_thread;
-
-        true
-    }
-
-    pub fn deprioritize(&self, task_label: TaskLabel) {
-        self.state
-            .lock()
-            .deprioritized_task_labels
-            .insert(task_label);
-    }
-
-    pub fn run_until_parked(&self) {
-        while self.tick(false) {}
-    }
-
-    pub fn parking_allowed(&self) -> bool {
-        self.state.lock().allow_parking
-    }
-
-    pub fn allow_parking(&self) {
-        self.state.lock().allow_parking = true
-    }
-
-    pub fn forbid_parking(&self) {
-        self.state.lock().allow_parking = false
-    }
-
-    pub fn set_waiting_hint(&self, msg: Option<String>) {
-        self.state.lock().waiting_hint = msg
-    }
-
-    pub fn waiting_hint(&self) -> Option<String> {
-        self.state.lock().waiting_hint.clone()
-    }
-
-    pub fn start_waiting(&self) {
-        self.state.lock().waiting_backtrace = Some(Backtrace::new_unresolved());
-    }
-
-    pub fn finish_waiting(&self) {
-        self.state.lock().waiting_backtrace.take();
-    }
-
-    pub fn waiting_backtrace(&self) -> Option<Backtrace> {
-        self.state.lock().waiting_backtrace.take().map(|mut b| {
-            b.resolve();
-            b
-        })
-    }
-
-    pub fn rng(&self) -> StdRng {
-        self.state.lock().random.clone()
-    }
-
-    pub fn set_block_on_ticks(&self, range: std::ops::RangeInclusive<usize>) {
-        self.state.lock().block_on_ticks = range;
-    }
-
-    pub fn gen_block_on_ticks(&self) -> usize {
-        let mut lock = self.state.lock();
-        let block_on_ticks = lock.block_on_ticks.clone();
-        lock.random.random_range(block_on_ticks)
-    }
-
-    pub fn unpark_all(&self) {
-        self.state.lock().unparkers.retain(|parker| parker.unpark());
-    }
-
-    pub fn push_unparker(&self, unparker: Unparker) {
-        let mut state = self.state.lock();
-        state.unparkers.push(unparker);
+        if background_only {
+            self.scheduler.tick_background_only()
+        } else {
+            self.scheduler.tick()
+        }
     }
 }

 impl Clone for TestDispatcher {
     fn clone(&self) -> Self {
-        let id = post_inc(&mut self.state.lock().next_id.0);
+        let session_id = self.scheduler.allocate_session_id();
         Self {
             id: TestDispatcherId(id),
-            state: self.state.clone(),
+            session_id,
+            scheduler: self.scheduler.clone(),
         }
     }
 }
@@ -276,52 +87,31 @@ impl PlatformDispatcher for TestDispatcher {
    }

    fn is_main_thread(&self) -> bool {
        self.state.lock().is_main_thread
        self.scheduler.is_main_thread()
    }

    fn now(&self) -> Instant {
        let state = self.state.lock();
        state.start_time + state.time
        self.scheduler.clock().now()
    }

    fn dispatch(&self, runnable: RunnableVariant, label: Option<TaskLabel>, _priority: Priority) {
        {
            let mut state = self.state.lock();
            if label.is_some_and(|label| state.deprioritized_task_labels.contains(&label)) {
                state.deprioritized_background.push(runnable);
            } else {
                state.background.push(runnable);
            }
        }
        self.unpark_all();
    fn dispatch(&self, runnable: RunnableVariant, priority: Priority) {
        self.scheduler
            .schedule_background_with_priority(runnable, priority);
    }

    fn dispatch_on_main_thread(&self, runnable: RunnableVariant, _priority: Priority) {
        self.state
            .lock()
            .foreground
            .entry(self.id)
            .or_default()
            .push_back(runnable);
        self.unpark_all();
        self.scheduler
            .schedule_foreground(self.session_id, runnable);
    }

    fn dispatch_after(&self, duration: std::time::Duration, runnable: RunnableVariant) {
        let mut state = self.state.lock();
        let next_time = state.time + duration;
        let ix = match state.delayed.binary_search_by_key(&next_time, |e| e.0) {
            Ok(ix) | Err(ix) => ix,
        };
        state.delayed.insert(ix, (next_time, runnable));
    fn dispatch_after(&self, _duration: Duration, _runnable: RunnableVariant) {
        panic!(
            "dispatch_after should not be called in tests. \
             Use BackgroundExecutor::timer() which uses the scheduler's native timer."
        );
    }

    fn as_test(&self) -> Option<&TestDispatcher> {
        Some(self)
    }

    fn spawn_realtime(&self, _priority: crate::RealtimePriority, f: Box<dyn FnOnce() + Send>) {
        std::thread::spawn(move || {
            f();
        });
    }
}

@@ -135,7 +135,6 @@ impl TestPlatform {
            .new_path
            .pop_front()
            .expect("no pending new path prompt");
        self.background_executor().set_waiting_hint(None);
        tx.send(Ok(select_path(&path))).ok();
    }

@@ -147,7 +146,6 @@ impl TestPlatform {
            .multiple_choice
            .pop_front()
            .expect("no pending multiple choice prompt");
        self.background_executor().set_waiting_hint(None);
        let Some(ix) = prompt.answers.iter().position(|a| a == response) else {
            panic!(
                "PROMPT: {}\n{:?}\n{:?}\nCannot respond with {}",
@@ -182,8 +180,6 @@ impl TestPlatform {
    ) -> oneshot::Receiver<usize> {
        let (tx, rx) = oneshot::channel();
        let answers: Vec<String> = answers.iter().map(|s| s.label().to_string()).collect();
        self.background_executor()
            .set_waiting_hint(Some(format!("PROMPT: {:?} {:?}", msg, detail)));
        self.prompts
            .borrow_mut()
            .multiple_choice
@@ -348,8 +344,6 @@ impl Platform for TestPlatform {
        _suggested_name: Option<&str>,
    ) -> oneshot::Receiver<Result<Option<std::path::PathBuf>>> {
        let (tx, rx) = oneshot::channel();
        self.background_executor()
            .set_waiting_hint(Some(format!("PROMPT FOR PATH: {:?}", directory)));
        self.prompts
            .borrow_mut()
            .new_path

@@ -4,7 +4,6 @@ use std::{
    time::{Duration, Instant},
};

use anyhow::Context;
use util::ResultExt;
use windows::{
    System::Threading::{
@@ -12,18 +11,14 @@ use windows::{
    },
    Win32::{
        Foundation::{LPARAM, WPARAM},
        System::Threading::{
            GetCurrentThread, HIGH_PRIORITY_CLASS, SetPriorityClass, SetThreadPriority,
            THREAD_PRIORITY_HIGHEST, THREAD_PRIORITY_TIME_CRITICAL,
        },
        UI::WindowsAndMessaging::PostMessageW,
    },
};

use crate::{
    GLOBAL_THREAD_TIMINGS, HWND, PlatformDispatcher, Priority, PriorityQueueSender,
    RealtimePriority, RunnableVariant, SafeHwnd, THREAD_TIMINGS, TaskLabel, TaskTiming,
    ThreadTaskTimings, WM_GPUI_TASK_DISPATCHED_ON_MAIN_THREAD, profiler,
    RunnableVariant, SafeHwnd, THREAD_TIMINGS, TaskTiming, ThreadTaskTimings,
    WM_GPUI_TASK_DISPATCHED_ON_MAIN_THREAD, profiler,
};

pub(crate) struct WindowsDispatcher {
@@ -79,33 +74,15 @@ impl WindowsDispatcher {
pub(crate) fn execute_runnable(runnable: RunnableVariant) {
    let start = Instant::now();

    let mut timing = match runnable {
        RunnableVariant::Meta(runnable) => {
            let location = runnable.metadata().location;
            let timing = TaskTiming {
                location,
                start,
                end: None,
            };
            profiler::add_task_timing(timing);

            runnable.run();

            timing
        }
        RunnableVariant::Compat(runnable) => {
            let timing = TaskTiming {
                location: core::panic::Location::caller(),
                start,
                end: None,
            };
            profiler::add_task_timing(timing);

            runnable.run();

            timing
        }
    let location = runnable.metadata().location;
    let mut timing = TaskTiming {
        location,
        start,
        end: None,
    };
    profiler::add_task_timing(timing);

    runnable.run();

    let end = Instant::now();
    timing.end = Some(end);
@@ -138,18 +115,13 @@ impl PlatformDispatcher for WindowsDispatcher {
        current().id() == self.main_thread_id
    }

    fn dispatch(&self, runnable: RunnableVariant, label: Option<TaskLabel>, priority: Priority) {
    fn dispatch(&self, runnable: RunnableVariant, priority: Priority) {
        let priority = match priority {
            Priority::Realtime(_) => unreachable!(),
            Priority::High => WorkItemPriority::High,
            Priority::Medium => WorkItemPriority::Normal,
            Priority::Low => WorkItemPriority::Low,
        };
        self.dispatch_on_threadpool(priority, runnable);

        if let Some(label) = label {
            log::debug!("TaskLabel: {label:?}");
        }
    }

    fn dispatch_on_main_thread(&self, runnable: RunnableVariant, priority: Priority) {
@@ -184,28 +156,4 @@ impl PlatformDispatcher for WindowsDispatcher {
    fn dispatch_after(&self, duration: Duration, runnable: RunnableVariant) {
        self.dispatch_on_threadpool_after(runnable, duration);
    }

    fn spawn_realtime(&self, priority: RealtimePriority, f: Box<dyn FnOnce() + Send>) {
        std::thread::spawn(move || {
            // SAFETY: always safe to call
            let thread_handle = unsafe { GetCurrentThread() };

            let thread_priority = match priority {
                RealtimePriority::Audio => THREAD_PRIORITY_TIME_CRITICAL,
                RealtimePriority::Other => THREAD_PRIORITY_HIGHEST,
            };

            // SAFETY: thread_handle is a valid handle to a thread
            unsafe { SetPriorityClass(thread_handle, HIGH_PRIORITY_CLASS) }
                .context("thread priority class")
                .log_err();

            // SAFETY: thread_handle is a valid handle to a thread
            unsafe { SetThreadPriority(thread_handle, thread_priority) }
                .context("thread priority")
                .log_err();

            f();
        });
    }
}

131
crates/gpui/src/platform_scheduler.rs
Normal file
@@ -0,0 +1,131 @@
use crate::{PlatformDispatcher, RunnableMeta};
use async_task::Runnable;
use chrono::{DateTime, Utc};
use futures::channel::oneshot;
use scheduler::{Clock, Priority, Scheduler, SessionId, TestScheduler, Timer};
use std::{
    future::Future,
    pin::Pin,
    sync::{
        Arc,
        atomic::{AtomicU16, Ordering},
    },
    task::{Context, Poll},
    time::{Duration, Instant},
};
use waker_fn::waker_fn;

/// A production implementation of [`Scheduler`] that wraps a [`PlatformDispatcher`].
///
/// This allows GPUI to use the scheduler crate's executor types with the platform's
/// native dispatch mechanisms (e.g., Grand Central Dispatch on macOS).
pub struct PlatformScheduler {
    dispatcher: Arc<dyn PlatformDispatcher>,
    clock: Arc<PlatformClock>,
    next_session_id: AtomicU16,
}

impl PlatformScheduler {
    pub fn new(dispatcher: Arc<dyn PlatformDispatcher>) -> Self {
        Self {
            dispatcher: dispatcher.clone(),
            clock: Arc::new(PlatformClock { dispatcher }),
            next_session_id: AtomicU16::new(0),
        }
    }

    pub fn allocate_session_id(&self) -> SessionId {
        SessionId::new(self.next_session_id.fetch_add(1, Ordering::SeqCst))
    }
}

impl Scheduler for PlatformScheduler {
    fn block(
        &self,
        _session_id: Option<SessionId>,
        mut future: Pin<&mut dyn Future<Output = ()>>,
        timeout: Option<Duration>,
    ) -> bool {
        let deadline = timeout.map(|t| Instant::now() + t);
        let parker = parking::Parker::new();
        let unparker = parker.unparker();
        let waker = waker_fn(move || {
            unparker.unpark();
        });
        let mut cx = Context::from_waker(&waker);

        loop {
            match future.as_mut().poll(&mut cx) {
                Poll::Ready(()) => return true,
                Poll::Pending => {
                    if let Some(deadline) = deadline {
                        let now = Instant::now();
                        if now >= deadline {
                            return false;
                        }
                        parker.park_timeout(deadline - now);
                    } else {
                        parker.park();
                    }
                }
            }
        }
    }

    fn schedule_foreground(&self, _session_id: SessionId, runnable: Runnable<RunnableMeta>) {
        self.dispatcher
            .dispatch_on_main_thread(runnable, Priority::default());
    }

    fn schedule_background_with_priority(
        &self,
        runnable: Runnable<RunnableMeta>,
        priority: Priority,
    ) {
        self.dispatcher.dispatch(runnable, priority);
    }

    fn timer(&self, duration: Duration) -> Timer {
        let (tx, rx) = oneshot::channel();
        let dispatcher = self.dispatcher.clone();

        // Create a runnable that will send the completion signal
        let location = std::panic::Location::caller();
        let (runnable, _task) = async_task::Builder::new()
            .metadata(RunnableMeta { location })
            .spawn(
                move |_| async move {
                    let _ = tx.send(());
                },
                move |runnable| {
                    dispatcher.dispatch_after(duration, runnable);
                },
            );
        runnable.schedule();

        Timer::new(rx)
    }

    fn clock(&self) -> Arc<dyn Clock> {
        self.clock.clone()
    }

    fn as_test(&self) -> Option<&TestScheduler> {
        None
    }
}

/// A production clock that uses the platform dispatcher's time.
struct PlatformClock {
    dispatcher: Arc<dyn PlatformDispatcher>,
}

impl Clock for PlatformClock {
    fn utc_now(&self) -> DateTime<Utc> {
        Utc::now()
    }

    fn now(&self) -> Instant {
        self.dispatcher.now()
    }
}
@@ -217,6 +217,7 @@ impl Drop for ThreadTimings {
    }
}

#[allow(dead_code)] // Used by Linux and Windows dispatchers, not macOS
pub(crate) fn add_task_timing(timing: TaskTiming) {
    THREAD_TIMINGS.with(|timings| {
        let mut timings = timings.lock();

@@ -41,7 +41,6 @@ impl<T> PriorityQueueState<T> {

        let mut queues = self.queues.lock();
        match priority {
            Priority::Realtime(_) => unreachable!(),
            Priority::High => queues.high_priority.push(item),
            Priority::Medium => queues.medium_priority.push(item),
            Priority::Low => queues.low_priority.push(item),
@@ -218,29 +217,29 @@ impl<T> PriorityQueueReceiver<T> {
            self.state.recv()?
        };

        let high = P::High.probability() * !queues.high_priority.is_empty() as u32;
        let medium = P::Medium.probability() * !queues.medium_priority.is_empty() as u32;
        let low = P::Low.probability() * !queues.low_priority.is_empty() as u32;
        let high = P::High.weight() * !queues.high_priority.is_empty() as u32;
        let medium = P::Medium.weight() * !queues.medium_priority.is_empty() as u32;
        let low = P::Low.weight() * !queues.low_priority.is_empty() as u32;
        let mut mass = high + medium + low;

        if !queues.high_priority.is_empty() {
            let flip = self.rand.random_ratio(P::High.probability(), mass);
            let flip = self.rand.random_ratio(P::High.weight(), mass);
            if flip {
                return Ok(queues.high_priority.pop());
            }
            mass -= P::High.probability();
            mass -= P::High.weight();
        }

        if !queues.medium_priority.is_empty() {
            let flip = self.rand.random_ratio(P::Medium.probability(), mass);
            let flip = self.rand.random_ratio(P::Medium.weight(), mass);
            if flip {
                return Ok(queues.medium_priority.pop());
            }
            mass -= P::Medium.probability();
            mass -= P::Medium.weight();
        }

        if !queues.low_priority.is_empty() {
            let flip = self.rand.random_ratio(P::Low.probability(), mass);
            let flip = self.rand.random_ratio(P::Low.weight(), mass);
            if flip {
                return Ok(queues.low_priority.pop());
            }

@@ -27,7 +27,6 @@
//! ```
use crate::{Entity, Subscription, TestAppContext, TestDispatcher};
use futures::StreamExt as _;
use rand::prelude::*;
use smol::channel;
use std::{
    env,
@@ -54,7 +53,7 @@ pub fn run_test(
        eprintln!("seed = {seed}");
    }
    let result = panic::catch_unwind(|| {
        let dispatcher = TestDispatcher::new(StdRng::seed_from_u64(seed));
        let dispatcher = TestDispatcher::new(seed);
        test_fn(dispatcher, seed);
    });

@@ -318,10 +318,9 @@ mod tests {
    use crate::{Font, FontFeatures, FontStyle, FontWeight, TestAppContext, TestDispatcher, font};
    #[cfg(target_os = "macos")]
    use crate::{TextRun, WindowTextSystem, WrapBoundary};
    use rand::prelude::*;

    fn build_wrapper() -> LineWrapper {
        let dispatcher = TestDispatcher::new(StdRng::seed_from_u64(0));
        let dispatcher = TestDispatcher::new(0);
        let cx = TestAppContext::build(dispatcher, None);
        let id = cx.text_system().resolve_font(&font(".ZedMono"));
        LineWrapper::new(id, px(16.), cx.text_system().platform_text_system.clone())

@@ -217,19 +217,77 @@ slotmap::new_key_type! {
}

thread_local! {
    /// Fallback arena used when no app-specific arena is active.
    /// In production, each window draw sets CURRENT_ELEMENT_ARENA to the app's arena.
    pub(crate) static ELEMENT_ARENA: RefCell<Arena> = RefCell::new(Arena::new(1024 * 1024));

    /// Points to the current App's element arena during draw operations.
    /// This allows multiple test Apps to have isolated arenas, preventing
    /// cross-session corruption when the scheduler interleaves their tasks.
    static CURRENT_ELEMENT_ARENA: Cell<Option<*const RefCell<Arena>>> = const { Cell::new(None) };
}

/// Allocates an element in the current arena. Uses the app-specific arena if one
/// is active (during draw), otherwise falls back to the thread-local ELEMENT_ARENA.
pub(crate) fn with_element_arena<R>(f: impl FnOnce(&mut Arena) -> R) -> R {
    CURRENT_ELEMENT_ARENA.with(|current| {
        if let Some(arena_ptr) = current.get() {
            // SAFETY: The pointer is valid for the duration of the draw operation
            // that set it, and we're being called during that same draw.
            let arena_cell = unsafe { &*arena_ptr };
            f(&mut arena_cell.borrow_mut())
        } else {
            ELEMENT_ARENA.with_borrow_mut(f)
        }
    })
}

/// RAII guard that sets CURRENT_ELEMENT_ARENA for the duration of a draw operation.
/// When dropped, restores the previous arena (supporting nested draws).
pub(crate) struct ElementArenaScope {
    previous: Option<*const RefCell<Arena>>,
}

impl ElementArenaScope {
    /// Enter a scope where element allocations use the given arena.
    pub(crate) fn enter(arena: &RefCell<Arena>) -> Self {
        let previous = CURRENT_ELEMENT_ARENA.with(|current| {
            let prev = current.get();
            current.set(Some(arena as *const RefCell<Arena>));
            prev
        });
        Self { previous }
    }
}

impl Drop for ElementArenaScope {
    fn drop(&mut self) {
        CURRENT_ELEMENT_ARENA.with(|current| {
            current.set(self.previous);
        });
    }
}

/// Returned when the element arena has been used and so must be cleared before the next draw.
#[must_use]
pub struct ArenaClearNeeded;
pub struct ArenaClearNeeded {
    arena: *const RefCell<Arena>,
}

impl ArenaClearNeeded {
    /// Create a new ArenaClearNeeded that will clear the given arena.
    pub(crate) fn new(arena: &RefCell<Arena>) -> Self {
        Self {
            arena: arena as *const RefCell<Arena>,
        }
    }

    /// Clear the element arena.
    pub fn clear(self) {
        ELEMENT_ARENA.with_borrow_mut(|element_arena| {
            element_arena.clear();
        });
        // SAFETY: The arena pointer is valid because ArenaClearNeeded is created
        // at the end of draw() and must be cleared before the next draw.
        let arena_cell = unsafe { &*self.arena };
        arena_cell.borrow_mut().clear();
    }
}

@@ -1998,6 +2056,10 @@ impl Window {
    /// the contents of the new [`Scene`], use [`Self::present`].
    #[profiling::function]
    pub fn draw(&mut self, cx: &mut App) -> ArenaClearNeeded {
        // Set up the per-App arena for element allocation during this draw.
        // This ensures that multiple test Apps have isolated arenas.
        let _arena_scope = ElementArenaScope::enter(&cx.element_arena);

        self.invalidate_entities();
        cx.entities.clear_accessed();
        debug_assert!(self.rendered_entity_stack.is_empty());
@@ -2065,7 +2127,7 @@ impl Window {
        self.invalidator.set_phase(DrawPhase::None);
        self.needs_present.set(true);

        ArenaClearNeeded
        ArenaClearNeeded::new(&cx.element_arena)
    }

    fn record_entities_accessed(&mut self, cx: &mut App) {

@@ -29,7 +29,7 @@ use fs::MTime;
use futures::channel::oneshot;
use gpui::{
    App, AppContext as _, Context, Entity, EventEmitter, HighlightStyle, SharedString, StyledText,
    Task, TaskLabel, TextStyle,
    Task, TextStyle,
};

use lsp::{LanguageServerId, NumberOrString};
@@ -52,7 +52,7 @@ use std::{
    ops::{Deref, Range},
    path::PathBuf,
    rc,
    sync::{Arc, LazyLock},
    sync::Arc,
    time::{Duration, Instant},
    vec,
};
@@ -75,10 +75,6 @@ pub use {tree_sitter_python, tree_sitter_rust, tree_sitter_typescript};

pub use lsp::DiagnosticSeverity;

/// A label for the background task spawned by the buffer to compute
/// a diff against the contents of its file.
pub static BUFFER_DIFF_TASK: LazyLock<TaskLabel> = LazyLock::new(TaskLabel::new);

/// Indicate whether a [`Buffer`] has permissions to edit.
#[derive(PartialEq, Clone, Copy, Debug)]
pub enum Capability {
@@ -2098,18 +2094,17 @@ impl Buffer {
    pub fn diff(&self, mut new_text: String, cx: &App) -> Task<Diff> {
        let old_text = self.as_rope().clone();
        let base_version = self.version();
        cx.background_executor()
            .spawn_labeled(*BUFFER_DIFF_TASK, async move {
                let old_text = old_text.to_string();
                let line_ending = LineEnding::detect(&new_text);
                LineEnding::normalize(&mut new_text);
                let edits = text_diff(&old_text, &new_text);
                Diff {
                    base_version,
                    line_ending,
                    edits,
                }
            })
        cx.background_spawn(async move {
            let old_text = old_text.to_string();
            let line_ending = LineEnding::detect(&new_text);
            LineEnding::normalize(&mut new_text);
            let edits = text_diff(&old_text, &new_text);
            Diff {
                base_version,
                line_ending,
                edits,
            }
        })
    }

    /// Spawns a background task that searches the buffer for any whitespace

@@ -492,6 +492,11 @@ impl LanguageRegistry {
        servers_rx
    }

    #[cfg(any(feature = "test-support", test))]
    pub fn has_fake_lsp_server(&self, lsp_name: &LanguageServerName) -> bool {
        self.state.read().fake_server_entries.contains_key(lsp_name)
    }

    /// Adds a language to the registry, which can be loaded if needed.
    pub fn register_language(
        &self,
@@ -1129,10 +1134,23 @@ impl LanguageRegistry {
        binary: lsp::LanguageServerBinary,
        cx: &mut gpui::AsyncApp,
    ) -> Option<lsp::LanguageServer> {
        use gpui::AppContext as _;
        log::warn!(
            "[language::language_registry] create_fake_language_server: called name={} id={} path={:?} args={:?}",
            name.0,
            server_id,
            binary.path,
            binary.arguments
        );

        let mut state = self.state.write();
        let fake_entry = state.fake_server_entries.get_mut(name)?;
        let Some(fake_entry) = state.fake_server_entries.get_mut(name) else {
            log::warn!(
                "[language::language_registry] create_fake_language_server: no fake server entry registered for {}",
                name.0
            );
            return None;
        };

        let (server, mut fake_server) = lsp::FakeLanguageServer::new(
            server_id,
            binary,
@@ -1146,17 +1164,18 @@ impl LanguageRegistry {
            initializer(&mut fake_server);
        }

        let tx = fake_entry.tx.clone();
        cx.background_spawn(async move {
            if fake_server
                .try_receive_notification::<lsp::notification::Initialized>()
                .await
                .is_some()
            {
                tx.unbounded_send(fake_server.clone()).ok();
            }
        })
        .detach();
        let server_name = name.0.to_string();
        log::info!(
            "[language_registry] create_fake_language_server: created fake server for {server_name}, emitting synchronously (tests must not depend on LSP task scheduling)"
        );

        // Emit synchronously so tests can reliably observe server creation even if the LSP startup
        // task hasn't progressed to initialization yet.
        if fake_entry.tx.unbounded_send(fake_server).is_err() {
            log::warn!(
                "[language_registry] create_fake_language_server: failed to send fake server for {server_name} (receiver dropped)"
            );
        }

        Some(server)
    }

@@ -48,18 +48,17 @@ pub struct State {
    codestral_api_key_state: Entity<ApiKeyState>,
}

struct CodestralApiKey(Entity<ApiKeyState>);
impl Global for CodestralApiKey {}

pub fn codestral_api_key(cx: &mut App) -> Entity<ApiKeyState> {
    if cx.has_global::<CodestralApiKey>() {
        cx.global::<CodestralApiKey>().0.clone()
    } else {
        let api_key_state = cx
            .new(|_| ApiKeyState::new(CODESTRAL_API_URL.into(), CODESTRAL_API_KEY_ENV_VAR.clone()));
        cx.set_global(CodestralApiKey(api_key_state.clone()));
        api_key_state
    }
    // IMPORTANT:
    // Do not store `Entity<T>` handles in process-wide statics (e.g. `OnceLock`).
    //
    // `Entity<T>` is tied to a particular `App`/entity-map context. Caching it globally can
    // cause panics like "used a entity with the wrong context" when tests (or multiple apps)
    // create distinct `App` instances in the same process.
    //
    // If we want a per-process singleton, store plain data (e.g. env var names) and create
    // the entity per-App instead.
    cx.new(|_| ApiKeyState::new(CODESTRAL_API_URL.into(), CODESTRAL_API_KEY_ENV_VAR.clone()))
}

impl State {

@@ -1735,13 +1735,11 @@ impl FakeLanguageServer {
        T: request::Request,
        T::Result: 'static + Send,
    {
        self.server.executor.start_waiting();
        self.server.request::<T>(params).await
    }

    /// Attempts [`Self::try_receive_notification`], unwrapping if it has not received the specified type yet.
    pub async fn receive_notification<T: notification::Notification>(&mut self) -> T::Params {
        self.server.executor.start_waiting();
        self.try_receive_notification::<T>().await.unwrap()
    }

@@ -125,7 +125,7 @@ impl ProfilerWindow {
        loop {
            let data = cx
                .foreground_executor()
                .dispatcher
                .dispatcher()
                .get_current_thread_timings();

            this.update(cx, |this: &mut ProfilerWindow, cx| {

@@ -9,8 +9,7 @@ use crate::{
};
use async_trait::async_trait;
use buffer_diff::{
    BufferDiffEvent, CALCULATE_DIFF_TASK, DiffHunkSecondaryStatus, DiffHunkStatus,
    DiffHunkStatusKind, assert_hunks,
    BufferDiffEvent, DiffHunkSecondaryStatus, DiffHunkStatus, DiffHunkStatusKind, assert_hunks,
};
use fs::FakeFs;
use futures::{StreamExt, future};
@@ -4204,10 +4203,6 @@ async fn test_file_changes_multiple_times_on_disk(cx: &mut gpui::TestAppContext)
        .await
        .unwrap();

    // Simulate buffer diffs being slow, so that they don't complete before
    // the next file change occurs.
    cx.executor().deprioritize(*language::BUFFER_DIFF_TASK);

    // Change the buffer's file on disk, and then wait for the file change
    // to be detected by the worktree, so that the buffer starts reloading.
    fs.save(
@@ -4259,10 +4254,6 @@ async fn test_edit_buffer_while_it_reloads(cx: &mut gpui::TestAppContext) {
        .await
        .unwrap();

    // Simulate buffer diffs being slow, so that they don't complete before
    // the next file change occurs.
    cx.executor().deprioritize(*language::BUFFER_DIFF_TASK);

    // Change the buffer's file on disk, and then wait for the file change
    // to be detected by the worktree, so that the buffer starts reloading.
    fs.save(
@@ -7982,18 +7973,13 @@ async fn test_staging_hunks_with_delayed_fs_event(cx: &mut gpui::TestAppContext)
#[gpui::test(iterations = 25)]
async fn test_staging_random_hunks(
    mut rng: StdRng,
    executor: BackgroundExecutor,
    _executor: BackgroundExecutor,
    cx: &mut gpui::TestAppContext,
) {
    let operations = env::var("OPERATIONS")
        .map(|i| i.parse().expect("invalid `OPERATIONS` variable"))
        .unwrap_or(20);

    // Try to induce races between diff recalculation and index writes.
    if rng.random_bool(0.5) {
        executor.deprioritize(*CALCULATE_DIFF_TASK);
    }

    use DiffHunkSecondaryStatus::*;
    init_test(cx);

@@ -16,6 +16,7 @@ doctest = false
alacritty_terminal.workspace = true
anyhow.workspace = true
async-dispatcher.workspace = true
async-task.workspace = true
async-tungstenite = { workspace = true, features = ["tokio", "tokio-rustls-manual-roots", "tokio-runtime"] }
base64.workspace = true
client.workspace = true

@@ -12,7 +12,7 @@ mod session;
use std::{sync::Arc, time::Duration};

use async_dispatcher::{Dispatcher, Runnable, set_dispatcher};
use gpui::{App, PlatformDispatcher, Priority, RunnableVariant};
use gpui::{App, PlatformDispatcher, Priority, RunnableMeta};
use project::Fs;
pub use runtimelib::ExecutionState;

@@ -44,18 +44,34 @@ fn zed_dispatcher(cx: &mut App) -> impl Dispatcher {
    // just make that consistent so we have this dispatcher ready to go for
    // other crates in Zed.
    impl Dispatcher for ZedDispatcher {
        #[track_caller]
        fn dispatch(&self, runnable: Runnable) {
            self.dispatcher
                .dispatch(RunnableVariant::Compat(runnable), None, Priority::default());
            let location = core::panic::Location::caller();
            let (wrapper, task) = async_task::Builder::new()
                .metadata(RunnableMeta { location })
                .spawn(|_| async move { runnable.run() }, {
                    let dispatcher = self.dispatcher.clone();
                    move |r| dispatcher.dispatch(r, Priority::default())
                });
            wrapper.schedule();
            task.detach();
        }

        #[track_caller]
        fn dispatch_after(&self, duration: Duration, runnable: Runnable) {
            self.dispatcher
                .dispatch_after(duration, RunnableVariant::Compat(runnable));
            let location = core::panic::Location::caller();
            let (wrapper, task) = async_task::Builder::new()
                .metadata(RunnableMeta { location })
                .spawn(|_| async move { runnable.run() }, {
                    let dispatcher = self.dispatcher.clone();
                    move |r| dispatcher.dispatch_after(duration, r)
                });
            wrapper.schedule();
            task.detach();
        }
    }

    ZedDispatcher {
        dispatcher: cx.background_executor().dispatcher.clone(),
        dispatcher: cx.background_executor().dispatcher().clone(),
    }
}

229
crates/scheduler/full_integration_plan.md
Normal file
@@ -0,0 +1,229 @@
# GPUI Scheduler Integration

This document describes the integration of GPUI's async execution with the scheduler crate, including architecture, design decisions, and lessons learned.

## Goal

Unify GPUI's async execution with the scheduler crate, eliminating duplicate blocking/scheduling logic and enabling deterministic testing.
|
||||
|
||||
---
|
||||
|
||||
## Architecture
|
||||
|
||||
```
|
||||
┌──────────────────────────────────────────────────────────────────┐
|
||||
│ GPUI │
|
||||
│ │
|
||||
│ ┌────────────────────────┐ ┌──────────────────────────────┐ │
|
||||
│ │ gpui::Background- │ │ gpui::ForegroundExecutor │ │
|
||||
│ │ Executor │ │ - inner: scheduler:: │ │
|
||||
│ │ - scheduler: Arc< │ │ ForegroundExecutor │ │
|
||||
│ │ dyn Scheduler> │ │ - dispatcher: Arc │ │
|
||||
│ │ - dispatcher: Arc │ └──────────────┬───────────────┘ │
|
||||
│ └───────────┬────────────┘ │ │
|
||||
│ │ │ │
|
||||
│ │ (creates temporary │ (wraps) │
|
||||
│ │ scheduler::Background- │ │
|
||||
│ │ Executor when spawning) │ │
|
||||
│ │ │ │
|
||||
│ │ ┌───────────────────────────┘ │
|
||||
│ │ │ │
|
||||
│ ▼ ▼ │
|
||||
│ ┌─────────────────────────────────────────────────────────────┐ │
|
||||
│ │ Arc<dyn Scheduler> │ │
|
||||
│ └──────────────────────────┬──────────────────────────────────┘ │
|
||||
│ │ │
|
||||
│ ┌────────────────┴────────────────┐ │
|
||||
│ │ │ │
|
||||
│ ▼ ▼ │
|
||||
│ ┌───────────────────────┐ ┌───────────────────────────┐ │
|
||||
│ │ PlatformScheduler │ │ TestScheduler │ │
|
||||
│ │ (production) │ │ (deterministic tests) │ │
|
||||
│ └───────────────────────┘ └───────────────────────────┘ │
|
||||
└──────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### Scheduler Trait
|
||||
|
||||
The scheduler crate provides:
|
||||
- `Scheduler` trait with `block()`, `schedule_foreground()`, `schedule_background_with_priority()`, `timer()`, `clock()`
|
||||
- `TestScheduler` implementation for deterministic testing
|
||||
- `ForegroundExecutor` and `BackgroundExecutor` that wrap `Arc<dyn Scheduler>`
|
||||
- `Task<T>` type with `ready()`, `is_ready()`, `detach()`, `from_async_task()`
|
||||
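The shape of this split can be sketched with a simplified, std-only model (the real trait uses `async_task::Runnable<RunnableMeta>` and pinned futures; here a runnable is just a boxed closure, and every name except `Scheduler` is illustrative):

```rust
use std::sync::{Arc, Mutex};

// Simplified stand-in for async_task::Runnable<RunnableMeta>.
type Runnable = Box<dyn FnOnce() + Send>;

// Reduced version of the trait described above (blocking, timers, clocks omitted).
trait Scheduler: Send + Sync {
    fn schedule_background(&self, runnable: Runnable);
}

// "Production" flavor: runs work as it arrives (a real impl dispatches to a thread pool).
struct ImmediateScheduler;

impl Scheduler for ImmediateScheduler {
    fn schedule_background(&self, runnable: Runnable) {
        runnable();
    }
}

// "Test" flavor: queues work so a test can drain it deterministically.
#[derive(Default)]
struct QueueScheduler {
    queue: Mutex<Vec<Runnable>>,
}

impl Scheduler for QueueScheduler {
    fn schedule_background(&self, runnable: Runnable) {
        self.queue.lock().unwrap().push(runnable);
    }
}

impl QueueScheduler {
    fn run_until_parked(&self) {
        loop {
            // Take the lock only to pop, so a runnable may schedule more work.
            let runnable = self.queue.lock().unwrap().pop();
            match runnable {
                Some(runnable) => runnable(),
                None => break,
            }
        }
    }
}

// One executor type serves both paths; only the Scheduler implementation differs.
struct BackgroundExecutor {
    scheduler: Arc<dyn Scheduler>,
}

impl BackgroundExecutor {
    fn spawn(&self, f: impl FnOnce() + Send + 'static) {
        self.scheduler.schedule_background(Box::new(f));
    }
}

fn main() {
    let scheduler = Arc::new(QueueScheduler::default());
    let executor = BackgroundExecutor {
        scheduler: scheduler.clone(),
    };
    let hits = Arc::new(Mutex::new(0));
    let h = hits.clone();
    executor.spawn(move || *h.lock().unwrap() += 1);
    assert_eq!(*hits.lock().unwrap(), 0); // queued, not yet run
    scheduler.run_until_parked(); // the test drives execution deterministically
    assert_eq!(*hits.lock().unwrap(), 1);

    // The same executor type works unchanged with the "production" scheduler.
    let executor = BackgroundExecutor {
        scheduler: Arc::new(ImmediateScheduler),
    };
    let h = hits.clone();
    executor.spawn(move || *h.lock().unwrap() += 1);
    assert_eq!(*hits.lock().unwrap(), 2);
    println!("ok");
}
```

This mirrors the "no optional fields" principle below: test and production code paths share one executor type and swap only the `Arc<dyn Scheduler>` behind it.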

### PlatformScheduler

`PlatformScheduler` in GPUI (`crates/gpui/src/platform_scheduler.rs`):
- Implements `Scheduler` trait for production use
- Wraps `PlatformDispatcher` (Mac, Linux, Windows)
- Uses `parking::Parker` for blocking operations
- Uses `dispatch_after` for timers
- Provides a `PlatformClock` that delegates to the dispatcher

### GPUI Executors

GPUI's executors (`crates/gpui/src/executor.rs`):
- `gpui::ForegroundExecutor` wraps `scheduler::ForegroundExecutor` internally
- `gpui::BackgroundExecutor` holds `Arc<dyn Scheduler>` directly
- Select `TestScheduler` or `PlatformScheduler` based on dispatcher type
- Wrap `scheduler::Task<T>` in a thin `gpui::Task<T>` that adds `detach_and_log_err()`
- Use `Scheduler::block()` for all blocking operations

---

## Design Decisions

### Key Design Principles

1. **No optional fields**: Both test and production paths use the same executor types with different `Scheduler` implementations underneath.

2. **Scheduler owns blocking logic**: The `Scheduler::block()` method handles all blocking, including timeout and task stepping (for tests).

3. **GPUI Task wrapper**: Thin wrapper around `scheduler::Task` that adds `detach_and_log_err()`, which requires `&App`.

### Foreground Priority Not Supported

`ForegroundExecutor::spawn_with_priority` accepts a priority parameter but ignores it. This is acceptable because:
- macOS (primary platform) ignores foreground priority anyway
- TestScheduler runs foreground tasks in order
- There are no external callers of this method in the codebase

### Session IDs for Foreground Isolation

Each `ForegroundExecutor` gets a `SessionId` to prevent reentrancy when blocking. This ensures that when blocking on a future, we don't run foreground tasks from the same session.
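The session mechanism can be illustrated with a small queue model (the names `Queue` and the string payloads are illustrative; the real scheduler stores `ScheduledRunnable`s with optional session IDs and a `blocked_sessions` stack):

```rust
// Minimal model of foreground-session isolation: while a session is blocked,
// its queued tasks are skipped, but other sessions' tasks still run.
#[derive(Clone, Copy, PartialEq, Debug)]
struct SessionId(u16);

struct Queue {
    runnables: Vec<(SessionId, &'static str)>,
    blocked_sessions: Vec<SessionId>,
}

impl Queue {
    /// Pop the first runnable whose session is not currently blocked.
    fn step(&mut self) -> Option<&'static str> {
        let blocked = &self.blocked_sessions;
        let ix = self
            .runnables
            .iter()
            .position(|(session, _)| !blocked.contains(session))?;
        Some(self.runnables.remove(ix).1)
    }
}

fn main() {
    let mut queue = Queue {
        runnables: vec![(SessionId(1), "a"), (SessionId(1), "b"), (SessionId(2), "c")],
        // Session 1 is blocking on a future, so its tasks must not re-enter.
        blocked_sessions: vec![SessionId(1)],
    };
    assert_eq!(queue.step(), Some("c")); // session 2 still makes progress
    assert_eq!(queue.step(), None); // session 1's tasks are deferred
    queue.blocked_sessions.pop(); // unblocked once block() returns
    assert_eq!(queue.step(), Some("a"));
    println!("ok");
}
```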

### Runtime Scheduler Selection

In test builds, we check `dispatcher.as_test()` to choose between `TestScheduler` and `PlatformScheduler`. This allows the same executor types to work in both test and production environments.

### Profiler Integration

The profiler task timing infrastructure continues to work because:
- `PlatformScheduler::schedule_background_with_priority` calls `dispatcher.dispatch()`
- `PlatformScheduler::schedule_foreground` calls `dispatcher.dispatch_on_main_thread()`
- All platform dispatchers wrap task execution with profiler timing

---

## Intentional Removals

### `spawn_labeled` and `deprioritize`

**What was removed**:
- `BackgroundExecutor::spawn_labeled(label: TaskLabel, future)`
- `BackgroundExecutor::deprioritize(label: TaskLabel)`
- `TaskLabel` type

**Why**: These were only used in a few places for test ordering control. The new priority-weighted scheduling in `TestScheduler` provides similar functionality through `Priority::High/Medium/Low`.

**Migration**: Use `spawn()` instead of `spawn_labeled()`. For test ordering, use explicit synchronization (channels, etc.) or priority levels.
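The priority-weighted selection can be sketched as follows. The weights mirror the 60/30/10 split used by `Priority::weight()` in the scheduler crate; the random draw is replaced by an explicit `target` argument so the walk is deterministic:

```rust
// Weighted selection: draw a target in 0..total_weight, then walk the
// candidates, subtracting each weight until the target lands in a bucket.
fn select(target: u32, weights: &[u32]) -> usize {
    let mut target = target;
    for (ix, &weight) in weights.iter().enumerate() {
        if target < weight {
            return ix;
        }
        target -= weight;
    }
    weights.len() - 1 // unreachable when target < weights.iter().sum()
}

fn main() {
    // Three candidate tasks at High (60), Medium (30), Low (10) priority.
    let weights = [60, 30, 10];
    assert_eq!(select(0, &weights), 0);
    assert_eq!(select(59, &weights), 0); // 60% of draws pick the High task
    assert_eq!(select(60, &weights), 1);
    assert_eq!(select(89, &weights), 1); // 30% pick Medium
    assert_eq!(select(90, &weights), 2); // 10% pick Low
    assert_eq!(select(99, &weights), 2);
    println!("ok");
}
```

Unlike `deprioritize`, this gives a probabilistic bias rather than a hard ordering, which is why it also prevents starvation of low-priority tasks.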

### `start_waiting` / `finish_waiting` Debug Methods

**What was removed**:
- `BackgroundExecutor::start_waiting()`
- `BackgroundExecutor::finish_waiting()`
- Associated `waiting_backtrace` tracking in TestDispatcher

**Why**: The new `TracingWaker` in `TestScheduler` provides better debugging capability. Run tests with `PENDING_TRACES=1` to see backtraces of all pending futures when parking is forbidden.

### Realtime Priority

**What was removed**: `Priority::Realtime` variant and associated OS thread spawning.

**Why**: There were no in-tree call sites using realtime priority, and the correctness/backpressure semantics are non-trivial:
- Blocking enqueue risks stalling latency-sensitive threads
- Non-blocking enqueue implies dropping runnables under load, which breaks correctness for general futures

Rather than ship ambiguous or risky semantics, we removed the API until there is a concrete in-tree use case.

---

## Lessons Learned

These lessons were discovered during integration testing and represent important design constraints.

### 1. Never Cache `Entity<T>` in Process-Wide Statics

**Problem**: `gpui::Entity<T>` is a handle tied to a particular `App`'s entity-map. Storing an `Entity<T>` in a process-wide static (`OnceLock`, `LazyLock`, etc.) and reusing it across different `App` instances causes:
- "used a entity with the wrong context" panics
- `Option::unwrap()` failures in leak-detection clone paths
- Nondeterministic behavior depending on test ordering

**Solution**: Cache plain data (env var name, URL, etc.) in statics, and create `Entity<T>` per-`App`.

**Guideline**: Never store `gpui::Entity<T>` or other `App`-context-bound handles in process-wide statics unless explicitly keyed by `App` identity.
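A std-only sketch of the pattern (the `App` and `Handle` types here are illustrative stand-ins, not GPUI APIs): cache only plain data process-wide, and derive the context-bound value from it per instance:

```rust
use std::sync::OnceLock;

// Plain data is safe to cache process-wide: it carries no App context.
static SERVER_URL: OnceLock<String> = OnceLock::new();

fn server_url() -> &'static str {
    SERVER_URL.get_or_init(|| {
        std::env::var("EXAMPLE_SERVER_URL").unwrap_or_else(|_| "https://example.com".into())
    })
}

// Stand-in for an App whose handles must not outlive or cross it.
struct App {
    id: u32,
}

// Stand-in for a context-bound handle like Entity<T>.
struct Handle {
    app_id: u32,
    url: &'static str,
}

fn handle_for(app: &App) -> Handle {
    // Recreated for each App; only the plain cached data is shared.
    Handle {
        app_id: app.id,
        url: server_url(),
    }
}

fn main() {
    let first = handle_for(&App { id: 1 });
    let second = handle_for(&App { id: 2 });
    // Same cached data, but each handle is tied to its own app.
    assert_eq!(first.url, second.url);
    assert_ne!(first.app_id, second.app_id);
    println!("ok");
}
```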

### 2. `block_with_timeout` Behavior Depends on Tick Budget

**Problem**: In `TestScheduler`, "timeout" behavior depends on an internal tick budget (`timeout_ticks`), not just elapsed wall-clock time. During the allotted ticks, the scheduler can poll futures and step other tasks.

**Implications**:
- A future can complete "within a timeout" in tests due to scheduler progress, even without explicit `advance_clock()`
- Yielding does not advance time
- If a test needs time to advance, it must do so explicitly via `advance_clock()`

**For deterministic timeout tests**: Set `scheduler.set_timeout_ticks(0..=0)` to prevent any scheduler stepping during timeout, then explicitly advance time.
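The tick-budget semantics can be modeled without real futures (an assumption-laden sketch: a "future" here is just a closure returning `Some(value)` once it has been polled enough times, and `block_with_ticks` is an illustrative name, not the crate's API):

```rust
// Model of blocking with a timeout in the test scheduler: the future is
// polled up to a tick budget, with scheduler work interleaved between polls.
// It can therefore complete "within the timeout" without any clock advance.
fn block_with_ticks<T>(
    mut poll: impl FnMut() -> Option<T>,
    timeout_ticks: usize,
) -> Result<T, ()> {
    for _ in 0..=timeout_ticks {
        if let Some(value) = poll() {
            return Ok(value);
        }
        // A real scheduler would step other runnables and expired timers here.
    }
    Err(()) // timed out: tick budget exhausted before the future completed
}

fn main() {
    // A "future" that needs three polls before it is ready.
    let make = || {
        let mut polls = 0;
        move || {
            polls += 1;
            (polls >= 3).then_some("done")
        }
    };
    // Enough ticks: completes even though no simulated time passed.
    assert_eq!(block_with_ticks(make(), 5), Ok("done"));
    // The equivalent of set_timeout_ticks(0..=0): the same future times out,
    // so the test must advance the clock explicitly to make progress.
    assert_eq!(block_with_ticks(make(), 0), Err(()));
    println!("ok");
}
```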

### 3. Realtime Priority Must Panic in Tests

**Problem**: `Priority::Realtime` spawns dedicated OS threads outside the test scheduler, which breaks determinism and causes hangs/flakes.

**Solution**: The test dispatcher's `spawn_realtime` implementation panics with a clear message. This is an enforced invariant, not an implementation detail.

---

## Test Helpers

Test-only methods on `BackgroundExecutor`:
- `block_test()` - for running async tests synchronously
- `advance_clock()` - move simulated time forward
- `tick()` - run one task
- `run_until_parked()` - run all ready tasks
- `allow_parking()` / `forbid_parking()` - control parking behavior
- `simulate_random_delay()` - yield randomly for fuzzing
- `rng()` - access seeded RNG
- `set_block_on_ticks()` - configure timeout tick range for block operations

---

## Code Quality Notes

### `dispatch_after` Panics in TestDispatcher

This is intentional:

```rust
fn dispatch_after(&self, _duration: Duration, _runnable: RunnableVariant) {
    panic!(
        "dispatch_after should not be called in tests. \
         Use BackgroundExecutor::timer() which uses the scheduler's native timer."
    );
}
```

In tests, `TestScheduler::timer()` creates native timers without using `dispatch_after`. Any code hitting this panic has a bug.

---

## Files Changed

Key files modified during integration:

- `crates/scheduler/src/scheduler.rs` - `Scheduler::block()` signature takes `Pin<&mut dyn Future>` and returns `bool`
- `crates/scheduler/src/executor.rs` - Added `from_async_task()`
- `crates/scheduler/src/test_scheduler.rs` - Deterministic scheduling implementation
- `crates/gpui/src/executor.rs` - Rewritten to use scheduler executors
- `crates/gpui/src/platform_scheduler.rs` - New file implementing `Scheduler` for production
- `crates/gpui/src/platform/test/dispatcher.rs` - Simplified to delegate to TestScheduler
- `crates/gpui/src/platform.rs` - Simplified `RunnableVariant`, removed `TaskLabel`
- Platform dispatchers (mac/linux/windows) - Removed label parameter from dispatch

---

## Future Considerations

1. **Foreground priority support**: If needed, add `schedule_foreground_with_priority` to the `Scheduler` trait.

2. **Profiler integration in scheduler**: Could move task timing into the scheduler crate for more consistent profiling.

3. **Additional test utilities**: The `TestScheduler` could be extended with more debugging/introspection capabilities.
@@ -1,5 +1,4 @@
use crate::{Scheduler, SessionId, Timer};
use futures::FutureExt as _;
use crate::{Priority, RunnableMeta, Scheduler, SessionId, Timer};
use std::{
    future::Future,
    marker::PhantomData,
@@ -10,7 +9,7 @@ use std::{
    sync::Arc,
    task::{Context, Poll},
    thread::{self, ThreadId},
    time::Duration,
    time::{Duration, Instant},
};

#[derive(Clone)]
@@ -29,6 +28,14 @@ impl ForegroundExecutor {
        }
    }

    pub fn session_id(&self) -> SessionId {
        self.session_id
    }

    pub fn scheduler(&self) -> &Arc<dyn Scheduler> {
        &self.scheduler
    }

    #[track_caller]
    pub fn spawn<F>(&self, future: F) -> Task<F::Output>
    where
@@ -37,40 +44,69 @@ impl ForegroundExecutor {
    {
        let session_id = self.session_id;
        let scheduler = Arc::clone(&self.scheduler);
        let (runnable, task) = spawn_local_with_source_location(future, move |runnable| {
            scheduler.schedule_foreground(session_id, runnable);
        });
        let location = Location::caller();
        let (runnable, task) = spawn_local_with_source_location(
            future,
            move |runnable| {
                scheduler.schedule_foreground(session_id, runnable);
            },
            RunnableMeta { location },
        );
        runnable.schedule();
        Task(TaskState::Spawned(task))
    }

    pub fn block_on<Fut: Future>(&self, future: Fut) -> Fut::Output {
        let mut output = None;
        self.scheduler.block(
            Some(self.session_id),
            async { output = Some(future.await) }.boxed_local(),
            None,
        );
        output.unwrap()
        use std::cell::Cell;

        let output = Cell::new(None);
        let future = async {
            output.set(Some(future.await));
        };
        let mut future = std::pin::pin!(future);

        self.scheduler
            .block(Some(self.session_id), future.as_mut(), None);

        output.take().expect("block_on future did not complete")
    }

    pub fn block_with_timeout<Fut: Unpin + Future>(
    /// Block until the future completes or timeout occurs.
    /// Returns Ok(output) if completed, Err(future) if timed out.
    pub fn block_with_timeout<Fut: Future>(
        &self,
        timeout: Duration,
        mut future: Fut,
    ) -> Result<Fut::Output, Fut> {
        let mut output = None;
        self.scheduler.block(
            Some(self.session_id),
            async { output = Some((&mut future).await) }.boxed_local(),
            Some(timeout),
        );
        output.ok_or(future)
        future: Fut,
    ) -> Result<Fut::Output, impl Future<Output = Fut::Output> + use<Fut>> {
        use std::cell::Cell;

        let output = Cell::new(None);
        let mut future = Box::pin(future);

        {
            let future_ref = &mut future;
            let wrapper = async {
                output.set(Some(future_ref.await));
            };
            let mut wrapper = std::pin::pin!(wrapper);

            self.scheduler
                .block(Some(self.session_id), wrapper.as_mut(), Some(timeout));
        }

        match output.take() {
            Some(value) => Ok(value),
            None => Err(future),
        }
    }

    pub fn timer(&self, duration: Duration) -> Timer {
        self.scheduler.timer(duration)
    }

    pub fn now(&self) -> Instant {
        self.scheduler.clock().now()
    }
}

#[derive(Clone)]
@@ -83,15 +119,31 @@ impl BackgroundExecutor {
        Self { scheduler }
    }

    #[track_caller]
    pub fn spawn<F>(&self, future: F) -> Task<F::Output>
    where
        F: Future + Send + 'static,
        F::Output: Send + 'static,
    {
        self.spawn_with_priority(Priority::default(), future)
    }

    #[track_caller]
    pub fn spawn_with_priority<F>(&self, priority: Priority, future: F) -> Task<F::Output>
    where
        F: Future + Send + 'static,
        F::Output: Send + 'static,
    {
        let scheduler = Arc::clone(&self.scheduler);
        let (runnable, task) = async_task::spawn(future, move |runnable| {
            scheduler.schedule_background(runnable);
        });
        let location = Location::caller();
        let (runnable, task) = async_task::Builder::new()
            .metadata(RunnableMeta { location })
            .spawn(
                move |_| future,
                move |runnable| {
                    scheduler.schedule_background_with_priority(runnable, priority);
                },
            );
        runnable.schedule();
        Task(TaskState::Spawned(task))
    }
@@ -100,6 +152,10 @@ impl BackgroundExecutor {
        self.scheduler.timer(duration)
    }

    pub fn now(&self) -> Instant {
        self.scheduler.clock().now()
    }

    pub fn scheduler(&self) -> &Arc<dyn Scheduler> {
        &self.scheduler
    }
@@ -121,7 +177,7 @@ enum TaskState<T> {
    Ready(Option<T>),

    /// A task that is currently running.
    Spawned(async_task::Task<T>),
    Spawned(async_task::Task<T, RunnableMeta>),
}

impl<T> Task<T> {
@@ -130,6 +186,11 @@ impl<T> Task<T> {
        Task(TaskState::Ready(Some(val)))
    }

    /// Creates a Task from an async_task::Task
    pub fn from_async_task(task: async_task::Task<T, RunnableMeta>) -> Self {
        Task(TaskState::Spawned(task))
    }

    pub fn is_ready(&self) -> bool {
        match &self.0 {
            TaskState::Ready(_) => true,
@@ -158,18 +219,19 @@ impl<T> Future for Task<T> {
}

/// Variant of `async_task::spawn_local` that includes the source location of the spawn in panics.
///
/// Copy-modified from:
/// <https://github.com/smol-rs/async-task/blob/ca9dbe1db9c422fd765847fa91306e30a6bb58a9/src/runnable.rs#L405>
#[track_caller]
fn spawn_local_with_source_location<Fut, S>(
    future: Fut,
    schedule: S,
) -> (async_task::Runnable, async_task::Task<Fut::Output, ()>)
    metadata: RunnableMeta,
) -> (
    async_task::Runnable<RunnableMeta>,
    async_task::Task<Fut::Output, RunnableMeta>,
)
where
    Fut: Future + 'static,
    Fut::Output: 'static,
    S: async_task::Schedule + Send + Sync + 'static,
    S: async_task::Schedule<RunnableMeta> + Send + Sync + 'static,
{
    #[inline]
    fn thread_id() -> ThreadId {
@@ -212,12 +274,18 @@ where
        }
    }

    // Wrap the future into one that checks which thread it's on.
    let future = Checked {
        id: thread_id(),
        inner: ManuallyDrop::new(future),
        location: Location::caller(),
    };
    let location = metadata.location;

    unsafe { async_task::spawn_unchecked(future, schedule) }
    unsafe {
        async_task::Builder::new()
            .metadata(metadata)
            .spawn_unchecked(
                move |_| Checked {
                    id: thread_id(),
                    inner: ManuallyDrop::new(future),
                    location,
                },
                schedule,
            )
    }
}

@@ -9,32 +9,88 @@ pub use executor::*;
pub use test_scheduler::*;

use async_task::Runnable;
use futures::{FutureExt as _, channel::oneshot, future::LocalBoxFuture};
use futures::channel::oneshot;
use std::{
    future::Future,
    panic::Location,
    pin::Pin,
    sync::Arc,
    task::{Context, Poll},
    time::Duration,
};

pub trait Scheduler: Send + Sync {
    fn block(
        &self,
        session_id: Option<SessionId>,
        future: LocalBoxFuture<()>,
        timeout: Option<Duration>,
    );
    fn schedule_foreground(&self, session_id: SessionId, runnable: Runnable);
    fn schedule_background(&self, runnable: Runnable);
    fn timer(&self, timeout: Duration) -> Timer;
    fn clock(&self) -> Arc<dyn Clock>;
    fn as_test(&self) -> &TestScheduler {
        panic!("this is not a test scheduler")
/// Task priority for background tasks.
///
/// Higher priority tasks are more likely to be scheduled before lower priority tasks,
/// but this is not a strict guarantee - the scheduler may interleave tasks of different
/// priorities to prevent starvation.
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, Hash)]
#[repr(u8)]
pub enum Priority {
    /// High priority - use for tasks critical to user experience/responsiveness.
    High,
    /// Medium priority - suitable for most use cases.
    #[default]
    Medium,
    /// Low priority - use for background work that can be deprioritized.
    Low,
}

impl Priority {
    /// Returns the relative probability weight for this priority level.
    /// Used by schedulers to determine task selection probability.
    pub const fn weight(self) -> u32 {
        match self {
            Priority::High => 60,
            Priority::Medium => 30,
            Priority::Low => 10,
        }
    }
}

#[derive(Copy, Clone, Debug, Eq, PartialEq, Ord, PartialOrd)]
/// Metadata attached to runnables for debugging and profiling.
#[derive(Clone, Debug)]
pub struct RunnableMeta {
    /// The source location where the task was spawned.
    pub location: &'static Location<'static>,
}

pub trait Scheduler: Send + Sync {
    /// Block until the given future completes or timeout occurs.
    ///
    /// Returns `true` if the future completed, `false` if it timed out.
    /// The future is passed as a pinned mutable reference so the caller
    /// retains ownership and can continue polling or return it on timeout.
    fn block(
        &self,
        session_id: Option<SessionId>,
        future: Pin<&mut dyn Future<Output = ()>>,
        timeout: Option<Duration>,
    ) -> bool;

    fn schedule_foreground(&self, session_id: SessionId, runnable: Runnable<RunnableMeta>);

    /// Schedule a background task with the given priority.
    fn schedule_background_with_priority(
        &self,
        runnable: Runnable<RunnableMeta>,
        priority: Priority,
    );

    /// Schedule a background task with default (medium) priority.
    fn schedule_background(&self, runnable: Runnable<RunnableMeta>) {
        self.schedule_background_with_priority(runnable, Priority::default());
    }

    fn timer(&self, timeout: Duration) -> Timer;
    fn clock(&self) -> Arc<dyn Clock>;

    fn as_test(&self) -> Option<&TestScheduler> {
        None
    }
}

#[derive(Copy, Clone, Debug, Eq, PartialEq, Ord, PartialOrd, Hash)]
pub struct SessionId(u16);

impl SessionId {
@@ -55,7 +111,7 @@ impl Future for Timer {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<()> {
        match self.0.poll_unpin(cx) {
        match Pin::new(&mut self.0).poll(cx) {
            Poll::Ready(_) => Poll::Ready(()),
            Poll::Pending => Poll::Pending,
        }

@@ -1,14 +1,18 @@
|
||||
use crate::{
|
||||
BackgroundExecutor, Clock, ForegroundExecutor, Scheduler, SessionId, TestClock, Timer,
|
||||
BackgroundExecutor, Clock, ForegroundExecutor, Priority, RunnableMeta, Scheduler, SessionId,
|
||||
TestClock, Timer,
|
||||
};
|
||||
use async_task::Runnable;
|
||||
use backtrace::{Backtrace, BacktraceFrame};
|
||||
use futures::{FutureExt as _, channel::oneshot, future::LocalBoxFuture};
|
||||
use parking_lot::Mutex;
|
||||
use rand::prelude::*;
|
||||
use futures::channel::oneshot;
|
||||
use parking_lot::{Mutex, MutexGuard};
|
||||
use rand::{
|
||||
distr::{StandardUniform, uniform::SampleRange, uniform::SampleUniform},
|
||||
prelude::*,
|
||||
};
|
||||
use std::{
|
||||
any::type_name_of_val,
|
||||
collections::{BTreeMap, VecDeque},
|
||||
collections::{BTreeMap, HashSet, VecDeque},
|
||||
env,
|
||||
fmt::Write,
|
||||
future::Future,
|
||||
@@ -90,6 +94,7 @@ impl TestScheduler {
|
||||
capture_pending_traces: config.capture_pending_traces,
|
||||
pending_traces: BTreeMap::new(),
|
||||
next_trace_id: TraceId(0),
|
||||
is_main_thread: true,
|
||||
})),
|
||||
clock: Arc::new(TestClock::new()),
|
||||
thread: thread::current(),
|
||||
@@ -100,8 +105,8 @@ impl TestScheduler {
|
||||
self.clock.clone()
|
||||
}
|
||||
|
||||
pub fn rng(&self) -> Arc<Mutex<StdRng>> {
|
||||
self.rng.clone()
|
||||
pub fn rng(&self) -> SharedRng {
|
||||
SharedRng(self.rng.clone())
|
||||
}
|
||||
|
||||
pub fn set_timeout_ticks(&self, timeout_ticks: RangeInclusive<usize>) {
|
||||
@@ -116,13 +121,25 @@ impl TestScheduler {
|
||||
self.state.lock().allow_parking = false;
|
||||
}
|
||||
|
||||
pub fn parking_allowed(&self) -> bool {
|
||||
self.state.lock().allow_parking
|
||||
}
|
||||
|
||||
pub fn is_main_thread(&self) -> bool {
|
||||
self.state.lock().is_main_thread
|
||||
}
|
||||
|
||||
/// Allocate a new session ID for foreground task scheduling.
|
||||
/// This is used by GPUI's TestDispatcher to map dispatcher instances to sessions.
|
||||
pub fn allocate_session_id(&self) -> SessionId {
|
||||
let mut state = self.state.lock();
|
||||
state.next_session_id.0 += 1;
|
||||
state.next_session_id
|
||||
}
|
||||
|
||||
/// Create a foreground executor for this scheduler
|
||||
pub fn foreground(self: &Arc<Self>) -> ForegroundExecutor {
|
||||
let session_id = {
|
||||
let mut state = self.state.lock();
|
||||
state.next_session_id.0 += 1;
|
||||
state.next_session_id
|
||||
};
|
||||
let session_id = self.allocate_session_id();
|
||||
ForegroundExecutor::new(session_id, self.clone())
|
||||
}
|
||||
|
||||
@@ -152,38 +169,155 @@ impl TestScheduler {
|
||||
}
|
||||
}
|
||||
|
||||
/// Execute one tick of the scheduler, processing expired timers and running
|
||||
/// at most one task. Returns true if any work was done.
|
||||
///
|
||||
/// This is the public interface for GPUI's TestDispatcher to drive task execution.
|
||||
pub fn tick(&self) -> bool {
|
||||
self.step_filtered(false)
|
||||
}
|
||||
|
||||
/// Execute one tick, but only run background tasks (no foreground/session tasks).
|
||||
/// Returns true if any work was done.
|
||||
pub fn tick_background_only(&self) -> bool {
|
||||
self.step_filtered(true)
|
||||
}
|
||||
|
||||
/// Check if there are any pending tasks or timers that could run.
|
||||
pub fn has_pending_tasks(&self) -> bool {
|
||||
let state = self.state.lock();
|
||||
!state.runnables.is_empty() || !state.timers.is_empty()
|
||||
}
|
||||
|
||||
/// Returns counts of (foreground_tasks, background_tasks) currently queued.
|
||||
/// Foreground tasks are those with a session_id, background tasks have none.
|
||||
pub fn pending_task_counts(&self) -> (usize, usize) {
|
||||
let state = self.state.lock();
|
||||
let foreground = state
|
||||
.runnables
|
||||
.iter()
|
||||
.filter(|r| r.session_id.is_some())
|
||||
.count();
|
||||
let background = state
|
||||
.runnables
|
||||
.iter()
|
||||
.filter(|r| r.session_id.is_none())
|
||||
.count();
|
||||
(foreground, background)
|
||||
}
|
||||
|
||||
fn step(&self) -> bool {
|
||||
let elapsed_timers = {
|
||||
self.step_filtered(false)
|
||||
}
|
||||
|
||||
fn step_filtered(&self, background_only: bool) -> bool {
|
||||
let (elapsed_count, runnables_before) = {
|
||||
let mut state = self.state.lock();
|
||||
let end_ix = state
|
||||
.timers
|
||||
.partition_point(|timer| timer.expiration <= self.clock.now());
|
||||
state.timers.drain(..end_ix).collect::<Vec<_>>()
|
||||
let elapsed: Vec<_> = state.timers.drain(..end_ix).collect();
|
||||
let count = elapsed.len();
|
||||
let runnables = state.runnables.len();
|
||||
drop(state);
|
||||
// Dropping elapsed timers here wakes the waiting futures
|
||||
drop(elapsed);
|
||||
(count, runnables)
|
||||
};
|
||||
|
||||
if !elapsed_timers.is_empty() {
|
||||
if elapsed_count > 0 {
|
||||
let runnables_after = self.state.lock().runnables.len();
|
||||
if std::env::var("DEBUG_SCHEDULER").is_ok() {
|
||||
eprintln!(
|
||||
"[scheduler] Expired {} timers at {:?}, runnables: {} -> {}",
|
||||
elapsed_count,
|
||||
self.clock.now(),
|
||||
runnables_before,
|
||||
runnables_after
|
||||
);
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
let runnable = {
|
||||
let state = &mut *self.state.lock();
|
||||
let ix = state.runnables.iter().position(|runnable| {
|
||||
runnable
|
||||
.session_id
|
||||
.is_none_or(|session_id| !state.blocked_sessions.contains(&session_id))
|
||||
});
|
||||
ix.and_then(|ix| state.runnables.remove(ix))
|
||||
|
||||
// Find candidate tasks:
|
||||
// - For foreground tasks (with session_id), only the first task from each session
|
||||
// is a candidate (to preserve intra-session ordering)
|
||||
// - For background tasks (no session_id), all are candidates
|
||||
// - Tasks from blocked sessions are excluded
|
||||
// - If background_only is true, skip foreground tasks entirely
|
||||
let mut seen_sessions = HashSet::new();
|
||||
let candidate_indices: Vec<usize> = state
|
||||
.runnables
|
||||
.iter()
|
||||
.enumerate()
|
||||
.filter(|(_, runnable)| {
|
||||
if let Some(session_id) = runnable.session_id {
|
||||
// Skip foreground tasks if background_only mode
|
||||
if background_only {
|
||||
return false;
|
||||
}
|
||||
// Exclude tasks from blocked sessions
|
||||
if state.blocked_sessions.contains(&session_id) {
|
||||
return false;
|
||||
}
|
||||
// Only include first task from each session (insert returns true if new)
|
||||
seen_sessions.insert(session_id)
|
||||
} else {
|
||||
// Background tasks are always candidates
|
||||
true
|
||||
}
|
||||
})
|
||||
.map(|(ix, _)| ix)
|
||||
.collect();
|
||||
|
||||
if candidate_indices.is_empty() {
|
||||
None
|
||||
} else if state.randomize_order {
|
||||
// Use priority-weighted random selection
|
||||
let weights: Vec<u32> = candidate_indices
|
||||
.iter()
|
||||
.map(|&ix| state.runnables[ix].priority.weight())
|
||||
.collect();
|
||||
let total_weight: u32 = weights.iter().sum();
|
||||
|
||||
if total_weight == 0 {
|
||||
// Fallback to uniform random if all weights are zero
|
||||
let choice = self.rng.lock().random_range(0..candidate_indices.len());
|
||||
state.runnables.remove(candidate_indices[choice])
|
||||
} else {
|
||||
let mut target = self.rng.lock().random_range(0..total_weight);
|
||||
let mut selected_idx = 0;
|
||||
for (i, &weight) in weights.iter().enumerate() {
|
||||
if target < weight {
|
||||
selected_idx = i;
|
||||
break;
|
||||
}
|
||||
target -= weight;
|
||||
}
|
||||
state.runnables.remove(candidate_indices[selected_idx])
|
||||
}
|
||||
} else {
|
||||
// Non-randomized: just take the first candidate task
|
||||
state.runnables.remove(candidate_indices[0])
|
||||
}
|
||||
};
|
||||
|
||||
if let Some(runnable) = runnable {
|
||||
let is_foreground = runnable.session_id.is_some();
|
||||
let was_main_thread = self.state.lock().is_main_thread;
|
||||
self.state.lock().is_main_thread = is_foreground;
|
||||
runnable.run();
|
||||
self.state.lock().is_main_thread = was_main_thread;
|
||||
return true;
|
||||
}
|
||||
|
||||
false
|
||||
}
|
||||
|
||||
fn advance_clock_to_next_timer(&self) -> bool {
|
||||
pub fn advance_clock_to_next_timer(&self) -> bool {
|
||||
if let Some(timer) = self.state.lock().timers.first() {
|
||||
self.clock.advance(timer.expiration - self.clock.now());
|
||||
true
|
||||
@@ -193,18 +327,41 @@ impl TestScheduler {
|
||||
}
|
||||
|
||||
pub fn advance_clock(&self, duration: Duration) {
|
||||
let next_now = self.clock.now() + duration;
|
||||
let debug = std::env::var("DEBUG_SCHEDULER").is_ok();
|
||||
let start = self.clock.now();
|
||||
let next_now = start + duration;
|
||||
if debug {
|
||||
let timer_count = self.state.lock().timers.len();
|
||||
eprintln!(
|
||||
"[scheduler] advance_clock({:?}) from {:?}, {} pending timers",
|
||||
duration, start, timer_count
|
||||
);
|
||||
}
|
||||
loop {
|
||||
self.run();
|
||||
if let Some(timer) = self.state.lock().timers.first()
|
||||
&& timer.expiration <= next_now
|
||||
{
|
||||
self.clock.advance(timer.expiration - self.clock.now());
|
||||
let advance_to = timer.expiration;
|
||||
if debug {
|
||||
eprintln!(
|
||||
"[scheduler] Advancing clock {:?} -> {:?} for timer",
|
||||
self.clock.now(),
|
||||
advance_to
|
||||
);
|
||||
}
|
||||
self.clock.advance(advance_to - self.clock.now());
|
||||
} else {
|
||||
break;
|
||||
}
|
||||
}
|
||||
self.clock.advance(next_now - self.clock.now());
|
||||
if debug {
|
||||
eprintln!(
|
||||
"[scheduler] advance_clock done, now at {:?}",
|
||||
self.clock.now()
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
     fn park(&self, deadline: Option<Instant>) -> bool {
@@ -245,9 +402,9 @@ impl Scheduler for TestScheduler {
     fn block(
         &self,
         session_id: Option<SessionId>,
-        mut future: LocalBoxFuture<()>,
+        mut future: Pin<&mut dyn Future<Output = ()>>,
         timeout: Option<Duration>,
-    ) {
+    ) -> bool {
         if let Some(session_id) = session_id {
             self.state.lock().blocked_sessions.push(session_id);
         }
@@ -270,10 +427,15 @@ impl Scheduler for TestScheduler {
         };
         let mut cx = Context::from_waker(&waker);

+        let mut completed = false;
         for _ in 0..max_ticks {
-            let Poll::Pending = future.poll_unpin(&mut cx) else {
-                break;
-            };
+            match future.as_mut().poll(&mut cx) {
+                Poll::Ready(()) => {
+                    completed = true;
+                    break;
+                }
+                Poll::Pending => {}
+            }

             let mut stepped = None;
             while self.rng.lock().random() {
@@ -297,9 +459,11 @@ impl Scheduler for TestScheduler {
         if session_id.is_some() {
             self.state.lock().blocked_sessions.pop();
         }
+
+        completed
     }

-    fn schedule_foreground(&self, session_id: SessionId, runnable: Runnable) {
+    fn schedule_foreground(&self, session_id: SessionId, runnable: Runnable<RunnableMeta>) {
         let mut state = self.state.lock();
         let ix = if state.randomize_order {
             let start_ix = state
@@ -317,6 +481,7 @@ impl Scheduler for TestScheduler {
             ix,
             ScheduledRunnable {
                 session_id: Some(session_id),
+                priority: Priority::default(),
                 runnable,
             },
         );
@@ -324,7 +489,11 @@ impl Scheduler for TestScheduler {
         self.thread.unpark();
     }

-    fn schedule_background(&self, runnable: Runnable) {
+    fn schedule_background_with_priority(
+        &self,
+        runnable: Runnable<RunnableMeta>,
+        priority: Priority,
+    ) {
         let mut state = self.state.lock();
         let ix = if state.randomize_order {
             self.rng.lock().random_range(0..=state.runnables.len())
@@ -335,6 +504,7 @@ impl Scheduler for TestScheduler {
             ix,
             ScheduledRunnable {
                 session_id: None,
+                priority,
                 runnable,
             },
         );
@@ -357,8 +527,8 @@ impl Scheduler for TestScheduler {
         self.clock.clone()
     }

-    fn as_test(&self) -> &TestScheduler {
-        self
+    fn as_test(&self) -> Option<&TestScheduler> {
+        Some(self)
     }
 }

@@ -395,7 +565,8 @@ impl Default for TestSchedulerConfig {

 struct ScheduledRunnable {
     session_id: Option<SessionId>,
-    runnable: Runnable,
+    priority: Priority,
+    runnable: Runnable<RunnableMeta>,
 }

 impl ScheduledRunnable {
@@ -420,6 +591,7 @@ struct SchedulerState {
     capture_pending_traces: bool,
     next_trace_id: TraceId,
     pending_traces: BTreeMap<TraceId, Backtrace>,
+    is_main_thread: bool,
 }

 const WAKER_VTABLE: RawWakerVTable = RawWakerVTable::new(
@@ -508,6 +680,46 @@ impl TracingWaker {

 pub struct Yield(usize);

+/// A wrapper around `Arc<Mutex<StdRng>>` that provides convenient methods
+/// for random number generation without requiring explicit locking.
+#[derive(Clone)]
+pub struct SharedRng(Arc<Mutex<StdRng>>);
+
+impl SharedRng {
+    /// Lock the inner RNG for direct access. Use this when you need multiple
+    /// random operations without re-locking between each one.
+    pub fn lock(&self) -> MutexGuard<'_, StdRng> {
+        self.0.lock()
+    }
+
+    /// Generate a random value in the given range.
+    pub fn random_range<T, R>(&self, range: R) -> T
+    where
+        T: SampleUniform,
+        R: SampleRange<T>,
+    {
+        self.0.lock().random_range(range)
+    }
+
+    /// Generate a random boolean with the given probability of being true.
+    pub fn random_bool(&self, p: f64) -> bool {
+        self.0.lock().random_bool(p)
+    }
+
+    /// Generate a random value of the given type.
+    pub fn random<T>(&self) -> T
+    where
+        StandardUniform: Distribution<T>,
+    {
+        self.0.lock().random()
+    }
+
+    /// Generate a random ratio - true with probability `numerator/denominator`.
+    pub fn random_ratio(&self, numerator: u32, denominator: u32) -> bool {
+        self.0.lock().random_ratio(numerator, denominator)
+    }
+}
+
 impl Future for Yield {
     type Output = ();

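The `SharedRng` additions above move the locking inside each helper so call sites stay terse, while still exposing `lock()` for multi-step draws. A std-only sketch of the same pattern (a toy LCG stands in for the rand crate's `StdRng`, and std's poisoning `Mutex` stands in for the parking_lot one the repo uses; all names here are illustrative):

```rust
use std::sync::{Arc, Mutex, MutexGuard};

// Toy linear congruential generator standing in for `StdRng`.
// Constants are Knuth's MMIX LCG parameters.
struct Lcg(u64);

impl Lcg {
    fn next_u64(&mut self) -> u64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        self.0
    }
}

/// Same shape as `SharedRng`: clones share one generator, and each helper
/// locks internally so callers never touch the mutex themselves.
#[derive(Clone)]
struct SharedLcg(Arc<Mutex<Lcg>>);

impl SharedLcg {
    fn new(seed: u64) -> Self {
        Self(Arc::new(Mutex::new(Lcg(seed))))
    }

    /// For multi-step use, expose the guard directly (mirrors `SharedRng::lock`).
    /// std's Mutex can be poisoned, unlike parking_lot's; a panic here is fine
    /// in an illustrative sketch.
    fn lock(&self) -> MutexGuard<'_, Lcg> {
        self.0.lock().expect("rng lock poisoned")
    }

    /// Random value in `[0, bound)` (mirrors `random_range`).
    fn random_below(&self, bound: u64) -> u64 {
        self.lock().next_u64() % bound
    }

    /// True with probability `numerator/denominator` (mirrors `random_ratio`).
    fn random_ratio(&self, numerator: u64, denominator: u64) -> bool {
        self.random_below(denominator) < numerator
    }
}

fn main() {
    let rng = SharedLcg::new(42);
    let clone = rng.clone();
    // Clones draw from the same underlying state.
    let a = rng.random_below(100);
    let b = clone.random_below(100);
    assert!(a < 100 && b < 100);
    assert!(rng.random_ratio(1, 1)); // probability 1 is always true
    println!("ok");
}
```

The design choice mirrored here is that every helper takes `&self` and locks per call, so a `SharedRng` can be cloned into tasks freely without threading a mutable reference through async code.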
@@ -238,7 +238,7 @@ fn test_block() {
 }

 #[test]
-#[should_panic(expected = "futures_channel::oneshot::Inner")]
+#[should_panic(expected = "Parking forbidden. Pending traces:")]
 fn test_parking_panics() {
     let config = TestSchedulerConfig {
         capture_pending_traces: true,
@@ -297,20 +297,27 @@ fn test_block_with_timeout() {
         let foreground = scheduler.foreground();
         let future = future::ready(42);
         let output = foreground.block_with_timeout(Duration::from_millis(100), future);
-        assert_eq!(output.unwrap(), 42);
+        assert_eq!(output.ok(), Some(42));
     });

     // Test case: future times out
     TestScheduler::once(async |scheduler| {
+        // Make timeout behavior deterministic by forcing the timeout tick budget to be exactly 0.
+        // This prevents `block_with_timeout` from making progress via extra scheduler stepping and
+        // accidentally completing work that we expect to time out.
+        scheduler.set_timeout_ticks(0..=0);
+
         let foreground = scheduler.foreground();
         let future = future::pending::<()>();
         let output = foreground.block_with_timeout(Duration::from_millis(50), future);
-        let _ = output.expect_err("future should not have finished");
+        assert!(output.is_err(), "future should not have finished");
     });

     // Test case: future makes progress via timer but still times out
     let mut results = BTreeSet::new();
     TestScheduler::many(100, async |scheduler| {
+        // Keep the existing probabilistic behavior here (do not force 0 ticks), since this subtest
+        // is explicitly checking that some seeds/timeouts can complete while others can time out.
         let task = scheduler.background().spawn(async move {
             Yield { polls: 10 }.await;
             42
@@ -324,6 +331,44 @@ fn test_block_with_timeout() {
         results.into_iter().collect::<Vec<_>>(),
         vec![None, Some(42)]
     );

+    // Regression test:
+    // A timed-out future must not be cancelled. The returned future should still be
+    // pollable to completion later. We also want to ensure time only advances when we
+    // explicitly advance it (not by yielding).
+    TestScheduler::once(async |scheduler| {
+        // Force immediate timeout: the timeout tick budget is 0 so we will not step or
+        // advance timers inside `block_with_timeout`.
+        scheduler.set_timeout_ticks(0..=0);
+
+        let background = scheduler.background();
+
+        // This task should only complete once time is explicitly advanced.
+        let task = background.spawn({
+            let scheduler = scheduler.clone();
+            async move {
+                scheduler.timer(Duration::from_millis(100)).await;
+                123
+            }
+        });
+
+        // This should time out before we advance time enough for the timer to fire.
+        let timed_out = scheduler
+            .foreground()
+            .block_with_timeout(Duration::from_millis(50), task);
+        assert!(
+            timed_out.is_err(),
+            "expected timeout before advancing the clock enough for the timer"
+        );
+
+        // Now explicitly advance time and ensure the returned future can complete.
+        let mut task = timed_out.err().unwrap();
+        scheduler.advance_clock(Duration::from_millis(100));
+        scheduler.run();
+
+        let output = scheduler.foreground().block_on(&mut task);
+        assert_eq!(output, 123);
+    });
 }

 // When calling block, we shouldn't make progress on foreground-spawned futures with the same session id.
@@ -370,3 +415,64 @@ impl Future for Yield {
         }
     }
 }

+#[test]
+fn test_background_priority_scheduling() {
+    use parking_lot::Mutex;
+
+    // Run many iterations to get statistical significance
+    let mut high_before_low_count = 0;
+    let iterations = 100;
+
+    for seed in 0..iterations {
+        let config = TestSchedulerConfig::with_seed(seed);
+        let scheduler = Arc::new(TestScheduler::new(config));
+        let background = scheduler.background();
+
+        let execution_order = Arc::new(Mutex::new(Vec::new()));
+
+        // Spawn low priority tasks first
+        for i in 0..3 {
+            let order = execution_order.clone();
+            background
+                .spawn_with_priority(Priority::Low, async move {
+                    order.lock().push(format!("low-{}", i));
+                })
+                .detach();
+        }
+
+        // Spawn high priority tasks second
+        for i in 0..3 {
+            let order = execution_order.clone();
+            background
+                .spawn_with_priority(Priority::High, async move {
+                    order.lock().push(format!("high-{}", i));
+                })
+                .detach();
+        }
+
+        scheduler.run();
+
+        // Count how many high priority tasks ran in the first half
+        let order = execution_order.lock();
+        let high_in_first_half = order
+            .iter()
+            .take(3)
+            .filter(|s| s.starts_with("high"))
+            .count();
+
+        if high_in_first_half >= 2 {
+            high_before_low_count += 1;
+        }
+    }
+
+    // High priority tasks should tend to run before low priority tasks
+    // With weights of 60 vs 10, high priority should dominate early execution
+    assert!(
+        high_before_low_count > iterations / 2,
+        "Expected high priority tasks to run before low priority tasks more often. \
+         Got {} out of {} iterations",
+        high_before_low_count,
+        iterations
+    );
+}

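The test above leans on a weighted draw between priority bands: with the weights its comment cites (60 vs 10), high-priority work is picked most of the time but low-priority work still makes progress. A std-only sketch of that kind of weighted queue selection (the weights, the `Lcg` xorshift generator, and `pick_queue` itself are illustrative, not the scheduler's actual implementation):

```rust
// Toy deterministic RNG (xorshift64) so the sketch needs no external crates.
// Seed must be nonzero.
struct Lcg(u64);

impl Lcg {
    fn next_u64(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }
}

#[derive(Debug, PartialEq, Clone, Copy)]
enum Priority {
    High,
    Low,
}

/// Pick which queue to pop from by weight. When both queues are non-empty,
/// High wins 60/70 of draws, so high-priority tasks tend to run first
/// without ever starving the low-priority queue.
fn pick_queue(rng: &mut Lcg, high_len: usize, low_len: usize) -> Option<Priority> {
    const HIGH_WEIGHT: u64 = 60;
    const LOW_WEIGHT: u64 = 10;
    match (high_len > 0, low_len > 0) {
        (false, false) => None,
        (true, false) => Some(Priority::High),
        (false, true) => Some(Priority::Low),
        (true, true) => {
            let roll = rng.next_u64() % (HIGH_WEIGHT + LOW_WEIGHT);
            Some(if roll < HIGH_WEIGHT {
                Priority::High
            } else {
                Priority::Low
            })
        }
    }
}

fn main() {
    let mut rng = Lcg(1);
    let mut high = 0;
    for _ in 0..1000 {
        if pick_queue(&mut rng, 3, 3) == Some(Priority::High) {
            high += 1;
        }
    }
    // With 60:10 weights, roughly 6/7 of contested draws should pick High,
    // which is why the test asserts a majority rather than strict ordering.
    assert!(high > 700, "high was only picked {high} times");
    println!("high picked {high}/1000 contested draws");
}
```

This also explains why the test only requires `high_in_first_half >= 2` per seed and a majority across seeds: the draw is probabilistic, so strict high-before-low ordering would be flaky by design.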
@@ -117,7 +117,7 @@ fn save_hang_trace(
     background_executor: &gpui::BackgroundExecutor,
     hang_time: chrono::DateTime<chrono::Local>,
 ) {
-    let thread_timings = background_executor.dispatcher.get_all_timings();
+    let thread_timings = background_executor.dispatcher().get_all_timings();
     let thread_timings = thread_timings
         .into_iter()
         .map(|mut timings| {