Compare commits
25 Commits
inspector-... → thomas/add
| Author | SHA1 | Date |
|---|---|---|
|  | c080d3d60d |  |
|  | dfbd132d9f |  |
|  | 2e8ee9b64f |  |
|  | c15382c4d8 |  |
|  | 70c51b513b |  |
|  | 38afae86a9 |  |
|  | 9249919b7a |  |
|  | 9fe4a14f73 |  |
|  | 7cc3c03b08 |  |
|  | 4f2f9ff762 |  |
|  | 7aa0fa1543 |  |
|  | 3b31860d52 |  |
|  | 733cd6b68c |  |
|  | e8fe0eb2e6 |  |
|  | 0f3ac38332 |  |
|  | 32e9757a85 |  |
|  | be76942a69 |  |
|  | 942d4eb126 |  |
|  | 9d35f0389d |  |
|  | d13cd007a2 |  |
|  | f8ac6eef75 |  |
|  | 6d2bdc3bac |  |
|  | 9a3434efb4 |  |
|  | 333de5d673 |  |
|  | 97ab0980d1 |  |
.rules (11 changes)
@@ -5,6 +5,7 @@
* Prefer implementing functionality in existing files unless it is a new logical component. Avoid creating many small files.
* Avoid using functions that panic like `unwrap()`; instead, use mechanisms like `?` to propagate errors.
* Be careful with operations like indexing, which may panic if the indexes are out of bounds.
* Never create files with `mod.rs` paths - prefer `src/some_module.rs` instead of `src/some_module/mod.rs`.
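A minimal sketch of the two panic-avoidance rules above, using the `anyhow` crate that already appears in this diff (the `first_line`/`third_item` names are illustrative):

```rust
use std::fs;

// Propagate errors with `?` instead of unwrapping, and use checked
// indexing (`get`) instead of `[]`, which panics on out-of-bounds access.
fn first_line(path: &str) -> anyhow::Result<String> {
    let contents = fs::read_to_string(path)?; // `?` propagates the io::Error
    let line = contents
        .lines()
        .next()
        .ok_or_else(|| anyhow::anyhow!("{path} is empty"))?;
    Ok(line.to_string())
}

fn third_item(items: &[u32]) -> Option<u32> {
    items.get(2).copied() // `items[2]` would panic when len < 3
}
```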
# GPUI

@@ -108,3 +109,13 @@ When a view's state has changed in a way that may affect its rendering, it shoul
While updating an entity (`cx: Context<T>`), it can emit an event using `cx.emit(event)`. Entities register which events they can emit by declaring `impl EventEmitter<EventType> for EntityType {}`.

Other entities can then register a callback to handle these events by calling `cx.subscribe(other_entity, |this, other_entity, event, cx| ...)`. This returns a `Subscription`, which deregisters the callback when dropped. Typically `cx.subscribe` happens when creating a new entity, and the subscriptions are stored in a `_subscriptions: Vec<Subscription>` field.

## Recent API changes

GPUI has had some changes to its APIs. Always write code using the new APIs:

* `spawn` methods now take async closures (`AsyncFn`), and so should be called like `cx.spawn(async move |cx| ...)`.
* Use `Entity<T>`. This replaces `Model<T>` and `View<T>`, which no longer exist and should NEVER be used.
* Use `App` references. This replaces `AppContext`, which no longer exists and should NEVER be used.
* Use `Context<T>` references. This replaces `ModelContext<T>`, which no longer exists and should NEVER be used.
* `Window` is now passed around explicitly. The new interface adds a `Window` reference parameter to some methods, and adds some new "*_in" methods for plumbing `Window`. The old types `WindowContext` and `ViewContext<T>` should NEVER be used.
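A compact sketch of the event pattern these rules describe, using only the names the rules themselves mention (`EventEmitter`, `cx.emit`, `cx.subscribe`, `Entity<T>`, `Context<T>`, `Subscription`); the `Counter`/`CounterLabel` types are illustrative and this is not compiled against any particular GPUI revision:

```rust
use gpui::{Context, Entity, EventEmitter, Subscription};

struct Counter { count: usize }
struct CounterChanged { count: usize }

// Register the event type this entity can emit.
impl EventEmitter<CounterChanged> for Counter {}

impl Counter {
    fn increment(&mut self, cx: &mut Context<Self>) {
        self.count += 1;
        cx.emit(CounterChanged { count: self.count });
        cx.notify(); // the entity's rendering may have changed
    }
}

struct CounterLabel {
    text: String,
    // Dropped with the entity, which deregisters the callback.
    _subscriptions: Vec<Subscription>,
}

impl CounterLabel {
    fn new(counter: &Entity<Counter>, cx: &mut Context<Self>) -> Self {
        let subscription =
            cx.subscribe(counter, |this, _counter, event: &CounterChanged, cx| {
                this.text = format!("count: {}", event.count);
                cx.notify();
            });
        Self {
            text: String::new(),
            _subscriptions: vec![subscription],
        }
    }
}
```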
Cargo.lock (13 changes, generated)
@@ -4013,6 +4013,7 @@ dependencies = [
"node_runtime",
"parking_lot",
"paths",
"proto",
"schemars",
"serde",
"serde_json",

@@ -4897,7 +4898,6 @@ dependencies = [
"client",
"collections",
"context_server",
"dap",
"dirs 5.0.1",
"env_logger 0.11.8",
"extension",

@@ -4920,6 +4920,7 @@ dependencies = [
"serde",
"settings",
"shellexpand 2.1.2",
"smol",
"telemetry",
"toml 0.8.20",
"unindent",

@@ -7713,6 +7714,7 @@ dependencies = [
"mistral",
"ollama",
"open_ai",
"partial-json-fixer",
"project",
"proto",
"schemars",

@@ -9828,6 +9830,12 @@ dependencies = [
"windows-targets 0.52.6",
]

[[package]]
name = "partial-json-fixer"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "35ffd90b3f3b6477db7478016b9efb1b7e9d38eafd095f0542fe0ec2ea884a13"

[[package]]
name = "password-hash"
version = "0.4.2"

@@ -11743,6 +11751,7 @@ dependencies = [
"client",
"clock",
"dap",
"dap_adapters",
"env_logger 0.11.8",
"extension",
"extension_host",

@@ -14214,11 +14223,11 @@ version = "0.1.0"
dependencies = [
"anyhow",
"collections",
"dap-types",
"futures 0.3.31",
"gpui",
"hex",
"parking_lot",
"proto",
"schemars",
"serde",
"serde_json",
Cargo.toml

@@ -480,6 +480,7 @@ num-format = "0.4.4"
ordered-float = "2.1.1"
palette = { version = "0.7.5", default-features = false, features = ["std"] }
parking_lot = "0.12.1"
partial-json-fixer = "0.5.3"
pathdiff = "0.2"
pet = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
pet-fs = { git = "https://github.com/microsoft/python-environment-tools.git", rev = "845945b830297a50de0e24020b980a65e4820559" }
@@ -1028,10 +1028,10 @@
// Using `ctrl-shift-space` in Zed requires disabling the macOS global shortcut.
// System Preferences->Keyboard->Keyboard Shortcuts->Input Sources->Select the previous input source (uncheck)
"ctrl-shift-space": "terminal::ToggleViMode",
"ctrl-k up": "pane::SplitUp",
"ctrl-k down": "pane::SplitDown",
"ctrl-k left": "pane::SplitLeft",
"ctrl-k right": "pane::SplitRight"
"ctrl-alt-up": "pane::SplitUp",
"ctrl-alt-down": "pane::SplitDown",
"ctrl-alt-left": "pane::SplitLeft",
"ctrl-alt-right": "pane::SplitRight"
}
},
{
@@ -830,5 +830,13 @@
// and Windows.
"alt-l": "editor::AcceptEditPrediction"
}
},
{
// Fixes https://github.com/zed-industries/zed/issues/29095 by ensuring that
// the last binding for editor::ToggleComments is not ctrl-c.
"context": "hack_to_fix_ctrl-c",
"bindings": {
"g c": "editor::ToggleComments"
}
}
]
@@ -73,9 +73,9 @@ There are project rules that apply to these root directories:
{{/each}}
{{/if}}

{{#if has_default_user_rules}}
{{#if has_user_rules}}
The user has specified the following rules that should be applied:
{{#each default_user_rules}}
{{#each user_rules}}

{{#if title}}
Rules title: {{title}}
assets/settings/default.json

@@ -1489,7 +1489,12 @@
"use_multiline_find": false,
"use_smartcase_find": false,
"highlight_on_yank_duration": 200,
"custom_digraphs": {}
"custom_digraphs": {},
// Cursor shape for each mode.
// Specify the mode as the key and the shape as the value.
// The mode can be one of the following: "normal", "replace", "insert", "visual".
// The shape can be one of the following: "block", "bar", "underline", "hollow".
"cursor_shape": {}
},
// The server to connect to. If the environment variable
// ZED_SERVER_URL is set, it will override this setting.
crates/agent/src/active_thread.rs

@@ -1,4 +1,4 @@
use crate::context::{AssistantContext, ContextId, format_context_as_string};
use crate::context::{AssistantContext, ContextId, RULES_ICON, format_context_as_string};
use crate::context_picker::MentionLink;
use crate::thread::{
    LastRestoreCheckpoint, MessageId, MessageSegment, Thread, ThreadError, ThreadEvent,

@@ -266,14 +266,6 @@ fn default_markdown_style(window: &Window, cx: &App) -> MarkdownStyle {
    }
}

fn render_tool_use_markdown(
    text: SharedString,
    language_registry: Arc<LanguageRegistry>,
    cx: &mut App,
) -> Entity<Markdown> {
    cx.new(|cx| Markdown::new(text, Some(language_registry), None, cx))
}

fn tool_use_markdown_style(window: &Window, cx: &mut App) -> MarkdownStyle {
    let theme_settings = ThemeSettings::get_global(cx);
    let colors = cx.theme().colors();

@@ -688,6 +680,12 @@ fn open_markdown_link(
            }
        }),
        Some(MentionLink::Fetch(url)) => cx.open_url(&url),
        Some(MentionLink::Rules(prompt_id)) => window.dispatch_action(
            Box::new(OpenPromptLibrary {
                prompt_to_select: Some(prompt_id.0),
            }),
            cx,
        ),
        None => cx.open_url(&text),
    }
}

@@ -861,21 +859,34 @@ impl ActiveThread {
    tool_output: SharedString,
    cx: &mut Context<Self>,
) {
    let rendered = RenderedToolUse {
        label: render_tool_use_markdown(tool_label.into(), self.language_registry.clone(), cx),
        input: render_tool_use_markdown(
            format!(
                "```json\n{}\n```",
                serde_json::to_string_pretty(tool_input).unwrap_or_default()
            )
            .into(),
            self.language_registry.clone(),
            cx,
        ),
        output: render_tool_use_markdown(tool_output, self.language_registry.clone(), cx),
    };
    self.rendered_tool_uses
        .insert(tool_use_id.clone(), rendered);
    let rendered = self
        .rendered_tool_uses
        .entry(tool_use_id.clone())
        .or_insert_with(|| RenderedToolUse {
            label: cx.new(|cx| {
                Markdown::new("".into(), Some(self.language_registry.clone()), None, cx)
            }),
            input: cx.new(|cx| {
                Markdown::new("".into(), Some(self.language_registry.clone()), None, cx)
            }),
            output: cx.new(|cx| {
                Markdown::new("".into(), Some(self.language_registry.clone()), None, cx)
            }),
        });

    rendered.label.update(cx, |this, cx| {
        this.replace(tool_label, cx);
    });
    rendered.input.update(cx, |this, cx| {
        let input = format!(
            "```json\n{}\n```",
            serde_json::to_string_pretty(tool_input).unwrap_or_default()
        );
        this.replace(input, cx);
    });
    rendered.output.update(cx, |this, cx| {
        this.replace(tool_output, cx);
    });
}

fn handle_thread_event(
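The rewrite above switches from overwriting `rendered_tool_uses` on every call to an upsert via the standard `entry`/`or_insert_with` API, so streaming updates mutate the existing markdown entities in place instead of recreating them. A minimal stand-alone sketch of that pattern over plain `std` types (illustrative names):

```rust
use std::collections::HashMap;

fn upsert_and_update(map: &mut HashMap<String, Vec<u32>>, key: &str, value: u32) {
    // Insert an empty entry only on first sight of `key`, then mutate in place.
    let entry = map.entry(key.to_string()).or_insert_with(Vec::new);
    entry.push(value);
}

fn main() {
    let mut map = HashMap::new();
    upsert_and_update(&mut map, "tool-1", 1);
    upsert_and_update(&mut map, "tool-1", 2); // reuses the existing entry
    assert_eq!(map["tool-1"], vec![1, 2]);
}
```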
@@ -968,6 +979,19 @@ impl ActiveThread {
        );
    }
}
ThreadEvent::StreamedToolUse {
    tool_use_id,
    ui_text,
    input,
} => {
    self.render_tool_use_markdown(
        tool_use_id.clone(),
        ui_text.clone(),
        input,
        "".into(),
        cx,
    );
}
ThreadEvent::ToolFinished {
    pending_tool_use, ..
} => {

@@ -2472,13 +2496,15 @@ impl ActiveThread {
    let edit_tools = tool_use.needs_confirmation;

    let status_icons = div().child(match &tool_use.status {
        ToolUseStatus::Pending | ToolUseStatus::NeedsConfirmation => {
        ToolUseStatus::NeedsConfirmation => {
            let icon = Icon::new(IconName::Warning)
                .color(Color::Warning)
                .size(IconSize::Small);
            icon.into_any_element()
        }
        ToolUseStatus::Running => {
        ToolUseStatus::Pending
        | ToolUseStatus::InputStillStreaming
        | ToolUseStatus::Running => {
            let icon = Icon::new(IconName::ArrowCircle)
                .color(Color::Accent)
                .size(IconSize::Small);

@@ -2564,7 +2590,7 @@ impl ActiveThread {
        }),
    )),
),
ToolUseStatus::Running => container.child(
ToolUseStatus::InputStillStreaming | ToolUseStatus::Running => container.child(
    results_content_container().child(
        h_flex()
            .gap_1()

@@ -2957,10 +2983,10 @@ impl ActiveThread {
    return div().into_any();
};

let default_user_rules_text = if project_context.default_user_rules.is_empty() {
let user_rules_text = if project_context.user_rules.is_empty() {
    None
} else if project_context.default_user_rules.len() == 1 {
    let user_rules = &project_context.default_user_rules[0];
} else if project_context.user_rules.len() == 1 {
    let user_rules = &project_context.user_rules[0];

    match user_rules.title.as_ref() {
        Some(title) => Some(format!("Using \"{title}\" user rule")),

@@ -2969,14 +2995,14 @@ impl ActiveThread {
} else {
    Some(format!(
        "Using {} user rules",
        project_context.default_user_rules.len()
        project_context.user_rules.len()
    ))
};

let first_default_user_rules_id = project_context
    .default_user_rules
let first_user_rules_id = project_context
    .user_rules
    .first()
    .map(|user_rules| user_rules.uuid);
    .map(|user_rules| user_rules.uuid.0);

let rules_files = project_context
    .worktrees

@@ -2993,7 +3019,7 @@ impl ActiveThread {
    rules_files => Some(format!("Using {} project rules files", rules_files.len())),
};

if default_user_rules_text.is_none() && rules_file_text.is_none() {
if user_rules_text.is_none() && rules_file_text.is_none() {
    return div().into_any();
}

@@ -3001,45 +3027,42 @@ impl ActiveThread {
.pt_2()
.px_2p5()
.gap_1()
.when_some(
    default_user_rules_text,
    |parent, default_user_rules_text| {
        parent.child(
            h_flex()
                .w_full()
                .child(
                    Icon::new(IconName::File)
                        .size(IconSize::XSmall)
                        .color(Color::Disabled),
                )
                .child(
                    Label::new(default_user_rules_text)
                        .size(LabelSize::XSmall)
                        .color(Color::Muted)
                        .truncate()
                        .buffer_font(cx)
                        .ml_1p5()
                        .mr_0p5(),
                )
                .child(
                    IconButton::new("open-prompt-library", IconName::ArrowUpRightAlt)
                        .shape(ui::IconButtonShape::Square)
                        .icon_size(IconSize::XSmall)
                        .icon_color(Color::Ignored)
                        // TODO: Figure out a way to pass focus handle here so we can display the `OpenPromptLibrary` keybinding
                        .tooltip(Tooltip::text("View User Rules"))
                        .on_click(move |_event, window, cx| {
                            window.dispatch_action(
                                Box::new(OpenPromptLibrary {
                                    prompt_to_focus: first_default_user_rules_id,
                                }),
                                cx,
                            )
                        }),
                ),
        )
    },
)
.when_some(user_rules_text, |parent, user_rules_text| {
    parent.child(
        h_flex()
            .w_full()
            .child(
                Icon::new(RULES_ICON)
                    .size(IconSize::XSmall)
                    .color(Color::Disabled),
            )
            .child(
                Label::new(user_rules_text)
                    .size(LabelSize::XSmall)
                    .color(Color::Muted)
                    .truncate()
                    .buffer_font(cx)
                    .ml_1p5()
                    .mr_0p5(),
            )
            .child(
                IconButton::new("open-prompt-library", IconName::ArrowUpRightAlt)
                    .shape(ui::IconButtonShape::Square)
                    .icon_size(IconSize::XSmall)
                    .icon_color(Color::Ignored)
                    // TODO: Figure out a way to pass focus handle here so we can display the `OpenPromptLibrary` keybinding
                    .tooltip(Tooltip::text("View User Rules"))
                    .on_click(move |_event, window, cx| {
                        window.dispatch_action(
                            Box::new(OpenPromptLibrary {
                                prompt_to_select: first_user_rules_id,
                            }),
                            cx,
                        )
                    }),
            ),
    )
})
.when_some(rules_file_text, |parent, rules_file_text| {
    parent.child(
        h_flex()

@@ -3259,12 +3282,10 @@ pub(crate) fn open_context(
    }
}
AssistantContext::Directory(directory_context) => {
    let project_path = directory_context.project_path(cx);
    let entry_id = directory_context.entry_id;
    workspace.update(cx, |workspace, cx| {
        workspace.project().update(cx, |project, cx| {
            if let Some(entry) = project.entry_for_path(&project_path, cx) {
                cx.emit(project::Event::RevealInProjectPanel(entry.id));
            }
        workspace.project().update(cx, |_project, cx| {
            cx.emit(project::Event::RevealInProjectPanel(entry_id));
        })
    })
}

@@ -3316,6 +3337,12 @@ pub(crate) fn open_context(
        }
    })
}
AssistantContext::Rules(rules_context) => window.dispatch_action(
    Box::new(OpenPromptLibrary {
        prompt_to_select: Some(rules_context.prompt_id.0),
    }),
    cx,
),
}
}
crates/agent/src/assistant_configuration.rs

@@ -16,7 +16,7 @@ use language_model::{LanguageModelProvider, LanguageModelProviderId, LanguageMod
use settings::{Settings, update_settings_file};
use ui::{
    Disclosure, Divider, DividerColor, ElevationIndex, Indicator, Scrollbar, ScrollbarState,
    Switch, Tooltip, prelude::*,
    Switch, SwitchColor, Tooltip, prelude::*,
};
use util::ResultExt as _;
use zed_actions::ExtensionCategoryFilter;

@@ -236,6 +236,7 @@ impl AssistantConfiguration {
    "always-allow-tool-actions-switch",
    always_allow_tool_actions.into(),
)
.color(SwitchColor::Accent)
.on_click({
    let fs = self.fs.clone();
    move |state, _window, cx| {

@@ -332,41 +333,44 @@ impl AssistantConfiguration {
    ),
)
.child(
    Switch::new("context-server-switch", is_running.into()).on_click({
        let context_server_manager =
            self.context_server_manager.clone();
        let context_server = context_server.clone();
        move |state, _window, cx| match state {
            ToggleState::Unselected | ToggleState::Indeterminate => {
                context_server_manager.update(cx, |this, cx| {
                    this.stop_server(context_server.clone(), cx)
                        .log_err();
                });
            }
            ToggleState::Selected => {
                cx.spawn({
                    let context_server_manager =
                        context_server_manager.clone();
                    let context_server = context_server.clone();
                    async move |cx| {
                        if let Some(start_server_task) =
                            context_server_manager
                                .update(cx, |this, cx| {
                                    this.start_server(
                                        context_server,
                                        cx,
                                    )
                                })
                                .log_err()
                        {
                            start_server_task.await.log_err();
    Switch::new("context-server-switch", is_running.into())
        .color(SwitchColor::Accent)
        .on_click({
            let context_server_manager =
                self.context_server_manager.clone();
            let context_server = context_server.clone();
            move |state, _window, cx| match state {
                ToggleState::Unselected
                | ToggleState::Indeterminate => {
                    context_server_manager.update(cx, |this, cx| {
                        this.stop_server(context_server.clone(), cx)
                            .log_err();
                    });
                }
                ToggleState::Selected => {
                    cx.spawn({
                        let context_server_manager =
                            context_server_manager.clone();
                        let context_server = context_server.clone();
                        async move |cx| {
                            if let Some(start_server_task) =
                                context_server_manager
                                    .update(cx, |this, cx| {
                                        this.start_server(
                                            context_server,
                                            cx,
                                        )
                                    })
                                    .log_err()
                            {
                                start_server_task.await.log_err();
                            }
                        }
                    }
                })
                .detach();
            })
            .detach();
        }
    }
}
}),
}),
),
)
.map(|parent| {
crates/agent/src/assistant_panel.rs

@@ -25,7 +25,7 @@ use language_model::{LanguageModelProviderTosView, LanguageModelRegistry};
use language_model_selector::ToggleModelSelector;
use project::Project;
use prompt_library::{PromptLibrary, open_prompt_library};
use prompt_store::{PromptBuilder, PromptId};
use prompt_store::{PromptBuilder, PromptId, UserPromptId};
use proto::Plan;
use settings::{Settings, update_settings_file};
use time::UtcOffset;

@@ -79,11 +79,11 @@ pub fn init(cx: &mut App) {
        panel.update(cx, |panel, cx| panel.new_prompt_editor(window, cx));
    }
})
.register_action(|workspace, _: &OpenPromptLibrary, window, cx| {
.register_action(|workspace, action: &OpenPromptLibrary, window, cx| {
    if let Some(panel) = workspace.panel::<AssistantPanel>(cx) {
        workspace.focus_panel::<AssistantPanel>(window, cx);
        panel.update(cx, |panel, cx| {
            panel.deploy_prompt_library(&OpenPromptLibrary::default(), window, cx)
            panel.deploy_prompt_library(action, window, cx)
        });
    }
})

@@ -502,7 +502,9 @@ impl AssistantPanel {
        None,
    ))
}),
action.prompt_to_focus.map(|uuid| PromptId::User { uuid }),
action.prompt_to_select.map(|uuid| PromptId::User {
    uuid: UserPromptId(uuid),
}),
cx,
)
.detach_and_log_err(cx);
crates/agent/src/context.rs

@@ -3,7 +3,8 @@ use std::{ops::Range, path::Path, sync::Arc};
use gpui::{App, Entity, SharedString};
use language::{Buffer, File};
use language_model::LanguageModelRequestMessage;
use project::{ProjectPath, Worktree};
use project::{ProjectEntryId, ProjectPath, Worktree};
use prompt_store::UserPromptId;
use rope::Point;
use serde::{Deserialize, Serialize};
use text::{Anchor, BufferId};

@@ -12,6 +13,8 @@ use util::post_inc;

use crate::thread::Thread;

pub const RULES_ICON: IconName = IconName::Context;

#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Clone, Copy, Serialize, Deserialize)]
pub struct ContextId(pub(crate) usize);

@@ -20,6 +23,7 @@ impl ContextId {
    Self(post_inc(&mut self.0))
    }
}

pub enum ContextKind {
    File,
    Directory,

@@ -27,6 +31,7 @@ pub enum ContextKind {
    Excerpt,
    FetchedUrl,
    Thread,
    Rules,
}

impl ContextKind {

@@ -38,6 +43,7 @@ impl ContextKind {
    ContextKind::Excerpt => IconName::Code,
    ContextKind::FetchedUrl => IconName::Globe,
    ContextKind::Thread => IconName::MessageBubbles,
    ContextKind::Rules => RULES_ICON,
    }
    }
}

@@ -50,6 +56,7 @@ pub enum AssistantContext {
    FetchedUrl(FetchedUrlContext),
    Thread(ThreadContext),
    Excerpt(ExcerptContext),
    Rules(RulesContext),
}

impl AssistantContext {

@@ -61,6 +68,7 @@ impl AssistantContext {
    Self::FetchedUrl(url) => url.id,
    Self::Thread(thread) => thread.id,
    Self::Excerpt(excerpt) => excerpt.id,
    Self::Rules(rules) => rules.id,
    }
    }
}

@@ -75,17 +83,25 @@ pub struct FileContext {
pub struct DirectoryContext {
    pub id: ContextId,
    pub worktree: Entity<Worktree>,
    pub path: Arc<Path>,
    pub entry_id: ProjectEntryId,
    pub last_path: Arc<Path>,
    /// Buffers of the files within the directory.
    pub context_buffers: Vec<ContextBuffer>,
}

impl DirectoryContext {
    pub fn project_path(&self, cx: &App) -> ProjectPath {
        ProjectPath {
            worktree_id: self.worktree.read(cx).id(),
            path: self.path.clone(),
        }
    pub fn entry<'a>(&self, cx: &'a App) -> Option<&'a project::Entry> {
        self.worktree.read(cx).entry_for_id(self.entry_id)
    }

    pub fn project_path(&self, cx: &App) -> Option<ProjectPath> {
        let worktree = self.worktree.read(cx);
        worktree
            .entry_for_id(self.entry_id)
            .map(|entry| ProjectPath {
                worktree_id: worktree.id(),
                path: entry.path.clone(),
            })
    }
}

@@ -123,7 +139,7 @@ impl ThreadContext {
#[derive(Clone)]
pub struct ContextBuffer {
    pub id: BufferId,
    // TODO: Entity<Buffer> holds onto the thread even if the thread is deleted. Should probably be
    // TODO: Entity<Buffer> holds onto the buffer even if the buffer is deleted. Should probably be
    // a WeakEntity and handle removal from the UI when it has dropped.
    pub buffer: Entity<Buffer>,
    pub file: Arc<dyn File>,

@@ -168,6 +184,14 @@ pub struct ExcerptContext {
    pub context_buffer: ContextBuffer,
}

#[derive(Debug, Clone)]
pub struct RulesContext {
    pub id: ContextId,
    pub prompt_id: UserPromptId,
    pub title: SharedString,
    pub text: SharedString,
}

/// Formats a collection of contexts into a string representation
pub fn format_context_as_string<'a>(
    contexts: impl Iterator<Item = &'a AssistantContext>,

@@ -179,6 +203,7 @@ pub fn format_context_as_string<'a>(
    let mut excerpt_context = Vec::new();
    let mut fetch_context = Vec::new();
    let mut thread_context = Vec::new();
    let mut rules_context = Vec::new();

    for context in contexts {
        match context {

@@ -188,6 +213,7 @@ pub fn format_context_as_string<'a>(
    AssistantContext::Excerpt(context) => excerpt_context.push(context),
    AssistantContext::FetchedUrl(context) => fetch_context.push(context),
    AssistantContext::Thread(context) => thread_context.push(context),
    AssistantContext::Rules(context) => rules_context.push(context),
    }
}

@@ -197,6 +223,7 @@ pub fn format_context_as_string<'a>(
    && excerpt_context.is_empty()
    && fetch_context.is_empty()
    && thread_context.is_empty()
    && rules_context.is_empty()
{
    return None;
}

@@ -263,6 +290,18 @@ pub fn format_context_as_string<'a>(
    result.push_str("</conversation_threads>\n");
}

if !rules_context.is_empty() {
    result.push_str(
        "<user_rules>\n\
        The user has specified the following rules that should be applied:\n\n",
    );
    for context in &rules_context {
        result.push_str(&context.text);
        result.push('\n');
    }
    result.push_str("</user_rules>\n");
}

result.push_str("</context>\n");
Some(result)
}
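`ContextId::post_inc` above mints monotonically increasing ids; per the `use util::post_inc;` import, the helper returns the current value and then bumps it in place. A tiny equivalent for reference (the free function here is a stand-in for the one in Zed's `util` crate):

```rust
// Returns the current value and increments it in place, like util::post_inc.
fn post_inc(value: &mut usize) -> usize {
    let prev = *value;
    *value += 1;
    prev
}

struct ContextId(usize);

impl ContextId {
    fn post_inc(&mut self) -> ContextId {
        ContextId(post_inc(&mut self.0))
    }
}

fn main() {
    let mut next = ContextId(0);
    assert_eq!(next.post_inc().0, 0);
    assert_eq!(next.post_inc().0, 1);
    assert_eq!(next.0, 2); // the counter now points at the next free id
}
```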
crates/agent/src/context_picker.rs

@@ -1,6 +1,7 @@
mod completion_provider;
mod fetch_context_picker;
mod file_context_picker;
mod rules_context_picker;
mod symbol_context_picker;
mod thread_context_picker;

@@ -18,17 +19,22 @@ use gpui::{
};
use multi_buffer::MultiBufferRow;
use project::{Entry, ProjectPath};
use prompt_store::UserPromptId;
use rules_context_picker::RulesContextEntry;
use symbol_context_picker::SymbolContextPicker;
use thread_context_picker::{ThreadContextEntry, render_thread_context_entry};
use ui::{
    ButtonLike, ContextMenu, ContextMenuEntry, ContextMenuItem, Disclosure, TintColor, prelude::*,
};
use uuid::Uuid;
use workspace::{Workspace, notifications::NotifyResultExt};

use crate::AssistantPanel;
use crate::context::RULES_ICON;
pub use crate::context_picker::completion_provider::ContextPickerCompletionProvider;
use crate::context_picker::fetch_context_picker::FetchContextPicker;
use crate::context_picker::file_context_picker::FileContextPicker;
use crate::context_picker::rules_context_picker::RulesContextPicker;
use crate::context_picker::thread_context_picker::ThreadContextPicker;
use crate::context_store::ContextStore;
use crate::thread::ThreadId;

@@ -40,6 +46,7 @@ enum ContextPickerMode {
    Symbol,
    Fetch,
    Thread,
    Rules,
}

impl TryFrom<&str> for ContextPickerMode {

@@ -51,6 +58,7 @@ impl TryFrom<&str> for ContextPickerMode {
    "symbol" => Ok(Self::Symbol),
    "fetch" => Ok(Self::Fetch),
    "thread" => Ok(Self::Thread),
    "rules" => Ok(Self::Rules),
    _ => Err(format!("Invalid context picker mode: {}", value)),
    }
}

@@ -63,6 +71,7 @@ impl ContextPickerMode {
    Self::Symbol => "symbol",
    Self::Fetch => "fetch",
    Self::Thread => "thread",
    Self::Rules => "rules",
    }
}

@@ -72,6 +81,7 @@ impl ContextPickerMode {
    Self::Symbol => "Symbols",
    Self::Fetch => "Fetch",
    Self::Thread => "Threads",
    Self::Rules => "Rules",
    }
}

@@ -81,6 +91,7 @@ impl ContextPickerMode {
    Self::Symbol => IconName::Code,
    Self::Fetch => IconName::Globe,
    Self::Thread => IconName::MessageBubbles,
    Self::Rules => RULES_ICON,
    }
    }
}

@@ -92,6 +103,7 @@ enum ContextPickerState {
    Symbol(Entity<SymbolContextPicker>),
    Fetch(Entity<FetchContextPicker>),
    Thread(Entity<ThreadContextPicker>),
    Rules(Entity<RulesContextPicker>),
}

pub(super) struct ContextPicker {

@@ -253,6 +265,19 @@ impl ContextPicker {
        }));
    }
}
ContextPickerMode::Rules => {
    if let Some(thread_store) = self.thread_store.as_ref() {
        self.mode = ContextPickerState::Rules(cx.new(|cx| {
            RulesContextPicker::new(
                thread_store.clone(),
                context_picker.clone(),
                self.context_store.clone(),
                window,
                cx,
            )
        }));
    }
}
}

cx.notify();

@@ -381,6 +406,7 @@ impl ContextPicker {
    ContextPickerState::Symbol(entity) => entity.update(cx, |_, cx| cx.notify()),
    ContextPickerState::Fetch(entity) => entity.update(cx, |_, cx| cx.notify()),
    ContextPickerState::Thread(entity) => entity.update(cx, |_, cx| cx.notify()),
    ContextPickerState::Rules(entity) => entity.update(cx, |_, cx| cx.notify()),
    }
    }
}

@@ -395,6 +421,7 @@ impl Focusable for ContextPicker {
    ContextPickerState::Symbol(symbol_picker) => symbol_picker.focus_handle(cx),
    ContextPickerState::Fetch(fetch_picker) => fetch_picker.focus_handle(cx),
    ContextPickerState::Thread(thread_picker) => thread_picker.focus_handle(cx),
    ContextPickerState::Rules(user_rules_picker) => user_rules_picker.focus_handle(cx),
    }
    }
}

@@ -410,6 +437,9 @@ impl Render for ContextPicker {
    ContextPickerState::Symbol(symbol_picker) => parent.child(symbol_picker.clone()),
    ContextPickerState::Fetch(fetch_picker) => parent.child(fetch_picker.clone()),
    ContextPickerState::Thread(thread_picker) => parent.child(thread_picker.clone()),
    ContextPickerState::Rules(user_rules_picker) => {
        parent.child(user_rules_picker.clone())
    }
    })
    }
}

@@ -431,6 +461,7 @@ fn supported_context_picker_modes(
    ];
    if thread_store.is_some() {
        modes.push(ContextPickerMode::Thread);
        modes.push(ContextPickerMode::Rules);
    }
    modes
}

@@ -626,6 +657,7 @@ pub enum MentionLink {
    Symbol(ProjectPath, String),
    Fetch(String),
    Thread(ThreadId),
    Rules(UserPromptId),
}

impl MentionLink {

@@ -633,14 +665,16 @@ impl MentionLink {
    const SYMBOL: &str = "@symbol";
    const THREAD: &str = "@thread";
    const FETCH: &str = "@fetch";
    const RULES: &str = "@rules";

    const SEPARATOR: &str = ":";

    pub fn is_valid(url: &str) -> bool {
        url.starts_with(Self::FILE)
            || url.starts_with(Self::SYMBOL)
            || url.starts_with(Self::FETCH)
            || url.starts_with(Self::THREAD)
            || url.starts_with(Self::FETCH)
            || url.starts_with(Self::RULES)
    }

    pub fn for_file(file_name: &str, full_path: &str) -> String {

@@ -657,12 +691,16 @@ impl MentionLink {
        )
    }

    pub fn for_thread(thread: &ThreadContextEntry) -> String {
        format!("[@{}]({}:{})", thread.summary, Self::THREAD, thread.id)
    }

    pub fn for_fetch(url: &str) -> String {
        format!("[@{}]({}:{})", url, Self::FETCH, url)
    }

    pub fn for_thread(thread: &ThreadContextEntry) -> String {
        format!("[@{}]({}:{})", thread.summary, Self::THREAD, thread.id)
    pub fn for_rules(rules: &RulesContextEntry) -> String {
        format!("[@{}]({}:{})", rules.title, Self::RULES, rules.prompt_id.0)
    }

    pub fn try_parse(link: &str, workspace: &Entity<Workspace>, cx: &App) -> Option<Self> {

@@ -706,6 +744,10 @@ impl MentionLink {
        Some(MentionLink::Thread(thread_id))
    }
    Self::FETCH => Some(MentionLink::Fetch(argument.to_string())),
    Self::RULES => {
        let prompt_id = UserPromptId(Uuid::try_parse(argument).ok()?);
        Some(MentionLink::Rules(prompt_id))
    }
    _ => None,
    }
}
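Per the `for_*` constructors and `try_parse` above, a mention is a Markdown link whose destination is a scheme-prefixed string separated by `:`; for rules, the argument is the prompt's UUID. A stand-alone sketch of the round trip, with the crate's types replaced by plain strings and the separator inlined (illustrative only):

```rust
// Mirrors MentionLink::for_rules: "[@{title}](@rules:{uuid})".
fn rules_link(title: &str, prompt_uuid: &str) -> String {
    format!("[@{}](@rules:{})", title, prompt_uuid)
}

// Mirrors the parsing side: strip the scheme, keep the UUID argument.
fn parse_rules_target(target: &str) -> Option<&str> {
    target.strip_prefix("@rules:")
}

fn main() {
    let link = rules_link("Team style guide", "9fe4a14f-0000-0000-0000-000000000000");
    assert_eq!(
        link,
        "[@Team style guide](@rules:9fe4a14f-0000-0000-0000-000000000000)"
    );
    assert_eq!(
        parse_rules_target("@rules:9fe4a14f-0000-0000-0000-000000000000"),
        Some("9fe4a14f-0000-0000-0000-000000000000")
    );
}
```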
crates/agent/src/context_picker/completion_provider.rs

@@ -14,11 +14,13 @@ use http_client::HttpClientWithUrl;
use language::{Buffer, CodeLabel, HighlightId};
use lsp::CompletionContext;
use project::{Completion, CompletionIntent, ProjectPath, Symbol, WorktreeId};
use prompt_store::PromptId;
use rope::Point;
use text::{Anchor, ToPoint};
use ui::prelude::*;
use workspace::Workspace;

use crate::context::RULES_ICON;
use crate::context_picker::file_context_picker::search_files;
use crate::context_picker::symbol_context_picker::search_symbols;
use crate::context_store::ContextStore;

@@ -26,6 +28,7 @@ use crate::thread_store::ThreadStore;

use super::fetch_context_picker::fetch_url_content;
use super::file_context_picker::FileMatch;
use super::rules_context_picker::{RulesContextEntry, search_rules};
use super::symbol_context_picker::SymbolMatch;
use super::thread_context_picker::{ThreadContextEntry, ThreadMatch, search_threads};
use super::{

@@ -38,6 +41,7 @@ pub(crate) enum Match {
    File(FileMatch),
    Thread(ThreadMatch),
    Fetch(SharedString),
    Rules(RulesContextEntry),
    Mode(ModeMatch),
}

@@ -54,6 +58,7 @@ impl Match {
    Match::Thread(_) => 1.,
    Match::Symbol(_) => 1.,
    Match::Fetch(_) => 1.,
    Match::Rules(_) => 1.,
    }
    }
}

@@ -112,6 +117,21 @@ fn search(
        Task::ready(Vec::new())
    }
}
Some(ContextPickerMode::Rules) => {
    if let Some(thread_store) = thread_store.as_ref().and_then(|t| t.upgrade()) {
        let search_rules_task =
            search_rules(query.clone(), cancellation_flag.clone(), thread_store, cx);
        cx.background_spawn(async move {
            search_rules_task
                .await
                .into_iter()
                .map(Match::Rules)
                .collect::<Vec<_>>()
        })
    } else {
        Task::ready(Vec::new())
    }
}
None => {
    if query.is_empty() {
        let mut matches = recent_entries

@@ -287,6 +307,60 @@ impl ContextPickerCompletionProvider {
    }
}

fn completion_for_rules(
    rules: RulesContextEntry,
    excerpt_id: ExcerptId,
    source_range: Range<Anchor>,
    editor: Entity<Editor>,
    context_store: Entity<ContextStore>,
    thread_store: Entity<ThreadStore>,
) -> Completion {
    let new_text = MentionLink::for_rules(&rules);
    let new_text_len = new_text.len();
    Completion {
        replace_range: source_range.clone(),
        new_text,
        label: CodeLabel::plain(rules.title.to_string(), None),
        documentation: None,
        insert_text_mode: None,
        source: project::CompletionSource::Custom,
        icon_path: Some(RULES_ICON.path().into()),
        confirm: Some(confirm_completion_callback(
            RULES_ICON.path().into(),
            rules.title.clone(),
            excerpt_id,
            source_range.start,
            new_text_len,
            editor.clone(),
            move |cx| {
                let prompt_uuid = rules.prompt_id;
                let prompt_id = PromptId::User { uuid: prompt_uuid };
                let context_store = context_store.clone();
                let Some(prompt_store) = thread_store.read(cx).prompt_store() else {
                    log::error!("Can't add user rules as prompt store is missing.");
                    return;
                };
                let prompt_store = prompt_store.read(cx);
                let Some(metadata) = prompt_store.metadata(prompt_id) else {
                    return;
                };
                let Some(title) = metadata.title else {
                    return;
                };
                let text_task = prompt_store.load(prompt_id, cx);

                cx.spawn(async move |cx| {
                    let text = text_task.await?;
                    context_store.update(cx, |context_store, cx| {
                        context_store.add_rules(prompt_uuid, title, text, false, cx)
                    })
                })
                .detach_and_log_err(cx);
            },
        )),
    }
}

fn completion_for_fetch(
    source_range: Range<Anchor>,
    url_to_fetch: SharedString,

@@ -593,6 +667,17 @@ impl CompletionProvider for ContextPickerCompletionProvider {
        thread_store,
    ))
}
Match::Rules(user_rules) => {
    let thread_store = thread_store.as_ref().and_then(|t| t.upgrade())?;
    Some(Self::completion_for_rules(
        user_rules,
        excerpt_id,
        source_range.clone(),
        editor.clone(),
        context_store.clone(),
        thread_store,
    ))
}
Match::Fetch(url) => Some(Self::completion_for_fetch(
    source_range.clone(),
    url,
crates/agent/src/context_picker/rules_context_picker.rs (248 lines, new file)
@@ -0,0 +1,248 @@
use std::sync::Arc;
use std::sync::atomic::AtomicBool;

use anyhow::anyhow;
use gpui::{App, DismissEvent, Entity, FocusHandle, Focusable, Task, WeakEntity};
use picker::{Picker, PickerDelegate};
use prompt_store::{PromptId, UserPromptId};
use ui::{ListItem, prelude::*};

use crate::context::RULES_ICON;
use crate::context_picker::ContextPicker;
use crate::context_store::{self, ContextStore};
use crate::thread_store::ThreadStore;

pub struct RulesContextPicker {
    picker: Entity<Picker<RulesContextPickerDelegate>>,
}

impl RulesContextPicker {
    pub fn new(
        thread_store: WeakEntity<ThreadStore>,
        context_picker: WeakEntity<ContextPicker>,
        context_store: WeakEntity<context_store::ContextStore>,
        window: &mut Window,
        cx: &mut Context<Self>,
    ) -> Self {
        let delegate = RulesContextPickerDelegate::new(thread_store, context_picker, context_store);
        let picker = cx.new(|cx| Picker::uniform_list(delegate, window, cx));

        RulesContextPicker { picker }
    }
}

impl Focusable for RulesContextPicker {
    fn focus_handle(&self, cx: &App) -> FocusHandle {
        self.picker.focus_handle(cx)
    }
}

impl Render for RulesContextPicker {
    fn render(&mut self, _window: &mut Window, _cx: &mut Context<Self>) -> impl IntoElement {
        self.picker.clone()
    }
}

#[derive(Debug, Clone)]
pub struct RulesContextEntry {
    pub prompt_id: UserPromptId,
    pub title: SharedString,
}

pub struct RulesContextPickerDelegate {
    thread_store: WeakEntity<ThreadStore>,
    context_picker: WeakEntity<ContextPicker>,
    context_store: WeakEntity<context_store::ContextStore>,
    matches: Vec<RulesContextEntry>,
    selected_index: usize,
}

impl RulesContextPickerDelegate {
    pub fn new(
        thread_store: WeakEntity<ThreadStore>,
        context_picker: WeakEntity<ContextPicker>,
        context_store: WeakEntity<context_store::ContextStore>,
    ) -> Self {
        RulesContextPickerDelegate {
            thread_store,
            context_picker,
            context_store,
            matches: Vec::new(),
            selected_index: 0,
        }
    }
}

impl PickerDelegate for RulesContextPickerDelegate {
    type ListItem = ListItem;

    fn match_count(&self) -> usize {
        self.matches.len()
    }

    fn selected_index(&self) -> usize {
        self.selected_index
    }

    fn set_selected_index(
        &mut self,
        ix: usize,
        _window: &mut Window,
        _cx: &mut Context<Picker<Self>>,
    ) {
        self.selected_index = ix;
    }

    fn placeholder_text(&self, _window: &mut Window, _cx: &mut App) -> Arc<str> {
        "Search available rules…".into()
    }

    fn update_matches(
        &mut self,
        query: String,
        window: &mut Window,
        cx: &mut Context<Picker<Self>>,
    ) -> Task<()> {
        let Some(thread_store) = self.thread_store.upgrade() else {
            return Task::ready(());
        };

        let search_task = search_rules(query, Arc::new(AtomicBool::default()), thread_store, cx);
        cx.spawn_in(window, async move |this, cx| {
            let matches = search_task.await;
            this.update(cx, |this, cx| {
                this.delegate.matches = matches;
                this.delegate.selected_index = 0;
                cx.notify();
            })
            .ok();
        })
    }

    fn confirm(&mut self, _secondary: bool, _window: &mut Window, cx: &mut Context<Picker<Self>>) {
        let Some(entry) = self.matches.get(self.selected_index) else {
            return;
        };

        let Some(thread_store) = self.thread_store.upgrade() else {
            return;
        };

        let prompt_id = entry.prompt_id;

        let load_rules_task = thread_store.update(cx, |thread_store, cx| {
            thread_store.load_rules(prompt_id, cx)
        });

        cx.spawn(async move |this, cx| {
            let (metadata, text) = load_rules_task.await?;
            let Some(title) = metadata.title else {
                return Err(anyhow!("Encountered user rule with no title when attempting to add it to agent context."));
            };
            this.update(cx, |this, cx| {
                this.delegate
                    .context_store
                    .update(cx, |context_store, cx| {
                        context_store.add_rules(prompt_id, title, text, true, cx)
                    })
                    .ok();
            })
        })
        .detach_and_log_err(cx);
    }

    fn dismissed(&mut self, _window: &mut Window, cx: &mut Context<Picker<Self>>) {
        self.context_picker
            .update(cx, |_, cx| {
                cx.emit(DismissEvent);
            })
            .ok();
    }

    fn render_match(
        &self,
        ix: usize,
        selected: bool,
        _window: &mut Window,
        cx: &mut Context<Picker<Self>>,
    ) -> Option<Self::ListItem> {
        let thread = &self.matches[ix];

        Some(ListItem::new(ix).inset(true).toggle_state(selected).child(
            render_thread_context_entry(thread, self.context_store.clone(), cx),
        ))
    }
}

pub fn render_thread_context_entry(
    user_rules: &RulesContextEntry,
    context_store: WeakEntity<ContextStore>,
    cx: &mut App,
) -> Div {
    let added = context_store.upgrade().map_or(false, |ctx_store| {
        ctx_store
            .read(cx)
            .includes_user_rules(&user_rules.prompt_id)
            .is_some()
    });

    h_flex()
        .gap_1p5()
        .w_full()
        .justify_between()
        .child(
            h_flex()
                .gap_1p5()
                .max_w_72()
                .child(
                    Icon::new(RULES_ICON)
                        .size(IconSize::XSmall)
                        .color(Color::Muted),
                )
                .child(Label::new(user_rules.title.clone()).truncate()),
        )
        .when(added, |el| {
            el.child(
                h_flex()
                    .gap_1()
                    .child(
                        Icon::new(IconName::Check)
                            .size(IconSize::Small)
                            .color(Color::Success),
                    )
                    .child(Label::new("Added").size(LabelSize::Small)),
            )
        })
}

pub(crate) fn search_rules(
    query: String,
    cancellation_flag: Arc<AtomicBool>,
    thread_store: Entity<ThreadStore>,
    cx: &mut App,
) -> Task<Vec<RulesContextEntry>> {
    let Some(prompt_store) = thread_store.read(cx).prompt_store() else {
        return Task::ready(vec![]);
    };
    let search_task = prompt_store.read(cx).search(query, cancellation_flag, cx);
    cx.background_spawn(async move {
        search_task
            .await
            .into_iter()
            .flat_map(|metadata| {
                // Default prompts are filtered out as they are automatically included.
                if metadata.default {
                    None
                } else {
                    match metadata.id {
                        PromptId::EditWorkflow => None,
                        PromptId::User { uuid } => Some(RulesContextEntry {
                            prompt_id: uuid,
                            title: metadata.title?,
                        }),
                    }
                }
            })
            .collect::<Vec<_>>()
    })
}
crates/agent/src/context_picker/thread_context_picker.rs

@@ -103,11 +103,11 @@ impl PickerDelegate for ThreadContextPickerDelegate {
    window: &mut Window,
    cx: &mut Context<Picker<Self>>,
) -> Task<()> {
    let Some(threads) = self.thread_store.upgrade() else {
    let Some(thread_store) = self.thread_store.upgrade() else {
        return Task::ready(());
    };

    let search_task = search_threads(query, Arc::new(AtomicBool::default()), threads, cx);
    let search_task = search_threads(query, Arc::new(AtomicBool::default()), thread_store, cx);
    cx.spawn_in(window, async move |this, cx| {
        let matches = search_task.await;
        this.update(cx, |this, cx| {

@@ -217,15 +217,15 @@ pub(crate) fn search_threads(
    thread_store: Entity<ThreadStore>,
    cx: &mut App,
) -> Task<Vec<ThreadMatch>> {
    let threads = thread_store.update(cx, |this, _cx| {
        this.threads()
            .into_iter()
            .map(|thread| ThreadContextEntry {
                id: thread.id,
                summary: thread.summary,
            })
            .collect::<Vec<_>>()
    });
    let threads = thread_store
        .read(cx)
        .threads()
        .into_iter()
        .map(|thread| ThreadContextEntry {
            id: thread.id,
            summary: thread.summary,
        })
        .collect::<Vec<_>>();

    let executor = cx.background_executor().clone();
    cx.background_spawn(async move {
crates/agent/src/context_store.rs

@@ -8,7 +8,8 @@ use futures::future::join_all;
use futures::{self, Future, FutureExt, future};
use gpui::{App, AppContext as _, Context, Entity, SharedString, Task, WeakEntity};
use language::{Buffer, File};
use project::{Project, ProjectItem, ProjectPath, Worktree};
use project::{Project, ProjectEntryId, ProjectItem, ProjectPath, Worktree};
use prompt_store::UserPromptId;
use rope::{Point, Rope};
use text::{Anchor, BufferId, OffsetRangeExt};
use util::{ResultExt as _, maybe};

@@ -16,7 +17,7 @@ use util::{ResultExt as _, maybe};
use crate::ThreadStore;
use crate::context::{
    AssistantContext, ContextBuffer, ContextId, ContextSymbol, ContextSymbolId, DirectoryContext,
    ExcerptContext, FetchedUrlContext, FileContext, SymbolContext, ThreadContext,
    ExcerptContext, FetchedUrlContext, FileContext, RulesContext, SymbolContext, ThreadContext,
};
use crate::context_strip::SuggestedContext;
use crate::thread::{Thread, ThreadId};

@@ -25,7 +26,6 @@ pub struct ContextStore {
    project: WeakEntity<Project>,
    context: Vec<AssistantContext>,
    thread_store: Option<WeakEntity<ThreadStore>>,
    // TODO: If an EntityId is used for all context types (like BufferId), can remove ContextId.
    next_context_id: ContextId,
    files: BTreeMap<BufferId, ContextId>,
    directories: HashMap<ProjectPath, ContextId>,

@@ -35,6 +35,7 @@ pub struct ContextStore {
    threads: HashMap<ThreadId, ContextId>,
    thread_summary_tasks: Vec<Task<()>>,
    fetched_urls: HashMap<String, ContextId>,
    user_rules: HashMap<UserPromptId, ContextId>,
}

impl ContextStore {

@@ -55,6 +56,7 @@ impl ContextStore {
    threads: HashMap::default(),
    thread_summary_tasks: Vec::new(),
    fetched_urls: HashMap::default(),
    user_rules: HashMap::default(),
    }
}

@@ -72,6 +74,7 @@ impl ContextStore {
    self.directories.clear();
    self.threads.clear();
    self.fetched_urls.clear();
    self.user_rules.clear();
}

pub fn add_file_from_path(

@@ -159,6 +162,14 @@ impl ContextStore {
        return Task::ready(Err(anyhow!("failed to read project")));
    };

    let Some(entry_id) = project
        .read(cx)
        .entry_for_path(&project_path, cx)
        .map(|entry| entry.id)
    else {
        return Task::ready(Err(anyhow!("no entry found for directory context")));
    };

    let already_included = match self.includes_directory(&project_path) {
        Some(FileInclusion::Direct(context_id)) => {
            if remove_if_exists {

@@ -228,7 +239,7 @@ impl ContextStore {
    }

    this.update(cx, |this, cx| {
        this.insert_directory(worktree, project_path, context_buffers, cx);
        this.insert_directory(worktree, entry_id, project_path, context_buffers, cx);
    })?;

    anyhow::Ok(())

@@ -238,19 +249,21 @@ impl ContextStore {
fn insert_directory(
    &mut self,
    worktree: Entity<Worktree>,
    entry_id: ProjectEntryId,
    project_path: ProjectPath,
    context_buffers: Vec<ContextBuffer>,
    cx: &mut Context<Self>,
) {
    let id = self.next_context_id.post_inc();
    let path = project_path.path.clone();
    let last_path = project_path.path.clone();
    self.directories.insert(project_path, id);

    self.context
        .push(AssistantContext::Directory(DirectoryContext {
            id,
            worktree,
            path,
            entry_id,
            last_path,
            context_buffers,
        }));
    cx.notify();

@@ -390,6 +403,42 @@ impl ContextStore {
    cx.notify();
}

pub fn add_rules(
    &mut self,
    prompt_id: UserPromptId,
    title: impl Into<SharedString>,
    text: impl Into<SharedString>,
    remove_if_exists: bool,
    cx: &mut Context<ContextStore>,
) {
    if let Some(context_id) = self.includes_user_rules(&prompt_id) {
        if remove_if_exists {
            self.remove_context(context_id, cx);
        }
    } else {
        self.insert_user_rules(prompt_id, title, text, cx);
    }
}

pub fn insert_user_rules(
    &mut self,
    prompt_id: UserPromptId,
    title: impl Into<SharedString>,
    text: impl Into<SharedString>,
    cx: &mut Context<ContextStore>,
) {
    let id = self.next_context_id.post_inc();

    self.user_rules.insert(prompt_id, id);
    self.context.push(AssistantContext::Rules(RulesContext {
        id,
        prompt_id,
        title: title.into(),
        text: text.into(),
    }));
    cx.notify();
}

pub fn add_fetched_url(
    &mut self,
    url: String,
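The `remove_if_exists` flag above gives `add_rules` toggle semantics: the picker passes `true`, so confirming an already-attached rule detaches it, while the completion path passes `false` and leaves an existing entry alone. A minimal stand-alone sketch of the same behavior over a plain map (integer ids stand in for `UserPromptId`/`ContextId`):

```rust
use std::collections::HashMap;

#[derive(Default)]
struct Store {
    user_rules: HashMap<u64, u64>, // prompt id -> context id
    next_context_id: u64,
}

impl Store {
    fn add_rules(&mut self, prompt_id: u64, remove_if_exists: bool) {
        if self.user_rules.contains_key(&prompt_id) {
            // Already attached: only the toggling caller removes it.
            if remove_if_exists {
                self.user_rules.remove(&prompt_id);
            }
        } else {
            let id = self.next_context_id;
            self.next_context_id += 1;
            self.user_rules.insert(prompt_id, id);
        }
    }
}

fn main() {
    let mut store = Store::default();
    store.add_rules(7, true);
    assert!(store.user_rules.contains_key(&7));
    store.add_rules(7, true); // picker path: toggles the rule off
    assert!(!store.user_rules.contains_key(&7));
    store.add_rules(7, false);
    store.add_rules(7, false); // completion path: no-op when present
    assert!(store.user_rules.contains_key(&7));
}
```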
@@ -518,6 +567,9 @@ impl ContextStore {
    AssistantContext::Thread(_) => {
        self.threads.retain(|_, context_id| *context_id != id);
    }
    AssistantContext::Rules(RulesContext { prompt_id, .. }) => {
        self.user_rules.remove(&prompt_id);
    }
}

cx.notify();

@@ -614,6 +666,10 @@ impl ContextStore {
    self.threads.get(thread_id).copied()
}

pub fn includes_user_rules(&self, prompt_id: &UserPromptId) -> Option<ContextId> {
    self.user_rules.get(prompt_id).copied()
}

pub fn includes_url(&self, url: &str) -> Option<ContextId> {
    self.fetched_urls.get(url).copied()
}

@@ -641,7 +697,8 @@ impl ContextStore {
    | AssistantContext::Symbol(_)
    | AssistantContext::Excerpt(_)
    | AssistantContext::FetchedUrl(_)
    | AssistantContext::Thread(_) => None,
    | AssistantContext::Thread(_)
    | AssistantContext::Rules(_) => None,
    })
    .collect()
}

@@ -828,6 +885,7 @@ pub fn refresh_context_store_text(
let task = maybe!({
    match context {
        AssistantContext::File(file_context) => {
            // TODO: Should refresh if the path has changed, as it's in the text.
            if changed_buffers.is_empty()
                || changed_buffers.contains(&file_context.context_buffer.buffer)
            {

@@ -836,8 +894,9 @@ pub fn refresh_context_store_text(
    }
}
AssistantContext::Directory(directory_context) => {
    let directory_path = directory_context.project_path(cx);
    let should_refresh = changed_buffers.is_empty()
    let directory_path = directory_context.project_path(cx)?;
    let should_refresh = directory_path.path != directory_context.last_path
        || changed_buffers.is_empty()
        || changed_buffers.iter().any(|buffer| {
            let Some(buffer_path) = buffer.read(cx).project_path(cx) else {
                return false;

@@ -847,10 +906,16 @@ pub fn refresh_context_store_text(

    if should_refresh {
        let context_store = context_store.clone();
        return refresh_directory_text(context_store, directory_context, cx);
        return refresh_directory_text(
            context_store,
            directory_context,
            directory_path,
            cx,
        );
    }
}
AssistantContext::Symbol(symbol_context) => {
    // TODO: Should refresh if the path has changed, as it's in the text.
    if changed_buffers.is_empty()
        || changed_buffers.contains(&symbol_context.context_symbol.buffer)
    {

@@ -859,6 +924,7 @@ pub fn refresh_context_store_text(
    }
}
AssistantContext::Excerpt(excerpt_context) => {
    // TODO: Should refresh if the path has changed, as it's in the text.
    if changed_buffers.is_empty()
        || changed_buffers.contains(&excerpt_context.context_buffer.buffer)
    {

@@ -876,6 +942,10 @@ pub fn refresh_context_store_text(
    // and doing the caching properly could be tricky (unless it's already handled by
    // the HttpClient?).
    AssistantContext::FetchedUrl(_) => {}
    AssistantContext::Rules(user_rules_context) => {
        let context_store = context_store.clone();
        return Some(refresh_user_rules(context_store, user_rules_context, cx));
    }
}

None

@@ -914,6 +984,7 @@ fn refresh_file_text(
fn refresh_directory_text(
    context_store: Entity<ContextStore>,
    directory_context: &DirectoryContext,
    directory_path: ProjectPath,
    cx: &App,
) -> Option<Task<()>> {
    let mut stale = false;

@@ -938,7 +1009,8 @@ fn refresh_directory_text(

    let id = directory_context.id;
    let worktree = directory_context.worktree.clone();
    let path = directory_context.path.clone();
    let entry_id = directory_context.entry_id;
    let last_path = directory_path.path;
    Some(cx.spawn(async move |cx| {
        let context_buffers = context_buffers.await;
        context_store

@@ -946,7 +1018,8 @@ fn refresh_directory_text(
    let new_directory_context = DirectoryContext {
        id,
        worktree,
        path,
        entry_id,
        last_path,
        context_buffers,
    };
    context_store.replace_context(AssistantContext::Directory(new_directory_context));

@@ -1026,6 +1099,45 @@ fn refresh_thread_text(
    })
}

fn refresh_user_rules(
    context_store: Entity<ContextStore>,
    user_rules_context: &RulesContext,
    cx: &App,
) -> Task<()> {
    let id = user_rules_context.id;
    let prompt_id = user_rules_context.prompt_id;
    let Some(thread_store) = context_store.read(cx).thread_store.as_ref() else {
        return Task::ready(());
    };
    let Ok(load_task) = thread_store.read_with(cx, |thread_store, cx| {
        thread_store.load_rules(prompt_id, cx)
    }) else {
        return Task::ready(());
    };
    cx.spawn(async move |cx| {
        if let Ok((metadata, text)) = load_task.await {
            if let Some(title) = metadata.title.clone() {
                context_store
                    .update(cx, |context_store, _cx| {
                        context_store.replace_context(AssistantContext::Rules(RulesContext {
                            id,
                            prompt_id,
                            title,
                            text: text.into(),
                        }));
                    })
                    .ok();
                return;
            }
        }
        context_store
            .update(cx, |context_store, cx| {
                context_store.remove_context(id, cx);
            })
            .ok();
    })
}

fn refresh_context_buffer(
    context_buffer: &ContextBuffer,
    cx: &App,
@@ -38,7 +38,7 @@ use crate::thread_store::{
    SerializedMessage, SerializedMessageSegment, SerializedThread, SerializedToolResult,
    SerializedToolUse, SharedProjectContext,
};
use crate::tool_use::{PendingToolUse, ToolUse, ToolUseState, USING_TOOL_MARKER};
use crate::tool_use::{PendingToolUse, ToolUse, ToolUseMetadata, ToolUseState, USING_TOOL_MARKER};

#[derive(
    Debug, PartialEq, Eq, PartialOrd, Ord, Hash, Clone, Serialize, Deserialize, JsonSchema,
@@ -774,7 +774,9 @@ impl Thread {
                        cx,
                    );
                }
                AssistantContext::FetchedUrl(_) | AssistantContext::Thread(_) => {}
                AssistantContext::FetchedUrl(_)
                | AssistantContext::Thread(_)
                | AssistantContext::Rules(_) => {}
            }
        }
    });
@@ -1180,6 +1182,12 @@ impl Thread {
            None
        };
        let prompt_id = self.last_prompt_id.clone();
        let tool_use_metadata = ToolUseMetadata {
            model: model.clone(),
            thread_id: self.id.clone(),
            prompt_id: prompt_id.clone(),
        };

        let task = cx.spawn(async move |thread, cx| {
            let stream_completion_future = model.stream_completion_with_usage(request, &cx);
            let initial_token_usage =
@@ -1285,11 +1293,27 @@ impl Thread {
                                thread.insert_message(Role::Assistant, vec![], cx)
                            });

                        thread.tool_use.request_tool_use(
                        let tool_use_id = tool_use.id.clone();
                        let streamed_input = if tool_use.is_input_complete {
                            None
                        } else {
                            Some((&tool_use.input).clone())
                        };

                        let ui_text = thread.tool_use.request_tool_use(
                            last_assistant_message_id,
                            tool_use,
                            tool_use_metadata.clone(),
                            cx,
                        );

                        if let Some(input) = streamed_input {
                            cx.emit(ThreadEvent::StreamedToolUse {
                                tool_use_id,
                                ui_text,
                                input,
                            });
                        }
                    }
                }

@@ -2180,6 +2204,11 @@ pub enum ThreadEvent {
    StreamedCompletion,
    StreamedAssistantText(MessageId, String),
    StreamedAssistantThinking(MessageId, String),
    StreamedToolUse {
        tool_use_id: LanguageModelToolUseId,
        ui_text: Arc<str>,
        input: serde_json::Value,
    },
    Stopped(Result<StopReason, Arc<anyhow::Error>>),
    MessageAdded(MessageId),
    MessageEdited(MessageId),

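A UI entity that subscribes to `Thread` events can pick up the new `StreamedToolUse` variant to render a provisional tool card while the input JSON is still arriving. A minimal sketch, assuming a subscribing view entity and that `Thread` emits `ThreadEvent` (the handler body is illustrative):

```rust
// Inside the view's constructor, alongside its other subscriptions:
let _subscription = cx.subscribe(&thread, |_this, _thread, event, _cx| {
    if let ThreadEvent::StreamedToolUse { tool_use_id, ui_text, input } = event {
        // Show a placeholder card now; once the input is complete the
        // regular tool-use path replaces it with the final label.
        println!("tool {tool_use_id:?} streaming: {ui_text} ({input})");
    }
});
```
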
@@ -24,8 +24,8 @@ use heed::types::SerdeBincode;
use language_model::{LanguageModelToolUseId, Role, TokenUsage};
use project::{Project, Worktree};
use prompt_store::{
    DefaultUserRulesContext, ProjectContext, PromptBuilder, PromptId, PromptStore,
    PromptsUpdatedEvent, RulesFileContext, WorktreeContext,
    ProjectContext, PromptBuilder, PromptId, PromptMetadata, PromptStore, PromptsUpdatedEvent,
    RulesFileContext, UserPromptId, UserRulesContext, WorktreeContext,
};
use serde::{Deserialize, Serialize};
use settings::{Settings as _, SettingsStore};
@@ -62,6 +62,7 @@ pub struct ThreadStore {
    project: Entity<Project>,
    tools: Entity<ToolWorkingSet>,
    prompt_builder: Arc<PromptBuilder>,
    prompt_store: Option<Entity<PromptStore>>,
    context_server_manager: Entity<ContextServerManager>,
    context_server_tool_ids: HashMap<Arc<str>, Vec<ToolId>>,
    threads: Vec<SerializedThreadMetadata>,
@@ -135,6 +136,7 @@ impl ThreadStore {
        let (ready_tx, ready_rx) = oneshot::channel();
        let mut ready_tx = Some(ready_tx);
        let reload_system_prompt_task = cx.spawn({
            let prompt_store = prompt_store.clone();
            async move |thread_store, cx| {
                loop {
                    let Some(reload_task) = thread_store
@@ -158,6 +160,7 @@ impl ThreadStore {
            project,
            tools,
            prompt_builder,
            prompt_store,
            context_server_manager,
            context_server_tool_ids: HashMap::default(),
            threads: Vec::new(),
@@ -245,7 +248,7 @@ impl ThreadStore {
        let default_user_rules = default_user_rules
            .into_iter()
            .flat_map(|(contents, prompt_metadata)| match contents {
                Ok(contents) => Some(DefaultUserRulesContext {
                Ok(contents) => Some(UserRulesContext {
                    uuid: match prompt_metadata.id {
                        PromptId::User { uuid } => uuid,
                        PromptId::EditWorkflow => return None,
@@ -346,6 +349,27 @@ impl ThreadStore {
        self.context_server_manager.clone()
    }

    pub fn prompt_store(&self) -> Option<Entity<PromptStore>> {
        self.prompt_store.clone()
    }

    pub fn load_rules(
        &self,
        prompt_id: UserPromptId,
        cx: &App,
    ) -> Task<Result<(PromptMetadata, String)>> {
        let prompt_id = PromptId::User { uuid: prompt_id };
        let Some(prompt_store) = self.prompt_store.as_ref() else {
            return Task::ready(Err(anyhow!("Prompt store unexpectedly missing.")));
        };
        let prompt_store = prompt_store.read(cx);
        let Some(metadata) = prompt_store.metadata(prompt_id) else {
            return Task::ready(Err(anyhow!("User rules not found in library.")));
        };
        let text_task = prompt_store.load(prompt_id, cx);
        cx.background_spawn(async move { Ok((metadata, text_task.await?)) })
    }

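`load_rules` hands callers the rules metadata and body in a single await. A hedged sketch of consuming it from an entity's update context (the surrounding names are illustrative, not part of this change):

```rust
let load_task = thread_store.read(cx).load_rules(prompt_id, cx);
cx.spawn(async move |_this, _cx| {
    match load_task.await {
        Ok((metadata, text)) => {
            // `metadata.title` is optional; fall back for untitled rules.
            let title = metadata.title.clone().unwrap_or_else(|| "Untitled".into());
            println!("loaded rules {title}: {} bytes", text.len());
        }
        Err(err) => eprintln!("rules unavailable: {err}"),
    }
});
```
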
    pub fn tools(&self) -> Entity<ToolWorkingSet> {
        self.tools.clone()
    }

@@ -7,13 +7,13 @@ use futures::FutureExt as _;
use futures::future::Shared;
use gpui::{App, Entity, SharedString, Task};
use language_model::{
    LanguageModelRegistry, LanguageModelRequestMessage, LanguageModelToolResult,
    LanguageModel, LanguageModelRegistry, LanguageModelRequestMessage, LanguageModelToolResult,
    LanguageModelToolUse, LanguageModelToolUseId, MessageContent, Role,
};
use ui::IconName;
use util::truncate_lines_to_byte_limit;

use crate::thread::MessageId;
use crate::thread::{MessageId, PromptId, ThreadId};
use crate::thread_store::SerializedMessage;

#[derive(Debug)]
@@ -36,6 +36,7 @@ pub struct ToolUseState {
    tool_results: HashMap<LanguageModelToolUseId, LanguageModelToolResult>,
    pending_tool_uses_by_id: HashMap<LanguageModelToolUseId, PendingToolUse>,
    tool_result_cards: HashMap<LanguageModelToolUseId, AnyToolCard>,
    tool_use_metadata_by_id: HashMap<LanguageModelToolUseId, ToolUseMetadata>,
}

impl ToolUseState {
@@ -47,6 +48,7 @@ impl ToolUseState {
            tool_results: HashMap::default(),
            pending_tool_uses_by_id: HashMap::default(),
            tool_result_cards: HashMap::default(),
            tool_use_metadata_by_id: HashMap::default(),
        }
    }

@@ -73,6 +75,7 @@ impl ToolUseState {
                    id: tool_use.id.clone(),
                    name: tool_use.name.clone().into(),
                    input: tool_use.input.clone(),
                    is_input_complete: true,
                })
                .collect::<Vec<_>>();

@@ -174,6 +177,9 @@ impl ToolUseState {
                    PendingToolUseStatus::Error(ref err) => {
                        ToolUseStatus::Error(err.clone().into())
                    }
                    PendingToolUseStatus::InputStillStreaming => {
                        ToolUseStatus::InputStillStreaming
                    }
                }
            } else {
                ToolUseStatus::Pending
@@ -190,7 +196,12 @@ impl ToolUseState {
            tool_uses.push(ToolUse {
                id: tool_use.id.clone(),
                name: tool_use.name.clone().into(),
                ui_text: self.tool_ui_label(&tool_use.name, &tool_use.input, cx),
                ui_text: self.tool_ui_label(
                    &tool_use.name,
                    &tool_use.input,
                    tool_use.is_input_complete,
                    cx,
                ),
                input: tool_use.input.clone(),
                status,
                icon,
@@ -205,10 +216,15 @@ impl ToolUseState {
        &self,
        tool_name: &str,
        input: &serde_json::Value,
        is_input_complete: bool,
        cx: &App,
    ) -> SharedString {
        if let Some(tool) = self.tools.read(cx).tool(tool_name, cx) {
            tool.ui_text(input).into()
            if is_input_complete {
                tool.ui_text(input).into()
            } else {
                tool.still_streaming_ui_text(input).into()
            }
        } else {
            format!("Unknown tool {tool_name:?}").into()
        }
@@ -254,20 +270,52 @@ impl ToolUseState {
        &mut self,
        assistant_message_id: MessageId,
        tool_use: LanguageModelToolUse,
        metadata: ToolUseMetadata,
        cx: &App,
    ) {
        self.tool_uses_by_assistant_message
    ) -> Arc<str> {
        let tool_uses = self
            .tool_uses_by_assistant_message
            .entry(assistant_message_id)
            .or_default()
            .push(tool_use.clone());
            .or_default();

        // The tool use is being requested by the Assistant, so we want to
        // attach the tool results to the next user message.
        let next_user_message_id = MessageId(assistant_message_id.0 + 1);
        self.tool_uses_by_user_message
            .entry(next_user_message_id)
            .or_default()
            .push(tool_use.id.clone());
        let mut existing_tool_use_found = false;

        for existing_tool_use in tool_uses.iter_mut() {
            if existing_tool_use.id == tool_use.id {
                *existing_tool_use = tool_use.clone();
                existing_tool_use_found = true;
            }
        }

        if !existing_tool_use_found {
            tool_uses.push(tool_use.clone());
        }

        let status = if tool_use.is_input_complete {
            self.tool_use_metadata_by_id
                .insert(tool_use.id.clone(), metadata);

            // The tool use is being requested by the Assistant, so we want to
            // attach the tool results to the next user message.
            let next_user_message_id = MessageId(assistant_message_id.0 + 1);
            self.tool_uses_by_user_message
                .entry(next_user_message_id)
                .or_default()
                .push(tool_use.id.clone());

            PendingToolUseStatus::Idle
        } else {
            PendingToolUseStatus::InputStillStreaming
        };

        let ui_text: Arc<str> = self
            .tool_ui_label(
                &tool_use.name,
                &tool_use.input,
                tool_use.is_input_complete,
                cx,
            )
            .into();

        self.pending_tool_uses_by_id.insert(
            tool_use.id.clone(),
@@ -275,13 +323,13 @@ impl ToolUseState {
                assistant_message_id,
                id: tool_use.id,
                name: tool_use.name.clone(),
                ui_text: self
                    .tool_ui_label(&tool_use.name, &tool_use.input, cx)
                    .into(),
                ui_text: ui_text.clone(),
                input: tool_use.input,
                status: PendingToolUseStatus::Idle,
                status,
            },
        );

        ui_text
    }

    pub fn run_pending_tool(
@@ -327,7 +375,21 @@ impl ToolUseState {
        output: Result<String>,
        cx: &App,
    ) -> Option<PendingToolUse> {
        telemetry::event!("Agent Tool Finished", tool_name, success = output.is_ok());
        let metadata = self.tool_use_metadata_by_id.remove(&tool_use_id);

        telemetry::event!(
            "Agent Tool Finished",
            model = metadata
                .as_ref()
                .map(|metadata| metadata.model.telemetry_id()),
            model_provider = metadata
                .as_ref()
                .map(|metadata| metadata.model.provider_id().to_string()),
            thread_id = metadata.as_ref().map(|metadata| metadata.thread_id.clone()),
            prompt_id = metadata.as_ref().map(|metadata| metadata.prompt_id.clone()),
            tool_name,
            success = output.is_ok()
        );

        match output {
            Ok(tool_result) => {
@@ -477,6 +539,7 @@ pub struct Confirmation {

#[derive(Debug, Clone)]
pub enum PendingToolUseStatus {
    InputStillStreaming,
    Idle,
    NeedsConfirmation(Arc<Confirmation>),
    Running { _task: Shared<Task<()>> },
@@ -496,3 +559,10 @@ impl PendingToolUseStatus {
        matches!(self, PendingToolUseStatus::NeedsConfirmation { .. })
    }
}

#[derive(Clone)]
pub struct ToolUseMetadata {
    pub model: Arc<dyn LanguageModel>,
    pub thread_id: ThreadId,
    pub prompt_id: PromptId,
}

@@ -59,7 +59,6 @@ impl AgentNotification {
            app_id: Some(app_id.to_owned()),
            window_min_size: None,
            window_decorations: Some(WindowDecorations::Client),
            ..Default::default()
        }
    }
}

@@ -264,10 +264,14 @@ impl AddedContext {
            }

            AssistantContext::Directory(directory_context) => {
                let full_path = directory_context
                    .worktree
                    .read(cx)
                    .full_path(&directory_context.path);
                let worktree = directory_context.worktree.read(cx);
                // If the directory no longer exists, use its last known path.
                let full_path = worktree
                    .entry_for_id(directory_context.entry_id)
                    .map_or_else(
                        || directory_context.last_path.clone(),
                        |entry| worktree.full_path(&entry.path).into(),
                    );
                let full_path_string: SharedString =
                    full_path.to_string_lossy().into_owned().into();
                let name = full_path
@@ -354,6 +358,16 @@ impl AddedContext {
                    .read(cx)
                    .is_generating_detailed_summary(),
            },

            AssistantContext::Rules(user_rules_context) => AddedContext {
                id: user_rules_context.id,
                kind: ContextKind::Rules,
                name: user_rules_context.title.clone(),
                parent: None,
                tooltip: None,
                icon_path: None,
                summarizing: false,
            },
        }
    }
}

@@ -27,7 +27,7 @@ use language_model::{
};
use project::Project;
use prompt_library::{PromptLibrary, open_prompt_library};
use prompt_store::{PromptBuilder, PromptId};
use prompt_store::{PromptBuilder, PromptId, UserPromptId};

use search::{BufferSearchBar, buffer_search::DivRegistrar};
use settings::{Settings, update_settings_file};
@@ -58,11 +58,11 @@ pub fn init(cx: &mut App) {
        .register_action(AssistantPanel::show_configuration)
        .register_action(AssistantPanel::create_new_context)
        .register_action(AssistantPanel::restart_context_servers)
        .register_action(|workspace, _: &OpenPromptLibrary, window, cx| {
        .register_action(|workspace, action: &OpenPromptLibrary, window, cx| {
            if let Some(panel) = workspace.panel::<AssistantPanel>(cx) {
                workspace.focus_panel::<AssistantPanel>(window, cx);
                panel.update(cx, |panel, cx| {
                    panel.deploy_prompt_library(&OpenPromptLibrary::default(), window, cx)
                    panel.deploy_prompt_library(action, window, cx)
                });
            }
        });
@@ -1060,7 +1060,9 @@ impl AssistantPanel {
                    None,
                ))
            }),
            action.prompt_to_focus.map(|uuid| PromptId::User { uuid }),
            action.prompt_to_select.map(|uuid| PromptId::User {
                uuid: UserPromptId(uuid),
            }),
            cx,
        )
        .detach_and_log_err(cx);

@@ -44,9 +44,10 @@ impl SlashCommand for PromptSlashCommand {
        let store = PromptStore::global(cx);
        let query = arguments.to_owned().join(" ");
        cx.spawn(async move |cx| {
            let cancellation_flag = Arc::new(AtomicBool::default());
            let prompts: Vec<PromptMetadata> = store
                .await?
                .read_with(cx, |store, cx| store.search(query, cx))?
                .read_with(cx, |store, cx| store.search(query, cancellation_flag, cx))?
                .await;
            Ok(prompts
                .into_iter()

@@ -30,6 +30,7 @@ pub fn init(cx: &mut App) {

#[derive(Debug, Clone)]
pub enum ToolUseStatus {
    InputStillStreaming,
    NeedsConfirmation,
    Pending,
    Running,
@@ -41,6 +42,7 @@ impl ToolUseStatus {
    pub fn text(&self) -> SharedString {
        match self {
            ToolUseStatus::NeedsConfirmation => "".into(),
            ToolUseStatus::InputStillStreaming => "".into(),
            ToolUseStatus::Pending => "".into(),
            ToolUseStatus::Running => "".into(),
            ToolUseStatus::Finished(out) => out.clone(),
@@ -148,6 +150,12 @@ pub trait Tool: 'static + Send + Sync {
    /// Returns markdown to be displayed in the UI for this tool.
    fn ui_text(&self, input: &serde_json::Value) -> String;

    /// Returns markdown to be displayed in the UI for this tool, while the input JSON is still streaming
    /// (so information may be missing).
    fn still_streaming_ui_text(&self, input: &serde_json::Value) -> String {
        self.ui_text(input)
    }

    /// Runs the tool with the provided input.
    fn run(
        self: Arc<Self>,

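The default `still_streaming_ui_text` simply falls back to `ui_text`, so only tools that can say something useful from a partial input need to override it. A minimal sketch of such an override for a hypothetical tool whose `command` field may arrive incomplete:

```rust
fn still_streaming_ui_text(&self, input: &serde_json::Value) -> String {
    // While streaming, `input` may be partial JSON; read fields defensively.
    match input.get("command").and_then(|v| v.as_str()) {
        Some(command) if !command.is_empty() => format!("Run `{command}`"),
        _ => "Run command".to_string(),
    }
}
```
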
@@ -33,8 +33,18 @@ pub struct CreateFileToolInput {
    pub contents: String,
}

#[derive(Debug, Serialize, Deserialize, JsonSchema)]
struct PartialInput {
    #[serde(default)]
    path: String,
    #[serde(default)]
    contents: String,
}

pub struct CreateFileTool;

const DEFAULT_UI_TEXT: &str = "Create file";

impl Tool for CreateFileTool {
    fn name(&self) -> String {
        "create_file".into()
@@ -62,7 +72,14 @@ impl Tool for CreateFileTool {
                let path = MarkdownString::inline_code(&input.path);
                format!("Create file {path}")
            }
            Err(_) => "Create file".to_string(),
            Err(_) => DEFAULT_UI_TEXT.to_string(),
        }
    }

    fn still_streaming_ui_text(&self, input: &serde_json::Value) -> String {
        match serde_json::from_value::<PartialInput>(input.clone()).ok() {
            Some(input) if !input.path.is_empty() => input.path,
            _ => DEFAULT_UI_TEXT.to_string(),
        }
    }

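The `PartialInput` mirror of the tool input marks every field `#[serde(default)]`, so a JSON object that is still missing fields mid-stream deserializes cleanly instead of erroring. A self-contained sketch of the same idea (the struct here is illustrative, not the crate's):

```rust
use serde::Deserialize;
use serde_json::json;

#[derive(Debug, Deserialize)]
struct Partial {
    #[serde(default)]
    path: String,
    #[serde(default)]
    contents: String,
}

fn main() {
    // Only `path` has arrived so far; `contents` falls back to "".
    let partial: Partial =
        serde_json::from_value(json!({ "path": "src/main.rs" })).unwrap();
    assert_eq!(partial.path, "src/main.rs");
    assert!(partial.contents.is_empty());
}
```
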
@@ -111,3 +128,60 @@ impl Tool for CreateFileTool {
        .into()
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use serde_json::json;

    #[test]
    fn still_streaming_ui_text_with_path() {
        let tool = CreateFileTool;
        let input = json!({
            "path": "src/main.rs",
            "contents": "fn main() {\n println!(\"Hello, world!\");\n}"
        });

        assert_eq!(tool.still_streaming_ui_text(&input), "src/main.rs");
    }

    #[test]
    fn still_streaming_ui_text_without_path() {
        let tool = CreateFileTool;
        let input = json!({
            "path": "",
            "contents": "fn main() {\n println!(\"Hello, world!\");\n}"
        });

        assert_eq!(tool.still_streaming_ui_text(&input), DEFAULT_UI_TEXT);
    }

    #[test]
    fn still_streaming_ui_text_with_null() {
        let tool = CreateFileTool;
        let input = serde_json::Value::Null;

        assert_eq!(tool.still_streaming_ui_text(&input), DEFAULT_UI_TEXT);
    }

    #[test]
    fn ui_text_with_valid_input() {
        let tool = CreateFileTool;
        let input = json!({
            "path": "src/main.rs",
            "contents": "fn main() {\n println!(\"Hello, world!\");\n}"
        });

        assert_eq!(tool.ui_text(&input), "Create file `src/main.rs`");
    }

    #[test]
    fn ui_text_with_invalid_input() {
        let tool = CreateFileTool;
        let input = json!({
            "invalid": "field"
        });

        assert_eq!(tool.ui_text(&input), DEFAULT_UI_TEXT);
    }
}

@@ -47,8 +47,22 @@ pub struct EditFileToolInput {
    pub new_string: String,
}

#[derive(Debug, Serialize, Deserialize, JsonSchema)]
struct PartialInput {
    #[serde(default)]
    path: String,
    #[serde(default)]
    display_description: String,
    #[serde(default)]
    old_string: String,
    #[serde(default)]
    new_string: String,
}

pub struct EditFileTool;

const DEFAULT_UI_TEXT: &str = "Edit file";

impl Tool for EditFileTool {
    fn name(&self) -> String {
        "edit_file".into()
@@ -77,6 +91,22 @@ impl Tool for EditFileTool {
        }
    }

    fn still_streaming_ui_text(&self, input: &serde_json::Value) -> String {
        if let Some(input) = serde_json::from_value::<PartialInput>(input.clone()).ok() {
            let description = input.display_description.trim();
            if !description.is_empty() {
                return description.to_string();
            }

            let path = input.path.trim();
            if !path.is_empty() {
                return path.to_string();
            }
        }

        DEFAULT_UI_TEXT.to_string()
    }

    fn run(
        self: Arc<Self>,
        input: serde_json::Value,
@@ -181,3 +211,69 @@ impl Tool for EditFileTool {
        }).into()
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use serde_json::json;

    #[test]
    fn still_streaming_ui_text_with_path() {
        let tool = EditFileTool;
        let input = json!({
            "path": "src/main.rs",
            "display_description": "",
            "old_string": "old code",
            "new_string": "new code"
        });

        assert_eq!(tool.still_streaming_ui_text(&input), "src/main.rs");
    }

    #[test]
    fn still_streaming_ui_text_with_description() {
        let tool = EditFileTool;
        let input = json!({
            "path": "",
            "display_description": "Fix error handling",
            "old_string": "old code",
            "new_string": "new code"
        });

        assert_eq!(tool.still_streaming_ui_text(&input), "Fix error handling");
    }

    #[test]
    fn still_streaming_ui_text_with_path_and_description() {
        let tool = EditFileTool;
        let input = json!({
            "path": "src/main.rs",
            "display_description": "Fix error handling",
            "old_string": "old code",
            "new_string": "new code"
        });

        assert_eq!(tool.still_streaming_ui_text(&input), "Fix error handling");
    }

    #[test]
    fn still_streaming_ui_text_no_path_or_description() {
        let tool = EditFileTool;
        let input = json!({
            "path": "",
            "display_description": "",
            "old_string": "old code",
            "new_string": "new code"
        });

        assert_eq!(tool.still_streaming_ui_text(&input), DEFAULT_UI_TEXT);
    }

    #[test]
    fn still_streaming_ui_text_with_null() {
        let tool = EditFileTool;
        let input = serde_json::Value::Null;

        assert_eq!(tool.still_streaming_ui_text(&input), DEFAULT_UI_TEXT);
    }
}

@@ -1,268 +0,0 @@
use crate::{replace::replace_with_flexible_indent, schema::json_schema_for};
use anyhow::{Context as _, Result, anyhow};
use assistant_tool::{ActionLog, Tool, ToolResult};
use gpui::{App, AppContext, AsyncApp, Entity, Task};
use language_model::{LanguageModelRequestMessage, LanguageModelToolSchemaFormat};
use project::Project;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use std::{path::PathBuf, sync::Arc};
use ui::IconName;

use crate::replace::replace_exact;

#[derive(Debug, Serialize, Deserialize, JsonSchema)]
pub struct FindReplaceFileToolInput {
    /// The path of the file to modify.
    ///
    /// WARNING: When specifying which file path need changing, you MUST
    /// start each path with one of the project's root directories.
    ///
    /// The following examples assume we have two root directories in the project:
    /// - backend
    /// - frontend
    ///
    /// <example>
    /// `backend/src/main.rs`
    ///
    /// Notice how the file path starts with root-1. Without that, the path
    /// would be ambiguous and the call would fail!
    /// </example>
    ///
    /// <example>
    /// `frontend/db.js`
    /// </example>
    pub path: PathBuf,

    /// A user-friendly markdown description of what's being replaced. This will be shown in the UI.
    ///
    /// <example>Fix API endpoint URLs</example>
    /// <example>Update copyright year in `page_footer`</example>
    pub display_description: String,

    /// The unique string to find in the file. This string cannot be empty;
    /// if the string is empty, the tool call will fail. Remember, do not use this tool
    /// to create new files from scratch, or to overwrite existing files! Use a different
    /// approach if you want to do that.
    ///
    /// If this string appears more than once in the file, this tool call will fail,
    /// so it is absolutely critical that you verify ahead of time that the string
    /// is unique. You can search within the file to verify this.
    ///
    /// To make the string more likely to be unique, include a minimum of 3 lines of context
    /// before the string you actually want to find, as well as a minimum of 3 lines of
    /// context after the string you want to find. (These lines of context should appear
    /// in the `replace` string as well.) If 3 lines of context is not enough to obtain
    /// a string that appears only once in the file, then double the number of context lines
    /// until the string becomes unique. (Start with 3 lines before and 3 lines after
    /// though, because too much context is needlessly costly.)
    ///
    /// Do not alter the context lines of code in any way, and make sure to preserve all
    /// whitespace and indentation for all lines of code. This string must be exactly as
    /// it appears in the file, because this tool will do a literal find/replace, and if
    /// even one character in this string is different in any way from how it appears
    /// in the file, then the tool call will fail.
    ///
    /// If you get an error that the `find` string was not found, this means that either
    /// you made a mistake, or that the file has changed since you last looked at it.
    /// Either way, when this happens, you should retry doing this tool call until it
    /// succeeds, up to 3 times. Each time you retry, you should take another look at
    /// the exact text of the file in question, to make sure that you are searching for
    /// exactly the right string. Regardless of whether it was because you made a mistake
    /// or because the file changed since you last looked at it, you should be extra
    /// careful when retrying in this way. It's a bad experience for the user if
    /// this `find` string isn't found, so be super careful to get it exactly right!
    ///
    /// <example>
    /// If a file contains this code:
    ///
    /// ```ignore
    /// fn check_user_permissions(user_id: &str) -> Result<bool> {
    ///     // Check if user exists first
    ///     let user = database.find_user(user_id)?;
    ///
    ///     // This is the part we want to modify
    ///     if user.role == "admin" {
    ///         return Ok(true);
    ///     }
    ///
    ///     // Check other permissions
    ///     check_custom_permissions(user_id)
    /// }
    /// ```
    ///
    /// Your find string should include at least 3 lines of context before and after the part
    /// you want to change:
    ///
    /// ```ignore
    /// fn check_user_permissions(user_id: &str) -> Result<bool> {
    ///     // Check if user exists first
    ///     let user = database.find_user(user_id)?;
    ///
    ///     // This is the part we want to modify
    ///     if user.role == "admin" {
    ///         return Ok(true);
    ///     }
    ///
    ///     // Check other permissions
    ///     check_custom_permissions(user_id)
    /// }
    /// ```
    ///
    /// And your replace string might look like:
    ///
    /// ```ignore
    /// fn check_user_permissions(user_id: &str) -> Result<bool> {
    ///     // Check if user exists first
    ///     let user = database.find_user(user_id)?;
    ///
    ///     // This is the part we want to modify
    ///     if user.role == "admin" || user.role == "superuser" {
    ///         return Ok(true);
    ///     }
    ///
    ///     // Check other permissions
    ///     check_custom_permissions(user_id)
    /// }
    /// ```
    /// </example>
    pub find: String,

    /// The string to replace the one unique occurrence of the find string with.
    pub replace: String,
}

pub struct FindReplaceFileTool;

impl Tool for FindReplaceFileTool {
    fn name(&self) -> String {
        "find_replace_file".into()
    }

    fn needs_confirmation(&self, _: &serde_json::Value, _: &App) -> bool {
        false
    }

    fn description(&self) -> String {
        include_str!("find_replace_tool/description.md").to_string()
    }

    fn icon(&self) -> IconName {
        IconName::Pencil
    }

    fn input_schema(&self, format: LanguageModelToolSchemaFormat) -> Result<serde_json::Value> {
        json_schema_for::<FindReplaceFileToolInput>(format)
    }

    fn ui_text(&self, input: &serde_json::Value) -> String {
        match serde_json::from_value::<FindReplaceFileToolInput>(input.clone()) {
            Ok(input) => input.display_description,
            Err(_) => "Edit file".to_string(),
        }
    }

    fn run(
        self: Arc<Self>,
        input: serde_json::Value,
        _messages: &[LanguageModelRequestMessage],
        project: Entity<Project>,
        action_log: Entity<ActionLog>,
        cx: &mut App,
    ) -> ToolResult {
        let input = match serde_json::from_value::<FindReplaceFileToolInput>(input) {
            Ok(input) => input,
            Err(err) => return Task::ready(Err(anyhow!(err))).into(),
        };

        cx.spawn(async move |cx: &mut AsyncApp| {
            let project_path = project.read_with(cx, |project, cx| {
                project
                    .find_project_path(&input.path, cx)
                    .context("Path not found in project")
            })??;

            let buffer = project
                .update(cx, |project, cx| project.open_buffer(project_path, cx))?
                .await?;

            let snapshot = buffer.read_with(cx, |buffer, _cx| buffer.snapshot())?;

            if input.find.is_empty() {
                return Err(anyhow!("`find` string cannot be empty. Use a different tool if you want to create a file."));
            }

            if input.find == input.replace {
                return Err(anyhow!("The `find` and `replace` strings are identical, so no changes would be made."));
            }

            let result = cx
                .background_spawn(async move {
                    // Try to match exactly
                    let diff = replace_exact(&input.find, &input.replace, &snapshot)
                        .await
                        // If that fails, try being flexible about indentation
                        .or_else(|| replace_with_flexible_indent(&input.find, &input.replace, &snapshot))?;

                    if diff.edits.is_empty() {
                        return None;
                    }

                    let old_text = snapshot.text();

                    Some((old_text, diff))
                })
                .await;

            let Some((old_text, diff)) = result else {
                let err = buffer.read_with(cx, |buffer, _cx| {
                    let file_exists = buffer
                        .file()
                        .map_or(false, |file| file.disk_state().exists());

                    if !file_exists {
                        anyhow!("{} does not exist", input.path.display())
                    } else if buffer.is_empty() {
                        anyhow!(
                            "{} is empty, so the provided `find` string wasn't found.",
                            input.path.display()
                        )
                    } else {
                        anyhow!("Failed to match the provided `find` string")
                    }
                })?;

                return Err(err);
            };

            let snapshot = cx.update(|cx| {
                action_log.update(cx, |log, cx| {
                    log.buffer_read(buffer.clone(), cx)
                });
                let snapshot = buffer.update(cx, |buffer, cx| {
                    buffer.finalize_last_transaction();
                    buffer.apply_diff(diff, cx);
                    buffer.finalize_last_transaction();
                    buffer.snapshot()
                });
                action_log.update(cx, |log, cx| {
                    log.buffer_edited(buffer.clone(), cx)
                });
                snapshot
            })?;

            project.update(cx, |project, cx| {
                project.save_buffer(buffer, cx)
            })?.await?;

            let diff_str = cx.background_spawn(async move {
                let new_text = snapshot.text();
                language::unified_diff(&old_text, &new_text)
            }).await;

            Ok(format!("Edited {}:\n\n```diff\n{}\n```", input.path.display(), diff_str))
        }).into()
    }
}

@@ -1,7 +1,7 @@
use crate::tests::TestServer;
use call::ActiveCall;
use collections::{HashMap, HashSet};
use dap::DapRegistry;

use extension::ExtensionHostProxy;
use fs::{FakeFs, Fs as _, RemoveOptions};
use futures::StreamExt as _;
@@ -86,7 +86,6 @@ async fn test_sharing_an_ssh_remote_project(
            http_client: remote_http_client,
            node_runtime: node,
            languages,
            debug_adapters: Arc::new(DapRegistry::fake()),
            extension_host_proxy: Arc::new(ExtensionHostProxy::new()),
        },
        cx,
@@ -254,7 +253,6 @@ async fn test_ssh_collaboration_git_branches(
            http_client: remote_http_client,
            node_runtime: node,
            languages,
            debug_adapters: Arc::new(DapRegistry::fake()),
            extension_host_proxy: Arc::new(ExtensionHostProxy::new()),
        },
        cx,
@@ -460,7 +458,6 @@ async fn test_ssh_collaboration_formatting_with_prettier(
            http_client: remote_http_client,
            node_runtime: NodeRuntime::unavailable(),
            languages,
            debug_adapters: Arc::new(DapRegistry::fake()),
            extension_host_proxy: Arc::new(ExtensionHostProxy::new()),
        },
        cx,

@@ -14,7 +14,7 @@ use client::{
use clock::FakeSystemClock;
use collab_ui::channel_view::ChannelView;
use collections::{HashMap, HashSet};
use dap::DapRegistry;

use fs::FakeFs;
use futures::{StreamExt as _, channel::oneshot};
use git::GitHostingProviderRegistry;
@@ -275,14 +275,12 @@ impl TestServer {
        let user_store = cx.new(|cx| UserStore::new(client.clone(), cx));
        let workspace_store = cx.new(|cx| WorkspaceStore::new(client.clone(), cx));
        let language_registry = Arc::new(LanguageRegistry::test(cx.executor()));
        let debug_adapters = Arc::new(DapRegistry::default());
        let session = cx.new(|cx| AppSession::new(Session::test(), cx));
        let app_state = Arc::new(workspace::AppState {
            client: client.clone(),
            user_store: user_store.clone(),
            workspace_store,
            languages: language_registry,
            debug_adapters,
            fs: fs.clone(),
            build_window_options: |_, _| Default::default(),
            node_runtime: NodeRuntime::unavailable(),
@@ -798,7 +796,6 @@ impl TestClient {
            self.app_state.node_runtime.clone(),
            self.app_state.user_store.clone(),
            self.app_state.languages.clone(),
            self.app_state.debug_adapters.clone(),
            self.app_state.fs.clone(),
            None,
            cx,

@@ -66,6 +66,5 @@ fn notification_window_options(
        app_id: Some(app_id.to_owned()),
        window_min_size: None,
        window_decorations: Some(WindowDecorations::Client),
        ..Default::default()
    }
}

@@ -166,6 +166,9 @@ impl ComponentPreview {

        component_preview.update_component_list(cx);

        let focus_handle = component_preview.filter_editor.read(cx).focus_handle(cx);
        window.focus(&focus_handle);

        component_preview
    }

@@ -779,10 +782,13 @@ impl Item for ComponentPreview {
    fn added_to_workspace(
        &mut self,
        workspace: &mut Workspace,
        _window: &mut Window,
        _cx: &mut Context<Self>,
        window: &mut Window,
        cx: &mut Context<Self>,
    ) {
        self.workspace_id = workspace.database_id();

        let focus_handle = self.filter_editor.read(cx).focus_handle(cx);
        window.focus(&focus_handle);
    }
}

@@ -39,6 +39,7 @@ log.workspace = true
node_runtime.workspace = true
parking_lot.workspace = true
paths.workspace = true
proto.workspace = true
schemars.workspace = true
serde.workspace = true
serde_json.workspace = true

@@ -3,7 +3,8 @@ use anyhow::{Context as _, Result, anyhow};
use async_compression::futures::bufread::GzipDecoder;
use async_tar::Archive;
use async_trait::async_trait;
use dap_types::StartDebuggingRequestArguments;
use collections::HashMap;
use dap_types::{StartDebuggingRequestArguments, StartDebuggingRequestArgumentsRequest};
use futures::io::BufReader;
use gpui::{AsyncApp, SharedString};
pub use http_client::{HttpClient, github::latest_github_release};
@@ -13,16 +14,10 @@ use serde::{Deserialize, Serialize};
use settings::WorktreeId;
use smol::{self, fs::File, lock::Mutex};
use std::{
    borrow::Borrow,
    collections::{HashMap, HashSet},
    ffi::{OsStr, OsString},
    fmt::Debug,
    net::Ipv4Addr,
    ops::Deref,
    path::PathBuf,
    sync::Arc,
    borrow::Borrow, collections::HashSet, ffi::OsStr, fmt::Debug, net::Ipv4Addr, ops::Deref,
    path::PathBuf, sync::Arc,
};
use task::DebugTaskDefinition;
use task::{DebugTaskDefinition, TcpArgumentsTemplate};
use util::ResultExt;

#[derive(Clone, Debug, PartialEq, Eq)]
@@ -93,17 +88,91 @@ pub struct TcpArguments {
    pub port: u16,
    pub timeout: Option<u64>,
}

impl TcpArguments {
    pub fn from_proto(proto: proto::TcpHost) -> anyhow::Result<Self> {
        let host = TcpArgumentsTemplate::from_proto(proto)?;
        Ok(TcpArguments {
            host: host.host.ok_or_else(|| anyhow!("missing host"))?,
            port: host.port.ok_or_else(|| anyhow!("missing port"))?,
            timeout: host.timeout,
        })
    }

    pub fn to_proto(&self) -> proto::TcpHost {
        TcpArgumentsTemplate {
            host: Some(self.host),
            port: Some(self.port),
            timeout: self.timeout,
        }
        .to_proto()
    }
}

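`from_proto`/`to_proto` deliberately reuse `TcpArgumentsTemplate` as the wire shape, with the template's optional host and port required on the concrete type. A hedged round-trip sketch (localhost values are illustrative):

```rust
use std::net::Ipv4Addr;

fn roundtrip() -> anyhow::Result<()> {
    let args = TcpArguments {
        host: Ipv4Addr::LOCALHOST,
        port: 4711,
        timeout: Some(5_000),
    };
    // Encode to the shared proto::TcpHost message and decode it back.
    let decoded = TcpArguments::from_proto(args.to_proto())?;
    assert_eq!(decoded.port, 4711);
    Ok(())
}
```
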
#[derive(Debug, Clone)]
pub struct DebugAdapterBinary {
    pub adapter_name: DebugAdapterName,
    pub command: String,
    pub arguments: Option<Vec<OsString>>,
    pub envs: Option<HashMap<String, String>>,
    pub arguments: Vec<String>,
    pub envs: HashMap<String, String>,
    pub cwd: Option<PathBuf>,
    pub connection: Option<TcpArguments>,
    pub request_args: StartDebuggingRequestArguments,
}

impl DebugAdapterBinary {
    pub fn from_proto(binary: proto::DebugAdapterBinary) -> anyhow::Result<Self> {
        let request = match binary.launch_type() {
            proto::debug_adapter_binary::LaunchType::Launch => {
                StartDebuggingRequestArgumentsRequest::Launch
            }
            proto::debug_adapter_binary::LaunchType::Attach => {
                StartDebuggingRequestArgumentsRequest::Attach
            }
        };

        Ok(DebugAdapterBinary {
            command: binary.command,
            arguments: binary.arguments,
            envs: binary.envs.into_iter().collect(),
            connection: binary
                .connection
                .map(TcpArguments::from_proto)
                .transpose()?,
            request_args: StartDebuggingRequestArguments {
                configuration: serde_json::from_str(&binary.configuration)?,
                request,
            },
            cwd: binary.cwd.map(|cwd| cwd.into()),
        })
    }

    pub fn to_proto(&self) -> proto::DebugAdapterBinary {
        proto::DebugAdapterBinary {
            command: self.command.clone(),
            arguments: self.arguments.clone(),
            envs: self
                .envs
                .iter()
                .map(|(k, v)| (k.clone(), v.clone()))
                .collect(),
            cwd: self
                .cwd
                .as_ref()
                .map(|cwd| cwd.to_string_lossy().to_string()),
            connection: self.connection.as_ref().map(|c| c.to_proto()),
            launch_type: match self.request_args.request {
                StartDebuggingRequestArgumentsRequest::Launch => {
                    proto::debug_adapter_binary::LaunchType::Launch.into()
                }
                StartDebuggingRequestArgumentsRequest::Attach => {
                    proto::debug_adapter_binary::LaunchType::Attach.into()
                }
            },
            configuration: self.request_args.configuration.to_string(),
        }
    }
}

#[derive(Debug)]
pub struct AdapterVersion {
    pub tag_name: String,
@@ -318,22 +387,22 @@ impl FakeAdapter {

    fn request_args(&self, config: &DebugTaskDefinition) -> StartDebuggingRequestArguments {
        use serde_json::json;
        use task::DebugRequestType;
        use task::DebugRequest;

        let value = json!({
            "request": match config.request {
                DebugRequestType::Launch(_) => "launch",
                DebugRequestType::Attach(_) => "attach",
                DebugRequest::Launch(_) => "launch",
                DebugRequest::Attach(_) => "attach",
            },
            "process_id": if let DebugRequestType::Attach(attach_config) = &config.request {
            "process_id": if let DebugRequest::Attach(attach_config) = &config.request {
                attach_config.process_id
            } else {
                None
            },
        });
        let request = match config.request {
            DebugRequestType::Launch(_) => dap_types::StartDebuggingRequestArgumentsRequest::Launch,
            DebugRequestType::Attach(_) => dap_types::StartDebuggingRequestArgumentsRequest::Attach,
            DebugRequest::Launch(_) => dap_types::StartDebuggingRequestArgumentsRequest::Launch,
            DebugRequest::Attach(_) => dap_types::StartDebuggingRequestArgumentsRequest::Attach,
        };
        StartDebuggingRequestArguments {
            configuration: value,
@@ -357,11 +426,10 @@ impl DebugAdapter for FakeAdapter {
        _: &mut AsyncApp,
    ) -> Result<DebugAdapterBinary> {
        Ok(DebugAdapterBinary {
            adapter_name: Self::ADAPTER_NAME.into(),
            command: "command".into(),
            arguments: None,
            arguments: vec![],
            connection: None,
            envs: None,
            envs: HashMap::default(),
            cwd: None,
            request_args: self.request_args(config),
        })

@@ -1,5 +1,5 @@
use crate::{
    adapters::{DebugAdapterBinary, DebugAdapterName},
    adapters::DebugAdapterBinary,
    transport::{IoKind, LogKind, TransportDelegate},
};
use anyhow::{Result, anyhow};
@@ -88,7 +88,6 @@ impl DebugAdapterClient {
    ) -> Result<Self> {
        let binary = match self.transport_delegate.transport() {
            crate::transport::Transport::Tcp(tcp_transport) => DebugAdapterBinary {
                adapter_name: binary.adapter_name,
                command: binary.command,
                arguments: binary.arguments,
                envs: binary.envs,
@@ -219,9 +218,6 @@ impl DebugAdapterClient {
        self.id
    }

    pub fn name(&self) -> DebugAdapterName {
        self.binary.adapter_name.clone()
    }
    pub fn binary(&self) -> &DebugAdapterBinary {
        &self.binary
    }
@@ -322,7 +318,6 @@ mod tests {
        let client = DebugAdapterClient::start(
            crate::client::SessionId(1),
            DebugAdapterBinary {
                adapter_name: "adapter".into(),
                command: "command".into(),
                arguments: Default::default(),
                envs: Default::default(),
@@ -393,7 +388,6 @@ mod tests {
        let client = DebugAdapterClient::start(
            crate::client::SessionId(1),
            DebugAdapterBinary {
                adapter_name: "adapter".into(),
                command: "command".into(),
                arguments: Default::default(),
                envs: Default::default(),
@@ -447,7 +441,6 @@ mod tests {
        let client = DebugAdapterClient::start(
            crate::client::SessionId(1),
            DebugAdapterBinary {
                adapter_name: "test-adapter".into(),
                command: "command".into(),
                arguments: Default::default(),
                envs: Default::default(),

@@ -7,7 +7,7 @@ pub mod transport;

pub use dap_types::*;
pub use registry::DapRegistry;
pub use task::DebugRequestType;
pub use task::DebugRequest;

pub type ScopeId = u64;
pub type VariableReference = u64;

@@ -1,3 +1,4 @@
use gpui::{App, Global};
use parking_lot::RwLock;

use crate::adapters::{DebugAdapter, DebugAdapterName};
@@ -11,8 +12,20 @@ struct DapRegistryState {
#[derive(Clone, Default)]
/// Stores available debug adapters.
pub struct DapRegistry(Arc<RwLock<DapRegistryState>>);
impl Global for DapRegistry {}

impl DapRegistry {
    pub fn global(cx: &mut App) -> &mut Self {
        let ret = cx.default_global::<Self>();

        #[cfg(any(test, feature = "test-support"))]
        if ret.adapter(crate::FakeAdapter::ADAPTER_NAME).is_none() {
            ret.add_adapter(Arc::new(crate::FakeAdapter::new()));
        }

        ret
    }

    pub fn add_adapter(&self, adapter: Arc<dyn DebugAdapter>) {
        let name = adapter.name();
        let _previous_value = self.0.write().adapters.insert(name, adapter);
@@ -21,19 +34,12 @@ impl DapRegistry {
            "Attempted to insert a new debug adapter when one is already registered"
        );
    }

    pub fn adapter(&self, name: &str) -> Option<Arc<dyn DebugAdapter>> {
        self.0.read().adapters.get(name).cloned()
    }

    pub fn enumerate_adapters(&self) -> Vec<DebugAdapterName> {
        self.0.read().adapters.keys().cloned().collect()
    }

    #[cfg(any(test, feature = "test-support"))]
    pub fn fake() -> Self {
        use crate::FakeAdapter;

        let register = Self::default();
        register.add_adapter(Arc::new(FakeAdapter::new()));
        register
    }
}

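With `DapRegistry` now a GPUI global, call sites fetch it through the `App` instead of threading an `Arc` around. A minimal sketch under stated assumptions (`MyDebugAdapter` is a hypothetical `DebugAdapter` implementation):

```rust
use std::sync::Arc;
use gpui::App;

fn register_adapters(cx: &mut App) {
    // `global` lazily creates the registry and, in test builds,
    // pre-seeds it with the fake adapter.
    let registry = DapRegistry::global(cx);
    registry.add_adapter(Arc::new(MyDebugAdapter)); // hypothetical adapter
}
```
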
@@ -21,7 +21,7 @@ use std::{
    sync::Arc,
    time::Duration,
};
use task::TCPHost;
use task::TcpArgumentsTemplate;
use util::ResultExt as _;

use crate::{adapters::DebugAdapterBinary, debugger_settings::DebuggerSettings};
@@ -74,16 +74,14 @@ pub enum Transport {
}

impl Transport {
    #[cfg(any(test, feature = "test-support"))]
    async fn start(_: &DebugAdapterBinary, cx: AsyncApp) -> Result<(TransportPipe, Self)> {
        return FakeTransport::start(cx)
            .await
            .map(|(transports, fake)| (transports, Self::Fake(fake)));
    }

    #[cfg(not(any(test, feature = "test-support")))]
    async fn start(binary: &DebugAdapterBinary, cx: AsyncApp) -> Result<(TransportPipe, Self)> {
        #[cfg(any(test, feature = "test-support"))]
        if cfg!(any(test, feature = "test-support")) {
            return FakeTransport::start(cx)
                .await
                .map(|(transports, fake)| (transports, Self::Fake(fake)));
        }

        if binary.connection.is_some() {
            TcpTransport::start(binary, cx)
                .await
@@ -520,18 +518,21 @@ pub struct TcpTransport {

impl TcpTransport {
    /// Get an open port to use with the tcp client when not supplied by debug config
    pub async fn port(host: &TCPHost) -> Result<u16> {
    pub async fn port(host: &TcpArgumentsTemplate) -> Result<u16> {
        if let Some(port) = host.port {
            Ok(port)
        } else {
            Ok(TcpListener::bind(SocketAddrV4::new(host.host(), 0))
                .await?
                .local_addr()?
                .port())
            Self::unused_port(host.host()).await
        }
    }

    #[allow(dead_code, reason = "This is used in non test builds of Zed")]
    pub async fn unused_port(host: Ipv4Addr) -> Result<u16> {
        Ok(TcpListener::bind(SocketAddrV4::new(host, 0))
            .await?
            .local_addr()?
            .port())
    }

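Extracting `unused_port` gives non-test callers a way to probe for a free port before spawning the adapter. A hedged usage sketch inside an async, `anyhow::Result`-returning context:

```rust
use std::net::Ipv4Addr;

// Bind to port 0 and let the OS pick; useful before launching a DAP server.
let port = TcpTransport::unused_port(Ipv4Addr::LOCALHOST).await?;
println!("DAP adapter will listen on 127.0.0.1:{port}");
```
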
async fn start(binary: &DebugAdapterBinary, cx: AsyncApp) -> Result<(TransportPipe, Self)> {
|
||||
let Some(connection_args) = binary.connection.as_ref() else {
|
||||
return Err(anyhow!("No connection arguments provided"));
|
||||
@@ -546,13 +547,8 @@ impl TcpTransport {
|
||||
command.current_dir(cwd);
|
||||
}
|
||||
|
||||
if let Some(args) = &binary.arguments {
|
||||
command.args(args);
|
||||
}
|
||||
|
||||
if let Some(envs) = &binary.envs {
|
||||
command.envs(envs);
|
||||
}
|
||||
command.args(&binary.arguments);
|
||||
command.envs(&binary.envs);
|
||||
|
||||
command
|
||||
.stdin(Stdio::null())
|
||||
@@ -635,13 +631,8 @@ impl StdioTransport {
|
||||
command.current_dir(cwd);
|
||||
}
|
||||
|
||||
if let Some(args) = &binary.arguments {
|
||||
command.args(args);
|
||||
}
|
||||
|
||||
if let Some(envs) = &binary.envs {
|
||||
command.envs(envs);
|
||||
}
|
||||
command.args(&binary.arguments);
|
||||
command.envs(&binary.envs);
|
||||
|
||||
command
|
||||
.stdin(Stdio::piped())
|
||||
|
||||
@@ -1,10 +1,10 @@
|
||||
use std::{path::PathBuf, sync::OnceLock};
|
||||
use std::{collections::HashMap, path::PathBuf, sync::OnceLock};
|
||||
|
||||
use anyhow::{Result, bail};
|
||||
use async_trait::async_trait;
|
||||
use dap::adapters::latest_github_release;
|
||||
use gpui::AsyncApp;
|
||||
use task::{DebugRequestType, DebugTaskDefinition};
|
||||
use task::{DebugRequest, DebugTaskDefinition};
|
||||
|
||||
use crate::*;
|
||||
|
||||
@@ -19,8 +19,8 @@ impl CodeLldbDebugAdapter {
|
||||
fn request_args(&self, config: &DebugTaskDefinition) -> dap::StartDebuggingRequestArguments {
|
||||
let mut configuration = json!({
|
||||
"request": match config.request {
|
||||
DebugRequestType::Launch(_) => "launch",
|
||||
DebugRequestType::Attach(_) => "attach",
|
||||
DebugRequest::Launch(_) => "launch",
|
||||
DebugRequest::Attach(_) => "attach",
|
||||
},
|
||||
});
|
||||
let map = configuration.as_object_mut().unwrap();
|
||||
@@ -28,10 +28,10 @@ impl CodeLldbDebugAdapter {
|
||||
map.insert("name".into(), Value::String(config.label.clone()));
|
||||
let request = config.request.to_dap();
|
||||
match &config.request {
|
||||
DebugRequestType::Attach(attach) => {
|
||||
DebugRequest::Attach(attach) => {
|
||||
map.insert("pid".into(), attach.process_id.into());
|
||||
}
|
||||
DebugRequestType::Launch(launch) => {
|
||||
DebugRequest::Launch(launch) => {
|
||||
map.insert("program".into(), launch.program.clone().into());
|
||||
|
||||
if !launch.args.is_empty() {
|
||||
@@ -140,16 +140,13 @@ impl DebugAdapter for CodeLldbDebugAdapter {
|
||||
.ok_or_else(|| anyhow!("Adapter path is expected to be valid UTF-8"))?;
|
||||
Ok(DebugAdapterBinary {
|
||||
command,
|
||||
cwd: Some(adapter_dir),
|
||||
arguments: Some(vec![
|
||||
cwd: None,
|
||||
arguments: vec![
|
||||
"--settings".into(),
|
||||
json!({"sourceLanguages": ["cpp", "rust"]})
|
||||
.to_string()
|
||||
.into(),
|
||||
]),
|
||||
json!({"sourceLanguages": ["cpp", "rust"]}).to_string(),
|
||||
],
|
||||
request_args: self.request_args(config),
|
||||
adapter_name: "test".into(),
|
||||
envs: None,
|
||||
envs: HashMap::default(),
|
||||
connection: None,
|
||||
})
|
||||
}
|
||||
|
||||
@@ -11,7 +11,7 @@ use anyhow::{Result, anyhow};
|
||||
use async_trait::async_trait;
|
||||
use codelldb::CodeLldbDebugAdapter;
|
||||
use dap::{
|
||||
DapRegistry, DebugRequestType,
|
||||
DapRegistry, DebugRequest,
|
||||
adapters::{
|
||||
self, AdapterVersion, DapDelegate, DebugAdapter, DebugAdapterBinary, DebugAdapterName,
|
||||
GithubRepo,
|
||||
@@ -19,23 +19,26 @@ use dap::{
|
||||
};
|
||||
use gdb::GdbDebugAdapter;
|
||||
use go::GoDebugAdapter;
|
||||
use gpui::{App, BorrowAppContext};
|
||||
use javascript::JsDebugAdapter;
|
||||
use php::PhpDebugAdapter;
|
||||
use python::PythonDebugAdapter;
|
||||
use serde_json::{Value, json};
|
||||
use task::TCPHost;
|
||||
use task::TcpArgumentsTemplate;
|
||||
|
||||
pub fn init(registry: Arc<DapRegistry>) {
|
||||
registry.add_adapter(Arc::from(CodeLldbDebugAdapter::default()));
|
||||
registry.add_adapter(Arc::from(PythonDebugAdapter));
|
||||
registry.add_adapter(Arc::from(PhpDebugAdapter));
|
||||
registry.add_adapter(Arc::from(JsDebugAdapter));
|
||||
registry.add_adapter(Arc::from(GoDebugAdapter));
|
||||
registry.add_adapter(Arc::from(GdbDebugAdapter));
|
||||
pub fn init(cx: &mut App) {
|
||||
cx.update_default_global(|registry: &mut DapRegistry, _cx| {
|
||||
registry.add_adapter(Arc::from(CodeLldbDebugAdapter::default()));
|
||||
registry.add_adapter(Arc::from(PythonDebugAdapter));
|
||||
registry.add_adapter(Arc::from(PhpDebugAdapter));
|
||||
registry.add_adapter(Arc::from(JsDebugAdapter));
|
||||
registry.add_adapter(Arc::from(GoDebugAdapter));
|
||||
registry.add_adapter(Arc::from(GdbDebugAdapter));
|
||||
})
|
||||
}
|
||||
|
||||
pub(crate) async fn configure_tcp_connection(
|
||||
tcp_connection: TCPHost,
|
||||
tcp_connection: TcpArgumentsTemplate,
|
||||
) -> Result<(Ipv4Addr, u16, Option<u64>)> {
|
||||
let host = tcp_connection.host();
|
||||
let timeout = tcp_connection.timeout;
|
||||
@@ -53,7 +56,7 @@ trait ToDap {
|
||||
fn to_dap(&self) -> dap::StartDebuggingRequestArgumentsRequest;
|
||||
}
|
||||
|
||||
impl ToDap for DebugRequestType {
|
||||
impl ToDap for DebugRequest {
|
||||
fn to_dap(&self) -> dap::StartDebuggingRequestArgumentsRequest {
|
||||
match self {
|
||||
Self::Launch(_) => dap::StartDebuggingRequestArgumentsRequest::Launch,
|
||||
|
||||
@@ -1,10 +1,10 @@
|
||||
use std::ffi::OsStr;
|
||||
use std::{collections::HashMap, ffi::OsStr};
|
||||
|
||||
use anyhow::{Result, bail};
|
||||
use async_trait::async_trait;
|
||||
use dap::StartDebuggingRequestArguments;
|
||||
use gpui::AsyncApp;
|
||||
use task::{DebugRequestType, DebugTaskDefinition};
|
||||
use task::{DebugRequest, DebugTaskDefinition};
|
||||
|
||||
use crate::*;
|
||||
|
||||
@@ -17,18 +17,18 @@ impl GdbDebugAdapter {
|
||||
fn request_args(&self, config: &DebugTaskDefinition) -> StartDebuggingRequestArguments {
|
||||
let mut args = json!({
|
||||
"request": match config.request {
|
||||
DebugRequestType::Launch(_) => "launch",
|
||||
DebugRequestType::Attach(_) => "attach",
|
||||
DebugRequest::Launch(_) => "launch",
|
||||
DebugRequest::Attach(_) => "attach",
|
||||
},
|
||||
});
|
||||
|
||||
let map = args.as_object_mut().unwrap();
|
||||
match &config.request {
|
||||
DebugRequestType::Attach(attach) => {
|
||||
DebugRequest::Attach(attach) => {
|
||||
map.insert("pid".into(), attach.process_id.into());
|
||||
}
|
||||
|
||||
DebugRequestType::Launch(launch) => {
|
||||
DebugRequest::Launch(launch) => {
|
||||
map.insert("program".into(), launch.program.clone().into());
|
||||
|
||||
if !launch.args.is_empty() {
|
||||
@@ -82,10 +82,9 @@ impl DebugAdapter for GdbDebugAdapter {
|
||||
let gdb_path = user_setting_path.unwrap_or(gdb_path?);
|
||||
|
||||
Ok(DebugAdapterBinary {
|
||||
adapter_name: Self::ADAPTER_NAME.into(),
|
||||
command: gdb_path,
|
||||
arguments: Some(vec!["-i=dap".into()]),
|
||||
envs: None,
|
||||
arguments: vec!["-i=dap".into()],
|
||||
envs: HashMap::default(),
|
||||
cwd: None,
|
||||
connection: None,
|
||||
request_args: self.request_args(config),
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
use dap::StartDebuggingRequestArguments;
|
||||
use gpui::AsyncApp;
|
||||
use std::{ffi::OsStr, path::PathBuf};
|
||||
use std::{collections::HashMap, ffi::OsStr, path::PathBuf};
|
||||
use task::DebugTaskDefinition;
|
||||
|
||||
use crate::*;
|
||||
@@ -12,12 +12,12 @@ impl GoDebugAdapter {
|
||||
const ADAPTER_NAME: &'static str = "Delve";
|
||||
fn request_args(&self, config: &DebugTaskDefinition) -> StartDebuggingRequestArguments {
|
||||
let mut args = match &config.request {
|
||||
dap::DebugRequestType::Attach(attach_config) => {
|
||||
dap::DebugRequest::Attach(attach_config) => {
|
||||
json!({
|
||||
"processId": attach_config.process_id,
|
||||
})
|
||||
}
|
||||
dap::DebugRequestType::Launch(launch_config) => json!({
|
||||
dap::DebugRequest::Launch(launch_config) => json!({
|
||||
"program": launch_config.program,
|
||||
"cwd": launch_config.cwd,
|
||||
"args": launch_config.args
|
||||
@@ -92,15 +92,14 @@ impl DebugAdapter for GoDebugAdapter {
|
||||
let (host, port, timeout) = crate::configure_tcp_connection(tcp_connection).await?;
|
||||
|
||||
Ok(DebugAdapterBinary {
|
||||
adapter_name: self.name(),
|
||||
command: delve_path,
|
||||
arguments: Some(vec![
|
||||
arguments: vec![
|
||||
"dap".into(),
|
||||
"--listen".into(),
|
||||
format!("{}:{}", host, port).into(),
|
||||
]),
|
||||
format!("{}:{}", host, port),
|
||||
],
|
||||
cwd: None,
|
||||
envs: None,
|
||||
envs: HashMap::default(),
|
||||
connection: Some(adapters::TcpArguments {
|
||||
host,
|
||||
port,
|
||||
|
||||
@@ -1,8 +1,8 @@
 use adapters::latest_github_release;
 use dap::StartDebuggingRequestArguments;
 use gpui::AsyncApp;
-use std::path::PathBuf;
-use task::{DebugRequestType, DebugTaskDefinition};
+use std::{collections::HashMap, path::PathBuf};
+use task::{DebugRequest, DebugTaskDefinition};

 use crate::*;

@@ -18,16 +18,16 @@ impl JsDebugAdapter {
         let mut args = json!({
             "type": "pwa-node",
             "request": match config.request {
-                DebugRequestType::Launch(_) => "launch",
-                DebugRequestType::Attach(_) => "attach",
+                DebugRequest::Launch(_) => "launch",
+                DebugRequest::Attach(_) => "attach",
             },
         });
         let map = args.as_object_mut().unwrap();
         match &config.request {
-            DebugRequestType::Attach(attach) => {
+            DebugRequest::Attach(attach) => {
                 map.insert("processId".into(), attach.process_id.into());
             }
-            DebugRequestType::Launch(launch) => {
+            DebugRequest::Launch(launch) => {
                 map.insert("program".into(), launch.program.clone().into());

                 if !launch.args.is_empty() {
@@ -106,20 +106,22 @@ impl DebugAdapter for JsDebugAdapter {
         let (host, port, timeout) = crate::configure_tcp_connection(tcp_connection).await?;

         Ok(DebugAdapterBinary {
             adapter_name: self.name(),
             command: delegate
                 .node_runtime()
                 .binary_path()
                 .await?
                 .to_string_lossy()
                 .into_owned(),
-            arguments: Some(vec![
-                adapter_path.join(Self::ADAPTER_PATH).into(),
-                port.to_string().into(),
-                host.to_string().into(),
-            ]),
+            arguments: vec![
+                adapter_path
+                    .join(Self::ADAPTER_PATH)
+                    .to_string_lossy()
+                    .to_string(),
+                port.to_string(),
+                host.to_string(),
+            ],
             cwd: None,
-            envs: None,
+            envs: HashMap::default(),
             connection: Some(adapters::TcpArguments {
                 host,
                 port,
@@ -1,7 +1,7 @@
 use adapters::latest_github_release;
 use dap::adapters::TcpArguments;
 use gpui::AsyncApp;
-use std::path::PathBuf;
+use std::{collections::HashMap, path::PathBuf};
 use task::DebugTaskDefinition;

 use crate::*;
@@ -19,20 +19,18 @@ impl PhpDebugAdapter {
         config: &DebugTaskDefinition,
     ) -> Result<dap::StartDebuggingRequestArguments> {
         match &config.request {
-            dap::DebugRequestType::Attach(_) => {
+            dap::DebugRequest::Attach(_) => {
                 anyhow::bail!("php adapter does not support attaching")
             }
-            dap::DebugRequestType::Launch(launch_config) => {
-                Ok(dap::StartDebuggingRequestArguments {
-                    configuration: json!({
-                        "program": launch_config.program,
-                        "cwd": launch_config.cwd,
-                        "args": launch_config.args,
-                        "stopOnEntry": config.stop_on_entry.unwrap_or_default(),
-                    }),
-                    request: config.request.to_dap(),
-                })
-            }
+            dap::DebugRequest::Launch(launch_config) => Ok(dap::StartDebuggingRequestArguments {
+                configuration: json!({
+                    "program": launch_config.program,
+                    "cwd": launch_config.cwd,
+                    "args": launch_config.args,
+                    "stopOnEntry": config.stop_on_entry.unwrap_or_default(),
+                }),
+                request: config.request.to_dap(),
+            }),
         }
     }
 }
@@ -94,24 +92,26 @@ impl DebugAdapter for PhpDebugAdapter {
         let (host, port, timeout) = crate::configure_tcp_connection(tcp_connection).await?;

         Ok(DebugAdapterBinary {
             adapter_name: self.name(),
             command: delegate
                 .node_runtime()
                 .binary_path()
                 .await?
                 .to_string_lossy()
                 .into_owned(),
-            arguments: Some(vec![
-                adapter_path.join(Self::ADAPTER_PATH).into(),
-                format!("--server={}", port).into(),
-            ]),
+            arguments: vec![
+                adapter_path
+                    .join(Self::ADAPTER_PATH)
+                    .to_string_lossy()
+                    .to_string(),
+                format!("--server={}", port),
+            ],
             connection: Some(TcpArguments {
                 port,
                 host,
                 timeout,
             }),
             cwd: None,
-            envs: None,
+            envs: HashMap::default(),
             request_args: self.request_args(config)?,
         })
     }
@@ -1,7 +1,7 @@
 use crate::*;
-use dap::{DebugRequestType, StartDebuggingRequestArguments};
+use dap::{DebugRequest, StartDebuggingRequestArguments};
 use gpui::AsyncApp;
-use std::{ffi::OsStr, path::PathBuf};
+use std::{collections::HashMap, ffi::OsStr, path::PathBuf};
 use task::DebugTaskDefinition;

 #[derive(Default)]
@@ -16,18 +16,18 @@ impl PythonDebugAdapter {
     fn request_args(&self, config: &DebugTaskDefinition) -> StartDebuggingRequestArguments {
         let mut args = json!({
             "request": match config.request {
-                DebugRequestType::Launch(_) => "launch",
-                DebugRequestType::Attach(_) => "attach",
+                DebugRequest::Launch(_) => "launch",
+                DebugRequest::Attach(_) => "attach",
             },
             "subProcess": true,
             "redirectOutput": true,
         });
         let map = args.as_object_mut().unwrap();
         match &config.request {
-            DebugRequestType::Attach(attach) => {
+            DebugRequest::Attach(attach) => {
                 map.insert("processId".into(), attach.process_id.into());
             }
-            DebugRequestType::Launch(launch) => {
+            DebugRequest::Launch(launch) => {
                 map.insert("program".into(), launch.program.clone().into());
                 map.insert("args".into(), launch.args.clone().into());
@@ -141,20 +141,22 @@ impl DebugAdapter for PythonDebugAdapter {
         };

         Ok(DebugAdapterBinary {
             adapter_name: self.name(),
             command: python_path.ok_or(anyhow!("failed to find binary path for python"))?,
-            arguments: Some(vec![
-                debugpy_dir.join(Self::ADAPTER_PATH).into(),
-                format!("--port={}", port).into(),
-                format!("--host={}", host).into(),
-            ]),
+            arguments: vec![
+                debugpy_dir
+                    .join(Self::ADAPTER_PATH)
+                    .to_string_lossy()
+                    .to_string(),
+                format!("--port={}", port),
+                format!("--host={}", host),
+            ],
             connection: Some(adapters::TcpArguments {
                 host,
                 port,
                 timeout,
             }),
             cwd: None,
-            envs: None,
+            envs: HashMap::default(),
             request_args: self.request_args(config),
         })
     }
@@ -566,11 +566,13 @@ impl DapLogView {
             .dap_store()
             .read(cx)
             .sessions()
-            .filter_map(|client| {
-                let client = client.read(cx).adapter_client()?;
+            .filter_map(|session| {
+                let session = session.read(cx);
+                session.adapter_name();
+                let client = session.adapter_client()?;
                 Some(DapMenuItem {
                     client_id: client.id(),
-                    client_name: client.name().0.as_ref().into(),
+                    client_name: session.adapter_name().to_string(),
                     has_adapter_logs: client.has_adapter_logs(),
                     selected_entry: self.current_view.map_or(LogKind::Adapter, |(_, kind)| kind),
                 })
@@ -1,4 +1,4 @@
-use dap::DebugRequestType;
+use dap::DebugRequest;
 use fuzzy::{StringMatch, StringMatchCandidate};
 use gpui::Subscription;
 use gpui::{DismissEvent, Entity, EventEmitter, Focusable, Render};
@@ -216,10 +216,10 @@ impl PickerDelegate for AttachModalDelegate {
         };

         match &mut self.debug_config.request {
-            DebugRequestType::Attach(config) => {
+            DebugRequest::Attach(config) => {
                 config.process_id = Some(candidate.pid);
             }
-            DebugRequestType::Launch(_) => {
+            DebugRequest::Launch(_) => {
                 debug_panic!("Debugger attach modal used on launch debug config");
                 return;
             }
@@ -5,7 +5,7 @@ use std::{
 };

 use anyhow::{Result, anyhow};
-use dap::DebugRequestType;
+use dap::{DapRegistry, DebugRequest};
 use editor::{Editor, EditorElement, EditorStyle};
 use gpui::{
     App, AppContext, DismissEvent, Entity, EventEmitter, FocusHandle, Focusable, Render, TextStyle,
@@ -13,7 +13,7 @@ use gpui::{
 };
 use project::Project;
 use settings::Settings;
-use task::{DebugTaskDefinition, LaunchConfig};
+use task::{DebugTaskDefinition, DebugTaskTemplate, LaunchRequest};
 use theme::ThemeSettings;
 use ui::{
     ActiveTheme, Button, ButtonCommon, ButtonSize, CheckboxWithLabel, Clickable, Color, Context,
@@ -37,9 +37,9 @@ pub(super) struct NewSessionModal {
     last_selected_profile_name: Option<SharedString>,
 }

-fn suggested_label(request: &DebugRequestType, debugger: &str) -> String {
+fn suggested_label(request: &DebugRequest, debugger: &str) -> String {
     match request {
-        DebugRequestType::Launch(config) => {
+        DebugRequest::Launch(config) => {
             let last_path_component = Path::new(&config.program)
                 .file_name()
                 .map(|name| name.to_string_lossy())
@@ -47,7 +47,7 @@ fn suggested_label(request: &DebugRequestType, debugger: &str) -> String {

             format!("{} ({debugger})", last_path_component)
         }
-        DebugRequestType::Attach(config) => format!(
+        DebugRequest::Attach(config) => format!(
             "pid: {} ({debugger})",
             config.process_id.unwrap_or(u32::MAX)
         ),
@@ -71,7 +71,7 @@ impl NewSessionModal {
         .and_then(|def| def.stop_on_entry);

         let launch_config = match past_debug_definition.map(|def| def.request) {
-            Some(DebugRequestType::Launch(launch_config)) => Some(launch_config),
+            Some(DebugRequest::Launch(launch_config)) => Some(launch_config),
             _ => None,
         };

@@ -96,7 +96,6 @@ impl NewSessionModal {
             request,
             initialize_args: self.initialize_args.clone(),
             tcp_connection: None,
-            locator: None,
             stop_on_entry: match self.stop_on_entry {
                 ToggleState::Selected => Some(true),
                 _ => None,
@@ -131,20 +130,16 @@ impl NewSessionModal {
         let project = workspace.update(cx, |workspace, _| workspace.project().clone())?;

         let task = project.update(cx, |this, cx| {
-            if let Some(debug_config) =
-                config
-                    .clone()
-                    .to_zed_format()
-                    .ok()
-                    .and_then(|task_template| {
-                        task_template
-                            .resolve_task("debug_task", &task_context)
-                            .and_then(|resolved_task| {
-                                resolved_task.resolved_debug_adapter_config()
-                            })
-                    })
+            let template = DebugTaskTemplate {
+                locator: None,
+                definition: config.clone(),
+            };
+            if let Some(debug_config) = template
+                .to_zed_format()
+                .resolve_task("debug_task", &task_context)
+                .and_then(|resolved_task| resolved_task.resolved_debug_adapter_config())
             {
-                this.start_debug_session(debug_config, cx)
+                this.start_debug_session(debug_config.definition, cx)
             } else {
                 this.start_debug_session(config, cx)
             }
@@ -214,12 +209,7 @@ impl NewSessionModal {
         };

         let available_adapters = workspace
-            .update(cx, |this, cx| {
-                this.project()
-                    .read(cx)
-                    .debug_adapters()
-                    .enumerate_adapters()
-            })
+            .update(cx, |_, cx| DapRegistry::global(cx).enumerate_adapters())
             .ok()
             .unwrap_or_default();
@@ -251,14 +241,14 @@ impl NewSessionModal {
             this.debugger = Some(task.adapter.clone().into());
             this.initialize_args = task.initialize_args.clone();
             match &task.request {
-                DebugRequestType::Launch(launch_config) => {
+                DebugRequest::Launch(launch_config) => {
                     this.mode = NewSessionMode::launch(
                         Some(launch_config.clone()),
                         window,
                         cx,
                     );
                 }
-                DebugRequestType::Attach(_) => {
+                DebugRequest::Attach(_) => {
                     let Ok(project) = this
                         .workspace
                         .read_with(cx, |this, _| this.project().clone())
@@ -285,7 +275,7 @@ impl NewSessionModal {
             }
         };

-        let available_adapters: Vec<DebugTaskDefinition> = workspace
+        let available_adapters: Vec<DebugTaskTemplate> = workspace
             .update(cx, |this, cx| {
                 this.project()
                     .read(cx)
@@ -303,9 +293,9 @@ impl NewSessionModal {

         for debug_definition in available_adapters {
             menu = menu.entry(
-                debug_definition.label.clone(),
+                debug_definition.definition.label.clone(),
                 None,
-                setter_for_name(debug_definition),
+                setter_for_name(debug_definition.definition),
             );
         }
         menu
@@ -322,7 +312,7 @@ struct LaunchMode {

 impl LaunchMode {
     fn new(
-        past_launch_config: Option<LaunchConfig>,
+        past_launch_config: Option<LaunchRequest>,
         window: &mut Window,
         cx: &mut App,
     ) -> Entity<Self> {
@@ -348,9 +338,9 @@ impl LaunchMode {
         cx.new(|_| Self { program, cwd })
     }

-    fn debug_task(&self, cx: &App) -> task::LaunchConfig {
+    fn debug_task(&self, cx: &App) -> task::LaunchRequest {
         let path = self.cwd.read(cx).text(cx);
-        task::LaunchConfig {
+        task::LaunchRequest {
             program: self.program.read(cx).text(cx),
             cwd: path.is_empty().not().then(|| PathBuf::from(path)),
             args: Default::default(),
@@ -373,10 +363,9 @@ impl AttachMode {
     ) -> Entity<Self> {
         let debug_definition = DebugTaskDefinition {
             label: "Attach New Session Setup".into(),
-            request: dap::DebugRequestType::Attach(task::AttachConfig { process_id: None }),
+            request: dap::DebugRequest::Attach(task::AttachRequest { process_id: None }),
             tcp_connection: None,
             adapter: debugger.clone().unwrap_or_default().into(),
-            locator: None,
             initialize_args: None,
             stop_on_entry: Some(false),
         };
@@ -391,8 +380,8 @@ impl AttachMode {
             attach_picker,
         })
     }
-    fn debug_task(&self) -> task::AttachConfig {
-        task::AttachConfig { process_id: None }
+    fn debug_task(&self) -> task::AttachRequest {
+        task::AttachRequest { process_id: None }
     }
 }

@@ -406,7 +395,7 @@ enum NewSessionMode {
 }

 impl NewSessionMode {
-    fn debug_task(&self, cx: &App) -> DebugRequestType {
+    fn debug_task(&self, cx: &App) -> DebugRequest {
         match self {
             NewSessionMode::Launch(entity) => entity.read(cx).debug_task(cx).into(),
             NewSessionMode::Attach(entity) => entity.read(cx).debug_task().into(),
@@ -488,7 +477,7 @@ impl NewSessionMode {
         Self::Attach(AttachMode::new(debugger, project, window, cx))
     }
     fn launch(
-        past_launch_config: Option<LaunchConfig>,
+        past_launch_config: Option<LaunchRequest>,
         window: &mut Window,
         cx: &mut Context<NewSessionModal>,
     ) -> Self {
@@ -5,7 +5,7 @@ use gpui::{BackgroundExecutor, TestAppContext, VisualTestContext};
 use menu::Confirm;
 use project::{FakeFs, Project};
 use serde_json::json;
-use task::{AttachConfig, DebugTaskDefinition, TCPHost};
+use task::{AttachRequest, DebugTaskDefinition, TcpArgumentsTemplate};
 use tests::{init_test, init_test_workspace};

 #[gpui::test]
@@ -31,13 +31,12 @@ async fn test_direct_attach_to_process(executor: BackgroundExecutor, cx: &mut Te
         cx,
         DebugTaskDefinition {
             adapter: "fake-adapter".to_string(),
-            request: dap::DebugRequestType::Attach(AttachConfig {
+            request: dap::DebugRequest::Attach(AttachRequest {
                 process_id: Some(10),
             }),
             label: "label".to_string(),
             initialize_args: None,
             tcp_connection: None,
-            locator: None,
             stop_on_entry: None,
         },
         |client| {
@@ -105,11 +104,10 @@ async fn test_show_attach_modal_and_select_process(
         project.clone(),
         DebugTaskDefinition {
             adapter: FakeAdapter::ADAPTER_NAME.into(),
-            request: dap::DebugRequestType::Attach(AttachConfig::default()),
+            request: dap::DebugRequest::Attach(AttachRequest::default()),
             label: "attach example".into(),
             initialize_args: None,
-            tcp_connection: Some(TCPHost::default()),
-            locator: None,
+            tcp_connection: Some(TcpArgumentsTemplate::default()),
             stop_on_entry: None,
         },
         vec![
@@ -5111,44 +5111,21 @@ impl Editor {
             CodeActionsItem::Task(task_source_kind, resolved_task) => {
                 match resolved_task.task_type() {
                     task::TaskType::Script => workspace.update(cx, |workspace, cx| {
-                        workspace::tasks::schedule_resolved_task(
-                            workspace,
+                        workspace.schedule_resolved_task(
                             task_source_kind,
                             resolved_task,
                             false,
+                            window,
                             cx,
                         );

                         Some(Task::ready(Ok(())))
                     }),
-                    task::TaskType::Debug(debug_args) => {
-                        if debug_args.locator.is_some() {
-                            workspace.update(cx, |workspace, cx| {
-                                workspace::tasks::schedule_resolved_task(
-                                    workspace,
-                                    task_source_kind,
-                                    resolved_task,
-                                    false,
-                                    cx,
-                                );
-                            });
-
-                            return Some(Task::ready(Ok(())));
-                        }
-
-                        if let Some(project) = self.project.as_ref() {
-                            project
-                                .update(cx, |project, cx| {
-                                    project.start_debug_session(
-                                        resolved_task.resolved_debug_adapter_config().unwrap(),
-                                        cx,
-                                    )
-                                })
-                                .detach_and_log_err(cx);
-                            Some(Task::ready(Ok(())))
-                        } else {
-                            Some(Task::ready(Ok(())))
-                        }
+                    task::TaskType::Debug(_) => {
+                        workspace.update(cx, |workspace, cx| {
+                            workspace.schedule_debug_task(resolved_task, window, cx);
+                        });
+                        Some(Task::ready(Ok(())))
                     }
                 }
             }
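Condensed, the change above stops special-casing debug locators and calling `Project::start_debug_session` directly; both task kinds now route through `Workspace` methods. A toy distillation with simplified stand-in types (not the real Zed signatures):

```rust
enum TaskType {
    Script,
    Debug,
}

// Toy stand-ins for the Workspace methods used on the new code path.
struct Workspace;
impl Workspace {
    fn schedule_resolved_task(&self, task: &str) {
        println!("running script task: {task}");
    }
    fn schedule_debug_task(&self, task: &str) {
        println!("scheduling debug session for: {task}");
    }
}

fn dispatch(workspace: &Workspace, task_type: TaskType, task: &str) {
    match task_type {
        TaskType::Script => workspace.schedule_resolved_task(task),
        // No locator special case anymore: the workspace owns debug scheduling too.
        TaskType::Debug => workspace.schedule_debug_task(task),
    }
}

fn main() {
    dispatch(&Workspace, TaskType::Debug, "cargo test");
}
```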
@@ -6845,12 +6822,12 @@ impl Editor {
         resolved.reveal = reveal_strategy;

         workspace
-            .update(cx, |workspace, cx| {
-                workspace::tasks::schedule_resolved_task(
-                    workspace,
+            .update_in(cx, |workspace, window, cx| {
+                workspace.schedule_resolved_task(
                     task_source_kind,
                     resolved_task,
                     false,
+                    window,
                     cx,
                 );
             })
@@ -12,8 +12,8 @@ use crate::{
 use buffer_diff::{BufferDiff, DiffHunkSecondaryStatus, DiffHunkStatus, DiffHunkStatusKind};
 use futures::StreamExt;
 use gpui::{
-    BackgroundExecutor, SemanticVersion, TestAppContext, UpdateGlobal, VisualTestContext,
-    WindowBounds, WindowOptions, div,
+    BackgroundExecutor, DismissEvent, SemanticVersion, TestAppContext, UpdateGlobal,
+    VisualTestContext, WindowBounds, WindowOptions, div,
 };
 use indoc::indoc;
 use language::{
@@ -19549,6 +19549,64 @@ println!("5");
     });
 }

+#[gpui::test]
+async fn test_hide_mouse_context_menu_on_modal_opened(cx: &mut TestAppContext) {
+    struct EmptyModalView {
+        focus_handle: gpui::FocusHandle,
+    }
+    impl EventEmitter<DismissEvent> for EmptyModalView {}
+    impl Render for EmptyModalView {
+        fn render(&mut self, _: &mut Window, _: &mut Context<'_, Self>) -> impl IntoElement {
+            div()
+        }
+    }
+    impl Focusable for EmptyModalView {
+        fn focus_handle(&self, _cx: &App) -> gpui::FocusHandle {
+            self.focus_handle.clone()
+        }
+    }
+    impl workspace::ModalView for EmptyModalView {}
+    fn new_empty_modal_view(cx: &App) -> EmptyModalView {
+        EmptyModalView {
+            focus_handle: cx.focus_handle(),
+        }
+    }
+
+    init_test(cx, |_| {});
+
+    let fs = FakeFs::new(cx.executor());
+    let project = Project::test(fs, [], cx).await;
+    let workspace = cx.add_window(|window, cx| Workspace::test_new(project.clone(), window, cx));
+    let buffer = cx.update(|cx| MultiBuffer::build_simple("hello world!", cx));
+    let cx = &mut VisualTestContext::from_window(*workspace.deref(), cx);
+    let editor = cx.new_window_entity(|window, cx| {
+        Editor::new(
+            EditorMode::full(),
+            buffer,
+            Some(project.clone()),
+            window,
+            cx,
+        )
+    });
+    workspace
+        .update(cx, |workspace, window, cx| {
+            workspace.add_item_to_active_pane(Box::new(editor.clone()), None, true, window, cx);
+        })
+        .unwrap();
+    editor.update_in(cx, |editor, window, cx| {
+        editor.open_context_menu(&OpenContextMenu, window, cx);
+        assert!(editor.mouse_context_menu.is_some());
+    });
+    workspace
+        .update(cx, |workspace, window, cx| {
+            workspace.toggle_modal(window, cx, |_, cx| new_empty_modal_view(cx));
+        })
+        .unwrap();
+    cx.read(|cx| {
+        assert!(editor.read(cx).mouse_context_menu.is_none());
+    });
+}
+
 fn empty_range(row: usize, column: usize) -> Range<DisplayPoint> {
     let point = DisplayPoint::new(DisplayRow(row as u32), column as u32);
     point..point
@@ -6681,7 +6681,10 @@ impl Element for EditorElement {
         let max_row = snapshot.max_point().row().as_f32();

         // The max scroll position for the top of the window
-        let max_scroll_top = if matches!(snapshot.mode, EditorMode::AutoHeight { .. }) {
+        let max_scroll_top = if matches!(
+            snapshot.mode,
+            EditorMode::AutoHeight { .. } | EditorMode::SingleLine { .. }
+        ) {
             (max_row - height_in_lines + 1.).max(0.)
         } else {
             let settings = EditorSettings::get_global(cx);
@@ -928,9 +928,17 @@ impl Item for Editor {
         &mut self,
         workspace: &mut Workspace,
         _window: &mut Window,
-        _: &mut Context<Self>,
+        cx: &mut Context<Self>,
     ) {
         self.workspace = Some((workspace.weak_handle(), workspace.database_id()));
+        if let Some(workspace) = &workspace.weak_handle().upgrade() {
+            cx.subscribe(&workspace, |editor, _, event: &workspace::Event, _cx| {
+                if matches!(event, workspace::Event::ModalOpened) {
+                    editor.mouse_context_menu.take();
+                }
+            })
+            .detach();
+        }
     }

 fn to_item_events(event: &EditorEvent, mut f: impl FnMut(ItemEvent)) {
@@ -7,15 +7,14 @@ edition.workspace = true
 [dependencies]
 agent.workspace = true
 anyhow.workspace = true
-async-watch.workspace = true
 assistant_tool.workspace = true
 assistant_tools.workspace = true
+async-watch.workspace = true
 chrono.workspace = true
 clap.workspace = true
 client.workspace = true
 collections.workspace = true
 context_server.workspace = true
-dap.workspace = true
 dirs = "5.0"
 env_logger.workspace = true
 extension.workspace = true
@@ -38,6 +37,7 @@ reqwest_client.workspace = true
 serde.workspace = true
 settings.workspace = true
 shellexpand.workspace = true
+smol.workspace = true
 telemetry.workspace = true
 toml.workspace = true
 unindent.workspace = true
@@ -1,3 +0,0 @@
-url = "https://github.com/workos/authkit-js.git"
-revision = "949345d85782a93e8f1738ec31823948ffc26301"
-language_extension = "ts"
@@ -1,10 +0,0 @@
-1. Add a new test case in `create-client.test.ts` for when the `returnTo` option is provided during sign-out. It verifies that the sign-out URL includes the correct `return_to` query parameter with the provided URL. The test sets up a mock client, calls signOut with a returnTo value, and asserts that the resulting URL contains the expected session_id and return_to parameters while maintaining the correct API endpoint structure.
-2. Modifies the `signOut` method in `create-client.ts` to accept an optional options parameter containing a returnTo string. Instead of directly passing the sessionId to getLogoutUrl, it now passes an object containing both the sessionId and the returnTo value from the options. The method maintains its existing behavior of checking for an access token and clearing session data when a URL is available.
-3. Updates the HTTP client tests in `http-client.test.ts` to reflect the new getLogoutUrl signature. It adds a test case for the basic logout URL and a new describe block for when returnTo is provided, verifying that the URL includes the properly encoded return_to parameter. The test ensures the URL construction handles both cases correctly.
-4. Modifies the `getLogoutUrl` method in `http-client.ts` to accept an object parameter with sessionId and returnTo properties instead of just a sessionId string. It maintains the base URL construction but now conditionally adds the return_to query parameter only when a returnTo value is provided, while always including the session_id parameter. The method handles URL construction and parameter encoding internally.
-5. Updates the session initialization logic in `create-client.ts` to check for either a `workos-has-session` cookie or a refresh token (retrieved via `getRefreshToken`). This allows the client to refresh sessions even if no `code` is present in the URL, especially in development environments.
-6. Adds corresponding test coverage in `create-client.test.ts`:
-   - When no code is in the URL but the `workos-has-session` cookie exists, the session should be refreshed.
-   - When devMode is enabled and a refresh token is present in localStorage, the session should be refreshed.
-   - When devMode is enabled but no refresh token exists, the client should be created without making any network requests.
-   - When neither a code, cookie, nor refresh token is present, the client should initialize without refreshing.
@@ -1,3 +0,0 @@
-I need to improve our logout feature. When users sign out, they should be able to specify a return URL to redirect to afterward. Right now, signing out just takes them to a default page, but we want to support custom redirects (like back to the homepage or a login screen). The URL should be safely included in the logout request. Make sure existing logouts still work normally when no redirect is specified.
-
-Also, note that we updated how the client initializes its session. It should now check for either a `workos-has-session` cookie or a valid refresh token (even in devMode). This ensures that sessions are refreshed appropriately even without a code in the URL. Be sure this logic is covered by at least a minimal set of tests.
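The conditional `return_to` handling described above reduces to appending one query parameter only when the caller supplies it. An illustrative Rust sketch using the `url` crate (the real change is TypeScript in `http-client.ts`, and the path here is a made-up placeholder):

```rust
use url::Url;

fn get_logout_url(base: &str, session_id: &str, return_to: Option<&str>) -> Result<Url, url::ParseError> {
    let mut url = Url::parse(base)?;
    {
        let mut pairs = url.query_pairs_mut();
        // session_id is always present, as before.
        pairs.append_pair("session_id", session_id);
        // return_to is appended only when provided, so existing logouts
        // keep producing the same URL they always did.
        if let Some(return_to) = return_to {
            pairs.append_pair("return_to", return_to);
        }
    }
    Ok(url)
}

fn main() -> Result<(), url::ParseError> {
    let url = get_logout_url("https://api.example.com/logout", "sess_123", Some("https://example.com/"))?;
    println!("{url}"); // session_id plus a percent-encoded return_to
    Ok(())
}
```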
@@ -1,3 +0,0 @@
-url = "https://github.com/cline/cline.git"
-revision = "a26494e5cc453f9c7e148d35895fda3f74d03284"
-language_extension = "ts"
@@ -1,5 +0,0 @@
-1. A new changeset file is created to document a patch that improves diff editing animations and enhances prompts for large file edits. An indicator showing the number of diff edits is also added next to each file path.
-2. In `diff.ts`, the error message thrown when a `SEARCH` block doesn’t match content has been updated to clarify that the mismatch could be due to out-of-order blocks.
-3. In `responses.ts`, the assistant response for diff mismatches now recommends limiting to 1–3 `SEARCH/REPLACE` blocks at a time for large files. It also simplifies fallback instructions for using the `write_to_file` tool.
-4. The `DiffViewProvider.ts` file has been updated to replace line-by-line animations with chunk-based updates for better performance. For large diffs, a smooth scrolling animation is introduced to maintain visual context. Small diffs still scroll directly.
-5. In `CodeAccordian.tsx`, a new visual indicator displays the number of `REPLACE` blocks in the code diff using a diff icon and count, providing quick insight into the volume of changes.
@@ -1,7 +0,0 @@
-We're trying to improve both performance and usability when working with large diffs in the editor. A few areas need attention:
-
-First, the current diff animation applies updates line-by-line, which can feel slow and visually jarring for large edits. Could you revise the logic so that we update the editor in larger chunks instead? For smaller diffs, direct scrolling to the edited line is fine, but for larger changes, it would be great to implement a smooth scrolling animation that steps through the affected region before settling at the final line.
-
-Second, the current error message when a SEARCH block doesn't match is a bit too vague. Let's make it clearer that the issue could be due to out-of-order or imprecise SEARCH/REPLACE blocks, especially when working with multiple blocks. It might also help to add a suggestion that users try only 1–3 changes at a time for large files before retrying.
-
-Finally, in the file accordion UI, it would be useful to show how many edits a file contains. Could you parse the diff content and display a count of REPLACE blocks next to the file path, maybe with a small icon for clarity?
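The edit-count indicator requested above reduces to counting block terminators in the diff body. A sketch assuming conflict-style `>>>>>>> REPLACE` markers close each block (an assumption; the real implementation is TypeScript in `CodeAccordian.tsx`):

```rust
// Count SEARCH/REPLACE edits by counting the lines that terminate each block.
// The ">>>>>>> REPLACE" marker is assumed, not confirmed from the source.
fn count_replace_blocks(diff: &str) -> usize {
    diff.lines()
        .filter(|line| line.trim_start().starts_with(">>>>>>> REPLACE"))
        .count()
}

fn main() {
    let diff = "<<<<<<< SEARCH\nold\n=======\nnew\n>>>>>>> REPLACE\n";
    assert_eq!(count_replace_blocks(diff), 1);
}
```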
@@ -1,3 +0,0 @@
-url = "https://github.com/punkpeye/awesome-mcp-servers.git"
-revision = "5480a9849b01ae8a5c1433d75ad0415975609571"
-language_extension = "md"
@@ -1,5 +0,0 @@
-1. The diff shows changes to `README.md`, specifically adding a new entry to the "Tools and integrations" list. The new entry is for `@iaptic/mcp-server-iaptic`, which provides access to customer purchase and revenue data.
-2. The added line includes:
-   - The GitHub repository URL
-   - Three emojis: 🎖️ (possibly representing awards or achievements), 📇 (profiles or contacts), and ☁️ (cloud)
-   - A description of the tool's functionality: "Connect with [iaptic](https://www.iaptic.com) to ask about your Customer Purchases, Transaction data and App Revenue statistics"
@@ -1,3 +0,0 @@
-Please add a new tool entry to the README.md file's integration list: "@iaptic/mcp-server-iaptic" with GitHub link, described as "Connect with [iaptic](https://www.iaptic.com) to ask about your Customer Purchases, Transaction data and App Revenue statistics", tagged with the following emojis: 🎖️ 📇 ☁️. Place it appropriately in the existing tools section, following the current alphabetical or category-based order.
-
-Edit the README file with the above, new resource.
@@ -1,3 +0,0 @@
-url = "https://github.com/avkcode/container-tools.git"
-revision = "34137bb453b4d2dd28b08bd80e26bc3105a50ada"
-language_extension = "sh"
@@ -1,4 +0,0 @@
-1. Changes to the Makefile where the parameter "--keyrign" was corrected to "--keyring" in multiple build targets including debian11, debian11-java, debian11-java-slim, debian11-graal, debian11-graal-slim, debian11-corretto, debian11-java-slim-maven, debian11-java-slim-gradle, debian11-graal-slim-maven, and debian11-graal-slim-gradle. This appears to be a typo fix across all Java-related build configurations in the Makefile.
-2. Introduces significant enhancements to the debian/mkimage.sh script, including adding a usage function with detailed documentation, improving error handling for command-line arguments, and fixing the "--keyrign" parameter to "--keyring" to match the Makefile changes. It also adds better validation for required arguments and more descriptive error messages when values are missing. The script now includes comprehensive documentation about its purpose and usage examples.
-3. Shows extensive improvements to the script's functionality and robustness, including adding tracing capabilities, better error handling, and more informative logging. It introduces new helper functions like usage(), die(), warn(), and info() for better user feedback. The script now properly checks for required commands (debootstrap, unzip, trivy) and provides installation instructions if they're missing. It also includes better system checks (Linux OS verification, root privileges check, SELinux status) and implements a more reliable way to handle GPG keys by setting up the correct directory structure and permissions before key import.
-4. Continues the script improvements with better package management, repository configuration, and container setup. It adds proper apt repository configuration in the target system, implements package installation with retries, and includes Docker-specific optimizations. The script now provides clearer output about installed packages and their sizes. It also includes better cleanup procedures and more informative completion messages with clear instructions on how to load and run the resulting Docker image. The output now includes example commands and proper formatting for better readability.
@@ -1 +0,0 @@
-I need to make several improvements to our Debian image-building scripts. First, fix the typo in the `Makefile` where `--keyrign` is incorrectly used instead of `--keyring` across all build targets, including the standard Debian image and Java variants like `debian11-java`, `debian11-graal`, and `debian11-corretto`. Second, enhance the `debian/mkimage.sh` script to include proper error handling, usage documentation, and command-line argument validation. The script should check for required tools like `debootstrap`, `unzip`, and `trivy`, and provide installation instructions if they're missing. Improve the GPG key setup by ensuring the `/root/.gnupg` directory is properly configured before importing keys. Add structured logging with timestamps, warnings, and informational messages. Implement better package installation with retries and proper cleanup. Finally, include clear instructions at the end on how to load and run the generated Docker image, with example commands for verification. The script should be robust, well-documented, and fail early with meaningful error messages if system requirements aren't met.
@@ -1,3 +0,0 @@
-url = "https://github.com/YuhangSong/Arena-Baselines.git"
-revision = "801ed8566110ddc4a6ada0cc70171c636d78dbb8"
-language_extension = "py"
@@ -1,12 +0,0 @@
-1. README.md Features Section Reorganization
-The features section has been reorganized into two subsections ("Baselines" and "Games") with markdown tables added. The previous bullet points were replaced with more structured content including supported/benchmarked status indicators. A new "Visualization" section was added with TensorBoard and port forwarding instructions.
-2. Content Relocation and File Restructuring
-The Tennis game documentation and action space details were moved from README.md to a new games.md file. The README was cleaned up by removing commented-out content and consolidating documentation sections. YAML config files (Benchmark-2T1P-Discrete.yaml and Test-Pong.yaml) were modified to replace `selfplay_recent_prob` with `playing_policy_load_recent_prob` and adjust population size options.
-3. train.py Refactoring
-Significant changes to train.py including:
-- Renamed `selfplay_recent_prob` parameter to `playing_policy_load_recent_prob`
-- Simplified the nested grid search structure by removing unnecessary loops
-- Improved policy loading logic with better checkpoint path handling
-- Enhanced error handling and logging for policy saving/reloading
-- Removed redundant code and improved code organization
-- Added more descriptive console output during policy operations
@@ -1,13 +0,0 @@
-I need to refactor the multi-agent configuration system in our Arena-Baselines repository. The current policy_assignment parameter (self_play, independent) is too coarse. I want to replace it with a more flexible set of parameters to better support advanced training schemes like population-based training (PBT) and sophisticated self-play with historical opponents.
-
-Specifically, I will introduce four new configuration parameters:
-
-iterations_per_reload: Controls the frequency (in training iterations) at which policies are saved and potentially reloaded.
-num_learning_policies: Explicitly defines how many agents use policies that are actively being trained (can be an integer or 'all').
-selfplay_recent_prob: For non-learning agents (players), this determines the probability of loading the latest version of a learning policy versus loading a uniformly random historical version during reloads.
-size_population: Specifies the number of distinct policy versions maintained for each learning agent, enabling PBT-style experiments.
-To implement this, I will significantly modify train.py. This includes updating the argument parser, changing how experiment configurations are expanded (especially with grid_search), and implementing a new callback function (on_train_result). This callback will handle the periodic saving (using pickle) of learning policies to structured directories and the reloading of all policies (learning and playing) according to the new parameters (iterations_per_reload, selfplay_recent_prob, size_population). Playing policies will use deterministic actions.
-
-I'll also reorganize the codebase by renaming arena/rllib_env.py to arena/arena.py and creating a new arena/utils.py file to house utility functions (like configuration helpers, ID generators, DeterministicCategorical) and constants.
-
-Finally, I will update the example configuration files (Benchmark-2T1P-Discrete.yaml, Test-Pong.yaml) to remove policy_assignment and demonstrate the usage of the new parameters, including within grid_search.
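The `selfplay_recent_prob` reload rule described above amounts to a biased coin flip between the latest checkpoint and a uniformly random historical one. An illustrative sketch of just that rule (the actual implementation is Python on Ray RLlib):

```rust
use rand::Rng;

// With probability `recent_prob` load the newest checkpoint; otherwise pick a
// uniformly random historical version. Falls back to latest if no history yet.
fn checkpoint_to_load<'a>(history: &[&'a str], latest: &'a str, recent_prob: f64) -> &'a str {
    let mut rng = rand::thread_rng();
    if history.is_empty() || rng.gen::<f64>() < recent_prob {
        latest
    } else {
        history[rng.gen_range(0..history.len())]
    }
}

fn main() {
    let history = ["ckpt_10.pkl", "ckpt_20.pkl"];
    println!("{}", checkpoint_to_load(&history, "ckpt_30.pkl", 0.8));
}
```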
@@ -1,3 +0,0 @@
-url = "https://github.com/calebporzio/sushi.git"
-revision = "01dd34fe3374f5fb7ce63756c0419385e31cd532"
-language_extension = "php"
@@ -1,3 +0,0 @@
-1. The GitHub workflow file has been significantly updated to expand testing coverage and improve the CI process. The changes introduce a new `fail-fast: false` setting to allow all matrix combinations to complete even if some fail. The testing matrix now includes PHP 8.4 and Laravel 12.* alongside the existing versions. The configuration includes specific testbench version mappings for Laravel 12.* and removes the DBAL requirement for Laravel 11.* tests. Numerous new test combinations have been added across all Laravel versions to include PHP 8.4 testing. The dependency installation process has been restructured into separate steps - one specifically for DBAL when needed, and another for general dependencies using updated composer commands with precise version constraints.
-2. The composer.json file has been updated to support the newly added Laravel 12.* version in both the main requirements and development dependencies. The testbench package now explicitly includes versions 5.* and 10.* in its supported range. For testing tools, PHPUnit 11.* has been added to the list of supported versions while maintaining backward compatibility with older versions. These changes ensure the package can be used with the latest Laravel ecosystem components while preserving compatibility with existing installations.
-3. The test file modifications primarily focus on adapting to changes in Laravel 11+ where column type handling was updated. The changes introduce version-aware assertions that check whether to expect 'string' or 'varchar' as column types based on the Laravel version being tested. A new import for the version comparison function was added to support these conditional checks. Additional safeguards were implemented, including a check for the HandlesAnnotations trait before running database migration tests, making the test suite more robust when running in different environments. The column type assertions in multiple test methods were updated to use these version-aware checks to maintain compatibility across Laravel versions.
@@ -1,11 +0,0 @@
-
-I'd like to update our Laravel package's CI workflow and dependencies to ensure compatibility with the upcoming Laravel 12 release and PHP 8.4. Currently, our package supports Laravel versions 5.8 through 11 and PHP versions 7.1 through 8.3, and we'll need to extend this support while maintaining backward compatibility.
-
-**Key Changes Needed:**
-First, we'll need to update composer.json to explicitly support Laravel 12. The CI test matrix should also be expanded to include PHP 8.4 testing across all supported Laravel versions. The workflow configuration will require adjustments to properly handle these new version combinations.
-
-There are some test compatibility issues we'll need to address - particularly around how we check string column types in Laravel 11+ (where 'string' was changed to 'varchar'), and we should add conditional skipping for tests that depend on traits that might not be available in all test environments.
-
-While making these changes, we could also implement some workflow improvements: enabling the fail-fast: false option to get complete test results even with individual failures, modernizing our dependency installation approach using the newer composer update syntax, and making the DBAL dependency installation conditional since it's not needed for all test cases.
-
-Would you be able to help review these changes or suggest any additional considerations we should keep in mind for this compatibility update? I want to make sure we maintain stability while expanding our support coverage.
@@ -1,3 +1,3 @@
 url = "https://github.com/zed-industries/zed.git"
-revision = "03ecb88fe30794873f191ddb728f597935b3101c"
+revision = "38fcadf9481d018543c65f36ac3bafeba190179b"
 language_extension = "rs"
@@ -1,3 +0,0 @@
-url = "https://github.com/sdras/array-explorer.git"
-revision = "8ff1a72f7ba24d44946bf591c3586b0dcccc2381"
-language_extension = "js"
@@ -1,12 +0,0 @@
-1. **EditorConfig Change**
-Added a new setting `quote_type = single` to the `.editorconfig` file. This specifies that single quotes should be used for quoting in the codebase.
-2. **New Finnish Locale Files**
-Added two new Finnish language files:
-- `src/locale/fi/index.js`: Contains Finnish translations for UI strings and method descriptions
-- `store/fi/index.js`: Contains Finnish translations for all array method documentation (298 lines)
-- `store/fi/meta.json`: Metadata about the Finnish translation (language code "fi", full name "Finnish", created by "sjarva")
-3. **Store Integration Updates**
-Modified `store/index.js` to:
-- Import the new Finnish locale files (`import fi from './fi/index'` and `import translationsFi from '../src/locale/fi/index'`)
-- Add Finnish to the Vuex store state (`fi`)
-- Register Finnish translations with Vue I18n (`Vue.i18n.add('fi', translationsFi)`)
@@ -1,5 +0,0 @@
-I’m working on adding Finnish (fi) language support to our array method reference application, which helps users determine the right JavaScript array methods based on their needs. To achieve this, I’ll need to:
-
-First, create the Finnish locale file containing translations for method selection options, method types (such as add, remove, find, and iterate), and primary action choices. Next, I’ll add Finnish translations to the store, covering all array methods (like splice, push, and unshift), including detailed descriptions of their behaviors, parameters, return values, and example code with outputs.
-
-Additionally, I’ll generate a Finnish meta file with language metadata (language code, full name, and contributor info). Finally, I’ll update the main store index to integrate Finnish alongside existing languages like English, Spanish, and German.
@@ -1,3 +0,0 @@
-url = "https://github.com/vercel/ai.git"
-revision = "1766edec300deb05c84bb7fefc034af4c2bc1165"
-language_extension = "ts"
@@ -1,3 +0,0 @@
-1. Introduces a new changeset file that documents a patch for the '@ai-sdk/provider' package. The changeset indicates a chore task where 'LanguageModelV2File' is being extracted, suggesting a refactoring effort to modularize the codebase by separating file-related types into their own module.
-2. Modifications to the language model v2 index file where a new export statement for 'language-model-v2-file' has been added. This change reflects the extraction mentioned in the changeset and makes the new file type available to other parts of the application. Additionally, there are significant changes to the language model v2 implementation file where the inline file type definition has been replaced with the newly extracted 'LanguageModelV2File' type, both in the main model interface and in the stream part union type, demonstrating the consolidation of file-related types into a single, reusable definition.
-3. Presents the newly created 'language-model-v2-file.ts' file, which defines the 'LanguageModelV2File' type with comprehensive documentation. The type includes two properties: 'mediaType', which specifies the IANA media type of the file with a reference to the official media types registry, and 'data', which can be either a base64 encoded string or binary data, with clear documentation about maintaining the original format from the API without unnecessary conversion. This new file represents the extracted type that is now used throughout the codebase.
@@ -1 +0,0 @@
-We need to improve how our language model handles file attachments by making the file type definitions more modular and reusable. Currently, file-related properties are defined inline within the model’s response and stream types, which makes maintenance harder and duplicates documentation. The goal is to extract these definitions into a dedicated type that can be shared consistently across both static responses and streaming payloads. The new type should include clear documentation about media types (referencing IANA standards) and support both base64 and binary data formats without unnecessary conversions. This change should maintain backward compatibility while centralizing the file structure definition for better type safety and readability. Focus on clean separation of concerns, and ensure the extracted type is properly exported and imported where needed.
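An illustrative Rust analogue of that extraction: one shared file type that both the static response and the stream part reference, mirroring the TS names for orientation (not a real crate API):

```rust
// Shared file type, pulled out of two previously inline definitions.
pub struct LanguageModelV2File {
    /// IANA media type, e.g. "image/png".
    pub media_type: String,
    /// Kept in whatever form the API delivered it, without conversion.
    pub data: FileData,
}

pub enum FileData {
    Base64(String),
    Binary(Vec<u8>),
}

// Both the response payload and the stream union now reuse the same type.
pub enum StreamPart {
    Text(String),
    File(LanguageModelV2File),
}

fn main() {
    let part = StreamPart::File(LanguageModelV2File {
        media_type: "image/png".into(),
        data: FileData::Base64("iVBORw0...".into()),
    });
    if let StreamPart::File(file) = part {
        println!("{}", file.media_type);
    }
}
```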
@@ -1,3 +0,0 @@
-url = "https://github.com/AsyncBanana/microdiff"
-revision = "ce2055948483d01fb1e96def4ab98d6339d3b2f9"
-language_extension = "js"
@@ -1,6 +0,0 @@
-1. **NaN Comparison Logic Update**:
-The diff modifies the comparison function to explicitly handle NaN values as equivalent. Previously, the function relied on string conversion for NaN comparison, but now it first checks if both values are NaN using Number.isNaN() before proceeding with other comparison logic. This change ensures consistent behavior when comparing NaN values in objects.
-2. **New NaN Test Suite - Object Operations**:
-A comprehensive test suite is added to verify NaN handling in object operations. The tests cover: creating new objects with NaN values, changing NaN values to other numbers, verifying no changes when NaN values remain the same, and removing properties with NaN values. Each test case validates the diff output structure and type of operation.
-3. **New NaN Test Suite - Array Operations**:
-The test suite extends to array operations with similar test cases as objects but adapted for array contexts. It tests: adding NaN to arrays, replacing NaN with other numbers, maintaining arrays with unchanged NaN values, and removing NaN elements from arrays. The tests ensure consistent behavior between object and array operations involving NaN values.
@@ -1 +0,0 @@
-The goal of this update is to fix NaN value handling in our JavaScript object diffing functionality. Currently, the diff function fails to properly recognize that two NaN values should be treated as equal due to JavaScript's native behavior where `NaN !== NaN`. This causes incorrect change detection when comparing objects or arrays containing NaN values. The solution involves modifying the diff function to explicitly check for NaN values using `Number.isNaN()` during comparisons of object keys and values, ensuring NaN values are treated as equivalent. The implementation requires adding specific NaN equivalence checks while maintaining existing comparison logic. Additionally, comprehensive unit tests are being added to verify correct handling across various scenarios: creating objects/arrays with NaN values, changing NaN values to other values, ensuring no false positives when NaN values remain unchanged, and properly tracking removal of NaN values from both objects and arrays. This change will bring the diff behavior in line with mathematical expectations for NaN comparisons while maintaining all other existing functionality.
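The fix hinges on one rule: treat two NaNs as equal before falling back to ordinary equality, since NaN never equals itself natively. The same check ported to Rust for illustration (microdiff itself is JavaScript and uses `Number.isNaN()`):

```rust
// Two NaNs count as "no change"; otherwise fall back to plain equality.
fn values_equal(a: f64, b: f64) -> bool {
    (a.is_nan() && b.is_nan()) || a == b
}

fn main() {
    assert!(values_equal(f64::NAN, f64::NAN)); // no spurious change detected
    assert!(!values_equal(f64::NAN, 1.0));     // NaN -> number is a real change
    assert!(values_equal(2.0, 2.0));
}
```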
@@ -0,0 +1,4 @@
+# Pull Request: https://github.com/zed-industries/zed/pull/27934
+url = "https://github.com/zed-industries/zed.git"
+revision = "889bc13b7dd61c67d894cee8a6cdd87f56c6c45b"
+language_extension = "rs"
@@ -0,0 +1,3 @@
+1. The changes must add internal state to track whether the user is providing feedback comments and to store the comment editor instance. This includes adding new fields to the ActiveThread struct and initializing them appropriately when the thread is created.
+2. When a user selects negative feedback, the application should show a UI for submitting additional comments. This includes rendering a short prompt, a multi-line text editor, and submit/cancel buttons. The editor should only be created when first needed, and the UI should be dismissed and cleaned up after submission or cancellation.
+3. On submit, the system must report the negative feedback as before, and if the comment field is not empty, it must also log the comment as a separate telemetry event. Positive feedback handling should remain unchanged and bypass the comment UI entirely.
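Point 1 implies roughly this shape for the new state. A hedged sketch; the field names and editor type here are guesses, and the real struct lives in `zed/crates/agent/src/active_thread.rs`:

```rust
// Stand-in for the comment editor; the real code would hold a GPUI editor entity.
struct CommentEditor {
    text: String,
}

struct ActiveThread {
    // ...existing fields elided...
    /// True while the negative-feedback comment UI is showing.
    showing_feedback_comments: bool,
    /// Created lazily the first time the comment UI is opened.
    feedback_comment_editor: Option<CommentEditor>,
}

fn main() {
    let thread = ActiveThread {
        showing_feedback_comments: false,
        feedback_comment_editor: None,
    };
    // Positive feedback never touches this state; negative feedback sets the
    // flag and creates the editor on first use.
    assert!(!thread.showing_feedback_comments && thread.feedback_comment_editor.is_none());
}
```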
@@ -0,0 +1 @@
+Add support for optional user comments when a thumbs down is given on a thread. Comments should be submitted along with the reaction and logged if provided. Make sure the UI highlights the icon when the user clicks it.
@@ -0,0 +1,3 @@
+1. The first tool call should perform a regex search for terms related to "negative feedback." Since no specific file path or code snippet was provided, regex is necessary to locate relevant content. Once the matching files are found, their contents should be read.
+2. Only the `zed/crates/agent/src/active_thread.rs` file needs to be edited. All logic related to reactions and negative comments should be contained within this file.
+3. A comment box should appear only when the negative reaction is clicked. The positive reaction behavior should remain unchanged.
@@ -1,3 +0,0 @@
-url = "https://github.com/redis/redis-vl-python.git"
-revision = "494e5e2f8cf800b90c7383385095c2e503404bc5"
-language_extension = "py"
@@ -1,3 +0,0 @@
-1. The changes involve renaming the `TestData` class to `LabeledData` across multiple files. This includes updating the import statements in `__init__.py`, `cache.py`, `router.py`, `schema.py`, and `utils.py` to reflect this new class name. The `__all__` list in `__init__.py` is also updated to export `LabeledData` instead of `TestData`. This appears to be a conceptual renaming to better reflect the purpose of the data structure.
-2. The modifications update all function signatures and type hints that previously used `TestData` to now use `LabeledData`. This affects several functions in `cache.py` including `_generate_run_cache`, `_eval_cache`, and `_grid_search_opt_cache`, as well as functions in `router.py` like `_generate_run_router` and `_eval_router`. The utility functions in `utils.py` are also updated to work with `LabeledData` instead of `TestData`.
-3. The changes introduce a new `search_step` parameter in the router optimization logic within `router.py`, with a default value of 0.10. This parameter is passed through to the `_router_random_search` function and is used in the optimization process. The test file `test_threshold_optimizer.py` is updated to explicitly set this parameter to 0.5 when calling the optimize method, demonstrating how it can be configured for different search granularities during threshold optimization.
@@ -1 +0,0 @@
-I need to refactor our codebase to improve the clarity and consistency of our data model, particularly around how we handle labeled evaluation data for our threshold optimization system. Currently, the naming and structure might imply that this data is only used for testing, when in reality it represents labeled examples that power both training and evaluation. The changes should better reflect that these are curated data points with known outcomes, not just test cases. Focus on updating the core data model and ensuring all dependent components—like the cache optimizer, router, and evaluation utilities—properly reference this updated concept. The implementation should maintain all existing functionality while making the naming more semantically accurate. Where relevant, consider adding parameters to fine-tune optimization behavior, like allowing control over the granularity of threshold searches.
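The `search_step` parameter from point 3 controls how coarsely candidate thresholds are swept. A minimal illustration of that idea (redisvl itself is Python; this is a sketch of the concept, not its code):

```rust
// Sweep thresholds in [0, 1] at the given step and keep the best-scoring one.
fn best_threshold(eval: impl Fn(f64) -> f64, search_step: f64) -> f64 {
    let mut best_t = 0.0;
    let mut best_score = f64::MIN;
    let mut t = 0.0;
    while t <= 1.0 {
        let score = eval(t);
        if score > best_score {
            best_score = score;
            best_t = t;
        }
        t += search_step;
    }
    best_t
}

fn main() {
    // Toy objective peaking at 0.7; a step of 0.10 checks 11 candidates,
    // while 0.5 (as in the updated test) checks only 3.
    let best = best_threshold(|t| 1.0 - (t - 0.7_f64).abs(), 0.10);
    println!("best threshold ≈ {best:.2}");
}
```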
@@ -1,3 +0,0 @@
-url = "https://github.com/matryer/goblueprints.git"
-revision = "68041a598865cc3f4fa2acd4119081a2ea0826bf"
-language_extension = "go"
@@ -1,12 +0,0 @@
-1. The main.go changes introduce rate-limited endpoints by creating them via `MakeEndpoints` and passing them to both HTTP and gRPC servers instead of directly using the service. This includes:
-- Adding endpoint creation before server startup
-- Modifying the HTTP server to use endpoints
-- Modifying the gRPC server to use endpoints
-2. The server_grpc.go changes update the gRPC server implementation to use the provided endpoints instead of creating them internally. This affects both hash and validate endpoints, which are now taken from the Endpoints struct rather than being created via makeHashEndpoint/makeValidateEndpoint.
-3. The server_http.go changes mirror the gRPC server changes, modifying the HTTP server to use endpoints from the Endpoints struct rather than creating them internally for both hash and validate routes.
-4. The service.go changes include:
-- Renaming makeHashEndpoint to MakeHashEndpoint and making it public
-- Renaming makeValidateEndpoint to MakeValidateEndpoint and making it public
-- Adding a new MakeEndpoints function that creates rate-limited endpoints using a token bucket (5 requests per second)
-- Adding new dependencies for rate limiting (kitrl and ratelimit packages)
-- The Endpoints struct remains the same but is now populated with rate-limited versions of the endpoints
@@ -1,18 +0,0 @@
-Here’s a more abstract, goal-oriented version of your request without diving into implementation specifics:
-
----
-
-### **Request: Add Rate Limiting to Vault Service**
-
-We need to introduce rate limiting to our vault service to protect it from excessive traffic and ensure fair usage. The service currently handles password hashing and validation through both HTTP and gRPC, and we want to enforce a controlled request rate across all endpoints.
-
-#### **Key Requirements:**
-- Apply a global rate limit (e.g., 5 requests per second) to prevent abuse.
-- Ensure the rate limiting works consistently across both HTTP and gRPC interfaces.
-- Refactor the service to cleanly support rate limiting without breaking existing functionality.
-- Maintain flexibility so that limits can be adjusted if needed.
-
-#### **Implementation Approach (High-Level):**
-- Use a token bucket or similar algorithm for smooth rate limiting.
-- Integrate with our existing middleware/request pipeline.
-- Keep the changes minimal but scalable for future adjustments.
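The token-bucket approach suggested above refills permits at a steady rate and lets a request through only when a token is available. A minimal, self-contained sketch of the algorithm (the actual change uses go-kit's ratelimit middleware in Go, not this code):

```rust
use std::time::Instant;

// Classic token bucket: capacity caps bursts, refill rate caps sustained load.
struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec, last: Instant::now() }
    }

    /// Returns true if a request may proceed, consuming one token.
    fn allow(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.tokens = (self.tokens + elapsed * self.refill_per_sec).min(self.capacity);
        self.last = now;
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // 5 requests per second, matching the limit named in the summary.
    let mut bucket = TokenBucket::new(5.0, 5.0);
    let allowed = (0..10).filter(|_| bucket.allow()).count();
    println!("{allowed} of 10 burst requests allowed"); // 5, then throttled
}
```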
@@ -1,3 +0,0 @@
-url = "https://github.com/localtunnel/localtunnel.git"
-revision = "4c136a265c2005bcb81bf47709c8ca9b634f2fc1"
-language_extension = "js"
@@ -1,3 +0,0 @@
-1. The first change replaces the `request` module import with `axios` in Tunnel.js. This is accompanied by modifications to the request parameters where `path` and `json` fields are removed and replaced with `responseType: 'json'`. The request URI construction is also slightly modified to separate the base URI from the parameters.
-2. The second chunk shows significant changes to the request handling logic in Tunnel.js. The callback-based `request` implementation is replaced with a promise-based `axios.get` approach. The error handling is restructured to use `.catch()` instead of checking for errors in the callback. The success case now extracts data from `res.data` instead of directly from the response body, and the status code check looks at `res.status` instead of `res.statusCode`.
-3. The third chunk shows changes to package.json where the `request` dependency is removed and replaced with `axios` at version 0.17.1. The dependencies are also reordered, with `debug` and `openurl` moved up and `yargs` moved to the end of the list, though their versions remain unchanged. The devDependencies section remains untouched.
@@ -1 +0,0 @@
I need help modernizing the HTTP client in my Node.js tunneling service. The current implementation uses the older `request` library, which is now deprecated, and I'd like to switch to a more modern, promise-based alternative like `axios`. The changes should maintain all existing functionality—including error handling, retry logic, and response parsing—but improve readability and maintainability by using async/await or proper promise chaining where possible. The request parameters and response handling should be updated to match the new library's conventions while preserving the same behavior for downstream consumers. Additionally, ensure the package.json dependencies are updated accordingly, removing deprecated packages and cleaning up the dependency list. The core tunneling logic should remain unchanged; this is purely about updating the HTTP client layer to be more future-proof.
@@ -1,3 +0,0 @@
url = "https://github.com/thalissonvs/pydoll.git"
|
||||
revision = "9ea9e91c716b60a7cc8f11ecd865093d460f31aa"
|
||||
language_extension = "py"
|
||||
@@ -1,6 +0,0 @@
1. **Added RuntimeCommands and WebElement imports to page.py**

   The changes add imports for `RuntimeCommands` and `WebElement` to `page.py`. The `execute_js_script` method is renamed to `execute_script` and enhanced to support execution in the context of a `WebElement`. The method now uses `RuntimeCommands` for script evaluation.

2. **Refactored Runtime-related commands from DomCommands into a new RuntimeCommands class**

   The changes move all Runtime-related command templates and methods from `DomCommands` in `dom.py` to a new `runtime.py` file. This includes `EVALUATE_TEMPLATE`, `CALL_FUNCTION_ON_TEMPLATE`, `GET_PROPERTIES`, and their associated methods; a sketch of the protocol messages such templates produce follows this list. The `DomCommands` class now uses `RuntimeCommands` for JavaScript evaluation.

3. **Added Scripts constants and enhanced WebElement functionality**

   The changes add a new `Scripts` class to `constants.py` containing JavaScript snippets for common operations. The `element.py` file is significantly enhanced with new methods for script execution, visibility checking, and improved click handling. New exceptions are added to `exceptions.py` for better error handling.
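For context on what these command templates produce: pydoll drives the browser over the Chrome DevTools Protocol, where each command is a JSON message naming a method such as `Runtime.evaluate`. A rough Go sketch of that message shape follows; the struct and values are hypothetical illustrations (not pydoll code), though the field names follow the public CDP `Runtime` domain:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Command mirrors the general shape of a Chrome DevTools Protocol message:
// an id for matching responses, a method name, and method-specific params.
type Command struct {
	ID     int            `json:"id"`
	Method string         `json:"method"`
	Params map[string]any `json:"params"`
}

func main() {
	// Roughly what an EVALUATE_TEMPLATE-style helper expands to.
	evaluate := Command{
		ID:     1,
		Method: "Runtime.evaluate",
		Params: map[string]any{
			"expression":    "document.title",
			"returnByValue": true,
		},
	}
	out, err := json.MarshalIndent(evaluate, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Centralizing these templates in one `RuntimeCommands` class means every JavaScript-execution path builds the same message shape, which is the separation-of-concerns win the summary describes.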
@@ -1,7 +0,0 @@
I'm looking to improve our Python web automation library (pydoll) to make it more robust and maintainable, particularly around JavaScript execution and element interactions. Currently, we need to better organize our Runtime-related commands and enhance how scripts are executed in the browser context.

The main focus areas include creating a dedicated RuntimeCommands class to centralize all JavaScript-related operations, moving these functions out of DomCommands for cleaner separation of concerns. This new class would handle script evaluation, function calling, and property lookups. We should also enhance the existing page.execute_js_script method—renaming it to execute_script for clarity—and expand its functionality to support execution within specific WebElement contexts, including passing elements as arguments.

For element interactions, we need more reliable mechanisms, particularly around clicking elements. The improvements would include visibility checks, verifying elements aren't obscured, and implementing proper error handling with descriptive exceptions when interactions fail. The current click implementation should be moved to realistic_click, while the new click method would incorporate these safety checks. Additionally, we should consolidate commonly used JavaScript snippets into a centralized Scripts class for better maintainability.

The overall goal is to strengthen the library's reliability for automation tasks while making the codebase more organized and easier to maintain. These changes will provide better error handling, clearer structure, and more intuitive APIs for working with page elements and JavaScript execution. Would you be able to help break this down into actionable steps or suggest any improvements to this approach?
@@ -1,3 +0,0 @@
url = "https://github.com/basecamp/kamal.git"
|
||||
revision = "0174b872bfc34b66852cffb58514ae079f21d299"
|
||||
language_extension = "rb"
|
||||
@@ -1,7 +0,0 @@
1. The changes introduce a new `DependencyError` class in `kamal/cli.rb` alongside the existing error classes `BootError` and `HookError`. This new error class is used to handle dependency-related failures.
2. In `kamal/cli/base.rb`, a new `ensure_docker_installed` method is added which checks locally for Docker and the buildx plugin. It raises the new `DependencyError` with an appropriate message if either Docker or the buildx plugin is not found, replacing similar functionality that was previously scattered elsewhere (a sketch of this kind of check follows the list).
3. The `kamal/cli/build.rb` file is modified to use the new `ensure_docker_installed` method instead of the removed `verify_local_dependencies` method. The error handling is now consistent, using `DependencyError` instead of `BuildError` for dependency-related failures.
4. The `kamal/cli/registry.rb` file now calls `ensure_docker_installed` at the start of the login method, ensuring Docker is available before attempting registry operations.
5. The `kamal/commands/base.rb` file adds a new public `ensure_docker_installed` method that combines the checks for Docker and buildx plugin installation, moving this functionality out of the Builder class.
6. The `kamal/commands/builder.rb` file is simplified by removing the `ensure_local_dependencies_installed` method and its related private methods, as this functionality has moved to the base commands class.
7. Test files are updated to reflect these changes: `build_test.rb` now expects `DependencyError` instead of `BuildError` for dependency failures, and `registry_test.rb` adds a new test case for the Docker dependency check during login.
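The check described in item 2 is straightforward to sketch. Kamal itself is Ruby; as a language-neutral illustration, a minimal Go version of an `ensure_docker_installed`-style check might look like this (names and error messages are illustrative, not kamal's):

```go
package main

import (
	"fmt"
	"os/exec"
)

// ensureDockerInstalled verifies that Docker and the buildx plugin are
// available locally, returning a descriptive error if either is missing.
func ensureDockerInstalled() error {
	// Is the docker binary on PATH at all?
	if _, err := exec.LookPath("docker"); err != nil {
		return fmt.Errorf("dependency error: Docker is not installed locally: %w", err)
	}
	// The buildx plugin is shipped separately; probe it directly.
	if err := exec.Command("docker", "buildx", "version").Run(); err != nil {
		return fmt.Errorf("dependency error: Docker buildx plugin is not installed locally: %w", err)
	}
	return nil
}

func main() {
	if err := ensureDockerInstalled(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("docker and buildx are available")
}
```

Running the check once, up front, before builds or registry logins is exactly the consolidation the diff describes: one place to maintain the probe, one dedicated error type for its failures.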
@@ -1 +0,0 @@
I need to improve how our codebase handles Docker dependency checks and error reporting. Right now, the logic for verifying Docker and buildx installations is scattered across different classes, and the error messages aren't consistent. I'd like a more unified approach where we centralize these checks in a single place, making it easier to maintain and reuse. Additionally, we should introduce a dedicated error type for dependency-related failures instead of repurposing existing errors like BuildError. The changes should ensure that any command requiring Docker (like builds or registry logins) properly validates dependencies first, with clear error messages if something is missing. The solution should be clean, follow existing patterns in the codebase, and include any necessary test updates to reflect the new behavior.
Some files were not shown because too many files have changed in this diff.