Compare commits


10 Commits

Author SHA1 Message Date
Peter Tripp
ba0115977d zed 0.151.2 2024-09-06 09:52:53 -04:00
Thorsten Ball
f7a0e01fb8 Revert FPS counter (#17485)
**UPDATE**: Response so far seems to be that this fixes the performance
issues on Intel MacBooks. So we're going to go ahead and merge it.

This reverts the FPS counter added in 11753914d (#16422) because in this
issue someone bisected recent performance regressions down to this
commit:

- https://github.com/zed-industries/zed/issues/16729

Another issue that's possibly related:

-
https://github.com/zed-industries/zed/issues/17305#issuecomment-2332316242

We're reverting this in a PR to create a bundle that people can try out.

Assets:

- Universal Binary:
https://github.com/zed-industries/zed/actions/runs/10735702994/artifacts/1900460781
- x86/Intel:
https://github.com/zed-industries/zed/actions/runs/10735702994/artifacts/1900461236
- Apple Silicon:
https://github.com/zed-industries/zed/actions/runs/10735702994/artifacts/1900460978

Release Notes:

- Removed the recently-added FPS counter since the changes it made to
the Metal renderer on macOS could lead to performance regressions on
Intel MacBooks.

Co-authored-by: Bennet <bennet@zed.dev>
2024-09-06 09:51:54 -04:00
Thorsten Ball
26d33b7139 Fix Workspace references being leaked (#17497)
We noticed that the `Workspace` was never released (along with the
`Project` and everything that comes along with that) when closing a
window.

After playing around with the LeakDetector and debugging with
`cx.on_release()` callbacks, we found two culprits: the inline assistant
and the outline panel.

Both held strong references to `View<Workspace>` after PR #16589 and PR
#16845.

This PR changes both references to `WeakView<Workspace>` which fixes the
leak but keeps the behaviour the same.
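
The leak pattern here is general: a long-lived callback that captures a strong handle keeps its target alive indefinitely. A minimal std-only sketch of the fix (hypothetical `Workspace` type and `make_callback` helper, not gpui's actual `View`/`WeakView` API — `Rc::downgrade` plays the role of `View::downgrade`):

```rust
use std::rc::{Rc, Weak};

struct Workspace {
    name: String,
}

// Capture a weak handle so the callback does not keep the
// workspace alive after the window closes; upgrade on each call
// and bail out if the workspace is already gone.
fn make_callback(workspace: &Rc<Workspace>) -> impl Fn() -> Option<String> {
    let weak: Weak<Workspace> = Rc::downgrade(workspace);
    move || weak.upgrade().map(|ws| ws.name.clone())
}

fn main() {
    let ws = Rc::new(Workspace { name: "zed".into() });
    let cb = make_callback(&ws);
    assert_eq!(cb(), Some("zed".to_string()));
    drop(ws); // closing the window drops the last strong reference
    assert_eq!(cb(), None); // the callback no longer pins the workspace
}
```

This mirrors the `workspace.downgrade()` / `workspace.upgrade()` change visible in the inline-assistant diff below.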

Release Notes:

- N/A

Co-authored-by: Bennet <bennet@zed.dev>
2024-09-06 09:41:06 -04:00
apricotbucket28
16ee4e63b8 blade: Update to bf40d4f to fix crash (#17319)
See https://github.com/zed-industries/zed/issues/17005

This PR updates Blade to fix a regression from
d1dceef945,
and also properly destroys all Vulkan objects (verified by enabling
Vulkan validation in the source code, in both Wayland and X11).

Release Notes:

- Linux: Fixed crash when closing windows on Wayland.
2024-09-04 22:33:03 -06:00
Joseph T Lyons
0c74f4e0ca v0.151.x stable 2024-09-04 10:30:20 -04:00
Kirill Bulatov
7c0ad81a3d Improve outline panel performance (#17183)
Part of https://github.com/zed-industries/zed/issues/14235

* moved search results highlight calculation into the background thread,
with highlight-less representation as a fallback
* show only part of the line per search result, stop merging results into
a single line where possible, and always trim leading whitespace
* highlight results in batches
* better cache all search result data, related to rendering
* add test infra and fix folding-related issues
* improve entry display when a multi buffer has a buffer search (as in
find references)
* fix cloud notes not showing search matches
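
The first bullet — moving highlight computation off the UI thread and sending results back in batches — can be sketched generically with a channel (hypothetical `highlight_ranges` helper; the real implementation lives in the outline panel crate and uses gpui's background executor, not `std::thread`):

```rust
use std::sync::mpsc;
use std::thread;

// Compute match ranges for one line of a search result.
// Stands in for the real highlight calculation.
fn highlight_ranges(line: &str, needle: &str) -> Vec<(usize, usize)> {
    line.match_indices(needle)
        .map(|(i, m)| (i, i + m.len()))
        .collect()
}

fn main() {
    let lines: Vec<String> = vec!["fn main()".into(), "let x = main;".into()];
    let (tx, rx) = mpsc::channel();
    // Background thread: compute highlights per entry and stream them
    // back; an empty Vec acts as the highlight-less fallback.
    thread::spawn(move || {
        for (row, line) in lines.into_iter().enumerate() {
            tx.send((row, highlight_ranges(&line, "main"))).unwrap();
        }
    });
    // "UI" side: drain results as they arrive and render in batches.
    let results: Vec<_> = rx.into_iter().collect();
    assert_eq!(results.len(), 2);
}
```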

Release Notes:

- Improved outline panel performance
2024-09-02 01:51:47 +03:00
Peter Tripp
cd0841c590 zed 0.151.1 2024-08-30 09:01:14 -04:00
Peter Tripp
f44f5c5bc7 Ollama max_tokens settings (#17025)
- Support `available_models` for Ollama
- Clamp default max tokens (context length) to 16384.
- Add documentation for Ollama context configuration.
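
The clamping behavior can be sketched as follows (model table abbreviated and function name hypothetical; the full lookup is in the `ollama` crate diff below):

```rust
/// Look up a default context length by model family, then clamp it
/// so very large defaults don't exhaust memory on typical machines.
fn get_max_tokens_sketch(name: &str) -> usize {
    const DEFAULT_TOKENS: usize = 2048;
    const MAXIMUM_TOKENS: usize = 16384; // ~16 GB of RAM for many models

    // Match on the family before the ':' tag, e.g. "llama3.1:latest".
    match name.split(':').next().unwrap() {
        "phi" | "tinyllama" => 2048,
        "llama3" | "gemma2" => 8192,
        "llama3.1" | "phi3" | "command-r" => 128_000,
        _ => DEFAULT_TOKENS,
    }
    .clamp(1, MAXIMUM_TOKENS)
}

fn main() {
    // 128K advertised context is clamped down to the 16K ceiling.
    assert_eq!(get_max_tokens_sketch("llama3.1:latest"), 16_384);
    assert_eq!(get_max_tokens_sketch("phi:latest"), 2_048);
    assert_eq!(get_max_tokens_sketch("unknown-model"), 2_048);
}
```

A user who wants the full context length can override `max_tokens` per model via the `available_models` setting added in this commit.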
2024-08-30 09:00:03 -04:00
CharlesChen0823
670b3b9382 file_finder: Fix crash in new_path_prompt (#16991)
Closes #16919 

Repro steps:
  - add two worktrees
  - set `"use_system_path_prompts": false` in settings
  - `ctrl-n` to create a new file, type any characters
  - `ctrl-s` to save the file
  - type any character: crash

Release Notes:

- Fixed a crash when setting `"use_system_path_prompts": false`, or in a
remote project with two or more worktrees.
2024-08-30 08:59:50 -04:00
Joseph T Lyons
75da9af3e6 v0.151.x preview 2024-08-28 11:43:35 -04:00
32 changed files with 1968 additions and 1253 deletions

Cargo.lock (generated, 612 changed lines)

File diff suppressed because it is too large.


@@ -71,7 +71,6 @@ members = [
"crates/outline",
"crates/outline_panel",
"crates/paths",
"crates/performance",
"crates/picker",
"crates/prettier",
"crates/project",
@@ -166,7 +165,7 @@ members = [
# Tooling
#
"tooling/xtask"
"tooling/xtask",
]
default-members = ["crates/zed"]
@@ -244,7 +243,6 @@ open_ai = { path = "crates/open_ai" }
outline = { path = "crates/outline" }
outline_panel = { path = "crates/outline_panel" }
paths = { path = "crates/paths" }
performance = { path = "crates/performance" }
picker = { path = "crates/picker" }
plugin = { path = "crates/plugin" }
plugin_macros = { path = "crates/plugin_macros" }
@@ -322,9 +320,9 @@ async-watch = "0.3.1"
async_zip = { version = "0.0.17", features = ["deflate", "deflate64"] }
base64 = "0.22"
bitflags = "2.6.0"
blade-graphics = { git = "https://github.com/kvark/blade", rev = "b37a9a994709d256f4634efd29281c78ba89071a" }
blade-macros = { git = "https://github.com/kvark/blade", rev = "b37a9a994709d256f4634efd29281c78ba89071a" }
blade-util = { git = "https://github.com/kvark/blade", rev = "b37a9a994709d256f4634efd29281c78ba89071a" }
blade-graphics = { git = "https://github.com/kvark/blade", rev = "bf40d4f91fb56031e8676376dba2fc021b3e8eaf" }
blade-macros = { git = "https://github.com/kvark/blade", rev = "bf40d4f91fb56031e8676376dba2fc021b3e8eaf" }
blade-util = { git = "https://github.com/kvark/blade", rev = "bf40d4f91fb56031e8676376dba2fc021b3e8eaf" }
cargo_metadata = "0.18"
cargo_toml = "0.20"
chrono = { version = "0.4", features = ["serde"] }


@@ -135,6 +135,7 @@ impl AssistantSettingsContent {
Some(language_model::settings::OllamaSettingsContent {
api_url,
low_speed_timeout_in_seconds,
available_models: None,
});
}
},
@@ -295,7 +296,7 @@ impl AssistantSettingsContent {
_ => (None, None),
};
settings.provider = Some(AssistantProviderContentV1::Ollama {
default_model: Some(ollama::Model::new(&model)),
default_model: Some(ollama::Model::new(&model, None, None)),
api_url,
low_speed_timeout_in_seconds,
});


@@ -134,8 +134,11 @@ impl InlineAssistant {
})
.detach();
let workspace = workspace.clone();
let workspace = workspace.downgrade();
cx.observe_global::<SettingsStore>(move |cx| {
let Some(workspace) = workspace.upgrade() else {
return;
};
let Some(terminal_panel) = workspace.read(cx).panel::<TerminalPanel>(cx) else {
return;
};


@@ -107,8 +107,10 @@ impl Match {
if let Some(path_match) = &self.path_match {
text.push_str(&path_match.path.to_string_lossy());
let mut whole_path = PathBuf::from(path_match.path_prefix.to_string());
whole_path = whole_path.join(path_match.path.clone());
for (range, style) in highlight_ranges(
&path_match.path.to_string_lossy(),
&whole_path.to_string_lossy(),
&path_match.positions,
gpui::HighlightStyle::color(Color::Accent.color(cx)),
) {


@@ -6,7 +6,7 @@ use std::{
path::{Path, PathBuf},
rc::{Rc, Weak},
sync::{atomic::Ordering::SeqCst, Arc},
time::{Duration, Instant},
time::Duration,
};
use anyhow::{anyhow, Result};
@@ -142,12 +142,6 @@ impl App {
self
}
/// Sets a start time for tracking time to first window draw.
pub fn measure_time_to_first_window_draw(self, start: Instant) -> Self {
self.0.borrow_mut().time_to_first_window_draw = Some(TimeToFirstWindowDraw::Pending(start));
self
}
/// Start the application. The provided callback will be called once the
/// app is fully launched.
pub fn run<F>(self, on_finish_launching: F)
@@ -253,7 +247,6 @@ pub struct AppContext {
pub(crate) layout_id_buffer: Vec<LayoutId>, // We recycle this memory across layout requests.
pub(crate) propagate_event: bool,
pub(crate) prompt_builder: Option<PromptBuilder>,
pub(crate) time_to_first_window_draw: Option<TimeToFirstWindowDraw>,
}
impl AppContext {
@@ -307,7 +300,6 @@ impl AppContext {
layout_id_buffer: Default::default(),
propagate_event: true,
prompt_builder: Some(PromptBuilder::Default),
time_to_first_window_draw: None,
}),
});
@@ -1310,14 +1302,6 @@ impl AppContext {
(task, is_first)
}
/// Returns the time to first window draw, if available.
pub fn time_to_first_window_draw(&self) -> Option<Duration> {
match self.time_to_first_window_draw {
Some(TimeToFirstWindowDraw::Done(duration)) => Some(duration),
_ => None,
}
}
}
impl Context for AppContext {
@@ -1481,15 +1465,6 @@ impl<G: Global> DerefMut for GlobalLease<G> {
}
}
/// Represents the initialization duration of the application.
#[derive(Clone, Copy)]
pub enum TimeToFirstWindowDraw {
/// The application is still initializing, and contains the start time.
Pending(Instant),
/// The application has finished initializing, and contains the total duration.
Done(Duration),
}
/// Contains state associated with an active drag operation, started by dragging an element
/// within the window or by dragging into the app from the underlying platform.
pub struct AnyDrag {


@@ -16,7 +16,6 @@ mod blade;
#[cfg(any(test, feature = "test-support"))]
mod test;
mod fps;
#[cfg(target_os = "windows")]
mod windows;
@@ -52,7 +51,6 @@ use strum::EnumIter;
use uuid::Uuid;
pub use app_menu::*;
pub use fps::*;
pub use keystroke::*;
#[cfg(target_os = "linux")]
@@ -356,7 +354,7 @@ pub(crate) trait PlatformWindow: HasWindowHandle + HasDisplayHandle {
fn on_should_close(&self, callback: Box<dyn FnMut() -> bool>);
fn on_close(&self, callback: Box<dyn FnOnce()>);
fn on_appearance_changed(&self, callback: Box<dyn FnMut()>);
fn draw(&self, scene: &Scene, on_complete: Option<oneshot::Sender<()>>);
fn draw(&self, scene: &Scene);
fn completed_frame(&self) {}
fn sprite_atlas(&self) -> Arc<dyn PlatformAtlas>;
@@ -381,7 +379,6 @@ pub(crate) trait PlatformWindow: HasWindowHandle + HasDisplayHandle {
}
fn set_client_inset(&self, _inset: Pixels) {}
fn gpu_specs(&self) -> Option<GPUSpecs>;
fn fps(&self) -> Option<f32>;
#[cfg(any(test, feature = "test-support"))]
fn as_test(&mut self) -> Option<&mut TestWindow> {


@@ -9,7 +9,6 @@ use crate::{
};
use bytemuck::{Pod, Zeroable};
use collections::HashMap;
use futures::channel::oneshot;
#[cfg(target_os = "macos")]
use media::core_video::CVMetalTextureCache;
#[cfg(target_os = "macos")]
@@ -336,6 +335,17 @@ impl BladePipelines {
}),
}
}
fn destroy(&mut self, gpu: &gpu::Context) {
gpu.destroy_render_pipeline(&mut self.quads);
gpu.destroy_render_pipeline(&mut self.shadows);
gpu.destroy_render_pipeline(&mut self.path_rasterization);
gpu.destroy_render_pipeline(&mut self.paths);
gpu.destroy_render_pipeline(&mut self.underlines);
gpu.destroy_render_pipeline(&mut self.mono_sprites);
gpu.destroy_render_pipeline(&mut self.poly_sprites);
gpu.destroy_render_pipeline(&mut self.surfaces);
}
}
pub struct BladeSurfaceConfig {
@@ -438,6 +448,7 @@ impl BladeRenderer {
self.wait_for_gpu();
self.surface_config.transparent = transparent;
let surface_info = self.gpu.resize(self.surface_config);
self.pipelines.destroy(&self.gpu);
self.pipelines = BladePipelines::new(&self.gpu, surface_info);
self.alpha_mode = surface_info.alpha;
}
@@ -538,16 +549,13 @@ impl BladeRenderer {
pub fn destroy(&mut self) {
self.wait_for_gpu();
self.atlas.destroy();
self.gpu.destroy_sampler(self.atlas_sampler);
self.instance_belt.destroy(&self.gpu);
self.gpu.destroy_command_encoder(&mut self.command_encoder);
self.pipelines.destroy(&self.gpu);
}
pub fn draw(
&mut self,
scene: &Scene,
// Required to compile on macOS, but not currently supported.
_on_complete: Option<oneshot::Sender<()>>,
) {
pub fn draw(&mut self, scene: &Scene) {
self.command_encoder.start();
self.atlas.before_frame(&mut self.command_encoder);
self.rasterize_paths(scene.paths());
@@ -776,10 +784,4 @@ impl BladeRenderer {
self.wait_for_gpu();
self.last_sync_point = Some(sync_point);
}
/// Required to compile on macOS, but not currently supported.
#[cfg_attr(any(target_os = "linux", target_os = "windows"), allow(dead_code))]
pub fn fps(&self) -> f32 {
0.0
}
}


@@ -1,94 +0,0 @@
use std::sync::atomic::{AtomicU64, AtomicUsize, Ordering};
use std::sync::Arc;
const NANOS_PER_SEC: u64 = 1_000_000_000;
const WINDOW_SIZE: usize = 128;
/// Represents a rolling FPS (Frames Per Second) counter.
///
/// This struct provides a lock-free mechanism to measure and calculate FPS
/// continuously, updating with every frame. It uses atomic operations to
/// ensure thread-safety without the need for locks.
pub struct FpsCounter {
frame_times: [AtomicU64; WINDOW_SIZE],
head: AtomicUsize,
tail: AtomicUsize,
}
impl FpsCounter {
/// Creates a new `Fps` counter.
///
/// Returns an `Arc<Fps>` for safe sharing across threads.
pub fn new() -> Arc<Self> {
Arc::new(Self {
frame_times: std::array::from_fn(|_| AtomicU64::new(0)),
head: AtomicUsize::new(0),
tail: AtomicUsize::new(0),
})
}
/// Increments the FPS counter with a new frame timestamp.
///
/// This method updates the internal state to maintain a rolling window
/// of frame data for the last second. It uses atomic operations to
/// ensure thread-safety.
///
/// # Arguments
///
/// * `timestamp_ns` - The timestamp of the new frame in nanoseconds.
pub fn increment(&self, timestamp_ns: u64) {
let mut head = self.head.load(Ordering::Relaxed);
let mut tail = self.tail.load(Ordering::Relaxed);
// Add new timestamp
self.frame_times[head].store(timestamp_ns, Ordering::Relaxed);
// Increment head and wrap around to 0 if it reaches WINDOW_SIZE
head = (head + 1) % WINDOW_SIZE;
self.head.store(head, Ordering::Relaxed);
// Remove old timestamps (older than 1 second)
while tail != head {
let oldest = self.frame_times[tail].load(Ordering::Relaxed);
if timestamp_ns.wrapping_sub(oldest) <= NANOS_PER_SEC {
break;
}
// Increment tail and wrap around to 0 if it reaches WINDOW_SIZE
tail = (tail + 1) % WINDOW_SIZE;
self.tail.store(tail, Ordering::Relaxed);
}
}
/// Calculates and returns the current FPS.
///
/// This method computes the FPS based on the frames recorded in the last second.
/// It uses atomic loads to ensure thread-safety.
///
/// # Returns
///
/// The calculated FPS as a `f32`, or 0.0 if no frames have been recorded.
pub fn fps(&self) -> f32 {
let head = self.head.load(Ordering::Relaxed);
let tail = self.tail.load(Ordering::Relaxed);
if head == tail {
return 0.0;
}
let newest =
self.frame_times[head.wrapping_sub(1) & (WINDOW_SIZE - 1)].load(Ordering::Relaxed);
let oldest = self.frame_times[tail].load(Ordering::Relaxed);
let time_diff = newest.wrapping_sub(oldest) as f32;
if time_diff == 0.0 {
return 0.0;
}
let frame_count = if head > tail {
head - tail
} else {
WINDOW_SIZE - tail + head
};
(frame_count as f32 - 1.0) * NANOS_PER_SEC as f32 / time_diff
}
}


@@ -6,7 +6,7 @@ use std::sync::Arc;
use blade_graphics as gpu;
use collections::HashMap;
use futures::channel::oneshot;
use futures::channel::oneshot::Receiver;
use raw_window_handle as rwh;
use wayland_backend::client::ObjectId;
@@ -827,7 +827,7 @@ impl PlatformWindow for WaylandWindow {
_msg: &str,
_detail: Option<&str>,
_answers: &[&str],
) -> Option<oneshot::Receiver<usize>> {
) -> Option<Receiver<usize>> {
None
}
@@ -934,9 +934,9 @@ impl PlatformWindow for WaylandWindow {
self.0.callbacks.borrow_mut().appearance_changed = Some(callback);
}
fn draw(&self, scene: &Scene, on_complete: Option<oneshot::Sender<()>>) {
fn draw(&self, scene: &Scene) {
let mut state = self.borrow_mut();
state.renderer.draw(scene, on_complete);
state.renderer.draw(scene);
}
fn completed_frame(&self) {
@@ -1009,10 +1009,6 @@ impl PlatformWindow for WaylandWindow {
fn gpu_specs(&self) -> Option<GPUSpecs> {
self.borrow().renderer.gpu_specs().into()
}
fn fps(&self) -> Option<f32> {
None
}
}
fn update_window(mut state: RefMut<WaylandWindowState>) {


@@ -1,3 +1,5 @@
use anyhow::Context;
use crate::{
platform::blade::{BladeRenderer, BladeSurfaceConfig},
px, size, AnyWindowHandle, Bounds, Decorations, DevicePixels, ForegroundExecutor, GPUSpecs,
@@ -7,9 +9,7 @@ use crate::{
X11ClientStatePtr,
};
use anyhow::Context;
use blade_graphics as gpu;
use futures::channel::oneshot;
use raw_window_handle as rwh;
use util::{maybe, ResultExt};
use x11rb::{
@@ -1210,10 +1210,9 @@ impl PlatformWindow for X11Window {
self.0.callbacks.borrow_mut().appearance_changed = Some(callback);
}
// TODO: on_complete not yet supported for X11 windows
fn draw(&self, scene: &Scene, on_complete: Option<oneshot::Sender<()>>) {
fn draw(&self, scene: &Scene) {
let mut inner = self.0.state.borrow_mut();
inner.renderer.draw(scene, on_complete);
inner.renderer.draw(scene);
}
fn sprite_atlas(&self) -> Arc<dyn PlatformAtlas> {
@@ -1399,8 +1398,4 @@ impl PlatformWindow for X11Window {
fn gpu_specs(&self) -> Option<GPUSpecs> {
self.0.state.borrow().renderer.gpu_specs().into()
}
fn fps(&self) -> Option<f32> {
None
}
}


@@ -1,7 +1,7 @@
use super::metal_atlas::MetalAtlas;
use crate::{
point, size, AtlasTextureId, AtlasTextureKind, AtlasTile, Bounds, ContentMask, DevicePixels,
FpsCounter, Hsla, MonochromeSprite, PaintSurface, Path, PathId, PathVertex, PolychromeSprite,
Hsla, MonochromeSprite, PaintSurface, Path, PathId, PathVertex, PolychromeSprite,
PrimitiveBatch, Quad, ScaledPixels, Scene, Shadow, Size, Surface, Underline,
};
use anyhow::{anyhow, Result};
@@ -14,7 +14,6 @@ use cocoa::{
use collections::HashMap;
use core_foundation::base::TCFType;
use foreign_types::ForeignType;
use futures::channel::oneshot;
use media::core_video::CVMetalTextureCache;
use metal::{CAMetalLayer, CommandQueue, MTLPixelFormat, MTLResourceOptions, NSRange};
use objc::{self, msg_send, sel, sel_impl};
@@ -106,7 +105,6 @@ pub(crate) struct MetalRenderer {
instance_buffer_pool: Arc<Mutex<InstanceBufferPool>>,
sprite_atlas: Arc<MetalAtlas>,
core_video_texture_cache: CVMetalTextureCache,
fps_counter: Arc<FpsCounter>,
}
impl MetalRenderer {
@@ -252,7 +250,6 @@ impl MetalRenderer {
instance_buffer_pool,
sprite_atlas,
core_video_texture_cache,
fps_counter: FpsCounter::new(),
}
}
@@ -295,8 +292,7 @@ impl MetalRenderer {
// nothing to do
}
pub fn draw(&mut self, scene: &Scene, on_complete: Option<oneshot::Sender<()>>) {
let on_complete = Arc::new(Mutex::new(on_complete));
pub fn draw(&mut self, scene: &Scene) {
let layer = self.layer.clone();
let viewport_size = layer.drawable_size();
let viewport_size: Size<DevicePixels> = size(
@@ -323,24 +319,13 @@ impl MetalRenderer {
Ok(command_buffer) => {
let instance_buffer_pool = self.instance_buffer_pool.clone();
let instance_buffer = Cell::new(Some(instance_buffer));
let device = self.device.clone();
let fps_counter = self.fps_counter.clone();
let completed_handler =
ConcreteBlock::new(move |_: &metal::CommandBufferRef| {
let mut cpu_timestamp = 0;
let mut gpu_timestamp = 0;
device.sample_timestamps(&mut cpu_timestamp, &mut gpu_timestamp);
fps_counter.increment(gpu_timestamp);
if let Some(on_complete) = on_complete.lock().take() {
on_complete.send(()).ok();
}
if let Some(instance_buffer) = instance_buffer.take() {
instance_buffer_pool.lock().release(instance_buffer);
}
});
let completed_handler = completed_handler.copy();
command_buffer.add_completed_handler(&completed_handler);
let block = ConcreteBlock::new(move |_| {
if let Some(instance_buffer) = instance_buffer.take() {
instance_buffer_pool.lock().release(instance_buffer);
}
});
let block = block.copy();
command_buffer.add_completed_handler(&block);
if self.presents_with_transaction {
command_buffer.commit();
@@ -1132,10 +1117,6 @@ impl MetalRenderer {
}
true
}
pub fn fps(&self) -> f32 {
self.fps_counter.fps()
}
}
fn build_pipeline_state(


@@ -784,14 +784,14 @@ impl PlatformWindow for MacWindow {
self.0.as_ref().lock().bounds()
}
fn is_maximized(&self) -> bool {
self.0.as_ref().lock().is_maximized()
}
fn window_bounds(&self) -> WindowBounds {
self.0.as_ref().lock().window_bounds()
}
fn is_maximized(&self) -> bool {
self.0.as_ref().lock().is_maximized()
}
fn content_size(&self) -> Size<Pixels> {
self.0.as_ref().lock().content_size()
}
@@ -975,6 +975,8 @@ impl PlatformWindow for MacWindow {
}
}
fn set_app_id(&mut self, _app_id: &str) {}
fn set_background_appearance(&self, background_appearance: WindowBackgroundAppearance) {
let mut this = self.0.as_ref().lock();
this.renderer
@@ -1005,6 +1007,30 @@ impl PlatformWindow for MacWindow {
}
}
fn set_edited(&mut self, edited: bool) {
unsafe {
let window = self.0.lock().native_window;
msg_send![window, setDocumentEdited: edited as BOOL]
}
// Changing the document edited state resets the traffic light position,
// so we have to move it again.
self.0.lock().move_traffic_light();
}
fn show_character_palette(&self) {
let this = self.0.lock();
let window = this.native_window;
this.executor
.spawn(async move {
unsafe {
let app = NSApplication::sharedApplication(nil);
let _: () = msg_send![app, orderFrontCharacterPalette: window];
}
})
.detach();
}
fn minimize(&self) {
let window = self.0.lock().native_window;
unsafe {
@@ -1081,48 +1107,18 @@ impl PlatformWindow for MacWindow {
self.0.lock().appearance_changed_callback = Some(callback);
}
fn draw(&self, scene: &crate::Scene, on_complete: Option<oneshot::Sender<()>>) {
fn draw(&self, scene: &crate::Scene) {
let mut this = self.0.lock();
this.renderer.draw(scene, on_complete);
this.renderer.draw(scene);
}
fn sprite_atlas(&self) -> Arc<dyn PlatformAtlas> {
self.0.lock().renderer.sprite_atlas().clone()
}
fn set_edited(&mut self, edited: bool) {
unsafe {
let window = self.0.lock().native_window;
msg_send![window, setDocumentEdited: edited as BOOL]
}
// Changing the document edited state resets the traffic light position,
// so we have to move it again.
self.0.lock().move_traffic_light();
}
fn show_character_palette(&self) {
let this = self.0.lock();
let window = this.native_window;
this.executor
.spawn(async move {
unsafe {
let app = NSApplication::sharedApplication(nil);
let _: () = msg_send![app, orderFrontCharacterPalette: window];
}
})
.detach();
}
fn set_app_id(&mut self, _app_id: &str) {}
fn gpu_specs(&self) -> Option<crate::GPUSpecs> {
None
}
fn fps(&self) -> Option<f32> {
Some(self.0.lock().renderer.fps())
}
}
impl rwh::HasWindowHandle for MacWindow {


@@ -251,12 +251,7 @@ impl PlatformWindow for TestWindow {
fn on_appearance_changed(&self, _callback: Box<dyn FnMut()>) {}
fn draw(
&self,
_scene: &crate::Scene,
_on_complete: Option<futures::channel::oneshot::Sender<()>>,
) {
}
fn draw(&self, _scene: &crate::Scene) {}
fn sprite_atlas(&self) -> sync::Arc<dyn crate::PlatformAtlas> {
self.0.lock().sprite_atlas.clone()
@@ -282,10 +277,6 @@ impl PlatformWindow for TestWindow {
fn gpu_specs(&self) -> Option<GPUSpecs> {
None
}
fn fps(&self) -> Option<f32> {
None
}
}
pub(crate) struct TestAtlasState {


@@ -660,8 +660,8 @@ impl PlatformWindow for WindowsWindow {
self.0.state.borrow_mut().callbacks.appearance_changed = Some(callback);
}
fn draw(&self, scene: &Scene, on_complete: Option<oneshot::Sender<()>>) {
self.0.state.borrow_mut().renderer.draw(scene, on_complete)
fn draw(&self, scene: &Scene) {
self.0.state.borrow_mut().renderer.draw(scene)
}
fn sprite_atlas(&self) -> Arc<dyn PlatformAtlas> {
@@ -675,10 +675,6 @@ impl PlatformWindow for WindowsWindow {
fn gpu_specs(&self) -> Option<GPUSpecs> {
Some(self.0.state.borrow().renderer.gpu_specs())
}
fn fps(&self) -> Option<f32> {
None
}
}
#[implement(IDropTarget)]


@@ -11,9 +11,9 @@ use crate::{
PromptLevel, Quad, Render, RenderGlyphParams, RenderImage, RenderImageParams, RenderSvgParams,
Replay, ResizeEdge, ScaledPixels, Scene, Shadow, SharedString, Size, StrikethroughStyle, Style,
SubscriberSet, Subscription, TaffyLayoutEngine, Task, TextStyle, TextStyleRefinement,
TimeToFirstWindowDraw, TransformationMatrix, Underline, UnderlineStyle, View, VisualContext,
WeakView, WindowAppearance, WindowBackgroundAppearance, WindowBounds, WindowControls,
WindowDecorations, WindowOptions, WindowParams, WindowTextSystem, SUBPIXEL_VARIANTS,
TransformationMatrix, Underline, UnderlineStyle, View, VisualContext, WeakView,
WindowAppearance, WindowBackgroundAppearance, WindowBounds, WindowControls, WindowDecorations,
WindowOptions, WindowParams, WindowTextSystem, SUBPIXEL_VARIANTS,
};
use anyhow::{anyhow, Context as _, Result};
use collections::{FxHashMap, FxHashSet};
@@ -544,8 +544,6 @@ pub struct Window {
hovered: Rc<Cell<bool>>,
pub(crate) dirty: Rc<Cell<bool>>,
pub(crate) needs_present: Rc<Cell<bool>>,
/// We assign this to be notified when the platform graphics backend fires the next completion callback for drawing the window.
present_completed: RefCell<Option<oneshot::Sender<()>>>,
pub(crate) last_input_timestamp: Rc<Cell<Instant>>,
pub(crate) refreshing: bool,
pub(crate) draw_phase: DrawPhase,
@@ -822,7 +820,6 @@ impl Window {
hovered,
dirty,
needs_present,
present_completed: RefCell::default(),
last_input_timestamp,
refreshing: false,
draw_phase: DrawPhase::None,
@@ -1492,29 +1489,13 @@ impl<'a> WindowContext<'a> {
self.window.refreshing = false;
self.window.draw_phase = DrawPhase::None;
self.window.needs_present.set(true);
if let Some(TimeToFirstWindowDraw::Pending(start)) = self.app.time_to_first_window_draw {
let (tx, rx) = oneshot::channel();
*self.window.present_completed.borrow_mut() = Some(tx);
self.spawn(|mut cx| async move {
rx.await.ok();
cx.update(|cx| {
let duration = start.elapsed();
cx.time_to_first_window_draw = Some(TimeToFirstWindowDraw::Done(duration));
log::info!("time to first window draw: {:?}", duration);
cx.push_effect(Effect::Refresh);
})
})
.detach();
}
}
#[profiling::function]
fn present(&self) {
let on_complete = self.window.present_completed.take();
self.window
.platform_window
.draw(&self.window.rendered_frame.scene, on_complete);
.draw(&self.window.rendered_frame.scene);
self.window.needs_present.set(false);
profiling::finish_frame!();
}
@@ -3737,12 +3718,6 @@ impl<'a> WindowContext<'a> {
pub fn gpu_specs(&self) -> Option<GPUSpecs> {
self.window.platform_window.gpu_specs()
}
/// Get the current FPS (frames per second) of the window.
/// This is only supported on macOS currently.
pub fn fps(&self) -> Option<f32> {
self.window.platform_window.fps()
}
}
#[cfg(target_os = "windows")]


@@ -6,8 +6,10 @@ use ollama::{
get_models, preload_model, stream_chat_completion, ChatMessage, ChatOptions, ChatRequest,
ChatResponseDelta, OllamaToolCall,
};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use settings::{Settings, SettingsStore};
use std::{sync::Arc, time::Duration};
use std::{collections::BTreeMap, sync::Arc, time::Duration};
use ui::{prelude::*, ButtonLike, Indicator};
use util::ResultExt;
@@ -28,6 +30,17 @@ const PROVIDER_NAME: &str = "Ollama";
pub struct OllamaSettings {
pub api_url: String,
pub low_speed_timeout: Option<Duration>,
pub available_models: Vec<AvailableModel>,
}
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize, JsonSchema)]
pub struct AvailableModel {
/// The model name in the Ollama API (e.g. "llama3.1:latest")
pub name: String,
/// The model's name in Zed's UI, such as in the model selector dropdown menu in the assistant panel.
pub display_name: Option<String>,
/// The Context Length parameter to the model (aka num_ctx or n_ctx)
pub max_tokens: usize,
}
pub struct OllamaLanguageModelProvider {
@@ -61,7 +74,7 @@ impl State {
// indicating which models are embedding models,
// simply filter out models with "-embed" in their name
.filter(|model| !model.name.contains("-embed"))
.map(|model| ollama::Model::new(&model.name))
.map(|model| ollama::Model::new(&model.name, None, None))
.collect();
models.sort_by(|a, b| a.name.cmp(&b.name));
@@ -123,10 +136,32 @@ impl LanguageModelProvider for OllamaLanguageModelProvider {
}
fn provided_models(&self, cx: &AppContext) -> Vec<Arc<dyn LanguageModel>> {
self.state
.read(cx)
let mut models: BTreeMap<String, ollama::Model> = BTreeMap::default();
// Add models from the Ollama API
for model in self.state.read(cx).available_models.iter() {
models.insert(model.name.clone(), model.clone());
}
// Override with available models from settings
for model in AllLanguageModelSettings::get_global(cx)
.ollama
.available_models
.iter()
{
models.insert(
model.name.clone(),
ollama::Model {
name: model.name.clone(),
display_name: model.display_name.clone(),
max_tokens: model.max_tokens,
keep_alive: None,
},
);
}
models
.into_values()
.map(|model| {
Arc::new(OllamaLanguageModel {
id: LanguageModelId::from(model.name.clone()),


@@ -152,6 +152,7 @@ pub struct AnthropicSettingsContentV1 {
pub struct OllamaSettingsContent {
pub api_url: Option<String>,
pub low_speed_timeout_in_seconds: Option<u64>,
pub available_models: Option<Vec<provider::ollama::AvailableModel>>,
}
#[derive(Clone, Debug, Serialize, Deserialize, PartialEq, JsonSchema)]
@@ -276,6 +277,9 @@ impl settings::Settings for AllLanguageModelSettings {
anthropic.as_ref().and_then(|s| s.available_models.clone()),
);
// Ollama
let ollama = value.ollama.clone();
merge(
&mut settings.ollama.api_url,
value.ollama.as_ref().and_then(|s| s.api_url.clone()),
@@ -288,6 +292,10 @@ impl settings::Settings for AllLanguageModelSettings {
settings.ollama.low_speed_timeout =
Some(Duration::from_secs(low_speed_timeout_in_seconds));
}
merge(
&mut settings.ollama.available_models,
ollama.as_ref().and_then(|s| s.available_models.clone()),
);
// OpenAI
let (openai, upgraded) = match value.openai.clone().map(|s| s.upgrade()) {


@@ -66,40 +66,37 @@ impl Default for KeepAlive {
#[derive(Clone, Debug, Default, Serialize, Deserialize, PartialEq)]
pub struct Model {
pub name: String,
pub display_name: Option<String>,
pub max_tokens: usize,
pub keep_alive: Option<KeepAlive>,
}
// This could be dynamically retrieved via the API (1 call per model)
// curl -s http://localhost:11434/api/show -d '{"model": "llama3.1:latest"}' | jq '.model_info."llama.context_length"'
fn get_max_tokens(name: &str) -> usize {
match name {
"dolphin-llama3:8b-256k" => 262144, // 256K
_ => match name.split(':').next().unwrap() {
"mistral-nemo" => 1024000, // 1M
"deepseek-coder-v2" => 163840, // 160K
"llama3.1" | "phi3" | "command-r" | "command-r-plus" => 131072, // 128K
"codeqwen" => 65536, // 64K
"mistral" | "mistral-large" | "dolphin-mistral" | "codestral" // 32K
| "mistral-openorca" | "dolphin-mixtral" | "mixstral" | "llava"
| "qwen" | "qwen2" | "wizardlm2" | "wizard-math" => 32768,
"codellama" | "stable-code" | "deepseek-coder" | "starcoder2" // 16K
| "wizardcoder" => 16384,
"llama3" | "gemma2" | "gemma" | "codegemma" | "dolphin-llama3" // 8K
| "llava-llama3" | "starcoder" | "openchat" | "aya" => 8192,
"llama2" | "yi" | "llama2-chinese" | "vicuna" | "nous-hermes2" // 4K
| "stablelm2" => 4096,
"phi" | "orca-mini" | "tinyllama" | "granite-code" => 2048, // 2K
_ => 2048, // 2K (default)
},
/// Default context length for unknown models.
const DEFAULT_TOKENS: usize = 2048;
/// Magic number. Lets many Ollama models work with ~16GB of ram.
const MAXIMUM_TOKENS: usize = 16384;
match name.split(':').next().unwrap() {
"phi" | "tinyllama" | "granite-code" => 2048,
"llama2" | "yi" | "vicuna" | "stablelm2" => 4096,
"llama3" | "gemma2" | "gemma" | "codegemma" | "starcoder" | "aya" => 8192,
"codellama" | "starcoder2" => 16384,
"mistral" | "codestral" | "mixstral" | "llava" | "qwen2" | "dolphin-mixtral" => 32768,
"llama3.1" | "phi3" | "phi3.5" | "command-r" | "deepseek-coder-v2" => 128000,
_ => DEFAULT_TOKENS,
}
.clamp(1, MAXIMUM_TOKENS)
}
impl Model {
pub fn new(name: &str) -> Self {
pub fn new(name: &str, display_name: Option<&str>, max_tokens: Option<usize>) -> Self {
Self {
name: name.to_owned(),
max_tokens: get_max_tokens(name),
display_name: display_name
.map(ToString::to_string)
.or_else(|| name.strip_suffix(":latest").map(ToString::to_string)),
max_tokens: max_tokens.unwrap_or_else(|| get_max_tokens(name)),
keep_alive: Some(KeepAlive::indefinite()),
}
}
@@ -109,7 +106,7 @@ impl Model {
}
pub fn display_name(&self) -> &str {
&self.name
self.display_name.as_ref().unwrap_or(&self.name)
}
pub fn max_token_count(&self) -> usize {


@@ -30,10 +30,15 @@ search.workspace = true
serde.workspace = true
serde_json.workspace = true
settings.workspace = true
smol.workspace = true
theme.workspace = true
util.workspace = true
worktree.workspace = true
workspace.workspace = true
[dev-dependencies]
search = { workspace = true, features = ["test-support"] }
pretty_assertions.workspace = true
[package.metadata.cargo-machete]
ignored = ["log"]

File diff suppressed because it is too large.


@@ -1,36 +0,0 @@
[package]
name = "performance"
version = "0.1.0"
edition = "2021"
publish = false
license = "GPL-3.0-or-later"
[lints]
workspace = true
[lib]
path = "src/performance.rs"
doctest = false
[features]
test-support = [
"collections/test-support",
"gpui/test-support",
"workspace/test-support",
]
[dependencies]
anyhow.workspace = true
gpui.workspace = true
log.workspace = true
schemars.workspace = true
serde.workspace = true
settings.workspace = true
workspace.workspace = true
[dev-dependencies]
collections = { workspace = true, features = ["test-support"] }
gpui = { workspace = true, features = ["test-support"] }
settings = { workspace = true, features = ["test-support"] }
util = { workspace = true, features = ["test-support"] }
workspace = { workspace = true, features = ["test-support"] }


@@ -1 +0,0 @@
../../LICENSE-GPL


@@ -1,189 +0,0 @@
use std::time::Instant;
use anyhow::Result;
use gpui::{
div, AppContext, InteractiveElement as _, Render, StatefulInteractiveElement as _,
Subscription, ViewContext, VisualContext,
};
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use settings::{Settings, SettingsSources, SettingsStore};
use workspace::{
ui::{Label, LabelCommon, LabelSize, Tooltip},
ItemHandle, StatusItemView, Workspace,
};
const SHOW_STARTUP_TIME_DURATION: std::time::Duration = std::time::Duration::from_secs(5);
pub fn init(cx: &mut AppContext) {
PerformanceSettings::register(cx);
let mut enabled = PerformanceSettings::get_global(cx)
.show_in_status_bar
.unwrap_or(false);
let start_time = Instant::now();
let mut _observe_workspaces = toggle_status_bar_items(enabled, start_time, cx);
cx.observe_global::<SettingsStore>(move |cx| {
let new_value = PerformanceSettings::get_global(cx)
.show_in_status_bar
.unwrap_or(false);
if new_value != enabled {
enabled = new_value;
_observe_workspaces = toggle_status_bar_items(enabled, start_time, cx);
}
})
.detach();
}
fn toggle_status_bar_items(
enabled: bool,
start_time: Instant,
cx: &mut AppContext,
) -> Option<Subscription> {
for window in cx.windows() {
if let Some(workspace) = window.downcast::<Workspace>() {
workspace
.update(cx, |workspace, cx| {
toggle_status_bar_item(workspace, enabled, start_time, cx);
})
.ok();
}
}
if enabled {
log::info!("performance metrics display enabled");
Some(cx.observe_new_views::<Workspace>(move |workspace, cx| {
toggle_status_bar_item(workspace, true, start_time, cx);
}))
} else {
log::info!("performance metrics display disabled");
None
}
}
struct PerformanceStatusBarItem {
display_mode: DisplayMode,
}
#[derive(Copy, Clone, Debug)]
enum DisplayMode {
StartupTime,
Fps,
}
impl PerformanceStatusBarItem {
fn new(start_time: Instant, cx: &mut ViewContext<Self>) -> Self {
let now = Instant::now();
let display_mode = if now < start_time + SHOW_STARTUP_TIME_DURATION {
DisplayMode::StartupTime
} else {
DisplayMode::Fps
};
let this = Self { display_mode };
if let DisplayMode::StartupTime = display_mode {
cx.spawn(|this, mut cx| async move {
let now = Instant::now();
let remaining_duration =
(start_time + SHOW_STARTUP_TIME_DURATION).saturating_duration_since(now);
cx.background_executor().timer(remaining_duration).await;
this.update(&mut cx, |this, cx| {
this.display_mode = DisplayMode::Fps;
cx.notify();
})
.ok();
})
.detach();
}
this
}
}
impl Render for PerformanceStatusBarItem {
fn render(&mut self, cx: &mut gpui::ViewContext<Self>) -> impl gpui::IntoElement {
let text = match self.display_mode {
DisplayMode::StartupTime => cx
.time_to_first_window_draw()
.map_or("Pending".to_string(), |duration| {
format!("{}ms", duration.as_millis())
}),
DisplayMode::Fps => cx.fps().map_or("".to_string(), |fps| {
format!("{:3} FPS", fps.round() as u32)
}),
};
use gpui::ParentElement;
let display_mode = self.display_mode;
div()
.id("performance status")
.child(Label::new(text).size(LabelSize::Small))
.tooltip(move |cx| match display_mode {
DisplayMode::StartupTime => Tooltip::text("Time to first window draw", cx),
DisplayMode::Fps => cx
.new_view(|cx| {
let tooltip = Tooltip::new("Current FPS");
if let Some(time_to_first) = cx.time_to_first_window_draw() {
tooltip.meta(format!(
"Time to first window draw: {}ms",
time_to_first.as_millis()
))
} else {
tooltip
}
})
.into(),
})
}
}
impl StatusItemView for PerformanceStatusBarItem {
fn set_active_pane_item(
&mut self,
_active_pane_item: Option<&dyn ItemHandle>,
_cx: &mut gpui::ViewContext<Self>,
) {
// This is not currently used.
}
}
fn toggle_status_bar_item(
workspace: &mut Workspace,
enabled: bool,
start_time: Instant,
cx: &mut ViewContext<Workspace>,
) {
if enabled {
workspace.status_bar().update(cx, |bar, cx| {
bar.add_right_item(
cx.new_view(|cx| PerformanceStatusBarItem::new(start_time, cx)),
cx,
)
});
} else {
workspace.status_bar().update(cx, |bar, cx| {
bar.remove_items_of_type::<PerformanceStatusBarItem>(cx);
});
}
}
/// Configuration of the display of performance details.
#[derive(Clone, Default, Serialize, Deserialize, JsonSchema, Debug)]
pub struct PerformanceSettings {
/// Display the time to first window draw and frame rate in the status bar.
///
/// Default: false
pub show_in_status_bar: Option<bool>,
}
impl Settings for PerformanceSettings {
const KEY: Option<&'static str> = Some("performance");
type FileContent = Self;
fn load(sources: SettingsSources<Self::FileContent>, _: &mut AppContext) -> Result<Self> {
sources.json_merge()
}
}


@@ -5,6 +5,14 @@ edition = "2021"
publish = false
license = "GPL-3.0-or-later"
[features]
test-support = [
"client/test-support",
"editor/test-support",
"gpui/test-support",
"workspace/test-support",
]
[lints]
workspace = true


@@ -113,7 +113,7 @@ pub fn init(cx: &mut AppContext) {
.detach();
}
struct ProjectSearch {
pub struct ProjectSearch {
project: Model<Project>,
excerpts: Model<MultiBuffer>,
pending_search: Option<Task<Option<()>>>,
@@ -151,7 +151,7 @@ pub struct ProjectSearchView {
}
#[derive(Debug, Clone)]
struct ProjectSearchSettings {
pub struct ProjectSearchSettings {
search_options: SearchOptions,
filters_enabled: bool,
}
@@ -162,7 +162,7 @@ pub struct ProjectSearchBar {
}
impl ProjectSearch {
fn new(project: Model<Project>, cx: &mut ModelContext<Self>) -> Self {
pub fn new(project: Model<Project>, cx: &mut ModelContext<Self>) -> Self {
let replica_id = project.read(cx).replica_id();
let capability = project.read(cx).capability();
@@ -605,7 +605,7 @@ impl ProjectSearchView {
});
}
fn new(
pub fn new(
model: Model<ProjectSearch>,
cx: &mut ViewContext<Self>,
settings: Option<ProjectSearchSettings>,
@@ -751,9 +751,9 @@ impl ProjectSearchView {
});
}
// Re-activate the most recently activated search in this pane or the most recent if it has been closed.
// If no search exists in the workspace, create a new one.
fn deploy_search(
/// Re-activate the most recently activated search in this pane or the most recent if it has been closed.
/// If no search exists in the workspace, create a new one.
pub fn deploy_search(
workspace: &mut Workspace,
action: &workspace::DeploySearch,
cx: &mut ViewContext<Workspace>,
@@ -1140,6 +1140,11 @@ impl ProjectSearchView {
return self.focus_results_editor(cx);
}
}
#[cfg(any(test, feature = "test-support"))]
pub fn results_editor(&self) -> &View<Editor> {
&self.results_editor
}
}
impl ProjectSearchBar {
@@ -1752,15 +1757,31 @@ fn register_workspace_action_for_present_search<A: Action>(
});
}
#[cfg(any(test, feature = "test-support"))]
pub fn perform_project_search(
search_view: &View<ProjectSearchView>,
text: impl Into<std::sync::Arc<str>>,
cx: &mut gpui::VisualTestContext,
) {
search_view.update(cx, |search_view, cx| {
search_view
.query_editor
.update(cx, |query_editor, cx| query_editor.set_text(text, cx));
search_view.search(cx);
});
cx.run_until_parked();
}
#[cfg(test)]
pub mod tests {
use std::sync::Arc;
use super::*;
use editor::{display_map::DisplayRow, DisplayPoint};
use gpui::{Action, TestAppContext, WindowHandle};
use project::FakeFs;
use serde_json::json;
use settings::SettingsStore;
use std::sync::Arc;
use workspace::DeploySearch;
#[gpui::test]


@@ -117,7 +117,7 @@ pub struct RevealInProjectPanel {
pub entry_id: Option<u64>,
}
#[derive(PartialEq, Clone, Deserialize)]
#[derive(Default, PartialEq, Clone, Deserialize)]
pub struct DeploySearch {
#[serde(default)]
pub replace_enabled: bool,


@@ -153,17 +153,6 @@ impl StatusBar {
cx.notify();
}
pub fn remove_items_of_type<T>(&mut self, cx: &mut ViewContext<Self>)
where
T: 'static + StatusItemView,
{
self.left_items
.retain(|item| item.item_type() != TypeId::of::<T>());
self.right_items
.retain(|item| item.item_type() != TypeId::of::<T>());
cx.notify();
}
pub fn add_right_item<T>(&mut self, item: View<T>, cx: &mut ViewContext<Self>)
where
T: 'static + StatusItemView,


@@ -2,7 +2,7 @@
description = "The fast, collaborative code editor."
edition = "2021"
name = "zed"
version = "0.151.0"
version = "0.151.2"
publish = false
license = "GPL-3.0-or-later"
authors = ["Zed Team <hi@zed.dev>"]
@@ -72,7 +72,6 @@ outline.workspace = true
outline_panel.workspace = true
parking_lot.workspace = true
paths.workspace = true
performance.workspace = true
profiling.workspace = true
project.workspace = true
project_panel.workspace = true


@@ -1 +1 @@
dev
stable


@@ -268,7 +268,6 @@ fn init_ui(
welcome::init(cx);
settings_ui::init(cx);
extensions_ui::init(cx);
performance::init(cx);
cx.observe_global::<SettingsStore>({
let languages = app_state.languages.clone();
@@ -318,7 +317,6 @@ fn init_ui(
}
fn main() {
let start_time = std::time::Instant::now();
menu::init();
zed_actions::init();
@@ -330,9 +328,7 @@ fn main() {
init_logger();
log::info!("========== starting zed ==========");
let app = App::new()
.with_assets(Assets)
.measure_time_to_first_window_draw(start_time);
let app = App::new().with_assets(Assets);
let (installation_id, existing_installation_id_found) = app
.background_executor()


@@ -108,33 +108,49 @@ Custom models will be listed in the model dropdown in the assistant panel.
Download and install Ollama from [ollama.com/download](https://ollama.com/download) (Linux or macOS) and ensure it's running with `ollama --version`.
You can use Ollama with the Zed assistant by making Ollama appear as an OpenAI-compatible endpoint.
1. Download, for example, the `mistral` model with Ollama:
1. Download one of the [available models](https://ollama.com/models), for example, for `mistral`:
```sh
ollama pull mistral
```
2. Make sure that the Ollama server is running. You can start it either via running the Ollama app, or launching:
2. Make sure that the Ollama server is running. You can start it either by running Ollama.app (macOS) or by launching:
```sh
ollama serve
```
3. In the assistant panel, select one of the Ollama models using the model dropdown.
4. (Optional) If you want to change the default URL that is used to access the Ollama server, you can do so by adding the following settings:
4. (Optional) Specify a [custom api_url](#custom-endpoint) or [custom `low_speed_timeout_in_seconds`](#provider-timeout) if required.
#### Ollama Context Length {#ollama-context}
Zed has pre-configured maximum context lengths (`max_tokens`) to match the capabilities of common models. Zed API requests to Ollama include this as the `num_ctx` parameter, but the default values do not exceed `16384`, so users with ~16GB of RAM are able to use most models out of the box. See [get_max_tokens in ollama.rs](https://github.com/zed-industries/zed/blob/main/crates/ollama/src/ollama.rs) for a complete set of defaults.
**Note**: Token counts displayed in the assistant panel are only estimates and will differ from the model's native tokenizer.
Depending on your hardware or use case, you may wish to limit or increase the context length for a specific model via `settings.json`:
```json
{
"language_models": {
"ollama": {
"api_url": "http://localhost:11434",
"low_speed_timeout_in_seconds": 120,
"available_models": [
{
"provider": "ollama",
"name": "mistral:latest",
"max_tokens": 32768
}
]
}
}
}
```
If you specify a context length that is too large for your hardware, Ollama will log an error. You can watch these logs by running `tail -f ~/.ollama/logs/ollama.log` (macOS) or `journalctl -u ollama -f` (Linux). Depending on the memory available on your machine, you may need to adjust the context length to a smaller value.
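To make the `num_ctx` forwarding concrete, here is a hedged, std-only sketch of how a configured `max_tokens` could end up in an Ollama chat request body (the `request_body` helper is hypothetical; `options.num_ctx` is the real Ollama API field):

```rust
// Hypothetical sketch: forwarding a model's max_tokens as Ollama's
// `num_ctx` option in a /api/chat request body (plain string formatting,
// no external JSON crate, so escaping is ignored for brevity).
fn request_body(model: &str, prompt: &str, max_tokens: usize) -> String {
    format!(
        r#"{{"model":"{model}","messages":[{{"role":"user","content":"{prompt}"}}],"options":{{"num_ctx":{max_tokens}}}}}"#
    )
}

fn main() {
    let body = request_body("mistral", "Hello", 32768);
    // The configured context length rides along as options.num_ctx.
    assert!(body.contains(r#""num_ctx":32768"#));
    assert!(body.contains(r#""model":"mistral""#));
}
```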
### OpenAI {#openai}
1. Visit the OpenAI platform and [create an API key](https://platform.openai.com/account/api-keys)