Hacker News

+1, I wasn't able to make it work on Zed either. It would really help if woile could tell us how they made it work on their workstation.


Edit: I asked ChatGPT and tinkered around until I found a setting that could work:

{
  "agent": {
    "default_model": { "model": "hf.co/sweepai/sweep-next-edit-1.5B:latest" }
  },
  "inline_completion": {
    "default_provider": { "model": "hf.co/sweepai/sweep-next-edit-1.5B" }
  },
  "chat_panel": {
    "default_provider": { "model": "hf.co/sweepai/sweep-next-edit-1.5B" }
  }
}

Then click the AI button at the bottom (the Gemini-like logo) and select the Sweep model. You're also expected to pull the model and run it with Ollama:

ollama pull hf.co/sweepai/sweep-next-edit-1.5B
ollama run hf.co/sweepai/sweep-next-edit-1.5B
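(If you want to sanity-check that Ollama is actually serving the model before touching Zed, something like this should work; I'm assuming the default port 11434 here:)

```shell
# list locally pulled models; the sweep model should show up here
ollama list

# Zed's provider talks to Ollama's OpenAI-compatible API,
# so this endpoint should respond with the model in its list:
curl http://localhost:11434/v1/models
```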

I did ask ChatGPT about some parts of it, though, and I had to merge this setting into my other settings too, so YMMV, but it's working for me.

It's an interesting model for sure, but I'm unable to get tab/inline autocompletion in Zed. I can ask it for summaries, use it in an agentic mode of sorts, and there's a button at the top that can generate code in the file itself (which I found to be what I preferred in all this).

I asked it to generate a simple hello world server on localhost:8080 in Go, and in the end it managed, but it took me about 10 minutes. Other things, like a plain hello world, were mostly one-shot.

It's definitely an interesting model, that's for sure. We need stronger models like these; I can't imagine how strong it might be at 7B or 8B, and IIRC someone mentioned that a version at that size (or something similar) already exists.

A lot of new development is happening here to make things smaller, and I'm all for it, man!


Are you sure this works? inline_completion and chat_panel give me "Property inline_completion is not allowed." Not sure if it works regardless?


I really don't know. I had asked ChatGPT to create it, and earlier it did give me a wrong one, so I had to try out a lot of things to see how it worked on my Mac.

I then pasted that whole convo into AI Studio (Gemini Flash) to summarize it and give you the correct settings, since my settings also included some servers and their IPs from the Zed remote feature.

Sorry that it didn't work. I again asked ChatGPT about my working configuration, and here's what I got (this may also not work, so YMMV):

{
  "agent": {
    "default_model": {
      "provider": "ollama",
      "model": "hf.co/sweepai/sweep-next-edit-1.5B:latest"
    },
    "model_parameters": []
  },

  "ui_font_size": 16,
  "buffer_font_size": 15,

  "theme": {
    "mode": "system",
    "light": "One Light",
    "dark": "One Dark"
  },

  // --- OLLAMA / SWEEP CONFIG ---
  "openai": {
    "api_url": "http://localhost:11434/v1",
    "low_latency_mode": true
  },

  //  TAB AUTOCOMPLETE (THIS IS THE IMPORTANT PART)
  "inline_completion": {
    "default_provider": {
      "name": "openai",
      "model": "hf.co/sweepai/sweep-next-edit-1.5B"
    }
  },

  //  CHAT SIDEBAR
  "chat_panel": {
    "default_provider": {
      "name": "openai",
      "model": "hf.co/sweepai/sweep-next-edit-1.5B"
    }
  }
}
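If you want to verify the api_url in the openai block before restarting Zed, you can hit Ollama's OpenAI-compatible chat endpoint directly. A quick sanity check, again assuming Ollama is serving on the default port:

```shell
# should return a JSON chat completion generated by the sweep model
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "hf.co/sweepai/sweep-next-edit-1.5B:latest",
    "messages": [{"role": "user", "content": "say hello"}]
  }'
```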



