feat(opencode): add model detection for local providers #8359
base: dev
Conversation
The following comment was made by an LLM; it may be inaccurate: Potential Duplicate PRs Found. Based on the search results, there are two PRs that appear to be related to, or potentially address, similar functionality. Recommendation: review these PRs to check whether they have been merged, closed, or superseded, and whether PR #8359 builds upon or duplicates their work.
I was going to add this this week, some notes:
I wanted to add this as well! LM Studio does have an API to fetch each model's info, including context length, etc.
It's not that Models.dev is "completely dropped". Any model that exists in models.dev but isn't in the models endpoint should be dropped, but that doesn't mean dropping all the useful metadata; it's more about filtering out potentially unsupported models.
Then we should definitely use that. I want to do the same for Ollama, but that can be separate.
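The LM Studio metadata endpoint discussed above could be queried along these lines. This is a hedged sketch only: the `/api/v0/models` path and the `max_context_length` field are assumptions about LM Studio's beta REST API, and `toLimit` is a hypothetical helper, not part of the opencode codebase.

```typescript
// Sketch: fetch per-model metadata from LM Studio's beta REST API.
// Assumption: GET /api/v0/models returns { data: [{ id, max_context_length, ... }] }.
interface LMStudioModel {
  id: string
  max_context_length?: number
}

async function fetchLMStudioModels(
  host = "http://localhost:1234",
): Promise<LMStudioModel[]> {
  const res = await fetch(`${host}/api/v0/models`)
  if (!res.ok) throw new Error(`LM Studio API error: ${res.status}`)
  const body = (await res.json()) as { data: LMStudioModel[] }
  return body.data
}

// Hypothetical adapter: map an LM Studio entry to a context-limit record,
// falling back to a conservative default when the field is absent.
function toLimit(m: LMStudioModel): { id: string; context: number } {
  return { id: m.id, context: m.max_context_length ?? 4096 }
}
```

The OpenAI-compatible `GET /v1/models` response only lists model ids, so a richer provider-specific endpoint like this would be where context lengths come from.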
@rekram1-node I would love to help out on this if possible.
Yeah, up to you all. I'd really like to ship this week, so @tomzhu1024, if you aren't going to be available, I think @goniz or I will take care of this. I guess we can build off this PR easily.
Got it. This is definitely the optimal way. Thanks for the clarification! I was planning to iterate on this PR later today. My plan is to 1) remove the extra config option, 2) still honor the model info from Models.dev (if entries do exist according to the endpoint), and 3) leave some room for provider-specific model info retrieval (like @goniz mentioned). I'm a little unsure about expanding this feature to all providers. But feel free to take it over; just let me know so I can shift to something else.
All the providers should work. I can test and iterate to make sure it works, but I know you would use your GitHub token from the OAuth exchange for Copilot, an API key for Zen, etc., so I think most things will be pretty standard.
I see. That sounds good. I'll then try to 4) expand it to all providers.
What does this PR do?
This PR adds automatic model detection/discovery for local providers that use `npm: @ai-sdk/openai-compatible` and don't require API keys (e.g., LM Studio).

It introduces a new config field, `provider.<name>.options.autoDetectModels` (default: `false`). When set to `true`, OpenCode will fetch models via the OpenAI-compatible API (`GET /v1/models`) and make them available in the UI (e.g., `/models`). Any models defined in `provider.<name>.models` are still honored. Model information from Models.dev is ignored, but is used as a fallback if the fetch fails.

Fixes #6231
Fixes #4232
Fixes #2456
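The detection-plus-fallback flow described above can be sketched as follows. This is an illustrative sketch only: `detectModels`, `mergeModels`, and the `ModelInfo` shape are hypothetical names, not opencode's actual internals; only the `GET /v1/models` call and the precedence rules come from the description.

```typescript
// Minimal model record; real provider entries carry more metadata.
interface ModelInfo {
  id: string
  contextLength?: number
}

// GET /v1/models on an OpenAI-compatible server (e.g. LM Studio at
// http://localhost:1234/v1) returns { data: [{ id: "..." }, ...] }.
async function detectModels(baseURL: string): Promise<ModelInfo[]> {
  const res = await fetch(new URL("models", baseURL + "/"))
  if (!res.ok) throw new Error(`GET /v1/models failed: ${res.status}`)
  const body = (await res.json()) as { data: { id: string }[] }
  return body.data.map((m) => ({ id: m.id }))
}

// Precedence sketch: explicitly configured models always win; detected
// models fall back to Models.dev metadata when an entry with the same id
// exists there, otherwise they keep the bare id from the endpoint.
function mergeModels(
  configured: ModelInfo[],
  detected: ModelInfo[],
  modelsDev: Map<string, ModelInfo>,
): ModelInfo[] {
  const out = new Map<string, ModelInfo>()
  for (const m of detected) out.set(m.id, modelsDev.get(m.id) ?? m)
  for (const m of configured) out.set(m.id, m)
  return [...out.values()]
}
```

If the fetch in `detectModels` throws, the caller would fall back to the Models.dev listing alone, matching the fallback behavior described above.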
Note: I understand it may be by design to rely entirely on Models.dev for providers and models, but having OpenCode core automatically populate the model list would be a major improvement for users running LM Studio or other local providers.

How did you verify your code works?
- Loaded two models in LM Studio (`openai/gpt-oss-20b` and `zai-org/glm-4.6v-flash`).
- Configured `~/.config/opencode/opencode.json` with two LM Studio providers and two models. One of the models does not exist in LM Studio, but should still appear later. (File attached: opencode.json)
- Inspected the `providers` value at `packages/opencode/src/provider/provider.ts#958`, and it looked correct to me.
- Ran `bun dev -- models`, and the output looked correct.
- Ran `bun dev`, executed `/models`, and the output also looked correct.