Conversation

@tomzhu1024

@tomzhu1024 tomzhu1024 commented Jan 14, 2026

What does this PR do?

This PR adds automatic model detection/discovery for local providers that use npm: @ai-sdk/openai-compatible and don’t require API keys (e.g., LM Studio).

It introduces a new config field, provider.<name>.options.autoDetectModels (default: false). When set to true, OpenCode will fetch models via the OpenAI-compatible API (GET /v1/models) and make them available in the UI (e.g. /models). Any models defined in provider.<name>.models are still honored. Model information from Models.dev is ignored when detection succeeds and only used as a fallback if the fetch fails. An example configuration is shown below.
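For illustration, a hedged sketch of what this could look like in opencode.json for an LM Studio provider (the provider key, baseURL, and model name below are placeholders, not values taken from this PR):

    {
      "provider": {
        "lmstudio": {
          "npm": "@ai-sdk/openai-compatible",
          "options": {
            "baseURL": "http://127.0.0.1:1234/v1",
            "autoDetectModels": true
          },
          "models": {
            "openai/gpt-oss-20b": { "name": "GPT-OSS 20B (local)" }
          }
        }
      }
    }

With autoDetectModels enabled, the other models served by LM Studio would be discovered via GET /v1/models, while the configured entry above keeps its custom display name.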

Fixes #6231
Fixes #4232
Fixes #2456

Note: I understand it may be by design to rely entirely on Models.dev for providers and models, but having OpenCode core automatically populate the model list would be a major improvement for users running LM Studio or other local providers.

How did you verify your code works?

  1. I ran an LM Studio server locally and downloaded two models (openai/gpt-oss-20b and zai-org/glm-4.6v-flash).
  2. I configured my ~/.config/opencode/opencode.json with two LM Studio providers and two models. One of the models does not exist in LM Studio but should still appear in the model list. (File attached: opencode.json).
  3. I used a debugger and breakpoints to inspect the providers value at packages/opencode/src/provider/provider.ts#958, and it looked correct to me.
  4. I ran bun dev -- models, and the output looked correct.
  5. I launched the TUI with bun dev, ran /models, and the output also looked correct.
  6. From the TUI, I was able to use both local models in LM Studio and cloud models I had previously configured.
  7. I shut down the LM Studio server and repeated steps 3–6; everything behaved as expected.

@github-actions
Contributor

The following comment was made by an LLM; it may be inaccurate:

Potential Duplicate PRs Found

Based on the search results, there are two PRs that appear to be related to or potentially addressing similar functionality:

  1. PR #3427: "Fetch from model list from /v1/models/ from OpenAI Compatible APIs"

  2. PR #3726: "Adding the auto-detection of ollama local with a variable for baseURL"

Recommendation: You should review these PRs to check if they have been merged, closed, or superseded, and whether PR #8359 builds upon or duplicates their work.

@rekram1-node
Collaborator

I was going to add this this week, some notes:

  • ideally no extra config option
  • models SHOULD be sourced from the provider models endpoint, to a degree, for all OpenAI-compatible providers. Say models.dev says they have N models but the provider endpoint returns N-2; we should drop those 2 models, because normally those aren't supported by that person's subscription, or they disabled them, or something (if this makes sense). A sketch of reading the endpoint follows after this list.
  • Does the LM Studio endpoint not return any info about the model? Context size? Supported attachments? Etc.
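For illustration, a minimal sketch of sourcing the list from a provider's OpenAI-compatible endpoint, assuming the standard GET /v1/models response shape; the function name and baseURL handling are placeholders, not code from this PR:

    // Minimal sketch: list the model IDs reported by an OpenAI-compatible
    // endpoint. `baseURL` is assumed to already end in "/v1".
    interface ModelsListResponse {
      object: "list"
      data: { id: string }[]
    }

    async function listProviderModels(baseURL: string): Promise<string[]> {
      const res = await fetch(`${baseURL}/models`)
      if (!res.ok) throw new Error(`GET ${baseURL}/models failed: ${res.status}`)
      const body = (await res.json()) as ModelsListResponse
      return body.data.map((model) => model.id)
    }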

@tomzhu1024
Author

tomzhu1024 commented Jan 14, 2026

  • The LM Studio endpoint effectively returns only the model ID... Here is an example:
    - curl http://192.168.10.203:1234/v1/models
    {
      "data": [
        {
          "id": "zai-org/glm-4.6v-flash",
          "object": "model",
          "owned_by": "organization_owner"
        },
        {
          "id": "qwen/qwen3-vl-8b",
          "object": "model",
          "owned_by": "organization_owner"
        },
        {
          "id": "google/gemma-3-12b",
          "object": "model",
          "owned_by": "organization_owner"
        },
        {
          "id": "openai/gpt-oss-20b",
          "object": "model",
          "owned_by": "organization_owner"
        },
        {
          "id": "text-embedding-nomic-embed-text-v1.5",
          "object": "model",
          "owned_by": "organization_owner"
        }
      ],
      "object": "list"
    }
    
  • I believe that using LM Studio's CLI or SDK we can access more information about the models. But that will be provider-specific (e.g., it won't work for Ollama).
  • Agree with no extra config option. I tried to still honor the config file because it allows the user to provide a prettier name for the model and to replace the model ID (a rough sketch of that merge follows after this list). To achieve this, I put my logic at an early stage of the state initialization, before collecting the auth info. This means I won't be able to retrieve the model list from cloud providers that use @ai-sdk/openai-compatible (they usually require auth). So, I had to introduce an extra config option so that this feature won't cause trouble for cloud providers.
  • Agree that the model info from Models.dev should be completely dropped when the OpenAI-compatible API returns the model list. I thought the same way when I drafted this PR.
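To illustrate the merge described above, a rough sketch of layering user-configured models over auto-detected IDs; the ConfigModel shape and field names are illustrative assumptions, not opencode's actual types:

    // Rough sketch: configured entries can rename a detected model or add
    // one the endpoint did not report.
    interface ConfigModel {
      id?: string   // optional override of the provider-side model ID
      name?: string // prettier display name
    }

    function mergeModels(
      detectedIds: string[],
      configured: Record<string, ConfigModel>,
    ): { id: string; name: string }[] {
      const merged = new Map<string, { id: string; name: string }>()
      for (const id of detectedIds) merged.set(id, { id, name: id })
      for (const [key, model] of Object.entries(configured)) {
        merged.set(key, { id: model.id ?? key, name: model.name ?? key })
      }
      return [...merged.values()]
    }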

@goniz
Contributor

goniz commented Jan 14, 2026

I wanted to add this as well!
I use some Python scripts to generate opencode.json configs from the LM Studio / Ollama APIs.

LM Studio does have an API to fetch each model's info, with context length etc.
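For reference, a hedged sketch of reading richer metadata from LM Studio. It assumes LM Studio's native REST endpoint GET /api/v0/models and its max_context_length field; the exact path and field names may differ across LM Studio versions, so verify against current docs:

    // Hedged sketch: LM Studio's native (non-OpenAI-compatible) REST API,
    // assumed to report per-model metadata such as context length.
    interface LMStudioModel {
      id: string
      type: string                 // e.g. "llm" or "embeddings"
      max_context_length?: number
    }

    async function listLMStudioModels(host = "http://127.0.0.1:1234") {
      const res = await fetch(`${host}/api/v0/models`)
      if (!res.ok) throw new Error(`LM Studio API error: ${res.status}`)
      const body = (await res.json()) as { data: LMStudioModel[] }
      // Skip embedding models like the text-embedding entry shown earlier.
      return body.data.filter((model) => model.type !== "embeddings")
    }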

@rekram1-node
Collaborator

Agree that the model info from Models.dev should be completely dropped when the OpenAI-compatible API returns the model list. I thought the same way when I drafted this PR.

It's not like "completely dropped"; it's more that any model that exists in models.dev but isn't in the models endpoint should be dropped. That doesn't mean dropping all the useful metadata; it's more about filtering out potentially unsupported models.
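Something along these lines (a rough sketch; ModelsDevModel is a stand-in for whatever metadata models.dev actually provides, not opencode's real type):

    // Rough sketch: keep models.dev metadata, but only for models the
    // provider endpoint actually reports.
    interface ModelsDevModel {
      id: string
      name: string
      contextLength?: number
      attachments?: boolean
    }

    function filterToAvailable(
      modelsDev: ModelsDevModel[],
      endpointIds: string[],
    ): ModelsDevModel[] {
      const available = new Set(endpointIds)
      return modelsDev.filter((model) => available.has(model.id))
    }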

@rekram1-node
Collaborator

LM Studio does have an API to fetch each model's info, with context length etc.

Then we should definitely use that. I want to do the same for Ollama, but that can be separate.

@goniz
Contributor

goniz commented Jan 14, 2026

@rekram1-node I would love to help out on this if possible

@rekram1-node
Collaborator

Yeah, up to y'all. I'd really like to ship this week, so @tomzhu1024, if you aren't going to be available, I think @goniz or I will take care of this.

I guess we can build off this PR easily.

@tomzhu1024
Author

tomzhu1024 commented Jan 14, 2026

It's not like "completely dropped"; it's more that any model that exists in models.dev but isn't in the models endpoint should be dropped. That doesn't mean dropping all the useful metadata; it's more about filtering out potentially unsupported models.

Got it. This is definitely the optimal way. Thanks for the clarification!

I was planning to iterate on this PR later today. I'm planning to 1) remove the extra config option, 2) still honor the model info from Models.dev (if the models do exist according to the endpoint), and 3) leave some room for provider-specific model info retrieval (like @goniz mentioned). I'm a little unsure about expanding this feature to all @ai-sdk/openai-compatible providers, as some of them require auth, which brings some uncertainty (there are many auth methods, and maybe it's not an API key...). Any thoughts or suggestions?

But feel free to take it over; just let me know so I can shift to something else.

@rekram1-node
Collaborator

I'm a little unsure about expanding this feature to all @ai-sdk/openai-compatible providers, as some of them require auth, which brings some uncertainty (there are many auth methods, and maybe it's not an API key...). Any thoughts or suggestions?

All the providers should work. I can test and iterate to make sure it works, but I know you would use your GitHub token from the OAuth exchange for Copilot, an API key for Zen, etc., so I think most things will be pretty standard.
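For illustration, a hedged sketch of the same /v1/models call with a credential attached; the Bearer header is an assumption about the common case (a plain API key), and a Copilot-style provider would substitute the token obtained from its OAuth exchange:

    // Hedged sketch: model listing with an optional credential.
    async function listModelsWithAuth(baseURL: string, apiKey?: string) {
      const headers: Record<string, string> = {}
      if (apiKey) headers["Authorization"] = `Bearer ${apiKey}`
      const res = await fetch(`${baseURL}/models`, { headers })
      if (!res.ok) throw new Error(`GET ${baseURL}/models failed: ${res.status}`)
      return (await res.json()) as { data: { id: string }[] }
    }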

@tomzhu1024
Author

I see. That sounds good. I'll then try to 4) expand it to all @ai-sdk/openai-compatible providers.
