LM Studio v1 Bug

Created: January 31, 2025

Answered using O1 · Chat01

Question

Before submitting your bug report

  • I believe this is a bug. I'll try to join the Continue Discord for questions
  • I'm not able to find an open issue that reports the same bug
  • I've seen the troubleshooting guide on the Continue Docs

Relevant environment info

  • OS: Windows
  • Continue: v0.6.18

Description

The LM Studio URL gets an extra /v1/ appended, resulting in:

Error visible in LM Studio logs:

[2023-12-28 14:58:31.771] [ERROR] Unexpected endpoint or method. (POST /v1/v1/chat/completions). Returning 200 anyway

Standard Config:

```json
{
  "title": "Mistral",
  "model": "dolphin-2.6-mistral-7b",
  "contextLength": 4096,
  "provider": "lmstudio"
}
```

Fixed by manually adding the apiBase setting to the config:

```json
{
  "title": "Mistral",
  "model": "dolphin-2.6-mistral-7b",
  "contextLength": 4096,
  "provider": "lmstudio",
  "apiBase": "http://localhost:1234"
}
```

The code linked here hard-codes the v1:
https://github.com/continuedev/continue/blame/b162e8703a357c8d365cd6073bbc7fbb58ad527f/core/llm/llms/LMStudio.ts#L7

To reproduce

No response

Log output

No response

Thinking

Addressing the issue

I’m digging into a bug report where the user hit an issue with the Continue extension's LM Studio provider when using Mistral. The cause seems to be a hard-coded /v1 in LMStudio.ts line 7.

Thought about Continue bug report for 5 seconds

Answer

Thank you for the clear report! It looks like you’ve already pinpointed the underlying issue: LM Studio serves its OpenAI-compatible API under /v1 (e.g., http://localhost:1234/v1/chat/completions), but the Continue code you linked already bakes a /v1 into the default base URL, so the segment ends up doubled (/v1/v1/chat/completions).

Workaround

Manually specifying the apiBase in your config—just as you did—bypasses the hard-coded v1 in Continue’s LMStudio.ts. For example:

```json
{
  "title": "Mistral",
  "model": "dolphin-2.6-mistral-7b",
  "contextLength": 4096,
  "provider": "lmstudio",
  "apiBase": "http://localhost:1234"
}
```

With that override in place, Continue ends up posting to http://localhost:1234/v1/chat/completions (the /v1 appears only once) rather than the doubled http://localhost:1234/v1/v1/chat/completions.
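
If you want to confirm the endpoint independently of Continue, a minimal direct request looks like the sketch below. It assumes LM Studio's local server is running on the default port 1234 with the model loaded, and follows the shape of LM Studio's OpenAI-compatible chat completions API:

```typescript
// Minimal sanity check against LM Studio's OpenAI-compatible endpoint.
// Assumes the local server is running on the default port 1234 (Node 18+ for global fetch)
// and that dolphin-2.6-mistral-7b is the loaded model.
async function testLMStudioEndpoint(): Promise<void> {
  const res = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "dolphin-2.6-mistral-7b",
      messages: [{ role: "user", content: "Say hello" }],
    }),
  });
  console.log(res.status, await res.json());
}

testLMStudioEndpoint().catch(console.error);
```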

Why it happens

The relevant code (in core/llm/llms/LMStudio.ts) follows the usual “OpenAI-compatible” convention and hard-codes a /v1 into its default base URL, while the shared request path Continue builds on top of it also contains v1/chat/completions. With the default in place, both pieces contribute a /v1, so requests land on /v1/v1/chat/completions; overriding apiBase with the bare host removes one of them, and the path resolves to the endpoint LM Studio actually serves.
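
To make the doubling concrete, here is an illustrative sketch (not the actual Continue source; the exact URL-joining logic in the codebase may differ, but the observed result is the same):

```typescript
// Illustrative only; not the actual Continue implementation.
// Both the provider's hard-coded default and the shared request path carry a /v1.
const hardCodedApiBase = "http://localhost:1234/v1"; // assumed default from the linked LMStudio.ts line
const requestPath = "/v1/chat/completions";          // assumed path added by the OpenAI-compatible client

console.log(hardCodedApiBase + requestPath);
// -> http://localhost:1234/v1/v1/chat/completions  (the doubled path from the error log)

const userApiBase = "http://localhost:1234";         // the apiBase override from the workaround
console.log(userApiBase + requestPath);
// -> http://localhost:1234/v1/chat/completions      (the endpoint LM Studio serves)
```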

Next steps

  • For Continue maintainers: update LMStudio.ts so the /v1 is not appended twice, either by dropping it from the hard-coded default or by appending it only when the configured apiBase doesn’t already include it (see the sketch after this list).
  • In the meantime: keep using the apiBase override; it’s the simplest fix until the code is updated.
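
A rough sketch of the conditional-append approach; the helper name and its placement are assumptions for illustration, not Continue’s actual code:

```typescript
// Hypothetical helper; the name and call site are assumptions, not Continue's actual code.
// Idea: append /v1 exactly once, so "http://localhost:1234" and
// "http://localhost:1234/v1/" both normalize to the same base URL.
function normalizeLMStudioApiBase(apiBase: string): string {
  const trimmed = apiBase.replace(/\/+$/, ""); // drop trailing slashes
  return trimmed.endsWith("/v1") ? trimmed : `${trimmed}/v1`;
}

console.log(normalizeLMStudioApiBase("http://localhost:1234"));     // http://localhost:1234/v1
console.log(normalizeLMStudioApiBase("http://localhost:1234/v1/")); // http://localhost:1234/v1
```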

Thanks again for taking the time to troubleshoot and file this! Feel free to add any additional logs or info you have.
