OK: LLM API is running. Use /v1/chat/completions
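The path above matches the OpenAI-compatible chat-completions convention. A minimal client sketch follows, assuming the server is reachable at a hypothetical base URL (`http://localhost:8000`) and accepts the standard request schema; the model name and port are assumptions, not stated by the status line.

```python
import json

# Hypothetical base URL; replace with wherever the status line above was served from.
BASE_URL = "http://localhost:8000"

# Standard chat-completions request shape; "default" is an assumed model name.
payload = {
    "model": "default",
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
}

body = json.dumps(payload).encode("utf-8")

# To actually send the request (requires the server to be up), something like:
# import urllib.request
# req = urllib.request.Request(
#     BASE_URL + "/v1/chat/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])

print(BASE_URL + "/v1/chat/completions")
```

The commented-out send is left inert so the sketch runs without a live server; the response-parsing path (`choices[0].message.content`) follows the standard chat-completions response shape.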