
DeepSeek V4 Agent Setup: OpenCode, Codex, Cline, Kilo Code, Roo Code

Configure DeepSeek V4 Flash or V4 Pro in major coding agents. Covers OpenCode, Codex, Cline, Kilo Code, and Roo Code with OpenAI-compatible and Anthropic-compatible endpoints.

DevTk.AI · 2026-04-28 · Updated 2026-04-28 · 3 min read

DeepSeek V4 is easiest to use in agents that support either OpenAI-compatible chat completions or Anthropic-compatible messages. The two base URLs are:

OpenAI compatible:    https://api.deepseek.com
Anthropic compatible: https://api.deepseek.com/anthropic

Use deepseek-v4-flash for cheap high-volume work and deepseek-v4-pro for harder coding or long-horizon agent tasks. DeepSeek’s current coding-agent guide recommends V4 Pro for reasoning-heavy OpenCode work.

Official references: DeepSeek Coding Agents integration, DeepSeek Anthropic API, and DeepSeek pricing.
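Before wiring up any agent, it can help to smoke-test the key and base URL directly. A minimal sketch, assuming the standard OpenAI-compatible /chat/completions path and a valid key exported as DEEPSEEK_API_KEY:

```shell
# Smoke-test the OpenAI-compatible endpoint (requires a live API key).
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
  -d '{
        "model": "deepseek-v4-flash",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```

If this returns a JSON body with a choices array, the same key and base URL will work in every tool below.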

Shared Values

Setting               Value
API key               Your DeepSeek API key from platform.deepseek.com
OpenAI base URL       https://api.deepseek.com
Anthropic base URL    https://api.deepseek.com/anthropic
Cheap model           deepseek-v4-flash
Stronger model        deepseek-v4-pro

OpenCode

Create or update ~/.config/opencode/opencode.jsonc:

{
  "provider": {
    "deepseek": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "DeepSeek",
      "options": {
        "baseURL": "https://api.deepseek.com",
        "apiKey": "{env:DEEPSEEK_API_KEY}"
      },
      "models": {
        "deepseek-v4-pro": {
          "name": "DeepSeek-V4-Pro",
          "limit": {
            "context": 1048576,
            "output": 262144
          },
          "reasoning": true,
          "options": {
            "reasoningEffort": "max",
            "thinking": {
              "type": "enabled"
            }
          }
        }
      }
    }
  }
}

Then run:

export DEEPSEEK_API_KEY=your-deepseek-api-key
opencode

Use deepseek-v4-flash as an additional custom model only when cost matters more than coding quality.

Codex CLI

Use this only if your Codex CLI version still supports Chat Completions providers. If your version requires the Responses API only, prefer OpenCode or Claude Code for DeepSeek V4.

~/.codex/config.toml:

model = "deepseek-v4-flash"
model_provider = "deepseek"

[model_providers.deepseek]
name = "DeepSeek"
env_key = "DEEPSEEK_API_KEY"
base_url = "https://api.deepseek.com"
wire_api = "chat"

Then set the key:

export DEEPSEEK_API_KEY=your-deepseek-api-key
codex

Cline

In the Cline extension or CLI, choose an OpenAI-compatible provider:

Field             Value
API Provider      OpenAI Compatible
Base URL          https://api.deepseek.com
API Key           Your DeepSeek API key
Model ID          deepseek-v4-flash or deepseek-v4-pro
Context Window    1048576

For Cline CLI, a typical command is:

cline auth -p openai -k your-deepseek-api-key -b https://api.deepseek.com -m deepseek-v4-flash

Kilo Code and Roo Code

Both tools usually work through an OpenAI-compatible custom provider:

Provider: OpenAI Compatible
Base URL: https://api.deepseek.com
API Key: your-deepseek-api-key
Model: deepseek-v4-flash

If the tool has a separate “supports images” toggle, turn it off for DeepSeek V4. If it has a context window field, use 1048576.

Cost Guardrails

DeepSeek cache hits are the reason agent bills can be surprisingly low. Keep repository rules, system prompts, and stable context at the beginning of the prompt so the prefix can be reused. Avoid injecting timestamps, random IDs, or constantly changing logs before the stable prefix, because that can reduce cache hits.
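The ordering rule above can be sketched as a small helper that keeps the stable prefix (system prompt, repository rules) byte-identical across requests and appends volatile material last. The function name and structure are illustrative, not part of any DeepSeek SDK:

```python
def build_messages(system_prompt, repo_rules, volatile_context, user_request):
    """Stable content first (cacheable prefix), changing content last."""
    return [
        {"role": "system", "content": system_prompt},  # never changes
        {"role": "system", "content": repo_rules},     # changes rarely
        # Timestamps, logs, and other volatile data go AFTER the stable
        # prefix, so they do not invalidate the cached portion.
        {"role": "user", "content": f"{volatile_context}\n\n{user_request}"},
    ]

msgs = build_messages(
    "You are a coding agent.",
    "Repository rules: use tabs, run tests before committing.",
    "Recent log output: ...",
    "Fix the failing test.",
)
```

Any request that starts with the same two system messages can reuse the cached prefix; only the final user message varies per turn.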
