DevTk.AI

GPT-5.5 in Codex Pricing: API Costs, Model IDs, and DeepSeek Routing

Updated April 2026. GPT-5.5 is available in Codex and the API at $5/$30 per 1M tokens with $0.50 cached input. Compare GPT-5.2-Codex, GPT-5.5 Pro, and DeepSeek V4 Flash routing costs.

DevTk.AI · 2026-04-28 · Updated 2026-04-28 · 4 min read

OpenAI now positions GPT-5.5 as its frontier model for complex coding and professional work, and it is available inside Codex. The API model ID is gpt-5.5. For the dedicated Codex API model, the current public model ID is gpt-5.2-codex, not gpt-5.5-codex.

Official references: OpenAI GPT-5.5 model docs, GPT-5.5 Pro docs, GPT-5.2-Codex docs, and DeepSeek V4 pricing.

Current OpenAI Coding Models

| Model | API model ID | Input ($/M) | Cached input ($/M) | Output ($/M) | Context | Max output | Best use |
|---|---|---|---|---|---|---|---|
| GPT-5.5 | gpt-5.5 | $5.00 | $0.50 | $30.00 | 1.05M | 128K | Hard coding, review, architecture |
| GPT-5.5 Pro | gpt-5.5-pro | $30.00 | - | $180.00 | 1.05M | 128K | Highest-precision professional tasks |
| GPT-5.2-Codex | gpt-5.2-codex | $1.75 | $0.175 | $14.00 | 400K | 128K | Long-horizon agentic coding in Codex-like tools |
| GPT-5.4 | gpt-5.4 | $2.50 | $0.25 | $15.00 | 1M | 128K | Cheaper OpenAI professional work |

For GPT-5.5, OpenAI lists higher long-context pricing when prompts exceed 272K input tokens: $10/M input, $1/M cached input, and $45/M output for the full session.
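The tier switch can be sketched as a small cost helper. This is an illustrative sketch, not an official API: the function name and the assumption that cached tokens count toward the 272K threshold are mine; the per-million rates come from the figures above.

```python
# Rates in $ per 1M tokens, taken from the pricing described above.
STANDARD = {"input": 5.00, "cached": 0.50, "output": 30.00}
LONG_CTX = {"input": 10.00, "cached": 1.00, "output": 45.00}

def gpt55_cost(input_tok, cached_tok, output_tok, threshold=272_000):
    """Estimate a GPT-5.5 session cost, applying the long-context tier
    to the full session once prompt tokens exceed the threshold.
    (Assumption: cached input counts toward the 272K prompt threshold.)"""
    rates = LONG_CTX if input_tok + cached_tok > threshold else STANDARD
    return (input_tok * rates["input"]
            + cached_tok * rates["cached"]
            + output_tok * rates["output"]) / 1_000_000

# 300K uncached input crosses 272K, so the $10/M and $45/M rates apply:
# 300_000 * $10/M + 10_000 * $45/M = $3.00 + $0.45 = $3.45
print(gpt55_cost(300_000, 0, 10_000))
```

Note the cliff: the same session priced one token under the threshold would bill at the standard rates, so prompt-size budgeting matters more here than with flat-rate models.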

How It Compares With DeepSeek V4

DeepSeek V4 Flash is the model to overweight when you care about traffic volume and cache-heavy agent loops.

| Model | Cache-miss input ($/M) | Cached input ($/M) | Output ($/M) | Context | Practical role |
|---|---|---|---|---|---|
| DeepSeek V4 Flash | $0.14 | $0.0028 | $0.28 | 1M | Default low-cost agent traffic |
| DeepSeek V4 Pro | $0.435 | $0.003625 | $0.87 | 1M | Discounted stronger DeepSeek route |
| GPT-5.2-Codex | $1.75 | $0.175 | $14.00 | 400K | OpenAI coding-specialist escalation |
| GPT-5.5 | $5.00 | $0.50 | $30.00 | 1.05M | Frontier escalation |

If a coding agent repeatedly sends the same repo context, DeepSeek’s cache-hit pricing changes the math. A 10M-token run with 90% cached input, 5% cache-miss input, and 5% output is roughly:

DeepSeek V4 Flash = 9M * $0.0028 + 0.5M * $0.14 + 0.5M * $0.28 = $0.235
GPT-5.2-Codex      = 9M * $0.175  + 0.5M * $1.75 + 0.5M * $14  = $9.45
GPT-5.5            = 9M * $0.50   + 0.5M * $5.00 + 0.5M * $30  = $22.00
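The arithmetic above generalizes to a one-line helper. A minimal sketch, with rates in dollars per 1M tokens and token counts in millions; the tuple layout and function name are my own conventions:

```python
def session_cost(rates, cached_m, miss_m, output_m):
    """Cost of a session given (cached, cache-miss, output) rates in $/M
    and token counts in millions of tokens."""
    cached_rate, miss_rate, out_rate = rates
    return cached_m * cached_rate + miss_m * miss_rate + output_m * out_rate

# (cached $/M, cache-miss $/M, output $/M), from the comparison table above.
MODELS = {
    "deepseek-v4-flash": (0.0028, 0.14, 0.28),
    "gpt-5.2-codex":     (0.175,  1.75, 14.00),
    "gpt-5.5":           (0.50,   5.00, 30.00),
}

# 10M-token run: 90% cached input, 5% cache-miss input, 5% output.
for name, rates in MODELS.items():
    print(name, round(session_cost(rates, 9.0, 0.5, 0.5), 3))
```

Varying the cache-hit share is the interesting experiment: at 0% cached input the gap between DeepSeek and GPT-5.2-Codex narrows to roughly the ratio of their list prices, so the 40x spread above is mostly a caching story.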

That is why real DeepSeek bills can look surprisingly low. In China pricing terms, the same DeepSeek V4 Flash workload comes to about ¥1.68 before tax or payment effects, which matches reports of 10M-token agent sessions costing only a few yuan.

For most developer-tool traffic, do not make Codex or GPT-5.5 the default route. Use DeepSeek for high-volume context-heavy work, then escalate selectively.

routing_config = {
    "deepseek":       {"model": "deepseek-v4-flash", "weight": 45},
    "gemini_flash":   {"model": "gemini-2.5-flash",  "weight": 25},
    "openai_codex":   {"model": "gpt-5.2-codex",     "weight": 15},
    "openai_frontier":{"model": "gpt-5.5",           "weight": 10},
    "anthropic":      {"model": "claude-sonnet-4-6", "weight": 5},
}
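One way to act on those weights is proportional sampling. A hedged sketch using stdlib `random.choices`, which draws a key with probability proportional to its weight; the function name and demo config are illustrative:

```python
import random

def pick_route(routing_config, rng=random):
    """Pick a route name in proportion to its weight, then return the
    model ID configured for that route. Weights need not sum to 100."""
    names = list(routing_config)
    weights = [routing_config[n]["weight"] for n in names]
    chosen = rng.choices(names, weights=weights, k=1)[0]
    return routing_config[chosen]["model"]

# Two-route demo config mirroring the shape of the dict above.
config = {
    "deepseek":     {"model": "deepseek-v4-flash", "weight": 45},
    "openai_codex": {"model": "gpt-5.2-codex",     "weight": 15},
}
```

In practice you would layer task-based overrides on top (e.g. force the frontier route for security review) and treat the weights only as the default for unclassified traffic.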

Use gpt-5.5 when the task needs frontier reasoning, difficult multi-file design, security review, or high-stakes correctness. Use gpt-5.2-codex when you specifically want OpenAI’s Codex-optimized API behavior. Use DeepSeek V4 Flash for cheap iteration, repo reading, lint/test-fix loops, and repeated context.

When To Choose Each Model

| Workload | Default pick | Escalate when |
|---|---|---|
| Repeated repo exploration | DeepSeek V4 Flash | Needs final high-confidence review |
| Test fixing / lint loops | DeepSeek V4 Flash | Model gets stuck after 2-3 attempts |
| Large architectural refactor | GPT-5.2-Codex or GPT-5.5 | Code quality or safety is critical |
| Final PR review | GPT-5.5 | Use GPT-5.5 Pro only for highest-stakes work |
| Cost-sensitive Chinese agent workflows | DeepSeek V4 Flash | Need stronger cross-provider validation |
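The "stuck after 2-3 attempts" rule is simple to encode. A minimal sketch, assuming a hypothetical `call_model(model_id, task)` client that returns a dict with a `tests_pass` flag; none of these names are from a real SDK:

```python
def fix_with_escalation(task, call_model, max_cheap_attempts=3):
    """Try the cheap DeepSeek route a few times; if tests still fail,
    escalate the task to the Codex-specialist route once.
    call_model(model_id, task) is a stand-in for your provider client."""
    for _ in range(max_cheap_attempts):
        result = call_model("deepseek-v4-flash", task)
        if result["tests_pass"]:
            return result
    return call_model("gpt-5.2-codex", task)
```

The key property is that the expensive model only ever sees tasks the cheap model has already failed, which is exactly the traffic shape the routing weights above assume.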

Bottom Line

DeepSeek should carry a higher routing weight for traffic growth because it is dramatically cheaper when cache hit rates are high. GPT-5.5 is the premium Codex/OpenAI escalation model, not the default high-volume route.

Use the AI Model Pricing Calculator for exact costs, and compare with the DeepSeek V4 API pricing guide if your workload has repeated context.
