GPT-5.5 in Codex: API Pricing, Model IDs, and DeepSeek Routing
Updated April 2026. GPT-5.5 is available in Codex and the API at $5 input / $30 output per 1M tokens, with cached input at $0.50 per 1M. Compare GPT-5.2-Codex, GPT-5.5 Pro, and DeepSeek V4 Flash routing costs.
OpenAI now positions GPT-5.5 as its frontier model for complex coding and professional work, and it is available inside Codex. The API model ID is gpt-5.5. The dedicated Codex API model keeps its own ID: the current public model ID is gpt-5.2-codex, not gpt-5.5-codex.
Official references: OpenAI GPT-5.5 model docs, GPT-5.5 Pro docs, GPT-5.2-Codex docs, and DeepSeek V4 pricing.
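If you are wiring these IDs into code, here is a minimal sketch using the OpenAI Python SDK's Responses API; the prompts are placeholders, and you should confirm the exact surface against the docs linked above for your SDK version:

```python
# Minimal sketch: selecting the model IDs above with the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Frontier model for hard coding, review, and architecture work.
frontier = client.responses.create(
    model="gpt-5.5",
    input="Review this diff for concurrency bugs: ...",
)

# Dedicated Codex API model -- note the ID is gpt-5.2-codex, not gpt-5.5-codex.
codex = client.responses.create(
    model="gpt-5.2-codex",
    input="Refactor the payment module into smaller services.",
)

print(frontier.output_text)
```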
Current OpenAI Coding Models
| Model | API model ID | Input ($/1M) | Cached input ($/1M) | Output ($/1M) | Context | Max output | Best use |
|---|---|---|---|---|---|---|---|
| GPT-5.5 | gpt-5.5 | $5.00 | $0.50 | $30.00 | 1.05M | 128K | Hard coding, review, architecture |
| GPT-5.5 Pro | gpt-5.5-pro | $30.00 | - | $180.00 | 1.05M | 128K | Highest-precision professional tasks |
| GPT-5.2-Codex | gpt-5.2-codex | $1.75 | $0.175 | $14.00 | 400K | 128K | Long-horizon agentic coding in Codex-like tools |
| GPT-5.4 | gpt-5.4 | $2.50 | $0.25 | $15.00 | 1M | 128K | Cheaper OpenAI professional work |
For GPT-5.5, OpenAI lists higher long-context pricing once a prompt exceeds 272K input tokens: $10/1M input, $1/1M cached input, and $45/1M output, applied to the full session.
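To see how that tier changes a bill, here is a hedged sketch of a per-request cost function. It assumes the 272K threshold is measured over all input tokens (cached plus cache-miss) and that crossing it reprices the entire request, per the description above; verify both details against the official docs:

```python
# Sketch of GPT-5.5 per-request cost with the long-context tier.
STANDARD = {"input": 5.00, "cached": 0.50, "output": 30.00}   # $ per 1M tokens
LONG_CTX = {"input": 10.00, "cached": 1.00, "output": 45.00}  # $ per 1M tokens

def gpt55_cost(input_tok: int, cached_tok: int, output_tok: int) -> float:
    # Assumption: cached tokens count toward the 272K threshold.
    rates = LONG_CTX if (input_tok + cached_tok) > 272_000 else STANDARD
    return (input_tok * rates["input"]
            + cached_tok * rates["cached"]
            + output_tok * rates["output"]) / 1_000_000

# A 300K-input prompt crosses the tier: every token bills at long-context rates.
print(f"${gpt55_cost(300_000, 0, 8_000):.2f}")  # $3.36 instead of $1.74
```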
How It Compares With DeepSeek V4
DeepSeek V4 Flash is the model to overweight when you care about traffic volume and cache-heavy agent loops.
| Model | Cache-miss input ($/1M) | Cached input ($/1M) | Output ($/1M) | Context | Practical role |
|---|---|---|---|---|---|
| DeepSeek V4 Flash | $0.14 | $0.0028 | $0.28 | 1M | Default low-cost agent traffic |
| DeepSeek V4 Pro | $0.435 | $0.003625 | $0.87 | 1M | Discounted stronger DeepSeek route |
| GPT-5.2-Codex | $1.75 | $0.175 | $14.00 | 400K | OpenAI coding-specialist escalation |
| GPT-5.5 | $5.00 | $0.50 | $30.00 | 1.05M | Frontier escalation |
If a coding agent repeatedly sends the same repo context, DeepSeek’s cache-hit pricing changes the math. A 10M-token run with 90% cached input, 5% cache-miss input, and 5% output is roughly:
- DeepSeek V4 Flash: 9M × $0.0028 + 0.5M × $0.14 + 0.5M × $0.28 ≈ $0.235
- GPT-5.2-Codex: 9M × $0.175 + 0.5M × $1.75 + 0.5M × $14.00 = $9.45
- GPT-5.5: 9M × $0.50 + 0.5M × $5.00 + 0.5M × $30.00 = $22.00
That is why real DeepSeek bills can look surprisingly low. In China pricing terms, the same traffic shape on DeepSeek V4 Flash comes to roughly ¥1.68 before tax or payment effects, which matches reports of 10M-token agent sessions costing only a few yuan.
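For reuse, the arithmetic above generalizes to a small cost function. The rates come from this article's tables and the 90/5/5 split mirrors the example; neither reflects any provider API:

```python
# Sketch reproducing the 10M-token comparison above (90% cached, 5% miss, 5% output).
RATES = {  # $ per 1M tokens, from the tables in this article
    "deepseek-v4-flash": {"miss": 0.14, "cached": 0.0028, "out": 0.28},
    "gpt-5.2-codex":     {"miss": 1.75, "cached": 0.175,  "out": 14.00},
    "gpt-5.5":           {"miss": 5.00, "cached": 0.50,   "out": 30.00},
}

def run_cost(model: str, total_tok: int,
             cached: float = 0.90, miss: float = 0.05, out: float = 0.05) -> float:
    r = RATES[model]
    return total_tok / 1e6 * (cached * r["cached"] + miss * r["miss"] + out * r["out"])

for m in RATES:
    print(f"{m}: ${run_cost(m, 10_000_000):.2f}")
# deepseek-v4-flash: $0.24, gpt-5.2-codex: $9.45, gpt-5.5: $22.00
```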
Recommended Routing Weights
For most developer-tool traffic, do not make Codex or GPT-5.5 the default route. Use DeepSeek for high-volume context-heavy work, then escalate selectively.
```python
routing_config = {
    "deepseek":        {"model": "deepseek-v4-flash", "weight": 45},
    "gemini_flash":    {"model": "gemini-2.5-flash",  "weight": 25},
    "openai_codex":    {"model": "gpt-5.2-codex",     "weight": 15},
    "openai_frontier": {"model": "gpt-5.5",           "weight": 10},
    "anthropic":       {"model": "claude-sonnet-4-6", "weight": 5},
}
```
Use gpt-5.5 when the task needs frontier reasoning, difficult multi-file design, security review, or high-stakes correctness. Use gpt-5.2-codex when you specifically want OpenAI’s Codex-optimized API behavior. Use DeepSeek V4 Flash for cheap iteration, repo reading, lint/test-fix loops, and repeated context.
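Here is a hedged sketch of how those weights can drive an actual router, using the routing_config above. The needs_frontier flag is an illustrative stand-in for whatever escalation signal your own stack produces:

```python
# Weight-based routing with an explicit escalation override.
import random

def pick_route(routing_config: dict, needs_frontier: bool = False) -> str:
    """Return a model ID: frontier for high-stakes tasks, a weighted draw otherwise."""
    if needs_frontier:
        return routing_config["openai_frontier"]["model"]
    routes = list(routing_config.values())
    weights = [r["weight"] for r in routes]
    return random.choices(routes, weights=weights, k=1)[0]["model"]

print(pick_route(routing_config))                       # usually deepseek-v4-flash
print(pick_route(routing_config, needs_frontier=True))  # gpt-5.5
```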
When To Choose Each Model
| Workload | Default pick | Escalate when |
|---|---|---|
| Repeated repo exploration | DeepSeek V4 Flash | Needs final high-confidence review |
| Test fixing / lint loops | DeepSeek V4 Flash | Model gets stuck after 2-3 attempts (see the escalation sketch below) |
| Large architectural refactor | GPT-5.2-Codex or GPT-5.5 | Code quality or safety is critical |
| Final PR review | GPT-5.5 | Use GPT-5.5 Pro only for highest-stakes work |
| Cost-sensitive Chinese agent workflows | DeepSeek V4 Flash | Need stronger cross-provider validation |
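The stuck-loop rule from the table translates into a simple retry ladder. In this sketch, call_model is a hypothetical helper that runs one fix attempt and returns True on success:

```python
# Sketch of the stuck-loop escalation rule from the table above.
ESCALATION_LADDER = ["deepseek-v4-flash", "gpt-5.2-codex", "gpt-5.5"]
MAX_ATTEMPTS_PER_TIER = 3  # the "2-3 attempts" rule; tune to taste

def fix_with_escalation(task: str, call_model) -> str:
    """Climb the ladder, giving each tier a few tries before escalating."""
    for model in ESCALATION_LADDER:
        for _attempt in range(MAX_ATTEMPTS_PER_TIER):
            if call_model(model, task):
                return model  # report which tier finally solved it
    raise RuntimeError("All tiers exhausted; flag for human review.")
```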
Bottom Line
DeepSeek should carry a higher routing weight as traffic grows because it is dramatically cheaper when cache hit rates are high. GPT-5.5 is the premium Codex/OpenAI escalation model, not the default high-volume route.
Use the AI Model Pricing Calculator for exact costs, and compare with the DeepSeek V4 API pricing guide if your workload has repeated context.