Qwen 2.5 72B

Alibaba

Qwen 2.5 72B by Alibaba costs $0.40/M input, $1.20/M output with a 128K context window. Updated February 2026. Compare with GPT-5, Claude, Gemini & 40+ models.

Input Price

$0.40

per 1M tokens

Output Price

$1.20

per 1M tokens

Context Window

128K

tokens

Specifications

Provider: Alibaba
Model ID: qwen-2-5-72b
Input Price: $0.40 / 1M tokens
Output Price: $1.20 / 1M tokens
Context Window: 128K tokens
Max Output: 8K tokens
Capabilities: text, function_calling
Release Date: 2025-09
Notes: Open-source. Competitive with Llama 3.3 70B.

Monthly Cost Estimates

Estimated monthly costs based on different daily usage levels (assuming 50% input / 50% output split).

Daily Tokens | Monthly Cost | Annual Cost
10K $0.24 $2.88
50K $1.20 $14.40
100K $2.40 $28.80
500K $12.00 $144.00
1.0M $24.00 $288.00
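The figures above follow directly from the per-token prices; a minimal sketch of the calculation, assuming a 30-day month and the 50/50 input/output split stated above:

```python
def monthly_cost(daily_tokens, input_price=0.40, output_price=1.20,
                 input_share=0.5, days=30):
    """Estimate monthly cost in USD from daily token usage.

    Prices are USD per 1M tokens; input_share is the fraction of
    daily tokens billed at the input rate (the rest as output).
    """
    input_tokens = daily_tokens * input_share
    output_tokens = daily_tokens * (1 - input_share)
    daily_usd = (input_tokens * input_price +
                 output_tokens * output_price) / 1_000_000
    return daily_usd * days

# 100K tokens/day at a 50/50 split:
print(round(monthly_cost(100_000), 2))  # → 2.4
```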

About Qwen 2.5 72B

Qwen 2.5 72B is a large language model by Alibaba. It features a 128K token context window with up to 8K tokens of output per request. The model supports text generation and function calling.
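Since the model supports function calling, tools are typically described as JSON Schema definitions in the OpenAI-style format that most Qwen-compatible endpoints accept. The tool name and parameters below are illustrative, not part of any real API:

```python
# Illustrative tool definition in the OpenAI-style JSON Schema format;
# the function name and its parameters are made up for this example.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# A chat request would pass this under the `tools` parameter, e.g.:
# client.chat.completions.create(model="qwen-2-5-72b",
#                                messages=messages,
#                                tools=[get_weather_tool])
```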

At $0.40 per million input tokens and $1.20 per million output tokens, Qwen 2.5 72B is positioned as a cost-effective option in the Alibaba lineup. Use our Token Counter to estimate how many tokens your prompts use, and our Pricing Calculator to compare costs across all models.
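The same arithmetic applies per call. As a sketch, a hypothetical request with 2,000 input tokens and 500 output tokens would cost:

```python
INPUT_PRICE = 0.40   # USD per 1M input tokens
OUTPUT_PRICE = 1.20  # USD per 1M output tokens

def request_cost(input_tokens, output_tokens):
    """Cost in USD for one request at Qwen 2.5 72B's listed prices."""
    return (input_tokens * INPUT_PRICE +
            output_tokens * OUTPUT_PRICE) / 1_000_000

print(request_cost(2_000, 500))  # → 0.0014
```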


FAQ

How much does Qwen 2.5 72B cost?

Qwen 2.5 72B costs $0.40 per million input tokens and $1.20 per million output tokens. For a typical workload of 100K tokens/day, expect approximately $2.40/month.

What is Qwen 2.5 72B's context window?

Qwen 2.5 72B supports a context window of 128K tokens. This means your combined input prompt and output response can be up to 128K tokens. The maximum output per response is 8K tokens.
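Because input and output share the window but output has its own cap, the largest possible completion shrinks as the prompt grows. A small sketch of that constraint (assuming "128K" means 128,000 tokens and "8K" means 8,000; exact figures vary by provider):

```python
CONTEXT_WINDOW = 128_000  # total token budget (assumed 128,000)
MAX_OUTPUT = 8_000        # per-response output cap (assumed 8,000)

def max_completion_tokens(prompt_tokens):
    """Largest completion that fits, honoring both limits."""
    remaining = CONTEXT_WINDOW - prompt_tokens
    if remaining <= 0:
        raise ValueError("prompt already fills the context window")
    return min(remaining, MAX_OUTPUT)

print(max_completion_tokens(125_000))  # → 3000 (limited by the window)
print(max_completion_tokens(10_000))   # → 8000 (limited by max output)
```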

Is Qwen 2.5 72B good for my use case?

Qwen 2.5 72B supports text generation and function calling. As a budget-friendly model, it works well for high-volume tasks like classification, summarization, and simple generation. Use our Pricing Calculator to compare with alternatives.