Updated March 26, 2026 · Based on independent benchmark data
GPT-5.3 Codex (xhigh) leads in intelligence with a score of 54.0 vs 18.2. Magistral Small 1.2 is 3.5x cheaper at $0.50/1M input tokens vs $1.75/1M. For speed, Magistral Small 1.2 wins at 102 tok/s vs 72 tok/s.
| Metric | Magistral Small 1.2 | GPT-5.3 Codex (xhigh) |
|---|---|---|
| Intelligence Score | 18.2 | 54.0 |
| Coding Score | 14.8 | 53.1 |
| Math Score | 80.3 | N/A |
| Speed (tok/s) | 102 | 72 |
| Latency (TTFT) | 0.33s | 83.91s |
| Input Price / 1M tokens | $0.50 | $1.75 |
| Output Price / 1M tokens | $1.50 | $14 |
| Context Window | N/A | N/A |
| Max Output Tokens | N/A | N/A |
| Input Modalities | Text | Text |
| Output Modalities | Text | Text |
| Free Tier | No | No |
GPT-5.3 Codex (xhigh) outperforms Magistral Small 1.2 on the intelligence index with a score of 54.0 compared to 18.2. For coding tasks, GPT-5.3 Codex (xhigh) has the edge with a coding score of 53.1 vs 14.8.
Magistral Small 1.2 generates output significantly faster at 102 tok/s compared to GPT-5.3 Codex (xhigh)'s 72 tok/s, making it 1.4x faster for streaming responses. Time to first token is 0.33s for Magistral Small 1.2 vs 83.91s for GPT-5.3 Codex (xhigh), which affects perceived responsiveness in interactive applications.
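The effect of these two numbers combines in practice: perceived response time is the time to first token plus the streaming time for the rest of the reply. A minimal sketch using the benchmark figures above (the 500-token response length is an illustrative assumption, not from the benchmark):

```python
# Rough model of perceived response time: time-to-first-token (TTFT)
# plus streaming time for the remaining tokens at the measured rate.
# TTFT and tok/s values are the benchmark figures quoted above;
# the 500-token response length is an illustrative assumption.

def response_time(ttft_s: float, tok_per_s: float, n_tokens: int) -> float:
    """Total wall-clock time to stream a full response, in seconds."""
    return ttft_s + n_tokens / tok_per_s

magistral = response_time(ttft_s=0.33, tok_per_s=102, n_tokens=500)
codex = response_time(ttft_s=83.91, tok_per_s=72, n_tokens=500)

print(f"Magistral Small 1.2: {magistral:.1f}s")   # ~5.2s
print(f"GPT-5.3 Codex (xhigh): {codex:.1f}s")     # ~90.9s
```

For a 500-token reply, the high TTFT dominates: GPT-5.3 Codex (xhigh) spends far longer before the first token appears than Magistral Small 1.2 takes to finish the entire response.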
Magistral Small 1.2 is more affordable at $0.50/1M input tokens ($1.50/1M output), while GPT-5.3 Codex (xhigh) costs $1.75/1M input ($14/1M output). That makes GPT-5.3 Codex (xhigh) 3.5x more expensive per input token (and over 9x per output token), which can add up significantly at scale. For a typical workload of 100 requests per day at 2,000 tokens each, Magistral Small 1.2 would cost approximately $3.00/month vs $10.50/month for GPT-5.3 Codex (xhigh) in input costs alone.
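The monthly figures above can be reproduced with a quick back-of-envelope calculation, assuming a 30-day month and counting input tokens only:

```python
# Back-of-envelope monthly input cost for the workload described above:
# 100 requests/day at 2,000 input tokens each, over an assumed 30-day month.
# Prices are the per-million-token input rates from the table.

def monthly_input_cost(price_per_1m: float, requests_per_day: int,
                       tokens_per_request: int, days: int = 30) -> float:
    """Monthly input-token cost in dollars."""
    tokens = requests_per_day * tokens_per_request * days
    return tokens / 1_000_000 * price_per_1m

print(monthly_input_cost(0.50, 100, 2_000))  # 3.0   (Magistral Small 1.2)
print(monthly_input_cost(1.75, 100, 2_000))  # 10.5  (GPT-5.3 Codex (xhigh))
```

Note that output tokens, where the price gap is wider, would widen the difference further for generation-heavy workloads.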
Choose Magistral Small 1.2 when you need faster output (102 tok/s) and lower cost. Choose GPT-5.3 Codex (xhigh) when you need higher intelligence (54.0) and stronger coding performance (53.1).
GPT-5.3 Codex (xhigh) scores higher on coding benchmarks (53.1 vs 14.8), making it the better choice for programming tasks.
Magistral Small 1.2 is cheaper at $0.50/1M input tokens vs $1.75/1M for GPT-5.3 Codex (xhigh).
Magistral Small 1.2 is faster, producing output at 102 tok/s compared to GPT-5.3 Codex (xhigh)'s 72 tok/s.
No. Neither model supports image input; both accept text only.
Data last synced: March 26, 2026
It depends on your priorities. GPT-5.3 Codex (xhigh) scores higher on intelligence (54.0), but Magistral Small 1.2 may be better for specific use cases like budget-conscious projects or speed-critical applications.