Updated March 26, 2026 · Based on independent benchmark data
GPT-5.3 Codex (xhigh) leads in intelligence with a score of 54.0 vs 20.4. It streams at a measured 72 tok/s; no speed or latency measurements are available for o1-mini in this dataset, so throughput cannot be compared directly.
| Metric | o1-mini | GPT-5.3 Codex (xhigh) |
|---|---|---|
| Intelligence Score | 20.4 | 54.0 |
| Coding Score | N/A | 53.1 |
| Math Score | N/A | N/A |
| Speed (tok/s) | N/A | 72 |
| Latency (TTFT) | N/A | 83.91s |
| Input Price / 1M tokens | Not listed | $1.75 |
| Output Price / 1M tokens | Not listed | $14.00 |
| Context Window | N/A | N/A |
| Max Output Tokens | N/A | N/A |
| Input Modalities | Text | Text |
| Output Modalities | Text | Text |
| Free Tier | Yes | No |
GPT-5.3 Codex (xhigh) outperforms o1-mini on the intelligence index with a score of 54.0 compared to 20.4.
GPT-5.3 Codex (xhigh) generates output at 72 tok/s; no throughput measurement is available for o1-mini, so a speed multiple cannot be computed. Time to first token for GPT-5.3 Codex (xhigh) is 83.91s, a long warm-up that affects perceived responsiveness in interactive applications; no latency measurement is available for o1-mini.
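The throughput and latency figures above combine into a rough wall-clock estimate: total response time is time-to-first-token plus output length divided by streaming speed. A minimal sketch, using only the numbers reported in this page's table (the function name and token count are illustrative, not part of the benchmark):

```python
def response_time_seconds(ttft_s: float, tokens: int, tok_per_s: float) -> float:
    """Rough wall-clock time to receive `tokens` output tokens:
    time to first token plus streaming time at the measured rate."""
    return ttft_s + tokens / tok_per_s

# GPT-5.3 Codex (xhigh): 83.91s TTFT, 72 tok/s from the table above.
t = response_time_seconds(83.91, 1000, 72.0)
print(round(t, 1))  # ~97.8 seconds for a 1000-token reply
```

Note how the 83.91s TTFT dominates short replies: for a 1000-token completion, streaming accounts for only about 14 of the ~98 seconds.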
Pricing for o1-mini is not listed in this dataset, while GPT-5.3 Codex (xhigh) costs $1.75 per 1M input tokens and $14 per 1M output tokens.
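The per-million-token prices translate to per-request cost with simple arithmetic. A minimal sketch using the GPT-5.3 Codex (xhigh) rates listed above (the function and example token counts are illustrative):

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     in_price_per_m: float = 1.75,
                     out_price_per_m: float = 14.0) -> float:
    """Cost of one request given per-1M-token input/output prices."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# A 10,000-token prompt with a 2,000-token completion:
print(round(request_cost_usd(10_000, 2_000), 4))  # 0.0455
```

Because output tokens cost 8x more than input tokens here, long completions dominate the bill even for large prompts.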
Choose GPT-5.3 Codex (xhigh) when you need higher intelligence (54.0 vs 20.4) and a measured streaming throughput of 72 tok/s.
GPT-5.3 Codex (xhigh) produces output at 72 tok/s; no throughput figure is available for o1-mini, so a direct speed comparison is not possible.
No. Neither model supports image input; both accept text only.
It depends on your priorities. GPT-5.3 Codex (xhigh) scores higher on intelligence (54.0 vs 20.4), but o1-mini may still fit cost-sensitive use cases; its pricing and speed are not reported in this dataset.
Data last synced: March 26, 2026