Updated March 26, 2026 · Based on independent benchmark data
GPT-5.4 (xhigh) leads in intelligence, scoring 57.2 to Jamba 1.6 Large's 10.6. For streaming speed, GPT-5.4 (xhigh) also wins at 77 tok/s vs 57 tok/s.
| Metric | Jamba 1.6 Large | GPT-5.4 (xhigh) |
|---|---|---|
| Intelligence Score | 10.6 | 57.2 |
| Coding Score | N/A | 57.3 |
| Math Score | N/A | N/A |
| Speed (tok/s) | 57 tok/s | 77 tok/s |
| Latency (TTFT) | 0.74s | 181.82s |
| Input Price / 1M tokens | $2.00 | $2.50 |
| Output Price / 1M tokens | $8.00 | $15.00 |
| Context Window | N/A | N/A |
| Max Output Tokens | N/A | N/A |
| Input Modalities | Text | Text |
GPT-5.4 (xhigh) outperforms Jamba 1.6 Large on the intelligence index with a score of 57.2 compared to 10.6.
GPT-5.4 (xhigh) streams output faster at 77 tok/s compared to Jamba 1.6 Large's 57 tok/s, roughly 1.35x the throughput. Time to first token, however, is 0.74s for Jamba 1.6 Large vs 181.82s for GPT-5.4 (xhigh), so Jamba 1.6 Large will feel far more responsive in interactive applications despite its slower streaming rate.
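The interplay between time to first token and streaming throughput can be sketched with a simple estimate (the 500-token response length here is an illustrative assumption, not from the benchmark data):

```python
def response_time(ttft_s: float, tokens: int, tok_per_s: float) -> float:
    """Estimated seconds until a response of `tokens` tokens finishes streaming."""
    return ttft_s + tokens / tok_per_s

# Benchmark figures from the table above; 500 tokens is a hypothetical response.
jamba = response_time(0.74, 500, 57)     # ~9.5 s end to end
gpt54 = response_time(181.82, 500, 77)   # ~188.3 s end to end
print(f"Jamba 1.6 Large: {jamba:.1f}s, GPT-5.4 (xhigh): {gpt54:.1f}s")
```

At typical response lengths, the TTFT gap dominates: Jamba 1.6 Large finishes long before GPT-5.4 (xhigh) even starts streaming.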
Jamba 1.6 Large is more affordable at $2.00/1M input tokens ($8.00/1M output), while GPT-5.4 (xhigh) costs $2.50/1M input ($15/1M output). For a typical workload of 100 requests per day at 2,000 tokens each, Jamba 1.6 Large would cost approximately $12.00/month vs $15.00/month for GPT-5.4 (xhigh) in input costs alone.
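The monthly estimate above works out as follows (a minimal sketch of the same arithmetic, assuming a 30-day month and counting input tokens only, as the text states):

```python
def monthly_input_cost(requests_per_day: int, tokens_per_request: int,
                       price_per_million: float, days: int = 30) -> float:
    """Monthly input-token cost in dollars at a per-1M-token price."""
    total_tokens = requests_per_day * tokens_per_request * days
    return total_tokens / 1_000_000 * price_per_million

# 100 requests/day at 2,000 input tokens each, per the example workload.
jamba_cost = monthly_input_cost(100, 2000, 2.00)   # $12.00/month
gpt54_cost = monthly_input_cost(100, 2000, 2.50)   # $15.00/month
print(f"Jamba 1.6 Large: ${jamba_cost:.2f}/mo, GPT-5.4 (xhigh): ${gpt54_cost:.2f}/mo")
```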
Choose GPT-5.4 (xhigh) when you need higher intelligence (57.2) and faster streaming output (77 tok/s), and can tolerate its much longer time to first token.
Jamba 1.6 Large is cheaper at $2.00/1M input tokens vs $2.50/1M for GPT-5.4 (xhigh).
GPT-5.4 (xhigh) is faster, producing output at 77 tok/s compared to Jamba 1.6 Large's 57 tok/s.
No. Neither Jamba 1.6 Large nor GPT-5.4 (xhigh) supports image input; both are text-only.
It depends on your priorities. GPT-5.4 (xhigh) scores higher on intelligence (57.2 vs 10.6), but Jamba 1.6 Large may be better for budget-conscious projects or latency-sensitive applications, given its 0.74s time to first token versus 181.82s.
| Metric | Jamba 1.6 Large | GPT-5.4 (xhigh) |
|---|---|---|
| Output Modalities | Text | Text |
| Free Tier | No | No |