DeepSeek R1 Distill Llama 70B is a text-only AI model from DeepSeek. It offers a 131K-token context window, an output speed of 51 tok/s, and input pricing of $0.70 per 1M tokens, and it scores 16.0 on the intelligence index.
| Metric | Value |
| --- | --- |
| Output Speed | 51 tok/s |
| Latency (TTFT) | 0.77s |
| Blended Price | $0.88/M |
| Input (Prompt) | $0.70/1M tokens |
| Output (Completion) | $1.05/1M tokens |
| Cache Read | Free |
| Cache Write | Free |
| Context Length | 131K |
| Max Output Tokens | N/A |
| Input Modalities | Text |
| Output Modalities | Text |
| Tokenizer | N/A |
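As a rough sanity check on the blended price, the listed $0.88/M appears consistent with a simple average of the input and output rates. The 1:1 input/output token mix below is an assumption for illustration (blended-price weightings vary by provider), not a documented formula.

```python
def blended_price(input_rate: float, output_rate: float, input_share: float = 0.5) -> float:
    """Blend per-1M-token prices; the 1:1 input/output mix is an assumed ratio."""
    return input_rate * input_share + output_rate * (1 - input_share)

# $0.70 input and $1.05 output at an assumed 1:1 mix
print(round(blended_price(0.70, 1.05), 3))  # → 0.875, close to the listed $0.88/M
```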
DeepSeek R1 Distill Llama 70B costs $0.70 per 1M input tokens and $1.05 per 1M output tokens.
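The per-request cost follows directly from those two rates. A minimal sketch, using the rates above (the example token counts are hypothetical):

```python
def request_cost(input_tokens: int, output_tokens: int,
                 input_rate: float = 0.70, output_rate: float = 1.05) -> float:
    """Cost in USD, given per-1M-token rates from the pricing table."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# e.g. a 10K-token prompt with a 2K-token completion
print(f"${request_cost(10_000, 2_000):.4f}")  # → $0.0091
```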
DeepSeek R1 Distill Llama 70B supports a context window of 131K tokens, which is approximately 66 pages of text.
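The page estimate can be reproduced with a simple tokens-per-page conversion. The ~2,000-tokens-per-page constant below is an assumption chosen to match the figure above; actual density depends on the text and tokenizer.

```python
def pages_from_tokens(tokens: int, tokens_per_page: int = 2_000) -> float:
    """Rough page estimate; ~2,000 tokens per page is an assumed density."""
    return tokens / tokens_per_page

print(round(pages_from_tokens(131_072)))  # → 66 pages, matching the estimate
```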
DeepSeek R1 Distill Llama 70B scores a relatively low 11.4 on the coding index. For demanding coding tasks, consider a model with a stronger coding benchmark score.
DeepSeek R1 Distill Llama 70B generates output at 51 tok/s. Time to first token is 0.77s.
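Together, latency and throughput give a rough wall-clock estimate for a response: time to first token plus steady-state decoding. The 500-token completion below is a hypothetical example.

```python
def generation_time(output_tokens: int, ttft: float = 0.77, tok_per_s: float = 51) -> float:
    """Approximate response time: time to first token plus decode time at 51 tok/s."""
    return ttft + output_tokens / tok_per_s

# e.g. a 500-token completion
print(f"{generation_time(500):.1f}s")  # → 10.6s
```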
No, DeepSeek R1 Distill Llama 70B is a paid model. Check the free models page for zero-cost alternatives.
See the alternatives section above for models with similar capabilities. You can also compare DeepSeek R1 Distill Llama 70B head-to-head with any model on our comparison page.