Mistral: Mistral Nemo API Pricing
by mistralai · 131K context window · #5 cheapest paid
Explore Mistral Nemo API pricing for large language model inference. Mistralai's Nemo offers a generous 131,072-token context window at competitive rates: input tokens are priced at $0.02 per 1 million tokens, while output tokens cost $0.04 per 1 million tokens, for a blended rate of $0.03 per million tokens at an even input/output split. This makes it an attractive option for ML engineers seeking cost-effective LLM solutions. For instance, processing 100 million tokens monthly (split evenly between input and output) would cost approximately $3.00. Compare Mistral Nemo's pricing against other LLM APIs to determine the best fit for your project's budget and performance needs.
Monthly Cost Examples
Assuming 50% input / 50% output token split
| Usage | Monthly cost |
|---|---|
| 100K tokens/month | <$0.01 |
| 1M tokens/month | $0.03 |
| 10M tokens/month | $0.30 |
| 100M tokens/month | $3.00 |
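The figures above follow from a simple calculation; as a sketch, a small helper (the function name and default rates-per-million are taken from this page, not an official SDK) reproduces the table:

```python
def monthly_cost(input_tokens: float, output_tokens: float,
                 input_rate: float = 0.02, output_rate: float = 0.04) -> float:
    """Estimate cost in USD; rates are dollars per 1 million tokens."""
    return input_tokens / 1e6 * input_rate + output_tokens / 1e6 * output_rate

# 100M tokens/month at a 50/50 input/output split
print(round(monthly_cost(50e6, 50e6), 2))  # → 3.0
```

Swap in your own input/output ratio if your workload is not an even split; output-heavy workloads cost proportionally more at these rates.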
Compare with other models
Mistral: Mistral Nemo vs OpenAI: GPT-4o →
Mistral: Mistral Nemo vs OpenAI: GPT-4o-mini →
Mistral: Mistral Nemo vs OpenAI: o1 →
Automate your model selection
StormRouter sends each request to the cheapest model that can handle it.
Only use Mistral: Mistral Nemo when your quality requirements demand it.