Mistral: Mistral Nemo API Pricing

by mistralai · 131K context window · #5 cheapest paid

Input: $0.0200 / 1M tokens
Output: $0.0400 / 1M tokens
Context window: 131K

Explore Mistral Nemo API pricing for large language model inference. Mistralai's Nemo offers a generous 131,072-token context window at competitive rates: input tokens are priced at $0.02 per 1 million tokens and output tokens at $0.04 per 1 million tokens, which works out to a blended rate of $0.03 per million tokens at an even input/output split. This makes it an attractive option for ML engineers seeking cost-effective LLM solutions. For instance, processing 100 million tokens monthly (split evenly between input and output) would cost approximately $3.00. Compare Mistral Nemo's pricing against other LLM APIs to determine the best fit for your project's budget and performance needs.

Monthly Cost Examples

Assuming 50% input / 50% output token split

Usage               Monthly cost
100K tokens/month   <$0.01
1M tokens/month     $0.03
10M tokens/month    $0.30
100M tokens/month   $3.00
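The table above can be reproduced with a short cost calculator. This is a minimal sketch using the per-million-token rates listed on this page; the function name and the configurable input/output split are illustrative, not part of any Mistral API.

```python
# Estimate monthly Mistral Nemo API cost from total token usage.
# Rates are the ones quoted on this page (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.02
OUTPUT_PRICE_PER_M = 0.04

def monthly_cost(total_tokens: int, input_share: float = 0.5) -> float:
    """Estimated monthly cost in USD, assuming the given input/output split."""
    input_tokens = total_tokens * input_share
    output_tokens = total_tokens * (1.0 - input_share)
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# 100M tokens at a 50/50 split matches the table's $3.00 row.
print(f"${monthly_cost(100_000_000):.2f}")
```

Shifting the split toward output raises the estimate, since output tokens cost twice as much as input tokens.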

Compare with other models

Mistral: Mistral Nemo vs OpenAI: GPT-4o →
Mistral: Mistral Nemo vs OpenAI: GPT-4o-mini →
Mistral: Mistral Nemo vs OpenAI: o1 →

Automate your model selection

StormRouter sends each request to the cheapest model that can handle it.
Only use Mistral: Mistral Nemo when your quality requirements demand it.

Try StormRouter free →

Similar models

Free Models Router · Free/1M
StepFun: Step 3.5 Flash (free) · Free/1M
Arcee AI: Trinity Large Preview (free) · Free/1M
Upstage: Solar Pro 3 (free) · Free/1M
LiquidAI: LFM2.5-1.2B-Thinking (free) · Free/1M
LiquidAI: LFM2.5-1.2B-Instruct (free) · Free/1M