LiquidAI: LFM2-2.6B API Pricing

by liquid · 32K context window · #2 cheapest paid

Input: $0.01 / 1M tokens
Output: $0.02 / 1M tokens
Context window: 32K tokens

Explore LiquidAI's LFM2-2.6B API pricing for cost-effective large language model inference. Ideal for ML engineers evaluating LLM API options, LFM2-2.6B offers a 32,768-token context window at competitive rates. Input tokens are priced at $0.01 per 1 million tokens and output tokens at $0.02 per 1 million tokens. For instance, processing 100 million tokens monthly at a 50/50 input/output split would cost just $1.50, making LiquidAI's LFM2-2.6B a compelling choice for budget-conscious NLP projects. Compare LiquidAI's LFM2-2.6B API pricing to other providers and see how it can optimize your LLM costs. Powered by liquid.
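The arithmetic behind these figures is simple: multiply each token count (in millions) by its per-million rate and sum. A minimal sketch, using the input and output rates from the pricing above (the function name and 50/50 example are illustrative, not part of any official SDK):

```python
# Rates from the LFM2-2.6B pricing table above (USD per 1M tokens)
INPUT_RATE = 0.01
OUTPUT_RATE = 0.02

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated monthly USD cost for a given token volume."""
    return (input_tokens / 1_000_000) * INPUT_RATE \
         + (output_tokens / 1_000_000) * OUTPUT_RATE

# 100M tokens/month at a 50/50 input/output split
print(monthly_cost(50_000_000, 50_000_000))  # → 1.5
```

The same function reproduces the other rows of the cost table below, e.g. 10M tokens/month at a 50/50 split comes to $0.15.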

Monthly Cost Examples

Assuming 50% input / 50% output token split

Usage               Monthly cost
100K tokens/month   <$0.01
1M tokens/month     $0.01
10M tokens/month    $0.15
100M tokens/month   $1.50

Compare with other models

LiquidAI: LFM2-2.6B vs OpenAI: GPT-4o →
LiquidAI: LFM2-2.6B vs OpenAI: GPT-4o-mini →
LiquidAI: LFM2-2.6B vs OpenAI: o1 →

Automate your model selection

StormRouter sends each request to the cheapest model that can handle it.
Only use LiquidAI: LFM2-2.6B when your quality requirements demand it.

Try StormRouter free →

Similar models

Free Models Router · Free/1M
StepFun: Step 3.5 Flash (free) · Free/1M
Arcee AI: Trinity Large Preview (free) · Free/1M
Upstage: Solar Pro 3 (free) · Free/1M
LiquidAI: LFM2.5-1.2B-Thinking (free) · Free/1M
LiquidAI: LFM2.5-1.2B-Instruct (free) · Free/1M