LiquidAI: LFM2-2.6B API Pricing
by liquid · 32K context window · #2 cheapest paid
Explore LiquidAI's LFM2-2.6B API pricing for cost-effective large language model inference. Aimed at ML engineers evaluating LLM API options, LFM2-2.6B offers a 32,768-token context window at competitive rates: input tokens cost $0.01 per 1 million and output tokens cost $0.02 per 1 million. Assuming a 50/50 input/output split, that works out to a blended rate of $0.015 per 1 million tokens processed, so handling 100 million tokens per month costs just $1.50, making LiquidAI's LFM2-2.6B a compelling choice for budget-conscious NLP projects. Compare LiquidAI's LFM2-2.6B API pricing to other providers and see how it can optimize your LLM costs. Powered by liquid.
Monthly Cost Examples
Assuming 50% input / 50% output token split
| Usage | Monthly cost |
|---|---|
| 100K tokens/month | <$0.01 |
| 1M tokens/month | $0.02 |
| 10M tokens/month | $0.15 |
| 100M tokens/month | $1.50 |
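The figures above follow from straightforward per-token arithmetic. A minimal Python sketch of the calculation, with the rates hardcoded from this page and the same 50/50 input/output split assumed:

```python
def monthly_cost(total_tokens: int,
                 input_rate: float = 0.01,    # USD per 1M input tokens (this page's rate)
                 output_rate: float = 0.02,   # USD per 1M output tokens (this page's rate)
                 input_share: float = 0.5) -> float:
    """Estimated monthly cost in USD for a given token volume."""
    input_tokens = total_tokens * input_share
    output_tokens = total_tokens * (1 - input_share)
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# Reproduce the table rows:
for volume in (100_000, 1_000_000, 10_000_000, 100_000_000):
    print(f"{volume:>11,} tokens/month -> ${monthly_cost(volume):.4f}")
```

Adjusting `input_share` lets you model workloads that skew toward prompts (e.g. long-context retrieval) or toward completions (e.g. generation-heavy tasks), which shifts the blended rate between $0.01 and $0.02 per 1M tokens.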
Compare with other models
LiquidAI: LFM2-2.6B vs OpenAI: GPT-4o →
LiquidAI: LFM2-2.6B vs OpenAI: GPT-4o-mini →
LiquidAI: LFM2-2.6B vs OpenAI: o1 →
Automate your model selection
StormRouter sends each request to the cheapest model that can handle it.
Only pay for a more capable model when your quality requirements demand it.