Paste any text and instantly see token counts and API costs for every major LLM — side by side, updating as you type.
⚡ Estimation uses ~4 chars/token heuristic. Actual counts vary ±5–10% by model tokenizer.
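The estimate above can be sketched in a few lines of JavaScript (function names are illustrative, not the tool's actual source):

```javascript
// Rough token estimate: ~4 characters per token on average English text.
// Real tokenizers (tiktoken, etc.) will differ by roughly ±5–10%.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Word count: split on any run of whitespace, ignoring leading/trailing spaces.
function countWords(text) {
  const trimmed = text.trim();
  return trimmed ? trimmed.split(/\s+/).length : 0;
}
```

Code, non-English text, and heavy punctuation pack more tokens per character, which is where the heuristic drifts furthest from the real count.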
💰 Cost comparison (per 1M tokens)
LLM pricing reference (May 2025)
Model                  Input /1M   Output /1M   Context
GPT-4o                 $2.50       $10.00       128K
GPT-4o mini            $0.15       $0.60        128K
Claude 3.5 Sonnet      $3.00       $15.00       200K
Claude 3.5 Haiku       $0.80       $4.00        200K
Claude 3 Opus          $15.00      $75.00       200K
Gemini 1.5 Pro         $1.25       $5.00        1M
Gemini 1.5 Flash       $0.075      $0.30        1M
Llama 3.1 70B (Groq)   $0.59       $0.79        131K
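Given per-million-token prices like those in the table, the cost of a single call is simple arithmetic. A minimal sketch (prices hard-coded here for illustration):

```javascript
// Cost in USD for one API call, given prices quoted per 1M tokens.
function callCost(inputTokens, outputTokens, inputPricePer1M, outputPricePer1M) {
  return (inputTokens / 1e6) * inputPricePer1M +
         (outputTokens / 1e6) * outputPricePer1M;
}

// Example: GPT-4o at $2.50 input / $10.00 output per 1M tokens,
// with 2,000 input tokens and 500 output tokens:
const cost = callCost(2000, 500, 2.50, 10.00);
// 2000/1e6 × 2.50 = $0.005 input + 500/1e6 × 10.00 = $0.005 output → $0.01
```

Running the same token counts through each row of the table is exactly the side-by-side comparison the tool displays.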
Is the token count exact?
No — this uses the ~4 chars/token heuristic. For exact counts, use tiktoken (OpenAI), Anthropic's token-counting API, or the Gemini API's countTokens endpoint.
Why do output tokens cost more?
Input tokens are processed in parallel. Output tokens must be generated sequentially, one at a time, using more GPU compute. Most models price output at 3–5× the input rate.
Does this send my text anywhere?
No. All processing is in JavaScript in your browser. Your text never leaves your device.
Building AI agents on Salesforce?
We design and ship LLM-powered workflows inside Salesforce — from RAG pipelines to AI-driven RevOps automation.