Check whether your computer can run local AI models. Pick a model, enter your specs, and see the result instantly.
Example result for an NVIDIA RTX 4070 Ti running an 8B-class model:

- Cloud equivalent: GPT-4o mini ($20/mo)
- Estimated speed: 47 TPS (tokens per second)
- Verdict: Optimal — plenty of VRAM, running at full speed
- Savings vs. GPT-4o mini: $20/mo, or $240/year by running locally
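The verdict above follows from comparing the GPU's VRAM against the model's minimum and ideal requirements. A minimal sketch of that check, assuming three tiers ("Optimal", "Tight", "Incompatible") and simple threshold logic — the tier names and cutoffs here are illustrative, not the tool's actual implementation:

```python
def verdict(gpu_vram_gb: float, min_vram_gb: float, ideal_vram_gb: float) -> str:
    """Classify a GPU against a model's VRAM requirements (illustrative tiers)."""
    if gpu_vram_gb >= ideal_vram_gb:
        return "Optimal"        # plenty of VRAM, full speed
    if gpu_vram_gb >= min_vram_gb:
        return "Tight"          # fits, but may need quantization or offloading
    return "Incompatible"       # model does not fit in VRAM

# An RTX 4070 Ti has 12 GB of VRAM; checked against an 8B model (6 GB min, 10 GB ideal):
print(verdict(12, 6, 10))  # → Optimal
```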
| Model | Params | Min VRAM | Ideal VRAM | RAM | Cloud Alt. | Cloud Cost |
|---|---|---|---|---|---|---|
| Llama 3.1 8B | 8B | 6 GB | 10 GB | 8 GB | GPT-4o mini | $20/mo |
| Llama 3.1 70B | 70B | 40 GB | 48 GB | 32 GB | GPT-4o | $20/mo |
| Mistral 7B | 7B | 5 GB | 8 GB | 8 GB | GPT-4o mini | $20/mo |
| Mixtral 8x7B | 46.7B | 24 GB | 48 GB | 32 GB | GPT-4 Turbo | $20/mo |
| Phi-4 Mini | 3.8B | 3 GB | 6 GB | 8 GB | GPT-4o mini | $20/mo |
| Gemma 2 9B | 9B | 6 GB | 12 GB | 16 GB | Gemini Flash | Free |
| Qwen 2.5 14B | 14B | 10 GB | 16 GB | 16 GB | GPT-4o | $20/mo |
| DeepSeek R1 7B | 7B | 5 GB | 8 GB | 8 GB | DeepSeek V3 | Free |
| Code Llama 34B | 34B | 20 GB | 24 GB | 32 GB | GitHub Copilot | $19/mo |
| Stable Diffusion XL | 3.5B | 6 GB | 12 GB | 16 GB | Midjourney | $10/mo |
| Whisper Large v3 | 1.5B | 4 GB | 6 GB | 8 GB | Otter.ai | $17/mo |
| LLaVA v1.6 7B | 7B | 8 GB | 12 GB | 16 GB | GPT-4o Vision | $20/mo |
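The minimum-VRAM figures in the table roughly track the size of 4-bit quantized weights. A back-of-envelope sketch of that estimate — the 20% overhead factor for KV cache and activations is our assumption, not a published formula:

```python
def estimate_vram_gb(params_billions: float, bits_per_weight: int = 4,
                     overhead: float = 1.2) -> float:
    """Approximate VRAM needed to load a model at a given quantization level."""
    weight_gb = params_billions * bits_per_weight / 8  # GB for the weights alone
    return round(weight_gb * overhead, 1)              # add cache/activation headroom

# A 7B model at 4-bit quantization:
print(estimate_vram_gb(7))      # → 4.2 (GB), close to Mistral 7B's 5 GB minimum
# The same model unquantized at 16-bit needs roughly four times as much:
print(estimate_vram_gb(7, 16))  # → 16.8 (GB)
```

This is why quantization matters so much for consumer GPUs: it is the difference between a 7B model fitting comfortably in 8 GB of VRAM and not fitting at all.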