# Local Whisper Setup
This project integrates whisper.cpp for fully offline speech recognition.
- Included: the CPU build of whisper.cpp (`whisper-cli.exe`) is bundled
- Required: model files (`.bin`) must be downloaded manually
- GPU: replace it with the CUDA build for faster processing
## ⚡ Quick Start
- Download Model: get a GGML model from Hugging Face
- Enable Feature: Settings > Services > Speech Recognition > "Local Whisper"
- Load Model: click "Browse" and select the `.bin` model file
- Ready: start transcribing once the model path is set
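Before persisting the model path from the "Browse" step, a quick sanity check avoids confusing failures later. A sketch, with an illustrative size threshold (even `ggml-tiny.bin` is ~75 MB):

```python
from pathlib import Path

def validate_model_path(path: str) -> tuple[bool, str]:
    """Return (ok, reason) for a candidate Whisper model file."""
    p = Path(path)
    if not p.is_file():
        return False, "file does not exist"
    if p.suffix != ".bin":
        return False, "expected a .bin GGML model"
    if p.stat().st_size < 1_000_000:  # far smaller than any real model
        return False, "file too small to be a Whisper model"
    return True, "ok"
```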
## 📦 Model Guide
| Model | Filename | Size | Memory | Speed | Use Case |
|---|---|---|---|---|---|
| Tiny | ggml-tiny.bin | 75 MB | ~390 MB | Fastest | Quick test |
| Base | ggml-base.bin | 142 MB | ~500 MB | Fast | Casual ⭐ |
| Small | ggml-small.bin | 466 MB | ~1 GB | Medium | Podcast ⭐ |
| Medium | ggml-medium.bin | 1.5 GB | ~2.6 GB | Slow | Complex |
| Large-v3 | ggml-large-v3.bin | 2.9 GB | ~4.7 GB | Slowest | Professional |
## 🛠️ GPU Acceleration (NVIDIA)
- Download `whisper-cublas-bin-x64.zip` from the whisper.cpp Releases page
- Extract `whisper-cli.exe` and the `.dll` files
- Place them in the `resources` folder next to the app
- Restart the app and test
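A missing DLL is the most common reason the GPU build fails silently, so it helps to verify the extracted files before restarting. A sketch; the exact `.dll` names vary by whisper.cpp release and CUDA version, so the `REQUIRED` list below is illustrative, not authoritative:

```python
from pathlib import Path

# Files the CUDA build needs beside the app. The .dll names here are
# hypothetical examples -- check your downloaded release for the real set.
REQUIRED = ["whisper-cli.exe", "cublas64_12.dll", "cudart64_12.dll"]

def missing_files(resources_dir: str) -> list[str]:
    """List required GPU-build files not present in the resources folder."""
    d = Path(resources_dir)
    return [name for name in REQUIRED if not (d / name).is_file()]
```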