Local Whisper Setup

This project integrates whisper.cpp for fully offline speech recognition.

  • Included: a CPU build of whisper.cpp (whisper-cli.exe) is bundled with the app
  • Required: model files (.bin) must be downloaded manually
  • GPU: optionally swap in the CUDA build for faster processing (NVIDIA only)
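The manual model download in a sketch, assuming the `ggerganov/whisper.cpp` Hugging Face repo hosts the GGML files (the repo path and filename are assumptions; adjust them to the model you want):

```shell
# Hypothetical source: the whisper.cpp GGML conversions on Hugging Face.
MODEL_FILE="ggml-base.bin"
MODEL_URL="https://huggingface.co/ggerganov/whisper.cpp/resolve/main/${MODEL_FILE}"

# -L follows redirects, which Hugging Face uses for file downloads.
# Uncomment to actually fetch the model:
# curl -L -o "$MODEL_FILE" "$MODEL_URL"
echo "would fetch: $MODEL_URL"
```

Once downloaded, the .bin file can live anywhere on disk; the app only needs its path.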

⚡ Quick Start

  1. Download Model: fetch a GGML model (.bin) from Hugging Face
  2. Enable Feature: Settings > Services > Speech Recognition > "Local Whisper"
  3. Load Model: click "Browse" and select the downloaded .bin model file
  4. Ready: speech recognition runs locally once the model path is set

📦 Model Guide

| Model    | Filename          | Size   | Memory  | Speed   | Use Case     |
| -------- | ----------------- | ------ | ------- | ------- | ------------ |
| Tiny     | ggml-tiny.bin     | 75 MB  | ~390 MB | Fastest | Quick test   |
| Base     | ggml-base.bin     | 142 MB | ~500 MB | Fast    | Casual ⭐    |
| Small    | ggml-small.bin    | 466 MB | ~1 GB   | Medium  | Podcast ⭐   |
| Medium   | ggml-medium.bin   | 1.5 GB | ~2.6 GB | Slow    | Complex      |
| Large-v3 | ggml-large-v3.bin | 2.9 GB | ~4.7 GB | Slowest | Professional |

🛠️ GPU Acceleration (NVIDIA)

  1. Download whisper-cublas-bin-x64.zip from whisper.cpp Releases
  2. Extract whisper-cli.exe and the accompanying .dll files
  3. Place them in the resources folder next to the app, replacing the bundled CPU build
  4. Restart and test
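A minimal sanity check after step 3 is to confirm the files from the CUDA zip actually sit next to the app. The DLL names below are examples only; the exact set varies by whisper.cpp release and CUDA version:

```shell
# Run from the resources folder. DLL names are assumptions -- check the
# contents of the zip you downloaded for the real list.
MISSING=0
for f in whisper-cli.exe cublas64_12.dll cudart64_12.dll; do
  if [ ! -f "$f" ]; then
    echo "missing: $f"
    MISSING=1
  fi
done

if [ "$MISSING" -eq 0 ]; then
  echo "CUDA runtime files in place"
else
  echo "copy the missing files from the whisper.cpp CUDA zip"
fi
```

If the app still falls back to CPU after restarting, a missing CUDA runtime DLL is the usual culprit.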

Released under the MIT License.