# Whisper GGML
OpenAI Whisper ASR (Automatic Speech Recognition) for Flutter using Whisper.cpp.
## Supported platforms

| Platform | Supported |
| --- | --- |
| Android | ✅ |
| iOS | ✅ |
| MacOS | ✅ |
## Features

- Automatic Speech Recognition integration for Flutter apps.
- Supports automatic model downloading and initialization. Can be configured to work fully offline by bundling models as assets (see the example folder).
- Seamless iOS and Android support with optimized performance.
- Utilizes Core ML for enhanced processing on iOS devices.
## Installation
To use this library in your Flutter project, follow these steps:
1. Add the library to your Flutter project's `pubspec.yaml`:

```yaml
dependencies:
  whisper_ggml: ^1.3.0
```

2. Run `flutter pub get` to install the package.
## Usage
To integrate Whisper ASR in your Flutter app:
1. Import the package:

```dart
import 'package:whisper_ggml/whisper_ggml.dart';
```
2. Pick your model. Smaller models are faster, but may be less accurate. Recommended models are `tiny` and `small`.

```dart
final model = WhisperModel.tiny;
```
3. Declare a `WhisperController` and use it for transcription:

```dart
final controller = WhisperController();

final result = await controller.transcribe(
  model: model, // Selected WhisperModel
  audioPath: audioPath, // Path to the .wav file
  lang: 'en', // Language to transcribe
);
```
4. Use the `result` variable to access the transcription result:

```dart
if (result?.transcription.text != null) {
  // Do something with the transcription
  print(result!.transcription.text);
}
```
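Putting the steps above together, a minimal end-to-end sketch might look like the following. It assumes `audioPath` points to an existing `.wav` file; the helper function name `transcribeAudioFile` is illustrative, not part of the package API.

```dart
import 'package:whisper_ggml/whisper_ggml.dart';

// Hypothetical helper: transcribes a .wav file and returns the text,
// or null if transcription produced no result.
Future<String?> transcribeAudioFile(String audioPath) async {
  // Smaller models are faster; tiny is a reasonable starting point.
  const model = WhisperModel.tiny;

  final controller = WhisperController();

  final result = await controller.transcribe(
    model: model,      // Selected WhisperModel
    audioPath: audioPath, // Path to the .wav file
    lang: 'en',        // Language to transcribe
  );

  return result?.transcription.text;
}
```

You could then call `await transcribeAudioFile(path)` from your app code and display or store the returned text.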
## Notes
Transcription is about 5× faster when running in release mode.