faster-whisper 1.1.0
New Features
- New batched inference that is 4x faster while preserving accuracy. Refer to the README for usage instructions.
- Support for the new `large-v3-turbo` model.
- VAD filter is now 3x faster on CPU.
- Feature extraction is now 3x faster.
- Added `log_progress` to `WhisperModel.transcribe` to print transcription progress.
- Added `multilingual` option to transcription to allow transcribing multilingual audio. Note that large models already have code-switching capabilities, so this is mostly beneficial for `medium` or smaller models.
- `WhisperModel.detect_language` now supports VAD filtering and improved language detection via `language_detection_segments` and `language_detection_threshold`.
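The features above can be sketched as follows. This is an illustrative sketch assuming the faster-whisper 1.1.0 API (`BatchedInferencePipeline` and the `transcribe`/`detect_language` parameters named in the list); running it requires the `faster-whisper` package and a model download, so the calls are wrapped in functions rather than executed at import time.

```python
def batched_transcribe(audio_path: str):
    """Batched inference sketch: roughly 4x faster than sequential decoding."""
    from faster_whisper import WhisperModel, BatchedInferencePipeline

    model = WhisperModel("large-v3-turbo")  # new turbo model support
    pipeline = BatchedInferencePipeline(model=model)
    # log_progress prints transcription progress as segments complete.
    segments, info = pipeline.transcribe(
        audio_path, batch_size=16, log_progress=True
    )
    return list(segments), info


def multilingual_transcribe(audio_path: str):
    """multilingual=True helps medium-or-smaller models on code-switching audio."""
    from faster_whisper import WhisperModel

    model = WhisperModel("medium")
    segments, info = model.transcribe(
        audio_path, multilingual=True, log_progress=True
    )
    return list(segments), info


def detect(audio_path: str):
    """Language detection with VAD filtering over several probe segments."""
    from faster_whisper import WhisperModel

    model = WhisperModel("medium")
    language, probability, all_probs = model.detect_language(
        audio_path,
        vad_filter=True,                 # skip silence before probing
        language_detection_segments=4,   # probe more 30s windows
        language_detection_threshold=0.5,
    )
    return language, probability
```

The functions are independent: use `batched_transcribe` when throughput matters, and the plain `WhisperModel.transcribe` path when you need per-segment options such as `multilingual`.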