WhisperKit

Swift-native, on-device speech recognition for iOS and macOS applications.

WhisperKit is a Swift package that integrates OpenAI's popular Whisper speech recognition model with Apple's CoreML framework for efficient, local inference on Apple devices.

Check out the demo app on TestFlight.

[Blog Post] [Python Tools Repo]

Installation

WhisperKit can be integrated into your Swift project using the Swift Package Manager.

Prerequisites

  • macOS 14.0 or later.
  • Xcode 15.0 or later.

Steps

  1. Open your Swift project in Xcode.
  2. Navigate to File > Add Package Dependencies....
  3. Enter the package repository URL: https://github.com/argmaxinc/whisperkit.
  3. Choose a version range or an exact version.
  5. Click Finish to add WhisperKit to your project.
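
If your project is itself a Swift package, you can instead declare the dependency in Package.swift. A minimal sketch (the version requirement and platform minimums below are illustrative assumptions, not taken from the WhisperKit manifest):

// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "YourApp",
    // Platform minimums are assumptions; check WhisperKit's own manifest for the real ones
    platforms: [.iOS(.v16), .macOS(.v13)],
    dependencies: [
        // Version requirement is illustrative; pin whichever release you need
        .package(url: "https://github.com/argmaxinc/whisperkit.git", from: "0.1.0"),
    ],
    targets: [
        .executableTarget(
            name: "YourApp",
            dependencies: [.product(name: "WhisperKit", package: "whisperkit")]
        )
    ]
)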

Getting Started

To get started with WhisperKit, you need to initialize it in your project.

Quick Example

This example demonstrates how to transcribe a local audio file:

import WhisperKit

// Initialize WhisperKit with default settings, then transcribe a local audio file
Task {
    guard let pipe = try? await WhisperKit() else { return }
    let transcription = try? await pipe.transcribe(audioPath: "path/to/your/audio.{wav,mp3,m4a,flac}")?.text
    print(transcription ?? "No transcription produced")
}
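
In a real application you will likely want explicit error handling instead of try?. A sketch using the same calls as above:

Task {
    do {
        let pipe = try await WhisperKit()
        // transcribe(audioPath:) returns an optional result; unwrap it before reading .text
        if let result = try await pipe.transcribe(audioPath: "path/to/your/audio.{wav,mp3,m4a,flac}") {
            print(result.text)
        }
    } catch {
        print("Transcription failed: \(error)")
    }
}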

Model Selection

If no model is specified, WhisperKit automatically downloads the recommended model for the device. You can also select a specific model by passing in its name:

let pipe = try? await WhisperKit(model: "large-v3")

For a list of available models, see our HuggingFace repo.
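
Larger models are more accurate but need more memory and compute, so you may want to pick a model based on the device. A hypothetical sketch (the 8 GB threshold and the model names are assumptions; see the HuggingFace repo for the supported names):

import Foundation
import WhisperKit

Task {
    // Threshold is an assumption, not an official recommendation
    let eightGB: UInt64 = 8 * 1_073_741_824
    let modelName = ProcessInfo.processInfo.physicalMemory < eightGB ? "base" : "large-v3"
    let pipe = try? await WhisperKit(model: modelName)
    print(pipe == nil ? "Failed to load \(modelName)" : "Loaded \(modelName)")
}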

Generating Models

WhisperKit also comes with a supporting repo, whisperkittools, which lets you create your own fine-tuned versions of Whisper in CoreML format and deploy them to HuggingFace. Once generated, they can be loaded by simply changing the repo name to the one used to upload the model:

let pipe = try? await WhisperKit(model: "large-v3", modelRepo: "username/your-model-repo")
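
If the custom repo might be unavailable, one option is to fall back to the default models. A sketch (the fallback logic is our own, not part of the API):

Task {
    // Try the fine-tuned model first, then fall back to the recommended default
    let pipe = (try? await WhisperKit(model: "large-v3", modelRepo: "username/your-model-repo"))
        ?? (try? await WhisperKit())
    print(pipe == nil ? "No model could be loaded" : "Model ready")
}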

Swift CLI

The Swift CLI allows for quick testing and debugging outside of an Xcode project. To install it, run the following:

git clone https://github.com/argmaxinc/whisperkit.git
cd whisperkit

Then, set up the environment and download the models:

make setup
make download-models

Note:

  1. This will download all available models to your local folder. If you only want to download a specific model, see our HuggingFace repo.
  2. Before running download-models, make sure git-lfs is installed.

You can then run the CLI with:

swift run transcribe --model-path "Models/whisperkit-coreml/openai_whisper-large-v3" --audio-path "path/to/your/audio.{wav,mp3,m4a,flac}" 

This should print a transcription of the audio file.

Contributing & Roadmap

Our goal is to make WhisperKit better and better over time, and we'd love your help! Search the code for "TODO" to find a variety of features that are yet to be built. Please refer to our contribution guidelines for submitting issues, pull requests, and coding standards; they also include a public roadmap of the features we are looking forward to building.

License

WhisperKit is released under the MIT License. See LICENSE.md for more details.

Citation

If you use WhisperKit for something cool or just find it useful, please drop us a note at info@takeargmax.com!

If you use WhisperKit for academic work, here is the BibTeX:

@misc{whisperkit-argmax,
   title = {WhisperKit},
   author = {Argmax, Inc.},
   year = {2024},
   URL = {https://github.com/argmaxinc/WhisperKit}
}
