Gadersd / whisper-burn

A Rust implementation of OpenAI's Whisper model using the burn framework


Transcription fails using large-v2 model

xelibrion opened this issue · comments

Not sure whether this is related to loading the model or to the transcription process. Also, restoring the checkpoint into VRAM seems to take much longer than in the Python version.

RUST_BACKTRACE=1 cargo run --release audio.wav large-v2

Caused by:
    In Device::create_bind_group
    Buffer binding 0 range 265548800 exceeds `max_*_buffer_binding_size` limit 134217728

', /home/username/.cargo/registry/src/index.crates.io-6f17d22bba15001f/wgpu-0.17.0/src/backend/direct.rs:3056:5
stack backtrace:
   0: rust_begin_unwind
             at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/std/src/panicking.rs:593:5
   1: core::panicking::panic_fmt
             at /rustc/eb26296b556cef10fb713a38f3d16b9886080f26/library/core/src/panicking.rs:67:14
   2: core::ops::function::Fn::call
   3: <wgpu::backend::direct::Context as wgpu::context::Context>::device_create_bind_group
   4: <T as wgpu::context::DynContext>::device_create_bind_group
   5: wgpu::Device::create_bind_group
   6: burn_wgpu::context::base::Context::execute
   7: burn_wgpu::kernel::index::select::select
   8: burn_tensor::tensor::ops::modules::base::ModuleOps::embedding
   9: whisper::model::TextDecoder<B>::forward
  10: whisper::transcribe::waveform_to_text
  11: whisper::main
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
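The failing binding size lines up with large-v2's token embedding table: assuming Whisper large-v2's vocabulary of 51865 tokens and model width of 1280, a single f32 embedding matrix occupies 51865 × 1280 × 4 = 265,548,800 bytes, roughly double wgpu's default `max_storage_buffer_binding_size` of 134,217,728 bytes (128 MiB). A quick sanity check of that arithmetic:

```rust
fn main() {
    // Whisper large-v2 dimensions: vocabulary size and model width.
    let vocab_size: u64 = 51865;
    let d_model: u64 = 1280;
    let bytes_per_f32: u64 = 4;

    // Size of the token embedding matrix bound in one dispatch.
    let embedding_bytes = vocab_size * d_model * bytes_per_f32;

    // wgpu's default max_storage_buffer_binding_size is 128 MiB.
    let default_limit: u64 = 128 * 1024 * 1024;

    assert_eq!(embedding_bytes, 265_548_800); // matches the panic message
    assert!(embedding_bytes > default_limit); // hence the bind group failure
    println!("{embedding_bytes} bytes > {default_limit} byte limit");
}
```

So the `embedding` op in the backtrace is the first place the decoder tries to bind the full vocabulary table, which is why smaller models load fine but large-v2 panics.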

The issue is that burn-wgpu doesn't currently request the maximum buffer binding limits the device actually supports, so larger models may fail to run. I'm hoping to resolve this within the next day or two. The slow model loading should be resolved by the latest updates.
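For reference, the underlying fix at the wgpu level is to request limits based on the adapter's capabilities rather than wgpu's conservative defaults when creating the device. A minimal sketch against wgpu 0.17 (burn-wgpu creates its device internally, so this illustrates the raw API rather than a direct whisper-burn patch; the function name is illustrative):

```rust
// Sketch: requesting larger buffer-binding limits in wgpu 0.17.
// Requires a GPU adapter at runtime.
async fn create_device() -> Option<(wgpu::Device, wgpu::Queue)> {
    let instance = wgpu::Instance::default();
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await?;

    // Start from the adapter's actual capabilities instead of the
    // default 128 MiB max_storage_buffer_binding_size.
    let limits = adapter.limits();

    let (device, queue) = adapter
        .request_device(
            &wgpu::DeviceDescriptor {
                label: Some("whisper-burn"),
                features: wgpu::Features::empty(),
                limits,
            },
            None,
        )
        .await
        .ok()?;
    Some((device, queue))
}
```

With `adapter.limits()` passed through, the 265 MB embedding buffer binds successfully on any GPU whose hardware limit exceeds it.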