riffusion / riffusion

Stable diffusion for real-time music generation

Home Page: http://riffusion.com/about


Out of 4GiB VRAM memory at run server

Demification opened this issue · comments

I followed the "Installation Guide for Riffusion App & Inference Server on Windows". After running the command python -m riffusion.server --port 3013 --host 127.0.0.1, I get:

╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ C:\ProgramData\Anaconda3\envs\riffusion-inference\lib\runpy.py:197 in _run_module_as_main │
│ │
│ 194 │ main_globals = sys.modules["__main__"].__dict__
│ 195 │ if alter_argv: │
│ 196 │ │ sys.argv[0] = mod_spec.origin │
│ ❱ 197 │ return _run_code(code, main_globals, None, │
│ 198 │ │ │ │ │ "__main__", mod_spec) │
│ 199 │
│ 200 def run_module(mod_name, init_globals=None, │
│ │
│ C:\ProgramData\Anaconda3\envs\riffusion-inference\lib\runpy.py:87 in _run_code │
│ │
│ 84 │ │ │ │ │ loader = loader, │
│ 85 │ │ │ │ │ package = pkg_name, │
│ 86 │ │ │ │ │ spec = mod_spec) │
│ ❱ 87 │ exec(code, run_globals) │
│ 88 │ return run_globals │
│ 89 │
│ 90 def _run_module_code(code, init_globals=None, │
│ │
│ C:\TheAiWork\Riffusion\riffusion-inference\riffusion\server.py:189 in │
│ │
│ 186 if __name__ == "__main__": │
│ 187 │ import argh │
│ 188 │ │
│ ❱ 189 │ argh.dispatch_command(run_app) │
│ 190 │
│ │
│ C:\ProgramData\Anaconda3\envs\riffusion-inference\lib\site-packages\argh\dispatching.py:306 in │
│ dispatch_command │
│ │
│ 303 │ """ │
│ 304 │ parser = argparse.ArgumentParser(formatter_class=PARSER_FORMATTER) │
│ 305 │ set_default_command(parser, function) │
│ ❱ 306 │ dispatch(parser, *args, **kwargs) │
│ 307 │
│ 308 │
│ 309 def dispatch_commands(functions, *args, **kwargs): │
│ │
│ C:\ProgramData\Anaconda3\envs\riffusion-inference\lib\site-packages\argh\dispatching.py:174 in │
│ dispatch │
│ │
│ 171 │ │ # normally this is stdout; can be any file │
│ 172 │ │ f = output_file │
│ 173 │ │
│ ❱ 174 │ for line in lines: │
│ 175 │ │ # print the line as soon as it is generated to ensure that it is │
│ 176 │ │ # displayed to the user before anything else happens, e.g. │
│ 177 │ │ # raw_input() is called │
│ │
│ C:\ProgramData\Anaconda3\envs\riffusion-inference\lib\site-packages\argh\dispatching.py:277 in │
│ _execute_command │
│ │
│ 274 │ │
│ 275 │ try: │
│ 276 │ │ result = _call() │
│ ❱ 277 │ │ for line in result: │
│ 278 │ │ │ yield line │
│ 279 │ except tuple(wrappable_exceptions) as e: │
│ 280 │ │ processor = getattr(function, ATTR_WRAPPED_EXCEPTIONS_PROCESSOR, │
│ │
│ C:\ProgramData\Anaconda3\envs\riffusion-inference\lib\site-packages\argh\dispatching.py:260 in │
│ _call │
│ │
│ 257 │ │ │ │ │ │ continue │
│ 258 │ │ │ │ │ keywords[k] = getattr(namespace_obj, k) │
│ 259 │ │ │ │
│ ❱ 260 │ │ │ result = function(*positional, **keywords) │
│ 261 │ │ │
│ 262 │ │ # Yield the results │
│ 263 │ │ if isinstance(result, (GeneratorType, list, tuple)): │
│ │
│ C:\TheAiWork\Riffusion\riffusion-inference\riffusion\server.py:55 in run_app │
│ │
│ 52 │ """ │
│ 53 │ # Initialize the model │
│ 54 │ global PIPELINE │
│ ❱ 55 │ PIPELINE = RiffusionPipeline.load_checkpoint( │
│ 56 │ │ checkpoint=checkpoint, │
│ 57 │ │ use_traced_unet=not no_traced_unet, │
│ 58 │ │ device=device, │
│ │
│ C:\TheAiWork\Riffusion\riffusion-inference\riffusion\riffusion_pipeline.py:109 in │
│ load_checkpoint │
│ │
│ 106 │ │ │
│ 107 │ │ # Optionally load a traced unet │
│ 108 │ │ if checkpoint == "riffusion/riffusion-model-v1" and use_traced_unet: │
│ ❱ 109 │ │ │ traced_unet = cls.load_traced_unet( │
│ 110 │ │ │ │ checkpoint=checkpoint, │
│ 111 │ │ │ │ subfolder="unet_traced", │
│ 112 │ │ │ │ filename="unet_traced.pt", │
│ │
│ C:\TheAiWork\Riffusion\riffusion-inference\riffusion\riffusion_pipeline.py:153 in │
│ load_traced_unet │
│ │
│ 150 │ │ │ local_files_only=local_files_only, │
│ 151 │ │ │ cache_dir=cache_dir, │
│ 152 │ │ ) │
│ ❱ 153 │ │ unet_traced = torch.jit.load(unet_file) │
│ 154 │ │ │
│ 155 │ │ # Wrap it in a torch module │
│ 156 │ │ class TracedUNet(torch.nn.Module): │
│ │
│ C:\ProgramData\Anaconda3\envs\riffusion-inference\lib\site-packages\torch\jit\_serialization.py: │
│ 162 in load │
│ │
│ 159 │ │
│ 160 │ cu = torch._C.CompilationUnit() │
│ 161 │ if isinstance(f, str) or isinstance(f, pathlib.Path): │
│ ❱ 162 │ │ cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files) │
│ 163 │ else: │
│ 164 │ │ cpp_module = torch._C.import_ir_module_from_buffer( │
│ 165 │ │ │ cu, f.read(), map_location, _extra_files │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯

OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.39 GiB already
allocated; 0 bytes free; 3.47 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting
max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
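To put the numbers in that message in perspective: the failed allocation is 20 MiB, not 20 GiB, and it fails only because the full-precision model weights had already filled almost all of the 4 GiB card. A rough back-of-envelope sketch (the parameter count below is an approximation for a Stable-Diffusion-sized model, not an exact figure) shows why fp32 weights alone come close to 4 GiB:

```python
# Rough VRAM estimate for model weights alone; activations, the CUDA
# context, and allocator overhead all add more on top of this.
def weight_vram_gib(params: int, bytes_per_param: int) -> float:
    """Return the GiB needed to hold `params` weights at the given precision."""
    return params * bytes_per_param / (1024 ** 3)

# ~1.07B parameters across UNet + VAE + text encoder (approximate).
sd_params = 1_066_000_000

fp32 = weight_vram_gib(sd_params, 4)  # float32: 4 bytes per weight
fp16 = weight_vram_gib(sd_params, 2)  # float16: 2 bytes per weight

print(f"fp32 weights: ~{fp32:.2f} GiB")
print(f"fp16 weights: ~{fp16:.2f} GiB")
```

At fp32 the weights alone land near 4 GiB, which matches the "3.39 GiB already allocated" in the traceback; at fp16 they would fit in about half that, which is why half-precision variants of Stable Diffusion run on smaller cards.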

you are trying to use 20 gigs from a 4gig GPU

Please explain in more detail how this should be fixed (or link to a guide). I don't understand. Do I need 20 GB of host RAM?

how are you getting this error?

I followed the instructions in "Installation Guide for Riffusion App & Inference Server on Windows". When starting the server with "python -m riffusion.server --port 3013 --host 127.0.0.1" I get this error. This is the first start.

The unittest test.audio_to_image_test runs OK.

The Stable Diffusion model is very large and needs a GPU with a lot of VRAM. There are versions of SD that work with less VRAM, but I'm not sure Riffusion supports that.

I know the lowest it can go is 8 GB of VRAM, last I remembered.

Just came here to ask if I can run it with 4 GB of VRAM... someone said 20. I'm crying 💯

I can run Stable Diffusion with 2 GB of VRAM if I limit myself to a batch size and count of 1, and set the --lowvram parameter in the .bat file. Is such a workaround also possible for Riffusion?
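For what it's worth, --lowvram is a flag of the AUTOMATIC1111 Stable Diffusion web UI, not of the Riffusion server, so it won't carry over directly. One thing that can be tried without code changes is the allocator hint the error message itself suggests. A sketch for Windows cmd, before launching the server (the value 128 is an untested assumption, and PYTORCH_CUDA_ALLOC_CONF is a documented PyTorch environment variable):

```shell
:: Ask PyTorch's CUDA caching allocator to cap block splits,
:: which reduces fragmentation of the remaining free VRAM.
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
python -m riffusion.server --port 3013 --host 127.0.0.1
```

Note that this only mitigates fragmentation; it cannot create VRAM that isn't there. With the fp32 weights already taking 3.39 GiB on a 4 GiB card, half-precision weights or a larger GPU are more likely to be the actual fix.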