camenduru / text-to-video-synthesis-colab

Text To Video Synthesis Colab

can't get the path to the model right

mberman84 opened this issue · comments

Hi, I'm trying to set this up locally and followed the script from the Google Colab notebook.

However, every time I try to run it, I'm told it can't find the model. I'm pointing to it just like in Colab, and the file structure is the same. What am I doing wrong?

python inference.py -m "/content/zeroscope_v1-1_320s" -p "ducks in a lake" -W 320 -H 320 -o /content/outputs -d cuda -x -s 33 -g 23 -f 30 -T 24
Traceback (most recent call last):
  File "C:\Users\mberm\miniconda3\envs\vid\lib\site-packages\diffusers\configuration_utils.py", line 358, in load_config
    config_file = hf_hub_download(
  File "C:\Users\mberm\miniconda3\envs\vid\lib\site-packages\huggingface_hub\utils\_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "C:\Users\mberm\miniconda3\envs\vid\lib\site-packages\huggingface_hub\utils\_validators.py", line 158, in validate_repo_id
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/content/zeroscope_v1-1_320s'. Use `repo_type` argument if needed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\mberm\OneDrive\Desktop\content\Text-To-Video-Finetuning\inference.py", line 194, in <module>
    videos = inference(**args)
  File "C:\Users\mberm\miniconda3\envs\vid\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\mberm\OneDrive\Desktop\content\Text-To-Video-Finetuning\inference.py", line 122, in inference
    pipeline = initialize_pipeline(model, device, xformers, sdp)
  File "C:\Users\mberm\OneDrive\Desktop\content\Text-To-Video-Finetuning\inference.py", line 21, in initialize_pipeline
    scheduler, tokenizer, text_encoder, vae, _unet = load_primary_models(model)
  File "C:\Users\mberm\OneDrive\Desktop\content\Text-To-Video-Finetuning\train.py", line 132, in load_primary_models
    noise_scheduler = DDPMScheduler.from_pretrained(pretrained_model_path, subfolder="scheduler")
  File "C:\Users\mberm\miniconda3\envs\vid\lib\site-packages\diffusers\schedulers\scheduling_utils.py", line 140, in from_pretrained
    config, kwargs, commit_hash = cls.load_config(
  File "C:\Users\mberm\miniconda3\envs\vid\lib\site-packages\diffusers\configuration_utils.py", line 394, in load_config
    raise EnvironmentError(
OSError: We couldn't connect to 'https://huggingface.co' to load this model, couldn't find it in the cached files and it looks like /content/zeroscope_v1-1_320s is not the path to a directory containing a scheduler_config.json file.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/diffusers/installation#offline-mode'.

Here's the absolute path to the content folder:

C:\Users\mberm\OneDrive\Desktop\content
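Reading the traceback: diffusers first tries to interpret `/content/zeroscope_v1-1_320s` as a Hugging Face Hub repo id (the HFValidationError), and only then reports that it is also not a local directory containing `scheduler_config.json`. The `/content/...` path only exists inside Colab; on Windows the `-m` argument has to point at the actual local model folder. A minimal sketch to check candidate paths before running inference (the Windows location below is an assumption that the model folder was copied into the Desktop `content` directory):

```python
import os

def find_model_dir(candidates):
    """Return the first candidate that looks like a local diffusers model
    folder, i.e. a directory containing scheduler/scheduler_config.json
    (the file the traceback says is missing), else None."""
    for path in candidates:
        if os.path.isfile(os.path.join(path, "scheduler", "scheduler_config.json")):
            return path
    return None

# The Colab path from the failing command, plus a hypothetical Windows
# location assuming the model sits inside the Desktop "content" folder.
candidates = [
    "/content/zeroscope_v1-1_320s",
    r"C:\Users\mberm\OneDrive\Desktop\content\zeroscope_v1-1_320s",
]
print(find_model_dir(candidates))
```

If this prints `None`, the model folder either wasn't downloaded or lives somewhere else entirely; once it prints a real directory, passing that (quoted) path to `-m` should stop diffusers from falling back to a Hub lookup.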