Hangover3832 / ComfyUI-Hangover-Moondream

Moondream is a lightweight multimodal large language model

Home Page: https://github.com/Hangover3832/ComfyUI-Hangover-Moondream

Object of type PhiConfig is not JSON serializable

lord-lethris opened this issue

I get the following error when running Moondream Interrogator:

Error occurred when executing Moondream Interrogator:

Object of type PhiConfig is not JSON serializable

File "D:\apps\SD-WebUI\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\apps\SD-WebUI\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\apps\SD-WebUI\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\apps\SD-WebUI\ComfyUI\custom_nodes\ComfyUI-Hangover-Moondream\ho_moondream.py", line 108, in interrogate
self.model = AutoModel.from_pretrained(
File "D:\apps\Python\Python310\lib\site-packages\transformers\models\auto\auto_factory.py", line 434, in from_pretrained
config, kwargs = AutoConfig.from_pretrained(
File "D:\apps\Python\Python310\lib\site-packages\transformers\models\auto\configuration_auto.py", line 871, in from_pretrained
return config_class.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "D:\apps\Python\Python310\lib\site-packages\transformers\configuration_utils.py", line 545, in from_pretrained
return cls.from_dict(config_dict, **kwargs)
File "D:\apps\Python\Python310\lib\site-packages\transformers\configuration_utils.py", line 712, in from_dict
logger.info(f"Model config {config}")
File "D:\apps\Python\Python310\lib\site-packages\transformers\configuration_utils.py", line 744, in __repr__
return f"{self.__class__.__name__} {self.to_json_string()}"
File "D:\apps\Python\Python310\lib\site-packages\transformers\configuration_utils.py", line 816, in to_json_string
return json.dumps(config_dict, indent=2, sort_keys=True) + "\n"
File "D:\apps\Python\Python310\lib\json\__init__.py", line 238, in dumps
**kw).encode(obj)
File "D:\apps\Python\Python310\lib\json\encoder.py", line 201, in encode
chunks = list(chunks)
File "D:\apps\Python\Python310\lib\json\encoder.py", line 431, in _iterencode
yield from _iterencode_dict(o, _current_indent_level)
File "D:\apps\Python\Python310\lib\json\encoder.py", line 405, in _iterencode_dict
yield from chunks
File "D:\apps\Python\Python310\lib\json\encoder.py", line 438, in _iterencode
o = _default(o)
File "D:\apps\Python\Python310\lib\json\encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
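
My own reading of the trace, not anything stated by the node's author: with an older transformers release, the moondream model config ends up containing nested config objects, and transformers' to_json_string() passes that dict straight to json.dumps, which can only serialize plain dicts, lists, strings, numbers, booleans and None. A minimal sketch of that failure mode, using a hypothetical stand-in class:

    import json

    class PhiConfigLike:
        """Hypothetical stand-in for the nested PhiConfig object in the model config."""
        pass

    try:
        # json.dumps cannot serialize arbitrary Python objects, so this raises a TypeError
        json.dumps({"text_config": PhiConfigLike()}, indent=2, sort_keys=True)
    except TypeError as e:
        print(e)  # Object of type PhiConfigLike is not JSON serializable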

Can you please run
python.exe -m pip freeze | findstr "timm einops transformers"
within your Python environment and post the output?

einops==0.7.0
timm==0.9.16
transformers==4.26.1

FYI - I updated transformers to transformers>=4.36.2, and now I get this:

[Moondream] loading model moondream2 revision '2024-03-04', please stand by....
!!! Exception during processing!!! cannot import name 'ToImage' from 'torchvision.transforms.v2' (D:\apps\Python\Python310\lib\site-packages\torchvision\transforms\v2\__init__.py)
Traceback (most recent call last):
  File "D:\apps\SD-WebUI\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\apps\SD-WebUI\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\apps\SD-WebUI\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\apps\SD-WebUI\ComfyUI\custom_nodes\ComfyUI-Hangover-Moondream\ho_moondream.py", line 124, in interrogate
    self.model = AutoModel.from_pretrained(
  File "D:\apps\Python\Python310\lib\site-packages\transformers\models\auto\auto_factory.py", line 550, in from_pretrained
    model_class = get_class_from_dynamic_module(
  File "D:\apps\Python\Python310\lib\site-packages\transformers\dynamic_module_utils.py", line 501, in get_class_from_dynamic_module
    return get_class_in_module(class_name, final_module)
  File "D:\apps\Python\Python310\lib\site-packages\transformers\dynamic_module_utils.py", line 201, in get_class_in_module
    module = importlib.machinery.SourceFileLoader(name, module_path).load_module()
  File "<frozen importlib._bootstrap_external>", line 548, in _check_name_wrapper
  File "<frozen importlib._bootstrap_external>", line 1063, in load_module
  File "<frozen importlib._bootstrap_external>", line 888, in load_module
  File "<frozen importlib._bootstrap>", line 290, in _load_module_shim
  File "<frozen importlib._bootstrap>", line 719, in _load
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\Users\Lethris\.cache\huggingface\modules\transformers_modules\vikhyatk\moondream2\4cb9e48b11351d6d73a53844f962de49d1192aa6\moondream.py", line 2, in <module>
    from .vision_encoder import VisionEncoder
  File "C:\Users\Lethris\.cache\huggingface\modules\transformers_modules\vikhyatk\moondream2\4cb9e48b11351d6d73a53844f962de49d1192aa6\vision_encoder.py", line 5, in <module>
    from torchvision.transforms.v2 import (
ImportError: cannot import name 'ToImage' from 'torchvision.transforms.v2' (D:\apps\Python\Python310\lib\site-packages\torchvision\transforms\v2\__init__.py)
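
As far as I can tell, ToImage was only added to torchvision.transforms.v2 around the 0.16 releases; on 0.15.x the v2 namespace exists but does not export that class, which matches the ImportError above. A quick way to check what your environment provides:

    import torchvision
    print(torchvision.__version__)  # 0.15.1 here; ToImage needs a newer release (roughly 0.16+)

    # This is the same import that moondream2's vision_encoder.py attempts:
    from torchvision.transforms.v2 import ToImage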

What about
python.exe -m pip freeze | findstr "torch"

python.exe -m pip freeze | findstr "torch"
open-clip-torch==2.24.0
pytorch-lightning==2.2.0.post0
torch==2.0.0
torch-directml==0.2.0.dev230426
torchaudio==2.2.0.dev20240123+cpu
torchmetrics==1.3.0
torchsde==0.2.6
torchvision==0.15.1

@lord-lethris, you need torch 2.1.2 and torchvision 0.16.2
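
If it helps, the usual way to move to those versions would be something along these lines (a sketch assuming a plain CPU or CUDA install; matching wheels for a torch-directml setup may differ or not exist yet):

python.exe -m pip install torch==2.1.2 torchvision==0.16.2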

Oh damn - well, that's not going to happen :D

I have an AMD card, so I'm stuck on DirectML until AMD get their finger out and update ROCm to work on Windows.

Thanks anyway :) 💞