Tensor size mismatch
ricperry opened this issue · comments
The node errors out with a tensor size mismatch, on both CPU and GPU. Here's the terminal output:
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "ComfyUI/execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "ComfyUI/execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "ComfyUI/execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "ComfyUI/custom_nodes/ComfyUI-Hangover-Moondream/ho_moondream.py", line 71, in interrogate
answer = self.model.answer_question(enc_image, prompt, self.tokenizer)
File "/home/ricperry1/.cache/huggingface/modules/transformers_modules/vikhyatk/moondream1/f6e9da68e8f1b78b8f3ee10905d56826db7a5802/moondream.py", line 93, in answer_question
answer = self.generate(
File "/home/ricperry1/.cache/huggingface/modules/transformers_modules/vikhyatk/moondream1/f6e9da68e8f1b78b8f3ee10905d56826db7a5802/moondream.py", line 77, in generate
output_ids = self.text_model.generate(
File "ComfyUI/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "ComfyUI/venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 1544, in generate
return self.greedy_search(
File "ComfyUI/venv/lib/python3.10/site-packages/transformers/generation/utils.py", line 2404, in greedy_search
outputs = self(
File "ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1529, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ricperry1/.cache/huggingface/modules/transformers_modules/vikhyatk/moondream1/f6e9da68e8f1b78b8f3ee10905d56826db7a5802/modeling_phi.py", line 709, in forward
hidden_states = self.transformer(
File "ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1529, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ricperry1/.cache/huggingface/modules/transformers_modules/vikhyatk/moondream1/f6e9da68e8f1b78b8f3ee10905d56826db7a5802/modeling_phi.py", line 675, in forward
else func(*args)
File "ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1529, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ricperry1/.cache/huggingface/modules/transformers_modules/vikhyatk/moondream1/f6e9da68e8f1b78b8f3ee10905d56826db7a5802/modeling_phi.py", line 541, in forward
attn_outputs = self.mixer(
File "ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1529, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl
return forward_call(*args, **kwargs)
File "/home/ricperry1/.cache/huggingface/modules/transformers_modules/vikhyatk/moondream1/f6e9da68e8f1b78b8f3ee10905d56826db7a5802/modeling_phi.py", line 514, in forward
attn_output_function(x, past_key_values, attention_mask)
File "/home/ricperry1/.cache/huggingface/modules/transformers_modules/vikhyatk/moondream1/f6e9da68e8f1b78b8f3ee10905d56826db7a5802/modeling_phi.py", line 494, in _forward_cross_attn
return attn_func(
File "/home/ricperry1/.cache/huggingface/modules/transformers_modules/vikhyatk/moondream1/f6e9da68e8f1b78b8f3ee10905d56826db7a5802/modeling_phi.py", line 491, in <lambda>
else lambda fn, *args, **kwargs: fn(*args, **kwargs)
File "ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1529, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "ComfyUI/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1538, in _call_impl
return forward_call(*args, **kwargs)
File "ComfyUI/venv/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast
return func(*args, **kwargs)
File "ComfyUI/venv/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast
return func(*args, **kwargs)
File "/home/ricperry1/.cache/huggingface/modules/transformers_modules/vikhyatk/moondream1/f6e9da68e8f1b78b8f3ee10905d56826db7a5802/modeling_phi.py", line 318, in forward
padding_mask.masked_fill_(key_padding_mask, 0.0)
RuntimeError: The expanded size of the tensor (760) must match the existing size (761) at non-singleton dimension 1. Target sizes: [1, 760]. Tensor sizes: [1, 761]
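For context on why this raises: `masked_fill_` requires the mask to broadcast to the target tensor, and a sequence-length off-by-one (760 vs. 761) cannot broadcast. A minimal pure-Python sketch of the NumPy/PyTorch broadcasting rule (illustrative helper, not from the moondream code):

```python
def can_broadcast(shape_a, shape_b):
    # NumPy/PyTorch broadcasting: walk the trailing dimensions; each pair
    # must either be equal, or one of them must be 1.
    for a, b in zip(reversed(shape_a), reversed(shape_b)):
        if a != b and a != 1 and b != 1:
            return False
    return True

# The shapes from the traceback: target [1, 760] vs. mask [1, 761].
# They differ at a non-singleton dimension, so masked_fill_ raises.
print(can_broadcast((1, 760), (1, 761)))  # False
print(can_broadcast((1, 760), (1, 760)))  # True
```

The off-by-one suggests the prompt/image token count computed by the node and the attention mask built inside the model disagree by one position.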
I see; this is vikhyat/moondream#50. Either downgrade transformers to 4.36.2, or wait for a response from vikhyat.
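If you want to check whether your environment is affected before loading the model, a small version guard can flag a transformers install newer than the 4.36.2 pin (a sketch; `needs_downgrade` is a hypothetical helper, not part of the node):

```python
from importlib.metadata import version

def needs_downgrade(installed: str, pinned: str = "4.36.2") -> bool:
    # Compare dotted version strings numerically, e.g. "4.37.0" > "4.36.2".
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(installed) > as_tuple(pinned)

# Example: check the transformers package actually installed in the venv.
# print(needs_downgrade(version("transformers")))
```

Applying the pin is the usual `pip install transformers==4.36.2` inside the ComfyUI venv.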
+1 having same issue (maybe):
`Error occurred when executing Moondream Interrogator (NO COMMERCIAL USE):
The size of tensor a (754) must match the size of tensor b (755) at non-singleton dimension 1`
+1 same type of error.
RuntimeError: The expanded size of the tensor (754) must match the existing size (755) at non-singleton dimension 1. Target sizes: [1, 754]. Tensor sizes: [1, 755]
I am the poster of that issue, and here is at least a temporary workaround: vikhyat/moondream#50 (comment)
I hope the update fixes the issue 🙄. Please make sure to select the moondream2 model within the node.
Works for me now! Thanks a lot, Hangover3832.
Confirmed fixed, thanks!