kyegomez / zeta

Build high-performance AI models with modular building blocks

Home Page: https://zeta.apac.ai

[BUG] [DPO - Direct Preference Optimization] [RuntimeError: mat1 and mat2 must have the same dtype, but got Long and Float]

vyomakesh09 opened this issue

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
[<ipython-input-26-508a53a4cb02>](https://localhost:8080/#) in <cell line: 26>()
     24 
     25 # Compute loss
---> 26 loss = dpo_model(preferred_seq, unpreferred_seq)
     27 print(loss)

9 frames
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/linear.py](https://localhost:8080/#) in forward(self, input)
    112 
    113     def forward(self, input: Tensor) -> Tensor:
--> 114         return F.linear(input, self.weight, self.bias)
    115 
    116     def extra_repr(self) -> str:

RuntimeError: mat1 and mat2 must have the same dtype, but got Long and Float
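The traceback shows a `Long` (integer token-ID) tensor reaching `F.linear`, whose weights are `Float`. A minimal sketch of the failure and two common workarounds, assuming `preferred_seq`/`unpreferred_seq` are integer token-ID tensors (the tensor names and shapes here are illustrative, not taken from the DPO module itself):

```python
import torch
import torch.nn as nn

linear = nn.Linear(8, 4)
token_ids = torch.randint(0, 8, (2, 8))  # dtype=torch.long, like tokenized sequences

# Reproduces the reported error: nn.Linear expects a floating-point input,
# so a Long tensor triggers "mat1 and mat2 must have the same dtype".
try:
    linear(token_ids)
except RuntimeError as e:
    print(e)

# Workaround 1: cast the input to float before the linear layer.
out = linear(token_ids.float())
print(out.shape)  # (2, 4)

# Workaround 2 (often the real fix for a policy model): token IDs should
# pass through an embedding layer first, which maps Long IDs to Float vectors.
embed = nn.Embedding(100, 8)
out = linear(embed(token_ids))
print(out.shape)  # (2, 8, 4)
```

Which workaround is right depends on whether the DPO wrapper expects raw token IDs (then the wrapped model should embed them) or already-embedded float features (then the caller should not pass Long tensors).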
