xl0 / lovely-tensors

Tensors, ready for human consumption

Home Page: https://xl0.github.io/lovely-tensors


NotImplementedError when using Apple MPS device

malcolmsailor opened this issue

Hi, great library!

I have an M1 MacBook, and if I try to display a tensor on Apple's "mps" device, I get the following exception:

>>> import torch
>>> import lovely_tensors as lt
>>> lt.monkey_patch()
>>> torch.rand(3)
tensor[3] x∈[0.115, 0.698] μ=0.389 σ=0.293 [0.353, 0.698, 0.115]
>>> torch.rand(3, device="mps")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/malcolm/venvs/mal_vq_vae/lib/python3.10/site-packages/lovely_tensors/patch.py", line 26, in __repr__
    return str(StrProxy(self))
  File "/Users/malcolm/venvs/mal_vq_vae/lib/python3.10/site-packages/lovely_tensors/repr_str.py", line 180, in __repr__
    return to_str(self.t, plain=self.plain, verbose=self.verbose,
  File "/Users/malcolm/venvs/mal_vq_vae/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/Users/malcolm/venvs/mal_vq_vae/lib/python3.10/site-packages/lovely_tensors/repr_str.py", line 137, in to_str
    common = torch_to_str_common(t, color=color)
  File "/Users/malcolm/venvs/mal_vq_vae/lib/python3.10/site-packages/lovely_tensors/repr_str.py", line 70, in torch_to_str_common
    pinf = ansi_color("+Inf!", "red", color) if amax.isposinf() else None
NotImplementedError: The operator 'aten::isposinf.out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.

As you can see, M1/MPS support in PyTorch is still a bit patchy, and it looks like you are relying on at least one not-yet-implemented operator. If you don't have an M1 Mac, this might be hard to reproduce. One hacky fix might be to check whether a tensor is on the mps device and, if so, move a copy to the CPU before running the repr logic.
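
Something along these lines, just to illustrate the idea (a hypothetical helper, not actual lovely-tensors code):

import torch

def to_repr_safe(t: torch.Tensor) -> torch.Tensor:
    # MPS is still missing some ops (e.g. aten::isposinf.out),
    # so format a CPU copy instead of the original tensor.
    return t.detach().cpu() if t.device.type == "mps" else t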

Thank you 💕 :)

If it's just .isposinf() (and, I guess, .isneginf() too), I can copy the result of amax() to the CPU before checking for inf, which should not affect performance. I'll try to do it tomorrow. But I don't have access to an MPS machine, so you might run into other issues after this one is fixed.
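
Roughly like this (just a sketch of the idea with hypothetical names, not the actual patch):

import torch

def inf_markers(t: torch.Tensor):
    # Only the reduced scalars are moved to the CPU, so the per-element
    # work stays on the original device and the copy is negligible.
    amax = t.max().cpu()
    amin = t.min().cpu()
    return bool(amax.isposinf()), bool(amin.isneginf())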

Great! Then, if I get similar issues after that, I can conceivably do a PR modeled on your changes to fix them.

I pushed a fix to git. Could you give it a try?

pip install git+https://github.com/xl0/lovely-tensors

😀

>>> import torch
>>> import lovely_tensors as lt
>>> lt.monkey_patch()
>>> torch.rand(3, device="mps")
tensor[3] x∈[0.032, 0.086] μ=0.065 σ=0.029 mps:0 [0.086, 0.078, 0.032]

Thanks for patching this so quickly!

Awesome! I'll include it in the next release. Please let me know if you spot any other issues.