test_norm_3d_inner_axis blocks landing torch.norm improvements on pytorch/pytorch
kit1980 opened this issue
This PR (pytorch/pytorch#81761) has several improvements for torch.norm, but it can't be landed because it fails a glow test:
FAIL: test_norm_3d_inner_axis (glow.torch_glow.nodes.norm_test.TestNorm)
Basic test of the PyTorch norm Node on Glow.
AssertionError: Expected fusion of {'aten::frobenius_norm'}, but only set() was fused in graph
graph(%self : __torch__.glow.torch_glow.nodes.norm_test.SimpleNormModule,
%tensor : Float(*, *, *, requires_grad=0, device=cpu)):
%8 : int[] = prim::Constant[value=[1]]()
%2 : NoneType = prim::Constant()
%3 : bool = prim::Constant[value=0]() # /data/sandcastle/boxes/eden-trunk-hg-fbcode-fbsource/fbcode/buck-out/dev/gen/aab7ed39/glow/glow/torch_glow/tests/norm_test#binary,link-tree/torch/functional.py:1471:0
%5 : int = prim::Constant[value=2]() # /data/sandcastle/boxes/eden-trunk-hg-fbcode-fbsource/fbcode/buck-out/dev/gen/aab7ed39/glow/glow/torch_glow/tests/norm_test#binary,link-tree/torch/functional.py:1471:0
%7 : Tensor = aten::linalg_vector_norm(%tensor, %5, %8, %3, %2) # /data/sandcastle/boxes/eden-trunk-hg-fbcode-fbsource/fbcode/buck-out/dev/gen/aab7ed39/glow/glow/torch_glow/tests/norm_test#binary,link-tree/torch/functional.py:1471:0
return (%7)
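The assertion comes from the glow test harness comparing the set of ops it expects to fuse against the set of ops that were actually fused in the traced graph. A minimal Python sketch of that check (the function name here is hypothetical, not the actual glow test helper):

```python
def check_expected_fusion(expected_ops: set, fused_ops: set) -> None:
    # Hypothetical re-creation of the glow test's fusion assertion:
    # every op listed in expected_ops must appear in the fused set.
    if not expected_ops.issubset(fused_ops):
        raise AssertionError(
            f"Expected fusion of {expected_ops}, but only {fused_ops} was fused in graph"
        )

# In these terms, the failure above is: the test expects aten::frobenius_norm,
# but the traced graph now contains aten::linalg_vector_norm instead, so the
# fuser matches nothing and the fused set is empty.
try:
    check_expected_fusion({"aten::frobenius_norm"}, set())
except AssertionError as e:
    print(e)  # prints: Expected fusion of {'aten::frobenius_norm'}, but only set() was fused in graph
```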
See pytorch/pytorch#81761 for more details.
Any suggestions on how to unblock the PR?
@jfix71 do you know the answer by any chance?
It seems the original PR now dispatches torch.norm to linalg.vector_norm instead of frobenius_norm, so we need to update the loader to load linalg.vector_norm, as seen below? CC @khabinov @qxy11
glow/torch_glow/src/PyTorchModelLoader.cpp
Lines 1721 to 1723 in 87753a1
Yes, the loader has to be updated to also load aten::linalg_vector_norm at:
glow/torch_glow/src/PyTorchModelLoader.cpp
Lines 1721 to 1723 in 87753a1
Also, this test case needs to be updated to expect aten::linalg_vector_norm instead of aten::frobenius_norm:
glow/torch_glow/tests/nodes/norm_test.py
Line 56 in 1b4e9d1
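The test change itself is a one-symbol swap in the expected-fusion set. A hedged sketch of the before/after (the surrounding harness in glow's norm_test.py is elided, and the names below are illustrative):

```python
# Hypothetical before/after of the expected-fusion set used by
# test_norm_3d_inner_axis; the real test wires this into the glow
# fusion-checking utilities.
EXPECTED_FUSION_BEFORE = {"aten::frobenius_norm"}
EXPECTED_FUSION_AFTER = {"aten::linalg_vector_norm"}

def expected_fusion(norm_dispatches_to_linalg: bool) -> set:
    """Pick the op the fuser should see, depending on how torch.norm dispatches."""
    return EXPECTED_FUSION_AFTER if norm_dispatches_to_linalg else EXPECTED_FUSION_BEFORE
```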
Can we disable the test for now and update it after the PyTorch change lands?
Yes, but please update the section below with {{"aten::norm", "aten::frobenius_norm", "aten::linalg_vector_norm"}, (and it'd be great if you could run those unit tests locally to make sure the change works). If torch::norm is used in any models running on A*, the PR might break model loading without this change.
glow/torch_glow/src/PyTorchModelLoader.cpp
Lines 1721 to 1723 in 87753a1
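Conceptually, the loader keeps a table mapping JIT symbol names to loader functions, and the suggested fix registers the new dispatch target alongside the existing entries. A Python sketch of that idea for illustration only (the real table lives in PyTorchModelLoader.cpp and is C++; the names below are hypothetical):

```python
# Hypothetical symbol -> loader-function registry mirroring the structure
# around lines 1721-1723 of PyTorchModelLoader.cpp.
SYMBOL_LOADERS = {}

def load_norm(node):
    """Placeholder standing in for the loader's norm-handling function."""
    return f"loaded {node}"

def register(symbols, loader):
    # Map every listed JIT symbol to the same loader function.
    for s in symbols:
        SYMBOL_LOADERS[s] = loader

# The suggested change: register aten::linalg_vector_norm alongside the
# symbols the loader already handles, so graphs produced by the new
# torch.norm dispatch still load.
register({"aten::norm", "aten::frobenius_norm", "aten::linalg_vector_norm"}, load_norm)
```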
What's the current state of this issue?