yangsenius / TransPose

PyTorch implementation of "TransPose: Keypoint Localization via Transformer", ICCV 2021.

Paper: https://github.com/yangsenius/TransPose/releases/download/paper/transpose.pdf


Static quantization of TransPose model in PyTorch? NotImplementedError: Could not run ‘aten::add.out’

mukeshnarendran7 opened this issue

I am trying to run post-training static quantization on the TransPose-H-A4 model in PyTorch, but I hit the error below; it comes from the `aten::add.out` operation. How can I modify the pre-trained model so that it can be quantized? Thanks. My quantization flow is sketched below, followed by the full traceback.
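The flow is plain eager-mode static quantization, roughly like this (a sketch; the hub entrypoint name and the 256x192 input size are my assumptions based on the repo's README, so adjust as needed):

```python
import torch

# Load the pretrained TransPose-H-A4 model from torch.hub
# (entrypoint name is an assumption -- see the repo's hubconf.py)
model = torch.hub.load('yangsenius/TransPose:main', 'tph_a4_256x192',
                       pretrained=True)
model.eval()

# Wrap with QuantStub/DeQuantStub so inputs are quantized on entry
model = torch.quantization.QuantWrapper(model)
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')

torch.quantization.prepare(model, inplace=True)
x = torch.randn(1, 3, 256, 192)  # calibration batch (random here)
model(x)                         # run observers to collect ranges
torch.quantization.convert(model, inplace=True)

model_static_quantized = model
model_static_quantized(x).shape  # raises the NotImplementedError below
```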

```
---> 8 model_static_quantized(x).shape

7 frames

/root/.cache/torch/hub/yangsenius_TransPose_main/lib/models/transpose_h.py in forward(self, x)
     99     residual = self.downsample(x)
    100
--> 101     out += residual
    102     out = self.relu(out)
    103

NotImplementedError: Could not run 'aten::add.out' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit Internal Login for possible resolutions. 'aten::add.out' is only available for these backends: [CPU, CUDA, Meta, MkldnnCPU, SparseCPU, SparseCUDA, SparseCsrCPU, SparseCsrCUDA, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].
```
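The error means quantized tensors reach the bare in-place add `out += residual` in the residual block, and `aten::add.out` has no QuantizedCPU kernel. The usual eager-mode workaround is to route the skip-connection add through `FloatFunctional`, which `convert()` can swap for a quantized add. Here is a sketch of a patched block for `lib/models/transpose_h.py`, assuming the standard conv-bn-relu residual layout; only the add changes:

```python
import torch.nn as nn
from torch.nn.quantized import FloatFunctional

class BasicBlock(nn.Module):
    """Quantization-friendly residual block (sketch). Assumes the usual
    HRNet-style conv-bn-relu layout; only the skip-connection add differs
    from the block in lib/models/transpose_h.py."""
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super().__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = nn.Conv2d(planes, planes, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.downsample = downsample
        # FloatFunctional turns the bare tensor add into a module, so
        # convert() can replace it with quantized::add, which does have
        # a QuantizedCPU kernel (unlike aten::add.out).
        self.skip_add = FloatFunctional()

    def forward(self, x):
        residual = x
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        if self.downsample is not None:
            residual = self.downsample(x)
        out = self.skip_add.add(out, residual)  # was: out += residual
        out = self.relu(out)
        return out
```

`FloatFunctional` carries no weights, so the pretrained checkpoint should still load unchanged. The same substitution is needed for every other bare tensor `+`/`+=` between quantized tensors in the model's forward methods; this is the pattern torchvision uses in its quantizable ResNet blocks.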