[Dynamism] Running forward pass of FCOS model
ymwangg opened this issue · comments

Feature
We'll use this thread to track issues related to running the forward pass of FCOS model.
Issues
Tested using this script with flag `XLA_EXPERIMENTAL="nonzero"`.

1. `torch.where(x > 0.5)` (lowered to `aten::nonzero`) returns wrong output dtype int32 instead of int64.
2. `aten::index` does not support int32 as input element type.
3. `min(SymInt, int)` raised "NYI" error.
4. `torch.topk` does not support SymInt type as input parameter `k`.
5. Unlowered ops: `aten::_unique2`, `torchvision::nms`.
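For context on issue 1, a minimal eager-mode sketch (the tensor values here are illustrative, not from the FCOS script) showing the expected dtype that the XLA lowering diverges from:

```python
import torch

# In eager PyTorch, torch.where(cond) is equivalent to
# torch.nonzero(cond, as_tuple=True) and returns int64 index tensors.
x = torch.tensor([0.1, 0.7, 0.9, 0.3])
(idx,) = torch.where(x > 0.5)

print(idx.dtype)  # torch.int64 in eager mode; the XLA lowering returned int32
print(x[idx])     # advanced indexing (aten::index) with the returned indices
```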
`aten::nonzero` returning int32 is expected: on XLA it seems we expect the size (used for `SetDimensionSize` and returned by `GetDimensionSize`) to be int32 instead of int64 for XLA:GPU and XLA:TPU.
I explicitly made the size int32 in #4243. I guess now, since `aten::index` does not work with s32, we need a different solution.
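One possible interim workaround (a sketch, not something proposed in this thread; the helper name is hypothetical) is to cast the nonzero output back to int64 on the user side before indexing:

```python
import torch

def where_indices_i64(x, threshold):
    # Hypothetical helper: cast the (possibly int32 on XLA) index
    # tensor to int64 so downstream aten::index accepts it.
    (idx,) = torch.where(x > threshold)
    return idx.to(torch.int64)

x = torch.tensor([0.1, 0.7, 0.9, 0.3])
idx = where_indices_i64(x, 0.5)
print(x[idx])  # indexing with the cast indices
```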
Yes, if `aten::index` accepts s32 then issue 1 shouldn't be significant. The native PyTorch `aten::index` supports the s32 type, but torch_xla blocked it here. I'll see if I can enable it.
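The native-PyTorch behavior referred to above can be checked directly in eager mode (values here are illustrative):

```python
import torch

# Native aten::index accepts an s32 (int32) index tensor on CPU,
# which is the behavior torch_xla's type check currently blocks.
x = torch.tensor([0.1, 0.7, 0.9, 0.3])
idx32 = torch.tensor([1, 2], dtype=torch.int32)
print(x[idx32])  # advanced indexing with int32 indices
```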