pytorch / xla

Enabling PyTorch on XLA Devices (e.g. Google TPU)

Home Page: https://pytorch.org/xla


[Dynamism] Running forward pass of FCOS model

ymwangg opened this issue · comments

🚀 Feature

We'll use this thread to track issues related to running the forward pass of FCOS model.

Issues

Tested using this script with the flag `XLA_EXPERIMENTAL="nonzero"`.

  1. `torch.where(x > 0.5)` (lowered to `aten::nonzero`) returns the wrong output dtype: int32 instead of int64.
  2. `aten::index` does not support int32 as the input element type.
  3. `min(SymInt, int)` raises an "NYI" error.
  4. `torch.topk` does not support a SymInt as the input parameter `k`.
  5. Unlowered ops: `aten::_unique2`, `torchvision::nms`.
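A minimal CPU repro sketch of issue 1 (no XLA device assumed available). `torch.where` with a single condition argument lowers to `aten::nonzero`; native PyTorch returns int64 index tensors, while the report says XLA currently yields int32:

```python
import torch

x = torch.rand(4, 4)
idx = torch.where(x > 0.5)  # single-arg form -> aten::nonzero

# Native (CPU) behavior: index tensors are int64.
# Per this issue, the XLA lowering returns int32 instead.
print(idx[0].dtype)  # torch.int64 on CPU
```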

`aten::nonzero` returning int32 is expected: on XLA it seems we expect the size (used for `SetDimensionSize` and returned by `GetDimensionSize`) to be int32 instead of int64 for XLA:GPU and XLA:TPU.

I explicitly made the size int32 in #4243. I guess that since `aten::index` does not work with s32, we now need a different solution.

Yes, if `aten::index` accepts s32 then issue 1 shouldn't be significant. Native PyTorch `aten::index` supports the s32 type, but torch_xla blocked it here. I'll see if I can enable it.
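A quick sketch of the native behavior referenced above: on CPU, advanced indexing (`aten::index`) accepts an int32 index tensor, which is the path torch_xla currently rejects.

```python
import torch

x = torch.arange(10)
i32 = torch.tensor([1, 3, 5], dtype=torch.int32)

# Native PyTorch accepts s32 index tensors for aten::index;
# per this thread, torch_xla blocks this element type.
out = x[i32]
print(out)  # tensor([1, 3, 5])
```

If torch_xla lifts that restriction, the int32 output of `aten::nonzero` could feed `aten::index` directly without a dtype cast.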