pytorch / xla

Enabling PyTorch on XLA Devices (e.g. Google TPU)

Home Page: https://pytorch.org/xla

The ability to exchange between TPU computation and CPU(GPU) computation

rwbfd opened this issue

🚀 Feature

The ability to exchange between TPU computation and CPU(GPU) computation

Motivation

As far as I know, it is not yet possible to combine a CPU pipeline with TPU computation within the XLA framework. There are two prime examples of this:

  1. In diffusion models, random numbers must be generated when using a numerical SDE solver. I am not aware whether the TPU can handle random number generation (see the sketch after this list).
  2. In many CV applications, it is important to generate data augmentations.
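
On the first point, PyTorch/XLA can draw random numbers directly on the XLA device. A minimal sketch, assuming a standard torch_xla installation; the tensor shape is illustrative:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # the current XLA device (e.g. a TPU core)

# Random normal noise generated on the XLA device itself, as a
# numerical SDE solver step might require.
noise = torch.randn(4, 4, device=device)
print(noise.device)  # e.g. xla:0
```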

When necessary, XLA already falls back to CPU instead of TPU to execute computations. Further, you can move tensors back and forth between XLA and CPU devices if you need to explicitly perform computation on CPU.
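
A minimal sketch of that round trip, assuming a standard torch_xla installation; the `flip` call is a hypothetical stand-in for a real augmentation pipeline:

```python
import torch
import torch_xla.core.xla_model as xm

xla_device = xm.xla_device()

# Do CPU-side work (e.g. data augmentation) on an ordinary CPU tensor...
cpu_tensor = torch.rand(2, 2)
augmented = cpu_tensor.flip(0)  # stand-in for a real augmentation

# ...move it to the XLA device for TPU computation...
xla_tensor = augmented.to(xla_device)
result = xla_tensor @ xla_tensor

# ...and bring the result back to the CPU when needed.
result_cpu = result.cpu()
print(result_cpu)
```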