google / compare_gan

Compare GAN code.

Can't run on v3-8 or v3-32 TPU nodes.

mbbrodie opened this issue · comments

Hi, I trained TPU-accelerated GANs from https://github.com/tensorflow/gan without any issues, but I can't seem to get the compare_gan examples to run on GCP TPUs.

Here is the general error, which appears whether I use ctpu, gcloud, or the online GUI to set up compute resources.

tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot assign a device for operation input_pipeline_task0/TensorSliceDataset: node input_pipeline_task0/TensorSliceDataset (defined at /usr/local/lib/python3.5/dist-packages/tensorflow_core/python/framework/ops.py:1748) was explicitly assigned to /job:worker/task:0/device:CPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0, /job:localhost/replica:0/task:0/device:XLA_CPU:0 ]. Make sure the device specification refers to a valid device.
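For reference, a quick way to check the TPU side from the VM (a minimal sketch; it assumes the Cloud SDK is installed and TPU_NAME/TPU_ZONE are exported):

# The node should report state: READY, and networkEndpoints is the gRPC
# address the TensorFlow session is supposed to attach to.
gcloud compute tpus list --zone=$TPU_ZONE
gcloud compute tpus describe $TPU_NAME --zone=$TPU_ZONE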

Any thoughts here?
Is there a specific python/tensorflow version I should use for running compare_gan?

Thanks!

Same issue here. I tried TF 1.15 and TF 1.15.3 and got the same error message with both.

@hytseng0509, found a workaround for now:
After running into errors on different VM configs, I got TPU training (no eval) working on a VM node with a GPU. Regardless of the TF/Python version or RAM/# CPUs, I couldn't get training to work without a GPU on board.

For the repo maintainers:
(First, thank you -- the repo is awesome and much more intuitive to use than TF-GAN).

Some of the README could use some updated language:

  • https://github.com/google/compare_gan#training-on-cloud-tpus seems to imply that you ONLY need a GPU if you're evaluating.
  • Listing the TF & Python versions (that you used to test/run your models) would be helpful.
  • A summary for setting up on GCP would be nice (e.g. getting started with a basic VM and v3-8 TPU; something like the ctpu sketch below would cover most of it).
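For example, a one-command starter (a sketch only; the name, zone, and versions are placeholders, and ctpu up creates a paired VM and TPU):

ctpu up --name=compare-gan --zone=us-central1-a --tpu-size=v3-8 --tf-version=1.15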

I imagine newcomers (especially students!) would appreciate spending their TPU $$$'s actually training instead of installing 10 different TF/TPU setups.

@mbbrodie Thanks for sharing! How did you configure the VM and TPU? I still got the same error message using a VM with GPUs.

No problem, here's my basic setup:
VM Config (a rough gcloud equivalent follows the list)

  • c2-deeplearning-pytorch-1-4-cu101-v20200826-debian-10
  • n1-highmem-8 (8 vCPUs, 52 GB memory)
  • 1 NVIDIA Tesla V100
  • Zone: us-central1-a
  • Allow HTTP/HTTPS traffic
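For reference, a rough gcloud equivalent of that config (the instance name, image family, and disk size are assumptions; swap in whichever Deep Learning VM image you prefer):

gcloud compute instances create compare-gan-vm \
  --zone=us-central1-a \
  --machine-type=n1-highmem-8 \
  --accelerator=type=nvidia-tesla-v100,count=1 \
  --maintenance-policy=TERMINATE \
  --image-family=pytorch-1-4-cu101 \
  --image-project=deeplearning-platform-release \
  --boot-disk-size=200GB \
  --tags=http-server,https-server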

Obviously, the boot image is meant for PyTorch; you'll probably want to find something that comes with tensorflow-gpu==1.15 installed.

However, if you go this route, make the following changes:

  • On startup you'll be prompted to install the NVIDIA drivers. Go ahead and install them (about 1 min).
  • Run the following:
    sudo apt-get install python3 python3-pip
    pip3 install tensorflow-gpu==1.15.0
  • Clone compare_gan and run pip3 install -e . as explained in the README.

Because TensorFlow is...well, TensorFlow...your tensorflow-gpu 1.15 install will not actually use the GPU out of the box on this image: TF 1.15 looks for CUDA 10.0 library names, while the image ships CUDA 10.1.
You'll need to copy or symlink a few libraries:

cp /usr/local/cuda-10.1/lib64/libcudart.so libcudart.so.10.0
cp /usr/local/cuda-10.1/lib64/libcublas.so.10 libcublas.so.10.0
cp /usr/local/cuda-10.1/lib64/libcufft.so.10 libcufft.so.10.0
cp /usr/local/cuda-10.1/lib64/libcurand.so.10 libcurand.so.10.0
cp /usr/local/cuda-10.1/lib64/libcusolver.so.10 libcusolver.so.10.0
cp /usr/local/cuda-10.1/lib64/libcusparse.so.10 libcusparse.so.10.0
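Once the libraries are in place (and the loader can find them), a quick sanity check that this TF build actually sees the GPU:

# Should print 1.15.0 and True; a missing .so name in the log output means one
# of the copies/symlinks above didn't take.
python3 -c "import tensorflow as tf; print(tf.__version__); print(tf.test.is_gpu_available())"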

TPU Config
Nothing hard here. I just used https://console.cloud.google.com/compute/tpus to create the latest set of TPU instances:

  • zone=europe-west4-a (US zones work well too)
  • TPU type=v3-32
  • TPU software version: 1.15
  • On the VM, export TPU_NAME={your tpu name} and export TPU_ZONE={your zone}
  • Make sure your model/data GCS buckets grant the TPU's service account the right roles (see the sketch after this list).
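For reference, a rough gcloud/gsutil equivalent of the console steps above (a sketch only; the project number and bucket name are placeholders, and storage.objectAdmin is the usual recommendation rather than a requirement):

# Create the TPU node; flags mirror the console choices above.
gcloud compute tpus create $TPU_NAME \
  --zone=$TPU_ZONE \
  --accelerator-type=v3-32 \
  --version=1.15 \
  --network=default

# Look up the TPU's service account, then grant it access to your buckets.
gcloud compute tpus describe $TPU_NAME --zone=$TPU_ZONE --format="value(serviceAccount)"
gsutil iam ch serviceAccount:service-<PROJECT_NUMBER>@cloud-tpu.iam.gserviceaccount.com:roles/storage.objectAdmin gs://<your-model-bucket>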

Anyway, you'll likely have more fun bugs to sort out for your particular use case. But this can at least get you started.

@mbbrodie Thanks for the information! Did you encounter the error message requesting the installation of googleapiclient and oauth2client in your setup?

You're welcome. And yes, you'll need to pip install google-api-python-client (which provides the googleapiclient module) and oauth2client as well.
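For example:

# googleapiclient is shipped by the google-api-python-client package on PyPI.
pip3 install --upgrade google-api-python-client oauth2client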
For simplicity, you could also change
cp /usr/local/cuda-10.1/lib64/libcudart.so libcudart.so.10.0
(and other cp commands) to
sudo cp /usr/local/cuda-10.1/lib64/libcudart.so /usr/local/cuda-10.1/lib64/libcudart.so.10.0.

Otherwise, make sure the directory holding the renamed libs is on your LD_LIBRARY_PATH.
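For example, a symlink variant of the copy commands above (a sketch only; the CUDA paths match the earlier commands, and /path/to/renamed/libs is a placeholder):

cd /usr/local/cuda-10.1/lib64
sudo ln -sf libcudart.so libcudart.so.10.0
sudo ln -sf libcublas.so.10 libcublas.so.10.0
sudo ln -sf libcufft.so.10 libcufft.so.10.0
sudo ln -sf libcurand.so.10 libcurand.so.10.0
sudo ln -sf libcusolver.so.10 libcusolver.so.10.0
sudo ln -sf libcusparse.so.10 libcusparse.so.10.0

# Either way, the loader has to be able to find the new names, e.g.:
export LD_LIBRARY_PATH=/usr/local/cuda-10.1/lib64:$LD_LIBRARY_PATH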

I have the same error here: InvalidArgumentError: Cannot assign a device for operation input_pipeline_task0/TensorSliceDataset. I wonder if this is an issue in the TF framework itself?