backprop-ai / backprop

Backprop makes it simple to use, finetune, and deploy state-of-the-art ML models.

Home Page: https://backprop.co

Finetuning & CUDA

karan-jgu opened this issue

Hello Backprop Team!

Great job on the library.

I was trying to replicate the Generate Questions Fine-tuning example: https://github.com/backprop-ai/backprop/blob/main/examples/Finetuning_GettingStarted.ipynb

However, I'm facing the following error:

Exception: You need a cuda capable (Nvidia) GPU for fine-tuning.

[screenshot of the exception traceback]

When I add the following argument to the TextGeneration model call:
device="cuda"
the error changes to: No CUDA GPUs are available

[screenshot of the "No CUDA GPUs are available" error]
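
For context, here is roughly what I'm running. The base model name and training pairs below are placeholders, and the finetune call is my best recollection of the notebook, so the exact arguments may differ slightly:

```python
import backprop

# Construct the text generation task; "t5-small" is a placeholder base model.
tg = backprop.TextGeneration("t5-small", device="cuda")

# Placeholder question-generation pairs in the spirit of the notebook.
inp = ["Generate a question about: Paris is the capital of France."]
out = ["What is the capital of France?"]

# This is the call that fails with "No CUDA GPUs are available" on my instance.
tg.finetune({"input_text": inp, "output_text": out})
```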

I'm using the following AWS EC2 instance which claims to have NVIDIA CUDA: https://aws.amazon.com/marketplace/pp/Amazon-Web-Services-AWS-Deep-Learning-Base-AMI-Ubu/B07Y3VDBNS#pdp-overview

Moreover, when I run the command nvcc --version, I see the following output:
[screenshot of the nvcc --version output]

Please help. Where am I going wrong?

Best,
Karan

Hey,

Thanks!
The AMI you listed includes the CUDA libraries, but it can be launched on EC2 instance types that don't have a GPU.
Just to confirm, which EC2 instance type are you running this on?

And try running nvidia-smi to be sure you have a GPU and it's working correctly.
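
If it's easier, you can also check from Python with torch; this is just a quick sanity check, not anything backprop-specific:

```python
import torch

# True only if PyTorch can see a working CUDA GPU and driver.
print(torch.cuda.is_available())

if torch.cuda.is_available():
    # Name of the first GPU, e.g. a Tesla T4 on a g4dn instance.
    print(torch.cuda.get_device_name(0))
```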

Hey Kristo,

The details of the EC2 Instance are as follows:
AMI Name: Deep Learning Base AMI (Ubuntu 18.04) Version 37.0
Instance Type: t2.medium

I tried the nvidia-smi command and got the following response:
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

Thanks.

Seems like this is the problem. The t2.medium instance doesn't have a GPU.

You need to go for p3, g3, g4, or p4 instance types.
https://docs.aws.amazon.com/dlami/latest/devguide/gpu.html

Cheapest one I found, as an example, is g4dn.xlarge.

Hi Kristo,

Thank you for sharing this. I was able to run the fine-tuning example on a g4dn.xlarge instance, and could successfully train and save the model.

Now the question is: how can I use the trained model on a non-CUDA device?

I've copied the model folder to my local development machine (MacBook Air, no CUDA), and I get the following error when I load the fine-tuned model:

model = backprop.TextGeneration("genQA")

Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

I've tried the following as well:

model = backprop.TextGeneration("genQA", device="cpu") but the error persists.

Note: I was able to load and use the model on the CUDA machine.

Thanks for being patient with me.

Glad you could get finetuning to work!

Did you use backprop.save or model.save to save the model? If so, there's an oversight on our part where it does not put the model on CPU before serializing it.

As to fixing this, doing the suggested torch.load with map_location should work in theory, but it didn't for me (relevant issue on torch).
So we can't include the loading as a fix in the library.
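
For completeness, the pattern that error message suggests is the standard PyTorch one below; the checkpoint path is hypothetical (it depends on how the model was saved), and as mentioned it didn't behave for me:

```python
import torch

# Generic PyTorch way to load CUDA-saved weights on a CPU-only machine.
# "genQA/model.bin" is a hypothetical path, not necessarily backprop's actual layout.
obj = torch.load("genQA/model.bin", map_location=torch.device("cpu"))
```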

Currently you could do this:

  1. Load the model on a CUDA machine with model = backprop.TextGeneration("genQA", device="cpu") - this deserializes the checkpoint on the CUDA device and puts the weights on CPU.
  2. Save the model again with model.save("genQA").
  3. Now you should be able to load the model on a CPU-only machine (a quick sketch of steps 1 and 2 is below).
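
Put together, steps 1 and 2 on the CUDA machine are just:

```python
import backprop

# device="cpu" deserializes the checkpoint and moves the weights to CPU...
model = backprop.TextGeneration("genQA", device="cpu")

# ...so re-saving now writes CPU tensors that a CPU-only machine can load.
model.save("genQA")
```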

We'll include the fix of always saving on CPU, for better portability, in a future release.
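
To illustrate the idea (this is a sketch, not our actual implementation), the fix amounts to moving the underlying PyTorch model to CPU before serializing and restoring its device afterwards:

```python
import torch

def save_on_cpu(model: torch.nn.Module, path: str):
    # Sketch of the planned behaviour, assuming a plain torch.nn.Module:
    # serialize from CPU so the file loads anywhere, then restore the device.
    original_device = next(model.parameters()).device
    model.to("cpu")
    torch.save(model, path)
    model.to(original_device)
```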

Thanks!

Worked like a charm.

Thank you so much!