BenevolentAI / DeeplyTough

DeeplyTough: Learning Structural Comparison of Protein Binding Sites

Custom Evaluation

protbiochem opened this issue · comments

I've been trying to set up the custom evaluation tool, and I keep getting this error when I run it:

File "~/DeeplyTough-master/deeplytough/scripts/custom_evaluation.py", line 50
fname = f"Custom-{args.alg}-{os.path.basename(os.path.dirname(args.net))}.pickle"

Do you have any insight on what is going on?

Thank you for trying out DeeplyTough. Would you mind pasting the complete error message and the command you are running here? Thanks!

I have no idea what I did differently, but I uninstalled it and reinstalled it, and now a different error is popping up.

Here is the command I ran and the resulting output:

python $DEEPLYTOUGH/deeplytough/scripts/custom_evaluation.py --dataset_subdir 'custom' --output_dir $DEEPLYTOUGH/results --device 'cuda:0' --nworkers 4 --net $DEEPLYTOUGH/networks/deeplytough_toughm1_test.pth.tar
/uufs/chpc.utah.edu/common/home/u1261874/software/pkg/miniconda3/lib/python3.6/site-packages/htmd/molecule/util.py:666: NumbaPerformanceWarning: np.dot() is faster on contiguous arrays, called on (array(float32, 2d, A), array(float32, 2d, A))
covariance = np.dot(P.T, Q)
/uufs/chpc.utah.edu/common/home/u1261874/software/pkg/miniconda3/lib/python3.6/site-packages/htmd/molecule/util.py:704: NumbaPerformanceWarning: np.dot() is faster on contiguous arrays, called on (array(float32, 2d, C), array(float32, 2d, A))
all1 = np.dot(all1, rot.T)
Traceback (most recent call last):
File "/uufs/chpc.utah.edu/common/home/u1261874/research/learning_programs/DeeplyTough-master/deeplytough/scripts/custom_evaluation.py", line 69, in <module>
main()
File "/uufs/chpc.utah.edu/common/home/u1261874/research/learning_programs/DeeplyTough-master/deeplytough/scripts/custom_evaluation.py", line 40, in main
matcher = DeeplyTough(args.net, device=args.device, batch_size=args.batch_size, nworkers=args.nworkers)
File "/uufs/chpc.utah.edu/common/home/u1261874/research/learning_programs/DeeplyTough-master/deeplytough/matchers/deeply_tough.py", line 24, in __init__
self.model, self.args = load_model(model_dir, device)
File "/uufs/chpc.utah.edu/common/home/u1261874/research/learning_programs/DeeplyTough-master/deeplytough/engine/predictor.py", line 22, in load_model
checkpoint = torch.load(fname, map_location=str(device))
File "/uufs/chpc.utah.edu/common/home/u1261874/software/pkg/miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 367, in load
return _load(f, map_location, pickle_module)
File "/uufs/chpc.utah.edu/common/home/u1261874/software/pkg/miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 538, in _load
result = unpickler.load()
File "/uufs/chpc.utah.edu/common/home/u1261874/software/pkg/miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 504, in persistent_load
data_type(size), location)
File "/uufs/chpc.utah.edu/common/home/u1261874/software/pkg/miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 384, in restore_location
return default_restore_location(storage, map_location)
File "/uufs/chpc.utah.edu/common/home/u1261874/software/pkg/miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 113, in default_restore_location
result = fn(storage, location)
File "/uufs/chpc.utah.edu/common/home/u1261874/software/pkg/miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 94, in _cuda_deserialize
device = validate_cuda_device(location)
File "/uufs/chpc.utah.edu/common/home/u1261874/software/pkg/miniconda3/lib/python3.6/site-packages/torch/serialization.py", line 78, in validate_cuda_device
raise RuntimeError('Attempting to deserialize object on a CUDA '
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.

I'm not exactly sure what is going on.

It seems that your machine doesn't have a GPU, but your command tells the script to use one. Try: python $DEEPLYTOUGH/deeplytough/scripts/custom_evaluation.py --dataset_subdir 'custom' --output_dir $DEEPLYTOUGH/results --device 'cpu' --nworkers 4 --net $DEEPLYTOUGH/networks/deeplytough_toughm1_test.pth.tar
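For reference, the underlying issue is that a checkpoint saved from a GPU contains CUDA tensor storages, and torch.load raises exactly this RuntimeError on a CPU-only machine unless map_location remaps them. A minimal sketch of a device-aware loader (the helper name load_checkpoint and the file path are illustrative, not part of DeeplyTough):

```python
import torch

def load_checkpoint(path, device=None):
    """Load a checkpoint, remapping storages to CPU when CUDA is absent.

    Without map_location, deserializing a GPU-saved checkpoint on a
    CPU-only machine fails with the RuntimeError shown above.
    """
    if device is None:
        device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
    return torch.load(path, map_location=device)

# Round-trip a tiny state dict to show the call works on any machine.
torch.save({'w': torch.zeros(2, 2)}, '/tmp/example_ckpt.pth.tar')
state = load_checkpoint('/tmp/example_ckpt.pth.tar')
```

After loading, every tensor in the state dict lives on the chosen device, so the rest of the pipeline can run unchanged on CPU (just slower).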

Thank you so much, that seems to have fixed it.

I was running your program on our campus high-performance computing cluster. GPUs aren't readily accessible there, so this is really helpful.