hanchenchen / Attention-A-Lightweight-2D-Hand-Pose-Estimation-Approach-Pytorch

data download

yuan243212790 opened this issue · comments

Where can I download the datasets?

The dataset link has been added to the README.md.

Why are there only 2 pictures at the link in your README?

Which one?

Here: https://github.com/HowieMa/NSRMhand#training

For your convenience, you can download our preprocessed dataset from here.

OK, thank you! When I run train.py, it only prints:
Delete last direction!

Process finished with exit code 0

It does not actually train; it exits before training starts. What is the problem?

Please pull the latest commit: 60cddad
That check is there to avoid overwriting existing log files.
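
For context, a guard like that usually refuses to start when a previous run's log directory still exists, which would also explain the "Delete last direction!" message and the clean exit quoted above. A minimal sketch of that assumed behavior follows; the path, message, and exit code are guesses, not the repo's actual code:

import os
import sys

log_dir = "checkpoint/exp1"          # hypothetical log/checkpoint directory
if os.path.exists(log_dir):
    # Refuse to overwrite logs from an earlier run; ask the user to remove them first.
    print("Delete last direction!")  # message seen above; "direction" likely means "directory"
    sys.exit(0)                      # exits with code 0 before any training happens
os.makedirs(log_dir)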

Which PyTorch version do you use? I get this RuntimeError:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 42, 1, 1]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!

torch==1.7.1+cu101
torchvision==0.8.2+cu101
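
If it helps to compare setups, here is a quick check of which versions a given environment actually loads (a minimal sketch; the expected values are just the ones quoted above):

import torch
import torchvision

print(torch.__version__)          # e.g. 1.7.1+cu101
print(torchvision.__version__)    # e.g. 0.8.2+cu101
print(torch.version.cuda)         # CUDA version the wheel was built against, e.g. 10.1
print(torch.cuda.is_available())  # whether a usable GPU is visible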

Traceback (most recent call last):
File "/home/yuan/Attention-A-Lightweight-2D-Hand-Pose-Estimation-Approach/Attention-A-Lightweight-2D-Hand-Pose-Estimation-Approach-Pytorch-main/main.py", line 200, in
train()
File "/home/yuan/Attention-A-Lightweight-2D-Hand-Pose-Estimation-Approach/Attention-A-Lightweight-2D-Hand-Pose-Estimation-Approach-Pytorch-main/main.py", line 113, in train
label_loss.backward()
File "/home/yuan/anaconda3/lib/python3.7/site-packages/torch/_tensor.py", line 307, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/yuan/anaconda3/lib/python3.7/site-packages/torch/autograd/init.py", line 156, in backward
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 42, 1, 1]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

Process finished with exit code 1
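
For reference, this error is almost always caused by an in-place operation (ReLU(inplace=True), +=, *=, etc.) modifying a tensor that autograd has saved for the backward pass. The sketch below is a hypothetical attention-style block, not this repo's actual code; it reproduces the same "output 0 of ReluBackward0 ... is at version 1" failure and shows the out-of-place fix:

import torch
import torch.nn as nn

class Attn(nn.Module):
    def __init__(self, channels=42):
        super().__init__()
        self.fc = nn.Conv2d(channels, channels, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Channel-attention weights of shape [N, 42, 1, 1], like the tensor in the error.
        w = self.relu(self.fc(x.mean(dim=(2, 3), keepdim=True)))
        w += 1.0  # in-place add on ReLU's output -> breaks ReLU's backward
        # Fix: use the out-of-place form instead, i.e.  w = w + 1.0
        return x * w

x = torch.randn(4, 42, 8, 8, requires_grad=True)
Attn()(x).sum().backward()  # raises the RuntimeError until the in-place op is removed

Different PyTorch releases can save different tensors for backward, which may be why the same code fails in one environment and runs in another.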

Hi, do you know how to solve the loss.backward() problem?

I don't know, and I can't reproduce this error.

Hi, I solved the problem by installing the same environment: torch 1.7 + CUDA 11.