chihyaoma / Activity-Recognition-with-CNN-and-RNN

Temporal Segments LSTM and Temporal-Inception for Activity Recognition


Fine-tuned model "model_best.t7" is wrong.

Datasharing-wow opened this issue

Hi,
I got a new error. After spending almost two weeks fine-tuning both the flow and RGB models ("model_best.t7" in the CNN-GPUs folder), I used the two models to generate features (in CNN-Pred-Feat), and this error appeared:

inconsistent tensor size, expected tensor [1 x 25 x 101] and src [400 x 101] to have the same number of elements, but got 2525 and 40400 elements respectively at /home/gtune/torch/pkg/torch/lib/TH/generic/THTensorCopy.c:86
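
For context, the message comes from Torch's tensor copy, which only succeeds when source and destination hold the same number of elements. A minimal sketch that reproduces the same failure, with the shapes taken straight from the error message above:

```lua
require 'torch'

-- destination buffer expected by the feature-extraction code:
-- 1 video x 25 sampled frames x 101 classes
local dst = torch.Tensor(1, 25, 101)  -- 2525 elements
-- what the fine-tuned model apparently produced: 400 rows x 101 classes
local src = torch.Tensor(400, 101)    -- 40400 elements

-- torch.Tensor:copy() requires both tensors to hold the same number of
-- elements, so this line raises the "inconsistent tensor size" error quoted above
dst:copy(src)
```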

How can I correct the code?

Where does the number "400" come from?
25 means we sample 25 frames for each video, and 101 means there are 101 classes.
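
The element counts in the message factor cleanly, which at least shows the source tensor has 400 prediction rows where the script expects 25 (one per sampled frame):

```lua
print(1 * 25 * 101)  -- 2525: expected buffer, 25 frames x 101 classes
print(400 * 101)     -- 40400: what the model actually returned
print(400 / 25)      -- 16: the source has 16x as many rows as expected
```

The factor of 16 could come from a batch- or crop-size setting, but that is only a guess.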

Thank you for the reply.
After reviewing the code entirely, I'm still not sure where the "400" comes from.
Here are my steps for using the CNN-GPUs package:
step 1: use CNN-GPUs/datasets/video2frame_dataset.lua to generate frame images from the videos
step 2: download the resnet-101 model from https://github.com/facebook/fb.resnet.torch
step 3: change the folder paths in CNN-GPUs/opts.lua (lines #39 and #73)
step 4: for RGB images, change "nChannel" to 3 in opts.lua
step 5: run the command in a terminal: "th main.lua -nGPU 2"
step 6: once "model_best.t7" is generated, copy it to CNN-Pred-Feat. Then, when I run the command to generate features, the error shows up. However, I can use your model_best.t7 to generate features correctly, so I definitely trained the wrong model_best.t7 (see the sanity-check sketch below).
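
As a sanity check (this snippet is not from the repo; the 224x224 input size and the cunn/cudnn requires are assumptions), one can load the fine-tuned checkpoint on its own and inspect the prediction shape for a single frame:

```lua
require 'torch'
require 'nn'
require 'cunn'   -- assumed: the checkpoint was trained with -nGPU 2
require 'cudnn'  -- assumed cuDNN backend

-- load the checkpoint produced by CNN-GPUs
local model = torch.load('model_best.t7')
model:evaluate()

-- one RGB frame; 3 x 224 x 224 is an assumption (standard ResNet input size)
local input = torch.CudaTensor(1, 3, 224, 224):normal()
local output = model:forward(input)
print(output:size())  -- a 101-class model should report 1 x 101 here
```

If the printed size is not 1 x 101, the mismatch would come from the trained checkpoint itself rather than from the CNN-Pred-Feat script.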

Could you share some details about how to train model_best.t7 (i.e., how to use CNN-GPUs)?

There is no special trick to using CNN-GPUs. Maybe you first need to figure out where the number "400" comes from.
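
One concrete way to do that, as a purely hypothetical snippet (the variable names in CNN-Pred-Feat will differ), is to print the raw prediction size right before the copy that fails:

```lua
-- hypothetical instrumentation just before the failing :copy() in CNN-Pred-Feat
local pred = model:forward(frameBatch)  -- 'frameBatch' is a placeholder name
print('prediction size:', pred:size())  -- expected 25 x 101; the error implies 400 x 101
```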

Thank you for the reply.

After checking the program again and again, I'm still confused about the weird "400" ...