liuziwei7 / fashion-landmarks

Fashion Landmark Detection in the Wild

Home Page: http://personal.ie.cuhk.edu.hk/~lz013/projects/FashionLandmarks.html

Getting landmark coordinates

richliao opened this issue · comments

Dear Ziwei,

Awesome and interesting work!

Would you mind shedding some light on the purpose of this statement: `get_orig_coordinate = @(p)((p+0.5)*224-repmat([offset(2),offset(1)]',[pipline.num_points,1]))/scale;`?

I can't relate this to the paper, particularly `(p+0.5)*224`. I don't have MATLAB, so I won't be able to debug the values, but when I run pyCaffe the landmark values that come out of the stage 1 forward pass are very small, the same as the pseudo labels (all lower than 0.01). Any explanation would be greatly appreciated! Thanks.

Dear richliao,
Thanks for your interest in our work.
We normalized the landmarks to [-0.5, 0.5] during training,
and we trained the model on 224*224 images.
This operation simply projects the normalized landmarks back into the absolute coordinate frame.
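For readers working in pyCaffe rather than MATLAB, here is a rough Python sketch of that denormalization line, under the assumptions stated in the thread (predictions in [-0.5, 0.5], a 224*224 input frame, and an `offset`/`scale` recorded during preprocessing; the function name and signature are illustrative, not part of the released code):

```python
import numpy as np

def get_orig_coordinate(p, offset, scale, num_points):
    """Map normalized landmark predictions back to original-image pixels.

    p      : flat array [x1, y1, x2, y2, ...] with values in [-0.5, 0.5]
    offset : (row_offset, col_offset) applied when fitting the crop into 224x224
    scale  : resize factor applied during that preprocessing step
    """
    p = np.asarray(p, dtype=float)
    # [-0.5, 0.5] -> [0, 224]: pixel coordinates in the network's input frame
    coords = (p + 0.5) * 224.0
    # subtract the (x, y) offset once per landmark, mirroring MATLAB's repmat
    offsets = np.tile([offset[1], offset[0]], num_points)
    # undo the resize to land in the original image's coordinate frame
    return (coords - offsets) / scale
```

With zero offset and unit scale, a prediction of 0.0 (the image center in normalized coordinates) maps to pixel 112, i.e. the center of the 224*224 frame.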

Thanks for the explanation, Ziwei!

Can you elaborate a bit on how to convert the one-dimensional output vector into two-dimensional x and y coordinates? Also, I don't see any input box; are you assuming the box is the full size of the image (224x224)? Thanks much.

The output vector has the form [x1, y1, x2, y2, ...].

Yes, and the MATLAB script resizes images to 224*224 before testing.
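Putting those two answers together, a minimal sketch of unpacking the flat output into (x, y) pairs on the 224*224 input frame might look like this (the `pred` values are made-up example numbers, not real model output):

```python
import numpy as np

# hypothetical flat stage-1 output, normalized to [-0.5, 0.5]
pred = np.array([-0.2, -0.1, 0.3, 0.25])

# [x1, y1, x2, y2, ...] -> one (x, y) row per landmark
points = pred.reshape(-1, 2)

# project onto the 224x224 frame the model was trained on
pixels = (points + 0.5) * 224
```

Each row of `pixels` is then one landmark's (x, y) position in the resized test image.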

@richliao Yes, the inputs to our Deep Fashion Alignment (DFA) are clothing bounding boxes. We treat this detection-and-cropping procedure as pre-processing and do not include it in this codebase. But you can definitely find bounding box annotations in the DeepFashion dataset.
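For completeness, a self-contained sketch of that pre-processing step, under the assumption that it amounts to cropping the clothing box and resizing toward 224 (the released MATLAB script may pad or resize differently; the nearest-neighbor resize here is just to avoid external dependencies):

```python
import numpy as np

def preprocess(image, bbox, target=224):
    """Crop a clothing bounding box and resize so the longer side is `target`.

    image : H x W x C uint8 array
    bbox  : (x1, y1, x2, y2) clothing box, e.g. from DeepFashion annotations
    Returns the resized crop plus the scale factor needed later to map
    landmark predictions back to original-image coordinates.
    """
    x1, y1, x2, y2 = bbox
    crop = image[y1:y2, x1:x2]
    h, w = crop.shape[:2]
    scale = target / max(h, w)
    # nearest-neighbor resize via index sampling
    rows = (np.arange(int(h * scale)) / scale).astype(int)
    cols = (np.arange(int(w * scale)) / scale).astype(int)
    return crop[rows][:, cols], scale
```

The returned `scale` (together with any padding offset you apply) is exactly what the `get_orig_coordinate` line in the MATLAB script consumes to undo this transformation.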