VlSomers / bpbreid

A strong baseline for body part-based person re-identification (check out our WACV23 paper)


Feature extractor!

debenton opened this issue

Thanks for the great work on this topic!

When trying to retrieve features using tools/feature_extractor, passing in masks = [np.load(image_mask.npy)] leads to an image/mask dimensions mismatch error. I assume this didn't work because the .npy stores the contour points?

How should we pass the masks parameter to the feature extractor to actually get the bpbreid features? Thanks!

Hi, can you share the exact error you get? Have a look at the ImageDataset.__getitem__() function in "bpbreid/torchreid/data/datasets/dataset.py" to see how the masks are loaded from the dataset with "read_masks", and mimic that behavior. The '.npy' file does not store contour points, but dense masks of size HxWx36 (or 36xHxW, to be checked).
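
For illustration, here is a minimal sketch of loading such a mask and normalizing its layout before handing it to the extractor; "image_mask.npy" is a placeholder path, and the exact behavior of read_masks should be checked in torchreid/utils/tools.py:

```python
import numpy as np

# Load the dense part masks saved as .npy (placeholder path). These are not
# contour points but a dense stack of per-part confidence maps.
masks = np.load("image_mask.npy")
print(masks.shape)  # expected HxWx36 or 36xHxW, per the comment above

# If the parts dimension comes first, move it last so the spatial dimensions
# (H, W) line up with the image the masks belong to.
if masks.ndim == 3 and masks.shape[0] == 36:
    masks = np.transpose(masks, (1, 2, 0))  # -> HxWx36
```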

Thanks! The error was that the height/width of the mask must match the image. I will check what you suggested.

I tried using read_masks from torchreid.utils.tools, but it didn't work.

Steps to reproduce:

  1. Make a custom dataset: "reid-data/custom/demo.jpg".
  2. Import the FeatureExtractor class and instantiate it with get_default_config() from scripts.
  3. Pass in the model path and cfg to build the feature extractor.
  4. Call features = extractor(im_list, external_parts_masks=read_masks(<mask path .npy file>)) (condensed into code below).

error: ValueError: Height and Width of image, mask or masks should be equal.

The mask is generated from the PR's code.
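
For clarity, here are the steps above condensed into code as I understand them; the import paths and the FeatureExtractor constructor signature are assumptions based on this thread, not verified against the repo:

```python
import numpy as np
from scripts.default_config import get_default_config   # assumed location
from tools.feature_extractor import FeatureExtractor    # assumed location
from torchreid.utils.tools import read_masks

cfg = get_default_config()
extractor = FeatureExtractor(cfg, model_path="path/to/model.pth.tar")  # placeholder path

im_list = ["reid-data/custom/demo.jpg"]
masks = [read_masks("path/to/mask.npy")]  # placeholder .npy path

# This call raised:
# ValueError: Height and Width of image, mask or masks should be equal.
features = extractor(im_list, external_parts_masks=masks)
```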

Can you show the full stack trace? I guess it is an Albumentations issue requiring image and masks to have the same size; fortunately, there is a new argument to disable that assertion.
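
For reference, newer Albumentations releases expose an is_check_shapes flag on Compose for exactly this; a minimal sketch (the Resize transform itself is just an example):

```python
import albumentations as A

# is_check_shapes=False disables the "Height and Width of image, mask or
# masks should be equal" assertion. Use with care: a shape mismatch usually
# means the masks really don't correspond to the image.
transform = A.Compose(
    [A.Resize(height=384, width=128)],
    is_check_shapes=False,
)
```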

Hi, I used the new parameter to disable the check and now we get features, but the result is a dictionary with multiple features under keys like bn_global, globl, backg, parts, etc.

Which feature do you recommend we use to compare for similarities?

And is disabling the check OK, or should I change my dataset to do it "the right way"?

Many thanks to you - great work again!

The easiest option is to use the foreground feature, because you get just one feature vector per sample (image) and only need to compute a simple cosine distance with another sample.

The best performance is reached when using part features together with visibility scores: the distance between two samples is then the average of the local distances of the body parts that are visible in BOTH samples (have a look at the paper for more details). You can look at the method '_compute_distance_matrix_using_bp_features_and_visibility_scores' inside 'bpbreid/torchreid/metrics/distance.py', which computes a distance matrix between a set of query features (qf) and gallery features (gf). If you want to compute the distance between two samples A and B, you can just put the first sample inside 'qf' and the second one inside 'gf'.

If you want to know more about the exact shapes of the arguments to pass to that function, run the code with a supported dataset in eval mode and look at what happens inside 'torchreid/engine/image/part_based_engine.py', in '_evaluate', on the line with the 'compute_distance_matrix_using_bp_features' call.
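
As a rough illustration of that strategy (not the repo's actual implementation), here is a sketch of the visibility-filtered part distance for two samples, assuming part features of shape (K, D) and visibility scores of shape (K,):

```python
import torch
import torch.nn.functional as F

def part_based_distance(feats_a, feats_b, vis_a, vis_b, threshold=0.5):
    """Average cosine distance over body parts visible in BOTH samples.

    feats_a, feats_b: (K, D) tensors of part features for samples A and B.
    vis_a, vis_b:     (K,) visibility scores in [0, 1].
    """
    visible = (vis_a > threshold) & (vis_b > threshold)
    if not visible.any():
        # No body part is visible in both samples; no meaningful distance.
        return torch.tensor(float("inf"))
    cos_sim = F.cosine_similarity(feats_a[visible], feats_b[visible], dim=1)
    return (1.0 - cos_sim).mean()
```

The method in 'bpbreid/torchreid/metrics/distance.py' generalizes this idea to a full distance matrix over all query/gallery pairs at once.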

thanks, I know what to do now, closing! (appreciate you taking time to answer questions!!)

I'm happy to help, let me know if you need anything else!

Hi, I came across this project and found it very interesting. Is there a demo.py file I could use to run this project?

Hi @erictan23, there is no explicit demo.py, but you can first try to run inference with a pretrained model on an existing dataset (e.g. Market1501) following the instructions under the "Inference" section of the README. Once you manage to run the model like this, you can try "tools/feature_extractor" to extract features for a given arbitrary image. Let me know if you have more specific questions.