Yukun-Huang / Person-Attribute-Recognition-MarketDuke

A simple baseline implemented in PyTorch for the pedestrian attribute recognition task, evaluated on the Market-1501 and DukeMTMC-reID datasets.

Understanding the evaluation metrics

Vivinia opened this issue · comments

How should evaluation metrics such as precision and recall be interpreted in pedestrian attribute recognition?

I ran the command “python3 test.py --data-path ./dataset --dataset market --print-table --use-id” and got the following results:

+------------+----------+-----------+--------+----------+
| attribute  | accuracy | precision | recall | f1 score |
+------------+----------+-----------+--------+----------+
| young      | 0.998    | -         | -      | -        |
| teenager   | 0.871    | 0.871     | 1.000  | 0.931    |
| adult      | 0.880    | -         | -      | -        |
| old        | 0.994    | -         | -      | -        |
| backpack   | 0.749    | -         | -      | -        |
| bag        | 0.757    | -         | -      | -        |
| handbag    | 0.905    | -         | -      | -        |
| clothes    | 0.881    | 0.881     | 1.000  | 0.937    |
| down       | 0.670    | 0.670     | 1.000  | 0.803    |
| up         | 0.935    | 0.935     | 1.000  | 0.967    |
| hair       | 0.639    | -         | -      | -        |
| hat        | 0.971    | -         | -      | -        |
| gender     | 0.559    | -         | -      | -        |
| upblack    | 0.866    | -         | -      | -        |
| upwhite    | 0.734    | -         | -      | -        |
| upred      | 0.897    | -         | -      | -        |
| uppurple   | 0.972    | -         | -      | -        |
| upyellow   | 0.909    | -         | -      | -        |
| upgray     | 0.866    | -         | -      | -        |
| upblue     | 0.916    | -         | -      | -        |
| upgreen    | 0.928    | -         | -      | -        |
| downblack  | 0.614    | -         | -      | -        |
| downwhite  | 0.946    | -         | -      | -        |
| downpink   | 0.973    | -         | -      | -        |
| downpurple | 1.000    | -         | -      | -        |
| downyellow | 0.999    | -         | -      | -        |
| downgray   | 0.826    | -         | -      | -        |
| downblue   | 0.799    | -         | -      | -        |
| downgreen  | 0.973    | -         | -      | -        |
| downbrown  | 0.931    | -         | -      | -        |
+------------+----------+-----------+--------+----------+
Average accuracy: 0.8653
Average f1 score: 0.9092

Can you tell me what the "-" entries in the precision, recall, and f1 columns mean?
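For context, per-attribute precision, recall, and F1 for binary labels are typically computed from the confusion counts, and a "-" is commonly printed when a metric is undefined (e.g., zero predicted positives makes precision a 0/0). This is a minimal illustrative sketch, not the repo's actual test.py code; the helper name attribute_metrics is an assumption:

```python
def attribute_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, F1 for one binary attribute.

    Returns None (which a table printer could render as '-') when a
    metric is undefined, i.e. its denominator is zero.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)          # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)      # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)      # false negatives
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)  # true negatives

    accuracy = (tp + tn) / len(y_true)
    # Precision is undefined when nothing was predicted positive.
    precision = tp / (tp + fp) if (tp + fp) else None
    # Recall is undefined when there are no positive ground-truth labels.
    recall = tp / (tp + fn) if (tp + fn) else None
    # F1 needs both components defined and a nonzero sum.
    f1 = (2 * precision * recall / (precision + recall)
          if precision is not None and recall is not None
          and (precision + recall) > 0 else None)
    return accuracy, precision, recall, f1
```

For example, if an attribute is never predicted positive, precision (and hence F1) comes back as None, matching the dashed cells in the table above.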