MM-Fi is the first multi-modal non-intrusive 4D human pose estimation dataset with 27 daily or rehabilitation action categories for high-level wireless human sensing tasks. MM-Fi consists of over 320k synchronized frames of five modalities from 40 human subjects in four domains. The annotations include 2D/3D human pose keypoints, 3D position, 3D dense pose, and the category of action.
For more details and demos of the MMFi dataset, please refer to [Project Page] and [Paper].
Please download the dataset through [Google Drive] or [Baidu Netdisk].
To get started, follow the instructions in this section. We will introduce the basic steps and how you can customize the configuration.
Please make sure you have installed the following dependencies before using the MMFi dataset.
- Python 3+ distribution
- PyTorch >= 1.1.0
Quick installation of dependencies (in one local or virtual environment):
pip install torch torchvision pyyaml numpy scipy opencv-python
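To verify the environment, you can optionally check that PyTorch imports correctly:

```python
import torch
print(torch.__version__)  # should be >= 1.1.0
```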
Once the environment is set up successfully, download the dataset.
After unzipping all four parts, the dataset directory structure should be as follows.
${DATASET_ROOT}
|-- E01
| |-- S01
| | |-- A01
| | | |-- rgb
| | | |-- mmwave
| | | |-- wifi-csi
| | | |-- ...
| | |-- A02
| | |-- ...
| | |-- A27
| |-- S02
| |-- ...
| |-- S10
|-- E02
| |-- ...
|-- E03
| |-- ...
|-- E04
| |-- ...
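As an optional sanity check (a minimal sketch; the `dataset_root` path below is a placeholder), you can count the subject folders under each environment:

```python
import os

dataset_root = '/path/to/MMFi_Dataset'  # placeholder: replace with your own path

# Each environment (E01-E04) should contain ten subject folders,
# and each subject folder should contain 27 action folders (A01-A27).
for env in sorted(os.listdir(dataset_root)):
    env_dir = os.path.join(dataset_root, env)
    if os.path.isdir(env_dir):
        subjects = [s for s in sorted(os.listdir(env_dir))
                    if os.path.isdir(os.path.join(env_dir, s))]
        print(f'{env}: {len(subjects)} subjects')
```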
Edit your code and configuration file (.yaml file) carefully before running. For details of the configuration, please check the key descriptions below.
Here we take the code snippet from example.py as an example.
import yaml
import numpy as np
import torch

# Please add the downloaded mmfi directory to your Python project.
from mmfi import make_dataset, make_dataloader

dataset_root = '/data3/MMFi_Dataset'  # the path will differ on your machine
with open('config.yaml', 'r') as fd:  # point this at your own .yaml file
    config = yaml.load(fd, Loader=yaml.FullLoader)

train_dataset, val_dataset = make_dataset(dataset_root, config)
rng_generator = torch.manual_seed(config['init_rand_seed'])
train_loader = make_dataloader(train_dataset, is_training=True, generator=rng_generator, **config['train_loader'])
val_loader = make_dataloader(val_dataset, is_training=False, generator=rng_generator, **config['validation_loader'])

# Your training / evaluation code goes here.
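Once the loaders are built, it is worth inspecting one batch before writing the full training loop. This is a minimal sketch: the exact fields of each batch depend on your configured modality and data_unit, so it prints the available keys rather than assuming specific names.

```python
# Minimal sketch: inspect a single batch from the training loader.
# The exact batch structure depends on the configured modality and data_unit,
# so we print the available fields instead of assuming specific key names.
for batch in train_loader:
    if isinstance(batch, dict):
        for name, value in batch.items():
            shape = getattr(value, 'shape', None)
            print(name, shape if shape is not None else type(value))
    else:
        print(type(batch))
    break  # one batch is enough for inspection
```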
Now you can start your implementation, for example with the commands below:
cd your_project_dir
# write your code, then run it:
python your_script_name.py path_to_dataset_dir your_config.yaml
modality
- Single modality: please use one of the following: rgb, infra1, infra2, depth, lidar, mmwave, wifi-csi. Note that every modality should be in lowercase. Currently, the raw images (rgb, infra1, and infra2) of the subjects are not publicly available due to privacy concerns; instead, we provide the 17 body keypoints extracted from the images with the HRNet-w48 model.
- Multiple modalities: please use | to connect different modalities. Note that spaces are not allowed in the connection. For example, wifi-csi|mmwave is accepted, but wifi-csi | mmwave is not.
data_unit
- sequence: the data generator will return data with a sequence as the unit, e.g., each sample contains 297 frames.
- frame: the data generator will return data with a frame as the unit, i.e., each sample contains only 1 frame.
protocol
This key defines which activities are enabled in your training/testing.
- protocol 1: Only the daily activities are enabled.
- protocol 2: Only the rehabilitation activities are enabled.
- protocol 3: All activities are enabled.
split
The train/test split used by your code. Three splits used in our paper are already provided.
- manual_split: please refer to the example in the .yaml file and customize your own dataset split here (i.e., which subjects and actions are regarded as the testing data).
- split_to_use: specify the split you want to use.
train_loader
validation_loader
These two options define the parameters used to construct your dataloaders. We keep them open so that you can customize your dataloaders freely.
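For reference, a minimal config.yaml might look like the sketch below. This is illustrative only: the key names follow the descriptions above, but the concrete values (protocol/split identifiers, batch size, number of workers) are placeholders, so check the example .yaml shipped with the toolbox for the exact accepted values and the manual_split schema.

```yaml
# Illustrative sketch only; see the example .yaml in the toolbox for the exact schema.
modality: wifi-csi|mmwave      # join multiple modalities with '|', no spaces
data_unit: frame               # 'sequence' or 'frame'
protocol: protocol1            # placeholder: one of the three protocols described above
split_to_use: manual_split     # placeholder: which split to apply
# manual_split: ...            # customize your own subjects/actions split here
init_rand_seed: 0              # read by torch.manual_seed in example.py
train_loader:                  # passed as **kwargs to make_dataloader
  batch_size: 32               # placeholder value
  num_workers: 4               # placeholder value
validation_loader:
  batch_size: 32
  num_workers: 4
```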
The MMFi dataset contains two types of actions: daily activities and rehabilitation activities.
Activity | Description | Category |
---|---|---|
A01 | Stretching and relaxing | Rehabilitation activities |
A02 | Chest expansion (horizontal) | Daily activities |
A03 | Chest expansion (vertical) | Daily activities |
A04 | Twist (left) | Daily activities |
A05 | Twist (right) | Daily activities |
A06 | Mark time | Rehabilitation activities |
A07 | Limb extension (left) | Rehabilitation activities |
A08 | Limb extension (right) | Rehabilitation activities |
A09 | Lunge (toward left-front) | Rehabilitation activities |
A10 | Lunge (toward right-front) | Rehabilitation activities |
A11 | Limb extension (both) | Rehabilitation activities |
A12 | Squat | Rehabilitation activities |
A13 | Raising hand (left) | Daily activities |
A14 | Raising hand (right) | Daily activities |
A15 | Lunge (toward left side) | Rehabilitation activities |
A16 | Lunge (toward right side) | Rehabilitation activities |
A17 | Waving hand (left) | Daily activities |
A18 | Waving hand (right) | Daily activities |
A19 | Picking up things | Daily activities |
A20 | Throwing (toward left side) | Daily activities |
A21 | Throwing (toward right side) | Daily activities |
A22 | Kicking (toward left side) | Daily activities |
A23 | Kicking (toward right side) | Daily activities |
A24 | Body extension (left) | Rehabilitation activities |
A25 | Body extension (right) | Rehabilitation activities |
A26 | Jumping up | Rehabilitation activities |
A27 | Bowing | Daily activities |
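If you need the per-protocol action lists in code, the following hypothetical helper (not part of the toolbox) is derived directly from the table above:

```python
# Hypothetical helper derived from the activity table above; not part of the MMFi toolbox.
DAILY = ['A02', 'A03', 'A04', 'A05', 'A13', 'A14', 'A17', 'A18',
         'A19', 'A20', 'A21', 'A22', 'A23', 'A27']
REHABILITATION = ['A01', 'A06', 'A07', 'A08', 'A09', 'A10', 'A11',
                  'A12', 'A15', 'A16', 'A24', 'A25', 'A26']

def actions_for_protocol(protocol: int) -> list:
    """Return the action IDs enabled by each protocol (1: daily, 2: rehabilitation, 3: all)."""
    if protocol == 1:
        return DAILY
    if protocol == 2:
        return REHABILITATION
    if protocol == 3:
        return sorted(DAILY + REHABILITATION)
    raise ValueError(f'Unknown protocol: {protocol}')

print(len(actions_for_protocol(3)))  # 27 activities in total
```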
40 volunteers (11 females and 29 males), aged from 23 to 40, participated in the data collection of MMFi. We appreciate their kind assistance in the completion of this work!
In addition, the 40 volunteers were divided into 4 groups corresponding to 4 different environmental settings so that cross-domain research can be conducted for WiFi sensing.
We have also extracted the action segments from the raw sequences, with the relevant information stored in a .csv file, which can be found in the dataset directory.
Please cite the following paper if you find the MMFi dataset and toolbox beneficial to your research. Thank you for your support!
@inproceedings{yang2023mm,
title={MM-Fi: Multi-Modal Non-Intrusive 4D Human Dataset for Versatile Wireless Sensing},
author={Yang, Jianfei and Huang, He and Zhou, Yunjiao and Chen, Xinyan and Xu, Yuecong and Yuan, Shenghai and Zou, Han and Lu, Chris Xiaoxuan and Xie, Lihua},
booktitle={Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2023},
url={https://openreview.net/forum?id=1uAsASS1th}
}