It is recommended to symlink the dataset root to $MMPOSE/data. If your folder structure is different, you may need to change the corresponding paths in config files.
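The symlink can be created as below; both paths are placeholders (assumptions), so substitute your actual dataset root and MMPose checkout:

```shell
# Placeholder paths (assumptions): point these at your real locations.
DATA_ROOT=${DATA_ROOT:-/path/to/datasets}   # where the datasets actually live
MMPOSE=${MMPOSE:-/path/to/mmpose}           # your MMPose checkout
ln -sfn "$DATA_ROOT" "$MMPOSE/data"
```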
MMPose supported datasets:
```bibtex
@article{wang2018mask,
  title={Mask-pose cascaded cnn for 2d hand pose estimation from single color image},
  author={Wang, Yangang and Peng, Cong and Liu, Yebin},
  journal={IEEE Transactions on Circuits and Systems for Video Technology},
  volume={29},
  number={11},
  pages={3258--3268},
  year={2018},
  publisher={IEEE}
}
```
For OneHand10K data, please download from OneHand10K Dataset. Please download the annotation files from onehand10k_annotations. Extract them under {MMPose}/data, and make them look like this:
```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── onehand10k
        |── annotations
        |   |── onehand10k_train.json
        |   |── onehand10k_test.json
        |── Train
        |   |── source
        |       |── 0.jpg
        |       |── 1.jpg
        |       ...
        `── Test
            |── source
                |── 0.jpg
                |── 1.jpg
                ...
```
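If your layout differs, the dataset paths in the config file must be updated accordingly. A sketch of how such paths typically appear in an MMPose config (the field names mirror common MMPose configs but are an assumption here, not the exact schema of any particular file):

```python
# Illustrative sketch of dataset paths in an MMPose config.
# Field names (ann_file, img_prefix) are assumptions for illustration.
data_root = 'data/onehand10k'

data = dict(
    train=dict(
        ann_file=f'{data_root}/annotations/onehand10k_train.json',
        img_prefix=f'{data_root}/Train/',
    ),
    test=dict(
        ann_file=f'{data_root}/annotations/onehand10k_test.json',
        img_prefix=f'{data_root}/Test/',
    ),
)
```

Changing `data_root` is usually enough when the dataset lives somewhere other than the symlinked default.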
```bibtex
@inproceedings{zimmermann2019freihand,
  title={Freihand: A dataset for markerless capture of hand pose and shape from single rgb images},
  author={Zimmermann, Christian and Ceylan, Duygu and Yang, Jimei and Russell, Bryan and Argus, Max and Brox, Thomas},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={813--822},
  year={2019}
}
```
For FreiHAND data, please download from FreiHand Dataset. Since the official dataset does not provide a validation set, we randomly split the training data 8:1:1 into train/val/test sets. Please download the annotation files from freihand_annotations. Extract them under {MMPose}/data, and make them look like this:
```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── freihand
        |── annotations
        |   |── freihand_train.json
        |   |── freihand_val.json
        |   |── freihand_test.json
        `── training
            |── rgb
            |   |── 00000000.jpg
            |   |── 00000001.jpg
            |   ...
            |── mask
                |── 00000000.jpg
                |── 00000001.jpg
                ...
```
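The provided annotation files already encode the 8:1:1 split, so nothing needs to be re-split; purely for illustration, a seeded random 8:1:1 partition of sample indices can be sketched like this:

```python
import random

def split_811(n_samples, seed=0):
    """Randomly partition indices 0..n_samples-1 into 8:1:1 train/val/test."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)           # deterministic for a fixed seed
    n_train = n_samples * 8 // 10
    n_val = n_samples // 10
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

train_ids, val_ids, test_ids = split_811(100)
```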
```bibtex
@inproceedings{simon2017hand,
  title={Hand keypoint detection in single images using multiview bootstrapping},
  author={Simon, Tomas and Joo, Hanbyul and Matthews, Iain and Sheikh, Yaser},
  booktitle={Proceedings of the IEEE conference on Computer Vision and Pattern Recognition},
  pages={1145--1153},
  year={2017}
}
```
For CMU Panoptic HandDB, please download from CMU Panoptic HandDB. Following Simon et al., panoptic images (hand143_panopticdb) and the MPII & NZSL training sets (manual_train) are used for training, while the MPII & NZSL test set (manual_test) is used for testing. Please download the annotation files from panoptic_annotations. Extract them under {MMPose}/data, and make them look like this:
```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── panoptic
        |── annotations
        |   |── panoptic_train.json
        |   |── panoptic_test.json
        |
        |── hand143_panopticdb
        |   |── imgs
        |   |   |── 00000000.jpg
        |   |   |── 00000001.jpg
        |   |   ...
        |
        `── hand_labels
            |── manual_train
            |   |── 000015774_01_l.jpg
            |   |── 000015774_01_r.jpg
            |   ...
            |
            `── manual_test
                |── 000648952_02_l.jpg
                |── 000835470_01_l.jpg
                ...
```
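A quick sanity check that the extracted files match this layout can be sketched as follows (the path list simply mirrors the tree above; run it from the MMPose root or pass the root explicitly):

```python
import os

# Expected entries, mirroring the layout above (relative to the MMPose root).
EXPECTED = [
    'data/panoptic/annotations/panoptic_train.json',
    'data/panoptic/annotations/panoptic_test.json',
    'data/panoptic/hand143_panopticdb/imgs',
    'data/panoptic/hand_labels/manual_train',
    'data/panoptic/hand_labels/manual_test',
]

def missing_paths(root='.'):
    """Return the expected paths that do not exist under `root`."""
    return [p for p in EXPECTED if not os.path.exists(os.path.join(root, p))]
```

An empty return value means every expected annotation file and image directory is in place.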
```bibtex
@InProceedings{Moon_2020_ECCV_InterHand2.6M,
  author = {Moon, Gyeongsik and Yu, Shoou-I and Wen, He and Shiratori, Takaaki and Lee, Kyoung Mu},
  title = {InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB Image},
  booktitle = {European Conference on Computer Vision (ECCV)},
  year = {2020}
}
```
For InterHand2.6M, please download from InterHand2.6M. Please download the annotation files from annotations. Extract them under {MMPose}/data, and make them look like this:
```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── interhand2.6m
        |── annotations
        |   |── all
        |   |── human_annot
        |   |── machine_annot
        |   |── skeleton.txt
        |   |── subject.txt
        |
        `── images
            |── train
            |   |── Capture0 ~ Capture26
            |── val
            |   |── Capture0
            |── test
                |── Capture0 ~ Capture7
```
```bibtex
@TechReport{zb2017hand,
  author={Christian Zimmermann and Thomas Brox},
  title={Learning to Estimate 3D Hand Pose from Single RGB Images},
  institution={arXiv:1705.01389},
  year={2017},
  note="https://arxiv.org/abs/1705.01389",
  url="https://lmb.informatik.uni-freiburg.de/projects/hand3d/"
}
```
For RHD Dataset, please download from RHD Dataset. Please download the annotation files from rhd_annotations. Extract them under {MMPose}/data, and make them look like this:
```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── rhd
        |── annotations
        |   |── rhd_train.json
        |   |── rhd_test.json
        |── training
        |   |── color
        |   |   |── 00000.jpg
        |   |   |── 00001.jpg
        |   |── depth
        |   |   |── 00000.jpg
        |   |   |── 00001.jpg
        |   |── mask
        |       |── 00000.jpg
        |       |── 00001.jpg
        `── evaluation
            |── color
            |   |── 00000.jpg
            |   |── 00001.jpg
            |── depth
            |   |── 00000.jpg
            |   |── 00001.jpg
            |── mask
                |── 00000.jpg
                |── 00001.jpg
```
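In this layout each frame id should appear under color/, depth/ and mask/ of a split. A small sketch for collecting the ids that are present in all three (extension-agnostic, so it works whether the files are .jpg or .png):

```python
import os

def complete_frame_ids(split_dir):
    """Sorted frame ids that have color, depth and mask files in `split_dir`.

    `split_dir` is e.g. 'data/rhd/training', laid out as shown above.
    """
    id_sets = []
    for sub in ('color', 'depth', 'mask'):
        names = os.listdir(os.path.join(split_dir, sub))
        id_sets.append({os.path.splitext(n)[0] for n in names})
    # Keep only ids present in every modality.
    return sorted(set.intersection(*id_sets))
```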
```bibtex
@inproceedings{jin2020whole,
  title={Whole-Body Human Pose Estimation in the Wild},
  author={Jin, Sheng and Xu, Lumin and Xu, Jin and Wang, Can and Liu, Wentao and Qian, Chen and Ouyang, Wanli and Luo, Ping},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2020}
}
```
For the COCO-WholeBody dataset, images can be downloaded from COCO download; 2017 Train/Val is needed for COCO keypoints training and validation. Download the COCO-WholeBody annotations for Train / Validation (Google Drive). Download the person detection results of COCO val2017 from OneDrive or GoogleDrive. Download and extract them under $MMPOSE/data, and make them look like this:
```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── coco
        │-- annotations
        │   │-- coco_wholebody_train_v1.0.json
        │   |-- coco_wholebody_val_v1.0.json
        |-- person_detection_results
        |   |-- COCO_val2017_detections_AP_H_56_person.json
        │-- train2017
        │   │-- 000000000009.jpg
        │   │-- 000000000025.jpg
        │   │-- 000000000030.jpg
        │   │-- ...
        `-- val2017
            │-- 000000000139.jpg
            │-- 000000000285.jpg
            │-- 000000000632.jpg
            │-- ...
```
Please also install the latest version of Extended COCO API to support COCO-WholeBody evaluation:

```shell
pip install xtcocotools
```
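The annotation files above are standard COCO-style JSON, so they can be inspected with just the standard library before running evaluation; a minimal sketch (the file path follows the layout above):

```python
import json

def summarize_coco_annotations(path):
    """Return (num_images, num_annotations) for a COCO-style JSON file."""
    with open(path) as f:
        ann = json.load(f)
    # COCO-style files carry top-level 'images' and 'annotations' lists.
    return len(ann.get('images', [])), len(ann.get('annotations', []))
```

For example, `summarize_coco_annotations('data/coco/annotations/coco_wholebody_val_v1.0.json')` reports how many images and person instances were loaded; the actual evaluation uses the Extended COCO API installed above.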