It is recommended to symlink the dataset root to `$MMPOSE/data`.
If your folder structure is different, you may need to change the corresponding paths in config files.
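Creating the symlink can be scripted; a minimal sketch using only the standard library (the two temporary directories are stand-ins — point them at your real dataset folder and MMPose checkout instead):

```python
import os
import tempfile

# Link an external dataset location into the MMPose tree so the default
# relative paths in the config files keep working.
dataset_root = tempfile.mkdtemp()   # stand-in for where the data actually lives
mmpose_root = tempfile.mkdtemp()    # stand-in for the MMPose checkout

link = os.path.join(mmpose_root, "data")
os.symlink(dataset_root, link)      # "$MMPOSE/data" now resolves to the dataset root
print(os.path.islink(link))
```

Equivalently, `ln -s /path/to/datasets $MMPOSE/data` from the shell.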
Datasets supported by MMPose:
```bibtex
@InProceedings{Cao_2019_ICCV,
author = {Cao, Jinkun and Tang, Hongyang and Fang, Hao-Shu and Shen, Xiaoyong and Lu, Cewu and Tai, Yu-Wing},
title = {Cross-Domain Adaptation for Animal Pose Estimation},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}
```
For the Animal-Pose dataset, the data should be prepared as follows. Extract the files under `{MMPose}/data` and arrange them like this:
```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── animalpose
        │
        │-- VOC2012
        │   │-- Annotations
        │   │-- ImageSets
        │   │-- JPEGImages
        │   │-- SegmentationClass
        │   │-- SegmentationObject
        │
        │-- animalpose_image_part2
        │   │-- cat
        │   │-- cow
        │   │-- dog
        │   │-- horse
        │   │-- sheep
        │
        │-- annotations
        │   │-- animalpose_train.json
        │   │-- animalpose_val.json
        │   │-- animalpose_trainval.json
        │   │-- animalpose_test.json
        │
        │-- PASCAL2011_animal_annotation
        │   │-- cat
        │   │   │-- 2007_000528_1.xml
        │   │   │-- 2007_000549_1.xml
        │   │   │-- ...
        │   │-- cow
        │   │-- dog
        │   │-- horse
        │   │-- sheep
        │
        │-- animalpose_anno2
            │-- cat
            │   │-- ca1.xml
            │   │-- ca2.xml
            │   │-- ...
            │-- cow
            │-- dog
            │-- horse
            │-- sheep
```
The official dataset does not provide a train/val/test split. We choose the images from PASCAL VOC for train & val: in total, 3608 images with 5117 annotations, of which 2798 images with 4000 annotations are used for training, and 810 images with 1117 annotations are used for validation. The images from other sources (1000 images with 1000 annotations) are used for testing.
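The counts above are consistent (2798 + 810 = 3608 images, 4000 + 1117 = 5117 annotations), and each split file can be sanity-checked directly; a minimal sketch, assuming the standard COCO-style `images`/`annotations` lists:

```python
import json

def split_sizes(path):
    """Return (num_images, num_annotations) for a COCO-style annotation file."""
    with open(path) as f:
        coco = json.load(f)
    return len(coco["images"]), len(coco["annotations"])

# Per the split described above, one would expect:
#   animalpose_train.json    -> (2798, 4000)
#   animalpose_val.json      -> (810, 1117)
#   animalpose_trainval.json -> (3608, 5117)
```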
```bibtex
@misc{yu2021ap10k,
title={AP-10K: A Benchmark for Animal Pose Estimation in the Wild},
author={Hang Yu and Yufei Xu and Jing Zhang and Wei Zhao and Ziyu Guan and Dacheng Tao},
year={2021},
eprint={2108.12617},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
For the AP-10K dataset, images and annotations can be downloaded from download. Note that the images and annotations are for non-commercial use only.
Extract them under `{MMPose}/data`, and make them look like this:
```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── ap10k
        │-- annotations
        │   │-- ap10k-train-split1.json
        │   │-- ap10k-train-split2.json
        │   │-- ap10k-train-split3.json
        │   │-- ap10k-val-split1.json
        │   │-- ap10k-val-split2.json
        │   │-- ap10k-val-split3.json
        │   │-- ap10k-test-split1.json
        │   │-- ap10k-test-split2.json
        │   │-- ap10k-test-split3.json
        │-- data
            │-- 000000000001.jpg
            │-- 000000000002.jpg
            │-- ...
```
The annotation files in the `annotations` folder cover 50 labeled animal species. There are 10,015 labeled images with 13,028 instances in total in the AP-10K dataset. We randomly split them into train, val, and test sets following a 7:1:2 ratio.
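A 7:1:2 image-level split of a COCO-style file can be sketched as follows. This is only illustrative — AP-10K ships the three official split files, so you do not need to re-split it yourself:

```python
import json
import random

def split_712(ann_file, seed=0):
    """Split image ids of a COCO-style annotation file into train/val/test at 7:1:2."""
    with open(ann_file) as f:
        coco = json.load(f)
    ids = [img["id"] for img in coco["images"]]
    random.Random(seed).shuffle(ids)          # fixed seed for reproducibility
    n = len(ids)
    n_train, n_val = 7 * n // 10, n // 10     # integer arithmetic avoids float rounding
    train = set(ids[:n_train])
    val = set(ids[n_train:n_train + n_val])
    test = set(ids[n_train + n_val:])
    return train, val, test
```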
```bibtex
@inproceedings{mathis2021pretraining,
title={Pretraining boosts out-of-domain robustness for pose estimation},
author={Mathis, Alexander and Biasi, Thomas and Schneider, Steffen and Yuksekgonul, Mert and Rogers, Byron and Bethge, Matthias and Mathis, Mackenzie W},
booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
pages={1859--1868},
year={2021}
}
```
For the Horse-10 dataset, images can be downloaded from download. Please download the annotation files from horse10_annotations. Note that the images and annotations are for non-commercial use only, per the authors (see http://horse10.deeplabcut.org for more information). Extract them under `{MMPose}/data`, and make them look like this:
```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── horse10
        │-- annotations
        │   │-- horse10-train-split1.json
        │   │-- horse10-train-split2.json
        │   │-- horse10-train-split3.json
        │   │-- horse10-test-split1.json
        │   │-- horse10-test-split2.json
        │   │-- horse10-test-split3.json
        │-- labeled-data
            │-- BrownHorseinShadow
            │-- BrownHorseintoshadow
            │-- ...
```
```bibtex
@article{labuguen2020macaquepose,
title={MacaquePose: A novel ‘in the wild’ macaque monkey pose dataset for markerless motion capture},
author={Labuguen, Rollyn and Matsumoto, Jumpei and Negrete, Salvador and Nishimaru, Hiroshi and Nishijo, Hisao and Takada, Masahiko and Go, Yasuhiro and Inoue, Ken-ichi and Shibata, Tomohiro},
journal={bioRxiv},
year={2020},
publisher={Cold Spring Harbor Laboratory}
}
```
For the MacaquePose dataset, images can be downloaded from download. Please download the annotation files from macaque_annotations. Extract them under `{MMPose}/data`, and make them look like this:
```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── macaque
        │-- annotations
        │   │-- macaque_train.json
        │   │-- macaque_test.json
        │-- images
            │-- 01418849d54b3005.jpg
            │-- 0142d1d1a6904a70.jpg
            │-- 01ef2c4c260321b7.jpg
            │-- 020a1c75c8c85238.jpg
            │-- 020b1506eef2557d.jpg
            │-- ...
```
Since the official dataset does not provide a test set, we randomly select 12,500 images for training and use the rest for evaluation (see code).
```bibtex
@article{pereira2019fast,
title={Fast animal pose estimation using deep neural networks},
author={Pereira, Talmo D and Aldarondo, Diego E and Willmore, Lindsay and Kislin, Mikhail and Wang, Samuel S-H and Murthy, Mala and Shaevitz, Joshua W},
journal={Nature methods},
volume={16},
number={1},
pages={117--125},
year={2019},
publisher={Nature Publishing Group}
}
```
For the Vinegar Fly dataset, images can be downloaded from vinegar_fly_images. Please download the annotation files from vinegar_fly_annotations. Extract them under `{MMPose}/data`, and make them look like this:
```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── fly
        │-- annotations
        │   │-- fly_train.json
        │   │-- fly_test.json
        │-- images
            │-- 0.jpg
            │-- 1.jpg
            │-- 2.jpg
            │-- 3.jpg
            │-- ...
```
Since the official dataset does not provide a test set, we randomly select 90% of the images for training and the remaining 10% for evaluation (see code).
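The same 90/10 procedure is used for the Vinegar Fly, Desert Locust, and Grévy's Zebra datasets. A minimal sketch of such an image-level split over a COCO-style annotation file (illustrative only — the downloadable `*_train.json`/`*_test.json` files already encode the split used here):

```python
import json
import random

def split_90_10(ann_file, seed=0):
    """Assign 90% of the image ids to train and the rest to test."""
    with open(ann_file) as f:
        coco = json.load(f)
    ids = [img["id"] for img in coco["images"]]
    random.Random(seed).shuffle(ids)   # fixed seed for reproducibility
    n_train = 9 * len(ids) // 10
    return set(ids[:n_train]), set(ids[n_train:])
```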
```bibtex
@article{graving2019deepposekit,
title={DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning},
author={Graving, Jacob M and Chae, Daniel and Naik, Hemal and Li, Liang and Koger, Benjamin and Costelloe, Blair R and Couzin, Iain D},
journal={Elife},
volume={8},
pages={e47994},
year={2019},
publisher={eLife Sciences Publications Limited}
}
```
For the Desert Locust dataset, images can be downloaded from locust_images. Please download the annotation files from locust_annotations. Extract them under `{MMPose}/data`, and make them look like this:
```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── locust
        │-- annotations
        │   │-- locust_train.json
        │   │-- locust_test.json
        │-- images
            │-- 0.jpg
            │-- 1.jpg
            │-- 2.jpg
            │-- 3.jpg
            │-- ...
```
Since the official dataset does not provide a test set, we randomly select 90% of the images for training and the remaining 10% for evaluation (see code).
```bibtex
@article{graving2019deepposekit,
title={DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning},
author={Graving, Jacob M and Chae, Daniel and Naik, Hemal and Li, Liang and Koger, Benjamin and Costelloe, Blair R and Couzin, Iain D},
journal={Elife},
volume={8},
pages={e47994},
year={2019},
publisher={eLife Sciences Publications Limited}
}
```
For the Grévy’s Zebra dataset, images can be downloaded from zebra_images. Please download the annotation files from zebra_annotations. Extract them under `{MMPose}/data`, and make them look like this:
```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── zebra
        │-- annotations
        │   │-- zebra_train.json
        │   │-- zebra_test.json
        │-- images
            │-- 0.jpg
            │-- 1.jpg
            │-- 2.jpg
            │-- 3.jpg
            │-- ...
```
Since the official dataset does not provide a test set, we randomly select 90% of the images for training and the remaining 10% for evaluation (see code).
```bibtex
@inproceedings{li2020atrw,
title={ATRW: A Benchmark for Amur Tiger Re-identification in the Wild},
author={Li, Shuyuan and Li, Jianguo and Tang, Hanlin and Qian, Rui and Lin, Weiyao},
booktitle={Proceedings of the 28th ACM International Conference on Multimedia},
pages={2590--2598},
year={2020}
}
```
ATRW captures images of the Amur tiger (also known as the Siberian tiger or Northeast-China tiger) in the wild. For the ATRW dataset, please download the images from Pose_train, Pose_val, and Pose_test. Note that in the official ATRW annotation files, the key "file_name" is written as "filename"; to make them compatible with other COCO-style JSON files, we have renamed this key. Please download the modified annotation files from atrw_annotations. Extract them under `{MMPose}/data`, and make them look like this:
```text
mmpose
├── mmpose
├── docs
├── tests
├── tools
├── configs
`── data
    │── atrw
        │-- annotations
        │   │-- keypoint_train.json
        │   │-- keypoint_val.json
        │   │-- keypoint_trainval.json
        │-- images
            │-- train
            │   │-- 000002.jpg
            │   │-- 000003.jpg
            │   │-- ...
            │-- val
            │   │-- 000001.jpg
            │   │-- 000013.jpg
            │   │-- ...
            │-- test
                │-- 000000.jpg
                │-- 000004.jpg
                │-- ...
```
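The key rename described in the ATRW section above is mechanical; a sketch of what the modification does (the provided atrw_annotations files already include it, so this is only for reference):

```python
import json

def fix_filename_key(src, dst):
    """Rewrite a COCO-style file, renaming ATRW's "filename" key to "file_name"."""
    with open(src) as f:
        coco = json.load(f)
    for img in coco["images"]:
        if "filename" in img:
            img["file_name"] = img.pop("filename")
    with open(dst, "w") as f:
        json.dump(coco, f)
```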