# Awesome MMPose
A list of resources related to MMPose. Feel free to contribute!
## Contents
- [Tutorials](#tutorials)
- [Papers](#papers)
- [Datasets](#datasets)
- [Projects](#projects)
## Tutorials
- [MMPose Tutorial (Chinese)](https://github.com/TommyZihao/MMPose_Tutorials)
Video and code tutorials on MMPose in Chinese, by 同济子豪兄.
- [OpenMMLab Course](https://github.com/open-mmlab/OpenMMLabCourse)
This repository hosts articles, lectures, and tutorials on computer vision and OpenMMLab, helping learners understand the algorithms and master the toolboxes in a systematic way. (A minimal MMPose inference sketch follows this list for a quick start.)
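
Before diving into the tutorials, it can help to see how little code a basic MMPose inference run takes. Below is a minimal sketch, assuming MMPose 1.x and its dependencies are installed; the image path is a placeholder, and the exact structure of the returned predictions may vary slightly between versions.

```python
from mmpose.apis import MMPoseInferencer

# 'human' is a model alias for a 2D human pose estimator; weights are
# downloaded automatically on first use.
inferencer = MMPoseInferencer('human')

# Placeholder path: replace with a real image (or a video / folder of images).
result_generator = inferencer('path/to/image.jpg', show=False)

# The inferencer yields results one input at a time.
result = next(result_generator)
preds = result['predictions']  # per-image list of per-instance keypoint predictions
print(preds)
```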
## Papers
- [\[paper\]](https://arxiv.org/abs/2207.10387) [\[code\]](https://github.com/luminxu/Pose-for-Everything)
ECCV 2022, Pose for Everything: Towards Category-Agnostic Pose Estimation
- [\[paper\]](https://arxiv.org/abs/2201.04676) [\[code\]](https://github.com/Sense-X/UniFormer)
ICLR 2022, UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning
- [\[paper\]](https://arxiv.org/abs/2201.07412) [\[code\]](https://github.com/aim-uofa/Poseur)
ECCV 2022, Poseur: Direct Human Pose Regression with Transformers
- [\[paper\]](https://arxiv.org/abs/2106.03348) [\[code\]](https://github.com/ViTAE-Transformer/ViTAE-Transformer)
NeurIPS 2021, ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias
- [\[paper\]](https://arxiv.org/abs/2204.10762) [\[code\]](https://github.com/ZiyiZhang27/Dite-HRNet)
IJCAI-ECAI 2022, Dite-HRNet: Dynamic Lightweight High-Resolution Network for Human Pose Estimation
- [\[paper\]](https://arxiv.org/abs/2302.08453) [\[code\]](https://github.com/TencentARC/T2I-Adapter)
T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
- [\[paper\]](https://arxiv.org/pdf/2303.11638.pdf) [\[code\]](https://github.com/Gengzigang/PCT)
CVPR 2023, Human Pose as Compositional Tokens
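
Several of the papers above (e.g., Poseur, PCT) have configs and pretrained checkpoints integrated into MMPose. Below is a minimal sketch of loading such a model for top-down inference, assuming MMPose 1.x is installed; the config, checkpoint, and image paths are hypothetical placeholders.

```python
from mmpose.apis import init_model, inference_topdown

# Hypothetical paths: substitute a real config/checkpoint pair from the MMPose model zoo.
config_file = 'configs/body_2d_keypoint/topdown_heatmap/coco/example_config.py'
checkpoint_file = 'checkpoints/example_checkpoint.pth'

# Build the pose estimator (use 'cuda:0' if a GPU is available).
model = init_model(config_file, checkpoint_file, device='cpu')

# With no bounding boxes given, the whole image is treated as a single person box.
results = inference_topdown(model, 'path/to/image.jpg')
keypoints = results[0].pred_instances.keypoints  # shape: (num_instances, num_keypoints, 2)
print(keypoints)
```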
## Datasets
- [\[github\]](https://github.com/luminxu/Pose-for-Everything) **MP-100**
The Multi-category Pose (MP-100) dataset is a 2D pose dataset covering 100 object categories with over 20K instances, designed for developing category-agnostic pose estimation (CAPE) algorithms (see the annotation-inspection sketch after this list).
- [\[github\]](https://github.com/facebookresearch/Ego4d/) **Ego4D**
Ego4D is the world's largest egocentric (first-person) video ML dataset and benchmark suite, with 3,600 hours (and counting) of densely narrated video and a wide range of annotations across five new benchmark tasks. It covers hundreds of scenarios (household, outdoor, workplace, leisure, etc.) of daily-life activity captured in the wild by 926 unique camera wearers from 74 worldwide locations in 9 different countries.
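
Both datasets above ship keypoint annotations; MP-100 in particular appears to follow the COCO keypoint JSON convention (an assumption worth verifying against the dataset's own documentation). Below is a minimal sketch of inspecting such an annotation file with pycocotools; the annotation path is a hypothetical placeholder.

```python
from pycocotools.coco import COCO

# Hypothetical path: point this at an actual COCO-style keypoint annotation file.
ann_file = 'data/mp100/annotations/train.json'
coco = COCO(ann_file)

# Pick the first image and list the keypoint annotations attached to it.
img_id = coco.getImgIds()[0]
img_info = coco.loadImgs(img_id)[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))

print(img_info['file_name'], '-', len(anns), 'annotated instance(s)')
for ann in anns:
    # COCO keypoints are stored as a flat [x1, y1, v1, x2, y2, v2, ...] list,
    # where v is a visibility flag (0: not labeled, 1: occluded, 2: visible).
    kpts = ann.get('keypoints', [])
    print('category', ann['category_id'], '-', len(kpts) // 3, 'keypoints')
```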
## Projects
Waiting for your contribution!