
2D Human Pose Demo

We provide demo scripts to perform human pose estimation on images or videos.

2D Human Pose Top-Down Image Demo

Use full image as input

We provide a demo script to test a single image, using the full image as the input bounding box.

python demo/image_demo.py \
    ${IMG_FILE} ${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
    --out-file ${OUTPUT_FILE} \
    [--device ${GPU_ID or CPU}] \
    [--draw-heatmap]

If you use a heatmap-based model and set the --draw-heatmap argument, the predicted heatmap will be visualized together with the keypoints.

The pre-trained human pose estimation models can be downloaded from the model zoo. Take a COCO-trained model as an example:

python demo/image_demo.py \
    tests/data/coco/000000000785.jpg \
    configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w48_8xb32-210e_coco-256x192.py \
    https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth \
    --out-file vis_results.jpg \
    --draw-heatmap

To run this demo on CPU:

python demo/image_demo.py \
    tests/data/coco/000000000785.jpg \
    configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w48_8xb32-210e_coco-256x192.py \
    https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth \
    --out-file vis_results.jpg \
    --draw-heatmap \
    --device=cpu

Visualization result:


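If you prefer to call MMPose from Python instead of the demo script, here is a minimal sketch of the same full-image inference, assuming the mmpose 1.x high-level APIs (init_model and inference_topdown) and reusing the config, checkpoint, and image from the commands above:

from mmpose.apis import inference_topdown, init_model

# Same config/checkpoint as the commands above.
config_file = 'configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w48_8xb32-210e_coco-256x192.py'
checkpoint_file = 'https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth'

model = init_model(config_file, checkpoint_file, device='cpu')  # or 'cuda:0'

# With no bounding boxes given, the whole image is treated as a single input
# box, mirroring the "full image as input" behavior of image_demo.py.
results = inference_topdown(model, 'tests/data/coco/000000000785.jpg')
keypoints = results[0].pred_instances.keypoints  # per-instance keypoint coordinates
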
Use mmdet for human bounding box detection

We provide a demo script that runs mmdet for human detection and mmpose for pose estimation.

This assumes that you have already installed mmdet (version >= 3.0).

python demo/topdown_demo_with_mmdet.py \
    ${MMDET_CONFIG_FILE} ${MMDET_CHECKPOINT_FILE} \
    ${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
    --input ${INPUT_PATH} \
    [--output-root ${OUTPUT_DIR}] [--save-predictions] \
    [--show] [--draw-heatmap] [--device ${GPU_ID or CPU}] \
    [--bbox-thr ${BBOX_SCORE_THR}] [--kpt-thr ${KPT_SCORE_THR}]

Example:

python demo/topdown_demo_with_mmdet.py \
    demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py \
    https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192.py \
    https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w32_coco_256x192-c78dce93_20200708.pth \
    --input tests/data/coco/000000197388.jpg --show --draw-heatmap \
    --output-root vis_results/

Visualization result:


To save the predicted results on disk, please specify --save-predictions.
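
The demo script above simply chains an MMDetection detector with an MMPose top-down estimator. To reproduce the same pipeline directly from Python, here is a rough sketch, assuming mmdet >= 3.0, the mmpose 1.x APIs, and the adapt_mmdet_pipeline helper from mmpose.utils (the 0.3 detection score threshold is an arbitrary choice):

from mmdet.apis import inference_detector, init_detector
from mmpose.apis import inference_topdown, init_model
from mmpose.utils import adapt_mmdet_pipeline

img = 'tests/data/coco/000000197388.jpg'

# Human detector: same mmdet config/checkpoint as in the example above.
detector = init_detector(
    'demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py',
    'https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth',
    device='cpu')
# The bundled detection configs use mmdet 2.x-style pipelines; adapt them for mmdet 3.x.
detector.cfg = adapt_mmdet_pipeline(detector.cfg)

# Top-down pose estimator: same mmpose config/checkpoint as in the example above.
pose_model = init_model(
    'configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192.py',
    'https://download.openmmlab.com/mmpose/top_down/hrnet/hrnet_w32_coco_256x192-c78dce93_20200708.pth',
    device='cpu')

# Keep person boxes (COCO class 0) with a detection score above 0.3.
det_instances = inference_detector(detector, img).pred_instances
keep = (det_instances.labels == 0) & (det_instances.scores > 0.3)
bboxes = det_instances.bboxes[keep].cpu().numpy()

# Estimate keypoints for every kept bounding box (xyxy format).
pose_results = inference_topdown(pose_model, img, bboxes)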

2D Human Pose Top-Down Video Demo

The above demo script can also take a video as input, running mmdet for human detection and mmpose for pose estimation. The only difference is that ${INPUT_PATH} can be either a local path or a URL link to a video file.

Again, this assumes that you have already installed mmdet (version >= 3.0).

Example:

python demo/topdown_demo_with_mmdet.py \
    demo/mmdetection_cfg/faster_rcnn_r50_fpn_coco.py \
    https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth \
    configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192.py \
    https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192-81c58e40_20220909.pth \
    --input tests/data/posetrack18/videos/000001_mpiinew_test/000001_mpiinew_test.mp4 \
    --output-root=vis_results/demo --show --draw-heatmap

2D Human Pose Bottom-up Image/Video Demo

We also provide a demo script that uses bottom-up models to estimate human poses in an image or a video without relying on a human detector.

python demo/bottomup_demo.py \
    ${MMPOSE_CONFIG_FILE} ${MMPOSE_CHECKPOINT_FILE} \
    --input ${INPUT_PATH} \
    [--output-root ${OUTPUT_DIR}] [--save-predictions] \
    [--show] [--device ${GPU_ID or CPU}] \
    [--kpt-thr ${KPT_SCORE_THR}]

Example:

python demo/bottomup_demo.py \
    configs/body_2d_keypoint/dekr/coco/dekr_hrnet-w32_8xb10-140e_coco-512x512.py \
    https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/dekr/coco/dekr_hrnet-w32_8xb10-140e_coco-512x512_ac7c17bf-20221228.pth \
    --input tests/data/coco/000000197388.jpg --output-root=vis_results \
    --show --save-predictions

Visualization result:


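As with the top-down demos, bottom-up models can also be driven from Python. Here is a minimal sketch, assuming the mmpose 1.x APIs (init_model and inference_bottomup) and reusing the DEKR config and checkpoint from the command above:

from mmpose.apis import inference_bottomup, init_model

# Same DEKR config/checkpoint as the command above.
model = init_model(
    'configs/body_2d_keypoint/dekr/coco/dekr_hrnet-w32_8xb10-140e_coco-512x512.py',
    'https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/dekr/coco/dekr_hrnet-w32_8xb10-140e_coco-512x512_ac7c17bf-20221228.pth',
    device='cpu')

# Bottom-up models take the whole image and localize all persons at once,
# so no bounding boxes are needed.
results = inference_bottomup(model, 'tests/data/coco/000000197388.jpg')
keypoints = results[0].pred_instances.keypoints  # keypoints of all detected persons
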
2D Human Pose Estimation with Inferencer

The Inferencer provides a convenient interface for inference, allowing customization using model aliases instead of configuration files and checkpoint paths. It supports various input formats, including image paths, video paths, image folder paths, and webcams. Below is an example command:

python demo/inferencer_demo.py \
    tests/data/posetrack18/videos/000001_mpiinew_test/000001_mpiinew_test.mp4 \
    --pose2d human --vis-out-dir vis_results/posetrack18

This command runs inference on the video and saves the visualization results to the vis_results/posetrack18 directory.

In addition, the Inferencer supports saving predicted poses. For more information, please refer to the Inferencer documentation.
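
The same functionality is also available as a Python class. Here is a minimal sketch, assuming the MMPoseInferencer API from mmpose.apis (the output directories are arbitrary examples):

from mmpose.apis import MMPoseInferencer

# 'human' is the same model alias used by inferencer_demo.py above.
inferencer = MMPoseInferencer(pose2d='human')

# Calling the inferencer returns a generator that yields results one image
# (or one video frame) at a time.
result_generator = inferencer(
    'tests/data/coco/000000000785.jpg',
    vis_out_dir='vis_results',    # save visualizations here
    pred_out_dir='predictions')   # save predicted keypoints here
results = [result for result in result_generator]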

Speed Up Inference

Some tips to speed up MMPose inference:

For top-down models, try editing the config file. For example,

  1. set model.test_cfg.flip_test=False in topdown-res50, as illustrated in the sketch after this list.
  2. use a faster human bounding box detector; see MMDetection.
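
Disabling flip test skips the extra forward pass on the horizontally flipped image, roughly halving inference time at a small accuracy cost. Here is a minimal sketch of the relevant fragment of a top-down config, assuming the standard mmpose 1.x config layout (the rest of the model definition stays unchanged):

# In the top-down config file, set flip_test=False in the model's test_cfg.
model = dict(
    # ... data_preprocessor / backbone / head settings unchanged ...
    test_cfg=dict(
        flip_test=False,  # single forward pass instead of original + flipped
    ))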