# PISA

> Prime Sample Attention in Object Detection

## Abstract

It is a common paradigm in object detection frameworks to treat all samples equally and aim to maximize performance on average. In this work, we revisit this paradigm through a careful study of how different samples contribute to the overall performance measured in terms of mAP. Our study suggests that the samples in each mini-batch are neither independent nor equally important, and therefore a better classifier on average does not necessarily mean higher mAP. Motivated by this study, we propose the notion of Prime Samples, those that play a key role in driving the detection performance. We further develop a simple yet effective sampling and learning strategy called PrIme Sample Attention (PISA) that directs the focus of the training process towards such samples. Our experiments demonstrate that it is often more effective to focus on prime samples than hard samples when training a detector. In particular, on the MS COCO dataset, PISA outperforms the random sampling baseline and hard mining schemes, e.g., OHEM and Focal Loss, consistently by around 2% on both single-stage and two-stage detectors, even with a strong ResNeXt-101 backbone.
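For intuition, the core of PISA's learning strategy is to up-weight the classification loss of highly-ranked positive samples. The sketch below illustrates that reweighting idea in PyTorch; the plain per-class IoU ranking is a simplification of the paper's IoU Hierarchical Local Rank, and the function name, the linear rank-to-importance mapping, and the default `k`/`bias` values are illustrative assumptions, not the exact MMDetection implementation.

```python
import torch

def isr_weights(ious: torch.Tensor, labels: torch.Tensor,
                k: float = 2.0, bias: float = 0.0) -> torch.Tensor:
    """Sketch of importance-based sample reweighting for positive samples.

    `ious` holds each positive sample's IoU with its matched ground truth,
    `labels` its class id. Samples are ranked per class by IoU (a
    simplification of PISA's IoU Hierarchical Local Rank), and higher ranks
    receive larger classification-loss weights via w = (bias + u) ** k.
    """
    weights = torch.ones_like(ious)
    for cls in labels.unique():
        mask = labels == cls
        n = int(mask.sum())
        # Rank this class's samples by descending IoU: rank 0 is the best.
        order = ious[mask].argsort(descending=True)
        ranks = torch.empty_like(order)
        ranks[order] = torch.arange(n, device=ious.device)
        # Map rank linearly to an importance score u in (0, 1].
        u = 1.0 - ranks.float() / n
        weights[mask] = (bias + u).pow(k)
    # Renormalize so the overall loss magnitude is unchanged.
    return weights * (weights.numel() / weights.sum())
```

Per-sample classification losses would then be scaled elementwise, e.g. `loss_cls = (isr_weights(ious, labels) * per_sample_ce).sum()`, so that prime (high-IoU) samples dominate the gradient.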

## Results and Models

| PISA | Network      | Backbone       | Lr schd | box AP | mask AP | Config | Download     |
| :--: | :----------: | :------------: | :-----: | :----: | :-----: | :----: | :----------: |
| ×    | Faster R-CNN | R-50-FPN       | 1x      | 36.4   |         | -      |              |
| √    | Faster R-CNN | R-50-FPN       | 1x      | 38.4   |         | config | model \| log |
| ×    | Faster R-CNN | X101-32x4d-FPN | 1x      | 40.1   |         | -      |              |
| √    | Faster R-CNN | X101-32x4d-FPN | 1x      | 41.9   |         | config | model \| log |
| ×    | Mask R-CNN   | R-50-FPN       | 1x      | 37.3   | 34.2    | -      |              |
| √    | Mask R-CNN   | R-50-FPN       | 1x      | 39.1   | 35.2    | config | model \| log |
| ×    | Mask R-CNN   | X101-32x4d-FPN | 1x      | 41.1   | 37.1    | -      |              |
| √    | Mask R-CNN   | X101-32x4d-FPN | 1x      |        |         |        |              |
| ×    | RetinaNet    | R-50-FPN       | 1x      | 35.6   |         | -      |              |
| √    | RetinaNet    | R-50-FPN       | 1x      | 36.9   |         | config | model \| log |
| ×    | RetinaNet    | X101-32x4d-FPN | 1x      | 39.0   |         | -      |              |
| √    | RetinaNet    | X101-32x4d-FPN | 1x      | 40.7   |         | config | model \| log |
| ×    | SSD300       | VGG16          | 1x      | 25.6   |         | -      |              |
| √    | SSD300       | VGG16          | 1x      | 27.6   |         | config | model \| log |
| ×    | SSD512       | VGG16          | 1x      | 29.3   |         | -      |              |
| √    | SSD512       | VGG16          | 1x      | 31.8   |         | config | model \| log |
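To run inference with one of the released checkpoints, MMDetection's high-level API should suffice. A minimal sketch, assuming the config lives under `configs/pisa/` and that `checkpoint.pth` and `demo.jpg` are placeholder paths:

```python
from mmdet.apis import init_detector, inference_detector

# Paths are placeholders: use a config from this directory and the
# matching checkpoint downloaded from the table above.
config = 'configs/pisa/faster-rcnn_r50_fpn_pisa_1x_coco.py'
checkpoint = 'checkpoint.pth'

model = init_detector(config, checkpoint, device='cuda:0')
result = inference_detector(model, 'demo.jpg')
```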

**Notes:**

- In the original paper, all models were trained and tested with mmdet v1.x, so the results may not exactly match this release on v2.0.
- Note that PISA only modifies the training pipeline, so inference time remains the same as the baseline's; a config sketch illustrating this follows below.
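Concretely, the configs in this directory inherit a baseline config and override only the training-time components. Below is a minimal sketch modeled on `faster-rcnn_r50_fpn_pisa_1x_coco.py`; the field values are best-effort assumptions, so consult the actual config file for the authoritative settings.

```python
# Sketch of a PISA config in the MMDetection config style. Field values
# are illustrative; check faster-rcnn_r50_fpn_pisa_1x_coco.py for the
# exact settings.
_base_ = '../faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py'

model = dict(
    roi_head=dict(type='PISARoIHead'),  # RoI head with PISA losses
    train_cfg=dict(
        rcnn=dict(
            # Score-aware hierarchical-local-rank sampler
            sampler=dict(
                type='ScoreHLRSampler',
                num=512,
                pos_fraction=0.25,
                neg_pos_ub=-1,
                add_gt_as_proposals=True,
                k=0.5,
                bias=0.),
            # Importance-based Sample Reweighting for positives
            isr=dict(k=2, bias=0),
            # Classification-Aware Regression Loss
            carl=dict(k=1, bias=0.2))))
```

The inference graph is untouched: only the sampler and the ISR/CARL loss terms inside `train_cfg` differ from the baseline, which is why inference time matches the baseline.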

## Citation

```latex
@inproceedings{cao2019prime,
  title={Prime sample attention in object detection},
  author={Cao, Yuhang and Chen, Kai and Loy, Chen Change and Lin, Dahua},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition},
  year={2020}
}
```