# Side-Aware Boundary Localization for More Precise Object Detection
Current object detection frameworks mainly rely on bounding box regression to localize objects. Despite the remarkable progress in recent years, the precision of bounding box regression remains unsatisfactory, hence limiting performance in object detection. We observe that precise localization requires careful placement of each side of the bounding box. However, the mainstream approach, which focuses on predicting centers and sizes, is not the most effective way to accomplish this task, especially when there exist displacements with large variance between the anchors and the targets. In this paper, we propose an alternative approach, named Side-Aware Boundary Localization (SABL), where each side of the bounding box is respectively localized with a dedicated network branch. To tackle the difficulty of precise localization in the presence of displacements with large variance, we further propose a two-step localization scheme, which first predicts a range of movement through bucket prediction and then pinpoints the precise position within the predicted bucket. We test the proposed method on both two-stage and single-stage detection frameworks. Replacing the standard bounding box regression branch with the proposed design leads to significant improvements on Faster R-CNN, RetinaNet, and Cascade R-CNN, by 3.0%, 1.7%, and 0.9%, respectively.
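To make the two-step scheme concrete, below is a minimal sketch of how one side of a box could be decoded: a classifier picks the most likely bucket within the side's search interval, and a fine regressor shifts the boundary inside that bucket. All names, shapes, and the toy bucket layout are illustrative assumptions, not MMDetection's actual `SABLHead` implementation, which handles all four sides jointly and also defines the training targets.

```python
import torch


def decode_side(side_lo, side_hi, bucket_logits, bucket_offsets):
    """SABL-style two-step decoding for a single box side (sketch only).

    side_lo, side_hi : (N,) tensors bounding the search interval of this
        side, e.g. an interval centred on the anchor's left edge.
    bucket_logits    : (N, B) classification scores, one score per bucket.
    bucket_offsets   : (N, B) fine offsets in units of the bucket width,
        measured from each bucket's centre.
    Returns the decoded side coordinate, shape (N,).
    """
    num_buckets = bucket_logits.size(1)
    bucket_w = (side_hi - side_lo) / num_buckets                   # width of one bucket
    # Step 1 (coarse): bucket prediction -- pick the most likely bucket.
    best = bucket_logits.argmax(dim=1)                             # (N,)
    centers = side_lo + (best.float() + 0.5) * bucket_w            # bucket centres
    # Step 2 (fine): regress the boundary within the chosen bucket.
    fine = bucket_offsets.gather(1, best.unsqueeze(1)).squeeze(1)  # (N,)
    return centers + fine * bucket_w


if __name__ == "__main__":
    torch.manual_seed(0)
    n, b = 4, 7
    lo, hi = torch.zeros(n), torch.full((n,), 56.0)  # toy search ranges
    logits, offsets = torch.randn(n, b), 0.1 * torch.randn(n, b)
    print(decode_side(lo, hi, logits, offsets))
```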
The results on COCO 2017 val are shown in the tables below (results on test-dev are usually slightly higher than on val). Single-scale testing (1333x800) is adopted for all results.
| Method             | Backbone  | Lr schd | ms-train | box AP | Config | Download     |
| :----------------: | :-------: | :-----: | :------: | :----: | :----: | :----------: |
| SABL Faster R-CNN  | R-50-FPN  | 1x      | N        | 39.9   | config | model \| log |
| SABL Faster R-CNN  | R-101-FPN | 1x      | N        | 41.7   | config | model \| log |
| SABL Cascade R-CNN | R-50-FPN  | 1x      | N        | 41.6   | config | model \| log |
| SABL Cascade R-CNN | R-101-FPN | 1x      | N        | 43.0   | config | model \| log |
| Method         | Backbone  | GN | Lr schd | ms-train    | box AP | Config | Download     |
| :------------: | :-------: | :-: | :-----: | :---------: | :----: | :----: | :----------: |
| SABL RetinaNet | R-50-FPN  | N  | 1x      | N           | 37.7   | config | model \| log |
| SABL RetinaNet | R-50-FPN  | Y  | 1x      | N           | 38.8   | config | model \| log |
| SABL RetinaNet | R-101-FPN | N  | 1x      | N           | 39.7   | config | model \| log |
| SABL RetinaNet | R-101-FPN | Y  | 1x      | N           | 40.5   | config | model \| log |
| SABL RetinaNet | R-101-FPN | Y  | 2x      | Y (640~800) | 42.9   | config | model \| log |
| SABL RetinaNet | R-101-FPN | Y  | 2x      | Y (480~960) | 43.6   | config | model \| log |
We provide config files to reproduce the object detection results of the ECCV 2020 Spotlight paper *Side-Aware Boundary Localization for More Precise Object Detection*.
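As a rough usage sketch, these configs can be loaded through MMDetection's high-level inference API; the local paths and checkpoint filename below are placeholders rather than the exact released names.

```python
from mmdet.apis import inference_detector, init_detector

# Placeholder paths: run from an MMDetection checkout and point the
# checkpoint at a SABL model downloaded from the tables above.
config_file = 'configs/sabl/sabl-faster-rcnn_r50_fpn_1x_coco.py'
checkpoint_file = 'checkpoints/sabl-faster-rcnn_r50_fpn_1x_coco.pth'  # hypothetical filename

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'demo/demo.jpg')  # any test image path
```

Training and evaluation follow the standard `tools/train.py` and `tools/test.py` workflow with the same config files.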
@inproceedings{Wang_2020_ECCV,
title = {Side-Aware Boundary Localization for More Precise Object Detection},
author = {Jiaqi Wang and Wenwei Zhang and Yuhang Cao and Kai Chen and Jiangmiao Pang and Tao Gong and Jianping Shi and Chen Change Loy and Dahua Lin},
booktitle = {ECCV},
year = {2020}
}