
# RetinaNet

> Focal Loss for Dense Object Detection

## Abstract

The highest accuracy object detectors to date are based on a two-stage approach popularized by R-CNN, where a classifier is applied to a sparse set of candidate object locations. In contrast, one-stage detectors that are applied over a regular, dense sampling of possible object locations have the potential to be faster and simpler, but have trailed the accuracy of two-stage detectors thus far. In this paper, we investigate why this is the case. We discover that the extreme foreground-background class imbalance encountered during training of dense detectors is the central cause. We propose to address this class imbalance by reshaping the standard cross entropy loss such that it down-weights the loss assigned to well-classified examples. Our novel Focal Loss focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. To evaluate the effectiveness of our loss, we design and train a simple dense detector we call RetinaNet. Our results show that when trained with the focal loss, RetinaNet is able to match the speed of previous one-stage detectors while surpassing the accuracy of all existing state-of-the-art two-stage detectors.
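The reshaped cross entropy the abstract describes is the focal loss FL(p_t) = -α_t (1 - p_t)^γ log(p_t). A minimal, self-contained sketch of the binary form (using the paper's default α = 0.25, γ = 2; not taken from this repo's code):

```python
import math

def focal_loss(p, target, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction.

    p      -- predicted probability of the positive class
    target -- 1 for foreground, 0 for background
    alpha  -- class-balancing weight for the positive class
    gamma  -- focusing parameter; gamma=0 recovers weighted cross entropy
    """
    # p_t is the model's probability for the *true* class.
    p_t = p if target == 1 else 1.0 - p
    alpha_t = alpha if target == 1 else 1.0 - alpha
    # The (1 - p_t)^gamma factor down-weights well-classified examples.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# An easy negative (p=0.1, target=0) is strongly down-weighted
# relative to plain cross entropy:
ce = -math.log(1.0 - 0.1)           # ~0.105
fl = focal_loss(0.1, 0, alpha=0.5)  # ~5.3e-4
```

With γ = 2, the huge number of easy background anchors contributes almost nothing to the total loss, which is exactly how RetinaNet avoids the foreground-background imbalance problem.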

## Results and Models

| Backbone        | Style   | Lr schd       | Mem (GB) | Inf time (fps) | box AP | Config | Download     |
| :-------------: | :-----: | :-----------: | :------: | :------------: | :----: | :----: | :----------: |
| R-18-FPN        | pytorch | 1x            | 1.7      | -              | 31.7   | config | model \| log |
| R-18-FPN        | pytorch | 1x (1 x 8 BS) | 5.0      | -              | 31.7   | config | model \| log |
| R-50-FPN        | caffe   | 1x            | 3.5      | 18.6           | 36.3   | config | model \| log |
| R-50-FPN        | pytorch | 1x            | 3.8      | 19.0           | 36.5   | config | model \| log |
| R-50-FPN (FP16) | pytorch | 1x            | 2.8      | 31.6           | 36.4   | config | model \| log |
| R-50-FPN        | pytorch | 2x            | -        | -              | 37.4   | config | model \| log |
| R-101-FPN       | caffe   | 1x            | 5.5      | 14.7           | 38.5   | config | model \| log |
| R-101-FPN       | pytorch | 1x            | 5.7      | 15.0           | 38.5   | config | model \| log |
| R-101-FPN       | pytorch | 2x            | -        | -              | 38.9   | config | model \| log |
| X-101-32x4d-FPN | pytorch | 1x            | 7.0      | 12.1           | 39.9   | config | model \| log |
| X-101-32x4d-FPN | pytorch | 2x            | -        | -              | 40.1   | config | model \| log |
| X-101-64x4d-FPN | pytorch | 1x            | 10.0     | 8.7            | 41.0   | config | model \| log |
| X-101-64x4d-FPN | pytorch | 2x            | -        | -              | 40.8   | config | model \| log |

## Pre-trained Models

We also train some models with longer schedules and multi-scale training. Users can fine-tune them for downstream tasks.

| Backbone        | Style   | Lr schd | Mem (GB) | box AP | Config | Download     |
| :-------------: | :-----: | :-----: | :------: | :----: | :----: | :----------: |
| R-50-FPN        | pytorch | 3x      | 3.5      | 39.5   | config | model \| log |
| R-101-FPN       | caffe   | 3x      | 5.4      | 40.7   | config | model \| log |
| R-101-FPN       | pytorch | 3x      | 5.4      | 41.0   | config | model \| log |
| X-101-64x4d-FPN | pytorch | 3x      | 9.8      | 41.6   | config | model \| log |
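Fine-tuning in MMDetection typically follows the config-inheritance pattern: a downstream config inherits one of the files in this directory and overrides only what changes. A sketch, where the class count, checkpoint path, and filename choice are placeholders rather than anything shipped in this repo:

```python
# Hypothetical fine-tuning config: inherits the 3x multi-scale RetinaNet
# config from this directory and adapts it to a 10-class dataset.
_base_ = './retinanet_r50_fpn_ms-640-800-3x_coco.py'

# Override only the parts that differ from the base config:
# the detection head must match the new dataset's class count.
model = dict(bbox_head=dict(num_classes=10))

# Initialize weights from a pre-trained checkpoint (placeholder path).
load_from = 'checkpoints/retinanet_r50_fpn_ms-640-800-3x_coco.pth'
```

Everything not overridden (backbone, FPN, anchor settings, focal-loss hyperparameters) is inherited from the base file unchanged.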

## Citation

@inproceedings{lin2017focal,
  title={Focal loss for dense object detection},
  author={Lin, Tsung-Yi and Goyal, Priya and Girshick, Ross and He, Kaiming and Doll{\'a}r, Piotr},
  booktitle={Proceedings of the IEEE international conference on computer vision},
  year={2017}
}