LVIS

LVIS: A Dataset for Large Vocabulary Instance Segmentation

Abstract

Progress on object detection is enabled by datasets that focus the research community's attention on open challenges. This process led us from simple images to complex scenes and from bounding boxes to segmentation masks. In this work, we introduce LVIS (pronounced "el-vis"): a new dataset for Large Vocabulary Instance Segmentation. We plan to collect ~2 million high-quality instance segmentation masks for over 1000 entry-level object categories in 164k images. Due to the Zipfian distribution of categories in natural images, LVIS naturally has a long tail of categories with few training samples. Given that state-of-the-art deep learning methods for object detection perform poorly in the low-sample regime, we believe that our dataset poses an important and exciting new scientific challenge.

Common Settings

  • Please follow the install guide to install the open-mmlab forked cocoapi first.

  • Run the following command to install our forked lvis-api:

  pip install git+https://github.com/lvis-dataset/lvis-api.git

  • All experiments here use the oversampling strategy with an oversampling threshold of 1e-3 (the sample1e-3 suffix in the config names); a sketch of the sampling rule follows this list.

  • LVIS v0.5 is about half the size of COCO, so a 2x schedule on LVIS runs roughly the same number of iterations as a 1x schedule on COCO.
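
For reference, the sample1e-3 oversampling follows the repeat-factor sampling rule from the LVIS paper: a category c appearing in a fraction f(c) of training images gets repeat factor r(c) = max(1, sqrt(t / f(c))) with t = 1e-3, and an image is repeated according to the largest r(c) among its categories. The sketch below is illustrative only; the frequencies are made-up toy values, and in MMDetection this logic lives in the ClassBalancedDataset wrapper.

```python
import math

# Repeat-factor sampling (Gupta et al., 2019). With threshold t, a category c
# whose fraction of training images f(c) is below t gets repeat factor
# r(c) = max(1, sqrt(t / f(c))); an image repeats by the max over its categories.
# The frequencies below are toy values for illustration only.

def category_repeat_factor(freq, thr=1e-3):
    """Repeat factor for a single category with image frequency `freq`."""
    return max(1.0, math.sqrt(thr / freq))

def image_repeat_factor(category_freqs, thr=1e-3):
    """An image is repeated by the largest repeat factor among its categories."""
    return max(category_repeat_factor(f, thr) for f in category_freqs)

print(category_repeat_factor(0.05))       # frequent category: 1.0 (no oversampling)
print(category_repeat_factor(1e-5))       # rare category: 10.0
print(image_repeat_factor([0.05, 1e-5]))  # image with both: 10.0
```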

Results and models of LVIS v0.5

| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| R-50-FPN | pytorch | 2x | - | - | 26.1 | 25.9 | config | model \| log |
| R-101-FPN | pytorch | 2x | - | - | 27.1 | 27.0 | config | model \| log |
| X-101-32x4d-FPN | pytorch | 2x | - | - | 26.7 | 26.9 | config | model \| log |
| X-101-64x4d-FPN | pytorch | 2x | - | - | 26.4 | 26.0 | config | model \| log |
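
To make the sample1e-3 naming concrete, here is a hypothetical, abridged dataset section in the style of the v0.5 configs above, assuming MMDetection 2.x field names (ClassBalancedDataset, LVISV05Dataset); the exact keys and paths vary by version and are illustrative only:

```python
# Hypothetical, abridged dataset config in the style of the v0.5 configs above.
# Assumes MMDetection 2.x field names; exact keys and paths vary by version.
dataset_type = 'LVISV05Dataset'
data_root = 'data/lvis_v0.5/'
data = dict(
    train=dict(
        type='ClassBalancedDataset',
        oversample_thr=1e-3,  # the "sample1e-3" in the config names
        dataset=dict(
            type=dataset_type,
            ann_file=data_root + 'annotations/lvis_v0.5_train.json',
            img_prefix=data_root + 'train2017/')))
```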

Results and models of LVIS v1

| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| R-50-FPN | pytorch | 1x | 9.1 | - | 22.5 | 21.7 | config | model \| log |
| R-101-FPN | pytorch | 1x | 10.8 | - | 24.6 | 23.6 | config | model \| log |
| X-101-32x4d-FPN | pytorch | 1x | 11.8 | - | 26.7 | 25.5 | config | model \| log |
| X-101-64x4d-FPN | pytorch | 1x | 14.6 | - | 27.2 | 25.8 | config | model \| log |
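
Results dumped to a JSON file can also be evaluated offline with the lvis-api installed above. A minimal sketch follows; both file paths are placeholders for your own annotation and prediction files:

```python
from lvis import LVISEval

# Evaluate LVIS-style predictions dumped to JSON (placeholder paths).
ann_file = 'data/lvis_v1/annotations/lvis_v1_val.json'
result_file = 'results.segm.json'  # predictions in LVIS/COCO result format

lvis_eval = LVISEval(ann_file, result_file, iou_type='segm')
lvis_eval.run()            # evaluate, accumulate, summarize
lvis_eval.print_results()  # prints AP, APr/APc/APf, etc.
```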

Citation

```latex
@inproceedings{gupta2019lvis,
  title={{LVIS}: A Dataset for Large Vocabulary Instance Segmentation},
  author={Gupta, Agrim and Dollar, Piotr and Girshick, Ross},
  booktitle={Proceedings of the {IEEE} Conference on Computer Vision and Pattern Recognition},
  year={2019}
}
```