DJW 10 months ago
commit
c16313bb6a
100 changed files with 8974 additions and 0 deletions
  1. .circleci/config.yml (+34 -0)
  2. .circleci/docker/Dockerfile (+11 -0)
  3. .circleci/test.yml (+199 -0)
  4. .dev_scripts/batch_test_list.py (+545 -0)
  5. .dev_scripts/batch_train_list.txt (+80 -0)
  6. .dev_scripts/benchmark_filter.py (+167 -0)
  7. .dev_scripts/benchmark_full_models.txt (+91 -0)
  8. .dev_scripts/benchmark_inference_fps.py (+171 -0)
  9. .dev_scripts/benchmark_options.py (+13 -0)
  10. .dev_scripts/benchmark_test.py (+115 -0)
  11. .dev_scripts/benchmark_test_image.py (+134 -0)
  12. .dev_scripts/benchmark_train.py (+178 -0)
  13. .dev_scripts/benchmark_train_models.txt (+18 -0)
  14. .dev_scripts/benchmark_valid_flops.py (+295 -0)
  15. .dev_scripts/check_links.py (+157 -0)
  16. .dev_scripts/convert_test_benchmark_script.py (+114 -0)
  17. .dev_scripts/convert_train_benchmark_script.py (+104 -0)
  18. .dev_scripts/covignore.cfg (+5 -0)
  19. .dev_scripts/diff_coverage_test.sh (+40 -0)
  20. .dev_scripts/download_checkpoints.py (+83 -0)
  21. .dev_scripts/gather_models.py (+344 -0)
  22. .dev_scripts/gather_test_benchmark_metric.py (+96 -0)
  23. .dev_scripts/gather_train_benchmark_metric.py (+151 -0)
  24. .dev_scripts/linter.sh (+3 -0)
  25. .dev_scripts/test_benchmark.sh (+157 -0)
  26. .dev_scripts/test_init_backbone.py (+178 -0)
  27. .dev_scripts/train_benchmark.sh (+164 -0)
  28. .github/CODE_OF_CONDUCT.md (+76 -0)
  29. .github/CONTRIBUTING.md (+1 -0)
  30. .github/ISSUE_TEMPLATE/config.yml (+9 -0)
  31. .github/ISSUE_TEMPLATE/error-report.md (+46 -0)
  32. .github/ISSUE_TEMPLATE/feature_request.md (+21 -0)
  33. .github/ISSUE_TEMPLATE/general_questions.md (+7 -0)
  34. .github/ISSUE_TEMPLATE/reimplementation_questions.md (+67 -0)
  35. .github/pull_request_template.md (+25 -0)
  36. .github/workflows/deploy.yml (+28 -0)
  37. .gitignore (+123 -0)
  38. .owners.yml (+14 -0)
  39. .pre-commit-config-zh-cn.yaml (+61 -0)
  40. .pre-commit-config.yaml (+50 -0)
  41. .readthedocs.yml (+9 -0)
  42. CITATION.cff (+8 -0)
  43. LICENSE (+203 -0)
  44. MANIFEST.in (+6 -0)
  45. README.md (+433 -0)
  46. README_zh-CN.md (+452 -0)
  47. configs/_base_/datasets/cityscapes_detection.py (+84 -0)
  48. configs/_base_/datasets/cityscapes_instance.py (+113 -0)
  49. configs/_base_/datasets/coco_detection.py (+95 -0)
  50. configs/_base_/datasets/coco_instance.py (+95 -0)
  51. configs/_base_/datasets/coco_instance_semantic.py (+78 -0)
  52. configs/_base_/datasets/coco_panoptic.py (+94 -0)
  53. configs/_base_/datasets/deepfashion.py (+95 -0)
  54. configs/_base_/datasets/lvis_v0.5_instance.py (+79 -0)
  55. configs/_base_/datasets/lvis_v1_instance.py (+22 -0)
  56. configs/_base_/datasets/objects365v1_detection.py (+74 -0)
  57. configs/_base_/datasets/objects365v2_detection.py (+73 -0)
  58. configs/_base_/datasets/openimages_detection.py (+81 -0)
  59. configs/_base_/datasets/semi_coco_detection.py (+178 -0)
  60. configs/_base_/datasets/voc0712.py (+92 -0)
  61. configs/_base_/datasets/wider_face.py (+73 -0)
  62. configs/_base_/default_runtime.py (+24 -0)
  63. configs/_base_/models/cascade-mask-rcnn_r50_fpn.py (+203 -0)
  64. configs/_base_/models/cascade-rcnn_r50_fpn.py (+185 -0)
  65. configs/_base_/models/fast-rcnn_r50_fpn.py (+68 -0)
  66. configs/_base_/models/faster-rcnn_r50-caffe-c4.py (+123 -0)
  67. configs/_base_/models/faster-rcnn_r50-caffe-dc5.py (+111 -0)
  68. configs/_base_/models/faster-rcnn_r50_fpn.py (+114 -0)
  69. configs/_base_/models/mask-rcnn_r50-caffe-c4.py (+132 -0)
  70. configs/_base_/models/mask-rcnn_r50_fpn.py (+127 -0)
  71. configs/_base_/models/retinanet_r50_fpn.py (+68 -0)
  72. configs/_base_/models/rpn_r50-caffe-c4.py (+64 -0)
  73. configs/_base_/models/rpn_r50_fpn.py (+64 -0)
  74. configs/_base_/models/ssd300.py (+63 -0)
  75. configs/_base_/schedules/schedule_1x.py (+28 -0)
  76. configs/_base_/schedules/schedule_20e.py (+28 -0)
  77. configs/_base_/schedules/schedule_2x.py (+28 -0)
  78. configs/albu_example/README.md (+31 -0)
  79. configs/albu_example/mask-rcnn_r50_fpn_albu-1x_coco.py (+66 -0)
  80. configs/albu_example/metafile.yml (+17 -0)
  81. configs/atss/README.md (+31 -0)
  82. configs/atss/atss_r101_fpn_1x_coco.py (+6 -0)
  83. configs/atss/atss_r101_fpn_8xb8-amp-lsj-200e_coco.py (+7 -0)
  84. configs/atss/atss_r18_fpn_8xb8-amp-lsj-200e_coco.py (+7 -0)
  85. configs/atss/atss_r50_fpn_1x_coco.py (+71 -0)
  86. configs/atss/atss_r50_fpn_8xb8-amp-lsj-200e_coco.py (+81 -0)
  87. configs/atss/metafile.yml (+60 -0)
  88. configs/autoassign/README.md (+35 -0)
  89. configs/autoassign/autoassign_r50-caffe_fpn_1x_coco.py (+69 -0)
  90. configs/autoassign/metafile.yml (+33 -0)
  91. configs/boxinst/README.md (+32 -0)
  92. configs/boxinst/boxinst_r101_fpn_ms-90k_coco.py (+8 -0)
  93. configs/boxinst/boxinst_r50_fpn_ms-90k_coco.py (+93 -0)
  94. configs/boxinst/metafile.yml (+52 -0)
  95. configs/carafe/README.md (+42 -0)
  96. configs/carafe/faster-rcnn_r50_fpn-carafe_1x_coco.py (+20 -0)
  97. configs/carafe/mask-rcnn_r50_fpn-carafe_1x_coco.py (+30 -0)
  98. configs/carafe/metafile.yml (+55 -0)
  99. configs/cascade_rcnn/README.md (+79 -0)
  100. configs/cascade_rcnn/cascade-mask-rcnn_r101-caffe_fpn_1x_coco.py (+7 -0)

+ 34 - 0
.circleci/config.yml

@@ -0,0 +1,34 @@
+version: 2.1
+
+# this allows you to use CircleCI's dynamic configuration feature
+setup: true
+
+# the path-filtering orb is required to continue a pipeline based on
+# the path of an updated fileset
+orbs:
+  path-filtering: circleci/path-filtering@0.1.2
+
+workflows:
+  # the always-run workflow is always triggered, regardless of the pipeline parameters.
+  always-run:
+    jobs:
+      # the path-filtering/filter job determines which pipeline
+      # parameters to update.
+      - path-filtering/filter:
+          name: check-updated-files
+          # 3-column, whitespace-delimited mapping. One mapping per
+          # line:
+          # <regex path-to-test> <parameter-to-set> <value-of-pipeline-parameter>
+          mapping: |
+            mmdet/.* lint_only false
+            requirements/.* lint_only false
+            tests/.* lint_only false
+            tools/.* lint_only false
+            configs/.* lint_only false
+            .circleci/.* lint_only false
+          base-revision: dev-3.x
+          # this is the path of the configuration we should trigger once
+          # path filtering and pipeline parameter value updates are
+          # complete. In this case, we are using the parent dynamic
+          # configuration itself.
+          config-path: .circleci/test.yml
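
For context, not part of the committed file: the `mapping` block above is a three-column, whitespace-delimited table that the path-filtering orb evaluates against every file changed since `base-revision`, setting the named pipeline parameter on any match before handing off to `.circleci/test.yml`. A minimal Python sketch of that selection logic (assuming the orb regex-matches each changed path; the helper names here are hypothetical):

import re

# Mapping copied from the config above: (path regex, parameter, value).
MAPPING = [
    ('mmdet/.*', 'lint_only', False),
    ('requirements/.*', 'lint_only', False),
    ('tests/.*', 'lint_only', False),
    ('tools/.*', 'lint_only', False),
    ('configs/.*', 'lint_only', False),
    ('.circleci/.*', 'lint_only', False),
]

def resolve_parameters(changed_files):
    # Default value taken from the `parameters` block in .circleci/test.yml.
    params = {'lint_only': True}
    for path in changed_files:
        for pattern, name, value in MAPPING:
            if re.fullmatch(pattern, path):
                params[name] = value
    return params

# Docs-only changes keep lint_only=True, so only the lint workflow runs;
# touching library code flips it and the full test matrix kicks in.
assert resolve_parameters(['docs/index.rst']) == {'lint_only': True}
assert resolve_parameters(['mmdet/apis/train.py']) == {'lint_only': False}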

+ 11 - 0
.circleci/docker/Dockerfile

@@ -0,0 +1,11 @@
+ARG PYTORCH="1.8.1"
+ARG CUDA="10.2"
+ARG CUDNN="7"
+
+FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
+
+# To fix GPG key error when running apt-get update
+RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub
+RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/7fa2af80.pub
+
+RUN apt-get update && apt-get install -y ninja-build libglib2.0-0 libsm6 libxrender-dev libxext6 libgl1-mesa-glx

+ 199 - 0
.circleci/test.yml

@@ -0,0 +1,199 @@
+version: 2.1
+
+# the default pipeline parameters, which will be updated according to
+# the results of the path-filtering orb
+parameters:
+  lint_only:
+    type: boolean
+    default: true
+
+jobs:
+  lint:
+    docker:
+      - image: cimg/python:3.7.4
+    steps:
+      - checkout
+      - run:
+          name: Install pre-commit hook
+          command: |
+            pip install pre-commit
+            pre-commit install
+      - run:
+          name: Linting
+          command: pre-commit run --all-files
+      - run:
+          name: Check docstring coverage
+          command: |
+            pip install interrogate
+            interrogate -v --ignore-init-method --ignore-module --ignore-nested-functions --ignore-magic --ignore-regex "__repr__" --fail-under 85 mmdet
+
+  build_cpu:
+    parameters:
+      # The python version must match available image tags in
+      # https://circleci.com/developer/images/image/cimg/python
+      python:
+        type: string
+      torch:
+        type: string
+      torchvision:
+        type: string
+    docker:
+      - image: cimg/python:<< parameters.python >>
+    resource_class: large
+    steps:
+      - checkout
+      - run:
+          name: Install Libraries
+          command: |
+            sudo apt-get update
+            sudo apt-get install -y ninja-build libglib2.0-0 libsm6 libxrender-dev libxext6 libgl1-mesa-glx libjpeg-dev zlib1g-dev libtinfo-dev libncurses5
+      - run:
+          name: Configure Python & pip
+          command: |
+            pip install --upgrade pip
+            pip install wheel
+      - run:
+          name: Install PyTorch
+          command: |
+            python -V
+            python -m pip install torch==<< parameters.torch >>+cpu torchvision==<< parameters.torchvision >>+cpu -f https://download.pytorch.org/whl/torch_stable.html
+      - when:
+          condition:
+            equal: ["3.9.0", << parameters.python >>]
+          steps:
+            - run: pip install "protobuf <= 3.20.1" && sudo apt-get update && sudo apt-get -y install libprotobuf-dev protobuf-compiler cmake
+      - run:
+          name: Install mmdet dependencies
+          # numpy may be downgraded after building pycocotools, which causes `ImportError: numpy.core.multiarray failed to import`
+          # force-reinstall pycocotools to ensure it is built against the current numpy
+          command: |
+            python -m pip install git+ssh://git@github.com/open-mmlab/mmengine.git@main
+            pip install -U openmim
+            mim install 'mmcv >= 2.0.0rc4'
+            pip install -r requirements/tests.txt -r requirements/optional.txt
+            pip install --force-reinstall pycocotools
+            pip install albumentations>=0.3.2 --no-binary imgaug,albumentations
+            pip install git+https://github.com/cocodataset/panopticapi.git
+      - run:
+          name: Build and install
+          command: |
+            pip install -e .
+      - run:
+          name: Run unittests
+          command: |
+            python -m coverage run --branch --source mmdet -m pytest tests/
+            python -m coverage xml
+            python -m coverage report -m
+
+  build_cuda:
+    parameters:
+      torch:
+        type: string
+      cuda:
+        type: enum
+        enum: ["10.1", "10.2", "11.1", "11.7"]
+      cudnn:
+        type: integer
+        default: 7
+    machine:
+      image: ubuntu-2004-cuda-11.4:202110-01
+      # docker_layer_caching: true
+    resource_class: gpu.nvidia.small
+    steps:
+      - checkout
+      - run:
+          # Cloning repos in the VM since Docker doesn't have access to the private key
+          name: Clone Repos
+          command: |
+            git clone -b main --depth 1 ssh://git@github.com/open-mmlab/mmengine.git /home/circleci/mmengine
+      - run:
+          name: Build Docker image
+          command: |
+            docker build .circleci/docker -t mmdetection:gpu --build-arg PYTORCH=<< parameters.torch >> --build-arg CUDA=<< parameters.cuda >> --build-arg CUDNN=<< parameters.cudnn >>
+            docker run --gpus all -t -d -v /home/circleci/project:/mmdetection -v /home/circleci/mmengine:/mmengine -w /mmdetection --name mmdetection mmdetection:gpu
+            docker exec mmdetection apt-get install -y git
+      - run:
+          name: Install mmdet dependencies
+          command: |
+            docker exec mmdetection pip install -e /mmengine
+            docker exec mmdetection pip install -U openmim
+            docker exec mmdetection mim install 'mmcv >= 2.0.0rc4'
+            docker exec mmdetection pip install -r requirements/tests.txt -r requirements/optional.txt
+            docker exec mmdetection pip install pycocotools
+            docker exec mmdetection pip install albumentations>=0.3.2 --no-binary imgaug,albumentations
+            docker exec mmdetection pip install git+https://github.com/cocodataset/panopticapi.git
+            docker exec mmdetection python -c 'import mmcv; print(mmcv.__version__)'
+      - run:
+          name: Build and install
+          command: |
+            docker exec mmdetection pip install -e .
+      - run:
+          name: Run unittests
+          command: |
+            docker exec mmdetection python -m pytest tests/
+
+workflows:
+  pr_stage_lint:
+    when: << pipeline.parameters.lint_only >>
+    jobs:
+      - lint:
+          name: lint
+          filters:
+            branches:
+              ignore:
+                - dev-3.x
+  pr_stage_test:
+    when:
+      not: << pipeline.parameters.lint_only >>
+    jobs:
+      - lint:
+          name: lint
+          filters:
+            branches:
+              ignore:
+                - dev-3.x
+      - build_cpu:
+          name: minimum_version_cpu
+          torch: 1.6.0
+          torchvision: 0.7.0
+          python: 3.7.4 # The lowest python 3.7.x version available on CircleCI images
+          requires:
+            - lint
+      - build_cpu:
+          name: maximum_version_cpu
+          torch: 2.0.0
+          torchvision: 0.15.1
+          python: 3.9.0
+          requires:
+            - minimum_version_cpu
+      - hold:
+          type: approval
+          requires:
+            - maximum_version_cpu
+      - build_cuda:
+          name: mainstream_version_gpu
+          torch: 1.8.1
+          # Use double quotation marks to explicitly specify the type
+          # as string instead of number
+          cuda: "10.2"
+          requires:
+            - hold
+      - build_cuda:
+          name: maximum_version_gpu
+          torch: 2.0.0
+          cuda: "11.7"
+          cudnn: 8
+          requires:
+            - hold
+  merge_stage_test:
+    when:
+      not: << pipeline.parameters.lint_only >>
+    jobs:
+      - build_cuda:
+          name: minimum_version_gpu
+          torch: 1.6.0
+          cuda: "10.1"
+          filters:
+            branches:
+              only:
+                - dev-3.x

+ 545 - 0
.dev_scripts/batch_test_list.py

@@ -0,0 +1,545 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+
+# missing wider_face/timm_example/strong_baselines/simple_copy_paste/
+# selfsup_pretrain/seesaw_loss/pascal_voc/openimages/lvis/ld/lad/cityscapes/deepfashion
+
+# yapf: disable
+atss = dict(
+    config='configs/atss/atss_r50_fpn_1x_coco.py',
+    checkpoint='atss_r50_fpn_1x_coco_20200209-985f7bd0.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/atss/atss_r50_fpn_1x_coco/atss_r50_fpn_1x_coco_20200209-985f7bd0.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=39.4),
+)
+autoassign = dict(
+    config='configs/autoassign/autoassign_r50-caffe_fpn_1x_coco.py',
+    checkpoint='auto_assign_r50_fpn_1x_coco_20210413_115540-5e17991f.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/autoassign/auto_assign_r50_fpn_1x_coco/auto_assign_r50_fpn_1x_coco_20210413_115540-5e17991f.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=40.4),
+)
+carafe = dict(
+    config='configs/carafe/faster-rcnn_r50_fpn-carafe_1x_coco.py',
+    checkpoint='faster_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.386_20200504_175733-385a75b7.pth',  # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/carafe/faster_rcnn_r50_fpn_carafe_1x_coco/faster_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.386_20200504_175733-385a75b7.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=38.6),
+)
+cascade_rcnn = [
+    dict(
+        config='configs/cascade_rcnn/cascade-rcnn_r50_fpn_1x_coco.py',
+        checkpoint='cascade_rcnn_r50_fpn_1x_coco_20200316-3dc56deb.pth',
+        eval='bbox',
+        url='https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco/cascade_rcnn_r50_fpn_1x_coco_20200316-3dc56deb.pth', # noqa
+        metric=dict(bbox_mAP=40.3),
+    ),
+    dict(
+        config='configs/cascade_rcnn/cascade-mask-rcnn_r50_fpn_1x_coco.py',
+        checkpoint='cascade_mask_rcnn_r50_fpn_1x_coco_20200203-9d4dcb24.pth',
+        url='https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco/cascade_mask_rcnn_r50_fpn_1x_coco_20200203-9d4dcb24.pth', # noqa
+        eval=['bbox', 'segm'],
+        metric=dict(bbox_mAP=41.2, segm_mAP=35.9),
+    ),
+]
+cascade_rpn = dict(
+    config='configs/cascade_rpn/cascade-rpn_faster-rcnn_r50-caffe_fpn_1x_coco.py', # noqa
+    checkpoint='crpn_faster_rcnn_r50_caffe_fpn_1x_coco-c8283cca.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/cascade_rpn/crpn_faster_rcnn_r50_caffe_fpn_1x_coco/crpn_faster_rcnn_r50_caffe_fpn_1x_coco-c8283cca.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=40.4),
+)
+centernet = dict(
+    config='configs/centernet/centernet_r18-dcnv2_8xb16-crop512-140e_coco.py',
+    checkpoint='centernet_resnet18_dcnv2_140e_coco_20210702_155131-c8cd631f.pth', # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/centernet/centernet_resnet18_dcnv2_140e_coco/centernet_resnet18_dcnv2_140e_coco_20210702_155131-c8cd631f.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=29.5),
+)
+centripetalnet = dict(
+    config='configs/centripetalnet/centripetalnet_hourglass104_16xb6-crop511-210e-mstest_coco.py',  # noqa
+    checkpoint='centripetalnet_hourglass104_mstest_16x6_210e_coco_20200915_204804-3ccc61e5.pth',  # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/centripetalnet/centripetalnet_hourglass104_mstest_16x6_210e_coco/centripetalnet_hourglass104_mstest_16x6_210e_coco_20200915_204804-3ccc61e5.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=44.7),
+)
+convnext = dict(
+    config='configs/convnext/cascade-mask-rcnn_convnext-s-p4-w7_fpn_4conv1fc-giou_amp-ms-crop-3x_coco.py', # noqa
+    checkpoint='cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco_20220510_201004-3d24f5a4.pth', # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/convnext/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco_20220510_201004-3d24f5a4.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=51.8, segm_mAP=44.8),
+)
+cornernet = dict(
+    config='configs/cornernet/cornernet_hourglass104_8xb6-210e-mstest_coco.py',
+    checkpoint='cornernet_hourglass104_mstest_8x6_210e_coco_20200825_150618-79b44c30.pth',  # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco/cornernet_hourglass104_mstest_8x6_210e_coco_20200825_150618-79b44c30.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=41.2),
+)
+dcn = dict(
+    config='configs/dcn/faster-rcnn_r50-dconv-c3-c5_fpn_1x_coco.py',
+    checkpoint='faster_rcnn_r50_fpn_dconv_c3-c5_1x_coco_20200130-d68aed1e.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_r50_fpn_dconv_c3-c5_1x_coco/faster_rcnn_r50_fpn_dconv_c3-c5_1x_coco_20200130-d68aed1e.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=41.3),
+)
+dcnv2 = dict(
+    config='configs/dcnv2/faster-rcnn_r50_fpn_mdpool_1x_coco.py',
+    checkpoint='faster_rcnn_r50_fpn_mdpool_1x_coco_20200307-c0df27ff.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/dcn/faster_rcnn_r50_fpn_mdpool_1x_coco/faster_rcnn_r50_fpn_mdpool_1x_coco_20200307-c0df27ff.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=38.7),
+)
+ddod = dict(
+    config='configs/ddod/ddod_r50_fpn_1x_coco.py',
+    checkpoint='ddod_r50_fpn_1x_coco_20220523_223737-29b2fc67.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/ddod/ddod_r50_fpn_1x_coco/ddod_r50_fpn_1x_coco_20220523_223737-29b2fc67.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=41.7),
+)
+deformable_detr = dict(
+    config='configs/deformable_detr/deformable-detr_r50_16xb2-50e_coco.py',
+    checkpoint='deformable_detr_r50_16x2_50e_coco_20210419_220030-a12b9512.pth',  # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/deformable_detr/deformable_detr_r50_16x2_50e_coco/deformable_detr_r50_16x2_50e_coco_20210419_220030-a12b9512.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=44.5),
+)
+detectors = dict(
+    config='configs/detectors/detectors_htc-r50_1x_coco.py',
+    checkpoint='detectors_htc_r50_1x_coco-329b1453.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/detectors/detectors_htc_r50_1x_coco/detectors_htc_r50_1x_coco-329b1453.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=49.1, segm_mAP=42.6),
+)
+detr = dict(
+    config='configs/detr/detr_r50_8xb2-150e_coco.py',
+    checkpoint='detr_r50_8x2_150e_coco_20201130_194835-2c4b8974.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/detr/detr_r50_8x2_150e_coco/detr_r50_8x2_150e_coco_20201130_194835-2c4b8974.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=40.1),
+)
+double_heads = dict(
+    config='configs/double_heads/dh-faster-rcnn_r50_fpn_1x_coco.py',
+    checkpoint='dh_faster_rcnn_r50_fpn_1x_coco_20200130-586b67df.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/double_heads/dh_faster_rcnn_r50_fpn_1x_coco/dh_faster_rcnn_r50_fpn_1x_coco_20200130-586b67df.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=40.0),
+)
+dyhead = dict(
+    config='configs/dyhead/atss_r50_fpn_dyhead_1x_coco.py',
+    checkpoint='atss_r50_fpn_dyhead_4x4_1x_coco_20211219_023314-eaa620c6.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/dyhead/atss_r50_fpn_dyhead_4x4_1x_coco/atss_r50_fpn_dyhead_4x4_1x_coco_20211219_023314-eaa620c6.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=43.3),
+)
+dynamic_rcnn = dict(
+    config='configs/dynamic_rcnn/dynamic-rcnn_r50_fpn_1x_coco.py',
+    checkpoint='dynamic_rcnn_r50_fpn_1x-62a3f276.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/dynamic_rcnn/dynamic_rcnn_r50_fpn_1x/dynamic_rcnn_r50_fpn_1x-62a3f276.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=38.9),
+)
+efficientnet = dict(
+    config='configs/efficientnet/retinanet_effb3_fpn_8xb4-crop896-1x_coco.py',
+    checkpoint='retinanet_effb3_fpn_crop896_8x4_1x_coco_20220322_234806-615a0dda.pth', # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/efficientnet/retinanet_effb3_fpn_crop896_8x4_1x_coco/retinanet_effb3_fpn_crop896_8x4_1x_coco_20220322_234806-615a0dda.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=40.5),
+)
+empirical_attention = dict(
+    config='configs/empirical_attention/faster-rcnn_r50-attn1111_fpn_1x_coco.py',  # noqa
+    checkpoint='faster_rcnn_r50_fpn_attention_1111_1x_coco_20200130-403cccba.pth',  # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/empirical_attention/faster_rcnn_r50_fpn_attention_1111_1x_coco/faster_rcnn_r50_fpn_attention_1111_1x_coco_20200130-403cccba.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=40.0),
+)
+faster_rcnn = dict(
+    config='configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py',
+    checkpoint='faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=37.4),
+)
+fcos = dict(
+    config='configs/fcos/fcos_r50-caffe_fpn_gn-head-center-normbbox-centeronreg-giou_1x_coco.py',  # noqa
+    checkpoint='fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco-0a0d75a8.pth',  # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/fcos/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco-0a0d75a8.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=38.7),
+)
+foveabox = dict(
+    config='configs/foveabox/fovea_r50_fpn_gn-head-align_4xb4-2x_coco.py',
+    checkpoint='fovea_align_r50_fpn_gn-head_4x4_2x_coco_20200203-8987880d.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/foveabox/fovea_align_r50_fpn_gn-head_4x4_2x_coco/fovea_align_r50_fpn_gn-head_4x4_2x_coco_20200203-8987880d.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=37.9),
+)
+fpg = dict(
+    config='configs/fpg/mask-rcnn_r50_fpg_crop640-50e_coco.py',
+    checkpoint='mask_rcnn_r50_fpg_crop640_50e_coco_20220311_011857-233b8334.pth', # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/fpg/mask_rcnn_r50_fpg_crop640_50e_coco/mask_rcnn_r50_fpg_crop640_50e_coco_20220311_011857-233b8334.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=43.0, segm_mAP=38.1),
+)
+free_anchor = dict(
+    config='configs/free_anchor/freeanchor_r50_fpn_1x_coco.py',
+    checkpoint='retinanet_free_anchor_r50_fpn_1x_coco_20200130-0f67375f.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco/retinanet_free_anchor_r50_fpn_1x_coco_20200130-0f67375f.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=38.7),
+)
+fsaf = dict(
+    config='configs/fsaf/fsaf_r50_fpn_1x_coco.py',
+    checkpoint='fsaf_r50_fpn_1x_coco-94ccc51f.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/fsaf/fsaf_r50_fpn_1x_coco/fsaf_r50_fpn_1x_coco-94ccc51f.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=37.4),
+)
+gcnet = dict(
+    config='configs/gcnet/mask-rcnn_r50-syncbn-gcb-r16-c3-c5_fpn_1x_coco.py',  # noqa
+    checkpoint='mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200202-587b99aa.pth',  # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/gcnet/mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco/mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200202-587b99aa.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=40.4, segm_mAP=36.2),
+)
+gfl = dict(
+    config='configs/gfl/gfl_r50_fpn_1x_coco.py',
+    checkpoint='gfl_r50_fpn_1x_coco_20200629_121244-25944287.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/gfl/gfl_r50_fpn_1x_coco/gfl_r50_fpn_1x_coco_20200629_121244-25944287.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=40.2),
+)
+ghm = dict(
+    config='configs/ghm/retinanet_r50_fpn_ghm-1x_coco.py',
+    checkpoint='retinanet_ghm_r50_fpn_1x_coco_20200130-a437fda3.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/ghm/retinanet_ghm_r50_fpn_1x_coco/retinanet_ghm_r50_fpn_1x_coco_20200130-a437fda3.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=37.0),
+)
+gn = dict(
+    config='configs/gn/mask-rcnn_r50_fpn_gn-all_2x_coco.py',
+    checkpoint='mask_rcnn_r50_fpn_gn-all_2x_coco_20200206-8eee02a6.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/gn/mask_rcnn_r50_fpn_gn-all_2x_coco/mask_rcnn_r50_fpn_gn-all_2x_coco_20200206-8eee02a6.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=40.1, segm_mAP=36.4),
+)
+gn_ws = dict(
+    config='configs/gn+ws/faster-rcnn_r50_fpn_gn-ws-all_1x_coco.py',
+    checkpoint='faster_rcnn_r50_fpn_gn_ws-all_1x_coco_20200130-613d9fe2.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/gn%2Bws/faster_rcnn_r50_fpn_gn_ws-all_1x_coco/faster_rcnn_r50_fpn_gn_ws-all_1x_coco_20200130-613d9fe2.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=39.7),
+)
+grid_rcnn = dict(
+    config='configs/grid_rcnn/grid-rcnn_r50_fpn_gn-head_2x_coco.py',
+    checkpoint='grid_rcnn_r50_fpn_gn-head_2x_coco_20200130-6cca8223.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/grid_rcnn/grid_rcnn_r50_fpn_gn-head_2x_coco/grid_rcnn_r50_fpn_gn-head_2x_coco_20200130-6cca8223.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=40.4),
+)
+groie = dict(
+    config='configs/groie/faste-rcnn_r50_fpn_groie_1x_coco.py',
+    checkpoint='faster_rcnn_r50_fpn_groie_1x_coco_20200604_211715-66ee9516.pth',  # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/groie/faster_rcnn_r50_fpn_groie_1x_coco/faster_rcnn_r50_fpn_groie_1x_coco_20200604_211715-66ee9516.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=38.3),
+)
+guided_anchoring = dict(
+    config='configs/guided_anchoring/ga-retinanet_r50-caffe_fpn_1x_coco.py',  # noqa
+    checkpoint='ga_retinanet_r50_caffe_fpn_1x_coco_20201020-39581c6f.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/guided_anchoring/ga_retinanet_r50_caffe_fpn_1x_coco/ga_retinanet_r50_caffe_fpn_1x_coco_20201020-39581c6f.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=36.9),
+)
+hrnet = dict(
+    config='configs/hrnet/faster-rcnn_hrnetv2p-w18-1x_coco.py',
+    checkpoint='faster_rcnn_hrnetv2p_w18_1x_coco_20200130-56651a6d.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/hrnet/faster_rcnn_hrnetv2p_w18_1x_coco/faster_rcnn_hrnetv2p_w18_1x_coco_20200130-56651a6d.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=36.9),
+)
+htc = dict(
+    config='configs/htc/htc_r50_fpn_1x_coco.py',
+    checkpoint='htc_r50_fpn_1x_coco_20200317-7332cf16.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/htc/htc_r50_fpn_1x_coco/htc_r50_fpn_1x_coco_20200317-7332cf16.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=42.3, segm_mAP=37.4),
+)
+instaboost = dict(
+    config='configs/instaboost/mask-rcnn_r50_fpn_instaboost-4x_coco.py',
+    checkpoint='mask_rcnn_r50_fpn_instaboost_4x_coco_20200307-d025f83a.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/instaboost/mask_rcnn_r50_fpn_instaboost_4x_coco/mask_rcnn_r50_fpn_instaboost_4x_coco_20200307-d025f83a.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=40.6, segm_mAP=36.6),
+)
+libra_rcnn = dict(
+    config='configs/libra_rcnn/libra-faster-rcnn_r50_fpn_1x_coco.py',
+    checkpoint='libra_faster_rcnn_r50_fpn_1x_coco_20200130-3afee3a9.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco/libra_faster_rcnn_r50_fpn_1x_coco_20200130-3afee3a9.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=38.3),
+)
+mask2former = dict(
+    config='configs/mask2former/mask2former_r50_8xb2-lsj-50e_coco-panoptic.py',
+    checkpoint='mask2former_r50_lsj_8x2_50e_coco-panoptic_20220326_224516-11a44721.pth', # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/mask2former/mask2former_r50_lsj_8x2_50e_coco-panoptic/mask2former_r50_lsj_8x2_50e_coco-panoptic_20220326_224516-11a44721.pth', # noqa
+    eval=['bbox', 'segm', 'PQ'],
+    metric=dict(PQ=51.9, bbox_mAP=44.8, segm_mAP=41.9),
+)
+mask_rcnn = dict(
+    config='configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py',
+    checkpoint='mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn_r50_fpn_1x_coco/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=38.2, segm_mAP=34.7),
+)
+maskformer = dict(
+    config='configs/maskformer/maskformer_r50_ms-16xb1-75e_coco.py',
+    checkpoint='maskformer_r50_mstrain_16x1_75e_coco_20220221_141956-bc2699cb.pth', # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/maskformer/maskformer_r50_mstrain_16x1_75e_coco/maskformer_r50_mstrain_16x1_75e_coco_20220221_141956-bc2699cb.pth', # noqa
+    eval='PQ',
+    metric=dict(PQ=46.9),
+)
+ms_rcnn = dict(
+    config='configs/ms_rcnn/ms-rcnn_r50-caffe_fpn_1x_coco.py',
+    checkpoint='ms_rcnn_r50_caffe_fpn_1x_coco_20200702_180848-61c9355e.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/ms_rcnn/ms_rcnn_r50_caffe_fpn_1x_coco/ms_rcnn_r50_caffe_fpn_1x_coco_20200702_180848-61c9355e.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=38.2, segm_mAP=36.0),
+)
+nas_fcos = dict(
+    config='configs/nas_fcos/nas-fcos_r50-caffe_fpn_nashead-gn-head_4xb4-1x_coco.py',  # noqa
+    checkpoint='nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200520-1bdba3ce.pth',  # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200520-1bdba3ce.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=39.4),
+)
+nas_fpn = dict(
+    config='configs/nas_fpn/retinanet_r50_nasfpn_crop640-50e_coco.py',
+    checkpoint='retinanet_r50_nasfpn_crop640_50e_coco-0ad1f644.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/nas_fpn/retinanet_r50_nasfpn_crop640_50e_coco/retinanet_r50_nasfpn_crop640_50e_coco-0ad1f644.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=40.5),
+)
+paa = dict(
+    config='configs/paa/paa_r50_fpn_1x_coco.py',
+    checkpoint='paa_r50_fpn_1x_coco_20200821-936edec3.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/paa/paa_r50_fpn_1x_coco/paa_r50_fpn_1x_coco_20200821-936edec3.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=40.4),
+)
+pafpn = dict(
+    config='configs/pafpn/faster-rcnn_r50_pafpn_1x_coco.py',
+    checkpoint='faster_rcnn_r50_pafpn_1x_coco_bbox_mAP-0.375_20200503_105836-b7b4b9bd.pth',  # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/pafpn/faster_rcnn_r50_pafpn_1x_coco/faster_rcnn_r50_pafpn_1x_coco_bbox_mAP-0.375_20200503_105836-b7b4b9bd.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=37.5),
+)
+panoptic_fpn = dict(
+    config='configs/panoptic_fpn/panoptic-fpn_r50_fpn_1x_coco.py',
+    checkpoint='panoptic_fpn_r50_fpn_1x_coco_20210821_101153-9668fd13.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/panoptic_fpn/panoptic_fpn_r50_fpn_1x_coco/panoptic_fpn_r50_fpn_1x_coco_20210821_101153-9668fd13.pth', # noqa
+    eval='PQ',
+    metric=dict(PQ=40.2),
+)
+pisa = dict(
+    config='configs/pisa/faster-rcnn_r50_fpn_pisa_1x_coco.py',
+    checkpoint='pisa_faster_rcnn_r50_fpn_1x_coco-dea93523.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/pisa/pisa_faster_rcnn_r50_fpn_1x_coco/pisa_faster_rcnn_r50_fpn_1x_coco-dea93523.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=38.4),
+)
+point_rend = dict(
+    config='configs/point_rend/point-rend_r50-caffe_fpn_ms-1x_coco.py',
+    checkpoint='point_rend_r50_caffe_fpn_mstrain_1x_coco-1bcb5fb4.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco/point_rend_r50_caffe_fpn_mstrain_1x_coco-1bcb5fb4.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=38.4, segm_mAP=36.3),
+)
+pvt = dict(
+    config='configs/pvt/retinanet_pvt-s_fpn_1x_coco.py',
+    checkpoint='retinanet_pvt-s_fpn_1x_coco_20210906_142921-b6c94a5b.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/pvt/retinanet_pvt-s_fpn_1x_coco/retinanet_pvt-s_fpn_1x_coco_20210906_142921-b6c94a5b.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=40.4),
+)
+queryinst = dict(
+    config='configs/queryinst/queryinst_r50_fpn_1x_coco.py',
+    checkpoint='queryinst_r50_fpn_1x_coco_20210907_084916-5a8f1998.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/queryinst/queryinst_r50_fpn_1x_coco/queryinst_r50_fpn_1x_coco_20210907_084916-5a8f1998.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=42.0, segm_mAP=37.5),
+)
+regnet = dict(
+    config='configs/regnet/mask-rcnn_regnetx-3.2GF_fpn_1x_coco.py',
+    checkpoint='mask_rcnn_regnetx-3.2GF_fpn_1x_coco_20200520_163141-2a9d1814.pth',  # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/regnet/mask_rcnn_regnetx-3.2GF_fpn_1x_coco/mask_rcnn_regnetx-3.2GF_fpn_1x_coco_20200520_163141-2a9d1814.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=40.4, segm_mAP=36.7),
+)
+reppoints = dict(
+    config='configs/reppoints/reppoints-moment_r50_fpn_1x_coco.py',
+    checkpoint='reppoints_moment_r50_fpn_1x_coco_20200330-b73db8d1.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/reppoints/reppoints_moment_r50_fpn_1x_coco/reppoints_moment_r50_fpn_1x_coco_20200330-b73db8d1.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=37.0),
+)
+res2net = dict(
+    config='configs/res2net/faster-rcnn_res2net-101_fpn_2x_coco.py',
+    checkpoint='faster_rcnn_r2_101_fpn_2x_coco-175f1da6.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/res2net/faster_rcnn_r2_101_fpn_2x_coco/faster_rcnn_r2_101_fpn_2x_coco-175f1da6.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=43.0),
+)
+resnest = dict(
+    config='configs/resnest/faster-rcnn_s50_fpn_syncbn-backbone+head_ms-range-1x_coco.py',  # noqa
+    checkpoint='faster_rcnn_s50_fpn_syncbn-backbone+head_mstrain-range_1x_coco_20200926_125502-20289c16.pth',  # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/resnest/faster_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco/faster_rcnn_s50_fpn_syncbn-backbone%2Bhead_mstrain-range_1x_coco_20200926_125502-20289c16.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=42.0),
+)
+resnet_strikes_back = dict(
+    config='configs/resnet_strikes_back/mask-rcnn_r50-rsb-pre_fpn_1x_coco.py', # noqa
+    checkpoint='mask_rcnn_r50_fpn_rsb-pretrain_1x_coco_20220113_174054-06ce8ba0.pth', # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/resnet_strikes_back/mask_rcnn_r50_fpn_rsb-pretrain_1x_coco/mask_rcnn_r50_fpn_rsb-pretrain_1x_coco_20220113_174054-06ce8ba0.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=41.2, segm_mAP=38.2),
+)
+retinanet = dict(
+    config='configs/retinanet/retinanet_r50_fpn_1x_coco.py',
+    checkpoint='retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/retinanet/retinanet_r50_fpn_1x_coco/retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=36.5),
+)
+rpn = dict(
+    config='configs/rpn/rpn_r50_fpn_1x_coco.py',
+    checkpoint='rpn_r50_fpn_1x_coco_20200218-5525fa2e.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/rpn/rpn_r50_fpn_1x_coco/rpn_r50_fpn_1x_coco_20200218-5525fa2e.pth', # noqa
+    eval='proposal_fast',
+    metric=dict(AR_1000=58.2),
+)
+sabl = [
+    dict(
+        config='configs/sabl/sabl-retinanet_r50_fpn_1x_coco.py',
+        checkpoint='sabl_retinanet_r50_fpn_1x_coco-6c54fd4f.pth',
+        url='https://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_retinanet_r50_fpn_1x_coco/sabl_retinanet_r50_fpn_1x_coco-6c54fd4f.pth', # noqa
+        eval='bbox',
+        metric=dict(bbox_mAP=37.7),
+    ),
+    dict(
+        config='configs/sabl/sabl-faster-rcnn_r50_fpn_1x_coco.py',
+        checkpoint='sabl_faster_rcnn_r50_fpn_1x_coco-e867595b.pth',
+        url='https://download.openmmlab.com/mmdetection/v2.0/sabl/sabl_faster_rcnn_r50_fpn_1x_coco/sabl_faster_rcnn_r50_fpn_1x_coco-e867595b.pth', # noqa
+        eval='bbox',
+        metric=dict(bbox_mAP=39.9),
+    ),
+]
+scnet = dict(
+    config='configs/scnet/scnet_r50_fpn_1x_coco.py',
+    checkpoint='scnet_r50_fpn_1x_coco-c3f09857.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/scnet/scnet_r50_fpn_1x_coco/scnet_r50_fpn_1x_coco-c3f09857.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=43.5),
+)
+scratch = dict(
+    config='configs/scratch/mask-rcnn_r50-scratch_fpn_gn-all_6x_coco.py',
+    checkpoint='scratch_mask_rcnn_r50_fpn_gn_6x_bbox_mAP-0.412__segm_mAP-0.374_20200201_193051-1e190a40.pth', # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/scratch/mask_rcnn_r50_fpn_gn-all_scratch_6x_coco/scratch_mask_rcnn_r50_fpn_gn_6x_bbox_mAP-0.412__segm_mAP-0.374_20200201_193051-1e190a40.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=41.2, segm_mAP=37.4),
+)
+solo = dict(
+    config='configs/solo/decoupled-solo_r50_fpn_1x_coco.py',
+    checkpoint='decoupled_solo_r50_fpn_1x_coco_20210820_233348-6337c589.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/solo/decoupled_solo_r50_fpn_1x_coco/decoupled_solo_r50_fpn_1x_coco_20210820_233348-6337c589.pth', # noqa
+    eval='segm',
+    metric=dict(segm_mAP=33.9),
+)
+solov2 = dict(
+    config='configs/solov2/solov2_r50_fpn_1x_coco.py',
+    checkpoint='solov2_r50_fpn_1x_coco_20220512_125858-a357fa23.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/solov2/solov2_r50_fpn_1x_coco/solov2_r50_fpn_1x_coco_20220512_125858-a357fa23.pth', # noqa
+    eval='segm',
+    metric=dict(segm_mAP=34.8),
+)
+sparse_rcnn = dict(
+    config='configs/sparse_rcnn/sparse-rcnn_r50_fpn_1x_coco.py',
+    checkpoint='sparse_rcnn_r50_fpn_1x_coco_20201222_214453-dc79b137.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/sparse_rcnn/sparse_rcnn_r50_fpn_1x_coco/sparse_rcnn_r50_fpn_1x_coco_20201222_214453-dc79b137.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=37.9),
+)
+ssd = [
+    dict(
+        config='configs/ssd/ssd300_coco.py',
+        checkpoint='ssd300_coco_20210803_015428-d231a06e.pth',
+        url='https://download.openmmlab.com/mmdetection/v2.0/ssd/ssd300_coco/ssd300_coco_20210803_015428-d231a06e.pth', # noqa
+        eval='bbox',
+        metric=dict(bbox_mAP=25.5),
+    ),
+    dict(
+        config='configs/ssd/ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py',
+        checkpoint='ssdlite_mobilenetv2_scratch_600e_coco_20210629_110627-974d9307.pth',  # noqa
+        url='https://download.openmmlab.com/mmdetection/v2.0/ssd/ssdlite_mobilenetv2_scratch_600e_coco/ssdlite_mobilenetv2_scratch_600e_coco_20210629_110627-974d9307.pth', # noqa
+        eval='bbox',
+        metric=dict(bbox_mAP=21.3),
+    ),
+]
+swin = dict(
+    config='configs/swin/mask-rcnn_swin-t-p4-w7_fpn_1x_coco.py',
+    checkpoint='mask_rcnn_swin-t-p4-w7_fpn_1x_coco_20210902_120937-9d6b7cfa.pth', # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/swin/mask_rcnn_swin-t-p4-w7_fpn_1x_coco/mask_rcnn_swin-t-p4-w7_fpn_1x_coco_20210902_120937-9d6b7cfa.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=42.7, segm_mAP=39.3),
+)
+tood = dict(
+    config='configs/tood/tood_r50_fpn_1x_coco.py',
+    checkpoint='tood_r50_fpn_1x_coco_20211210_103425-20e20746.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/tood/tood_r50_fpn_1x_coco/tood_r50_fpn_1x_coco_20211210_103425-20e20746.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=42.4),
+)
+tridentnet = dict(
+    config='configs/tridentnet/tridentnet_r50-caffe_1x_coco.py',
+    checkpoint='tridentnet_r50_caffe_1x_coco_20201230_141838-2ec0b530.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/tridentnet/tridentnet_r50_caffe_1x_coco/tridentnet_r50_caffe_1x_coco_20201230_141838-2ec0b530.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=37.6),
+)
+vfnet = dict(
+    config='configs/vfnet/vfnet_r50_fpn_1x_coco.py',
+    checkpoint='vfnet_r50_fpn_1x_coco_20201027-38db6f58.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/vfnet/vfnet_r50_fpn_1x_coco/vfnet_r50_fpn_1x_coco_20201027-38db6f58.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=41.6),
+)
+yolact = dict(
+    config='configs/yolact/yolact_r50_1xb8-55e_coco.py',
+    checkpoint='yolact_r50_1x8_coco_20200908-f38d58df.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/yolact/yolact_r50_1x8_coco/yolact_r50_1x8_coco_20200908-f38d58df.pth', # noqa
+    eval=['bbox', 'segm'],
+    metric=dict(bbox_mAP=31.2, segm_mAP=29.0),
+)
+yolo = dict(
+    config='configs/yolo/yolov3_d53_8xb8-320-273e_coco.py',
+    checkpoint='yolov3_d53_320_273e_coco-421362b6.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/yolo/yolov3_d53_320_273e_coco/yolov3_d53_320_273e_coco-421362b6.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=27.9),
+)
+yolof = dict(
+    config='configs/yolof/yolof_r50-c5_8xb8-1x_coco.py',
+    checkpoint='yolof_r50_c5_8x8_1x_coco_20210425_024427-8e864411.pth',
+    url='https://download.openmmlab.com/mmdetection/v2.0/yolof/yolof_r50_c5_8x8_1x_coco/yolof_r50_c5_8x8_1x_coco_20210425_024427-8e864411.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=37.5),
+)
+yolox = dict(
+    config='configs/yolox/yolox_tiny_8xb8-300e_coco.py',
+    checkpoint='yolox_tiny_8x8_300e_coco_20211124_171234-b4047906.pth',  # noqa
+    url='https://download.openmmlab.com/mmdetection/v2.0/yolox/yolox_tiny_8x8_300e_coco/yolox_tiny_8x8_300e_coco_20211124_171234-b4047906.pth', # noqa
+    eval='bbox',
+    metric=dict(bbox_mAP=31.8),
+)
+# yapf: enable
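
For context, not part of the committed file: every entry above shares one schema — config, checkpoint, url, eval (a metric name or list of names), and metric (the expected mAP values) — and a few families (cascade_rcnn, sabl, ssd) are lists of such dicts. A minimal sketch of a consumer that flattens and walks the module (the helper names and tolerance are hypothetical):

import importlib

def iter_model_infos(module_name='batch_test_list'):
    """Yield (family, info) pairs, flattening list-valued entries."""
    mod = importlib.import_module(module_name)
    for family, value in vars(mod).items():
        entries = value if isinstance(value, list) else [value]
        for entry in entries:
            if isinstance(entry, dict) and 'config' in entry:
                yield family, entry

def metrics_match(expected, actual, tol=0.2):
    """Check that every recorded mAP is reproduced within `tol`."""
    return all(abs(actual.get(key, float('-inf')) - val) <= tol
               for key, val in expected.items())

for family, info in iter_model_infos():
    print(family, info['config'], '->', info['metric'])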

+ 80 - 0
.dev_scripts/batch_train_list.txt

@@ -0,0 +1,80 @@
+configs/albu_example/mask-rcnn_r50_fpn_albu_1x_coco.py
+configs/atss/atss_r50_fpn_1x_coco.py
+configs/autoassign/autoassign_r50-caffe_fpn_1x_coco.py
+configs/carafe/faster-rcnn_r50_fpn-carafe_1x_coco.py
+configs/cascade_rcnn/cascade-rcnn_r50_fpn_1x_coco.py
+configs/cascade_rcnn/cascade-mask-rcnn_r50_fpn_1x_coco.py
+configs/cascade_rpn/cascade-rpn_faster-rcnn_r50-caffe_fpn_1x_coco.py
+configs/centernet/centernet_r18-dcnv2_8xb16-crop512-140e_coco.py
+configs/centernet/centernet-update_r50-caffe_fpn_ms-1x_coco.py
+configs/centripetalnet/centripetalnet_hourglass104_16xb6-crop511-210e-mstest_coco.py
+configs/cornernet/cornernet_hourglass104_8xb6-210e-mstest_coco.py
+configs/convnext/mask-rcnn_convnext-t-p4-w7_fpn_amp-ms-crop-3x_coco.py
+configs/dcn/faster-rcnn_r50-dconv-c3-c5_fpn_1x_coco.py
+configs/dcnv2/faster-rcnn_r50_fpn_mdpool_1x_coco.py
+configs/ddod/ddod_r50_fpn_1x_coco.py
+configs/detectors/detectors_htc-r50_1x_coco.py
+configs/deformable_detr/deformable-detr_r50_16xb2-50e_coco.py
+configs/detr/detr_r50_8xb2-150e_coco.py
+configs/double_heads/dh-faster-rcnn_r50_fpn_1x_coco.py
+configs/dynamic_rcnn/dynamic-rcnn_r50_fpn_1x_coco.py
+configs/dyhead/atss_r50_fpn_dyhead_1x_coco.py
+configs/efficientnet/retinanet_effb3_fpn_8xb4-crop896-1x_coco.py
+configs/empirical_attention/faster-rcnn_r50-attn1111_fpn_1x_coco.py
+configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py
+configs/faster_rcnn/faster-rcnn_r50-caffe-dc5_ms-1x_coco.py
+configs/fcos/fcos_r50-caffe_fpn_gn-head-center-normbbox-centeronreg-giou_1x_coco.py
+configs/foveabox/fovea_r50_fpn_gn-head-align_4xb4-2x_coco.py
+configs/fpg/mask-rcnn_r50_fpg_crop640-50e_coco.py
+configs/free_anchor/freeanchor_r50_fpn_1x_coco.py
+configs/fsaf/fsaf_r50_fpn_1x_coco.py
+configs/gcnet/mask-rcnn_r50-syncbn-gcb-r16-c3-c5_fpn_1x_coco.py
+configs/gfl/gfl_r50_fpn_1x_coco.py
+configs/ghm/retinanet_r50_fpn_ghm-1x_coco.py
+configs/gn/mask-rcnn_r50_fpn_gn-all_2x_coco.py
+configs/gn+ws/faster-rcnn_r50_fpn_gn-ws-all_1x_coco.py
+configs/grid_rcnn/grid-rcnn_r50_fpn_gn-head_2x_coco.py
+configs/groie/faste-rcnn_r50_fpn_groie_1x_coco.py
+configs/guided_anchoring/ga-retinanet_r50-caffe_fpn_1x_coco.py
+configs/hrnet/faster-rcnn_hrnetv2p-w18-1x_coco.py
+configs/htc/htc_r50_fpn_1x_coco.py
+configs/instaboost/mask-rcnn_r50_fpn_instaboost-4x_coco.py
+configs/lad/lad_r50-paa-r101_fpn_2xb8_coco_1x.py
+configs/ld/ld_r18-gflv1-r101_fpn_1x_coco.py
+configs/libra_rcnn/libra-faster-rcnn_r50_fpn_1x_coco.py
+configs/mask2former/mask2former_r50_8xb2-lsj-50e_coco-panoptic.py
+configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py
+configs/maskformer/maskformer_r50_ms-16xb1-75e_coco.py
+configs/ms_rcnn/ms-rcnn_r50-caffe_fpn_1x_coco.py
+configs/nas_fcos/nas-fcos_r50-caffe_fpn_nashead-gn-head_4xb4-1x_coco.py
+configs/nas_fpn/retinanet_r50_nasfpn_crop640-50e_coco.py
+configs/paa/paa_r50_fpn_1x_coco.py
+configs/pafpn/faster-rcnn_r50_pafpn_1x_coco.py
+configs/panoptic_fpn/panoptic-fpn_r50_fpn_1x_coco.py
+configs/pisa/mask-rcnn_r50_fpn_pisa_1x_coco.py
+configs/point_rend/point-rend_r50-caffe_fpn_ms-1x_coco.py
+configs/pvt/retinanet_pvt-t_fpn_1x_coco.py
+configs/queryinst/queryinst_r50_fpn_1x_coco.py
+configs/regnet/retinanet_regnetx-800MF_fpn_1x_coco.py
+configs/reppoints/reppoints-moment_r50_fpn_1x_coco.py
+configs/res2net/faster-rcnn_res2net-101_fpn_2x_coco.py
+configs/resnest/faster-rcnn_s50_fpn_syncbn-backbone+head_ms-range-1x_coco.py
+configs/resnet_strikes_back/retinanet_r50-rsb-pre_fpn_1x_coco.py
+configs/retinanet/retinanet_r50-caffe_fpn_1x_coco.py
+configs/rpn/rpn_r50_fpn_1x_coco.py
+configs/sabl/sabl-retinanet_r50_fpn_1x_coco.py
+configs/scnet/scnet_r50_fpn_1x_coco.py
+configs/scratch/faster-rcnn_r50-scratch_fpn_gn-all_6x_coco.py
+configs/solo/solo_r50_fpn_1x_coco.py
+configs/solov2/solov2_r50_fpn_1x_coco.py
+configs/sparse_rcnn/sparse-rcnn_r50_fpn_1x_coco.py
+configs/ssd/ssd300_coco.py
+configs/ssd/ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py
+configs/swin/mask-rcnn_swin-t-p4-w7_fpn_1x_coco.py
+configs/tood/tood_r50_fpn_1x_coco.py
+configs/tridentnet/tridentnet_r50-caffe_1x_coco.py
+configs/vfnet/vfnet_r50_fpn_1x_coco.py
+configs/yolact/yolact_r50_8xb8-55e_coco.py
+configs/yolo/yolov3_d53_8xb8-320-273e_coco.py
+configs/yolof/yolof_r50-c5_8xb8-1x_coco.py
+configs/yolox/yolox_tiny_8xb8-300e_coco.py

+ 167 - 0
.dev_scripts/benchmark_filter.py

@@ -0,0 +1,167 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+import argparse
+import os
+import os.path as osp
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(description='Filter configs to train')
+    parser.add_argument(
+        '--basic-arch',
+        action='store_true',
+        help='to train models in basic arch')
+    parser.add_argument(
+        '--datasets', action='store_true', help='to train models on extra datasets')
+    parser.add_argument(
+        '--data-pipeline',
+        action='store_true',
+        help='to train models related to data pipeline, e.g. augmentations')
+    parser.add_argument(
+        '--nn-module',
+        action='store_true',
+        help='to train models related to neural network modules')
+    parser.add_argument(
+        '--model-options',
+        nargs='+',
+        help='custom options for specific model benchmarks')
+    parser.add_argument(
+        '--out',
+        type=str,
+        default='batch_train_list.txt',
+        help='output path to store the filtered config list')
+    args = parser.parse_args()
+    return args
+
+
+basic_arch_root = [
+    'atss', 'autoassign', 'cascade_rcnn', 'cascade_rpn', 'centripetalnet',
+    'cornernet', 'detectors', 'deformable_detr', 'detr', 'double_heads',
+    'dynamic_rcnn', 'faster_rcnn', 'fcos', 'foveabox', 'fp16', 'free_anchor',
+    'fsaf', 'gfl', 'ghm', 'grid_rcnn', 'guided_anchoring', 'htc', 'ld',
+    'libra_rcnn', 'mask_rcnn', 'ms_rcnn', 'nas_fcos', 'paa', 'pisa',
+    'point_rend', 'reppoints', 'retinanet', 'rpn', 'sabl', 'ssd', 'tridentnet',
+    'vfnet', 'yolact', 'yolo', 'sparse_rcnn', 'scnet', 'yolof', 'centernet'
+]
+
+datasets_root = [
+    'wider_face', 'pascal_voc', 'cityscapes', 'lvis', 'deepfashion'
+]
+
+data_pipeline_root = ['albu_example', 'instaboost']
+
+nn_module_root = [
+    'carafe', 'dcn', 'empirical_attention', 'gcnet', 'gn', 'gn+ws', 'hrnet',
+    'pafpn', 'nas_fpn', 'regnet', 'resnest', 'res2net', 'groie'
+]
+
+benchmark_pool = [
+    'configs/albu_example/mask_rcnn_r50_fpn_albu_1x_coco.py',
+    'configs/atss/atss_r50_fpn_1x_coco.py',
+    'configs/autoassign/autoassign_r50_fpn_8x2_1x_coco.py',
+    'configs/carafe/mask_rcnn_r50_fpn_carafe_1x_coco.py',
+    'configs/cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco.py',
+    'configs/cascade_rpn/crpn_faster_rcnn_r50_caffe_fpn_1x_coco.py',
+    'configs/centernet/centernet_resnet18_dcnv2_140e_coco.py',
+    'configs/centripetalnet/'
+    'centripetalnet_hourglass104_mstest_16x6_210e_coco.py',
+    'configs/cityscapes/mask_rcnn_r50_fpn_1x_cityscapes.py',
+    'configs/cornernet/'
+    'cornernet_hourglass104_mstest_8x6_210e_coco.py',
+    'configs/dcn/mask_rcnn_r50_fpn_mdconv_c3-c5_1x_coco.py',
+    'configs/dcn/faster_rcnn_r50_fpn_dpool_1x_coco.py',
+    'configs/dcn/faster_rcnn_r50_fpn_mdpool_1x_coco.py',
+    'configs/dcn/mask_rcnn_r50_fpn_dconv_c3-c5_1x_coco.py',
+    'configs/deformable_detr/deformable_detr_r50_16x2_50e_coco.py',
+    'configs/detectors/detectors_htc_r50_1x_coco.py',
+    'configs/detr/detr_r50_8x2_150e_coco.py',
+    'configs/double_heads/dh_faster_rcnn_r50_fpn_1x_coco.py',
+    'configs/dynamic_rcnn/dynamic_rcnn_r50_fpn_1x_coco.py',
+    'configs/empirical_attention/faster_rcnn_r50_fpn_attention_1111_dcn_1x_coco.py',  # noqa
+    'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py',
+    'configs/faster_rcnn/faster_rcnn_r50_fpn_ohem_1x_coco.py',
+    'configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_1x_coco.py',
+    'configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco.py',
+    'configs/faster_rcnn/faster_rcnn_r50_caffe_dc5_mstrain_1x_coco.py',
+    'configs/fcos/fcos_center_r50_caffe_fpn_gn-head_4x4_1x_coco.py',
+    'configs/foveabox/fovea_align_r50_fpn_gn-head_4x4_2x_coco.py',
+    'configs/retinanet/retinanet_r50_fpn_fp16_1x_coco.py',
+    'configs/mask_rcnn/mask_rcnn_r50_fpn_fp16_1x_coco.py',
+    'configs/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco.py',
+    'configs/fsaf/fsaf_r50_fpn_1x_coco.py',
+    'configs/gcnet/mask_rcnn_r50_fpn_r4_gcb_c3-c5_1x_coco.py',
+    'configs/gfl/gfl_r50_fpn_1x_coco.py',
+    'configs/ghm/retinanet_ghm_r50_fpn_1x_coco.py',
+    'configs/gn/mask_rcnn_r50_fpn_gn-all_2x_coco.py',
+    'configs/gn+ws/mask_rcnn_r50_fpn_gn_ws-all_2x_coco.py',
+    'configs/grid_rcnn/grid_rcnn_r50_fpn_gn-head_2x_coco.py',
+    'configs/groie/faster_rcnn_r50_fpn_groie_1x_coco.py',
+    'configs/guided_anchoring/ga_faster_r50_caffe_fpn_1x_coco.py',
+    'configs/hrnet/mask_rcnn_hrnetv2p_w18_1x_coco.py',
+    'configs/htc/htc_r50_fpn_1x_coco.py',
+    'configs/instaboost/mask_rcnn_r50_fpn_instaboost_4x_coco.py',
+    'configs/ld/ld_r18_gflv1_r101_fpn_coco_1x.py',
+    'configs/libra_rcnn/libra_faster_rcnn_r50_fpn_1x_coco.py',
+    'configs/lvis/mask_rcnn_r50_fpn_sample1e-3_mstrain_1x_lvis_v1.py',
+    'configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py',
+    'configs/ms_rcnn/ms_rcnn_r50_caffe_fpn_1x_coco.py',
+    'configs/nas_fcos/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco.py',
+    'configs/nas_fpn/retinanet_r50_nasfpn_crop640_50e_coco.py',
+    'configs/paa/paa_r50_fpn_1x_coco.py',
+    'configs/pafpn/faster_rcnn_r50_pafpn_1x_coco.py',
+    'configs/pisa/pisa_mask_rcnn_r50_fpn_1x_coco.py',
+    'configs/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco.py',
+    'configs/regnet/mask_rcnn_regnetx-3.2GF_fpn_1x_coco.py',
+    'configs/reppoints/reppoints_moment_r50_fpn_gn-neck+head_1x_coco.py',
+    'configs/res2net/faster_rcnn_r2_101_fpn_2x_coco.py',
+    'configs/resnest/'
+    'mask_rcnn_s50_fpn_syncbn-backbone+head_mstrain_1x_coco.py',
+    'configs/retinanet/retinanet_r50_caffe_fpn_1x_coco.py',
+    'configs/rpn/rpn_r50_fpn_1x_coco.py',
+    'configs/sabl/sabl_retinanet_r50_fpn_1x_coco.py',
+    'configs/ssd/ssd300_coco.py',
+    'configs/tridentnet/tridentnet_r50_caffe_1x_coco.py',
+    'configs/vfnet/vfnet_r50_fpn_1x_coco.py',
+    'configs/yolact/yolact_r50_1x8_coco.py',
+    'configs/yolo/yolov3_d53_320_273e_coco.py',
+    'configs/sparse_rcnn/sparse_rcnn_r50_fpn_1x_coco.py',
+    'configs/scnet/scnet_r50_fpn_1x_coco.py',
+    'configs/yolof/yolof_r50_c5_8x8_1x_coco.py',
+]
+
+
+def main():
+    args = parse_args()
+
+    benchmark_type = []
+    if args.basic_arch:
+        benchmark_type += basic_arch_root
+    if args.datasets:
+        benchmark_type += datasets_root
+    if args.data_pipeline:
+        benchmark_type += data_pipeline_root
+    if args.nn_module:
+        benchmark_type += nn_module_root
+
+    special_model = args.model_options
+    if special_model is not None:
+        benchmark_type += special_model
+
+    config_dpath = 'configs/'
+    benchmark_configs = []
+    for cfg_root in benchmark_type:
+        cfg_dir = osp.join(config_dpath, cfg_root)
+        configs = os.scandir(cfg_dir)
+        for cfg in configs:
+            config_path = osp.join(cfg_dir, cfg.name)
+            if (config_path in benchmark_pool
+                    and config_path not in benchmark_configs):
+                benchmark_configs.append(config_path)
+
+    print(f'Found {len(benchmark_configs)} configs to benchmark in total')
+    with open(args.out, 'w') as f:
+        for config in benchmark_configs:
+            f.write(config + '\n')
+
+
+if __name__ == '__main__':
+    main()

+ 91 - 0
.dev_scripts/benchmark_full_models.txt

@@ -0,0 +1,91 @@
+albu_example/mask-rcnn_r50_fpn_albu_1x_coco.py
+atss/atss_r50_fpn_1x_coco.py
+autoassign/autoassign_r50-caffe_fpn_1x_coco.py
+boxinst/boxinst_r50_fpn_ms-90k_coco.py
+carafe/faster-rcnn_r50_fpn-carafe_1x_coco.py
+cascade_rcnn/cascade-rcnn_r50_fpn_1x_coco.py
+cascade_rcnn/cascade-mask-rcnn_r50_fpn_1x_coco.py
+cascade_rpn/cascade-rpn_faster-rcnn_r50-caffe_fpn_1x_coco.py
+centernet/centernet-update_r50-caffe_fpn_ms-1x_coco.py
+centripetalnet/centripetalnet_hourglass104_16xb6-crop511-210e-mstest_coco.py
+condinst/condinst_r50_fpn_ms-poly-90k_coco_instance.py
+conditional_detr/conditional-detr_r50_8xb2-50e_coco.py
+convnext/mask-rcnn_convnext-t-p4-w7_fpn_amp-ms-crop-3x_coco.py
+cornernet/cornernet_hourglass104_8xb6-210e-mstest_coco.py
+dab_detr/dab-detr_r50_8xb2-50e_coco.py
+dcn/mask-rcnn_r50-dconv-c3-c5_fpn_1x_coco.py
+dcnv2/faster-rcnn_r50_fpn_mdpool_1x_coco.py
+ddod/ddod_r50_fpn_1x_coco.py
+deformable_detr/deformable-detr_r50_16xb2-50e_coco.py
+detectors/detectors_htc-r50_1x_coco.py
+detr/detr_r50_8xb2-150e_coco.py
+dino/dino-4scale_r50_8xb2-12e_coco.py
+double_heads/dh-faster-rcnn_r50_fpn_1x_coco.py
+dyhead/atss_r50_fpn_dyhead_1x_coco.py
+dynamic_rcnn/dynamic-rcnn_r50_fpn_1x_coco.py
+efficientnet/retinanet_effb3_fpn_8xb4-crop896-1x_coco.py
+empirical_attention/faster-rcnn_r50-attn0010-dcn_fpn_1x_coco.py
+faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py
+fcos/fcos_r50-caffe_fpn_gn-head-center-normbbox-centeronreg-giou_1x_coco.py
+foveabox/fovea_r50_fpn_gn-head-align_4xb4-2x_coco.py
+fpg/retinanet_r50_fpg_crop640_50e_coco.py
+free_anchor/freeanchor_r50_fpn_1x_coco.py
+fsaf/fsaf_r50_fpn_1x_coco.py
+gcnet/mask-rcnn_r50-gcb-r4-c3-c5_fpn_1x_coco.py
+gfl/gfl_r50_fpn_1x_coco.py
+ghm/retinanet_r50_fpn_ghm-1x_coco.py
+gn/mask-rcnn_r50_fpn_gn-all_2x_coco.py
+gn+ws/faster-rcnn_r50_fpn_gn-ws-all_1x_coco.py
+grid_rcnn/grid-rcnn_r50_fpn_gn-head_1x_coco.py
+groie/faste-rcnn_r50_fpn_groie_1x_coco.py
+guided_anchoring/ga-faster-rcnn_r50_fpn_1x_coco.py
+hrnet/htc_hrnetv2p-w18_20e_coco.py
+htc/htc_r50_fpn_1x_coco.py
+instaboost/mask-rcnn_r50_fpn_instaboost-4x_coco.py
+lad/lad_r50-paa-r101_fpn_2xb8_coco_1x.py
+ld/ld_r18-gflv1-r101_fpn_1x_coco.py
+libra_rcnn/libra-faster-rcnn_r50_fpn_1x_coco.py
+lvis/mask-rcnn_r50_fpn_sample1e-3_ms-1x_lvis-v1.py
+mask2former/mask2former_r50_8xb2-lsj-50e_coco.py
+mask2former/mask2former_r50_8xb2-lsj-50e_coco-panoptic.py
+mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py
+maskformer/maskformer_r50_ms-16xb1-75e_coco.py
+ms_rcnn/ms-rcnn_r50_fpn_1x_coco.py
+nas_fcos/nas-fcos_r50-caffe_fpn_nashead-gn-head_4xb4-1x_coco.py
+nas_fpn/retinanet_r50_nasfpn_crop640-50e_coco.py
+paa/paa_r50_fpn_1x_coco.py
+pafpn/faster-rcnn_r50_pafpn_1x_coco.py
+panoptic_fpn/panoptic-fpn_r50_fpn_1x_coco.py
+pisa/faster-rcnn_r50_fpn_pisa_1x_coco.py
+point_rend/point-rend_r50-caffe_fpn_ms-1x_coco.py
+pvt/retinanet_pvtv2-b0_fpn_1x_coco.py
+queryinst/queryinst_r50_fpn_1x_coco.py
+regnet/mask-rcnn_regnetx-3.2GF_fpn_1x_coco.py
+reppoints/reppoints-moment_r50_fpn-gn_head-gn_1x_coco.py
+res2net/faster-rcnn_res2net-101_fpn_2x_coco.py
+resnest/mask-rcnn_s50_fpn_syncbn-backbone+head_ms-1x_coco.py
+resnet_strikes_back/faster-rcnn_r50-rsb-pre_fpn_1x_coco.py
+retinanet/retinanet_r50_fpn_1x_coco.py
+rpn/rpn_r50_fpn_1x_coco.py
+rtmdet/rtmdet_s_8xb32-300e_coco.py
+rtmdet/rtmdet-ins_s_8xb32-300e_coco.py
+sabl/sabl-retinanet_r50_fpn_1x_coco.py
+scnet/scnet_r50_fpn_1x_coco.py
+scratch/faster-rcnn_r50-scratch_fpn_gn-all_6x_coco.py
+seesaw_loss/mask-rcnn_r50_fpn_seesaw-loss_random-ms-2x_lvis-v1.py
+simple_copy_paste/mask-rcnn_r50_fpn_rpn-2conv_4conv1fc_syncbn-all_32xb2-ssj-scp-90k_coco.py
+soft_teacher/soft-teacher_faster-rcnn_r50-caffe_fpn_180k_semi-0.1-coco.py
+solo/solo_r50_fpn_1x_coco.py
+solov2/solov2_r50_fpn_1x_coco.py
+sparse_rcnn/sparse-rcnn_r50_fpn_1x_coco.py
+ssd/ssd300_coco.py
+strong_baselines/mask-rcnn_r50-caffe_fpn_rpn-2conv_4conv1fc_syncbn-all_amp-lsj-100e_coco.py
+swin/mask-rcnn_swin-t-p4-w7_fpn_1x_coco.py
+timm_example/retinanet_timm-tv-resnet50_fpn_1x_coco.py
+tood/tood_r50_fpn_1x_coco.py
+tridentnet/tridentnet_r50-caffe_1x_coco.py
+vfnet/vfnet_r50_fpn_1x_coco.py
+yolact/yolact_r50_8xb8-55e_coco.py
+yolo/yolov3_d53_8xb8-320-273e_coco.py
+yolof/yolof_r50-c5_8xb8-1x_coco.py
+yolox/yolox_s_8xb8-300e_coco.py

+ 171 - 0
.dev_scripts/benchmark_inference_fps.py

@@ -0,0 +1,171 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+import argparse
+import os
+import os.path as osp
+
+from mmengine.config import Config, DictAction
+from mmengine.dist import init_dist
+from mmengine.fileio import dump
+from mmengine.utils import mkdir_or_exist
+from terminaltables import GithubFlavoredMarkdownTable
+
+from tools.analysis_tools.benchmark import repeat_measure_inference_speed
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(
+        description='MMDet benchmark a model of FPS')
+    parser.add_argument('config', help='test config file path')
+    parser.add_argument('checkpoint_root', help='Checkpoint file root path')
+    parser.add_argument(
+        '--round-num',
+        type=int,
+        default=1,
+        help='round a number to a given precision in decimal digits')
+    parser.add_argument(
+        '--repeat-num',
+        type=int,
+        default=1,
+        help='number of repeat times of measurement for averaging the results')
+    parser.add_argument(
+        '--out', type=str, help='output path of gathered fps to be stored')
+    parser.add_argument(
+        '--max-iter', type=int, default=2000, help='num of max iter')
+    parser.add_argument(
+        '--log-interval', type=int, default=50, help='interval of logging')
+    parser.add_argument(
+        '--fuse-conv-bn',
+        action='store_true',
+        help='Whether to fuse conv and bn; this will slightly increase '
+        'the inference speed')
+    parser.add_argument(
+        '--cfg-options',
+        nargs='+',
+        action=DictAction,
+        help='override some settings in the used config; the key-value pair '
+        'in xxx=yyy format will be merged into the config file. If the value '
+        'to be overwritten is a list, it should be like key="[a,b]" or '
+        'key=a,b. It also allows nested list/tuple values, e.g. '
+        'key="[(a,b),(c,d)]". Note that the quotation marks are necessary '
+        'and that no white space is allowed.')
+    parser.add_argument(
+        '--launcher',
+        choices=['none', 'pytorch', 'slurm', 'mpi'],
+        default='none',
+        help='job launcher')
+    parser.add_argument('--local_rank', type=int, default=0)
+    args = parser.parse_args()
+    if 'LOCAL_RANK' not in os.environ:
+        os.environ['LOCAL_RANK'] = str(args.local_rank)
+    return args
+
+
+def results2markdown(result_dict):
+    table_data = []
+    is_multiple_results = False
+    for cfg_name, value in result_dict.items():
+        name = cfg_name.replace('configs/', '')
+        fps = value['fps']
+        ms_times_per_image = value['ms_times_per_image']
+        if isinstance(fps, list):
+            is_multiple_results = True
+            mean_fps = value['mean_fps']
+            mean_times_per_image = value['mean_times_per_image']
+            fps_str = ','.join([str(s) for s in fps])
+            ms_times_per_image_str = ','.join(
+                [str(s) for s in ms_times_per_image])
+            table_data.append([
+                name, fps_str, mean_fps, ms_times_per_image_str,
+                mean_times_per_image
+            ])
+        else:
+            table_data.append([name, fps, ms_times_per_image])
+
+    if is_multiple_results:
+        table_data.insert(0, [
+            'model', 'fps', 'mean_fps', 'times_per_image(ms)',
+            'mean_times_per_image(ms)'
+        ])
+    else:
+        table_data.insert(0, ['model', 'fps', 'times_per_image(ms)'])
+    table = GithubFlavoredMarkdownTable(table_data)
+    print(table.table, flush=True)
+
+
+if __name__ == '__main__':
+    args = parse_args()
+    assert args.round_num >= 0
+    assert args.repeat_num >= 1
+
+    config = Config.fromfile(args.config)
+
+    if args.launcher == 'none':
+        raise NotImplementedError('Only supports distributed mode')
+    else:
+        init_dist(args.launcher)
+
+    result_dict = {}
+    for model_key in config:
+        model_infos = config[model_key]
+        if not isinstance(model_infos, list):
+            model_infos = [model_infos]
+        for model_info in model_infos:
+            record_metrics = model_info['metric']
+            cfg_path = model_info['config'].strip()
+            cfg = Config.fromfile(cfg_path)
+            checkpoint = osp.join(args.checkpoint_root,
+                                  model_info['checkpoint'].strip())
+            try:
+                fps = repeat_measure_inference_speed(cfg, checkpoint,
+                                                     args.max_iter,
+                                                     args.log_interval,
+                                                     args.fuse_conv_bn,
+                                                     args.repeat_num)
+                if args.repeat_num > 1:
+                    fps_list = [round(fps_, args.round_num) for fps_ in fps]
+                    times_per_image_list = [
+                        round(1000 / fps_, args.round_num) for fps_ in fps
+                    ]
+                    mean_fps = round(
+                        sum(fps_list) / len(fps_list), args.round_num)
+                    mean_times_per_image = round(
+                        sum(times_per_image_list) / len(times_per_image_list),
+                        args.round_num)
+                    print(
+                        f'{cfg_path} '
+                        f'Overall fps: {fps_list}[{mean_fps}] img / s, '
+                        f'times per image: '
+                        f'{times_per_image_list}[{mean_times_per_image}] '
+                        f'ms / img',
+                        flush=True)
+                    result_dict[cfg_path] = dict(
+                        fps=fps_list,
+                        mean_fps=mean_fps,
+                        ms_times_per_image=times_per_image_list,
+                        mean_times_per_image=mean_times_per_image)
+                else:
+                    print(
+                        f'{cfg_path} fps : {fps:.{args.round_num}f} img / s, '
+                        f'times per image: {1000 / fps:.{args.round_num}f} '
+                        f'ms / img',
+                        flush=True)
+                    result_dict[cfg_path] = dict(
+                        fps=round(fps, args.round_num),
+                        ms_times_per_image=round(1000 / fps, args.round_num))
+            except Exception as e:
+                print(f'{cfg_path} error: {repr(e)}')
+                if args.repeat_num > 1:
+                    result_dict[cfg_path] = dict(
+                        fps=[0],
+                        mean_fps=0,
+                        ms_times_per_image=[0],
+                        mean_times_per_image=0)
+                else:
+                    result_dict[cfg_path] = dict(fps=0, ms_times_per_image=0)
+
+    if args.out:
+        mkdir_or_exist(args.out)
+        dump(result_dict, osp.join(args.out, 'batch_inference_fps.json'))
+
+    results2markdown(result_dict)
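
The `config` file iterated in the `__main__` block maps model keys to entries with `config`, `checkpoint` and `metric` fields, possibly grouped in lists. A minimal hedged sketch of such a batch file; the checkpoint names and metric values here are hypothetical placeholders:

```python
retinanet = dict(
    config='configs/retinanet/retinanet_r50_fpn_1x_coco.py',
    checkpoint='retinanet_r50_fpn_1x_coco.pth',
    metric=dict(bbox_mAP=36.5))
# several runs of one model may be grouped in a list:
faster_rcnn = [
    dict(
        config='configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py',
        checkpoint='faster-rcnn_r50_fpn_1x_coco.pth',
        metric=dict(bbox_mAP=37.4)),
]
```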

+ 13 - 0
.dev_scripts/benchmark_options.py

@@ -0,0 +1,13 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+
+third_part_libs = [
+    'pip install -r ../requirements/albu.txt',
+    'pip install instaboostfast',
+    'pip install git+https://github.com/cocodataset/panopticapi.git',
+    'pip install timm',
+    'pip install mmcls>=1.0.0rc0',
+    'pip install git+https://github.com/lvis-dataset/lvis-api.git',
+]
+
+default_floating_range = 0.5
+model_floating_ranges = {'atss/atss_r50_fpn_1x_coco.py': 0.3}
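
These tolerances are applied when regression-testing a freshly measured benchmark metric against the recorded one; a hedged sketch of that comparison, with hypothetical metric values:

```python
# A sketch of how the floating ranges are typically used: a per-model
# tolerance, falling back to the default. Metric values are made up.
expected_map, measured_map = 39.4, 39.1
cfg = 'atss/atss_r50_fpn_1x_coco.py'
tolerance = model_floating_ranges.get(cfg, default_floating_range)
assert abs(expected_map - measured_map) <= tolerance, \
    f'{cfg} deviates by more than {tolerance}'
```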

+ 115 - 0
.dev_scripts/benchmark_test.py

@@ -0,0 +1,115 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+import logging
+import os
+import os.path as osp
+from argparse import ArgumentParser
+
+from mmengine.config import Config, DictAction
+from mmengine.logging import MMLogger
+from mmengine.registry import RUNNERS
+from mmengine.runner import Runner
+
+from mmdet.testing import replace_to_ceph
+from mmdet.utils import register_all_modules, replace_cfg_vals
+
+
+def parse_args():
+    parser = ArgumentParser()
+    parser.add_argument('config', help='test config file path')
+    parser.add_argument('checkpoint_root', help='Checkpoint file root path')
+    parser.add_argument('--work-dir', help='the dir to save logs')
+    parser.add_argument('--ceph', action='store_true')
+    parser.add_argument(
+        '--cfg-options',
+        nargs='+',
+        action=DictAction,
+        help='override some settings in the used config; the key-value pair '
+        'in xxx=yyy format will be merged into the config file. If the value '
+        'to be overwritten is a list, it should be like key="[a,b]" or '
+        'key=a,b. It also allows nested list/tuple values, e.g. '
+        'key="[(a,b),(c,d)]". Note that the quotation marks are necessary '
+        'and that no white space is allowed.')
+    parser.add_argument(
+        '--launcher',
+        choices=['none', 'pytorch', 'slurm', 'mpi'],
+        default='none',
+        help='job launcher')
+    parser.add_argument('--local_rank', type=int, default=0)
+    args = parser.parse_args()
+    if 'LOCAL_RANK' not in os.environ:
+        os.environ['LOCAL_RANK'] = str(args.local_rank)
+    return args
+
+
+# TODO: Need to refactor test.py so that it can be reused.
+def fast_test_model(config_name, checkpoint, args, logger=None):
+    cfg = Config.fromfile(config_name)
+    cfg = replace_cfg_vals(cfg)
+    cfg.launcher = args.launcher
+    if args.cfg_options is not None:
+        cfg.merge_from_dict(args.cfg_options)
+
+    # work_dir is determined in this priority: CLI > segment in file > filename
+    if args.work_dir is not None:
+        # update configs according to CLI args if args.work_dir is not None
+        cfg.work_dir = osp.join(args.work_dir,
+                                osp.splitext(osp.basename(config_name))[0])
+    elif cfg.get('work_dir', None) is None:
+        # use config filename as default work_dir if cfg.work_dir is None
+        cfg.work_dir = osp.join('./work_dirs',
+                                osp.splitext(osp.basename(config_name))[0])
+
+    if args.ceph:
+        replace_to_ceph(cfg)
+
+    cfg.load_from = checkpoint
+
+    # TODO: temporary plan
+    if 'visualizer' in cfg:
+        if 'name' in cfg.visualizer:
+            del cfg.visualizer.name
+
+    # build the runner from config
+    if 'runner_type' not in cfg:
+        # build the default runner
+        runner = Runner.from_cfg(cfg)
+    else:
+        # build customized runner from the registry
+        # if 'runner_type' is set in the cfg
+        runner = RUNNERS.build(cfg)
+
+    runner.test()
+
+
+# Smoke test whether the inference code runs correctly
+def main(args):
+    # register all modules in mmdet into the registries
+    register_all_modules(init_default_scope=False)
+
+    config = Config.fromfile(args.config)
+
+    # test all models
+    logger = MMLogger.get_instance(
+        name='MMLogger',
+        log_file='benchmark_test.log',
+        log_level=logging.ERROR)
+
+    for model_key in config:
+        model_infos = config[model_key]
+        if not isinstance(model_infos, list):
+            model_infos = [model_infos]
+        for model_info in model_infos:
+            print('processing: ', model_info['config'], flush=True)
+            config_name = model_info['config'].strip()
+            checkpoint = osp.join(args.checkpoint_root,
+                                  model_info['checkpoint'].strip())
+            try:
+                fast_test_model(config_name, checkpoint, args, logger)
+            except Exception as e:
+                logger.error(f'{config_name}: {repr(e)}')
+
+
+if __name__ == '__main__':
+    args = parse_args()
+    main(args)
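
In `fast_test_model` above, the `runner_type` branch builds the runner through the RUNNERS registry; as far as mmengine's build function goes, it pops `runner_type` from the config to select the runner class. A hedged sketch of how a config opts into a custom runner (the class name is hypothetical):

```python
# Hedged sketch of a custom runner picked up by the 'runner_type'
# branch above; the class name is hypothetical.
from mmengine.registry import RUNNERS
from mmengine.runner import Runner


@RUNNERS.register_module()
class BenchmarkRunner(Runner):
    """Trivial subclass for illustration; a real runner would override
    loops or hooks here."""


# In the model config, a single extra line selects it:
# runner_type = 'BenchmarkRunner'
```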

+ 134 - 0
.dev_scripts/benchmark_test_image.py

@@ -0,0 +1,134 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+import logging
+import os.path as osp
+from argparse import ArgumentParser
+
+import mmcv
+from mmengine.config import Config
+from mmengine.logging import MMLogger
+from mmengine.utils import mkdir_or_exist
+
+from mmdet.apis import inference_detector, init_detector
+from mmdet.registry import VISUALIZERS
+from mmdet.utils import register_all_modules
+
+
+def parse_args():
+    parser = ArgumentParser()
+    parser.add_argument('config', help='test config file path')
+    parser.add_argument('checkpoint_root', help='Checkpoint file root path')
+    parser.add_argument('--img', default='demo/demo.jpg', help='Image file')
+    parser.add_argument('--aug', action='store_true', help='aug test')
+    parser.add_argument('--model-name', help='model name to inference')
+    parser.add_argument('--show', action='store_true', help='show results')
+    parser.add_argument('--out-dir', default=None, help='Dir to output file')
+    parser.add_argument(
+        '--wait-time',
+        type=float,
+        default=1,
+        help='the interval of show (s); 0 means blocking')
+    parser.add_argument(
+        '--device', default='cuda:0', help='Device used for inference')
+    parser.add_argument(
+        '--palette',
+        default='coco',
+        choices=['coco', 'voc', 'citys', 'random'],
+        help='Color palette used for visualization')
+    parser.add_argument(
+        '--score-thr', type=float, default=0.3, help='bbox score threshold')
+    args = parser.parse_args()
+    return args
+
+
+def inference_model(config_name, checkpoint, visualizer, args, logger=None):
+    cfg = Config.fromfile(config_name)
+    if args.aug:
+        raise NotImplementedError()
+
+    model = init_detector(
+        cfg, checkpoint, palette=args.palette, device=args.device)
+    visualizer.dataset_meta = model.dataset_meta
+
+    # test a single image
+    result = inference_detector(model, args.img)
+
+    # show the results
+    if args.show or args.out_dir is not None:
+        img = mmcv.imread(args.img)
+        img = mmcv.imconvert(img, 'bgr', 'rgb')
+        out_file = None
+        if args.out_dir is not None:
+            out_dir = args.out_dir
+            mkdir_or_exist(out_dir)
+
+            out_file = osp.join(
+                out_dir,
+                config_name.split('/')[-1].replace('.py', '.jpg'))
+
+        visualizer.add_datasample(
+            'result',
+            img,
+            data_sample=result,
+            draw_gt=False,
+            show=args.show,
+            wait_time=args.wait_time,
+            out_file=out_file,
+            pred_score_thr=args.score_thr)
+
+    return result
+
+
+# Smoke test whether the inference code runs correctly
+def main(args):
+    # register all modules in mmdet into the registries
+    register_all_modules()
+
+    config = Config.fromfile(args.config)
+
+    # init visualizer
+    visualizer_cfg = dict(type='DetLocalVisualizer', name='visualizer')
+    visualizer = VISUALIZERS.build(visualizer_cfg)
+
+    # test single model
+    if args.model_name:
+        if args.model_name in config:
+            model_infos = config[args.model_name]
+            if not isinstance(model_infos, list):
+                model_infos = [model_infos]
+            model_info = model_infos[0]
+            config_name = model_info['config'].strip()
+            print(f'processing: {config_name}', flush=True)
+            checkpoint = osp.join(args.checkpoint_root,
+                                  model_info['checkpoint'].strip())
+            # build the model from a config file and a checkpoint file
+            inference_model(config_name, checkpoint, visualizer, args)
+            return
+        else:
+            raise RuntimeError(
+                f'model name {args.model_name} not found in the config.')
+
+    # test all models
+    logger = MMLogger.get_instance(
+        name='MMLogger',
+        log_file='benchmark_test_image.log',
+        log_level=logging.ERROR)
+
+    for model_key in config:
+        model_infos = config[model_key]
+        if not isinstance(model_infos, list):
+            model_infos = [model_infos]
+        for model_info in model_infos:
+            print('processing: ', model_info['config'], flush=True)
+            config_name = model_info['config'].strip()
+            checkpoint = osp.join(args.checkpoint_root,
+                                  model_info['checkpoint'].strip())
+            try:
+                # build the model from a config file and a checkpoint file
+                inference_model(config_name, checkpoint, visualizer, args,
+                                logger)
+            except Exception as e:
+                logger.error(f'{config_name}: {repr(e)}')
+
+
+if __name__ == '__main__':
+    args = parse_args()
+    main(args)
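
Stripped of the batch loop and visualization, `inference_model` above reduces to mmdet's high-level API. A minimal stand-alone sketch; the checkpoint file name is a placeholder:

```python
# Minimal stand-alone version of the inference path above; the
# checkpoint file name is a hypothetical placeholder.
from mmdet.apis import inference_detector, init_detector
from mmdet.utils import register_all_modules

register_all_modules()
model = init_detector(
    'configs/retinanet/retinanet_r50_fpn_1x_coco.py',
    'retinanet_r50_fpn_1x_coco.pth',
    palette='coco',
    device='cuda:0')
result = inference_detector(model, 'demo/demo.jpg')
print(result.pred_instances.scores[:5])
```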

+ 178 - 0
.dev_scripts/benchmark_train.py

@@ -0,0 +1,178 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+import logging
+import os
+import os.path as osp
+from argparse import ArgumentParser
+
+from mmengine.config import Config, DictAction
+from mmengine.logging import MMLogger, print_log
+from mmengine.registry import RUNNERS
+from mmengine.runner import Runner
+
+from mmdet.testing import replace_to_ceph
+from mmdet.utils import register_all_modules, replace_cfg_vals
+
+
+def parse_args():
+    parser = ArgumentParser()
+    parser.add_argument('config', help='test config file path')
+    parser.add_argument('--work-dir', help='the dir to save logs and models')
+    parser.add_argument('--ceph', action='store_true')
+    parser.add_argument('--save-ckpt', action='store_true')
+    parser.add_argument(
+        '--amp',
+        action='store_true',
+        default=False,
+        help='enable automatic-mixed-precision training')
+    parser.add_argument(
+        '--auto-scale-lr',
+        action='store_true',
+        help='enable automatically scaling LR.')
+    parser.add_argument(
+        '--resume',
+        action='store_true',
+        help='resume from the latest checkpoint in the work_dir automatically')
+    parser.add_argument(
+        '--cfg-options',
+        nargs='+',
+        action=DictAction,
+        help='override some settings in the used config; the key-value pair '
+        'in xxx=yyy format will be merged into the config file. If the value '
+        'to be overwritten is a list, it should be like key="[a,b]" or '
+        'key=a,b. It also allows nested list/tuple values, e.g. '
+        'key="[(a,b),(c,d)]". Note that the quotation marks are necessary '
+        'and that no white space is allowed.')
+    parser.add_argument(
+        '--launcher',
+        choices=['none', 'pytorch', 'slurm', 'mpi'],
+        default='none',
+        help='job launcher')
+    parser.add_argument('--local_rank', type=int, default=0)
+    args = parser.parse_args()
+    if 'LOCAL_RANK' not in os.environ:
+        os.environ['LOCAL_RANK'] = str(args.local_rank)
+    return args
+
+
+# TODO: Need to refactor train.py so that it can be reused.
+def fast_train_model(config_name, args, logger=None):
+    cfg = Config.fromfile(config_name)
+    cfg = replace_cfg_vals(cfg)
+    cfg.launcher = args.launcher
+    if args.cfg_options is not None:
+        cfg.merge_from_dict(args.cfg_options)
+
+    # work_dir is determined in this priority: CLI > segment in file > filename
+    if args.work_dir is not None:
+        # update configs according to CLI args if args.work_dir is not None
+        cfg.work_dir = osp.join(args.work_dir,
+                                osp.splitext(osp.basename(config_name))[0])
+    elif cfg.get('work_dir', None) is None:
+        # use config filename as default work_dir if cfg.work_dir is None
+        cfg.work_dir = osp.join('./work_dirs',
+                                osp.splitext(osp.basename(config_name))[0])
+
+    ckpt_hook = cfg.default_hooks.checkpoint
+    by_epoch = ckpt_hook.get('by_epoch', True)
+    fast_stop_hook = dict(type='FastStopTrainingHook')
+    fast_stop_hook['by_epoch'] = by_epoch
+    if args.save_ckpt:
+        if by_epoch:
+            interval = 1
+            stop_iter_or_epoch = 2
+        else:
+            interval = 4
+            stop_iter_or_epoch = 10
+        fast_stop_hook['stop_iter_or_epoch'] = stop_iter_or_epoch
+        fast_stop_hook['save_ckpt'] = True
+        ckpt_hook.interval = interval
+
+    if 'custom_hooks' in cfg:
+        cfg.custom_hooks.append(fast_stop_hook)
+    else:
+        custom_hooks = [fast_stop_hook]
+        cfg.custom_hooks = custom_hooks
+
+    # TODO: temporary plan
+    if 'visualizer' in cfg:
+        if 'name' in cfg.visualizer:
+            del cfg.visualizer.name
+
+    # enable automatic-mixed-precision training
+    if args.amp is True:
+        optim_wrapper = cfg.optim_wrapper.type
+        if optim_wrapper == 'AmpOptimWrapper':
+            print_log(
+                'AMP training is already enabled in your config.',
+                logger='current',
+                level=logging.WARNING)
+        else:
+            assert optim_wrapper == 'OptimWrapper', (
+                '`--amp` is only supported when the optimizer wrapper type is '
+                f'`OptimWrapper` but got {optim_wrapper}.')
+            cfg.optim_wrapper.type = 'AmpOptimWrapper'
+            cfg.optim_wrapper.loss_scale = 'dynamic'
+
+    # enable automatically scaling LR
+    if args.auto_scale_lr:
+        if 'auto_scale_lr' in cfg and \
+                'enable' in cfg.auto_scale_lr and \
+                'base_batch_size' in cfg.auto_scale_lr:
+            cfg.auto_scale_lr.enable = True
+        else:
+            raise RuntimeError('Cannot find "auto_scale_lr" or '
+                               '"auto_scale_lr.enable" or '
+                               '"auto_scale_lr.base_batch_size" in your'
+                               ' configuration file.')
+
+    if args.ceph:
+        replace_to_ceph(cfg)
+
+    cfg.resume = args.resume
+
+    # build the runner from config
+    if 'runner_type' not in cfg:
+        # build the default runner
+        runner = Runner.from_cfg(cfg)
+    else:
+        # build customized runner from the registry
+        # if 'runner_type' is set in the cfg
+        runner = RUNNERS.build(cfg)
+
+    runner.train()
+
+
+# Smoke test whether the training code runs correctly
+def main(args):
+    # register all modules in mmdet into the registries
+    register_all_modules(init_default_scope=False)
+
+    config = Config.fromfile(args.config)
+
+    # test all models
+    logger = MMLogger.get_instance(
+        name='MMLogger',
+        log_file='benchmark_train.log',
+        log_level=logging.ERROR)
+
+    for model_key in config:
+        model_infos = config[model_key]
+        if not isinstance(model_infos, list):
+            model_infos = [model_infos]
+        for model_info in model_infos:
+            print('processing: ', model_info['config'], flush=True)
+            config_name = model_info['config'].strip()
+            try:
+                fast_train_model(config_name, args, logger)
+            except RuntimeError as e:
+                # 'quick exit' is raised by FastStopTrainingHook and
+                # marks a normal early stop
+                if 'quick exit' not in repr(e):
+                    logger.error(f'{config_name}: {repr(e)}')
+            except Exception as e:
+                logger.error(f'{config_name}: {repr(e)}')
+
+
+if __name__ == '__main__':
+    args = parse_args()
+    main(args)
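
`FastStopTrainingHook` is referenced above only by its type string; it ships with mmdet's testing utilities and raises `RuntimeError('quick exit')`, which `main()` treats as a normal early stop. A minimal sketch of the idea, not the actual implementation (the epoch-based checkpoint variant is omitted for brevity):

```python
# Hedged sketch of the fast-stop hook injected above; the real
# implementation lives in mmdet.testing and may differ in detail.
from mmengine.hooks import Hook

from mmdet.registry import HOOKS


@HOOKS.register_module()
class FastStopTrainingHookSketch(Hook):
    """Stop training after a few iterations to smoke-test a config."""

    def __init__(self, by_epoch, save_ckpt=False, stop_iter_or_epoch=5):
        self.by_epoch = by_epoch
        self.save_ckpt = save_ckpt
        self.stop_iter_or_epoch = stop_iter_or_epoch

    def after_train_iter(self, runner, batch_idx, data_batch=None,
                         outputs=None) -> None:
        # benchmark_train.py treats RuntimeError('quick exit') as a
        # normal early stop rather than a failure
        if runner.iter + 1 >= self.stop_iter_or_epoch:
            raise RuntimeError('quick exit')
```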

+ 18 - 0
.dev_scripts/benchmark_train_models.txt

@@ -0,0 +1,18 @@
+atss/atss_r50_fpn_1x_coco.py
+faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py
+mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py
+cascade_rcnn/cascade-mask-rcnn_r50_fpn_1x_coco.py
+panoptic_fpn/panoptic-fpn_r50_fpn_1x_coco.py
+retinanet/retinanet_r50_fpn_1x_coco.py
+rtmdet/rtmdet_s_8xb32-300e_coco.py
+rtmdet/rtmdet-ins_s_8xb32-300e_coco.py
+deformable_detr/deformable-detr_r50_16xb2-50e_coco.py
+fcos/fcos_r50-caffe_fpn_gn-head-center-normbbox-centeronreg-giou_1x_coco.py
+centernet/centernet-update_r50-caffe_fpn_ms-1x_coco.py
+dino/dino-4scale_r50_8xb2-12e_coco.py
+htc/htc_r50_fpn_1x_coco.py
+mask2former/mask2former_r50_8xb2-lsj-50e_coco-panoptic.py
+swin/mask-rcnn_swin-t-p4-w7_fpn_1x_coco.py
+condinst/condinst_r50_fpn_ms-poly-90k_coco_instance.py
+lvis/mask-rcnn_r50_fpn_sample1e-3_ms-1x_lvis-v1.py
+convnext/mask-rcnn_convnext-t-p4-w7_fpn_amp-ms-crop-3x_coco.py

+ 295 - 0
.dev_scripts/benchmark_valid_flops.py

@@ -0,0 +1,295 @@
+import logging
+import re
+import tempfile
+from argparse import ArgumentParser
+from collections import OrderedDict
+from functools import partial
+from pathlib import Path
+
+import numpy as np
+import pandas as pd
+import torch
+from mmengine import Config, DictAction
+from mmengine.analysis import get_model_complexity_info
+from mmengine.analysis.print_helper import _format_size
+from mmengine.fileio import FileClient
+from mmengine.logging import MMLogger
+from mmengine.model import revert_sync_batchnorm
+from mmengine.runner import Runner
+from modelindex.load_model_index import load
+from rich.console import Console
+from rich.table import Table
+from rich.text import Text
+from tqdm import tqdm
+
+from mmdet.registry import MODELS
+from mmdet.utils import register_all_modules
+
+console = Console()
+MMDET_ROOT = Path(__file__).absolute().parents[1]
+
+
+def parse_args():
+    parser = ArgumentParser(description='Valid all models in model-index.yml')
+    parser.add_argument(
+        '--shape',
+        type=int,
+        nargs='+',
+        default=[1280, 800],
+        help='input image size')
+    parser.add_argument(
+        '--checkpoint_root',
+        help='Checkpoint file root path. If set, load checkpoint before test.')
+    parser.add_argument('--img', default='demo/demo.jpg', help='Image file')
+    parser.add_argument('--models', nargs='+', help='models name to inference')
+    parser.add_argument(
+        '--batch-size',
+        type=int,
+        default=1,
+        help='The batch size during the inference.')
+    parser.add_argument(
+        '--flops', action='store_true', help='Get Flops and Params of models')
+    parser.add_argument(
+        '--flops-str',
+        action='store_true',
+        help='Output FLOPs and params counts in a string form.')
+    parser.add_argument(
+        '--cfg-options',
+        nargs='+',
+        action=DictAction,
+        help='override some settings in the used config; the key-value pair '
+        'in xxx=yyy format will be merged into the config file. If the value '
+        'to be overwritten is a list, it should be like key="[a,b]" or '
+        'key=a,b. It also allows nested list/tuple values, e.g. '
+        'key="[(a,b),(c,d)]". Note that the quotation marks are necessary '
+        'and that no white space is allowed.')
+    parser.add_argument(
+        '--size_divisor',
+        type=int,
+        default=32,
+        help='Pad the input image to the smallest size that is divisible '
+        'by size_divisor; -1 means do not pad the image.')
+    args = parser.parse_args()
+    return args
+
+
+def inference(config_file, checkpoint, work_dir, args, exp_name):
+    logger = MMLogger.get_instance(name='MMLogger')
+    logger.warning('If you want to test FLOPs, please make sure torch>=1.12')
+    cfg = Config.fromfile(config_file)
+    cfg.work_dir = work_dir
+    cfg.load_from = checkpoint
+    cfg.log_level = 'WARN'
+    cfg.experiment_name = exp_name
+    if args.cfg_options is not None:
+        cfg.merge_from_dict(args.cfg_options)
+
+    # forward the model
+    result = {'model': config_file.stem}
+
+    if args.flops:
+
+        if len(args.shape) == 1:
+            h = w = args.shape[0]
+        elif len(args.shape) == 2:
+            h, w = args.shape
+        else:
+            raise ValueError('invalid input shape')
+        divisor = args.size_divisor
+        if divisor > 0:
+            h = int(np.ceil(h / divisor)) * divisor
+            w = int(np.ceil(w / divisor)) * divisor
+
+        input_shape = (3, h, w)
+        result['resolution'] = input_shape
+
+        try:
+            cfg = Config.fromfile(config_file)
+            if hasattr(cfg, 'head_norm_cfg'):
+                cfg['head_norm_cfg'] = dict(type='SyncBN', requires_grad=True)
+                cfg['model']['roi_head']['bbox_head']['norm_cfg'] = dict(
+                    type='SyncBN', requires_grad=True)
+                cfg['model']['roi_head']['mask_head']['norm_cfg'] = dict(
+                    type='SyncBN', requires_grad=True)
+
+            if args.cfg_options is not None:
+                cfg.merge_from_dict(args.cfg_options)
+
+            model = MODELS.build(cfg.model)
+            input = torch.rand(1, *input_shape)
+            if torch.cuda.is_available():
+                model.cuda()
+                input = input.cuda()
+            model = revert_sync_batchnorm(model)
+            inputs = (input, )
+            model.eval()
+            outputs = get_model_complexity_info(
+                model, input_shape, inputs, show_table=False, show_arch=False)
+            flops = outputs['flops']
+            params = outputs['params']
+            activations = outputs['activations']
+            result['Get Types'] = 'direct'
+        except Exception:
+            logger = MMLogger.get_instance(name='MMLogger')
+            logger.warning(
+                'Direct FLOPs computation failed; retrying with real data')
+            cfg = Config.fromfile(config_file)
+            if hasattr(cfg, 'head_norm_cfg'):
+                cfg['head_norm_cfg'] = dict(type='SyncBN', requires_grad=True)
+                cfg['model']['roi_head']['bbox_head']['norm_cfg'] = dict(
+                    type='SyncBN', requires_grad=True)
+                cfg['model']['roi_head']['mask_head']['norm_cfg'] = dict(
+                    type='SyncBN', requires_grad=True)
+            data_loader = Runner.build_dataloader(cfg.val_dataloader)
+            data_batch = next(iter(data_loader))
+            model = MODELS.build(cfg.model)
+            if torch.cuda.is_available():
+                model = model.cuda()
+            model = revert_sync_batchnorm(model)
+            model.eval()
+            _forward = model.forward
+            data = model.data_preprocessor(data_batch)
+            del data_loader
+            model.forward = partial(
+                _forward, data_samples=data['data_samples'])
+            outputs = get_model_complexity_info(
+                model,
+                input_shape,
+                data['inputs'],
+                show_table=False,
+                show_arch=False)
+            flops = outputs['flops']
+            params = outputs['params']
+            activations = outputs['activations']
+            result['Get Types'] = 'dataloader'
+
+        if args.flops_str:
+            flops = _format_size(flops)
+            params = _format_size(params)
+            activations = _format_size(activations)
+
+        result['flops'] = flops
+        result['params'] = params
+
+    return result
+
+
+def show_summary(summary_data, args):
+    table = Table(title='Validation Benchmark Regression Summary')
+    table.add_column('Model')
+    table.add_column('Validation')
+    table.add_column('Resolution (c, h, w)')
+    if args.flops:
+        table.add_column('Flops', justify='right', width=11)
+        table.add_column('Params', justify='right')
+
+    for model_name, summary in summary_data.items():
+        row = [model_name]
+        valid = summary['valid']
+        color = 'green' if valid == 'PASS' else 'red'
+        row.append(f'[{color}]{valid}[/{color}]')
+        if valid == 'PASS':
+            row.append(str(summary['resolution']))
+            if args.flops:
+                row.append(str(summary['flops']))
+                row.append(str(summary['params']))
+        table.add_row(*row)
+
+    console.print(table)
+    table_data = {
+        x.header: [Text.from_markup(y).plain for y in x.cells]
+        for x in table.columns
+    }
+    table_pd = pd.DataFrame(table_data)
+    table_pd.to_csv('./mmdetection_flops.csv')
+
+
+# Smoke test whether each config is valid and measure model complexity
+def main(args):
+    register_all_modules()
+    model_index_file = MMDET_ROOT / 'model-index.yml'
+    model_index = load(str(model_index_file))
+    model_index.build_models_with_collections()
+    models = OrderedDict({model.name: model for model in model_index.models})
+
+    logger = MMLogger(
+        'validation',
+        logger_name='validation',
+        log_file='benchmark_valid_flops.log',
+        log_level=logging.INFO)
+
+    if args.models:
+        patterns = [
+            re.compile(pattern.replace('+', '_')) for pattern in args.models
+        ]
+        filter_models = {}
+        for k, v in models.items():
+            k = k.replace('+', '_')
+            if any([re.match(pattern, k) for pattern in patterns]):
+                filter_models[k] = v
+        if len(filter_models) == 0:
+            print('No model found; please specify models from:')
+            print('\n'.join(models.keys()))
+            return
+        models = filter_models
+
+    summary_data = {}
+    tmpdir = tempfile.TemporaryDirectory()
+    for model_name, model_info in tqdm(models.items()):
+
+        if model_info.config is None:
+            continue
+
+        model_info.config = model_info.config.replace('%2B', '+')
+        config = Path(model_info.config)
+
+        # Path.exists() returns False instead of raising, so check it
+        if not (MMDET_ROOT / config).exists():
+            logger.error(f'{model_name}: {config} not found.')
+            continue
+
+        logger.info(f'Processing: {model_name}')
+
+        http_prefix = 'https://download.openmmlab.com/mmdetection/'
+        if args.checkpoint_root is not None:
+            root = args.checkpoint_root
+            if 's3://' in args.checkpoint_root:
+                from petrel_client.common.exception import AccessDeniedError
+                file_client = FileClient.infer_client(uri=root)
+                checkpoint = file_client.join_path(
+                    root, model_info.weights[len(http_prefix):])
+                try:
+                    exists = file_client.exists(checkpoint)
+                except AccessDeniedError:
+                    exists = False
+            else:
+                checkpoint = Path(root) / model_info.weights[len(http_prefix):]
+                exists = checkpoint.exists()
+            if exists:
+                checkpoint = str(checkpoint)
+            else:
+                print(f'WARNING: {model_name}: {checkpoint} not found.')
+                checkpoint = None
+        else:
+            checkpoint = None
+
+        try:
+            # build the model from a config file and a checkpoint file
+            result = inference(MMDET_ROOT / config, checkpoint, tmpdir.name,
+                               args, model_name)
+            result['valid'] = 'PASS'
+        except Exception:
+            import traceback
+            logger.error(f'"{config}" :\n{traceback.format_exc()}')
+            result = {'valid': 'FAIL'}
+
+        summary_data[model_name] = result
+
+    tmpdir.cleanup()
+    show_summary(summary_data, args)
+
+
+if __name__ == '__main__':
+    args = parse_args()
+    main(args)
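
The 'direct' FLOPs path above can be reproduced on any plain `nn.Module`; a small sketch, assuming torchvision is installed (it is not a dependency of these scripts):

```python
# Stand-alone sketch of the 'direct' complexity measurement used above.
from mmengine.analysis import get_model_complexity_info
from torchvision.models import resnet18

model = resnet18().eval()
outputs = get_model_complexity_info(
    model, input_shape=(3, 224, 224), show_table=False, show_arch=False)
print(outputs['flops_str'], outputs['params_str'])
```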

+ 157 - 0
.dev_scripts/check_links.py

@@ -0,0 +1,157 @@
+# Modified from:
+# https://github.com/allenai/allennlp/blob/main/scripts/check_links.py
+
+import argparse
+import logging
+import os
+import pathlib
+import re
+import sys
+from multiprocessing.dummy import Pool
+from typing import NamedTuple, Optional, Tuple
+
+import requests
+from mmengine.logging import MMLogger
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(
+        description='Goes through all the inline-links '
+        'in markdown files and reports the breakages')
+    parser.add_argument(
+        '--num-threads',
+        type=int,
+        default=100,
+        help='Number of threads used to check the links')
+    parser.add_argument('--https-proxy', type=str, help='https proxy')
+    parser.add_argument(
+        '--out',
+        type=str,
+        default='link_reports.txt',
+        help='output path of reports')
+    args = parser.parse_args()
+    return args
+
+
+OK_STATUS_CODES = (
+    200,
+    401,  # the resource exists but may require some sort of login.
+    403,  # ^ same
+    405,  # HEAD method not allowed.
+    # the resource exists, but our default 'Accept-' header may not
+    # match what the server can provide.
+    406,
+)
+
+
+class MatchTuple(NamedTuple):
+    source: str
+    name: str
+    link: str
+
+
+def check_link(
+        match_tuple: MatchTuple,
+        http_session: requests.Session,
+        logger: Optional[logging.Logger] = None
+) -> Tuple[MatchTuple, bool, Optional[str]]:
+    reason: Optional[str] = None
+    if match_tuple.link.startswith('http'):
+        result_ok, reason = check_url(match_tuple, http_session)
+    else:
+        result_ok = check_path(match_tuple)
+    if logger is None:
+        print(f"  {'✓' if result_ok else '✗'} {match_tuple.link}")
+    else:
+        logger.info(f"  {'✓' if result_ok else '✗'} {match_tuple.link}")
+    return match_tuple, result_ok, reason
+
+
+def check_url(match_tuple: MatchTuple,
+              http_session: requests.Session) -> Tuple[bool, str]:
+    """Check if a URL is reachable."""
+    try:
+        result = http_session.head(
+            match_tuple.link, timeout=5, allow_redirects=True)
+        return (
+            result.ok or result.status_code in OK_STATUS_CODES,
+            f'status code = {result.status_code}',
+        )
+    except (requests.ConnectionError, requests.Timeout):
+        return False, 'connection error'
+
+
+def check_path(match_tuple: MatchTuple) -> bool:
+    """Check if a file in this repository exists."""
+    relative_path = match_tuple.link.split('#')[0]
+    full_path = os.path.join(
+        os.path.dirname(str(match_tuple.source)), relative_path)
+    return os.path.exists(full_path)
+
+
+def main():
+    args = parse_args()
+
+    # setup logger
+    logger = MMLogger.get_instance(name='mmdet', log_file=args.out)
+
+    # setup https_proxy
+    if args.https_proxy:
+        os.environ['https_proxy'] = args.https_proxy
+
+    # setup http_session
+    http_session = requests.Session()
+    for resource_prefix in ('http://', 'https://'):
+        http_session.mount(
+            resource_prefix,
+            requests.adapters.HTTPAdapter(
+                max_retries=5,
+                pool_connections=20,
+                pool_maxsize=args.num_threads),
+        )
+
+    logger.info('Finding all markdown files in the current directory...')
+
+    project_root = (pathlib.Path(__file__).parent / '..').resolve()
+    markdown_files = project_root.glob('**/*.md')
+
+    all_matches = set()
+    url_regex = re.compile(r'\[([^!][^\]]+)\]\(([^)(]+)\)')
+    for markdown_file in markdown_files:
+        with open(markdown_file) as handle:
+            for line in handle.readlines():
+                matches = url_regex.findall(line)
+                for name, link in matches:
+                    if 'localhost' not in link:
+                        all_matches.add(
+                            MatchTuple(
+                                source=str(markdown_file),
+                                name=name,
+                                link=link))
+
+    logger.info(f'  {len(all_matches)} links found in markdown files')
+    logger.info('Checking to make sure we can retrieve each link...')
+
+    with Pool(processes=args.num_threads) as pool:
+        results = pool.starmap(check_link, [(match, http_session, logger)
+                                            for match in list(all_matches)])
+
+    # collect unreachable results
+    unreachable_results = [(match_tuple, reason)
+                           for match_tuple, success, reason in results
+                           if not success]
+
+    if unreachable_results:
+        logger.info('================================================')
+        logger.info(f'Unreachable links ({len(unreachable_results)}):')
+        for match_tuple, reason in unreachable_results:
+            logger.info('  > Source: ' + match_tuple.source)
+            logger.info('    Name: ' + match_tuple.name)
+            logger.info('    Link: ' + match_tuple.link)
+            if reason is not None:
+                logger.info('    Reason: ' + reason)
+        sys.exit(1)
+    logger.info('No unreachable links found.')
+
+
+if __name__ == '__main__':
+    main()
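
For reference, the inline-link regex captures the link text and target of standard markdown links:

```python
# Quick check of the markdown-link regex used above.
import re

url_regex = re.compile(r'\[([^!][^\]]+)\]\(([^)(]+)\)')
line = 'See the [MMDetection docs](https://mmdetection.readthedocs.io).'
print(url_regex.findall(line))
# -> [('MMDetection docs', 'https://mmdetection.readthedocs.io')]
```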

+ 114 - 0
.dev_scripts/convert_test_benchmark_script.py

@@ -0,0 +1,114 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+import argparse
+import os
+import os.path as osp
+
+from mmengine import Config
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(
+        description='Convert benchmark model list to script')
+    parser.add_argument('config', help='test config file path')
+    parser.add_argument('--port', type=int, default=29666, help='dist port')
+    parser.add_argument(
+        '--run', action='store_true', help='run script directly')
+    parser.add_argument(
+        '--out', type=str, help='path to save model benchmark script')
+
+    args = parser.parse_args()
+    return args
+
+
+def process_model_info(model_info, work_dir):
+    config = model_info['config'].strip()
+    fname, _ = osp.splitext(osp.basename(config))
+    job_name = fname
+    work_dir = '$WORK_DIR/' + fname
+    checkpoint = model_info['checkpoint'].strip()
+    return dict(
+        config=config,
+        job_name=job_name,
+        work_dir=work_dir,
+        checkpoint=checkpoint)
+
+
+def create_test_bash_info(commands, model_test_dict, port, script_name,
+                          partition):
+    config = model_test_dict['config']
+    job_name = model_test_dict['job_name']
+    checkpoint = model_test_dict['checkpoint']
+    work_dir = model_test_dict['work_dir']
+
+    echo_info = f' \necho \'{config}\' &'
+    commands.append(echo_info)
+    commands.append('\n')
+
+    command_info = f'GPUS=8  GPUS_PER_NODE=8  ' \
+                   f'CPUS_PER_TASK=$CPUS_PER_TASK {script_name} '
+
+    command_info += f'{partition} '
+    command_info += f'{job_name} '
+    command_info += f'{config} '
+    command_info += f'$CHECKPOINT_DIR/{checkpoint} '
+    command_info += f'--work-dir {work_dir} '
+
+    command_info += f'--cfg-options env_cfg.dist_cfg.port={port} '
+    command_info += ' &'
+
+    commands.append(command_info)
+
+
+def main():
+    args = parse_args()
+    if args.out:
+        out_suffix = args.out.split('.')[-1]
+        assert args.out.endswith('.sh'), \
+            f'Expected out file path suffix to be .sh, but got .{out_suffix}'
+    assert args.out or args.run, \
+        ('Please specify at least one operation (save or run the '
+         'script) with the argument "--out" or "--run"')
+
+    commands = []
+    partition_name = 'PARTITION=$1 '
+    commands.append(partition_name)
+    commands.append('\n')
+
+    checkpoint_root = 'CHECKPOINT_DIR=$2 '
+    commands.append(checkpoint_root)
+    commands.append('\n')
+
+    work_dir = 'WORK_DIR=$3 '
+    commands.append(work_dir)
+    commands.append('\n')
+
+    cpus_per_task = 'CPUS_PER_TASK=${4:-2} '
+    commands.append(cpus_per_task)
+    commands.append('\n')
+
+    script_name = osp.join('tools', 'slurm_test.sh')
+    port = args.port
+
+    cfg = Config.fromfile(args.config)
+
+    for model_key in cfg:
+        model_infos = cfg[model_key]
+        if not isinstance(model_infos, list):
+            model_infos = [model_infos]
+        for model_info in model_infos:
+            print('processing: ', model_info['config'])
+            model_test_dict = process_model_info(model_info, work_dir)
+            create_test_bash_info(commands, model_test_dict, port, script_name,
+                                  '$PARTITION')
+            port += 1
+
+    command_str = ''.join(commands)
+    if args.out:
+        with open(args.out, 'w') as f:
+            f.write(command_str)
+    if args.run:
+        os.system(command_str)
+
+
+if __name__ == '__main__':
+    main()
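
A hedged demo of the two helpers above; the checkpoint file name is a hypothetical placeholder. Run after the definitions, it prints the echo line and the single test command that `--out` would write for one model:

```python
info = dict(
    config='configs/retinanet/retinanet_r50_fpn_1x_coco.py',
    checkpoint='retinanet_r50_fpn_1x_coco.pth')
model_test_dict = process_model_info(info, '$WORK_DIR')
commands = []
create_test_bash_info(commands, model_test_dict, 29666,
                      osp.join('tools', 'slurm_test.sh'), '$PARTITION')
print(''.join(commands))
```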

+ 104 - 0
.dev_scripts/convert_train_benchmark_script.py

@@ -0,0 +1,104 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+import argparse
+import os
+import os.path as osp
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(
+        description='Convert benchmark model list to script')
+    parser.add_argument(
+        'txt_path', type=str, help='txt path output by benchmark_filter')
+    parser.add_argument(
+        '--run', action='store_true', help='run script directly')
+    parser.add_argument(
+        '--out', type=str, help='path to save model benchmark script')
+
+    args = parser.parse_args()
+    return args
+
+
+def determine_gpus(cfg_name):
+    gpus = 8
+    gpus_per_node = 8
+
+    if cfg_name.find('16x') >= 0:
+        gpus = 16
+    elif cfg_name.find('4xb4') >= 0:
+        gpus = 4
+        gpus_per_node = 4
+    elif 'lad' in cfg_name:
+        gpus = 2
+        gpus_per_node = 2
+
+    return gpus, gpus_per_node
+
+
+def main():
+    args = parse_args()
+    if args.out:
+        out_suffix = args.out.split('.')[-1]
+        assert args.out.endswith('.sh'), \
+            f'Expected out file path suffix to be .sh, but got .{out_suffix}'
+    assert args.out or args.run, \
+        ('Please specify at least one operation (save or run the '
+         'script) with the argument "--out" or "--run"')
+
+    root_name = './tools'
+    train_script_name = osp.join(root_name, 'slurm_train.sh')
+
+    commands = []
+    partition_name = 'PARTITION=$1 '
+    commands.append(partition_name)
+    commands.append('\n')
+
+    work_dir = 'WORK_DIR=$2 '
+    commands.append(work_dir)
+    commands.append('\n')
+
+    cpus_per_task = 'CPUS_PER_TASK=${3:-4} '
+    commands.append(cpus_per_task)
+    commands.append('\n')
+    commands.append('\n')
+
+    with open(args.txt_path, 'r') as f:
+        model_cfgs = f.readlines()
+        for i, cfg in enumerate(model_cfgs):
+            cfg = cfg.strip()
+            if len(cfg) == 0:
+                continue
+            # print cfg name
+            echo_info = f'echo \'{cfg}\' &'
+            commands.append(echo_info)
+            commands.append('\n')
+
+            fname, _ = osp.splitext(osp.basename(cfg))
+            out_fname = '$WORK_DIR/' + fname
+
+            gpus, gpus_per_node = determine_gpus(cfg)
+            command_info = f'GPUS={gpus}  GPUS_PER_NODE={gpus_per_node}  ' \
+                           f'CPUS_PER_TASK=$CPUS_PER_TASK {train_script_name} '
+            command_info += '$PARTITION '
+            command_info += f'{fname} '
+            command_info += f'{cfg} '
+            command_info += f'{out_fname} '
+
+            command_info += '--cfg-options default_hooks.checkpoint.' \
+                            'max_keep_ckpts=1 '
+            command_info += '&'
+
+            commands.append(command_info)
+
+            if i < len(model_cfgs) - 1:
+                commands.append('\n')
+
+        command_str = ''.join(commands)
+        if args.out:
+            with open(args.out, 'w') as f:
+                f.write(command_str)
+        if args.run:
+            os.system(command_str)
+
+
+if __name__ == '__main__':
+    main()
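
`determine_gpus()` keys off substrings of the config file name; a few illustrative inputs and the `(gpus, gpus_per_node)` pairs they yield, assuming the function is in scope:

```python
print(determine_gpus('deformable-detr_r50_16xb2-50e_coco.py'))        # (16, 8)
print(determine_gpus('fovea_r50_fpn_gn-head-align_4xb4-2x_coco.py'))  # (4, 4)
print(determine_gpus('lad_r50-paa-r101_fpn_2xb8_coco_1x.py'))         # (2, 2)
print(determine_gpus('retinanet_r50_fpn_1x_coco.py'))                 # (8, 8)
```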

+ 5 - 0
.dev_scripts/covignore.cfg

@@ -0,0 +1,5 @@
+# Each line should be the relative path to the root directory
+# of this repo. Support regular expression as well.
+# For example:
+
+.*/__init__.py

+ 40 - 0
.dev_scripts/diff_coverage_test.sh

@@ -0,0 +1,40 @@
+#!/bin/bash
+
+readarray -t IGNORED_FILES < "$(dirname "$0")/covignore.cfg"
+REUSE_COVERAGE_REPORT=${REUSE_COVERAGE_REPORT:-0}
+REPO=${1:-"origin"}
+BRANCH=${2:-"refactor_dev"}
+
+git fetch $REPO $BRANCH
+
+PY_FILES=""
+for FILE_NAME in $(git diff --name-only ${REPO}/${BRANCH}); do
+    # Only test python files in mmdet/ existing in current branch, and not ignored in covignore.cfg
+    if [ ${FILE_NAME: -3} == ".py" ] && [ ${FILE_NAME:0:6} == "mmdet/" ] && [ -f "$FILE_NAME" ]; then
+        IGNORED=false
+        for IGNORED_FILE_NAME in "${IGNORED_FILES[@]}"; do
+            # Skip blank lines
+            if [ -z "$IGNORED_FILE_NAME" ]; then
+                continue
+            fi
+            if [ "${IGNORED_FILE_NAME::1}" != "#" ] && [[ "$FILE_NAME" =~ $IGNORED_FILE_NAME ]]; then
+                echo "Ignoring $FILE_NAME"
+                IGNORED=true
+                break
+            fi
+        done
+        if [ "$IGNORED" = false ]; then
+            PY_FILES="$PY_FILES $FILE_NAME"
+        fi
+    fi
+done
+
+# Only test the coverage when PY_FILES is not empty; otherwise the entire project would be tested
+if [ ! -z "${PY_FILES}" ]
+then
+    if [ "$REUSE_COVERAGE_REPORT" == "0" ]; then
+        coverage run --branch --source mmdet -m pytest tests/
+    fi
+    coverage report --fail-under 80 -m $PY_FILES
+    interrogate -v --ignore-init-method --ignore-module --ignore-nested-functions --ignore-magic --ignore-regex "__repr__" --fail-under 95 $PY_FILES
+fi

+ 83 - 0
.dev_scripts/download_checkpoints.py

@@ -0,0 +1,83 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+
+import argparse
+import math
+import os
+import os.path as osp
+from multiprocessing import Pool
+
+import torch
+from mmengine.config import Config
+from mmengine.utils import mkdir_or_exist
+
+
+def download(url, out_file, min_bytes=math.pow(1024, 2), progress=True):
+    # math.pow(1024, 2) bytes equals 1 MiB
+    assert_msg = f"Downloaded url '{url}' does not exist " \
+                 f'or size is < min_bytes={min_bytes}'
+    try:
+        print(f'Downloading {url} to {out_file}...')
+        torch.hub.download_url_to_file(url, str(out_file), progress=progress)
+        assert osp.exists(
+            out_file) and osp.getsize(out_file) > min_bytes, assert_msg
+    except Exception as e:
+        if osp.exists(out_file):
+            os.remove(out_file)
+        print(f'ERROR: {e}\nRe-attempting {url} to {out_file} ...')
+        os.system(f"curl -L '{url}' -o '{out_file}' --retry 3 -C -"
+                  )  # curl download, retry and resume on fail
+    finally:
+        if osp.exists(out_file) and osp.getsize(out_file) < min_bytes:
+            os.remove(out_file)  # remove partial downloads
+
+        if not osp.exists(out_file):
+            print(f'ERROR: {assert_msg}\n')
+        print('=========================================\n')
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(description='Download checkpoints')
+    parser.add_argument('config', help='test config file path')
+    parser.add_argument(
+        'out', type=str, help='output dir of checkpoints to be stored')
+    parser.add_argument(
+        '--nproc', type=int, default=16, help='num of Processes')
+    parser.add_argument(
+        '--intranet',
+        action='store_true',
+        help='switch to internal network url')
+    args = parser.parse_args()
+    return args
+
+
+if __name__ == '__main__':
+    args = parse_args()
+    mkdir_or_exist(args.out)
+
+    cfg = Config.fromfile(args.config)
+
+    checkpoint_url_list = []
+    checkpoint_out_list = []
+
+    for model in cfg:
+        model_infos = cfg[model]
+        if not isinstance(model_infos, list):
+            model_infos = [model_infos]
+        for model_info in model_infos:
+            checkpoint = model_info['checkpoint']
+            out_file = osp.join(args.out, checkpoint)
+            if not osp.exists(out_file):
+
+                url = model_info['url']
+                if args.intranet is True:
+                    url = url.replace('.com', '.sensetime.com')
+                    url = url.replace('https', 'http')
+
+                checkpoint_url_list.append(url)
+                checkpoint_out_list.append(out_file)
+
+    if len(checkpoint_url_list) > 0:
+        pool = Pool(min(os.cpu_count(), args.nproc))
+        pool.starmap(download, zip(checkpoint_url_list, checkpoint_out_list))
+    else:
+        print('No files to download!')

+ 344 - 0
.dev_scripts/gather_models.py

@@ -0,0 +1,344 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+import argparse
+import glob
+import json
+import os.path as osp
+import shutil
+import subprocess
+from collections import OrderedDict
+
+import torch
+import yaml
+from mmengine.config import Config
+from mmengine.fileio import dump
+from mmengine.utils import digit_version, mkdir_or_exist, scandir
+
+
+def ordered_yaml_dump(data, stream=None, Dumper=yaml.SafeDumper, **kwds):
+
+    class OrderedDumper(Dumper):
+        pass
+
+    def _dict_representer(dumper, data):
+        return dumper.represent_mapping(
+            yaml.resolver.BaseResolver.DEFAULT_MAPPING_TAG, data.items())
+
+    OrderedDumper.add_representer(OrderedDict, _dict_representer)
+    return yaml.dump(data, stream, OrderedDumper, **kwds)
+
+
+def process_checkpoint(in_file, out_file):
+    checkpoint = torch.load(in_file, map_location='cpu')
+    # remove optimizer for smaller file size
+    if 'optimizer' in checkpoint:
+        del checkpoint['optimizer']
+
+    # remove ema state_dict
+    for key in list(checkpoint['state_dict']):
+        if key.startswith('ema_'):
+            checkpoint['state_dict'].pop(key)
+
+    # if it is necessary to remove some sensitive data in checkpoint['meta'],
+    # add the code here.
+    # compare versions numerically: a plain string comparison misorders
+    # versions such as '1.10' < '1.6'
+    if digit_version(torch.__version__) >= digit_version('1.6'):
+        torch.save(checkpoint, out_file, _use_new_zipfile_serialization=False)
+    else:
+        torch.save(checkpoint, out_file)
+    sha = subprocess.check_output(['sha256sum', out_file]).decode()
+    # str.rstrip('.pth') strips a character set, not the suffix; remove the
+    # extension explicitly instead
+    base = out_file[:-4] if out_file.endswith('.pth') else out_file
+    final_file = base + '-{}.pth'.format(sha[:8])
+    # block until the rename completes instead of firing and forgetting
+    subprocess.run(['mv', out_file, final_file], check=True)
+    return final_file
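+# Illustrative example (paths assumed): process_checkpoint('exp/epoch_12.pth',
+# 'out/faster_rcnn_r50') writes 'out/faster_rcnn_r50' and then renames it to
+# 'out/faster_rcnn_r50-<sha8>.pth', where <sha8> is the first 8 hex digits of
+# the published file's sha256.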
+
+
+def is_by_epoch(config):
+    cfg = Config.fromfile('./configs/' + config)
+    return cfg.runner.type == 'EpochBasedRunner'
+
+
+def get_final_epoch_or_iter(config):
+    cfg = Config.fromfile('./configs/' + config)
+    if cfg.runner.type == 'EpochBasedRunner':
+        return cfg.runner.max_epochs
+    else:
+        return cfg.runner.max_iters
+
+
+def get_best_epoch_or_iter(exp_dir):
+    best_epoch_iter_full_path = list(
+        sorted(glob.glob(osp.join(exp_dir, 'best_*.pth'))))[-1]
+    best_epoch_or_iter_model_path = best_epoch_iter_full_path.split('/')[-1]
+    best_epoch_or_iter = best_epoch_or_iter_model_path.\
+        split('_')[-1].split('.')[0]
+    return best_epoch_or_iter_model_path, int(best_epoch_or_iter)
+
+
+def get_real_epoch_or_iter(config):
+    cfg = Config.fromfile('./configs/' + config)
+    if cfg.runner.type == 'EpochBasedRunner':
+        epoch = cfg.runner.max_epochs
+        if cfg.data.train.type == 'RepeatDataset':
+            epoch *= cfg.data.train.times
+        return epoch
+    else:
+        return cfg.runner.max_iters
+
+
+def get_final_results(log_json_path,
+                      epoch_or_iter,
+                      results_lut,
+                      by_epoch=True):
+    result_dict = dict()
+    last_val_line = None
+    last_train_line = None
+    last_val_line_idx = -1
+    last_train_line_idx = -1
+    with open(log_json_path, 'r') as f:
+        for i, line in enumerate(f.readlines()):
+            log_line = json.loads(line)
+            if 'mode' not in log_line.keys():
+                continue
+
+            if by_epoch:
+                if (log_line['mode'] == 'train'
+                        and log_line['epoch'] == epoch_or_iter):
+                    result_dict['memory'] = log_line['memory']
+
+                if (log_line['mode'] == 'val'
+                        and log_line['epoch'] == epoch_or_iter):
+                    result_dict.update({
+                        key: log_line[key]
+                        for key in results_lut if key in log_line
+                    })
+                    return result_dict
+            else:
+                if log_line['mode'] == 'train':
+                    last_train_line_idx = i
+                    last_train_line = log_line
+
+                if log_line and log_line['mode'] == 'val':
+                    last_val_line_idx = i
+                    last_val_line = log_line
+
+    # known failure mode: the last train line can stop short of max_iters
+    # (e.g. iter 750 with max_iters 768) when the log is truncated
+    assert last_val_line_idx == last_train_line_idx + 1, \
+        'Log file is incomplete'
+    result_dict['memory'] = last_train_line['memory']
+    result_dict.update({
+        key: last_val_line[key]
+        for key in results_lut if key in last_val_line
+    })
+
+    return result_dict
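+# get_final_results assumes mmdet-style json logs with one JSON object per
+# line, e.g. (illustrative values):
+#   {"mode": "train", "epoch": 12, "iter": 7330, "memory": 4416, ...}
+#   {"mode": "val", "epoch": 12, "bbox_mAP": 0.374, ...}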
+
+
+def get_dataset_name(config):
+    # If more datasets are supported, add them here.
+    name_map = dict(
+        CityscapesDataset='Cityscapes',
+        CocoDataset='COCO',
+        CocoPanopticDataset='COCO',
+        DeepFashionDataset='Deep Fashion',
+        LVISV05Dataset='LVIS v0.5',
+        LVISV1Dataset='LVIS v1',
+        VOCDataset='Pascal VOC',
+        WIDERFaceDataset='WIDER Face',
+        OpenImagesDataset='OpenImagesDataset',
+        OpenImagesChallengeDataset='OpenImagesChallengeDataset',
+        Objects365V1Dataset='Objects365 v1',
+        Objects365V2Dataset='Objects365 v2')
+    cfg = Config.fromfile('./configs/' + config)
+    return name_map[cfg.dataset_type]
+
+
+def convert_model_info_to_pwc(model_infos):
+    pwc_files = {}
+    for model in model_infos:
+        cfg_folder_name = osp.split(model['config'])[-2]
+        pwc_model_info = OrderedDict()
+        pwc_model_info['Name'] = osp.split(model['config'])[-1].split('.')[0]
+        pwc_model_info['In Collection'] = 'Please fill in Collection name'
+        pwc_model_info['Config'] = osp.join('configs', model['config'])
+
+        # get metadata
+        memory = round(model['results']['memory'] / 1024, 1)
+        meta_data = OrderedDict()
+        meta_data['Training Memory (GB)'] = memory
+        if 'epochs' in model:
+            meta_data['Epochs'] = get_real_epoch_or_iter(model['config'])
+        else:
+            meta_data['Iterations'] = get_real_epoch_or_iter(model['config'])
+        pwc_model_info['Metadata'] = meta_data
+
+        # get dataset name
+        dataset_name = get_dataset_name(model['config'])
+
+        # get results
+        results = []
+        # if there are more metrics, add here.
+        if 'bbox_mAP' in model['results']:
+            metric = round(model['results']['bbox_mAP'] * 100, 1)
+            results.append(
+                OrderedDict(
+                    Task='Object Detection',
+                    Dataset=dataset_name,
+                    Metrics={'box AP': metric}))
+        if 'segm_mAP' in model['results']:
+            metric = round(model['results']['segm_mAP'] * 100, 1)
+            results.append(
+                OrderedDict(
+                    Task='Instance Segmentation',
+                    Dataset=dataset_name,
+                    Metrics={'mask AP': metric}))
+        if 'PQ' in model['results']:
+            metric = round(model['results']['PQ'], 1)
+            results.append(
+                OrderedDict(
+                    Task='Panoptic Segmentation',
+                    Dataset=dataset_name,
+                    Metrics={'PQ': metric}))
+        pwc_model_info['Results'] = results
+
+        link_string = 'https://download.openmmlab.com/mmdetection/v2.0/'
+        # use splitext instead of rstrip('.py'), which strips characters
+        link_string += '{}/{}'.format(
+            osp.splitext(model['config'])[0],
+            osp.split(model['model_path'])[-1])
+        pwc_model_info['Weights'] = link_string
+        if cfg_folder_name in pwc_files:
+            pwc_files[cfg_folder_name].append(pwc_model_info)
+        else:
+            pwc_files[cfg_folder_name] = [pwc_model_info]
+    return pwc_files
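+# Each metafile entry emitted above looks roughly like this (values are
+# illustrative, rendered via ordered_yaml_dump):
+#   - Name: atss_r50_fpn_1x_coco
+#     In Collection: Please fill in Collection name
+#     Config: configs/atss/atss_r50_fpn_1x_coco.py
+#     Metadata: {Training Memory (GB): 4.3, Epochs: 12}
+#     Results:
+#       - {Task: Object Detection, Dataset: COCO, Metrics: {box AP: 39.4}}
+#     Weights: https://download.openmmlab.com/mmdetection/v2.0/atss/...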
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(description='Gather benchmarked models')
+    parser.add_argument(
+        'root',
+        type=str,
+        help='root path of benchmarked models to be gathered')
+    parser.add_argument(
+        'out', type=str, help='output path of gathered models to be stored')
+    parser.add_argument(
+        '--best',
+        action='store_true',
+        help='whether to gather the best model.')
+
+    args = parser.parse_args()
+    return args
+
+
+def main():
+    args = parse_args()
+    models_root = args.root
+    models_out = args.out
+    mkdir_or_exist(models_out)
+
+    # find all models in the root directory to be gathered
+    raw_configs = list(scandir('./configs', '.py', recursive=True))
+
+    # filter out configs that were not trained in the experiments dir
+    used_configs = []
+    for raw_config in raw_configs:
+        if osp.exists(osp.join(models_root, raw_config)):
+            used_configs.append(raw_config)
+    print(f'Found {len(used_configs)} models to be gathered')
+
+    # find the final checkpoint and log file for each trained config
+    # and parse the best performance
+    model_infos = []
+    for used_config in used_configs:
+        exp_dir = osp.join(models_root, used_config)
+        by_epoch = is_by_epoch(used_config)
+        # check whether the experiment has finished
+        if args.best:
+            final_model, final_epoch_or_iter = get_best_epoch_or_iter(exp_dir)
+        else:
+            final_epoch_or_iter = get_final_epoch_or_iter(used_config)
+            final_model = '{}_{}.pth'.format('epoch' if by_epoch else 'iter',
+                                             final_epoch_or_iter)
+
+        model_path = osp.join(exp_dir, final_model)
+        # skip if the model is still training
+        if not osp.exists(model_path):
+            continue
+
+        # get the latest logs
+        log_json_path = list(
+            sorted(glob.glob(osp.join(exp_dir, '*.log.json'))))[-1]
+        log_txt_path = list(sorted(glob.glob(osp.join(exp_dir, '*.log'))))[-1]
+        cfg = Config.fromfile('./configs/' + used_config)
+        results_lut = cfg.evaluation.metric
+        if not isinstance(results_lut, list):
+            results_lut = [results_lut]
+        # when using VOC, the evaluation key is only 'mAP';
+        # when using a panoptic dataset, the evaluation key is 'PQ'.
+        for i, key in enumerate(results_lut):
+            if 'mAP' not in key and 'PQ' not in key:
+                results_lut[i] = key + '_mAP'
+        model_performance = get_final_results(log_json_path,
+                                              final_epoch_or_iter, results_lut,
+                                              by_epoch)
+
+        if model_performance is None:
+            continue
+
+        model_time = osp.split(log_txt_path)[-1].split('.')[0]
+        model_info = dict(
+            config=used_config,
+            results=model_performance,
+            model_time=model_time,
+            final_model=final_model,
+            log_json_path=osp.split(log_json_path)[-1])
+        model_info['epochs' if by_epoch else 'iterations'] =\
+            final_epoch_or_iter
+        model_infos.append(model_info)
+
+    # publish model for each checkpoint
+    publish_model_infos = []
+    for model in model_infos:
+        model_publish_dir = osp.join(models_out,
+                                     osp.splitext(model['config'])[0])
+        mkdir_or_exist(model_publish_dir)
+
+        model_name = osp.split(model['config'])[-1].split('.')[0]
+
+        model_name += '_' + model['model_time']
+        publish_model_path = osp.join(model_publish_dir, model_name)
+        trained_model_path = osp.join(models_root, model['config'],
+                                      model['final_model'])
+
+        # convert model
+        final_model_path = process_checkpoint(trained_model_path,
+                                              publish_model_path)
+
+        # copy log
+        shutil.copy(
+            osp.join(models_root, model['config'], model['log_json_path']),
+            osp.join(model_publish_dir, f'{model_name}.log.json'))
+        shutil.copy(
+            osp.join(models_root, model['config'],
+                     osp.splitext(model['log_json_path'])[0]),
+            osp.join(model_publish_dir, f'{model_name}.log'))
+
+        # copy config to guarantee reproducibility
+        config_path = model['config']
+        config_path = osp.join(
+            'configs',
+            config_path) if 'configs' not in config_path else config_path
+        target_config_path = osp.split(config_path)[-1]
+        shutil.copy(config_path, osp.join(model_publish_dir,
+                                          target_config_path))
+
+        model['model_path'] = final_model_path
+        publish_model_infos.append(model)
+
+    models = dict(models=publish_model_infos)
+    print(f'Totally gathered {len(publish_model_infos)} models')
+    dump(models, osp.join(models_out, 'model_info.json'))
+
+    pwc_files = convert_model_info_to_pwc(publish_model_infos)
+    for name in pwc_files:
+        with open(osp.join(models_out, name + '_metafile.yml'), 'w') as f:
+            ordered_yaml_dump(pwc_files[name], f, encoding='utf-8')
+
+
+if __name__ == '__main__':
+    main()

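Usage note: gather_models.py expects every training work dir to mirror its config path under the given root (e.g. `<root>/atss/atss_r50_fpn_1x_coco.py/`). A hedged sketch with illustrative paths:

    # gather the final-epoch (or final-iter) checkpoints
    python .dev_scripts/gather_models.py work_dirs/ publish/
    # gather the best_*.pth checkpoints instead
    python .dev_scripts/gather_models.py work_dirs/ publish/ --best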
+ 96 - 0
.dev_scripts/gather_test_benchmark_metric.py

@@ -0,0 +1,96 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+import argparse
+import glob
+import os.path as osp
+
+from mmengine.config import Config
+from mmengine.fileio import dump, load
+from mmengine.utils import mkdir_or_exist
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(
+        description='Gather benchmarked models metric')
+    parser.add_argument('config', help='test config file path')
+    parser.add_argument(
+        'root',
+        type=str,
+        help='root path of benchmarked models to be gathered')
+    parser.add_argument(
+        '--out', type=str, help='output path of gathered metrics to be stored')
+    parser.add_argument(
+        '--not-show', action='store_true', help='not show metrics')
+    parser.add_argument(
+        '--show-all', action='store_true', help='show all model metrics')
+
+    args = parser.parse_args()
+    return args
+
+
+if __name__ == '__main__':
+    args = parse_args()
+
+    root_path = args.root
+    metrics_out = args.out
+    result_dict = {}
+
+    cfg = Config.fromfile(args.config)
+
+    for model_key in cfg:
+        model_infos = cfg[model_key]
+        if not isinstance(model_infos, list):
+            model_infos = [model_infos]
+        for model_info in model_infos:
+            record_metrics = model_info['metric']
+            config = model_info['config'].strip()
+            fname, _ = osp.splitext(osp.basename(config))
+            metric_json_dir = osp.join(root_path, fname)
+            if osp.exists(metric_json_dir):
+                json_list = glob.glob(osp.join(metric_json_dir, '*.json'))
+                if len(json_list) > 0:
+                    log_json_path = list(sorted(json_list))[-1]
+
+                    metric = load(log_json_path)
+                    if config in metric.get('config', {}):
+
+                        new_metrics = dict()
+                        for record_metric_key in record_metrics:
+                            record_metric_key_bk = record_metric_key
+                            old_metric = record_metrics[record_metric_key]
+                            if record_metric_key == 'AR_1000':
+                                record_metric_key = 'AR@1000'
+                            if record_metric_key not in metric['metric']:
+                                raise KeyError(
+                                    'record_metric_key not exist, please '
+                                    'check your config')
+                            new_metric = round(
+                                metric['metric'][record_metric_key] * 100, 1)
+                            new_metrics[record_metric_key_bk] = new_metric
+
+                        if args.show_all:
+                            result_dict[config] = dict(
+                                before=record_metrics, after=new_metrics)
+                        else:
+                            for record_metric_key in record_metrics:
+                                old_metric = record_metrics[record_metric_key]
+                                new_metric = new_metrics[record_metric_key]
+                                if old_metric != new_metric:
+                                    result_dict[config] = dict(
+                                        before=record_metrics,
+                                        after=new_metrics)
+                                    break
+                    else:
+                        print(f'{config} not included in: {log_json_path}')
+                else:
+                    print(f'{config}: no json file found in {metric_json_dir}')
+            else:
+                print(f'{config}: directory {metric_json_dir} does not exist')
+
+    if metrics_out:
+        mkdir_or_exist(metrics_out)
+        dump(result_dict, osp.join(metrics_out, 'batch_test_metric_info.json'))
+    if not args.not_show:
+        print('===================================')
+        for config_name, metrics in result_dict.items():
+            print(config_name, metrics)
+        print('===================================')

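Usage note: this script compares the metrics recorded in the test config against the newest json result under each model's work dir and, unless `--show-all` is given, reports only the entries whose values changed. A hedged sketch with illustrative paths:

    python .dev_scripts/gather_test_benchmark_metric.py checkpoint_cfg.py \
        work_dirs/test/ --out metrics/ --show-all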
+ 151 - 0
.dev_scripts/gather_train_benchmark_metric.py

@@ -0,0 +1,151 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+import argparse
+import glob
+import os.path as osp
+
+from gather_models import get_final_results
+from mmengine.config import Config
+from mmengine.fileio import dump
+from mmengine.utils import mkdir_or_exist
+
+try:
+    import xlrd
+except ImportError:
+    xlrd = None
+try:
+    import xlutils
+    from xlutils.copy import copy
+except ImportError:
+    xlutils = None
+
+
+def parse_args():
+    parser = argparse.ArgumentParser(
+        description='Gather benchmarked models metric')
+    parser.add_argument(
+        'root',
+        type=str,
+        help='root path of benchmarked models to be gathered')
+    parser.add_argument(
+        'txt_path', type=str, help='txt path output by benchmark_filter')
+    parser.add_argument(
+        '--out', type=str, help='output path of gathered metrics to be stored')
+    parser.add_argument(
+        '--not-show', action='store_true', help='not show metrics')
+    parser.add_argument(
+        '--excel', type=str, help='input path of excel to be recorded')
+    parser.add_argument(
+        '--ncol', type=int, help='Number of column to be modified or appended')
+
+    args = parser.parse_args()
+    return args
+
+
+if __name__ == '__main__':
+    args = parse_args()
+
+    if args.excel:
+        assert args.ncol, 'Please specify "--excel" and "--ncol" ' \
+                          'at the same time'
+        if xlrd is None:
+            raise RuntimeError(
+                'xlrd is not installed, please run '
+                '"pip install xlrd==1.2.0" to install it')
+        if xlutils is None:
+            raise RuntimeError(
+                'xlutils is not installed, please run '
+                '"pip install xlutils==2.0.0" to install it')
+        readbook = xlrd.open_workbook(args.excel)
+        sheet = readbook.sheet_by_name('Sheet1')
+        sheet_info = {}
+        total_nrows = sheet.nrows
+        for i in range(3, sheet.nrows):
+            sheet_info[sheet.row_values(i)[0]] = i
+        xlrw = copy(readbook)
+        table = xlrw.get_sheet(0)
+
+    root_path = args.root
+    metrics_out = args.out
+
+    result_dict = {}
+    with open(args.txt_path, 'r') as f:
+        model_cfgs = f.readlines()
+        for i, config in enumerate(model_cfgs):
+            config = config.strip()
+            if len(config) == 0:
+                continue
+
+            config_name = osp.split(config)[-1]
+            config_name = osp.splitext(config_name)[0]
+            result_path = osp.join(root_path, config_name)
+            if osp.exists(result_path):
+                # 1 read config
+                cfg = Config.fromfile(config)
+                total_epochs = cfg.runner.max_epochs
+                final_results = cfg.evaluation.metric
+                if not isinstance(final_results, list):
+                    final_results = [final_results]
+                final_results_out = []
+                for key in final_results:
+                    if 'proposal_fast' in key:
+                        final_results_out.append('AR@1000')  # RPN
+                    elif 'mAP' not in key:
+                        final_results_out.append(key + '_mAP')
+
+                # 2 determine whether total_epochs ckpt exists
+                ckpt_path = f'epoch_{total_epochs}.pth'
+                if osp.exists(osp.join(result_path, ckpt_path)):
+                    log_json_path = list(
+                        sorted(glob.glob(osp.join(result_path,
+                                                  '*.log.json'))))[-1]
+
+                    # 3 read metric
+                    model_performance = get_final_results(
+                        log_json_path, total_epochs, final_results_out)
+                    if model_performance is None:
+                        print(f'log file error: {log_json_path}')
+                        continue
+                    for performance in model_performance:
+                        if performance in ['AR@1000', 'bbox_mAP', 'segm_mAP']:
+                            metric = round(
+                                model_performance[performance] * 100, 1)
+                            model_performance[performance] = metric
+                    result_dict[config] = model_performance
+
+                    # update and append excel content
+                    if args.excel:
+                        if 'AR@1000' in model_performance:
+                            metrics = f'{model_performance["AR@1000"]}' \
+                                      f'(AR@1000)'
+                        elif 'segm_mAP' in model_performance:
+                            metrics = f'{model_performance["bbox_mAP"]}/' \
+                                      f'{model_performance["segm_mAP"]}'
+                        else:
+                            metrics = f'{model_performance["bbox_mAP"]}'
+
+                        row_num = sheet_info.get(config, None)
+                        if row_num:
+                            table.write(row_num, args.ncol, metrics)
+                        else:
+                            table.write(total_nrows, 0, config)
+                            table.write(total_nrows, args.ncol, metrics)
+                            total_nrows += 1
+
+                else:
+                    print(f'{config}: checkpoint {ckpt_path} does not exist')
+            else:
+                print(f'result dir does not exist for: {config}')
+
+        # 4 save or print results
+        if metrics_out:
+            mkdir_or_exist(metrics_out)
+            dump(result_dict, osp.join(metrics_out, 'model_metric_info.json'))
+        if not args.not_show:
+            print('===================================')
+            for config_name, metrics in result_dict.items():
+                print(config_name, metrics)
+            print('===================================')
+        if args.excel:
+            filename, suffix = osp.splitext(args.excel)
+            xlrw.save(f'{filename}_o{suffix}')
+            print(f'>>> Output {filename}_o{suffix}')

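Usage note: the excel branch rewrites an existing `Sheet1`, matching rows by the config path in column 0 (data rows start at row index 3) and writing the new metric string into column `--ncol`; unmatched configs are appended at the bottom, and the result is saved as `<name>_o<ext>`. A hedged sketch with illustrative paths:

    python .dev_scripts/gather_train_benchmark_metric.py work_dirs/train/ \
        benchmark_list.txt --out metrics/ --excel report.xls --ncol 5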
+ 3 - 0
.dev_scripts/linter.sh

@@ -0,0 +1,3 @@
+yapf -r -i mmdet/ configs/ tests/ tools/
+isort -rc mmdet/ configs/ tests/ tools/
+flake8 .

+ 157 - 0
.dev_scripts/test_benchmark.sh

@@ -0,0 +1,157 @@
+PARTITION=$1
+CHECKPOINT_DIR=$2
+WORK_DIR=$3
+CPUS_PER_TASK=${4:-2}
+# the launch lines below read $CPUS_PRE_TASK (a historical typo); alias it so
+# the requested CPU count actually takes effect
+CPUS_PRE_TASK=$CPUS_PER_TASK
+
+echo 'configs/atss/atss_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION atss_r50_fpn_1x_coco configs/atss/atss_r50_fpn_1x_coco.py $CHECKPOINT_DIR/atss_r50_fpn_1x_coco_20200209-985f7bd0.pth --work-dir $WORK_DIR/atss_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29666  &
+echo 'configs/autoassign/autoassign_r50-caffe_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION autoassign_r50-caffe_fpn_1x_coco configs/autoassign/autoassign_r50-caffe_fpn_1x_coco.py $CHECKPOINT_DIR/auto_assign_r50_fpn_1x_coco_20210413_115540-5e17991f.pth --work-dir $WORK_DIR/autoassign_r50-caffe_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29667  &
+echo 'configs/carafe/faster-rcnn_r50_fpn-carafe_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION faster-rcnn_r50_fpn-carafe_1x_coco configs/carafe/faster-rcnn_r50_fpn-carafe_1x_coco.py $CHECKPOINT_DIR/faster_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.386_20200504_175733-385a75b7.pth --work-dir $WORK_DIR/faster-rcnn_r50_fpn-carafe_1x_coco --cfg-option env_cfg.dist_cfg.port=29668  &
+echo 'configs/cascade_rcnn/cascade-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION cascade-rcnn_r50_fpn_1x_coco configs/cascade_rcnn/cascade-rcnn_r50_fpn_1x_coco.py $CHECKPOINT_DIR/cascade_rcnn_r50_fpn_1x_coco_20200316-3dc56deb.pth --work-dir $WORK_DIR/cascade-rcnn_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29669  &
+echo 'configs/cascade_rcnn/cascade-mask-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION cascade-mask-rcnn_r50_fpn_1x_coco configs/cascade_rcnn/cascade-mask-rcnn_r50_fpn_1x_coco.py $CHECKPOINT_DIR/cascade_mask_rcnn_r50_fpn_1x_coco_20200203-9d4dcb24.pth --work-dir $WORK_DIR/cascade-mask-rcnn_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29670  &
+echo 'configs/cascade_rpn/cascade-rpn_faster-rcnn_r50-caffe_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION cascade-rpn_faster-rcnn_r50-caffe_fpn_1x_coco configs/cascade_rpn/cascade-rpn_faster-rcnn_r50-caffe_fpn_1x_coco.py $CHECKPOINT_DIR/crpn_faster_rcnn_r50_caffe_fpn_1x_coco-c8283cca.pth --work-dir $WORK_DIR/cascade-rpn_faster-rcnn_r50-caffe_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29671  &
+echo 'configs/centernet/centernet_r18-dcnv2_8xb16-crop512-140e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION centernet_r18-dcnv2_8xb16-crop512-140e_coco configs/centernet/centernet_r18-dcnv2_8xb16-crop512-140e_coco.py $CHECKPOINT_DIR/centernet_resnet18_dcnv2_140e_coco_20210702_155131-c8cd631f.pth --work-dir $WORK_DIR/centernet_r18-dcnv2_8xb16-crop512-140e_coco --cfg-option env_cfg.dist_cfg.port=29672  &
+echo 'configs/centripetalnet/centripetalnet_hourglass104_16xb6-crop511-210e-mstest_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION centripetalnet_hourglass104_16xb6-crop511-210e-mstest_coco configs/centripetalnet/centripetalnet_hourglass104_16xb6-crop511-210e-mstest_coco.py $CHECKPOINT_DIR/centripetalnet_hourglass104_mstest_16x6_210e_coco_20200915_204804-3ccc61e5.pth --work-dir $WORK_DIR/centripetalnet_hourglass104_16xb6-crop511-210e-mstest_coco --cfg-option env_cfg.dist_cfg.port=29673  &
+echo 'configs/convnext/cascade-mask-rcnn_convnext-s-p4-w7_fpn_4conv1fc-giou_amp-ms-crop-3x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION cascade-mask-rcnn_convnext-s-p4-w7_fpn_4conv1fc-giou_amp-ms-crop-3x_coco configs/convnext/cascade-mask-rcnn_convnext-s-p4-w7_fpn_4conv1fc-giou_amp-ms-crop-3x_coco.py $CHECKPOINT_DIR/cascade_mask_rcnn_convnext-s_p4_w7_fpn_giou_4conv1f_fp16_ms-crop_3x_coco_20220510_201004-3d24f5a4.pth --work-dir $WORK_DIR/cascade-mask-rcnn_convnext-s-p4-w7_fpn_4conv1fc-giou_amp-ms-crop-3x_coco --cfg-option env_cfg.dist_cfg.port=29674  &
+echo 'configs/cornernet/cornernet_hourglass104_8xb6-210e-mstest_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION cornernet_hourglass104_8xb6-210e-mstest_coco configs/cornernet/cornernet_hourglass104_8xb6-210e-mstest_coco.py $CHECKPOINT_DIR/cornernet_hourglass104_mstest_8x6_210e_coco_20200825_150618-79b44c30.pth --work-dir $WORK_DIR/cornernet_hourglass104_8xb6-210e-mstest_coco --cfg-option env_cfg.dist_cfg.port=29675  &
+echo 'configs/dcn/faster-rcnn_r50-dconv-c3-c5_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION faster-rcnn_r50-dconv-c3-c5_fpn_1x_coco configs/dcn/faster-rcnn_r50-dconv-c3-c5_fpn_1x_coco.py $CHECKPOINT_DIR/faster_rcnn_r50_fpn_dconv_c3-c5_1x_coco_20200130-d68aed1e.pth --work-dir $WORK_DIR/faster-rcnn_r50-dconv-c3-c5_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29676  &
+echo 'configs/dcnv2/faster-rcnn_r50_fpn_mdpool_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION faster-rcnn_r50_fpn_mdpool_1x_coco configs/dcnv2/faster-rcnn_r50_fpn_mdpool_1x_coco.py $CHECKPOINT_DIR/faster_rcnn_r50_fpn_mdpool_1x_coco_20200307-c0df27ff.pth --work-dir $WORK_DIR/faster-rcnn_r50_fpn_mdpool_1x_coco --cfg-option env_cfg.dist_cfg.port=29677  &
+echo 'configs/ddod/ddod_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION ddod_r50_fpn_1x_coco configs/ddod/ddod_r50_fpn_1x_coco.py $CHECKPOINT_DIR/ddod_r50_fpn_1x_coco_20220523_223737-29b2fc67.pth --work-dir $WORK_DIR/ddod_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29678  &
+echo 'configs/deformable_detr/deformable-detr_r50_16xb2-50e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION deformable-detr_r50_16xb2-50e_coco configs/deformable_detr/deformable-detr_r50_16xb2-50e_coco.py $CHECKPOINT_DIR/deformable_detr_r50_16x2_50e_coco_20210419_220030-a12b9512.pth --work-dir $WORK_DIR/deformable-detr_r50_16xb2-50e_coco --cfg-option env_cfg.dist_cfg.port=29679  &
+echo 'configs/detectors/detectors_htc-r50_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION detectors_htc-r50_1x_coco configs/detectors/detectors_htc-r50_1x_coco.py $CHECKPOINT_DIR/detectors_htc_r50_1x_coco-329b1453.pth --work-dir $WORK_DIR/detectors_htc-r50_1x_coco --cfg-option env_cfg.dist_cfg.port=29680  &
+echo 'configs/detr/detr_r50_8xb2-150e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION detr_r50_8xb2-150e_coco configs/detr/detr_r50_8xb2-150e_coco.py $CHECKPOINT_DIR/detr_r50_8x2_150e_coco_20201130_194835-2c4b8974.pth --work-dir $WORK_DIR/detr_r50_8xb2-150e_coco --cfg-option env_cfg.dist_cfg.port=29681  &
+echo 'configs/double_heads/dh-faster-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION dh-faster-rcnn_r50_fpn_1x_coco configs/double_heads/dh-faster-rcnn_r50_fpn_1x_coco.py $CHECKPOINT_DIR/dh_faster_rcnn_r50_fpn_1x_coco_20200130-586b67df.pth --work-dir $WORK_DIR/dh-faster-rcnn_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29682  &
+echo 'configs/dyhead/atss_r50_fpn_dyhead_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION atss_r50_fpn_dyhead_1x_coco configs/dyhead/atss_r50_fpn_dyhead_1x_coco.py $CHECKPOINT_DIR/atss_r50_fpn_dyhead_4x4_1x_coco_20211219_023314-eaa620c6.pth --work-dir $WORK_DIR/atss_r50_fpn_dyhead_1x_coco --cfg-option env_cfg.dist_cfg.port=29683  &
+echo 'configs/dynamic_rcnn/dynamic-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION dynamic-rcnn_r50_fpn_1x_coco configs/dynamic_rcnn/dynamic-rcnn_r50_fpn_1x_coco.py $CHECKPOINT_DIR/dynamic_rcnn_r50_fpn_1x-62a3f276.pth --work-dir $WORK_DIR/dynamic-rcnn_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29684  &
+echo 'configs/efficientnet/retinanet_effb3_fpn_8xb4-crop896-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION retinanet_effb3_fpn_8xb4-crop896-1x_coco configs/efficientnet/retinanet_effb3_fpn_8xb4-crop896-1x_coco.py $CHECKPOINT_DIR/retinanet_effb3_fpn_crop896_8x4_1x_coco_20220322_234806-615a0dda.pth --work-dir $WORK_DIR/retinanet_effb3_fpn_8xb4-crop896-1x_coco --cfg-option env_cfg.dist_cfg.port=29685  &
+echo 'configs/empirical_attention/faster-rcnn_r50-attn1111_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION faster-rcnn_r50-attn1111_fpn_1x_coco configs/empirical_attention/faster-rcnn_r50-attn1111_fpn_1x_coco.py $CHECKPOINT_DIR/faster_rcnn_r50_fpn_attention_1111_1x_coco_20200130-403cccba.pth --work-dir $WORK_DIR/faster-rcnn_r50-attn1111_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29686  &
+echo 'configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION faster-rcnn_r50_fpn_1x_coco configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py $CHECKPOINT_DIR/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth --work-dir $WORK_DIR/faster-rcnn_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29687  &
+echo 'configs/fcos/fcos_r50-caffe_fpn_gn-head-center-normbbox-centeronreg-giou_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION fcos_r50-caffe_fpn_gn-head-center-normbbox-centeronreg-giou_1x_coco configs/fcos/fcos_r50-caffe_fpn_gn-head-center-normbbox-centeronreg-giou_1x_coco.py $CHECKPOINT_DIR/fcos_center-normbbox-centeronreg-giou_r50_caffe_fpn_gn-head_1x_coco-0a0d75a8.pth --work-dir $WORK_DIR/fcos_r50-caffe_fpn_gn-head-center-normbbox-centeronreg-giou_1x_coco --cfg-option env_cfg.dist_cfg.port=29688  &
+echo 'configs/foveabox/fovea_r50_fpn_gn-head-align_4xb4-2x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION fovea_r50_fpn_gn-head-align_4xb4-2x_coco configs/foveabox/fovea_r50_fpn_gn-head-align_4xb4-2x_coco.py $CHECKPOINT_DIR/fovea_align_r50_fpn_gn-head_4x4_2x_coco_20200203-8987880d.pth --work-dir $WORK_DIR/fovea_r50_fpn_gn-head-align_4xb4-2x_coco --cfg-option env_cfg.dist_cfg.port=29689  &
+echo 'configs/fpg/mask-rcnn_r50_fpg_crop640-50e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION mask-rcnn_r50_fpg_crop640-50e_coco configs/fpg/mask-rcnn_r50_fpg_crop640-50e_coco.py $CHECKPOINT_DIR/mask_rcnn_r50_fpg_crop640_50e_coco_20220311_011857-233b8334.pth --work-dir $WORK_DIR/mask-rcnn_r50_fpg_crop640-50e_coco --cfg-option env_cfg.dist_cfg.port=29690  &
+echo 'configs/free_anchor/freeanchor_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION freeanchor_r50_fpn_1x_coco configs/free_anchor/freeanchor_r50_fpn_1x_coco.py $CHECKPOINT_DIR/retinanet_free_anchor_r50_fpn_1x_coco_20200130-0f67375f.pth --work-dir $WORK_DIR/freeanchor_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29691  &
+echo 'configs/fsaf/fsaf_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION fsaf_r50_fpn_1x_coco configs/fsaf/fsaf_r50_fpn_1x_coco.py $CHECKPOINT_DIR/fsaf_r50_fpn_1x_coco-94ccc51f.pth --work-dir $WORK_DIR/fsaf_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29692  &
+echo 'configs/gcnet/mask-rcnn_r50-syncbn-gcb-r16-c3-c5_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION mask-rcnn_r50-syncbn-gcb-r16-c3-c5_fpn_1x_coco configs/gcnet/mask-rcnn_r50-syncbn-gcb-r16-c3-c5_fpn_1x_coco.py $CHECKPOINT_DIR/mask_rcnn_r50_fpn_syncbn-backbone_r16_gcb_c3-c5_1x_coco_20200202-587b99aa.pth --work-dir $WORK_DIR/mask-rcnn_r50-syncbn-gcb-r16-c3-c5_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29693  &
+echo 'configs/gfl/gfl_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION gfl_r50_fpn_1x_coco configs/gfl/gfl_r50_fpn_1x_coco.py $CHECKPOINT_DIR/gfl_r50_fpn_1x_coco_20200629_121244-25944287.pth --work-dir $WORK_DIR/gfl_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29694  &
+echo 'configs/ghm/retinanet_r50_fpn_ghm-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION retinanet_r50_fpn_ghm-1x_coco configs/ghm/retinanet_r50_fpn_ghm-1x_coco.py $CHECKPOINT_DIR/retinanet_ghm_r50_fpn_1x_coco_20200130-a437fda3.pth --work-dir $WORK_DIR/retinanet_r50_fpn_ghm-1x_coco --cfg-option env_cfg.dist_cfg.port=29695  &
+echo 'configs/gn/mask-rcnn_r50_fpn_gn-all_2x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION mask-rcnn_r50_fpn_gn-all_2x_coco configs/gn/mask-rcnn_r50_fpn_gn-all_2x_coco.py $CHECKPOINT_DIR/mask_rcnn_r50_fpn_gn-all_2x_coco_20200206-8eee02a6.pth --work-dir $WORK_DIR/mask-rcnn_r50_fpn_gn-all_2x_coco --cfg-option env_cfg.dist_cfg.port=29696  &
+echo 'configs/gn+ws/faster-rcnn_r50_fpn_gn-ws-all_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION faster-rcnn_r50_fpn_gn-ws-all_1x_coco configs/gn+ws/faster-rcnn_r50_fpn_gn-ws-all_1x_coco.py $CHECKPOINT_DIR/faster_rcnn_r50_fpn_gn_ws-all_1x_coco_20200130-613d9fe2.pth --work-dir $WORK_DIR/faster-rcnn_r50_fpn_gn-ws-all_1x_coco --cfg-option env_cfg.dist_cfg.port=29697  &
+echo 'configs/grid_rcnn/grid-rcnn_r50_fpn_gn-head_2x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION grid-rcnn_r50_fpn_gn-head_2x_coco configs/grid_rcnn/grid-rcnn_r50_fpn_gn-head_2x_coco.py $CHECKPOINT_DIR/grid_rcnn_r50_fpn_gn-head_2x_coco_20200130-6cca8223.pth --work-dir $WORK_DIR/grid-rcnn_r50_fpn_gn-head_2x_coco --cfg-option env_cfg.dist_cfg.port=29698  &
+echo 'configs/groie/faste-rcnn_r50_fpn_groie_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION faste-rcnn_r50_fpn_groie_1x_coco configs/groie/faste-rcnn_r50_fpn_groie_1x_coco.py $CHECKPOINT_DIR/faster_rcnn_r50_fpn_groie_1x_coco_20200604_211715-66ee9516.pth --work-dir $WORK_DIR/faste-rcnn_r50_fpn_groie_1x_coco --cfg-option env_cfg.dist_cfg.port=29699  &
+echo 'configs/guided_anchoring/ga-retinanet_r50-caffe_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION ga-retinanet_r50-caffe_fpn_1x_coco configs/guided_anchoring/ga-retinanet_r50-caffe_fpn_1x_coco.py $CHECKPOINT_DIR/ga_retinanet_r50_caffe_fpn_1x_coco_20201020-39581c6f.pth --work-dir $WORK_DIR/ga-retinanet_r50-caffe_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29700  &
+echo 'configs/hrnet/faster-rcnn_hrnetv2p-w18-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION faster-rcnn_hrnetv2p-w18-1x_coco configs/hrnet/faster-rcnn_hrnetv2p-w18-1x_coco.py $CHECKPOINT_DIR/faster_rcnn_hrnetv2p_w18_1x_coco_20200130-56651a6d.pth --work-dir $WORK_DIR/faster-rcnn_hrnetv2p-w18-1x_coco --cfg-option env_cfg.dist_cfg.port=29701  &
+echo 'configs/htc/htc_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION htc_r50_fpn_1x_coco configs/htc/htc_r50_fpn_1x_coco.py $CHECKPOINT_DIR/htc_r50_fpn_1x_coco_20200317-7332cf16.pth --work-dir $WORK_DIR/htc_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29702  &
+echo 'configs/instaboost/mask-rcnn_r50_fpn_instaboost-4x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION mask-rcnn_r50_fpn_instaboost-4x_coco configs/instaboost/mask-rcnn_r50_fpn_instaboost-4x_coco.py $CHECKPOINT_DIR/mask_rcnn_r50_fpn_instaboost_4x_coco_20200307-d025f83a.pth --work-dir $WORK_DIR/mask-rcnn_r50_fpn_instaboost-4x_coco --cfg-option env_cfg.dist_cfg.port=29703  &
+echo 'configs/libra_rcnn/libra-faster-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION libra-faster-rcnn_r50_fpn_1x_coco configs/libra_rcnn/libra-faster-rcnn_r50_fpn_1x_coco.py $CHECKPOINT_DIR/libra_faster_rcnn_r50_fpn_1x_coco_20200130-3afee3a9.pth --work-dir $WORK_DIR/libra-faster-rcnn_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29704  &
+echo 'configs/mask2former/mask2former_r50_8xb2-lsj-50e_coco-panoptic.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION mask2former_r50_8xb2-lsj-50e_coco-panoptic configs/mask2former/mask2former_r50_8xb2-lsj-50e_coco-panoptic.py $CHECKPOINT_DIR/mask2former_r50_lsj_8x2_50e_coco-panoptic_20220326_224516-11a44721.pth --work-dir $WORK_DIR/mask2former_r50_8xb2-lsj-50e_coco-panoptic --cfg-option env_cfg.dist_cfg.port=29705  &
+echo 'configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION mask-rcnn_r50_fpn_1x_coco configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py $CHECKPOINT_DIR/mask_rcnn_r50_fpn_1x_coco_20200205-d4b0c5d6.pth --work-dir $WORK_DIR/mask-rcnn_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29706  &
+echo 'configs/maskformer/maskformer_r50_ms-16xb1-75e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION maskformer_r50_ms-16xb1-75e_coco configs/maskformer/maskformer_r50_ms-16xb1-75e_coco.py $CHECKPOINT_DIR/maskformer_r50_mstrain_16x1_75e_coco_20220221_141956-bc2699cb.pth --work-dir $WORK_DIR/maskformer_r50_ms-16xb1-75e_coco --cfg-option env_cfg.dist_cfg.port=29707  &
+echo 'configs/ms_rcnn/ms-rcnn_r50-caffe_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION ms-rcnn_r50-caffe_fpn_1x_coco configs/ms_rcnn/ms-rcnn_r50-caffe_fpn_1x_coco.py $CHECKPOINT_DIR/ms_rcnn_r50_caffe_fpn_1x_coco_20200702_180848-61c9355e.pth --work-dir $WORK_DIR/ms-rcnn_r50-caffe_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29708  &
+echo 'configs/nas_fcos/nas-fcos_r50-caffe_fpn_nashead-gn-head_4xb4-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION nas-fcos_r50-caffe_fpn_nashead-gn-head_4xb4-1x_coco configs/nas_fcos/nas-fcos_r50-caffe_fpn_nashead-gn-head_4xb4-1x_coco.py $CHECKPOINT_DIR/nas_fcos_nashead_r50_caffe_fpn_gn-head_4x4_1x_coco_20200520-1bdba3ce.pth --work-dir $WORK_DIR/nas-fcos_r50-caffe_fpn_nashead-gn-head_4xb4-1x_coco --cfg-option env_cfg.dist_cfg.port=29709  &
+echo 'configs/nas_fpn/retinanet_r50_nasfpn_crop640-50e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION retinanet_r50_nasfpn_crop640-50e_coco configs/nas_fpn/retinanet_r50_nasfpn_crop640-50e_coco.py $CHECKPOINT_DIR/retinanet_r50_nasfpn_crop640_50e_coco-0ad1f644.pth --work-dir $WORK_DIR/retinanet_r50_nasfpn_crop640-50e_coco --cfg-option env_cfg.dist_cfg.port=29710  &
+echo 'configs/paa/paa_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION paa_r50_fpn_1x_coco configs/paa/paa_r50_fpn_1x_coco.py $CHECKPOINT_DIR/paa_r50_fpn_1x_coco_20200821-936edec3.pth --work-dir $WORK_DIR/paa_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29711  &
+echo 'configs/pafpn/faster-rcnn_r50_pafpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION faster-rcnn_r50_pafpn_1x_coco configs/pafpn/faster-rcnn_r50_pafpn_1x_coco.py $CHECKPOINT_DIR/faster_rcnn_r50_pafpn_1x_coco_bbox_mAP-0.375_20200503_105836-b7b4b9bd.pth --work-dir $WORK_DIR/faster-rcnn_r50_pafpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29712  &
+echo 'configs/panoptic_fpn/panoptic-fpn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION panoptic-fpn_r50_fpn_1x_coco configs/panoptic_fpn/panoptic-fpn_r50_fpn_1x_coco.py $CHECKPOINT_DIR/panoptic_fpn_r50_fpn_1x_coco_20210821_101153-9668fd13.pth --work-dir $WORK_DIR/panoptic-fpn_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29713  &
+echo 'configs/pisa/faster-rcnn_r50_fpn_pisa_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION faster-rcnn_r50_fpn_pisa_1x_coco configs/pisa/faster-rcnn_r50_fpn_pisa_1x_coco.py $CHECKPOINT_DIR/pisa_faster_rcnn_r50_fpn_1x_coco-dea93523.pth --work-dir $WORK_DIR/faster-rcnn_r50_fpn_pisa_1x_coco --cfg-option env_cfg.dist_cfg.port=29714  &
+echo 'configs/point_rend/point-rend_r50-caffe_fpn_ms-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION point-rend_r50-caffe_fpn_ms-1x_coco configs/point_rend/point-rend_r50-caffe_fpn_ms-1x_coco.py $CHECKPOINT_DIR/point_rend_r50_caffe_fpn_mstrain_1x_coco-1bcb5fb4.pth --work-dir $WORK_DIR/point-rend_r50-caffe_fpn_ms-1x_coco --cfg-option env_cfg.dist_cfg.port=29715  &
+echo 'configs/pvt/retinanet_pvt-s_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION retinanet_pvt-s_fpn_1x_coco configs/pvt/retinanet_pvt-s_fpn_1x_coco.py $CHECKPOINT_DIR/retinanet_pvt-s_fpn_1x_coco_20210906_142921-b6c94a5b.pth --work-dir $WORK_DIR/retinanet_pvt-s_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29716  &
+echo 'configs/queryinst/queryinst_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION queryinst_r50_fpn_1x_coco configs/queryinst/queryinst_r50_fpn_1x_coco.py $CHECKPOINT_DIR/queryinst_r50_fpn_1x_coco_20210907_084916-5a8f1998.pth --work-dir $WORK_DIR/queryinst_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29717  &
+echo 'configs/regnet/mask-rcnn_regnetx-3.2GF_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION mask-rcnn_regnetx-3.2GF_fpn_1x_coco configs/regnet/mask-rcnn_regnetx-3.2GF_fpn_1x_coco.py $CHECKPOINT_DIR/mask_rcnn_regnetx-3.2GF_fpn_1x_coco_20200520_163141-2a9d1814.pth --work-dir $WORK_DIR/mask-rcnn_regnetx-3.2GF_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29718  &
+echo 'configs/reppoints/reppoints-moment_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION reppoints-moment_r50_fpn_1x_coco configs/reppoints/reppoints-moment_r50_fpn_1x_coco.py $CHECKPOINT_DIR/reppoints_moment_r50_fpn_1x_coco_20200330-b73db8d1.pth --work-dir $WORK_DIR/reppoints-moment_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29719  &
+echo 'configs/res2net/faster-rcnn_res2net-101_fpn_2x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION faster-rcnn_res2net-101_fpn_2x_coco configs/res2net/faster-rcnn_res2net-101_fpn_2x_coco.py $CHECKPOINT_DIR/faster_rcnn_r2_101_fpn_2x_coco-175f1da6.pth --work-dir $WORK_DIR/faster-rcnn_res2net-101_fpn_2x_coco --cfg-option env_cfg.dist_cfg.port=29720  &
+echo 'configs/resnest/faster-rcnn_s50_fpn_syncbn-backbone+head_ms-range-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION faster-rcnn_s50_fpn_syncbn-backbone+head_ms-range-1x_coco configs/resnest/faster-rcnn_s50_fpn_syncbn-backbone+head_ms-range-1x_coco.py $CHECKPOINT_DIR/faster_rcnn_s50_fpn_syncbn-backbone+head_mstrain-range_1x_coco_20200926_125502-20289c16.pth --work-dir $WORK_DIR/faster-rcnn_s50_fpn_syncbn-backbone+head_ms-range-1x_coco --cfg-option env_cfg.dist_cfg.port=29721  &
+echo 'configs/resnet_strikes_back/mask-rcnn_r50-rsb-pre_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION mask-rcnn_r50-rsb-pre_fpn_1x_coco configs/resnet_strikes_back/mask-rcnn_r50-rsb-pre_fpn_1x_coco.py $CHECKPOINT_DIR/mask_rcnn_r50_fpn_rsb-pretrain_1x_coco_20220113_174054-06ce8ba0.pth --work-dir $WORK_DIR/mask-rcnn_r50-rsb-pre_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29722  &
+echo 'configs/retinanet/retinanet_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION retinanet_r50_fpn_1x_coco configs/retinanet/retinanet_r50_fpn_1x_coco.py $CHECKPOINT_DIR/retinanet_r50_fpn_1x_coco_20200130-c2398f9e.pth --work-dir $WORK_DIR/retinanet_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29723  &
+echo 'configs/rpn/rpn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION rpn_r50_fpn_1x_coco configs/rpn/rpn_r50_fpn_1x_coco.py $CHECKPOINT_DIR/rpn_r50_fpn_1x_coco_20200218-5525fa2e.pth --work-dir $WORK_DIR/rpn_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29724  &
+echo 'configs/sabl/sabl-retinanet_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION sabl-retinanet_r50_fpn_1x_coco configs/sabl/sabl-retinanet_r50_fpn_1x_coco.py $CHECKPOINT_DIR/sabl_retinanet_r50_fpn_1x_coco-6c54fd4f.pth --work-dir $WORK_DIR/sabl-retinanet_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29725  &
+echo 'configs/sabl/sabl-faster-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION sabl-faster-rcnn_r50_fpn_1x_coco configs/sabl/sabl-faster-rcnn_r50_fpn_1x_coco.py $CHECKPOINT_DIR/sabl_faster_rcnn_r50_fpn_1x_coco-e867595b.pth --work-dir $WORK_DIR/sabl-faster-rcnn_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29726  &
+echo 'configs/scnet/scnet_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION scnet_r50_fpn_1x_coco configs/scnet/scnet_r50_fpn_1x_coco.py $CHECKPOINT_DIR/scnet_r50_fpn_1x_coco-c3f09857.pth --work-dir $WORK_DIR/scnet_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29727  &
+echo 'configs/scratch/mask-rcnn_r50-scratch_fpn_gn-all_6x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION mask-rcnn_r50-scratch_fpn_gn-all_6x_coco configs/scratch/mask-rcnn_r50-scratch_fpn_gn-all_6x_coco.py $CHECKPOINT_DIR/scratch_mask_rcnn_r50_fpn_gn_6x_bbox_mAP-0.412__segm_mAP-0.374_20200201_193051-1e190a40.pth --work-dir $WORK_DIR/mask-rcnn_r50-scratch_fpn_gn-all_6x_coco --cfg-option env_cfg.dist_cfg.port=29728  &
+echo 'configs/solo/decoupled-solo_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION decoupled-solo_r50_fpn_1x_coco configs/solo/decoupled-solo_r50_fpn_1x_coco.py $CHECKPOINT_DIR/decoupled_solo_r50_fpn_1x_coco_20210820_233348-6337c589.pth --work-dir $WORK_DIR/decoupled-solo_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29729  &
+echo 'configs/solov2/solov2_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION solov2_r50_fpn_1x_coco configs/solov2/solov2_r50_fpn_1x_coco.py $CHECKPOINT_DIR/solov2_r50_fpn_1x_coco_20220512_125858-a357fa23.pth --work-dir $WORK_DIR/solov2_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29730  &
+echo 'configs/sparse_rcnn/sparse-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION sparse-rcnn_r50_fpn_1x_coco configs/sparse_rcnn/sparse-rcnn_r50_fpn_1x_coco.py $CHECKPOINT_DIR/sparse_rcnn_r50_fpn_1x_coco_20201222_214453-dc79b137.pth --work-dir $WORK_DIR/sparse-rcnn_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29731  &
+echo 'configs/ssd/ssd300_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION ssd300_coco configs/ssd/ssd300_coco.py $CHECKPOINT_DIR/ssd300_coco_20210803_015428-d231a06e.pth --work-dir $WORK_DIR/ssd300_coco --cfg-option env_cfg.dist_cfg.port=29732  &
+echo 'configs/ssd/ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION ssdlite_mobilenetv2-scratch_8xb24-600e_coco configs/ssd/ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py $CHECKPOINT_DIR/ssdlite_mobilenetv2_scratch_600e_coco_20210629_110627-974d9307.pth --work-dir $WORK_DIR/ssdlite_mobilenetv2-scratch_8xb24-600e_coco --cfg-option env_cfg.dist_cfg.port=29733  &
+echo 'configs/swin/mask-rcnn_swin-t-p4-w7_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION mask-rcnn_swin-t-p4-w7_fpn_1x_coco configs/swin/mask-rcnn_swin-t-p4-w7_fpn_1x_coco.py $CHECKPOINT_DIR/mask_rcnn_swin-t-p4-w7_fpn_1x_coco_20210902_120937-9d6b7cfa.pth --work-dir $WORK_DIR/mask-rcnn_swin-t-p4-w7_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29734  &
+echo 'configs/tood/tood_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION tood_r50_fpn_1x_coco configs/tood/tood_r50_fpn_1x_coco.py $CHECKPOINT_DIR/tood_r50_fpn_1x_coco_20211210_103425-20e20746.pth --work-dir $WORK_DIR/tood_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29735  &
+echo 'configs/tridentnet/tridentnet_r50-caffe_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION tridentnet_r50-caffe_1x_coco configs/tridentnet/tridentnet_r50-caffe_1x_coco.py $CHECKPOINT_DIR/tridentnet_r50_caffe_1x_coco_20201230_141838-2ec0b530.pth --work-dir $WORK_DIR/tridentnet_r50-caffe_1x_coco --cfg-option env_cfg.dist_cfg.port=29736  &
+echo 'configs/vfnet/vfnet_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION vfnet_r50_fpn_1x_coco configs/vfnet/vfnet_r50_fpn_1x_coco.py $CHECKPOINT_DIR/vfnet_r50_fpn_1x_coco_20201027-38db6f58.pth --work-dir $WORK_DIR/vfnet_r50_fpn_1x_coco --cfg-option env_cfg.dist_cfg.port=29737  &
+echo 'configs/yolact/yolact_r50_1xb8-55e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION yolact_r50_1xb8-55e_coco configs/yolact/yolact_r50_1xb8-55e_coco.py $CHECKPOINT_DIR/yolact_r50_1x8_coco_20200908-f38d58df.pth --work-dir $WORK_DIR/yolact_r50_1xb8-55e_coco --cfg-option env_cfg.dist_cfg.port=29738  &
+echo 'configs/yolo/yolov3_d53_8xb8-320-273e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION yolov3_d53_8xb8-320-273e_coco configs/yolo/yolov3_d53_8xb8-320-273e_coco.py $CHECKPOINT_DIR/yolov3_d53_320_273e_coco-421362b6.pth --work-dir $WORK_DIR/yolov3_d53_8xb8-320-273e_coco --cfg-option env_cfg.dist_cfg.port=29739  &
+echo 'configs/yolof/yolof_r50-c5_8xb8-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION yolof_r50-c5_8xb8-1x_coco configs/yolof/yolof_r50-c5_8xb8-1x_coco.py $CHECKPOINT_DIR/yolof_r50_c5_8x8_1x_coco_20210425_024427-8e864411.pth --work-dir $WORK_DIR/yolof_r50-c5_8xb8-1x_coco --cfg-option env_cfg.dist_cfg.port=29740  &
+echo 'configs/yolox/yolox_tiny_8xb8-300e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK tools/slurm_test.sh $PARTITION yolox_tiny_8xb8-300e_coco configs/yolox/yolox_tiny_8xb8-300e_coco.py $CHECKPOINT_DIR/yolox_tiny_8x8_300e_coco_20211124_171234-b4047906.pth --work-dir $WORK_DIR/yolox_tiny_8xb8-300e_coco --cfg-option env_cfg.dist_cfg.port=29741  &

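Usage note: each pair of lines above submits one slurm test job in the background on its own dist port, so the whole suite fans out at once. A hedged sketch; the partition and directories are illustrative:

    bash .dev_scripts/test_benchmark.sh my_partition /data/checkpoints \
        work_dirs/test_benchmark 2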
+ 178 - 0
.dev_scripts/test_init_backbone.py

@@ -0,0 +1,178 @@
+# Copyright (c) OpenMMLab. All rights reserved.
+"""Check out backbone whether successfully load pretrained checkpoint."""
+import copy
+import os
+from os.path import dirname, exists, join
+
+import pytest
+from mmengine.config import Config
+from mmengine.runner import CheckpointLoader
+from mmengine.utils import ProgressBar
+
+from mmdet.registry import MODELS
+
+
+def _get_config_directory():
+    """Find the predefined detector config directory."""
+    try:
+        # Assume we are running in the source mmdetection repo
+        repo_dpath = dirname(dirname(__file__))
+    except NameError:
+        # For IPython development when this __file__ is not defined
+        import mmdet
+        repo_dpath = dirname(dirname(mmdet.__file__))
+    config_dpath = join(repo_dpath, 'configs')
+    if not exists(config_dpath):
+        raise Exception('Cannot find config path')
+    return config_dpath
+
+
+def _get_config_module(fname):
+    """Load a configuration as a python module."""
+    config_dpath = _get_config_directory()
+    config_fpath = join(config_dpath, fname)
+    config_mod = Config.fromfile(config_fpath)
+    return config_mod
+
+
+def _get_detector_cfg(fname):
+    """Grab configs necessary to create a detector.
+
+    These are deep copied to allow for safe modification of parameters without
+    influencing other tests.
+    """
+    config = _get_config_module(fname)
+    model = copy.deepcopy(config.model)
+    return model
+
+
+def _traversed_config_file():
+    """We traversed all potential config files under the `config` file. If you
+    need to print details or debug code, you can use this function.
+
+    If the `backbone.init_cfg` is None (do not use `Pretrained` init way), you
+    need add the folder name in `ignores_folder` (if the config files in this
+    folder all set backbone.init_cfg is None) or add config name in
+    `ignores_file` (if the config file set backbone.init_cfg is None)
+    """
+    config_path = _get_config_directory()
+    check_cfg_names = []
+
+    # `base`, `legacy_1.x` and `common` ignored by default.
+    ignores_folder = ['_base_', 'legacy_1.x', 'common']
+    # 'ld' need load teacher model, if want to check 'ld',
+    # please check teacher_config path first.
+    ignores_folder += ['ld']
+    # `selfsup_pretrain` need convert model, if want to check this model,
+    # need to convert the model first.
+    ignores_folder += ['selfsup_pretrain']
+
+    # the `init_cfg` in 'centripetalnet', 'cornernet', 'cityscapes',
+    # 'scratch' is None.
+    # the `init_cfg` in ssdlite(`ssdlite_mobilenetv2_scratch_600e_coco.py`)
+    # is None
+    # Please confirm `backbone.init_cfg` is None first.
+    ignores_folder += ['centripetalnet', 'cornernet', 'cityscapes', 'scratch']
+    ignores_file = ['ssdlite_mobilenetv2_scratch_600e_coco.py']
+
+    for config_file_name in os.listdir(config_path):
+        if config_file_name not in ignores_folder:
+            config_file = join(config_path, config_file_name)
+            if os.path.isdir(config_file):
+                for config_sub_file in os.listdir(config_file):
+                    if config_sub_file.endswith('py') and \
+                            config_sub_file not in ignores_file:
+                        name = join(config_file, config_sub_file)
+                        check_cfg_names.append(name)
+    return check_cfg_names
+
+
+def _check_backbone(config, print_cfg=True):
+    """Check out backbone whether successfully load pretrained model, by using
+    `backbone.init_cfg`.
+
+    First, using `CheckpointLoader.load_checkpoint` to load the checkpoint
+        without loading models.
+    Then, using `MODELS.build` to build models, and using
+        `model.init_weights()` to initialize the parameters.
+    Finally, assert weights and bias of each layer loaded from pretrained
+        checkpoint are equal to the weights and bias of original checkpoint.
+        For the convenience of comparison, we sum up weights and bias of
+        each loaded layer separately.
+
+    Args:
+        config (str): Config file path.
+        print_cfg (bool): Whether print logger and return the result.
+
+    Returns:
+        results (str or None): If backbone successfully load pretrained
+            checkpoint, return None; else, return config file path.
+    """
+    if print_cfg:
+        print('-' * 15 + 'loading ', config)
+    cfg = Config.fromfile(config)
+    init_cfg = None
+    try:
+        init_cfg = cfg.model.backbone.init_cfg
+        init_flag = True
+    except AttributeError:
+        init_flag = False
+    if init_cfg is None or init_cfg.get('type') != 'Pretrained':
+        init_flag = False
+    if init_flag:
+        checkpoint = CheckpointLoader.load_checkpoint(init_cfg.checkpoint)
+        if 'state_dict' in checkpoint:
+            state_dict = checkpoint['state_dict']
+        else:
+            state_dict = checkpoint
+
+        model = MODELS.build(cfg.model)
+        model.init_weights()
+
+        checkpoint_layers = state_dict.keys()
+        for name, value in model.backbone.state_dict().items():
+            if name in checkpoint_layers:
+                assert value.equal(state_dict[name])
+
+        if print_cfg:
+            print('-' * 10 + 'Successfully loaded checkpoint' + '-' * 10 +
+                  '\n')
+            return None
+    else:
+        if print_cfg:
+            print(config + '\n' + '-' * 10 +
+                  'config file does not have a Pretrained init_cfg' +
+                  '-' * 10 + '\n')
+            return config
+
+
+@pytest.mark.parametrize('config', _traversed_config_file())
+def test_load_pretrained(config):
+    """Check out backbone whether successfully load pretrained model by using
+    `backbone.init_cfg`.
+
+    Details please refer to `_check_backbone`
+    """
+    _check_backbone(config, print_cfg=False)
+
+
+def _test_load_pretrained():
+    """We traversed all potential config files under the `config` file. If you
+    need to print details or debug code, you can use this function.
+
+    Returns:
+        check_cfg_names (list[str]): Config files that backbone initialized
+        from pretrained checkpoint might be problematic. Need to recheck
+        the config file. The output including the config files that the
+        backbone.init_cfg is None
+    """
+    check_cfg_names = _traversed_config_file()
+    need_check_cfg = []
+
+    prog_bar = ProgressBar(len(check_cfg_names))
+    for config in check_cfg_names:
+        init_cfg_name = _check_backbone(config)
+        if init_cfg_name is not None:
+            need_check_cfg.append(init_cfg_name)
+        prog_bar.update()
+    print('These config files need to be checked again')
+    print(need_check_cfg)
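+
+
+# Editor's sketch (not upstream code): narrow the check to one model family
+# while debugging, reusing the helpers defined above. The 'retinanet'
+# keyword below is a hypothetical example choice.
+def _debug_check_family(keyword='retinanet'):
+    """Run `_check_backbone` only on configs whose path contains `keyword`."""
+    need_check_cfg = []
+    for config in _traversed_config_file():
+        if keyword in config and _check_backbone(config) is not None:
+            need_check_cfg.append(config)
+    print('These config files need to be checked again:')
+    print(need_check_cfg)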

+ 164 - 0
.dev_scripts/train_benchmark.sh

@@ -0,0 +1,164 @@
+# Usage: bash .dev_scripts/train_benchmark.sh PARTITION WORK_DIR [CPUS_PER_TASK]
+PARTITION=$1
+WORK_DIR=$2
+CPUS_PER_TASK=${3:-4}
+# The launch commands below read $CPUS_PRE_TASK (a typo for CPUS_PER_TASK);
+# alias it so that the third positional argument is actually honoured.
+CPUS_PRE_TASK=$CPUS_PER_TASK
+
+echo 'configs/albu_example/mask-rcnn_r50_fpn_albu_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION mask-rcnn_r50_fpn_albu_1x_coco configs/albu_example/mask-rcnn_r50_fpn_albu_1x_coco.py $WORK_DIR/mask-rcnn_r50_fpn_albu_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/atss/atss_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION atss_r50_fpn_1x_coco configs/atss/atss_r50_fpn_1x_coco.py $WORK_DIR/atss_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/autoassign/autoassign_r50-caffe_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION autoassign_r50-caffe_fpn_1x_coco configs/autoassign/autoassign_r50-caffe_fpn_1x_coco.py $WORK_DIR/autoassign_r50-caffe_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/carafe/faster-rcnn_r50_fpn-carafe_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION faster-rcnn_r50_fpn-carafe_1x_coco configs/carafe/faster-rcnn_r50_fpn-carafe_1x_coco.py $WORK_DIR/faster-rcnn_r50_fpn-carafe_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/cascade_rcnn/cascade-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION cascade-rcnn_r50_fpn_1x_coco configs/cascade_rcnn/cascade-rcnn_r50_fpn_1x_coco.py $WORK_DIR/cascade-rcnn_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/cascade_rcnn/cascade-mask-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION cascade-mask-rcnn_r50_fpn_1x_coco configs/cascade_rcnn/cascade-mask-rcnn_r50_fpn_1x_coco.py $WORK_DIR/cascade-mask-rcnn_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/cascade_rpn/cascade-rpn_faster-rcnn_r50-caffe_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION cascade-rpn_faster-rcnn_r50-caffe_fpn_1x_coco configs/cascade_rpn/cascade-rpn_faster-rcnn_r50-caffe_fpn_1x_coco.py $WORK_DIR/cascade-rpn_faster-rcnn_r50-caffe_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/centernet/centernet_r18-dcnv2_8xb16-crop512-140e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION centernet_r18-dcnv2_8xb16-crop512-140e_coco configs/centernet/centernet_r18-dcnv2_8xb16-crop512-140e_coco.py $WORK_DIR/centernet_r18-dcnv2_8xb16-crop512-140e_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/centernet/centernet-update_r50-caffe_fpn_ms-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION centernet-update_r50-caffe_fpn_ms-1x_coco configs/centernet/centernet-update_r50-caffe_fpn_ms-1x_coco.py $WORK_DIR/centernet-update_r50-caffe_fpn_ms-1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/centripetalnet/centripetalnet_hourglass104_16xb6-crop511-210e-mstest_coco.py' &
+GPUS=16  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION centripetalnet_hourglass104_16xb6-crop511-210e-mstest_coco configs/centripetalnet/centripetalnet_hourglass104_16xb6-crop511-210e-mstest_coco.py $WORK_DIR/centripetalnet_hourglass104_16xb6-crop511-210e-mstest_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/cornernet/cornernet_hourglass104_8xb6-210e-mstest_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION cornernet_hourglass104_8xb6-210e-mstest_coco configs/cornernet/cornernet_hourglass104_8xb6-210e-mstest_coco.py $WORK_DIR/cornernet_hourglass104_8xb6-210e-mstest_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/convnext/mask-rcnn_convnext-t-p4-w7_fpn_amp-ms-crop-3x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION mask-rcnn_convnext-t-p4-w7_fpn_amp-ms-crop-3x_coco configs/convnext/mask-rcnn_convnext-t-p4-w7_fpn_amp-ms-crop-3x_coco.py $WORK_DIR/mask-rcnn_convnext-t-p4-w7_fpn_amp-ms-crop-3x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/dcn/faster-rcnn_r50-dconv-c3-c5_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION faster-rcnn_r50-dconv-c3-c5_fpn_1x_coco configs/dcn/faster-rcnn_r50-dconv-c3-c5_fpn_1x_coco.py $WORK_DIR/faster-rcnn_r50-dconv-c3-c5_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/dcnv2/faster-rcnn_r50_fpn_mdpool_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION faster-rcnn_r50_fpn_mdpool_1x_coco configs/dcnv2/faster-rcnn_r50_fpn_mdpool_1x_coco.py $WORK_DIR/faster-rcnn_r50_fpn_mdpool_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/ddod/ddod_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION ddod_r50_fpn_1x_coco configs/ddod/ddod_r50_fpn_1x_coco.py $WORK_DIR/ddod_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/detectors/detectors_htc-r50_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION detectors_htc-r50_1x_coco configs/detectors/detectors_htc-r50_1x_coco.py $WORK_DIR/detectors_htc-r50_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/deformable_detr/deformable-detr_r50_16xb2-50e_coco.py' &
+GPUS=16  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION deformable-detr_r50_16xb2-50e_coco configs/deformable_detr/deformable-detr_r50_16xb2-50e_coco.py $WORK_DIR/deformable-detr_r50_16xb2-50e_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/detr/detr_r50_8xb2-150e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION detr_r50_8xb2-150e_coco configs/detr/detr_r50_8xb2-150e_coco.py $WORK_DIR/detr_r50_8xb2-150e_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/double_heads/dh-faster-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION dh-faster-rcnn_r50_fpn_1x_coco configs/double_heads/dh-faster-rcnn_r50_fpn_1x_coco.py $WORK_DIR/dh-faster-rcnn_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/dynamic_rcnn/dynamic-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION dynamic-rcnn_r50_fpn_1x_coco configs/dynamic_rcnn/dynamic-rcnn_r50_fpn_1x_coco.py $WORK_DIR/dynamic-rcnn_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/dyhead/atss_r50_fpn_dyhead_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION atss_r50_fpn_dyhead_1x_coco configs/dyhead/atss_r50_fpn_dyhead_1x_coco.py $WORK_DIR/atss_r50_fpn_dyhead_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/efficientnet/retinanet_effb3_fpn_8xb4-crop896-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION retinanet_effb3_fpn_8xb4-crop896-1x_coco configs/efficientnet/retinanet_effb3_fpn_8xb4-crop896-1x_coco.py $WORK_DIR/retinanet_effb3_fpn_8xb4-crop896-1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/empirical_attention/faster-rcnn_r50-attn1111_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION faster-rcnn_r50-attn1111_fpn_1x_coco configs/empirical_attention/faster-rcnn_r50-attn1111_fpn_1x_coco.py $WORK_DIR/faster-rcnn_r50-attn1111_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION faster-rcnn_r50_fpn_1x_coco configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py $WORK_DIR/faster-rcnn_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/faster_rcnn/faster-rcnn_r50-caffe-dc5_ms-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION faster-rcnn_r50-caffe-dc5_ms-1x_coco configs/faster_rcnn/faster-rcnn_r50-caffe-dc5_ms-1x_coco.py $WORK_DIR/faster-rcnn_r50-caffe-dc5_ms-1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/fcos/fcos_r50-caffe_fpn_gn-head-center-normbbox-centeronreg-giou_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION fcos_r50-caffe_fpn_gn-head-center-normbbox-centeronreg-giou_1x_coco configs/fcos/fcos_r50-caffe_fpn_gn-head-center-normbbox-centeronreg-giou_1x_coco.py $WORK_DIR/fcos_r50-caffe_fpn_gn-head-center-normbbox-centeronreg-giou_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/foveabox/fovea_r50_fpn_gn-head-align_4xb4-2x_coco.py' &
+GPUS=4  GPUS_PER_NODE=4  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION fovea_r50_fpn_gn-head-align_4xb4-2x_coco configs/foveabox/fovea_r50_fpn_gn-head-align_4xb4-2x_coco.py $WORK_DIR/fovea_r50_fpn_gn-head-align_4xb4-2x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/fpg/mask-rcnn_r50_fpg_crop640-50e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION mask-rcnn_r50_fpg_crop640-50e_coco configs/fpg/mask-rcnn_r50_fpg_crop640-50e_coco.py $WORK_DIR/mask-rcnn_r50_fpg_crop640-50e_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/free_anchor/freeanchor_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION freeanchor_r50_fpn_1x_coco configs/free_anchor/freeanchor_r50_fpn_1x_coco.py $WORK_DIR/freeanchor_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/fsaf/fsaf_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION fsaf_r50_fpn_1x_coco configs/fsaf/fsaf_r50_fpn_1x_coco.py $WORK_DIR/fsaf_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/gcnet/mask-rcnn_r50-syncbn-gcb-r16-c3-c5_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION mask-rcnn_r50-syncbn-gcb-r16-c3-c5_fpn_1x_coco configs/gcnet/mask-rcnn_r50-syncbn-gcb-r16-c3-c5_fpn_1x_coco.py $WORK_DIR/mask-rcnn_r50-syncbn-gcb-r16-c3-c5_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/gfl/gfl_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION gfl_r50_fpn_1x_coco configs/gfl/gfl_r50_fpn_1x_coco.py $WORK_DIR/gfl_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/ghm/retinanet_r50_fpn_ghm-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION retinanet_r50_fpn_ghm-1x_coco configs/ghm/retinanet_r50_fpn_ghm-1x_coco.py $WORK_DIR/retinanet_r50_fpn_ghm-1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/gn/mask-rcnn_r50_fpn_gn-all_2x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION mask-rcnn_r50_fpn_gn-all_2x_coco configs/gn/mask-rcnn_r50_fpn_gn-all_2x_coco.py $WORK_DIR/mask-rcnn_r50_fpn_gn-all_2x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/gn+ws/faster-rcnn_r50_fpn_gn-ws-all_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION faster-rcnn_r50_fpn_gn-ws-all_1x_coco configs/gn+ws/faster-rcnn_r50_fpn_gn-ws-all_1x_coco.py $WORK_DIR/faster-rcnn_r50_fpn_gn-ws-all_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/grid_rcnn/grid-rcnn_r50_fpn_gn-head_2x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION grid-rcnn_r50_fpn_gn-head_2x_coco configs/grid_rcnn/grid-rcnn_r50_fpn_gn-head_2x_coco.py $WORK_DIR/grid-rcnn_r50_fpn_gn-head_2x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/groie/faste-rcnn_r50_fpn_groie_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION faste-rcnn_r50_fpn_groie_1x_coco configs/groie/faste-rcnn_r50_fpn_groie_1x_coco.py $WORK_DIR/faste-rcnn_r50_fpn_groie_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/guided_anchoring/ga-retinanet_r50-caffe_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION ga-retinanet_r50-caffe_fpn_1x_coco configs/guided_anchoring/ga-retinanet_r50-caffe_fpn_1x_coco.py $WORK_DIR/ga-retinanet_r50-caffe_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/hrnet/faster-rcnn_hrnetv2p-w18-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION faster-rcnn_hrnetv2p-w18-1x_coco configs/hrnet/faster-rcnn_hrnetv2p-w18-1x_coco.py $WORK_DIR/faster-rcnn_hrnetv2p-w18-1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/htc/htc_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION htc_r50_fpn_1x_coco configs/htc/htc_r50_fpn_1x_coco.py $WORK_DIR/htc_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/instaboost/mask-rcnn_r50_fpn_instaboost-4x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION mask-rcnn_r50_fpn_instaboost-4x_coco configs/instaboost/mask-rcnn_r50_fpn_instaboost-4x_coco.py $WORK_DIR/mask-rcnn_r50_fpn_instaboost-4x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/lad/lad_r50-paa-r101_fpn_2xb8_coco_1x.py' &
+GPUS=2  GPUS_PER_NODE=2  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION lad_r50-paa-r101_fpn_2xb8_coco_1x configs/lad/lad_r50-paa-r101_fpn_2xb8_coco_1x.py $WORK_DIR/lad_r50-paa-r101_fpn_2xb8_coco_1x --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/ld/ld_r18-gflv1-r101_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION ld_r18-gflv1-r101_fpn_1x_coco configs/ld/ld_r18-gflv1-r101_fpn_1x_coco.py $WORK_DIR/ld_r18-gflv1-r101_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/libra_rcnn/libra-faster-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION libra-faster-rcnn_r50_fpn_1x_coco configs/libra_rcnn/libra-faster-rcnn_r50_fpn_1x_coco.py $WORK_DIR/libra-faster-rcnn_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/mask2former/mask2former_r50_8xb2-lsj-50e_coco-panoptic.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION mask2former_r50_8xb2-lsj-50e_coco-panoptic configs/mask2former/mask2former_r50_8xb2-lsj-50e_coco-panoptic.py $WORK_DIR/mask2former_r50_8xb2-lsj-50e_coco-panoptic --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION mask-rcnn_r50_fpn_1x_coco configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py $WORK_DIR/mask-rcnn_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/maskformer/maskformer_r50_ms-16xb1-75e_coco.py' &
+GPUS=16  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION maskformer_r50_ms-16xb1-75e_coco configs/maskformer/maskformer_r50_ms-16xb1-75e_coco.py $WORK_DIR/maskformer_r50_ms-16xb1-75e_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/ms_rcnn/ms-rcnn_r50-caffe_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION ms-rcnn_r50-caffe_fpn_1x_coco configs/ms_rcnn/ms-rcnn_r50-caffe_fpn_1x_coco.py $WORK_DIR/ms-rcnn_r50-caffe_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/nas_fcos/nas-fcos_r50-caffe_fpn_nashead-gn-head_4xb4-1x_coco.py' &
+GPUS=4  GPUS_PER_NODE=4  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION nas-fcos_r50-caffe_fpn_nashead-gn-head_4xb4-1x_coco configs/nas_fcos/nas-fcos_r50-caffe_fpn_nashead-gn-head_4xb4-1x_coco.py $WORK_DIR/nas-fcos_r50-caffe_fpn_nashead-gn-head_4xb4-1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/nas_fpn/retinanet_r50_nasfpn_crop640-50e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION retinanet_r50_nasfpn_crop640-50e_coco configs/nas_fpn/retinanet_r50_nasfpn_crop640-50e_coco.py $WORK_DIR/retinanet_r50_nasfpn_crop640-50e_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/paa/paa_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION paa_r50_fpn_1x_coco configs/paa/paa_r50_fpn_1x_coco.py $WORK_DIR/paa_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/pafpn/faster-rcnn_r50_pafpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION faster-rcnn_r50_pafpn_1x_coco configs/pafpn/faster-rcnn_r50_pafpn_1x_coco.py $WORK_DIR/faster-rcnn_r50_pafpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/panoptic_fpn/panoptic-fpn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION panoptic-fpn_r50_fpn_1x_coco configs/panoptic_fpn/panoptic-fpn_r50_fpn_1x_coco.py $WORK_DIR/panoptic-fpn_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/pisa/mask-rcnn_r50_fpn_pisa_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION mask-rcnn_r50_fpn_pisa_1x_coco configs/pisa/mask-rcnn_r50_fpn_pisa_1x_coco.py $WORK_DIR/mask-rcnn_r50_fpn_pisa_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/point_rend/point-rend_r50-caffe_fpn_ms-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION point-rend_r50-caffe_fpn_ms-1x_coco configs/point_rend/point-rend_r50-caffe_fpn_ms-1x_coco.py $WORK_DIR/point-rend_r50-caffe_fpn_ms-1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/pvt/retinanet_pvt-t_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION retinanet_pvt-t_fpn_1x_coco configs/pvt/retinanet_pvt-t_fpn_1x_coco.py $WORK_DIR/retinanet_pvt-t_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/queryinst/queryinst_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION queryinst_r50_fpn_1x_coco configs/queryinst/queryinst_r50_fpn_1x_coco.py $WORK_DIR/queryinst_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/regnet/retinanet_regnetx-800MF_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION retinanet_regnetx-800MF_fpn_1x_coco configs/regnet/retinanet_regnetx-800MF_fpn_1x_coco.py $WORK_DIR/retinanet_regnetx-800MF_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/reppoints/reppoints-moment_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION reppoints-moment_r50_fpn_1x_coco configs/reppoints/reppoints-moment_r50_fpn_1x_coco.py $WORK_DIR/reppoints-moment_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/res2net/faster-rcnn_res2net-101_fpn_2x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION faster-rcnn_res2net-101_fpn_2x_coco configs/res2net/faster-rcnn_res2net-101_fpn_2x_coco.py $WORK_DIR/faster-rcnn_res2net-101_fpn_2x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/resnest/faster-rcnn_s50_fpn_syncbn-backbone+head_ms-range-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION faster-rcnn_s50_fpn_syncbn-backbone+head_ms-range-1x_coco configs/resnest/faster-rcnn_s50_fpn_syncbn-backbone+head_ms-range-1x_coco.py $WORK_DIR/faster-rcnn_s50_fpn_syncbn-backbone+head_ms-range-1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/resnet_strikes_back/retinanet_r50-rsb-pre_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION retinanet_r50-rsb-pre_fpn_1x_coco configs/resnet_strikes_back/retinanet_r50-rsb-pre_fpn_1x_coco.py $WORK_DIR/retinanet_r50-rsb-pre_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/retinanet/retinanet_r50-caffe_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION retinanet_r50-caffe_fpn_1x_coco configs/retinanet/retinanet_r50-caffe_fpn_1x_coco.py $WORK_DIR/retinanet_r50-caffe_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/rpn/rpn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION rpn_r50_fpn_1x_coco configs/rpn/rpn_r50_fpn_1x_coco.py $WORK_DIR/rpn_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/sabl/sabl-retinanet_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION sabl-retinanet_r50_fpn_1x_coco configs/sabl/sabl-retinanet_r50_fpn_1x_coco.py $WORK_DIR/sabl-retinanet_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/scnet/scnet_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION scnet_r50_fpn_1x_coco configs/scnet/scnet_r50_fpn_1x_coco.py $WORK_DIR/scnet_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/scratch/faster-rcnn_r50-scratch_fpn_gn-all_6x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION faster-rcnn_r50-scratch_fpn_gn-all_6x_coco configs/scratch/faster-rcnn_r50-scratch_fpn_gn-all_6x_coco.py $WORK_DIR/faster-rcnn_r50-scratch_fpn_gn-all_6x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/solo/solo_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION solo_r50_fpn_1x_coco configs/solo/solo_r50_fpn_1x_coco.py $WORK_DIR/solo_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/solov2/solov2_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION solov2_r50_fpn_1x_coco configs/solov2/solov2_r50_fpn_1x_coco.py $WORK_DIR/solov2_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/sparse_rcnn/sparse-rcnn_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION sparse-rcnn_r50_fpn_1x_coco configs/sparse_rcnn/sparse-rcnn_r50_fpn_1x_coco.py $WORK_DIR/sparse-rcnn_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/ssd/ssd300_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION ssd300_coco configs/ssd/ssd300_coco.py $WORK_DIR/ssd300_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/ssd/ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION ssdlite_mobilenetv2-scratch_8xb24-600e_coco configs/ssd/ssdlite_mobilenetv2-scratch_8xb24-600e_coco.py $WORK_DIR/ssdlite_mobilenetv2-scratch_8xb24-600e_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/swin/mask-rcnn_swin-t-p4-w7_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION mask-rcnn_swin-t-p4-w7_fpn_1x_coco configs/swin/mask-rcnn_swin-t-p4-w7_fpn_1x_coco.py $WORK_DIR/mask-rcnn_swin-t-p4-w7_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/tood/tood_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION tood_r50_fpn_1x_coco configs/tood/tood_r50_fpn_1x_coco.py $WORK_DIR/tood_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/tridentnet/tridentnet_r50-caffe_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION tridentnet_r50-caffe_1x_coco configs/tridentnet/tridentnet_r50-caffe_1x_coco.py $WORK_DIR/tridentnet_r50-caffe_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/vfnet/vfnet_r50_fpn_1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION vfnet_r50_fpn_1x_coco configs/vfnet/vfnet_r50_fpn_1x_coco.py $WORK_DIR/vfnet_r50_fpn_1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/yolact/yolact_r50_8xb8-55e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION yolact_r50_8xb8-55e_coco configs/yolact/yolact_r50_8xb8-55e_coco.py $WORK_DIR/yolact_r50_8xb8-55e_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/yolo/yolov3_d53_8xb8-320-273e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION yolov3_d53_8xb8-320-273e_coco configs/yolo/yolov3_d53_8xb8-320-273e_coco.py $WORK_DIR/yolov3_d53_8xb8-320-273e_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/yolof/yolof_r50-c5_8xb8-1x_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION yolof_r50-c5_8xb8-1x_coco configs/yolof/yolof_r50-c5_8xb8-1x_coco.py $WORK_DIR/yolof_r50-c5_8xb8-1x_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+echo 'configs/yolox/yolox_tiny_8xb8-300e_coco.py' &
+GPUS=8  GPUS_PER_NODE=8  CPUS_PER_TASK=$CPUS_PRE_TASK ./tools/slurm_train.sh $PARTITION yolox_tiny_8xb8-300e_coco configs/yolox/yolox_tiny_8xb8-300e_coco.py $WORK_DIR/yolox_tiny_8xb8-300e_coco --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
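+
+# Editor's gloss (pattern summary, not upstream documentation): every entry
+# above follows one template, so new models can be appended mechanically:
+#   echo '<config>.py' &
+#   GPUS=<n>  GPUS_PER_NODE=<min(n,8)>  CPUS_PER_TASK=$CPUS_PRE_TASK \
+#     ./tools/slurm_train.sh $PARTITION <job> <config>.py $WORK_DIR/<job> \
+#     --cfg-options default_hooks.checkpoint.max_keep_ckpts=1 &
+# where <job> is the config file's stem and each job is launched in the
+# background on the given Slurm partition.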

+ 76 - 0
.github/CODE_OF_CONDUCT.md

@@ -0,0 +1,76 @@
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+In the interest of fostering an open and welcoming environment, we as
+contributors and maintainers pledge to making participation in our project and
+our community a harassment-free experience for everyone, regardless of age, body
+size, disability, ethnicity, sex characteristics, gender identity and expression,
+level of experience, education, socio-economic status, nationality, personal
+appearance, race, religion, or sexual identity and orientation.
+
+## Our Standards
+
+Examples of behavior that contributes to creating a positive environment
+include:
+
+- Using welcoming and inclusive language
+- Being respectful of differing viewpoints and experiences
+- Gracefully accepting constructive criticism
+- Focusing on what is best for the community
+- Showing empathy towards other community members
+
+Examples of unacceptable behavior by participants include:
+
+- The use of sexualized language or imagery and unwelcome sexual attention or
+  advances
+- Trolling, insulting/derogatory comments, and personal or political attacks
+- Public or private harassment
+- Publishing others' private information, such as a physical or electronic
+  address, without explicit permission
+- Other conduct which could reasonably be considered inappropriate in a
+  professional setting
+
+## Our Responsibilities
+
+Project maintainers are responsible for clarifying the standards of acceptable
+behavior and are expected to take appropriate and fair corrective action in
+response to any instances of unacceptable behavior.
+
+Project maintainers have the right and responsibility to remove, edit, or
+reject comments, commits, code, wiki edits, issues, and other contributions
+that are not aligned to this Code of Conduct, or to ban temporarily or
+permanently any contributor for other behaviors that they deem inappropriate,
+threatening, offensive, or harmful.
+
+## Scope
+
+This Code of Conduct applies both within project spaces and in public spaces
+when an individual is representing the project or its community. Examples of
+representing a project or community include using an official project e-mail
+address, posting via an official social media account, or acting as an appointed
+representative at an online or offline event. Representation of a project may be
+further defined and clarified by project maintainers.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported by contacting the project team at chenkaidev@gmail.com. All
+complaints will be reviewed and investigated and will result in a response that
+is deemed necessary and appropriate to the circumstances. The project team is
+obligated to maintain confidentiality with regard to the reporter of an incident.
+Further details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good
+faith may face temporary or permanent repercussions as determined by other
+members of the project's leadership.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
+available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+
+For answers to common questions about this code of conduct, see
+https://www.contributor-covenant.org/faq
+
+[homepage]: https://www.contributor-covenant.org

+ 1 - 0
.github/CONTRIBUTING.md

@@ -0,0 +1 @@
+We appreciate all contributions to improve MMDetection. Please refer to [CONTRIBUTING.md](https://github.com/open-mmlab/mmcv/blob/master/CONTRIBUTING.md) in MMCV for more details about the contributing guidelines.

+ 9 - 0
.github/ISSUE_TEMPLATE/config.yml

@@ -0,0 +1,9 @@
+blank_issues_enabled: false
+
+contact_links:
+  - name: Common Issues
+    url: https://mmdetection.readthedocs.io/en/latest/faq.html
+    about: Check if your issue already has solutions
+  - name: MMDetection Documentation
+    url: https://mmdetection.readthedocs.io/en/latest/
+    about: Check if your question is answered in docs

+ 46 - 0
.github/ISSUE_TEMPLATE/error-report.md

@@ -0,0 +1,46 @@
+---
+name: Error report
+about: Create a report to help us improve
+title: ''
+labels: ''
+assignees: ''
+---
+
+Thanks for your error report; we appreciate it a lot.
+
+**Checklist**
+
+1. I have searched related issues but cannot get the expected help.
+2. I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
+3. The bug has not been fixed in the latest version.
+
+**Describe the bug**
+A clear and concise description of what the bug is.
+
+**Reproduction**
+
+1. What command or script did you run?
+
+```none
+A placeholder for the command.
+```
+
+2. Did you make any modifications to the code or config? Did you understand what you modified?
+3. What dataset did you use?
+
+**Environment**
+
+1. Please run `python mmdet/utils/collect_env.py` to collect necessary environment information and paste it here.
+2. You may add additional information that may be helpful for locating the problem, such as
+   - How you installed PyTorch \[e.g., pip, conda, source\]
+   - Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
+
+**Error traceback**
+If applicable, paste the error traceback here.
+
+```none
+A placeholder for the traceback.
+```
+
+**Bug fix**
+If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!

+ 21 - 0
.github/ISSUE_TEMPLATE/feature_request.md

@@ -0,0 +1,21 @@
+---
+name: Feature request
+about: Suggest an idea for this project
+title: ''
+labels: ''
+assignees: ''
+---
+
+**Describe the feature**
+
+**Motivation**
+A clear and concise description of the motivation of the feature.
+Ex1. It is inconvenient when \[....\].
+Ex2. There is a recent paper \[....\], which is very helpful for \[....\].
+
+**Related resources**
+If there is an official code release or third-party implementations, please also provide the information here, which would be very helpful.
+
+**Additional context**
+Add any other context or screenshots about the feature request here.
+If you would like to implement the feature and create a PR, please leave a comment here and that would be much appreciated.

+ 7 - 0
.github/ISSUE_TEMPLATE/general_questions.md

@@ -0,0 +1,7 @@
+---
+name: General questions
+about: Ask general questions to get help
+title: ''
+labels: ''
+assignees: ''
+---

+ 67 - 0
.github/ISSUE_TEMPLATE/reimplementation_questions.md

@@ -0,0 +1,67 @@
+---
+name: Reimplementation Questions
+about: Ask about questions during model reimplementation
+title: ''
+labels: reimplementation
+assignees: ''
+---
+
+**Notice**
+
+There are several common situations in reimplementation issues, as listed below.
+
+1. Reimplement a model in the model zoo using the provided configs
+2. Reimplement a model in the model zoo on another dataset (e.g., a custom dataset)
+3. Reimplement a custom model but all the components are implemented in MMDetection
+4. Reimplement a custom model with new modules implemented by yourself
+
+Different cases call for different actions, as described below.
+
+- For cases 1 & 3, please follow the steps in the following sections so that we can quickly identify the issue.
+- For cases 2 & 4, please understand that we cannot offer much help here, because we usually do not know the full code and users are responsible for the code they write.
+- One suggestion for cases 2 & 4 is to first check whether the bug lies in the self-implemented code or in the original code. For example, first make sure the same model runs well on a supported dataset. If you still need help, describe what you have done and what you obtained in the issue, follow the steps in the following sections, and be as clear as possible so that we can better help you.
+
+**Checklist**
+
+1. I have searched related issues but cannot get the expected help.
+2. The issue has not been fixed in the latest version.
+
+**Describe the issue**
+
+A clear and concise description of the problem you met and what you have done.
+
+**Reproduction**
+
+1. What command or script did you run?
+
+```none
+A placeholder for the command.
+```
+
+2. Which config did you run?
+
+```none
+A placeholder for the config.
+```
+
+3. Did you make any modifications to the code or config? Did you understand what you modified?
+4. What dataset did you use?
+
+**Environment**
+
+1. Please run `python mmdet/utils/collect_env.py` to collect necessary environment information and paste it here.
+2. You may add additional information that may be helpful for locating the problem, such as
+   1. How you installed PyTorch \[e.g., pip, conda, source\]
+   2. Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)
+
+**Results**
+
+If applicable, paste the related results here, e.g., what you expect and what you get.
+
+```none
+A placeholder for results comparison
+```
+
+**Issue fix**
+
+If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!

+ 25 - 0
.github/pull_request_template.md

@@ -0,0 +1,25 @@
+Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help it get feedback more easily. If you do not understand some items, don't worry; just create the pull request and seek help from the maintainers.
+
+## Motivation
+
+Please describe the motivation of this PR and the goal you want to achieve through this PR.
+
+## Modification
+
+Please briefly describe the modifications made in this PR.
+
+## BC-breaking (Optional)
+
+Does the modification introduce changes that break the backward-compatibility of the downstream repos?
+If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
+
+## Use cases (Optional)
+
+If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.
+
+## Checklist
+
+1. Pre-commit or other linting tools are used to fix the potential lint issues.
+2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
+3. If the modification has potential influence on downstream projects, this PR should be tested with downstream projects, like MMDet or MMCls.
+4. The documentation has been modified accordingly, like docstring or example tutorials.

+ 28 - 0
.github/workflows/deploy.yml

@@ -0,0 +1,28 @@
+name: deploy
+
+on: push
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
+jobs:
+  build-n-publish:
+    runs-on: ubuntu-latest
+    if: startsWith(github.event.ref, 'refs/tags')
+    steps:
+      - uses: actions/checkout@v2
+      - name: Set up Python 3.7
+        uses: actions/setup-python@v2
+        with:
+          python-version: 3.7
+      - name: Install torch
+        run: pip install torch
+      - name: Install wheel
+        run: pip install wheel
+      - name: Build MMDetection
+        run: python setup.py sdist bdist_wheel
+      - name: Publish distribution to PyPI
+        run: |
+          pip install twine
+          twine upload dist/* -u __token__ -p ${{ secrets.pypi_password }}
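+
+# Editor's gloss (a reading of the config above, nothing added): the `if:`
+# gate restricts publishing to tag pushes, and `twine upload` authenticates
+# to PyPI as the `__token__` user with the `pypi_password` repository secret.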

+ 123 - 0
.gitignore

@@ -0,0 +1,123 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+#  Usually these files are written by a python script from a template
+#  before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+.hypothesis/
+.pytest_cache/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/en/_build/
+docs/zh_cn/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# pyenv
+.python-version
+
+# celery beat schedule file
+celerybeat-schedule
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+data/
+data
+.vscode
+.idea
+.DS_Store
+
+# custom
+*.pkl
+*.pkl.json
+*.log.json
+docs/modelzoo_statistics.md
+mmdet/.mim
+work_dirs/
+
+# Pytorch
+*.pth
+*.py~
+*.sh~

+ 14 - 0
.owners.yml

@@ -0,0 +1,14 @@
+assign:
+  strategy:
+    # random
+    daily-shift-based
+  schedule:
+    '*/1 * * * *'  # cron syntax: fires every minute
+  assignees:
+    - Czm369
+    - hhaAndroid
+    - jbwang1997
+    - RangiLyu
+    - BIGWangYuDong
+    - chhluo
+    - ZwwWayne

+ 61 - 0
.pre-commit-config-zh-cn.yaml

@@ -0,0 +1,61 @@
+exclude: ^tests/data/
+repos:
+  - repo: https://gitee.com/openmmlab/mirrors-flake8
+    rev: 5.0.4
+    hooks:
+      - id: flake8
+  - repo: https://gitee.com/openmmlab/mirrors-isort
+    rev: 5.11.5
+    hooks:
+      - id: isort
+  - repo: https://gitee.com/openmmlab/mirrors-yapf
+    rev: v0.32.0
+    hooks:
+      - id: yapf
+  - repo: https://gitee.com/openmmlab/mirrors-pre-commit-hooks
+    rev: v4.3.0
+    hooks:
+      - id: trailing-whitespace
+      - id: check-yaml
+      - id: end-of-file-fixer
+      - id: requirements-txt-fixer
+      - id: double-quote-string-fixer
+      - id: check-merge-conflict
+      - id: fix-encoding-pragma
+        args: ["--remove"]
+      - id: mixed-line-ending
+        args: ["--fix=lf"]
+  - repo: https://gitee.com/openmmlab/mirrors-mdformat
+    rev: 0.7.9
+    hooks:
+      - id: mdformat
+        args: ["--number"]
+        additional_dependencies:
+          - mdformat-openmmlab
+          - mdformat_frontmatter
+          - linkify-it-py
+  - repo: https://gitee.com/openmmlab/mirrors-codespell
+    rev: v2.2.1
+    hooks:
+      - id: codespell
+  - repo: https://gitee.com/openmmlab/mirrors-docformatter
+    rev: v1.3.1
+    hooks:
+      - id: docformatter
+        args: ["--in-place", "--wrap-descriptions", "79"]
+  - repo: https://gitee.com/openmmlab/mirrors-pyupgrade
+    rev: v3.0.0
+    hooks:
+      - id: pyupgrade
+        args: ["--py36-plus"]
+  - repo: https://gitee.com/open-mmlab/pre-commit-hooks
+    rev: v0.2.0
+    hooks:
+      - id: check-algo-readme
+      - id: check-copyright
+        args: ["mmdet"]
+#  - repo: https://gitee.com/openmmlab/mirrors-mypy
+#    rev: v0.812
+#    hooks:
+#      - id: mypy
+#        exclude: "docs"

+ 50 - 0
.pre-commit-config.yaml

@@ -0,0 +1,50 @@
+repos:
+  - repo: https://github.com/PyCQA/flake8
+    rev: 5.0.4
+    hooks:
+      - id: flake8
+  - repo: https://github.com/PyCQA/isort
+    rev: 5.11.5
+    hooks:
+      - id: isort
+  - repo: https://github.com/pre-commit/mirrors-yapf
+    rev: v0.32.0
+    hooks:
+      - id: yapf
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v4.3.0
+    hooks:
+      - id: trailing-whitespace
+      - id: check-yaml
+      - id: end-of-file-fixer
+      - id: requirements-txt-fixer
+      - id: double-quote-string-fixer
+      - id: check-merge-conflict
+      - id: fix-encoding-pragma
+        args: ["--remove"]
+      - id: mixed-line-ending
+        args: ["--fix=lf"]
+  - repo: https://github.com/codespell-project/codespell
+    rev: v2.2.1
+    hooks:
+      - id: codespell
+  - repo: https://github.com/executablebooks/mdformat
+    rev: 0.7.9
+    hooks:
+      - id: mdformat
+        args: ["--number"]
+        additional_dependencies:
+          - mdformat-openmmlab
+          - mdformat_frontmatter
+          - linkify-it-py
+  - repo: https://github.com/myint/docformatter
+    rev: v1.3.1
+    hooks:
+      - id: docformatter
+        args: ["--in-place", "--wrap-descriptions", "79"]
+  - repo: https://github.com/open-mmlab/pre-commit-hooks
+    rev: v0.2.0  # Use the ref you want to point at
+    hooks:
+      - id: check-algo-readme
+      - id: check-copyright
+        args: ["mmdet"]  # replace the dir_to_check with your expected directory to check

+ 9 - 0
.readthedocs.yml

@@ -0,0 +1,9 @@
+version: 2
+
+formats: all
+
+python:
+  version: 3.7
+  install:
+    - requirements: requirements/docs.txt
+    - requirements: requirements/readthedocs.txt

+ 8 - 0
CITATION.cff

@@ -0,0 +1,8 @@
+cff-version: 1.2.0
+message: "If you use this software, please cite it as below."
+authors:
+  - name: "MMDetection Contributors"
+title: "OpenMMLab Detection Toolbox and Benchmark"
+date-released: 2018-08-22
+url: "https://github.com/open-mmlab/mmdetection"
+license: Apache-2.0

+ 203 - 0
LICENSE

@@ -0,0 +1,203 @@
+Copyright 2018-2023 OpenMMLab. All rights reserved.
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!)  The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright 2018-2023 OpenMMLab.
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.

+ 6 - 0
MANIFEST.in

@@ -0,0 +1,6 @@
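+# Package-data manifest for mmdet: requirement specs, the version file, and
+# the `.mim` copies of the model index, demos, configs, and tools that the
+# `mim` command-line tool uses to locate them in an installed package.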
+include requirements/*.txt
+include mmdet/VERSION
+include mmdet/.mim/model-index.yml
+include mmdet/.mim/demo/*/*
+recursive-include mmdet/.mim/configs *.py *.yml
+recursive-include mmdet/.mim/tools *.sh *.py

+ 433 - 0
README.md

@@ -0,0 +1,433 @@
+<div align="center">
+  <img src="resources/mmdet-logo.png" width="600"/>
+  <div>&nbsp;</div>
+  <div align="center">
+    <b><font size="5">OpenMMLab website</font></b>
+    <sup>
+      <a href="https://openmmlab.com">
+        <i><font size="4">HOT</font></i>
+      </a>
+    </sup>
+    &nbsp;&nbsp;&nbsp;&nbsp;
+    <b><font size="5">OpenMMLab platform</font></b>
+    <sup>
+      <a href="https://platform.openmmlab.com">
+        <i><font size="4">TRY IT OUT</font></i>
+      </a>
+    </sup>
+  </div>
+  <div>&nbsp;</div>
+
+[![PyPI](https://img.shields.io/pypi/v/mmdet)](https://pypi.org/project/mmdet)
+[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmdetection.readthedocs.io/en/latest/)
+[![badge](https://github.com/open-mmlab/mmdetection/workflows/build/badge.svg)](https://github.com/open-mmlab/mmdetection/actions)
+[![codecov](https://codecov.io/gh/open-mmlab/mmdetection/branch/main/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmdetection)
+[![license](https://img.shields.io/github/license/open-mmlab/mmdetection.svg)](https://github.com/open-mmlab/mmdetection/blob/main/LICENSE)
+[![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmdetection.svg)](https://github.com/open-mmlab/mmdetection/issues)
+[![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmdetection.svg)](https://github.com/open-mmlab/mmdetection/issues)
+
+[📘Documentation](https://mmdetection.readthedocs.io/en/latest/) |
+[🛠️Installation](https://mmdetection.readthedocs.io/en/latest/get_started.html) |
+[👀Model Zoo](https://mmdetection.readthedocs.io/en/latest/model_zoo.html) |
+[🆕Update News](https://mmdetection.readthedocs.io/en/latest/notes/changelog.html) |
+[🚀Ongoing Projects](https://github.com/open-mmlab/mmdetection/projects) |
+[🤔Reporting Issues](https://github.com/open-mmlab/mmdetection/issues/new/choose)
+
+</div>
+
+<div align="center">
+
+English | [简体中文](README_zh-CN.md)
+
+</div>
+
+<div align="center">
+  <a href="https://openmmlab.medium.com/" style="text-decoration:none;">
+    <img src="https://user-images.githubusercontent.com/25839884/219255827-67c1a27f-f8c5-46a9-811d-5e57448c61d1.png" width="3%" alt="" /></a>
+  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
+  <a href="https://discord.com/channels/1037617289144569886/1046608014234370059" style="text-decoration:none;">
+    <img src="https://user-images.githubusercontent.com/25839884/218347213-c080267f-cbb6-443e-8532-8e1ed9a58ea9.png" width="3%" alt="" /></a>
+  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
+  <a href="https://twitter.com/OpenMMLab" style="text-decoration:none;">
+    <img src="https://user-images.githubusercontent.com/25839884/218346637-d30c8a0f-3eba-4699-8131-512fb06d46db.png" width="3%" alt="" /></a>
+  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
+  <a href="https://www.youtube.com/openmmlab" style="text-decoration:none;">
+    <img src="https://user-images.githubusercontent.com/25839884/218346691-ceb2116a-465a-40af-8424-9f30d2348ca9.png" width="3%" alt="" /></a>
+  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
+  <a href="https://space.bilibili.com/1293512903" style="text-decoration:none;">
+    <img src="https://user-images.githubusercontent.com/25839884/219026751-d7d14cce-a7c9-4e82-9942-8375fca65b99.png" width="3%" alt="" /></a>
+  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
+  <a href="https://www.zhihu.com/people/openmmlab" style="text-decoration:none;">
+    <img src="https://user-images.githubusercontent.com/25839884/219026120-ba71e48b-6e94-4bd4-b4e9-b7d175b5e362.png" width="3%" alt="" /></a>
+</div>
+
+## Introduction
+
+MMDetection is an open source object detection toolbox based on PyTorch. It is
+a part of the [OpenMMLab](https://openmmlab.com/) project.
+
+The main branch works with **PyTorch 1.6+**.
+
+<img src="https://user-images.githubusercontent.com/12907710/187674113-2074d658-f2fb-42d1-ac15-9c4a695e64d7.png"/>
+
+<details open>
+<summary>Major features</summary>
+
+- **Modular Design**
+
+  We decompose the detection framework into different components, so one can easily construct a customized object detection framework by combining different modules (a minimal sketch follows this feature list).
+
+- **Support for multiple tasks out of the box**
+
+  The toolbox directly supports multiple detection tasks such as **object detection**, **instance segmentation**, **panoptic segmentation**, and **semi-supervised object detection**.
+
+- **High efficiency**
+
+  All basic bbox and mask operations run on GPUs. The training speed is faster than or comparable to other codebases, including [Detectron2](https://github.com/facebookresearch/detectron2), [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark) and [SimpleDet](https://github.com/TuSimple/simpledet).
+
+- **State of the art**
+
+  The toolbox stems from the codebase developed by the *MMDet* team, which won the [COCO Detection Challenge](http://cocodataset.org/#detection-leaderboard) in 2018, and we keep pushing it forward.
+  The newly released [RTMDet](configs/rtmdet) also achieves new state-of-the-art results on real-time instance segmentation and rotated object detection, as well as the best parameter-accuracy trade-off on object detection.
+
+</details>
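+
+As a taste of this modular design, here is a minimal sketch of building a detector from a config and replacing one of its components. It assumes the MMEngine config and registry APIs; the config path and the neck swap are purely illustrative, not a recommended recipe.
+
+```python
+# Minimal sketch: build a detector from a config, then swap one component.
+from mmengine.config import Config
+
+from mmdet.registry import MODELS
+from mmdet.utils import register_all_modules
+
+register_all_modules()  # register MMDetection modules into the registries
+
+cfg = Config.fromfile('configs/retinanet/retinanet_r50_fpn_1x_coco.py')
+cfg.model.neck.type = 'PAFPN'  # replace the FPN neck without touching code
+model = MODELS.build(cfg.model)
+```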
+
+Apart from MMDetection, we have also released [MMEngine](https://github.com/open-mmlab/mmengine) for model training and [MMCV](https://github.com/open-mmlab/mmcv) for computer vision research, on which this toolbox heavily depends.
+
+## What's New
+
+### Highlight
+
+We are excited to announce our latest work on real-time object recognition tasks, **RTMDet**, a family of fully convolutional single-stage detectors. RTMDet not only achieves the best parameter-accuracy trade-off on object detection from tiny to extra-large model sizes but also obtains new state-of-the-art performance on instance segmentation and rotated object detection tasks. Details can be found in the [technical report](https://arxiv.org/abs/2212.07784). Pre-trained models are [here](configs/rtmdet).
+
+[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/real-time-instance-segmentation-on-mscoco)](https://paperswithcode.com/sota/real-time-instance-segmentation-on-mscoco?p=rtmdet-an-empirical-study-of-designing-real)
+[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/object-detection-in-aerial-images-on-dota-1)](https://paperswithcode.com/sota/object-detection-in-aerial-images-on-dota-1?p=rtmdet-an-empirical-study-of-designing-real)
+[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/object-detection-in-aerial-images-on-hrsc2016)](https://paperswithcode.com/sota/object-detection-in-aerial-images-on-hrsc2016?p=rtmdet-an-empirical-study-of-designing-real)
+
+| Task                     | Dataset | AP                                       | FPS (TensorRT FP16, batch size 1, RTX 3090) |
+| ------------------------ | ------- | ---------------------------------------- | ------------------------------------------- |
+| Object Detection         | COCO    | 52.8                                     | 322                                         |
+| Instance Segmentation    | COCO    | 44.6                                     | 188                                         |
+| Rotated Object Detection | DOTA    | 78.9 (single-scale) / 81.3 (multi-scale) | 121                                         |
+
+<div align=center>
+<img src="https://user-images.githubusercontent.com/12907710/208044554-1e8de6b5-48d8-44e4-a7b5-75076c7ebb71.png"/>
+</div>
+
+**v3.0.0** was released on 2023-04-06:
+
+- Release MMDetection 3.0.0 official version
+- Support a semi-automatic annotation workflow based on [Label-Studio](projects/LabelStudio) (#10039)
+- Support [EfficientDet](projects/EfficientDet) in projects (#9810)
+
+## Installation
+
+Please refer to [Installation](https://mmdetection.readthedocs.io/en/latest/get_started.html) for installation instructions.
+
+## Getting Started
+
+Please see [Overview](https://mmdetection.readthedocs.io/en/latest/get_started.html) for a general introduction to MMDetection.
+
+For detailed user guides and advanced guides, please refer to our [documentation](https://mmdetection.readthedocs.io/en/latest/):
+
+- User Guides
+
+  <details>
+
+  - [Train & Test](https://mmdetection.readthedocs.io/en/latest/user_guides/index.html#train-test)
+    - [Learn about Configs](https://mmdetection.readthedocs.io/en/latest/user_guides/config.html)
+    - [Inference with existing models](https://mmdetection.readthedocs.io/en/latest/user_guides/inference.html)
+    - [Dataset Prepare](https://mmdetection.readthedocs.io/en/latest/user_guides/dataset_prepare.html)
+    - [Test existing models on standard datasets](https://mmdetection.readthedocs.io/en/latest/user_guides/test.html)
+    - [Train predefined models on standard datasets](https://mmdetection.readthedocs.io/en/latest/user_guides/train.html)
+    - [Train with customized datasets](https://mmdetection.readthedocs.io/en/latest/user_guides/train.html#train-with-customized-datasets)
+    - [Train with customized models and standard datasets](https://mmdetection.readthedocs.io/en/latest/user_guides/new_model.html)
+    - [Finetuning Models](https://mmdetection.readthedocs.io/en/latest/user_guides/finetune.html)
+    - [Test Results Submission](https://mmdetection.readthedocs.io/en/latest/user_guides/test_results_submission.html)
+    - [Weight initialization](https://mmdetection.readthedocs.io/en/latest/user_guides/init_cfg.html)
+    - [Use a single stage detector as RPN](https://mmdetection.readthedocs.io/en/latest/user_guides/single_stage_as_rpn.html)
+    - [Semi-supervised Object Detection](https://mmdetection.readthedocs.io/en/latest/user_guides/semi_det.html)
+  - [Useful Tools](https://mmdetection.readthedocs.io/en/latest/user_guides/index.html#useful-tools)
+
+  </details>
+
+- Advanced Guides
+
+  <details>
+
+  - [Basic Concepts](https://mmdetection.readthedocs.io/en/latest/advanced_guides/index.html#basic-concepts)
+  - [Component Customization](https://mmdetection.readthedocs.io/en/latest/advanced_guides/index.html#component-customization)
+  - [How to](https://mmdetection.readthedocs.io/en/latest/advanced_guides/index.html#how-to)
+
+  </details>
+
+We also provide an object detection Colab tutorial [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](demo/MMDet_Tutorial.ipynb) and an instance segmentation Colab tutorial [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](demo/MMDet_InstanceSeg_Tutorial.ipynb).
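+
+For a quick feel of the inference workflow, here is a minimal sketch assuming the `DetInferencer` high-level API; the model alias and image path are illustrative.
+
+```python
+# Minimal sketch: run a pre-trained detector on one image.
+from mmdet.apis import DetInferencer
+
+# The matching checkpoint is downloaded on first use.
+inferencer = DetInferencer(model='rtmdet_tiny_8xb32-300e_coco')
+result = inferencer('demo/demo.jpg', out_dir='outputs/')
+print(result['predictions'][0]['scores'][:3])  # top detection scores
+```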
+
+To migrate from MMDetection 2.x, please refer to [migration](https://mmdetection.readthedocs.io/en/latest/migration.html).
+
+## Overview of Benchmark and Model Zoo
+
+Results and models are available in the [model zoo](docs/en/model_zoo.md).
+
+<div align="center">
+  <b>Architectures</b>
+</div>
+<table align="center">
+  <tbody>
+    <tr align="center" valign="bottom">
+      <td>
+        <b>Object Detection</b>
+      </td>
+      <td>
+        <b>Instance Segmentation</b>
+      </td>
+      <td>
+        <b>Panoptic Segmentation</b>
+      </td>
+      <td>
+        <b>Other</b>
+      </td>
+    </tr>
+    <tr valign="top">
+      <td>
+        <ul>
+            <li><a href="configs/fast_rcnn">Fast R-CNN (ICCV'2015)</a></li>
+            <li><a href="configs/faster_rcnn">Faster R-CNN (NeurIPS'2015)</a></li>
+            <li><a href="configs/rpn">RPN (NeurIPS'2015)</a></li>
+            <li><a href="configs/ssd">SSD (ECCV'2016)</a></li>
+            <li><a href="configs/retinanet">RetinaNet (ICCV'2017)</a></li>
+            <li><a href="configs/cascade_rcnn">Cascade R-CNN (CVPR'2018)</a></li>
+            <li><a href="configs/yolo">YOLOv3 (ArXiv'2018)</a></li>
+            <li><a href="configs/cornernet">CornerNet (ECCV'2018)</a></li>
+            <li><a href="configs/grid_rcnn">Grid R-CNN (CVPR'2019)</a></li>
+            <li><a href="configs/guided_anchoring">Guided Anchoring (CVPR'2019)</a></li>
+            <li><a href="configs/fsaf">FSAF (CVPR'2019)</a></li>
+            <li><a href="configs/centernet">CenterNet (CVPR'2019)</a></li>
+            <li><a href="configs/libra_rcnn">Libra R-CNN (CVPR'2019)</a></li>
+            <li><a href="configs/tridentnet">TridentNet (ICCV'2019)</a></li>
+            <li><a href="configs/fcos">FCOS (ICCV'2019)</a></li>
+            <li><a href="configs/reppoints">RepPoints (ICCV'2019)</a></li>
+            <li><a href="configs/free_anchor">FreeAnchor (NeurIPS'2019)</a></li>
+            <li><a href="configs/cascade_rpn">CascadeRPN (NeurIPS'2019)</a></li>
+            <li><a href="configs/foveabox">Foveabox (TIP'2020)</a></li>
+            <li><a href="configs/double_heads">Double-Head R-CNN (CVPR'2020)</a></li>
+            <li><a href="configs/atss">ATSS (CVPR'2020)</a></li>
+            <li><a href="configs/nas_fcos">NAS-FCOS (CVPR'2020)</a></li>
+            <li><a href="configs/centripetalnet">CentripetalNet (CVPR'2020)</a></li>
+            <li><a href="configs/autoassign">AutoAssign (ArXiv'2020)</a></li>
+            <li><a href="configs/sabl">Side-Aware Boundary Localization (ECCV'2020)</a></li>
+            <li><a href="configs/dynamic_rcnn">Dynamic R-CNN (ECCV'2020)</a></li>
+            <li><a href="configs/detr">DETR (ECCV'2020)</a></li>
+            <li><a href="configs/paa">PAA (ECCV'2020)</a></li>
+            <li><a href="configs/vfnet">VarifocalNet (CVPR'2021)</a></li>
+            <li><a href="configs/sparse_rcnn">Sparse R-CNN (CVPR'2021)</a></li>
+            <li><a href="configs/yolof">YOLOF (CVPR'2021)</a></li>
+            <li><a href="configs/yolox">YOLOX (CVPR'2021)</a></li>
+            <li><a href="configs/deformable_detr">Deformable DETR (ICLR'2021)</a></li>
+            <li><a href="configs/tood">TOOD (ICCV'2021)</a></li>
+            <li><a href="configs/ddod">DDOD (ACM MM'2021)</a></li>
+            <li><a href="configs/rtmdet">RTMDet (ArXiv'2022)</a></li>
+            <li><a href="configs/conditional_detr">Conditional DETR (ICCV'2021)</a></li>
+            <li><a href="configs/dab_detr">DAB-DETR (ICLR'2022)</a></li>
+            <li><a href="configs/dino">DINO (ICLR'2023)</a></li>
+            <li><a href="projects/DiffusionDet">DiffusionDet (ArXiv'2023)</a></li>
+            <li><a href="projects/EfficientDet">EfficientDet (CVPR'2020)</a></li>
+            <li><a href="projects/Detic">Detic (ECCV'2022)</a></li>
+      </ul>
+      </td>
+      <td>
+        <ul>
+          <li><a href="configs/mask_rcnn">Mask R-CNN (ICCV'2017)</a></li>
+          <li><a href="configs/cascade_rcnn">Cascade Mask R-CNN (CVPR'2018)</a></li>
+          <li><a href="configs/ms_rcnn">Mask Scoring R-CNN (CVPR'2019)</a></li>
+          <li><a href="configs/htc">Hybrid Task Cascade (CVPR'2019)</a></li>
+          <li><a href="configs/yolact">YOLACT (ICCV'2019)</a></li>
+          <li><a href="configs/instaboost">InstaBoost (ICCV'2019)</a></li>
+          <li><a href="configs/solo">SOLO (ECCV'2020)</a></li>
+          <li><a href="configs/point_rend">PointRend (CVPR'2020)</a></li>
+          <li><a href="configs/detectors">DetectoRS (ArXiv'2020)</a></li>
+          <li><a href="configs/solov2">SOLOv2 (NeurIPS'2020)</a></li>
+          <li><a href="configs/scnet">SCNet (AAAI'2021)</a></li>
+          <li><a href="configs/queryinst">QueryInst (ICCV'2021)</a></li>
+          <li><a href="configs/mask2former">Mask2Former (ArXiv'2021)</a></li>
+          <li><a href="configs/condinst">CondInst (ECCV'2020)</a></li>
+          <li><a href="projects/SparseInst">SparseInst (CVPR'2022)</a></li>
+          <li><a href="configs/rtmdet">RTMDet (ArXiv'2022)</a></li>
+          <li><a href="configs/boxinst">BoxInst (CVPR'2021)</a></li>
+        </ul>
+      </td>
+      <td>
+        <ul>
+          <li><a href="configs/panoptic_fpn">Panoptic FPN (CVPR'2019)</a></li>
+          <li><a href="configs/maskformer">MaskFormer (NeurIPS'2021)</a></li>
+          <li><a href="configs/mask2former">Mask2Former (ArXiv'2021)</a></li>
+        </ul>
+      </td>
+      <td>
+        <ul>
+          <li><b>Contrastive Learning</b></li>
+          <ul>
+            <li><a href="configs/selfsup_pretrain">SwAV (NeurIPS'2020)</a></li>
+            <li><a href="configs/selfsup_pretrain">MoCo (CVPR'2020)</a></li>
+            <li><a href="configs/selfsup_pretrain">MoCov2 (ArXiv'2020)</a></li>
+          </ul>
+          <li><b>Distillation</b></li>
+          <ul>
+            <li><a href="configs/ld">Localization Distillation (CVPR'2022)</a></li>
+            <li><a href="configs/lad">Label Assignment Distillation (WACV'2022)</a></li>
+          </ul>
+          <li><b>Semi-Supervised Object Detection</b></li>
+          <ul>
+            <li><a href="configs/soft_teacher">Soft Teacher (ICCV'2021)</a></li>
+          </ul>
+        </ul>
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+<div align="center">
+  <b>Components</b>
+</div>
+<table align="center">
+  <tbody>
+    <tr align="center" valign="bottom">
+      <td>
+        <b>Backbones</b>
+      </td>
+      <td>
+        <b>Necks</b>
+      </td>
+      <td>
+        <b>Loss</b>
+      </td>
+      <td>
+        <b>Common</b>
+      </td>
+    </tr>
+    <tr valign="top">
+      <td>
+      <ul>
+        <li>VGG (ICLR'2015)</li>
+        <li>ResNet (CVPR'2016)</li>
+        <li>ResNeXt (CVPR'2017)</li>
+        <li>MobileNetV2 (CVPR'2018)</li>
+        <li><a href="configs/hrnet">HRNet (CVPR'2019)</a></li>
+        <li><a href="configs/empirical_attention">Generalized Attention (ICCV'2019)</a></li>
+        <li><a href="configs/gcnet">GCNet (ICCVW'2019)</a></li>
+        <li><a href="configs/res2net">Res2Net (TPAMI'2020)</a></li>
+        <li><a href="configs/regnet">RegNet (CVPR'2020)</a></li>
+        <li><a href="configs/resnest">ResNeSt (ArXiv'2020)</a></li>
+        <li><a href="configs/pvt">PVT (ICCV'2021)</a></li>
+        <li><a href="configs/swin">Swin (CVPR'2021)</a></li>
+        <li><a href="configs/pvt">PVTv2 (ArXiv'2021)</a></li>
+        <li><a href="configs/resnet_strikes_back">ResNet strikes back (ArXiv'2021)</a></li>
+        <li><a href="configs/efficientnet">EfficientNet (ArXiv'2021)</a></li>
+        <li><a href="configs/convnext">ConvNeXt (CVPR'2022)</a></li>
+        <li><a href="projects/ConvNeXt-V2">ConvNeXtv2 (ArXiv'2023)</a></li>
+      </ul>
+      </td>
+      <td>
+      <ul>
+        <li><a href="configs/pafpn">PAFPN (CVPR'2018)</a></li>
+        <li><a href="configs/nas_fpn">NAS-FPN (CVPR'2019)</a></li>
+        <li><a href="configs/carafe">CARAFE (ICCV'2019)</a></li>
+        <li><a href="configs/fpg">FPG (ArXiv'2020)</a></li>
+        <li><a href="configs/groie">GRoIE (ICPR'2020)</a></li>
+        <li><a href="configs/dyhead">DyHead (CVPR'2021)</a></li>
+      </ul>
+      </td>
+      <td>
+        <ul>
+          <li><a href="configs/ghm">GHM (AAAI'2019)</a></li>
+          <li><a href="configs/gfl">Generalized Focal Loss (NeurIPS'2020)</a></li>
+          <li><a href="configs/seesaw_loss">Seasaw Loss (CVPR'2021)</a></li>
+        </ul>
+      </td>
+      <td>
+        <ul>
+          <li><a href="configs/faster_rcnn/faster-rcnn_r50_fpn_ohem_1x_coco.py">OHEM (CVPR'2016)</a></li>
+          <li><a href="configs/gn">Group Normalization (ECCV'2018)</a></li>
+          <li><a href="configs/dcn">DCN (ICCV'2017)</a></li>
+          <li><a href="configs/dcnv2">DCNv2 (CVPR'2019)</a></li>
+          <li><a href="configs/gn+ws">Weight Standardization (ArXiv'2019)</a></li>
+          <li><a href="configs/pisa">Prime Sample Attention (CVPR'2020)</a></li>
+          <li><a href="configs/strong_baselines">Strong Baselines (CVPR'2021)</a></li>
+          <li><a href="configs/resnet_strikes_back">Resnet strikes back (ArXiv'2021)</a></li>
+        </ul>
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+Some other methods are also supported in [projects using MMDetection](./docs/en/notes/projects.md).
+
+## FAQ
+
+Please refer to [FAQ](docs/en/notes/faq.md) for frequently asked questions.
+
+## Contributing
+
+We appreciate all contributions to improve MMDetection. Ongoing projects can be found in our [GitHub Projects](https://github.com/open-mmlab/mmdetection/projects), and community users are welcome to participate in them. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guidelines.
+
+## Acknowledgement
+
+MMDetection is an open source project that is contributed to by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback.
+We hope the toolbox and benchmark can serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop new detectors.
+
+## Citation
+
+If you use this toolbox or benchmark in your research, please cite this project.
+
+```
+@article{mmdetection,
+  title   = {{MMDetection}: Open MMLab Detection Toolbox and Benchmark},
+  author  = {Chen, Kai and Wang, Jiaqi and Pang, Jiangmiao and Cao, Yuhang and
+             Xiong, Yu and Li, Xiaoxiao and Sun, Shuyang and Feng, Wansen and
+             Liu, Ziwei and Xu, Jiarui and Zhang, Zheng and Cheng, Dazhi and
+             Zhu, Chenchen and Cheng, Tianheng and Zhao, Qijie and Li, Buyu and
+             Lu, Xin and Zhu, Rui and Wu, Yue and Dai, Jifeng and Wang, Jingdong
+             and Shi, Jianping and Ouyang, Wanli and Loy, Chen Change and Lin, Dahua},
+  journal= {arXiv preprint arXiv:1906.07155},
+  year={2019}
+}
+```
+
+## License
+
+This project is released under the [Apache 2.0 license](LICENSE).
+
+## Projects in OpenMMLab
+
+- [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab foundational library for training deep learning models.
+- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
+- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
+- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
+- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
+- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
+- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
+- [MMYOLO](https://github.com/open-mmlab/mmyolo): OpenMMLab YOLO series toolbox and benchmark.
+- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
+- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
+- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
+- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
+- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
+- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
+- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
+- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
+- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
+- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
+- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
+- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
+- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.

+ 452 - 0
README_zh-CN.md

@@ -0,0 +1,452 @@
+<div align="center">
+  <img src="resources/mmdet-logo.png" width="600"/>
+  <div>&nbsp;</div>
+  <div align="center">
+    <b><font size="5">OpenMMLab 官网</font></b>
+    <sup>
+      <a href="https://openmmlab.com">
+        <i><font size="4">HOT</font></i>
+      </a>
+    </sup>
+    &nbsp;&nbsp;&nbsp;&nbsp;
+    <b><font size="5">OpenMMLab 开放平台</font></b>
+    <sup>
+      <a href="https://platform.openmmlab.com">
+        <i><font size="4">TRY IT OUT</font></i>
+      </a>
+    </sup>
+  </div>
+  <div>&nbsp;</div>
+
+[![PyPI](https://img.shields.io/pypi/v/mmdet)](https://pypi.org/project/mmdet)
+[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmdetection.readthedocs.io/en/latest/)
+[![badge](https://github.com/open-mmlab/mmdetection/workflows/build/badge.svg)](https://github.com/open-mmlab/mmdetection/actions)
+[![codecov](https://codecov.io/gh/open-mmlab/mmdetection/branch/main/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmdetection)
+[![license](https://img.shields.io/github/license/open-mmlab/mmdetection.svg)](https://github.com/open-mmlab/mmdetection/blob/main/LICENSE)
+[![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmdetection.svg)](https://github.com/open-mmlab/mmdetection/issues)
+[![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmdetection.svg)](https://github.com/open-mmlab/mmdetection/issues)
+
+[📘Documentation](https://mmdetection.readthedocs.io/zh_CN/latest/) |
+[🛠️Installation](https://mmdetection.readthedocs.io/zh_CN/latest/get_started.html) |
+[👀Model Zoo](https://mmdetection.readthedocs.io/zh_CN/latest/model_zoo.html) |
+[🆕Update News](https://mmdetection.readthedocs.io/en/latest/notes/changelog.html) |
+[🚀Ongoing Projects](https://github.com/open-mmlab/mmdetection/projects) |
+[🤔Reporting Issues](https://github.com/open-mmlab/mmdetection/issues/new/choose)
+
+</div>
+
+<div align="center">
+
+[English](README.md) | 简体中文
+
+</div>
+
+<div align="center">
+  <a href="https://openmmlab.medium.com/" style="text-decoration:none;">
+    <img src="https://user-images.githubusercontent.com/25839884/219255827-67c1a27f-f8c5-46a9-811d-5e57448c61d1.png" width="3%" alt="" /></a>
+  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
+  <a href="https://discord.com/channels/1037617289144569886/1046608014234370059" style="text-decoration:none;">
+    <img src="https://user-images.githubusercontent.com/25839884/218347213-c080267f-cbb6-443e-8532-8e1ed9a58ea9.png" width="3%" alt="" /></a>
+  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
+  <a href="https://twitter.com/OpenMMLab" style="text-decoration:none;">
+    <img src="https://user-images.githubusercontent.com/25839884/218346637-d30c8a0f-3eba-4699-8131-512fb06d46db.png" width="3%" alt="" /></a>
+  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
+  <a href="https://www.youtube.com/openmmlab" style="text-decoration:none;">
+    <img src="https://user-images.githubusercontent.com/25839884/218346691-ceb2116a-465a-40af-8424-9f30d2348ca9.png" width="3%" alt="" /></a>
+  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
+  <a href="https://space.bilibili.com/1293512903" style="text-decoration:none;">
+    <img src="https://user-images.githubusercontent.com/25839884/219026751-d7d14cce-a7c9-4e82-9942-8375fca65b99.png" width="3%" alt="" /></a>
+  <img src="https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png" width="3%" alt="" />
+  <a href="https://www.zhihu.com/people/openmmlab" style="text-decoration:none;">
+    <img src="https://user-images.githubusercontent.com/25839884/219026120-ba71e48b-6e94-4bd4-b4e9-b7d175b5e362.png" width="3%" alt="" /></a>
+</div>
+
+## Introduction
+
+MMDetection is an open source object detection toolbox based on PyTorch. It is a part of the [OpenMMLab](https://openmmlab.com/) project.
+
+The main branch currently supports PyTorch 1.6 and later versions.
+
+<img src="https://user-images.githubusercontent.com/12907710/187674113-2074d658-f2fb-42d1-ac15-9c4a695e64d7.png"/>
+
+<details open>
+<summary>Major features</summary>
+
+- **Modular design**
+
+  MMDetection decomposes the detection framework into different modular components, and users can easily build a customized detection model by combining different components.
+
+- **Support for multiple detection tasks**
+
+  MMDetection supports a variety of detection tasks, including **object detection**, **instance segmentation**, **panoptic segmentation**, and **semi-supervised object detection**.
+
+- **High efficiency**
+
+  All basic bbox and mask operations have GPU implementations, and the training speed is faster than or comparable to other codebases, including [Detectron2](https://github.com/facebookresearch/detectron2), [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark) and [SimpleDet](https://github.com/TuSimple/simpledet).
+
+- **State of the art**
+
+  MMDetection stems from the codebase developed by the *MMDet* team, winner of the COCO 2018 object detection challenge, and we have kept improving it since then.
+  The newly released [RTMDet](configs/rtmdet) also achieves state-of-the-art results on real-time instance segmentation and rotated object detection, as well as the best parameter-accuracy trade-off among object detection models.
+
+</details>
+
+Apart from MMDetection, we have also open-sourced the deep learning training library [MMEngine](https://github.com/open-mmlab/mmengine) and the computer vision foundation library [MMCV](https://github.com/open-mmlab/mmcv), which are the main dependencies of MMDetection.
+
+## What's New
+
+### Highlight
+
+We are excited to introduce our latest work on real-time object recognition tasks, RTMDet, a family of fully convolutional single-stage detection models. RTMDet not only achieves the best parameter-accuracy trade-off on object detection from tiny to extra-large model sizes, but also obtains state-of-the-art results on real-time instance segmentation and rotated object detection. Details can be found in the [technical report](https://arxiv.org/abs/2212.07784). Pre-trained models are available [here](configs/rtmdet).
+
+[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/real-time-instance-segmentation-on-mscoco)](https://paperswithcode.com/sota/real-time-instance-segmentation-on-mscoco?p=rtmdet-an-empirical-study-of-designing-real)
+[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/object-detection-in-aerial-images-on-dota-1)](https://paperswithcode.com/sota/object-detection-in-aerial-images-on-dota-1?p=rtmdet-an-empirical-study-of-designing-real)
+[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/rtmdet-an-empirical-study-of-designing-real/object-detection-in-aerial-images-on-hrsc2016)](https://paperswithcode.com/sota/object-detection-in-aerial-images-on-hrsc2016?p=rtmdet-an-empirical-study-of-designing-real)
+
+| Task                     | Dataset | AP                                       | FPS (TensorRT FP16, batch size 1, RTX 3090) |
+| ------------------------ | ------- | ---------------------------------------- | ------------------------------------------- |
+| Object Detection         | COCO    | 52.8                                     | 322                                         |
+| Instance Segmentation    | COCO    | 44.6                                     | 188                                         |
+| Rotated Object Detection | DOTA    | 78.9 (single-scale) / 81.3 (multi-scale) | 121                                         |
+
+<div align=center>
+<img src="https://user-images.githubusercontent.com/12907710/208044554-1e8de6b5-48d8-44e4-a7b5-75076c7ebb71.png"/>
+</div>
+
+**v3.0.0** was released on 2023-04-06:
+
+- Release MMDetection 3.0.0 official version
+- Support a semi-automatic annotation workflow based on [Label-Studio](projects/LabelStudio)
+- Support [EfficientDet](projects/EfficientDet) in projects
+
+## Installation
+
+Please refer to the [get started documentation](https://mmdetection.readthedocs.io/zh_CN/latest/get_started.html) for installation instructions.
+
+## Getting Started
+
+Please read [Overview](https://mmdetection.readthedocs.io/zh_CN/latest/get_started.html) for a first impression of MMDetection.
+
+To help users get more out of MMDetection, we provide user guides and advanced guides; please refer to our [documentation](https://mmdetection.readthedocs.io/zh_CN/latest/):
+
+- User Guides
+
+  <details>
+
+  - [Train & Test](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/index.html#train-test)
+    - [Learn about Configs](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/config.html)
+    - [Inference with existing models on standard datasets](https://mmdetection.readthedocs.io/en/latest/user_guides/inference.html)
+    - [Dataset Prepare](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/dataset_prepare.html)
+    - [Test existing models](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/test.html)
+    - [Train predefined models on standard datasets](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/train.html)
+    - [Train with customized datasets](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/train.html#train-with-customized-datasets)
+    - [Train with customized models and standard datasets](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/new_model.html)
+    - [Finetuning Models](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/finetune.html)
+    - [Test Results Submission](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/test_results_submission.html)
+    - [Weight initialization](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/init_cfg.html)
+    - [Use a single-stage detector as RPN](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/single_stage_as_rpn.html)
+    - [Semi-supervised Object Detection](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/semi_det.html)
+  - [Useful Tools](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/index.html#useful-tools)
+
+  </details>
+
+- Advanced Guides
+
+  <details>
+
+  - [Basic Concepts](https://mmdetection.readthedocs.io/zh_CN/latest/advanced_guides/index.html#basic-concepts)
+  - [Component Customization](https://mmdetection.readthedocs.io/zh_CN/latest/advanced_guides/index.html#component-customization)
+  - [How to](https://mmdetection.readthedocs.io/zh_CN/latest/advanced_guides/index.html#how-to)
+
+  </details>
+
+We also provide an object detection Colab tutorial [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](demo/MMDet_Tutorial.ipynb) and an instance segmentation Colab tutorial [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](demo/MMDet_InstanceSeg_Tutorial.ipynb).
+
+We also provide a collection of [Chinese-language articles about MMDetection](docs/zh_cn/article.md).
+
+To migrate code from version 2.x to the new version, please refer to the [migration guide](https://mmdetection.readthedocs.io/en/latest/migration.html).
+
+## Overview of Benchmark and Model Zoo
+
+Results and models are available in the [model zoo](docs/zh_cn/model_zoo.md).
+
+<div align="center">
+  <b>Architectures</b>
+</div>
+<table align="center">
+  <tbody>
+    <tr align="center" valign="bottom">
+      <td>
+        <b>Object Detection</b>
+      </td>
+      <td>
+        <b>Instance Segmentation</b>
+      </td>
+      <td>
+        <b>Panoptic Segmentation</b>
+      </td>
+      <td>
+        <b>Other</b>
+      </td>
+    </tr>
+    <tr valign="top">
+      <td>
+        <ul>
+            <li><a href="configs/fast_rcnn">Fast R-CNN (ICCV'2015)</a></li>
+            <li><a href="configs/faster_rcnn">Faster R-CNN (NeurIPS'2015)</a></li>
+            <li><a href="configs/rpn">RPN (NeurIPS'2015)</a></li>
+            <li><a href="configs/ssd">SSD (ECCV'2016)</a></li>
+            <li><a href="configs/retinanet">RetinaNet (ICCV'2017)</a></li>
+            <li><a href="configs/cascade_rcnn">Cascade R-CNN (CVPR'2018)</a></li>
+            <li><a href="configs/yolo">YOLOv3 (ArXiv'2018)</a></li>
+            <li><a href="configs/cornernet">CornerNet (ECCV'2018)</a></li>
+            <li><a href="configs/grid_rcnn">Grid R-CNN (CVPR'2019)</a></li>
+            <li><a href="configs/guided_anchoring">Guided Anchoring (CVPR'2019)</a></li>
+            <li><a href="configs/fsaf">FSAF (CVPR'2019)</a></li>
+            <li><a href="configs/centernet">CenterNet (CVPR'2019)</a></li>
+            <li><a href="configs/libra_rcnn">Libra R-CNN (CVPR'2019)</a></li>
+            <li><a href="configs/tridentnet">TridentNet (ICCV'2019)</a></li>
+            <li><a href="configs/fcos">FCOS (ICCV'2019)</a></li>
+            <li><a href="configs/reppoints">RepPoints (ICCV'2019)</a></li>
+            <li><a href="configs/free_anchor">FreeAnchor (NeurIPS'2019)</a></li>
+            <li><a href="configs/cascade_rpn">CascadeRPN (NeurIPS'2019)</a></li>
+            <li><a href="configs/foveabox">Foveabox (TIP'2020)</a></li>
+            <li><a href="configs/double_heads">Double-Head R-CNN (CVPR'2020)</a></li>
+            <li><a href="configs/atss">ATSS (CVPR'2020)</a></li>
+            <li><a href="configs/nas_fcos">NAS-FCOS (CVPR'2020)</a></li>
+            <li><a href="configs/centripetalnet">CentripetalNet (CVPR'2020)</a></li>
+            <li><a href="configs/autoassign">AutoAssign (ArXiv'2020)</a></li>
+            <li><a href="configs/sabl">Side-Aware Boundary Localization (ECCV'2020)</a></li>
+            <li><a href="configs/dynamic_rcnn">Dynamic R-CNN (ECCV'2020)</a></li>
+            <li><a href="configs/detr">DETR (ECCV'2020)</a></li>
+            <li><a href="configs/paa">PAA (ECCV'2020)</a></li>
+            <li><a href="configs/vfnet">VarifocalNet (CVPR'2021)</a></li>
+            <li><a href="configs/sparse_rcnn">Sparse R-CNN (CVPR'2021)</a></li>
+            <li><a href="configs/yolof">YOLOF (CVPR'2021)</a></li>
+            <li><a href="configs/yolox">YOLOX (CVPR'2021)</a></li>
+            <li><a href="configs/deformable_detr">Deformable DETR (ICLR'2021)</a></li>
+            <li><a href="configs/tood">TOOD (ICCV'2021)</a></li>
+            <li><a href="configs/ddod">DDOD (ACM MM'2021)</a></li>
+            <li><a href="configs/rtmdet">RTMDet (ArXiv'2022)</a></li>
+            <li><a href="configs/conditional_detr">Conditional DETR (ICCV'2021)</a></li>
+            <li><a href="configs/dab_detr">DAB-DETR (ICLR'2022)</a></li>
+            <li><a href="configs/dino">DINO (ICLR'2023)</a></li>
+            <li><a href="projects/DiffusionDet">DiffusionDet (ArXiv'2023)</a></li>
+            <li><a href="projects/EfficientDet">EfficientDet (CVPR'2020)</a></li>
+            <li><a href="projects/Detic">Detic (ECCV'2022)</a></li>
+      </ul>
+      </td>
+      <td>
+        <ul>
+          <li><a href="configs/mask_rcnn">Mask R-CNN (ICCV'2017)</a></li>
+          <li><a href="configs/cascade_rcnn">Cascade Mask R-CNN (CVPR'2018)</a></li>
+          <li><a href="configs/ms_rcnn">Mask Scoring R-CNN (CVPR'2019)</a></li>
+          <li><a href="configs/htc">Hybrid Task Cascade (CVPR'2019)</a></li>
+          <li><a href="configs/yolact">YOLACT (ICCV'2019)</a></li>
+          <li><a href="configs/instaboost">InstaBoost (ICCV'2019)</a></li>
+          <li><a href="configs/solo">SOLO (ECCV'2020)</a></li>
+          <li><a href="configs/point_rend">PointRend (CVPR'2020)</a></li>
+          <li><a href="configs/detectors">DetectoRS (ArXiv'2020)</a></li>
+          <li><a href="configs/solov2">SOLOv2 (NeurIPS'2020)</a></li>
+          <li><a href="configs/scnet">SCNet (AAAI'2021)</a></li>
+          <li><a href="configs/queryinst">QueryInst (ICCV'2021)</a></li>
+          <li><a href="configs/mask2former">Mask2Former (ArXiv'2021)</a></li>
+          <li><a href="configs/condinst">CondInst (ECCV'2020)</a></li>
+          <li><a href="projects/SparseInst">SparseInst (CVPR'2022)</a></li>
+          <li><a href="configs/rtmdet">RTMDet (ArXiv'2022)</a></li>
+          <li><a href="configs/boxinst">BoxInst (CVPR'2021)</a></li>
+        </ul>
+      </td>
+      <td>
+        <ul>
+          <li><a href="configs/panoptic_fpn">Panoptic FPN (CVPR'2019)</a></li>
+          <li><a href="configs/maskformer">MaskFormer (NeurIPS'2021)</a></li>
+          <li><a href="configs/mask2former">Mask2Former (ArXiv'2021)</a></li>
+        </ul>
+      </td>
+      <td>
+        <ul>
+          <li><b>Contrastive Learning</b></li>
+          <ul>
+            <li><a href="configs/selfsup_pretrain">SwAV (NeurIPS'2020)</a></li>
+            <li><a href="configs/selfsup_pretrain">MoCo (CVPR'2020)</a></li>
+            <li><a href="configs/selfsup_pretrain">MoCov2 (ArXiv'2020)</a></li>
+          </ul>
+          <li><b>Distillation</b></li>
+          <ul>
+            <li><a href="configs/ld">Localization Distillation (CVPR'2022)</a></li>
+            <li><a href="configs/lad">Label Assignment Distillation (WACV'2022)</a></li>
+          </ul>
+          <li><b>Semi-Supervised Object Detection</b></li>
+          <ul>
+            <li><a href="configs/soft_teacher">Soft Teacher (ICCV'2021)</a></li>
+          </ul>
+        </ul>
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+<div align="center">
+  <b>Components</b>
+</div>
+<table align="center">
+  <tbody>
+    <tr align="center" valign="bottom">
+      <td>
+        <b>Backbones</b>
+      </td>
+      <td>
+        <b>Necks</b>
+      </td>
+      <td>
+        <b>Loss</b>
+      </td>
+      <td>
+        <b>Common</b>
+      </td>
+    </tr>
+    <tr valign="top">
+      <td>
+      <ul>
+        <li>VGG (ICLR'2015)</li>
+        <li>ResNet (CVPR'2016)</li>
+        <li>ResNeXt (CVPR'2017)</li>
+        <li>MobileNetV2 (CVPR'2018)</li>
+        <li><a href="configs/hrnet">HRNet (CVPR'2019)</a></li>
+        <li><a href="configs/empirical_attention">Generalized Attention (ICCV'2019)</a></li>
+        <li><a href="configs/gcnet">GCNet (ICCVW'2019)</a></li>
+        <li><a href="configs/res2net">Res2Net (TPAMI'2020)</a></li>
+        <li><a href="configs/regnet">RegNet (CVPR'2020)</a></li>
+        <li><a href="configs/resnest">ResNeSt (ArXiv'2020)</a></li>
+        <li><a href="configs/pvt">PVT (ICCV'2021)</a></li>
+        <li><a href="configs/swin">Swin (CVPR'2021)</a></li>
+        <li><a href="configs/pvt">PVTv2 (ArXiv'2021)</a></li>
+        <li><a href="configs/resnet_strikes_back">ResNet strikes back (ArXiv'2021)</a></li>
+        <li><a href="configs/efficientnet">EfficientNet (ArXiv'2021)</a></li>
+        <li><a href="configs/convnext">ConvNeXt (CVPR'2022)</a></li>
+        <li><a href="projects/ConvNeXt-V2">ConvNeXtv2 (ArXiv'2023)</a></li>
+      </ul>
+      </td>
+      <td>
+      <ul>
+        <li><a href="configs/pafpn">PAFPN (CVPR'2018)</a></li>
+        <li><a href="configs/nas_fpn">NAS-FPN (CVPR'2019)</a></li>
+        <li><a href="configs/carafe">CARAFE (ICCV'2019)</a></li>
+        <li><a href="configs/fpg">FPG (ArXiv'2020)</a></li>
+        <li><a href="configs/groie">GRoIE (ICPR'2020)</a></li>
+        <li><a href="configs/dyhead">DyHead (CVPR'2021)</a></li>
+      </ul>
+      </td>
+      <td>
+        <ul>
+          <li><a href="configs/ghm">GHM (AAAI'2019)</a></li>
+          <li><a href="configs/gfl">Generalized Focal Loss (NeurIPS'2020)</a></li>
+          <li><a href="configs/seesaw_loss">Seasaw Loss (CVPR'2021)</a></li>
+        </ul>
+      </td>
+      <td>
+        <ul>
+          <li><a href="configs/faster_rcnn/faster_rcnn_r50_fpn_ohem_1x_coco.py">OHEM (CVPR'2016)</a></li>
+          <li><a href="configs/gn">Group Normalization (ECCV'2018)</a></li>
+          <li><a href="configs/dcn">DCN (ICCV'2017)</a></li>
+          <li><a href="configs/dcnv2">DCNv2 (CVPR'2019)</a></li>
+          <li><a href="configs/gn+ws">Weight Standardization (ArXiv'2019)</a></li>
+          <li><a href="configs/pisa">Prime Sample Attention (CVPR'2020)</a></li>
+          <li><a href="configs/strong_baselines">Strong Baselines (CVPR'2021)</a></li>
+          <li><a href="configs/resnet_strikes_back">Resnet strikes back (ArXiv'2021)</a></li>
+        </ul>
+      </td>
+    </tr>
+  </tbody>
+</table>
+
+Some other supported methods are listed in [projects based on MMDetection](./docs/zh_cn/notes/projects.md).
+
+## FAQ
+
+Please refer to the [FAQ](docs/zh_cn/notes/faq.md) for answers to common questions from other users.
+
+## Contributing
+
+We appreciate all the contributors' efforts to improve MMDetection. Ongoing projects are listed on the [GitHub Projects](https://github.com/open-mmlab/mmdetection/projects) page, and community users are very welcome to participate in them. Please refer to the [contributing guidelines](.github/CONTRIBUTING.md) for how to contribute.
+
+## Acknowledgement
+
+MMDetection is an open source project jointly developed by researchers and engineers from various colleges and companies. We appreciate all the contributors who implement their methods or add new features, as well as users who give valuable feedback. We hope this toolbox and benchmark can provide the community with flexible code tools that let users reimplement existing methods and develop their own new models, and thereby keep contributing to the open source community.
+
+## Citation
+
+If you use the code or benchmark of this project in your research, please cite MMDetection with the following bibtex entry.
+
+```
+@article{mmdetection,
+  title   = {{MMDetection}: Open MMLab Detection Toolbox and Benchmark},
+  author  = {Chen, Kai and Wang, Jiaqi and Pang, Jiangmiao and Cao, Yuhang and
+             Xiong, Yu and Li, Xiaoxiao and Sun, Shuyang and Feng, Wansen and
+             Liu, Ziwei and Xu, Jiarui and Zhang, Zheng and Cheng, Dazhi and
+             Zhu, Chenchen and Cheng, Tianheng and Zhao, Qijie and Li, Buyu and
+             Lu, Xin and Zhu, Rui and Wu, Yue and Dai, Jifeng and Wang, Jingdong
+             and Shi, Jianping and Ouyang, Wanli and Loy, Chen Change and Lin, Dahua},
+  journal= {arXiv preprint arXiv:1906.07155},
+  year={2019}
+}
+```
+
+## License
+
+This project is released under the [Apache 2.0 license](LICENSE).
+
+## Other Projects in OpenMMLab
+
+- [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab foundational library for training deep learning models.
+- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
+- [MIM](https://github.com/open-mmlab/mim): MIM is the unified entry point for OpenMMLab projects, algorithms, and models.
+- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox.
+- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab object detection toolbox.
+- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
+- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
+- [MMYOLO](https://github.com/open-mmlab/mmyolo): OpenMMLab YOLO series toolbox and benchmark.
+- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox.
+- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab full-pipeline text detection, recognition, and understanding toolbox.
+- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox.
+- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
+- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
+- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
+- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab few-shot learning toolbox and benchmark.
+- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation video understanding toolbox.
+- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab unified video perception platform.
+- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
+- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
+- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
+- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.
+
+## Join the OpenMMLab Community
+
+Scan the QR codes below to follow the OpenMMLab team's [official Zhihu account](https://www.zhihu.com/people/openmmlab) and join the OpenMMLab team's [official QQ group](https://jq.qq.com/?_wv=1027&k=aCvMxdr3).
+
+<div align="center">
+<img src="resources/zhihu_qrcode.jpg" height="400" />  <img src="resources/qq_group_qrcode.jpg" height="400" />
+</div>
+
+In the OpenMMLab community, we will
+
+- 📢 share the cutting-edge core technologies of AI frameworks
+- 💻 explain the source code of commonly used PyTorch modules
+- 📰 release the latest news about OpenMMLab
+- 🚀 introduce cutting-edge algorithms developed by OpenMMLab
+- 🏃 provide more efficient channels for questions and feedback
+- 🔥 offer a platform for developers from all walks of life to communicate
+
+The community is packed with useful content 📘 and waiting for you 💗; the OpenMMLab community looks forward to your joining 👬

+ 84 - 0
configs/_base_/datasets/cityscapes_detection.py

@@ -0,0 +1,84 @@
+# dataset settings
+dataset_type = 'CityscapesDataset'
+data_root = 'data/cityscapes/'
+
+# Example of using a different file client
+# Method 1: simply set the data root and let the file I/O module
+# automatically infer the backend from the path prefix
+# (LMDB and Memcached are not supported yet)
+
+# data_root = 's3://openmmlab/datasets/segmentation/cityscapes/'
+
+# Method 2: use `backend_args` (named `file_client_args` in versions
+# before 3.0.0rc6)
+# backend_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/segmentation/',
+#          'data/': 's3://openmmlab/datasets/segmentation/'
+#      }))
+backend_args = None
+
+train_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(
+        type='RandomResize',
+        scale=[(2048, 800), (2048, 1024)],
+        keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs')
+]
+
+test_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='Resize', scale=(2048, 1024), keep_ratio=True),
+    # If you don't have ground-truth annotations, remove this step from the pipeline
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor'))
+]
+
+train_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    batch_sampler=dict(type='AspectRatioBatchSampler'),
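+    # Cityscapes has only ~3k training images, so RepeatDataset repeats the
+    # dataset 8 times per epoch to amortize the per-epoch dataloader restart.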
+    dataset=dict(
+        type='RepeatDataset',
+        times=8,
+        dataset=dict(
+            type=dataset_type,
+            data_root=data_root,
+            ann_file='annotations/instancesonly_filtered_gtFine_train.json',
+            data_prefix=dict(img='leftImg8bit/train/'),
+            filter_cfg=dict(filter_empty_gt=True, min_size=32),
+            pipeline=train_pipeline,
+            backend_args=backend_args)))
+
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/instancesonly_filtered_gtFine_val.json',
+        data_prefix=dict(img='leftImg8bit/val/'),
+        test_mode=True,
+        filter_cfg=dict(filter_empty_gt=True, min_size=32),
+        pipeline=test_pipeline,
+        backend_args=backend_args))
+
+test_dataloader = val_dataloader
+
+val_evaluator = dict(
+    type='CocoMetric',
+    ann_file=data_root + 'annotations/instancesonly_filtered_gtFine_val.json',
+    metric='bbox',
+    backend_args=backend_args)
+
+test_evaluator = val_evaluator

+ 113 - 0
configs/_base_/datasets/cityscapes_instance.py

@@ -0,0 +1,113 @@
+# dataset settings
+dataset_type = 'CityscapesDataset'
+data_root = 'data/cityscapes/'
+
+# Example of using a different file client
+# Method 1: simply set the data root and let the file I/O module
+# automatically infer the backend from the path prefix
+# (LMDB and Memcached are not supported yet)
+
+# data_root = 's3://openmmlab/datasets/segmentation/cityscapes/'
+
+# Method 2: use `backend_args` (named `file_client_args` in versions
+# before 3.0.0rc6)
+# backend_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/segmentation/',
+#          'data/': 's3://openmmlab/datasets/segmentation/'
+#      }))
+backend_args = None
+
+train_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
+    dict(
+        type='RandomResize',
+        scale=[(2048, 800), (2048, 1024)],
+        keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs')
+]
+
+test_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='Resize', scale=(2048, 1024), keep_ratio=True),
+    # If you don't have ground-truth annotations, remove this step from the pipeline
+    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor'))
+]
+
+train_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    batch_sampler=dict(type='AspectRatioBatchSampler'),
+    dataset=dict(
+        type='RepeatDataset',
+        times=8,
+        dataset=dict(
+            type=dataset_type,
+            data_root=data_root,
+            ann_file='annotations/instancesonly_filtered_gtFine_train.json',
+            data_prefix=dict(img='leftImg8bit/train/'),
+            filter_cfg=dict(filter_empty_gt=True, min_size=32),
+            pipeline=train_pipeline,
+            backend_args=backend_args)))
+
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/instancesonly_filtered_gtFine_val.json',
+        data_prefix=dict(img='leftImg8bit/val/'),
+        test_mode=True,
+        filter_cfg=dict(filter_empty_gt=True, min_size=32),
+        pipeline=test_pipeline,
+        backend_args=backend_args))
+
+test_dataloader = val_dataloader
+
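+# Evaluate with both COCO-style bbox/segm mAP and the official Cityscapes
+# instance segmentation metric (computed via the cityscapesscripts toolkit).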
+val_evaluator = [
+    dict(
+        type='CocoMetric',
+        ann_file=data_root +
+        'annotations/instancesonly_filtered_gtFine_val.json',
+        metric=['bbox', 'segm'],
+        backend_args=backend_args),
+    dict(
+        type='CityScapesMetric',
+        seg_prefix=data_root + 'gtFine/val',
+        outfile_prefix='./work_dirs/cityscapes_metric/instance',
+        backend_args=backend_args)
+]
+
+test_evaluator = val_evaluator
+
+# inference on test dataset and
+# format the output results for submission.
+# test_dataloader = dict(
+#     batch_size=1,
+#     num_workers=2,
+#     persistent_workers=True,
+#     drop_last=False,
+#     sampler=dict(type='DefaultSampler', shuffle=False),
+#     dataset=dict(
+#         type=dataset_type,
+#         data_root=data_root,
+#         ann_file='annotations/instancesonly_filtered_gtFine_test.json',
+#         data_prefix=dict(img='leftImg8bit/test/'),
+#         test_mode=True,
+#         filter_cfg=dict(filter_empty_gt=True, min_size=32),
+#         pipeline=test_pipeline))
+# test_evaluator = dict(
+#         type='CityScapesMetric',
+#         format_only=True,
+#         outfile_prefix='./work_dirs/cityscapes_metric/test')

+ 95 - 0
configs/_base_/datasets/coco_detection.py

@@ -0,0 +1,95 @@
+# dataset settings
+dataset_type = 'CocoDataset'
+data_root = 'data/coco/'
+
+# Example of using a different file client
+# Method 1: simply set the data root and let the file I/O module
+# automatically infer the backend from the path prefix
+# (LMDB and Memcached are not supported yet)
+
+# data_root = 's3://openmmlab/datasets/detection/coco/'
+
+# Method 2: use `backend_args` (named `file_client_args` in versions
+# before 3.0.0rc6)
+# backend_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/detection/',
+#         'data/': 's3://openmmlab/datasets/detection/'
+#     }))
+backend_args = None
+
+train_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs')
+]
+test_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    # If you don't have gt annotations, delete this step from the pipeline
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor'))
+]
+train_dataloader = dict(
+    batch_size=2,
+    num_workers=2,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=True),
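+    # AspectRatioBatchSampler groups images with similar aspect ratios
+    # (tall vs. wide) into the same batch to reduce padding.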
+    batch_sampler=dict(type='AspectRatioBatchSampler'),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/instances_train2017.json',
+        data_prefix=dict(img='train2017/'),
+        filter_cfg=dict(filter_empty_gt=True, min_size=32),
+        pipeline=train_pipeline,
+        backend_args=backend_args))
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/instances_val2017.json',
+        data_prefix=dict(img='val2017/'),
+        test_mode=True,
+        pipeline=test_pipeline,
+        backend_args=backend_args))
+test_dataloader = val_dataloader
+
+val_evaluator = dict(
+    type='CocoMetric',
+    ann_file=data_root + 'annotations/instances_val2017.json',
+    metric='bbox',
+    format_only=False,
+    backend_args=backend_args)
+test_evaluator = val_evaluator
+
+# To run inference on the test dataset and format the outputs for
+# submission, uncomment the block below.
+# test_dataloader = dict(
+#     batch_size=1,
+#     num_workers=2,
+#     persistent_workers=True,
+#     drop_last=False,
+#     sampler=dict(type='DefaultSampler', shuffle=False),
+#     dataset=dict(
+#         type=dataset_type,
+#         data_root=data_root,
+#         ann_file=data_root + 'annotations/image_info_test-dev2017.json',
+#         data_prefix=dict(img='test2017/'),
+#         test_mode=True,
+#         pipeline=test_pipeline))
+# test_evaluator = dict(
+#     type='CocoMetric',
+#     metric='bbox',
+#     format_only=True,
+#     ann_file=data_root + 'annotations/image_info_test-dev2017.json',
+#     outfile_prefix='./work_dirs/coco_detection/test')

+ 95 - 0
configs/_base_/datasets/coco_instance.py

@@ -0,0 +1,95 @@
+# dataset settings
+dataset_type = 'CocoDataset'
+data_root = 'data/coco/'
+
+# Example of using a different file client
+# Method 1: simply set the data root and let the file I/O module
+# automatically infer the backend from the path prefix
+# (LMDB and Memcache are not supported yet)
+
+# data_root = 's3://openmmlab/datasets/detection/coco/'
+
+# Method 2: use `backend_args` (called `file_client_args` in versions
+# before 3.0.0rc6)
+# backend_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/detection/',
+#         'data/': 's3://openmmlab/datasets/detection/'
+#     }))
+backend_args = None
+
+train_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs')
+]
+test_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    # If you don't have gt annotations, delete this step from the pipeline
+    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor'))
+]
+train_dataloader = dict(
+    batch_size=2,
+    num_workers=2,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    batch_sampler=dict(type='AspectRatioBatchSampler'),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/instances_train2017.json',
+        data_prefix=dict(img='train2017/'),
+        filter_cfg=dict(filter_empty_gt=True, min_size=32),
+        pipeline=train_pipeline,
+        backend_args=backend_args))
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/instances_val2017.json',
+        data_prefix=dict(img='val2017/'),
+        test_mode=True,
+        pipeline=test_pipeline,
+        backend_args=backend_args))
+test_dataloader = val_dataloader
+
+val_evaluator = dict(
+    type='CocoMetric',
+    ann_file=data_root + 'annotations/instances_val2017.json',
+    metric=['bbox', 'segm'],
+    format_only=False,
+    backend_args=backend_args)
+test_evaluator = val_evaluator
+
+# To run inference on the test dataset and format the outputs for
+# submission, uncomment the block below.
+# test_dataloader = dict(
+#     batch_size=1,
+#     num_workers=2,
+#     persistent_workers=True,
+#     drop_last=False,
+#     sampler=dict(type='DefaultSampler', shuffle=False),
+#     dataset=dict(
+#         type=dataset_type,
+#         data_root=data_root,
+#         ann_file=data_root + 'annotations/image_info_test-dev2017.json',
+#         data_prefix=dict(img='test2017/'),
+#         test_mode=True,
+#         pipeline=test_pipeline))
+# test_evaluator = dict(
+#     type='CocoMetric',
+#     metric=['bbox', 'segm'],
+#     format_only=True,
+#     ann_file=data_root + 'annotations/image_info_test-dev2017.json',
+#     outfile_prefix='./work_dirs/coco_instance/test')

+ 78 - 0
configs/_base_/datasets/coco_instance_semantic.py

@@ -0,0 +1,78 @@
+# dataset settings
+dataset_type = 'CocoDataset'
+data_root = 'data/coco/'
+
+# Example of using a different file client
+# Method 1: simply set the data root and let the file I/O module
+# automatically infer the backend from the path prefix
+# (LMDB and Memcache are not supported yet)
+
+# data_root = 's3://openmmlab/datasets/detection/coco/'
+
+# Method 2: use `backend_args` (called `file_client_args` in versions
+# before 3.0.0rc6)
+# backend_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/detection/',
+#         'data/': 's3://openmmlab/datasets/detection/'
+#     }))
+backend_args = None
+
+train_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(
+        type='LoadAnnotations', with_bbox=True, with_mask=True, with_seg=True),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs')
+]
+test_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    # If you don't have gt annotations, delete this step from the pipeline
+    dict(
+        type='LoadAnnotations', with_bbox=True, with_mask=True, with_seg=True),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor'))
+]
+
+train_dataloader = dict(
+    batch_size=2,
+    num_workers=2,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    batch_sampler=dict(type='AspectRatioBatchSampler'),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/instances_train2017.json',
+        data_prefix=dict(img='train2017/', seg='stuffthingmaps/train2017/'),
+        filter_cfg=dict(filter_empty_gt=True, min_size=32),
+        pipeline=train_pipeline,
+        backend_args=backend_args))
+
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/instances_val2017.json',
+        data_prefix=dict(img='val2017/'),
+        test_mode=True,
+        pipeline=test_pipeline,
+        backend_args=backend_args))
+
+test_dataloader = val_dataloader
+
+val_evaluator = dict(
+    type='CocoMetric',
+    ann_file=data_root + 'annotations/instances_val2017.json',
+    metric=['bbox', 'segm'],
+    format_only=False,
+    backend_args=backend_args)
+test_evaluator = val_evaluator

+ 94 - 0
configs/_base_/datasets/coco_panoptic.py

@@ -0,0 +1,94 @@
+# dataset settings
+dataset_type = 'CocoPanopticDataset'
+data_root = 'data/coco/'
+
+# Example of using a different file client
+# Method 1: simply set the data root and let the file I/O module
+# automatically infer the backend from the path prefix
+# (LMDB and Memcache are not supported yet)
+
+# data_root = 's3://openmmlab/datasets/detection/coco/'
+
+# Method 2: use `backend_args` (called `file_client_args` in versions
+# before 3.0.0rc6)
+# backend_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/detection/',
+#         'data/': 's3://openmmlab/datasets/detection/'
+#     }))
+backend_args = None
+
+train_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='LoadPanopticAnnotations', backend_args=backend_args),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs')
+]
+test_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    dict(type='LoadPanopticAnnotations', backend_args=backend_args),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor'))
+]
+
+train_dataloader = dict(
+    batch_size=2,
+    num_workers=2,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    batch_sampler=dict(type='AspectRatioBatchSampler'),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/panoptic_train2017.json',
+        data_prefix=dict(
+            img='train2017/', seg='annotations/panoptic_train2017/'),
+        filter_cfg=dict(filter_empty_gt=True, min_size=32),
+        pipeline=train_pipeline,
+        backend_args=backend_args))
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/panoptic_val2017.json',
+        data_prefix=dict(img='val2017/', seg='annotations/panoptic_val2017/'),
+        test_mode=True,
+        pipeline=test_pipeline,
+        backend_args=backend_args))
+test_dataloader = val_dataloader
+
+val_evaluator = dict(
+    type='CocoPanopticMetric',
+    ann_file=data_root + 'annotations/panoptic_val2017.json',
+    seg_prefix=data_root + 'annotations/panoptic_val2017/',
+    backend_args=backend_args)
+test_evaluator = val_evaluator
+
+# To run inference on the test dataset and format the outputs for
+# submission, uncomment the block below.
+# test_dataloader = dict(
+#     batch_size=1,
+#     num_workers=1,
+#     persistent_workers=True,
+#     drop_last=False,
+#     sampler=dict(type='DefaultSampler', shuffle=False),
+#     dataset=dict(
+#         type=dataset_type,
+#         data_root=data_root,
+#         ann_file='annotations/panoptic_image_info_test-dev2017.json',
+#         data_prefix=dict(img='test2017/'),
+#         test_mode=True,
+#         pipeline=test_pipeline))
+# test_evaluator = dict(
+#     type='CocoPanopticMetric',
+#     format_only=True,
+#     ann_file=data_root + 'annotations/panoptic_image_info_test-dev2017.json',
+#     outfile_prefix='./work_dirs/coco_panoptic/test')

+ 95 - 0
configs/_base_/datasets/deepfashion.py

@@ -0,0 +1,95 @@
+# dataset settings
+dataset_type = 'DeepFashionDataset'
+data_root = 'data/DeepFashion/In-shop/'
+
+# Example of using a different file client
+# Method 1: simply set the data root and let the file I/O module
+# automatically infer the backend from the path prefix
+# (LMDB and Memcache are not supported yet)
+
+# data_root = 's3://openmmlab/datasets/detection/DeepFashion/In-shop/'
+
+# Method 2: use `backend_args` (called `file_client_args` in versions
+# before 3.0.0rc6)
+# backend_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/detection/',
+#         'data/': 's3://openmmlab/datasets/detection/'
+#     }))
+backend_args = None
+
+train_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
+    dict(type='Resize', scale=(750, 1101), keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs')
+]
+test_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='Resize', scale=(750, 1101), keep_ratio=True),
+    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor'))
+]
+train_dataloader = dict(
+    batch_size=2,
+    num_workers=2,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    batch_sampler=dict(type='AspectRatioBatchSampler'),
+    dataset=dict(
+        type='RepeatDataset',
+        times=2,
+        dataset=dict(
+            type=dataset_type,
+            data_root=data_root,
+            ann_file='Anno/segmentation/DeepFashion_segmentation_train.json',
+            data_prefix=dict(img='Img/'),
+            filter_cfg=dict(filter_empty_gt=True, min_size=32),
+            pipeline=train_pipeline,
+            backend_args=backend_args)))
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='Anno/segmentation/DeepFashion_segmentation_query.json',
+        data_prefix=dict(img='Img/'),
+        test_mode=True,
+        pipeline=test_pipeline,
+        backend_args=backend_args))
+test_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='Anno/segmentation/DeepFashion_segmentation_gallery.json',
+        data_prefix=dict(img='Img/'),
+        test_mode=True,
+        pipeline=test_pipeline,
+        backend_args=backend_args))
+
+val_evaluator = dict(
+    type='CocoMetric',
+    ann_file=data_root +
+    'Anno/segmentation/DeepFashion_segmentation_query.json',
+    metric=['bbox', 'segm'],
+    format_only=False,
+    backend_args=backend_args)
+test_evaluator = dict(
+    type='CocoMetric',
+    ann_file=data_root +
+    'Anno/segmentation/DeepFashion_segmentation_gallery.json',
+    metric=['bbox', 'segm'],
+    format_only=False,
+    backend_args=backend_args)

+ 79 - 0
configs/_base_/datasets/lvis_v0.5_instance.py

@@ -0,0 +1,79 @@
+# dataset settings
+dataset_type = 'LVISV05Dataset'
+data_root = 'data/lvis_v0.5/'
+
+# Example of using a different file client
+# Method 1: simply set the data root and let the file I/O module
+# automatically infer the backend from the path prefix
+# (LMDB and Memcache are not supported yet)
+
+# data_root = 's3://openmmlab/datasets/detection/lvis_v0.5/'
+
+# Method 2: use `backend_args` (called `file_client_args` in versions
+# before 3.0.0rc6)
+# backend_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/detection/',
+#         'data/': 's3://openmmlab/datasets/detection/'
+#     }))
+backend_args = None
+
+train_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
+    dict(
+        type='RandomChoiceResize',
+        scales=[(1333, 640), (1333, 672), (1333, 704), (1333, 736),
+                (1333, 768), (1333, 800)],
+        keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs')
+]
+test_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor'))
+]
+
+train_dataloader = dict(
+    batch_size=2,
+    num_workers=2,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    batch_sampler=dict(type='AspectRatioBatchSampler'),
+    dataset=dict(
+        type='ClassBalancedDataset',
+        oversample_thr=1e-3,
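+        # Images containing rare categories (frequency below
+        # `oversample_thr`) are oversampled, i.e. repeat factor sampling
+        # as described in the LVIS paper.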
+        dataset=dict(
+            type=dataset_type,
+            data_root=data_root,
+            ann_file='annotations/lvis_v0.5_train.json',
+            data_prefix=dict(img='train2017/'),
+            filter_cfg=dict(filter_empty_gt=True, min_size=32),
+            pipeline=train_pipeline,
+            backend_args=backend_args)))
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/lvis_v0.5_val.json',
+        data_prefix=dict(img='val2017/'),
+        test_mode=True,
+        pipeline=test_pipeline,
+        backend_args=backend_args))
+test_dataloader = val_dataloader
+
+val_evaluator = dict(
+    type='LVISMetric',
+    ann_file=data_root + 'annotations/lvis_v0.5_val.json',
+    metric=['bbox', 'segm'],
+    backend_args=backend_args)
+test_evaluator = val_evaluator

+ 22 - 0
configs/_base_/datasets/lvis_v1_instance.py

@@ -0,0 +1,22 @@
+# dataset settings
+_base_ = 'lvis_v0.5_instance.py'
+dataset_type = 'LVISV1Dataset'
+data_root = 'data/lvis_v1/'
+
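+# The nested dicts below are merged into the corresponding dicts of the
+# base config; only the keys listed here are overridden.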
+train_dataloader = dict(
+    dataset=dict(
+        dataset=dict(
+            type=dataset_type,
+            data_root=data_root,
+            ann_file='annotations/lvis_v1_train.json',
+            data_prefix=dict(img=''))))
+val_dataloader = dict(
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/lvis_v1_val.json',
+        data_prefix=dict(img='')))
+test_dataloader = val_dataloader
+
+val_evaluator = dict(ann_file=data_root + 'annotations/lvis_v1_val.json')
+test_evaluator = val_evaluator

+ 74 - 0
configs/_base_/datasets/objects365v1_detection.py

@@ -0,0 +1,74 @@
+# dataset settings
+dataset_type = 'Objects365V1Dataset'
+data_root = 'data/Objects365/Obj365_v1/'
+
+# Example of using a different file client
+# Method 1: simply set the data root and let the file I/O module
+# automatically infer the backend from the path prefix
+# (LMDB and Memcache are not supported yet)
+
+# data_root = 's3://openmmlab/datasets/detection/Objects365/Obj365_v1/'
+
+# Method 2: use `backend_args` (called `file_client_args` in versions
+# before 3.0.0rc6)
+# backend_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/detection/',
+#         'data/': 's3://openmmlab/datasets/detection/'
+#     }))
+backend_args = None
+
+train_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs')
+]
+test_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    # If you don't have gt annotations, delete this step from the pipeline
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor'))
+]
+train_dataloader = dict(
+    batch_size=2,
+    num_workers=2,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    batch_sampler=dict(type='AspectRatioBatchSampler'),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/objects365_train.json',
+        data_prefix=dict(img='train/'),
+        filter_cfg=dict(filter_empty_gt=True, min_size=32),
+        pipeline=train_pipeline,
+        backend_args=backend_args))
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/objects365_val.json',
+        data_prefix=dict(img='val/'),
+        test_mode=True,
+        pipeline=test_pipeline,
+        backend_args=backend_args))
+test_dataloader = val_dataloader
+
+val_evaluator = dict(
+    type='CocoMetric',
+    ann_file=data_root + 'annotations/objects365_val.json',
+    metric='bbox',
+    sort_categories=True,
+    format_only=False,
+    backend_args=backend_args)
+test_evaluator = val_evaluator

+ 73 - 0
configs/_base_/datasets/objects365v2_detection.py

@@ -0,0 +1,73 @@
+# dataset settings
+dataset_type = 'Objects365V2Dataset'
+data_root = 'data/Objects365/Obj365_v2/'
+
+# Example of using a different file client
+# Method 1: simply set the data root and let the file I/O module
+# automatically infer the backend from the path prefix
+# (LMDB and Memcache are not supported yet)
+
+# data_root = 's3://openmmlab/datasets/detection/Objects365/Obj365_v2/'
+
+# Method 2: use `backend_args` (called `file_client_args` in versions
+# before 3.0.0rc6)
+# backend_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/detection/',
+#         'data/': 's3://openmmlab/datasets/detection/'
+#     }))
+backend_args = None
+
+train_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs')
+]
+test_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    # If you don't have gt annotations, delete this step from the pipeline
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor'))
+]
+train_dataloader = dict(
+    batch_size=2,
+    num_workers=2,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    batch_sampler=dict(type='AspectRatioBatchSampler'),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/zhiyuan_objv2_train.json',
+        data_prefix=dict(img='train/'),
+        filter_cfg=dict(filter_empty_gt=True, min_size=32),
+        pipeline=train_pipeline,
+        backend_args=backend_args))
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/zhiyuan_objv2_val.json',
+        data_prefix=dict(img='val/'),
+        test_mode=True,
+        pipeline=test_pipeline,
+        backend_args=backend_args))
+test_dataloader = val_dataloader
+
+val_evaluator = dict(
+    type='CocoMetric',
+    ann_file=data_root + 'annotations/zhiyuan_objv2_val.json',
+    metric='bbox',
+    format_only=False,
+    backend_args=backend_args)
+test_evaluator = val_evaluator

+ 81 - 0
configs/_base_/datasets/openimages_detection.py

@@ -0,0 +1,81 @@
+# dataset settings
+dataset_type = 'OpenImagesDataset'
+data_root = 'data/OpenImages/'
+
+# Example of using a different file client
+# Method 1: simply set the data root and let the file I/O module
+# automatically infer the backend from the path prefix
+# (LMDB and Memcache are not supported yet)
+
+# data_root = 's3://openmmlab/datasets/detection/OpenImages/'
+
+# Method 2: use `backend_args` (called `file_client_args` in versions
+# before 3.0.0rc6)
+# backend_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/detection/',
+#         'data/': 's3://openmmlab/datasets/detection/'
+#     }))
+backend_args = None
+
+train_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(type='Resize', scale=(1024, 800), keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs')
+]
+test_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='Resize', scale=(1024, 800), keep_ratio=True),
+    # Load annotations after Resize so gt bboxes keep the original image scale
+    dict(type='LoadAnnotations', with_bbox=True),
+    # TODO: find a better way to collect image_level_labels
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor', 'instances', 'image_level_labels'))
+]
+
+train_dataloader = dict(
+    batch_size=2,
+    num_workers=0,  # num_workers > 0 may cause out-of-memory errors
+    persistent_workers=False,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    batch_sampler=dict(type='AspectRatioBatchSampler'),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/oidv6-train-annotations-bbox.csv',
+        data_prefix=dict(img='OpenImages/train/'),
+        label_file='annotations/class-descriptions-boxable.csv',
+        hierarchy_file='annotations/bbox_labels_600_hierarchy.json',
+        meta_file='annotations/train-image-metas.pkl',
+        pipeline=train_pipeline,
+        backend_args=backend_args))
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=0,
+    persistent_workers=False,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/validation-annotations-bbox.csv',
+        data_prefix=dict(img='OpenImages/validation/'),
+        label_file='annotations/class-descriptions-boxable.csv',
+        hierarchy_file='annotations/bbox_labels_600_hierarchy.json',
+        meta_file='annotations/validation-image-metas.pkl',
+        image_level_ann_file='annotations/validation-'
+        'annotations-human-imagelabels-boxable.csv',
+        pipeline=test_pipeline,
+        backend_args=backend_args))
+test_dataloader = val_dataloader
+
+val_evaluator = dict(
+    type='OpenImagesMetric',
+    iou_thrs=0.5,
+    ioa_thrs=0.5,
+    use_group_of=True,
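+    # Group-of boxes are matched by IoA instead of IoU, and parent
+    # (supercategory) labels are evaluated as well, following the
+    # OpenImages Challenge protocol.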
+    get_supercategory=True)
+test_evaluator = val_evaluator

+ 178 - 0
configs/_base_/datasets/semi_coco_detection.py

@@ -0,0 +1,178 @@
+# dataset settings
+dataset_type = 'CocoDataset'
+data_root = 'data/coco/'
+
+# Example of using a different file client
+# Method 1: simply set the data root and let the file I/O module
+# automatically infer the backend from the path prefix
+# (LMDB and Memcache are not supported yet)
+
+# data_root = 's3://openmmlab/datasets/detection/coco/'
+
+# Method 2: use `backend_args` (called `file_client_args` in versions
+# before 3.0.0rc6)
+# backend_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/detection/',
+#         'data/': 's3://openmmlab/datasets/detection/'
+#     }))
+backend_args = None
+
+color_space = [
+    [dict(type='ColorTransform')],
+    [dict(type='AutoContrast')],
+    [dict(type='Equalize')],
+    [dict(type='Sharpness')],
+    [dict(type='Posterize')],
+    [dict(type='Solarize')],
+    [dict(type='Color')],
+    [dict(type='Contrast')],
+    [dict(type='Brightness')],
+]
+
+geometric = [
+    [dict(type='Rotate')],
+    [dict(type='ShearX')],
+    [dict(type='ShearY')],
+    [dict(type='TranslateX')],
+    [dict(type='TranslateY')],
+]
+
+scale = [(1333, 400), (1333, 1200)]
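+# With a two-element scale list, RandomResize samples the target scale
+# between (1333, 400) and (1333, 1200).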
+
+branch_field = ['sup', 'unsup_teacher', 'unsup_student']
+# Pipeline used to augment labeled data, which will be sent to the
+# student model for supervised training.
+sup_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(type='RandomResize', scale=scale, keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='RandAugment', aug_space=color_space, aug_num=1),
+    dict(type='FilterAnnotations', min_gt_bbox_wh=(1e-2, 1e-2)),
+    dict(
+        type='MultiBranch',
+        branch_field=branch_field,
+        sup=dict(type='PackDetInputs'))
+]
+
+# Pipeline used to weakly augment unlabeled data, which will be sent to
+# the teacher model to predict pseudo instances.
+weak_pipeline = [
+    dict(type='RandomResize', scale=scale, keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor', 'flip', 'flip_direction',
+                   'homography_matrix')),
+]
+
+# Pipeline used to strongly augment unlabeled data, which will be sent
+# to the student model for unsupervised training.
+strong_pipeline = [
+    dict(type='RandomResize', scale=scale, keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(
+        type='RandomOrder',
+        transforms=[
+            dict(type='RandAugment', aug_space=color_space, aug_num=1),
+            dict(type='RandAugment', aug_space=geometric, aug_num=1),
+        ]),
+    dict(type='RandomErasing', n_patches=(1, 5), ratio=(0, 0.2)),
+    dict(type='FilterAnnotations', min_gt_bbox_wh=(1e-2, 1e-2)),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor', 'flip', 'flip_direction',
+                   'homography_matrix')),
+]
+
+# Pipeline used to augment unlabeled data into two different views.
+unsup_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='LoadEmptyAnnotations'),
+    dict(
+        type='MultiBranch',
+        branch_field=branch_field,
+        unsup_teacher=weak_pipeline,
+        unsup_student=strong_pipeline,
+    )
+]
+
+test_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor'))
+]
+
+batch_size = 5
+num_workers = 5
+# There are two common semi-supervised learning settings on the coco dataset:
+# (1) Divide train2017 into labeled and unlabeled subsets by a fixed
+# percentage, such as 1%, 2%, 5% or 10%.
+# The labeled_ann_file and unlabeled_ann_file are then named
+# instances_train2017.{fold}@{percent}.json and
+# instances_train2017.{fold}@{percent}-unlabeled.json, where `fold` is
+# used for cross-validation and `percent` is the proportion of labeled
+# data in train2017.
+# (2) Use train2017 as the labeled dataset and unlabeled2017 as the
+# unlabeled dataset. The labeled_ann_file and unlabeled_ann_file are
+# instances_train2017.json and image_info_unlabeled2017.json.
+# We use setting (2) by default.
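+# A sketch of setting (1) with fold 1 and 10% labeled data, assuming the
+# split annotation files have been generated beforehand:
+# labeled_dataset = dict(
+#     ...,
+#     ann_file='annotations/instances_train2017.1@10.json')
+# unlabeled_dataset = dict(
+#     ...,
+#     ann_file='annotations/instances_train2017.1@10-unlabeled.json')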
+labeled_dataset = dict(
+    type=dataset_type,
+    data_root=data_root,
+    ann_file='annotations/instances_train2017.json',
+    data_prefix=dict(img='train2017/'),
+    filter_cfg=dict(filter_empty_gt=True, min_size=32),
+    pipeline=sup_pipeline,
+    backend_args=backend_args)
+
+unlabeled_dataset = dict(
+    type=dataset_type,
+    data_root=data_root,
+    ann_file='annotations/instances_unlabeled2017.json',
+    data_prefix=dict(img='unlabeled2017/'),
+    filter_cfg=dict(filter_empty_gt=False),
+    pipeline=unsup_pipeline,
+    backend_args=backend_args)
+
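+# With batch_size=5 and source_ratio=[1, 4], each batch contains one
+# labeled image and four unlabeled images.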
+train_dataloader = dict(
+    batch_size=batch_size,
+    num_workers=num_workers,
+    persistent_workers=True,
+    sampler=dict(
+        type='GroupMultiSourceSampler',
+        batch_size=batch_size,
+        source_ratio=[1, 4]),
+    dataset=dict(
+        type='ConcatDataset', datasets=[labeled_dataset, unlabeled_dataset]))
+
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='annotations/instances_val2017.json',
+        data_prefix=dict(img='val2017/'),
+        test_mode=True,
+        pipeline=test_pipeline,
+        backend_args=backend_args))
+
+test_dataloader = val_dataloader
+
+val_evaluator = dict(
+    type='CocoMetric',
+    ann_file=data_root + 'annotations/instances_val2017.json',
+    metric='bbox',
+    format_only=False,
+    backend_args=backend_args)
+test_evaluator = val_evaluator

+ 92 - 0
configs/_base_/datasets/voc0712.py

@@ -0,0 +1,92 @@
+# dataset settings
+dataset_type = 'VOCDataset'
+data_root = 'data/VOCdevkit/'
+
+# Example of using a different file client
+# Method 1: simply set the data root and let the file I/O module
+# automatically infer the backend from the path prefix
+# (LMDB and Memcache are not supported yet)
+
+# data_root = 's3://openmmlab/datasets/segmentation/VOCdevkit/'
+
+# Method 2: use `backend_args` (called `file_client_args` in versions
+# before 3.0.0rc6)
+# backend_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/segmentation/',
+#         'data/': 's3://openmmlab/datasets/segmentation/'
+#     }))
+backend_args = None
+
+train_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(type='Resize', scale=(1000, 600), keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs')
+]
+test_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='Resize', scale=(1000, 600), keep_ratio=True),
+    # Load annotations after Resize so gt bboxes keep the original image scale
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor'))
+]
+train_dataloader = dict(
+    batch_size=2,
+    num_workers=2,
+    persistent_workers=True,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    batch_sampler=dict(type='AspectRatioBatchSampler'),
+    dataset=dict(
+        type='RepeatDataset',
+        times=3,
+        dataset=dict(
+            type='ConcatDataset',
+            # VOCDataset adds a different `dataset_type` to
+            # dataset.metainfo for each year, which raises an error when
+            # using ConcatDataset. Passing `ignore_keys` avoids this.
+            ignore_keys=['dataset_type'],
+            datasets=[
+                dict(
+                    type=dataset_type,
+                    data_root=data_root,
+                    ann_file='VOC2007/ImageSets/Main/trainval.txt',
+                    data_prefix=dict(sub_data_root='VOC2007/'),
+                    filter_cfg=dict(
+                        filter_empty_gt=True, min_size=32, bbox_min_size=32),
+                    pipeline=train_pipeline,
+                    backend_args=backend_args),
+                dict(
+                    type=dataset_type,
+                    data_root=data_root,
+                    ann_file='VOC2012/ImageSets/Main/trainval.txt',
+                    data_prefix=dict(sub_data_root='VOC2012/'),
+                    filter_cfg=dict(
+                        filter_empty_gt=True, min_size=32, bbox_min_size=32),
+                    pipeline=train_pipeline,
+                    backend_args=backend_args)
+            ])))
+
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='VOC2007/ImageSets/Main/test.txt',
+        data_prefix=dict(sub_data_root='VOC2007/'),
+        test_mode=True,
+        pipeline=test_pipeline,
+        backend_args=backend_args))
+test_dataloader = val_dataloader
+
+# Pascal VOC2007 uses `11points` as its default evaluation mode, while
+# Pascal VOC2012 defaults to `area`.
+val_evaluator = dict(type='VOCMetric', metric='mAP', eval_mode='11points')
+test_evaluator = val_evaluator

+ 73 - 0
configs/_base_/datasets/wider_face.py

@@ -0,0 +1,73 @@
+# dataset settings
+dataset_type = 'WIDERFaceDataset'
+data_root = 'data/WIDERFace/'
+# Example of using a different file client
+# Method 1: simply set the data root and let the file I/O module
+# automatically infer the backend from the path prefix
+# (LMDB and Memcache are not supported yet)
+
+# data_root = 's3://openmmlab/datasets/detection/WIDERFace/'
+
+# Method 2: use `backend_args` (called `file_client_args` in versions
+# before 3.0.0rc6)
+# backend_args = dict(
+#     backend='petrel',
+#     path_mapping=dict({
+#         './data/': 's3://openmmlab/datasets/detection/',
+#         'data/': 's3://openmmlab/datasets/detection/'
+#     }))
+backend_args = None
+
+img_scale = (640, 640)  # square scale at VGA (640-pixel) width
+
+train_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(type='Resize', scale=img_scale, keep_ratio=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs')
+]
+test_pipeline = [
+    dict(type='LoadImageFromFile', backend_args=backend_args),
+    dict(type='Resize', scale=img_scale, keep_ratio=True),
+    dict(type='LoadAnnotations', with_bbox=True),
+    dict(
+        type='PackDetInputs',
+        meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape',
+                   'scale_factor'))
+]
+
+train_dataloader = dict(
+    batch_size=2,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=True),
+    batch_sampler=dict(type='AspectRatioBatchSampler'),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='train.txt',
+        data_prefix=dict(img='WIDER_train'),
+        filter_cfg=dict(filter_empty_gt=True, bbox_min_size=17, min_size=32),
+        pipeline=train_pipeline))
+
+val_dataloader = dict(
+    batch_size=1,
+    num_workers=2,
+    persistent_workers=True,
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False),
+    dataset=dict(
+        type=dataset_type,
+        data_root=data_root,
+        ann_file='val.txt',
+        data_prefix=dict(img='WIDER_val'),
+        test_mode=True,
+        pipeline=test_pipeline))
+test_dataloader = val_dataloader
+
+val_evaluator = dict(
+    # TODO: support WiderFace-Evaluation for easy, medium, hard cases
+    type='VOCMetric',
+    metric='mAP',
+    eval_mode='11points')
+test_evaluator = val_evaluator

+ 24 - 0
configs/_base_/default_runtime.py

@@ -0,0 +1,24 @@
+default_scope = 'mmdet'
+
+default_hooks = dict(
+    timer=dict(type='IterTimerHook'),
+    logger=dict(type='LoggerHook', interval=50),
+    param_scheduler=dict(type='ParamSchedulerHook'),
+    checkpoint=dict(type='CheckpointHook', interval=1),
+    sampler_seed=dict(type='DistSamplerSeedHook'),
+    visualization=dict(type='DetVisualizationHook'))
+
+env_cfg = dict(
+    cudnn_benchmark=False,
+    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
+    dist_cfg=dict(backend='nccl'),
+)
+
+vis_backends = [dict(type='LocalVisBackend')]
+visualizer = dict(
+    type='DetLocalVisualizer', vis_backends=vis_backends, name='visualizer')
+log_processor = dict(type='LogProcessor', window_size=50, by_epoch=True)
+
+log_level = 'INFO'
+load_from = None
+resume = False

+ 203 - 0
configs/_base_/models/cascade-mask-rcnn_r50_fpn.py

@@ -0,0 +1,203 @@
+# model settings
+model = dict(
+    type='CascadeRCNN',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[123.675, 116.28, 103.53],
+        std=[58.395, 57.12, 57.375],
+        bgr_to_rgb=True,
+        pad_mask=True,
+        pad_size_divisor=32),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=4,
+        out_indices=(0, 1, 2, 3),
+        frozen_stages=1,
+        norm_cfg=dict(type='BN', requires_grad=True),
+        norm_eval=True,
+        style='pytorch',
+        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
+    neck=dict(
+        type='FPN',
+        in_channels=[256, 512, 1024, 2048],
+        out_channels=256,
+        num_outs=5),
+    rpn_head=dict(
+        type='RPNHead',
+        in_channels=256,
+        feat_channels=256,
+        anchor_generator=dict(
+            type='AnchorGenerator',
+            scales=[8],
+            ratios=[0.5, 1.0, 2.0],
+            strides=[4, 8, 16, 32, 64]),
+        bbox_coder=dict(
+            type='DeltaXYWHBBoxCoder',
+            target_means=[.0, .0, .0, .0],
+            target_stds=[1.0, 1.0, 1.0, 1.0]),
+        loss_cls=dict(
+            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
+        loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
+    roi_head=dict(
+        type='CascadeRoIHead',
+        num_stages=3,
+        stage_loss_weights=[1, 0.5, 0.25],
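+        # Three cascade stages refine boxes with progressively stricter
+        # regression targets (smaller target_stds) and are trained with
+        # increasing IoU thresholds (0.5/0.6/0.7), as in Cascade R-CNN.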
+        bbox_roi_extractor=dict(
+            type='SingleRoIExtractor',
+            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
+            out_channels=256,
+            featmap_strides=[4, 8, 16, 32]),
+        bbox_head=[
+            dict(
+                type='Shared2FCBBoxHead',
+                in_channels=256,
+                fc_out_channels=1024,
+                roi_feat_size=7,
+                num_classes=80,
+                bbox_coder=dict(
+                    type='DeltaXYWHBBoxCoder',
+                    target_means=[0., 0., 0., 0.],
+                    target_stds=[0.1, 0.1, 0.2, 0.2]),
+                reg_class_agnostic=True,
+                loss_cls=dict(
+                    type='CrossEntropyLoss',
+                    use_sigmoid=False,
+                    loss_weight=1.0),
+                loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
+                               loss_weight=1.0)),
+            dict(
+                type='Shared2FCBBoxHead',
+                in_channels=256,
+                fc_out_channels=1024,
+                roi_feat_size=7,
+                num_classes=80,
+                bbox_coder=dict(
+                    type='DeltaXYWHBBoxCoder',
+                    target_means=[0., 0., 0., 0.],
+                    target_stds=[0.05, 0.05, 0.1, 0.1]),
+                reg_class_agnostic=True,
+                loss_cls=dict(
+                    type='CrossEntropyLoss',
+                    use_sigmoid=False,
+                    loss_weight=1.0),
+                loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
+                               loss_weight=1.0)),
+            dict(
+                type='Shared2FCBBoxHead',
+                in_channels=256,
+                fc_out_channels=1024,
+                roi_feat_size=7,
+                num_classes=80,
+                bbox_coder=dict(
+                    type='DeltaXYWHBBoxCoder',
+                    target_means=[0., 0., 0., 0.],
+                    target_stds=[0.033, 0.033, 0.067, 0.067]),
+                reg_class_agnostic=True,
+                loss_cls=dict(
+                    type='CrossEntropyLoss',
+                    use_sigmoid=False,
+                    loss_weight=1.0),
+                loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
+        ],
+        mask_roi_extractor=dict(
+            type='SingleRoIExtractor',
+            roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
+            out_channels=256,
+            featmap_strides=[4, 8, 16, 32]),
+        mask_head=dict(
+            type='FCNMaskHead',
+            num_convs=4,
+            in_channels=256,
+            conv_out_channels=256,
+            num_classes=80,
+            loss_mask=dict(
+                type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))),
+    # model training and testing settings
+    train_cfg=dict(
+        rpn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.7,
+                neg_iou_thr=0.3,
+                min_pos_iou=0.3,
+                match_low_quality=True,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=256,
+                pos_fraction=0.5,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=False),
+            allowed_border=0,
+            pos_weight=-1,
+            debug=False),
+        rpn_proposal=dict(
+            nms_pre=2000,
+            max_per_img=2000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            min_bbox_size=0),
+        rcnn=[
+            dict(
+                assigner=dict(
+                    type='MaxIoUAssigner',
+                    pos_iou_thr=0.5,
+                    neg_iou_thr=0.5,
+                    min_pos_iou=0.5,
+                    match_low_quality=False,
+                    ignore_iof_thr=-1),
+                sampler=dict(
+                    type='RandomSampler',
+                    num=512,
+                    pos_fraction=0.25,
+                    neg_pos_ub=-1,
+                    add_gt_as_proposals=True),
+                mask_size=28,
+                pos_weight=-1,
+                debug=False),
+            dict(
+                assigner=dict(
+                    type='MaxIoUAssigner',
+                    pos_iou_thr=0.6,
+                    neg_iou_thr=0.6,
+                    min_pos_iou=0.6,
+                    match_low_quality=False,
+                    ignore_iof_thr=-1),
+                sampler=dict(
+                    type='RandomSampler',
+                    num=512,
+                    pos_fraction=0.25,
+                    neg_pos_ub=-1,
+                    add_gt_as_proposals=True),
+                mask_size=28,
+                pos_weight=-1,
+                debug=False),
+            dict(
+                assigner=dict(
+                    type='MaxIoUAssigner',
+                    pos_iou_thr=0.7,
+                    neg_iou_thr=0.7,
+                    min_pos_iou=0.7,
+                    match_low_quality=False,
+                    ignore_iof_thr=-1),
+                sampler=dict(
+                    type='RandomSampler',
+                    num=512,
+                    pos_fraction=0.25,
+                    neg_pos_ub=-1,
+                    add_gt_as_proposals=True),
+                mask_size=28,
+                pos_weight=-1,
+                debug=False)
+        ]),
+    test_cfg=dict(
+        rpn=dict(
+            nms_pre=1000,
+            max_per_img=1000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            min_bbox_size=0),
+        rcnn=dict(
+            score_thr=0.05,
+            nms=dict(type='nms', iou_threshold=0.5),
+            max_per_img=100,
+            mask_thr_binary=0.5)))

+ 185 - 0
configs/_base_/models/cascade-rcnn_r50_fpn.py

@@ -0,0 +1,185 @@
+# model settings
+model = dict(
+    type='CascadeRCNN',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[123.675, 116.28, 103.53],
+        std=[58.395, 57.12, 57.375],
+        bgr_to_rgb=True,
+        pad_size_divisor=32),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=4,
+        out_indices=(0, 1, 2, 3),
+        frozen_stages=1,
+        norm_cfg=dict(type='BN', requires_grad=True),
+        norm_eval=True,
+        style='pytorch',
+        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
+    neck=dict(
+        type='FPN',
+        in_channels=[256, 512, 1024, 2048],
+        out_channels=256,
+        num_outs=5),
+    rpn_head=dict(
+        type='RPNHead',
+        in_channels=256,
+        feat_channels=256,
+        anchor_generator=dict(
+            type='AnchorGenerator',
+            scales=[8],
+            ratios=[0.5, 1.0, 2.0],
+            strides=[4, 8, 16, 32, 64]),
+        bbox_coder=dict(
+            type='DeltaXYWHBBoxCoder',
+            target_means=[.0, .0, .0, .0],
+            target_stds=[1.0, 1.0, 1.0, 1.0]),
+        loss_cls=dict(
+            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
+        loss_bbox=dict(type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0)),
+    roi_head=dict(
+        type='CascadeRoIHead',
+        num_stages=3,
+        stage_loss_weights=[1, 0.5, 0.25],
+        bbox_roi_extractor=dict(
+            type='SingleRoIExtractor',
+            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
+            out_channels=256,
+            featmap_strides=[4, 8, 16, 32]),
+        bbox_head=[
+            dict(
+                type='Shared2FCBBoxHead',
+                in_channels=256,
+                fc_out_channels=1024,
+                roi_feat_size=7,
+                num_classes=80,
+                bbox_coder=dict(
+                    type='DeltaXYWHBBoxCoder',
+                    target_means=[0., 0., 0., 0.],
+                    target_stds=[0.1, 0.1, 0.2, 0.2]),
+                reg_class_agnostic=True,
+                loss_cls=dict(
+                    type='CrossEntropyLoss',
+                    use_sigmoid=False,
+                    loss_weight=1.0),
+                loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
+                               loss_weight=1.0)),
+            dict(
+                type='Shared2FCBBoxHead',
+                in_channels=256,
+                fc_out_channels=1024,
+                roi_feat_size=7,
+                num_classes=80,
+                bbox_coder=dict(
+                    type='DeltaXYWHBBoxCoder',
+                    target_means=[0., 0., 0., 0.],
+                    target_stds=[0.05, 0.05, 0.1, 0.1]),
+                reg_class_agnostic=True,
+                loss_cls=dict(
+                    type='CrossEntropyLoss',
+                    use_sigmoid=False,
+                    loss_weight=1.0),
+                loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
+                               loss_weight=1.0)),
+            dict(
+                type='Shared2FCBBoxHead',
+                in_channels=256,
+                fc_out_channels=1024,
+                roi_feat_size=7,
+                num_classes=80,
+                bbox_coder=dict(
+                    type='DeltaXYWHBBoxCoder',
+                    target_means=[0., 0., 0., 0.],
+                    target_stds=[0.033, 0.033, 0.067, 0.067]),
+                reg_class_agnostic=True,
+                loss_cls=dict(
+                    type='CrossEntropyLoss',
+                    use_sigmoid=False,
+                    loss_weight=1.0),
+                loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
+        ]),
+    # model training and testing settings
+    train_cfg=dict(
+        rpn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.7,
+                neg_iou_thr=0.3,
+                min_pos_iou=0.3,
+                match_low_quality=True,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=256,
+                pos_fraction=0.5,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=False),
+            allowed_border=0,
+            pos_weight=-1,
+            debug=False),
+        rpn_proposal=dict(
+            nms_pre=2000,
+            max_per_img=2000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            min_bbox_size=0),
+        rcnn=[
+            dict(
+                assigner=dict(
+                    type='MaxIoUAssigner',
+                    pos_iou_thr=0.5,
+                    neg_iou_thr=0.5,
+                    min_pos_iou=0.5,
+                    match_low_quality=False,
+                    ignore_iof_thr=-1),
+                sampler=dict(
+                    type='RandomSampler',
+                    num=512,
+                    pos_fraction=0.25,
+                    neg_pos_ub=-1,
+                    add_gt_as_proposals=True),
+                pos_weight=-1,
+                debug=False),
+            dict(
+                assigner=dict(
+                    type='MaxIoUAssigner',
+                    pos_iou_thr=0.6,
+                    neg_iou_thr=0.6,
+                    min_pos_iou=0.6,
+                    match_low_quality=False,
+                    ignore_iof_thr=-1),
+                sampler=dict(
+                    type='RandomSampler',
+                    num=512,
+                    pos_fraction=0.25,
+                    neg_pos_ub=-1,
+                    add_gt_as_proposals=True),
+                pos_weight=-1,
+                debug=False),
+            dict(
+                assigner=dict(
+                    type='MaxIoUAssigner',
+                    pos_iou_thr=0.7,
+                    neg_iou_thr=0.7,
+                    min_pos_iou=0.7,
+                    match_low_quality=False,
+                    ignore_iof_thr=-1),
+                sampler=dict(
+                    type='RandomSampler',
+                    num=512,
+                    pos_fraction=0.25,
+                    neg_pos_ub=-1,
+                    add_gt_as_proposals=True),
+                pos_weight=-1,
+                debug=False)
+        ]),
+    test_cfg=dict(
+        rpn=dict(
+            nms_pre=1000,
+            max_per_img=1000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            min_bbox_size=0),
+        rcnn=dict(
+            score_thr=0.05,
+            nms=dict(type='nms', iou_threshold=0.5),
+            max_per_img=100)))

+ 68 - 0
configs/_base_/models/fast-rcnn_r50_fpn.py

@@ -0,0 +1,68 @@
+# model settings
+model = dict(
+    type='FastRCNN',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[123.675, 116.28, 103.53],
+        std=[58.395, 57.12, 57.375],
+        bgr_to_rgb=True,
+        pad_size_divisor=32),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=4,
+        out_indices=(0, 1, 2, 3),
+        frozen_stages=1,
+        norm_cfg=dict(type='BN', requires_grad=True),
+        norm_eval=True,
+        style='pytorch',
+        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
+    neck=dict(
+        type='FPN',
+        in_channels=[256, 512, 1024, 2048],
+        out_channels=256,
+        num_outs=5),
+    roi_head=dict(
+        type='StandardRoIHead',
+        bbox_roi_extractor=dict(
+            type='SingleRoIExtractor',
+            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
+            out_channels=256,
+            featmap_strides=[4, 8, 16, 32]),
+        bbox_head=dict(
+            type='Shared2FCBBoxHead',
+            in_channels=256,
+            fc_out_channels=1024,
+            roi_feat_size=7,
+            num_classes=80,
+            bbox_coder=dict(
+                type='DeltaXYWHBBoxCoder',
+                target_means=[0., 0., 0., 0.],
+                target_stds=[0.1, 0.1, 0.2, 0.2]),
+            reg_class_agnostic=False,
+            loss_cls=dict(
+                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
+            loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
+    # model training and testing settings
+    train_cfg=dict(
+        rcnn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.5,
+                neg_iou_thr=0.5,
+                min_pos_iou=0.5,
+                match_low_quality=False,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=512,
+                pos_fraction=0.25,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=True),
+            pos_weight=-1,
+            debug=False)),
+    test_cfg=dict(
+        rcnn=dict(
+            score_thr=0.05,
+            nms=dict(type='nms', iou_threshold=0.5),
+            max_per_img=100)))
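
Unlike the Faster R-CNN bases that follow, this Fast R-CNN base has no `rpn_head`, so region proposals must be supplied externally. A minimal sketch of a child config, assuming pre-computed proposal `.pkl` files (the paths below are placeholders, not files shipped by this commit):

```python
_base_ = [
    '../_base_/models/fast-rcnn_r50_fpn.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py'
]

# Hypothetical proposal files produced beforehand, e.g. by an RPN-only run.
train_dataloader = dict(
    dataset=dict(proposal_file='proposals/rpn_r50_fpn_1x_train2017.pkl'))
val_dataloader = dict(
    dataset=dict(proposal_file='proposals/rpn_r50_fpn_1x_val2017.pkl'))
test_dataloader = val_dataloader
```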

+ 123 - 0
configs/_base_/models/faster-rcnn_r50-caffe-c4.py

@@ -0,0 +1,123 @@
+# model settings
+norm_cfg = dict(type='BN', requires_grad=False)
+model = dict(
+    type='FasterRCNN',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[103.530, 116.280, 123.675],
+        std=[1.0, 1.0, 1.0],
+        bgr_to_rgb=False,
+        pad_size_divisor=32),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=3,
+        strides=(1, 2, 2),
+        dilations=(1, 1, 1),
+        out_indices=(2, ),
+        frozen_stages=1,
+        norm_cfg=norm_cfg,
+        norm_eval=True,
+        style='caffe',
+        init_cfg=dict(
+            type='Pretrained',
+            checkpoint='open-mmlab://detectron2/resnet50_caffe')),
+    rpn_head=dict(
+        type='RPNHead',
+        in_channels=1024,
+        feat_channels=1024,
+        anchor_generator=dict(
+            type='AnchorGenerator',
+            scales=[2, 4, 8, 16, 32],
+            ratios=[0.5, 1.0, 2.0],
+            strides=[16]),
+        bbox_coder=dict(
+            type='DeltaXYWHBBoxCoder',
+            target_means=[.0, .0, .0, .0],
+            target_stds=[1.0, 1.0, 1.0, 1.0]),
+        loss_cls=dict(
+            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
+        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
+    roi_head=dict(
+        type='StandardRoIHead',
+        shared_head=dict(
+            type='ResLayer',
+            depth=50,
+            stage=3,
+            stride=2,
+            dilation=1,
+            style='caffe',
+            norm_cfg=norm_cfg,
+            norm_eval=True,
+            init_cfg=dict(
+                type='Pretrained',
+                checkpoint='open-mmlab://detectron2/resnet50_caffe')),
+        bbox_roi_extractor=dict(
+            type='SingleRoIExtractor',
+            roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
+            out_channels=1024,
+            featmap_strides=[16]),
+        bbox_head=dict(
+            type='BBoxHead',
+            with_avg_pool=True,
+            roi_feat_size=7,
+            in_channels=2048,
+            num_classes=80,
+            bbox_coder=dict(
+                type='DeltaXYWHBBoxCoder',
+                target_means=[0., 0., 0., 0.],
+                target_stds=[0.1, 0.1, 0.2, 0.2]),
+            reg_class_agnostic=False,
+            loss_cls=dict(
+                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
+            loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
+    # model training and testing settings
+    train_cfg=dict(
+        rpn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.7,
+                neg_iou_thr=0.3,
+                min_pos_iou=0.3,
+                match_low_quality=True,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=256,
+                pos_fraction=0.5,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=False),
+            allowed_border=-1,
+            pos_weight=-1,
+            debug=False),
+        rpn_proposal=dict(
+            nms_pre=12000,
+            max_per_img=2000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            min_bbox_size=0),
+        rcnn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.5,
+                neg_iou_thr=0.5,
+                min_pos_iou=0.5,
+                match_low_quality=False,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=512,
+                pos_fraction=0.25,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=True),
+            pos_weight=-1,
+            debug=False)),
+    test_cfg=dict(
+        rpn=dict(
+            nms_pre=6000,
+            max_per_img=1000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            min_bbox_size=0),
+        rcnn=dict(
+            score_thr=0.05,
+            nms=dict(type='nms', iou_threshold=0.5),
+            max_per_img=100)))

+ 111 - 0
configs/_base_/models/faster-rcnn_r50-caffe-dc5.py

@@ -0,0 +1,111 @@
+# model settings
+norm_cfg = dict(type='BN', requires_grad=False)
+model = dict(
+    type='FasterRCNN',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[103.530, 116.280, 123.675],
+        std=[1.0, 1.0, 1.0],
+        bgr_to_rgb=False,
+        pad_size_divisor=32),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=4,
+        strides=(1, 2, 2, 1),
+        dilations=(1, 1, 1, 2),
+        out_indices=(3, ),
+        frozen_stages=1,
+        norm_cfg=norm_cfg,
+        norm_eval=True,
+        style='caffe',
+        init_cfg=dict(
+            type='Pretrained',
+            checkpoint='open-mmlab://detectron2/resnet50_caffe')),
+    rpn_head=dict(
+        type='RPNHead',
+        in_channels=2048,
+        feat_channels=2048,
+        anchor_generator=dict(
+            type='AnchorGenerator',
+            scales=[2, 4, 8, 16, 32],
+            ratios=[0.5, 1.0, 2.0],
+            strides=[16]),
+        bbox_coder=dict(
+            type='DeltaXYWHBBoxCoder',
+            target_means=[.0, .0, .0, .0],
+            target_stds=[1.0, 1.0, 1.0, 1.0]),
+        loss_cls=dict(
+            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
+        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
+    roi_head=dict(
+        type='StandardRoIHead',
+        bbox_roi_extractor=dict(
+            type='SingleRoIExtractor',
+            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
+            out_channels=2048,
+            featmap_strides=[16]),
+        bbox_head=dict(
+            type='Shared2FCBBoxHead',
+            in_channels=2048,
+            fc_out_channels=1024,
+            roi_feat_size=7,
+            num_classes=80,
+            bbox_coder=dict(
+                type='DeltaXYWHBBoxCoder',
+                target_means=[0., 0., 0., 0.],
+                target_stds=[0.1, 0.1, 0.2, 0.2]),
+            reg_class_agnostic=False,
+            loss_cls=dict(
+                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
+            loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
+    # model training and testing settings
+    train_cfg=dict(
+        rpn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.7,
+                neg_iou_thr=0.3,
+                min_pos_iou=0.3,
+                match_low_quality=True,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=256,
+                pos_fraction=0.5,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=False),
+            allowed_border=0,
+            pos_weight=-1,
+            debug=False),
+        rpn_proposal=dict(
+            nms_pre=12000,
+            max_per_img=2000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            min_bbox_size=0),
+        rcnn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.5,
+                neg_iou_thr=0.5,
+                min_pos_iou=0.5,
+                match_low_quality=False,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=512,
+                pos_fraction=0.25,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=True),
+            pos_weight=-1,
+            debug=False)),
+    test_cfg=dict(
+        rpn=dict(
+            nms=dict(type='nms', iou_threshold=0.7),
+            nms_pre=6000,
+            max_per_img=1000,
+            min_bbox_size=0),
+        rcnn=dict(
+            score_thr=0.05,
+            nms=dict(type='nms', iou_threshold=0.5),
+            max_per_img=100)))
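
The `strides=(1, 2, 2, 1)` / `dilations=(1, 1, 1, 2)` pair above is the standard "DC5" (dilated C5) trick. A quick check of the resulting output stride, written as a sketch rather than repository code:

```python
# ResNet's stem downsamples by 4; each stage then multiplies by its stride.
stem_stride = 4
stage_strides = (1, 2, 2, 1)  # stage 5 keeps stride 1 instead of 2
out_stride = stem_stride
for s in stage_strides:
    out_stride *= s
print(out_stride)  # 16, matching featmap_strides=[16] and anchor strides=[16]
# The dilation of 2 in stage 5 compensates for the removed downsampling,
# preserving the receptive field while doubling feature-map resolution.
```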

+ 114 - 0
configs/_base_/models/faster-rcnn_r50_fpn.py

@@ -0,0 +1,114 @@
+# model settings
+model = dict(
+    type='FasterRCNN',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[123.675, 116.28, 103.53],
+        std=[58.395, 57.12, 57.375],
+        bgr_to_rgb=True,
+        pad_size_divisor=32),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=4,
+        out_indices=(0, 1, 2, 3),
+        frozen_stages=1,
+        norm_cfg=dict(type='BN', requires_grad=True),
+        norm_eval=True,
+        style='pytorch',
+        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
+    neck=dict(
+        type='FPN',
+        in_channels=[256, 512, 1024, 2048],
+        out_channels=256,
+        num_outs=5),
+    rpn_head=dict(
+        type='RPNHead',
+        in_channels=256,
+        feat_channels=256,
+        anchor_generator=dict(
+            type='AnchorGenerator',
+            scales=[8],
+            ratios=[0.5, 1.0, 2.0],
+            strides=[4, 8, 16, 32, 64]),
+        bbox_coder=dict(
+            type='DeltaXYWHBBoxCoder',
+            target_means=[.0, .0, .0, .0],
+            target_stds=[1.0, 1.0, 1.0, 1.0]),
+        loss_cls=dict(
+            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
+        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
+    roi_head=dict(
+        type='StandardRoIHead',
+        bbox_roi_extractor=dict(
+            type='SingleRoIExtractor',
+            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
+            out_channels=256,
+            featmap_strides=[4, 8, 16, 32]),
+        bbox_head=dict(
+            type='Shared2FCBBoxHead',
+            in_channels=256,
+            fc_out_channels=1024,
+            roi_feat_size=7,
+            num_classes=80,
+            bbox_coder=dict(
+                type='DeltaXYWHBBoxCoder',
+                target_means=[0., 0., 0., 0.],
+                target_stds=[0.1, 0.1, 0.2, 0.2]),
+            reg_class_agnostic=False,
+            loss_cls=dict(
+                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
+            loss_bbox=dict(type='L1Loss', loss_weight=1.0))),
+    # model training and testing settings
+    train_cfg=dict(
+        rpn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.7,
+                neg_iou_thr=0.3,
+                min_pos_iou=0.3,
+                match_low_quality=True,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=256,
+                pos_fraction=0.5,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=False),
+            allowed_border=-1,
+            pos_weight=-1,
+            debug=False),
+        rpn_proposal=dict(
+            nms_pre=2000,
+            max_per_img=1000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            min_bbox_size=0),
+        rcnn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.5,
+                neg_iou_thr=0.5,
+                min_pos_iou=0.5,
+                match_low_quality=False,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=512,
+                pos_fraction=0.25,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=True),
+            pos_weight=-1,
+            debug=False)),
+    test_cfg=dict(
+        rpn=dict(
+            nms_pre=1000,
+            max_per_img=1000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            min_bbox_size=0),
+        rcnn=dict(
+            score_thr=0.05,
+            nms=dict(type='nms', iou_threshold=0.5),
+            max_per_img=100)
+        # soft-nms is also supported for rcnn testing
+        # e.g., nms=dict(type='soft_nms', iou_threshold=0.5, min_score=0.05)
+    ))
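
The closing comment notes that soft-NMS is supported at R-CNN test time. Because these configs merge dicts on inheritance, a child config can swap just that key; a sketch (the `_base_` path assumes this file's location):

```python
_base_ = 'faster-rcnn_r50_fpn.py'

model = dict(
    test_cfg=dict(
        rcnn=dict(
            nms=dict(type='soft_nms', iou_threshold=0.5, min_score=0.05))))
```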

+ 132 - 0
configs/_base_/models/mask-rcnn_r50-caffe-c4.py

@@ -0,0 +1,132 @@
+# model settings
+norm_cfg = dict(type='BN', requires_grad=False)
+model = dict(
+    type='MaskRCNN',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[103.530, 116.280, 123.675],
+        std=[1.0, 1.0, 1.0],
+        bgr_to_rgb=False,
+        pad_mask=True,
+        pad_size_divisor=32),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=3,
+        strides=(1, 2, 2),
+        dilations=(1, 1, 1),
+        out_indices=(2, ),
+        frozen_stages=1,
+        norm_cfg=norm_cfg,
+        norm_eval=True,
+        style='caffe',
+        init_cfg=dict(
+            type='Pretrained',
+            checkpoint='open-mmlab://detectron2/resnet50_caffe')),
+    rpn_head=dict(
+        type='RPNHead',
+        in_channels=1024,
+        feat_channels=1024,
+        anchor_generator=dict(
+            type='AnchorGenerator',
+            scales=[2, 4, 8, 16, 32],
+            ratios=[0.5, 1.0, 2.0],
+            strides=[16]),
+        bbox_coder=dict(
+            type='DeltaXYWHBBoxCoder',
+            target_means=[.0, .0, .0, .0],
+            target_stds=[1.0, 1.0, 1.0, 1.0]),
+        loss_cls=dict(
+            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
+        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
+    roi_head=dict(
+        type='StandardRoIHead',
+        shared_head=dict(
+            type='ResLayer',
+            depth=50,
+            stage=3,
+            stride=2,
+            dilation=1,
+            style='caffe',
+            norm_cfg=norm_cfg,
+            norm_eval=True),
+        bbox_roi_extractor=dict(
+            type='SingleRoIExtractor',
+            roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
+            out_channels=1024,
+            featmap_strides=[16]),
+        bbox_head=dict(
+            type='BBoxHead',
+            with_avg_pool=True,
+            roi_feat_size=7,
+            in_channels=2048,
+            num_classes=80,
+            bbox_coder=dict(
+                type='DeltaXYWHBBoxCoder',
+                target_means=[0., 0., 0., 0.],
+                target_stds=[0.1, 0.1, 0.2, 0.2]),
+            reg_class_agnostic=False,
+            loss_cls=dict(
+                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
+            loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
+        mask_roi_extractor=None,
+        mask_head=dict(
+            type='FCNMaskHead',
+            num_convs=0,
+            in_channels=2048,
+            conv_out_channels=256,
+            num_classes=80,
+            loss_mask=dict(
+                type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))),
+    # model training and testing settings
+    train_cfg=dict(
+        rpn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.7,
+                neg_iou_thr=0.3,
+                min_pos_iou=0.3,
+                match_low_quality=True,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=256,
+                pos_fraction=0.5,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=False),
+            allowed_border=0,
+            pos_weight=-1,
+            debug=False),
+        rpn_proposal=dict(
+            nms_pre=12000,
+            max_per_img=2000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            min_bbox_size=0),
+        rcnn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.5,
+                neg_iou_thr=0.5,
+                min_pos_iou=0.5,
+                match_low_quality=False,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=512,
+                pos_fraction=0.25,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=True),
+            mask_size=14,
+            pos_weight=-1,
+            debug=False)),
+    test_cfg=dict(
+        rpn=dict(
+            nms_pre=6000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            max_per_img=1000,
+            min_bbox_size=0),
+        rcnn=dict(
+            score_thr=0.05,
+            nms=dict(type='nms', iou_threshold=0.5),
+            max_per_img=100,
+            mask_thr_binary=0.5)))

+ 127 - 0
configs/_base_/models/mask-rcnn_r50_fpn.py

@@ -0,0 +1,127 @@
+# model settings
+model = dict(
+    type='MaskRCNN',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[123.675, 116.28, 103.53],
+        std=[58.395, 57.12, 57.375],
+        bgr_to_rgb=True,
+        pad_mask=True,
+        pad_size_divisor=32),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=4,
+        out_indices=(0, 1, 2, 3),
+        frozen_stages=1,
+        norm_cfg=dict(type='BN', requires_grad=True),
+        norm_eval=True,
+        style='pytorch',
+        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
+    neck=dict(
+        type='FPN',
+        in_channels=[256, 512, 1024, 2048],
+        out_channels=256,
+        num_outs=5),
+    rpn_head=dict(
+        type='RPNHead',
+        in_channels=256,
+        feat_channels=256,
+        anchor_generator=dict(
+            type='AnchorGenerator',
+            scales=[8],
+            ratios=[0.5, 1.0, 2.0],
+            strides=[4, 8, 16, 32, 64]),
+        bbox_coder=dict(
+            type='DeltaXYWHBBoxCoder',
+            target_means=[.0, .0, .0, .0],
+            target_stds=[1.0, 1.0, 1.0, 1.0]),
+        loss_cls=dict(
+            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
+        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
+    roi_head=dict(
+        type='StandardRoIHead',
+        bbox_roi_extractor=dict(
+            type='SingleRoIExtractor',
+            roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0),
+            out_channels=256,
+            featmap_strides=[4, 8, 16, 32]),
+        bbox_head=dict(
+            type='Shared2FCBBoxHead',
+            in_channels=256,
+            fc_out_channels=1024,
+            roi_feat_size=7,
+            num_classes=80,
+            bbox_coder=dict(
+                type='DeltaXYWHBBoxCoder',
+                target_means=[0., 0., 0., 0.],
+                target_stds=[0.1, 0.1, 0.2, 0.2]),
+            reg_class_agnostic=False,
+            loss_cls=dict(
+                type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0),
+            loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
+        mask_roi_extractor=dict(
+            type='SingleRoIExtractor',
+            roi_layer=dict(type='RoIAlign', output_size=14, sampling_ratio=0),
+            out_channels=256,
+            featmap_strides=[4, 8, 16, 32]),
+        mask_head=dict(
+            type='FCNMaskHead',
+            num_convs=4,
+            in_channels=256,
+            conv_out_channels=256,
+            num_classes=80,
+            loss_mask=dict(
+                type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))),
+    # model training and testing settings
+    train_cfg=dict(
+        rpn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.7,
+                neg_iou_thr=0.3,
+                min_pos_iou=0.3,
+                match_low_quality=True,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=256,
+                pos_fraction=0.5,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=False),
+            allowed_border=-1,
+            pos_weight=-1,
+            debug=False),
+        rpn_proposal=dict(
+            nms_pre=2000,
+            max_per_img=1000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            min_bbox_size=0),
+        rcnn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.5,
+                neg_iou_thr=0.5,
+                min_pos_iou=0.5,
+                match_low_quality=True,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=512,
+                pos_fraction=0.25,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=True),
+            mask_size=28,
+            pos_weight=-1,
+            debug=False)),
+    test_cfg=dict(
+        rpn=dict(
+            nms_pre=1000,
+            max_per_img=1000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            min_bbox_size=0),
+        rcnn=dict(
+            score_thr=0.05,
+            nms=dict(type='nms', iou_threshold=0.5),
+            max_per_img=100,
+            mask_thr_binary=0.5)))

+ 68 - 0
configs/_base_/models/retinanet_r50_fpn.py

@@ -0,0 +1,68 @@
+# model settings
+model = dict(
+    type='RetinaNet',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[123.675, 116.28, 103.53],
+        std=[58.395, 57.12, 57.375],
+        bgr_to_rgb=True,
+        pad_size_divisor=32),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=4,
+        out_indices=(0, 1, 2, 3),
+        frozen_stages=1,
+        norm_cfg=dict(type='BN', requires_grad=True),
+        norm_eval=True,
+        style='pytorch',
+        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
+    neck=dict(
+        type='FPN',
+        in_channels=[256, 512, 1024, 2048],
+        out_channels=256,
+        start_level=1,
+        add_extra_convs='on_input',
+        num_outs=5),
+    bbox_head=dict(
+        type='RetinaHead',
+        num_classes=80,
+        in_channels=256,
+        stacked_convs=4,
+        feat_channels=256,
+        anchor_generator=dict(
+            type='AnchorGenerator',
+            octave_base_scale=4,
+            scales_per_octave=3,
+            ratios=[0.5, 1.0, 2.0],
+            strides=[8, 16, 32, 64, 128]),
+        bbox_coder=dict(
+            type='DeltaXYWHBBoxCoder',
+            target_means=[.0, .0, .0, .0],
+            target_stds=[1.0, 1.0, 1.0, 1.0]),
+        loss_cls=dict(
+            type='FocalLoss',
+            use_sigmoid=True,
+            gamma=2.0,
+            alpha=0.25,
+            loss_weight=1.0),
+        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
+    # model training and testing settings
+    train_cfg=dict(
+        assigner=dict(
+            type='MaxIoUAssigner',
+            pos_iou_thr=0.5,
+            neg_iou_thr=0.4,
+            min_pos_iou=0,
+            ignore_iof_thr=-1),
+        sampler=dict(
+            type='PseudoSampler'),  # Focal loss should use PseudoSampler
+        allowed_border=-1,
+        pos_weight=-1,
+        debug=False),
+    test_cfg=dict(
+        nms_pre=1000,
+        min_bbox_size=0,
+        score_thr=0.05,
+        nms=dict(type='nms', iou_threshold=0.5),
+        max_per_img=100))
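
On the `PseudoSampler` comment above: focal loss already down-weights easy negatives, so RetinaNet trains on every assigned anchor instead of a sampled subset. An illustrative sketch of that semantics (an assumption for exposition, not the library source):

```python
import numpy as np

# Per-anchor assignment result: >0 means matched to a ground-truth box,
# 0 means negative. A pseudo sampler keeps all of them.
assigned = np.array([1, 0, 0, 2, 0, 0, 0, 1])
pos_inds = np.flatnonzero(assigned > 0)   # every positive anchor
neg_inds = np.flatnonzero(assigned == 0)  # every negative anchor, unsampled
print(len(pos_inds), len(neg_inds))       # 3 5 -> no fixed-ratio subsampling
```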

+ 64 - 0
configs/_base_/models/rpn_r50-caffe-c4.py

@@ -0,0 +1,64 @@
+# model settings
+model = dict(
+    type='RPN',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[103.530, 116.280, 123.675],
+        std=[1.0, 1.0, 1.0],
+        bgr_to_rgb=False,
+        pad_size_divisor=32),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=3,
+        strides=(1, 2, 2),
+        dilations=(1, 1, 1),
+        out_indices=(2, ),
+        frozen_stages=1,
+        norm_cfg=dict(type='BN', requires_grad=False),
+        norm_eval=True,
+        style='caffe',
+        init_cfg=dict(
+            type='Pretrained',
+            checkpoint='open-mmlab://detectron2/resnet50_caffe')),
+    neck=None,
+    rpn_head=dict(
+        type='RPNHead',
+        in_channels=1024,
+        feat_channels=1024,
+        anchor_generator=dict(
+            type='AnchorGenerator',
+            scales=[2, 4, 8, 16, 32],
+            ratios=[0.5, 1.0, 2.0],
+            strides=[16]),
+        bbox_coder=dict(
+            type='DeltaXYWHBBoxCoder',
+            target_means=[.0, .0, .0, .0],
+            target_stds=[1.0, 1.0, 1.0, 1.0]),
+        loss_cls=dict(
+            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
+        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
+    # model training and testing settings
+    train_cfg=dict(
+        rpn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.7,
+                neg_iou_thr=0.3,
+                min_pos_iou=0.3,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=256,
+                pos_fraction=0.5,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=False),
+            allowed_border=-1,
+            pos_weight=-1,
+            debug=False)),
+    test_cfg=dict(
+        rpn=dict(
+            nms_pre=12000,
+            max_per_img=2000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            min_bbox_size=0)))

+ 64 - 0
configs/_base_/models/rpn_r50_fpn.py

@@ -0,0 +1,64 @@
+# model settings
+model = dict(
+    type='RPN',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[123.675, 116.28, 103.53],
+        std=[58.395, 57.12, 57.375],
+        bgr_to_rgb=True,
+        pad_size_divisor=32),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=4,
+        out_indices=(0, 1, 2, 3),
+        frozen_stages=1,
+        norm_cfg=dict(type='BN', requires_grad=True),
+        norm_eval=True,
+        style='pytorch',
+        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
+    neck=dict(
+        type='FPN',
+        in_channels=[256, 512, 1024, 2048],
+        out_channels=256,
+        num_outs=5),
+    rpn_head=dict(
+        type='RPNHead',
+        in_channels=256,
+        feat_channels=256,
+        anchor_generator=dict(
+            type='AnchorGenerator',
+            scales=[8],
+            ratios=[0.5, 1.0, 2.0],
+            strides=[4, 8, 16, 32, 64]),
+        bbox_coder=dict(
+            type='DeltaXYWHBBoxCoder',
+            target_means=[.0, .0, .0, .0],
+            target_stds=[1.0, 1.0, 1.0, 1.0]),
+        loss_cls=dict(
+            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0),
+        loss_bbox=dict(type='L1Loss', loss_weight=1.0)),
+    # model training and testing settings
+    train_cfg=dict(
+        rpn=dict(
+            assigner=dict(
+                type='MaxIoUAssigner',
+                pos_iou_thr=0.7,
+                neg_iou_thr=0.3,
+                min_pos_iou=0.3,
+                ignore_iof_thr=-1),
+            sampler=dict(
+                type='RandomSampler',
+                num=256,
+                pos_fraction=0.5,
+                neg_pos_ub=-1,
+                add_gt_as_proposals=False),
+            allowed_border=-1,
+            pos_weight=-1,
+            debug=False)),
+    test_cfg=dict(
+        rpn=dict(
+            nms_pre=2000,
+            max_per_img=1000,
+            nms=dict(type='nms', iou_threshold=0.7),
+            min_bbox_size=0)))

+ 63 - 0
configs/_base_/models/ssd300.py

@@ -0,0 +1,63 @@
+# model settings
+input_size = 300
+model = dict(
+    type='SingleStageDetector',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[123.675, 116.28, 103.53],
+        std=[1, 1, 1],
+        bgr_to_rgb=True,
+        pad_size_divisor=1),
+    backbone=dict(
+        type='SSDVGG',
+        depth=16,
+        with_last_pool=False,
+        ceil_mode=True,
+        out_indices=(3, 4),
+        out_feature_indices=(22, 34),
+        init_cfg=dict(
+            type='Pretrained', checkpoint='open-mmlab://vgg16_caffe')),
+    neck=dict(
+        type='SSDNeck',
+        in_channels=(512, 1024),
+        out_channels=(512, 1024, 512, 256, 256, 256),
+        level_strides=(2, 2, 1, 1),
+        level_paddings=(1, 1, 0, 0),
+        l2_norm_scale=20),
+    bbox_head=dict(
+        type='SSDHead',
+        in_channels=(512, 1024, 512, 256, 256, 256),
+        num_classes=80,
+        anchor_generator=dict(
+            type='SSDAnchorGenerator',
+            scale_major=False,
+            input_size=input_size,
+            basesize_ratio_range=(0.15, 0.9),
+            strides=[8, 16, 32, 64, 100, 300],
+            ratios=[[2], [2, 3], [2, 3], [2, 3], [2], [2]]),
+        bbox_coder=dict(
+            type='DeltaXYWHBBoxCoder',
+            target_means=[.0, .0, .0, .0],
+            target_stds=[0.1, 0.1, 0.2, 0.2])),
+    # model training and testing settings
+    train_cfg=dict(
+        assigner=dict(
+            type='MaxIoUAssigner',
+            pos_iou_thr=0.5,
+            neg_iou_thr=0.5,
+            min_pos_iou=0.,
+            ignore_iof_thr=-1,
+            gt_max_assign_all=False),
+        sampler=dict(type='PseudoSampler'),
+        smoothl1_beta=1.,
+        allowed_border=-1,
+        pos_weight=-1,
+        neg_pos_ratio=3,
+        debug=False),
+    test_cfg=dict(
+        nms_pre=1000,
+        nms=dict(type='nms', iou_threshold=0.45),
+        min_bbox_size=0,
+        score_thr=0.02,
+        max_per_img=200))
+cudnn_benchmark = True

+ 28 - 0
configs/_base_/schedules/schedule_1x.py

@@ -0,0 +1,28 @@
+# training schedule for 1x
+train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=12, val_interval=1)
+val_cfg = dict(type='ValLoop')
+test_cfg = dict(type='TestLoop')
+
+# learning rate
+param_scheduler = [
+    dict(
+        type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=500),
+    dict(
+        type='MultiStepLR',
+        begin=0,
+        end=12,
+        by_epoch=True,
+        milestones=[8, 11],
+        gamma=0.1)
+]
+
+# optimizer
+optim_wrapper = dict(
+    type='OptimWrapper',
+    optimizer=dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001))
+
+# Default setting for scaling LR automatically
+#   - `enable` means whether to enable scaling LR automatically by default.
+#   - `base_batch_size` = (8 GPUs) x (2 samples per GPU).
+auto_scale_lr = dict(enable=False, base_batch_size=16)
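
The `auto_scale_lr` block implements linear LR scaling: when enabled, the runner multiplies the configured LR by `actual_batch_size / base_batch_size`. A sketch of a derived config, assuming 4 GPUs x 2 samples per GPU (recent MMDetection releases also expose an `--auto-scale-lr` switch on `tools/train.py`):

```python
_base_ = '../_base_/schedules/schedule_1x.py'

# With a total batch size of 8 (4 GPUs x 2 samples per GPU), the base
# lr=0.02 would be scaled by 8 / 16 = 0.5 to an effective 0.01.
auto_scale_lr = dict(enable=True, base_batch_size=16)
```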

+ 28 - 0
configs/_base_/schedules/schedule_20e.py

@@ -0,0 +1,28 @@
+# training schedule for 20e
+train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=20, val_interval=1)
+val_cfg = dict(type='ValLoop')
+test_cfg = dict(type='TestLoop')
+
+# learning rate
+param_scheduler = [
+    dict(
+        type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=500),
+    dict(
+        type='MultiStepLR',
+        begin=0,
+        end=20,
+        by_epoch=True,
+        milestones=[16, 19],
+        gamma=0.1)
+]
+
+# optimizer
+optim_wrapper = dict(
+    type='OptimWrapper',
+    optimizer=dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001))
+
+# Default setting for scaling LR automatically
+#   - `enable` means whether to enable scaling LR automatically by default.
+#   - `base_batch_size` = (8 GPUs) x (2 samples per GPU).
+auto_scale_lr = dict(enable=False, base_batch_size=16)

+ 28 - 0
configs/_base_/schedules/schedule_2x.py

@@ -0,0 +1,28 @@
+# training schedule for 2x
+train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=24, val_interval=1)
+val_cfg = dict(type='ValLoop')
+test_cfg = dict(type='TestLoop')
+
+# learning rate
+param_scheduler = [
+    dict(
+        type='LinearLR', start_factor=0.001, by_epoch=False, begin=0, end=500),
+    dict(
+        type='MultiStepLR',
+        begin=0,
+        end=24,
+        by_epoch=True,
+        milestones=[16, 22],
+        gamma=0.1)
+]
+
+# optimizer
+optim_wrapper = dict(
+    type='OptimWrapper',
+    optimizer=dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001))
+
+# Default setting for scaling LR automatically
+#   - `enable` means whether to enable scaling LR automatically by default.
+#   - `base_batch_size` = (8 GPUs) x (2 samples per GPU).
+auto_scale_lr = dict(enable=False, base_batch_size=16)

+ 31 - 0
configs/albu_example/README.md

@@ -0,0 +1,31 @@
+# Albu Example
+
+> [Albumentations: fast and flexible image augmentations](https://arxiv.org/abs/1809.06839)
+
+<!-- [OTHERS] -->
+
+## Abstract
+
+Data augmentation is a commonly used technique for increasing both the size and the diversity of labeled training sets by leveraging input transformations that preserve output labels. In the computer vision domain, image augmentations have become a common implicit regularization technique to combat overfitting in deep convolutional neural networks and are ubiquitously used to improve performance. While most deep learning frameworks implement basic image transformations, the list is typically limited to some variations and combinations of flipping, rotating, scaling, and cropping. Moreover, the image processing speed varies among existing tools for image augmentation. We present Albumentations, a fast and flexible library for image augmentations with a wide variety of image transform operations available, that is also an easy-to-use wrapper around other augmentation libraries. We provide examples of image augmentations for different computer vision tasks and show that Albumentations is faster than other commonly used image augmentation tools on most of the commonly used image transformations.
+
+<div align=center>
+<img src="https://user-images.githubusercontent.com/40661020/143870703-74f3ea3f-ae23-4035-9856-746bc3f88464.png" height="400" />
+</div>
+
+## Results and Models
+
+| Backbone |  Style  | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP |                    Config                     |                                                                                                                                                        Download                                                                                                                                                         |
+| :------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :-------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+|   R-50   | pytorch |   1x    |   4.4    |      16.6      |  38.0  |  34.5   | [config](./mask-rcnn_r50_fpn_albu-1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/albu_example/mask_rcnn_r50_fpn_albu_1x_coco/mask_rcnn_r50_fpn_albu_1x_coco_20200208-ab203bcd.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/albu_example/mask_rcnn_r50_fpn_albu_1x_coco/mask_rcnn_r50_fpn_albu_1x_coco_20200208_225520.log.json) |
+
+## Citation
+
+```latex
+@article{2018arXiv180906839B,
+  author = {A. Buslaev, A. Parinov, E. Khvedchenya, V.~I. Iglovikov and A.~A. Kalinin},
+  title = "{Albumentations: fast and flexible image augmentations}",
+  journal = {ArXiv e-prints},
+  eprint = {1809.06839},
+  year = 2018
+}
+```

+ 66 - 0
configs/albu_example/mask-rcnn_r50_fpn_albu-1x_coco.py

@@ -0,0 +1,66 @@
+_base_ = '../mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py'
+
+albu_train_transforms = [
+    dict(
+        type='ShiftScaleRotate',
+        shift_limit=0.0625,
+        scale_limit=0.0,
+        rotate_limit=0,
+        interpolation=1,
+        p=0.5),
+    dict(
+        type='RandomBrightnessContrast',
+        brightness_limit=[0.1, 0.3],
+        contrast_limit=[0.1, 0.3],
+        p=0.2),
+    dict(
+        type='OneOf',
+        transforms=[
+            dict(
+                type='RGBShift',
+                r_shift_limit=10,
+                g_shift_limit=10,
+                b_shift_limit=10,
+                p=1.0),
+            dict(
+                type='HueSaturationValue',
+                hue_shift_limit=20,
+                sat_shift_limit=30,
+                val_shift_limit=20,
+                p=1.0)
+        ],
+        p=0.1),
+    dict(type='JpegCompression', quality_lower=85, quality_upper=95, p=0.2),
+    dict(type='ChannelShuffle', p=0.1),
+    dict(
+        type='OneOf',
+        transforms=[
+            dict(type='Blur', blur_limit=3, p=1.0),
+            dict(type='MedianBlur', blur_limit=3, p=1.0)
+        ],
+        p=0.1),
+]
+train_pipeline = [
+    dict(type='LoadImageFromFile', backend_args={{_base_.backend_args}}),
+    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
+    dict(type='Resize', scale=(1333, 800), keep_ratio=True),
+    dict(
+        type='Albu',
+        transforms=albu_train_transforms,
+        bbox_params=dict(
+            type='BboxParams',
+            format='pascal_voc',
+            label_fields=['gt_bboxes_labels', 'gt_ignore_flags'],
+            min_visibility=0.0,
+            filter_lost_elements=True),
+        keymap={
+            'img': 'image',
+            'gt_masks': 'masks',
+            'gt_bboxes': 'bboxes'
+        },
+        skip_img_without_anno=True),
+    dict(type='RandomFlip', prob=0.5),
+    dict(type='PackDetInputs')
+]
+
+train_dataloader = dict(dataset=dict(pipeline=train_pipeline))
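
The `keymap` above exists because MMDetection and Albumentations name the same data differently (`img`/`image`, `gt_bboxes`/`bboxes`). A standalone sketch of what the wrapped call amounts to, assuming the `albumentations` package:

```python
import albumentations as A
import numpy as np

transform = A.Compose(
    [A.RandomBrightnessContrast(
        brightness_limit=(0.1, 0.3), contrast_limit=(0.1, 0.3), p=0.2)],
    bbox_params=A.BboxParams(format='pascal_voc', label_fields=['labels']))

img = np.zeros((100, 100, 3), dtype=np.uint8)  # dummy image
boxes = [(10, 10, 50, 50)]                     # x_min, y_min, x_max, y_max
out = transform(image=img, bboxes=boxes, labels=[0])
```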

+ 17 - 0
configs/albu_example/metafile.yml

@@ -0,0 +1,17 @@
+Models:
+  - Name: mask-rcnn_r50_fpn_albu-1x_coco
+    In Collection: Mask R-CNN
+    Config: mask-rcnn_r50_fpn_albu-1x_coco.py
+    Metadata:
+      Training Memory (GB): 4.4
+      Epochs: 12
+    Results:
+      - Task: Object Detection
+        Dataset: COCO
+        Metrics:
+          box AP: 38.0
+      - Task: Instance Segmentation
+        Dataset: COCO
+        Metrics:
+          mask AP: 34.5
+    Weights: https://download.openmmlab.com/mmdetection/v2.0/albu_example/mask_rcnn_r50_fpn_albu_1x_coco/mask_rcnn_r50_fpn_albu_1x_coco_20200208-ab203bcd.pth

+ 31 - 0
configs/atss/README.md

@@ -0,0 +1,31 @@
+# ATSS
+
+> [Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection](https://arxiv.org/abs/1912.02424)
+
+<!-- [ALGORITHM] -->
+
+## Abstract
+
+Object detection has been dominated by anchor-based detectors for several years. Recently, anchor-free detectors have become popular due to the proposal of FPN and Focal Loss. In this paper, we first point out that the essential difference between anchor-based and anchor-free detection is actually how to define positive and negative training samples, which leads to the performance gap between them. If they adopt the same definition of positive and negative samples during training, there is no obvious difference in the final performance, no matter whether regressing from a box or a point. This shows that how to select positive and negative training samples is important for current object detectors. Then, we propose an Adaptive Training Sample Selection (ATSS) to automatically select positive and negative samples according to the statistical characteristics of objects. It significantly improves the performance of anchor-based and anchor-free detectors and bridges the gap between them. Finally, we discuss the necessity of tiling multiple anchors per location on the image to detect objects. Extensive experiments conducted on MS COCO support our aforementioned analysis and conclusions. With the newly introduced ATSS, we improve state-of-the-art detectors by a large margin to 50.7% AP without introducing any overhead.
+
+<div align=center>
+<img src="https://user-images.githubusercontent.com/40661020/143870776-c81168f5-e8b2-44ee-978b-509e4372c5c9.png"/>
+</div>
+
+## Results and Models
+
+| Backbone |  Style  | Lr schd | Mem (GB) | Inf time (fps) | box AP |                Config                |                                                                                                                            Download                                                                                                                             |
+| :------: | :-----: | :-----: | :------: | :------------: | :----: | :----------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+|   R-50   | pytorch |   1x    |   3.7    |      19.7      |  39.4  | [config](./atss_r50_fpn_1x_coco.py)  | [model](https://download.openmmlab.com/mmdetection/v2.0/atss/atss_r50_fpn_1x_coco/atss_r50_fpn_1x_coco_20200209-985f7bd0.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/atss/atss_r50_fpn_1x_coco/atss_r50_fpn_1x_coco_20200209_102539.log.json) |
+|  R-101   | pytorch |   1x    |   5.6    |      12.3      |  41.5  | [config](./atss_r101_fpn_1x_coco.py) |   [model](https://download.openmmlab.com/mmdetection/v2.0/atss/atss_r101_fpn_1x_coco/atss_r101_fpn_1x_20200825-dfcadd6f.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/atss/atss_r101_fpn_1x_coco/atss_r101_fpn_1x_20200825-dfcadd6f.log.json)   |
+
+## Citation
+
+```latex
+@article{zhang2019bridging,
+  title   =  {Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection},
+  author  =  {Zhang, Shifeng and Chi, Cheng and Yao, Yongqiang and Lei, Zhen and Li, Stan Z.},
+  journal =  {arXiv preprint arXiv:1912.02424},
+  year    =  {2019}
+}
+```

+ 6 - 0
configs/atss/atss_r101_fpn_1x_coco.py

@@ -0,0 +1,6 @@
+_base_ = './atss_r50_fpn_1x_coco.py'
+model = dict(
+    backbone=dict(
+        depth=101,
+        init_cfg=dict(type='Pretrained',
+                      checkpoint='torchvision://resnet101')))

+ 7 - 0
configs/atss/atss_r101_fpn_8xb8-amp-lsj-200e_coco.py

@@ -0,0 +1,7 @@
+_base_ = './atss_r50_fpn_8xb8-amp-lsj-200e_coco.py'
+
+model = dict(
+    backbone=dict(
+        depth=101,
+        init_cfg=dict(type='Pretrained',
+                      checkpoint='torchvision://resnet101')))

+ 7 - 0
configs/atss/atss_r18_fpn_8xb8-amp-lsj-200e_coco.py

@@ -0,0 +1,7 @@
+_base_ = './atss_r50_fpn_8xb8-amp-lsj-200e_coco.py'
+
+model = dict(
+    backbone=dict(
+        depth=18,
+        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18')),
+    neck=dict(in_channels=[64, 128, 256, 512]))

+ 71 - 0
configs/atss/atss_r50_fpn_1x_coco.py

@@ -0,0 +1,71 @@
+_base_ = [
+    '../_base_/datasets/coco_detection.py',
+    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
+]
+
+# model settings
+model = dict(
+    type='ATSS',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[123.675, 116.28, 103.53],
+        std=[58.395, 57.12, 57.375],
+        bgr_to_rgb=True,
+        pad_size_divisor=32),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=4,
+        out_indices=(0, 1, 2, 3),
+        frozen_stages=1,
+        norm_cfg=dict(type='BN', requires_grad=True),
+        norm_eval=True,
+        style='pytorch',
+        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
+    neck=dict(
+        type='FPN',
+        in_channels=[256, 512, 1024, 2048],
+        out_channels=256,
+        start_level=1,
+        add_extra_convs='on_output',
+        num_outs=5),
+    bbox_head=dict(
+        type='ATSSHead',
+        num_classes=80,
+        in_channels=256,
+        stacked_convs=4,
+        feat_channels=256,
+        anchor_generator=dict(
+            type='AnchorGenerator',
+            ratios=[1.0],
+            octave_base_scale=8,
+            scales_per_octave=1,
+            strides=[8, 16, 32, 64, 128]),
+        bbox_coder=dict(
+            type='DeltaXYWHBBoxCoder',
+            target_means=[.0, .0, .0, .0],
+            target_stds=[0.1, 0.1, 0.2, 0.2]),
+        loss_cls=dict(
+            type='FocalLoss',
+            use_sigmoid=True,
+            gamma=2.0,
+            alpha=0.25,
+            loss_weight=1.0),
+        loss_bbox=dict(type='GIoULoss', loss_weight=2.0),
+        loss_centerness=dict(
+            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)),
+    # training and testing settings
+    train_cfg=dict(
+        assigner=dict(type='ATSSAssigner', topk=9),
+        allowed_border=-1,
+        pos_weight=-1,
+        debug=False),
+    test_cfg=dict(
+        nms_pre=1000,
+        min_bbox_size=0,
+        score_thr=0.05,
+        nms=dict(type='nms', iou_threshold=0.6),
+        max_per_img=100))
+# optimizer
+optim_wrapper = dict(
+    optimizer=dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001))

+ 81 - 0
configs/atss/atss_r50_fpn_8xb8-amp-lsj-200e_coco.py

@@ -0,0 +1,81 @@
+_base_ = '../common/lsj-200e_coco-detection.py'
+
+image_size = (1024, 1024)
+batch_augments = [dict(type='BatchFixedSizePad', size=image_size)]
+
+model = dict(
+    type='ATSS',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[123.675, 116.28, 103.53],
+        std=[58.395, 57.12, 57.375],
+        bgr_to_rgb=True,
+        pad_size_divisor=32,
+        batch_augments=batch_augments),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=4,
+        out_indices=(0, 1, 2, 3),
+        frozen_stages=1,
+        norm_cfg=dict(type='BN', requires_grad=True),
+        norm_eval=True,
+        style='pytorch',
+        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')),
+    neck=dict(
+        type='FPN',
+        in_channels=[256, 512, 1024, 2048],
+        out_channels=256,
+        start_level=1,
+        add_extra_convs='on_output',
+        num_outs=5),
+    bbox_head=dict(
+        type='ATSSHead',
+        num_classes=80,
+        in_channels=256,
+        stacked_convs=4,
+        feat_channels=256,
+        anchor_generator=dict(
+            type='AnchorGenerator',
+            ratios=[1.0],
+            octave_base_scale=8,
+            scales_per_octave=1,
+            strides=[8, 16, 32, 64, 128]),
+        bbox_coder=dict(
+            type='DeltaXYWHBBoxCoder',
+            target_means=[.0, .0, .0, .0],
+            target_stds=[0.1, 0.1, 0.2, 0.2]),
+        loss_cls=dict(
+            type='FocalLoss',
+            use_sigmoid=True,
+            gamma=2.0,
+            alpha=0.25,
+            loss_weight=1.0),
+        loss_bbox=dict(type='GIoULoss', loss_weight=2.0),
+        loss_centerness=dict(
+            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)),
+    # training and testing settings
+    train_cfg=dict(
+        assigner=dict(type='ATSSAssigner', topk=9),
+        allowed_border=-1,
+        pos_weight=-1,
+        debug=False),
+    test_cfg=dict(
+        nms_pre=1000,
+        min_bbox_size=0,
+        score_thr=0.05,
+        nms=dict(type='nms', iou_threshold=0.6),
+        max_per_img=100))
+
+train_dataloader = dict(batch_size=8, num_workers=4)
+
+# Enable automatic-mixed-precision training with AmpOptimWrapper.
+optim_wrapper = dict(
+    type='AmpOptimWrapper',
+    optimizer=dict(
+        type='SGD', lr=0.01 * 4, momentum=0.9, weight_decay=0.00004))
+
+# NOTE: `auto_scale_lr` is for automatically scaling LR,
+# USER SHOULD NOT CHANGE ITS VALUES.
+# base_batch_size = (8 GPUs) x (8 samples per GPU)
+auto_scale_lr = dict(base_batch_size=64)

+ 60 - 0
configs/atss/metafile.yml

@@ -0,0 +1,60 @@
+Collections:
+  - Name: ATSS
+    Metadata:
+      Training Data: COCO
+      Training Techniques:
+        - SGD with Momentum
+        - Weight Decay
+      Training Resources: 8x V100 GPUs
+      Architecture:
+        - ATSS
+        - FPN
+        - ResNet
+    Paper:
+      URL: https://arxiv.org/abs/1912.02424
+      Title: 'Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection'
+    README: configs/atss/README.md
+    Code:
+      URL: https://github.com/open-mmlab/mmdetection/blob/v2.0.0/mmdet/models/detectors/atss.py#L6
+      Version: v2.0.0
+
+Models:
+  - Name: atss_r50_fpn_1x_coco
+    In Collection: ATSS
+    Config: configs/atss/atss_r50_fpn_1x_coco.py
+    Metadata:
+      Training Memory (GB): 3.7
+      inference time (ms/im):
+        - value: 50.76
+          hardware: V100
+          backend: PyTorch
+          batch size: 1
+          mode: FP32
+          resolution: (800, 1333)
+      Epochs: 12
+    Results:
+      - Task: Object Detection
+        Dataset: COCO
+        Metrics:
+          box AP: 39.4
+    Weights: https://download.openmmlab.com/mmdetection/v2.0/atss/atss_r50_fpn_1x_coco/atss_r50_fpn_1x_coco_20200209-985f7bd0.pth
+
+  - Name: atss_r101_fpn_1x_coco
+    In Collection: ATSS
+    Config: configs/atss/atss_r101_fpn_1x_coco.py
+    Metadata:
+      Training Memory (GB): 5.6
+      inference time (ms/im):
+        - value: 81.3
+          hardware: V100
+          backend: PyTorch
+          batch size: 1
+          mode: FP32
+          resolution: (800, 1333)
+      Epochs: 12
+    Results:
+      - Task: Object Detection
+        Dataset: COCO
+        Metrics:
+          box AP: 41.5
+    Weights: https://download.openmmlab.com/mmdetection/v2.0/atss/atss_r101_fpn_1x_coco/atss_r101_fpn_1x_20200825-dfcadd6f.pth

+ 35 - 0
configs/autoassign/README.md

@@ -0,0 +1,35 @@
+# AutoAssign
+
+> [AutoAssign: Differentiable Label Assignment for Dense Object Detection](https://arxiv.org/abs/2007.03496)
+
+<!-- [ALGORITHM] -->
+
+## Abstract
+
+Determining positive/negative samples for object detection is known as label assignment. Here we present an anchor-free detector named AutoAssign. It requires little human knowledge and achieves appearance-aware label assignment through a fully differentiable weighting mechanism. During training, to both satisfy the prior distribution of the data and adapt to category characteristics, we present Center Weighting to adjust the category-specific prior distributions. To adapt to object appearances, Confidence Weighting is proposed to adjust the specific assignment strategy of each instance. The two weighting modules are then combined to generate positive and negative weights to adjust each location's confidence. Extensive experiments on MS COCO show that our method steadily surpasses other best sampling strategies by large margins with various backbones. Moreover, our best model achieves 52.1% AP, outperforming all existing one-stage detectors. Besides, experiments on other datasets, e.g., PASCAL VOC, Objects365, and WiderFace, demonstrate the broad applicability of AutoAssign.
+
+<div align=center>
+<img src="https://user-images.githubusercontent.com/40661020/143870875-33567e44-0584-4470-9a90-0df0fb6c1fe2.png"/>
+</div>
+
+## Results and Models
+
+| Backbone | Style | Lr schd | Mem (GB) | box AP |                     Config                      |                                                                                                                                                        Download                                                                                                                                                         |
+| :------: | :---: | :-----: | :------: | :----: | :---------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+|   R-50   | caffe |   1x    |   4.08   |  40.4  | [config](./autoassign_r50-caffe_fpn_1x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/autoassign/auto_assign_r50_fpn_1x_coco/auto_assign_r50_fpn_1x_coco_20210413_115540-5e17991f.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/autoassign/auto_assign_r50_fpn_1x_coco/auto_assign_r50_fpn_1x_coco_20210413_115540-5e17991f.log.json) |
+
+**Note**:
+
+1. We find that the performance is unstable with the 1x setting and may fluctuate by about 0.3 mAP; mAP in the range 40.3 ~ 40.6 is acceptable. Such fluctuation can also be found in the original implementation.
+2. You can get more stable results (~ mAP 40.6) with a 13-epoch schedule in which the learning rate is divided by 10 at the 10th and 13th epochs; a sketch of such a schedule is given below.
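
A sketch of the 13-epoch schedule note 2 refers to, written against the schedule convention used elsewhere in this commit (an illustration, not a file shipped here):

```python
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=13, val_interval=1)
param_scheduler = [
    dict(
        type='LinearLR', start_factor=0.001, by_epoch=False, begin=0,
        end=1000),
    dict(
        type='MultiStepLR',
        begin=0,
        end=13,
        by_epoch=True,
        milestones=[10, 13],  # LR /10 at the 10th and 13th epochs, per the note
        gamma=0.1)
]
```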
+
+## Citation
+
+```latex
+@article{zhu2020autoassign,
+  title={AutoAssign: Differentiable Label Assignment for Dense Object Detection},
+  author={Zhu, Benjin and Wang, Jianfeng and Jiang, Zhengkai and Zong, Fuhang and Liu, Songtao and Li, Zeming and Sun, Jian},
+  journal={arXiv preprint arXiv:2007.03496},
+  year={2020}
+}
+```

+ 69 - 0
configs/autoassign/autoassign_r50-caffe_fpn_1x_coco.py

@@ -0,0 +1,69 @@
+# We follow the original implementation which
+# adopts the Caffe pre-trained backbone.
+_base_ = [
+    '../_base_/datasets/coco_detection.py',
+    '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
+]
+# model settings
+model = dict(
+    type='AutoAssign',
+    data_preprocessor=dict(
+        type='DetDataPreprocessor',
+        mean=[102.9801, 115.9465, 122.7717],
+        std=[1.0, 1.0, 1.0],
+        bgr_to_rgb=False,
+        pad_size_divisor=32),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=4,
+        out_indices=(0, 1, 2, 3),
+        frozen_stages=1,
+        norm_cfg=dict(type='BN', requires_grad=False),
+        norm_eval=True,
+        style='caffe',
+        init_cfg=dict(
+            type='Pretrained',
+            checkpoint='open-mmlab://detectron2/resnet50_caffe')),
+    neck=dict(
+        type='FPN',
+        in_channels=[256, 512, 1024, 2048],
+        out_channels=256,
+        start_level=1,
+        add_extra_convs=True,
+        num_outs=5,
+        relu_before_extra_convs=True,
+        init_cfg=dict(type='Caffe2Xavier', layer='Conv2d')),
+    bbox_head=dict(
+        type='AutoAssignHead',
+        num_classes=80,
+        in_channels=256,
+        stacked_convs=4,
+        feat_channels=256,
+        strides=[8, 16, 32, 64, 128],
+        loss_bbox=dict(type='GIoULoss', loss_weight=5.0)),
+    train_cfg=None,
+    test_cfg=dict(
+        nms_pre=1000,
+        min_bbox_size=0,
+        score_thr=0.05,
+        nms=dict(type='nms', iou_threshold=0.6),
+        max_per_img=100))
+
+# learning rate
+param_scheduler = [
+    dict(
+        type='LinearLR', start_factor=0.001, by_epoch=False, begin=0,
+        end=1000),
+    dict(
+        type='MultiStepLR',
+        begin=0,
+        end=12,
+        by_epoch=True,
+        milestones=[8, 11],
+        gamma=0.1)
+]
+
+# optimizer
+optim_wrapper = dict(
+    optimizer=dict(lr=0.01), paramwise_cfg=dict(norm_decay_mult=0.))

+ 33 - 0
configs/autoassign/metafile.yml

@@ -0,0 +1,33 @@
+Collections:
+  - Name: AutoAssign
+    Metadata:
+      Training Data: COCO
+      Training Techniques:
+        - SGD with Momentum
+        - Weight Decay
+      Training Resources: 8x V100 GPUs
+      Architecture:
+        - AutoAssign
+        - FPN
+        - ResNet
+    Paper:
+      URL: https://arxiv.org/abs/2007.03496
+      Title: 'AutoAssign: Differentiable Label Assignment for Dense Object Detection'
+    README: configs/autoassign/README.md
+    Code:
+      URL: https://github.com/open-mmlab/mmdetection/blob/v2.12.0/mmdet/models/detectors/autoassign.py#L6
+      Version: v2.12.0
+
+Models:
+  - Name: autoassign_r50-caffe_fpn_1x_coco
+    In Collection: AutoAssign
+    Config: configs/autoassign/autoassign_r50-caffe_fpn_1x_coco.py
+    Metadata:
+      Training Memory (GB): 4.08
+      Epochs: 12
+    Results:
+      - Task: Object Detection
+        Dataset: COCO
+        Metrics:
+          box AP: 40.4
+    Weights: https://download.openmmlab.com/mmdetection/v2.0/autoassign/auto_assign_r50_fpn_1x_coco/auto_assign_r50_fpn_1x_coco_20210413_115540-5e17991f.pth

+ 32 - 0
configs/boxinst/README.md

@@ -0,0 +1,32 @@
+# BoxInst
+
+> [BoxInst: High-Performance Instance Segmentation with Box Annotations](https://arxiv.org/pdf/2012.02310.pdf)
+
+<!-- [ALGORITHM] -->
+
+## Abstract
+
+We present a high-performance method that can achieve mask-level instance segmentation with only bounding-box annotations for training. While this setting has been studied in the literature, here we show significantly stronger performance with a simple design (e.g., dramatically improving previous best reported mask AP of 21.1% to 31.6% on the COCO dataset). Our core idea is to redesign the loss
+of learning masks in instance segmentation, with no modification to the segmentation network itself. The new loss functions can supervise the mask training without relying on mask annotations. This is made possible with two loss terms, namely, 1) a surrogate term that minimizes the discrepancy between the projections of the ground-truth box and the predicted mask; 2) a pairwise loss that can exploit the prior that proximal pixels with similar colors are very likely to have the same category label. Experiments demonstrate that the redesigned mask loss can yield surprisingly high-quality instance masks with only box annotations. For example, without using any mask annotations, with a ResNet-101 backbone and 3× training schedule, we achieve 33.2% mask AP on COCO test-dev split (vs. 39.1% of the fully supervised counterpart). Our excellent experiment results on COCO and Pascal VOC indicate that our method dramatically narrows the performance gap between weakly and fully supervised instance segmentation.
+
+<div align=center>
+<img src="https://user-images.githubusercontent.com/57584090/209087723-756b76d7-5061-4000-a93c-df1194a439a0.png"/>
+</div>
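
The two loss terms above are what make box-only supervision work. As an illustration of the first (projection) term, here is a schematic PyTorch sketch — the tensor layout and dice-style formulation are ours, not the exact `BoxInstMaskHead` code — that compares the max-projections of the predicted mask and the ground-truth box along each image axis:

```python
# Schematic sketch of BoxInst's projection term (illustrative, not the
# actual mmdet implementation).
import torch

def projection_loss(mask_logits: torch.Tensor, box_mask: torch.Tensor,
                    eps: float = 1e-6) -> torch.Tensor:
    """mask_logits: (N, H, W) predicted mask scores; box_mask: (N, H, W)
    binary masks rasterized from the ground-truth boxes."""
    mask = mask_logits.sigmoid()
    loss = mask.new_zeros(())
    for dim in (1, 2):  # project onto the y-axis, then the x-axis, by max
        pred = mask.max(dim=dim).values
        target = box_mask.max(dim=dim).values
        inter = (pred * target).sum(dim=1)
        union = (pred ** 2).sum(dim=1) + (target ** 2).sum(dim=1)
        loss = loss + (1 - (2 * inter + eps) / (union + eps)).mean()
    return loss
```

The second (pairwise) term encodes the color-similarity prior; its knobs surface in the config below as `pairwise_size`, `pairwise_dilation`, and `pairwise_color_thresh`.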
+
+## Results and Models
+
+| Backbone |  Style  | MS train | Lr schd | bbox AP | mask AP |                   Config                    |                                                                                                                                                  Download                                                                                                                                                   |
+| :------: | :-----: | :------: | :-----: | :-----: | :-----: | :-----------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+|   R-50   | pytorch |    Y     |   1x    |  39.6   |  31.1   | [config](./boxinst_r50_fpn_ms-90k_coco.py)  |  [model](https://download.openmmlab.com/mmdetection/v3.0/boxinst/boxinst_r50_fpn_ms-90k_coco/boxinst_r50_fpn_ms-90k_coco_20221228_163052-6add751a.pth) \| [log](https://download.openmmlab.com/mmdetection/v3.0/boxinst/boxinst_r50_fpn_ms-90k_coco/boxinst_r50_fpn_ms-90k_coco_20221228_163052.log.json)   |
+|  R-101   | pytorch |    Y     |   1x    |  41.8   |  32.7   | [config](./boxinst_r101_fpn_ms-90k_coco.py) | [model](https://download.openmmlab.com/mmdetection/v3.0/boxinst/boxinst_r101_fpn_ms-90k_coco/boxinst_r101_fpn_ms-90k_coco_20221229_145106-facf375b.pth) \| [log](https://download.openmmlab.com/mmdetection/v3.0/boxinst/boxinst_r101_fpn_ms-90k_coco/boxinst_r101_fpn_ms-90k_coco_20221229_145106.log.json) |
+
+## Citation
+
+```latex
+@inproceedings{tian2020boxinst,
+  title     =  {{BoxInst}: High-Performance Instance Segmentation with Box Annotations},
+  author    =  {Tian, Zhi and Shen, Chunhua and Wang, Xinlong and Chen, Hao},
+  booktitle =  {Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR)},
+  year      =  {2021}
+}
+```

+ 8 - 0
configs/boxinst/boxinst_r101_fpn_ms-90k_coco.py

@@ -0,0 +1,8 @@
+_base_ = './boxinst_r50_fpn_ms-90k_coco.py'
+
+# model settings
+model = dict(
+    backbone=dict(
+        depth=101,
+        init_cfg=dict(type='Pretrained',
+                      checkpoint='torchvision://resnet101')))
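
The override pattern above is MMEngine config inheritance: everything not mentioned is taken from the R-50 base file. A quick way to check what the merged config resolves to (sketch; assumes an MMDetection v3.x checkout with these config paths):

```python
# Inspect the merged BoxInst R-101 config (inheritance resolved by MMEngine).
from mmengine.config import Config

cfg = Config.fromfile('configs/boxinst/boxinst_r101_fpn_ms-90k_coco.py')
print(cfg.model.backbone.depth)  # 101 -- overridden here
print(cfg.model.neck.type)       # 'FPN' -- inherited from the R-50 base
```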

+ 93 - 0
configs/boxinst/boxinst_r50_fpn_ms-90k_coco.py

@@ -0,0 +1,93 @@
+_base_ = '../common/ms-90k_coco.py'
+
+# model settings
+model = dict(
+    type='BoxInst',
+    data_preprocessor=dict(
+        type='BoxInstDataPreprocessor',
+        mean=[123.675, 116.28, 103.53],
+        std=[58.395, 57.12, 57.375],
+        bgr_to_rgb=True,
+        pad_size_divisor=32,
+        mask_stride=4,
+        pairwise_size=3,
+        pairwise_dilation=2,
+        pairwise_color_thresh=0.3,
+        bottom_pixels_removed=10),
+    backbone=dict(
+        type='ResNet',
+        depth=50,
+        num_stages=4,
+        out_indices=(0, 1, 2, 3),
+        frozen_stages=1,
+        norm_cfg=dict(type='BN', requires_grad=True),
+        norm_eval=True,
+        init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'),
+        style='pytorch'),
+    neck=dict(
+        type='FPN',
+        in_channels=[256, 512, 1024, 2048],
+        out_channels=256,
+        start_level=1,
+        add_extra_convs='on_output',  # use P5
+        num_outs=5,
+        relu_before_extra_convs=True),
+    bbox_head=dict(
+        type='BoxInstBboxHead',
+        num_params=593,
+        num_classes=80,
+        in_channels=256,
+        stacked_convs=4,
+        feat_channels=256,
+        strides=[8, 16, 32, 64, 128],
+        norm_on_bbox=True,
+        centerness_on_reg=True,
+        dcn_on_last_conv=False,
+        center_sampling=True,
+        conv_bias=True,
+        loss_cls=dict(
+            type='FocalLoss',
+            use_sigmoid=True,
+            gamma=2.0,
+            alpha=0.25,
+            loss_weight=1.0),
+        loss_bbox=dict(type='GIoULoss', loss_weight=1.0),
+        loss_centerness=dict(
+            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)),
+    mask_head=dict(
+        type='BoxInstMaskHead',
+        num_layers=3,
+        feat_channels=16,
+        size_of_interest=8,
+        mask_out_stride=4,
+        topk_masks_per_img=64,
+        mask_feature_head=dict(
+            in_channels=256,
+            feat_channels=128,
+            start_level=0,
+            end_level=2,
+            out_channels=16,
+            mask_stride=8,
+            num_stacked_convs=4,
+            norm_cfg=dict(type='BN', requires_grad=True)),
+        loss_mask=dict(
+            type='DiceLoss',
+            use_sigmoid=True,
+            activate=True,
+            eps=5e-6,
+            loss_weight=1.0)),
+    # model training and testing settings
+    test_cfg=dict(
+        nms_pre=1000,
+        min_bbox_size=0,
+        score_thr=0.05,
+        nms=dict(type='nms', iou_threshold=0.6),
+        max_per_img=100,
+        mask_thr=0.5))
+
+# optimizer
+optim_wrapper = dict(optimizer=dict(lr=0.01))
+
+# evaluator
+val_evaluator = dict(metric=['bbox', 'segm'])
+test_evaluator = val_evaluator
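
To train or fine-tune from this config, the usual MMEngine entry point applies (a minimal sketch; assumes an MMDetection v3.x environment with COCO prepared as in `../common/ms-90k_coco.py`, and the `work_dir` path is arbitrary):

```python
# Launch BoxInst training from the config above via MMEngine's Runner.
from mmengine.config import Config
from mmengine.runner import Runner

cfg = Config.fromfile('configs/boxinst/boxinst_r50_fpn_ms-90k_coco.py')
cfg.work_dir = './work_dirs/boxinst_r50_fpn_ms-90k_coco'  # logs + checkpoints
runner = Runner.from_cfg(cfg)
runner.train()
```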

+ 52 - 0
configs/boxinst/metafile.yml

@@ -0,0 +1,52 @@
+Collections:
+  - Name: BoxInst
+    Metadata:
+      Training Data: COCO
+      Training Techniques:
+        - SGD with Momentum
+        - Weight Decay
+      Training Resources: 8x A100 GPUs
+      Architecture:
+        - ResNet
+        - FPN
+        - CondInst
+    Paper:
+      URL: https://arxiv.org/abs/2012.02310
+      Title: 'BoxInst: High-Performance Instance Segmentation with Box Annotations'
+    README: configs/boxinst/README.md
+    Code:
+      URL: https://github.com/open-mmlab/mmdetection/blob/v3.0.0rc6/mmdet/models/detectors/boxinst.py#L8
+      Version: v3.0.0rc6
+
+Models:
+  - Name: boxinst_r50_fpn_ms-90k_coco
+    In Collection: BoxInst
+    Config: configs/boxinst/boxinst_r50_fpn_ms-90k_coco.py
+    Metadata:
+      Iterations: 90000
+    Results:
+      - Task: Object Detection
+        Dataset: COCO
+        Metrics:
+          box AP: 39.4
+      - Task: Instance Segmentation
+        Dataset: COCO
+        Metrics:
+          mask AP: 30.8
+    Weights: https://download.openmmlab.com/mmdetection/v3.0/boxinst/boxinst_r50_fpn_ms-90k_coco/boxinst_r50_fpn_ms-90k_coco_20221228_163052-6add751a.pth
+
+  - Name: boxinst_r101_fpn_ms-90k_coco
+    In Collection: BoxInst
+    Config: configs/boxinst/boxinst_r101_fpn_ms-90k_coco.py
+    Metadata:
+      Iterations: 90000
+    Results:
+      - Task: Object Detection
+        Dataset: COCO
+        Metrics:
+          box AP: 41.8
+      - Task: Instance Segmentation
+        Dataset: COCO
+        Metrics:
+          mask AP: 32.7
+    Weights: https://download.openmmlab.com/mmdetection/v3.0/boxinst/boxinst_r101_fpn_ms-90k_coco/boxinst_r101_fpn_ms-90k_coco_20221229_145106-facf375b.pth

+ 42 - 0
configs/carafe/README.md

@@ -0,0 +1,42 @@
+# CARAFE
+
+> [CARAFE: Content-Aware ReAssembly of FEatures](https://arxiv.org/abs/1905.02188)
+
+<!-- [ALGORITHM] -->
+
+## Abstract
+
+Feature upsampling is a key operation in a number of modern convolutional network architectures, e.g. feature pyramids. Its design is critical for dense prediction tasks such as object detection and semantic/instance segmentation. In this work, we propose Content-Aware ReAssembly of FEatures (CARAFE), a universal, lightweight and highly effective operator to fulfill this goal. CARAFE has several appealing properties: (1) Large field of view. Unlike previous works (e.g. bilinear interpolation) that only exploit sub-pixel neighborhood, CARAFE can aggregate contextual information within a large receptive field. (2) Content-aware handling. Instead of using a fixed kernel for all samples (e.g. deconvolution), CARAFE enables instance-specific content-aware handling, which generates adaptive kernels on-the-fly. (3) Lightweight and fast to compute. CARAFE introduces little computational overhead and can be readily integrated into modern network architectures. We conduct comprehensive evaluations on standard benchmarks in object detection, instance/semantic segmentation and inpainting. CARAFE shows consistent and substantial gains across all the tasks (1.2%, 1.3%, 1.8%, 1.1db respectively) with negligible computational overhead. It has great potential to serve as a strong building block for future research.
+
+<div align=center>
+<img src="https://user-images.githubusercontent.com/40661020/143872016-48225685-0e59-49cf-bd65-a50ee04ca8a2.png"/>
+</div>
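
The reassembly step itself is simple to state: each upsampled location takes a weighted sum over a k×k neighborhood of the low-resolution feature map, with weights predicted from the content. A deliberately unoptimized PyTorch sketch of that step (kernel prediction omitted; the real operator is the CUDA kernel linked under Implementation below):

```python
# Illustrative CARAFE reassembly: content-aware kernels reassemble k_up x k_up
# neighborhoods of the input features at each upsampled position.
import torch
import torch.nn.functional as F

def carafe_reassemble(x, kernels, scale=2, k_up=5):
    """x: (N, C, H, W) features; kernels: (N, k_up*k_up, scale*H, scale*W),
    softmax-normalized over the kernel dimension."""
    n, c, h, w = x.shape
    # Gather every k_up x k_up neighborhood of the low-res map.
    patches = F.unfold(x, k_up, padding=k_up // 2)          # (N, C*k*k, H*W)
    patches = patches.view(n, c * k_up * k_up, h, w)
    # All scale x scale sub-positions of an output cell reuse one neighborhood.
    patches = F.interpolate(patches, scale_factor=scale, mode='nearest')
    patches = patches.view(n, c, k_up * k_up, scale * h, scale * w)
    return (patches * kernels.unsqueeze(1)).sum(dim=2)      # (N, C, sH, sW)

out = carafe_reassemble(torch.randn(1, 16, 8, 8),
                        torch.rand(1, 25, 16, 16).softmax(dim=1))
print(out.shape)  # torch.Size([1, 16, 16, 16])
```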
+
+## Results and Models
+
+The results on COCO 2017 val are shown in the table below.
+
+|         Method         | Backbone |  Style  | Lr schd | Test Proposal Num | Inf time (fps) | Box AP | Mask AP |                      Config                       |                                                                                                                                                                         Download                                                                                                                                                                          |
+| :--------------------: | :------: | :-----: | :-----: | :---------------: | :------------: | :----: | :-----: | :-----------------------------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+| Faster R-CNN w/ CARAFE | R-50-FPN | pytorch |   1x    |       1000        |      16.5      |  38.6  |    -    | [config](./faster-rcnn_r50_fpn-carafe_1x_coco.py) |     [model](https://download.openmmlab.com/mmdetection/v2.0/carafe/faster_rcnn_r50_fpn_carafe_1x_coco/faster_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.386_20200504_175733-385a75b7.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/carafe/faster_rcnn_r50_fpn_carafe_1x_coco/faster_rcnn_r50_fpn_carafe_1x_coco_20200504_175733.log.json)     |
+|           -            |    -     |    -    |    -    |       2000        |                |        |         |                                                   |                                                                                                                                                                                                                                                                                                                                                           |
+|  Mask R-CNN w/ CARAFE  | R-50-FPN | pytorch |   1x    |       1000        |      14.0      |  39.3  |  35.8   |  [config](./mask-rcnn_r50_fpn-carafe_1x_coco.py)  | [model](https://download.openmmlab.com/mmdetection/v2.0/carafe/mask_rcnn_r50_fpn_carafe_1x_coco/mask_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.393__segm_mAP-0.358_20200503_135957-8687f195.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/carafe/mask_rcnn_r50_fpn_carafe_1x_coco/mask_rcnn_r50_fpn_carafe_1x_coco_20200503_135957.log.json) |
+|           -            |    -     |    -    |    -    |       2000        |                |        |         |                                                   |                                                                                                                                                                                                                                                                                                                                                           |
+
+## Implementation
+
+The CUDA implementation of CARAFE can be found at [myownskyW7/CARAFE](https://github.com/myownskyW7/CARAFE).
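
For PyTorch users, recent mmcv builds also package a CARAFE operator under `mmcv.ops` (a CUDA build of mmcv is required); the sketch below mirrors the `upsample_cfg` keys used in the configs, but the exact class name and signature depend on your installed mmcv version, so treat it as an assumption:

```python
# Hedged usage sketch of mmcv's packaged CARAFE upsampler (CUDA build of mmcv
# assumed; arguments mirror upsample_cfg in the configs below).
import torch
from mmcv.ops import CARAFEPack

up = CARAFEPack(channels=256, scale_factor=2, up_kernel=5, up_group=1,
                encoder_kernel=3, encoder_dilation=1,
                compressed_channels=64).cuda()
y = up(torch.randn(1, 256, 32, 32).cuda())
print(y.shape)  # torch.Size([1, 256, 64, 64])
```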
+
+## Citation
+
+We provide config files to reproduce the object detection & instance segmentation results of the ICCV 2019 oral paper [CARAFE: Content-Aware ReAssembly of FEatures](https://arxiv.org/abs/1905.02188).
+
+```latex
+@inproceedings{Wang_2019_ICCV,
+    title = {CARAFE: Content-Aware ReAssembly of FEatures},
+    author = {Wang, Jiaqi and Chen, Kai and Xu, Rui and Liu, Ziwei and Loy, Chen Change and Lin, Dahua},
+    booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
+    month = {October},
+    year = {2019}
+}
+```

+ 20 - 0
configs/carafe/faster-rcnn_r50_fpn-carafe_1x_coco.py

@@ -0,0 +1,20 @@
+_base_ = '../faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py'
+model = dict(
+    data_preprocessor=dict(pad_size_divisor=64),
+    neck=dict(
+        type='FPN_CARAFE',
+        in_channels=[256, 512, 1024, 2048],
+        out_channels=256,
+        num_outs=5,
+        start_level=0,
+        end_level=-1,
+        norm_cfg=None,
+        act_cfg=None,
+        order=('conv', 'norm', 'act'),
+        upsample_cfg=dict(
+            type='carafe',
+            up_kernel=5,
+            up_group=1,
+            encoder_kernel=3,
+            encoder_dilation=1,
+            compressed_channels=64)))

+ 30 - 0
configs/carafe/mask-rcnn_r50_fpn-carafe_1x_coco.py

@@ -0,0 +1,30 @@
+_base_ = '../mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py'
+model = dict(
+    data_preprocessor=dict(pad_size_divisor=64),
+    neck=dict(
+        type='FPN_CARAFE',
+        in_channels=[256, 512, 1024, 2048],
+        out_channels=256,
+        num_outs=5,
+        start_level=0,
+        end_level=-1,
+        norm_cfg=None,
+        act_cfg=None,
+        order=('conv', 'norm', 'act'),
+        upsample_cfg=dict(
+            type='carafe',
+            up_kernel=5,
+            up_group=1,
+            encoder_kernel=3,
+            encoder_dilation=1,
+            compressed_channels=64)),
+    roi_head=dict(
+        mask_head=dict(
+            upsample_cfg=dict(
+                type='carafe',
+                scale_factor=2,
+                up_kernel=5,
+                up_group=1,
+                encoder_kernel=3,
+                encoder_dilation=1,
+                compressed_channels=64))))

+ 55 - 0
configs/carafe/metafile.yml

@@ -0,0 +1,55 @@
+Collections:
+  - Name: CARAFE
+    Metadata:
+      Training Data: COCO
+      Training Techniques:
+        - SGD with Momentum
+        - Weight Decay
+      Training Resources: 8x V100 GPUs
+      Architecture:
+        - RPN
+        - FPN_CARAFE
+        - ResNet
+        - RoIPool
+    Paper:
+      URL: https://arxiv.org/abs/1905.02188
+      Title: 'CARAFE: Content-Aware ReAssembly of FEatures'
+    README: configs/carafe/README.md
+    Code:
+      URL: https://github.com/open-mmlab/mmdetection/blob/v2.12.0/mmdet/models/necks/fpn_carafe.py#L11
+      Version: v2.12.0
+
+Models:
+  - Name: faster-rcnn_r50_fpn_carafe_1x_coco
+    In Collection: CARAFE
+    Config: configs/carafe/faster-rcnn_r50_fpn-carafe_1x_coco.py
+    Metadata:
+      Training Memory (GB): 4.26
+      Epochs: 12
+    Results:
+      - Task: Object Detection
+        Dataset: COCO
+        Metrics:
+          box AP: 38.6
+    Weights: https://download.openmmlab.com/mmdetection/v2.0/carafe/faster_rcnn_r50_fpn_carafe_1x_coco/faster_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.386_20200504_175733-385a75b7.pth
+
+  - Name: mask-rcnn_r50_fpn_carafe_1x_coco
+    In Collection: CARAFE
+    Config: configs/carafe/mask-rcnn_r50_fpn-carafe_1x_coco.py
+    Metadata:
+      Training Memory (GB): 4.31
+      Epochs: 12
+    Results:
+      - Task: Object Detection
+        Dataset: COCO
+        Metrics:
+          box AP: 39.3
+      - Task: Instance Segmentation
+        Dataset: COCO
+        Metrics:
+          mask AP: 35.8
+    Weights: https://download.openmmlab.com/mmdetection/v2.0/carafe/mask_rcnn_r50_fpn_carafe_1x_coco/mask_rcnn_r50_fpn_carafe_1x_coco_bbox_mAP-0.393__segm_mAP-0.358_20200503_135957-8687f195.pth

+ 79 - 0
configs/cascade_rcnn/README.md

@@ -0,0 +1,79 @@
+# Cascade R-CNN
+
+> [Cascade R-CNN: High Quality Object Detection and Instance Segmentation](https://arxiv.org/abs/1906.09756)
+
+<!-- [ALGORITHM] -->
+
+## Abstract
+
+In object detection, the intersection over union (IoU) threshold is frequently used to define positives/negatives. The threshold used to train a detector defines its quality. While the commonly used threshold of 0.5 leads to noisy (low-quality) detections, detection performance frequently degrades for larger thresholds. This paradox of high-quality detection has two causes: 1) overfitting, due to vanishing positive samples for large thresholds, and 2) inference-time quality mismatch between detector and test hypotheses. A multi-stage object detection architecture, the Cascade R-CNN, composed of a sequence of detectors trained with increasing IoU thresholds, is proposed to address these problems. The detectors are trained sequentially, using the output of a detector as training set for the next. This resampling progressively improves hypotheses quality, guaranteeing a positive training set of equivalent size for all detectors and minimizing overfitting. The same cascade is applied at inference, to eliminate quality mismatches between hypotheses and detectors. An implementation of the Cascade R-CNN without bells or whistles achieves state-of-the-art performance on the COCO dataset, and significantly improves high-quality detection on generic and specific object detection datasets, including VOC, KITTI, CityPerson, and WiderFace. Finally, the Cascade R-CNN is generalized to instance segmentation, with nontrivial improvements over the Mask R-CNN.
+
+<div align=center>
+<img src="https://user-images.githubusercontent.com/40661020/143872197-d99b90e4-4f05-4329-80a4-327ac862a051.png"/>
+</div>
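
In config terms, the cascade is literally a list of R-CNN stages whose assigners use increasing IoU thresholds. An abridged sketch of the pattern (keys trimmed for illustration; see `configs/_base_/models/cascade-rcnn_r50_fpn.py` for the full version):

```python
# Abridged: three cascade stages with IoU thresholds 0.5 -> 0.6 -> 0.7.
train_cfg = dict(rcnn=[
    dict(assigner=dict(type='MaxIoUAssigner',
                       pos_iou_thr=thr, neg_iou_thr=thr, min_pos_iou=thr))
    for thr in (0.5, 0.6, 0.7)
])
```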
+
+## Results and Models
+
+### Cascade R-CNN
+
+|    Backbone     |  Style  | Lr schd | Mem (GB) | Inf time (fps) | box AP |                       Config                        |                                                                                                                                                                             Download                                                                                                                                                                              |
+| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :-------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+|    R-50-FPN     |  caffe  |   1x    |   4.2    |                |  40.4  |  [config](./cascade-rcnn_r50-caffe_fpn_1x_coco.py)  |   [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r50_caffe_fpn_1x_coco/cascade_rcnn_r50_caffe_fpn_1x_coco_bbox_mAP-0.404_20200504_174853-b857be87.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r50_caffe_fpn_1x_coco/cascade_rcnn_r50_caffe_fpn_1x_coco_20200504_174853.log.json)   |
+|    R-50-FPN     | pytorch |   1x    |   4.4    |      16.1      |  40.3  |     [config](./cascade-rcnn_r50_fpn_1x_coco.py)     |                          [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco/cascade_rcnn_r50_fpn_1x_coco_20200316-3dc56deb.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco/cascade_rcnn_r50_fpn_1x_coco_20200316_214748.log.json)                          |
+|    R-50-FPN     | pytorch |   20e   |    -     |       -        |  41.0  |    [config](./cascade-rcnn_r50_fpn_20e_coco.py)     |             [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r50_fpn_20e_coco/cascade_rcnn_r50_fpn_20e_coco_bbox_mAP-0.41_20200504_175131-e9872a90.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r50_fpn_20e_coco/cascade_rcnn_r50_fpn_20e_coco_20200504_175131.log.json)              |
+|    R-101-FPN    |  caffe  |   1x    |   6.2    |                |  42.3  | [config](./cascade-rcnn_r101-caffe_fpn_1x_coco.py)  | [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r101_caffe_fpn_1x_coco/cascade_rcnn_r101_caffe_fpn_1x_coco_bbox_mAP-0.423_20200504_175649-cab8dbd5.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r101_caffe_fpn_1x_coco/cascade_rcnn_r101_caffe_fpn_1x_coco_20200504_175649.log.json) |
+|    R-101-FPN    | pytorch |   1x    |   6.4    |      13.5      |  42.0  |    [config](./cascade-rcnn_r101_fpn_1x_coco.py)     |                        [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r101_fpn_1x_coco/cascade_rcnn_r101_fpn_1x_coco_20200317-0b6a2fbf.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r101_fpn_1x_coco/cascade_rcnn_r101_fpn_1x_coco_20200317_101744.log.json)                        |
+|    R-101-FPN    | pytorch |   20e   |    -     |       -        |  42.5  |    [config](./cascade-rcnn_r101_fpn_20e_coco.py)    |           [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r101_fpn_20e_coco/cascade_rcnn_r101_fpn_20e_coco_bbox_mAP-0.425_20200504_231812-5057dcc5.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_r101_fpn_20e_coco/cascade_rcnn_r101_fpn_20e_coco_20200504_231812.log.json)           |
+| X-101-32x4d-FPN | pytorch |   1x    |   7.6    |      10.9      |  43.7  | [config](./cascade-rcnn_x101-32x4d_fpn_1x_coco.py)  |            [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_1x_coco/cascade_rcnn_x101_32x4d_fpn_1x_coco_20200316-95c2deb6.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_1x_coco/cascade_rcnn_x101_32x4d_fpn_1x_coco_20200316_055608.log.json)            |
+| X-101-32x4d-FPN | pytorch |   20e   |   7.6    |                |  43.7  | [config](./cascade-rcnn_x101-32x4d_fpn_20e_coco.py) |      [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_20e_coco/cascade_rcnn_x101_32x4d_fpn_20e_coco_20200906_134608-9ae0a720.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_x101_32x4d_fpn_20e_coco/cascade_rcnn_x101_32x4d_fpn_20e_coco_20200906_134608.log.json)       |
+| X-101-64x4d-FPN | pytorch |   1x    |   10.7   |                |  44.7  | [config](./cascade-rcnn_x101-64x4d_fpn_1x_coco.py)  |        [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_x101_64x4d_fpn_1x_coco/cascade_rcnn_x101_64x4d_fpn_1x_coco_20200515_075702-43ce6a30.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_x101_64x4d_fpn_1x_coco/cascade_rcnn_x101_64x4d_fpn_1x_coco_20200515_075702.log.json)         |
+| X-101-64x4d-FPN | pytorch |   20e   |   10.7   |                |  44.5  | [config](./cascade-rcnn_x101_64x4d_fpn_20e_coco.py) |      [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_x101_64x4d_fpn_20e_coco/cascade_rcnn_x101_64x4d_fpn_20e_coco_20200509_224357-051557b1.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_x101_64x4d_fpn_20e_coco/cascade_rcnn_x101_64x4d_fpn_20e_coco_20200509_224357.log.json)       |
+
+### Cascade Mask R-CNN
+
+|    Backbone     |  Style  | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP |                          Config                          |                                                                                                                                                                                               Download                                                                                                                                                                                                |
+| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :------------------------------------------------------: | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+|    R-50-FPN     |  caffe  |   1x    |   5.9    |                |  41.2  |  36.0   |  [config](./cascade-mask-rcnn_r50-caffe_fpn_1x_coco.py)  |   [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_caffe_fpn_1x_coco/cascade_mask_rcnn_r50_caffe_fpn_1x_coco_bbox_mAP-0.412__segm_mAP-0.36_20200504_174659-5004b251.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_caffe_fpn_1x_coco/cascade_mask_rcnn_r50_caffe_fpn_1x_coco_20200504_174659.log.json)    |
+|    R-50-FPN     | pytorch |   1x    |   6.0    |      11.2      |  41.2  |  35.9   |     [config](./cascade-mask-rcnn_r50_fpn_1x_coco.py)     |                                  [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco/cascade_mask_rcnn_r50_fpn_1x_coco_20200203-9d4dcb24.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_fpn_1x_coco/cascade_mask_rcnn_r50_fpn_1x_coco_20200203_170449.log.json)                                  |
+|    R-50-FPN     | pytorch |   20e   |    -     |       -        |  41.9  |  36.5   |    [config](./cascade-mask-rcnn_r50_fpn_20e_coco.py)     |             [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_fpn_20e_coco/cascade_mask_rcnn_r50_fpn_20e_coco_bbox_mAP-0.419__segm_mAP-0.365_20200504_174711-4af8e66e.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_fpn_20e_coco/cascade_mask_rcnn_r50_fpn_20e_coco_20200504_174711.log.json)             |
+|    R-101-FPN    |  caffe  |   1x    |   7.8    |                |  43.2  |  37.6   | [config](./cascade-mask-rcnn_r101-caffe_fpn_1x_coco.py)  | [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r101_caffe_fpn_1x_coco/cascade_mask_rcnn_r101_caffe_fpn_1x_coco_bbox_mAP-0.432__segm_mAP-0.376_20200504_174813-5c1e9599.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r101_caffe_fpn_1x_coco/cascade_mask_rcnn_r101_caffe_fpn_1x_coco_20200504_174813.log.json) |
+|    R-101-FPN    | pytorch |   1x    |   7.9    |      9.8       |  42.9  |  37.3   |    [config](./cascade-mask-rcnn_r101_fpn_1x_coco.py)     |                                [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r101_fpn_1x_coco/cascade_mask_rcnn_r101_fpn_1x_coco_20200203-befdf6ee.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r101_fpn_1x_coco/cascade_mask_rcnn_r101_fpn_1x_coco_20200203_092521.log.json)                                |
+|    R-101-FPN    | pytorch |   20e   |    -     |       -        |  43.4  |  37.8   |    [config](./cascade-mask-rcnn_r101_fpn_20e_coco.py)    |           [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r101_fpn_20e_coco/cascade_mask_rcnn_r101_fpn_20e_coco_bbox_mAP-0.434__segm_mAP-0.378_20200504_174836-005947da.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r101_fpn_20e_coco/cascade_mask_rcnn_r101_fpn_20e_coco_20200504_174836.log.json)           |
+| X-101-32x4d-FPN | pytorch |   1x    |   9.2    |      8.6       |  44.3  |  38.3   | [config](./cascade-mask-rcnn_x101-32x4d_fpn_1x_coco.py)  |                    [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_1x_coco_20200201-0f411b1f.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_1x_coco/cascade_mask_rcnn_x101_32x4d_fpn_1x_coco_20200201_052416.log.json)                    |
+| X-101-32x4d-FPN | pytorch |   20e   |   9.2    |       -        |  45.0  |  39.0   | [config](./cascade-mask-rcnn_x101-32x4d_fpn_20e_coco.py) |              [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_20e_coco/cascade_mask_rcnn_x101_32x4d_fpn_20e_coco_20200528_083917-ed1f4751.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_20e_coco/cascade_mask_rcnn_x101_32x4d_fpn_20e_coco_20200528_083917.log.json)               |
+| X-101-64x4d-FPN | pytorch |   1x    |   12.2   |      6.7       |  45.3  |  39.2   | [config](./cascade-mask-rcnn_x101-64x4d_fpn_1x_coco.py)  |                    [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_64x4d_fpn_1x_coco/cascade_mask_rcnn_x101_64x4d_fpn_1x_coco_20200203-9a2db89d.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_64x4d_fpn_1x_coco/cascade_mask_rcnn_x101_64x4d_fpn_1x_coco_20200203_044059.log.json)                    |
+| X-101-64x4d-FPN | pytorch |   20e   |   12.2   |                |  45.6  |  39.5   | [config](./cascade-mask-rcnn_x101-64x4d_fpn_20e_coco.py) |              [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_64x4d_fpn_20e_coco/cascade_mask_rcnn_x101_64x4d_fpn_20e_coco_20200512_161033-bdb5126a.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_64x4d_fpn_20e_coco/cascade_mask_rcnn_x101_64x4d_fpn_20e_coco_20200512_161033.log.json)               |
+
+**Notes:**
+
+- The `20e` schedule in Cascade (Mask) R-CNN means the learning rate is decreased at epochs 16 and 19, for a total of 20 epochs; see the scheduler sketch below.
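
A sketch of that schedule as an MMEngine `param_scheduler` (mirroring the 1x scheduler shown earlier; the warmup length is illustrative):

```python
# 20e schedule: warmup, then 10x LR decays on reaching epochs 16 and 19.
param_scheduler = [
    dict(type='LinearLR', start_factor=0.001, by_epoch=False, begin=0,
         end=500),
    dict(type='MultiStepLR', begin=0, end=20, by_epoch=True,
         milestones=[16, 19], gamma=0.1)
]
```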
+
+## Pre-trained Models
+
+We also train some Cascade Mask R-CNN models with longer schedules and multi-scale training. Users can fine-tune them for downstream tasks.
+
+|    Backbone     |  Style  | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP |                           Config                           |                                                                                                                                                                                                Download                                                                                                                                                                                                |
+| :-------------: | :-----: | :-----: | :------: | :------------: | :----: | :-----: | :--------------------------------------------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+|    R-50-FPN     |  caffe  |   3x    |   5.7    |                |  44.0  |  38.1   | [config](./cascade-mask-rcnn_r50-caffe_fpn_ms-3x_coco.py)  |   [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_caffe_fpn_mstrain_3x_coco/cascade_mask_rcnn_r50_caffe_fpn_mstrain_3x_coco_20210707_002651-6e29b3a6.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_caffe_fpn_mstrain_3x_coco/cascade_mask_rcnn_r50_caffe_fpn_mstrain_3x_coco_20210707_002651.log.json)   |
+|    R-50-FPN     | pytorch |   3x    |   5.9    |                |  44.3  |  38.5   |    [config](./cascade-mask-rcnn_r50_fpn_ms-3x_coco.py)     |               [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_fpn_mstrain_3x_coco/cascade_mask_rcnn_r50_fpn_mstrain_3x_coco_20210628_164719-5bdc3824.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r50_fpn_mstrain_3x_coco/cascade_mask_rcnn_r50_fpn_mstrain_3x_coco_20210628_164719.log.json)               |
+|    R-101-FPN    |  caffe  |   3x    |   7.7    |                |  45.4  |  39.5   | [config](./cascade-mask-rcnn_r101-caffe_fpn_ms-3x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r101_caffe_fpn_mstrain_3x_coco/cascade_mask_rcnn_r101_caffe_fpn_mstrain_3x_coco_20210707_002620-a5bd2389.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r101_caffe_fpn_mstrain_3x_coco/cascade_mask_rcnn_r101_caffe_fpn_mstrain_3x_coco_20210707_002620.log.json) |
+|    R-101-FPN    | pytorch |   3x    |   7.8    |                |  45.5  |  39.6   |    [config](./cascade-mask-rcnn_r101_fpn_ms-3x_coco.py)    |             [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r101_fpn_mstrain_3x_coco/cascade_mask_rcnn_r101_fpn_mstrain_3x_coco_20210628_165236-51a2d363.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_r101_fpn_mstrain_3x_coco/cascade_mask_rcnn_r101_fpn_mstrain_3x_coco_20210628_165236.log.json)             |
+| X-101-32x4d-FPN | pytorch |   3x    |   9.0    |                |  46.3  |  40.1   | [config](./cascade-mask-rcnn_x101-32x4d_fpn_ms-3x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_mstrain_3x_coco/cascade_mask_rcnn_x101_32x4d_fpn_mstrain_3x_coco_20210706_225234-40773067.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_32x4d_fpn_mstrain_3x_coco/cascade_mask_rcnn_x101_32x4d_fpn_mstrain_3x_coco_20210706_225234.log.json) |
+| X-101-32x8d-FPN | pytorch |   3x    |   12.1   |                |  46.1  |  39.9   | [config](./cascade-mask-rcnn_x101-32x8d_fpn_ms-3x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_32x8d_fpn_mstrain_3x_coco/cascade_mask_rcnn_x101_32x8d_fpn_mstrain_3x_coco_20210719_180640-9ff7e76f.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_32x8d_fpn_mstrain_3x_coco/cascade_mask_rcnn_x101_32x8d_fpn_mstrain_3x_coco_20210719_180640.log.json) |
+| X-101-64x4d-FPN | pytorch |   3x    |   12.0   |                |  46.6  |  40.3   | [config](./cascade-mask-rcnn_x101-64x4d_fpn_ms-3x_coco.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_64x4d_fpn_mstrain_3x_coco/cascade_mask_rcnn_x101_64x4d_fpn_mstrain_3x_coco_20210719_210311-d3e64ba0.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_mask_rcnn_x101_64x4d_fpn_mstrain_3x_coco/cascade_mask_rcnn_x101_64x4d_fpn_mstrain_3x_coco_20210719_210311.log.json) |
+
+## Citation
+
+```latex
+@article{Cai_2019,
+   title={Cascade R-CNN: High Quality Object Detection and Instance Segmentation},
+   ISSN={1939-3539},
+   url={http://dx.doi.org/10.1109/tpami.2019.2956516},
+   DOI={10.1109/tpami.2019.2956516},
+   journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
+   publisher={Institute of Electrical and Electronics Engineers (IEEE)},
+   author={Cai, Zhaowei and Vasconcelos, Nuno},
+   year={2019},
+   pages={1–1}
+}
+```

+ 7 - 0
configs/cascade_rcnn/cascade-mask-rcnn_r101-caffe_fpn_1x_coco.py

@@ -0,0 +1,7 @@
+_base_ = './cascade-mask-rcnn_r50-caffe_fpn_1x_coco.py'
+model = dict(
+    backbone=dict(
+        depth=101,
+        init_cfg=dict(
+            type='Pretrained',
+            checkpoint='open-mmlab://detectron2/resnet101_caffe')))

Some files were not shown because too many files changed in this diff