
EfficientDet

EfficientDet: Scalable and Efficient Object Detection, Mingxing Tan, Ruoming Pang, Quoc V. Le, CVPR 2020

Abstract

This is an implementation of EfficientDet based on MMDetection, MMCV, and MMEngine.
EfficientDet is a new family of object detectors that consistently achieves much better efficiency than prior art across a wide spectrum of resource constraints. In particular, with a single model at single scale, EfficientDet-D7 achieves a state-of-the-art 55.1 AP on COCO test-dev with 77M parameters and 410B FLOPs.
BiFPN is a simple yet highly effective weighted bi-directional feature pyramid network that introduces learnable weights to learn the importance of different input features while repeatedly applying top-down and bottom-up multi-scale feature fusion.
In contrast to other feature pyramid networks, such as FPN, FPN + PAN, and NAS-FPN, BiFPN achieves the best accuracy with fewer parameters and FLOPs.
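
As an illustration of the weighted fusion, below is a minimal PyTorch sketch of the fast normalized fusion described in the paper; the class and variable names are ours, not taken from this project's code:

import torch
import torch.nn as nn

class FastNormalizedFusion(nn.Module):
    """Fuse same-resolution feature maps with learnable scalar weights.

    Computes O = sum_i(w_i * I_i) / (eps + sum_j(w_j)), where each
    w_i = ReLU(learnable scalar), as described in the EfficientDet paper.
    """

    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, inputs):
        # ReLU keeps every fusion weight non-negative.
        w = torch.relu(self.weights)
        # Normalize by the weight sum; cheaper than a softmax but similar in effect.
        w = w / (w.sum() + self.eps)
        return sum(wi * x for wi, x in zip(w, inputs))

# Example: fuse two feature maps of the same shape, as in one BiFPN node.
fusion = FastNormalizedFusion(num_inputs=2)
out = fusion([torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)])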

Usage

Official TensorFlow Model

This project also supports the official TensorFlow model, which uses 90 categories and yxyx box encoding in training. If you want to use the original model weights to reproduce the official results, please follow the steps below.
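
For reference, yxyx encoding stores a box as (y_min, x_min, y_max, x_max) instead of the (x_min, y_min, x_max, y_max) order used elsewhere in MMDetection. A minimal conversion sketch (the function name is ours, not part of this project):

import torch

def yxyx_to_xyxy(boxes: torch.Tensor) -> torch.Tensor:
    """Convert (N, 4) boxes from (y1, x1, y2, x2) to (x1, y1, x2, y2)."""
    y1, x1, y2, x2 = boxes.unbind(dim=-1)
    return torch.stack([x1, y1, x2, y2], dim=-1)

# Example: a box with y1=10, x1=20, y2=50, x2=60 becomes (20, 10, 60, 50).
print(yxyx_to_xyxy(torch.tensor([[10., 20., 50., 60.]])))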

Model conversion

First, download the EfficientDet weights and unpack them with the following command

tar -xzvf {EFFICIENTDET_WEIGHT}
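
For example, if the downloaded archive is named efficientdet-d0.tar.gz (a hypothetical filename; substitute the actual name of your download), the command would be

tar -xzvf efficientdet-d0.tar.gz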

Then, install TensorFlow with the following command

pip install tensorflow-gpu==2.6.0

Finally, convert the weights from TensorFlow to PyTorch with the following command

python projects/EfficientDet/convert_tf_to_pt.py --backbone {BACKBONE_NAME} --tensorflow_weight {TENSORFLOW_WEIGHT_PATH} --out_weight {OUT_PATH}
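
A hypothetical invocation might look like the following; the backbone name and both paths are placeholders for illustration only, so substitute the values that match your download:

python projects/EfficientDet/convert_tf_to_pt.py --backbone efficientdet-d0 --tensorflow_weight ./efficientdet-d0 --out_weight ./efficientdet-d0.pth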

Testing commands

In MMDetection's root directory, run the following command to test the model:

python tools/test.py projects/EfficientDet/configs/tensorflow/efficientdet_effb0_bifpn_8xb16-crop512-300e_coco_tf.py ${CHECKPOINT_PATH}
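
If you have multiple GPUs, MMDetection's standard distributed launcher should also work here (a sketch assuming the usual tools/dist_test.sh interface; adjust the GPU count, here 8, to your machine):

bash tools/dist_test.sh projects/EfficientDet/configs/tensorflow/efficientdet_effb0_bifpn_8xb16-crop512-300e_coco_tf.py ${CHECKPOINT_PATH} 8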

Reproduce Model

For convenience, we recommend the current implementation, which uses 80 categories and xyxy box encoding in training. On this basis, it ultimately achieves a higher result than the official model.

Training commands

In MMDetection's root directory, run the following command to train the model:

python tools/train.py projects/EfficientDet/configs/efficientdet_effb3_bifpn_8xb16-crop896-300e_coco.py
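
The 8xb16 part of the config name denotes the intended setup of 8 GPUs with a batch size of 16 per GPU; to train distributed, MMDetection's standard launcher should work (a sketch assuming the usual tools/dist_train.sh interface):

bash tools/dist_train.sh projects/EfficientDet/configs/efficientdet_effb3_bifpn_8xb16-crop896-300e_coco.py 8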

Testing commands

In MMDetection's root directory, run the following command to test the model:

python tools/test.py projects/EfficientDet/configs/efficientdet_effb3_bifpn_8xb16-crop896-300e_coco.py ${CHECKPOINT_PATH}

Results

Based on MMDetection, this project matches the accuracy of the official model.

Method            Backbone         Pretrained Model  Training set    Test set      Epoch  Val Box AP  Official AP  Download
efficientdet-d0*  efficientnet-b0  ImageNet          COCO2017 Train  COCO2017 Val  300    34.4        34.3         -
efficientdet-d3   efficientnet-b3  ImageNet          COCO2017 Train  COCO2017 Val  300    47.2        46.8        model | log

Note: * means the model was tested with the official TensorFlow weights.

Citation

@inproceedings{tan2020efficientdet,
  title={Efficientdet: Scalable and efficient object detection},
  author={Tan, Mingxing and Pang, Ruoming and Le, Quoc V},
  booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
  pages={10781--10790},
  year={2020}
}

Checklist

  • [x] Milestone 1: PR-ready, and acceptable to be one of the projects/.

    • [x] Finish the code
    • [x] Basic docstrings & proper citation
    • [x] Test-time correctness
    • [x] A full README

  • [x] Milestone 2: Indicates a successful model implementation.

    • [x] Training-time correctness

  • [ ] Milestone 3: Good to be a part of our core package!

    • [ ] Type hints and docstrings
    • [ ] Unit tests
    • [ ] Code polishing
    • [ ] Metafile.yml

  • [ ] Move and refactor your modules into the core package following the codebase's file hierarchy structure.