# Conditional DETR

> Conditional DETR for Fast Training Convergence

## Abstract

The DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. In this paper, we address the critical issue of slow training convergence and present a conditional cross-attention mechanism for fast DETR training. Our approach is motivated by the observation that the cross-attention in DETR relies highly on the content embeddings while the spatial embeddings make minor contributions, which increases the need for high-quality content embeddings and thus the training difficulty.

Our conditional DETR learns a conditional spatial query from the decoder embedding for decoder multi-head cross-attention. The benefit is that, through the conditional spatial query, each cross-attention head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box (Figure 1 of the paper). This narrows down the spatial range for localizing the distinct regions used for object classification and box regression, thus relaxing the dependence on the content embeddings and easing the training. Empirical results show that conditional DETR converges 6.7x faster for the backbones R50 and R101 and 10x faster for the stronger backbones DC5-R50 and DC5-R101.
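
To make the mechanism concrete, below is a minimal sketch in plain PyTorch of a cross-attention layer with a conditional spatial query as described above: the decoder embedding is passed through a small FFN and multiplied elementwise with the sinusoidal embedding of the reference point, and the content and spatial parts of queries and keys are concatenated per head so the attention score decomposes into a content term plus a spatial term. This is not the implementation shipped with this config; names such as `ConditionalCrossAttentionSketch`, `sine_pos_embed`, and `spatial_ffn` are illustrative assumptions, not identifiers from the paper or the MMDetection code base.

```python
import math
import torch
import torch.nn as nn


def sine_pos_embed(points, num_feats=128, temperature=10000):
    """Sinusoidal embedding of normalized (x, y) points -> (..., 2 * num_feats)."""
    scale = 2 * math.pi
    dim_t = torch.arange(num_feats, dtype=torch.float32, device=points.device)
    dim_t = temperature ** (2 * torch.div(dim_t, 2, rounding_mode="floor") / num_feats)
    pos = points * scale                      # (..., 2)
    pos = pos.unsqueeze(-1) / dim_t           # (..., 2, num_feats)
    pos = torch.stack((pos[..., 0::2].sin(), pos[..., 1::2].cos()), dim=-1).flatten(-2)
    return pos.flatten(-2)                    # (..., 2 * num_feats)


class ConditionalCrossAttentionSketch(nn.Module):
    """Decoder cross-attention whose spatial query is conditioned on the decoder embedding."""

    def __init__(self, embed_dim=256, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.q_content = nn.Linear(embed_dim, embed_dim)  # content query from decoder embedding
        self.q_spatial = nn.Linear(embed_dim, embed_dim)  # projection of the conditional spatial query
        self.k_content = nn.Linear(embed_dim, embed_dim)  # content key from encoder memory
        self.k_spatial = nn.Linear(embed_dim, embed_dim)  # spatial key from 2-D positional embedding
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.out_proj = nn.Linear(embed_dim, embed_dim)
        # FFN mapping the decoder embedding f to the transformation T in p_q = T * sine(s)
        self.spatial_ffn = nn.Sequential(
            nn.Linear(embed_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )

    def forward(self, decoder_embed, memory, memory_pos, ref_points):
        # decoder_embed: (B, Nq, C), memory / memory_pos: (B, HW, C), ref_points: (B, Nq, 2) in [0, 1]
        B, Nq, C = decoder_embed.shape
        # conditional spatial query: elementwise product of T(f) and the reference-point embedding
        p_q = self.spatial_ffn(decoder_embed) * sine_pos_embed(ref_points, C // 2)

        def split(x):  # (B, L, C) -> (B, heads, L, head_dim)
            return x.view(B, x.shape[1], self.num_heads, self.head_dim).transpose(1, 2)

        # concatenating content and spatial parts per head makes the attention score
        # decompose into a content term plus a spatial term
        q = torch.cat([split(self.q_content(decoder_embed)), split(self.q_spatial(p_q))], dim=-1)
        k = torch.cat([split(self.k_content(memory)), split(self.k_spatial(memory_pos))], dim=-1)
        v = split(self.v_proj(memory))

        attn = (q @ k.transpose(-2, -1)) / math.sqrt(2 * self.head_dim)
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, Nq, C)
        return self.out_proj(out)
```

With the default 256-dim embedding and eight heads, calling the module on tensors of shapes (2, 300, 256), (2, 1064, 256), (2, 1064, 256) and (2, 300, 2) returns a (2, 300, 256) tensor, matching the decoder's hidden size.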

## Results and Models

We provide the config files and models for Conditional DETR: Conditional DETR for Fast Training Convergence.

| Backbone |      Model       | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config |   Download   |
| :------: | :--------------: | :-----: | :------: | :------------: | :----: | :----: | :----------: |
|   R-50   | Conditional DETR |   50e   |          |                |  41.1  | config | model \| log |
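
As a quick-start hint, the sketch below shows one way to run inference with this config through the standard MMDetection 3.x Python API (`init_detector` / `inference_detector`). It assumes the usual `configs/conditional_detr/` layout of an MMDetection checkout; the checkpoint path and demo image are placeholders, to be replaced with the file downloaded from the "model" link in the table above and your own image.

```python
# Hedged usage sketch, assuming an MMDetection 3.x environment; the checkpoint
# path below is a placeholder for the file downloaded from the "model" link.
from mmdet.apis import init_detector, inference_detector

config_file = 'configs/conditional_detr/conditional-detr_r50_8xb2-50e_coco.py'
checkpoint_file = 'checkpoints/conditional-detr_r50_8xb2-50e_coco.pth'  # placeholder path

model = init_detector(config_file, checkpoint_file, device='cuda:0')
result = inference_detector(model, 'demo/demo.jpg')  # demo image shipped with MMDetection

# `result` is a DetDataSample; its pred_instances hold boxes, labels and scores.
print(result.pred_instances.bboxes[:5])
print(result.pred_instances.scores[:5])
```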

## Citation

```latex
@inproceedings{meng2021-CondDETR,
  title       = {Conditional DETR for Fast Training Convergence},
  author      = {Meng, Depu and Chen, Xiaokang and Fan, Zejia and Zeng, Gang and Li, Houqiang and Yuan, Yuhui and Sun, Lei and Wang, Jingdong},
  booktitle   = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
  year        = {2021}
}
```