# PVT

Pyramid vision transformer: A versatile backbone for dense prediction without convolutions

## Abstract

Although using convolutional neural networks (CNNs) as backbones achieves great success in computer vision, this work investigates a simple backbone network useful for many dense prediction tasks without convolutions. Unlike the recently proposed Transformer model (e.g., ViT), which is specially designed for image classification, we propose the Pyramid Vision Transformer (PVT), which overcomes the difficulties of porting Transformers to various dense prediction tasks. PVT has several merits compared to prior art. (1) Unlike ViT, which typically has low-resolution outputs and high computational and memory cost, PVT can not only be trained on dense partitions of the image to achieve high output resolution, which is important for dense prediction, but also uses a progressive shrinking pyramid to reduce the computation on large feature maps. (2) PVT inherits the advantages of both CNNs and Transformers, making it a unified backbone for various vision tasks without convolutions, simply by replacing CNN backbones. (3) We validate PVT through extensive experiments, showing that it boosts the performance of many downstream tasks, e.g., object detection and semantic and instance segmentation. For example, with a comparable number of parameters, RetinaNet+PVT achieves 40.4 AP on the COCO dataset, surpassing RetinaNet+ResNet50 (36.3 AP) by 4.1 absolute AP. We hope PVT can serve as an alternative and useful backbone for pixel-level predictions and facilitate future research.
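
For a concrete picture of how the "progressive shrinking pyramid" keeps attention affordable on large feature maps, the snippet below is a minimal PyTorch sketch of spatial-reduction attention (SRA), where keys and values are spatially downsampled before multi-head attention while queries stay at full resolution. The module name, hyper-parameters, and use of `nn.MultiheadAttention` are illustrative assumptions, not the implementation shipped with this repository.

```python
import torch
import torch.nn as nn


class SpatialReductionAttention(nn.Module):
    """Sketch of PVT-style spatial-reduction attention (SRA).

    Keys and values are downsampled by `sr_ratio` with a strided convolution
    before attention, so the cost on an H x W feature map drops roughly by a
    factor of sr_ratio ** 2. Values here are illustrative, not the released ones.
    """

    def __init__(self, dim, num_heads=2, sr_ratio=8):
        super().__init__()
        self.sr_ratio = sr_ratio
        if sr_ratio > 1:
            # Strided conv that shrinks the key/value token grid.
            self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
            self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x, H, W):
        # x: (B, N, C) tokens of an H x W feature map, N = H * W.
        B, N, C = x.shape
        if self.sr_ratio > 1:
            kv = x.transpose(1, 2).reshape(B, C, H, W)   # back to a 2-D map
            kv = self.sr(kv).flatten(2).transpose(1, 2)  # (B, N / r^2, C)
            kv = self.norm(kv)
        else:
            kv = x
        out, _ = self.attn(x, kv, kv, need_weights=False)  # queries stay full-res
        return out


# Quick shape check on a 64x64 feature map with 64 channels.
tokens = torch.randn(1, 64 * 64, 64)
print(SpatialReductionAttention(64)(tokens, 64, 64).shape)  # torch.Size([1, 4096, 64])
```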

Transformers have recently shown encouraging progress in computer vision. In this work, we present new baselines by improving the original Pyramid Vision Transformer (abbreviated as PVTv1) with three designs: (1) overlapping patch embedding, (2) convolutional feed-forward networks, and (3) linear-complexity attention layers. With these modifications, our PVTv2 significantly improves on PVTv1 across three tasks, i.e., classification, detection, and segmentation. Moreover, PVTv2 achieves performance comparable to or better than recent works such as Swin Transformer. We hope this work will facilitate state-of-the-art Transformer research in computer vision.
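
The first of the three PVTv2 designs, overlapping patch embedding, amounts to a convolution whose kernel is larger than its stride, so neighbouring patches share pixels instead of being split disjointly as in ViT/PVTv1. The example below is a self-contained PyTorch sketch under assumed kernel/stride values (7 and 4); it is not the exact layer used in the released models.

```python
import torch
import torch.nn as nn


class OverlapPatchEmbed(nn.Module):
    """Sketch of PVTv2-style overlapping patch embedding."""

    def __init__(self, in_chans=3, embed_dim=64, patch_size=7, stride=4):
        super().__init__()
        # kernel_size > stride => adjacent patches overlap.
        self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size,
                              stride=stride, padding=patch_size // 2)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):
        x = self.proj(x)                  # (B, C, H/stride, W/stride)
        _, _, H, W = x.shape
        x = x.flatten(2).transpose(1, 2)  # (B, N, C) token sequence
        return self.norm(x), H, W


tokens, H, W = OverlapPatchEmbed()(torch.randn(1, 3, 224, 224))
print(tokens.shape, H, W)                 # torch.Size([1, 3136, 64]) 56 56
```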

## Results and Models

### RetinaNet (PVTv1)

| Backbone   | Lr schd | Mem (GB) | box AP | Config | Download     |
| :--------- | :-----: | :------: | :----: | :----: | :----------- |
| PVT-Tiny   | 12e     | 8.5      | 36.6   | config | model \| log |
| PVT-Small  | 12e     | 14.5     | 40.4   | config | model \| log |
| PVT-Medium | 12e     | 20.9     | 41.7   | config | model \| log |

### RetinaNet (PVTv2)

| Backbone | Lr schd | Mem (GB) | box AP | Config | Download     |
| :------- | :-----: | :------: | :----: | :----: | :----------- |
| PVTv2-B0 | 12e     | 7.4      | 37.1   | config | model \| log |
| PVTv2-B1 | 12e     | 9.5      | 41.2   | config | model \| log |
| PVTv2-B2 | 12e     | 16.2     | 44.6   | config | model \| log |
| PVTv2-B3 | 12e     | 23.0     | 46.0   | config | model \| log |
| PVTv2-B4 | 12e     | 17.0     | 46.3   | config | model \| log |
| PVTv2-B5 | 12e     | 18.7     | 46.1   | config | model \| log |
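
Each row's config corresponds to one of the `.py` files in this folder. As a rough orientation, the sketch below shows how such a config typically swaps the RetinaNet ResNet-50 baseline's backbone for a PVT one in MMDetection's Python config style; the base-config paths, field names, and values are illustrative assumptions, so consult the actual config file for the released settings.

```python
# Hypothetical sketch of a retinanet_pvt-*_fpn_1x_coco.py config (not the released file).
_base_ = [
    '../_base_/models/retinanet_r50_fpn.py',
    '../_base_/datasets/coco_detection.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py',
]
model = dict(
    backbone=dict(
        _delete_=True,                     # drop the ResNet-50 backbone settings
        type='PyramidVisionTransformer',   # assumed PVT-Tiny depths: 2 blocks per stage
        num_layers=[2, 2, 2, 2]),
    neck=dict(in_channels=[64, 128, 320, 512]))  # assumed PVT stage widths fed to FPN
# PVT-style backbones are usually trained with AdamW rather than SGD.
optimizer = dict(_delete_=True, type='AdamW', lr=1e-4, weight_decay=1e-4)
```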

## Citation

@article{wang2021pyramid,
  title={Pyramid vision transformer: A versatile backbone for dense prediction without convolutions},
  author={Wang, Wenhai and Xie, Enze and Li, Xiang and Fan, Deng-Ping and Song, Kaitao and Liang, Ding and Lu, Tong and Luo, Ping and Shao, Ling},
  journal={arXiv preprint arXiv:2102.12122},
  year={2021}
}
@article{wang2021pvtv2,
  title={PVTv2: Improved Baselines with Pyramid Vision Transformer},
  author={Wang, Wenhai and Xie, Enze and Li, Xiang and Fan, Deng-Ping and Song, Kaitao and Liang, Ding and Lu, Tong and Luo, Ping and Shao, Ling},
  journal={arXiv preprint arXiv:2106.13797},
  year={2021}
}