Batch Normalization (BN) has become an out-of-box technique to improve deep network training. However, its effectiveness is limited for micro-batch training, i.e., each GPU typically has only 1-2 images for training, which is inevitable for many computer vision tasks, e.g., object detection and semantic segmentation, constrained by memory consumption. To address this issue, we propose Weight Standardization (WS) and Batch-Channel Normalization (BCN) to bring two success factors of BN into micro-batch training: 1) the smoothing effects on the loss landscape and 2) the ability to avoid harmful elimination singularities along the training trajectory. WS standardizes the weights in convolutional layers to smooth the loss landscape by reducing the Lipschitz constants of the loss and the gradients; BCN combines batch and channel normalizations and leverages estimated statistics of the activations in convolutional layers to keep networks away from elimination singularities. We validate WS and BCN on comprehensive computer vision tasks, including image classification, object detection, instance segmentation, video recognition and semantic segmentation. All experimental results consistently show that WS and BCN improve micro-batch training significantly. Moreover, using WS and BCN with micro-batch training is even able to match or outperform the performances of BN with large-batch training.
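
The WS operation described above is straightforward to express in code. Below is a minimal PyTorch sketch, not taken from this repository: the class name `WSConv2d` and the epsilon value are illustrative. It standardizes each convolution kernel to zero mean and unit variance over its fan-in before the convolution is applied, and is typically paired with Group Normalization as in the configs below.

```python
import torch.nn as nn
import torch.nn.functional as F


class WSConv2d(nn.Conv2d):
    """Conv2d whose kernel is standardized per output channel before use.

    Minimal illustration of Weight Standardization (WS): the learnable
    weight itself is left untouched; only the values used in the forward
    convolution are normalized.
    """

    def forward(self, x):
        w = self.weight
        # Mean and std over the fan-in dimensions (in_channels, kH, kW),
        # computed separately for every output channel.
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        std = w.std(dim=(1, 2, 3), keepdim=True) + 1e-5
        return F.conv2d(x, (w - mean) / std, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```

Used as a drop-in replacement for `nn.Conv2d`, e.g. `WSConv2d(64, 128, 3, padding=1)`.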
Faster R-CNN
| Backbone | Style | Normalization | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R-50-FPN | pytorch | GN+WS | 1x | 5.9 | 11.7 | 39.7 | - | config | model \| log |
| R-101-FPN | pytorch | GN+WS | 1x | 8.9 | 9.0 | 41.7 | - | config | model \| log |
| X-50-32x4d-FPN | pytorch | GN+WS | 1x | 7.0 | 10.3 | 40.7 | - | config | model \| log |
| X-101-32x4d-FPN | pytorch | GN+WS | 1x | 10.8 | 7.6 | 42.1 | - | config | model \| log |
Mask R-CNN
| Backbone | Style | Normalization | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R-50-FPN | pytorch | GN+WS | 2x | 7.3 | 10.5 | 40.6 | 36.6 | config | model \| log |
| R-101-FPN | pytorch | GN+WS | 2x | 10.3 | 8.6 | 42.0 | 37.7 | config | model \| log |
| X-50-32x4d-FPN | pytorch | GN+WS | 2x | 8.4 | 9.3 | 41.1 | 37.0 | config | model \| log |
| X-101-32x4d-FPN | pytorch | GN+WS | 2x | 12.2 | 7.1 | 42.1 | 37.9 | config | model \| log |
| R-50-FPN | pytorch | GN+WS | 20-23-24e | 7.3 | - | 41.1 | 37.1 | config | model \| log |
| R-101-FPN | pytorch | GN+WS | 20-23-24e | 10.3 | - | 43.1 | 38.6 | config | model \| log |
| X-50-32x4d-FPN | pytorch | GN+WS | 20-23-24e | 8.4 | - | 42.1 | 38.0 | config | model \| log |
| X-101-32x4d-FPN | pytorch | GN+WS | 20-23-24e | 12.2 | - | 42.7 | 38.5 | config | model \| log |
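
The configs linked in the Config column enable GN+WS throughout the detector by overriding the conv and norm settings of the base Faster/Mask R-CNN configs. The following is a simplified sketch of that kind of override, in the spirit of faster-rcnn_r50_fpn_gn-ws-all_1x_coco.py; the exact fields, pretrained checkpoints, and base-config path are assumptions and should be checked against the actual .py files.

```python
# Illustrative GN+WS override; field values are assumptions, not copied from the repo.
_base_ = '../faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py'

conv_cfg = dict(type='ConvWS')                                  # weight-standardized conv
norm_cfg = dict(type='GN', num_groups=32, requires_grad=True)   # Group Normalization

model = dict(
    backbone=dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg),
    neck=dict(conv_cfg=conv_cfg, norm_cfg=norm_cfg))
```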
Note:
@article{weightstandardization,
  author  = {Siyuan Qiao and Huiyu Wang and Chenxi Liu and Wei Shen and Alan Yuille},
  title   = {Weight Standardization},
  journal = {arXiv preprint arXiv:1903.10520},
  year    = {2019},
}