Research


                Fast Convergence of DETR with Spatially Modulated Co-Attention

Venue: ICCV

                Peng Gao1     Minghang Zheng3     Xiaogang Wang2     Jifeng Dai4     Hongsheng Li2

                1Shanghai AI Laboratory,

                2CUHK-SenseTime Joint Laboratory, The Chinese University of Hong Kong

                3Peking University     4SenseTime Research

                1155102382@link.cuhk.edu.hk     hsli@ee.cuhk.edu.hk



                Abstract:

The recently proposed Detection Transformer (DETR) model successfully applies the Transformer to object detection and achieves performance comparable to two-stage object detection frameworks such as Faster R-CNN. However, DETR suffers from slow convergence: training DETR [4] from scratch requires 500 epochs to reach high accuracy. To accelerate its convergence, we propose a simple yet effective scheme for improving the DETR framework, namely the Spatially Modulated Co-Attention (SMCA) mechanism. The core idea of SMCA is to conduct location-aware co-attention in DETR by constraining co-attention responses to be high near initially estimated bounding box locations. Our proposed SMCA increases DETR’s convergence speed by replacing the original co-attention mechanism in the decoder while keeping all other operations in DETR unchanged. Furthermore, by integrating multi-head and scale-selection attention designs into SMCA, our fully fledged SMCA achieves better performance than DETR with a dilated convolution-based backbone (45.6 mAP at 108 epochs vs. 43.3 mAP at 500 epochs). We perform extensive ablation studies on the COCO dataset to validate SMCA.
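The core idea above — biasing decoder cross-attention toward an initially estimated box location — can be sketched as adding a Gaussian-like spatial prior to the attention logits before the softmax. The sketch below is a minimal illustration for a single query and a single head; the function name `smca_weights` and the scalar-width Gaussian are illustrative assumptions, not the paper's exact multi-head, per-axis formulation.

```python
import numpy as np

def smca_weights(logits, center, scale, H, W):
    """Spatially modulated attention for one query over an H x W feature map.

    logits: (H*W,) raw co-attention logits for this query
    center: (cy, cx) initially estimated box center in feature-map coords
    scale:  width of the Gaussian prior (illustrative single scalar)
    """
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    cy, cx = center
    # Gaussian spatial prior in log space: high near the estimated center,
    # strongly negative far away, so post-softmax weights concentrate there.
    prior = -(((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * scale ** 2))
    modulated = logits + prior.reshape(-1)
    # Numerically stable softmax over all spatial locations.
    e = np.exp(modulated - modulated.max())
    return e / e.sum()
```

With uniform (zero) logits, the resulting weights peak exactly at the estimated center, which is the intended effect of the spatial modulation: the decoder is steered to attend near its own coarse box prediction rather than scanning the whole image.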


                comm@pjlab.org.cn

Floors 37-38, West Bund International AI Tower, 701 Yunjin Road, Xuhui District, Shanghai

