Radar-Camera Fusion Vehicle Detection Based on Feature Weighting and Visual Enhancement
YAN Xiao-feng1, HUO Ke-xin2, LI Xiao-huan2,3, TANG Xin2,3, XU Shao-hua4
1. Guangxi Transportation Science and Technology Group Co., Ltd., Nanning Guangxi 530000, China; 2. Guilin University of Electronic Technology, Guilin Guangxi 541004, China; 3. Guangxi Comprehensive Transportation Big Data Research Institute, Nanning Guangxi 530000, China; 4. Guangxi Beitou IT Innovation Technology Investment Group Co., Ltd., Nanning Guangxi 530000, China
|
|
Abstract To improve vehicle detection accuracy under low-light conditions and meet long-distance detection requirements on expressways, a radar-camera fusion vehicle detection method based on visual enhancement and feature weighting is proposed. First, starting from radar-camera data-layer fusion, the spatial locations of potential targets are characterized with millimeter-wave radar, and the characterization results are used to divide long-distance target regions in the visual images. The images of the divided regions are then reconstructed, detected, and restored to improve the visual detection accuracy of long-distance targets. Next, the radar-camera detection feature layers are fused and modeled. Considering that different layers contribute differently to feature detection, weight parameters for the different feature maps are obtained through model training, and the features of different layers are fused according to these weights to enhance the target feature information. A branch network is then added: convolutional layers with kernels of different sizes extract information at different receptive fields from the feature map, and the branch outputs are fused to obtain stronger image representation ability, thereby improving detection accuracy in low light. Finally, combining the feature-weighted radar-camera framework with the visual enhancement based on millimeter-wave radar spatial preprocessing, a radar-camera fusion detection network based on YOLOv4-tiny is designed and a verification system is built.
The results show that (1) the average precision (AP) of the proposed algorithm in the low-light environment is 20% higher than that of YOLOv4 and 5% higher than that of the radar-camera fusion algorithm RVNet; and (2) in tests of detection performance at different distances, when detecting a target at 120 m, the AP of the proposed algorithm is 73% higher than that of YOLOv4 and 63% higher than that of RVNet. The proposed method thus improves the coverage distance and low-light detection accuracy of vehicle detection in intelligent transportation systems (ITS).
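As a rough illustration of the data-layer fusion step described above (projecting a millimeter-wave radar return into the image and dividing out a candidate long-distance region), a pinhole-camera sketch might look like the following. The intrinsic matrix `K`, the radar-to-camera extrinsics `R` and `t`, and the fixed ROI half-width are illustrative assumptions, not values from the paper:

```python
import numpy as np

def radar_to_pixel(point_radar, K, R, t):
    """Project a radar point (metres, radar frame) into pixel coordinates
    via the rigid transform (R, t) and the camera intrinsic matrix K."""
    p_cam = R @ np.asarray(point_radar, dtype=float) + t
    uvw = K @ p_cam                      # homogeneous image coordinates
    return uvw[:2] / uvw[2]              # perspective division -> (u, v)

def roi_around(u, v, half, width, height):
    """Clip a square region of interest centred on the projected point
    to the image bounds; this is the 'divided area' handed to detection."""
    x0, y0 = max(0, int(u - half)), max(0, int(v - half))
    x1, y1 = min(width, int(u + half)), min(height, int(v + half))
    return x0, y0, x1, y1

# A target 10 m straight ahead of the camera lands on the principal point.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0,   0.0,   1.0]])
u, v = radar_to_pixel([0.0, 0.0, 10.0], K, np.eye(3), np.zeros(3))
roi = roi_around(u, v, 50, 640, 480)
```

In the actual system the extrinsics would come from a joint radar-camera calibration, and the ROI size could be scaled with the radar range measurement.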
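The feature-weighting step can be sketched numerically: each layer's feature map receives a learned scalar weight, the weights are softmax-normalised, and the maps are summed. This is a minimal numpy sketch of that idea; the raw weights would in practice be trainable parameters of the fusion network, which this sketch only assumes:

```python
import numpy as np

def fuse_weighted(feature_maps, raw_weights):
    """Fuse same-shape feature maps by a softmax-normalised weighted sum,
    so layers that contribute more to detection dominate the fused map."""
    w = np.exp(raw_weights - np.max(raw_weights))  # stable softmax
    w = w / w.sum()
    fused = sum(wi * fm for wi, fm in zip(w, feature_maps))
    return fused, w

# Two toy feature maps with equal raw weights fuse to their average.
f1 = np.ones((4, 4))
f2 = 3.0 * np.ones((4, 4))
fused, weights = fuse_weighted([f1, f2], np.array([0.0, 0.0]))
```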
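The multi-branch idea (parallel convolutions with different kernel sizes capturing different receptive fields, then fused) can also be sketched. Here a k×k mean filter stands in for a k×k convolutional layer, purely to show how branch outputs over several receptive fields are combined; the real network would use learned kernels:

```python
import numpy as np

def box_conv(img, k):
    """'Same'-padded k-by-k mean filter: a stand-in for a kxk conv layer."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def multi_branch(img, kernel_sizes=(1, 3, 5)):
    """Run parallel branches with different receptive fields and average
    their outputs, mimicking branch-output fusion."""
    return sum(box_conv(img, k) for k in kernel_sizes) / len(kernel_sizes)

# A constant feature map passes through unchanged, whatever the branches.
out = multi_branch(2.0 * np.ones((6, 6)))
```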
|
Received: 04 April 2022
|
Fund: Supported by the Guangxi Science and Technology Program (Nos. AA22068101, AB21196021)
|
|
|
[1] LI Bin, HOU De-zao, ZHANG Ji-sheng, et al. Study on Conception and Mechanism of Intelligent Vehicle-infrastructure Cooperation[J]. Journal of Highway and Transportation Research and Development, 2020, 37(10): 134-141. (in Chinese)
[2] CEN Yan-qing, SONG Xiang-hui, WANG Dong-zhu, et al. Establishment of Technology System of Smart Expressway[J]. Journal of Highway and Transportation Research and Development, 2020, 37(7): 111-121. (in Chinese)
[3] MIMOUNA A, ALOUANI I, KHALIFA A B, et al. OLIMP: A Heterogeneous Multimodal Dataset for Advanced Environment Perception[J]. Electronics, 2020, 9(4): 560.
[4] LI Yuan. Research on Application of Millimeter Wave Radar in Vehicle-road Coordination System[J]. Industrial Control Computer, 2020, 33(1): 44-46, 50. (in Chinese)
[5] LU J L, TANG S M, WANG J Q, et al. A Review on Object Detection Based on Deep Convolutional Neural Networks for Autonomous Driving[C]//2019 Chinese Control and Decision Conference (CCDC). Nanchang: IEEE, 2019: 5301-5308.
[6] JHA H, LODHI V, CHAKRAVARTY D. Object Detection and Identification Using Vision and Radar Data Fusion System for Ground-based Navigation[C]//2019 6th International Conference on Signal Processing and Integrated Networks (SPIN). Noida: IEEE, 2019.
[7] GAO Ji-dong, JIAO Xin, LIU Quan-zhou, et al. Research on Vehicle Detection Based on Data Fusion of Machine Vision and Millimeter Wave Radar[J]. China Measurement & Test, 2021, 47(10): 33-40. (in Chinese)
[8] LIU W, ANGUELOV D, ERHAN D, et al. SSD: Single Shot MultiBox Detector[C]//European Conference on Computer Vision. Amsterdam: Springer, 2016: 21-37.
[9] KOWOL K, ROTTMANN M, BRACKE S, et al. YOdar: Uncertainty-based Sensor Fusion for Vehicle Detection with Camera and Radar Sensors[C]//Proceedings of the 13th International Conference on Agents and Artificial Intelligence. Vienna: [s. n.], 2021.
[10] CHADWICK S, MADDERN W, NEWMAN P. Distant Vehicle Detection Using Radar and Vision[C]//2019 International Conference on Robotics and Automation (ICRA). Montreal: IEEE, 2019.
[11] CHENG Y W, HU X, LIU Y M. Robust Small Object Detection on the Water Surface Through Fusion of Camera and Millimeter Wave Radar[C]//2021 IEEE/CVF International Conference on Computer Vision (ICCV). Montreal: IEEE, 2021: 15263-15272.
[12] LUO Yu-jie, ZHANG Jian, CHEN Liang, et al. Lightweight Target Detection Algorithm Based on Adaptive Spatial Feature Fusion[J]. Laser & Optoelectronics Progress, 2022, 59(4): 310-320. (in Chinese)
[13] LIU S T, HUANG D, WANG Y H. Learning Spatial Fusion for Single-shot Object Detection[R]. [S. l.]: arXiv e-prints, 2019.
[14] LUO Xiao, YAO Yuan, ZHANG Jin-huan. Unified Calibration Method for Millimeter-wave Radar and Camera[J]. Journal of Tsinghua University (Science and Technology), 2014, 54(3): 289-293. (in Chinese)
[15] WANG T, XIN J M, ZHENG N N. A Method Integrating Human Visual Attention and Consciousness of Radar and Vision Fusion for Autonomous Vehicle Navigation[C]//2011 IEEE Fourth International Conference on Space Mission Challenges for Information Technology. Palo Alto: IEEE, 2011: 192-197.
[16] BOCHKOVSKIY A, WANG C Y, LIAO H Y M. YOLOv4: Optimal Speed and Accuracy of Object Detection[R]. [S. l.]: arXiv e-prints, 2020.
[17] JOHN V, MITA S. RVNet: Deep Sensor Fusion of Monocular Camera and Radar for Image-based Obstacle Detection in Challenging Environments[C]//2019 Pacific-Rim Symposium on Image and Video Technology. Sydney: Springer, 2019: 351-364.
[18] CAESAR H, BANKITI V, LANG A H, et al. nuScenes: A Multimodal Dataset for Autonomous Driving[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 11621-11631.
|
|
|