Monocular 3D Object Detection in Foggy Conditions

Published:

Advisor: Prof. Maani Ghaffari



Brief: This paper investigates improvements to monocular 3D object detection, a capability crucial for autonomous vehicles. The focus is on increasing detection accuracy and robustness across weather conditions, especially fog. The study builds on the MonoCon model, combining transfer learning, image augmentation, and pre-processing techniques to enhance visibility in foggy scenes. It addresses specific challenges such as fluctuating Average Precision (AP) values and unreliable detection of distant or small vehicles in fog by revising the evaluation strategy and applying targeted image processing.
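
As a rough illustration of the pre-processing described above, the sketch below applies contrast enhancement, CLAHE, and a light blur to a foggy frame. It is a minimal sketch only, assuming OpenCV and NumPy; the helper name `enhance_foggy_image` and all parameter values (clip limit, tile size, kernel size) are illustrative, not the project's actual settings.

```python
# Minimal sketch of fog pre-processing: CLAHE + contrast stretch + light blur.
# Parameters are illustrative, not the project's tuned values.
import cv2
import numpy as np

def enhance_foggy_image(bgr: np.ndarray) -> np.ndarray:
    """Boost local contrast of a foggy frame before it reaches the detector."""
    # Work on the luminance channel only, so colors are preserved.
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)

    # CLAHE: contrast-limited adaptive histogram equalization on luminance.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    y = clahe.apply(y)

    # Mild global contrast stretch (alpha) and brightness offset (beta).
    y = cv2.convertScaleAbs(y, alpha=1.2, beta=5)

    enhanced = cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)

    # Light Gaussian blur to suppress noise amplified by equalization.
    return cv2.GaussianBlur(enhanced, (3, 3), 0)
```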

Role: Robotics Researcher

Result

- Collaborated with two graduate students to enhance monocular 3D object detection for autonomous vehicles in foggy conditions, adapting the MonoCon model with transfer learning and targeted image processing (a simplified fine-tuning sketch follows this list).
- Developed and implemented image augmentation strategies (contrast enhancement, CLAHE, and blurring, as sketched above) to improve detection accuracy and robustness in low-visibility environments, raising AP from 7.05% to 25.82%.
- Conducted extensive evaluations on the KITTI dataset, demonstrating improved detection of distant and small objects and strengthening the reliability of autonomous vehicle perception in challenging weather (the AP metric is sketched after this list).
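
The transfer-learning step in the first bullet can be pictured roughly as follows. This is a hedged sketch, not the project's training script: `TinyDetector` and the random tensors below stand in for MonoCon and the foggy KITTI data, which are not reproduced here.

```python
# Hedged sketch of the transfer-learning recipe: reuse clear-weather features,
# freeze the backbone, fine-tune the head on foggy frames.
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Stand-in for MonoCon: a backbone plus a detection head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16, 8, 1)   # e.g. per-pixel box/keypoint outputs

    def forward(self, x):
        return self.head(self.backbone(x))

model = TinyDetector()
# In the real project, the clear-weather checkpoint would be loaded here, e.g.
# model.load_state_dict(torch.load("clear_kitti.pth"), strict=False)

# Freeze the backbone; fine-tune only the head on foggy images.
for p in model.backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

model.train()
foggy_batch = torch.rand(2, 3, 64, 64)      # placeholder for augmented foggy frames
target = torch.rand(2, 8, 64, 64)           # placeholder regression targets
loss = nn.functional.mse_loss(model(foggy_batch), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```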
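For reference, one common KITTI convention averages interpolated precision over 40 recall positions (AP_40). The self-contained snippet below illustrates that metric on a toy precision-recall curve; the project's reported numbers come from its own evaluation pipeline, not this function.

```python
# Illustration of KITTI-style AP with 40 recall positions (AP_40).
import numpy as np

def ap_40(recall: np.ndarray, precision: np.ndarray) -> float:
    """Average interpolated precision sampled at 40 evenly spaced recall points."""
    ap = 0.0
    for r in np.linspace(1.0 / 40, 1.0, 40):      # recall thresholds 0.025 ... 1.0
        mask = recall >= r
        # Interpolated precision: best precision at any recall >= r (0 if unreachable).
        ap += precision[mask].max() if mask.any() else 0.0
    return ap / 40

# Toy precision-recall curve for illustration only.
recall = np.array([0.1, 0.2, 0.4, 0.6, 0.8])
precision = np.array([0.9, 0.85, 0.7, 0.5, 0.3])
print(f"AP_40 = {ap_40(recall, precision):.2%}")
```
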
MonoCon Architecture by 2gunsu

Normal condition test

Foggy condition test

AP trend over training epochs

[GitHub][Publication]

Skills: Leadership, PyTorch, Python, Linux, Bash/Shell Scripting, Git, Docker
Contributors' Acknowledgement: 2gunsu/monocon-pytorch, Minghan Zhu, Xirong Liu, Rahul Swayampakula