Enhancing Monocular 3D Object Detection in Foggy Conditions: An Adapted MonoCon Approach for Autonomous Vehicles (Unpublished manuscript)

Recommended citation: Do, T., Liu, X., & Swayampakula, R. (2023). Enhancing Monocular 3D Object Detection in Foggy Conditions: An Adapted MonoCon Approach for Autonomous Vehicles. Unpublished manuscript, University of Michigan, Ann Arbor.

  • Abstract: This paper explores advancements in monocular 3D object detection, a pivotal aspect of autonomous vehicle technology. We focus on enhancing detection accuracy and robustness in diverse weather conditions, specifically addressing the challenges posed by foggy scenarios. Building on the MonoCon model, our methodology combines transfer learning, image augmentation, and pre-processing strategies to improve visibility in foggy images. Challenges such as fluctuating Average Precision (AP) values and poor detection of distant or small vehicles in fog are addressed through a revised evaluation strategy and targeted image processing. Results show that AP increased from 7.05% to 17.67% on the normal dataset after extended training, and reached 25.82% under foggy conditions after 300 additional epochs of training combined with CLAHE and blur pre-processing. These findings underscore the model's adaptability and effectiveness in diverse environments.
  • Index Terms: Monocular 3D Object Detection, Autonomous Vehicles, Deep Neural Networks (DNN), Deep Learning in Computer Vision
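The CLAHE-and-blur pre-processing mentioned in the abstract can be sketched in simplified form. This is a minimal illustration using plain NumPy, not the paper's actual pipeline: global histogram equalization stands in for CLAHE (which additionally tiles the image and clips the histogram before equalizing), followed by a small mean blur as a stand-in for Gaussian smoothing. A practical implementation would typically use `cv2.createCLAHE` and `cv2.GaussianBlur` from OpenCV.

```python
import numpy as np

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization on an 8-bit grayscale image.
    CLAHE refines this idea by equalizing per tile and clipping the
    histogram to limit contrast over-amplification."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    # Map each gray level through the normalized CDF.
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255), 0, 255
    ).astype(np.uint8)
    return lut[img]

def mean_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Simple k x k mean blur (stand-in for a Gaussian blur)."""
    pad = k // 2
    padded = np.pad(img.astype(np.float32), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(np.uint8)

# Synthetic low-contrast "foggy" patch: intensities squeezed into a narrow band.
foggy = np.clip(
    np.random.default_rng(0).normal(180, 5, (32, 32)), 0, 255
).astype(np.uint8)
enhanced = mean_blur(equalize_histogram(foggy))
print(foggy.std(), enhanced.std())  # contrast (std) rises after equalization
```

The equalization step spreads the narrow intensity band of a fog-washed image across the full 0–255 range, and the subsequent blur suppresses the noise that contrast stretching amplifies.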
