Enhancing Vision-Based SLAM through Shadow Removal Processing

Published:

Advisor: Prof. Maani Ghaffari



Brief: This research addresses the challenge of shadow interference in vision-based Simultaneous Localization and Mapping (SLAM), which is critical for applications in robotics and augmented reality. Shadows in visual data can hinder traditional SLAM systems, leading to inaccuracies in map construction and object localization. This work proposes an approach that detects and removes shadows from real-time video feeds as a preprocessing step, thereby improving the accuracy and reliability of SLAM systems.
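
To make the preprocessing idea concrete, here is a minimal sketch of where a shadow-removal stage sits relative to the SLAM frontend. The `remove_shadows` function and the `slam.track(image, timestamp)` interface are placeholders for illustration, not the exact implementation used in this project.

```python
import cv2

def preprocess_and_track(video_path, slam, remove_shadows):
    """Apply shadow-removal preprocessing to each frame before SLAM tracking.

    `slam` is assumed to expose a track(image, timestamp) method, and
    `remove_shadows` is any image-to-image shadow-removal function
    (e.g., a learned model such as SpA-Former or a color-space heuristic).
    """
    cap = cv2.VideoCapture(video_path)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        timestamp = cap.get(cv2.CAP_PROP_POS_MSEC) / 1000.0
        clean = remove_shadows(frame)   # shadow-free frame
        slam.track(clean, timestamp)    # feed the SLAM frontend
    cap.release()
```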

Role: Robotics Researcher

Results

- Collaborated with three graduate students to enhance vision-based SLAM systems by integrating shadow-removal preprocessing, improving object detection and mapping for unmanned ground vehicles (UGVs) in dynamic environments.
- Defined and addressed a research problem centered on the limitations of SLAM systems under shadows and changing illumination, testing algorithms such as SpA-Former and LAB color-space methods on the KITTI and FinnForest datasets (a simplified LAB sketch follows this list).
- Integrated and validated shadow-removal techniques within the ORB-SLAM2 pipeline, significantly improving SLAM accuracy and feature detection in complex, real-world environments.
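
As a reference for the LAB color-space method mentioned above, the sketch below flags likely shadow pixels by thresholding the lightness (L) channel; the actual thresholds and post-processing used in the project may differ.

```python
import cv2
import numpy as np

def shadow_mask_lab(bgr_image, k=1.0):
    """Detect likely shadow pixels via the L channel of the LAB color space.

    Pixels whose lightness falls more than `k` standard deviations below
    the frame's mean lightness are flagged as shadow. `k` is a tunable
    threshold chosen here for illustration.
    """
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    L = lab[:, :, 0].astype(np.float32)
    mask = (L < L.mean() - k * L.std()).astype(np.uint8) * 255
    # Remove small speckles with a morphological opening.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```

The resulting mask can then drive shadow compensation (e.g., brightening or inpainting masked regions) before the frame is passed to feature extraction.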

[GitHub][Publication]

Skills: SLAM, PyTorch, Python, Linux, Bash/Shell Scripting, Git
Contributors' Acknowledgement: Hanxi Wan, Kanisius Kusumadjaja, Seung Hun Lee, zhangbaijin/SpA-Former-shadow-removal