Enhancing Vision-Based SLAM through Shadow Removal Preprocessing
Published:
Brief: This research addresses shadow interference in Vision-based Simultaneous Localization and Mapping (SLAM), a capability central to robotics and augmented reality applications. Shadows in the visual input can hinder traditional SLAM systems, producing inaccuracies in map construction and object localization. This work proposes an approach that detects and removes shadows from real-time video feeds, thereby improving the accuracy and reliability of SLAM systems.
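A minimal sketch of how such a preprocessing stage could sit in front of a SLAM pipeline is shown below. The OpenCV frame loop is an assumption, and `remove_shadows` and `slam_frontend` are hypothetical placeholders for the shadow-removal model and the SLAM system's per-frame tracking call, not the project's actual code.

```python
import cv2


def run_slam_with_shadow_removal(video_path, remove_shadows, slam_frontend):
    """Apply shadow removal to each frame before handing it to SLAM.

    `remove_shadows(frame) -> frame` and `slam_frontend(frame)` are
    hypothetical callables standing in for the shadow-removal model
    (e.g. SpA-Former or a LAB-based method) and the SLAM tracking step.
    """
    cap = cv2.VideoCapture(video_path)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            clean = remove_shadows(frame)  # preprocessing: suppress shadows
            slam_frontend(clean)           # track / map on the cleaned frame
    finally:
        cap.release()
```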
Role: Robotics Researcher
Results:
- Defined and addressed key limitations of SLAM systems under shadows and varying illumination, evaluating algorithms such as SpA-Former and LAB color-space methods on the KITTI and FinnForest datasets (a LAB-based detection sketch follows this list).
- Collaborated with three graduate students to enhance vision-based SLAM for unmanned ground vehicles (UGVs) by incorporating shadow removal preprocessing, improving object detection and mapping in dynamic environments.
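The sketch below illustrates the LAB color-space idea referenced above: shadow pixels tend to have low lightness (L), so a threshold on the L channel yields a shadow mask that can then be relit. The threshold form, morphology kernel, and gain-based relighting are illustrative assumptions, not the tuned settings used in the project.

```python
import cv2
import numpy as np


def lab_shadow_mask(bgr_frame, k=1.0):
    """Detect likely shadow pixels in the LAB color space.

    Heuristic: shadow regions have low lightness (L), so pixels below
    (mean - k * std) of the L channel are flagged. The threshold form
    is an illustrative choice, not a tuned value.
    """
    lab = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2LAB)
    L = lab[:, :, 0]
    thresh = L.mean() - k * L.std()
    mask = (L.astype(np.float32) <= thresh).astype(np.uint8) * 255
    # Remove speckle with a morphological opening.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)


def relight_shadows(bgr_frame, mask):
    """Brighten masked shadow pixels so lit and shadowed regions have
    comparable mean lightness (simple gain-based illumination correction)."""
    lab = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2LAB).astype(np.float32)
    shadow = mask > 0
    if shadow.any() and (~shadow).any():
        gain = lab[~shadow, 0].mean() / max(lab[shadow, 0].mean(), 1e-6)
        lab[shadow, 0] = np.clip(lab[shadow, 0] * gain, 0, 255)
    return cv2.cvtColor(lab.astype(np.uint8), cv2.COLOR_LAB2BGR)
```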
Skills: SLAM, PyTorch, Python, Linux, Bash/Shell Scripting, Git
Contributors' Acknowledgement: Hanxi Wan, Kanisius Kusumadjaja, Seung Hun Lee, zhangbaijin/SpA-Former-shadow-removal