Intelligent Navigation of Autonomous Maritime Robots (Work in Progress)
Published:
Brief: While reinforcement learning offers the potential for continual learning and adaptability in complex scenarios, its application to real-world robotics faces significant challenges. Unlike in simulation, physical platforms struggle to collect a diverse corpus of training data due to critical safety risks and the inherent constraints of operating in a dynamic, partially observable environment. Our work draws inspiration from the human capability to fuse and exploit multiple sensing modalities, construct comprehensive models of how the world operates, and leverage those models to navigate challenging and often unpredictable environments. This project is an overview of how unmanned vehicles (ground, air, and surface) can exploit a world model constructed through multimodal perception to learn near-optimal policies for guidance and control. A key aspect of the approach is learning from imagination, in which the world model simulates future trajectories, enabling the agent to anticipate potential risks before encountering them in the real world (a conceptual sketch appears after the contributions below). Our ongoing work and long-term vision is to evolve the traditional sense-plan-act framework into a more intuitive, cognitively inspired sense-imagine-act model. See Dr. Elena Shrestha's presentation of our research.
Role: Graduate Research Assistant, Field Robotics Group
Contributions:
- Developed a URDF model for the Heron Unmanned Surface Vehicle (USV) in ROS2 and Gazebo Garden, creating a detailed simulation environment for testing autonomous marine systems in the Marine Hydrodynamics Lab (see the launch-file sketch after this list).
- Prepared the Heron USV for real-world deployment: configured the battery, electrical, and mechanical systems; 3D-printed custom sensor mounts; and integrated LIDAR, camera, IMU, GPS, and odometry sensors through ROS for synchronized data acquisition (see the time-synchronization sketch after this list).
- Designed and implemented an obstacle avoidance algorithm in Python and C++, refined it through extensive testing in simulated and real-world environments, and tuned the USV's control systems for autonomous navigation (a reactive-avoidance sketch appears after this list).
- Conducted real-world testing under varying wave conditions, collected sensor data (LIDAR, camera, odometry, IMU), and replicated the data in simulation to analyze the stability and accuracy of autonomous navigation during tests such as “straight line” and “2 buoys.”
- Engineered sensor data fusion by aligning the laser scan (`laser/scan`) and filtered odometry (`odometry/filtered`) frames, ensuring accurate timestamp synchronization and TF transformations between frames for SLAM analysis in RViz using the Hector SLAM package (see the tf2 sketch after this list).
- Currently designing a U-Net model for image segmentation to improve the USV's perception capabilities, post-processing the collected data for reinforcement learning training, and configuring the camera in both the URDF and on the physical vehicle for precise control and navigation (a minimal U-Net sketch appears after this list).
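
The "learning from imagination" idea from the brief can be illustrated with a compact sketch. The PyTorch snippet below is a conceptual, hypothetical illustration (the module choices, latent size, and GRU dynamics are assumptions for exposition, not our actual architecture): a learned latent dynamics model rolls out imagined trajectories under the current policy, and the accumulated predicted reward stays differentiable, so the policy can be improved without risky real-world rollouts.

```python
import torch
import torch.nn as nn

# Conceptual sketch of "learning from imagination": a learned world model
# rolls out imagined latent trajectories so the policy can be evaluated,
# and risks anticipated, without touching the physical vehicle.
# Module choices and sizes below are illustrative assumptions.

LATENT, ACTION = 32, 2                      # e.g., thrust and steering for a USV

dynamics = nn.GRUCell(ACTION, LATENT)       # z_{t+1} = f(z_t, a_t)
reward_head = nn.Linear(LATENT, 1)          # predicted reward at each step
policy = nn.Sequential(nn.Linear(LATENT, 64), nn.Tanh(), nn.Linear(64, ACTION))

def imagine(z0: torch.Tensor, horizon: int = 15) -> torch.Tensor:
    """Roll the world model forward from latent state z0 under the current
    policy and accumulate predicted reward over the imagined trajectory."""
    z, total_reward = z0, torch.zeros(z0.shape[0], 1)
    for _ in range(horizon):
        a = torch.tanh(policy(z))           # action from the current policy
        z = dynamics(a, z)                  # imagined next latent state
        total_reward = total_reward + reward_head(z)
    return total_reward                     # differentiable w.r.t. the policy

# Example: score an imagined 15-step trajectory from an initial latent state.
score = imagine(torch.zeros(1, LATENT))
```

Because the whole rollout stays inside the learned model, gradients of `score` can flow back into the policy, which is what lets the vehicle "practice" risky maneuvers safely.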
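As referenced in the first contribution, a URDF-based simulation setup in ROS2 typically starts from a launch file. The Python launch sketch below is a minimal, hypothetical example (the file path, model name, and package layout are placeholders, not the project's actual files) that publishes the robot description and spawns the model into Gazebo via ros_gz_sim:

```python
# Hypothetical ROS2 Python launch file for the Heron URDF; the file path,
# model name, and package layout are placeholders, not the project's files.
import os
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    urdf_path = os.path.join(os.getcwd(), 'urdf', 'heron.urdf')  # assumed path
    with open(urdf_path) as f:
        robot_description = f.read()

    return LaunchDescription([
        # Publishes TF frames derived from the URDF's links and joints.
        Node(
            package='robot_state_publisher',
            executable='robot_state_publisher',
            parameters=[{'robot_description': robot_description}],
        ),
        # Spawns the model into a running Gazebo (Garden) instance via ros_gz_sim.
        Node(
            package='ros_gz_sim',
            executable='create',
            arguments=['-topic', 'robot_description', '-name', 'heron'],
        ),
    ])
```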
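For the synchronized data acquisition mentioned above, a common pattern is approximate time synchronization with message_filters. This is a minimal rclpy sketch under assumed topic names (`/scan`, `/camera/image_raw`, `/imu/data`):

```python
# Minimal rclpy sketch of synchronized multi-sensor acquisition with
# message_filters; the topic names are assumptions.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, Imu, LaserScan
from message_filters import Subscriber, ApproximateTimeSynchronizer

class SyncedAcquisition(Node):
    def __init__(self):
        super().__init__('synced_acquisition')
        scan = Subscriber(self, LaserScan, '/scan')
        image = Subscriber(self, Image, '/camera/image_raw')
        imu = Subscriber(self, Imu, '/imu/data')
        # Group messages whose header stamps fall within 50 ms of each other.
        sync = ApproximateTimeSynchronizer([scan, image, imu],
                                           queue_size=10, slop=0.05)
        sync.registerCallback(self.on_synced)

    def on_synced(self, scan, image, imu):
        stamp = scan.header.stamp
        self.get_logger().info(f'synced frame at t={stamp.sec}.{stamp.nanosec:09d}')

def main():
    rclpy.init()
    rclpy.spin(SyncedAcquisition())

if __name__ == '__main__':
    main()
```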
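The obstacle avoidance contribution can be illustrated with a deliberately simple reactive baseline: steer toward the farthest clear LaserScan reading. This is an illustrative sketch, not the algorithm deployed on the Heron:

```python
# Illustrative reactive obstacle avoidance over a LaserScan: steer toward
# the farthest clear reading. A deliberately simple baseline, not the
# algorithm deployed on the Heron.
import math

def avoidance_heading(ranges, angle_min, angle_increment, safe_dist=2.0):
    """Return a steering heading (rad) toward the farthest finite reading
    beyond safe_dist, or None if every direction is blocked."""
    best_idx, best_range = None, 0.0
    for i, r in enumerate(ranges):
        if math.isfinite(r) and r > safe_dist and r > best_range:
            best_idx, best_range = i, r
    if best_idx is None:
        return None                      # fully blocked: stop or reverse
    return angle_min + best_idx * angle_increment

# Example: a fake 5-beam scan spanning -90 to +90 degrees.
heading = avoidance_heading([0.8, 3.5, float('inf'), 4.2, 1.0],
                            angle_min=-math.pi / 2,
                            angle_increment=math.pi / 4)
print(heading)  # pi/4: the 4.2 m beam is the farthest finite clear reading
```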
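Aligning the laser and odometry frames for Hector SLAM ultimately comes down to publishing and verifying TF transforms with consistent timestamps. The rclpy sketch below checks that a transform between two assumed frame names (`laser`, `odom`) is available before running SLAM analysis:

```python
# rclpy sketch that verifies a TF transform between assumed frame names
# ('laser' and 'odom') is available before running SLAM analysis.
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros import Buffer, TransformListener, TransformException

class FrameCheck(Node):
    def __init__(self):
        super().__init__('frame_check')
        self.buffer = Buffer()
        self.listener = TransformListener(self.buffer, self)
        self.timer = self.create_timer(1.0, self.check)

    def check(self):
        try:
            # Latest available transform taking laser-frame points into odom.
            t = self.buffer.lookup_transform('odom', 'laser', Time())
            self.get_logger().info(
                f'laser->odom translation: {t.transform.translation}')
        except TransformException as e:
            self.get_logger().warn(f'transform not available yet: {e}')

def main():
    rclpy.init()
    rclpy.spin(FrameCheck())

if __name__ == '__main__':
    main()
```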
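Finally, for the ongoing U-Net perception work, here is a minimal single-level U-Net sketch in PyTorch; the channel counts, depth, and two-class output are illustrative assumptions rather than the model under development:

```python
# Minimal single-level U-Net sketch in PyTorch; channel counts, depth,
# and the two-class output are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc = conv_block(3, 16)                    # encoder
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(16, 32)                   # bottleneck
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)                   # 16 skip + 16 upsampled
        self.head = nn.Conv2d(16, n_classes, 1)         # per-pixel class logits

    def forward(self, x):
        e = self.enc(x)                                 # skip features
        u = self.up(self.mid(self.down(e)))             # down, process, up
        return self.head(self.dec(torch.cat([u, e], 1)))  # skip connection

logits = TinyUNet()(torch.randn(1, 3, 128, 128))        # -> (1, 2, 128, 128)
```

The skip connection (`torch.cat`) is what distinguishes a U-Net from a plain encoder-decoder: it restores fine spatial detail lost during downsampling, which matters for crisp water/obstacle boundaries.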
[GitHub][Publication][Slide]
Skills: ROS, ROS2, RViz, Gazebo, SLAM (Hector SLAM), Sensor Fusion (LIDAR, Camera, IMU, GPS, Odometry), Obstacle Avoidance, Autonomous Navigation, PyTorch, TensorFlow, U-Net, OpenCV, PID, Python, C++, Linux, Bash/Shell Scripting, Git, Docker, Microcontrollers, SolidWorks, 3D Printing
Contributors' Acknowledgement: