The proposed thesis develops an autonomous exploration strategy for tunnels using ROS (Robot Operating System). The strategy builds upon existing approaches applicable to a variety of robots. The primary goal is to autonomously explore tunnels and create detailed maps while addressing the challenges of subterranean environments. The following sections outline the objectives and describe how each is addressed.
Developing the Approach
A junction, where two or more paths meet, is a critical point in tunnel structures. To effectively explore tunnels, understanding junction types is essential.
Junction Types
Robotic tunnel inspection encounters a variety of junctions, each with its own navigation challenges: crossroad junctions, T-junctions, Y-junctions, and staggered junctions, alongside everyday cases such as left branches, right branches, dead ends, and straight paths. To handle this variety, a junction synthesizer technique is introduced. It distills key data points from each junction type, enabling the robotic system to navigate and map tunnel environments with precision and adaptability.
Environment Segmentation
The junction synthesizer categorizes the area around the robot into eight regions of interest (ROIs) based on geometry and the robot's coordinate frame (a short sketch of this classification follows the list):
Front (F): +x-axis direction.
Back (B): -x-axis direction.
Left (L): +y-axis direction.
Right (R): -y-axis direction.
Diagonal Front Left (DFL): +x and +y axes.
Diagonal Front Right (DFR): +x and -y axes.
Diagonal Back Left (DBL): -x and +y axes.
Diagonal Back Right (DBR): -x and -y axes.
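The exact limits of each region depend on the segmentation approach discussed below; as a minimal sketch, assuming each ROI simply spans a 45-degree sector centred on its direction, a lidar point can be assigned to a region from its bearing in the robot frame (illustrative Python):

import math

# The eight regions of interest, ordered counter-clockwise starting at the front.
ROIS = ["F", "DFL", "L", "DBL", "B", "DBR", "R", "DFR"]

def classify_point(x, y):
    # Assign a point (x, y) in the robot frame to one of the eight ROIs,
    # assuming each ROI covers a 45-degree sector centred on its direction.
    bearing = math.degrees(math.atan2(y, x))           # 0 = front (+x), 90 = left (+y)
    sector = int(((bearing + 22.5) % 360.0) // 45.0)   # index of the 45-degree sector
    return ROIS[sector]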
Approaches
Exploring tunnel environments requires a segmentation approach that categorizes the regions of interest (ROIs) effectively. The rectangular approach was employed first, defining rough limits for each ROI; however, the resulting overlap between regions compromised data quality and rendered the approach ineffective. The squared regions approach refined the limits to square shapes but introduced large gaps between ROIs, and the resulting data loss made it unsuitable. The circular regions approach subdivided the circular area around the robot into 16 sections, but the sections converge toward the robot and leave no gaps between ROIs, making it ill-suited to junction structures. The hybrid approach emerged as the optimal solution: it features well-defined limits with appropriate gaps between ROIs and proved the most effective method for retrieving valuable data in tunnel exploration scenarios.
Environment Segmentation Workflow
The Junction Synthesizer creates imaginary regions around the robot based on the hybrid approach, categorizing lidar points falling into these regions. The synthesizer's output serves as input for the junction detector, which identifies various junction types crucial for navigation decisions.
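A minimal sketch of this workflow, building on the classify_point sketch above and assuming the scan arrives as a standard sensor_msgs/LaserScan already expressed in the robot frame; the actual synthesizer additionally applies the hybrid limits and gaps of each region:

def synthesize(scan, max_range=3.0):
    # Count lidar returns per ROI from a sensor_msgs/LaserScan (robot frame).
    # Reuses ROIS and classify_point from the earlier sketch; the actual
    # synthesizer also applies the hybrid limits and gaps of each region.
    counts = {roi: 0 for roi in ROIS}
    for i, r in enumerate(scan.ranges):
        if not (scan.range_min < r < min(scan.range_max, max_range)):
            continue                                    # skip invalid or distant returns
        angle = scan.angle_min + i * scan.angle_increment
        x, y = r * math.cos(angle), r * math.sin(angle)
        counts[classify_point(x, y)] += 1
    return counts                                       # passed on to the junction detector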
Environment Identification
The junction detector classifies junction types such as dead ends, cross-road junctions, T-junctions, and merge junctions. Each type is recognized from the geometric pattern its ROIs form within the lidar point cloud, and detection logic together with frontier calculation strategies identifies junctions and their frontiers (a sketch of the detection logic follows the list).
Cross-Road Junction Detection: Points present in all diagonal directions indicate a cross-road junction.
T-Junction Detection: Points in the front and diagonals without rear or side points indicate a T-junction.
Left/Right Merge Junction Detection: Points on one side without rear or front points indicate a merge junction.
Left/Right Branch Detection: Points on one side and front without rear or opposite side points indicate a branch.
Dead End Detection: Points in all directions except the back indicate a dead end.
Straight Path Detection: Points on both sides without front or rear points indicate a straight path.
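A compact sketch of this decision logic, assuming each ROI is marked occupied once its point count from the synthesizer reaches a threshold; the threshold and the exact mapping from occupancy patterns to junction labels are illustrative rather than the thesis's final rule set:

def detect_junction(counts, threshold=10):
    # Map per-ROI point counts to a junction label; threshold and rules illustrative.
    occ = {roi: n >= threshold for roi, n in counts.items()}
    diagonals = occ["DFL"] and occ["DFR"] and occ["DBL"] and occ["DBR"]
    if diagonals and not (occ["F"] or occ["B"] or occ["L"] or occ["R"]):
        return "cross_road"          # walls only at the four corners
    if occ["F"] and diagonals and not (occ["B"] or occ["L"] or occ["R"]):
        return "t_junction"          # wall ahead, openings left and right
    if occ["F"] and occ["L"] and occ["R"] and not occ["B"]:
        return "dead_end"            # walls everywhere except behind
    if occ["L"] and occ["R"] and not (occ["F"] or occ["B"]):
        return "straight"            # walls on both sides, open ahead and behind
    if occ["F"] and occ["R"] and not (occ["B"] or occ["L"]):
        return "left_branch"         # opening on the left (side assignment assumed)
    if occ["F"] and occ["L"] and not (occ["B"] or occ["R"]):
        return "right_branch"        # opening on the right (side assignment assumed)
    if occ["R"] and not (occ["F"] or occ["B"] or occ["L"]):
        return "left_merge"          # only the right wall is seen (assumed)
    if occ["L"] and not (occ["F"] or occ["B"] or occ["R"]):
        return "right_merge"
    return "unknown"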
Frontier Calculation
Frontiers are calculated for each junction scenario from the data points in the relevant regions of interest (ROIs). In a crossroad junction, the presence of data points across the diagonal ROIs signifies potential frontiers in the front, left, and right directions. In a T-junction, data points in the diagonal ROIs indicate potential frontiers on the left and right. Left and right merge junctions are handled similarly, with the arrangement of data points within specific ROIs determining potential frontiers in the relevant directions. In a dead-end scenario, the absence of data points in the back ROI implies that the robot's current position is the frontier. These calculations, visualized through diagrams, show how frontier determination follows the geometric and spatial characteristics of each junction type.
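As a minimal sketch, assuming each open direction receives one candidate frontier placed a fixed step from the robot along that direction; the step distance and the direction sets for merges and branches are illustrative placeholders for the geometric computation described above:

# Hypothetical step distance and direction sets; the thesis derives frontiers
# from the actual ROI point geometry at each junction.
FRONTIER_DIRECTIONS = {
    "cross_road": ["F", "L", "R"],
    "t_junction": ["L", "R"],
    "left_merge": ["F", "L"],
    "right_merge": ["F", "R"],
    "left_branch": ["F", "L"],
    "right_branch": ["F", "R"],
    "straight": ["F"],
    "dead_end": [],                 # the robot's current pose is the frontier
}

UNIT = {"F": (1.0, 0.0), "L": (0.0, 1.0), "R": (0.0, -1.0)}

def compute_frontiers(junction_type, step=1.0):
    # Return candidate frontier points (x, y) relative to the robot.
    return [(step * UNIT[d][0], step * UNIT[d][1])
            for d in FRONTIER_DIRECTIONS.get(junction_type, [])]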
Environment Identification Workflow
The junction detector processes synthesizer data, identifies junction types, calculates frontiers, and stores junction information in a dictionary. Dead-end detection triggers a flag for navigation purposes.
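A minimal sketch of this bookkeeping, with illustrative key names for the junction dictionary and the dead-end flag:

def record_junction(junctions, pose, junction_type, frontiers):
    # Store the detected junction keyed by the robot pose; key names illustrative.
    junctions[pose] = {"type": junction_type, "frontiers": frontiers, "visited": False}
    return junction_type == "dead_end"   # dead-end flag used by the navigation layer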
This streamlined workflow enables the robot to effectively navigate tunnel environments using lidar data analysis and junction identification.
Autonomous Exploration
Similar to the junction detector, autonomous exploration uses data from the junction synthesizer to drive the robot's autonomous navigation within the tunnel. It employs a priority-based depth-first search algorithm to handle dead ends and continue exploration. An overview of the autonomous exploration process is depicted in the figure below.
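A minimal sketch of such a priority-based depth-first loop, assuming frontiers are kept on a stack and that sense_junction and navigate_to are placeholders for the detection and point-to-point navigation steps described below; the priority ordering among open directions is illustrative:

PRIORITY = {"F": 0, "L": 1, "R": 2}      # hypothetical ordering among open directions

def explore(start_pose, sense_junction, navigate_to):
    # Priority-based depth-first exploration. sense_junction() is assumed to
    # return (junction_type, [(direction, pose), ...]) and navigate_to(pose)
    # to move the robot to a frontier (placeholders for the steps below).
    stack, visited = [start_pose], set()
    while stack:
        pose = stack.pop()               # depth-first: newest frontier first
        if pose in visited:
            continue
        navigate_to(pose)
        visited.add(pose)
        junction_type, frontiers = sense_junction()
        if junction_type == "dead_end":
            continue                     # backtrack to the previous frontier on the stack
        # Push lower-priority frontiers first so the highest priority is popped next.
        ordered = sorted(frontiers, key=lambda f: PRIORITY.get(f[0], 99), reverse=True)
        for direction, frontier_pose in ordered:
            if frontier_pose not in visited:
                stack.append(frontier_pose)
    return visited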
Case I: Straight Path
The robot continues moving forward if the front is empty while the left and right are occupied, as indicated in the junction detection logic table. It maintains a central position within the branch until this condition no longer holds.
Case II: Strategic Movement
When the straight-path condition fails, the robot executes a strategic movement to gain better spatial awareness. This movement is divided into two cases (a sketch of the adjustment follows them):
Strategic Movement without Front
Adjusts the robot's position slightly forward by utilizing data from diagonal ROIs, specifically designed for left and right merge junctions.
Strategic Movement with Front
Similar to the above case but considers data from the front ROI, primarily for T-junctions and dead ends.
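A minimal sketch of this adjustment, assuming the forward step is derived from the clearance in the diagonal (and optionally front) ROIs; the margin and the exact computation are illustrative:

import math

def strategic_step(roi_points, use_front=False, margin=0.3):
    # Illustrative forward nudge: step up to the nearest sensed wall minus a
    # safety margin. roi_points maps an ROI name to its list of (x, y) lidar points.
    rois = ["DFL", "DFR"] + (["F"] if use_front else [])
    clearances = [min(math.hypot(px, py) for px, py in roi_points[roi])
                  for roi in rois if roi_points.get(roi)]
    if not clearances:
        return (0.0, 0.0)                          # nothing sensed; stay in place
    step = max(min(clearances) - margin, 0.0)
    return (step, 0.0)                             # goal relative to the robot, straight ahead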
Case III: Branch
The branch case moves the robot forward into the branch when the left- or right-branch condition is met. It is further divided into left and right branch cases:
Left Branch Case
Moves the robot forward into the left branch by analyzing data from the front and diagonal back left ROI.
Right Branch Case
Moves the robot forward into the right branch by analyzing data from the front and diagonal back right ROI.
Point-to-Point Navigation
The calculated frontiers from different cases guide the robot's movement through point-to-point navigation, directing it toward the identified points.
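The exploration strategy does not prescribe a particular motion planner here; as a minimal sketch, assuming the standard ROS move_base action interface is available (an assumption, not necessarily the thesis setup), a frontier can be sent as a navigation goal like this:

import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def go_to_frontier(x, y, frame="odom"):
    # Send a frontier point as a move_base goal; requires rospy.init_node()
    # to have been called and the ROS navigation stack to be running.
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0       # keep the current heading convention
    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()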
This comprehensive exploration strategy enables the robot to autonomously navigate through complex tunnel environments, adjusting its movement based on junction type and spatial constraints.
Environment Setup
The environment setup for our robotics project encompassed both simulated and real-world testing scenarios. In the simulated environment, we used Gazebo, a powerful 3D simulation tool, to create dynamic environments for testing our algorithms and robot behaviors. Gazebo provided realistic simulations of robots such as the Clearpath Jackal and Turtlebot3, giving a controlled setting for evaluating performance. ROS, the Robot Operating System, served as the backbone for communication and control, enabling seamless integration between components. Rviz, another ROS tool, allowed us to visualize robot models and sensor data, aiding debugging and analysis. In the real-world lab setup, we constructed a tunnel-like environment from cardboard boxes to mimic genuine scenarios. The Turtlebot3 was deployed in this setup, and ROS and Rviz again handled communication, data analysis, and visualization, ensuring consistency between the simulated and real environments. This comprehensive setup enabled thorough testing and validation of our approaches across both simulated and real-world conditions, contributing to the robustness and reliability of our research outcomes.
Use Cases
To evaluate our use cases, we designed and implemented environments in both simulation and real-world settings. Real Environment I was built from cardboard boxes and replicated all junction cases with a branch size of 85 cm, featuring numerous dead ends and covering a substantial area. Real Environment II, although smaller than Real Environment I, encompassed all junction types and included three dead ends, with a branch size of 70 cm. Real Environment III closely resembled Real Environment II but featured two dead ends instead. Simulation Environment I, created from square boxes in the Gazebo simulator, again replicated all junction cases, with a larger branch size of 100 cm, several dead ends, and a considerable overall area. Simulation Environment II, albeit smaller, mirrored the features of Simulation Environment I, including all junction types and three dead ends, with a branch size of 70 cm. These diverse environments allowed us to thoroughly test and validate our use cases across different scenarios and settings.