Unlocking the Depths: Autonomous Robotic Exploration II

Autonomous Robotic Tunnel Exploration: Navigating Junctions with ROS
Autonomous robots explore complex infrastructure such as underground mines, facing challenges like GPS unavailability and hazardous conditions [2][3]. The DARPA Subterranean Challenge highlighted the need for effective path planning and SLAM [4].

The proposed thesis develops an autonomous exploration strategy for tunnels using ROS (Robot Operating System). The strategy builds upon existing approaches and is applicable to a variety of robots. The primary goal is to autonomously explore tunnels and create detailed maps while addressing the challenges of subterranean environments. Subsequent sections outline the objectives and their descriptions.

Picture 1
Methodology Overview

Developing the Approach

A junction, where two or more paths meet, is a critical point in tunnel structures. To effectively explore tunnels, understanding junction types is essential.

Junction Types

Robotic tunnel inspection encounters many junction types, each with its own navigation challenges: crossroad junctions, T-junctions, Y-junctions, and staggered junctions, alongside everyday cases such as left branches, right branches, dead ends, and straight paths. To handle this variety, we introduce the junction synthesizer, a technique that distils the key data points of each junction type so that the robot can navigate and map tunnel environments with precision and adaptability.

Picture 2
Crossroad Junction
Picture 3
T Junction
Picture 4
Y Junction
Picture 5
Staggered Junction
Picture 6
Right Merge Junction
Picture 7
Left Merge Junction
Picture 8
Left Branch
Picture 9
Right Branch
Picture 10
Dead End
Picture 11
Straight Path

Environment Segmentation

The junction synthesizer categorizes the area around the robot into eight regions of interest (ROIs) based on geometry and robotic coordinate systems.
Front (F): +x-axis direction.
Back (B): -x-axis direction.
Left (L): +y-axis direction.
Right (R): -y-axis direction.
Diagonal Front Left (DFL): +x and +y axes.
Diagonal Front Right (DFR): +x and -y axes.
Diagonal Back Left (DBL): -x and +y axes.
Diagonal Back Right (DBR): -x and -y axes.
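The eight regions can be sketched as 45-degree angular sectors around the robot. This is a simplification for illustration only: the hybrid approach described below uses rectangular limits with gaps rather than a pure angular split, but the sketch shows the coordinate conventions (x forward, y left) listed above.

```python
import math

# The eight ROI labels, ordered counter-clockwise starting at the front.
ROI_SECTORS = ["F", "DFL", "L", "DBL", "B", "DBR", "R", "DFR"]

def classify_roi(x, y):
    """Map a 2D lidar point in the robot frame (x forward, y left)
    to one of the eight regions of interest by its bearing angle.
    Each sector spans 45 degrees, centred on its axis."""
    angle = math.degrees(math.atan2(y, x)) % 360.0
    index = int(((angle + 22.5) % 360.0) // 45.0)
    return ROI_SECTORS[index]
```

For example, a point at (1, 0) lies directly ahead and falls in F, while a point at (1, 1) falls in DFL.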


Effectively categorizing the regions of interest (ROIs) required iterating over several segmentation approaches:
Rectangular approach: rough limits for each ROI, but the resulting overlapping regions compromised data quality, rendering the approach ineffective.
Squared regions approach: refined the limits to well-defined square shapes, but introduced significant gaps between ROIs, causing data loss.
Circular regions approach: subdivided the circular area around the robot into 16 sections, but its inward convergence and lack of gaps between ROIs made it unsuitable for junction structures.
Hybrid approach: well-defined limits with appropriate gaps between ROIs; this proved the most effective method for retrieving valuable data in tunnel exploration scenarios.

Picture 13
Rectangular Approach
Picture 14
Squared Regions Approach
Picture 15
Circular Regions Approach
Picture 16
Hybrid Approach

Environment Segmentation Workflow

The Junction Synthesizer creates imaginary regions around the robot based on the hybrid approach, categorizing lidar points falling into these regions. The synthesizer's output serves as input for the junction detector, which identifies various junction types crucial for navigation decisions.

Picture 17
Environment Segmentation Workflow

Environment Identification

The junction detector classifies junction types such as dead ends, cross-road junctions, T-junctions, and merge junctions. ROIs within the LIDAR point cloud are determined based on geometric patterns associated with each junction type. Detection logic and frontier calculation strategies are employed to identify junctions and their frontiers.
Picture 18
Environment Identification Logic Table

Cross-Road Junction Detection: Points present in all diagonal directions indicate a cross-road junction.
T-Junction Detection: Points in the front and diagonals without rear or side points indicate a T-junction.
Left/Right Merge Junction Detection: Points on one side without rear or front points indicate a merge junction.
Left/Right Branch Detection: Points on one side and front without rear or opposite side points indicate a branch.
Dead End Detection: Points in all directions except the back indicate a dead end.
Straight Path Detection: Points on both sides without front or rear points indicate a straight path.
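The detection rules above can be sketched as checks on the set of occupied ROIs (regions where the lidar sees walls). This is an illustrative rendering of the logic table, not the thesis implementation; in particular, the mapping of which occupied side implies a left or right label is an assumption.

```python
def detect_junction(occupied):
    """Classify the junction type from the set of ROI labels in which
    lidar points (walls) were found. Rule order matters: more specific
    patterns are checked first."""
    occupied = set(occupied)
    if {"F", "L", "R"} <= occupied and "B" not in occupied:
        return "dead end"                     # walls everywhere except behind
    if {"DFL", "DFR", "DBL", "DBR"} <= occupied and not occupied & {"F", "B", "L", "R"}:
        return "crossroad junction"           # only the four corners are walls
    if {"F", "DFL", "DFR"} <= occupied and not occupied & {"B", "L", "R"}:
        return "T-junction"                   # wall ahead, open left/right
    if {"L", "R"} <= occupied and not occupied & {"F", "B"}:
        return "straight path"                # walls on both sides only
    if {"F", "L"} <= occupied and not occupied & {"B", "R"}:
        return "right branch"                 # assumed: opening on the right
    if {"F", "R"} <= occupied and not occupied & {"B", "L"}:
        return "left branch"                  # assumed: opening on the left
    if "L" in occupied and not occupied & {"F", "B"}:
        return "right merge junction"         # assumed: wall on the left only
    if "R" in occupied and not occupied & {"F", "B"}:
        return "left merge junction"          # assumed: wall on the right only
    return "unknown"
```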

Frontiers Calculation

Frontier calculation depends on the data points present within the different regions of interest (ROIs). In a crossroad junction, data points across the diagonal ROIs yield potential frontiers to the front, left, and right. In a T-junction, data points in the diagonal ROIs indicate potential frontiers on the left and right. Left and right merge junctions derive their frontiers from the arrangement of data points within the corresponding side ROIs. In a dead end, the absence of data points in the back ROI means the robot's current position becomes the frontier. The diagrams below illustrate how frontier determination follows the geometric and spatial characteristics of each junction type.
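As one concrete case, the crossroad frontiers can be approximated from the centroids of the four diagonal ROIs. This is a simplified geometric sketch of the idea in the figures, under the assumption that a frontier lies roughly midway between two neighbouring corner walls; it is not the thesis's exact computation.

```python
def crossroad_frontiers(dfl, dfr, dbl, dbr):
    """Estimate frontier points for a crossroad junction. Each argument
    is the (x, y) centroid of the lidar points in that diagonal ROI
    (robot frame: x forward, y left)."""
    mid = lambda a, b: ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    return {
        "front": mid(dfl, dfr),  # between the two front corners
        "left":  mid(dfl, dbl),  # between the two left corners
        "right": mid(dfr, dbr),  # between the two right corners
    }
```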

Picture 19
Frontiers Calculation in CrossRoad Junction Method
Picture 20
Frontiers Calculation in CrossRoad Junction Rviz
Picture 21
Frontiers Calculation in Left Merge Junction Method
Picture 22
Frontiers Calculation in Left Merge Junction Rviz
Picture 23
Frontiers Calculation in Right Merge Junction Method
Picture 24
Frontiers Calculation in Right Merge Junction Rviz
Picture 25
Frontiers Calculation in T Junction Method
Picture 26
Frontiers Calculation in Dead End Method

Environment Identification Workflow

The junction detector processes synthesizer data, identifies junction types, calculates frontiers, and stores junction information in a dictionary. Dead-end detection triggers a flag for navigation purposes.

This streamlined workflow enables the robot to effectively navigate tunnel environments using lidar data analysis and junction identification.

Picture 27
Environment Identification Workflow

Autonomous Exploration

Similar to the junction detector, autonomous exploration uses data from the junction synthesizer to drive the robot's autonomous navigation within the tunnel. It employs a priority-based depth-first search algorithm to maneuver out of dead ends and continue exploration. An overview of the autonomous exploration is shown in the figure below.
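The priority-based depth-first search can be sketched on an abstract junction graph: the robot visits a junction, pushes its unexplored frontiers in priority order, and backtracks past dead ends (junctions with no unvisited frontiers). The direction priority and the graph encoding here are illustrative assumptions, not the thesis implementation.

```python
def explore(start, frontiers_at, priority=("front", "left", "right")):
    """Priority-based depth-first exploration sketch.
    `frontiers_at` maps a junction id to {direction: next junction id};
    returns the order in which junctions are visited."""
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        # Push lower-priority frontiers first so the highest-priority
        # direction ends up on top of the stack and is explored next.
        for direction in reversed(priority):
            nxt = frontiers_at.get(node, {}).get(direction)
            if nxt is not None and nxt not in visited:
                stack.append(nxt)
    return order
```

With a small example graph, the robot explores the front branch of "A" first, hits the dead end at "B", then backtracks to take the left branch.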

Picture 28
Autonomous Exploration Overview

Case I: Straight Path

The robot continues moving forward if the front is empty while the left and right are occupied, as indicated in the junction detection logic table. It maintains a central position within the branch until this condition no longer holds.
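Keeping a central position can be sketched as a simple lateral correction that steers toward the side with more clearance. The proportional gain is an illustrative tuning parameter, not a value from the thesis.

```python
def centering_correction(left_dist, right_dist, gain=0.5):
    """Lateral correction for holding the centre of a straight branch:
    positive output steers left, negative steers right. `left_dist` and
    `right_dist` are the lidar clearances to each side wall in metres."""
    return gain * (left_dist - right_dist)
```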

Case II: Strategic Movement

In the event of a straight path failure, the robot executes strategic movement to gain better spatial awareness. This movement is divided into two cases:

Strategic Movement without Front

Adjusts the robot's position slightly forward by utilizing data from diagonal ROIs, specifically designed for left and right merge junctions.

Picture 29
Navigation Goals in Strategic Movement without front Method
Picture 30
Navigation Goals in Strategic Movement without front Rviz

Strategic Movement with Front

Similar to the above case but considers data from the front ROI, primarily for T-junctions and dead ends.

Picture 31
Navigation Goals in Strategic Movement with front Method
Picture 32
Navigation Goals in Strategic Movement with front Rviz

Case III: Branch

The branch case moves the robot forward into the branch when the left- or right-branch condition is met. It further divides into left and right branch cases:

Left Branch Case

Moves the robot forward into the left branch by analyzing data from the front and diagonal back left ROI.

Picture 33
Navigation Goals in Left Branch Method
Picture 34
Navigation Goals in Left Branch Rviz

Right Branch Case

Moves the robot forward into the right branch by analyzing data from the front and diagonal back right ROI.

Picture 35
Navigation Goals in Right Branch Method
Picture 36
Navigation Goals in Right Branch Rviz

Point-to-Point Navigation

The calculated frontiers from different cases guide the robot's movement through point-to-point navigation, directing it toward the identified points.
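A single step of point-to-point motion toward a frontier can be sketched as turning toward the goal and advancing a fixed increment. This is a kinematics-free geometric sketch; the actual system would send the frontier as a goal to the ROS navigation stack rather than integrate the pose itself.

```python
import math

def step_toward(pose, goal, step=0.1):
    """One navigation step toward a frontier point. `pose` and `goal`
    are (x, y) in metres; returns the new pose and the remaining
    distance to the goal."""
    dx, dy = goal[0] - pose[0], goal[1] - pose[1]
    dist = math.hypot(dx, dy)
    if dist <= step:
        return goal, 0.0                      # close enough: snap to goal
    heading = math.atan2(dy, dx)
    new = (pose[0] + step * math.cos(heading),
           pose[1] + step * math.sin(heading))
    return new, dist - step
```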

This comprehensive exploration strategy enables the robot to autonomously navigate through complex tunnel environments, adjusting its movement based on junction type and spatial constraints.

Environment Setup

The environment setup for our robotics project encompassed both simulated and real-world testing scenarios. In the simulated environment, we utilized Gazebo, a powerful 3D simulation tool, to create dynamic environments for testing our algorithms and robot behaviors. Gazebo facilitated realistic simulations of robots like the Clearpath Jackal and Turtlebot3, providing a controlled setting for evaluating performance. ROS, the Robot Operating System, served as the backbone for communication and control, enabling seamless integration between components. Rviz, another ROS tool, allowed us to visualize robot models and sensor data, aiding in debugging and analysis. In the real-world lab setup, we constructed a tunnel-like atmosphere using cardboard boxes to mimic genuine scenarios. The Turtlebot3 was deployed in this setup, and ROS and Rviz played crucial roles in communication, data analysis, and visualization, ensuring consistency between simulated and real environments. This comprehensive setup enabled thorough testing and validation of our approaches across both simulated and real-world conditions, contributing to the robustness and reliability of our research outcomes.

Picture 37
Gazebo: Jackal — Simulation Environment
Picture 38
Real: Turtlebot3 — Lab Environment

Use Cases

To evaluate our use cases, we designed and implemented environments in both simulation and real-world settings. Real Environment I, built from cardboard boxes, replicated all junction cases with a branch width of 85 cm and featured numerous dead ends across a substantial area. Real Environment II, smaller than Real Environment I, encompassed all junction types and included three dead ends, with a branch width of 70 cm. Real Environment III closely resembled Real Environment II but featured two dead ends instead. Simulation Environment I, created from squared boxes in the Gazebo simulator, again replicated all junction cases, with a larger branch width of 100 cm and several dead ends spanning a considerable area. Simulation Environment II, although smaller, mirrored Simulation Environment I, including all junction types and three dead ends, with a branch width of 70 cm. These diverse environments allowed us to thoroughly test and validate our use cases across different scenarios and settings.

Picture 39
Real: Turtlebot3 — T-Junction
Picture 41
Gazebo: Clearpath Jackal — Crossroads Junction
Picture 43
Gazebo: Turtlebot3 — Dead End
Picture 40
Rviz: Turtlebot3 — T-Junction
Picture 42
Rviz: Clearpath Jackal — Crossroads Junction
Picture 44
Rviz: Turtlebot3 — Dead End

