Humans can navigate complex environments successfully if they have seen the location before. Machine learning can give robots a similar capability for visual navigation. A recent paper on arXiv.org proposes an approach that enables effective navigation in unstructured outdoor environments using only offline data.
Instead of relying on geometric maps, the system builds graph-structured "mental maps". First, the user provides the robot with a picture of the desired destination. The system then learns a function that estimates how many time steps are needed to travel between pairs of observations, embeds past observations into a topological graph, and plans a route through that graph. The approach can be applied in scenarios where GPS-based methods are unavailable, such as last-mile delivery or autonomous warehouse inspection.
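The pipeline described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: `predicted_steps` is a hypothetical stand-in for the learned neural network that regresses the number of time steps between two observations, the "observations" are toy numbers rather than images, and the pruning threshold `max_steps` is an assumed parameter. Planning over the resulting topological graph is ordinary shortest-path search (Dijkstra here).

```python
import heapq

# Hypothetical stand-in for the learned temporal-distance model.
# In the actual system this would be a neural network estimating
# how many time steps the robot needs between two camera observations.
def predicted_steps(obs_a, obs_b):
    return abs(obs_a - obs_b)  # toy 1-D "observations"

def build_graph(observations, max_steps=5):
    """Connect observation pairs the model deems reachable within
    max_steps; edges above the threshold are pruned away."""
    graph = {i: {} for i in range(len(observations))}
    for i, a in enumerate(observations):
        for j, b in enumerate(observations):
            if i != j:
                d = predicted_steps(a, b)
                if d <= max_steps:
                    graph[i][j] = d
    return graph

def plan(graph, start, goal):
    """Dijkstra shortest-path search over the topological graph."""
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, w in graph[node].items():
            if nxt not in visited:
                heapq.heappush(frontier, (cost + w, nxt, path + [nxt]))
    return None  # goal unreachable in the pruned graph

obs = [0, 3, 6, 10, 14]           # past observations (toy values)
g = build_graph(obs, max_steps=5)
route = plan(g, start=0, goal=4)  # indices into obs
```

With these toy values no single edge reaches the goal directly, so the planner chains intermediate observations as waypoints, which is the point of the graph: a policy only needs to reach nearby subgoals.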
We propose a learning-based navigation system for reaching visually indicated goals and demonstrate this system on a real mobile robot platform. Learning provides an appealing alternative to conventional methods for robotic navigation: instead of reasoning about environments in terms of geometry and maps, learning can enable a robot to learn about navigational affordances, understand what types of obstacles are traversable (e.g., tall grass) or not (e.g., walls), and generalize over patterns in the environment. However, unlike conventional planning algorithms, it is harder to change the goal of a learned policy during deployment. We propose a method for learning to navigate towards a goal image of the desired destination. By combining a learned policy with a topological graph constructed out of previously observed data, our system can determine how to reach this visually indicated goal even in the presence of variable appearance and lighting. Three key insights (waypoint proposal, graph pruning, and negative mining) enable our method to learn to navigate in real-world environments using only offline data, a setting where prior methods struggle. We instantiate our method on a real outdoor ground robot and show that our system, which we call ViNG, outperforms previously proposed methods for goal-conditioned reinforcement learning, including other methods that incorporate reinforcement learning and search. We also study how ViNG generalizes to unseen environments and evaluate its ability to adapt to such an environment with growing experience. Finally, we demonstrate ViNG on a number of real-world applications, such as last-mile delivery and warehouse inspection. We encourage the reader to check out the videos of our experiments and demonstrations at our project website this https URL.
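Of the three insights the abstract names, negative mining is the one that makes purely offline training of the distance function workable: pairs drawn from within the same trajectory carry their true time gap as a label, while pairs mined across different trajectories are labeled with a maximum distance so the model learns which observations are far apart. The sketch below illustrates that labeling scheme only; the cap `MAX_DIST`, the horizon, and the number of negatives per observation are assumed parameters, not values from the paper.

```python
import random

MAX_DIST = 20  # assumed cap: label meaning "far apart / not directly reachable"

def make_training_pairs(trajectories, n_negatives=1, horizon=MAX_DIST):
    """Build (obs_a, obs_b, steps) labels for a temporal-distance regressor.

    Positives: pairs within one trajectory, labeled with their true
    time gap. Negative mining: pairs drawn across trajectories,
    labeled with the maximum distance MAX_DIST.
    """
    pairs = []
    for t_idx, traj in enumerate(trajectories):
        for i in range(len(traj)):
            # positive pairs within the same trajectory
            for j in range(i, min(i + horizon, len(traj))):
                pairs.append((traj[i], traj[j], j - i))
            # mined negatives from other trajectories
            others = [k for k in range(len(trajectories)) if k != t_idx]
            for _ in range(n_negatives):
                neg = random.choice(trajectories[random.choice(others)])
                pairs.append((traj[i], neg, MAX_DIST))
    return pairs

trajs = [["a0", "a1", "a2"], ["b0", "b1"]]  # two toy trajectories
pairs = make_training_pairs(trajs)
```

A regressor fit on such labels can then score arbitrary observation pairs, which is exactly what edge pruning in the topological graph relies on.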