Robo-vision boosted
Researchers have developed a new approach to the way robots see, with the aim of creating a cheap and reliable positioning system.
Lead researcher and QUT PhD student Connor Malone says there are many Visual Place Recognition (VPR) techniques and positioning methods available. Each tackles a different problem, and each works better in some circumstances than others.
“Sometimes a robot needs to operate in places where environmental conditions change: you might have snow, rain or changing lighting, or even just temporal or structural changes to buildings. And so different techniques tend to tackle different problems,” Mr Malone said.
“What we are proposing is a system that can switch between those different techniques in response to different problems in the environment. So rather than chasing the impossible goal of one solution that does everything, we use existing solutions to build a more robust system.
“A naive approach would be to run all of these different techniques in parallel and use the ones that appear to be working better at a particular time, but this is very computationally intensive.
“We run a single known high-performance technique all the time and can predict - without having to run them all - which of the other techniques to add in to get the best performance.
“This system could potentially be used on any sort of autonomous vehicle platform. A lot of the testing and data sets that we used were from self-driving car applications.
“The particular focus of this system is getting more bang for your buck: making cheap platforms work, with cheap sensors and not a lot of computing power.
“We reviewed sequential images captured as a vehicle drove through an environment and labelled each image with which particular techniques would work for it.
“We then developed and trained what we call ‘neural networks’, which are in essence AI systems, to learn for a particular image which technique is going to work best.
“The AI system is learning which of these conditions it has to account for - whether it's a difference in the appearance of a place, the lighting conditions, or seasonal changes,” Mr Malone said.
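To make the switching idea concrete, a minimal sketch of the runtime logic Mr Malone describes might look like the code below: one primary VPR technique runs on every frame, and a lightweight selector predicts which, if any, of the other techniques is worth adding for that frame. The class and method names (VPRSwitcher, match, predict) are illustrative assumptions, not the researchers' published implementation, and the confidence-based fusion rule at the end is just one simple possibility.

    # Illustrative sketch only: all names here are assumptions, not the authors' code.
    class VPRSwitcher:
        def __init__(self, primary, candidates, selector):
            self.primary = primary        # always-on, known high-performance VPR technique
            self.candidates = candidates  # dict: name -> complementary VPR techniques
            self.selector = selector      # lightweight model: image -> name of best candidate

        def localise(self, image):
            # The primary technique runs on every frame.
            best = self.primary.match(image)

            # The selector predicts which extra technique suits this frame,
            # so the others never need to run in parallel.
            choice = self.selector.predict(image)
            if choice in self.candidates:
                extra = self.candidates[choice].match(image)
                # Keep whichever match reports higher confidence
                # (one simple fusion rule among many possible).
                if extra.confidence > best.confidence:
                    best = extra
            return best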
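Similarly, the offline training step he describes - labelling each image with the technique that performed best on it and teaching a neural network to predict that label - could be sketched roughly as follows, here using PyTorch. The network architecture, the number of candidate techniques and the data loader are placeholders, not details from the published paper.

    # Rough training sketch, assuming each image has been labelled offline with
    # the index of the VPR technique that performed best on it.
    import torch
    import torch.nn as nn

    NUM_TECHNIQUES = 4  # assumed number of candidate techniques

    model = nn.Sequential(                              # small placeholder image classifier
        nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
        nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, NUM_TECHNIQUES),                  # one score per technique
    )

    criterion = nn.CrossEntropyLoss()
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

    def train_epoch(loader):
        # `loader` yields (image batch, index of best technique) pairs
        for images, labels in loader:
            optimiser.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimiser.step()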
Professor Michael Milford from the QUT School of Electrical Engineering & Robotics says the team's experiments have shown the approach works well across a range of challenging environmental conditions.
“The old approach can drive up the cost of the computer hardware or slow down the speed at which the robot can operate, which is not good from a commercial or usability perspective,” he said.
“Everybody is trying to go for the holy grail of one system that fits everything, and so we have ended up with many different systems that are good at different things. We use this switching mechanism: as the images come in, it switches between different techniques, but it is done in a very computationally cheap way.
“It does not take a lot of hardware and resources to actually do this. And the time that it takes to decide the switching is exceedingly small,” Professor Milford said.
The research is partially funded by Amazon via an Amazon Research Award, with additional support from Professor Milford’s ARC Laureate Fellowship and QUT Robotics.
The research was published and presented on May 29 at the annual IEEE International Conference on Robotics and Automation (ICRA 2023), the premier international robotics conference, held this year in London.