Car companies have been feverishly working to improve the technologies behind self-driving cars. But so far even the most high-tech vehicles still fail when it comes to safely navigating in rain and snow.
This is because these weather conditions wreak havoc on the most common sensing approaches, which typically rely on LIDAR sensors or cameras. In snow, for example, cameras can no longer recognize lane markings and traffic signs, while LIDAR returns degrade as the laser pulses scatter off falling snowflakes and raindrops.
MIT researchers have recently been wondering whether an entirely different approach might work. Specifically, what if we instead looked under the road?
A team from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) developed a new system that uses an existing technology called “ground-penetrating radar” (GPR) to send electromagnetic pulses underground that measure that area’s specific combination of soil, rocks and roots. The mapping process creates a unique fingerprint of sorts that the car can later use to localize itself when it returns to that particular plot of land. (Specifically, the CSAIL team used a particular form of GPR instrumentation developed at the MIT Lincoln Laboratory called “localizing ground-penetrating radar,” or LGPR.)
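The core idea, stripped to its essentials, is map-matching: compare the current subsurface reading against a library of previously recorded "fingerprints" and pick the best match. The sketch below is a hypothetical, highly simplified one-dimensional illustration of that matching step (the names `build_map` and `localize`, and the use of normalized cross-correlation as the similarity score, are this sketch's assumptions, not details from the paper):

```python
import numpy as np

# Each map entry pairs a known position along the road with a 1-D
# "fingerprint" vector of subsurface reflectivity recorded there.
def build_map(positions, fingerprints):
    """Pair each surveyed position with its LGPR fingerprint."""
    return list(zip(positions, fingerprints))

def localize(scan, gpr_map):
    """Return the mapped position whose stored fingerprint best
    matches the current scan, scored by normalized cross-correlation."""
    def score(fp):
        a = (scan - scan.mean()) / (scan.std() + 1e-9)
        b = (fp - fp.mean()) / (fp.std() + 1e-9)
        return float(np.dot(a, b) / len(a))
    best_pos, _ = max(((pos, score(fp)) for pos, fp in gpr_map),
                      key=lambda t: t[1])
    return best_pos

# Toy example: three mapped spots, then a noisy re-observation of the
# second one. Surface weather adds noise to the scan, but the
# underground fingerprint itself stays largely the same.
rng = np.random.default_rng(0)
fps = [rng.normal(size=64) for _ in range(3)]
gpr_map = build_map([0.0, 5.0, 10.0], fps)
noisy = fps[1] + 0.1 * rng.normal(size=64)
print(localize(noisy, gpr_map))  # -> 5.0
```

A real system would match dense 2-D radargrams against a continuous map and fuse the result with odometry, but the same match-against-prior-survey logic applies.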
“If you or I grabbed a shovel and dug it into the ground, all we’re going to see is a bunch of dirt,” says CSAIL PhD student Teddy Ort, lead author on a new paper about the project that will be published in the IEEE Robotics and Automation Letters journal later this month. “But LGPR can quantify the specific elements there and compare that to the map it’s already created, so that it knows exactly where it is, without needing cameras or lasers.”
In tests the team found that in snowy conditions the navigation system's average margin of error was only about an inch compared to clear weather. The researchers were surprised to find that it had a bit more trouble in rain, though the error still averaged only 5.5 inches. (This is because rain soaks more water into the ground, creating a larger disparity between the original mapped LGPR reading and the current condition of the soil.)
The researchers said the system’s robustness was further validated by the fact that, over a period of six months of testing, they never had to unexpectedly step in to take the wheel.
“Our work demonstrates that this approach is actually a practical way to help self-driving cars navigate poor weather without actually having to be able to ‘see’ in the traditional sense using laser scanners or cameras,” says MIT professor Daniela Rus, senior author on the new paper, which will also be presented in May at the International Conference on Robotics and Automation (ICRA) in Paris.
While the team has only tested the system at low speeds on a closed country road, Ort said that existing work from the Lincoln Laboratory suggests that the system could easily be extended to highways and other high-speed areas.
This is the first time that developers of self-driving systems have employed ground-penetrating radar, which has previously been used in fields like construction planning, landmine detection and even lunar exploration. The approach wouldn’t be able to work completely on its own, since it can’t detect things aboveground. But its ability to localize in bad weather means that it would couple nicely with LIDAR and vision approaches.
“Before releasing autonomous vehicles on public streets, localization and navigation have to be totally reliable at all times,” says Roland Siegwart, a professor of autonomous systems at ETH Zurich who was not involved in the project. “The CSAIL team’s innovative and novel concept has the potential to push autonomous vehicles much closer to real-world deployment.”
One major benefit of mapping out an area with LGPR is that underground maps tend to hold up better over time than maps created using vision or LIDAR, since features of an above-ground map are much more likely to change. LGPR maps also take up roughly 20 percent less space than the traditional 2D sensor maps that many companies use for their cars.
While the system represents an important advance, Ort says that it’s far from road-ready. Future work will need to focus on designing mapping techniques that allow LGPR datasets to be stitched together to be able to deal with multi-lane roads and intersections. In addition, the current hardware is bulky and six feet wide, so major design advances need to be made before it’s small and light enough to fit into commercial vehicles.
Ort and Rus co-wrote the paper with CSAIL postdoctoral associate Igor Gilitschenski. The project was supported in part by MIT Lincoln Laboratory.
RELATED MIT NEWS
MIT News: “Better autonomous ‘reasoning’ at tricky intersections” http://news.mit.edu/2019/risk-model-autonomous-vehicles-1104
MIT News: “Self-driving cars for country roads” http://news.mit.edu/2018/self-driving-cars-for-country-roads-mit-csail-0507
MIT News: “Smart automation” http://news.mit.edu/2017/smart-automation-daniela-rus-0119