MIT researchers have developed a new system that can produce images of objects shrouded in fog so thick that the human eye cannot see through it. The technology can also measure the distances to those objects.
One of the main challenges facing current autonomous navigation systems based on visible light is fog on the road. These systems are more reliable than radar-based ones because of their higher resolution and their ability to read traffic signals and identify road markings, but they are unsafe in fog. The new system could therefore be a crucial step in the development of the autonomous vehicle.
"We decided to take on the challenge of developing a system that can see through real fog conditions," said Guy Satat, a graduate student at the MIT Media Lab who led the research. "We are talking about realistic fog, which is dense, dynamic and heterogeneous. It moves and changes constantly, with zones of greater and lesser density. Other methods are not designed to deal with these realistic scenarios."
The researchers tested the system by generating fog so dense that human vision could penetrate only 36 cm, and it was able to image objects and measure distances at a range of 57 cm. That distance may not seem significant, but the fog produced for the study was far denser than any a driver would have to face; in the real world, typical fog might allow visibility of 30 to 50 meters.
The vital point of the research is that the system outperformed human vision, whereas most of today's image-based systems perform far worse. A navigation system that drove as well as a human driver in fog conditions would be a breakthrough.
The new system uses a "time-of-flight" camera, which fires ultra-short bursts of laser light and measures the time it takes for their reflections to return.
On a clear day, the light's return time faithfully indicates the distances of the objects that reflected it. But fog causes light to "scatter," or bounce around at random. In heavy fog, most of the light that reaches the camera's sensor has been reflected by water droplets in the air, not by the objects the autonomous vehicle needs to detect. And even the light that is reflected by potential obstacles arrives at different times, having been deflected by water droplets on the way out as well as on the way back.
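The clear-day principle can be sketched in a few lines. This is a toy illustration of time-of-flight ranging, not the MIT system itself: the round-trip time of a laser pulse gives the distance to the reflecting object as d = c · t / 2.

```python
# Toy sketch of the time-of-flight principle on a clear day: half the
# round-trip time of a light pulse, multiplied by the speed of light,
# gives the distance to the reflecting object.

C = 299_792_458.0  # speed of light in m/s


def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to an object from the round-trip time of a light pulse."""
    return C * t_seconds / 2.0


# A reflection arriving after about 3.8 nanoseconds corresponds to
# roughly 0.57 m, the range the system achieved in the dense test fog.
d = distance_from_round_trip(3.8e-9)
```

In fog, as described above, this simple relation breaks down because most detected photons have bounced off water droplets rather than the object.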
The system developed by MIT sidesteps these problems through the use of statistics. The patterns produced by light reflected in fog vary with the fog's density: on average, light penetrates less deeply into a thick fog than into a light haze. But the MIT researchers were able to show that, no matter how thick the fog, the arrival times of the reflected light follow a statistical pattern known as a gamma distribution.
Crucially, the system calculates a separate gamma distribution for each of the 1,024 pixels in the sensor. That is why it can handle the variations in fog density that frustrated earlier systems: it copes with circumstances in which each pixel sees a different kind of fog.
"The good thing about this is that it's pretty simple," Satat says. "If you look at the computation and the method, it's surprisingly not complex. We also don't need any prior knowledge about the fog and its density, which helps the system work in a wide range of fog conditions."
Satat tested the system using a fog chamber one meter long. Inside the chamber, he mounted regularly spaced distance markers, which provided a rough measure of visibility. He also placed a series of small objects (a wooden figurine, blocks of wood, silhouettes of letters) that the system could image even when they were imperceptible to the naked eye.
«Bad weather is one of the big obstacles that remain to be addressed for autonomous driving technology,» says Srinivasa Narasimhan, a computer science professor at Carnegie Mellon University. «The innovative work of Guy and Ramesh is the best approach I’ve seen in this field and has the potential to be implemented in cars very soon.»