Researchers at the Massachusetts Institute of Technology (MIT) have created a world-first open-source photorealistic simulator for autonomous driving. VISTA 2.0 is described as "a data-driven simulation engine where vehicles can learn to drive in the real world and recover from near-crash scenarios." Data-driven means it is built and photorealistically rendered from real-world data.

Currently, this sort of highly advanced simulation is the exclusive domain of a few companies leading the way in automated driving technology, which rely on data only they have access to – such as edge cases, near misses and crashes – to enable accurate simulations. Without access to this sort of data, training AI models for autonomous vehicles is difficult. It is clearly undesirable, for example, to deliberately crash cars into other cars just to teach a neural network how to avoid doing so. But that is where VISTA 2.0 can change things and break new ground, with its code open-sourced to the public.

"With this release, the research community will have access to a powerful new tool for accelerating the research and development of adaptive robust control for autonomous driving," said MIT Professor and Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) Daniela Rus.

VISTA 2.0 builds off a previous MIT model, VISTA, but moves things on significantly. While the initial iteration of VISTA supported only single-car lane-following with one camera sensor, VISTA 2.0 can simulate complex sensor types, interactive scenarios and intersections at scale. Among the driving tasks VISTA 2.0 can recreate and scale in complexity are lane following, lane turning, car following, and static and dynamic overtaking, as well as road scenarios involving multiple agents.

"This is a massive jump in capabilities of data-driven simulation for autonomous vehicles," said Alexander Amini, a CSAIL Ph.D. student and co-lead author on two new papers, together with fellow Ph.D. student Tsun-Hsuan Wang.
"VISTA 2.0 demonstrates the ability to simulate sensor data far beyond 2D RGB cameras: extremely high-dimensional 3D LiDARs with millions of points, irregularly timed event-based cameras, and even interactive and dynamic scenarios with other vehicles as well," Amini said.
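To make the closed-loop, data-driven idea concrete, here is a deliberately minimal toy sketch in Python. It is not the actual VISTA API; the `simulate` function, controller, and kinematics are all hypothetical stand-ins. The key pattern it illustrates is the one the article describes: replay a real-world log, inject a near-crash perturbation (a large lateral offset), re-synthesize the agent's sensor view relative to the recorded data at every step, and let the agent's own actions drive the simulation forward until it recovers.

```python
# Toy illustration of closed-loop, data-driven simulation (NOT the VISTA API).
# A recorded lane-center trajectory stands in for real-world driving data.
recorded_lane_center = [0.0] * 50  # pretend log: 50 steps of a straight lane

def simulate(controller, initial_offset=1.5):
    """Replay the log in closed loop: the ego vehicle starts with a
    perturbed (near-crash) lateral offset, the 'sensor' observation is
    synthesized from the real data, and the agent's actions move it."""
    offset = initial_offset
    trace = []
    for center in recorded_lane_center:
        observation = offset - center       # synthesized lateral-offset sensor
        steering = controller(observation)  # agent acts on the observation
        offset += steering                  # toy lateral kinematics
        trace.append(offset)
    return trace

# A proportional controller as a stand-in for a learned driving policy:
trace = simulate(lambda obs: -0.2 * obs)
```

After 50 closed-loop steps the offset has decayed toward the lane center, which is the recovery behavior a learned policy would be trained to produce; in VISTA 2.0 the same loop runs with photorealistic camera, LiDAR, and event-camera views synthesized from real logs instead of a scalar offset.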