Using simulated scenarios for testing in the automotive industry is a well-established practice. However, the scenarios used in the past, for example, to validate anti-lock braking systems (ABS), do not suffice for autonomous vehicle training. Essentially, autonomous vehicles need to be trained to behave like humans, which requires highly complex simulations.
A key part of any autonomous vehicle training simulation is the simulation environment. Unity, the real-time 3D rendering platform, is being used by engineering teams to efficiently create simulation environments for autonomous vehicle training that are rich in sensory and physical complexity, provide compelling cognitive challenges, and support dynamic multi-agent interaction.
This article provides a useful overview of what comprises a simulation environment and how Unity is used in the production of training environments for autonomous vehicles.
What makes an autonomous vehicle think?
Just like humans, an autonomous vehicle needs a “brain”: this is the autonomous system, which comprises four key areas:
Control: This part takes care of the actions the car needs to take, such as braking, accelerating, and steering.
Planning: The planning part looks after how the vehicle navigates, overtakes, and avoids obstacles.
Perception: This covers how the car gets information about the real world. Information can be gathered with a combination of sensors, such as:
- Computer vision, when cameras are used
- LiDAR (Light Detection and Ranging) sensors
Lastly, via a process called sensor fusion, the information collected with the above methods is combined into a representation that the car can actually understand.
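To make the sensor-fusion idea concrete, here is a minimal illustrative sketch (not Unity's API, and the sensor variances are assumed values): two noisy range estimates for the same object, one from a camera pipeline and one from LiDAR, are combined by inverse-variance weighting, a common baseline before full Kalman filtering.

```python
# Illustrative sensor-fusion sketch: combine two noisy distance estimates
# for the same detected object using inverse-variance weighting.

def fuse_estimates(camera_dist, camera_var, lidar_dist, lidar_var):
    """Return the variance-weighted mean of two range estimates."""
    w_cam = 1.0 / camera_var
    w_lid = 1.0 / lidar_var
    fused = (w_cam * camera_dist + w_lid * lidar_dist) / (w_cam + w_lid)
    fused_var = 1.0 / (w_cam + w_lid)
    return fused, fused_var

# LiDAR is typically far more precise at range, so the fused estimate
# leans toward it: camera says 21 m (+/-2 m), LiDAR says 20.2 m (+/-0.2 m).
dist, var = fuse_estimates(21.0, 2.0**2, 20.2, 0.2**2)
```

Because the LiDAR estimate carries far more weight, the fused distance lands close to 20.2 m, with a lower variance than either sensor alone.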
Coordination: This part is connected with the planning one, as it deals with how the car behaves in relation to other smart cars it encounters. It requires communication with other vehicles and infrastructure, examples of which include:
- Platooning: how cars can closely follow each other on highways, forming a kind of "train" that saves fuel by cutting air resistance, among other things.
- Merging and intersections: how cars could adjust collectively to the traffic flow.
- Swarming: the collective-behavior concept that underpins the coordination patterns above.
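As an illustration of the platooning idea above, the following sketch shows a constant-time-headway follower: the car keeps a speed-dependent gap behind its leader and accelerates or brakes based on the gap and relative-speed errors. The gains, headway, and standstill gap are assumed values for illustration, not a tuned production controller.

```python
# Hypothetical platooning sketch: constant-time-headway following.

def follower_accel(gap, ego_speed, leader_speed,
                   standstill_gap=5.0, headway=1.5,
                   k_gap=0.4, k_speed=0.8):
    """Acceleration command (m/s^2) from gap and relative-speed errors."""
    desired_gap = standstill_gap + headway * ego_speed  # grows with speed
    gap_error = gap - desired_gap          # negative -> too close
    speed_error = leader_speed - ego_speed # negative -> closing in
    return k_gap * gap_error + k_speed * speed_error

# Slightly too close (45 m instead of 47 m) and 1 m/s faster than the
# leader, so the controller commands gentle braking:
a = follower_accel(gap=45.0, ego_speed=28.0, leader_speed=27.0)
```

The time-headway term is what keeps the "train" stable: the target gap stretches automatically as the platoon speeds up.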
The key challenges with training autonomous vehicles
How do you collect all the data you need? Machine learning is at the heart of autonomous vehicles, and it is very data-hungry: huge amounts of data are necessary to train an autonomous vehicle. How do you gather that data cost-effectively and accurately?
How will the car understand what the data is? It's not enough just to collect the data; you have to make sure the car understands what the data is. It can't just see an object: it has to understand whether that object is a tree, a road, a person, and so on.
How do you sort and structure the data? Every single piece of data has to be understood by the autonomous vehicle, a process that's both expensive and prone to error when carried out by people. If it could be done automatically, you'd arrive at the required algorithms much faster and more reliably.
How do you prepare the vehicle for the inevitable unforeseen situations? Data collected exclusively from the real world can only prepare the autonomous vehicle for what it’s already seen out there.
Rich and complex simulation environments give engineering teams control over data generation and, ultimately, let them train an autonomous vehicle system to be ready for all scenarios, including unforeseen ones and edge cases.
What’s in a simulation environment?
To train an autonomous vehicle system, you need to produce an environment that is as close as possible to what the real car would see on the road. The key parts of a simulation environment are:
Vehicle dynamics: how the car behaves physically, such as friction with the asphalt.
Environment: This part comprises three sub-categories:
- Static elements, such as roads, trees, and stop signs.
- Dynamic elements, such as pedestrians or other cars, that provide variations within your scenario and allow you to create scenarios that can be used to validate or collect data for your vehicles.
- Parameters, such as the time of day and different weather conditions, that you can apply to a given scenario to recreate different situations.
The combination of these varied environment factors is what allows you to produce edge cases that are rare in reality.
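The parameter sweep behind that combination can be sketched in a few lines. The parameter names and values below are illustrative assumptions, but they show how a single static environment multiplies into many scenario variants, including rare pairings (night, snow, and a jaywalking pedestrian, say) that would be expensive to capture on real roads.

```python
# Sketch of scenario parameterization: enumerate the cross-product of
# environment parameters applied to one static environment.
from itertools import product

times_of_day = ["dawn", "noon", "dusk", "night"]
weather = ["clear", "rain", "fog", "snow"]
pedestrian_events = ["none", "crossing", "jaywalking"]

scenarios = [
    {"time": t, "weather": w, "pedestrian": p}
    for t, w, p in product(times_of_day, weather, pedestrian_events)
]
# 4 times of day x 4 weather states x 3 pedestrian events
# = 48 scenario variants from a single static environment.
```

In practice teams also randomize continuous parameters (sun angle, rain intensity, pedestrian speed), so the space of variants grows far beyond a discrete grid.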
Sensor model: The simulation scenarios need to be taken in by the autonomous system via a sensor model, such as a LiDAR sensor, camera, or radar. It has to be physically accurate to the point that an algorithm relying on this information behaves in the synthetic environment as it would in reality.
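Part of that physical accuracy is imperfection: real sensors are noisy, so a sensor model usually perturbs the ideal simulated measurements. The sketch below (with assumed noise parameters, not taken from any particular sensor datasheet) adds Gaussian range noise and random dropped returns to ideal LiDAR distances, so the perception stack trains on imperfect data rather than ground truth.

```python
# Illustrative LiDAR noise model: Gaussian range noise plus dropout.
import random

def noisy_lidar(true_ranges, sigma=0.02, dropout=0.01, rng=None):
    """Return simulated returns; None marks a dropped ray."""
    rng = rng or random.Random(0)  # seeded for reproducible scenarios
    out = []
    for r in true_ranges:
        if rng.random() < dropout:
            out.append(None)  # missed return (absorptive surface, etc.)
        else:
            out.append(r + rng.gauss(0.0, sigma))
    return out

readings = noisy_lidar([10.0, 12.5, 30.0])
```

A real sensor model would go further (distance-dependent noise, beam divergence, material reflectivity), but even this simple perturbation prevents the algorithm from overfitting to impossibly clean synthetic data.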
Unity: a natural choice for developing simulation environments
Engineering a simulation environment requires the same features and toolsets used in creating other types of rich interactive content: lighting and physics, particle and weather systems, animation, machine learning and more.
Unity is the world-leading real-time 3D rendering platform for games and other interactive content development. It's a tried-and-tested, full-featured platform that powers millions of multi-platform games and applications. It also provides the unique advantages of the Asset Store (see below for more information) and its huge community of cross-industry developers and creators.
Key Unity features for engineering teams developing autonomous vehicle systems
Speed: The intuitive UI of the Unity editor makes it possible to prototype quickly. When you are in “Play” mode in the editor, you can play and preview how your application will look in its final build. You can pause the scene, and alter values, assets, scripts and other properties, and instantly see the results. You can also step through a project frame by frame for easy debugging.
Rich interactivity: Unity provides a robust and well-documented API that gives access to the complete range of its systems, including physics, rendering, animation, and communications, enabling a rich interaction model and integration with other systems.
High-end graphics: The Scriptable Render Pipeline (SRP) allows you to code the core of your render loop in C#, giving you much more flexibility to customize how your scene is drawn and make it specific to your content.
There are two SRPs available: the High-Definition Render Pipeline (HDRP) offers world-class visual quality on high-performance hardware, while the Universal Render Pipeline (URP) maintains responsive performance when scaling for mobile.
VR and AR support (plus deployment to 25 other platforms): Due to its extensive platform support, Unity is used by AAA game studios, top creative agencies, film studios and research teams in auto, space, and other industries to create immersive applications.
Machine learning and AI capabilities: The Unity ML-Agents Toolkit enables machine learning researchers to study complex behaviors using Unity, and provides interactive content developers with the latest machine learning technologies to develop intelligent agents.
The Asset Store: The Asset Store gives you access to the largest marketplace of off-the-shelf assets and productivity tools, including a huge selection for creating environments, to save on development time.
What’s coming up in Unity for autonomous vehicle training
- A Sensor SDK, for developing sensors in Unity.
- Reference models, which are startup kits, or templates, that teams can use to prototype quickly.
- Big features coming up in the Unity product cycle.
For content creation:
- Collaborations with a number of auto-industry partners on data-ingestion pipelines that will help generate content faster at that scale.
- A new communication layer currently in development, improvements to Linux support, and continued development of the ML-Agents Toolkit.