It’s clear that next-generation smart cameras provide vehicles and drivers with valuable information. Advanced driver assistance systems (ADAS) rely on such cameras to detect pedestrians, approaching vehicles, and even lane markings. However, both manufacturers and consumers are now recognizing the limitations of the technology.
Cameras paired with ADAS give drivers additional lines of sight, such as coverage of the blind spot when other vehicles pass, but these advancements are still insufficient. What’s needed is technology that can capture diverse types of sensory data – from multiple senses beyond the visual – to build a more complete picture of the road and the vehicle for better safety and performance.
The shortcomings of cameras alone become clear as manufacturers venture into the world of autonomous vehicles, which need to be equipped with senses comparable to those of a human driver. Vision alone is not enough.
Luckily, AI is now transforming the automotive industry and equipping vehicles with multisensory data. Car manufacturers can now incorporate more diverse sensory data streams – a key steppingstone on the path to fully autonomous vehicles.
Limitations of cameras
While cameras were once used primarily to record police traffic stops or to provide evidence in accident reports, newer AI-powered ADAS cameras have become a critical component of vehicle safety.
Generally, AI-powered cameras monitor the road ahead of and around the vehicle, analyzing live images to determine whether there is an obstruction. These cameras can thereby detect objects in proximity to the vehicle – whether a child behind a reversing car or another vehicle passing in a driver’s blind spot – and trigger a response to prevent an accident.
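To make that pipeline concrete, here is a minimal sketch of how per-frame detections might be mapped to a warning or braking request. The detection fields, zone names, and distance threshold are hypothetical placeholders for illustration, not a real ADAS interface.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "vehicle"
    distance_m: float   # estimated distance to the object in meters
    zone: str           # e.g. "front", "rear", "blind_spot"

def respond_to_frame(detections: list[Detection]) -> str:
    """Map one frame's camera detections to a response (illustrative rules)."""
    for d in detections:
        if d.zone == "rear" and d.label == "pedestrian" and d.distance_m < 3.0:
            return "emergency_brake"        # e.g. a child behind a reversing car
        if d.zone == "blind_spot" and d.label == "vehicle":
            return "blind_spot_warning"     # warn before a lane change
    return "no_action"

# Example: one vehicle detected in the driver's blind spot
print(respond_to_frame([Detection("vehicle", 4.5, "blind_spot")]))
```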
Going beyond the camera
Today, OEMs are increasingly looking to equip vehicles with a sense of touch. The resulting tactile data can provide valuable insights to both drivers and manufacturers: sensors built into the vehicle can capture real-time data on road conditions, slipperiness, friction, aquaplaning, and more.
The tactile data can be used to either prompt a manual response by a driver or trigger an automatic system response. For instance, tire sensor data can prompt the vehicle to adjust tire grip to fit the road conditions, or the system can alert the driver to manually adjust the settings.
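A simplified version of that decision rule might look like the sketch below; the friction threshold and the choice between an automatic adjustment and a driver alert are illustrative assumptions rather than any manufacturer’s actual logic.

```python
def handle_tire_friction(friction_coefficient: float,
                         auto_adjust_available: bool) -> str:
    """Decide how to respond to low measured road friction.

    friction_coefficient: estimated tire-road friction (~0.1 on ice, ~1.0 on dry asphalt).
    auto_adjust_available: whether the vehicle can adapt traction settings itself.
    """
    LOW_FRICTION_THRESHOLD = 0.3  # illustrative cutoff for a slippery surface
    if friction_coefficient >= LOW_FRICTION_THRESHOLD:
        return "no_action"
    if auto_adjust_available:
        return "auto_adjust_traction"   # system adapts grip to the road conditions
    return "alert_driver"               # prompt a manual adjustment instead

print(handle_tire_friction(0.2, auto_adjust_available=False))  # "alert_driver"
```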
In autonomous vehicles, the response that the data triggers depends on the level of autonomy. There are six recognized levels of vehicle autonomy: 0 (no automation), 1 (driver assistance), 2 (partial automation), 3 (conditional automation), 4 (high automation), and 5 (full automation).
The distinction between levels 2 and 3 is particularly important. At level 2, the vehicle can automate steering and speed together – for instance, adaptive cruise control combined with lane centering – but the driver must supervise at all times and remains responsible for the driving task. At level 3, the vehicle handles the full driving task under defined conditions, and the driver only needs to take over when prompted. To reach level 3, vehicles must therefore be equipped with diverse data inputs from multiple sources, both to detect problems and to respond to them.
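For reference, the sketch below encodes those levels as a simple lookup table; each description is a paraphrase of the SAE J3016 categories, not the standard’s official wording.

```python
# SAE J3016 levels of driving automation (paraphrased descriptions).
AUTOMATION_LEVELS = {
    0: "No automation: the driver does everything.",
    1: "Driver assistance: the system controls steering or speed, not both.",
    2: "Partial automation: steering and speed together; the driver supervises.",
    3: "Conditional automation: the system drives under defined conditions; "
       "the driver must take over when prompted.",
    4: "High automation: no driver attention needed within a limited domain.",
    5: "Full automation: the system drives everywhere, in all conditions.",
}

for level, description in sorted(AUTOMATION_LEVELS.items()):
    print(f"Level {level}: {description}")
```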
The path to autonomy
While ADAS cameras are an important tool for drivers, their capabilities are not enough to enable autonomous driving in all conditions. Rather, manufacturers need to equip vehicles with multiple types and sources of perception and sensor data.
This level of autonomy requires multiple vehicle sensors working together. The latest model in China’s line of self-driving robo-taxis, for instance, is equipped with 38 sensors, including eight light detection and ranging (LiDAR) units. LiDAR lets the vehicle determine depth and distance more accurately: a sensor emits laser pulses that bounce off objects and return to a receiver on the vehicle. Much like radar or sonar, the system then calculates distance from the time the pulse takes to return.
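The underlying time-of-flight arithmetic is straightforward: the pulse covers the distance to the object twice, so the range is half the round-trip time multiplied by the speed of light. The short sketch below works through an example; the function name and sample timing are illustrative, not tied to any particular sensor.

```python
# Minimal sketch of LiDAR time-of-flight ranging.
# The laser's round trip covers twice the distance to the object,
# hence the division by two.

SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def distance_from_time_of_flight(round_trip_seconds: float) -> float:
    """Return the one-way distance (meters) to the reflecting object."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2.0

# Example: a return pulse detected 200 nanoseconds after emission
# corresponds to an object roughly 30 meters away.
print(distance_from_time_of_flight(200e-9))  # ~29.98 m
```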
Diverse sensory inputs enable autonomous vehicles to operate safely even when their field of vision is obscured, to calculate distance and judge depth more accurately, and to respond to varying road conditions in real time. These advancements are only possible when vehicles have senses beyond sight.
AI-powered cameras are one piece of the autonomy puzzle, but there are many additional pieces. To achieve full autonomy, vehicles must ultimately be able to “sense” like a person, not just see like one.