CEVA has kindly shared the following article with us, drawing on their growing knowledge base about computer vision and the challenges facing fully autonomous cars and vehicles.
Computer vision and image enhancement are intrinsic to today's active safety systems and tomorrow's autonomous vehicles. While a range of sensors is involved, it is cameras that will play the pivotal role in these sensing systems. Automotive advances are being driven forward by intelligent computer vision and imaging.
Some of these advances in the automotive industry are especially interesting, and will lead to fully autonomous vehicles as well as highly automated driving features.
Please see the following blog post:
The challenges to fully autonomous cars.
On September 19, 2016, the U.S. Government issued the Federal Automated Vehicles Policy, marking a significant milestone in the progress of autonomous vehicles. Although this is a guidance document and does not contain official regulations, it is the first federal involvement of its kind in the United States. The federal government recognizes that the automation of vehicles has the potential to increase safety, improve mobility, save energy, and reduce pollution. Accordingly, the U.S. Department of Transportation has set a goal of accelerating the revolution in highly automated vehicles (HAVs). This is another example of a crucial step toward overcoming red tape, as I discussed in my post about Moving autonomous driving into the fast lane.
Will human drivers soon be a thing of the past? (Source: unsplash.com)
Four levels of vehicle automation
This migration from solely human-operated cars to completely autonomous ones is not a sudden leap, but is instead a gradual progression. So where exactly are we in this evolution process, and what can we expect to see in 2017?
The automation of vehicles can be divided into four levels:
1. Zero automation
2. Mechanical automation
3. Smart automation
4. Full autonomous driving
Taking a look at this classification, it's clear that the first level, zero automation, is practically extinct. Almost every new vehicle in the developed world has at least one automated feature, and most have many. The second level, mechanical automation, has been around for decades. Features of the third level, smart automation, such as adaptive cruise control and parking assist, are now commercially available. But this level is far from being fully exhausted: features like traffic jam assist and self-parking are in very advanced stages of development and are being introduced on the roads today. The technological components that enable these tasks are ever-developing; for example, artificial intelligence and smart sensors continue to improve these automated features. As this trend continues, smart automation will be adopted into the mass market more easily and quickly, and will soon become standard equipment in new cars.
The fourth and final level is full autonomous driving. This is the pinnacle of the evolution and will have the largest societal impact, changing how people use transportation through shared mobility services. Many traditional automotive companies, such as General Motors, Ford, and Daimler, to name but a few, have joined the likes of Google, Baidu, and Uber in this revolution. You can see these test cars on the road today, and ever more locations are being used as test sites for autonomous vehicles, gradually incorporating them into general traffic.
Self-driving Uber car (Source: Uber)
Just a few days prior to the release of the federal policy document, Uber unleashed a fleet of autonomous vehicles for hire to the general public. The experimental pilot program includes four cars capable of navigating a pre-mapped section of Pittsburgh. In each car, two Uber engineers sit in the driver and front passenger seats, ready to take control if there is a safety risk or other issue that requires intervention. According to reports, like this one from Reuters, such an intervention was necessary every few miles.
Three main technological challenges that need to be addressed in 2017
Experiments like the Uber fleet in Pittsburgh show that fully autonomous vehicles are very close to fruition. In the foreseeable future, it’s likely that these vehicles will mainly be used for services, while car owners will see increasingly sophisticated level 3 HAVs. What are the challenges standing in the way of further enhancing level 3 and fully transitioning into level 4 automation?
#1 Availability of intelligent vision sensors: One of the main challenges is the availability of intelligent vision sensors that can match and exceed human capacity for vision and analysis, and adequately respond to any situation that may occur on the road. These sensors must be able to detect signs, hazards, other vehicles, pedestrians, and more, and process this information to determine precisely how to react in real-time. But, the vision sensors aren’t only pointed outwards to the road...
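To make that perception-to-reaction pipeline concrete, here is a minimal sketch in Python. This is not any vendor's actual API; the `Detection` record, labels, and distance thresholds are all invented for illustration. It shows the essential shape of the task: each camera frame yields a set of detections, and the closest relevant one must be mapped to a driving action in real time.

```python
from dataclasses import dataclass

# Hypothetical detection record: what an intelligent vision sensor might
# emit after analyzing one camera frame. Labels and fields are assumptions.
@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "stop_sign", "vehicle"
    distance_m: float  # estimated distance from the ego vehicle, in meters

def decide_action(detections, speed_kmh):
    """Map one frame's detections to a driving action (illustrative only).

    Detections are checked nearest-first, so the most urgent hazard wins.
    Thresholds are placeholders, not calibrated values.
    """
    for d in sorted(detections, key=lambda d: d.distance_m):
        if d.label == "pedestrian" and d.distance_m < 30:
            return "emergency_brake"
        if d.label == "stop_sign" and d.distance_m < 50:
            return "slow_and_stop"
        if d.label == "vehicle" and d.distance_m < speed_kmh / 2:
            return "keep_distance"
    return "maintain_speed"

frame = [Detection("vehicle", 60.0), Detection("pedestrian", 25.0)]
print(decide_action(frame, speed_kmh=50))  # pedestrian is closest -> emergency_brake
```

In a real system, each branch of this decision logic would be backed by trained models and sensor fusion, and the loop would run dozens of times per second; the sketch only illustrates why the sensor must both detect and interpret, not merely record, the scene.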
#2 Simple handoff from the automated function to the driver: In level 3 of smart automation, there is complexity in the handoff from the automated function to the driver. The solution for this is driver monitoring. This feature uses machine vision that can detect the state of the driver (e.g., alert or fatigued, paying attention or distracted) by analyzing facial expressions to ensure a safe handoff.
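The handoff logic itself can be sketched as a simple gate. The function names, score range, and threshold below are assumptions for illustration, not a real driver-monitoring API; a vision-based monitor would supply the attention and fatigue signals from facial-expression analysis.

```python
# Illustrative handoff gate. In practice, eyes_on_road and fatigue_score
# would come from a camera-based driver-monitoring system.
def safe_to_hand_off(eyes_on_road: bool, fatigue_score: float) -> bool:
    """Permit handoff only if the driver is attentive and not too
    fatigued (fatigue_score assumed in [0, 1], 0.5 is a placeholder)."""
    return eyes_on_road and fatigue_score < 0.5

def handoff(eyes_on_road: bool, fatigue_score: float) -> str:
    if safe_to_hand_off(eyes_on_road, fatigue_score):
        return "transfer_control"
    # Otherwise the vehicle keeps driving and prompts the driver
    # to re-engage before retrying the handoff.
    return "stay_automated_and_alert_driver"

print(handoff(eyes_on_road=True, fatigue_score=0.2))   # transfer_control
print(handoff(eyes_on_road=False, fatigue_score=0.2))  # stay_automated_and_alert_driver
```

The key design point is that the automation never simply releases control; it checks the driver's state first and keeps driving until the handoff is verifiably safe.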
#3 Cost-effectiveness: Even the most advanced imaging and vision technology won't cut it if it's not extremely low-power, small-die-size, and part of a cost-effective solution. To really penetrate the mass market and disrupt the transportation industry, the solution must be able to perform all of the above tasks with all their complexity, but still be efficient and flexible enough to adapt to future changes and innovations.
Enablers of mass market applications
There are many challenges ahead before fully autonomous vehicles can become the main mode of transportation across the globe. Safety, ethics, psychology, and other fields will undoubtedly inform decision making and policy formulation, as the U.S. government made clear in its publication. In the meantime, huge steps are being made toward that goal.
The main enablers of these steps are efficient vision and sensing solutions; low-power, low-cost machine intelligence; and positioning technology. As these components become more cost-effective and more widely available for mass market applications, we'll see more green lights for features that make driving safer, more efficient, and more convenient.