Terms such as artificial intelligence, neural networks, deep learning and machine learning are ubiquitous in the age of digitalisation, especially when it comes to autonomous driving. The EDAG Group, too, has long been using these technologies in innovative customer and in-house projects.
Vehicle automation offers the automotive industry unprecedented opportunities to make cars safer and more comfortable. AI is the key to increasing road safety and enabling ever more autonomous driving. Vehicles are already equipped with a large number of sensors whose data are evaluated by AI in order to understand and improve traffic in the real world.
The car’s sensor system uses neural networks to detect other road users, signs and road markings, as well as moving objects such as pedestrians, in order to derive appropriate actions.
When driving a car, people keep an eye on their surroundings as a matter of course. The AI in an autonomous vehicle, on the other hand, must constantly reassess the situation in fractions of a second on the basis of the sensor data.
But what if a source of danger, such as a sudden obstacle, cannot be detected because the sensors or cameras cannot see it? What if the weather conditions are so bad that neither a human driver nor the autonomous car can react in time? Or the camera lens is dirty?
Clear vision thanks to EDAG
EDAG Electronics has a solution for precisely this problem: we have developed software that uses artificial intelligence to support assisted and automated driving even in poor visibility conditions. The DiFoRem (Dirt & Fog Removal) system is able to compensate for image errors caused by dirt, fogging or camera lens defects with the help of neural networks in real time. The reconstructed image can then be used by other assistance systems or for automated driving, providing a significant increase in image and information quality. This means that obstacles can be detected despite a dirty or fogged lens.
In order to digitally compensate for image errors, the impaired areas of every incoming image are first identified algorithmically and marked accordingly. In the next step, the faulty image sections are reconstructed by the neural network from the overall image and then replaced by the reconstructions.
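The final step of this pipeline, replacing the marked pixels with the network's reconstruction, can be sketched in a few lines of NumPy. This is an illustrative sketch only, not EDAG's implementation; the function name and toy values are hypothetical.

```python
import numpy as np

def composite_reconstruction(image, reconstruction, impaired_mask):
    """Replace the marked (impaired) pixels with reconstructed ones.

    image, reconstruction: float arrays of shape (H, W)
    impaired_mask: boolean array of shape (H, W); True marks faulty pixels.
    """
    result = image.copy()
    result[impaired_mask] = reconstruction[impaired_mask]
    return result

# Toy example: a 3x3 image whose centre pixel is blocked by dirt
image = np.array([[0.9, 0.8, 0.9],
                  [0.7, 0.0, 0.8],   # centre pixel is unusable
                  [0.9, 0.7, 0.9]])
reconstruction = np.full((3, 3), 0.8)   # the network's estimate
mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True                       # only the centre is impaired
restored = composite_reconstruction(image, reconstruction, mask)
```

The key point is that only the marked regions are touched; pixels the camera captured correctly pass through unchanged.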
EDAG Group developers have created novel auto-encoder architectures combining Partial-Convolutional Neural Networks and ConvLSTMs. This enables spatio-temporal feature extraction inspired by human memory and allows DiFoRem to also use information from preceding and subsequent frames for reconstruction. The deep-learning approach permits robust reconstruction of image errors, which can take many different forms.
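The idea behind a partial convolution is that the filter response is computed only from valid pixels and renormalised by the fraction of valid pixels under the window, while the validity mask shrinks layer by layer. A minimal single-channel sketch in plain NumPy, purely for illustration and not EDAG's actual layer implementation, might look like this:

```python
import numpy as np

def partial_conv2d(x, mask, kernel):
    """Single-channel partial convolution, greatly simplified.

    x:      (H, W) image; values where mask == 0 are invalid (e.g. dirt).
    mask:   (H, W) binary validity mask.
    kernel: (k, k) filter weights.
    The response is renormalised by the fraction of valid pixels under
    the window, and the mask is updated: an output pixel becomes valid
    as soon as any valid input falls under its window.
    """
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x * mask, pad)       # invalid pixels contribute zero
    mp = np.pad(mask, pad)
    H, W = x.shape
    out = np.zeros((H, W))
    new_mask = np.zeros((H, W))
    window_area = k * k
    for i in range(H):
        for j in range(W):
            valid = mp[i:i+k, j:j+k].sum()
            if valid > 0:
                x_win = xp[i:i+k, j:j+k]
                # renormalise so missing pixels do not dilute the response
                out[i, j] = (kernel * x_win).sum() * window_area / valid
                new_mask[i, j] = 1.0
    return out, new_mask

# Toy example: a 5x5 image of ones with one invalid pixel in the centre
x = np.ones((5, 5))
mask = np.ones((5, 5))
mask[2, 2] = 0.0
kernel = np.ones((3, 3)) / 9.0       # simple mean filter
out, new_mask = partial_conv2d(x, mask, kernel)
```

After one such layer the hole has been filled from its valid neighbours and the mask is fully valid again; in a real network, stacks of these layers (plus ConvLSTM cells for the temporal dimension) perform the same trick on learned features rather than raw pixels.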
The network architecture selected also makes it possible to abstract information from objects and scenarios seen previously, and to recognise underlying relationships. For example, hidden objects in a single image can be reconstructed on the basis of experience gained from previous images. This experience is collected during the training process by analysing millions of different images; the correctness of the abstraction is checked continuously throughout training.
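Checking the correctness of a reconstruction during training typically means comparing the network's output against the undamaged ground-truth image, with errors inside the impaired region weighted more heavily. The sketch below shows such a weighted L1 loss; the weighting and function name are illustrative assumptions, not EDAG's actual training objective.

```python
import numpy as np

def inpainting_loss(prediction, target, impaired_mask, hole_weight=6.0):
    """Weighted L1 loss for checking reconstructions during training.

    The error inside the impaired ("hole") region is weighted more
    heavily than the error on undamaged pixels, a common choice for
    image inpainting. The weight of 6.0 is purely illustrative.
    """
    m = impaired_mask.astype(float)
    hole_err = np.abs((prediction - target) * m).sum()
    valid_err = np.abs((prediction - target) * (1.0 - m)).sum()
    return (hole_weight * hole_err + valid_err) / prediction.size

# Toy example: the network predicts all zeros for an all-ones target,
# with one of the four pixels marked as impaired
pred = np.zeros((2, 2))
target = np.ones((2, 2))
mask = np.array([[True, False], [False, False]])
loss = inpainting_loss(pred, target, mask)
```

Driving this loss down over millions of training images is what forces the network's abstractions to stay consistent with reality.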
The EDAG Group has optimised the software for Nvidia's widely used platforms for autonomous driving. As a result, users benefit from high-performance algorithms on a standard hardware platform for embedded systems.
The DiFoRem system increases the availability and robustness of camera-based signals, thereby improving the quality of the input data for current driver assistance systems and automated driving functions.
DiFoRem is completely hardware-independent and compatible with rear-view, front-view, top-view and surround-view camera systems.
Artificial intelligence and neural networks help the automotive industry in all areas, from construction and design to production and the further development of safety and comfort. These technologies can thus become pioneers for the mobility of the future, and they give us, the experts at EDAG Electronics, the opportunity to support and accompany our customers on their way to the autonomous vehicle.
Our colleague Jacek Burger, Head of Embedded Systems & Computer Vision/AI, will be glad to give you a more detailed insight into these technologies and the further application areas they make possible.