Performance is often equated with speed. In sports - in Formula 1 racing, for example - a razor-thin lead of two and a half tenths of a second, i.e. 0.25 seconds, can make all the difference between victory and defeat. And it is not just on the race track that speed is a key criterion for success. Performance also plays a decisive role in the interiors of our vehicles, not least on account of digitalisation and state-of-the-art developments such as autonomous driving, driver assistance systems and a wide variety of innovations in the electronics field. The OEMs are hard at work finding ways to fit vast amounts of data - all of which have to be orchestrated every second and demand immense computing power from the processors - into the corresponding hardware. These functions are controlled by huge numbers of algorithms, which could be put to far better use if they were optimised or rewritten. This idea - reducing the footprint of algorithms - has given rise to the discipline of performance engineering. Since 2019, EDAG, too, has been focussing on optimisation at instruction level, on the optimisation of algorithms and data structures, and on vectorisation.
The inner values are what count: let the function and its harmony set you apart from your competitors.
In times when the same hardware is available to many, the important thing is to create a market advantage through function - in particular through the harmony and wealth of functions. Hardware manufacturers are running up against physical limits, so software has to become increasingly innovative in order to fit more functions into the same computing capacity. The exponential growth in data volumes places ever-increasing demands on high-performance storage media capable of processing them, making it all the more important for sustainable algorithms to be created.
The end of Moore's Law - according to which the computing power of processors reliably doubles every 18 months - also opens up opportunities for methods and innovative strength. As a consequence of customers' calls for innovations in the field of safety, and of the ever-increasing requirements of consumer tests (e.g. NCAP) and homologation, OEMs are having to bow to this pressure and incorporate more and more functions in the vehicle. Especially in the area of driver assistance systems and autonomous driving, this leads to extremely complex tasks in terms of the computing power required.
For the vehicle manufacturers, implementation of the development plans is always to the fore, and the focus is clearly on functionality - "make it work" is the motto here. Large development teams work independently of one another on highly complex tasks such as road sign and object recognition, sometimes for several years. Finally, all the functions have to be incorporated into the vehicle - and then comes the realisation that there is not enough computing power for all the functions to run at the same time. As the start of production approaches, OEMs see that time is running out, and they find themselves under more and more pressure. If they have not already done so, now is the time to recognise optimisation potential, reduce the footprint of their algorithms, and optimise their way out of the problem.
Many roads lead to Rome. With performance engineering we can find the most efficient one for you.
This is exactly where performance engineering comes in: algorithms which already meet all functional requirements are reconsidered with a view to re-engineering and closely scrutinised under the premise "make it fast", asking a wide variety of questions:
- How efficiently is the CPU utilised?
- Does the computer hardware have features that could be put to better use if the algorithms were rewritten?
- Can the algorithmic complexity be reduced?
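To illustrate the last question, here is a minimal sketch of what reducing algorithmic complexity can mean in practice. It is our own generic illustration, not code from an EDAG project: a duplicate check rewritten from a quadratic pairwise comparison to a linear hash-set lookup.

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_set>
#include <vector>

// Naive version: compares every pair of elements -> O(n^2) comparisons.
bool hasDuplicateNaive(const std::vector<int>& v) {
    for (std::size_t i = 0; i < v.size(); ++i)
        for (std::size_t j = i + 1; j < v.size(); ++j)
            if (v[i] == v[j]) return true;
    return false;
}

// Rewritten version: a hash set makes each membership test O(1) on
// average, so the whole check becomes O(n).
bool hasDuplicateFast(const std::vector<int>& v) {
    std::unordered_set<int> seen;
    for (int x : v)
        if (!seen.insert(x).second)  // insert fails -> value already seen
            return true;
    return false;
}
```

Both functions return the same result; only the amount of work grows differently with the input size - exactly the kind of gain that no amount of low-level tuning of the naive loop could deliver.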
The focus of our optimisers' analyses is always on the best possible performance in terms of computing time, memory utilisation and data transmission. The programs we use for this performance engineering process are open source tools from the Linux environment, around which we have built various scripts and tools of our own. We put these to various uses, including running time analysis, resource optimisation, vectorisation, algorithmic analysis and improvement, and changes to data structures.
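A classic example of such a data-structure change - again a generic sketch of our own, not customer code - is switching from an "array of structures" to a "structure of arrays", so that the values a hot loop actually touches lie contiguously in memory and cache lines are fully used:

```cpp
#include <cassert>
#include <vector>

// Array of structures: x, y and z of one point sit next to each other,
// so a loop that only needs x still drags y and z through the cache.
struct PointAoS { float x, y, z; };

// Structure of arrays: all x values are contiguous, so a loop over x
// uses every byte of each cache line and is easy to vectorise.
struct PointsSoA {
    std::vector<float> x, y, z;
};

float sumXAoS(const std::vector<PointAoS>& pts) {
    float s = 0.0f;
    for (const PointAoS& p : pts) s += p.x;  // stride of 12 bytes
    return s;
}

float sumXSoA(const PointsSoA& pts) {
    float s = 0.0f;
    for (float v : pts.x) s += v;            // unit stride
    return s;
}
```

The results are identical; what changes is how the data is laid out in memory, and hence how efficiently the memory system and SIMD units can be used.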
To do this, we evaluate the elements of the algorithms used, and test their stability and performance. In this way, developments can be structurally optimised, and the performance level of existing systems or problems analysed. Software elements which are redundant or do not make any significant contribution to the result are removed. In addition, memory usage is restructured so that, as far as possible, the work can be done within a small memory area. In other words, the same memory area is reused for different purposes, the algorithm is rewritten to make it more efficient, and hardware resources that are no longer needed are freed up. This is one example of how our optimisation strategy can be implemented.
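The idea of reusing the same memory area can be sketched as follows - a deliberately simple illustration of our own, not an excerpt from a real project: the first variant allocates a fresh buffer on every call, the second writes the result back into the input's memory.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Out-of-place version: every call allocates a new output buffer,
// doubling the memory footprint of the operation.
std::vector<float> scaleCopy(const std::vector<float>& in, float factor) {
    std::vector<float> out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = in[i] * factor;
    return out;
}

// In-place version: the input's memory area is reused for the result,
// so no additional buffer is needed at all.
void scaleInPlace(std::vector<float>& data, float factor) {
    for (float& x : data)
        x *= factor;
}
```

For a single small vector the difference is negligible; for buffers holding sensor or image data that are transformed many times per second, eliminating the extra allocation saves both memory and processor time.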
It is also important to know the various methods and properties of the algorithms, and to keep these in mind during development - something which very seldom happens. For this reason, various aspects of an algorithm need to be considered repeatedly. The amount of data that has to be processed at the same time, for example, plays an important role: certain techniques make it possible to process not just one, but 4, 8 or 16 values simultaneously. With this vectorisation and other optimisations, we can achieve high speed-up factors - a factor of 10, for instance - and the speed-up is then more or less linear. Overall, this requires less memory and processor time.
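A minimal sketch of this idea, written by us purely for illustration: in the scalar sum below, every addition depends on the previous one, so the iterations cannot overlap. The rewritten version keeps four independent partial sums, which a compiler can map onto four SIMD lanes - on x86, for example, one SSE addition instruction then handles all four accumulators at once.

```cpp
#include <cassert>
#include <cstddef>

// Scalar sum: a single accumulator forms one long dependency chain.
float sumScalar(const float* a, std::size_t n) {
    float s = 0.0f;
    for (std::size_t i = 0; i < n; ++i)
        s += a[i];
    return s;
}

// Four independent partial sums: the iterations no longer depend on
// each other, so the compiler can vectorise the loop (4 floats per
// SIMD instruction with SSE, 8 with AVX).
float sumVectorised(const float* a, std::size_t n) {
    float s0 = 0.0f, s1 = 0.0f, s2 = 0.0f, s3 = 0.0f;
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    float s = s0 + s1 + s2 + s3;
    for (; i < n; ++i)  // handle the remaining 0-3 elements
        s += a[i];
    return s;
}
```

Note that floating-point addition is not associative, so the two versions may differ in the last bits for general data - one of the aspects that has to be checked against the customer's accuracy requirements before such a rewrite is applied.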
It goes without saying that we always need our customers' know-how for our optimisation work. They have been dealing with the development for many months, sometimes even years, and have acquired a great deal of knowledge of their own particular problem. For our optimisation strategy, it is important to treat them as sparring partners and to consult them for the analysis and implementation. The OEM or Tier 1 supplier knows what steps had to be taken in the development and what milestones were set for the various software components - and thus why the computing hardware ultimately proved insufficient. We therefore advise the OEMs' development teams, checking, for instance, whether the data structures are used efficiently, and analysing elements of the programming language, etc.
A change of perspective is important here: the range of performance engineering skills we can draw on at EDAG means that we always have several different approaches in mind during our deliberations, and can quickly determine whether the direction taken can be made more efficient.
For "make it fast", we have a veritable toolbox of very special skills. Our interdisciplinary team, spanning all locations and departments, brings together a variety of backgrounds - physicists, mathematicians and computer scientists - all of whom contribute their knowledge, ideas and experience from a wide range of sectors to the consulting sessions and processes, always bearing the question of safety in mind.
The key to success differs from case to case: we look at problems from different perspectives - from the point of view of software engineering, from a mathematical perspective, but also in the context of data structure. Depending on how the algorithm is set up, different approaches are adopted, to enable a solution to be found in the most efficient way. As the potential solutions vary enormously, this is a subject that never loses its fascination.
Markus Kohout, Program Manager Autonomous Drive & Safety, knows from a wide variety of projects and many meetings with OEMs' development teams that the vehicle manufacturers set great store by performance engineering, i.e. the optimisation of running times, resources and data structures. Many of them are facing this problem, and are having to fit large numbers of complex software functions into as little hardware as possible. In addition, there is very little time available for such optimisation processes. Do you find yourself and your software development at a similar sticking point? If so, our team looks forward to your challenge and to the opportunity of reducing the footprint of another algorithm.
Further, Dr. Tobias Schmitt, our expert and development engineer at EDAG Engineering GmbH, has put together a set of guidelines with the 17 most important steps for optimising running time. This can provide you with some good, additional insight into the subject matter and initial ideas on optimisation.
Download the guidelines now.