News

Steering a Safer Future for Autonomous Systems

  • Interdisciplinary Centre for Security, Reliability and Trust (SnT)
    4 October 2022
  • Category
    Research

Car accidents are one of the leading causes of avoidable deaths in Europe, and although numbers have fallen in recent years, almost 20,000 people still lost their lives on European roads in 2021. Many solutions to the problem are under way, with the EU aiming to reduce the number of deaths and serious injuries by 50% by 2030, and to zero by 2050. That is the future we hope for, and SnT researchers are finding ways to bring it closer to reality.

In the project entitled Supporting Functional Safety in Autonomous Systems, the Software Verification and Validation (SVV) research group has been working in collaboration with IEE, a global manufacturer and supplier of advanced sensing solutions across various industries, including healthcare and automotive. Their aim is to ensure the accuracy of systems that enable cars to make autonomous decisions when an accident may be imminent. The project is also supported by the Luxembourg National Research Fund (FNR) through the BRIDGES funding instrument, which provides financial support for industry partnerships.

The research and development unit at IEE is working on systems that scan the inside of a car and detect signs that an accident could be imminent – for example, sensing whether a driver is drowsy or driving dangerously. They can even detect whether a child has been left inside the car, whether a pedestrian has stepped out in front of the car, or simply whether the airbag can be safely deployed. These systems are powered by artificial intelligence, but automating this kind of response in a car is no simple task. Built on deep neural networks, the system makes decisions on its own. This ‘black box’ style of artificial intelligence processes vast numbers of data points to produce a desired outcome, but is so complex that users cannot interpret how the algorithm arrived at a decision. As IEE’s system is safety-critical, an error could put lives at stake – so it is vital to understand how any error occurred.
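
As a loose illustration of that opacity, the short Python sketch below builds a small neural network classifier; the class labels, input size, and architecture are all assumptions for illustration, not IEE’s design. A camera frame goes in, a single decision comes out, and every step in between is plain arithmetic with no human-readable meaning.

    # A minimal, purely illustrative sketch of a "black box" classifier
    # (not IEE's system); class labels and input size are assumptions.
    import torch
    import torch.nn as nn

    CLASSES = ["attentive", "drowsy", "distracted"]  # hypothetical labels

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, len(CLASSES)),
    )

    frame = torch.rand(1, 3, 64, 64)        # stand-in for one camera frame
    logits = model(frame)                   # a long chain of opaque arithmetic
    decision = CLASSES[logits.argmax(dim=1).item()]
    print(decision)  # e.g. "drowsy" - but *why* is not recoverable from logits

Every intermediate value is just a number in a tensor; nothing in the output says which pixels or features drove the decision.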

For this reason, the team – comprising Prof. Lionel Briand, head of the SVV group, Prof. Fabrizio Pastore, Dr. Mohammed Oualid Attaoui, Fitash Ul Haq, and Hazem Fahmy – aims to support the validation and verification of IEE’s systems and ensure their accuracy. “These types of systems have to comply with safety standards, and since they have the potential to be responsible for human life, it’s important that they work accurately – and that there are appropriate countermeasures in case of a system failure,” said Prof. Fabrizio Pastore, a research scientist in the SVV research group. “Together we developed a technology that automatically verifies the software, generating explanations in the case that the software doesn’t behave correctly, and, crucially, it produces a solution to that error that improves the software,” he continued.
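
The article does not spell out how the tool works internally, but the verify-then-explain pattern it describes can be sketched in a few lines of Python. In this hedged example everything is synthetic: failing test inputs are found by comparing model predictions against ground truth, then grouped by similarity so that each cluster suggests one candidate root cause.

    # A hedged sketch of the verify-then-explain pattern described above
    # (not the team's actual tool). Features, labels, and predictions are
    # synthetic stand-ins for real test images and model outputs.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    features = rng.normal(size=(200, 8))          # per-image feature vectors
    labels = rng.integers(0, 3, size=200)         # ground truth
    predictions = labels.copy()
    wrong = rng.choice(200, size=30, replace=False)
    predictions[wrong] = (labels[wrong] + 1) % 3  # inject misclassifications

    # Step 1: verification - find every input the model mishandles.
    failure_idx = np.flatnonzero(predictions != labels)

    # Step 2: explanation - group the failing inputs by similarity, so that
    # each cluster suggests one candidate root cause of the failures.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
        features[failure_idx]
    )
    for c in range(3):
        print(f"candidate root cause {c}: {np.sum(clusters == c)} failing inputs")

An engineer can then inspect each cluster and attach a human-readable explanation, such as ‘face partially occluded’ or ‘strong shadow on one side’.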

Improving the software involves detecting what kind of information the algorithm is missing. For example, can the system still accurately detect a driver’s behaviour if they are wearing a mask, or if a shadow falls on one side of their face? By identifying the error behind a system failure, the team can generate more inputs to retrain the deep neural network model so that it can better perceive its environment when put into practice.
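
As a final hedged sketch (again, not the team’s actual pipeline), the example below generates extra training samples for one hypothetical failure category, faces in shadow, by darkening one half of each failing image. In practice, the new inputs would reproduce whatever condition the explanation step uncovered, and the deep neural network would then be retrained on the extended dataset.

    # A hedged sketch of the retraining idea, for one hypothetical failure
    # category: faces in shadow. The darkening transform is an assumption
    # standing in for whatever condition the explanation step uncovered.
    import numpy as np

    def augment_shadow_failures(images, n_copies=5, rng=None):
        """Create extra training samples by darkening one half of each
        failing image, mimicking a shadow across the face."""
        rng = rng or np.random.default_rng(0)
        out = []
        for img in images:
            half = img.shape[1] // 2
            for _ in range(n_copies):
                shadowed = img.copy()
                if rng.integers(0, 2) == 0:
                    shadowed[:, :half] *= rng.uniform(0.3, 0.7)  # left shadow
                else:
                    shadowed[:, half:] *= rng.uniform(0.3, 0.7)  # right shadow
                out.append(shadowed)
        return np.stack(out)

    failing = np.random.default_rng(1).uniform(size=(4, 32, 32))  # toy images
    extra = augment_shadow_failures(failing)
    print(extra.shape)  # (20, 32, 32): new inputs for retraining the model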