Confiance.ai (www.confiance.ai) is the largest initiative in Europe for developing a software platform for trustworthy AI engineering. It gathers nine major industrial companies (Air Liquide, Airbus, Atos, Naval Group, Renault, Safran, Sopra Steria, Thales and Valeo), two technology research institutes (SystemX and Saint Exupéry) and two top-level scientific organizations (Inria and CEA) within a four-year, €45M R&T program. Moreover, thanks to two subsequent requests for expressions of interest, the program benefits from the contributions of about fifteen academic laboratories, including SHS teams (through PhDs and postdocs), and a similar number of deep-tech startups bringing additional expertise. This round table aims to present two years of feedback on breaking down the barriers associated with the scientific challenges of AI-based critical systems, covering a wide range of issues (explainability, dependability, robustness, etc.) that together increase trust in AI-based critical systems (life-critical, mission-critical, business-critical).
In this panel discussion, the Confiance.ai programme will first be introduced. The discussants will then address the pillars of, and the barriers to, improving the trustworthiness of AI-enabled systems. Perspectives on current and expected programme breakthroughs will also be reviewed.
Dr. Fateh Kaakai (Thales, IRT SystemX)
Dr. Fateh Kaakai is a safety engineering expert currently involved in the digital transformation of Thales Air Mobility Solutions with cloud-native and machine learning technologies. He is an expert seconded to the French national program “Trusted Artificial Intelligence” (€45M budget) and a contributor to EUROCAE and SAE standardization working groups (co-chair of G34/WG114 sub-group 3, dedicated to AI technologies in aeronautical safety-critical systems). He was the Thales representative to EASA (European Union Aviation Safety Agency) in the air traffic management field from 2018 to 2020. He has also been involved in several safety research and innovation activities (the SESAR, SESAR 2020 and Future Sky Safety programs). His previous industrial safety experience includes 5 years in the railway domain (CBTC) and 7 years in the air traffic control (ATC) domain. His Ph.D. in automatic control, received in 2007 from IFSTTAR and the University of Besançon, was dedicated to the development of formal methods (Petri nets) for safety and capacity assessment in ground transportation networks. He holds 4 patents and has (co-)authored one book on abstract interpretation of software, as well as several journal and conference papers. He graduated from Ecole Centrale de Lille (Master of Science) in 2003.
Dr. Souhaiel Khalfaoui (Valeo)
Souhaiel Khalfaoui is an AI expert at Valeo, where he leads the Applied Machine Learning team. He received his MSc degree in robotics and intelligent systems from Pierre et Marie Curie University (Paris 6), France, in 2009, and his PhD degree in computer vision from the University of Burgundy, Dijon, France, in November 2012. Before joining Valeo, he was a research manager at the startup Vecteo SAS for four years. He is in charge of defining the AI methodology for AI-based projects at Valeo group level. Souhaiel is part of the Confiance.ai program, focusing on strengthening the trustworthiness of AI-based critical systems.
Dr. Augustin Lemesle (CEA)
Augustin Lemesle is a research engineer at the Software Safety and Security Laboratory at CEA. He is part of the AISER team on AI safety and works on the application of formal methods to artificial intelligence safety in both academic and industrial projects. Since 2020, he has been the lead developer of two of the laboratory's tools: PyRAT, a neural network verification tool, and AIMOS, a framework for metamorphic testing. He has also been involved in a variety of projects: SPARTA as part of the coordination team, ENSURESEC as technical coordinator, Confiance.ai, TRUMPET and more.