Keynote: EUROCAE WG-114 – SAE G-34: a joint standardization initiative to support the Artificial Intelligence revolution in aeronautics
Christophe Gabreau (Airbus, co-chair of EUROCAE WG-114 Group)

Christophe GABREAU is a Software Processes specialist in the domains of development, certification and airworthiness at AIRBUS (Toulouse). A software regulation expert, he is the focal point for certification aspects of Artificial Intelligence in embedded safety-critical software. He co-chairs EUROCAE WG-114 (a joint group with SAE G-34), which is in charge of developing a standard for the certification/approval of aeronautical systems implementing AI technologies. He also contributes to the ANITI/DEEL (Dependable and Explainable Learning) research project at the Toulouse IRT.
He graduated with a master’s degree in Electronics and Computer Engineering (ESEO-Angers) in 1990 and has been working for three decades in the field of embedded avionics software. The first part of his career was dedicated to operational embedded developments for CS-Defense, Alcatel Space and, mainly, THALES, where the development and certification of the Flight Management System (FMS) for the AIRBUS SA/LR families was one of his main achievements. In 2006 he left THALES to found SafetySOFT, a software audit company, and performed (on behalf of Airbus Commercial, Airbus Helicopters and ATR) the monitoring of the main avionics suppliers to ensure the conformity of their equipment to certification regulations and industrial standards (DO-178/ED-12, DO-200/ED-76). He was hired by APSYS (an AIRBUS company) in 2016 to take a Software Assurance Nominee position at AIRBUS Filton (Landing Gear and Fuel systems). He eventually joined the Hardware and Software Qualification Group at AIRBUS Toulouse at the end of 2017 as a Software Designated Certification Specialist.

Beatrice Pesquet-Popescu (Thales, co-chair of EUROCAE WG-114 Group)

Dr. Béatrice PESQUET-POPESCU is a Co-Chair of the joint EUROCAE WG-114/SAE G-34 standardization committee for AI in aeronautical systems. She joined Thales Air Mobility Solutions in January 2018 as the Research and Innovation Director, where she defines and implements the AI strategy and manages innovation in this field. Her team showcases data-based algorithms and services and integrates these ML modules into Thales AMS products, with particular attention to safety, qualification needs and engineering processes for ML. Previously, she was a Full Professor and Head of the Multimedia Group at Telecom ParisTech, and was also the Head of the UBIMEDIA common research laboratory between Alcatel-Lucent Bell Labs Paris and the Institut Télécom.
Dr. Pesquet-Popescu is an IEEE Fellow and a EURASIP Fellow, an elected member of the Board of Governors of the IEEE Signal Processing Society, and was a EURASIP BoG member and a member of the board of the French GRETSI association. She also chaired two Technical Committees of the IEEE SPS: Image, Video and Multidimensional Signal Processing (IVMSP) and Industrial Digital Signal Processing (IDSP). Béatrice Pesquet-Popescu was an Associate Editor for seven international journals, the Technical Co-Chair of two conferences, and the General Co-Chair of three international conferences. She is a recipient of the Young Investigator Award of the French Physical Society (1998) and of several best paper awards. She holds over 35 patents, has (co-)authored 4 books and over 400 book chapters, journal articles, and conference papers, and has (co-)directed 35 PhD theses.

Fateh Kaakai (Thales, Sub-Group Leader of EUROCAE WG-114 Group)

Dr. Fateh KAAKAI is a Sub-Group Leader of the EUROCAE WG-114 group, working on the safe design of Machine Learning models in the aeronautical domain. He is a safety engineering expert currently involved in the digital transformation of Thales Air Mobility Solutions with cloud-native and Machine Learning technologies. He is currently co-directing a PhD thesis on the evaluation of Neural Network robustness. He is a regular contributor to EUROCAE standardization working groups (aviation safety standards and guidelines). He was the Thales representative to EASA (European Union Aviation Safety Agency) in the Air Traffic Management field from 2018 to 2020. He has also been involved in several safety research and innovation activities (SESAR, SESAR 2020 and Future Sky Safety programs).
His previous industrial safety experience includes five years in the railway domain (CBTC) and seven years in the Air Traffic Control (ATC) domain. His Ph.D. in Automatic Control, received in 2007 from IFSTTAR and the University of Besançon, was dedicated to the development of formal methods (Petri nets) for safety and capacity assessment in ground transportation networks. He holds three patents and has (co-)authored one book on Abstract Interpretation of Software as well as several journal and conference papers. He graduated from Ecole Centrale de Lille (Master of Science) in 2003.

Abstract: AI has the potential to disrupt the aerospace industry, impacting all areas in which computing and aerospace intersect. AI is a broad subject, still being actively developed from a confluence of many disciplines, including mathematics, computing, cognitive science, software development, data science, control theory, and others. It demands a collaborative approach with experts contributing from multiple domains. AI technologies are becoming progressively more embedded into the digital systems used to design, manufacture, operate, and maintain both aerial vehicles and ground-based systems. Leveraged appropriately, AI-driven solutions could transform the products and services that aerospace companies provide, accelerating the pace of change. Specifically, Machine Learning (ML) technologies have the potential to revolutionize established paradigms of aeronautical system development, including those concerned with safety-critical applications.
With growing commercial pressure for Artificial Intelligence (AI) solutions anticipated within the aerospace industry over the coming years, there is an urgent call for regulation and for the emergence of norms around acceptable usage. In response, two working groups were set up independently on either side of the Atlantic during 2019 to address concerns around assuring products and services that exploit AI technologies: WG-114 was established by EUROCAE in Europe and G-34 by SAE in the United States.
Both working groups were created to produce guidance on the safe and successful adoption of AI technologies in aeronautical systems, through consensus among many experts and practitioners in industry and academia. By bilateral agreement, the groups formed a joint committee in June 2019.
The joint working group will evaluate key applications for AI usage within aeronautical systems, with a scope encompassing ground-based equipment and airborne vehicles, including Unmanned Aircraft Systems (UAS) products. In terms of processes, the full lifecycle will be under consideration, from design and manufacture, to operation and through-life maintenance.
A key deliverable will be documented standards providing guidance on assuring the safety of systems that utilize AI, through agreed acceptable means of compliance with regulatory requirements.

Invited Talk: Methods and tools for trusted AI: an urgent challenge for industry
Juliette Mattioli (Thales, France)

In 1993, she received a Ph.D. in Applied Mathematics for AI from Paris-Dauphine University and became an R&D engineer working on image processing and combinatorial problem solving. From 2001 to 2016, she led a research laboratory dedicated to decision support, combining data-driven AI, semantics and knowledge-based reasoning. She was appointed Strategy & Innovation Director at the Thales Technical Directorate in 2010 and AI Senior Expert in 2012. She was a member of the #FranceIA mission (2016), one of the five representatives of France at the G7 Innovators (2017), a contributor to the “Plan IA 2021” for IDF (2018) and, since 2018, co-VP of the DS&AI Hub (Pôle Systematic).

Rodolphe Gelin (Renault, France)

Rodolphe graduated in 1989 from Ecole Nationale des Ponts et Chaussées and holds a Master’s degree in AI. He worked for 20 years at the French Atomic Energy Commission (CEA), where he contributed to the development of assistive robotics and oversaw interactive system programs. In 2009, he joined Aldebaran (SoftBank Robotics) to manage the Romeo project, which aimed to develop a humanoid robot to assist elderly people. In 2016, he became EVP Chief Innovation Officer of SoftBank Robotics Europe. Since 2019, he has been an expert in deep learning for intelligent vehicles at Renault. After several books on virtual reality and robotics, he published “L’IA et Nous” (2019) about the societal impacts of AI.

Abstract: At the French national level, major industrial players in the fields of automotive, aeronautics, defense, manufacturing and energy (Air Liquide, Airbus, Atos, EDF, Naval Group, Renault, Safran, Sopra Steria, Thales, Total and Valeo), with the support of academic partners (CEA, INRIA, IRT Saint Exupéry and IRT SystemX), are collaborating to address AI trustworthiness issues through the French national program “Confiance.ai” (45 M€ budget). The program is built around operational use cases from various domains, such as automatic trajectory control of automotive vehicles, automatic defect detection for safety-critical mechanical components in energy power plants, airplane automatic braking systems for landing operations, and quality control in manufacturing. The objective is to set up generic tooling solutions that answer the requirements of these several domains, in order to capitalize on the knowledge, the development efforts and the tooling technologies. The program aims to bridge the gap between AI proofs of concept (PoCs) and AI deployment within critical systems, toward certification, by providing an interoperable engineering workbench that supports AI processes and practices with methods and tools throughout the overall lifecycle of the AI-based system.


Invited Talk: Challenges and Directions in Avoiding Negative Side Effects
Sandhya Saisubramanian (University of Massachusetts Amherst, USA)

Sandhya Saisubramanian is a Ph.D. candidate in the College of Information and Computer Sciences at the University of Massachusetts Amherst. Her research focuses on developing general techniques for reliable decision-making in autonomous systems operating in the open world. Her recent work on negative side effects received the IJCAI 2020 Distinguished Paper Award. She was also selected as a Rising Star in EECS by UC Berkeley in 2020.

Abstract: A world populated with intelligent and autonomous systems that simplify our lives is gradually becoming a reality. These systems have broad societal impacts, and it is critical to ensure that they operate safely when deployed in the real world. Due to the practical challenges in data collection and precise model specification, AI systems often operate based on imperfect models. This may lead to unintentional, undesirable consequences when deployed, whose severity ranges from mild, tolerable events to safety-critical failures. A particular form of unexpected, undesirable consequence is negative side effects, which occur because the agent’s model and objective function focus on some aspects of the environment while its operation may impact additional aspects of the environment.

In this talk, I will (1) focus on the challenges in identifying and mitigating the impacts of undesirable side effects of an agent’s actions when operating in the open world; (2) provide a comprehensive overview of the different forms of negative side effects; and (3) discuss open questions and suggest future research directions. I will also discuss some of the results from our recent human-subjects study on understanding user tolerance of and response to negative side effects, and the impact of negative side effects on users’ trust in the system’s capabilities.
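To make the notion of a negative side effect concrete, below is a minimal, hypothetical Python sketch (not taken from the talk or the speaker’s work; the gridworld, the “vase” feature and all values are illustrative assumptions). The designer’s reward model omits the vase, so the trajectory that is optimal under the modeled objective is also the one that breaks it.

    # Toy gridworld: states are (row, col); the agent starts at (0, 0),
    # the goal is at (0, 2) and a vase sits at (0, 1), on the shortest path.
    GOAL = (0, 2)
    VASE = (0, 1)

    def modeled_reward(state):
        # The designer's objective: reach the goal quickly.
        # The vase is simply absent from this model.
        return 10 if state == GOAL else -1

    def true_impact(trajectory):
        # The unmodeled aspect of the environment: was the vase broken?
        return "vase broken" if VASE in trajectory else "vase intact"

    # A trajectory that is optimal under the modeled objective...
    shortest_path = [(0, 0), (0, 1), (0, 2)]
    # ...and a slightly longer one that avoids the unmodeled side effect.
    detour_path = [(0, 0), (1, 0), (1, 1), (1, 2), (0, 2)]

    for path in (shortest_path, detour_path):
        ret = sum(modeled_reward(s) for s in path[1:])
        print(f"modeled return = {ret}, outcome: {true_impact(path)}")

The shortest path earns the higher modeled return yet breaks the vase; because the vase never appears in the reward model, nothing in the agent’s objective discourages that outcome, which is the gap that side-effect-aware formulations aim to close.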
