EnnCore – Special Session

EnnCore addresses the fundamental security problem of guaranteeing safety, transparency, and robustness in neural-based architectures. Specifically, EnnCore aims to enable system designers to specify essential conceptual/behavioral properties of neural-based systems, verify them, and thus safeguard the system against unpredictable behavior and attacks. In this respect, EnnCore will pioneer the dialogue between contemporary explainable neural models and full-stack neural software verification. This special session will discuss the limitations of existing studies, our research objectives, current achievements, and future trends towards this goal. In particular, we will discuss the development and evaluation of new methods, algorithms, and tools to achieve fully verifiable intelligent systems that are explainable, whose correct behavior is guaranteed, and that are robust against attacks. We also describe how EnnCore will be validated on two diverse, high-impact application scenarios: securing an AI system for (i) cancer diagnosis and (ii) energy demand response.

Program

Special Session scheduled on Monday Feb 28, 2022 from 14:00 to 15:30 UTC

Time (UTC) — Description

14:00-14:10 Welcome and overview, Lucas Cordeiro (University of Manchester, UK)
14:10-14:25 Verifying Quantized Neural Networks using SMT-Based Model Checking, Edoardo Manino (University of Manchester, UK)
14:25-14:40 Explainability and Inference Controls, André Freitas (University of Manchester, UK & Idiap Research Institute, Switzerland)
14:40-14:55 Safety Verification of Deep Reinforcement Learning, Yi Dong (University of Liverpool, UK)
14:55-15:10 Privacy Friendly Energy Consumption Prediction: Real Case-Studies, Mustafa A. Mustafa (University of Manchester, UK / KU Leuven, Belgium)
15:10-15:30 Closed-loop Safety of Bayesian Neural Networks and Stochastic Control Systems, Mathias Lechner (IST Austria)


Special Session Speakers

Lucas Cordeiro (University of Manchester, UK)

Lucas C. Cordeiro is a Reader in the Department of Computer Science at the University of Manchester (UoM), where he leads the Systems and Software Security (S3) Research Group. Dr. Cordeiro is the Director of the Arm Centre of Excellence at UoM; he also leads the Trusted Digital Systems Cluster at the Centre for Digital Trust and Society. His work focuses on software model checking, automated testing, program synthesis, software security, and embedded and cyber-physical systems. He has co-authored more than 120 refereed papers in leading venues such as ASE, CAV, ICSE, HSCC, FSE, and TACAS. He leads a large EPSRC project on verifiable and explainable secure AI, and he is Co-I on three further projects related to software security and automated reasoning, with a portfolio of approximately £5.4m.

Edoardo Manino (University of Manchester, UK)

Edoardo Manino is a Post-Doctoral Researcher in the Department of Computer Science at the University of Manchester, UK. He is part of the EnnCore project and focuses on automated verification of neural network architectures. His background is in Bayesian machine learning, a topic in which he was recently awarded a Ph.D. by the University of Southampton. His other research interests range from network science to algorithmic game theory and reinforcement learning.

André Freitas (University of Manchester, UK & Idiap Research Institute, Switzerland)

André Freitas leads the Reasoning & Explainable AI group at the Idiap Research Institute and at the Department of Computer Science at the University of Manchester. He is also the AI group leader of the digital Experimental Cancer Medicine Team (Cancer Research UK). His main research interests lie in enabling the development of AI methods that support abstract, explainable, and flexible inference. In particular, he investigates how combining neural and symbolic data representation paradigms can deliver better inference. His research topics include explanation generation, natural language inference, explainable question answering, knowledge graphs, and open information extraction. He is actively engaged in collaborative projects with industrial and clinical partners.

Yi Dong (University of Liverpool, UK)

Yi Dong is a post-doctoral researcher in the Department of Computer Science at the University of Liverpool, UK. He works on End-to-End Conceptual Guarding of Neural Architectures, Safety Arguments for Learning-enabled Autonomous Underwater Vehicles, and Foundations for Continuous Engineering of Trustworthy Autonomy. His research interests include deep reinforcement learning, probabilistic verification, explainable AI, and distributed optimisation.

Mustafa A. Mustafa (University of Manchester, UK / KU Leuven, Belgium)

Mustafa A. Mustafa is a research fellow at the University of Manchester and KU Leuven. His research interests include information security, data privacy, and applied cryptography in areas such as the smart grid, smart cities, e-health, and the IoT. His main expertise lies in applied cryptography in the smart-grid domain, where he has filed one patent and published numerous papers at leading smart-grid conferences.

Mathias Lechner (Institute of Science and Technology Austria, Austria)

Mathias Lechner is a final-year Ph.D. student at IST Austria. His Ph.D. thesis, supervised by Tom Henzinger, focuses on the intersection of machine learning, formal verification, and robotics. The ultimate goal of his work is to learn neural networks that provide safety and robustness guarantees beyond their accuracy. His research has been published at top conferences such as NeurIPS, ICML, ICLR, AAAI, and ICRA.