Dr. Sandeep Neema, DARPA
Keynote: Assured Autonomy

The DARPA Assured Autonomy program aims to advance the ways computing systems learn and evolve through machine learning, so that they can better manage variations in the environment and make autonomous systems, such as driverless vehicles, more predictable.

Dr. Sandeep Neema leads DARPA's Assured Autonomy program. His research interests include cyber-physical systems, model-based design methodologies, distributed real-time systems, and mobile software technologies.

Prior to joining DARPA, Dr. Neema was a research associate professor of electrical engineering and computer science at Vanderbilt University, and a senior research scientist at the Institute for Software Integrated Systems, also at Vanderbilt University. Dr. Neema participated in numerous DARPA initiatives throughout his career, including the Transformative Apps, Adaptive Vehicle Make, and Model-based Integration of Embedded Systems programs.

Dr. Neema has authored and co-authored more than 100 peer-reviewed conference and journal publications and book chapters. Dr. Neema holds a doctorate in electrical engineering and computer science from Vanderbilt University, and a master's in electrical engineering from Utah State University. He earned a bachelor of technology degree in electrical engineering from the Indian Institute of Technology, New Delhi, India.

Prof. Francesca Rossi, IBM and University of Padova
Keynote: Ethically Bounded AI

Humans and machines will often need to work together and agree on common decisions. We claim that it is possible to adapt current logic-based modelling and reasoning frameworks, such as CP-nets and constraint-based scheduling under uncertainty, to model safety constraints, moral values, and ethical principles.
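As a toy illustration of the general idea (not the speaker's actual framework; all names and data below are hypothetical), one can combine hard safety or ethical constraints with a soft preference ordering: candidate decisions are first filtered by the constraints, and only the feasible ones are then ranked by preference.

```python
# Toy sketch: hard ethical/safety constraints bounding a preference-based choice.
# All options, constraints, and thresholds here are illustrative assumptions.

def choose(options, constraints, preference_key):
    """Keep only options satisfying every hard constraint,
    then return them ranked by the (soft) preference."""
    feasible = [o for o in options if all(c(o) for c in constraints)]
    return sorted(feasible, key=preference_key)

# Candidate driving actions, with speed and clearance to a pedestrian.
options = [
    {"action": "overtake", "speed": 70, "clearance_m": 0.5},
    {"action": "follow",   "speed": 40, "clearance_m": 2.0},
    {"action": "stop",     "speed": 0,  "clearance_m": 5.0},
]

# Hard constraints encode rules that may never be traded off.
constraints = [
    lambda o: o["clearance_m"] >= 1.0,   # keep a safe distance
    lambda o: o["speed"] <= 50,          # obey the speed limit
]

# Soft preference: among safe actions, prefer faster progress.
ranked = choose(options, constraints, preference_key=lambda o: -o["speed"])
print([o["action"] for o in ranked])  # ['follow', 'stop']
```

The design point is that the preference never sees the unsafe option at all: "overtake" is excluded by the constraints before ranking, rather than merely penalised.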

Francesca Rossi is the IBM AI Ethics Global Leader and a Distinguished Research Staff Member at IBM Research, and professor of computer science at the University of Padova, Italy, currently on leave.

Her research interests focus on artificial intelligence; specifically, they include constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues in the development and behaviour of AI systems, in particular for decision support systems for group decision making. She has published over 170 scientific articles in journals, conference proceedings, and book chapters. She has co-authored a book. She has edited 17 volumes, spanning conference proceedings, collections of contributions, and special issues of journals, as well as the Handbook of Constraint Programming.

She is an AAAI and EurAI fellow, and a 2015 Radcliffe fellow. She has been president of IJCAI and an executive councillor of AAAI. She is Editor-in-Chief of JAIR and a member of the editorial boards of Constraints, Artificial Intelligence, AMAI, and KAIS. She is a member of the scientific advisory board of the Future of Life Institute and a deputy academic director of the Leverhulme Centre for the Future of Intelligence. She is on the executive committee of the IEEE global initiative on ethical considerations in the development of autonomous and intelligent systems, and she is a member of the board of directors of the Partnership on AI, where she represents IBM as one of the founding partners. She is also a member of the high-level expert group on AI, which will advise the European Commission on AI policies and AI ethics guidelines.

Dr. Peter Eckersley, Partnership on AI & EFF
Invited Talk: Impossibility and Uncertainty Theorems in AI Value Alignment
(or why your AGI should not have a utility function)

We show that impossibility theorems in ethics have implications for the design of powerful algorithmic systems, such as high-stakes AI applications or sufficiently rigid and rule-based bureaucracies. We show that such paradoxes can be avoided by using uncertain objectives, such as partial orders or probability distributions over total orders; we prove uncertainty theorems that place a minimum bound on the amount of uncertainty required.
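To make the "partial order as uncertain objective" idea concrete, here is a hypothetical sketch (not material from the talk): given total orders over outcomes proposed by conflicting ethical theories, keep only the pairwise preferences on which all of them agree, and treat any outcome not dominated under that partial order as admissible, rather than forcing a single total order.

```python
# Illustrative sketch: a partial order as the intersection of several
# total orders, with undominated outcomes treated as admissible.
from itertools import combinations

def agreed_pairs(total_orders):
    """Pairwise preferences (a preferred to b) shared by every total order."""
    pairs = set()
    items = total_orders[0]
    for a, b in combinations(items, 2):
        if all(o.index(a) < o.index(b) for o in total_orders):
            pairs.add((a, b))
        elif all(o.index(b) < o.index(a) for o in total_orders):
            pairs.add((b, a))
    return pairs

def admissible(items, pairs):
    """Outcomes not dominated by any agreed preference."""
    dominated = {b for (_, b) in pairs}
    return [i for i in items if i not in dominated]

# Two ethical theories rank three outcomes differently (hypothetical data).
orders = [
    ["swerve", "brake", "accelerate"],
    ["brake", "swerve", "accelerate"],
]
pairs = agreed_pairs(orders)          # both agree: accelerate is worst
print(admissible(orders[0], pairs))   # ['swerve', 'brake']
```

Because the two orders disagree on swerve vs. brake, neither dominates the other under the agreed partial order; only the unanimously dispreferred outcome is ruled out.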

Peter Eckersley is Director of Research at the Partnership on AI, a collaboration between the major technology companies, civil society and academia to ensure that AI is designed and used in service of the public good. He leads PAI’s research on machine learning policy and ethics, including projects within PAI itself and projects in collaboration with the Partnership’s extensive membership. Peter’s AI research interests are broad, including measuring progress in the field, figuring out how to translate ethical and safety concerns into mathematical constraints, and setting sound policies around high-stakes applications such as self-driving vehicles, recidivism prediction, cybersecurity, and military applications of AI.

Prior to joining PAI, Peter was Chief Computer Scientist for the Electronic Frontier Foundation (EFF) in San Francisco. At EFF he led a team of technologists who launched numerous computer security and privacy projects, including Let's Encrypt and Certbot, Panopticlick, HTTPS Everywhere, the SSL Observatory, and Privacy Badger; they also worked on diverse Internet policy issues, including campaigning to preserve open wireless networks; fighting to keep modern computing platforms open; helping to start the campaign against the SOPA/PIPA Internet blacklist legislation; and running the first controlled tests to confirm that Comcast was using forged reset packets to interfere with P2P protocols.

Peter holds a PhD in computer science and law from the University of Melbourne; his research focused on the practicality and desirability of using alternative compensation systems to legalize P2P file sharing and similar distribution tools while still paying authors and artists for their work.

Dr. Ian Goodfellow, Google Brain
Invited Talk: Adversarial Robustness for AI Safety

This talk addresses why developing robust models is important for AI safety: robustness is needed to make reward maximization safe, to enable models to succeed at learning to evaluate safety, and to evaluate human preferences.
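As a minimal illustration of the adversarial-examples phenomenon underlying this talk, here is a textbook fast-gradient-sign sketch on a toy logistic-regression model (an assumption for illustration, not material from the talk itself): a small input perturbation in the sign direction of the loss gradient sharply increases the model's loss.

```python
# Toy FGSM sketch on a logistic-regression model (illustrative weights/data).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, w, b, y):
    """Binary cross-entropy of the logistic model at input x with label y."""
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# A "trained" model and an input it handles correctly (label y = 1).
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 1.0]), 1.0

# FGSM: the gradient of the loss w.r.t. the *input* is (p - y) * w; stepping
# in its sign direction maximises loss increase per unit of L-inf perturbation.
grad_x = (sigmoid(w @ x + b) - y) * w
x_adv = x + 0.5 * np.sign(grad_x)

print(loss(x, w, b, y), loss(x_adv, w, b, y))  # adversarial loss is larger
```

The same one-line attack scales to deep networks by backpropagating the loss to the input, which is why robustness to such perturbations matters for any safety property evaluated by a learned model.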

Ian Goodfellow is a senior staff research scientist at Google Brain. He leads a group of researchers studying adversarial techniques in AI.

He developed the first defenses against adversarial examples, was among the first to study the security and privacy of neural networks, and helped to popularize the field of machine learning security and privacy. He is the lead author of the MIT Press textbook Deep Learning (www.deeplearningbook.org).

Previously, Ian worked at OpenAI and Willow Garage, and studied with Andrew Ng and Gary Bradski at Stanford University, and with Yoshua Bengio and Aaron Courville at Université de Montréal. In 2017, Ian was listed among MIT Technology Review's 35 Innovators Under 35, recognizing his invention of generative adversarial networks.

Prof. Alessio R. Lomuscio, Imperial College London
Invited Talk: Reachability Analysis for Neural Agent-Environment Systems

We present a novel model for studying agent-environment systems, where the agents are implemented via feed-forward ReLU neural networks. We study several reachability problems for the system, ranging from one-step reachability to fixed multi-step and arbitrary-step reachability, in order to study the system's evolution. We automate the various reachability problems studied by recasting them as mixed-integer linear programming problems.
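The MILP recasting can be sketched on a single ReLU unit: y = max(0, x) with x in [l, u] is encoded exactly using one binary variable and the standard big-M constraints, and the reachable interval of y is then obtained by maximising and minimising y. The use of scipy.optimize.milp below is an assumption for illustration; the speaker's actual toolchain is not specified in this abstract.

```python
# One-step reachability for y = ReLU(x), x in [l, u], as a small MILP.
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

l, u = -1.0, 2.0  # illustrative input bounds, with l < 0 < u

# Decision variables v = (x, y, a); binary a = 1 iff the unit is active.
# Big-M encoding of y = max(0, x):
#   y >= x,   y >= 0,   y <= x - l*(1 - a),   y <= u*a
A = np.array([
    [-1.0, 1.0, 0.0],   # -x + y         >= 0
    [-1.0, 1.0, -l],    # -x + y - l*a   <= -l   (i.e. y <= x - l*(1 - a))
    [0.0,  1.0, -u],    #      y - u*a   <= 0    (i.e. y <= u*a)
])
lc = LinearConstraint(A, [0, -np.inf, -np.inf], [np.inf, -l, 0])
bounds = Bounds([l, 0.0, 0.0], [u, u, 1.0])
integrality = [0, 0, 1]  # only a is required to be integer (binary)

# Maximise y (minimise -y) for the upper end of the reachable set,
# minimise y for the lower end.
hi = milp(c=[0, -1, 0], constraints=lc, bounds=bounds, integrality=integrality)
lo = milp(c=[0, 1, 0], constraints=lc, bounds=bounds, integrality=integrality)

print(lo.fun, -hi.fun)  # reachable interval of y: 0.0, 2.0
```

Layered networks compose such encodings, one binary variable per ReLU, so multi-step reachability becomes a larger MILP of the same shape.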

Alessio Lomuscio (http://www.doc.ic.ac.uk/~alessio), PhD in 1999, is professor of Logic in Multi-Agent Systems in the Department of Computing at Imperial College London (UK), where he leads the Verification of Autonomous Systems Group. He is a Fellow of the European Association of Artificial Intelligence and currently holds a Royal Academy of Engineering Chair in Emerging Technologies. He previously held an EPSRC Leadership Fellowship between 2010 and 2015.

Alessio's research interests concern the realisation of safe artificial intelligence. Since 2000, in collaboration with colleagues, he has worked on the development of formal methods for the verification of autonomous systems and multi-agent systems. To this end, he has put forward several methods based on model checking and various forms of abstraction to verify AI systems. A particular focus of Alessio's work has been support for AI specifications, including those concerning the knowledge and strategic properties of the agents in their interactions. The methods and resulting toolkits have found applications in autonomous vehicles, robotics, and swarms.

He has published over 100 papers in conferences on AI (including AAMAS, IJCAI, AAAI, ECAI), verification and formal methods (CAV, SEFM), and services computing (ICSOC, ICWS), and over 30 papers in international journals (AIJ, JAIR, JAAMAS).