Safety in Artificial Intelligence (AI) should not be an option, but a design principle. However, there are different levels of safety, different ethical standards and values, and different degrees of liability, which force us to face trade-offs or seek alternative solutions. These choices can only be analyzed holistically if we integrate the technological and ethical perspectives into the engineering problem, and consider both the theoretical and practical challenges for AI safety. This view must cover a wide range of AI paradigms, including systems that are specific to a particular application as well as more general systems that may pose unanticipated risks. We must also bridge short-term with long-term issues, idealistic with pragmatic solutions, operational with policy issues, and industry with academia, in order to build, evaluate, deploy, operate and maintain AI-based systems that are truly safe.

This workshop seeks to explore new ideas on AI safety with particular focus on addressing the following questions:

  • What is the status of existing approaches in ensuring AI and Machine Learning (ML) safety and what are the gaps?
  • How can we engineer trustworthy AI software architectures?
  • How can we make AI-based systems more ethically aligned?
  • What safety engineering considerations are required to develop safe human-machine interaction?
  • What AI safety considerations and experiences are relevant from industry?
  • How can we characterize or evaluate AI systems according to their potential risks and vulnerabilities?
  • How can we develop solid technical visions and new paradigms for AI safety?
  • How do metrics of capability and generality, and their trade-offs with performance, affect safety?

The main interest of the proposed workshop is to look holistically at AI and safety engineering, together with the ethical and legal issues, in order to build trustworthy intelligent autonomous machines.

As part of a “sister” workshop (AISafety 2019), we started the AI Safety Landscape initiative. This initiative aims to define an AI safety landscape that provides a “view” of the current needs, challenges, and state of the art and practice in this field.

Contributions are sought in (but are not limited to) the following topics:

  • Safety in AI-based system architectures 
  • Continuous Verification and Validation (V&V) and predictability of AI safety properties
  • Runtime monitoring and (self-)adaptation of AI safety
  • Accountability, responsibility and liability of AI-based systems
  • Effect of uncertainty in AI safety
  • Avoiding negative side effects in AI-based systems
  • Role and effectiveness of oversight: corrigibility and interruptibility
  • Loss of values and the catastrophic forgetting problem
  • Confidence, self-esteem and the distributional shift problem
  • Safety of Artificial General Intelligence (AGI) systems and the role of generality
  • Reward hacking and training corruption
  • Self-explanation, self-criticism and the transparency problem
  • Human-machine interaction safety
  • Regulating AI-based systems: safety standards and certification
  • Human-in-the-loop and the scalable oversight problem
  • Evaluation platforms for AI safety
  • AI safety education and awareness
  • Experiences in AI-based safety-critical systems, including industrial processes, healthcare, automotive systems, robotics and critical infrastructures, among others