Speakers

Invited Talk 1: Digitalization and automation for driverless regional trains – The safe.trAIn research project

Eng. Martin Rothfelder (Siemens AG, Technology)

Martin Rothfelder received his Diploma in electrical engineering in 1991 from the Ruhr-University Bochum, Germany. Since then he has focused on the quality and dependability of software-intensive embedded systems. From 1991 to 1996 he was a safety assessor at TÜV Rheinland, and since 1996 he has worked in Siemens research departments. He is currently head of the Siemens research group Dependability Analysis & Management within the technology field Software Systems Processes. Martin Rothfelder is an acknowledged assessor for safety-relevant software on trains.

Highly automated train operation increases the performance of railway systems, e.g. with respect to the capacity of existing tracks or the flexibility of train services. Artificial Intelligence (AI) and Machine Learning (ML) offer great potential to take over certain tasks of a human train driver, such as obstacle detection. This talk summarizes the safe.trAIn project (2022–2024), which aims to lay the foundation for the safe use of AI/ML in the driverless operation of rail vehicles.

The project investigates methods to demonstrate the trustworthiness of AI-based functions, taking into account robustness, performance, uncertainty, transparency, and out-of-distribution aspects of the AI/ML model. These methods are integrated into a comprehensive and continuous testing and verification process for trains.

Project partners come from the railway domain, academia, standardization bodies, and safety assessment bodies. Industrial partners will use the outcomes to launch automation solutions that enable highly automated and driverless operation of rail vehicles. Relevant results from the safe.trAIn project will be fed into related standardization activities.

The research receives funding from the German Federal Ministry for Economic Affairs and Climate Action (BMWK; grant agreement 19I21039A).


Invited Talk 2: Foundations of Cooperative AI

Prof. Vincent Conitzer (Carnegie Mellon University and University of Oxford)

Vincent Conitzer is Professor of Computer Science (with affiliate/courtesy appointments in Machine Learning, Philosophy, and the Tepper School of Business) at Carnegie Mellon University, where he directs the Foundations of Cooperative AI Lab (FOCAL). He is also Head of Technical AI Engagement at the Institute for Ethics in AI, and Professor of Computer Science and Philosophy, at the University of Oxford.

Prior to joining CMU, Conitzer was the Kimberly J. Jenkins Distinguished University Professor of New Technologies and Professor of Computer Science, Professor of Economics, and Professor of Philosophy at Duke University. He received Ph.D. (2006) and M.S. (2003) degrees in Computer Science from Carnegie Mellon University, and an A.B. (2001) degree in Applied Mathematics from Harvard University.

Conitzer has received the 2021 ACM/SIGAI Autonomous Agents Research Award, the Social Choice and Welfare Prize, a Presidential Early Career Award for Scientists and Engineers (PECASE), the IJCAI Computers and Thought Award, an NSF CAREER award, the inaugural Victor Lesser dissertation award, an honorable mention for the ACM dissertation award, and several awards for papers and service at the AAAI and AAMAS conferences. He has also been named a Guggenheim Fellow, a Sloan Fellow, a Kavli Fellow, a Bass Fellow, an ACM Fellow, a AAAI Fellow, and one of AI’s Ten to Watch. He has served as program and/or general chair of the AAAI, AAMAS, AIES, COMSOC, and EC conferences. Conitzer and Preston McAfee were the founding Editors-in-Chief of the ACM Transactions on Economics and Computation (TEAC).

AI systems can interact in unexpected ways, sometimes with disastrous consequences. As AI comes to control more of our world, these interactions will become more common and carry higher stakes. As AI becomes more advanced, these interactions will become more sophisticated, and game theory will provide the tools for analyzing them. However, AI agents are in some ways unlike the agents traditionally studied in game theory, introducing new challenges as well as opportunities. We propose a research agenda to develop the game theory of highly advanced AI agents, with a focus on achieving cooperation.

Acknowledgment: This talk is based on joint work with Caspar Oesterheld; a write-up is available at http://www.cs.cmu.edu/~conitzer/FOCALAAAI23.pdf