
Severin Engelmann
Cornell Tech
A Governance Model for AI Inferences
Abstract
We are constantly being interpreted, simulated, and evaluated by machines based on the data we generate, both consciously and unconsciously. Even a simple action like picking up a smartphone triggers sensors that collect data, which AI models then leverage to infer aspects of our lives, from our mood to our health. The rapid evolution and decreasing cost of large language models now allow them to predict our responses to surveys or even play economic games on our behalf with surprisingly little input, often achieving remarkable accuracy. In this talk I will ask: what are the possibilities and limits of governing information in an age of AI?
About
The goal of Severin Engelmann's research is to support the design of ethical and beneficial AI. He engages with the core conceptual assumptions underpinning powerful AI applications in order to protect values such as privacy and fairness. Engelmann employs both qualitative and quantitative methods to understand how various groups reason about the ethics of AI applications deployed in different socio-technical systems.
Engelmann publishes in cross-disciplinary computer science conferences such as Fairness, Accountability, and Transparency (FAccT) and Artificial Intelligence, Ethics, and Society (AIES). His work has been covered by media outlets such as TechCrunch. Prior to his Postdoctoral Research Fellowship at the Digital Life Initiative at Cornell Tech, he completed his Ph.D. at the School of Computation, Information and Technology at the Technical University of Munich (TUM), where he designed, developed, and taught courses on AI Ethics in the Computer Science Department. He has held research visits at the School of Information at UC Berkeley and the Max Planck Institute for Research on Collective Goods.