

Digital Life Seminar
Date: 2025 Fall Semester
When: Thursdays, 1:25–2:40pm. Due to limited space, guests from outside Cornell Tech are asked to RSVP in advance.
Where: Cornell Tech's Bloomberg Center, Room 161.
Convenors: Helen Nissenbaum and Michael Byrne
Queries: Michael Byrne (mjb556@cornell.edu)
About: The Digital Life Seminar series offers students and guests an opportunity to engage actively with leading scholars and practitioners researching and responding to the development and application of digital technologies.

DLI Seminars | Fall 2025

Thursday, September 11, 2025
Nikhil Garg
Cornell Tech
Recommendations in High-stakes Settings
Recommendation systems are now used in high-stakes settings, including to help people find jobs, schools, and partners. Building public-interest recommender systems in such settings brings both individual-level challenges (enabling exploration, diversity, data quality) and societal challenges (fairness, capacity constraints, algorithmic monoculture).

Thursday, September 18, 2025
Yixuan Gao
Cornell Tech
Seeing Without Seeing: Privacy Challenges in Innovations of Wireless Sensing
Wireless sensing technologies embedded in everyday devices can now detect physiological signals such as breathing, heartbeat, and even emotional states, without any physical contact or visible indication. While these capabilities promise revolutionary applications in healthcare and safety, they also enable unprecedented invisible surveillance. This talk explores the fundamental tension between technical capabilities and ethical boundaries in wireless physiological sensing.

Thursday, September 25, 2025
Jen Semler
Cornell Tech
Moral Agents Unlike Us: The Limits of Non-conscious Moral Agents
Suppose AI developers succeed in creating advanced non-conscious artificial moral agents—AI systems that can act in sophisticated ways in the moral domain yet lack phenomenal consciousness (i.e., first-personal experience, something “it is like” to be them). Initially, it might seem that we should be indifferent between human moral agents and artificial moral agents in moral decision-making contexts. In this paper, Semler argues that we have grounds for requiring certain decisions to be made by human moral agents. She outlines two asymmetries that arise between human moral agents and artificial moral agents in virtue of artificial moral agents’ lack of phenomenal consciousness: a moral status asymmetry and a valence asymmetry.

Thursday, October 9, 2025
Angelina Wang
Cornell Tech
"Fairness" in AI: Gone Too Far, And Also Not Far Enough
Angelina Wang is an assistant professor of information science at Cornell Tech and the Cornell Ann S. Bowers College of Computing and Information Science. Her research is in the area of responsible AI. Wang’s publications have addressed topics such as the societal impacts of AI; evaluation of AI systems; and how to move beyond one-size-fits-all, mathematically convenient notions of fairness.

Thursday, October 16, 2025
Niloofar Mireshghallah
Meta AI’s FAIR Alignment Group
What does it mean for agentic AI to preserve privacy?
The rise of agentic LLMs has fundamentally altered the privacy landscape: models now orchestrate information flows between emails, calendars, medical records, and external services, creating novel attack vectors where traditional data protection falls short. These agents must constantly decide what to share, with whom, and in what context—decisions that require nuanced understanding of contextual integrity rather than binary public/private classifications. In this talk, we first introduce CONFAIDE, a benchmark grounded in contextual integrity theory that systematically measures LLMs' privacy reasoning capabilities across increasingly complex scenarios, revealing that frontier models fail up to 39% of the time.
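For readers unfamiliar with the contextual integrity framing the talk draws on, a minimal sketch follows. The Flow fields, NORMS table, and conforms function are illustrative assumptions for exposition only; they are not the CONFAIDE benchmark or any Meta AI code.

```python
# Illustrative sketch: a toy contextual-integrity check for an agent's proposed
# information flow. All names and norms here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str     # who transmits the information
    recipient: str  # who receives it
    subject: str    # whom the information is about
    attribute: str  # the type of information (e.g., "diagnosis")
    context: str    # the social context (e.g., "healthcare")

# A norm permits flows of a given attribute, in a given context, to an
# allow-list of appropriate recipients (a simplified transmission principle).
NORMS = {
    ("diagnosis", "healthcare"): {"treating_physician", "patient"},
    ("calendar_availability", "workplace"): {"assistant", "manager"},
}

def conforms(flow: Flow) -> bool:
    """Return True only if the flow matches an explicit norm; default to deny."""
    allowed = NORMS.get((flow.attribute, flow.context), set())
    return flow.recipient in allowed

if __name__ == "__main__":
    ok = Flow("clinic_agent", "treating_physician", "patient_A", "diagnosis", "healthcare")
    leak = Flow("clinic_agent", "insurance_marketer", "patient_A", "diagnosis", "healthcare")
    print(conforms(ok))    # True: the flow fits the contextual norm
    print(conforms(leak))  # False: a binary public/private label would miss this
```

The point of the sketch is that the same attribute ("diagnosis") is acceptable or unacceptable depending on recipient and context, which is exactly the judgment agentic systems must make and that binary public/private classifications cannot capture.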

Thursday, October 23, 2025
Pegah Nokhiz
Cornell Tech
Values and Agency in Algorithmically Optimized Ecosystems
Having just completed her PhD in Computer Science at Brown University (advised by Professor Suresh Venkatasubramanian), Pegah Nokhiz is a Postdoctoral Fellow at the Digital Life Initiative, Cornell Tech. She was an affiliate of Brown's Center for Technological Responsibility, Reimagination and Redesign (CNTR) at the Data Science Institute.

Thursday, October 30, 2025
DLI Researcher
Cornell Tech
Digital Life Seminar No. 7

Thursday, November 6, 2025
Alicia Solow-Niederman
George Washington University Law School
Beyond the Supply Chain: Artificial Intelligence’s Demand Side
AI governance emphasizes the AI “supply chain” and the “many hands” involved in the production of AI systems. Even when it addresses downstream risks, the focus remains on AI's producers. Although valuable, this production-centered approach risks overlooking what happens when real people use AI tools. This paper identifies and theorizes AI governance’s missing half, which I call the “demand side.”

Thursday, November 13, 2025
Amy Brand
MIT Press, Director and Publisher
The Future of Knowledge and Publishing
The rise of large language models (LLMs) is reshaping knowledge production, raising urgent questions for research communication and publishing writ large. Drawing on qualitative survey responses from more than 850 academic book authors across a range of fields and institutions, we highlight widespread concern about the unlicensed use of in-copyright scientific and scholarly publications for AI training. Most authors are not opposed to generative AI, but they strongly favor consent, attribution, and compensation as conditions for use of their work.
Previous Seminars
For more information about our past seminar speakers, see the DLI Seminar Archive >
To watch previous seminar series, visit our DLI Media Channel >