
Angelina Wang
Cornell Tech
"Fairness" in AI: Gone Too Far, And Also Not Far Enough
Abstract
As machine learning has proliferated, so have concerns about fairness and bias. Much of this work, however, relies on two implicit assumptions: that fairness means treating groups the same, and that this definition applies uniformly across different forms of oppression, such as racism, sexism, and ableism. In this talk, I will challenge both assumptions, showing how they can produce harmful consequences. I will argue that fairness sometimes requires recognizing differences between groups, and that it calls for identity-specific approaches that do not generalize across different forms of oppression. I will then apply these principles to the case of chatbot personalization. Ultimately, I will show how prevailing notions of fairness risk going both too far and not far enough.
About
Angelina Wang is an assistant professor of information science at Cornell Tech and the Cornell Ann S. Bowers College of Computing and Information Science. Her research is in the area of responsible AI. Wang's publications have addressed topics such as the societal impacts of AI; the evaluation of AI systems; and how to move beyond one-size-fits-all, mathematically convenient notions of fairness. Wang has been recognized with the NSF GRFP, an EECS Rising Stars selection, the Siebel Scholarship, and the Microsoft AI & Society Fellowship. Her work has been featured in a number of news outlets, including MIT Technology Review, Vice, the Washington Post, New Scientist, and Tech Brew.