
Interview with Margot Hanley

  • Writer: Digital Life Initiative
  • Dec 24, 2024
  • 7 min read


By Shana Creaney (Cornell Tech)


Can we get a brief introduction about yourself and what you’re studying?


Well, I just finished my PhD here at Cornell Tech in information science, where I worked with Helen Nissenbaum in the DLI lab. Broadly, I study the ethics of artificial intelligence systems, and for my dissertation specifically, I focused on the ethics of consumer neurotechnology, which is a fairly new type of technology that uses a combination of hardware, software, and AI to capture and decode brain data for a whole range of purposes and applications, like measuring a user’s focus, or mood, or emotion. These technologies have historically been confined to hospitals and research contexts, for treatment of sensory and motor conditions, but more and more they’re being introduced into consumer contexts in wearable form. My research examines how consumer neurotechnologies could pose threats to our privacy and autonomy, emphasizing the importance of rigorous ethical reasoning in influencing the industry’s design and development practices and shaping emerging policy and law.


How did you become interested in that?


Yeah, so while I was in the program here at Cornell Tech, I was exploring ethical and social questions surrounding AI systems. I was contemplating what to focus on for my dissertation when Helen and I started working on a project about a phenomenon we termed "machine readable humans." This concept revolves around the idea that humans are becoming increasingly legible to computational systems, and we were examining the conditions under which this is desirable versus those where it poses a problem for society and undermines our autonomy.


I was thinking to myself: what are the speculative technologies out there, on the horizon, that exemplify this? Initially, I thought about exoskeletons, like those used in Amazon's warehouses, which integrate robotics to sense human bodies and movement and augment human strength. That seemed relevant. Ultimately, I found myself particularly drawn to brain-computer interfaces, which had started gaining public attention with developments like Elon Musk's Neuralink and Meta's acquisition of a company developing a neurotech bracelet.  And I thought… if Musk and Meta were getting into this space, then it was likely worthy of ethical scrutiny.


Why do you think that’s worth studying? Especially from a layperson’s perspective.


Well, it’s worth studying for many reasons. First, neurotech has this sort of dystopian flair to it—you know, the kind we’ve seen in movies or read about in novels, where brain chips enable things like mind reading or mind control. But as these technologies make their way onto the scene, experts and leaders in the field have to be careful not to get carried away praising their benefits or outright demonizing them—it’s hype either way. Naturally, companies and investors hype up the positives to boost sales and profits. And on the flip side, tech critics are quick to point out the ethical issues and potential harms. The truth is, both sides are right depending on the context and purpose of the technology. And that’s why digging into these technologies critically, recognizing both their potential to enhance and to undermine human flourishing, is so important.


In fact, neurotechnologies that claim to read our brainwaves could raise serious privacy and autonomy issues if they actually work as advertised. This problem would in some ways echo the privacy issues introduced by previous waves of tech—smartphones, IoT, wearables, internet browsing, and the list goes on. They could also present new privacy problems; for example, if these systems can actually infer what we are thinking, that represents a step change in the type of potential invasion. But even if they don’t work perfectly, people might still believe they do because of the authority we tend to give AI. This can also lead to ethical problems—the system says you’re in one mood or emotional state, but you feel like you’re in another. Some people may trust the computer more than themselves. Where does that leave us?


Lastly, there’s this whole aspect of policy-making that’s already happening, with actual state laws being crafted as we speak. We really need these policies to be shaped by thoughtful normative and philosophical frameworks, applied in a concrete way. Often the folks driving these policies don’t fully realize what the end goal should be. So, sure, the process of making laws and policies might mean making some compromises, but we ought to start with the ideal in mind.


In an ideal world, where does this research lead?


Where does it lead? Hopefully, it leads to informing substantive policy and law, as well as corporate design and development practices, across the field at both startups and Big Tech companies. The measures we’ve relied on in the past proved ineffective for previous technologies, and they will be ineffective for future ones as well.


And so I am really grateful that I’ve been able to study these ethical and philosophical questions in the context of really concrete legal and policy applications and implications. This research could help shape state privacy laws, industry best practices, and future privacy policies, including terms of service and privacy documentation. Many companies are proactively engaging with academics, asking what they should consider or do. Are these purely technical questions, or should companies be developing technologies that store data locally? Academics, or those trained as I have been, can point out that local storage is just one of many strategies needed in these socio-technical systems to protect consumer privacy and other interests.


I hope this research not only leads to practical changes in neurotechnology but also addresses the broader questions emerging around the power of making inferences from data and the ways privacy is conceptualized in law and policy. These considerations speak to much broader questions that need global attention. I intend and hope for my research to be used in this way as well.


How did you become part of DLI?


Well, it all started when I was at Columbia, working on my master’s in sociology. I knew I wanted to explore the intersection of social theory, ethics, and technology. Cornell Tech was just across the island from where I was, and DLI was kicking off its inaugural year with programming and the weekly seminar. I think I was actually at the very first seminar, and at all of them that season. The topics were fascinating; Solon Barocas spoke, as did Natasha Dow Schüll. We had an illustrator documenting all of the talks! I really admired Helen Nissenbaum for her direct, incisive questions that cut through the usual hype and conflated concerns. It made me think about technology in a fundamentally different way, like comparing new tech to something as everyday as an alarm clock, or grounding the discussion in the history of these issues.


So, when I joined the program at Cornell Tech, I naturally gravitated towards working with Helen. It felt like a natural fit, and before I knew it, I became the first student from Cornell Tech to join the DLI lab. 


What has the DLI experience been like for you?


The DLI experience has just been wonderful. I mean, again, like I say, I really do believe that the interdisciplinary environment is so rich for personal and intellectual development. The postdocs come from such diverse backgrounds and engage in such varied work, yet there’s a unifying spirit in how we approach these deep questions about what these systems mean for human life and society. Being a student in that setting was amazing—it informed not just my ethical analysis but also my normative policy analysis, and it even empowered me to delve into reading and interpreting law, despite my sociology background.


Being exposed to the way teams at DLI used normative frameworks to shape concrete law and policy, like city government planning, was truly inspiring. I gained confidence in approaching my project from multiple angles yet keeping it as a cohesive pursuit. This involved exploring how technology impacts policy and law, which seemed like a natural fit in an interdisciplinary setting. I may not have been able to engage in this kind of normative and prescriptive work in another program, so I’m incredibly grateful for the opportunity here. I’m also leaving the lab enriched with close colleagues, friends, and a community.


I mean, even right now, I’m waiting for an email about a fascinating reading group that I’ll be joining, started by friends from this lab. I’m already so excited for how that’s going to inform my work and how I can contribute to others and really be part of these unfolding debates and discourses, and all of that has just stemmed from DLI!


What’s next for you?


What's next for me? Well, I'm actually headed to Duke to start a postdoc with Nita Farahany, who's written extensively on the ethics of neurotechnology and issues surrounding privacy within this field. She's a renowned scholar on the ethical, legal, and social implications of neurotechnology and other advanced technologies at Duke Science and Society. I'll be starting this new chapter in the new year, and I’m very excited about it! Additionally, I'll continue to stay in touch with DLI. I'll be working remotely from New York City, which will hopefully keep me very close to all my friends and colleagues here.

Margot Hanley

Cornell Tech | 2024


