Highlights from a SXSW panel about the Future of Tech Responsibility
By Yaël Eisenstat – Visiting Fellow at Cornell Tech’s Digital Life Initiative, and former Elections Integrity Operations Head at Facebook, CIA Officer, and White House Adviser.
Who bears responsibility for the real-world consequences of technology? This question has been unduly complicated for decades by the 1996 legislation that provides immunity from liability to platforms that host third-party content.
According to Section 230 of the Communications Decency Act, written before platforms such as Facebook, YouTube and Twitter existed: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
This one sentence has been interpreted as essentially freeing the “provider”, including social media platforms, from responsibility for the content they host (with a few carve-outs for things such as intellectual property infringements and sex trafficking).
In theory, Section 230 was written to give the internet room to flourish without fear of potentially never-ending lawsuits from anyone with a grievance about something posted online, while also giving providers protection to moderate content they view as dangerous or offensive.
In practice, it has provided a vastly over-interpreted blanket immunity shield, allowing companies to evade responsibility for real-world harms that, as we have seen in recent years, some online companies have spawned or exacerbated.
While many of the remarkable advancements that have grown from an open internet owe much to this legal infrastructure, which gave innovators a wide berth to experiment without fear of reprisal, it has often left those harmed by these innovations with no avenue for recourse.
Many of us who believe that corporate responsibility and accountability are critical components of a healthy democracy are often left frustrated that the tech industry continues to be held to a different standard. It is the only industry whose responsibility has yet to be defined.
Over the past year, the debate around accountability in the tech industry made it to the national (and global) stage. Of all the conversations about how to regulate “big tech”, one that had the attention of U.S. legislators on both sides of the aisle was whether to amend, or even remove, Section 230 of the Communications Decency Act.
As this has become one of the most hotly debated tech policy issues, South by Southwest (SXSW) invited U.S. Federal Elections Commissioner Ellen Weintraub, UN Special Rapporteur for Freedom of Expression David Kaye and me to tackle this topic in a panel moderated by NYC Media Lab’s Steve Rosenbaum. Although this year’s SXSW festival was cancelled due to COVID-19, we decided to take the discussion virtual. The full conversation can be viewed here: Section 230 Revisited: Web Freedom vs Accountability.
We discussed some of the intricacies of Section 230, social media’s effects on the public square and democracy, and ideas for how we can foster a healthier information ecosystem and rethink the rules that govern these companies. The debates about Section 230 often portray it as a binary choice: repeal or not; publisher or platform; kill innovation or preserve a free market of ideas; censorship or freedom of speech. We went beyond that absolutist thinking, recognizing there is no quick fix, and offered some critical ways of thinking about the issue.
While the question of accountability in the tech industry may not feel like the top priority while we continue to battle COVID-19, I would argue it is even more important now that we finally define the tech industry’s responsibility for the real-world consequences of its products, as we are becoming more reliant than ever on technology to get us through this crisis.
Although we all brought different perspectives and lenses to the conversation, there were three key points that we agreed on and that framed the panel:
1. We were all emphatic that we do not want government to regulate speech.
2. None of us believe Section 230 should be eliminated, but we each have ideas on how it could be amended to better serve and protect society.
3. Creating the rules for how to govern online speech and define platforms’ responsibility is not a magic wand to fix the myriad harms emanating from the internet. This is one piece of a larger puzzle of things that will need to change if we want to foster a healthier information ecosystem.
I believe there is a way to update the 1996 legislation to more accurately reflect the technology and information landscape today. Section 230 still makes sense for companies that truly are merely “providers” or “intermediaries”. For example, I believe a company that provides software to build websites should not be responsible for what a user produces using that software. That is a somewhat straightforward application of the rule.
My concern, as I discussed in the panel, is how Section 230 applies to social media companies that curate content, whose algorithms decide which speech to amplify, and which nudge users towards the content that will keep them engaged. The notion that the few tech companies that steer how over two billion people communicate, find information, and consume media enjoy the same blanket immunity as a truly neutral internet company providing a service makes it clear that it is time for an upgrade to the rules.
Part of the complication lies in how we define companies such as Facebook, YouTube and Twitter. In the panel, I argue: “They're not just a neutral intermediary. Their algorithms are deciding how they are curating content… what's being boosted, what's being amplified. Why don't we actually create a new category that's neither publisher nor platform? Something like a digital curator, and then figure out where does the responsibility lie? How do you make their algorithmic decision making more transparent, so that we can say: ‘it's not that you're responsible for the disinformation on your platform, you're responsible for the fact that your algorithm decided to amplify this disinformation to two billion people?’”
Ellen Weintraub’s arguments stemmed from her role protecting U.S. elections and her concern about how social media companies are affecting our electoral landscape. As she explains in the panel, “The Federal Election Commission is all about money and politics; how people raise and spend money to get their political message out and how they use various media… Part of my concern about what's going on on the Internet is that it is obscuring rather than illuminating who is behind the messages.”
Ellen and I both published op-eds in the Washington Post in November highlighting the dangers of Facebook’s political advertising policies, particularly their ability to micro-target users with paid political speech. As Ellen explained in the panel, “it really removes accountability not only from the platforms themselves but also from the candidates. They don't have to come up with a broad message that appeals to a lot of people. They can come up with these micro-targeted messages to appeal exactly to each individual, and then everybody thinks they're going to get a personally custom-designed candidate and that no compromises are necessary. And the candidates aren't accountable for the messages that they're selling because only the people who like that message are actually hearing that message.”
To that point, I added, “And now how does that apply to Section 230? If you had to be more transparent about how you are amplifying content, about how your micro-targeting tools work, about the data you have on me, why am I being targeted differently than my next-door neighbor… I think that would change the game.”
Much of David Kaye’s focus, as he also writes in his book Speech Police: The Global Struggle to Govern the Internet, is on who should write the rules. As he explained in the panel, “I've been arguing for the companies for a couple of years to be focused on adopting rules that are rooted in human rights standards and international standards… To me that's a better approach. I do think there's a longer conversation to have about what those standards actually involve, but they do involve the kinds of things we've been talking about here around transparency, around a baseline of freedom of expression.”
We wrapped up the panel with a discussion about whether the tech platforms’ response to COVID-19, especially how social media is combatting disinformation, could change the conversation.
Since our SXSW panel aired, there has been one big development that some have touted as Facebook’s effort to responsibly tackle this challenge of content moderation. On May 6, Facebook announced the first members of its new Oversight Board, which, according to the co-chairs’ New York Times op-ed, “will focus on the most challenging content issues for Facebook.” This board, as I see it, is a direct response to the debates around Section 230. However, I am not ready to celebrate this move just yet, as it does not assign any responsibility to the company for the consequences of its business. In fact, it does the exact opposite: