Jessie G Taft

DL Seminar | "With Great Power Comes... No Responsibility?"

Updated: Apr 25, 2021


Individual reflections by Daswell Davis and Max Rosenthal (scroll down).



By Daswell Davis

Cornell Tech


“The Twenty-Six Words That [Endanger Society]”


In the first installment of the Digital Life Seminar series for the 2020-21 academic year, Yaël Eisenstat and Carrie Goldberg engaged in an important conversation centered on how responsibility is defined within the tech industry. Eisenstat and Goldberg discussed an unduly complicated but significant question: who bears responsibility for the real-world consequences of technology?


The Speakers


Yaël Eisenstat, a Visiting Fellow at Cornell Tech’s Digital Life Initiative, has a particular interest in the various ways in which social media impacts civil discourse and democracy. She currently teaches a multi-university course on Tech, Media, and Democracy, and has spent 20 years working around the globe as a CIA officer, a national security advisor to Vice President Biden, the Global Head of Elections Integrity Operations at Facebook, a diplomat, a corporate social responsibility strategist at ExxonMobil, and the head of a global risk firm.

Carrie Goldberg is a lawyer and founder of C.A. Goldberg, PLLC, a victims’ rights law firm that fights for people under attack by abusers who think they can evade accountability. Among her landmark cases is Herrick v. Grindr – which we will discuss later in this reflection – which introduced the application of product liability law to dangerous technology products. She is also the author of “Nobody’s Victim: Fighting Psychos, Stalkers, Pervs & Trolls,” a 2019 NYT Editors’ Choice, and is nationally known for her work on behalf of victims of nonconsensual porn.


“The Twenty-Six Words That Created The Internet”


The Communications Decency Act of 1996 (the “CDA”) is a federal statute that represents the first significant attempt by Congress to regulate pornographic material on the internet. The CDA originally imposed criminal sanctions for the knowing transmission of obscene or indecent material via the internet under certain circumstances. As such, a website operator could face criminal liability even if a third party posted the obscene material on its site. The Supreme Court later struck down certain provisions of the CDA concerning indecency, holding that they violated First Amendment free speech rights.


More relevantly, Section 230 of the CDA (“Section 230”) shields providers and users of an “interactive computer service” from liability for content provided by third parties, stating in twenty-six words that:


“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

U.S. courts have since established Section 230 as a safe harbor for providers or users of interactive computer services (e.g., website operators) acting solely as intermediaries for another party’s content, barring claims if (1) the defendant asserting immunity is an interactive computer service provider or user, (2) the particular information at issue was provided by another information content provider, and (3) the claim seeks to treat the defendant as the publisher or speaker of that information.


Unfortunate Consequences


Section 230 also provides immunity from liability for restricting access, or giving others the technical means to restrict access, to certain harmful content, including constitutionally protected material. This means that interactive computer service providers are not penalized for efforts to self-regulate harmful online content. Courts have held that providers are not required to block or screen offensive material in order to benefit from Section 230 protection, even when they have actual notice of allegedly objectionable content on their services.


We see the harmful effects of this policy in Herrick v. Grindr. Goldberg’s client, Matthew Herrick, was stalked and harassed by his ex-boyfriend through Grindr, an online dating application. Herrick’s ex-boyfriend created fake profiles to arrange sex dates, sending over a thousand men to Herrick’s home and workplace. Herrick reported these incidents to Grindr over 100 times, yet the company refused to block his ex-boyfriend from the application or take any other action to protect Herrick, even though the abuser was still actively using the app to harass him while litigation was ongoing.

The case was ultimately unsuccessful: the U.S. Supreme Court denied Herrick’s petition to review a Court of Appeals ruling tossing his allegations that the Grindr application lacked safety features to prevent fake profiles and failed to warn users that it could be used to harass them. This raises the question: did Congress intend for Section 230 of the CDA to prioritize the protection of the nation’s wealthiest and most powerful companies over the protection of vulnerable citizens?


Section 230 Needs A Makeover


Times have changed drastically since the CDA was first introduced in 1996. While Congress may have initially been well-intentioned, the law is now outdated in today’s digital age, and its broad protections only serve to permit powerful companies to escape responsibility for the real-world harms resulting from the use of their products and services. As we have seen with the U.S. Government’s response to the COVID-19 pandemic, far too often are the economy and big business deemed a higher priority than the health and safety of the general population.


Maintaining Section 230 in its current form is a danger to society. At a minimum, U.S. law should allow companies to be held liable when (1) they have notice of an ongoing or imminent harm that a third party is using their online products and services to facilitate, (2) there are no reasonable alternative means available to stop the harm within a reasonable amount of time, (3) the company possesses the power or ability to assist in stopping the harm, and (4) the company nonetheless fails to take any action to protect the victim.


By Max Rosenthal

Cornell Tech


On September 9th, 2020, the Digital Life Seminar Series at Cornell Tech hosted Yaël Eisenstat and Carrie Goldberg for a seminar that delved into the question of who is responsible for what gets posted on the internet. Yaël Eisenstat is an expert working at the intersection of ethics, tech, and policy, with a background that includes positions with the CIA, ExxonMobil, and Facebook. She is currently a Visiting Fellow at Cornell Tech’s Digital Life Initiative, where she teaches and studies the effect of technology on our political process and social dialogue. Carrie Goldberg is the founder of C.A. Goldberg, PLLC, which works nationally to protect victims of both in-person and virtual sexual violence. She has recovered record sums in Title IX cases and is known for inventing the legal approach of applying product liability law to tech products.


The seminar began with a broad overview of tech regulation by Eisenstat and continued with Eisenstat interviewing Goldberg. Eisenstat covered some of the broad questions surrounding tech, but focused on one in particular: “Who is responsible for the real-world consequences of tech?” Section 230 of the Communications Decency Act is the main law in this space, but it was passed in 1996 – before the rise of big tech companies such as Facebook, Twitter, and YouTube. Section 230 breaks into two parts: the first states that internet providers should not be treated as publishers of the information on their websites, and the second grants these providers the ability to moderate the material published on their websites in a reasonable manner.


Eisenstat argues that Section 230 needs to be revamped, as the definitions in the act no longer fit the current landscape of technology. She contends that a company like Facebook is neither a publisher nor a platform, but rather a digital curator that uses algorithms to manipulate the digital experience for the user. This point is extremely relevant right now: just this month, TikTok has faced intense scrutiny from the federal government over fears that the Chinese government could use TikTok’s algorithms to curate propaganda aimed at American users.


Following Eisenstat’s presentation, Goldberg entered the discussion and gave a deeper look into how this conversation about responsibility in tech is having ramifications in the real world. Goldberg is known for her legal work taking down “revenge porn”; she started litigating in this area after she herself was threatened with revenge porn by an ex-boyfriend. A problem she faced was the “who do we sue?” question: individuals were committing these heinous acts, but they were being given a platform by various websites.


Goldberg then gave a detailed outline of Matthew Herrick v. Grindr LLC, a case in which she represented Matthew Herrick in his suit against Grindr, a dating app serving primarily gay and bisexual men. Matthew’s ex-boyfriend impersonated Matthew on the app and would frequently share Matthew’s location (a feature of Grindr) in order to lure men to Matthew’s home and workplace with the expectation of a sexual encounter. At one point 23 people came to Matthew’s home in a single day, and Matthew lived in a constant state of fear. The police were unable to take action, leaving Grindr in the exclusive position to act on this urgent matter. But when Matthew and Goldberg pressed Grindr to act, the company claimed it was unable to remove profiles and cited Section 230 for protection – an example of how this big tech regulation law was, in some cases, shielding individual acts of digital sexual violence like those committed by Matthew’s ex-boyfriend. To try to get around Section 230, Goldberg argued that this was a products liability case: Grindr had put out a product (a dating app) with a dangerous feature (the inability to remove profiles).


Goldberg concluded her segment with a brief discussion of the future of Section 230, what possible changes to the law could look like, and what effect those changes could have. She argued that removing Section 230 would not have the internet-killing effect some of its supporters suggest. Rather, the court process itself would filter out most frivolous claims in a post-Section 230 world: the cost of litigation is very high, so only those with legitimate claims would pursue it. Furthermore, tech companies would be able to protect themselves with litigation insurance, which is already offered to companies to guard against other legal risks.


Overall, it is clear that both Eisenstat and Goldberg favor getting rid of Section 230 in its current form. While Section 230 is the dominant topic in modern internet regulation, I’m curious to see how this conversation will evolve with the advancement of blockchain and the decentralization of the internet. We are already living in a time when VPNs can evade government bans on websites, and it seems we will eventually reach a point where at least some sections of the internet cannot be regulated by governmental institutions. If the government does eventually lose this regulatory power, how will it protect victims of digital sexual violence such as revenge porn? Perhaps the law would have to shift to targeting those who view illegal content, since it would be impossible to stop its publication in the first place. While this isn’t a short-term concern, if the internet does progress to this point I’m optimistic that there will be people like Goldberg fighting to make sure victims are protected.


