The Offensive Internet: Speech, Privacy, and Reputation is an essential addition to the academic, legal, and professional literature on the prospective harms raised by Web 2.0 and social networking sites more specifically. Levmore and Nussbaum (eds.) have drawn together high-profile legal scholars, philosophers, and lawyers to trace the dimensions of how the Internet can cause harm, focusing on the United States’ legal code to understand what enables harm and how to mitigate it in the future. The editors have divided the book into four sections – ‘The Internet and Its Problems’, ‘Reputation’, ‘Speech’, and ‘Privacy’ – and included a total of thirteen contributions. On the whole, the collection is strong (even if I happen to disagree with many of the policy and legal changes that the authors call for).
In this review I want to cover the particularly notable elements of the book and then move to a meta-critique of it. Specifically, I critique how some authors perceive the Internet as an ‘extra’ that lacks significant difference from earlier modes of disseminating information, as well as the position that the Internet is somehow a less real or authentic environment for people to work, play, and communicate within. If you read no further, leave with this: this is an excellent, well-crafted edited volume and I highly recommend it.
The Internet and Its Problems
Solove kickstarts the collection with his essay, ‘Speech, Privacy, and Reputation on the Internet.’ His general argument (and that of many other contributors) is that there must be a ‘rethink’ of notions of privacy, insofar as privacy must be re-calibrated against freedoms of speech to better balance the two principles. With the rise of Web 2.0 there is a sliding scale of what is considered worthy of publication; in the early-to-mid 20th century journalists may have focused on issues and topics that were in the general public interest but, as everyone becomes a publisher, what constitutes ‘general interest’ is increasingly focused on smaller and smaller audiences. As a result, more is being written about those of (relatively) inconsiderable import, and when such writings are defamatory, hurtful, or otherwise harmful the authors of the speech are protected under Section 230 of the Communications Decency Act from potential legal rejoinders. To rebalance privacy and free speech – with the argument being that free speech is presently over-privileged – Solove suggests adopting a notice-and-takedown system akin to that included in the Digital Millennium Copyright Act (DMCA). While the DMCA is oft-criticised for being overly broad and inappropriately used by rights-holders to chill speech, Solove insists that similar problems would not arise because:
- abusing takedowns should be penalized;
- whereas copyright interests are well-resourced this is not often the case with defamation and privacy complainants.
It is important that some kind of rebalancing/effective redress system is adopted, in Solove’s argument, because there is no guarantee that exposing the foibles of one another will lead to a shift in social ideals to accept such foibles. Instead, it is just as likely that there will simply be more people hurt – the exposure of others does not necessarily mean that one changes one’s own perceptions of the nature of social norms. His concerns echo those of Mayer-Schönberger in Delete: The Virtue of Forgetting in the Digital Age.
Whereas Solove emphasizes a take-down approach, Keats Citron explores the destructive nature of online mobs and how the Internet magnifies harmful behaviour. She rejects the notion that individuals can, and should, combat mobs alone and instead argues that “robust protection of cyber civil rights would promote more valuable speech than it would inhibit” (33). Drawing on the literature on group behaviour, she identifies four central ways in which the Internet aggravates cyber mob behaviour:
- groups with homogeneous views tend to become more extreme when they deliberate;
- group members often lack a sense of personal responsibility for their acts;
- groups are more destructive when they dehumanize their victims and are more aggressive when authority figures support efforts;
- “Cyber mobs see victims as digital images that can be eviscerated without regret” (37).
Whereas criminal law and civil torts cannot reach the harms experienced by individuals, groups, or society as a result of mob behaviour, civil rights laws can address such shortcomings. Given that self-expression is key to autonomy, there should be minimal protection of expression that is primarily (or even solely) meant to extinguish others’ expressions. This may mean that website operators are held accountable for facilitating anonymous attacks, and this may in fact lead to the protection of groups’ civil rights by limiting the capacity for the mob to form in the first place.
Picking up on the challenges that can arise from anonymous online speech, Levmore tries to tackle the issue in “The Internet’s Anonymity Problem.” Specifically, he argues that the novelty and free speech claims that have protected the Internet from legal regulation have resulted in excessive costs to the targets of offensive speech. Such costs might be diminished by moving to an Internet that integrates identification or notice-and-takedown policies. Given that Section 230 of the Communications Decency Act was put into place in 1996 to let the Internet grow – the Internet was, at the time, a nascent space that could have had its growth stunted by onerous or overzealous laws – the present ‘mature’ state of the Internet suggests that it is time to repeal this section. Today the Internet suffers from both juvenile and hurtful comments, as well as from a low signal-to-noise ratio that imposes costs on finding, filtering, and managing information online. In shifting to a ‘non-anonymous’ Internet Levmore believes that there would be fewer of these hurtful comments and a better signal-to-noise ratio, though he does grant that a ‘simple’ notice-and-takedown system may be appropriate for some online venues. Ultimately, however, he asserts that Section 230 should be repealed and that Internet providers be treated more like newspapers, becoming liable if they do not impose or enforce takedown, notice, or non-anonymity principles and laws.
Nussbaum’s contribution to the book, titled “Objectification and Internet Misogyny,” asserts that the online community must confront objectification when it occurs online, especially when such actions relate to the historical (misogynistic) objectification of women as instrumental for male pleasure. In particular, she finds that the key facets of objectification as it relates to online mob-like behaviour aim to:
- reduce to body;
- reduce to appearance.
Each of these elements is key to ‘shame justice’, which is “justice by the mob: the dominant group are asked to take delight in the discomfort of the excluded and stigmatized” (73). To those who assert that misogyny and shame are ‘typical’ or ‘historical’ practices she responds: “To say, “It’s traditional,” and “It’s part of culture” is not to dispense with the need for an account” (75). Thus, in investigating the case of misogyny online we (re)articulate underlying problems surrounding the social conditions of masculine development, the legitimacy of using shame to hijack the agency of a person’s mind, social relationships, and access to employment, and the relationship between online misogynistic behaviour and gender-based hate crimes. Ultimately, she avoids making the strong policy claims of the other contributors to this section, instead arguing that it is key to educate men about the acceptability of weakness, to educate men on the inherent value of women, and to dissolve or reform modernistic assumptions of male identity in order to address the roots of misogynistic violence and harms.
Cass Sunstein’s “Believing False Rumors” continues his project of articulating the harms that arise when individuals engage in discourse. His work is excellent, though (arguably) derivative of his books Infotopia: How Many Minds Produce Knowledge and On Rumors: How Falsehoods Spread, Why We Believe Them, What Can Be Done. In essence, the liberal ideal that the marketplace of ideas will sort out falsehoods is demonstrably incorrect, especially when applied to the Internet. As a result, some kind of chilling effect on false statements of fact is important to establish, both to limit harm inflicted onto others and to enhance the functioning of democratic discourse. A danger, noted by Pasquale in his “Reputation Regulation: Disclosure and the Challenge of Clandestinely Commensurating Computing,” is that the spread of information online (sometimes including false facts and rumors) can be incorporated into reputation systems that impact citizens and customers. As a result, he argues that legislation is required to make such systems more just. Legislation must:
- “…ensure that key decision makers reveal the full range of online sources they consult as they approve or deny applications for credit, insurance, employment, and college and graduate school admissions” (108);
- address the use of reputation score systems to avoid black-box evaluations that defeat the aims of accountability and transparency of choices made.
Brian Leiter’s contribution, “Cleaning Cyber-Cesspools: Google and Free Speech,” argues that the potential chilling effects of overturning Section 230 of the Communications Decency Act are overstated and that the section unduly protects low-value speech at the expense of people’s dignity. He defines the cyber-cesspool as
an amalgamation of what I will call “tortious harms” (harms giving rise to causes of action for torts such as defamation and infliction of emotional distress) and “dignitary harms,” harms to individuals that are real enough to those affected and recognized by ordinary standards of decency, though not generally actionable. (155)
Identifying Mill (the theoretical source of many of the claims that the marketplace of ideas/speech will sort out falsehoods) as a radical empiricist on the basis that he insists that all truth and knowledge are a posteriori, Leiter maintains that there is some speech (e.g. Jane Doe ought to be forcibly sodomized) that is absolutely without moral standing. Online, harms from speech are made worse by Google and the protection Section 230 affords to intermediaries (e.g. blog operators). The repeal of this section would not impact “the ability of individuals to speak freely, just in their ability to exercise that purported right to speak freely in cyberspace” (167).
Leiter emphasizes Google’s role in making content from cyber-cesspools highly accessible to casual web surfers, and argues that Section 230 should be repealed and Google made liable “for its negligence in disseminating tortious material,” with an addendum that a “more radical proposal would make Google liable for disseminating material constituting dignitary harms as well; I remain agnostic on whether that would be advisable” (171). To avoid liability, Google ought, first, to set up a “panel of neutral arbitrators who would evaluate claims by private individuals that Google is returning search results that might constitute tortious or dignitary harms” and, second, to ensure that “the Google panel would have authority to provide several possible remedies in the event it concurs with the complainant that the material in question is more likely than not to constitute actionable material or a dignitary harm” (170). I remain unclear as to why, exactly, Google should be required to establish such panels and make private decisions about the nature of free speech; this seems like a task for the judiciary. Shouldn’t the Department of Justice, to follow his claims, be responsible for assigning a team of judges to determine each and every claim of harm and assure that their decision falls within the confines of existing case law? Further, the position that Google somehow ‘disseminates’ information like a newspaper fails to acknowledge the fundamental difference between ‘push’ and ‘pull’ modes of delivering content: unlike a broadcaster, Google’s expressions of speech (which are, effectively, what algorithmic search amounts to) are called upon by users rather than imposed on the individual using the Web. Google is not a curator of the Web but instead relies on the claims of others to assert its own suggestions based on search terms; broadcasters, on the other hand, are curators.
That individuals have chosen to see Google as a curator does not make Google a curator any more than people choosing to believe that I am a professor makes me an actual professor.
Stone’s “Privacy, the First Amendment, and the Internet” is built around the argument that “[i]f speech is sufficiently valuable to merit First Amendment protection when it is spoken over a backyard fence or published in a local newspaper, then (at least presumptively) it is also sufficiently valuable to be protected when it is disseminated on the Internet … as a matter of first approximation, the fact that speech on the Internet can cause more harm than speech in a local newspaper is not a reason to accord it any less protection under the First Amendment. The balance between value and harm remains more or less constant” (175-6, emphasis added). In considering what kinds of speech are actionable he rightly notes that it is important to be careful about what is meant by a ‘threat’ under the law (and thus what constitutes illegal speech). Specifically, “incitement to commit unlawful conduct does not mean statements that might cause others to commit crimes or even statements that are intended to encourage others to commit crimes … for speech to be punishable as incitement, it must expressly incite unlawful conduct” (186, emphasis added). Regardless of the nature of the law, social and technological change means that privacy laws cannot effectively address non-newsworthy invasions of privacy (though they may be able to address some of the worst threats online). Referring to Brandeis and Warren’s tort, Stone notes that “… even if the First Amendment itself is not sufficient in principle to “swallow the tort,” the combination of the First Amendment and social and technological change has, for all practical purposes, gobbled it up completely. To argue otherwise is simply to tilt at windmills” (193).
Expressly engaging with the issue of collective privacy rights – where the rights of members of a collective are in contradiction – Strahilevitz suggests that privacy laws begin to recognize, and courts adopt, constructive partition. Such a partition fragments a collective resource and assigns elements of it to the collective’s members. Admittedly this is an imperfect solution – sometimes interests are inextricably linked – but it at least offers a way to address some of the collective privacy issues involved in FOIA requests.
Rodrigues’ essay rounds out the book, and it really is one of the absolute highlights. In “Privacy on Social Networks: Norms, Markets, and Natural Monopoly” he argues that there are privacy issues with social networks and that a key way of addressing them involves encouraging competition between networks so that privacy (in effect) becomes one of many points these networks compete on. He acknowledges that while people share personal information on social networks it is predominantly shared for ‘semi-public’ purposes; in most cases, controls and mitigating elements preclude the absolute sharing of that information. To conceptualize the elements of privacy-based competition in social networks he identifies the following facets:
- Privacy policies – reflective of contractual relationships between the site and user(s);
- Privacy practices – how the privacy policies are actually implemented;
- Privacy controls – means by which users can control their personal information;
- Data security – how diligent the site is in actually securing data from outside forces.
Rodrigues points to Facebook as the incumbent ‘natural monopoly’ that is largely dependent on its vast network effect to achieve financial success and usefulness for its users. The worry is that a monopolist may raise switching costs by locking users in, while simultaneously exploiting its monopoly power to erode past privacy practices “in exchange for greater income by directly reselling personal information and contact information, despite the interests of its entrenched users” (245). To alleviate the power of the natural monopoly he suggests that the government establish data portability regulations, enabling users to freely move between networks and thus enabling competitors to rapidly scale if the monopolist (or other competitor) acts in a manner contrary to users’ privacy desires. While some might point to the ‘successful’ petitions that Facebook users launch when the company initiates particularly onerous and privacy invasive changes to the service, Rodrigues argues that these petitions happened with competition “lurking in the background”; “in the end the profitable alternative will be the path that Facebook takes” (255).
Issues and Concerns
Laden throughout the book is a (legitimate) concern for the harms that can befall people who are defamed, mobbed, or targeted online in disingenuous ways. Many authors take issue with Section 230 of the Communications Decency Act, arguing that without the protections afforded by this section the law would be better resourced to prosecute those engaged in hate crimes, defamation, and so forth. In most of these cases, authors assert that what would suffer is largely ‘low value’ speech instead of ‘high value’ speech, and that abuse of a notice-and-takedown or identification regime would likely be marginal at best. It is important to recognize these claims for what they are: normative assertions of the value of particular venues and modes of speech. Under the proposed changes, sites like 4chan.org would be almost immediately dissolved, to say nothing of many IRC channels, and website owners would suddenly face liability if they refused to take down content on First Amendment grounds. In essence, proposals to remove Section 230 would arguably produce a massive increase in the already large number of Strategic Lawsuits Against Public Participation (SLAPPs). What happens when a company begins to go after individuals who have injured its corporate personhood? While many authors emphasize that only individuals could file suit against one another, I worry that this is just the thin edge of the wedge – corporate interests will flock to expand how the proposed changes can be utilized by their own well-resourced legal departments.
This worry may, admittedly, be naive, or a case of my projecting pessimism onto the authors’ arguments. Of greater concern is a point underscoring many of the book’s contributions, and one made explicitly manifest only by Leiter when he asks:
What precisely are the contributions to human knowledge and well-being that are attributable solely to [blogs, chatrooms, and Google], that would have been impossible without [the Internet’s] existence in its current, unregulated form? It is far from obvious that there are any, at least in otherwise democratic societies. (168)
Leiter is focused on the harms that these things have led to – statements that have seriously impacted people’s lives and livelihoods – but in rhetorically asserting a lack of primary value to any of these key facets of the Internet he demonstrates a lack of awareness of what the Internet is, and has done, for many individuals. Indeed, Leiter and Nussbaum in particular seem to have carefully separated the ‘real’ world from the ‘fictional’ world of the Internet, assigned primacy to the former, and denigrated the value of the latter. This is unsurprising – neither is a so-called ‘digital native’ – and so they see the Internet as a tool or ‘other space’ instead of an element of their very existence. Sherry Turkle’s work, which carefully maps the close linkage between online and offline worlds to recognize that neither is necessarily discrete in the minds of those growing up with the Internet, is a better way to approach and understand the domains of agency associated with the virtual and the virtually real. Either domain can be perceived or realized as discrete, but they can also be seen as mutually overlapping and integrated. Indeed, with the expansion of the digital into analogue life in the West, notions of the Internet and Web as somehow ‘fictitious’, or as needing to justify their value against ‘traditional’ analogue modes of communication, are increasingly anachronistic. One imagines that similar complaints were made when the yellow press was launched, radio began, television launched, and so forth.
I am not a fan of repealing Section 230, nor am I in favor of establishing an ‘identified Internet’ in an effort to somehow convey ‘respectability’ on the online world (as suggested by Levmore). I worry that many of the essays in the book are so focused on alleviating (very real!) harms that they miss the broader impacts of modifying how the Internet is regulated by the United States. I am also concerned that the aim is to draw regulation of the Internet into the ‘traditional’ justice system (without significantly updating, or addressing how to update, a system that is dependent on access to counsel) or into a privatized system of judgement (e.g. Google evaluations of whether content is defamatory or tortious).
Despite my own concerns, this book contains incredibly well-articulated arguments for further regulation of the Internet and a host of (disgusting and depressing) examples of the specific harms that individuals have faced, and continue to face, at the hands of online aggressors. I would highly recommend the text to anyone working on the ‘darker’ sides of the Internet from a policy, legal, or general studies point of view: this is one of 2011’s ‘must reads’.