There have been many good critiques and comments concerning Facebook’s recently announced “Graph Search” product. Graph Search lets individuals semantically query large datasets that are associated with data shared by their friends, friends-of-friends, and the public more generally. Greg Satell tries to put the product in context – Graph Search is really a way for corporations to peer into our lives – and a series of articles have tried to unpack the privacy implications of Facebook’s newest product.
I want to talk less directly about privacy, and more about how Graph Search threatens to further limit discourse on the network. While privacy is clearly implicated throughout the post, we can think of privacy not just as a loss for the individual but in terms of the broader social impacts of its loss. Specifically, I want to briefly reflect on how Graph Search (further?) transforms Facebook into a hostile discursive domain, and what this might mean for Facebook users.
Siva Vaidhyanathan’s The Googlization of Everything (And Why We Should Worry) is a challenging, if flawed, book. Vaidhyanathan’s central premise is that we should work to influence or regulate search systems like Google (and, presumably, Yahoo! and Bing) to take responsibility for how the Web delivers knowledge to us, the citizens of the world. In addition to pursuing this premise, the book tries to deflate the hyperbole around contemporary technical systems by arguing against notions of technological determinism/utopianism.
As I will discuss, the book largely succeeds in pointing to reasons why regulation is an important policy instrument to keep available. The book also attempts to situate itself within the science and technology studies field, and here it is less successful. Ultimately, while Vaidhyanathan offers insight into Google itself – its processes, products, and the implications of using the company’s systems – he is less successful in digging into the nature of technology, Google, culture, and society at a theoretical level. This leaves the reader with an empirical understanding of the subject matter without significant analytic resources to unpack the theoretical significance of their newfound empirical understandings.
I like to pretend that I’m somewhat web savvy, and that I can generally guess where links on large websites will take me. This apparently isn’t the case with Blogger – I have a Blogger account to occasionally comment on blogs in the Google blogosphere, but despise the service enough that I don’t use it. I do, however, have an interest in Google’s newly released Dashboard, which is intended to show users what Google knows about them and how their privacy settings are configured.
Given that I don’t use Blogger much, I was amazed and pleased to see that there was a link to the Dashboard in the upper right-hand corner of a Blogger page that I was reading when I logged in. Was this really the moment where Google made it easy for end-users to identify their privacy settings?
Alas, no. If I were a regular Blogger user I probably would have known better. What I was sent to when I clicked ‘Dashboard’ was my user dashboard for the Blogger service itself. This seems to be a branding issue; I had (foolishly!) assumed that various Google environments serving very different purposes would be labeled differently. Naming multiple things ‘Dashboard’ obfuscates access to a genuinely helpful service that Google is now providing. (I’ll note that a search for ‘Google Dashboard’ also calls up the App Status Dashboard, and that Google Apps also has a ‘Dashboard’ tab!)
[Note – I preface this with the following: I am not a lawyer, and what follows is a non-lawyer’s ruminations on how the Supreme Court’s thoughts on reasonable expectations of privacy intersect with what deep packet inspection (DPI) can potentially do. This is not meant to be a detailed examination of particular network appliances with particular characteristics, but is much, much more general in nature.]
Whereas Kyllo v. United States saw the US Supreme Court assert that thermal-imaging devices, when directed towards citizens’ homes, did constitute an invasion of citizens’ privacy, the corresponding Canadian case (R. v. Tessling) saw the Supreme Court assert that RCMP thermal-imaging devices did not violate Canadians’ Section 8 Charter rights (“Everyone has the right to be secure against unreasonable search or seizure”). The Court’s conclusions emphasized informational privacy interests at the expense of normative expectations – thermal information, on its own, was practically ‘meaningless’ – which has led Ian Kerr and Jena McGill to worry that informational understandings of privacy invoke:
I owe this more nuanced reflection on yesterday’s note about the role of ‘professional’ versus ‘amateur’ news, again, to my colleague Tim Smith. After reading my post yesterday, he replied:
nice piece Chris! I have a follow up question.
is investigative journalism on the net in the spaces Simon characterized as amateur. I am thinking of reports like a Bob Woodward breaking of Watergate. A Seymour Hersh breaking of Abu Ghraib. This type of investigative reporting.
Do you see the type of investigative journalism (on political matters) coming from blogs and internet media? If not, could it come from there? It certainly requires a system of professional training (gathering and putting together information not necessarily available on the internet), resources and social capital (contacts).
Re-reading what I’d posted, I can see that these are questions that needed to be asked and responded to. Below is my response to Tim.
The problem with walled gardens such as Facebook is that you can be searched whenever you pass through their blue gates. In the course of being searched, undesired data can be refused – data like links to ‘abusive’ sites that facilitate copyright infringement. As of today, Facebook has declared war on The Pirate Bay, maintaining that because links to the site often infringe on someone’s copyright, linking to it violates the terms of service that Facebook users agree to. Given that The Pirate Bay is just a particularly specialized search engine, it would seem that Facebook is now going to start applying (American?) ethical and moral judgements to what people use to search for data. Sharing data is great, but only so long as it’s the ‘right kind’ of data.
What constitutes ‘infringing’ use when talking about a search engine? Google, as an example, lets individuals quickly and easily find torrent files that can subsequently be used to download/upload infringing material. The specific case being made against the Pirate Bay is that:
“Facebook respects copyrights and our Terms of Service prohibits placement of ‘Share on Facebook’ links on sites that contain ‘any content that is infringing.’ Given the controversy surrounding The Pirate Bay and the pending lawsuit against them, we’ve reached out to The Pirate Bay and asked them to remove the ‘Share on Facebook’ links from their site. The Pirate Bay has not responded and so we have blocked their torrents from being shared on Facebook.” (Source)