I think about peer to peer (P2P) filesharing on a reasonably regular basis, for a variety of reasons (digital surveillance, copyright analysis and infringement, legal cases, value in efficiently mobilizing data, etc.). Something that always nags at me is the defense that P2P websites offer when they are sued by groups like the Recording Industry Association of America (RIAA). The defense goes something like this:
“We, the torrent website, are just a search engine. We don’t actually host the infringing files; we are just responsible for directing people to them. We’re no more guilty of copyright infringement than Google, Yahoo!, or Microsoft are.”
Let’s set aside the fact that Google has been sued for copyright infringement on the basis that it scrapes information from other websites, and instead turn our attention to the difference between what are termed ‘public’ and ‘private’ trackers. ‘Public’ trackers are available to anyone with a web connection and a torrent program. These sites do not require users to upload a certain amount of data to access the website – they are public insofar as few or no requirements are placed on users before they can access the torrent search engine and associated index. Registration is rarely required; thepiratebay.org and mininova.org are good examples. ‘Private’ trackers require users to sign up and log into the website before they can access the search engine and its associated index of .torrent files. Moreover, private trackers usually require users to maintain a particular share ratio – they must upload an amount of data that equals or exceeds the amount they download. Failure to maintain the required ratio results in users being kicked off the site – they can no longer log in and access the engine and index.
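The share-ratio rule is simple arithmetic. A minimal sketch of how a private tracker might evaluate an account – the 1.0 threshold and the function names are hypothetical illustrations, not any real site’s policy or code:

```python
def share_ratio(uploaded_bytes: int, downloaded_bytes: int) -> float:
    """Share ratio as private trackers compute it: uploaded / downloaded."""
    if downloaded_bytes == 0:
        # Nothing downloaded yet, so the ratio is effectively unbounded.
        return float("inf")
    return uploaded_bytes / downloaded_bytes

# Hypothetical threshold; real sites set their own and often vary it.
MINIMUM_RATIO = 1.0

def in_good_standing(uploaded_bytes: int, downloaded_bytes: int) -> bool:
    """True if the account meets the tracker's minimum share ratio."""
    return share_ratio(uploaded_bytes, downloaded_bytes) >= MINIMUM_RATIO

print(in_good_standing(uploaded_bytes=500, downloaded_bytes=1000))   # False: ratio 0.5
print(in_good_standing(uploaded_bytes=1200, downloaded_bytes=1000))  # True: ratio 1.2
```

An account that only downloads drifts below the threshold and, on most private sites, is eventually locked out exactly as described above.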
Ars Technica has a pretty good rebuttal to the recent piece in the London Times that offered the (seemingly) common line of crap that you hear when old industries talk about peer to peer networks. You know the line in its general format: “Without the guarantee of making money through our tried, tired and tested revenue streams, authors will stop writing, culture will wither away AND IT’S ALL YOUR FAULT!” (There is often a “Think of the children!” added in there for good measure.)
Now, why isn’t it likely that authors are going to flee writing like bookworms from a server farm?
(1) It’s a pain in the ass to scan a book, cover to cover. Don’t believe me? Scan a decent book and then post it for all of us at The Student Bay. I bet you give up before you get halfway through your task. And I bet that you can’t scan in Communicative Action (ISBN-10 0807015075) in a searchable PDF format! (Let’s see if this whole reverse psychology stuff really works…)
People use Google and Yahoo! throughout their daily lives – they need to know how to get from point A to point B, need to find e-commerce sites, need to search friends’ blogs, need to learn how to cook fish, and have (generally) grown used to having the equivalent of electronic encyclopedias at their fingertips at all times. I’m not going to bother addressing concerns that this might be detrimentally affecting how people learn to retain information (i.e. as information is increasingly retained as search strings rather than as info-articles) but want to instead briefly consider how search intersects with privacy.
We hear about the need to protect our private information all of the time. ‘Shred your bank statements’, ‘be wary of online commerce sites’, ‘never share personal information on the ‘net’, and other proclamations of wisdom are uttered in print and video on a regular basis and are, in most cases, completely ignored. Proponents of the commercialization of privacy use this as definitive proof that citizens really don’t care about their privacy like they did in days gone by – people are willing to give up their names, addresses, phone numbers, and other personal information to receive services that they want. In light of this, regulators should just butt out – the market has spoken!
In recent months more and more attention has been directed towards Google’s data retention policies. In May of 2007 Peter Fleischer, Google’s Global Privacy Counsel, laid out three key reasons why his company had to maintain search records:
(1) To improve their services. Specifically, he writes “Search companies like Google are constantly trying to improve the quality of their search services. Analyzing logs data is an important tool to help our engineers refine search quality and build helpful new services . . . The ability of a search company to continue to improve its services is essential, and represents a normal and expected use of such data.”

(2) To maintain security and prevent fraud and abuse. “Data protection laws around the world require Internet companies to maintain adequate security measures to protect the personal data of their users. Immediate deletion of IP addresses from our logs would make our systems more vulnerable to security attacks, putting the personal data of our users at greater risk. Historical logs information can also be a useful tool to help us detect and prevent phishing, scripting attacks, and spam, including query click spam and ads click spam.”

(3) To comply with legal obligations to retrieve data. “Search companies like Google are also subject to laws that sometimes conflict with data protection regulations, like data retention for law enforcement purposes.” (Source)
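As an aside, a compromise sometimes proposed between immediate deletion and indefinite retention is to truncate logged IP addresses rather than keep them whole: the coarsened address still supports aggregate analysis (rough geography, abuse patterns by network) while shedding the host-level identifier. A minimal sketch of the idea – the function is mine, not any search company’s actual code:

```python
def truncate_ipv4(ip: str) -> str:
    """Coarsen an IPv4 address by zeroing its final octet.

    Keeps the /24 network portion for aggregate log analysis while
    removing the part that identifies an individual host.
    """
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError(f"not an IPv4 address: {ip!r}")
    octets[-1] = "0"
    return ".".join(octets)

print(truncate_ipv4("203.0.113.57"))  # 203.0.113.0
```

Whether this counts as meaningful anonymization is itself contested – a truncated address combined with other log fields can still narrow a user down considerably.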
The Canadian SIGINT Summaries includes downloadable copies of leaked CSE documents, along with summary, publication, and original source information.
Parsons, Christopher; Molnar, Adam. (2021). “Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports,” in David Murakami Wood and David Lyon (Eds.), Big Data Surveillance and Security Intelligence: The Canadian Case.

Parsons, Christopher. (2015). “Stuck on the Agenda: Drawing Lessons from the Stagnation of ‘Lawful Access’ Legislation in Canada,” in Michael Geist (Ed.), Law, Privacy and Surveillance in Canada in the Post-Snowden Era (University of Ottawa Press).

Parsons, Christopher. (2015). “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” Telecom Transparency Project.

Parsons, Christopher. (2015). “Beyond the ATIP: New Methods for Interrogating State Surveillance,” in Jamie Brownlee and Kevin Walby (Eds.), Access to Information and Social Justice (Arbeiter Ring Publishing).

Bennett, Colin; Parsons, Christopher; Molnar, Adam. (2014). “Forgetting and the Right to be Forgotten,” in Serge Gutwirth et al. (Eds.), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges.

Bennett, Colin; Parsons, Christopher. (2013). “Privacy and Surveillance: The Multi-Disciplinary Literature on the Capture, Use, and Disclosure of Personal Information in Cyberspace,” in W. Dutton (Ed.), Oxford Handbook of Internet Studies.

McPhail, Brenda; Parsons, Christopher; Ferenbok, Joseph; Smith, Karen; Clement, Andrew. (2013). “Identifying Canadians at the Border: ePassports and the 9/11 Legacy,” in Canadian Journal of Law and Society 27(3).

Parsons, Christopher; Savirimuthu, Joseph; Wipond, Rob; McArthur, Kevin. (2012). “ANPR: Code and Rhetorics of Compliance,” in European Journal of Law and Technology 3(3).