The CBC’s Jesse Brown has a nice piece that tries to answer the question, “Is Throttling Necessary?” I won’t spoil the answer (or possible lack of an answer), but I will note that Jesse incorporated a few pieces of information that I’ve posted about here. If you’re not already subscribed to his Search Engine podcast, you should be – it’s among the best Canadian tech journalism that’s accessible to non-tech people.
P2P and Complicity in Filesharing
I think about peer to peer (P2P) filesharing on a reasonably regular basis, for a variety of reasons (digital surveillance, copyright analysis and infringement, legal cases, value in efficiently mobilizing data, etc.). Something that always nags at me is the defense that P2P websites offer when they are sued by groups like the Recording Industry Association of America (RIAA). The defense goes something like this:
“We, the torrent website, are just a search engine. We don’t actually host the infringing files; we are just responsible for directing people to them. We’re no more guilty of copyright infringement than Google, Yahoo!, or Microsoft are.”
Let’s set aside the fact that Google has been sued for copyright infringement on the basis that it scrapes information from other websites, and instead turn our attention to the difference between what are termed ‘public’ and ‘private’ trackers. ‘Public’ trackers are available to anyone with a web connection and a torrent program. These sites do not require users to upload a certain amount of data to access the website – they are public insofar as few or no requirements are placed on users to access the torrent search engine and associated index. Registration is rarely required. Good examples are thepiratebay.org and mininova.org. ‘Private’ trackers require users to sign up and log into the website before they can access the search engine and associated index of .torrent files. Moreover, private trackers usually require users to maintain a particular share ratio – they must upload an amount of data that equals or exceeds the amount of data they download. Failure to maintain the required share ratio results in users being kicked off the site – they can no longer log in and access the engine and index.
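The ratio requirement described above is simple to express in code. Here’s a minimal sketch of how a private tracker might enforce it – the function names and the 1.0 minimum ratio are illustrative assumptions on my part, not any real site’s policy:

```python
# Illustrative sketch of a private tracker's share-ratio check.
# The 1.0 threshold and all names here are assumptions for illustration.

def share_ratio(uploaded_bytes: int, downloaded_bytes: int) -> float:
    """Ratio of data a user has uploaded to data they have downloaded."""
    if downloaded_bytes == 0:
        return float("inf")  # nothing downloaded yet, so no ratio to fail
    return uploaded_bytes / downloaded_bytes

def may_keep_account(uploaded_bytes: int, downloaded_bytes: int,
                     minimum_ratio: float = 1.0) -> bool:
    """A user stays in good standing while their ratio meets the minimum."""
    return share_ratio(uploaded_bytes, downloaded_bytes) >= minimum_ratio

GiB = 2**30
# Uploading 8 GiB against 10 GiB downloaded falls below a 1.0 minimum:
print(may_keep_account(8 * GiB, 10 * GiB))   # False
print(may_keep_account(12 * GiB, 10 * GiB))  # True
```

In practice the tracker learns these totals because torrent clients report their uploaded and downloaded byte counts each time they announce to the tracker, which is what makes this kind of enforcement possible on private sites.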
Review: Access Denied
The OpenNet Initiative’s (ONI) mission is to “identify and document Internet filtering and surveillance, and to promote and inform wider public dialogs about such practices.” Access Denied: The Practice and Policy of Global Internet Filtering is one of their texts that effectively draws together years of their research, and presents it in an accessible and useful manner for researchers, activists, and individuals who are simply interested in how the Internet is shaped by state governments.
The text is separated into two broad parts – the first is a series of essays that situate the data that has been collected into a quickly accessible framework. The authors of each essay manage to retain a reasonable level of technical acumen, even when presenting their findings and the techniques of filtering to a presumably non-technical audience. It should be noted that the data collected only runs up to 2007 – if you’re reading the text in the hope that the authors will directly address filtering technologies that have recently been in the news, such as Deep Packet Inspection, you’re going to be disappointed (though they do allude to Deep Packet technologies, without explicitly focusing on them, in a few areas). Throughout the text there are references to human rights and, while I’m personally a proponent of them, I wish that the authors had endeavored to lay out more of the complexities of human rights discourse – while they don’t present these rights as unproblematic, I felt that more depth would have been rewarding both for their analysis and for the benefit of the reader. This having been said, I can’t begrudge the authors of the essays for drawing on human rights at various points in their respective pieces – doing so fits perfectly within ONI’s mandate, and their arguments surrounding the use of human rights are sound.
Ownership of Public Clouds
I’ve recently been chewing through BlueMountainLab’s podcasts on Cloud Computing. I’ll be honest – I’m a skeptic when it comes to cloud computing, but I’m developing a better understanding of it after listening to ’casts on this topic for about two hours (or maybe I’ve just been brainwashed?). If you’re not immediately familiar with what this term means, check out the below video – you’ll see some of the biggest and brightest minds in digital technologies explain in simple terms what ‘cloud computing’ is.
Unless you’ve been living under a rock, or away from a digital connection, for the past couple of years, you’ve likely experienced cloud computing. Have you hopped into Google Docs, Zimbra, or any other environment where you perform standard tasks in a web-based environment? If so, you’ve been ‘in the cloud’. What we’re seeing is a shift away from centrally owned company infrastructure toward infrastructure that is owned and operated by another company. To picture it: rather than hosting your own mail servers, you shift your corporation over to Google Apps and, at the same time, take advantage of the word processing, chat, and page creation features that accompany the Google solution. Should you need to increase storage or alter your current feature set, you can have it set up in a few hours – this contrasts with spending corporate resources acquiring a solution, installing it, educating your users, and so on. By outsourcing high-cost, time-intensive operations you can realign your IT staff so that they can focus on corporate issues: designing unique solutions for unique problems, focusing their skill sets in more cost-effective areas, etc.
Review: Everything is Miscellaneous
I recently received David Weinberger’s Everything is Miscellaneous: The Power of the New Digital Disorder and was excited. A great deal of my present work surrounds understanding metadata, and the implications that it has for the reconstitution of knowledge and reordering of political association. Imagine my surprise when I quickly found that Weinberger fails to perform a substantive investigation of the role of metadata in the reconstitution of knowledge and society, in a book that emphasizes metadata’s role! At most, he skims the surface of what metadata can affect, glossing over specifics most of the time in favor of generalizations and limited references to Greek philosophers. After you’ve read the first 30-40 pages, the only things you really have to look forward to are (a) a few interesting discussions about blogging, tagging, and the challenges in monetizing past modes of organizing data in comparison to digital metadata-based information-associations; and (b) the end, when you can put the book away or give it to someone you aren’t terribly keen on.
While there are a handful of interesting parts in the book (in particular the 2-3 pages on tagging data, and the opening discussion of first-, second-, and third-order data, which might be a useful conceptual device), I was grossly unimpressed with it on the whole. For a better read and a more useful investment of reading time, turn to Negroponte, Sunstein, Lessig, or even Erik Davis. Alternately, just go to Wired’s website and spend a couple of hours reading the free articles there that you’d otherwise spend reading this book. I can almost guarantee your time at Wired will be better spent.
How do I rate it? 1/5 stars.
Review: Protectors of Privacy
Newman’s Protectors of Privacy: Regulating Personal Data in the Global Economy is exemplary in its careful exposition of Europe’s data protection regulations. Using a historical narrative approach, he demonstrates that Europe’s current preeminence in data protection is largely a consequence of the creation of regulatory authorities in member nations that were endowed with binding coercive powers. As a result of using the historical narrative method, he can firmly argue that neither liberal intergovernmentalist nor neo-functionalist theories can adequately account for the spread of data protection regulations in the EU. Disavowing the argument that market size alone is responsible for the spread of data protection between member nations, or in explaining Europe’s ability to influence foreign data protection regulations, Newman argues that the considerable development of regulatory capacity in European member states, and in the EU itself, is key to Europe’s present leading role in the field of data protection.
Drawing on recent telecommunication retention directives, as well as agreements between the EU and US surrounding the sharing of airline passenger information, Newman reveals the extent to which data protection advocates can influence transnational agreements; influence, in the EU, turns out to be largely dependent on situating data privacy issues within the First Pillar. For Newman, Europe’s intentional development of regulatory expertise at the member-state and, subsequently, the EU level – as demonstrated in the field of data privacy and tentatively substantiated by his brief reflection on the EU’s financial regulatory capacity – may lead the EU to play a more significant role in shaping international action than would be expected, given its smaller market size as compared to the US, China, and India.
Overall, I would highly recommend this book. If you are interested in the role of regulatory capacity in the ongoing issues of personal data (especially as it pertains to the EU), or if you just want to read an inviting, concise, and well-developed historical account of the development of EU data protection regulations, then this book is a great way to spend an evening or three.
