I recently received David Weinberger’s Everything is Miscellaneous: The Power of the New Digital Disorder and was excited. A great deal of my present work surrounds understanding metadata and the implications it has for the reconstitution of knowledge and the reordering of political association. Imagine my surprise when I quickly found that Weinberger fails to perform a substantive investigation of the role of metadata in the reconstitution of knowledge and society, in a book that emphasizes metadata’s role! At most, he skims the surface of what metadata can affect, glossing over specifics most of the time in favor of generalizations and limited references to Greek philosophers. After you’ve read the first 30-40 pages, the only things you really have to look forward to are (a) a few interesting discussions about blogging, tagging, and the challenges of monetizing past modes of organizing data in comparison to digital, metadata-based information associations; and (b) the end, when you can put the book away or give it to someone you aren’t terribly keen on.
While there are a handful of interesting parts in the book (in particular, the 2-3 pages on tagging data, and the opening discussion of first-, second-, and third-order data, which might be a useful conceptual device), I was grossly unimpressed with it on the whole. For a better read and a more useful investment of reading time, turn to Negroponte, Sunstein, Lessig, or even Erik Davis. Alternately, just go to Wired’s website and spend a couple of hours reading the free articles there that you’d otherwise spend reading this book. I can almost guarantee your time at Wired will be better spent.
How do I rate it? 1/5 stars.
Newman’s Protectors of Privacy: Regulating Personal Data in the Global Economy is exemplary in its careful exposition of Europe’s data protection regulations. Using a historical narrative approach, he demonstrates that Europe’s current preeminence in data protection is largely a consequence of the creation of regulatory authorities in member nations that were endowed with binding coercive powers. As a result of using the historical narrative method, he can firmly argue that neither liberal intergovernmentalist nor neo-functionalist theories can adequately account for the spread of data protection regulations in the EU. Disavowing the argument that market size alone is responsible for the spread of data protection between member nations, or for Europe’s ability to influence foreign data protection regulations, Newman argues that the considerable development of regulatory capacity in European member states, and in the EU itself, is key to Europe’s present leading role in the field of data protection.
Drawing on recent telecommunication retention directives, as well as agreements between the EU and US surrounding the sharing of airline passenger information, Newman reveals the extent to which data protection advocates can influence transnational agreements; influence, in the EU, turns out to be largely dependent on situating data privacy issues within the First Pillar. For Newman, Europe’s intentional development of regulatory expertise at the member state, and subsequently EU level, as demonstrated in the field of data privacy and tentatively substantiated by his brief reflection on the EU’s financial regulatory capacity, may lead the EU to play a more significant role in shaping international action than would be expected, given its smaller market size as compared to the US, China, and India.
Overall, I would highly recommend this book. If you are interested in the role of regulatory capacity in the ongoing issues of personal data (especially as it pertains to the EU), or if you just want to read an inviting, concise, and well-developed historical account of the development of EU data protection regulations, then this book is a great way to spend an evening or three.
Earlier this year, I was asked a very good question by my MA advisor. Omid asked, “Why do you study what you study?” At the time, I gave an incredibly disappointing answer – it was vague, disjointed, and really didn’t address the question in a forthright way. I think that there were a few reasons: first, I didn’t have time to prepare; second, I hadn’t reflected on this question in a deep manner that could be succinctly expressed; and third, I’m not very good at answering relatively complicated questions that link into my personal history on the spot. Since then, the question has been in the back of my mind, and I’ve come back to it on a frequent basis.
So, with that in mind, I want to put forth a provisional answer to “Why do you study what you study?” It’s going to involve touching on a few key computing moments in my life, formative elements of my undergraduate and graduate degrees, and how my background working in IT fits into things. If you want to skip straight to the final answer, head to the bottom of the post – in the intermediary sections I start linking together various facets of my life and education to form the structure for answering Omid’s question, and they may be of little interest to you.
I’ve had a computer in my house almost since I can remember. My dad had an old Tandy computer that I played very early video games on. It was a beast to navigate, and the commands were arcane (especially to a 4 or 5 year old!). That said, it was amazing that you could play games on it. It wasn’t until we moved from the Maritimes that there was a ‘household’ computer. It cost a small fortune, and was meant for school work. I, of course, quickly learned how to install games on it. This was in the days of DOS and Windows 3.11. I learned how to navigate via a command line, as well as what not to do when trying to fix computer problems (an early lesson: deleting full directories when you don’t know what is in them is a really, really, really bad idea!).
In Lessig’s most recent book, Remix, he avoids directly endorsing any particular method of alleviating the issues with copyright infringement. Rather, he notes that there are models that have been proposed to alter how monies are collected for copyright holders. I want to briefly attend to the notion that file signatures can be used to identify particular copyrighted works, and how deep packet inspection (DPI) could be used to facilitate this identification process.
The idea of using file signatures to track the movement of copyrighted files goes like this: when you create a work that you want copyrighted, the work is submitted to a body responsible for maintaining records on copyrighted works. We can imagine that this could be the national libraries. When a library receives the work, it creates a unique signature, or hash code, for the work. This signature is stored in the national library’s database, and is known to the copyright holder as well. We can imagine a situation where we could choose what kind of copyright we want a work to carry – there could be a full-stop copyright, a share-and-share-alike non-commercial style copyright, and so forth. By breaking copyright up in this fashion, it would be possible to identify more granularly how content can and should be used.
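To make the signature idea concrete, here is a minimal sketch in Python of what the registration-and-lookup scheme described above might look like. Everything here is an illustrative assumption on my part: the registry is just an in-memory dictionary standing in for a national library's database, the license categories are invented labels, and SHA-256 is one plausible choice of hashing function for generating a work's unique signature.

```python
import hashlib

def signature(work: bytes) -> str:
    """Compute a unique signature (SHA-256 hex digest) for a work's bytes."""
    return hashlib.sha256(work).hexdigest()

# Stand-in for a national library's database of registered works.
registry = {}

def register(work: bytes, license_type: str) -> str:
    """Register a work under a chosen license category; return its signature."""
    sig = signature(work)
    registry[sig] = license_type
    return sig

def lookup(work: bytes):
    """Identify a work's license terms by its signature, if it is registered."""
    return registry.get(signature(work))

# A rights holder registers a work under a hypothetical license category.
register(b"example novel text", "share-and-share-alike non-commercial")

print(lookup(b"example novel text"))  # prints the registered license category
print(lookup(b"unregistered work"))   # prints None: no signature on file
```

One caveat worth noting about any scheme like this: an exact hash changes completely if even one byte of the file differs (re-encoding, trimming, recompression), so identifying works in transit via DPI would in practice need something more robust than a plain cryptographic hash.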