Controversial Changes to Public Domain Works

by Muskingum University Library

A considerable number of today’s copyfight discussions revolve around the use of DRM to prevent transformative uses of works, to prevent the sharing of works, and to generally limit how individuals engage with the cultural artefacts around them. This post takes a step back from that debate: instead of looking at how new technologies butt heads against free speech, it thinks through the significance of transforming ‘classic’ works of the English literary canon. Specifically, I want to argue that NewSouth, Inc.’s decision to publish Huckleberry Finn without the word “nigger” – replacing it with “slave” – demonstrates the importance of works entering the public domain. I refrain from providing a normative framework to evaluate NewSouth’s actual decision – whether changing that particular word is good – and instead use their decision to articulate the conditions that distinguish ‘bad’ transformations of public domain works from ‘good’ ones. I will argue that uniform, uncontested, and totalizing modifications of public domain works are ‘bad’, whereas localized, particular, and discrete transformations should be encouraged, since they exist as free expressions capable of (re)generating discussions around topics of social import.

Copyright is intended to operate as an engine for generating expressive content. In theory, by providing a limited monopoly over expressions (not the ideas that are expressed), authors can receive some restitution for the fixed costs they invest in creating works. While it is true (especially in the digital era) that marginal costs trend towards zero, pricing based on marginal cost alone fails to adequately account for the sunk costs of actually writing. Admittedly, some do write for free (blogs and academic journal articles might stand as examples), but many people still write in the hope of earning their riches through publication. There isn’t anything wrong with profit motivating an author’s desire to create.
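As a back-of-the-envelope illustration of that point – using entirely hypothetical figures, not numbers drawn from the post – here is a quick sketch of why pricing at marginal cost alone can never recover an author’s sunk investment:

```python
# Hypothetical figures for illustration only.
fixed_cost = 50_000.0     # sunk cost of writing and editing the work
marginal_cost = 0.05      # near-zero cost of delivering one more digital copy
copies_sold = 100_000

# Selling at marginal cost covers distribution but contributes nothing toward the fixed cost.
price_at_marginal_cost = marginal_cost
contribution_to_fixed = (price_at_marginal_cost - marginal_cost) * copies_sold   # always 0

# A markup over marginal cost is needed to amortize the sunk investment.
break_even_price = marginal_cost + fixed_cost / copies_sold

print(f"Recovered toward fixed cost at marginal-cost pricing: ${contribution_to_fixed:,.2f}")
print(f"Break-even price per copy at {copies_sold:,} copies: ${break_even_price:.2f}")
```

However the figures are set, the structure is the same: any price equal to marginal cost leaves the entire fixed cost unrecovered, and that gap is what the limited monopoly is meant to close.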

Continue reading

Review of Telecommunications Policy in Transition

Image courtesy of the MIT Press

This first: the edited collection is a decade old. Given the rate at which communications technologies and information policies change, several of the articles are…outmoded. Don’t turn here for the latest, greatest, and most powerful analyses of contemporary communications policy. A book published in 2001 is good for anchoring subsequent reading in telecom policy, but less helpful for guiding present-day policy analyses.

Having said that: there are some genuine gems in this book, including one of the most forward-thinking essays on network neutrality of the past decade, by Blumenthal and Clark. Before getting to their piece, I want to touch on O’Donnell’s contribution, “Broadband Architectures, ISP Business Plans, and Open Access”. He reviews architectures and ISP service portfolios to demonstrate that open access is both technically and economically feasible, though he acknowledges that implementation is not a trivial task. In the chapter he argues that the FCC should encourage deployment of open-access-ready networks to reduce the costs of future implementation; I think it’s pretty safe to say that that ship has sailed and open access is (largely) a dead issue in the US today. That said, he offers an excellent overview of the differences between ADSL and cable networks, and identifies the pain points of interconnection in each architecture.

Generally, O’Donnell sees interconnection as less of a hardware problem and more of a network management issue. In discussing the need for, and value of, open access, O’Donnell does a good job of noting the dangers of throttling (at a time well ahead of ISPs’ contemporary throttling regimes), writing:

differential caching and routing need not be blatant to be effective in steering customers to preferred content. The subtle manipulation of the technical performance of the network can condition users unconsciously to avoid certain “slower” web sites. A few extra milliseconds’ delay strategically inserted here and there, for example, can effectively shepherd users from one web site to another (p. 53).
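To make the quoted dynamic concrete, here is a toy calculation of my own – the request counts and delays are assumed for illustration, not taken from O’Donnell – showing how “a few extra milliseconds” per request compounds across the many objects fetched to render a single page:

```python
# Toy model with assumed values; real page loads vary widely.
requests_per_page = 60      # objects fetched to render a typical page
baseline_rtt_ms = 40        # round-trip time without interference
injected_delay_ms = 5       # "a few extra milliseconds" added per request

def page_load_ms(extra_ms: float, concurrency: int = 6) -> float:
    """Rough estimate: requests proceed in batches limited by connection concurrency."""
    batches = -(-requests_per_page // concurrency)   # ceiling division
    return batches * (baseline_rtt_ms + extra_ms)

neutral = page_load_ms(0)
shaped = page_load_ms(injected_delay_ms)
print(f"Neutral load: {neutral:.0f} ms; shaped load: {shaped:.0f} ms "
      f"({100 * (shaped - neutral) / neutral:.0f}% slower)")
```

A slowdown of that size is easy to feel but hard to attribute, which is exactly the kind of unconscious conditioning the quotation describes.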

Continue reading

Review of Wired Shut: Copyright and the Shape of Digital Culture

Image courtesy of the MIT Press

Gillespie argues that we must examine the technical, social-cultural, legal, and market approaches to copyright in order to understand the ethical, cultural, and political implications of how copyrights are secured in the digital era. Contemporary measures predominantly rely on encryption to survey and regulate content, which has the effect of intervening before infringement can even occur. This new approach stands in contrast to how copyright regulation operated previously, when individuals were prosecuted after having committed copyright infringement. The shift to pre-regulation treats all users as criminals, makes copyright less open to fair use, renders opposition to copyright law through civil disobedience challenging, and undermines the sense of moral autonomy required for citizens to recognize copyright law’s legitimacy. In essence, the assertion of control over content, facilitated by digital surveillance and encryption schemes, has profound impacts on what it means to be, and act as, a citizen in the digital era.

This text does an excellent job of working through how laws such as the Digital Millennium Copyright Act (DMCA), accompanied by the design of technologies and the political efforts of lobbyists, have established a kind of ‘paracopyright’ regime. This regime limits uses that were once socially and technically permissible, and is thus seen as undermining long-held (analogue-based) notions of what constitutes acceptable sharing of content and media. In establishing closed trusted systems that are regulated by law and approved by political actors, content industries are forging digitality to be receptive to the principles of mass-produced culture.

Continue reading

Distinguishing Between Mobile Congestions

by Simon Tunbridge

There is an ongoing push to ‘better’ monetize the mobile marketplace. In this near-future market, wireless providers use DPI and other Quality of Service equipment to charge subscribers for each and every action they take online. The past few weeks have seen Sandvine and other vendors talk about this potential, and Rogers has begun testing the market to determine whether mobile customers will pay for data prioritization. The prioritization of data is a network neutrality issue proper, and one that demands careful consideration and examination.

In this post, I’m not talking about network neutrality. Instead, I’m going to talk about what supposedly drives prioritization schemes in Canada’s wireless marketplace: congestion. Consider this a riposte to the oft-touted position that ‘wireless is different’: ISPs assert that wireless is different from wireline for their own regulatory ends, but blur the distinction between the two when pitching ‘congestion management’ schemes to customers. In this post I suggest that the congestion faced by AT&T and other wireless providers has far less to do with data congestion than with signal congestion, and that carriers have to own responsibility for the latter.
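To give a rough sense of the distinction, here is a toy comparison of my own – all of the numbers are assumed for illustration – between the data a device moves and the control-plane (signalling) work it generates. Every time an idle radio wakes to send traffic, the network handles a burst of signalling messages regardless of how few bytes actually move, which is why ‘chatty’ applications strain carriers’ networks out of proportion to their data volumes:

```python
# Toy comparison of data volume versus signalling load; values are assumed, not measured.
SIGNALLING_MSGS_PER_WAKEUP = 30   # rough order of magnitude for one radio state transition

def hourly_load(transfers_per_hour: int, bytes_per_transfer: int) -> tuple[int, int]:
    """Return (data bytes, signalling messages) generated in one hour."""
    data_bytes = transfers_per_hour * bytes_per_transfer
    signalling_msgs = transfers_per_hour * SIGNALLING_MSGS_PER_WAKEUP
    return data_bytes, signalling_msgs

chatty_app = hourly_load(transfers_per_hour=120, bytes_per_transfer=500)            # keep-alives
video_stream = hourly_load(transfers_per_hour=4, bytes_per_transfer=50_000_000)     # large bursts

print("chatty app:   {:>12,} bytes, {:>6,} signalling messages".format(*chatty_app))
print("video stream: {:>12,} bytes, {:>6,} signalling messages".format(*video_stream))
```

On these (invented) figures the chatty application moves a tiny fraction of the stream’s data while generating thirty times the signalling traffic, which is the sense in which signal congestion and data congestion come apart.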

Continue reading

Publication – Digital Inflections: Post-Literacy and the Age of Imagination

Earlier this year I was contacted by CTheory to find and interview interesting people who are doing work at the intersection of theory, digitality, and information. Michael Ridley, the Chief Information Officer and Chief Librarian at the University of Guelph, was the first person who came to mind. I met with Michael earlier this year for a face-to-face discussion, and our conversation has since been transcribed and published at CTheory. Below is the full introduction to the interview.

“… [O]ne of the things about librarians is that they’re subversive in the nicest possible ways. They’ve been doing the Wikileak thing for centuries, but just didn’t get the credit for it. This is what we try to do all the time; we try to reduce the barriers and open up that information.”
— Michael Ridley

Self-identifying as the University’s Head Geek and Chief Dork, Michael Ridley leads a life of the future by reconfiguring access to the past. As Chief Librarian and Chief Information Officer of the University of Guelph, Ridley spends his days integrating digital potentialities and the power of imagination with the cultural and historical resources of the library. Seeing the digital as a liminal space between the age of the alphabet and an era of post-literacy, he is transforming the mission of libraries: gone are the days when libraries focused primarily on developing collections. Today, collections are the raw materials fueling the library as a dissonance engine, an engine enabling collaborative, cross-disciplinary imaginations.

With a critical attitude towards the hegemony of literacy and a prognostication of digitality’s impending demise, Ridley uses his position at the University of Guelph to facilitate radical reconsiderations of the library’s present and forthcoming roles. He received his M.L.S. from the University of Toronto, his M.A. from the University of New Brunswick, and has been a professional librarian since 1979. So far, Michael has served as President of the Canadian Association for Information Science, President of the Ontario Library Association, Board member of the Canadian Association of Research Libraries, and Chair of the Ontario Council of Universities. He is presently a board member of the Canadian Research Knowledge Network and of the Canadian University Council of CIOs. He has received an array of awards, and was most recently awarded the Miles Blackwell Award for Outstanding Academic Librarians by the Canadian Association of College and University Libraries. Ridley has published extensively about the intersection of networks, digital systems, and libraries, including “The Online Catalogue and the User,” “Providing Electronic Library Reference Service: Experiences from the Indonesia-Canada Tele-Education Project,” “Computer-Mediated Communications Systems,” and “Community Development in the Digital World.” He has also co-edited volumes one and two of The Public-Access Computer Systems Review. Lately, his work has examined the potentials of post-literacy, which has seen him teach an ongoing undergraduate class on literacy and post-literacy as well as give presentations and publish on the topic.

Read the full conversation at CTheory

iPhone Promiscuity

Photo credit: Steve Keys

I’ve written a fair bit about mobile phones; they’re considerable conveniences that are accompanied by serious security, privacy, and technical deficiencies. Perhaps unsurprisingly, Apple’s iPhone has received a considerable amount of criticism in the press and from industry, because of the Apple aura of producing ‘excellent’ products combined with the general popularity of the company’s mobile device lines.

In this short post I want to revisit two issues I’ve previously written about: the volume of information that the iPhone emits when attached to WiFi networks, and the device’s contribution to carriers’ wireless network congestion. The first issue is revisited to document, for my readers and my own projects, just how much information the iPhone makes available to third parties. The second reveals that a technical solution resolved the underlying cause of the wireless congestion associated with Apple products; trapping customers in bucket-based data plans in response to that congestion therefore primarily served financial bottom lines instead of customers’ interests. This instance of leveraging an inefficient (economic) solution to a technical problem might, then, function as a good example of the difference between ‘reasonable technical management’ that folds in business as well as technical goals and the management of just the network infrastructure itself.
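As a rough illustration of the first point – how much a phone can volunteer over WiFi to anyone passively listening nearby – the sketch below logs the 802.11 probe requests a device broadcasts, which can include the names of networks it has previously joined. This is my own minimal example using Scapy, not the method from the post; it assumes a wireless card already in monitor mode on an interface named wlan0mon, and should only be run against devices and networks you are permitted to observe:

```python
# Passive sketch: log 802.11 probe requests broadcast by nearby devices.
# Assumes an interface in monitor mode named "wlan0mon" and sufficient privileges.
from scapy.all import sniff, Dot11, Dot11ProbeReq, Dot11Elt

def log_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt[Dot11].addr2                      # transmitting device's MAC address
        elt = pkt.getlayer(Dot11Elt)                # first information element carries the SSID
        ssid = elt.info.decode(errors="replace") if elt and elt.info else "<broadcast>"
        print(f"{mac} probing for network: {ssid}")

sniff(iface="wlan0mon", prn=log_probe, store=False)
```

Even this crude capture makes the broader point: the handset announces itself, and often parts of its network history, to any passive listener in range.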

Continue reading