AT&T’s Anti-Infringement Patent

Network surveillance is a persistent issue that privacy advocates warn about on a regular basis. In the wake of Edward Snowden’s disclosures, the public has often been concerned about how, when, and why corporations disclose information to policing, security, and intelligence services. Code-named projects like PRISM, NUCLEON, and MAINWAY, combined with the shadowy nature of how data is collected and used, make Snowden’s very serious revelations a hot topic to talk, write, and think about.

However, it’s important to recognize that the corporations entrusted with significant amounts of our personal information often independently analyze and process that information in ways we don’t expect. In this post I discuss a patent that AT&T received a little over a year ago to analyze the personal communications of its subscribers to catch instances of copyright infringement. I begin by providing information concerning AT&T’s patent. From there, I discuss other companies’ efforts to develop and deploy similar systems in Europe to shed more light on how AT&T’s system might work. This post concludes by considering a range of reasons that might have driven AT&T to file for its patent, and notes why it’s important to place patents within the broader policy ecosystem that telecommunications companies operate within, instead of analyzing such patents in isolation.

Continue reading

Controversial Changes to Public Domain Works

by Muskingum University Library

A considerable number of today’s copyfight discussions revolve around the use of DRM to prevent transformative uses of works, to prevent the sharing of works, and to generally limit how individuals engage with the cultural artifacts around them. This post takes a step back from that, thinking through the significance of transforming ‘classic’ works of the English literary canon instead of looking at how new technologies butt heads against free speech. Specifically, I want to argue that NewSouth, Inc.’s decision to publish Huckleberry Finn without the word “nigger” – replacing it with “slave” – demonstrates the importance of works entering the public domain. I refrain from providing a normative framework to evaluate NewSouth’s actual decision – whether changing the particular word is good – and instead use their decision to articulate the conditions constituting ‘bad’ transformations versus ‘good’ transformations of public domain works. I will argue that uniform, uncontested, and totalizing modifications of public domain works are ‘bad’, whereas localized, particular, and discrete transformations should be encouraged, given their existence as free expressions capable of (re)generating discussions around topics of social import.

Copyright is intended to operate as an engine for generating expressive content. In theory, by providing a limited monopoly over expressions (not the ideas that are expressed), authors can receive some restitution for the fixed costs they invest in creating works. While it is true (especially in the digital era) that marginal costs trend towards zero, pricing based on marginal cost alone fails to adequately account for the sunk costs of actually writing. Admittedly, some do write for free (blogs and academic journal articles might stand as examples), but many people still write in the hope of earning their riches through publication. There isn’t anything wrong with profit motivating an author’s desire to create.

Continue reading

Review of Telecommunications Policy in Transition

Image courtesy of the MIT Press

First things first: the edited collection is a decade old. Given the rate at which communications technologies and information policies change, several of the articles are…outmoded. Don’t turn here for the latest, greatest, and most powerful analyses of contemporary communications policy. A book published in 2001 is good for anchoring subsequent reading in telecom policy, but less helpful for guiding present-day policy analyses.

Having said that: there are some genuine gems in this book, including one of the most forward-thinking essays on network neutrality of the past decade, by Blumenthal and Clark. Before getting to their piece, I want to touch on O’Donnell’s contribution, “Broadband Architectures, ISP Business Plans, and Open Access”. He reviews architectures and ISP service portfolios to demonstrate that open access is both technically and economically feasible, though he acknowledges that implementation is not a trivial task. In the chapter he argues that the FCC should encourage deployment of open-access-ready networks to reduce the costs of future implementation; I think it’s pretty safe to say that that ship has sailed, and open access is (largely) a dead issue in the US today. That said, he has an excellent overview of the differences between ADSL and cable networks, and identifies the pain points of interconnection in each architecture.

Generally, O’Donnell sees interconnection as less of a hardware problem and more of a network management issue. In discussing the need for, and value of, open access, O’Donnell does a good job of noting the dangers of throttling (at a time well ahead of ISPs’ contemporary throttling regimes), writing:

differential caching and routing need not be blatant to be effective in steering customers to preferred content. The subtle manipulation of the technical performance of the network can condition users unconsciously to avoid certain “slower” web sites. A few extra milliseconds’ delay strategically inserted here and there, for example, can effectively shepherd users from one web site to another (p. 53).
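
To make the quoted mechanism concrete, here is a minimal, hypothetical sketch (mine, not O’Donnell’s) of how a few milliseconds of injected per-request delay compound across the many sub-requests in a page load. It assumes serial requests for simplicity (real browsers parallelize), and all figures are invented for the example.

```python
# Hypothetical illustration of the 'steering' effect described above: a small
# per-request delay, compounded across a page's many sub-requests, makes a
# disfavoured site feel noticeably slower. All figures are invented.

def page_load_ms(base_rtt_ms: float, requests: int, injected_delay_ms: float = 0.0) -> float:
    """Total load time if every request pays the base RTT plus any injected delay."""
    return requests * (base_rtt_ms + injected_delay_ms)

preferred = page_load_ms(base_rtt_ms=30, requests=60)                          # untouched traffic
disfavoured = page_load_ms(base_rtt_ms=30, requests=60, injected_delay_ms=5)   # +5 ms per request

print(f"preferred site:   {preferred:.0f} ms")     # 1800 ms
print(f"disfavoured site: {disfavoured:.0f} ms")   # 2100 ms -- feels 'slower'
```

The point is simply that no single 5 ms delay is detectable, yet the aggregate difference is enough to condition users toward the “faster” site, exactly as the quotation warns.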

Continue reading

Review of Wired Shut: Copyright and the Shape of Digital Culture

Image courtesy of the MIT Press

Gillespie argues that we must examine the technical, social-cultural, legal, and market approaches to copyright in order to understand the ethical, cultural, and political implications of how copyrights are secured in the digital era. Contemporary measures predominantly rely on encryption to surveil and regulate content, which has the effect of intervening before infringement can even occur. This new approach stands in contrast to how copyright regulation previously operated: individuals were prosecuted after having committed copyright infringement. The shift to pre-regulation treats all users as criminals, makes copyright less open to fair use, renders opposition to copyright law through civil disobedience challenging, and undermines the sense of moral autonomy required for citizens to recognize copyright law’s legitimacy. In essence, the assertion of control over content, facilitated by digital surveillance and encryption schemes, has profound impacts on what it means to be, and act as, a citizen in the digital era.
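
To make the contrast concrete, consider a toy sketch (entirely my own, not from Gillespie’s book) of pre-regulation: a trusted system refuses playback up front unless a license validates, so the question of whether a use would be fair is never even asked. The key and content names are invented.

```python
# Toy contrast illustrating 'pre-regulation': the technical gate acts before
# any use occurs, lawful or not. Entirely illustrative; no real DRM scheme
# is this simple, and the secret key and content IDs are invented.

import hashlib
import hmac

SECRET_KEY = b"licensing-authority-key"  # assumed shared secret of the 'trusted system'

def issue_license(content_id: str) -> str:
    """The licensing authority signs a token for a piece of content."""
    return hmac.new(SECRET_KEY, content_id.encode(), hashlib.sha256).hexdigest()

def play(content_id: str, license_token: str) -> None:
    # The check happens *before* playback: intervention precedes any infringement.
    expected = issue_license(content_id)
    if not hmac.compare_digest(expected, license_token):
        raise PermissionError("no valid license: playback refused up front")
    print(f"playing {content_id}")

play("digital-edition", issue_license("digital-edition"))   # licensed copy plays
try:
    play("digital-edition", "forged-token")                  # unlicensed copy never plays
except PermissionError as err:
    print(err)  # refused pre-emptively; no court, no fair-use judgment
```

The refusal happens before any act takes place, which is precisely the shift from post-hoc prosecution to pre-regulation that Gillespie highlights.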

This text does an excellent job of working through how laws such as the Digital Millennium Copyright Act (DMCA), accompanied by the designs of technologies and the political efforts of lobbyists, have established a kind of ‘paracopyright’ regime. This regime limits uses that were once socially and technically permissible, and is thus seen as undermining long-held (analogue-based) notions of what constitutes acceptable sharing of content and media. In establishing closed trusted systems that are regulated by law and have received approval from political actors, content industries are forging digitality to be receptive to the principles of mass-produced culture.

Continue reading

Ole, Intellectual Property, and Taxing Canadian ISPs

Ole, a Canadian independent record label, has put forward an often-heard and much-disputed proposal to enhance record label revenues: Ole wants ISPs to surveil Canada’s digital networks for copyrighted works. In the record label’s July 12 filing for the Digital Economy Consultations, entitled “Building Delivery Systems at the Expense of Content Creators,” Ole asserts that ISPs function as “short circuits” that let music customers avoid purchasing music on the free market. Rather than go to the market, customers are (behaving as rational economic actors…) instead using ISP networks to download music. That music is being downloaded is an unquestionable reality, but the stance that this indicates ISP liability for customers’ actions seems to be an effort to re-frame record industries’ unwillingness to adopt contemporary business models as a matter for ISPs to now deal with. In this post, I want to briefly touch on Ole’s filing and the realities of network surveillance for network-grade content awareness in today’s market. I’ll conclude by suggesting that many of the problems presently facing labels are of their own making, and that we should, at best, pity the labels and, at worst, fear what they crush in the death throes induced by disruptive technologies.

Ole asserts that there are two key infotainment revenue streams that content providers, such as ISPs, maintain: the $150 Cable TV stream and the $50 Internet stream. Given that content providers are required to redistribute some of the $150/month to content creators (often between 40 and 50 cents of every dollar collected), Ole argues that ISPs should be similarly required to distribute some of the $50/month to the content creators who make the Internet worth using for end-users; a back-of-the-envelope version of this arithmetic appears after the quotation below. Unstated, but presumed, is a very 1995 understanding of both copyright and digital networks. In 1995 the American Information Infrastructure Task Force released its Intellectual Property and the National Information Infrastructure report, wherein they wrote:

…the full potential of the NII will not be realized if the education, information and entertainment products protected by intellectual property laws are not protected effectively when disseminated via the NII…the public will not use the services available on the NII and generate the market necessary for its success unless a wide variety of works are available under equitable and reasonable terms and conditions, and the integrity of those works is assured…What will drive the NII is the content moving through it.

Of course, the assertion that if commercial content creators don’t make their works available on the Internet then the Internet will collapse is patently false.
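
As flagged above, here is the back-of-the-envelope arithmetic behind Ole’s analogy. The $150/$50 streams and the 40-50 cent redistribution range come from the filing as described; treating them as per-subscriber monthly figures is my own simplifying assumption.

```python
# Back-of-the-envelope arithmetic for Ole's cable/Internet analogy.
# The $150/$50 streams and the 40-50 cent range are from the post above;
# reading them as per-subscriber monthly figures is an assumption.

CABLE_REVENUE = 150.0     # monthly Cable TV revenue per subscriber ($)
INTERNET_REVENUE = 50.0   # monthly Internet revenue per subscriber ($)

for rate in (0.40, 0.50):
    cable_payout = CABLE_REVENUE * rate
    internet_payout = INTERNET_REVENUE * rate  # what Ole's proposal would imply
    print(f"at {rate:.0%}: cable redistributes ${cable_payout:.2f}/month; "
          f"the Internet stream would redistribute ${internet_payout:.2f}/month")
```

On these assumed figures, Ole’s proposal would oblige ISPs to hand over roughly $20-$25 per subscriber per month, which makes clear why the analogy is so contested.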

Continue reading

Thoughts on COUNTER: Counterfeiting and Piracy Research Conference

Last week I was a participant at the COUNTER: Counterfeiting and Piracy Research Conference in Manchester, UK. I was invited by Joseph Savirimuthu to be part of a panel on deep packet inspection, as well as to enjoy the conference more generally. It was, without a doubt, one of the best conferences that I have attended – it was thought-provoking and (at points) anger-inducing, good food and accommodations were provided, and excellent discussions were had. What I want to do here is talk about some of the resonating themes that coursed through the conference, and try to situate a few of the positions and participants to give an insight into what was talked about.

The COUNTER project is a European research project exploring the consumption of counterfeit and pirated leisure goods. It has a series of primary research domains, including: (1) the frequency and distribution of counterfeits; (2) consumer attitudes to counterfeit and pirated goods; (3) legal and ethical frameworks for intellectual property; (4) policy options for engaging with consumers of counterfeits; (5) the use of copyrighted goods for the creation of new cultural artifacts; and (6) the impacts of counterfeiting and the control of intellectual property.

Continue reading