It’s been a while since I’ve updated this blog regularly – since I last wrote, I’ve completed my Master’s thesis, traveled to Brasil, sent out applications to Doctoral programs, found (temporary) full-time employment, and rested my brain a bit. Now I feel rejuvenated and ready to get back into the swing of things.

Setting the Stage

We are increasingly living in a hybrid world, one where our lives are being digitized. We eat food (analogue) but order it online (digital); we use our voices to talk with one another (analogue) using cell phones (digital); we read cooking recipes (analogue) from recipe websites (digital). Beyond what we actually do, what happens around us, and shapes how we are capable of interacting, often occurs within digital spaces – banking institutions are networked, government documents are sent across departments by email, and major corporate executives who make (oftentimes) global decisions seem to have Blackberries surgically attached to themselves.

The transition to new technological systems isn’t an inherently bad thing, nor is it inherently good – I’m unwilling to go so far as to engage with the ontology of communication technologies without a much deeper understanding of human psychology, moral and ethical theory, and systems of ethico-existential theory. So, rather than focus on the technology as it is, I want to turn to the effects of that technology should adequate privacy regulations not be deployed to stymie the potential negative effects of digital systems on political action. In particular, I’m interested in how digital systems relate to political action that depends on a functioning, discursively generated political environment.

The All Seeing Digital Eye

In a previous post, entitled ‘Public and Private Digital Space’, I noted the risks associated with Deep Packet Inspection (DPI) hardware. This technology looks past packet header information and into the payload of the packets themselves. I noted at the time that DPI threatened to alter the structure of public and private spaces. Presently, Internet Service Providers (ISPs) tend to examine packet header information and, based on the header’s contents, allow or prevent the packet from passing along their network. Thus (in theory) ‘private’ actions can be recognized by looking at header information alone and then passed along. Now, in light of the ability to encrypt and/or mask the accuracy of packet information, I wrote that ISPs might shift their behaviour and begin examining and regulating the data traffic moving across their networks.
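To make the header-versus-payload distinction concrete, here is a minimal Python sketch. It is not wire-accurate (checksums, options, and real addressing are omitted), and the ports, addresses, and the `.torrent` signature are invented purely for illustration – the point is simply that shallow inspection sees only header fields, while deep inspection reads the content behind them.

```python
import struct

def build_ipv4_tcp_packet(src_port, dst_port, payload):
    """Construct a simplified IPv4+TCP packet for demonstration.
    Checksums are stubbed to zero -- this is not wire-accurate."""
    ip_header = struct.pack("!BBHHHBBH4s4s",
        0x45, 0, 40 + len(payload),   # version/IHL, TOS, total length
        0, 0,                         # identification, flags/fragment offset
        64, 6, 0,                     # TTL, protocol (6 = TCP), checksum (stub)
        bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))  # src, dst addresses
    tcp_header = struct.pack("!HHLLBBHHH",
        src_port, dst_port, 0, 0,     # ports, sequence, acknowledgement
        0x50, 0x18, 8192, 0, 0)       # data offset, flags, window, checksum, urgent
    return ip_header + tcp_header + payload

def shallow_inspect(packet):
    """Header-only inspection: read the TCP destination port (bytes 22-23)."""
    return struct.unpack("!H", packet[22:24])[0]

def deep_inspect(packet, signatures):
    """Deep packet inspection: scan the payload past the 40 header bytes
    for any blacklisted byte signature."""
    payload = packet[40:]
    return [sig for sig in signatures if sig in payload]

pkt = build_ipv4_tcp_packet(49152, 80, b"GET /files/ubuntu.torrent HTTP/1.1\r\n")
print(shallow_inspect(pkt))              # 80 -- looks like ordinary web traffic
print(deep_inspect(pkt, [b".torrent"]))  # the payload reveals what headers hide
```

On headers alone, this packet is indistinguishable from any other web request; only by reading past the headers can the network tell what is actually being carried – which is exactly the shift in capability at issue here.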

In my previous post, I wrote about what would happen if corporations began using DPI to shape or manipulate data traffic. I tried (in my own head) to remain optimistic that telecommunications giants would avoid using these technologies because doing so could implicate them in copyright violation, electronic fraud, and other offenses. Classically, these giants have insisted that they cannot be held responsible for what passes along their networks but, were they to institute DPI, they would quickly become the targets of litigation.

Unfortunately, in the past few weeks it’s become desperately clear that at least one giant may be in the process of shifting its position on this core issue. At CES, major content creators and providers discussed whether we were at a point where content should be inspected at the network level, and only after that inspection (were it deemed legitimate) be allowed to continue across the ISP’s network. AT&T, the same ISP that has been identified as collaborating with American government agencies to access and store all data moving through AT&T data hubs, was the major service provider involved in those discussions. AT&T has previously been identified as using Narus STA 6400 equipment, which is capable of surveying data streams at massive scale (there are claims that it can analyze over 10 billion bits of data per second). Moreover, as the image below demonstrates, these data collection centers weren’t located deep within the US – they were located along the central points of entry into and out of the US. Given the cost of deploying sophisticated telecommunications equipment, in addition to the decentralized nature of network traffic transmissions, a massive proportion of the world’s communications was being routed through AT&T data centres. This should indicate the challenge facing anyone who wants to dodge DPI technologies – were they deployed at central network hubs, it would be practically impossible for data to move across networks without being inspected during delivery.

What This Means

When most people are told that their communications may be being monitored, they (at least amongst my generation) are remarkably cavalier. Sure, they seem to get upset when Facebook releases an application that invasively monitors and publicizes their online purchases. MoveOn put together a petition that was intended to force Facebook to ‘reconsider’ their Beacon application. As of this writing, almost 80,000 people have joined a Facebook group that MoveOn has created to protest Facebook’s reasonably regular invasions of privacy. (And yes, you can certainly detect a moderate degree of irony in protesting within the system that is tracking your actions, against that system’s tracking of your actions. At least you’d know they knew you were a member of the group.)

What was unique about Beacon, however, was that it clearly showed how discrete users were being monitored, and people didn’t like the idea that their information was being made public. Even though it wasn’t necessarily incriminating, even though they didn’t have anything to hide, even though they weren’t afraid it would result in some measure of state-sanctioned coercion, they were upset – they strongly believed that Facebook was infringing on their privacy. In response, they got together, communicated with one another about what was going on, and discussed the issue. Living in a digital era, this means that their mass communications took place using digital systems – the same systems that are susceptible to DPI technologies and other systems of data oversight and discrimination. While it is possible to learn that your data is being shaped, it requires a certain level of expertise, a certain level of persistence, and an awareness that it is occurring. Most people I have spoken to assume that, after the Beacon debacle, they were no longer participating in the program – what has actually happened is that users became able to opt out of it. Now that most of the news about the topic has died down, and it isn’t such an evident privacy invasion, most users have forgotten about it and moved on. The same individuals who were up in arms about Beacon assumed the issue was settled, that the message had been sent, and have returned to their regular digital movements.

What happens in a case where individuals can never be certain what will, and won’t, be passed along a network (which is really the case now, save that individuals aren’t necessarily aware of how comprehensive their digital portfolios are)? As someone who works with digital networks, I recognize the incredible value of data discrimination and oversight from an administrative standpoint – if users don’t know what will be blocked, then they tend to be conservative in their usage. Moreover, with your block/shaping list remaining secret, you don’t have to worry about privacy concerns insofar as you don’t have to contend with user outrage, and you don’t have to publicly justify what you’ve chosen to shape or control. It makes life more pleasant, and unless you happen to have an incredibly vocal individual who has genuine leadership skills, it’s unlikely that you will encounter a situation where such shaping causes a problem. (As a note: neither I, nor my present employer, are presently using data shaping technologies to the best of my knowledge.) As long as shaping is reasonably innocuous, as long as it doesn’t pick on powerful people, as long as it is only inconvenient (rather than genuinely problematic), and as long as it remains moderately mysterious, the system can remain in place without enough user outrage to force the policies to be abandoned.
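The administrative logic described above – a secret block list paired with a deliberately uninformative refusal – can be sketched in a few lines. Everything here is hypothetical: the signature names are invented, and the refusal string is simply a generic Terms-of-Service style answer; the point is that the user is told nothing about which rule was triggered, or that a rule exists at all.

```python
# Hypothetical, illustrative block list -- these pattern names are invented.
BLOCKED_SIGNATURES = [b"p2p-handshake", b"rival-voip"]

def deliver(payload: bytes):
    """Decide whether to pass a message along. When blocked, return only a
    generic refusal -- the actual rule is never disclosed to the user."""
    for signature in BLOCKED_SIGNATURES:
        if signature in payload:
            return (False, "Your communication was deemed to be in violation "
                           "of the Terms of Service you agreed to.")
    return (True, "delivered")

print(deliver(b"hello neighbour"))      # passes through normally
print(deliver(b"p2p-handshake: v1.0"))  # refused, with no reason given
```

Because the refusal is identical regardless of which signature matched, users can neither contest a specific rule nor even confirm that shaping is taking place – which is precisely what makes the arrangement so administratively convenient.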

In a discursive democracy, the capacity for citizens to publicly communicate with one another is a central element of retaining the democracy’s health. People send private messages from personal email accounts at work and from work accounts, they IM using cell phones, they develop mashups of copyrighted material, and they (generally) participate in political society on the backbone of digital networks. When citizens constituted their nations they (theoretically) came to a consensus about the basic laws of their state, laws that they had to uniformly consent to. (Note: consenting to basic law does not mean that citizens wholeheartedly endorse each law – rather, it symbolizes a validation of a particular constitutional culture, within which disagreements should be expected. That said, such disagreements do not inherently endanger the constitution itself, or necessitate a repudiation of it at its inception.) Moreover, for citizens to be able to accept a law, they must be able to recognize themselves as its authors and addressees. This means that they must be able to communicate with one another about their particular situations, both to their political figures and amongst themselves, so that the laws that are created can be seen to have included the considerations of those who will be affected by them. Where individuals are unable to communicate privately, they are less likely to experiment with political ideas, less likely to raise controversial issues, and less likely to substantively experience the creative safety that is often found in informal or private communicative spaces. Moreover, given the prevalence of digital oversight, accompanied by the diminishment of what can be considered private time and private space, citizens are rapidly losing the spheres of discourse that they have relied upon.

Now, does this mean that, as these technologies are presently being deployed, individuals will necessarily experience data discrimination and illegitimate accumulation, and consequently limit their digital communications? No, not necessarily – but it isn’t absurd to think that they will become more conservative in what they say, where they say it, and who they say it to. And, depending on the kinds of packet monitoring/reporting systems that are deployed, it may not matter what they say – their messages might simply not be sent, or they may be censored and, when they demand answers as to why such actions are occurring, receive an answer along the lines of ‘your communication was deemed to be in violation of the Terms of Service that you agreed to when you signed the contract to receive this service’.