While I haven’t posted much this month, it isn’t because I’m not writing: it’s because what I’m writing just doesn’t seem to pull together very well, and so I have four or five items held in ‘draft’. See, I’ve been trying to integrate thoughts on accessible versus technically correct understandings of technology as it relates to privacy, on issues of public relations and the use of FUD by privacy activists, and on what I think of the idea of ‘anonymity’ in digital environments that are increasingly geared to map, track, and trace people’s actions. Given that it’s Data Privacy Day, I thought that I should try to pull some of those thoughts together, and so today I’m going to draw on some of those aforementioned ideas and, in particular, start thinking about anonymity in our present digitally networked world.
Taking the ‘effort’ to try to remain anonymous requires some kind of motivation, and in North America that motivation is sorely lacking. North America isn’t Iran or China or North Korea; Canadians, in particular, are in a somewhat enviable position where even with the government prorogued – a situation that, were it to happen in Afghanistan, would have pundits and politicians worrying about possibilities of tyranny and violence – there isn’t a perception that Canadians ought to be fearful that prorogation heralds the beginning of a Canadian authoritarian state, or the stripping of Charter rights and freedoms. This said, I think that people in the West are realizing that, as their worlds are increasingly digitized, their ‘analogue’ expectations of privacy are not, and have not for some time been, precisely mirrored in the digital realm. This awareness is causing worry and consternation, but is not yet (and may never be) sufficient for wide-scale adoption of anonymization technologies. Instead, we have worry without (much) action.
But let me back up, just a bit. What motivated the creation of a digital architecture that can be used for limiting anonymity online? What sense of anonymity am I referring to?
To the latter question: I’m referring to the ability to be anonymous at the hardware level of the Internet – I’m interested in remaining ‘anonymous’ in the face of client-server transactions. I’m not calling into question the ability to ‘game’ one’s web-level identity as presented on Facebook, Yahoo!, or Yelp.* Instead, I’m referring to particular individuals’ abilities to mask their presence or identity on a computer network.
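To make this hardware-level sense of anonymity concrete: even the most trivial client-server exchange discloses the client’s network address before any application data is sent. Here is a minimal, self-contained sketch – a toy loopback connection, not any particular service – of what a server learns the moment a client connects:

```python
import socket

# A toy loopback exchange: the server learns the client's address
# the moment the TCP connection is accepted, before any data flows.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # bind to an ephemeral local port
server.listen(1)
host, port = server.getsockname()

client = socket.create_connection((host, port))
conn, peer = server.accept()         # peer == (client IP, client port)
print(f"server saw a client at {peer}")

client.close(); conn.close(); server.close()
```

Masking that address – via proxies, VPNs, or onion routing – is what anonymity at this ‘first’ level amounts to.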
What has driven the limitations upon online anonymity? To begin, we need to acknowledge that the ‘net is, contrary to the early Internet Utopians, a perfect domain of control. With immense possibilities for control come opportunities for surveillance, surveillance that can deanonymize individuals and their actions. It isn’t that a particularly nefarious cabal has sought to do this: to facilitate a fast, efficient Internet, network engineers and administrators have long deployed networking hubs and associated software-driven security appliances to better analyze data traffic and identify its sources and destinations. We have witnessed incredible booms in the amount of traffic flowing across networks, with an increasing amount of that traffic being harmful to networks and individuals (e.g. DDoS attacks, email spam, phishing). To deal with these, along with other authentication, security, and traffic analysis challenges, network engineers and administrators have made network nodes ‘more intelligent’: intrusion detection systems operate at borders, email analysis often precedes mail hitting your inbox, and multiple levels of authentication and firewalls surround intranets. It’s important to note that networks have grown more intelligent, rather than recently become intelligent; network appliances have always had some ‘intelligence’ to them.** We might say that carapaces have slowly grown around networks as the nature of the network environment itself has evolved.
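To illustrate one slice of this ‘intelligence’, here is a deliberately naive sketch of the signature matching that border intrusion detection performs. Real systems (Snort and Suricata, for instance) reassemble flows and use rich rule languages; the byte-string signatures below are illustrative stand-ins only:

```python
# A deliberately naive signature matcher; real IDS rules are far richer.
SIGNATURES = {
    b"' OR '1'='1":     "possible SQL injection",
    b"<script>alert(":  "possible cross-site scripting probe",
}

def inspect(payload: bytes) -> list[str]:
    """Return alert labels for every signature found in the payload."""
    return [label for sig, label in SIGNATURES.items() if sig in payload]

print(inspect(b"GET /login?user=' OR '1'='1 HTTP/1.1"))
# -> ['possible SQL injection']
```

Even this toy version makes the tension plain: the defence only works because something at the border is reading the payload.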
Given the technical element to data routing, network security, and data usage tracking, networking staff have implemented some technologies that, in the right hands, are as benign/beneficial as a scalpel in an experienced surgeon’s hands but, alternately, are as dangerous as the same scalpel in a psychopath’s grasp. Deep packet inspection (DPI) is one (of many!) such technologies. Last year, Sandvine noted that 160 ISPs purchase its equipment, with 90% of those ISPs using the equipment for some kind of traffic management purpose. This traffic management can range from modifying the traffic speeds of particular applications, such as those used for P2P file-sharing, to discriminating between different tiers of service. Further, discrimination might mean impeding some traffic – such as ‘suspicious’ traffic as identified by network administrators – or applying an ‘economic’ management system that depends on tracking customer data use and charging for bandwidth consumed. As is obvious, some of the uses I just identified are clearly ‘acceptable’ – if network equipment can detect and limit/isolate computers operating in a botnet, that’s a plus as I read it – whereas others – behavioural advertising and carriers delaying competing services – are far less acceptable.
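At its core, application-aware traffic management pairs a payload classifier with a per-class policy. The sketch below is a toy reduction of that idea – the policies are hypothetical placeholders, and production DPI appliances operate on reassembled flows with vastly more sophisticated heuristics:

```python
# A toy reduction of DPI-based traffic management: classify a packet
# by payload signature, then look up a per-class policy.
POLICIES = {"p2p": "throttle", "web": "forward", "unknown": "forward"}

def classify(payload: bytes) -> str:
    if payload.startswith(b"\x13BitTorrent protocol"):  # BitTorrent handshake
        return "p2p"
    if payload.startswith((b"GET ", b"POST ", b"HTTP/")):
        return "web"
    return "unknown"

def handle(payload: bytes) -> str:
    return POLICIES[classify(payload)]

print(handle(b"\x13BitTorrent protocol"))   # -> 'throttle'
print(handle(b"GET / HTTP/1.1"))            # -> 'forward'
```

Note that nothing in the mechanism itself distinguishes isolating a botnet from delaying a competitor’s service; the policy table is where the politics live.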
It is critical to identify what is seen as acceptable versus unacceptable uses of any particular networking technology. Targeting use is important, given that there is a lot that goes on behind the scenes that consumers and end-users are largely ignorant of and, to date, have had no issue remaining ignorant of. ‘Computer geeks’ have done their thing, the email has flowed, and everyone has been happy (save, perhaps, for the admins responsible for keeping things running at those moments when an attack overwhelms the network defences…). Because of a general lack of technical training, incredibly poor relationships between large telecommunications companies and end-users, and increasing uses of network technologies for surveying and modifying traffic in obtrusive ways, the public is only now getting interested in how our digital networks operate. They’re a few decades late, and their late arrival is generating challenges for all involved. Citizens expect perfect (or at least largely similar) correspondence between ‘analogue’ and ‘digital’ expectations of privacy, consumers want efficiency and speed, and administrators want secure networks. This is to say nothing of the corporations running the networks, who want to earn a reasonable rate of return on their investment. These concerns don’t necessarily fall into ‘either/or’ categories, but they require a common language and a commonly understood implementation strategy that is respected by all involved. Developing this language and consensus is, obviously, a daunting task.
Public interest in telecommunication networks is not inherently bad, and in fact is probably a good thing – more people should be involved in making technology increasingly democratic – but participation, at this point, requires either that incredibly good metaphors and analogies be deployed, or that interested people learn something about the technologies in question. Arguably, the line should be drawn somewhere between those two alternatives.
It is even more important that the privacy advocates who deal with networking technologies develop an appreciation both for the real challenges faced by network administrators and for the technical structures underpinning networks themselves. They need to realize that, while defending civil rights, they must find ways of bringing consumer groups (and, ideally, network operators themselves) onside, as opposed to alienating consumers and network operators in order to protect the citizen. We live in market societies: this means consumers have to be placated, or at least feel like they’re being placated. Such placations may often be symbolic, but the bridge between these multiple agentic points of view must be recognized and, ideally, exploited by privacy advocates.
Further, it is key that privacy advocates possess the technical knowledge to parse what a well-meaning administrator tells them, so that they can subsequently craft their concerns, complaints, and issues in language that is accessible to the public and sensitive to the technical realities of the contemporary networked world. In the case of DPI, there are various ways that the technology can be used, and not all of them are harmful. The stance that any analysis of data packets, however transitory, constitutes a privacy infringement would require abandoning many incredibly valuable perimeter defences that most end-users are entirely ignorant of. We needn’t toss out the baby with the bathwater! Effectively, without appreciating the particularities of each device and its particular uses, advocates cannot precisely target their complaints for maximum effect: rather than targeting all cars, environmentalists often target vehicles below particular standards; they draw distinctions within a large category and frame solutions for citizens, the environment, and consumers alike. Privacy advocates can learn a great deal about how to position issues from groups like these, and in many cases already have.
Networking technologies can often be designed to analyze traffic and identify unique users. In many environments this is required for anything from law enforcement (e.g. CALEA), to billing, to traffic analysis purposes. The question is whether or not such analysis and identification, when performed by your Internet Service Provider (ISP), is appropriate. In the EU, there are questions over whether any use of DPI or DPI-like technologies constitutes a privacy infringement. An element of such questions surrounds whether or not temporary analysis of data traffic constitutes wiretapping (I have doubts) and, implicitly, an element concerns what kind of anonymity Internet users can expect. Should our ISP analyze traffic to identify the dominant applications in use and the bandwidth those applications, in aggregate, are generating? Should our ISP be theoretically capable of revealing what particular users do with their Internet connections if law enforcement makes a legitimate request? Should our ISP identify the lowball percentage of data traffic that constitutes copyright-infringing material?
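The first of those questions – aggregate, per-application measurement – can be made concrete with a small sketch. Assuming packets have already been classified by application (the flow records below are invented for illustration), aggregation need not retain any per-user detail at all:

```python
from collections import Counter

# Invented (application, bytes) flow records standing in for traffic
# that some upstream classifier has already labelled.
flows = [
    ("web", 48_000), ("p2p", 1_200_000), ("web", 12_500),
    ("email", 6_400), ("p2p", 880_000),
]

totals = Counter()
for app, size in flows:
    totals[app] += size          # per-application total, no per-user record

for app, total in totals.most_common():
    print(f"{app}: {total} bytes")
```

Whether the upstream classification that feeds such totals is itself privacy-invasive is, of course, exactly where the argument starts.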
It is, of course, when we get to these more precise questions that the question of ‘what anonymity are we referring to?’ comes to the fore again, alongside the question ‘what conditions warrant deanonymization?’
The Information and Privacy Commissioner/Ontario (IPC/O) maintains that pseudo-anonymity, rather than full-blown anonymity, should be the expected and perceived norm for Ontarians in digital environments. On this view, an individual can maintain anonymity until they perform some action that crosses the boundary of the law – at that point, it is imperative for a service provider to identify who that individual is so that the arm of the law can reach out and touch them. In effect, the IPC/O’s position is that operating in the digital era does not mean that the state’s coercive powers ought to be diminished: the benefits of free, anonymous speech and association can be had, so long as this is done within the confines of the law.
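One way to picture pseudo-anonymity in code: users interact under stable, opaque pseudonyms, while the provider retains a mapping that is disclosed only when some legal test is met. What follows is a minimal sketch of that general pattern, not any system the IPC/O has endorsed – the keyed-hash derivation and the boolean ‘lawful order’ check are my own illustrative assumptions:

```python
import hmac, hashlib, secrets

class PseudonymService:
    """Toy model of pseudo-anonymity: stable pseudonyms in public,
    with a provider-held mapping disclosed only under a legal test."""

    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)   # known to the provider alone
        self._escrow = {}                     # pseudonym -> real identifier

    def pseudonym_for(self, identity: str) -> str:
        """Derive a stable, opaque pseudonym from a real identifier."""
        tag = hmac.new(self._key, identity.encode(), hashlib.sha256).hexdigest()[:16]
        self._escrow[tag] = identity
        return tag

    def deanonymize(self, pseudonym: str, lawful_order: bool) -> str:
        """Reveal the identity only if the (hypothetical) legal test passes."""
        if not lawful_order:
            raise PermissionError("disclosure requires lawful authorization")
        return self._escrow[pseudonym]

svc = PseudonymService()
alias = svc.pseudonym_for("subscriber-42@example.net")
print(alias)                                  # opaque to other users
print(svc.deanonymize(alias, lawful_order=True))
```

The design question the IPC/O position raises is precisely who holds that escrow and what counts as a passing test.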
At a broad, principled level the IPC/O’s position on anonymity raises concerns: what if we applied this same set of rules to an authoritarian environment, such as China or Iran? However, if we move from ‘high-level’ theorizing and generalizations to ‘mid-level’ theorization and better-contextualized generalizations, the conditions of inquiry change: does the pseudo-anonymity offered in Ontario, which is (generally) an orderly and lawful province in an orderly and lawful nation, meet the expected freedom and association rights of Ontarians without detoothing the province’s and/or nation-state’s coercive power? Clearly this latter question is better contextualized than one would expect from a philosophical treatise on anonymity, and more likely to resonate with (in this case) Ontarians and Canadians than a ‘high-level’ theory that has to apply universal statements to particular situations without becoming logically or practically incoherent.
Note that I’m not suggesting that we must avoid ‘high theoretical’ approaches to privacy and anonymity – I’m regularly involved in such discussions, and think that the principled position needs to be stated loudly because the speaker is often alone in the room – but that there is a danger in always assuming a ‘principled’ position that refuses compromise and reflective consideration in advocacy. Advocacy is not, as I’m regularly reminded, the same as scholarship. Are networking technologies like DPI likely to be ‘banned’ in the West? Unlikely – were an attempt made, network operators would point out to the public just how ingrained these technologies are in daily operations, and instead (continue to) focus on uses of the technology. If advocates adopt a nuanced, contextualized, and principled position, from which they can insist that particular uses of the technology be banned, and frame the benefits of such bans to reach out to consumers and citizens alike, then I think that advocates will have a much stronger position to argue from.
Concluding, and returning to anonymity more closely, I think that we need to distinguish various ways to ‘approach’ the topic of anonymity. We need to recognize different levels of theorizing about ‘what anonymity means’, analytically distinguish between the domains of expected anonymity (e.g. Internet versus Web versus particular sites on the Web), and collect empirical data that outlines the present practices that anonymize and deanonymize individuals in each of these domains. After intensively investigating these practices, advocates and citizens alike can proceed to launch complaints, raise concerns, and frame issues in a highly contextualized, highly targeted way that is likely to have better results than broadly framed worries lacking analytic precision.*** These worries are not just ‘academic’: launching ill-formed complaints can set precedents that run counter to privacy protection, undermine the instantiation of privacy principles in society and jurisprudence, and potentially water down constitutional rights. Badly formed complaints don’t just run the risk of being ineffective: they run the greater risk of jeopardizing the principles that privacy advocates defend, principles that often include anonymity in ‘public’ spaces like the Internet.
———
* I’ve begun differentiating between ‘first’ (hardware) and ‘second’ (web-level) presence to mark off this distinction between spaces of anonymization.
** At some point, probably after my February exams, I’m hoping to write something on the use of the term ‘intelligence’ to refer to networks. I find the term incredibly off-putting and awkward.
*** As a note, I recognize that this is ‘ideal’ – often groups suspected of infringing individuals’ privacy are not forthcoming with information. In these cases, advocates should gather the best data that they can and launch the complaint. I’m not suggesting that advocates shouldn’t complain where the information needed to create one of these hyper-targeted complaints is unavailable, but that when the information is available the advocate ought to engage with all of it in a charitable fashion. At the very least, they should try to seek out the information in a rigorous fashion before submitting the complaint.
I’m not familiar with Cavoukian’s work on anonymity, but I am quite familiar with her ideas generally, and she’s definitely not from the civil liberties school of thought.
No civil liberties advocate would say that people engaging in criminal activity should be guaranteed anonymity. A civil libertarian’s approach to this is the same as for wiretaps or jailing people. A civil libertarian insists that people only be deprived of their liberties on the basis of evidence, subject to procedural fairness and openness, and only when absolutely necessary. That’s not a very good description, so from s.1 of the Charter: “…as can be demonstrably justified in a free & democratic society.” Anyway, I’m a programmer, not a lawyer. 🙂
If Cavoukian suggests the civil libertarian’s view is otherwise, then she’s being intellectually dishonest.
I think your comparison of Canada & Afghanistan in the prorogation situation is not entirely correct, but I may have misunderstood. If you’re saying that since ours is not a totalitarian government, we can trust it with whatever personal info it wants to have, I think you have to be careful. Yes, we’re a democracy, but we can’t think that we’re immune to the possibility of a descent into tyranny. It’s not likely, but it’s not impossible. That a nation has achieved “democratic status” doesn’t mean we’ll never again face an assault on our liberties. Prorogation is not a disaster in Canada as it would be in Afghanistan’s far more fragile ‘democracy’, but if it persists and becomes a habitual way to avoid votes of non-confidence, we will have very serious problems.
So an interesting question occurs to me. Would civil liberties advocates oppose, for practical reasons, wiretaps for criminal investigation in totalitarian states, because we know this tool will be used for persecution? Murder and theft have to be prosecuted in totalitarian states too; it’s the other stuff we have a problem with. It’s a moot question, I suppose, because in those states the authorities won’t let go of the authority to wiretap (and jail people) whenever they want.
I think the distinction you make between “hardware” and “web” levels of anonymity is important. In the same way that routers have traffic logs today, so phone switches have long had their own ‘traffic’ logs. This is not something we can avoid, nor is it, properly used, something we should be afraid of. This is the same parallel I draw when talking about “lawful access” proposals. The government says they need tools equivalent to those they have long had for wiretapping phone calls. But that’s a lie, because there was never a requirement for all phone calls to be recorded all the time and stored somewhere just in case the police want to listen to them. Police already have the equivalent powers; there’s no need to archive the entire content of people’s online activities for future reference.
Anyway, lawful access is another topic altogether.
@Paul
The aim of referring to Canada and Afghanistan was to note that there are contextual differences in how we perceive the relationship between politics and law. It wasn’t to suggest that upon reaching ‘democratic status’ we’re immune to assaults on liberties, just that it transforms how we perceive our political reality. Such perceptions may be more or less accurate, of course. (And, as a note, I certainly do not trust any large institution with my information. I supply it, of course, because I’m required to, but this doesn’t imply genuine consent.)
As for wiretaps in authoritarian states, that was at least part of the concern surrounding the companies that have, or have not, supplied surveillance equipment to Iran. I would suggest that there are worries that extend to ‘is the policing of crimes legitimate’ in addition to ‘is the use of this surveillance equipment legitimate’, but I recognize the distinction that you’re drawing.
I have a feeling we’re in a similar situation re: lawful access.
Wow! Long post. But good points, and I do agree with you that anonymity needs to be addressed. Hopefully, with Hillary Clinton’s recent speech where she mentions it, that will start.
For me it is a tough question. Many of us in security are involved on both sides of anonymity at one time or another. I agree anonymity is needed, but how do you ensure the bad guys don’t have it? Of course, then one has to define what is ‘bad’. Bad will differ between individuals, organizations, and countries. So if what I am doing is considered ‘good’ in one context but ‘bad’ in another, who gets to decide if I can be anonymous? And whichever way you decide, can you truly enforce it?