The Western world is pervaded by digital information, to the point where we might argue that most Western citizens operate in a bio-digital field that is constituted by the conditions of life and life’s (now intrinsic) relationships to digital code. While historically (if 30 years or so can withstand the definitional intonations of ‘historically’) such notions of code pertained predominantly to government databanks and massive corporate uses of code and data, with the advent of the ‘social web’ and the ease of mashups we are forced to engage with questions of how information, code, and privacy norms and regulations pertain to individuals’ usage of data sources. While in some instances we see penalties handed down to individuals who publicly release sensitive information (such as Sweden’s Bodil Lindqvist, who was fined for posting personal data about fellow church parishioners without consent), what is the penalty when public information is taken out of its original context and mashed up with other data sources? What happens when we correlate data to ‘map’ it?
Let’s get into some ‘concrete’ examples to engage with this matter. First, I want to point to geolocating trace route data, the information that identifies the origin of website visitors’ data traffic, to start thinking about mashups and privacy infringements. Second, I’ll briefly point to some of the challenges arising with the meta-coding of the world using Augmented Reality (AR) technologies. The overall aim is not to ‘resolve’ any privacy questions, but to reflect on the differences in ‘specificity’ between geolocation technologies, the implications of that specificity, and the potential need to establish a new set of privacy norms given the bio-digital fields we find ourselves immersed in.
There are various ways of identifying the geographical origin of data traffic on the ‘net; the rough geolocation of traffic origins is not a newfound phenomenon. What is relatively new, however, is the ease with which non-technically minded individuals can ‘track’ the location of their web visitors. For example, I could export the server logs of visitor traffic to this website into an XML format and then play with it using the Google Maps API to locate where most visitors are from, or I could rely on a service such as TraceURL (a web shortening service) to create a more general understanding of visitors’ geolocation by seeding the link, and a description of its destination, across the web. Regardless of which method is used, below is a set of screenshots showing the level of detail that is easily, and freely, gained by such geolocation efforts in just a few minutes (most of the time involved is the creation of accounts).
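To make concrete just how little effort this ‘tracking’ takes, here is a minimal sketch of the log-to-location step. Everything specific in it is invented for illustration: the IP addresses are from documentation ranges, and the `GEO_DB` dictionary is a stand-in for a real IP-to-place lookup (in practice you would query a GeoIP database such as MaxMind’s GeoLite, or a web service).

```python
import re
from collections import Counter

# Matches the leading IP address of an Apache/Nginx "combined" log line.
LOG_IP = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3}) ")

# Stand-in lookup table; a real setup would consult a GeoIP database
# (e.g. MaxMind GeoLite) for each address. Values here are invented.
GEO_DB = {
    "203.0.113.7": ("Vancouver", "CA"),
    "198.51.100.22": ("Stockholm", "SE"),
}

def visitor_locations(log_lines):
    """Tally the rough geographic origin of each visit in a server log."""
    tally = Counter()
    for line in log_lines:
        match = LOG_IP.match(line)
        if not match:
            continue  # skip malformed lines
        location = GEO_DB.get(match.group(1), ("unknown", "??"))
        tally[location] += 1
    return tally

# Invented sample log lines in Apache "combined" format.
sample_log = [
    '203.0.113.7 - - [12/Aug/2009:10:15:32 -0700] "GET / HTTP/1.1" 200 5120',
    '198.51.100.22 - - [12/Aug/2009:10:16:01 -0700] "GET /feed HTTP/1.1" 200 900',
    '203.0.113.7 - - [12/Aug/2009:10:18:45 -0700] "GET /about HTTP/1.1" 200 2048',
]
print(visitor_locations(sample_log))
```

Feeding the resulting city/country tallies into something like the Google Maps API is then a matter of plotting pins, which is precisely why the barrier to this sort of mapping is now so low.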
This information is ‘public’, insofar as individuals move throughout the ‘net and deposit little elements of their geographical location as they pass from link to link. We know this. The question is whether or not associating that information with geographical location in a (relatively) novel visual sense constitutes some sort of a privacy infringement. Does my stripping of other people’s perceptions of digital anonymity mean that I have offended a normative expectation of privacy online? Given that I can identify who these particular individuals are by relying on a mashup of public data sources and applications, am I subsequently responsible for keeping this information away from the public eye? Am I obligated to either encrypt these data sources so that the chance of their being copied is minimal, or purge them, given their potential to reveal who is reading what I write?
We might argue that this is absurd on a few different levels; IP-based geolocation is not perfectly accurate, and in fact can often give you a broad rather than specific idea of where visitors might be. Further, the various data sources that I can access to ‘map’ my visitors are already publicly displayed and accessible through alternate data display environments. More seriously, we might worry that locating data traffic could constitute a privacy infringement should I then use that information to target individuals, classifying them based on data points and thus (arguably) stripping them of their situated cultural values. This is the essence of Curry’s worries in his 1999 paper, “Rethinking Privacy in a Geo-Coded World.” He argues that there is a privacy infringement when data is collected and used to geographically differentiate and discriminate against individuals on the basis of mashed up (and sometimes aggregated) private information.
We might argue that plotting visitors on a map, and then combining that with additional third-party sources to predict who a specific visitor is, might not necessarily constitute an infringement of anyone’s privacy if there is ambiguity, inaccuracy (and thus the surveillance is not complete), and limited processing of ‘public’ or ‘semi-public’ information. Finally, so long as there is no effort to use this information to subsequently affect the embodied reality of an individual or, perhaps more precisely, manipulate the bio-digital fields they are immersed in, the data subject cannot be seen as ‘affected’ by, or discriminated against using, any such mapping. At most, there is an emotive shock or surprise at the capability to map data; shock drawn from ignorance of technological possibility does not seem to be a convincing reason to radically alter normative understandings of privacy, though it might give rise to democratically oriented normative approaches to technological developments.
Whereas IP-based geotagging, even when combined with other data sources, is likely to be ambiguous, how should we understand the ‘tagging’ of people and things using AR technologies? Juniper Networks has identified AR as a ‘boom’ technology, but in the absence of regulatory guidelines we can expect the next few years to be as ‘Wild West’ as the ‘net was in its earliest public days. Accompanying this lack of regulatory oversight is a very real question of privacy with AR: what happens when another person geotags your location, or the location of your (expensive) private property, without asking for your consent? What about when they add to your own tag’s meta-information, expanding it to include what you consider ‘personal’ or ‘private’ information?
In the case of AR we seem to be at an exciting moment, but the potential to share personal information about ourselves and others is significant; telling my friend that I’m going away for vacation is fine, but when I mention it on Twitter (which can locate the precise physical origin of a tweet and associate it with my username) then I’m adding metadata on top of my own world. My bodily existence is, in effect, very immediately penetrated by digital code. Significantly, this code does not share the same (relatively non-mobile) characteristics of information shared in the analogue world; digital code can be mashed up, displayed on a relatively ‘static’ map, or potentially ‘seen’ using an AR program such as TwittaRound (demonstrated below):
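Setting any particular viewer aside, the mashup step itself is trivial, which is part of the point. The sketch below pulls the coordinates out of a tweet-like payload and turns them into a map pin tied to a username. The payload is invented; its `coordinates` field mirrors the GeoJSON point shape (longitude first, then latitude) that Twitter’s API attaches to geo-enabled tweets, but the values and username here are illustrative only.

```python
import json

# Invented tweet-like payload; "coordinates" follows the GeoJSON Point
# convention ([longitude, latitude]) used by Twitter's API for geo-tagged tweets.
raw_tweet = json.dumps({
    "user": {"screen_name": "example_user"},
    "text": "home again!",
    "coordinates": {"type": "Point", "coordinates": [-123.1207, 49.2827]},
})

def tweet_to_pin(raw):
    """Turn a geo-tagged tweet into a (username, lat, lon) map pin."""
    tweet = json.loads(raw)
    point = tweet.get("coordinates")
    if point is None:
        return None  # the user had geotagging disabled
    lon, lat = point["coordinates"]
    return (tweet["user"]["screen_name"], lat, lon)

print(tweet_to_pin(raw_tweet))
```

A few lines like these, run over a public timeline, are all that separate a passing remark from a persistent, precisely located marker of where a named person was standing.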
Does the ability to ‘mash up’ data and very specifically locate an individual mean that we need to rethink how individuals conceptualize their privacy, and do we need to establish new norms for what others can do with ‘my’ data, or data related to me? If I tweet ‘home again!’ from the comfort of my apartment, and someone else appends information to my metadata identifying the value of items in my home, or even creates separate tags while in my home (e.g. snaps a picture of my computer system with a note ‘holy shit that’s expensive!’), what are the norms at play? Should it be treated as the revelation of personal information without consent, and if so should a tort-based system be deployed to remedy inappropriate data disclosures? Does this even make sense – should AR actually become the phenomenal technology that it is expected to be, ought we mould AR to accommodate already existing privacy norms that were developed for other technologies, other modes of revelation, or establish new norms to adjudicate permissible behaviour using these new and highly specific technologies?
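The normative gap is visible even at the level of the data model. The wholly hypothetical record below (the class, fields, and sample notes are my own invention, not any real AR platform’s schema) shows that nothing structural requires the original tagger’s consent before a stranger layers new metadata onto their tag; consent, if it exists at all, lives in the norms around the system, not in the system itself.

```python
from dataclasses import dataclass, field

@dataclass
class ARTag:
    """Hypothetical, minimal AR tag: a point in space plus free-form notes."""
    lat: float
    lon: float
    author: str
    notes: list = field(default_factory=list)

    def annotate(self, who, text):
        # Note the absence of any consent check: the data model lets
        # anyone append metadata to an existing tag.
        self.notes.append((who, text))

# The original author's tag...
home = ARTag(49.2827, -123.1207, author="me", notes=[("me", "home again!")])
# ...and a stranger's unsolicited addition, layered on without consent.
home.annotate("stranger", "holy shit that's expensive!")
print(home.notes)
```

Whether the remedy for that missing check should be technical (build consent into the schema), legal (a tort for inappropriate disclosure), or normative is exactly the open question.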
I have previously argued that the latter, rather than the former, is the path that needs to be adopted in the face of radically novel technological transformations: privacy norms generated for the analogue world are unsuitable for the digital; at a minimum, they must undergo a translation if not a recreation. Extending beyond that past work, however, I wonder if there is an increasing need to create norms for the bio-digital, for the highly specific intersections of data imposed on the ‘real’ world. Does the transparency of data and data markers genuinely constitute a ‘new’ problem, or is it simply another window onto the ways that data has always been manipulable? In the face of the recognition of the ways that data can be used, and the resulting public resistance to such uses, should we restructure the very processes by which we can specifically identify individuals using public data?
I’m not certain what my response to these questions should be. The conservative side of me very definitely wants to say that there isn’t a need for new norms, that we just need to flesh out principles for digital privacy and have them applied across the board to capture individuals, governments, and corporations alike that are implicated in modifying the individual’s bio-digital field. For some reason, a reason I’ve yet to put my finger on, this doesn’t quite sit right with me. Perhaps it’s because I see the bio-digital generating new situations for experiencing data and thus demanding a new set of normative expectations to capture these situations’ full scope. Perhaps it’s because the massive visualization of data seems to lead us to a new era of engaging with the digital. Maybe it’s just because I want to make things harder than they need to be, in an (unconscious) desire to find new places to think through *grin*.