When you spend a lot of time working in the areas of copyright, traffic sniffing and analysis, and the Internet’s surveillance infrastructure more generally, there is a tendency to expect bad things on a daily basis. This expectation is built up from years of horrors, and I’m rarely disappointed in my day-to-day research. Thus, when Wired reported that a company called Feeva was injecting locational information into packet headers, the news didn’t come across as surprising; privacy infringements like those reported in the Wired piece are depressingly common. In response, I wrote a brief post decrying the modification of packet headers for geolocational purposes and was quoted by Jon Newton on P2Pnet about my reactions to what I understood at the time was going on.
After the post and quotations turned up on P2Pnet, folks at Feeva quickly got ahold of me. I’ve since had a few conversations with them. It turns out that (a) there were factual inaccuracies in the Wired article; and (b) Feeva isn’t the privacy-devastating monster it came off as in that article. Given my increased familiarity with the technology, I wanted to better outline what their technology does and alter my earlier post’s conclusion: Feeva is employing a surprisingly privacy-protective advertising system. As it stands, their system is a whole lot better at limiting infringements on individuals’ privacy for advertising-related purposes than any other scalable model that I’m presently aware of.
Before I get into the post proper, however, I do want to note that I am somewhat limited in the totality of what I can speak about. I’ve spoken with both Feeva’s Chief Technology Officer, Miten Sampat, and Chief Privacy Officer, Dr. Don Lloyd Cook, and they’ve been incredibly generous in sharing both their time and corporate information. The two have been very forthcoming with the technical details of the system employed and (unsurprisingly) some of this information is protected. As such, I can’t get into super-specifics (e.g. X technology uses Y protocol and Z hardware) but, while some abstractions are required, I think that I’ve managed to get across key elements of the system they’ve put in place.
The Feeva system is designed to avoid the privacy concerns associated with behavioural online advertising (such as those that emerged with NebuAd, Phorm, and DoubleClick) whilst also ensuring that individuals are not susceptible to the opt-out problems associated with cookie-based opt-outs. (The problem with any cookie-based opt-out scheme is that opting out requires your computer storing a unique cookie. After deleting the cookies that opted you out of the behavioural advertising, you find yourself opted back into the ad network again!) Feeva’s approach sees ISPs scrub out clearly identifiable personal information (name, account number, etc.) and pass to Feeva a unique number (representing the customer) and geolocation information (ZIP/ZIP+4) about that number. This scrubbing means that Feeva is unaware of which numbers correlate to which people; members of the company repeatedly stated to me that they don’t want to know who individuals are, and keeping their hands clean of personal information is seen as a selling feature of their approach. Where ZIP+4 geolocation information could likely identify specific individuals, the company falls back from ZIP+4 to ZIP-level geographic or neighbourhood demographics and characteristics. Barnes and Jennings summarize this technical process in their paper, “Why the end-to-end principle matters for privacy,” thusly: Feeva’s ISP partners will install an HTTP proxy that will receive “location information from the ISP’s network management infrastructure in the form of mappings between an IP address and an ‘anonymized token’, effectively a random value that Feeva can map back to a location value using information provided off-line.”
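As a rough illustration of the scrubbing step described above, consider the following sketch. Everything here — the class name, the bucket-size threshold, the header-side details — is my own invention based on the public descriptions, not Feeva’s actual implementation:

```python
import secrets

# Hypothetical ISP-side mapper: strips personal details and emits only a
# random token plus coarse location. Illustrative only, not Feeva's code.

class SubscriberMapper:
    def __init__(self, min_bucket_size=20):
        # Assumption: if a ZIP+4 covers too few households it could identify
        # a person, so fall back to the 5-digit ZIP in that case.
        self.min_bucket_size = min_bucket_size
        self.tokens = {}  # ip -> random token

    def token_for(self, ip):
        # A random value, not a hash of the IP: an attacker who captures
        # the token cannot reverse it into an address or location.
        if ip not in self.tokens:
            self.tokens[ip] = secrets.token_hex(16)
        return self.tokens[ip]

    def location_for(self, zip4, households_in_zip4):
        zip5 = zip4.split("-")[0]
        return zip4 if households_in_zip4 >= self.min_bucket_size else zip5

mapper = SubscriberMapper()
tok = mapper.token_for("203.0.113.7")
loc = mapper.location_for("90210-1234", households_in_zip4=8)
# loc falls back to the 5-digit ZIP because the ZIP+4 bucket is too small
```

The key design point this sketch tries to capture is that the token-to-location mapping lives off-line with Feeva, so nothing travelling over the wire carries geography.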
Feeva’s system of attaching tags, or adding information into HTTP headers, does not include actual geographic information. It is not using a one-way hash (as I had previously suggested might be the case) but rather a method through which an attacker who successfully captures header information would be no wiser as to the individual’s location. Reverse engineering the tag would not reveal geographical information. Further, not all headers have data injected; Feeva uses a whitelist of partners to whom information can be provided. Given that the company is aiming to generate revenue through partnerships, it lacks a business interest in making this information freely available. Once partners receive packets with Feeva’s tags, they contact Feeva to retrieve appropriate derived data for the visitor, such as household income in the neighbourhood, and display an ad. The tag system is such that it would be incredibly challenging to extract any useful, identifying, information from the tags should protections around them be breached. Moreover, partners will be contractually prevented from trying to hack the system; partners are not to try to identify individuals with information provided through Feeva. Feeva can update their whitelist, enabling them to ‘turn off’ any particular ad-partner found engaging in malpractice.
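The whitelist-gated flow described above can be sketched as follows. The header name, partner hostnames, token value, and demographic fields are all my own assumptions for illustration; none of them come from Feeva:

```python
# Hypothetical sketch of the whitelist-gated tag flow. Not Feeva's code.

WHITELIST = {"ads.partner-a.example", "ads.partner-b.example"}

# Mapping held only by Feeva, supplied off-line: token -> derived data.
DERIVED_DATA = {
    "7f3ab2c9": {"zip": "90210", "median_income_band": "50-75k"},
}

def inject_tag(headers, destination_host, token):
    """ISP-side proxy step: attach the tag only for whitelisted partners."""
    headers = dict(headers)
    if destination_host in WHITELIST:
        headers["X-Geo-Token"] = token  # opaque random value, no geography
    return headers

def partner_lookup(token):
    """Partner-side step: exchange the opaque token with Feeva for derived
    demographic data; the token itself reveals nothing if intercepted."""
    return DERIVED_DATA.get(token)

tagged = inject_tag({}, "ads.partner-a.example", "7f3ab2c9")
untagged = inject_tag({}, "tracker.unlisted.example", "7f3ab2c9")
```

Removing a misbehaving partner from `WHITELIST` is all it would take to ‘turn off’ tag injection for that partner, which matches the kill-switch the company describes.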
Individuals can opt-out of the advertising system, and Feeva has insisted that ISPs provide meaningful opt-out solutions. In speaking with members of the company, I would say that they are being entirely earnest in their drive to implement Ann Cavoukian’s privacy by design, where privacy is baked into companies’ technologies and business plans. The benefit to this opt-out approach is that once you’ve opted-out, you’re out forever. Clearing your cookies won’t result in being re-drawn into the advertising system. This is clearly a good thing.
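The fragility of the cookie approach, versus the persistence of a network-level opt-out, can be shown with a toy contrast (my own illustration; Feeva’s actual opt-out mechanism isn’t public):

```python
# Toy contrast between cookie-based and network-level opt-out. Hypothetical.

class CookieOptOut:
    """Opt-out lives in the browser: clearing cookies re-enrols the user."""
    def __init__(self):
        self.cookies = {}

    def opt_out(self):
        self.cookies["ad_optout"] = "1"

    def clear_cookies(self):
        self.cookies.clear()

    def is_opted_out(self):
        return "ad_optout" in self.cookies

class NetworkOptOut:
    """Opt-out lives at the ISP, keyed to the account: nothing the user
    does in the browser can silently undo it."""
    def __init__(self):
        self.opted_out_accounts = set()

    def opt_out(self, account_id):
        self.opted_out_accounts.add(account_id)

    def is_opted_out(self, account_id):
        return account_id in self.opted_out_accounts

cookie = CookieOptOut()
cookie.opt_out()
cookie.clear_cookies()  # the opt-out is silently lost

network = NetworkOptOut()
network.opt_out("acct-42")  # survives any browser housekeeping
```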
A well-framed privacy opt-out is important and something that the company genuinely believes in. They believe it is critically important that ISP customers are provided with meaningful opt-out opportunities, and it’s key to their business approach that individuals are given opportunities to step away from Feeva’s practices if they are uncomfortable with the advertising. It does have to be noted, of course, that opt-in systems are better for individuals. As noted in a recent New York Times post, “The Economics of Privacy Pricing,” in cases where individuals already believe they possess privacy they are more likely to pay to retain it than they are willing to give up some benefits to regain privacy. In the Times’ piece, customers who had already received a benefit ($2) for having lost their privacy were less likely to ‘pay’ $2 to regain it; with Feeva it’s less clear that customers would have already received an equivalent benefit, and thus the economic calculus against their working to ‘regain’ privacy might work out differently. Regardless, we know from other studies that opt-ins are better for consumers, and opt-outs for companies.
Barnes and Jennings also caution that, due to the configuration of Feeva’s infrastructure, individuals are unlikely to know whether they are in a Feeva-enabled network. I agree with Barnes and Jennings, but only to a point. Few consumers know when they are on a site that uses DoubleClick, Flash-based cookies, Omniture, or more silent/smaller advertising and analytics operators. In many cases there are plugins for web browsers (such as Ghostery for Firefox) and tricks that various programmers have developed to identify when a site utilizes an analytics or advertising system, and it’s not outside the realm of possibility that a plugin will detect Feeva-partnered sites. Moreover, the shift away from behavioural advertising towards demographic advertising obviously comes with its own worries and challenges, but from my own perspective it’s behavioural advertising that most worries me in the online marketplace. Not all individuals may agree with this position, but I’m personally far less comfortable with my behaviour being tracked and used for advertising purposes than with being targeted with ads based on information my ISP has, presuming that the information is used responsibly. A certain amount of this attitude might derive from a callousness on my own part: I’m bombarded with ads every weekday when I open my mailbox, and so I’m just more used to this kind of demographic advertising. It’s important to note that I’m distinguishing between the use of demographic information for advertising and for broader ‘life’ issues (e.g. where urban infrastructure is deployed, where police deploy patrol cars more regularly, etc.); Feeva is invested in the former, not the latter, uses of demographic information.
From the perspective of ‘does Feeva ever have personally identifiable information’ I’m admittedly somewhat torn. On the one hand, they lack name, date of birth, absolute specific point of residence, and so forth, and as I understand American law the company should be in the clear. I’m not certain, however, whether the lack of these specific elements of a person’s identity necessarily means that they are without personal information under a Canadian definition of the term. Specifically, they have a number that is associated with locational information, and I don’t know whether this would be a sufficient link to constitute personal information in the Canadian context. With RFID devices, the association of a number with fairly specific locational information constitutes personal information, regardless of whether the holder of the RFID chip is known, but I don’t know what kind of absolute proximity would be required for an approach like Feeva’s to be considered holding ‘personal information’. Obviously this is something that Canadian privacy lawyers will think through, and when/if the company comes to Canada I’m sure that we’ll see this issue dealt with in detail.
Of course, one cannot avoid this: Feeva is looking to deploy an advertising platform. For those absolutely opposed to advertising, it doesn’t matter what the company does – its corporate products will always be seen in a poor light. I’m admittedly not a fan of advertising but, of the scalable advertising systems that I’m aware of, Feeva is employing practices and demonstrating a sensitivity to the collection and retention of personal information (or, better put, the lack of collection and retention of personal information) that sets them apart from competitors in the advertising sphere. This is especially true when juxtaposing Feeva against NebuAd, Phorm, or DoubleClick. Further, advertising is a part of the online ecosystem and is unlikely to go away as long as we want to enjoy ‘free’ content. The company is genuinely leveraging privacy as a competitive advantage and tying it to more traditional marketing at the same time. The latter means that there are more resources for understanding how the system will impact individuals and groups – we can leverage existing information and research – and the former is to be commended.
Privacy advocates and academics alike push for privacy to be seen as a driver of business practices, and here we have an instantiation of privacy driving a business model. This is rare, and indicates just how pervasive privacy has become as an issue in Silicon Valley, even in highly competitive business environments that have historically thrived on exploiting every piece of information that can be collected. Feeva’s adoption of privacy as a cornerstone of their business indicates a (rare) success for advocates who have called for stronger privacy protections online; whether or not you locate that success in the technology itself (where I think it can be read), at the very least it should be agreed that baking privacy into Feeva’s advertising-based business model is a success. We would be better off if more companies similarly ingrained privacy into their technological infrastructure and business models alike.