Technology, Thoughts & Trinkets

Touring the digital through type

Update: Feeva, Advertising, and Privacy

When you spend a lot of time working in the areas of copyright, traffic sniffing and analysis, and the Internet’s surveillance infrastructure more generally, there is a tendency to expect bad things on a daily basis. This expectation is built up from years of horrors, and I’m rarely disappointed in my day-to-day research. Thus, when Wired reported that a company called Feeva was injecting locational information into packet headers, the actions didn’t come across as surprising; privacy infringements as reported in the Wired piece are depressingly common. In response I wrote a brief post decrying the modification of packet headers for geolocational purposes and was quoted by Jon Newton on P2Pnet on my reactions to what I understood at the time was going on.

After the post and quotations turned up on P2Pnet, folks at Feeva quickly got hold of me. I’ve since had a few conversations with them. It turns out that (a) there were factual inaccuracies in the Wired article and (b) Feeva isn’t the privacy-devastating monster that it came off as there. Given my increased familiarity with the technology, I wanted to better outline what their technology does and alter my earlier post’s conclusion: Feeva is employing a surprisingly privacy-protective advertising system. As it stands, their system is a whole lot better at limiting infringements on individuals’ privacy for advertising-related purposes than any other scalable model that I’m presently aware of.

Before I get into the post proper, however, I do want to note that I am somewhat limited in what I can speak about. I’ve spoken with both Feeva’s Chief Technology Officer, Miten Sampat, and Chief Privacy Officer, Dr. Don Lloyd Cook, and they’ve been incredibly generous in sharing both their time and corporate information. The two have been very forthcoming with the technical details of the system employed and (unsurprisingly) some of this information is protected. As such, I can’t get into super-specifics (e.g. X technology uses Y protocol and Z hardware) but, while some abstractions are required, I think that I’ve managed to get across the key elements of the system they’ve put in place.

The Feeva system is designed to avoid the privacy concerns associated with behavioural online advertising (such as those that emerged with NebuAd, Phorm, and DoubleClick) whilst also ensuring that individuals are not susceptible to the problems associated with cookie-based opt-outs. (The problem with any cookie-based opt-out scheme is that opting out requires your computer to host a unique cookie. After deleting the cookies that opted you out of the behavioural advertising, you find yourself opted back into the ad network again!) Feeva’s approach sees ISPs scrub out clearly identifiable personal information (name, account number, etc.) and passes to Feeva a unique number (representing the customer) and geolocation information (ZIP/ZIP+4) about that number. This scrubbing means that Feeva is unaware of which numbers correlate to which people; members of the company repeatedly stated to me that they don’t want to know who individuals are, and keeping their hands clean of personal information is seen as a selling feature of their approach. Where the geolocation information could likely identify specific individuals, the company falls back from ZIP+4 to ZIP-level geographic or neighbourhood demographics and characteristics. Barnes and Jennings summarize this technical process in their paper, “Why the end-to-end principle matters for privacy,” thusly: Feeva’s ISP partners will install a HTTP proxy that will receive “location information from the ISP’s network management infrastructure in the form of mappings between an IP address and an “anonymized token”, effectively a random value that Feeva can map back to a location value using information provided off-line.”
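The scrubbing step described above can be sketched roughly as follows. This is a minimal illustration based only on the description in this post; the function name, the threshold constant, and the exact fall-back logic are my own assumptions, not Feeva’s actual implementation.

```python
import secrets

# Assumed cutoff for when a ZIP+4 is precise enough to single someone out;
# the real criterion Feeva's ISP partners use is not public.
ZIP4_HOUSEHOLD_THRESHOLD = 20

def scrub_customer(name, account_number, zip_plus_4, households_in_zip4):
    """Replace identifying fields with a random token and coarse location.

    Only (token, location) leaves the ISP; the name and account number never
    do, and the token is random rather than derived from either, so it cannot
    be reversed into an identity.
    """
    token = secrets.token_hex(16)
    # Where ZIP+4 could likely identify a specific individual, fall back
    # to the five-digit ZIP.
    if households_in_zip4 < ZIP4_HOUSEHOLD_THRESHOLD:
        location = zip_plus_4[:5]
    else:
        location = zip_plus_4
    return token, location
```

The key property is that the mapping from token back to customer exists only on the ISP’s side; Feeva receives the (token, location) pair and nothing else.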

Feeva’s system of attaching tags, or adding information into HTTP headers, does not include actual geographic information. It is not using a one-way hash (as I had previously suggested might be the case) but a method through which an attacker who successfully captures header information would be no wiser as to the individual’s location; reverse engineering the tag would not reveal geographical information. Further, not all headers have data injected; Feeva uses a whitelist of partners to whom information can be provided. Given that the company is aiming to generate revenue through partnerships, it lacks a business interest in making this information freely available. Once partners receive packets with Feeva’s tags they contact Feeva to obtain appropriate derived data for the visitor, such as household income in the neighbourhood, and then display an ad. The tag system is such that it would be incredibly challenging to extract any useful, identifying information from the tags should the protections around them be breached. Moreover, partners will be contractually prevented from trying to hack the system; partners are not to try to identify individuals with information provided through Feeva. Feeva can update their whitelist, enabling them to ‘turn off’ any particular ad partner found engaging in malpractice.
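The whitelist-gated tagging can be sketched as below. Again this is only an illustration of the mechanism as described in this post: the header name “X-Feeva-Tag” and the partner hostname are my inventions, not the real tag format.

```python
# Hypothetical whitelist of ad partners; Feeva can add or remove entries
# to 'turn off' a misbehaving partner.
PARTNER_WHITELIST = {"ads.example-partner.com"}

def maybe_inject_tag(headers, destination_host, token):
    """Return a copy of the request headers, tagged only when the request
    is bound for a whitelisted partner."""
    tagged = dict(headers)
    if destination_host in PARTNER_WHITELIST:
        # The tag is an opaque token: it carries no geographic data itself,
        # so a sniffer who captures it learns nothing about location. The
        # partner must query Feeva to resolve it into derived demographics.
        tagged["X-Feeva-Tag"] = token
    return tagged
```

Removing a partner’s hostname from the whitelist immediately stops the proxy from tagging traffic bound for that partner, which matches the ‘turn off’ behaviour described above.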

Individuals can opt out of the advertising system, and Feeva has insisted that ISPs provide meaningful opt-out solutions. In speaking with members of the company, I would say that they are entirely earnest in their drive to implement Ann Cavoukian’s privacy by design, where privacy is baked into companies’ technologies and business plans. The benefit of this opt-out approach is that once you’ve opted out, you’re out forever. Clearing your cookies won’t result in being re-drawn into the advertising system. This is clearly a good thing.

A well-framed privacy opt-out is important and something that the company genuinely believes in. They see it as critically important that ISP customers are provided with meaningful opt-out opportunities, and it’s key to their business approach that individuals can step away from Feeva’s practices if they are uncomfortable with the advertising. It does have to be noted, of course, that opt-in systems are better for individuals. As noted in a recent New York Times post, “The Economics of Privacy Pricing,” individuals who already believe they possess privacy are more likely to pay to retain it than individuals who have lost it are to give up benefits to regain it. In the Times’ piece, customers who had already received a benefit ($2) for having lost their privacy were less likely to ‘pay’ $2 to regain it; with Feeva it’s less clear that customers would have already received an equivalent benefit, and thus the economic calculus against their working to ‘regain’ privacy might work out differently. Regardless, we know from other studies that opt-ins are better for consumers, and opt-outs for companies.

Barnes and Jennings also caution that, due to the configuration of Feeva’s infrastructure, individuals are unlikely to know whether they are on a Feeva-enabled network. I agree with Barnes and Jennings, but only to a point. Few consumers know when they are on a site that uses DoubleClick, Flash-based cookies, Omniture, or smaller and quieter advertising and analytics operations. In many cases there are browser plugins (such as Ghostery for Firefox) and tricks that various programmers have developed to identify when a site utilizes an analytics or advertising system, and it’s not outside the realm of possibility that a plugin will detect Feeva-partnered sites. Moreover, the shift away from behavioural advertising towards demographic advertising obviously comes with its own worries and challenges, but from my own perspective it’s behavioural advertising that most worries me in the online marketplace. Not all individuals will agree with this position, but I’m personally far less comfortable with my behaviour being tracked and used for advertising purposes than with being targeted with ads based on information my ISP has, presuming that the information is used responsibly. A certain amount of this attitude might derive from a callousness on my own part: I’m bombarded with ads every weekday when I open my mailbox, and so I’m simply more used to this kind of demographic advertising. It’s important to note that I’m distinguishing between the use of demographic information for advertising and for broader ‘life’ issues (e.g. where urban infrastructure is deployed, where police deploy patrol cars more regularly, etc.); Feeva is invested in the former, not the latter, uses of demographic information.

From the perspective of ‘does Feeva ever have personally identifiable information’ I’m admittedly somewhat torn. On the one hand they lack name, date of birth, a specific point of residence, and so forth, and as I understand American law the company should be in the clear. I’m not certain, however, whether the lack of these specific elements of a person’s identity necessarily means that they are without personal information under a Canadian definition of the term. Specifically, they have a number that is associated with locational information, and I don’t know whether this would be a sufficient link to constitute personal information in the Canadian context. With RFID devices the association of a number with fairly specific locational information constitutes personal information, regardless of whether the holder of the RFID chip is known, but I don’t know what degree of locational precision would be required for an approach like Feeva’s to be considered holding ‘personal information’. Obviously this is something that Canadian privacy lawyers will think through, and when/if the company comes to Canada I’m sure that we’ll see this issue dealt with in detail.

Of course, one cannot avoid this: Feeva is looking to deploy an advertising platform. For those absolutely opposed to advertising, it doesn’t matter what the company does – its corporate products will always be seen in a poor light. I’m admittedly not a fan of advertising but, of the scalable advertising systems that I’m aware of, Feeva is employing practices and demonstrating a sensitivity to the collection and retention of personal information (or, better put, the lack of collection and retention of personal information) that sets them apart from competitors in the advertising sphere. This is especially true when juxtaposing Feeva against NebuAd, Phorm, or DoubleClick. Further, advertising is a part of the online ecosystem and is unlikely to go away as long as we want to enjoy ‘free’ content. The company is genuinely leveraging privacy as a competitive advantage while tying it to more traditional marketing. The latter means that there are more resources for understanding how the system will impact individuals and groups – we can leverage existing information and research – and the former is to be commended.

Privacy advocates and academics alike push for privacy to be seen as a driver of business practices, and here we have an instantiation of privacy driving a business model. This is rare, and indicates just how pervasive privacy has become as an issue in Silicon Valley, even in highly competitive business environments that have historically thrived on exploiting every piece of information that can be collected. Feeva’s adoption of privacy as a cornerstone of their business indicates a (rare) success for those who have advocated for stronger privacy protections online; whether or not you locate that success in the technology itself (where I think one can be read), at the very least it should be agreed that baking privacy into Feeva’s advertising-based business model is a success. We would be better off if more companies similarly ingrained privacy into their technological infrastructure and business models alike.

5 Comments

  1. Half-formed thought. Could any advertiser ticketing type system (like Feeva’s) be truly anonymous (no way to calculate or derive the true source) in an IPv6 world? Or would a Feeva app break an IPv6 communication channel, and therefore not be workable?

    So there’s two thoughts there. Will header modification products work in an IPv6 world, and if so, can they be truly anonymizing?

  2. “ISPs scrub out clearly identifiable personal information (name, account number, etc.) and passes to Feeva a unique number (representing the customer)”

    So, if I understand it correctly: Someone is serving an ad targeted to a /unique/ customer, based on data derived from the ISP and Feeva. This is at least potentially personally identifiable data. I don’t get why this would not be a privacy risk. Give us the source and spare us the stuff about their cool attitude and nice talks you had.

  3. @Catelli

    I’m not familiar enough with the establishment of IP addresses to say, with absolute certainty, that header modification products would work in an IPv6 world, but I don’t know why they wouldn’t. Information added by an HTTP proxy to a packet header doesn’t necessarily interfere with the creation of the IPv6 address itself, or at least if it does that would be news to me. While IPv6 has a ‘simplified’ header compared to the IPv4 header, IPv6’s extensibility accommodates what was formally dropped in the shift from IPv4. As such, additions to the header should remain possible.

    To the latter question, the anonymization provided through IPv6 is dependent on how it’s implemented. As I noted in my earlier post on IPv6 (https://www.christopher-parsons.com/technology/ipv6-and-the-future-of-privacy/) the working group on IPng resolved this particular problem. Assuming that ISPs implement it according to standards, then IPv6’s limitations are on the ISP’s end of things, not on whether data is/isn’t added to packet headers.

  4. @Ralf Bendrath

    The worry that you raise is also noted by Barnes and Jennings; they worry about Feeva being a point of failure and a target for efforts to deanonymize information. The ISP engages in a process that prevents Feeva from re-associating the number they receive with precise information about customers, and Feeva is not specifically aimed at individual-level marketing. As such, in the case of a breach at most there would be a series of numbers associated with a demographic area; this doesn’t link specific individuals to anything unless you already have access to additional information (e.g. names associated with ZIP codes) to try and link back the information Feeva has received. In this case, you really haven’t ‘gained’ anything that you didn’t already have by having a list of names associated with ZIP codes (which are already available online). Even with a breach exposing a number given to Feeva, you can’t subsequently associate the tags included in HTTP headers with a particular individual, given how the tags are designed. In effect, for a deanonymization to take place, there has to be a near absolute failure of the system, at various points in the chain. It doesn’t break with one breach.

    The ‘risk’ posed by the use of this information is parallel to the risk associated with traditional demographic marketing, though in the case of demographic marketers I’m less familiar with the process by which individual marketers do (or do not) employ technology to limit the collection of individually identifiable personal information. Feeva has a decent system in place from what I’ve learned.

    As for handing over the ‘source’, as I noted, I’m limited in what I can provide at a technical level. I’ve given what I could, at a level intended to shine a light into the processes that are involved with Feeva’s system without infringing on trade secrets – this is more information than was provided by Wired or any other source.

    As for ‘spare us the stuff about their cool attitude and nice talks [I] had’, I saw the inclusion of that information as significant; I’ve had various discussions with companies where I felt they were dodgy, and this wasn’t the case here. The intentions of companies, and their corporate officers, are important as I approach things and I tried to convey that. Further, I find that bringing in the human element of technical projects does a great deal to overcome the dehumanization of technological issues that I encounter when reading about technologies and their corporations. Sorry that you felt it took away from the substance of what I wrote about.

  5. Thanks for the clarifications, Christopher. I did not want to sound grumpy, I am just really cautious when companies talk to people like us and tell us how nicely they try to protect privacy. Been there, done that. 😉
