Facial Blurring = Securing Individual Privacy?

The above image was taken by a Google Streetview car. As is evident, all of the faces in the picture have been blurred in accordance with Google’s anonymization policy. I think that the image works nicely as a lightning rod, capturing some of the criticisms and questions that have arisen around Streetview:

  1. Does the Streetview image-taking process itself, generally, constitute a privacy violation of some sort?
  2. Is individuals’ privacy secured by just blurring faces?
  3. Is this woman’s privacy being violated/infringed upon in some way as a result of having her photo taken?

Google’s response is, no doubt, that individuals who feel that an image is inappropriate can contact the company, which will then take the image offline. The problem is that this puts the onus on individuals. We might be willing to affirm that Google recognizes photographic privacy as a social value, insofar as any member of society who sees this as a privacy infringement/violation can also ask Google to remove the image. Still, even in the latter case this ‘outsources’ privacy to the community and is a reactive, rather than a proactive, way to limit privacy invasions (if, in fact, the image above constitutes an ‘invasion’). Regardless of whether we want to see privacy as an individual or a social value (or, better, as valuable both for individuals and society), we can more simply ponder whether blurring the face alone is enough to secure individuals’ privacy. Is anonymization the same as securing privacy?

Of course, discussions of ‘securing’ one’s privacy must gravitate to whether or not ‘securing’ or ‘controlling’ one’s personal information is the best means of framing privacy discussions. Such discourse often reaffirms liberal positions that information somehow belongs to individuals and that their personal dignity depends on their controlling or limiting ‘violations of the self’. Given that we are necessarily involved in exchanges of personal data throughout our lives, and that such exchanges are not necessarily infringements on our individualism or personal dignity, nor do they entail explicit data-sharing relationships, the idea that we can ‘control’ our personal information is questionable at best.

At the same time, the attitude that there is no privacy in public seems at odds with how we understand the world we live in – we expect (hope?) to be left alone when intentionally secluded in a large park, and feel annoyed when another person moves close to us when there are other spots in the park they could occupy. Similarly, we would feel that something was wrong were someone to follow us everywhere we went; we expect a certain degree of privacy (anonymity?) even when we are walking ‘in public’. There is a real ‘ick’ reaction when some public images are captured and then widely disseminated online (such as the above Google Streetview picture) – there is an expectation that certain contextual, culturally specific privacy norms should carry over into ‘public’ spaces. While technology advocates might say that privacy norms are shifting as rapidly as the technologies infusing our lives, I would suggest that this position is over-emphasized by tech companies. These companies, especially those that profess to ‘do no evil’, should (ought to?) respect the cultural norms of the countries in which they operate. The challenge, of course, thus becomes “how can we identify cultural norms,” and “can we/should we understand cultural norms as paralleling legal norms?”

Unfortunately I don’t have the time to effectively delve into possible ways of identifying these norms, and I recognize that doing so is challenging at best, but companies that want to avoid doing evil would be well served to treat privacy as a cultural, rather than an engineering, issue.