The Geek, Restraining Orders, and Theories of Privacy

I’ve been reading some work on privacy and social networks recently, and this, combined with Ratliff’s “Gone Forever: What Does It Really Take to Disappear”, has led me to think about whether a geek with a website that is clearly their own (e.g. Christopher-Parsons.com) should reasonably expect restraining orders to extend to digital spaces. I’m not really talking at the level of law, but at the level of normativity: ought a restraining order to limit a person from ‘following’ me online as it does from being near me in the physical world?

Restraining orders are commonly issued to prevent recurrences of abuse (physical or verbal) and stalking. While most people who have a website are unable to track who is visiting their webspace, what happens when you compulsively check your server logs (as many good geeks do) and can roughly correlate traffic to particular geo-locations? As a loose example, let’s say that you lived in a small town, ‘gained’ an estranged spouse, and then noticed regular hits to your website from that small town after you’ve been away from it for years. Let’s go further and say that you have few or no friends in that town, and that you do have a restraining order that is meant to prevent your ex-spouse from being anywhere near you. Does their surfing to your online presence (we’ll assume, for this post, that they aren’t commenting or engaging with the site) normatively constitute a breach of the order?
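
The kind of log-checking described above is trivial to script. Here’s a minimal sketch, assuming an Apache/nginx-style access log in Common Log Format and a local copy of MaxMind’s free GeoLite2 City database read via the geoip2 Python package; both file paths are illustrative.

```python
# Minimal sketch: tally website hits by geo-located city. Assumes a Common
# Log Format access log (client IP comes first on each line) and a local
# GeoLite2 City database; both paths below are illustrative.
import re
from collections import Counter

import geoip2.database
import geoip2.errors

LOG_PATH = "access.log"           # assumed log location
GEOIP_DB = "GeoLite2-City.mmdb"   # assumed local GeoLite2 database

ip_pattern = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3})")  # leading IPv4 address

hits = Counter()
with geoip2.database.Reader(GEOIP_DB) as reader:
    with open(LOG_PATH) as log:
        for line in log:
            match = ip_pattern.match(line)
            if not match:
                continue
            try:
                record = reader.city(match.group(1))
            except geoip2.errors.AddressNotFoundError:
                continue  # private or unmapped address
            city = record.city.name or "unknown"
            country = record.country.iso_code or "??"
            hits[f"{city}, {country}"] += 1

# Regular hits from one small town would stand out near the top of this list.
for location, count in hits.most_common(10):
    print(f"{count:6d}  {location}")
```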

Continue reading

Context, Privacy, and (Attempted) Blogger Anonymity

While it’s fine and good to leave a comment where neither you nor an anonymous blogger know one another, what happens when you do know the anonymous blogger and it’s clear that they want to remain anonymous? This post tries to engage with this question, and focuses on the challenges I experience when I want to post on an ‘anonymous’ blog where I know who is doing the blogging – it attends to the contextual privacy questions that race through my head before I post. As part of this, I want to think through how a set of norms might be established to address my own questions and worries, and how those norms might be communicated to visitors.

I’ve been blogging in various forms for a long time now – about a decade (!) – and on every blog I’ve ever had I have used my name. This has been done, in part, because when I write under my name I’m far more accountable than when I write under an alias (or at least I think this is the case). That said, I recognize that my stance is slightly different from that of many bloggers out there – many avoid closely associating their published content with their names, and often for exceedingly good reasons. Sometimes a blogger just wants to vent, and doesn’t want to deal with the social challenges that arise when people know that Tommy is angry. Others stay anonymous for personal safety reasons (angry or dangerous ex-spouses), some for career reasons (they are not permitted to blog, or worry about the effects of blogging on future job prospects), and some to avoid ‘-ist’ related comments (sexist, racist, ageist, etc.).

Continue reading

Continuums of Social Media?

While it’s not the core focus of my research, I pay a lot of attention to trends and conversations about social media, and I particularly focus on the common standards that support the ‘semantic’ capabilities of web-enabled appliances. In this post I want to think about ways of ‘structuring’ social media along a set of continuums/formalized networks, and about the role of HTML5’s semantic possibilities in pushing past the present set of social networking environments.

Social Media as a Hub

Social platforms are situated in the middle of a larger set of social media items; platforms are integrative, insofar as they are able to make calls to other social items and pull their content into the platform. Under a ‘social media as hub’ continuum, we might imagine that ‘spoke-based’ media items facilitate highly targeted uses; while MMORPGs are ‘social’, they are hyper-targeted and meant to maintain their own internal infrastructure.
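
To make the hub-and-spoke picture a little more concrete, here is a purely illustrative sketch – all class names are hypothetical, not any real platform’s API – of a hub that integrates spoke media through a common interface:

```python
# Hypothetical hub-and-spoke model: the platform is 'integrative' because
# it calls out to targeted spoke media and pulls their items into one feed.
from dataclasses import dataclass, field
from typing import Protocol


class SpokeMedium(Protocol):
    """Any highly targeted medium the hub can pull items from."""
    def fetch_items(self) -> list[str]: ...


@dataclass
class PhotoService:
    photos: list[str] = field(default_factory=list)

    def fetch_items(self) -> list[str]:
        return [f"photo: {name}" for name in self.photos]


@dataclass
class MicroblogService:
    posts: list[str] = field(default_factory=list)

    def fetch_items(self) -> list[str]:
        return [f"post: {text}" for text in self.posts]


@dataclass
class SocialPlatform:
    """The hub: aggregates across whatever spokes are plugged into it."""
    spokes: list[SpokeMedium] = field(default_factory=list)

    def aggregate_feed(self) -> list[str]:
        feed: list[str] = []
        for spoke in self.spokes:
            feed.extend(spoke.fetch_items())
        return feed


hub = SocialPlatform(spokes=[PhotoService(photos=["sunset.jpg"]),
                             MicroblogService(posts=["reading about DPI"])])
print(hub.aggregate_feed())
```

A spoke like an MMORPG, by contrast, would keep its items inside its own infrastructure rather than exposing a fetch_items-style interface to the hub.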

Continue reading

Beyond Fear and Deep Packet Inspection

Over the past few days I’ve been able to attend to some non-essential reading, which has given me the opportunity to start chewing through Bruce Schneier’s Beyond Fear. The book, in general, is an effort on Bruce’s part to get people thinking critically about security measures. It’s incredibly accessible and easy to read – I’d highly recommend it.

Early in the text, Schneier provides a set of questions that ought to be asked before deploying a security system. I want to very briefly think through those questions as they relate to Deep Packet Inspection (DPI) in Canada, to begin narrowing down a security-derived understanding of the technology. My hope is that, by critically engaging with the technology, a model for capturing concerns and worries can start to emerge.
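
Before working through the questions, it may help to have a concrete picture of what the ‘deep’ in Deep Packet Inspection means at a technical level: rather than reading only packet headers, DPI equipment matches patterns against packet payloads. Below is a toy sketch of that idea; the signatures are illustrative stand-ins, not real vendor rules.

```python
# Toy illustration of payload-based ('deep') classification: match byte
# patterns against packet payloads rather than reading headers alone.
SIGNATURES = {
    b"\x13BitTorrent protocol": "bittorrent-handshake",  # BT handshake prefix
    b"GET ": "http-request",
    b"\x16\x03": "tls-record",  # TLS records begin 0x16 0x03
}

def classify_payload(payload: bytes) -> str:
    """Label a payload by the first matching byte pattern."""
    window = payload[:64]  # inspect only the opening bytes, as DPI gear does
    for pattern, label in SIGNATURES.items():
        if pattern in window:
            return label
    return "unknown"

# Example: the opening bytes of a BitTorrent handshake.
sample = b"\x13BitTorrent protocol" + bytes(8)
print(classify_payload(sample))  # -> bittorrent-handshake
```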

Question 1: What assets are you trying to protect?

  • Network infrastructure from being overwhelmed by data traffic.

Question 2: What are the risks to these assets?

  • Synchronous bandwidth-heavy applications running 24/7 that generate congestion and thus broadly degrade consumer experiences.

Question 3: How well does security mitigate those risks?

Continue reading

Review: The Long Tail (Revised and Updated)

I’m in the middle of a massive reading streak for my comprehensive exams, but I’m trying to sneak in some personal reading at the same time. The first book in that ‘extra’ reading is Anderson’s “The Long Tail”, which focuses on the effects that the shift to digital systems has on economic scarcities, producers, aggregators, and consumers.

The key insight that Anderson brings to the table is this: with the birth of digital retail and communication systems, customers can find niche goods that appeal to their personal interests and tastes, rather than exclusively focusing on goods that retailers expect will be hits. This means that customers can follow the ‘long tail’, or the line of niche goods that are individually less and less likely to sell in a mass retail environment.

There are several ‘drivers’ of the long tail:

  1. There are far more niche goods than ‘hits’ (massively popular works), and more and more niche goods are being produced as the costs of production and distribution fall in various fields.
  2. Filters are increasingly effective, which means that consumers can find the niches they are interested in.
  3. There are so many niche items that, collectively, they can comprise a market rivaling hits (a quick numerical sketch of this follows the list).
  4. Without distribution bottlenecks, the ‘true’ elongated tail of the present Western economic reality is made apparent.
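
The third driver is worth a quick back-of-the-envelope check. As a minimal sketch, assuming (purely for illustration) that sales across a million-title catalogue follow a Zipf-like distribution:

```python
# Quick numerical sketch of driver 3. Assumption: sales follow a Zipf-like
# distribution, s(rank) = C / rank, across one million titles.
head_size = 1_000         # the 'hits' a physical retailer might stock
catalogue = 1_000_000     # titles a digital retailer can carry

sales = [1_000_000 / rank for rank in range(1, catalogue + 1)]

head = sum(sales[:head_size])   # revenue from the hits
tail = sum(sales[head_size:])   # revenue from everything else

print(f"hits revenue: {head:,.0f}")
print(f"tail revenue: {tail:,.0f}")
print(f"tail / hits:  {tail / head:.2f}")
```

Under this (cartoonish) assumption the tail’s revenue comes out at roughly nine-tenths of the hits’ revenue – individually negligible items summing to a market that rivals the head of the curve.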

Continue reading

Deep Packet Inspection: What Innovation Will ISPs Encourage?

All sorts of nasty things are said about ISPs that use Deep Packet Inspection (DPI): ISPs aren’t investing enough in their networks, they just want to punish early adopters of new technologies, they’re looking to deepen their regulatory capacities, or they want to track what their customers do online. ISPs, in turn, tend to insist that P2P applications are causing undue network congestion, and that DPI is the only measure presently available to alleviate such congestion.

At the moment, the constant focus on P2P over the past few years has resulted in various ‘solutions’, including the development of P4P and a shift to UDP. Unfortunately, the cat-and-mouse game between groups representing record labels, ISPs (to a limited extent), and end-users has ensured that most of the time and money is being put into ‘offensive’ and ‘defensive’ technologies and tactics online, rather than more extensively into bandwidth-limiting technologies. Offensive technologies include those that enable mass analysis of data- and protocol-types to try to stop or delay particular modes of data sharing; while DPI can be factored into this set, a multitude of other network technologies fit into this category just as easily. ‘Defensive’ technologies include port randomizers, stronger encryption and anonymity techniques, and other tools primarily designed to evade particular analyses of network activity.
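
As a small illustration of why such ‘defensive’ measures matter: the pre-DPI approach to traffic classification trusted port numbers alone, and a port randomizer defeats it entirely – which is part of what pushes ISPs toward payload inspection. The ports and labels below are illustrative.

```python
# Toy illustration: header-only (port-based) classification versus a
# port-randomizing client. Port/label mappings are illustrative.
import random

WELL_KNOWN_PORTS = {80: "http", 443: "https", 6881: "bittorrent"}

def classify_by_port(port: int) -> str:
    """Naive classification: trust the port number in the TCP/UDP header."""
    return WELL_KNOWN_PORTS.get(port, "unknown")

# A compliant client on the registered port is labelled correctly...
print(6881, classify_by_port(6881))            # -> 6881 bittorrent

# ...while a randomizing client picks a fresh ephemeral port each session
# and lands in the 'unknown' bucket, evading port-based throttling.
ephemeral = random.randint(49152, 65535)
print(ephemeral, classify_by_port(ephemeral))  # -> <port> unknown
```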

I should state up front that I don’t want to make myself out to be a technological determinist; neither ‘offensive’ nor ‘defensive’ technologies are in a necessary causal relationship with one another. Many of the ‘offensive’ technologies could have been developed in light of increasingly nuanced viral attacks and spam barrages, to say nothing of the heightening complexity of intrusion attacks and pressure from the copyright lobbies. Similarly, encryption and anonymity technologies would have continued to develop regardless, given that in many nations it is impossible to trust local ISPs or governments.

Continue reading