I attended this year’s Computers, Freedom, and Privacy conference and spent time in sessions on privacy in large data sets, deep packet inspection and network neutrality, the role of privacy in venture capital pitches, and what businesses are doing to secure privacy. In addition, a collection of us worked for some time to produce a rough draft of the Social Network Users’ Bill of Rights, which was subsequently discussed and ratified by the conference participants. In this post, I want to speak to the motivations behind the Bill of Rights, the characteristics of social networking and the Bill proper, and a few hopeful outcomes resulting from the Bill’s instantiation, and conclude by noting some concerns around the Bill’s creation and the consequent challenges in moving it forward.
First, let me speak to the motivation behind the Bill. Social networking environments are increasingly becoming the places where individuals store key information – contact information, photos, thoughts and reflections, video – and are genuinely becoming integrated into political life. This integration was particularly poignantly demonstrated last year when the American State Department asked Twitter to delay upgrades that would disrupt service and stem the information flowing out of Iran following the illegitimate election of President Ahmadinejad. Social networks have already been tied into the economic and social landscapes in profound ways: we see infrastructure costs for maintaining core business functionality approaching zero, and the labor that was historically required for initiating conversations and meetings, to say nothing of shared authorship, has been integrated into social networking platforms themselves. Social networking, under this rubric, extends beyond sites such as Facebook and MySpace, and encapsulates companies like Google and Yahoo!, WordPress, and Digg, and their associated product offerings. Social networking extends well beyond social media; we can turn to Mashable’s collection of twenty characteristics included in the term ‘social networking’ for guidance as to what the term captures:
We are rapidly shifting towards a ubiquitous networked world, one that promises to accelerate our access to information and each other, but this network requires a few key elements. Bandwidth must be plentiful, mobile devices that can engage with this world must be widely deployed, and some kind of normative-regulatory framework that encourages creation and consumption must be in place. As it stands, backhaul bandwidth is plentiful, though front-line cellular towers in America and (possibly) Canada are largely unable to accommodate the growing ubiquity of smart devices. In addition to this challenge, we operate in a world where the normative-regulatory framework for the mobile world is threatened by regulatory capture that encourages limited consumption to maximize revenues while simultaneously discouraging rich, mobile, creative actions. Without a shift to fact-based policy decisions and pricing systems, North America risks becoming the new tech ghetto of the mobile world: rich in talent and the ability to innovate, but poor in the actual infrastructure to locally enjoy those innovations.
At the Canadian Telecom Summit this year, mobile operators such as TELUS, Wind Mobile, and Rogers Communications were all quick to pounce on the problems facing AT&T in the US. AT&T regularly suffers voice and data outages for its highest-revenue customers: those who own and use smartphones built on Android, WebOS (i.e. the Palm Pre and Pixi), and iOS. Each of these Canadian mobile companies used AT&T’s weaknesses to hammer home that unlimited bandwidth cannot be offered over mobile networks, and suggested that AT&T’s shift from unlimited to limited data plans is indicative of the backhaul and/or spectrum problems caused by smart devices. While I do not want to entirely contest the claim that there are challenges in managing exponential increases in mobile data growth, I do want to suggest that technical analysis, rather than rhetorical ‘obviousness’, should be applied to understand the similarities and differences between Canadian telcos/cablecos and AT&T.
For the past little while I’ve been (back) in Ontario trying to soak up as much information as I can about telecommunications and deep packet inspection. I was generously given the opportunity to attend the Canadian Telecom Summit by Mark Goldberg a while ago, and it was an amazing experience. I found that the new media panel, where broadcasters and carriers came together to discuss their (often contrasting) modes of disseminating content, offered some real insights into the approaches to media on the ‘net. It brought the very clear contrasts in how new media might operate, and be seen by the dominant carriers, into focus for me, and really began to provide a broader image of the actual strategies of the various parties.
A huge element of the conference surrounded the development of wireless as the new space for innovation. Often unspoken, save for in informal discussions, was that wireline was seen as increasingly outmoded. Most statistics that were formally presented saw wireless overtaking wireline broadband by 2014 or so. This has me wondering how important it is to examine capital expenditures by major broadband providers – while we read that there is massive investment (totaling in the hundreds of millions, or billions, per year across all carriers), how much is in wireless and how much is in wireline infrastructure?
As of this week I’m working with a series of incredibly smart, erudite individuals to set up and run a graduate student conference – I’m excited, but nervous! I want to quickly note what technology we will (hopefully) be using, then note some of the immediate challenges standing before all of us, and invite any comments on how to overcome or run around them.
First, I think that we may have found an online conferencing system that would really make life easy – the Public Knowledge Project provides a FOSS conference system that is really awesome. I’ve used their Open Journal Systems when submitting a paper to a University of British Columbia undergraduate journal (Prolegomena), and it was a really slick system. I think that (for me at the time) the most awesome part of the system was that I could log in and see how far along in the process my paper was. It kept me from harassing the journal editors, which I’m confident is a reasonably common problem with other methods of harvesting and selecting papers.
I’m giving a presentation on Web 2.0 tools in under a month and, since I’ve received notice from the conference organizers, I’ve been working diligently to compile tools and identify their uses and potentials for abuse. Over the coming week or two I expect I’ll be posting a reasonable amount about the thoughts and ideas I have surrounding my presentation – comments are of course welcome here, and you are also welcome to look at and contribute to the wiki article that I’ve set up for the conference.
Before getting into content in any depth, I wanted to take a step back and reflect on what I am referring to when talking about ‘Web 2.0’ and how it (potentially) applies to post-secondary education. I’m not going to get into the politics of technology in post-secondary environments, or at least I’m not planning on posting directly about this (largely because I work in an educational institution, and it’s really best to keep some thoughts to yourself).
I’ve recently been accepted to present at a conference for incoming TAs at my university. I’m giving a talk of over an hour on Web 2.0, its possibilities, and its pitfalls. Obviously I’m going to be going nuts building up information to provide, but does anyone have anything that I *need* to be talking about? The current list includes things like blogging, wikis, and online data archival tools.
I’m going to have a 100% captive audience, and lots of time, so your ideas and suggestions would be extremely appreciated!