While it’s not the core focus of my research, I pay a lot of attention to trends and conversations about social media, and particularly to the common standards that support the ‘semantic’ capabilities of web-enabled appliances. In this post I want to think about ways of ‘structuring’ social media along a set of continuums and formalized networks, and about the role of HTML 5’s semantic possibilities in pushing past the present set of social networking environments.
Social Media as a Hub
As shown in the image to the left, social platforms are situated in the middle of a set of larger social media items; platforms are integrative, insofar as they are able to make calls to other social items and enrich the platform. Under a ‘social media as hub’ continuum, we might imagine that ‘spoke-based’ media items facilitate highly targeted uses; while MMORPGs are ‘social’, they are hyper-targeted and meant to maintain their own internal infrastructure.
This said, there may be ways that social platforms can integrate MMORPG content; maybe on Facebook someone joins a WoW page, or installs an application that lets other Facebook members examine the gear a character carries. We can also see how something like Flickr might be integrated – a social platform that offers a diverse set of functions might incorporate a user’s Flickr photos, whereas Flickr cannot (easily) call on the data stored in this social platform. In essence, this continuum would need to establish some rule-of-thumb classificatory schemes for the ‘types’ of media platforms and then ascribe a specific set of criteria to each platform.
Web 1.0 >> Web 2.0 >> Web Squared
Under this continuum, there is a progression of social media sites and activities that are based principally on the technologies underwriting the web. The problem (as will be a bit more evident as we hit the end of this type of continuum) is that Web Squared is a relatively new term. While this leaves it fertile ground for future theoretical monkeying-around, it also means that there is fairly little Web Squared to be found in the wilds of the ‘net.
Web 1.0 services were dominated by content being delivered by creators, with only limited abilities to engage with and transform the content after it had been published. Remember: in 2001 blogging was still a relatively elite/bizarre mode of engaging with audiences, and the interfaces were FAR from enjoyable to use (trust me…). Applications characterized as ‘Web 1.0’ included DoubleClick, Ofoto, Akamai, mp3.com, Britannica Online, and personal websites, while core business interests surrounded page views, screen scraping, publishing, directory taxonomies, and ‘stickiness’. Their 2.0 replacements were, respectively: Google AdSense, Flickr, BitTorrent, Napster, Wikipedia, and blogging, alongside cost-per-click, web services, participation, tagging folksonomies, and syndication.
We should never forget that Web 2.0 is actually a pretty old term at this point. It was coined at the original Web 2.0 conference to explain what the applications that emerged from the dot-com bust had in common. Under the Web 2.0 platform, users were meant to control their own data as the web itself became an application and data-engagement platform. Core elements of Web 2.0 included early adoption of Software as a Service (SaaS), persistent user engagement with products, cost-effective scalability, remixable data sources, software functioning across different hardware and operating system configurations, and the harnessing of the crowd (Source).
With Web Squared (original piece on topic here), we are at a point where the nascent capacities of Web 2.0 can be realized more formally – the Internet can be everywhere. While the Internet of Things typically refers to Internet Protocol-aware RFID devices, this network can actually include ANYTHING that has been ‘drawn into’ the Internet. Your pillow doesn’t need an RFID chip in it; it just needs to have a picture of its barcode snapped at the store for a price check against all other stores selling pillows in the region. Essentially, as geolocative technologies are drawn into consumer applications and combined with inexpensive cameras and bandwidth, it is possible for the crowd to get involved in ‘sucking’ things into the web.
In this continuum, we might see Web 1.0 services as injecting early content into the web, content that could then be leveraged by Web 2.0 applications and techniques to develop increasingly involved data profiles and data sets. In essence, Web 2.0 depended on people taking the time to tag data, to geolocate photos, to blog about niche content, to upload their personal details into database fields, and to develop interaction patterns. With all of the Web 1.0 and 2.0 data drawn together, the skeleton for a Web Squared environment is set.
Key to Web Squared, as far as I’m concerned, is the emergence of the new HTML 5 standard. What is new with HTML 5 is that it gives a big push for ‘semantic browsing’: it includes new structural elements (header, nav, section, article, aside, footer) alongside offline data capacity, drag and drop, video and audio tags, and geolocation. From ‘article’ it will be apparent which elements of a web page really apply to a text-based search, whereas the video and audio tags will make those kinds of media easy to identify in search. In all of these cases, with appropriate meta-data, search can more precisely target and mix information types. Data already existing in the Web 2.0 environment has to be situated into HTML 5, after which it will be possible to remix data in ways that we can’t presently imagine. The core issues facing HTML 5 are:
- extensibility of the structural elements
- backwards compatibility
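To make those structural elements concrete, here is a minimal sketch of an HTML 5 page that uses the new semantic tags; the page title, headings, link targets, and media file name are all hypothetical placeholders:

```html
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <title>Example Blog</title> <!-- hypothetical title -->
</head>
<body>
  <header>
    <h1>Example Blog</h1>
    <nav>
      <!-- hypothetical navigation links -->
      <a href="/archive">Archive</a>
      <a href="/about">About</a>
    </nav>
  </header>
  <section>
    <article>
      <h2>A post title</h2>
      <p>The text that a search engine should treat as the actual
         content of the page, rather than surrounding chrome.</p>
      <!-- hypothetical media file; the video tag marks this as
           audiovisual content without any plugin wrapper -->
      <video src="clip.ogv" controls></video>
    </article>
    <aside>
      <p>Related links, tag clouds, and other secondary material.</p>
    </aside>
  </section>
  <footer>
    <p>Copyright notice, colophon, and so on.</p>
  </footer>
</body>
</html>
```

The point is that a crawler no longer has to guess which div holds the ‘real’ content: the article element flags the text that matters for search, while nav, aside, header, and footer flag material that can be discounted or treated differently.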
Thus, under this continuum model, a Web 1.0 site would really mean that content providers offer static content that is not inherently meant to be socially engaged with – discussion isn’t built into the provision of content, nor is external content invited. Web 2.0 demands that content be ‘socialized’, insofar as it is meant to more precisely monitor and target discrete usage patterns and encourage engagement through conversation and dynamic content provision/remix. Web Squared would extend beyond social platforms to try to integrate socialization into the very infrastructure of the ‘net through emerging web standards. With search and open APIs come massive reconfigurations of data, and new metadata methods of mediating data engagement that are capable of pico-targeting.
Niche-Macro Audiences/Reasonable Expectations
Under this continuum, we might imagine social networking sites that are more or less appealing to the public at large. Presuming that these sites meet the criteria of Web 2.0 (most especially, on the technical end, scalability), we can imagine that there are social networking sites that are so targeted that only a very small niche of individuals may be interested in them (e.g. Barack Obama’s Organizing for America), while others are meant to be a jack-of-all-trades (e.g. Facebook). Depending on the audience in question, we might also map on a reasonable-expectations-of-privacy continuum and see if there is a correlation between degrees of targeting and actual (versus perceived) expectations of privacy. The benefit of this way of analyzing social networking environments is that an analysis of privacy drawing on excellent sociological and anthropological research into expectations of privacy could be used as a foil to actual expectations of privacy in social networking. In effect, a three-dimensional modeling of targeting/perceived expectations/actual expectations could visualize data relationships and lend itself to extended theorization about what hyper-public social markup standards and languages would look like.
There are obviously different degrees of privacy afforded at the front end of social networking environments. Twitter, on the one hand, puts most of the data transmitted using the service into the public eye (though I’ve written about expectations of privacy on Twitter from a theory-based point of view that contests reading data this way). On the other hand, we see Facebook with a massive set of privacy controls (though it is debatable whether users really understand how to configure them). Are privacy settings implemented on the basis of regulatory pressure, user pressure, or expected consumers? We might see how privacy settings/business expectations/audience expectations map onto a three-dimensional graph in some format to develop a visual comparison between networking environments and map trends.
These are just a few of the possible continuums for social networking environments – chime in with your own suggestions in the comments!