There is an ongoing push to ‘better’ monetize the mobile marketplace. In this near-future market, wireless providers would use DPI and other Quality of Service equipment to charge subscribers for each and every action they take online. The past few weeks have seen Sandvine and other vendors talk up this potential, and Rogers has begun testing the market to determine whether mobile customers will pay for data prioritization. The prioritization of data is a network neutrality issue proper, and one that demands careful consideration and examination.
In this post, I’m not talking about network neutrality. Instead, I’m going to talk about what supposedly drives prioritization schemes in Canada’s wireless marketplace: congestion. Consider this a repartee to the oft-touted position that ‘wireless is different’: ISPs assert that wireless is different from wireline for their own regulatory ends, but blur distinctions between the two when pitching ‘congestion management’ schemes to customers. In this post I suggest that the congestion faced by AT&T and other wireless providers has far less to do with data congestion than with signal congestion, and that carriers have to take responsibility for the latter.
Since the first generation iPhone was exclusively released to AT&T in the US, AT&T’s network woes have become almost legendary amongst smartphone users. Dropped calls. Slow downloads. Dead zones. From today’s vantage point, we often forget just how remarkable it is for a smartphone to properly display the web with a rich browser, let alone how significantly we have embraced ‘application-based’ computing on our smart devices. While these portable computers (with bolted-on phone capabilities) are delightful to use, wireless providers around the world often warn us about ‘data pigs’ who ‘hog’ all the bandwidth to the detriment of fellow wireless customers. How dare those paying a premium for wireless bandwidth use it!
To underscore the ‘dangers’ that these ‘pigs’ pose, the problems facing AT&T’s wireless network are regularly called forth by carriers and media to demonstrate just how bad things can get if networks are poorly provisioned. The problem is that, while there have been massive increases in the amount of wireless data that providers have to carry, data transmissions are not solely responsible for congestion on AT&T’s network. Let me explain.
AT&T’s network has experienced a 5000% increase in wireless data traffic in the past few years. This growth has been fuelled by the hosts of smart devices brought to market since the release of the iPhone, with Apple’s products and Android-OS devices accounting for a considerable percentage of that growth. The story goes that, since smartphone ‘pigs’ are breaking the Internet for everyone, the pigs should have to pay extra for the privilege, with Rogers presently contemplating whether these customers should pay a surcharge if they want to evade network congestion. With the regularity that data congestion is written and spoken about by the media, congestion has become a ‘fact’ around which discussions about managed Internet services are oriented. Congestion is the ‘fact’ that reinforces why consumers need to subsidize massive infrastructure investments. Congestion is also a fact that desperately needs public contestation.
What mobile providers are less likely to talk about are the technical difficulties facing AT&T around the smart devices they support. The iPhone, along with other smart devices including the Blackberry and those running Android, has historically been configured to drop data connections once any requested data is received, reinitiating a data connection when the device requires additional data. This technique conserves battery power but has the unfortunate effect of overloading the signalling channels that cell nodes use to set up data connections, signal phone calls, transmit SMS messages, receive voice messages, and so forth. Save for extremely poorly provisioned areas, data capacity itself on the cell nodes is rarely a problem. Instead, older networks that didn’t see early adoption of heavy texting or data use (read: North American networks) must be upgraded to handle an ever-increasing number of devices that are rapidly connecting and disconnecting from the networks.
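To make the dynamic concrete, here is a toy model of why this behaviour stresses the signalling channel rather than the data channel. Every constant below is hypothetical (real 3G networks involve RRC state machines and far more message types); the point is only that signalling load scales with the number of connection setups, not with the bytes carried.

```python
# Toy model: count signalling messages generated by a device's data bursts.
# A device that drops its radio connection after every burst forces a full
# connection setup each time; one that holds the connection briefly does not.

SETUP_MESSAGES = 30  # assumed signalling messages per radio connection setup

def signalling_load(burst_times, hold_time):
    """Return total signalling messages for bursts at the given timestamps.

    hold_time: seconds the device keeps its radio connection alive after a
    burst before dropping back to a dormant state.
    """
    setups = 0
    connection_expires = -1.0  # time at which the radio goes dormant
    for t in burst_times:
        if t >= connection_expires:  # radio is dormant: full setup required
            setups += 1
        connection_expires = t + hold_time
    return setups * SETUP_MESSAGES

# A chatty app polling every 3 seconds for a minute: tiny data, many bursts.
bursts = [i * 3.0 for i in range(20)]

aggressive = signalling_load(bursts, hold_time=0.0)   # drop after every burst
patient = signalling_load(bursts, hold_time=10.0)     # hold for 10 seconds

print(aggressive, patient)  # the aggressive policy signals 20x as often
```

The data volume is identical in both cases; only the teardown policy differs. This is the sense in which a phone can ‘congest’ a tower while barely moving any data, and why a software change to the teardown policy (as in iOS 4.2) relieves pressure that no data surcharge ever could.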
What does this mean for congestion, then? It means that while a small minority of users might be using truly “amazing” amounts of data (read: 2+ GB/month), simply having a contemporary, pre-iOS 4.2 Apple product on your person contributes to cellular ‘congestion’. (The same is true if you use a WebOS device, Android-based phone, or a Blackberry.) Why distinguish between versions of the operating system driving Apple’s mobile product lines? Because from Nokia Siemens Networks’ tests last month we’ve learned that the newest version of iOS corrects Apple devices’ signalling problems. Apple has implemented a signalling technique that increases battery life while also addressing the signal congestion problem; they’ve simultaneously made things better for both the consumer and the carrier.
To recap: Apple has largely corrected the congestion problems caused by their devices, when those devices are used in well-provisioned wireless networks. This was accomplished by upgrading iDevices’ operating systems, not by building out new carrier infrastructure. Imposing higher fees for carrying data, reducing data consumption, and similar measures have a minimal effect on the kinds of signalling congestion caused by contemporary smart devices.
So, what lessons can we draw from these actual, engineering-based facts?
- The language around congestion is unclear and needs more nuance. Wireless providers use AT&T’s woes as demonstrations of what happens when networks experience data congestion. Data congestion is great for carriers because it’s easy to ascribe responsibility for excess data usage: customer A has used a lot of data, caused congestion, and now has to pay for it (though the sidenote is that these customers are already paying for their bandwidth; when was the last time your wireless provider just gave away wireless bandwidth?). Our public debates need to incorporate the notion of signal congestion, where the devices sold to us by wireless providers are themselves responsible for congestion. If providers are selling (on long-term contracts) devices that they know cause signalling congestion, why should the customer ever have to pay extra for their faulty device to be made fully operable? While it might be acceptable for customers to cover some data congestion costs, carriers must be held responsible for selling devices that cause signal congestion.
- We need engineers to take part in public discussions if we’re to understand the actual problems facing wireless providers. Engineers understand ‘network management’ a bit differently than most business executives; the former want the network to function at a technical level, whereas the latter want the network to be productive in a technical and economic sense – the network needs to be operable, but it also needs to be maximally profitable while operating. Before we can discuss (ir)rational economic and business practices, everyone needs to be on a common technical footing, and this means engineers with knowledge of the networks are essential for well-informed debate.
- In the absence of engineers coming forward from within the wireless provider companies to correct the technical facts around signal and data congestion, some kind of oversight mechanism for wireless networks is required. Whether this is a federal institution, an international association, or something else isn’t key for my overarching point: we need someone to keep providers honest about what causes problems on their networks.
- Before we talk about offering ‘prioritized service’ to smartphone customers, we desperately need to clarify whether this service is designed to monetize signalling congestion or data congestion. If it’s the former, then we need to talk about providers’ responsibilities to offer fully functional and compatible smartphones; customers shouldn’t be punished financially for being sold technically limited devices. In that case, Rogers’ efforts would constitute a monetization strategy designed to take advantage of archaic infrastructure that desperately needs updating. If, however, independent third-party engineers can examine the Rogers network and determine that its customers are actually experiencing data congestion (that signalling congestion is a non-issue for Rogers), then a discussion about data congestion can and should take place.
A key part of any debate, a part often unspoken, revolves around the framing of an issue. Framing constitutes what a news report/blog “presumes to be significant, how it certifies the relevant players, how it narrates the conflict. Each statement renders the others invisible. The frame is most powerful not for what it includes, but for what it leaves out as either insignificant or obvious” (Gillespie, Wired Shut, 2007, p. 134). Almost none of the congestion debates about wireless broadband have recognized signalling capacity or smart device adherence to signalling standards as key to the issue at hand. Instead we’ve seen a sloppy (and, in the case of carriers, likely intentional) conflation of signalling and data congestion, and this kind of thinking and discourse must stop. If wireless is truly different, then let’s get serious about it being different; let’s look at the technical differences, how congestion can happen at different points in the wireless network than in wireline networks, and what kinds of congestion are causing degraded customer experiences.
In effect: before we start talking about congestion and prioritization, let’s actually figure out what is causing the congestion and who’s responsible for it, and then get into a bigger discussion of what responsible solutions to degraded mobile experiences might look like. Doing anything else obscures the broader framework that congestion discussions take place within and severely limits critical engagement with the issue of mobile congestion.