Steven Levy’s book, “In the Plex: How Google Thinks, Works, and Shapes Our Lives,” holistically explores the history and various products of Google Inc. The book’s significance comes from Levy’s ongoing access to various Google employees, attendance at company events and product discussions, and other Google-related cultural and business elements since the company’s inception in 1998. In essence, Levy provides us with a superb – if sometimes favourably biased – account of Google’s growth and development.
The book covers Google’s successes, failures, and difficulties as it grew from a graduate project at Stanford University to the multi-billion dollar business it is today. Throughout, we see just how important algorithmic learning and automation are; core to Google’s business philosophy is that using humans to rank or evaluate things “was out of the question. First, it was inherently impractical. Further, humans were unreliable. Only algorithms – well drawn, efficiently executed, and based on sound data – could deliver unbiased results” (p. 16). This attitude of the ‘pure algorithm’ is pervasive; translation between languages is treated as just an information problem, one where suitable algorithms can accurately and effectively convey even the cultural uniqueness that is bound up with languages. Moreover, when Google’s search algorithms routinely displayed anti-Semitic websites in response to searches for “Jew,” the founders refused to modify those algorithms because the algorithms had “spoken”: “Brin’s ideals, no matter how heartfelt, could not justify intervention. ‘I feel like I shouldn’t impose my beliefs on the world,’ he said. ‘It’s a bad technology practice’” (p. 275). This is an important statement: the founders see the products of human mathematical ingenuity as non-human, free of any bias born of their human creation.
There are ongoing concerns in Canada about the CRTC’s capacity to gauge and evaluate the quality of Internet service that Canadians receive. These were most recently brought to the fore when the CRTC announced that Canada ranked second only to Japan in broadband access speeds. Such a stance is PR spin and, as noted by Peter Nowak, “[o]nly in the halcyon world of the CRTC, where the sky is purple and pigs can fly, could that claim possibly be true.” This head-in-the-sand approach to understanding the Canadian broadband environment is, unfortunately, similarly reflected in the lack of a federal digital strategy and the wholly inadequate funding for even the most basic governmental cyber-security.
To return the CRTC from the halcyon world in which it is presently stuck, and to establish firm empirical data to guide a digital economic strategy, the Government of Canada should establish a framework to audit ISPs’ infrastructure and network practices. Ideally this would result in an independent body that could examine the quality and speed of broadband throughout Canada. Its methodology and results would be published openly and could assure all parties – businesses, citizens, and consumers – that ISPs’ infrastructure can be trusted and relied upon. Importantly, having an independent body research and publish data on Canadian broadband would relieve companies and consumers of having to assume this role, freeing them to use the Internet for productive (rather than watchdog-related) purposes.
Google Analytics has become an almost ever-present part of the contemporary Internet. Large, medium, and small sites alike track their website visitors using Google’s free tools to identify where visitors come from, what they look at (and for how long), where they subsequently navigate to, what keywords bring people to websites, and whether internal metrics are in line with advertising campaign goals. As of 2010, roughly 52% of all websites used Google’s analytics system, and it accounted for 81.4% of the traffic-analysis tools market. As of this writing, Google’s system is used by roughly 58% of the top 10,000 websites, 57% of the top 100,000 websites, and 41.5% of the top million sites. In short, Google provides analytics services to a considerable number of the world’s most commonly frequented websites.
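For readers unfamiliar with how this tracking is wired into pages: Google’s classic asynchronous page tag popularized a command-queue pattern, where the page pushes commands onto a plain array (`_gaq`) so rendering never blocks on the tracker script. The sketch below illustrates that pattern; the `drainQueue` helper is a hypothetical stand-in for what the library does once it loads, not Google’s actual code, and the property ID is a placeholder.

```javascript
// Classic asynchronous analytics pattern: the page queues commands in a
// plain array so it never blocks waiting for the tracker script to load.
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXX-X']); // placeholder property ID
_gaq.push(['_trackPageview']);

// Illustrative stand-in for what the analytics library does on arrival:
// replay each queued command against a real handler, returning how many
// commands were processed.
function drainQueue(queue, handlers) {
  var processed = 0;
  queue.forEach(function (cmd) {
    var name = cmd[0];
    var args = cmd.slice(1);
    if (typeof handlers[name] === 'function') {
      handlers[name].apply(null, args);
      processed += 1;
    }
  });
  return processed;
}
```

The design point is that tracking calls made before the library finishes downloading are not lost; they sit in the array until the script arrives and replays them.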
In this short post I want to discuss the terms of using Google Analytics. Based on conversations I’ve had over the past several months, it seems that many medium and small business owners are unaware of the conditions that Google places on using its tool. Further, independent bloggers are using analytics engines – either intentionally or by default through their website host/creator – while ignorant of what they must do to use them legitimately. After outlining the brief bits of legalese that Google requires – and suggesting what Google should do to ensure terms-of-service compliance – I’ll suggest a business model/addition that could simultaneously assist with privacy compliance while netting an enterprising company/individual a few extra dollars in revenue.
Throughout the Global North there are discussions on the table about introducing so-called ‘three-strikes’ rules, designed to cut off, or hinder, people’s access to the Internet should they be caught infringing copyright. In the EU, the big content cartel is trying to get ISPs to inspect consumer data flows and, when copyrighted content is identified, ‘punish’ the individual in some fashion. Fortunately, it looks as though at least the EU Parliament is against imposing such rules, on the basis that disconnecting individuals from the Internet would infringe on EU citizens’ basic rights. In an era where we are increasingly digitizing our records and basic communications infrastructure, it’s delightful to see a body in a major world power recognize the incredibly detrimental and over-reactionary behavior that the copyright cartel is calling for. Copyright infringement does not trump basic civil liberties.
Now, I expect that many readers would say something along these lines: “I don’t live in the EU, and the EU Parliament has incredibly limited powers. Who cares? This (a) doesn’t affect me, and (b) is unlikely to have a real impact on EU policy.”
The Canadian SIGINT Summaries include downloadable copies of leaked CSE documents, along with summary, publication, and original-source information.
Parsons, Christopher; and Molnar, Adam. (2021). “Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports,” David Murakami Wood and David Lyon (Eds.), Big Data Surveillance and Security Intelligence: The Canadian Case.
Parsons, Christopher. (2015). “Stuck on the Agenda: Drawing lessons from the stagnation of ‘lawful access’ legislation in Canada,” Michael Geist (Ed.), Law, Privacy and Surveillance in Canada in the Post-Snowden Era (University of Ottawa Press).
Parsons, Christopher. (2015). “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” Telecom Transparency Project.
Parsons, Christopher. (2015). “Beyond the ATIP: New methods for interrogating state surveillance,” in Jamie Brownlee and Kevin Walby (Eds.), Access to Information and Social Justice (Arbeiter Ring Publishing).
Bennett, Colin; Parsons, Christopher; and Molnar, Adam. (2014). “Forgetting and the right to be forgotten,” in Serge Gutwirth et al. (Eds.), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges.
Bennett, Colin; and Parsons, Christopher. (2013). “Privacy and Surveillance: The Multi-Disciplinary Literature on the Capture, Use, and Disclosure of Personal Information in Cyberspace,” in W. Dutton (Ed.), Oxford Handbook of Internet Studies.
McPhail, Brenda; Parsons, Christopher; Ferenbok, Joseph; Smith, Karen; and Clement, Andrew. (2013). “Identifying Canadians at the Border: ePassports and the 9/11 legacy,” in Canadian Journal of Law and Society 27(3).
Parsons, Christopher; Savirimuthu, Joseph; Wipond, Rob; and McArthur, Kevin. (2012). “ANPR: Code and Rhetorics of Compliance,” in European Journal of Law and Technology 3(3).