I downloaded a copy of Desk last week, an OS X application designed by bloggers, for bloggers. It costs $30 from the Mac App Store, which is in line with other blogging software for OS X.
To cut to the chase: I like the application but, as it stands right now, version 1.0 feels like it’s just barely out of beta. As a result, I can’t recommend that anyone purchase Desk until a series of important bug fixes is implemented.
What’s to Love
I write in Markdown. At this point it’s so ingrained in how I style my writing that even my paper notebooks (yes, I still use those…) prominently feature Markdown so I can keep track of links, heading levels, degrees of emphasis, and so forth. Desk supports Markdown and also offers a GUI where, after highlighting some text, you’re given the option to add boldface or italics, insert a hyperlink, or generally add in some basic HTML. That means that people like me (Markdown users) are happy, as are (presumably) those who prefer working from a graphical user interface. Everyone wins!
In line with other contemporary writing applications (e.g. Byword, Write), the menu options are designed to fade away while you’re writing. This means there are no distractions when you’re immersed in the writing itself, and that’s a good thing. You always have the option of calling up the menu items just by scrolling somewhere in the main window. So the menu is there when you want it and absent when you’re actually working. Another win.
Steven Levy’s book, “In the Plex: How Google Thinks, Works, and Shapes Our Lives,” holistically explores the history and various products of Google Inc. The book’s significance comes from Levy’s ongoing access to various Google employees, attendance at company events and product discussions, and other Google-related cultural and business elements since the company’s inception in 1998. In essence, Levy provides us with a superb – if sometimes favourably biased – account of Google’s growth and development.
The book covers Google’s successes, failures, and difficulties as it grew from a graduate project at Stanford University to the multi-billion dollar business it is today. Throughout we see just how important algorithmic learning and automation are; core to Google’s business philosophy is that using humans to rank or evaluate things “was out of the question. First, it was inherently impractical. Further, humans were unreliable. Only algorithms – well drawn, efficiently executed, and based on sound data – could deliver unbiased results” (p. 16). This attitude of the ‘pure algorithm’ is pervasive; translation between languages is treated as just an information problem, one where suitable algorithms can accurately and effectively carry across even the cultural uniqueness bound up in a language. Moreover, when Google’s search algorithms routinely displayed anti-Semitic websites in response to searches for “Jew,” the founders refused to modify the search algorithms because the algorithms had “spoken,” and “Brin’s ideals, no matter how heartfelt, could not justify intervention. ‘I feel like I shouldn’t impose my beliefs on the world,’ he said. ‘It’s a bad technology practice’” (p. 275). This is an important statement: the founders see the product of human mathematical ingenuity as non-human, free of any bias born of its human creation.
Despite some cries that the publishing industry is at the precipice of financial doom, it’s hard to tell based on the proliferation of texts being published year after year. With such high volumes of new works being produced it can be incredibly difficult to sort the wheat from the chaff. Within scholarly circles it (sometimes) becomes readily apparent which books rise above middling quality by turning to citation indices, but outside of such (often paywall-protected) circles it can be more challenging to ascertain which texts are clearly worth reading and which are not.
While I can hardly claim to speak with the weight of scholarly indices, I do read (and rate) a large number of texts each year. In what follows, I offer a list of the ‘best’ books that I read through 2011. Some are thought-provoking, others were important in shaping how I understand various facets of the policy process, and still others offer interesting tidbits of information that have until now been hidden in shadow. For each book I’ll identify its main aim and a few points about what made it compelling enough to get onto my list. Texts are not arranged in any particular ranking order, and all should be available through your preferred bookseller.
Christena Nippert-Eng’s Islands of Privacy is an interview-intensive book that grapples with how her sample group of Chicago residents attempts to achieve privacy, and the regular issues they face in maintaining it on a day-to-day basis. She finds a strong correlation between those who have had their privacy violated and those who want to secure and defend privacy as a concept and as an important element of their lived experience. Seventy-four interviews were conducted with residents of Chicago, and she makes very clear that her findings and conclusions are consequently highly contingent: other populations across America and the world would likely yield very different understandings of what constitutes privacy and what constitutes a violation.
Privacy is defined quite early as “about nothing less than trying to live both as a member of social units – as part of a number of larger wholes – and as an individual – a unique, individuated self” (6). Further, privacy is identified as something to be managed: it exists through the management of public information. Information is seen by participants as inherently public, with effort required to make it private, though interviewed subjects do not necessarily stick to this understanding of privacy throughout their interviews. On the whole, the approach to privacy remains wrapped up in the language of control, seclusion, and selective sharing of information; in this sense, Nippert-Eng’s work can be seen as a fusion of Westin’s Privacy and Freedom and key tenets of Nissenbaum’s Privacy in Context: Technology, Policy, and the Integrity of Social Life.
The Canadian SIGINT Summaries include downloadable copies of leaked CSE documents, along with summary, publication, and original source information for each.
Parsons, Christopher, and Molnar, Adam. (2021). “Horizontal Accountability and Signals Intelligence: Lesson Drawing from Annual Electronic Surveillance Reports,” in David Murakami Wood and David Lyon (Eds.), Big Data Surveillance and Security Intelligence: The Canadian Case.
Parsons, Christopher. (2015). “Stuck on the Agenda: Drawing Lessons from the Stagnation of ‘Lawful Access’ Legislation in Canada,” in Michael Geist (Ed.), Law, Privacy and Surveillance in Canada in the Post-Snowden Era (University of Ottawa Press).
Parsons, Christopher. (2015). “The Governance of Telecommunications Surveillance: How Opaque and Unaccountable Practices and Policies Threaten Canadians,” Telecom Transparency Project.
Parsons, Christopher. (2015). “Beyond the ATIP: New methods for interrogating state surveillance,” in Jamie Brownlee and Kevin Walby (Eds.), Access to Information and Social Justice (Arbeiter Ring Publishing).
Bennett, Colin; Parsons, Christopher; and Molnar, Adam. (2014). “Forgetting and the Right to Be Forgotten,” in Serge Gutwirth et al. (Eds.), Reloading Data Protection: Multidisciplinary Insights and Contemporary Challenges.
Bennett, Colin, and Parsons, Christopher. (2013). “Privacy and Surveillance: The Multi-Disciplinary Literature on the Capture, Use, and Disclosure of Personal Information in Cyberspace,” in W. Dutton (Ed.), Oxford Handbook of Internet Studies.
McPhail, Brenda; Parsons, Christopher; Ferenbok, Joseph; Smith, Karen; and Clement, Andrew. (2013). “Identifying Canadians at the Border: ePassports and the 9/11 Legacy,” in Canadian Journal of Law and Society 27(3).
Parsons, Christopher; Savirimuthu, Joseph; Wipond, Rob; and McArthur, Kevin. (2012). “ANPR: Code and Rhetorics of Compliance,” in European Journal of Law and Technology 3(3).