Initial Thoughts on Biden’s Executive Order on Improving the Nation’s Cybersecurity


On May 12, 2021, President Joseph Biden promulgated an Executive Order (EO) to compel federal agencies to modify and enhance their cybersecurity practices. In this brief post I highlight a handful of elements of the EO that are noteworthy for the United States and that, more broadly, can be used to inform and assess non-American cybersecurity practices.

The core takeaway, for me, is that the United States government is drawing from its higher-level strategies to form a clear and distinct set of policies that are linked to measurable goals. The Biden EO is significant in its scope, though it remains unclear whether it will actually lead to government agencies better mitigating the threats facing their computer networks and systems.


Before diving into the EO itself it’s worth noting why it was drafted and promulgated in the first place. In 2021 alone there have been a series of high-profile cybersecurity incidents that have led to serious concerns inside and outside the US government. First, there was the SolarWinds hack that exposed a number of confidential and high-security systems to intrusions from foreign state actors. Second, there were successful intrusions into a huge swathe of Microsoft Exchange servers that led the US government to leverage its powers to remove installed web shells. Third, and most recently, was the ransomware operation that was carried out against Colonial Pipeline’s billing systems, and which caused the company to stop sending fuel along one of the key pipelines in the continental United States. And these are just the most high-profile operations and intrusions that immediately come to mind; in addition, there are hundreds of lower-profile operations in 2021, and a nearly uncountable number of intrusions into federal systems and critical infrastructures in the United States over the past decade.

The rate of intrusions is arguably increasing or, at the very least, not substantially decreasing. There are any number of reasons for this state of affairs. In no particular order, causes originate from:

  • Vendors generally not facing serious financial consequences for inadequately securing their products and services;
  • Lack of coordination between government agencies, as well as between and across private actors, which enables malicious operators to linger in systems and leverage vulnerabilities to target a wider number of victims;
  • Inadequate or non-standardized ways of investigating intrusions or other operations, combined with highly disparate ways of reacting when an intrusion or operation has been discovered;
  • Concerns on the parts of government agencies and private actors that the sharing of information might run afoul of laws that prohibit certain kinds or classes of sharing;
  • The digitization of public and private services, combined with a regular effort to purchase the most affordable as opposed to best secured systems and services, means there are more vulnerable systems each day that can be targeted;
  • As vulnerabilities–0-days and old-days alike–proliferate, in tandem with the rise of crimeware as a service, there are more medium- to low-skilled operators that can (and do) take advantage of vulnerabilities to seek financial gain;
  • More individuals are learning how to undertake offensive operations, with the effect that foreign states can tap into an increasingly savvy set of local operators to carry out either state-sanctioned or state-driven operations that fulfill state objectives, or condone criminal activities that target foreign states but avoid targeting local populations; and
  • The availability of ‘hack for hire’ firms means that high- and mid-skilled commercial companies are willing to sell their tools, operators, and services to any number of bidders, inclusive of those who have an interest in targeting governments’ systems.

As a result of the current state of affairs, Western governments, including the United States, have regularly spoken about the need to improve the state of their cybersecurity. Doing so is a challenging operational task and requires leadership from the very top of the government to have much hope of success. Biden’s executive order aims to provide exactly this kind of leadership while setting clear accountability and reporting targets to ensure compliance with the EO. Where compliance is not absolute, a record of the failures will be kept so as to either direct more resources to improve an agency’s security posture or, alternately, to ultimately replace agency leaders who have failed to carry out the executive order.

Executive Order Sections

In each of the following sections I note elements of the Executive Order that were of interest to me. Often, this means I’m summarizing bits and pieces and subsequently editorializing. When quoting specific text I identify the relevant section of the EO. However, keep in mind that there may be parts of the EO that I didn’t think were particularly interesting on my first or second rounds of analysis. All of which is to say: if you’re curious about the EO beyond my summary and thoughts I’d highly recommend you take some time and read it yourself.

Sec. 1-Policy

This section sets out the challenge facing the United States government, insofar as in the face of cyber-related threats the government must actively work to “improve its efforts to identify, deter, protect against, detect, and respond to these actions and actors.” Notably, the government links trust in infrastructure as proportional to “how trustworthy and transparent” the infrastructure is, while asserting that building trust requires “bold changes and significant investments”. Not only are such investments needed to protect national security but the United States’ economic security, as well.

Critically, this section of the EO makes clear that the subsequent directives, and their outcomes, can be assessed on the basis of whether there is an improvement in the state of the American government’s cybersecurity to the effect of improving trust and transparency in government-managed infrastructures. If, as a result of implementing this EO, that trust and transparency isn’t realized then the entire series of efforts mandated through the EO may be deemed a total or partial failure.

Sec. 2-Removing Barriers to Sharing Threat Information

At its crux, this section asserts that government agencies and private contractors offering services to the government must effectively share threat information if defenders are to address the current threats facing the United States government’s systems and services. As seen throughout the EO, this section highlights how modifications to purchase and contracting requirements can enable better government security postures, such as by requiring a review of the Federal Acquisition Regulation and Defense Federal Acquisition Regulation Supplement contract requirements and inserting language that will compel better information sharing. Moreover, this section would require contracting parties to better collect and preserve data, share that data as well as information pertaining to incidents, collaborate with federal response bodies in prescribed ways, and to “promptly” report cyber incidents to relevant federal agencies with whom they are contracting. This section also calls for incident information to be disclosed to, and collected by, the Cybersecurity & Infrastructure Security Agency (CISA).

The actual language that is to be built into contracting language would involve consultations with key parties, including the Secretary of Defense, Director of the National Security Agency, Attorney General, and Director of the Office of Management and Budget (OMB). The ultimate contracting requirements would be standardized across the US government to ensure that processes would be streamlined and to improve compliance.

Sec. 3-Modernizing Federal Government Cybersecurity

This entire section presents a clear and focused set of principles and broad classes of technology that, if adopted, would improve the state of US federal government cybersecurity. Noted at the top, but not particularly well addressed later on, is the need to “invest in both technology and personnel to match these modernization goals” ((3(a)), emphasis added). There is a considerable shortage of cybersecurity professionals for these roles today, and it’s not clear how the US government will be able to have more people educated with relevant skills or pay a market salary to those who are trained. Without solving the talent problem I fear that a core part of improving the state of government security–hiring qualified personnel–will be doomed to failure.

Setting that aside, the focus on Zero Trust Architectures (i.e., presume you are already breached, only provide the required access to systems and no more, segregate systems to the maximum extent possible, and work to remove default trust architectures from systems and services that could be abused by intruders), push to improve cloud-service governance and security frameworks, and effort to make multi-factor authentication the default for accessing government systems would be excellent to see put in place.1 Moreover, there is a requirement that agencies must “evaluate the types and sensitivity of their respective agency’s unclassified data” and, in doing so, “prioritize identification of the unclassified data considered by the agency to be the most sensitive and under greatest threat” (3(c)(iv)). This could help defenders specify what is most needing protection and thus assign rankings or priorities for internal defensive operations. To some extent this policy would parallel some of what is happening in the US military, where commanders are sometimes told that only 1-3 assets can be protected in a war fighting situation and all others will be presumed lost or compromised: of every asset at a commander’s (digital) fingertips, which are the most important so that defenders can prioritize them?
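The Zero Trust principles above can be made concrete with a small sketch. The sketch below is illustrative only: the users, resources, and `POLICY` table are invented for the example, and a real deployment would sit behind an identity provider and policy engine rather than a dictionary. The point it demonstrates is the core idea: every request is evaluated independently, access is denied by default, and network location never enters the decision.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_verified: bool
    resource: str
    action: str

# Hypothetical least-privilege policy: each user is granted only the
# specific (resource, action) pairs they need; nothing is implied by
# being "inside" the network perimeter.
POLICY = {
    "alice": {("billing-db", "read")},
    "bob": {("billing-db", "read"), ("billing-db", "write")},
}

def authorize(req: Request) -> bool:
    """Evaluate every request independently: identity + MFA + explicit grant."""
    if not req.mfa_verified:                  # no session-wide implicit trust
        return False
    granted = POLICY.get(req.user, set())     # default deny for unknown users
    return (req.resource, req.action) in granted

# Every call is checked; there is no "trusted zone" shortcut.
assert authorize(Request("alice", True, "billing-db", "read"))
assert not authorize(Request("alice", True, "billing-db", "write"))  # least privilege
assert not authorize(Request("bob", False, "billing-db", "read"))    # MFA required
```

The design choice worth noticing is the default-deny posture: an unknown user or an ungranted action fails closed, which is exactly the inversion of the perimeter model the EO is pushing agencies away from.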

Perhaps what’s most positive in this entire section (and, indeed, throughout the EO) is the constant focus on identifying the agency responsible for implementing different elements of the EO and, subsequently, requiring that implementing agencies report back to CISA and others about progress that has(n’t) been made. As an example: agencies have 180 days to implement multi-factor authentication and encrypt data at rest, and must report every 60 days about their progress in doing so. If they have not managed to complete this process in the allocated 180 days then agency heads have to explain the reasons for this failure.

If you don’t track the course of a policy you can’t assess its efficacy, and the EO includes tracking and reporting in spades.

Sec. 4-Enhancing Software Supply Chain Security

Distinct from other concerns (e.g., Huawei hardware equipment), this EO turns to software supply chains. The turn is, in part, expected given the high-profile operations that have exploited software vulnerabilities in 2021 and in the preceding years. What characterizes this section is, in part, the speed: within 30 days a set of draft guidelines has to be prepared for comment and review.

The guidance will track along similar lines as those outlined in Sec. 3. As such, it will require developers/private companies to work from secure software development environments, though the list denoted at 4(e)(i) is not an exhaustive list (as indicated in the language, “including such actions as” (emphasis added)). Requirements will include auditing trust relationships, using multi-factor authentication, documenting and minimizing dependencies, encrypting data, and monitoring operations and alerts associated with attempted and actual cyber events. Beyond this, however, are guidelines for proving conformity with 4(e)(i), automating code integrity assessments and monitoring for code vulnerabilities, including up-to-date information on third-party software and controls over them, producing a Software Bill of Materials (SBOM), participating in a vulnerability disclosure program, attesting to the conformity of secure development practices, and attesting to the integrity and provenance of open source software used in a company’s products. It’s a long list.
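To make the SBOM requirement less abstract, here is a minimal sketch of what such a record can look like and why defenders want one. The field names are loosely modelled on what formats like CycloneDX and SPDX capture, and the component names and versions are invented for the example; a real SBOM would be machine-generated by build tooling, not hand-written.

```python
import json

# Illustrative SBOM record: a product plus the third-party components it ships.
# All names/versions here are made up for the example.
sbom = {
    "component": {"name": "billing-service", "version": "2.4.1"},
    "dependencies": [
        {"name": "openssl", "version": "1.1.1k", "supplier": "OpenSSL Project"},
        {"name": "log4j-core", "version": "2.17.2", "supplier": "Apache"},
    ],
}

def affected_versions(sbom: dict, dep_name: str) -> list:
    """When a vulnerability lands in a dependency, an SBOM turns the question
    'do we ship this?' into a lookup rather than a forensic investigation."""
    return [d["version"] for d in sbom["dependencies"] if d["name"] == dep_name]

print(json.dumps(sbom, indent=2))
print(affected_versions(sbom, "log4j-core"))  # ['2.17.2']
print(affected_versions(sbom, "zlib"))        # []
```

The payoff of mandating SBOMs across contractors is precisely that last function: when the next widely shared library is found vulnerable, agencies can enumerate exposure in minutes instead of weeks.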

Of particular interest is the definition of “critical software”. Such software reflects:

the level of privilege or access required to function, integration and dependencies with other software, direct access to networking and computing resources, performance of a function critical to trust, and potential for harm if compromised (4(g)).

The goal of the definition is to clarify what needs protection, and arguably constitutes software that is linked to national interests as well as national security. The concern, however, is that while the definition arguably encapsulates software that is ‘critical’, the subsequent list of software may become untenably long, especially when it comes to software that is involved in the “performance of a function critical to trust”. I suspect that agencies will require further guidance from existing CISA and NIST frameworks to create a somewhat common set of assets which are recognized as captured under this clause, lest it otherwise turn into a term that captures so much that it becomes functionally or analytically useless.

Similar to Sec. 3, agencies will be required to ensure that the contractors they work with have satisfied the security criteria set out in Sec. 4. Agencies can ask for extensions beyond the stated 30 days but must do so on a case-by-case basis and “only if accompanied by a plan for meeting the underlying requirements” (4(l)). The agency head who receives, and may grant, these extensions–the Director of the Office of Management and Budget–will be required to identify and explain their justifications for exemptions on a quarterly basis. As a result, reports should be collated which should, in turn, enable the executive branch to assess the outcomes of this portion of the EO.

Notably, in situations where software cannot be brought into compliance under this section it is to be removed “as appropriate” from a range of federal contracting options (4(p)). To put it another way, private companies will be very motivated to comply if only so they can continue to bid upon, and receive, federal contracts. Even legacy software will be captured: agencies will need to go through the same process to request extensions or exemptions, and explain how they will enter compliance with the new cybersecurity regime regardless of the age of the software.

Finally, in what may be the most far-reaching consequence, the US Government will launch a pilot program “informed by existing consumer product labeling programs to educate the public on the security capabilities of Internet-of-Things (IoT) devices and software development practices…and shall consider whether such a consumer labeling program may be operated in conjunction with or modelled after any similar existing government programs consistent with applicable law” (4(s)-(t)).

The US government getting into the labeling game will put pressure on a lot of IoT companies to advocate for what they think are fair or appropriate practices, and I half suspect that for consumer IoT this is where the Matter (formerly CHIP) group will be lobbying hard to get the standards it’s developed in front of the US government and functionally approved. The decisions made in this space by the US government will likely contribute to setting the international floor for what security practices are considered mandatory and, thus, this will be a space to watch closely.

Sec. 5-Establishing a Cyber Safety Review Board

First, it’s important to recognize that this is not a cyber security board but a cyber safety board. The latter term is more comprehensive in its nature. The board is being composed to respond, in part, to the SolarWinds hack but going forward is to assess threat activity, vulnerabilities, mitigation activities, and agency responses more broadly.

This board is meant to pull together the relevant agencies as well as select private organizations that will likely be involved in quickly assessing, or subsequently reviewing, major cyber incidents that have (or may have) threatened the US government. I suspect this board has been created to ensure that private stakeholders can be at the table, whereas otherwise incidents might either be mostly kept within the National Security Council (NSC) or smaller/less prominent arrangements or groups. I similarly suspect that this Review Board will functionally draw together the stakeholders inside and outside government that are already engaged in close dialogues with one another, while also potentially improving information sharing and strategic planning between public and private actors.

Sec. 6-Standardizing the Federal Government’s Playbook for Responding to Cybersecurity Vulnerabilities and Incidents

Standardizing response processes is, on its face, a positive maneuver insofar as incident responders and subsequent analysts will have regularized datasets from which they can conduct their own defensive operations. Moreover, when the playbook and standards are written by those with the most expertise in government the result should be that the best standards will be adopted. That’s the theory and hope, at least.

This section of the EO lays out the ground floor for what must be done: adhere to NIST standards, see the guidelines used across relevant federal agencies, and denote how incident responses are to progress to completion while providing some flexibility in recognition of the complexity of response operations. Like other elements of the EO, section 6 includes a process of reviewing and validating the incident response activities of federal agencies and, thus, creates a way to measure whether the efforts to standardize incident response end up driving the anticipated positive changes.

The core risk with these standards (and all standards, really) is associated with the need to meet the standards and be evaluated based upon doing so. This means that what is not standardized is not necessarily ‘counted’ and, as such, can create a disincentive for going beyond the required standards. This is a known issue in any number of transparency and accountability frameworks: you can create a ‘hall of mirrors’ where what is measured is radically different from lived reality. As such, it will be important for any and all assessments of incident response processes to be analyzed carefully by CISA as opposed to becoming a kind of check-box exercise.

Sec. 7-Improving Detection of Cybersecurity Vulnerabilities and Incidents on Federal Government Networks

On the whole, this section is focused on ensuring that defenders are better able to engage in threat hunting on US government networks and capturing information that can feed into threat analysis and intelligence. Key to the planned efforts will be forcing the adoption of end-point sensing systems and software. I frankly have some baseline questions about the likelihood of this working, at least in senior levels of government, given the predilection to use non-government equipment and services to evade Freedom of Information (FOI) laws. To some extent those laws mean that some members of government are incentivized to avoid using monitored government communications systems, with the possible result that there may be an internal group of high-priority actors who are disincentivized from adhering to key elements of this part of the EO.

Sec. 8-Improving the Federal Government’s Investigative and Remediation Capabilities

Like other parts of the EO, elements of this section are designed to take hold very quickly. Within 14 days of the EO being promulgated, the Secretary of Homeland Security in consultation with others must provide recommendations about “the types of logs to be maintained, the time periods to retain the logs and other relevant data, the time periods for agencies to enable recommended logging and security requirements, and how to protect logs” (8(b)). The goal of this is to ensure that agencies are retaining the information that defenders will require when conducting incident response and analysis operations. All this seems pretty reasonable and, if implemented, will provide additional data for defenders. Key, of course, will be ensuring that the data to be collected doesn’t overwhelm defenders or the semi- and fully-automated systems they rely upon to flag events to human analysts.
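The retention requirements in 8(b) can be sketched in a few lines. This is illustrative only: the 90-day retention window, the log fields, and the `audit` logger name are assumptions for the example, not the periods or formats the Secretary's recommendations will actually set. It uses Python's standard-library `TimedRotatingFileHandler`, which rotates the log file on a schedule and deletes rotated files beyond `backupCount`, which is one simple way a retention period gets enforced in practice.

```python
import logging
import logging.handlers
import os
import tempfile

# Write timestamped, machine-parseable audit events; rotate at midnight
# and keep 90 days of rotated files (the retention period is illustrative).
log_path = os.path.join(tempfile.mkdtemp(), "audit.log")
handler = logging.handlers.TimedRotatingFileHandler(
    log_path, when="midnight", backupCount=90
)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s")
)

audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(handler)

# Events like these are what incident responders later reconstruct from.
audit.info("login user=alice src=10.0.0.5 mfa=ok")
audit.warning("login user=alice src=203.0.113.9 mfa=failed")
handler.flush()
```

The tension flagged above shows up directly in these two parameters: a longer `backupCount` and more verbose events give responders more to work with, but also more volume for the semi-automated triage systems to sift.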

Sec. 9 and 11

Sections 9 and 11, pertaining to National Security Systems and General Provisions, respectively, didn’t have anything in them that struck me as particularly noteworthy.

Sec. 10-Definitions

Experts in American security policies may notice that there have been changes in the definitions; on the whole, I defer to their eagle eyes and expertise. What did catch my eye were the detailed definitions of Software Bill of Materials (SBOM) and Zero Trust Architectures. For both terms you can find more information on NIST’s and CISA’s respective websites, but the definitions in the EO will guide how it’s put into action. The SBOM definition almost reads as a defence or justification of the concept in its own right (see: 10(j)) whereas the Zero Trust Architecture definition (10(k)) seems to be written to make clear what is, and is not, captured in such an Architecture. If the former definition is to justify it, the latter seems to be establishing the conditions such that American agencies can’t claim their system is Zero Trust when, in fact, it fails to meet the definitional requirements of one.

Takeaways and Reflections

Overall, the Executive Order on Improving the Nation’s Cybersecurity is an effort to better point the American government towards adopting defensive policies that are measurable and standardized across federal agencies. As has been noted routinely, if you don’t measure something you can’t really assess whether you’re making progress on what you’ve chosen to focus on; for US federal government cybersecurity, this EO is an effort to correct that current deficit.

However, it remains to be seen just how seriously American agencies actually adjust their behaviours in response to the EO. Departments across the United States are reliant on ancient systems and services which likely aren’t particularly well secured and, perhaps, cannot be: how many near-permanent exemptions may have to be written when one or ten agencies simply state that they cannot comply? And, if that’s the case, then what will be the precise implications for those agencies? While the EO is good for making clear that accounts will be kept, there is little to nothing that will punish a government agency for failing to modify its practices. Really, only contractors who are trying to provide services to government may face repercussions should they be excluded from future contracting due to deficient security postures. Maybe that’ll be enough given the prevalence of private companies offering products and services to government agencies?

Nonetheless, I think that other governments (including the Canadian government) would be well served by looking at this EO as an outgrowth of a broader government cybersecurity strategy. A related Canadian strategy would ideally make clear the links between cybersecurity, national security, national interests, foreign affairs, industrial policy, and more, and something like this EO would be a logical implementable outgrowth of such a strategy. I would be concerned, however, if the Canadian government (or any other) were to look at the EO and define it, in and of itself, as a strategy instead of just one part of implementing one.

In closing, there are some things worth highlighting in this EO. First, it is a defence-forward approach: nowhere does it discuss ‘defending forward’ or conducting ‘offensive or defensive cyber operations’. Second, it makes clear that by using the government purse it is possible to modify vendors’ security postures. Third, it doesn’t try to reach beyond the government’s perimeter that much and, instead, recognizes that there’s a need to fix the government’s own systems rather than trying to fix public and private security postures simultaneously. Fourth, it includes a lot of accountability and tracking of how well things are done. At a minimum this will ensure that the United States’ Government Accountability Office and the various Inspectors General will have lots of ammunition for critical reporting of the (in)security of government systems and interconnected private systems. Lastly, I do worry that the requirements as they are explained may inhibit the ability of smaller contractors to work with the government unless the tooling environments for software and system development are themselves generally hardened. While bigger companies such as IBM, Microsoft, or Amazon will be able to throw internal auditors onto projects to ensure they meet government requirements, I have doubts about smaller, more niche, private businesses. Thus, if there is one ‘next step’ in all this, separate from the Executive Order itself, it would be for the US government to work hard to improve the general development environments and cloud services which developers use. Doing so would ensure that a wider diversity of contractors could work with the US government while, also and more broadly, improving upon the security hygiene of developers that work exclusively in the private sector.


  1. Google, as an example, credits adopting multi-factor authentication as central to preventing or stopping digital operations that target its employees, and a multi-factor authentication alert is what raised the flags that led to revealing the SolarWinds hack. ↩︎