The G7 Communique and Artificial Intelligence

The G7 Communique, issued on May 20, included discussions of AI technology and governance. While the comments are high-level, they are worth paying attention to since they may indicate where ongoing strategic pressure will be placed when developing AI policies.

The G7’s end goal around AI is to ensure that trustworthy AI is developed in alignment with democratic values. The specific values called out include:

  • fairness;
  • accountability;
  • transparency;
  • safety;
  • protection from online harassment, hate, and abuse; and
  • respect for privacy and human rights, fundamental freedoms, and the protection of personal data.

While not surprising, the core values stated do underscore the role of privacy regulators and advocates in the development of AI governance policies and practices.

Three other highlights include:

  1. The need to work with private parties to promote responsible AI, with the caveat that platforms are singled out as needing to address child sexual exploitation and abuse while upholding children’s rights to safety and privacy online.
  2. A strong emphasis on developing interoperable international governance and technical standards to promote responsible AI governance and technologies.
  3. A commitment by the G7, in collaboration with the OECD and GPAI, to launch discussions on generative AI technologies by the end of the year.

The first point, concerning child sexual exploitation, either suggests a new front in discussions of technology policy and online child abuse images or is just another reference to ongoing pressure on large internet platforms. Only time will tell how to interpret this aspect of the G7’s messaging. Monitoring other Five Eyes meetings and G7 outputs may help with this interpretation.

The second point, on international governance, raises the question of whether federal governments will link national regulations to international standards. Should that occur, it will be interesting to see the extent to which regulations under Canada’s Artificial Intelligence and Data Act ultimately refer to, or integrate, such standards. Assuming, of course, that the Act is passed into law in its present form.

The third point underscores how generative AI technologies are attracting attention on prominent and important national and international agendas. It remains to be seen, however, whether such attention persists and whether significant concerns continue to percolate as the public and politicians become used to the technology and its increasing integration into everyday computing functions. For my money, I don’t see emerging uses of AI systems falling off the agenda anytime in the near future.

If you’re interested in assessing the AI-related aspects of the Communique yourself, you can find them in the Preamble at 1, as well as in Digital at 38.