Biden Signs Executive Order to Regulate AI Systems

The White House’s ‘fact sheet’ underscored extensive international consultations regarding an AI framework, involving countries like India, the E.U., the U.K., Japan and others.

Srajan Girdonia
31 Oct 2023

In a significant move reflecting the global urgency to regulate and secure artificial intelligence (AI) systems, U.S. President Joe Biden signed an executive order on Monday establishing a broad-reaching regulatory framework for AI.

This directive, coming days ahead of U.K. Prime Minister Rishi Sunak's AI Safety Summit at Bletchley Park, highlights the escalating competition among nations to keep pace with the swiftly evolving AI landscape.

Biden’s Executive Order Overview

Utilising the Defense Production Act, previously deployed during the Covid-19 pandemic, President Biden's order compels companies engaged in AI development to inform the U.S. federal government about technologies impacting national security, economic security, or public health. 

This mandate also includes sharing the outcomes of specific safety assessments. Moreover, the executive order establishes an AI Safety and Security Board while demanding the implementation of safety tests.

This recent development follows the White House’s earlier announcement in July, wherein seven major AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—voluntarily committed to complying with safety standards for their AI systems.

Focus on Safety, Security, and Privacy

Aside from safety protocols, President Biden also urged the U.S. Congress to pass bipartisan privacy legislation. To prevent AI from exacerbating discrimination, the order calls for guidance to be issued for landlords and federal contractors. While lacking specifics, the directive also broadly encourages the use of AI in educational settings.

Additionally, the President instructed his administration to expand bilateral, multilateral, and multi-stakeholder consultations on AI. The objective, as per the White House statement, is to construct a robust international framework.

Global Collaboration and India’s Perspective

The White House’s ‘fact sheet’ underscored extensive international consultations regarding an AI framework, involving countries like India, the European Union (E.U.), the U.K., Japan, Australia, Germany, France, Italy, South Korea, Israel, and Kenya. The administration seeks to bolster Japan’s leadership of the G-7 Hiroshima Process, the U.K.’s AI Safety Summit, and India's chairmanship of the Global Partnership on AI (GPAI), aligning with ongoing deliberations at the United Nations.

Commenting on India's role, Amlan Mohanty, a technology policy expert affiliated with Carnegie India, said the Indian government intends to closely monitor the executive order's implementation, particularly ahead of the upcoming meeting in New Delhi this December, which India will host as chair of the Global Partnership on AI (GPAI). Prime Minister Modi has stressed the necessity of a global AI framework, viewing the GPAI as a pivotal platform for India to lead the international discourse.

Global Response and U.K.’s AI Summit

While U.K. Prime Minister Rishi Sunak had initially aimed for the U.K. to spearhead global AI regulation, several countries have responded cautiously or independently moved forward with their own regulations. The E.U. stands at the forefront of Western AI regulation, having advanced stringent draft legislation earlier this year.

The U.K.'s AI summit at Bletchley Park will see U.S. Vice President Kamala Harris in attendance this week. Notably, leaders such as German Chancellor Olaf Scholz, Canadian Prime Minister Justin Trudeau, President Biden, and French President Emmanuel Macron will not attend. However, Italian Prime Minister Giorgia Meloni and European Commission President Ursula von der Leyen are confirmed attendees.

President Biden’s executive order marks a significant stride in the quest to regulate and secure AI systems on a global scale. With the collaborative efforts of various nations, including the initiatives of key players like India, the U.K., and the E.U., the future of AI regulation and safety appears to be advancing, though with varying degrees of participation and approaches from different nations.

As leaders converge at the U.K. AI Safety Summit, the discussions and outcomes of this meeting could potentially shape the international AI landscape, addressing safety, security, and ethical concerns associated with the rapid proliferation of artificial intelligence.