A committee of leaders and scholars from MIT has released a series of policy briefs outlining a framework for the governance of artificial intelligence (AI). The aim of the papers is to provide a resource for U.S. policymakers, enhance U.S. leadership in AI, and mitigate potential harms while promoting the beneficial deployment of AI in society.
The main policy brief, titled “A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector,” proposes leveraging existing regulatory and liability approaches to regulate AI tools. The recommendations emphasize the need to define the purpose of AI applications to tailor appropriate regulations for each use case.
According to Dan Huttenlocher, Dean of the MIT Schwarzman College of Computing and the project’s leader, the committee suggests focusing on regulating areas where human activities are already highly regulated and deemed high-risk. By starting with existing regulated domains, the committee believes a practical approach to governing AI can be achieved.
Asu Ozdaglar, the Deputy Dean of Academics in the MIT Schwarzman College of Computing, also emphasizes the importance of the framework as a concrete way to approach AI governance. The project includes multiple policy papers and comes at a time of increased interest in AI and substantial industry investment in the field.
While the European Union finalizes its own AI regulations, the MIT committee’s framework addresses the challenges of regulating both general-purpose and specialized AI tools, covering issues such as misinformation, deepfakes, and surveillance.
The committee’s involvement in AI governance stems from MIT’s expertise and leadership in AI research. As David Goldston, Director of the MIT Washington Office, explains, MIT feels an obligation to address the important issues raised by the technology it is helping to develop.
The main policy brief proposes extending current regulatory agencies and legal liability frameworks to cover AI. For example, existing licensing laws in the medical field could be applied to regulate AI systems used for medical purposes. Additionally, AI providers should be held accountable for clearly defining the purpose and intent of their tools.
However, the committee acknowledges that AI systems often exist as part of a layered "stack" of services. When a specific service is built on top of a general-purpose underlying AI tool, both the service provider and the tool's builder should share responsibility for any problems that arise.
To facilitate effective AI governance, the policy brief also calls for advances in auditing AI tools, whether government-initiated, user-driven, or arising from legal liability proceedings. Establishing public standards for auditing, either through an independent nonprofit entity or a federal organization, could ensure transparency and accountability in the AI industry.
The committee also proposes the creation of a government-approved self-regulatory organization (SRO), similar to FINRA in the financial industry. Such an AI-focused body could accumulate domain-specific knowledge and engage effectively with the rapidly evolving AI industry.
The policy papers also address specific legal matters related to AI, such as copyright and intellectual property issues. Moreover, the committee recognizes the need for special legal considerations for AI tools that surpass human capabilities, such as mass-surveillance tools.
In addition to regulatory considerations, the policy briefs highlight the importance of encouraging research on how AI can benefit society. For instance, one paper explores the possibility of AI augmenting and aiding workers instead of replacing them, ultimately contributing to long-term economic growth.
The committee’s approach to AI regulation aims to strike a balance between leveraging technological advancements and ensuring appropriate oversight. Recognizing the complexity of human-machine interactions, the committee emphasizes the need for responsive governance that considers both technology and social systems.
MIT’s committee hopes to bridge the gap between AI enthusiasts and those concerned about its implications by advocating for adequate regulation and oversight. They affirm that AI governance is necessary for responsible and ethical development of the technology.
MIT’s involvement in AI governance reflects its commitment to serving the nation and the world by addressing the challenges posed by emerging technologies. The committee’s release of the policy briefs marks a significant moment in advancing AI governance and responsible AI development.
Note:
- Source: Coherent Market Insights, Public sources, Desk research
- We have leveraged AI tools to mine information and compile it