As the adoption of artificial intelligence (AI) skyrockets, transforming industries and reshaping the fabric of our daily lives, regulation has lagged behind, at least until now. To date, the rapid pace of AI innovation has outstripped the ability of policymakers to craft effective regulatory frameworks, leaving a gap that raises critical ethical, legal, and societal questions.
Moreover, the slow adoption of robust regulatory frameworks poses significant risks, from privacy violations to unchecked biases, necessitating an urgent reevaluation of AI governance.
The Current Global AI Regulatory Landscape
This need for reevaluation has sparked a sudden flurry of activity around AI regulation. Given the typically slow pace of bureaucracy, it is impressive how many well-considered international AI regulations are already implemented or currently in development.
While different in specifics, size, and scope, key themes across regulatory initiatives include:
- Governance: Documentation, accountability, impact assessments, and risk management.
- Ethical Considerations: Fairness, transparency, non-discrimination.
- Risk Management: Classification of AI systems based on risk levels.
- Accountability: Roles and responsibilities for AI developers and users.
- Enforcement: Penalties for non-compliance.
The EU’s Comprehensive Regulatory Framework
The EU AI Act is the world’s first comprehensive regulatory framework to ensure artificial intelligence’s responsible and ethical use. It received a favorable vote from the European Parliament on 13 March 2024 and was approved by the European Council on 21 May 2024. It enters into force 20 days after its publication in the EU’s Official Journal.
The Act establishes robust safeguards to ensure AI is used responsibly and upholds human rights. Let’s take a closer look:
A Risk-Based Approach
Under the Act, AI systems are classified based on the potential harm they could cause to individuals’ rights and safety. The Act identifies high-risk AI systems, particularly those that could influence democratic or legal procedures, and subjects them to stricter regulations. This risk-based framework ensures that the most potentially harmful AI applications receive the highest scrutiny.
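To make the tiered model concrete, here is a minimal sketch of how an organization might map its own use cases to AI Act-style risk tiers. The tier names follow the Act’s four-level model (unacceptable, high, limited, minimal); the example use cases, their mappings, and the function name are hypothetical illustrations, not a statement of how any regulator classifies systems:

```python
# Illustrative sketch only: a simplified mapping of AI use cases to
# EU AI Act-style risk tiers. The example use cases and mappings below
# are hypothetical; real classification requires legal analysis.

RISK_TIERS = ["unacceptable", "high", "limited", "minimal"]

# Hypothetical examples of how use cases might map to tiers.
USE_CASE_TIERS = {
    "social_scoring": "unacceptable",  # a banned practice under the Act
    "credit_scoring": "high",          # affects access to essential services
    "chatbot": "limited",              # transparency obligations apply
    "spam_filter": "minimal",          # minimal obligations
}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; flag unknown cases for review."""
    return USE_CASE_TIERS.get(use_case, "needs_review")

print(classify("credit_scoring"))
```

The point of the default `"needs_review"` value is that, in practice, any use case not yet assessed should be escalated rather than silently treated as low-risk.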
Human Oversight
A fundamental aspect of the Act is the requirement for human-in-the-loop systems, especially in scenarios where AI-based decisions could significantly impact human lives. This provision ensures that AI does not entirely replace human decision-making but instead supports and enhances human judgment. Human oversight is crucial for maintaining accountability and preventing undue reliance on automated systems.
Transparency
To address the “black box” phenomenon in AI, where the decision-making process of models is opaque, the Act mandates that AI providers ensure their systems’ logic, judgments, and rationales are understandable to users. This requirement aims to demystify AI operations, allowing users to comprehend and trust the system’s actions, thereby fostering greater transparency and accountability.
Ban on Certain AI Practices
The Act prohibits AI practices that are deemed unethical, such as those designed to control or manipulate human behavior, exploit vulnerable populations, or promote unfair discrimination and bias. By banning these harmful practices, the Act seeks to protect individuals and ensure that AI development and deployment are aligned with ethical standards and human rights principles.
Authorities Overseeing AI Compliance
In the EU, several authorities are overseeing AI compliance:
- The European Data Protection Board and European Data Protection Supervisor
- The EU AI Board, as established by the AI Act
- AI regulatory bodies of member states, such as the Spanish AI Supervision Agency
- Data Protection Authorities of Member States
AI Regulation Beyond the EU
Beyond the EU, other countries are also making progress in AI regulation, adopting a wide range of enforcement strategies. Below is a map outlining the landscape of global AI regulations.
![](https://elevateconsult.com/wp-content/uploads/2024/07/image.png)
Regulatory Approaches of Non-EU Countries
Various institutions are currently implementing a range of frameworks and methodologies, each driven by its specific mandates, institutional dynamics, and perceived benefits. Let’s examine the initiatives undertaken by countries outside the EU.
| Region | Policy Details | Authorities Overseeing AI Compliance |
| --- | --- | --- |
| United Kingdom | The UK hasn’t introduced comprehensive AI regulation, opting instead for a context-sensitive, balanced approach that uses existing sector-specific laws to guide AI. However, in February 2024, the UK government issued a white paper that outlined its domestic AI regulation, explained the country’s current regulatory landscape for AI, and detailed its plans to improve it. | Office for AI; Information Commissioner’s Office; Digital Regulation Cooperation Forum |
| United States | The US adopts a case-by-case strategy for AI governance and lacks unified AI regulation. However, there is a slew of federal guidelines and frameworks, including executive orders such as Maintaining American Leadership in AI and Promoting the Use of Trustworthy AI in the Federal Government, as well as legislation and proposed bills: the AI Training Act, the National AI Initiative Act, the Algorithmic Accountability Act (a proposed bill to regulate how AI models make “decisions” in order to eliminate unfair algorithmic bias and discrimination), the Transparent Automated Governance Act (proposed), and the Global Technology Leadership Act (proposed). | Federal Trade Commission; Department of Justice; Consumer Financial Protection Bureau; Equal Employment Opportunity Commission |
| Canada | Canada is advancing the AI and Data Act (AIDA) to safeguard Canadians from high-risk AI and promote responsible AI practices. Canada also has a Directive on Automated Decision-Making, which sets out standards the federal government must follow when using automated decision-making systems. | Ministry of Innovation, Science and Economic Development; Office of the Privacy Commissioner of Canada |
| China | China is actively introducing AI regulations, with several specific applications already governed by rules such as the Algorithmic Recommendations Management Provisions. Ongoing initiatives include the Draft Provisions on Deep Synthesis Management and the Measures for the Management of Generative AI Services. | Cyberspace Administration of China; Ministry of Industry and Information Technology; State Administration for Market Regulation |
| Australia | Australia uses existing regulatory structures for AI oversight rather than AI-specific laws. However, the University of Technology Sydney released a report, “The State of AI Governance in Australia”, which outlines progress made in this area. | Department of Industry, Science and Resources; Office of the eSafety Commissioner; Office of the Victorian Information Commissioner; Competition and Consumer Commission |
The OECD AI Principles
In addition, the Organisation for Economic Co-operation and Development (OECD) – an international organization in which governments collaborate to address common challenges, establish global standards, and identify best practices – introduced the OECD AI Principles to create a one-stop-shop for AI policymakers.
Initially adopted in 2019 and updated in May 2024, the OECD AI Principles have been revised to reflect new technological and policy advancements, ensuring their continued relevance and robustness. These principles guide AI developers to create trustworthy AI systems and offer policymakers recommendations for effective AI policies.
Countries employ the OECD AI Principles and related tools to shape policies and develop AI risk frameworks, fostering global interoperability between jurisdictions. Today, entities such as the European Union, the Council of Europe, the United States, the United Nations, and other jurisdictions incorporate the OECD’s definition of an AI system and its lifecycle into their legislative and regulatory frameworks.
Ensuring Compliance: A GRC Professional’s Toolkit
These AI regulations typically encompass a broad set of requirements that can be difficult to track for those in the field. GRC professionals can easily be unsure where to start or how to incorporate new AI legislation into existing GRC structures. Here are some essential elements to include in your AI GRC toolkit.
The first step is taking inventory to understand what is happening within your organization. Consider the internal development that is taking place and which third-party products are being used. Examine the use cases and consider how AI is being used and controlled. Gaining this initial line of sight is critical to moving forward with the following steps:
- Risk Assessment: Identify and classify AI systems based on potential risks, and perform impact assessments for those considered high-risk.
- Governance Framework: Establish clear policies and procedures for AI development and deployment.
- Documentation: Maintain detailed AI system design, testing, and monitoring records.
- Audits: Conduct regular internal and external audits to assess compliance.
- Continuous Monitoring: Implement mechanisms for ongoing monitoring and evaluation of AI systems.
- Training and Awareness: Educate employees on AI regulations and compliance requirements.
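The inventory and toolkit steps above can be sketched as a simple data model. This is an illustrative sketch only: the field names, sample entry, and gap-checking logic are hypothetical choices for demonstration, not a prescribed compliance schema:

```python
# Illustrative sketch only: a minimal AI-system inventory for GRC
# tracking, covering the toolkit elements above (risk classification,
# impact assessments for high-risk systems, documentation, audits).
# Field names and the sample entry are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystem:
    name: str
    vendor: str                        # internal team or third-party provider
    risk_tier: str                     # e.g. "high", "limited", "minimal"
    impact_assessment_done: bool = False
    documentation_complete: bool = False
    last_audit: Optional[str] = None   # ISO date of most recent audit

def compliance_gaps(system: AISystem) -> list:
    """List outstanding compliance actions for one inventoried system."""
    gaps = []
    if system.risk_tier == "high" and not system.impact_assessment_done:
        gaps.append("perform impact assessment")
    if not system.documentation_complete:
        gaps.append("complete design/testing/monitoring records")
    if system.last_audit is None:
        gaps.append("schedule internal audit")
    return gaps

# Hypothetical inventory entry: a third-party, high-risk system.
inventory = [AISystem("resume-screener", "third-party", "high")]
for system in inventory:
    print(system.name, "->", compliance_gaps(system))
```

Running the gap check across the whole inventory on a schedule is one simple way to operationalize the “continuous monitoring” element of the toolkit.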
Chat with the Experts
It cannot be overstated how important it is to familiarize yourself with these laws to ensure you remain within the boundaries of AI compliance.
As a GRC professional, you have both the responsibility and the opportunity to keep your company compliant and competitive. Staying ahead of the regulatory landscape gives you ample time to identify and understand emerging requirements.
If you’re unsure where to start or what to focus on, schedule a consultation with Elevate. We offer specialized AI governance and audit services tailored to your needs.