AI Regulation Tracker: 52 Countries Now Have AI Laws

The global regulation of artificial intelligence has accelerated rapidly. As of April 2026, 52 countries have enacted legislation specifically addressing AI development, deployment, or governance — more than double the 24 countries with AI-specific laws at the start of 2025.

The Regulatory Landscape

AI regulation varies significantly across jurisdictions, but several common themes have emerged. Most laws address transparency requirements, mandating disclosure when consumers interact with AI systems. Risk-based frameworks that impose stricter requirements on higher-risk applications are becoming the dominant regulatory model.

The European Union's AI Act remains the most comprehensive framework, with its provisions phasing in throughout 2025 and 2026. The Act categorizes AI applications into risk tiers and imposes requirements ranging from transparency disclosures for low-risk systems to pre-market conformity assessments for high-risk applications.

Regional Approaches

In North America, the United States has taken a sector-specific approach, with regulations emerging from individual agencies rather than comprehensive federal legislation. The FDA has published guidelines for AI in medical devices, the SEC has issued rules for AI in financial services, and the FTC has taken enforcement actions against deceptive AI practices. Several states, led by California and Colorado, have enacted their own AI laws.

Canada's Artificial Intelligence and Data Act, passed in late 2025, establishes a national framework with criminal penalties for reckless AI deployment that causes serious harm.

In Asia, China continues to lead in AI regulation volume, with specific rules covering generative AI, recommendation algorithms, deepfakes, and AI-powered decision systems. Japan has adopted a lighter-touch approach focused on industry guidelines, while South Korea's AI Basic Act establishes a comprehensive framework similar to the EU's risk-based model.

India enacted its Digital India AI Governance Framework in early 2026, requiring registration of high-risk AI systems and establishing an AI regulatory sandbox for testing new applications.

Key Provisions Worldwide

Across the 52 countries with AI legislation, the most common provisions are:

- Transparency and disclosure requirements (48 of 52 laws)
- Prohibition of certain AI practices, such as social scoring and manipulative systems (41 countries)
- Requirements for human oversight of high-risk AI decisions (38 countries)
- Obligations for AI impact assessments (35 countries)
- Data governance requirements specific to AI training data (32 countries)
- Provisions for algorithmic auditing (27 countries)

Enforcement Begins

The transition from legislation to enforcement is underway. The EU AI Office has initiated its first investigations and is expected to issue preliminary fines by July 2026. China has already levied penalties against companies for non-compliant AI systems. Brazil's data protection authority has expanded its enforcement to cover AI applications under its new AI framework.

Enforcement capacity remains a challenge. Many countries have passed AI laws without establishing dedicated regulatory bodies or allocating sufficient resources for oversight. Industry observers note a significant gap between legislative ambition and enforcement reality in many jurisdictions.

Impact on Business

For multinational companies, the patchwork of AI regulations creates significant compliance complexity. Large technology companies are investing heavily in regulatory compliance teams, with some reportedly spending over $100 million annually on AI governance functions.

Smaller companies and startups face particular challenges. The cost of compliance with multiple regulatory frameworks can be prohibitive, leading some to limit their AI deployments to less regulated markets or to rely on compliance-as-a-service providers.

International Coordination

Efforts toward international harmonization are ongoing but slow. The OECD AI Principles, the G7 Hiroshima AI Process, and the Council of Europe's AI Convention provide frameworks for alignment, but significant differences in national approaches persist.

The most notable divergence is between the EU's precautionary, rights-based approach and the more innovation-focused frameworks in the US and parts of Asia. Companies operating globally must navigate both philosophies simultaneously, often defaulting to the strictest applicable standard.

Looking Ahead

The regulatory trajectory is clear: more countries will enact AI laws, existing laws will be strengthened and enforced, and international coordination will gradually improve. Companies developing or deploying AI systems should treat regulatory compliance not as an afterthought but as a core business function that influences product design, deployment decisions, and market strategy.