Navigating AI governance across the globe: regional approaches and frontline insights

As artificial intelligence (AI) continues to revolutionise industries and reshape societies, the need for effective governance frameworks has become increasingly urgent. In an era of rapid technological evolution, understanding the nuances of regional regulation - whether in Europe, Asia, the Middle East, or beyond - helps you identify both the pitfalls and opportunities inherent in AI innovation. We’ve seen not only the complex regulatory landscapes emerging worldwide, but also the practical governance challenges faced by organisations on the ground. From global compliance efforts and holistic risk management to determining who should ultimately steer AI governance within a business, the issues at play illustrate the real-world dilemmas larger organisations are facing every day.

This report examines the emerging governance models in the EU, UK, Asia, the Middle East, and Australia, offering frontline insights on how forward-thinking organisations can craft robust strategies for responsible AI implementation.

AI governance and emerging AI regulation

Effective AI governance is not built in isolation - it’s closely tied to the evolving landscape of laws, regulations, guidelines, and frameworks shaping the use and development of AI technologies. Understanding these regulatory frameworks is fundamental to crafting governance strategies that are not only compliant but also adaptable to the rapid pace of innovation. We recognise that navigating this complex regulatory environment can be a daunting task. That’s why our AI Horizon Tracker provides a comprehensive, global perspective on AI regulation. Covering 22 jurisdictions, the Tracker maps out existing laws, proposed legislation, guidelines, and enforcement actions across key regions, offering clarity on both AI-specific regulations and broader frameworks impacting AI adoption. It highlights emerging trends, such as the EU’s AI Act, and provides insights into how businesses can align their governance frameworks with regulatory requirements.


Part I: AI governance strategies in key regions

Part II: Navigating the complexities of AI governance: insights from the front lines

Having explored the diversity of AI governance across various key regions, it's clear that organisations face many challenges in implementing effective AI governance strategies. Drawing from our extensive experience advising clients on the front lines of AI governance, we've identified several key questions and issues that are shaping the discourse:

  • Organisational responsibility: One of the most pressing questions is determining which function should have overall responsibility for AI governance. Should it fall under data protection, a separate AI function, or another department entirely? The answer often depends on the organisation's structure, industry, and specific AI use cases. Some companies place responsibility with existing data protection teams, while others create dedicated AI functions. Many also introduce cross-functional collaboration involving legal, IT, data protection, and ethics experts, ensuring broader perspectives are integrated into decision-making and policy development.

  • Holistic risk management: Integrating AI risk reviews into existing processes can be challenging when AI systems intersect with data protection, cybersecurity, and operational concerns. For example, there’s an ongoing debate about whether AI risk reviews should be integrated into existing data protection reviews or conducted as separate processes. Each approach has its merits, and the decision often hinges on the organisation's existing risk management framework and the nature of its AI applications. Successful strategies often include developing a unified AI risk framework or taxonomy that fits naturally within broader risk management activities. In this way, organisations avoid siloed reviews and can more easily identify and address overlapping risks (a minimal sketch of such a taxonomy follows this list).

  • Agile legal and compliance processes: Organisations are grappling with how to build processes that allow for efficient AI use, acquisition, and development without overburdening legal teams. Innovation can suffer if legal and compliance reviews become bottlenecks. Many organisations adopt tiered approaches that separate lower-risk initiatives - where self-service tools and brief reviews may suffice - from more complex, high-risk AI projects requiring substantial legal guidance from the outset (the tiering logic is also illustrated in the first sketch after this list). This ensures that governance frameworks remain robust while allowing businesses to experiment and adapt quickly.

  • Global compliance challenges in a fragmented regulatory landscape: With the EU AI Act setting a high bar for AI regulation, many organisations are considering whether to apply these standards globally. However, this approach must be balanced against the need to comply with a multitude of emerging AI laws and regulatory approaches in other jurisdictions. A flexible strategy involves establishing global AI principles and then layering jurisdiction-specific requirements on top as necessary (see the second sketch after this list). Keeping a close watch on evolving legislation worldwide allows organisations to anticipate changes and update their governance measures efficiently.

  • Geopolitical dynamics and AI governance: Shifting political climates and international relations can alter the trajectory of AI regulation, from cross-border data transfer rules to the acceptance of emerging AI technologies. Organisations benefit from scenario planning and maintaining infrastructure that can adapt to new rules and restrictions. By balancing local requirements with global principles, it becomes easier to pivot as regulations evolve.

  • Supply chain management & AI contracting: Large organisations with extensive supplier networks face significant challenges in managing AI-related risks throughout their supply chains. Organisations that implement supplier assessments tailored specifically to AI capabilities and risks can more accurately evaluate whether vendors meet essential governance and performance criteria. This leads to better contracts, clearer responsibilities, and stronger overall collaboration.

    In terms of contracting, deciding how to allocate risks and responsibilities in contracts requires careful consideration of data rights, model ownership, and performance commitments. Standardised clauses can streamline negotiations and clarify liability boundaries. This is especially relevant when partnering with third parties whose AI tools may involve complex layers of intellectual property and shared data usage.

  • AI governance metrics and continuous improvement: As AI governance is still evolving, organisations need methods to measure their own effectiveness and drive ongoing enhancements in processes. Key performance indicators might track the reduction of certain risks, the efficiency of AI review processes, or the overall alignment of AI initiatives with corporate values. Benchmarking these measures against industry peers and recognised best practices allows organisations to refine their strategies and stay at the forefront of responsible AI innovation.
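
To make the unified taxonomy and tiered triage ideas concrete, the following minimal Python sketch models one way they could fit together. It is purely illustrative: the risk domains, the 1-to-5 scoring scale, the tier thresholds, and the triage helper are all assumptions made for the example, not a prescribed framework.

    from dataclasses import dataclass, field
    from enum import Enum

    class RiskDomain(Enum):
        # Hypothetical domains; a real taxonomy reflects the organisation's own risk register.
        DATA_PROTECTION = "data protection"
        CYBERSECURITY = "cybersecurity"
        OPERATIONAL = "operational"

    class Tier(Enum):
        LOW = "self-service tools and a brief review"
        MEDIUM = "standard cross-functional review"
        HIGH = "substantial legal guidance from the outset"

    @dataclass
    class AIInitiative:
        name: str
        # Each domain scored 1 (minimal) to 5 (severe); the scale is illustrative.
        domain_scores: dict = field(default_factory=dict)

    def triage(initiative: AIInitiative) -> Tier:
        # Route on the worst-scoring domain; these thresholds are assumptions, not recommendations.
        worst = max(initiative.domain_scores.values(), default=1)
        if worst >= 4:
            return Tier.HIGH
        if worst >= 2:
            return Tier.MEDIUM
        return Tier.LOW

    chatbot = AIInitiative(
        name="internal HR chatbot",
        domain_scores={
            RiskDomain.DATA_PROTECTION: 4,  # processes employee personal data
            RiskDomain.CYBERSECURITY: 2,
            RiskDomain.OPERATIONAL: 1,
        },
    )
    tier = triage(chatbot)
    print(f"{chatbot.name}: {tier.name} tier ({tier.value})")

The value of a single scoring vocabulary across data protection, cybersecurity, and operational concerns is that the triage step can sit inside existing risk processes rather than beside them, which is how siloed reviews are avoided.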
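
Similarly, the "global principles plus local requirements" strategy can be pictured as a configuration merge. Again a minimal sketch under stated assumptions: the baseline keys and the EU and UK overlays below are hypothetical placeholders, not statements of what any law actually requires.

    from copy import deepcopy

    # Hypothetical global baseline; keys and values are placeholders, not legal requirements.
    GLOBAL_PRINCIPLES = {
        "transparency_notice": True,
        "human_oversight": True,
        "data_minimisation": True,
        "impact_assessment": "standard",
    }

    # Jurisdiction-specific overlays layered on top of the baseline (also illustrative).
    JURISDICTION_OVERRIDES = {
        "EU": {"impact_assessment": "fundamental-rights", "conformity_assessment": True},
        "UK": {"sector_regulator_guidance": True},
    }

    def requirements_for(jurisdiction: str) -> dict:
        # Start from the global baseline, then apply any local overlay.
        merged = deepcopy(GLOBAL_PRINCIPLES)
        merged.update(JURISDICTION_OVERRIDES.get(jurisdiction, {}))
        return merged

    print(requirements_for("EU"))

The appeal of the merge pattern is that the global baseline changes once, centrally, while the local overlays stay small and easy to audit as new legislation lands.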

These insights from the front lines of AI governance highlight the need for flexible, adaptable strategies that can navigate the complex interplay of regional regulations, industry challenges, and rapidly evolving technologies. As we continue to advise clients on these critical issues, we're developing innovative approaches to the challenges businesses are facing.

The key takeaways

As regulation continues to expand globally - from the EU’s risk-based AI Act to the flexible and sector-specific frameworks in the UK, Asia, the Middle East, and beyond - organisations face a crucial choice for their AI strategy. A one-size-fits-all compliance approach can ensure consistency but may limit innovation and place undue burdens on smaller companies, whereas a region-by-region strategy might create fragmentation, internal complexity, and additional costs.

Across all these jurisdictions, a number of unifying themes have emerged that provide common ground for organisations. Nearly every major regime places a premium on transparency, accountability, and data minimisation, emphasising that trustworthy AI rests on well-defined oversight processes and responsible data use. A clear example is the widespread agreement on the importance of robust risk assessment and bias detection, which is embodied in ISO/IEC 42001 and echoed in guidelines from the Middle East to Hong Kong. These parallels reveal that frameworks increasingly converge on core principles of transparency, ethics, and human-centric innovation.

Generally, the best choice lies in an adaptable governance framework: one that integrates the shared pillars of transparency, accountability, risk-based processes, and data protection by design, while preserving the agility to accommodate differing local requirements. A key marker of success in AI governance is the ability to anticipate regulatory shifts, invest in forward-looking risk management, and embed cross-functional oversight that supports sustained, responsible AI innovation. By aligning to these universal principles, organisations can adopt a cohesive approach that reduces regulatory friction, encourages innovation, and fosters global trust in AI.