As artificial intelligence (AI) continues to revolutionise industries and reshape societies, the need for effective governance frameworks has become increasingly urgent. In an era of rapid technological evolution, understanding the nuances of regional regulation - whether in Europe, Asia, the Middle East, or beyond - helps you identify both the pitfalls and opportunities inherent in AI innovation. We’ve seen not only the complex regulatory landscapes emerging worldwide, but also the practical governance challenges faced by organisations on the ground. From global compliance efforts and holistic risk management to determining who should ultimately steer AI governance within a business, the issues at play illustrate the real-world dilemmas larger organisations are facing every day.
This report examines the emerging governance models in the EU, UK, Asia, the Middle East, and Australia, offering frontline insights on how forward-thinking organisations can craft robust strategies for responsible AI implementation.
Effective AI governance is not built in isolation - it’s closely tied to the evolving landscape of laws, regulations, guidelines, and frameworks shaping the use and development of AI technologies. Understanding these regulatory frameworks is fundamental to crafting governance strategies that are not only compliant but also adaptable to the rapid pace of innovation. We recognise that navigating this complex regulatory environment can be a daunting task. That’s why our AI Horizon Tracker provides a comprehensive, global perspective on AI regulation. Covering 22 jurisdictions, the Tracker maps out existing laws, proposed legislation, guidelines, and enforcement actions across key regions, offering clarity on both AI-specific regulations and broader frameworks impacting AI adoption. It highlights emerging trends, such as the EU’s AI Act, and provides insights into how businesses can align their governance frameworks with regulatory requirements. Access it here.
The EU’s AI Act is set to shape global AI governance with its risk-based framework, imposing stricter obligations on “high-risk” applications like recruitment or healthcare. However, many AI activities fall outside these high-risk categories and are primarily governed by laws such as the General Data Protection Regulation (GDPR) and intellectual property (IP) regulations. As a result, AI compliance in the EU requires navigating a multi-layered framework that integrates the AI Act’s rules with GDPR protections and IP constraints.
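To make the risk-based logic concrete, below is a minimal sketch, in Python, of how an organisation might run a first-pass internal triage of AI use cases before any formal legal assessment. The domain and practice lists are illustrative assumptions, not the AI Act’s legal definitions.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    MINIMAL = "minimal"

# Illustrative shortlists only: the AI Act's actual categories are defined
# in the legal text and always require case-by-case legal assessment.
PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_DOMAINS = {"recruitment", "healthcare", "credit_scoring", "education"}

def triage_use_case(domain: str, practice: str = "") -> RiskTier:
    """First-pass internal triage of an AI use case into an indicative risk tier."""
    if practice in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    # Lower-tier uses still need GDPR and IP review under the layered framework.
    return RiskTier.MINIMAL

print(triage_use_case("recruitment"))  # RiskTier.HIGH
```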
Technical standards play a crucial role in simplifying compliance. The European Commission has tasked organisations like the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC) with developing harmonised specifications to support AI Act adherence. These standards not only help demonstrate AI Act compliance but also align with broader frameworks, including GDPR and best practices for trustworthy AI. While voluntary, they offer a cohesive approach to compliance, enabling organisations, large and small, to streamline their processes and navigate the EU’s complex regulatory landscape more effectively.
AI governance considerations
One of these standards is ISO/IEC 42001:2023, which offers a voluntary but significant framework for AI governance, enabling organisations to demonstrate accountability across the AI lifecycle. It outlines processes for risk assessment, bias detection, auditing, and governance, while integrating GDPR principles like data minimisation with broader controls such as oversight committees and documentation protocols. This helps institutionalise “trustworthy AI” and prepares businesses for audits and evolving regulations. While it is only one among potentially many forthcoming AI standards, it already provides a robust operational blueprint for embedding ethics and security into AI processes. Global bodies like the IEEE and ISO/IEC JTC 1/SC 42 are also advancing standards, raising questions about alignment with EU-specific requirements. Businesses should monitor ongoing standardisation, engage in forums, and pilot ISO/IEC 42001 readiness. Adaptive compliance focused on proactive engagement rather than box-ticking will be key.
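ISO/IEC 42001 itself prescribes management processes rather than code, but the bias-detection activities it calls for often reduce to simple statistical checks. As a hedged illustration, the sketch below computes one common fairness metric, the demographic parity gap, over a model’s decisions; the choice of metric and any review threshold are assumptions for illustration only.

```python
def demographic_parity_gap(outcomes: list[int], groups: list[str]) -> float:
    """Gap between the highest and lowest positive-outcome rates across
    groups; 0.0 means every group is selected at the same rate."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Example: shortlisting decisions (1 = shortlisted) for two candidate groups.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
group_ids = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, group_ids))  # 0.5 -> flag for review
```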
Industry-specific challenges
The EU’s AI governance framework, centred on the AI Act, adopts a horizontal, risk-based approach rather than creating sector-specific rules. While it identifies “high-risk” use cases and references industries like finance and healthcare, it also relies on existing regulations, such as the Medical Device Regulation (MDR), to address AI within these sectors. This approach complements existing laws but stops short of forming standalone, sector-specific frameworks, potentially leaving gaps in addressing industry-specific challenges. For instance, in healthcare, where AI impacts diagnostics and treatment, patient data privacy and safety concerns are critical, requiring rigorous oversight to align with GDPR, MDR, and the AI Act’s risk classifications. Over time, the Act’s implementation will prompt regulators to issue more targeted, industry-specific guidelines to address these unique challenges.
Future trends and developments
The EU’s focus on AI governance is set to intensify with forthcoming guidance on standardisation and regulatory alignment. As AI adoption grows, especially in high-impact areas, the need for robust compliance frameworks will increase, alongside deeper collaboration between regulators, standard-setters, and industry consortia. Meanwhile, the EU is signalling greater flexibility in AI regulation, which will shape the governance landscape. The withdrawal of the proposed AI Liability Directive in February 2025 aims to ease burdens on developers while balancing innovation with risk management. Building on this, the European Commission’s AI Continent Action Plan seeks to position Europe as a global AI leader, allocating €200 billion for AI development, €20 billion for five AI gigafactories, and 13 additional AI factories to support startups, industries, and research. To bolster this, the EU plans to simplify the AI Act’s implementation, attract global talent, and upskill its workforce. While primarily focused on driving AI growth, these measures will also contribute to a more pragmatic and adaptive governance framework.
The UK does not currently have a comprehensive, binding law in place to regulate the development and use of AI. However, it has well-developed regulatory frameworks in other areas which must be considered when navigating AI governance. Two common areas of focus are data protection and IP.
AI governance considerations
GDPR applies as part of UK domestic law through section 3 of the European Union (Withdrawal) Act 2018, as amended by the Data Protection, Privacy and Electronic Communications (EU Exit) Regulations 2019 and supplemented by the Data Protection Act 2018. The UK Information Commissioner’s Office has been active on AI issues, taking enforcement action against Clearview AI in 2022 and running a series of consultations on data protection issues relating to generative AI in 2024. However, while differences between the UK and EU personal data regimes are starting to emerge, AI governance frameworks which adequately mitigate personal data risks in the EU will typically also work in the UK.
AI governance in the UK is also impacted by the UK’s IP laws, which can create risks around the use of data for AI training and during inference. The UK’s copyright law shares similarities with EU harmonised law but differs in some important respects. The most notable in the AI context is the lack of a UK equivalent to the EU’s commercial text and data mining exception under Article 4 of the Copyright in the Digital Single Market Directive. As a result, the UK currently presents a less permissive environment for AI development compared to some other jurisdictions. Where the choice is available, risk management strategies can include undertaking AI development work and running inference outside of the UK.
Industry-specific challenges
Beyond personal data and IP, the UK has so far taken a sectoral approach to AI regulation, with regulators in different sectors looking to regulate AI within their remits in accordance with a set of values-focused principles set out in a 2023 White Paper. The broad concepts in the White Paper appear to remain in place under today’s Government, although this hasn’t been expressly confirmed. The main regulators involved are the Competition and Markets Authority, the Office of Communications and the Financial Conduct Authority.
Regulators have published various guidance documents on AI for their sectors. An AI solution spanning different sectors may therefore be subject to different, and possibly inconsistent, guidelines, which adds complexity and compliance challenges.
Future trends and developments
The Government has signalled an intention to regulate developers of the most powerful AI models, but no draft bill has been introduced, and recent reports indicate that the plans have been put on hold for now. The Government is also planning to reform UK copyright law for AI systems, with the outcome of a recent UK Intellectual Property Office consultation anticipated in the second half of this year. Changes to UK data protection law are underway as well: the Data (Use and Access) Bill will amend the existing regime and may have consequences for AI, although as of April 2025 it is still being debated.
The Middle East is rapidly embracing AI, guided by ambitious national strategies such as the UAE's AI Strategy 2031 and Saudi Arabia’s Vision 2030. However, governance in this region demands careful navigation of distinct cultural, ethical, and regulatory landscapes.
AI governance considerations
AI governance in the Middle East does not have a unified regulatory framework comparable to the EU’s AI Act. Instead, jurisdictions like the UAE and Saudi Arabia are integrating AI risk management into a range of instruments, from broader national data protection laws to guidelines aimed specifically at the use of AI. Dubai’s dedicated “Ethical AI Toolkit” and Saudi Arabia’s “National Centre for AI” demonstrate these efforts to provide practical guidance for organisations to mitigate privacy, security, and bias risks specific to the region’s ethical considerations. This is further exemplified by the establishment of dedicated AI regulatory bodies, e.g. the Minister of State for AI, Digital Economy & Remote Work Applications Office (UAE) and the Saudi Data & AI Authority.
For further information on Dubai’s approach, Dubai’s “AI Ethics Principles & Guidelines” state (on page 15) that the AI Toolkit comprises those Principles and Guidelines together with a self-assessment tool (which can be found here).
Industry-specific challenges
In healthcare, Gulf Cooperation Council (GCC) countries face unique challenges relating to data sensitivity and confidentiality, heightened by religious and cultural contexts. Similarly, the financial sector grapples with harmonising Sharia-compliant banking principles with AI-driven innovations. Nevertheless, these challenges present opportunities to pioneer regionally tailored AI governance frameworks that reconcile technological advancements with cultural and ethical expectations.
Future trends and developments
Looking forward, the Middle East will likely experience accelerated regulatory developments to align with international standards, while preserving local values. Future governance is anticipated to emphasise ethical AI, transparency, and accountability, especially in sensitive areas such as facial recognition and automated decision-making systems. Additionally, the UAE and Saudi Arabia are expected to lead regional initiatives, setting precedents through collaborative platforms and public-private partnerships aimed at creating globally competitive yet regionally distinct AI ecosystems.
Singapore has opted for a softer approach to AI regulation, through general best practice and sector-specific guidelines which aim to foster innovation within a trusted ecosystem. This recognises both the potential of AI and its associated risks, so that organisations can continue to innovate, while keeping consumers safe.
AI governance considerations
Singapore has long led the development of AI governance frameworks, launching its first Model AI Governance Framework back in 2019 (second edition available here). The Framework provides detailed and actionable guidance for organisations to address ethical and governance issues when deploying AI solutions. It focuses on the key principles that AI-made decisions should be explainable, transparent and fair, and that AI systems should always be human-centric. The Framework has since been updated to reflect the new risks and challenges associated with Generative AI, to better enable its development and adoption within a trusted environment.
This is supported by the establishment of “AI Verify”, a governance testing framework and software toolkit which helps organisations validate the performance of their AI systems against internationally recognised frameworks. This includes not only Singapore’s Model AI Governance Framework, but other frameworks such as those from the EU and the OECD, reflecting the importance of international standardisation.
At the same time, Singapore is supplementing existing legislation to enable the development and deployment of AI tools in a safe manner. For example, the Personal Data Protection Commission has developed a set of advisory guidelines explaining when and how personal data can be used in AI systems and specifically permitting the use of personal data to develop AI systems within certain parameters. Singapore has also introduced an exception in the Copyright Act that allows the use of copyrighted works to train machine learning models, under certain conditions, without infringing copyright. These developments reflect a rather more permissive approach to enabling the development of AI, within a structured environment.
Industry-specific challenges
The general framework is supported by several sectoral guidelines which seek to address industry-specific risks while still encouraging innovation. These include the Monetary Authority of Singapore’s “Principles to Promote Fairness, Ethics, Accountability and Transparency in the Use of Artificial Intelligence and Data Analytics in the Financial Sector”, as well as the healthcare sector’s “AI in Healthcare Guidelines”.
Future trends and developments
This framework illustrates Singapore’s permissive approach to AI regulation and governance, which veers away from hard legislation that could potentially stunt AI innovation and growth in favour of softer regulations and guidelines which focus on fairness, accountability and transparency. This reflects Singapore’s approach as a business-friendly jurisdiction, setting an optimistic tone for AI development and enabling innovation within a trusted digital environment.
Hong Kong adopts a soft approach to AI regulation, relying on regulatory guidance rather than statutory provisions like the EU AI Act.
AI governance considerations
The Office of the Privacy Commissioner for Personal Data (PCPD) has issued two key AI guidelines: the “Guidance on the Ethical Development and Use of Artificial Intelligence” (AI Guidelines 2021) for in-house AI development and deployment, and the “Artificial Intelligence: Model Personal Data Protection Framework” (Model Framework 2024) for procuring third-party AI solutions. In March 2025, the PCPD also released the “Checklist on Guidelines for the Use of Generative AI by Employees” (the Checklist), which aids organisations in creating internal policies on employee use of GenAI. The Checklist elaborates on considerations from the Model Framework 2024, such as defining permissible uses of GenAI and enforcing stringent security settings.
AI governance, as outlined in the AI Guidelines 2021 and Model Framework 2024, is based on four pillars: AI strategy, procurement governance, governance structure, and training and awareness. For AI procurement, the Model Framework 2024 recommends addressing key governance issues, including the purpose of AI use, privacy and ethical obligations, international standards (e.g. ISO/IEC 42001:2023), review criteria, data processor agreements, output handling policies, landscape monitoring, AI solution management, and supplier evaluation. The PCPD further advises implementing an “AI Incident Response Plan” to handle AI-related incidents and assembling a cross-functional team (e.g. business, legal, technical, HR) to manage AI systems.
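As a practical illustration, an organisation might track these procurement issues in a simple internal record. The following is a minimal sketch, assuming our own field names and completeness checks; nothing here is prescribed by the PCPD.

```python
from dataclasses import dataclass, field

@dataclass
class AIProcurementReview:
    """Internal record loosely mirroring the governance issues the Model
    Framework 2024 recommends addressing; field names are illustrative."""
    solution_name: str
    purpose_of_ai_use: str
    privacy_and_ethics_reviewed: bool = False
    standards_referenced: list[str] = field(default_factory=list)  # e.g. ["ISO/IEC 42001:2023"]
    data_processor_agreement_signed: bool = False
    incident_response_plan_in_place: bool = False

    def outstanding_items(self) -> list[str]:
        """List checklist items still open before procurement sign-off."""
        gaps = []
        if not self.privacy_and_ethics_reviewed:
            gaps.append("privacy and ethical obligations review")
        if not self.data_processor_agreement_signed:
            gaps.append("data processor agreement")
        if not self.incident_response_plan_in_place:
            gaps.append("AI incident response plan")
        return gaps

review = AIProcurementReview("Vendor chatbot", "customer support triage")
print(review.outstanding_items())
```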
Both the AI Guidelines 2021 and Model Framework 2024 emphasise compliance with the Personal Data (Privacy) Ordinance and its six Data Protection Principles. This is particularly critical for GenAI deployment, where customisation and enterprise integration heighten data privacy risks, including processing roles, data sensitivity, and lawful bases for use.
Industry-specific challenges
While Hong Kong’s AI governance framework is primarily guided by high-level, cross-industry standards set by the PCPD, certain industry regulators are beginning to issue more targeted requirements for AI. For instance, the Securities and Futures Commission (SFC), the Hong Kong Monetary Authority (HKMA), and the Insurance Authority (IA) have each released guidance in recent years addressing the responsible use of technologies - often including AI - within their respective sectors.
As a result, businesses operating in these regulated industries are expected to adapt their existing organisational AI governance frameworks to meet the specific requirements imposed by these regulators.
Future trends and developments
The approaches and requirements outlined in the AI Guidelines 2021 and the Model Framework 2024 are closely aligned with international standards, particularly the OECD AI Principles. This alignment allows global organisations to leverage harmonised regional or international AI governance frameworks to comply with Hong Kong’s local legal requirements. However, it remains uncertain whether Hong Kong-specific obligations will be placed on a statutory footing in the near future.
As DeepSeek gains global prominence, China is proactively advancing its AI regulatory framework. While we are still awaiting a comprehensive law like the EU AI Act, China has implemented sector-specific regulations addressing technologies such as recommendation algorithms, deep synthesis, and generative AI.
AI governance considerations
In the absence of a unified AI law, China’s AI risk management strategy is characterised by a “decentralised” regulatory framework, with oversight distributed across various sector-specific laws addressing data protection, content moderation, algorithm regulation, and IP.
China’s data protection regime is anchored in the Cybersecurity Law, the Data Security Law, and the Personal Information Protection Law, which collectively lay down requirements such as transparency, data minimisation, accountability, and data export compliance. Although these laws and technology-specific regulations provide a broad data protection framework, there are as yet no specific, prescriptive requirements governing data protection in the AI sector.
China has introduced measures to address AI-generated content. In March 2025, the Cyberspace Administration of China (CAC) issued the “Measures for Labelling AI-Generated Content”, which require transparent labelling of such content (a minimal sketch of the general pattern follows below). In addition, China has developed a unique “algorithm filing” system, compelling AI algorithms that possess public opinion attributes and social mobilisation capabilities to undergo security assessments and be filed with the CAC.
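The mandated label formats are defined in the Measures and their accompanying national standard rather than reproduced here, so the following is only a minimal sketch of the general pattern: attaching both an explicit (visible) label and an implicit (machine-readable) label to generated output. All field names are illustrative assumptions.

```python
import json

def label_ai_output(text: str, producer: str) -> dict:
    """Attach an explicit (visible) label and an implicit (machine-readable)
    label to AI-generated text. Field names and wording are illustrative
    assumptions, not the formats mandated by the Measures or the
    accompanying national standard."""
    return {
        "content": f"[AI-generated] {text}",  # explicit label shown to users
        "metadata": {                          # implicit label for machines
            "ai_generated": True,
            "producer": producer,
        },
    }

print(json.dumps(label_ai_output("Quarterly summary...", "example-llm"), indent=2))
```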
Furthermore, standardisation bodies have issued various standards focused on algorithm safety and ethics, guiding the responsible development and deployment of AI. The Ministry of Science and Technology’s “Measures for Ethical Review of Science and Technology (Trial)” underscores this oversight, incorporating regulations for algorithm models and AI. These Measures apply to private companies and have a certain R&D focus, but companies should consider their ethics review requirements when bringing AI to the Chinese market.
Meanwhile, China is also witnessing an increase in IP litigation related to AI-generated works and the fair use of training data. Although many of these cases are ongoing, judicial decisions are gradually shaping a foundational framework for resolving AI-related IP disputes.
Industry-specific challenges
GenAI providers must navigate China’s vertical, technology-specific framework by complying with the GenAI Interim Measures, which integrate requirements for algorithm filing from the Recommendation Algorithms Provisions and transparency obligations from the Deep Synthesis Provisions. Providers should implement robust data protection, transparency, and user protection measures, as well as conduct security assessments and algorithm filings where these obligations are triggered.
To ensure legal conformity, companies should refer to regulations on specific obligations and adhere to national or industry standards. For instance, to clearly indicate that content is AI-generated and avoid confusion, GenAI providers should adopt appropriate labelling practices for AI-generated content in accordance with the Measures for Labelling AI-Generated Content and their corresponding standard.
Furthermore, China's government is actively advancing the “AI+” initiative, aiming to integrate AI across various sectors, including smart connected vehicles, smartphones and computers, and robots. With respect to industry-specific challenges, if AI products involve specific industries (such as news publishing, film production, healthcare, education, and finance), they may need to comply with industry requirements and, if required, obtain the appropriate licenses based on the nature of the product. For instance, in the financial sector, the People's Bank of China has developed a series of technical standards on AI.
Future trends and developments
While we’ve seen three expert drafts of an AI Law, the implementation of a unified national AI law remains a complex undertaking. As such, we understand that sectoral and technology-specific regulation will remain the dominant approach for the foreseeable future.
Australia has adopted a cautious and deliberative approach to regulating AI to date, with no AI-specific legislation proposed or enacted yet. The Federal Government has engaged in extensive community consultations to ensure that any regulatory framework strikes a balance between fostering economic development and innovation while promoting the responsible and safe use of AI technologies.
AI governance considerations
The Government has indicated that existing laws, including those relating to privacy, consumer protection, and intellectual property rights, already address many issues arising out of AI or can be amended to include additional AI-specific provisions. It also undertook a public consultation process in 2023 to better understand the risks associated with AI, and identified several key measures to address those risks. These included strengthening existing laws to mitigate “known harms” from AI (such as updating privacy laws under the Privacy Act 1988 (Cth)), enhancing online safety regulations, and introducing new laws targeting misinformation and disinformation. The Government also considered introducing AI-specific legislation that would impose “safeguards” on organisations and individuals developing or deploying AI systems in legitimate, high-risk settings, as well as obligations concerning the development, deployment, and use of frontier or general-purpose AI models to address their unique risks.
It further outlined three core themes likely to be central to future legislation: testing and auditing (focusing on product safety and data security); transparency (involving public reporting obligations, model design disclosures, data usage, and watermarking of AI-generated content); and accountability (emphasising organisational roles, responsibilities, and training requirements).
Building on the key measures and regulatory themes identified through these consultation processes, the Australian Government released a set of proposed “mandatory guardrails” for high-risk AI applications in mid-2024, together with a set of voluntary ethics principles that closely align with those guardrails. These guardrails and ethics principles are in line with the approach taken by the EU AI Act, reaffirming that governments globally are taking note of how the EU is regulating AI.
Businesses operating in Australia can and should consider the Government’s voluntary ethics principles when developing and implementing their own AI governance policies and practices. Addressing the matters they raise is likely to place businesses in a good position to comply with any future AI-specific legislative requirements, as well as with existing laws relevant to the use of AI.
Industry-specific challenges
Australia’s sectoral landscape is subject to a patchwork of existing legal frameworks. For heavily regulated industries like finance and healthcare, the Government’s proposed “mandatory guardrails” pose heightened compliance obligations, including enhanced testing and auditing requirements for high-risk AI applications, which can introduce additional layers of review, delay, and expense. Meanwhile, consumer-facing industries that deploy generative AI or rely on automated decision-making may face challenges in meeting transparency and accountability standards to mitigate misinformation risks.
Future trends and developments
Throughout its approach, the Government has consistently emphasised the importance of ensuring the responsible and safe use of AI while not impeding innovation. While high-risk applications may be subject to legislated mandatory guardrails, regulation of AI more broadly is expected to remain relatively light-touch, primarily achieved through amendments to existing laws rather than through comprehensive, AI-specific legislation.
Having explored the diversity of AI governance across various key regions, it’s clear that organisations face many challenges in implementing effective AI governance strategies. Drawing from our extensive experience advising clients on the frontlines of AI governance, we’ve identified several key questions and issues that are shaping the discourse.
These insights into AI governance highlight the need for flexible, adaptable strategies that can navigate the complex interplay of regional regulations, industry challenges, and rapidly evolving technologies. As we continue to advise clients on these critical issues, we’re developing innovative approaches to the challenges businesses are facing.
As regulation continues to expand globally - from the EU’s risk-based AI Act to the flexible and sector-specific frameworks in the UK, Asia, the Middle East, and beyond - organisations face a crucial choice for their AI strategy. A one-size-fits-all compliance approach can ensure consistency but may limit innovation and place undue burdens on smaller companies, whereas a region-by-region strategy might create fragmentation, internal complexity and additional costs. Across all these jurisdictions, a number of unifying themes have emerged that provide common ground for organisations. Nearly every major regime places a premium on transparency, accountability, and data minimisation, emphasising that trustworthy AI rests on well-defined oversight processes and responsible data use. A clear example is the widespread agreement on the importance of robust risk assessment and bias detection, which is embodied in ISO/IEC 42001 and echoed in guidelines from the Middle East to Hong Kong. These parallels reveal that frameworks increasingly converge on core principles of transparency, ethics, and human-centric innovation.
Generally, the best choice lies in an adaptable governance framework: one that integrates the shared pillars of transparency, accountability, risk-based processes, and data protection by design, while preserving the agility to accommodate differing local requirements. A key marker of success in AI governance is the ability to anticipate regulatory shifts, invest in forward-looking risk management, and embed cross-functional oversight that supports sustained, responsible AI innovation. By aligning to these universal principles, organisations can adopt a cohesive approach that reduces regulatory friction, encourages innovation, and fosters global trust in AI.
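One way to picture such a framework is as a shared baseline of controls with jurisdiction-specific overlays layered on top. The sketch below is purely illustrative: the control names and jurisdiction mappings are our own assumptions, not a definitive compliance checklist.

```python
# Shared baseline controls apply in every market; jurisdiction-specific
# overlays are layered on top. All names here are illustrative assumptions.
BASELINE_CONTROLS = {
    "transparency_notice",
    "accountability_owner_assigned",
    "risk_assessment",
    "data_protection_by_design",
}

LOCAL_OVERLAYS = {
    "EU": {"ai_act_risk_classification", "harmonised_standards_mapping"},
    "UK": {"sector_regulator_guidance_review", "copyright_tdm_review"},
    "HK": {"pcpd_model_framework_checklist"},
}

def controls_for(markets: list[str]) -> set[str]:
    """Union of the shared baseline and each deployment market's overlay."""
    required = set(BASELINE_CONTROLS)
    for m in markets:
        required |= LOCAL_OVERLAYS.get(m, set())
    return required

print(sorted(controls_for(["EU", "HK"])))
```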