Japan's New AI Act: Examining an Innovation-First Approach Against the EU's Comprehensive Risk Framework

Written By

Dr. Nils Lölfing

Senior Counsel
Germany

As Senior Counsel in our Technology & Communications Sector Group at Bird & Bird, I provide pragmatic, solution-driven advice on all aspects of data and information technology law. With over a decade of experience as a Tech, AI, and Data Lawyer, I have a strong focus on and extensive expertise in projects involving AI, particularly generative AI, advising clients across a range of industries, including life sciences, healthcare, automotive, and technology.

Aya Saito

Partner
Japan

I am a commercial technology lawyer based in Japan, specialising in complex transactions and strategic legal support for clients leveraging technology to drive growth and innovation globally. My approach is to provide pragmatic, business-focused advice that helps clients navigate legal complexities while achieving their commercial objectives.

Japan's recently enacted "Act on Promotion of Research and Development, and Utilization of AI-related Technology" (Japan AI Act) represents a significant development in the global artificial intelligence governance landscape. This new legislation introduces an innovation-first approach that emphasises voluntary cooperation and technology promotion without traditional enforcement mechanisms, marking a distinctive entry into the evolving field of AI regulation.

The Japanese framework stands in notable contrast to the European Union's AI Act (EU AI Act), with its comprehensive risk-based regulatory approach and substantial penalties. This divergence reflects deeper philosophical differences about the role of regulation in technological development, the balance between innovation and risk mitigation, and the appropriate mechanisms for ensuring responsible AI deployment.

As nations grapple with the dual imperatives of fostering innovation while managing AI-related risks, Japan's voluntary cooperation model offers a compelling alternative to mandatory compliance frameworks. However, as this analysis will demonstrate, Japan's approach and the EU's risk-first regulation may be more complementary than contradictory, each offering valuable insights for the evolving global AI governance ecosystem and potentially creating a new paradigm for international AI governance.

Japan's AI Act: Innovation-first governance

Japan's recently enacted AI Act marks a significant milestone in Japan's commitment and ambition to become a global leader in AI. The Act aims to promote innovation while mitigating risk in alignment with global norms, representing the culmination of nearly a decade of policy evolution.

Japan's approach to AI has evolved systematically since 2015, beginning with the Society 5.0 vision in December 2015 and progressing through the "Social Principles of Human-Centric AI" in March 2019. Between 2016 and 2022, various ministries issued non-binding guidelines for AI use, development, and governance.

Legislative development and rationale

With the rise of generative AI and international developments in AI regulation, discussions on formal AI legislation intensified. While Japanese policymakers initially considered following Europe's regulatory model, concerns from businesses and government leaders about potentially stifling innovation led to a different path.

At the same time, there were ongoing concerns that non-binding "soft law" measures, lacking enforceability, did not adequately address societal concerns related to AI risks.

Seeking to become the world's most AI-friendly country, Japan opted for "light-touch" legislation. The draft bill was approved by the Cabinet in February 2025 and subsequently by the Diet in May 2025, coming into full effect in September 2025.

Core principles and framework

The Japan AI Act takes a fundamentally different approach from the EU AI Act: it lays out basic principles and plans for AI research, development and use; defines stakeholder roles for government, businesses, academia and citizens; and relies on guidelines and strategic plans rather than direct regulation and penalties.

The Act promotes AI through four basic principles: retaining research and development capabilities and enhancing international competitiveness; comprehensively and systematically promoting AI from basic research to practical use; ensuring transparency for appropriate AI research, development and use; and leading international cooperation.

Government measures and implementation

Under the Japan AI Act, government measures include promoting research and development and sharing facilities and data; securing and developing talent; contributing to the establishment of international norms and guidelines; conducting studies and research on domestic and global AI trends and analysing AI use that violates individual rights; and providing guidance to business operators and citizens. Detailed measures will be set out in the Basic AI Plan drawn up by the AI Strategic Headquarters, which is led by the Prime Minister and comprises all Cabinet members.

Enforcement philosophy: No penalties approach

The most notable characteristic of the Japan AI Act is the absence of penalties or sanctions. Business operators should endeavour to actively adopt AI for innovation and business sophistication in accordance with the basic principles and cooperate with government policies. Citizens should deepen their understanding of and interest in AI and cooperate with government initiatives. However, there are no consequences under this Act for inappropriate AI use that may violate individual rights.

That said, this does not mean that inappropriate AI use will go unaddressed. Unlawful activities involving AI remain subject to penalties and regulation under existing legal frameworks. Additionally, the government has announced plans to initiate research into specific AI-related concerns, including gender discrimination in AI-powered HR processes and the creation of AI-generated deepfake pornography. 

Complementary soft law mechanisms

Japan's approach extends beyond the formal Act to include detailed guidelines such as the "AI Guidelines for Business," created by the Ministry of Economy, Trade and Industry (METI) and the Ministry of Internal Affairs and Communications (MIAC) in April 2024 and updated in March 2025.

These comprehensive, non-binding guidelines establish industry norms in Japan and provide guidance on AI risks and mitigation measures for voluntary implementation. Designed as a "living document," the guidelines acknowledge the need for continuous improvement and regular updates to keep pace with technological advancement.

While soft law offers flexibility to adapt to new technologies, formal legislation is seen as essential to establish the promotion of AI as a national policy and to enable central coordination across stakeholders. International economic competitiveness appears to be a key driver: Japan's digital trade deficit reached a record 6.6 trillion yen in 2024, and studies show lower AI adoption rates than in other countries. The Japan AI Act takes a distinctive, strategic approach, signalling that AI is of the utmost priority to Japan while maintaining the flexibility needed to foster innovation.

Fundamental differences in approaches under the EU AI Act

At first glance, the Japanese and EU approaches appear to represent fundamentally different philosophies of AI governance, creating what seems like an unbridgeable divide between innovation promotion and risk regulation.

Regulatory intensity: Soft law versus hard law

Japan relies primarily on pro-innovation and principles-based legislation supplemented by detailed guidelines, creating a soft law framework that emphasises guidance over compulsion. The EU employs binding legal requirements with specific obligations for clearly defined AI systems and models, establishing a hard law regime with mandatory compliance obligations. This contrast suggests fundamentally different views on the appropriate role of legal compulsion in technology governance.

Innovation priority: Promotion versus mitigation

While Japan does not disregard the need for risk management, it explicitly prioritises innovation promotion and aims to become the "world's most AI-friendly country." The EU emphasises risk mitigation as the primary objective, with innovation support relegated to separate policy initiatives. This apparent divergence suggests competing visions of whether regulation should primarily enable or constrain technological development.

Enforcement mechanisms: Voluntary versus mandatory

Japan relies entirely on voluntary cooperation and reputational mechanisms, leaving it to existing legal frameworks to penalise or regulate rights violations arising from inappropriate use of AI. The EU enforces mandatory compliance with significant penalties for violations under the EU AI Act, including fines of up to €35 million or 7% of global turnover for engaging in prohibited practices. This stark contrast in enforcement philosophy appears to reflect fundamentally different assumptions about human behaviour and the necessity of deterrence.

Stakeholder responsibility: Distributed versus centralised

Japan distributes responsibility across government, business, academia, and citizens, creating a collaborative governance model where all stakeholders share accountability. The EU centralises primary responsibility on AI system providers and deployers, establishing clear value chains and accountability mechanisms. This difference suggests competing theories about how responsibility should be allocated in complex technological systems.

A closer look at the broader context

However, when examined within broader policy contexts, these apparently fundamental differences prove more nuanced than they initially appear.

Both jurisdictions demonstrate commitment to innovation promotion and risk management, albeit through different mechanisms and with different emphases.

Understanding the EU's regulatory rationale

The main difference lies in the EU AI Act's regulation of specific categories of AI systems and models (prohibited practices, high-risk systems, general-purpose AI models, and certain AI systems subject to transparency requirements) through concrete definitions and financial penalties for violations.

This EU approach stems from the EU's legislative tradition of detailed harmonisation to ensure single market functioning and fundamental rights protection. The EU AI Act's recitals explicitly reference the need to ensure high levels of protection for health, safety, and fundamental rights, reflecting the EU's constitutional commitment to these values and its experience with technology regulation.

Convergence in innovation promotion

Examining the broader policy context reveals significant convergence in the direction of travel. The EU's AI innovation agenda, exemplified by the AI Continent Action Plan of April 2025, demonstrates pro-innovation elements similar to Japan's approach. The Plan includes massive investment in AI research and development, support for AI startups and scale-ups, development of AI talent through education and training programmes, and creation of regulatory sandboxes for AI innovation. These initiatives mirror Japan's emphasis on research promotion, talent development, and innovation facilitation.

Japan's risk-based thinking

Similarly, Japan's comprehensive "AI Guidelines for Business" showcase a sophisticated risk-based approach comparable to that of the EU AI Act. These guidelines adopt a similar framework, providing targeted recommendations that address the risks associated with AI applications while allowing for discretionary assessment. Unlike the EU AI Act, however, they do not explicitly categorise specific AI systems. Instead, the focus is on core principles such as transparency, explainability, bias mitigation, and data protection, all of which are integral components of the EU's risk-based methodology.

Strategic trade-offs: Innovation flexibility versus regulatory certainty

Japan's principle-based framework prioritises innovation facilitation through regulatory flexibility. This approach provides businesses with discretionary space to implement broader principles rather than adhering to rigid, system-specific requirements backed by substantial financial penalties. The absence of prescriptive mandates reduces the deterrent effect that can discourage experimentation and development, potentially preserving the societal benefits that emerge from unrestricted AI innovation. This flexibility proves particularly valuable in rapidly evolving technological landscapes where adaptive implementation of responsible practices outweighs strict regulatory adherence.

Conversely, the EU's rule-based methodology prioritises legal certainty through clearly defined requirements and penalties. This approach creates predictable compliance frameworks that facilitate strategic business planning and investment decisions. The regulatory clarity proves especially valuable for risk-averse organisations and heavily regulated sectors where compliance predictability is essential for operational planning. However, this certainty comes at the cost of reduced flexibility in responding to technological developments.

The contrasting approaches reveal different vulnerabilities to technological change. The EU's prescriptive framework faces inherent risks of technological obsolescence, as demonstrated by the emergence of ChatGPT in late 2022, which necessitated significant amendments to the then-pending EU AI Act proposal to accommodate general-purpose AI models as a new regulatory category. This reactive pattern highlights the fundamental challenge of keeping detailed regulations current with rapid technological developments, requiring frequent legislative updates that are inherently slower than the guideline revisions possible under Japan's more flexible approach.

Japan's principle-based system, while offering greater adaptability to technological evolution, trades this flexibility for reduced regulatory certainty, potentially creating implementation inconsistencies and compliance ambiguities that may ultimately hinder rather than help innovation in certain contexts.

Conclusion

The apparent contrast between Japan's innovation-first framework and the EU's risk-first regulation represents two distinct paradigms in AI governance approaches. However, when examined within broader policy contexts, the differences become more nuanced than initially apparent, with both jurisdictions demonstrating commitment to innovation promotion and risk management through different mechanisms and emphases.

Japan's principle-based approach offers greater flexibility and adaptability to rapid technological change, potentially providing more conducive conditions for experimental approaches that drive AI advancement. The absence of prescriptive AI-specific penalties reduces regulatory deterrence effects that might otherwise constrain beneficial innovation. Conversely, the EU's detailed framework provides superior legal certainty and systematic risk management through comprehensive requirements and clear enforcement mechanisms, offering predictable compliance frameworks and robust protection for fundamental rights and safety.

Neither approach is inherently superior; they reflect different societal values, economic priorities, and regulatory philosophies, each creating distinct opportunities and challenges for businesses and innovation. The global AI governance landscape benefits from this regulatory diversity, providing multiple pathways for responsible AI development and deployment. The ultimate test will be outcomes: which approach better balances innovation with risk management, and how effectively each framework adapts to technological evolution while maintaining public trust.

For multinational businesses subject to both frameworks, effective AI governance requires a dual-track strategy that satisfies the EU's prescriptive requirements while leveraging Japan's flexible implementation approach. Companies should establish compliance baselines that meet the EU's detailed mandates, particularly for high-risk AI systems, while developing adaptive governance processes that can respond to Japan's evolving guidelines and industry best practices. This involves implementing robust documentation and risk assessment procedures that satisfy EU requirements, while maintaining organisational agility to incorporate Japan's principle-based guidance as it develops. Businesses should also consider establishing regional compliance teams that can navigate the specific nuances of each framework while ensuring consistent global AI ethics standards.
