AI generated content: Why is OpenAI’s new language model “too dangerous to release”?

From personal assistants to customer services, AI that can talk to us is already big business. But what happens when AI gets a bit too good at pretending to be a human?

OpenAI recently announced that it had created a language model that generates strikingly humanlike text. The company soon realised that such convincing output could be misused if the technology fell into the wrong hands, and announced that it would not be releasing the full code to the public. Instead, it released only a smaller, more restricted version of the model.

View the full article on digitalbusiness.law >
