AI-generated content: Why is OpenAI’s new language model “too dangerous to release”?

From personal assistants to customer services, AI that can talk to us is already big business. But what happens when AI gets a bit too good at pretending to be a human?

OpenAI recently announced that it had created a language model, GPT-2, that generates strikingly humanlike text. The company soon concluded that this capability could be misused if it fell into the wrong hands, and announced that it would not be releasing the full model to the public. Instead, it released only a smaller, more limited version.

View the full article on digitalbusiness.law
