OpenAI Accuses The New York Times of Hacking AI Models in Copyright Dispute

OpenAI accuses The New York Times of hacking its AI models in a copyright lawsuit, sparking a debate over ethical AI training and copyright law. This legal battle could set a precedent for AI development practices.

Allegations of Misleading Evidence

In a legal showdown, OpenAI has taken aim at The New York Times, accusing the newspaper of orchestrating a hack of its AI models. OpenAI alleges that The Times paid someone to manipulate its systems, including ChatGPT, using deceptive prompts that violated OpenAI's terms of use. The company claims this was done to generate misleading evidence for The Times' copyright lawsuit against OpenAI and Microsoft.

OpenAI's allegations came to light in a recent filing in a Manhattan federal court, where it seeks to dismiss parts of The Times' lawsuit. The filing asserts that The Times caused OpenAI's technology to reproduce copyrighted material through deceptive means. The Times' attorney, Ian Crosby, counters that the "hacking" characterization is unfounded, arguing that the prompting OpenAI objects to was simply an effort to uncover evidence that the companies had copied and reproduced The Times' content.

The Times' lawsuit, filed in December 2023, alleges that OpenAI and Microsoft used millions of its articles without authorization to train chatbots. This legal clash underscores broader concerns about the use of copyrighted material in AI training. Copyright holders, including authors, visual artists, and music publishers, have filed similar lawsuits against tech firms, challenging their practices in AI development.

The legal dispute raises questions about fair use in copyright law, particularly as it applies to AI training. OpenAI contends that training advanced AI models without incorporating copyrighted works is impossible, and tech firms more broadly argue that their AI systems make fair use of copyrighted material, emphasizing the industry's potential for growth. Copyright holders dispute that characterization, as the wave of lawsuits shows.
