OpenAI Accuses The New York Times of Using Deceptive Tactics in Copyright Lawsuit

OpenAI, the prominent artificial intelligence (AI) research organization, has filed a motion to dismiss portions of The New York Times’ copyright lawsuit against it, alleging that the newspaper used deceptive practices to gather evidence for the case. According to OpenAI, The NYT hired people to manipulate its AI systems, including ChatGPT, into generating misleading evidence, a claim the newspaper’s attorney has vehemently denied.

In a filing submitted to a Manhattan federal court, OpenAI asserted that The NYT induced its technology to reproduce copyrighted material through deceptive prompts that violated OpenAI’s terms of use. The NYT’s attorney countered that what OpenAI characterizes as “hacking” was simply the use of OpenAI’s own products to search for evidence of the alleged theft and reproduction of the newspaper’s copyrighted content.

The lawsuit, filed by The NYT in December 2023, alleges that OpenAI and its major backer, Microsoft, unlawfully used millions of NYT articles to train chatbots such as ChatGPT, which now compete with the newspaper as a source of information. The suit invokes constitutional and Copyright Act protections for The NYT’s original journalism and also accuses Microsoft’s Bing AI of generating verbatim excerpts from its content.

This legal battle is part of a broader wave of litigation in which copyright holders, including authors, visual artists, and music publishers, have sued tech companies over the alleged misuse of their work in AI training. OpenAI has previously argued that training advanced AI models without copyrighted material is infeasible, because copyright today covers virtually every form of human expression. Whether AI training qualifies as fair use under copyright law remains unresolved, though courts have dismissed some infringement claims for lack of evidence linking AI-generated content to the copyrighted works at issue.