Did the New York Times “Hack” ChatGPT to Bring Its Lawsuit Against OpenAI?

OpenAI has asked a federal judge to dismiss portions of the New York Times’ copyright lawsuit, asserting that the newspaper “hacked” its chatbot ChatGPT and other AI systems to fabricate misleading evidence.

ChatGPT logo – artistic impression. Image credit: Mariia Shalabaieva via Unsplash, free license

In a filing in Manhattan federal court on Monday, OpenAI contended that the Times manipulated the technology by employing “deceptive prompts” that overtly violated OpenAI’s terms of use.

OpenAI rejected the newspaper’s allegations, asserting that the Times paid an individual to hack its products. While OpenAI did not disclose the identity of the alleged manipulator, it stopped short of accusing the Times of violating anti-hacking laws.

Responding to OpenAI’s claims, the Times’ attorney, Ian Crosby, said OpenAI was mischaracterizing as “hacking” what was simply the use of its products to search for evidence, emphasizing that the newspaper was looking for instances in which its copyrighted work had been stolen and reproduced.

In December, The New York Times filed a lawsuit against OpenAI and its primary financial supporter, Microsoft, accusing them of utilizing millions of articles without authorization to train chatbots for delivering information to users.

The Times is one of several copyright holders taking legal action against tech companies for alleged misuse of their content in AI training, joining groups of authors, visual artists, and music publishers in similar suits. Tech companies argue that their AI systems make fair use of copyrighted material and that these lawsuits threaten the growth of a potential multitrillion-dollar AI industry.

Courts have yet to address the fundamental question of whether AI training qualifies as fair use under copyright law. So far, judges have dismissed some infringement claims over the output of generative AI systems, citing a lack of evidence that the AI-created content closely resembles the copyrighted works.

The New York Times’ complaint highlighted multiple instances where OpenAI and Microsoft chatbots provided users with nearly identical excerpts from its articles upon request. The lawsuit accused both companies of attempting to exploit the Times’ substantial investment in journalism and create a substitute for the newspaper.

OpenAI countered in its filing that it took the Times “tens of thousands of attempts to generate the highly anomalous results” and that, in the ordinary course, ChatGPT cannot be used to serve up Times articles at will. OpenAI argued that the Times cannot prevent AI models from acquiring knowledge of facts, just as no news organization can prevent the Times from re-reporting stories it did not itself investigate.

Written by Alius Noreika