OpenAI, a leading artificial intelligence research lab, has rejected a copyright lawsuit filed by The New York Times (NYT) as “without merit.” The suit accuses OpenAI of using the newspaper’s content without authorization to train its generative AI models, such as GPT-4 and DALL-E 3. The NYT’s move represents a significant challenge in the rapidly evolving landscape of AI and copyright law.
In December 2023, the NYT filed a lawsuit against OpenAI and Microsoft, alleging that the companies used the Times’ copyrighted content to train their generative AI models without permission or payment. The suit, which has become a major talking point in the AI community, seeks billions of dollars in damages on the NYT’s behalf.
OpenAI, however, has countered these allegations. In a public response, the company reiterated its position that training AI models on publicly available data, including articles from the NYT, falls under fair use, and argued that this approach is essential for innovation and competitiveness in the U.S. OpenAI also addressed “regurgitation,” in which a model outputs training data verbatim, stating that this is less likely when content comes from a single source and that users are responsible for not intentionally misusing the models to produce it.
Interestingly, OpenAI says it had been in constructive discussions with the NYT about forming a partnership. Those talks were progressing well until the lawsuit was filed, which came as a surprise to OpenAI. The company believes the legal action is not representative of the typical use or intent of its AI models and sees the case as an opportunity to clarify its business practices and technology development.
The NYT lawsuit is part of a growing trend where content creators, including artists and journalists, are challenging the use of their work in training AI systems. Other lawsuits have been filed against OpenAI and similar companies, accusing them of copyright infringement. This legal pushback signifies a broader concern over the ethical and legal implications of AI in the creative and media industries.
Notably, some news organizations have chosen a different path, forming licensing agreements with AI companies. The Associated Press and Axel Springer, for example, have entered into deals with OpenAI, indicating a potential collaborative approach to address these challenges. However, these agreements are often for relatively small sums, especially considering the revenues of AI companies like OpenAI.
The lawsuit, and the questions it raises about AI and copyright law, is set to be a pivotal moment in defining the boundaries and responsibilities of AI developers and content creators. As the case unfolds, it will have significant implications for the future of AI, journalism, and intellectual property rights.