In a company blog post published Monday, OpenAI publicly addressed its AI media negotiations and the recent lawsuit filed against it by The New York Times.
In the post, titled "OpenAI and Journalism," the company wrote: "While we disagree with the claims in The New York Times lawsuit, we view it as an opportunity to clarify our business, our intent, and how we build our technology."
As reported by Fast Company, the legal challenge centers on the alleged unauthorized use of the newspaper's content to train AI language models such as ChatGPT and Bing Chat (now known as Microsoft Copilot). Language models have traditionally been trained on vast amounts of data scraped from websites across the internet.
As AI tooling and software become increasingly integrated into various industries, clarity around data usage rights and intellectual property grows more important as well. The outcome of this lawsuit could lead to more stringent AI regulations, affecting how tech companies train AI models and the kinds of data they are allowed to use in that training, and thereby influencing the trajectory of AI development and its uses in the workplace.
In the statement, OpenAI touted early media partnerships already formed with the Associated Press, Axel Springer, the American Journalism Project, and NYU. The company also said it had been in ongoing discussions with The New York Times, and that those discussions "had appeared to be progressing constructively through our last communication on December 19."
According to OpenAI, “We had explained to The New York Times that, like any single source, their content didn’t meaningfully contribute to the training of our existing models and also wouldn’t be sufficiently impactful for future training. Their lawsuit on December 27 — which we learned about by reading The New York Times — came as a surprise and disappointment to us.”
OpenAI claims that a partnership would have allowed The New York Times to connect with readers through ChatGPT, with the company's software playing an integral role in content distribution by relaying real-time news with attribution to the original news source. However, many see the lawsuit as more complex than a stalled negotiation, viewing it as a pivotal case in shaping the actual legal and ethical guardrails of AI technologies.
A report published by Wired raises a key point: if AI tools like ChatGPT become popular for summarizing up-to-date news, they could reduce direct traffic to news websites, potentially cutting into a media company's advertising and subscription revenue.
If OpenAI prevails in the lawsuit, more media outlets would likely feel pressure to strike deals with AI companies, potentially reshaping how readers consume news. Conversely, a loss for OpenAI might result in more cautious adoption of AI in journalism, with greater emphasis on ethical and legal considerations. Regardless of the outcome, journalists and content creators will need to develop new skills to work alongside AI technologies, ensuring that their human judgment complements, and in some cases overrides, the output of AI tools, especially when the AI produces errors.
As AI technologies become more advanced, their adoption across various industries, including journalism, seems inevitable. This will require a reevaluation of journalistic practices and ethical standards, along with the development of new legal frameworks.
In the long term, the role of journalists may evolve to focus more narrowly on areas where human insight is essential — including investigative and original reporting, analysis, and ethical decision-making. The need for professionals who can effectively integrate AI-generated content with human creativity and critical thinking will likely increase.