This year marks three years since The New York Times sued OpenAI and Microsoft for copyright infringement. Although the outcome could be a milestone in clarifying whether AI vendors can use large amounts of creative content to train models without the creators' permission, the case is still pending.
The New York Times lawsuit highlights the challenges of adjudicating AI lawsuits and provides a backdrop for what 2026 will look like in the battle between creatives and AI vendors.
In the lawsuit, filed on December 27, 2023, the Times accused generative AI vendor OpenAI and its then-main backer, Microsoft, of using millions of its copyrighted articles without permission to train their models. The Times, America's largest newspaper and news website, claimed that OpenAI's generative AI tools compete directly with its publishing business. In response, OpenAI claimed that the news publisher was stifling innovation.
The Times' lawsuit was among several filed against AI companies for "stealing" creative works, including music and artwork, and using them to train AI models. In 2023, image model vendor Stability AI was sued by multiple companies, including Getty Images, for copyright and trademark infringement. Author and comedian Sarah Silverman also sued OpenAI for using her books to train its large language models (LLMs). Other authors sued generative AI vendors, including Anthropic, likewise condemning the use of their creative works, without permission, as training data for AI models.
Each lawsuit created ripple effects, giving authors, publishers and composers impetus to demand permission and even some form of compensation before their work could be used to train an AI model that could make them irrelevant or, in the case of The New York Times, jeopardize their core business.
More than two years after the Times sued OpenAI and Microsoft, and three years after Getty Images sued Stability AI, there is still no definitive conclusion. It is unlikely that these cases, still being litigated, will reach one this year. However, the licensing partnerships that have begun to form between creatives and model makers are likely to keep appearing.
More emphasis on how LLMs transform content
The question of fair use when AI model makers train these models will depend on the transformative power of the technology. Fair use is a legal principle that allows a party to use copyrighted material without the owner's permission for purposes such as news reporting, research or other uses that serve the public interest. The related concept of transformative use holds that if AI transforms the original training data into something completely new or different, the use is valid. While no precedent was set, in the Anthropic case the judge ruled that fair use could be upheld because Anthropic had transformed the data.
“In 2026, courts can clarify how they distinguish ‘transformative’ training from substitute uses, especially when models are general-purpose rather than direct competitors,” said Kashyap Kompela, CEO and founder of RPA2AI Research.
The judicial system was already weighing the transformative power of AI technology in 2025. For example, in June, U.S. District Judge William Alsup ruled that, because of the transformative power of LLMs, fair use was a plausible argument in a case brought by a group of authors against Anthropic. However, the judge let the case go to trial over Anthropic's use of pirated books. In September, Anthropic agreed to pay $1.5 billion to authors, an amount considered one of the largest copyright settlements in American history.
The settlement shows that although fair use will remain an important marker and point of argument in these AI lawsuits, in 2026 courts will focus more on how the data is collected, such as whether it was pirated or whether the training data violated contractual agreements, Kompela said.
More legal settlements
The Anthropic settlement also indicates that more settlements could arrive in 2026.
“A large settlement resets expectations at the plaintiff bar and in the litigation-finance ecosystem, increasing the pressure to resolve cases once key facts are established,” Kompela said.
Model makers aren't the only ones considering a compromise; many publishers and authors will also use the Anthropic settlement as a benchmark when deciding whether to go to trial.
The main reason most parties push for a settlement is that a verdict has a huge impact not only on the case at hand but also on the entire industry.
“Litigation is a risk for everyone, and the risk is that you can set a bad precedent for yourself and for the rest of the parties who are aligned with you,” said Michael McCready, owner of McCready Law in Chicago.
The challenge is that if the creatives win, it could mean financial crisis or even bankruptcy for some AI companies, especially those without the strong financial backing of large AI vendors such as Anthropic. And if the AI vendors win, creatives such as publishers, musicians, writers and artists get nothing.
“There’s really a lot at stake here,” McCready said. “It makes sense for both sides to reach a negotiated settlement.”
However, not all cases will settle. "At some point, someone will take one up through every level, and our first definitive decision will shape how these issues are addressed in the future," he added.
One publisher that will likely settle in 2026 is The New York Times, which filed its lawsuit against OpenAI and Microsoft, said Michael Bennett, associate vice chancellor for data science and AI strategy at the University of Illinois Chicago.
“My impression is that so many pieces of journalistic content have been allegedly violated by OpenAI and Microsoft, and it’s likely that, at the very least, this will be an incentive against the backdrop of the Anthropic settlement,” Bennett said. “This would be a major incentive for OpenAI to accept settlement terms that work for both parties.”
He said many vendors facing AI lawsuits are weighing not only the legal battles but also their reputations.
"Large AI companies, in particular, need to be concerned about the potential legal risks of these lawsuits, intellectual property-based risks, but they also need to be concerned about the potential stigma to their brands when they are accused of having pirated, stolen, or used without permission or compensation other creative works for the purposes of training their systems," he said. But any settlement will depend on what the vendor can afford financially, Kompela said.
More licensing deals, but no collective industry standards
This means there could be more partnerships and AI licensing deals in 2026. A growing number of such deals already exist, including The New York Times' deal with Amazon last May, reportedly worth $20 million to $25 million. Another major deal is Google's agreement with Reddit to use Reddit's user-generated content to train its Gemini models.
Other AI vendors are creating programs that group different publishers together. For example, the Perplexity AI Publisher Program includes partners such as the Los Angeles Times and a revenue-sharing model that pays publishers when the AI search vendor's chatbot uses their content in an AI-generated response.
The irony is that these legal disputes may continue to spawn new business opportunities beyond the end of this year.
"Some of the companies that claim that large AI companies infringed upon them in their training efforts have in many cases gone on to form new business ventures with AI companies," Bennett said. A notable example is Getty. After suing Stability, the stock image provider launched its own AI product, Generative AI by Getty Images.
Despite the increase in licensing deals, it is unlikely that there will be a collective agreement between AI vendors and creatives on the same scale as the compensation model that currently supports the music industry.
In 2001, music-sharing site Napster agreed to pay $26 million to settle lawsuits over illegal music sharing. The deal later fell through after Napster filed for bankruptcy and a judge blocked its acquisition by Germany-based multimedia group Bertelsmann, but it laid the groundwork for the compensation model that protects the music industry today.
The AI industry isn't ready for that kind of model, Bennett said, and getting there will be challenging for creatives.
"I would be surprised if we saw that scale, that ambition, because of the wide range of materials that have been sampled to train models, or simply directly appropriated," he said. AI model makers use a variety of written texts to train models, and not all of those texts receive the same level of protection. For example, journalistic writing receives less protection than fiction.
"Those differences will make it difficult for a very large group of content creators to come together," Bennett added. "I wouldn't expect millions of people, or anything like that. But you can imagine something smaller, thousands of people."
On the other hand, in the creative world there may be consensus around “enforceable dataset transparency, scalable licensing for high-value corpora like publishers, music catalogs and stock libraries, and output-side guardrails like provenance tools, watermarking and restrictions on artists”, Kompela said.
Emphasis on big issues
Bennett said it is also highly likely that 2026 could be the year when intellectual property lawsuits decrease somewhat as the tech world and government regulators turn their attention to other big problems affecting society due to the use of AI.
These include the impacts of generative AI on employment, education and energy production.
Another type of lawsuit that could gain prominence involves algorithmic bias. One current example is Mobley v. Workday, in which Derek Mobley claims that Workday's screening tool discriminated against his job applications. Another case highlighting a different type of bias involved two Black women in Massachusetts, Mary Louise and Monica Douglas, who in 2022 sued SafeRent Solutions, a tenant screening company, because its algorithm was biased against Black tenants. The company later settled for $2.275 million.
"Many bias cases end up resolved through operational measures – audits, monitoring, usage limits – rather than a blanket 'AI is illegal' ruling," Kompela said.
James Cooper, a professor at California Western School of Law, said that regardless of the type of AI lawsuit, what is clear is that some clarity must emerge about AI technology's use of creative works.
Currently, local and regional jurisdictions are being forced to decide what is acceptable in most of these cases. While many are waiting to see how the lawsuits play out, many say the legislative branch needs to get involved to create an effective regulatory framework. "AI is advancing rapidly, and now is the time for our regulators to do their job rather than relying on the judiciary to deal with this rapidly evolving technology," Cooper said.
He said that while courts are doing most of the work in resolving complex issues of IP ownership, lawmakers and regulators need to do more to provide binding guidance that all vendors and creatives can follow.
However, politics could hinder any strong action from Congress, McCready said.
“There are too many interests at stake here and it’s almost impossible to get consensus on anything these days, so I don’t think Congress is touching this with a 10-foot pole,” he said.
