Get paid or sue? How the news business is combating the threat of AI

L.A. Taco editor Javier Cabral in the alleyway behind the Figueroa Theatre in Los Angeles on Sept. 30, 2020. (Mariah Tauger/Los Angeles Times/TNS)

Journalist Javier Cabral wanted to test Google’s much-hyped, experimental artificial intelligence-powered search results. So he typed out a question about a topic he knew intimately: the Long Beach bakery Gusto Bread’s coffee.

In less than a second, Google’s AI summarized information about the bakery in a few sentences and bullet points. But according to Cabral, the summary wasn’t original: it appeared to be lifted from an article he wrote last year for the local food, community and culture publication L.A. Taco, where he serves as editor in chief. He had spent at least five days reporting that feature, arriving at 4 a.m. to document the bread-making process.

As Cabral saw it, the search giant’s AI was ripping him off.

“The average consumer that just wants to go check it out, they’re probably not going to read (the article) anymore,” Cabral said in an interview. “When you break it down like that, it’s a little enraging for sure.”

The rise of AI is just the latest existential threat to news organizations such as Cabral’s, which are fighting to survive amid a rapidly changing media and information environment.

News outlets have struggled to attract subscribers and advertising dollars in the internet age. And social media platforms such as Facebook, on which publishers depended to get their content to a massive audience, have largely pivoted away from news. Now, as companies including Google, Microsoft and ChatGPT maker OpenAI drive the growth of AI, publishers fear catastrophic consequences from programs that automatically scrape information from their archives and deliver it to audiences for free.

“There’s something that’s very fundamentally unfair about this,” said Danielle Coffey, president and chief executive of the News/Media Alliance, which represents publications including the New York Times and the Los Angeles Times. “What will happen is there won’t be a business model for us in a scenario where they use our own work to compete with us, and that’s something we’re very worried about.”

Tech companies leading the charge on AI say their tools do not infringe copyright and can drive traffic to publishers.

Google said in a statement that it designed its AI Overviews, the summaries that appear when people enter search queries, to “provide a snapshot of relevant information from multiple web pages.” The company also provides links alongside the summaries so people can learn more.

AI and machine learning could give publishers useful tools for research and for generating reader recommendations. But for many journalistic outlets, the AI revolution is yet another example of tech behemoths positioning themselves as middlemen between content producers and their consumers, and then taking the spoils for themselves.

“For the past 20 years, Big Tech has dictated the business model for news by essentially mandating how news is distributed, either through search or social, and this has turned out to be pretty disastrous for most news organizations,” said Gabriel Kahn, a professor at USC’s Annenberg School for Communication and Journalism.

To respond to the problem, news organizations have taken dramatically different approaches. Some, including the Associated Press, the Financial Times and News Corp., the owner of the Wall Street Journal and Dow Jones, have signed licensing deals to allow San Francisco-based OpenAI to use their content in exchange for payment. Vox Media and the Atlantic have also struck deals with the firm.

Others have taken their fights to court.

The New York Times in December sued OpenAI and Microsoft, alleging that both companies used its articles to train their digital assistants and shared text of paywalled stories with users without compensation. The newspaper estimated that those actions resulted in billions of dollars in damages.

Separately, last month Forbes threatened legal action against AI startup Perplexity, accusing it of plagiarism. After receiving Forbes’ letter, Perplexity said it changed the way it presented sources and adjusted the prompting for its AI models.

The company said it has been developing a revenue-sharing program with publishers.

The New York Times said in its lawsuit that its battle against AI isn’t just about getting paid for content now; it’s about protecting the future of the journalism profession.

“With less revenue, news organizations will have fewer journalists able to dedicate time and resources to important, in-depth stories, which creates a risk that those stories will go untold,” the newspaper said in its lawsuit. “Less journalism will be produced, and the cost to society will be enormous.”

OpenAI said that the New York Times’ lawsuit was without merit and that it has been unable to reproduce examples the newspaper has cited of ChatGPT regurgitating paywalled articles. The company said publishers have a way to opt out of their sites being used to train AI tools. Microsoft did not respond to a request for comment.

“Microsoft and OpenAI have the process entirely backwards,” Davida Brook, a partner at law firm Susman Godfrey, which is representing the New York Times, said in a statement. “Neither The New York Times nor other creators should have to opt out of having their works stolen.”

The legal war is spreading. In April, eight publications owned by private equity firm Alden Global Capital also accused OpenAI and Microsoft of using and providing information from their news stories without payment.

In some cases, OpenAI’s chat tool provided incorrect information attributed to the publications, Frank Pine, executive editor for MediaNews Group and Tribune Publishing, said in a statement. For example, according to Pine, OpenAI said that the Mercury News recommended injecting disinfectants to treat COVID-19 and the Denver Post published research suggesting that smoking cures asthma. Neither publication has made such claims.

“(W)hen they’re not delivering the actual verbatim reporting of our hard-working journalists, they misattribute bogus information to our news publications, damaging our credibility,” Pine said.

OpenAI said that it was “not previously aware” of Alden’s concerns and that it is “actively engaged in constructive partnerships and conversations with many news organizations around the world to explore opportunities, discuss any concerns, and provide solutions.”

One such partnership is OpenAI’s recent deal with News Corp., which allows the tech company’s tools to display content from the publisher’s outlets in response to user questions and to access content from the Wall Street Journal, New York Post and publications in the United Kingdom and Australia to train its AI models. The deal was valued at more than $250 million over five years, according to the Wall Street Journal, which cited unnamed sources. News Corp. and OpenAI declined to comment on the financial terms.

“This landmark accord is not an end, but the beginning of a beautiful friendship in which we are jointly committed to creating and delivering insight and integrity instantaneously,” Robert Thomson, chief executive of News Corp., said in a statement.

“We are committed to a thriving ecosystem of publishers and creators by making it easier for people to find their content through our tools,” OpenAI said in a statement.

Although OpenAI has cut deals with some publishers, the tech industry has argued that it should be able to train its AI models on content available online and bring up relevant information under the “fair use” doctrine, which allows for the limited reproduction of content without permission from the copyright holder.

“As long as these companies aren’t reproducing verbatim what these news sites are putting out, we believe they are well within their legal rights to offer this content to users,” said Chris MacKenzie, spokesman for Chamber of Progress, an industry group that represents companies including Google and Meta. “At the end of the day, it’s important to remember that nobody has a copyright on facts.”

But outlets including the New York Times reject such fair-use claims, arguing that in some cases the chatbots do reproduce their content, unfairly profiting from their thoroughly researched and fact-checked work. The situation is even more difficult for smaller outlets such as L.A. Taco, which can’t afford to sue OpenAI or develop their own AI platforms.

Located in L.A.’s Chinatown with four full-time workers and two part-timers, L.A. Taco operates on a tight budget; its publisher doesn’t take a salary. The site makes most of its money through memberships, so if people are getting the information directly from Google instead of paying to read L.A. Taco’s articles, that’s a major problem.

Legislation is another potential way to deal with Big Tech’s disruption of the journalism industry. The California News Publishers Assn., of which the Los Angeles Times is a member, is sponsoring a state bill known as the California Journalism Preservation Act, which would require digital advertising giants to pay news outlets for accessing their articles, either through a predetermined fee or through an amount set by arbitration. Most publishers would have to spend 70% of the funds received on journalists’ salaries.

Another bill lawmakers are considering would tax large tech platforms for the data they collect from users and pump the money into news organizations by giving them a tax credit for employing full-time journalists.

“The way out of this is some type of regulation,” USC’s Kahn said. “Congress can’t get anything done so that basically gives these platforms free rein to do what they want with very little consequence.”

___

© 2024 Los Angeles Times

Distributed by Tribune Content Agency, LLC.