US senators urge regulators to investigate possible AI antitrust violations

The U.S. government is taking note of generative AI's potential harms to fields like journalism and content creation. Senator Amy Klobuchar, joined by seven Democratic colleagues, has urged the Federal Trade Commission (FTC) and the Department of Justice to investigate generative AI products like ChatGPT for possible antitrust violations, according to a press release.

“Recently, several dominant online platforms have introduced new generative AI features that respond to user queries by summarizing or, in some cases, simply regurgitating online content from other sources or platforms,” the letter states. “The introduction of these new generative AI features further threatens the ability of journalists and other content creators to earn compensation for their essential work.”

Lawmakers went on to note that traditional search results drive users to publishers’ websites while AI-generated summaries keep users on the search platform “where that platform alone can profit from the user’s attention through advertising and data collection.”

The senators also argue that these products distort content markets. When a generative AI feature answers a query directly, the content creator, whose work has been pushed lower on the page, is left competing against a summary generated from that very work.

The fact that AI can scrape information from news sites without even directing users to the original source could be a form of “exclusionary conduct or a method of unfair competition in violation of antitrust laws,” the lawmakers concluded. (It also potentially violates copyright laws, but that’s a whole other legal battle.)

Lawmakers have already proposed a handful of bills aimed at protecting artists, journalists, and others from unauthorized use of generative AI. In July, three senators introduced the COPIED Act to combat and monitor the rise of AI content and deepfakes. Later that month, a group of senators introduced the NO FAKES Act, a bill that would make it illegal to digitally recreate a person’s voice or image without their consent.

AI poses a particularly significant risk to journalism, both local and global, by removing revenue streams that enable original and investigative reporting. The New York Times, for its part, cited examples of ChatGPT providing users with “near-verbatim” snippets of paid articles. OpenAI recently admitted that it would be impossible to train generative AI without copyrighted material.