
EFF wants FTC to consider lying chatbots ‘unfair and deceptive’ in eyes of law • The Register

The Electronic Frontier Foundation (EFF) has proposed that the FTC enforce rules prohibiting unfair business practices to punish those who operate deceptive chatbots.

Cory Doctorow, an author and activist who serves as special counsel to the EFF, said the U.S. trade watchdog should not focus on copyright law to address the privacy, fairness and labor issues that accompany generative AI applications. Instead, the regulator should rely on the authority established under Section 5 of the Federal Trade Commission Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.”

“The FTC should issue guidance stating that any company that deploys a chatbot that lies to a customer has engaged in an ‘unfair and deceptive practice’ that violates Section 5 of the Federal Trade Commission Act, with all the fines and other penalties that entails,” Doctorow wrote.

He argues that if creators of lying chatbots go unpenalized, operators have little incentive to invest in improving the technology.

But the recommendation seems fraught with consequences in light of the U.S. Supreme Court’s decision last week ending the doctrine known as Chevron deference, which directed judges to defer to regulatory agencies’ expert interpretations of ambiguities in the law.

Now judges can decide what government regulators can do when faced with an ambiguous or ill-defined law.

The FTC, in its own enforcement principles (PDF), acknowledges that its statutory authority is not precisely defined in Section 5.

“Congress chose not to define the specific acts and practices that constitute unfair methods of competition in violation of Section 5, recognizing that enforcement should evolve as markets and business practices evolve,” the agency said.

“Instead, it left the development of Section 5 to the Federal Trade Commission as an expert administrative agency, which would apply the law flexibly on a case-by-case basis, subject to judicial review.”

The EFF declined to comment on the potential impact of the Supreme Court’s decision, as did the FTC. The Register understands, however, that the FTC does not expect the ruling to have much effect on the commission’s core work, much of which involves evidentiary proceedings related to mergers and acquisitions. The agency has not relied on Chevron-based arguments in its disputes, and the courts have not issued Chevron deference opinions on matters involving the agency.

However, now that judges are tasked with interpreting regulatory powers, U.S. government agencies that take actions not specifically listed in applicable laws can expect more frequent challenges from businesses and interest groups.

Tech companies would prefer not to be regulated, but they are facing a growing number of AI-focused regulations around the world. Last month, the National Conference of State Legislatures said that the 2024 legislative session has seen AI bills introduced in at least 40 states, Puerto Rico, the Virgin Islands and Washington, D.C., and that AI laws have already been passed in six states, Puerto Rico and the Virgin Islands.

The flood of legislative proposals to regulate AI has grown significant enough that Santa Clara University law professor Eric Goldman believes it could doom the AI industry. There is also concern among industry players about executive and legislative overreach.

But as Doctorow points out, something needs to be done because generative AI chatbots already have problems.

For example, last year an Air Canada passenger booking flights to attend his grandmother’s funeral was told by the airline’s chatbot that he could claim a discounted bereavement fare retroactively, after purchasing a ticket. But the chatbot misrepresented the airline’s policy, forcing the passenger to pursue the mistakenly promised discount in Canada’s Civil Resolution Tribunal, the equivalent of a small claims court.

The tribunal found that “Air Canada failed to take reasonable precautions to ensure that its chatbot was accurate” and awarded the passenger CA$812.02.

Chatbots have also been reported to insult customers, give incorrect answers to legal questions, advise businesses to break the law, and generally fail to work very well.

Whatever the regulatory landscape, it may be reality that is holding AI back. According to a report (PDF) released last week by Goldman Sachs, spending on AI to date “has yielded little in the way of results, beyond reports of developer efficiencies.”