The rise of artificial intelligence (AI) tools sophisticated enough to impersonate individuals has led to a surge in complaints about fraud
and other consumer harm. In response, the Federal Trade Commission (FTC) issued
a supplemental notice of proposed rulemaking (NPRM) to strengthen anti-fraud
measures included in its recently finalized Government and Business
Impersonation Rule.
AI-generated “deepfakes” and other emerging technologies have the potential to “turbocharge” impersonation fraud, which the agency has seen grow more pervasive and cost consumers and businesses billions of dollars in recent years as AI capabilities improve and become more widely available, the FTC said in its release.
“Fraudsters are using AI tools to impersonate individuals
with eerie precision and at a much wider scale. With voice cloning and other
AI-driven scams on the rise, protecting Americans from impersonator fraud is
more critical than ever,” FTC Chair Lina Khan said in a press release. “Our
proposed expansions to the final impersonation rule would do just that,
strengthening the FTC’s toolkit to address AI-enabled scams impersonating
individuals.”
Khan was joined by FTC Commissioners Rebecca Kelly Slaughter and Alvaro Bedoya in releasing a separate statement on the matter.
“Impersonation schemes cheat Americans out of billions of
dollars every year,” the commissioners said. “Fraudsters pretending to
represent government agencies — like the Social Security Administration or the
IRS — tell targets that if they do not hand over money or their sensitive
personal information, then they could lose a government benefit, face a tax
liability, or even be arrested. Scammers also commonly claim false affiliations
with household brand names to bilk consumers for bogus services.”
Impersonation scams resulted in $2 billion in stolen funds between
October 2020 and September 2021, an 85 percent increase year-over-year, the
commissioners noted. In 2023, consumers reported $2.7 billion in losses from
imposter scams.
The FTC is asking for feedback on whether the revised rule should declare it unlawful for a company, such as an AI platform that creates images, video, or text, to provide goods or services that it knows or has reason to believe are being used to harm consumers through impersonation.
Comments the FTC received on its Government and Business Impersonation Rule raised concerns about additional threats and harms posed by bad actors who impersonate individuals, which the agency determined are not adequately addressed by the final rule’s existing provisions. The proposal is intended to help the agency deter fraud and secure redress for harmed consumers.
The rule was crafted to enable the agency to file federal court cases directly against scammers who impersonate businesses or government agencies and to force them to return funds obtained through such scams. This is particularly important given the Supreme Court’s April 2021 ruling in AMG Capital Management, LLC v. FTC, which significantly limited the agency’s ability to require defendants to return money to injured consumers, the FTC explained.
Specifically, the rule would allow the FTC to directly seek
monetary relief from scammers that:
· Use government seals and business logos when interacting with consumers by mail or online.
· Spoof government and business emails and web addresses, including “.gov” email addresses, or use lookalike email addresses or websites that rely on misspellings of a company’s name.
· Falsely imply affiliation with a government or business entity by using terms commonly associated with a government agency or business (e.g., stating “I’m calling from the clerk’s office” to falsely imply affiliation with a court of law).