The "QUIET Act" requires disclosure when AI is used in robocalls and doubles penalties for illegal robocalls or texts that use AI to impersonate someone with the intent to defraud, cause harm, or gain something of value.
Eric Sorensen
Representative
IL-17
The "QUIET Act" aims to regulate robocalls using artificial intelligence by requiring disclosure when AI is used to mimic a real person in robocalls or text messages. It mandates that callers reveal the use of AI at the beginning of the communication. The Act also doubles the penalties for illegal robocalls or texts that use AI to impersonate individuals with the intent to defraud, cause harm, or wrongfully obtain anything of value.
The "Quashing Unwanted and Interruptive Electronic Telecommunications Act," or QUIET Act, is taking aim at those annoying robocalls, especially the ones using artificial intelligence to sound like real people. This bill is all about making sure you know when you're talking to a bot, and it's hitting scammers where it hurts – their wallets.
The core of the QUIET Act is transparency. If a robocall uses AI to mimic a human voice, the caller must disclose this right at the start of the call, or in the initial text message. This is crucial because, let's face it, AI is getting really good at sounding human. Think of it like this: if you get a call from what sounds like your neighbor, but it's actually a bot trying to sell you something, you deserve to know that upfront. The requirement applies to calls and texts made using automated dialing systems or an artificial or prerecorded voice (Section 2). It also covers messages with images, sounds, or other information sent to or from a device identified by a phone number or email address, including SMS, MMS, and RCS messages (Section 2). Real-time voice or video calls are exempt.
This is where the QUIET Act gets serious. If a robocall or text uses AI to impersonate someone with the intent to defraud, cause harm, or wrongfully obtain something of value, the penalties are doubled (Section 3). We're talking about both FCC fines and potential criminal fines. For example, if a scammer uses an AI-generated voice impersonating a bank representative to trick someone into handing over their account details, they'll face much stiffer consequences under this new law.
Imagine a small business owner constantly bombarded with robocalls, some of which are now using AI to sound like potential clients or suppliers. This bill could significantly cut down on that noise and deception. Or consider an elderly person who's more vulnerable to scams – knowing upfront that a call is AI-generated could be a crucial safeguard.
However, there are potential challenges. How do you prove intent to defraud? The bill's language around "significant human intervention" (Section 2) could also become a loophole. Will companies try to tweak their AI just enough to avoid the disclosure requirement? These are details that will need to be ironed out in practice.
Overall, the QUIET Act represents a significant step toward protecting consumers in the age of increasingly sophisticated AI. By demanding transparency and increasing penalties for misuse, it aims to level the playing field and give us all a little more peace and quiet.