Sen. Amy Klobuchar has spent months warning about the dangers of deep fakes. Then she became a target.
In an op-ed this week in The New York Times, the Minnesota Democrat described the shock of seeing a viral video clip that appeared to show her making vulgar remarks about actress Sydney Sweeney. The video was fake, generated by artificial intelligence, but it looked convincing enough to rack up over a million views before she could persuade some platforms to act. One labeled it AI-generated; another told her to seek “community notes.”
Speaking in a live interview, Klobuchar said the experience made clear just how vulnerable ordinary people are in the age of AI manipulation. “For regular people, this is nearly impossible,” she said. “We are literally doing nothing right now to police this.”
Klobuchar is working with Sen. Marsha Blackburn, a Republican from Tennessee, to advance legislation that would create guardrails around AI-generated content, particularly when it comes to deceptive or harmful uses. It’s one of the few issues in Congress that has drawn bipartisan agreement in a divided era. Both senators have argued that the U.S. is falling behind other countries that already require labeling for AI-generated videos and mandate takedowns of malicious deep fakes.
The stakes, Klobuchar insists, go far beyond personal embarrassment. She pointed to real-world examples where fabricated videos or audio have caused chaos: a school principal suspended over a fake video, Sen. Marco Rubio’s voice cloned for calls to foreign officials, scams targeting military families, and the looming possibility of deep fakes used to manipulate financial markets or elections. “You could have a bank CEO causing a bank run. You could have an influencer telling kids to use drugs. And people wouldn’t know what’s real,” Klobuchar said.
The senator stresses that protections can be crafted within the bounds of free speech. Parody and satire, she argues, should remain protected, but false pornography or deceptive videos that misrepresent public officials ought to be taken down—or at minimum labeled—so viewers understand what they’re seeing. “Not all speech is protected,” she noted. “You can’t yell fire in a crowded theater. You can’t create child pornography. We can act in certain spaces.”
Underlying all of it, she argues, is public trust in what people see and hear. A flood of convincing deep fakes threatens to erode that trust at a time when democracy already feels fragile. “These things can result in scams,” she said. “They can result in a democracy where people don’t know if their elected official really said that or not.”
Her message is blunt: the technology is moving faster than the rules. And unless Congress steps in, Americans will be left defenseless in a world where anyone can be made to say—or appear to do—almost anything.