Elections & Democracy March 30, 2026 · 6 min read

Deepfake Political Ads Are Here. And There Are Almost No Rules.

A Reuters investigation published March 28, 2026, confirmed what many feared: AI-generated deepfake videos are now active tools in US midterm campaign ads. A candidate never filmed a video. Voters watched it anyway. This is what that means.

[Image: Split digital screen showing a real and an AI-synthesized politician speaking, fractured by glowing teal fault lines against a dark background]

The video looked real. Texas Democratic State Representative James Talarico stood in front of a Texas flag and spoke directly to camera. The problem: he never filmed it. The clip was an AI-generated ad produced by the National Republican Senatorial Committee (NRSC), which used deepfake technology to put words from Talarico's old social media posts into his mouth, delivered on screen in his likeness.

The disclaimer "AI generated" appeared in small font in the lower-right corner. Easy to miss at a glance, easier still when you're scrolling.

This is where we are in March 2026, eight months before the US midterm elections that will decide control of Congress.

Three Real Deepfake Ads, Zero Federal Rules

The NRSC's Talarico ad is one of at least three recent political ads from national Republicans that use deepfake technology, according to a Reuters review of publicly available campaign material. A second ad, from the campaign of Republican Representative Mike Collins of Georgia, showed Democratic Senator Jon Ossoff appearing to say: "I just voted to keep the government shut down. They say it would hurt farmers, but I wouldn't know. I've only seen a farm on Instagram." Ossoff never said any of that.

Collins' spokesperson did not apologize. In a statement, the campaign said it would "be at the forefront embracing new tactics and strategies that pierce through lopsided legacy media coverage."

On the Democratic side, California Governor Gavin Newsom has used AI-generated videos to mock President Trump on social media. But the national Democratic campaign committees have not yet deployed the same tactic in midterm ads.

The legal landscape around all of this: almost empty. There is no federal regulation governing the use of AI in political advertising. What exists is a patchwork of state-level laws, twenty-eight of them as of late March 2026, most untested in court and inconsistently written. Social media platforms like Meta and X do label some AI-generated political content, but both have dismantled their professional fact-checking programs, replacing them with community-sourced notes that move far slower than viral content.

Why Deepfake Ads Work on Voters

A 2025 peer-reviewed study published in the Journal of Creative Communications found that people struggle to identify deepfake videos and that their political opinions are measurably affected by this type of misinformation. This is not a theoretical risk. Voters who see a fabricated video of a candidate saying something controversial carry that impression even after a correction.

The mechanics are straightforward. Modern AI video synthesis tools can clone a person's face, voice, and mannerisms from a handful of reference clips. What required a Hollywood budget five years ago now costs a campaign a few hundred dollars and an afternoon. The output does not need to be perfect to be effective. It needs to be good enough to pass a casual scroll.

Daniel Schiff, a Purdue University professor who has studied thousands of deepfakes, put it plainly to Reuters: "The types of damage that we can do to the rigor and credibility of elections and democratic systems... very much risks being supercharged."

Political strategists, meanwhile, openly describe the technology as "persuasive, time-effective, and cost-effective." Some argue it is a new form of political satire. Others note it puts words in real people's mouths for explicit electoral gain. The line between satire and deception gets harder to draw when the video looks indistinguishable from reality.

What Voters Can Actually Do Right Now

Waiting for federal legislation is not a strategy. The election is in November. Here are the practical checks worth doing before sharing any political video:

  • Check the original source. If a video shows a candidate saying something surprising, search for the clip on their official accounts or a credible news outlet. If you cannot find it independently verified, treat it as suspect.
  • Look for AI disclaimers, then question them. The NRSC's ad did include a small-print disclaimer. Most deepfake ads will not. The presence of a disclaimer in tiny font does not mean the content is acceptable. The absence of one makes things worse.
  • Watch for unnatural mouth movement and audio sync issues. Current AI video tools still struggle with teeth, tongue, and the precise way lips move at the edges. Slow down video playback to 0.5x if something looks off.
  • Use detection tools before forwarding. Upload suspicious images or screenshots to an AI detector before sharing; if you want to automate that step, there's a sketch after this list. It takes 30 seconds and can stop a fabricated clip from spreading through your network.
  • Apply the same skepticism to satire. "It's obviously satire" is increasingly unreliable as a defense when the video looks photorealistic. Context gets stripped the moment a clip is shared out of its original platform.
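
If you're comfortable with a little scripting, the detector step can be automated. The sketch below pulls a few frames from a suspicious clip and posts them to a detection API. The endpoint URL and the "manipulation_score" response field are placeholders, not a real service: swap in whichever detection tool you actually use.

    # Minimal sketch of the "check before you forward" step. The detector URL and the
    # "manipulation_score" response field are hypothetical placeholders, not a real API;
    # substitute whichever detection service you actually use.
    import cv2        # pip install opencv-python
    import requests   # pip install requests

    DETECTOR_URL = "https://example.com/api/v1/detect"  # hypothetical endpoint

    def sample_frames(video_path, every_n=30, limit=5):
        """Grab a handful of JPEG-encoded frames from a suspicious clip."""
        cap = cv2.VideoCapture(video_path)
        frames, i = [], 0
        while len(frames) < limit:
            ok, frame = cap.read()
            if not ok:
                break
            if i % every_n == 0:
                encoded, buf = cv2.imencode(".jpg", frame)
                if encoded:
                    frames.append(buf.tobytes())
            i += 1
        cap.release()
        return frames

    def check_clip(video_path):
        """Send sampled frames to the detector and return the highest score seen."""
        scores = []
        for jpg in sample_frames(video_path):
            resp = requests.post(DETECTOR_URL,
                                 files={"image": ("frame.jpg", jpg, "image/jpeg")})
            resp.raise_for_status()
            scores.append(resp.json().get("manipulation_score", 0.0))
        return max(scores) if scores else None

    if __name__ == "__main__":
        print("Highest manipulation score:", check_clip("suspicious_ad.mp4"))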

Senator Warner's Warning and What Comes Next

Senator Mark Warner has publicly pushed tech companies to act on deepfakes ahead of November, calling voluntary industry efforts like the Coalition for Content Provenance and Authenticity (C2PA) and the 2024 Tech Accord "imperfect and no substitute for comprehensive federal legislation." As of March 2026, that legislation does not exist. The C2PA standard, which embeds cryptographic provenance data into media files at creation, offers a longer-term path to verification. But adoption remains limited, and most campaign ads are not being generated by tools that support it.
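
For the technically inclined, checking whether a file carries any provenance data at all is simple enough to sketch. The snippet below is a rough heuristic built on one assumption: C2PA manifests embedded in a file include the spec's "c2pa" manifest-store label as plain bytes. It does not validate cryptographic signatures (that requires the C2PA SDK or the open-source c2patool), and most of today's campaign clips will simply come back empty.

    # Rough heuristic only: looks for the "c2pa" manifest-store label in a file's bytes.
    # It does NOT validate signatures (use the C2PA SDK or the open-source c2patool for
    # that), and a missing label proves nothing; it just tells you whether provenance
    # data appears to be embedded at all.
    from pathlib import Path

    def has_c2pa_manifest(path, chunk_size=1 << 20):
        """Scan a media file for the ASCII 'c2pa' label used by embedded manifests."""
        marker = b"c2pa"
        tail = b""
        with Path(path).open("rb") as f:
            while chunk := f.read(chunk_size):
                if marker in tail + chunk:
                    return True
                tail = chunk[-len(marker):]  # keep overlap so a label split across chunks isn't missed
        return False

    if __name__ == "__main__":
        for name in ["campaign_ad.mp4", "screenshot.jpg"]:  # example file names
            if Path(name).exists():
                status = "embedded provenance found" if has_c2pa_manifest(name) else "no embedded provenance"
                print(f"{name}: {status}")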

The most likely short-term development is more deepfake ads, from both parties, as the November election approaches and AI generation tools get cheaper and faster. Voters in contested Senate races across Texas, Georgia, and other swing states will encounter them whether or not they expect to.

The ability to detect synthetic media is no longer an abstract technical skill. It is a basic literacy requirement for anyone participating in an election this year.

FakeOut lets you check images and video frames for AI manipulation directly from your phone. It's free on Android, with iOS beta in development. If something you see in the next few months looks off, run it through FakeOut before forwarding it.