AI Deepfakes Are Already Messing With the 2026 Midterms

POLICY · FEATURED

Saad Amjad

4/5/2026 · 3 min read

This isn't a warning about the future. It's already happening.

AI-generated deepfake videos are showing up in real campaign ads for the 2026 U.S. midterm elections, and they're getting harder to spot. A Reuters investigation published last week laid out just how far things have gone, and the picture isn't great.

The most talked-about example: the National Republican Senatorial Committee released an AI-generated video of Texas Senate candidate James Talarico that showed what looked like a real person speaking directly into a camera for over a minute. The problem? Talarico never filmed that video. The whole thing was fabricated using AI, with a small "AI generated" label tucked into the bottom corner in a font most viewers would never notice.

And it's not the only one.

A Pattern, Not an Incident

The Talarico ad was actually one of at least five confirmed deepfake incidents across the 2026 cycle so far, spread across races in Texas, Georgia, and Massachusetts.

In Georgia, Representative Mike Collins released a fabricated video of Senator Jon Ossoff appearing to say he voted to keep the government shut down. His campaign called it satire. In Texas, Senator John Cornyn put out an AI-generated music video about his primary rival. In Massachusetts, a Republican gubernatorial candidate ran a radio ad featuring a deepfake voice of Governor Maura Healey with no disclosure at all.

The throughline here is clear. Campaigns are treating deepfakes as a standard tool in the playbook, not some fringe experiment.

Voters Can't Tell the Difference

This is the part that should make everyone uncomfortable.

A 2025 study in the Journal of Creative Communications found that people consistently struggle to identify deepfake videos, and their opinions shift based on what they see, even when it's fake. Daniel Schiff, a Purdue University professor who has researched thousands of deepfakes, put it bluntly: the risk of damage to election credibility is being supercharged by this technology.

What looked obviously fake 18 months ago now passes as real to most people scrolling through social media. One analysis of the Talarico ad found the only detectable flaw was a subtle audio sync issue that required an expert to spot.

The next version won't have that flaw.

The Regulation Gap

Here's where it gets frustrating. There is still no federal law that specifically addresses AI-generated content in political advertising. None.

The TAKE IT DOWN Act, signed in May 2025, does criminalize AI-generated intimate imagery. But it doesn't cover political content at all.

At the state level, 28 states have passed some form of deepfake legislation, but the rules are all over the place. Most require disclosure rather than an outright ban. Some apply only to the creators of the content, others extend liability to distributors. A few states, like Minnesota, make it illegal to use AI to show a candidate doing something they didn't actually do without consent. But enforcement is inconsistent, and the laws haven't been seriously tested yet.

Texas has one of the strictest state laws on political deepfakes, but it only kicks in within 30 days of an election. The Talarico ad dropped months before Election Day, falling outside that window entirely.

Platforms Aren't Helping Much Either

Meanwhile, both Meta and X have scrapped professional fact-checking in favor of user-generated community notes. They label some AI content, but the labels are easy to miss. Once a deepfake clip gets reposted without the disclaimer, it moves through feeds as if it were real. By the time fact-checkers respond, the video has already been seen by millions.

The Bigger Picture

Deepfakes in elections used to be a theoretical concern, a "what if" for policy papers and tech conferences. That phase is over.

Campaigns have decided that synthetic media is a legitimate tool. Political strategists on both sides say these videos are cost-effective, easy to produce, and good at getting attention. Even the outrage cycle helps, because every controversy gives the ad more reach.

The honest answer to "can you trust what you see in a political ad" is no, and there's no clear path back to yes anytime soon. This story is going to get louder as November gets closer.