AI fuels a new wave of political lies

In a new political ad in Georgia’s Senate race, GOP Rep. Mike Collins’ campaign released a video featuring incumbent Democratic Sen. Jon Ossoff saying he knows his vote to shut down the government will hurt farmers: “But I wouldn’t know. I’ve only seen a farm on Instagram.” Ossoff never said any of this. When challenged on spreading disinformation using Ossoff’s likeness and voice, Collins’ campaign doubled down, saying they were pleased the ad sparked conversation — proving they were either oblivious to the dangerous precedent or had simply decided to embrace it as strategy.

While political cartoonists have long created derogatory or lampoonish images of elected officials and candidates for public office, the political imagery that can be created by artificial intelligence blurs truth and fiction in unprecedented ways. AI can make falsehoods look authentic and, when used by politicians themselves, it becomes particularly harmful. AI use that started as experimentation by campaigns has evolved into something far more troubling: It now merges satire, disinformation and official messaging that misleads voters and distorts democratic discourse.

In New York City’s recent mayoral race, former Democratic Gov. Andrew Cuomo’s campaign released an ad on social media, which was later deleted, featuring purported “criminals for Zohran Mamdani” — a parade of racist caricatures that included a pimp in a purple suit, along with a drunk driver, shoplifter and domestic abuser endorsing the Democratic nominee. In one sequence, a Black man shoplifts from a bodega, his face visibly morphing mid-clip as he puts on a keffiyeh and mask before robbing the store. As AI tools grow more sophisticated, Mamdani’s election may serve as both a warning and a testament: a warning of how easily political imagery can be weaponized, and a testament to the electorate’s enduring capacity to look beyond manipulation.

In recent weeks we have seen the official X account of the National Republican Senatorial Committee post a video of Senate Minority Leader Chuck Schumer, D-N.Y., also talking about the government shutdown. “Every day gets better for us,” an AI-generated Schumer says in the video. While the quote is accurate, the image of Schumer maniacally grinning as he says it is completely fabricated.

Taking such creative license with elected officials’ images confuses the electorate about what is real. And, even if the Cuomo campaign claimed their ad represented their genuine beliefs about Mamdani supporters, portraying AI-generated individuals as real people destroys voters’ ability to distinguish fact from propaganda. It feeds the worst instincts in our politics — rewarding deception over debate, spectacle over substance. 

In these cases, political operatives are generating disinformation and misleading the public at a time when confidence in the government to protect the common good is low and was further shaken by the recent government shutdown. When AI-generated videos portray something that never happened with such realism, it stops being satire and becomes a false representation. Voters encountering these images have no way to know they’re fabricated — and that’s precisely the point.

The dangers of weaponized media manipulation are not theoretical. In Rwanda, hate radio broadcasts during the lead-up to the 1994 genocide systematically dehumanized Tutsis as “cockroaches” and spread false claims about planned attacks on Hutus. These broadcasts didn’t just report hatred. They manufactured it, creating a shared false reality that made mass violence seem not only justified but necessary. Through relentless repetition, the medium’s authority normalized the dehumanization until neighbors turned on neighbors.

When politicians deploy these tools to portray opponents as criminals, threats or caricatures, they’re not just lying. They’re constructing an alternate visual record designed to shape and control collective perception.

Today’s AI-generated political content operates with a similar psychological architecture. Imagine if those orchestrating the genocide had possessed today’s tools: fake videos of Tutsis attacking, chatbots fabricating eyewitness accounts, AI-manipulated footage of political leaders suggesting retaliation. The Rwandan radio didn’t cause the genocide on its own, but it prepared the national mind for it.

The minds of the public are now being desensitized to ridicule and to racist imagery. Another video of Schumer, this time with House Minority Leader Hakeem Jeffries, D-N.Y., had racist overtones, showing Jeffries wearing a sombrero accompanied by mariachi music. Critically, it appeared on President Donald Trump’s official X account. The video recalls the yellow journalism of William Randolph Hearst’s New York Journal — only now, what once took hours to print and eventually reached thousands can be created in seconds and seen by millions.

When elected officials themselves engage in such behavior, it further erodes moral character as a basis for evaluating our government officials, normalizes such racism and ridicule as a part of the American political process, and redefines cruelty as content — none of which serves the American people well. While satire was intended to serve as a moral check on power, now politicians themselves are using it as a tool of power, wielding both viral velocity and institutional authority.

But perhaps most insidiously, AI-generated political content blurs the line between opinion and fact. What begins as a politician’s viewpoint — Jeffries is soft on immigration, Schumer celebrates chaos — gets packaged into seemingly authentic visual “evidence.” As content like this is amplified and reshared across social media, it transforms from obvious manipulation into what resembles verified “fact.” This is truth-impersonating propaganda: personal opinions weaponized through AI to look like objective reality, posted from public official accounts that lend them institutional credibility. 

In the aftermath of the nationwide “No Kings” protests on Oct. 18, Trump posted another AI-generated video showing him as a fighter pilot dumping what appears to be feces on American protesters. Beyond the obvious cruelty of a president mocking citizens exercising their constitutional rights, the video underscores his inherent disrespect for the people he represents; an elected official’s primary concern should be advancing the American people’s interests and informing them. By modeling such frivolous, mocking, hurtful uses of AI, Trump abandons that responsibility.

Such irresponsible applications of AI reinforce the need for regulatory safeguards, though the current political climate makes federal action highly unlikely. States have the ability to mandate limitations on inappropriate AI use, but the sheer volume of AI-generated ads may soon make enforcement impractical. And a disclaimer on a lie does not undo the lie; it merely documents that deception occurred. Every day that passes without comprehensive federal action pushes us closer to a political landscape where truth becomes fundamentally unverifiable.

More than ever, we need leaders with strong moral character, personal discipline and the ability to model prudential choices about when and how to use such a powerful technology. Given that AI uses a tremendous amount of energy and water, public servants should be applying it toward solving society’s most intractable problems, not creating racist tropes or belittling people for exercising their democratic rights.

The voting public has the power to make AI ethics a defining issue that shapes norms of behavior around technology. Judicious use of AI should become a defining attribute of successful candidates in future elections. One of us co-authored “Voting for Ethics: A Guide for U.S. Voters, Second Edition,” which outlines what voters can look for in candidates using AI responsibly. Candidates using AI ethically will have transparent policies outlining their AI use, along with campaign ethics codes. Notably, Ossoff responded ethically to Collins’ campaign video falsely depicting him and announced that he would not be using AI-generated deepfakes — drawing a line that more candidates must follow.

In the United States, California has set the pace, enacting a series of laws in 2024 that make transparency and integrity in AI use not just ethical ideals but legal requirements. The European Union’s AI Act — widely regarded as the most comprehensive framework — requires all AI-generated content, including deepfakes, to be clearly labeled with watermarks or metadata. Major platforms like Facebook and TikTok must identify and flag manipulated audio and imagery, with penalties reaching up to 35 million euros for noncompliance. Already, deepfake election incidents have struck at least 38 countries. In Romania, the 2024 presidential election was annulled in part due to AI-powered interference through coordinated disinformation campaigns and manipulated online content.

Tech leaders need to own their role in the degradation of democracy and prepare for justifiable backlash and reputational damage from economic boycotts. Inaction translates to a legitimate business risk with potential legal liability as AI-generated political content continues to erode public discourse and confidence in government officials. When allegiance to power trumps commitment to principles, tech leaders shouldn’t be surprised when both users and history hold them accountable. 

The irresponsible use of AI by people in government leadership positions suggests that we are at a moment that transcends politics. What is society’s moral code? If politicians, news outlets, and civic organizations adopted a “civic covenant for the AI age,” much like current journalistic standards, could we create a healthier public square?

Like other civic duties, such as voting, verifying information before sharing can become a new norm of responsible citizenship. Ultimately, it is the vote that gives us the power to communicate to political figures how we expect them to steward this powerful technology and use it for the common good. Technology amplifies whatever moral climate it enters, and right now, that climate is being shaped from the highest office down to local leaders who seemingly treat deception as strategy. The ultimate question is not whether we can restore truth to politics; it’s whether we will act before the damage becomes irreversible.