Why AI chatbots are getting more political

The biggest and most prominent companies in the highly competitive AI space are becoming increasingly politicized. Anthropic, the company behind the Claude chatbot, has been involved in a high-profile conflict with the Pentagon, while OpenAI and Palantir have seemingly become supportive of the Trump administration. But the motivations behind these maneuvers, industry analysts say, are not entirely clear.

Over the past few weeks, the major AI companies that are also U.S. defense contractors have begun positioning themselves around the Iran war launched on Feb. 28. The first shot, so to speak, came in Anthropic’s conflict with the Pentagon after the company refused to have its technology used for fully autonomous killer drone swarms and the mass surveillance of Americans.

That led to the Department of Defense designating Anthropic as a supply chain risk and barring the use of its tools at the Pentagon, following a six-month phase-out. Anthropic’s current defense contracts total only about $200 million, a barely discernible amount in the trillion-dollar Pentagon budget. That issue is currently being litigated, but shortly after the Pentagon’s designation, OpenAI cut a new defense deal, the details of which are not fully public.


Anthropic has tried to make peace with Team Trump, even saying that the company “has much more in common with the Department of War [sic] than we have differences.” But many liberal-leaning or anti-Trump users flocked to Anthropic’s Claude app in the days after the announcement. Sen. Brian Schatz, D-Hawaii, for instance, said in a tweet that he had “just downloaded Claude.”

Meanwhile, Palantir CEO Alex Karp, a vocal supporter of Trump and a favorite target of mockery by the online left, has said that he supports the current U.S. war with Iran, explaining that he sees it as distinct from American regime-change wars of the past.

It seems as though these companies are positioning themselves for or against the current administration to some degree, with Anthropic taking a guardedly oppositional stance and OpenAI and Palantir seemingly embracing the Trump agenda. But many factors are behind these decisions, analysts say.


These maneuvers come as AI industry cash appears to be flooding into the 2026 midterm elections. A recent Washington Post report found that 19 of 20 primary candidates backed by AI companies, a mix of Democrats and Republicans, had won their races.

Ari Abelson, the co-founder of OpenOrigins, a company that helps identify AI-generated and deepfake content, told Salon that he sees the political maneuverings of these companies as a byproduct of shifting business models.

Anthropic adopted positions likely to be popular with a user base of educated professionals, Abelson said, a demographic that leans Democratic. OpenAI made a different calculation, in his view, concluding that users didn’t care about its relationship with the Pentagon.

As many in Silicon Valley have become more comfortable with military contracts, Abelson said, they may have lost touch with public sentiment on privacy and surveillance issues, partly because military contracts offer a reliable revenue stream.

“A lot more money is being piled into war,” he said. “We also have a new technology sphere that’s opening up. It’s legitimizing this need, because modern warfare is extremely different and much more technical than any form of warfare we’ve seen before.”

Daniel Schiff, a professor of technology policy at Purdue University and the co-director of its Governance and Responsible AI Lab, told Salon he sees a variety of factors at work. Anthropic’s dispute with the government is more than “just notional or just symbolic,” he said, and carries real risk for the company.


“If they lose short-term contracts, if they lose longer-term contracts, if, let’s say, companies are even afraid to work with them, because they’re afraid of getting pushback from the government, then you get a chilling effect,” he said.

Anthropic may be betting on the possibility that it will reap future benefits under a Democratic administration in 2029, Schiff suggested, while its competitors have clearly decided to side with the current powers that be. These moves are not simply about marketing and P.R., he added, but reflect more complicated factors.

While Anthropic is likely to lose billions in revenue after breaking with the Pentagon, Schiff said, an explanation for this decision can be found in the company’s founding agenda.

Anthropic was launched in 2021, in large part over growing concerns about AI safety. The new company hired away talent who had grown disillusioned at other companies, and has struck a position far more supportive of government regulation than other leading AI firms.


That has not exempted Anthropic from criticism, especially after it recently dropped a supposed core commitment that it would not roll out a model that outstripped the company’s ability to control it. That change, in late February, was largely overshadowed by the company’s conflict with the Pentagon.

Claude itself admitted, in a recent interaction with Sen. Bernie Sanders, I-Vt., that even AI companies that espouse a moral commitment to something like privacy shouldn’t necessarily be trusted.

“You’re asking people to trust companies whose entire business model depends on extracting value from your personal data,” the chatbot told Sanders. “There’s an inherent conflict of interest. An AI company says they’ll protect your privacy while simultaneously training their models on that same personal information to build better products they can sell or monetize.”

Schiff compares the way AI companies are shifting with the political winds to the ways many companies broadly adopted DEI programs, only to drop them after Donald Trump returned to the White House. He believes the military will end up using Anthropic’s tools despite the dispute, since they have already been integrated into military systems and commanders will push to use what they believe are the best tools.


David Krueger, the founder of Evitable, a nonprofit pushing for an AI moratorium, said he saw Anthropic’s positioning as a potential play to attract talent. Competition for top AI researchers is steep, Krueger said, and several of the leading companies have struggled to retain employees who have come to question the ethics of their own firms and of the entire enterprise.

“Anthropic definitely wants to be seen as the good guys,” he said. “For a lot of their employees, that’s part of the appeal of working there. Their leadership, I believe, has publicly said that having this mission to do good is a good way to attract top talent. I’ve seen lots of big-name researchers moving to Anthropic in recent years.”

But the most important motivation driving the decisions of AI industry leaders, Krueger said, is how they believe AI will shape the future.

“It’s not just about money, it’s about power,” he said.  “They and the other leading AI developers believe that we’re very close, like a couple years away, from transformative AI that will completely reshape the world and basically dictate the course of future events. And so whoever controls those AI systems, how they end up being directed, could have a huge impact.


“I think they think that’s more important on some level than the business side of things and the money. They need money to keep scaling up their AI projects, but ultimately they want to have power and influence over the future, and that means having power over AI.”

In this imagined future, which Krueger sees as dangerous, national security agencies and the biggest AI companies will dominate public life. Today’s AI executives are fighting to make sure that their company will have a seat at the table. He believes that neither AI companies nor the government will be able to control this technology in the long run.

On the other hand, Krueger said that Palantir and Karp, its CEO, have simply stopped caring about public perception, and do not mind that the company is, in his words, “widely viewed as one of the more evil tech companies.”

Palantir’s strategic decision to ally itself with the Trump administration can partly be explained by financial incentives, Krueger said, but also reflects the widespread acceptance of accelerationism in the AI industry, including the belief that even a few years in which the government lets these companies do whatever they want will allow them to reshape society in their image.


“When you see these billboards that say ‘stop hiring humans,’ those people are just giddy for the replacement of humanity with AI,” Krueger said. “There’s a general undercurrent running through a lot of the tech world that is just not in touch with what the rest of the world thinks.”

One source working at an AI-related political organization, who requested anonymity, agreed with that analysis but pointed out that companies like Palantir may face significant backlash after a coming political shift. “It is a little short-sighted for them to be so partisan,” this person said. “I don’t care which party it is, the majorities never last. It’s always going to backlash.”

In the AI space, this source added, many people suffer from a cultural echo chamber, in which researchers and executives all read the same books and imbibe the same ideas about the potential of AI and their company’s role in society.


“There’s a list of 40 books that they all read, and that’s it,” he said. “It ranges from like, something you’d read in your freshman year of philosophy to something that was a 1970s airport paperback thriller.”

The current partisan shift, he added, marks a distinct break from previous norms in the tech industry, when companies typically donated to both political parties, in the model of an investor like Marc Andreessen.
