Hegseth threatens Anthropic over killer AI limits

The federal government is expanding its feud with those who have a problem with illegal orders. This time, Defense Secretary Pete Hegseth’s Pentagon is threatening to invoke the Defense Production Act to force Anthropic, the maker of the Claude artificial intelligence model, to tailor it to the Pentagon’s desires beyond the red lines set by the company, which include prohibitions on the mass surveillance of Americans and on fully autonomous drone swarms.

A source familiar with the discussions told Salon that, in a Tuesday meeting between Anthropic CEO Dario Amodei and Hegseth, Amodei laid out the company’s red lines. Hegseth ended the meeting by threatening either to invoke the Defense Production Act to compel Anthropic to produce an AI model in line with his desires, or to cut ties with Anthropic and contract with a different AI firm.

The conflict, however, comes into sharper focus when the red lines set by Amodei are made clear. In an interview with The New York Times’s Ross Douthat, Amodei laid them out, with Douthat asking, “What is the safeguard there to prevent essentially AI becoming a tool of authoritarian takeover inside a democratic context?”


“I’m worried about the autonomous drone swarm,” Amodei said. “The constitutional protections in our military structures depend on the idea that there are humans who would, we hope, disobey illegal orders. With fully autonomous weapons we don’t necessarily have those protections.”

Amodei went on to say he was also concerned about the potential use of Anthropic’s technology for mass surveillance, saying that it could easily identify members of the opposition based on their conversations in public.

“With AI, the ability to transcribe speech, to look through it, correlate it all, you could say ‘Oh, this person is a member of the opposition. This person is expressing this view.’ And make a map of all 100 million,” Amodei said.

In response to a request for comment from Salon, an Anthropic spokesperson confirmed the meeting between Amodei and Hegseth at the Pentagon on Tuesday.


“We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do,” the spokesperson said.

The conflict between Anthropic and the Pentagon comes as the conversation over how to integrate AI into the military is shifting rapidly. Where the discourse once centered on keeping a human “in the loop,” meaning actively involved in decision-making, it now centers on keeping a human “on the loop,” taking a supervisory role and intervening in case an AI makes a mistake.


Kenneth Payne, a researcher at King’s College London specializing in AI and national security, underscored how AI and humans make different decisions in certain military situations. For example, in a recent experiment, Payne found that AI opted to deploy tactical nuclear weapons in 95% of the scenarios he designed, something that humans have never done. (The bombs dropped on Hiroshima and Nagasaki were not “tactical” nukes.)

Payne said that, although no countries are currently considering allowing AI to deploy nuclear weapons, his research does help establish the sorts of biases that AI has in decision-making.

“I think it is important because these systems and successor systems like them will be used in decision support. So they will be used to provide information to human decision makers. And therefore it’s incumbent on us to understand how they view the world and how they weigh risk in dangerous times,” Payne said. “We focus intently on where the technology is now, not where it might be in a few years’ time.”

Payne also said that relying on AI for decision-making poses a risk in terms of accountability, as the AI, not being a person, can’t really be held accountable for the decisions it makes. While current AI systems are ill-equipped for legal and moral judgments, Payne said future models might be able to weigh the legality of an action, though perhaps not the morality of it.


“In this debate, we started by talking about the need to keep a human in the loop. And over the years, as the technology has advanced, we’ve retreated to talking about keeping humans on the loop,” Payne said.

Historically, the Pentagon has been relatively forward-thinking in its policies on AI and accountability, Owen Daniels, the associate director of analysis at Georgetown University’s Center for Security and Emerging Technology, told Salon. This is because troops are unlikely to use a tool or weapon if they believe it will result in negative repercussions for themselves.

“This is one of those areas where there’s been a lot of work behind the scenes over the past decade, but it continues to be a really pressing challenge, because if soldiers or the military don’t trust the tools, they will fundamentally not use them,” Daniels said.

The current policy, as explained by Daniels, is that humans are responsible for “maintaining appropriate levels of human judgment over the deployment of autonomous weapon systems,” meaning, from a legal standpoint, “humans are responsible for making choices to deploy AI in accordance with the laws of war, international humanitarian law and the law of armed conflict.” The issues, Daniels explained, can be compounded when AI systems like Anthropic’s Claude are integrated into other systems, such as those developed by Palantir, the surveillance company founded by Trump ally Peter Thiel.


“If you think about whether a human’s pulling the trigger versus just kind of watching and making sure nothing happens, the AI system is also making determinations about how it arrives at those decision points that, depending on the amount of transparency in the system, the human may or may not be aware of,” Daniels said.


The dust-up comes as the Trump administration and Hegseth, an Iraq War veteran and former Fox News host, have taken a special interest in the topic of illegal orders. Last November, a group of Democrats made a video reminding military members of their obligation to disobey illegal orders, in response to the U.S. attacks against boaters in the Pacific Ocean and the Caribbean. Despite the bombings ostensibly being part of a pressure campaign against Venezuela, the attacks are ongoing, even after the U.S. arrest of Venezuelan President Nicolás Maduro. The latest of these bombings came earlier this week, as the death toll has reached around 150 people.

The Justice Department responded by trying and failing to indict the Democrats, while Hegseth demoted and cut the pay of Sen. Mark Kelly, D-Ariz., a retired fighter pilot and astronaut who spoke in the video. Hegseth’s attempt to punish Kelly was blocked by a judge, though Hegseth is now appealing that ruling.


The move marks an about-face for Hegseth, who warned in 2016 that Trump might issue illegal orders. The issue carries particular weight for Hegseth because he may be implicated in war crimes stemming from the aforementioned attacks on boaters, though a full investigation has not been conducted.

During his first term, Trump pardoned convicted war criminals, including Special Operations Chief Edward Gallagher, a Navy SEAL platoon leader accused of shooting unarmed civilians, including a young girl, while deployed in Iraq. The suggestion to issue those pardons came from none other than Hegseth.

The Department of Defense did not respond to a request for comment from Salon.
