Who should control AI? Are the corporations that release this powerful technology the arbiters of its fate? Or should that power rest with the government?

Palmer Luckey, the founder of defense company Anduril—which aims to modernize the U.S. military—thinks the answer is straightforward: Give the power to the government. In a recent interview with the New York Post, the billionaire weighed in on the burgeoning debate over who gets to determine how the government uses AI.

For the billionaire, it’s up to the government, and therefore the people, to decide how the technology is used. Otherwise, tech companies could imperil democracy.

“We need to stick to a position that this is in the hands of the people,” he said. “Anyone who says that a defense company should be going beyond the law, beyond what legislators and elected leaders say in terms of who they’ll work with and not, you are effectively saying you do not believe in this democratic experiment, that you want a ‘corporatocracy.’

“In all cases, whoever the United States government tells me that I can and cannot sell to,” he continued, “to have any other position is to fall further into … basically corporate executives having de facto control over U.S. foreign policy.”

Luckey’s thoughts come as Anthropic CEO Dario Amodei refused to allow the Pentagon full use of the company’s AI systems for mass surveillance or to power fully autonomous weapons that operate without human oversight. As a result, the Department of Defense labeled the AI company a “supply-chain risk,” a designation usually reserved for foreign adversarial firms such as China-based Huawei. Amodei said the label won’t have much of an impact on the company’s business and that Anthropic will sue to overturn the designation. Still, the company remains in discussions with the Pentagon over use of its AI models and tools.

But Amodei and Anthropic’s other cofounders, who departed OpenAI together to build a company they say prioritizes AI safety, maintain that what the Pentagon requests crosses the line. “These threats do not change our position: We cannot in good conscience accede to their request,” Amodei said in a press release last week.

Anthropic didn’t immediately respond to Fortune’s request for comment.

Silicon Valley vs. Washington

The Department of Defense, and figures like Luckey, don’t think it’s up to a private contractor to dictate use cases; that decision, they argue, rests with the government. Shortly after the Anthropic agreement fell apart last month, Sam Altman’s OpenAI reached an agreement with the Pentagon to allow use of the startup’s AI models and tools. Elon Musk’s xAI also struck a deal to let the Pentagon use its AI, adding competition to what had been Anthropic’s exclusive partnership.

Anthropic isn’t the first tech company to push back against the DOD. As Luckey noted in the interview, Google walked away from the Pentagon in 2018, pulling out of Project Maven, a program that used AI to analyze drone footage, after thousands of employees protested the company’s involvement out of fear it could lead to autonomous weapons.

“What you would have had is a world where Silicon Valley executives would have had more foreign policy power than the president of the United States,” Luckey said. “That’s really, really dangerous.”

For Luckey, it comes down to whether top-level decisions on how AI is used belong to Silicon Valley or Washington. His view is that, regardless of who is in the White House, tech companies, and the private sector more broadly, have a responsibility to adhere to that administration’s foreign policy decisions.

But even as the Anthropic-Pentagon conflict balloons, Amodei said in a press release Thursday that the two parties can still find common ground. “Anthropic has much more in common with the Department of War than we have differences,” he said.


