“We want artificial intelligence to benefit humanity, not to be used in inhumane or profoundly harmful ways.”
That warning came late last month in an open letter sent by more than 600 employees at Google to CEO Sundar Pichai, urging the company to end its AI cooperation with the U.S. Department of Defense. Employees expressed concern that Google’s Gemini AI model could eventually be used for military and surveillance purposes. The protest, however, did not alter the company’s course. Google ultimately agreed to allow its AI technology to operate within classified Pentagon work environments. The decision marked a dramatic shift from the company’s position eight years ago.
In 2018, Google faced a wave of employee backlash over “Project Maven,” a Pentagon initiative that used artificial intelligence to analyze drone surveillance footage. Thousands of employees argued at the time that AI should not be used in warfare, prompting the company to walk away from renewing the contract.
Now, a company that once distanced itself from military projects is moving directly into the national security sphere. And Google is not alone. AI firms including OpenAI, Amazon Web Services and Palantir Technologies are rapidly expanding cooperation with the Pentagon. The idealism that once defined Silicon Valley, where companies frequently championed ethics, values and “AI for humanity,” is steadily fading.
AI, once celebrated mainly as a tool for productivity and innovation, is increasingly viewed as critical infrastructure tied to national competitiveness and military strength. Some analysts point to the escalating AI rivalry with China as a key force driving Silicon Valley’s transformation.
As Beijing aggressively pushes AI as a strategic national industry, the emergence of low-cost, high-performance models such as DeepSeek has rattled major U.S. technology firms. Generative AI is no longer seen simply as a commercial productivity tool, but as a strategic technology capable of determining military and intelligence superiority. The wars in Ukraine and the Middle East have only accelerated that shift in perception.
Alex Karp, the chief executive of Palantir, has been unusually direct about the industry’s changing mindset. “If a U.S. Marine wants a better rifle, we should build it. Software is no different,” Karp said, signaling that the era of tech companies keeping their distance from the state may be coming to an end.
The fierce race for dominance in what many expect to become a winner-take-all AI market has also weakened ethical restraint across the industry. Questions such as “Should we be doing this?” are increasingly being replaced by a more hard-nosed calculation: companies may have little choice if they hope to survive the competition.
Close cooperation with the military, once treated as taboo in Silicon Valley, is now often framed as both patriotism and business necessity. Still, the shift is fueling growing anxiety.
AI companies argue that the real danger lies not in the weaponization of AI itself, but in who controls the technology and how it is used. Yet there is no guarantee that AI will remain entirely under human control.
As AI becomes more deeply woven into military and national security systems, no one can say with confidence what kind of battlefields or realities it may ultimately create. The world has marveled at the pace of AI development, but it has spent far less time confronting where the technology may be heading.
Physicist J. Robert Oppenheimer, who directed the scientific effort of the United States’ Manhattan Project to develop the atomic bomb, was later consumed by fear and regret after witnessing the destructive power of nuclear weapons. That reckoning became known as the “Oppenheimer moment.” AI’s own Oppenheimer moment may already have arrived.