ONE. In the north there is a kind of room that seems designed by an accountant with imperial fantasies. The sun does not come in, the wind does not come in, the noise of the street does not come in. Men in suits enter, a folder under the arm and that seriousness that tends to appear when someone believes history is watching them through a keyhole. What weighs on the table is no longer maps with red arrows. It is contracts. Terms of use. The names of young companies with meditation-app aesthetics and strategic-arsenal ambitions: Anthropic, OpenAI and Palantir. No one needs to raise their voice for the scene to be disturbing.
The problem that broke out at the end of February had a humble, almost administrative form. The Pentagon demanded greater leeway from Anthropic to use its AI models in sensitive operations. The company refused to lift two specific restrictions: it did not want its technology used for mass surveillance inside the United States, and it would not accept its use in lethal autonomous systems without human oversight.
In any other field, that disagreement would have ended with lawyers and an awkward dinner. But in the military universe the response took a different tone. The Department of Defense classified Anthropic as a supply chain risk and pushed the rest of the federal apparatus and its contractors to stop using the company's technology.
The scene has something of a tantrum and something of a warning. If a company tries to set ethical limits on the machine it sells, the State can cut it out of the game. The official argument invoked national security. The practical effect smacked of exemplary punishment. Pentagon officials said those contractual restrictions could render a model useless in the middle of a mission with kinetic consequences, a word chosen to avoid saying deaths.
TWO. The Pentagon crisis showed what happens when politics clumsily chases after technology. While Anthropic was being punished, OpenAI rushed to sign an agreement with the Defense Department. Sam Altman, its CEO, later admitted that the maneuver made his company look opportunistic and sloppy. The company then tried to write additional safeguards into the contract.
That detail matters because it reveals an uncomfortable truth. Even companies that promise limits do not know whether those limits can withstand political pressure, commercial competition and military budgets. Anthropic wanted to reserve a veto. The Pentagon considered it unacceptable. OpenAI accepted looser terms and then tried to correct in the fine print what it had conceded up front. Technology was caught between scruple and opportunity.
But there is a less visible angle to this discussion, one that is beginning to worry military strategists and security analysts: the physical geography of artificial intelligence. For years, models, algorithms and data were spoken of as if everything happened in an ethereal cloud. In reality it happens in data centers. The problem is that these facilities are already part of the strategic infrastructure of states. When a company builds a large data center to train advanced models, it does not just install racks of GPUs. It installs a computational capacity that can serve intelligence, cyber defense or scenario simulation. In other words, it installs power.
The problem is obvious to any military planner in the Middle East. If a country hosts key infrastructure for AI models used by Western allies, those facilities can become targets for drones. That introduces a disturbing paradox: the more global the AI infrastructure becomes, the more vulnerable it is to the territorial logic of traditional warfare. A model may live in the cloud, but the fiber optic cables, the power plants and the buildings that sustain it live on a map. And maps, in times of conflict, always end up covered in red circles.
THREE. Other capitals watch the American scene with irony and geopolitical calculation. The case is especially interesting for countries that already integrate AI into their war structures. China (DeepSeek), France (Mistral) and Russia (GigaChat), among others, are advancing with a model that fuses the State, the technology industry and the armed forces. Iran follows the development of the field with an interest that grows in times of emergency.
In modern warfare, the advantage does not always belong to whoever shoots first but to whoever first understands what is happening. A model capable of analyzing thousands of satellite images or intercepted communications in minutes can shrink the time between observation and response.
In practice, any technology for massive information processing can have dual uses. In this context, the Pentagon episode leaves an uncomfortable lesson. The problem does not lie only in the machine, which can err when identifying a target. It also lives in a political system that hands fundamental decisions to executives who change their criteria at the speed of the market and the movement of share prices.
In the end, the red button that Stanley Kubrick imagined in Dr. Strangelove (1964), with Peter Sellers, may not even exist as a visible object. It can live on a gray remote console with two-factor authentication and a friendly emoji interface. The serious thing would not be a villain pressing it with a cinematic laugh, but several reasonable people activating it piecemeal, between meetings, legal reviews and promises of prudence, until one day the decision is spread across so many hands that no one can point clearly to where responsibility ended. And at that moment it will be discovered that the real leap was not teaching machines to decide, but teaching humans to obey without realizing it.
National Alert. In Argentina and the rest of Latin America there is still no algorithmic Pentagon, but there is a known vulnerability. The region buys sensors, software and doctrine before building computational sovereignty. While the United States, France, Russia and China integrate models into their defense systems, the South observes, imports and remains exposed to strategic dependence, outsourced surveillance and industrial lag.
