
The Lethality Pivot: The Pentagon's "Zero Trust" on Ethics

The January 13 release of the DoD's AI Acceleration Strategy marks the end of the "Ethical AI" era. By discarding DEI guardrails in favor of a "Lethality First" doctrine, the Pentagon acknowledges a stark reality: in a near-peer conflict, safety is a liability.


A hyperrealistic, high-contrast image of a futuristic military command center with dark tactical screens displaying AI targeting data, set against a blurred backdrop of a traditional DC conference room.

On Monday, January 12, 2026, Defense Secretary Pete Hegseth stood on a stage at SpaceX’s Starbase in Texas (not the Pentagon briefing room) and effectively declared the end of the “Responsible AI” era.

Backed by the roar of the commercial space industry, Hegseth announced the Department of War’s (formerly DoD) new AI Acceleration Strategy. The message was blunt, stripped of the “safety first” caveats that have defined military tech policy for the last five years: the era of “peacetime science projects” is over, and the new mandate is to build for lethality.

The following day, January 13, the release of the official strategy document and the accompanying policy memorandums confirmed what defense insiders had suspected since the passage of the FY2026 National Defense Authorization Act (NDAA). The Pentagon is not just buying new software; it is pivoting its entire doctrinal foundation from “Risk Mitigation” to “Risk Acceptance.”

For Silicon Valley’s “Safety First” factions, this is a nightmare. For the new class of defense primes, such as Palantir, Anduril, and Shield AI, it is the victory lap they have been lobbying for since 2021.

This is the Lethality Pivot. And it changes everything about how the US government consumes, deploys, and regulates artificial intelligence.

The Death of “Responsible AI”

Since 2020, the Department of Defense has operated under a framework of “Responsible AI” (RAI). This doctrine mandated that all AI systems be “traceable, reliable, and governable.” In practice, it meant that “meaningful human control” was a non-negotiable bottleneck in the kill chain.

The January 13 strategy shreds that bottleneck.

Recasting the previous approvals process as a source of “operational risk” rather than of safety, the new directive establishes a “Barrier Removal SWAT Team” under the Undersecretary for Research and Engineering. Its mandate is specific: identify non-statutory requirements (internal rules on ethics, bias testing, and interpretability that go beyond the strict letter of the law) and waive them.

The “Objectivity” Mandate

One of the most striking language shifts is the rejection of “DEI-influenced” models. The strategy explicitly forbids the procurement of models that enforce “ideological guardrails” over “objective truth.”

In technical terms, this is a direct shot at the “Safety Tuning” (RLHF and similar post-training alignment) that companies like Anthropic or Google use to prevent models from generating “harmful” content. The Pentagon is arguing that in a war, a model that refuses to generate a targeting solution because it violates a safety policy is not “safe”; it is broken.

The result is a demand for “Unshackled Models.” Security is now defined purely as Cybersecurity (Section 1512 of the NDAA), which entails keeping the adversary out of the weights, rather than Safety (keeping the model from saying or doing something “offensive”).

The NDAA Hardware: Zero Trust on Ethics

The legislative teeth for this pivot come from the FY2026 NDAA, specifically Sections 1512, 1513, and 1533.

While the media focused on the topline budget numbers, Section 1512 quietly redefined “AI Safety” as strictly a cybersecurity problem. The law mandates a department-wide policy for protecting AI/ML systems from “adversarial attacks” (model poisoning, weight theft) but notably omits previous language regarding “algorithmic equity” or “fairness.”
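To make that redefinition concrete: under a cybersecurity-only notion of safety, the question is not whether a model’s outputs are fair, but whether its weights are intact and untampered. Below is a minimal sketch of that posture, with hypothetical file names and manifest format (nothing here is drawn from the NDAA text or any DoD tooling): an integrity check on a checkpoint before it is ever loaded.

```python
# Minimal sketch: "AI safety" treated purely as weight integrity.
# File names and manifest layout are hypothetical illustrations.
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so large checkpoints don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_checkpoint(checkpoint: Path, manifest: Path) -> bool:
    """Compare the checkpoint's hash against a known-good manifest entry."""
    expected = json.loads(manifest.read_text())  # e.g. {"model.safetensors": "<hex digest>"}
    recorded = expected.get(checkpoint.name)
    return recorded is not None and recorded == sha256_of(checkpoint)


if __name__ == "__main__":
    ckpt = Path("model.safetensors")          # hypothetical checkpoint file
    manifest = Path("weights_manifest.json")  # hypothetical known-good manifest
    if not verify_checkpoint(ckpt, manifest):
        raise SystemExit("Hash mismatch: refusing to load weights.")
    print("Checkpoint integrity verified.")
```

Note what is absent from a check like this: nothing about bias, refusal behavior, or interpretability. That absence is the point of the redefinition.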

This is the “Zero Trust” approach applied to ethics. The assumption is now that speed is the only safety.

The Physics of the Pivot

Why the shift? It comes down to the latency of the OODA loop (Observe, Orient, Decide, Act).

In a simulated near-peer conflict with China (the primary scenario driving this strategy), the volume of incoming data from drone swarms, satellite constellations, and cyber-sensors exceeds human cognitive capacity by orders of magnitude.

$$T_{decision} = T_{data} + T_{analysis} + T_{human\_approval}$$

The Pentagon has calculated that $T_{human\_approval}$ is the only variable they can cut to zero. By moving to “Agentic AI” for battle management, specifically naming projects like Swarm Forge, the goal is to have systems that can autonomously negotiate target allocation at machine speed.

The strategy cites the January 8, 2026 demonstration at Camp Blanding, where a single operator commanded four kinetic drones simultaneously (a 1:4 ratio), as the baseline for this new doctrine.
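A toy calculation, using placeholder timings rather than any published figures, shows why that last term dominates the equation:

```python
# Toy model of the OODA-loop latency equation above.
# All timings are illustrative placeholders, not measured or official figures.

def decision_time(t_data: float, t_analysis: float, t_human_approval: float) -> float:
    """T_decision = T_data + T_analysis + T_human_approval, in seconds."""
    return t_data + t_analysis + t_human_approval


# Assume sensor fusion and model inference run at machine speed (sub-second),
# while a human-in-the-loop sign-off takes tens of seconds.
with_human = decision_time(t_data=0.4, t_analysis=0.8, t_human_approval=45.0)
agentic = decision_time(t_data=0.4, t_analysis=0.8, t_human_approval=0.0)

print(f"Human-in-the-loop: {with_human:.1f} s per engagement")  # ~46 s
print(f"Approval removed:  {agentic:.1f} s per engagement")     # ~1.2 s
print(f"Speedup: {with_human / agentic:.0f}x")                  # ~38x
```

With these placeholder numbers the approval step is the loop; removing it is the only lever that changes the order of magnitude, which is exactly the arithmetic the strategy is built on.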

The Beneficiaries: Project Maven 2.0

If the “Safety” teams are the losers of this pivot, the winners are the companies that built their brand on “Lethality.”

Palantir is the clearest beneficiary. Their Maven Smart System (part of the original Project Maven) has effectively become the operating system of this new doctrine. With a recent contract expansion bringing the total deal value to $1.3 billion through 2029, Palantir is no longer a vendor; they are infrastructure.

Anduril and Shield AI are close seconds. Their lobbying efforts in 2024 and 2025 focused heavily on the argument that “legacy primes” (Lockheed, Raytheon) were too slow and too risk-averse to handle the AI transition. The “Barrier Removal SWAT Team” is essentially a mechanism to fast-track their software into the field without the years-long testing cycles of traditional procurement.

Crucially, the strategy emphasizes “Any Lawful Use.” This phrase is the key. It moves the burden of ethics from the Software to the Operator. If a strike is legal under the Law of Armed Conflict (LOAC), the software should allow it. It is no longer the AI’s job to be the conscience of the operator.

The Historical Rhyme: Manhattan Project Logic

History offers a clear precedent. In 1942, as the Manhattan Project spun up, there were profound debates about the morality of the weapon. But the prevailing logic, driven by the existential threat of Nazi Germany, was that the only immorality was losing.

The rebranding of the “DoD” to the “Department of War” (DoW), a stylistic reversion to its pre-1947 name, is a deliberate signal. It is an acknowledgement that the “Defense” era, defined by deterrence and peacekeeping, is seen as over. The “War” era focuses on capability.

Hegseth’s speech at SpaceX was not subtle: “You don’t deter by being safe. You deter by being dangerous.”

The “Splinternet” of Models

The second-order effect of this policy is the bifurcation of the global AI market.

The defense sector is now moving toward a Dual-Stack Reality:

  1. The Civilian Stack: Governed by the EU AI Act, California Safety Bills, and corporate “Safety Teams.” These models will be polite, guarded, and slow.
  2. The Sovereign Stack: Governed by the NDAA and the Department of War. These models will be uncensored, optimized for lethality, and hardened against cyber-intrusion.
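
Sketched as a data model (the structure and field names below are illustrative, not drawn from any procurement document), the split looks like this:

```python
# Illustrative data model of the two stacks described above.
# Field names and values mirror the article's framing; nothing here is official.
from dataclasses import dataclass


@dataclass(frozen=True)
class ModelStack:
    name: str
    governed_by: tuple[str, ...]
    refusal_tuning: bool   # "guarded" vs "uncensored" in the article's terms
    optimized_for: str


CIVILIAN = ModelStack(
    name="Civilian",
    governed_by=("EU AI Act", "California safety bills", "corporate safety teams"),
    refusal_tuning=True,
    optimized_for="compliance",
)

SOVEREIGN = ModelStack(
    name="Sovereign",
    governed_by=("FY2026 NDAA", "Department of War policy"),
    refusal_tuning=False,
    optimized_for="lethality",
)
```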

This bifurcation creates a profound talent crisis within the industry. Engineers at major tech firms like Google or Microsoft, who adhere to strict internal “AI Principles” regarding harm reduction, may find themselves locked out of the most advanced “Sovereign” projects, which require clearances and a willingness to build kinetic systems.

Conversely, engineers at Anduril or Palantir are being recruited with the specific promise of working on uncensored, mission-critical systems that operate at the edge of physics, not policy. The “revolving door” between Silicon Valley and the Pentagon is becoming a one-way street, where those willing to build for the mission move permanently into the defense ecosystem, while those prioritizing safety remain in the consumer sector.

For tech workers, this forces a choice. The “Google Walkout” era of 2018, where employees successfully pressured management to drop Project Maven, is officially dead. Section 1533’s “Model Assessment Framework” implies that vendors who want to play in the Sovereign Stack must segregate their teams. You are either building for the consumer, or you are building for the mission. You cannot do both with the same weights.

Conclusion: The Feature, Not The Bug

The Lethality Pivot is not a “failure” of AI regulation. It is a calculated, strategic rejection of it.

By prioritizing Section 1512 (Cybersecurity) over Section 1513 (Assessment), the Pentagon has decided that the risk of an “unsafe” (biased or hallucinating) AI is lower than the risk of a “slow” AI.

In the brutal calculus of 2026, the Department of War has decided that it is better to have an AI that shoots fast and misses occasionally, than an AI that hesitates and gets hit. The “Zero Trust” isn’t just about the network; it’s about the Pentagon’s zero trust in the peacetime luxury of ethics.
