What changes with this deal?

Anthropic will access all available capacity at Colossus 1, the data center built by Musk's company for xAI. The headline figure is 300 megawatts — the equivalent of a mid-size city's electricity consumption — distributed across more than 220,000 latest-generation NVIDIA GPUs.

For perspective: training Claude Opus 4.7 required around 25,000 GPUs over several months. The new capacity is roughly nine times what Anthropic previously had access to, and it materializes within a month rather than over years of construction.
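The multiplier follows directly from the figures quoted above. A back-of-the-envelope check (treating the 25,000 training GPUs as Anthropic's prior ceiling is a simplifying assumption, since the article only gives that number for the Opus 4.7 training run):

```python
# Figures from the article; the comparison baseline is an assumption.
prior_gpus = 25_000   # GPUs used to train Claude Opus 4.7
new_gpus = 220_000    # GPUs available at Colossus 1

multiplier = new_gpus / prior_gpus
print(f"Capacity multiplier: {multiplier:.1f}x")  # → 8.8x, i.e. roughly 9x
```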

Who benefits?

Anthropic will allocate the extra compute to three fronts: inference capacity for Claude Pro and Max, training of successor models, and the enterprise product line (Claude Code, Claude Security, Managed Agents).

Pro and Max subscribers will see two concrete changes: doubled usage limits and the removal of the peak-hour restrictions Anthropic imposed in March, when demand saturated its clusters.

The Musk paradox

The agreement is commercially sound but politically strange. Musk is actively suing OpenAI, but rents capacity to one of OpenAI's direct rivals. The simplest read: infrastructure is business, and SpaceX/xAI needs to monetize Colossus 1 while building Colossus 2.

For Anthropic, the irony is welcome: for years it was the firm with the least compute capacity among the three major labs. The deal closes that gap in one shot.

Market reordering

300 MW: aggregate capacity for Anthropic
220k+: NVIDIA GPUs available
≈9×: multiplier vs. previous capacity

The move pressures OpenAI and Google. Microsoft, OpenAI's main infrastructure ally, has been operating with tight capacity since late 2025. Google Cloud continues with its own TPU capacity, but Gemini models share compute with consumer services (Search, YouTube).

The big collateral winner is NVIDIA: every announcement like this validates its moat in high-density GPUs and the Hopper → Blackwell → Rubin transition the company is executing.

What this means for companies using Claude

For enterprise Claude customers via API or Bedrock, the immediate effect should appear as lower latency during peak hours and greater availability of premium models (Opus 4.7, Sonnet 4.6). Anthropic also announced it will remove aggressive rate limits for Pro and Enterprise tiers.

At VuraOS we deploy Claude as the primary model in several modules (chat, voice, admin agents). The deal means we'll be able to assign Opus 4.7 to more use cases without fear of degradation at peak hours — the bottleneck was compute, not model capability.
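Concretely, the change means a routing layer no longer needs to downgrade traffic at peak hours. A minimal sketch of that logic, assuming a peak-hour window and module-to-model mapping of our own invention (the names below are illustrative, not real API identifiers):

```python
# Hypothetical model router: before the capacity deal, latency-sensitive
# modules were downgraded to a smaller model at peak hours; with the
# extra compute, the downgrade rule can simply be dropped.
PEAK_HOURS = range(9, 18)  # 09:00-17:59, the window treated as peak (assumed)

def pick_model(module: str, hour: int, extra_capacity: bool = True) -> str:
    """Return which model serves a given module at a given hour."""
    preferred = {"chat": "opus", "voice": "opus", "admin-agents": "opus"}
    wants = preferred.get(module, "sonnet")
    # Old behavior: shed Opus load during peak hours when compute was tight.
    if not extra_capacity and hour in PEAK_HOURS and wants == "opus":
        return "sonnet"
    return wants

print(pick_model("chat", hour=12, extra_capacity=False))  # → sonnet (old rule)
print(pick_model("chat", hour=12, extra_capacity=True))   # → opus (new rule)
```

The design point is that the bottleneck was a capacity policy, not the model itself, so removing the policy is a one-line change in the router.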

Conclusion

The race for AI infrastructure has shifted from "who trains the biggest model" to "who can serve billions of tokens per day with decent latency". The Anthropic + SpaceX deal is the most aggressive play in this new phase. Next to watch: how OpenAI responds with Stargate.