# Anthropic’s TPU Deal, OpenAI’s Washington Push, and a Gradual Automation Picture

*By AI News Digest • April 7, 2026*

Anthropic locked in multi-gigawatt TPU capacity and OpenAI stepped up its case for earlier policy debate on cyber, bio, and energy. New research, meanwhile, suggests AI capabilities are spreading broadly across work and cyber tasks even as near-term GDP effects remain modest.

## Frontier buildout is outpacing the measured macro story

Today's clearest contrast was between frontier companies planning for much larger demand and new research that still points to a slower macroeconomic rollout [^1][^2][^3].

### Anthropic locks in multi-gigawatt TPU capacity for Claude
Anthropic said it signed an agreement with Google and Broadcom for multiple gigawatts of next-generation TPU capacity, coming online starting in 2027, to train and serve frontier Claude models [^1]. The company also said its run-rate revenue has passed $30 billion, up from $9 billion at the end of 2025, and framed the deal as the compute needed to keep pace with demand [^2].

*Why it matters:* This ties Anthropic's next stage of growth directly to multi-year compute procurement and sharply rising commercial demand [^1][^2].

### OpenAI takes its superintelligence argument to Washington
In a Washington-facing interview, Sam Altman said OpenAI wants policy ideas discussed now because AI is beginning to do more real work and could reshape coding, knowledge work, and science [^4]. He said OpenAI's main preparedness areas are cyber and bio, that he expects significant cyber threats within the next year, and that safety in a world of powerful AI cannot be handled by companies alone; he also pointed to energy buildout, privacy, and frontier-system auditing as active policy areas [^4].

> "I suspect in the next year we will see significant threats we have to mitigate from cyber" [^4]

*Why it matters:* The policy conversation is moving from abstract AGI claims toward concrete questions about resilience, infrastructure, and oversight [^4].

## Research suggests the impact may be broad, but not abrupt

### MIT finds a "rising tide" across text-based work
Researchers analyzing 3,000 O*NET tasks with 17,000 worker evaluations said AI progress across realistic text-based labor-market tasks looks more like a rising tide than a crashing wave [^3]. They report that between 2024-Q2 and 2025-Q3, the tasks frontier models could complete at a 50% success rate grew from 3-4 hours of human work to about a week, and they project most surveyed tasks could reach 80%-95% AI success rates at minimally sufficient quality by 2029 [^3].

*Why it matters:* The study points to broad, gradual gains across many job families rather than a few isolated task categories changing all at once [^3].

### Forecasters still expect only a modest GDP boost by 2030
A major report from the Forecasting Research Institute found that surveyed groups expect moderate to rapid AI progress in the coming years, yet still see GDP impacts staying relatively small by 2030: about one percentage point above the 2025 baseline growth rate of 2.4% [^3]. The same report says all surveyed cohorts expect continued declines in labor-force participation and rising wealth inequality, while economists put a 14% chance on major near-term increases in GDP and inequality [^3].

*Why it matters:* Taken together with the MIT work, the picture is not one of no impact: it is faster task-level progress paired with a still-muted near-term GDP forecast [^3].
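To make the "modest" framing concrete, here is a small illustrative calculation (the compounding itself is ours, not from the report) of what one extra percentage point of growth over a 2.4% baseline amounts to by 2030:

```python
# Illustrative arithmetic, not from the FRI report: compound a 2.4% baseline
# growth rate and a 3.4% AI-uplifted rate over the five years 2025-2030.

baseline, uplifted, years = 0.024, 0.034, 5

base_level = (1 + baseline) ** years  # economy size in 2030, baseline path (2025 = 1.0)
ai_level = (1 + uplifted) ** years    # economy size in 2030, uplifted path

extra = (ai_level / base_level - 1) * 100  # extra output by 2030, in percent
print(f"{extra:.1f}% larger economy by 2030")  # -> 5.0% larger economy by 2030
```

In other words, the surveyed forecasts imply an economy roughly 5% larger than the no-AI baseline by 2030: real money, but far from a discontinuity.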

## Security and public-safety systems

### Offensive cyber capability keeps improving on short timelines
Lyptus Research found that frontier-model performance on offensive cyber tasks has followed a 9.8-month doubling time since 2019, steepening to 5.7 months for models released since 2024 [^3]. The evaluation covered standard cyber benchmarks and a new 291-task dataset with time estimates calibrated by 10 offensive cybersecurity professionals [^3]. In the study, GPT-5.3 Codex and Opus 4.6 reached 50% success on tasks that take human experts about 3.1 to 3.2 hours, and the most recent open-weight model in the sample lagged the closed frontier by 5.7 months [^3].

*Why it matters:* The result points to both stronger offensive capability and relatively short diffusion timelines from closed frontier systems to open-weight models [^3].
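The doubling-time framing can be sketched in a few lines. The starting horizon (~3.1 expert-hours) and the 5.7-month doubling time are from the study as summarized above; the forward extrapolation is a hypothetical illustration, not a result from the paper:

```python
# Hypothetical extrapolation under a fixed doubling time: if the task horizon
# (expert-hours of work completed at 50% success) doubles every
# `doubling_months`, project the horizon `months_ahead` from now.

def horizon_hours(start_hours: float, months_ahead: float, doubling_months: float) -> float:
    """Projected 50%-success task horizon, in expert-hours."""
    return start_hours * 2 ** (months_ahead / doubling_months)

# One 5.7-month doubling takes a ~3.1-hour horizon to ~6.2 hours:
print(round(horizon_hours(3.1, 5.7, 5.7), 1))  # -> 6.2
```

On this trend, a year of progress would put the horizon in the low teens of expert-hours, which is why short doubling times translate quickly into qualitatively harder attainable tasks.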

### Google puts 24-hour flash-flood forecasts on Flood Hub
Google launched an AI system that predicts flash floods 24 hours in advance and made the predictions available for free on Flood Hub [^5][^6]. The system uses Gemini to extract confirmed flood locations and times from global news, builds missing historical event data, and combines weather forecasts with terrain, soil absorption, and urban density; Google says it can work in countries with little flood-monitoring infrastructure [^5].

*Why it matters:* Flash floods kill more than 5,000 people each year, and a 12-hour warning alone can reduce damage by 60%, making this a notable example of AI being deployed into public-risk infrastructure [^5].

---

### Sources

[^1]: [𝕏 post by @AnthropicAI](https://x.com/AnthropicAI/status/2041275561704931636)
[^2]: [𝕏 post by @AnthropicAI](https://x.com/AnthropicAI/status/2041275563466502560)
[^3]: [Import AI 452: Scaling laws for cyberwar; rising tides of AI automation; and a puzzle over GDP forecasting](https://importai.substack.com/p/import-ai-452-scaling-laws-for-cyberwar)
[^4]: [OpenAI’s warning: Washington isn’t ready for what’s coming](https://www.youtube.com/watch?v=B21KxGs8zDI)
[^5]: [𝕏 post by @rowancheung](https://x.com/rowancheung/status/2041172396116476371)
[^6]: [𝕏 post by @rowancheung](https://x.com/rowancheung/status/2041172407889940702)