# OpenAI Expands Beyond Azure, David Silver Launches a $1.1B AI Lab, and Anthropic Updates Opus

*By AI High Signal Digest • April 28, 2026*

OpenAI broadened its cloud distribution beyond Microsoft exclusivity, David Silver launched Ineffable with a massive reported seed round, and Anthropic updated its flagship Opus model. The brief also covers agent research, Xiaomi’s open release, enterprise deployments, and China’s intervention in the Meta-Manus deal.

## Top Stories

*Why it matters:* Cloud distribution, frontier funding, and flagship model releases all shifted in meaningful ways.

- **OpenAI widened its distribution options.** OpenAI said Microsoft remains its primary cloud partner, but OpenAI's products and services can now be offered across all clouds; OpenAI also said it will continue providing Microsoft with models and products until 2032, with revenue share through 2030. AWS CEO Andy Jassy added that OpenAI models will be available directly on Bedrock in the coming weeks, alongside a Stateful Runtime Environment [^1][^2].
- **David Silver’s new lab launched at unusual scale.** Ineffable Intelligence said it is building a system that discovers knowledge from its own experience, while multiple posts reported a **$1.1B** raise at a **$5.1B** post-money valuation. The company is building its team in London, and Silver has committed **100%** of his Ineffable equity proceeds to Founders Pledge [^3][^4][^5].
- **Anthropic shipped Claude Opus 4.7.** The update was described as a coding-focused upgrade over Opus 4.6 with improved vision, a new **xhigh** effort setting, and real-world cyber safeguards. On the live GSO benchmark, Opus 4.7 was listed first at **42.2%**, ahead of Opus 4.6 and GPT-5.5 at **37.3%** [^6][^7][^8].

## Research & Innovation

*Why it matters:* The most useful research this cycle focused on making agents more reliable, more structured, and less wasteful.

- **A smaller model beat a much larger one in theorem proving.** Self-Guided Self-Play adds a Guide role that filters synthetic problems and reduces reward hacking; in Lean4, a **7B** model outperformed a **671B** baseline in fewer than 200 rounds [^9][^10].
- **A major survey gave world-model research a common vocabulary.** The 40-author *Agentic World Modeling* paper proposes a **levels × laws** framework spanning **L1 predictors**, **L2 simulators**, and **L3 evolvers**, synthesizing **400+ works** and **100+ systems** across RL, web agents, video generation, and scientific discovery [^11].
- **A new cost paper challenged common assumptions about coding agents.** On SWE-bench Verified, agentic coding used about **1000x** more tokens than chat or code reasoning, varied by up to **30x** across identical runs, and higher spend did not reliably improve accuracy [^12].
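The variance finding above is easy to make concrete: the **30x** spread refers to the ratio between the most and least expensive of repeated, identical runs. A minimal sketch, using made-up token counts (none of these numbers are from the paper):

```python
# Hypothetical illustration of the cost-variance finding: given per-run token
# counts for repeated runs of the *same* task, the spread can be large.
# All numbers below are invented for illustration only.

def token_spread(run_token_counts):
    """Return the max/min ratio of token usage across repeated identical runs."""
    lo, hi = min(run_token_counts), max(run_token_counts)
    return hi / lo

# e.g. five identical agentic runs of one SWE-bench Verified task
runs = [120_000, 410_000, 95_000, 2_850_000, 640_000]
print(f"spread: {token_spread(runs):.0f}x")  # 2_850_000 / 95_000 = 30x
```

Note that a spread like this is invisible in single-run benchmark reports, which is part of the paper's point about cost not tracking accuracy.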

## Products & Launches

*Why it matters:* New launches are pushing agents deeper into real developer workflows rather than standalone demos.

- **Xiaomi open-sourced MiMo-V2.5 under MIT.** The release includes **MiMo-V2.5-Pro** for agent and coding tasks and **MiMo-V2.5** as a native omni-modal model; both support a **1M-token** context window. vLLM highlighted long-horizon execution across **1000+ tool calls** for the Pro model [^13][^14].
- **Cognition launched Devin for Terminal.** It is a local coding agent that runs in the shell with full access to the codebase, tools, and environment, and can hand work off to the cloud after the laptop is closed [^15][^16][^17].
- **OpenAI open-sourced Symphony for Codex.** The orchestration layer connects issue trackers such as Linear to coding agents and turns the workflow into: open issue, assign agent, generate PR, then human review [^18].
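The issue-to-PR pipeline described in the Symphony item can be sketched as a simple orchestration loop. This is a hypothetical illustration only: the class and function names below (`Issue`, `assign_agent`, `generate_pr`, `run_workflow`) are invented for this sketch and do not come from the Symphony codebase.

```python
# Hypothetical sketch of the "open issue -> assign agent -> generate PR ->
# human review" workflow. All names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Issue:
    id: str
    title: str

def assign_agent(issue: Issue) -> str:
    # Placeholder: a real orchestrator would dispatch a coding agent
    # (e.g. Codex) with the issue context and repository access.
    return f"agent-for-{issue.id}"

def generate_pr(agent: str, issue: Issue) -> dict:
    # Placeholder: the agent would produce a branch and open a pull request.
    return {"issue": issue.id, "agent": agent, "status": "awaiting-review"}

def run_workflow(issue: Issue) -> dict:
    """Open issue -> assign agent -> generate PR -> hand off to human review."""
    agent = assign_agent(issue)
    pr = generate_pr(agent, issue)
    return pr  # a human reviewer approves or requests changes from here

pr = run_workflow(Issue(id="LIN-42", title="Fix flaky retry logic"))
print(pr["status"])  # awaiting-review
```

The key design point in the reported workflow is that the human stays at the end of the loop as reviewer, rather than in the middle as author.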

## Industry Moves

*Why it matters:* Enterprise adoption is increasingly arriving through large partnerships, not just model benchmarks.

- **Cognition partnered with Mercedes-Benz** on what it called one of the most extensive deployments of AI software engineering in the automotive industry so far [^19][^20].
- **Google DeepMind expanded in South Korea** with a new AI Campus in Seoul, internships for Korean students, and collaboration with the Korean AI Safety Institute [^21][^22].
- **Together AI joined the U.S. DOE’s Genesis Mission** with **17 national laboratories**, aimed at connecting supercomputers, facilities, and datasets to help double American scientific productivity within a decade [^23][^24][^25].

## Policy & Regulation

*Why it matters:* Governments are becoming more willing to shape cross-border AI deals directly.

- **China blocked Meta’s $2B acquisition of Manus**, citing concerns over foreign investment and the transfer of strategic AI technology to the U.S. [^26].

## Quick Takes

*Why it matters:* A few smaller updates still stood out for pricing, infrastructure, and developer tooling.

- **GitHub Copilot** will move to **usage-based billing** on **June 1** as it supports more agentic and advanced workflows [^27].
- **vLLM v0.20.0** added **DeepSeek V4** support, moved to **CUDA 13 / PyTorch 2.11**, and introduced **TurboQuant 2-bit KV cache** with **4x** capacity [^28][^29][^30].
- **OpenAI announced gpt-realtime-1.5** for interactive voice-controlled apps and published an open-source repo developers can fork and extend [^31][^32].
- **Moonshot open-sourced Kimi K2.6**, a coding and long-horizon agent model that scales to **300 concurrent sub-agents** across **4,000 coordinated steps** [^33].
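On the TurboQuant item, the **4x** capacity figure is consistent with an 8-bit baseline: going from 8-bit to 2-bit KV storage quadruples how many tokens fit in the same memory. The vLLM posts do not state the baseline precision, so the 8-bit assumption below is purely illustrative.

```python
# Back-of-the-envelope check on the "4x capacity" claim, under the
# *assumed* baseline of an 8-bit (e.g. FP8) KV cache.

baseline_bits = 8   # assumed baseline KV-cache precision
quant_bits = 2      # TurboQuant 2-bit KV cache
capacity_gain = baseline_bits / quant_bits
print(f"{capacity_gain:.0f}x")  # 4x
```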

---

### Sources

[^1]: [𝕏 post by @sama](https://x.com/sama/status/2048755148361707946)
[^2]: [𝕏 post by @ajassy](https://x.com/ajassy/status/2048806022253609115)
[^3]: [𝕏 post by @AlexLaterre](https://x.com/AlexLaterre/status/2048785535376773526)
[^4]: [𝕏 post by @etnshow](https://x.com/etnshow/status/2048770524046585865)
[^5]: [𝕏 post by @dejavucoder](https://x.com/dejavucoder/status/2048791142230290858)
[^6]: [𝕏 post by @dl_weekly](https://x.com/dl_weekly/status/2048854885290938856)
[^7]: [𝕏 post by @scaling01](https://x.com/scaling01/status/2048853227211251891)
[^8]: [𝕏 post by @scaling01](https://x.com/scaling01/status/2048853251747954839)
[^9]: [𝕏 post by @TheAITimeline](https://x.com/TheAITimeline/status/2048820447530144227)
[^10]: [𝕏 post by @LukeBailey181](https://x.com/LukeBailey181/status/2047340293490724945)
[^11]: [𝕏 post by @omarsar0](https://x.com/omarsar0/status/2048783073547079816)
[^12]: [𝕏 post by @dair_ai](https://x.com/dair_ai/status/2048784506635878644)
[^13]: [𝕏 post by @XiaomiMiMo](https://x.com/XiaomiMiMo/status/2048821516079661561)
[^14]: [𝕏 post by @vllm_project](https://x.com/vllm_project/status/2048825703244972375)
[^15]: [𝕏 post by @cognition](https://x.com/cognition/status/2048821234281181302)
[^16]: [𝕏 post by @cognition](https://x.com/cognition/status/2048821291973816592)
[^17]: [𝕏 post by @cognition](https://x.com/cognition/status/2048821345455382782)
[^18]: [𝕏 post by @reach_vb](https://x.com/reach_vb/status/2048873546261074165)
[^19]: [𝕏 post by @cognition](https://x.com/cognition/status/2048790199656661041)
[^20]: [𝕏 post by @cognition](https://x.com/cognition/status/2048798451727495429)
[^21]: [𝕏 post by @_philschmid](https://x.com/_philschmid/status/2048750592760164400)
[^22]: [𝕏 post by @_philschmid](https://x.com/_philschmid/status/2048750595540967904)
[^23]: [𝕏 post by @togethercompute](https://x.com/togethercompute/status/2048894521472434612)
[^24]: [𝕏 post by @togethercompute](https://x.com/togethercompute/status/2048894523389137025)
[^25]: [𝕏 post by @togethercompute](https://x.com/togethercompute/status/2048894527365386456)
[^26]: [𝕏 post by @kimmonismus](https://x.com/kimmonismus/status/2048691049590034652)
[^27]: [𝕏 post by @github](https://x.com/github/status/2048794729274278258)
[^28]: [𝕏 post by @vllm_project](https://x.com/vllm_project/status/2048918629144805619)
[^29]: [𝕏 post by @vllm_project](https://x.com/vllm_project/status/2048918632772825102)
[^30]: [𝕏 post by @vllm_project](https://x.com/vllm_project/status/2048918634811290039)
[^31]: [𝕏 post by @OpenAIDevs](https://x.com/OpenAIDevs/status/2048871260512473385)
[^32]: [𝕏 post by @OpenAIDevs](https://x.com/OpenAIDevs/status/2048871272231338455)
[^33]: [𝕏 post by @dl_weekly](https://x.com/dl_weekly/status/2048764506105348129)