# Sakana’s Tiny Coordinator, DeepSeek’s Price Cut, and Google’s Anthropic Bet

*By AI High Signal Digest • April 27, 2026*

Sakana pushed multi-agent orchestration into a product, DeepSeek cut long-context memory costs, and Google made a concrete new compute commitment to Anthropic. The brief also covers Alibaba’s AgenticQwen, Gemma 3n, Microsoft TRELLIS.2, and new model-evaluation tools.

## Top Stories

*Why it matters:* The clearest signals today were cheaper agent memory, stronger model orchestration, and more concrete compute financing.

- **Sakana pushed model orchestration from paper to product.** It launched beta access to **Fugu**, an OpenAI-compatible orchestration API, and published **TRINITY**, a sub-20K-parameter coordinator that assigns Thinker, Worker, and Verifier roles across frontier models. TRINITY reached **86.2% pass@1** on LiveCodeBench, while Fugu claims SOTA on SWE-Pro, GPQA-D, and ALE-Bench. [^1][^2]
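The Thinker/Worker/Verifier pattern described above can be sketched as a simple routing loop. The role names come from the TRINITY description, but every function below is a hypothetical stand-in, not Sakana's actual coordinator or API:

```python
def orchestrate(task, thinker, worker, verifier, max_rounds=3):
    """Route a task through three role-assigned models until it verifies.

    thinker, worker, and verifier are placeholder callables standing in for
    frontier-model calls; the real coordinator is a learned model, not
    hand-written control flow like this.
    """
    plan = thinker(task)                         # one model drafts a plan
    for _ in range(max_rounds):
        attempt = worker(task, plan)             # a second model executes it
        ok, feedback = verifier(task, attempt)   # a third model checks the result
        if ok:
            return attempt
        plan = thinker(task + "\nFeedback: " + feedback)  # re-plan on failure
    return attempt
```

The point of learning the coordinator rather than hard-coding it is that which model gets which role, and when to stop iterating, can be decided per task.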

- **DeepSeek made long-context agent loops materially cheaper.** Input cache-hit prices across the DeepSeek API fell to **one-tenth** of prior levels; the cut is permanent, and **V4-Pro** remains **75% off** until May 5. Separate commentary noted that cache hits can make up a large share of agent bills as sessions grow. [^3][^4][^5]
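The claim that cache hits dominate agent bills follows from how agent loops re-read their history. A back-of-envelope sketch, using hypothetical per-token prices rather than DeepSeek's actual rates:

```python
def session_cost(turns, new_tokens_per_turn, hit_price, miss_price):
    """Cost of an agent session where each turn re-reads the whole history.

    On turn t, the (t-1) prior turns are cache hits and only the new
    tokens are cache misses. Prices are per token.
    """
    cost = 0.0
    for t in range(1, turns + 1):
        cached = (t - 1) * new_tokens_per_turn   # history re-read from cache
        fresh = new_tokens_per_turn              # newly added context
        cost += cached * hit_price + fresh * miss_price
    return cost

# Illustrative prices only: miss = $2 / 1M tokens, hit = $0.50 / 1M before
# the cut and one-tenth of that after. Not DeepSeek's published pricing.
MISS = 2e-6
before = session_cost(50, 2000, hit_price=0.5e-6, miss_price=MISS)  # ≈ $1.43
after = session_cost(50, 2000, hit_price=0.05e-6, miss_price=MISS)  # ≈ $0.32
```

Cached tokens grow quadratically with the number of turns while fresh tokens grow only linearly, which is why a discount on cache hits compounds over long sessions.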

- **OpenAI expanded image generation into more structured workflows.** **ChatGPT Images 2.0** adds native reasoning and web search, supports up to **8 coherent images per prompt** at up to **2K** resolution, and early users showed it generating 3D-style UI assets and texture-map grids from a single prompt. [^6][^7][^8]

## Research & Innovation

*Why it matters:* The most interesting technical work focused on doing more with less active compute and making smaller models practical in constrained settings.

- **Alibaba’s AgenticQwen shrinks active compute for tool use.** **AgenticQwen-30B-A3B** uses only **3B active parameters** yet reportedly matches **Qwen3-235B** on real tool-use workloads. Its training recipe pairs error-mining RL with an agentic loop that expands tool use into multi-branch behavior trees. [^9]

- **Gemma 3n targets embedded deployment.** Google’s developer guide says Gemma 3n relies on **MatFormer**, **per-layer embeddings**, and **KV cache sharing**, the last of which cuts KV memory and prefill time roughly in half, a notable efficiency gain for edge and long-context use. [^10][^11][^12][^13]
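Why KV cache sharing roughly halves memory follows from the standard size formula. A back-of-envelope estimate with a hypothetical config, not Gemma 3n's actual architecture:

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """KV cache size: 2 (K and V) x layers x kv_heads x head_dim x seq_len.

    bytes_per_elem=2 assumes fp16/bf16 storage.
    """
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical 30-layer config at 32K context (all numbers illustrative).
full = kv_cache_bytes(layers=30, kv_heads=8, head_dim=128, seq_len=32_000)

# If the top layers reuse a shared KV cache from a middle layer, only
# roughly the bottom half of the layers need their own stored cache.
shared = kv_cache_bytes(layers=16, kv_heads=8, head_dim=128, seq_len=32_000)
```

Under these assumptions the cache shrinks from about 3.9 GB to about 2.1 GB, and prefill skips the shared layers' KV projections for a similar time saving.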

## Products & Launches

*Why it matters:* New releases centered on 3D generation and better model evaluation infrastructure, not just another general chatbot.

- **Microsoft TRELLIS.2** open-sources a **4B** model that turns a single image into a fully textured 3D asset in about **3 seconds**, including PBR details such as roughness, metallic, and opacity, with a live project page and demo. [^14]

- **Contextarena.ai** launched as a free interactive leaderboard for **70 model variants** on **8-needle GDM-MRCRv2**, with views for context bins, cost, and token efficiency. Its initial tables show **GPT-5.5** tiers leading AUC at both **128k** and **1M** context. [^15]

## Industry Moves

*Why it matters:* Labs are competing through capital, distribution, and consumer deployment channels as much as through raw model quality.

- **Google deepened its Anthropic bet.** Anthropic said Google committed **$10 billion** in cash at a **$350 billion** valuation to fund computing-capacity expansion, with another **$30 billion** available if performance targets are met. [^16][^17]

- **DeepSeek widened distribution.** **V4 Flash** and **V4 Pro** are now on Ollama’s U.S.-hosted cloud, with launch paths into tools including Claude Code, Hermes Agent, Codex, and OpenClaw. [^18][^19][^20]

- **Waymo reached the Uber app in Atlanta.** The move extends autonomous rides through a mainstream consumer platform rather than a standalone robotaxi experience. [^21]

## Quick Takes

*Why it matters:* Smaller updates still shifted benchmarking, developer tooling, and trust in agent products.

- **EQ-Bench:** Opus 4.7 stayed on top; DeepSeek 4 sat near the frontier; GPT-5.5 looked roughly unchanged from 5.4. [^22]
- **Claude Code billing:** Anthropic is issuing refunds and free credits after the "HERMES.md" billing bug. [^23][^24]
- **Codex usage:** ChatGPT Pro now has **2x Codex rate limits** through May 31. [^25]
- **Health evaluation:** OpenAI’s **HealthBench Professional** is now on Hugging Face, with each item written, reviewed, and adjudicated by three or more physicians. [^26]

---

### Sources

[^1]: [𝕏 post by @SakanaAILabs](https://x.com/SakanaAILabs/status/2047479445209145785)
[^2]: [𝕏 post by @SakanaAILabs](https://x.com/SakanaAILabs/status/2048181386868293639)
[^3]: [𝕏 post by @deepseek_ai](https://x.com/deepseek_ai/status/2048440764368347611)
[^4]: [𝕏 post by @victor207755822](https://x.com/victor207755822/status/2048442362800804159)
[^5]: [𝕏 post by @teortaxesTex](https://x.com/teortaxesTex/status/2048607700322316627)
[^6]: [𝕏 post by @dl_weekly](https://x.com/dl_weekly/status/2048462213581586499)
[^7]: [𝕏 post by @blixt](https://x.com/blixt/status/2048199166862495897)
[^8]: [𝕏 post by @blixt](https://x.com/blixt/status/2048199175095898150)
[^9]: [𝕏 post by @omarsar0](https://x.com/omarsar0/status/2048504655932760565)
[^10]: [𝕏 post by @gabriberton](https://x.com/gabriberton/status/2048552603760705629)
[^11]: [𝕏 post by @gabriberton](https://x.com/gabriberton/status/2048552611473936760)
[^12]: [𝕏 post by @gabriberton](https://x.com/gabriberton/status/2048552614804267319)
[^13]: [𝕏 post by @gabriberton](https://x.com/gabriberton/status/2048552616054137025)
[^14]: [𝕏 post by @_vmlops](https://x.com/_vmlops/status/2048362543866060878)
[^15]: [𝕏 post by @DillonUzar](https://x.com/DillonUzar/status/2048266693756015099)
[^16]: [𝕏 post by @kimmonismus](https://x.com/kimmonismus/status/2048430788094374078)
[^17]: [𝕏 post by @kimmonismus](https://x.com/kimmonismus/status/2048430820638052668)
[^18]: [𝕏 post by @ollama](https://x.com/ollama/status/2047598971435290992)
[^19]: [𝕏 post by @ollama](https://x.com/ollama/status/2048631770283962380)
[^20]: [𝕏 post by @ollama](https://x.com/ollama/status/2048631772439863795)
[^21]: [𝕏 post by @TheEthanDing](https://x.com/TheEthanDing/status/2048611502391808352)
[^22]: [𝕏 post by @sam_paech](https://x.com/sam_paech/status/2048221992503947444)
[^23]: [𝕏 post by @om_patel5](https://x.com/om_patel5/status/2048204411986469232)
[^24]: [𝕏 post by @Teknium](https://x.com/Teknium/status/2048576507786956973)
[^25]: [𝕏 post by @reach_vb](https://x.com/reach_vb/status/2048635276226949301)
[^26]: [𝕏 post by @thekaransinghal](https://x.com/thekaransinghal/status/2048502612018766332)