# NVIDIA Opens an Omni Stack, Codex Broadens Its Reach, and Google Signs a Pentagon AI Deal

*By AI High Signal Digest • April 29, 2026*

NVIDIA released an open multimodal model designed for agent workflows, OpenAI expanded Codex well beyond coding, and reports detailed Google’s classified Pentagon AI deal. Also in focus: stronger math capability signals, on-device privacy tooling, and major partnerships from Profluent, Exa, and Anthropic.

## Top Stories

*Why it matters:* The biggest signals today were an open multimodal push from NVIDIA, a broader scope for Codex, and stronger evidence that frontier models are contributing to serious technical work.

- **NVIDIA released a new open multimodal model built for agent loops.** Nemotron 3 Nano Omni combines audio, image, video, and text in one reasoning loop, ships with **30B parameters** and **256K context**, and quickly landed across vLLM, Together AI, fal, and Ollama. fal highlighted roughly **9× higher throughput** from fewer inference hops in multimodal agent workflows [^1][^2][^3][^4][^5].

- **Codex moved closer to a general work agent.** Recent updates added macOS computer use, an in-app browser for inspecting localhost builds, built-in image generation, plugins, first-class artifacts, and follow-up automations. OpenAI also added a **/fast** mode for GPT-5.5 in Codex at **1.5×** speed and reset rate limits for all paid plans [^6][^7][^8][^9][^10][^11][^12][^13][^14].

- **OpenAI’s math signal kept strengthening.** OpenAI said **GPT-5.4 Pro** helped solve a **60-year-old Erdős problem**, while **GPT-5.5 Pro** reached a new high of **159** on Epoch’s Capabilities Index and improved FrontierMath results, including solving two previously unsolved Tier 4 problems across runs [^15][^16][^17].

## Research & Innovation

*Why it matters:* The most useful research today focused on where current systems still break: retrieval, post-training efficiency, and safety visibility.

- **MathNet exposed a major retrieval gap in math AI.** The MIT benchmark includes **30,676 Olympiad-level problems** from **47 countries** and **17 languages**; top models reached **78.4%** problem-solving accuracy, but retrieval Recall@1 was only about **5%**, with RAG improving results by up to **12%** [^18].
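  Recall@1 here is a standard retrieval metric: the fraction of queries whose single top-ranked result is actually relevant. A minimal, self-contained sketch of the computation (the problem IDs below are made up for illustration, not MathNet data):

  ```python
  def recall_at_1(retrieved, relevant):
      """Fraction of queries whose top-ranked result is a relevant item.

      retrieved: one ranked list of result IDs per query.
      relevant:  one set of relevant IDs per query.
      """
      hits = sum(1 for ranked, rel in zip(retrieved, relevant)
                 if ranked and ranked[0] in rel)
      return hits / len(relevant)

  # Toy example: 3 queries; only the second query's top hit is relevant.
  retrieved = [["p7", "p2"], ["p4", "p9"], ["p1", "p3"]]
  relevant = [{"p2"}, {"p4"}, {"p8"}]
  print(recall_at_1(retrieved, relevant))  # 1 of 3 top hits is relevant → 1/3
  ```

  A top model can score well on end-to-end problem solving while this metric stays low, which is exactly the gap the benchmark highlights.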

- **Self-distillation is emerging as a serious post-training alternative.** MIT and ETH Zurich researchers described a setup in which a model acts as its own teacher using feedback or demonstrations; they highlighted **SDPO** for RL and **SDFT** for continual learning, and argued the approach is simpler and faster than **GRPO**, with production use already underway [^19].
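  The general self-distillation loop — a model sampling from itself, filtering with a feedback signal, and shifting probability toward its own preferred outputs — can be sketched with a toy discrete policy. This is an illustrative stand-in, not the paper's SDPO or SDFT algorithms; `self_distill`, the toy policy, and the feedback function are all invented for the example:

  ```python
  import random

  def self_distill(policy, feedback, rounds=20, samples=8, lr=0.5):
      """Toy self-distillation loop: the model samples from its own policy,
      keeps the outputs its feedback signal prefers, and moves probability
      mass toward them. No external teacher model is involved."""
      for _ in range(rounds):
          drawn = random.choices(list(policy), weights=list(policy.values()),
                                 k=samples)
          good = [a for a in drawn if feedback(a)]
          for a in good:
              policy[a] += lr          # reinforce self-generated "good" outputs
          total = sum(policy.values())
          for a in policy:
              policy[a] /= total       # renormalize back to a distribution
      return policy

  random.seed(0)
  policy = {"correct": 0.2, "wrong_a": 0.4, "wrong_b": 0.4}
  policy = self_distill(policy, feedback=lambda a: a == "correct")
  print(max(policy, key=policy.get))  # the policy concentrates on "correct"
  ```

  The appeal over GRPO-style setups, per the thread, is this simplicity: one model, one feedback signal, no separate reward model or teacher to maintain.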

- **A new “Introspection Adapter” targets hidden model behavior.** Researchers trained a single adapter that makes finetuned models describe their behavior and generalizes to detecting hidden misalignment, backdoors, and safeguard removal [^20][^21].

## Products & Launches

*Why it matters:* The most notable launches were practical: privacy, enterprise research, and deployable coding models.

- **OpenAI shipped Privacy Filter.** It is a **1.5B-parameter**, open-source, on-device model for PII detection and redaction, scoring **96% F1** on PII-Masking-300k and catching sensitive text including **API keys** [^22][^23][^24].
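  As a rough illustration of what PII redaction does (not of how the learned Privacy Filter model works), a regex-based stand-in might look like this; the patterns and labels below are invented for the example and would miss much of what a trained model catches:

  ```python
  import re

  # Toy stand-in for on-device PII redaction: regex rules for a few common
  # PII shapes (email, US-style phone number, API-key-like token).
  PII_PATTERNS = {
      "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
      "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
      "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
  }

  def redact(text):
      """Replace each detected PII span with a bracketed type label."""
      for label, pattern in PII_PATTERNS.items():
          text = pattern.sub(f"[{label}]", text)
      return text

  print(redact("Mail jo@example.com, call 555-123-4567, key sk-" + "a" * 24))
  # → Mail [EMAIL], call [PHONE], key [API_KEY]
  ```

  Running a small learned model on-device serves the same interface — text in, redacted text out — without the brittleness of hand-written rules or the privacy cost of sending raw text to a server.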

- **Google launched Deep Research and Deep Research Max.** The new Gemini 3.1 Pro-powered agents combine open-web search with proprietary enterprise data via **MCP** in a single API call [^25].

- **Poolside released its first open-weight coding model.** **Laguna XS.2** is a **33B total / 3B active** MoE for agentic coding and long-horizon tasks, trained in-house, runnable on a single GPU, and released under **Apache 2.0** [^26].

## Industry Moves

*Why it matters:* Partnerships are increasingly about distribution, workflow control, and high-value verticals rather than just model access.

- **Profluent signed a major pharma deal with Eli Lilly.** The partnership is worth **$2.25B plus royalties** and focuses on AI-designed proteins for **large gene insertion** therapeutics [^27].

- **Google added Exa search inside Gemini.** Exa said its agent-first search now powers **Grounding With Exa** for Gemini, giving models access to billions of websites, technical docs, papers, people, and companies [^28].

- **Anthropic pushed Claude deeper into creative software.** New partnerships with **Blender, Autodesk, Adobe, Ableton**, and others connect Claude directly to professional creative workflows; the Blender connector can debug scenes, build tools, and batch-apply changes across objects [^29][^30].

## Policy & Regulation

*Why it matters:* Government AI contracts are becoming more consequential for both deployment norms and internal company politics.

- **Google’s Pentagon contract became one of the day’s biggest governance stories.** Posts citing *The Information* said Google signed a classified deal allowing use of its AI for “any lawful government purpose” and requiring Google to help adjust safety filters; more than **600 employees** reportedly opposed the move, and lawyers said the contract’s “not intended for” language on surveillance and autonomous weapons carries no legal weight [^31][^32][^33].

## Quick Takes

*Why it matters:* A few smaller releases still stood out for real-time multimodality, evaluation infrastructure, and world-model tooling.

- **MiniCPM-o 4.5** open-sourced a **9B** full-duplex multimodal streaming model and said it can run offline on Windows and macOS hardware [^34].
- **fal** launched **World Model Accelerator**, an inference engine for generative media and world models that scales from **1 to 1,000+ GPUs** [^35].
- **ParseBench** launched with **2,000 verified pages** from real enterprise documents plus a Kaggle leaderboard for document understanding [^36][^37].
- **VibeBench** is recruiting **1,000** software engineers to rank models on real engineering work, with public reports planned after each evaluation round [^38].

---

### Sources

[^1]: [𝕏 post by @NVIDIAAI](https://x.com/NVIDIAAI/status/2049159441870717428)
[^2]: [𝕏 post by @vllm_project](https://x.com/vllm_project/status/2049171268344426846)
[^3]: [𝕏 post by @togethercompute](https://x.com/togethercompute/status/2049160446708711883)
[^4]: [𝕏 post by @fal](https://x.com/fal/status/2049160999442198632)
[^5]: [𝕏 post by @ollama](https://x.com/ollama/status/2049194377751437470)
[^6]: [𝕏 post by @reach_vb](https://x.com/reach_vb/status/2049202864682131469)
[^7]: [𝕏 post by @reach_vb](https://x.com/reach_vb/status/2049202867580420545)
[^8]: [𝕏 post by @reach_vb](https://x.com/reach_vb/status/2049202869908189380)
[^9]: [𝕏 post by @reach_vb](https://x.com/reach_vb/status/2049202872563257793)
[^10]: [𝕏 post by @reach_vb](https://x.com/reach_vb/status/2049202875364962793)
[^11]: [𝕏 post by @reach_vb](https://x.com/reach_vb/status/2049202877986488335)
[^12]: [𝕏 post by @reach_vb](https://x.com/reach_vb/status/2049202880599503307)
[^13]: [𝕏 post by @reach_vb](https://x.com/reach_vb/status/2049367497179074788)
[^14]: [𝕏 post by @thsottiaux](https://x.com/thsottiaux/status/2048997818673537399)
[^15]: [𝕏 post by @OpenAI](https://x.com/OpenAI/status/2049182118069358967)
[^16]: [𝕏 post by @EpochAIResearch](https://x.com/EpochAIResearch/status/2049186851844771888)
[^17]: [𝕏 post by @EpochAIResearch](https://x.com/EpochAIResearch/status/2049186868982677989)
[^18]: [𝕏 post by @TheTuringPost](https://x.com/TheTuringPost/status/2049155956135841862)
[^19]: [𝕏 post by @yacinelearning](https://x.com/yacinelearning/status/2049134868315971655)
[^20]: [𝕏 post by @kshenoy_](https://x.com/kshenoy_/status/2049211997481505050)
[^21]: [𝕏 post by @NeelNanda5](https://x.com/NeelNanda5/status/2049229805598445799)
[^22]: [𝕏 post by @dl_weekly](https://x.com/dl_weekly/status/2049134162611679601)
[^23]: [𝕏 post by @thursdai_pod](https://x.com/thursdai_pod/status/2049157878066430078)
[^24]: [𝕏 post by @thursdai_pod](https://x.com/thursdai_pod/status/2049198681719808465)
[^25]: [𝕏 post by @dl_weekly](https://x.com/dl_weekly/status/2049172165228999142)
[^26]: [𝕏 post by @poolsideai](https://x.com/poolsideai/status/2049144111626670282)
[^27]: [𝕏 post by @thisismadani](https://x.com/thisismadani/status/2049091724828623047)
[^28]: [𝕏 post by @ExaAILabs](https://x.com/ExaAILabs/status/2049200462273147132)
[^29]: [𝕏 post by @Techmeme](https://x.com/Techmeme/status/2049149038977773890)
[^30]: [𝕏 post by @claudeai](https://x.com/claudeai/status/2049143438281445811)
[^31]: [𝕏 post by @kimmonismus](https://x.com/kimmonismus/status/2049081961222955403)
[^32]: [𝕏 post by @erinkwoo](https://x.com/erinkwoo/status/2048991376159998035)
[^33]: [𝕏 post by @TheRundownAI](https://x.com/TheRundownAI/status/2049165183910551873)
[^34]: [𝕏 post by @OpenBMB](https://x.com/OpenBMB/status/2049118941197328408)
[^35]: [𝕏 post by @fal](https://x.com/fal/status/2049262579500114207)
[^36]: [𝕏 post by @osanseviero](https://x.com/osanseviero/status/2048777802015535189)
[^37]: [𝕏 post by @jerryjliu0](https://x.com/jerryjliu0/status/2049189752159764784)
[^38]: [𝕏 post by @jpschroeder](https://x.com/jpschroeder/status/2049139723776495800)