# Subliminal Learning, GPT-5.5 Demand, and AI’s Move Into Classified Networks

*By AI High Signal Digest • May 2, 2026*

Anthropic’s subliminal learning result, OpenAI’s unusually strong GPT-5.5 traction, and new government deployment of frontier AI led today’s brief. Also included: key papers on multi-agent systems and long-horizon training, new agent and edge-inference products, and notable labor and robotics policy moves.

## Top Stories

*Why it matters:* The biggest signals today were about hidden model risk, fast commercialization, and AI moving into more sensitive environments.

- **Anthropic’s subliminal learning paper raises a new distillation safety problem.** Anthropic and collaborators reported that student models can inherit traits, including misalignment, from teacher-generated synthetic data even when the data contains no explicit semantic reference to the trait and has passed content filtering. The transfer was also reported to be model-family-specific: GPT-to-GPT worked, while GPT-to-Claude did not [^1].
- **OpenAI says GPT-5.5 is its strongest launch yet.** One week after release, OpenAI said API revenue is growing more than 2x faster than any prior launch, while Codex doubled revenue in under seven days; separately, GPT-5.5, Codex, and Managed Agents were brought to Amazon Bedrock in limited preview [^2][^3].
- **Frontier AI is moving onto classified networks.** The Department of War CTO account said the department signed agreements with SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, and AWS to deploy frontier capabilities on classified networks, framing the effort as part of an AI-first war department mandate [^4].

## Research & Innovation

*Why it matters:* The most useful research updates targeted coordination, long-horizon training data, and improving model behavior earlier in the pipeline.

- **RecursiveMAS replaces agent-to-agent text chatter with latent-state transfer.** The paper introduces a RecursiveLink module and shared credit assignment across heterogeneous agents; across nine benchmarks, it reported an 8.3% average accuracy gain, 1.2x-2.4x inference speedups, and 34.6%-75.6% lower token usage [^5].
- **Microsoft Research built 1,000 synthetic computers for training computer-use agents.** Each simulated workflow averaged more than 8 hours of agent runtime and 2,000+ turns, and the team said training on this data improved both in-domain and out-of-domain productivity while scaling to millions or billions of synthetic worlds [^6].
- **Meta FAIR showed a way to push safety and factuality into pretraining itself.** Using a strong post-trained model as both rewriter and judge during pretraining, the method reported 36.2% relative gains in factuality, 18.5% in safety, and up to 86.3% better generation quality than standard pretraining [^7].
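The core idea behind the RecursiveMAS result, replacing verbose text handoffs between agents with direct latent-state transfer, can be sketched in toy form. Everything below is a hypothetical illustration of the general concept, not the paper's RecursiveLink module; the "encoder" is a stand-in for a real model's hidden state:

```python
# Toy contrast between text-based and latent-state agent handoff.
# All names and functions are hypothetical illustrations.

def encode(text):
    """Stand-in 'encoder': map text to a small latent vector (word lengths)."""
    return [float(len(w)) for w in text.split()][:4]

def agent_a_text(task):
    # Text handoff: agent A must verbalize its full intermediate reasoning,
    # spending tokens on framing that the next agent has to parse again.
    return f"task={task}; findings=lengths of each word; confidence=high"

def agent_a_latent(task):
    # Latent handoff: agent A passes its internal state directly.
    return encode(task)

def agent_b_from_text(message):
    # Agent B must re-encode the verbose message before it can act.
    return sum(encode(message))

def agent_b_from_latent(state):
    # Agent B consumes the latent state with no re-encoding round trip.
    return sum(state)

task = "summarize the quarterly report"
text_msg = agent_a_text(task)
latent = agent_a_latent(task)
print(len(text_msg.split()), "tokens via text vs", len(latent), "floats via latent")
```

The token and speed numbers in the paper plausibly come from exactly this asymmetry: the text path pays serialization and re-parsing costs on every hop, while the latent path does not.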

## Products & Launches

*Why it matters:* Product releases are increasingly about agent workflow quality, local inference, and turning AI into routine software behavior.

- **Codex added a more goal-oriented workflow.** The new `/goal` command sets a persistent objective, nudges the model toward the next concrete action after each turn, and maps requirements to evidence; OpenAI also added one-click workflow import for settings, plugins, agents, and project configuration [^8][^9][^10].
- **Moondream shipped Photon 1.2.0 for edge vision inference.** The release adds Apple Silicon, native Windows CUDA, Blackwell, and Jetson Thor support; the team also described custom Metal kernels and a fused token-sampling path that cut one step from 687µs to 130µs, while arguing local vision can beat cloud wall-clock latency by avoiding large image uploads [^11][^12][^13][^14][^15].
- **Google added agentic restaurant booking to Search and Maps.** Users can describe constraints like group size, vibe, time, and dietary preferences, after which AI Mode or Ask Maps searches multiple reservation sources and returns options with booking links via partners such as OpenTable and Resy [^16][^17].

## Industry Moves

*Why it matters:* Corporate strategy is shifting from model releases alone to robotics, internal automation, and data-layer bets.

- **Meta pulled ARI into Meta Superintelligence Labs.** ARI said it is joining MSL to build general-purpose humanoid intelligence and argued that scaling will come from learning directly from human experience, not teleoperation alone [^18][^19].
- **Ramp says coding agents are now doing most of the merge work.** The company said its in-house agent Inspect now writes about 70% of merged PRs, up from 30% when the figure was first shared; one team reported its Cloud Agent accounted for 80.3% of its PRs over the last 14 days, helped by Slack-triggered workflows [^20][^21].
- **Hightouch raised $150M at a $2.75B valuation.** The company said it is building an AI platform for marketers, with commentary around the round emphasizing that marketing AI depends heavily on access to the right data foundations [^22][^23].

## Policy & Regulation

*Why it matters:* Governments are starting to shape AI through both labor protections and direct industrial policy.

- **Chinese courts ruled companies cannot fire workers simply to replace them with AI.** In Hangzhou, a tech company’s reassignment and pay-cut strategy tied to automation was deemed illegal termination [^24][^25].
- **Hangzhou enacted what it calls China’s first local regulation for embodied intelligent robots.** The law defines the category, directs R&D support toward motion control, core components, and domestic chips, and requires public agencies to open application scenarios [^26].

## Quick Takes

*Why it matters:* A few smaller updates still sharpen the picture on capability, infrastructure, and open-model economics.

- **ARC-AGI-3 remains extremely hard:** GPT-5.5 scored 0.43% and Opus 4.7 scored 0.18%, with ARC Prize identifying three recurring failure modes [^27].
- **Azure says its hosted OpenAI models now deliver 10x improvements in latency and throughput,** and one external monitor later reported Azure serving GPT-5.5 faster than OpenAI’s own API [^28][^29].
- **Open-weight leaders are still closing the gap:** Artificial Analysis said Kimi K2.6 and MiMo V2.5 Pro tied at 54 on its Intelligence Index, within 3-6 points of top proprietary models and at half to one-sixth the price [^30][^31].
- **NVIDIA Research says speculative decoding can ease RL rollout bottlenecks,** with 1.8x higher throughput at 8B and a projected 2.5x end-to-end speedup at 235B [^32].
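Speculative decoding itself is a well-established technique: a cheap draft model proposes several tokens, and the large target model verifies them, accepting or resampling so the output distribution still matches the target exactly. A minimal toy sketch of the accept/reject loop follows (the hand-coded distributions are hypothetical stand-ins for real models; this is not NVIDIA's RL-rollout implementation):

```python
import random

random.seed(0)
VOCAB = ["a", "b", "c"]

def draft_dist(prefix):
    # Hypothetical cheap draft model: close to, but not equal to, the target.
    return {"a": 0.6, "b": 0.3, "c": 0.1}

def target_dist(prefix):
    # Hypothetical expensive target model: the distribution we must match.
    return {"a": 0.5, "b": 0.4, "c": 0.1}

def sample(dist):
    r, acc = random.random(), 0.0
    for tok, p in dist.items():
        acc += p
        if r < acc:
            return tok
    return tok

def speculative_step(prefix, k=4):
    """Draft k tokens, then accept/reject so output matches the target model."""
    drafted, cur = [], list(prefix)
    for _ in range(k):
        t = sample(draft_dist(cur))
        drafted.append(t)
        cur.append(t)
    accepted, cur = [], list(prefix)
    for t in drafted:
        q, p = draft_dist(cur)[t], target_dist(cur)[t]
        if random.random() < min(1.0, p / q):  # accept with prob min(1, p/q)
            accepted.append(t)
            cur.append(t)
        else:
            # Rejected: resample from the normalized residual (p - q)+ distribution.
            resid = {v: max(target_dist(cur)[v] - draft_dist(cur)[v], 0.0) for v in VOCAB}
            z = sum(resid.values())
            accepted.append(sample({v: m / z for v, m in resid.items()}))
            break
    else:
        # All drafts accepted: one bonus token from the target model.
        accepted.append(sample(target_dist(cur)))
    return accepted

out = speculative_step(["<s>"])
print(out)
```

The verify loop here scores one drafted token at a time for clarity; the real speedup comes from the target model scoring all drafted tokens in a single batched forward pass, which is why the technique maps well onto throughput-bound RL rollouts.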

---

### Sources

[^1]: [𝕏 post by @iam_elias1](https://x.com/iam_elias1/status/2049909541408850312)
[^2]: [𝕏 post by @OpenAI](https://x.com/OpenAI/status/2050250926888468929)
[^3]: [𝕏 post by @dl_weekly](https://x.com/dl_weekly/status/2050244192358613029)
[^4]: [𝕏 post by @DoWCTO](https://x.com/DoWCTO/status/2050175912134561977)
[^5]: [𝕏 post by @omarsar0](https://x.com/omarsar0/status/2050261229315477988)
[^6]: [𝕏 post by @dair_ai](https://x.com/dair_ai/status/2050263752147456238)
[^7]: [𝕏 post by @omarsar0](https://x.com/omarsar0/status/2050213732970848664)
[^8]: [𝕏 post by @mattlam_](https://x.com/mattlam_/status/2049907603829121354)
[^9]: [𝕏 post by @OpenAI](https://x.com/OpenAI/status/2050290618187055175)
[^10]: [𝕏 post by @OpenAI](https://x.com/OpenAI/status/2050290619684393152)
[^11]: [𝕏 post by @moondreamai](https://x.com/moondreamai/status/2050284529798275262)
[^12]: [𝕏 post by @vikhyatk](https://x.com/vikhyatk/status/2050372968845541515)
[^13]: [𝕏 post by @vikhyatk](https://x.com/vikhyatk/status/2050372970535915797)
[^14]: [𝕏 post by @vikhyatk](https://x.com/vikhyatk/status/2050372972041638369)
[^15]: [𝕏 post by @mayfer](https://x.com/mayfer/status/2050328884374388953)
[^16]: [𝕏 post by @Google](https://x.com/Google/status/2050293156067881013)
[^17]: [𝕏 post by @Google](https://x.com/Google/status/2050293159230320954)
[^18]: [𝕏 post by @xiaolonw](https://x.com/xiaolonw/status/2050298370842132680)
[^19]: [𝕏 post by @LerrelPinto](https://x.com/LerrelPinto/status/2050297929294885270)
[^20]: [𝕏 post by @zachbruggeman](https://x.com/zachbruggeman/status/2049912136957386848)
[^21]: [𝕏 post by @leveredvlad](https://x.com/leveredvlad/status/2050335535806505013)
[^22]: [𝕏 post by @tejasmanohar](https://x.com/tejasmanohar/status/2049569379793138031)
[^23]: [𝕏 post by @sarahcat21](https://x.com/sarahcat21/status/2050140617368277305)
[^24]: [𝕏 post by @kimmonismus](https://x.com/kimmonismus/status/2050250164645007770)
[^25]: [𝕏 post by @kimmonismus](https://x.com/kimmonismus/status/2050250176540107142)
[^26]: [𝕏 post by @poezhao0605](https://x.com/poezhao0605/status/2050177996087320956)
[^27]: [𝕏 post by @arcprize](https://x.com/arcprize/status/2050261221165989969)
[^28]: [𝕏 post by @theo](https://x.com/theo/status/2050305813894648289)
[^29]: [𝕏 post by @theo](https://x.com/theo/status/2050327526933979647)
[^30]: [𝕏 post by @ArtificialAnlys](https://x.com/ArtificialAnlys/status/2050096370200281539)
[^31]: [𝕏 post by @ArtificialAnlys](https://x.com/ArtificialAnlys/status/2050096388445491399)
[^32]: [𝕏 post by @NVIDIAAI](https://x.com/NVIDIAAI/status/2050304249699950739)