# Profluent-Lilly Partnership, Actively’s Series B, and New Coding-Agent Signals

*By VC Tech Radar • April 29, 2026*

Profluent’s multibillion-dollar Eli Lilly partnership and Actively’s $45M Series B led the financing news, while Poolside, Imbue, and SenseTime surfaced fresh technical signals in coding agents and multimodal models. The broader investor read-through: software moats are being reset, AI-native enterprise wedges are drawing more interest, and teams are increasingly screened on how they build with agents.

## Funding & Deals

- **Profluent Bio x Eli Lilly**: Profluent and Eli Lilly announced a multibillion-dollar partnership to use Profluent’s AI models to design custom recombinases, described as a new class of gene editor for large-scale DNA editing across multiple diseases. Nathan Benaich called Profluent an "n=1 company." [^1]

- **Actively**: Actively raised a $45M Series B led by TCV and firstharmonic, with Bain Capital Ventures, First Round, and Alkeon participating. The company’s pitch is an AI agent for every sales account, working 24/7 with persistent context, org-chart research, briefing memos, and real-time coaching; it says teams at Samsara, Ramp, Ironclad, and Attentive have standardized on it, and Bain Capital Ventures and First Round are doubling down. [^2][^3]

## Emerging Teams

- **Pallo**: A small team says its Cambridge/IB study app now has about 4,000 students using it. The clearest signal is voice: voice users send roughly 5x more messages, stay almost 2x longer, and show higher conversion and retention than typing-only users. The team also reports ~17% D30 retention for new users and around 50% W4 for some cohorts; 73% of active users upload worksheet photos. Their product takeaway is blunt: students want tutor-style back-and-forth, not more curriculum, and AI-generated tests saw only 12 completions in two months. [^4][^5]

- **Magnifly AI**: Social Channel Group has turned its Business Enablement Guide service—a 2-4 week GTM workflow covering personas, market intelligence, messaging, prospecting, sales enablement, and content direction—into SaaS, and says some high-ticket clients already use it. The interesting part is distribution: the agency says it already works with Fortune 100-500 companies, distributors, MSPs, SIs, resellers, and mid-market/SMB customers, giving it a real starting channel if the product keeps converting service demand into software. [^6]

- **Saffron**: YC highlighted Saffron, which evaluates how well software engineers code with AI tools to help companies identify "the next 10x engineer." Founders named in the launch post are Rob Lukan, Kazuma Choji, and MJ Yao. [^7]

## AI & Tech Breakthroughs

- **Poolside AI**: Poolside released Laguna M.1 and Laguna XS.2, its first public open-weight models, framing them as the first output of a full-stack approach spanning data, training, reinforcement learning, and inference for coding agents, and says it is making them available to everyone. [^8][^9]

> "Asking AI to ask questions is hugely underrated." (Marc Andreessen) [^10]

- **Question-asking is becoming a capability frontier**: Imbue’s open-source Blueprint reads code, asks grounded questions, and produces executable plans; Kanjun says it is "10x better" than Claude Code Plan Mode because the questions are better. In research workflows, Sebastien Bubeck says OpenAI’s internal agents are already finding mistakes in papers and surfacing questions that humans then want to turn into papers. [^11][^12][^13][^14]

- **SenseNova-U1**: SenseTime open-sourced SenseNova-U1 under Apache 2.0. Its NEO-Unify architecture removes the visual encoder and VAE, uses a Mixture-of-Transformer backbone on near-lossless pixel inputs, and claims strengths in text rendering, dense visual layouts, interleaved text-image generation, and open-source unified-model benchmarks. Tradeoffs noted in the post: weaker high-resolution photorealism than specialized diffusion models, training code/report still pending, and an ecosystem that still needs to be built. [^15]
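The "encoder-free" claim above is the notable architectural bet: rather than passing images through a pretrained visual encoder or VAE, raw pixels are patchified and projected directly into the transformer's token stream. A minimal sketch of that idea, with all shapes and names illustrative rather than SenseNova-U1's actual implementation:

```python
import numpy as np

def patchify(image, patch=16):
    """Split an HxWxC image into flattened non-overlapping patches."""
    h, w, c = image.shape
    assert h % patch == 0 and w % patch == 0
    patches = image.reshape(h // patch, patch, w // patch, patch, c)
    patches = patches.transpose(0, 2, 1, 3, 4)        # (gh, gw, p, p, c)
    return patches.reshape(-1, patch * patch * c)     # (num_patches, p*p*c)

rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))                        # near-lossless pixel input
tokens = patchify(image)                               # (16, 768)
proj = rng.standard_normal((16 * 16 * 3, 512)) * 0.02  # learned projection in practice
visual_tokens = tokens @ proj                          # ready to interleave with text tokens
print(visual_tokens.shape)  # → (16, 512)
```

The tradeoff the post flags follows directly: with no specialized decoder, high-resolution photorealism is harder to match against dedicated diffusion models, but text and image tokens live in one backbone, which is what enables interleaved generation.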

- **On-device AI is getting materially better**: llm-autotune reports average improvements of 39% lower time-to-first-token, 67% less KV-cache RAM, 46% lower agent wall time, and 67% lower KV prefill time via dynamic KV sizing, live RAM management, and system-prompt prefix caching. Separately, Nova shows a local-first assistant that runs fully offline on consumer hardware with about 8GB RAM, including local chat, local storage, document reading, and offline text-to-speech. [^16][^17]
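Of the techniques llm-autotune cites, system-prompt prefix caching is the most transferable: when many requests share one system prompt, the expensive prefill over that prefix can run once and be reused, so only the user-specific suffix is computed per request. A hypothetical sketch of the mechanism (class and method names are illustrative, not llm-autotune's API):

```python
import hashlib

class PrefixCache:
    def __init__(self):
        self._cache = {}        # prompt hash -> cached "KV state" for the prefix
        self.prefill_calls = 0  # counts expensive prefill passes

    def _prefill(self, text):
        # Stand-in for a real transformer prefill pass; we just record that
        # the expensive computation ran and return a token count.
        self.prefill_calls += 1
        return {"tokens": len(text.split())}

    def kv_for_prefix(self, system_prompt):
        key = hashlib.sha256(system_prompt.encode()).hexdigest()
        if key not in self._cache:
            self._cache[key] = self._prefill(system_prompt)
        return self._cache[key]

    def run(self, system_prompt, user_msg):
        prefix_kv = self.kv_for_prefix(system_prompt)  # cached across requests
        suffix_kv = self._prefill(user_msg)            # always computed fresh
        return prefix_kv["tokens"] + suffix_kv["tokens"]

cache = PrefixCache()
SYSTEM = "You are a helpful on-device assistant."
cache.run(SYSTEM, "Summarize my notes")
cache.run(SYSTEM, "Draft a reply")
# The shared system prompt was prefilled once: 3 prefill passes instead of 4.
print(cache.prefill_calls)  # → 3
```

On memory-constrained consumer hardware, skipping repeated prefill of a long system prompt cuts both time-to-first-token and KV RAM pressure, which is consistent with the headline numbers reported.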

## Market Signals

- **Naval’s investability reset**: Naval argues pure software is now "uninvestable" because coding agents let people hack together apps today and are improving quickly enough to build scalable software with good architecture. His suggested targets are hardware, network effects, and AI models; he also says coding agents hit an inflection point around December 2025 and that 1-2 person software companies will increasingly be able to reach massive scale. [^18]

- **Model price/performance is compressing**: Abacus.AI CEO Bindu Reddy says her team is moving workloads to Kimi 2.6 because it beats Opus 4.7 medium on some use cases, beats GPT 5.5 on front-end work, performs well on tool calling and instruction following, and is 5x cheaper. [^19]

- **HCM looks like a large remaining AI-native wedge**: a16z argues Workday remains deeply embedded—more than 10,000 organizations, tens of millions of users, and approaching $10B in annual revenue—while HCM is still the last large enterprise software category without a serious AI-native challenger. [^20]

- **AI-native engineering workflows are becoming evaluation surfaces**: One SaaS builder claims YC Spring applications are asking founders to submit Claude Code `/export` files as a signal of taste and caliber, while Saffron is explicitly assessing how well engineers code with AI tools. The broader read-through is that engineering quality is starting to be screened through agent usage, not just raw coding output. [^21][^7]

- **VC workflows are fair game for agents**: Elizabeth Yin agreed with the claim that agents can replace 90% of VC associate and principal work, adding that venture was "never about diligence…but access." [^22][^23]

- **AI infrastructure sentiment is turning more constructive**: Elizabeth Yin says recent AI improvements and new frameworks shifted her from worrying about data-center debt to seeing an inflection for further AI acceleration. Garry Tan goes further, arguing AI data centers are San Francisco’s most important growth industry and the engine of downtown recovery. [^24][^25]

## Worth Your Time

- [On Vibe Coding](https://www.youtube.com/watch?v=hTdSU7q5WCo) and the [transcript](http://nav.al/code): primary source for Naval’s view on coding-agent inflection, software moats, and why company size may compress. [^26][^18]


[![On Vibe Coding](https://img.youtube.com/vi/hTdSU7q5WCo/hqdefault.jpg)](https://youtube.com/watch?v=hTdSU7q5WCo&t=34)
*On Vibe Coding (0:34)*


- [Workday’s last workday](https://www.a16z.news/p/workdays-last-workday): concise thesis on why HCM may still be open to an AI-native attacker despite Workday’s scale and stickiness. [^20]

- [Blueprint launch thread](https://x.com/kanjun/status/2049194874625503695): useful if you are tracking where coding agents may differentiate next—planning quality and better questions, not just faster code generation. [^11][^12][^13]

- [Vintage models thread](https://x.com/andrewchen/status/2049041858198994969): a speculative but interesting framework for testing whether historical corpora contain future-predictive latent structure, paired with a 13B model trained only on pre-1931 text. [^27][^28]

- [SenseNova-U1 GitHub](https://github.com/OpenSenseNova/SenseNova-U1): worth reviewing if unified multimodal architectures are on your map; the key architectural claim is removing the visual encoder and VAE rather than layering adapters onto separate systems. [^15]

---

### Sources

[^1]: [𝕏 post by @nathanbenaich](https://x.com/nathanbenaich/status/2049162488475009140)
[^2]: [𝕏 post by @mgarimella](https://x.com/mgarimella/status/2049113875765698671)
[^3]: [𝕏 post by @ajay_bcv](https://x.com/ajay_bcv/status/2049146730424254736)
[^4]: [r/SideProject post by u/InteractionKnown6441](https://www.reddit.com/r/SideProject/comments/1sxxsf6/)
[^5]: [r/SideProject comment by u/InteractionKnown6441](https://www.reddit.com/r/SideProject/comments/1sxxsf6/comment/oiq2jyx/)
[^6]: [r/startups post by u/ZLordofThunder](https://www.reddit.com/r/startups/comments/1sygrah/)
[^7]: [𝕏 post by @ycombinator](https://x.com/ycombinator/status/2049239677862138149)
[^8]: [𝕏 post by @jasoncwarner](https://x.com/jasoncwarner/status/2049143220790317409)
[^9]: [𝕏 post by @nathanbenaich](https://x.com/nathanbenaich/status/2049189849878692287)
[^10]: [𝕏 post by @pmarca](https://x.com/pmarca/status/2049188549342138449)
[^11]: [𝕏 post by @kanjun](https://x.com/kanjun/status/2049194874625503695)
[^12]: [𝕏 post by @imbue_ai](https://x.com/imbue_ai/status/2049174423757103217)
[^13]: [𝕏 post by @kanjun](https://x.com/kanjun/status/2049176168226595113)
[^14]: [𝕏 post by @deredleritt3r](https://x.com/deredleritt3r/status/2049184480141742123)
[^15]: [r/deeplearning post by u/Remarkable-Aspect879](https://www.reddit.com/r/deeplearning/comments/1sy64c2/)
[^16]: [r/SideProject post by u/tctheking1](https://www.reddit.com/r/SideProject/comments/1sydx52/)
[^17]: [r/SideProject post by u/Yusso_17](https://www.reddit.com/r/SideProject/comments/1sygrsg/)
[^18]: [On Vibe Coding](https://www.youtube.com/watch?v=hTdSU7q5WCo)
[^19]: [𝕏 post by @bindureddy](https://x.com/bindureddy/status/2049330512544808983)
[^20]: [𝕏 post by @a16z](https://x.com/a16z/status/2049146324008968512)
[^21]: [r/SaaS post by u/achilleskedd](https://www.reddit.com/r/SaaS/comments/1sy1bsf/)
[^22]: [𝕏 post by @dunkhippo33](https://x.com/dunkhippo33/status/2049365591379828948)
[^23]: [𝕏 post by @geoffreywoo](https://x.com/geoffreywoo/status/2040187833651023906)
[^24]: [𝕏 post by @dunkhippo33](https://x.com/dunkhippo33/status/2049184875253563588)
[^25]: [𝕏 post by @garrytan](https://x.com/garrytan/status/2049284944737009744)
[^26]: [𝕏 post by @naval](https://x.com/naval/status/2049349252112089553)
[^27]: [𝕏 post by @andrewchen](https://x.com/andrewchen/status/2049041858198994969)
[^28]: [𝕏 post by @status_effects](https://x.com/status_effects/status/2048878495539843211)