# Coding Agents Get a Price Tag as AI Pushes Back Toward Science

*By AI News Digest • April 10, 2026*

OpenAI's new $100 Codex-focused tier made the commercialization of agentic coding explicit. Meanwhile, Meta, OpenAI, Google DeepMind's Demis Hassabis, and U.S. officials all pushed AI back toward a broader conversation about science, medicine, and infrastructure.

## The market for coding agents got more explicit

### OpenAI introduced a new $100/month Pro tier for heavier Codex use

OpenAI launched a new $100/month ChatGPT Pro tier that offers 5x more Codex usage than Plus and is designed for longer, high-effort coding sessions [^1]. The plan includes all Pro features, such as the exclusive Pro model and unlimited access to the Instant and Thinking models, and OpenAI is temporarily boosting Codex access to up to 10x Plus-level usage through May 31 [^1]. Sam Altman said the move follows strong interest in Codex [^2].

*Why it matters:* OpenAI is now pricing around sustained agentic coding demand, while keeping its existing $200 Pro tier as the highest-usage option [^3].

### The strongest gains are still clustering in technical workflows

Andrej Karpathy said many people still judge AI by free or older chat products that fumble simple tasks, while users of frontier agentic systems such as OpenAI Codex and Claude Code are seeing much stronger progress in programming, math, and research [^4]. He tied that gap to domains with verifiable rewards and high B2B value [^4]. Hex's data-agent team described a similar asymmetry from the product side: coding is increasingly easy to verify, while analytical work still involves many hard-to-validate decisions, which is why their agents rely on long-running workflows and custom context handling [^5]. Gary Marcus argued that these advances remain concentrated in particular areas and should not be mistaken for AGI being "in striking distance" [^6].

*Why it matters:* The frontier is moving quickly, but unevenly: the biggest gains are appearing first where feedback is crisp and measurable [^4][^5].

## Research pushed further beyond chat

### Meta's TRIBE v2 models how the brain responds to media

Meta released TRIBE v2, a foundation model trained on more than 1,000 hours of brain imaging data from 720 people [^7]. Given video, audio, or text, it predicts which brain regions activate, how strongly, and in what order; on unseen subjects, its predictions were reported as more accurate than most real scans [^7]. Researchers also used it to recreate classic neuroscience experiments in software and identify face-recognition, language, and emotional-processing regions on its own [^7].

*Why it matters:* It is a notable signal that leading labs are still investing in scientific foundation models, not only assistant and coding products [^7].

### OpenAI highlighted healthcare benchmarks and AI-assisted treatment analysis

OpenAI said its team has created public benchmarks for evaluating models in healthcare, deployed clinical copilots in primary care settings, and is working to democratize medical expertise through ChatGPT Health [^8]. In a featured osteosarcoma case, GPT-4o was used on bulk RNA-seq data to flag targets such as B7H3, and a custom agent system performed literature review and bioinformatics analysis across 600,000 single cells [^8]. The presentation linked that work to personalized mRNA vaccine, TCR-T, and CAR-T efforts [^8].

*Why it matters:* The example shows model providers trying to move from health chatbots toward tool-assisted clinical and research workflows [^8].

## The science-first argument got louder

### Demis Hassabis said the commercial race crowded out slower scientific work

In a recent interview, Demis Hassabis said he would have preferred to keep AI in the lab longer and focus on more AlphaFold-like advances rather than getting pulled into a "ferocious commercial pressure race" after ChatGPT [^9]. He also warned that the next two to four years of the "agentic era" will make alignment and guardrails a much harder technical challenge, and he called for cooperation across labs, safety institutes, and academia [^9].

> "If I'd had my way, I would have left AI in the lab for longer. Done more things like AlphaFold. Maybe cured cancer or something like that." [^9]

*Why it matters:* One of the most influential lab leaders is publicly arguing for a more science-oriented path even as he warns that more autonomous systems are arriving quickly [^9].

### U.S. officials are also framing AI as research infrastructure

The Department of Energy's Genesis Mission was presented as an AI-driven platform for accelerating scientific discovery by combining AI, supercomputing, and quantum technologies, alongside public-private partnerships and interagency coordination [^10]. Under Secretary Dario Gil also emphasized research security, allied collaboration, and a broader effort to revitalize the U.S. science and technology enterprise [^10].

*Why it matters:* The policy conversation is not only about consumer products and risk; it is also moving toward national research capacity and scientific infrastructure [^10].

## Enterprise adoption still looks messy

### A new survey found most employees are bypassing formal AI rollouts

A survey of 3,750 executives and employees found that 54% of workers bypassed their company's AI tools in the past 30 days and another 33% had not used AI at all, even though companies spent an average of $54 million on AI deployments this year [^11]. Only 9% of workers said they trust AI for complex business-critical decisions, versus 61% of executives, and workers were reported to lose the equivalent of 51 working days per year to technology friction [^11]. Marc Andreessen, by contrast, argued that adoption is still happening bottom-up inside companies, with workers and managers often using AI whether or not leaders see it [^12].

*Why it matters:* For enterprises, the bottleneck now looks less like access and more like trust, training, and workflow fit [^11].

---

### Sources

[^1]: [𝕏 post by @OpenAI](https://x.com/OpenAI/status/2042295688323875316)
[^2]: [𝕏 post by @sama](https://x.com/sama/status/2042342572958630332)
[^3]: [𝕏 post by @OpenAI](https://x.com/OpenAI/status/2042296046009626989)
[^4]: [𝕏 post by @karpathy](https://x.com/karpathy/status/2042334451611693415)
[^5]: [How Hex Builds AI Agents: Making Agents Reason Like Human Data Analysts | Izzy Miller, AI Engineer](https://www.youtube.com/watch?v=Xyh1EqcjGME)
[^6]: [𝕏 post by @GaryMarcus](https://x.com/GaryMarcus/status/2042400479167332491)
[^7]: [𝕏 post by @rowancheung](https://x.com/rowancheung/status/2042260621274861756)
[^8]: [ChatGPT and Cancer: How a Tech Founder Rewrote His Treatment Plan](https://www.youtube.com/watch?v=OAlHiQLsYQM)
[^9]: [𝕏 post by @Ric_RTP](https://x.com/Ric_RTP/status/2042230439788638487)
[^10]: [𝕏 post by @CSISEST](https://x.com/CSISEST/status/2041879033852448955)
[^11]: [𝕏 post by @HedgieMarkets](https://x.com/HedgieMarkets/status/2042295089586700645)
[^12]: [𝕏 post by @pmarca](https://x.com/pmarca/status/2042368286277661041)