# OpenAI’s Image Surge, a More Contested Model Race, and the New Shape of AI Work

*By AI News Digest • May 3, 2026*

OpenAI reported fast early growth for ChatGPT Images, while fresh evaluations and investor commentary pulled the open-model story in different directions. The day also added evidence that AI is reshaping software jobs toward planning and review, while practical local deployment keeps advancing.

## What stood out

Today’s clearest story was market pull, not a single blockbuster launch. OpenAI posted fresh adoption data, benchmark charts drew unusually explicit disagreement, and software-engineering commentary kept shifting from replacement toward workflow redesign [^1][^2][^3][^4][^5].

### OpenAI is seeing product pull from images — and still arguing for smarter models

ChatGPT Images usage rose more than 50% in a few weeks, and nearly 60% of its daily users were newly logged in; Greg Brockman said the feature is "really taking off" [^1][^6]. Sam Altman separately said he increasingly sees smarter models as more important than cheaper or faster ones [^7].

> "but it seems that just being smarter is still the most important thing" [^7]

**Why it matters:** OpenAI’s own usage signal suggests that new capability can still bring in fresh audiences quickly, especially when the use cases are broad across design, learning, work graphics, and creative work [^1].

### The open-model race looked more contested, not less

A NIST CAISI evaluation found that DeepSeek V4 trails leading U.S. models by about eight months ([full report](https://www.nist.gov/news-events/news/2026/05/caisi-evaluation-deepseek-v4-pro)); Sebastian Raschka said he would have liked to see GLM 5.1, Kimi K2.6, and Qwen3.6 Max on the same chart [^2][^8]. At the same time, commentary endorsed by Marc Andreessen argued that Kimi K2.6 and DeepSeek V4 show open-source scaling is continuing, while Nathan Lambert said much depends on which trend line is more representative, noting that the best open models have long been Chinese [^9][^10][^11][^12]. Another widely shared critique warned that these Elo gaps are inferred from benchmark scores rather than head-to-head play, and can widen mechanically as models approach 100% accuracy on more tests [^3].
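The saturation effect the critique describes can be shown with a toy calculation. The sketch below uses a generic Bradley-Terry-style conversion from per-question accuracy to an implied Elo gap; it is an illustration of the mechanism, not the methodology of any specific leaderboard, and the function name and example accuracies are invented for this purpose.

```python
import math

def implied_elo_gap(acc_a: float, acc_b: float) -> float:
    """Elo gap implied by per-question accuracies under a simple
    Bradley-Terry-style conversion (illustrative, not any lab's
    actual method). A "win" for A is A correct while B is wrong;
    questions both models get right or both miss count as ties
    and are dropped."""
    win_a = acc_a * (1 - acc_b)   # A right, B wrong
    win_b = acc_b * (1 - acc_a)   # B right, A wrong
    p = win_a / (win_a + win_b)   # A's win rate among decisive questions
    return 400 * math.log10(p / (1 - p))

# The same 2-point accuracy edge implies a far larger Elo gap
# near saturation than in the middle of the scale.
mid_gap = implied_elo_gap(0.69, 0.67)    # ~16 Elo points
high_gap = implied_elo_gap(0.99, 0.97)   # ~194 Elo points
```

Because ties are discarded, the decisive questions thin out as both accuracies approach 100%, so a fixed accuracy difference dominates a shrinking denominator and the inferred rating gap balloons.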

**Why it matters:** For anyone tracking the U.S.-China or open-vs.-closed race, leaderboard headlines are carrying more interpretation risk than usual. Official evaluations, open-model momentum claims, and benchmark-methodology caveats are all landing at once.

### Software work still looks like a redesign story before a replacement story

Citadel Securities analysis shared by several AI commentators said demand for software engineers — the most AI-exposed occupation — has continued to accelerate, with job postings up 18% from the May inflection point [^4][^13]. In parallel, swyx highlighted a shift toward "plan and review": as AI "eats the middle," engineers spend more time defining work and reviewing model output, which he described as the biggest lever for shipping faster [^5]. Andreessen also endorsed the view that "we need more engineers, not less" [^9][^10].

**Why it matters:** The short-term pattern in these notes is not simple displacement. Demand may still be rising even as the job changes shape toward specification, oversight, and review.

### Local and embedded AI kept getting more practical

A Reddit post described a quantized Llama 3.3 70B running locally on a MacBook Pro M4 with 64GB RAM at about 71 tokens per second, finishing an offline client queue over an 11-hour flight with checkpointing for battery swaps [^14]. Separately, a LocalLLM commenter pointed to OpenAI’s newly released PII redaction model intended to run locally or in the browser, and Elon Musk said Grok Voice is already being used by Starlink [^15][^16].
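The checkpointing pattern described in that post is simple to sketch: persist a cursor after every completed item so a restart resumes mid-queue. The code below is an illustrative outline of that idea, not the poster's actual script; the checkpoint filename and function names are invented.

```python
import json
from pathlib import Path

CHECKPOINT = Path("queue_checkpoint.json")  # hypothetical filename

def process_queue(items, handle):
    """Work through `items`, saving an index after each one so a
    restart (e.g. after a mid-flight battery swap) resumes where it
    left off rather than redoing completed work."""
    start = 0
    if CHECKPOINT.exists():
        start = json.loads(CHECKPOINT.read_text())["next"]
    for i in range(start, len(items)):
        handle(items[i])                    # e.g. run the local model on one job
        CHECKPOINT.write_text(json.dumps({"next": i + 1}))
    CHECKPOINT.unlink(missing_ok=True)      # queue finished; clear checkpoint
```

Writing the checkpoint only after `handle` returns means a crash at worst repeats the in-flight item, never skips one.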

**Why it matters:** The common thread is deployment. More attention is shifting from raw model scores to where models can actually run: offline, in-browser, and inside operational systems.

---

### Sources

[^1]: [𝕏 post by @nickaturley](https://x.com/nickaturley/status/2050716264826593637)
[^2]: [𝕏 post by @sebkrier](https://x.com/sebkrier/status/2050369549111795880)
[^3]: [𝕏 post by @0xdoug](https://x.com/0xdoug/status/2050663406995378466)
[^4]: [𝕏 post by @Konstantine](https://x.com/Konstantine/status/2050317573649289351)
[^5]: [𝕏 post by @aiDotEngineer](https://x.com/aiDotEngineer/status/2050681484072161550)
[^6]: [𝕏 post by @gdb](https://x.com/gdb/status/2050731568742723899)
[^7]: [𝕏 post by @sama](https://x.com/sama/status/2050671161915371998)
[^8]: [𝕏 post by @rasbt](https://x.com/rasbt/status/2050469092058927295)
[^9]: [𝕏 post by @casper_hansen_](https://x.com/casper_hansen_/status/2050757487880782208)
[^10]: [𝕏 post by @pmarca](https://x.com/pmarca/status/2050760775661457660)
[^11]: [𝕏 post by @natolambert](https://x.com/natolambert/status/2050610488212627897)
[^12]: [𝕏 post by @natolambert](https://x.com/natolambert/status/2050610490737672502)
[^13]: [𝕏 post by @pmarca](https://x.com/pmarca/status/2050481817120375099)
[^14]: [r/LocalLLM post by u/DragonflyOk7139](https://www.reddit.com/r/LocalLLM/comments/1t1z0ud/)
[^15]: [r/LocalLLM comment by u/this_for_loona](https://www.reddit.com/r/LocalLLM/comments/1t1tlaj/comment/ojivacl/)
[^16]: [𝕏 post by @elonmusk](https://x.com/elonmusk/status/2050475355258151330)