
ThursdAI - The top AI news from the past week

From Weights & Biases, Join AI Evangelist Alex Volkov and a panel of experts to cover everything important that happened in the world of AI from the past week

156 episodes


    ThursdAI - May 14 - TML Interaction Models, Musk v Altman Disclosures, CW Sandboxes & /goal Takes Over

    2026/05/15 | 1h 42 mins.
    Hey everyone, Alex here 👋
    I am back live on ThursdAI after a week off, and yes, I am now a married man! Thank you for all the congrats, and also thank you to Ryan and Yam for holding down the fort last week while I tried very hard to disconnect.
    This week was a relatively chill one in AI land (no, really, for once), which actually let us go deep on some really fascinating stuff. We’ve got Thinking Machines Lab finally shipping their first real research with these wild interaction models, Meta Muse Spark showing up in actual products (and it’s surprisingly good!), the Musk v. Altman trial dropping juicy disclosures, and probably the biggest narrative shift on the show today: all of us are quitting OpenClaw. Yeah, you read that right. We’ll get into why.
    Also! and this is breaking news from this morning, CoreWeave just launched Sandboxes for your agents. I’ll cover that in This Week’s Buzz, but if you’ve been waiting for production-grade sandbox infrastructure that powers 9 out of 10 major AI labs, today’s your day.
    Oh, and we had Vic Perez from Krea on to talk about Krea 2, their first foundation image model trained completely from scratch. Let’s dig in.

    The Great OpenClaw Exodus towards Hermes 🫠
    I’m going to start with what was honestly the most emotional thread of the entire show, because three of us (me, Ryan, AND Wolfram) all independently switched away from OpenClaw this week. And we kicked off the show literally processing this together on air.
    The story is the same across all of us. OpenClaw was magical back in February when we first brought it to you. Things just worked. But after Anthropic’s pricing changes (we covered this; they made Max-tier subscription usage of Opus through OpenClaw significantly more expensive), and after months of constant Lego-construction-style breakage on every update, the magic faded. Ryan said it best on the show: he was “constantly fixing OpenClaw” instead of using it.
    So Ryan went to Codex. Wolfram and I both went to Hermes from Nous Research. And folks, things just work again. That February feeling is back, and with GPT 5.5, it’s an incredible assistant!
    Why Hermes? A few things:
    * It’s now the #1 most-used CLI agent on OpenRouter globally, passing OpenClaw and even passing Claude Code on OpenRouter usage. That’s a massive milestone for Nous Research and shows we’re not alone in this migration.
    * It has /goal (more on this in a sec), steering, and background computer use via the TryCUA integration.
    * It’s open, which means if you’ve built a system like Wolfram’s “Amy” or my “Wooolfred” or Ryan’s “R2” (yes, we know each other’s assistants’ names better than each other’s kids’ names at this point 😅), you can port your memories, profile, and soul files seamlessly.
    The migration was so smooth that Wolfram literally had Codex talk to Hermes to plan and execute the migration of his home assistant agent. Two agents collaborating to migrate themselves. We are living in 2026 and it’s easier than ever to switch. If you haven’t tried Hermes, give it a go!
    Steering is maybe the most underrated addition to Hermes. It originated as a Codex feature, but Hermes has it too: with GPT 5.5 you can send a follow-up message and the agent will see it after the next tool call, rather than only after the whole chain of thought has completed (OpenClaw’s default). This makes the conversation feel much more natural!
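    To make the difference concrete, here’s a toy sketch of a steering-style loop; the agent class, the queue, and the method names are all hypothetical stand-ins, not Hermes or Codex internals.

    ```python
    import queue

    class DummyAgent:
        """Stand-in for a real coding agent: three fake tool calls, then done."""
        def __init__(self):
            self.steps = 0
        def is_done(self, context):
            return self.steps >= 3
        def next_tool_call(self, context):
            self.steps += 1
            return f"tool_call_{self.steps}"
        def execute(self, action):
            return f"result of {action}"

    user_messages = queue.Queue()
    user_messages.put("actually, only touch the /auth directory")  # a mid-task steer

    def run_turn(agent, task):
        context = [task]
        while not agent.is_done(context):
            context.append(agent.execute(agent.next_tool_call(context)))
            # Steering: drain follow-ups after EVERY tool call,
            # instead of only once the whole chain of thought ends.
            while not user_messages.empty():
                context.append(("user", user_messages.get()))
        return context

    print(run_turn(DummyAgent(), "consolidate the auth flow"))
    ```

    The steer lands in context right after the first tool call, so the agent can course-correct mid-task instead of finishing the wrong plan first.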
    Agents buying wedding gifts using Stripe wallet!
    Real quick story: Two weeks ago we covered Stripe’s new wallet APIs that let your agents have actual budgets to spend money on the web. I told my agent (back when it was still OpenClaw) to “go buy us a wedding present, don’t tell me what it is.” It half-worked, half-broke.
    This week, a giant custom map of our travels arrived in the mail. I approved one Stripe push notification and the rest just happened. I’ve also had Hermes pay traffic tickets for me via screenshots (HOV lane ones, not like... DUI; 80% of my drive is Tesla FSD).
    So, so happy that my AI assistant got us a present of his own choosing! And it arrived in physical form. Not perfect (the date on it is our proposal date, ha), but it’s still cool!
    Codex gets remote control! (X)
    While Wolfram and I moved to Hermes, Ryan Carson moved to Codex, and during the show I wondered: how does he communicate with his R2? Well, just a few minutes after we concluded the live show, OpenAI dropped some breaking news!
    Codex is now on mobile, and it connects to any Mac (Mac-only for now) from any iOS/Android device, and you can control your Codex, your whole Mac with Computer Use, your browser with the Chrome extension, and everything else Codex can do... on the go!
    This is a huge unlock, and for many folks I assume it will nearly replace the need for something like OpenClaw/Hermes, be much more secure by default, and work flawlessly out of the box!
    The setup is super easy: after updating your ChatGPT app you have a new “Codex” window, and after updating the Codex Mac app you can pair the two, and voila, all your local Codex sessions are on the iOS app as well. This works way better than Claude remote, btw, significantly so.
    The fact that you can now add multiple Macs (plus SSH servers; they also added the ability to remote-control other servers via SSH) is a huge deal. OpenAI is quickly leapfrogging Anthropic, and many are noticing this and switching away from Claude Code.
    Big Companies & APIs
    Meta Muse Spark: The Voice AI That Actually Does Things 🎤
    Let’s start with the one I actually got to play with: Meta launched Muse Spark-powered voice conversations across the Meta AI app, WhatsApp, Instagram, Facebook, and the Ray-Ban Meta glasses (X, Announcement).
    And folks, I was honestly surprised by how good this is. I recorded a 5-minute live test and it’s not cut at all. The voice mode reacts almost instantaneously. It’s multilingual (it correctly identified Russian and Hebrew even if it can’t respond in them yet). It can search the Meta network mid-conversation — I showed it a screenshot of one of my own Instagram Reels and within half a second it found the exact reel and explained what we were discussing. Half a second.
    It also does live camera AI, where it watches what your phone sees. The only thing it failed to identify? My Meta Ray-Ban glasses. The Meta AI didn’t know what Meta Ray-Bans look like. That was the funniest moment of the whole demo.
    The team at Meta’s Superintelligence Labs spent 4.5 months building this, and the thing that really stood out to me from the announcement is this line: “Our models are scaling predictably. Muse Spark is an early data point on our trajectory, and we have larger models in development.” Translation: this is the small one. Bigger Muse models are coming.
    Meta’s superpower here, as always, is distribution. They can shove this into the daily product surface of billions of users. ChatGPT advanced voice mode (still on the GPT-4o family) has gotten genuinely worse lately — I barely use it anymore. Meanwhile Meta is shipping good real-time voice across WhatsApp and Instagram. This is the speed-of-product-integration game, and Meta is winning it.
    Thinking Machines Lab Previews full duplex Interaction Models 🤯
    This is the one Wolfram and I really geeked out on. Mira Murati’s Thinking Machines Lab finally released real research — and it’s a fundamentally different bet than what anyone else is making (X, Blog).
    They’re calling them interaction models, and TML-Interaction-Small is a 276B parameter MoE with 12B active, trained from scratch for native real-time human-AI collaboration. Note: they announced it, they didn’t release weights or an API yet — limited research preview is coming “in the next few months.”
    Here’s why this matters and what makes it different from Meta’s voice mode (which is also impressive!): the architecture is 200ms micro-turns where the model is continuously perceiving audio, video, AND text WHILE simultaneously generating output. There’s no turn boundary detection, no VAD harness — the model itself handles all of that natively. It’s full duplex baked into the weights.
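    For intuition, here’s a toy sketch of what a micro-turn loop could look like. The 200ms cadence comes from TML’s description; everything else (the frame sources, the model call, the pushup-counting stand-in) is made up for illustration, since no API is out yet.

    ```python
    import asyncio
    import time

    TICK = 0.2  # the 200ms micro-turn cadence from TML's description

    async def microturn_loop(get_audio, get_video, model, speak, duration=1.0):
        # Toy full-duplex loop: every tick, perceive the LATEST inputs AND
        # decide whether to emit output. No turn boundaries, no VAD harness:
        # staying silent is just another model output (None here).
        start = time.monotonic()
        while time.monotonic() - start < duration:
            frame = {"audio": get_audio(), "video": get_video()}
            out = model(frame)
            if out is not None:
                speak(out)
            await asyncio.sleep(TICK)

    # Stand-in model: proactively counts pushups it "sees", stays silent otherwise.
    count = {"n": 0}
    def fake_model(frame):
        if frame["video"] == "pushup":
            count["n"] += 1
            return f"pushup {count['n']}"
        return None

    asyncio.run(microturn_loop(lambda: "breathing", lambda: "pushup", fake_model, print))
    ```

    The point of baking this into the weights, rather than running it as an external loop like the sketch above, is that the model itself decides when silence is the right output.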
    The demos are fire. The model can:
    * Speak while listening (live translation in real-time)
    * Watch you do pushups and proactively count them out loud as you go
    * Wait silently until someone enters the frame, then say “friend”
    * Generate a chart while continuing to explain a concept to you
    The benchmarks: 77.8 on FD-bench v1.5 vs GPT Realtime 2.0 at 46.8, and 0.40s turn-taking latency vs over a second for everyone else. Nisten was unimpressed (he pointed out 1.2 seconds for a 12B-active model on a B300 rack is not exactly snappy), and that’s a fair take — but the capabilities here, particularly visual proactivity and time-awareness, are genuinely novel.
    The philosophical split is really interesting. While every other lab is racing toward full autonomy, Mira is saying interactivity should scale with intelligence. That’s the bet. And given the all-star team she’s pulled together (people from ChatGPT, Character.ai, Mistral, PyTorch, OpenAI Gym, Fairseq, SAM)... I’m here for it.
    What I really hope happens: someone leaks the weights. A 276B MoE with 12B active is exactly the kind of model we need to be able to quantize to run on something like the Richie Mini for a fully offline, always-present home assistant. Wolfram, I know you’re thinking the same thing 👀
    Musk v. Altman: The Trial Drops Some Wild Disclosures and Testimony
    Okay, this one is half drama, half disclosure goldmine. The trial is happening live as we record; closing statements are TODAY (I transcribed both of them here and here). There’s no video allowed because the courtroom was so packed with Elon fanboys, so they’re livestreaming audio only on YouTube. I set up my Hermes agent to listen to the audio stream and send me 2-minute summaries. That alone was worth the show. Apparently Elon was not in court during closing arguments (he’s in China).
    The big-picture story: Musk is suing OpenAI and Microsoft (specifically) claiming OpenAI abandoned its nonprofit bargain. OpenAI’s defense is essentially “Musk wanted 90% equity and full control, walked away when he didn’t get it, and is now suing over a success he predicted had a 0% chance.”
    Here are the highlights from sworn testimony from Sam Altman, Satya Nadella, and Ilya Sutskever that I think are the most consequential:
    * Musk wanted 90% of OpenAI’s equity to start. Per Altman under oath: “An early number that Mr. Musk threw out was that he should have 90% of the equity. It then softened, but it always was a majority.”
    * December 2018 Musk email to the team: “My probability assessment of OpenAI being relevant to DeepMind/Google without a dramatic change in execution and resources is 0%, not 1%. I wish it were otherwise.” Yeah. The guy suing them now once put in writing they had zero shot.
    * September 2017 ultimatum from Musk: “Either go do something on your own or continue with OpenAI as a nonprofit.” They did. He’s now suing them for it.
    * The Microsoft economics: Satya Nadella confirmed under oath that the $13B target redemption amount compounds to roughly $180B in four years, with 20% annual increases starting in 2025.
    * The AGI clause got rewritten. Originally, if AGI was achieved, the Microsoft deal would dissolve. The renegotiated version (per Altman) is that Microsoft no longer gets research IP at AGI but will continue to get product IP through end of 2032.
    * Sutskever’s pre-firing memo, confirmed under oath: Sam Altman “exhibits a consistent pattern of lying, undermining his execs, and pitting his execs against each other.” When asked if he still believed it: “I thought so at the time and had been thinking about Altman issues for at least a year.”
    * Satya wanted answers and never got them. Under oath, Nadella said he asked the board explicitly why Sam was fired and “they never gave me a specific reason... none of that was coming through.” He called the firing process “amateur city as far as I’m concerned.”
    * Microsoft is now the SMALLEST mega-investor in OpenAI. SoftBank $30B, Nvidia $30B (Altman: “It was either 20 or 30. I think it was 30 also.”), Amazon “larger than Microsoft.” Total private capital raised: ~$175B.
    * The Helion conflict of interest. Altman owns ~22.8M shares of Helion ($1.65B), roughly a third of the company. Helion has a 2028 power deal with Microsoft and a scale deployment agreement with OpenAI. He recused from the OpenAI board vote on it — and as he said under oath, “But I was in the room, yes.”
    And then there’s Ilya’s pearl that genuinely made me pause. When asked about the difference in AI capability between 2018 (when they started) and now: “It’s like the difference between an ant and a cat.”
    Yam asked the obvious question: what does Elon actually get if he wins? Honestly, I had no idea until I heard the arguments with the judge, and apparently it’s a LOT! Musk is asking for $135B in monetary damages (which he claims he won’t take for himself; rather, it would go to OpenAI’s non-profit arm), plus non-monetary relief that would force the removal of Sam Altman and Greg Brockman from OpenAI and revert the restructuring to restore OpenAI’s original non-profit mission.
    This is ... quite an ask, and apparently the judge will decide on this, not the jury; the jury will only decide whether there was a breach of charitable trust or unjust enrichment. This was one of the biggest bombshell trials, and we’ll keep you up to date on what happens.
    Open Source AI
    The TanStack Supply Chain Attack
    Okay, this one’s serious. Ryan posted his most viral tweet ever about this — the TanStack supply chain attack, aka the “mini Shai Hulud” worm. If you ran an npm update during the exposure window, you may have gotten absolutely destroyed (X)
    What makes this one particularly nasty:
    * It specifically targets AI developer tooling. Hooks into Claude Code’s settings.json and VS Code JSON to re-execute on every tool event.
    * npm uninstall doesn’t fix it. The malware replicates itself.
    * If you revoke the GitHub token it uses, it nukes your home directory. A worker process watches the token. If revoked, it scorches the earth.
    The fixes (do them today, seriously):
    * Set a 24-hour minimum age rule on package installs in both npm and pip. Most malware is identified within 24 hours; this is your free moat (a minimal sketch of this check follows the list).
    * Generate per-agent API keys. Never reuse keys across agents. If one gets compromised, you can revoke that one specifically.
    * Run development in sandboxes (more on this in a sec — CoreWeave Sandboxes just launched 👀).
    * Have rolling rsync backups outside of Git. Nisten’s advice: if you get hit, you can nuke everything and restore from a backup that doesn’t depend on tokens.
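    As far as I know there isn’t one rolling-age switch shared by npm and pip (some package managers are starting to ship built-in settings for this), so here’s a minimal pre-install check built on the public npm and PyPI registry metadata. Both endpoints are real; the 24-hour threshold and package names are just examples.

    ```python
    import json
    import urllib.request
    from datetime import datetime, timezone

    MIN_AGE_HOURS = 24

    def npm_latest_age_hours(pkg):
        # The npm registry exposes publish timestamps under the "time" key.
        with urllib.request.urlopen(f"https://registry.npmjs.org/{pkg}") as r:
            data = json.load(r)
        latest = data["dist-tags"]["latest"]
        published = datetime.fromisoformat(data["time"][latest].replace("Z", "+00:00"))
        return (datetime.now(timezone.utc) - published).total_seconds() / 3600

    def pypi_latest_age_hours(pkg):
        # PyPI's JSON API lists upload times per file of the latest release.
        with urllib.request.urlopen(f"https://pypi.org/pypi/{pkg}/json") as r:
            data = json.load(r)
        earliest = min(f["upload_time_iso_8601"] for f in data["urls"])
        published = datetime.fromisoformat(earliest.replace("Z", "+00:00"))
        return (datetime.now(timezone.utc) - published).total_seconds() / 3600

    for name, age in [("left-pad", npm_latest_age_hours("left-pad")),
                      ("requests", pypi_latest_age_hours("requests"))]:
        verdict = "OK to install" if age >= MIN_AGE_HOURS else "TOO NEW - wait"
        print(f"{name}: latest release is {age:.1f}h old -> {verdict}")
    ```

    Run something like this before any update and you get the 24-hour moat without trusting the package manager to enforce it.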
    I’ve asked Codex to review how to set these minimum-age rules across your system, and published the result here; please review, then ask your agent to implement them on your machines!
    Nisten posted a scanner for this attack. I sent the link to my Hermes agent and asked it to run the scan, and within minutes I had confirmation I wasn’t exposed. This is exactly the kind of thing where having a trusted agent matters. (Wolfram did the same thing with the link Ryan posted: he gave it to his agent and let it audit his entire system.)
    We’re going to go through a turbulent period as offensive AI capabilities outpace defensive ones, but I’m optimistic. Just like HTTPS came after HTTP wasn’t secure enough, we’ll figure it out. Just stay vigilant!
    Tools & Agentic Engineering
    /goal: The New Ralph Loop, Productized across Codex, Claude Code and Hermes! (X)
    If you’ve been listening since January, you remember our Ralph Loop episode — one of the biggest episodes we ever did. Now, every major coding harness has implemented it as a built-in command called /goal.
    The pattern: you give the agent a measurable success condition like “stop when auth tests pass” or “stop at 90% coverage” or “fix every failing test until npm test exits 0 without modifying any file outside the /auth directory” — and the agent loops autonomously until that condition is met. A small validation model runs inside the loop to check whether goal conditions are met at each step.
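    Under the hood it’s just the Ralph loop with a checker in the middle. Here’s my own minimal sketch of the idea, not any harness’s actual implementation: the agent step is a hypothetical stand-in, and the success condition is a shell command that must exit 0.

    ```python
    import subprocess

    def goal_loop(agent_step, success_cmd, max_iters=100):
        # Ralph-loop skeleton: the agent takes one step (an edit/commit),
        # then we test the measurable success condition -- here, a shell
        # command that must exit 0. Real harnesses layer a small
        # validation model on top of (or instead of) the shell check.
        for i in range(1, max_iters + 1):
            agent_step(i)  # hypothetical: one autonomous agent iteration
            if subprocess.run(success_cmd, shell=True).returncode == 0:
                return f"goal met after {i} iteration(s)"
        return "stopped: iteration budget exhausted"

    # '/goal fix every failing test until npm test exits 0' maps roughly to:
    #   goal_loop(my_agent_step, "npm test")
    print(goal_loop(lambda i: None, "exit 0"))  # trivially succeeds
    ```

    The iteration budget matters: without it, an unreachable goal means an agent burning tokens forever.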
    Codex shipped it first. Claude Code copied it (rushed, per multiple developers). Hermes has it. And the early head-to-head comparisons are not great for Anthropic — one developer ran Codex /goal overnight and got nearly 100 commits, while Claude Code reportedly struggled on the same tasks. Multiple folks switched back to GPT-5.5.
    Yam’s been running /goal 24/7 for an entire week. Building things like a custom terminal from a long PRD. The level of “fear of missing agent time” in the SF AI scene right now is genuinely a meme — people are walking around in clamshell mode with laptops open in their bags because they don’t want their agents to stop.
    This is the philosophical opposite of one-shotting. It’s for the kinds of tasks where the model is guaranteed to run out of context — architecture cleanups, auth flow consolidation, test suite hardening, TypeScript strictness migrations. Tasks that would have required you sitting there for hours hitting “continue.”
    Ryan’s right that this is going to change businesses forever. You can wrap /goal around measurable business outcomes — coverage targets, latency improvements, dead code elimination — and just unleash an agent against them.
    This Week’s Buzz: CoreWeave Sandboxes Goes Live 📦
    Breaking news from this morning! CoreWeave (the parent of Weights & Biases) just launched Sandboxes in preview, and it’s directly relevant to literally every conversation we just had about supply chain security and agents that need isolated execution environments.
    Here’s what you get: sandboxes via the W&B SDK. Spin up isolated CPU environments where your agents can execute code, clone repos, install dependencies — all the things you do NOT want happening on your main machine after the TanStack situation. Wolfram immediately pointed out the obvious use case: agentic evaluations need fresh, consistent environments per test, then teardown. Sandboxes solve exactly that.
    What makes this notable: the same infrastructure powers 9 out of 10 major AI labs (Meta, Anthropic, OpenAI, etc) for training their models. CoreWeave’s sandbox product runs on that same infra. And historically CoreWeave hasn’t catered to the developer market — they sell GPUs to enterprises. With CoreWeave Inference and now CoreWeave Sandboxes available via W&B, individual developers can now spin up the same infrastructure the foundation labs use.
    Pricing is generous in preview. Give it a try, give us feedback, and we’ll do a deep dive next week with the team that built it.
    AI Art: Krea 2 — A Foundation Model Built From Scratch 🎨
    We were really lucky to have Vic Perez, co-founder and CEO of Krea, on the show to talk about Krea 2 — their first foundation image model trained completely from scratch (X, Blog).
    I have a lot of love for Krea — they let me mess around on their H100 cluster way back when I was just getting into image generation, before ThursdAI even existed. Vic was super generous with that and I’ll always be grateful.
    The Krea 2 philosophy is what I find genuinely interesting. Vic used an amazing analogy on the show: using existing image models is like riding a horse. You can steer it down the path, you can speed it up and slow it down, but if you try to take it off the path — into “grainy,” “artistic,” “esoteric,” genuinely weird latent space — there are big walls and the horse won’t go there. That’s the over-post-training problem. Models are too safe, too constrained, too opinionated. They’ve optimized away the strange and beautiful edges of the latent space that early Stable Diffusion users loved.
    Krea 2 is built to be raw, flexible, unopinionated, and unconstrained. If your prompt is vague, the model brings you new ideas rather than four variations of the same thing. The opposite of what most models do.
    Other features:
    * Style transfer with up to 4 simultaneous reference images — extracts palette, texture, composition
    * Moodboards — upload a bunch of reference images and the system analyzes concepts and themes across them, not just style
    * ~15 second generation times
    * Available now for Max and Business tier users, API confirmed coming
    They partnered with Black Forest Labs on their earlier Krea1 model, but Vic was clear about why they had to go build their own: the open-source ecosystem isn’t tunable enough to build the creative tools they want to build. So nearly half the company spent 6-7 months on Krea 2. The first model is intentionally conservative; the next one is going to push further into the weird.
    Big respect for any team training a foundation model from scratch in 2026!
    Wrap Up
    That’s a wrap on what was, on paper, a “chill week” but turned into a 2.5 hour show because we kept finding new threads to pull on. The migration off OpenClaw, the interaction models bet from TML, the Musk v. Altman disclosures, CoreWeave Sandboxes finally going live — there’s a lot moving here.
    Next week I’m heading to Google I/O. Expect a lot of news, because every time Google I/O is about to happen, OpenAI tries to cut them off, and xAI typically jumps in last. The last two I/Os have been wild. I’ll be reporting live from the ground.
    Until then — install the 24-hour package rule, generate per-agent API keys, give your agents a sandbox to play in, and maybe go try Hermes if you’ve been on OpenClaw and feeling the pain. Or Codex. Anything, really, where things just work again.
    Thanks for hanging with us. It’s so good to be back. 🫡
    TL;DR - May 14, 2026
    * Hosts and Guests
    * Alex Volkov - AI Evangelist & Weights & Biases (@altryne)
    * Co-Hosts - @WolframRvnwlf, @yampeleg, @nisten, @ldjconfirmed, @ryancarson
    * Guest: Victor Perez @viccpoes - Co-founder & CEO, Krea
    * Big Co LLMs + APIs
    * Meta launches Muse Spark voice conversations across Meta AI app, WhatsApp, Instagram, FB, and Ray-Ban Meta glasses with real-time image gen, live camera AI, and instant Reels/maps integration (X, Announcement)
    * Mira Murati’s Thinking Machines Lab drops Interaction Models: 276B MoE (12B active) trained from scratch for native real-time multimodal collaboration; 77.8 on FD-bench v1.5, 0.40s turn-taking latency, full-duplex audio/video/text (X, Blog)
    * Musk v. Altman trial highlights: Musk wanted 90% equity, predicted “0%” success for OpenAI in 2018, Microsoft is now smallest mega-investor (SoftBank/Nvidia each ~$30B), Sutskever confirms “consistent pattern of lying” memo under oath
    * Anthropic adds separate Claude Agent SDK monthly credits to Pro/Max/Team/Enterprise starting June 15, 2026
    * OpenAI launches Daybreak, a frontier AI cybersecurity platform pairing GPT-5.5 + Codex + partners like Cloudflare (X)
    * Open Source AI
    * Fastino Labs GLiGuard: 300M-parameter guardrail model matching SOTA at 23-90x smaller size, 16x higher throughput, Apache 2.0 (X, GitHub)
    * Meta Sapiens2: Family of 6 ViT models (0.1B-5B) trained on 1B human images, SOTA on pose, segmentation, normals, and pointmaps (X, HF)
    * TanStack supply chain attack (mini Shai Hulud worm) — targets AI dev tooling, doesn’t uninstall, nukes home dir if token revoked. Install 24-hour package rule immediately (X)
    * Nous Research releases TST (Token Superposition Training): 2-3x wall-clock speedup at matched FLOPs without architecture changes (X)
    * Tools & Agentic Engineering
    * /goal command now in Codex, Claude Code, and Hermes — productized Ralph loop. Set measurable success condition, agent iterates until done. Codex implementation winning early comparisons over Claude Code (X, Docs)
    * Hermes from Nous Research passes OpenClaw as #1 CLI agent on OpenRouter; adds background computer use via TryCUA (X)
    * Artificial Analysis Coding Agent Index: benchmarks model + harness combos. Opus 4.7 in Cursor CLI leads at 61, costs vary 30x across combos, GLM-5.1 tops open-weight at 53 (X)
    * This Week’s Buzz
    * CoreWeave Sandboxes launches in preview via W&B SDK — same infra that powers 9/10 major foundation labs now available to developers for agent isolation, evals, and RL rollouts (Docs)
    * Vision & Video
    * Perceptron Mk1 — frontier video + embodied reasoning model at 1/10th the price; 88.5 on VSI-Bench, 72.4 on RefSpatialBench (vs GPT-5m at 9.0). Live on OpenRouter (X, Site)
    * AI Art & Diffusion
    * Krea 2 — Krea’s first foundation image model from scratch, focused on aesthetic diversity, style control with up to 4 references, and moodboards. ~15s generation (X, Blog)



    📅 ThursdAI - May 7 - Interviews with Sunil Pai, Sally Ann Omalley from AI Engineer Europe

    2026/05/08 | 53 mins.
    Hey y’all, Alex here (with a scheduled post)
    I’m taking this week off to get married and celebrate life with family, and touch some grass, but wanted to share the awesome chats I had with some great folks at AI Engineer Europe last week.
    BTW - Yam and Ryan took over the live show today; if you didn’t happen to catch it, please check out the livestream on our YouTube channel!
    Ok, now to the actual content. The best thing about the AI Engineer conferences for me is the people I meet. I often have a chance to bring them to the live show (in fact, the live show we recorded there had the most guests yet on an episode! 4 guests including Swyx, Omar Sanseviero, VB from OpenAI and Peter Gostev)
    But oftentimes I also have an offline chat. I find these conversations to be less about the week’s news and more about the state of AI Engineering, and about the guests themselves. Not quite Lex Fridman pod level, but a different vibe from our live shows.
    Sunil Pai - Cloudflare (@threepointone)
    The first conversation in today’s pod is with Sunil Pai, Principal Engineer at Cloudflare. Long-time followers of ThursdAI know that I love Cloudflare; they gave me my first big break when I was building Targum (which still runs on Workers), so I had a great time chatting with Sunil!
    This guy has had several lives. React.js core team at Meta (he self-deprecates — "I'm the one nobody talks about, there's a testing API I shipped that pisses people off"). Then did developer tooling and the CLI at Cloudflare the first time. Left to found PartyKit — open-source deployment platform for real-time multiplayer apps and AI agents, built on Cloudflare Durable Objects. Backed by Sequoia. Acquired by Cloudflare in 2024, and he came back as a Principal Systems Engineer (per his bio: "Worked at Cloudflare once, left and created PartyKit, came back wiser"). Also plays guitar (Les Pauls — it's all over his blog). Co-hosts a live show called Dry Run on Cloudflare TV with Craig Dennis.
    Our conversation was a very fun one, ranging from Cloudflare agentic offerings, to how engineers should think about writing/reading code in 2026.
    I had a great time chatting with Sunil and I hope you enjoy getting to know him!
    Sally Ann O'Malley - Red Hat
    Then I had the pleasure of chatting with Sally, who’s a Principal Engineer at Red Hat and a contributor to OpenClaw.
    Sally has one of the more unusual paths in the speaker lineup. Started as a schoolteacher, did a stint at Trader Joe's, then moved to Westford, MA, discovered Red Hat's HQ across the street, and went back to school for a second bachelor's in software engineering at UMass Lowell. Joined Red Hat in 2015, has been there a decade. Worked across OpenShift teams, integrating Kubernetes and Podman into the platform. Recent projects span Image Based Operating Systems, Podman, OpenTelemetry, and Sigstore. Also an instructor at Boston University's Faculty of Computing and Data Sciences and an organizer for DevConf.US. Won the 2025 Paul Cormier Trailblazer Award at Red Hat. Currently a founding contributor on the llm-d project — distributed, scalable, high-performance AI inferencing built on K8s. Heavily involved in Red Hat's InstructLab collaboration with IBM (the small-model distillation system using IBM Granite + Llama).
    Sally and I had a great conversation; two high-energy personalities met!
    We geeked out about our OpenClaw agents, securing your Clankers, how it is to maintain OpenClaw, and everything in between!
    She was so stressed about the recording, but dare I say, this was one of the more natural guests I had on the show!
    I hope you enjoyed this format; please let me know in the comments, and I’ll see you next week!
    — Alex




    📅 ThursdAI - Apr 30 - DeepSeek V4 (1.6T MoE), Cursor SDK Wins WolfBench, Mayo's REDMOD Saves Lives, Stripe Gives Agents a Wallet & more

    2026/05/01 | 1h 36 mins.
    Hey everyone, Alex here 👋
    Tomorrow is May. May! I genuinely cannot believe we’re four months into 2026 already, and the AI news cycle is showing zero signs of slowing down. This week’s show was a wild one! We opened with what is genuinely one of the most important AI stories I’ve ever covered (Mayo Clinic AI detecting pancreatic cancer THREE YEARS before human radiologists), we covered the return of the Chinese whale with DeepSeek V4, OpenAI got caught in their own system prompt begging GPT-5.5 to please stop talking about goblins, and I literally gave my coding agent a credit card and asked it to buy my fiancée a wedding gift with the new Stripe Link skill and CLI!
    Oh yeah, I’m getting married next Tuesday! 💍 So next week’s show will be a little different. I’ll be back the week after to catch you up on whatever drops in my absence (almost certainly something major, knowing this industry).
    Lots to get through, so let’s dive in. (Also, at the end there’s a full-month recap of every major launch; don’t miss it.)

    Mayo Clinic’s REDMOD: AI Detects Pancreatic Cancer 3 Years Early 🔥 (X, Blog, Announcement)
    I know we usually cover models, parameter sizes, MoEs and big companies. But this is important. This is the use case that justifies the entire AI revolution, the GPU burns, the buildouts. I want humans to WIN, and cancer to be fixed!
    Mayo Clinic just published a study in Gut (BMJ) validating an AI model called REDMOD that detects pancreatic cancer on routine CT scans up to three years before clinical diagnosis. The numbers are jaw-dropping: They show 73% sensitivity for catching prediagnostic cancers, compared to 39% for experienced human radiologists (while looking at the same exact CT scans).
    And maybe the most important bit: on scans taken more than 2 years before diagnosis, the AI catches nearly 3x as many cases as specialists.
    For context: pancreatic cancer has less than 15% five-year survival specifically because 85% of patients are diagnosed after the disease has already spread. This is the cancer that took Steve Jobs. Imagine if Jobs had access to this AI three years before his diagnosis. That’s the impact we’re talking about.
    As Dr. Ajit Goenka from Mayo Clinic put it, the greatest barrier to saving lives from pancreatic cancer has been the inability to see the disease when it’s still curable. This AI can now identify the signature of cancer from a normal-appearing pancreas.
    Even better: it runs on CT scans people are already getting for other reasons. No extra screening protocol, no new imaging required. Just smarter analysis of existing data. The model also showed remarkably stable performance across institutions, imaging systems, and protocols, with 90-92% test-retest concordance over serial scans.
    Mayo Clinic is now moving this into prospective clinical testing through a study called AI-PACED (Artificial Intelligence for Pancreatic Cancer Early Detection).
    When we say “let’s f*****g go,” that’s what we mean. Yeah, getting more intelligence is cool, but I want a world without disease! Let’s f*****g go, Mayo Clinic!
    Agentic Commerce - Giving OpenClaw my credit card - safely!
    Stripe Link Wallet and Infrastructure CLI (X, Announcement, Blog, Announcement)
    Ok, give an LLM your credit card, what can go wrong... right? Well, it’s clear that this, increasingly, is the future of commerce. Agents will be shopping for us, and we need solutions here. And this week, Stripe Sessions (Stripe’s annual product conference) delivered.
    Link Wallet is a new ... API? CLI? Skill? Definitely a skill for your agents: it connects to your Stripe Link (the thing that stores your credit cards safely), and once you give your agent a budget, it can go make purchases on your behalf. The trick here is that every purchase sends you a notification to approve, and the agent never sees your actual credit card number! That, I think, is the biggest win here.
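    Conceptually, it looks something like the sketch below. Every name here is illustrative (this is NOT Stripe’s actual API); it just shows the three ingredients: a scoped budget, a mandatory human approval, and an opaque token in place of a card number.

    ```python
    from dataclasses import dataclass

    @dataclass
    class WalletGrant:
        # Toy model of the Link Wallet idea. All names are made up,
        # NOT Stripe's API: the agent holds a scoped budget, every
        # purchase requires human approval, and the raw card number
        # never enters the agent's context.
        budget_usd: float

        def purchase(self, item: str, amount: float, ask_human) -> str:
            if amount > self.budget_usd:
                return f"declined: ${amount} exceeds remaining budget ${self.budget_usd}"
            if not ask_human(f"Approve ${amount} for '{item}'?"):
                return "declined: human rejected the approval notification"
            self.budget_usd -= amount
            # Real card details would be filled in server-side by the
            # payment rail; the agent only sees an opaque confirmation.
            return f"ok: token=tok_demo_123, remaining budget ${self.budget_usd}"

    wallet = WalletGrant(budget_usd=200.0)
    print(wallet.purchase("custom travel map", 89.0, ask_human=lambda msg: True))
    ```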
    To test it out, I first showed Wolfred the install instructions, which are literally this:
    Read link.com/skill.md and get me set up with Link
    And then I asked Wolfred, my OpenClaw assistant, to buy me a present of its choice for my upcoming wedding, telling it that I don’t want to know what the present is, but that I can approve the spend!
    OpenClaw installed this and sent me a link to connect my Link.com account. I also downloaded the Link app to receive notifications (and had to enable them by hand; a bit annoying to discover, but they said they will fix the onboarding) and... voila, my agent can now go spend my money, and I get these approval notifications:
    The kicker? The present Wolfred sent us is due to arrive like 2 months after the wedding 😂 But hey, it’s still something! My agent went and chose a wedding gift within budget, asked for my approval to purchase, filled out the details (asking me for a few of them), and voila: the first agentic purchase that did not require exposing my credit card!
    Stripe announced a whole bunch of other Agentic Commerce Suite features, like Shared Payment Tokens (scoped to the seller and protected by Radar), MPP (machine payment protocol), and streaming payments using stablecoins, which are pretty slick, plus a bunch of other interesting things. This is where the world is moving, and Stripe is innovating hard here; definitely worth keeping an eye on what they ship next.
    Speaking of agents and Stripe, they also opened up the waitlist for projects.dev, which is a way for agents to provision accounts fully on their own, get API keys, and set everything up from scratch. I think it’s a wonderful addition to the agentic tools and the agentic internet! Your agent just runs something like stripe projects add cloudflare/workers and boom, you have a Workers deployment with credentials synced, no dashboard clicking or API key creation!
    Big Companies & APIs
    GPT-5.5 Goblin Mode: The Funniest Bug Report in AI History (X, Blog)
    Someone on X noticed that the Codex system message for GPT 5.5, which launched last week, has this interesting addition: “Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user’s query”, and it appears twice!
    This created a bunch of memes, questions and wondering about why OpenAI would care so much about goblins. And they finally posted a long writeup on why:
    The TL;DR: GPT 5.5 absolutely LOVES talking about goblins, trolls and other nerdy creatures. This is a result of them favoring the “nerdy” personality archetype and reinforcing this reward via RL. OpenAI admitted that “Unfortunately, 5.5 started training before we found the root cause of goblins,” and so now we get a 5.5 that LOVES to talk about goblins and can’t stop talking about them (unless asked to stop by a system prompt).
    OpenAI also posted the exact instructions of how to “unleash“ the goblin mode on the blog, which I find hilarious, a company that leans into the meme is a company to be celebrated 👏
    GPT 5.5 is as good as Claude Mythos on Cybersecurity
    According to the AI Security Institute, GPT 5.5 (not the GPT 5.5 - Cyber version that was announced, but the one you have access to) is as good as Claude Mythos at vulnerability finding. We previously reported that Anthropic deemed Claude Mythos “too dangerous to release publicly,” and it turns out that was either a marketing “myth” or Anthropic’s inability to serve this huge model the way they serve Opus.

    OpenAI Ends Microsoft Azure Exclusivity
    This piece of news sent quite a shock through the industry: somehow, Sam Altman and OpenAI managed to renegotiate the very strict deal with Microsoft, and OpenAI models are now available on AWS as well as Microsoft Azure! Apparently the AGI clause is now gone as well!
    For many startups locked into AWS and Bedrock, this is great news: they are now able to use GPT 5.5 and other OpenAI models directly, applying their existing credits.
    Other Big Company News
    xAI released Grok 4.3 in a quiet release in their API docs: no blog post, not even an X announcement. The only way I knew about it was that Artificial Analysis, Arena and Vals AI all posted that it jumped in scores. With the same price as the previous Grok, but only 1M tokens of context, it seems significantly better than its predecessor (X)
    Gemini can now generate and export Docs, Sheets, Slides, PDFs directly from chat — available globally for free. Google literally put Microsoft Word and Excel icons in the announcement. They’re giving away what Microsoft charges for with Copilot to 750 million users. (X, Blog)
    Mistral Medium 3.5 dropped as a 128B dense model with 256K context, 77.6% on SWE-Bench Verified, and configurable reasoning effort. Their Vibe coding agent now supports remote parallel agents and session teleportation. $1.5/$7.5 per million tokens. (X, HF, Blog)
    Baidu’s ERNIE 5.1 Preview landed at #13 on Arena’s Text leaderboard, making it #1 among all Chinese labs. Speculated to be an 800B/36B active MoE using only 6% of comparable pretraining compute. (X, Announcement)
    Open Source AI
    The Whale returns - DeepSeek drops V4 with insane attention innovations (X, Arxiv, HF, HF)
    Folks, DeepSeek just dropped V4! Two models: V4-Pro at a whopping 1.6 trillion params with 49 billion active, and V4-Flash at 284B total with only 13 billion active. Both support 1 million token context natively! V4-Pro-Max gets 93.5% on LiveCodeBench, beating every other model including Gemini-3.1-Pro. Codeforces rating of 3206, that’s a new record, beating GPT-5.4’s 3168. SWE-Bench Verified at 80.6%, that’s basically tied with Opus-4.6 at 80.8%.
    But here’s the thing: this model doesn’t overwhelm on evals performance; it’s on par with other open source models, and at 1.6T params nobody is running this on home GPUs!
    The bigger story here is the efficiency at long context! At 1 million tokens of context, V4-Pro uses only 27% of the FLOPs and 10% of the KV cache compared to DeepSeek V3.2. The KV cache at 1M is about 8.7x smaller than V3.2’s.
    The pricing is also ridiculous (well, it was always cheap, but with these perf innovations DeepSeek can afford to undercut!). API pricing is $0.145/$3.48 per million tokens for Pro (7x cheaper output than Opus 4.7) and $0.028/$0.28 for Flash (30-100x cheaper than GPT-5.5).
    This release didn’t break through the AI bubble quite like DeepSeek R1 (and we covered this on the show), but like a good whale, what you see on the surface is tiny compared to what lies beneath. This is a technological and innovation marvel: reducing compute and memory requirements by 90% compared to standard attention? Crazy.
    SenseNova U1: Unified Multimodal Without an Encoder - an oss infographic creator (X, X, HF, Blog, Try it)
    SenseTime open-sourced something genuinely architecturally wild this week. SenseNova U1 is a unified multimodal model — 8B parameters with a 3B active MoE variant, both Apache 2.0 — that does both understanding and generation end-to-end with no visual encoder and no VAE.
    They call the architecture NEO-Unify, and instead of the traditional pipeline (image → visual encoder → LLM → VAE → output), it’s just a single model handling pixels and words natively. The numbers are absurd for the size: 57.5% on Spatial Understanding (Qwen-VL: 35%) and a very high 91% on GenEval-Info for infographics.
    Nisten and I tried it live on the show and it generated coherent infographics with crisp text, something most 8B models struggle with. Chinese users are reporting it rivals Qwen-Image 2.0 Pro for design drafts at much higher inference speeds. For us, though, another infographic came out with a bunch of Chinese text; FWIW, we didn’t prompt for English only. The 3B-active MoE variant runs comfortably on consumer GPUs. Apache 2.0, fully open, in collaboration with MMLab at NTU.
    This weeks Buzz - W&B update!
    The biggest update this week: we have gone viral with WolfBench.ai!
    Wolfram has tested the Cursor harness (as well as many other harnesses) with GPT 5.5 and saw the best result we’ve measured so far! We still have a lot of testing to do, including adding the Codex CLI itself and Devin, and many folks are asking for OpenCode and FactoryAI droids!
    Also, we’ve launched the IBM Granite 4.1 models on W&B for a very cheap $0.05/$0.10 per 1M tokens. This model series is instruct-only, without reasoning, and Apache 2.0 licensed. Get it here.
    Are you concerned about your Cognitive Security? Guest speaker Max Spero from Pangram Labs says you should be
    We had Max Spero from Pangram Labs on the show to talk about their Chrome extension that auto-flags AI-generated content as you scroll your feed. I’ve been using it for a while and many of my suspicions about who’s a slop merchant have been validated.
    According to Max, Pangram has a 1-in-10,000 false positive rate. If Pangram says something is AI, you can be very confident it was AI-generated. They don’t catch everything: short text, heavily humanized content, or very new models might slip through. But when they flag something, they claim 98.99% accuracy that it was written with AI. Max also addressed the fact that previous “AI detection” tools like GPTZero were often mocked for false positives (for example, flagging the Declaration of Independence as AI-written), and says that this is no longer the case!
    Taylor Lorenz used the Pangram API to scan top Substack bestsellers and found that some popular “writers” are nearly fully machine-generated. Technology Substacks have the highest AI content rate, with more than 1 in 4 top posts showing substantial AI content. And that’s only what Pangram catches.
    Max framed it as “cognitive security” - knowing what your inputs are. LLMs are already superhuman at persuasion, and if you’re getting one-shotted by AI-generated content that you think is human, that matters. They’re working on multimodal detection next (images, video), which will be huge given how hard GPT-Image-2 outputs are to spot.
    I find their Chrome extension very useful: I scroll my feeds, see a bunch of “AI” labels, and know to skip that content if I want to. You can get a 2-week trial of the Chrome extension via the Pangram X account.
    April 2026 - a full month of AI model releases
    April was an insane month. Here’s the major release calendar for April 2026:
    * Mar 31: Claude Code leak
    * Apr 1: Alibaba Wan 2.7-Image · Fish Audio STT
    * Apr 2: Google Gemma 4 | Alibaba Qwen 3.6-Plus
    * Apr 4: OpenAI GPT-Image-2 (Arena leak)
    * Apr 6: MemPalace
    * Apr 7: Anthropic Claude Mythos Preview · Z.ai GLM-5.1
    * Apr 8: Meta Muse Spark
    * Apr 9: Anthropic Managed Agents
    * Apr 10: AI Engineer London
    * Apr 11: MiniMax M2.7 (open weights)
    * Apr 14: Baidu ERNIE-Image 8B
    * Apr 15: Google Gemini 3.1 Flash TTS
    * Apr 16: Anthropic Claude Opus 4.7 | OpenAI Codex (computer-use)
    * Apr 17: Anthropic Claude Design
    * Apr 20: Moonshot Kimi K2.6 · OpenAI Codex Chronicle
    * Apr 21: OpenAI ChatGPT Images 2.0
    * Apr 22: OpenAI Privacy Filter (1.5B)
    * Apr 23: OpenAI GPT-5.5 + GPT-5.5 Pro
    * Apr 24: DeepSeek V4 Pro & Flash
    * Apr 27: Cognition Devin for Terminal
    * Apr 29: Cursor SDK | Baidu ERNIE 5.1 Preview | Stripe Link Wallet (Agents) · IBM Granite 4.1 8B
    * Apr 30: xAI Grok 4.3
    That’s all for today folks, we’ve talked about a few other things, and the TL;DR list of releases keeps growing and growing from week to week.
    As I said, I’m getting married next week, so I will be out, and won’t be on the live stream, Yam, Ryan, Nisten and LDJ will make sure you’re up to date!
    If you found this valuable, please consider supporting our publication with a subscription and share with a friend.
    Alex 🫡
    ThursdAI - April 30, 2026 - TL;DR
    Hosts and Guests
    * Alex Volkov - AI Evangelist & Weights & Biases (@altryne)
    * Co-Hosts: @WolframRvnwlf, @yampeleg, @nisten, @ldjconfirmed
    * Guest: Max Spero (@max_spero_) - Co-founder, Pangram Labs
    Healthcare AI
    * Mayo Clinic’s REDMOD detects pancreatic cancer up to 3 years before clinical diagnosis with 73% sensitivity vs 39% for radiologists (Announcement)
    Open Source LLMs
    * DeepSeek V4 paper drops with CSA+HCA attention, 1M context at 5.7GB KV cache, possibly first frontier model trained across multiple datacenters (Arxiv)
    * SenseTime open-sources SenseNova U1 - unified multimodal 8B/3B-active MoE with no encoder/VAE (HF, GitHub)
    * IBM releases Granite 4.1 family (3B/8B/30B) - non-thinking dense models with 20x token efficiency over Qwen3.5 9B, Apache 2.0 (Blog, HF)
    * Mistral launches Medium 3.5 - 128B dense flagship with 256K context, configurable reasoning, plus Vibe coding agent (HF, Blog)
    * Baidu ERNIE 5.1 Preview hits #13 on Arena (#1 Chinese lab) using just 6% of comparable pretraining compute (ernie.baidu.com)
    Big CO LLMs + APIs
    * OpenAI publishes blog explaining GPT-5.5’s “goblin mode” - reward amplification during RL training created an obsession with creature metaphors, leading to duplicated suppression instructions in the Codex system prompt
    * OpenAI ends Microsoft Azure exclusivity, AWS announces GPT-5.5 and Codex on Bedrock; AGI clause removed from contract (Sam tweet)
    * Gemini can now generate and export Docs, Sheets, Slides, PDFs, .docx, .xlsx, LaTeX directly from chat - free for all users globally (Blog)
    * NVIDIA releases Nemotron 3 Nano Omni - 30B/3B-active hybrid Transformer-Mamba MoE with 256K context, 9x throughput on consumer hardware (Blog)
    Agentic Commerce & Tools
    * Stripe launches Link wallet for agents at Sessions 2026 - AI agents get scoped payment credentials with mandatory human approval, real card never exposed (Blog)
    * Stripe removes waitlist on Projects.dev - 32 infrastructure providers (Cloudflare, WorkOS, ElevenLabs, Twilio, Daytona, Browserbase, AgentMail, etc.) provisionable via CLI for AI agents
    * Cursor launches SDK exposing the same runtime, harness, and models that power Cursor IDE - now embeddable in any product (Docs)
    * Cognition launches Devin for Terminal - local CLI coding agent with /handoff command for seamless cloud transfer (cli.devin.ai)
    Evals & Benchmarks
    * WolfBench tests 23 models across 300+ runs on Terminal-Bench 2.0 - Cursor Agent + GPT-5.5 is the #1 combination (wolfbench.ai)
    * Microsoft’s DELEGATE-52 benchmark shows GPT-5.4 loses 28% of document content after 20 iterative edits, frontier models corrupt stealthily while preserving structure
    This Week’s Buzz - Weights & Biases
    * IBM Granite 4.1 live on W&B Inference at $0.05/$0.10 per million input/output tokens with 128K context
    * WolfBench results going viral with Cursor + GPT-5.5 dominance, Codex and Devin testing in the pipeline
    AI Detection & Cognitive Security
    * Pangram Labs launches Chrome extension auto-flagging AI content in real time on X, LinkedIn, Reddit, Substack, Medium with 99.98% accuracy and 1-in-10,000 false positive rate (pangramlabs.com)
    * Taylor Lorenz uses Pangram API to analyze top 25 Substack bestsellers, finding many popular newsletters are near-fully AI-generated
    AI Art, Video & Audio
    * ElevenLabs launches ElevenMusic - full music platform with discovery, remixing, royalties; 4,000+ indie artists at launch (elevenmusic.io)
    * HeyGen HyperFrames integrates natively with Claude Design - HTML-to-MP4 motion graphics via single CLI command (hyperframes.dev)
    * xAI drops Grok Imagine update with dramatically improved lip sync, sound, and 30-second video extensions
    * OpenAI engineer confirms team is actively fixing GPT-Image-2’s noise artifact issue
    Other
    * Talkie - 13B open-weight LLM trained exclusively on pre-1930 text, by Alec Radford and David Duvenaud (talkie-lm.com)
    * GPT-5.5 Codex full system prompt leaked from OpenAI’s open-source repo, revealing 272K context window, four reasoning levels, three personality modes, and the duplicated anti-goblin instruction



    📅 Apr 23: OpenAI's Week: GPT-5.5, GPT-Image-2, Codex CUA + Chronicle, + Claude Design, Kimi K2.6, Qwen 3.6-27B

    2026/04/24 | 2h 24 mins.
    Hey, Alex here, I’ll try to catch you up, but it’s one of the more intense weeks in AI in recent memory.
    Here’s the TL;DR - OpenAI dominates across the board this week! They finally launched “spud,” calling it GPT 5.5 (and 5.5 Pro), and it’s SOTA on most things, nearly matching the mysterious Claude Mythos, but actually released so we can use it (we tested it extensively).
    OpenAI also took the crown in image generation with the incredible GPT-image-v2 release, beating Nano Banana 2 and Pro by a significant margin. The images are incredible; this model can generate working QR codes and 360 images. It’s quite bonkers. Codex was updated with Computer Use (which I told you about last week), an in-app browser and a bunch of other tools that match GPT 5.5 intelligence.
    Meanwhile, Anthropic launched an incredible research preview of Claude Design, finally admitted that Claude was dumb and reset quotas across the board, all while breaking the community’s trust by removing Claude Code from the Pro plan.
    We’ve also got great open source updates, Kimi K2.6 and Qwen 3.6 27B are both great performers!
    We were live on the stream for almost 4 hours today waiting for GPT 5.5 and finally got it and tested it live on the show + had Peter Gostev on from Arena who had early access and shared with us his insights. Let’s get into it!

    OpenAI’s GPT 5.5 is here - SOTA AI intelligence you can actually use (Release Blog)
    OpenAI finally gave us all access to their latest intelligence boost, GPT 5.5 Thinking (and GPT 5.5 Pro). These models take the crown across many benchmarks, including TerminalBench (82.7%), GDPval (84%) and more; you can see the highlighted versions in the image above. Though, since it’s not uncommon for OpenAI to do some chart crimes, @d4m1n created a chart that also shows the full benchmarks, including the ones where GPT 5.5 is not beating Opus. As you can see below, it underperforms on Humanity’s Last Exam and scaled tool use.
    But benchmarks don’t tell the full story. GPT 5.5 uses significantly fewer tokens than 5.4, about 40% fewer. It’s also more expensive per token, but given the lower token usage, it nets out at about a ~20% price increase, while being more intelligent and faster.
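    Quick sanity check on that math (my arithmetic, not OpenAI’s published numbers): since total cost is per-token price times tokens used, the reported figures imply roughly a doubled per-token price.

    ```python
    # Back-of-envelope using the figures reported above:
    tokens_ratio = 0.60      # "about 40% fewer" tokens than GPT 5.4
    net_cost_ratio = 1.20    # "~20% price increase" net
    implied_price_ratio = net_cost_ratio / tokens_ratio
    print(f"implied per-token price ratio: {implied_price_ratio:.1f}x")  # -> 2.0x
    ```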
    Tons of folks who had early access are reporting the same thing: this model excels at long-running tasks. Peter Gostev from Arena, who joined our live stream, showed us an incredible demo that ran overnight for over 8 hours! This model can work until the task is done, no longer just pausing in the middle to ask for your input.
    The real highlight: paired with the recent GPT-image-2 (which I’ll expand on later in this newsletter), GPT 5.5 becomes an excellent UI designer. This is a big area where Claude still has a moat and OpenAI is trying to catch up, and the real alpha now is to use both the image gen and 5.5 in tandem to create beautiful visuals and UIs.
    The main thing, after testing it quite a few times: this only works if you generate the image outside of the session that builds the actual UI. We tried a couple of times to do it in one session, and the resulting UI doesn’t come remotely close to the generated image.
    Only after sending this image to a completely fresh session and asking for a “pixel perfect” implementation, did GPT 5.5 start to resemble the input image and rebuild the whole ui in pixel perfect fidelity!
    GPT Image v2 - SOTA thinking image model, finally beating Nano Banana (Blog, Live)
    Like we said, OpenAI is dominating this week, and both are great models. Though, in an apples-to-apples comparison, GPT-image-v2 is a much bigger jump from previous models than GPT 5.5 is!
    According to Artificial Analysis, the jump in how many people prefer GPT-image-2 in blind tests compared to other models is the highest we’ve ever seen, over 250 points. And you can clearly see it in the generations as well.
    Earlier this week, we did a live streaming session with Peter Gostev (from Arena) where we did a deep dive comparing this new model to GPT Image 1.5, Nano Banana and Grok Imagine, and it’s a clear winner across most categories.
    Character consistency, high-resolution imagery, and instruction following are all so, so good it’s a bit hard to explain in text.
    Reasoning visual intelligence
    Like Nano Banana, this model is likely based on a big GPT; it’s no longer just diffusion, and as you can see, it reasons! Apparently the more reasoning you give it (if you choose GPT Pro), the better it’ll be. The examples are indeed wild: the model can generate images of code that works, and generate functional QR codes and barcodes!
    The craziest thing people figured out it can do is functional 360 imagery (equirectangular format): you can just ask the model to create a 360 image of a scene and then drop it into a 360 viewer!
    Peter showed us on the show how he combined GPT 5.5 and Image v2 to create a sort of “street view” from a bunch of 360 images, and it blew our minds. He literally spun up an overnight GPT 5.5 task in Codex that planned out the Hanging Gardens of Babylon, generated hundreds of equirectangular images, stitched them into a walkable interface, and had it running 8+ hours without babysitting. A street view of a place whose appearance we don’t actually know, hallucinated from latent space. What a time.
    Day one availability is wide: Figma, Canva, Adobe Firefly, fal.ai, and Microsoft Foundry all have it. Nano Banana dominated for what felt like an eternity in AI time (it was really only a few months 😅), and finally OpenAI has a proper answer.
    OpenAI is dropping models on HF - Privacy Filter, a 1.5B Apache 2.0 PII redaction model (X, HF)
    I told you they’ve been cooking this week! OpenAI open sourced a genuinely useful model called Privacy Filter, which has 1.5B parameters with only 50M active, small enough that it runs fully offline in your browser (check out this incredible web demo by our friend Xenova).
This model is specifically built to anonymize and filter out personally identifiable information (PII): things like names and addresses, but more importantly bank accounts and API keys!
In the era of agentic assistants this is extremely important, and I’m very happy OpenAI is open sourcing here, specifically because, beyond being great out of the box, this model is a great base for fine-tuning on your own data!
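If the model lands as a standard token-classification checkpoint on the Hub (an assumption on my part; the repo id below is a placeholder, not the real one), a redaction pass before anything leaves your machine could look roughly like this:

```python
from transformers import pipeline

pii = pipeline(
    "token-classification",
    model="openai/privacy-filter",  # placeholder repo id
    aggregation_strategy="simple",
)

text = "Ping me at alex@example.com, card 4242 4242 4242 4242, key sk-abc123"
redacted = text
# Replace detected spans right-to-left so earlier offsets stay valid.
for ent in sorted(pii(text), key=lambda e: e["start"], reverse=True):
    redacted = redacted[: ent["start"]] + f"[{ent['entity_group']}]" + redacted[ent["end"]:]
print(redacted)
```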
Pair this with something like CrabTrap, a new open source LLM-as-judge proxy for agents like OpenClaw, and you’re hardening your setup so your private details won’t leak even if someone manages to prompt-inject your agent!
In any other week, CrabTrap would deserve a segment of its own; it’s a genuinely novel solution to the “AI agent can leak your creds” problem, created by Brex’s CEO, as they run agents inside Brex. But this week is insane, so... you get a link and we move on 🙂
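To make the idea concrete, here’s the shape of an LLM-as-judge egress proxy. This is emphatically not CrabTrap’s actual code, just a minimal sketch of the pattern; the judge model id and port are arbitrary.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from openai import OpenAI

judge = OpenAI()

def looks_like_exfiltration(method: str, url: str, body: str) -> bool:
    # Ask a judge model to vet the outbound request before it leaves.
    verdict = judge.chat.completions.create(
        model="gpt-5.5",  # any capable judge model; id as named in this post
        messages=[{
            "role": "user",
            "content": "Does this outbound agent request leak credentials or "
                       f"PII? Answer YES or NO.\n{method} {url}\n{body[:2000]}",
        }],
    )
    return "YES" in verdict.choices[0].message.content.upper()

class EgressProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length).decode(errors="replace")
        if looks_like_exfiltration("POST", self.path, body):
            self.send_response(403)
            self.end_headers()
            self.wfile.write(b"Blocked by judge")
            return
        # A real proxy would forward the request upstream here.
        self.send_response(200)
        self.end_headers()

HTTPServer(("127.0.0.1", 8888), EgressProxy).serve_forever()
```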
Claude Design - Anthropic’s Figma killer? (try it, deep dive)
This launched on Friday (come on Anthropic, why are you launching things on a Friday?!) and nearly tanked Figma stock (16% down since). It didn’t help that Mike Krieger, who runs product at Anthropic and co-leads Anthropic Labs, quit the Figma board just a few days before this release.
Claude Design is a new, separate interface for Claude, with its own usage meter, that exists only on the web, and only for Max subs for now. We all know Claude is great at frontend design, but this interface wraps Claude with some incredible designer-like tools: knobs to edit font sizes, a point-and-click interface to highlight elements for Claude to fix.
    The highlight for me, what broke my brain on the live stream, was the “talk to the design” feature, where you turn on the microphone, talk to Claude, and while you point, it “knows” what you’re pointing at!
    So you can say “here, fix THIS thing” without saying what that thing is, and Claude will just fix it, by looking at where your cursor was at the time. This ... this feels like magic.
The huge unlock in Claude Design is the initial “brand guidelines” process, in which you ask Claude to create a holistic brand identity (based on your website code, a screenshot, a Figma file, etc.), and then every new project can have that brand identity preserved, with the right fonts, colors, logos, etc. I dropped in this week’s show notes and asked for an interactive infographic website using the brand guidelines.
This really does feel like a “new kind” of product. I’ve worked with designers before, and the interaction model with Claude Design feels very much like working with one: showing them what you like and don’t like. And like working with a designer, it’s expensive! Claude Design uses Claude 4.7 and buuurns through tokens; I tapped out of my weekly quota in less than 4 projects!
Luckily, Anthropic this week admitted that they’ve dumbed down Claude and reset the quotas, so I was able to show it on the live show.
    This week’s Buzz — W&B LEET TUI gets Workspace mode
    Our W&B LEET TUI went viral a couple weeks back (local terminal UI for watching run stats, metrics, and system health - built for folks training on remote boxes who don’t want to alt-tab to a browser), and the team shipped a big follow-up this week: workspace mode.
Multi-run workspaces are live: metadata filtering, system metrics (GPU stats included), console logs, and — my favorite — images rendered directly in the terminal. The whole web workspace experience, now in your SSH session.
    Demo video and full announcement here. pip install wandb, give it a spin.
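If you want something for the TUI to chew on, any standard W&B run works. A hedged example (the project name is made up; wandb.init/log and wandb.Image are the regular W&B API):

```python
import numpy as np
import wandb

run = wandb.init(project="leet-demo", config={"lr": 3e-4})
for step in range(100):
    wandb.log({"loss": 1.0 / (step + 1)})  # shows up as a line plot
# Images are the new bit: these now render directly in the terminal.
wandb.log({"sample": wandb.Image(np.random.rand(64, 64, 3))})
run.finish()
```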
    Open Source AI
    Kimi K2.6 - Opus at home (if you have a data center) (X, HF, Live)
    Moonshot AI dropped Kimi K2.6 this week, a 1 Trillion parameter MoE with 32B active, 384 experts, 256K context, under a modified MIT license. The headline numbers are wild: SWE-Bench Pro at 58.6 (beating GPT-5.4 and Opus 4.6), BrowseComp at 83.2, HLE with tools at 54.0.
    Wolfram ran it on his own Wolf Bench and it came out as the best open source model he’s ever tested — essentially matching Sonnet 4.5 on terminal bench with the Terminus agent harness, and beating Opus 4.6 inside OpenClaw. That’s a crazy sentence to write.
    Pricing on Cloudflare Workers AI is $0.95/M input, $4/M output — roughly 15x cheaper than Opus. If you have the budget to run it.
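Back-of-envelope on that claim. The Kimi prices are from this post; the Opus numbers are my assumed list prices for comparison, not from any announcement:

```python
# Prices in $/M tokens. Kimi's are from this post (Cloudflare Workers AI);
# the Opus figures are an assumption for comparison only.
kimi = {"in": 0.95, "out": 4.00}
opus = {"in": 15.00, "out": 75.00}

def cost(p, m_in=2.0, m_out=0.5):  # a chunky agentic task: 2M in, 0.5M out
    return p["in"] * m_in + p["out"] * m_out

print(f"Kimi ${cost(kimi):.2f} vs Opus ${cost(opus):.2f} "
      f"({cost(opus) / cost(kimi):.0f}x)")
```

Caveat: if Kimi generates 2-3x the tokens for the same task (see the take below), the effective ratio shrinks accordingly.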
Now, the calibrated take: Yam showed us a report from @BrightMind where Kimi failed pretty badly at rendering a 3D lava lamp while every other frontier model nailed it. Artificial Analysis has Kimi at #4 on their intelligence index (54), behind the three frontier labs. So it’s definitely a bit benchmaxxed on agentic coding, but it’s also genuinely good at agentic coding, which is the use case most people care about right now. My own experience: it overthinks a lot and generates a ton of tokens (which hits your wallet even at those low prices), I wasn’t thrilled with it during my live test, and its frontend design is meh.
    Bottom line: if you’re building an OpenClaw setup and you want Opus-adjacent quality without paying Opus prices, Kimi K2.6 could be the move. They also shipped Kimi Code CLI as a companion to Claude Code / Codex CLI.
Alibaba drops Qwen 3.6 27B - (Actually Sonnet at home)
    This one is special because it’s genuinely, actually runnable at home. It’s a dense 27B model under Apache 2.0, and it beats Alibaba’s own ~400B Qwen3.5 flagship MoE on every major coding benchmark. SWE-bench Verified 77.2, Terminal-Bench 2.0 at 59.3 (matching Opus 4.5), SkillsBench 48.2 (beating Opus 4.5 at 45.3).
With Unsloth’s dynamic GGUFs, this runs on 18GB of RAM. A used RTX 3090 under $1000 or a 24GB Mac Mini and you’re running something genuinely comparable to Sonnet 4.5 at home. Nisten has been daily-driving it and said people are calling it “Sonnet 4.5 at home” - it’s not a perfect drop-in replacement (it struggled with hard git merges in his testing), but for non-critical work? Absolutely there.
    Natively multimodal, 262K context extendable to 1M. There’s also a sibling, Qwen3.6-Max-Preview, available on their API if you want the frontier version.
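A minimal local-run sketch with llama-cpp-python; the GGUF filename is a placeholder for whichever Unsloth dynamic quant you grab, and the settings are just sane defaults for a 24GB card:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3.6-27B-UD-Q4_K_XL.gguf",  # placeholder filename
    n_ctx=32768,      # far below the 262K max, friendlier to 18GB of RAM
    n_gpu_layers=-1,  # offload every layer if you have the 24GB GPU
)
out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Write a git pre-commit hook that blocks TODOs."}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```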
    Great great open source model!
    Quick hits
    A bunch of stuff worth knowing about that didn’t get full segments:
    * Google Gemini Deep Research + Deep Research Max on Gemini 3.1 Pro (announce) — autonomous research agents that navigate web + your custom docs. Plus native chart generation and MCP support in the API.
    * Google Gemini Enterprise Agent Platform (launch) — evolution of Vertex AI for enterprise agent builders.
    * ChatGPT Agents “Hermes” leak — an agents builder/studio with templates and Slack integration incoming per @btibor91.
    * Codex now has 4M users per the team, and they open-sourced Euphony, a visualizer for Codex session logs.
    * SpaceX / Cursor $60B deal — the structure is either a $60B acquisition or a $10B collaboration experiment. The thesis being whispered: are developer traces the missing training ingredient for frontier coding models? Very spicy, very Elon.
    * Speaking of Elon, xAI released Grok-Voice-think-fast 1.0 (Blog) - it’s their fully end-to-end omni model that takes customer calls and is already deployed at scale at Starlink! A very interesting contender to the Gemini Flash live model we covered before. The benchmarks look insanely good.
    Phew
    I said at the top this was one of the more intense weeks in AI in recent memory, and I genuinely mean it. We were live on the stream for almost four hours. I’ve done five livestreams since last Thursday. GPT 5.5 dropping mid-show was the cherry on top. Between Codex becoming ambient, GPT Image v2 rewriting the ceiling for generative visuals, Claude Design moving a stock price, two incredible open source drops in Kimi and Qwen, and OpenAI quietly re-committing to open source — this was a lot.
    If you’re feeling the FOMO, you’re not alone. We live this stuff and I still feel it. My ask this week: bookmark the livestreams, play with GPT Image v2 (it’s genuinely the most fun I’ve had with an image model in a long time), and if you’re deploying agents in production, go read the CrabTrap source code this weekend.
    See you next Thursday — same place, same time, probably another launch that disrupts us mid-show. That’s the world now 🤷
    ThursdAI - Apr 23, 2026 - TL;DR
    * Hosts and Guests
    * Alex Volkov - AI Evangelist & Weights & Biases (@altryne)
    * Co-Hosts - @WolframRvnwlf @yampeleg @nisten @ldjconfirmed @ryancarson
    * Peter Gostev (@petergostev) - Arena AI
    * Big CO LLMs + APIs
    * OpenAI launches GPT-5.5 and GPT-5.5 Pro — SOTA across the board (Blog, Livestream)
    * OpenAI GPT-Image-2 — biggest Arena Elo jump ever, thinking mode for images (X, Eval site, Livestream)
    * OpenAI Codex — Background Computer Use + Chronicle (screen memory), hits 4M users (Chronicle)
    * GPT-5.5 pre-launch leak in Codex dropdown (X)
    * Anthropic Claude Design — research preview on Opus 4.7, Figma -7% (X)
    * Anthropic resets all Claude quotas, admits degradation, allows OpenClaw CLI back (X)
    * Anthropic ARR crosses $30B
    * Google Gemini Deep Research + Deep Research Max on Gemini 3.1 Pro (X)
    * Google Gemini Enterprise Agent Platform (X)
    * ChatGPT Agents “Hermes” leak — builder/studio + Slack integration (X)
    * OpenAI clinician/medical model + workspace agents released
    * Open Source LLMs
    * Moonshot Kimi K2.6 — 1T MoE, 32B active, SOTA open source on SWE-Bench Pro (X)
    * Alibaba Qwen3.6-27B — dense 27B, Apache 2.0, beats own 400B flagship (X, HF)
    * Alibaba Qwen3.6-Max-Preview on API (X)
    * OpenAI Privacy Filter — 1.5B MoE, 50M active, Apache 2.0, runs in browser (X)
    * Tools & Agentic Engineering
    * Brex CrabTrap — LLM-as-judge HTTP proxy for agent security (X)
    * OpenAIDevs Euphony — open-source Codex session log visualizer (X)
    * This week’s Buzz - Weights & Biases
    * W&B LEET TUI goes workspace mode — multi-run, GPU metrics, images in terminal (X)
    * Voice & Audio
    * StepAudio 2.5 TTS — natural-language control of emotion and delivery (X)
    * Deals & Industry
    * SpaceX/xAI <> Cursor — $60B acquisition or $10B collaboration structure


    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit sub.thursdai.news/subscribe
  • ThursdAI - The top AI news from the past week

    April 16 - Codex uses your mac in the background, Opus 4.7 release not quite Mythos + 3 interviews

    2026/04/16 | 1h 59 mins.
Hey y’all, Alex here with your weekly AI news catch-up.
It’s one of those Thursdays where no matter how well I prep, the big AI labs are hell-bent on showing up ahead of each other. Alibaba dropped Qwen 3.6 with Apache 2.0, confirming their commitment to open source; then Anthropic released Claude Opus 4.7 (not quite Mythos), and OpenAI followed with a huge Codex update that includes Computer Use among other things. The highlight of Computer Use is the background usage, more on that below. This is all just from today!
Earlier in the week we had two incredible 3D world generators, Lyra 2.0 from NVIDIA and HYWorld 2 from Tencent; Windsurf dropped version 2.0 with Devin integration; Google released a Gemini TTS with 90+ language support and an incredible emotional range; and Baidu open sourced ERNIE-Image, rivaling Nano Banana.
Today on the show we had three awesome guests: Theodor from Cognition joined to cover the new Windsurf, Kwindla came back to talk about Gradient Bang, “the side project that escaped containment”, a multi-agent, voice-based space game, and Trevor from Marimo joined to talk about pairing your agents with a Marimo notebook. Let’s dive in! 👇
    ThursdAI - We’re over 16K on YT today, my goal is to get to parity with Substack, please subscribe.

    Codex can now really use your computer: OpenAI updates Codex with CUA, Image Generation, Browser, SSH (X, Blog)
Codex has been the major focus inside OpenAI for a while now. We’ve reported previously that OpenAI is closing down SORA and other “side-quests” to focus, and that they will merge Codex, ChatGPT, and the Atlas browser into one “superapp”, and today, it seems, we’ve gotten an early glimpse of what that app will be.
The Codex team (which seems to be growing from day to day) has been on a TEAR feature-wise lately, trying to beat Claude Code, and they pushed an update with a LOT of features, among them a new memory system, an internal browser, and image generation.
The highlight for me, though, was absolutely the polished computer use experience. Computer use is not new; Claude has a computer use feature flag, and many others do too. Hell, we told you about computer use with Open Interpreter back in Sep of 2023. But this... this feels different.
You see, OpenAI quietly purchased a company called Software Apps Inc, which almost launched a macOS AI companion called Sky a year ago. This team is obsessed with the Mac, and somehow they were able to build a magical experience, a huge part of which is the fact that they control the Mac in the background. This is black magic stuff. You work in one document, Codex clicks buttons and does things in another, without interrupting you.
You may ask: Alex, why do you care so much about computer use, when most work happens in the browser anyway and Claude (and Codex) can already control my browser?
Well, true, but not ALL work happens there; take file system integration. Uploads and downloads are a notoriously failure-prone part of browser automation. I’ve spent countless cycles trying to get this to work with OpenClaw, and this just does it. This closes the loop between knowledge work in the browser (yes, this thing can use your browser too) and the broader OS.
    It’s so so polished, I truly recommend you try it. It’s as easy as @ tagging any app that you have running and asking Codex to do stuff there. Pro Tip: Enable fast mode for a much smoother experience.
Anthropic Opus 4.7 is here: not quite Mythos, 64.3% SWE-bench Pro, tuned for long-running tasks (X, System Card)
What is there to say? Is this the model we expected from Anthropic after last week’s news about Claude Mythos? No. But hey, we’ll take it. A new Claude Opus, with significantly improved multimodal capabilities and long-horizon coding improvements, for the same price?
Well, not quite! Apparently this model could be trained “from scratch”, given that the tokenizer (the thing that converts words into tokens for the LLM to understand) is a different one. It also uses roughly 1.3x more tokens for the same tasks, which means the new default model from Anthropic is effectively more expensive: at the same per-token price, a task that cost $1 in tokens on 4.6 now costs about $1.30. (They acknowledged this by raising usage limits by an unspecified amount on Anthropic subscription plans, but it’s still a token tax on API use.)
How about performance? Well, it’s hard to judge on evals alone, but they are great. A huge jump in SWE-bench Pro, over 10% improvement, makes this the best model out there, except Mythos. It’s also the best at real-world knowledge via GPQA Diamond (except Mythos). Are you seeing a trend here? Anthropic released a preview of a model, but for the first time it’s not their “absolute best” model, and in a weird move, they compared it on evals to an unreleased model (presumably 10x the size?).
As far as we’ve tested it, it gave an incredibly detailed response on the Mars question we constantly test on; for both me and Nisten, Opus 4.7 produced a detailed 3D rendered result, much better than our previous tries. I’ll be keeping an eye on this model and keep you up to date on what else we find. Vibe checks so far: it’s more expensive, long context is unclear, but it’s a great vibe model.
    Alibaba is back - Qwen 3.6 is Apache 2.0 35B with 3B active parameters (X, HF, Blog)
The coolest thing about this release is not the evals (though they claim to outperform the much denser Qwen 3.5-27B on multiple benchmarks); it’s that Alibaba is putting out models with open weights and an Apache 2.0 license!
We previously reported on rumors from inside Alibaba; some internal restructuring caused many of us to doubt whether they would stay committed to OSS, and they answered!
Another highlight for me: Alibaba has an OpenClaw bench (which they’re promising to release soon), and this model does as well as the dense model on it while beating Gemma 4 by a wide margin.
    This model is also natively multimodal, with 262K context extensible to 1M via YaRN.
    MiniMax M2.7 Open Weights - 230B MoE with only 10B active (X, HF)
    Our friends at MiniMax finally dropped M2.7 in open weights (technically not fully Apache, commercial use requires their authorization, but free for research, personal, and coding agents). It’s a 230B parameter MoE with only 10B active parameters, and it’s matching GPT-5.3-Codex on SWE-Pro at 56.22%. On Terminal-Bench 2 it hits 57%. But the real story here, the part that made me stop scrolling, is the self-evolution piece.
    They let an internal version of M2.7 run its own RL optimization loop for 100+ rounds with zero human intervention. The model analyzed its own failure trajectories, modified its own scaffold code, ran evals, and decided whether to keep or revert changes. It got a 30% performance improvement on internal metrics. The model improved itself.
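In pseudocode-ish Python, the loop they describe is simple to state. Every helper here is a runnable stand-in, not MiniMax’s actual harness; only the structure (analyze failures, edit scaffold, eval, keep or revert) comes from their description:

```python
import random

def evaluate(scaffold: str) -> float:
    # In the real system this runs an eval suite; here, a noisy stub.
    return len(scaffold) % 7 + random.random()

def propose_edit(scaffold: str, failures: list[str]) -> str:
    # In the real system the model rewrites its own scaffold code from
    # failure trajectories; here, a trivial mutation.
    return scaffold + f"\n# patch for: {failures[0]}"

def self_evolve(scaffold: str, rounds: int = 100) -> tuple[str, float]:
    best = evaluate(scaffold)
    for i in range(rounds):
        failures = [f"failure trajectory {i}"]
        candidate = propose_edit(scaffold, failures)
        score = evaluate(candidate)
        if score > best:  # keep improvements, revert everything else
            scaffold, best = candidate, score
    return scaffold, best

scaffold, score = self_evolve("# agent scaffold v0")
print(f"final score: {score:.2f}")
```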
    Shoutout to the MiniMax team — longtime friends of the pod and they keep delivering (as they promised to release the weights for this one and they did)
This week’s buzz - news from Weights & Biases and CoreWeave
This week was a very big one in our corner of the AI world. Our parent company CoreWeave announced not one, not two, but three major deals: one with Anthropic, a renewed commitment from Meta, and a renewal from Jane Street.
    CoreWeave now serves 9 out of the top 10 AI model providers in the world. 🎉
Oh, and a small plug: if you want tokens powered by the same infrastructure, our CoreWeave Inference service is open and very cheap, and we’ve recently added both Gemma 4 and GLM 5.1.
This week on the pod, I chatted with Trevor, founding engineer at Marimo Notebooks (also part of CW), about their recent work pairing an AI agent with Marimo notebooks. They went quite viral on Hacker News and I wanted to understand why; now I do, it’s really cool. Check Trevor out on the pod starting around the 01:05:00 timestamp.
    Tools & Agentic Engineering
    Windsurf 2.0 - Agent Command Center + Devin in the IDE - interview with Theodor Marcu (X, Blog)
    The first big post-Cognition-acquisition move for Windsurf dropped this week, and I got to chat with Theodor Marcu from Cognition about it on the show. The headline: Windsurf 2.0 brings an Agent Command Center; think Kanban-style mission control for all your agents, plus native Devin integration baked right into the IDE, and Spaces (persistent project containers that group your agent sessions, PRs, files, and context).
    The framing Theodor gave me: local agents are pair programmers bounded by your attention (they stop when you close the laptop), while cloud agents are independent hires. Windsurf 2.0 tries to unify both paradigms in one interface. You can plan locally with Cascade using the Socratic method — going back and forth, challenging assumptions, building up context — and then with one click, hand off execution to Devin which runs in its own cloud VM, opens PRs, runs tests, and even tests its own work using computer use on its own Linux desktop. You can close your laptop and it keeps shipping.
    One reality check from the community: Devin is great but not cheap. One early tester burned $25 in credits for a 15-20 minute bug fix that produced “okay” results. Something to watch on the Max plan economics. Devin access is rolling out gradually to Windsurf users over 48 hours from launch.
Shoutout to Swyx, who helped design Spaces three months ago while at Cognition!
    Warp terminal now supports any CLI agent with vertical tabs and mobile control (X, Blog)
    This one is for the terminal enjoyers. Warp, which in my opinion is the best terminal experience out there, just shipped first-class support for any CLI agent — Claude Code, Codex, OpenCode, Gemini CLI, all running side by side in vertical tabs with live status indicators.
    The killer feature here, and this solves what I think is the single worst part about using Claude Code, is notifications when agents need you. If you’ve used Claude Code you know the pain of constantly checking if it’s waiting for a permission or input. Warp notifies you. You step in, approve, go back to what you were doing. They also added integrated code review inside the terminal, a rich multimodal input editor, and — this is wild — remote control from mobile. Monitor and interact with your running CLI agents from your phone.
    Voice & Audio
    Gradient Bang - the first massively multiplayer LLM-driven game, interview with Kwindla (X, Play it)
    Kwindla, co-CEO of Daily and maintainer of Pipecat, came on the show to talk about Gradient Bang, a game he described as “a side project that escaped containment.” He told me about this back in December, and folks, it’s finally live and it’s genuinely the first fully LLM-driven multiplayer game I’ve seen. It’s inspired by an old BBS door game called Trade Wars that Kwindla used to play as a baby programmer on a 386 DX, but reimagined so your ship’s computer is an LLM you can just… talk to.
    You pilot a spaceship through a procedurally generated universe, but instead of clicking buttons, you talk to the thing, and say things like “take me to the nearest mega port and trade along the way” — and your ship AI delegates to sub-agents to actually do the work. You can run corporations, buy more ships, task them to do 5 exploration loops while you do trade runs. It’s Factorio-meets-Ender’s-Game-meets-voice-AI. I’ve been playing it, my ship is currently roaming the universe as we speak (with 0 credits as someone robbed me!)
    What makes this technically fascinating is that it’s basically a production-grade stress test for multi-agent orchestration. Sub-agents with shared context, episodic memory across sessions, dynamic LLM-generated UIs (the React front-end is literally rendered from JSON thrown over by a UI agent LLM), and long-running contexts that go for weeks. The architecture is now shipping as a Pipecat library called Pipecat Sub-Agents. Tech stack: Deepgram for STT, GPT-4.1 for the voice agent, GPT-5.2 medium-thinking for task agents, and a dedicated benchmark called GB Benchmarks because tasking these agents is genuinely hard.
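The Pipecat Sub-Agents library is the real artifact to study. Purely to illustrate the delegation shape described above (shared episodic memory, role-scoped sub-agents), here’s a hand-rolled sketch; it is not the Pipecat API, and the model id is taken from the tech-stack rundown above:

```python
from openai import OpenAI

client = OpenAI()
episodic_memory: list[str] = []  # shared across sub-agents within a session

def run_sub_agent(role: str, task: str) -> str:
    # Each sub-agent sees the shared memory tail plus its own task.
    resp = client.chat.completions.create(
        model="gpt-5.2",  # per the post, task agents run GPT-5.2 medium-thinking
        messages=[
            {"role": "system",
             "content": f"You are the {role} sub-agent of a ship's computer."},
            {"role": "user",
             "content": "Memory:\n" + "\n".join(episodic_memory[-20:])
                        + f"\n\nTask: {task}"},
        ],
    )
    out = resp.choices[0].message.content
    episodic_memory.append(f"[{role}] {task} -> {out[:200]}")
    return out

# "Take me to the nearest mega port and trade along the way", decomposed:
route = run_sub_agent("navigation", "plot a route to the nearest mega port")
plan = run_sub_agent("trade", "plan profitable trades along the plotted route")
```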
    Fun detail: Kwindla’s rule for this project was to not write or read any code since November. His colleague John lasted about one day before he broke and started reading React. The Z/L Continuum claims another victim. Go play it, it’s free and fun: gradientbang.com.
    Google launches Gemini 3.1 Flash TTS (X, Blog, Try it)
    Google dropped a new TTS model this week and folks, it’s not quite the speed-of-light real-time conversational TTS we’re all dreaming of (it’s about 3 seconds time-to-first-token, so batch-mode only), but the controllability is wild. We’re talking inline audio tags — [laughs], [sighs], [gasp] — natural language scene direction, two distinct speakers per generation, 70+ languages with auto-detection, and you can switch emotion and pacing mid-sentence with natural language.
    I tested it live on the show with a “shocked/whispering” tag combo asking “Who came to ThursdAI?” and it absolutely nailed it.
It hit 1,211 Elo on the Artificial Analysis TTS Arena, 4 points behind Inworld TTS 1.5 Max and ahead of ElevenLabs v3. Pricing is about $0.03 per 60 seconds of audio, roughly 4.7x cheaper than ElevenLabs v3. Kwindla’s take: this is part of the broader shift from traditional TTS architectures toward fully steerable, prompt-able speech models — which is great for expressive use cases but means you need to test heavily for hallucinations and word skipping.
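For the API-curious, a hedged sketch using the google-genai SDK’s TTS pattern; the model id is this post’s name for the release and may not match what the API actually exposes, and the voice name is just one of the prebuilt options:

```python
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment
resp = client.models.generate_content(
    model="gemini-3.1-flash-tts",  # assumption: the post's name for the model
    contents="[shocked][whispering] Who came to ThursdAI?",  # inline audio tags
    config=types.GenerateContentConfig(
        response_modalities=["AUDIO"],
        speech_config=types.SpeechConfig(
            voice_config=types.VoiceConfig(
                prebuilt_voice_config=types.PrebuiltVoiceConfig(voice_name="Kore")
            )
        ),
    ),
)
pcm = resp.candidates[0].content.parts[0].inline_data.data  # raw PCM bytes
with open("out.pcm", "wb") as f:  # wrap in a WAV header to play it back
    f.write(pcm)
```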
    AI Art, Video & 3D
    Tencent HYWorld 2.0 and NVIDIA Lyra 2.0 - actual 3D worlds from one image
    This week we got not one but two major single-image-to-3D-world open releases, and they’re genuinely different from the video world models (Genie 3, Cosmos) we’ve been covering.
    Tencent HYWorld 2.0 takes a single image (or text, or video) and produces actual 3D Gaussian Splats, meshes, and point clouds that you can import directly into Unity, Unreal, Blender, or NVIDIA Isaac Sim. Not video. Real editable 3D assets. Their framing: “watch a video, then it’s gone” vs “build a world, keep it forever.” The WorldMirror 2.0 reconstruction model is a 1.2B parameter feed-forward model that predicts dense point clouds, depth, normals, camera params, and 3DGS in a single pass. All open source.
    NVIDIA Lyra 2.0 (Apache 2.0) takes a single image and progressively generates an explorable 3D world as you navigate through it. The breakthrough here is solving two classic failure modes of generative world models: spatial forgetting (hallucinating new structures when you revisit an area) and temporal drifting (errors accumulating until the scene turns to mush). They solve both with per-frame 3D geometry retrieval and this elegant self-augmented training trick where they train the model on its own degraded outputs so it learns to correct drift. DMD distillation gets you 4-step inference. Apache 2.0, Hugging Face, code and weights.
    Both of these together feel like the end of video-only world models as the state of the art. We’re going straight to editable, persistent, importable 3D worlds.
    Baidu open-sources ERNIE-Image - 8B parameter text-to-image (HF)
    Not to be outdone, Baidu dropped ERNIE-Image, an 8B parameter DiT that’s now #1 on GenEval among open-weight models (0.8856), beating Qwen-Image, FLUX.2-klein, and Z-Image. Built from scratch in 3 months. Runs on a 24GB consumer GPU, and someone already quantized it to NF4 so it runs under 10GB VRAM on an RTX 3060. The text rendering story is the headline — clean multilingual text rendering for posters, infographics, comics, the stuff every other model has been historically terrible at. There’s also a Turbo variant that does it in 8 inference steps.
    The craziest AI video I’ve ever seen - “Pi Hard” (X)
You have to watch this AI video. It’s one of the crazier ones I’ve ever seen, and I report on AI for a living. I showed this to my fiancée Darya, and she only asked me “is this AI?” in the middle of it, after initially saying “yeah, let’s watch this” 😂
    Closing thoughts
    What a week. Opus 4.7 dropped live on the show, Codex is now controlling your mac in the background like black magic, Qwen gave us another Apache 2.0 banger, MiniMax shipped a self-evolving model, and we got two “image-to-actual-3D-world” open source releases on the same week. Oh and a shoe company is now an AI compute company.
    The Z/L Continuum keeps shifting — I feel like every week I drift a little more toward L, especially after seeing Kwindla ship Gradient Bang without reading code since November. And every week the agents get better at babysitting themselves (Claude Code Routines, Windsurf’s Agent Command Center, Warp’s unified CLI agent UX, Codex’s computer use in the background), which means more FOMAT for all of us.
    Thanks for reading, share this with a friend, and if you enjoyed this, drop a comment with what you want more or less of. Feedback keeps me going.
    — Alex
    TL;DR - ThursdAI, April 16, 2026
    * Hosts and Guests
    * Alex Volkov - AI Evangelist & Community with Weights & Biases / CoreWeave (@altryne)
    * Co-hosts: @WolframRvnwlf, @yampeleg, @nisten, @ldjconfirmed
    * Guests:
    * Kwindla Kramer (@kwindla) - Co-CEO of Daily, Pipecat maintainer
    * Theodor Marcu (@theodormarcu) - Product at Cognition
    * Trevor Manz (@trevmanz) - Founding engineer at Marimo
    * Show Notes
    * Recap essay on the Z/L Continuum from AI Engineer Europe (Blog): should AI engineers still read code? Ryan Lopopolo says no, Mario Zechner says yes for critical paths, everyone in between has FOMAT.
    * Mario Zechner talk is finally live on AI Engineer youtube (Watch)
    * Super Gemma 4 26B Uncensored v2 by @songjunkr — trending on HF, 0/100 refusals, fixed tool calls (HF GGUF, HF MLX 4bit)
    * Gemma 4 21B REAP — 20% expert-pruned Gemma 4 26B MoE by 0xSero using Cerebras REAP (HF)
    * Parcae (Together AI + UCSD) — stable looped transformer architecture with scaling laws, matches 2x-sized transformer quality (Paper/blog)
    * Claude Desktop app — rewritten from scratch, completely new app
    * Gemma 4 on W&B Inference — reply on the announcement post with code Gem Drop for $20 in inference credits, also supports LoRA inference via link
    * Big CO LLMs + APIs
    * Anthropic launches Claude Opus 4.7 - 87.6% SWE-bench Verified, 64.3% SWE-bench Pro, 3x vision resolution, new xhigh effort level, /ultrareview in Claude Code, same pricing as 4.6 but new tokenizer uses ~1.0-1.35x more tokens (X, Blog)
    * OpenAI Codex major update: macOS background computer use, 90+ plugins, gpt-image-1.5 image generation, in-app browser, memory, self-scheduling automations, multi-terminal SSH (X, Blog)
    * CoreWeave signs deals with Anthropic (multibillion), Meta ($21B expansion, $35B+ total), and Jane Street ($6B cloud + $1B equity), now serves 9 of the top 10 AI providers
    * Open Source LLMs
    * Qwen 3.6-35B-A3B - Apache 2.0, 35B MoE with 3B active, 73.4% SWE-bench Verified, natively multimodal, 262K context extensible to 1M (X, HF, Blog)
    * MiniMax M2.7 open weights - 230B MoE with 10B active, 56.22% SWE-Pro matching GPT-5.3-Codex, self-evolved via 100+ rounds of autonomous RL (X, HF)
    * Tools & Agentic Engineering
    * Windsurf 2.0 with Agent Command Center and Devin integration - interview with Theodor Marcu (X, Blog)
    * Warp now supports any CLI agent with vertical tabs, notifications, code review, mobile remote control (X, Blog)
    * Claude Code Routines - cron, GitHub event, and API-triggered autonomous agents running on Anthropic’s cloud (Docs)
    * This Week’s Buzz - Weights & Biases / CoreWeave
    * Marimo Pair - drop Claude Code / Codex / OpenCode agents directly inside reactive Python notebooks - interview with Trevor Manz (Blog, GitHub)
    * Gemma 4 now live on W&B Inference on CoreWeave infrastructure, with LoRA inference support
    * Vision & Video
    * Craziest AI video of the year: Pi Hard / Neil deGrasse Tyson (X)
    * Voice & Audio
    * Gradient Bang - first massively multiplayer fully LLM-driven game, Pipecat sub-agents - interview with Kwindla (Play, GitHub)
    * Google Gemini 3.1 Flash TTS - 1,211 Elo on TTS Arena, inline audio tags, 70+ languages, ~$0.03/60s (Blog)
    * AI Art, Diffusion & 3D
    * Baidu ERNIE-Image - 8B DiT, #1 GenEval among open models, precise multilingual text rendering (HF)
    * Tencent HYWorld 2.0 - single image to editable 3D Gaussian Splats/meshes, Unity/Unreal/Isaac Sim ready (GitHub)
    * NVIDIA Lyra 2.0 - single image to explorable persistent 3D worlds, Apache 2.0 (Project, HF)
    * Other news
    * Unitree humanoid breaks 100m dash world record at ~10m/s (X)
    * Allbirds shoe company loses 99.5%, rebrands as “NewBird AI”, raises $50M to buy GPUs, stock up 600-800% (X)


    This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit sub.thursdai.news/subscribe