This Week in NLP #385: LangChain/LangGraph CVEs, Anthropic Claude Code Source Leak, Gemma 4
Robert Dale's This Week in NLP newsletter has been the AI community's most reliable weekly digest for years, and issue #385, published April 3rd, is a particularly packed edition. The lead item covers the three high-severity LangChain and LangGraph CVEs patched this week (path traversal, unsafe deserialisation, and SQL injection), which together affect a dependency graph accounting for more than 60 million weekly PyPI downloads. It's the kind of vulnerability disclosure that tends to shake confidence in the open-source AI stack, and Dale doesn't soft-pedal the blast radius.
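Dale's write-up doesn't reproduce exploit details, and nothing below is LangChain's or LangGraph's actual code. But as a generic sketch of the path-traversal class named above, the bug is usually a file path built from untrusted input without checking that the resolved result stays inside the intended directory. A minimal guard (assuming a hypothetical `BASE_DIR` of served files) looks like:

```python
from pathlib import Path

# Hypothetical root of files the application is allowed to serve.
BASE_DIR = Path("/srv/docs").resolve()

def safe_resolve(user_path: str) -> Path:
    """Resolve a user-supplied relative path, rejecting traversal attempts.

    The vulnerable pattern is `open(BASE_DIR / user_path)` with no check:
    an input like "../../etc/passwd" resolves outside BASE_DIR.
    """
    candidate = (BASE_DIR / user_path).resolve()
    # is_relative_to (Python 3.9+) confirms the resolved path is still
    # contained in BASE_DIR after ".." components are normalised away.
    if not candidate.is_relative_to(BASE_DIR):
        raise ValueError(f"path traversal attempt: {user_path!r}")
    return candidate
```

The key detail is checking the *resolved* path rather than string-matching on `..`, which misses encodings and redundant separators.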
Also in this issue: Anthropic accidentally shipped Claude Code's internal source code via an npm package, briefly exposing its memory architecture and internal model roadmap before issuing mass DMCA takedowns — a story that quickly became one of the most-discussed AI incidents of the month. On the model side, Google released Gemma 4 in 2B-to-31B sizes under Apache 2.0, OpenAI closed a $122B funding round at an $852B valuation, and Apple announced that iOS 27 Extensions will allow Siri to hand off to rival AI assistants including Claude.
For practitioners who want a single weekly read covering the AI framework security landscape, major model releases, and the business dynamics reshaping the industry, issue #385 is a good reminder of why this newsletter has kept its readership across cycles of AI hype. The convergence of the LangChain CVEs and the Claude Code source leak in the same week signals that security scrutiny of the open-source AI stack is no longer peripheral: it's front-page news.