Neon Changelog: MCP Server Integration, Mastra Memory Module, and Vercel AI SDK Guide

Neon's April 3 changelog adds an MCP Server for AI database access, a Mastra Memory guide for stateful agents, and a Vercel AI SDK tutorial for data-driven Slack bots. Durable, database-backed agent memory is moving from concept to implementation.

Neon's April 3 changelog landed with three additions that point in the same direction: durable, database-backed memory for agents is moving from concept to production-ready implementation. The headline items are an MCP Server integration for connecting AI assistants directly to Neon Postgres, a Mastra Memory module guide for stateful agents that retain context across sessions, and a Vercel AI SDK tutorial for building data-driven Slack bots on top of Neon read replicas.

The MCP Server integration is the most structural of the three. Model Context Protocol has been gaining momentum as the standard for connecting AI assistants to external tools and data sources, and Neon's implementation means any MCP-compatible client can now talk to a Neon database with connection management and authentication built in. The changelog also includes a streamlined setup flow: copy the npx neonctl@latest init command from the docs and the rest of the setup is handled for you. If you've been waiting for a clean way to give an AI assistant direct database access without building your own connector, this is close to it.
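Under the hood, MCP clients talk to servers over JSON-RPC 2.0, invoking named tools with structured arguments. The sketch below shows the general shape of such a tool-call message; the tool name `run_sql` and its argument schema are illustrative assumptions, not Neon's documented API, so check the Neon MCP server docs for the real tool names.

```typescript
// Shape of a JSON-RPC 2.0 "tools/call" request, as used by MCP clients.
// The tool name and arguments below are illustrative, not Neon's actual schema.
interface McpToolCall {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>
): McpToolCall {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

// A client asking a hypothetical database tool to run a query:
const call = buildToolCall(1, "run_sql", { sql: "SELECT now()" });
console.log(JSON.stringify(call));
```

The point is that once a server speaks this protocol, any MCP-aware client (editor, chat app, agent runtime) can drive it without bespoke glue code, which is what makes the integration "structural."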

The Mastra guide is where the "stateful agent" pattern gets concrete. Mastra, a TypeScript agent framework that has been quietly accumulating GitHub stars over the past year, now has official documentation for wiring its Memory module to Neon Postgres. The pattern is straightforward: instead of an agent losing all context when a session ends, conversation history and working state get persisted to a Postgres database. The next time the agent runs, it pulls that context back in and continues where it left off. For anyone building customer-facing agents that need to maintain continuity over days or weeks, this is the difference between a demo and a product.
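The persistence pattern itself can be sketched in a few lines. In this minimal stand-in, a `Map` plays the role of the Postgres table Mastra's Memory module would write to, and the function names are illustrative rather than Mastra's actual API:

```typescript
// Minimal sketch of the stateful-agent pattern: conversation state keyed
// by thread id, saved when a session ends and restored on the next run.
// A Map stands in for a Postgres table; names are illustrative only.
type Message = { role: "user" | "assistant"; content: string };

const store = new Map<string, Message[]>(); // stand-in for Postgres

function saveThread(threadId: string, history: Message[]): void {
  // In Postgres this would be INSERT ... ON CONFLICT DO UPDATE.
  store.set(threadId, history);
}

function resumeThread(threadId: string): Message[] {
  // A brand-new thread starts with empty history.
  return store.get(threadId) ?? [];
}

// Session 1: the agent exchanges messages, then persists them.
saveThread("user-42", [{ role: "user", content: "My order is #1001" }]);

// Session 2, hours or days later: the agent reloads context and continues.
const history = resumeThread("user-42");
console.log(history.length); // 1
```

The value of backing this with Postgres instead of an in-process map is exactly the continuity the article describes: the history survives restarts, deploys, and weeks of idle time.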

The Vercel AI SDK integration is the most immediately useful for developers already in the Vercel ecosystem. The tutorial walks through building a Slack bot that answers data questions by querying a Neon read replica directly from the SDK. Read replicas matter here because they let agents run analytical queries without touching the production database: classic read-replica architecture applied to the agentic layer. The guide covers the full flow from connection setup to response formatting, which makes it a reasonable starting point even if your use case isn't exactly a Slack bot.
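The routing idea behind this can be sketched independently of the SDK: read-only queries go to the replica's connection string, anything that writes goes to the primary. The URLs and the crude read-only detection below are illustrative assumptions, not Neon's or the guide's actual code:

```typescript
// Sketch of read-replica routing: SELECTs go to the replica, writes to
// the primary. Both URLs and the naive heuristic are illustrative only.
const PRIMARY_URL = "postgres://primary.example.neon.tech/db"; // assumption
const REPLICA_URL = "postgres://replica.example.neon.tech/db"; // assumption

function pickConnection(sql: string): string {
  // Naive heuristic: treat statements starting with SELECT as read-only.
  const readOnly = /^\s*select\b/i.test(sql);
  return readOnly ? REPLICA_URL : PRIMARY_URL;
}

console.log(pickConnection("SELECT count(*) FROM orders")); // replica URL
console.log(pickConnection("INSERT INTO orders VALUES (1)")); // primary URL
```

A real agent would hold two connection pools and route per statement; the payoff is that a chat bot hammering the database with analytical queries never competes with production writes.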

The neon Postgres extension also bumped to version 1.14, bringing a neon_stat_file_cache view that lets you monitor local file cache hit rates via EXPLAIN ANALYZE. Not glamorous, but useful if you're debugging performance on a high-traffic Neon setup. Two heads-ups worth noting: pg_search is deprecated for new projects as of March 19, and Azure regions are being phased out. If you're running on either of those, now's the time to plan a migration before Neon contacts you about it.
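The metric the view surfaces is a simple ratio over cache counters: hits divided by total lookups. A quick sketch of that arithmetic, with made-up counter values standing in for what the view would report:

```typescript
// Local file cache hit rate from hit/miss counters: hits / (hits + misses).
// The sample numbers are invented, not real Neon output.
function cacheHitRate(hits: number, misses: number): number {
  const total = hits + misses;
  return total === 0 ? 0 : hits / total; // guard against a cold, empty cache
}

console.log(cacheHitRate(9_500, 500)); // 0.95
```

A ratio near 1.0 means most page reads are served from the local file cache; a low ratio on a hot workload is the signal to start digging.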

Read the full changelog at Neon →