Qodo Raises $70M — The AI Code Verification Layer That ChatGPT and Claude Code Created a Need For
There is a growing crisis of trust at the center of the AI coding boom, and a startup called Qodo is betting $70 million that it can solve it. As tools like Claude Code, GitHub Copilot, and Cursor flood enterprise codebases with AI-generated code, a new bottleneck has emerged: engineers don't fully trust what those tools produce, yet the volume is too great to review manually. Qodo, which builds AI agents for code review, testing, and governance, closed a $70M Series B led by Qumra Capital — bringing its total funding to $120M and drawing notable angels including Peter Welinder of OpenAI and Clara Shih of Meta.
What sets Qodo apart from generic code-review tools is its scope of analysis. Rather than just flagging what changed in a pull request, Qodo evaluates how those changes affect entire systems, weighing organizational coding standards, historical context, and the company's own risk tolerance. That kind of institutional memory is precisely what an LLM lacks. Founder Itamar Friedman put it plainly: "Code generation and verification require fundamentally different systems. An LLM can't understand tribal knowledge." The numbers back up the anxiety driving demand: 95% of developers say they don't fully trust AI-generated code, yet only 48% consistently review it before committing.
The raise reflects a broader bifurcation in the AI coding market. Generation tools are rapidly commoditizing in a race to zero-cost tokens, while the verification and governance layer is emerging as the defensible, premium tier. As enterprises scale up deployments of Claude Code and Cursor, the downstream need for trust infrastructure only grows. Qodo is positioning itself as exactly that layer: the quality gate between AI productivity and production reliability.