Google's Gemma 4 Brings Most Capable Open Models Under Apache 2.0
Google released Gemma 4 on April 2, 2026, a new family of open models under the Apache 2.0 license that the company calls its most capable open-weight release to date. The timing is notable: it arrives just as the broader AI industry is sorting out how open-weight models fit into a world increasingly dominated by proprietary API services, and it puts Google in direct competition with Meta's Llama line and the growing ecosystem of permissively licensed foundation models.
What stands out about Gemma 4 is the breadth of day-one support across the inference ecosystem. Google announced native support at launch from Hugging Face, Ollama, vLLM, and LM Studio, a coordinated rollout that reflects how much Google has learned from previous releases about making open models easy to run everywhere. For enterprise teams evaluating open-weight alternatives to OpenAI's or Anthropic's offerings, that kind of deployment flexibility is a meaningful differentiator.
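In practice, that day-one support means the model should be reachable through the usual local-inference workflows. A minimal sketch of what that looks like, with the caveat that the exact model tags and repository IDs below (`gemma4`, `google/gemma-4`) are assumptions rather than confirmed names; the CLI tools and subcommands themselves are real:

```shell
# Ollama: pull the weights and chat locally
ollama pull gemma4          # model tag is an assumption
ollama run gemma4 "Summarize the Apache 2.0 license in one sentence."

# vLLM: serve an OpenAI-compatible HTTP API from Hugging Face weights
vllm serve google/gemma-4   # repo ID is an assumption
```

The vLLM route is the one most enterprise teams would take for self-hosting, since existing OpenAI-client code can point at the local endpoint with only a base-URL change.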
The Apache 2.0 license permits royalty-free commercial use, modification, and redistribution, with an explicit patent grant and no copyleft obligations, which positions Gemma 4 as a serious option for organizations that want to run AI capabilities in-house without committing to a proprietary API vendor. Whether its raw capability matches what closed models deliver on benchmarks is still being debated in the community, but the licensing story alone makes Gemma 4 worth watching, especially for companies with data residency requirements or long-term concerns about API cost and dependency.