Google Drops Gemma 4 With Apache 2.0 License — Open AI Just Got a Lot More Open
Google DeepMind today made a significant move in the open-source AI race, releasing Gemma 4, a new family of four open-weight models spanning 2B, 4B, 26B (Mixture of Experts), and 31B (dense) parameters. The lineup covers the full spectrum from smartphone-grade inference to serious fine-tuning workloads, with the 26B MoE model activating only 3.8B parameters per token, giving it the inference cost of a much smaller dense model. The 2B and 4B "Effective" variants are explicitly tuned for edge hardware, including Pixel phones, Raspberry Pi, and Jetson Nano, targeting low-latency inference without a cloud connection. All four models run on a single consumer or professional GPU when quantized.
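A quick back-of-envelope check makes the single-GPU claim plausible. The sketch below estimates each model's weight footprint under 4-bit quantization; the parameter counts come from the announcement, while the 4-bit choice and the model names are illustrative assumptions, not confirmed release details:

```python
# Rough VRAM estimate for each Gemma 4 model's weights at 4-bit
# quantization (0.5 bytes per parameter). Activations and KV cache
# add overhead, so treat these as lower bounds, not requirements.
GIB = 1024**3

# Parameter counts per the announcement; names are hypothetical.
models = {
    "gemma-4-2b": 2e9,
    "gemma-4-4b": 4e9,
    "gemma-4-26b-moe": 26e9,  # all experts stay resident in memory,
                              # even though only ~3.8B activate per token
    "gemma-4-31b": 31e9,
}

for name, params in models.items():
    weight_bytes = params * 0.5  # 4 bits = 0.5 bytes per weight
    print(f"{name}: ~{weight_bytes / GIB:.1f} GiB of weights")
```

At 4-bit, even the 31B dense model's weights come to roughly 14.4 GiB, comfortably inside a 24 GB consumer card. Note that the MoE's sparsity cuts compute per token, not memory: the full 26B must still fit on the device.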
But the bigger story is the license. Google is dropping its restrictive custom Gemma license in favor of Apache 2.0 — one of the most permissive open-source licenses available. That change removes the legal friction that had kept enterprises from building commercial products on top of previous Gemma models. It puts Google in direct competition with Meta's Llama 4 and signals a genuine shift in how the company is positioning itself in the open-source AI ecosystem. For developers and startups who were waiting on the sidelines, Gemma 4 just became a lot easier to say yes to.