GitHub Is Changing How It Uses Your Copilot Data — Starting April 24

GitHub has updated its Copilot data usage policy, and if you're on a Free, Pro, or Pro+ plan, the default is changing and you need to act before April 24. Starting that date, GitHub will use your interaction data — prompts, completions, code snippets, and the surrounding context Copilot sees — to train and improve AI models unless you explicitly opt out. This reverses the previous opt-in default for individual plans. Enterprise and Business subscribers are not affected; their existing data policies remain unchanged.

The developer community noticed quickly. Within hours of the announcement by Mario Rodriguez, VP of Product at GitHub, the post was generating significant reaction on Hacker News — 171 points and 83 comments as of this writing. The concern is straightforward: Copilot's context window frequently includes code from private repositories, and individual plan users at companies may not realize their work is flowing into model training by default. Team leads whose engineers are on personal Copilot plans should communicate the opt-out path now, not after April 24.

Zoom out and the strategic logic is apparent. GitHub needs training data to close the quality gap with competitors, and individual users represent a large, largely untapped signal source. The policy divergence between individual and enterprise plans is now a real procurement consideration: Enterprise customers get data isolation guarantees that Free and Pro users no longer have by default. That gap will matter to compliance teams. For everyone else, the immediate action is simple — find the opt-out in your Copilot settings and decide deliberately, rather than having the decision made for you.

Read the full article at The GitHub Blog →