Suno v5.5 — Voice Cloning and Custom Model Training Come to AI Music
Suno has shipped its most ambitious update yet with v5.5, shifting the focus of AI music generation from raw audio quality to deep personalization. The headline features are Voices and Custom Models. With Voices, users can train the generation model on their own voice using acapellas, full tracks, or a direct microphone recording, with a verification phrase built in to prevent misuse. With Custom Models, uploading six or more tracks from your own catalog trains a personalized style model that learns your aesthetic and applies it going forward.
The update also includes My Taste, a preference engine that watches the genres, moods, and artists you return to and quietly factors them into new generations — no manual configuration required. Voices and Custom Models are gated to Pro and Premier subscribers, while My Taste is available to everyone. The timing is deliberate: Suno released v5.5 on March 26th, a direct competitive response to Google DeepMind's Lyria 3 Pro launch earlier that week.
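Suno hasn't published how My Taste works internally, but the behavior described (tracking the genres, moods, and artists you return to and weighting new generations toward them) is a classic preference-profile pattern. Below is a minimal, hypothetical sketch of that idea: tags from each generation are tallied with a recency decay, so the engine favors what you keep coming back to. All names (`TasteProfile`, `observe`, `top_preferences`) and the decay scheme are assumptions for illustration, not Suno's actual implementation.

```python
from collections import Counter


class TasteProfile:
    """Hypothetical sketch of a My Taste-style preference engine:
    tally the tags a user returns to, with older plays decaying,
    so repeated choices dominate the profile."""

    def __init__(self, decay: float = 0.9):
        self.decay = decay           # how much old observations fade per new one
        self.weights: Counter = Counter()

    def observe(self, tags: list[str]) -> None:
        # Fade existing weights, then credit this generation's tags.
        for tag in self.weights:
            self.weights[tag] *= self.decay
        for tag in tags:
            self.weights[tag] += 1.0

    def top_preferences(self, n: int = 3) -> list[str]:
        # The tags most likely to be quietly folded into the next generation.
        return [tag for tag, _ in self.weights.most_common(n)]


profile = TasteProfile()
for tags in [["lo-fi", "mellow"], ["lo-fi", "jazz"], ["lo-fi", "mellow"]]:
    profile.observe(tags)

print(profile.top_preferences(2))  # "lo-fi" dominates: ['lo-fi', 'mellow']
```

The decay factor is the interesting design choice: without it the profile ossifies around early habits, while a decay near 0.9 lets a user's taste drift as their listening does.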
The strategic logic here is clear. Fidelity improvements are increasingly table stakes in AI audio. Personalization, meaning a model that sounds like you, generates in your style, and reflects your taste, is a much stickier value proposition. Suno's CEO has framed the product around "active music creation" rather than passive generation, and v5.5 makes that distinction concrete. The question now is whether voice cloning and style training become the moat, or whether competitors like Udio close the gap quickly.