Tool in development to create soundtracks from prompts, building on Jukebox legacy amid rising AI audio competition.

OpenAI is advancing into generative music with a new tool that crafts original tracks from text descriptions or audio clips, according to a report from The Information cited by Engadget on October 26, 2025.

The project, still in early stages, envisions applications like adding guitar riffs to vocals or custom scores to videos, potentially integrating with ChatGPT or Sora.

To refine its model, OpenAI has partnered with students from The Juilliard School, who are annotating musical scores for high-quality training data. This collaboration addresses key challenges like capturing nuance and avoiding copyright issues, as noted in WebProNews.

It builds on OpenAI's 2020 Jukebox project, which generated raw audio in genres like blues, but shifts toward multimodal prompts for more user-friendly creation.

The tool enters a crowded field dominated by startups like Suno and Udio, which have flooded streaming platforms with AI tracks, drawing scrutiny over "slop" content, as seen in the Velvet Sundown parody scandal.

Competitors include Google's MusicFX and Stability AI's Stable Audio, but OpenAI's resources could elevate the space. Features may include multi-vocal generation and AI mixing, appealing to indie creators, per NDTV.

No launch timeline is set, but experts predict integration with Sora for video-audio synergy, as speculated in Mint. Creators can experiment with existing tools like Suno or explore ElevenLabs for voice-music hybrids.

As AI music proliferates (projected to hit $1.5 billion by 2028, per MarketsandMarkets), OpenAI's entry raises ethical questions on originality and artist rights.

For musicians, this could democratize production; for listeners, it promises personalized soundscapes.
