• Source: JND

Much as systems like DALL-E generate images from written prompts, an AI reportedly developed by Google researchers can generate minutes-long musical pieces from text prompts and even transform a whistled or hummed melody into other instruments.

According to a report by The Verge, the model is known as MusicLM, and although you can't experiment with it yourself, the company has posted a number of samples it created using the model.

There are five-minute pieces produced from just one or two words like "melodic techno," as well as 30-second samples that sound like entire songs and are generated from paragraph-long descriptions prescribing a genre, a vibe, and even specific instruments, The Verge reported.

On the demo site, one can hear examples of the music the model produces when asked to create 10-second clips of instruments like the cello or maracas, eight-second clips of a particular genre, music that would fit a prison break, or even the difference between the sounds of a beginner and an experienced pianist. According to The Verge, it also offers interpretations of terms like "futuristic club" and "accordion death metal."

Even though MusicLM can replicate human vocals and seems to get the pitch and general sound of voices right, there is an unmistakably off-putting quality to them.

According to The Verge, artificial intelligence (AI) systems have been credited with creating pop songs, reproducing Bach more accurately than a person could as far back as the 1990s, and composing music to accompany live performances.

Meanwhile, ChatGPT, an AI platform developed by OpenAI, a San Francisco-based startup, is drawing widespread praise for handling a variety of everyday tasks, including writing code, articles, essays, and more. The chatbot can even converse with users in a humanlike way.

(With agency inputs)