Part 3 of 4
Pyotr Ilyich Tchaikovsky, a Russian composer best known for his ballet “The Nutcracker” and his rich string arrangements, died 130 years ago.
Yet his influence is evident within three minutes of “Vertigo,” which uses artificial intelligence to fuse singer-songwriter Kemi Sulola’s melody with sounds generated by NoiseBandNet, a computer model developed by Adrián Barahona-Ríos, a doctoral student at the University of York in the U.K., and Tom Collins, associate professor of music engineering technology at the Frost School of Music at the University of Miami.
The model uses Tchaikovsky’s “Souvenir de Florence” string sextet, ambient noise and other “training” clips to generate audio samples based on Ms. Sulola’s musical ideas, resulting in a unique soundscape.
The song won third place in the 2023 International Artificial Intelligence Song Contest.
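For readers curious about the mechanics, NoiseBandNet belongs to a family of synthesizers that build sound by shaping white noise with a bank of filters and amplitude envelopes learned from training clips. The short Python sketch below illustrates only that general idea; it is not the researchers’ code, and the three frequency bands and hand-made envelopes are invented for the example.

    # Rough illustration of filtered-noise synthesis, the broad idea behind
    # models such as NoiseBandNet. All parameters here are invented; a trained
    # model would use many more bands and learn each band's envelope from data.
    import numpy as np
    from scipy.signal import butter, sosfilt

    SR = 44100            # sample rate in Hz
    DURATION = 2.0        # seconds of audio to generate
    n = int(SR * DURATION)

    rng = np.random.default_rng(0)
    noise = rng.standard_normal(n)          # white-noise source
    t = np.linspace(0, DURATION, n, endpoint=False)

    bands_hz = [(200, 400), (800, 1600), (3000, 6000)]   # a tiny filter bank
    output = np.zeros(n)
    for i, (lo, hi) in enumerate(bands_hz):
        sos = butter(4, [lo, hi], btype="bandpass", fs=SR, output="sos")
        band = sosfilt(sos, noise)
        # Hand-made envelope standing in for the network's learned prediction.
        envelope = 0.5 * (1 + np.sin(2 * np.pi * (i + 1) * t / DURATION))
        output += envelope * band

    output /= np.max(np.abs(output))        # normalize to avoid clipping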
“This is a great example of technology, or technological innovation, just being a catalyst for creativity,” Mr. Collins said. “We wouldn’t be writing songs with Kemi Sulola and she wouldn’t be writing songs with us if it weren’t for the interest in artificial intelligence.”
Artificial intelligence, which allows machines to receive input, learn and perform human-like tasks, is making waves in healthcare, education and multiple economic sectors. It has also had a huge impact on music production and record labels, raising existential questions about the meaning of creativity and whether machines are augmenting or replacing human inspiration.
With the rapid advancement of artificial intelligence technology, the internet is flooded with programs that can clone the voices of the Beatles’ John Lennon, Nirvana’s Kurt Cobain and other famous artists. Artificial intelligence can spit out complete songs from a few text prompts, challenging the copyright landscape and inspiring mixed emotions in listeners who are intrigued by new possibilities but uneasy about what happens next.
“Music matters. Artificial intelligence is changing that relationship. We need to approach this carefully,” says Martin Clancy, an Irish expert who has written chart-topping songs and is the founding chair of the IEEE Global AI Ethics Arts Committee.
Online generators that can produce fully baked songs on their own are one aspect of AI music that has exploded in the past year or two, alongside the buzz around ChatGPT, the popular chatbot that generates written works from users’ prompts.
Other artificial intelligence and machine learning programs in the music field include “tone transfer” applications, which let someone sing a melody and have it played back as, say, a trumpet line instead of a voice.
Other programs help mix and master demo tapes, relying on machines to scan the recordings and suggest a little more vocals here or a little less drums there.
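At its simplest, that kind of suggestion can come from comparing the loudness of the separate stems, as in the toy Python sketch below. It is only a sketch under broad assumptions: real mixing assistants analyze spectrum, dynamics and much more, and the 3 dB tolerance and stem names here are made up for illustration.

    # Toy mixing "assistant": compare the average level of each stem and flag
    # anything that sits well above or below the rest. Thresholds are arbitrary.
    import numpy as np

    def rms_db(signal):
        """Root-mean-square level of a mono signal, in decibels."""
        rms = np.sqrt(np.mean(np.square(signal)))
        return 20 * np.log10(max(rms, 1e-12))

    def suggest_levels(stems, tolerance_db=3.0):
        """stems maps a stem name to a numpy array of audio samples."""
        levels = {name: rms_db(sig) for name, sig in stems.items()}
        average = np.mean(list(levels.values()))
        for name, level in levels.items():
            if level < average - tolerance_db:
                print(f"{name}: {level:.1f} dB, consider a little more {name}")
            elif level > average + tolerance_db:
                print(f"{name}: {level:.1f} dB, consider a little less {name}")
            else:
                print(f"{name}: {level:.1f} dB, sits fine in the mix")

    # Synthetic stems stand in for real recordings in this example.
    rng = np.random.default_rng(1)
    suggest_levels({
        "vocals": 0.05 * rng.standard_normal(44100),
        "drums": 0.40 * rng.standard_normal(44100),
        "bass": 0.20 * rng.standard_normal(44100),
    })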
Even those who are enthusiastic about the artificial intelligence musical phenomenon are finding it difficult to keep up.
“There’s a moment every semester where I say something isn’t possible yet, and then some student discovers that exact thing has just been released to the public,” said Jason Palamara, assistant professor of music technology at Indiana University in Indianapolis.
Some artificial intelligence programs can fill skill gaps by allowing creators to fully express their musical creativity. It’s one thing to have a rough melody or harmony idea, but another to execute it without the instrumental skills, studio time, or ability to assemble an ensemble.
“I think really exciting things have happened,” Mr. Collins said, citing the example of someone who wants to add a bossa nova beat to a song but needs a program to show them how because that style isn’t part of their musical palette. “This is what I can do with generative AI that I couldn’t do before.”
Other advances in artificial intelligence in music are all about fun. For example, Suno AI’s “Chirp” app can spit out a song within minutes after entering a few commands.
“If you took the 10 points of sale where the ukulele was reintroduced to the North American market, we would see a correlation between the sales pitch of the ukulele and AI music,” Mr. Clancy said, referring to the ukulele, an instrument that provides an entry point for many musicians. “It’s affordable. It’s fun. That’s a big part of these tools. Like they’re really, really fun and really easy to use.”
To emphasize this point, Mr. Clancy asked Suno AI to write a song for the drafting of this article. You can listen to it here.
Creators in the rapidly growing field of music generators tend to emphasize the need to democratize music production. A generator called Loudly says its growing team “is made up of musicians, creatives, and technologists who believe the magic of music creation should be accessible to everyone.”
Voice cloning is another hot area in AI music production. In one viral clip online, an AI-cloned Cobain sings Soundgarden’s “Black Hole Sun” in place of fellow grunge icon Chris Cornell, who sang on the original record. The Beatles broke up decades ago but released a new song, “Now and Then,” built from an old demo, with artificial intelligence used to extract a cleaner version of Lennon’s voice.
Voice cloning is a fun, if weird, experiment for listeners, but it is causing serious problems for the music industry. A major record label faced a big test earlier this year when a user named “ghostwriter” uploaded a duet between rapper Drake and pop star The Weeknd titled “Heart on My Sleeve.” The problem, of course, is that neither artist had a hand in the song; it was made with voice-cloning artificial intelligence.
Universal Music Group had the song removed from streaming services, citing copyright violations. The incident raised questions about which aspects of a song are controlled by labels, artists or the creators of artificial intelligence content.
“Does Drake own his voice, or does UMG, the record label he’s signed to, own his voice? Or is this an original work under fair use?” instrumentalist and producer Rick Beato asked in an artificial intelligence segment on his YouTube channel. “People are not going to stop using artificial intelligence. They are going to use it more and more. The only question is, ‘What will record labels do about it, what will artists do about it, what will fans do about it?’”
In the Drake-Weeknd case, Universal said that “using our artists’ music to train generative artificial intelligence” violated copyright, but some artists are embracing artificial intelligence as long as they can get a share of the revenue.
“I will share 50% of the royalties from any successful song generated by artificial intelligence using my voice,” electronic music producer Grimes wrote on Twitter earlier this year.
The U.S. Copyright Office in March provided some clarification regarding works produced primarily by machines. It said it would not register the works.
“When an artificial intelligence technology determines the expressive elements of its output, the resulting material is not the product of a human author,” the guidance states. “The material is therefore not subject to copyright protection and must be disclaimed in the registration application.”
The Biden administration, the European Union and other governments are rushing to catch up with artificial intelligence and harness its benefits while containing its potential adverse social impacts. They also dabble in copyright and other legal issues.
Even if they enact legislation now, it could take years for the rules to take effect. The European Union recently passed a comprehensive artificial intelligence law, but it won’t come into effect until 2025.
“That’s something that’s always been true in this area, which means all we’re left with is our ethical decision-making,” Mr. Clancy said.
Right now, the AI-generated music landscape is like the Wild West. Many AI-generated songs are contrived or just don’t sound great. Vast amounts of AI-generated music may require curators to filter through to find content worthy of listeners’ time.
Other thorny questions include whether it’s in good taste to use the voice of an artist like Cobain, who died by suicide in 1994, and who would benefit from any attempt to generate artificial intelligence music in his voice.
“If we trained a model on Nirvana and we said, ‘Give me a new Nirvana track,’ we wouldn’t get a new track from Nirvana, we’d get reprises of ‘Nevermind,’ ‘In Utero’ and ‘Bleach,’” Mr. Palamara said, referring to the band’s albums released between 1989 and 1993. “It’s not the same as Kurt Cobain being alive today. What would he have done? Who knows what he would have done?”
At a Senate hearing in November, Mr. Beato testified that an “AI music collection license” would be needed so that listeners would know what kind of music an AI platform was trained on and so that copyright holders and artists whose work was used could receive fair compensation.
Mr. Palamara worries that as AI tools become simpler, musicians in general may lose the ability to create music at a virtuosic level. Some singers already rely on pitch-correction technology such as Auto-Tune.
“Freshmen come in knowing how to use these technologies and have never had to struggle to sing in tune, so it’s harder to argue that they should learn how to do it without them,” he said. “Some might say that just means we now live in a world where the ability to sing accurately is no longer as important, and that may be true. But you can’t say that humanity is improving through the erosion of certain abilities that have been honed over centuries.”
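The core of Auto-Tune-style pitch correction is simple to state: detect the sung frequency, then nudge it to the nearest note of the equal-tempered scale. The fragment below shows only that snapping step as a rough sketch; the 440 Hz reference and example frequencies are assumptions, and commercial tools handle the detection and resynthesis in far more sophisticated ways.

    # Simplified illustration of the snapping step behind pitch correction:
    # map a detected frequency to the nearest equal-tempered semitone.
    # Pitch detection and audio resynthesis are omitted entirely.
    import numpy as np

    A4 = 440.0   # reference tuning in Hz

    def snap_to_semitone(freq_hz):
        """Return the frequency of the nearest equal-tempered note."""
        if freq_hz <= 0:
            return freq_hz
        semitones_from_a4 = 12 * np.log2(freq_hz / A4)
        nearest = int(round(semitones_from_a4))
        return A4 * 2.0 ** (nearest / 12)

    # A slightly flat A4 (437 Hz) is pulled up to 440 Hz, and a sharp C5
    # (531 Hz) is pulled down to about 523.25 Hz.
    for detected in (437.0, 531.0):
        print(f"{detected:.1f} Hz -> {snap_to_semitone(detected):.2f} Hz")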
Another concern is that machines could replace a source of income that songwriters or musicians (some of whom have given up performing) rely on.
At the same time, artificial intelligence also provides opportunities for musicians and arts organizations.
Lithuanian composer Mantautas Krukauskas and Latvian composer Maris Kupcs produced the first artificial intelligence-generated opera for the Lithuanian capital Vilnius in September.
Only the lyrics from the 17th-century work “Andromeda” remain, but modern composers restored the opera using an artificial intelligence system called Composer’s Assistant.
Developed by Martin Malandro, an associate professor of mathematics at Sam Houston State University, the model fills in melodies, harmonies and percussion to fit specific prompts. The European composers trained the model on the opera’s libretto and extant music by Baroque-era composer Marco Scacchi and his contemporaries to create an opera that might sound like the original, even if it isn’t an exact reconstruction of the score.
Mr. Malandro said he was not directly involved in the restoration but acknowledged that, through the artificial intelligence model, he was considered a contributor. “My understanding is that the opera was sold out at its premiere and was well received,” he said.
A survey conducted by British arts nonprofit Youth Music found that 63% of 16-24-year-olds said they were embracing artificial intelligence to assist their creative process, although interest wanes as they age. Only 19% of those 55 and older said they were likely to use it.
Mixing and mastering are areas ripe for artificial intelligence applications, Mr. Palamara said. He took some “bad” demos his high school band produced in the 1990s and ran them through iZotope, a program that analyzed the recordings and found ways to improve them.
Experts say programs could also take over some of the heavy lifting for music professionals who want to focus on a project but let artificial intelligence assist them with the tasks needed to pay bills and meet tight deadlines.
Mr. Collins said artificial intelligence was “definitely going to transform our musicianship.” But, he added, “I think changes in musicianship have happened over the centuries.”