If you thought technology couldn’t break any more traditions, think again. Computers are making waves in the music industry: they are now capable of producing songs, and an eight-track album has been produced entirely with artificial intelligence.
In late 2018, YouTube star Taryn Southern, who doesn’t know how to play any instruments, released I Am AI.
Speaking at a panel discussion at the SXSW festival on Sunday, 10 March 2019, Southern said:
“For my first music video in 2017, I had a lot of friction as a non-musician. I wrote the lyrics. I had a melodic line but it was difficult to compose and record the actual music.”
The young pop artist started experimenting with AI two years ago using Amper, an artificial-intelligence music composition tool.
“In two days, I had composed a song that I could actually feel was mine. It means that I don’t necessarily have to rely on other people.”
Amper, founded in New York in 2014 by a group of engineers and musicians, is one of about a dozen start-ups using artificial intelligence to disrupt the traditional way of making music.
According to the company’s co-founder and CEO, Drew Silverstein, the aim is not to replace human composers but to work alongside them. The company draws on vast amounts of source material, from dance hits to classical music, to produce custom songs and “enable everyone to express themselves (through) music regardless of their background and skills.”
The Amper app allows a user to pick a genre of music (rap, folk, rock) and a mood (happy, sad, driving) before spitting out a song. The user can then change the tempo, add instruments or switch them out until the result is satisfactory.
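As a rough illustration only, that workflow boils down to a handful of choices: a genre, a mood, and optional tweaks such as tempo and instrumentation. The sketch below is hypothetical and does not represent Amper’s actual interface; the names and defaults are invented for clarity.

```python
# Purely illustrative sketch of the user workflow described above.
# These names and parameters are hypothetical, not Amper's real API.
from dataclasses import dataclass, field

@dataclass
class TrackRequest:
    genre: str                      # e.g. "rap", "folk", "rock"
    mood: str                       # e.g. "happy", "sad", "driving"
    tempo_bpm: int = 120            # adjustable after a first listen
    instruments: list[str] = field(default_factory=lambda: ["piano", "drums"])

    def swap_instrument(self, old: str, new: str) -> None:
        """Replace one instrument with another, as a user might between takes."""
        self.instruments = [new if i == old else i for i in self.instruments]

# A user picks a genre and a mood, listens, then tweaks until satisfied.
request = TrackRequest(genre="folk", mood="happy")
request.tempo_bpm = 96
request.swap_instrument("drums", "acoustic guitar")
print(request)
```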
Two songs created by Amper at SXSW, using the public’s choice of pop and hip hop as the genres and tender or sad as the moods, were clearly never going to top the charts. But the pieces were pleasant enough to the ear and perfectly usable as background music for a video or a computer game.
Amper describes such songs as “functional music” as opposed to “artistic music.”
Southern said she reworked the music on her album dozens of times before getting it right.
“For me, it’s just a tool I can use in my creative process: I’m still the editor, I’m still in the driver’s seat,” she said.
Computing technology leader and strategist Jay Boisseau predicted that computers will generate more and more music in the future. However, machines are unlikely to totally replace the human touch.
“We’re going to hear a lot of music composed by computers and there’s nothing wrong with that. But computers are not very good at creativity … they are 0 and 1.
“They can find patterns, but unlike humans, they’re not particularly good at going beyond what they’ve been trained for. They are tools.”
American filmmaker and writer Lance Weiler, who uses AI in his work, believes that the collaboration between machine and artist should not be sneered at.
“It mainly enables you to improve the way you work, to augment your skills in expressing creative thoughts,” he told the panel discussion.
Silverstein stressed that while AI was useful for experimenting towards an objective goal, it was still far from perfect when it came to artistic experimentation.
But for some, such arguments are not convincing. A British musician at SXSW did not appear happy about the competition and questioned whether the word “creativity” even applied to music generated by a computer.