Siavash Mahdavi

Siavash Mahdavi - AI Music

by MTF Labs | MTF Podcast

Siavash Mahdavi has always had two passions in life: technology and music. He studied Mechatronics, completed an MSc in Artificial Intelligence, and went straight into a PhD in Evolutionary Robotics - and to this day he still takes regular piano lessons.

In 2017, he combined his passion for music and his expertise in AI to found the company AI Music, with the aim of harnessing the power of AI to infinitely scale and adapt human creativity – delivering the perfect musical track on demand for any use. It started life as part of the famous Abbey Road Studios Red incubator programme in conjunction with Universal Music Group.

AI Music


Dubber      Hi, I’m Andrew Dubber. I’m Director of MTF Labs, and this is the MTF Podcast. Siavash Mahdavi is someone with an eye and an ear on the future. As the forward-looking CEO of AI Music, he’s teaching the robots how to sing, or, at least, the algorithms how to come up with mass-produced production library music in myriad automated variations.

When I spoke to him from his home in London, we’d not yet arrived in the brave new world of 2021. Everything was different. Donald Trump was the president of the United States, and… Well, mostly just that. Otherwise, climate crisis, pandemic, tech-giant monopolies, endless Zoom calls, and musicians and composers who feel that the job of coming up with tunes and playing them to people, which had historically been their primary domain, should rightly be left to them. That was all pretty much then as it is today.

But, like it or not, music and how we make it, experience it, discover it, interact with it, all that does change over time. Or, at least, the range of possible options continuously expands, hardly ever contracts, and the ratios are ever shifting. So I was interested to find out not so much “Why do AI people hate musicians so much?”, but more “What else can music be?”.

Siavash Mahdavi, thanks so much for joining us for the MTF Podcast today. How are you doing?

Siavash     Good. Been looking forward to 2021.

Dubber      Right, yeah. Are you over it?

Siavash     Yeah. Well, it’s funny because there’s obviously no technical reason why, magically, January 1st, the world’s going to go back to normal. But I just feel there’s so much sentiment and willingness for next year to be so much better than this year, from a global perspective, that I’m convinced something is going to happen.

Dubber      It’s inviting disaster to say “It could hardly be worse.”, though, isn’t it?

Siavash     You know what? They’re quite sad stories, but there are two times in my life when things have been really bad - and both instances involve people dying - where I did say to myself, literally, in my head, “Things couldn’t get any worse.”, and somehow that day something worse happened.

Dubber      Wow.

Siavash     And I was like “Bloody hell.”. So I would never wish that on anyone. So be careful.

Dubber      Well, let’s hang on to the optimism for the year. But we’ll talk about that personal story, even if you prefer to skip over some of the darker bits. That’s absolutely fine. But we will go there. But we should probably say who you are and what you do. So you’re CEO of something called AI Music, which obviously puts it very squarely into our area of interest. What’s AI Music?

Siavash     So at AI Music, we are looking at different ways of interacting with music. I’ve always seen music as one of the most creative art forms, and yet one that ninety-nine percent of people consume in a passive, lean-back way. So unless you play an instrument or can produce or can sing, your interaction with music is to sit back and someone says “Here’s my song.” and you go “I’m listening to your song.”, and that’s all you get. And so what we’re exploring is whether we can shift music from what we call static consumption to something more along the lines of dynamic co-creation, where an artist, a musician, will make a song but you decide how to interpret that song. So “Can you make the song a bit faster? Can you change the key to make it something you can sing along to? Can you change the genre to suit maybe your activity or mood?”.

So that was the high-level philosophy of the company, and then we went deeper into “Okay, what does that actually mean?”. So “Can we make a product out of that?”. So we explored shape-changing music. We explored creating hyper-customised remixes of songs so that a song gets released, we can create ten-thousand versions of that song, and everyone gets their own super-unique version to suit them.

Some of the things we’re doing now are essentially the same thing, but maybe a little bit more practical, around music beds for audio adverts. So can a brand - they’re launching a new phone, they’re launching a new restaurant, whatever it might be - when they create an advert have the music bed of the advert customise itself to the music you were listening to before the advert came in? And what does that do?

So let’s say you’re listening to jazz - you’re listening to Jazz FM or something - and then this advert pops in. Which, no one wants adverts, anyway, but they’re there, and they pay for the musicians. They pay for everything. Can that be a jazzy version of the ad so that you go “Oh.”? It wasn’t as disruptive. So I can almost click along to it. It might be that good. But also I’d feel the brand understands me more and wants to engage with me more, and I’m actually more likely to then buy the product.

So that’s one of the applications. I’ve got plenty more to talk about. But that’s one of the practical applications of the high-level philosophy we have around hyper-customised music.

Dubber      To make ads less offensive.

Siavash     Yeah. Make them less offensive. Less disruptive.

Dubber      And when you hear ‘AI music’, particularly… I talk to a lot of musicians, and the phrase ‘AI music’ really makes them bristle because the assumption is that it’s a replacement for creative musicians. Is that how you see it?

Siavash     No. So my background is in the AI bit. So I studied a master’s in machine learning. This was back in… What? 2002. So we were actually the first cohort at UCL, in London, that did this master’s in machine learning. It didn’t exist. AI has been around since the ‘60s/’70s, but, for some reason, no one thought to make it into an actual master’s degree. So we were the first guys that did it there. So I’ve always been fascinated by the role of automation and where AI sits, and what can machines do better than humans, where can they support humans, and where do humans win? And that’s always been really exciting for me.

So actually, I started my first company when I finished my PhD in machine learning in 2008, and there we used machine learning to automatically design objects that were then 3D printed - so big 3D shapes - for a range of industries. And we focussed on aerospace and Formula One and medical, and in those applications, the same challenge was there.

So there are engineers who are also very creative. They might use a bit more maths, but I would argue that they’re as creative as musicians, and they’re designing very complicated things. So they might be designing a component for an aerospace engine that has to withstand high temperatures and lots of pressures and lots of other things, and it might take them months and months and years to end up designing and optimising something. And we designed software that you click a button, it understands all the constraints, and [sound effect for something appearing], it ends up designing this thing, and we had the same kind of pushback.

So we’re selling to an engineer who’s listening to us and saying “Hold on. So something that takes me two months to do, you do in twenty minutes. So then what do I then do?”. And in those instances, what we’ve found… Because that technology has been proven really successful. So if you look at any new designs within aerospace or Formula One, they’re a bit more organic looking - they’re bio-inspired - and they’re using the algorithms that we designed at the time. And what’s happening is that people are simply designing more things. They’re focussing on other aspects of the car or the components and focussing on driving these tools using higher-level abstractions, using higher-level control. And the outcome is better, and the engineer gets to focus more on what he or she is interested in doing.

So moving back onto music, what we’re looking at doing is creating tools that for musicians, also for professional musicians, continue to, firstly, lower the barriers of entry for the ones that need it. So if you don’t have access to a studio, we have some tools that allow you to sing straight into the microphone of your mobile phone and we use some clever machine-learning to augment that signal and make it sound better. So make it sound a little bit more like you’re singing in the studio. So that tool is purely just helping people lower the barriers of entry to creating content. But also when it comes to composition and to creating assets, again, allowing musicians to focus more on the creative stuff and less on the searching, less on the mixing, less on the production-y bits.

Now, I have had pushback. I have had sound engineers say “We love sound engineering. We love those little micro tweaks.”, but I would argue that we’re not really replacing those jobs.

So we have a tool that can automatically mix and master a track. So, just to tell you what that means, if you have a song and you have a guitar playing and the piano and drums and someone’s singing, those different signals go into a digital audio workstation, and what you want to do is get the levels and the way in which these things pan to sound good.

Dubber      Right. You’re talking about actually taking in a multitrack and getting a balance between those instruments. I’ve heard of AI mastering. I’ve not heard of AI mixing. Is that what we’re actually talking about, is doing a mix?

Siavash     Yeah. Luckily, our Head of Research did a PhD on automatic mixing, so we have that expertise in-house. We are using that internally because when we create content - and our content is in the millions of songs - we can’t do that one at a time, so it has to be done at scale. So we’re doing it not just because it’s interesting from an academic perspective but because there is no other way of doing it.

Now, having said that, if you are releasing an album and you care about every micro-detail of how something is mixed down, you’ll probably still get a sound engineer to do that final mixdown. And it’s part of the fun and part of the process, so I don’t think we’re replacing those guys soon. But if you’re creating, back to the original use-case of radio ads, and our technology…

Dubber      Sausage factory music is what you’re talking about.

Siavash     Well, no. I’m not going to call it sausage factory music. It’s less prominent. So if you think, a music bed for a radio ad is a music bed. The point of the ad is the thing on top. The person trying to sell you something. The music is supporting that, so you’re not going to go “How crisp are those drums? How is the guitar coming through?”. If anything, it needs to be quite muted and in the background.

So in those things, and when we do it at scale - so you’ll work with a telecoms brand, and they’ll say “Give us two hundred versions of this advert.” - we’re not going to spend a couple of months making our way through every single version of that and mixing all of those and listening to each one. We want to be able to press a button and it goes “Boop. Here you go.”. So we have a very specific reason why those things happen. If you did want to launch an album and you had the budget for it and you really care about every single thing, then you’ll probably sit with the sound engineer for weeks and weeks and tweak every little knob because, actually, that’s part of the art, and you’ll still go ahead and do that.

Dubber      Right. So let me check that I understand this. The telecommunications company is coming to you and saying “I want music that does X, Y, and Z. Can you give me a thousand different varieties of that?”. They’re not using a tool that you’ve created, pressing a button, and it churning out the music according to certain parameters that they’ve put in.

Siavash     No, it is the second one. So they come to us, but they come to us through our software.

Dubber      Oh, I see. Okay. So basically they push the button, turn the handle, out comes some music.

Siavash     Yeah. They’re not calling us. So what you do is you say “As a brand, I have three target markets. I have gamers, I have people that are into sports, and people above a certain age. And for some reason, these are the people I want to sell to in these different geographies.”. And what you can then do is target them - which already exists, so we’re not managing how to target customers - but we then can profile the music they listen to. So let’s say a gamer in London happens to be listening to hip-hop, another gamer in Sweden is listening to trance. We’re not assuming profiles and musical tastes. We can then create fifty or a hundred versions of the music, and then we partner up with people that do the targeting, and so when you’re listening to the hip-hop version and you’re the gamer, the ad will come in and it will deliver that ad especially to you.

And we’ve shown some really good results. So increase in engagement, measured by someone clicking on an ad and going “Ooh, I actually want to buy this product.”, is two and a half times more than if you didn’t do the hyper-customised music.

Dubber      Wow.

Siavash     So what we’re really showing is the music really does make people feel…

Dubber      Sorry. What’s the control for that? Is it no music or just generic music?

Siavash     The control would be the same track, let’s say a generic pop track, everyone gets. So imagine fifty percent of the audience get this one generic pop track, another fifty percent get one of however many versions.

Dubber      The customised versions.

Siavash     We’ve run this about seventy-five million times, so this isn’t just a short survey of a hundred people. This one media study covered, I think, seventy-five million people over six months. And we ran these A/B tests to really prove out that there is clearly an increase in engagement when you do this.
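The headline figure here boils down to a ratio of click-through rates between the two arms of the A/B test. A minimal sketch with invented numbers (the study’s actual counts aren’t public) showing how a 2.5× lift would be computed:

```python
# Hypothetical A/B-test arms: impressions served and clicks received.
control = {"impressions": 1_000_000, "clicks": 2_000}    # one generic track
treatment = {"impressions": 1_000_000, "clicks": 5_000}  # customised versions

def ctr(arm):
    """Click-through rate: clicks per impression."""
    return arm["clicks"] / arm["impressions"]

lift = ctr(treatment) / ctr(control)
print(lift)  # prints 2.5
```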

Dubber      Interesting. Just so we’re clear, when you say ‘machine learning’, because you say machine learning a lot, what does it mean?

Siavash     What does machine learning mean?

Dubber      Yeah, absolutely. When you say machine learning, what do you mean by that? The reason I ask this question is because you started out by saying there are some things that machines are really good at, there are some things that you can train them at, and there are some things that humans are better at. And the word ‘learning’ suggests, to me, getting better at things. So, presumably, the things that humans are currently better at, they might not always be better at. Is that part of what you mean by machine learning?

Siavash     So in the term ‘machine learning’, the ‘learning’ bit is about training the algorithm. So let me just step back a bit. Machines can do things really quickly. If you ask “What’s two plus two?”, it’ll do it quicker than any human can. It can do it a few billion times in a second. And that’s how these CPUs work. When you have hard-coded rules - so “If X then Y.”, so “If I’m moving forward and I see an obstacle, turn left. If I see something else, turn right.” - that isn’t machine learning. That is heuristics and rules. There are a couple of ways I can make that system more intelligent.

So if you move from a simple robot on a table not colliding into objects, which we’ve… You can imagine a toy from the ‘80s doing that. You put your hand in front of it, it goes [buzzer sound], it stops, and it turns. When you move from that to autonomous vehicles in the road where you have different lighting conditions, you have rain, you have different road conditions, people jumping out of nowhere, different types of cars, glare, all those things, you don’t just add more rules because you’re never going to come up with every single rule. So you can’t say “If two people plus a pram coming this way plus it’s sunny and you’re going at twenty-seven miles an hour, what do you do in that instance?” because that’s just going to compound into… You’re going to just have versions and scenarios that you haven’t planned, and the whole thing’s going to fail.

What you instead do is you train the system. You give it examples of human behaviour, for example. You can literally track a human driving. And it says “Okay. So what I saw here is when inputs X, Y, Z, which was two people from the left, one person from the right, and this, this, this, this person did this.”. It starts to learn the system like a human would learn. So you can watch your parents drive a car around and you’ll just slowly see every time there’s a zebra crossing, they tend to slow down. They’re not telling you “We slow down thirty meters before the zebra crossing.”. You just feel that just seems to be what you do. And you have systems like that that learn in those ways by taking inputs and outputs and working out how to map the two things together themselves, and that’s the learning process. And you then reverse the equation and say “Okay, now let’s see what you can do.”, and then they’ll try and do something.
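The learning-from-examples idea can be sketched with a toy supervised learner: instead of hand-coding a rule like “slow down thirty metres before the zebra crossing”, fit one from observed (input, output) pairs and query it on an input never seen in training. All numbers here are invented for illustration:

```python
# Toy supervised learning: infer a rule from examples instead of hand-coding it.
# Observations: (speed_mph, metres_before_crossing_where_driver_slowed).
examples = [(10, 12.0), (20, 21.0), (30, 32.0), (40, 39.0)]

def fit_linear(pairs):
    """Least-squares fit of y = a*x + b from (x, y) example pairs."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = fit_linear(examples)
predict = lambda speed: a * speed + b
# The "learned" mapping generalises to a speed that was never in the data.
print(round(predict(25), 1))  # prints 26.0
```

Real systems learn far richer mappings (deep networks over images, audio, sensor streams), but the principle is the same: the rule comes out of the data, not out of a programmer’s head.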

So that’s why training data is important. So when you train machine-learning systems, you want to make sure the data is diverse enough so that the system doesn’t make wrong assumptions. So for example, if I train an autonomous car only in California where it’s really sunny, you go take it to Scandinavia, it’s going to fail. So you need to go “Right. I’m going to train it across all these different scenarios, all these different types of cars, different roads, different environments.”. And so these systems have that kind of learning.

And then relating that back to music, if you’re teaching a system how to make a song and all you do is send it deep house and say “Here’s what all this trance and house stuff looks like.” and then say “Make me some jazz.”, it’s going to come up with some really weird jazz. So you have to expose these systems to these things. So that’s why they’re called machine learning.

Dubber      A bunch of questions arising from that. Number one is, I can understand teaching a machine the rules of musicology. “This is how chords are structured, this is how rhythm works, this is how harmonies operate.”. But the culture of music… You mentioned different genres of music. Genres of music are not just alterations to the rules. They are cultural. How do you get the computer to at least seem to understand the cultures of music?

Siavash     Okay. So with music, and this is the fun thing about music, is there is a lot of maths in music. It’s all about frequencies. It’s about those ratios of those frequencies. There are music theory books.

I did classical musical training. And so you do your Grade 5 and Grade 6, and these are like “Okay, so here’s what minor chords are like. Here’s what a scale is. Here’s what arpeggios are.”. And these rules, like any other art form, are actual rules. You start with basic things. So “The chords you hit with your left hand on the piano will be in the same key as the notes you play in your right hand on the piano. If you do something else, it sounds a bit wrong.”. So you have these foundational rules that you don’t really break. What happens as you become a better musician is you can start flexing these rules.
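That “same key” rule really can be written down as maths: treat notes as pitch classes 0 to 11 and check set membership in the scale. A minimal sketch of the idea (a deliberate simplification of real harmony):

```python
# One "foundational rule": the notes of a chord should belong to the
# scale of the key. Pitch classes are integers 0-11, with C = 0.
MAJOR_SCALE_STEPS = [0, 2, 4, 5, 7, 9, 11]  # whole/half-step pattern

def scale(tonic):
    """Pitch classes of the major scale starting on `tonic`."""
    return {(tonic + step) % 12 for step in MAJOR_SCALE_STEPS}

def chord_in_key(chord, tonic):
    return all(note % 12 in scale(tonic) for note in chord)

C, E, G, F_SHARP = 0, 4, 7, 6
print(chord_in_key([C, E, G], tonic=C))        # C major triad in C major: True
print(chord_in_key([C, E, F_SHARP], tonic=C))  # F# is outside C major: False
```

“Flexing the rules”, as jazz does, would mean relaxing or extending checks like this rather than discarding them.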

So jazz is a really great example where you shift it from… Hardcore classical, so Baroque and Bach, if you look at the rules they were breaking, they really weren’t. The final chord you hit in a piano sonata is the exact same… It’s the root chord…

Dubber      The Amen resolution.

Siavash     Yeah. If you end up in the minor or in something like that, everyone’s like… I think it was that the church would come after you or something. You literally weren’t allowed to do it. And then you look at slowly moving on to Romantic music and moving on to jazz, the rules… They don’t go out the window, they become more and more complicated. So if you’d spoken to Miles Davis and said “Okay, tell me about your rules.”, he’ll come up with modal jazz and other types of music that are extensions from a classical foundation.

So there is a way of teaching systems rules. But you can imagine, if you teach something rules and teach it only the basic rules, it’s going to be really constrained and be really boring. You’re never going to come up with a ‘Bohemian Rhapsody’. You then want to also allow it freedom to explore. So a foundation of rules with the freedom to explore on top. So that’s the ways in which you teach it music theory.

Now, when it comes to the idea of genres, it’s actually an interesting one because if you don’t think about it much and you go “How many genres are there?”, some people might go “There’s like ten.”. But if you look at, for example, what we have at AI Music, I think we have 120, and even then we’re collapsing things on top of each other. Just when it comes to house music, there’s big room house and deep house and tropical house and French house. And even within those, you say “Okay, can you tell me exactly what deep house is?”, you ask ten different musicians, they’ll tell you ten different things.

So genres are kind of annoying, in a way, because there’s a lot of… And this is the fun of music. You can cross-pollinate across genres and have a hip-hop track that has lots of jazz or soul influence, and that’s where samples and things can come into it. But we have to use the idea of genres because people want to search our content by typing those things in. So they’ll say “Give me some Latin pop in a major key with a guitar.”, and we need to know what that means and then present them with what we think that means.

Dubber      Interesting. Further questions arising from that. Where do I start with this? So one thing that you said, I think, that was really telling, more about you than about music or about AI, is… The phrase you used was something like “The fun thing about music is that it has lots of mathematics in it.”, and that speaks to something about you, I think. Where does that come from for you? Where does this interest in the precision and the science and the mathematics of it all being the fun part come from?

Siavash     My background is in engineering. I love physics. And if you give me any object, I’ll try and work out how it was made, and I’ll tap it and scratch it and look at it. And so, for me, understanding something allows me to appreciate it more. And with music, when I first learned there was maths in music, which wasn’t initially intuitive, that actually got me more excited. I was like “Wow. So you’re telling me I can work out how this piano sonata that I love to play was constructed?”. Even now, I have piano lessons. I’ve been playing piano for thirty-something years, but I still have this really great teacher that comes in once a week and pushes me further and further.

But what I love to discuss with them is “Okay, so I can see something happening in this part of the piece. It seems to be a variation of what happened in the previous passage.”, and they will go “Yes. You see, so now we’re moving away from an arpeggiated left hand to block chords, and it’s moving here, and you can see how here when we hit this top note here, that isn’t the important note. The important note is the one before. This is a supporting note.”. And again, breaking these things down adds to the beauty of it because, to me, the composer probably didn’t think in that way at all. They just played it and it sounded good. But the fact that we can then reverse engineer all of this maths from underneath it and go “Wow. You just worked this out without really thinking about the maths.” is what makes it exciting.

Dubber      So the mathematical properties of the composition are intuited. They’re inherent in the composer. They’re this inbuilt knowledge rather than a calculation that goes on at the point of creation.

Siavash     Yeah, exactly. The more as humans we’ve learnt about music and have evolved composing, more and more rules and things come in. So people will go “Oh, that was the wrong note.”. Someone will compose something and hit this chord, and we might all go [disgusted noise] “What the hell was that?”, but that’s their composition. They can be like “I want that note.”, and someone will say “No. It’s in the wrong key. That note here should be here.”, and they’ll say “Well, no. I want to do this.”. So we have this idea of rules. Even a non-trained musician will still squirm when they hear the wrong note. They won’t know why, but they’ll say it sounds wrong. So I think you can build upon rules, but what’s amazing is when you just compose without really thinking about the rules.

Dubber      Sure. And what you’re trying to do is give that inherent, intuited idea of how you put those rules together… You want to be able to teach a machine to be able to do that.

Siavash     Yeah. So there are a couple of things. So in the beginning, we had more hard-coded rules in our system because we initially don’t care about creating a new masterpiece. We want to make music that people understand. So if I make a Latin pop track, or some reggae, I want people to immediately go “Yeah, that pretty much sounds like reggae.”. I’m not breaking the rules of reggae and coming up with some whole new contemporary type of reggae to push that genre forward. That’s not the point of what we do. So in that case, we can fit to the rules of the certain instruments and sounds and timbres and song structures that we know exist.

Over time, we start removing the rules and see what the system does, and it can start exploring a bit more. So we’re at that stage now where we’re able to, as an example, cross-pollinate genres. So we can take the sounds of classical music and apply them on Latin pop and see what it sounds like. And it often comes up with some really interesting results.

So we’re looking at that, and then we go “Okay. So if we do that, if we create a system that has fewer rules, we are more likely to create things that people don’t like because something might clash. And if I’m creating a million songs at a time, how do I even manage that quality control?”. So what do we then do? So, for example, how do we expand beyond the rules and then prune back the bits that don’t make sense, but still leave the gems, instead of constraining ourselves just to the rules? So that’s the place we are right now.

Dubber      Right. It’s interesting because none of this sounds like what you started out saying was the point of all this, which was this collaborative approach to listening to music. This idea that it’s a co-creation. Customisation and co-creation are not the same thing. So where does the co-creation part come into this?

Siavash     Sure. So what I’ve been focussing on the last while is around creating the underlying elements of a song. We then have something that sits on top of that we call the remix engine, and that’s where the co-creation comes in. So that allows the end user - and that could be just an individual that wants to have fun, all the way through to a big corporation that wants to make radio ads, or anything else - to then interact with that music. So that’s where they then sit on top and go “Okay, you know what? I don’t like this instrument. I want to remove it. I want to shift the genre. I want to make it faster. I want to change the key. I want to make the whole track start and stop within twenty seconds because I want to use it for a video on a post on Instagram, and my video is only twenty seconds long, and I don’t want to just have a twenty-second snippet of a track. I want to have a piece of music that almost sounds like it was made perfectly to the length of my video.”, or “I want a one-hour long mix.”. So these are the kind of tools that then sit on top of the engines that make the assets.

One of the fun things we’re working on now is fully dynamic music that changes in real time. So the applications there would be in fitness, for example, where you might fit sensors off a smartwatch to the elements of the song. So as an example, we took the running pace of someone running and mapped that to the tempo of a song, and then we took their heart rate and we mapped that to the energy of the song. And so as you’re running, if you’re running slowly to begin with, maybe you listen to some hip-hop, and as you go faster and faster, it moves into house and then ends up in drum and bass at 174 bpm. And your heart rate maps the energy level, because you could also start sprinting in the beginning but you’re actually not tired yet, so it can still be quite subdued. But as your heart rate goes up, the energy level, so the number of instruments that come in, the way they’re presented with each other all expands and you get that kind of thing. And that’s quite a fun experience as well.
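The sensor-to-music mapping described above can be sketched as two simple functions: cadence drives tempo, heart rate drives an “energy” level that might control how many instrument layers play. The ranges below are illustrative assumptions, not AI Music’s actual parameters:

```python
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def pace_to_bpm(steps_per_min, bpm_lo=80, bpm_hi=174):
    """Map running cadence (assumed 120-180 steps/min) linearly onto tempo."""
    t = clamp((steps_per_min - 120) / (180 - 120), 0.0, 1.0)
    return round(bpm_lo + t * (bpm_hi - bpm_lo))

def heart_rate_to_energy(hr_bpm, rest=60, max_hr=190):
    """Map heart rate onto a 0-1 energy level (e.g. number of active layers)."""
    return clamp((hr_bpm - rest) / (max_hr - rest), 0.0, 1.0)

print(pace_to_bpm(180))                      # flat-out: drum and bass at 174
print(round(heart_rate_to_energy(125), 2))   # mid-effort heart rate: 0.5
```

The interesting part in practice is not the mapping itself but making the music transition smoothly as these values change in real time.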

So that kind of technology allows an adaptation by the user that’s actually unconscious. They’re not actively controlling it by saying “Make it faster.”, they’re just running faster and the thing is somehow adapting to them. So we’re speaking to some fitness companies about that. We’re speaking to VR companies. Gaming, as well. So if you’re playing games and the music just interacts with your gameplay, so all the…

Dubber      Increased peril equals heightened music. That sort of thing.

Siavash     Yeah. All those types of things.

Dubber      Interesting. It sounds very much like - and this, I think, is the thing that musicians might hear as reassuring - you’re creating music for having on while something is happening, not music for listening to. Do you make that distinction?

Siavash     So I would say we have ambitions for both. We have a YouTube channel, for example, that is… You just listen to music. It’s very basic because you can’t do much with it because it’s just running through YouTube, but we do have that.

But if you look at music consumption over the last few years, so much of it is activity or mood based. If you look at playlists on Spotify, they’re all around “Is it music to study to? Is it breakfast music? Is it workout music?” versus “It’s just music that you listen to because you’re going to sit down and listen to music.”. And I know my own musical consumption has shifted. I go “Okay, what do I want to experience? I want to calm down. I’m going to write a big email. Okay, I’ll listen to some cool ambient music.”, for example, or “I’m about to go upstairs and exercise. I’m going to listen to something higher energy to motivate myself.”. So you always look at those things. So in that instance, musical listening habits have shifted.

And it has, obviously, a lot to do with our access to streaming and personalisation versus listening to an album, which, unless the artist is of a very specific genre, can jump around, so you get less control over what you hear. Or listening to the radio, which, again, unless you listen to a genre-specific radio station, gives you less control. So here, because you have minute control over every single song you want to listen to, activity- and mood-based listening is becoming much more prevalent.

Dubber      One issue arising from customised and co-created songs is, who owns it? Who owns the composition? Because it must exist. That must be something that you’ve wrestled with.

Siavash     Oh, yeah, and we still haven’t solved it. So we don’t need to necessarily own it because… I didn’t even know the beginnings of how complex music rights are until I got into this company.

Dubber      Yeah. It’s a wee bit thorny, isn’t it?

Siavash     Yeah. There’s publishing rights and there’s the recording rights, then there’s the mechanical rights, and then there are different compositions and they all have different splits, and then you assign your rights to different people across different countries who will then manage the collections for you for different… And it’s insane.

And actually, what we do is we simplify that from our perspective, which is “If you want our music, we own every single bit of our music, so just take it and go nuts. We don’t care what you do with it.”. So that’s the simple case, if you just want to take our music and use it. If you then want to add something to it, again, depending on the use-case, you can just go ahead and take it. We, again, don’t care because that’s one of the beauties of having access to our…

Dubber      There’s no temptation if, let’s say, Nicki Minaj uses one of your tracks as a backing, has a worldwide smash hit with it, makes a million, you’re not coming knocking on the door?

Siavash     That’s the thing. That would be a good problem to have, if we get to a point where we have global artists taking our content and doing that. We may have to think about whether we should have struck a better contract or a better deal in that instance.

But it’s not that dissimilar to sample packs. So if you’re making a track, you can go to hundreds of websites and download audio samples. That may be someone playing an instrument, or maybe some cool sound effect. And as part of the licence fee, you get to drag and drop it into your song and do whatever you want with it. Then if you look at the…

Dubber      Yeah, but we’re not talking about a snare hit, though. We’re talking about a three-and-a-half-minute produced song that’s been mixed and mastered.

Siavash     I agree. It’s different. Yeah.

Dubber      So it does get a little bit complicated. But my question is not just “Do you own that?”, but does the machine own that? Presumably, if you’re able to essentially churn out every possible combination of melody, harmony, rhythm and just put it out there and go “Now we own this.”, then basically nobody can compose new music anymore without the threat of a copyright challenge.

Siavash     Yeah. We actually did think of something like that in the beginning. Like “Why don’t we just do every possible melody? Within a sixteen-bar phrase, how many possible notes are there? Let’s literally make every single version and just throw it out there.”.

Dubber      That’s the copyright troll approach.

Siavash     Yeah. So the challenge with that is that copyright works a bit differently. It’s not like patents, where you say “I’m the first in. I get to claim my stake.”. Copyright is all around copying, so you have to prove that someone else heard that and copied you. So as an example, if I write a track that sounds just like ‘Shape of You’ by Ed Sheeran and I can prove that for the last twenty years I was stuck on a deserted island and didn’t speak to anyone, I’m not infringing his copyright, which is different to patents.

Dubber      The influence needs to be there.

Siavash     Yeah, exactly. The actual copying needs to be proven. Now, of course, what happens is we’re in a world that’s connected, and there’s no chance of me being able to prove that I didn’t listen to that. And so what happens is they go and listen to the acoustic similarities and what have you, and then it becomes more like a comparison. But the idea of just churning out a billion chord progressions doesn’t actually get you what you wanted to get to because you have to then show that people had access to every single version of those and somehow heard them to then be inspired to make their number one hit.

Dubber      Right, okay. Well, that makes sense. It does raise the issue… And you talked about autonomous vehicles before. And of course, when you talk about autonomous vehicles, the next step is to talk about the trolley problem and AI and ethics and those sorts of things. What are the ethical considerations for AI music?

Siavash     The topic of displacing artists, which we have covered, is top of mind. I can’t really think of much more around that. And then the other thing would be about not making caricatures of genres that we’re not experts in.

Dubber      Right. So you mean cultural appropriation kind of…

Siavash     Kind of, yeah. So let’s say a customer of ours says “Hey, go and make some reggaeton.”, and let’s say none of us… We’ve heard it, but we’re not experts in what it does, and then we come up with a really generic cartoon version of reggaeton. You could argue “Are you taking the piss? That’s not reggaeton.”. But that would also be in line with the customer then not wanting that track. So from an ethical perspective, that could be another thing as well.

Dubber      And given the nature of what you do, you would fall under the category of start-up. So you’re in the London start-up ecosystem. Presumably, you have investors who are breathing down your neck wanting particular results from things. What’s the trajectory for a company like yours?

Siavash     So I really love what I do. I’ve managed to create the perfect job. So it’s a great mixture of super geeky, techy stuff, like algorithms, using my own expertise, and I spend the rest of the day talking about music and making music. And the rest of the team as well. We’re just under twenty people, so we’re small. A good half of us are amateur musicians, and some of us even have things on Spotify and tour.

And so I don’t really want to exit or sell, which is… A typical trajectory for a start-up is a big company comes and then says “That’s great. We’ll take everything.”. I’m not saying that won’t happen. You don’t know what the future holds. But I think I’d love to continue doing what we’re doing and, as you pointed out, start to expand beyond some of the very practical, slightly boring radio advert music soundbeds - which pays the bills, and you have to make sure you’re a functional company - and move to more explorative “Okay, I want every person on the planet to be able to swipe left and right on a smartphone and create real-time shape-changing music.”, and be able to slowly move towards that place but maintaining a sustainable company.

Dubber      Is there any thought about training AIs to compose in the manner of a particular composer? So for instance, let’s say you want something a bit Radiohead-y. Can you actually program a Thom Yorke way of thinking about music creation?

Siavash     Yeah, you definitely can. Do you know about the guy who hacked into Radiohead’s servers and took all their stems?

Dubber      I did hear something about that, but I didn’t know that he used that as an AI modelling…

Siavash     No, he didn’t. He just took them and held them for ransom and said “I’m going to release these unless you pay up.”, and Radiohead said… They just released it. Those are just available on their website. We took a few of those, and we have actually taken some Thom Yorke vocals and mixed them into some Latin pop. It sounds hilarious.

But, yeah, you definitely can. You can take elements from tracks - if you get access to them - and just literally mix the two things up and see what happens, which is more basic, but the way in which we do it and find a chord progression on a piano that perfectly hits the melody that he happens to sing, those are some of the clever bits. But you can then move to the level you’re describing which is you take someone’s entire back catalogue and look at “What would they have written next?” and train, again, the machine-learning models on “What kind of instruments did they use? What kind of chord progressions and melodies did they really come up with?” and be able to come up with the next one.

Dubber      Sure. So “In the absence of a new Radiohead album, create me a new Radiohead album.” is basically the call. And we’re working towards that, is what you’re saying.

Siavash     The business case for that is smaller. You’d literally have to have a label come to us and say “Look, we own all of the Prince back catalogue. We want to make more money from this. Can we write three more albums?”, and somehow…

Dubber      There’s your ethics issue, right there.

Siavash     Right, I know. And then obviously no one can sign off on that, and I’m sure his fans would be horrified that there’s all this new content coming out that he definitely didn’t actually make. There’s some ethical issues around that as well.

Dubber      Yeah, interesting. I’m curious, just to round off, what sort of kid were you? Were these elements always there? Or the people you went to school with would look at you now and go “Yeah. Obviously, that’s where he was going to end up.”? Or is this…? Because you started your story studying engineering, and it feels like there’s something that led up to that.

Siavash     No, music has always been part of my life. As a family, my grandmum used to play the violin, and my dad plays an instrument called a santur. It’s an Iranian instrument. My mum used to sing, and the house would always be filled with music. And I’ve been playing piano since I was eight. So we’ve always been really into music. Playing it non-stop and exploring a really wide range of genres, from classical Iranian music through to rock and hip-hop and that kind of stuff.

And actually, when I was looking to do my PhD, so even though I studied engineering, there was the option back in the early 2000s to look at music then. So I had this thought of applying AI to music back then. I’m really lucky I didn’t do it because none of the tech existed to be able to do it. We would have been decades ahead of the ability to actually get to what we can do now. So it’s always been part of what I’ve been fascinated about. That’s why I’m saying I’m really lucky to be able to do what I’m doing.

Dubber      It strikes me that your story almost points out that there’s no such thing as too much music. That you can be somebody who’s really into creating music and making music and want to put that out into the world while also your day job is creating millions and millions of tracks for commercial purposes without that actually being an internal conflict of any kind. I think that’s really interesting.

Siavash     No, definitely. If you think about it, every day forty thousand new songs get uploaded to Spotify. No one’s listening to most of those, and we can’t anyway. And so the rate at which music is growing is far greater than any of us will ever be able to catch up with. So in that instance, there is already more than enough music.

I think the idea of new genres is really interesting. I would love to get to a point where I can officially say “We’ve invented our own new genre that people enjoy.”, and we can maybe name it. That would be quite fun.

But I think the process of making music is never going to change. I feel we’re going to always want to sit there with our friends and strum a guitar and sing along and come up with new things. And live music, which obviously didn’t really happen in 2020, is something I’m really looking forward to in the future.

Dubber      Brilliant. Siavash, thanks so much for your time. It’s been really interesting.

Siavash     Great. Thank you very much.

Dubber      AI Music’s CEO, Siavash Mahdavi, and that’s the MTF Podcast from here at MTF Labs. I’m Dubber. You can find me @dubber on Twitter. MTF Labs is @mtflabs on basically everything, and that is where you’ll find us. Thanks to Sergio Castillo for the additional technical production, Lance Conrad and airtone for the music, Run Dreamer for the MTF audio logo, and you for listening. Hope you’re well and staying safe. Wear a mask, wash your hands, all that stuff, and I’ll catch you back here next week. Talk soon. Cheers.
