
Christian Guttmann - Explaining AI

by Music Tech Fest | MTF Podcast

Christian Guttmann is the founder of the Nordic Artificial Intelligence Institute, an adjunct and associate professor at the University of New South Wales in Australia, and adjunct researcher at the Karolinska Institute. He’s an entrepreneur who has founded several AI startups in healthcare, finance, retail and music recommendation, and he’s a mentor to AI-based companies and startups.

He’s also the Vice President, Global Head of Artificial Intelligence and Chief AI and Data Scientist at Tieto. He joined MTF Director Andrew Dubber for a conversation about the past, present and future of artificial intelligence - and the forthcoming redundancy of the human race.

AI Transcription

SUMMARY KEYWORDS

ai, people, tasks, types, data, technologies, algorithms, create, artificial intelligence, question, detect, humans, machine, system, understand, learning, decisions, world, intelligence, practical

SPEAKERS

Andrew Dubber, Christian Guttmann

 

Andrew Dubber 

Hi, I'm Dubber. I'm the director of Music Tech Fest, and this is the MTF podcast. Now, something that has been increasingly prevalent in our MTF Labs and Music Tech Fest events in recent months is artificial intelligence, and it's something we're going to be going even deeper on in just a couple of weeks. So I thought this would be a good moment to have a chat with Christian Guttmann. Christian is a scientist and an entrepreneur. He's the founder of the Nordic Artificial Intelligence Institute. He's an adjunct and associate professor at the University of New South Wales in Australia in the field of AI, and an adjunct researcher at the Karolinska Institute, a medical university in Sweden - also in the field of AI. He's written and edited seven books and over 50 publications, and has registered four patents in the field of AI. He's founded and been at the ground level of several startups in healthcare, finance, retail and music recommendation, all of which use AI, and which have been acquired by the likes of Microsoft and IBM. He's a mentor to the CEOs of several AI-based startups and companies. And he's also vice president in charge of AI at Tieto, an IT software and service company with around 15,000 employees in 20 countries. So when it comes to AI, Christian is someone who has a fair idea what he's talking about. We sat down for a chat and talked about clever chess computers, the ability to spot cats in photographs, the inevitable robot uprising, the overthrow and imminent extinction of the human race, and whether we're ultimately okay with that. Here's Christian Guttmann. Christian Guttmann, thank you so much for joining us for the MTF podcast.

 

Christian Guttmann 

Thanks so much for having me.

 

Andrew Dubber 

So you're probably the right person to ask this question: how do we distinguish neural networks, machine learning and artificial intelligence? Are they subsets of the same thing? Are they overlapping Venn diagrams? Or are they completely distinct things, to you?

 

Christian Guttmann 

The way I look at it is that artificial intelligence is the broad umbrella term for these different subsections, and machine learning is one of those subsections. Below that you have deep learning, for example, or transfer learning or supervised learning - these kinds of approaches. And then you have technologies: neural networks, when you implement them, would be an AI technology - a tool you can start using to identify patterns, look at clusters, do reinforcement learning, or see cats in pictures, and so on. That's the way I would look at it, I guess.
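For readers who want a concrete picture of the "supervised learning" layer in that hierarchy, here is a minimal sketch in Python of a single perceptron - the simplest neural-network building block - learning to separate two labelled clusters. The data and parameters are invented toy values; nothing here comes from the systems discussed in this episode.

```python
# Minimal supervised learning: a single perceptron (the simplest
# neural-network unit) learns to separate two labelled clusters.
# The data and parameters are invented toy values.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred              # the supervision: label minus guess
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Two clusters: label 0 near (0, 0), label 1 near (5, 5).
samples = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
labels = [0, 0, 0, 1, 1, 1]
w, b = train_perceptron(samples, labels)
print(predict(w, b, 0.5, 0.5), predict(w, b, 5.5, 5.5))   # 0 1
```

The labelled examples are the "supervision": the model only ever adjusts itself in response to the gap between its guess and the correct label.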

 

Andrew Dubber 

Yeah, it's funny how often cats in pictures come up in this conversation, because my next question is: what's it really for? Because surely finding cats in pictures is something we can already do.

 

Christian Guttmann 

Yeah, exactly. That's pretty well done. With the advent of ImageNet a few years ago, you could actually show that AI is much better at recognising things in pictures - and not just cats, but pretty much everything else you can imagine, which becomes useful. For example, one case we were pushing, which my team was working on, was in retail: we have robots that go through the aisles in supermarkets and check what's on the shelves. If something is missing - it will detect bread or other items - then those items can be reshelved. Detecting those breads, the types of bread, how much is left: that's one of those typical tasks you can do with neural networks, one of so many. Another one, when it comes to this machine vision, machine learning type of approach: you can start detecting patterns in tunnels, which we also did, and see where there are abnormalities. These are tunnels that cars drive through, and you have lots of video footage of them. When you see pedestrians walking through, or accidents happening, you can flag that as something abnormal that shouldn't happen in these tunnels. Those are examples where we have been using neural networks - these types of technologies - often together with other technologies such as sensor fusion, 5G or IoT, and so on.

 

Andrew Dubber 

That sounds very much in the realm of things we can already do without artificial intelligence. And this is really one of the parts where I get confused: why are we trying to solve problems that we don't have? Or is that a step towards something that artificial intelligence will ultimately be able to do that we can't yet do?

 

Christian Guttmann 

Well, I think that's the whole idea with AI: as soon as we do it, it's something we wouldn't call AI anymore, right? As soon as you're able to perform a certain task which you previously thought was only doable by humans - because they have the capabilities to perform that task, such as playing chess, or driving a car, and so on - once you achieve that task, you put it in the category of things an AI can do. A computer can do that; a machine can do that. And then to your question - I didn't quite get it. Do you mean: why do we try to solve tasks which are not considered a problem?

 

Andrew Dubber 

Cats in pictures, yeah. People can already spot cats in pictures, so it's not a problem people are struggling with. A computerised solution to that doesn't seem like the end goal; it seems like a step towards something else. What is it that you imagine AI will ultimately be able to do that's beyond our capabilities?

 

Christian Guttmann 

So one other definition of AI, for example, is that it performs all these types of tasks and makes all these types of decisions - the tasks being physical or cognitive - which require capabilities that usually only humans possess, capabilities of a cognitive, physical or emotional nature. When you think about it - and this is the ambition, and this is what AI is really doing today - that means AI will be doing many of the tasks we do normally in our daily lives, like making coffee, or being a nurse, or being a doctor. It could do them with a higher level of accuracy, with more efficiency, for example. So now it won't be cats; it would be, let's say, MRIs. That's a typical application domain. In the medical domain, with MRIs and x-rays, there's plenty of evidence that an AI, a machine learning algorithm, can detect abnormal patterns in MRIs much, much better than a radiologist. That doesn't mean the radiologist will lose their job tomorrow, but that task of identifying those patterns is something an AI is better able to do. And you as a patient would most likely prefer such a machine, an AI, to make that type of decision if it does it with 95% accuracy, rather than a human who might be tired, who might not be as skilled, who does it at 75% accuracy. That'd be one example. Or, for example, the whole self-driving car movement. In 10 or 15 years' time - I don't know, do you have kids younger than five? No - but for those that have kids younger than five, those kids are unlikely to learn how to drive a car. They will not need to, because in 15 or 20 years' time you'll have cars that are self-driving. And with that also comes the safety aspect.
So there won't be accidents - if all cars, or most cars, are self-driving, there won't be any accidents. That's another thing I'm foreseeing, which will happen with very great likelihood. And this also brings all these unforeseen consequences, and that's why it's such a big topic in government circles and in many companies: what will happen to people whose jobs are based on transport? About 200 million - I think 400 million - people in the world are involved in transporting things or people from A to B.

 

Andrew Dubber 

Not counting people who cook in roadside cafes, or, you know, all those sorts of...

 

Christian Guttmann 

Exactly right, exactly. So there are plenty of open questions about how to deal with those types of challenges.

 

Andrew Dubber 

You know, I guess with any major technological shift, you've got a load of unintended consequences. Is there any way to mitigate against those?

 

Christian Guttmann 

I often say there are sort of two types of outcomes. One is that we look at AI as being like technologies in general: in the beginning it usually creates question marks, but in the long run it creates new jobs, new opportunities for people to find a way of adding value to how you create products or services. That's one likely scenario - when you look back at what mobile phones did, or computers, or electricity, those technologies created many more possibilities for us. The other scenario, which is also discussed quite heavily, is the future of work. What if all these tasks we do today can be automated and done much better? What will our future look like? What value do we have as humans in society? Where do we stand? What is our uniqueness? Some people jump to things like: oh, we can do lots of art - the basics will be given to us, because food, transport and medicine will be automated, and so on. But even the creative part can be automated. There are lots of examples - Professor David Cope, for instance, used AI to compose music. He taught his AI system Johann Sebastian Bach's compositions - it took him seven years - but it imitated Bach's style so well that it created new compositions. And it became so good that it was indistinguishable from a human composer, for example.

 

Andrew Dubber 

I would probably argue that the next step would be not to create music in the style of Bach, but to create music in the style of the AI - and that's perhaps where genuine creativity comes in. Do you think that's achievable?

 

Christian Guttmann 

Well, when these discussions pop up - when I started my journey doing AI, 25 or 30 years ago almost, I got very fascinated by exactly these questions. What is consciousness? What is creativity? What are the unique qualities that are perhaps the biggest challenges to create in a machine? The first question you have to ask yourself is: define what you mean by creativity. Because if you cannot define that - and it is, by the way, very difficult to define creativity, consciousness, even intelligence - then it will be very hard to say whether it's possible to create such a thing or not. But in principle, I see a lot of signals that we are going in this direction. Everything that you can describe and define - I think there's a very high likelihood that a machine will be able to do it in one way or another.

 

Andrew Dubber 

What do you mean by intelligence?

 

Christian Guttmann 

Yeah. It is the ability to make decisions and perform tasks based on those emotional, cognitive and physical abilities and capabilities that humans usually possess - the way you solve problems, the way you make sense of data. I look at us - and this is based, by the way, on my psychology degree as well - as essentially biological algorithms. You take in, I take in, lots of data every day, and we take actions based on that data. Which goes back - I remember I had many discussions and thoughts about the topic of free will. Do we actually have free will, or is it a result of who we are biologically, plus certain random effects we are exposed to? So I think we are biological algorithms, essentially taking in data - impulses and observations - and then, based on that, reacting in our environment and with other people.

 

Andrew Dubber 

Not everybody thinks that. Roger Penrose, for instance - The Emperor's New Mind is basically predicated on the idea that we're not reducible to algorithms, that there is something more complex, more ephemeral, going on within the human mind. You're essentially more mechanical about it?

 

Christian Guttmann 

I mean, there are sort of two points to this. First, there is no evidence for that being the case. Science has been trying for a long, long time to find that little extra in us, and the life sciences by and large do think of us as being exactly those types of algorithms, if you like - mechanisms, bio-mechanisms, in our brain and lots of other mechanisms we have in our body. So I think the evidence suggests at the moment that there isn't some extra thing; we are just extremely complex, and some very magic, fantastic things come out of us, either as individuals or as groups. That would be my response, but I'm happy to have a discussion about it.

 

Andrew Dubber 

Well, I was going to say: I'm drawn to the Roger Penrose argument purely because I feel like it's the more optimistic position - that there is something special about us, something that isn't reproducible by machines. But that might be delusional. I mean, is yours necessarily a pessimistic view of the direction of technologies?

 

Christian Guttmann 

No, I don't think so. The thing is, I find it terribly fascinating if something is so complex that it creates individuals like us - having this discussion now, having the dialogue, understanding things, creating music, making great food. That's very, very fascinating. And finding the mechanisms behind that is certainly, for me, more fascinating and intriguing - even if that seems cold. That's my position. So yeah, that's my angle into this.

 

Andrew Dubber 

Right. To what extent can we be made redundant, and to what extent will that be your fault?

 

Christian Guttmann 

Yeah, exactly. So I think that is the question, right? When it comes to many of the jobs and tasks of the future, I think it is quite likely that in the coming decades we'll see many of these typical tasks we do today disappear, from the point of view that we need to do them to earn money and make a living. I think that will happen more and more. We are already at a stage where I don't think many people die of starvation, at least in Europe at the moment, and overall poverty levels have come down dramatically in the world. So we are moving in this direction. And this is why there's this very deep question, which I've discussed with colleagues in the community - like Francesca Rossi, for example, from IBM and the university in Italy - where we look at what role we should play in the highly digitised, highly data-driven future we will be living in, in 20 or 30 years' time. I think it's a very open question, and I'm happy to debate it. And then - should I take responsibility for that scenario? I do think this is certainly something that is happening; many people contribute to this transformation, and I think it's inevitable. There is so much need to make sense of that data, and the fact is that you can use it for good things, such as improving health, or reducing cancer risks, and all these sorts of things. I think people want that, and I think that's important.
So I would hope that if I'm responsible for certain things becoming redundant - and I'm not saying that people will become redundant, but certain tasks - those tasks will be the ones that are dangerous to the people performing them today, or so complex that they don't deliver the value to individuals - to patients, for example - that they could if a machine did them. If those types of things disappear for people, then I'd be very happy, I think.

 

Andrew Dubber 

Right, okay - so that sort of answers my question about what you hope will happen as a result of your work. But obviously there's an economic dimension to the impact of AI, and there's also an ethical dimension: not just can we do these things, and how do we make it sustainable from an economic perspective, but should we do these things? How do you make those decisions, and who should be involved in that discussion?

 

Christian Guttmann 

So first, I think it certainly affects all of us, so I always encourage as many people as possible to be part of this discussion. No matter who you are and where you are - because it affects us all, it should be very important and very interesting for any individual to participate in this discussion and get to know what it is all about. That's one. The second part, I always say, is that when you have these discussions, make sure you have someone in the group or on the panel who understands the topic in depth. You would not discuss, let's say, brain surgery or cancer research without an expert in that area in the group - that's the basic message. And then, who else would you want in this discussion? I think in the first instance, for the next 10 or 20 years, you would want those industry representatives who would be most affected or impacted by these technologies. And also representatives from certain communities - people who would be most impacted by these technologies, perhaps more fragile, or maybe not so digitally enabled, and so on. That, I think, will be very important.

 

Andrew Dubber 

Because it's interesting to me that Google famously put together their AI ethics committee recently without anybody on it who had studied ethics, and with some people with, let's say, questionable ethics. So to what extent can we trust industry to take the lead on this?

 

Christian Guttmann 

Hmm. So that's a good question. We published, for example, the Tieto AI guidelines, which I was very frank about - I wanted them to be in place. I said it's important for me to be able to sleep at night: I want everyone who works with AI to know what the potential effects would be. That, I think, is very important. But I said at the same time, we are not going to be the ones that have the final word on it. I just don't want to wait for the government, or regulators, or someone else to come up with solutions, which might take years to come into effect - so we took a very proactive step in that sense. But should we be the ones in charge of it? No, I don't think so. I think it will perhaps take a similar route to what happened in the medical field. Perhaps 200 years ago, maybe even 100 years ago, there were many charlatans who would essentially say: here, take this and you will be healed. That's why we have the FDA, or Läkemedelsverket in Sweden, more or less the equivalent, which started looking at what medication actually does; and we also have the scientific method - double-blind trials, which you run three times to test these things. I hope it won't be similarly complex, because that also holds the medical field back a little in its speed of creating new solutions for diseases. But I think something similar will come, so that not everyone will be using data that can impact people's health or people's fortunes in ways that are not understood to some level. That doesn't mean you will understand it completely, but you will need to understand it to a level where it becomes useful, and where you understand at least some of the consequences.

 

Andrew Dubber 

Sure, sure. You said you wanted to be able to sleep at night - what keeps you awake? I mean, what's the nightmare scenario?

 

Christian Guttmann 

Well, there are other things that could keep me awake at night, but in this context - when you have data scientists, when you start applying AI or these data-driven methods - it would be that they are used for purposes where you haven't understood the consequences of these technologies. People get very enthusiastic, which happens a fair bit; we're really at an age where this is so new, so fresh, that people see: wow, it can detect cats, or MRIs, or whatever it is. But what about the data? Is the data balanced, meaning that what the system detects will be equally valuable for anyone who uses it? Is it safe enough against, for example, something called adversarial attacks - where you can fool a recognition system into understanding something different about the world, as with self-driving cars, and so on? So when you have people enthusiastically rushing into this field, which happens a fair bit, the application of these technologies is not thought through. I compare this a little with medicine, because I'm also at the Karolinska Institute: I think no one in their right mind would think about giving someone a book and a quick lesson on how to do brain surgery, then just hand them a scalpel and say, well, go for it - next week there are some people coming in. That is something that worries me, because in the medical world you need a fair bit of track record before you come to that point: you need a pretty thorough education to understand holistically how a person functions, if you like, and then you need lots of practice, and then you get the scalpel under the supervision of people who are experienced.
And in the data science field at the moment it's like: well, there's data - go for it, if there's some value that potentially comes out of it. But again, in immigration cases, when people apply for visas, or when you give people access to certain services, you need to make sure everything is done as fairly and as robustly as would be considered reasonable, to the highest possible standards.
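To make "adversarial attacks" concrete: here is a tiny Python sketch of the idea on a hypothetical linear classifier. The weights and inputs are invented toy values, but real attacks on image classifiers work the same way in principle - a small, deliberate perturbation flips the model's decision while the input barely changes.

```python
# Sketch of an adversarial attack on a hypothetical linear classifier.
# Weights and inputs are invented toy values; the point is that a small,
# deliberate per-feature nudge flips the decision.

weights = [0.8, -0.5, 0.3]              # a "trained" linear model (toy)
bias = -0.2

def score(x):
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def classify(x):
    return 1 if score(x) > 0 else 0

x = [0.6, 0.1, 0.2]                     # original input
print(classify(x))                      # 1

# Fast-gradient-sign-style step: nudge each feature against the sign
# of its weight, pushing the score towards the other class.
eps = 0.3
x_adv = [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]
print(classify(x_adv))                  # 0
```

For a deep network the perturbation direction comes from the gradient rather than being read directly off the weights, but the mechanics - and the safety concern - are the same.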

 

Andrew Dubber 

To that end, there are a lot of people talking about the transparency of the decision-making process - we want a sort of AI that is explainable. To what extent is that realistic?

 

Christian Guttmann 

Well, in some cases it's realistic and very feasible to make it transparent and explainable - particularly when you have AI technologies that are not these black-box approaches, such as deep learning networks, which are much less transparent and much harder to understand. If you have decision trees, for example - I did some research and some work a few years back whereby we balanced the provenance, the transparency, against the accuracy. You can have algorithms where you can see why and how they make their decisions, but you have to weigh that against the accuracy they deliver. So there it's quite visible. Having said that, I'm aware that in Europe there are quite a lot of efforts to make these black-box approaches more explainable and more transparent too. And I know DARPA in the US is now spending, I think, 2.5 billion or something to make these neural networks more explainable. But in some cases you don't even want to make them transparent, or publish them, because that also makes them more vulnerable to manipulation, right?
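The contrast Guttmann draws is easy to see in code. A hand-written decision tree - this one is a made-up toy for a loan decision, with invented thresholds - can report the exact path behind every decision, which a deep network cannot:

```python
# A hand-written toy decision tree for a hypothetical loan decision.
# Thresholds are invented; the point is that the decision path itself
# is the explanation, unlike a black-box network.

def decide_loan(income, debt_ratio):
    path = []
    if income > 30000:
        path.append("income > 30000")
        if debt_ratio < 0.4:
            path.append("debt_ratio < 0.4")
            return "approve", path
        path.append("debt_ratio >= 0.4")
        return "review", path
    path.append("income <= 30000")
    return "decline", path

decision, path = decide_loan(45000, 0.25)
print(decision, "because", " and ".join(path))
# approve because income > 30000 and debt_ratio < 0.4
```

Every output comes with a human-readable chain of conditions - exactly the kind of provenance that gets traded away when you switch to a higher-accuracy black-box model.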

 

Andrew Dubber 

Sure. My example for this, I guess, is something like AlphaGo. Ultimately there might have been a decision tree at work, but what it was working on was essentially intuition based on data - at least that's what it looks like from the outside. There's a massive amount of data, because it studied all of these historical games, but why the stone was put exactly there at that point in the game seemed unfathomable - and it seemed like that was the reason it was so good. So are we giving something up by saying: no, no, you need to have an explainable decision path? Is something lost in that?

 

Christian Guttmann 

It is of course hard, because at the end of the day, what you have today is a big, massive mathematical formula - a matrix of many weights and many, many data points. That's all you have, and it's a highly simplified version of what's in our brain. So you couldn't really explain precisely what's happening when it makes a particular decision, because you would have to break it all down. But the case I think was more mind-boggling is of course AlphaGo Zero, which didn't learn at all from the wisdom of humans - it just created its own understanding of how the game should be played. That was a much more mind-boggling situation. I think it was move 42 in the game - a move never, ever played before by any human being, and apparently the decisive move in the entire game. It was extremely unintuitive; experts of the game said, well, why did the system...

 

Andrew Dubber 

What sort of idiot would put that stone there?

 

Christian Guttmann 

Yeah, along those lines. So that's one part. And the second part is, as you said, it cannot even be explained. What the system, AlphaGo Zero, got was not the data of humankind - it received only the rules of the game. And this is really quite a fundamental insight. We are now really looking into how we can use such a technology where we just give it the rules of the game, the rules of the problem, and it figures out what the solution is, rather than teaching the AI on lots and lots of data, which today is the normal way of approaching it. That's why there are all the discussions about data and how much of it you need, and all these types of things.
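The "big mathematical formula, a matrix of many weights" that makes such systems hard to explain can be sketched in a few lines. This toy two-layer network (all numbers invented for illustration) produces a prediction, but nothing in the matrices themselves says why:

```python
# A toy two-layer network forward pass: the model really is "just"
# matrices of weights, which is why a single prediction is hard to
# explain by inspection. All numbers are invented.

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def relu(v):
    return [max(0.0, x) for x in v]

W1 = [[0.2, -0.4, 0.7],
      [0.5, 0.1, -0.3]]     # hidden-layer weights (toy)
W2 = [[0.6, -0.8]]          # output-layer weights (toy)

x = [1.0, 0.5, -1.0]
hidden = relu(matvec(W1, x))
output = matvec(W2, hidden)
print(output)
```

A production network has millions of such weights across dozens of layers; the arithmetic is identical, only vastly larger, which is the explainability problem in a nutshell.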

 

Andrew Dubber 

I'm kind of curious how we got here. I mean, there seem to have been some really big leaps in AI - decade by decade, for instance. What was the journey that got us here? And are we due for another big leap?

 

Christian Guttmann 

As you know, AI has been around for quite some time - about 70 years. The efforts started in the early days with Alan Turing, who in 1950 published the paper asking "Can machines think?". So it started back then. In recent times, what got us here today, and why everyone talks about AI, is pretty much what happened with deep learning - recurrent neural networks, CNNs. The insight was that these neural networks, which had started to fade away - the AI community said, well, they don't converge, they're complex, why do we use them? - suddenly started to show, after some tweaks, immense quality improvements in how well they detect certain things and how quick they are. And that became applicable for many companies like Baidu, Facebook, Google and so on - not just for recognition of items and books, but also for very practical industries, where you recommend items, like on Amazon, and so on. That really brought us to where we are, and now everyone talks about AI. But as I said earlier, it is only one part of the entire AI area. And when you ask what the next leap forward will be: one part is likely going to be in machine learning. We now have this so-called supervised learning, where you need a lot of data; but I think one part will be this type of deep reinforcement learning, where we need much less data, and where the machine learns after just a few steps, a few iterations. I think that is where a key will lie in making these systems even more effective. When you learn something new and I show you how to pour water into a glass, you learn it very quickly - after maybe one iteration. An AI system today would need 100,000 iterations of that to even start grasping what it is. And that's just recognition - it's not reasoning, it's not understanding what's happening; it's just recognising what needs to be done.
So I think that will be a big breakthrough, because it would essentially mean we are not relying on huge amounts of data - we can show the system just once, or a few times, what needs to be done. And that will be a big breakthrough also in regard to what we currently have in Europe, which is GDPR, where the data is not flowing as fluidly as in, let's say, other places in the world, like China.
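The "give it the rules and a reward, not a dataset" idea behind reinforcement learning can be sketched minimally. This toy Q-learning agent - the corridor environment and all parameters are invented for illustration - is told only the transition rules and a reward at the goal, and learns a policy by trial and error:

```python
import random

# Minimal tabular Q-learning sketch. The agent receives only the
# "rules" - state transitions and a reward at the goal - and no
# labelled data. Environment and parameters are invented toy values:
# a corridor of 5 cells, where reaching cell 4 yields reward 1.

N_STATES = 5
ACTIONS = [-1, +1]                      # step left / step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):                    # episodes of trial and error
    s = 0
    while s != 4:
        if random.random() < 0.2:       # explore occasionally
            a = random.choice(ACTIONS)
        else:                           # otherwise act greedily
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == 4 else 0.0
        best_next = max(q[(s2, act)] for act in ACTIONS)
        # standard Q-learning update (learning rate 0.5, discount 0.9)
        q[(s, a)] += 0.5 * (reward + 0.9 * best_next - q[(s, a)])
        s = s2

# Inspect the learned greedy action for each non-goal cell.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(4)]
print(policy)
```

Note the contrast with the supervised setting discussed earlier: nothing here resembles a dataset of labelled examples - the agent generates its own experience from the rules, which is the same family of idea that AlphaGo Zero scaled up (with deep networks and self-play) from the rules of Go alone.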

 

Andrew Dubber 

Right. Because, I mean, in terms of the amount of data available - does that put us at a major disadvantage? And do you think there's a trade-off that could be made there that would make things better?

 

Christian Guttmann 

From the perspective of building efficient AI systems quickly, it's certainly a massive challenge. That's without doubt, I think.

 

Andrew Dubber 

privacy is a problem as

 

Christian Guttmann 

well, for that particular purpose. And don’t get me wrong: I think the idea of GDPR is completely correct, right. The intention is very much correct. But I also think that the level of abstraction, what individuals now need to do with GDPR, is above most of us. I mean, I read part of the GDPR, but I think very few people do, and when it comes to the practical realities of, you know, clicking buttons and agreeing to terms and conditions, very few people find that practical. I probably have hundreds of different accounts across, you know, hotels and airlines, and I don’t even know those profiles exist anymore. I’ve forgotten most of them, let alone exercised my right to be forgotten, or GDPR having any effect. So, you know, in fact, I go around and sometimes test these sorts of systems. I was part of a gym chain in Sweden; I became a member for some time, and then I wanted to test if I could remove myself from the system, right? So I said, well, delete everything. I sent an email, I called them up: delete everything. And it did not happen. I could still log in, like, three months later, as an example. So it’s in principle a good idea, but I think it needs to be much more practical; it needs a different layer of abstraction to become useful.

 

Andrew Dubber 

Because here in Sweden, of course, everything’s tied to personal numbers, and surely that’s difficult to disentangle. Is that another problem? Or is it, you know, part of the solution?

 

Christian Guttmann 

Well, I mean, it’s part of a solution when it comes to building systems that will be more effective: for example, predicting individuals’ health status, or their needs when it comes to financial loans, or those types of practical things.

 

Andrew Dubber 

But my grocery shopping is tied to my number. So, you know, if I buy a bar of chocolate, that’s tied to the same number that’s tied to my medical records; there’s going to be that kind of intersection of data. Should that all be one pool of data? And should machines that don’t reveal that data to other human beings be able to make decisions based on that information?

 

Christian Guttmann 

Well, if you choose to do so, then I think it’s fine. I mean, if you as an individual are completely aware of what these processes do, and how the data is pulled together, and if you feel that the value you receive as a result of those data sources coming together makes sense to you, then I think it should be done. It should then, of course, not be used for things outside of what you have agreed the data should be used for, right? So it should only be what you agreed to have done with it, if that makes sense.

 

Andrew Dubber 

Yeah. But there’s a whole lot of uses that I’m not going to be able to imagine, that other people will be able to profit from, presumably.

 

Christian Guttmann 

Not if you don’t agree with it. But what I’m saying

 

Andrew Dubber 

is that I don’t know what those uses are, so I have no basis on which to agree or disagree. Why would they need to then explain what they are, what these things would be? No, you tell me. I mean, we already have Facebook in the world. And Facebook, apart from the odd three-to-five-billion-dollar fine, is broadly a legal entity that does a whole lot of stuff with our data, and we just have no idea what that is. So we don’t have the opportunity to agree or disagree. And there are algorithms and neural networks at work behind the scenes. Yes. And we don’t know what they do. Yep. Good or bad.

 

Christian Guttmann 

Okay, if that’s the case, that shouldn’t happen, right? And my understanding is now that Facebook, Google, many of the companies, American companies in particular, but also companies from other countries, have, you know, they got a very hefty fine, and I’m sure it made a difference in their thinking. And I do understand, really,

 

Andrew Dubber 

because it’s like one month of their revenue; it’s not even a dent in their annual income. Do you think that’s going to change their behaviour in any way?

 

Christian Guttmann 

I think it has changed their behaviour. I think the pressure is higher. I mean, you’re losing trust with the user, so that’s another factor, right? You have, of course, the authorities, which will start putting fines on you and putting pressure on you, saying, well, you can’t even operate in Europe if you don’t follow these laws, and if you don’t have your data centres here, and so on. That’s one part. And the other is, of course, that the public becomes increasingly aware of it. So I think they run a big risk of losing the customers’ trust. I think that’s the most horrible scenario for them. So the pressure is increasing. Did it change everything from one day to the next? Probably not, right. But I think it’s going more in this direction.

 

Andrew Dubber 

I think we’re becoming increasingly aware that we’re living in a dystopian technological society, to some extent, but I don’t know if there are many choices a lot of us have in response to that. So yeah, it’s a tricky one from that perspective. So, apart from how we got here with AI, how did you get here? What’s your story? Was this a childhood dream, to become, you know, an expert in computer thinking? Tell me about the child Christian. What were you taking apart to make things work?

 

Christian Guttmann 

Yeah, I started really early getting into the computer world. I think I have sort of two parts to my character. One is that I’m very much engineering- and science-minded. You know, I find that terribly fascinating. I started programming when I was 13, the good old Commodore 64 back in the day, right? And that was good fun. That was one part; it got me really interested. I also started watching certain movies which were perhaps not known for their enormously great acting skills, like, for example, Knight Rider. I’m slightly embarrassed to say that, but the point there wasn’t the great acting. I was very fascinated by this car which had intelligence to support people, right? It was one of the first examples in the science fiction world where you had an AI system having a personality, and having the ability to communicate and understand its environment.

 

Andrew Dubber 

And I think that’s the real difference to it, actually. You know, this goes back to your definition of intelligence. It was making decisions, but it was actually aware; it was sentient. Is that the target? Is that what? That was

 

Christian Guttmann 

certainly one of the first questions I had when I started my PhD, right. One of my PhD supervisors was saying, well, Christian, you can’t just talk about consciousness, that’s such a loaded term, right? You need to start specifying: essentially, awareness, essentially having self-awareness. And it gets you very quickly into philosophical topics, right. At least for me, I realised that it is just so hard to define; that’s why it isn’t defined, right? So that’s certainly one fascination: how can you build some entity that has that awareness, that has the ability to make connections to others and, you know, sets its own goals and takes its own initiatives? Certainly one of the many fascinating topics at the time. And then the other part of my personality, let’s say, is to find out how we humans work, and that is what brought me down the path of studying psychology, you know, understanding when and how we make decisions. I think one reason why we are intelligent is that we need this ability to work with each other. I think that is a very big point. And I use these lessons learned in psychology and try to transfer them into how we build AI systems, essentially. So that was something that definitely fascinated me early on, I’d say.

 

Andrew Dubber 

What did your parents do? And how did that affect where you ended up? Hmm.

 

Christian Guttmann 

So in those days, I think I was one of the few in the whole school who had this slightly nerdy computer path. And my parents, overall, from a practical point of view — I think parents tend to be practical: what’s my son going to do, what will my kids be? So they were happy to see, overall, that one part of what he’s doing is in computers, and everyone tells us that stuff will be the future. So they were, I think, generally quite happy. When it came to the whole AI part, it was always more difficult to explain on a high level, so I broke it down. My PhD thesis, for example: back in the day I explained it to my mom and my grandma, which was always a good exercise. By the way, if you do a highly complex thing, break it down to something very practical. But it turned out to be quite explainable: it’s about how you have several AIs and how they make decisions with each other in an optimal way, right. So they thought, well, it sort of makes sense, at least the way I explained it. So they were quite content with me moving in this direction. Were

 

Andrew Dubber 

they in the kind of scientific world at all?

 

Christian Guttmann 

Not so much, no, actually. My mom worked as a chef, for example; she was more hands-on when it came to food and cooking and those types of things. And that’s maybe in part why some of my papers had examples in cooking: how would you make an AI collaborate with others to create a dish, or something of that sort? So I wouldn’t be surprised if that had an influence, you know, from where she comes from, but it was more practical.

 

Andrew Dubber 

Sure, sure. Where were you growing up? Because you’re sort of German-Australian, and you’ve mentioned Dubai. What’s the geographic story?

 

Christian Guttmann 

Yeah. So, I mean, I feel I’m still growing up, in the sense that, you know, every experience that I have is influencing me and how I’m thinking about the world, about my life. Take

 

Andrew Dubber 

more of a data approach. Yeah,

 

Christian Guttmann 

exactly. Right, exactly. Maybe it changes, develops the algorithm. Exactly right, spot on. But physically, I grew up in Germany when I was young. And then I also stayed in Luxembourg a fair bit, because I had good friends there that, you know, I was working with from a computer perspective and an AI perspective. And then I moved to Australia and did much of my education there. So before I came to Sweden, there was really a lot in Australia, and I saw much less of what happened in Germany over the last 30 years or so, I’d say. And then, as you said, Dubai, and I was in Japan for some time, and those experiences are really orthogonal in my world. Because when you live in Western countries like Sweden, Germany, Australia, the US, it’s by and large about the same value systems, right? But if you start living in the Middle East or in Japan, for me, at least, it was a big thing. I became very humbled, looking at the value systems that exist there, and seeing how people live, and how they prosper, and how they live with each other, which is just very different. It was a true culture shock, you know, I think. And that’s something that humbled me, and I think it gave me a very different perspective on what’s happening, for example, in Sweden, or in the Nordic countries, and so on. Has

 

Andrew Dubber 

that affected how you approached AI?

 

Christian Guttmann 

I would say so. For example, and I hope I get this together, but in Japan, and I think also in China, the view on how AI will become part of daily life is much more inviting. They have a much more open concept of what could be part of society. That’s why robots, for example, are not considered to be a strange externality that will somehow invade our life, being cold, and so on. Those are some lessons that I learned over time. So the approach to those types of technologies is much more inclusive, in some sense, than in Europe. And that also makes me, when I think about AI ethics guidelines and so on, try to have a more global outlook on those types of guidelines, making sure that we have a representation of different thoughts and different value systems, as much as makes sense, you know, to find a common view. That’s certainly one thing. And then, just from a practical perspective, you see that countries like Japan have been investing very, very heavily in robotics. They see the applications in hospitals, where a robot can lift a patient from one bed to another, or, for example, drones or robots in rescue scenarios. So they’re much more open to integrating these types of technologies into the environment, I would say. So that’s certainly an eye-opener, and I understand this much, much better after having been in societies where I needed to adapt to, understand and accept different value systems, right.

 

Andrew Dubber 

One of the things I’ve noticed while you’ve been talking about AI is that the intelligences, the algorithms in question, seem to be task-specific. Find pictures of cats; move a patient from bed to bed. Can we generalise intelligence in computers?

 

Christian Guttmann 

That is one of those big questions, right? So, for the listeners: there’s this concept of narrow artificial intelligence, which you just described. It’s an AI that does a task really, really well, maybe really, really quickly and very accurately. That’s narrow AI, like, you know, identification, recognition. And then there’s the term artificial general intelligence, which you’re probably referring to, which is an AI at a much higher level, able to do all the tasks that we do, at the same level. And then there’s actually also a term which lies in the middle, called artificial broad intelligence, and that’s what we will be aiming for next, whereby you can start having a system do two or three tasks together really well. So instead of just focusing on one thing very well, we start combining them: having a chatbot that can, for example, also detect whether you have certain needs, or whether you might have a problem with your health, or something of that sort. So you start combining certain AI technologies within this framework of AI. But is the question also, like, can we achieve that? Or

 

Andrew Dubber 

I guess, I mean, an example would be, you know, there’s an AI that can spot a frisbee in a photo, but it wouldn’t have any clue what a song is. So can we go

 

Christian Guttmann 

there? You can divide roughly these two areas, where you have recognition of things, which is what you said, you can see a frisbee, and then you have reasoning, which is completely different. Seeing that something exists doesn’t mean you know what it is. And there’s a lot of work on that too, but we’re much further away from having a big breakthrough in that area. Having said that, you know, there are these decision-tree algorithms and so on, or, for example, planning, this big area of artificial intelligence called autonomous planning, whereby you let systems plan out a very, very complex set of actions, and they’re essentially able to react to changes in the environment. So a system needs to have a certain level of awareness, right, and the reasoning as to how you would create this plan is sort of in the plan creation. So it’s not like there’s nothing when it comes to reasoning about, you know, the world of the system, or what it means to throw the frisbee or something. But we’re certainly much further away from a breakthrough there, you know.
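The autonomous planning Christian describes can be sketched in a few lines. This is my own toy illustration, not any system mentioned in the conversation: a STRIPS-style state-space planner where a state is a set of true facts, actions have preconditions and effects, and breadth-first search finds an action sequence reaching the goal.

```python
from collections import deque

# Each action maps to (preconditions, facts added, facts deleted).
# The domain (glasses, taps) is invented for illustration.
ACTIONS = {
    "pick up glass": ({"glass on table"}, {"holding glass"}, {"glass on table"}),
    "fill glass":    ({"holding glass", "at tap"}, {"glass full"}, set()),
    "walk to tap":   (set(), {"at tap"}, set()),
}

def plan(start, goal):
    """Breadth-first search from the start state to any state containing
    all goal facts; returns the shortest action sequence, or None."""
    start = frozenset(start)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:          # all goal facts hold
            return steps
        for name, (pre, add, delete) in ACTIONS.items():
            if pre <= state:       # preconditions satisfied
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

steps = plan({"glass on table"}, {"glass full"})
print(steps)
```

Because the planner searches over action effects rather than memorising a fixed script, it would find a different plan if the start state changed, which is the “reacting to the environment” aspect Christian points at.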

 

Andrew Dubber 

And the next level of abstraction beyond that, I guess, is metaphor and, you know, poetic thinking, that sort of thing. Which I guess comes back to our earlier conversation of, essentially: are the musicians safe? Yeah,

 

Christian Guttmann 

yeah, exactly. Well, safe from what, you know? I mean, that’s the thing; maybe it will be a liberation too, right. But yeah, you’re right. There was an algorithm, I try to remember the use case, it was actually detecting sarcasm in texts and jokes, right. And it was detecting it, I think, at 80% accuracy in certain documents, and therefore would highlight in which documents it appeared. There was a very nifty use case for that type of thing. I think it might have been used in a company, where you would see, you know, in what documents it was written, or was it emails? I don’t know; maybe we go into territory we don’t want to go into. But it was essentially detecting, it could learn, what that meant, you know.
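In the spirit of the sarcasm detector Christian recalls, here is a minimal bag-of-words classifier. It is my own sketch, not the actual system; the training sentences are invented and the real detector would use far more data and a stronger model to reach the 80% figure he mentions.

```python
import math
from collections import Counter

# Tiny invented corpus of labelled sentences.
TRAIN = [
    ("oh great another meeting just what i needed", "sarcastic"),
    ("wow what a surprise the build failed again", "sarcastic"),
    ("brilliant the printer is jammed again", "sarcastic"),
    ("the quarterly report is attached for review", "plain"),
    ("please confirm the meeting time for tuesday", "plain"),
    ("the build passed and the release is scheduled", "plain"),
]

# Count word occurrences per class.
counts = {"sarcastic": Counter(), "plain": Counter()}
for text, label in TRAIN:
    counts[label].update(text.split())
vocab = {w for c in counts.values() for w in c}

def classify(text):
    """Naive Bayes with add-one smoothing and equal class priors."""
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        score = math.log(0.5)
        for w in text.split():
            score += math.log((c[w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("oh wow another jammed printer brilliant"))
```

As Christian notes, this is recognition rather than understanding: the model learns which surface words co-occur with the sarcastic label, not what the sarcasm means.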

 

Andrew Dubber 

Yeah. But then again, it’s a further step to actually then being sarcastic itself, when it felt like it. So I think that’s an interesting thing. I guess the final thing I have, from an industry perspective, is: is there an industry that’s going to be untouched by this? Are there people out there going, well, you know, at least my business is safe from this, because we just keep making what we make? Or is that kind of a naive perspective?

 

Christian Guttmann 

I think in the long run it’s a naive perspective. So there will certainly be certain areas which will be safer, let’s say less touched, but I think all of them will be touched; I think there’s no doubt. And the general rule of thumb is that if the tasks you’re doing are very data-driven and somewhat repetitive, it’s much more likely they will be impacted earlier. And those that require, let’s say, much more variability in how you perform them, and might really be based on the situation that you’re in, might often be safer. So, for example, with doctors: when a medical doctor makes a differential diagnosis, they usually, and they always say should, do that based on data. They would look at your blood values and check you out and so on, and slowly work their way towards a diagnosis, right. That’s very data-driven. Whereas, for example, a nurse who deals with a patient on an individual basis needs to be much more aware of very many idiosyncratic situations, and people’s personalities, and so on. So their job, let’s say, would be much further in the future to be impacted, in some sense. But overall, I think it’s pretty clear, and the estimates by many analysts, and many of my colleagues such as, for example, Kai-Fu Lee, and many others that make estimates, and I agree with them and have my own calculations, are that perhaps in the next 10 or 12 years, 50% of jobs will be quite heavily impacted, if not completely replaced, even. But again, in the next 10 to 20 years, those replaced jobs will, at least at first, be replaced with, I think, much more exciting roles and tasks, right. But in the very long run, I think we really need to sit down and rethink our role in society. Yes.

 

Andrew Dubber 

I’m gonna end with a philosophical question, and I think I know how Kai-Fu Lee would answer this, but I’m interested in your answer, which is: if we are simply super, super complex algorithms, and we’re artificially creating other super, super complex algorithms that may in many ways replace us, does it fundamentally matter if we do ultimately get phased out?

 

Christian Guttmann 

Yeah. Matter to whom? That would be the first question, that counter-question.

 

Andrew Dubber 

Well, it matters to me. But in the grand scheme of things, does it make any difference if it’s us here, or them?

 

Christian Guttmann 

So, if you ask me personally, or if you were to ask me from an evolutionary perspective, right, this would be like another step forward on the evolutionary ladder. Although, arguably, this would actually be breaching the entire Darwinian theory, because this is not how evolution works, right? You’re not meant to be

 

Andrew Dubber 

selected. There’s no natural selection there.

 

Christian Guttmann 

Exactly right. It breaks the whole idea of Darwinism and evolution. But if that was to happen, to whom would it matter? Well, to humanity it would obviously matter, right. But one could also look at it as being the step by which we move forward, the way our journey is meant to go. So for me, it matters in the sense that I would be sad if it happened, right? And maybe, now I’m making this up a little bit, it’s rather philosophical, but a good question, as you said, maybe we need some type of sanctuaries in the future for humans, right? Where we have the species that once existed, and one can come and peer at the humans. Something along those lines; I think you can find some jokes of that sort on the internet. But that is a scenario. Again, though, we’re looking very far into the future, and there are many, many question marks as to whether we would reach such a situation. Yeah. But

 

Andrew Dubber 

I guess the provocation would be that the way to think about this is: we’re not building something that is other to us; what we’re building is us. And if what we’re doing is essentially trying to replicate human intelligence, then, essentially, you know, the sort of meat version of us being phased out doesn’t matter, because they’re still, in a sense, us.

 

Christian Guttmann 

That’s one way to look at it. I’d encourage, now that we’re going towards the end of the discussion, there’s also this very good book, which is sort of the standard reading for those who study artificial intelligence. It’s a book called Artificial Intelligence: A Modern Approach, by Russell and Norvig. It’s a very technical book, but in the first chapter you see four different ways of looking at what the goals of AI would be. One of those is what you mentioned: some AI researchers are working to build something that is just like us. But others believe, for example, that AI will just be mimicking us, so it will never be like us, it will just be a mimic, like, you know, a puppet. And then there’s another category which suggests that it is actually completely its own intelligence. Personally, I actually prefer to call it machine intelligence, not artificial intelligence, because the type of intelligence, and the actions that come out of these systems, will have the nature of the machine, its own character, like fractals in mathematics, you know, these very complex patterns. And I think it will be that type of personality, if you even want to call it that. It may not be at all like us; it will possibly be completely different. We might not even be able to understand how it would work.

 

Andrew Dubber 

I have to say that would be my preference, that we create something different. Because, like I said before, we can already identify cats in photographs; why don’t we build intelligences that can do things that we can’t currently do? Which brings us to not artificial intelligence but augmented intelligence, to help people think further and deeper, and reach further. Is that on anybody’s agenda, or am I just wishful thinking?

 

Christian Guttmann 

Absolutely, I agree. There’s also this idea of the augmented workforce, for example. When you look more practically, at employees and so on: how can you upgrade them, in some sense? How can you give people abilities that they don’t have today? Either very hands-on, almost towards cybernetics and, you know, cyborg-type ideas, which is in part happening already, because people already have technology installed in them, if you like, that will bring them forward. But also in ways where, whatever you are doing today, you can do these things much smarter and quicker by using AI: when you search for documents, or any of the tasks you do today which would be tedious and take a long time, or which might even be dangerous, right? So you would be upgraded in that sense.

 

Andrew Dubber 

And it seems to also reveal new human abilities that we didn’t know we had, which I think is a really interesting potential for that.

 

Christian Guttmann 

That’s a good point, too. Exactly. I mean, it has certainly happened many times before with technology. If you think of music, you know, electronic music, the type of music that’s created today and 20, 30, 40 years ago, coming from a machine, from microprocessors, it has created a completely new category of music genres, right, which have influenced our emotions and the way we react to the world in completely different ways. So yeah, along those lines of thinking, I can completely see that. And you can think of other examples, like, you know, the whole internet phenomenon, the whole social networking part, whereby we react on Twitter to people we often don’t even know; we have no clue who they are, they might be somewhere on the other end of the world, and we talk to people we have never seen and will never see, right? So this technology changes things, and I agree with you. And then you could again become philosophical: is that the right way or the wrong way to do it? Because I have friends that don’t like digital music at all. Electronic music, it’s like, go away with that stuff, right? It’s either classical, or heavy metal and guitars, and so on. But others embrace it as, you know, the new way of experiencing things in the world.

 

Andrew Dubber 

And I guess, on that note, artificial intelligence is a broadening of our palette of technologies, rather than abandoning the old and embracing the new, as some people think you need to do with electronic music.

 

Christian Guttmann 

I think so. I think so, too. Yeah. Christian, thanks so much for your time today. Thanks so much for having me.

 

Andrew Dubber 

Christian Guttmann, Vice President, Global Head of Artificial Intelligence and Chief AI and Data Scientist at Tieto, and almost certainly a real human person. And that’s the MTF podcast. If you enjoyed it, please do share it with someone else, like it on Facebook, subscribe on your podcast player of choice, and do let us know; we’d love to hear from you. In the meantime, have a great week, and we’ll talk soon. Cheers.
