
Charles Ess - Reflecting on AI Ethics
Charles Ess is a philosopher and Media Studies professor at the University of Oslo. He specialises in the study of ethics in the digital age, and specifically the ethics of AI and robotics.
This episode of the MTF podcast is a deep dive into the meaning of ethics, how it can be applied to the internet and computing, and what it means to lead an ethical and reflective life in the digital age.
AI Transcription
SUMMARY KEYWORDS
people, ethics, autonomous vehicles, ai, virtue ethics, called, human, world, systems, ethical, human beings, technologies, find, ways, death, reflective, extraordinary, greater, problem, empathy
SPEAKERS
Charles Ess, Andrew Dubber
Andrew Dubber
Hi, I’m Dubber. I’m the director of Music Tech Fest, and this is the MTF podcast. Charles Ess is a professor of Media Studies at the University of Oslo. He’s a philosopher, specialising in ethics, and specifically the study of ethics as it relates to things like AI, computers and the internet. He literally wrote the book on digital media ethics: his book Digital Media Ethics. He’s also the former president of the Association of Internet Researchers. Charles is spending a month in my adopted hometown of Umeå, in the north of Sweden, as a guest of Humlab, the humanities laboratory at Umeå University, where there’s a lot of work being done right now in the field of AI ethics. He’s given guest lectures and workshops while he’s been here, which have been fascinating and, helpfully, incredibly accessible. And I wanted to sit down with him and talk about this idea of human-centred AI, which has been at the heart of everything MTF has done over this past year or so. We talked about a lot of things: from digital death to autonomous vehicles, Greta Thunberg to Will Smith, the book of Genesis to the wisdom of Leonard Cohen, the importance of empathy, and how to lead a good and reflective life. Here’s Charles Ess. Charles Ess, thank you so much for being on the programme.
Charles Ess
My pleasure.
Andrew Dubber
So the S is not for anything. That’s the whole name.
Charles Ess
That’s the whole name. It caused a lot of trouble when I was young, both when I wanted to make long-distance calls: how do you spell that name again? Ess, spelled E-S-S. No. And of course the junior high jokes: what if your first name was Jack? Oh, Jack Ass. Haha.
Andrew Dubber
Yes. And where’s the name from? What are its origins? German? Okay.
Charles Ess
Yeah, it’s German, and it may actually have Dutch heritage behind it. But yeah,
Andrew Dubber
that’s not a German accent.
Charles Ess
That’s not a German accent. That’s fully American.
Andrew Dubber
So whereabouts are you from?
Charles Ess
Born and raised in Tulsa, Oklahoma. Land of Leon Russell. And Michael Doonesbury, which you probably don’t know.
Andrew Dubber
Yeah, very well.
Charles Ess
Oh, you do know. Okay. Yeah. So and then Oral Roberts. So we say two out of three is not bad.
Andrew Dubber
But there is a religious thread in what you do. So maybe that’ll pop up a little bit later in the conversation.
Charles Ess
May as well get it over with now. Yeah. When I was growing up, we thought that Tulsa was the buckle of the Bible Belt. I found out later, no, that was Springfield, Missouri, where I ended up teaching for about 20 years. So I didn’t spend all my time in that kind of community, but some, which turned out to be good in a number of ways. So I describe myself as a recovering Baptist. I did learn to take texts seriously, so it was very good for hermeneutics. I was also reading Nietzsche at the same time I was reading the Bible. And to interpret a text is difficult work; it’s not obvious. So, contrary to the fundamentalism that was around, I was able to start to read things in my own way. And so when the Baptist preacher said from the pulpit that all of us young men should go off to Vietnam and kill in the name of Jesus: I’m not seeing that in that text. So I became a conscientious objector. So that’s part of the story. Another part of the story is that it gave me an advantage in terms of teaching. Two thirds of my students in Springfield were self-identified born-again Christians. And I was teaching philosophy and ethics and critical thinking, but I had an edge over some of my colleagues, because I knew the language, and I knew how to use it in a way that was respectful but could also help open up the discussion. So that was an unexpected advantage. It was many years later, when I started going into media studies out of philosophy, that I came back to this, partly reading it through the eyes of people like Walter Ong and Marshall McLuhan. The motto of the Protestant Reformation is sola scriptura, only the Scripture. So again, coming to grips with this text is literally life and death. But it also means the capacity to stand up, resist and disobey, which is something I think is really central to the Western tradition, starting with antiquity and Socrates.
I point to people like Tess Asplund, who is quite famous in this part of the world, a Black woman in Sweden who stood up single-handedly to neo-Nazis marching in Gothenburg. I think we need people like that. I think they change our world in more just ways, in ways of greater equality, greater respect, greater emancipation. But yeah, it doesn’t come cheap or easy.
Andrew Dubber
The main reason we’re here having this conversation is because you happen to be in the same city as me. You are the professor of, essentially, AI and ethics and how those things come together. Because we’ve discussed AI quite a lot on this podcast, I’m going to start with: what’s ethics?
Charles Ess
Ethics is how you live your life in a reflective way. Some people will make a distinction, it depends, between morality and ethics, where morality is sort of the enculturated ways of doing things. And there’s not necessarily anything wrong with that. But ethics, in my view, is when you start to become reflective and try to think through critically: what are the foundational values? Where do they come from? Why are they legitimate? Are they legitimate, when and where, under what contexts? And a central part of that is developing a kind of ethical judgement; you’ve heard me talk about phronesis. A kind of reflective judgement that we all kind of know about but find hard to articulate. But it’s when we’re in those places, facing ethical challenges where we really don’t know what the right answer is, that we have to call on phronesis, these kinds of capacities for reflective judgement. You don’t have to have a PhD in philosophy or in ethics; I don’t have a PhD in ethics. But the more we can reflect on this, using stories from tradition, or using ethical frameworks that have been developed over time in different cultures, generally the better prepared we are for when those moments come. We also learn from our mistakes when those moments come, so we might be better prepared the next time around.
Andrew Dubber
How can we tell when we’re doing it right?
Charles Ess
You can’t always, of course; otherwise it’d be fairly easy. I think one of the things I like to stress is that all of us are ethicists to start with. This is a very Aristotelian view, but it’s also a kind of sociological view: by the time you’re in your teens or 20s, or that part of life, you know how to make good judgements, or else you’d be in jail or dead.
Andrew Dubber
Some people are.
Charles Ess
Some people are. I’m sorry, really. But again, we rely on a kind of received morality; we rely on the advice or examples of friends or exemplars and that kind of thing. But we don’t always know. And one of the primary metaphors or images of this kind of reflective judgement is in Plato, and it’s the kybernetes: the steersman or the pilot. And if you’ve ever done something like sailing, what you know is: yeah, you’ve studied, in a way you know how these things work. But you also have a feel for how your boat is moving in this current, with this wind, at this angle, and so forth. That becomes tacit knowledge; that becomes embodied knowledge. So the kybernetes is the steersman who, Plato says, knows the limits of his art, what you can do and what you can’t do, and if you do make a mistake, you’re able to correct the error. And that sense of self-correction is what Norbert Wiener used as the foundation for cybernetics; Wiener also wrote the first textbook in computing and information ethics. But cybernetics is originally an ethical sense of self-correction. So I think that most of us, when we make mistakes, often feel those mistakes. So we talk about heartache, or we talk about: it was like a kick in the stomach. And contrariwise, we often will say: I followed my gut feeling, I followed my heart. That’s not naive; this kind of ethical experience is encoded in our bodies. And we bring that into play, usually in an unconscious way, just like we do when we’re sailing or when we’re playing tennis; we don’t always have to reflect to do the right thing. So I think if we pay attention to our ethical experience, we’ll notice, for the most part: yeah, I’m doing the right thing all the time, so I don’t have to think about it, right? I didn’t kill anybody. I didn’t run down this person on the road. All those…
Andrew Dubber
You mentioned narratives from history, writings regarding principles. A lot of the people we look to in history, and in those stories, who act like icons of ethics do end up dead or in jail as a result of their actions. What lessons can we learn from that?
Charles Ess
Being good is costly. Part of the reason those people are held up is precisely that they paid the ultimate price, would be one way of putting it: so Jesus on the cross, or Socrates, or Antigone; Martin Luther King being assassinated, Gandhi being assassinated. It doesn’t always end that way. One of my moral heroes is a fellow named Hugh Thompson. He was a helicopter pilot at the Mỹ Lai massacre. And when he saw what was going on: this is basic law of war, you as a soldier are obliged to obey superior orders, but you are obliged to disobey superior but illegal orders, and to kill civilians is an illegal order, period, full stop. And so when Thompson and his unit arrived and saw what was going on, they turned their own guns against their comrades to get them to stop. He received a medal for it, 30 years later. But he wasn’t killed. So it’s partly that these stories stick in the memory because they’re dramatic. But we also talk about everyday heroes, and those stories may be less dramatic but happier; they also end up with fewer casualties.
Andrew Dubber
Okay, so now we need to apply this to technology. How do you get from not running people over on the street, for instance, to designing an autonomous vehicle? And what are the thought processes in that? If those things are embodied, and, you know, automatic if you like, how do you write that into an automatic system?
Charles Ess
And that’s exactly the great trick. And many of us think you can’t; you can get close and you can approximate, but you can’t ultimately do it in quite the same way a human being would. Because the systems don’t have that sense of embodied presence in the world; they don’t have a sense of desire; they don’t have a sense of emotion; they can’t feel one way or another. And so I think the best we’ll be able to do is some kind of approximation. What I find heartening is that three or four years ago, when autonomous vehicles were all the buzz, everybody was talking about the trolley problem, and we were going to solve this with a kind of simple-minded utilitarian cost-benefit approach, you know, five lives versus one. We’re not hearing so much about that anymore, and part of the reason is that it doesn’t work. Most of us realise it won’t work after a few minutes of careful reflection: we have to bring in other kinds of value judgements that are very hard to quantify, much less automate in a system. There are people who would say, well, we can approximate this through AI and machine learning techniques. And perhaps that’s true, but they’re approximations, and they’re approximations that can literally, and very likely, go off the rails in a very bad way very quickly. So I think we’re learning to be very cautious. As I mentioned, the IEEE is now arguing for putting ethically aligned design into these systems from the very beginning, at the design phase. And their ethical recommendations include utilitarianism, but they’re much richer and much broader. So we have to start with making sure that human integrity, autonomy, respect, basic rights, that these always remain protected. And we have to use virtue ethics; we have to design these technologies so that they’re not simply for the sake of somebody’s profit or somebody’s convenience.
They are for the sake of human flourishing and the good of society. This is a remarkable transformation. I’ve been paying attention to this for a really long time, and if you had told me 10 years ago that we would be at this place: I know, sorry, but it’s like the fall of the Berlin Wall. It’s just an extraordinary sort of coalescence now between the technical communities and the philosophers and other stakeholders.
Andrew Dubber
But they’re not the ones paying the bill, though. The technical community and the philosophers all have involvement and a stake in it, yeah, but they’re not Google. They might work for Google, or Apple, or whoever it might be. So to what extent does this agreement actually impact what happens with the people who are commissioning these technologies?
Charles Ess
Excellent question. And it can go a couple of ways. One is, this is also driving the EU and the regulation on AI. So you get notions of AI for people, or AI for good. Virginia Dignum here, for example, has a new book out on responsible AI that reflects this. So within the EU and Scandinavia, again, this is remarkable but incredibly heartening: we have this very strong central recognition that if we’re going to develop these systems, first of all, they have to be within frameworks of regulation. And that regulation in turn is driven by a very, very clear set of ethical standards. That, I think, is extraordinary; we can be very, very happy about this, or grateful. The flip side: Google is not in the EU, nor Microsoft or Apple, and so forth. True. And I don’t want to underestimate by any means the sort of autonomous power those corporations have. At the same time, there’s a little glimmer of hope: they exist to make a profit, and they get that profit by selling to customers. And when customers and/or their workers say, I’m sorry, this isn’t good enough, grudgingly they will respond, or maybe sometimes not so grudgingly. So I think if there is a virtue in the free market, it’s that the companies do respond. They can be forced, they can be brought to the table, so to speak, either by customers, who just say, I’m sorry, I’m not buying it, literally, and/or who may protest against one wrinkle or another, and/or workers inside. We don’t have a lot of stories of this, but the fellow who invented the Facebook Like button ended up deciding it was a terrible idea, that the whole thing of social media nudging and reinforcement of behaviours was a terrible idea. And he stepped out. And he and others have now founded a kind of counter-movement in the technology world.
And there are counter-movements that people can participate in as alternatives, if they want to.
Andrew Dubber
I guess the counter-argument from the free market perspective would be that if you write the legislation first, you block the innovation, right? You can’t innovate in an environment where you’re simply not allowed to try things.
Charles Ess
Yeah, that’s an argument, but I don’t find it persuasive. Partly because, as I’ve seen in the history of technology and the history of innovation broadly, the spirit, or the story, is that the people who are innovating, who are creating products, etc., are taking risks. Let’s give them credit. Yes. But they always scream: we can’t do it if there’s regulation. There’s a classic example. Andrew Feenberg writes about when the steam engine was introduced, and steam engines kept blowing up because they were shoddily made, and people died. And a few thoughtful, reflective people said: maybe we should regulate these. And: oh, you can’t do that, that’s going to make it impossible. All the arguments came out. After about 50,000 people had died because of shoddy steam engines, finally there was enough public pressure to get the regulations or the laws that were needed for safety. And you see that over and over again. So I’m not persuaded by these claims that somehow innovation is going to be stifled ahead of time. And the counter-response to the counter-response is: so why do we need so much innovation, exactly? Especially if that innovation is, so to speak, outside the bounds of law, what is it offering us that we really, really need? That we can’t get through legal ways of organising our technologies that respect human values, not just corporate profit, not to put too fine a point on it.
Andrew Dubber
I’m reluctant to keep going back to the example of autonomous vehicles, but there’s a really clear use case there for ethics. And one of the things that strikes me about autonomous vehicles is that there seems to be a prerequisite that they be safe: not safer than human drivers, but safe. And I’m wondering, is there in your mind this kind of threshold where, if we reduced the number of motor vehicle accidents by, say, half, then autonomous vehicles should be implemented right across the board? Or do we wait until you can’t have accidents anymore?
Charles Ess
Hmm. That’s also a really good question. And I think it probably depends on the country you find yourself in. The arguments I’ve heard for autonomous vehicles have been very strongly in the direction of utilitarian ones: if we dramatically reduce accident rates, what’s the problem? And prima facie, yeah, sounds great. The problem, well, there are several problems that line up. One of them is that it turns out autonomous vehicles can, and probably ought to, be programmed in such a way that if the choice is between saving the driver or five people, it’ll save the five people. Now, are you, as a driver, going to go buy a car that you know might literally kill you if it thinks that’s the best decision? Not many of us are going to step into that kind of context, I don’t think. And there’s also a question of rights raised by that. So I suspect, I’m rather confident, that in the US, if you could produce those kinds of vehicles, then you might have a stronger chance at making that kind of utilitarian argument. The flip side is that they’ve found, in some studies, that people really don’t want to give up driving. Especially in the US, for many people it’s their flow experience; it’s one of the places they have control over their lives. So there are other things going on in there besides just running a vehicle down the road. And I also wonder: there’s a really, I think a really fine movie called I, Robot, with Will Smith. And there’s a scene in there that literally gets to the heart of this, where the Will Smith character is in a car accident. The other vehicle has a driver and a 13-year-old girl. A robot sees this, and the robots are programmed to save human lives. And so the robot calculates that Will Smith has a 45% chance of survival, and the 13-year-old girl has an 11% chance of survival. Simple. And what Will Smith says after this is all over is: she was somebody’s baby. 11% was enough.
Anybody with a heart would have known that. Approximately, yeah. The robots are just machines; they’re just lights and clockwork. And that’s a little bit harsh. But what I find, obviously, moving in that is this sense that we know something in our ethical judgement that has to do with relationship, that will take chances that machines wouldn’t. You could maybe reprogram the machine to say: let’s save little girls rather than old men. Okay, fine. And still, I’m still a little sceptical, huh? Yeah.
Andrew Dubber
Interesting. The other use case that gets brought up a lot about AI is identifying photographs of cats, and I don’t think there are a lot of ethical issues involved in that. But what are the other AI applications that become problematic?
Charles Ess
Well, first of all, there’s facial recognition software that famously identified Black people as gorillas. Really. Yeah, there’s a whole list of ways in which biases are built in, not intentionally, but partly because people aren’t paying attention, and partly depending on what they train on. Somebody has to choose the material these things are trained on, and if you end up training it mostly on white folk, then it doesn’t know how to deal with dark skin. The really famous example is poor Microsoft’s Tay chatbot. This was an AI that was supposed to become sociable, and within 24 hours it was spewing neo-Nazi racist stuff, because that’s what it learned on Twitter. Right. So the problem is partly us.
Andrew Dubber
Wow, really? Still? So it wasn’t in a liberal bubble, then.
Charles Ess
No, it was not. But another set of systems that I and others are particularly concerned about are so-called pre-emptive policing systems, or systems that, for example, try to make judgements about the probability of you being re-incarcerated if you happen to be released from jail. These systems are somewhere on the order of maybe 76 to 80% accurate. And they do save time, but they’re only 76 to 80% accurate. And if you’re the prisoner who has been good, but the system puts you in the 20% it gets wrong, somehow that doesn’t quite seem right. The second problem is that you can’t contest this; nobody knows. Because these are, by definition, machine learning systems whose behaviour even the programmers can’t predict, and in many ways they can’t explain how they do what they do. Why are we letting critical issues in the justice system be offloaded to machines that we don’t understand and that we can’t contest? If you keep humans in the loop, that might be one solution. But it’s a hallmark of modern law and democracy that you and I have the right, if we’re accused of something: okay, fine, let’s go to trial, let’s see the evidence, let’s interrogate the evidence. And so we can contest how we’re being read, as Mireille Hildebrandt puts it in one of her really good books on this, Smart Technologies and the End(s) of Law. We can’t contest the machines. The programmers can’t contest them.
Andrew Dubber
Is that because we don’t know how the decisions are being reached?
Charles Ess
We can’t. It’s black-boxed.
Andrew Dubber
Is that by design, or is that the nature of how these things learn?
Charles Ess
It’s both, as I understand it. By definition, we want the devices to learn, to respond to what they’ve learned, in effect, through their experience, and then to be able to improve on their performance. And so we can establish parameters for how that’s done from the outset. But once the system runs, so far as I know, it’s opaque even to the people who built the programme, because they don’t know what happened on the inside. And to take apart the millions of lines of code and try to figure out what that means seems to be all but humanly impossible, right, within anyone’s lifetime.
Andrew Dubber
There are two responses you could potentially have to that understanding. One is to fear technology. And you don’t strike me as somebody who necessarily fears technology, but as somebody who grapples with it. How do we do that in a way that is, I guess, conscious and thoughtful?
Charles Ess
Great question. No, I’m a gear nut. I mean, I think this stuff is great and wonderful in all kinds of ways. Biographically, I was exposed to an analogue computer when I was in about fifth or sixth grade, and it was like: story over, you know, this is what I’m doing. So I think these are potentially wonderful technologies. They’re fulfilling dreams we’ve had for 2,500 years: how to calculate how the planets run, how the universe runs. People like Kepler would have killed for the information and the computational devices we have, Newton too, for that matter. So no, I’m by no means a Luddite. But yes, I think we have to grapple with it. And I may have an overly simplified view of these things, but I think historically the way we’ve grappled with it has been sort of crude, namely waiting for disasters to happen and then saying: okay, maybe we should do something about this. What I find really, really interesting and really, really heartening is that this time around, at least with AI and social robots, we’re not waiting; we’ve had our damage-control lessons, in a way. And again, these efforts on the part of the IEEE, the EU, and other places to say we need to do this right, ethically, from the start: this is extraordinary. This is not 10 years ago, when it was a few philosophers talking to a few computer scientists. So I find that very heartening. And part of what I would say, in addition, is: the more people on the street, I mean everyday folk, people who will buy these technologies, learn how they work, the better we can take control and understanding of them. I think that’s also key. It’s hard work, but more and more people seem to be doing that in some way. And then, you know, go wild: join a hackerspace, join a makerspace, learn how these technologies really work at an even deeper level.
So that you can tune them to what you think is a good life, to what’s going to give you a sense of contentment and fulfilment and meaning. A very idealistic vision, but that’s been the vision since the Enlightenment. And I’ve often heard people say, in these contexts, that we need a new Enlightenment; we need a new Bildung, as the German word has it, where we educate people not just for the sake of getting a good job but to enhance this sort of human-centric understanding: this is not just about profit, it’s not just about convenience, all of which can be useful and good. It’s about what kind of life we lead. And as we become more reflective about that, then I think we’re in a better place to take control, as best we can, of these things that so deeply infuse and define our lives and the lives of the people around us that we care about.
Andrew Dubber
Some of the things that get talked about a lot, particularly around robotics: we mentioned religion before, but also things like sexuality, things that are really deeply human kinds of experiences. Yeah. How do those things fit together? And why are these the things we’re looking to outsource?
Charles Ess
That’s a really good question, why we’re trying to outsource these. I think the broad answer is: human beings are difficult. There’s a wonderful little phrase from the Peanuts comic strip: I love mankind; it’s people I can’t stand. Human relationships are difficult. That’s just the truth of it. I mean, if you want a religious start, one way of reading the book of Genesis is that it’s all about dysfunctional human family relationships. Am I my brother’s keeper? Well, yeah. Oh, okay. And we’re off and running. I’m not sure where to start. In Japan, this is a big problem for young people: they seem to be having sex less, they’re dating less, they seem to be more satisfied with sort of virtual companions and virtual girlfriends, and their work lives are very strenuous. Who has time to develop a relationship? I think if you poke beneath the surface of any long-term relationship, any married couple, you’re going to find there were times of: I don’t know if we can do this anymore; I don’t know that I want to do this anymore. A lot of times we work through that; sometimes we don’t. So I think the starting point is: it’s hard. And this is partly why I’ve become interested in virtue ethics. You know, I don’t like the term virtue so much, but what it does teach us is that being good doesn’t come naturally; we have to work at it. By virtue I don’t mean righteousness; what I mean is more like the capacities or the abilities or the habits, and these become habituated with practice, so that we become habitually nicer to each other. And that’s a good thing; these are central to what we conceive of as a good life, or a life that has meaning, a sense of contentment. So that’s what I mean by virtue. So, again, the human sphere is hard, even in wonderful places like Scandinavia, where you have such a terrific work-life balance.
So the temptation to offload the chores of relationship to the machines, or through the machines, I think is very, very high. Understandably. One of the things that, I don’t know if it’s happened here, but it’s been observed in Norway: instead of texting, which used to be what you do because you don’t want to call, because if you call, then you have to deal with the oh my gosh, there’s a moment of silence now, what do I do? So we text, because we don’t have that problem anymore. And then texting becomes a problem, because it takes time. But I don’t really want to call anybody, so what I’ll do is leave a voice message. And so the convenience level of communication has gone up. We still obviously care about each other, or we wouldn’t be talking. But the work level: we’re learning how to be lazier and lazier about communication and relationships. And that’s this problem of de-skilling. So that’s what I worry about.
Andrew Dubber
That said, we seem to be spending an awful lot more time doing it, on social media platforms and communicating with other people. It’s pretty much everybody’s job now: it’s email, it’s Facebook, it’s Twitter, it’s, you know. So is it just that we’ve multiplied the number of communications we have, and we’ve just made it more productive?
Charles Ess
It might be kind of nice to think that; I’m not entirely sure. I’m sure there’s more than two sides to it. On the one hand, yeah, it’s really handy and convenient to be able to chat with my wife while she’s in Palestine, or wherever, and my kids, who are in North America. But I think many of us also feel, I mean, there is something now called digital detox, and there has been now for two or three years this sense that the whole multitasking thing is a sham: it’s not multitasking, it’s switch-tasking, and it’s not as efficient. So I think there have been recognitions that we’re a little too busy with this, in a way. And I also know of examples of people who are finding, for example, in the face of death, that learning about the death of a loved one because somebody has posted on their wall, oh, I’m so sorry, I’m going to miss you: this is not how you want to find out your sibling or your parent has died, or your child has died. And it’s not going to help you with the grieving process; 197 likes just doesn’t mean very much. So I know of examples, documented through research, of young people in Norway, for example, for whom getting off of social media has a very high cost. And yet, because of their experience with the fakery of grieving online, sometimes called grief 2.0, on the one hand, and finding the need for embodied co-presence in mourning, on the other, they got off of Facebook. And you can go to other places; you can go to Snapchat and so forth. So it’s not an either/or, and I’m not trying to suggest it’s an either/or. I think what I’m seeing is that we’re trying to find the right balance. To go back to what you started with a few minutes ago: no one is going to throw this away, at least not for any length of time. But it’s finding the balance, and I think finding that balance is hard sometimes.
Andrew Dubber
I noticed, when you were talking about other people’s grief, that you collect yourself for a moment, because you’ve taken that on in some sense, and that’s, for lack of a better word, empathy. And empathy seems to be something that is lacking in the world, to a large extent, and there are practical considerations that come out of that. You’re clearly a very empathetic person. How do you learn that? And what are the advantages of it?
Charles Ess
Well, first of all, your question is really wonderful, because when you approach all this through a kind of virtue ethics lens, empathy is perhaps the most important starting point. You can see it negatively with children on the autistic spectrum. At the extreme, they don’t relate to you, because as far as they’re concerned, you’re a toaster. They don’t have the capacity for empathy, they haven’t learned it. The capacity to learn it is there, but learning it is not necessarily a given. And so I think it’s absolutely essential to learn empathy. But empathy can be painful, especially in those very strong moments of grief, of loss, of conflict, when your marriage may be falling apart. So of course we don’t want to spend our time with it if we don’t have to. So again, there’s a temptation: I’ll communicate with my virtual girlfriend because she’s predictable, I don’t have to take care of her, I can turn her off if I get bored. The flip side, from Shannon Vallor, is the example of when you’re four years old and you’re forced to go talk to grandmother. You know, I’m sorry, but I think in some sense we have to have the courage, or develop the courage, to confront those moments of caregiving, or of grief, or of deep love. I mean, deep love can also be incredibly scary and painful. Confront it and, at the risk of sounding cliched, embrace it, taking it for both the good and the bad. And I’m fairly sure, and this is, again, kind of cliche, but as far as I know it’s true, that’s how we grow as human beings. There’s a kind of an analogue: if you want to be a better tennis player, don’t play with somebody who’s as good as you, but with somebody who’s better than you. And if you want to expand your capacities to live as a feeling, reflective human being, take some chances, get hurt, take the chance of getting hurt. Because I remain convinced that the flip side of that is... yeah, how to put it.
The rewards of loving will be provided. Yeah.
Andrew Dubber
That seems like a fantastic place to leave something like this, but I still have so many more questions. I guess, from that: is there a politics of ethics? Is there a team we should be picking based on our understanding of how other human beings feel and work? And this feels like a very leading question, but I mean, literally, is the world divided into ethical politics and, I guess, anti-ethical politics?
Charles Ess
It certainly is in the country I’m from. No, again, it’s a really, really good question. And I think one of the things that I really, really appreciate about living in Norway is that there are nine functioning political parties. And I ended up having a beer next to one of the leaders of one, which shows you how we’re not very far from power, which is another nice thing about living in Norway. So without wanting to name names, I think from virtue ethics, as well as deontology, comes the idea that human beings are freedoms and need to be treated as such. And that means equality, and that means emancipation. I would turn the question around and say: work for the people, because sometimes the people are different from their parties. Work for the people, and for the parties who are clearly on the side of emancipation, clearly on the side of equality, including gender equality, sexual equality, and not just, you know, peace and prosperity. That’s an easy slogan. Peace would be wonderful. Very hard. But the people who are really making the difference in terms of how we’re living on this planet for the sake of our grandchildren... I don’t know why I’m weepy, I’m so sorry.
Andrew Dubber
No, it drives it home really well. I don’t have grandchildren, but I imagine if I did, the idea of the future of the earth, and the future of how people, you know, set up institutions now in order to take care of what happens in the future, would have a similar impact on me. My son is an adult, and so he’s operating in my world, as far as I’m concerned. But there is a world to come, and you seem very invested in that.
Charles Ess
You know, it’s partly because Norway is a country that is very attuned to the environment and to these issues. Last fall, when the UN panel report came out on climate change, there were young people going around saying, we’re the first generation and the last generation. We’re the first generation to feel the effects of climate change, and we’re the last generation to have any hope of doing anything about it. I grew up in the 60s. We had the threat of nuclear holocaust; in 20 minutes, we could all be gone. You never knew. That’s no way to grow up. There was a great song by Bob Dylan about the masters of war, who’ve hurled into the world the worst fear you can hurl: the fear of bringing a child into this world. And I hear young people struggling with that today. It’s a terrible thing to throw on a young person.
Andrew Dubber
Is it fair to say that every generation has that? I mean, I grew up in the late 70s and early 80s, and so we had this too: nuclear holocaust was the big factor then, minutes to midnight, you know, Cold War, et cetera. The same sort of existential peril. Haven’t we always had that, or is this new?
Charles Ess
Well, I mean, from an existentialist perspective, death is always possible. And yes, you had plagues in the Middle Ages and so forth, but those were extraordinary; they weren’t daily confrontations. What’s different now? Since the 20th century, and this is sort of classic existentialism, since the First World War, we have had machineries of industrialising death. And by industrialising, we mean mass scale. So, millions and millions of people slaughtered in the First World War, over 50 million in the Second World War, and with nuclear holocaust, hundreds of millions, billions of people, the planet itself. That, I think, is qualitatively new. And I think it cuts us in at least two different directions. One of them is that it is so horrifying to contemplate for any length of time that we spend a lot of time doing everything we can not to. So there’s this Neil Postman idea of amusing ourselves to death, and it makes sense in a way, but it doesn’t help the problem. So I think, you know, we’re very good at that. There’s a way in which you can’t constantly think about your own mortality. I’ve had cancer diagnoses twice, or almost-cancer diagnoses twice. So this is the sort of classic moment for us in the industrial world: you might have cancer, you might die in three years. Okay. That puts your world in perspective, and so every second becomes precious. But you can’t live like that. Or I can’t. I don’t know anybody who can for more than about three weeks. It’s just too intense.
Andrew Dubber
There are only so many days you can seize.
Charles Ess
Yeah, yeah. Leonard Cohen put it nicely in an interview: you know, your body sends you signals that it’s not going to go on like this forever, but it’s really nice to sit down in the morning and pretend that it will. So, I can fiddle with that, do that, let’s go do something fun. Perhaps not terribly meaningful, but it’s fun. That’s also good. So it’s understandable that we turn away as best we can from these kinds of things. But I think both the lesson of existentialism and, to some degree, the lesson of some religions, and I go back to the Epic of Gilgamesh, as well as a different reading of the Garden of Eden story, is this: if you want to grow up as a human being, you have to recognise that you’re going to die, because it’s only then that you start to take responsibility for finding meaning, or purpose, in your life. And it’s meaning that you craft, that you create. It can be in community with other people, it can include traditions, it can be novel, it can be creative, but you have to do it. And if you don’t, it’s not necessarily a tragedy, but you’ve missed, in some ways, really the most distinctive opportunity of being a human being, that is, free choice in terms of crafting your life.
Andrew Dubber
I did want to touch on this idea, and there are a couple of threads that I want to pick up on. The first one is this idea of creativity as an essential part of the human experience, and the relationship of creativity to these things that we’re talking about, ethics and so on. So let’s start with that; I’ll get back to the industrialised death later. My question, I guess, is: how do you fit this idea of creativity and, you know, artistic expression into this framework of being in the world, being ethical, living deliberately? How important is it that we do those things? Or is it just, you know, well, it’s entertaining in the meantime?
Charles Ess
Oh, no, no, no. I mean, it can be entertaining, it can be just decoration. But on the other hand, decoration is also nice. There’s a wonderful phrase, I don’t know who said it, but it was something like: poetry is a raid upon the inarticulate. And I like that, because I think philosophy is a raid on the inarticulate, the things we somehow know but can’t quite get out without some work. And what I find in poetry and in the arts and in music, and in language for that matter, learning a new language, that is so valuable, is that it cracks open the ordinary. Leonard Cohen’s song “Anthem”, as some people know: there’s a crack, there’s a crack in everything, that’s how the light gets in. He’s referring to a Jewish mystic from the 15th century. It’s a long story, I don’t need to repeat it. But basically, it’s through these cracks that something new breaks in and helps open up the everyday, and you see things in new ways. You may see them in extraordinary new ways, and those can open up new possibilities for us. So this is sort of standard existentialism, but I think it’s standard human being. And in a certain way it’s odd for us to separate the arts from the humanities, from the sciences. I mean, this is very sort of 18th century. If we go back to the Renaissance, the so-called Renaissance man or woman doing everything, well, of course, because all of this is necessary. In the existential traditions, you have people like Nietzsche; he never really wrote an academic paper. These were essays, and they were specific kinds of essays that were intended to crack open the academic linear argument. They were intended to be aesthetic, they were intended to be artistic. Nietzsche also wrote music, for that matter, but Sartre wrote plays. You want to convey your message? Do it in a play. Hell is other people? Well, that’s a little depressing, but okay, here’s the play.
So I do think there’s an essential connection between the arts and thinking about things ethically. The flip side of that is that this is why, and it’s not just virtue ethics, this is why in religious traditions and in the arts we’re interested in heroes, and we’re interested in anti-heroes. Most of us don’t study academic ethics, but we might go to a play, we might go to a film, and it’s those people who strike us in an interesting way that can inspire us to think about our lives in new ways. So I think there’s an enormous range of essential resources, and even calling them resources is sort of industrial. But what I find, for example, here at Humlab: we have an artist in residence. So, humanities and computing, and an artist in residence. That’s extremely cool. And many of the projects that I’m seeing coming out now that are trying to feel our way, think our way, reflect our way into a more humane future include art. So I think we’re learning to do that in good ways. Yeah.
Andrew Dubber
The art thing is interesting, because those seem like small, personal acts of good, if you like, and to go back to the industrialisation of death, that seems monolithic, that’s huge. The way I talk about it is: evil is elephants, and good is insects, and the insects outweigh the elephants in the world. But you put an elephant in the room and it’s the most overbearing thing, it’s the one thing that you see. And that’s how I think I retain optimism: the good is insects, and the insects outweigh the elephants. But do you share that optimism? I mean, you put industrialised death and AI into the mix, and the first thing you go to is Terminator, right? Is that our future, or are our little good acts of creativity enough to counterbalance it?
Charles Ess
I’m inclined to be optimistic, and it’s not just because I’m a naive American whose DNA is programmed to make me optimistic. I think there are two aspects of this. Elie Wiesel, the Holocaust survivor, said: the good news is, over 6 billion people got up this morning without thinking about how they were going to kill their neighbour. So I think if we step back and we just look at how we treat each other, even if it’s automatic, even if it’s for self-interested reasons, there are these small acts of kindness that we do quite frequently with one another, and I think they make a huge amount of difference. We tend to take it for granted, perhaps. And yes, I very much want to underscore that I agree with you. I think those 6 billion people who woke up this morning without thinking about how they’re going to kill their neighbour, they’re the vast majority of human beings, and in the long run, if we have a long run, they’ll win. They have so far, and I think there are reasons to think that they will continue to do so. What that means, though, is two things. One is that the more we offload human decision-making into autonomous machines that we do not understand, the less capacity we have to exercise our agency, including our agency for good. Our agency for evil too, but our agency for good. So from my perspective, this is a particularly important time for as many of us as possible to realise that what we do now, what we choose now, who we elect, how we consume, how we live, these are choices that have the potential of making an enormous difference for ourselves now, as well as for our children and our grandchildren. The more we recognise that we have agency and take that agency, the better chance I think we have of outweighing the elephants.
Andrew Dubber
So can you reassure us, as a professor of ethics, that the arc of history curves towards justice?
Charles Ess
Yes. But not without pain, and not without sacrifice, sometimes enormous sacrifice.
Andrew Dubber
Are you thinking of a specific example?
Charles Ess
Sorry. We’re close to the observances for Hiroshima and Nagasaki. But I’m also thinking of, I mean, you name it: victims of whatever war is going on right now. There’s no shortage. Unfortunately, there’s no shortage of injustice or terror around the world.
Andrew Dubber
Yeah. But the answer, I guess, is to work on making sure that we design systems that pull in the other direction?
Charles Ess
Pull in the other direction, and don’t forget that we’re in the system, that we create the system, and that we can guide the system. And that we can do things apart from the system that will make enormous differences. I mean, it’s perhaps trite and cliched, but I still think it’s true: it doesn’t take a lot of people to make an enormous difference. It’s one of the really interesting things about human history; it’s sometimes the striking individuals who seem to come out of nowhere. I mean, three years ago, if you had said that this young girl from Sweden, sitting outside of Parliament by herself on a school day, was going to change the world, people would have said you were out of your mind, completely irrational. And yet, something extraordinary has happened. At the same time, I want to lift up these moral heroes who have the courage to do something that sometimes will take off in an extraordinary way, but not at the cost of recognising that all of us have these kinds of capacities, if at a lesser scale or with less dramatic impact. And the more we choose, the more we recognise that we can turn things, then I think the chances are clearly much greater that, yes, the arc of history will turn in the directions that we, or at least I, think are best in terms of emancipation, equality, greater sustainability on the planet, those kinds of things.
Andrew Dubber
This has been absolutely fascinating, and I kind of feel like I could keep this conversation going for a lot longer. But I guess what I want, as a pragmatic takeaway for people listening to this, is: how should we then live?
Charles Ess
Live a good life. Be as reflective as you can about what you’re doing. Be careful, in the literal sense but also the figurative sense. Try to exercise as much care as possible, both towards the people around you, the people you know, and those you don’t know. Take a chance on empathy. Speak up. Ask people what you can do. If we can use religious language, one of my colleagues said something I think very striking just a few days ago. She said: the method is to treat others better than yourself. That’s where God is. And what I think is true, from both scholarly and experiential approaches, and you can find this in the literature on the psychology of happiness and that kind of thing, is that in connecting up with, in giving something of yourself to, another person, or to something that is in some ways greater than you and works on the side of greater compassion, greater empathy, and so forth, almost everybody finds that their lives are much more meaningful in doing that. I can’t think of a religious tradition or ethical framework that in some way or another doesn’t support that view. If you interrogate the literatures of so-called mysticism, Buddhism, et cetera, it circles down to a very simple truth. I think it’s true. It’s a very simple idea: give a little bit more of yourself to something else around you besides yourself. And yeah...
Andrew Dubber
That’ll make a difference. Charles, thanks so much for your time.
Charles Ess
Thank you. I hope some of this is useful.
Andrew Dubber
I don’t know how we’d measure it, but yeah. Thank you. That’s professor, philosopher, and good ethical human being Charles Ess, and that’s the MTF podcast. If you enjoyed that and you’re interested in some related material about AI ethics and internet culture, do check out some past episodes of the podcast. Start with fellow internet research pioneer Nancy Baym’s episode, then AI expert Christian Guttmann, internet activist and author Cory Doctorow, philosopher, composer, turntablist, and polymath Paul Miller, aka DJ Spooky, and the special episode we did at MTF Örebro at Örebro University about a month back with AI professor Amy Loutfi, music tech CEO Niclas Molinder, and Michela Magas, who you know as MTF Labs founder and advisor to the European Commission on things like AI, innovation, and the creative industries. I can thoroughly recommend these conversations; they’re all with outstanding people with brilliant insight, each shedding more light on the same territory. Don’t forget to spread the word, press the star button, like, share, rate, review, and most importantly, subscribe, and we’ll talk soon. Have a great week. Cheers.