Virginia Dignum - Artificial Responsibility

by Music Tech Fest | MTF Podcast

Virginia Dignum is Professor at the Department of Computing Science at Umeå University, where she leads a research group on Social and Ethical Artificial Intelligence. She’s the author of Responsible Artificial Intelligence - and an advisor to the European Commission and the World Economic Forum.

She’s currently working on one of the largest AI research programmes in the world as chair of the Wallenberg Foundation’s WASP project where she is Scientific Director for Humanities and Society.

Virginia Dignum on Wikipedia
Virginia Dignum on Twitter

AI Transcription

SUMMARY KEYWORDS

ai, people, system, realise, world, social science, machine, virginia, artificial intelligence, identify, problems, understand, programme, data, sweden, policies, type, cats, working, swedish

SPEAKERS

Virginia Dignum, Andrew Dubber

 

Andrew Dubber 

Hi, I’m Andrew Dubber. I’m the director of Music Tech Fest, and this is the MTF podcast. On previous episodes of this podcast, I’ve kind of made a point, wherever possible, to sit down with guests in person and interview them. I was in Berlin to interview Daniel Haver, the CEO of Native Instruments; I interviewed Björn Ulvaeus from ABBA in Austin during South by Southwest last year. I’ve interviewed people in Sweden, Croatia, England, Scotland, the US, Germany and New Zealand. And I really like to be face to face when we do this. And of course, broadly speaking, that’s no longer possible in the current environment. So it’s perhaps ironic that someone I’ve been meaning to speak with on the programme for quite some time is my first Zoom call podcast interview, and she lives in the same town as me in the north of Sweden. Virginia Dignum is a globally recognised expert on artificial intelligence. She’s Professor at the Department of Computing Science at Umeå University, where she leads a research group on social and ethical artificial intelligence. Her recent book is called Responsible Artificial Intelligence, and it was named one of the world’s best science books of 2019 by Springer. Virginia is currently working on one of the largest AI research programmes in the world, funded by the Wallenberg Foundation, which I suppose is kind of like the Swedish Bill and Melinda Gates Foundation. Her part of it focuses specifically on ethical and human-centred AI. She advises the EU, she’s on the World Economic Forum Global AI Council, and she’s advising all manner of international policy bodies on matters of human-centred artificial intelligence. And what with everything that’s going on in the world right now, that’s more important and pressing than ever. I caught up with Virginia, working from home, this week. 
And we talked about AI: what machine creativity might entail, as well as the potential copyright implications of that; what responsible AI actually means; and how she’s currently using AI to help beat the global coronavirus pandemic, from just down the road. Here’s Virginia Dignum. Virginia Dignum, welcome to the MTF podcast. You’re the Professor and Wallenberg Chair on Responsible Artificial Intelligence. What exactly does that entail?

 

Virginia Dignum 

It entails that we are looking at the development and use of artificial intelligence from the perspective of how it impacts and influences people’s lives, and taking into account the responsibility of users, of developers, and of other stakeholders to ensure that things are done in a responsible way.

 

Andrew Dubber 

Right. So let’s start with AI. How do you define exactly what AI is?

 

Virginia Dignum 

Wow. People have spent many hours, over many centuries, trying to answer that.

 

Andrew Dubber 

And it took you one chapter of your book. So let’s see if we can fit it into a podcast. Okay, what’s AI?

 

Virginia Dignum 

Yep. So the first thing we need to really understand is that AI is software. It is an artefact that people build. It is not magic; it is not something which just happens to us. It’s not something which comes from outer space and happens. It’s something which is consciously developed and engineered by people for some purpose, which is also determined by us. That, I think, is the most important part to understand. Then, how does it work? What distinguishes it from other types of software is basically the capability that these techniques have to analyse patterns in current situations and current contexts, and to use that analysis to come up with potential new suggestions and new insights. I don’t really like to talk about predictions. I don’t think that AI makes any predictions whatsoever: it can correlate or extrapolate from existing data, but the prediction is something that we might or might not decide to make ourselves, based on what the AI is identifying. From a more technical perspective, these types of systems are systems that are indeed able to learn, or to adapt to the context, by analysing the input and the data that they receive, and they are able to change their results based on new data. They do that, very often, in an autonomous or automatic way. It’s not autonomous in the sense of philosophical autonomy, but much more in the sense of automation: the system doesn’t need direct user input to come up with its results. And also, more importantly, the system interacts with us, with people, in different ways. That means that the system, the AI itself, the way that I like to look at it, is not only the technical components, but also the socio-technical environment in which we use the system. So when we talk about responsible AI, it’s not that I want to give the software the responsibility for the results. 
It’s more that I want to ensure that there is a socio-technical context, that there are institutions around the software, which are able to take the responsibility for what the system does or doesn’t do.

 

Andrew Dubber 

So let’s start with this idea of "responsible". Responsible by what measure? Who gets to decide what’s responsible?

 

Virginia Dignum 

That’s a very good question. And indeed, it’s one of the things which we work on. In a sense, responsibility is in the eye of the beholder. Users need to see responsibility for the potential impact and the potential results of these systems, and then it’s up to developers and policymakers to ensure that that responsibility sits somewhere: that there is somewhere we can go in terms of liability, someone who can be held accountable for what the system does. It also means that we need to take into account that it is important to have some level of transparency about what the system is doing. And there again, you can understand transparency in many different ways. But basically, at the very minimum, we need to have some openness about who is developing this system, why the system is being developed, why I am interacting with the system, what kind of potential impact the system can have on my own life, and where I can go if I have any issues which I would like to discuss about the system.

 

Andrew Dubber 

So, from my understanding of something like machine learning: you give it a lot of cases, like the example that’s always given of photographs of cats. You give it lots and lots of photographs and say, these ones have cats, these ones don’t, and the AI system will then later be able to identify photographs of cats. But my understanding is that it won’t necessarily be able to easily tell you why it thinks there’s a cat in a photograph. So does the idea of transparency in decision making actually hamper, or restrict, what might be possible with AI? Is it a problem to try and make these decisions transparent, to make them explainable?

 

Virginia Dignum 

No, I don’t think so. Indeed, the example of the cats is the one that is usually given. The first thing here is that we have to understand that the AI system, the machine learning, has no idea what a cat is. Even after it has seen 20 million pictures of a cat, it will never be able to tell you that a cat is an animal which meows and doesn’t bark. It’s not able to tell you that this thing eats, or that this thing sleeps. So it has no idea what a cat is; it is just able to recognise some set of characteristics of those pictures, which it then takes as being a cat. Often, the example which is given as a counter-example to this is that you can train machine learning to identify wolves in pictures, and the distinguishing factor of those pictures turns out to be the amount of white pixels: somehow, most pictures we have of wolves are in the snow, and so the machine learns that if there is a lot of snow in a picture, it’s more likely to be a wolf than a dog.
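The wolf-and-snow failure Dignum describes, where a model latches onto a correlated but irrelevant feature, can be sketched in a few lines of Python. This is entirely illustrative (the data, numbers, and "training" method are invented, not taken from any real classifier): a model trained only on the fraction of white pixels in an image will happily call a snowy dog a wolf.

```python
# Illustrative sketch: a classifier that "learns snow", not "learns wolf".
# The single feature is the fraction of white pixels in an image, because
# in the (synthetic) training data wolves happen to appear on snowy backgrounds.
import random

random.seed(0)

def make_dataset(n=200):
    X, y = [], []
    for _ in range(n):
        X.append([random.uniform(0.6, 0.9)]); y.append("wolf")  # wolves in snow
        X.append([random.uniform(0.1, 0.4)]); y.append("dog")   # dogs on grass
    return X, y

def train_threshold(X, y):
    # "Training": use the midpoint between the class means as a decision boundary.
    wolf_mean = sum(x[0] for x, lbl in zip(X, y) if lbl == "wolf") / (len(X) / 2)
    dog_mean = sum(x[0] for x, lbl in zip(X, y) if lbl == "dog") / (len(X) / 2)
    return (wolf_mean + dog_mean) / 2

X, y = make_dataset()
threshold = train_threshold(X, y)

def predict(whiteness):
    return "wolf" if whiteness > threshold else "dog"

# A dog photographed in snow (whiteness 0.85) is misclassified as a wolf:
print(predict(0.85))
```

The model never saw a wolf or a dog, only pixel statistics, which is exactly the point Dignum makes about the system having "no idea what a cat is".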

 

Andrew Dubber 

Well, okay, I would not have thought of that.

 

Virginia Dignum 

Yeah, and even worse than that: we can train machines, and we do use machines, for things more important than cats or dogs, to identify, for instance, cancer cells. One specific example which has been given is that when doctors suspect that someone has cancer, they are more likely to make a higher-resolution type of scan than if it’s just a routine scan. So they use different types of machines. The machine learning algorithm learns to identify some characteristics of the pictures, things like scales or dots, the kinds of things which have nothing to do with the image of the cancer cell itself, but with the formatting of the pictures, and then it associates the ones which have the higher resolution with a higher probability of cancer. That’s where responsibility starts. We are training machines on data, we do not always know exactly what patterns the machine learning algorithm is identifying as being whatever we want it to be, cancer cells or dogs or cats or whatever, and then we are basing our own decisions on the results from these machines.

 

Andrew Dubber 

It sounds like what you’re describing is not so much artificial intelligence, in the sense of abdicating the decision-making process to a machine, but some sort of enhanced intelligence for humans: a tool for adding to how we can make our own decisions.

 

Virginia Dignum 

Yes. And I do think that the better solution is to have a combination of machine intelligence and human intelligence. Machines are great at identifying whatever pixels we need them to identify, in a way that they never get tired, they never make mistakes, they will do the same thing again and again in the same way. Like I said, they don’t understand what a cat is. We do understand what a cat is, but we’re not very good at consistently and tirelessly looking at cats on the internet. So if we combine both things, we get the best of the two worlds.

 

Andrew Dubber 

And likewise, I guess, from a medical perspective. You’ve been tracking policy responses to the coronavirus using artificial intelligence. Can you tell me a little bit about that?

 

Virginia Dignum 

Yes. So what we realise is that, at this moment, the data that is available about coronavirus is extremely unreliable. Most of the data that we have is the true positives: the people who have been tested and have tested positive for corona. That is the only part of the data that we can trust. We know a little bit about the people who have tested negative, so we know the confirmed cases, but we don’t know about the people we didn’t test. And in many cases we don’t know how accurate these tests are, or how well they were performed. So there is a huge difference in the data about the coronavirus epidemic between different parts of the world, and also in terms of the quality of that data. So our approach is: okay, if that is the case, let’s see if we can understand the policies that different governments are applying in terms of other types of features. So we are looking at demographics, we are looking at the economic effects, we are looking at things like: what would be the impact of, for instance, doing one hundred percent testing? It’s not really possible; no country in the world has the capability to do one hundred percent testing. But we can, in a simulated environment, identify what the difference would be between 100%, 80% or 60%. And then we can support governments and tell them, okay, the difference between 60 and 80% is maybe not so big that it justifies the effort and the expense and the costs of doing the extra 20%. And we are also looking at, for instance, the effect of closing schools. And the things which we are seeing (and I stress, this is a simulation, so it’s based on synthetic, imperfect data): we create a bunch of fake agents which behave more or less like people, let’s say. We put them all in a virtual city, which at this moment has 5,000 inhabitants, and they all have their own types of jobs, their own types of families, which we can of course play with and change as we go. And then we see what the effect of closing schools is. 
One thing we can see, depending on the demographics, is that people will put the children with their grandparents while they go to work, which is probably not what we want, because we want to isolate the elderly population. Another thing we see is that if people are stuck at home, working and looking after the children at the same time, they will take any opportunity they can to go outside, like at the weekends. And at the weekends they will go to places where they are more likely to encounter other people who are not normally in their own circles. Because this idea that everybody can infect everybody else: it’s true, but in a sense we all live in very small-world communities. So even though you and I both live in Umeå, in normal life we never meet each other, because you are in your own circle and I’m in my own circle. So at a certain moment, within those circles, you can’t really infect more than the people who are there. But people do go out of those circles, and we see that increase, in fact, in cases where people are forced to be contained for quite some time. I’m not saying that these are real effects, but they are potential effects which need to be taken into account when the policies are being made.
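The "small circles" effect Dignum describes can be illustrated with a toy agent-based simulation. This is a purely editorial sketch (all parameters and mechanics are invented, and it bears no relation to the real WASP model): agents live in fixed contact circles, and with some probability an infected agent contacts a random stranger instead, standing in for "weekend mixing" outside one's own circle.

```python
# Minimal agent-based sketch of infection in small-world "circles".
# Illustrative only: parameters are invented, not from any real epidemic model.
import random

random.seed(1)

def simulate(n_agents=1000, circle_size=10, p_transmit=0.3,
             p_leave_circle=0.0, steps=30):
    """Agents live in fixed circles of `circle_size`. Each step, every infected
    agent contacts someone: usually within their own circle, but with
    probability `p_leave_circle` a random stranger. Returns the final
    number of infected agents."""
    circles = [list(range(i, i + circle_size))
               for i in range(0, n_agents, circle_size)]
    state = ["S"] * n_agents   # S = susceptible, I = infected
    state[0] = "I"             # one initial infection
    for _ in range(steps):
        for agent in range(n_agents):
            if state[agent] != "I":
                continue
            if random.random() < p_leave_circle:
                contact = random.randrange(n_agents)              # stranger
            else:
                contact = random.choice(circles[agent // circle_size])
            if state[contact] == "S" and random.random() < p_transmit:
                state[contact] = "I"
    return state.count("I")

stay_in_circle = simulate(p_leave_circle=0.0)   # infection trapped in circle 0
weekend_mixing = simulate(p_leave_circle=0.2)   # contact can escape the circle
print(stay_in_circle, weekend_mixing)
```

With no out-of-circle contact the infection can never leave the first circle of ten agents, whereas occasional mixing lets it seed new circles, which is the structural point behind the weekend-behaviour observation.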

 

Andrew Dubber 

If policies are made based on your kind of SimCity simulations of how people should behave, and it turns out that people don’t behave like that, whose fault is that?

 

Virginia Dignum 

It’s the fault of everybody who took the decision to make that policy. But I’m not saying that we should make policies based on my SimCity. What we want with our SimCity, if you want to call it that, is to provide a playground to experiment with many, many, many different policies. So I would not really agree that it should be used to test one policy and see whether it says that people take the kids to the grandparents, or don’t take the kids to the grandparents. Really, you need to try a few hundred different types of approaches, not so much to know what the result is (I don’t think the result is what is important), but because by doing this, you start getting insights yourself about potential effects, right? And that is where I think the policymakers need to take the decisions for the policies that they are making.

 

Andrew Dubber 

That sounds like what you’re doing is very much humanities, social sciences, cultural studies, but from within a computer science department. How interdisciplinary can you be?

 

Virginia Dignum 

Yeah, indeed. I do have a background in computer science and mathematics myself, and I’ve been working in a very mixed-discipline world for a long time. Half of my professional life I worked in industry, in consultancy and development, and half of it I’ve been working in academia. And in both cases, I have always been working in very multidisciplinary environments. And like I said in the beginning, AI is not only technology. AI is the people, is the humanities, is the social science, is the interaction between all these things. It’s not just about engineering systems because we can engineer them; it’s about engineering systems because we can have some kind of positive impact on society. And this kind of understanding of what the impact on society is, is not something which we usually do as engineers, but it’s exactly what humanities and social science do. That is why I’m very happy and honoured to be leading this initiative from the Wallenberg Foundations on AI, autonomous systems, humanities and society.

 

Andrew Dubber 

So what actually is the Wallenberg Foundation’s interest in AI?

 

Virginia Dignum 

As far as I understand (and of course, I cannot speak for the Wallenberg Foundation, because I’m just a researcher), they have always been very much focused on improving, and being beneficial for, Swedish society and Swedish industry. A few years ago they realised that Sweden would need to take a step forward in AI research in order to be able to keep up with the demands of Swedish industry and Swedish society. So they created this WASP programme, which is actually the reason I’m in Umeå: I was approached and employed through the funds that were made available for universities to attract AI researchers to Sweden. By now WASP is one of the largest research programmes on AI in the world in terms of funding. And about a year ago, they realised, exactly as a continuation of this, that it’s not just about providing the technology and the computer science research on AI: if we want AI to be beneficial for Swedish life and Swedish industry, we also have to understand what its impact is. And for that they created this new programme, which started last year, and which is exactly about analysing that impact. It’s a programme focusing on the social sciences and humanities research.

 

Andrew Dubber 

One of the things that struck me as really interesting about how your work is described online, at least, is that part of your research is about the formalisation of social interaction. Can you unpack that a little bit for me? The bit that I’m sort of stuck on is the idea that social interaction can be formalised.

 

Virginia Dignum 

I don’t think social interaction can be formalised. But if we are building systems (again, engineered artefacts) that are going to interact with us in our social context, we need to design those systems to be able to do that in a way that fits with the way that we live and interact, in whatever form we need. And those systems are, at their core, formal mathematical functions, because that’s what software is: if we cannot express how the system should work as a function, we cannot build the software. So we do need to have some way to formalise social concepts, such that we can build systems that can interact with us. Of course, like any other formalisation, or any other engineering artefact, it is an approximation of reality. The aim is not to formalise our interactions as such; it is to build these systems. One of the things we are looking at at the moment is this notion that we want AI systems to be fair, or to be bias-free. What do we mean by fairness? And what is the mathematical function that guarantees me that the system is fair?
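To make the "mathematical function for fairness" concrete, here is one common formalisation from the fairness literature, demographic parity. This is an editorial illustration of what such a function can look like, not a definition Dignum endorses in the interview: it measures the largest gap in positive-decision rates between groups.

```python
# One possible formalisation of "fair" (illustrative): demographic parity.
# A system is "fair" by this measure if its rate of positive decisions
# is (roughly) the same for every group.
def demographic_parity_gap(decisions, groups):
    """decisions: list of 0/1 outcomes; groups: parallel list of group labels.
    Returns the largest difference in positive-decision rate between groups."""
    rates = {}
    for g in set(groups):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Group "a" gets a positive decision 75% of the time, group "b" only 25%:
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(decisions, groups))  # 0.75 - 0.25 = 0.5
```

This also illustrates her wider point: choosing demographic parity over, say, equalised error rates is exactly the kind of contextual, participatory decision that cannot itself be read off from the mathematics.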

 

Andrew Dubber 

Because mostly people say that they want things to be fair, but what they mean is they want them to be more unfair in their favour.

 

Virginia Dignum 

Yeah, they want things to be fair for themselves. Yes.

 

Andrew Dubber 

Yeah. So again, the question comes back to: who gets to decide what’s fair?

 

Virginia Dignum 

Yes. So our approach to that is that, in a sense, we don’t care: you give us a definition of fair, and we try to build a system which meets your definition. But that’s the easy answer; that’s me, the engineer, speaking. Me, the more multidisciplinary person, would say that in order to get a notion of fair that really fits the context in which we want to be fair, you need to take a participatory approach. You need to look at all the different potential stakeholders in your system (the users, the developers, the policymakers, society in general, and the ones which might be indirectly affected by it) and work with them. And there are techniques, again from the social sciences and from the humanities, which enable people to agree on a certain definition of fairness, which is definitely not yours or mine, but which is something in which both of us are sufficiently confident. We do that in many other types of social systems. We do that in the democratic process: we accept the result of a democratic election, even though it is usually not the result that any individual him- or herself was expecting.

 

Andrew Dubber 

I think it’s also been a long time since anybody described a democratic election as fair.

 

Virginia Dignum 

Well, that was not the point I wanted to make. It was more about the process: you accept the process of a democratic election, so you accept the outcome. We take the result, we accept the outcome, and we work from the premise that that outcome is the one which is the most beneficial for all of us. So that is the kind of process which can be followed in order, for instance, to understand what the outcome of fairness for a certain AI system would be.

 

Andrew Dubber 

Sure. We spoke to Charles Ess, who is an ethics professor, and he’s obviously very interested in the ethics of AI. What’s the overlap between what somebody like he does and what people like you do?

 

Virginia Dignum 

I think that we come from different backgrounds, from different sides of the spectrum. But in a sense, we are both looking not so much at the ethics of the system itself, but at how the system interacts with and influences our own ethics. There are all these examples (you have probably read about them) of teaching AI systems to be ethical, all those trolley-problem kinds of things: whether the self-driving car should kill the old lady on the one side or whoever is on the other side, all those kinds of things. And in a sense, I think, let’s put it this way: those are not the most interesting problems. The most interesting problem is how AI, and the use of AI, interferes with our own sense of morality and ethics, and how that is going to change as we use AI and our interactions become more mediated by these types of systems.

 

Andrew Dubber 

Right. The two main AI professors that we’ve dealt with in our labs at MTF have been Danica Kragic in Stockholm, at KTH, and Amy Loutfi. And now, obviously, we’re speaking with you. Is this something that you think women are particularly good at, when it comes to this topic?

 

Virginia Dignum 

I don’t know. AI is a very male-dominated field, especially the more mathematical and computer science part of it. You do see far more women working on the societal aspects of AI than in other areas of AI. It’s not that there are no men working on the social aspects. I don’t really know if it’s a more female way of seeing, and I never like to make these kinds of generalisations, but women tend to approach things in a broader way, and men tend to be more specialised in a specific problem. And maybe these societal aspects are things which need to be taken on in a broader way, rather than with a very focused and specialised approach. But I wouldn’t dare say it’s a male/female type of thing.

 

Andrew Dubber 

In a broad way, meaning?

 

Virginia Dignum 

Taking more things into account, taking the broader picture into account, not so much the specific details.

 

Andrew Dubber 

I’m interested in your journey, because you’re a long way from Portugal, and you’ve been in Delft, obviously. What’s your journey? Where did it start, and how did you come to be where you are?

 

Virginia Dignum 

Starting at which point? I was born and raised in Portugal, and I studied in Portugal: mathematics, with a specialisation in computer science. When I started, in the 80s, we didn’t have a specific computer science degree in Portugal, so I did most of my degree in mathematics and then specialised where I could in computer science. Already at that time I was working on, and very interested in, AI systems, and like I say, the very first AI programs that I developed were back in 1986, so a very, very long time ago. At that time we were much more concerned with trying to understand how to represent human knowledge in a machine-understandable way, which is a very different approach from the data-driven way we mostly look at AI now. But both of them are still very much main strands within the AI discipline. I moved to the Netherlands for a combination of study and love (my husband is Dutch), so it was a good combination. After I finished my studies in the Netherlands, a Master’s on AI at the Free University, we moved around the world quite a lot. We worked in Swaziland, where we developed and implemented the very first computer science degree at the University of Swaziland. Swaziland is a small country in Africa, now called Eswatini, near South Africa. We were in Australia for one year. And now, for the past year and a half, I’ve been here in Sweden. Like I said, I’ve worked both in industry and in academia, so I’ve done a lot of different things, which in a sense makes it easier to move and to do other things than if we had been working all the time in the same place and the same position. 
And I was very happy and very excited to come to Sweden, not only for the possibilities I got as a Wallenberg Chair at Umeå University, but also now for the possibility to really shape this new area of research on AI, humanities and social science, which I do believe is the direction that we have to take in the use and the development of AI systems: not yet another improvement to the machine learning algorithms, but much more about how we take it, again, from the broader perspective, and consider the broader picture of impact and technique together.

 

Andrew Dubber 

Right. And now, of course, you’re advising at the highest levels of policy. I mean, like Michela Magas from MTF, you’re advising the European Commission on things like AI, but you’re also now part of the World Economic Forum Global AI Council. What does that do, exactly?

 

Virginia Dignum 

Basically, the World Economic Forum AI Council advises the World Economic Forum bodies on AI issues, and as a council they are also working on specific white papers and specific briefs on issues related to AI. At this moment, like a lot of other people, we are working on the effects that the COVID-19 situation is having on everything, especially on data, on surveillance, and on the control of people and populations. So we are looking at whether we should be saying something and reflecting on that, and at how we can best advise the World Economic Forum on those types of issues. We try to stay one step ahead, so that when the policymakers want to know something, we have something to provide to them.

 

Andrew Dubber 

So you have answers already?

 

Virginia Dignum 

We don’t really have answers on that one. But we are quite, let’s say, concerned about, or looking closely at, the fact that, increasingly, a lot of countries are opening up and relaxing their regulations concerning data collection, data usage, the tracking of people, and surveillance, because of the crisis. We can fully understand that that is something which might be needed now. But the point is: once the genie is out of the bottle, can we put it back in the bottle once the situation goes back to some type of normal? And those are the things which we are trying to discuss at the moment.

 

Andrew Dubber 

Obviously, our community is really interested in the overlap between creativity and AI, and in the idea of, say, for instance, who owns something if an AI creates a piece of music. How do you think about these sorts of problems?

 

Virginia Dignum 

If you follow my view on AI, that it is an artefact, a tool, then the creativity is with the people. It’s with the ones who develop it, the ones who employ it in a creative way, the ones who provide the facility for the AI to generate some type of music or art or whatever. I have been working on this, and talking quite a lot with the researchers at KTH, like Bob Sturm, who does machine-generated music, folk music. I would say that, if anything, he and his team are the creative ones. They are the ones who think up the ways that AI could be creative. AI is the tool.

 

Andrew Dubber 

So what do you think are the biggest opportunities for AI right now?

 

Virginia Dignum 

The biggest opportunity for AI is the possibility to enhance human intelligence: to make us do things in a better way, and to take into account the differences and the societal impact, or the societal possibilities, of what we are doing.

 

Andrew Dubber 

And the dangers?

 

Virginia Dignum 

It’s the same: it’s how we are using AI to influence and to enhance our own capabilities. We are extremely creative at getting around these types of things, so it can also enhance our capabilities for creating bigger problems.

 

Andrew Dubber 

Are you optimistic about that?

 

Virginia Dignum 

Yes, I’m an optimistic person. And I think that, in the end, it’s unavoidable. One of my colleagues always says that there is no business model for unethical AI, for irresponsible AI. So in a sense, even if we take a very business- and market-oriented approach, the main opportunity lies in the responsibility and the trustworthiness of the system.

 

Andrew Dubber 

Wonderful. That sounds like a really positive place to leave it, Virginia. Thanks so much for your time today. Thank you, it was really nice talking to you. I appreciate that. That’s my neighbour, or near enough: Virginia Dignum, Professor at Umeå University, Wallenberg Chair on Responsible Artificial Intelligence, member of the EU’s High-Level Expert Group on AI, and Scientific Director of WASP-HS, the research programme on AI, autonomous systems, humanities and society. And that’s the MTF podcast. You can follow Virginia on Twitter. She’s @vdignum, that’s V-D-I-G-N-U-M, and she links from there to all manner of interesting things that she’s working on. I’m @dubber on Twitter, and Music Tech Fest is @MusicTechFest pretty much everywhere. And of course, you can share, like, rate and review this podcast. If you find yourself with a little bit of extra time, you can go back through and listen to older episodes, something like 60 hours’ worth if you do it all back to back. But why would you? We’ll be back next Friday with more of the interesting people from the worlds of music, technology, innovation, creativity, arts, science, academia, and industry. Stay safe, wash your hands again, and we’ll talk soon. Cheers.