The Philosophy of AI with Dr. Mark Coeckelbergh

Dr. Mark Coeckelbergh contemplates the messy reality and political nature of AI, the interplay of technology with society, and the impact of AI on democracy.

KIMBERLY NEVALA: Welcome to "Pondering AI." My name is Kimberly Nevala, and I'm a strategic advisor at SAS. It's been my absolute pleasure to be joined by a diverse group of thinkers and doers as we explore how we can create meaningful human experiences and make mindful decisions in the age of AI.

In this episode, I'm so pleased to be joined by Dr. Mark Coeckelbergh. Mark is a professor of philosophy of media and technology and the vice dean of the Faculty of Philosophy and Education at the University of Vienna. Mark joins us to discuss the philosophical and political underpinnings of AI. Welcome, Mark.

MARK COECKELBERGH: Hi. Thanks for inviting me.

KIMBERLY NEVALA: Thank you so much for being here. So tell us a little bit about what drew you to study the intersection of the humanities, and philosophy in particular, with technology?

MARK COECKELBERGH: I was doing more standard philosophy before. And then I've always been interested in societal issues. And I figured out that some of the most pressing societal issues are very much linked to technology, so I started researching that and got involved in some more hands-on practical ethics projects. And that's how I slowly moved into this field, and then read up on the philosophy of technology and started contributing to that field.

KIMBERLY NEVALA: Well, you have been a prolific contributor. You have an absolute wealth of research projects and an amazing number of books on the topic. And we could spend, I think, literally hours on each one of them.

So I'm wondering instead if we could maybe just get a bit of a recap of the narrative arc, or some of the different problems you were interested in as you moved from things like Human Being @ Risk, which, if memory serves, is back from 2013, through to New Romantic Cyborgs to more recent work on AI Ethics and the Political Philosophy of AI. Is there a common narrative, thread, or research arc that has run from A to Z there?

MARK COECKELBERGH: Well, I've always been interested in a number of topics. And existential issues are still one of my interests, like in the book on Self-Improvement, for example, or in a forthcoming one. But, yeah, I think what I did was move from more theoretical discourse to practical, ethical, and political issues about emerging technologies.

Yeah, first it was robotics. Then artificial intelligence proved to be a very fruitful area of inquiry. And I think it's still philosophically interesting, too.

KIMBERLY NEVALA: Recently you wrote about the political philosophy of AI. And you and I connected over some of that writing. What is political philosophy? What is the problem space or the hypothesis that it explores?

MARK COECKELBERGH: Yeah. I guess the broadest definition of political philosophy would be that it's about power. Often it's more narrowly defined as being about states and their relation to citizens.

But I think it can be defined much more broadly - influenced by Foucault, for example. And that's also the approach I take in the book. And I take an even broader view by asking, is it really about humans, or do other entities also have politics? Should our politics be relevant to them? Is our politics somehow relevant to them? So, yeah, I work with a broad definition.

At the same time, I do use traditional theory, more limited theory as well, because I think it's good to always make a bridge between, on the one hand, the traditional theories, and on the other hand, the very contemporary problems that we face today and that we most likely will face in the near future.

KIMBERLY NEVALA: So let's talk a little bit about that bridge between some of those maybe more traditional theories and the contemporary problems of AI. It's striking in some of your writings, I believe, you've said that AI - and this may be applicable to technology more broadly - but that AI is inherently political. So when we say AI or technology is inherently political, what do we mean by that? What is politics in this context?

MARK COECKELBERGH: Yeah, I think we usually think about technologies as tools, as instruments, as things we use for goals. It could be political goals. But in fact, when we use technology, we do much more. We are actually influenced by the tool we are using.

For example, if you're a social movement, and you're using social media, it's not like you have a tool like a hammer that you completely control and you know exactly what you're doing and you know your goal, like putting in a nail, and you're using your instrument and that's it. What's actually happening with the technologies we have today is that you don't really know what's going to happen.

So you might use social media initially to push your message, but then the recommender systems and all the operational stuff that has to do with the algorithms, how they work, is going to spread your message perhaps globally and change a lot of things. And that's not completely under your control. So the technology has all these effects, unintended effects.

And they're political because they change the way we live together. They are also political in the narrow sense of having influence on elections. But they also really change the relationships in society. And so in that sense, it's definitely non-instrumental and political.

KIMBERLY NEVALA: I recently spoke with Roger Spitz; I don't know if you're familiar with him. And he said one of the areas he thinks about and is most concerned about is hidden influence: where systems are influencing human decisions in ways that we're not even aware of. So, not so much the decisions that we allow systems to make autonomously, but the decisions we delegate to machines that then influence our own behavior. I get a little bit of that same thread in what you're talking about.

So in that world where there may not be a direct link between cause and effect - where you may not understand the implications, or next-order implications, or effects of something that you're putting out in the world using or through technology - how does political philosophy help us frame those issues in a way that we can better understand them and then think about addressing them?

MARK COECKELBERGH: Yeah. It offers some concepts. And the difference is that usually we just shout these concepts at other people, especially on social media. But we don't really think about what they mean. And so what it allows is a more differentiated discussion.

For example, take freedom: people often feel threatened in their freedom. And they might even be right. But they don't always know what kind of freedom. And maybe one person's freedom is the lack of freedom of someone else.

And these kinds of issues, they come up, and they should be discussed, but people don't have the vocabulary to discuss them in a more sophisticated way. And I think that's what political philosophy can do: it can give us a more refined framework to have these discussions, to better understand what's going on, and then also to better articulate and argue for one's own position.

KIMBERLY NEVALA: So can you give us some examples of some of those concepts that are maybe a little too esoteric or conceptual? How does framing them with political philosophy actually change the narrative?

MARK COECKELBERGH: Yeah. One example is when people say that AI is biased. Usually it's treated like a symptom: OK, then, we just assume we know what it is when someone says it's biased. But whether something is biased, and even if it is biased, whether that bias is problematic or not, is very much a political question.

And what political philosophy can contribute here is saying, well, here are these different theories of justice. And with these theories of justice, we can clarify what exactly is wrong with this bias. And, yeah, I think by making things more explicit and outlining the arguments, one can then have a real, more profound discussion with other people. Whereas otherwise it's just my opinion against your opinion. That's a frustrating thing to do, and we don't really move on. We just have these social media discussions without further depth to them.

KIMBERLY NEVALA: And are you seeing that in practice out in the world today that people are taking on this perspective and using the tenets of things like political philosophy to help forward these discussions? Or are we still stuck in the never-ending and escalating opinion bubbles that we all have?

MARK COECKELBERGH: I think there's still a lot of that. I'd like to see a lot more refined arguments for why certain things are wrong.

For example, I'm working now on democracy and AI. And I think that we really need to be more explicit about, what do we mean by democracy? Is it just voting every so many years or is it more? What is it then? And why is AI a problem for that or even an opportunity?

And I just see that people lack the background in political philosophy. And I think that it would be good to bring that in. And in this way I think we can have a discussion about both technology and politics. And I think that's a good thing if we can do that.

You see it, for example, in the current discussions about whether it's the gun that's the problem or the person. Of course, a very familiar discussion now. I think that if philosophers of technology and political philosophers work together, they can give a more interesting view on this question.

Philosophers of technology can show that guns are not just guns, that they have this more than instrumental role, that they really shape how people deal with one another. And political philosophers can sketch the wider political framework within which it is possible for people to say that it's their freedom to carry guns, even in the light of what's happening now. So I think it's really helpful to have both disciplines work together to better understand what's going on, and then also to make better normative arguments.

KIMBERLY NEVALA: It would seem that also allows us to take, for lack of a better word, some of the emotion out of the discussion, because you don't necessarily have to come to a singular point of view. You can look at it from a lot of different angles.

And perhaps in the confluence of those angles - as in the old Venn diagram - find some points of commonality that you might not otherwise find if someone's coming just from the one perspective that the tool's the tool, for instance. And a tool in and of itself is incapable of harm; it's the person who wields it. And then somebody else saying, well, the person couldn't wield it if the tool didn't exist. But when you're having that conversation solely from those viewpoints, it's almost impossible, I think, to compromise.

MARK COECKELBERGH: Yeah. It helps to find common points. Of course, whether that takes away the emotional tension is another question. There are even people who argue, like Chantal Mouffe, that politics should be about antagonism, and that it's OK to have different viewpoints and that emotions should play a role.

But coming from political philosophy, I can bring up this point, whereas people tend to have standard views about what should be done. And here the question is, for example, is consensus always a good thing? Could it be that sometimes voicing your opinion in an emotional way can actually still play a role, if that means articulating, giving a voice to, a certain political view? Maybe it's not always bad. So that gets us into an interesting discussion about the role of emotions in politics.

And, yeah, that shows again that it's good to know these theories, to also know contemporary political philosophy as opposed to just knowing the ancients or, say, Hobbes or Mill. I think in contemporary philosophy of technology, there's a lot of stuff there that we can learn from.

KIMBERLY NEVALA: [CHUCKLING] And even as we were talking, I was thinking, well, we didn't even really define what we meant by emotional, right? So my view of what an emotional argument is, and your view, and someone else's might be very, very different because I'm not talking about theatrical drama necessarily. So perhaps, again, one of the lessons from the work that you do is about grounding that language in very real terms, I suppose.

And as you talked about emotion, it started harking back to some of the work in New Romantic Cyborgs. And I haven't gotten through the entire book, but I have to say I downloaded a bunch of your stuff, and I've just gone down the Mark Coeckelbergh rabbit hole, and it's fascinating.

As you look at that work in New Romantic Cyborgs today, do some of the core concepts still translate? Are they still relevant? And were you trying to also expose a little bit of that idea that when we think about technology or AI, a lot of times, even as technologists, we talk about it as if it's a strictly rational, logical thing? That there's not any, as you said, romanticizing or emotion involved in it. So--

MARK COECKELBERGH: Yeah.

KIMBERLY NEVALA: --talk a little bit about how that concept came about and how you see that playing out today.

MARK COECKELBERGH: Yeah. If we take, for example, the case of Lemoine arguing that an AI is sentient, conscious, maybe a person. I think that was an emotional and, in his words, kind of a religious argument. And it shows very much that what technology is and does is not just about rationality. Because technology used to be seen as applied science and applied rationality. We put rationality into things. And especially with early AI, when you put in a decision tree, very logical, and you put that reasoning into the AI, of course, the AI is going to be rational in that sense.

But I think the way we interact with contemporary technologies, including AI, is different because they do imitate human-like capacities. And they speak, for example. Natural language processing has indeed gotten much better. It's very impressive.

And so it's in the interaction with the machines, I think, that there is still that emotional side. And there's also the romanticizing and the narrating. Like it was almost a story about this locked-in person, a person who is locked in this technological system but cannot get out, cannot communicate that it's actually conscious, and that person has to be saved. And that person also is going to save the world maybe. So that's almost a Hollywood movie. And so there's a lot of romanticizing there.

Also if we look at how people talk about the metaverse and what a metaverse could become, it's very much a romantic story, I would say. And so if we understand that kind of element in our culture, I think we can better understand how humans experience technology, how humans talk about technology. And eventually, that also shapes what a technology is going to become.

So for thinking about a future of technology, you can't just say like, oh, that's all rubbish, and everyone is wrong about this, and we just need to interact with things. Well, at least one should understand what's going on. Even if the point is that we shouldn't develop human-like AI, it's very important to understand what's going on between humans and machines. And there's more and more interest in that now and much more research on it also. So I'm quite happy about that, to see that.

KIMBERLY NEVALA: Are you seeing more of those types of discussions? I suppose when we try to frame all these problems in very, quote, unquote, "hard" or "logical" concepts, it feels somehow more concrete. And possibly we have our own human bias towards thinking that those are more justifiable arguments.

And so I can imagine as we start to think about or bring in some of these philosophical underpinnings or softer concepts, if you will - I think they're really hard concepts, so I'm wincing as I say softer concepts - but maybe more amorphous or less well-understood ideas. Things that you can't just say that A is A and B is B in a very discrete way. Are more of those conversations happening broadly, whether it's within the work that you're doing in the expert groups or within organizations? Or do you still think this is the purview of research only?

MARK COECKELBERGH: I think it's starting, because many researchers are going outside of academia and doing the work of communicating their understanding of things. And I also think that some smart people in companies understand that things are more complicated on the human side, on the ethics side, and so on.

The problem is that often in organizations, there are other motives and other goals that take priority. And for a company, in this case like Google, it can be interesting if people talk about their technology from their point of view, from a commercial point of view. Whereas from a philosophical point of view, there are lots of things going on that maybe are much more complicated and that could be problematic. So there are all these angles to it. And that's, again, also where the politics comes in: it's one dimension. And all this, I think, complicates things.

It shows the complexity of things, because we are moving from a kind of world of technology and science that's purified of everything else to a world where there is the social, and the human, and the political. And with those things, it's not simply that A is A and B is B. Even so, we can still use the human sciences and philosophy to talk about it in a systematic and rigorous way, more rigorous than, let's say, just voicing opinions.

But still it's a very complex reality. And I think people from the world of technology have to realize that it is like that. And that things are more messy. And also that there can be some expertise about this mess. That there are actually people who study this, and that it's not just something soft that has to remain completely vague, but there are people who have studied this for many years.

And so I think we can work together - scientists and technology people on the one hand, and the human sciences and philosophy on the other hand.

KIMBERLY NEVALA: Yeah. And I've certainly been heartened as I watch the different collaborations across the public/private space, across academia and commercial entities, et cetera. Although thus far, it seems to me that there's still more talking about the need for it than active and productive collaboration. I don't know if that's a fair assessment or not, as it comes from just my tiny corner of the world.

MARK COECKELBERGH: Yeah. I think there's more and more collaboration, but much more time and resources need to go into it. So many more people need to be involved. We need to educate people so that there's a pool of people we can rely on, also to mediate between these different worlds and bring people together. Now it's, I think, just a limited number of people who can do it and who are very active, both in academia and outside, for example, in the world of business and tech.

KIMBERLY NEVALA: Now, if we take a step back towards what we might think of as the more traditional political environment. There is certainly a lot of work going on right now at various levels to try to think about getting out in front and regulating this technology. Or trying to put in place some standards and guidelines that will allow us to alleviate, if not mitigate, some of the harms that we know can and have happened with artificial intelligence systems.

I'm interested in your thoughts on the overall political landscape and how we're doing. How are some of the natural tensions within that landscape impacting our ability to regulate AI, for instance, at a national level, and then blowing that out to the global level? What are you seeing in terms of how we are able to deal with things politically today - the thing in this case being AI?

MARK COECKELBERGH: Yeah. I think the problem with new technologies like AI is that regulation is always a bit behind the actual development. So that's a challenge everywhere. What you also see is that there are very different approaches depending on the political system.

If you have a more laissez-faire kind of system, then within that culture, there likely won't be very heavy regulation of AI. Whereas if you have a system like in China, for example, that's a very different way of dealing with AI and indeed with people. Europe is somewhere in the middle there. So there are different political cultures.

And so it's very difficult to have one global AI strategy because there are these differences in political sensitivities and political culture. Nevertheless, given that AI is a global phenomenon - software moves everywhere, data moves everywhere - in that sense, the problem is global. And I do think we also need a global approach to the governance of AI. And big international organizations are now also busy thinking about the problem.

But what we don't really have is a proper political framework on that level, one that could also have some force rather than being just recommendations. So I see a big gap there for policy. And that has, again, to do with the fact that we don't have many supranational institutions. The EU, for example, is one that has a supranational aspect, but it's still often the nation-states, the big nation-states, who decide. And so it's very difficult at the moment to have these global institutions for the regulation of AI.

KIMBERLY NEVALA: It certainly seems to be a problem we haven’t cracked – even conceptually – at the global scale. Of course, the pandemic shined a spotlight on the difficulty of governance across the board – globally and nationally. One escalating source of tension, as you have pointed out, is the debate over the role experts play in a democracy. Can you talk about this tension and how this may impact how AI is regulated and adopted?

MARK COECKELBERGH: Yeah. In the case of COVID, we saw that experts suddenly played a much bigger role than they already do in democracies. And also that the democracies moved in slightly less democratic directions and took away people's freedom. That, in turn, was taken up by different political movements, which put their own view on it in response.

But I think it's an interesting phenomenon that our democracies can move more in that direction at certain times. And it brings up the more fundamental question: what is the role of scientific expertise and technology, and what should their role be in society?

So with regard to AI, there's a kind of temptation for policymakers to say, well, we're going to fully use AI to steer the behavior of citizens in beneficial directions, however that's defined. But, yeah, that brings up the problem of freedom, of course.

And as I explain in the book, it's partly a problem of negative freedom, in the sense that it's about literally taking away freedom - for example, obliging people to wear a mask or something. But it's also about autonomy and how you treat people. And in modern society we already see, in all kinds of institutions like hospitals and so on, as Foucault described, that there's disciplining.

But there's also, basically, not respecting the autonomy of people. You don't take people seriously as subjects who can reason and who can think for themselves. You're saying, no, we have to think for you. We are going to decide how to deal with this risk now. And we're going to regulate you in this and that way.

So that's taking away a different kind of freedom. You're not locking anyone up. You're not forbidding something necessarily, but you're treating people like non-autonomous subjects. Like you would treat small children. So there I also see a problem in our society.

And I think that problem will get worse when we use much more AI to profile people and analyze people. And then there's a temptation to use that knowledge to steer the behavior of people, to influence them without their knowledge, to put them in categories, to put them under surveillance. Again moving towards the problem of negative freedom.

So, yeah, with the combination of AI and these tendencies in modern society to not respect the autonomy of citizens, I think there is definitely a danger there, both ethically and politically. So I try to warn about that.

And it's not only about COVID, but also, for example, when it comes to climate change. On the one hand, I think it would be great if people behaved in different, more climate-friendly ways. We would all be in favor of that in some sense. But then on the other hand, the question is, how do you do that? And is it right to then forbid things or to manipulate people into better behavior?

So I think these are big challenges for the next decades: to use the technologies in ways that are still ethically and politically acceptable and justifiable, and to find a good middle way there, because not using AI and data would probably also not be a good idea, given that we have these huge political entities and mass societies. But we need to find a good way. And in order to prepare for that, we need to have public discussions about the politics of technology.

KIMBERLY NEVALA: And two quick threads I want to pull on-- we could talk for a very long time and go deep on any one of these topics and statements.

Today I know you work as part of a number of expert groups in the EU advising on the development of policies and perspectives on how to approach these problems. We have advisory councils here in the US. And certainly around the world we see that.

I am concerned, however, that they're not permanent establishments, are they? They're a point in time. And certainly we have proven over time that our individual and collective attention spans are short. Do we need to do more? Or what would that look like to ensure that the politics of tech, if you will, are being attended to on an ongoing basis? Or will this be enough?

MARK COECKELBERGH: I think it won't be enough.

So what we do now, basically, is that each time a new technology comes along, we have these councils and new policies. But what we really need, I think, for the long term are permanent political institutions where experts weigh in on these issues as they do now, but also where there's a possibility for citizens to participate in the decision-making around these technologies.

So rather than calling experts together ad hoc or organizing some focus groups, I think we need permanent political institutions that can help guide us through these new times - including how to deal with technologies like AI. So we really need that more long-term vision right now already.

KIMBERLY NEVALA: Is some of that need that you're seeing now the result of the nature of technology such as AI? Because people could argue we've always had technologies, and technologies have always changed things, right - that this is nothing new.

But by virtue, as you mentioned earlier, of having technologies that we can look at and see almost human characteristics in - illogical as that may be, it is our human tendency; we like to anthropomorphize stuff. But also, in the digital age, technology is so interwoven, and dispersed, and distributed. Is that part of the reason we're coming to this point where it maybe needs to have a permanent seat at the table?

MARK COECKELBERGH: Yeah. That's part of the problem, I think: the change is incremental. And we're there before we know it, and we don't see the bigger picture. For example, the internet in a way took us by surprise. Not because it was installed overnight or something, but just because it slowly developed into something that few of us could imagine.

So I think these kinds of developments should make us more sensitive to and concerned about new technologies that we really need to think about. Yeah, OK, how might it look in the future? With AI, we could develop scenarios about what it could do in different sectors and different parts of the world, how it could develop, and how we could then do it in a more ethical way.

So we need, I think, to use our imagination and also somehow integrate it with the politics of it. Because otherwise what's happening is that we leave the imagination to films and to some of the AI prophets, as I would call them. But their scenarios might not be the scenarios that are relevant for us in the near future. And so we really need to do that in a different way, and not just rely on them.

So responsible technology, I think, requires the responsible development of the imagination, including the political imagination, to make sure that we have at least some idea of where it's going. And it's not very easy. It's limited. Technology can still surprise us, just like any other development in society. But I think we can do much better than we do now, when we suddenly jump each time there's a new technology again.

KIMBERLY NEVALA: Excellent. So any final thoughts or observations you'd like to leave with the audience about what you're observing today and what might be coming next?

MARK COECKELBERGH: Well, I think we've been talking a bit about politics. And for me now a big concern is democracy. I think that the form of government that we've been developing in the West, and which has also been merged with other forms elsewhere, is a very fragile and vulnerable kind of political system. It's also not fully developed yet. It's a lot of work. In a sense, we're not living in democracies yet. So there's a lot of work to do there.

But the problem is that, because of the technologies, it might be overhauled or changed back to more non-democratic forms very easily. And so I think what we need to do is to think about how we can make our democracies more resilient against anti-democratic tendencies in general. And what could the role of technology be there? What needs to change in terms of technology to avoid this? How can we use technologies in a way that supports rather than undermines democracy? I think that's one of the questions that we should ask ourselves today.

KIMBERLY NEVALA: I'm going to have to restrain myself from asking you more questions about that question. We'll leave it with the hope that we can have you back in the future to talk a little bit more about that question and others. So thank you so much for your time today. This was absolutely thought-provoking.

MARK COECKELBERGH: Thanks for inviting me, Kimberly.

KIMBERLY NEVALA: Yes. Well, Mark has provided us with some much-needed reflection on the nature of AI, and even more so on how political philosophy can help us understand the influence of AI on our, as you said, ever-emergent society.

So I just want to thank you again for joining us. This is our last episode of this particular season. So if you've missed any of our stellar guests, or if you want to catch up on a new favorite such as Mark, now is the time. We'll be back soon with more ponderings on the nature of AI: subscribe now so you don't miss it.

Creators and Guests

Kimberly Nevala
Host
Strategic Advisor at SAS

Mark Coeckelbergh
Guest
Professor and Vice Dean, Faculty of Philosophy and Education, University of Vienna