The Problem of Democracy with Henrik Skaug Sætra
KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.
In this episode, I am so pleased to welcome back to the show Henrik Skaug Sætra. Henrik is an associate professor at the University of Oslo and the CEO of Pathwais.eu - that's "ways" spelled W-A-I-S. He is also the author of the recent book How to Save Democracy from AI. And because that publication is not yet available in English, I had to jump on the opportunity to ask him to join the show and talk to us directly about what influence AI systems are having on us as individuals and as citizens.
So welcome back, Henrik.
HENRIK SKAUG SÆTRA: Thank you very much, pleasure to be back.
KIMBERLY NEVALA: Now, the last time you joined us, we were discussing whether we could use tech such as AI to meaningfully enable the Sustainable Development Goals, or SDGs. Is your current interest and research into the intersection of democracy and AI a natural extension of that work or did that come about from a different source entirely?
HENRIK SKAUG SÆTRA: For me, it's quite closely related, because I think much of the work on the SDGs, and sustainable development more generally, tends to end up with this conclusion that, yes, there is potentially a positive role for technology to play.
That requires politics. That requires us to make the right choices, have a willingness to make those choices, and go through the processes that actually enable us to use technology in a meaningful way. Otherwise, that potential is just that: potential that will never be realized.
So for me, they are quite closely linked because I always end up with this conclusion that there is this potential here. But what we actually need is political action and the sort of processes that allow us to regulate and control and use technology in a way that's beneficial to, conducive to, reaching our goals.
KIMBERLY NEVALA: It's an interesting perspective because I think there's a tendency sometimes for us to want to say: if we could just get politics out of things. If we could take the politics out of whatever that might be - whether it's policy or technology or schools or insert the noun here - then wouldn't things just be easier? And so this is very much an acknowledgment, I suppose, that, in fact, politics is part and parcel of what we do as humans. Perhaps there's even a political element - maybe we call it a social element - in our individual relationships as well as in our relationship with our country. Is that a fair comment?
HENRIK SKAUG SÆTRA: Definitely.
And that also links very explicitly to some of the other work I've done recently critiquing, I suppose, techno-fixes and techno-solutionist approaches to different sorts of things. Which, in essence, tend to revolve around this idea: can't we just optimize away, or get rid of, the social and political - those things that drag us down and hold us back from utilizing this beautiful technology - and take engineering-oriented approaches to solving the social problems that we encounter?
So I think it definitely ties to that thinking, which relates very much to technocracy, I guess. There is always an element of technocracy within democracy, but we should be wary of trying to go all out, would be my suggestion.
KIMBERLY NEVALA: Yeah. And we'll talk a bit more here today about why that is sometimes problematic. But I'd like to take a step back and get your perspectives on how AI more broadly is influencing us just as people, if you will. What do you think, or what are some of the concerns or observations you have, about what it's doing to how we think about and engage with even just basic information?
HENRIK SKAUG SÆTRA: Yeah, that's where I started in the book. I deal with individuals, and then I deal with the relations between people.
And I think generative AI systems in particular have very significant effects on how we access and gather information, how we process and analyze it, how we turn it into something else - a synthesis or an analysis - and how we communicate our analysis and our views and perspectives to others.
We use AI so heavily in all these processes now that I'm arguing we're at risk of losing some of the core competencies we need to be effective citizens in a vital democracy. Because we need to understand what people are trying to convey to us - even if it's longer than a short bullet-point list, even if it's more than a synopsis. We have to be able to deal with information a bit more complex than that.
And we need to be able to take that information and extract the essence of it ourselves as well - to understand what is meaningful in different contexts, from different people, for different cases. That's also a crucial capability that I think we're at risk of losing when we don't exercise it.
And then, of course, communicating is key to democracy, as far as I see it. Being able to understand who you, Kimberly, for example, are as a person, what you might respond to in terms of argument, what's important to you, and then conveying my arguments to you in a way that might facilitate a good discussion between us. These are what I call key democratic competencies.
I think the way we're using AI systems now means we get a lot less practice in these things. And while some think that's good - it's efficient and saves time - I think we get worse at doing these things. Which is, to me, a huge red flag.
And then, of course, we have the interactions between people as well, which is more that familiar, older debate on filter bubbles and echo chambers and those sorts of dynamics. That means less exposure to divergent opinions and worldviews - worldviews that are much further apart than they used to be for these different groups and these different echo chambers. And all these sorts of things are exacerbated in different ways by the current generation of AI systems as well.
So these two things are linked and deal with these very basic competencies that we need to communicate and also to understand each other.
KIMBERLY NEVALA: And it almost seems intuitive to assume that the latter portion of that - where we are increasingly in these filter bubbles and islands - has actually been perpetuated or propagated by the former.
In our natural interactions with folks, we're trying to understand: what are your motivations? Where are you coming from, and what do I do about that? Unless we're being particularly calculated - I may want to bring you over to my point of view and think about, based on how you're talking, what you're talking about, what I know about your background, your interests, how you're presenting yourself: how do I persuade you to that other point of view?
But some of the conversation is not even about persuasion. Today so much is focused on personalization, especially systematic personalization, that it feels like it's gone too far in that direction.
HENRIK SKAUG SÆTRA: Yeah, I definitely agree. Our worlds have become quite narrow and that's dangerous in terms of facilitating those broader discussions and just having the ability, then, to understand what other people want and how to communicate with them.
And that might be fine if you don't need politics. You could say that, OK, this allows us to have this kind of nice, isolated world where I don't need to deal with these problematic people that want something different from me, that have different opinions from me and these sorts of things. Then that might be fine.
But if we think politics, and democracy in particular, is about jointly trying to figure out what our problem is and what we will do to try to solve it - as opposed to fighting over who has the majority, for example - these are two very different views of politics. If you want to form some kind of joint basic understanding of what our society is, and that we need to deal with certain problems together, then I think that sort of dynamic is really detrimental to being able to do that effectively.
KIMBERLY NEVALA: Yeah, I would agree.
Now, you mentioned, briefly in passing there earlier, our interactions with each other. And in your research and your observations is, or how is, our relation with machines or how we relate to machines - which is very much something also that we are being pushed to do, to think about our interaction with AI systems as relationships. How is that impacting our relationships with each other? And is that changing the nature of our interactions?
HENRIK SKAUG SÆTRA: And that's also really interesting. That's one of the things I mentioned before: interactions between us, which are mediated by AI.
But then, of course, we have the additional layer that you point to now: that we actually interact with these AI systems directly as well. And for some, this replaces, or alleviates the need for, some interaction with humans. And if so, that might, of course, spell trouble in terms of that joint understanding and feeling of "we" - the understanding and acknowledgment that we are a joint society. We are, in essence, a community.
But it also shapes our expectations of people. For example, if I get used to talking to ChatGPT, which is - I don't know what you want to call it - kind of overly sycophantic. If it praises me, if it tells me I'm smart all the time, if it listens to me whenever I need some advice, if it doesn't ever need something from me. Interacting heavily with these sorts of systems might make me demand more of other people as well, and grow weary of them sooner, because they are more cumbersome to deal with than these machines. So these sorts of things might have long-term effects that could be problematic.
And ChatGPT is one instance, where we choose to interact with it to get information - many people do, at least. But of course, you have Character.AI and Replika and all these other systems as well that make these sorts of social partners, intended to be among the primary social interactions that we have. And that could be problematic, I think.
But also, if you turn this into politics again, we see governments trying to make chatbots, for example. If we can just lay off all these people now, we can make these AI agents our government representatives. So whenever I need something from the government, I get a chatbot, as opposed to a real human, for example.
And that does something as well to my relationship with the state, with the political institutions that govern us. So I think that also matters, in a sense. Because who's accountable? Who's the state? Who can I turn to for meaningful human interaction with politics, with political institutions? I think that matters a lot as well.
So while it's definitely good in many instances - while it may make customer service, if you want, more effective - we shouldn't always look at interactions with the state as customer service. It's about something different as well. It's about communicating with the humans who have been delegated these kinds of political tasks for us as a community. So replacing these sorts of things with AI systems also runs the risk of making government more distant from us, more opaque. And all these sorts of dynamics could be very detrimental to our understanding, again, of politics as something we do jointly as human beings.
KIMBERLY NEVALA: Does that also change our vision of the government as a collective agency, something that is there to serve all of us? Which sometimes means the services - the products, if you will; I don't like that orientation when we're talking about social services and government in particular. If I have the broader collective view, I might see that, hey, here are services and supports that are important, that I think should be provided, and that are valuable because they support my fellow citizen, my neighbor, the person down the street. But if I'm going to Target to buy something, I expect them - if they want me as a customer - to have the explicit products that I want. And that's a very different mindset. I would imagine that could then influence our perception of our governments as well. Is that true?
HENRIK SKAUG SÆTRA: Yes, I think so, very much so.
You get this kind of customer perspective as opposed to the citizen perspective. Government as me and you, versus me as a recipient of services that I can be critical of, that I can evaluate, that I can place my demands on, demanding my rights and these sorts of things. Whereas being a citizen, of course, is much more about having an obligation to do things myself as well - to participate, to play a role, to be an active part of government. So it's a very different perspective.
And I think the attempts to make politics more efficient with AI systems tend to take us into this mode of rights-holders and customers and recipients of services that these systems can then optimize for us. So it tends to nudge our perception of what government is in that direction. That would be one of the key points, I guess.
KIMBERLY NEVALA: And does that mean that it's turning into-- it turns us toward a very sort of output-oriented orientation, I suppose? And why is that so problematic?
HENRIK SKAUG SÆTRA: Yeah, that would be problematic. And it's also very understandable why people want to do this.
Why do we want to use technology to solve social problems in general, if we return to that? Because social problems and social processes and democratic processes are inherently messy. They take a lot of time. We very often get output that's quite fundamentally flawed - it's obvious to us that this is not a perfect outcome.
But then, when we look back, we've been through some very important processes. Very often, when we do politics as democracy - as these kinds of messy and cumbersome processes - we learn. That's where we learn about what society is, about each other, about what the others in society want from us, and these sorts of things.
So while the process takes a lot of time, that's partly because we do all these other things in that process. And also because, in that process, we jointly figure out what it is we want to solve, what it is we want to achieve, what our goals are, what the goals of our society are. That is not some objective, predetermined output or measure that can be set as the target state of some efficient system. Our goals and our values and what we want to achieve are continually re-established through this messy process of democracy. So it's a process that's cumbersome and messy because so much goes on in there.
And I think it's also messy because that's where we learn to do things. And if we do that a lot, we learn to do better. So that would be important in terms of doing democracy as a sort of school of society, as political philosophers talk about - specifically municipal and lower-level politics, which can be particularly frustrating to people. That's the school of society, the school of democracy, in many ways, that's necessary to build the understanding of each other and of society that people can then use at the higher levels of government, I think.
KIMBERLY NEVALA: And I suppose our societies, and the scope of control of our democratic institutions - if we look at the size of our states and our countries these days - are very, very large as well. Perhaps that makes all of that messiness just bigger and harder to wrangle; you get a much broader spectrum. So you can see, perhaps, the natural inclination towards more command-and-control type approaches. Although it's not, at least in my opinion, leading us in a productive and healthful direction that promotes well-being.
HENRIK SKAUG SÆTRA: No. But of course, in modern societies - not Aristotle's Greece, for example, or Athens - we have a different foundation for doing politics.
Doing direct democracy at the level of the US, interacting with each other to understand each other - we can't do that at that level. But we can do it at the municipal level and the city level, for example, and increasingly less so the higher up we go. We have these tiered systems precisely to enable these sorts of processes, even in large and complex societies. The federal system in the US is a good example of this, of course.
KIMBERLY NEVALA: What you were saying there also reminds me, and perhaps reminds all of us, about the mechanisms of government. It's so easy - we just came out of, as you know, a very contentious and roiling election here in the US, and it's been quite eventful ever since. It's so easy to see it at that level, at the nationwide level, and just feel like it's out of control. And we forget that we are also operating within a smaller community, and we can effect change at that level, which bubbles up.
But maybe this is also where AI is getting us off course, by keeping us off our local streets, if you will. This is where it's also serving as an isolating mechanism, because all we're seeing is the noise at the big, big level, where we really can't affect it. And it's keeping us in our houses, if you will, tied to our screens and not engaging even with the local community.
HENRIK SKAUG SÆTRA: Yeah. That's one of the effects, I guess, and also one of the linkages between social media and digital technology in general and AI systems, which create certain dynamics on these sorts of platforms or services.
So yes, the discourse is at the national level, at least in the US. And here in Norway, for example, it's also very much regional, or even global, at times. So we engage with this kind of global discourse instead of engaging meaningfully with our local communities, very often at least.
So young people in Norway, they never see Norwegian news, for example. They deal with the manosphere, and they deal with Jordan Peterson. They get the same kinds of news and information and problems and challenges that people in the US get, even if it comes from a very different context and might not make too much sense. It creates these very broad and high-level groups, I suppose.
So, yes, very much so, because then we end up with very different worldviews. And we deal with very high-level topics - for example, you have MAGA and you have the others in the US, and you get these very rival camps with a very split view of what the world looks like and should look like.
KIMBERLY NEVALA: Yeah. And as you were talking - we talk a lot on this podcast with guests about whether we are solving the right problems with AI and trying to achieve the right objectives. So the question that rose to mind is: is democracy a problem that needs to be solved by technology, and by AI in particular?
HENRIK SKAUG SÆTRA: Yeah - because what is the problem here, right? That's the best question to start with, I suppose. When we ask whether technology can be used to solve or fix something, I tend to come back to this: what is the problem, really - and for whom, and why, and how?
Because anything can be a problem for someone. But the problems we want to solve with democracy deal with the very basic values we have and the goals we have for society. And that could be, of course, a basic desire for order and security - not living in fear of violent death, these sorts of things. Maybe a fundamental level of health, maybe some welfare on top of that, maybe some sort of educational provisions that allow people to have some opportunities in life.
So these sorts of things are the basic fundamentals, and I think we have to figure out to what extent, and how far, we agree that this should be our joint undertaking. In Norway, we agree that a lot more should be our joint undertaking than in the US, for example. So that's something to negotiate continually with each other as well: how far should our joint project extend? And what's left, then, for us to solve individually?
But then, how do we solve politics with democracy? That pretty much solidifies the question of what we want to achieve - what we have, at one point in time, decided to maximize or optimize. If it's economic growth, that's shallow and narrow, I suppose. If it's some level of welfare, that's also insufficient in a certain sense. If it's climate emissions, these sorts of things - AI systems can solve for some of these.
So we can make all these different indicators of the goals and desires we have for politics. But then we end up in the same place: we might solve, or optimize, some isolated indicators, but we lose all the stuff that goes on in the process of negotiating what the system should optimize. How do we deal effectively with that if we delegate too much to these systems?
And the goals to optimize, the goals to solve - that's really a contentious issue as well. Some think this can be done quite effectively: we can say that what we want is economic growth, which will allow us all the benefits of life, so we can optimize that and deal with the consequences later, for example. Whereas others would be quite opposed to that - as I would be - and say we want some different values as well: equality and equity and all these other sorts of things, for example, also to be dealt with collectively.
KIMBERLY NEVALA: So I think it'd be good for us to talk about maybe some examples where folks have tried to solve democracy, if you will, with technical approaches.
But one question before we do that. Does a technocratic approach, as you were speaking of it there - a tech- or AI-led approach to governing society - could that actually change or influence what democracy even means, how we define it?
HENRIK SKAUG SÆTRA: Yes, I think so, very much so. That would be another key point in the book: that as we increasingly use AI systems for the different parts and components of various democratic processes, we run the risk of actively changing what we mean by democracy. And of course, democracy is a concept that has always changed. It changes without technology too, over time, as we and our societies develop. We get new ideas, and our imagination of what democracy is changes as well.
But it changes very dramatically when we use AI systems to try to solve the sorts of things we're talking about here, I believe. So here we can get into some examples of how that happens. Google DeepMind, for example, is maybe one of the least problematic AI powerhouses in the world. And they've developed some interesting systems, trying to take their approach of solving protein folding, solving chess, solving Go, and these sorts of things - they're good at solving things - so why not democracy, right?
And that's taken us into some interesting research papers. They have one system that they call Democratic AI, which is really interesting - both because they explicitly call it Democratic AI and because it's a system that can analyze different people in a group and try to come up with solutions that a majority of these people would support. It's democratic in name, of course. And majority rule is often part of democracy, but it's definitely not the core, or the only, or the central aspect of democracy - at least not as I see it.
So there we get into this idea that democracy is voting and majority rule. It diminishes our conception of democracy into something quite narrow and quite impoverished, I believe, when we do these sorts of things and turn democracy into something computable - at least it runs the risk of doing so. This system is useful for many things, but it's not really democratic in the sense that I think about democracy: as deliberative processes, these sorts of human processes.
The other thing they did was a system where they asked: can AI systems be used to reach consensus? So they built this machine they call the Habermas machine, after Jürgen Habermas, the founder of this idea of deliberative democracy. And this system acts as a mediator between different people.
So if you and, say, five or six other people were in a group, and we were tasked with coming up with some consensus statement, we could either talk together, or we could have a human mediator or moderator between us who tried to come up with something we could all agree with. Or we could use this Habermas machine.
Their study shows that the Habermas machine outperforms humans in coming up with statements that people support and feel, OK, this is a statement I can stand behind; this reflects my views and perspectives - more so than the other processes. But then we have what they call deliberative democracy without a single human being interacting with another human being.
So that takes us into a very different landscape. It also changes what we mean by democracy, because then the road is quite short to using these systems, for example, to just analyze me, profile me. Or I can play some kind of training games with my digital AI twin, and this twin can represent me in some sort of government process. And I can be "represented" - my perspectives are in there, all these sorts of things could be checked off - without me meaningfully interacting with human beings or politics at all.
So in name, yes, this could be democratic, but it's also deeply problematic.
KIMBERLY NEVALA: I find myself almost lost for words, but not quite. I guess I'm not quite following how that could be seen as democratic. Because what you're talking about, if I understand it correctly, is essentially running a simulacrum or simulation of society and assuming that, based on the various bits and bobs I have deduced about Kimberly - with or, largely, without her knowledge or consent - I have perfectly determined what her perspectives, her views, her desires, her beliefs are.
Now, knowing how people show up online versus how they show up offline, I think we should all instinctively know already that this is not correct. So the idea that this is a-- I mean, it would be lovely if that were true, I suppose, in some sense. I say that slightly sarcastically.
But is this really just providing the purveyors of these systems - who, by the way, are developing the algorithms, which means they are deciding what the important points are - the ability to say to you: it's OK, because you're represented, i.e., in the data? Ultimately, they are the ultimate sort of master there, correct?
HENRIK SKAUG SÆTRA: Yes. And great points here.
I think, first of all, we could tone this down a little bit. We could say that you could, for example, explicitly input your preferences, so it's not some automatic deduction based on what you do. You could actually instruct it on how you value different things and different dilemmas. And you could even have a kind of on/off switch: you could delegate the cases you're not that interested in, and go in and manually vote through the system when you want to.
So, of course, there are many different approaches, and the proponents of these sorts of technologies say this is uber-democratic - this is direct democracy at scale. There is some truth to the idea that representation could happen more effectively at scale here. But, of course, we run into a lot of risks related to what you said, definitely.
How does this translate into an accurate reflection, a representation, of what I mean? And how is the system built? Because when we make this computable, when we make the simulation, we have to translate political agendas or political decisions into something computable - into the various indicators and values and factors that we build these models on. That is a hugely political choice and design point as well.
And how we translate that into interaction between these different digital twins, if you want - how we do social choice here - has huge political consequences and implications. So there are all sorts of leverage points here for the people behind this technology to manipulate and shape politics, if we turn our politics over to a system resembling something like this.
So definitely, hugely problematic, I would say. But of course, I see why people would be attracted to these sorts of ideas.
KIMBERLY NEVALA: Yeah. Well, as I think about circumstances where I've been involved in mediation, part of the point there, I suppose, is that the mediator doesn't have a dog in the fight. Terrible, terrible saying there as well.
But the idea is that the mediator fundamentally shouldn't care, necessarily, where things come out. And at the end of the day, it tends to be about everybody giving up a little bit of something, right? I think that's the old trope: it went well when everybody's a little bit unhappy, versus everyone being perfectly happy. And so that objective - I'm not sure I see how we achieve it with an AI-mediated system or world, first of all.
But also, we know with technology - and there's research out there, which I know you're aware of and have looked into very deeply - that it can be used to persuade people. So, for instance, we know that a lot of folks are radicalized very effectively online. And then, as the logic goes: well, can't we also use these systems to de-radicalize people, or to persuade them, bring them over to our point of view? Which, again, is narrowing the aperture. But it also depends on where, and whether, behavioral psychology has been put into absolute play within our automated systems. And I think that is not to our collective value and certainly not to the collective benefit; it's not something we should be proud of as folks who work in this area.
And the use of dark patterns - things that are very explicitly and purposely designed to be manipulative - concerns me a lot in this kind of an approach.
HENRIK SKAUG SÆTRA: Yeah, it would be endemic, I think, in these sorts of systems - how it's possible to nudge, if you want. Those who want to see their manipulation as positive call it nudging of different kinds: nudging people into being active, or nudging towards better or healthier or more socially conscious choices. You could have all these incentives to build different sorts of manipulative mechanisms into this, of course.
And these kinds of consensus statements could, of course, also be shaped in different ways. So we have, first, the problem that I omit the entire process in which the mediation, or the discussion between us, makes me understand who you are and why you want what you want - and lets me understand you a bit better when we meet on the street or discuss something else later on. I skip that process entirely.
We might end up with a process where I have no idea what the others even wanted. Everyone then feels pretty happy with the statement, but without that feeling of being a bit unhappy yet satisfied because we reached something together. So it's quite a different process, definitely.
KIMBERLY NEVALA: Well, and there's no room in that process, necessarily, for us to be really clear about what everyone is giving up. I think that's part of that consensus and collective building as well.
It's not that you all have to walk out unhappy - I'm not suggesting that at all - but there is some discomfort. I think you call it constructive division. I don't think it's a term you coined necessarily, but constructive division is important: it says it's OK, we can actually work and function as a collective without us all having to subscribe to a singular point of view, a singular perspective.
And working out our own sort of social Venn diagram - where we as a community really do need to be aligned and where we don't - is also, in and of itself, part and parcel of how we think about democracy today, I guess I should say. Maybe we'll define this differently in the future.
HENRIK SKAUG SÆTRA: That's hugely important. And I'm not advocating for some sort of totalitarian society where we have this joint common will, where we all come together and agree on everything. Of course not, right?
But coming together and being able to understand our differences, and to respect and tolerate each other enough despite wanting different things - that's essential for a functioning democracy. And that's, of course, something we see different societies struggling with today: being able to talk across these divisions.
And that's where social media and echo chambers and all that sort of stuff deprive us of the practice as well - of being exposed to diverging views and being able to respectfully tolerate and engage with each other without it becoming utter chaos or just withdrawal. So that's really problematic, I think.
That's also very much a design choice and a political choice in the platforms we use - like Facebook, like X, formerly Twitter, these sorts of things. They can promote or discourage different sorts of social and political dynamics, which is a responsibility they might take on their own. Or it's something we might have to regulate or engage with on a political level if they don't - if we think that's necessary for us to have a good foundation for social engagement across these different firing lines.
KIMBERLY NEVALA: There's another narrative that's fairly common today: viewing AI as infrastructure. It comes up a lot, mostly in the context of LLMs and things like ChatGPT, et cetera, which have hoovered up immense amounts of information. And because they are so enormous, it's impractical to expect other folks, without really deep pockets and time, to develop their own versions, as it stands today.
I just want to put out there that that narrative precludes the ability for us to do things differently or take up different bits. But I'm wondering: are there other specific political implications of the narrative of AI as infrastructure?
HENRIK SKAUG SÆTRA: Yeah, I think it's become really, really important now.
So many people now know what this technology does, and they use it and rely on it in so many different ways. Most people are aware of it to some degree. Some aren't, because it's now being diffused into all their different services and messaging apps and mail apps, of course. So it's there at this infrastructural level in everything, but people also engage very directly with it.
I think one response for some would be: OK, we just need a smaller model; this technology is nonsense; it's just hype, right? I don't buy that, because so many people are finding immediately obvious use cases for this sort of technology. They find it so immensely valuable that I don't think it will be going away. And people are already becoming reliant on it - maybe not for crucial purposes, but for some purposes that are good and useful, definitely.
I think, then, it is a problem. And it's increasingly being discussed through a political lens, given the shifts in politics and geopolitics following Trump's inauguration and following up on what he said he wanted to do in terms of alliances and international politics.
In Europe, people are talking much, much more about this now. Digital sovereignty, for example, is a huge thing: the idea that we need to have our own models and infrastructure and AI systems. If we believe this is at least partially fundamental to achieving productivity growth and economic growth, and to sovereignty in general, then we need these systems built and controlled here, or in friendly states. And it's become increasingly uncertain whether or not the US is a friendly state.
We get Microsoft blocking the email accounts of the International Criminal Court; you get all these sorts of things. And people increasingly see the people behind these infrastructures at Trump's inauguration, associated with a kind of politics that is much more fraught in terms of Europe-US relations, for example. And then they go to work, and they have Microsoft Office, and they have Microsoft Copilot. They have only US companies. Everything they do is on some US service.
There is no valid European alternative at scale yet, many would say, for most of these services. And that's becoming something people discuss a lot more. Of course, that links geopolitics to markets and the economy, because it creates a market for growth in Europe. But it also makes this a conflictual political area - as when JD Vance came to Munich and said: don't touch our companies, we don't want you to regulate our companies.
So we get these tensions that we didn't have just six months ago, I think, in figuring out what it means that these sorts of technologies are becoming infrastructure, in the sense that we rely on them for basic social functionality. For connecting people, and for work and economic purposes and all these sorts of things, they are, in many ways, akin to infrastructure.
KIMBERLY NEVALA: It is interesting, though. If we think about what we might have traditionally thought of as infrastructure - say, our electrical grids or those components - I've been trying to think about this a little bit more, and in a lot of cases there has been an element of regional control: your grid is not connected to our grid.
But there has also always been the resource control underlying that, I suppose - where do we get the power to power our power? Where did the oil and the gas and the coal that used to do all that come from? So as you start to think more deeply about that, you start to see the levels.
So what, in your research or in the book, are the implications around either folks pushing for digital sovereignty, or the inability to enforce digital sovereignty?
HENRIK SKAUG SÆTRA: Yeah, as I said, I think it creates huge new markets in Europe, for example, for developing, at scale, the kinds of services that we know we rely on. That makes for effective competitors, subsidized or not - everything, everywhere, is partially subsidized in some way. The American tech scene, too, is subsidized in different ways, through different uses of political power to sustain these companies and to regulate in ways that are beneficial for them.
So I think there's a lot of push here on research and innovation funding, for example, for projects that might make this happen - that get us regional alternatives. It's that, or effective regulation, of course: saying that data in Europe has to stay in Europe, that it's not allowed to flow out, with control mechanisms for making sure that happens.
So, of course, there is an alternative here as well: we don't have to build our own, but we do have to ensure that the technology here is safe for us. With China, Europe has always had this kind of discourse, right? We're very skeptical of Chinese infrastructure in our communication and telecommunication networks - mobile networks, for example - because that's been a potential source of espionage and these sorts of things. Even buying electric buses from China is being discussed in Norway, because there is this ability to remotely turn them off, and they are the main means of evacuating people from cities.
What's interesting is that this now also applies, to a certain degree, to US technology. It never did in the past, because there's been a very close - or perceived close - friendship. So it's been unproblematic, but it is more so now.
So I think different sorts of opportunities arise from being the most responsible ones - for the companies in the US that are least linked to the aspects other countries perceive as most problematic. I think it's hugely valuable now not to be seen as intimately linked with the administration, for example. But of course, in other areas it will also be very beneficial to be closely linked to these sorts of institutions in the US - also in increasingly trying to get into military spending and all these other things that further link politics and tech and power.
So there are very, very different dynamics here, and there are pros and cons to these different strategies, I suppose. But people are definitely looking at who's now most actively trying to be close to Trump, for example. People are observing that, and different companies come out very differently in that sense, I guess. So that's a strategic choice as well.
KIMBERLY NEVALA: It is interesting.
I drive an electric car, and I've been pondering recently - as you were talking about the buses - that we don't necessarily want to buy an electric bus from China because that might mean they have control over that bus. But nothing required or determined that, when we went to electric vehicles, they had to be connected in this way.
So there seems to be some embedded assumption we're consuming or taking on - I don't even know if it's ownership; you can't call it ownership with these types of products - that they must be connected to the digital infrastructure, that they must be automatically controllable from x, y, and z, that they must share data in places.
And this is a complete sidebar, but it's something I've been thinking more and more about. I know why the companies have done it. But it's not clear why we as consumers did. I think it just kind of got slipped in - like the spinach in your fruit smoothie as a kid. We didn't notice it. But now I'm asking: why? Why does it need to be that way? You should be able to give me an electric car that has none of those ties and integrations.
HENRIK SKAUG SÆTRA: Yeah, I definitely agree. And of course, they can argue in favor of it, but you have the same with refrigerators, of course, and toothbrushes. Everything now has AI and is connected to the cloud. So we get that kind of creep everywhere, because software as a service is much more profitable than selling a self-contained device that you can service differently.
So I think it's a really interesting and really good point. When we buy this critical infrastructure, it should, of course, be possible to mandate that it is self-contained in some sense as well - if you are a big buyer or a public buyer of vehicles, for example, in Norway, or a municipality or something. So it's possible to think along those lines, I think, and people should increasingly do so.
KIMBERLY NEVALA: Now, as we close up here, I'll just assume that we agree that democracy is not a problem to, quote unquote, "be solved" with technology. Although technology may play a part in helping us operate elements of the governmental or societal machine, if you will - the mechanisms.
But what is it that we can do to ensure that we are promoting an informed and engaged citizenry? And are there steps that we should be taking now to promote democratic values and ensure that we have a hand on the wheel of where this technology is going in terms of how it is actually shaping and being deployed in the service of our societies?
HENRIK SKAUG SÆTRA: Yeah, that's a great question, of course. And that's the difficult one. But I think it always comes down to some key aspects that I think are really important.
Education is always important. In Norway, for example, people are saying we need to make more and better engineers - we need people who understand AI and can build new systems to rival Silicon Valley. And partially, yes. But we also need an interdisciplinary approach to technology that allows humanities and social sciences and informatics and computer science people to talk together and understand each other.
I think those sorts of competencies will be crucial for allowing us to build the kinds of technologies we want - not just one particular, very rapid, accelerationist technology, for example.
So those sorts of approaches from early on in school - these kinds of democratic competencies in education, bringing technology in and seeing how those interactions play out - will be very important, I think. Some sort of digital competency, or AI literacy if you want, is important. But not just an isolated engineering or computer science approach to it, of course. So I think there are many different approaches here, including higher education degrees that are tightly interdisciplinary in that sense.
Of course, people might say: OK, but then you get outcompeted by the purists. I think that's not necessarily a problem if the regulatory environment is devised in a way that promotes and rewards responsible technology development and use - use that actually promotes the values we want as a society. Because if we have rules of the game that promote the kind of good innovation we want, then these are the people able to make that and understand that. So I think those two things play tightly together.
I think the EU has been going in the right direction for quite some time with the AI Act, regulating risky systems, as they say. The Digital Services Act, mandating some sort of content moderation, mandating transparency in terms of who buys advertising time and how elections are handled by the different platforms - these sorts of things are crucial, I think, and kind of a given, as they should be.
And of course the Digital Markets Act as well, which turns to the problem of some companies getting too much power - monopoly power being something even the libertarians tend to, in principle, at least historically, agree we should deal with through politics. But not when it's our monopolies, of course. Still, in essence, this is one of the few things that even market liberals say politics is good at and should be used for.
So I think these sorts of regulatory environments are crucial for getting good technology and we need competency to make that happen.
And of course, the media is also interesting to me; I write about that in the book. Editorial media is crucial in the sense that it is intrinsically linked to our ability to have functioning democracies: to have transparency, to have critical journalism, to have someone translate, find the bigger lines, and communicate these sorts of things to us, and to get important voices out to more people. And these traditional media are, of course, undermined by social media, which is increasingly problematic. It's hard to get advertisers' money, and it's hard to get subscribers, because everything is free and becoming increasingly freer as you just get it in ChatGPT or Perplexity AI, for example. Why should anyone click into newspapers at all anymore?
So I think that's also a huge problem that we need to deal with as a society. And I think Canada's approach, for example - a revenue-sharing model where the big tech companies have to share part of their revenues with local independent journalism - is also crucial.
So then: it's regulation, it's education, and it's media as a sort of at-risk sector now that I think we need to protect and build up, in order to safeguard some of the key democratic institutions that we need.
And of course, there are no quick fixes. People might say, of course, these things are important - but yes, it's difficult to pinpoint one specific thing that would fix this. So I think it's a long-term goal.
And I think, as I said, the EU's regulatory approach is required and necessary. Which is why I also think it's really sad that they are now starting to backtrack and say: OK, we don't want to be left behind, we want to be innovative, we want to be productive, we want the sort of economic growth that has been driven by the US tech sector.
So they are increasingly talking as if these protections, which I think are really important, should be scaled back in order to promote more innovation and more growth here in Europe. I think that's sad, and I think that's where we're probably headed. And I'll do what I can to try and make it not happen too much.
KIMBERLY NEVALA: Yeah. And I suppose the critical question, whenever we're having the conversation about "are we falling behind?", is that you can only answer it relative to the end goal you have in sight. That assumes we are all heading in the same direction or want the same outcomes - which I think is perhaps a question as well.
So with all that being said, Henrik, what would you like to leave with the audience and as individuals for what we can do to make sure that we are, in fact, truly represented and showing up as engaged citizens moving forward?
HENRIK SKAUG SÆTRA: Yeah.
I think it's trying to take back some sort of relationship with the concept of democracy: what we think democracy is, and why we think it's important - if we think it's important. Because it's become this kind of euphemism for everything good.
Which is why people say we can have democratic AI, we can democratize everything, we can do everything - democracy's good - without ever really thinking anymore about why it matters and what's at risk of being lost if we don't have the democracy that we once fought hard for. Not "we" as in me, but someone, right?
So that's also part of the problem. I don't think many people engaged in this right now know the real alternatives to democracy. So I think engaging with those sorts of questions is really important for us.
And then doing what we can as individuals, of course, to choose the platforms and services, and to promote the technology, that we think is good and conducive to democracy - that will be important. And then to try to imagine how we can use technology in better ways: ways that don't diminish or narrow democracy into something quite technocratic, but actually function in ways that are more empowering and allow more people to understand politics and participate actively and meaningfully in it. Because I definitely think there is scope for using technology for those sorts of things. And technology has to play a role, I think, if we are to have more effective democracy and more direct engagement with democracy as well.
So I don't think technology is bad. We shouldn't abandon it. I think technology can have a meaningful role to play. But that means we have to engage with these sorts of things - and not try to "solve" democracy, in a sense.
KIMBERLY NEVALA: Well, on those very wise words and that call to action for all of us, we will end here today. Thank you so much, Henrik. It's always a joy to chat with you. And we'll look forward to having you back, perhaps, for a third full-scale appearance in the future, to see how we're doing on our path to wherever we're headed at the moment.
HENRIK SKAUG SÆTRA: Likewise, always a pleasure.
KIMBERLY NEVALA: Thank you so much.
Alright, so if you'd like to continue learning and hearing from thinkers, doers and advocates such as Henrik, subscribe to Pondering AI now. You'll find us wherever you listen to your podcasts and also on YouTube. In addition, if you have questions, comments, and/or guest suggestions, you can write to us at ponderingai@sas.com. That is S-A-S dot com.
