AI for Sustainable Development with Henrik Skaug Sætra

KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala, and I want to thank you for joining us as we consider the reality of AI with a diverse group of innovators, advocates, and data professionals.

I'm so excited today to be joined by Henrik Sætra. Henrik joins us to discuss the interplay between AI, SDG, and ESG. And not to worry, we are going to define all those acronyms. Henrik is the head of the Digital Society and an Associate Professor at Østfold University College. As a political theorist, he focuses on the political, ethical, and social implications of technology. So this is sure to be a mind-opening discussion. Thank you for joining us, Henrik.

HENRIK SÆTRA: Thank you so much for having me. I look forward to this.

KIMBERLY NEVALA: So Henrik, tell us how your current interest in the sustainability impacts of AI came about?

HENRIK SÆTRA: Ah, that's a good question. Well, you mentioned my background is in political philosophy. So it doesn't automatically make sense that I'm doing work on sustainability and AI. But I've been working a lot in environmental ethics, previously. I've had courses, developed courses, on politics and environmental ethics, for example-- politics and environment. So that's been part of my background and sort of the ethics background.

Then, a couple of years ago, I started combining an old interest in programming and technology with the work I do in political philosophy. So I started using the theories and frameworks from political philosophy to better understand the impacts of technology. That was a fruitful marriage of two previously very separate interests.

I've also worked as a sustainability consultant. I was at KPMG for a couple of years and worked a lot on ESG advisory-related things. And that also opened up new ways to approach the business side of things that I hadn't had that much of before. So it has been a marriage of three different things, I think-- the environmental ethics, the political philosophy, and then also the sustainability consulting work.

KIMBERLY NEVALA: I'm finding that a lot of folks in this space are pulling together these very diverse backgrounds and working at the intersection. And that's where a lot of really interesting and helpful perspectives have come about.

Perhaps before we jump into some of the SDG discussion, or ESG, we can start off a little philosophically. As you know, a lot of these AI techniques - and especially things like machine learning, deep learning - they thrive on homogeneity. They thrive on sameness.

So the more frequent and consistent a pattern - whether that's what we buy, how we socialize, how we write - the better able an algorithm is to detect it, to model it, to make accurate predictions about what comes next. And yeah, this can be a source of great efficiency. It can be very useful. It can also promote or nudge us towards a level of conformity that's a little uncomfortable and maybe undesirable: and sometimes narrows or restricts our viewpoints, our perspectives, even our individuality.

You had some really interesting research, and you made a statement that I actually wrote down verbatim because I thought it was so striking. You said, "an over-reliance on a machine-like approach to science limits human experience." Which is a similar point, but a bit broader than what I started with. So can you tell us a little bit about the thinking behind that, and what the real ramifications of this limitation are?

HENRIK SÆTRA: Ah, perfect. I love that you picked up on that. I love talking about those kinds of aspects. I think that some parts of human life can easily be datafied - quantified, measured, put into these machine learning algorithms - and be used to predict and guide us and understand society better, right? So we use variables everywhere to understand the world, to collect data, search for patterns.

But not everything is translatable into data, I think. So that's what I'm getting at: phenomenology, working with human experience. How does it feel walking around the woods, meeting the gaze of someone interesting, for example? Those kinds of things, those kinds of aspects of being human, are relatively easily lost once we get into dealing only with what can be measured.

So we end up observing behavior, measuring behavior. And without really noticing, we often proceed from that into thinking that everything is behavior - that only what can be measured matters, in a sense.

So we get into this kind of behaviorism that a lot of critics - Shoshana Zuboff, in Surveillance Capitalism, for example - point to as a problem with this approach. But also Arthur Koestler and Maslow and a lot of older philosophers and psychologists have always stressed the dangers of relying only on that which can be quantified. Because we end up reducing humans to something less than human.

I argue, at least - or warn - against starting to see ourselves as some kind of machine in this process. We end up making machines that mimic ourselves, and then we try to understand ourselves through how we make machines. And then we end up reducing ourselves to some kind of machine that's basically only behavior, and not that much of these kinds of experiences, and drama, and strange stuff that's a very big part of what it is to be human.

KIMBERLY NEVALA: Some of the goodness is in the messiness, right? Of our human interactions and interplay.

HENRIK SÆTRA: Definitely. I think that relates, also, to how efforts to use AI to perfect politics, for example, are subject to the same kind of fallacy. Because a lot of the efficiency in democracies, for example, is just in this messiness and this kind of muddling through that's often a part of what it means to be human.

KIMBERLY NEVALA: Are there particular applications or areas today-- you mentioned a little bit of politics, or I could think of areas in education, in particular-- where some of what we're pursuing today in the AI field feels particularly concerning or particularly dangerous, or could be, if pursued incorrectly?

HENRIK SÆTRA: It's interesting - a lot of the efforts to try to have AI therapists, for example, if you go into that area. We're just starting to say, oh, ChatGPT is pretty good at answering questions. It's pretty good at understanding input. It can generate pretty useful answers, for example.
So going from there into having AI start to solve our problems, for example, and deal with our psyche, and deal with those kinds of issues. I think that's one area of obvious concern.

But also the more general efforts to gather data on us and try to understand us and guide us by various actors in different aspects, I guess. So it's pretty much everywhere, but there are some areas, of course, that are more problematic than others.

And I'd point to vulnerable groups-- children, those that have less capacity to resist this effort and the potential dangers. Because AI-literate people or highly educated people will often be better able to resist some sort of influence of AI, if you want to call it that.

KIMBERLY NEVALA: Well, I often feel like I'm probably on a very small island with a very small group, but I find it rather creepy. I don't enjoy interacting with a chat bot that sounds like a human, or even robots that look very humanoid if there's not a reason for them to do so. And in most cases, I don't think there is. And so I don't know if that's just me, but I have a very sort of gut-level negative reaction. I think I'd rather, to some extent, have something that's a little -- I don't know if mechanical is the right word -- in form or function, in the output. But perhaps that's me also trying to push back against our innate tendency to anthropomorphize everything.

HENRIK SÆTRA: I think it's really interesting because there's really different approaches to this. I was in a faculty meeting before, and people were, like, we have to embrace this, we have to embrace ChatGPT. Give it to the students, and we have to use it. There's a lot of hallelujah mood out there. But there are also definitely skeptics there. And I very much agree with you. It makes sense to try, at least, to resist a bit.

KIMBERLY NEVALA: Yeah. So we'll see. A soft resistance, anyway.

HENRIK SÆTRA: Yeah.

KIMBERLY NEVALA: Now, turning back to the work - in fact, you wrote your book last year, I believe it was - on AI and the Sustainable Development Goals. Let's talk a little bit about that. But perhaps you can give those of us who are familiar with or have heard the term SDG, or Sustainable Development Goals, but don't have cause to run up against it or work with it on a daily basis, a little bit of a primer about what the SDGs are and what their purpose was.

HENRIK SÆTRA: Sure. Sustainable Development Goals - the UN Sustainable Development Goals is usually what we refer to them as. Now, they're built on this concept of sustainable development, where we have these three dimensions of sustainable development: the social, the environmental, and the economic. And the Sustainable Development Goals build directly on that framework, and they supersede the Millennium Development Goals that came before them.

But there are 17 goals here that deal with different issues related to these three dimensions. They're relatively high-level and intuitive, in terms of the headlines for these different goals. So reduce inequality, eradicate poverty, all these kinds of… peace and prosperity. So they sound like really obvious things.

But then each of these goals has a number of targets as well. So we have 169 specific targets sorted under these 17 goals, which operationalize in a bit more detail what it is the world wants us to deal with in the short to medium term: which is up until 2030. Because these goals are the global goals for the period of 2015 to 2030. By which time we'll most likely need some new goals, which are already in preparation.

But there is broad political unanimity that these are goals that are important to businesses, to politicians, to states, to regions, to everyone. And that also means that there's some compromise here. So democracy, for example, or LGBTQ+ issues: some of these concepts that are controversial in some places in the world are relatively toned down in this framework. So we'll get back to that, I guess, and the pros and cons. But that's a very broad background.

These are political goals, but they are also very much aimed at usefulness for businesses and markets. And they have increasingly been used by almost everyone that communicates something related to their sustainability-related efforts.

KIMBERLY NEVALA: So if we think about applying or using the SDGs in the context of AI, what are we really talking about? Are we talking about using those to identify or prioritize the problems we should apply AI to? Or are we talking about using those -- because they do reference things like safeguarding basic human rights, and agency, and some of those components, the environment -- to influence or vet the design of our solutions? Or something entirely different?

HENRIK SÆTRA: Ah. That's a good question. And I'd say both. I think it's very useful for evaluation: can we use the SDGs to evaluate our impact on the world? How do our activities, as a company - usually as a company - impact the different sustainability-related dimensions and goals? So it's useful for us to have these headlines or keywords that help us identify potential issues, positive and negative, related to our activities.
So I think it's useful in the sense that if you just said we'll have an ethical audit of our activities, for example, that's exceedingly broad. For businesses and for many stakeholders, I think it's very useful, because we get these very pointed - but not too many - areas in which to evaluate our impacts and our potential to contribute positively and negatively.

But definitely, also, for developing things and for finding avenues for potential impact of technology. Because in this framework, it's very obvious that the UN and the authors of these goals - and of the sustainable development report, Our Common Future, way back - are very positive towards technology. And they truly believe that technology has the potential, and is a necessary component, for achieving sustainable development.

So there is a lot of techno-optimism here. It's useful, in that sense, for developing and identifying areas. I might be a bit more skeptical about whether it's enough. But still, I think it's useful also for development, but mainly for identifying opportunities. It focuses attention on the goals that the global community of both states and companies is interested in and concerned about. That funnels attention towards where the market is, for example, as well. So in terms of deep and detailed design options, I think they're not that useful. But in terms of identifying where attention is, and where funds are, and what the market is concerned with, they are useful.

KIMBERLY NEVALA: Are there examples you can share of where they've been applied in that way, and help to guide folks towards problems that they are now pursuing with AI?

HENRIK SÆTRA: I think an obvious example would be climate action.

So that's SDG 13, for example, right? Which is about mitigating and adapting to climate change. And I wouldn't say that the SDGs are necessarily the cause of this. But we're seeing a lot of different AI applications and startups, and also relatively short technologies, aimed at providing different sorts of solutions for understanding our climate-related impact and mitigating impact. Different approaches to, for example, using AI for better carbon capture and storage solutions. But across all industries as well.

You also see all the big consultancies providing these AI-for-climate sites and marketing materials on their websites. And you also see all the major actors, like Microsoft and Google, developing their sustainability clouds - Microsoft, for example - and all these kinds of different things.

So that's just one area where we see it's very obvious that we are deeply concerned about what's happening with the climate. And AI is really being used in a lot of different ways to target these challenges. But that's just one.

KIMBERLY NEVALA: That's an interesting area, because we had the opportunity, also, to talk to some folks previously about AI being used to enable sustainability and sustainability initiatives. And at the same time, there is certainly a lot of - I think, appropriate - criticism that AI today is not, in and of itself, particularly sustainable.

HENRIK SÆTRA: Oh, no. Definitely.

And whenever you use AI to promote climate change mitigation, for example, you of course have to account for the climate impact of AI itself. The work I'm doing now is relatively suggestive of this: the ICT sector is not insignificant and is expected to grow. But in comparison to concrete and buildings and construction, all these things, if AI can make a relatively small contribution to emission reductions in those sectors, we can accept a relatively sizable increase in emissions from AI.

But you have to account for it, right? You have to understand the climate impact of it. So those areas are interesting. But then you also have all the other sorts of AI used for identifying faces and making profile pictures and all that kind of not-necessarily-positive stuff. There's a lot of emissions and negative impact there as well.

KIMBERLY NEVALA: Well, that environmental example is a good one to remind us, as well, that this isn't a zero-sum game. And there are always trade-offs when we're dealing with applications - AI and otherwise, really, but AI in particular these days. There's no easy answer. Sometimes, it really is just both/and.

HENRIK SÆTRA: Yeah. Yeah, usually. And that's the case in my book on the SDGs as well. Because I find - or at least, I don't find, but discover and discuss - a lot of the positive potential for AI to contribute to better health, to better education, all these things. While at the same time presenting new challenges.

So very often, you have this potential for AI to contribute to individual benefits. You could get better education. You could get better health through different sorts of AI applications. But the major problems often come at a group level and a societal level, where we get differences between groups: those who have access and those who don't. And, for example, when you have biased systems that work better for some people than others. So you get this complex picture of different levels of impact, where you might have a positive impact on some level and negative distributional impacts, for example.

KIMBERLY NEVALA: Interesting. So are there examples you can share of some of those new emerging risks that you've identified in the book and through your ongoing work?

HENRIK SÆTRA: Yeah. I think most people working on AI would say that these aren't that new.

But I think a lot of what I focus on is the distributional and power-related issues - related to, for example, AI in education. That's an obvious example where you have the potential to really - not ChatGPT necessarily, but that's an example that people can easily understand. So if you develop some sort of educational system where a teacher uses this in a good way and in a critical way, then this could potentially be beneficial for a lot of students.

But then you also have this question of who has the infrastructure to develop these things? And is this accessible, for example, for children in Africa? These sorts of issues. Or is this primarily available to, and developed by, rich Western countries and China, for example, that have access to it? In that sense, you get this concentration of power and of the profits from these sorts of solutions. But they're also made more effective for the populations in certain parts of the world.

And the SDGs are very explicit on this need for local and regional development and competencies and infrastructure. And affordable and equitable access to infrastructure and new technologies. So technology transfer - and particularly, what I focus on, which is perhaps most novel - the need for this political aspect related to AI that becomes very obvious as soon as you deal with the SDGs. Where you see that, OK, technology can bring a lot of benefits. But it really, really depends on controversial and tough and sometimes unlikely political will and action. So that's where I think we need a lot of effort and focus.

KIMBERLY NEVALA: And where, today, do you see our ability to exert our collective political will, and even just to focus on common issues? Where is that going well and where do we need to build, maybe, a little more of the - I don't know if infrastructure is the right word or not when we're talking politics - but better collaborations… I don't know. What word would you use to effect this?

HENRIK SÆTRA: Yeah, yeah. I think that's a really important point, and I think that relates to societal control of technology.

That also relates to one of my key issues. It would most likely be that it's really important that we don't just develop technology and then go searching for problems to solve with it, right? But that we really have this discussion first of what do we really want? What are our values as a society? Where do we want to go? And then we can see, OK, what sort of technology would enable us or promote this sort of development?

But that requires a relatively high level of competency and foresight on the democratic engagement side, but especially from politicians and regulators. Right? They really need to understand these technologies and engage actively with them. And the regulators and the domain of law are really crucial.

And I think it's a problem when the technology sector has most of the competency and has really tight ties - and a lot of effort spent on lobbying, for example - towards regulators and politicians. It creates this sort of imbalance that could be problematic. Because they tend to be most interested in efforts to self-regulate and having an ethical checklist. Saying, we're ethical, keep your hands off, we're doing good, we can deal with this ourselves. But I think we don't see that much democratic engagement in these sorts of processes. And we're seeing efforts around the world, of course - in the EU, for example, we see the AI Act, the Digital Services Act.

But that's also relatively technocratic if you look at how it comes about. It's not really through grassroots movements and stakeholder engagement and involvement, right? So that's an area of really huge importance, which means we need AI literacy. And we need to focus more on democracy - how to revitalize it and make it relevant in a new sense.

KIMBERLY NEVALA: Tell me if I'm thinking about this the right way. When we think about this, I could see that organizations could potentially go down a path - with good intent, I'm going to assume good intent - of assuming that AI that supports the SDGs, or just more global philosophies, is, by definition, only those applications that they would categorize as AI for good. Things that are specifically applications or programs they've intended to promote, support, or extend maybe social or environmental justice.

And that strikes me as perhaps too limiting a viewpoint. Because it allows us to then not be thoughtful about how we deploy - maybe it's an educational application, or a chat bot that's intended to, whether it does this properly or not, expand access to knowledge - to not necessarily think about: how does this play out? Or what's the accessibility worldwide if it's not explicitly under the banner of something that is a quote, unquote, "AI for good" initiative?

HENRIK SÆTRA: I think that's a really, really good point. And that's also something I write a lot about. Because you have this isolationist approach to the analysis of technology that deals with: we have this new product. What are the implications of this product, in particular, related to, for example, these individuals? And then it could definitely show that there's huge potential for good here. So we label this AI for good, and we say this is SDG 4: this is good education - AI for good education - and we're done, right?

So I definitely agree that there's a danger in that, and that we need what's called an integrated, or a more holistic, analysis of these different ramifications. And of how these systems are part of a sociotechnical system that's much broader than each different application and each different system.

So I definitely share that concern in a lot of what's being done, both in sustainability and the world in general, with greenwashing. And just putting all these colorful boxes of the SDGs on websites, and saying, we're doing good. It's really easy to abuse them as well. So I really share that concern, and I think it's really important to get that other level of analysis in whenever we talk about, is this a good solution? And force attention to these sorts of different valuations.

What are the questions we have to ask if we say that this is AI for good - AI for education, for example? And that's what I try to do in that book. I have this list of questions on the pro side. Can we say that this is a good thing for education if it does this, this, this? Yes, sure. Then you could check some of these boxes.
But you also have to answer critical questions on the other side of that box. Which are: does this apply equally to all? Is it accessible for all groups? These different questions. So I think it's really important what you said there, and it's far too easy to abuse this and just use it for greenwashing, if you want, in a broader sense.

KIMBERLY NEVALA: That probably brings us to a good time to bring in the concept of ESG (environmental, social, governance). There is, as you just alluded to, quite a lot of discussion now about boards looking at metrics that look good but might not actually have a whole lot of substance behind the number.

So we're thinking about organizations coming up with some of these AI-enabled applications… And there's a lot of discussion now, too, about how to think about AI globally or broadly, in terms of principles, if it has to be implemented locally to meet the norms and the customs and meet different communities where they're at, if you will. Maybe it's in terms of accessibility, or technology, or literacy, or whatever that may be. Again, good intent, it seems, could be a way for us to give ourselves a way out from having to address broader issues. Particularly with digital technologies that are so scalable and can really go almost anywhere.

But it also raises, I think, a question of whether corporations, whether they're big or small, should be expected to be social agents. Is that an appropriate expectation? Is that something that we should be pushing?

HENRIK SÆTRA: No, I think we shouldn't. Definitely shouldn't. I think history has taught us a lot. And I'm reading Thomas Hobbes, and Machiavelli, and all these things from political theory. My view of human nature is relatively skeptical.

So I'd side with Adam Smith and these people - the Friedman Doctrine, for example - who say corporations are in it to win, to gain something from it, to satisfy their ambitions, in order to grow, for example. So even if I might wish for a situation in which we expect them to be ethical in a broader sense, I think we can't expect them to be. We should design our systems to be foolproof, even if we might, of course, both applaud and wish for these things.

I think we have to set the bar so that our regulation forces the behavior we want. So that if we make good regulations, we can try to achieve the goal of Adam Smith. Which was that the baker does good not from the goodness of his heart, right, but because he wants to make money. Right? If our institutions and our regulations prevent the worst ethical bads, and we make sure that we have ways to reward the behavior we want, I think that's a much better approach than hoping for ethical behavior on the part of corporations.

Because I think - I think that we won't really get there anytime soon, even if we want to. And there are some good companies, but these are really, really exceptional. You have Patagonia, for example. It's often the one example of this kind, of we're in it to do good, right? Most others aren't, and I think that's fine.

But we have to make sure that we have frameworks for reporting and disclosing information that make shareholders able to punish those that don't do good. Right? That do exceedingly bad things. We have to prevent that in some sense, and that requires information. Which brings us to ESG, for example, and those kinds of attempts to quantify and disclose and report on these issues that relate to ethics in society and the environment, for example.

KIMBERLY NEVALA: And you've said there are some lessons to be learned for when we're evaluating AI applications and trying to hold companies accountable for what they're putting out in the world. While also providing some mechanisms for them to think through and be mindful about how they develop. You said there is some momentum, potentially, and some good lessons learned. Could you tell us some of the lessons learned from ESG that can be applied in the case of AI as well?

HENRIK SÆTRA: Yeah, sure. This is something I'm working on now, which is called the AI ESG protocol. But that's mostly an attempt to fix some of the holes or blind spots in existing frameworks.

Because one concern of mine is that you can make an ESG report, check all the boxes, and have all the criteria met, but not really deal with any of the key issues related to the problems generated by AI. Because the frameworks are too nonspecific, and they often only have these quantitative indicators at a very high level, not broken down into the technologies used, the activities, and these sorts of things.

And you don't really deal with the issues of power, and structure, and all these kinds of concerns baked into the SDGs at all. So you can check all the boxes, and you can score really high on RSPE, and you can fill out your SASB and TCFD, and you can use GRI - all these things - without ever having to deal with and really explain the sustainability-related impact of AI.

So what I did in one article was look at Google's and Microsoft's ESG reporting - sustainability reporting, for example. And they have this part that's quantitative and that's really non-informative in terms of understanding the AI ESG impact. And then they have this qualitative part, where they brag about everything good they're doing. Yeah, there's AI for good, the environment, we share our technology with these. And it's really unsystematic. And it's too easy to just brag about what you're doing that's good without really being forced to disclose information related to the potential negative aspects.

So that's what I write about in this AI ESG protocol: an attempt to have a tiered approach to ESG reporting. You have one higher level that's applicable to all companies, which covers the overall indicators: greenhouse gas emissions, board diversity, and having a purpose in your strategy - all the things that apply to everyone. But then we have a tier two, for example, and a tier three level of reporting that's more detailed on sector, and even activities.

That would be one way I'm looking at potentially approaching and solving these issues. Because I think companies using that approach would have better-structured information, being forced to disclose more of their relevant impacts in a way that's good for both them and stakeholders. Because it's more transparent, it's easier to compare, and it's easier to understand what's going on in the company - both the risks and the opportunities. So markets will also reward those that have good capacities and do good things and have good governance around these sorts of issues.

KIMBERLY NEVALA: So market incentive probably trumps political incentive when it comes to that?

HENRIK SÆTRA: Yeah.

KIMBERLY NEVALA: Now, does this tie back to the conversation we started with about the risk of over-relying on just what we can quantify? When you're talking about this, is it safe for us to assume that we're talking about not just adding additional metrics that we can calculate with data points you can collect, but adding rigor around the qualitative qualifications? I'm having a bad alliterative morning here. Asking questions that we want you to answer qualitatively, but having a template, if you will, or a structure for what questions you want to ask or respond to?

HENRIK SÆTRA: Yeah, exactly. That's exactly the point. And I think what I tried to do in that is a tentative approach that just tries to get at the problem.

Where I think there are a lot of indicators we could have that are quantitative, and that are useful, and that could be used in this. When finance companies try to compare a ton of different companies, they want quantitative stuff. They want the indicators, and rankings, and percentages, right? Because they don't have the time to go into all the gory details of all companies.

But I think complementing that with just what you're saying now is what I'm trying to do. To have the key indicators that are qualitative. That require a short, brief response related to how we're dealing with stuff, where you can find stuff, how we integrate stuff, how we govern, what sort of governance structure we have. Those are non-quantitative, but they don't really require, or shouldn't be answered through pages and pages of text either. So it's easy to have this middle approach to some of the reporting by just asking the right questions and making it really easy to find the information those interested stakeholders are looking for.

KIMBERLY NEVALA: And does responding to this, or integrating that rigor, or even just the process of asking these questions…does it integrate nicely into existing compliance domains or risk management domains? Is this a separate ethical domain? Or is there some overlap, but with some very distinct things when we're talking about AI that we need to develop?

HENRIK SÆTRA: There's two parts to this answer.

The first part would be this is meant to integrate very nicely with all existing reporting frameworks. So it's connectable to the GRI metrics, to SASB metrics, so you can just complement existing work on frameworks and standards on reporting and disclosure, for example.

But in terms of distinguishing this from other ethical domains, that's a different sort of question, because that relates to AI ethics. Is this really just AI ethics, or is this about computing, or is this about data, or is it about privacy and manipulation? Is it about the dangers of unemployment from automation? So that's a really good question. I think it's really important not to just jumble all of this into AI ethics and say that this is something new that we label AI ethics, just because it's the newest kid on the block that's really being talked about now.

But in terms of making it really easy for companies to do, I think that's really important. And then that ties us back to… these are old concerns. Most of these questions are relatively old concerns. I think it's really important, as well, to make it relatively actionable. And that requires a lot more work because I think it's difficult now.

Different stakeholders need very different approaches. Whereas academic philosophers might write philosophical treatises on AI and what it does to us, that doesn't really work in a local municipality where stakeholders are contemplating using AI in their decision-making systems. And it's not what developers need, for example, when they're developing a new system. So I think there are different needs for different stakeholders. And trying to make one universal approach here - forcing one approach on everyone - would definitely make it overwhelming, or underwhelming, or... yeah, I think that would be impossible. So I think some pragmatism is required here as well.

KIMBERLY NEVALA: Yeah. It'll be interesting to see how this plays out. We're starting, a lot of times, with risk and existing approaches to risk management that are comfortable, well-understood, but may limit-- again, restrict that view-- too much. At least in terms of us trying to really anticipate what the future implications of some of these technologies and applications are. Even if they seem relatively innocuous to start.

HENRIK SÆTRA: Yeah. But then again, I think it's really, really useful to connect these issues to risk management: seeing risks and opportunities and using double materiality in order to understand these things. Because I find, as a sustainability consultant, that it's much easier to convey the concerns of AI ethicists, for example, through these kinds of new risks: transition risks, climate risks, fiscal risks. All these sorts of different risks and opportunities arising from stakeholder demands and requirements and preferences, for example. So I think it's really useful to use these existing frameworks that are working reasonably well already.

So I think it's definitely possible and very useful to use these differently. So that's why I try to connect it, at times, to ESG and all those things that are already being done. Because I think making this just a new thing - another thing, a completely different thing, just for understanding AI impacts - wouldn't really make sense. Or at least, it would make it a lot less accessible.

KIMBERLY NEVALA: And certainly less palatable for organizations. So what advice or immediate steps would you recommend for organizations, and maybe even for individuals, who are looking to level up their understanding and awareness of these issues? For instance, those that relate to the SDGs or ESG for AI? Both in terms of improving literacy and starting to take the first steps toward making this actionable. I don't mean that in a litigious sense. I mean making this operational.

HENRIK SÆTRA: I guess the silly answer would be to read the book on AI and the Sustainable Development Goals. But that would be a very accessible start, because that's a short book dealing with all the SDGs and the potential impacts of AI. But there are definitely other sources out there, and a lot of people are writing about this. So that would be a good, decent way to start, I guess.

But then mapping as well, if we're talking about the business sense. And I think that's most interesting right now. Because if it's the general public that's just interested, they can read about whatever concerns them. But when businesses are trying to deal with this, they can have a look at: what are our capabilities in terms of gathering data, developing software, using software? What sort of activities do we have, and what are our capacities? And then try to map out these different potential activities in relation to, for example, the SDGs. I think that could be a useful and very easy exercise to just start contemplating: OK, we're doing this thing here. This relates to this inequality aspect, for example. This relates to health, definitely. But what could be the downsides to this?

So I think it's relatively easy, but it's also definitely beneficial to get someone that's a little bit experienced with working with these things and with how the terms are used, for example - what this means and implies. Of course, get someone to help in that process if you want to. But I think it's just useful to get the board together and do this anyway. Or getting different levels of the organization together would definitely be to their benefit. Get some technical people, get some of the strategic people together, and discuss, for example, through the SDGs. Because that could be a tool for making it easier to sort things and have some talking points. And then when you run into issues - OK, this could be a potential impact - then you can start reading up on the details and the different things. That could be an easy way to start, at least.

KIMBERLY NEVALA: And it has the benefit of being discrete. There's a set of categories. You can say, yes, this category would apply or wouldn't apply, and move on to the next one. This might apply - then look down a level and use it - someone may think this is using it somewhat informally - as a form of inspiring critical thinking and brainstorming way up front in the process.

HENRIK SÆTRA: And I think that wouldn't necessarily become a state-of-the-art sustainability report. You'd have to develop that further, of course. And you'd have to get some assistance if you're not really familiar with what these things require. But as a start, I think it's good.

And I think it's really important to understand that this is not just a precautionary measure, and trying to make sure you don't do anything wrong, right? Because this is a really good way to identify opportunities. Both for developing new things and doing new things, but also to communicate what you're already doing. And a lot of people are discovering that through this process, that, oh, we were actually doing a lot of good stuff that we didn't really know or that we couldn't really communicate. So it's really useful in that process as well.

KIMBERLY NEVALA: Excellent. So use it to identify new opportunities and to highlight the ones you may be working on that the rest of us are not yet aware of.

HENRIK SÆTRA: Yeah.

KIMBERLY NEVALA: That's fantastic. Well thank you, Henrik. I really appreciate you sharing your insights into how both SDG and ESG can be used to positively guide our collective journey with AI. Really enjoyed the conversation and hope we can get you back.

HENRIK SÆTRA: Thank you very much, Kimberly. It's a pleasure being here.

KIMBERLY NEVALA: Excellent. Now, in our next episode, Chris McClean, who is the Global Lead for Digital Ethics at Avanade, joins us to talk about - you likely guessed it - digital ethics. He's going to share his perspectives on trust and why we must all think beyond ourselves in the age of AI. Subscribe now so you don't miss it.

Creators and Guests

Kimberly Nevala
Host | Strategic advisor at SAS

Henrik Skaug Sætra
Guest | Head of the Digital Society; Associate Professor at Østfold University College