Environmental & Social Sustainability w/ Henrik Skaug Sætra
KIMBERLY NEVALA: It's day three of Pondering AI's Insights and Intuition Series. Today, Henrik Skaug Sætra reflects on environmental and social sustainability. Welcome back, Henrik. Thank you for joining us yet again.
HENRIK SKAUG SÆTRA: Thank you very much.
KIMBERLY NEVALA: So you and I spoke this past February. What have been the most significant developments, positive or otherwise, in your areas of interest since then?
HENRIK SKAUG SÆTRA: Well, those are broad areas of interest. But if we stick to AI and ESG- and SDG-related topics, there's some increased awareness of the environmental impacts that AI has on the climate. But nothing really substantial in terms of getting down to fixing it. You have the occasional article here and there, and everyone says we need to account for the climate impact of AI, the footprint of AI. Without us really getting close to actually measuring it and taking steps to bring it down.
We're also seeing a bit broader awareness of, for example, water consumption. You get “a half a bottle of water (consumed) for every ChatGPT search.” So these occasional articles show that people are also seeing there are environmental impacts beyond climate. And that's a good thing.
There's scope for more, but that requires more transparency, better frameworks for reporting, and demanding reporting on environmental impacts from the big tech companies, for example. There is so little transparency that there is really no good standard measure of the climate or broader environmental impact of AI these days. That's unfortunate, but at least we're getting a bit closer to demanding it as people become increasingly aware of the potential downsides in that sense.
Other than that, there's a plethora of initiatives related to regulating and auditing and figuring out how we can solve the sustainability-related issues of AI through law, through better measurements and better metrics. But I haven't really seen great initiatives targeted at the things we talked about last time: the social aspects related to automation, the concentration of power and wealth, inequalities, and the need for politics, for example.
And by politics, I don't mean just making better laws or just saying that we need regulation, but deeper political engagement, democratic engagement. So awareness and literacy and those kinds of things are important. I haven't seen too much on that side. What I have seen, at least, is a lot of computer scientists jumping into the regulatory and legal sphere.
KIMBERLY NEVALA: Well, it's a good first step because first comes awareness, then comes action. You've also recently published a new book. What were the key questions you were exploring in that book?
HENRIK SKAUG SÆTRA: Ah, that was Technology and Sustainable Development. So that was a bit broader than AI but definitely related to AI as well. The subtitle of that book is The Promise and Pitfalls of Techno-Solutionism. Which is this: once we find or identify or develop new technologies, we often search for things we can solve with these technologies. Instead of starting with what sort of society we want and what our main problems are, and then seeking out or developing technologies to address those problems.
We tend to go about it the other way around is my argument. That's what is referred to as techno-solutionism, in a sense: the idea that we can solve and address all our questions through technology.
So there are different authors looking at different sides of this: geoengineering, for example, and all sorts of social, environmental, and economic chapters - lots of economic and political ones - and some by me as well.
KIMBERLY NEVALA: Excellent. I will refer everyone to that and look forward to reading it myself. This issue of making sure we are solving problems that significantly impact people - and doing that in a way that is humane, by which I don't mean just human versus machine, but in a way that actually makes sense instead of trying to shoehorn some technology in to make the current process work - is really important.
HENRIK SKAUG SÆTRA: Yeah. But that's a result of the current market. In terms of capital, for example, AI sells so well that it's really tempting for everyone now to label everything AI. Even if you don't really need it, even if a more basic solution would, in my opinion, be better precisely because of its simplicity. But to attract investors and look good, there's a lot of AI slapped on everything, which makes things overly complicated now and then.
KIMBERLY NEVALA: So, all of that being said, as you look at AI and the SDGs, what is your projection for what may unfold or what we can expect to see as we round the corner into the new year?
HENRIK SKAUG SÆTRA: I think we'll see increased attention on the idea of the twin transition: the twinning of the digital and green economies, and the role that AI and technology can play in getting us towards the environmental goals we have. So that twin transition idea is increasingly popular.
But, at the same time, that tends to play up the upsides of AI. They say we can use AI to fix our environmental challenges, and then there's always this kind of comma: …but we need to make sure that it's not too bad while doing it. So you get this kind of equation.
What follows is that some sort of life-cycle assessment methods will need to be developed for the use of AI technologies. So that we can get a good overview and some actual numbers on the environmental impacts of AI - not just climate, but environmental impacts more broadly - and calculate those tradeoffs. People propose to use AI to address climate change, but what's the cost side here as well? And there are no really good life-cycle assessment methods developed, or at least none uniformly available. So that's one thing we can and should see.
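To make that tradeoff concrete, here is a minimal sketch of the kind of back-of-the-envelope estimate such methods would formalize. Every figure and the function itself are illustrative assumptions, not numbers from this conversation or from any published assessment:

```python
# A rough training-emissions estimate in the spirit of common
# ML carbon-footprint accounting. Every input below is a
# hypothetical assumption, not a measured value.

def training_emissions_kg(gpu_count: int,
                          gpu_power_kw: float,
                          hours: float,
                          pue: float,
                          grid_kg_per_kwh: float) -> float:
    """Estimate training emissions in kg of CO2-equivalent."""
    # Total electricity drawn, scaled up by data-center overhead (PUE).
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    # Convert energy to emissions via the local grid's carbon intensity.
    return energy_kwh * grid_kg_per_kwh

# Hypothetical run: 512 GPUs averaging 0.4 kW each for two weeks,
# in a data center with PUE 1.2, on a grid at 0.4 kg CO2e per kWh.
print(f"{training_emissions_kg(512, 0.4, 24 * 14, 1.2, 0.4):,.0f} kg CO2e")
```

A real life-cycle assessment would extend well beyond this one term - embodied hardware emissions, cooling water, end-of-life disposal - which is exactly why a standardized, uniformly available method is needed.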
Also, we have the AI Act, and we have lots of regulation going on and coming. The AI Act process was drawn out quite a bit because, in the middle of closing it, ChatGPT and generative AI and LLMs arrived and they had to get those in there. So it's turned into a much longer process, and definitely an interesting project.
That and the AI safety discussion is quite interesting. It relates to sustainable development, of course, if we think that human lives, period, are endangered by AI. You get this conflict between the people concerned with the actual harms going on right now - the biases and discrimination, automation, people losing their jobs - versus those focused on longer-term superintelligent machines that will eradicate us all.
KIMBERLY NEVALA: The comment you made about regulation is interesting. It's a good example of why the regulation and the regulatory scheme need to build in resiliency but also adaptability on an ongoing basis. Because if we have to stop the development of the baseline framework or the baseline regulations and laws because something new just came on the scene, we will end up never having anything that is active. Because there will always be something new coming on the scene.
HENRIK SKAUG SÆTRA: Yeah, my main gripe with the focus on regulation now is that people are quite history-less, I'd say, in terms of: oh, this is some brand-new magical technology; we need regulation specifically for generative AI now. Then we'll always be chasing the technology and always be trying to address the top-of-mind questions related to what's happening now. Without ever taking a step back and looking at what this is doing to social structures and those kinds of deeper questions.
So a simpler regulation that's more technology-neutral would be something I'd very much encourage, rather than this kind of very technology-specific regulation, which will never catch up. We'll always be chasing technology if that's the way we approach regulating it, in my opinion.
KIMBERLY NEVALA: So where would you focus our attention in the new year if you were, in fact, directing that agenda?
HENRIK SKAUG SÆTRA: If we start with what I touched on before, I would look at some sort of requirement for reporting on and disclosing the environmental impacts of training and running AI systems. That would require that we have some sort of method, but that's not magic. In the world of LCAs (life-cycle assessments), there are toolboxes for us to use.
Requiring and mandating the disclosure of these kinds of numbers would be one big, important step because it would drive the development of methodologies. And it would also give us a much better overview of the actual environmental impact of AI. Because right now, there's a lot of guesswork.
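Purely as an illustration of what such a mandated disclosure might contain - the fields and values below are hypothetical and correspond to no existing reporting standard - a per-model record could look something like this:

```python
# Hypothetical per-model environmental disclosure.
# Every field name and value here is an illustrative assumption.
disclosure = {
    "model": "example-llm-v1",             # placeholder model name
    "training_energy_kwh": 68_813,         # metered, not estimated
    "training_emissions_kg_co2e": 33_030,  # location-based accounting
    "water_consumed_liters": 250_000,      # cooling water across sites
    "pue": 1.2,                            # data-center efficiency
    "grid_region": "EU-North",             # where training ran
    "inference_wh_per_1k_queries": 300,    # ongoing operational cost
}
```

Mandating even this much would let third parties compare the wildly diverging figures mentioned next.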
If you look at the environmental impact of ChatGPT, you get wildly diverging figures. But it runs on Microsoft Azure, and Microsoft prides itself on its sustainability cloud and on being able to give everyone that runs on its platform these numbers. So, as far as I can understand, this could be readily disclosed already. Of course, there's perhaps an interest in not doing so. But still, that's something we could require, and it should be relatively easy to achieve.
The other thing is it would be interesting now to step back and see what sort of deeper social structural changes that follow AI and automation and concentration of capital. As particularly generative AI, for example, disrupts a lot of employment and work markets in the sense that people's jobs change, people's jobs might disappear, or they need to do something else.
So those kinds of questions related to decent work, inequality, and these sorts of systemic change would, for me, be a main focus area. While I could agree that these are good things that will give us benefits - so that in the next 10, 20 years we'll look back and see that this was good for us - there are some rough changes in the meantime. At least in the United States, for example, where you don't have the social safety nets that actually catch the people being disrupted in their actual lived lives today.
So they won't really look 10, 20 years ahead now and think that it'll be worth it by then, right? Of course, there are a lot of unions and others really engaged with that already. But that's one thing I'd like more focus on.
KIMBERLY NEVALA: It's an interesting point and a great challenge laid down. To that latter bit, I heard this great analogy. I want to say it was Carl Benedikt Frey who said people say we'll lose so many jobs but we'll also gain so many jobs, so it'll all be OK. He said it's a little like sticking one hand in the oven and one in the freezer and saying, hey, the temperature is average. You should be comfortable, right? I thought that was brilliant.
HENRIK SKAUG SÆTRA: Yeah, it is. It's really good. And it's really fitting because we need to think about both these things. The long term, yes. But we also need to think about these people right now. One recommendation is Brian Merchant's new book Blood in the Machine, which is about the Luddites and how the first Industrial Revolution has some similarities to what's happening now. In terms of: yes, we think it was worth it and we don't want to forego those things. But it was quite rough for a lot of people for a while.
KIMBERLY NEVALA: Awesome, well, thank you, Henrik. This was a pure pleasure.
HENRIK SKAUG SÆTRA: Thank you very much.
[MUSIC PLAYING]
KIMBERLY NEVALA: 12 Days of Pondering AI continues tomorrow. Subscribe now for more insights into what's happening now and what to expect next in the ever-fascinating world of AI.