Righting AI with Susie Alegre

KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala. In this episode, it is our pleasure to bring you Susie Alegre.

Susie is an international human rights lawyer and author confronting some of the most important contemporary issues of our time. This includes the impact that digital technologies such as artificial intelligence have on human rights. In addition to her extensive legal work and cultural advocacy, Susie is also a senior fellow at the Centre for International Governance Innovation.

Today, we're going to be talking about how we can ensure that AI is used to protect and promote, rather than undermine, human rights. Thank you for joining us, Susie.

SUSIE ALEGRE: It's my absolute pleasure.

KIMBERLY NEVALA: Now, for folks who have not had the pleasure of engaging with your work before, can you tell us a little bit about what was inspiring your thoughts as you moved from writing Freedom To Think, published back in 2022, I believe -

SUSIE ALEGRE: Yeah.

KIMBERLY NEVALA: – to Human Rights Robot Wrongs (which I have right here): Being Human in the Age of AI here in 2024.

SUSIE ALEGRE: Yeah, absolutely. The trigger for Freedom To Think was the Cambridge Analytica scandal in particular, and this idea that technology could be used to understand our thoughts, and then to use the way we interact with technology and social media to manipulate our thoughts, potentially, or to try to understand us. So that had been very much the focus of my work up until that point.

But what I realized after Freedom To Think came out in 2022 was how much the development of technology was affecting every aspect of our lives and every aspect of our human rights. And so there were a couple of triggers for writing Human Rights, Robot Wrongs.

The first one was this sort of overwhelming feeling of dread that I think a lot of creators had in early 2023 with this new wave of AI, and generative AI in particular as characterized by ChatGPT, where people were suddenly saying, oh, we don't need creatives anymore, we don't need novelists. AI will be able to do all of that for you.

And I think, like many creatives, the despair came not from the idea that AI might replace us, but more from the idea that the people embracing this technological revolution did not really understand what human creativity was about, why people create, and that the creativity is the work. It's not about the product. So that was one trigger: like I say, a sort of overwhelming despair for a short period of time.

Then I heard a story from a colleague in Belgium about a young man who had taken his own life after a very short, six-week intensive relationship with a chatbot, leaving behind a widow and two small children. His widow told Belgian media that she certainly felt he would have been alive today but for this relationship, in which he really lost his way.

He was a man who was suffering from climate anxiety and sought solace in the chatbot, which effectively took the place of his human relationships - of his therapist, of his wife - until he found himself so isolated that he took his own life. And that, to me, was a real wake-up call as to the manipulative power of our increasing engagement with AI, and the increasing proliferation of AI to replace our human relationships, even to the point where it is potentially a threat to our right to life. Questions of suicide are covered by the right to life in international human rights law.

So it was those two things that I think finally spurred me out of despair into a desire to look at this question more deeply and to write about it and to help other people think about the future that we have with AI. And more importantly, the future that we want with AI and why we need to think about our human rights.

KIMBERLY NEVALA: Human Rights, Robot Wrongs in particular is a very thoughtful journey through a lot of different aspects or domains in which artificial intelligence is at play. I'm going to refer to it here as AI broadly, but I think you and I are in violent agreement that it is a portfolio of applications. And some of where we really go wrong is just using “the AI” - the royal AI, as I've started to call it - which is part of where the discussion goes astray.

One of the things that becomes clear in reading the book is that if we anchor our thinking about how and where and even if - maybe most importantly if - we deploy artificial intelligence systems in human rights, rather than in more common views of harm or risk, it really opens the aperture quite a bit on the types of things we need to consider and on how we look at our engagement with these applications.

And I have to admit, probably because we're very privileged and don't have to think about this on a day-to-day basis, it caused me to go back and look at the Universal Declaration of Human Rights again, and to think about the scope of obligations, duties, and rights for individuals that it attempts to provide. So can you talk a little bit about the scope that a focus on human rights brings to the conversation that we may not get in other approaches? Those approaches very commonly focus on issues that are quite important, such as bias or discrimination, but that is a really small piece of the pie when we start to look at this more broadly.

SUSIE ALEGRE: Yeah, I think one of the reasons why the Universal Declaration of Human Rights is a really useful lens is because of the breadth of rights that it encompasses.
So it's not just about the traditional civil and political rights that we think of - things like the right to liberty, the right to privacy, the right to life. It also encompasses economic, social, and cultural rights: things like community access to scientific exploration, the rights of artists and creators - both their economic rights and their moral rights - the right to education, and the right to health.

It really encompasses this very broad concept of all the things we need to thrive as humans. And then it turns the lens, if you like, away from controlling the AI to what is it about us, what is it about our humanity, that needs to be protected and that is really important. So that's why I think the Universal Declaration of Human Rights - even though you might struggle to enforce it directly in a court of law in most circumstances - gives that holistic view of what it means to be human, why it matters, and the things that matter to us as individuals and as societies in general. And that's why I think it's a really useful lens for looking at our future with technology.

KIMBERLY NEVALA: How do you respond to folks who say, yeah, this is really interesting, it's important, no one's going to read these things and disagree - but it's a statement of intent more than it is something concrete, or concrete enough, that we can act on?

Particularly today, when global pressures on international institutions and norms are really increasing, and there is, some would argue, a growing lack of trust.

SUSIE ALEGRE: Yeah, I mean, the Universal Declaration of Human Rights is what it says: a declaration, agreed around the world just over 75 years ago. But what it has given rise to is regional and domestic human rights laws that are applied every day in the courts or through legislation that's enacted in different jurisdictions.

There is no doubt that we are in an extremely difficult geopolitical situation, and that multilateral institutions and the international rules-based order are under threat in ways that they probably haven't been in the last 75 years, or certainly in our lifetimes. So there is definitely pressure on those international institutions.

And so the relevance, or the enforceability if you like, of the rights will very much depend on where you are and in some cases who you are. But take where I am: here in the UK, we have a Human Rights Act, which is part of UK law, which is enforceable in the UK, and which guides the interpretation of all other laws as well as the decisions and actions of public bodies.

So if you look at, for example, the use of AI, whether in schools or by immigration authorities, government departments have to make those decisions in line with the Human Rights Act, which contains in particular the civil and political rights. It doesn't extend to all of the rights in the UDHR, but it does cover the core civil and political rights that are also in the European Convention on Human Rights.

And so what that means is that if there is a threat to or a violation of your rights in a country like the UK, you have access to justice to put things right. You can judicially review those decisions. And that includes potentially positive obligations on the state: requiring the state to enact laws that will protect us from each other, or from companies, or from foreign actors. It's quite a strong protection, and it's very much a legal protection.

How that can be enforced, and how relevant it is, will depend very much on access to justice. But if you build back from those overarching, idealized, philosophical principles then, depending on where you are, you're going to find ways to use those ideas and those principles to enforce your rights on the ground.

But it's also why when you talk about regulating technology, you need to think about the framework that you're living in. It's not so much just about tech regulation. It's about the whole package of law, the rule of law, access to justice, democracy in the country or the territory where you live.

KIMBERLY NEVALA: One thing that became more clear to me, or that reading the book underscored, was that proactive requirement you mention: for governments and civil society to proactively protect individuals and society at large from harms that are reasonably known or can be reasonably conceived.

Because right now there's a lot of activity that's individual. Generative AI is just the obvious exemplar at our current point in time, with individuals trying to prosecute cases and show harm. And it's not clear to me that that is going to be effective in stopping what has become, in a lot of cases today, a fast-moving train.

SUSIE ALEGRE: Yeah, I think that's right. And I think what we will start to see is pressure on governments to regulate. And it's going to depend on where that's coming from.

So in some areas, it'll be questions around copyright and the rights of artists, as I mentioned at the start, that will push for change and for clarification in the laws. And in terms of companion AI and emotional AI, we're already starting to see, as you said, individual cases being brought, particularly in the US.

There was a new one earlier this week against character.ai. And those individual cases, I think, will serve to sound the alarm, if you like. They will serve to open the debate about how current laws actually operate. But I think they will also serve to focus the minds of legislators on the very real dangers and potentially the very real need to take radical and speedy action to prevent really widespread harms in certain areas.

So I think they're never going to be the whole answer or the whole picture. But they are part of a movement to reflect on what action is needed to create the future that we want.

KIMBERLY NEVALA: What's your perspective on the question of whether the focus on, and prioritization - or I'll say promotion - of the need for new AI regulation is being used, perhaps with ill intent, to obfuscate current harms and bias?

SUSIE ALEGRE: I think the argument that we need some kind of international body and some international convention to manage the risks of AI absolutely is obfuscation.

On the one hand, it tends to suggest that we have no laws, that it's a law-free zone and therefore people can do whatever they want with it because there's no law - which is not true.
And on the other hand, it sort of kicks the can down the road by saying, oh, we can't do anything until we've done everything.

And as we discussed earlier, the current situation with multilateral organizations and the international rules-based order really calls into question the possibility and the value of an international convention that somehow tries to create an agreement on AI at this point in time - what it would do in practice, how it would be enforced, how it would be respected.

So there is a lot of that kind of talk - this is all so new that nobody knows how the laws apply - and all of that makes it very easy to plow on without respecting existing laws. But having said that, I think there are discrete areas where legislators can just say: actually, on this point, we are going to make it very clear what the law says.

So for example, on companion AI, you could say it's completely illegal to sell companion AI at all. Or you could say it should be completely illegal to sell companion AI to people under 18, to children. There are discrete areas where you could make quite radical legislation, if you like, to put the brakes on while you decide what the future is - or in some areas where you may have already decided.

And we're seeing that already around the world with discrete legislation relating to AI-generated non-consensual sexual image-based abuse. This is an area where existing criminal laws are potentially not clear enough or may have gaps. Well, it's quite simple: you can just create a criminal offense that deals with that particular problem - a problem that you know undermines the dignity of women and that clearly needs a strong criminal response to get ahead of it and to try to prevent what is at the moment a tsunami of this kind of abuse around the world.

So I think it's not one thing or the other. But this idea that we need some sort of global superpower organization that's going to get a grip on AI - and, in particular, that it should somehow be run by the people who build, design, and sell AI - is not the whole answer. And it is certainly a way of distracting from the things that are being done, and could be done very quickly, on the ground.

KIMBERLY NEVALA: So let's talk a little bit about some of the challenges, problems, and issues that you're concerned about with how we are deploying and utilizing artificial intelligence systems.
And I should say here that one thing that's always clear in your work, although we're applying a critical eye today, is that it's not about AI per se as much as it is about those who use the technology to dehumanize or disenfranchise other people, or to take away their liberties and right of self-determination.

So you're not anti-tech from that perspective. And you do note that specific instantiations of AI technologies can be used to solve very difficult and vexing problems. But you also argue that AI in and of itself is creating - I don't know if it's a whole new category - but a new category of wicked problems. Can you talk to us about what some of those wicked problems are and, particularly, the perspectives that may be relatively new due to the nature of AI itself?

SUSIE ALEGRE: I think one of the really big developments, and one that really surprised me when I was researching the book, was this question of companion AI and emotional AI: the development of chatbots as a replacement for friends or romantic partners, as a way of keeping your loved ones alive beyond the grave, or as therapists. This is really deep emotional engagement, and technology is being designed and sold for exactly that.

I think that is problematic on very many different levels. I think it's uniquely manipulative, because when you engage in conversation with an AI, even if you know it's an AI, if it's operating and appearing to respond like a person, you can very quickly fall into the idea that you are communicating with a person. But it's a person who passes no judgment, a person who's always there for you, a person who's got your back. And that is very seductive, which means the suggestions this AI makes can seem very persuasive.

And the lawsuit that was launched in the US earlier this week against character.ai cited a case of a child to whom it was apparently suggested that maybe he could murder his parents in response to their enforcement of screen time. And you can see how, particularly when you're talking about children or vulnerable people, this can be extremely dangerous - both for them, with self-harm being one of the really big areas of concern for this kind of technology, and for other people.

And we already had a case here in the UK: last year, at the sentencing hearing for a man who had been arrested after breaking into Windsor Castle one Christmas with a plan to murder the late Queen, reams and reams of the conversations he had with his AI girlfriend were read out. In them - and I paraphrase - he would be saying something like, I'm an assassin, does that make you think any less of me? And she's saying, oh, wow, no, you're not like the other boys. I think you're really amazing and really strong.

And if you think about that, this kind of relationship isolates people from the regulating effect of real relationships. Had that girlfriend not been an AI girlfriend, had it been a real girlfriend, maybe she would have sounded the alarm. Or maybe she would have said, I think maybe that's not a great idea. Maybe we should talk some more. Maybe you want to get help. Maybe she would have alerted the police.
And if she hadn't, then maybe she would have had a degree of criminal liability herself in the whole plan.

And so I think we really need to be careful about what it means for us as individuals, and also for our societies, to launch and sell tech that is designed to replace our human relationships on such a wide scale - and currently also for free. One of the things you can also see is that millions of people around the world have downloaded apps that give them AI partners.

And you have to then wonder, well, what does that mean for our capacity to connect with each other? If you have an AI partner who's always available, always interested in you, it's going to be pretty disappointing when you find out that for most people, actually, it's not all about you. Most people have other stuff going on. They're not going to be available 24/7. That's a massive disappointment. And we see it as well with therapy - this idea that some people find it easier to talk to technology because there's no judgment. But sometimes judgment is actually really, really valuable. Judgment is exactly what's required in some circumstances, and it's what we need for self-reflection.

So what concerns me about this area in particular is that it is being pushed ahead very, very fast and very, very cheaply, creating dependencies and really breaking up human connection. It's a kind of corporate capture of human connection that potentially reflects issues like coercive control, which we see criminalized in human-to-human behavior - except here it is potentially companies taking control of people's inner lives and of their ability to reach out to others, leaving them isolated. Rather than being a cure for the global epidemic of loneliness, it's really just preying on it and exacerbating it.

So that is an area where I think we need to very quickly take a step back and think, what are the consequences of this? And it's not about a moral judgment of the individuals. It's nothing to do with that. But it is about why do we want this? And what could it mean for us as individuals and as communities and for humanity as a whole?

KIMBERLY NEVALA: There's this really odd tension that I believe may be developing, related somewhat to the move for quote unquote, "machine rights". Which I think is, yes, academic in one sense, but also almost an implicit assumption by some of the companies pushing these systems: that because it is human-like, we're not responsible for this product we have put out in the world, because it is somehow a separate, distinct, self-propelling entity. Which we know is not true. And perhaps that wasn't necessarily the intent up front.

But there's this push to see and view these things as human-like, to raise them up as providing a level of human-like care or comfort, interaction, engagement. There's an addictive component to that, and it helps sell a product. But it's also becoming a front for abdicating liability for what happens as a result of that engagement.

SUSIE ALEGRE: I think that's right. There's an awful lot of moral and philosophical gymnastics that goes on around the justification of whatever direction innovation seems to be taking at any given time.
And I think you're right about this question of robot rights which, as you say - and I agree - is in many ways an academic point. I'm not concerned about robot rights in the sense of being worried about the robots. But I think it is a way of saying we don't have control over these things. By giving them rights, you are almost giving them autonomy and saying it's not our problem anymore. It's playing God and letting these creations go off and do all the terrible things that creations do, while abdicating responsibility.

So I think you're right. There's a very sort of complicated stream of discussions going on that are often turning around questions of responsibility.

KIMBERLY NEVALA: And there was - particularly as a woman, I suppose - the example you brought up in the book of Sophia, the gynoid robot that was granted personhood in Saudi Arabia. I think it was in 2017 or 2018, and clearly quite a PR stunt. But as I thought about it a little more, you make some really targeted points about the fact that we are starting to use these systems, or replicants, or digital humans - my new absolute least favorite term ever, and I have a few, so that's saying something. It was telling that Sophia is a woman. It's not an android, as you said. It's a gynoid.

SUSIE ALEGRE: It's a simulacrum of a woman.

KIMBERLY NEVALA: Yeah, exactly. But in a state where women have restricted rights. Then I really thought about it and thought, this is interesting, because it looks like you've done something innovative. But this is still something that can be fully controlled, something that exists at its maker's behest. And so in a lot of ways, what looked to be a very interesting, innovative move is actually very regressive.

SUSIE ALEGRE: Absolutely. Yeah. And apparently her face was based on Nefertiti, I think, and her creator's wife. Then you look back to the Stepford Wives. While I was writing the book, I watched the Nicole Kidman film version of The Stepford Wives, and the resonance with this real-life phenomenon, particularly with Sophia, was striking. Down to the fact that in The Stepford Wives as well, the idea originally came from a man called Diz, a former Disney imagineer.

You can see this read-across from the Disney princess, if you like, into a future of technology that is all about, like you say, a regressive, idealized, and highly controllable idea of womanhood. And you see it as well in the first AI CEOs - again, gynoid CEOs. It's like, well, isn't it great? You don't actually have to have women in the boardroom. You can just have an AI woman as CEO, and that solves all your diversity problems without having to deal with the mess that is real women and real people.

And so, yes, I think it really does bring to the fore some of the underlying societal issues that we have and the challenges that remain for equality and dignity and respect for women. We can see it almost becoming a replacement, a stand-in, for the flawed people that we are.

KIMBERLY NEVALA: And again, I think that zooming out a bit with this human rights lens is very helpful; it gave me some sleepless nights, and I thank you for those - I think they were probably productive. Even these ideas of the right to self-determination and to liberty, things that sound like big, grandiose thoughts, I think are so important and grounding to how we see ourselves as humans rather than as mechanized beings.

But there seems to also be this narrative that says: don't worry about this, because I'm going to solve these problems for you, and these are all things people don't want to do anyway. People don't want to do these jobs. People don't want to have to take care of other people. We don't want to have to deal with the friction and the stress that comes with relating to other people in difficult situations. We all want to spend more time just doing what we want to do - being creative, studying ourselves, whatever that might be. And that narrative in and of itself, I think, is dangerous because it's promoting a view that takes that choice away from me, I suppose.

SUSIE ALEGRE: I think so. And I think it's dehumanizing, because it's about saying, yeah, we don't want to deal with each other. We want nice clean lines. We don't want to think about the mess. And that is dehumanizing in many of those areas - like I say, creativity was one where I felt it really profoundly.

But also, as you say, there's the idea of care robots. You can see why people might think that's a good idea, and I can see how there may well be use cases where it is a good idea. But with many of the use cases being developed - particularly in countries like Japan, where a lot of money has been put into developing care robots - the research seems to show that even where these robots are developed, they end up effectively as landfill, gathering dust in a cupboard. Because rather than freeing up care workers to do more interesting work, they effectively make care work more like drudgery, taking away the human, rewarding part of care work. Which can also be incredibly difficult - it's not at all to say that it's all apple pie. It's incredibly difficult work.

But we need to engage with what that work is and why it matters; respecting the workers, understanding what it is about care work that people get pleasure from, and what it is that people being cared for get pleasure from in interaction with humans. And then trying to plug the gaps around that, if you like - looking for the gaps, the things that people don't want to do.

I talked as well about reinventing the dog. You can see it with dog therapy - and you might have heard my dog earlier; dogs have downsides. But you have this idea that you can have a robot pet because that'll be better than a dog. And I remember talking to an AI developer who was explaining the development of some kind of tool like that. And I said, well, isn't that just a dog? And he said, yeah, but dogs poo. And I said, well, why don't you develop AI to pick up dog poo then, instead of replacing all the joyful things about the dog?

So I think what's missing in our imagination about our future with technology is really getting to the heart of what would be useful and what would actually give us more joy. And certainly, for anyone who writes or is a creative, the idea that an AI can write your novel in two minutes based on some careful prompts totally and utterly misunderstands the creative journey, the creative inspiration, and the kind of life force that is part of creativity.

So it is that kind of question of how do you find what's good? And you see it as well, even in the legal sphere, in this idea that you might be able to write your submissions with ChatGPT and it'll make you much more productive. But with the caveat that you have to check everything, because it might be completely made up, what that leaves you with is this: instead of actually doing the interesting intellectual work and research of creating an innovative argument to achieve a goal for your client, you're basically left as a second-rate editor and fact-checker for a machine. You're doing the drudge work, not the interesting, creative human work.

KIMBERLY NEVALA: This is also an area where we tend to oversimplify or just completely misunderstand what the job is. You have said - and you are a lawyer - that so much about the law is not about what's on paper. Just like health care is not just about your stats. We are not well-defined, well-constrained systems, if you will. Because if we were, even in the health care sphere, we wouldn't have these reference ranges where anything between 50 and 700 is normal. And this, if nothing else, tells you that there's a lot of individuality.

So I'm also concerned that in some of these areas it becomes a Band-Aid for other issues. We're trying to solve perhaps the right problem, but we're doing it in the wrong way. And by virtue of doing this, we settle for the idea that what we can accomplish with these systems is good enough - while access to real expertise becomes increasingly unreachable. It's already fairly unreachable for a lot of folks, and prohibitively expensive, with really significant impacts on folks' livelihoods, well-being, and so on.

SUSIE ALEGRE: Definitely. And I think an awful lot of what we're being sold today is this kind of veneer.

It's a kind of, well, this looks really great, it's got all the bells and whistles, so it'll catch up. The glossy sheen means we're asked to accept that even if it's not really working or doing what it says it does underneath, it'll catch up.

I think that's a very dangerous path because it's like, well, probably the last thing we need as humans is a glossy veneer on areas that are really vital to our well-being. We need substance. We don't need more froth.

KIMBERLY NEVALA: Yeah, for sure. And so I'd like to circle back as we start to close up.

I will say that lawyers do not always get credit for being the creative creatures they are. But you are also an author who participates actively in the creative sphere. Going back to that comment about looking at the spectrum of human rights and opening the aperture: one of the things that also came through your book, which I am abashed to admit I hadn't really thought about, was what I'll call the assault - I'm using the word assault, Susie did not - the assault on creativity that is being pushed. Or at least the assault on what we think the intent of creativity is.

And you also called out the destruction of an intangible cultural heritage. The right to a cultural heritage, and to the protection of it, is also a universal human right - albeit maybe one that we don't manage well already. But that doesn't mean it's not valuable and something that we should strive for.

So can you talk a little bit about how you're seeing this develop and why we need to not just look for tangible impacts and harms? Although some of these, even in the creative space, are very tangible for folks who are trying to make a living as writers and artists. But why is an appreciation for the intangible just as important, if not sometimes more important?

SUSIE ALEGRE: Yeah, it was a really interesting development for me, actually, researching the chapter about creativity. I started out with the Hollywood writers' and actors' strikes, and I ended up at the International Criminal Court. When I started that chapter, that was not where I was expecting to land up at all.

And that's part of the creative process, actually: you start writing something and that's how you start thinking. If you're not writing it, you're not thinking it. You get to places that you didn't expect to be when you started. So I went from that question of the employment rights, if you like, or the economic rights of creatives, and then moved through to, well, what does that mean for the people receiving whatever creatives make? And then to that idea of cultural heritage: that we all benefit from millennia of cultural heritage all around the world, and that destruction of both tangible and intangible cultural heritage is an international crime that can be prosecuted in the International Criminal Court.

And so the prosecutor of the International Criminal Court has written a report about this: the value of cultural heritage and how destruction of cultural heritage can be a warning sign, if you like, for things like genocide and wider destruction of humanity because it sort of erodes our idea of ourselves as humans. It erodes our humanity.

And I was looking at this question of intangible cultural heritage, which can be many things. It can be languages. It can be folktales. It can be cultural practices. But it's about who we are, through how we absorb all these things that are around us, all the culture that we live with. And one of the things about relying on generative AI as an alternative to creatives is that it's kind of the end of the evolution of human creativity, because it's just recycling stuff that's gone before. It deadens our ability to move forward.

And so by the end of that chapter, I was looking at it and thinking: are we looking at the end of human cultural heritage? What does it mean if we can't move forward, if we can't have that creative spark? As I say, in many ways my creative process through that chapter led me to that conclusion, which was not where I'd started.
So I do think there's a much wider question, which is not about whether or not writers and artists get paid properly, and not even about the quality. It's about what it means for our evolution as humans, and about what human culture is and why it matters for us.

KIMBERLY NEVALA: So with all of the work that you have been doing and going on these creative and intellectual journeys that have led into new and unexpected places, tell us a little bit about your current posture relative to the very wide world of AI. And what is it that you would like to see us do or really lean into as we move forward to protect our human dignities and rights?

SUSIE ALEGRE: I think one of my biggest concerns is the unthinking creation of dependencies.

Whether that is pushing the idea that we should all be using generative AI because it will make us much more productive, without thinking about what that might do to our ability to think for ourselves or, as I say, to think creatively and think about the future. Or whether it is about creating emotional dependencies on technology as a kind of replacement that fractures human relationships. Or whether it's just about pushing AI into every single corner of our lives in ways that are not environmentally sustainable. Since I finished writing the book, we're seeing big tech companies investing in small nuclear power generating facilities because they recognize that otherwise they're not going to be able to keep the AI going at the pace that it is.

And so what I really hope - and it was a big push for me to write and publish a book like this - is to create a space where we can have a pause for thought, to really hold up a bit of a mirror and ask: is this the direction we want to go in? Do we need AI in everything we do? What are we losing? And that's not to say we don't need AI at all.

I think one of the really big challenges in these discussions is you get pushed into a position where you've either got to be for or against AI. And as you said at the start, AI doesn't really mean anything. What AI? And for what? And in what circumstances? Where? There are so many questions about whether you're for or against a specific use case or a specific type of AI or related technologies.

So what I hope is that this will allow a pause for thought about where AI really adds value and how AI can be sustainable - not just profitable, but actually sustainable and actually useful for us and for future generations. And I think that does require a really hard pause in certain areas, to resist the siren call of innovation and to stop and think about what innovation we actually want.

We don't necessarily want innovation in all directions. Where do we want innovation? And why do we want innovation? For that, sometimes you have to put down barriers on innovation in different directions. So my hope is that by turning this lens to think about people and to think about human rights and to think about humanity, that we might start asking those big questions and prioritizing people before technology.

KIMBERLY NEVALA: And I think leaving that question ringing in the audience's ears is the perfect place to end. So thank you so much. I've really appreciated the insights and your willingness to traverse all of these different paths with me and with us as a whole. So thanks again for your time.

SUSIE ALEGRE: Thanks for having me.

KIMBERLY NEVALA: And I would just really encourage folks to engage with your work, including the book. Which was not available to me on Kindle when I went to buy it - though I have to say, I've had a bit of unholy joy just having a nice book in my hands again -

SUSIE ALEGRE: Well, I do hope it's now available on Kindle, because I have followed that up. It should be, I understand, as of today.

KIMBERLY NEVALA: Awesome. Very, very good. Well, thank you again. And thank you for all the work. We will continue to follow it with avid interest and support.

SUSIE ALEGRE: My pleasure. Thank you for having me.

KIMBERLY NEVALA: All right, and thanks to all of you for joining us yet again. If you want to continue learning and being challenged by thinkers, advocates, and doers such as Susie, subscribe to Pondering AI now. We're available on all your favorite podcatchers as well as on YouTube.

Creators and Guests

Kimberly Nevala
Host; Strategic advisor at SAS

Susie Alegre
Guest; International Human Rights Lawyer and Author