Ethical Control and Trust with Marianna B. Ganapini

KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala. I'm so pleased you're joining us as we continue to ponder the realities of AI, for better and for worse.

Today, we are joined by the inimitable Marianna Ganapini. Marianna is a professor of philosophy at Union College and the cofounder and CEO of Logica.Now, a consultancy that seeks to educate and engage organizations in ethical inquiry. She's a faculty director and instructional designer at the Montreal AI Ethics Institute and is also a fellow and associate researcher and collaborator at a number of other public and academic institutions. Marianna joins us today to discuss how organizations and individuals developing or using AI can engage in practical, ethical inquiry. So thank you for joining us, Marianna.

MARIANNA GANAPINI: Thank you. Thank you for having me. This is great.

KIMBERLY NEVALA: Now, your research and work cover quite a lot of ground, but I suspect they have some common underpinnings. What was it that drew you to the study of the philosophy of mind, and how did that then lead to your current work in applied AI ethics?

MARIANNA GANAPINI: Sure. I started really loving philosophy, to be honest, the whole thing. I love the history of philosophy. I love the ethics. I love the philosophy of mind. The whole spectrum really interested me. So when I came to the US, and I was able to start a PhD program at Johns Hopkins, as I was progressing in the program, I figured that the part that I really loved was the intersection between philosophy and psychology, philosophy and cognitive science. Really figuring out how we think, why we think so weirdly sometimes, and all these big questions that I thought were very, very exciting.
So that's how it went: for your PhD dissertation you have to narrow it down, so I did.

And that got me really thinking about AI, which is linked to philosophy of mind because it's a reflection on what it means to think. And what does it mean to have consciousness? What does it mean to have understanding? All these are now exactly the questions that we ask about AI. Does AI think? Does AI understand? Does AI have consciousness? And so on and so forth. So this is how I got from philosophy, broadly, to this more specialized topic.

KIMBERLY NEVALA: Well, we'll try not to turn this into a counseling session for all of us in AI today. Or maybe we will. I don't know. [LAUGHS] We'll see.

One aspect of your work looks at how AI-driven systems influence human perceptions and our decision-making. It's an increasingly urgent topic, particularly as language becomes a primary interface mechanism. And this has obviously been driven by the huge influx of commonly available LLM systems.

Now, certainly, behavioral design is not new. It predates the digital revolution although analytics has certainly allowed us to up the ante in that regard. But in some of your work, such as the paper on the Audit Framework for Nudging in Children, you argue that AI systems are unique. So what is it about AI-driven nudging that requires increased scrutiny or attention?

MARIANNA GANAPINI: Right. So, as you said, nudging and behavioral techniques have been around for a while - behavioral economics, we've talked about it. They have been implemented in various fields. They're based on the idea that you can somehow influence people's behavior by exploiting some of the biases that we all have.

So we are dual thinkers, in a sense. We have two minds. We have a more rational mind, but we usually run on a more intuitive, biased, fast thinking (mind). That is very useful in many situations, but it can also be exploited. Nudging uses that knowledge of the human mind to drive people to make certain decisions. It doesn't force them, but it drives them toward a certain direction. So we know about the techniques that have been studied in psychology and behavioral economics.

When it comes to AI, the problem that we see is that -- well, there are really two issues. One is that normal nudging, a normal nudging technique, is usually one design. It's structured. It's a way of structuring the decision environment for an agent, or for agents in general. It's one setting, everybody is subject to that setting, and it doesn't really change based on the person. So structuring things in that way produces certain results, but that result is an average. It's for everybody.

AI nudging is a little bit different. The idea behind AI nudging - which is also called AI manipulation, or hyper-nudging - is a technique with a few important features. One is that it might not have been explicitly designed by anyone. So that's the first thing. Whereas in normal nudging there is a designer who structures the environment in a certain way, when it comes to AI nudging the AI might really shape the decision environment in various different ways. And it can do that based on the input coming from the user. So, in a hyper-personalized digital environment, the information coming from a specific user - you, for instance - may be used to really shape the nudging for you. Whatever works for you is the designed environment that you will see. It might be different for me. It might be different for another person. And so it's probably much more effective than regular nudging. It's also hard to detect because there's no decision-making necessarily being made by a human, so it can vary.

And it can be done, and that's the third point which is important, at scale. So it can really have a big, big reach to various types of people. You can see how you start losing control of what's going on and it becomes more effective.

This is done to influence people's choices without them even knowing it, and it's hard to show that they were influenced - hard to prove. Which is one of the difficulties, because the EU AI Act, for instance, does not allow manipulative techniques. But you've got to show that manipulative techniques were being used, which is not easy. All of that really worries us. That's what we try to address in that paper, in particular with respect to the use of nudging on children, which is even more problematic.

KIMBERLY NEVALA: Why is it even more problematic when we're talking about children or young adults?

MARIANNA GANAPINI: One of the features of nudging is that, by unconsciously prompting you to make certain decisions, it is a little bit autonomy-undermining. It undermines, a little bit, your decisions, your choices, your autonomy and freedom in making decisions. It's not forcing you, so it's not completely autonomy-undermining. But there have been arguments that nudging is a paternalistic choice in certain settings and may infringe on people's freedom and autonomy.

When it comes to children and teenagers, where their decision-making abilities maybe are less sophisticated or less evolved or less refined, you might think that they might be impacted even more. So their ability to resist, for instance, the temptation to stay online for long hours instead of sleeping. That might be something that nudging induces them to do. It's not good for them and they might have a hard time resisting that temptation. So it might really harm them in various ways that people are discussing. So that's why we think that it's particularly problematic.

KIMBERLY NEVALA: It's interesting, then, because this drive we've had for hyper personalization - the opportunity to use AI and data to adaptively respond and to customize everybody's views - actually has a potentially really negative downside. Not only does it narrow our gaze to only our perspective and views. But, what I hear you saying, is it alters or limits our choices or even our understanding of what choices may be out there, just by virtue of the technique itself.

MARIANNA GANAPINI: Exactly, exactly. So there's an argument for saying that personalization is a good thing, maybe in medicine and healthcare. But in other settings, it does raise some challenges, for sure.

KIMBERLY NEVALA: You noted that this can sometimes happen almost unintentionally. Because you're designing a system to react in a way that feels good, responds to what I am doing in a way that you think I will want it to respond, and will allow me to continue to engage. And it's pursuing some commercial logic whether that's engagement or keeping me in a game or getting me to buy different things.

But there's that limited or captured attention span, that limiting of choice. In a game, there are issues with that. But in other realms - education, healthcare, social services - it has much more impactful, very serious ramifications. What is it that organizations designing and deploying these systems need to do, then, to try to scare up or identify something that is, in fact, unintended or, in a lot of cases, unseen?

MARIANNA GANAPINI: Good. Yeah, so solutions are available. It's not that this is an unsolvable problem. It's very much a solvable problem. There are various types of solutions.

There are technical solutions, in the sense that you may want to use nudging for good. It might go against a certain commercial logic, but in certain cases nudging techniques could actually help - for instance, helping children make better choices: to spend less time on screens, to take walks, to engage with their friends in person, as they say. Or, as my daughter says, in real life. "Do you want to play in real life?" Yeah, exactly. So that kind of tool can be turned to do good. That's one option.

There are other options, though. There are ways to empower, for instance, parents - letting them know that nudging techniques are being used on their kids and to what extent, and that it's probably too much because their kids have been on a screen for hours, seeing content that they shouldn't be seeing.

There are ways to mitigate the risk, to put in place those factors that lower the unpredictability of the tool, because that's what you want, right? You want to lower not just what's going on but the amount of entropy that is in the system - lower it and put guardrails in place to monitor what's going on.

Then there are ways to educate people about this. There could be a sign or a pop-up that says, in this particular moment, you are being nudged by the system in this particular way. That creates awareness.

So there are a ton of things that can be done, and that should be done, to be honest, at this point, to empower people, users, and consumers to make more reasonable and autonomous choices. Although we know that these tools will be used because they are very powerful, organizations, companies, institutions, and legislators should really come together and make decisions about this issue.

As I said, the European Union has. We need to see, in terms of the regulations, how the EU AI Act is going to be applied in order to know whether some of the aspects of the law in relation to manipulation are going to be effective or not. We really need to look at the implementation phase in order to know whether this thing has teeth. Or whether it's just, at this time, not yet a very useful tool to mitigate the risks of this technology.

KIMBERLY NEVALA: I think it's probably fairly easy for folks to say, clearly, we should be taking these steps or putting out the warning or adding some friction to the process when we're dealing with children, or young adults, or folks who may have less understanding or exposure.

But would you also agree that we should be doing this with adults as well? I have this tendency to think that those of us who work in the field and are aware overestimate how much the average citizen or consumer really understands - even today, even before the hyper effective techniques and adaptive techniques we have now - how much of this is happening, just day to day.

MARIANNA GANAPINI: So this, I think, is a very interesting question.

What should the requirements be when it comes to adults, who are able to make decisions for themselves autonomously? There's a very interesting discussion on whether these techniques are a form of manipulation. And, if they are, is manipulation always bad? So the discussion is quite nuanced and is getting a little more complicated than it was before, because it's unclear.
One thing that I would say is that you can't judge a technology without setting it in a context. So if this technique is used to make me eat more healthy food, for instance, or go to the gym more often, I'm not sure there's anything necessarily bad about it. Whereas, of course, if it does the opposite, if it makes me eat unhealthy food or not exercise, that might be a problem.

But we can't regulate all kinds of influences that are out there and that might prompt us to make certain decisions. Because the sugar in sugary drinks is a form of nudging, in a sense - it's pushing us to want to drink more of it. So I guess I'm going to be agnostic, at this point, on that question. I think it merits a serious discussion about this technology and how far we can allow it to go.

KIMBERLY NEVALA: Very interesting. As you were talking, I was thinking nicotine in a cigarette. That was a nudging technique.

MARIANNA GANAPINI: Yeah, in a sense.

KIMBERLY NEVALA: Right. It's physically addictive, but it's… Anyway, we could spend a lot of time on that.

Another interesting thing I found in that same paper was the concept of moral inertia. You put this audit framework together and it's very explicit that the objective, or the outcome, is to allow organizations to maintain what you called moral inertia. Now, I don't move in particularly philosophical circles, so this may be a very common term, but it wasn't familiar to me. What is moral inertia? And why was that the right objective, or how did you come to pick that as the end goal of an audit?

MARIANNA GANAPINI: So the idea of the framework - the ethical background of that audit framework - is grounded in looking at a system and at where there are elements of risk.

And risk, here, is not what you usually talk about - namely, the probability of harm. It's more the idea of entropy: how much entropy there is in a system, how much control there is in a system. The idea is that where there is no control, there is a higher probability that harm may occur - or harm that you don't know about. That's the idea of the framework: how can you obtain knowledge of potential elements of harm by looking at elements of entropy, elements of risk, elements of lack of control? And what can you do about that?

Once you figure out that, OK, using AI nudging and hyper-nudging introduces more entropy than regular nudging, for instance - because you don't know what's going to happen, you haven't tested it, you don't know what the AI is going to design as a nudge - then your goal, unless you want to completely discard the idea of nudging, is to find ways to lower the risk, lower the effects that this nudging has, and put guardrails in place. And that is also the idea of inertia.

Inertia is the opposite of movement. Movement is seen as problematic - not morally speaking, but in terms of entropy. So if you want to get to a point in which your system is not producing moral harm - things that are harmful for people or that wrong people in some way - what you want to create is a system that tends to go toward a state of inertia: a state of less movement and more control. And you do that by inserting guardrails. That's the only way: risk mitigation processes.

These are the things that we talked about. You can have a sign, a pop-up that says: you're being nudged right now, or you've spent five hours on a screen, it's time to move. I wish there were something like that already. It is a small thing. But it cools down the system and gets you toward inertia. Is the system doing something good? Not necessarily - you're still spending five hours. But it's limiting the amount of harm that it's doing. So that's the idea.

It's not necessarily a philosophical term or a term that you find a lot in philosophy. Philosophers are more interested in the idea of what is harm and what is wrong and how you define that. And maybe less about the question of how you prevent that. What are the measures that you should put in place to mitigate that? But in applied ethics that's what you're doing.

KIMBERLY NEVALA: As I said, I came across this in the paper on nudging for children; on the audit framework for nudging for children. But it did strike me that this, conceptually, could and should generalize to any application. Any organization or individual thinking about designing or deploying these (systems) could use this concept.

It's a bit of a different spin and maybe takes us out of the realm of "thou shalt not do harm," which sometimes feels a little either soft or fuzzy. Or, just non-manageable, very qualitative, and not something that we can address. People have feelings about ethics, and so on and so forth, or even thinking about risk or harm. This was a different terminology with a slightly different take on the issue, which I thought was really interesting.

MARIANNA GANAPINI: Yeah, that's what we were trying to do with that kind of framework. To get this out of ethics, in a sense, and more in terms of what are the responsible elements in a system that bring you to a lack of control. That can be true for any kind of system. It doesn't have to be related to AI in particular.

But you know that if you introduce features and you haven't tested them, you won't necessarily fail. Maybe there are going to be awesome results, and that's great. But you just don't know, and that's your responsibility. The lack of knowledge is where the moral responsibility is. So that's another way to say the same thing.

KIMBERLY NEVALA: At Logica.Now, you provide a range of services: learning and development, consultative services, audits, and assessments. In that work, and in your other consultative work and academic endeavors, you are talking to a very broad swath of organizations and leaders. How would you assess the current overall level of awareness around ethical AI, or the need for ethics in AI, today? Or, stating that differently, what's the tone at the top when you're speaking to organizations about ethics? Are they acknowledging it, embracing it, yes-butting it?

MARIANNA GANAPINI: That's a good question. It is difficult to answer, in a sense, because my sample may not be representative. Those who want to talk to me already have worries about ethics. So the impression is that people, organizations are concerned about ethical issues. Maybe related to the regulations that are emerging in the world, in Europe in particular, but also in the US. So there's that. That definitely helped. The EU AI Act really sparked the discussion and brought awareness to many organizations that then reach out and ask for help.

So my impression is that, since 2019, there's been an increase. And now, with the EU AI Act and ChatGPT, in this last year we have seen a huge bump in awareness. There is awareness and there are motivations. The motivations can be different. It can be compliance. It can be about trust. It can be about reputation. It can be about just ethics. Honestly, I think they're all valid. I don't think one is better than another.

We surely need to avoid ethics washing and all that - I think that's an issue. But honestly, seeing what we have seen, any kind of awareness is good at this point. Because even as I'm seeing more awareness, we're also seeing companies just keep pumping out this technology and new models and all that without really thinking it through from an ethical standpoint. Which is sad, because we've seen that things can go really wrong. And so you wonder. It's like, really?

KIMBERLY NEVALA: [LAUGHS]

MARIANNA GANAPINI: So it's a mixed bag. I think it's better, but we still have a long way to go.

KIMBERLY NEVALA: You touch on a really important point. Which is being aware that this might, maybe, could or should be something you're considering is very different than having the organizational will and ability to actually engage and move forward on that basis.

MARIANNA GANAPINI: Correct, correct. So I'm working on a project exactly on that issue. What is the ROI, the return on investment, when it comes to investing in AI ethics? What are organizations getting out of it when they invest, besides doing the right thing? What are they hoping to get? It's very difficult to quantify the ROI because, sure, you're avoiding lawsuits and you're avoiding fines under the regulations - you can, perhaps, put a number on that. But the rest is very, very qualitative and very, very vague. That is a problem, because then organizations sometimes don't feel like they need to invest in this as much as they actually should.

KIMBERLY NEVALA: Maybe regulation is the first stepping stone in this process because it allows us to look at some concrete elements and then start to think more broadly about harm and wrong. When and where we want to be implicated in that happening in the world.

MARIANNA GANAPINI: Yeah, regulations, public pressure, public awareness. Reputational risk, I think, is sometimes not considered enough, and it's actually huge. So these things may start putting pressure on companies, yeah. At least, I hope.

KIMBERLY NEVALA: We hope. Now, really quickly, I wanted to jump back to Logica.Now. Acknowledging, as you said, the admittedly smaller sample size of the organizations that are coming to you: are they falling into particular categories? Are there common characteristics? Do these tend to be small and medium businesses? Are they large businesses? Are they coming from particular industries or application areas? Or is it really a bit of a melting pot in terms of who's knocking on your door these days?

MARIANNA GANAPINI: It's actually very widespread. There is no one particular sector, though the banking and telecom sectors definitely stand out. We are working in Europe as well, so we're seeing clients reaching out to us there, even more than in the US.

What we are seeing, honestly, is large organizations feeling this pressure more. So we have been talking to large organizations in the US and in Europe. Sometimes what we see is that some of them may already have -- as you said, we offer both L&D and advisory services. When it comes to L&D, we have seen some organizations that have the basics covered. They have their AI ethics courses and training and all that. But then what we're offering are more specialized courses that are tailored to specific personas: the engineers, the HR people, the business people. They need to know about AI ethics for different reasons, and they need to be careful about different things. This is where we are intervening, in a more tailored way. That is an interest that we are seeing a lot.

KIMBERLY NEVALA: OK, excellent. That's great to know, actually. I find it positive, or comforting, that the folks coming to you are from across the spectrum. As this moves forward and we see where liability starts to land - if liability starts to land - for organizations, whether with the developer and supplier of a system or with the deployer of that system, which is where we seem to be leaning a little bit more, small and medium businesses will have a different and interesting challenge going forward. I have a lot of empathy for them for that.

MARIANNA GANAPINI: Yeah, I'm very worried about them, because they might not have the bandwidth to put in place all the mitigation measures, which require quite a bit of resources. But they also need to be preparing for what's coming. So I'm a little worried that small and medium businesses might actually fall through the cracks and feel the pain of regulators coming after them. We'll see. It didn't happen with the GDPR - regulators are going after the big guys more than the small ones - so that might be a comfort for them. But the AI Act will feel different.

KIMBERLY NEVALA: Which is not to say, if you're a small or medium business, that this is encouraging you to move forward with your bad selves, regardless, because you have limited risk.

[LAUGHING]

MARIANNA GANAPINI: No, no, no, no. I'd say that happened with the GDPR, but still, the EU AI Act is very, very comprehensive legislation, so yeah…

KIMBERLY NEVALA: You have a tool called the AI Risk Compass available on your website. By my read, it helps organizations jump-start AI risk assessment, so I imagine it will be helpful for organizations small and large - or at least that was my hope in looking at it. It seems like a very accessible tool and approach. Now, one of the things I noticed is that you specify it is grounded in information theory. Why was it important to ground it in that approach?

MARIANNA GANAPINI: So this goes back a little bit to what we were saying about risk before. My collaborator, Enrico Panai, is the person I wrote the paper on nudging with. He draws a lot from the work of the philosopher Luciano Floridi, who has built a somewhat comprehensive system around information theory. That's where the idea of risk as entropy came from, along with the idea of looking at the macrostructures within a situation that may bring up your risk level.

So honestly, really, the initiator of that framework is Enrico, and together we developed this tool. The idea of the tool is the idea of a thermometer. You're not feeling well - you feel something is wrong - so you use a thermometer to assess your body temperature. If your body temperature is too high, you have a fever and you call the doctor. If you don't have a fever, you monitor it. That's the idea of the tool.

It gives you a bird's-eye view of your risk level. Again, I'm not talking about harm. We're talking about how controlled your system is: how much control and knowledge do you have over the parts of the system? It turns out that if you have very little knowledge of certain aspects, then you might need to start worrying. It's definitely not enough to tell you what you should do or what your risks, in terms of harms, are. Absolutely not. It's just like a pair of glasses that lets you start seeing a little better. But then you might need to call a specialist to help you figure things out.

We have used that tool with a number of clients, and I find it very helpful as an initial assessment. It gives you some understanding of the macrostructures and of the things that you should be careful about, even before you do any sort of impact analysis - an ethical impact analysis, any sort of risk analysis in terms of harm. Even before that, it tells you.

Even when you go in and you do that impact analysis, you might miss some features if you don't look at this broader perspective. There are many tools out there, so this is definitely not the only one. I feel like it is something that anybody can use. You don't need to know about ethics, don't need to know about the regulations. You just have to have an understanding of the system you're using. So I think designers and engineers can use that.

KIMBERLY NEVALA: Would it be oversimplifying to say that the intent here is to allow you to identify which risks are - I was going to say the right risks, but saying "the right risks" doesn't sound quite right - what risks you need to be paying attention to? As you said earlier, for organizations, there may be some risks that we're comfortable with, they're well within our risk tolerance. And there may be other risks that we are not comfortable with. Either because they exceed our tolerance or they exceed some red line of how we want to operate in the world.
It sounds like this, then, allows you to develop a bit of a landscape. A high-level view, as you said, of the landscape of risk relative to an organization. So then you can move forward and think about further assessment, remediation, et cetera, et cetera.

MARIANNA GANAPINI: Exactly. Because it's a bird's-eye view, it allows you to see the whole picture. Whereas, for instance, let's assume that you use a technique called red teaming to figure out if there's something wrong with the large language model you're using, or the GPT or whatever. That is one way to look at the problems, and it is very effective in figuring out how things can go south and what the problems could be. I think that's absolutely needed.

But it doesn't cover other aspects. For instance, the control that you have over the people receiving the information. Even if the information you're sending is supposedly neutral, you might have people who are unable to interpret it or who might misinterpret it. So if you have not studied what are called your stakeholders very well, that is a risk. That is something that you should know: you're incurring a risk if you don't look at the stakeholders and the users - if, for instance, you don't do a semantic analysis of them. You might not want to do it. Maybe there's no problem. But it's a choice. That's the thought. That's the idea.

KIMBERLY NEVALA: Make sure you're making those choices mindfully and not just by virtue of not having tried to look at them at all. Or averting your gaze, I suppose.

MARIANNA GANAPINI: Exactly.

KIMBERLY NEVALA: You've mentioned or been careful to specify that you're talking about risk and not harm. Why is that an important differentiation?

MARIANNA GANAPINI: Risk is how many factors are in a system that you cannot control.

The metaphor I like is video games; it's a useful way to think about it. When I was a kid, my grandpa had a video game console and I would play one or more video games. There was a console and you had to put money in it. Then you played, and if you lost you just died, you lost the money, and you were gone. So it's a very self-contained system. Of course, if I'm very rich, I can spend a lot of money and a lot of time on it. But it does constrain the time that you can actually spend on this thing. Although I used to spend a lot of time on it. [LAUGHS]

Now you have a different system, in which kids stay on these platforms and are prone to stay more and more. They engage with other people online while they're on the platform, which also reinforces their desire to stay on the gaming platform. There are so many factors brought in that are very difficult to monitor. It's very difficult to monitor who the kid is interacting with when they're on this gaming platform. There could be an adult on the other side, and the kid doesn't know. So all these things - they're not harmful. There's nothing harmful about an open environment. But it is risky, because it is an environment in which, if there are bad actors on the other side, the kid might incur some harm.

Now, it might also bring good. There's the ability of the child to interact with people different from herself or himself and so learn things. These kinds of games are used in schools to prompt STEM reasoning. Wonderful, right? But you need to assess that, because the difference between what was happening when I was a kid and what's happening now is that the amount of things that can go wrong is just much greater. As is the amount of advantages that you can get.

So this is what our tool talks about. I'm going to give you a number. I'm telling you how much unknown there is in your system, and then you can do whatever you want with that information. It'll tell you if there is an unknown in the environment, if there is an unknown in how the system relates to the user, if there is an unknown within the system itself. I'm telling you: these are your blind spots, and that's it. That's not harm. There's nothing harmful in blind spots, but they may invite harm. That's the difference.

KIMBERLY NEVALA: I like this because, in a lot of conversations I've had or heard, there's a tendency to want to disintermediate risk from harm. What I hear you saying - and you should absolutely correct me if I'm wrong, please - is that, in fact, by looking at risk through this particular lens, by identifying those risks or the areas of heightened risk in the system - either because you don't have insight or, as you said, there's a lack of control - you can then say that this risk invites or may generate harm.

So it has a more tangible connection than just saying "harm" relative to societal good or human well-being and flourishing, which is all important. There's nobody we're going to go up to who's going to say, no, we don't think that's important - even if they disagree. At least not anyone who is a reasonably socially intelligent creature. But because that sometimes is very qualitative, it feels sort of social and political and all of those bits. So this idea that risk helps you think through or identify where harm may occur is a better conceptualization of both.

MARIANNA GANAPINI: Yeah, we hope that it's helpful. And we insist on the idea that it is not something that should require knowing about ethics. Because, otherwise, you start importing what you think ethics is, and then you need a specialist for that. Just as you need a doctor to assess your health, you need an ethicist to determine these things. You could try to do it by yourself, by googling it - and many people do - but you shouldn't.

KIMBERLY NEVALA: That's a whole other conversation, yeah.

MARIANNA GANAPINI: Exactly. I mean, you shouldn't. You shouldn't do it. Similarly, don't improvise. Because if you improvise and assume that people who have no training in ethics can do ethical risk analysis then you are really importing more risk into your system. Because they don't know. They are not aware of the regulations. They are not aware of the impacts.

So we recommend, for those organizations that can do it, establishing an ethics committee that is empowered. And make sure that on the ethics committee you have a diverse pool of people, including people who are not just computer scientists or engineers but also people who know about ethics, about anthropology, about sociology - people who can really give you a 360-degree view of how things can impact society and your users and so on and so forth. That is an absolutely key point that, unfortunately, we're not seeing. A lot of organizations don't have ethicists on their boards.

KIMBERLY NEVALA: Your work is fascinating, and it covers so much ground. I have two other areas I want to ask you about quickly.

One is on the concept of trust. Now, we've seen this evolution over time. Many years ago - I guess it's not so many years ago, maybe pre-pandemic - we were talking about FATE: fairness, accountability, transparency, and ethics. Then it was responsible AI, and now "trustworthy AI" seems to be the term du jour. I'm not sure it matters so much what we call it as much as what we are trying to address with it. But this conversation around trust has been really interesting, because there's a lot of discussion of how we get people to trust the systems. Or are we trying to make systems that are trustworthy? We get caught up in these semantics.

You have argued, though, that maybe our current definitions of trust are too restrictive and based on a bit of a conceptual error. Presumably - this is my take on it - that will limit our ability to, in fact, develop or engage with these AI systems in the most meaningful way. So what is it that we are getting wrong in the discourse around trust and AI today?

MARIANNA GANAPINI: So what I think we are getting wrong is that if we conceive of trust as only a relationship between two full-blown moral agents, then, of course, you cannot talk about trusting AI or distrusting AI because AI is not a full-blown moral agent. It's not going to be anytime soon.

I'm not opposed to using the idea of trusting AI or trustworthy AI. I'm totally OK with the term. What I am not OK with is conflating the type of trust that we could have toward AI with the type of trust that we have toward our friends, toward our fellow human beings.

So there are two slightly different things going on. This might not have a huge impact on the AI ethics space beyond academia. Although there are some academics who worry about using the notion of trust, especially as one of the foundations of the EU AI Act, I think those worries are overblown.

I think that those worries should not be taken too seriously because they are working with only a limited notion of trust. They just think that you can trust only humans. That's not true. It's true that, when you talk about trusting your car, your normal car, you're thinking about relying on the car, not really trusting it. But we trust, for instance, animals. I trust my dog. In a sense, I may trust my cat not to do certain things. And I feel disappointment and even some level of resentment, to be honest, if my dog does things that I don't expect and I think that she shouldn't do.

I think that there are levels of agency, OK? Like, my Roomba doesn't have any agency. My friend has full-blown agency. But there's a lot in between. There are animals, as I said. And I think AI, as it's been developed now, merits a little more trust or credibility than my Roomba in terms of the things that I am expecting the AI to do. And by "expecting," I don't mean only what I'm foreseeing the AI doing, but I have a normative expectation. I think that the AI should not be biased. I think the AI should not discriminate. That doesn't mean that if there is an AI that discriminates, then the fault is in the AI. The fault is in whoever put that out there and designed it and so on and so forth.

So there are different things. I can trust my AI. I can trust my dog. I don't think, however, my dog or the AI is morally responsible. The owner is more responsible if the dog bites someone. So, if the AI goes wrong, the designers, maybe the company - I don't know - but someone else is more responsible. So let's not confuse these different levels. That's what philosophers do. They try to make things clear. And so I'm fully behind the framework that has been recently used of "trustworthy." I think it's a good term, and I am hoping that the discussion around trust doesn't undermine the effort of the European Union around that.

KIMBERLY NEVALA: Yeah, we should trust that the system is working exactly as it is designed to do, intentionally or unintentionally. And make decisions and react or respond accordingly. Yeah, very interesting.

I realize we're coming up on time here, but I have to sneak one last question in. You recently - I think it was just recently here in 2024, but it may have been previously - published a paper examining the basis of irrational beliefs. I found this really fascinating. I can't claim that I understood it properly or at the depth it requires, but you essentially argue that every belief, no matter how epistemically irrational, and perhaps we should define "epistemically irrational" for folks, is underpinned by a minimum of rationality.

Aside from this very interesting hypothesis, it got me thinking in the context of AI very specifically. In the context of the extreme hype that is happening right now, do you think the folks who are making outrageous claims today about the capabilities, or purported capabilities, of some of these systems truly believe it? Do they merely want it to be true? Or does the hype have nothing to do with the minimum rationality of irrational beliefs, and it's coming straight from commercially motivated PR?

MARIANNA GANAPINI: Yeah, that's a tough one.

I suspect there is a bit of everything. I think there is a lot of PR. There is also a lot of hope and excitement around this technology - genuine excitement. It is moving fast. It's hard to deny that. But sometimes it feels like the announcements: "in two years, we're going to have a fully autonomous car." And then it doesn't happen. And then, "in two years, we'll have artificial general intelligence," and that doesn't happen.
So it feels like they're playing a sort of PR game as well. That's a little unfortunate, but it's part of the game. It's not surprising.

In terms of irrationality, I think your question is very interesting because it relates to all other sorts of irrationalities that we have, like fake news and conspiracy theories. People seem to believe all sorts of crazy stuff.

Some philosophers of the past have argued that that cannot be right - that people cannot be completely out of their minds, so irrational. There has got to be a limit because, otherwise, we wouldn't understand each other as people, as speakers, as communicators. I think that idea is right, and I think it holds for fake news and the rest as well.

So when people say things, they might be honest - they are trying to say things that are somewhat true - but they might not really believe them. So we need to be careful not to project too much onto people, thinking that they're so clear on what they believe and on what their views really are. People sometimes just talk and say things, and that's it. I think that's the moral of the story.

KIMBERLY NEVALA: I had a previous colleague and boss - he's fantastic - but he says everything in a tone that is very confident. It's not necessarily loud, but it sounds settled. It sounds like he's made up his mind. I had to go back at some point and tell everyone: that's just the way he communicates. In fact, when he's doing that, he's asking a question and you need to respond accordingly, and not assume that he's telling you what to do. Because he's really, in most cases, likely looking for feedback or input. And it strikes me that it's a little bit like that here, too. Which is: let's not just assume they're completely off their rocker or completely trying to behave nefariously. There might be some inkling of something we're just not seeing in there.

MARIANNA GANAPINI: Right, right. That sounds right.

KIMBERLY NEVALA: And now I've proven why I'm not a philosophy major. So, in any case--

MARIANNA GANAPINI: What are you talking about?

KIMBERLY NEVALA: [LAUGHS]

MARIANNA GANAPINI: You should have been one.

KIMBERLY NEVALA: Thank you. You're very kind, clearly. Any final thoughts, words of wisdom for the audience before we let you go?

MARIANNA GANAPINI: Sure. I want to shed some light on one thing that I mentioned before and that I think is important. I'm optimistic in general, as a person. But in particular, I'm optimistic about AI and AI ethics. I think that the field is moving in the right direction and that we are doing the right things.

I would like to see more interdisciplinary work at the level of companies when it comes to AI ethics. It's great that some big companies are all-in in terms of trying to build mitigation processes and committees and all that. But you need to have a broader array of people on those committees, and you really need to decide what it means to be an ethicist and what kind of skills you need, because I think that's a little fuzzy. People are not really sure what they should be looking for, and I think we need more guidance: these are the things that establish that you're a doctor; these are the things that establish that you're an ethicist. We need clarity and transparency on those issues as well. So I think that's my final comment.

KIMBERLY NEVALA: I know. That's a fascinating point. This has been really interesting and helpful, I think, to the audience and certainly to me. Thank you so much for being very generous with your time and insights, and for continuing to do the work of making ethical inquiry, if you will, both palatable and accessible to everybody in the field.

MARIANNA GANAPINI: Thank you.

KIMBERLY NEVALA: To continue learning from thinkers and doers such as Marianna, please subscribe wherever you listen to your podcasts, or follow us on YouTube.

Creators and Guests

Kimberly Nevala
Host; Strategic Advisor at SAS

Marianna B. Ganapini
Guest; Professor and Founder, Logica.Now