The State of Play in AI Ethics with Katrina Ingram

KIMBERLY NEVALA: Welcome to Pondering AI. Today, I'm so pleased to bring you Katrina Ingram. Katrina is the Founder and CEO of the Canadian consultancy Ethically Aligned AI and an adjunct professor at the University of Alberta. We are going to be discussing the current state of play in AI ethics today. Welcome, Katrina.

KATRINA INGRAM: Hey, Kimberly. Great to be here.

KIMBERLY NEVALA: For those who are not already familiar with your work, how did you come to found Ethically Aligned AI?

KATRINA INGRAM: Well, it is a very serendipitous route that I took here, [LAUGHS] I have to say. I spent a lot of my career in a much different field. I actually come out of the world of public broadcasting, community radio specifically. And I spent most of my career there.

But in 2017, I had one of those kind of midlife career epiphanies where you ask yourself, is this what I want to keep doing, or do I want to try some new things? And I opted for the try new things door and found myself going back to university. So I went to the University of Alberta to do a master's degree in communications and technology. That's where I first encountered this topic of AI ethics: literally by wandering into a lecture by accident that was being given by a professor here. And it caught my imagination, so much so that I made it the topic of all of my research while I was at university. Then I graduated in 2020. I decided, I have to keep going with this work. It really compels me. And that's what inspired me to launch my company. Ever since then, I've just been plugging along in this field.

KIMBERLY NEVALA: Well, Edmonton is - for folks who aren't aware of it - an unexpectedly rich place to dive into the AI and AI technology pool, for sure.

Now, there's of late no shortage of PR fodder, we'd have to say [LAUGHS], on the topic of AI broadly, including responsible AI, ethical AI, etc. As evidenced by the recent - brief yet quite intense kerfuffle, I'll call it - regarding the leadership at OpenAI.

I'm sure that everyone is well aware that the CEO, Sam Altman, was deposed, maybe hired away, and then re-anointed. All of which, at least in the public's eye, took place during the course of an interminably long weekend, give or take. What if anything should we be taking away from that or other recent happenings regarding the current state of play of AI development or ethics?

KATRINA INGRAM: Yeah, maybe just a couple of side comments before I dig into that.

The first is that, coming out of the media, I feel like I know spin pretty well, [LAUGHS] having been on that side of it and also having studied communications. So when you talk about ethics washing and where we're at with the state of play, that's something I think a lot about.

The second thing: what really compelled me to want to work in this field wasn't actually just the technology. It was the people. When I started doing my research and investigating who these people are that want this thing to happen, I found a lot of people on different sides, in different camps. It's really interesting. So I, like everyone else in this field, watched with great interest over that whole weekend as the various events unfolded in real time. It just reminded me again that there is so much at play in the world of AI: people, politics, power dynamics, all of it.

And what really struck me the most about that whole situation is this: I always say there are a lot of concentrations of power, and I've always been really aware of that. But I was made hyper aware of it during those events. Literally, a board made up of six people went down to four as they removed two of those people, and then kind of reconfigured itself once again. Now it is still a very small board of, I think, four people. All white men now, notably, though they say they're going to do something about that. But the whole situation, which, again, unfolded over the course of days, really reminded me of the power concentrations at play here.

KIMBERLY NEVALA: And what are the implications of that from your research and your observations of the world that we're currently operating in?

KATRINA INGRAM: Well, there's a lot of research that suggests that who makes the technology shapes the technology. So who gets to make these decisions? Who gets to decide the future?

When we think about that, it's a very small, very elite group of people. People with a lot of power, people with a lot of money, people with a lot of connections. And yet, these technologies impact all of us, every single one of us, every day. We're engaging with these technologies in many invisible ways, ways that we don't even think about just by living in the 21st century. Yet, these technologies have been shaped by a very small group of people to the benefit of that group.

So again, we need to be really cognizant of these kinds of power dynamics. We need to think about what are the ways that we can actually redistribute power in these systems? And it's not going to be easy because we've allowed it to accumulate over time to the state that it's in right now. But those are some of the things that I think about.

KIMBERLY NEVALA: A lot of the conversation about ethical AI or responsible AI does focus on what I would say are some discrete elements. Things that we can somewhat put our hands on, like bias, for instance, or fairness. And while they are not easy concepts to define, they're relatively well bounded in terms of the understanding of the problem.
When you look at the relatively small groups of people that are arguably driving the narratives around AI today, are there some fundamental ethical questions beyond these very discrete pieces that show up in that narrative?

KATRINA INGRAM: Not so much, because even the things that you've mentioned sort of favor that engineering mentality, right? We can do something about bias. What we'll do is get some new training data, and we'll retrain our model, and we'll do these tests and we'll monitor it.

And I'm not saying that's bad. In fact, I advocate for a lot of those things too. But what's missed in that conversation are bigger issues such as, did we even need this technology in the first place?

As I move forward in this work, I try to say to clients that no is actually an option. You don't have to engage necessarily if it doesn't make any sense. And in a lot of cases, it doesn't make sense. There's a lot of FOMO going on where people feel like they need to use AI because everybody else is using it. I'm going to miss out if I don't. And that's not necessarily the case. So there is that dynamic at play as well.

But what also never gets questioned is, who decided that these are the rules of the game that we're going to play by? A lot of that is just so normalized, and it's been going on for so long, that we don't really question it. If we look at, for example, whose companies get funded, we're still seeing only about 2% of VC capital going to women, based on certain studies. It's never been more than, you know, 5% or 10% in any study that I've ever looked at. Yet that's so normalized that we've kind of moved on from it, and now we're just centering on these other issues.

So once again, it's tough, because it's not that I think those other issues are incorrect or unimportant. I absolutely think they matter. I just don't think they tell the full story.

KIMBERLY NEVALA: Is that narrative around even what the definition of AGI is, and the desirability of that path, an example of that kind of ideology or belief system that's just been woven into the way the tech's been developed and promoted?

KATRINA INGRAM: Yeah. I mean, there are so many. [LAUGHS]

First of all, we don't even really know what AGI is. Everybody has their own definition, and they will let us know when it's arrived, apparently. That was another kind of interesting thread of the OpenAI situation: this little board was going to decide when AGI was present in their solution, and then they'd let us know that that was the case.

So I find those things a little bit humorous. But at the same time, because these people have so much power and influence, it's also a little bit scary for everyone else that has to exist in this world.

I think this fantasy, this idea of AGI, has really deep roots in the history of AI itself and in popular culture. If you look at who has influenced the people who are actually building these technologies, in a lot of cases it's science fiction authors. People like Asimov, for example, who've painted a picture of a particular future. And now many of the people who actually do this work in AI feel like it's their job to bring about that future.

Is that a good thing or not? We haven't really stopped, I think, to consider that, or had enough voices around the table to really decide if that's something we want. But I believe it's going to go forward anyway because there's so much money being put behind it.

In terms of the co-opting piece, I think that's interesting too. Because what I've noticed is that we've gone from ethics to responsible AI to AI safety in the course of a few years. And one of my concerns is that I feel we're losing a bit of the ethics thread to the safety thread.

And it's not, again, that I don't think we should have safety. Absolutely. We need to make sure that these systems are safe and reliable and accessible for everyone. That's 100% a good thing. But it's this particular notion of safety that's linked to this rogue AI future that we might have. Then all of the money, all of the funding, and the media attention get put on that, and we forget about these other things: the real and present dangers that we're facing right now.

KIMBERLY NEVALA: We've had a lot of discussions, even on this podcast, about ethical mitigation versus risk mitigation, or harms versus risk. And is that ethics? Is it not ethics? Is it more important to talk about risk? How do you differentiate those terms, and why is it important?

KATRINA INGRAM: Mm-hmm. Yeah, a lot to unpack there.

KIMBERLY NEVALA: Sorry. [LAUGHS]

KATRINA INGRAM: [LAUGHS] it's OK. Triple-barreled question. Let me go back. There's a lot here.

I think one of the big things that I see is this inevitability narrative. This idea that AI, it's inevitable. So what we need to do now is focus on ensuring that the rogue AI doesn't happen, that the bad actors don't get a hold of it. And what I believe all of those things start to point to are sort of militaristically informed solutions.

So it's sort of like the military: well, I'm going to do this so that you don't do that. We're going up and up and up here. And there's no disarmament-type solution, let's say. So I feel like that's one of the problems with that focus. It's almost like the safety narrative and some of these other narratives shaped by big AI go hand in hand in an odd way. So that's part of the problem too.

In thinking about risks and harms, I think that risks are good to consider, absolutely. And certainly, most organizations, most companies, think about risk management and they're very familiar with the language of risk.
But sometimes there are harms that don't necessarily pose a great risk to an organization, directly or maybe even indirectly, in terms of things like reputational management. When I say directly: organizations tend to focus on things like financial risk. Am I going to get sued? What's this going to cost me? And then maybe things like reputational risk. Am I going to lose customers? Am I going to make a bad name for myself? Those kinds of things tend to really drive decision making at companies.

The fact that you might have caused harm to some player who's off to the side of all of that might not factor as heavily into a risk-based decision-making framework. Yet there's still a huge ethical consideration there. So these are the kinds of things that I try to weave into my work and bring to the table for conversation, things that I think are maybe missed in classic risk analysis.

KIMBERLY NEVALA: OK. I'd like to come back to that. But before I do, you made the comment about FOMO. And certainly, everybody has applied the label AI to everything, and there's a lot of funding that comes with doing that. So there are a lot of incentives for folks to go down this path.

I am somewhat heartened by the fact that it does appear that the hype - even around large language models - is being dampened slightly. Folks are starting to look a little bit more at what the real limitations of those systems are. But even so, there tends to be this rush to apply it really anywhere and everywhere. I've started to call this spreading it like peanut butter throughout the organization.

You've talked in the past about this idea of the law of diminishing returns and an analogy with the tech. Can you talk about how that might apply in helping us to maybe reset or rebalance our perspectives here?

KATRINA INGRAM: Yeah, absolutely. There's one thing I remember from Econ 101, and it was the law of diminishing returns. For those maybe not as familiar with it, it's the idea that if you have one ice cream cone, it feels kind of good. It's a good thing. Maybe two's OK. By the time you get to your eighth ice cream, it's not very good anymore; we're all getting pretty sick. So there's this idea that as we get more and more of a good, at some point it becomes a bad.

And yet, technology seems to escape that kind of analysis. But I wonder if perhaps there is a corollary when we think about technology. Because there's always this sort of sense that, well, more technology is always better, it's always a good thing to have more technology. But that may not always be the case. And I wonder if that might be the case with some of these AI systems.

To pick up on that as an example, I was thinking, well, what are these things really, really good at? And in a lot of cases, what people get excited about when they think about generative AI is what I call handling the administrivia of their lives. [LAUGHS] So when I talk to people about this, it's about, oh, that report I really didn't want to write, and it can write it, because they all sound the same anyways.
But if you step back from that, you might go, OK, well, maybe we just don't need to do those things anymore. Maybe we could do something totally different. But it locks us into this moment where we're just applying a technical solution to what might be a different kind of problem.

KIMBERLY NEVALA: Mm-hmm. This aligns with something I think Dr. Erica Thompson said: that a lot of what we're doing with these AI systems - and not just large language models or generative AI, but more traditional AI and analytics as well - is trying to predict what comes next based essentially on what has happened in the past. And she said that the very act of trying to predict that future in some cases locks us into that past. It limits our imagination about different possibilities or a different way of thinking entirely. Does that resonate with you?

KATRINA INGRAM: I would agree with that. The obvious example is something like predictive policing, where you predict that crime is going to happen in a certain area, so you send more people in. So they see more crimes, so you get more of the same prediction. And it goes down this spiral.

And that's the kind of thing that happens in many other domains as well. Where we get narrower and narrower information and we're kind of marching down a pathway. And we don't necessarily see all the time that that's happening. So it becomes kind of a self-fulfilling prophecy and a bit limiting all at the same time.

KIMBERLY NEVALA: Interesting. Now, we are at kind of a weird spot today. There's this strange dichotomy - I, at least, observe it - where on one hand public awareness of AI is at an all-time high, perhaps driven a lot by ChatGPT and that piece. Yet practical literacy and agency relative to the technology seem somewhat limited. Is that your observation as well? Do you think that's fair? How do you see that play out today in your environment or community?

KATRINA INGRAM: Absolutely. I would say that even as recently as a couple of years ago - when I first started this company in 2021 - part of what we did was an environmental scan to see where people were at with AI. What were they thinking about? And there was still a lot of, well, I'm not sure if AI is going to really be all that meaningful for my business. Or, I'm not really thinking about it. It's not really on my agenda. Definitely not in my top 10. Fast forward to this moment that we've been having recently in 2023, and it's a lot more of, oh, AI is here, and I need to figure out how it's impacting me and my business.

Yet at the same time, AI means either a very specific thing to an individual - i.e., AI is ChatGPT, or AI is just generative AI - or it's just this amorphous thing. We're not really sure how we're defining it. And I know it means a lot of different things, but what are those things? And why would I want them, and what does it mean?

So yeah, there's a whole lot of education that needs to happen, and that's where I've been focusing a lot of my time in the last couple of years. Building courses, doing talks, workshops. There's a lot of literacy, as you say, that needs to happen.

KIMBERLY NEVALA: Mm-hmm. And do we have the right broad focus? Is there funding for education, from your perspective, right now? Is this something that we're focusing on with enough vigor today?

KATRINA INGRAM: No. [LAUGHS] The short answer is no.

You know, I was on a panel recently, and I had occasion to go back and take a look at the Pan-Canadian AI Strategy. This is a major piece of funding coming from the Government of Canada.
And when you look at where that funding goes - it gets administered through CIFAR, which is the big research organization - it goes into commercialization, it goes into compute, and it goes into research. Those are the three big pillars for this funding. There is nothing in there about public education or responsible AI or literacy, or any of this.

And you might say, well, OK. They're a research organization. That's maybe not their job. But whose job is it? There needs to be funding for these kinds of things. And whether that flows through not-for-profit organizations or other mechanisms direct from the government, somebody needs to be doing this work. So I think it's a real gap in terms of how we've approached AI strategically as a country.

KIMBERLY NEVALA: One of the things we sometimes hear is that everyone just needs to become more mindful. We need to be more mindful of the technologies that we use, of the apps that we use, to understand that we are the product. And even for those of us in the midst of this in the tech field who are very, very aware, it is really difficult to keep up with all of these things.

I wonder, is it reasonable to expect individuals outside of this pool to be mindful as they try to traverse all of these digital environments and all the interfaces confronting us that transcend that digital-to-"real-life" divide? Is that a reasonable expectation? If not, how do we move forward to help protect and enable these folks?

KATRINA INGRAM: Yeah, I mean, it's reasonable to a point. I think that there needs to be some general awareness. And yes, we need to be mindful and we need to understand the basics of privacy.

But here's my analogy for this. Imagine you're in the grocery store. And imagine if every piece of food that you pick up, you have to assess it and go, huh, I wonder if this is safe to put in my cart. Let me read the ingredients. Let me do a little search and see what the supply chain is on this. It's just ridiculous when you think about it in those terms.

The reason we don't have to do that is because when we go to a grocery store, we know that there's all these other mechanisms that are in place. There's regulations. There's safety inspections. There's all of these different things that are there to protect us as consumers. So yeah, we might want to read the label and do a short analysis, but we're not going to be stuck in the grocery store analyzing every piece of food.

And that's where we're at with technology. Unfortunately, we haven't put those mechanisms in place and that's what we need to do. We're at the early days of that. There are regulations that are coming into place. We're starting to talk about this in terms of product safety and these kinds of considerations. That infrastructure really needs to be there so that people can still be mindful, of course, but also not feel like it's all on them.

KIMBERLY NEVALA: Let's think about the role of regulation. Certainly, we're in a time period now where we are inching forward with things like the EU AI Act. I know Canada has some policies and some national work going on. Here in the US, there was recently the executive order around AI. What is the role of regulation from your perspective? And what is it that we need to be regulating for?

KATRINA INGRAM: Mm-hmm. You can take this from different stakeholder angles, so let's take it from the consumer's perspective. The role of regulation is to set the floor [LAUGHS] so that I know things are safe. I mean, I think that's kind of the basics. It sets things up so that, you know, when I sit in a chair, I know it's not going to topple over or something like that. So that, to me as a consumer, is the role of regulation.

However, I think for companies it also provides some certainty. It provides some guidelines that are very clear. That, again, are the floor, I would say, because I always say ethics is like reaching for the ceiling, but compliance is like the floor. So we have these compliance regulations that say, this is the bare minimum you need to do as a company in order to make your product safe for consumers. Here you go. Here's what that is. And it sets a bit of a level playing field because your competitors also need to abide by that. So it plays that role as well.

I think the challenge that governments have is that they have this other thing - going back to this idea of FOMO - they're also afraid of missing out. They're afraid of missing out on all the economic benefits that new technologies will bring to a nation, to a country. So they're walking the line of needing to regulate in ways that keep people safe and set the rules for organizations, without being so heavy-handed that they're stifling innovation and economic growth.

That's the challenge of any good regulation. And it becomes even more challenging when you start to think about things like AI that are being woven into so many things that it can feel overwhelming. Where do we even start? So I do not envy the role of regulators. I think they have a very tough job to do.

KIMBERLY NEVALA: In the interim, though: first comes the regulation and the policy itself, then all the standards and the methods. And then enforcement comes along, usually quite a bit after that horse leaves the barn, as it is wont to do.

In the shorter term, then, what can we do to make sure that there are more meaningful and broader stakeholder interactions as these systems are being developed? As we're contributing to them? As we're using them? As we are deciding with whom to do business and not?

KATRINA INGRAM: We as consumers? Are you talking about we as consumers?

KIMBERLY NEVALA: We as consumers, and then perhaps those of us also in tech as well.

KATRINA INGRAM: Yeah, it's super challenging. Part of what we can do, going back to counter some of the things I was saying about trying to do it all yourself… because right now it's kind of on you. It kind of is on you to do a lot of this.

So I think it starts with understanding, again, having a basic education. There's a lot of material out there, a lot of it free online, and great books that have been written by people in this field. So there's a lot of material that people can access to get up to speed on what some of the basic issues are.

Then you need to make some choices about how you're going to engage with this technology. I'll give you maybe a silly example, but this is something that I do. We're all pretty familiar with the idea of autocomplete in a document system. We're writing away, and all of a sudden it suggests the word we're going to use next. I pause for a moment and go, hmm, is that what I actually wanted to say or not? Sometimes I'm like, OK, that's not a bad suggestion. But other times I think, you know what? I'm just going to keep going with my own word, the one I wanted to use in the first place. And that's just a really small thing, sort of inconsequential. There's no bias or anything necessarily happening there, but it's a little thing that I do. And those are the kinds of everyday things that I think about, things that anybody could do. Very easy to do.

KIMBERLY NEVALA: I've mentioned this before, but I've had occasion to have this discussion, for instance, with members of my family who I think don't fundamentally understand how impactful all of these little systems like autocorrect - or, as I call it, auto-incorrect, because if you're not careful it is, more often than not - really are. But also the extent to which those are really interwoven underneath. And my sister has said before, eh, privacy. Nothing's private anyway. What does it matter, you know? I'm shocked and horrified, although at this point I think she says it to get a reaction.

But there still seems to be a bit of a fundamental lack of awareness. And as I look at folks just going about their lives, they could be, I think, reasonably forgiven for not knowing that they need to take a pause and look at these pieces. I don't know that there's an easy answer today but maybe this comes back down to education, starting that very early and all the way up through the years.

KATRINA INGRAM: Yeah. I have so much empathy for people around this issue because it's a challenge to, again, stay up on everything. And also, it's exhausting. Let's face it. [LAUGHS] It's exhausting.

Shoshana Zuboff is someone who's written a lot about this. She wrote a great book called The Age of Surveillance Capitalism. I remember she had this phrase, "hiding in your own life." And that's what it can feel like. It can feel like, hey, wait a minute. This is my life, and yet I have to do all these machinations to protect myself from all of these technologies that are in my life, and that I actually need in order to move ahead in a modern 21st-century lifestyle. We're not moving out and living off the grid necessarily here. So I have a ton of empathy for people who are just trying to navigate that.

But I do get concerned when I hear the apathy. When people just kind of give up and say, well, you know, I'm just going to live as this open book then, and maybe privacy doesn't matter, because I think they're missing out on the collective goods that privacy can bring.

In order to have human flourishing, we do need to have a sense of privacy. There's a lot that's been written about this. And yet as that gets eroded, I do wonder, what is that doing at the bigger scale? You know, at the individual scale, but also at the bigger scale. So if we all are making these kinds of choices, what we're doing is just allowing these companies to become more and more powerful. We're kind of back to the power dynamic argument because that's really what we're ceding. We're ceding this control over our lives.

KIMBERLY NEVALA: No easy answers there. And hopefully so; I know there are a lot of folks looking at education and broader awareness. Again, as we talked about earlier, though, it's interesting, because a lot of those discussions make the going-in assumption, a baseline assumption, that the technology is inevitable and that you can't step back from it. That's probably true in some cases and not in others, because we're a little bit hemmed in by the prediction we've made that that is, in fact, the case.

But that being said, you work with companies day to day on thinking critically about ethics. About how to be responsible, how to deploy these technologies in ways that are meaningful and useful, and that still do no harm or minimize harm. How are you seeing companies proceed, and how are you supporting them in approaching this topic practically on the ground today?

KATRINA INGRAM: Well, one of the things that is really encouraging for the work that I'm doing is that I think it will result in better implementations of AI in all kinds of ways. So, for example, most implementations of AI technologies actually fail and part of the reason they fail is that people don't necessarily trust them. They don't trust the process. They don't really know what went into it.
And even if the technology might be useful, let's say within a business context, it doesn't get any uptake. People don't use it and then it's just a waste. It's a waste of resources, time, et cetera.

So part of the message that I bring to businesses is that in order to do this well and right and to actually see benefit from it, you want to start from this position of responsible AI. Now, that also, I believe, has to include this idea that, you know what? This particular problem? Not a good one to bring this technology to bear on. And being able to say no, I think that needs to be an option because maybe there are other ways to solve the problem. Or maybe the ways that we're solving it right now are less risky or have other values that we haven't necessarily examined. Part of it is examining those things upfront to make that decision.

When I talk to companies about the business benefits, let's say, of responsible AI, that's certainly one of them. Because it's shocking to think that so many AI implementations actually fail and part of the reason is linked to this idea of just not trusting the technologies.

KIMBERLY NEVALA: Do you have a framework or a basic construct you use to help companies and the employees within them frame the discussion for themselves and for their stakeholders?

KATRINA INGRAM: Yeah, I'm actually working on something new right now that I'll share with you and your listeners. Like so much of any kind of pioneering field, you wind up making up a lot of your own tools as you go along. [LAUGHTER] That's just the way it goes.

But you also scan the horizon, see what other groups are doing. One of the things that I've noticed is that there's a lot of great frameworks out there. The problem is they're massive. So if you take something like NIST, for example, fantastic framework. Very comprehensive. Very scary for any company to read through that and go, where do I even start with all of this? So that was my starting point for my own framework. How do I make this manageable? How do I make this easy enough to implement that a company feels like they're able to actually do it? Because that's really important.

I was inspired by my early career: very early on, I was trained as a marketer. Even 20 years later, I still remember the four P's of marketing. It's one of those things that just kind of gets drilled into your mind.

So I came up with this framework of the four C's for AI ethics. And my four C's are Context, Culture, Content, and Commitment. I started with context first because I believe that's so important, really understanding the environment that you're in. This includes the regulation. This includes the domain that you're in, the kind of work that you're doing, and your organization itself.

That melds into culture a little bit because culture starts to really focus on the people and everything to do with the people. So getting clear on those things.

Then we get to the content. And what I see is a lot of these AI frameworks, they dive right into the content first. Let's talk about bias or let's talk about privacy. So I wanted to set the table with some of these other concepts before we get to the actual content piece and center on the technology itself.

Then that last C, commitment, really speaks to this idea that this is not a one-and-done. This is an ongoing commitment that needs to be made to do this work. So that's my new framework that I'm playing around with right now. And trying to make this something that is manageable, that people feel that they can do, and that they feel that they can implement in their business.

KIMBERLY NEVALA: Would that framework be applied at a - I'm going to say global level - so that could be at your company or your function to set up high-level policies and structures and also at the level of an individual project or potential opportunity?

KATRINA INGRAM: I'm mostly thinking about it in the first instance, Kimberly. I'm thinking about it more from the company perspective. I haven't actually thought through it as a project-level framework yet. I'm mostly putting it together for some material that I'm building to really speak to how a company might implement a responsible AI program.

KIMBERLY NEVALA: Mm-hmm. Interesting. You mentioned earlier the incredible value and, in fact, the requirement for somebody to be able to say no. To be able to say, this is not a problem we should - maybe we could - but maybe this isn't a problem we should solve with technology. Or try to automate or to predict our way out of.

It strikes me that saying no is not something that a lot of organizations are very good at. Governance programs historically have gotten a bit of a black eye because they were always the place that good ideas went to die. Because they were the ‘no’ engine. How do we overcome that issue and make it acceptable, and in fact, a valued property, if you will, of how we engage in organizations?

KATRINA INGRAM: Yeah, it's super challenging. That's one of the challenges that anyone doing this kind of work faces, and it's certainly one that I feel when I work with organizations. You need to somehow frame all of this as enabling in some way and not as the ‘no’ police. I think that is really hard. Yet, at the same time, you can't be afraid of actually taking that contrarian position or even saying no.

I think this is going to be really challenging. It links back to this idea of success metrics. So many of our success metrics are about what happened, not what didn't happen. We don't really have metrics for the negative: have we prevented these bad risks from happening? That's part of it: how do we weave this work into that? And what would be the appropriate metric when we said no? I don't think it's how many times we've said no; I don't think it's anything like that.

KIMBERLY NEVALA: Right.

KATRINA INGRAM: There's something there that we need to unpack in greater detail. I'm not sure I'm going to give a real satisfying answer for that necessarily. But I don't think it can preclude that option. I think that has to be on the table.

Doing this work, though, you have to walk that line. You have to be that person who's putting things in place sometimes. You know, being diplomatic but not being afraid to stand up at the same time and say the thing that's going to be unpopular. Trying to bring your work into the conversation in a way that feels enabling but also doesn't shirk from the hard truths. It's a balance. It's a real balance walking that line.

KIMBERLY NEVALA: I like that idea. Though, as you said, it's not a hard and fast metric which says you must deny something. But if everything is making it through your pipeline with relative ease and very little pushback, perhaps either the right problems aren't being brought to the fore and/or it's a bit of a rubber stamp, I suppose.

KATRINA INGRAM: Yeah.

KIMBERLY NEVALA: So you are out and about. You speak to a lot of groups, a lot of people, and work with really a lot of companies to enable this work. What are you observing or seeing happen now that you're finding most interesting and want to be keeping your eye on as we move forward here?

KATRINA INGRAM: Mm-hmm. Well, relative to where I started a couple of years ago, one of the things I find really encouraging is that more people want to have this conversation and they understand why it's important. And I think a lot of that goes back to ChatGPT and OpenAI and the level of awareness of AI in general. Understanding that this technology is probably going to impact their organization in some way, and that we need to really think through what that means from an ethical standpoint. So that's really encouraging.

We're moving forward on regulations. The EU AI Act, at least, has recently passed, so that's something to celebrate. We have a little bit of regulation in place right now, even if, as you said, it takes a while for these things to work themselves through a system and for us to see what that really means.
But that is the compliance piece, which is really important and a complement to the ethics piece, and it helps to establish what the floor is. So that's really encouraging too.

KIMBERLY NEVALA: What would you like to see happen to meaningfully advance the conversation around AI ethics and, more importantly, the practical application of these ethical constructs in the future?

KATRINA INGRAM: One of the things I've been saying for a while now is that every company is becoming a tech company. You might not think of yourself as a tech company. You might think, I'm just a bakery or I run a retail organization. How am I a tech company? But the fact that data is so prevalent in most businesses these days makes you a technology company.

And so far, where I've seen the conversation really centered is on big tech. What is big tech doing? How are they enacting responsible AI? Really focusing on those big players. That's still important. We still need to focus on those big players. But other companies need to consider the ways that they're entangled in the ethical use of AI and really think that through as well. Even if you don't consider yourself a technology company, you are a part of this conversation. So that's one thing. And then, if I can have one more thing…

KIMBERLY NEVALA: Yes, please.

KATRINA INGRAM: …I think it's for nontechnical people, and I put myself in that camp as well, to really feel empowered to speak up about some of these issues. Not to feel like you don't have an opinion, a voice that matters, a perspective, because you do and it's really important. It's important for the technologists to hear that. It's important for other people to hear that, politicians to hear that. I really want people to feel empowered to actually speak up and be part of the conversation. That's one of my wishes for 2024 as well.

KIMBERLY NEVALA: [LAUGHS]
Well, I will say, having had the immense pleasure of seeing you speak, everything you present is in support of that empowerment. Anything we can do to help amplify that message and your work further, we are happy to do. Thank you so much. I really appreciate your insights and your dedication to the work.

KATRINA INGRAM: Thanks so much.

[MUSIC PLAYING]

KIMBERLY NEVALA: To continue learning from thinkers and doers such as Katrina about the real impact of AI on our shared human experience, and what you can do about it, subscribe now.

Creators and Guests

Host: Kimberly Nevala, Strategic Advisor at SAS
Guest: Katrina Ingram, CEO of Ethically Aligned AI