Practical Ethics with Reid Blackman

KIMBERLY NEVALA: Welcome to "Pondering AI." I'm your host Kimberly Nevala. Thank you for lending us your ear as we continue to ponder the realities of AI with a diverse group of innovators, advocates, and doers.

Today, we're joined by Reid Blackman. Reid is the author of the book "Ethical Machines" and the CEO of Virtue Consultants. Reid founded this ethics advisory firm after an auspicious career as a professor of philosophy. He has also been a purveyor of fireworks and has swung through the skies on a flying trapeze. Although I believe the latter was for fun and not for profit, but he can confirm that for us. Today, we're going to talk about making ethics practical and being effective ethical advocates. Welcome, Reid.

REID BLACKMAN: Yeah, thanks for having me. And for the record, I was an instructor, so I did get paid, although I was not in it for the money.

KIMBERLY NEVALA: [LAUGHS] Alright, fair enough. What was the impetus behind writing the book "Ethical Machines" which was published, I believe, last year in 2022? Were there any reactions to it that have surprised you?

REID BLACKMAN: Horror. No, no-- [LAUGHS]

No, so first, the impetus behind the book is that there has been, as you know, a tremendous amount of talk about AI ethics or AI ethical risks, or what some people call responsible AI, or trustworthy AI, or whatever. AI assurance is the last one that I heard. And I thought there was a lot of a whack-a-mole approach to these kinds of things. Bias would pop up, and people start talking about bias. An explainability issue or a black box model would pop up and we start talking about that. But there didn't seem, to me, to be a comprehensive approach to the topic that discussed all the things that need to be discussed, at least at a high level, to give people a solid foundation, a solid understanding of the foundations of AI ethics. And so it seemed to me like, oh, that's the kind of book that I can write. So I wrote it.

In terms of surprises, I don't think so. I think that perhaps one of the more gratifying things to see is that people keep praising the book as being super practical. Which is really nice for a philosopher to hear since we're always told that we're not practical people. So that was the goal, of course. It was to not just give people an understanding but to be able to translate that understanding into action. And it's been particularly gratifying to see people say, wow, this book is really practical, I know what to do now.

KIMBERLY NEVALA: It is very consumable. Although, I have to admit I got a little bit of a chuckle, because the subtitle of the book refers to how concise the advice within the book is. But the subtitle itself is not in the least bit concise, so that was an immediate ha-ha moment for me.
REID BLACKMAN: It's more concise than the original subtitle, which was something like, your concise guide to totally transparent – let’s see - totally transparent, remarkably respectful, and something behemothly-unbiased… I forgot what it was, but there was more, and the publisher said, take it easy.

KIMBERLY NEVALA: [Laughs]. So it's all relative. Now, you mentioned language and the many, many headings and terms that we tend to use when we are talking or referring to ethical, responsible, trustworthy, et cetera, et cetera AI. And we know that in many cases, the language does matter. Although I sometimes argue that it's probably more important that you pick a term and then just clearly define it and understand how other people understand it as well. But that aside, when it comes to ethics, you do espouse some key tenets, if you will, that can be seen in the language that you promote. So to kick this off, are you game to play a few rounds of why this and not that?

REID BLACKMAN: Yeah, sure.

KIMBERLY NEVALA: Alright! So let's start with why AI for not bad versus AI for good.

REID BLACKMAN: Alright, that's an easy one. We're talking about AI ethics and there wasn't a clear articulation of what we're really talking about. Because some people were talking about having a positive social impact and benefiting humanity and that sort of thing. And that's all fine and well. But other people were talking about mitigating biases of AI, or not having black box models, or the risks to privacy that citizens suffer as a result of various kinds of applications of AI.

First of all, as a philosopher, I'm just interested in conceptual distinctions. And so there's a difference between good and not bad. Good is, if you like, a positive number, bad is some negative number, but then there's zero. And I think that companies are typically going to have to focus on one or the other. And I think that where they need to put their emphasis, their focus, their resources, their attention, their money, their time is on not ethically screwing things up, frankly. There's lots of ways to go ethically sideways, which are also reputationally sideways, which are, in some cases, regulatorily and legally sideways.

And the main focus should be, first from a business perspective, do no harm to your people, employees, your customers, your clients, do no harm to your brand. And from an ethical perspective, just straight up as an ethicist, first, do no harm has to be the higher priority than do some good in the world. It's not OK if you help elderly people across the street but you also engage in bum fights or something, something like that. Yeah, you've done some good, but you really have to prioritize not doing the bad stuff.

So I think it's important to distinguish between AI for good and AI for not bad because otherwise, it gets swept up into this bigger conversation about doing good in the world. And then you have AI ethics as part of something like corporate social responsibility. It doesn't get woven into day-in-day-out operations, product development, product deployment, monitoring your products, and that's got to be where the focus is, both from a business perspective and an ethics perspective.

KIMBERLY NEVALA: Now, when we talk about weaving this into day-to-day operations, we've also had a lot of discussions about the differentiation between a traditional approach to risk management and the purview or domain of ethics. This is interesting because you also, as I understand, are a proponent of using the term "AI ethical risk mitigation" as opposed to the more generic umbrella term "AI ethics." Why is that?

REID BLACKMAN: So whenever I see "AI ethical risk," I sort of feel a little bit like that's me [LAUGHS] because I've been pushing people to talk like this for roughly five years. There's a variety of reasons.

One is what I just alluded to, which is that the ethical responsibility and business responsibility, number one is first, do no harm. That's one issue.

The other is that the language of ethics is not a language that most businesses speak. They're not comfortable with the word "ethics," they don't get it. One reason is because they think it's about being a do-gooder and they think, not unreasonably, we're not in the business to be do-gooders. We're in the business to do business, to make money, to sell stuff, whatever. Yeah, they might have mission statements and stuff like that, but they just say, no, we're not here to be … that's not our main thing. Yes, we'll give some money… and so that's what I think is in the minds of a lot of people when they hear "ethics."

And the other thing is that they do understand the language of risk. Businesses know what risk is. There's enterprise risk, there's enterprise risk categories, there's risk officers. So they understand the language of risk. And so I wanted to bridge the divide between talking about ethics and talking in a language that not only do they understand but you already have their buy-in, in some sense. Businesses are already bought into knowing they need to do some degree of risk mitigation. So it's the attempt to speak their language.

KIMBERLY NEVALA: Excellent. Now, one of the other things, and this probably jumped out because for a long time I've been a consultant in the areas of governance. In the book, when you were talking about stakeholder engagement and ownership, you make a really clear distinction and a push for getting justified buy-in versus relying on rote compliance. Now, you didn't say "rote." I added rote.

REID BLACKMAN: Sure.

KIMBERLY NEVALA: But what is the difference between justified buy-in and compliance, and why is that so important for organizations to get right?

REID BLACKMAN: I mean, if you don't have buy-in, then you have, at best, people complying if they think they're going to get caught if they don't do it. That's it. And you want more than that.

You want people to understand why we're doing this. If your data scientists and engineers, for instance, think that you're doing this - that you have some kind of AI ethical risk program - because it's PR, because the company wants to be able to say, yeah, we take ethics really seriously, then the data scientists aren't going to take it seriously. They will do the absolute bare minimum to check the box. They'll not check the box when they don't have to if they're not going to be held to account. So I just think you're going to get really bad outcomes if you don't get justified buy-in.

I actually think you can get justified buy-in if you do it the right way because, contrary to most assumptions, a lot of people really do care about this stuff. At the very least not doing really bad stuff. Forget about do-gooder. You're just not harming people. And so I think justified buy-in is on the table, it's there to be had, and it's good if for no other reason than that it increases the likelihood of compliance with the AI ethical risk program. Not to mention the overall quality, the quality of the work that's done.

KIMBERLY NEVALA: Now, is one of the ways that we get justified buy-in versus checklist compliance to be clear about what it is that we're trying to achieve? One of the things, and I know you've talked about this for several years and it comes through very clearly in the book (which I would recommend to everybody who's trying to put these things into action), is that it's really common to start with those principle or value statements. And if you look at them, they are almost all the same. They are almost interchangeable these days, company to company and even industry to industry, to some extent. You argue that, typically, these statements are just not concrete enough to be instructional or really useful. So why is that - and what is the better way?

REID BLACKMAN: Yeah, I think they're useless. Just to be clear, I think they're useless. Not just that they're not very good, they're useless. They don't have to be - there are ways to do them well - but they tend to be a waste of time. And the reason that they're so similar across hundreds, thousands of organizations is because they're so generic and watered down that they mean essentially nothing, so everyone can get behind them. So of course you get all these statements that are the same, because the way you get all this agreement is by pitching it at such a general level that no one would reasonably disagree.

My favorite example is to talk about fairness. You say, we're for fairness. Everyone's for fairness. The KKK is for fairness. If you just say, hey, KKK, will you sign this paper that we're for fairness? They'll say, yeah, absolutely. Now, you ask them to operationalize it, you ask them what fairness consists in, and now things will start to diverge. But at that really high level, which is the level at which most ethics statements are written, you're going to get agreement from the most vile people around. So I do think that they're useless, and that's why they're generic.

Then, the other thing is that people make this sort of weird move which is something like: we wrote this ethics statement and it's not guiding us. It's squishy. Therefore, ethics is squishy, so we should do something else. It's like, no, you wrote a squishy statement. You did a bad job with your statement. I'm not blaming you. It might not be your fault. You don't have the experience, or skill, or the knowledge about how to do a better one, but that's what you wrote.

The truth is you can make them really concrete, and the way that I tell people to do this is to tie any articulation of a value to some set of guardrails for action, so some things that you always will do or never will do. Take a toy example - not necessarily a toy example, but in some cases it is. You can say, we value your privacy. I don't know what that means. But if you said something like, we value customer privacy and so we will never sell customer data to a third party - alright, now you're starting to tell me what you mean by respecting people's privacy. And you're telling me what you mean by referring to a particular concrete action. More specifically, not only are you saying, we will never sell customer data to a third party, you're also saying, we will always check to make sure that we do not sell data to a third party, and that's a control. That's something you can operationalize. That's something you can put in place.

Now, if you do that, say, for each value you have - let's say you've got five values, something like that - and let's say for each one of those values you've got, I'm just going to pull a number out, three to five guardrails. Because we value X, we will never Y, we will always Z, and so on. If you have five principles, and for each one of those you've got five guardrails, that's 25 actionable things you can do now to live up to your standards. And most importantly, those guardrails are stopping you from realizing your ethical, reputational, regulatory, and legal nightmares. Which is to say you've now got 25 concrete things to put in place, to operationalize, to implement, that will stop your company from realizing those nightmares, or at least decrease the likelihood of it. This is pretty concrete. It's not squishy.

KIMBERLY NEVALA: Now, to be clear, one of the things that's always been interesting when working in governance in general is the assumption that it's somehow really just dictatorial. There's a strict tabulation of, thou shalt do this or thou shalt not do that. And if you are in compliance with that list, you are in the clear. And it has led to a lot of badness for a lot of organizations, where if something isn't on the list, it somehow becomes OK, or becomes the "get out of jail free" card for doing something poorly.

You wrote this excellent article recently - and I may be paraphrasing the title - The Signal App and the Danger of Privacy At All Costs, and it raised these really excellent points. But what it reinforced for me is how important it is that we enable individuals and organizations to deal with nuance. As opposed to taking the stand that I recast in my own mind as one principle or one interpretation to rule them all. Was that your intent? What is the practical lesson there, and is that in conflict with what you just said about concrete actions?

REID BLACKMAN: It's not. So, yeah, look, I do think that organizations need to develop a capacity for reckoning with moral nuance, with moral gray areas and doing it well.

Now, when you're building an AI ethical risk program, usually the way that it gets done is it gets rolled out in phases. So for instance, phase one might just have to do with those nightmares. Phase two might put another layer on it: not just our nightmares, but those things that are pretty bad - not nightmarish, but pretty bad. Let's think about those.
We also need a mechanism early on-- this should be really phase one, frankly-- a mechanism by which the gray areas are spotted. The people who spot them don't have to themselves be able to engage in a morally nuanced evaluation of a use case or an application or whatever. But they need to be able to spot, oh, there's an ethical gray area here. They need to be able to spot it, they need to be able to flag it, and they need to be able to elevate it to the relevant people.

Now, whether that's a person - I hope not - or whether it's something more like an ethics board, or a risk committee, or whatever you want to call it, that's more appropriate. And then having those people appropriately trained, with the appropriate skills, knowledge, experience, et cetera. So it's going to have to be a cross-functional team. Arguably, there should be an ethicist involved. I really hate saying it because it sounds self-serving. But then again, there are so many companies out there that I couldn't possibly help them all.

Having an ethicist around is really good for the purposes of helping people spot the nuance and work their way through it. That's really what an ethicist is primarily good for in those kinds of contexts. Not being the sort of moral priest that says this is right and this is wrong. But rather, here are some ways of thinking about it that might help you move forward in a concrete decision.

And then, of course, you can start marching the ability to deal with moral nuance outward from that committee to other teams. I don't know if that's going to be phase one. That's going to be more like phase two or phase three or something along those lines. Where, OK, now we're giving our first line of defense a better education, more skills, more training on how to deal with moral nuance themselves so it doesn't have to get kicked up to the committee, or one of the committees, or something along those lines. So it's a muscle that needs to be exercised and grown over time. It's not something that you just start out with.

KIMBERLY NEVALA: There are so many challenges, really, for organizations that try to go down this path, and even for us as individuals. In the book, you pointed out that it was possible for folks to have, and I quote - I thought it was so striking that I wrote it down word-for-word - "wrong but reasonably held positions." And again, a striking turn of phrase, which in no way, to my read, meant that there isn't still a, quote unquote, "right answer."

REID BLACKMAN: No.

KIMBERLY NEVALA: But can you talk about why it is so important to acknowledge that a position could be or might be wrong yet still reasonably held when we're engaging in ethical discussion and discourse?

REID BLACKMAN: Yeah, I mean, frankly, it's the same as non-ethical discourse. You might have a jury whose members are deliberating about whether the person did it. And some people think that the butler did it. Some people think the butler did not do it. Both parties have pretty good reasons for thinking that the butler did it or that the butler didn't do it. You say, it's a tough case. It's not an open-and-shut case. It's not like the butler's DNA was all over the crime scene, and the blood of the victim was underneath the butler's fingernails, and stuff like that. It's a tough case. And either the butler did it or he didn't; those are the options on the table. There's a fact of the matter about whether the butler did it or not. But someone may reasonably conclude, on the evidence available, that he didn't do it, even though in fact he did, or conversely that he did do it even though in fact he did not.

So we frequently have these kinds of issues. I mean, I deal with it, I don't know - my kids get into a fight; he hit me first, no, she hit me first. And I have to sort of parse through the evidence. OK, what's going on? There's many situations in which you've got limited evidence, you're not all-knowing. And you can make a reasonable conclusion based on that evidence, but it actually turns out to be wrong. It's the same thing in ethics. You can engage in various kinds of deliberations about, is this the right thing to do, is this the wrong thing to do, is this ethically permissible, is it ethically impermissible? And you may do your diligence and come to a reasonable conclusion. And you may be wrong.

KIMBERLY NEVALA: Yeah, and I think this is helpful because in a lot of cases, this question of people having different opinions in the context of ethics gets back to that point you made earlier. Which is, well, this is subjective, it's an opinion. And therefore, this is not something we can ever come to a concrete answer or take action on.

REID BLACKMAN: Yeah, but that would be… I know this is not your view, but it would be a bit silly to conclude that from the fact that people disagree. People disagree about whether the butler did it. People disagree, I mean, physicists have their disagreements. Chemists have their disagreements. Historians have their disagreements. Business leaders have disagreements about what's the best strategy to move forward to grow revenue. There's disagreements by hiring managers about who the best hire would be.
There's all sorts of disagreement, but we never suppose, oh, well, we disagree, so there's just no truth to the matter about what would best grow revenue. No. That strategy may well in fact turn out to be a terrible one or a great one. But the fact that people disagree doesn't show that there's no truth to the matter. It just shows that the truth is difficult to figure out.

Now, what does that mean? It means we've got to do our due diligence, both with figuring out the correct marketing strategy and the hiring strategy and the ethics strategy, or what to do in this particular use case. You need to do your due diligence. You need to engage in that deliberation. You need to have multiple people figuring out what's the evidence that exists and how strong are those pieces of evidence and how do they weigh against each other and to draw us to a reasonable conclusion. So it's really no different, the ethics disagreements are no different, in that respect anyway, than disagreements about what are already agreed to be matters of fact.

KIMBERLY NEVALA: I suppose maybe it's just the fact that we throw in human rights or equity or equality and these words that have a lot of gravitas to them. And the stakes feel so big at times if you get it wrong, or even a little wrong, that it's scary. I'm a perfectionist. It's a tendency I have to actively guard against because it can be debilitating. It can be an absolutely perfect recipe for stasis.

And I do worry that with ethics in particular, even though, as you point out, it's not uncommon to have differences of opinion or to have to work these through, we can easily find ourselves in situations where, as the saying goes, water isn't wet enough. And that's not to say that we shouldn't be shooting for optimal outcomes. But we can't let the enormity of some of these things, of the task ahead, stymie progress. And I'm interested in your experience as both a philosopher and now a practicing ethicist. I really wish we were all practicing ethicists, but I digress.

REID BLACKMAN: I'm more of a - I'm teaching. Those who can't do, teach. I'm deeply unethical in my personal life. I just try to tell people what they ought to do.

[LAUGHING]

KIMBERLY NEVALA: Hmm. You may be very humble there.

But what in your experience is the delineation between ethical advocacy and fanaticism? And how fine is the line between them? And is it easy to tip the scales in an unproductive manner?

REID BLACKMAN: Yeah, so, I mean, I think that that distinction between advocacy and fanaticism, you're taking from that article, the op-ed, the New York Times op-ed on Signal. The way that they were using the term "fanaticism" there was to think that there is some moral principle, there's one principle or some value, that ought to trump all others in all circumstances no matter what. That strikes me as a kind of fanaticism. If you're willing to sacrifice anything and everything on the altar of privacy, for instance, then you're a fanatic. There are some things that are more important than people's privacy. If, for instance, we have to violate some people's privacy in a relatively minor way in order to, say, save the lives of children, that seems to me like it's well worth it.

I can create some - if you want the philosopher to talk, I'll give you some thought experiments. Peek into this person's window, see them changing, and you'll stop a genocide. Is it worth it? Yeah, it's worth it. Sure, you violated their privacy. You owe them an apology. But you should do it because it's going to stop a genocide. It's a silly thought experiment. But it brings out the point that our values come into conflict, and they've got to be weighed against each other, and fanatics don't do that. And so this speaks to the moral nuance point as well. One way to avoid moral nuance is just to be a fanatic and say, no, this is the one thing that matters all the time, every time, the most. That's one way. Did I answer your question? That's the fanaticism bit. Oh, being an advocate.

One thing that I like to tell people is that I'm not an activist. I'm an ethicist, but I'm not an activist. I don't go around protesting with a picket sign, and I'm not screaming, corporations are bad, or something along those lines. I'm not an activist. I'm not a priest. As I said before, I think that the role of the philosopher or the ethicist, at least in this context, is to help people understand the landscape better. I want them to be able to see the AI ethical risk landscape better than they've seen it before. I want the confusion to be lifted. I want them to be empowered or enabled to engage in careful deliberation, to be able to see someone else's position, to be able to work through that. I want them to be able to see when an AI ethical risk program is sufficiently robust or not, the ways it may fail or the way it may succeed.

And so I'm an advocate, I suppose, for AI ethics, or AI ethical risk mitigation, or AI ethical risk programs, or whatever you want to call them. But that's not activism and it's not fanaticism. It's just, hey, listen, there are some important issues here with regard to, particularly, ethical, reputational, regulatory, and legal risks. You should pay attention to them and do something about it. Here's what you should broadly do.

But I don't go around saying: and here's what exactly the content of your ethical conclusion should be.
My general view is, look, there's Hobby Lobby and there's Patagonia. They exist on opposite ends of the political spectrum while still being, arguably anyway, within the realm of reasonableness. And we need to talk about AI ethical risk mitigation in a way that Hobby Lobby, Patagonia, and every organization in between can see its importance and adopt practices that are in line with their already existing organizational values.

KIMBERLY NEVALA: Well, I think that actually helps folks who want to step into that role and advocate but have been shy about being viewed as an activist, or about having to come to the table with the answer or being the judge and the jury - to realize that's not necessarily the role or what's required here.

REID BLACKMAN: Yeah, this is why my next article - I'm not sure when it's coming out, maybe April or May or something like that - the working title, anyway, is Tell Me Your Ethical Nightmares. And it's just saying, listen, organizations, you need to be able to say the word "ethics." It's going to be OK. Because I think organizations right now - the other working title is "Corporations Need To Stop Being Afraid To Say The Word Ethics." You can say the word "ethics." One way you can do it is by saying the phrase "ethical nightmares."

You know what ethical nightmares look like. You need to address them given the range of technologies that are coming down the pipeline. So, yes, my book is about AI, but I work outside of AI as well, including blockchain and quantum, IoT, digital twins. There's all these technologies coming down the pipeline, tons of use cases, tons of applications. And you need to be able to say, we need to avoid the ethical nightmares of these technologies, both for our organization and for society, et cetera.

KIMBERLY NEVALA: So I will look forward to that article. I think it is extremely important. And I'll do whatever I can to help promote that conversation and forward it.

So, speaking of the state of play in AI ethics today, let's talk a little bit about what organizations can do to move this forward. It has been fascinating to watch the choreography occurring as systems such as ChatGPT, Bard, and GPT-boosted Bing have been let loose in the wild. Followed by some immediate post-hoc warnings and calls for regulation by their creators, no less. What conclusion about the state of play of AI ethics, if any, should we draw in terms of both the public response and these systems' creators' actions since their release?
REID BLACKMAN: Yeah, I mean, I wrote an op-ed, also published in the Times, on this topic. And it's clear to me that we can't rely on self-regulation. Self-regulation alone is not going to save society from the worst of AI. In some cases, it will. There'll be some cases in which organizations are responsible and don't do the thing that they really ought not to do. But there's going to be lots of cases where that's not the case. And that's in part just because of market forces. You want to innovate, you want to beat your competitor, you want to come out with the thing first. You're working on something; they're working on something. They're in competition with each other. You want to be the first to announce, and so you release without actually doing your ethical risk due diligence.

We need to have government involved. We need to have some kind of regulations. Of course, I'm not against innovation. I'm all for innovation. Regulation is not about making sure that you have to do it this way. Regulation allows for Hobby Lobbys to be Hobby Lobbys and Patagonias to be Patagonias, and it should do the same thing in the AI space. It can allow for reasonably held disagreement about what's ethically permissible. Unreasonably held views, not OK.

But if we don't have that, then… it should just aim at the nightmares, right? As one of my colleagues puts it, regulation is about stopping people from getting murdered: literally, but also metaphorically. That's what we're looking for. We need regulations to stop people from getting metaphorically and, in some cases, literally murdered by various kinds of applications of technology.

KIMBERLY NEVALA: Yeah, I think Henrik Sætra said it similarly. He said we need to make it painful for companies not to act ethically and responsibly. I'm not trying to tell them how to be good, but we do need to make it painful for them to be really bad.

REID BLACKMAN: Yeah, that's right. And so I'll say regulation and someone will always chime in and be like: it has to be enforced.

KIMBERLY NEVALA: [LAUGHS]

REID BLACKMAN: Yes, of course, the enforcement. It has to be meaningful. Enforcement can't be a slap on the wrist.

But, I mean, if government is there to do anything, it's there to protect its citizens, bare minimum. And not just protect citizens from hurt feelings, but to protect them from really deeply life-altering things. The kinds of occurrences or technologies or actions by government or by individuals that restrict, block, or significantly hinder people's ability to get access to the basic goods of life: the basic things they need to live a minimally decent life. That's what government is there for, if anything. And so insofar as these technologies are phenomenally powerful and can undermine some people's ability to get access to the basic goods of life - that is to say, the kinds of things they need to lead a minimally decent life - it's well within the purview of government to regulate against those things.

KIMBERLY NEVALA: Now, we could have a whole separate discussion, I think, about the evolving complexity around regulations. Certainly, there's no shortage of principles. There's no shortage of responsible, trustworthy - call it what you will - frameworks. There's a whole swath of forthcoming and/or threatened legislation, not to mention, to some degree, increasing public awareness.

So it can, I think, feel undeniably complex and a bit disorienting. But this isn't an excuse for not taking action these days. What concrete steps would you recommend, whether for small to medium-sized businesses or multinationals - what should they be actively doing now to engage in AI ethical risk mitigation?

REID BLACKMAN: So standardly, there's a pretty straightforward way of going about this.

One is you can write an ethics statement, an AI ethical risk statement, whatever you want to call it, that's not useless. You can write it in such a way that says, here are our standards. At a minimum, here's what we're going to do. It's not these lofty goals of, we're going to have a positive impact on the world and save humanity. It's not about lofty goals. It's about something like the opposite of a goal - the anti-goal. This is the ethical nightmare, right? So I think number one, you specify what those things are and you put that in your ethics statement.

And then, and this is the step that most organizations miss. A lot of organizations will come to me and be like, we've got our principles, we want to implement now. It's like, no, no, no, hold on. You can't implement yet. You don't even know what your organization looks like relative to these standards right now. So you write that statement, and it doesn't have to be letter perfect. You don't have to wait until every single executive signs off. You get to some reasonable place where it's well-articulated, and then you do a gap and a feasibility analysis. Organizations know how to do that.

They know how to say, OK, what are the current people, processes, technologies that we have in place now? What are the policies that we have in place? Where do those not speak to the AI ethical risk standards that we're trying to meet? What gaps do we have to fill in? Do we need a self-standing AI ethics policy? Do we not need that? Are we going to weave it into existing policy? Are we going to use our existing enterprise risk categories and use that as our AI ethics risk categories as well? If not, are they going to be separate?

Gap and feasibility: who do we have on board? Who understands this kind of stuff? Who do we need to train? What kinds of roles need to be responsible? What RACI matrices exist that we can augment? What existing, say, risk boards or risk or compliance committees or the like, already exist that we can leverage or augment to create an ethics committee or an ethical risk committee?

You've got to do that work. Otherwise, you're going to create redundancies and inefficiencies, and things are going to break down. Things aren't going to harmonize well. You start doing this ethical risk program, and the risk department never heard anything about it. Now they're upset that the way they do things at an enterprise level is not being mirrored at, say, the data science level, and now the brakes are slammed. But if you get those people on board in the first place and have them involved in that gap and feasibility analysis, you're not going to hit all those roadblocks down the road.

So number one, set your standards. Number two, do a gap and feasibility analysis. And from there, you can build a framework and an implementation playbook for how to, step-by-step, implement your framework, et cetera, et cetera. But you've got to start off with that stuff, especially at the enterprise. A start-up? Forget about it: you don't need to do all that stuff. You don't need to do a gap analysis if you're a 50-person start-up because you've got nothing. [LAUGHS] You already know where you are. But if you're an enterprise, yeah, you have to do that, or you're going to run into a ton of problems.

KIMBERLY NEVALA: So that seems very logical, very straightforward. Why are more organizations not doing this now?

REID BLACKMAN: They don't feel the pain yet.

KIMBERLY NEVALA: Hmm.

REID BLACKMAN: That's it. There are various motivations for why some organizations are doing it now. And it's only grown since I started doing this. Over the past five years, I've seen a pretty sharp increase in the number of companies that are doing it. I think most still are not. Certainly more than 50% are still not.

And there are different motivations for doing it. Some really care. Some want to avoid brand risk. Some want to avoid those ethical risks. Some just want to be known for being on the ethical cutting edge, so it's brand/ego. These different motives exist not just within organizations but within individuals in the organizations. That cocktail of motivations comes in different ratios in different individuals and different corporations.

The ones that haven't done it yet, one, they might not think they're at risk. A lot of companies think, well, we're not doing AI, we don't have a big team of data scientists. But then you ask them, OK, is your HR procuring AI software? How about your marketing department? AI is coming in. And now we can say things like, is anyone in your organization using ChatGPT or Bing Search, or whatever it is? They don't know. They don't know. [LAUGHS]

I'm not blaming them for not knowing, but they don't know. And the truth is that there's lots of ways, there's lots of Trojan horses out there that get brought into enterprise and put the company at risk. So if nothing else, create an AI ethical risk program for your procurement process. But we're not really going to get greater than 50% adoption of AI ethical risk standards unless we get lots of pain, which comes with highly enforced, strict regulations.

KIMBERLY NEVALA: Everything old is new again. I mean, back in the day as data governance consultants, we sadly walked into organizations and said, your approach is fraught - but they didn't act because they didn't feel the pain. And/or we told them, you are doing too good of a job of masking the pain, which sounded odd.

REID BLACKMAN: Yeah.

KIMBERLY NEVALA: We definitely got the raised eyebrows when we said, well, you probably need to let a few of those balls hit the floor if you want them to take action at a higher level. It was horrified shock.

REID BLACKMAN: [LAUGHS] Yeah. And I'll say one more thing. The other things that-- we've been talking a lot about AI, and I mentioned this earlier, it's not just AI.

KIMBERLY NEVALA: Mhm, right.

REID BLACKMAN: I'm starting to talk about - insofar as I think that people can actually hear what I'm saying - not AI ethical risks but digital ethical risks.

Because it is AI and ML, but it's also blockchain technology. It's also quantum technology. It's also digital twins. Listen, there's so many technologies. And then there's the interrelation, the interoperability of those things, not just from a technical perspective but from an ethical perspective. There's tons of ways for things to break down.

And these organizations think, it doesn't affect us now. One, I think they're wrong, for procurement reasons if nothing else. And number two, wait a couple of years. Wait two, three, four years, and you're going to see more and more of these technologies taken on board by the enterprise. It's going to happen in a snap, and you're not going to be ready for it. And by that time, you won't even be compliant with regulations that exist.

KIMBERLY NEVALA: Yeah, we just had the opportunity to speak with Chris McClean. He's the Global Head of Digital Ethics at Avanade. And he made the same point. He said AI is a part but not the only part of his purview and you've emphasized that here as well. That being said, how similar do you see the concerns being between these domains, and in what areas might those diverge?

REID BLACKMAN: So you're saying the ethical risks related to different technologies?

KIMBERLY NEVALA: Yeah, some of those emerging technologies. You mentioned blockchain, quantum, AR, VR.

REID BLACKMAN: Yeah, one of the things that interests me, both intellectually and pragmatically, in what gets called digital ethics, or what I'm calling digital ethics, is that-- look, sometimes I get asked questions like, why do we even call it AI ethics? Isn't it just ethics and you're talking about AI?

And there's a way in which that's exactly right. There's no principled distinction between AI ethics and the rest of ethics. It's not as though there's something special, especially glowy about AI ethics at the conceptual level. But there's a pragmatic reason, I think, for making the distinction between digital ethics and, say, non-digital ethics. Which is that these technologies, by virtue of how they work, increase the likelihood of certain kinds of ethical risks being realized. So it's the nature of the beasts of the machines that they make these kinds of risks likely.

So, for instance, machine learning is a pattern recognizer. It's the nature of the beast to recognize patterns in vast troves of data. So if you give it a bunch of data about people, or society at large, or groups of people, it's quite likely it's going to recognize biased or discriminatory patterns in that data. And so that's why we get the problem of bias or discrimination in AI. Because it's the nature of the beast to be a pattern recognizer, and some patterns are discriminatory in nature or they reflect discrimination, et cetera.

You're not going to get that with blockchain. Blockchain is not a pattern recognizer. That's just not the nature of the technology. But you do get this crazy, distributed network of computers or nodes with data and processing of data spread across thousands of entities. There's a lot of confusion about what governance looks like of a blockchain because people think, mistakenly, oh, it's distributed, it's decentralized, when actually that's not at all true. And so for blockchain, the discrimination problem isn't really likely by virtue of the fact that you're using blockchain, but deep, hard important questions about how to govern the blockchain come up given that blockchain is this massive beast spread across nations.

With AR, VR, you immediately get issues having to do with, for instance, the appropriateness of the content because it's a content delivery system. AI is not or need not be essentially a content delivery system. There's use cases for that. But VR is essentially a content delivery mechanism, so now you immediately have increased risk of inappropriate content. Whether that's inappropriate content for children, or adults, or whatever, you still have that issue.

So what interests me intellectually and pragmatically about digital ethics as a category is that it's the nature of the beasts of these technologies that each one of them increases the likelihood of a certain subset of ethical risks. So why study AI ethics and blockchain ethics? Because you need to understand, what is it about this technology that's making black boxes a reality and a problem? And you're going to get that both in AI and in quantum computing. What is it about AR or VR that makes it the case that we really have to think more about what constitutes appropriate content, and how people share that content, and whether we might play a role in spreading misinformation? Just because you're dealing with AI, you don't have to have that question. AR, VR, you might. I mean, it's much more likely.

KIMBERLY NEVALA: It strikes me as you're talking that if we have a robust ethics or governance framework - call it whatever you will, but one that addresses ethics and that's not just specific to AI or VR - we can ask some of the same questions. The answers are going to be very different. The risks are going to be very different. The nature of those systems, as you said, makes that a certainty. But if we have the mechanisms to ask the questions and perform some of this critical thinking, your organization, or an organization, will be better prepared to handle whatever comes next, to some extent.

REID BLACKMAN: Yeah, that's a big thing that we stress with all of our clients. One of the first questions we'll ask is, do you want an AI ethical risk program or a digital ethical risk program? Because what we recommend is creating a program that's flexible enough to accommodate whatever is coming down the pipeline.

That's why, for instance, I recommend against making explainability its own ethical risk category, which lots of people, lots of organizations do. They say, oh, we're looking for the risks here. One of the main pillars of our program is explainability. And that's great if you're just doing AI. But explainability has nothing to do with blockchain so it's going to be sort of this empty, weird floating category that you have to analyze your blockchain application for when it's just not relevant. So, one way I put it if I'm talking to AI people is I'll say something like: you don't want to overfit your ethics program to AI. You want it to be sufficiently flexible such that it can accommodate new technologies.

KIMBERLY NEVALA: Mhm-hmm. Strive to be a futurist and not a fortune teller, I suppose.
All right, so as we wrap up, what can we reasonably expect to see unfold on the ethical or AI or digital playing field in the foreseeable future? And what do you wish we would see but think we might not?

REID BLACKMAN: We'll see regulation, for sure. We have the EU AI Act. Canada is working on its regulations. I'm actually advising Canada on their federal AI regulations. The US, who knows? We don't even have privacy regulations, let alone AI regulations. The probability of our Congress getting a sufficient understanding of AI, such that they feel in a place to start passing regulations, is slim to none, at least in the next few years.

So, look, we'll see some regulation. It'll be really interesting to see the extent to which companies are compliant with the EU AI Act in the EU and whether they carry that over to the US. Because it's just easier to be compliant across all places in which you're operating. Or whether they're going to bifurcate it, so Americans get treated worse by the lights of the act because we're not subject to it. Some organizations will differ. I know that some companies are compliant with the CCPA, the California Consumer Privacy Act, across the board because they don't know who's in California and who's not. So to make sure that they're compliant, they just treat everyone as a California resident. So some companies will treat it like that, and some companies will treat different people differently. But we're going to see.

The only way we're going to get something that's akin to a federal regulation of AI in the US I think is by having that kind of de facto federal regulation by virtue of a bunch of state regulations, state and local regulations. Companies are just going to be like, this is too complicated. California's got their AI thing. New York has their AI thing. Massachusetts has theirs. Indiana has theirs. Texas says…

This is too complicated. Here are the standards we're going to meet across the board regardless of what state you live in, because it's just too complicated otherwise. So I think we'll ultimately get to something that's roughly functionally equivalent to having federal AI regulations. Not because we have federal regulations but because we have a patchwork of state regulations. It's going to be hard to meet the New York regulation in one respect but easier to meet the Indiana regulation in that respect. But then it's going to be hard for the Indiana one and easy for the New York one. And so it's going to be really hard for organizations to sift through, OK, given these many regulations, how can we do this? We're not going to be able to tailor it on a state-by-state level. Do we just go to the highest bar? There's not really a highest bar, because it just varies all over the map. So do we have to create a fictional regulation that spans all of them? That's going to be really strict, though. That's going to be the strictest one we could possibly have.

KIMBERLY NEVALA: Right.

REID BLACKMAN: Do we have a bigger risk appetite than that? So we're just going to have to eat it in some cases. We'll pay their regulatory fines. It'll be painful but not that painful, so let's just do that.

So I don't know. What I hope we would see, but I don't think that we will, is unified, effective federal regulation in the US. Maybe we'll see it eventually, but in 7 years, 10 years. After many more developments in the AI world have taken place, many more applications, and many more people and businesses have been wrecked.

KIMBERLY NEVALA: Well, there you have it.

REID BLACKMAN: [LAUGHS]

KIMBERLY NEVALA: The good news and bad news all rolled up in one.

REID BLACKMAN: [LAUGHS]

KIMBERLY NEVALA: Well, thank you, Reid. I know we are very much aligned in believing that ethics is not an option. And discussions such as this do go a long way to making the topic approachable for those who need it most, and in my view, that is everybody. So thank you again for joining us. And we'll continue to follow your work avidly.

REID BLACKMAN: Great. Well, thanks so much for the great conversation.

KIMBERLY NEVALA: Fantastic. And if you have not read the book "Ethical Machines" or you're wondering how to get your organization off the starting block, I highly recommend it.

Next up, we're going to be joined by Maria Santacaterina. Maria is the CEO and founder of Santacaterina, a global strategic leadership and executive board advisory firm. We're going to talk about the myriad, often unseen ways that AI is already shaping our lives. Subscribe now so you don't miss it.

Creators and Guests

Kimberly Nevala
Host; Strategic Advisor at SAS

Reid Blackman
Guest; AI Ethics Advisor, Author: Ethical Machines