Ethics for Engineers with Steven Kelts

KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.

In this episode, I am beyond pleased to bring you Steven Kelts. Steven is a lecturer in the Princeton School of Public and International Affairs, as well as a professional specialist for the Responsible Computing Curriculum at CITP. He has also created a course called Agile Ethics, which aims to train undergraduate computer science and engineering majors in how to incorporate ethical decision making into their professional workflows. In addition to all of this, he serves as an advisor to the Responsible AI Institute and is a Director of the University Responsible Tech Network for All Tech Is Human.

Today, we're going to be discussing why we can't or shouldn't leave engineering and technical teams out of ethical decision making, and also how we might think about evolving our current approaches to ethical training and enablement broadly. So welcome to the show, Steven.

STEVEN KELTS: Thank you so much for having me on, Kimberly. I look forward to our discussion.

KIMBERLY NEVALA: Yes. Now, you've done work in both the academic and corporate environments, ethical work in both of those environments for a very long time. But you had shared that there was one specific question that really sparked or animated your current focus areas. Can you talk to us a little bit about that animating question?

STEVEN KELTS: Yeah, it was an important moment in my life, and it was about five years ago. Like you said, I'd been doing work on tech ethics previously. I'd done work in corporations.

It was about five years ago that Ashley Casovan, who was then at the Responsible AI Institute, asked me, so Steven, you teach ethics. You've been doing it for 18 years. It was 18 years at that point. So how do you make people ethical? And I've got to say, much to my discredit, I had a response at that time. I said, actually, I don't. That's not what I do. That's not what an ethics professor does. I teach them about argumentation and how to drive their argument further forward, but I don't teach behavior.

And I reflected on that significantly afterwards. I've never asked Ashley what she thought of that moment, but I reflected on it a lot afterwards. And I'm like, my god, that's true. I don't actually teach behavior. And I started at that time actually to look into psychological literature, sociological literature, organizational psychology, literature out of business schools to try to look at what it is that actually could support ethical behavior.

So what cognitively or intuitively is ethical behavior, psychologically speaking? And how could that actually be supported in a corporate environment? And I started to see really that ethical reasoning, the thing I'd been teaching for 18 years, as valuable as it was, was the tiniest part of that whole equation.

KIMBERLY NEVALA: And you also -- correct me if I'm wrong here -- began to realize that the way we approach ethics academically, and I think this aligns very closely with how we tend to approach it in corporate environments, was really falling prey to what you called -- and I don't know if you coined the phrase or not -- an affirmation illusion. What is an affirmation illusion?

STEVEN KELTS: So ethics professors, or it could also be sociology professors who are dealing in tech ethics issues, or psychologists who are dealing in tech ethics issues, when you teach students about an issue, they are the original stochastic parrots. And god bless their souls, but they made it all the way through their educations and into a university by reading the room really well.

And so they can read the professor really well. They hear your judgments about things. They see the logic you are using to create those judgments, and they're likely to give that back to you.
And so the fact that they have affirmed in the classroom that they too judge x instance of AI gone rogue to be socially undesirable, to be wrong, to be whatever judgment it is that you're giving them - frankly, they'll actually use that judgment as well in their interactions with you. But multiple studies of students outside of the classroom suggest that that doesn't actually change their judgment on any issue and may not change their behavior either.

KIMBERLY NEVALA: And in a lot of ways, while they are parroting back what you say - or what they think you want to hear - you've also said that, in fact, a lot of what we're doing is arming folks with more sophisticated arguments for things they already believe. As opposed to helping them think more critically and perhaps even just critique their own beliefs. Is that also a fair summarization of some of your findings there?

STEVEN KELTS: Yeah, that's right. An excellent study by Tania Lombrozo and Kerem Oktar and others actually found that out of an ethics class at Princeton University.

They found that when students who had gone through this ethics class decreased their use of intuition to make ethical decisions, that decrease was associated with a change in their ethical opinions. So they previously thought, say, that eating meat was good. They now come to decide that eating meat is bad. I don't take any substantive position on that issue, but it's one that you could look at.

But if students increased their use of ethical reasoning, that was associated with them simply giving more sophisticated arguments for the same thing that they had believed before. And I don't think anybody wants to work with that person in the corporate workplace. They're anchored to a particular opinion and just simply trying to gin up better arguments for it. So that would suggest that it's at least possible that teaching more sophisticated ethical reasoning actually backfires when it comes to ethical behavior change.

KIMBERLY NEVALA: Mm-hmm. And I found this interesting because there seems to be a bit of a connection - and maybe I've just drawn a conclusion here or a corollary that doesn't hold - with how we sometimes approach, in the corporate environment in particular, our engineering and development teams.

Which is, a lot of times, we are approaching them essentially either assuming that they don't have the bandwidth or the interest, perhaps even the capacity to assess issues broadly, and/or it's just not their job. So we bring them very often pre-identified risks or harms. And we're asking them to help us quantify or mitigate those harms as opposed to helping us identify what they may be.

I wonder if that then plays into what becomes sort of a self-fulfilling prophecy about our perceptions of how engineers, how development teams, how leaders who come from engineering and development backgrounds, from tech backgrounds, approach the idea of responsible tech more holistically.

STEVEN KELTS: I think it really does. And let me draw a contrast here and say, I've long been part of responsible tech communities, or call it the AI ethics community, in New York City. And there's a typical way to approach an issue when you are in the responsible tech community. It is, say, to point to some technology, some implementation of AI - a recommender algorithm. A lot of people criticize Instagram, let's say, and its effect on teen girls: a recommender algorithm working with infinite scroll, and it seems to have this captivating effect.

So you point to the technology and you point to the social effect of it. A wrong that it does or a consequence that it caused or something like that. And you leave the judgment implicit, right? And so you're saying, look everybody in the room, don't you share my judgment that this is really quite bad? That this effect on teen girls is bad? And everyone in the room nods in affirmation, right? A similar effect to what I was talking about that might trick ethics professors sometimes.

So that's really important. That's an important move for community formation. Which is to say, look, all of us in this room, do we share the same intuition that this social effect is bad? That helps form community. Very important.

It's also important to tell students stories like that. History does have a tendency to repeat itself, and so they may need to know these are the things that actually occurred in the past. But it wouldn't actually tell anyone how to spot ahead of time before deployment the way in which infinite scroll tagged to a recommender algorithm would have this effect. It doesn't actually tell them how to anticipate this effect.

And when you're working with an engineering team, they are the folks on the ground who are making the data choices. They are the folks who may be hearing the UX researcher tell them, this is how we think people are going to use this item. So there's a significant amount of information that the engineering team has about precisely what it is that they're deploying, which turns out to be super important to the ultimate social effect, whether it does a wrong or causes significant harms, or whatever the social effects might be.

So I would challenge people who are in those rooms and who are nodding in affirmation that many of the things caused by ICT firms within the past 12, 15 years in this country are bad, who are sort of affirming that to each other. I would challenge them to think about how they would have circumvented that with the knowledge that the developers at the time had. I think that's only a partial answer to your question.

KIMBERLY NEVALA: No, I think we'll get into some of the specifics too and the elements of good decision making.

But it also makes me think a little bit about when tech, I was going to say good tech goes bad, but sometimes maybe it's bad tech to start with. But when tech goes bad, we have a tendency to make assumptions about what people intended to happen or what their level of care or awareness was. And you also argue that this in and of itself is very, very problematic. Particularly if we have not invited those folks into a position or enabled them to partake in these conversations broadly.

I imagine that this puts us, A, in a position of folks being defensive about the tech they're putting out. But we're also creating an environment where diffusing or deflecting responsibility or accountability becomes more prominent as well.

STEVEN KELTS: Right. I feel that can be a really large problem. So there's a role certainly for ethical review boards within companies. And there's a role for experts outside of the development team to come in and say, we've scoped out this potential problem; what's the algorithmic fix to it? What's the technical fix to it? That sort of approach that you were talking about earlier. There certainly is a role for that.

But we also should be assuming that the development team is full of good people who wouldn't want to cause these social problems. They may be placed in a bad informational environment, where information about what data sets are used to make particular predictions, or what have you, isn't actually being shared with them. They may be placed in a bad managerial environment where either information is getting siloed by the manager, or the manager has a set of perverse incentives to please the managers above them, or to deliver a product which is haphazardly put together, et cetera.

But if you assume good people who are put in an environment where they need that further support, they need the information, they need to have these discussions themselves, if that's what you actually assume, I think that you'll see people spotting a lot more of these potential problems a lot earlier than even an ethics review board could.

KIMBERLY NEVALA: Mm-hmm. And so we really shouldn't be engaging these teams only by either handing off here's the identified issues we need you to mitigate or help us quantify or justify. And/or coming around on the back end when something has happened, and as you say, shaming and blaming. So labeling the wrong. And you said again, that also backfires or tends to backfire psychologically.

STEVEN KELTS: Yeah. And I think that your listeners can just think about it themselves. If their boss -- almost all of us have a boss -- if their boss came to them and said, hey, look, the only thing we need you to do right now-- we're not going to explain to you why x or y is a bad outcome, why avoiding it is an outcome desired by the organization, why avoiding it is socially desirable. We're not going to explain to you anything about the reasoning behind our judgment about this outcome. We're just instead going to tell you we need a technical fix that avoids this outcome.

Well, you're probably going to find the quickest technical way to do it. Then they're going to give you a metric of what it would mean to avoid that outcome, and you're really just going to perform to the metric. And it may not be that you've actually addressed the problem, either in the data source or in the algorithm or whatever it might be, even though you're able to show your boss that you performed to the metric.

Instead, if what you have is teams that are getting a full explanation, and indeed are engaged in the discussion and disagreement about whether a particular outcome is good or bad - they're engaged in that discussion, they come to endorse it for themselves - they may more creatively see that there are actually a bunch of things we need to change about the technology here in order to achieve that outcome, understood in its full depth.

KIMBERLY NEVALA: So before we talk about how we can do a better job - and I think this is true not just for engineering and tech teams; we just tend to assume and put them to the side when we are approaching ethical training and development, both at the undergraduate and graduate level and in corporations - I do want to talk about what we are finding works best to really increase folks' ethical decision-making skills, or even their awareness, and all of those components.

But before we do that, can we talk about the components of an ethical decision-making process, if you will? What are the elements that we need to be aware of and attend to in order to be good ethical decision makers? And I will differentiate that from assuming any particular outcome or output. This is really more about what is the practice or the process than it is about a specific conclusion.

STEVEN KELTS: Yeah. So social psychology does tend to tell us that most ethical decisions are made with our intuition. That you go about being an ethical person on a day-to-day basis, simply understanding that certain scenarios call for certain judgments and actions, and you make those judgments. You take those actions fairly intuitively. But that doesn't account for the fact that our intuitions can be changed over the course of time by our cognitive skills.

So a simple example might be that you find yourself telling yourself that you wish to be on some sort of diet. Avoid excess sugar, skip cookies after dinner, or whatever the case may be. And then actually, you find yourself just simply not having the desire for that sugar or those cookies over the course of time. So you have used a cognitive process to actually change even your intuitive behavior.

So what we might look at in terms of complex ethical decision making, sometimes called System 2 decision making, the thing that goes beyond intuition, is the ability actually to look at an ordinary scenario and to recognize that the schema that you have in mind for that scenario doesn't fit that scenario. So ordinarily, you would go about intuitively making a decision that x or y is the thing to do, but it doesn't seem to fit in this scenario.

So the first stage then in cognitively processing a new or a more processed decision would be awareness. Awareness that you are in a new situation, awareness of what the potential options are for your action. Awareness of what you will be able to cause given your own acts and what you won't be able to cause. Awareness of how it would affect other stakeholders around you. This isn't yet coming to an ethical decision about whether those effects are good or bad. It's just awareness.

So the example I use about awareness with my students is if you were walking down the street in New York City and you decided to stop and get one of those amazing New York City hot dogs on a street corner - I do it all the time. And in the hustle and bustle of the day, you walked away realizing that you hadn't put the $5 from your pocket down on the cart. You would shock into awareness. Something you ordinarily do intuitively; it's just a reflex or something like that. You'd shock into awareness, right?

And you'd be like, oh, wait a minute. That guy is probably wondering where his money is. This is a wrong or a harm to this guy. And the truth is, you may just keep walking. That's possible. Or you may decide that you do this specifically in order to avoid this harm or wrong to this guy. He's mad, and so I will respond to his imagined madness by returning and giving the $5 back. You may decide that the police would come after you. Or you may decide that you believe that there's a market value that says the only fair thing to do is for you to pay.

A lot of things could happen at that point, but the first thing I want to point out is that shock of awareness where you start to assess, I affected somebody. I did not do my ordinary behavior. Something unexpected is now required of me, returning to the cart or continuing to walk on. I'll have to continue to assess that action. I may have guilt or feel shame afterwards, et cetera, et cetera. But all those things come into awareness. And only then is judgment actually kicked in.

After moral awareness, we have judgment. That's where those arguments would come in. It was wrong to do that harm to him. The police might get me. I believe that markets ought to be fair and complete transactions, trust, things like that. That's where you make a judgment.

But just because you've made a judgment doesn't mean you actually have the intent to carry out that judgment. Because you might make a judgment that this was terribly wrong and then look at your watch and say, but I'm late for my next appointment. So that might stymie your intent.

And then there's also behavior as well. You turn around on that street and form the intent, and then you realize you're actually a salmon swimming upstream against a bunch of people trying to get to their lunch appointments on a crowded Midtown sidewalk. And then you're like, yeah, no, I'm just going to walk away. It's too hard. So the final stage is behavior.
So you've got to be aware. You've got to make a judgment. You have to form the intent to act, and then you actually have to carry out the behavior. And it's at some of those latter stages, I think, that tech development teams get stymied by the structures around them.

KIMBERLY NEVALA: Mm-hmm. And so talk a little bit-- well, before we talk a little bit about the structures around them that stymie those later stages, you said one thing about awareness is, hey, this is something that has sort of changed the way I would normally approach a problem or how I might normally assess or judge it.

But is it also true that if we need to prioritize reasoning over intuition, we have to be also aware of or actively asking ourselves if it's not just when something jolts us out of our normal behavior? But there's also an element here of we have to ask, is our normal behavior correct? Does everything in my experience, in my previous judgments approach it?

And I guess I'm asking that question awkwardly. But in some cases, it seems to me there may be decisions we need to make or positions we may want to reevaluate where there may not be an actual sharp external driver to force the discussion. Where we may actually have to be proactive in looking outward. So it's going out to seek and test the boundaries as opposed to someone, something, coming in and giving us a nudge.

STEVEN KELTS: That's right. I think we can have reflection upon our intuitions and we can also have sort of meta reflection on our reflections. So we might actually look at the judgments that we ordinarily make at that second stage where we start to apply reasoning like, I believe in trust in markets, or the cops might get me, et cetera.

We can have meta reflection on those reflective moments as well. And we could start to say to ourselves, hey, look. Doing this just because the cops might get me is not really a great reason for doing it. Instead, I wish to actually commit to a principle of trust in markets. And I need to, I want to, start behaving from that principle.

And there's some indication that as people mature over the course of time, they move a little bit more towards that principled type of judgment, when judgment is called for, when they're not acting intuitively. So I think that that is important. We can reflect on our intuitions and have meta reflection about our reflections even.

KIMBERLY NEVALA: Now, you made the comment when we're thinking about ethical decision making, there's awareness, there's judgment or reasoning, there's intent and behavior.

And you said that very often, the environments that technical teams, developers, engineers find themselves in sometimes stymie or even discourage - well, that's my word - discourage their participating in some of those latter steps. Can you talk to us about the conditions that, inadvertently or otherwise, may stymie teams from engaging in these activities?

STEVEN KELTS: Yeah, I think that organizational psychologists and others, especially in business schools, have been generating incredible insights on this for near 50 years now as they've studied the way that organizations act.

There are a lot of things that an organizational structure can do to stymie any part of those four steps of the cognitive decision-making process. Again, I want to point out, a lot of judgments are made intuitively, or a lot of decisions are made intuitively, even within the moral realm.

But when a new judgment needs to be made, when the second process is actually kicked in and a person needs to become aware of all the circumstances surrounding them, et cetera, think of information flows within your company. Are the people who are genuinely making the choice of what data set to use, what loss or objective function to use in order to train a model, are the people who are making those decisions aware of enough information about what the others around them are doing that they can see the interaction effects between, say, the data set that was chosen over there and the loss function that they're choosing now here?

Then, of course, at that stage of intent, there's all sorts of things that can misfire for people. It can be the case that they have a stock option grant that they receive at a certain time and that vests at a certain moment. And they say to themselves, I really just care more about this than I do about waving a red flag about a particular product within the company. It could be that their boss says to them, this decision really gets made at the level of the ethics review board, so let's just punt to them, right? And then that person has been ethically un-enabled in certain ways. They're like, oh, I can actually turn off that radar detector that I was talking about earlier. I can turn off that awareness because the decision is going to get made by somebody else.

So there are lots of different ways that there are structures around us in our workplaces that turn off the ethical decision-making process. And we just want to make sure that people are empowered to have that ethical decision-making process on. Not that they're loaded with heaps of responsibility, but that they're empowered and told, you too are an important input to this process. What you know is important for all of us to know to derive the best product and to make the best ethical decision.

KIMBERLY NEVALA: If the environment is conducive to that, we'll stipulate that for the moment, what is it that we need to do to then actually enable? In terms of training, how should we be approaching giving folks the skills, giving them the capacity to engage in ethical decision making? How have we approached training in the past, and what are maybe some of the new or different approaches we should be considering moving forward?

STEVEN KELTS: Yeah. So that's where the idea for my Agile Ethics program, working with students, comes in.

In Agile Ethics, the students actually work through a real case. Some of them come from Google, from Meta, et cetera. They work through a real case, and they do it by playing a role within a development team. So you might have the product owner or a product manager on the team. You might have somebody on the team who's playing the CEO. They're all also given explanations of their own incentives and their desires, and what they want to get personally out of this work situation, et cetera. And they work through an ethical problem in the way that the development team itself encountered the ethical problem.

So at first it seems like just a choice of data set. And then second, it just seems like a choice of the algorithm used to process that data. But then it starts to emerge that a particular social problem, a wrong or a harm of some sort, is actually being caused, and now they have to mitigate that.

And they're discussing very quickly as a team what to do about it, taking action. And then the next sprint in the game shows up. And they generally tend to find that whatever action they took, there's some new consequence that they need to deal with.

So this is meant to prime students to be, first, morally aware that the choices that seemed as if they were merely technical at the beginning of the game end up having social consequences at the end of the game. For them to practice judgment. That is, to look at the unfolding social consequence and realize that they might have disagreements, and to work through those disagreements. To realize that disagreement about these things is OK. A decision can still get made, and you can still work on mitigating some harms, some biases or wrongs that are done.

And then to think about how it is that in their own work on a development team, they'll have to form the intent to do better despite all the headwinds, despite the monetary incentives, despite the fact that their CEO can just go ahead and change the decision that a team made. That happens in some of the games that my students play. Despite all these things that might stymie intent and behavior, they need to continue to push forward. There's really no choice but to do that.

And then in corporations, I know they're working on these sorts of bottom-up approaches as well. Like at Google, the Moral Imagination workshops, which you can find and read about online. The Moral Imagination workshops give engineers some really brief descriptions of the ethical arguments that people come in and make. From just very brief descriptions of what it means to have a duty, what it means to have a responsibility, what it means to do a wrong, what it means to cause a harm, et cetera, what it means to care about consequences - from really brief descriptions of some of these things that ethicists themselves know are important to define, the engineers can then go and develop really sophisticated and complicated arguments about how to do their duty in the particular technological scenario that they are a part of. About how to think about harms, consequences, goods, bads for an end user, et cetera. So you really don't need much in terms of ethics lectures. I don't want to talk myself out of a job here.

KIMBERLY NEVALA: [LAUGHS]

STEVEN KELTS: You don't need much in terms of sophisticated ethics lectures in order to get people doing all four stages of this process by themselves, in the right order.

KIMBERLY NEVALA: Is this a form of simulation, and what is the ultimate objective of it?

STEVEN KELTS: Yeah, that's right. It's a form of simulation. I do gamify them in a way. So there's an entire computer system that the students use that will actually score them on their choices, and they can go back and look afterwards: in sprint 2, did I make the decision that ended up causing the ethical, moral, or social problem down the line? At the end of the game, there is a person who's declared a winner. And all they get for it is applause from the audience. There's no prize for it. It's just a way of keeping track and sort of incentivizing active thinking and active role playing.

But yeah, it's a simulation. And some evidence that we've gathered in studying students of mine who've gone through one of my classes - the classes include the role plays - suggests that indeed, there is some effect on their moral awareness. Indeed, there is some effect on their wariness about the institutional environment that they will be put in. Some perhaps less trust that organizations always do right. Some increased belief that engineers themselves can handle these moral situations and are well-intended as they do it. And so we've shown some really interesting effects that I think are positive, that the students will carry forward into their work environments and be able to make some better decisions.

KIMBERLY NEVALA: Mm-hmm. And I could see using the same kind of approaches in the corporate environment. Where one of the things we get caught on a lot of times is just, tell me what to build, or tell me what I can and cannot do. And this is an area, and AI in particular is an area, where we'll never have the definitive list of what you can and cannot do.

There will be cases that have very clear red lines or where experience has shown that, no, in fact, we've now learned that this is bad or harmful, and for these specific reasons. But it strikes me that this then is a way of helping folks think and put together, as you said, the cognitive process so that we can assess a new situation without needing a very definitive this is good, this is bad checklist of you can do this, you can't do that, because that list is always changing.

STEVEN KELTS: In the Google Moral Imagination workshops, they work with sci-fi scenarios. Some of them are quite funny, laughable. They contain puns, and so forth.

But the engineers themselves are actually put in unusual situations. So they're told, you are a representative of an alien being who has arrived on Earth with an orb that will make all people happy. And make the best arguments that you can for sort of deploying this orb. So by being put in an unusual-- that would be for a team doing some work on affective computing, right? They are, in fact, trying to change the emotions, in some sense, of a user. And they get to ask the question, would that be a good thing? What would deployment look like? What are the tripwires and fault lines that we want to try to avoid?

I've done some similar stuff with modified versions of the Agile Ethics workshops in smaller companies.

And one way to think of it is even just to get buy-in for something like the NIST risk management framework. You're telling people that they need to be concerned about privacy in their data sets. You're telling people that they need to be concerned about bias within their data sets. You're telling people they need to be concerned with things like post-deployment monitoring of the performance of the algorithm, incident reporting, et cetera. Well, by role playing through scenarios where these things become a little bit more alive for the team, you can see more investment in that. It's not just a box to tick on a form somewhere, although there will be a form, and it will have boxes.

KIMBERLY NEVALA: Always a form.

STEVEN KELTS: But it's not just rote documentation. You've enlivened the reason that the team is doing that. And from each team member's perspective, they can think about what they know about the data set, what they know about how the algorithm was designed, and so on and so forth. And inform the rest of the team as well about the purposes of the risk management framework beyond just the processes of the risk management framework.

KIMBERLY NEVALA: Mm-hmm. Now, I could see that folks could perceive this to be strictly performative by an organization or a company if we only practice this in training. Because the environment in real life, on the ground, is one where we're usually under deadline, with all the pressures that come with having to deliver and deploy these things very quickly and keep up with everyone around us.

So what is it that managers, executives, other decision makers - and I imagine the approach matters here - need to do beyond just, yeah, we've trained you to do this thing? What do they need to provide so that these teams actually have the capacity and are supported in not just practicing these behaviors in a class but putting them into practice in real life?

STEVEN KELTS: Yeah. So I've faced some difficult moral choices in my life, Kimberly. Some of them have to do with family. And you may relate to this, you or any of your listeners.

But sometimes when you have to make a sacrifice for family, sometimes when you realize you're going to have to reorient your life around that sacrifice, it doesn't feel good in the beginning. It feels gut churning. And there's some part of you that's saying, you could avoid this. You could walk away from this. You don't need to do this. Is there another way?

But you gut it out, and you commit. And maybe you reorient yourself around that family commitment, because there's somebody in need who needs you. And you find that eventually over the course of time, it just becomes who you are. You're not gutting your way through a sacrifice for a sick child or a niece who's lost her father. You're not gutting your way through those things anymore. It's utterly and completely who you are.

So I think that one hoped-for effect of doing simulations like this starts with just introducing it. And look, if a team member comes into a room for one of these simulations grumbling, oh my god, another corporate training - I can tell you from experience, from working with the Google people and from doing Agile Ethics - a few minutes into it, they're playing the game, making arguments, thinking about things. And then they're discussing it for hours, perhaps days afterwards.

So one of the things that the corporation can commit to is just time to let people think about and practice these moral actions, a commitment by their managers to give them that time. Think about the commitment, the gut check commitment you sometimes have to make for family members, et cetera. If managers show this is important to us, we are all going to do this, role plays and simulations, I believe, again because of the data that I've collected, can have this effect on cognitive decision making.

And then the hard stuff comes, which is that a lot of these processes do require a slight increase in time to deployment of an MVP, or an improved product, or a new feature, et cetera. And companies need to be willing to do that. There are difficult instances. I think we all know that Gemini was held back for quite a long time by Google, in part because of concerns about its potential adversarial use, potential bias within the model, and so on and so forth. We all know that Google suffered somewhat in the way that they rolled out Gemini.

But hopefully, most companies can come to see that these delays are not fatal. That these delays are short. That these delays are worth it because the product is more reliable, because the user is more satisfied with them. And over the course of time, they can commit that small amount of further development time to allow the team to actually enact these decisions and these processes themselves.

KIMBERLY NEVALA: Well, and one - I don't know if it's… well, I think sometimes it is an intentional mechanism to deflect or to pass on accountability and responsibility.

But particularly in the AI space, this question becomes really interesting. We see this with a lot of the large language models, some of these things that are called foundational or frontier models. Where, as we are asking folks to anticipate, to start to practice foresight - because again, perhaps incorrectly, I start to jump to what it is that we're trying to teach people to do: to encourage teams, including development and technical teams, to actually have better mechanisms to anticipate consequences, to look forward, to try to assess the realm of possibilities and probabilities, however small they might be.

But in a lot of cases, there is sometimes this also intrinsic - it's just a question that comes up a lot now - which is, well, I can't be responsible for somebody else's actions. I don't know what context people are going to be able to use this in, right? Tech's not good or bad. It's all in how you wield it. And that argument wielded properly, again, gives us that quick exit out the side where it's, well, this is not really on me. It's going to be on whoever deploys it or uses it because I can't be responsible for the actions of everybody out there. How do we address that type of issue, or can we?

STEVEN KELTS: That's right. I think you can't be responsible for the actions of everyone else out there. And it would be a cognitive overload, it's sometimes called ethical burnout within the literature. It would be a cognitive overload to try to tell someone that they are, in fact, responsible for what everyone else on the team does.

But they can actually reorient their responsibility by being reminded of the users who will use the product - it's your 14-year-old daughter who's going to get her hands on this product eventually - and take responsibility for the things that they can actually control.

Sometimes it's just having the courage to state an opinion about something. If you see that another member of the team, or another team, is actually responsible for question x or y, and you do not have a window into how they're deciding that question or what technical solutions they're implementing, just state: I have concerns about this. I'd like to think about it further. I'd like to know more.

Just to be the one who-- that doesn't necessarily make you a whistleblower within your company. And yet, it takes courage to raise your hand. And sometimes maybe the practice of that courage is the thing that you are responsible for.

KIMBERLY NEVALA: I really liked - you've used this example before. And I like it because I think it's a bit of a controversial example, but it highlights something that we're talking about here as well.

You sometimes talk about the exemplar of a gun manufacturer. Now, there is a wide range of perspectives out there about whether guns should be available or not available at all at a baseline level. But beyond that, you've argued, it still doesn't absolve gun manufacturers from at least taking some base level of caution.

STEVEN KELTS: I think it's because guns and their use within society is so controversial that it's the perfect example.

So let's say you were working for a gun manufacturer and that you had no particular moral qualms about that. You had no particular moral qualms about the use of guns in school shootings, CEO shootings. Sorry to mention difficult examples. But you had no particular qualms about that, and you did think that it was the user of the gun who was responsible for those things.

That doesn't necessarily mean you would build a gun without a safety on it. You would build it so that the intended use of the gun is guaranteed, so that it doesn't go off in the hands of the user as they're attempting to use it in the intended way. It doesn't mean that you would use subpar quality metal and say it doesn't matter what ends up happening to the user - if the gun explodes in their hand when they pull the trigger and it actually takes their arm off or it kills the user, et cetera, well, look, that doesn't matter. You would still build the gun in such a way that it could be controlled by its user so that their intent is actually enacted.

And so even if you have no ethical qualms about guns, that doesn't mean you're off the hook entirely about how you manufacture them. Even if you had no ethical qualms about how a teenager uses your social media platform, that doesn't mean you would irresponsibly build it in such a way that it would go off like a bomb in their hands.

KIMBERLY NEVALA: Yeah, it's an interesting and very, I think, pointed example. And it certainly speaks to perhaps a little bit of the evolving regulatory and legal environment, but also a core level of -- I don't want to say care, because I think that also has its own implications. But there's a core level of accountability or a base.

There's always going to be a foundational level of accountability and responsibility that can be established. And therefore, there is still reason for all organizations, and with AI in particular, regardless of the kind of model, to engage in this type of thinking and activity. Well, that's my opinion anyway.

STEVEN KELTS: Yeah, correct. I think if you are designing an algorithm that you want to be of use x for a user, the example sort of holds.

You want to design it in such a way that it doesn't backfire in their accomplishment of objective x, that it doesn't somehow tempt them into wrong or social effect y, that it can't be misused by adversarial users to accomplish something that is not x.

There's still -- I think care is actually a good word for it, Kimberly. I think care - being careful, doing your due diligence with respect to product design - means that your product will come out better. Upon its launch, it will more effectively help your user to accomplish goal x. And I don't think that that's any sort of moral imposition or too much strain upon a person to think about those things.

KIMBERLY NEVALA: So as we wrap up this conversation - there are a lot of additional areas and details, I think, that we could certainly dive into - what is it that you would leave folks with or encourage them to do, either as individuals who work within these teams or more broadly as organizations, to really promote and move forward more ethical decision making or ethical awareness broadly?

STEVEN KELTS: Empower your engineers to make ethical choices. This is really all for me about empowerment.

Where the rubber meets the road, where the ethical criticisms that we all are attuned to, where that meets the road, where risk management frameworks meet the road is really with the dev team itself. So empower your engineers to have these discussions.

Not everyone's going to be concerned about each issue raised. Not everyone's going to want to be part of these discussions. But if you empower your engineers to engage in these discussions, equip them with the tools, the arguments, the awareness, the management reporting structures, and even the incentives to engage in these conversations, those who are invested in it will show themselves.
They'll help to steer the process in new and better ways. They'll become better at foresight and anticipation of potential social problems down the line. And hopefully, we'll see a better product environment in the future because of that.

KIMBERLY NEVALA: Awesome. Wise words and a good call to action to end on. Thank you so much for your time and sharing your insights and work today. I know there is lots more to come, and we will point folks to those resources in the show notes as well.

STEVEN KELTS: Thank you so much for having me, Kimberly.

KIMBERLY NEVALA: Anytime, and I mean that truly and honestly. So yes. To continue learning from thinkers and doers and advocates like Steven, please subscribe to Pondering AI now. You'll find us on all of your favorite podcatchers or now on YouTube.

Creators and Guests

Kimberly Nevala
Host; Strategic advisor at SAS

Steven Kelts
Guest; Lecturer, Princeton School of Public and International Affairs