Reframing Responsible AI with Ravit Dotan
KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.
In this episode, I'm so pleased to bring you Ravit Dotan. Ravit is an AI ethics researcher and a governance advisor working with organizations to procure, develop, and deploy AI systems responsibly. Her nomination as a finalist for the Responsible AI Leader of the Year was recently announced by Women In AI. Today, we're going to be talking about barriers, real and perceived, to ethical innovation and considerations for putting responsible AI into practice. So welcome to the show, Ravit.
RAVIT DOTAN: Hi, Kimberly. Thank you for having me.
KIMBERLY NEVALA: Oh, I'm so thrilled you joined us. Now, you started out your career as an academic researcher, looking at political and social implications of technology, and have expanded your scope into the practice of responsible innovation. Can you tell us a little bit…Was there a unique spark or trigger for you to-- I don't want to say make the turn, but rather to expand your scope of practice?
RAVIT DOTAN: Yeah. I actually started with science, not with AI at all.
KIMBERLY NEVALA: Excellent.
RAVIT DOTAN: I was doing a PhD in philosophy. That's how it got started. And within philosophy there's a subfield called philosophy of science.
And one of the things that people in philosophy of science study is how values are incorporated in scientific reasoning processes. There's a question of what does it mean to reason well or something like that. And science is often held to be an exemplar of that. So people are naturally drawn to the question of, well, can we have any reasoning without values at all?
And there was a time at the beginning of the 20th century where people thought, of course, when science is done well, it is value free; no values, because that would be biased. But then with time they realized, actually, definitely not. You cannot have reasoning without any values. Then the debate just kind of developed and became it's not even about whether the values are there or not. They're definitely there. The question is: what is the right way for them to be there?
So fast forward to early 21st century. A lot of conversations were about how do we manage the values well within science. So I was embedded in this whole debate when I was thinking to myself, OK, but we've been talking about physics and biology and anthropology and social sciences for, like, a hundred years in this little sub debate. What about this new field, machine learning? Because I realized that I'm hearing a very similar narrative coming from engineers. There are no values here. I'm just an engineer. If you want to talk about ethics, that's on you, philosopher.
But I knew it was false. It was false for the same reasons that I've seen about physics, biology, all of the sciences. So at the beginning, I just wanted to make the same point about machine learning as well. I wanted engineers specifically to understand how social and political biases interact with their day-to-day work. Yes, it is on them as the people who are developing it. So that's how I got started.
But then I realized, after a while, just like the development of that debate in philosophy of science, I no longer wanted to make the point that the values are there because it was clear to me that they are. Instead, I wanted to focus on how do you manage those values? It's not even bad that they're there. It's just a fact of life. But how do we manage them well? And then the more I got into that question, the more I realized that I want to be in the industry because I want to influence the way that the values are actually managed within the technology.
KIMBERLY NEVALA: I feel like this discourse around the technology being neutral has come a long way, especially in the last five to 10 years. But we've been having this discussion about values alignment within artificial intelligence - and I don't actually like using "within artificial intelligence" because that's sort of like the royal AI - within AI systems and how do we align systems with values for a really long time.
You said it's really apparent to you that they can't be disintermediated. And the reasons this belief is false are the same reasons it was false when we tried to disintermediate values from the other sciences. But what are some of those core misbeliefs, I don't know if "misbeliefs" is the right word there, about this premise that were true for science and that just hold forth in AI as well?
RAVIT DOTAN: Yeah. There are multiple arguments. I'm going to just use my own.
KIMBERLY NEVALA: Fair. I love it.
RAVIT DOTAN: Because, actually, this was one of my first papers. Actually, two papers, come to think of it. But I'll just mention one of them. This was a co-authored paper. I now forget the title, of course, but-- what was it? "The Values Embedded In the Machine Learning Discipline," I think that was the name. Actually, it won Best Paper at FAccT in a year that I forget.
KIMBERLY NEVALA: We're going to link to it. So this won't be important.
RAVIT DOTAN: 2021 or '22? '22. I think '22. Anyway, what we did in that paper - most of the collaborators were computer scientists - is we chose a hundred papers that were the most cited papers in, I think, 2019, 2018, and then 10 years prior. And we combed through those 100 papers to look for the values that are mentioned in them.
Specifically, a question in our minds was: when a paper makes a claim that we should accept, what are the reasons that it gives? This is a question that reveals the values. Because if they say, oh, our model is better, because of what? It's more accurate? It's more robust? It's more novel? What are those things?
And another question is how do they even justify the project that they're doing? Do they have any kind of argument? Because a lot of papers will start with, oh, there's a huge problem in the world with great social importance and that's why we're doing this project. But there's no actual connection there, which means, OK, you could have preferred something more strongly tied to some kind of a social benefit. Sorry, this is a small tangent, because the point that I wanted to mention is about those values that we dug up.
So we made a list of all the top values that are mentioned. And the top, you would not be surprised to hear, are things like performance. That was the top one, and then robustness, novelty, those kinds of things. You know what wasn't mentioned? Anything related to social impact. So that was at the bottom.
So first argument, that is already a value choice. We should not take for granted that we should prefer a model just because it's more accurate. It is a reason, but there are also other reasons. That is a value choice between those values. So that's point 1.
Point 2, even when you're just focused on those "technical sounding" concepts, like accuracy, we actually dug into it a little more because "accuracy" is a broad term. It can have many different interpretations. You have to choose one when you make your argument in the paper. And we realized that the versions of accuracy that were used are actually power centralizing because they take for granted certain kinds of data sets. For example, that it has to work well for a huge data set that only a large organization is going to have access to. That's a very specific interpretation. That's not the only version of accuracy. And so we did a similar activity for the rest of the values to show there are choices that you're making, and those choices reflect certain values.
KIMBERLY NEVALA: I think this is really important because, within the space of responsible or ethical tech, or responsible, ethical, trustworthy AI, we tend to look at or limit the discussions of values to things like fairness or equity or equality. Things that we might say are more tied to morals and sort of values in that philosophical sense. But I think it is actually very important that we think about somebody prioritizing accuracy or productivity or efficiency as a value in and of itself. Which then contrasts and intersects with these other components.
And now it has me thinking about-- I think I've mentioned before, I've developed a bit of an allergy to comparisons between human and AI system performance, because in a lot of cases, I think they're performative. And what you just said clarifies for me a little bit why I think I have such a reaction to that: because it doesn't tell me to what end and for what purpose. "The calculator can do math faster than I can" isn't a good measure of why that's useful, wanted or necessary, I suppose. So this provides a bit of a different lens into the whole value alignment conversation, at least in my opinion.
RAVIT DOTAN: Yeah, and another element that I think is missing is the purposes that we choose to use the tool for. Somehow it's neglected. But I think that that is one of the main reasons for impact. What am I using this tool for?
KIMBERLY NEVALA: Yeah. And can you give an example of why that's important or what gets lost when we don't have that more expansive conversation or ask that more expansive question?
RAVIT DOTAN: Yeah, OK. I need to decide. I have too many thoughts going in my mind right now.
I feel that a lot of the conversation in AI ethics and the documents we're seeing, we're seeing lists of risks. In the good case scenario, it will distinguish between a foundation model and some other type of AI. Fine, but it's very generic. In fact, you could use a foundation model in ways that are more innocuous than others. And I think that that's going to determine how bad the harm is.
And so when we think of things like character.ai, for example, we've seen recently the unfortunate case of the suicide. That is a more problematic use case than some other use cases. Like, if I'm just summarizing something to myself and it's a field I already know, the risk is just much lower.
So sometimes I feel that when we take the conversation into the more intellectualized version, like the academic papers that I myself have written, it's just easy to forget the more low-hanging fruit responsibility debate that we should be having. Which use cases should we pursue?
I think some of it may be coming from companies who are developing foundation models, because they pretend it's only up to the users: "It's not up to us." But then these things just kind of happen. Yeah.
KIMBERLY NEVALA: And so this becomes the justification for always needing to look at risk and harms at the level of the context of use. But there's also a higher level where we could look at LLMs that are, and I think this is a smaller subset of harms, just environmentally intensive. These are energy-hungry things as well. So how do we hold those two levels together? There might be some overarching considerations, risks, harms, even benefits truthfully, that we should consider at that broader level. But for most things, at a more constructive level, we have to look at the specific context of use.
RAVIT DOTAN: Yes. I think we should go back to the same mindset that we are familiar with from other areas of life.
For example, I know that water is an important resource that should not be wasted. That is why I'm not just going to keep my water hose running. And I'm also not just going to spray around just for fun because I know it's an important resource that should not be wasted. This is just something that we all know. I mean, not maybe all of us, but a lot of us. Many of us. I believe most. We understand the resource cost of the things that we do, and we take it into account. And the same applies for AI as well.
Not everyone knows this, but the resources that are used when we use generative AI are actually higher than those used when it's trained. So it is up to us to not leave the water hose open when it doesn't have to be. So it starts with that.
But I think it's a mistake when we portray responsibility considerations as something totally new and foreign. No, it's the same. We just don't know that it means leaving the hose open, but it does. So we can explain it to people and just like go back to the usual mindset that, don't be wasteful because we know the consequences for the environment.
And the same goes for negative usages. For example, I know, when I write a paper for school or do an exam, I know that I have the option to cheat. We all know this. I know I have the option to cheat, but I choose not to. And most people choose not to even when they can. Sure, they can have a note with the formulas in their back pocket. They could and sometimes they do. But for the most part, they don't.
And it's the same with generative AI. I could write a whole paper that is generated, and that would be cheating. But for some reason, it doesn't feel like the same kind of cheating. But why? Only because it's new? Only because it's generative AI?
Why don't we use the same mindset? I know I'm not supposed to cheat. I choose not to cheat. We choose not to cheat. And so we should remind everyone of the mindset that is already there. AI is not an exception to the norms that we are living in. It's not an exception to the laws, certainly, and it should also not be an exception to the norms that we're already familiar with.
At the same time, I also think that this technology affords different capabilities than we had before. And it's important to understand how the tasks that we want to execute should change, given this technology. So maybe it's not as important for me right now to generate every single word in this paper. Maybe I have a different priority now that I have this tool.
KIMBERLY NEVALA: I do wonder if the sheer ease of use, particularly of the ChatGPTs and systems of their ilk, also obscures for the average user what the impacts and implications are on the broader scale. Just like, because it's so easy for me to turn my tap on, and I'm washing the dishes. I mean, I had this conversation with someone the other day and I realized, why am I just letting the water run at full steam while I'm putting the dish over there? Like, when I was a kid, we filled the sink up - and I was from a very large family. There's nine of us. So we washed them all in one sink of water as opposed to now; how much am I actually flushing down? But it's just so easy and available that unless something kind of spurs you to think about it or reassess your usage…
And I do wonder if just that - because it seems so simple and it seems so quick and the answer comes back fast - there's a propensity to just believe it must be… it's not as energy intensive. It doesn't have these other impacts and implications because you don't see any of it. It's just hidden.
RAVIT DOTAN: That's right. And the analogy that comes to mind for me is meat. I'm vegetarian. I have been for, I guess, 20 years now.
And one of the arguments that people make from the vegetarian or vegan side of the map is that people don't notice what it is that they're eating because it's packaged so well. And I can see a similar argument here. And so don't get me wrong, I'm not the person to convince anyone to be vegetarian. That's not my point. But the people who are making those arguments, one of the things that they try and show is, let's make it more clear. Like, yes, it's packaged in this very hygienic way, but don't forget where this is coming from. And don't forget what this actually is.
Same with water. I'm Israeli, and Israel is a country that doesn't have a lot of water. And so what they did for many years, they had an ad campaign about water usage. Again and again, they would run this ad. It was the face of a woman who's, like, drying out, the way the ground dries and starts to crack when it doesn't have enough water. And so they kept driving the message home to us: be careful with the water. It's really important.
That's why the water image came to mind for me to begin with, because it was drilled in so effectively. So it's a matter of whether we can make this connection. And I think it's important to make this connection to things that we're already familiar with. Instead of building a whole new idea and saying, this is AI, so it's something totally different. No, this is the same. We know that resources are precious. We don't waste water, even when it's just a click of a button. I think if we just try to make that connection, it may be more impactful.
KIMBERLY NEVALA: Yeah, that's a great point. Now, I want to turn our attention a little bit to the state of play within organizations who either say or, really with good intent, want to deploy systems in ways that are responsible. There are-- and we'll talk about the mechanics at the lower level.
But at a higher level, are there particularly common narratives or tropes that can derail this discussion of ethical or responsible innovation within a company, even before it gets off the ground? An obvious example might be the argument that any structure and oversight up to and including regulation will, by definition, stifle progress and innovation. Which can make us less inclined to want to lean into anything that is perceived as a constraint. It's certainly not the only example, I think.
But from the work that you've done in your research and also in talking with a lot of companies and advising them, are there key perceptions or narratives that need to be either acknowledged or challenged from the get-go within an organization looking to do tech better?
RAVIT DOTAN: Yes. I love how you incorporated the name of my LLC, Tech Better, very subtly there.
KIMBERLY NEVALA: I have to admit, I said it and I was like, wow, I think I just did that, and it wasn't actually on purpose. So there you go. [laughs] …you're welcome.
RAVIT DOTAN: Yes, yes. So I’m going to use things that I've learned, especially through a research project that I have just finished. And we have studied the phenomenon of ethics washing in AI. Usually papers that talk about this phenomenon are just kind of criticizing it or offering ways to do better. But I was interested in just understanding the phenomenon better. What actually happens? What is it that goes wrong there? I do think that-- sorry.
KIMBERLY NEVALA: I was just going to say, maybe you can start, too, by giving the definition that you use of ethics washing. Because I think it does mean different things to different people.
RAVIT DOTAN: That's right. And so the concept that we use, and this is a co-authored paper as well, the concept that we use in that paper is decoupling.
It's a broader concept that we're taking from organizational psychology. Organizations sometimes show decoupling between policy and practice. That I see is a broader phenomenon. Of course, it's not new. Of course, we see it in many, many fields. But you have an organization that says one thing and does another. And there are two versions of that.
One is when they say things that they do not do. So they have a policy, but the policy, for example, is never implemented. And the other version is when they do things but say nothing about it. They don't have a policy, for example. And the problem with that is that because it's not anchored in any way, it can fall apart really easily.
And there are also organizations who do nothing and also say nothing. That's a different issue. And then there's the ideal. They say things and they also do them, and that's great.
So we studied the whole spectrum of those four types. And I tried to understand, what is it that goes wrong in the cases that it does go wrong? What is the difference between companies in which it does go wrong and companies in which it goes right?
And one of the things that I found is that there were companies where you had more on the saying side of things. It's not just policies. It might be statements that they make. Maybe they talk about it in meetings. Maybe it's a part of the company mission. Maybe they say it's a part of their DNA. Maybe they have someone in charge. Maybe they have a team. Maybe they have blog posts. There are so many ways for companies to talk positively about social impacts of AI. But then they don't necessarily implement any practices, practices such as red teaming, or building features that reduce any kind of negative impacts. There could be this gap.
I did see a difference between how it happens in small companies and large companies, which was interesting. I'll focus on the large companies for now. And we've noticed that there are, I think, four kinds of ways that it falls apart. You asked about tropes, so, back to the actual question, this would be the four: there are four patterns that we're seeing, and each of them, I think, is connected to a different trope. Sorry?
KIMBERLY NEVALA: I was just going to say, let's start at the top with trope one. And maybe as we go through this, too, I would be interested in that differentiation of does this tend to happen more in small or large companies or not necessarily if that's a factor at all?
RAVIT DOTAN: So I'll say the tropes that we found in large companies, and then we'll go back to the smaller ones if you'd like. Yeah.
So one of the most common things that I found in the large companies is that they have a top-down approach. So often they would start with: we're going to put someone in charge. There's going to be a policy. There's going to be a sub policy to the policy and then a sub-sub-sub-sub-sub policy to the sub policy.
And also there's going to be, the most common thing was, review processes to approve the features. And so if you want a new feature, I'm using it - I feel like "feature" is a loaded word because it means, is it a solution? Is it a platform? Is it a feature? Whatever. I'll just call it a feature for purposes of simplicity. But they'll say, if you want to build a new feature, you have to go through an approval process. And there's going to be typically a form that you need to fill out and/or a committee that's going to review it.
So that's the most typical thing that we've seen in the large companies, and that process falls apart in multiple ways. One way is that the process is too burdensome and too disconnected from the ground. So the people that are supposed to be doing it are actually not doing it. You'll talk to them, and they'll just say to you, listen, we know. Yeah, this process is even mandatory, because sometimes it's mandatory and sometimes it isn't. But even when it's mandatory, they'll say it's too much. It's too complex. We are actually trying to avoid it. We find ways. We find creative ways to avoid it. We know we're not going to call it a new feature. We're going to call it related to the older thing, so that we don't have to go through the approval process because it's just too much.
KIMBERLY NEVALA: It's a minor revision.
RAVIT DOTAN: Yeah. And another version of it is that sometimes the production teams feel really at cross purposes with the "ethics" teams. They can feel like they are two different parts of the organization that are pushing in different directions. Where they, the production team, are pushing for more products, more innovation, et cetera, whereas the safety team is pushing to stifle that.
The reason for this is how those units are positioned within the organization: they are positioned to be stoppers. They are positioned in a red tape capacity. They create processes that are perceived as burdensome, and then you have this internal tension and then you have less--
KIMBERLY NEVALA: No, I was going to say, now, some might argue that there is a healthy tension to be had in this kind of debate. But in what you're saying, I'm wondering if it becomes an unhealthy tension when it's, A, viewed as something that's a checkbox or a hurdle to be overcome, and when that discussion is not held in more nuanced ways at every step of the process, in an integrated fashion. Like,
is the problem that you've put them on--
RAVIT DOTAN: I will say something more. Yes, and I will add to it. The trope that you started with is how responsibility stifles innovation. I believe it comes from this tension. It's not the responsibility that stifles innovation. It's the way that those units are positioned that is stifling innovation.
And that is, I think, what people need to understand about this trope. It's not about the ethics. It's not about the responsibility. It's about how you build those teams and the tasks that they are given.
KIMBERLY NEVALA: So how do you do this better?
RAVIT DOTAN: First let me say another issue. So one problem is when you see that tension happening, I think, as I said, it gives rise to the "responsibility stifles innovation" trope.
But another thing that we've seen in organizations is that they develop such an elaborate framework for AI responsibility that now they're not adopting AI at all. Because they are worried, because they know it's too complex to execute. You see, that's another way that it is stifling innovation because it's too much.
And then a third one that we've seen is when there just seems to be a disconnect between the people who are responsible for the policies and the people who are supposed to be executing them. And so sometimes you can see it when you'll talk to the person who is responsible for the policy, and you'll ask them, OK, great. You have this policy. How is the implementation going? And they won't be able to speak to that.
KIMBERLY NEVALA: They direct you to the other person.
RAVIT DOTAN: If they even know who to direct me to. But I can see how that happens because policies are really important in big organizations. They're really important for communication. You can't just go to someone and say, you should do x. No, there should be a policy that says that that's your tool.
Of course you should have someone designated to be in charge of it.
I can see how it happens, but then you can see it's not even tension, like in the first case that I've mentioned. It's more like communication breakdown or something like that. So you asked how to avoid it. I can switch to that.
KIMBERLY NEVALA: Yeah. My expectation here is that there's not a single easy answer. But is there something that, directionally, is the area to look at? Because someone might be inclined to say, well, we just really need to be more discreet about taking those policies. And then we get into the RACI matrices - Responsible, Accountable, Consulted, and Informed - and drive those down to standards and methods.
But I suspect there is - you mentioned organizational psychology before as well - there's some more fundamental design that needs to happen to break this down as well.
RAVIT DOTAN: What a great question, because this is my next paper that I'm working on right now. Yeah. So indeed, there are many levers one might be able to utilize for this. There are two that I'm especially interested in and one that I can touch on later.
As I mentioned at the beginning of this, I am interested in learning from the cases in which it works. And there are cases in which it works.
So the cases in which it works that I've seen are more common in the smaller companies. I think something is not going well in the typical top-down approach. So, one hypothesis that I have about this is why don't we emphasize more the bottom-up approach? Why don't we start with a few units in the organization, start with the units that already care most about the topic or it’s most relevant to them, and build processes that are actually helping them. And then we can maybe generalize more. So that's one strand that I'm thinking about.
But the other one that we're actually going to explore in the new paper that we're starting now is leadership. There's a field of study called ethical leadership. And apparently it has been proven that how ethical the leadership is can help with similar issues in other domains. So we want to try out this connection between to what extent the leadership is perceived to be "unethical"-- sorry, "ethical."
Can we draw a connection between the level of ethicness that the leadership is perceived to have and the level of actual implementation of responsible AI practices? Because if we can show this connection, then there are actually interventions to do to improve the ethics level. It's weird to say it like that, but the extent to which the ethical commitments of the leadership are seeping through the company. And then we can see whether pushing on that lever can actually improve their responsibility level. So that's the direction that we're currently exploring.
KIMBERLY NEVALA: So this is interesting. Now, you mentioned the bottom-up approach. And I've mentioned before that I am a recovering management consultant, but one thing we always found that worked was this idea of starting with small practical projects.
And in very big organizations, to your point, you might still need that overarching mandate. Maybe there's some policies. But to start in a singular area - particularly when we're dealing with issues of governance and management - and really test out and develop a bit of a blueprint for how the organization actually operates. How your practices really work, what your development process looks like, and where you can plug these things in. And then use that blueprint to architect these same processes in other areas of the organization, within some limitations. So that feels like a proven practice. And the interesting thing about that practice is that it doesn't rely on a single best practice for every part of the organization.
On the leadership side, how do you measure the ethicalness, the level at which-- or how do you assess how ethical a leadership team is? And then does that have to be done in a way that doesn't preference what we would think of as more social or moral values; the extent to which they're fair or biased?
As you said earlier, one of your values might be efficiency. Now, we might argue that efficiency tells us a lot about some of those other types of things that we might think of more traditionally as values. But an organization could say, this is our North Star. We would probably push back on this; this gets back to the "business of business is business" trope. But you could say they believe this, and so therefore they are operating in line with their ethics, which may not be particularly ethical on some other spectrum.
So what's the assessment of - and this may just be work you're working on - but how do you set a scale for how ethical somebody is?
RAVIT DOTAN: Yes, excellent question. Fortunately for us, there's already an established questionnaire that people are using in that specific field.
KIMBERLY NEVALA: Interesting.
RAVIT DOTAN: Yeah, it has questions like, my manager incorporates-- my manager frequently asks what is the right thing to do. My manager talks to the team often about business ethics issues. My manager is someone I can trust. So I think there are eight or 10 questions of that sort. And that's how they typically do it in that domain, which apparently is a field of study, ethical leadership or leadership studies. So yeah, our plan is to rely on that because it is something already accepted that has been used.
KIMBERLY NEVALA: Interesting. Now, you also mentioned that these issues or constraints we've been talking about seem to pop up mostly in large organizations. And I may have projected an implication here that wasn't there, but that this looks different in small organizations.
So small organizations have perhaps different issues but have also potentially been more successful in embedding responsible or ethical decision making into their processes. Can you speak to how this then looks different, or if it does, on organizations on the smaller side of the spectrum?
RAVIT DOTAN: Yeah. I'll start with the successes - not that there aren't successes in large organizations. Also, I'll mention that this study that we've done was a qualitative study so we were limited in how many we could include. We had 32 participants, which is a good number for that kind of research, but also, it's only 32. So yeah, so I'm careful, of course, about overgeneralizing.
But still, in smaller companies, what we have typically seen is that, especially in the very early, early stages, they prefer avoiding policies. They are even more worried than the larger organizations about very structured processes. They believe in the importance of their ability to adjust quickly, and very structured processes would get in the way of that. So that's why sometimes they think, oh, I can't do the responsible AI things, like this or whatever, because it's too much for us.
And I believe that's a very unfortunate misconception because those documents, most of the industry documents, are very clearly meant for the larger organization. The smaller organization, it's fine that they're going to do something different. You're a team of 10. Yes, it's fine. You don't need such an elaborate process.
So for them, what tends to happen, the gap that we see there is the opposite one. We'll see the gap of: you have all these things that you do, but they're not anchored. And so there's a risk that it's just going to fall through when you do eventually grow. And that's the happy case scenario - I mean, aside from the scenario where they do both, that's the happier version of the gap.
The unhappy case scenario of washing, that also looked a little different than it did in the big organizations. We saw more instances that felt more like deceit - that's a word, though I wouldn't necessarily use it that strongly. But there is a tendency to produce documents that would be seen externally but then not necessarily follow them at all internally. The reason, I think, is the same reason for those gaps anywhere else.
I mean, the whole literature that we're using, which is called decoupling, started in the 1960s or so. That's when they first started writing about it. And what they noticed is that what happens is that we have more demands from society on organizations. However, the organizations don't actually want to change and so they want to create this barrier to protect their internal working. That was the theory.
So that's why they'll create processes that look a certain way from the outside, or even documents that look a certain way from the outside. But really, internally, they're not going to do it because they don't really want to change. And so that's something that I've seen in some of our cases. I've seen it more clearly in the smaller organizations, where you'll talk to some employees and they'll tell you, yes, we do have this AI ethics document, however, no. Or, in one extreme case, the document was intentionally written to appear one way while not being followed at all. Yeah. So that's the extreme end.
And I haven't seen something of this format in the larger companies. It may be because they are under more scrutiny or something like that. But you see how it's a totally different mindset. And it would also, I think, be a mistake to impose on them the same kind of "maturity expectations" we have of large organizations, like having a policy. Sometimes I talk to them, not in the context of this research, and they'll say, we feel that it's unfair. Why would we be seen as not responsible just because we don't have elaborate policies? We're a startup. It's not a good fit for us at this point. It's fine because we do the work. And I could see their point.
KIMBERLY NEVALA: Yeah. And well, I guess this also points to our need, when we try to assess or evaluate companies along these dimensions, to not just look for the piece of paper or what we used to call the dusty binder. It was this thick for data governance, and then we went, but they were still having basic data quality issues right down on the ground. And we realized that, again, this binder hadn't ever made it off the shelf in any sense. So being able to look at outcomes and activities as much as looking for the documentation of a process.
It's also interesting, though, because we have definitely seen very large companies - and lately, unfortunately here in the US, there's been quite a backsliding of commitments, whether it's to DEI or responsible AI or ethical AI precepts. But a lot of those, I think, maybe were just done really out in the open and with very little concern for the implications.
And it may be that that's a reputational risk or hit that a smaller company wouldn't be able to weather as well. So there might be some advantages of scale for big companies who are either ethics-washing blatantly or choosing to back away from those commitments - if they ever did them - that are very, very different for a smaller organization.
RAVIT DOTAN: Yeah. And in addition to that, there are also smaller organizations that sometimes want to compete with the bigger ones, especially by emphasizing that responsibility lens. They'll say, we can offer a service to you that they will not, because they don't care about those things. But we know that you, our client, care about this because you don't want your reputation to be tarnished because you use a chatbot that says racist things. But we'll be able to provide a service to you as a smaller company because the others don't care. So that's another motivation that I don't know is as strong for the bigger ones.
KIMBERLY NEVALA: Now, it does seem that there's a risk that you could run in an organization of any size. So one of the things that we know, especially from startups and very small organizations, is that a lot of times their energy and their impact comes from individual advocacy. It's folks who feel very strongly about, maybe, the problem they're solving or the product they're pushing or developing. Maybe it's about doing that in a way that's responsible and ethical.
In big companies, a lot of times - whether they have that sort of top-down mandate or not - you might have pockets of responsible innovation or mindfulness to some of these issues. But they are sometimes based, again, on individual contributors. And they are only as good as the relationships of that individual and their own political capital or their own decision-making capital. And as soon as that individual is gone or gets shuttered in whatever way, it goes away as well.
So have you seen that as well? And then what do organizations need to do to address that kind of an issue? Or can they?
RAVIT DOTAN: Yeah, actually I've read a really great paper, actually two, kind of about this. I agree with you.
First of all, there's the point that in smaller organizations, the weight of the founder is really big. And in research in the field of responsible innovation, even beyond AI, when they ask what the difference is between an organization that does responsible innovation and one that does not - when it comes to small and medium businesses, enterprises, SMEs, SMBs, whatever - it's the founder. That's what I've read in many of them. It's the impact of the founder, which does dwindle as the organization grows. Naturally, that's what would happen, so just to acknowledge that point.
And then the second point: in a large organization, there are hurdles that those who care are facing. I think you are right that it does come down to a lot of personal relationships. And what I've seen in those other studies that I've read - there were two that I've read recently, and one of them called them ethics workers, I think - is how they're dealing with this reality and the kind of challenges that they're facing. And in both, you kind of have to rely on your networking skills to push things forward. And it's still difficult to manage all the things that are coming your way.
KIMBERLY NEVALA: Yeah. So obviously, a lot of constraints, a lot of reasons and ways this can go badly and get stymied. But are there a common set of steps or elements that you advise organizations to address or to think about in order to progress this type of practice?
RAVIT DOTAN: Yeah. I have two ideas.
One, start small. I see very often organizations trying to start big. They want a huge framework. It collapses later for two reasons, I think. One, you already felt like you did a good thing as an organization by creating the framework, so maybe that's enough. I think it's just a psychological effect. In fact, I have read about this. It's a psychological effect that when we do a virtuous thing, we feel entitled later to do a non-virtuous thing. I've read it in the context of, I think it was a book, I forget what it was. It was years ago.
But they argued, this research says, that organizations that have a DEI policy can actually be more biased in their hiring practices. Because the mere fact that you've seen that policy is kind of biasing the way that you're thinking or something like that. So it was one of the hypotheses. And then-- yeah, it was terrible. And then the other one was that the mere fact of having a policy is our good deed. So we're done now. So we are dealing with the psychological effects, all of us are subject to them.
But then I also think it's very difficult. Like those organizations that I've seen in my research as well or in my consulting. I've seen it. It happened to me. I was on the other end of it where we work hard, we come up with something, but it's not embedded properly in the organization. And it's too big and it feels too much of a shift.
And the other side of that is connected, I think: when those concepts come separately from the production side of things, they are perceived as stopping the wheels of production. So why do it that way? Why not go instead to: we actually want to adopt AI more. But the question is how to do it right, how to adopt AI the right way. I think that's a much better framing. In fact, it's now my LinkedIn headline.
Because I think people miss out on some of the basics. Why don't organizations adopt AI more? Often, it's because they don't know how to deal with those questions. If you have a governance framework, for example, or someone who's advising you or whatever way you want to do it, you can actually adopt it more quickly.
Recently, I saw a survey from the US Census Bureau about AI adoption, or maybe it was generative AI adoption. Did you see that one?
KIMBERLY NEVALA: No, I don't think I have.
RAVIT DOTAN: It was weird because in all states except for Washington State, adoption was in the single digits. Only in Washington State, I think, it was 11% of businesses. How is that possible? Even in California and New York, like, what?
KIMBERLY NEVALA: We do have a lot of the big guys situated here, pushing these systems, so...
RAVIT DOTAN: Oh, hey, that's right. But still, I would expect California more. But I think it's because people understand the limitations, and they're worried, so it's actually holding them back.
So why have a separate framing for responsibility instead of a unified framing of we're going to adopt AI the right way? And then you don't have this conflict and tension between teams.
KIMBERLY NEVALA: Right, where one is seen to be the advocate and the other is seen to be the stopper, the detractor.
RAVIT DOTAN: Exactly.
KIMBERLY NEVALA: I guess it's that, well, it's interesting. We've recently spoken to Steven Kelts and also Phaedra Boinodiris, who made this point coming from slightly different angles. That part of the issue is that we are not taking a multidisciplinary approach in how we implement ethical and responsible AI within companies. And we tend to disintermediate or assume that the engineers and developers, who are the core, who really are the engines of these systems, are not interested or able to engage in these discussions. And until we address both of those issues, we cannot expect that any of these programs will be both sustainable and organic over the long term.
RAVIT DOTAN: Yeah. That resonates with me, and I'll say my version of it.
Often when we think of responsible AI, there are only two kinds of organizational units that get the attention. One, as you said, is the technical teams, the engineers. They're supposed to be building the product. The other is compliance, risk, privacy, all of those. But actually, those are only two kinds.
Leadership is often left out. They're not really in the conversation, but they set the tone. And all of the culture units, or the units that really impact culture, like HR, internal communication, internal education, they're left out. In fact, I was in a workshop not long ago where the facilitator was the HR person in the company, and we talked about responsible AI things. And maybe it was me, or maybe someone else, who brought up the point of, yeah, it does actually pertain to HR as well. And she was the facilitator, but still she brought up the point of, wait, it's supposed to apply to me somehow too? Yes, yes, it does.
And that's another version of interdisciplinarity. When HR is disconnected - and they have an important role in building the culture - if they're not a part of the shift, guess what? It's not going to be in the culture. The engineers can only do their job of fighting and fixing the problem if they have organizational space to do it. And we need units like that, HR and leadership, to make that space. And I feel they're often left by the wayside.
KIMBERLY NEVALA: And this ties right back to your previous comment about there's a very good way to potentially measure or predict how much an organization engages in any of this work and thought based on the moral or ethical orientation of leadership. So that's an interesting tie-in.
RAVIT DOTAN: Yes. It's also, I think, about what you actually measure. Measure what you treasure. And I'm thinking, why not measure this decoupling? Why not make it the goal of the organization to have low decoupling when it comes to responsibility? If you measure it, you can track it. You'll do better.
KIMBERLY NEVALA: Yeah. I think that question, though, of how do you measure decoupling, is an interesting one and probably also needs some… I have a suspicion that we've addressed this in other areas before, and we just aren't learning those lessons. But I may be wrong about that. So perhaps that's something else we can - it may actually come out of the research that you're doing now as well.
So all of that being said, given your research, given the practical experience you have, as you look to the community around responsible or ethical or trustworthy AI - the words change depending on context and where you're at - but the community of folks who are really pushing and promoting awareness and helping to develop the approaches and the practicum for this. What would you like to see happen for this, as we move forward here in the future?
RAVIT DOTAN: Yes, I would like to see a bit of a refresh to the way that we've been talking about AI ethics or responsible AI and all of those names. There's a way that we've been talking for the last few years; making lists of risks, for example, and best practices guides. I think we are ready to move on to the next phase, and I would love to see from all of us in our community, what does the next phase look like for us?
I think the challenge for us is to think about how we are reinventing what we do now, given that we have matured to a new phase, in my opinion. And I can say my version of it, but it's not necessarily your version of it. And I would love to hear multiple versions of it. But the main point is, what is the next phase? How do we mature the conversation? For me, there are two things.
One, breaking down the barriers between innovation and responsibility, instead switching to a mindset of we are just adopting AI the right way. I'm not the person to stop you from adopting AI. On the contrary, adopt it, but do it the right way.
Second, usually when we talk about AI responsibility, I think we are still primarily thinking about developing AI models. Not even necessarily generative AI models; I think a lot of us still have the vision of machine learning models from before the generative AI boom, and we still talk about that. Or we may even talk about foundation models. But what about everything else? It's not the only game in town, especially now.
We, I think, are neglecting different kinds of AI adoption. We're neglecting, what's the proper way to build a custom GPT? What is the responsible way to build a custom GPT? Because companies are doing that. Solopreneurs are doing it. Companies are doing it. What is the proper way to fine-tune? Why am I not seeing frameworks specifically about that? And I think we need to dive more deeply into the "less technical" end of things and more the user end of things. What kind of processes would it even be legitimate for me to build?
And I want to go back to something I said earlier on this call: use cases. What should I build and what should I not build? There should be a conversation about that because building processes is no longer the territory of only engineers. It's actually anyone, because we can all build processes - whether it's a custom GPT in ChatGPT, or those Projects in Claude, or the Gems in Gemini. These are powerful processes. Previously, only engineers could do it. I can do it. I do it. Why don't we move more of the conversation to be about that?
KIMBERLY NEVALA: Well, why don't we? Well, I think we should put that out to the audience, and let's hear what they have to say. I think that's a fantastic, again, call to action and/or challenge for us all.
I really appreciate your thoughts. And again, every time we talk as well, I always appreciate your willingness to not just answer the question, but to pose the next question. I think that's something we can all learn from as well. So thank you again for your time and insights.
RAVIT DOTAN: Thank you for having me. I also always enjoy all of our conversations, and this was no exception. Thank you so much.
KIMBERLY NEVALA: Excellent. Well, that's Ravit Dotan for you guys.
And if you'd like to continue learning and being challenged by thinkers, doers, and advocates like Ravit, you can subscribe to Pondering AI now. We're available wherever you listen to your podcasts and also on YouTube. In addition, if you have comments, questions, or guest suggestions, please write to me at PonderingAI@sas.com. That is S-A-S dot com. We'd love to hear from you.
