Ethical by Design with Olivia Gambelin
KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.
In this episode, I am extremely pleased to bring you Olivia Gambelin. Olivia is a renowned AI ethicist who works with product teams and/or executive teams, sometimes at the same time, to innovate responsibly with AI. She is the founder of Ethical Intelligence, which hosts the world's largest network of responsible AI practitioners, and also the author of the book Responsible AI: Implement an Ethical Approach in Your Organization.
We're going to be talking about technology as a human endeavor and values as a decision making technique. So welcome to the show, Olivia.
OLIVIA GAMBELIN: Thanks, Kimberly. I'm excited to be here.
KIMBERLY NEVALA: Before we dive deep into human value alignment, I am curious about what initially sparked your interest in philosophy and thereby set you on this path to your current work in AI ethics.
OLIVIA GAMBELIN: Oof, I would say it's probably a combination of multiple factors.
But one that I guess I don't normally talk about -- and it's kind of funny. My godfather gives me all of these books. And he's Irish and loves to tell stories and loves to tell jokes. So he ran across this book, and it's called Philogagging, which sounds ridiculous, but it's philosophy told through jokes. And this one is Plato and a Platypus Walk into a Bar. He gave me this book. I was high school age at this point. And I remember looking at it going, what in the world? And I read it.
Not only is it hilarious -- or maybe only to someone that likes philosophy, because they are all philosophy jokes -- but it sparked my interest in, what is this field? What am I looking at here? And that little by little led into my, I guess, openness towards the field of philosophy. It's not really one that you're encouraged to go into. I actually discovered it.
I went and started in the field of physics. Loved the theory behind physics. Had an advisor suggest to me, well, if you like theory, why don't you try a logic course? And now I joke that that intro to logic was my intro to philosophy -- my intro drug to philosophy -- and it has hooked me ever since. So, a bit of a combination of different factors, and those are a little bit less of my common answer, I'll tell you there, Kimberly.
KIMBERLY NEVALA: I love that, from physics to philosophy to technology. I was reflecting recently that when I was really young, my dad had commented to someone-- he was talking about me and my brother. But the comment he made about me was, she asks a lot of questions. And many, many years later, I'm like, you know, I really do. I still enjoy it. So these things show up in funny ways.
OLIVIA GAMBELIN: Yeah, exactly.
KIMBERLY NEVALA: So as I've been reflecting recently on a lot of the different discourse within the AI ecosystem, I realize I'm developing a bit of an allergy to comparative exercises between humans and AI systems. And I'm interested in your perspective on this. Philosophically, do you think our current lean, I suppose, towards positioning or trying to position humans and AI systems as like entities and to engage in these kinds of comparative exercises is productive? Or are they leading us astray, or both?
OLIVIA GAMBELIN: I'm going to answer both to start, but I'll give an explanation behind that, because I know it's the dreaded answer: well, it depends. But currently, I would say that some of these comparisons are creating a narrative where it's us or them -- that kind of mindset. And the funny thing is, the them is AI systems.
But we put ourselves in this false dichotomy of, well, AI can do it better in this sense. But the ways that we're defining good or defining intellect are very narrow in the case of AI. So when it comes to the comparison game with humans, we're, of course, going to lose in, let's say, spotting patterns in big data sets. Our brain doesn't have the capacity to do that kind of computation. But that's not to say that we're limited in that sense. It's just, we don't have the mental space to process the amount of data that AI does.
But that is a poor comparison, if what we're defining intellect by is the ability to process massive amounts of data. That's not actual intelligence. That's a very specific type of computational power. So in that kind of comparison game, humans are always going to be the losers. But it's not really a fair comparison. It's not really one that's worth playing.
For me as a philosopher, though, I think it's quite interesting to find the gaps as we're building these AI systems that reflect human intelligence, that reflect our patterns and our habits and how we move about the world. For me, it's very interesting. And maybe it's less of a comparison game and more of an understanding of what we've built with AI. And then the gap above that, where it becomes uniquely human, where we are able to bring in aspects like emotional intelligence or the ability to understand and read body language. That, for humans, then becomes something that I like to call our superpowers, almost.
I guess that's not necessarily comparison. I'm not sitting here going, AI versus human. Instead, I'm looking at, what can't we copy? What can't we reflect? That's something uniquely human. And then how do we strengthen that instead of trying to copy it, trying to create a comparison with AI?
KIMBERLY NEVALA: And I believe you've also written or said in a slightly different context that sometimes, if we over-rely on trying to find or identify and lean into the human capabilities we can replicate with a machine, it actually sometimes leads to fairly shallow AI tools and shallow AI systems. So the products we're developing are, in and of themselves, shallow. Remind me of the context of that statement and why that might be important.
OLIVIA GAMBELIN: Yeah, so that's actually pointing directly at my current research. I've got many different levels and layers of obsessions. I call them obsessions and side quests sometimes.
But this is the one that I'm really digging into right now, and that is the fact that we keep trying to embed -- reflect. We have this language of, we want AI to have our values. And I reached a point maybe about six months ago -- this is a very recent deep dive, but I see the legs of it, and this is going to be my deep dive for quite a while -- where I was looking at it going, yes, we can create technology that is reflective of our values. But it can only be reflective to the point that we are designing those capacities and that we understand those values ourselves.
So, for example, let's talk about curiosity here. It's one of my favorites -- not necessarily a favorite value, but a characteristic that I like to embody. With the value of curiosity, we have to understand in humans, in ourselves, what it means to be curious. And if we're trying to reflect that into AI, well, you can get a surface layer of curiosity -- we'll say, a randomized recommendation system. That can be kind of curious, where it starts to pick random subjects. It'll suggest random subjects. Yes, OK, that's curiosity on a surface level.
But underneath, the depths of curiosity go all the way into a person's confidence in functioning within uncertainty and their appetite for risk and their desire to expand their knowledge base. It's not just checking out random topics. You can have vertical curiosity, where you go into depth on subjects, like I'm doing right now with this obsession. Or you can have horizontal curiosity, where you're going across multiple subjects.
All of these are different types of curiosity. It becomes this beautiful concept, and when we're just reflecting it on that surface level into AI, it's scratching the surface, but it's not necessarily curiosity in its full form. So what's relevant now, and my current focus, is that I'm trying to flip the narrative a little bit here. Kimberly, I don't have the answers to this right now. This is quite literally my research. It'll go into the next book. I've got a lot of companies and founders and people that have been willing to experiment with me on this.
But I'm trying to shift that focus -- we'll use curiosity as the example. Instead of creating an AI system that is reflective of our curiosity, what if we instead shifted that and said, well, what kind of AI system would create the environment for the person using it to be curious? So instead of those values being a characteristic of the AI, how do we create it? How do we create AI products that enable us as humans to have those characteristics, to be curious at the end of the day? And you can use any value here, but curiosity is just a fun one to be able to go through.
KIMBERLY NEVALA: Yeah, I really love this, and it resonates with something Eryk Salvaggio said to us as well about the topic of creativity. He was saying the wrong question for us to be asking -- or the wrong goal -- is, is a machine creative? The question is, what is creativity? And then, how can we use this as a tool within the creative process, not just looking for a creative output? We'll all watch with bated breath for the results of all of that research, which we are all curious about already, I'm quite sure.
But this does align well with, I think, a reframe or a flipping that you suggest. And it comes through in the book, but also in some of your other work, about how we often talk about human value alignment in AI systems. So if you go out today and search the topic or look for discussion on it, very often when folks are talking about human value alignment in AI systems, what they are talking about is trying to develop so-called ethical machines. So machines that are capable of moral decision making and making value judgments, for instance.
And this is an area where I found your work really interesting and important. Because as I understand it, the argument that you make is that ethics is not about imbuing the technology itself with values, but about humans using our values to direct how we develop and use the technology. And that's a fairly fundamentally different frame.
So first, do I understand that correctly? And don't hesitate to correct me if not. And if so, why is that reframing so important?
OLIVIA GAMBELIN: Well, first, yes, you understood that correctly, of course.
And second, the importance of that reframing is, through my experience between work and research and just conversations, 9 times out of 10 the challenges that we're facing with AI are human challenges. They boil back down to the people and the processes behind the scenes of the technology.
The technology is kind of like the tip of the iceberg. It's the result of all of the operations and design and protocols that were used to build it. It's the result of the people behind the scenes and their decision making and the values held by the company that built the product. It's rarely ever down to something as simple as model architecture.
And I know that sounds controversial to say, but you can work with any model architecture. You can work with any data set. Those are the technical parameters that you have to work within, but they're not necessarily going to be the determinant-- very, very rarely. Of course, there's a little asterisk because with philosophy, there's always an outlier. [CHUCKLING] There's just always an outlier.
But very rarely are those kinds of decisions going to have the massive ethical and value-based implications on the other end. It's more about understanding, as a business and as an individual: what are my goals? What are my objectives? What are my values?
And I put values on the same level as business objectives because values are what build a mission, are what build a brand, are what build that sense of loyalty and work and trust within a company. Placing those on that same level, what are those end objectives and values? And how, then, do we align our people - the people that are building our technology - how do we get them on board? And then how do we have the right protocols and processes that enable that decision making to create the technology that aligns with those values?
It's all about aligning our practices and our people in order to achieve those reflections of our values. And I know we just talked about how reflections can be a bit shallow, but I mean those reflections as the enablement of those values in our technology. So AI is never really a technical story. It's usually a human story at the end of the day.
KIMBERLY NEVALA: And you also make the point that there's this odd hesitancy for organizations, in particular, to enable individuals within organizations to just straightforwardly make value judgments, or to say they're making value judgments, or to identify the value judgments that they're making.
It's a little bit odd because in point of fact, we make value judgments in our personal lives and our professional lives all of the time. Do you think we're becoming more comfortable with values? Not just sort of a performative list on a page, but values as an actionable attribute or element in decision making.
OLIVIA GAMBELIN: I would say that we are becoming more comfortable with the concept that values do have this impact. But I would say we're actually becoming less comfortable with making those decisions, which sounds a little bit odd, but let me explain here.
We recognize, more and more, that these value-based decisions have potentially global impact -- as we've seen with, well, name the AI scandal that went through the media. We're seeing the impact that these value-based decisions have.
And now it's being accompanied by almost a fear. I'm seeing, within the responsible AI and ethics space, this lockdown of, we can't get this wrong. And therefore, we have to go step by step by step in these little steps. But that's actually resulting in companies almost being faced with this wall of inertia. Of, I have to have this 100% right from the start. And if I start it and I don't have it 100% right, then I've done it wrong. And so I shouldn't start it, because I don't know how to do this right. And it becomes this odd cycle of, we just won't touch it. Or if someone asks us, we'll say, maybe.
I heard this story the other day of a company going through a vendor procurement process. They were looking to procure a specific software. And part of their procurement process is to ask, well, what's your responsible AI policy? They got all the way to the end and they had this great vendor, and everything aligned. But that vendor said, uh, that's proprietary knowledge we can't share with you. Exactly -- a little bit of a, mm, that seems odd. Why wouldn't you share a responsible AI strategy? And so the procuring company said, all right, nope. We're going with another company because we can't trust you. Why are you hiding this?
So it's this sense of -- I can boil it down to: with tech, we spent a long time pushing accountability onto the user. Pushing the accountability onto someone else. You see that quite often in the startup ecosystem. I'm sitting here in Silicon Valley today. You see it quite often in the Silicon Valley startup ecosystem, where it's, well, as long as we put up the disclaimers, then it's the user's fault. Or, this is up to the users, or this is up to the stakeholders that misused our technology. We just provided the technology. It's this lack of accountability of, let me push the accountability onto someone else.
Whereas when you're making these values-based decisions and you are stating a responsible AI policy -- something as simple as that -- you are having to take on accountability for what you've built and developed. You can't just say, oh, we developed the technology for the atomic bomb, and oops, someone used it how we didn't want them to. That no longer works as an excuse.
So it's creating that fear of, wow, we really see how things can go wrong, so we're almost afraid to start making those decisions to be able to move forward. Anyway, I'm now going in circles here, but we recognize the importance of these values-based decisions. In order to be able to fully engage with them, we have to take accountability for those decisions. But inherently, a lot of our tech ecosystem was built on this concept of, accountability is on someone else, not on the ones building it.
KIMBERLY NEVALA: Do you think that pendulum is really swinging back to us socially -- whether it's governments or the public, the folks that use this -- to enforcing or expecting a level of accountability? Or are we talking about it more right now in this space than we're seeing in actuality?
OLIVIA GAMBELIN: I see it. Especially in the younger generations, that pendulum is swinging hard.
And I would say with those generations, it's probably two things. They're used to these kinds of conversations. They seem to be much more sensitive to values-based decision making. But they also have the technical literacy. They know what to push back on. It's harder for older generations and people that aren't necessarily technically literate to push back.
Because it's behind this veil of, I don't like what's happening here, but I don't know if I can say no here. I don't like how my data is being collected here, but I don't know my privacy settings. I don't know how to go and turn those off. Versus someone that's technically literate -- and that tends to be more the younger generation -- who can easily say, oh, no, I can go turn that off and I'll still be able to do what I want to.
So that pendulum is swinging hard with the younger generations, but it is just generally swinging. We're seeing that pushback through regulation. We're seeing that pushback through the market appetite.
And the companies that I'm seeing do well amid this up and down -- I mean, AI is a constant storm of, you're doing really well one day, and the next day Sam Altman says something and you've crashed your company, that kind of thing.
The companies that are able to weather this storm are the ones that are taking some form of accountability on, because that creates the user trust. It creates this trust from the user, to look at a company like that and say, OK, well -- no one really knows what they're doing; if anyone says they do, they're lying -- but I can at least trust that they're paying attention, because they've taken this accountability on. And that's creating a very loyal market and a loyal base of customers.
So it's an underlying trend. It's not necessarily in your face, but it's growing, and it's something that I'm really watching for in this new year.
KIMBERLY NEVALA: That's good news. That's great to hear, actually. Because again, I think we see, loud and not necessarily proud, the stories of where companies have failed to uphold, or indeed to even try to implement, any of these guardrails at all. And/or have failed miserably. We don't see enough of those small, quiet, but I think very important stories of companies who really are seeing the benefit and the value generated. So good to know it's coming and people are leaning in.
Now, one of the things that I really like about this values-driven decision-making approach is that it encompasses ambiguity. One of the things we've struggled with in data governance for a really long time is that it is almost impossible for anyone, in any situation, to come out with a comprehensive list of, thou shalt do this and thou shalt not do that, that covers any and all situations.
But that ambiguity is the strength, and it is the weakness, I think, of value-based decision making.
Because it's really easy to put up a list of values or even principles. We see this with responsible AI programs. If you take a list of principles or values from a number of different companies, you could probably… I could make this a test and ask you to list off the top four or five. You'd probably have 75% --
OLIVIA GAMBELIN: I would do it.
KIMBERLY NEVALA: -- 90% of what's there for every company. We won't do it, but we know we could.
So if we want to take a values-driven approach and we want to really put out and utilize values as a decision-making tool, if you will, what do those values have to look like? How do we define them? How do we communicate them? I think you talk about strong living definitions of values. What is a strong living definition of a value?
OLIVIA GAMBELIN: Well, first of all, it's a definition to start, which is a step --
KIMBERLY NEVALA: Well, OK.
[LAUGHING]
OLIVIA GAMBELIN: Exactly! But I say that -- and you would think this is obvious -- but a definition does not mean writing the value on the wall and writing a sentence after it. Saying, like, privacy -- probably one of those values, by the way, that would land in the top five for every company -- privacy: we don't steal your data. Great, thank you.
That covers maybe 5% of the cases that you're going to come across. That is not a definition; that's just a simple statement. Here's a gut check: it's not a definition if it's something that you would stand on a rooftop or a mountain and shout -- we don't steal your data, I'm proud of this. That's probably not a good definition. It's probably more of an ego boost of, yes, we don't. That's a rallying call. That's not a definition.
KIMBERLY NEVALA: No proclamations.
OLIVIA GAMBELIN: Exactly, thank you. No proclamations. A proclamation is not a definition.
A definition is something that is going to go into depth that you can read and you say, OK, I can use this to help form specific thinking. I can use it as a reference point. And it does not apply to just one single moment in time. I can use it as a reference point in different contexts and cases.
Now, when it comes to defining these values, I find best practice is both that living and layered approach. Living definitions means they are adaptive. Meaning, as you go through time, as you mature, as you experience -- just like we do as people, but as you do as a company or as a product team -- you're going to run across experiences and instances where you go, oh, our definition right now needs to be updated. It doesn't encompass this scenario. Or in this scenario, it actually has adverse effects. Or it needs some fine-tuning. It doesn't feel exactly like we're getting the results from it that we want.
So living, in the sense that you write it down and you can adapt it. You can grow it. It's called moral maturity, if we're going to give the philosophical term here. But it's that ability to ingest -- like an algorithm -- ingest input, learn from it, and then adapt your output. Same type of concept when it comes to that living definition aspect.
And then I say the layered part. I find best practice, especially for larger companies, is to have a layered definition. Meaning, you have a higher level where this is, say, for the whole company. Then you can go down another layer and write the definition for each business unit, for example. And so that higher level definition is the root definition, but you're starting to adapt it.
So it may look different, say, for operations versus sales versus engineering. It's taking into context the specific business unit. And then you can go in another layer. Let's say we go to your product teams and you have multiple products. OK, well, what does that value -- we'll use privacy again -- what does privacy look like for each of your products? What would good look like? What would that standard look like?
So you can create this layered definition. All of those definitions, all those layers, point back to the root definition. But having those layers helps take into account the context. So it doesn't just feel like this fluffy thing that sits on the wall but instead feels like something that has been written directly for your use, for your context.
KIMBERLY NEVALA: And what are some of the -- I was going to say, I don't know if it's tools -- maybe it's tools that companies can put in place to support that? I think of this sometimes as like case law. We've got laws on the books, and then we've got reference cases that show us how those laws are interpreted. Are there mechanisms or tools like that that you think are particularly important or helpful for organizations as they're working through this? And then we'll come back to the Values Canvas and the use of that at a more tactical level.
OLIVIA GAMBELIN: Yeah, so yes. And you're going to hate this answer, but it depends on the company. Specifically, on their culture and the decision making that already exists within that company.
For example, for some companies, the make-or-break tool, the one that really is going to make that impact, may be a training course. It may be something as simple as education -- adding it into the yearly training course that all of your, say, engineers have to go through, or a quarterly workshop. It may be down to education.
For other companies, it may be down to a framework that's hosted on a governance software. I mean, I can tell you actually, the smaller companies tend towards more education. Larger companies tend towards that framework and governance software, just for scale and how those companies function anyway.
But the reason I say it depends is, when you're starting to incorporate these values, it's not one size fits all. You need to take the time to understand how your organization, almost like an organism, already works. If, for example, you have a company where decision making is mainly made bottom-up -- it's spread throughout teams, and everyone in and of themselves has agency to make these kinds of ethical or value-aligned decisions -- and you suddenly bring in a rigid decision-making framework that they have to go through, that's not going to mesh with, first of all, the workflow, but second of all, the culture of decision making that's already ingrained there. So that framework and that software tool is going to fizzle out. The adoption is not going to work. It's going to create frustration and friction.
And sometimes you don't even need a new tool. You just need to add an extra step to a tool that already exists. So that's why I say, it depends, which makes it sometimes difficult to be able to answer this. Because you need to take that time to understand, how do I complement what's already at work here.
KIMBERLY NEVALA: I really appreciate the realism of that answer, though. And the practicality of it, resisting the urge to just say here's the general framework.
This also speaks to something we get at when we're talking about expanding a company's ethical -- not just maturity, but ethical capability. And folks will say, it'd be really helpful if we can bring in ethicists who have that background and/or really bring in these very diverse, multidisciplinary teams with specializations that may not actually exist in a typical development organization or business today.
And that approach then really restricts the opportunity to engage in this type of work to very large, very well-funded, well-established companies who, perhaps ironically, are sometimes - not always, but sometimes - least inclined to do so. Which is a bit of a conundrum in and of itself.
But taking this approach, I think, and customizing it really says that every company at every level - whether you're a small business, whether you're a startup or you're a multinational behemoth - can and should integrate their values discretely and explicitly into decision making.
You created a tool called the Values Canvas for organizations that are saying, we really don't necessarily understand what it means and how it relates to everything else. So can you talk to us a little bit about what the Values Canvas is? And we'll link out to these resources in the show notes as well. How does it help organizations proceed down this path, and how does it integrate and overlay, or does it, with other corporate strategies that are already in flight?
OLIVIA GAMBELIN: So the Values Canvas -- I come from an entrepreneurial background. And one of the things as an entrepreneur: the second you even have the thought -- you don't even have to voice it -- the thought that you want to start a company, the Business Model Canvas materializes in front of your eyes. This is the tried-and-true default canvas for anyone in the entrepreneurial space.
What the Business Model Canvas does is outline the nine different areas to consider when starting a company. And the purpose of it is to help the user, the person filling it out, map: hey, these are the resources I have. This is what I'm missing. This is what I need to focus on. These are my strengths. These are my weaknesses. Here are the holes. Here's everything that I need to consider from day one.
Now, I took that same sentimentality, that same underlying need, and I translated it into values-based decision making in the responsible AI space. Because I was hearing time and time and time again that the main blocker to responsible AI and ethics and any of this values-based decision making was, I don't know where to start and I don't know what I'm missing.
Well, that's kind of what an entrepreneur does when they start a company: I don't know where to start, I don't know what I'm missing. So that created the underlying drive to design the Values Canvas. Now, the Values Canvas, what it does -- plot twist -- nine elements. It has nine elements that showcase and map all that you need in order to translate values into action.
Now, the Values Canvas is divided into three pillars, and each pillar has three elements. I know that sounds like lots of numbers going around there, but basically it creates a matrix of nine. The three pillars are people, process, and technology. And the importance here is that values, responsible AI, do not just come down to the technology. Like we were saying before, it also sits in the people and the process.
So the Values Canvas makes sure that you're covering that holistic perspective, you're getting it down into your people. You know who is building the technology, you know how they're building it and the process, and you know what they're building in the technology.
You have those three pillars. And in each of those pillars, there are three elements. And I like to call those the impact points. That's where you're translating these values into action.
So to run you through one example, let's look at the people pillar. For people, the three elements -- these rhyme, so they stick in your head -- are educate, motivate, and communicate. And really what we're looking at there is: are your people trained, are they motivated to engage, and are they talking about it?
It sounds so simple. But when it comes to any value-based decision making you need to be trained on those values. You need to understand what those mean in the context of the company. You need to be motivated to actually use those in your decision making or engage with whatever frameworks or tools or processes that have been created.
If you're bringing in a cumbersome framework and your people aren't motivated to use it, they're not going to use it. If it's adding to their already crammed schedule, they're not going to take this decision making into account. So that motivation, how you design your KPIs to reflect those values, is incredibly important.
And then communication. I can't tell you how many times I've gone into teams, both large and small, where everyone's saying, yeah, yeah, yeah, we do responsible AI. Values are really important to us. I say, OK, what are your values? And yeah, sometimes they can say them -- they all have the same values that they say. And then I ask them, well, how does that work in action? And everyone gives a different answer. And then they all look at each other and they're like, wait, what? What do you mean? What do you mean you're doing that? And, oh, you have a solution that I've been working for six months on trying to find and you've had it for a year? So that communication is so incredibly important. And that's just one pillar.
And I know that starts to sound complex, but it's simple. Solutions can be carried out that cover all of these. So you can create, let's say, a workshop that trains your teams on something as important as fairness. And in that workshop, you communicate or create a reward system for people that come up with new ways to mitigate bias. There you go, you've got your motivation. And this workshop happens every quarter, and half of the workshop is just talking about what people have been up to that month. There you go, you've got communication. You have all three impact points in one solution, even.
Or you can create solutions for each of those three elements. It depends on the company. The larger companies will have more solutions; they'll have more resources. Smaller companies will have solutions that cover multiple elements because of resource constraints.
But the important part is you're hitting these impact points. You're hitting the needs. You're hitting the exact points where values go from something being written on a wall to something that is embedded into how a company functions. That was just the people layer.
And I won't run you through both the process and technology layers, but truly, that's how it functions. You use the Canvas -- you can use it as a map. Once you fill it out and you see, oh, we're missing about half of what's on here, you know where to target your resources and your budget for the next quarter to build that out. Or you can see where your weaknesses are, or you can see what strengths you can rely on. Same as with the Business Model Canvas.
KIMBERLY NEVALA: And as you've worked with organizations really across the spectrum, from startups to, as I said, those global multinational companies, probably across a very broad spectrum of even service and product lines and so on and so forth. What have been some of the most surprising, or perhaps positively reinforcing feedback or outcomes you've seen or heard about from companies using this as a practical, tactical technique?
OLIVIA GAMBELIN: Oof, OK. I have a few surprising ones that are more on business and operations. And then I have one that's more, I don't want to call it sentimental, but one that hits closer to home for an individual.
So business-wise: for me, at the very beginning of my career, of my journey within AI ethics, I was really surprised to see the quality of innovation that comes out of these kinds of practices. And it's quite literally the innovation on the other end that surprises me, because it's aspects that I would have never even thought of.
And what I mean by that is, it's the simple difference between saying, don't do this, don't do this, don't do this -- meaning, protect your values by not doing all of these things -- and we get caught in that mindset of ethics as a blocker. You have to slow down, and it's going to tell you all the things you shouldn't do.
Versus the times where I've been able to work with a company where we get a functional framework in place, let's say, and we're working on value alignment. We're talking about, well, with this product, with this tool, how can we align with these values? What comes out the other end are richly complex yet beautifully simple solutions that I can say I never find anywhere else.
Something like a banking app. You can say we need to protect human agency. We need to make sure that people have control over their money and control over their data and information. And it starts to lock down when and where information can go within the app, how that data is shared. And that's important.
But then, if you flip that conversation and you say, well, how do we align with that ability for human agency and autonomy, where the user is taking back control over their finances -- that kind of value -- it flips over, and all of a sudden you're getting solutions like, well, can we design some type of reinforcing -- not a chatbot, but a nudging system -- that the user has control over? And they're actually able to engage with their own financial health and talk to it almost like it's another person. They get a financial advisor right there in the app, but it's locked within their own system.
So just these really interesting solutions that are coming from a different intentionality that I just don't see. So that's, we'll say, surprise on an innovation side. And I always run across surprises in this space.
But truly the biggest surprise -- and this I find every single time, no matter if we're taking an innovation approach or that risk-based approach -- the biggest surprise that I always come across is the fact that people, and I've had people use this phrase before, say: I get to fall in love with the technology again. I actually know what I'm building, and I have a whole other layer of depth and understanding that I never thought was possible.
It's like you're in a relationship with someone. And you go out, you have fun, you chat. And then you have that one night where you stay up till 3:00 in the morning and you have that deep conversation where you go, oh my God, you are a living, breathing person. And wow, there's so much underneath it all.
Same thing with the technology. You say, OK, this is cool and it's fun. And then you have that three-hour conversation, you stay up till 3:00 in the morning, and you go, oh my God, wow. There's so much beauty and complexity underneath this, and I have a whole new respect for what I'm building.
That, in the beginning, used to surprise me a lot. And now it still surprises me in some cases because that conversation, that comment, always comes up at some point during an engagement. Sometimes it catches me by surprise when it does. But it's across the board. That excitement, that joy in finding those deeper levels and layers to your work.
KIMBERLY NEVALA: So true purpose and joy in the work can exist in the corporate environment, is really what I'm hearing.
OLIVIA GAMBELIN: Yes.
KIMBERLY NEVALA: And really quickly -- we're, I know, coming up on time here. So two last questions.
You mentioned whether you're taking a risk or an innovation-based approach. Do companies have to pick one or the other? I mean, are these in fundamental opposition? Or what is it that we need to consider when deciding how to -- I don't want to say balance, because I think that actually sets them up somewhat as opposition. But when you think about risk versus innovation, is there something about the narrative today that we can reset or something that companies need to think about when they're deciding how to address these topics or these issues?
OLIVIA GAMBELIN: Absolutely. We can have both at the same time. I like how you put it there. It's not an either/or. It's not, you have to go innovation or you have to go risk or you have to try and find the balance between the two. How I see it is, it's more that the risk-focused practices are the ones that enable the, almost, foundations and freedom to embrace the innovation side.
So if you just looked at values-based innovation, and you were only focused -- especially in AI -- only focused on the innovation aspect of ethics by design, you most likely are going to miss some very important regulatory compliance and risk factors. So as you're innovating, you're going to have this little voice in the back of your head of, should we be doing that, and what does that look like? And, oh no, there's a new regulation and we missed that completely.
Versus the risk side, which is more like creating the foundational structure of, let's say, your home. It lays the foundations, creates the structure that you're then able to design and create within. So it's not that either/or. Once you get the guardrails in place, you're creating the boundaries that will then drive your creativity and create that headspace, that confidence, that you're able to innovate safely -- that you've done it without having to constantly wonder, should we be doing this?
You've already set that foundation down. Now you have the freedom and ability to engage on that innovation side.
So with risk versus innovation, it's not a balance that you're trying to strike. It's an 'and' situation, not an 'or.' But I do tend to both encourage and see companies start with the risk side, only because that's like plugging up the holes of a water tank that's already leaking everywhere. You're plugging up the holes so that you can continue filling. So risk does tend to come first, only because we need to bring some order to the Wild West first. And then we can actually go out into the next frontier.
KIMBERLY NEVALA: But it doesn't sound like it should stop there. Because offline, we talked a little bit about how sometimes responsible AI programs and these elements, when we only focus on risk, they can become checkboxes and checklists and very instrumental, if you will. So blending those two perspectives feels like it's necessary, even if it's not step one. Is that true?
OLIVIA GAMBELIN: Exactly, exactly. It's kind of like saying, hey, you got to wash the dishes before you can go watch TV. You got to make sure that you're set, you're solid. You've done what you need to do and now you get to go do the fun thing. Kind of that enticement at the end of the day. You want that to be a part of the narrative from the very start.
KIMBERLY NEVALA: Awesome. Now, there is a lot of work to be done. There are certainly a lot of challenges out there and a lot of opportunities. And I think this space, whether you're calling it ethical AI or responsible AI, can sometimes wear folks down. But you really remain ever optimistic.
What is it that just drives that optimism, keeps you so positively engaged as we move forward? And if you were talking to folks like myself, or just the public more generally, what would you…one thing and one thing only. If they did nothing else, what is it that you think they should be doing today as they engage with AI?
OLIVIA GAMBELIN: Mhm.
KIMBERLY NEVALA: You can tell me it's not a fair question. That's fair too.
OLIVIA GAMBELIN: Yeah, it's a good one. You're making me think here, Kimberly. I like it.
[CHUCKLING]
I'm definitely a naturally optimistic person, so I'm trying to separate that out mentally -- if I wasn't naturally optimistic, why would I still get excited about this space? Because, to be fair, there are high burnout rates for people in my kind of position. It's not an easy space to work in. And I've definitely gone through my dips of going, well, what am I doing in this space?
But I think for me, what it comes down to at the end of the day is the people. Because I love seeing how someone lights up when they actually get to talk about what matters to them. One of my favorite things to do is to ask someone: what do you value? What's important to you? And watch them light up and begin to talk about that.
And then to see -- I mean, a lot of my work is just asking people questions and leading them through an area of thought that they haven't necessarily explored before -- watching them reach this realization of, oh, this isn't a trade-off. I don't have to choose business or technical success at the cost of my values. I can actually make those values the core of what's important to this success. This isn't an either/or, this is an and. And there are ways to do this.
People light up differently. They get excited. And sometimes these values are quite beautiful. It's the intricacy of human nature there. That has kept me optimistic. Because no matter who I talk to, I can always ask them, what do you value, and talk to me about that.
So maybe, in that case, to answer your question, the one thing that I would tell people they should do is, I mean, ask that of themselves. This sounds -- I'm getting a little bit into self-help here, maybe. But what do you value? And instead of discrediting that, instead of putting it to the side, instead of saying, I'll do that in 60 years or hopefully that'll become a part of it, ask yourself: how does that reflect into your work? How has that led to your current success, and how can you lean into that more?
And that's really where I see people both get that joy out of their work but also build really amazing companies and products. So a little bit of a sentimental answer there, but that's kind of how I function, if I'm being honest.
KIMBERLY NEVALA: I think that's excellent, to end on a note of really positive joy. And that we can find wellbeing and develop technology in a values-driven way, in a way that brings us joy and enhances the human experience. So thank you so much for your time and your insights and for continuing to spread that light for all of us. It's really important work, and I am very excited to see what comes out of your current research, especially around that intersection of design thinking with value-centered design. So more to come, for sure. Thank you.
OLIVIA GAMBELIN: Absolutely. Thank you so much, Kimberly.
KIMBERLY NEVALA: Now, if you'd like to continue learning from thinkers, advocates, and doers such as Olivia, please subscribe to Pondering AI now. We're available on all your favorite podcatchers and also on YouTube.
