Auditing AI with Ryan Carrier
KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.
In this episode, I am beyond pleased to bring you Ryan Carrier. Ryan is the Executive Director of ForHumanity -- that's F-O-R -- and the president of ForHumanity Europe. ForHumanity is a nonprofit organization that is both promoting ethical AI usage and mitigating risk through the use of independent AI audits. This, of course, gives away the storyline for today's discussion, which means we will in fact be talking about all things AI auditing, including the implications of the evolving, or maybe devolving, regulatory environment. So welcome to the show, Ryan.
RYAN CARRIER: Thank you, Kimberly. It's a pleasure to be here.
KIMBERLY NEVALA: Now, you have a background in finance. And certainly in the financial sector, there is no shortage of both algorithmic processing and auditing. Those two things are not always happening in conjunction with each other. So was it that background in finance and intersection with either of those elements that drove your early enthusiasm for this topic of AI auditing, or did it come from somewhere else entirely?
RYAN CARRIER: It definitely came from that background. So, as you mentioned, 25 years of financial background. Standard & Poor's. The World Bank. I ran a Wall Street trading desk for a time.
And then the last eight years before I started ForHumanity, I was running my own hedge fund. That was something that I survived, but didn't thrive at. And I share that with people so that they know that I sit on a real chair and not stacks of cash. A lot of people assume oh, hedge fund. Made a lot of wealth, and now he's trying to give back. No, no, no. I survived my hedge fund. But also, it's part of the formation story. So in those eight years, we developed artificial intelligence to help us manage money.
So then we get to 2016. I'm winding up that hedge fund. I know AI. And if you remember 2016, we had Cambridge Analytica. We had Facebook influencing elections. We had Microsoft's Tay bot going racist and rogue within 24 hours. A lot of harms and incidents that were occurring. So all of those factors come into play.
And then I have to add the last key and critical point, which is, I have two boys. And in 2016, they were four and six years old. And I looked at this path that we were on, and again, this juxtaposition between a financial history where we live and breathe risk management -- everything is done in the context of risk management -- versus a world of technology where it's move fast and break things. That's the opposite of living in a world of risk management.
And I understand disruption. I understand the value of it. But in this context, not seeing governance, oversight, and accountability in technology in general and then extrapolating that forward into my boys' future, I don't mind sharing with you or the audience, I got scared.
Scared enough that I started a nonprofit public charity with no money and no plan. It was just the mission statement. And the mission statement remains to this day that we are going to examine and analyze the downside risk associated with AI, algorithmic, and autonomous systems. So that's our scope.
But we're going to look at the downside. And a lot of people who aren't financial people don't even understand downside risk. What's the difference? Well, upside risk is the risk you expect to take in exchange for benefits, for good results. We're not worried about the beneficial side. Why? Because the people who promote artificial intelligence, they're all about the benefits. They'll tell you all about it. They've got that covered.
What they don't seem to have done sufficiently is look at what the downside risks are, and look at a full picture of who the stakeholders should and ought to be in terms of those impacts. And so we're going to focus on those downside risks. Examine and analyze them, and then engage in the maximum amount of risk mitigation possible.
Why? Because we support the idea that artificial intelligence can be beneficial. So if we mitigate as much risk as possible in these tools, then we maximize the benefit of these tools for humanity. And that's where the overly ambitious name of the organization came from. So that's the journey and how we got here, and how it kind of came together. And yeah, there's more, but I'll turn it back over to you.
KIMBERLY NEVALA: Well, I do love an overly ambitious mission statement when it is, in fact, focused on good ForHumanity, as it were. Now, you've been on this mission and arc then since 2016, sitting perhaps not in that same chair, but in this chair. What have you observed about how the conversation has changed, and what have been some of the either most perplexing or humorous types of pushback or questions you've gotten along the way?
RYAN CARRIER: So the change is that we are growing and maturing into recognizing that there should be some form of governance. Oversight. Accountability.
We've seen that in terms of corporate adoption itself. So corporations recognizing that they might have risks and liability, recognizing that they may not want to cause harm, and that there are steps that they can take to avoid that. Policy makers and governments putting laws in place. We see it. It's not a straight line function. So ebbs and flows in terms of this process.
The best way to describe this is an analogy. Technology is in the phase where automobiles were in the 1920s and where airlines were in the 1940s. People just put them in the sky, and if they fell out of the sky, it was too bad. You took that chance. If the car broke and you crashed and you died, that was too bad.
Well, eventually enough people were being killed by these things that we said, you know what? We need to create rules of the road. We need to create safety features in our cars. We need to test our planes before we put them in the sky, because there's nothing worse than 250 people crashing to their death with nothing they could do about it. So we created a safety culture. And by the way, that safety culture is the source of some of our greatest innovations.
So this argument -- there's a predilection in Silicon Valley to argue that regulation stifles innovation. There's no evidence of that. It may change the pace, in the sense that if I could have produced something in February, maybe I can't produce it until May because I have to put some systems in place. Some controls. So yes, it takes more work. I'm not denying that in any way, shape, or form. It takes more work.
But the value of that extra work might be longer-term, sustainable, profitable operations -- however you want to manifest these results -- that deliver better results for all stakeholders. Not just your profitability. Not just your market share or market leadership.
So the change I see is a growth and a maturation looking at more and more regulation. But at the same time, I see a culture that just is fighting this tooth and nail. Is fighting AI governance and regulation.
There's also this belief in that culture that we can do it ourselves. We just need our own red team and we'll take care of it. It actually discounts the great value of genuine independence. It discounts the value of, really, human nature. As humans, we mostly want to do good things. I believe in the good in most people, and that translates through to our corporations, which mostly -- look, if you ask them, do you prefer short-term profit or sustainable profit? They'll tell you sustainable profit.
But what happens as humans is we do things, and we do them the right way, until we get distracted. Until we get busy. Until our boss says it needs to be done next week. And that keeps us from completing our compliance, our risk controls, our organizational approaches, our technical solutions to mitigate risks in general.
And the only thing that changes that is when a third party -- whether it's a teacher in the classroom or an independent auditor assessing your financial reports or your technology -- asks: have you brought something into compliance? Have you done it to completion? When that happens, we change our behavior and we do things, and we get them done. And that is the great value of third-party independent evaluations.
And look, I'll call out a specific example. OpenAI. Sam Altman writes his manifesto. And in that, he says, we should be independently audited. Have they been? Not even close. Could they pass an independent audit? No chance in hell. So it's talking out of one side of your mouth, and then not actually doing it.
And the proof is really in the pudding. The amount of harms and danger, negative impacts, detrimental impacts that are coming from all of these different tools is meaningful enough. The infrastructure that could be put in place to create compliance by design is not onerous enough to justify not taking these steps. And then you asked about funny examples. Nothing jumped to my mind, so I didn't address that.
KIMBERLY NEVALA: Well, it may be bitterly funny. Maybe that was a better way to frame it.
And you've mentioned that folks -- and business leaders in particular -- will very often say, of course we want to have a sustainable, profitable business over the long term. But they can get tripped up by those short-term goals and have a perception that this work is very much going to trip us up.
So there's this question of how we strike the right balance between what is sometimes couched as business objectives and accounting for risks or ethics. And I think balance is maybe not the right way to think about it. You very much are on record as saying this is a healthy and natural tension, and we should, in fact, be embracing it. I mean, there should be an expectation of that.
But that said, a lot of times when we're talking about ethics or risk mitigation in this context, we're thinking of it in the wrong ways. And, in fact, we're executing it wrongly as well. Can you tell us more about that?
RYAN CARRIER: Yeah. I'm going to use another live example and call out names. And I don't love to do it, but these are pretty big and public incidents.
Meta was fined $1.4 billion by the state of Texas for violating the biometric data laws. Now, what I can't tell you is whether Meta made $5 billion from that, and therefore the cost of doing business was $1.4 billion and they were happy to wear it. I don't know how much money they might have made.
The other part that I don't know, and largely reject, is the idea that compliance would have cost more than the fine. If they had actually put in the steps to be compliant with the law -- first off, I'm 100% certain it would not have cost $1.4 billion to implement those risk controls, treatments, and mitigations. 100% certain of that. I'd be shocked if it was even a tenth of that. Maybe closer to a hundredth of that. So when you're doing genuine business cost-benefit analysis, either they were not doing the proper due diligence on what compliance with the law looks like -- I believe this is true, number one -- or, number two, it's possible they made way more money than they got fined.
But that doesn't excuse the fact that they've broken a law and made a bunch of money by breaking the law. And that should not, should not, fit under any corporate model whatsoever. We can talk about breaking a law, but let's talk about breaking fundamental human values. We would not stand for this if they were employing slavery to achieve that business objective.
The problem is that it is entirely acceptable. And I've heard this many times. So this is that funny, tongue-in-cheek thing that makes me so sad I can't even stand to hear it. Oh, that $1.4 billion was the cost of doing business. The $5 billion fine over Cambridge Analytica: cost of doing business.
It needs to be unacceptable at all levels. It needs to be unacceptable to the CEO. It needs to be unacceptable to the board of directors. It needs to be unacceptable to the shareholders. The shareholders are the ones whose value is annihilated by this.
Does Facebook make a heck of a lot more than that? Of course they do. But they could have made $5 billion more, right? It's not like we're trading off $5 billion for nothing. It's really a question of just putting in the right infrastructure to meet compliance obligations and then think about the business in an appropriate context, rather than move fast and break things.
I got tired and fed up with the move fast and break things when these organizations were moving fast and breaking people and breaking relationships and breaking communities. Now, it's not OK. It's cute when you've disrupted a business model. It's colloquial and a nice, catchy phrase that we all remember when you're disrupting an old, stodgy business that needed to change and become more efficient. Love it. I'm capitalist at heart.
But when you're moving fast and breaking people and communities and relationships, fundamental values, democratic institutions like voting? Now it's not cute anymore. Now it's actually not acceptable. But that unacceptability needs to happen in the boardroom. It needs to happen in the C-suite. But it also needs to happen at the user level. The users need to reject using tools where this is their business model.
KIMBERLY NEVALA: When it comes to the corporate side of it, is this something that you think corporations are beginning to take on themselves? Or does this take an external push from things like regulations? And then, how do we do that well?
RYAN CARRIER: I'm going to say both. So regulations are driving some measure of what I would call AI literacy. The EU AI Act, Article 4, requires that all providers of artificial intelligence -- so some of your audience may know that this law was enacted in June of 2024, but much of it doesn't go into force until August of 2026. So there's time to build toward compliance.
But Article 4, which is essentially the first real article in the law, says that not just high-risk providers -- which is what most of the law is about -- but all providers of AI systems and all deployers of AI systems must provide AI literacy. AI education, if you will, to their employees and effectively to their users, those who are impacted by the systems. So regulations are driving some growth in that area, where consumers -- moms and pops, often, a retail kind of audience -- should have the ability to increase their awareness of the issues and risks in using these tools.
In addition to that, wisely, I see certain organizations getting out in front of all of this and recognizing that they're having better interactions with their customer base because that customer feels educated, feels confident about what the risks of using the tool might be. That leads to -- we talked about this term already -- more sustainable profitability. You're building trust. You're building brand relationship. It is a marketing and sales strategy.
And so I do see companies beginning to differentiate themselves from their competitors through responsible AI. Through good and robust governance, oversight, and accountability. Through providing education and training to impacted users. And so all of that comes together, I think, and continues to grow this space.
The last thing I would say is that we actually need to get to a place that we've gotten to with pharmaceuticals. So the FDA mandates that when you're allowed to advertise for a drug-- and we've all seen these commercials, right? The first 30 seconds of the commercial is the narrator showing the family of four, usually a boy and a girl, running through the woods. Pine trees and aspens in the background. Mountains and blue sky. And they're telling you why you should take this new, fantastic drug.
The FDA also mandates that the second 30 seconds of the commercial, or 30 seconds of the minute-long commercial, has to be about the side effects. So if you take this drug, you might get redness at the injection site and dry mouth. And you might get diarrhea and you might have joint pain, or you could die.
Now, what have they just done? They've disclosed what we call residual risks, what many of us would know as side effects. But when you know that, now when you go talk to your doctor -- if this is a cancer drug and you're going to die anyway and the risk is death -- at least you are informed about what the quality of life looks like versus the choices that I can make. The choice of, maybe I can even cure this.
But I'm an informed user. I'm an informed taker of this drug.
Now, let's translate the disclosure of residual risk to AI. Imagine what we might have avoided if ChatGPT, when it was first released in February of 2023, had been forced to say: oh, by the way, when you use this tool, don't use it to make judicial decisions. Don't use it to diagnose medical conditions. Don't use it to submit briefs in a legal proceeding, because that's not what it's designed for. And oh, by the way, this thing hallucinates and it will make up sources.
Imagine two things. Number one, now you've got informed users. So you don't get the doctor in Palm Springs who makes a TikTok and says, oh, get ChatGPT to write your letter to your insurance provider on why this should be covered -- with completely false sources -- embarrassing himself in a viral post. Or you don't get the lawyer who submits their legal brief using ChatGPT and gets disbarred.
All of these consequences don't happen. But you also don't have all the people who used ChatGPT, had it hallucinate, and put the output forward as some sort of trustworthy documentation without checking the sources.
And, look, this is their fault. It's in the fine print. But it's not the same as saying you are an informed user who understands and recognizes that maybe I need to be doing these things. Or maybe this isn't the purpose for which it was created. So the disclosure of residual risk is the last piece here, and that's something we don't do with artificial intelligence. We don't do with technology, really. And we really should.
KIMBERLY NEVALA: Yeah. And I would say some may argue that, well, this is why we have terms and conditions. And we've all had that experience, which is ridiculous. And I don't think anyone can argue otherwise -- even I don't read them, and I know I should. But it's mostly because of the mumbo jumbo, and because sometimes what's really being said is just that you're either making a decision to give it all away or not.
So how do we make sure that disclosures work, then? Because we hear things now about things like model cards. Or if you're in an application like ChatGPT, maybe there's a warning that pops up at the start. The argument there could be, well, this is a quote, unquote "foundational model," and you could do anything with it, and it's impossible for us to know all the ways and means. But even if we could do that well at the start of a session, our human biases, our human inclinations kick in as we go on.
So are there approaches to this that you think we need to be exploring more that are particularly effective? Or is this just really an active area of research and development right now?
RYAN CARRIER: So here, the law is being effective. And it's being effective even in jurisdictions like the United States, whether at the state level or not. The requirement is clear and plain language.
So there is a role for a contract like terms and conditions. There is a role to be able to disclose that. And whether my attorney needs to look at it or I want to look at it with a legal perspective, I obviously have the right and need to be able to do that. So that should stay.
At the same time, as you have fully experienced, as have I, we need to have clear and plain language approaches. And here the FDA model is fantastic. They can express side effects in a meaningful way in 30 seconds by stating them clearly and plainly. So clear and plain language disclosure of risks is a great place to start.
Allowing for the ability to view the detailed contracts behind the scenes, always.
And then employing things -- and we use a technical term for this -- which is just-in-time notifications. So that would be that pop-up that you refer to, that says, OK, you're about to engage with this: what could go wrong? What do you need to know before you engage? And that requirement would then be physically tested. If you were auditing that or assuring that, it would be something that everybody could see, as to whether your compliance is robust or not. And that would allow for greater discernment as to whether fairness was being applied, or whether I want to choose one product or another.
And again, those who provide good and fair terms to their users, they will choose to do this because they want to show it off. And those who are out to screw their users through the terms and conditions, they're not going to want to do that. And oh, by the way, the public needs to discern that, value that, reward that, and then we will see change.
KIMBERLY NEVALA: Yeah, for sure. I know that, again, for me personally, it tells me a lot about a company's mentality and what their ultimate objectives are. Even today, when you're trying to, for instance, opt in or opt out of data sharing or cookies: there's the Accept All button, which is front and center, like, turn everything on. But if you want to turn it all off, you're going down through layers and layers and layers of turning everything off individually, and this and that.
And so it's always very interesting when they're making it quite difficult. I do actually take a mental note and think, all right, you are not someone I actually want to do business with. And I've actually stopped in some cases. But hopefully, we'll also get to a better balance where that is an expectation in folks. And --
RYAN CARRIER: Actually, a great example. Let me jump on that for just a second. The FTC under Biden, they had basically identified nudges, dark patterns, deceptive design, and this is an example of that.
Symmetry of choice is considered to be good design. Asymmetry of choice -- where I can opt in and give them everything they want easily, but it's quite hard to opt out -- is considered a dark pattern, deceptive design, and can be fined under unfair, deceptive, and abusive practices law.
Now, the foundation for that is the FTC Act of 1914, believe it or not. But the FTC has already declared that it applies to AI, which means not only could the FTC do it, but state attorneys general are also empowered, through what they call Little FTC Acts, to engage in the same work. And by the way, those may or may not have changed at the last election.
So companies should be aware that if they're going to play such games with you or anybody else, that there are mechanisms already in place to say, no, no, no, that's dirty pool, and there's fines involved in that. That's just one of the pieces of ethical oversight that we advise. The avoidance of what we call detrimental nudges, deceptive design, and dark patterns.
KIMBERLY NEVALA: Now, you alluded earlier to really the value and importance of independent audits in particular. I think that may in some cases also extend to assessment, although what we mean by independent in the case of an assessment might be slightly different. So I would love to talk a little bit about what the work here is.
And recently, I saw a posting from Rumman Chowdhury on LinkedIn, who's also done a lot of work with, for instance, public red teaming for different applications and things. And she had commented that while she was very heartened by the increased enthusiasm or talk around things like assessment, that she doesn't see the rigor and validation of methods and processes for AI test and evaluation. Which then has, she feels, really immense or important scientific and political ramifications and creates what she called a false hope of remedy. We think things are being improved or addressed when they are not. What's your reaction to that?
RYAN CARRIER: I would agree and support the take for sure.
Insufficient rigor in data quality, information quality, causal hypotheses, construct validity, ground truth validation where it's appropriate and applicable, bias mitigation -- everything I just listed can result in harms and detrimental impacts. Sometimes to everybody. Sometimes to protected categories. Issues around bias especially impact protected categories, intersectionalities, and vulnerable populations more.
AI has the power to exacerbate historical bias in data, in societal perceptions, and how it feeds through into these models. I do not believe that most organizations engage in sufficient rigor in controlling and mitigating and treating risks associated with any of those challenges and issues that I've mentioned.
Now, data scientists are not maliciously trying to engage in bias. But what they aren't sufficiently trained in is discriminatory outcomes, a legal concept. It's not what data scientists are trained in, but they engage in this process. Data science is about finding the best way to explain a large data set. So what people don't realize is that the very nature of data science involves a tension and a tradeoff between the accuracy of the model and inclusivity.
This is not a conversation that most are having, and it's an example of where Rumman is correct. Jutta Treviranus out of Ontario is a global champion in this work. Basically saying, look, the nature of data science is to take out anomalies, outliers, exceptions. So I'm using my hands here. If my hands are close together, and that's most of the data points, my model is going to describe something that's in the center of that. And then I have data points that are out here by my shoulders, out wide, separated from the rest. By removing those data points from the data set, I get a more accurate model by sheer definition. So this is what data science does.
Here's the problem. Those anomalies, outliers, and exceptions are often people. They're often persons with disabilities. They're protected categories that are underrepresented. They're minorities of some kind, using the strict term minority. They're people, but you've just excluded them from your model.
So recognize that this tension and tradeoff between inclusion and accuracy is happening in your data science process. And recognize that when you exclude those people, number one, you might be engaging in discriminatory behavior. Or, number two, you might need to provide accommodations to those people who have been removed.
And when we do that more with our data process, we will have better results. We will engage in more bias mitigation. We will avoid harm to protected categories and vulnerable populations. And that should be the goal. It's rooted in what ForHumanity does.
And you started your question talking about independence. We view independence as critical. As I mentioned in that teacher example, humans will do the right thing until something else gets in their way. When you have an independent third-party evaluator, it changes your behavior.
That group is really tasked with being a proxy for society to see and determine, have you met society's goals? Society's requirements in how you are conducting your business? And remember, businesses exist for society first. That's why we allow the construct. Then we allow the construct of shareholders to finance the operation of that company. Society first. Shareholders second.
And so when we begin to think this way, then we have the opportunity to engage in a process that will mitigate these risks in advance. And independents can check and see if you've built compliance by design the way that you need to follow the law in advance. So independent third parties checking that you've done that as a proxy for society is the most robust form of, quote unquote, "enforcement."
In fact, it's proactive enforcement before people are harmed, because you're going to get audited as soon as you start to deploy the tool. Or you should be. And now you have compliance by design built into the structure rather than, oh, people were harmed. I'm enforced against by a government regulator. Paid a big fine. It doesn't change that people were harmed and damaged.
And so independent audit of AI systems provides this form of proactive compliance by design. An infrastructure of trust, we call it, to ensure compliance with laws, regulations, guidance, and best practices. In the end, the independence that we're talking about means the third-party auditor can be remunerated for their time -- they're paid for their services -- but they are an at-risk auditor.
Take Arthur Andersen. Some of your audience are probably thinking, Arthur Andersen? Who the heck is Arthur Andersen? When there were Big Five accounting firms, they were one of the Big Five. What they did is they advised Enron and other firms on how to take risk and cost and move it off their balance sheets. Then they walked around the corner to their audit partners and told their audit partners how to audit those off-balance-sheet items. It's basically fraud and malfeasance in its highest form.
And one of the reasons that independence is so critical is this is the opposite of independence. The auditor had no independence. In fact, they were the same firm. And so what ends up happening is if you advise a company on how to build compliance and then you go to audit them, you're grading your own homework. No.
I mean, this is so obvious to people when you put it in this context. You can't grade your own homework.
That's what we have the teacher for. That's what we have the third-party auditor for. So someone can be involved in helping you build compliance, but they can't audit that work. Or they can be the auditor, but then they can't advise on how to build compliance. Pick one or the other.
Now, you can provide both services, just not to the same client. You can be an auditor over here, and you can be a consultant over there. And you can take advantage of all your expertise and your knowledge. You just can't do it for the same client. And so that is the independence that we talk about.
KIMBERLY NEVALA: I like this idea of audits as a proactive governance strategy, because I do think people think about this as reactive. i.e., it's something I'm going to do, essentially, when I am forced to do so by a government agency. But this idea that we can start to engage in this type of work, not merely at the behest of a regulator, but also to safeguard our own business, is important.
RYAN CARRIER: 100%. Our certification schemes -- the rules that we lay down -- we hold those out humbly, right? ForHumanity is an authority because we have been doing this for five years and because we have 2,500 people from 98 countries who can crowdsource in on this stuff. But in the end, we submit to the authority of governments and regulators where they want to take that authority.
So, for example, we have a GDPR certification scheme. We have submitted that to the authorities in Europe and are awaiting approval on that for the first GDPR scheme for artificial intelligence. Which would be great, assuming we can get it done.
But in many other places, governments and regulators will not approve what the set of rules are. And there we want to go to the marketplace and we want to seek voluntary adoption. Which is exactly what happened with financial rules back in 1973, '74, '75, before the SEC started to mandate 10-Q's and 10-K's.
So voluntary adoption has its own benefits, has its own merits, and we're actually beginning to see that already. Because the procurement process for these tools has begun to ask questions about compliance by design and the types of controls, treatments, and mitigations. Are you mitigating bias? Do you have data protection in place? What are you doing about personal data? Do you have explainability for your model?
And so what we're beginning to find is that organizations are now ready to engage in assurance in advance of even the laws and regulations coming into force, because they're finding it to be a marketing value. Because they're finding it to be a strategic decision and strategic placement amongst competitors.
And that will grow, and that will result in greater trust and brand loyalty.
Technology and AI are no different than the cars and planes and trains and everything else that we've put safety features on over the last 150 years. We're just still growing into it. Still maturing into it. So that's the process that we're going to go through in that regard.
KIMBERLY NEVALA: Yeah, I think that's the oddity in how we approach that piece. And then, is there work, though, that organizations should be doing to test and evaluate those systems while they are being developed and before deployment? And then what are some of the best practices around that, or ways to think about that?
RYAN CARRIER: So a ForHumanity audit, and others, would look at the entire algorithmic life cycle. Design, development, deployment, monitoring, and even decommissioning. So it's the whole algorithmic life cycle.
And if you wait to conduct your audit, or you wait to build your compliance by design, until the deployment phase, you know what you may have missed? Any of the work you needed to do on your source data to get to data quality. Information quality. Bias mitigation. Even explainability.
So to be compliant with best practices, but also standards, and certainly laws like the EU AI Act, you are engaging in compliance by design efforts from two days after you had your first idea for what this ought to look like.
And it really starts, from our perspective, with ensuring you have the experts in the room to tackle these issues. To tackle issues like data quality, information quality, model, data, and concept drift, bias mitigation, even cybersecurity.
Cybersecurity has been around for 25 -- call it 30 -- years, but AI presents unique challenges for cybersecurity. So have you thought about how your AI can result in new vectors of attack, and therefore, is your cybersecurity ready to handle these kinds of challenges?
So we start by asking, do you have the experts in the room? And that means at the beginning. So the answer to your question for us is a robust yeah, it's at the design phase. Certainly the development phase. And deployment is obvious. Monitoring and how you go afterwards, absolutely.
KIMBERLY NEVALA: And when you say you need to have the experts in the room, I assume with a great deal of confidence, that's not just talking about the developers, the data scientists, data science teams themselves. But folks within your organization who can represent, yes, those business objectives. Yes, your ethical values and goals. Represent the user perspective. Is that true? And does that team also need to have a level of independence from the development teams as well to create, as you said, a healthy and natural tension throughout this process?
RYAN CARRIER: That's right. So the two primary -- I was going to say committees, but people get really uncomfortable when I use the word committee. They're like, oh, another committee. And they think of it as the roadblock to getting business done. So I prefer to refer to these people as pools of experts who need to communicate with each other. Which is a committee.
So pools of experts. So you need people who understand algorithmic risks. We call them the Algorithmic Risk Committee. And I listed off some, so I'm not going to do them again. But we also have a standing and empowered ethics committee. And that's because AI is a sociotechnical tool.
Not only are we impacted by the outcome of the tool -- like with a calculator, we need the answer -- but the nature of AI often means we have to put ourselves and our preferences and our personal data into the system to reach the outcome about us. Personalization, optimization, and so on.
So as a sociotechnical tool, what it means is that there's been instances of ethical choice made all throughout the life cycle. And those instances of ethical choice, you know who's not qualified to make those? Data scientists, designers, coders, developers. They're not trained in any of this.
This is a legitimate expertise based on the shared moral framework of the organization. And every company has the right to have their own shared moral framework, which should exist under the relevant legal frameworks in which they operate. The laws of the lands in which they operate their work.
That shared moral framework might be, we want customer satisfaction. We want to be good to our community. We want to employ people where our plants are. These are principles that corporations will state frequently. Do they uphold them? Good question. But they have the ability to operate their own shared moral framework.
Included in that are these instances of ethical choice in how they go about designing and developing their tool. What constitutes a serious incident? It's not just a legal question. It's also an ethical question.
When is model, data, and concept drift a meaningful deviation from the original purpose that you and I agreed to when you bought my tool? If my model moves and changes, there's no law out there that says this amount of movement, this amount of model, data, or concept drift, equals a deviation from purpose.
Therefore, it is an instance of ethical choice as to where you set the guardrails to say, this model has moved too much. It has deviated from the purpose by which we've agreed to operate with our customers and clients, and therefore we need to do something. Pause the model. Release a new version. Take it off the market and try again. Whatever it might be, these are the instances of ethical choice.
How about just even mitigating cognitive bias built into the design and development of these tools? These are all the expertise of the Ethics Committee, and that's what we challenge and charge them with. And they are independent from the Algorithmic Risk Committee, but they work together in essentially trying to mitigate risks in general to humans.
KIMBERLY NEVALA: And I think, again, Reid Blackman makes this point a lot: that ethicists are not intended to be activists. And this is something both those of us looking at things from the outside get wrong, and sometimes people within companies get wrong. Which sometimes means that companies are going to make decisions that you don't agree with.
And I think, in his trademark snappy way, the headline that caught my eye this morning was that "ethics are not about how you feel about things." Which doesn't mean that ethics are soft and fuzzy. But it also doesn't mean that this is always going to drive something. Which is where I think consumer and user choice, to say I'm going to participate in this or not participate in this, needs to send that signal as well.
Do you see that misperception as well? And how do you work with organizations to get the expectations right for what each of these bodies, if you will, or committees, needs to bring to the table and what their boundaries are?
RYAN CARRIER: I love Reid's work, one of my oldest friends since moving into this industry, and he's 100% correct. There's no room for a member of an Ethics Committee to be an activist.
When you sign up for that Ethics Committee, you have signed up to effectively adjudicate the existing shared moral framework of the organization, written down and documented in a code of ethics and a code of data ethics. So that's your operator's manual for, what do I abide by as I've joined this firm?
And your duty is to advise the corporation on that shared moral framework and how their decisions might impact that shared moral framework. If you want to make a change to that shared moral framework, that is entirely acceptable. We change laws. We change morality all the time, and one person's morals is different from another person's morals.
And there's a famous example. In this case, I'm not going to name-- I guess I kind of have to. Axon Technologies was a surveillance drone company. And after the shooting in Uvalde, Texas, the CEO started to talk to schools and people who would buy their tool. And they were getting asked, well, why don't you put a taser on your drone so if you see a bad actor that you can take action? So you've moved from surveillance to defensive weaponry.
Well, first off, that company had an ethics board, which -- fantastic that they had it. But here's where the mistake occurred. That entire ethics board became activists, and they threw up their hands and said, we're out. We're not doing this. And that's exactly the wrong approach. Exactly. And they may hear this and listen and see me as calling them out, and I am.
Your duty is to take the existing shared moral framework and go to that CEO and say, you're thinking about changing our shared moral framework from a surveillance company to defensive weaponry. Let's talk about what that means. Let's talk about what that means for our employees. Let's talk about what that means for some customers, who just said, great idea. And most of our other customers may say, well, really bad idea. We can't buy your product if it's got a taser on it. Or can we opt out of that? You figure those things out. But your duty is to advise on the impact of these instances of ethical choice. If you throw up your hands and walk out of the room, you're not advising anybody. You can't advise, and your duty is first to advise the company.
Now, if at the end, they choose to make this change to become a defensive weaponry company as well as surveillance, and you personally are like, I can't work for a defensive weaponry company. Well, then you quietly leave. And you go and you do your work elsewhere. And then if you want to be an activist, then you can have that role from the outside looking in.
When you are on an ethics committee and you want to build trust and relationship, and you want to effect good, ethical choice decisions built on a process and agreed upon shared moral framework, you have to do it as a confidential supporter. As a therapist for the organization that can work in confidentiality and have tough discussions. And then go where the organization is going to go, because the organization has a right to establish its own shared moral framework or change it as long as it abides by the law.
KIMBERLY NEVALA: I think we could spend a whole other hour on even just that aspect. But in the interest of not taking advantage of your time and generosity, as we've been talking today and you're doing this work more broadly, is there anything in particular that we haven't touched on that you'd really like to express or put out there for the audience to consider or ponder as we wrap up? This is really my garbage question, which is, what am I not asking that I really, really should have?
RYAN CARRIER: I think it's what each person's duty is.
And I'd like to encourage everybody to think about, when you watch something, when you make a choice, when you buy something, you're voting for it. And we need to have more and better discernment at a retail level, at a mom and pop level, at a buyer level, of supporting organizations that are engaging in responsible AI. That have good AI governance, oversight, accountability. That aren't buying, selling, sharing, and trading your data as their primary source of revenue. That aren't pretending they know who you are through their inferential decisions about you.
We need to value our privacy and our own data in a way that causes us to make robust decisions and reward the companies that are celebrating and upholding those decisions. When we choose to do this at a societal level, then we will actually get more of the outcomes that we want.
I say there's three mechanisms for change at the corporate level. One is when lawmakers make laws. But, like with our cars, just because we see a speed limit sign doesn't mean we don't drive over the speed limit. So laws are there, and then they get broken.
The second way to cause change is when we sue for damages. When we sue for liability, we change the economics of the product or service that we're engaged with. And so suddenly, their costs go up if they have a lot of liability. And so suing and engaging in that process is a good feedback mechanism for managing bad behavior.
But the third and the most robust one is when we let the markets function through supply and demand. And when we demand responsible tools, when we demand large language models, ChatGPTs, that don't hallucinate, that don't make up sources, that robustly disclose to us what the risks are, then we're going to get better results. And so there's a lot of power in the retail aspect. But it's not something we feel, right? Because it's one little person. You're like, what difference can I make? And the answer is, well, you can make a difference if all the people around you are doing the same things, and we eventually get a better result. So that's the one I'd leave you with.
KIMBERLY NEVALA: I really like that. I started to think there that perhaps there's room for another arm of ForHumanity, which will be a PR firm dedicated to those moms and pops. Because some of this is also -- some of the big guys are just so pervasively in-your-face and everywhere that it's actually sometimes just hard to know that there are alternatives. Or other ways of doing things, or other providers and service providers. And maybe this is also a way for us to help break down the iron grip that some of the biggest players really do have on a lot of these aspects today. So I really love that.
RYAN CARRIER: Well, eventually, I think we'll be doing Ad Council types of campaigns -- not dissimilar to the way smoking was reduced in this country -- to just educate a wide populace on risks, their power, their control, the benefits of good decisions, and the risks associated with bad decisions. So I do see that coming. And honestly, we're working towards it right now.
KIMBERLY NEVALA: Awesome. That is fantastic. Well, thank you so much. I appreciated all of the time and insight. And with any luck, we can invite you back, entice you back in the future to see how we're progressing and what we need to focus on next. So, thanks again.
RYAN CARRIER: My pleasure. Thanks for having me on. And I'd love to come back at your convenience. So happy to do so.
KIMBERLY NEVALA: All right. We have that on record, folks. So that's awesome. And if you'd like to continue learning from advocates, doers, and thinkers like Ryan, please subscribe to Pondering AI now. You'll find us wherever you listen to podcasts and also on YouTube.
