Digital Ethics and Regulation w/ Chris McClean

KIMBERLY NEVALA: It's day nine of Pondering AI's Insights and Intuition series. Today, Chris McClean reflects on digital ethics and regulation. Welcome back, Chris. Thank you for joining us again.

CHRIS MCCLEAN: Very good to be here.

KIMBERLY NEVALA: You do a lot of work in the space of digital ethics and trust. What have you seen as the most consequential developments or areas of progress relative to AI and digital ethics of late?

CHRIS MCCLEAN: I think the big one for me this year was the encroaching - impending, I guess - EU AI Act. It's going to be fairly monumental in its impact on the market. It has already had a pretty substantial impact just among organizations trying to understand what the requirements are going to be and what it's going to take for them to adhere to those requirements.

But really, if you think across the industry, this is the first large-scale attempt to oversee this big category that we call AI: to take practices like responsibility, accountability, and fairness, which we know are important, and bake them into a set of regulatory requirements that actually have a good deal of enforcement on the back end.

This is going to be a great experiment for all of us. And I think the EU is showing really good leadership in the way they're approaching it, even though we're still waiting to see the final version; there are still negotiations going on. I think it's going to be a really good example of oversight and a really positive move for the industry.

KIMBERLY NEVALA: What are some of the key activities or impacts you've seen from companies as they try to anticipate and prepare for the Act to come into play?

CHRIS MCCLEAN: I can tell you firsthand: just in the last week, I helped roll out a global responsible AI policy for Avanade. We have over 60,000 people worldwide, and we rolled out a policy across the entire organization, all of our business units. Anything we're doing internally, anything we're doing with clients, has to adhere to this new policy.

A lot of the organizations we're working with are doing very similar things. And a lot of it is very process-oriented, right? There's still good attention to the individual technical controls you need for AI solutions - as I mentioned, things like fairness, accountability, security, and privacy controls.

But what the regulation does, and what I think it does very well, is require the processes around those controls: having good documentation, having good risk management processes, understanding what the potential impact of those systems could be, and then getting out ahead of those impacts to make sure the systems are controlled in the right way. That infrastructure, so to speak, that you need for regulatory compliance is, I think, going to be the big change for most organizations.

KIMBERLY NEVALA: Do you think, if this had been in force, that the way that we have seen LLMs and gen AI burst onto the scene would have been fundamentally different or not so much?

CHRIS MCCLEAN: Well, what's really interesting is there's always, always a conversation around government regulation and whether regulation can keep up with innovation in the industry.

We're seeing large language models change the conversation about the AI Act, which didn't really consider large language models and generative AI in its initial stages. So the ongoing negotiations are still changing the language, basically to keep up with innovation happening in the market. So it's possible that if the AI Act had been in force a year ago, it could have changed the way large language models rolled out.

But even more so, the reason I appreciate the sort of infrastructure required by the AI Act is that we can't always anticipate what's coming next in the market. If you have a good process around risk assessment, around documentation, around transparency, around understanding where your data is coming from and how it's being used - all the things that make for a well-controlled environment around AI governance - then what happens next, whatever the innovation is, doesn't matter as much. You're already ahead of the game as far as control goes.

KIMBERLY NEVALA: So it's less about specific controls for a specific application and more about building that adaptability and resilience into the process itself.

CHRIS MCCLEAN: At least to begin with. And then as you see what kind of new applications and use cases are coming out, then you can modify and adjust accordingly.

KIMBERLY NEVALA: As you've worked both within Avanade and with clients, are there specific areas that you think require more attention or focus in general?

CHRIS MCCLEAN: I would say that governance structure is number one, especially with what we've seen with generative AI.

We've moved very quickly from AI being the domain of a small subset of the employee base - your data scientists doing really cool and amazing things with AI - to, all of a sudden, HR, marketing, finance, sales, and customer service all designing and developing interesting new applications and use cases around AI. So getting a handle on all of those different installations and implementations of AI is number one. That's the biggest challenge and the area that needs the most attention and the most support.

Once you get a handle on all of the different ways AI could be used or is currently being used in your organization, then you can start to do the impact assessments. You can understand how it could affect your customers, your employees, and so forth. And then you start to implement those controls: transparency, fairness, security, privacy, and things like that.

KIMBERLY NEVALA: So, all of that being said, is there anything that's at risk of being overlooked or left behind in our current approach, or due to the prevailing social, political, and/or corporate winds today?

CHRIS MCCLEAN: I think so. I think the biggest challenge, maybe the biggest gap that I'm seeing, is that so much of what we talk about when we talk about responsible AI is ethical risk or AI risk. That's very important; there are a lot of uncertainties. And risk is, by definition, a consideration of the impact and likelihood of something that we just don't quite know about.

But there's also this entire other category of harms, of tradeoffs, of costs that go into making AI, training AI, and implementing and operating AI. The environmental cost, we know, is very high for most of these systems: the extraction that goes into actually making the hardware, the energy costs, the water costs, and so forth. Those are known costs. They're not risks.

There's also the tremendous labor cost that goes into training these models: the people looking through the data, tagging the data, and so forth. That's a tremendous amount of cost that goes into these systems. And oftentimes, the people doing that work are paid very poorly. In some cases, they suffer emotional or psychological harm because they were doing very difficult kinds of content moderation.

Every time we use any of these generative AI or other AI systems, we're part of that tradeoff. We're getting some kind of benefit from the work those people have done and from the environmental cost of putting these systems together. So even when we talk about things like fairness in AI - where we're making small adjustments to the model to make the outcomes more fair - there's also this other fairness issue, which is: who's benefiting from AI? Who's contributing to AI? Are they getting compensated for their contribution, and are they benefiting as much as the rest of us?

So there are a lot of other, I would say, non-risk issues. When we look at the EU AI Act, or when we look at NIST with its AI Risk Management Framework, these are considering risks. We are ignoring some of the known harms, the known costs, and the known tradeoffs. I feel we should all be talking a lot more about those costs as we're using and benefiting from AI.

KIMBERLY NEVALA: In our previous conversation, you talked about this idea of extra-personal versus intra-personal trust. It strikes me that this is a bit of an extension of that conversation as well: it's not just about how easy the tool is to use and what it gives me, but what I am implicitly supporting by utilizing the tool, regardless of how good, fun, or interesting it is.

CHRIS MCCLEAN: Yeah, it's a great point. It's actually part of my PhD research, this idea of trust being a stance on whether or not a certain party is worthy of some power - whether they're going to be a good steward of that power.

Take this idea of trust as an emotional stance: do I trust this person, this corporation, or this system? That's my personal decision, my stance. But as soon as I act on that stance, I'm giving power to that person, system, or corporation. And that increase in power is my responsibility. So the outcomes, the tradeoffs, the harms, and even the risks that result from that increased power - I bear some responsibility for them, because of the power I've given.

I don't think enough of us are thinking about that as we interact with these systems. By interacting with them, or even endorsing them, we are giving them power, and that power has tradeoffs. It has risks, costs, and harms associated with it. And we should just be more thoughtful about those.

KIMBERLY NEVALA: Great point. So as we start to look forward into 2024, what do you expect to see take center stage in the arena of trust and ethics?

CHRIS MCCLEAN: We've talked about misinformation and disinformation a lot in this industry over the past couple of years. But with generative AI, this conversation is going to become much more heightened over the next year. Especially in the US - obviously we have a lot happening in politics with the election next year - but also around the world.

Not just politics, but big-picture geopolitics. There are wars and conflicts going on around the world, and we are already seeing AI come into those conflicts in different ways: AI being used to support war efforts, for example, in ways we haven't seen before and probably in ways we don't currently see behind the scenes.

So AI's impact on the geopolitical scene should be part of the conversation. I think it will be a much bigger part of the conversation next year.

KIMBERLY NEVALA: And if you were writing the 2024 playbook for individuals or companies, what would you prioritize?

CHRIS MCCLEAN: Well, if I had a choice - if I could select just one thing that's missing for me - it's that so much of the innovation is exciting and interesting, and there's a tremendous amount of value, but it's really outpacing our conversation about what we're looking for from AI.

A lot of that is being driven by this excitement around the technology. Which, again, is really incredible - it's really exciting what's going on. But that speed is outpacing conversations that we would hope would involve people with backgrounds beyond just technology: people from the humanities, people involved in things like social justice, NGOs on the front lines of helping different communities, people who care about things like law enforcement, the military, and criminal justice. There are a lot of these areas that will be heavily impacted by AI.

Right now, so much of the conversation is focused on the technology and not on what we're trying to achieve with that technology. How do we make things like fairness happen? How do we make the criminal justice system more just? If it's just a technology conversation, we're not going to get there.

So if I had my wish for next year, it would be to have those conversations about what we value as humanity, as a society, and how AI can support that - rather than about what this cool new technology is and how we can apply it to all these different areas of our lives.

KIMBERLY NEVALA: Awesome. Thank you, Chris. I really enjoyed the quick conversation and look forward to more to come.

CHRIS MCCLEAN: Great to be here. Thanks for having me on again.

KIMBERLY NEVALA: Thank you, Chris. You are one of the best at bridging concepts with practice in this space. 12 days of Pondering AI continues tomorrow. Subscribe now for more insights into what's happening now and what to expect next in the ever-fascinating world of AI.

Creators and Guests

Kimberly Nevala
Host
Strategic advisor at SAS

Chris McClean
Guest
Global Lead – Digital Ethics, Avanade