Regulatory Progress & Pitfalls w/ Patrick Hall
KIMBERLY NEVALA: Welcome to Pondering AI. As you all know, the regulatory landscape for AI has been heating up like never before. So we were pleased to have Patrick Hall join us to provide a current take on the progress and pitfalls present in this rapidly evolving space.
Hey, Patrick. Thanks for joining us again.
PATRICK HALL: Hey, great to be here.
KIMBERLY NEVALA: You and I last spoke for the podcast back in June of 2022. I think it's fair to say there have been a few consequential developments since then. Can you talk to us about what you've observed in the interim? What has delighted you? What's concerning you? What's most surprised you? Take your pick.
PATRICK HALL: Sure. So I'll start by saying that we appear to be in the midst of some kind of reckoning with the excesses of digital technology, and in particular AI.
We see 41 state attorneys general launching a lawsuit claiming harm to children on online platforms. We have the Department of Justice pursuing antitrust actions. We have the very recent White House Executive Order on Safe, Secure, and Trustworthy AI. And back in late 2022 and the beginning of 2023, we had the Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework coming along at roughly the same time.
So it seems that the wheels of government and law are moving, slowly, to get some of the more problematic aspects of AI and digital technology more broadly under control. And I'd say that's a good thing. These are all just first steps, and I expect this to go on for decades. But good first steps.
KIMBERLY NEVALA: So with generative AI and large language models - ChatGPT and its ilk - bursting onto the scene, do you think there is increased awareness among the public broadly, or in companies more specifically, about AI in general? And about both the power and the peril? Or are we being flummoxed or fogged in by all of the potential capabilities?
PATRICK HALL: I agree that there's a huge increase in awareness around topics relating to AI.
My personal impression, though, is that the general consumer still doesn't understand how often they interact with AI and machine learning systems. Especially younger consumers, who are spending more time on social media and more time on e-commerce platforms: almost every aspect of their digital life is being commercialized and often impacted by some kind of machine learning system.
So you know, it's a yes and no. I do think this has raised awareness of AI, but a lot of consumers are still unaware of exactly how often -- and I would say it's very often -- we're interacting with these machine learning systems.
The other side of it is that we are quite inundated with hype around generative AI. These are very transformative, important technologies. High risk, high reward. The same technology that can help you do your math homework or your English homework can also be used to generate horrible things: synthetic child sexual abuse material, voice tracks and video to be used in extortion schemes. So there's a really strong both-sides-of-the-coin effect here.
But I'm hopeful, actually, in the long run because so many people are working on getting the technology under control. I think we're going to have a rough few years. I think we'll see lots of incidents. I'm involved with what's known as the AI Incident Database. We've seen the size of that database more than double since the release of ChatGPT.
But with the wheels of government starting to move slowly throughout the world, and a lot of people doing good things to start getting the technology under control, I think in the long run we might actually pull this off. In the short run, I think we're in for a little bit of an exciting but bumpy ride.
KIMBERLY NEVALA: [LAUGHS] What are some of the types of incidents that are illustrative to people? That would help them understand where the boundaries and limitations and dangers of this particular technology are?
PATRICK HALL: So I would encourage anyone that's listening to go check out the AI Incident Database and look for themselves.
But a taste of the incidents would be anything from the comical to the really tragic. Suicide, extortion, revenge porn: there is a troubling side of this technology that we need to get a handle on. So again, it can go anywhere from the merely comical -- look, I can't believe ChatGPT would say this to somebody -- to really serious consumer harm and harm to children. Every piece of technology is a tool or a weapon, and ChatGPT is no different.
KIMBERLY NEVALA: Last time we spoke, you talked about the need for data scientists, and for companies deploying AI solutions broadly, to take a more deliberate approach. To adopt a real product mindset around AI-enabled tools and systems, and to focus in on things like safety, risk, and reliability. What have you seen happen in that area? Are we making progress? Are there still significant gaps?
PATRICK HALL: It's still an ‘it depends’ answer.
One thing that I think is really interesting, though, is to look at a chatbot like BlenderBot 3, which was released by one big tech company a few months before ChatGPT. It clearly had very little safety and reliability thinking behind it. It came out and said the CEO of the company was an unethical businessman, made very conspiratorial statements and very anti-Semitic statements, and basically had to be taken down immediately.
As for ChatGPT and this new generation of chatbots, I think the reason they're so successful and drawing all this attention to AI is that the companies that released them did invest in safety and reliability. And what I'd like to point out there, for people who aren't aware, is that this is human effort.
So sure, there's a big fancy machine learning model in there somewhere, but all the guardrails, all of the safety mechanisms, are done by laborious human hands: tedious tasks for humans to do to keep these systems under control. The content moderation, the safety guardrails. But for the companies that did it, it paid off, right? Now we have these very popular, very serviceable, very functional chatbots that we all know about. So I would say in one way, yes, we've seen progress.
A big focus of Monday's executive order is to treat AI systems like consumer products, because in many cases they are consumer products. Unfortunately, I think we still see a lot of sloppiness, a lot of go fast and break things, and no respect for the scientific method. But I hope one positive lesson to take from the success of ChatGPT and Bard and related technologies is that the companies who invested in safety guardrails were the ones who experienced the biggest success, in my opinion. So it's not all bad news.
KIMBERLY NEVALA: Yeah, that's true. But it is also fair to say that these are, quite literally, the biggest companies on the planet at the moment, with the resources to match. You mentioned all the folks developing these guardrails. It still feels a bit, though, like a giant game of whack-a-mole at scale. Even with all of that work done, the technology is still not reliable.
I read a study this morning, and it terrified me a bit. Not because these tools aren't useful, but because they are when used properly. Somewhere along the lines of 70% or 80% of the people who responded to this survey said: yeah, we think these are kind of cool tools, and we trust what comes out of them implicitly. I would trust what it says without fail.
How do we address that issue more broadly? Because you and I talked a little bit about this last time as well. One could argue these systems are designed, somewhat purposefully, to tap into the parts of our brain that equate conscious intellect with language fluency or literacy. And also the parts that like to anthropomorphize things. And so they use pronouns, they make fun of themselves, and they apologize.
How do we really ensure that folks broadly understand the true limitations of these systems, or is that just a pipe dream?
PATRICK HALL: I think it might be a pipe dream in the near term, sadly. And obviously, there are bad actors who are willing to take advantage of that.
I want to be clear: I ask - I demand - that my students actually use ChatGPT in their writing, and it makes them better writers. Does it make me or you a better writer? I doubt it. These tools upskill the less skilled. They don't necessarily help experts. And I like to pretend I'm an expert.
So one, they can be helpful tools, and I don't want to come across as too negative. But I hate the way these were rolled out in this anthropomorphized fashion. And 41 state attorneys general would agree with you that some of these companies are playing on well-known concepts in psychology to make these products addictive, to make these products harmful, to make us keep coming back to these products, whether we want to or not.
So the anthropomorphization I find very distasteful. And the consumer harm aspect I obviously find very distasteful. But I think to wake up from this pipe dream of just trusting AI systems, the only thing to do is education. And I guess that's why I'm interested in education at the moment and teaching data ethics classes and responsible machine learning classes and things like this. Because we just have to have a higher degree of technical literacy.
And that gets back to the point I was making: people just don't understand how often they're interacting with these systems and how broken and sloppy many of them are. So the first thing we have to do is teach people about those things, and I try to do that. But the classroom only holds 50 people, and that takes a semester, so [LAUGHS] it's going to be a slow process. It's going to take a long time.
KIMBERLY NEVALA: Yeah, we're starting to think about this at the lowest levels. Not in the way we think about it from a business standpoint, which is: where can I infuse AI into everything? Such as into education, down to detecting which kindergartners are paying attention or not. Which is, categorically, by any measure, a terrible idea…
PATRICK HALL: My three-year-old had to take a standardized test to get into a school, and I was just blown away because there's no way it's a reliable measurement.
KIMBERLY NEVALA: Oh, for bleep's sake, right?
PATRICK HALL: Yeah, exactly. [LAUGHS]
KIMBERLY NEVALA: Anyway, this will be very interesting, and there is certainly more to come. Because on the other hand, language is how we communicate. And that unlocks an interface to these systems that is the basis of all the discussions - overblown or not - about democratizing access to information, expertise, knowledge, and so on.
PATRICK HALL: Well, and I think another reason these products are so impactful is the user interface, right? I don't have to code in SAS. I don't have to code in Python. I don't have to do math. I just sit down and type to something that talks back to me like a person. So the power of that user interface is not to be underestimated, and it's probably one of the major reasons there was such an explosion of interest around these technologies. But the human-like interface, again, is a tool and a weapon. It's great and it's dangerous.
KIMBERLY NEVALA: Yeah. Now, earlier you mentioned the UK AI Safety Summit is kicking off this week. There's been a lot of chatter lately, and concern raised, that this concept of AI safety has been co-opted to refer to - some say to divert attention toward - long-range existential risk. AGI, right? The robots are coming. Do you agree with that? Is that an issue? Is that a concern?
PATRICK HALL: Yes. [LAUGHS] I fully dismiss the effective altruism, longtermism, existential risk take. It's stupid, OK? I'll leave it at that. I don't pay any attention to it. Now, that doesn't mean it's not dangerous. That doesn't mean it's not diverting attention and resources away. That makes me very mad, and I think that's a real possibility.
For me, I would say I don't bother much with these distinctions between AI ethics and responsible AI and AI safety. In fact, I don't take any of those labels particularly seriously. What I take seriously is law, regulation, risk management, compliance, audit, security: processes that we know have worked for a long time. So is AI safety a more dangerous buzzword than responsible AI? Maybe a little bit.
But another thing I tell my students is nuclear weapons, global warming or climate change: they're going to kill us long before AI will. So I just fully dismiss the longtermism and the effective altruism and the existential risk stuff. I don't even know what to say about it. It's nonsense.
KIMBERLY NEVALA: Yeah, and I think the concern I've seen is not that we don't need to be concerned about AI safety. It's that this co-opting of the term for long-term existential crises is what's diverting attention from actual, present-day safety harms and needs.
PATRICK HALL: Yeah, it's marketing. That's why I try not to pay any attention to it.
If your listeners want to learn more about this, Vice did a great, very direct exposé on this called "Why Big Tech Wants You to Think AI Will Kill Us All." And essentially, the answer is that it's their new form of guerrilla marketing, and that's all.
KIMBERLY NEVALA: Yeah. Perfect. I mean, not perfect, but…
PATRICK HALL: [LAUGHS] Yeah, AI safety and responsible AI is a weird field because you end up saying 'good' about really bad things. That won't be the first or the last time it happens to me today.
KIMBERLY NEVALA: Alright [LAUGHS] I'm going to have to edit that. Anyway, as we turn the corner into 2024, what do you expect we are going to see on the legal, risk, compliance fronts?
PATRICK HALL: Just slow movement. If the EU AI Act passes - which it should - that's clearly the next big thing on the horizon. And much like GDPR, it's written to have extraterritorial effects, right? It's written to make companies that want to participate in the EU market think about their behavior outside of the EU market.
And so much like GDPR had a huge effect on the data industry, which is still much bigger than the AI industry, I expect that the EU AI Act will have quite a large effect on the way we practice AI moving forward. AI, machine learning and statistics - whatever you want to call it. It's all still just statistics, by the way.
So I expect the EU AI Act to have a large impact. It's going to move the needle. The AI Executive Order moves the needle. But the needle still isn't even up to 20% or 30%, right? Each one of these things moves the needle a little bit. And I just expect it, like I said, to take decades to really get a mature legal framework set up around these very complex sociotechnical technologies.
KIMBERLY NEVALA: And so for companies that are not going to wait decades to implement these technologies - many of which, as you've said, are being used in socioeconomic contexts - what would you be telling them? Are there a few key elements that they need to be really concentrating and working on now?
PATRICK HALL: So I would say the most basic thing is to understand your existing legal liabilities, which, if you're in a regulated industry, could actually be fairly substantial. I'm not a lawyer - I should have started off by saying that. I'm not a lawyer, just a lowly professor.
So you need to understand your existing legal obligations, and you need to understand how notions like bias or reliability are impacting your customers. Because even the executive order mostly impacts government agencies, not commercial industry, and the NIST AI Risk Management Framework is voluntary. So to have an impact inside an organization, you have to start simple, with just the basics, because the business has no incentive to accept an overbearing risk management framework or something like that. You have to start with what's really going to work inside your company.
And what I've found is: follow the basics of product liability law, which has existed for a long time. You know, don't be negligent with your products. Make sure you're following any other legal obligations around security, privacy, or nondiscrimination. And then start trying to understand how quality affects your business: to what extent can you push for increasing quality and see that as a positive for your business?
When we win real regulation around AI - and hopefully we will - that will change the entire calculus, because regulation is what pushes back against these business incentives to rush products to market. Until we have real regulation, which I expect to be a decade away at minimum, it's going to be more about: are you in a regulated industry? And if you're not, it's the basics of product liability and the basic question of, if I make a better product, how is that going to affect my business? I think that's how you have to start.
I have to say, though, if you want to learn more about real adult approaches in this area - not the silly AI safety and silly responsible AI hype - I would definitely look at the NIST AI Risk Management Framework. I would look into the banking resources, like the Interagency Guidance on Model Risk Management. I would look at what's been going on with the FDA and medical devices: software as a medical device. And I'd look at what's been going on with the FAA; they've been flying planes with sophisticated algorithms for decades now. So there's a lot of real adult advice out there that isn't just silly AI safety and responsible AI hype. That's what I like to focus on, and that's how I've made my way in this crazy world.
KIMBERLY NEVALA: [LAUGHS] Well, that is some incisive insight and advice from a self-professed lowly professor. Very humble, Patrick. Thank you again for your time today.
PATRICK HALL: Great to see you.
[MUSIC PLAYING]
KIMBERLY NEVALA: We enjoyed our conversation with Patrick so much we decided, why stop there? Starting on December 11, we bring you 12 Days of Pondering AI. In this capsule series, prior guests share their insights and intuitions on the ever-evolving world of AI. Subscribe now so you don't miss it.