A Healthier AI Narrative with Michael Strange

KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.

In this episode, I am pleased to be joined by Michael Strange. Michael is an associate professor, or docent, at Malmö University in Sweden. Apologies to all my Swedish colleagues for the very poor pronunciation there. Michael studies the politics and political economy of AI in healthcare, amongst other domains.

Today, we're going to be talking about debunking AI hype and how we can promote a healthier narrative and thereby more productive adoption of AI systems through novel governance approaches. Welcome to the show, Michael.

MICHAEL STRANGE: Thank you very much, pleasure to be here.

KIMBERLY NEVALA: And just for all of those out there, how do you pronounce the name of your university properly?

MICHAEL STRANGE: I daren't say on camera. [LAUGHING]

KIMBERLY NEVALA: OK, fair enough, fair enough. It'll be written out in the show notes for folks. Now, when I first found your work, Michael, it was not immediately obvious to me that your background would have been in international relations. So how did that interest in, I assume, politics and other elements of the socioeconomic system bring you to a research career that includes healthcare?

MICHAEL STRANGE: Well, it's a great question. So I got into an interest in healthcare, at least in terms of projects and stuff, from 2019. We had this great project called Precision Health and Everyday Democracy. And when you're a political scientist, then you're interested in - apart from things like democracy - you're interested in decision making. You're interested in things like how policy processes work, of course. But also you're interested in how different interests in society get to influence that process.

So when I got into healthcare, what I was struck by was healthcare is a very tangible area where you can see how policy really impacts people's wellbeing. But also you can see it's a system, of course. And it varies between different countries. And it says so much about a country, in terms of how the healthcare is organized.

And also, of course, with any kind of policy issue, there's always this question of, how do you define it? If you're talking about defense policy, how do you define defense? How do you define a security threat? In healthcare, this is a really important question. What is health? What is good health, but also where does healthcare take place? So I was getting into that. I was very interested in these questions about how healthcare is provided.

Then, of course, like everyone, the pandemic came along. And there were these two really interesting aspects for me. One was, of course, this hugely polarized debate about exactly how you should respond to this. Also within that this question about to what extent decision makers were listening to and looking at the experiences of everyday people. How people in very different walks of life, how they were-- to what extent they were able to follow the guidance, whatever the guidance was that they were given. And also access to things like vaccines, basic healthcare; a lot of these interesting questions. So that was one aspect.

The other aspect that came up in the pandemic that took me into AI was this narrative that became very, very common in many countries: that data would save us. This idea that if we could just give up our hang-ups around privacy to give over our most personal data, that would save us. And also, of course, within this was this narrative that it wasn't just about giving our data away, but it was giving our data away to large corporations, the big tech firms.

And what this underlined was also a broader shift: how much the core infrastructure through which we receive healthcare - and also how the concept of health is understood - had actually shifted over to these big firms in many different countries. If you're talking about the States, to what extent were the old health providers still in that role, or was it now increasingly being provided by big tech? In Europe, where there's a different model for healthcare provision, it was maybe a more obvious, extreme, radical shift.

And I think what was interesting is whether or not any of those shifts were sensible. Putting aside these questions of whether you support the policy or not, what was interesting - for someone who's interested in decisions and policy making - was that this was a very radical shift. In many ways, this was revolutionary.

So that's why I wanted to get into this field. I wanted to understand what was going on. What were the assumptions? What kind of new power relations were forming? And also, when you talk about health care, when you talk about population data as well, in many ways you're talking about things which are meant to be core to the sovereignty of the nation state. Core to the idea of what is a country.

And at the same time, though, if you're talking about the healthcare of your population and it becomes dependent upon, in many ways, firms which are either more powerful than your country or based outside your country, it also challenges traditional understandings of certainly the jurisdiction over things like healthcare. But also, it gives a very changed idea of, as an individual, who do you now look to?

When it comes to your health, when you look to an actor who's going to protect you, do you look to your national government? Or do you look to whoever is providing the healthcare app on your smartphone, your device, your watch, whatever? So a lot of things were going on in this period, amongst many others. And I was curious. I wanted to try and understand this better.

KIMBERLY NEVALA: And there's no doubt that the pandemic was a bit of a crucible and a crisis point for rapidly increasing or ramping up the push to digitization, broadly, and either datafication or quantification, more specifically, within healthcare. Across domains, to be frank, but healthcare in particular. And whether folks felt that that was a real or manufactured crisis is perhaps not the point.

But one of the things it definitely brought up, and I'd like to start at this more conceptual level, is the concept of hype. This is where I originally found your work as well. Why is it so important that we understand how hype, and the narratives that are hyped up, impact both our understanding and our decision making about how and where and when, or even if, we use AI in healthcare?

MICHAEL STRANGE: Yeah. So when we talk about hype in AI in general, of course, it can often imply that you're saying that you think AI is exaggerated. And it might be.

But also it can easily lead to one being positioned as someone who is somehow anti-AI. That you're saying, well, we don't like this technology. It's too fast, too much. Let's slow down.

So if you're talking about healthcare, then it can look like you're also saying, well, all these benefits, the promise, are exaggerated, and we should step back from it. Now, putting aside any of the questions of whether or not that is actually correct, the problem is, of course, that it positions you as simply an obstacle. A barrier to applying AI in healthcare.

So let's say we see evidence that AI is really needed in healthcare. Let's say we're not happy with the way healthcare runs right now and there is the potential that AI could do fantastic things. Then you end up in this very awkward position of being an obstacle to that.

The thing is, hype can also be very negative. Hype can also be a narrative that AI is simply about big tech wanting to come in and steal all of our data and make vast profits and take over our hospitals and whatever. Obviously, if we buy into the argument that AI can do wonderful things in healthcare, then that kind of negative hype is extremely dangerous as well.

So the thing about hype is that it is, by definition, exaggeration. It's not inherently positive. It's not inherently negative. What it does overall, though, is it takes away the space for actual debate. It takes away the space for nuance.

And like everything in life, if you want to do something well, you need to be able to understand it. You need to be able to talk about it. You need to be able to think through it. You need to be able to discuss it with other people. And also, of course, work on the basis that if it's something complex, there's always going to be people who know more about it than you yourself do.

So to have that kind of space, as I said, to have that kind of nuance is really difficult to do when there's hype. It becomes like a religion rather than a debate about knowledge.
We need to be able to test things. We need to be able to also work on the basis that a solution that seems great for people who fit one type of demographic might not fit everyone else. We need to be aware that a technology developed primarily around the healthcare of men might not suit women. Basic things.

But if we just buy into the hype and we just say, well, either you're for or against it, most people in the end will say, well, we're for it. We don't want to be against progress. We don't want to be against the future. We don't want to be anti-modern, left behind. But then we end up, if we just follow along with the hype, we end up being unable to ask important questions about exactly how to do this thing that is extremely complex.

AI, as we know, is a very complex technology. It covers many different types of technologies. It's very debatable if we should really use it as a category as well. You apply it to human health, which is in many ways way more complex as well. The complexity of healthcare, nutrition, all these things: super, super complex.

So the basic answer, in terms of, is AI going to be good for healthcare? We don't know. We have to try things out. We have to test things. We need to also work on that spirit of inquiry. It needs to be one in which we are humble and we're aware that things will go wrong inevitably. So we have to think through, what do we do when things do go wrong?

If we only stay in this hyped narrative, then if there's anything scary that happens, anything bad, then you're left in a position where if you are the messenger to announce this problem, suddenly you're now anti-AI. You're bad or we suddenly run away from it, panicked because we say, no, no, no, this is dangerous. We should never use it, never touch it. And then we don't get any of the benefits either.

So hype is something that is very dangerous. It risks us becoming very binary around AI. And it makes it very difficult for us to be able to really understand what it is. What does it mean in healthcare? But also, how can you and I have a voice in this?

KIMBERLY NEVALA: So it sounds like it really doesn't matter whether the hype lands on the positive or negative side of the scale. Ultimately, just the language that's used and the narrative that's created actually excludes people from the conversation. Or it encourages them not to enter. And that, ironically it would seem, likely hinders innovation. At least sustainable innovation over the long term. Is that right?

MICHAEL STRANGE: Yeah. No, absolutely. And one thing to be said about hype is the effect it has. Let's just say we go for negative hype: the idea that, in general, AI is going to do terrible things. In healthcare, it's just about, say, privatization and big tech, big capital taking over everything.

The problem with that narrative, that general negative hype - or it could be positive hype - is that it makes us feel as if we're all in the same boat. We're all in the same position. But of course, in reality, any kind of change like we're talking about here with AI and healthcare is going to affect different people in very different ways.
Some are going to benefit more than others. Some are going to be losing out more than others. But also in terms of where there are potential problems and the solutions for reducing those problems, they're going to be very different for different kinds of people, different types of groups.

Simply just in terms of your income level - if you have more wealth, you can more easily step out. Or for example, let's say we're in a situation where you have the chance to keep your data private or you can sell your data. Obviously, if you're on a lower income, you're under more pressure to sell your data. These kinds of things.

And the point is that the hype, because it levels it out, it makes us all feel equal. But we know, obviously, the Elon Musks of this world and so on are not affected in the same way as everyone else.

So when you bring that back to innovation, you think about, OK, we are developing technologies which are meant to radically improve healthcare, and we understand the human population is very complex. People have different needs, different lifestyles, different types of jobs, all these kinds of things that make up the huge, beautiful diversity of the human species. Innovation requires that we have access to that information.

And some of that information can be collected in the way that data models are built, in the data used to train AI. But also, of course, there will be points where, however hard we try, however good the engineers are, we can't actually capture the diversity of humanity in the data. So we also have to have follow-up stages where we can identify the limitations in the way we try to turn humanity into data points. What is it that we're missing in the data? What can't the data capture?

If at a certain point, AI is just trained on that data, there's a limit to how good it can be. It needs another stage where we can engage, we can alter, we can set parameters, whatever. But we can adjust it to suit the needs of basically the reality of the human species. And all of that is a story of innovation.

All of that is a story of design and development, but overall, it's innovation. We come across new ideas. We get surprised by reality. Reality tells us something more interesting than what was just in our heads. And that's really frustrating. But also, for engineers, that is exciting.

Engineers don't like it when their technology doesn't work at first. But on the other hand, a good engineer also understands that that is what makes a great engineer, that they adapt. New information comes in, they develop a solution. A bridge goes wrong at first, you redesign it to suit the situation, and then you learn something about bridge building. You realize the human population is more complex than your data initially suggested, and you innovate.

If we stay just in this black and white binary of hype around AI in healthcare, all we can basically see is whatever we decided we can already see within our made-up story of the technology. Whereas if we want to actually innovate, we need to move beyond the hype. We need to acknowledge risk. We need to acknowledge the potential for harm. We need to not be scared of these things but rather see them as inherent to the technology. And then use them in a broader participatory engagement with diverse society to be able to, in a very humble spirit of inquiry, learn. And that is innovation.

KIMBERLY NEVALA: And would I be right in assuming that when we talk about understanding the information and the data that's available - what's there, what's not there - and then iterating and refining on that, you are not suggesting that everything that is important can or even should be datafied, if you will. Or can be quantified. So is there a level of humility required for us as builders and even as consumers of this tech? To understand that there are areas where the data may point us in directions, or areas where the data, no matter how good it is or how voluminous, will never be able to be the full solution?

MICHAEL STRANGE: Yeah, I think absolutely. You can learn a lot right now from just looking at broader discussions around science. You take the whole debate around vaccines. A hugely controversial debate and a lot of interesting aspects around that.

But one of the problems there was people's understanding of how science actually functions. Also people's willingness to appreciate the limitations - the failures, you could say - of science. Because science is about inquiry, and there are always going to be things that we don't know.

That's what science is, right? If we knew everything, then we could just walk away. We could switch off the simulation and do something else.

KIMBERLY NEVALA: How boring.

MICHAEL STRANGE: But the reality is that there's always going to be limits to what we can do.

But we tell this story. The popular story around science is very much a hyped binary where either science is perfect, we know everything, and scientists are the ultimate experts, or they're all mad and we don't trust them.

And likewise, when we talk about how we treat data, how we try to quantify the world in these systems, we need to be able to say, well, yeah, we do this for a reason. It can do amazing things for us.
Quantification is fantastic because it's a simplification of the world into numbers. It's fantastic because it can sometimes help us see, you could say, deeper truths, very interesting patterns. It's good for looking over very large data sets, comparing very large numbers of individuals.

But the thing about data sets is that whilst they tell you some important things about humanity, a large data set, however many numbers are in there, is never going to tell you about your life. It's never going to tell you exactly how your body is going to respond to a particular treatment, because you're unique.

And that doesn't mean that the exercise in quantification was a waste of time because it helps us to be able to produce the baseline of the treatments; whether the pharmaceuticals or the procedures, the surgeries, whatever. But ultimately, we need to be able to still find time to tailor it to the individual's needs. And ideally, you would do that from scratch at the very beginning, when someone was being treated. But in reality, of course, it requires a period of time where you can try things out, adapt them, and so on.

In the case of AI and when we talk about quantification here, it's learned from vast data sets about the human body and how treatments are likely to pan out when applied to you. Again, that can tell you something very useful. But it's also very dangerous if you just press the green button, go, and the machines start doing whatever they're doing to you. You need to be able to have a point where you can review, test, and so on.

So the thing is, of course, partly it's that you can have a human nurse, a human doctor who can be in the loop, so to speak. But also for you as a patient, you need to be able to know, OK, if this technology is producing something which turns out to be harmful to me, what should my reaction be? Should I just be quiet about it?

Should I just be quiet and just accept and hope, overall, the effect is still good on me? Or what do I need to tell the nurse, the doctor? What is my relationship to this technology? How much should I actually just follow it? Should I just take it as a guide along with something else? This is the danger because it takes a lot of work to know how that relationship should function.

KIMBERLY NEVALA: And so to get really, I guess, practical and tactical for a minute. And then I want to go back to that broader paradigm, where this isn't just a tool that's out there. It's a tool operating within an ecosystem with some very complex considerations and interactions.

But part of, I think, using the technology well is about really understanding what it is that we have in front of us today. What are some examples in the systems that we have today where we really just need to be aware of some of the-- I don't even know if they're limitations. They're just gaps. I suppose they are limitations but the gaps or the realities of the data which we have to operate against today?

MICHAEL STRANGE: This, of course, varies depending on what sort of health conditions we're talking about.

But in general, we know that most data used in medicine comes from the more privileged countries in the world, but also from the parts of the population which are themselves more privileged. Just in terms of binary gender, we know that most medical treatments are developed on the male body. Even those which are specifically applied to the female body are developed on the male body. It's bizarre.

But also, we know - it's often been noted - that there is much less research on conditions which are common in economically poorer countries. We know that there's an abundance of research on conditions which mainly affect economically richer countries.

So if you're a medical professional, as a human, if you're aware of that history of global inequality that's embedded within the healthcare data, then it's not that you can just reinvent the pharmaceuticals as an individual regular doctor, regular physician. But it's something that you're aware of.

But if you just put that data into the computer, if the AI learns directly from that data, then of course you're giving it nothing else. You're not giving it that historical context. Then what else is it meant to do than just assume this is what the body looks like? So typically, it's a male body. Typically, they're white. They're going to be living in a rich democratic country. It's going to be overlooking a lot of the other aspects, the aspects that can be affecting the vast majority of people on the planet.

But then also there's a question of what is health? What is healthcare? And how much can we turn that into data? So we know that 90% of what affects healthcare outcomes is not from the clinic. It's not to do with what comes to you from the nurse, from the doctor. It's to do with factors that are there in society, the so-called social determinants of health.

So of course, that's to do with things like education. It's to do with your job. It's to do with your living conditions, your accommodation. And these factors are hugely important. We know that if you're a government and you want to improve healthcare or human well-being in the country, the best way to spend money is not so much on the hospitals directly, but to spend it on making sure that people have good housing, good education, and also changing labor laws so people can have more stable, regular, secure employment.

But that's a hugely complex, also very politicized debate, because exactly those policy issues, in many ways, represent some of the main divisions between different types of political parties in most countries. Whether you talk about having stronger trade unions or not, that's a hugely controversial issue. Even if we know there is data suggesting that stronger trade union membership does improve human well-being and life expectancy within countries.

So you take this hugely complex debate, and then you have a system which is based upon quantified categories. How much of the data that tells you the bigger picture of human health do you put into the machine?

And it could be, of course, that we just say, well, we're going to be very honest. We're going to put in all of the data that we can possibly think is related to human health, and we're going to let the AI come up with the solution. And the output it gives us is not necessarily, then, a recommendation for a particular sort of pharma product; instead, maybe it comes up with a policy prescription. But is that something, then, that anyone's going to follow? Are we saying we just suddenly hand over the policy process?

This is why it gets very complicated when you're talking about quantification in a human world that is hugely complex. And when you talk about health, you really see that complexity. There is no simple answer about what improves human health that is immune from political debate. People will get very, very political about these things.

KIMBERLY NEVALA: They get there very fast as well.

MICHAEL STRANGE: Yes.

KIMBERLY NEVALA: As you were talking, I started to think, well, I could see folks saying: hey, what you guys are talking about here is healthcare, writ large, and perhaps where we're trying to move society and health forward. But what we're really addressing is just the practice of medicine, which is that much smaller interaction or engagement that happens when you go to a doctor because you've become ill.

It seems to me that that narrowing of the aperture, if you will, would also be harmful in thinking - or thinking properly - about what the right treatment is, even in that narrower medical sense. Again, that smaller scope lacks the broader viewpoint, which likely cannot be accurately represented in data. No matter how many odd questions you ask me, when I come in to have my ingrown toenail looked at, about whether I smoke, and whether I feel safe, and all of these components.

Will you get pushback from people saying, well, we can't solve this problem, but we should just narrowly focus on that problem only? And maybe that's OK. I don't know. Is that OK?

MICHAEL STRANGE: Potentially.

What you're talking about, of course, is the parameters of the system. So it requires that we can talk about applications where we know that the number of factors that are important is limited. And we can say that those factors can be covered by whatever data we're feeding into the system.

Now, obviously that can't be anything complicated like how we should allocate scarce resources in a hospital, because that would be more to do with the social determinants of health. But it could be something very simple, like an ingrown toenail, some basic condition. It could be a system where you are just automating basic checkups, or a consumer health app on your smartphone where the AI can have regular conversations with you. You can have a chatbot which asks you: are you applying your creams? Are you taking regular showers? These kinds of things.

Then, of course, that sounds pretty good, because a lot of health treatments are often just about being reminded to do things. So this is also where AI has massive potential in health. There are ways in which it can make you feel more connected. It's been noted that chatbots often feel more empathetic - healthcare chatbots often feel much more empathetic than humans.

The important thing to clarify, of course, always is that that is not empathy. Basically, it's to do with good salesperson-ship. The AI is able to learn the language that makes us feel that there is empathy. But the AI cannot have empathy with us. It cares nothing about us. It is not aware of our existence, so to speak.
Now, in that situation, there is a lot of excitement where people say, oh, but then this could make the whole experience of dealing with a doctor much better. If you've got the system that makes you feel that it's listening to you. Perhaps, but of course, the obvious danger there - if you're just talking about a chatbot which is talking to you about your ingrown toenail or any other kind of condition, then some basic -

KIMBERLY NEVALA: Hyperbolically, simple language [LAUGHING].

MICHAEL STRANGE: Yeah. But something where it's seeming to be empathetic. It makes you feel better. It makes you feel inspired. It's the same thing as Duolingo or any of these apps, right? The same basic thing: you're reminded to do something. And that could also be helpful in mental health, to a limited extent.

But the danger always is whenever you then begin to think, as a patient, that what you're dealing with is another human being, then you begin to give it agency and also, importantly, responsibility that it cannot take on. Particularly if you can ask it questions which have significant implications for your health, but you can't be sure that it's always going to be correct, then it becomes dangerous.

So it's very much about understanding. Innovation here is very much about understanding, what are the parameters? Understanding exactly what it can and can't do. And that is a story for the developers. But also it's very much about a story for the health professionals. It's also a story very much for the individual patients.

KIMBERLY NEVALA: And this, then, I think, comes back to when we're talking about digitization, up to and including, but not just limited to AI. We really have to think about these, particularly in healthcare, given all of these complexities, not just as a new tool to be put in hand.

But, as you've said, as something that is influencing or changing both the social practice of medicine, if you will, and the political and political economic paradigm. And by social practice, I mean how we actually relate to practitioners - between patient and practitioner - and how practitioners relate within that bigger system.

So what is it that we have to understand? First of all, what's the political, economic paradigm? And then what are some of the areas that we have to take some particular care in addressing as we look at both that paradigm and implications for healthcare as a social practice?

MICHAEL STRANGE: Yeah, and this is a really interesting issue, I think, because to some extent, it's about who owns the technology.

But also, of course, as we know from word processing software today, it used to be the case that you would buy the software, you would have it on your computer, and in a sense, it was yours. But today, we have a subscription model, typically.

So if you stop subscribing, you stop having access to this technology. But you have a relationship of dependence on it. You might be able to go to other word processing systems. You can move your files over to that, potentially, but there'll be things that you lose and it can be harder for you to be able to collaborate with other people. There's a relationship of dependence.

In healthcare, there's a question of how much of this technology… One thing is technology that you just bring in, say, for managing a few simple processes. But it can also become something into which you are increasingly feeding a lot of the patient data. If it also becomes the way through which you then access that patient data and make very significant decisions about how you spend scarce resources in your clinic, then it begins to become part of the infrastructure.

And we use words like virtual and so on to describe this technology, but in reality, this is very material.
It's very real. We all experience this - the e-systems through which you deal with social security or whatever. These are very material, very real things.

And in healthcare, if this technology becomes part of the infrastructure through which the clinic functions, there's an important question. Also, when you talk about national security. What happens if the technology goes down? What options do you have just to be able to choose a different provider?

Also, to what extent does using this technology determine what other technologies you can bring into your hospital? If it begins to determine the standards for other technologies that you can bring in, then it begins to impact procurement. So instead of just being something that you've chosen to buy because you want to use it, it now becomes a gate.

Suddenly, you've now begun to hand over what was originally your responsibility. You had jurisdiction over procurement. You begin to hand that over to the company that owns the baseline infrastructural product, and then it, in effect, gets to set the standards. It gets to structure its own version of a market for other technologies.

Now, there are so many different ways of doing this. The same thing goes for healthcare data. To what extent do these systems require that your patient data leaves your clinic? Does it need to leave your country? Does it need to cross vast oceans? Or can you keep it internally within your clinic?

If your healthcare professionals - your doctors, nurses, and so on - say they want a new technology, something very specific, because there's something new they want to try out based upon some new research they've read, let's say: what do they do? Does that mean you now need to procure a new technology from far away? How easily can you talk to the developers?

Let's say it's to do with the electronic health record system and it doesn't quite fit your needs given the way you work with patients. Can you talk to the developers? Can you ask them to edit it? But also, do you have a system - some are talking about creating more local, clinic-based ecosystems - where the doctors can create their own basic apps? Of course, that's more like using generative AI to help them. So that's the baseline, the foundation, but then you can use that to create your own more local devices, which potentially could be owned by just a clinic. They would be bespoke.

The point overall is that there are a lot of different ways in which AI can be done in healthcare. There's a lot of assumptions also within it about: what is the role of the clinic? Is it purely a purchaser? Is it mainly that we look to doctors and nurses as a way of managing quality? Or do we also see the clinic as a provider?

The more the technology becomes central to healthcare - central to the way we manage our health data, central to the way we access the whole health system, but also to how we monitor our own health - the more significant it becomes that, increasingly, as we see in a lot of governments now, there's always the assumption that it's the private sector that is the provider of this technology. That is a shift. That is a shift in which we are now saying, OK, the providers of this core infrastructural aspect of healthcare and health and human well-being in our country are coming from the private sector.

Now, again, this is not to question the merits of the private sector or the for-profit sector. But it is a significant shift in the way we think about healthcare. The same thing goes for countries where you have more of a for-profit, insurance-based system. It's still a radical shift because it changes who the providers of healthcare are. And we see that.

We see a lot of the big tech firms which do not have a history of being in healthcare suddenly positioning themselves, at least in their self-presentation, in many cases, as being the future leaders of human health care and well-being. Maybe that is all good, but it's a shift.

And again, with anything in life, for us to be able to do something properly, we need to be able to understand it. We need to be able to see it. We need to be able to debate what are the consequences because there's always going to be consequences. We can't rely on a single company or a single individual being able to anticipate all the consequences. To be able to anticipate consequences, to be resilient to potential problems, we need to be able to have this broader discussion.

KIMBERLY NEVALA: And it is interesting, because I think those companies would then say, well, of course - the debate about private or public or all of those bits aside - of course you don't need to trust us. This is neutral because it's based on data. But also, you don't need to trust us because the doctor is always in the loop. And one of your articles had a really interesting line. You said, the doctor in the loop is often a loophole for avoiding critical questions. What did you mean by that?

MICHAEL STRANGE: Yeah. I didn't come up with that phrase. That's from several other people, in fact.

But the idea is the human in the loop. This notion is often used as an excuse to avoid responsibility and accountability for the technology. Because you say, well, it's fine, we can use the technology because there's always going to be a doctor, a nurse who's in control. But in reality, as the doctors and nurses will also point out, they haven't built these models, so they're not able to explain these models.

And of course, it depends on exactly what kind of models, what kind of systems we're talking about. But if we're talking about systems that involve some kind of level of complication, some kind of specialist knowledge, not all of which the healthcare professional necessarily has, then there comes a point about, well, what happens when it begins to hit the limits of that individual's knowledge?

We can talk about the autopilot effect, where at a certain point, the pilot is only brought in to manage the plane when things have really gone wrong. And that's the only time they get to practice. And of course, they can't possibly save the day because they don't have that experience. They've become, in a sense, de-professionalized. They've lost their expertise. And this is a real concern amongst a lot of healthcare professionals.

At the same time, we see studies showing that health professionals are using things like ChatGPT and generative AI to, for example, find better ways to explain something to a patient. Or they want a quick summary for themselves, so they use it for internet searches, as many people do. And in some ways, that's particularly dangerous because in those situations, it's not a regulated, transparent use of AI. Rather, it's a situation where the doctor is hoping that the system is correct. But if it's wrong, there's no way of following that up. There's a real issue here as well.

But yeah, sorry, I'm digressing here, but definitely the main concern here is to what extent the healthcare professional, as a human, really can be effective here. What is necessary for them to be able to maintain their expertise? Take a radiographer, for example. It's often said that a radiographer can do fantastic things with AI. But if they're given different AIs and the AIs disagree, it's impossible for the radiographer to come in and say, that one's right, that one's wrong, because of the level of complexity of the data involved. All they can do is ask a colleague: which one do you tend to prefer? But that's not necessarily based upon a rational judgment. It might just be that the name is nicer.

So this is the thing. We have to be very honest about to what extent the human in the loop really can be a human in the loop. To what extent they really can be an active agent, they really can make a fair judgment of the system. And if they can't, then what does that mean?

KIMBERLY NEVALA: And to your point earlier, that requires us to think about a lot of things. Not just how - and part of it is how the system and the recommendations or the findings are displayed or promoted - how they're encouraged to use it. But are they compensated? Are they rewarded for using the systems? Are they rewarded for pushing back? How are they actually even trained to think about incorporating, ultimately, an additional piece of information into how they would go about making those decisions? Do they, or are they confident in their ability to, override it? And again, that has a lot to do not just with their own confidence or hubris, but with their understanding of the system and how it's presented.

So all of that being said, if we want AI to be perhaps not an all-encompassing panacea, but a panacea for a lot of the ills in health care and in medicine, do we fundamentally have to change how it's designed? And if so, what are some of the key elements to be considered there?

MICHAEL STRANGE: Yeah. One thing I've been playing around with lately is how do you help developers have more trusting relations towards the public, towards patients, towards health care professionals?

There's a lot of discussion about how you get the public - patients, healthcare professionals - to trust the technology. But when you talk to developers and you ask, well, OK, do you talk to the public much about this? They might say, we ran it through a focus group or so on. But they're not that positive about the focus groups, because the focus groups typically tend to be people who have the resources and the availability to be in the focus group. So typically, they tend to be privileged men of above-average age. They're not going to give you a sense of how a broader population is going to receive a technology.

But also, in many cases, engineers see the public as people who tend to mess up the technology. They tend to use it in weird ways, maybe immoral ways. They distort it. The engineers have created this beautiful technology, and then it goes out into the big, bad world and the public screw it up. So in that direction, the relationship is one of general distrust. It's like: how can we lock things down so it's not going to go wrong when it goes out into the world?

And there's a lot of sense behind that, of course. And of course, just very anecdotally, one's own experience of how people play with things. But the problem with that is, of course, it also underlines that there's a very clear divide between developers and those who are affected by and meant to be using this technology.

So there's no easy, simple answer to this, because you're talking about a very complex ecosystem. But some of it is that you need to be able to find ways to explain things better to the public. But also you need to find ways in which you can better explain the needs of patients, the public, and healthcare professionals to the developers and also, of course, to the designers earlier on in the process.

Sometimes that is about being able to bring in people from different parts of society into that process.
We're at a time now where we see fewer and fewer women in leading roles within the tech sector, despite a common story of: well, we want more women in the tech sector; we're going to work on it; we're trying to encourage them through these initiatives.

We know the reality: it's gone in the opposite direction, and it continues to go that way. We know that originally, the computer industry was primarily a female-dominated industry, up until the point where people began to realize that it was quite important, and then men took over. We know this from the US. We know this from the UK. We see it in other countries as well. There are some fantastic histories on this, and it's an often forgotten history.
But what that points to is that we have a problem where a lot of the design and development is done by a small, unrepresentative group who are very well intentioned, in many cases, and really want to do well for the rest of society. But we also need to find ways in which the rest of society can be engaged.

So again, practically, that can be about training people from different backgrounds to enter the tech sector. Sometimes it involves, of course, just having more open group, town-hall-type meetings where you can better connect.

This is something well studied in healthcare overall. In cancer research, for example, there have been experiments with bringing cancer patients and their families into discussions about how scarce funding within cancer research should be allocated. And that's actually extremely productive.

You can think of the same when we talk about AI in healthcare. So you can bring in patients and their families - also, of course, healthcare professionals - earlier into the process. Not to see this just as something that we have to do, but to find ways in which it can directly empower innovation.

And I think this is a cultural shift as well, because a lot of the discussion around ethics and trust is seen as having to do your homework. Like something frustrating you have to do when you'd much rather do other things.

But if we can find ways in which it can actually empower the technology, to think about it in terms of how does it make the technology more resilient to a changing world? How can you make it more adaptable to very different contexts, very different populations? Then you can also see a way in which you can, in effect, monetize this. You can show there's an economic advantage to being able to have participatory dialogues, better recruitment policies.

And I know this is politically incorrect these days, but thinking about how you bring different groups into recruitment. Also, all the different ways in which you can have a broader discussion. Because I think one of the biggest dangers we have right now with this technology is, to some extent, the negative hype: that we've become so scared of it that we don't want to use it, that we say we just want to run away from it. Because there really are some real dangers if it's not done properly.

And we can avoid that if we can say, look, this is how the technology works. We have a nuanced debate where we can say there are limitations to it. When it looks like it's being empathetic, it's not really being empathetic. It just knows what words to use. And then that makes you think.

And of course, one thing that comes from that - which I think is important - is that it also makes you think more critically. It makes you rethink your relationship to healthcare professionals. Because you begin to think, well, OK, what I actually want from the technology is to feel that someone recognizes me. But if I'm told it doesn't really recognize me, that also reminds me that, still, overall in the healthcare system, I'm not recognized. And if I feel that's really important to me, and others feel the same, maybe we need to reorganize healthcare. Maybe we need to find ways in which we can have longer with a doctor early on. Or some talk about using the AI to get more data to the doctor earlier so they can, in effect, extend the consultation with you.

Maybe that can work in some cases, but we can see that there's a lot here about how we should do AI. But just talking about AI in this way, in terms of what we want from AI and what are the potential dangers, it also allows us to have a very important discussion about how healthcare should be organized as well.

KIMBERLY NEVALA: I think that's incredible. It's a nice way of flipping the script, but also of looking at it, again, holistically, looking at the diverse ecosystem. And also not trying to rush in to say, we're trying to fix a problem here or there's a shortcoming that we can do with technology. Sometimes that thing we're trying to fix with technology perhaps tells us less about the capabilities and the value of the technology and more about the current systems we have in place.

And so rather than just automating or augmenting systems and practices that may not be serving us well, we need to think more holistically to design a system that works in the way we want it to work and utilize the technology to support that. And that does feel, as you said upfront, like a fundamental mindset shift for all involved, so awesome.

Any final words you'd like to leave with the audience?

MICHAEL STRANGE: No. I think just the way you put it, that was very eloquent.

Exactly that word, holism. Because AI basically is a connective technology. It's about connecting data points in beautiful patterns that a human can't see normally. It allows us to see things that we might not normally see as connected. And that's fantastic.

So when we want to implement it in healthcare, we really need to think through how we can best support that. What other kinds of data do we need to bring into this? But also, of course, as with anything new that we're experimenting with, it's very important that we don't put excess burden on it, excess responsibility. Because otherwise, when things go wrong, we'll learn to hate it and blame it, when actually it's us who've been using it in the wrong way.

So it's a fantastic opportunity, but we really have to be very, very human and honest about this.

KIMBERLY NEVALA: Wise words indeed. And I really appreciate it. I will say, any eloquence in that statement from me comes very much from folks like yourself who are doing the work, asking the questions, and putting them out there. I find that so exciting and so challenging, so I really appreciate that and you coming on to share some of your insights today.

MICHAEL STRANGE: Thank you. It's been a pleasure.

KIMBERLY NEVALA: Alright. If you'd like to continue learning from thinkers, doers, and advocates such as Michael, subscribe to Pondering AI now. We're available wherever you get your podcasts and also on YouTube. In addition, if you have comments, questions, or guest suggestions, please write to us at ponderingai@sas.com.

Creators and Guests

Kimberly Nevala
Host - Strategic Advisor at SAS

Dr. Michael Strange
Guest - Associate Professor, Department of Global Political Studies, Malmö University