Stories We Tech with Dr. Ash Watson
KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.
In this episode, I am excited to bring you sociologist Dr. Ash Watson. Ash is a senior lecturer and fellow at the University of New South Wales. She is also a senior research fellow at the ARC Center of Excellence for Automated Decision-Making and Society. We're going to be talking about the premise of digital tech and how storytelling impacts our culture and our technology. So welcome to the show, Ash.
ASH WATSON: Hi. Thanks so much for having me on.
KIMBERLY NEVALA: Now, we've never spoken to a sociologist before. Did your interest in sociology come from, initially, an interest in storytelling or vice versa? Or did it all just kind of mesh in a wonderful mix along the way?
ASH WATSON: Mmm. Great question. It definitely started a bit with storytelling. In my undergraduate degree, I was actually a major in creative writing and literature. I had never heard of sociology before I came to university. It kind of wasn't a subject taught at the high school where I went to school.
But I picked up a couple of electives along the way and just loved the way of understanding society that kind of sociological theory really brings to life. And I found that it gelled so well with the classes I was taking about classic and contemporary literature that I have really tried to blend them in my research.
A lot of my research - I'm a qualitative sociologist - so most of my research involves talking to people and hearing their stories in various forms. But I also try and bring in engagements with popular culture stories, creative forms of storytelling into the research where I can.
KIMBERLY NEVALA: So as we dive into this conversation, what should folks know about maybe what sociology itself is and things we need to consider about the sociology of storytelling itself?
ASH WATSON: Well, sociology is essentially the study of society. I am a sociologist because I am just deeply interested in people and how we live together, and storytelling is a big part of that. It's a big part of how we make sense of ourselves in the world. It's a big part of how we make sense of society around us. And it's a key part of basically how we live together, how we understand that we are a society, a community kind of distinct from others around the world and distinct in time as well.
KIMBERLY NEVALA: So storytelling really is a sense-making mechanism for ourselves? For what we think about ourselves and others?
ASH WATSON: Definitely. A real sense-making mechanism and meaning-making mechanism. We make meaning through the kinds of stories that we tell.
KIMBERLY NEVALA: And so how did this interest, then, lead you to really starting to research and think about the impacts of technology and digital technology, in particular, on our understandings of ourselves and the societies that we live in?
ASH WATSON: Mmm. Storytelling is so key to technology: how we talk about it, how we make sense of the place of it in our lives. So many of the modern technologies that we have today have their roots in things like sci-fi, in the very imaginative visions of people kind of 100, 150 years ago. So I'm particularly interested in how those stories are continuing to play out in the kinds of technologies that we see today. And especially in how we're imagining what is coming next. In the kinds of imaginaries we have about the future and the role that technology can or should play in that future life.
KIMBERLY NEVALA: From your perspective, how much of what we're pursuing today in tech seems to be a pursuit of bringing those older stories, those older fictions around science fiction in particular, to life? And how much are we actively challenging ourselves to think about and create, I suppose, new futures and new perspectives?
ASH WATSON: It's such a difficult question.
In so many ways, it feels like we can't escape those stories. They lay the patterns of our sense-making. And it can be really hard. Even when we're trying to think against them, we're still thinking in relation to them. It's kind of hard to see how we can possibly break out of the box of the stories that we've told before.
Not to say that people aren't trying to do so, aren't trying to imagine technologies like AI in a way that isn't just a replication of these great sci-fi stories we've had over the past 100 years. But we definitely - I don't know if we can help but work in relation to those stories.
KIMBERLY NEVALA: Now…so with all that being said, you've also really pointed out in some of your writings that digital technology, in particular, challenges our senses as humans. And we know we see this in our innate tendency, even for those of us who really do know better, to anthropomorphize, to imbue AI systems in particular, with feelings and intention and so on and so forth.
But you also point out that digital tech has a - I don't know if it's uncanny, that's my word - influence on our collective feelings and moods. Some might call that the collective vibe. Can you talk to us a little bit about that?
ASH WATSON: Yeah, of course. I mean, technology is a tool, and it's important for us to remember that. It's kind of what we do with technology that really makes a difference, that really matters in society.
But we also still need to pay attention to the way that technology shapes us. The way that technology shapes our social behaviors, the way that we increasingly act, the way that a computer wants us to act in order to be legible to those kinds of computerized systems. So it's kind of a bit of a relationship, a give and take between the agency that we have as people acting in the world, but also the impact that technologies have on us.
Yes, there is this kind of very rich vibe in contemporary society where technology seems totally embedded in the ways that we relate to each other. I think we've moved beyond thinking about the digital versus the analog life. Digital technologies are so embedded in our day-to-day lives, in the fabric of our society. We no longer go online. We just are online all of the time. Yeah, it definitely creates a very particular kind of texture to our everyday worlds.
KIMBERLY NEVALA: And another thing I've been wondering a little bit about recently.
You mentioned that technology is always a tool. But there's definitely a trend afoot today to talk about AI and AI systems as things like digital humans or assistants or co-pilots. Or use it as, or interact with it like, you would an intern. And that seems to me to be promoting a very specific view of the technology.
People get quite upset when we say AI is just a tool, that these systems are just tools. And I've been trying to figure out in my own mind why we are so comfortable calling what are essentially intelligent or smart computer systems digital humans, but we really push back against just calling them intelligent tools or smart tools. So any insights there from the point of view of sociology?
ASH WATSON: Yeah, the language that we use to talk about these technologies, the adjectives that we apply to them do so much heavy lifting.
Even with things like the cloud. It creates this idea that the technology has become this ephemeral thing that lives in the sky and has no impact. And actually, it's just a word that hides that these are still very material storage systems. They are just placed elsewhere. They are in somebody else's backyard rather than in ours.
We see exactly the same heavy lifting going on with how we're talking about AI. And when we anthropomorphize it, we do see it placed in these kinds of power dynamics, these roles, this kind of servitude. It's no mistake that AI voice assistants are overwhelmingly female voices because of the kind of domestic place of these within a household.
And yes, with AI agents and things, it does a lot of heavy lifting about how we are to make sense of these technologies. We can recognize the very human-to-human value of having an assistant, a helper, be able to guide us through a kind of really complicated system, something we're not familiar with or something that basically we just need some help to navigate.
And calling the technology by a human kind of name, like an agent, like an assistant, makes us connect those dots between how valuable we could find a real person standing next to us through whatever journey it is; from mundane website navigation to trying to make sense of a really complicated healthcare journey. It carries that value across to the technology, and we feel that the technology can actually recreate that very human sense of value. And yeah, kind of journeying alongside us rather than thinking of it as a computerized tool, another website or platform.
KIMBERLY NEVALA: Does that also play into making it more difficult for us to have really constructive, critical discourse about it? I think you said sometimes as a social scientist, even trying to have a meaningful conversation about things like ethics, the reaction very often is as if you are against the tech.
And I wonder how much of that view of the tech as analogous to the assistant, the intern, the person - imbuing it with some of those values and, of course, you want this outcome - plays into that dynamic?
ASH WATSON: Oh, totally. It takes a long time to be able to get on the same page, I think, because these clever words that we use, like an intern, an agent, the cloud, they act kind of like a veil. They hide a lot of the actual technical systems, what's materially going on in the background. And so it can be extra work to be able to get down to ground, to talk in concrete and material terms about, what is it that we're actually talking about, in terms of these technologies here, and what is their real impact in the world?
KIMBERLY NEVALA: And why, perhaps, in areas like healthcare or education, would you not want to apply them? Because clearly, these are areas everyone wants to do well and make people well through them.
And, in fact, I found your work initially through a paper you wrote about AI in healthcare. Where you were looking at the real value in some of the stories and narratives around AI-enabled tech or AI-driven tech - maybe it was just analytic, data-driven tech more broadly - and really found, which struck me, this idea of the value of the promise of the promise.
So can you tell folks a little bit about what led you into that research initially? And then we'll talk about some of the conclusions or takeaways, perhaps, that we, as business leaders, as persons developing these systems, should think about, if any. Or, maybe, as consumers.
ASH WATSON: Yeah. Of course. A couple of years ago, I started working for this big national research center that you mentioned: the ARC Center of Excellence for Automated Decision-Making and Society. Quite a mouthful. A shortened acronym is ADMS, which is much easier to say.
KIMBERLY NEVALA: That's good to know.
ASH WATSON: Yeah. And a key goal when starting out with this center was for us as researchers to better understand the impact that automation, automated decision-making, and applications like AI were having in Australia. So much of the research that we see comes out of the North American context or the European context. And we really wanted to understand, OK, what is actually happening here?
Not only is it a big focus in research from that North American/European context, but a lot of the news that we get here about new innovations is also from overseas. So we really wanted to set out and find out: what is happening here in practical terms? What applications are actually in use?
And in starting to do this research, I was overwhelmed by the sheer spread of language used to talk about these tools. So many kinds of clever turns of phrase. Every new tech or design company seems to call the same application a slightly different thing to have their edge in the market. I understand why people do it. But it makes it kind of hard for people coming into the space, say, a sociologist who primarily works in healthcare and is now also having to learn to grapple with these technologies, because they're becoming so significant in the healthcare landscape.
We first of all wanted to set out and say, OK, what is going on? What are these applications? What common understandings can we reach about what these applications actually do, not just what they promise to do? Because how they're talked about is often around the benefit of the application rather than how they actually function.
So we did a big analysis. We did a deep dive into a heap of news and media coverage as well as a heap of the companies themselves: their website landing pages, the people who are involved with them, the kinds of coverage that they're getting across various public-facing and industry-facing media.
And we dug down into the different applications that we found talked about, and we asked: are they actually in Australia? Are these things being used here? And are they any more than a flash in the pan? Are they actually seeing some significant use?
And we found a heap of case studies of applications that are being used in Australia. What we did was categorize these beyond the kind of hype language that's used in promotional materials, and we came up with four key categories of applications that we're really seeing used in Australia.
The first is monitoring and tracking technologies: things like enhancements to a smartphone or a smartwatch, a particular app, or a new kind of physical hardware for patients to use at home for any kind of monitoring and tracking healthcare need. The other key areas where we're seeing AI in particular being used in Australian healthcare are data management and analysis within hospitals and clinics, on that back-end organizational side; cloud computing, so that clinics and hospitals can better share across locations the key digital data that they work with; and robotics.
As well as coming up with that categorization, then what really stood out to us as sociologists that was interesting about this promotional material was what was promised in them. Sociologists of technology for a long time have studied what they call promissory discourses. Basically the kind of hype and promotion that technologies like this are talked about, that innovations are talked about using.
And it's often the promise of x: the promise of cost savings, the promise of speed, the promise of greater patient healthcare outcomes. We definitely saw this at play in the materials that we analyzed. But something bigger seemed even more significant to us around AI innovation, and that was the promise of promise itself. And promise is a really interesting word, because what the materials about AI seemed to do was say: we promise that these technologies are promising.
KIMBERLY NEVALA: Interesting.
ASH WATSON: Yeah, it carries this sense of both potential and assurance. There's a sense of inevitability in them, a guarantee that these things will work. But also, rather than promising to do something specific, they carry this sense of promise that, yes, this is the right kind of bigger path we need to be on to reach a future we can't yet imagine, and this is the right way to go about it.
KIMBERLY NEVALA: It's an odd thing that it's going to bring about a future that, as you said, perhaps we can't imagine, yet we are promising to bring it about.
So there's got to be some conceptualization there. But you also often hear this refrain of this is the worst the technology will ever be. Again typically followed by, and therefore, we should really be learning how to use it now so we "grow with it," as it matures into its full potential.
I'm wondering if you see a bit of that as some of those trends or talking points? And what import they have then for how we think about when and how we do or don't both adopt and engage the technology but also when and where we develop the technology?
ASH WATSON: Yeah, it's a really interesting thing to be able to hold, this sense, this kind of assurance that the technology is growing and we need to be able to grow with it, and there will be teething problems.
It's hard to hold that at the same time as seeing where the most intensive applications are being developed. Which is in areas that carry this additional moral significance, like healthcare, and in crisis contexts, like following wildfires, following floods, even out the back of the pandemic.
So yes, we want to believe that it's OK that we have these teething problems. But also, these technologies are being tested in highly sensitive areas. We see particular uptake in places like healthcare because there's this moral imperative that technology can and must help improve people's lives. It's this very simple argument that you can't be against the technology, because what we are is pro saving people. You can't be against saving people's lives. So you have to be pro this development.
So it's difficult to try and hold these two things at the same time. To say, oh, OK, it's fine that it has teething problems while at the same time they're being tested in these highly sensitive areas with real impacts on people's lives, especially people who are kind of vulnerable in society.
KIMBERLY NEVALA: And is part of the narrative or the story that's being spun there also predicated on this thesis that technology is always progressive, it's always a net good, it's always an innovation to be pursued?
ASH WATSON: Definitely. There's this huge central narrative about growth and development. That all innovation is good change and that positive future social change requires technological innovation. I think that's one of the key narratives that we see in contemporary society: that the future requires us to technically innovate.
KIMBERLY NEVALA: Do you think some of that is because technologies, at least until recently - maybe AI and the way AI systems operate or present themselves to us might challenge this - are hard? They're physical, or at least they used to be physical objects. But they were things we could interact with in a fairly tangible way, and they were in front of us. You could, figuratively speaking, put your hands on them.
So we tend to spend a lot of time projecting a story about how technology supports life and society and improves all aspects of our world. But what about non-technological approaches? How do we get into all the soft, hard-to-grasp stuff that comes with how we interact with each other? And more, I don't know, it's not organizational - I was going to say sociological, but that's just throwing the word "sociology" out here for the sake of it, I think - but other ways of approaching problems, whether that's rethinking economic systems or rethinking some of our social structures. Is there an aspect to this where that stuff is just not as easy for us to really see and understand? Or am I off on a complete tangent there?
ASH WATSON: No, I think you’re bang on.
It's hard work and important work, especially for people like me, like sociologists, to ground these conversations, to be able to talk in concrete and material terms about what these technologies involve, rather than this abstract idea of technology existing in the cloud, or yes, being this ephemeral thing in society.
It takes work for us to be able to ground these conversations in real case studies, in real environments, in real physical devices, in the real impacts they're having on our relationships, from micro to macro scales in society, and also, on our environment.
I suppose that's one of the key things that we wanted to do in that paper on the promise of promise and the impacts of AI in healthcare. By coming up with that categorization, we wanted to really give people some concrete things to hang their ideas on. To say, oh, OK. So these applications are about monitoring and tracking technologies. I can picture those. I know what a tracking app on my watch looks like. I know what a tracking app on a smartphone looks like.
Even things like data management and analysis. If we can draw back down the conversation from talking about these broad things like AI automation and say, well, this is a computer program that somebody at a clinic at the hospital uses to manage the enormous amount of digital data that flows through these institutions, it grounds it in a way and locates it in the real world. And I think that's an important step for us to be able to talk about these technologies in terms of their real-world impacts.
KIMBERLY NEVALA: It does take a bit more effort and probably a bit more time as well, unfortunately.
Now, you alluded earlier to innovation in the context of crises. And recently - we'll fast forward from the previous paper to today - you have also been talking about crises as laboratories of innovation and what the impacts and implications of that tend to be. It hearkens back, I suppose, to the old saying that you should never let a good crisis go to waste.
But what is important for us to understand about how the narrative or the approach to innovation changes when it's placed within a story of crisis and need, urgent need?
ASH WATSON: It really interested me when I started to look at applications of AI and automation that were being developed at the back of things like floods and wildfires - even pandemic-related technologies.
How common a phrase like "it's a laboratory of innovation" was across so many different fields - among technologists, even academics, some governments - using these situations to really drive technological investment. And not to say that they're wrong. So many of these situations can be helped by better technology.
But what we see in situations like this is that there are often calls. We've seen this out the back of the LA wildfires. I read a piece in Forbes just last week where people said, this also happened in Australia a few years ago. And just like in the pandemic, when things like this happen, we need rapid government investment in technical solutions so things like this don't happen again.
What was most interesting about that story was how they talked so positively about things that happened in Australia, when actually the tracking technology that the government invested in here to trace COVID cases was a very, very public $21 million failure. I think it tracked two cases of the virus and then was very publicly, and humiliatingly for the government, shut down. So I read that and I thought, oh, we actually don't want things like that, because these enormous, very expensive failures result.
What we see in moments like this is this huge moral imperative, I think even more strongly than we see in health, that technology is necessary for us to save lives and to save people's homes. It's a huge drive. We see lots of rapid investment in technologies happening after things like the LA wildfires. Investment, both financial and emotional, in how important these technologies are for us.
We also see a really hurried adoption. And because it's hurried, often quite an uncritical or hasty adoption of technologies by all sorts of companies and organizations who want to help and want to be seen as though they're doing something after things like this happen.
And at the moment, because of this cascading climate crisis that we're living within, we see this dystopian figure of the future really compelling innovation in the present. I don't think it's something that's easy for us to disentangle, because it is such a moral argument that we really do need technological innovation to save people. And technology can help. It can make a huge difference. But it's important that we do this well. It's important for us to have good technology and sound innovation and development happening, rather than hasty, uncritical, expensive, failed innovation.
KIMBERLY NEVALA: And it'll be interesting to see the ultimate analyses that come out of the retrospectives on things like the LA fires, for instance. But I think that is an area where one could likely argue there was a lot of knowledge and understanding, and even wisdom, about what and how things could have been different that had very little to do with, and potentially very little dependence on, our ability to find a technological way to monitor, predict, and prevent. And so that's always that interesting point.
So is there something that we need to be doing when we rush into these kinds of circumstances to make sure that we're not just forwarding technological solutions but also looking at, what's the word I'm looking for, more analog solutions as well? Or maybe there were some processes or policies put in place that were contrary to what the known wisdom - which was maybe sometimes very unpopular - might have been? So going back to some of the science as well and not necessarily jumping just to tech.
ASH WATSON: Definitely. It would be so great if things like this could be fixed with technology alone, as if technology exists in a vacuum. It would be great if a wildfire was a technical problem. And people who were kind of pushing rapid innovation in these significant moments do frame something like a wildfire as a technical problem in need of a technical solution.
It's harder for us to grapple with as a society the fact that digital change, climate change, is a people problem. It requires us to work together in different ways. It requires people systems to change rather than a kind of a new app, a new sensor here, a new kind of technical Band-Aid.
Yes, it very much requires us to grapple with the kind of governmental, organizational, institutional community social dynamics of how we handle situations like this and what we need to do differently in the future to, yes, try and lessen our susceptibility to them and make more resilient, resourceful communities.
KIMBERLY NEVALA: Is there also an aspect of this that if we take more what I'm going to call procedural or process or people-oriented approaches to some of these problems. And I'm not, by the way, suggesting that both elements, technology and people, don't come together. I think there's pieces of both in all of these because they are complicated and complex situations.
But we tend to be more comfortable, or at least it's more of a standard operating procedure, that when we make investment in building something, there is, in theory, at least, the ability to measure a hard return. We expect a return on investment. And we don't… I mean, are we in danger of expecting returns on investment for things that are more about developments around society and how we work together? And these models - whether they're economic and business and our cooperative social models - they don't have that same black and white, return on investment. Do we need to rethink how we think about return on those kind of things as well?
ASH WATSON: Yeah, definitely.
It's obviously a huge underlying challenge that one of the central logics driving how we understand development in society at the moment is an economic logic: we frame things in terms of return on investment. And that is one of the biggest reasons why we're seeing rapid investment in technology when, often, we know the problem is something like labor shortages.
Instead of investing in people and putting money towards wages and new systems and new work, we're investing instead in technological systems, because they carry this argument of return on investment; people can make money off them in a way that you can't by investing in a labor workforce.
So those underlying economic logics do definitely make these harder conversations to have because we need to shift the narrative around how we measure the success and the value of developments - technical or social or otherwise. Even concepts like social return on investment still kind of come back to this underlying economic logic of things.
So it's a big challenge that I have no answers for. But yes, I agree that it's kind of a really complicated part of the overall problem, that this kind of economic logic drives how we understand things.
KIMBERLY NEVALA: And can you…I probably jumped forward a little bit in the story there as well.
The paper also lays out some interesting dynamics between a couple of different dimensions. Between innovation - which is either spawned by or relies on the perception or the reality of complexity - and visualization. Which I thought was interesting; I hadn't seen that before. And then speed.
So how do those components come together? And what elements of that are positive and in what ways do we have to be careful about how those levers are interacting?
ASH WATSON: Mmm. Yeah. So in this project, I analyzed a heap of applications of AI, especially around wildfires, around floods, and around COVID-19. And yes, I was interested in digging down into that moral imperative driving investment in each of these areas.
Then when I'd found a series of applications, I asked, well, what actually is innovative here? What is it that these technologies are doing? What are their claims to innovation? And what really stood out to me is, yes, speed and complexity, which are not new claims to innovation. We've seen this with technological investment for a very long time, I think, since the dawn of contemporary computing. Speed and the ability to handle complexity is a huge part of how we think about the value of new technologies.
Visualization was a really interesting thing that stood out to me. It seemed to me that what made these technologies innovative was that they are visualizing technologies. There was a key visual element to each of the applications that I found: they turn nonvisual data into something visual and, therefore, make it easier for us to deal meaningfully with the information and draw insights out of it.
That visual processing seemed really key to the claims to innovation that these applications were making. So I think it's something interesting that I'm going to keep paying attention to as we see where AI develops along the kinds of paths that we're on at the moment.
KIMBERLY NEVALA: Does that imply that we may have a tendency then to look to technology for solutions in areas where we believe we can put the data together in some way, that it can be visualized, can be made sort of concrete for folks? Which may also mean that we're overlooking opportunities in areas where perhaps the data is not so straightforward or so amenable to visualization?
ASH WATSON: I think so-- 100%. How these novel trajectories develop shapes what kinds of things are seen as particularly innovative. If visualization is one of these things, then yes, we're going to see trends towards valuing data that fits into visualizing models and understanding their outputs in visual terms. So we will definitely see a kind of concentration around information inputs that can be made visual. Yes.
KIMBERLY NEVALA: Yeah. This was a fabulous paper. It raised a ton of questions and points to ponder, if you will.
For me, another point that you made, and that I think we need to wrestle with more - and not just in the context of crisis - is thinking about when and where we are deploying these technologies so we're doing so in ways that are meaningful and germane to the populations they are meant to serve. You note that when innovation occurs, or is proposed, in the context of a crisis, it changes not just how and why we go after it but for whom. And there's a vulnerability to very paternalistic solutions in that moment.
Can you talk to us a little bit about that? Again, I think from the side of sociology and the stories that we tell ourselves, this has some really interesting and potentially troubling implications as well.
ASH WATSON: Yeah, of course. One of the key questions, or the second key question that I asked, I suppose, after what is innovative about these technologies was who are they really for? Because there was a difference, actually, that I found in who designers and developers talked about in the media about who their innovations would help versus who was actually buying them and using them.
We see a very personal story emerge in a lot of the news interviews that designers are doing: this application will help your grandmother know if her house is going to flood or not. And in reality, these kinds of flood-tracking technologies are bought by companies like mining companies, to know how far in advance they need to move all of their highly valuable infrastructure - their trucks, their rail - and to help them plan.
So quite a distinction between your grandmother and whether her backyard will flood versus an enormous mining company. Not to say that they're not both valuable applications, but there is a real distinction between the stories that are told about who benefits from these applications versus who actually ends up using them.
So I was interested, then, in how this sense of vulnerability really seeps into the kinds of media coverage that relates to crisis technologies and who we talk about them being for. I think because of the kind of climate crisis that we're in at the moment there's this sense that potentially everybody is vulnerable. And, therefore, everybody can benefit from technological innovation in this space. Which just significantly and forever expands the sense of value that we cast into these technologies; that they are necessarily and morally for everyone, that everybody is potentially susceptible. Everybody is potentially vulnerable. And, therefore, we must invest in these technologies to make a difference to everybody's future.
Of course, that's not necessarily the case. It's a really nice story that if we just invest in the right AI model, everyone in the future of society and humanity will be saved. But it's a compelling argument, this subtle sense that everyone is potentially vulnerable. And, therefore, technological innovation is for everyone.
We know that's not the case. So much technological development actually worsens the divide between the haves and the have-nots in society. And these are great moments where we see that gap widening, even though the story says that we're closing the gap between the top and the bottom ends of society.
KIMBERLY NEVALA: Does this also create a perverse incentive for us to either manufacture crises - although, I mean, in this day and age there's plenty around to pick from - or to magnify either the impact or the immediacy of a crisis that is either before us today or is developing?
ASH WATSON: Mmm. I can see how that would be a real concern. Yeah, it's a tricky one, because we also get increasingly saturated by just the speed and the sheer number of crises - especially climate crises - that we're dealing with. But also all the other crises around the world: political crises, war, economic and cost-of-living crises.
These are all interlinked and related into this broader sense of a kind of dystopian future that we feel like we're facing down and that technology promises to solve for us. I think what we definitely see magnified is the impact that technology can have in relation to these crises.
So yes, what I think is magnified is perhaps not necessarily the crisis itself, but the risk that follows a crisis of us not capitalizing on that moment and taking advantage of it to achieve really rapid innovation.
KIMBERLY NEVALA: So that really, more so than an incentive to manufacture or magnify the crisis itself, it just plays back into that story of ultimate complexity and the need for speed and just nicely closes that loop then.
So as you've worked, and your work has grown over the years, and you've looked at this from these different perspectives, what is it that you think we need to be asking of ourselves, of the companies that we work with? What stories do we need to start telling that will help us promote a healthy narrative and one that is more aware or perhaps societally oriented, if you will?
ASH WATSON: Some really positive developments that I'm seeing involve this big turn of research and development experts going to the people that they imagine being at the end of the train of benefit.
The actual communities on the ground who are really affected by things like wildfires and floods. Even people with complex health conditions who are seen as the end beneficiaries, the end recipients of innovation - and actually working closely with those people. Because they know best what challenges they face. They understand the context of their everyday lives. They know how technology is already helping. They know what their local community needs are.
And I think we're seeing an increasing turn to things like co-design processes being built into research and development. I think much more of that needs to happen. And especially not in these kind of token community engagement ways. But really meaningful development and innovation can happen when we, yes, get outside the lab. Get outside these often traditional ways of working that designers can have; where they have this great technology, and they go in search of a problem to apply it to.
Working meaningfully with communities I think is such a vital first step. We're starting to see some of this come into the industry, and we need to see much more of it in really meaningful ways.
KIMBERLY NEVALA: Awesome. So any final thoughts, things we didn't touch on that you would like to communicate to the audience before we wrap up?
ASH WATSON: Oh, great question. I don't know. I feel like we've had a broad conversation. I suppose where I go from here in my own work - and I hopefully can meet lots of other people who are also interested in this - is that there remains such a gap between how we're talking about technology, what the promise of technology seems to be, and the actual realities of the material worlds that we live in today.
I heard a great talk recently by somebody who had done some research in the NHS about this amazing new kind of AI tracking and monitoring technology for people who had just come out of complex surgeries. And they told this great anecdote about how they'd wheeled the machine into the room and went to hook it up to the patient, but there was only one power point in the room that worked.
So they have this amazing technology. They bring it into the space. They can't unplug the actual machine that's keeping the person alive in order to plug in AI monitoring technology. And it, therefore, sits useless in the corner because there's physically no socket that works for them to plug it into.
This is such a material and complex gap for us to address between what's promised by even technology that does exist in the world versus the world that we live in. So much work is needed to close it. There's so many opportunities to do so. So that's where my attention is really focused at the moment. And I think yeah, there's so much opportunity for work in this space.
KIMBERLY NEVALA: Well, we will direct folks to your work. I highly encourage them to follow it, so that we can ensure the technology we are creating today is not, in fact, being custom designed for those corners. And/or that the technology that perhaps is not fit for purpose finds those closets in the ways that it should.
So thank you so much. I really appreciated your time and your insights today.
ASH WATSON: Yeah, thanks so much for having me. It's a pleasure.
KIMBERLY NEVALA: Excellent. Well, if you'd like to continue learning from thinkers, doers, and activists like Ash, subscribe to Pondering AI now. We're available wherever you listen to podcasts as well as on YouTube. Also, if you have comments, questions, or guest suggestions, you can reach us at ponderingai@sas.com. Email us today.
