AI Literacy Is Not All We Need with Mel Sellick

KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala. In this episode, we're pondering human readiness with Mel Sellick. Mel is an applied psychologist whose work, amongst other things, has shaped IEEE standards for organizational readiness and human-AI interaction, as well as UNESCO policies. Welcome to the show, Mel.

MEL SELLICK: Good to see you, Kimberly. Happy to be here and jump into this. These are my favorite topics.

KIMBERLY NEVALA: So tell folks a little bit about the intellectual journey that you have been on. I know that you are currently, in fact, pursuing your PhD so we'll be calling you Dr. Sellick very shortly here.

MEL SELLICK: Yeah, let's hope so, in a matter of a few months. So, well, the majority of my background was in broadcast news. I worked for NBC as a live news journalist. And over time, my interest in the inner workings of narrative really started to grow: how we live our lives telling stories, creating stories inside of ourselves and expressing them outwards. So that led me to pursue a master's in psychology focused on the impact of technology on human behavior.

And then ultimately I really realized that I wanted to help create better guidelines for healthier technology interaction. And I would like to do that on a global level for policy. So I pursued a PhD with that explicitly in mind.

KIMBERLY NEVALA: Excellent. And you had said something to me that I found so interesting about your early work as a broadcast journalist. You said it made you sort of uniquely sensitive to the power of framing and language. And that just seems so relevant for the moment we're in today. So can you talk a little bit about how, or if, that early work - which some might say doesn't obviously connect - really informs or factors into your current work, if at all?

MEL SELLICK: That's actually a really adept question. I think that everything is a story. Everything is a narrative. And we are also in a time where we're engaging with tools that really have the capacity to influence us and to help write our own storylines. Not only in the ways that we feel and think about ourselves and frame our capacities, but actually how we relate to others.

KIMBERLY NEVALA: So speaking of narratives, Anthropic recently published a paper, co-published with a consultancy - I don't remember which one - in which one of their findings was that users actively undermine their own autonomy when they're using these systems. So I want us to put aside the hubris of a company that is deploying systems actively designed to incentivize that very behavior. We'll just put that aside for a moment. But it raised what you called the myth of the reasonable user. What is that myth? And why is it important for us to confront it head on these days?

MEL SELLICK: Yeah, that's a great example. And I got a little riled up. Now, to be fair, Anthropic has, out of most of the big tech companies, the best reputation among researchers for at least having, and continuing to have, safety teams and that kind of thing. But that particular statement did also get under my skin a little bit, because, as you said, I have really called out this myth of the reasonable user.

And what I mean by that is that these systems are really designed for people who show up able, with the capacity to continually be calm decision makers under pressure and uncertainty. People who have the ability to maintain their judgment in any kind of influential scenario, who understand power dynamics, and who are aware of their own internal biases and mental schema. And this just doesn't exist. This type of human doesn't exist. We show up. It's like that line - I say it probably in every single interview I do - but wherever you go, there you are.

So we carry all of ourselves into our situations. We're not these bifurcated beings - I'm this personal individual over here, and at work I'm completely something else. And we're really seeing these lines blur. So going back to this idea of being reasonable: we have this assumption that in work environments we show up and we're going to be utilizing these tools well because we've been educated. We understand. We're highly AI literate. We know what the tools can do, but we still have a big blind spot in what the tools are actually doing to us.

Just this last week, actually - I think a few days ago - MIT released a study that, interestingly enough, didn't get a lot of attention. We have this assumption that it's the emotional people who have challenges with dependencies on AI. What we're actually seeing now is being called instrumental dependency. In our professional lives, workers are showing up and becoming heavily dependent on AI for things like writing and strategy and decision making. And that can be equally destructive.

KIMBERLY NEVALA: Yeah. So I think this sets the stage really perfectly for this discussion of human readiness. And I think most of our listeners will be at least familiar with the topic of AI literacy. But you've actually set the stage here for why AI literacy does not equate to readiness. And for all of us in the field who are feeling potentially smug - I don't know a better word for smug that doesn't have some negative connotations - but who might be feeling a little bit smug about our own ability to resist these forces, I would say take a seat, because, in fact, in this case, we're all vulnerable. But what differentiates human readiness from AI literacy? And why does this matter?

MEL SELLICK: Yeah, so for me, I think that literacy is about interfaces and readiness is about the interior. So instead of taking the lens and putting it on the AI, it's actually bringing it back and asking: what human capacities do we need to engage or cultivate in order to utilize these tools in a healthy way, so that we're not diminishing those capacities? So what does that look like? It looks like emotional stability. It looks like cognitive competency and capability. It looks like relational awareness.

These are things that matter because when we walk into a scenario that is an asymmetrical relationship from the beginning, it's going to influence us. And if we are not actually taking a stance of some personal responsibility ahead of time, we're really allowing ourselves to become quite vulnerable. And I know one of the things that has been coming up for me a lot lately is this question of vulnerability. And I want to say off the top that vulnerability is not a binary. It's actually context dependent. And it's also cumulative with AI, if you think about it like that.

And so we can make a case that humans are state-based beings. We have bad days. Things go wrong. We have life circumstances. Perhaps we have situational trauma and/or stress, or chronic trauma, or intergenerational trauma. All of these things impact how we show up to the interaction. And I think we definitely have to be a little more cognizant when we say "vulnerable populations," because these situations change with conditions.

Now, that being the caveat, I really want to point to one thing that also came out in the last couple of weeks. Users who have low literacy and users who have English as a second language tend to be far more vulnerable in the outputs that AI gives them. Let me explain that. Basically, what they found is that if you do not have a high level of English proficiency and you have a low level of literacy - both of those things, but especially compounded together - you get lower quality outputs from the AI. And in some cases - this was an MIT study - they found that the answers ChatGPT gave users were diminishing and off-putting.

KIMBERLY NEVALA: Interesting. Terrifying. Terrifying. So would it be a gross oversimplification if I said that literacy explains how the systems work and human readiness enables us to use them appropriately in whatever context we are in at the time?

MEL SELLICK: That's not an oversimplification. It's an easy way to understand it. I think what we really need to be looking at is that, by and large, AI literacy programs, even ethics frameworks, all focus very heavily on the technical aspects. How do we show up, and how do we deal with these tools in a way that we understand what they can do, especially related to privacy? And all of that is extremely important.

But I also think there's always going to be a mismatch here if we don't have some form of social structure where we are able to talk about the tensions that are rising. So for example, the blame game, the shame game. I don't know if you've noticed this, but we are in a real bind here.

So we are basically expected to learn these tools. We're told, and we're telling our future gens: hey, you're not going to get hired unless you have a certain set of AI skills. So we actually encourage heavy use and/or proficiency so we can be more efficient humans in our work lives. And yet at the same time, if you're a worker and you use AI and you let other people know you used AI, there's a massive stigma against that, because humans have basically evolved to equate effort with value. So that's one bind. And on the other side, if you don't learn the tools, then you are equally stigmatized.

So I feel as if we blame people and we shame people in either direction. And what I would say is, on an organizational level, first of all, we need to create safe spaces where people can actually talk about what they're experiencing. Whether it's fear, worry about job loss or becoming obsolete, anxiety about not being efficient with the tools - whatever it is. It's a massive spectrum.

KIMBERLY NEVALA: Yeah, it is. And it is interesting, because I think some of the early PR hype, if you will, about these tools has been that they're going to make everything so much more productive, so much more efficient. You're going to need fewer humans. And it has been a narrative that - despite some handwaving that this is about enabling humans - has, in point of fact, set humans and the tools up in opposition or in conflict with each other. It's got to be one or the other. So for better or for worse, we're seeing that the tools themselves, even if they're AI tools, don't necessarily drive true productivity. And maybe that's not what we want either. But I think we're very early in that conversation.

All of that being said then, if we think about human readiness - so not just understanding the sort of technical design of the systems - you've pointed to three pillars or capacities: psychological readiness, cognitive readiness, and relational readiness. Do I have those correct?

MEL SELLICK: You do. Yeah.

KIMBERLY NEVALA: And you had mentioned fear earlier. A lot of times we have a tendency to say people who don't want to use these tools are just afraid of them because they don't understand them. I'm not sure that's a fair assessment either; it seems to be somewhat of a blanket statement.

But when we think about psychological readiness and psychological responses to these tools and their use, what is it that we're actually concerned about? And why does this go beyond fear? Or does it go beyond fear?

MEL SELLICK: Yeah, I think fear is an expression of it. But we bring a lot to the table. Humans have a wide breadth of experiences and conditions that they grow up in, especially our social relational conditions. And I think it's accurate to say it's not just about fear.

In some cases, it could be reluctance because there is a history within a particular person's life where they have associated technology with authority or surveillance, or where actual outcomes from interacting with technology have been unfavorable. So fear is simplifying it. But I also believe there's a challenge in understanding readiness in the sense that it looks different for a lot of people. And what might look like fear to someone else might just be a form of seeking comfort or having a mode of operation.

For example, we're starting to see that users who have a history of anxious attachment have a much higher propensity to become dependent on the tool, because they're looking for this emotional support and this validation. And I think it's worth mentioning here that we are hardwired to seek belonging and to seek validation. So it is very natural that we are going to be looking for that in these tools. When something looks and acts and behaves as a social actor, our brain automatically treats it as a social actor. This has been honed over hundreds of thousands of years of social cognition.

So I think putting all of the onus on the individual user is a mistake. Going back to your Anthropic comment, it needs to be a combination of those things. It's not an either/or. Individuals do need to cultivate as much awareness as they possibly can and learn as much as they can about the potential influence here. But we also need to start creating and enacting social structures that support a more collective understanding of what's going on, and making that normal. Normalizing the fact that none of us really know where we're going with this, quite frankly. The data is still emerging.

KIMBERLY NEVALA: So that being said, if we go back to those three - I'm going to call them pillars, for lack of a better word and you tell me if there's a better one - psychological, cognitive, and relational readiness.

What is it that we are looking for when we address the psychological side of the house? And maybe a good way to think about that is: somebody who has a low level of psychological readiness may look or act like this, and somebody who is more psychologically ready or aware will have these characteristics. I don't know if that's the best way to put it. But I'm wondering if we can give some exemplars. Here's what we're worried about when we talk about psychological readiness. And here's an example of what that readiness confers on somebody.

MEL SELLICK: Yeah, I think a real simple one is emotional regulation. So, being aware of when you're dysregulated. For example, when you're really nervous about something, or you're anxious, or you're needing something, or you're upset, and you're reaching for the AI tool. Someone who doesn't have any impulse control, and no awareness of that impulse, and then four hours go by and they're doing turn after turn.

And this isn't just for personal uses. I think we're still trying to categorize this into personal and professional. And the reality is - I think I might have mentioned this study before - Microsoft analyzed 38.5 million Copilot conversations and found that while people were at work using Copilot on the desktop, it was professional. But the entire time, they were also using it on their cell phones, their devices, for personal reasons during office hours. So again, I think we can't neatly separate these things when we're having emotional dysregulation for whatever reason. And we all do. It's very normal to have those experiences.

We need to be extra conscientious and aware of our AI usage because, at that point, we're not using our prefrontal cortex. We're caught in that stress response cycle, which basically means we don't have our full reasoning capacities available to us, because we have a lot of those neurochemicals - adrenaline and cortisol and all those stressors. So that's one simple example of what emotional regulation looks like for someone who is interacting on that psychological level. That's the first pillar.

KIMBERLY NEVALA: Interesting. And you said something there - I'm going to take a slight detour for just a second. Is it also true, then, that how we use these tools outside of work, in our personal lives, will potentially translate into how we bring that into the workplace? And so when we think about decision making, or authority, or self-confidence, is it important to understand, from what you were saying, that because of the ubiquity or broad availability of these tools, this is not something where you can just say: keep this outside of work and operate in this fashion at work. Is that right?

MEL SELLICK: I think that is right. Yeah, so I basically-- it's what I talk about when I say that there's this spectrum of relationality.

We are effectively transferring this continuum of relationships that we have had human to human into human-synthetic now. And so, as you were saying, when we're at home interacting with these tools, we are practicing conversations. We are practicing ways of relating with a non-reciprocal entity - an entity that doesn't have any friction. There's no pushback, no opinion that's going to ruffle your feathers. You're getting a one-way view, and then you're getting that reflected back to you. Because the whole idea is to keep you engaged. So it's very sycophantic.

And that over-agreeableness also starts training us to anticipate that we can have this in our human-to-human relationships. And we're starting to see in the data that the ways we interact with AI are spilling over into less time with humans. There's less tolerance. There's less patience. And that works on a few levels - for example, with fluency bias. We think we're getting the answers from the magic genie, and so we start thinking everything should be like that. Snap the fingers. I want it right now.

And so we're also starting to, in my mind, atrophy the ability to hold tension and to hold disagreement or discomfort. And discomfort, disagreement and friction are not bad things. That's how we grow as humans. That's how we are actually able to be in relationship with other humans, because you're never going to find someone who's going to always tell you you're the greatest thing ever and completely agree with you all the time. That's not actually what it is to be in a human relationship. And it actually wouldn't even be that fulfilling if it were.

KIMBERLY NEVALA: No. It sounds good but it gets a little boring, doesn't it?

MEL SELLICK: I think so, yeah.

KIMBERLY NEVALA: One of the benefits of age is that I know I'm wrong so often. And these days I think that's a good thing probably.

MEL SELLICK: Yeah. You know when you're wrong. But I would also say you know when you need extra information, and when something is wrong or doesn't feel right, and you can compare it to your experience. And I think this is also something we're in danger of atrophying: the ability to recognize when something is wrong and when we need to question it.

And this goes to the cognitive pillar: we need to be able to be skeptical, to ask questions. Before we sit down and ask an AI for something, we need to actually understand how we think about it, take the time to think it through, and have our own understanding of our view before we engage AI to give us an answer. Because once we have that input, it is nearly impossible to parse apart what is mine and what was the AI's.

KIMBERLY NEVALA: Interesting. And I would imagine - and I think we'll get to this in a minute - that all of these elements play together. So if we want to actually ensure that we are enabling folks to use these tools mindfully while also really protecting human agency, you can't have just one. Is that right? We really need to attend to the psychological, the cognitive, and the relational in order to set people up for success.

MEL SELLICK: Exactly. And honestly, there are three pillars, but they're so interconnected and intertwined. My separation of them is really just a nod to science, because that's the way we have studied it. We tend to separate cognition and emotion, and cognition is embedded under psychology. So they all blend, and you really can't have one without the others.

Because you can be emotionally regulated in the sense of, OK, I know that I shouldn't use AI for the 10th time today. And I'm going to walk away because I need to put some space in between me and that so I can calm my nervous system down. Because we make decisions with our nervous system first. This is the way we're hardwired. It's embodied cognition. So our bodies determine actually what our behaviors are going to be at a very pre-conscious level before we even get into the situation. We've already assessed, is this safe? Is it not? What am I going to get out of this?

But to your point, we can't negate the cognitive aspects because we also need to understand when to bring in our judgment, what tool to use for what tasks. Sometimes we don't need AI at all. What we actually need to do is employ the relational end of things and go ask a coworker what they think and spend five minutes talking through that. And then if we still feel the need to get extra information, use AI to challenge what we've arrived at. So here's my point of view. Here's how I think we should do it. Give me a spectrum or give me an array of perspectives that are counter to what my perspective is and tell me why they may be just as correct.

KIMBERLY NEVALA: Yeah. And it's been interesting. I've been on campuses recently talking to a lot of students. And one of the more interesting conversations that comes up repeatedly is this idea of self-trust and self-reliance. Having the confidence to push back, or to think differently, or outside the boundaries of what these purported everything machines will spit back at you. And again, this is something that some folks may just do more naturally - be more resilient in that regard and more confident right from the get-go. And others - due to lots of conditions, not the least of which might be personality and intrinsic tendencies - are already less inclined to do so. That is one area that I find particularly concerning as well. And it speaks to both, I guess, the cognitive and relational aspects that you're talking about.

MEL SELLICK: Yeah, and we're also seeing, especially with teenagers and early college students, that a lot of them are running all of their conversations through LLMs first. They're using AI to construct text messages to each other, to decipher who is right in this argument. I know a couple of teens who literally dumped their argument into AI to see who was right.

So these are things that are developmental. And this isn't about judging that. I think it's about finding the balance. Because we know that when we use AI in healthy doses for specific tasks, it really does enhance human learning, personalized learning, for students. We know that it can be of great benefit. Where it becomes potentially detrimental and harmful is when we are not aware of our own practices - when we show up and either blindly accept everything that's coming out, or build habits that lead into dependencies. That's where things start cracking a little bit.

That's why I say that AI is no longer a tool. It's a relationship. And people are always like, oh, now don't use that word. And again, it's that spectrum of relationality. It's the confidante. It's the friend. It's the advisor. It's the thought partner. It is the thing, the entity that you end up assigning a role in your life, and you have an intention with it. And being clear on what that intention is, I think, is incredibly important.

KIMBERLY NEVALA: So I think that's a good transition, too. I want to zoom out a little bit more.

You talk about human flourishing and the elements that broadly contribute to flourishing. And in one of your papers you say that we've typically looked at this as a dyad: the individual and the collective. And the understandings, the relationships, the power dynamics between those two things are, or have been, intermediated by institutions - our educational institutions, health care, government, labor, et cetera. But today AI has entered as a participant, and we now need to think of this as a triad: individual, collective, and synthetic.

Talk to us a little bit about that - how you arrived at it, and why you think it's important to recognize this as a third element, part of a triad, instead of a dyad. And then the question that came to mind for me is: why isn't the technology just a different type of institution, or viewed as something that's integrated through institutions - and therefore influencing the individual and the collective, but not its own separate thing - if that makes sense?

MEL SELLICK: Yeah, those are really great questions. And I don't know that I have answers. I only have thoughts and ideas. This is such a new space, but I'm deeply involved in the human flourishing conversation. I'm on the Council for Harvard's AI for Human Flourishing. I'll be running a collab lab out of Oxford for human flourishing as well. So I'm in the space and I talk about it a lot. MIT has this series that they do with the National Institutes of Sciences and Leonardo Magazine - I don't know if you're familiar with that - and they had a salon on flourishing and invited me. And I brought this provocation about human flourishing becoming a triad now.

I'm really just asking the question: we know that AI has fundamentally changed many things from a systems perspective, like our information ecosystems. It's now embedded within institutions, as you say - our education institutions, our governments, et cetera. And we also know we have it at an individual level, a social relational level, and an institutional level. So it influences, and has its own sets of constraints and levers that it changes, at every one of those levels.

Then we must assume, to some degree, at least in advanced societies, that AI is now interwoven into all of our lives. It would be very rare to have someone show up at a dinner table who has not been touched by AI in some way - whether it's how their newsfeed was curated, how their car loan was decided, what school they were allowed to attend, who their friends are, or who they're dating. So you can look at every single level and understand that AI is an influence in some capacity.

Now, is it a third actor? Is it a system? Is it a meta system? So I look at all of those, and I look at human flourishing as this very traditional idea. Historically, it is individual and collective. But there is no purely unadulterated human experience anymore. And that's why I'm asking: what is the role of AI? I don't know. I have not yet determined whether it's a third actor, a system within a system, or the meta system. And that's up for grabs, and I'd love anybody's perspective. Just bring it. But yeah, I don't know. You tell me. Do you have more questions within that?

KIMBERLY NEVALA: It struck me as interesting. So hopefully, if nothing else, with this episode you'll get a lot of calls from people with more questions and some ideas about that.

If we go down this path, or this thought experiment: how do we make sure that we're having all of these conversations in a way that doesn't presuppose the way AI systems are currently being used? And again, a lot of what we're talking about here applies very specifically to generative AI systems. But I would argue that even more traditional machine learning systems, predictive systems, certainly set the stage for this and probably warmed us up for the epoch we're currently in.

So how do we have these conversations whilst also being very clear-eyed that the ways that these are being used in places today doesn't have to be, that doesn't have to be, the way that they're used in the future? That there is room for people to disagree. That there is room for people to think about this differently. Do we really want it integrated and at what level in all of those institutions? To what extent do we want to default some of this decision making to the machines?

Because, to be clear, I acknowledge that it would require untying a heck of a lot of stuff in this tangled ball of string - it's become so entwined with what society and business are, and even the social realm. But that's a question I wonder and worry about: how do we make sure that as we're having these conversations we are not just presupposing a future on the path we're on right now - some people love it, some people hate it, and probably most think there's some middle way? How do we raise these issues now, because they are important and this is absolutely influencing us, without further reinforcing the idea that it must be this way?

MEL SELLICK: Yeah, that's one of the crucial points, I think. This is not inevitable. This is not a narrative about inevitability. And to your point, really addressing this and being fearless in talking about it, instead of trying to act like it's not happening, is super important - having more discussions like this in public forums.

So just as an aside, I was working with a group of educators, 300 educators or so. And I'm in the room, and the whole purpose is for me to help them unpack their AI stories. And here they are - it's the first time anybody's put them in a room and actually asked them how they felt. And there were people crying. There were people angry. The entire range of emotions. And it just made me realize that there's no normative here. There's no structure or perspective that we can all agree on. And so every one of us is left to our own individual mental schema, our own ideas, in our own little worlds about what AI is and what it could be.

And to your point, it doesn't have to be this way, because we have active decisions. What we choose greatly determines what our futures look like. There's that. Also, where we put our attention. Our attention is probably one of the biggest aspects of agency, in my opinion. Because where you put your attention basically creates the knowledge of your life. It gives you the elements of your story that create the life that you live - where your attention goes, your energy and effort flows.

And I think we really do need to spend more time looking at alternative futures, instead of this idea of it being this certain way - whether it's in education, K through 12 or higher ed, or in institutions and governments around the world. And the challenge is incentive, as you know. There are dominant market narratives that we have to reconcile, and we need people to be skilled. At the same time, I think there's a little bit of - I won't say a grassroots movement, but definitely some of the younger gens are going to dumbphones. Which, I would really love one of those dumbphones. I've been thinking about it. So yeah, we do have the ability to do small things that add up to big things over time.

So I think talking about the kinds of actions we can take together and individually really matters and helps counter that inevitability narrative. And I do want to give pause here and articulate the point that perhaps you were trying to make, which is that in certain situations we don't have choices - like with institutions that decide our credit score or decide where we go to school, and things like this. And we do have to work within those constraints, because it's not just about generative AI. We can only have so much control. And that also goes back to this narrative of "we're doing it to ourselves." That's not actually always true. We do know that this is very sophisticated design that is completely built to tap into our fundamental psychology of belonging, validation, and fulfillment.

KIMBERLY NEVALA: Yeah. And so acknowledging that we do have to deal with the reality in front of us at the moment, even while we envision and perhaps work towards different futures, or at least not close them out, are there overarching systemic risks that you are concerned about if we do not really start to embrace and address the aspect of human readiness now?

MEL SELLICK: Yeah. There are a lot of concerns, but I do worry that we are slowly being anesthetized - that we are trading in our fundamental human development for convenience. And I think the tragedy, or the concern I have the most, is for this up-and-coming generation, whose brains are still developing and who are interacting with these tools. And they are at all varying levels of readiness because they are still humans in development.

And so I do worry that we are building future humans, that we are inadvertently reducing their capacities to fully function as thinking, autonomous agents when they are faced with advancing technology that will always be faster, smarter, and a black box, essentially, to them. And so we do need to build those internal capacities in the face of that.

KIMBERLY NEVALA: And so how do we go about doing this? Maybe we can talk about that. Because the other thing I've seen in your work that is just so hopeful and great - in fact, I think this is probably the basis of a lot of the work you've done with IEEE, with UNESCO, and I imagine in a lot of your consulting work as well - is how we design for and prepare both individuals and the collective, maybe at an organizational or other level.

So are there steps and things we can be doing today? I will throw one out: start to look at our programs - not just AI literacy, but building programs around human readiness. But what does it actually look like to enable people to be ready? What does a human readiness program look like? And are there things we could be doing now to better design these systems for psychological, relational, and cognitive wellness, if you will?

MEL SELLICK: I think a lot of new companies are starting to design systems that are more narrow in focus and that have constraints built in. I think when we're talking about design, friction is super important. I'm sure this audience knows that all day long. But in general, building friction into the system is really important. Because when the design is seamless, it's super easy to continue multi-turn interactions. If we do not prompt users to be reflective, there are going to be millions of them who never will.

So I think we have to understand how humans are showing up to these systems and what they might be looking for - and then flag them when they've been on the system too long. Flag them and say: hey, by the way, you've revised this 16 times; what else might be going on? Not that we need AI to be the therapist, but we need to design systems that actually push back instead of ones that are so agreeable. So I think that's one thing that's within the design of the system.

But I also think, from a human systems perspective, we need to build friction into our workspaces too. We need to organize our workflows so that humans still have to talk to other humans and problem solve with other humans. Nobody actually wants to do that most of the time, because we've really gotten out of that habit. And COVID did us no favors. COVID did us nada. So we're now back - we're still in this clunky "who's going back to the office, who's not?" phase.

OK, but we need other humans, and we also need other perspectives. And that, ultimately, has a very high ROI for a company. Because sure, you can get that short boost of efficiency. But what happens six months down the line when that manager actually can't make a decision without typing something into ChatGPT? Is that really who you want managing a team of people in your company and making strategic decisions?

So we really have to look at the short term versus the long term, because we now have very compelling data on not just cognitive offloading - taking the easiest path, which is what the brain naturally loves to do anyway - but actual cognitive debt over time. You can get the rewards now, but you will always pay the price later. And it's the same on an individual level as it is on an organizational level.

And also, interestingly enough, I've had some clients come to me lately where they've adopted AI tools and the employees simply aren't using them. Or they say they're not using them. So if they are using them, they're not doing it within the structure of the company. They're doing it on the down low, in the ways they want to, which may or may not be best for the company.

Because, as I think you've heard me say before, humans are actually becoming the largest source of disinformation and misinformation in companies, because knowledge workers, 40% of the time, are taking AI outputs with zero scrutiny. So what does it look like when that same manager takes every single thing AI says as truth? It goes down through the pipeline and eventually hits your bottom line, your stakeholders, your reputation.

KIMBERLY NEVALA: Yeah, absolutely. And because I know you've been working with some of these organizations and others - UNESCO, CIA, IEEE - on diagnostics and things that people can look at to ask: are we not just deploying AI, but actually setting the organization up for success? We'll point to some of that in the show notes.

But all that aside, I would love to give you the opportunity: are there questions that we haven't asked that I really, really should have? Or some final thoughts you'd like to leave with the audience?

MEL SELLICK: Yeah, I think for me, the biggest thing I'm noticing, in my own life and across all kinds of demographics and positionalities and backgrounds, is this internal conflict people are having: we're using the tools and we know we're becoming dependent on them in some ways, me included.

Just to let you know, I took two months off of all media to give myself a break. I do that intermittently, destroying my LinkedIn algorithm. So people, please connect with me so I know that somebody likes me somewhere. It's hard not to be like, oh my God, nobody cares about me anymore because I'm not getting my likes - which shows you how conditioned we are.

But build in some time to give yourself some space between you and these digital tools. Self-reflection is hard. Conflict is hard, but it's really worth it. And I think probably the biggest tragedy would be if we become so dependent on these things that we actually forget what it's like to be in relationship with real human beings, who we really need.

One of the things that I love to think about, that challenges me and that I'm still trying to find pathways toward, is that we have a very primal practice of calming our nervous systems collectively. We know that when we're in the room with someone who is calm, it automatically calms our system down - our autonomic nervous system. When we are around hyper-stressed people, we automatically feel the same. Humans calibrate our nervous systems together. And when we start outsourcing even that calm, or that need, that support, to a machine, we really miss out on the deeper, more connected, vital ways of participating in this life.

KIMBERLY NEVALA: Excellent, excellent note to end on, Mel. So I will stop myself here and thank you so much for your time and insights. And we'll happily help you re-hack the LinkedIn algorithm by pointing to all the things, both in the show notes and when we post this episode. But thank you again for your time today.

MEL SELLICK: Thank you.

KIMBERLY NEVALA: And to continue learning from thinkers, doers and advocates such as Mel, you can find us wherever you listen to podcasts, and also on YouTube.

Creators and Guests

Kimberly Nevala (Host) - Strategic advisor at SAS
Mel Sellick (Guest) - Applied Psychologist and Founder, Future Human Lab