A Question of Humanity with Pia Lauritzen, PhD

KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.

In this episode, I'm so pleased to be joined by Dr. Pia Lauritzen. Pia is a philosopher, tech inventor, and a Thinkers50 nominee. As the founder of Qvest, Pia researches the nature and impact of questions. To grossly oversimplify in my own terms, she asks why the questions we ask get asked and why others don't, and how all of that in turn shapes us as individuals and societies.

Pia joins us today to ponder the questions shaping our technology and how we might hone our critical thinking skills in the age of AI. Or really, in any age at all. So, Pia, welcome to the show.

PIA LAURITZEN: Thank you so much. It's a great pleasure to be here, Kimberly.

KIMBERLY NEVALA: Now your focus on questions as a key lever shaping culture, leadership, even technology is quite unique. Have you always been the curious sort who asks a lot of questions, and/or has there been a series of events that conspired to bring you fortuitously to this area of study?

PIA LAURITZEN: I think it's the first. I have always been very, yeah, curious, but also doubtful and skeptical and full of wonder about why things are the way they are and why they are not different. Or how they could be different.

Basically, a huge curiosity about why we are not discussing all these questions that I find so fascinating. And so I have a special passion for what I call shadow questions: the questions that live in the shadows of all the other questions. Finding ways to shed light on these questions, to make them help us connect in different ways and solve problems that are hard to solve, that's always been part of what I enjoy in life.

KIMBERLY NEVALA: So we'll come to the idea of critical thinking as a skill, which I think perhaps also speaks to some of those shadow questions you talk about. But I want to start with thinking about or asking some questions about questions just as a basic starter.

If you ask people why they ask questions, and I suspect you have done this in the course of your work, I imagine you'd get a range of answers. I myself might say I'm trying to learn something new or I'm trying to test my thinking or my knowledge. Or maybe I'm trying to test your thinking and your knowledge. Or I'm trying to express an interest in something or somebody. So it's a way to gently engage.

I don't think I would, on first pass or even on second pass, honestly, come up with the idea of using questions as a mechanism to distribute responsibility. And this is how you describe on your website some of your postdoc studies about language and culture: how questions are used to distribute responsibility.
And I think there's an interesting implication for the current space and time we're in now.

But what does it mean to, or how can you, use questions to distribute responsibility?

PIA LAURITZEN: It's what you find when you dig a little deeper in trying to understand some of the basic mechanisms of asking questions. As a philosopher, I like studying language, and sometimes language just holds these little magical presents for us to open. And then the interesting thing happens.

And what I realized was that the phrase to pose a question is closely related to taking a position. And it's the same in different languages. So I just said it in English. It's the same in Danish. I'm Danish, and we say stille et spørgsmål, to pose a question. And tage stilling is to take a position. So it's the exact same thing. It's the same in German and French and Italian and a lot of these Indo-European languages.

So I was curious to find out: what is this? What is this about asking or posing questions and taking a position? And how is it related to the same pattern you can see in giving a response and taking responsibility? So there is this connection; language - many different languages - is actually trying to tell us something, or at least that's what I was trying to figure out.

So I did these studies of different school classes - a Russian, a Chinese, a Spanish, and a Danish school class - to figure out: do we raise our kids to use and understand questions in different ways in different parts of the world? And we certainly do.

And the reason I could find funding for this kind of research is, of course, that we live in a global world, and a lot of companies are global and have multinational employees. They are trying to figure out why we keep misunderstanding each other, or why we keep expecting things from each other and are disappointed when something else happens.

And locating this connection between posing questions and taking a position, and giving a response and taking responsibility, became the key for me to see and explain these differences across cultures. I identified basically three different ways of using questions. And this is actually built into grammar. So it's not only a matter of my mindset or how I want to approach this. If we are not aware of this, it happens automatically.

So in a Danish or English culture - they are part of the same larger linguistic family, so to speak - when we ask a question, we impose responsibility on the person we ask. So when you ask me a question, it's my responsibility to come up with a response. That's kind of the deal. We don't have to discuss it. Everybody knows that now it's on me.

But when I studied the Chinese school class, it was the complete opposite. So when the teacher asked a question, it was to demonstrate that now we're in a situation where I know the answer. I'm responsible for knowing the answer, and I'm just testing if you know the same answer. So you are not supposed to come up with something completely different. You're not supposed to give your own perspective on something. You're just supposed to show and demonstrate that you know what I know. So that's kind of the deal.

And in the Russian school class, it was a third option. Namely, it's neither me nor you who has the responsibility. When I ask a question, we are both committed to the subject matter of the question. So we are not looking at me. We're not looking at you. We're looking at the subject matter of the question, and that will somehow be responsible for providing the answer. So we have to relate to the subject matter, both of us.

And these are very, very different ways of interacting and of understanding and solving problems together. And if we're not aware of that, we will constantly be disappointed or misunderstand each other or simply go wrong in the communication.

KIMBERLY NEVALA: When I read this initially, and it said we use questions to distribute responsibility, I supposed it then tells us a little bit about who is consciously or unconsciously driving a narrative. And I started to think about whether there's something we can learn from this in terms of improving our capacity to interrogate the narratives about AI, for instance, that are spun for us today. It doesn't have to just be AI; it could be other technologies. But is there something that comes from this that allows us to be more mindful, or to engage in the conversations forming around AI, in particular, in a more productive way?

PIA LAURITZEN: Yeah. I think my mission initially was just to understand and shed light on what I call the power of questions. And when I realized how powerful questions actually are it became my mission, so to speak, to democratize the power of questions. Because we have this tendency in societies across the world actually to monopolize the power of questions.

So we have some people whose job it is to ask questions. Right now it's your job to ask questions. And lawyers and coaches and teachers, it's part of their job. And then we have this large amount of people whose job it is to respond. We even have the term respondents. Their job is just to respond to whatever questions come their way.

So this actually prevents a lot of people from sharing their unique perspective. Because when we pose a question, as I mentioned before, we take a position on what we think is important or where we think we should focus. And if we're never allowed to ask these questions, our perspective and our experience of what is important are not given the attention they need if we want diverse, creative solutions to the problems we're facing.

So this mission to democratize the power of questions also became a quest, so to speak, to build an awareness that asking questions is a basic human feature. We use questions to grow, to learn, and to adapt to our surroundings. That's how we find out: is this important or not? Should I go this way now, or should I go another way?

And understanding questioning as a basic human feature becomes very important when dealing with technologies like AI. Because if you just take the Turing Test, or the Imitation Game - as Turing called it himself - it becomes very obvious that what we design AI to do is to come up with answers. It's to come up with convincing answers in a way that tricks us into believing that it knows what it's doing. And that's kind of the opposite of what we're designed to do. Or what we are born to do.

We are born to be curious so we can constantly adapt, constantly learn and grow. We do that by asking questions. And reminding ourselves and each other that we do it in different ways - we all do it, but we do it in different ways - is, as far as I can see, a very beautiful way of staying in touch with our humanity.

Both in terms of our own way of thinking and our own ability to think critically and creatively and all the things we would need in order to navigate. But also to respect the differences in how other people think. Because we can see they ask different kinds of questions, and they use questions differently.

So if we just build AI under the assumption that questions are just prompts - just a matter of being able to ask the machine questions in a different way so that it will provide the answers - then we completely misunderstand why we're asking questions.

Back to your initial question: we tend to think that we do it to get answers. But we also do it to connect with each other, to show that we care for someone, to navigate things that don't have answers. We have all these different purposes for asking questions. And I just want to be part of reminding myself and everyone else of that, because I think it's very important when engaging with technologies like AI.

KIMBERLY NEVALA: I suppose the questions that we emphasize, the questions that we ask when we are thinking about any technology - and AI in particular, because that is the moment that we're in - influence how we then approach and think about the technology itself. So you have made the point that as technologists, we often lean into the question of how. And folks coming from a more humanistic perspective might ask the question why.

Before we get to the how - and this is important - it's not just that it reflects a difference in maybe our innate interests or what we want to achieve. Asking those different questions actually results in setting different aspirations and different expectations for what it is we are trying to achieve with the technology. So why we build it, not just how we build it, and to what end.

So can you talk a little bit about why asking the question of why we think is different than asking the question of how we think, or how a machine might think? And why that leads to some divergent paths in the context of AI.

PIA LAURITZEN: So back to Turing, who actually started his 1950 article by asking 'Can machines think?' And that was really intriguing for me as a philosopher, to see this engineer kicking off his groundbreaking work on AI by asking this question. But very quickly he said, 'I'm not going to ask this question. It's too meaningless to deserve discussion. I'm going to do something else.' And then he introduced the Imitation Game, focusing on the machine's ability to come up with answers.

But what if Turing had actually asked the question you just mentioned? Instead of asking whether machines can think, instead of refusing to deal with the question, what if he had started to explore why human beings think? Because if we want to build something that imitates our ability to think, it would make sense to figure out why we even do it in the first place. Why do human beings think?

And just by asking that question, it becomes clear that, of course, machines cannot think because they don't have the problem that human beings have, which is the reason why they're thinking. And the problem, of course, is that we know that we don't know. So that's when we start asking questions, and that's when we start thinking because we know there's a gap between what we know and what we would like to know. So we start asking these questions to close this gap.

And knowing that you don't know is, again, a basic human condition. We are born into this world without having all the answers. So immediately, even before we have language, we start exploring and experimenting with our surroundings. We make noises to ask: where are you? And we smile to ask: do you like me? Is this the way I'm supposed to be in this world to get the love that I need and the food that I need? And when we start developing a language, we ask: what is this? What is this? And at some point, we ask why, why, why about everything.

So we are kind of born into this world knowing that we don't know, and also knowing that we'd like to know. That's why we ask all these questions, and that's why we think. But that, you could say, is a vulnerability; it's kind of a flaw. God knows everything. Or at least that's His job, or Her job. And animals, they don't know and they don't care, it seems.

So it is this - what the French philosopher Merleau-Ponty called a fragile mixture of animals and gods - that makes us human: that we know that we don't know. And if you were to build that into AI, then you wouldn't be able to achieve the superintelligence that is the whole purpose of building AI. So it's totally counterintuitive to say that we would build a machine to think, because thinking is a solution to a problem that we built the machine not to have.

So, for me, it's very interesting to actually start asking some of these why questions. Because when we do, the stories, or the narratives as you say, that we are told about AI kind of fall apart. It no longer makes sense to build a machine that would be able to do the things we say it will be able to do if, at the same time, it were to imitate how we work, how our mind works. Does that make sense?

KIMBERLY NEVALA: Yeah, I think it does. I can anticipate, though, that folks will go, this is all very interesting. But it's just a metaphor. We're using human thinking as a metaphor. The important point is that a machine can do all of these things. That we are developing systems that do or will have all of the answers. So what's your beef with that? And why is that a problem? And I suspect there's some of those shadow questions you mentioned before that linger underneath there. Because this does have ramifications for us as individuals and as societies.

PIA LAURITZEN: Yeah. So for me, it's just impossible to imagine that there would ever be anything or anyone on this Earth, at least, under these conditions, that would have all the answers. It just doesn't make sense. And it's not what AI is built to do. It's not designed initially to have all the answers. It's just designed to imitate that it does. It's designed to convince us that it does. But it's not designed to have the answers.

So that's not an option as far as I can see. The question is whether AI will be so good at pretending or deceiving us into thinking that it has all the answers, that we will stop thinking for ourselves, that we will just trust the AI so much more than we trust ourselves. That is the issue. And that is what we should be afraid of.

Because if it were actually a machine that could think and do and build everything better than we can ourselves, then it would be a different issue. But the fact that that's not even what it's designed to do - it's just designed to pretend that it is - makes it a different problem. At least that's how I see it.

So that's why it's so important for us to keep asking some of the questions that the machine will never ask. I typically talk about not only why questions but I talk about the three big E's. So the existential, ethical, and epistemological questions. Because these are questions that humanity has to ask and answer for ourselves.

So, what does it mean to-- Who am I? Who are we? What are we supposed to do as the human species? These are questions that it makes absolutely no sense to ask a machine to answer, because machines obviously have no clue what it means to be human.

Then there are the ethical questions: what is actually the right thing to do here? These will always be questions that have to do with the limitations of humans, because we cannot do everything at once. We have to prioritize. We have to say: this is where I'm going to spend my money; this is where I'm going to spend my energy and my attention. Because, again, we live under very limited conditions as human beings. So the ethical questions we have to ask - we have to come up with answers to them ourselves.

And the epistemological questions are about when and how we even know whether what we think we know is true. We could be fooled right now. And we need to keep asking these questions, because we will never get a machine, or anything else, that can say with absolute certainty: this is the truth; you just need to do this, this, and that. That will never be the case. So we will always have to ask these questions ourselves and come up with our own answers.

And the beauty of that, and why it's very, very important that we keep doing it, is that the conditions I just described are exactly what make us part of this world. In the sense that the limitations we have are exactly the same limitations that the rest of nature has.

So when we're dealing with the climate crisis, with health problems, with species no longer being here - all of this comes down to the same limitations. Because like the tree, like the dog, like the nature around us, we get older and we die at some point. So these limitations that come with being human, we share them with everything else on the planet except the technologies.

Technologies like AI don't share these conditions. They are designed to transcend them. Alan Turing actually wrote it directly in his article: we should build these machines because evolution is going too slow, so we need to speed it up. That's what he wrote in 1950. And that's what we're talking about today all the time. We're talking about how to speed everything up, how to be more productive, how to solve problems faster - also better, but especially faster.

And our job is kind of to stay in touch with our inner and outer nature. To say: well, if we build technology that speeds everything up, then we cannot be part of it. And the rest of the world cannot be part of it. We will just break everything. So listening to our inner voice, listening to our inner nature, saying it's too much, we cannot keep up with this, becomes very important in this age, I think.

KIMBERLY NEVALA: And how do we counter-- It's funny, because I've been reading your work, and now I'm very conscious of what questions I pose, how I pose them, and how often I do that. There's something uncanny about posing questions to someone who studies questions, I have to say. That aside - because I'm always an awkward questioner, if I'm going to be honest.

PIA LAURITZEN: I think most people are. I am too.

KIMBERLY NEVALA: That's good news. So well, that's excellent.

How do you counter, or what do you make of, the argument that says: yes, but the fact that we have limitations is exactly why we need this technology. Because you are limited. You're not able to see all the options, so you should let something else optimize for you. To make you the best you you can be - and perhaps that's a you that you can't even imagine because, again, of all those innate human tendencies.

I find this a particularly cynical point of view - cynical is probably the most positive word I can come up with for it. But in your opinion and experience, what's the underlying incentive or objective when someone is really pushing this particular narrative forward?

PIA LAURITZEN: I think this narrative comes from a place of not accepting and acknowledging the facts, actually. Because the facts are that nothing and no one can optimize me. They can turn me into something else. But if I'm supposed to be a human being, true to what that means - which is back to what we talked about before, knowing that I don't know - then it's part of the deal that I have to do the work.
I was born into this world knowing that I don't know. The German philosopher Nietzsche called human beings the yet undefined animal, which means it's our job to define ourselves. And I believe very strongly that he's right. That is what makes us human: we are born into this world without the answers, but with a lot of questions guiding us in finding them.

And if that's the belief I have, if that's what I think being human is, then it becomes kind of impossible for someone else to optimize me. That can only happen if I stop being human. So if that's the deal, then we should just be honest about it and say: well, then we should just stop. Is that what we want?

Maybe enough people would say yes, because we think the world - or I'm not even sure what - would be better if we stopped being human. But when I hear some of the Musks and the Zuckerbergs and some of these people, I get the feeling that they have this idea that something will be better if we stop being human. I just don't agree.

I think it's crucial that we continue being human. Not only for our own sake and not only for our individual or societal sake but, as I mentioned before, for the whole planet because we are part of nature. AI is not. It's artificially created. You cannot create-- Typically I make the distinction between being born to think and being built to think.

When you're born to think, it's part of your nature that you're capable of doing it, if you practice and stay true to your questions and so on. But when you're built to think, you are not limited by death like I am; you're limited by data. And if you just get more of it, you can expand. No matter how much data you give me - how much information, education, and all these things - I'm still going to die at some point.
And that is what forces me to prioritize. That is what gives me this ethical edge, what makes me ask the question: so what is good? What is the right thing for me to spend my life on?

So the idea that we can build ethical technology, or build ethical values into technology - for me, it's nonsense. Because ethics comes from acknowledging that these limitations exist. So as you can probably hear, I'm not buying the premise of the question. When people say that, I'm kind of like: OK, we're simply discussing two different things. And that's OK. But I'm not going to go there, because that's not what I want to spend my time on. That's not what I prioritize: trying to make a fantasy I don't really believe in come true.

KIMBERLY NEVALA: Well, and it is interesting, because a lot of the narratives-- And this is not to say that AI systems and this technology cannot be very, very useful. In fact, I think they could be. They could help us solve a lot of problems, which would allow us to better live together with other humans and all of the other flora and fauna on the planet.

It seems to me we wave these sorts of problems or issues or concerns away as pesky short-term issues that don't matter in the long term, because we're going to achieve a future that we really cannot define in any concrete way. We can't say this is what it would look like, and this is how you would operate, and this is why it would be better. We're just saying it would be better. Maybe you wouldn't die here.

So the question you raised is: does it make sense anyway? Because we're going to use the technology and therefore we will live together - perhaps on a different planet - and we'll make it what we want. But what we want is never actually asked or answered critically in that discussion.

PIA LAURITZEN: No. And - I don't want to sound rude - but I think it's very obvious that the people at the frontier of the AI discussions, talking about human-level AI or superintelligence and things like that, are definitely more educated in engineering, technology, and science than they are in culture, art, and philosophy. It's just very, very obvious.

Because what if that really was the case, if that really was the project: let's make sure that humans never die, they can live forever, and we can solve all our problems? I had a conversation with Nick Bostrom about these things at some point, because he published his book Deep Utopia after publishing Superintelligence. And when I asked him about some of the things you just said, he said: well, these kinds of questions are above my pay grade.

So how would we like that to be? I just thought it was an interesting choice of words: it's above my pay grade. It's kind of like, OK, so who is going to take care of that? Because the thing is, if we reach that point, if technology actually were able to make our limitations disappear, then we're back to the exact same point I made before: namely, we would no longer be humans. We would be something else, and we would have to reinvent everything we know about being human.

Because the fact that we're going to die is not just some coincidental circumstance. It defines us from the moment we are born. We know that we are running out of time, that we're temporal creatures, that we are embedded in time. The fact that I'm here right now means that I'm not where you are, or where my kids are. It's not just a matter of whether or not I'm going to die. It defines everything we know, how we behave, how we think, how we talk and engage with each other and with the world.

So there would be so much work to do that I'm pretty sure that if Turing said, I don't want to discuss the question, can machines think, these guys would be blown away by all the questions they would need to discuss if they actually managed to do what they are saying they will do. Because everything would have to be redefined.

And this is just-- I don't know how you would find the investment for that, because everything is driven by VC anyway, right? So you would have to convince someone that now we're going to spend the next decade studying art, studying philosophy, studying all these things that would help us understand how to actually build a better world. And nobody has ever been willing to pay a lot of money for that. So I really don't know where that would take us.

KIMBERLY NEVALA: Well, I think as difficult as it has been to develop the technology, that's the really, really hard work, isn't it? Because it's not a matter of just connecting bits and bobs and transferring it from this chip to a qubit.

PIA LAURITZEN: Yeah, exactly. There is, of course, an easy way. There always is an easy way.

And that's just to pick a dictator to tell us how to do things. And maybe that's a project. We will just be five people who say we have the money. We have the power. We have it and we will just find a way that will make sense for us. That's the easy way, right?

And if they actually manage to build technology that makes the rest of us numb, that makes us stop thinking, that makes us just do what we're supposed to do in this new utopia - well, that would be problem fixed. But I think the rest of us should ask ourselves whether that's where we want to go.

KIMBERLY NEVALA: I think we should. And normally I would say that's a good hypothetical thought experiment, but it feels a little close to the nut right now from where I'm sitting here in the US.

But as you were speaking about this fact of life, that we are here for a finite period of time, it struck me - and this has been said elsewhere, though I don't know that I expected us to circle back to it in this conversation - that perhaps this insane drive to quantify oneself, to have an avatar and upload yourself to the quantum, is really just someone screaming into the void against that human inevitability. And that may tell us something else - I don't know exactly what - but it certainly doesn't seem that the problem they're solving is the problem they say they're solving. Or it may tell us what they're struggling with psychologically.

And I wonder if this plays into then the narrative that is also circulating that somehow we need a new philosophy for AI. Because the philosophy that has gone before doesn't suffice in the face of this net new, smarter than us, more intelligent AI. Do you see this as just another hedge around asking and answering the really hard questions, the really human questions?

PIA LAURITZEN: I see it as proof that the people who are saying these things actually have absolutely no idea of what it is that we need to stay human.

They have so many brilliant ideas about what it takes to build technology that can blow our minds and accomplish things we never thought possible. But when it comes to understanding what it actually means to be human, and respecting that being human can be threatened by the technology that surrounds us - we have to ask the question: how does being surrounded by machines designed to think for us impact the human ability to think for ourselves? Because it does have an impact.

And when some of these tech guys say we need new philosophy, or even new philosophers, I'm kind of like: could we just start by giving a few moments of attention to the philosophers we already have? You could just sit down and read Socrates, read Descartes - even Descartes, who famously laid the foundation for the scientific method. I think he would look at what's going on right now and say: what are you doing? This is the complete opposite of what we need for humanity.

Because it's back to what you said earlier, where I totally agree: there's no doubt, at least from my perspective, that AI can do fabulous things. And the technology should be developed to do amazing things. But we spend too much time thinking that it can do everything. And we spend too much time thinking and talking about - probably because that's where the money is - how it can replace even the most magical aspects of being human.

And that means that with the three big E's I mentioned earlier - the existential, ethical, and epistemological questions - we're simply showing that we have absolutely no idea how these kinds of questions work and how they impact our ability to stay human. So for me, it's a matter of ignorance when we talk about technology being able to replace all these things.

And when we say we need new philosophy - no. Not philosophy in the traditional understanding; we just need to pay a little respect to the philosophy we already have, built over thousands of years. What did it take? Fifty, seventy years? It was in 1950 that Turing wrote his article. So we've spent 70 years building AI, but thousands of years studying what it means to be human. So instead of assuming that because we cracked some neural codes on how the brain works, now we can imitate it, and now it will-- For me, it's just: have some respect, damn it. Sorry. But--

KIMBERLY NEVALA: Well, no, and I found this. Your work actually reintroduced me to the work of, I think he's German, Martin Heidegger.

PIA LAURITZEN: Yeah.

KIMBERLY NEVALA: Even if you don't buy into philosophy - as someone who loves tech, or as a technologist into just plain, good old-fashioned technology - there are philosophers who, quite a while ago, already put their finger on this, who looked at all of this through the lens of technology.

And I believe you quoted him in an article: Heidegger said we need to get a better grip on the essence of technology. Perhaps you can give folks a quick recap of why he was so concerned about this. It's telling that this was well before the current day and age, and yet the concern he raised, and the questions he suggests we grapple with, seem more relevant to me than ever today.

PIA LAURITZEN: Yeah, so he gave this lecture in 1954 - four years after Turing wrote his article. I'm not sure whether he read Turing's article; I don't think so. They were simply dealing with the same evolution in technology - a revolution in technology - at the same moment. And he called the lecture The Question Concerning Technology. I'm very much inspired by his work.

He is, without comparison, the philosopher I've studied the most. I've read this talk, this lecture, 25 times, I think, because it's so predictive of the time we live in right now.

He predicted that unless we get a better grip on what he called the essence of technology, three things would happen.

First, we would lose touch with technology. Meaning that we would confuse what we can and cannot use technology for. We would think that technology can think. We would think that technology can make us live forever. All the things we're talking about now, that would be the first thing that would happen.

The second thing is that we would lose touch with reality. We would start confusing what's real, what's false, what's right, what's wrong. Which - with deepfakes, misinformation, and everything we see - is exactly what has happened. We are very confused about what can and cannot be trusted.

And finally, we would lose touch with ourselves. Meaning that we would stop trusting our own judgment. We would stop trusting the judgment of each other. Instead of just trusting what you're saying because you are a human being who has some experience, I would ask ChatGPT, because I trust the engine more than I trust other human beings.

So these are very, very precise predictions, I think, that he made in 1954. And the essence of technology - the reason he could see this happening - is that the essence of technology is the same across generations of technology. Whether it's a hammer or a stone axe from when we first started building tools, or a car or a wind turbine from more modern technology, or AI or social media, the essence of technology stays the same.

And that essence is not to be a tool - not the neutral tool that too many people talk about today: well, it's just a tool, and it can be used for good and it can be used for bad. He said that is the most dangerous assumption we can make. It's not neutral. It's not a tool. Every technology has an essence of itself, and the essence is that it makes us relate to the world as if it's something we can control.

So whether we use one or the other or the third technology, it makes us think of our surroundings as if we are in control, as if it's us making the decisions. When in fact, we are completely intertwined with technology. We are thinking through the lens of the technology we use. So it's not at all neutral, and we are not at all in control.

And what we should be doing - that's his advice, and I've just modernized it and made it my license to communicate - is to constantly question the impact technology has on us. He says questioning is the piety of thought. It's when we question that we think for ourselves and relate to technology the way that we should: namely, as something that is constantly impacting us. And to be free of that impact, or at least to have some kind of - not control, but awareness - of that impact, we need to keep questioning. And we need to ask all the questions that technology doesn't want us to ask.
Technology wants us to ask: what is this? How can I use it? How can I use it better? How can I use it more? That's what technology wants us to ask. What we should ask is: why should I use this technology right now? What am I not doing when I'm using this technology? How does it impact my assumptions about the world, my way of thinking and all these things?

So that's why I'm so in love with Heidegger. Which sounds weird, because everybody thinks: a dead German who was also - yeah, there's a lot there - but I really think that he can help us navigate.

KIMBERLY NEVALA: Well and this will sound like a strange, sharp left turn here as we wrap up. Because we could continue for quite some time. I always say that. And then I always say to myself, you always say that. But I feel like it's always true. So the audience will bear with me, I hope.

But it strikes me there may be a bit of a catch-22 with us really promoting, hey, it's OK, we don't need to learn all the things we used to learn. People just really need to lean in and embrace creative thinking or critical thinking. Because the point of thinking then is not just to go along with the prevailing narrative. It's not just to ask how. And a lot of times, even today in the technology narratives we see, they're promoting that people need to be able to start thinking more critically about how and where we can apply the technology.

And what I hear you saying, and as you've written as well, is that's actually not the point of critical thinking. It's part of it. But there are the larger questions of why we use it, and whether we use it, that come well before that. So this predilection for, or promotion of, critical thinking may have some unexpected consequences for us as technologists as well. Which may all be for the good.

So as we do wrap up, I did want to touch on this issue of critical thinking or thinking. You had a very striking, I think it was actually maybe a headline on the article, or a statement that said: critical thinking can't be taught, but we can learn to be more critical thinkers. And you go on to elucidate that thinking is an innate core skill and not one that AI, by the way, can readily replace. But as a skill, it is something that can be lost or honed.

Can you talk a little bit, then, about thinking more critically or thinking better - because maybe it's not even just critical thinking? What is it that we need to attend to, to ensure we don't lose this skill that we all wield so effortlessly and unselfconsciously as children?

PIA LAURITZEN: Yeah. So in the article you're referring to, I'm also introducing a bit of Heidegger, because in another text he talks about what calls for thinking. And that's an interesting way of putting it, because what he is exploring is that we should be listening.

So, what calls for thinking: there's something that we are supposed to hear, that we are supposed to think about, that we're supposed to question. And he says - and I very much agree - that when we live in a world where everything is constantly fighting for our attention, and it's gotten even worse since the 1950s, it's very, very easy to spend our limited time and our limited attention - all the conditions, all the limitations we have - on whatever is interesting at the moment.

So right now this comes up, this comes up, this comes up. And he says we should be really, really good at focusing on what calls for thinking, what needs to be thought about. Because there are lots of things that don't call for thinking, that just call for attention or call for action, and these are not the things we are supposed to spend our time on. Although sometimes it is fun, right?
We're allowed to have fun. And we're allowed to do things just because - well, I just lost two hours, but I had fun doing it.

But that's not what happens. What happens is that suddenly we're retired and then we realize, oh my god. I just spent my whole entire work life doing stuff that, if I hadn't been doing it, either someone else would have done it or it would not have been done and it doesn't matter. It just disappeared.

So have this concept of what calls for thinking in the long run. If I spend my time today having this conversation with Kimberly, I'm not doing something else in that same hour. It's an hour; it just disappears from my life. But I spent it discussing something, sharing something, that's very important to me. Something I believe a lot of people can be inspired by, and that I believe can make a change in how we engage with technology. And that is part of my life mission, so to speak. So I spent my time on what calls for my thinking. My thinking is supposed to be spent on conversations like this.

But each of us has things that call for thinking. And for me, it's important to keep in mind that there is a what that needs to be thought about. And then there are lots of whats that are just competing for our attention. And we should be really good at saying thanks, but no thanks, because this is what calls for thinking.

And the second question we should ask, as Heidegger also put it, is: who can we think with? So, what calls for thinking, and who can we think with? Because what calls for thinking cannot be thought about with AI. It calls for another human being.

So that's why I'm having this conversation with you and not with a chatbot. Because that would not stimulate my thinking. It would do something else, but I would not be solving the task that I have as a human being, according to this way of thinking.

KIMBERLY NEVALA: All right. So, last question. Although I'm often guilty of compound questions, and I think this is going to be one of those as well. Is there a question that I, or others like me, don't ask that you wish we did? Feel free to pose it and answer it if so. And are there any final thoughts you'd like to leave with the audience?

PIA LAURITZEN: I typically get so caught up in the conversation that I don't even remember what questions you asked, so I don't stand here with this feeling of, oh, I wish she'd asked me this. Because I think when the conversation works, we cover whatever makes sense to cover at the moment. And I loved all your questions. I felt very inspired.

And I think that has to do with-- I just rewatched a podcast I did a year ago or something like that. And the host asked: how can we ask better questions? That's a question I often get. And I was just very silent for a very long time, until I thought, well, maybe I need to answer now. And my answer was: I think the first thing we should do in asking better questions is not to put so much pressure on the questions.

So we have this tendency to think that it is the question that determines whether or not the conversation will succeed. But of course, it's not the question; it's the intention, the engagement, and the relation that determine whether or not a question moves the conversation forward. And I think that's why I don't stand here thinking, I missed that kind of question. Because the conversation did what I think it was supposed to do.

KIMBERLY NEVALA: Well, and I think the point you just made about asking better questions is something I took even from our initial call. Which was: to be a better thinker and a better questioner, it's not so much about asking better questions as about spending some time considering the motivation behind the question. So that I understand why, in fact, I'm posing it and what I'm trying to get from it. That's been a helpful lesson in and of itself, which I've greatly appreciated.

So thank you so much for all of your time today. This was extremely fun. I've really enjoyed getting to know your work, and I'm so happy to be able to hopefully introduce a new audience to it as well.

PIA LAURITZEN: Thank you, Kimberly, I really enjoyed being with you.

KIMBERLY NEVALA: Fantastic. Well, hopefully we'll get you back in the future and we will revisit some of these and pose some new questions as well.

So to continue learning from thinkers, advocates, and doers such as Pia, subscribe to Pondering AI now. We're available wherever you listen to podcasts and also on YouTube. In addition, if you have comments, questions, or guest suggestions, please write to me at PonderingAI@sas.com.
