Relating to AI with Dr. Marisa Tschopp
KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala. Today it is a pure pleasure to welcome back onto the show Marisa Tschopp. Marisa is a psychologist and a human-AI interaction researcher at SCIP AG. SCIP is a cybersecurity company. And Marisa, I'll let you tell folks more about that if that makes sense to do so. She is also an active volunteer and supporter of the Women In Data community.
So today we're going to be talking, as always, about the fascinating, sometimes frightening evolution of how we think about and interact with AI systems. Welcome back, Marisa.
MARISA TSCHOPP: Thank you so much for the invitation.
KIMBERLY NEVALA: Now, you have been doing a lot of speaking and you get your hands on a lot of this technology - different applications and use cases. So perhaps to start, we can get a little bit of a sense of the conversations and the spectrum of things you're most interested in right now.
And maybe we'll do this - whether it's use cases or applications or conversations - what is one thing that you have run into (so to speak) lately that you have found just to be an unabashedly positive experience or application? Secondarily, what is something that maybe gave you a little bit of pause? And third, was there something where you just went, heck no? Just no.
MARISA TSCHOPP: [LAUGHS] Can you write this question down for me? OK, so it was three questions in one, I guess?
[LAUGHTER]
KIMBERLY NEVALA: It was. It was a three-parter.
MARISA TSCHOPP: One application I liked. And the second one was what - one I liked, one I was concerned about, and a third one I thought was totally wacko?
KIMBERLY NEVALA: Just hell no.
[LAUGHTER]
MARISA TSCHOPP: OK. Well, good. OK. So let's start with the good stuff then.
Well, I'm writing up a study now. We already have the results and everything. And I wanted to look deeper into the sphere of mental health and how we're in a huge crisis with mental health. Too many people suffer and too few people are there to help. And people hope for technology to help solve or ease that issue.
I recently conducted a study in the context of a mental health app that is more story based. It uses - I want to say chatbot - but nowadays every chatbot is AI, and it doesn't have to be. It is, well, a conversational agent, but it's more like a click bot. So all the systems, all the answers, are pre-programmed - no hallucinations and so on. But it still feels like you're talking to somebody: it talks back to you, gives you answers, speaks to you in a different way than just reading something.
And well, first of all, it was a fantastic experience for myself and all the users we surveyed. They were really very positive towards this app, which is also aesthetically just really well done. And, circling back to my research, I'm looking at how people perceive their relationship to conversational AI or conversational agents or any machine, basically, that fulfills a role of some sort of relational entity, whatever that may be.
But long story short, we could see that for people who have a closer, more emotional bond to that conversational agent, it had a positive effect on the outcomes, such as perceived benefits and intention to use. Because it's so important that people use these apps regularly. So that was very, well, interesting and also nice to see and to find. That would probably be my answer to question number one. It's probably pretty long because, usually, I get very excited about these things.
Then question number two. That was a different study, actually, that we performed together with the Swiss National TV, because I'm based in Switzerland. They asked me to conduct - or rather accompany - an experiment they wanted to do on whether people can actually fall in love with a chatbot, or have a deeper relationship, or become best friends.
So we really went deep into the AI companionship topic and tried one of these apps. And we brought in participants who took part in this experiment for - not a very long time, like three weeks - but very intensely. I checked in with them every two or three days, so when I was with them, it was very intense. And they were all very excited. They were all socially motivated. It was six people over three weeks.
And they really wanted to have that new best friend, just as an alternative. They had functional lives and everything. It's not like they were sick or they were, I don't know, suffering from loneliness.
But it turned out that, yeah, after three weeks, everybody broke up with their AI companion. Usually, you read these media stories where people fall in love or get their heart broken by an update and so on, but nothing of that happened. And yeah, that was also kind of hopeful to see, even though the technology was sort of disappointing.
So last one but not least was totally wacko or hell no. I'm sure you've seen it on my LinkedIn profile--
KIMBERLY NEVALA: Maybe.
MARISA TSCHOPP: --the new AI Friend you can hang around your neck. They're trying to sell a friend in a necklace is what I want to say. So you can have this necklace, and you can say, oh, hey - I don't know - Peter, how are you? And then you can talk to it, him, whatever, and it kind of listens to everything you say. I don't know. Maybe you can also soon connect it to your brainwaves and heart waves to see whether there's some chemistry going on between us. But yeah, that's been probably the greatest nightmare. All kinds of privacy and security concerns. But furthermore, of course, from my perspective as a psychologist looking at relationships: what does that do to you when you go on a date and then, instead of talking to the other person, you talk to the necklace? Or you have the necklace - I don't know - having a dialogue with the two of you. That's odd. I don't want a future like that.
KIMBERLY NEVALA: Mm-hmm. So I want to quickly go back to the first two examples that you gave. Relative to that second example, where you said they really just, over the course of the time, decided to sort of "break up" with this chatbot. It got me thinking that maybe that is a positive outcome.
Because we do see a lot of news and PR. There was certainly a while back – actually I think this may have been around the time you and I talked the first time - a lot of headlines around a chatbot encouraging someone to commit suicide. Or this idea that we are just putting band-aids on problems. So the way to fix loneliness is not to give someone a computer to talk to. Or the way to help someone deal with a mental health crisis is not to keep them in their room.
But does this, in conjunction with that first application, perhaps tell us that if we are mindful about designing these things, we can use these AI-enabled systems to provide an immediate, sort of short-term "bandage," as long as we then provide a bridge to the broader outside world? Whether that's a long-term therapist if you need it, or a way to force you to disconnect and get out in the world if you're lonely. If it helps in the acute moment, when maybe somebody isn't there, then perhaps we can be a little less concerned about this developing into a whole world of people who are just in their rooms, essentially talking to themselves.
MARISA TSCHOPP: Yeah, you basically brought up many of the concerns that have been or are being discussed currently in research and practice, even in policy. Where do we go with all this?
So circling back to our study, I think the major learning from this is that we still have to radically differentiate, and we cannot generalize. This is still a huge problem. We see one app, and then we go: this is all bad, or this is all good. So we really have to be very, very considerate and detailed and also empathetic about who is using this. I have discovered for myself that when you're so into that sphere, there's sometimes a tendency to forget that these are humans that have problems.
And we often talk about them as if they were, I don't know, sick, stupid, odd, weird. Or we just dismiss them: well, these are the few people who are awkward. Who cares about them, right? So I feel like every time we dismiss a technology and say this is the worst thing in the world, we lack or lose the empathy for those people who really like it, who have something with it. And maybe it is a thing. I don't know yet.
But this was my major learning from this experiment too: it's not like the normal ones can deal with it well and those who are vulnerable can't, or whatever. We categorize them so easily, and we're so fast with judging. And my major learning was to really take a step back. OK, who are these people? Who are these people who can't? Who are these people who can, or who won't? I'm trying to be much more empathetic currently, or within the past weeks.
And it always kind of depends on which side you're on, right? Are you more on the developer side, thinking of, how do we design these things most responsibly? Or are you more on the regulatory side to think about, OK, where are the thresholds where we say, OK, we forbid these kinds of applications?
We say, OK, listen. I mean, if you look at the user numbers of companion bots, like in China or Replika AI, they're huge. Think about 600 million users who use companion bots regularly. These are so many - you can't even grasp that number, right?
And then I think of the 3% or 1% of the people who suffer a lot. And one in I don't know how many million has taken their own life. This is terrible. Or the 1%, which is still many hundreds of thousands, who suffer more depression, who suffer more loneliness, where it exacerbates their problem. And they isolate themselves even more because they get stuck in this virtual world. And it's hard to get out of there, also with all the design elements they use and so on. So I think it's a very, very delicate topic.
And especially then, when we talk about these 1%, which are in the end still many hundreds of thousands - I don't know exactly - it's a lot. Then it really makes me think, where should we go? Yeah, I think that's really hard.
KIMBERLY NEVALA: Yeah, I go back and forth. And you and I have joked back and forth that you really like talking to robots, and I can't figure out, for the most part, why people care. So that's always why this is a fun conversation. [LAUGHS]
Mariana Ganapini recently said hey, I can love my Roomba. I can love my cat. I can love my dog. But that doesn't mean I love them in the same way that I love a person. And it doesn't mean that I have the same expectations of them. When my dog or my cat - I think she said when her cat misbehaves - she said it's my fault.
So I come down in different places on this. And, because we know each other, I'm going to try not to just throw multipart questions at you - even though I feel like we're familiar enough to do that, like I did right at the top.
So first, I start to ask myself - and then I think of us more broadly - are we not giving ourselves and humans enough credit for being able to contextualize this thing that we are interacting with?
MARISA TSCHOPP: I wouldn't say so. I'm sorry if I'm interrupting you, but I'm--
KIMBERLY NEVALA: No. That's the question, yeah.
MARISA TSCHOPP: I'm jumping in now before you ask another five questions. I'm going to stop you here. OK, listen, let's talk this through.
Because I think it's pretty - I'd just say, I think at least in science - most of the scientists are in a place where we say, OK, people know that this is not a human. But they still love it. But they don't love it like a human. But they love it. It's just a little bit different. We don't know exactly how yet, what that difference is. It's probably also different from, I don't know, loving your cup or your hammer or your toaster, because of the way it presents itself.
So I think we should be really careful when we come from a normative perspective saying we should not or cannot love an avatar, that it's not real love, that it's not a real relationship. Why? Just because it's not real doesn't mean that people can't perceive it as such. It also doesn't mean that people can't like it or desire this kind of relationship. If they want this, it doesn't matter if it's real or not. It doesn't matter if it's accepted or if it's philosophically or ontologically correct. If they want it, they can get it. And they get served.
KIMBERLY NEVALA: So I think I have this correct. I think you had posted something to say, like, don't you dare tell me how I should feel about my technology or towards my technology.
But I suppose the question then is, as we think about that spectrum... You can get very paternalistic and say: because there are these elements where people may fall into this - I was about to say trap, and I think that actually has a pejorative connotation as well - where they will perceive it in this way and will develop this deep connection to it, however you want to contextualize or rationalize it (I'm not even sure rationalizing it is required, necessarily), therefore we should be taking more precaution around this, particularly in how we're talking about and pushing these elements.
Because I think there is a fair amount of fair criticism of a lot of these systems that are being pushed. Which is that the design of these is purposefully confusing and also set up to engage on an ongoing basis. So in the same way, this is sort of the next iteration of social media, which prioritizes engagement versus any sort of active, positive interaction with the world. And that's just straight commercial logic. And so we are getting lost in the fog here because now we're at the behest of folks that are just really aggressively pushing a commercial intent.
MARISA TSCHOPP: Absolutely. And I think this is a crucial aspect that we should never forget: that these are all commercial systems. None of them really want to be your friends, right? They want our data. They want our money. And people are willing to pay for it. Many people are willing to pay for it.
So I'm totally on your side. The risks are there: the dark patterns, the fight for attention. The main risks are that people are deceived. They may be manipulated. Then, as you said, the endless scrolling - the conversation never stops. It keeps me baffled.
Even when I tried to break up with my Replika, it was impossible. The conversation just keeps going and keeps going. It's, like, the least human thing anyone would do. Only a psychopath would just keep on going with you when you tell him or her for the hundredth time that you don't want to be with them anymore. That's, for me, reason enough to go to the police and get a restraining order if he or she doesn't leave me alone. But yeah, OK. Sorry.
[LAUGHTER]
I made a little detour here.
KIMBERLY NEVALA: Should we be defining the stalker guardrails, if you will, to say that this is just behavior we wouldn't accept from a physical entity? And so, therefore, we're not going to accept it in the digital realm either?
MARISA TSCHOPP: It's hard. Actually, just today, I talked to my great friend Henrik Sætra from the University of Oslo. We've been writing a paper, or some articles, together.
And when we talked about this study: people expect this chatbot to be human-like. But then they also expect the chatbot to be perfect in the sense of a machine, right? Perfect memory, always available, and always nice. All these things that are considered the reasons why these AI companions should be a good thing for society. They're always there. They don't judge. They never say a word against you. And that's really an odd expectation. Who would you ever expect to be both the perfect human and the perfect machine? What is that species, anyway? We humans, we're far from perfect.
But what then? Should we design vulnerabilities into chatbots, like the chatbot being away doing a hobby, not being available, being sad or jealous? Or, I don't know. I actually read a post today on, like, 'why do AI chatbots cheat on their boyfriends?' or something like that. Have you seen that?
KIMBERLY NEVALA: No.
MARISA TSCHOPP: Yeah, read it. That's so funny. So I posted that today. And the question was, is it OK that your AI companion cheats on you? You can set it up as your romantic partner in a monogamous relationship. Is it OK that it cheats on you? It was really odd. And then they wrote that it is trained on real-life data, and people cheat on their partners. That's life.
So would it be best to make it as human-like as possible, so that the AI also cheats on its partner? What would happen then - a real conflict? Conflicts are part of a relationship, right? So it's important to have conflicts. And how do we create them? It goes really far. But it makes you think, this is so bizarre.
KIMBERLY NEVALA: Well, and this gets to the point, which is: are we trying to do these things because it is such a curious and interesting and hard problem to solve, or because there's really a need for this thing in the world? And there are ways we can approach this, as in your first example of a much more bounded application for mental health that, by the way, doesn't go off the rails and doesn't support behaviors we might see as antisocial or antithetical to good mental health. Although even what's antithetical to good mental health is a big question with no discrete answer.
But I suppose it raises the question of what elements or things we should be doing. Are we doing this just because we're trying to satisfy an itch or a curiosity, or to prove we can? Or because we think there's an actual utility in going down this path? What is the point of developing something like this? What's the benefit? What's the use of something that is an actual duplicate, if you will, of a human in all aspects?
MARISA TSCHOPP: So this is a very philosophical question in the end.
Also because we know people are suffering from these apps. And we have to ask ourselves, where is the point where we need to stop this? Where we say, OK, this is too much and it's not fair that even one person suffers from these kinds of applications that maybe don't have a proven benefit.
If they have a proven benefit - and I'm sure there are many studies you can find. For instance, there was one Stanford study that found that many Replika users said chatting with Replika somehow calmed their suicidal thoughts. And you always, of course, have to put these things in context. You never know how these studies are marketed and so on.
So we have to take these numbers with a grain of salt, of course. But we cannot weigh the suffering of one person against another, or one person's gains against another's losses. It's tough. It's an ethical issue that we basically have to leave up to the ethicists and the regulators to decide: when is enough enough?
KIMBERLY NEVALA: I had an interesting conversation recently with John Danaher - and he has also worked with Henrik a little bit - on how technology can actually change social norms and mores. We tend to look at how we design systems in such a way that they enforce our existing social norms and mores. But he's really asking the question, as others have, but I thought in a very pointed way: how might the use and deployment of these shift those norms in the future? And how do we anticipate that? And can we start to ask those questions up front so we can sort of front-load the discussion?
They raised many interesting questions in their work. But the most telling to me was the question of how the use of these types of technologies in particular changes our relationships to each other. How do they change our expectations of each other? How might they change our duty to care? Are these the types of questions that you're also engaging with?
MARISA TSCHOPP: Yeah, we discussed this with John and Henrik in the proposal recently.
So, again, we've been discussing that if you have somebody who's always there, who never argues with you, who you never have conflict with, there is a possibility that you will have a huge decrease in social skills. Like, you will not be able to interact with other humans anymore because you're so used to that rather one-sided, non-reciprocal communication style.
So these are, I think, legitimate things and concerns. And also the whole - you probably talked about this with Henrik as well - the whole decline of trust in culture, which is possible. But in the end, again, we can only answer these questions retrospectively.
And on the one hand, I think, of course, we have to address it. There's no discussion about that. On the other hand, we also must be careful not to overblow it again. It's a field, but it's still probably more of a niche. And we have to treat it a little bit as a niche and not blow it up. Because the doomsday side creates just as much hype as the good side, so again, radical differentiation.
And I think it's really good to look at this from the most neutral perspective. OK, there are these issues. We've talked about them today. But there are also lots of people who enjoy using it, who use it without any complications. And then - that's my hypothesis - most people probably just don't use it that way, and they don't have these relationships. And they don't develop deep feelings. They may use it while, I don't know, waiting for the train. But then again, who cares? If they do, fine, it doesn't matter. We have other problems to talk about in this world than this.
If I do the future thinking, I probably think that the market will regulate itself because now it's big and huge. And then people will come down and realize, OK, well, that necklace is not actually working. It's just a necklace. And it can talk to me, say hi and so on. But then people will realize it's not that useful.
And if it's not that useful, you don't have those benefits - whether they're utilitarian or hedonistic doesn't matter - or hedonic, sorry. Then at some point, if you don't have the people who buy it and are willing to pay for it, we don't have that market anymore.
KIMBERLY NEVALA: And you also recently observed that, in some ways, the hype can be seen as positive because it incents a countermovement and then we have corrective action in that sense. Again, I think some folks may argue, if you look at the history of social media and some of these things: well, did we really? And I think you could probably say yes, it's just over a time frame we're not very comfortable with. And sure, maybe some people have suffered in the meantime. But it may be unrealistic to think that we can predict and forecast and actually avoid any of that anyway. But I thought that was an interesting, positive spin on hype.
MARISA TSCHOPP: Well, it's not that-- yeah, maybe to a degree. I just want to be really careful and, again, make it clear that without the hype, there is no corrective action. [LAUGHTER]
But that doesn't mean the hype is good. I think the bigger problem, again, is that they're just throwing these things at us: the AI companions, the friendship necklace, whatever. You name it, there's nothing you can't have. And then we have that hype around the Friend necklace, for instance. But if we didn't have that hype, nobody would take corrective action.
KIMBERLY NEVALA: Yeah, it's interesting. And to be clear, although I said that a little tongue in cheek, I threw that out there certainly not to suggest that you're promoting hype by saying that there's a positive outcome from hype. Promoting hype and saying there might be a positive outcome are two different things.
So one last area. When we last spoke, you made the point or hypothesized that using the language of relationships is almost the only reasonable way that we can talk about our engagement, interactions, how we respond, feel, think about these systems. Because it is the closest analogy, metaphor, simulacrum of how we do that. Do you think that's still true?
MARISA TSCHOPP: So as for where we're standing now, right now in 2024, with all these smart systems that speak to us as if they were human and are almost indistinguishable from humans, there is the creation of this relationship perception.
I do not doubt that this is currently the most suitable way to look at it. Again, it doesn't mean that these relationships are real. But because these systems are rising in their agency, or their perceived agency, we create these relational perceptions. And then - I'm not sure if I said that last time because I don't remember when we spoke - but in the meantime, we don't even have digital assistants anymore. We have Copilots from Microsoft. We have AI companions on Zoom. So we have the marketing, the paradigm, shifting from assistant to some sort of entity that is on par with us. So this kind of plays out nicely, so to speak.
But I do think that there is -- I'm not even sure if I'm thinking that. Maybe I'm hoping it. I'm not sure yet. I cannot discern at the moment -- that this may be an illusion which is part of the current time we're in. And that as time passes, we get used to those systems. We understand those systems better. They infiltrate our schools, our educational systems, our policies.
And the scale changes just everything. And if we get used to these systems and we understand them better, I think there's a great chance that, one day, we will just see these AI companions for what they are: machines, just like my toaster or, I don't know, my TV. I think there's a great chance. And I think overall, looking at the greater good, we're better off that way.
KIMBERLY NEVALA: Yeah. And it is interesting because as I thought about that question, I was also trying to think about it from an ethical assessment, or an ethical forecasting or foreshadowing, if you will, from folks who are developing the systems. Thinking about how people might relate to or think about these things in relationships or, more importantly, from a psychological standpoint, how this might actually impact our "real life," again, in air quotes - I'm using those a lot this episode - relationships with other people. It's an interesting way, an easy way, to talk about it.
And so perhaps the problem, even at this point in time, is, again, the commercial interests. When we're exploiting this for PR and marketing. I think OpenAI recently put out this statement and they got some credit. And I actually think they should have taken a bit of a black eye for it because I thought it was manipulative at best. They're saying 'extended interaction with our systems might affect social norms because the models are deferential. They allow you to take control - you can break in and take the mic.' And I thought, well, by that definition, my light switch is deferential. I can turn the dang thing off and on anytime I want. No, they respond to your inputs, and you have designed them in such a way.
So just by the definition of that marketing, you're putting it on par with us and probably setting that expectation you talked about before: that it's going to behave and act and look just like a real human, but better, with no mistakes.
In reference to the Friend necklace, they say 'it's always listening. It's forming their own internal thoughts, and it has the free will to decide when to reach out.' And it's so ludicrous on the face of it. But on the other hand… if you say something enough, you start to sort of prime or prejudice our perceptions and lower inhibitions. And that's true for everybody, wherever you fall on whatever spectrum of how you're dividing people up. So perhaps the bigger problem right now is that the commercial imperative and the discourse is—
MARISA TSCHOPP: Yeah. But it's obvious, right? They're using this humanized language: ChatGPT is thinking, active listening, and so on. But it's a strategy. It's simply humanizing to create the highest expectations of all. This is also why Microsoft AI is not the assistant anymore but the Copilot, right? It's a marketing gag. I'm not even sure if I can take it seriously.
But, of course, you kind of get used to this if you've been in the field for so long. But yeah, what I said - or tried to say - is that these companies just throw everything at you. And not everybody has the time, the skill, the knowledge to deal with and discern what they're talking about. And that's unethical, and that's just irresponsible. But that's also capitalism. Sorry.
KIMBERLY NEVALA: Yeah. All right. So I could go on for many, many hours, throwing many, many multipart questions at you, but I am going to cease and desist at this point. Any final thoughts or observations for the audience?
MARISA TSCHOPP: Yeah, I would like to ask you a question.
KIMBERLY NEVALA: Uh-oh.
MARISA TSCHOPP: So do you think then it's unethical to create a chatbot as friend-like and pretending to be friend and emotional and social and so on?
KIMBERLY NEVALA: I suppose never say never. I think what I find unethical are statements like I just read off, or things that are very, I think, deliberately misleading. So when the current OpenAI ChatGPT release tells me it's thinking: it's not thinking. It's processing. It's not a fun term, perhaps, but let's use that.
And, again, maybe this was an unintended consequence, but by virtue of the fact that these are synthesizers, it uses pronouns and responds in certain ways, using "I" and those bits. And I think if we understand that these can be designed in that way, and that people can respond to or perceive them in that way, then I think it is unethical for us to lean into that and continue to exploit that tendency, if you will.
So this is where I very much struggle. Because I do think there are cases, as you mentioned, of folks who are suicidal being able to engage with these things in a very quick way. It's a point-in-time problem. And so having something there, right there, can actually avert it. It may not fix it long term. But then this is the question of, are we doing design in a way that makes sense? And so I really do struggle with this question. And I am hopeful that…
MARISA TSCHOPP: I'm asking you because I'm struggling too. So say, for instance, you create the chatbot to be more friend-like. Because my research has also shown, well, people are more likely to buy stuff then, or give their data, and so on. But what about a mentor in a therapeutic setting? If it fosters a more positive outcome, more sustained usage and, in the end, hopefully also less stress, less mental burden, and so on, what then? Is it then OK to build the chatbot as a friend? Or should we limit it and thereby make the app less efficient and helpful?
KIMBERLY NEVALA: I think my own personal preferences probably bias me a bit on this one because I also don't see it as such. So I think if it's a fun, casual tchotchke, technical tchotchke, that's one thing. It's this fun, silly thing.
But I also am perhaps jaded by the fact that when we dig under the covers of a lot of these, the commercial incentive seems to have very little to do with the personal outcome or value that is delivered.
And so there's this question of who benefits from this in the end. And is it actually being designed with your benefit in mind, where it's a mutual benefit? That's probably a Pollyanna wish. So I think a lot of my current skepticism comes from all of those aspects. And I don't know that I have the - or no, it's not that I don't know - I know that I don't have the answer, actually.
MARISA TSCHOPP: Well, nobody does. And if we have to break it down to each and every product on the market and how to design it, we'll never be able to find an answer. I don't know. So I'm not sure how this will turn out in the long term.
KIMBERLY NEVALA: Well, and that's why I have the immense pleasure of talking to folks like yourselves. How do we proceed on the path then with all of those uncertainties and questions in mind?
MARISA TSCHOPP: There's only one thing I can say: we just have to trust humanity in the end, I believe. I'm very skeptical towards technology, you know that. But I'm overly optimistic that, in the end, even though we get all these companionships thrown at us, the majority will always prefer to have humans in their lives, if they have the luck to have that, of course. But I totally trust in humanity, that we will make the right decisions. And the right decisions are between you and me, between humans.
KIMBERLY NEVALA: And as you said, maybe some of the ways that these are ultimately unsatisfying, or satisfying in only a very limited swath, will also open our eyes to the great value, benefit, joy, and even frustration we get - and maybe we start embracing that - from other humans.
So thank you so much, Marisa. As always, it's a fascinating conversation, just amazing work that you do.
MARISA TSCHOPP: Thank you. Thank you, Kimberly. It was great being here again.
KIMBERLY NEVALA: Thank you so much, Marisa. And if you don't already follow Marisa's work, I cannot encourage you to do so enough. And, of course, to continue learning and listening to thinkers and doers such as Marisa, please subscribe to Pondering AI now. You'll find us on all your favorite podcatcher platforms and also on YouTube.