The AI Experience with Sarah Gibbons and Kate Moran

KIMBERLY NEVALA: Welcome to Pondering AI. I am your host, Kimberly Nevala and I want to thank you for joining as we continue to ponder the realities of AI with a diverse group of innovators, advocates, and data professionals.

Today, we are joined by Sarah Gibbons and Kate Moran at Nielsen Norman Group. We're going to be discussing the impact UX design can have on AI development and the reverse: how AI is influencing the practice of UX design itself.

Kate and Sarah, thank you so much for joining us today. It's great to have you here.

SARAH GIBBONS: Thanks for having us.

KATE MORAN: We're happy to be here.

KIMBERLY NEVALA: All right, so let's start. Can you each provide us a quick glimpse into what initially sparked your passions for design and your current roles at NNG?

SARAH GIBBONS: Sure. I'll start. I am Sarah and I studied design in school. But long before that I always knew I wanted to do something related to design. I always joke with my dad and I say, “You're so easygoing about me choosing to go into art.” And he's, like, Sarah, I never had a choice. You decided you were going to be that from the moment you could talk and create.

So I found my way into graphic design, really a program more oriented around design theory, and then started my career in product design. Well, actually, advertising, and then decided I didn't want to sell my soul every day, so I worked my way into product design. And from product design I really found I was craving more of the system side of it. And then, as I think anyone in this field knows, it's not a perfectly linear path. A lot of times in your career you have these moments - and you don't necessarily recognize them in the moment, but in reflection - where you're like, oh, that felt really good. I found that in doing a little bit of education for IBM Design and training incoming designers, front-end devs, and product managers in this user-centered approach to product design. And then found my way to NNG. So every day, I now do a lot more strategic work but still oversee our design team and a lot of these more design-centric practices within this field of user experience. So, that’s a little bit about me.

I think naturally a lot of people in our industry are prone to exploring new tools, new ways of working. UXers are never short on curiosity. It feels like an extension of that curiosity to be interested in AI and to want to see what's going to happen, to ride the wave and not be scared of it. We are obviously all riding the same wave, which is an exciting moment to be in.

KIMBERLY NEVALA: It is indeed. So Kate, how did you come to your current role at NNG?

KATE MORAN: So, I'm Kate Moran. Like Sarah, I was very decisive from a young age. I decided when I was maybe 10 that I wanted to be an English professor. Having no idea, I guess, in reality, what an English professor's day-to-day life is actually like. But I just loved reading. I loved talking about literature. I loved analyzing things and looking for themes. I loved thinking about people and communication.

Then, when I finally got to college, I started speaking to some English professors and seeing what their day-to-day life actually was in reality and realized, oh, this actually isn't a great fit for me. That's when I discovered what information science was and realized that it had all of the things that I really loved and that user experience research helped me apply those things. And also spend a lot of time with people, studying people's behaviors, their tendencies, and their preferences.

So I started working my way towards user experience, but just like Sarah, it was nonlinear. [LAUGHS] It's hard to break into UX when you're new and so I held a lot of different UX-adjacent roles. Then I decided to get my master's degree in information science and shared out my master's thesis. Somehow, Jakob Nielsen happened to see it and read it and recruited me into NNG and it’s been a lot of fun.

It’s so exciting right now, with the evolution of AI and its application in various business contexts and with consumers, to see how that's changing people's behaviors and preferences.

KIMBERLY NEVALA: I love the fact that your backgrounds are so very different. And while I can relate to loving a good book, I have to say, I have not a creative bone in my body. My stick figures don't even resemble stick figures, so I may be a little out of my depth on that side of things. But I love the multidimensionality and the fact that you bring these different aspects to your team and to the work that you're doing. I think that's particularly important today in the age of AI.

So before we dive into some of the details there, Sarah, could you recap the objectives of user experience design - or UX design, as we'll refer to it - and when does it come into play, generally, during product or service development?

SARAH GIBBONS: Before we even get into user-centered design, what is design?

Design is the intention behind anything. So that could be a component of an interface, that could be a step in a process, that could be anything. And so I, a lot of times, say everyone's a designer and we're all making these intentional decisions throughout our days, throughout the work that we do. It's the purpose behind these actions, facts, or material objects.

So if you think about design as a whole, well, then user-centered design is putting intention behind things other people use. And it's not just one intention. It's a series of very complex things such that, hopefully, when someone is using whatever you're creating - whether it's a service, so it's intangible, or a physical product - all those different decisions you've made along the way in creating that output land in the hands of, or in front of, the person using it. And it feels natural to them. It helps them meet their goal in a seamless way.

So when we say user-centered design or human-centered design, this is a really nice way to think about it, because there are multiple humans in any system. It's not just the end users. It’s the people that they're related to and also the creators of that product or service. Human-centered design is basically saying we're not going to create what we're creating from a feature point of view or from a stakeholder point of view. We're going to create it from the view of the people that we're delivering it to.

That's really a mindset. In human-centered design we see a lot of processes, steps, research methodology. Really, this is a mindset and then we have all these tools that allow us to apply that mindset.

KIMBERLY NEVALA: When folks think about design in this way, I imagine most of us think this happens very late in the process. So somebody already has an idea, or a problem to solve, or a product in mind. And certainly, we have a tendency - and this predated AI, although AI is no different - where we get access to a new tool, or, when we were kids, a new toy, and we run around looking for ways to put it to use. There's that whole ‘when you're holding a hammer, everything looks like a nail’ adage. So can we use human-centered design in a different way to ensure that we are looking for and solving meaningful problems? And doing that in ways that support human workers flourishing and not just greasing the skids, particularly with AI, by automating tasks willy-nilly within existing processes.

Kate, I suppose this is the long-winded version of a previous question you and I have spoken about. Which is: how do we make sure that we're solving the right problems and designing machines that work for humans and not vice versa? What are your thoughts on that?

KATE MORAN: You may have seen me smiling as you were asking that question.

First, I totally agree with Sarah that human-centered design, UX design, customer experience, product design - a lot of these different words you might use to describe this kind of work - it all goes back to that mindset. Or almost like a philosophy, of trying to make sure we're being very strategic and thoughtful about the way that we're solving problems in our designs.

One of the classic gripes of people who work in these roles is that we're being brought in too late in the process. So somebody else in some other role has already decided, I have this new shiny toy, or, I saw this design feature on Netflix and now I want to cram that into my enterprise product. And, oh, right before we finish all the coding, right before we roll this out to customers, let's get some UX research done. Well, that is not the time to make any kind of meaningful change.

And one of the things that we're seeing happen a lot right now with the access that we have to these LLMs and other variations of generative AI is people are so excited, they're looking for ways to cram this into their products. They're not necessarily stepping back and thinking, how does this solve a problem for the specific people that I'm designing for? How is this going to make their work easier or their lives better? Is this the right way to solve this problem?

We're seeing this manifest, especially right now, with AI chatbots. There are tons of companies and I can just imagine the conversations that executives are having. We need AI. How do we get AI into this product? I know! A chatbot. In some cases, that might be the solution, and that might be helpful. In other cases, that may not be the right way to apply this to your users' needs. That's a real challenge right now.

SARAH GIBBONS: I also think that it's really hard to grasp how one may leverage this new technology because of the limited ways we're able to interact with it right now.

I would challenge anyone - and this may not even be your decision - to really think higher level, larger, and longer-term, because that is going to really benefit teams. Every resource a team spends on integrating AI in a nonoptimal way is an opportunity cost. And that opportunity cost comes at the expense of solving other problems that need to be solved, or of looking at the longer-term picture and starting to get the infrastructure in place to actually embed AI in a way that solves the original problem.

Everyone right now is problem hunting. They're problem hunting for problems that AI could solve rather than problem hunting at large. That is a really big red flag and I think it's going to actually create differentiation across a lot of different products - and this really comes from a leadership level - in the teams that decide to take the longer, higher-level point of view when it comes to AI. And create a multi-year strategy rather than try to treat it like a checklist item and pull resources away from meaningful work in order to deliver something that probably doesn't solve a need.

KATE MORAN: So coming back to your question, Kimberly. I know it was a question you asked a guest in a previous episode…

KIMBERLY NEVALA: Was it a question or a hypothesis? I'm not sure. Was it a diatribe?

SARAH GIBBONS: [LAUGHS]

KATE MORAN: Well, it’s a little bit of both, I think. What you said was: how do we make sure we're shaping AI tools to fit the needs of the users instead of trying to make our users shape their workflows or their behaviors to fit the AI? That is the eternal question of human-centered design [LAUGHS] and we've seen this happen with every previous iteration of technology.

If you look back to the start of the computing age, they had all these very complex systems that you had to have a degree or several degrees to be able to use. But as that rolled out to the consumer, suddenly we had, especially in the '80s, personal computing. People didn't have time to get degrees to figure out how to use these things.

Same thing happened in the internet age. First, it's the techies, it's the early adopters, it's the nerds like us who get in there. We know all of the terminology and we understand how the technology works so we're initially not designing it for the average person. We're definitely at that stage right now with these AI tools.

Look at what it takes to use Midjourney to create, to generate, images in any kind of precise and effective way. You have to be familiar with so much terminology, so many esoteric features, little things that you wouldn't know you could use, tucked away in Discord, in order to be able to use it. Now that we're introducing this to the public, we're going to have to change that.

SARAH GIBBONS: It's such an interesting duality happening. It's a duality that sparks excitement in me, but I can see how it sparks a lot of almost fear in others. But a lot of what's happening right now is we're all so amazed by the technology that we're completely overlooking how, quite frankly, shitty every AI experience is right now.

KATE MORAN: Mm-hmm.

SARAH GIBBONS: And it's this shininess that's going to wear off. If you have anything to do with creating experiences, it's literally what our job is. How do we take the capabilities of this technology and make it intuitive to use? That is the task at hand and is going to be the task at hand for, honestly, probably the next 5 to 10 years at least.

KATE MORAN: Do you mind if I keep going on this?

KIMBERLY NEVALA: Yes, please. [LAUGHS]

KATE MORAN: This is just how Sarah and I talk.

SARAH GIBBONS: Kimberly, you're never going to get it back.
[LAUGHTER]
KIMBERLY NEVALA: I'm OK with that, as you can see by my very wide grin.

KATE MORAN: We'll be here for three hours.

Yeah, one thing that is related to this is the intense focus on conversation: conversational UI, conversational interaction. That is a huge component of how we will continue to interact with these systems going forward. But Sarah and I were on another podcast that I won't name, and one of the hosts said we're going to go to 100% conversational UI. I said I don't agree with that, because there are a lot of downsides to it. It's linear. It requires users to hold a lot of things in their short-term memory versus seeing it on the page. So we're starting to see more and more of what are being called hybrid UIs - hybrid user interfaces. The terminology isn't quite a great fit and will probably change over time.

For example, Midjourney came out with their Alpha UI, which takes some of those things that had to be memorized, such as keywords and weird syntax, and exposes them in an interface. Instead of having to remember how much chaos you can have, which is a term that they use in Midjourney, that's suddenly a slider on the screen. You don't have to remember the shorthand command for that. You don't have to remember the syntax. You don't have to remember the range.

They're also changing some of the terminology used around these things. So something that used to be called chaos is now being called variation. That is a great signal of moving from this very developer-focused (perspective). Developers love a word like chaos. [LAUGHS] That may not be intuitive for every user, so we're shifting away from that.

I am so excited right now because we're looking at how these AI tools are changing user behaviors and preferences. But also trying to find ways to help users understand what these tools are and what they can do with them. It's a fascinating challenge because the average person doesn't really have a clear mental model of what they can do with this and what these different terms mean. So as a language nerd, I'm really excited about tweaking the UI labels and things like that.

KIMBERLY NEVALA: A couple things I want to come back to in what you just said, but before we do that, I want to back up just a minute to something you said, Sarah. It was interesting, Kate, when you talked about the change in the label from chaos to, what was it - iteration…

KATE MORAN: Variation.

KIMBERLY NEVALA: …some part of my head went, oh, chaos is a great word. Because maybe it will cause someone to pause and think a little bit about what just got presented to them and how they should contextualize it. So it seems to me and I may be way off here - you're going to correct me if I'm wrong, I hope - that one of the challenges, particularly with AI, because language is such an intuitive way for us to interact, is that we're going to need to strike a balance between making something that is very intuitive to use and yet guarding against some of our human intuitions that lead us astray.

We know that we've always had the tendency to be overconfident in results presented to us by computer systems, well before AI. We have always had the human tendency to anthropomorphize. To ascribe human intention and tendencies to these systems if they appear or sound credible, or like a human might, and this is coming very much to the fore. That creates a tendency to be overconfident in, or over-trust, what the systems are spitting out. As you said, common folk don't necessarily understand the underpinnings of how they work.

I'm going to err on the positive side here and say that the intention of the design is to make it intuitive to use by using common language. But the unintended consequence might be that folks, in fact, aren't thinking critically about how, and where, and what to do. Again, we saw this a bit before AI. Depending on the order of things you put in a recommendation list, people have a tendency to assume the thing that came up first was, in fact, the most important or most recommended idea. Or in health care, that that's the most likely diagnosis when that might not be the case at all.

So how do we avoid or strike this balance between making things intuitive to use and guarding against these intuitions that might lead us astray? How do we avoid using those innate human traits against ourselves while we're designing these apps?

SARAH GIBBONS: It's a really good question, and there's a lot of different points in there.

The first one that I always come back to is: one, AI is not new. That is really important. I know our access to it, our ability to interact with it, and its tangibility have recently changed with these different LLMs and the ChatGPTs of the world. But it's been around, and its infrastructure has been around, for a while. It's been baked into quite a few products for multiple years, so I think that's important.

And then this idea that users have to be discerning. There is a critical thinking skill that is just becoming more and more important in life in general. It has also been building over the past decade with our - especially in the United States - news cycles. Where are you getting your news and what type of echo chamber is it in? What do you see online? Random web pages offering medical advice, how valid are they? It's this idea of discernment, and that's not new either.

What’s really interesting in what's coming is these multiple questions around, really, ethics. The ethics of the people building our products and what we enable. The transparency around what's being enabled, and these longer-term capabilities that our users are going to have to build, like critical thinking and discernment.

Then the question for all of us is: what role do we play in that as builders or creators of products that are communicating some type of information and in the legitimacy and likelihood that information is accurate?

It's really easy when you compare products on the market right now - and I hope that a lot of what we're sharing is evergreen - but in this current moment, take a ChatGPT compared to a Perplexity. Perplexity is an AI tool that's similar to a search or Google, but it gives you a result with the sources that it's pulling from. So you could ask the exact same question of both of those tools. You could have the same user. Say that user is a critical thinker with an average level of discernment. Then which tool is enabling that user to better discern what they're being served? I think that's such a good tangible example for anyone listening. A tool that starts to give references and the ability to follow up on what they're being served, et cetera, is a manifestation of a team being more enabling, or leveraging more transparency, in order to inform the user in positive ways.

Kate, I know you have a lot of thoughts here. [LAUGHS]

KATE MORAN: Yeah, you know I do. In many ways, I agree with Sarah on this and I do think Perplexity versus ChatGPT is a good illustration of different approaches and what the outcomes of those approaches might be.

I will say, though, I've been hearing this argument that the public just needs more critical thinking or the public just needs to do more work and check their sources since I was a child. I remember the early days of Wikipedia and having teachers say you can't use Wikipedia. Or, if you're going to use it, you have to make sure you read all the source material and I think that hasn't changed. We've seen different iterations of that over the years. People consuming their news from Facebook and just reading the headlines and not seeing the sources. Or people looking at featured snippets where Google will expose information directly on the search results page. We've seen this happen over and over again and these generative AI tools are just yet another iteration of it.

We can lament that people aren't citing their sources or checking their sources, but that's not going to do anything. We're not accomplishing anything. So I think you're right that Perplexity is a good example of the design team taking on the impetus to say, we're not going to make people do this extra step of asking for the sources. We're going to include the sources directly with the result. When I've talked to Perplexity's design team, I've told them this: I love that they put those sources in thumbnails at the top of the page, so they're even above the answers. They're very visually recognizable. They're right at the top.

So it's unfortunate, but the impetus and the responsibility is entirely on us, as the people working on these systems, to think about how do we ensure that people are getting quality information?

SARAH GIBBONS: I do think in this whole conversation - and I'm just going to say it because someone listening may be thinking it - this is also just such a small percent of how we can think about leveraging AI in terms of delivering the final answer. When really, AI is a tool.

So if you look at other possible ways: Airbnb has been using AI to suggest pricing for hosts based on the area, what's selling, the different seasons, local events, et cetera. That's such a good example where AI is not delivering the final choice or decision made. It's presenting knowledge in a concise way that then lets the human make that decision.

When we have these discussions and we pull these tangible examples, we hope that they are just illustrations, and we recognize that we're also talking about AI in such a limited scope of what it really can be leveraged to do. Also, if you're looking at one article of news, it could work in the opposite way. It could start to combine multiple articles from multiple news sources into potentially one, where you're getting a lot of different viewpoints on the same news being delivered, and potentially even a score of where on that spectrum it sits. It's really interesting to think about all these different ways that we can start to leverage it and combat it.

Also, it is the wild west right now. We have zero guidelines around what these things - I'm going to just say things, meaning products, technology, teams, features - can do. There are people in the AI ethics and guidelines field who are better suited to answer these questions and have long been thinking on them. But some type of regulation is going to need to come into play to address the ethics.

KATE MORAN: I agree there's so many different variations in applications of AI. What is top-of-mind for people right now are these conversational GenAI systems because that's what's feeling new to everyone. But when we talk about what will have the biggest impact on individuals, on society, information-seeking is one of the biggest and that's certainly where I have concerns.

And so the future that you describe, Sarah, where we're leveraging these tools to actually improve transparency, to break down some of those bubbles that we've been in since social media first emerged on the scene, that is ideal. But the thing we've got to think about - and you and I talk a lot about this and I'm sure we'll continue to - is, where's the motivation to do that? Who's building these systems, and what are they motivated to do?

SARAH GIBBONS: It's only going to come from an appetite from consumers and that's where everyone plays a role.

KIMBERLY NEVALA: There's a very large educational landscape and work we have to do with the public collectively, and in some aspects of design. Incorporating (elements) into these products that prompt people or highlight the shortfalls and limitations for them, and encouraging them in whatever way we can to be diligent, is good, but it is not the sole answer. A lot to think about there.

Kate, I want to go back to something you mentioned somewhat in passing about user behaviors. In the work that you're doing, or even what you've observed personally, are we seeing changes in discrete user behaviors as a result of using AI?

KATE MORAN: I don't think we're seeing this in a widespread way right now. It also depends on the type of people we're talking about in the community.

Certainly, we're starting to see a lot of changes in the tech community among people who make up the audience of Nielsen Norman Group, for example. We just recently did a survey - and to be fair, I do think there's some selection bias in it - where 90% of the people who responded said they had used generative AI at least once. And I think roughly 75% of those said that they used it in their work.

So we're starting to see some changes amongst the early adopters. But again, these kinds of tools, I don't think, have really broken into the daily lives of a lot of what you might think of as general consumers yet. And the ways that they are touching their lives are kind of invisible.

Sarah mentioned that if you're an Airbnb owner, you get a recommended price based on an AI-generated pricing estimate drawn from similar types of Airbnbs in your location. There are tons of examples, like Netflix. They've got all these different potential thumbnails they could show you for a show based on what you seem to like, and your viewing history, and people like you. They'll show you something that's custom to get your attention. Those kinds of things are not fundamentally shaping or changing user behaviors.

But what we are seeing is that people who start using these tools, they're developing different tendencies or ways of interacting with those systems and those behaviors are new. Actually, Sarah was involved in a study where we found two pretty surprising and interesting new user behaviors. Do you want to talk about those, Sarah?

SARAH GIBBONS: Yeah. Kate's spot on. I don't think that, one, we really know, at a larger general-public level, the mental model shift - which is more of a precursor to user behavior - or, two, how people then work to form that mental model.

KATE MORAN: Should we define what a mental model is for your audience, Kimberly?

SARAH GIBBONS: Go for it.

KIMBERLY NEVALA: Yes. [LAUGHS]

KATE MORAN: So when we say mental model, what we're referring to is the way that people imagine or envision the world and the way that it works. These mental models are interesting to study because they often don't perfectly reflect reality. This has a lot to do with the way that human beings make sense of their world and ties back into even our tendency to come up with myths. If there's thunder, what's the cause of that? Well, we don't have any visible understanding of how that works so we might attribute it to an angry god or something.

Similarly, when people are searching for a flight that they want to book, if they see that the search results are ranked in a particular way and they don't understand why that is, they'll come up with a rationale or a reason to explain it. Even if that's not actually what's going on behind the system. So mental models are really about people trying to make sense of a system, or a tool, or a feature, or a product in a way that they can understand. What would you add to that, Sarah?

SARAH GIBBONS: No, exactly. What's happening right now is, given where we are in this evolution, we don't really know. We have an idea of people's AI mental models, but it's changing really fast, and it's so limited. That's what is really fascinating right now. We are at the cusp of these mental models being created and then that mental model - or original V1 mental model - informing user behavior. Then that user behavior and interaction with these different AI systems then changing one's mental model.

That’s what we've been studying, and what's fascinating is how quickly these mental models are changing and, because of that, how user behaviors are changing. What we really try to study at NNG isn't the discrete, specific tools but, agnostic of those tools, what people are doing. And there's a lot of different things happening. Two of the ones that I've published with some of our research team are accordion editing and apple picking.

Accordion editing is this idea of people approaching AI with this expanding and collapsing mental model. So they may bring something and have AI expand it. Imagine a high schooler who has to write a paper: they bring a few sentences and expand them into a paper. Then, when they have to write a summary, they condense that down. And we're seeing that - whether it's language or any other way someone's using a lot of the conversational AI right now - in this expanding and collapsing towards one single goal, which I think is really interesting as a user behavior. That's indicative of this mental model of how people are viewing AI being able to help them, in its capabilities of both expanding and then summarizing.

Apple picking is another one, and it ties back to our earlier thread of discernment and using AI as a tool, not as an answer. It's people, across multiple tools and agnostic of tool, choosing different pieces of what they want and then combining those into a new output. I think that's indicative of the limitations of AI and people not really knowing how to use it, and thus not getting what they need from it. But also the need to be discerning, because a lot of the outputs don't actually align with the quality or goal at hand. So users (are) then picking and combining towards that goal that they are hoping to achieve.

KIMBERLY NEVALA: We’ll link to that paper and research in the show notes. I will say one thing that struck me when I read it, or the question that came to mind - particularly in that first scenario where people were first narrowing in, and then they widened it, and then they narrowed back in - is: would that have been predictable if we had mapped the human ideation process, how we think and create, before we even looked?

SARAH GIBBONS: Of course, of course. It's the beauty of what we do. These behaviors and the way that we think about things aren't going to be brand new just because the technology is. That's really important for people to remember. This is an evolution, not a completely new era. We're taking everything we know and expanding on what we can do and how quickly and, quite frankly, how freely - how low-cost - we can do it.

That's what gets me excited. It's these ideas of taking everything we've always done and then being able to leverage speed and efficiency. The other thing that we've heard multiple times - we're running some more research right now - is also the randomness of it, which I think is such a beautiful thing. Sometimes the randomness of AI actually really frustrates me. I'm like, why don't you get what I'm trying to make you do?

KATE MORAN: [LAUGHS]

SARAH GIBBONS: You're so random. But what if we start to leverage that as an ideation technique or as a part of our existing process? We had a quote from a user study that was, well, I view it like throwing paint against the wall and seeing how wide it splatters. That's such a beautiful metaphor for a lot of how people are using AI right now, and its strengths, quite frankly.

KIMBERLY NEVALA: So we've just a few minutes left here. I feel like we should have set you up for a couple of hours. [LAUGHTER]

You work with practitioners and have been practitioners. How are you approaching the use of AI in the practice of human-centric design? Where does it work really well and where should people be cautious? Kate, maybe you can kick us off.

KATE MORAN: One of our favorite ways to talk to UX practitioners about how to leverage these GenAI tools in their work is to picture them as being like a UX intern. This is not someone who you're going to delegate everything to and have no oversight and let them produce all of your deliverables and things. But this is somebody that you can - "somebody" in air quotes - bounce ideas off of.

Or you can, for example, use it to help build a research plan. I just did that the other day. I built out a research plan for my next round of AI research, and it took me about an hour and a half. I gave ChatGPT our template for a research plan. I gave it a bunch of context and information. It filled everything in. It missed a lot of things, so I had to go back through and make some tweaks, edit things, add things in, take things out. It was not an appropriate finished product, but I had a pretty good rough draft in an hour and a half. I was able to then focus on other things and fine-tune some of those little details. So I think that's an appropriate way to think about a lot of these tools right now.

I will also say, I strongly encourage UX researchers, specifically, to be very skeptical about a lot of the AI-based tools that are coming onto the market for us right now. I've spent a lot of time researching these tools, experimenting with them, playing with demos, talking to the tool creators. There are a lot of unsubstantiated claims coming out right now, like "we can analyze your usability testing videos for you." But then, when you actually dig into it, they're looking at transcripts. That method is observation-based, so that doesn't make any sense.

There are also even products that are claiming that they'll replace the need to study users at all. That you can just play with their AI fake users and get the same quality of results. I would encourage people to be very skeptical about those kinds of claims.

KIMBERLY NEVALA: Let me leave both of you with one quick final question. What emerging areas of research are you most interested in or engaged with?

SARAH GIBBONS: Oh, there's so many, Kimberly.

KIMBERLY NEVALA: It's so unfair, isn't it?

SARAH GIBBONS: It really is. We have a task force for AI within NNG, and we met with them yesterday, and there's two areas, two ends of the spectrum, actually.

I'm really interested in this higher-level, broader mental model and user behavior research that is really specific to the moment we're in, because it's going to pass us by. We're going to lose it. We're going to lose this moment when people don't really know that much about AI, and those broad mental models, I think, will be really interesting.

On the other side of it, I'm really interested in AI in niche verticals. AI in these really specific cases for lawyers, for doctors. What it's going to be able to do throughout their processes and potentially how it's going to be leveraged within that very specific context and vertical. To me, that's actually where the strength at scale comes into play.

Those two ends of the spectrum obviously work together and will be really, really interesting.

KATE MORAN: I definitely agree with that, Sarah. Those sound like really interesting areas that we're going to see evolve over the next handful of years.

I'm really interested in seeing how language evolves for these AI products. There are a lot of challenges right now to UX writing when it comes to labeling things that are going to go into these hybrid UIs, figuring out how to explain how to use these different things, which, again, people don't really have these mental models around yet.

One thing that I'm hoping to do this year is conduct a lot of research myself on how these generative AI tools are impacting information seeking. As an information science nerd, that's right up my alley. It's really interesting to see how people are taking their understanding of how search works and trying to apply that to this new tool for gathering information. Seeing where that works, and where that fails, and how it's changing.

KIMBERLY NEVALA: These are great notes to end on. You have been incredibly generous with your time and insights today and have certainly shed some much-needed light on the tricky business of creating good user experiences for AI applications and the art of applying AI to user design. Thank you, Kate and Sarah.

SARAH GIBBONS: Thanks, Kimberly.

KATE MORAN: Thanks for having us, Kimberly. If anybody wants updates on the research that we're conducting, feel free to check it out at nngroup.com.

KIMBERLY NEVALA: To continue learning from thinkers and doers such as Sarah and Kate about the real impact of AI on our shared human experiences, subscribe to Pondering AI now.

Creators and Guests

Kimberly Nevala - Host - Strategic Advisor at SAS
Kate Moran - Guest - VP of Research & Content, Nielsen Norman Group
Sarah Gibbons - Guest - Vice President, Nielsen Norman Group