Designing for Human Experience with Sheryl Cababa

Sheryl Cababa discusses human-centered design (HCD), why UX may be too successful, and how systems thinking addresses human factors often overlooked in design thinking.

[MUSIC PLAYING]

KIMBERLY NEVALA: Welcome back to "Pondering AI." My name is Kimberly Nevala, and I'm a Strategic Advisor at SAS. Here in our third season, we invite a diverse group of thinkers and doers to explore how we can create meaningful human experiences and make mindful decisions in the age of algorithms and AI.

Today, I'm beyond pleased to bring you Sheryl Cababa. Sheryl is the chief design officer at Substantial, where she conducts research, develops design strategies, and advocates for refocusing design on outcomes. Sheryl joins us today to discuss the hot topic of human experience (also referred to as HX) or human-centric design.

Welcome, Sheryl.

SHERYL CABABA: Hi, thank you so much for having me. I'm really excited to talk to you today.

KIMBERLY NEVALA: Let's start by having you give us a short overview of your journey in design to date.

SHERYL CABABA: Yeah. So I actually started out as a journalist. [LAUGHS] That is what my degree is in. Of course, this is decades ago, so I'm telling on myself in terms of my age.

But I think what was interesting about it is that I came up in the '90s, when you could just basically learn how to code things on your own. And so I quickly learned how to code web pages and things like that. For my first employer, I worked as an intern at the Seattle Times. And one day, I got an idea about building an intranet site for them. And being an intern - and at that time, I don't even know if they had a web page - they were like, oh, sure, OK, if you want to, build it. So I basically built them their first intranet site.

And then I went on to work at Microsoft as a designer. I wasn't actually a developer there. But as a designer, I would like dip into the code sometimes. And spent about a decade in product design, and then moved into consulting, where my focus over the past decade or so has been on design research and strategy.

So I do a lot of human-centered design research to inform product strategy. I work with a huge diversity of organizations-- everyone from technology companies to philanthropy to government entities. And along the way, I try to help folks understand how to better connect the work that they're doing with societal outcomes. So that's, in a very short summary, what I have experienced in the industry and sort of my trajectory, which is a little bit circuitous.

KIMBERLY NEVALA: It seems crazy today to think that somebody wouldn't have a website. That's just what you do.

SHERYL CABABA: I know.

KIMBERLY NEVALA: As a journalist, in particular, and as a designer as well, it's important to ensure we're all, as people say, talking or playing from the same sheet of music.

So this idea of human experience, or human-centric design - we'll use HX as the shorthand today - is certainly picking up steam. How do you define human experience, or human-centric design? And what are the drivers that are powering this trend today?

SHERYL CABABA: So the term I use most often is human-centered design, or HCD. It aligns with the formalized processes within the space. It's often described as design thinking. So if you hear design thinking and human-centered design, they're usually one and the same in terms of processes, philosophy, and just methods and ways of doing things.

Oftentimes what people are describing when they're describing human-centered design is actually user-centered design, especially in the technology industry. And so what that means is that we are primarily focused on end users of products and services and how they experience kind of the direct benefit of use of our products and services.

And essentially, the concept of user-centered design picked up steam in the 1980s, just as you were seeing a lot more digital technology coming into the market. And if you think about early products, or early computing products and things like that, they were just designed to work. They weren't designed to be sort of easier for you, or anything like that. They were designed to encapsulate features.

This isn't a computing product, but take old remote controls or things like that: they were kind of impossible to use. But you could access everything on them. They didn't prioritize your ability to use them. They prioritized that everything was there and you had access to all the features and whatnot.

That really started to change with Apple. And it's basically sort of an evolution that has really oriented around the end-user experience as kind of the primary thing to design for. And that's because a lot of technology companies have seen the benefits of doing this, because of examples like Apple, right? They see that if you prioritize ease of use, if you reduce friction in products and services, that people are more likely to use them-- and they're more likely to stick with them too.

And so this mindset shift was particularly kind of important, especially as computing became such a central part of everyone's lives. Just like designers and technologists basically asking themselves, how does the user experience products? What do they find frustrating? What do they find easy? And how do we design in response to that?

And I think it's actually the source of a lot of problems. We're at the point where we've actually done so much of this too well. And in fact, the user-centered design process is backfiring on us.

KIMBERLY NEVALA: I think this is an interesting and important point for us to drill into a little bit. Because sometimes when I've been having these conversations, folks will say, well, the folks talking about HX, or - as you call it - HCD (human-centered design): isn't this just user experience in another form?

But it strikes me, based on what you're saying, that it might be a question of scale. So user experience is looking at a single individual utilizing or using a product. And perhaps we're trying to expand the lens to include not just an individual user, but also a broader group or society, or call it humanity, if you want to get really existential. Is that a fair interpretation?

SHERYL CABABA: Yeah. So a lot of it is about the framing. So when we talk about user-centered design, or user experience, the way we're positioning people is in relationship to a product. I mean that's why they're called users in that space. I know if you ever look at design Twitter, there's a discussion that flares up every year or whatever that is: stop calling them users.

Honestly, if you're designing for the use of your product, it's a fair interpretation to call people users. So I think the response to that was thinking about humans and their needs first. I think the drawback is that these things are one and the same in terms of how they're executed and the processes involved.

And so what's missing is that discussion at scale and how we think about humanity and its needs. And not just a human and their needs and frustrations. Nor a human and their relationship just to a product. Because humans as humanity, they have relationships to each other. Groups have relationships to each other. Institutions have relationships to each other. And those things are kind of not taken into account in the fundamental processes that are used within human-centered design, user-centered design.

And I keep saying human-centered design, but it's also design thinking. Those things are kind of the same in terms of the process that we're describing. So I feel like it's a nice term, but we can't separate it from how it's interpreted and exercised within the field, which is really oriented around individuals and how they use products and services, especially in technology firms.

I think you would see in things like global health and what have you, there's kind of like a push to orient around communities and how communities respond to things like products and services-- interventions like policy-- all of those things. And that's a good evolution of human-centered design. But I still think it suffers from many of the same problems, which is we're thinking about how an individual person interacts with a product, service, or any of those other entities that I just mentioned.

And I don't think that's enough. We have to think about society as a whole and what we want out of something. What our needs are collectively and not just, what does an individual want or need?

KIMBERLY NEVALA: Yeah, it's such a good point. And I do want to dive a little more into where and why design thinking, as we've traditionally thought about it, has fallen down. But perhaps we can take a little detour to cover some background.

Many of the issues that we're seeing may just be limitations that are becoming apparent as this tech is scaling up. We know, for instance, that algorithms or AI are exceptional pattern matchers. They are the ultimate pattern matcher. And people are pretty predictable: we are often unfailingly and sometimes, unfortunately, predictable in ways that even we as individuals don't always anticipate. I think we think we're more unpredictable than we are. And that's particularly true at scale.

So we're creating these patterns that these AI solutions and algorithms can then exploit. And I don't know if that's the right term, so maybe I should just ask the question. Are current AI systems and algorithms exploiting some of these common human traits, such as our predictability, and what are the dangers in that approach?

SHERYL CABABA: Yeah. I'm sure folks who listen to this particular podcast understand the use of proxies. So basically, I live in a neighborhood in Seattle. And you probably see a lot of Volvos parked on my street.

And oftentimes what platforms do that are using AI is they use things like that - your purchasing behaviors, your zip code, those kinds of things - to be a proxy for things like your political beliefs. Even things like dog-breed ownership. And so that predictability actually allows… yeah, I would use the word exploit. It allows them to exploit those patterns and be able to sell products, to be able to sell advertising, to basically capitalize on our attention.

And so we know there's a lot of pitfalls to that. YouTube, of course, is a good example of connecting certain interests. My daughter, she started getting interested in these domino videos, where they build domino structures and they knock them down. And they're really elaborate. And once you watch one of those videos, your feed is going to turn into, I am a domino obsessive. Which is like, we share a YouTube login.

So I'm kind of like, oh my God, our feed is so crazy. It's just got all these different areas of interest. It could be like, oh, I'm trying to figure out how to fix a leaky sink. So instruction oriented around that is going to be mixed with her domino videos and her crafting videos and whatever.

So I think part of the issue is just that what this sort of exploitation is always oriented around-- and basically what it always is meant to lead to-- is whatever the incentives are of the organization that's designing them. And if they are oriented around growth of their user base, and that's essentially how they make money, then that's what's going to be emphasized. And that is what the algorithm is going to be biased towards.

So I think it just requires a little bit of critical thinking on our part as just users of these products and services, to kind of understand what's happening.

And of course, the dangers of that are all of the effects we've been discussing as a society-- like polarization, addiction to social media, addiction to our devices, et cetera. And so yeah, I would view that as a lot of the outcomes that we probably as a society want to interrogate, but it sort of flies in the face of the incentives that a lot of platforms have.

KIMBERLY NEVALA: So why is design thinking, as we've traditionally executed it, I suppose, not up to the challenge? And what do we need to be thinking about, or focusing on instead?

SHERYL CABABA: Yeah. I think the issue with design thinking fundamentally is that this orientation around users or individuals and how they intersect with products, especially technology products, is based on this assumption that if you're designing for the benefit of your individual user, then you're likely making good decisions-- good design decisions, right?

So designing for things like efficiency and for capturing your attention seems like the right thing to do if you're designing for the individual. Why? Because you're fulfilling what they probably want. So even as you're doing user research, it's like, oh, I really like this.

And that sort of intersects with what we understand about things like behavioral psychology and how people basically interact with these things that grab their attention, so it feels like you're doing them a service. Because you're getting them to use your product. You're getting them to love your product-- like, you're getting them to basically engage with it meaningfully, and probably share things on that platform.

And an individual user might be like, yeah, I love this platform. I love TikTok. It's so funny. I like everything about it-- all the people and fans I have on there, and that I discover the way people are creative on it. But it's also like junk food.

I don't know if you have that feature on your phone that tells you how much screen time you've used in a week. But week after week, I am horrified by mine. Sometimes it's like, you have spent two hours and 40 minutes on your phone each day. And this is down 15%. And I'm like, my God, where did that time go?

And I think what it means is, it's not good for us to be doing things like scrolling en masse. Aza Raskin, who's at the Center for Humane Technology and who used to work for Facebook - and the infinite scroll is oftentimes attributed to him - has said, I don't know that this is something that we should want as humans. It solved the problem. It was like, what do people want? Oh, they want to be able to continue scrolling. They want a sense of randomness, which is kind of based on, I believe, BJ Fogg's psychology theories. And so they provide just that amount of randomness and make it interesting enough for you to be like, oh, I've been scrolling for an hour, and what have I even done? It's like giving people junk food instead of nutrition.

We are providing that. And it's considered in many ways, or has been considered good user-experience design, because the user experience is catering to whatever it is you want.

So I think in some ways, I've sought to prioritize humanity rather than individual humans in my practice. And one way of doing that is, one, kind of thinking about societal outcomes, and using systems thinking to broaden basically our lens and be able to explicitly think about unintended consequences.

KIMBERLY NEVALA: So talk a little bit more about systems thinking and how that would help us address some of these gaps and challenges you've identified.

SHERYL CABABA: Yeah. So there are three key concepts to systems thinking. And I've been thinking about it specifically within the realm of the design practice.

Because I do feel there are good things about things like advocating for humans and how they experience products and services. But how do we kind of broaden our thinking, so that it's not this narrow approach of having someone kind of test a product and be like, oh yeah, I can use this, or I can't use this, or this is how it'll fit into my life?

And conceptually, the first thing we need to be thinking about is interconnectedness. Like, what are the things that are connected with someone's experience with what we're doing, or what we're creating? Who are other people who are involved? Who are people who we might be affecting who might not be an end user? Who are people who might be marginalized within the experience of our product, or outside of the experience of our product, but they're still affected by how other people use it?

The second concept is causality. So just thinking about cause and effect. And understanding that there are radiating effects to whatever it is you're designing or doing. Facebook has been learning this a million times over. Whether or not they're doing anything about it is a different story. I think they are. There are many fine people there who are actually trying to solve for these things.

But at the same time, I think it's kind of like, well, there's some causality there that's related to the incentives within your organization. What does your leadership want? What are they trying to achieve? What are they trying to gain? And so you don't want to be that salmon swimming upstream that's trying to kind of fight the downstream effects of trying to constantly design for growth and things like that, that might not result in societal health. So there's this idea that everything we do has radiating effects, and it's worthwhile to kind of interrogate what those are.

And then lastly, there's wholeness. So thinking about systems as a whole. If you are working on products and services, how might your products intersect with other entities? So at the institutional level, how does it intersect with government? How does it intersect with regulation? How does it intersect with different communities in different places? So that you're not just designing a product on the West Coast of the US that's going to be used in other countries, and yet you don't have any sort of cultural touchpoints or reference to how people might use or even abuse your products.

So I've been trying to emphasize those three things, which are interconnectedness, causality, and wholeness. And these are the kinds of things that, in an ostensibly empathy-based practice like design thinking or human-centered design, you sort of lose sight of because they're kind of abstract.

You have to do some analysis from an abstract point of view. But you should be involving stakeholders who you normally wouldn't involve in your decision-making process.

KIMBERLY NEVALA: Is there a common example where focusing on user experience has - wittingly or unwittingly - led us astray?

SHERYL CABABA: Yeah, I think we've all experienced products or what have you that give us that feeling of: I'm unsure about this and is this actually good for me?

I've written in the past about things like autocomplete and how uneasy it makes me. Because I'm kind of like, yeah, this is easy, and I will use it in the moment. At the same time, I'm really cognizant that I'm training a machine to kind of learn about how we should be communicating, and that eventually all of this will converge into the same style of writing, the same style of thinking. I am sure folks have written a LinkedIn message.

And it's like blah, blah, blah, blah, blah-- there's those three buttons. And it says, sounds great. And I'm just like, that was what I was thinking about typing. But I'm going to fight you and type something else now, so that I'm not feeding... I refuse. I'm not going to feed this.

KIMBERLY NEVALA: I'm not that predictable.

SHERYL CABABA: [LAUGHS] Yeah, I'm not that predictable. I'm going to write, sounds good, instead, and put a period behind it. [LAUGHS] But yeah, just thinking about the repercussions of things is really important, especially when you're designing for like these broad systems that have machine learning and AI.

KIMBERLY NEVALA: And it's just so easy, because of the nature of these systems, to, as you said, really lean in and emphasize and design for and, in fact, execute in a way that just prioritizes convenience and uniformity. I mean, that's not the world I want to live in either.

I'm currently thinking about selling a house, and they're talking about staging. And I'm like, every single picture looks exactly the same. And I actually live in this house. So yeah, there's a mark on my wall. And no, I'm not going to paint it. And no, I don't want the exact same staged furniture. There's something about that that I find just soul sucking, right?

SHERYL CABABA: Yeah.

KIMBERLY NEVALA: It feels fake, in a very strange sense. And certainly that's not the only way these things can be applied. But as we think about interconnectedness and causality and that wholeness, I also start to go back to this conversation that often happens around what I'm starting to call the ethos of unintended consequences.

Folks are sort of saying, it's not possible for us to really anticipate everything, so how can you hold me responsible for things that I can't imagine? Interested in your thoughts about how folks are either leaning into or avoiding this conversation about unintended consequences.

SHERYL CABABA: Yeah, what you said about that, which is like, how can you possibly anticipate everything? I've gotten that question multiple times, as I talk about how, hey, product teams need to just basically integrate some sort of interrogation of what the potential ramifications of your product are going to be, and not just like, hey, this is good, and we're trying to make this happen-- and this is the positive outcome to it.

Every single thing we do has some sort of ramifications. It could be oriented around who you aren't designing for versus who you are designing for. It could be oriented around how the system plays out like in terms of machine learning. And that could result in unintended consequences.

I think the problem people have is they have this idea that it's sort of zero-sum. Like, you have to anticipate everything and plan for it and be able to design for it in advance. And that's not reasonable.

Whereas I'm like, well, if you just think a little bit more about it, there are probably some things you could avoid, or at the very least anticipate if it happens. So I think there's a few things.

And I kind of think about how product teams operate through three lenses as well, which is like, what's your organization like? What kinds of processes are you using? And then also, what kind of product are you designing? Like, what's the value proposition and how might it affect people?

And so from the organizational standpoint, one of the things I think about a lot, especially just personally as a woman of color in tech, is how little representation there is in a lot of the creation of technology products. So like it or not, we kind of look at things through our own biased lens.

And I think for a long time, it was accepted, particularly within design thinking and human-centered design, that you could be, let's say, a design researcher from any type of background-- and you could just parachute in from outer space, or whatever, do research with a community, gather information and insights, and then parachute out and just basically synthesize what you learned and be like, this is how we design for them. And I don't think that's really a tenable stance anymore, nor should it have ever been.

I remember maybe 15 years ago I was in an interview with a company. And I was like, yeah, you need to do design research, let's say, with mothers to understand what their experience is. Particularly as they're still nursing, et cetera. And I remember the design research director saying, well, you don't really need to do that. You just need to be a mother to understand that.

And I was so indoctrinated in the ways of design research and how we come to it with neutrality, and how we can learn about all sorts of people, that I was kind of offended by that. I was like, well, you don't need a mother on your team. You could have somebody who's a male 20-something who's never had kids, and he could go into the field and do research with moms and understand their experience.

But I think I've come full circle on this. And I'm like, yeah, this really applies in terms of… I think what he was saying, that design research director, was, we all come to the table with certain biases. And so you need to have people on your team who are actually, like, a mother who might have experienced this or might be experiencing this. And the fact that we face so many problems oriented around racial biases and things like that is a testament to how little diversity there is in our technology sector.

So I think in terms of things like machine learning, I know Ruha Benjamin has done a lot of writing in this space. I think her book is called Race After Technology. And she basically says that when we're designing products, especially if they're scaling and are involving AI, you need to audit them from an equity lens.

And so you have to think about, what are the unintended consequences of designing this at scale? How should AI systems prioritize individuals over society, and vice-versa?

And then, when is introducing an AI system the right answer, and when is it not-- which is actually a question that doesn't get asked a lot, is like, when is it not? Because we sometimes just assume we're on this trajectory where these things need to happen-- they're predetermined. And sometimes it's just not the right use of technology for the kinds of problems that we're solving.

KIMBERLY NEVALA: That may raise another question.

You mentioned that sometimes we just run after it because the technology can do it. It's something we can accomplish, whether or not it's something we should really do, or really need to do.

It spurs the question: what should be the role of data science or algorithms or algorithmic decision making in design? Do algorithms inform design, or does design inform what we go after with an algorithm?

SHERYL CABABA: Yeah, that's a really good question. I've been thinking about this a lot, because I've been doing a lot of work in education. And the use, for example, of AI to evaluate students in assessment spaces in children's education is really unsettling to me.

A lot of the justification in the space for integrating AI ultimately has to do with the convenience of those who are doing the assessments-- whether we're talking about instructors, or districts, or decision makers, or state governments: the people who are using assessments to make decisions, whether in the right way or the wrong way. Assessments are misused all the time. But I think oftentimes it's not really thinking about, one, how does this benefit students, and also, how are students potentially harmed by this?

And so if you know AI is going to be part of what you're designing for, it actually needs to inform how you design products. You need to think about not just how AI will execute itself within your user experience, or what have you; you have to think about the impact and the potential pitfalls of doing it in this way.

I know practitioners don't always have control over that, which is why I always advocate for, if you are thinking this way, you should be involved in the upfront strategy for these kinds of products and services that may scale up in this way, so that these questions and prompts can be raised early on-- before it gets into a space of, OK, now we're just doing this, and it is what it is.

So knowing the potential for data collection and how that shapes things like machine learning needs to be kind of integrated into design processes, and maybe in sort of a reciprocal way. You can create design so that it's meant to account for that. And that should also inform what kind of data needs to be collected.

We're dealing with this right now. In fact, I'm wrestling with this notion of disaggregating student data, for example-- so understanding how students experience things like educational software. Are racial minority students, for example, experiencing educational software in different ways in which it biases against them? But what you need to do is you need to somehow collect that information to begin with. And there are terrible sorts of privacy concerns that are oriented around that.

So I would never suggest this kind of decision making is easy in terms of what kind of data you collect and how you use it. But there should be some sort of reciprocal relationship with the design process. There shouldn't be just assumptions that certain types of data are going to be collected and certain types of data are going to be used in specific ways.

One of my tenets as a design researcher is always question the framing. Don't just assume that the framing is as it stands. And you should at least ask the questions. And you might be shut down. But you might also surface some potential issues that no one actually has thought about, because they're just trying to figure out how to make things work.

KIMBERLY NEVALA: And making things work is a fairly narrow focus, I think we've learned.

SHERYL CABABA: [LAUGHS]

KIMBERLY NEVALA: We could go on for a while.

Are there specific steps or suggestions you can leave with organizations or individuals for them to be able to really assess their practices and ensure that humans are being mindfully considered, if not central to the design of their products and processes?

SHERYL CABABA: Yeah. I think the key-- and this is probably not a satisfying answer-- but you have to look at it in a nuanced way. You can't just be like, I am serving humans, and be like, so therefore I'm going to talk to them, and the decisions I make are going to be good, because I'm listening to them.

Because people are not always good at stating what they want. And also, what people want and what people need could be two different things. I hate bringing up this quote, but wasn't it Henry Ford who said something like, people would ask for faster horses? That's often used in human-centered design research to justify a design researcher's need to interpret data that is coming from people. But I think that's actually not a good way of using it.

But I do think there is something to that, in which we have to kind of think holistically and also in a nuanced way about, what do we need as a society, as a collective-- which is so broad. But then also taking individuals who are within that system into account, without necessarily designing only for convenience, or what have you.

I've been looking at this discussion on Twitter about an Edit button. This raises its head every, I don't know, few months or years or whatever. Elon Musk, who's like a passive board member or something on Twitter now, put out a poll the other day that said, this is really important: do you want an Edit button? And I don't know if it was meant to be an April Fool's joke, or something.

But the majority of people said, yes, we want an Edit button. And the Edit button is a really good example of you have to totally think about the potential repercussions. Just because people say they want it, because they're trying to avoid typos, doesn't mean it's the right solution.

Like, people retweet tweets all the time. Imagine if you retweeted something, and somebody edited it to be hate speech. Like, you're basically co-signing on that. And I don't think a lot of people who are using the product, if they're like, I wish there was an Edit button, are thinking that far ahead.

But as designers and technologists, we do need to be thinking that far ahead. We need to be thinking further ahead than Elon Musk when he's talking about, do you want an Edit button? Let's make this happen. I think honestly, he was just trolling, but whatever.

Anytime I have to think about him, I'm a little bit like, why? But it's a good, very direct example of how you need to integrate into your process what that's going to look like. What are the radiating impacts going to be of that?

How is it going to actually affect institutions if you add in an Edit button? You can't just do research with your end users and be like, OK, they all want this, we're doing it. You have to kind of think about what the downstream consequences are and let that inform your decision making too.

I remember when I was working on the Tarot Cards of Tech, which are a little pack of prompts that help you think about things like, what if Mother Nature were your client, or who are you leaving out of your process? It's a good series of prompts for thinking about unintended consequences.

And I remember as we were creating it one of my friends said, what if we had a card that said, what if you did nothing? And everyone laughed and everything. And then I was like, wait, but hmm, that is a possibility sometimes. You can actually do nothing and that has an impact, right?

And we didn't include it, of course, because our audience are technologists. They orient towards action, and so it's hard to justify that. And it almost requires an entire philosophical discussion to include a problem like that. So we left it out, but I think about it a lot. And almost any time I'm in the position of designing anything, I make sure I ask myself that question.

KIMBERLY NEVALA: I tend to agree with you. I don't think we do ask that. And one of the issues, a lot of people say, is that this idea of failing fast, or move fast and break things, or just make it work assumes, as you said, that we're oriented to action.

Sometimes the action perhaps we should be taking is to take no action at all. And to really ask, is this necessary? Is this needed? And what will be the consequence of that?

This very much harkens back to a conversation that I also had with Kate O'Neill. And she said we need to really think about - as in your Edit button example - not just when things go really, really wrong. There are consequences, intended and unintended, of that. But what if it goes really, really right and people really, really use it? What happens then?

And so the thread I've taken from this entire conversation is just because we can't anticipate everything doesn't mean we should try to anticipate nothing. But also that it's not a single question, or a single point of view, or a single lens. We need to look at this from all different aspects and facets to the best of our abilities, to be able to really move - I suppose- the world and ourselves forward in a meaningful way.

So any final thoughts to share with the audience?

SHERYL CABABA: I do think, just like a tenet that I reinforce in my practice is, as I mentioned earlier, always question the framing.

And then secondly, everything is more nuanced than you think it is. And so how do we slow down and have those discussions in order to improve our decision making?

And I don't think about it as like necessarily right or wrong decision making anymore. I think about it as sort of a spectrum. And you should be just trying to continually grow and make better decisions.

We're all on this journey. And I'm not going to say, I know how to do all of this or that-- we have the perfect tools in our practice or anything. But I do think continuous improvement is really important, in terms of just trying to answer these questions better, look at things in a more nuanced way, and basically kind of slow down what you're doing-- especially if you're working in technology.

KIMBERLY NEVALA: Fantastic advice. Thank you so much, Sheryl. There's a lot to think about and for folks to take away, so I really appreciate you joining us today.

SHERYL CABABA: Thank you so much. I really enjoyed this conversation, and it was really nice talking to you.

KIMBERLY NEVALA: Awesome. Hopefully, we can entice you back in the future.

Now, next up, we are going to continue the conversation with Dr. Erica Thompson. And she's going to walk us through the gaps to be minded when using algorithms to make or influence decisions. So to ensure you don't miss our enlightening journey through what she calls model land, subscribe now.

[MUSIC PLAYING]

Creators and Guests

Kimberly Nevala (Host) - Strategic Advisor at SAS
Sheryl Cababa (Guest) - Chief Design Officer, Substantial