Putting Inclusion To Work w/ Giselle Mota

KIMBERLY NEVALA: Welcome to day 11 of Insights and Intuitions here at Pondering AI. Today Giselle Mota reflects on putting inclusion to work. Hello, Giselle. Welcome back to the show. We're thrilled to have you.

GISELLE MOTA: I'm thrilled, Kimberly. Thanks for having me.

KIMBERLY NEVALA: It's been a minute since we last spoke. So this first question may be unfair, but I'm interested in the most consequential developments or changes that you've observed in how we are thinking about work and the future of work of late?

GISELLE MOTA: I think some of the most consequential ones have been definitely around how society is evolving and changing. And how politics have become a bigger part of the future of work conversation and basically the evolution of work. Now, the way we look at work, at humans, continues to expand to think about the nuances of people. So, for example, people are having to stop and say, wait a minute, do we allow people to self-identify in certain ways? It's just very interesting to see how politics and society, it's all merging and mixing together.

And then work is changing. Now we're thinking about generative AI and how it's used in the workplace, which needs to really be thoughtful about policies, discrimination, bias, all of those things.

KIMBERLY NEVALA: Your work - in fact, your current title - is around product inclusion. Inclusion can be quite a bit broader than avoiding discrimination: it extends to how we make things accessible and usable for a broader swath of personas. How are you seeing folks address or approach the idea of inclusion today? And is our embrace of AI tools, in all of its glory and problematic states, helping or hindering that work?

GISELLE MOTA: That's a loaded question, Kimberly. I will say that there's a lot to it.

First of all, you're right. Inclusion is not just about mitigating bias and discrimination because, quite honestly, where compliance ends, inclusion sometimes comes in, picks up, and goes a little bit further. For example, we have anti-discrimination laws around the world, and it still doesn't mean that people are covered at the workplace. Or we have ADA compliance - the Americans with Disabilities Act and all of that - and it doesn't mean that still today everything is accessible.

There's a point where it kind of takes off, where you get away from what's legislatively passed and what's permissible. A point where creativity, nuance, and consideration come into play. A quick example: a card that you use for your financial transactions needs to be accessible. There are companies out there who have done something as simple as creating notches, grooves, and tactile elements on a card so that someone who's low vision or blind will have agency over their financial experience. They don't have to ask someone, hey, is this my debit card, is this my prepaid, is this my credit card? They can tell for themselves because they can feel it.

Some of the work that we're doing is expanding race and ethnicity inside of our self-identification features. Again, no official legislation that you have to do that. But we're doing something as simple as, for example, separating all the clusters of identity of being Asian. That's going the extra mile.

Now, gen AI has the ability sometimes to hurt or to help. The hurt comes in when the data itself didn't already include all the different types of representation of people. For example, AI screening a person on video to determine if you're competent or not. If one of those assessments is to literally examine what a smile looks like and see if someone is friendly, what does a smile look like?
I mean, could it be that someone’s facial features are a little bit different or the person has autism and they don't give that eye contact. Or I've said it before, I always get the chuckle, but what if you had Botox, right? And your smile is not all the way smiling at that particular moment. Whatever it is that the data has been trained on, does it identify what a smile is in different experiences? So if you're using generative AI to come up with things and generate content or maybe make assessments itself, then you may be missing a whole bunch of people because you didn't design it with nuance in mind.

Where it can help is in cases where it can hold people accountable for where bias may have crept in, because it can sort through systems. And it can say, hey, here's a report card or an audit: these instances came up where this factor was being used. It's biased and you might want to fix this. Or AI can detect exclusionary language, where you might be using gendered language that's not appropriate. You might be using language that is not friendly to people with disabilities, for example. And it may be able to tell you, you should change this in your policy, you should change this in your email. It could fix that as it generates content for us. There are lots of different use cases we can think of.

But at the base of it, if we don't get the design right from the core, and the data right at the core, and get the people making those designs to have a more broad understanding of the world and humanity, then we're in for some issues.

KIMBERLY NEVALA: When we think about approaching design in this way, are there a few key areas of focus that require more attention?

GISELLE MOTA: Yes, I think language, because generative AI is built largely on large language models. And it uses context. For example, when you're training it, for the computer to understand a sentence, it's going to use the words around a word to bring context into that sentence.

So what are we talking about when we're talking about "human," and you want to apply a characteristic to that word "human" and the adjectives or words around it? How does the computer understand what you're really talking about? Context clues. But sometimes what we've learned is that the data around certain words is very biased. It generalizes in a harmful way. For example, go into ChatGPT itself and ask it to give you a random association of names, their favorite food, and where they're located in the world. What you'll tend to find is that there's not really a randomization there. Someone whose name might sound Indian gets associated with living in India and their favorite food is curry. [LAUGHTER]
Honestly, that wasn't very random. You told ChatGPT to be random and it wasn't.

So associations, the context around certain words, and how they've been trained: whether it's someone's religion, their gender, their race, their age, their disability. That all comes down to certain things that these systems have been trained on, and there are problems in there. So language and context need to be addressed, for sure.

KIMBERLY NEVALA: As you look forward into 2024 and you're thinking about inclusive design in the future of work, what do you expect to take center stage next year?

GISELLE MOTA: I think we're probably going to see more space around being intentional with what we were just talking about a moment ago. It's going to force us to really examine the nuances in things.

For example, say you start a prompt talking about women. Is it for a health care use case, or anything at work, anything in an industry that might need to consider gender for whatever it's about to output? We're forced to ask, what is a woman, though? It sounds so basic, but what is that? We understand that women can be so many different types of things. I just came from a conference that was for women and non-binary people. I believe the reason they did that was because they understand that people who are assigned female at birth could still identify as non-binary. They wanted to welcome all kinds of people to the conference and reach that demographic as well.

So what is a woman, what is race, what is gender, what is disability, what are all of these things? Is disability always just the protected considerations of what the ADA has put out there, or what a particular country would identify? Or is it also someone like me, who has dyslexia and might not consider themselves "disabled," but who does need accommodations and does need certain things?

So I think what will take center stage is us having to take a step back from what we think is an absolute, what we think is binary, what we think is black and white. From what we think, and really take a moment to consider the complexity of things, and how maybe to neutralize some of that where it makes sense in our use cases. Do you have to get so particular? Sometimes you do, and sometimes you can be more generalized.

KIMBERLY NEVALA: And for folks that are listening and saying, that would be fantastic. I wish I had the time to do that. I wish the market, I wish my company, I wish whatever entity would allow me, but I just don't have the time or the space that that requires. What would you say to that? What are your recommendations for what they should prioritize and how they can build this in a practical and effective way?

GISELLE MOTA: Absolutely. Start small. That might mean, because we've been talking about all these different things that make people who they are, start small and consider: is your product available to people who are left-handed? Start there, right? Does your software make considerations for people who speak or read a right-to-left language, not just left-to-right languages? Literally, are your buttons mirrored? Is the language, and the way that things are set up, set up that way?

Start small. Maybe it's disability: this is a huge area. Maybe you start with people who are low vision and blind and you design with that in mind. Maybe it's for people who are hearing impaired or have hearing loss, are deaf. Start there.

Maybe even take several more steps back and start just with education and awareness at your company on why this stuff matters, what does it look like. And then start there. Develop the data that shows that your client demographic would benefit from this. And then ultimately the revenue of your company would benefit because you extend your reach to more people. So start small, pick a use case and follow that.

KIMBERLY NEVALA: I love that. I had the opportunity to speak recently with Yonah Welker. He had some great examples where designing with disability, seen and unseen, in mind actually leads to a much more robust and broad product. Rather than how we tend to think about it, which is: I'm going to design a product or a system and then I'm going to think about how to adjust it to accommodate.

GISELLE MOTA: I agree. It's true. Then you also can create things like we were mentioning a moment ago. What if you don't identify as having a disability but you want to let your company know that you need some sort of accommodation: regardless of how your label shows up or if you want to be labeled or not. Everything should be accommodation-friendly for everyone, right? Start there and kind of switch up the system.

KIMBERLY NEVALA: Just start, is what I take away from that: that's fantastic. Thank you for your time. It was great to talk to you again.

GISELLE MOTA: Thanks, Kimberly.

KIMBERLY NEVALA: 12 days of Pondering AI continues tomorrow. Subscribe now for more insights into what's happening now and what to expect next in the ever-fascinating world of AI.

Creators and Guests

Kimberly Nevala
Host
Strategic advisor at SAS

Giselle Mota
Guest
Principal, Future of Work at ADP