Diversity, Equity and Inclusion in AI with Tess Posner

Tess Posner shares her mission to make technology and education accessible to all, demonstrates why diversity and inclusion drive innovation, and assures us that it is not too late to ensure AI is both created by and beneficial for everyone.

[MUSIC PLAYING] KIMBERLY NEVALA: Welcome to the second episode of The Pondering AI Podcast. My name is still Kimberly Nevala. I'm a strategic advisor at SAS and I'm your host this season as we contemplate the imperative for responsible AI. Each episode, I'll be joined by an expert to explore a different aspect of the ongoing quest to ensure artificial intelligence is deployed fairly, safely, and justly now and in the future.

In our last episode, Michael Kanaan really teed up the importance of education and crossing the digital divide. So today, I'm really pleased to continue that conversation and to be joined by Tess Posner. Tess is an educator, the CEO of AI4ALL, and an amazing advocate for diversity, inclusion, and equity in the tech economy. Welcome, Tess.

TESS POSNER: Thanks so much for having me. Really excited to be here.

KIMBERLY NEVALA: Now I've had the absolute pleasure of working with you and your organization for, I think, over a year now. But for folks who haven't had that honor yet, can you tell us a little bit about your background?

TESS POSNER: Yeah. So I started out in the education space at the beginning of my career. I especially loved working with young people and seeing their career journeys and their growth. And one of the things I learned early on is how technology skills are really becoming the new basic skills. Even to compete for an entry-level job, you need basic computer literacy and a start on those digital skills. And yet they're not equally accessible to everyone.

We don't see people getting equal access to computers at home, for example; it depends on where they grew up, where they live, their zip code. We certainly don't see equal access to computer science in schools either, with only 40% of schools in the US teaching it, and under-resourced areas and schools are the most impacted by those inequities. So when we think about how to help prepare the next generation to thrive in the future economy, it's so important to think about access to digital skills.

So that's when I started really focusing on this problem of the future of work and the future of skills: how do we prepare the young people of today for that future, whatever it's going to look like? I got into artificial intelligence because I was working on a White House initiative called TechHire, which President Obama launched in 2015 to try to fill the 500,000 open tech jobs in the United States with non-traditional tech talent.

So people who maybe didn't have coding skills but had tremendous potential and capacity to do these jobs, and how do we train them up quickly? And we started to see AI come up as a real trend in the emerging skills that employers were looking for. There was huge demand, and yet not enough people were actually being trained for it. And there's even a big awareness problem in AI because of the way it's portrayed in the media.

So not only is there a gap in access to AI skills, but the way AI is portrayed potentially drives some people away from even getting involved in the field. And that's really why AI4ALL was started: to address the diversity crisis in this space and make sure that people representing all skills and backgrounds are actually building and shaping this technology, because it's such a critical part of what the future will be, not just in technology but in every field.

We need representation from all different parts of society shaping and building AI systems. So that's really the arc of how I got here: thinking about how technology skills are the new basic skills that everyone needs to succeed, and then thinking about these emerging tech trends and why it's so important to have people involved in the early stages.

KIMBERLY NEVALA: Yeah, I think that's really interesting. So it sounds like the core mission behind AI4ALL is improving literacy, improving access, and then arming people with these skills, particularly in underserved or underrepresented communities. Is that a fair summary?

TESS POSNER: Absolutely. And it's interesting, because when we thought about the mission of AI4ALL, we thought about not just providing literacy but also finding people who could be the leaders in the field. Workforce training that gives people the skills to become practitioners of machine learning and AI technologies is really important. But at the end of the day, it also matters enormously who is in charge of making the decisions: how do we regulate AI, what kinds of things should we be building with it, and what kinds of policies should we be making?

So we need diversity at all levels, not just at the practitioner level but very much at the leadership level in all different fields, whether it's business, policy, or education. And I think that's the core of AI4ALL's mission: start with that literacy, but then support students throughout their journey into leadership careers.

KIMBERLY NEVALA: And what has really inspired or even surprised you as you've been growing this whole community of AI change makers, as I believe you call them?

TESS POSNER: Oh my gosh, so many things. That's my favorite part of the job, really: getting to see what they create and what they're passionate about. A couple of things surprised us from the beginning. We start with programs in high school, where we focus on giving AI skills to students who may be interested but may not have access to those skills in their schools, especially focusing on populations underrepresented in AI. And what we saw is that students really got excited about the opportunity for AI to solve problems that they care about.

And I think that was one of the big eye-opening lessons about how to approach our curriculum and the ways we teach AI: focus on providing students with the tools, but let them run with the problems they care about.

And that's really exciting, because when we think about building more inclusion and representation in this space, it's so clear to me that people drawing on their own life experiences and backgrounds will create things we could never have imagined, and will ultimately yield a far more diverse set of solutions than if a homogeneous group were building AI.

And we see that every day with our students. For example, we had a young woman who lost her grandmother because the ambulance didn't arrive in time. Her idea was to build an AI tool that helps sort 911 calls by urgency, to help the people in the field make a very difficult decision, of course, about how to prioritize those calls.

So that was something that came directly from a tragedy in her own life, something she deeply felt needed to be solved. Other students in the Bay Area were really impacted by the wildfires and seeing that devastation in their community. So they formed a team and worked on an app that uses satellite imagery for early detection of wildfires, something that's urgently needed.

Those are just a few examples; we have thousands of projects that students have come up with based on their passions and the needs they see. And I think that's the key to what's most special about AI4ALL: this idea of unleashing the potential that's already there, so that young people can see AI as a tool to help them solve problems they care about.

KIMBERLY NEVALA: I think that speaks to something else. We tend to talk a lot about the need for diversity and inclusion, and I do want to come back to what's driving that conversation more broadly. But we tend to talk about it in terms of risk. What you're really highlighting is that focusing on diversity and inclusion creates opportunity: it drives innovation and ideas that solve real problems, and it provides benefits we might not otherwise think about, or would approach differently, if we weren't bringing all of those diverse perspectives together.

So I think it's critically important that we look at this through that positive lens, as well as the more negative slant it sometimes gets. That being said, what are some of the real implications and risks if we are not actively engaging in diversification and inclusion?

TESS POSNER: Absolutely. So let's look at some of the trends today. I'm speaking from data in the AI Index, which came out a few weeks ago and is available online; people can check it out. It has lots of great info on trends. On the one hand, we're seeing tremendous growth in the AI field.

For example, the world's top universities have increased their investment in AI education over the past four years. The number of universities teaching AI at the undergraduate and graduate levels has increased by more than 100%: 102.9%, to be exact.

KIMBERLY NEVALA: That's amazing.

TESS POSNER: There's also data from Indeed showing that machine learning engineer was the fastest-growing job on their platform, with 344% growth in job listings between 2015 and 2018. So we're seeing tremendous growth even through this pandemic, when a lot of things slowed down. AI certainly hasn't. It's been connected directly to things we're dealing with during and after COVID-19, even decisions like where to send vaccines.

So this remains a field that is growing and burgeoning every month. And at the same time, the diversity crisis is not getting better. Only 13% of AI and machine learning company CEOs are women. In 2019, among PhD graduates in the field, 45% were white, only 3.2% were Hispanic, and 2.4% were Black. So for this incredibly important technology, a homogeneous group is in charge of building and shaping it. And to your question about risks, there are several.

One is that biases creep into the product development lifecycle in different ways. We've seen a lot of examples of that, like the Gender Shades study, which looked at facial recognition and found that the systems in use today were most accurate on white men and less accurate at recognizing people of color, especially women of color.

And this is amazing research, by the way: gendershades.org. Everyone should check it out. There are a lot of examples of how race and gender bias can creep into these systems, which has life-altering consequences when you consider how these systems are being used by law enforcement, by immigration services, in deciding who to give vaccines to. These are real-world consequences as we outsource decision-making to AI systems.

So there are issues of bias, especially around race and gender. This is hugely consequential and needs to be addressed. And obviously there are a lot of debates about how to solve bias. People talk about governance, regulation, and technical solutions that might help identify and track these issues before the products are actually released.

But at the end of the day, who is building and shaping the technology has to be part of the solution; all of those other solutions are only partial. If you have a homogeneous group, not only are they going to miss something, even unintentionally, but the people actually being impacted aren't represented in those decisions. And that's a critical problem, even from a product development perspective.

So getting diversity into all stages of the product development lifecycle is so, so critical, in addition to some of these other solutions around regulation, governance, and ethics principles, all the elements we've been talking about over the past couple of years as an AI community. To me, it's critical that this be front and center in the solutions to these risks, and bias is just one of the ethical risks of letting the diversity crisis worsen. Frankly, I'm worried, because the pandemic has predominantly impacted communities of color and women and has exacerbated some of these inequities.

So I'm concerned that this crisis we're talking about could get worse if we don't continue to focus resources, attention, and awareness on it.

KIMBERLY NEVALA: Yeah. It definitely feels a bit like a perfect storm lately. We're confronting this global pandemic amid really important and impactful social, economic, and political turmoil, all of which is happening as we experience this rapid evolution and adoption of technologies such as AI. I'm interested in your thoughts, and perhaps some examples, on how including diverse perspectives, both the creators and the consumers of these solutions, can help guard against bias and other unintended consequences. Even if bias... well, I think bias always exists. But even when bias isn't the primary issue.

TESS POSNER: Yeah. Well, at AI4ALL, what we're trying to do is prevent some of those things from happening through the programs we run, and we believe that just by having people in the room who represent those different perspectives, you're going to organically prevent some of those risks. Like I said, even if you have technical solutions in place, and other guardrails in terms of how products and teams are set up and how management and governance work within an organization, I think the best thing companies can do on these questions is make sure those teams are diverse.

That means having perspectives from women, from Black and brown employees, from those who might be impacted. And there are so many different stages of building a technology solution or doing this research. There's the ideation phase: what should I build? What's the big idea? What problem is this product trying to solve? At that stage, truly understanding the problem is critical, and involving communities, diverse communities, in those conversations is the first step.

Because, like you said, there are unintended consequences. We can sometimes think we're building a great solution, but there might be a reason it hasn't worked in the past, or it's actually not a good fit for the specific problems those communities face. That might surprise you in the end, even when it seems like a great idea. There's a tendency to be overly optimistic about technology solving certain problems, where it seems like a silver bullet.

But in the end, adoption challenges prevent it from having the impact you intended. So again, even in that ideation phase, figure out: how are people really experiencing this problem? Is it a problem that needs to be solved with technology, or can it be solved in other ways? What's been tried in the past? And let's talk to the people actually having that problem. How would they describe it? Do they see this as a potential solution?

So involving them at that very early stage is really, really critical from the get-go. And then at each stage of the product development lifecycle, whether it's the build, the iteration, the testing, or the deployment, have those voices there, not just as people you're talking to for user testing, but among the people actually building it. That's where some of the impact can happen in terms of risk mitigation.

KIMBERLY NEVALA: Yeah. I think that's just so incredibly important. I do some volunteer work with kids in the dependency system. And one of the things you realize very quickly is that, despite your good intent, it is both impossible and in many cases actually harmful to project your own perspective, or to jump straight to solutions, instead of understanding how people are really experiencing the system and what they see as their issues and roadblocks.

We all go through our day-to-day in different ways, see different challenges, and respond to them differently. So even for me it's been a personal lesson about really needing to step back and let the person experiencing the issue or concern tell us what they need, and then react and support them in that way.

TESS POSNER: Exactly. It's empathy and humility, which I feel sometimes don't show up in technology conversations that are focused on getting to the solution and moving quickly. All of that is great and needed. But at the same time, we need to slow down, look at ourselves with a critical eye, and ask: am I even building the right thing? We see this right now with all these companies struggling with, what is my ethical responsibility here versus my profit responsibility?

We see that happening in the environmental space too, with climate change and all its impacts. The consequences of getting AI ethics wrong could be catastrophic if we make the wrong decisions. So how do we pause and create those checks and balances rather than just diving headfirst into these new technologies? And of course, we're also excited about the promise of AI and the ways it could amplify our capability to solve these very complex challenges.

But at the same time, we have to do that thoughtfully and mindfully.

KIMBERLY NEVALA: Yeah, and I think this goes back to... there's really no doubt in my mind that AI is accomplishing amazing things. But as you noted earlier, there's no shortage of examples of AI gone awry, whether that's facial recognition systems performing poorly on darker skin tones, hiring algorithms biased against women, or medical applications that conflate access to care with the need for care.

But given how pervasive AI is today already, I think it can be easy to become cynical. So what advice would you give to those who think the horse is already out of the barn here?

TESS POSNER: That's a great question. And it's funny: we've seen the headlines shift. A few years ago they were really positive about AI, and now people are waking up to some of these things we're talking about. What are the ethical risks? How do we prevent harms? And it's real; we should take those things very seriously, especially the way they amplify existing inequities in the system.

On the other hand, like we're saying, there is this tremendous potential of AI. I think of it as a partner to our human ability to solve problems, rather than its own actor in solving them; it's really this idea of AI complementing human capability.

And I think there's really no limit to what that can accomplish when it comes to human imagination and the people doing incredible work to innovate, making society better for more people. We see examples in the healthcare sector a lot: better medical diagnosis, aid for doctors and nurses who need increased capacity, early detection of diseases. And there are all kinds of applications around climate change, processing large amounts of data so we can make faster decisions on issues that are really urgent.

So I think there's an unlimited supply of problems in the world that technology, and AI specifically, can be applied to. And it's really about unleashing the human potential that currently exists in places where we don't necessarily recognize it as such. If we can truly unlock that, there's absolutely no limit to what we can accomplish in creating a better world for the future. And that's such an important question right now as we emerge from the pandemic: how do we create a better world, not just go back to the one we were in before?

Because we all know that in some ways, the pandemic has revealed some of the deep, deep problems that we have. So how do we think differently about what's ahead instead of just going back to the status quo? I think AI can be part of that if we focus on the right things and apply that ethical lens and that diversity and inclusion lens. It has the possibility to live up to its potential to solve some of the complex challenges we're facing today.

So I hope that everyone feels that sense of urgency as we move ahead in this pandemic recovery.

KIMBERLY NEVALA: Yeah. I think that's such an amazing perspective, and again, that idea of using artificial intelligence to amplify and augment human capability, not to replicate or replace it. So as we wrap up, from a practical perspective, what guidance or steps would you suggest for us as individuals, organizations, or even communities to be more mindful creators and consumers of these technologies as we move forward?

TESS POSNER: Absolutely. Everyone has a role to play. For companies, it's making sure to ask questions like: what is our ethics framework for building AI systems? Do we have a diverse team in charge of this? Look at your data and machine learning teams. Think about the organization's governance role: how do we measure the societal impact of what we're building, and how do we think about harms in the process?

And don't let that work fall only to the women and people of color. A lot of times those are the D&I champions saying, hey, we need to solve this problem. We need allies who create more power in that shared voice, all doing our part to influence the conversation and not just leaving it to those who are directly affected. Because what can make this so hard to solve is when all the pressure to ask the ethical questions and focus on inclusion lands on just a few people in the organization.

And that's just not going to work. Those few should not bear the burden of solving the problem. It's everyone's problem, and it should be everyone's solution. So that's from a company perspective.

As individuals, take the time to become literate about AI. There are a lot of great AI literacy courses online that you can easily take. Being an informed consumer of AI is also needed: over 80% of Americans use AI regularly, and I bet a lot of people don't know how they're using it, what it's being used for, or how to stay informed.

So educate yourself about some of these things. And even if you're nowhere near the AI field, this is touching everything: medicine, education, finance, insurance, everything you can think of is going to intersect with AI. So understand how it's going to impact your field and your job, and how it's being incorporated. If everyone asks those questions, uses critical thinking, and really demands ethical guardrails, that's when change can really happen.

KIMBERLY NEVALA: Yes. I think that's really amazing. And clearly despite the challenges ahead of us, I think we both believe the future is really bright. So any last words of wisdom or thoughts you'd like to leave with our listeners today?

TESS POSNER: Just that another thing to consider is the next generation. That's really what AI4ALL is focused on: it's not only us who will be impacted by the choices we make today, but our children and the young people just starting their careers. They're who we need to think about as well. With that said, if you want to get involved, mentoring young people who are learning about AI is a great way to do it, as is hiring interns and thinking about the talent around you, whether at your company or in your community.

How can we bring AI to more young people? They will be part of shaping the future that they will inherit from us. And I think that youth voice is incredibly important as well. So thank you so much for having me. This was a really interesting conversation.

KIMBERLY NEVALA: Oh, thank you. I am absolutely energized. And I know our listeners are too. And if you would like to learn more or get involved, you can find links to AI4ALL and other organizations promoting diversity, inclusion, and justice in AI in this episode's show notes. So thank you so much, Tess.

Next up, we're going to continue this conversation with Renee Cummings. Renee is a criminologist, a psychologist, and an AI ethics evangelist (that's a mouthful) who is really passionate about keeping the human experience at the center of AI. This conversation is a great springboard for that, so make sure you don't miss it by subscribing now to Pondering AI in your favorite podcatcher.

[MUSIC PLAYING]

Creators and Guests

Kimberly Nevala
Host; Strategic Advisor at SAS

Tess Posner
Guest; CEO of AI4ALL