Humanity in AI with Renée Cummings

Renée Cummings traces her unconventional path to data activism, opens our minds to the power of imagination and authenticity, and makes a cogent case for why being uncomfortable is key to creating a positive AI legacy.

[MUSIC PLAYING] KIMBERLY NEVALA: Welcome back, or welcome, to the Pondering AI podcast. My name is Kimberly Nevala. I'm a strategic advisor at SAS and your host this season as we contemplate the imperative for responsible AI. Each episode, we'll be joined by an expert to explore a different aspect of the ongoing quest to ensure AI, or artificial intelligence, is deployed fairly, safely, and justly today and in the future.

Today, I am joined by Renee Cummings. Renee is a criminologist, criminal psychologist, and AI ethics evangelist, and the data activist in residence at the University of Virginia. She is also, as you're about to hear, one of the most positively passionate people I have ever had the pleasure of meeting. So welcome, Renee.

RENEE CUMMINGS: Kimberly, thank you for that absolutely fantastic introduction. Thank you very much.

KIMBERLY NEVALA: Well, now, your career did not start in tech and hasn't been in tech. Can you tell us a little about your experiences prior to joining the AI fray?

RENEE CUMMINGS: Certainly, so I began my life as a journalist and broadcaster. And then I started to work in marketing and fashion. And while I was doing that, I decided it was time to pursue a master's degree. And I actually did a joint master's degree in substance abuse therapy and education focusing on rehabilitation.

And while I was working in the realm of substance abuse, working with individuals who were taking treatment instead of a prison sentence, I realized that there was something bigger about rehabilitation that I was not tapping into. And that was really the correlation between substance abuse and criminal justice. And that's when I entered John Jay College of Criminal Justice. I started to pursue a career as a criminologist and criminal psychologist. While I was there, I also did a master's certificate in terrorism studies, looking at psychodynamics.

So I think all of my work has always been about the mind and how the mind works. In this context, it was how the mind works within the realm of committing crimes and how I could reduce criminal activity or criminal behavior, criminality just in general, and how I could treat things like addictions and all the things that may lead someone to a life of crime.

So it was while working in the criminal justice system that I started to come across these risk assessment tools that were being used for sentencing and being used in the corrections setting to grant something like parole. And there were things about them that really raised questions for me, like the content of the data that was being used and who was really creating these risk assessment tools. What were they basing them on?

So that was my entree into artificial intelligence, looking at algorithmic decision-making systems, whether or not they were fair, whether or not they were accountable. Because they certainly weren't transparent at the moment, and they were presenting major challenges and really frustrating due process. So that's how I ended up in AI.

KIMBERLY NEVALA: That's an amazing journey, and I didn't know some of that. I think you might be the very epitome of a lifelong learner. We talk about that all the time.

RENEE CUMMINGS: I am.

KIMBERLY NEVALA: So how did your role at the University of Virginia, which is really interesting, come about?

RENEE CUMMINGS: Well, I have had the wonderful experience, probably for the last two years, of speaking internationally on AI ethics. I would tell people that the conscience of AI should be criminal justice, because that's where AI was making and continues to make life decisions, decisions that can change the trajectory of someone's life, pretty much, in the blink of an eye, so real-time criminal justice decisions.

And that really allowed me to look at my work in policing, because about the last 15 years of my life were spent working with the police, training police officers in things like homicide reduction, gun and gang violence reduction, crimes against children, violent crimes in general. And much of my work was also big-data policing. And you would realize that big-data policing and AI seem to have a natural fit that is very unnatural. So I think my work is about trying to bring some oversight to the work of law enforcement and the application of AI in law enforcement.

I applied for this residency at the School of Data Science at the University of Virginia, and I was very lucky to receive it. And my work there now looks at things like big-data policing, mass surveillance technologies. How do we bring more public oversight? How do we look at things like privacy juxtaposed against community security? So it's all in the same area, just different aspects of AI being deployed across the criminal justice matrix.

KIMBERLY NEVALA: So one of the things that I think we find really challenging is making the discussion of ethics palatable, or easy to engage in. It can feel esoteric. It can even feel threatening in an odd way. We hear things like ethics and, maybe even more so, regulation framed as a constraint on things like innovation. And I wonder, have we created a bit of an unnatural tension, even an artificial tension-- pardon the pun-- between these concepts?

RENEE CUMMINGS: I think no, and I'll tell you why. I think we all know that ethics and innovation can exist in the same space. And ethical innovation really stretches the imagination of artificial intelligence. I think there is this belief that the freedom to express creatively within the context of AI could be hampered if there were too much regulation or too much legislation. And that could be so. The jury's still out on that.

But I think what I try to do with my work is really about ways in which we can stretch not only the ethical imagination of AI, but appreciate that by stretching the ethical imagination of AI, we are also stretching the creative imagination. Because now we're looking at concepts like diversity and equity and inclusion, and how powerful they are when it comes to really bringing more perspective to the realm of AI.

KIMBERLY NEVALA: Yeah, and I know you talk a lot about-- and it's a really important conversation more broadly-- intersectionality. And within the AI community today, what are we getting right and what are we getting wrong, both in our understanding of intersectionality and in our approach to it?

RENEE CUMMINGS: Well, certainly, I don't like to look at things as right or wrong.

KIMBERLY NEVALA: Fair.

RENEE CUMMINGS: This is certainly a new area. Now, AI in itself may not be that new. We know that there was that major conference in the '50s [the 1956 Dartmouth workshop] where all these great minds came together, and that's fantastic. I think there are certain challenges because it is something that is new. And what these challenges are presenting would be different ways in which we need to think about the technology.

So I think, at this moment, we haven't yet really embraced diversity and inclusion as tightly as we need to. We have not yet really embraced difference as tightly as we need to. And I think we have not really celebrated the kind of dynamism something like a multicultural approach could bring to the technology. We know there is a design monoculture, and we know that design monoculture emanates from a particular space and a particular mindset and a particular kind of thinking.

But what we need to do with AI is this: if we really want this technology to mature, and to really imagine new ways of existing together and more efficient and more effective ways of creating systems that are going to benefit society, then we've got to broaden our cosmology. So it's not about right or wrong, but there are challenges. And I think AI is up for the challenge because there are so many brilliant and dynamic people working in this space.

KIMBERLY NEVALA: So can you give an example, maybe, of where a lack of dynamism-- I love that word; I think I may have mispronounced it-- has maybe caused some issues, and an example of a transformation that was really driven and enhanced by dynamism? So what does this actually allow us to accomplish if we're able to expand our imagination and also expand the tent in terms of who's participating?

RENEE CUMMINGS: Right. Certainly, I think a big one at the moment could be something like facial recognition. I was startled a few months ago by a headline that said "Facial recognition cannot tell Black people apart." And I think Joy Buolamwini has done an extraordinary amount of work when it comes to coded bias, when it comes to understanding that this technology is gender biased, it is racially biased.

And I think what we have seen is that if we are building a technology-- and of course, we know at this moment facial recognition is overpromised and underdelivered when it comes to crime control and crime prevention. We also know that the things that it had hoped to achieve in real time really did not happen, because many of the cases attached to this technology have been wrongful arrests. And of course, wrongful conviction is never a good thing in the criminal justice system.

I think what we have seen because of these missteps by the technologists who did not embrace a diverse and inclusive perspective has been the creation of a space of resistance and a space of intellectualism and dynamism, because you now have these brilliant AI ethicists and data activists who are really looking much more closely at the technology.

So that one mishap has created a space, or really created many different lenses, for us to look at technologies across the board within new and emerging technologies. So I think what we're seeing is where there is inaction, it's creating a lot of action, and that action is bringing more equity in the ethics space to the ways in which future technologies are going to be created. So it's actually a good thing.

KIMBERLY NEVALA: So is there an example that comes to mind now where we've actually been able to overcome these challenges and deploy this technology, or these AI systems, in a way that really does move the progress of a system or a process forward in a positive way?

RENEE CUMMINGS: Sure, I think the ones that are working well are the ones that are working, let's say, with fewer people, in the sense that they're in climate change and in communications, in systems technology, in transportation, in health care, although there has been some challenge within health care when it comes to the lack of diversity in the data sets.

But we are definitely seeing it as a society, as a world-- when we look at things like connectivity, of course, communication, as I said, transportation, retail, just marketing in general. Even in education, we are seeing great impact. I was reading recently of drones being sent to the Great Wall of China to photograph cracks in the walls so that architects could look at them and improve the restoration. Those things blow my mind when you think about restoration, when you think about working in the museum space and the digitizing of all of that.

And I saw something recently using virtual reality that took us back to a march that Dr. Martin Luther King had. And I felt as though I was there. Those are great things. Those are the things that really excite. So I think we are seeing-- I mean, just in science and technology in general, we have been seeing an extraordinary impact with this technology. And that's what I love about AI, and that's what makes me so passionate. At scale, this technology can just do extraordinary, extraordinary good.

KIMBERLY NEVALA: Yeah, and it's interesting because a lot of the examples you just provided are probably AI being applied to systems where, fundamentally, they're not about making decisions or influencing human action or decision. And there's this really interesting conversation, I think, Safiya Noble, who's the author of Algorithms of Oppression, brings up regularly about the use of prediction systems, and when are they appropriate and inappropriate. Because these predictions use a past reality to predict a future reality.

And perhaps the way we architect them in some cases enforces outcomes that we don't intend to enforce. By virtue of using them, we've predicted a future reality, and then we put the process in place that enforces or creates that future reality. So are there systems-- and I'm interested in the context; you've worked within the criminal justice system and some of these really interesting areas-- where you feel AI is appropriate to use, or areas where you think we really have to approach it with a lot of extra caution?

RENEE CUMMINGS: Definitely. Algorithmic decision-making systems are very powerful. And the challenge there becomes using historic data and really creating a future that looks like the past. And that's what we're seeing because of the biased data sets, because of the fact that discrimination and something like systemic racism can actually be baked into a data set. So yeah, we are definitely aware of that.
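To make that dynamic concrete, here is a minimal, purely illustrative simulation of the feedback loop being described. Every name and number below is hypothetical, chosen only to show the structure; it is not modeled on any real deployment or data set. Two districts have identical true incident rates, patrols are allocated in proportion to historically recorded incidents, and incidents are only recorded where patrols are present, so an arbitrary skew in the starting record perpetuates itself.

```python
# Minimal, hypothetical simulation of a predictive-policing feedback loop.
# Both districts have the SAME true incident rate; only the starting
# historical record differs. All values are illustrative.
import random

random.seed(42)

TRUE_RATE = 0.1          # identical underlying incident rate in both districts
TOTAL_PATROLS = 100      # patrols split between the two districts each day
POPULATION = 10_000      # people per district

# Historical record starts with an arbitrary skew toward district A.
recorded = {"A": 60, "B": 40}

for day in range(30):
    total = recorded["A"] + recorded["B"]
    for district in ("A", "B"):
        # Patrols are allocated in proportion to historically recorded incidents.
        patrols = TOTAL_PATROLS * recorded[district] / total
        # An incident is only recorded if a patrol is present to observe it,
        # so recorded counts scale with patrol presence, not just the true rate.
        observed = sum(
            1 for _ in range(POPULATION)
            if random.random() < TRUE_RATE * (patrols / TOTAL_PATROLS)
        )
        recorded[district] += observed

share_a = recorded["A"] / (recorded["A"] + recorded["B"])
# Despite identical true rates, district A's share stays near its initial 60%:
# the skew never washes out, because the data collection process itself is
# shaped by the predictions made from that history.
print(f"District A's share of recorded incidents after 30 days: {share_a:.0%}")
```

The point of the sketch is not the numbers but the structure: nothing in the loop ever rechecks the assumption baked into the starting data, which is exactly how a biased past gets carried into the future.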

In criminal justice, AI has a major role to play. And I think I'm committed to ensuring that that role is played out fairly, and that role respects due process and duty of care, and there are requisite levels of due diligence and vigilance when it comes to building these technologies, and, of course, working with data scientists to build ethical resilience because that is also critical to your ethical imagination.

Where it can work is that big numbers have always worked well in crime prevention and crime control. So big data has a powerful role to play there. The challenge is that because so many of the data sets are biased, and because so much of the data is captured by overpolicing already overpoliced communities, what we are getting is not a fresh approach to crime prevention or crime reduction. We're getting an old and archaic, biased approach carried into the future. And that's what we don't want.

And how do we not get that? Well, definitely, we're realizing, more and more, that law enforcement needs to be trained. And there's a total lack of training when it comes to understanding these systems. So AI in the hands of a cavalier police agency can become a very deadly tool if it is used as a toy. And that's what we're seeing. So much of my work looks at algorithmic policing and the algorithmic force that's being deployed in communities.

Now, as I said, I appreciate the kind of impact AI is having and will continue to have in criminal justice-- policing, sentencing, and corrections. The challenge at the moment is to get that right, because of the lack of regulation, because of the lack of training, and because that concept of an ethical approach to new and emerging technology is really not as broad-based as we need it to be at this moment.

KIMBERLY NEVALA: Why do you think that is? Is it a lack of awareness? Is it a lack of education? Is it just hard?

RENEE CUMMINGS: Sure, I think it's happening so quickly. This is happening very quickly. And so many individuals are engaged in this technology without knowing they're engaged in it, almost sitting and waiting for AI to arrive. You always have to let them know that it's been here for quite some time. I think it is a combination of how quickly it is happening, the fact that technology always outpaces the law, and the law is continuously playing catch-up.

There's also the question of that codependence between big tech and government creating these algorithmic decision-making systems. So we all know that many big tech companies are sometimes bigger than some countries, and sometimes more powerful. And some countries are using certain companies to provide the hardware and the software for their digital transformation system.

So that codependence will always put some pressure on the kind of sustainable regulation that you can get and, of course, on the training. The training isn't there just yet, but I'm hopeful. I am hopeful that the work that we are doing, the ways in which we are building public awareness, and the public education we are doing through even this kind of media advocacy-- these are the critical things to get that conversation distilled into the corners where it needs to be.

KIMBERLY NEVALA: Yeah, I want to go back. You had touched on the idea of creativity and us really needing to stretch our imagination when we think about designing and applying these systems. And we often talk about the need to start with the end in mind. And I wonder if we are, in some cases, misapplying this concept as we architect these new digital systems and processes.

And maybe we're doing that because we define the end in mind as recreating the existing system and process versus then starting with, what is it we want to achieve, and might there be a different approach to that? You've got some experience with this idea of therapeutic jurisprudence, which requires shifting the lens really fundamentally on some of these really snarly topics. How do you react to that? And how do we encourage and enable people to stretch their imagination and maybe start in a different place?

RENEE CUMMINGS: Well, it certainly comes down to deconstructing what is the end in mind. And I think we may need to disrupt that concept of the end in mind and ask, whose end are we talking about? I think what we need to see would be a bit more mindfulness in the ways in which we design attached to that concept of moral imagination and, of course, creative imagination.

I think what we would also like to look at is not only mindfulness-- I speak about this cosmology that needs to be broader. And I think it really comes back to understanding that we can have all the diversity and inclusion and equity principles and frameworks in place, but if we don't come from a place of authenticity when it comes to understanding our own biases, then we are not going to achieve that end in mind that is supposed to be for maximum good and maximum benefit for the maximum number of groups.

So I think it really comes back, to me, to that place of being authentic about who we are, about our biases, and how these biases can really find their way into the kinds of things we design, and how do we think about the user experience, and who do we give priority when it comes to creating who that user is. So that, for me, is about authenticity and mindfulness combined with that moral imagination and an ethical approach to innovation.

KIMBERLY NEVALA: Yeah, and that's a great term, algorithmic authenticity. I really like that. I think it's a fantastic tag line and also, I think, the spirit of what you're trying to achieve. Really quickly, I guess, a little bit of a further question on that-- I think a lot of times, we know that the data AI systems are trained on is the reality they know. That's it. That's all of it.

And a lot of times, when we're approaching how to, maybe not design, but address what can be bias or inequalities or unfairness in those systems, we start by trying to fix the data. Does that work? Is that the right approach, or is there a better way to do this?

RENEE CUMMINGS: Well, at the moment, it is the approach. It's about, how do we de-bias the data? And there are many technical solutions and approaches. And more and more as we speak are being developed. But we've got to look at de-biasing the data and de-biasing the mind. So it's a combination of a technological approach and a thinking approach. And I think if we were to get both of those really aligned, then we will definitely see the kinds of systems that will create the kinds of legacies that we could be proud of.
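As one concrete instance of the technical approaches Renee alludes to, here is a minimal sketch of "reweighing" (Kamiran and Calders, 2012), a common pre-processing technique that assigns each training example a weight so that group membership and outcome become statistically independent in the weighted data. The groups and labels below are toy values chosen only to illustrate the computation.

```python
# Minimal sketch of reweighing (Kamiran & Calders, 2012): weight each example
# by P(group) * P(label) / P(group, label), estimated from the data itself,
# so that group and label are independent in the weighted training set.
from collections import Counter

def reweighing_weights(groups, labels):
    """Return one weight per example: P(group) * P(label) / P(group, label)."""
    n = len(labels)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    return [
        (group_counts[g] / n) * (label_counts[y] / n) / (joint_counts[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group "a" receives the favorable label (1) far more often than "b".
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
labels = [1, 1, 1, 0, 1, 0, 0, 0]

weights = reweighing_weights(groups, labels)
print(weights)
# Underrepresented (group, label) pairs, like ("b", 1), get weights above 1,
# so a weight-aware learner treats them as if they occurred more often.
```

Reweighing corrects only one statistical notion of bias in one data set; as the conversation stresses, such techniques complement, rather than substitute for, de-biasing the mind.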

KIMBERLY NEVALA: So are there specific tactics, if you will, that organizations can use to engage and enable this conversation? These conversations can be sensitive. They can be contentious. We're dealing with trying to balance, sometimes-- whether this is for better or for worse-- corporate interests against the greater public good, these bigger pieces. And I imagine it can feel quite overwhelming. As you advise companies that are looking to bring these systems in, whether that's in the public sector and the justice system or elsewhere, are there specific things that they can start to adopt and bring into their organizations to help promote and forward this conversation?

RENEE CUMMINGS: Definitely, and I've spent quite some time working in crisis communication and in risk management, and now I do AI risk management. And it's about this. It's about appreciating that there is a risk-based approach to reducing bias in AI, and there's also a rights-based approach. And you've got to combine the risk-based approach with the rights-based approach to get the approach that really works.

So it's about creating a space for intellectual confrontation, allowing your designers and other people in the organization to sit and really flesh out whatever that product is going to be, or that program or that system, and have that authentic conversation. I always say, we ask questions, and then we get the answers, and the answers make us so uncomfortable that we don't want to move. But we've got to understand that the uncomfortable answers create a space for really coming up with creative solutions.

And I think if we were to do that, and we were to understand the whole concept of how deep implicit bias is-- and it's not something we can remove by taking a one away or taking a zero away; it's not going to happen like that-- then you're going to have that honest, open space, and you're going to amplify certain voices in that space as well. It's really important to give all voices a space to get involved in the conversation.

And if you understand that, yes, there is a technical approach to reducing bias and discrimination in our algorithms, but there's also a rights-based approach-- and we have got to think about accountability and transparency and explainability and fairness, and about just being responsible and being trustworthy and being principled in the ways in which we approach AI-- then we are really going to get responsible new and emerging technologies.

So it's about building an organizational culture of ethics, or an ethical organizational culture, that is authentic. We're not talking about ethical theater and all the acting that goes with it. We're not talking about ethics washing or window dressing with ethicists. We're not speaking about that. We're speaking about finding an authentic place and really building algorithms from that place.

KIMBERLY NEVALA: I really like that concept. And I think it's very difficult, though, that we have to be comfortable being uncomfortable, especially when we're dealing with these systems that are working against people. I'm not saying working against people pejoratively, but working for or on behalf of people, however that may be.

We know that we need to bring more diversity and inclusion into the workplace. We focus a lot on STEM in terms of that. Are there other skills and perspectives-- your background is such a great example-- that we should be prioritizing as we build out teams and as we think about building literacy and preparing the workforce of the future?

RENEE CUMMINGS: Definitely. I think, more and more, we're realizing that AI cannot stand alone. And we're seeing that the social sciences and humanities are really spaces that organizations have got to pull from if you want to develop mature technology, particularly responsible and trustworthy AI.

It has got to be an interdisciplinary approach to the design, development, and deployment of this technology. And I think if we get that right, then we are going to see our perspectives broadening in real time, and we're going to see that really eclectic and inclusive cosmology that is so critical to just stretching this ethical and creative imagination of AI.

KIMBERLY NEVALA: So you clearly see the pitfalls and yet maintain such positivity about the potential of AI. What sustains and drives that spark for you? Because it could be easy-- it's easy to get overwhelmed and want to throw up your hands in some cases, I think.

RENEE CUMMINGS: Well, I spend so much time working with organizations and individuals to democratize data and democratize this technology. And I always say, it's not about fear. It really is about always being aware. You will always find challenges. This is something that we're developing. This is something that is so powerful and has such extraordinary promise that we've got to be committed to getting it right.

And as much as I want everyone to enjoy the benefits of this technology, I also want to ensure that the right protections are there, those robust and rigorous guardrails are there to hold this technology up and to really ensure that the greatest good comes out of the things that we're doing with new and emerging technologies. So for me, the challenges have got to come. And the more challenges we get in the early stages, the better prepared we are for building something that's going to really deliver what we want. So I think it's an exciting space.

And this is why I say we've got to be prepared to be uncomfortable, but we've got to be authentic, and we have got to be mindful. And we've got to know that without intellectual confrontation, without diversity and inclusion, without respecting equity and amplifying all voices and providing space for underrepresented or unrepresented or under-resourced groups to have a voice, and knowing that there are certain things that we're going to have to protect-- all of this is good. All of this is good trouble that we're creating, because what we want is really something that we can be proud of.

KIMBERLY NEVALA: Yeah, and would it be fair to say also that we need to strive for that perfection and understand that, to do that, we not only need to really focus on getting it right, but be comfortable with getting it wrong and having those conversations and making mistakes, making mistakes not because we didn't think about it, but because we're just never going to get it right, at least not the first time? Some of these things are going to be trial and error. And that's, again, another uncomfortable reality, assuming you agree with that conceptually.

RENEE CUMMINGS: Well, definitely. I mean, I think when it comes to perfection, perfection is really putting the bar really high because we're all imperfect. So we're never going to get the perfect algorithm, and we're never going to get the perfect algorithmic decision-making system, and we're never going to get the perfect AI. Because we as a world, as a people, as individuals are imperfect.

But we've got to appreciate and we've got to understand that we're going to get a lot of things wrong. But we've got to have the courage to say, we got it wrong, and we've got to do it right. And for us to do it right, we may need other individuals involved who may not sound like us, who may not look like us, who may not think like us.

But by having that measure of diversity, what we are creating are checks and balances. What we are creating would be systems for due diligence. What we are building would be broad consciousness. We are raising conscience. We are doing things that would make us eternally vigilant, because that is what we have got to be when it comes to new and emerging technology. We have got to be eternally vigilant. We cannot fall asleep on AI.

KIMBERLY NEVALA: Yeah, eternal vigilance-- I'm writing down all of these key concepts as we go. You've got my mind racing. If we look out now three to five years, what do you hope to see? What will have changed in that time?

RENEE CUMMINGS: What do I hope to see? Well, I hope to see a world that is definitely more efficient. I hope to see systems that are more effective. I hope to see governance systems that are definitely more equitable. I hope to see scientific solutions that are really diverse and inclusive and equitable as well. I hope to see AI creating a better world.

I always say this when I work with data scientists-- because most people ask me, what is a data activist? And for me, a data activist is the conscience of the data scientist. So when I speak to individuals, I always say that an algorithm is not a formula. It's not just a computation-- it is a legacy that you are creating. So what I hope to see in three to five years is a legacy of this technology that embraces a worldview and that really speaks to a society that is better than what we are now.

KIMBERLY NEVALA: And maybe a final thought-- what advice would you give individuals who are looking to engage and contribute to the development of AI and to this discussion writ large?

RENEE CUMMINGS: Definitely, I would say get involved. Get involved. Even if you are in a non-tech field, as I entered from a non-tech space, get involved. Because if we are using a technology to create a new world, a new world is not a world of only technologists. A new world is a world of all of us. So I would say get involved wherever you are. If you're already working in this field, I would say lead from wherever you stand or wherever you sit, and ensure that this legacy that you are creating with this technology is one that is going to build a better future.

KIMBERLY NEVALA: Thank you. I have to say, I leave this conversation with you, as I always do, just so positively energized, and my mind is spinning. So I really want to thank you for, as you have said, inviting yourself to the AI table and joining us here today. We are all so much the better for having you.

RENEE CUMMINGS: Thank you so much for inviting me. It was certainly a pleasure.

KIMBERLY NEVALA: All right, in our next episode, Beena Ammanath joins us to discuss the current state of AI ethics in research and practice. Beena serves as the executive director of Deloitte's Global AI Institute and leads Deloitte's Trustworthy AI practice. Beena is also the founder of the nonprofit Humans for AI. Make sure you don't miss her by subscribing now to Pondering AI.

[MUSIC PLAYING]

Creators and Guests

Kimberly Nevala
Host
Strategic advisor at SAS
Renée Cummings
Guest
AI Ethicist, DEI and Data Activist, Criminologist