No Community Left Behind with Paula Helm
KIMBERLY NEVALA: Welcome to Pondering AI. I'm your host, Kimberly Nevala.
In this episode, it is a pleasure to welcome Paula Helm. Paula is the Associate Professor of Empirical Ethics and Data Science at the University of Amsterdam. I originally found her work through a very interesting paper on synthetic data. And then was even more delighted to find her work on minority or hidden languages in the digital realm. So today, we're going to touch on both of those topics.
So thank you, Paula, for joining us and for being game to allow me to talk about both of these things.
PAULA HELM: Yeah, very happy. It's my pleasure.
KIMBERLY NEVALA: [LAUGHS] Now, you have an interest in empirical ethics, right? Which is an applied ethics within the context, I assume, of science. How did that interest in both ethics and science, and then subsequently in NLP and linguistics, come about?
PAULA HELM: Yeah. Well, first I would say that I understand ethics also as the scientific inquiry into human values. So I would not make a distinction between ethics and science. But of course, traditionally, ethics has been very theory-focused and very text-focused. And I initially was trained in more empirical traditions of anthropology, and also peace and conflict studies.
So I'm also always asked the question, why should we care? Why does that matter? So what? So sometimes we have these super interesting empirical findings, but then still the question remains, so what? What follows from that? Or why are we actually doing this research? Why are we interested in this topic? Why do we think it's important?
And we need to bring these two together and to study human values empirically. And how emerging technologies, how scientific practice, interfere with human values. How they bring about value change, how we also deal with value conflicts that arise in that context. And to not just do that theoretically, but also to actually look into what different groups of people think is important. How do they understand certain values in different parts of the world, but also maybe within different groups in the same society?
And what would that then mean to try to advance certain values through embedding them into technical design? How does that work? How do we need to also take into account technical limitations?
So these are all questions that, in my view, belong to the field of empirical ethics.
KIMBERLY NEVALA: And how did you then connect or get to specifically the study of linguistics or language? Was it a happy circumstance, or have you always been interested in how we use words to convey or create meaning?
PAULA HELM: Both. [LAUGHS] So in a way, yeah.
I mean, language has always been part of my research since in anthropology, of course, language is a big part of studying culture. So, for example, in the past I studied how people in addiction recovery use language to performatively try to enact their recovery by saying, for instance, "I am an alcoholic."
And then how did that bring me to languages and AI, minority languages? That was a bit of a lucky coincidence. I was working on diversity and AI in a big project together with many people from different disciplines. And I was writing on how AI affects diversity. But not just in terms of superficial diversity, like the number of women in tech or the number of female pictures in training data. But also diversity on a more subtle level that's not so easy to quantify; such as skills, psychological traits, characteristics of people, diversity in how we come to know the world, how we interpret situations. So all of this also belongs to diversity.
Often when we think about diversity, when we talk about diversity, we do that in these very standard categories. But it's actually much more. And then it gets very complicated to study and measure it and to promote it. So we had these discussions. And then my colleagues said, well, Paula, could we maybe have a call? Because we're actually onto something and we would really like to hear your opinion. These were my technical partners, and they wanted my opinion in terms of ethics.
And they were presenting me with their studies on the underrepresentation of languages in AI. The performance of large language models is increasing at a really impressive rate - I'm very impressed, and I think it's wonderful. But then when you look very closely, there are these culture-specific terms. We all know them. I think every language has them. And so even though these models at first sight are very impressive in learning new languages with little data exposure, these culture-specific terms are still often a problem. And so this is also a very subtle level of diversity that's impacted there. And that fit very much with my argument around these sorts of underlayers of diversity.
So they wanted to know from me what I think about that and whether I thought it was an ethical issue. Whether I thought it mattered in terms of human values. And I said, yes, I think it makes a huge difference with regard to what we call epistemic justice. So justice that is not just about the distribution of resources but about how other people value us as practitioners and bearers of knowledge.
Sort of, for example, testimonial justice. Does someone take me seriously if I testify on something? Or hermeneutic justice or injustice. Do people take my interpretation of a situation seriously? And these matters are also very much influenced by cultural differences, right? Because where we come from, where we interpret the world from, determines how we might interpret a situation, how we might interpret emotions. And so we call that in philosophical discourse epistemic justice. It's not so widely known, but I think very important and very much affected by AI.
KIMBERLY NEVALA: I know you did some work with some Indigenous populations. I believe it was in the Amazon, but you'll correct me if I've got that wrong. Talk to us a little bit about the work that you did there, and what you think that we need to, or what we can, learn about really just, I guess, the nature of language. And whether our perhaps simplistic hopes about being able to just discretely and directly translate things really hold up.
PAULA HELM: Yeah. So maybe as a little qualification, I'm not a linguist.
So [LAUGHS] my focus is less on actually the language but it's more on the speakers and their communities. And the relation between technology developers and speaker communities. So this is my focus. I collaborate with linguists. I collaborate with technology developers. But I think this very specific focus on the speakers themselves is something that has not received the attention it deserves, so that's my intervention in a way in these collaborative settings.
But of course, I also learn a lot about language. So for one, I think it's important to also highlight that the translation fallacy applies to every translation. When we translate English into German, two languages that are very powerful in terms of political and economic backing, in terms of technical resources, still we have these sorts of terms that are not so easy or straightforward to translate.
But when we now turn to Indigenous languages, there are other factors that come in that we need to reckon with. These are traditionally oral-based languages, so the way they are passed down is different historically. There's also a history of colonial violence. So these communities have fought very hard for their autonomy. So buying into technical infrastructures that are very much owned and dominated by Western powers is, for them, a very tricky decision to take. Because it means losing some, giving up some, of the autonomy that they fought so hard for in order to become part of the global technical infrastructures, the communication infrastructures. Which also enable partaking in the global economy, which are now so important, right? And almost sort of inevitable. And then also making sacrifices to younger generations.
So there are a lot of tricky questions to be asked about these matters. And they all, in a way, boil down to the issue of power asymmetries, power inequality. Which might be less pronounced when you compare English with Mandarin - two AI superpowers - than when we pair, for example, the Sateré-Mawé, a community of at most 10,000 speakers left, living in the Amazon rainforest and trading in açaí - which is more or less all they have to offer in trade - with English, which is such a superpower language. So we have intense power asymmetries. We have data asymmetries. All that needs to be taken into account.
And then also, there are, of course, huge cultural differences. So, for instance, when we were there in the Amazon rainforest, we witnessed the severest drought that has ever happened to the region. Which is very worrying when you consider that it's the biggest freshwater reservoir in the world. And that affects the livelihood of people there immensely, because, well, what streets are for us, the rivers are for them, right? So their transportation pathways were in a way gone. Or they had to share them with alligators, which came so close because there was so little water. So these very on-the-spot problems.
So what we decided to do was to develop a magazine where we tried to translate Western, very much technology-based approaches to water preservation and water monitoring with traditional Indigenous practices. And it quickly became clear that there were almost more terms that were not translatable than were, you know? Like, we had to discuss almost every single term, because for the Indigenous practices and the tools that they use there's no representation in English or Portuguese. And the other way around.
So that made it very clear how translation is actually a process of creating cultural understanding. And yeah, something where you discuss, you get to know each other. You negotiate also within your own group what you actually mean by something. And doing that made it very clear to me how powerful this process also is.
So if you are a big tech company, you want to bring your technology to people around the world. You want to make it accessible for them. You want them to be able to express themselves in their language on your platform. But if you then don't do that - if you don't involve the speakers themselves - you really risk sort of going over their epistemic self-determination: their capacity to determine for themselves how they want to express themselves, how they want to know the world, and how they want to make themselves understandable to others.
KIMBERLY NEVALA: And that sort of flattening or that normalization of concepts and terms and language, I think we probably see that in a lot of different areas. But it strikes me, based on what you said earlier about the richness of this particular oral tradition, that there's so much other cultural heritage - and probably norms and, see, now what's the word I'm looking for, traditions - that is caught up in how those stories are told and which versions of things they tell.
And on the other side, in some of our current approaches we tend to hear things like, well, isn't this going to be great? Because LLMs are great. They have all the knowledge in the world in them. But this is a really distinct and pointed example, I think, of why, in simply trying to capture words, or even capture language in this way, there's a risk that we're not only failing to represent the knowledge or the information of that culture and of that speaker community, but may actually be fundamentally changing it. Correct?
PAULA HELM: Yeah. So there's this term, semantic drift, which is not about a radical change. But it's about a slight drifting towards maybe a kind of Anglo-centrization of the way we use language. Of how we communicate, how we create understanding amongst ourselves.
So what you see, I would say, is more of a gradual process. Which makes it also very challenging to study empirically. Some people use very strong words; they call it a kind of neocolonial process or epistemic colonization. It depends on whether you want to go in this direction of comparing with these kinds of historical processes and want to draw a parallel there. Or whether you rather want to just call it epistemic injustice, or a matter of epistemic self-determination.
I think it's all about, well, it's about asking for permission. So the Māori, for instance, raised their voice - they addressed mostly Whisper AI - and said well, you are just using our data without asking us. And we actually don't want that. And then they developed a license for Indigenous language data.
Because what I think is also very important in this AI ethics debate is to realize that the way in which, for example, companies in Europe but also in the US give themselves ethics principles, or the EU is trying to set ethical standards for AI, is very much focused around Western conceptions of what is good and right, of what we value.
And what this case with the Māori, I think, makes very clear is that, for example, our understanding of privacy is very Western. Insofar as it's very much an understanding of privacy that's related to the individual. Like, I'm an EU citizen, so my personal data is protected under the GDPR.
But language data, [LAUGHS] well, what is that? Who does it belong to? It's not personal data, so it's not protected in the same way. It's considered to be sort of a public good. But that also means that powerful actors can grab it and use it for their advantage.
And Indigenous communities might see this very differently. They might actually say, well, language data - because for us telling stories is a very, very important component of our culture, of our identity - it needs to be protected. I think it's a very good case to show the Eurocentrism or Western-centrism of the AI ethics values that are usually being discussed.
KIMBERLY NEVALA: Mm-hmm. And when we first met you said something interesting. There's a very distinct disparity, I think, or difference in approach or philosophy that came into really stark contrast when you said that Meta's approach comes in under this sort of theme of No Language Left Behind. And you said this is really, really different and would have a very different sort of feel and implications if, in fact, the tagline was No Community Left Behind. And so I think you've touched on this a little bit, but why is that differentiation so important?
PAULA HELM: So yeah, it's a bit of my mantra, right? From No Language Left Behind to No Community Left Behind.
I mean, also, it very much depends on, how do you see the data subjects whose data you want or about whom you want to produce data? Do you see them just as data producers? Do you see them as future technology users? Or do you see them as co-shapers of AI?
So of course now there's this big AI hype. But AI has a long history as an academic discipline. It goes back to Cold War histories and has been shaped very much by Western imperial interests. So you can trace that really back to the Cold War.
And so what we see now, the big breakthroughs, they did not come out of the blue. There's a whole infrastructure behind it. There are big AI conferences. And again, what you see there - for instance, if you want to publish in one of the top AI conferences - it's all English. When you publish something, you try to beat a benchmark. And if you do that on a small language you might have no chance of getting into the big conferences.
So also, what does AI talent strive for? Younger people want to become famous in the field, to make advancements in the field. They also try to, of course, survive within the field, so they play along with these rules. And these are all rules very much shaped by a very Anglocentric history.
The point about No Community Left Behind is also: do you just want to integrate communities into your own system - where you don't really want to change the system, you just want to expand it? Or are you willing, if you want to expand the system, to not just keep it as it is, but to take the viewpoints of more people seriously and have them co-shape what you're working on? That's a very different approach.
And, of course, it takes much more time and energy and local investment. And the current AI field is very much concentrated around scale, especially since the breakthrough of transformer models. Of course, this is data-centric, data-driven AI, and it thrives on masses of data and huge compute. So it's in a way power-centralizing in its very structure. And that's a bit at odds with this idea of taking local communities seriously and co-shaping the field.
KIMBERLY NEVALA: And is there also an implicit assumption that in order for those smaller communities to engage in the global economy they almost must, by default, use some system like this that will actually translate? That serves as a translator or a mediator between them and the world in these other languages. I'm articulating this badly, but I speak English. And if I'm going to get on something and talk to you, or I'm going to use something like a Copilot and I want that in my native language, it's not clear to me, based on what you're saying, that we're really enabling that kind of engagement.
PAULA HELM: In a way it's, of course, also very wonderful. I mean, my research is not about criticizing the idea of trade languages, for example. I benefit from that myself. I'm German, I work in the Netherlands, and I teach in English, and I write in English. I also speak Portuguese and Spanish, languages that are very much alike. So I think it's wonderful.
For example, I'm teaching an English-speaking program with 130 students from all over the world and it's just amazing. Like, I learn so much from them. They learn so much from each other. Both in terms of the contents of our program but also because it's just very fascinating to have all these people from different parts of the world in a room.
Also, it’s such a picture of hope to have all this young talent who want to go out and want to change the world and want to bring ethics into AI. And some are from Lebanon, others from Singapore, some are from Taiwan, others from-- I had an Irish student who wrote her thesis on Irish also as, in a way, an Indigenous European language. And that's only possible because we all speak English. [LAUGHS]
But that's also what fascinates me so much about this field of empirical AI ethics; that there's no easy answer. And there's also no right or wrong in a very clear manner. But it's always a balancing. It's about seeing the sort of ups and downs. It's about trying to find middle ways. If it were so easy, [LAUGHS] you wouldn't be sitting here, and I would not have a job.
It's also not just about minority languages, and that's very important. So I do research with Indigenous peoples; it's very fascinating and I think also very important that these languages and cultures do not die out. Because I think that the world needs the wisdom that they have as they might help us also find other ways that might be more friendly to the climate, for instance.
But it's also about languages spoken by many people in the world which are highly underrepresented in AI. So it's not just these small Indigenous languages, but, for example, languages like Hausa, spoken in Africa, or Pashto. And these are languages spoken by many, many people - refugees who come to Europe. And they might not speak English or French, and then there's nobody at the border speaking their languages. And they need to use AI translation tools as well.
And again, we have the problem that, well, these tools already perform quite impressively. But maybe not accurately enough to be completely relied on in situations where you make life or death decisions about people's fate, right? Someone might have a legitimate reason to claim asylum, and we have had cases where very small mistakes in translation could distort the narrative of the person and lead to a legitimate asylum request being denied. And that is something that cannot happen because, I mean, that's not just epistemic injustice. That could cost the person their life.
So these are also very serious issues in that debate that are not just about minority languages, but that are, again, about power asymmetries. And about how the AI community wants to go about using AI in ways that are responsible - and not just 'I create an image of my cat because it's fun,' or I use an AI translation machine, a translation app, to buy ice cream on my summer holidays. Then it's not so high stakes.
KIMBERLY NEVALA: Chocolate. Vanilla. You'll be all right. Yeah.
PAULA HELM: [LAUGHS] But if AI is to be used in high-stakes contexts such as immigration, the accuracy needs to be on a different level.
KIMBERLY NEVALA: Do you think that our current approaches are capable of getting us to the required level? Particularly in these kinds of high-stakes situations, which are also high stress. Which means that how you would maybe articulate and speak normally could be very discombobulated as well, just by virtue of the situation. And I can imagine that applies not just to language, but also when you're looking at bodily cues or trying to assess body language and these sorts of bits. And I can see lots of ways that this would go really, really wrong.
And I don't know whether we're heading in a direction that would allow us to do that. Or if this really requires us to be able to apply a little humility as well and recognize what these things should and should not be used for. Like, is this a circumstance where perhaps this is just not the right piece?
I mean, someone, I guess, would then argue and say, well - if you were translating, Paula, you could get it wrong as well - but I think that's a fairly thin line. Because you are more likely, I think, to recognize, just based on the context and the content and the interaction itself, the nature of the interaction itself, that a mistake is being made.
PAULA HELM: Of course, humans fail as well, but they are also called interpreters, right? For a reason. So it's interpretation. That brings me back to this translation fallacy that it's, in a way, always interpretation.
But yeah, I mean, you're very right. If you have a legitimate reason to claim asylum then usually something really bad happened to you or will potentially happen to you. And that usually means that you're talking about something very heavy. And what do humans do when we talk about something horrible that happened to us? We often speak in metaphors, right? Or culturally specific ways of circumscribing a situation. So that also needs to be taken into account. This is why some people also like to talk about trauma-sensitive translation.
But I mean, AI consumes a lot of energy. [LAUGHS] Energy consumption is a huge topic, of course. So if the energy consumption should be justified, then it is especially the high-stakes situations where AI could be useful, right? I mean, immigration is, of course, an area where translation is a big issue - a lack of translators - and where it could potentially be very helpful to have reliable tools.
So who's investing in building these tools? It's very expensive. So maybe for big corporations, there's not enough economic incentive. And maybe then there's also not the political will. So who's doing it? That's, of course, I mean, a very realistic question in that context as well.
KIMBERLY NEVALA: Yeah. And to the best of my knowledge, there's not-- I think folks then start to look at, is this the purview of governments or of community groups and things like that? But because of the way the core infrastructure's been set up, there's a very high barrier to entry. Unless we start to think about how we package these in a way that somebody could actually use it and use it differently, right? And apply it differently, without requiring all of those things to come with it.
PAULA HELM: Yeah. But then of course, you also need to have the kind of regulation and safeguards in place on a very high level if you want to use these systems in high-stakes areas. Talking about benchmarks, for instance: to my knowledge, to this date, they're not differentiated enough yet to guarantee that AI systems are safe to use in certain contexts. So there's also a lot of work needing to be done in this area.
KIMBERLY NEVALA: Is there anything else that we haven't touched on relative to this work and your work with minority languages that you think is just important for folks to know that they might not be aware of?
PAULA HELM: Well, I think we made clear that it's not just about the languages, but about the speakers, right? That it's not just about performance that's impressive at first sight; we have to look more closely. And that it's also about justice and cultural diversity and not just about expansion. I think these are the three main points that are very important for me to highlight.
KIMBERLY NEVALA: I had originally found you through this paper on synthetic data, because this is clearly all the rage at the moment. And in a lot of cases, with good reason.
But I thought you raise a really interesting point of discussion, because a lot of the time when we are talking about the value and the opportunity with synthetic data it is couched in the context of allowing us to resolve specific errors or omissions or gaps in the data. And, in fact, as a way to address things like ethical issues and other concerns that arise from things like bias or a lack of representation in the data, et cetera, et cetera. And so in some ways, it's seen as almost a panacea to address these ethical problems.
Your paper really puts out a bit of a different hypothesis in that it suggests that the indiscriminate use of this type of manufactured data might, in fact, have the opposite effect. It might actually, I think you called it, create a manufactured divide between ethical scrutiny and critical reflection. So can you tell us a little bit what it was that got you folks thinking about this question in this way and what the particular concern or observation is that led to this hypothesis as well?
PAULA HELM: Mm-hmm. Yeah. So I've been working on several projects where we were interested in or concerned with technical innovation. Usually, I am working on the ethical risks, ethical concerns.
And often, a whole set of ethical concerns were supposed to be solved by using synthetic data. Again, privacy, of course, is a big topic here. And again, such as with language technology, I do totally see the potential of synthetic data. I think it's also a fascinating idea.
But again, privacy is about much more than just personal data. Privacy is also a fundamental cornerstone of democracy. And often, when people are concerned about privacy, it's not just about them individually, right? Like activists - if you think back to Snowden even - then what people are concerned about is power.
And many of these topics, these issues that are being raised and debated in terms of AI ethics, are actually questions around power asymmetries. So when big tech companies try to solve them through technical solutions, that again in a way sidesteps the communities that raised these ethical concerns. That runs the risk that these communities don't actually feel taken seriously, but just sidestepped, right?
Because, for instance, what many people who talk about privacy vis-a-vis big data are concerned about is how classification is being done, how predictions are being made, and how that is happening in a very nontransparent way. And how the power around these classifications and predictions lies in the hands of private actors. And that's a concern that synthetic data, in the way it's being used to solve privacy issues, cannot solve.
So that is one example, I think, where it becomes very clear what we try to argue there: that synthetic data are an amazing solution to deal with freak events. That's why people initially came up with these ideas. They might to some extent also help to mitigate some ethical problems with regard to bias and privacy. But they cannot - they are not fit to - confront the actual underlying problems related to power asymmetry. And yeah, so that was our concern there.
Also, another concern, another point that was very important for us to highlight was the point that, OK, synthetic data are clearly manufactured. That is obvious. But actually, when we think about it more closely, all data is manufactured. So I use this comparison with nuts that just lie around on the street and you can collect them. When actors within the big data field talk about collecting data, it sounds like they're just picking it up from the street. But of course, data is always being produced. The platforms that we use, social media - they're all designed around us producing as much and as valuable data as possible. So the kind of data that's being produced is also determined by that.
So all data is manufactured and produced, not just synthetic data. So we also wanted to make clear that we don't forget that manufacturing is always an issue with data. And that it's not the case that one is the real data that we collect from the street and the other is the data that's being developed in the lab - where we only have to ethically attend to the real data, while what's happening in the lab is clean and outside of ethical scrutiny.
KIMBERLY NEVALA: And I have to just throw out there that you referred to that more traditional form of data as users' behavioral surplus. Which I thought was such a pointed and yet oddly charming way to put it, and provides some context, I think, to think a little bit differently.
So you mentioned there that there's a tendency then - make sure I understand this correctly - with synthetic data, particularly if we're trying to, for instance, debias a data set. Or we think a particular population or demographic segment isn't represented, hasn't been captured, hasn't participated intentionally or unintentionally in the development of the original data nuts. That by virtue of then using synthetic data, there's some sense that because we've synthesized it, and because we're, in theory, doing this in a very deliberate way, then clearly bias can't be an issue. And therefore we don't have to worry as much about the outcomes or the things that may happen from a lack of diversity or because folks are not represented in the data.
Do I understand that correctly; that it just sort of fundamentally distorts our sense of harm that could occur?
PAULA HELM: Yeah. I would totally agree with how you just formulated it.
KIMBERLY NEVALA: And then another interesting point - and we won't linger on this. But we were coming, or appeared to be coming, to a point where there was only so much data that could be picked up. And so that might actually serve as a little bit of a natural constraint on the move to basically datafy everything and to really drive everything not only from singular platforms, but on the basis of information that is inferred or predicted about you.
So does synthetic data then also, in the eyes of those who wield it, provide an opportunity to double down on this approach and on the thought that all behavior, all engagement, all these things can be modeled or can be captured? Because if we can't actually capture it digitally by some version of direct measurement of capture, we can always just synthesize it.
PAULA HELM: Yeah. I mean, there are different ways of responding to that.
There are, of course, these sorts of cultural-technical pessimists who argue against it: do we really have to quantify everything? Maybe there's also something that we don't want to quantify. It's a bit of a tricky line of argumentation, I would say, because it can quickly go into a kind of purist, [LAUGHS] romantic back-to-nature direction.
I'm more, I think, on the side of: what we actually need to talk about is power. [LAUGHS] And I would say that what's really important is to take a pause and look at what current AI development, in all the biases it manifests, is mirroring back towards us, about us. We can actually learn a lot from AI if we take a step back from this sort of solutionist approach to what's really first an analytical one.
Like, the biases that we see in these systems are a reflection of our society and of the social inequalities that are present in our world. And those are also very interesting to take a closer look at and to learn from. And then maybe to change, or to think about: do we like what we see when we look into the mirror? Of course, the mirror metaphor also has its limitations, in that it's not just one mirror - but maybe that's the point about it, right?
So that would be, I think, the point that I find very important there, rather than this argumentation that maybe we need to have these spheres of life that should be outside of datafication. I would not go as far as telling people, for example, who have a serious disease and where AI could help them to find a good cure, no, no, that area is beyond datafication. That would be a very top-down, moralistic approach to ethics that I don't subscribe to.
KIMBERLY NEVALA: Mm-hmm. Although would you subscribe to a theory that says if somebody provides a data product, or an interface that purports to be able to analyze and provide that advice, there is a level of accountability for doing that responsibly and accurately that comes with it? And that's not always embedded in how these services and products are delivered today.
PAULA HELM: Yeah, of course. I mean, at the moment it's the wild, wild West. And I mean, generally, if it's done well, regulation should help to protect consumers, protect citizens, to some extent equalize asymmetries within societies that are unjust. And that's needed, for sure. No question about that.
KIMBERLY NEVALA: Excellent. All right. Well, you have been very generous with your time and your insights. As I said, I started down the path of reading your work, and then I couldn't stop. So any last words or thoughts you'd like to leave with the audience?
PAULA HELM: That's very good timing, because my parents - not my parents, I'm a parent - my children just came home.
KIMBERLY NEVALA: [LAUGHS] Excellent.
PAULA HELM: So that's an area where personally, privately, I say I don't want my children on this video. But I would not say that for everyone - it's also up to others how they want to handle it. [LAUGHS]
KIMBERLY NEVALA: Yeah. I think that gets back to that point of power and agency and who has the sort of decision making - not only authority, but control. And we will make sure that no child is represented in the background. So thank you so much. I think with that, I will thank you for your time and your insights.
PAULA HELM: Thank you for reading the papers and being so well prepared. It's very nice.
KIMBERLY NEVALA: Oh, absolutely. And hopefully, we'll get a chance to circle back in the future and see where this has all played out. So thank you so much.
PAULA HELM: Yeah. Thank you.
[MUSIC PLAYING]
KIMBERLY NEVALA: All right. So to continue learning from thinkers, doers, and advocates like Paula, you can subscribe to Pondering AI. You'll find us wherever you listen to podcasts, and also on YouTube.
