Policymaking and Accessibility in AI w/ Yonah Welker

KIMBERLY NEVALA: Welcome to day two of Insights and Intuitions with "Pondering AI." In this short take, Yonah Welker reflects on diversity and accessibility in AI. Welcome back, Yonah.

YONAH WELKER: Thank you so much.

KIMBERLY NEVALA: Now, I think it's fair to say since you and I talked a year or two ago that quite a bit has changed. And certainly over the last year, there has been a huge amount of activity and development: both in the space of regulation and also with the public-facing deployment of things like ChatGPT, generative AI, and so on. Looking at this through the lens of increasing accessibility and diversity, what have been the most important or impactful developments over the last year?

YONAH WELKER: Interesting question.

So first of all, in my view, regulators are too focused on generative AI. With generative AI they explore some risks. But actually, these risks existed even before. And now, for me, there is too much attention directed specifically at large language models.

For instance, recently we worked with the World Health Organization on a report on generative AI in health care. And with UNESCO on generative AI in education. And I'm wondering, why only this? Why don't we talk about social robotics, or the many assistive technologies? There are a lot of different technologies which require some clarity and a feedback loop between policymakers, technology startups, and entrepreneurs.

So, on one hand there is growing attention to AI regulation, but at the same time, it's disproportionate. Once again, I don't want to say that generative AI doesn't pose any kind of risk. But I want to bring attention to the fact that there are different types of risk. And that's why in recent years we were focused on bringing this spectrum of risk categories: unacceptable, high, low, and minimal risk systems. Including how the risk is specific and unique for different groups of people, different cases, different areas. How it's specific for health care, for education, for workplaces.

And one of the biggest achievements was our work with the OECD. They are working on a repository of startups and technologies related to AI for assistance and disability, assistive technologies. So they focused not just on abstract regulation, but on some kind of connection with entrepreneurs and technologists. And a bottom-up, vertical understanding of what's actually happening in this market.

This October they released the report, and the list of technologies included is over 120. And I hope it will help to build a more practical dialogue. Not just talking about how we can maybe restrict some technologies, but actually understanding the nature of bias, the nature of risks. Is it more about social issues? Is it more about a lack of literacy, maybe? Or maybe we need to introduce new roles, new functions, or change how we see technical teams.

So that is what is gradually happening. For instance, currently, there is work presented by UNESCO focused on data and AI literacy, and on AI literacy in education. Once again, it was driven by generative AI risks. But it's good that at least they now understand that the risks are actually presented by a more social dimension, maybe institutional limitations, and maybe something which already existed historically.
It's not specific to generative AI and maybe not even specific to technology. It's specific to classrooms, to certain social issues. And it's good to see statistics and facts presented by the United Nations and UNESCO in actual reports and actual toolkits, to be spread across entrepreneurs, technologists, and developers.

I hope, in the next year, you will see many of these reports, some of which we had the opportunity to cooperate on. And now we are getting closer and closer to an actual feedback loop between policymakers and technologists.

KIMBERLY NEVALA: Can you give us an example of the types of issues that are just not seeing the light of day that should be?

YONAH WELKER: Yes, the main problem is that governments present a list of types of risk but not the sources of the risks. So they talk about misinformation, potential manipulation, the areas where biometric systems are not accurate for particular groups. Or facial recognition systems, which can be biased against some groups. Or, if we introduce generative AI to classrooms, it may give students questions that are too easy, limiting their creative and intellectual development.

But they constantly address all of these risks as an algorithmic issue. And it's not an algorithmic issue. It's really a question of curriculum. It's really a question of how you see these tools. It's similar to a calculator. The calculator helps you with the calculations, but you still need to understand the nature and logic of mathematics to solve some bigger mathematical problems.

So what they still ignore is the nature of a particular historical problem. This includes the underrepresentation of specific groups in workplaces. For people with disabilities, unemployment is up to 70% to 80%. These people are not involved in medical assessments. They are not involved in building some technologies. The public data sets related to smart cities or urban technologies were "gender-blind": they were mostly driven by male data sets because people believed that was a kind of universal thing. So you didn't need to consider different types of citizens as models to build urban data sets.

There are so many kinds of this one-size-fits-all approach that were used, not just in AI but in statistics, in urban design, in different types of workplace and institutional design, all around the world. And until you actually change this approach, you can't change AI.

And the problem is not with AI and algorithms themselves, because the thing I repeat again, and again, and again is: AI brings so many benefits for people with disabilities, people who are marginalized. It has helped to democratize access. It helps to create more adaptive learning and health experiences. So it's really more about how we create the curriculum, how we evolve literacy, how we introduce new types of roles or functions in technical teams, how we introduce bioethicists or social scientists who help with this knowledge.

For instance, when we work on AI and robotics for autism, we have people on the team who are sometimes focused on supporting parents. So, providing knowledge to families and parents on how to introduce this technology in a proper way. Or, if it's more about women's health, we have professionals focused on gender studies, and so on.

There are so many broader understandings of how solutions should be implemented to serve humanity. And I hope governments and technologists will use them in a proper way.

KIMBERLY NEVALA: So if you had the magic wand and could ask people to really focus on one thing - which is a completely unfair question - what would you like most to see happen explicitly over the next six months to a year?

YONAH WELKER: I think it can't be one thing. That's why I'm constantly saying that my work goes in three directions: technology, policy and adoption, and cultural awareness. So we're building portfolios and we're building curriculum frameworks. And we create exhibitions, summits, and festivals to demonstrate it to humanity.

But if we could take only one thing, I would focus on what I call assistive pretext. Assistive pretext is a design philosophy that helps us understand that when we create something, say a technology or policy for people with disabilities, this innovation can serve everyone. So, let's say SMS. Initially, SMS was created for people with a hearing impairment. But after that, it became a part of every mobile phone and smartphone. There are many things which we create for people with autism, mental health disorders, or special education. But after that, these algorithms can benefit everyone. And sometimes, we create a policy or ethics framework. But the knowledge which we use for such frameworks can benefit everyone because it helps us to build a more diverse understanding of regulation, of stakeholder involvement.

So that would be the main thing I would love to happen: that people understand the power of this intersectional impact. It could be through more complex stakeholder involvement. Or a broader portfolio of startups. Or maybe an exhibition which helps everyone, every stakeholder, to understand the power of these emerging technologies and policies serving everyone. That's why, for instance, last year we organized a summit in Saudi Arabia and brought together many governments and technologists from around the world, including Europe, the United States, and Asia Pacific, exactly for this purpose. We had TEDx-style talks with entrepreneurs. We had museums. And people talked in different languages. But basically we were all addressing the same problem: what we can do now, what we can do next, and what type of red lines we would love not to cross to keep AI safe and human-centric. So that's exactly what I'd like to see globally.

KIMBERLY NEVALA: Fantastic points there, Yonah, and thank you again for your time this morning. This was fantastic. Really, really appreciate it. It's fun to have you back.

YONAH WELKER: Thank you so much. It was a pleasure for me.

KIMBERLY NEVALA: 12 Days of Pondering AI continues tomorrow. Subscribe now for more insights into what's happening now and what to expect next in the ever-fascinating world of AI.

Creators and Guests

Kimberly Nevala
Host, Strategic advisor at SAS

Yonah Welker
Guest, Explorer, Board Member EU Commission projects, Yonah.org