The Path to Zero Exclusion with Yonah Welker

Yonah Welker shares their unique path to technology, exposes the limits of inclusion, shows why digital safety and comfort aren’t synonymous, and challenges us to collaborate broadly and embrace our role as digital citizens.

[MUSIC PLAYING]

KIMBERLY NEVALA: Welcome to Pondering AI. My name is Kimberly Nevala. I'm a strategic advisor at SAS, and I am so excited to be bringing you our second season. We are talking to a diverse group of researchers, policymakers, advocates, and doers working to ensure AI solutions put people and our environment first.

Today I'm so excited to be joined by Yonah Welker. Yonah has been working at the intersection of tech and society since 2005, when they launched a hardware think tank. They think differently and are going to share their perspectives on achieving zero exclusion and embracing diversity in all of its forms. Thank you for being here, Yonah.

YONAH WELKER: Thank you so much. It's my pleasure.

KIMBERLY NEVALA: So, tell us a little bit about your journey to becoming a tech explorer and why you are so passionate about building a better tech future for all.

YONAH WELKER: Yes, it's a very interesting question. And definitely, my journey was complicated and comprehensive at the same time. Because I became a journalist very early, as a teenager, as a way to make a living. But at the same time, it was a way to embrace and realize my passion for technology and exploration. And actually, I didn't just cover technology. I found emerging technologies, researchers, startups. So exactly what media companies are doing today - like Forbes spotlighting emerging entrepreneurs - that is what I was really passionate about.

But at the same time, since I had a disability, it was the only way to explore the world while being isolated. And since I had pretty severe communication, emotional, and physical issues, it helped me to really break through all of the barriers and do so many things. And I would say, since the moment I started to be an entrepreneur and technology explorer, I really learned so many things I never could actually do before.

It was completely impossible before. So it was like a universal school of life and of different skills. And it helped me to explore many technology and non-technology things, to explore people and society. And since that moment, I really became much more comprehensive. On one hand, I continued my journey covering and exploring technology, but at the same time creating my own startups.

And then at some point, I became really passionate about using AI, robots, and machine learning in order to change the future of education, health care, and civic technology. Because during the period I lived in a very isolated way, I realized how much cool stuff we could create, from remote learning to adaptive learning to different types of assistive technology. Everything is possible. One of the first things we worked on, for instance, was related to Myspace in 2009.

And I was so passionate about more adaptive content for people. And I started to think, OK, we would do some adaptive experiences or personalized content just to increase engagement. But actually, we can do so many more things with such technology in order to adapt it for visual or hearing impairment, or particular cognitive spectrums. There are so many cases where we're able to analyze people's behavior or specific health patterns, make a particular conclusion, and actually help and expand the borders of classrooms, workplaces, and so on.

And in this way, I actually do so many things today. And my mission actually includes several levels. On one hand, it's more focused on technology itself. And I work with emerging companies and startups focused on the future of AI and machine learning for assistive technology in fields like autism, neurodiversity, disability, and dyslexia.

At the same time, I serve ecosystems which help to empower technology transfer and early research in this field, specifically the European Commission, where I evaluate different projects focused on smarter cities, health care, and the future of learning. At the same time, I'm trying to work on policies and ethics suggestions so technology is actually adopted in a more efficient way in schools, classrooms, and workplaces, without harming health, well-being, safety, or mental health. Sometimes it's also about sexual harassment; there are many cases we should take into account.

And at the same time, I try to put all of this into some kind of a bottom-up movement. For instance, I serve on the London Tech Advocates disability working group. We collaborate with MIT and with Women in AI on hackathons and on community initiatives to involve more people in this work: people who are underrepresented, who probably have no opportunity to be a part of the corporate world, but would love to work on such technologies and to take part as developers, researchers, or ethics professionals. There are so many people who would love to contribute to these problems. And we would love for all those voices to be heard.

Specifically, today many problems of bias are actually connected to the fact that we ignore such aspects as social groups and income. And when we have more people, we're able to fix it and deliver better, specifically in such sensitive niches as assistive or neurodiversity technology.

KIMBERLY NEVALA: You have such a unique seat at the table and bring such an expansive and engaged perspective to the conversation. It's interesting when you talk about your early days and entrée as a journalist and not feeling engaged or being able to be engaged. I think this is such an important perspective.

This topic of-- and you're talking about including a lot of populations or personas or viewpoints that are not necessarily well-represented, or represented at all, at the technology table today. And certainly, inclusion is a hot topic. It's being discussed in the press. It's being discussed in public forums, in boardrooms, and beyond. But you really argue that inclusion is not enough. And it may, in fact, continue to center power or agency in the hands of those that already have power or agency. So, can you talk to us about why or how zero exclusion differs from inclusion and why that's so important?

YONAH WELKER: Yes. First of all, when I started to think about inclusion in terms of neurodiversity and disability, I had many discussions with my peers. And one of my peers and good fellows, her name is Yip (Thy-Diep ‘Yip’ Ta), was born in Vietnam. And currently she drives one of the biggest decentralization funds and projects in the world.

And she mentioned that inclusion is more about reverse racism, not actually about fixing anything. So many members of our community who are Asian actually hate all of these reforms. After that, I talked with people who were focused on LGBT inclusion. And they said no, it doesn't work.

And I started to think about inclusion in terms of neurodiversity and disability. And I realized there are actually many problems. First of all, we don't ask the more important question: why do we exclude someone in the first place? For instance, when we work on women in AI, we focus on the inclusion of women in the AI field. We talk not just about the creation of something for women, but by women. So we demonstrate how women's minds can actually create a bit different stuff. For instance, the winners of one of our recent hackathons were focused on autoimmune disorders, mental health, and many other topics which typically are not covered. So it's not just about the exclusion of a gender. It's the exclusion of a particular type of thinking. And we should ask about that first.

And after that, I started to think: how is zero exclusion different from inclusion? So I wrote a list of four or five elements which are pretty different. First of all, when we talk about inclusion, we always ask what is primary and what is secondary. So someone would include us, like a global south and a global north.
But it's not correct. Because, for instance, 70% of the winners of our hackathons come from emerging countries. So if you treat them as some kind of secondary world, it's not correct. We are partners. We work together. If it's supposedly about equality and equity, and you include us in your better world, it's a kind of vertical, a kind of another slavery. You just put us, the supposedly not very developed people, into the next level of segregation. It's not about equality or equity.

Second, it's again about some kind of permission. We provide you with permission to be included. So we create an agency of permission to be included, and we create groups which should be included. And after that, we don't talk about accountability, about why we excluded someone before. And probably we will exclude someone in the future. Because this pretty superficial agency of inclusion we create probably helps someone, but probably someone else didn't actually get any help, because they were beyond our agenda.

So I believe that in the long-term perspective, the inclusion concept doesn't fix the crucial thing: equity and equality for everyone, and an actually flat, one-level communication exchange that contributes to gender studies, women's studies, and cultural studies, helps the community thrive, and, more importantly, uses these insights for technology and scientific progress and so on. Because one of the biggest challenges we face today is the disconnection between social science and technology.
And it's not about social science being included in technology. It doesn't work that way. They actually work in an equal way, like a combination. So once again, we mostly try to create ecosystems, portfolios, and policies that use all of the elements, not as a kind of vertical with a secondary level which should be included, but as equally important elements, no matter whether it's about research, individuals, or groups.

And that's why I typically say zero exclusion for everyone, especially marginalized groups, and zero tolerance for any kind of violation of our rules, laws, or ethics against these people. So yes, it's the main logic we try to follow, specifically towards children, women, and the most vulnerable groups.

KIMBERLY NEVALA: Yeah, it's a really fundamental shift, though. It seems so simple in language. But it's the difference between requiring an invite - an invite-only event - and having an open-door ecosystem that everyone feels welcome to enter. Now you just mentioned the need for multidisciplinary or diverse teams. And clearly when we discuss the idea of responsible and non-exclusive technology design, this is a key best practice. So, when we consider knowledge transfer through that ethical and diverse lens, what are the non-traditional disciplines and perspectives that are required?

YONAH WELKER: Yes, it's a very good question. I'm actually working on suggestions for the European Commission and other ecosystems in terms of how we could empower and reshape technology evaluation, design thinking, the approach to ethical criteria, and so on. And currently, there are several main gaps. First of all, it's the understanding of social quartiles, income, social gaps, marginalized communities, and gender.

For instance, when we talk about autism, there's a significant difference in how we explore and actually treat autism between girls and boys, which is why many girls are never diagnosed correctly. Because we use the same medical profiles and patterns, but they are actually different. And the same, for instance, with income: typically, when we do user research, we rely on the middle class.

So most marginalized communities are never part of user research for AI, data, data science tools, and so on. We just ignore them. Not because we are biased, but because we just forget about them. So in most cases, we need people who are actually able to bring this expertise to the board: gender studies, an understanding of particular cultural and economic groups in a particular city and a particular country. And specifically, if it's a cross-country or cross-cultural project, it's extremely important.

And finally, it's about human rights, legal expertise, and bioethics. Because currently, we have a trend of implementing ethics professionals in residence. But as you see, if corporations don't like them, they just fire them if they go too far in exploring some troubles. So I believe that until we actually reshape the ecosystem so that everyone, designers and developers, actually talks the same language, understands human rights and the implications of bias and of issues in training data, and actively discusses it during meetups or stand-ups and in the product development process, which is what we try to facilitate, it really doesn't work.

So it's actually about an accessible moral vocabulary and an understanding of human rights. And probably a person with a bit more expertise in this field to facilitate the process a bit more proactively. So I think there are three key levels.

KIMBERLY NEVALA: Yeah, this is interesting. Because as you know, it's fairly easy to ask for input, and much more difficult to take that input in when it is contrary to your experience or your objective, or seems to raise the level of risk. So as we're bringing in those different professionals-- social scientists, psychologists, ethicists, just to name a few-- what are those discrete steps-- and you may have just mentioned a few of them-- that we can take to make sure they're not just being asked for input that then goes unused, but that we're making these contributors an integral part of the team and the decision-making process?

YONAH WELKER: Yes, first of all, I don't really like the concept of boards, where someone just serves on a board. I've actively served as a board member for startups and companies, where I dedicate several hours per month. But I believe that if someone actually would love to contribute their opinion and understanding and reshape something, they need to be a full-time or at least part-time worker in these ecosystems.
So first of all, we should be deeply integrated into the process. But unfortunately, we have another problem. Recently, there have been more and more discussions about how we integrate, for instance, ethics professionals into the organization. Should they be employees, or should they be independent in order to actually hold someone accountable?

So, for instance, if they explore something, should they be part of a government or institution, an external ethics agency, or corporate employees? I believe they should be independent, but serving particular groups of organizations on a constant basis. I would love to bring some perspective from what we use at the European Commission.

For instance, the European Commission uses a very diverse approach. We were one of the pioneers of ethics for the evaluation of technology. But more importantly, they use a very diverse approach to evaluators. They constantly update the pool of people. So, for instance, I'm not able to evaluate the same startups or programs two years in a row. We typically make a rotation.

At the same time, there's always a contract about conflicts of interest. So there is a way, on the one hand, to put those professionals into the ecosystem, where they're constantly engaged with the ecosystem and with problems, for instance, of neurodiversity and AI. But at the same time, they're independent from the corporate level and, let's say, the revenue-making level.

Otherwise, on the one hand, we can be biased or corrupted. At the same time, we can be influenced by someone who doesn't want to be accountable for some stuff. For instance, we've actually had a lot of discussion about bias. But I believe we've had much more crucial incidents with companies like Theranos, which used billions in funding and created almost nothing.

So it's an actual problem: a problem of promises, of the efficiency of boards, of the efficiency of the people who should be responsible for overseeing and observing a process in terms of ethics, efficiency, and impact. But for some reason, they didn't, and we do nothing. And we should understand why.

We believe it's really about independence from the corporate level, but at the same time being more focused on being part of ecosystems. Which is why, for instance, I'm trying to be part of very different ecosystems at the same time. For instance, I serve on the London Tech Advocates disability group in the UK. At the same time, we collaborate with neurodiversity professionals in Australia. At the same time, we work in the US, in Europe.

So I don't try to be part of one ecosystem, like a government or a corporation. Because in the end, our goal is to serve patients, mothers, and parents, and to help this market grow. And since it's very disconnected, until professionals are much more focused on being part of this ecosystem, not a concrete company, I think it's really difficult to grow these professionals, make them independent, and give them real power to hold someone accountable or actually reshape the process to make it more efficient, more ethical, and adopted at different levels.

KIMBERLY NEVALA: Yeah, and I've observed that sometimes when we bring those folks in from the outside, we take this approach of shifting the accountability to them, even though they don't necessarily have the authority to affect a decision to change how something is deployed or how we think about it. And so this idea of accountability without authority and without shared responsibility is really, really tough.
I'd like to shift gears just slightly. You mentioned assistive technology earlier and working with some of these bits. AI solutions have often been strongly criticized, if not outright denounced, for promoting and reinforcing somewhat gross, maybe arbitrary, and very often binary classifications. It might be male, female; gay, straight; risky, not risky; creditworthy, not creditworthy.

And it started to bring into question for me the intent behind the rise of solutions recently that are marketed as affective, emotive, or emotional AI. And folks may have seen some of these in the news, where we've seen articles about applications that purport to understand your work ethic, for instance, based on your posture and your micro-expressions. Or whether I'm depressed based on the tilt of my head. And by that mantra, I am always depressed, even though I'm not actually depressed.

But regardless of, I guess, the suspect scientific basis of some of those things, they do seem to lean largely into understanding or normalizing a typical experience or posture. So there might be an application that folks with autism spectrum disorders might be able to use to respond to emotional cues that a quote, unquote, "neurotypical" individual would exhibit. But I haven't seen that happening in the reverse, where we have applications that nudge a quote, unquote, "neurotypical" individual to respond to someone who doesn't express or respond to things in the quote, unquote, "traditional" ways.

So as opposed to supporting a spectrum of experiences that allows everyone to engage in the way that's most natural for them, these types of applications seem to still promote this on or off idea of a correct or a normal posture and ask people who don't naturally fit that mold to change their behavior to fit in. Am I off base in that interpretation? Or are some of these applications leading us astray?

YONAH WELKER: Yes, it's a wonderful question. It's very complex, and it's a philosophical, technical, and social question all at once. So first of all, for instance, when we talk about diverse and neurodiverse people, there are so many criteria we take into account. For instance, we work with some startups that, along with neurodiversity, consider around 100 comorbid conditions, including digestive and different neuromuscular disorders. And that's what we put into the spectrum in order to, for instance, analyze it and better understand general health conditions.

But at the same time, even if we talk only about neurodiversity, we try to evaluate some unique patterns, like how a person communicates, thinks, learns, empathizes, systemizes, memorizes, and creates. And all of this becomes part of a particular technology flow, UX/UI design, and so on. So it helps to create completely unique experiences based on how, for instance, they communicate or think.

For instance, we have a platform focused on hiring people with autism for engineering or data science positions. And this platform is more focused on communication through chats, with less use of, for instance, video and Zoom-style communication, like a direct interview. They use a different approach to colors. So there are many patterns we use in order to leverage the whole experience: tactile, audio, sounds, visual, colors, physical, and non-physical.

And it became a huge foundation for so many amazing technologies we have today: eye tracking for dyslexia that helps make your reading better, social robots for assistive learning in autism, platforms for hiring these people. And even biofeedback, a headset that allows you to control, for instance, synthesizers in order to create music and reimagine creativity.

For sure, on the one hand, it sounds so wonderful, so many good things. And actually, it's not just about promises. For instance, we actually implement this stuff at schools, both in Europe and in the United States. Recently, we've done it in Denmark, for instance. It was pretty easy. Society is becoming more and more open to such innovation.

But we have another, more philosophical and social question: to what proportion does this technology make the experience safe and eliminate some kind of external, more negative experiences? For instance, a person doesn't like to communicate, and we only use chats, text, and particular colors. We create a 100% safe zone. Is that good for overall development in the long-term perspective or not?

So at this point, we have another thing. It's about human involvement. It's about echo chambers. It's about filter bubbles. It was one of the criteria I actually proposed in a previous document for the European Commission. I've seen it initially in media platforms we created with AI. When we use, for instance, semantic analysis and collaborative filtering to make personalized content for people, they become part of a filter bubble.

So, for instance, they like some content, and this content is constantly recommended to them again and again and again. And we use similar patterns in education. So, for instance, autistic children use some patterns of colors, text, and learning. But it becomes repetitive. It doesn't create new scenarios. Because our development is always connected to some kind of going beyond the comfort zone. It's actually the challenges which are not comfortable, not all the time, but sometimes. So that's why we have another question, focused on how we define the autonomy of social technology in the assistive field.

And that's why I believe, and it was one of my suggestions, in more involvement of human professionals who actually play the role of educators, researchers, assistants, and caregivers, and who actually use technology as a tool, not as a subject of action. Because currently, we have a lot of talk about bias. But in the end, only people can be biased. Only people can be accountable. We are not able to say that this child had some relapse or health issue because a social robot didn't act properly. Because we are not able to take that robot to court. It doesn't work.

That's why in assistive technology, we created a new field with what we identify as stakeholder ecosystems. It's the parents who are responsible for children, educators, teachers, doctors. It's actually people who deal with technology and use it as a tool, in the proportion and in the appropriate way it should be used. And that's why, for instance, many companies in the social robotics field now position themselves not as developers, but as learning companies.

So they actually create a curriculum for how to use this technology in a safe way, safe not in terms of damage to your health, but in terms of a progressive and smart way of development. So you're not constantly in the safe zone 100% of the time. You interact with technology and with platforms, but we also try to teach you how to interact with your friends, with your mother or your father, with your community, and so on.

So we try to push beyond our comfort zone. And that's why, even though I'm a technologist and most of the time I work on technologies, it's actually the point where my work goes in the direction of social science. Which is why we need people from the education field, psychology, and social science who are becoming a new breed of professionals: let's say social engineers, health and medical professionals, and educators who will grow a better understanding of the implementation of technology in classrooms, in nursing practice, and so on.

By the way, some time ago I even had a talk with Bonnie Clipper; she's a huge influencer in AI for nursing. And she mentioned that nurses are becoming engineers nowadays, because they deal so much with technology and should know how to deal with computer vision in a hospital, for instance, or in an emergency room. But at the same time, I asked her, so what's the line between technology and empathy? Are our nurses engineers, or are they caregivers with big hearts? And we agreed that it's a combination, 50/50. It's actually like that with any kind of technology. Even Wikipedia, which is a huge, global, decentralized project, uses 50% people for manual curation and 50% algorithms.
Because you're not able to just completely hand all of the work over to the algorithms. You need a double-check principle; that's another part of our field. At the same time, always keep your eyes on both omissions, meaning no action, and actions. Because sometimes you need to be involved immediately, specifically as it relates to children, which is why we need people around. In particular, an action which should be interrupted involves, for instance, misuse or negative impact, and so on.

So yes, it's a very complex field. And we're just at the beginning. For instance, hopefully some of the funds in the autism field will invest more proactively in these startups. And I'm happy to see more and more companies in this field. But I believe after that, we will see another step. And it will be more about social scientists and professionals who will learn how to use this technology better and adopt it at all levels: agencies, institutions, campuses. And it's an even more complex question, taking into account gender differences, cultural differences, economic differences, and so on.

KIMBERLY NEVALA: Yeah, there are no easy answers. And it's not A or B, is it? I think that idea, too, that we really lean into technology as a tool and not a replacement for human engagement and human interaction. And also this idea that we should really be ensuring that we're creating safe, engaging environments, but not inadvertently reinforcing barriers and boundaries that hold people back.

You mentioned Theranos earlier and the billions of dollars that have been put into some of these technologies with very little to show for them, if anything at all. Certainly, I think realizing this new vision, in which diversity and non-exclusion are really just core to how we operate, requires investment. I know you're involved in, I believe it's called, Unit Ventures. How does that venture diverge from the traditional venture capital approach? And why have you chosen to take that approach?

YONAH WELKER: Yes. First of all, I would love to start with Theranos. This case is still one of the most important I've seen in my life. First of all, as an evaluator, I've seen many fraudulent projects and many projects which have no relation to science or technology. But the worst thing about Theranos is the social reaction.

Once, I had a talk with a person who leads an accelerator in the justice field. And I asked her what she thinks about Theranos and, let's say, the negative social impact of the situation. And she said, oh no, this is a very good book. I asked, what do you mean? She said, I mean the Bad Blood book was so good. It's very interesting.

For many people, the Theranos case is more like a Hollywood story. I mean, it's like, OK, this is some girl from Stanford, and after that, she creates technology which didn't exist. There were billions of dollars spent. But nobody cared about the fact that it was about blood testing, about elements which are crucially used at every step of medical trials and so on. So actually, it could have killed thousands of people, or maybe millions.

So the whole level of understanding of accountability among entrepreneurs, technologists, and researchers is at such a low level in comparison to the scope of the rounds and the numbers of unicorns; it's just dramatic. Such irresponsible, but powerful, people at the same time. So it really created a new level of discussion.

So, for instance, Amazon was recently fined by the European Commission around 600 million dollars or euros, and there's a whole cycle of penalties related to GDPR compliance, data privacy, and human rights. I'm not able to comment on all of them, because I didn't introduce all of these policies. But at least the fact is that venture capital is far from ideal in terms of ethics.
And only in 2020, for instance, did 500 Startups begin to think about ESG. They introduced a project with the World Economic Forum. But really, nobody cared about it before. I was involved in this ecosystem, and nobody even mentioned it as a term. Everyone was just about the hustle of rounds, and so on.

So in terms of Unit Ventures, it's not a venture capital fund. Actually, I'm a part of other venture funds. Unit Ventures is more of a blockchain community focused on the stakeholder economy. They're present in around 200 cities around the world. But it's more about, let's say, a decentralized community of people who build the stakeholder economy. I'm not able to comment in depth, just due to the lack of deep expertise in blockchain. But in short, they try to, let's say, advance venture capital with a more decentralized approach to stakeholder ownership. I'm only saying that I support it from my perspective in AI and in data, because one of the elements of my work is data ownership, or creator ownership.

So, for instance, when a child or someone interacts with a platform or robot, everything should be not only private, not only protected and safe. If you use the platform and you create something, you actually have ownership over this data. So, for instance, if you're a child who uses a platform or tool or robot to create musical stuff or some work, you have ownership of it. Because we have a lot of data privacy talk, but not enough about data ownership.

So we are owners. And it's a bit more than that, because it's about concrete assets. That's the similarity in how it might work with Unit Ventures. We all talk about how important it is to understand that now we are not just citizens, we are digital citizens. We all have digital property. We have intellectual property. Every time we create something, even currently during this podcast, we create some assets which can be used in a particular way: monetized, sold, and so on.

And we should respect ourselves and our rights, and, for sure, understand this. And it's specifically crucial when we talk about assistive technology. There are so many things that would interact with a person. And unfortunately, the person often doesn't understand what happened, due to cognitive impairment. They're typically connected to caregivers. So from my side, I try to bring this perspective to this field. Because it's very sensitive and crucial.

KIMBERLY NEVALA: Yeah, we could talk for so, so long. Engaging in all of these conversations is so critically important. But it can be a bit intimidating. What advice would you leave with individuals who want to become better advocates, better allies, and just involved in achieving a truly diverse and non-exclusive world?

YONAH WELKER: Yes, first of all, from time to time I'm reached by people who ask for such advice. Most often, they are either technologists or advocates. So typically, we try to combine these elements.

KIMBERLY NEVALA: [laughing] Yes, combine those two together.

YONAH WELKER: Yes, yes. So typically, definitely, first of all, I don't ask for permission. Unfortunately, most of us are activists. We're not able to wait for, for instance, policy. So we should actually suggest this policy, create community-driven working groups and initiatives, and discuss how to make technology more ethical.

For instance, most of the ethics institutes work like communities. Women in AI is a decentralized nonprofit community. The Montreal AI Ethics Institute works this way. All Tech Is Human in New York works this way. Most of the ethics ecosystem was created either by academia, by students, or by communities. So don't ask for permission if you believe that something is wrong.

If you're a researcher, just do your work. Use some format to bring your work forward through platforms and research institutes. Second, don't think in terms of little or big impact. Now we're able to do so many things. We're able, for instance, to create startups and completely bootstrap them, or use crowdfunding, or use a community to find someone. For instance, we run hackathons and build teams from scratch.

And third, don't think that this is out of reach for, let's say, emerging countries. We do have a digital divide, and it's real, but at the same time, the cost of involvement across borders is becoming lower. Because, for instance, most of the hackathons we ran recently were run over Zoom and Slack. And at the MIT one, most of the people came from emerging countries. And maybe the only problem we sometimes experience is the internet connection.

But anyway, we're able to create. They're able to participate. So don't limit yourself in any way. Think really big. Act; don't ask for permission. Use all possible channels. Make your strategy pretty comprehensive, including reshaping policy and technology, creating stuff, participating in hackathons, and participating in community projects.

Currently there are so many wonderful communities on Slack focused on AI, ethics, accessibility, and neurodiversity. By the way, just this month, in September, I'm organizing and chairing another meeting in London focused on neurodiversity. And we welcome one of the best neurodiversity experts in the world. Our goal is to connect everyone, from the US to the UK to Australia; that's what we're trying to do. Yes, so you are welcome, and we're still on the mission.

KIMBERLY NEVALA: We are. Thank you. It's difficult for me to express just how much gratitude I have for you coming and sharing those perspectives, and for thinking differently and encouraging and enabling the rest of us to really do the same. So, thank you again for joining us. This was extraordinary.

YONAH WELKER: Thank you so much. It's my pleasure.

KIMBERLY NEVALA: All right, in our next episode, we're going to continue the discussion of building a more creative, inclusive future with Dr. Valérie Morignat. She is a polymath, the CEO of Intelligent Story, and a leading advisor on the creative economy. Art, artificial intelligence, and augmented reality-- we're going to talk about it all. So subscribe now to Pondering AI in your favorite pod catcher.

[MUSIC PLAYING]

Creators and Guests

Kimberly Nevala
Host
Strategic advisor at SAS

Yonah Welker
Guest
Explorer, Board Member EU Commission projects, Yonah.org