Lama Nachman discusses frustration as a motivator, designing for authenticity, embracing uncertainty, clarity of purpose and why nothing is obvious in AI - even when giving people back their voice.
Lama Nachman is an Intel fellow and the director of Intel’s Human & AI Systems Research Lab. She also led Intel’s Responsible AI program. Lama’s team researches how AI can be applied to deliver contextually appropriate experiences that increase accessibility and amplify human potential.
In this inspirational discussion, Lama explores the need for equity in AI, demonstrates the difficulty of enabling authentic human interaction, and explains why ‘Wizard of Oz’ approaches, as well as a willingness to go back to the drawing board, are critical.
Through the lens of her work spanning early childhood education, manufacturing and assistive technologies, Lama deftly illustrates the ethical dilemmas that arise with any AI application - no matter how well-meaning. Kimberly and Lama discuss why perfectionism is the enemy of progress and the need to design for uncertainty in AI. Speaking to her quest to give people living with ALS back their voice, Lama stresses how designing for authenticity over expediency is critical to unlocking the human experience.
While pondering the many ethical conundrums that keep her up at night, Lama shows how an expansive, multi-disciplinary approach is critical to mitigating harm - and why cooperation between humans and AI maximizes the potential of both.
A full transcript of this episode can be found here.
Our final episode this season features Dr. Ansgar Koene. Ansgar is the Global AI Ethics and Regulatory Leader at EY and a Senior Research Fellow specializing in social media, data ethics and AI regulation. Subscribe now to Pondering AI so you don’t miss him.