Artificial Empathy with Ben Bland

Ben Bland explores emotive AI's shaky scientific underpinnings, the gap between perception and reality, popular applications, and critical apprehensions.

Ben unpacks the scientific contention surrounding human emotion. He talks terms (emotive? empathic? not telepathic!) and outlines a spectrum of emotive applications. We discuss the powerful, often subtle, and sometimes insidious ways emotion can be leveraged. Ben explains the negative effects of perpetual positivity and why drawing clear red lines around the tech is difficult.

He also addresses the qualitative sea change brought about by large language models (LLMs), implicit versus explicit design, and commercial objectives. Noting that the social and psychological impacts of emotive AI systems have been poorly explored, he muses about the potential to actively evolve your machine's emotional capability.

Ben confronts the challenges of defining standards when the language is tricky, the science is shaky, and applications are proliferating. Lastly, Ben celebrates empathy as a human superpower. While optimistic about empathic AI's potential, he counsels proceeding with caution.

Ben Bland is an independent consultant in ethical innovation. An active community contributor, Ben is the Chair of the IEEE P7014 Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems and Vice-Chair of IEEE P7014.1 Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems.

A transcript of this episode is here.

Creators and Guests

Kimberly Nevala
Host
Strategic Advisor at SAS

Ben Bland
Guest
Co-Founder and Partner, Looper; Chair, IEEE P7014 Standards Working Group for Emulated Empathy in Autonomous & Intelligent Systems