Ethical Control and Trust with Marianna B. Ganapini

Marianna B. Ganapini contemplates AI nudging, entropy as a bellwether of risk, accessible ethical assessment, ethical ROI, the limits of trust, and irrational beliefs.

Marianna studies how AI-driven nudging raises the ethical stakes for autonomy and decision-making: a solvable problem that may still prove difficult to regulate. She posits that the level of entropy within a system correlates with risks both seen and unseen. We discuss the relationship between risk and harm, and why a lack of knowledge confers moral responsibility. Marianna describes how macro-level assessments can effectively take an AI system’s temperature, risk-wise. Addressing the evolving responsible AI discourse, she asserts that limiting trust to moral agents is overly restrictive; the real problem is conflating trust between humans with the trust afforded any number of entities, from your pet to your Roomba. Marianna also cautions against hastily judging another’s beliefs, even when they overhype AI. Acknowledging progress, she advocates for increased interdisciplinary efforts and ethical certifications.

Marianna B. Ganapini is a Professor of Philosophy and Founder of Logica.Now, a consultancy that seeks to educate and engage organizations in ethical AI inquiry. She is also a Faculty Director at the Montreal AI Ethics Institute and a Visiting Scholar at the ND-IBM Tech Ethics Lab.
A transcript of this episode is available here.

Creators and Guests

Kimberly Nevala
Host; Strategic Advisor at SAS

Marianna B. Ganapini
Guest; Professor and Founder, Logica.Now