Upskilling Human Decision Making w/ Roger Spitz
KIMBERLY NEVALA: Welcome. It's day seven of Pondering AI's Insights and Intuition series. Today Roger Spitz reflects on upskilling human decision making. Roger, welcome back to Pondering AI. Thank you for joining us.
ROGER SPITZ: It's wonderful to be with you, Kimberly. Such a great idea to regroup.
KIMBERLY NEVALA: As the old saying goes, change is the only constant. Are you seeing that the level of uncertainty that folks are feeling is increasing or is it about the same? And how is that impacting our ability to make effective decisions in the current state?
ROGER SPITZ: So there's a lot to unpack there, Kimberly. But I'll try to keep it on the short side. What used to be esoteric topics at my end - decision making, complexity, unpredictability, and uncertainty - have suddenly become front of mind.
There's much more sensitivity to, and appreciation of, the necessity - for survival - of addressing things differently. But there's still a lot of reliance on old linear assumptions of a stable, predictable world. So the challenge is not so much whether the world is stable or unstable, or whether people realize it isn't, but how much organizations, countries, and people are still failing in terms of imagining possibilities. So that's point number one.
Point number two is that, from our perspective at the Disruptive Futures Institute, there's an inverse relationship between predictability and uncertainty: the more uncertain the environment, the less predictable it is. So one needs really different modes of decision making. And that's where we think foresight is quite important: to imagine the different possibilities, to think about how to see the world differently, to prepare differently, to respond differently - as opposed to relying on assumptions.
When Yellen - the US Treasury Secretary and former head of the Fed - can go out and say, yeah, we thought we could control inflation, we thought we knew what it was, and we just didn't have a clue. And this is with the most sophisticated economists, models, and financial engineering you can find, in one of the most powerful states in the world, if not the most powerful. You've got to realize that there's a different context. So you can see that even with the Fed, and very recently, it was business as usual until they realized that it shouldn't be.
KIMBERLY NEVALA: What is the implication there for how we should look at and utilize things like AI and analytics? Are these systems going to save us? What is their role?
ROGER SPITZ: So there are different aspects to that. There's certain environments where things are complicated as opposed to complex.
When it's complicated, you can understand cause and effect ex ante. You can rely on experts, analysis, and specialization. These are linear environments, where the drivers of change are known. In those environments, I think data can be relevant. The only thing is that it's historic. There's no data in the future.
I think the minute you're dealing with complex environments - which are nonlinear and unpredictable, where you can't establish exactly what might happen because it's emergent, where there are multiple drivers of change that are constantly in flux and dynamic - that's not the sweet spot for data. So the danger is that many of our environments, and many of our hardest challenges, are complex in nature. That's the reality of our world.
The second big impact - and you didn't explicitly ask, but I'd just like to extend this given the lovely discussion we had some time ago - is that as we delegate authority to AI in decision making, we humans are basically de-skilling ourselves, becoming less able to make decisions in complex environments because we've delegated them.
So not only is AI potentially of limited value - because AI relies on past data and on systems which are complicated, predictable, and understandable - but you're then delegating to it. You're de-skilling humans. In the 21st century it's existentialism 2.0, where you have both human and algorithmic decision making. Humans do not have a monopoly on decision making today.
And we use this term "existential risk" for AI in terms of human decision making. Not with the narrow definition of existential, which has been hijacked by a lot of the media and the debate around risk of extinction: that risk of extinction is existential. But anything which touches human agency, freedom, and choice is actually existential, if you take the existential philosophers' perspective.
The existential question for us in terms of decision making and AI, in the context of this complex, unpredictable world, is in relation to education and skills. Is the educational system adequate for a complex environment in which we coexist with AI? What do we need to do to be able to make decisions in uncertain and unpredictable environments? How do mass automation and what we call inforruption - disrupted information, disinformation, cyber insecurity - affect our decision making?
So these are all areas where there's an existential element to decision making and our reliance on AI. It's really about doing a triage as to where AI is good and where humans should be good. And it's about making sure we give ourselves the empowerment - and the changes to education, governance, and incentive systems - to leverage AI where relevant, but also, importantly, to leverage humans where necessary to stay relevant in today's world.
KIMBERLY NEVALA: Let's talk about what you might predict or expect to see happen in the near future and then what you think should happen. So first up, how do you expect this to play out?
ROGER SPITZ: My personal view is that it will continue on the current trajectory.
Which is everything that can be automated, virtualized, decentralized, cognified will continue to be so. And that AI has a role in that. I think it will continue to increase its position in the decision-making value chain. Again, I'm not saying that AI understands the brain or processes like the brain or understands what it does. I'm just suggesting that decision outcomes are made by algorithms irrespective of whether it understands what those decisions are or not. And that has an impact.
And I expect that the current debates will continue around existential risk, ethics, and many other important topics. What I would like to see change is, first of all, an understanding in the current debate of who is incentivized to form which point of view. Think of all the people who speculate on whether AI creates jobs or destroys jobs - the meme that goes around, which is that it's not AI that will destroy jobs, but people who can't work with AI.
Two observations. One is that no one knows: it's unpredictable, it's surprising. The second is that a lot of the people who formulate these studies and points of view are incentivized because they're selling some kind of technological system, et cetera. Nothing wrong with that; that's their job. The only thing is, for general awareness and for any changes we make to legislation or what have you, people should understand that incentives determine outcomes. If you're incentivized to sell certain things, you might take a certain point of view.
The important thing to understand is what needs to change for humans to cohabitate in a world where we're not a monopoly on decision making. How do we change the educational system? How do we change the governance and incentives?
The topics that are being debated - ethics, et cetera - I'm not trying to minimize them. They're extremely important. But I would like the debate to shift: rather than spending 95% or 99% of it on what happens to AI, actually address what it means for humanity.
KIMBERLY NEVALA: What are some of the other features that, if we are hyper focused on technology - both the ramifications of technology and the idea of technology as the driver of all future states - that we are overlooking?
ROGER SPITZ: I personally don't make the assumption that technology is the primary, overarching driver of change. I think it's one of a multitude of drivers of change - an important one, and one which is inseparable from the human condition today. I think the human and existential condition today cannot be separated from our interactions with technology. So in that sense, it has a number of features that are overriding. But it's not, to my mind, the unique driver of change.
So among the things we might not be focusing on enough, to my mind: one of the unique features of certain complex challenges, like climate and AI, is irreversibility. Without necessarily speculating on if and when one could achieve singularity, or whether it is an extinction risk, et cetera, what is almost certain is that we could reach certain milestones which are irreversible. Therefore, a good understanding of what that would look like is important in an anticipatory way, because once something is irreversible it might not be possible to go back. So irreversibility is something I personally don't see enough of in the debate generally.
And the second is this idea that advanced technologies and AI systems are already incomprehensible and will become that much more so. The more we frame our understanding, our responses, or our legislation around an assumption that we precisely understand what a system does, how it will evolve, or what impact it will have, the bigger the problem - because it's a bit of a black box.
So we need to think about ways to build in embedded feedback loops, and to evaluate and monitor emergent potentials before things become irreversible. If everything we do is predicated on a few experts believing they have a good handle on things - so we're good to go - and we haven't addressed the educational systems, the governance, the feedback loops, the emergent possibilities, the irreversibility, the black box element, then we're relying too much on a narrow set of assumptions without thinking systemically.
KIMBERLY NEVALA: Yes, for sure. Awesome. Thanks again, Roger. As always, you have gotten the synapses firing.
ROGER SPITZ: No. No, my pleasure, Kimberly. Thanks for regrouping. These are very good topics to check in on from time to time. There's a lot happening and very important work you and your podcast are doing. Thank you so much for the support.
KIMBERLY NEVALA: 12 days of Pondering AI continues tomorrow. Subscribe now for more insights into what's happening now and what to expect next in the ever-fascinating world of AI.