The Future of Human Decision Making with Roger Spitz

Roger Spitz reconciles strategy with philosophy, contemplating the influence of AI systems and the skills required to make decisions in the face of uncertainty.

KIMBERLY NEVALA: Welcome to Pondering AI. My name is Kimberly Nevala, and I'm a strategic advisor at SAS. I'm so pleased to be hosting our third season, in which we explore how we can all create meaningful human experiences and make mindful decisions in the age of AI. And to that end, in this episode, we welcome Roger Spitz.

Roger is the CEO of Techistential and chairman of the Disruptive Futures Institute. He joins us to discuss his AAA model for human decision making in the age of AI. Welcome, Roger.

ROGER SPITZ: Great to be here, Kimberly, and great to have the opportunity to exchange on such an important topic.

KIMBERLY NEVALA: So let's start by sharing a little bit about your background and the nature of your work at Techistential and DFI.

ROGER SPITZ: That's great. No, with pleasure. So it's a little bit of a quirky background, but that's what we like. I had a pretty vanilla and boring 20 years in investment banking.

KIMBERLY NEVALA: [LAUGHS]

ROGER SPITZ: Well, I say boring. It's not boring per se, but let's say it's maybe a bit less interesting, I find, than what I'm doing today. But I spent 20 years advising - I was global head of M&A in technology for one of the large European banks advising clients on their acquisitions, divestments - always quite strategic at board, CEO, and shareholder level.

So I spent a lot of time on that. And of course, it's interesting. It's forward looking. You're looking at transactions which are transformational. But over time, over the past five years probably, I got more and more interested and wanted to go down the rabbit hole of change and disruption. I found that the more specific, cookie-cutter business approaches - for strategy, for M&A, for the way corporates organize their value creation strategies - were themselves being disrupted by unpredictability, uncertainty, and complexity.

And that led me, in a way, to what I'm doing now, which is very focused, as a foresight practitioner, on longer time frames, more systemic change, and better understanding complexity, uncertainty, and unpredictability. I've moved slightly away from the more transactional portion of my life, which was fascinating, but I just find it more meaningful and relevant to focus on more systemic, longer time frames.

KIMBERLY NEVALA: So before we dive into some of the mechanics of decision making and strategic foresight, I'd like to get a little bit existential with you. You have hypothesized that the philosophical framing of concepts such as agency, freedom, or choice - which are often viewed as innately human characteristics - might need to be reframed. Why does emerging technology such as AI necessitate a fundamental rethink of these existential concepts?

ROGER SPITZ: Yeah. So it's a fundamental predicament, I think, that humanity has today. Many years ago I was very interested in existentialist philosophy, and I've revisited it more recently - reading the likes of Martin Heidegger, Kierkegaard, and Sartre. I'm also crazy about cinema, and there are some great movies which play with the concept of contingency: what may or may not happen. Movies by Alain Resnais like Smoking/No Smoking, or Kieslowski's The Double Life of Véronique.

And as you pointed out, Kimberly, one of the main characteristics of existentialism is the human condition and, in particular, our ability to have agency, choice, and freedom. And I think it's Jean-Paul Sartre, standing on the shoulders of many, many giants, who formulated that you exist: man, first of all, exists, encounters himself, surges up in the world, and defines himself afterwards. The "him" rather than "her" is Sartre's original language from maybe 70 or 80 years ago; despite being so close to Simone de Beauvoir, that language was still used, so you'll have to forgive me.

But on the topic of what that means and what the connection with AI is, really, what I find is that today we are in a complex world. Stephen Hawking said the 21st century would be a century of complexity. Now, complex means, amongst many other things, that you don't necessarily have all the right answers because there can be unknown unknowns. Change is nonlinear, so the inputs aren't necessarily proportional to the outputs: small inputs can have huge outcomes.

Importantly, it means that you can't always appreciate or understand causality ex ante. So you have to kind of emerge and see what happens, and sometimes establish causality ex post. Those are some of the features of complexity.

And what I think is happening is that for much of the recent past, including the past few decades, we could live our lives with the assumption that the world was stable, predictable, and linear. It wasn't, but the cost and the risks of making those assumptions were limited. Every so often, you'd have the 2001 technology crash or the 2008 financial crisis. But broadly, decade after decade, businesses and individuals could assume, or pretend to assume, that the world is stable and predictable.

Now, that isn't correct. It's a complex world, and more and more so. So today, the cost of business as usual is increasing because this uncertainty is increasing. And there's an inverse relationship between predictability and uncertainty. Predictability is reliable when you know what's happening, when there are known knowns or known unknowns. You can rely on experts. How do you fix a car? How do you get a space shuttle to the moon, et cetera?

In a world which is complex, it's more emergent. As an anecdote, it's only over the past few years - thanks to a colleague of mine, Rauli Nykänen in Finland, who is studying philosophy at the PhD level - that I got comfortable reconciling strategy with philosophy. And to answer your question, why do I think we need a reboot for humanity and for human skills in the face of AI? It's precisely because the world is complex, and therefore there aren't answers to everything.

KIMBERLY NEVALA: So where does existential philosophy fit into this increasingly complex world?

ROGER SPITZ: It actually fits in perfectly because it's a good thing that the world isn't predictable. It's a good thing that the world is uncertain, because if it were predictable, it would be a predetermined world where you don't have choice, freedom, and agency. So actually, you are able to create your essence. You exist, as Sartre kind of established. You exist, and then you create your essence, which is effectively one of the key concepts of complexity: emergence. Now today, just to wrap up that question --

KIMBERLY NEVALA: [LAUGHS]

ROGER SPITZ: Today, I think neither AI nor humans are fantastic at decision making in complexity. AI, partially, because while it really is very good at pattern recognition and connecting the dots across large amounts of data, even unstructured data, it does require causality. So when there are completely unknown unknowns, when the parameters are completely variable and unpredictable, when cause and effect is only established ex post, when there are no right answers, et cetera, AI today is not great at that either.

But humans should be, because we have intuition, creativity, many features which mean that humans should actually be good at experimenting, at tinkering, at emergent behavior. The reason we're not - and this is where we need to get to with the rest of our discussion - is precisely because we haven't had that reboot.

The educational system hasn't necessarily reframed the requirements, which are sometimes called the four Cs or critical skills. But what are the requirements to emerge, to be educated, and to develop in a world which is complex - where you don't necessarily have all the right answers, where there are unknown unknowns, and you need to emerge? And effectively, that's existentialism. You're thrown into life. You don't have a playbook you can just follow. And you have to sort of figure it out.

KIMBERLY NEVALA: So I want to come back to the four Cs and the requirements as we round out the conversation. As you were speaking, something that was really striking is that you are in no way suggesting - in fact, I think the opposite - that humans could, or even should, really divorce or delegate decision making in a complex world to systems such as AI.

And it's interesting because in the continuing discussion and even push for "data-driven decision making," there's often an overt reluctance to rely upon or even consider human intuition and experience in decision making. What do you make of that? Is that the wrong approach?

ROGER SPITZ: Yeah. So the way I look at the decision-making value chain - whether it's humans or algorithms seeking to reach the same outcomes - is that you start by going through the analysis of the information. That's analytics. You then have some degree of processing and, hopefully, predictive elements, which can support decision making. That's where a lot of this happens - no doubt SAS, for instance, is very good at supporting decisions through data and through processing large amounts of data. It's drug discovery. It's negotiating a contract, where the algorithms might pick up certain things and support going through millions of contracts, due diligence, or legal terms. They're able to accelerate that and to support, enhance, or augment human decision making.

So that's the sort of predictive aspect. I think the challenge arises on the prescriptive element, where the algorithms - or the system - are seeking to make decisions in an autonomous way. In a complex world, candidly, I don't think algorithms necessarily should be doing that, nor are they able to, because there aren't these clean correlations. There are unknown unknowns, unclear causality, et cetera.
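
A minimal sketch of the boundary Roger describes might look like the following. The function names, confidence threshold, and data are hypothetical illustrations (not his framework or any vendor's API); the point is simply that the descriptive and predictive stages can be automated, while the prescriptive step is escalated to a human whenever the context looks complex or the model's confidence is low.

```python
# Illustrative sketch of the decision-making value chain discussed above:
# descriptive -> predictive -> prescriptive. All names and thresholds are
# hypothetical; the prescriptive step is gated behind a human whenever the
# context is complex (unknown unknowns, unclear causality).

from dataclasses import dataclass

@dataclass
class Prediction:
    recommendation: str
    confidence: float          # model's own confidence, 0.0 - 1.0
    context_is_complex: bool   # e.g., novel situation, no established causality

def descriptive_step(raw_records: list[dict]) -> dict:
    """Summarize the data (the 'analytics' stage)."""
    return {"n_records": len(raw_records)}

def predictive_step(summary: dict) -> Prediction:
    """Stand-in for a trained model producing a recommendation."""
    return Prediction(recommendation="approve", confidence=0.62, context_is_complex=True)

def prescriptive_step(pred: Prediction, human_decide) -> str:
    """Only act autonomously when the situation is simple and confidence is high."""
    if pred.context_is_complex or pred.confidence < 0.9:
        return human_decide(pred)   # keep the human in the loop
    return pred.recommendation      # narrow, well-understood cases only

if __name__ == "__main__":
    summary = descriptive_step([{"amount": 120}, {"amount": 300}])
    pred = predictive_step(summary)
    decision = prescriptive_step(pred, human_decide=lambda p: f"human reviews '{p.recommendation}'")
    print(decision)
```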

And so in that respect, I am not anti-technology or anti-systems. I think AI has a very significant role to play, and we've seen it in very important applications that enhance the human condition, such as drug discovery, treating diseases, and many other areas. However, as the world becomes more complex and we don't upgrade humans enough to make decisions in that complexity, we end up delegating or relegating decision making to computers simply because the world is becoming too complex. What's happening is that the systems are taking more of the decision-making value chain than they necessarily should.

Sometimes it's because humans are having to delegate, because things are becoming too complex and they just don't know how to operate and make decisions with such complexity and volume of data. And sometimes it's because companies are letting systems take over that decision making - through algorithms, through social media, through credit scores, through all kinds of things that shouldn't necessarily be decided in such an autonomous way.

KIMBERLY NEVALA: And how do you see that use or reliance or over-reliance on AI? When and where does that become dangerous? Is there an example you can provide?

ROGER SPITZ: Yeah, listen, there are some amazing books and thought leaders, and the teams at Stanford and in many parts of the world are doing great work. So it's a topic that's very well covered around the biases and many other aspects - for social credit, for health care, and so on. Obviously, these are very important things. But I would go more broadly than that.

More broadly, there's the risk around delegating so much to machines, the risk of not being able to autonomously discern information, and the risk of disinformation and misinformation, which goes far beyond what films like The Social Dilemma project - they're almost missing an entire layer of puppeteers manipulating the world, the polarization of societies, the inability to govern a country because it's so polarized. All of this is partly driven by some semblance of pretending that algorithms can decide.

But ultimately, my point here is the following. It's not that the decision making per se is necessarily taking place within the algorithms, but that human decisions are impacted by what the algorithms are doing. In other words, in the 20th century, when you look at existentialism, you had Jean-Paul Sartre and Simone de Beauvoir, and they'd sit at the Café de Flore and Les Deux Magots and talk about many things. The environment might not have been smoke free, but they were spared from the social, psychological, and existential effects of the internet, algorithms, social media, and automation.

Good, bad - I'm not passing judgment. I'm just saying it was an era where you could distinguish human decision making from that degree of productized, weaponized influence at scale. In the 21st century, you can't distinguish your existential condition - you can't separate it - from technology.

And therefore, in a way, what we call techistentialism is existentialism 2.0: techistentialism studies the nature of human beings' existence and decision making in our technological world. So we see it this way: you can no longer consider decision making exclusive to humans, and therefore you're looking at both human and algorithmic decision making.

To try and answer your question in a concise way, I would say there are two areas where one needs to be very thoughtful about what one allows a system to do. The first is where systems and algorithms are significantly impacting human decision making. That goes, to a very significant degree, to things like information. Although disinformation is not necessarily making decisions per se, in a way it is, because it's deciding how to reach certain people with certain information. More importantly, it's influencing decision making, creating polarization, weakening democracies, and producing a lot of negative effects that impact human decisions. So beyond the role of systems making decisions, you also have their impact on human decision making.

Then you have specifically the point which is probably closer to the question you're asking, which is, at what point should one be thoughtful around the system making decisions? And then I guess for me, that point is the line between predictive and prescriptive.

KIMBERLY NEVALA: So there's a very interesting conversation developing today about our human propensity to become overconfident in, or over-reliant on, these AI-enabled systems, even when they're being used - or supposed to be used - just to inform or add information to our decision-making process. The concern is that we stop interrogating and thinking critically about when, where, and why the information and recommendations being provided may or may not be accurate, appropriate, or representative of the real world.

How do we address this sort of issue? And is that part of the delineation you make between using AI for prediction versus prescription?

ROGER SPITZ: Yes. So there are two elements to that. One is kind of the diagnosis, and then one is the, what do you do about it?

So in terms of the diagnosis, there's no doubt in my mind, and in the minds of many other people - unfortunately, not the majority - that we're in a world which can no longer be understood just through risk. Risk is something you can quantify and put a probability on: you know the parameters, you have a specific question to look at, and you're able to predict what the right answer is - you have known knowns or known unknowns, et cetera.

Not only are we no longer in the zone of risk, but I think we're not even in just what we call uncertainty, where there are, say, four possible outcomes and you have a sense of what those four outcomes are - you're just not sure how to calibrate the likelihood.

I think we're in what we could call deep uncertainty, which means that we can't even agree on what the likely outcomes are, let alone the probabilities, let alone whether they may or may not materialize. That includes space exploration. It includes certain existential risks like climate. It includes whether AI might reach human-level intelligence. It includes many areas where we go beyond risk and beyond plain uncertainty. It's really deep, fundamental uncertainty.

And so in that environment - deeply uncertain, unpredictable, complex - you can't really predict. You have multiple unknowns, broad probabilities, no right answers. It doesn't mean that, within that broader universe of unpredictability, there aren't certain elements one can get comfortable with. Science is still extremely important - scientific discoveries, AI supporting drug discovery, and many other things.

So it's not an anti-science thing. There are many, many important roles. But the diagnosis is that a lot of our environment is deeply uncertain. And the cost of assuming that it isn't - that it's linear and stable - is increasing, both in terms of missed opportunities and in terms of risk. It's dual. It's not just negative.

So that's the diagnosis. And then, to your point, what do you do about it? One thing is that AI is clearly playing a greater role at every step of the decision-making process. And I think that, as humans, we should upgrade how we make decisions in complex environments, which are now the dominant type of environment we face.

And so we devised what we call our AAA model, which is really quite simple. And we're borrowing great ideas from many other people. There's nothing revolutionary in this nor necessarily proprietary. But it's really just thinking about what constitutes a way of deciding and operating where you can't predict because there's deep uncertainty.

KIMBERLY NEVALA: As the name suggests, your model identifies three key capabilities for decision making in a complex world. What is the first A?

ROGER SPITZ: So the first one is borrowing, really, from Nassim Taleb the concept of antifragility. In other words, if shocks happen, you want to go beyond robustness or resilience and actually kind of improve from that.

So there are certain features which make you fragile. Take companies that are buying back shares, like the airlines: they take on huge leverage because they assume the cost of debt will always be zero, and they want a share price boost by buying back shares and all that jazz. They listen to all the MBAs at Harvard who tell them it's not good practice to hold cash - financial theory tells you that you shouldn't have cash on the balance sheet. Well, that's great, unless suddenly there's a 2008 crisis or a pandemic, in which case you've used all your real cash, and taken on debt, to buy back shares and give a five-minute fix to your share price. But you're short on cash when you need it. That is fragile.

Antifragility is the opposite. It so happens that you see it a lot in Silicon Valley, but it's many companies in the world, not just Silicon Valley. You'll see quite a lot of tech companies who are very happily keeping literally hundreds of billions of dollars on their balance sheets, going against what financial theory and the Harvard MBA would tell you to do with cash. That is basically antifragile. There are many things that are antifragile.

But the idea of antifragility is that it's foundational. It's not about if and when you need to make decisions. It's: you don't know the future, so whatever happens, build certain features into your company, your existence, or what have you, so that when shocks happen, not only are you resilient, but maybe you can even benefit from them.

KIMBERLY NEVALA: So antifragility fundamentally is about really embracing and planning for unpredictability, as odd as that sounds.

ROGER SPITZ: Exactly. It sounds odd, but it's absolutely spot on. In other words, it thrives on mistakes. Instead of, for instance, spending $2 billion and doing three years of research on something a bit theoretical that you try to roll out based on god knows how many assumptions, you're constantly doing small tinkering and testing. Things will go wrong. You're failing all the time.

But that creates a feedback loop. And that feedback loop helps you improve things. So it's really just basic tinkering, testing, trial-and-error experimenting, as opposed to reliance on many assumptions and on structured, hierarchical, command-and-control, centralized environments for everything, which may just not work in the real world or in certain circumstances.

So for instance, one of the important features of fragility, or antifragility, is the idea of asymmetry. You might have someone who believes you can quantify everything. Great. And they'll say, listen, there's only a 0.5% chance of this happening; we should be good. Yeah, except that 0.5% chance of something happening can happen. Maybe the 0.5% is wrong. And if it does happen and it wipes out the entire planet or the entire company or what have you, that decision is not OK just because there's a 0.5% likelihood, because you don't know that it's 0.5%. And the point is that a complex world is nonlinear and therefore asymmetrical, so the impact can be ginormous despite the probability being tiny.

So those are some of the features to embed in the way you operate, the way you build a company, the way governments should be making decisions, et cetera. Think of the decision to save a few pennies by not stockpiling masks or preparing for pandemics because, on the balance of probabilities, it shouldn't happen. Except that when it does, you've saved a few billion dollars and wasted a few trillion dollars, as we saw in 2020. And that example is one amongst many.
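
To make the asymmetry concrete, here is a rough back-of-the-envelope sketch. The figures are invented for illustration only (loosely echoing the "few billion saved, few trillion wasted" framing above), not numbers from the conversation.

```python
# Back-of-the-envelope illustration of the asymmetry argument.
# All numbers are made up; the point is that a tiny probability times a
# catastrophic loss can dominate a certain small saving.

p_catastrophe = 0.005            # the "0.5% chance" - which may itself be wrong
cost_of_preparing = 2e9          # e.g., stockpiling, redundancy: a few billion
loss_if_unprepared = 3e12        # e.g., a pandemic-scale loss: a few trillion

expected_loss_unprepared = p_catastrophe * loss_if_unprepared   # 1.5e10
print(f"Expected loss if unprepared: ${expected_loss_unprepared:,.0f}")
print(f"Cost of preparing:           ${cost_of_preparing:,.0f}")

# Even on pure expected value the preparation pays for itself here, and the
# expected-value framing still understates the problem: if the loss is
# unbounded or irreversible (ruin), no small probability makes the bet OK,
# because you don't get to replay the game after being wiped out.
```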

So those are indeed some of the features that can allow you to be more resilient and, ideally, even come out better when shocks or surprises happen. Those antifragile foundations help with that.

KIMBERLY NEVALA: Well, as a contrarian - a confirmed contrarian - this is music to my ears: there may be a place for us in the world moving forward. So that's antifragility. What's the next pillar in the AAA model?

ROGER SPITZ: So the next pillar - and this is really the bread and butter of foresight practitioners and futures studies, which is now a field I'm heavily involved in - is being anticipatory. In other words, how can you proactively develop your ability and capacity today to prepare for and build the future?

The interesting thing about being anticipatory is that it's not just a question of looking at long-term horizons for events that may or may not happen in the long-term future. You're not actually trying to predict. You're trying to prepare in an anticipatory way for eventualities which are quite broad and which are not certain. So you're not actually predicting.

And the idea of that is to envision different possible outcomes with the view of informing the decisions you currently make. So it's quite paradoxical because most of the world will assume, number one, that you're trying to make predictions, which you're not; number two, that if you're thinking long term, it's a bit too far away and shouldn't impact today.

But the reality is that many of the most successful countries and governments and individuals and businesses and companies who are anticipatory, who embrace foresight and future studies, are every day making probably better decisions by virtue of having that longer time frame and that broader set of possibilities inform today's decision making. So that covers the anticipatory element, the second A.

KIMBERLY NEVALA: And before we go to the third pillar, which I believe is agility, are there established practices and techniques that exist today in the foresight toolkit?

ROGER SPITZ: Yes. And there are some wonderful courses and books; it's a very rich field. It's unfortunately not as embraced as it should be by the business world, precisely because the business world is very much attached to numbers and to quantifying everything. And again, it's not that foresight ignores numbers. It's just that they're not the be-all and end-all.

But to give you a few elements of what's in the toolkit: number one, compared to a strategic plan, you're looking longer term. Whatever you're looking at, you're looking at 5, 10-plus years as opposed to maybe the next one, two, or three years of a plan.

The second thing is that instead of having very strict milestones, which are often based on very fixed assumptions, it's more emergent. You're going to embed feedback loops, trial-and-error testing, emergent elements - a little bit like the antifragility features - partly because in a complex world you don't have answers to everything. So if you're running a playbook based on a bunch of assumptions and assuming that that will be the outcome, it kind of doesn't work in an unpredictable world.

Another element: in a strategic plan, the governance, the board, or the leadership team will often establish a few key answers. When you're thinking in terms of the futures field, you're going to think as heavily, if not more, about the questions as opposed to just the answers, because, again, the idea is: what do we not know? Then there's the exploration of change. In a strategic plan, you're often going to try to model uncertainties to deliver certainty. But that's just imaginary. It's an Excel spreadsheet - and I've done a million of them; after 20 years in investment banking, I can tell you we have a sense of what they're worth. Modeling uncertainties doesn't deliver certainty. But on paper, it looks really great. You have great graphs. You have great models.

But the reality is that you need to explore and prepare for any kind of change irrespective of what the risk modeling comes up with. The strategic plan will often be more straightforward, linear, often focused on one or two eventualities, whereas in foresight, you might think of multiple futures, develop different scenarios, including some that seem surprising.

In a strategic plan, you're going to be very focused on data, on noise, on things you're seeing through trend analysis and the latest Gartner report. In futures work, you're probably doing broader scanning, interpreting earlier signals in terms of what they could mean and their next-order implications, as opposed to taking them for granted.

So all in all, to wrap up, I would say a strategic plan is often more predictive, based on a bunch of assumptions, whereas in futures work we're not embarrassed to think about the plurality of the world - we actually think it's essential - and to embed that uncertainty in your strategic thinking. Hence strategic foresight versus whatever McKinsey playbook strategy.
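
One common foresight technique that fits this description is a simple scenario matrix built from critical uncertainties. The sketch below is a hypothetical illustration (Roger does not name this specific method here, and the axes and labels are invented): pick a couple of critical uncertainties and enumerate the futures their combinations imply, including the surprising ones.

```python
# A minimal sketch of one common foresight technique: a scenario matrix built
# from critical uncertainties. The axes and labels below are invented for
# illustration; real foresight work would derive them from horizon scanning.

from itertools import product

critical_uncertainties = {
    "AI capability": ["plateaus", "reaches broad autonomy"],
    "Regulation": ["fragmented", "globally coordinated"],
}

# Each combination of uncertainty outcomes is one scenario to explore,
# including the ones that seem surprising today.
for i, combo in enumerate(product(*critical_uncertainties.values()), start=1):
    scenario = dict(zip(critical_uncertainties.keys(), combo))
    print(f"Scenario {i}: {scenario}")
```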

KIMBERLY NEVALA: So bringing some imagination to the boardroom and asking, what if, what else, what about?

ROGER SPITZ: Exactly. And again, sorry, just on that, the "what if" is the duality. It's not just as a risk or if something bad happens. It's also for incredible opportunities. So sorry. I didn't mean to interrupt.

KIMBERLY NEVALA: Oh, you didn't at all. That's really important because, again, it's about a non-constrained environment. We sometimes talk about open brainstorm, greenfield thinking, right? This is really about saying, what's the most outrageous thing I could think of happening? And then what about that? What if that actually happens, right?

ROGER SPITZ: Exactly. That's exactly right. Spot on.

KIMBERLY NEVALA: Perfect. All right, so that brings us then, I think, to the third and final pillar of this AAA model, which I believe is agility.

ROGER SPITZ: Yes. So agility, listen, is a word which, like "anticipatory," is used in many different contexts and can mean anything. You have agile thinking, et cetera. The way we use it is in the sense of being able to emerge in the present, in this complex world, reconciling short-term and long-term perspectives.

In other words, with the anticipatory element, you're thinking across long time horizons. We mentioned earlier that one of the key objectives is actually to inform decision making, but you do often have a long-term lens in being anticipatory. It means thinking about the future, and often the longer-term future.

What is essential is that in a complex, emergent world, you're constantly in the here and now, having to make decisions in a context which is not predictable and which you don't have visibility on. So agility is really building that bridge to the future, driving the emergence today by reconciling these different time horizons.

And that requires agility because, number one, it's emergent: you're taking the different elements of the feedback loop and seeing how to decide. And number two - which is really the defining feature of complexity - you don't have a playbook you can just follow, because in a complex, emergent world you often have to make decisions without being able to rely on known knowns or on parameters with some predictability in them. That degree of complexity means unpredictability, high variability, instability.

And therefore that emergent decision making requires a certain mindset which doesn't just rely on what you've been taught or what you've learned or the years of experience. It's just a different way of doing things. So that's what we label as agility.

And so if we summarize the AAA, it's actually quite simple. It says: build foundations so that if shocks happen, you're more likely to be resilient or, ideally, even better than that. Have those kinds of foundations and be thoughtful about them. If a shock would be seriously detrimental, it's probably not that smart to make that bet.

The second thing is: think about the future. Anticipate what you can, because that reduces the extent of shocks, emergencies, and crises. Certain things are completely unpredictable, but there are certain things you can prepare for and be better prepared for. So be anticipatory. Explore the future in a way that's inquisitive.

And then the third thing is that ultimately, the future doesn't exist. The past doesn't exist. Only the present exists. You are what you do every day. You wake up, and you make decisions. And therefore you need that agility to reconcile the different time frames and to make those decisions constantly with the only thing that exists, which is the present. But by virtue of having thought about the future, maybe those decisions are better informed.

KIMBERLY NEVALA: So some might listen to this on the surface and say it seems like the AAA framework is a cheerleader for, or supports, the idea of just moving fast and breaking things. And there's certainly a lot of rightful - I think justified - angst about that thinking: that it's OK just to experiment, break, learn quickly, and move on, given that a lot of what's being deployed hasn't had that sort of foresight applied to it.

And so, as you said, there are these next-order implications and exponential impacts and trends that occur. So although something can be agile or antifragile or anticipatory and able to really test, experiment, respond quickly, live in the moment, I don't think you're saying that is the same - or I don't think you believe that's the same - as moving fast and just breaking things. Am I incorrect in that?

ROGER SPITZ: No. Thanks for raising that, because it's a very important aspect, especially with the topics we're talking about, which ultimately concern systems - algorithmic systems and other kinds of technology like that.

And you're right. The reason why I don't believe it's just move fast and break things is because of the two other As beyond agility. Take anticipatory - I didn't want to spend too long on that, but when you're thinking about the future, one of the many things you're doing is looking at the next-order implications. If you're thinking of the first-, second-, and third-order implications, if you're looking at unexpected consequences, longer time frames, and a broader set of stakeholders and causality than the more straightforward linear way, then by virtue of being anticipatory, one would hope that you would be fleshing out, insofar as possible, some of the unexpected consequences.

You would be anticipatory in terms of governance, in terms of realizing that there might be unexpected consequences, that there might be second- or third-order consequences, that there might be things that materialize in 5 or 10 years even if they seem OK initially. And so the anticipatory element is really meant - I wouldn't even say as a safeguard - as an essential way of not just saying, listen, only the present exists, emerge, do your thing, kick tires, some will work, don't worry about the rest.

It's saying you have to do that because only the present exists - you have to constantly emerge and make decisions. However, you also have to be anticipatory. You need anticipatory governance. You need to think about second- and third-order implications, about possible unexpected consequences, and about what other cause-and-effect outcomes might arise, which could be very broad and disastrous.

And then the other thing is that you're doing this with foundations which hopefully are antifragile. So you are taking care of reputational risk, of damage to society, et cetera, because it is fragile to ignore those. What Spotify did - not anticipating, not thinking about the consequences - was fragile. If it was intentional, that's fine. But if it wasn't, as it seemed, given that they scrambled and took a bunch of Joe Rogan podcast episodes down at the last minute when the issue arose, that was neither anticipatory nor antifragile. They didn't anticipate that the issue might arise. They didn't think about how comfortable they were with it, what they would do if it happened, or how to avoid the situation.

And then the fragility element is: what does it mean in terms of reputation if something like that explodes and we get caught out? It is not antifragile to be exposed to undue reputational risk. So that's why we put a lot of thought into it. It doesn't mean it's the right answer. But by having antifragile foundations, by being anticipatory, and by having agility, we feel that it guards against some of the more obvious limitations of traditional planning.

Just a final point, and I'm sorry if I'm being a bit long, is that--

KIMBERLY NEVALA: No.

ROGER SPITZ: There are two key ingredients to making the AAA model work. One is alignment, because you ultimately need stakeholders to have buy-in and to move in a way that's aligned.

It doesn't mean that along the way, there aren't disagreements and discussions and healthy discussions. That's actually a very healthy and good thing. Innovation comes from dissent and from diverging views and diversity of opinions. But at some point, one needs to gather some degree of alignment to be able to be effective.

And the second thing that's important is agency. You can have a great framework, but unless you're actually doing something about it every day, with agency, no framework is of any use whatsoever. So I just wanted to mention that.

KIMBERLY NEVALA: No, I think that's excellent. And so to round things out: you talked at the top of the discussion about the requirement to really develop and evolve humans' ability to manage, make decisions, and engage in a complex, uncertain environment. And way back in the day - I also come from a consulting background - we used to talk about change management, and we were talking about managing a discrete disruption in the context of a specific event. Today, we really talk about managing change. And I think this is where the whole model you've outlined for us today comes in: responding to, in fact expecting, continuous disruption - minor to major.

So there are a lot of levers for change. Not all of those are equally important. What is the strongest lever we should be pulling today to really help humans evolve to be able to manage in this increasingly complex and uncertain world?

ROGER SPITZ: Gosh, Kimberly, thanks so much for wrapping up with that, because it's really such an important question, right? And there's positive news, and there's bad news.

KIMBERLY NEVALA: [LAUGHS]

ROGER SPITZ: The positive news is that a lot of work has been done in systems thinking on how you effect change in complex environments. People like Donella Meadows established the levers for change in complex systems - and actually, the strongest lever for change has to do with mindsets, with the way one views the world, with education.

And education doesn't just mean when you're young at school, although that helps. It's constantly learning, relearning, and unlearning, at any age. It's not limited to a particular age. But it is very important to realize that the way one sees the world - the assumptions we make, our worldviews, our education - is the strongest lever for change in a complex, systemic world, because everything is interrelated, among other reasons.

The bad news, for now, is that this should mean the educational system, leadership development, and executive education are cabled in a way which allows people to understand the fundamental concepts of decision making and freedom, the importance of agency, the importance of not making assumptions, the importance of allowing serendipity, chance, and contingency, of testing and trying things and failing and making mistakes. However, it's not the case.

A lot of educational systems, vanilla executive education, and even leadership teams are cabled to rely on data, to rely heavily on assumptions, and to vilify mistakes, et cetera. So you have an entire framework of thinking - a mindset, a perception - which values facts, knowledge, and data, things which are not necessarily going to help you develop the critical mind that's required when things are not determinable.

KIMBERLY NEVALA: Yeah. It's, again, a really interesting time and perspective, because for a long time we thought about confidence or good leadership as being tied to determination - by which I mean having a discrete answer, the right answer, being confident and decisive, and not getting distracted by things like uncertainty. So it's a fairly fundamental shift in mindset, even for how we evaluate and develop leaders at all levels of society and in organizations as well.

ROGER SPITZ: Yeah, and school. I mean, if you think about the schooling system, I'm not against - on the contrary - languages and history and facts and all that. But one has to ask oneself in an age of AI, in an age where there aren't necessarily right answers, where the biggest challenges might come from thinking differently, thinking something new, inventing, making mistakes, learning from mistakes, feedback loops, how much of the educational system is harnessing that?

And sadly, for me, the US - it's not just the US, but the US in particular - it's a little bit like health care, right? No one actually cares about the educational system. The only thing they want is to make sure it stays as bad as possible, because that's how they make money: through certifications, through additional schooling, through giving loans for random degrees that are worth nothing and making money on the financing.

It's the same with the health care system in the US. The last thing the health care ecosystem would want is for people to be healthy, to avoid getting sick, or to die too early. You need people to be sufficiently sick but not die. You don't want anything preventative because otherwise you don't get hospital fees, you don't get insurance fees, you can't sell medicine.

It's almost like a SaaS model, health care in the US. You make sure people aren't prevented from falling sick, so they get sick. They don't die, so you keep them alive with a bunch of drugs and all that. So you have a sort of recurring revenue model.

But it's really not in the interest of anyone in the health care system to make health care preventative or to make people healthier. Sadly, education involves a little bit of the same consideration. There's too much money, too many incentives today - and incentives determine outcomes, right, as Bindra says.

So there's too much at stake for anybody to want to change it. And sadly, those incentives determine the outcomes. And the incentives are not towards allowing people to harness their thinking and to be better able and better prepared for a complex world. The incentives go towards making a lot of money through precisely having a bunch of things that sell well. But it really doesn't matter how effective they are otherwise.

KIMBERLY NEVALA: [LAUGHS] Well, thank you so much, Roger. And to clarify, I should say that SaaS in this context meant Sickness as a Service.

ROGER SPITZ: [LAUGHS] Very good.

KIMBERLY NEVALA: But anyway.

ROGER SPITZ: Love it.

KIMBERLY NEVALA: This was a thought-provoking - and, there at the end, somewhat controversial - insight into the need for foresight and the work that's going to be required to truly create a global collective of anticipatory, agile, and antifragile humans. So thank you so much, and we'll have to have you back to debate some of those other topics from the end there. Thanks again for your time.

ROGER SPITZ: My pleasure, Kimberly. It was great to be on, and thanks for taking the time.

KIMBERLY NEVALA: Absolutely. Now, next up, we'll be joined by Fernando Lucini. Fernando is the global data science and machine learning engineering lead at Accenture. And he's going to join us to discuss the role of synthetic data in AI's future development. Subscribe now so you don't miss it.
