Brief summary
Despite the age of ever-increasing uncertainty we live in, advancing technology has amplified and improved our ability to make better business decisions. Alongside our takeover hosts, Barton Friedland and Jarno Kartela, special guest Keith Grint, Professor Emeritus at Warwick University, explores how leaders can address ‘Wicked Problems’ that can seem impossible to solve. If you are a business leader wanting to improve the way you think about decision-making, this is the podcast for you.
Episode Highlights
The idea of wicked and tame problems was introduced in our era by Rittel and Webber in 1973, where they argued that wicked and tame problems need to be solved differently. There is a third type of problem, the critical problem or crisis, which is neither wicked nor tame.
The digital economy has made some people assume that the amount of data available, and the computing power to process it, means that we can sort out decision-making much more easily and quickly. That confuses two separate things: one is the amount of data, and the other is how you make a decision about what you're going to do with the data.
You tend to get appointed and promoted on the basis of your technical skill at addressing tame problems, but as you move up the organizational hierarchy, what you tend to face are problems and issues that you've never faced before: wicked problems that we either don't yet know how to fix or may never know how to fix.
The third category of problems is critical problems. These are cases where you need to coerce people. A lot of leaders end up becoming quite authoritarian and taking over the command role on a permanent basis.
People become quite addicted to particular kinds of decision styles. Some people like acting as commanders. Some people assume everything we face is a tame problem, so they take an engineering, expertise-led approach. Then some people just don't like making decisions individually; they like to collaborate on every possible point, taking a much more egalitarian line.
A lot of people believe that given data plus AI, you can solve virtually any problem, because it will eventually just solve itself if you throw more data at it. But past data never tells us what's going to happen next.
The future of data and AI-backed decision-making will be a form of interaction, simulation, and active learning, as opposed to, "Let's bring datasets to future-engineer the world."
When people come together to make strategic choices, scenarios, and so on, they will be wearing different hats. It is a long exercise to get from all of those different viewpoints to something that's concise and solid. The idea of a ‘campfire’ is: can we somehow model all of the assumptions, ideas, and thoughts we have about the world and our own business in it, so that we could simulate how it actually works?
We have to find a way to codify what we think and believe and see and combine that with the data points that we know so that we can somehow see the big picture. Once we do, then it will again be humans who are making the decision.
You can find the link to Keith's paper, referenced in this episode, on this page.
Podcast Transcript
[00:00:01] Barton Friedland: Welcome to Pragmatism in Practice, a podcast from Thoughtworks, where we share stories of practical approaches to becoming a modern digital business. I'm Barton Friedland, a principal advisory consultant in Berlin, and I'm alongside--
[00:00:18] Jarno Kartela: Jarno Kartela, Global Head of AI Advisory from Finland, and we are hosts for a special podcast takeover entitled Decisions, Decisions, Decisions. In this series, we're thrilled to bring on special guests to discuss an area of increasing burden for senior management: decisions. Let's get started.
[00:00:42] Barton: Jarno, I am just so thrilled to be here with you today. Let's tell our listeners a little bit about why we're here and how we ended up doing this in the first place. Do you have a view?
[00:00:58] Jarno: I have a view. I don't know if I have the view, but I guess, at least from my perspective, I'm quite interested in how we could make not only faster decisions but better decisions with all of this abundance of technology and the digital economy and all of this data enablement.
[00:01:24] Barton: Yes, and I think that's also something that I've been preoccupied with for quite a long time. From my understanding, it's something that not only you and I have been concerned about, but that a lot of the pioneers of computer science have been concerned with as well. We are here with Keith Grint today as our first special guest, who has been a Professor Emeritus at Warwick University since 2018. He spent 10 years working in various positions across a number of industry sectors before switching to an academic career. He also knows about non-academic life, which is very refreshing.
I think what I found particularly interesting, as a quantitative piece of information, is that Keith has published more than 220 books and articles since 1994 and has been cited more than 12,000 times. His work seems to resonate with people to some degree, whether they agree or disagree with it. His area of scholarship is quite broad: it spans the boundaries of leadership, the use of technology, and organizational forms, all of which impact day-to-day decision-making. Hello, Keith.
[00:03:26] Keith Grint: Hi, thank you for inviting me. Yes, lots of citations. Not all of them legal, so we're okay. [chuckles]
[laughter]
[00:03:39] Barton: His most recent article, published in the Human Relations journal in January 2022, is entitled Wicked Problems in the Age of Uncertainty. What's interesting is that it's an open-access paper, so you don't have to go behind a paywall to get it. Anyone can read it, and we're providing the link with this podcast. The paper is really interesting because it contests the idea that the current timeframe we're living in is more uncertain than the times that preceded it.
For those of you who may not know, the idea of wicked and tame problems was introduced in our era by Rittel and Webber in 1973, where they argued that wicked and tame problems need to be solved differently. Wicked problems have since become popularized into common use. In his essay, Keith suggests that between the binary of wicked and tame problems, there is a third type of problem, the critical problem or crisis, which is neither wicked nor tame. He loosely associates this problem type with different decision styles as well.
We're going to use this article as a starting point to discuss decision-making in our uncertain modern world. We'll explore the ways that technology can perhaps amplify or improve our ability to make better decisions. Keith, I'll read another paragraph. I guess I like to hear the sound of my voice. This paragraph I think is really interesting in the paper that you wrote.
You said, "It was the age of certainty for those old enough to remember what life was like before COVID-19, or Brexit in 2016, or the global financial crisis of 2008, or the Iraq War of 2003, or 9/11 in 2001, or the Gulf War in 1991, or the end of apartheid in 1990, or the fall of the Berlin Wall in 1989, or the Vietnam War between 1959 and 1975, or AIDS, or the Space Race, or the women's lib movement or the civil rights movement in the United States, or the Chinese Revolution in 1949, or the Second World War, or the Holocaust, or the Spanish Civil War, or the rise of the Nazis to power in 1933, or the Great Depression of 1929, or the Spanish flu from 1918 to 1921, or the Russian Revolution in 1917, or the First World War, or the beginnings of the automobile, flight, and relativity. That's only in the last 100 odd years, so don't get me started on the 19th century or earlier."
We won't provoke you, Keith. It's interesting. As I read this, I thought, "Wow, we really do mask from our awareness how tumultuous our experience is."
[00:07:00] Keith: Yes, we do. I think that's a generational issue. If you look historically, every generation thinks it's both unique and will uniquely change the world, nothing like it has come before and nothing will come afterwards. Every generation does the same thing. For example, in the 1930s, there were lots of claims that the youth of Britain was so poor and so maladapted that were there to be another world war, then Britain would not be able to find itself an army. Yet it did. It's the same now that we think we are uniquely situated and it's never been quite like this. It's always been quite like this in many ways.
Even though most of the things that we face in the world are not wicked problems in the way that's just been described, it is the case that we live in a permanent state of uncertainty. You never quite know what's coming around the corner. We like to believe that we are unique in this, but we're certainly not. There is something in there about trying to understand both that phenomenon in and of itself and also what it means for decision-making now.
Because if we've been around this block before many times, maybe we can learn something from how other generations decided in the face of massive uncertainty. What they should do and what they shouldn't. That notion of fear, anxiety, and ignorance is something which I think is permanently embedded in the human condition. It's not something which is unique to today.
[00:08:50] Jarno: Right. We mentioned the digital economy, and we're talking about data quite a bit. Do you think the rise of the digital economy has in any way changed our perception of how we feel and see uncertainty, and how we cope with it?
[00:09:08] Keith: I think what the digital economy has done is make some people assume that the amount of data which is now available, and the amount of computing power, means that we can actually sort out decision-making much more easily and much better than we used to be able to. I think that's confusing two separate things: one is the amount of data, and the other one is how you make a decision about what you're going to do with the data.
Even if we had perfect data and significantly enhanced computing power, we're still faced with choosing what we are going to do with the data and for what purpose it exists. For example, Marx in The Communist Manifesto suggested that the future would be some kind of cornucopia where there would be no necessary decision-making about what you wanted out of life, because everything would be in such abundance that everybody could take everything.
This was his utopian ideal, and unless we get to a similar state, where you actually don't need to worry about making a choice between different elements or variables or policies, then by definition you're going to have to make a decision about what to do with all the data. I don't think the data makes a critical difference to the problem of what you are going to do.
It does make a difference in the sense that I think some people confuse the amount of data with the decision-making process. I think the amount of data would enable us to make better decisions under quite a few circumstances, but not all circumstances. That's the problem: we just assume that we can now almost turn it into an autonomous decision-making machine. That, I think, is a critical issue for leadership studies generally, actually.
It's about recognizing that we're on the cusp of a point where the difference between humans and non-humans can be really very obscure. At that point, there's an issue about what is making the choice here? Is it the human making the choice, or is it some form of enhanced technology which is making the choice? Which is a different problem from the one we previously had. I think it still remains as a problem about what we're going to do.
[00:11:44] Barton: Yes. If I take what you've just said about the relationship between human and non-human, the non-human perhaps, in this case, being the computational devices through which we might act in a digital economy, I think it forces upon us a form of reflection or feedback that perhaps we didn't have before. It sounds like what we're saying is that we have no choice but to decide, because the decisions that we make are the precursors to our action. Then there are the constraints or contextual features of the environment that we happen to live in that essentially drive us to make decisions between those different contexts or reconcile those contexts, which then leads us to this tripod-type view of wicked, critical, and tame problems. Would you like to talk about that a little bit?
[00:15:18] Keith: Sure. Just one last thing on the decision-making mechanism: the assumption that there's no point in making a choice is actually a choice. Deciding not to decide is a decision as well. We might pretend that we're not involved in this and take a passive approach, but actually whatever you do is a decision, whether you do or don't do something. Going back to the triple typology: we have tame problems that we know how to fix, how to keep the lights on most of the time. I don't need to know how electricity works to make the light work, but if it fails, I need to have an expert in to sort my electricity out. That would be a tame problem.
We have standard operating procedures for fixing those kinds of problems, and we are both awash with those kinds of issues and dependent on them. One of the intriguing aspects is, for example, how few management programs there are these days. There are lots of leadership programs, as though we don't actually need management, but we need management to keep the lights on, and everything else that works alongside that.
Those are the tame problems. Three-quarters of the things that we do are probably generated by tame problems that we know how to fix. You tend to get appointed and promoted on the basis of your technical skill for addressing tame problems, and that might in itself be an issue, because as you move up the organizational hierarchy, what you tend to face are problems and issues that you've never faced before. These would be wicked problems that we either don't yet know how to fix or may never know how to fix. We might be able to just ameliorate them.
Crime is a permanently wicked problem. There aren't any countries or organizations that I know of that are long-lived and large-scale that don't have crime. If your target is zero crime, which would be a tame approach to the problem of crime, then you're probably wasting your time. You can make crime better or worse, but you don't seem to be able to fix it. Crime is a wicked problem, and we usually have to work collaboratively and collectively to try to think about how we might manage crime better, how we might reduce it.
A part of that issue is trying to work out, what is the wicked problem? Let's just take crime again. What is the wicked problem of crime? It's the problem that people carry knives or guns. Is that the problem? Or is it the context, which makes that an acceptable part of human culture? Is that the problem? Depending on where you think the problem lies, a wicked problem can have several different interpretations and several different responses to it.
[00:18:02] Barton: Is it the educational level? Is it the amount of resources that people have, et cetera?
This is where we move into a space where I think machine learning and artificial intelligence approaches can have a bearing and help us, because these dimensions can quickly explode.
[00:18:28] Keith: Using AI or enhanced learning or whatever to think about crime is a really important aspect. The deeper we dive into the data detail, the more I think we then realize that this is a wicked problem. The data doesn't help us come to a standard answer. It helps us recognize just how complex these kinds of issues are.
[00:18:54] Barton: It's that recognition of complexity among the stakeholders that can often help enable the group to make a decision that is more effective, then?
[00:19:06] Keith: Yes, and I think that's one of the problems that a lot of leaders, especially political leaders, don't take on board. They don't actually make people understand just how complex the issues are. They try to pretend that there's a simple answer, and if it doesn't work, then they find a scapegoat to blame. Actually, I think a lot of the responsibility of leaders of all kinds is to tell people things they don't want to hear, to tell people unvarnished truths that may not be popular but that they need to hear.
Then the third category of problems is critical problems. This wasn't in the original work by Rittel and Webber, but it was an attempt by me to think about: are there cases where a coercive decision mechanism is perhaps more important than simply the collaborative approach used for wicked problems or the technical approach used for tame problems? I think there are cases where you need to coerce people. There are not very many, and I think a lot of leaders end up becoming quite authoritarian and taking over the command role, as I would call it, on a permanent basis.
There are occasions where we all need to coerce people. You need to stop your children from running in front of a car, and that requires you not to have an interesting conversation afterwards, but to grab them or shout at them before they get to that point. When a fire breaks out, you need to coerce people out of the theater. That kind of approach. There is a requirement for a third category, I think, of critical problems with coercion or command.
I think what tends to happen is that people become quite addicted to particular kinds of decision styles. Some people like acting as commanders, they like shouting at people all the time. They are very authoritarian. Some people assume everything we face is a tame problem so they have an engineering approach, which would basically say, "Give me all the data and we'll sort out the answer." It's that expertise-led approach, which is where I think a lot of the IT stuff comes from.
Then some people just don't like making decisions individually. They like to collaborate on every possible point, taking a much more egalitarian line. I think there are occasions where you do need to be collaborative and there are occasions when you don't. I think the real difficulty that many people have is being able to move up and down these kinds of decision-making ladders.
Sometimes you need to be decisive and shout at people. Sometimes you need to ask for their help, and sometimes you need to tell them to get on with it because it's their job. I think that is much more difficult to do unless you have a common language framework that you can use. If we all know what a wicked problem is or what a tame problem is, we can talk about that, but if you don't have that collaborative language, then when one day you shout at people and the next day you are asking for their help, it appears to be just inconsistency as opposed to, "No,-
[00:22:21] Barton: Absolutely [crosstalk].
[00:22:22] Keith: -I need your help now because this is a wicked problem," and " I don't need your help now because we're on fire." That language base is really important. It's not just a way of describing something, it's actually embedded in the nature of the decision-making.
[00:01:42] Jarno: Yes. I completely agree that the tame problem and the engineering approach are often relatively similar, because even if I look at my own field, at how we got here in terms of data and AI, I think it's still true, especially in a larger context, that a lot of people seem to have the intuition that given data plus AI, you can solve virtually any problem, because it will eventually just solve itself if you throw more data at it.
We probably never thought enough about the fact that past data never tells us what's going to happen next. By design, that's an impossible thought. If past data has all the answers, why are we doing all of this? It should have the answers on how to run the world. It should have the answers on what products customers want. It should have the answers on how to produce electricity at minimal cost, and so on, but that doesn't seem to happen. I think there's a flaw in thinking that data plus AI makes a lot of problems become tame, which is exactly what you said.
[00:24:04] Barton: This is something that's come up for us in our background conversations. We've talked to some extent about the notion of the abdication of responsibility on the part of decision-makers and the assumption that computers can do it for us. I think something that you've talked a lot about, Jarno, is this view that the way we see, for example, AI deployed in many organizations reflects this assumption that we can build something to do it for us, but there are, in fact, other approaches we could take to partnering with technology that could enable a different approach to making decisions, one that would lead, in fact, to what Keith was talking about: the building of a shared language and understanding. Would you like to talk a little bit about that?
[00:24:50] Jarno: Yes, I think there's a few things behind that. One is, at some point in time, a lot of people seem to think that it's somehow desirable for us to have machines thinking like humans, that we want similar thinking, that we want computers to have a vision, we want them to have language, we want them to speak and think and do all of this stuff. I'm unsure if that's where we want to apply focus, though.
We also need to take on the other problem, which is that past data doesn't really tell us what's going to happen next; past data doesn't hold optimal answers to problems, because if it did, we wouldn't be asking the question. Combining especially these two, we're left with having to figure out an interactive way of working between human intellect and machine intellect, let's call it, or just computing capability, because it is very different from the one that we possess.
Machines don't need to simplify things. They're only as simple as you design them to be. They can perform in many ways with much more capacity than we can, and we can outperform them in many other aspects. I'm guessing that somehow the future of data and AI-backed decision-making will be a form of interaction and simulation and active learning, as opposed to, "Let's bring datasets to future-engineer the world."
[00:26:47] Barton: Keith, in some of your earlier scholarship, you explored some of these issues, from the history of machines and the Luddites to the use of guns, with Steve Woolgar. What is your view about this? I think one of the points that Jarno is bringing out here is that these machines we make are quite malleable. Yet, as Jarno said, we have this fixation, almost as a society, on making them do certain things like we do, instead of thinking about how they could support us in doing what we do more effectively. Do you think that's something that has come up in the past with other forms of technology, based on your research?
[00:27:37] Keith: That's an interesting question. I think there's something in here about how we attribute to technology, to machines, issues that they don't actually possess, but it enables us to make different kinds of decisions. For example, one of the things that F. W. Taylor did was recognize that human coercion can be quite counterproductive. If you bring coercive supervisors into an assembly line, into a workplace, you tend to get resistance. I think he was one of the very early pioneers to recognize that the thing to do, therefore, is to embed the coercion in a piece of technology. It appears to be--
[00:28:25] Barton: Make it invisible.
[00:28:26] Keith: It appears to be neutral and there's nothing you can do about it. Still today, we talk about things like, "It's the speed of the assembly line that drives me crazy," as if that can't be changed or as if that somehow, it miraculously came out of a computer rather than someone designed it into the computer in the first place. The speed of the technology and the way that we respond to those kinds of things is quite intriguing. We tend to obey traffic lights, but that depends upon which culture you're in, whether you obey traffic lights. I've been in several cultures where the traffic lights don't seem to have any purpose at all, traffic just ignores them.
[00:29:05] Barton: This is the machine used to control, right? One of the primary uses of the machine is to control, but I think what we're seeing here is that there is a potential for these machines, I think Turing called it a universal machine, to not just control but to augment or support, which is different. Is it our fixation with control that leads us to build systems that control?
[00:29:38] Keith: I think that might be part of it, but I think you also need to contextualize the place where you're putting the machines in the first place. If you're placing machinery in to increase your levels of profit, for example, then you're not really interested in whether it's improving the lives of people. What you're interested in is: does the machinery increase your levels of profit? If it doesn't, then-- If you're bringing a similar kind of machinery into an educational environment, then maybe you're not interested in the profit so much; you're interested in learning. I think you need to contextualize the point: what's the context for the assistance of the machinery? That is designed into the technology in the first place; it's not a neutral machine in that sense. It comes with political preferences built into it.
[00:30:27] Barton: One of the things I'm kind of coming to in my thinking is that between these three heuristic problem types, critical, tame and wicked, I think that the area where Jarno and I have been thinking about the support of artificial intelligence and simulation is more in the wicked problem space where the heuristic would be that you want to have a collective engagement about the problem to develop alternative solutions and approaches.
One of the words that Jarno has developed-- I think Jarno is secretly a philosopher because he comes up with a lot of his own language. He uses the word campfire to represent this shared language. There are some engagements where we've worked with senior executives, done qualitative research, built mathematical models that represent those qualities, and then built dialable or parametric simulations where the stakeholders can actually see the results of their assumptions and the other data that we've collected in the simulation.
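To make the idea of a dialable, parametric simulation concrete, here is a minimal sketch in Python. Everything in it, the model, the parameter names, and the numbers, is a hypothetical illustration rather than anything taken from the engagements described in this episode.

```python
# A minimal sketch of a "dialable" parametric simulation: stakeholders'
# qualitative assumptions become named parameters they can adjust, and the
# toy model replays outcomes under each setting. All names and numbers are
# hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Assumptions:
    demand_growth: float   # belief about annual demand growth
    cost_inflation: float  # belief about annual unit-cost inflation
    adoption_rate: float   # belief about customer adoption of a new offering

def simulate(a: Assumptions, years: int = 5, revenue: float = 100.0, cost: float = 80.0):
    """Roll the toy model forward, returning the projected margin per year."""
    margins = []
    for _ in range(years):
        revenue *= 1 + a.demand_growth * a.adoption_rate
        cost *= 1 + a.cost_inflation
        margins.append(round(revenue - cost, 1))
    return margins

# Two stakeholders "dial" different assumptions and compare trajectories.
optimist = Assumptions(demand_growth=0.10, cost_inflation=0.02, adoption_rate=0.8)
sceptic = Assumptions(demand_growth=0.03, cost_inflation=0.05, adoption_rate=0.4)
print(simulate(optimist))
print(simulate(sceptic))
```

The point is only the shape of the interaction: each assumption becomes a named dial, and turning a dial immediately changes the simulated trajectory the group is looking at together.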
I think, for us, that the most valuable part of that is this notion of the campfire. Jarno, would you like to talk a little bit about what happens at that point in the engagement where this campfire emerges?
[00:32:05] Jarno: Yes. I guess the main point of that is that when people come together to make strategic choices, scenarios, and so on, they will be wearing different hats. There will be a person who is a commander. There will be a person who's a manager. There will be a person who's a leader, and so on. They will all look at it from a different side. They're all going to see a different beast when you say something like, "We should figure out sustainability."
What happens then is that it's a long exercise to get from all of those different viewpoints to something that's concise and solid that we can talk about, if we don't use any technology. Here, I guess, the idea of the campfire is: can we somehow model all of the assumptions and ideas and thoughts we have about the world and our own business in it, so that we could simulate how it actually works? So that we answer the question of, "Okay, our massive company wants to go carbon-neutral. Should we do all electric vehicles? Or should we actually try to change the business model in a way that will change how customers behave, which will ultimately change which types of products we will sell?" and so on. Which will have a higher impact with a lower cost or a lower resource rate or a lower something? What will maximize impact given the constraints?
More often than not, when we've done this and run the simulations, it's not the top three things they thought it was going to be that turn out to be the most meaningful things to do. That's not to say that the AI gave the answer. It's more that we have to build a campfire if the problem is wicked. We have to find a way to codify what we think and believe and see, and combine that with the data points that we know, so that we can somehow see the big picture. Once we do, then it will again be humans who are making the decision.
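As a hedged sketch of the last step Jarno describes, choosing which interventions maximize impact given constraints, here is a toy portfolio search. The intervention names, impact and cost figures, and budget are invented placeholders, not results from any real engagement.

```python
# Given candidate interventions with a modelled impact and cost, find the
# subset that maximizes impact within a budget. All figures are invented
# placeholders for illustration.
from itertools import combinations

interventions = {                  # name: (modelled impact, modelled cost)
    "electric_fleet": (40, 90),
    "new_business_model": (55, 60),
    "supplier_switch": (25, 30),
    "product_redesign": (35, 45),
}

def best_portfolio(options, budget):
    best, best_impact = (), 0
    for r in range(1, len(options) + 1):
        for combo in combinations(options, r):   # every subset of options
            impact = sum(options[name][0] for name in combo)
            cost = sum(options[name][1] for name in combo)
            if cost <= budget and impact > best_impact:
                best, best_impact = combo, impact
    return best, best_impact

print(best_portfolio(interventions, budget=100))
# ('new_business_model', 'supplier_switch') with impact 80, in this toy data
```

In a real campfire, the impact and cost numbers would themselves come out of the simulated model of the business rather than being typed in by hand.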
[00:34:46] Barton: Is the campfire then the process of the stakeholders having the iterative conversations and developing that shared language and coming to consensus?
[00:35:00] Jarno: The campfire makes everyone understand it's a wicked problem. It also shows how wicked it actually is, and then we can simulate: what should we do? If it doesn't seem right, if there's something missing and so on, we can just interact with it. This is what I'm trying to say: we've been stuck in this idea that you put data in a system, create a prediction, and then we just agree or disagree with it. I think that's the problem. We should think about using computers as iterative mechanisms that somehow amplify our thinking. Then we would not be stuck in the idea that, "This is faux precision. I don't trust it," and so on. You will only trust things that you actually interact with.
[00:35:56] Keith: There's been some really interesting work very recently on what people are prepared to do in terms of trying to save the globe, trying to save the environment. What happens is the research is based on, "So these are the kind of 10 things that you could do. Which do you think is the most important?" That's the first question. Secondly, what is it that you do? Those two things usually coincide. Most people think the most important thing is to recycle your rubbish and turn the heating down slightly and not drive your car too often. Really relatively simple things to do.
What most people think is not important is worrying about having too much of a meat-based approach to food. Actually, those kinds of things are really important, but most people don't do them. There's a really interesting correlation between what I think is important and what I already do, and therefore I'm in some kind of virtuous circle here: I'm doing my bit. I'm separating out the plastic every day in my bin. If you ask me, "Am I green and environmental?" the answer is, "Absolutely, I am. Other people are not."
If you were to ask me, "Keith, the only way you could really do something important here is to stop eating meat," then I would have a problem with this, and that's the point. That's where I think the data would be really useful: to persuade people. It's not just what you do and how you rationalize what you are doing; it's actually looking at the impact of us all doing X or Y. We still have to make a decision at this point; we still have to put our preferences in order. It is worth reflecting on the way that most people's ignorance of their lives is rooted in their value approximations at the same time.
[00:38:00] Barton: I'm so glad that you brought this up, because it gives me an opportunity to mention something that's come up in our conversations before, which is the notion of narrative and language games. Many people may not know what those words mean, but the way I see this is: what is it that the data acts on? Well, the data acts on our personal narratives, because we all have them, and we all use language in particular ways to forward our narratives, which forms our identities. Could you, for our listeners, maybe summarize this concept of how narrative, language games, and identity are intertwined? Because that's the heart of what we're trying to impact here, isn't it?
[00:38:54] Keith: No. I think it is. I think it's almost impossible to understand how lots of decisions are made in the absence of an understanding of the narrative that supports that. If you take the invasion of Ukraine at the moment, it doesn't seem to make any sense, unless you recognize that Putin's narrative is about a reclaiming of the so-called Glory Days of the Soviet Union, and also to remind the west of the great part that the Soviet Union, the Red Army played in the second world war. It's the reclaiming of that narrative, which drives him to make the decisions that many people in the west and elsewhere think, "Why on Earth are you doing this? It doesn't make any kind of sense." Well, it doesn't unless you have an understanding of the narrative that drives it in the first place.
I think it's the same with-- Brexit would be another good example. Unless you understand the narrative of the pro-Brexit politicians, that doesn't make any sense either, but once you understand that, if it's about reclaiming some Grand British Imperial past, then it makes some kind of sense. It might be wrong, but at least it makes some kind of sense. I think the narrative thing is a really important aspect, and that also runs through the notion of AI, because people have very different interpretations of what AI is about and what the narrative is and where we're going with this, and therefore I'm either pro or anti based upon my readings of Orwell, for example.
[00:40:25] Barton: Yes, absolutely. I think this is really key, because at least for us, this collection of the various points of view that Jarno was talking about gives us a way to say, "What if you put all these things together and make a soup out of it, a soup that is coherent and understandable, so that people can see how these different assumptions interact?" Not only are you representing people's narratives and identities in the data that you are collecting, but you're actually showing how these things play out together, how they interact with each other.
Which can hopefully give people a sense of insight and understanding into the wickedness or complexity of the issue, but also insight into the points where decisions can be made, where there's a clear benefit. I think in the case study that you were talking about, Jarno, they recognized that the cost of the electric vehicles was actually prohibitive, and that it was very cost-effective to go to their customers and show them the simulation themselves.
[00:41:39] Jarno: Yes, but the subsequent problem that this approach will generate is, now that you do amplified decisions, you should write that down. The problem with data in general is it doesn't have the action that triggered the change. If I just look at my Oura Ring, I can look at my sleep, I can look at my readiness, I can look at all of that stuff. It doesn't know that I had a beer. It will just say like, "Well, things went bad last night. It doesn't look good." It doesn't have the action that I took.
Even with everyone wearing this, we couldn't make any meaningful policy on what to do, because the data is lacking the action and then the outcome. This is why I think, again, the future is in interaction: we are presented with actions, we pick which ones to do, and we somehow create a completely new, decision-optimized data asset about the world, because currently none of us have one. We don't have actions anywhere in the data. Data is just sensing, for the most part.
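To illustrate what a decision-optimized data asset might minimally look like, here is a sketch of a record that stores the action taken and the outcome observed alongside the context, using the Oura Ring example from above. The field names and values are hypothetical.

```python
# A sketch of a "decision-optimized" record: not just the observation, but
# also the action that was taken and the outcome that followed. Field names
# and values are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DecisionRecord:
    context: dict   # what the world looked like before the decision
    action: str     # what we actually did
    outcome: dict   # what we observed afterwards
    timestamp: datetime = field(default_factory=datetime.now)

log: list[DecisionRecord] = []
log.append(DecisionRecord(
    context={"readiness": 82, "sleep_score": 75},
    action="had_a_beer",
    outcome={"readiness": 64, "sleep_score": 58},
))
# With actions recorded next to outcomes, we can ask "what happened when we
# did X?" rather than only "what happened?".
```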
[00:43:14] Barton: Yes, it comes down to a discipline of test, measure, learn. You use the computational capability as scaffolding so that you as an individual and your team can make a decision, measure it, learn from it, and then make the next one. The knowledge then begins to reside in the humans, not in the data that we're collecting. We're just using--
You have that wonderful case study on our website, from Mackmyra, where there's AI-supported generation of recipes for whiskey. To me, this is such a wonderful metaphor for the limits of human problem-solving. The problem that the whiskey maker faces is the problem that we all face: I've got a hundred different barrels of whiskey at different stages, in different casks, with different types of wood; which ones should I mix? As soon as you get over five or seven, that's not a problem that human minds can solve. It's not what our evolution has primed us for.
However, computers can solve those combinatorial problems, and they can be given the qualitative information about what makes a good recipe, and they can be told, "Don't repeat a recipe." Then they can produce a set of recipes that an expert in making whiskey can actually understand and say, "Oh, that's a really interesting idea." I think it's the same thing with any of your domain stakeholders, whether that is a finance expert or a delivery expert.
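A quick worked example suggests why the blending problem outruns human working memory; the barrel counts here are hypothetical, but the combinatorial growth is the point.

```python
# Choosing a handful of casks from a warehouse is a combinatorial explosion.
# Barrel counts are hypothetical, for illustration only.
from math import comb

print(comb(100, 5))  # 75,287,520 possible 5-cask blends from 100 barrels
print(comb(100, 7))  # 16,007,560,800 possible 7-cask blends
```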
[00:44:58] Jarno: Yes, exactly. Again, if you're faced with the problem of infinite choices, which more often than not in business is what you're faced with: you want to experiment, but you don't know what to try, and you're not certain about the outcome. Product development is a prime example. You have an infinite amount of choices. The question is: how do you get to the best one? How do you get to the best end product with a minimal amount of iterations?
The answer is not only with humans and not only with computers. The only way to do it with the highest possible quality is to train a computer to work alongside experts, so that the experts can give feedback: this didn't work, that didn't work, that actually worked, and so on. Then the computer, with the capacity of that computing power, is super efficient in understanding, "Okay, this means that we should try that next." That would be super hard for humans to do. "What is the next best iteration that we should do?" is a hard question.
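As a sketch of the expert-in-the-loop iteration Jarno describes, here is a toy loop in which the machine proposes the next candidate, an 'expert' scores it, and the feedback steers the next proposal. The stand-in scoring function replaces real human judgment, and every name and number here is hypothetical.

```python
# A toy expert-in-the-loop search: propose, get feedback, propose again.
# The expert is simulated by a stand-in function; a real engagement would
# use human judgment (or a learned model of it).
import random

def propose(history):
    """Propose the next candidate near the best-scored one seen so far."""
    if not history:
        return random.random()              # nothing learned yet: explore
    best_x, _ = max(history, key=lambda h: h[1])
    step = random.uniform(-0.1, 0.1)        # small variation on the best
    return min(1.0, max(0.0, best_x + step))

def expert_feedback(x):
    """Stand-in for the expert: prefers candidates near an unknown sweet spot."""
    return -(x - 0.62) ** 2

history = []
for _ in range(30):                          # test, measure, learn
    candidate = propose(history)
    history.append((candidate, expert_feedback(candidate)))

best = max(history, key=lambda h: h[1])
print(f"best candidate so far: {best[0]:.2f}")
```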
[00:46:20] Keith: Several things are involved in that. One is the kind of issue that, if you go back to the whiskey problem, whiskey depends upon who's consuming it and what kind of whiskey they want. All the data in the world wouldn't tell you which choice to make, because this is a value decision, so there won't be a consensus about it. There'll just be: if you want a particularly sweet version of X, then this is the way you should be going. That's where the computer will enable you to provide the sweeter compound, but it won't tell me which choice to make.
The other way of thinking about the enhancement of working together: there's quite a lot of work from previous years on false data about an impending nuclear crisis, about data telling either Americans or Soviets that they were about to be destroyed by an incoming rocket, where lots of human operators then said that the chances of this being true were minimal, so they were not going to press the button.
[00:47:26] Keith: There are lots of occasions where that has happened. You can't really rely upon computing power, or indeed human power alone, to make those decisions. That's why I think it benefits us to use both.
[00:47:41] Barton: If we were to close by asking: Keith, how do you think our listeners can go about making better decisions with the resources that they have?
[00:47:51] Keith: Well, for me it's really kind of pragmatic. If what you're doing is working, then keep doing it, but if what you're doing is not working, then it might be that what you're looking at is not a tame problem but a wicked problem, for which you need some collaboration, human and probably non-human in this context.
For that to happen, you need to be able to admit that you don't know the answer all the time, and that's something which is really difficult for lots of people in authoritative positions. You get appointed to be the chief exec, and then you recognize that actually there are quite a few occasions where "I don't quite know what I'm doing, but I can't admit that to anybody because they've just appointed me, so I'll pretend, and I will just make a decision based upon my gut feel rather than what I should be doing, which is asking for help."
The very point, I think, is to recognize that you can't possibly know all the answers, and therefore you need to be able to admit when you don't know the answers. For that to happen, you need some kind of common language and some kind of narrative frame which tells you, "This is how we see the world."
[00:48:55] Barton: You need to make it explicit to people. I think there was something we've talked about in the past, a domain within leadership scholarship, where there's a story that you told me about the gentleman who was the president of South Africa falling on his sword at the end of apartheid, as a form of leadership that we might want to reconsider in terms of being desirable.
[00:49:22] Keith: Yes. This is an argument that our assumptions about leadership lock leaders into some heroic mode all the time. When we think about the end of apartheid, we think about Mandela as the hero of the end of apartheid, leader of the ANC, but there are arguments that that only actually happened because of people like FW de Klerk as well, the leader of the White South Africans at that point in time, who made it explicit to his followers that they could not continue with apartheid, because if they did, there'd be a civil war and they would lose it.
FW de Klerk's responsibility was to disappoint his followers at a rate they could manage, and that was basically the end of his career. That, I think, is the point: you need to be able to understand that leadership's not always about being heroic. It's sometimes about being very anti-heroic as far as your followers are concerned, telling them disappointing truths. This notion of telling truth to power is a really important point, and I think, in some ways, that can also be embodied in an AI system. That's the point of the AI system: to tell humans disappointing news.
[00:50:43] Barton: Maybe we're wrapping up with two thoughts for our listeners to think about. One is that they need to be able to identify the kind of problems they're actually dealing with and the second is to perhaps assess how broad or adaptable or flexible their leadership modes actually are.
[00:51:03] Keith: Yes, now, I'd agree with that.
[00:51:14] Barton: It's been so nice to speak with you. Jarno, Keith, it's been a pleasure.
[music]
[00:51:34] [END OF AUDIO]