Brief summary
As AI continues to shape our world, more and more business leaders face the challenge of integrating this technology while prioritizing human needs. Award-winning writer and keynote speaker David Sax joins us to shed light on the pressing questions business leaders must ask themselves when crafting an AI strategy that serves the needs of both the organization and its customers.
Episode highlights
Is it cheaper, perhaps, to just replace that individual with a couple of lines of code? Absolutely. Could you scale it and be more effective, in a sense, by doing that? Sure, but at what cost would it come to the humans you're serving? These are the bigger questions that need to be addressed.
In reality, we don't know what the future holds. We don't know what tomorrow holds.
At the end of the day, the AI is not there to serve the needs of the AI; the technology is not there to serve the needs of the technology. It could very easily do that, and many things are built around that. It needs to serve the needs of humans. What are the needs of humans? They are not fundamentally different from what they were a thousand years ago.
I think the most important thing that I learned, and I hope many of us learned, over the past couple of years is the tremendous need for and benefit of critical thinking around the implementation of digital technology, especially as a replacement for aspects of our analog lives.
What are people's tastes going to be like in 10, 15 years? What are the social dynamics? It's hard, if not impossible, to predict that.
How do you map out a portfolio of decisions where you can really leverage AI augmentation to make those decisions better, where you can enhance human capabilities rather than figuring out the replacement approach? I think that's really an effective way of approaching the potential of AI for organizations.
Regardless of what technology is coming around, we still have the same needs. I think this is something that, to bring it back, we can't lose sight of. We could have the most advanced technology and all the flying cars, but the problems that we're going to have to deal with, the things that are going to give us the greatest joy, the greatest challenges we have, whether as a company, as a society, or as individuals, are going to be those things that are still fundamentally human, and we can't get around them. We can't invent our way out of these problems.
Podcast transcript
[00:00:01] Barton Friedland: Welcome to Pragmatism in Practice, a podcast from ThoughtWorks, where we share stories of practical approaches to becoming a modern digital business. I'm Barton Friedland, located in Berlin, along with my colleague, Jonne Heikkinen. Hello, Jonne.
[00:00:19] Jonne: Hello. Good to be here.
[00:00:21] Barton: This week we are very pleased to have on our podcast David Sax. David is a writer based in Toronto, but he's lived in New York, Buenos Aires, Rio de Janeiro, and Montreal. He's seen a few different cultures and a few different ways of looking at the world and hopefully can share a bit of that with us.
Two of his recent books that you may have read about or read yourself are entitled The Revenge of Analog and The Future Is Analog, and they really explore our relationship as human beings to the world around us. To what degree do we want those relationships to be with digital technology and objects, and to what extent do we want to continue as a species that actually relates to other human beings in physical reality? Welcome, David.
[00:01:35] David Sax: My pleasure to be here. Thank you.
[00:01:38] Barton: I think one of the things that caught our attention with your work is that you recently had an article in Monocle about the way that we think about or envision the future, and it took a rather critical stance towards the sense of disconnection that many of the people in a position to make decisions about technology seem to have. What is it that they are wielding on us innocent humans?
[00:02:14] David: Well, I think there is, I guess, a strain of thought that one might call techno-futurism, and it really is a device- or object-centric vision, almost a philosophy, of the future. When you think back to antiquity, to olden times, the visions of the future were God-centric, religion-centric; we're talking pre-Enlightenment. Our modern 20th-century equivalent of that has been digital, has been technocentric: that devices, robots, computers, artificial intelligence, if we want to talk about Data from Star Trek: The Next Generation, transporters, lightsabers, whatever it is, this was the way the future was envisioned, and everything was shaped around that.
I think that narrative has become so dominant over the past three decades because we've actually seen this tremendous transformation in the way we work and communicate through digital technology, through the internet, through smartphones, and through the constellation of combinations of those technologies that we're using right now to speak from three different countries pretty seamlessly and for free.
In reality, we don't know what the future holds. We don't know what tomorrow holds. This pandemic began three years ago for most of us, because we didn't live in China. Three years ago we were locked into our houses with kids, trying to figure out how to Zoom and tending terrible sourdough.
Six months before that, none of us were thinking that that was a possible future. The future is such a complex thing. There are so many variables, and so much is unpredictable. We're always surprised when something happens, whether it's in our own life or in geopolitics or business or whatever. The technological future, that notion of the digital future, is a very easy, simple, tangible narrative that we can hook onto, and it's very attractive, especially to business.
What are people's tastes going to be like in 10, 15 years? What are the social dynamics? It's hard, if not impossible, to predict that. Oh, well, what are cars going to look like? Oh, they're going to be electric and they're going to fly. This is what someone is telling us, and this is where we should invest. I think it's very attractive for business leaders, for company owners, entrepreneurs, managers, policymakers, because they're constantly being asked, and pressure is put on them, to think forward, to think for the future. To future-proof things.
These predictions, this technologically driven narrative, allow for a very easy, very reassuring, and confident way to speak about the future. "Oh, we know that we're going to be a digital-first organization and that AI-driven processes are going to be the most important." Oh, yes, yes, yes. Versus, "Well, we're not 100% sure what's going to happen in three years' time, so we should be cautious." Well, that's not seen as bold.
[00:07:57] Barton: You're a bit stronger in the article. You make the point that some of the product ideas being launched upon the world are misdirected and actually bear no relationship to what humans might really need.
[00:08:12] David: Right. Well, then it becomes a question of the tail wagging the dog, in a way, because the thing that's the future, that's the symbol of the future, the symbol of innovation and progress, becomes a device or a technology or something that can be built and purchased and sold. Then it becomes, "Well, how do we do this?" Let's take an example. A couple of months ago ChatGPT comes out from OpenAI, and every business leader gets called into the office of their manager and they're like, "What's our AI strategy?" It's like, "Oh yes. All our customer portals are going to be driven by AI." "Great. Make it happen."
In the same way as, "What's our wearables strategy? What's our app strategy? We need an app." How many horrible, horrible apps are out there from companies and organizations and governments that are useless and pointless, a waste of money, and do nothing to improve a process or a customer interaction, but are seen as necessary by someone in order to get with the times, so to speak?
[00:09:20] Barton: The way you express it, it comes across with the energy of a knee-jerk reaction, as if we aren't taking the time to think it through.
If we think about the people that may be listening to this podcast and thinking, "My boss actually did come to me last week and say, 'I want that AI thing done,'" what would you encourage those people to consider in making their decisions? What do they need to ask themselves so that they can do a better job of discovering a better future?
[00:17:43] Jonne: That is actually happening a lot. Leaders from middle management to senior executives are getting this pressure both bottom-up and from the board: what's the AI strategy?
[00:18:09] David: Well, I think the most important thing that I learned, and I hope many of us learned, over the past couple of years is the tremendous need for and benefit of critical thinking around the implementation of digital technology, especially as a replacement for aspects of our analog lives. The questions are the most important thing, because anyone can pull something out of a hat, combine ideas together, and find products that will appear as though they're doing something. In order for it to be effective, both from a, let's say, profitability standpoint and from this bigger question of building a desirable future, it needs to address things that we as human beings ultimately need.
At the end of the day, the AI is not there to serve the needs of the AI; the technology is not there to serve the needs of the technology. It could very easily do that, and many things are built around that. It needs to serve the needs of humans. What are the needs of humans? They are not fundamentally different from what they were a thousand years ago.
We need food, we need shelter, we need socialization, we need relationships, we need to feel valued. The question becomes: does this application of this technology serve that, or does it hinder it? Does it get in the way of it? Think of an area like customer service. I have a friend who has an AI chatbot customer service company that's been running for a couple of years, so it's nothing new. Increasingly, those in the customer service field are being offered these tools with the pitch that you don't need a call center, you don't need sales agents, the AI is going to handle it for you, the chatbots are incredibly powerful and can answer any question.
In some instances that's true, it really can: when is my garbage being picked up, all those sorts of things. But take the example of travel, when a flight gets canceled or you have to change plans; we all know what it's like on the other side of that as a human. You not only need an answer to something, you fundamentally need to know that some other human, somewhere in the world, whether they're in the office of that airline or in a call center in Bangalore or Manila, is hearing you as another human, is empathizing and feeling what you're feeling, and is taking ownership of helping you in a way that not only gets the solution you may desire but actually comforts you and makes you feel as though you're not alone in the world.
Is it cheaper, perhaps, to just replace that individual with a couple of lines of code? Absolutely. Could you scale it and be more effective, in a sense, by doing that? Sure, but at what cost would it come to the humans you're serving? These are the bigger questions that need to be addressed. It's a lot harder to convince a manager or shareholders of that, of the need to retain human staff, of the need to retain things like brick-and-mortar locations.
"Oh, well, we can just cut all these things. We don't need them anymore." We've seen over the years that stores that cut their physical locations and only sell online end up losing customers, because people still like to buy things in person, for example. Technology may present these simple solutions, but the answers that are the most effective are the ones that require deeper reflection about what we want as humans at the end of it, because fundamentally, you can create any technology you want, but we're still human at the end of the day.
[00:22:11] Barton: If I came to you as a colleague and said, "Hey, I've just been asked by my boss to put an AI strategy together," taking into account what you just said, what should I be looking at internally at my company? I think what I heard you just say is really what we've known for quite some time, at least it's been written about since the mid-80s: if we do good human-centered design, if we include customers in the design process, if we develop a product with them-- You know that story about Flickr.
Flickr was a gaming platform and they were not doing very well, so they asked their users, "What could we do that would be useful?" Somebody said, "Well, can I put my photos up there?" They did, and look what happened. I think there's a lot of precedent for listening to customers. But to come back to what you said before, a lot of the companies we encounter would really like human centricity on paper, in that same trophy way that beauty pageants happen, yet when it comes down to actually empowering their people to create those relationships with customers, those people are often prohibited from doing that.
Within organizations there's a lot of hierarchy, a lot of "we don't want these people speaking to those people without my permission." I've been asked to do this AI strategy; what advice would you give me?
[00:23:41] David: When I work with other companies, consult with them, give talks and so on, what I find most effective is not speaking about that company's or that group's work right away, because that's where people, individuals, will fall back on those processes, those hierarchies, and will stay within the box, so to speak.
I did a session with a bank recently. After COVID ended, across its thousands of branches all over the nation, the bank had observed a decline in the customer service relationship. The customers were noticing, the bank managers were noticing, the tellers were noticing, and they couldn't figure out what it was.
Part of it was the physical barriers, the plexiglass and so on, the masks, but there was something else happening, and it was really affecting things. All the people involved in the branch services of this big multinational bank got together and were like, "Oh, we need design, we need this. We need design thinking. We need the seven steps that Stanford tells us. We need more plants. We need more interior design. We need more of this, we need more of that. Let's try this. Let's try that."
What I did was say, "Okay, let's not think about banks at all. In your day-to-day life, give me two or three places you go. I want you to take notes over a week, carry a notebook if you have to, and go through interactions that are going to be similar to a bank. Where do you go to get coffee? Where do you go grocery shopping? Do you have to go to the pharmacy? Do you have to interact with the doctor's office? Not some sexy place like the Apple Store. Everyone says, oh, we want to be the Apple Store of banking, we want to be the Apple Store of everything. What did those interactions look like to you?
Where was that interaction good? Where was it bad? Where does technology help it? Where does it stand in the way?" Only by getting people to reflect on, let's say, a parallel experience, which is in many ways a similar thing, do they get to see beyond those hierarchies, beyond the systems they're working in. Again, it's easy for them to ask those questions in that context.
[00:28:14] Jonne: David, I really like what you mentioned about not talking about the business directly but really diving deep into the two or three things that you do, and figuring out how to make those better, instead of trying to push, let's say, a technology on people.
I think what Barton referred to earlier is a lot of the customer centricity, which has been a big thing recently: being close to the customer, leveraging technology close to the consumer. I think the thinking there is pretty well advanced, but when we go internally and look for the technologies with which we help our experts, help our people be better at their work, I think that's quite often where we lose sight of what is really important.
When it comes to AI strategy, the angle that I take is decisions, because if you think about what the role of AI is for organizations and decisions, one way to define AI is as a way to interact with the surrounding world through the decisions that we make, either very close to full automation or with a human in the loop, making those decisions, taking some action, and learning from the outcome. Instead of just predicting from historical data, really learning from the action. I think that's-
[00:30:15] Barton: Of learning in situ.
[00:30:17] Jonne: Exactly. When we talk about decisions, AI is really a way to augment organizations and the people making those decisions. I think that's an interesting angle to approach it from. I really like, David, what you said about focusing on no more than two or three things that you do. That's an approach that we've also been leveraging when we do AI strategies.
What are the decisions that you make in different functions, in different lines of business? How do you map out a portfolio of decisions where you can really leverage AI augmentation to make those decisions better, where you can enhance human capabilities rather than figuring out the replacement approach? I think that's really an effective way of approaching the potential of AI for organizations.
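To make the decision-portfolio idea above concrete, here is a minimal sketch of what such an inventory might look like in code. It is only an illustration: the scoring heuristic, field names, and example decisions are hypothetical, not part of any method described in the episode.
```python
from dataclasses import dataclass

@dataclass
class Decision:
    name: str
    function: str        # line of business that owns the decision
    per_month: int       # how often the decision is made
    outcome_data: bool   # can we observe the outcome and learn from it?
    stakes: int          # 1 (low) to 5 (high): cost of getting it wrong

def augmentation_score(d: Decision) -> float:
    """Toy heuristic: frequent decisions with observable outcomes are good
    candidates for AI augmentation; high stakes argue for keeping a human
    firmly in the loop rather than automating."""
    base = d.per_month * (2.0 if d.outcome_data else 0.5)
    return base / d.stakes

# Hypothetical portfolio spanning several functions.
portfolio = [
    Decision("next-best-offer for a customer", "marketing", 20000, True, 1),
    Decision("credit-limit adjustment", "retail banking", 5000, True, 3),
    Decision("branch redesign", "operations", 1, False, 5),
]

for d in sorted(portfolio, key=augmentation_score, reverse=True):
    print(f"{augmentation_score(d):10.1f}  {d.name} ({d.function})")
```
The heuristic encodes the inversion Jonne describes: frequency and observable outcomes raise a decision's augmentation potential, while high stakes argue for enhancing the human decision-maker rather than replacing them.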
[00:31:51] Barton: Let me try to synthesize this, because I want to acknowledge, David, first of all, that the first part of your answer was a really lovely, pragmatic answer: you were basically giving us a method for developing critical thinking. Let's just review for a second how critical thinking works. It doesn't work like the knee-jerk reaction: make it so, get it done. It's not instrumental. You have to be in the shower a few times and let the idea come to you.
The idea has to grow. If I want to know an answer and I go to my grocery store and my pharmacy and all of these locations, maybe I've actually never looked at them that way before. Maybe I never thought about it before, and then you're really helping me get those bearings. That's something any senior person in an organization could ask their staff to do for a week, to begin to develop those observation capabilities and those interpretation capabilities.
[00:35:55] David: I think emotion is something that is central to everything we do, and yet in the world of science and data, in the world of business and management, emotion is a four-letter word. It's something that's not discussed. You're not supposed to be emotional; we're talking about rational ideas. But what is a rational thing?
As anyone knows, in leadership, customer service, sales, you name it, any aspect of what differentiates one organization, one success, one country, one society, one individual, one family from another is largely based on emotion rather than raw data or some found answer, "Oh, well, we found what the answer was--" You look at any category of something: it's storytelling, it's tapping into emotion. Leadership is getting people to believe in the vision of something through this emotional process, and that's something we can't distill down in a programmatic way.
All these visions of the future that are technocentric really lack a basis in emotion. There's this artist's rendering of everybody flying around on their hovercrafts, or whatever the hell it's going to be, their AI robots hanging out with them, and it's all happy, but what does a funeral look like? What is it like when you're feeling lonely? We have these tremendous technologies of connection, and yet more people are dying of loneliness today than of most of the diseases and pandemics out there.
That's true across most of the Western world and increasingly in the developing world. If we're not addressing that in what we're doing, in our critical thinking around technology, then we're just missing out on the core of human existence.
[00:38:12] Barton: It's critical thinking about human experience, and that should yield our visions of a future that we think might work for us. There's a story I want to share that's an analog to the one you just told about interaction and human experience. There's some research that was done by colleagues Yuval Millo and Daniel Beunza in London. They followed the New York Stock Exchange for several years, doing ethnography during the period when the Securities and Exchange Commission mandated that all trading floors go electronic. You had no choice about it. You had to go electronic.
The New York Stock Exchange had made a number of attempts that didn't quite work, but they finally got to a place where they were able to automate. What was really interesting is that in 2010 there was a so-called flash crash of the NASDAQ market, and as a result, every trading floor around the world had to cancel its trades except the New York Stock Exchange. Why is that? Well, it turned out, to your point, it had to do with how they built their system. For example, there used to be a human role on the trading floor called the market maker.
This is a role that a JP Morgan or a Morgan Stanley can play, but it comes with a moral obligation: they provide liquidity to the market. That means if everyone's selling automobile stocks and I'm the auto market maker, I have to buy them whether I want to or not, whether it makes me money or not, because it's the thing that keeps the market working. Because of their domain expertise, these people were very sensitive to fluctuations in the market. They could recognize things that other people couldn't, and they could recognize things that computers couldn't.
They built a stop mechanism into their automated exchange that allowed domain experts to opt out. That's why they didn't crash. I think this makes the point of ThoughtWorks' idea about augmented AI. It inverts the paradigm in which the machine comes up with the answer for you; instead, it respects the fact that people are actually quite good at what they do and understand things that computers can't.
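As an illustration of the stop mechanism described above, here is a minimal, hypothetical sketch of a human override on an automated pipeline. The names and structure are invented for this example; they are not a description of how the NYSE system was actually built.
```python
class TradingHalted(Exception):
    pass

class AutomatedExchange:
    """Toy automated matching engine with a human override, loosely
    inspired by the market-maker story above; every name here is
    hypothetical, not the real NYSE design."""

    def __init__(self):
        self.manual_control = set()  # symbols pulled out of automation

    def expert_override(self, symbol, expert, reason):
        # A domain expert who senses an anomaly can opt a symbol out
        # of automated matching before the algorithm amplifies it.
        self.manual_control.add(symbol)
        print(f"{expert} halted {symbol}: {reason}")

    def execute(self, symbol, qty, price):
        if symbol in self.manual_control:
            # Fall back to slower, human-mediated trading.
            raise TradingHalted(f"{symbol} is under manual control")
        print(f"matched {qty} {symbol} @ {price}")

exchange = AutomatedExchange()
exchange.execute("AUTO", 100, 41.20)
exchange.expert_override("AUTO", "market maker", "order flow looks anomalous")
try:
    exchange.execute("AUTO", 100, 12.70)
except TradingHalted as err:
    print("halted:", err)
```
The design choice is the one the story highlights: automation runs by default, but a domain expert's judgment can take precedence over it at any time.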
We've done things like help whiskey makers make better whiskey. We've done things like help airports figure out where to park the planes and where the people go, and then communicate that more broadly and more quickly to stakeholders so they can adapt in time. The central concept, as Jonne pointed to earlier, is the notion that we want to interact with some surface or thinking mechanism that reflects back to us the quality of our thinking, like a trampoline that helps us jump a little higher, see a little further, or see something we didn't see before and then wonder about it.
I guess for me, it doesn't so much come from "this is what I think ThoughtWorks should do because it's going to make us a lot of money," although it can; rather, it's inspired by this view of what we are capable of as people, and how we might use technology to elevate our capabilities in a way that really helps us shape the world and have meaningful connections. I was actually quite jealous when I read your book, because you've been blessed with a lovely family, and from the way that you write, it sounds like you're friends with everyone. I'm much more introverted; I haven't had the social experience in my life that you have had. For that reason, I probably weathered the pandemic a bit more comfortably than you did, but it doesn't change the fact that I'm still human, that I still want people to be able to connect more effectively and more meaningfully.
[00:42:44] David: Regardless of what technology is coming around, we still have the same needs. I think this is something that, to bring it back, we can't lose sight of. We could have the most advanced technology and all the flying cars, but the problems that we're going to have to deal with, the things that are going to give us the greatest joy, the greatest challenges we have, whether as a company, as a society, or as individuals, are going to be those things that are still fundamentally human, and we can't get around them. We can't invent our way out of these problems.
[00:43:26] Barton: Exactly. The only way to deal with it is to deal with it.
[00:43:29] David: The only way to deal with it is to deal with it. I think if we can face up to that, face up to the hard questions, and have those difficult, emotional, tricky, muddy conversations, we'll do ourselves a better service than just presenting some invented solution as the answer, which will only either kick the can down the road or create some new problem that we're going to have to deal with in a few years.
[00:44:01] Jonne: Yes. I was actually using this as a segue, because I think it's not just about the joy of getting to a better place, but also about the process of getting there. From what we know about innovation and psychological safety, in organizations and communities where psychological safety is high, innovation capability is also high.
When we take this lens and this piece of knowledge into the context of augmentation versus replacement, I don't have data to back it up, but I would claim that organizations, communities, and teams that are figuring out how to use AI to augment people's skills, rather than replace them, also have a better innovation capability, and can come up with solutions to the problems that we are facing right now.
[00:45:18] David: There was a great paper I read, and I interviewed its author when I was researching the book; there were a number of them. He's at Cambridge, I think: Jochem Kroezen, and I'm probably mispronouncing his name. The paper wasn't specifically addressing AI, but it talked about technology going forward through this notion of craft.
If we approach all projects, organizations, work, if we approach work in the future in the way that the craft movement did, the late 19th-century, early 20th-century craft movement that arose in response to industrialization, and he even references, because he has written a lot about this, being Dutch, the craft beer movement, the craft spirits movement, the craft food movement: humans have these capabilities, and we can automate and mass-produce and industrialize anything, and yet that doesn't always serve our purposes.
What the craft movement asks, and what it succeeds in, let's take beer as a good example, is saying: where does the technology, where does a streamlined, MBA- and engineer-driven process, improve things? You talked about consulting with a whiskey maker. Whatever, Macallan or whoever hires you guys. Great. We're going to work to streamline your payroll and your processes, and this technology can help control the vats better, or we can get the distillation process down to this.
Yet at the end of the day, what you want, and the value of Macallan, or Nikka if you're into Japanese whiskey, is that human craft element. Fundamentally, at the end of the day, what is the taste of this thing, and what gives it its story? What makes it so that somebody enjoys that whiskey, or wants to pay more for that whiskey, and so forth? Think about that now in any job, in anything from a dry cleaner to a car maker.
What is the human part of the work you do that's actually its greatest strength and differentiator? How can you elevate those aspects of the work? Where can technology, whatever technology that is, support it in a way that frees up more time or more resources to devote to that type of creative thought or that type of customer service?
[00:48:20] Barton: I think what you're articulating here is really the leadership challenge of our time.
[00:48:26] David: It's nothing new. Again, we've been wrestling with this for a while. What's new is the speed and the intimidation of the technology, and the idea that if we don't submit to it blindly, we will be left behind or crushed.
[00:48:48] Barton: Let's take the flip side of it, because in the article you very clearly sketched out how a very small group of wealthy White men are able to make decisions that have an impact on billions of lives. As a consultant, sometimes when we do user research-- The example I always use is moving into a new flat or a new home: the movers are putting everything down and they ask, "Where do you want the couch?" You tell them, "Over there by the window," and they put it there and go away, and then a few days later you're sitting on the couch and you think, "No, it needs to move."
There's also the side of this where it's not as though the customers have the answer; sometimes they might be conditioned to want the instrumental things and not the things that elevate them. Should we have them do observation notebooks as well, or is there a different practice that you'd recommend? How can we support the stakeholders who are the customers in developing that criticality about their own experience?
[00:50:02] David: Yes. I think a big part of that criticality is realizing that customers are not this monolithic group. There is this associated tendency, which is a terrible habit, to lump together groups of people by demographics, age is a big one, gender, cultural group, race, geography, whatever, and paint with this broad brush because a study or the data might show that a majority of them are something.
That's how you get these idiotic statements like: oh, millennials only want phones, millennials only want to do things digitally, and look at Gen Z, these kids are all about TikTok. And yet reality bears out the contradictions: the same young individuals who spend a lot of time on social media are also the ones out buying film cameras and paper books and other analog goods.
Reconciling how contradictory that data is, is a difficult thing for a lot of companies to do. Respecting the customer means not just picking the poll result for each group but realizing that each customer is going to be different, and building in the capability to address different customers across a wide range, because humans are each different. Someone's going to come into your store one day, and then a week later they'll come in again, and they might be a bit of a different person based on the type of day they're having or what's going on in their life. Allowing that flexibility, that fluidity, that range is the great strength. That's empathy.
Empathy isn't just listening to an idea and saying, "Okay, empathy means this type of design, so the screen will be this big. Empathy. We ticked off the empathy box." Empathy is that real element of human understanding, of saying, "Some people are going to want to drive a car with a big flat touchscreen, some people want knobs, and those people might be married to each other." How do you reconcile that?
[00:52:15] Barton: Jonne, that makes me think: we've seen so much of customers wanting, let's say, recommendation engines, which are a particular view of how to do personalization; it's "what can we sell the customer?" What is your view on how we can leverage AI to help customers express their differences and, where appropriate, reconcile them in these kinds of commercial interactions?
[00:52:44] Jonne: It all boils down to how you really design the system or the product with its consumer interfaces. Let me take an example. Spotify is a brilliant example: when you're using the system, you're actually telling it a lot about the context that you are in, which makes the modeling, the job for the AI, really, really simple. This is because Spotify has had AI know-how as part of their design process while working on the system and the user experience.
They have thought about what information they need from the consumer in order to be more relevant, in order to suggest more relevant music for the context the listener is typically in. It's the tail that is-
[00:53:41] Barton: Wagging the dog.
[00:53:41] Jonne: Yes, exactly. Tail wagging the dog. Because consumer-facing products and services are getting more and more intelligent, we're basically educating consumers to have a growing expectation of the level of personalization they get here and there. They might come to expect that type of service where it might not even be needed; I feel you shouldn't always have these personalized recommendations.
I don't think it makes sense to take this type of approach in every single service, but what I do believe is that if you have the know-how to design a product or service with AI from the get-go, you know what role AI should play as an implicit part of the system, rather than figuring out the AI use cases afterwards: "How do I make this system retrospectively intelligent?" When you put it that way it sounds stupid, but this is what's happening a lot.
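As a toy illustration of designing context capture in from the get-go, rather than retrofitting intelligence afterwards, consider the following sketch. The event fields and the lookup-style recommender are hypothetical, not a description of Spotify's actual system.
```python
from datetime import datetime

def log_play(user: str, track: str, playlist: str | None) -> dict:
    """Capture context implicitly, as part of the normal interaction,
    so the recommender never has to reconstruct it after the fact.
    These fields are invented for illustration."""
    return {
        "user": user,
        "track": track,
        "playlist": playlist,         # a name like "workout" says a lot
        "hour": datetime.now().hour,  # time of day as cheap context
    }

def recommend(history: list[dict], user: str, hour: int) -> list[str]:
    # With context logged up front, a first 'model' can be a lookup:
    # what does this user tend to play at this hour of the day?
    return [e["track"] for e in history
            if e["user"] == user and e["hour"] == hour]
```
Because the interaction itself supplies the context (playlist, time of day), even a trivial first model has something relevant to work with; nothing has to be inferred retrospectively.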
[00:54:54] Barton: I think what you're saying speaks to the fact that a lot of the ways we do things at ThoughtWorks happen to be the ways they worked at, let's say, the Xerox Palo Alto Research Center: they were researching technology, but they were also using anthropological methods to understand human culture. It's about putting those two things together, understanding the practices and the motivations of the people that you're working with.
In fact, a lot of the ideas can actually come out of that research. David, is there any last advice you would give our listeners, who are constantly having to make decisions, mostly now around how they're going to deal with AI in their workplace, building strategies, whatever? What would you like to leave them with?
[00:55:50] David: That the future is still entirely unknown. At the end of the day, you, your team, your industry, your company, your customers are still human. Just because a technology exists and is powerful and people are saying it's going to change everything doesn't mean that you need to change everything you're doing, or jettison everything in order to blindly follow it overnight. Critical thinking is your greatest strength in finding your way through this. It doesn't need to be done today, and it doesn't need to be rushed into.
[00:56:34] Barton: All right, but the effort would definitely need to be put in a little bit every day to move the thinking forward. Thank you, David, for being with us. We really appreciate it.
[00:56:46] David: My pleasure.
[music]
[00:56:58] [END OF AUDIO]